Thursday, August 25, 2005
"no touch" service: there is no escaping context
Software is increasingly making some of the most important decisions affecting your life: how you receive medical treatment, whether you get a mortgage, whether you qualify for insurance. According to IT guru Thomas Davenport in the current issue of the MIT Sloan Management Review, "decision making automation" is no longer a pipe dream; it is "coming of age."
Automated decision making capabilities "are embedded into the normal flow of work and they are typically triggered without human intervention." Davenport writes that the goal of businesses is to have processes executed with "no touch" treatment -- no human intervention required at any step. For now, some "exceptions" to the automated rules still require human attention, but businesses are working to eliminate these altogether.
One can understand businesses' motivation to reduce costs through automation, but it comes at an unpleasant cost to users and customers. For example, while generally positive about the possibilities of automated decision making, Davenport concedes that hospital managers punish doctors and nurses for "overriding" automated decisions.
Automated decision making can run roughshod over users and customers, and corporate managers need to think carefully about the stakes involved. Davenport says managers need to define the context and limitations for automated decision making, but he doesn't specify how to do that, other than cautioning firms not to build an application that will get them sued for malfeasance.
Too often, automated decision systems are implemented without enough thought to context and limitations. Decisions involving people, by definition, involve many variables, because people are endlessly complex. If you design a mortgage decision application, you can probably get a good approximation of who qualifies for a loan into a software program. But it is only an approximation. We have all heard stories of qualified people turned down for something because they failed a rigid rule, even though common sense says the rule shouldn't have applied in their case.
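To make that concrete, here is a minimal sketch of the kind of rigid rule set a mortgage engine might encode. Every rule and threshold here is my invention for illustration, not anything from Davenport's article:

```python
# Hypothetical mortgage pre-approval rules -- deliberately rigid,
# with every threshold invented for illustration.

def approve_mortgage(credit_score: int, years_employed: float,
                     debt_to_income: float) -> bool:
    """Approve only if every hard-coded rule passes; there is no override path."""
    if credit_score < 680:       # hard floor, no context considered
        return False
    if years_employed < 2:       # penalizes a thriving new business owner
        return False
    if debt_to_income > 0.36:    # ignores assets, cosigners, explanations
        return False
    return True

# A self-employed applicant with an excellent credit score, ample savings,
# and an 18-month-old business fails the employment rule, even though
# common sense says she is a fine credit risk.
print(approve_mortgage(credit_score=790, years_employed=1.5,
                       debt_to_income=0.20))  # False
```

The rules aren't wrong on average; the brittleness comes from the fact that there is no path out of them when the approximation fails.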
Automated software can't cope with unanticipated or uncommon exceptions. Common sense can cope with such exceptions, often easily. But common sense is difficult to capture in "knowledge management" software: it involves too much tacit knowledge, often from outside the immediate work domain.
Today, knowledge engineers are draining the brains of experts, trying to codify how they think. The results, even when credible, are often brittle. Circumstances are always changing, and software decision engines can't necessarily adapt to those changes. Davenport notes:
"As the ranks of employees in lower-level jobs gets thinner, companies might find it increasingly difficult to find people with the right kinds of skill and experience to maintain the next wave of automated decision systems."

Imagine how unresponsive industrialized service may become. Suppose someone with an emergency is stranded in an airport due to a plane delay. Software reassigns people to alternate flights using tested rules: the fare paid for the ticket and the loyalty evidenced by frequent flyer miles. The software makes no allowance for the person with an emergency, and that person may not even be able to reach a human to explain the situation.
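As a sketch of that failure mode (the record fields, weights, and names below are my assumptions, not any airline's actual logic), consider a rebooking routine whose passenger record simply has no place to put an emergency:

```python
# Hypothetical rebooking logic: passengers are ranked purely by fare paid
# and frequent flyer miles. The data model has no field for an emergency,
# so no rule could ever weigh one.
from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    fare_paid: float
    ff_miles: int
    # note: nothing here can represent "medical emergency"

def rebooking_order(passengers: list[Passenger]) -> list[Passenger]:
    """Highest fare first, frequent flyer miles as the tiebreaker."""
    return sorted(passengers,
                  key=lambda p: (p.fare_paid, p.ff_miles),
                  reverse=True)

stranded = [
    Passenger("discount fare, medical emergency", fare_paid=99.0, ff_miles=0),
    Passenger("full fare, no urgency", fare_paid=850.0, ff_miles=120_000),
]
for p in rebooking_order(stranded):
    print(p.name)
# The full-fare traveler is rebooked first; the emergency never
# enters the model at all.
```

The point is not that the ranking is malicious; it is that the context that matters most was never representable in the first place.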
When automation of decisions affecting people's lives goes too far, it will create some nonsensical situations. The aggrieved people whose situations were not considered by the requirements engineers will be very, very mad. Measured against common sense, the software will never have looked so stupid, nor the companies so uncaring.
Companies that don't want to be humiliated, or sued, need to do some deep contextual research before implementing automated decision systems that have the potential to be more than a mere annoyance to customers.