Thursday, September 29, 2005

 

the loss of precision

Few people are capable of mental arithmetic anymore. Calculators have deskilled us. That would hardly seem to matter, except that many people can't even understand what the approximate answer to a numeric problem should be, to know if they have done the calculation properly. I can't count (without the aid of a calculator) the number of times a store clerk has produced a ridiculous calculation, and has had no awareness of how absurd the answer is. If you suggest that the calculation is wrong, they simply re-key the same incorrect sequence of numbers and operations, to produce the same wrong answer. They act as though you are stupid for doubting a calculator. For example, a constant problem is when a store advertises an extra 10% off already marked down prices. It seems no one who is paid a store clerk wage is able to figure the math for that.
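To make the arithmetic concrete, here is a minimal sketch (Python, with made-up prices) of the "extra 10% off" calculation the clerk should be doing:

```python
# Illustrative only: the prices and discount rates here are invented.
original_price = 50.00
markdown = 0.30          # item already marked 30% off
extra_discount = 0.10    # "extra 10% off" applies to the marked-down price

sale_price = original_price * (1 - markdown)        # 35.00
final_price = sale_price * (1 - extra_discount)     # 31.50

# The combined discount is 37%, not 40%:
combined_discount = 1 - (1 - markdown) * (1 - extra_discount)  # 0.37
print(final_price, combined_discount)
```

The mental check is easy: 10% of $35 is $3.50, so any total far from $31.50 should raise an eyebrow.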

A similar phenomenon is happening with spelling. For people who rarely write with pen and paper, spelling becomes more difficult. I have never been a good speller myself, but I realize that very few people really are. Few people know spelling rules anymore. Spell checkers have saved us much effort, but have become a crutch. We have stopped thinking about how words are composed, and speech has become disconnected from writing. I am not a linguist, but I suspect our pronunciation is gradually becoming less grounded in how a word is spelled. We are dropping syllables as we say words whose spelling we no longer need to think about.

The next big shift will happen as voice recognition matures to allow speaker-independent input in noisy environments. With that, we may finally stop writing altogether, except for formal pieces. Dictation will change how we relate to words. If voice recognition is accurate enough that we don't need to watch a screen as we talk, we may end up rambling, just as we do in ordinary speech. But we won't have a companion who asks questions to check the meaning of the rambling. If our relationship to words becomes more oral, then we run the risks that accompany slang. People mimic words and phrases without a proper understanding of what the phrase is intended to mean. I notice people often use slang phrases in the exact opposite way they were intended.

Tuesday, September 27, 2005

 

enterprise automation and human productivity

Nearly everyone's job is increasingly defined by software, whether you work in a call center, as a currency trader, or as a graphic designer in an ad agency managing a job workflow. If you use computers to work, sooner or later you will be drawn into a system to manage workflows, and these systems will be "rationalized" to improve productivity. Don't believe that because you use your brain for a living, or need to exercise creativity to do your work, you are exempt from the march of enterprise automation.

Our inability to understand and measure the productivity of computer workflows is one of the most pressing issues facing modern society. Most of us use computers in some way to produce services of some sort. The logic of business to reduce costs means that corporations are always seeking ways to do more with less. Automation is the holy grail of cost reduction, and new applications to automate tasks involving computers are appearing all the time. What nearly all these applications promise is that work can be done faster, with fewer people. Often these applications do increase throughput, but it is not always the case that they improve productivity.
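To illustrate the distinction, here is a small made-up example (Python; every figure is invented for the sake of the illustration) in which throughput per worker rises while productivity, measured as good output per total labour hour once rework is counted, falls:

```python
def effective_output(units_per_hour, workers, defect_rate, rework_hours_per_defect):
    """Good units produced per total labour hour, counting time spent on rework."""
    base_hours = 1.0 * workers                      # one hour of work per worker
    produced = units_per_hour * base_hours
    defects = produced * defect_rate
    good_units = produced - defects
    total_hours = base_hours + defects * rework_hours_per_defect
    return good_units / total_hours

# Before automation: slower pace, careful work (all figures invented).
before = effective_output(units_per_hour=10, workers=5,
                          defect_rate=0.02, rework_hours_per_defect=0.5)

# After automation: throughput per worker rises, but harried workers miss problems.
after = effective_output(units_per_hour=16, workers=3,
                         defect_rate=0.15, rework_hours_per_defect=0.5)

print(round(before, 2), round(after, 2))  # roughly 8.91 vs 6.18
```

The point is only that once quality problems and rework are counted, "faster with fewer people" can still be a step backward.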

Enterprise automation -- using software to automate services across an enterprise -- very often repeats the same problems that accompanied industrial mass production automation. Before the development of lean production, manufacturers focused almost exclusively on increasing throughput (output per worker and/or per hour). They looked for ways to increase how many autos could be produced in an hour, and how to reduce the number of workers needed to produce those autos. The solution was to standardize production of common products based on interchangeable parts. This is exactly the approach currently used by enterprise applications.

In the case of auto production, the mass production system collapsed in the late 1970s. It proved unable to cope with the increasingly diverse demands of consumers (users). And the relentless automation caused workers to stretch beyond the limits of human capabilities. Harried workers did not notice quality problems when they were focused entirely on meeting production quotas. They saw the demands of the production system as dumbing down their work. They burned out.

As the case of auto production automation illustrates, productivity is about more than just getting faster and using fewer people. Unless and until an enterprise can eliminate using people altogether -- more often a fantasy than a reality -- it needs to keep the cognitive and emotional needs of workers in central perspective when attempting to automate. People need to be engaged mentally in their work, and in control of the pacing and decisions. Without these, quality suffers, and productivity is elusive. The example of lean production in manufacturing, which uses workers to perform flexible tasks and lets workers define tasks themselves, points to how enterprise software needs to evolve.

Saturday, September 24, 2005

 

behavior versus preferences

Is the future of user centered design watching people, or probing people? When selecting an approach, a key question is: how reliable is behavior, and how reliable is preference?

Traditional usability engineering admonishes researchers to "watch what people do; don't listen to what they say." Ethnography also can stress the need to watch what people do in naturalistic settings, and to discount verbal introspection from users. It can deviate from straight behaviorism by asking users questions about why they do what they do -- it tries to overcome the black box approach. Emotional design research borrows far more from traditional market research by exploring preferences instead of behavior. It may be wary of the validity of verbal declarations of preference, but it nonetheless seeks to develop a model of the user's inner mind, and his or her wants. Many designers of new products argue that past behavior with mature products is a poor guide to understanding what people really want from future products.

I think the variability of people is well illustrated by how they behave on the job. I just finished listening to a BBC Radio 4 program on psychometric job testing. When I lived in the UK, I always thought the extensive use of psychometric testing to screen job candidates rather odd. If you visit a UK bookshop, you will find shelves of books on how to take, and "pass," psychometric tests. The whole exercise degenerates into a guessing game of finding and providing the socially acceptable answer. The flaw in psychometric job testing is the notion that one's stated personal preferences somehow reflect one's future job behavior. The first problem is that what people say they prefer may not be what they really prefer. This is a well known problem with any probe. The second problem is that people's preferences may have little bearing on their behavior. Whether someone says they prefer mercy or justice does not predict whether they will fire an incompetent employee. Many other factors come into play (perhaps feelings toward potential lawsuits). It also confuses the idea of global personal preferences with role behavior. We have all seen examples of the kind grandfather/ruthless executive.

Probes may work in the absence of social influence and role associations. Some products have fewer social and role associations than others, but all products have some (unless intended for hermits.) People may be less able to manipulate their nonverbal responses than verbal responses to probes, but I would doubt their responses are entirely involuntary and thus "objective."

Ethnography is potentially powerful for accounting for the influence of social factors. But because it ties observation to context, it cannot develop a global perspective on people -- their core life motivations and preferences. Only traditional usability can pretend to offer a global perspective on users. It does this by ignoring preferences, and looking only at a narrow range of user behaviors that are consistent across contexts.

 

election usability: is it about close elections?

I doubt many people outside New Zealand are aware of the recent elections here, given that they fell on the same weekend as the headline-grabbing German and Afghan elections. To catch you up: New Zealand has a divided election outcome, with both major parties gaining a nearly identical share of votes, and neither in an obvious position to form a majority coalition. It is not so dramatic (yet) as Germany, but nearly so. Incidentally, while I'm no expert, I understand New Zealand borrowed extensively from Germany's electoral system when devising the sometimes confusing Mixed Member Proportional (MMP) system it implemented a decade ago.

As the election outcome remains unresolved, all parties are awaiting the results of overseas votes. This is no small matter, as at least 10% of New Zealanders live overseas (mainly in Australia).

On the surface of things, New Zealand's elections would not seem the material of usability problems. New Zealand uses quaintly old-fashioned paper ballots, with simple tick boxes to indicate one's choices. It is so low tech it makes Afghanistan's multi-page lists with photo mug shots of candidates look high tech.

Paper is great, until it meets the web. Now, all those expat New Zealanders were given an opportunity to submit their paper ballot as well, but they had to use my least favorite software program to do so. I read in the Dominion Post today that some expats are complaining that they couldn't read the election form in Acrobat Reader, which rendered the names of the candidates as blank. The Green Party in particular is complaining about Acrobat, as it stands to gain a seat in parliament if it gets a few more votes, or alternatively, lose all its seats if it loses a few votes and falls below 5% of total party votes. (Some commentators assert that the Greens get a disproportionate share of votes from expats -- I have no way of knowing.) Acrobat Reader might just have the power to throw the outcome of the New Zealand elections one way or another.

What to me is even more strange is how the overseas votes are received. After printing out the ballots from Acrobat Reader, voters are expected to fax them in. I can see at least two usability problems with voting by fax. First is fraud. In an age where identity theft seems easier than ever, how on earth do the election officials verify that the ballot is cast by the real person entitled to cast it? I don't know what arrangements were made -- there must have been some -- but they would seem far short of the purple thumbs used in the Afghan election. The second usability problem concerns knowing that one's ballot has been received. Voting by mail is not foolproof to be sure, but it is probably more reliable than sending a fax. The rule of thumb for any important fax is to phone to verify that it was received, and got into the right hands. How frustrating it must be to fax a ballot, only to wonder if it went to the right fax machine.

Thursday, September 15, 2005

 

what does this sign mean?



Here is an actual question from the New Zealand driving theory test.

What does this sign mean?

A) Children's playground ahead

B) Railway station ahead

C) Railway crossing ahead

D) Railway museum ahead

I am inclined to want to choose a non-existent "E": all choices seem plausible. Since the train does bear a striking resemblance to Thomas the Tank Engine, "A" might be a good choice. Since it is a rather old-looking train, "D" might work. The train seems like it is going some place (notice the smoke blowing in the direction the train has come from), so maybe I should follow the train to the railway station. Since the sign is yellow, maybe it is warning me about something, so "A" or "C" might be the right answer.

For the curious, I haven't yet seen a train that looks like the one pictured operating in New Zealand.


Monday, September 12, 2005

 

does 'best practice' make us zealots?

Most any successful businessperson or politician will tell you that the art of getting your way is to bend. Ultimatums along the lines of "my way or the highway" don't win friends or influence people. But unfortunately, UCD has developed a puritanical zealotry that is inhibiting its adoption in the business world.

A practical problem for most any religion dealing with a changing world is the presumption that a single truth exists, and that the truth is eternal. UCD practitioners may hesitate at the notion that they follow a faith, rather than represent the deepest enlightened reasoning that can be applied to a topic. But while reason does dominate the surface of discussion, deep philosophical assumptions shape the pattern of dialogue UCD practitioners engage in with others. These assumptions are rarely questioned; they are articles of faith, which (strangely!) aren't accepted wholesale by people outside the UCD flock.

One of the biggest sacred cows in the UCD world is that user centered research and iterative design must begin from the inception of an application. This idea reached the status of canon law by the early 1990s. The logic of the law seems unassailable: by involving users from the start, you get the application on the right path immediately, and you save time and money in the process. Indeed, early and constant user involvement does exactly those things in certain circumstances: when you build an application from scratch, and create it to fit the specific needs of a target group. Unfortunately, those circumstances are getting less and less common.

When the law of "UCD should drive application development" came into being, the software development world was radically different from what it is today. At that time, applications were developed one by one, often using procedural programming. Then modular software, object oriented programming, off-the-shelf enterprise applications, and a raft of other software development innovations burst on the scene in the 1990s. Software vendors discovered the key to reducing software costs was standardization and re-use. Vendors figured they could maximize return if they could re-use code they had already created for another application. If it wasn't exactly what the user needed, so be it: at least it was cheaper.

Once standardization and re-use dominated the software development world, UCD lost the argument that it is most cost-effective to start with user requirements before writing any code. Paradoxically, because of the rise of the Web, UCD practitioners were largely unaware that the UCD development cycle was becoming marginalized in application development. The Web demanded comparatively little input from application developers, giving usability a freer rein. Involving users from the start of a Web project worked easily. Until recently, Web sites have been shallow, involving a fairly dumb interface attached to some fairly basic databases at the back-end. Because the amount of code in Web sites was not overwhelming, modifications based on user research and iterative design could be accommodated without too much of a kerfuffle.

The transition to Web applications has given application developers much more power. UCD is marginal when it comes to influencing many of the key decisions that affect usability, such as what CRM system will be used to handle customer data. These decisions are made by IT systems people and business analysts, generally with little involvement from UCD practitioners. But it is these key back-end elements that are shaping the user experience, because they define what the user can and cannot do with a Web app.

When we step back further, we find even less UCD involvement in the development of the enterprise applications that power Web apps. These applications involve off-the-shelf components that drive almost any conceivable aspect of an organization, from personnel to scheduling to customer service. Despite the enormous impact of enterprise applications on users both inside and outside of organizations, UCD practitioners generally have not been involved with shaping what these applications do, or how they do it. By and large, enterprise applications are created in the imaginations of business operations specialists, with hardly any user input. As an afterthought, users are brought in to test the development of the interface. The consequences of this situation are astounding: UCD has no involvement in the development of software representing billions of dollars of spending by corporations.

The usability profession to a large extent doesn't even understand how marginal it has become. I hear some usability people talk about working on their company Intranet, as though it were just another Web site. Their involvement is limited to rearranging chairs: a bit of information architecture work, and figuring out where to put links, based on the specifications presented to them. But UCD did not drive the development of the application, in violation of the sacred laws of UCD. The application already existed. It was bought from SAP or Oracle or IBM, specified by an in-house BA, and modified by systems people from an implementation partner.

I am not convinced many usability practitioners are necessarily interested in the mysterious workings of these applications. They are happy to stick to the interface, where the issues are far easier to notice and deal with. The applications also involve dealing with an alien culture: people who complain about changes and don't seem excited when talking about user experience.

Some UCD practitioners do realize that users aren't going to gain proper value from an application unless the application reflects their priorities accurately. Many enterprise applications fail users and need extensive change. But UCD orthodoxy hardly equips us with a road map for fixing the problem. If you follow the UCD script, you end up saying: "Well, see, the problem is that you did this all wrong from the start. Before you did anything, you should have called me in to do user research, then we could have designed a special application to meet the unique needs of the user population." Anyone hearing that would assume you were arrogant and obstructionist.

It is time to reality-check notions that application development must start with UCD. I empathize with these sentiments completely, but I question their practicality in today's business world. UCD has increasingly advocated as best practice the development of bespoke solutions for users. Meanwhile, in the hard-nosed world where money talks, businesses have embraced off-the-shelf solutions for a variety of reasons related to purchase cost and maintainability. The philosophical differences between the approaches couldn't be wider. The question is, are there practical solutions to provide a way for both sides to work constructively toward common goals without compromising their priorities?

I have been developing a framework I am calling "context-driven application re-engineering" to assess the costs to businesses of poorly functioning enterprise applications, while providing a pathway to making those applications work better for users. To get the needs of users represented in the reality of off-the-shelf enterprise applications today, I commit heresy by dropping the "do it right from the start" attitude of the UCD profession. We need to recognize how marginal we are becoming, and face this reality if we hope to change it.

Friday, September 09, 2005

 

person-to-person human computer interaction

Mobile communication is fast moving from being exclusively aural to being visual as well. With camera and video phones, we don't just talk about a situation to someone far away, we can show it remotely. I predict mobile visual communication will introduce radical new work patterns that could require establishing a new interaction paradigm in HCI: person-to-person human interaction with computers. Sorry if that sounds confusing; bear with me.

Visual cell phone communication allows managers to diagnose problems involving visual analysis and instruct field staff on how to solve them. The highly paid expert doesn't have to burn precious time traveling to field locations. He can see the problem remotely, and have a lower paid novice carry out his instructions. Suppose the problem involves fixing an interactive device, say a vending machine. Then two sets of eyes are looking at the problem: one at the location of the device, who can touch the parts, and another remote, who can only see the parts. There are two levels of interaction happening: the service technician's interaction with the machine, and the expert's interaction (via verbal instruction to the technician) with the machine. A problem arises if the two parties describe or see things in different ways.

Although corporations have focused extensively on remote diagnostics, most of this activity involves machine to remote machine, or machine to remote human, communication. To the extent no one needs to be physically present to carry out the diagnosis and repair, such a solution is wonderful. But in many situations, having someone actually handle something physically is necessary.

Everyone has had the maddening experience of waiting for a service technician to show up to deal with an installation or to make an adjustment. Consumers hate the waiting, and corporations find house calls expensive. I envision that soon we will use picture and video phones to get instructions from companies on how to adjust our broadband setup, our digital television wiring, or our satellite dish.

Currently, calling customer service or a help desk about a technical problem, without the ability to provide visual materials to orient the remote helper, is a frustrating experience. Much time is wasted just finding common understanding of what is happening and how to do trivial actions. Visual feedback will reduce that gulf, and may encourage greater use of remote advice.

While the issue of remotely instructing another person through an interaction is not new, I am not aware of literature dealing with the topic. Traditional HCI assumes one person is doing both the thinking and the interacting. Computer mediated communication is generally focused on the process of communication, rather than interacting with a device. Some contextual research has looked at group problem solving and interaction with a device, but generally with people who are collocated. What I am calling person-to-person HCI involves split cognition and mediated communication, but only one party interacting with the device. If you are aware of work in this area, I'd be interested to learn of it.

Thursday, September 01, 2005

 

cognitive-emotional complexity and customer value

The relationship between the simplicity or complexity of a product or service, and the value a customer derives from the choices they are offered, has never been less clear cut.

Making products and services simple is not the answer. People are often interested in the possibilities offered by complexity, they just don't want to be overwhelmed by it. The limits of people's interest in choice are a major blind spot of free market liberals who believe unlimited choices make people happier. True, people want choices, but they also wince at choices that don't seem personally meaningful. In addition, when too many choices are available, people can feel emotionally drained, fearing they will make the wrong choice, and cognitively exhausted, when they have to compare too many options.

The challenge for designers is to offer products and services that demonstrate extra value, and are meaningful. Generally, people are willing to pay more for products and services that reflect more differentiation or customization. As a consequence, businesses offer consumers an unparalleled array of choices in the market, with many possibilities to customize a purchase on the Internet. But choice for choice's sake runs the risk of creating a "why bother?" backlash.

The corporate rush to higher value products and services is forcing greater complexity on consumers. ATMs dispense cash, but maybe banks would make more money if they dispensed theater tickets as well. And Lottery tickets. Why not dispense lottery winnings at the ATM too? Soon someone decides there is a market for machines that only dispense cash -- people will pay extra not to stand in line behind someone buying lottery tickets.

Some hotels offer a choice of pillow types for guests. A lovely touch perhaps, if that were the only decision the guest needed to make. But in the self-serviced economy, people are asked for their preferences on many minor details, only a couple of which matter personally to any individual. Like vampires, some corporations collect personal information for impersonal reasons -- faux customization serves as cheap market research.

The alternative to asking people to provide preferences each time they order is to capture these preferences in a profile. But even creating and maintaining a profile is too much work. I avoid registering for web sites and frequent buyer programs wherever possible. So far at least, helping computers develop some intelligence about our preferences is still a net investment of effort by users. Perhaps in the long run this will change, but I am not overly optimistic about the next ten years at least.

Although cognitive complexity is a symptom most prevalent among interactive products, other more humble products are seeking to become more complex, higher value, and scarier to purchase. Once-simple white goods like stoves/cookers are now industrial machines with idiosyncratic design languages that match other similar looking white goods to form a cohesive kitchen "system." Woe to the poor person who learns through use that she hates the brand she has spent thousands of dollars or euros on. The opportunities for buyer's remorse keep escalating.

The future for user centered design has never looked brighter. People's time is always limited, and the total emotional capital they have available to invest in consumer decisions is limited as well. Our cognitive capacity to process information and juggle decisions is perhaps growing modestly, but is still outstripped by the growth in choices demanded of us. Meanwhile, corporations are addicted to the notion that offering consumers more choice is the best and only strategy for gaining market differentiation and achieving success.

