Thursday, March 31, 2005

 

the "beyond usability" debate

"Beyond usability" is a phrase one is starting to hear often. Google lists nearly 10,000 pages containing the phrase. What's going on? Is usability somehow inadequate?

In days past, people debated labels, such as how usability differed from user experience, or information architecture differed from interaction design. Other people drew elaborate diagrams trying to show how everything was different but still related. Two things have since happened: people got bored debating labels, and the definition of usability in particular has been definitively articulated through ISO standards. Now that we agree what usability is, can we live with it?

Usability has survived, and triumphed, over earlier criticism that it was a barrier to "good" design, which usually meant the creative impulse of the designer. Now it is being challenged by more thoughtful questions about how to address innovation and the aspirational needs of users, rather than just fixing things that are wrong. And even some usability professionals are wondering if being the critic is getting tiresome. An article last year in Interactions asked the profession: "Are you positive?", citing the need for human factors professionals to curb their critical dispositions. I've talked with others who have expressed feelings of existential tedium over the treadmill of just fixing stuff.

Caroline Jarrett, a seasoned usability professional and coauthor of a new HCI textbook, has explored the richness and limitations of usability in her article,
Not beyond Usability - just nearby. Caroline notes that usability is supposed to cover satisfaction, but it is "a concept that's both static and slippery. 'Static' in the sense that it's a one-shot type of concept: you're either satisfied with whether the product allowed you to achieve your goals, or you're not. 'Slippery' in the sense that it can have so many meanings pushed into it: delight, enjoyment, a mild lack of discomfort, a major thrill." She questions whether satisfaction (typically the stuff of Likert scale questionnaires) really captures how engaging an interactive product is. She asks: "What emotions or other aspects of usability do you think we should add into the definition? And will 'satisfaction' encompass them? Or how do they fit in?"

The issue of usability's treatment of user engagement has created several opinion clusters. On one side are people who feel usability has never been in better shape, and there is nothing to fix. They are "usability engineers" and proud of it. In their view the profession is moving toward scientific respectability, with standards, databases of user responses for cross application comparisons, and common reporting formats. To measure emotion is fine, in theory, but you'd better make it scientific, not some namby-pamby nonsense.

On another side are a small group of researchers who think emotion can be incorporated into design using evidence-based recommendations. Examples are found in the edited volume: Pleasure with Products: Beyond Usability. Personally I find this work, which centers around conferences sponsored by the Design and Emotion Society, very interesting. Some fantastic insights have been developed so far, but we are a long way from truly understanding users' emotional needs, and even further from taking those insights and translating them into design recommendations. It is one thing to get inspiration from some loose or narrow research around emotions, but quite another to figure out if something needs fixing. The "design for emotion" field is burdened by the lack of a solid common understanding of human emotional needs in general, which needs to come from psychology and neuroscience. Emotions are very complex. (I personally despair of the cognitive science and AI types who venture into emotional research, muddying understanding with factless theory and computer simulations.)

In the absence of a methodical way to assess user emotional needs and develop solid recommendations to address them, still others are taking a "just do it" approach to designing for emotional needs. In the absence of a conceptual framework, the results are predictably random. On the positive side, participatory design can capture some emotional needs that suggest promising design solutions. A separate trend is the increasing co-mingling of branding with user research, which can appeal to companies interested in getting users more engaged with their product. As usability has gone mainstream, design agencies (often owned by ad agencies) have set up usability teams. And some usability firms have taken branding on board as part of the "user experience" to differentiate themselves from the increasingly crowded usability market. The extent to which brands are intrinsic concerns of users, emotionally or otherwise, is a separate debate that dwarfs "beyond usability" considerably.

For my part, I do think that usability needs to address emotional needs more systematically, and as far as possible, robustly. I am not sure if that will be possible within the measurement-focused orientation of the discipline. Even if one is not a quantitative tester (and most are not), most usability questions are asked in terms of binaries: does something work or not? Emotions are analog, and resist such treatment. I do not believe usability, as it exists in practice, is nearly as scientific as some would wish. What usability does offer when evaluating effectiveness, and can bring to user research on emotion, is a (intellectually) critical attitude. By definition emotions are subjective, but one can still probe them deeply and question what they mean. The usability field has a tradition of thoughtfulness that doesn't exist in fields like advertising, where it is not uncommon for observers of focus groups to spend most of their attention munching on food and chatting behind the mirror. This thoughtfulness is especially true for those trained in broader, observational research techniques, such as grounded theory. On this basis, user research on emotional needs can be something grounded in evidence, rather than based on whims and hunches, as so much market research can be.

Sunday, March 27, 2005

 

design significance of the iPod

The iPod is one of the few examples of good usability being a motivator, instead of simply being a means to avoid user dissatisfaction. Martin Maguire at Loughborough University once did a study on what factors contributed to users' satisfaction or dissatisfaction with electronic gadgets. He found that users are most turned off by poor usability, and limited functionality (compared with other products.) Bad styling, he found, is not a major turn-off. Martin found that users were most keen about a product having good styling and good functionality. He found good usability is not so strong a factor in creating a positive user connection with a product.

Martin's analysis makes a good deal of sense to me. Users rarely notice good usability, though they do recognize it when it is bad. Since most products look similar anyway, they tend to notice style when it stands out favorably, rather than when it is mundane. But the iPod seems to challenge this hierarchy of motives. If you are looking for a basic MP3-type player, the functionality is roughly the same among models from different vendors. Styling is variable: some vendors make clunky models, though some like Creative Labs make models arguably more hip looking than Apple's. (I find the white rubber iPod looks like it belongs in my bathroom, dispensing soap.) What does differ is the interface and usability. No one else has successfully emulated the iPod's ease of use, the lack of buttons and the smooth navigation via the click wheel. It is "delightful" because it exceeds our conventional expectations. What is amazing is that it continues to delight long after first encountering it, instead of being a momentary pleasure. The iPod shows good usability does have the ability to motivate us, and get us attached to a product. The concept was inspired, though the interface needed a bit of tweaking to become the hit it is today. Our professional challenge is to develop inspired usable interfaces: to generate delightful concepts that have staying power.

Tuesday, March 22, 2005

 

my current pet peeve

Pop-up blocking is a useful feature, but I never know when it is active until I can't see something I'm trying to access. I can turn it on or off, but if I go back in my history, or open a new window, the state I've set changes. Very annoying.

Monday, March 21, 2005

 

is usability demeaning?

Reading the current ACM Interactions magazine, I find a couple of provocative statements about how much usability really cares about people. Nico Macdonald writes: "in design, the advocacy of the user is often presented in terms of victimhood rather than recognizing people's innate abilities and adaptability. The idea that humans are proactive and problem-solving determinants of their own situations has tended to be replaced with a view of them as relatively incapable." Nico's position is that of a technophile and libertarian who sees usability as demeaning to people.

A different implied criticism comes from David Siegel and Susan Dray in their article comparing usability and ethnography. While in ethnography the focus is "on people and how they behave in context", in usability the focus is "on technology." They mention a "stereotype" that usability people are "so focused on technological features that they are hardly better than technocratic designers and developers." In human computer interaction, the emphasis is often on the computer, not on the human.

The issue comes down to whether we are drawn to technology, interested in it and excited by what it can do, or are having it forced on us, trying to accommodate ourselves to its demands. Of course both pull and push coexist. Many people elect to spend a lot of their free time using technology recreationally. At the same time, they are often forced to use technology against their wishes, such as when companies decide it is much cheaper to use an automated system instead of a person. Usability does increase user acceptance of technology. But acceptance shouldn't be confused with authorship. Usable products only allow people to do more easily what others have decided is worth doing.

Sunday, March 20, 2005

 

books have a future...in china

In the news this week a report that the world's largest bookshop has opened in Beijing. It is quite a change from 23 years ago, when I studied in Beijing. Then, one could not even browse in a bookshop. In what must have been the most unfriendly "information architecture" ever devised, one had to jostle with people to get near enough to a counter to see the spines of books and read what titles were even available. If something looked interesting, you had to beg a grumpy sales clerk to take the title off the shelf, so you could briefly scan it at the counter. If you wanted it, you handed the book back to the clerk, who gave you a slip of paper with the book's title and price, which you took to a cashier, who took your money, stamped the slip, and allowed you to return to the clerk to collect the book. I think the Chinese are enjoying the Borders model more.

Another Chinese book innovation I just encountered comes with a recently purchased dictionary. It has a special "anti-forgery" watermarked title page (in washed out ink), so one knows it is genuine and not a pirate copy. No hologram, though.

Saturday, March 19, 2005

 

contaminated meaning

I came across an interesting story, related by Anders Opperund, of how users can associate meanings with a product that the manufacturer is not aware of. Several years ago Sony was making a "sport" model of its walkman product. The product, clad in a bright yellow casing, promised to be shock resistant and waterproof. Unfortunately for other manufacturers who made then-stylish yellow colored devices, consumers expected any yellow product to be shock resistant and waterproof as well, and were upset if they found they weren't. I suspect some manufacturers got their just deserts by copying the look of the Sony product without the underlying intent. But no doubt Sony copied the bright yellow look from other manufacturers (perhaps a toy maker), whose products were never intended to be beach accessories. The makers of the original yellow products on the market may have intended the color to communicate friendliness, not ruggedness. The meaning they intended to communicate was contaminated by Sony's reinterpretation of the color. Goes to show that even after launch, it can be valuable to do user research - your users' understanding may have changed.

Tuesday, March 15, 2005

 

prototypes for reflection

Prototypes are central to iterative design. They are often quick and dirty, just enough to test an idea before moving on. Using software, prototypes can be redone so quickly one hardly ponders them too long. Or they can be slick, with different glossy variants that can be displayed side-by-side in a beauty parade in front of users and company execs.

Before computers, designers used prototypes for reflection, and problem solving. A master maker of product prototypes, Giovanni Sacchi, died recently, aged 92. See his
obituary on designboom. He produced some 25,000 prototypes over his long working life, beautifully crafted in unfinished wood, objects for contemplation rather than for dazzling. A description of his work can be found on the website of the Victoria Museum in Melbourne.

Sunday, March 13, 2005

 

computational linguistics goes mainstream

Amazon continues to amaze me. Searching today (via A9) for a book by Jamshid Gharajedaghi, I find six "statistically improbable phrases (SIPs)" flagged. I've not yet read Gharajedaghi (he was cited by New Zealand design consultant Michael Symthe), but it would seem that the author enjoys spawning new phrases. One can see who else uses these phrases, and whether they predate or follow the use by the author. With a bit of digging, one can trace the diffusion of concepts. Such a pattern matching feature is not a "new" concept, but it is very exciting to see it in the mainstream.
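The intuition behind SIPs is simple enough to sketch: a phrase is "improbable" when it occurs far more often in one text than its frequency in a background corpus would predict. Here is a toy illustration in Python -- the ratio-based scoring is my own simplification for the sake of the example, not Amazon's actual method:

```python
from collections import Counter

def bigrams(text):
    """Split text into lowercase two-word phrases."""
    words = text.lower().split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

def improbable_phrases(doc, corpus_docs, top_n=3):
    """Rank phrases that appear often in `doc` but rarely in the
    background corpus -- a toy version of the SIP idea."""
    doc_counts = Counter(bigrams(doc))
    background = Counter()
    for other in corpus_docs:
        background.update(bigrams(other))
    total_bg = sum(background.values()) or 1
    total_doc = sum(doc_counts.values())
    scores = {}
    for phrase, count in doc_counts.items():
        # Ratio of in-document frequency to (smoothed) corpus frequency:
        # the higher the ratio, the more "improbable" the phrase.
        bg_rate = (background[phrase] + 1) / total_bg
        doc_rate = count / total_doc
        scores[phrase] = doc_rate / bg_rate
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [phrase for phrase, _ in ranked[:top_n]]
```

A phrase an author coins and repeats will dominate such a ranking, which is roughly why SIPs are a good fingerprint for tracing the diffusion of an author's pet concepts.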

Friday, March 11, 2005

 

luxury, populism and utility

Luxury has been called a "supra-functional" aspect of a good. It can supply prestige, of course, but it can also supply pleasure that is not socially derived. Luxury goods are supposed to be "better," though critics say they are just excessive, not better. What gives?

Today I saw a woman wearing a Burberry raincoat. That would be hardly noteworthy in a former British colony where it rains continually. But it was remarkable: it may be the first Burberry I've seen since I moved to New Zealand. New Zealand is like Britain in many ways, but with one big catch: there is no class system here, there is an anti-class system. No one shops at New Bond Street style retailers, quite the opposite: people take pride in shopping at warehouse stores that offer only practical goods at unadorned prices. There are no luxury goods for sale in New Zealand, except for a few duty-free stores catering to Asian tourists.

What's so bad about luxury, or alternatively what's so good about it? There is an enormous baggage associated with luxury goods. De Tocqueville notes that luxury is the privilege of the aristocracy. When goods are more abundant, and quality falls, then the aristocracy is in retreat, he thought. But he does associate luxury with quality, and it is interesting to press that association. Are popular goods necessarily inferior quality? De Tocqueville:
In an aristocracy he [the artisan] would seek to sell his workmanship at a high price to the few; he now conceives that the more expeditious way of getting rich is to sell them at a low price to all. But there are only two ways of lowering the price of commodities. The first is to discover some better, shorter, and more ingenious method of producing them; the second is to manufacture a larger quantity of goods, nearly similar, but of less value. Among a democratic population all the intellectual faculties of the workman are directed to these two objects: he strives to invent methods that may enable him not only to work better, but more quickly and more cheaply; or if he cannot succeed in that, to diminish the intrinsic quality of the thing he makes, without rendering it wholly unfit for the use for which it is intended. When none but the wealthy had watches, they were almost all very good ones; few are now made that are worth much, but everybody has one in his pocket.
I think people who are prepared to pay more for something demand more, and often get more. It is a value judgment, of course, whether what is received is worth what is paid. What I like about de Tocqueville's understanding is that quality is in fact compromised. It is not just about image -- quality can be a real attribute of a brand, even if not always present.

Back to the notion of supra-functionality. I think Victor Papanek, back in the earthy 1970s, discussed historic pioneer cultures, such as New Zealand or the US, which give priority to whether something works ahead of how beautifully it works. The pragmatic bias of such thinking works against celebrating frivolity. But I would argue that not all luxury is frivolous. I don't own an expensive watch, but can understand why people do. Some may want to impress neighbors and colleagues, a doubtful motive for me, but many just delight at something as complex as a mechanical watch in today's throwaway society. Many luxury goods celebrate simple tasks such as telling the time or writing one's signature. When I lived in Europe, I used to enjoy buying uber-expensive household items from a company called Manufactum, largely because of what the items said to me: scrubbing the floor is an important part of the quality of my life, why not spend a little money on enhancing the experience?

Much exciting design today is about discovering opportunities to improve things that have been historically overlooked. Yes, utilitarian solutions exist, but are they all that is possible?

 

novelty fatigue

I can't keep up with all the innovations in blogging. All the time new bookmarking and tagging systems are being introduced that generate more metadata that supposedly helps one find yet more stuff you might be interested in. But my brainspace at the moment can't cope with sorting it out, especially where it involves separating marketing hype from genuine functionality.

I'm suffering from what I'll call "novelty fatigue." I am curious to know if such a syndrome has been studied. There is a rich literature on the diffusion of innovation: Everett Rogers's classic book, Geoffrey Moore's chasm. But most innovation research focuses on how innovations move from the bleeding edge to the laggards. The assumption is that one's receptivity to innovation falls in a fixed category, and never varies. The received opinion is that one either always loves new stuff, or always has some resistance to it, which can be broken down into categories such as "early majority" and so on.

I enjoy novelty, but my enjoyment varies with how much it demands from me. Why am I sometimes very curious about new technologies, and other times wary? And is the general population as fickle toward novelty as I am? There are many reasons why my behavior varies, but an important one is how "noisy" the innovation is. It's not worth my effort to learn something new if it seems like it's just another wild scheme among many. I would guess that when there is excessive innovation "churn" (more than the general population can absorb at once), the adoption of innovations is slower than when it is clear cut what everyone will be using in the future.

Tuesday, March 08, 2005

 

what is a pest?

Designers, indeed any citizen who interacts with the wider world, should be culturally aware. There is even a specialized field known as cultural ergonomics that concerns itself with how different cultures, say, perceive risk. An early test bed for this research was the very international field of aviation, where misunderstanding just isn't tolerated.

Culture is often assumed to be based on long historical precedent, such as religion or language (literal or symbolic.) For example, color doesn't mean the same thing in different places. What surprises me is when cultural meanings are much newer in origin. The notion of a pest is generally based on a deep-seated biological aversion to something threatening. Many people, for example, find creepy-crawly insects pest-like. Yet in New Zealand, a very creepy looking insect, the weta, is revered, and measures are taken to assure its survival. The Department of Conservation notes "weta have become icons for invertebrate conservation." Show a photo of a weta outside New Zealand, and I bet few people would think, "wow, we should conserve these." Most would assume it was a cockroach. Can one imagine a weta instead of a panda as the logo of an international conservation organization?

If the weta isn't a pest, what counts as a pest is equally counter-intuitive to this outsider. The wallaby, a small kangaroo, to me seems the delightful stuff of Saturday morning cartoon shows. In Australia, its native land, it is, like the weta, an icon for conservation. It is loved, and sadly it is endangered. A few wallabies also live on the outskirts of Auckland, New Zealand's largest city. Despite a very friendly rivalry, New Zealand and Australia are not that radically different -- heck, they even have nearly identical flags. But they are different when it comes to wallabies. In Auckland, wallabies are officially classified as a pest by government authorities. It seems that while it struggles in Australia, it gets on too well in New Zealand. So, wallabies would be another problematic logo for an international conservation group to adopt: it would turn off New Zealand donors.

If Australia and New Zealand don't agree on what a pest is, how can we expect India and Iceland to agree? Or Mongolia and Mexico? Some cross-cultural differences are lessening with globalization, but, at the same time, we are inventing new ones.

Monday, March 07, 2005

 

changing user behavior

If "the customer is always right", one might say as a corollary that "the user is generally right (but not always)." Watching BBC World news tonight, I'm reminded of a deep philosophical assumption of user centered design: people don't change, so designs must. At its core, user centered design is a reaction to the once popular grand theories of reconditioning people, building a better (hu)-man, and naive or optimistic views (you choose) of human behavior as malleable.

The news story involved the price of oil, and what to do when it spikes. The International Energy Agency is looking not just at the supply side, but the demand side as well: how to curb demand, at least temporarily, when oil is in short supply. The Dutch government (I love them) points out that energy demand can be reduced if drivers (users) simply change a few of their driving habits. What wonderful common sense -- don't burn rubber while driving when oil is scarce. The advice of the Dutch and the entire International Energy Agency also quietly challenges the false assumption that price alone alters behavior.

What is not tested is if behavior can be altered by appealing to reason. Reason may work for the Dutch, but for the rest of the world, it is a tough sell. We are creatures of unconscious habits.

Will people learn to be more conscientious drivers? Despite my true desires that they might, I have my doubts. Several weeks ago, I found myself looking at a couple of old driver manuals that were produced by the Automobile Association. (I know that may sound odd, but I was in a used bookshop just as my wife was learning to drive.) One manual was from circa 1960, the other circa 1978. I didn't recognize the instructions in either of them. The older one seemed like it was for driving a tractor -- lots of stuff about the choke in the engine. The 1970s edition had elaborate theory on steering wheel hand positions and maneuvers I believe have been superseded by more contemporary research-informed opinion. I suspect when the publications were issued people didn't worry too much about the very informed, conscientious detail provided by the Automobile Association, and just did whatever minimally worked to get from one place to another.

I have to salute the Automobile Association for producing wonderfully clear instructions. Just how much good instructions change user behavior is not self-evident. Some things are too important not to try, though. The user is always reluctant to change, but is not always "right."

Sunday, March 06, 2005

 

affordances: assumed and observed

The more design considers the emotional domain -- what makes an object compelling -- the more I, and many others, want to make objects explain human behavior. It's interesting how the old conceptual chestnut known as affordances can experience a new lease of life in the emerging era of tangible interfaces and ubiquitous computing. I'm reminded how strong, if potentially misleading, the concept is as I read a book on toys.

More from a current favorite author of mine, Brian Sutton-Smith, this time from his book Toys as Culture. Discussing the properties of toys, he quotes the elegant, opinionated and hip-shooting essayist Roland Barthes, who blasts "complicated toys" for supplying "actions without adventure, without wonder, without joy...they are supplied to him [a child] ready-made, he is never allowed to discover anything from start to finish." Barthes goes on to praise the wonder-inducing qualities of wooden blocks. Without any indoctrination from the debates in the HCI world over the utility of affordances, Barthes has arrived at the concept on his own, and applied it with conviction.

After presenting a similar quote from another French author ascribing various intrinsic properties to manufactured toys, Sutton-Smith rightly notes that the writers:

assume that one can simply look at the character of toys and make predictions as to how they will be used and what their effects will be on human creativity. It is doubtful if that is possible. One needs to know the context in which the toy is used to know much at all about its effects.

I find the discussion brilliant, because it highlights how tempting, even apparently commonsensical, it is to make assumptions about how people (even children) will behave based on how something appears. In the GUI era, the affordance concept was rehabilitated by appealing to the notion of the "perceived" affordance: what a user guesses something does based on learning or cultural conditioning. As we embark on designing new tangible interactions that don't borrow from established patterns of behavior, it will be tempting to again imagine that certain properties will elicit, automatically, certain behavioral responses. But truth will come from observation, paying attention to the highly variable contextual element. And the reality will always be muddier than we'd like. Pity.


Friday, March 04, 2005

 

more tales of "blame the user"

It's no fun trying to get something to work, only to get the cold shoulder from some "helpdesk" type who views you as a security risk.

A colleague today tried to get some credit card details online. She had to go to a bank branch to present her credentials. Fair enough, except after she had done that, and was told to contact the customer service phone number once more, she was told by the call center that the branch should have provided another authentication key prior to calling customer service. Not very joined up.

Today I tried to access a computer account I should get as part of enrolling at Victoria University for an industrial design course. The account procedure should be simple enough, but for some reason it didn't work in my case. I emailed support about the problem, and instead of an acknowledgement something was amiss, I get a response saying they won't look into the matter until I supply various "security details," none of which are needed to set up an account in the first place.

Yes, identity theft is a problem (I was a victim several years ago.) But often "security" is a catch-all excuse for inventing new requirements for users, to make it unattractive for them to bother the support staff. It is no secret that organizations desperately want to reduce support staff costs. What better way than to make it seem like it's all for the benefit of security?

Wednesday, March 02, 2005

 

heroes I don't get

Jef Raskin's death has received amazing attention -- a link on Google's homepage, frontpage treatment in the Guardian. For me, Raskin is like Gregory Bateson or Buckminster Fuller: someone many other people idolize, people I respect, but someone whose significance and practical application escapes me entirely.

I read The Humane Interface when it first appeared 5 years ago. I recall thinking, here is yet someone else claiming to have invented the Apple Mac interface. At least he had no competition with the Canon Cat. His message was so old school HCI: all about modes of operation. At a discussion last year on the future of interfaces at AIGA London, speakers said Raskin's newest ideas were curious but irrelevant.

One thing I agreed with Raskin about: we both like the design of the Sony 2010 shortwave radio.

 

Usability Professionals do pseudoscience

My advance program for the Usability Professionals Association meeting arrived, and I'm a bit miffed at who the UPA has chosen for the keynote speaker: an "expert in neurolinguistic programming (NLP)."

True, neurolinguistic programming is less offensive than astrology. Reasonable people have been known to speak favorably of neurolinguistic programming (NLP*, not to be confused with serious if also often disappointing NLP-- natural language processing. I'll refer to the neuro-nonsense with an asterisk -- I think someone originally wanted to trademark it, but missed the boat.) Usually such NLP* enthusiasts only know of it on a superficial level. A lot of NLP* sounds sensible, and that which is reasonable is so because it is common sense.

But NLP* claims to be revolutionary. And it's the revolutionary claims that turn out to be preposterous. NLP* zombies claim there are three "modalities" of understanding: visual, kinesthetic, and auditory. Wow, this sounds like breakthrough stuff for understanding usability. What's more, depending on one's modality, one's eyes tend to shift upwards, or sideways or downwards. You can tell if a person is visual or auditory or whatever, just by looking at their eyes. Eyetracking has never had a bigger commercial promise, if you believe NLP*.

Sadly, NLP*, though 25 years old, was decisively disproven 25 years ago, and remains empirically invalid to this day. There is no credible evidence of modalities or eye movements. It was just an invented fiction that became a fact for its enthusiasts. And now the UPA is honoring this nonsense at its annual meeting.

I like the fact that the UPA is for practitioners, and not academics. There are no tedious lectures about demonstration projects whose chief purpose seems to be acquiring funding for a phase two. Still, I expect the UPA to base their programs on factual material, and insights born of real experience. Even the closing plenary speaker, Aaron Marcus, is a disappointment. Aaron must be one of the most prolific writers in the usability/HCI sphere, but I have yet to read anything original from him -- he just recycles other people's material. Come on UPA, give usability some credit for originality.