Tuesday, May 23, 2006

 

design for motivation

Why do some people put up with difficult-to-use products, while others give up quickly? Designers often dodge this question, instead putting the design in the spotlight: it either "works" or it doesn't. A design-oriented approach in effect absolves the individual of responsibility for how they get on with a product. If users struggle, the design is too complex, or not fun, or otherwise flawed.

Differences in user motivation are vastly under-emphasized in design research, as I realized last year when I tried, unsuccessfully, to learn to wear contact lenses at age 43. Millions of other people successfully learn to wear contacts, while I found them a ridiculously difficult product to use. To the extent people talk about differences in motivation to learn to use new products, it is typically through the uninsightful language of marketing, with its early adopters and laggards. People are "segmented" along a mythical bell curve, but we never learn why they end up where they are placed.

It may seem strange to suggest that user motivation is mysterious. Major corporations spend billions of research dollars trying to unlock the secrets of what makes us want to use a product. There are plenty of solid insights on why we buy products, but we still don't really understand our relationship to products: the reasons why we choose to use or not use them after purchase.

What is missing in grand approaches that confidently promise to "design compelling user experiences" is consideration of the real differences in people's adoption of and adaptation to a design. The "compelling experiences" approach places the design in the role of hypnotist: an expected behavior is induced automatically and predictably by the suggestion (the design). But we know that even with the simple behaviors commanded by a hypnotist, only a minority are susceptible to the suggestion.

Psychologists agree there are two basic types of motivation: intrinsic and extrinsic. Intrinsic motivation relates to doing an activity for itself (no reward is offered; the activity is inherently enjoyable to the person), while extrinsic motivation relates to goals beyond those inherent in an activity, where rewards are offered as an inducement. Psychologists acknowledge that extrinsic rewards are very powerful motivators for inducing short-term behavior. But for people chasing extrinsic rewards, "compliance" worsens over the longer term when compared with intrinsically motivated individuals. Extrinsic rewards can become demotivating, offering decreasing psychic payoff over time, and can also become a distraction from the central task (people focus more on getting the reward than on the task itself).

A classic intrinsically motivating activity is rock climbing. You scale up a rock, you scale down a rock, and you have nothing to show for it, unless cut hands count. Why do people bother, one might ask? And how does one make money selling to rock climbers?

Significantly, when designers look to make something more motivating, they often look to add extrinsic rewards. We can see this in gaming, where the reward is reaching a new level. Gamers often treat the level they have reached as a bragging right. The accomplishment is getting the external validation of a new level. Gamers may even cheat the computer with insiders' shortcuts in order to progress faster. Once the treadmill of level advancement stops, interest in the game may be over.

Intrinsic motivation, in contrast, is more about enlargement of activities through self-directed choice. The role of the designer in facilitating the user's ability to realize a richer, self-directed experience is that much more subtle, and the results less automatic. An example of such self-direction is what Kathy Sierra calls "superpowers": letting users do things they couldn't do before. There is a reward, but it is essentially the activity itself. We are assuming users really want to do these things for their own sake, rather than to boast about having the superpower. We see how once-exotic software capabilities lose their cool status once everyone can use them. Exclusivity is not an intrinsic motivator, even if it makes people feel superpowerful. But other tools, like wikis, are truly empowering, and making them simpler and more widely accessible is part of making them more rewarding.

Intrinsic motivation is an important concept because it challenges the assumption that everyone will be interested in doing something, if only given the proper inducements. In an informal and highly motivating presentation to a UPA gathering here in Wellington yesterday, Kathy Sierra talked about the role challenge plays in motivation. Drawing on Csikszentmihalyi's flow concept, the balancing of challenge and capability, Kathy spoke about how much she enjoys Sudoku. There are Sudoku puzzles for all levels of ability. One can master one level, and move to the next. But it doesn't follow that everyone loves a challenge. I'm left cold by logic puzzles (they remind me too much of school) even though I can do some and am challenged by many of them. My intrinsic motivation for doing logic puzzles isn't there.

Even intrinsic motivation isn't a single value. We can have stronger or weaker intrinsic motivation, or even, quite commonly, conflicted motivation. Few pursuits are free of trade-offs, and we are often torn by them, even when we aren't consciously aware of which other desires undermine our commitment to a goal.

The table below is an attempt to estimate the effects design can have on various states of intrinsic motivation. In general, the possibility that design will deflate our motivation is stronger than its potential to supercharge our motivation. As an example, an old wooden tennis racket might frustrate a novice tennis player, who might do well with a newer design that has a light carbon fiber frame and a bigger racket head. The wooden racket is demotivating, but the new racket is motivating only so far as it facilitates a feeling of minimal competence. But I don't think the improved design is causing more people to become interested in learning to play tennis, even if the game is slightly easier today than it was a generation ago.



Many factors involved in a user's intrinsic motivation lie outside the scope of a design, especially where user goals are more diffuse and involve personally constructed meanings. The technique of laddering, successively asking "why?" in response to personal statements about goals, can reveal that the correspondence between user tasks and broader life goals is rather tangled. A product may contribute to one goal, but is often not sufficient in itself to achieve that goal. At the same time, the product might even detract from other goals, by consuming time, money or emotional energy. With things so complex, it is small wonder that designers focus on the external rewards of a design -- promising popularity or prestige.

Tuesday, May 16, 2006

 

un-compelling

A certain fatigue sets in when the ear repeatedly hears something artificial, and the brain keeps trying to interpret it.

I'm tired of the phrase "compelling" used to describe interaction. It isn't simply an overworked cliche; it is a perversion of reality. I hear so many people in the user research and design space talk about designing "compelling user experiences," but I never hear actual users talk about wanting "compelling" experiences. They talk about finding interfaces fun, or interesting, or useful, but not "compelling." To speak of something as fun is to speak of it from a natural ego-centric perspective: I have fun. To speak of something as compelling is to speak of it from an object-centric perspective: I am compelled, by forces beyond my control.

My wariness is not simply a philosophical quibble. All this hype about compelling user experiences sounds like mindless corporate propaganda. Researchers and designers sound like zombies when talking about compelling experiences, and make users sound like zombies too. Let's strike the word from our speech, please.

Thursday, May 11, 2006

 

customers, users and agile software

In an effort to become less ignorant of agile software approaches, I recently listened to a podcast with Alistair Cockburn on the topic. What agile seems to do well is get users involved in shaping the functionality of a system. It can do this by offering a tangible prototype for users to react to. Too often, functionality is shaped entirely by business requirements documents, which are written at a very high level. Specific functional requirements that users need can be overlooked in less iterative processes that emphasize high-level requirements, and if these needs are uncovered later in the process, they can be hard to "bolt on."

The podcast also sheds light on how agile approaches conceive of customers. Agile is indeed "customer focused", but customers are not entirely the same as what usability professionals refer to as users. Cockburn speaks of customer collaboration over customer negotiation. I wholeheartedly support these sentiments, which are good business practice as well as good design practice. But usability professionals would never speak about "user collaboration over user negotiation." Customers -- who include people who buy or commission software -- aren't necessarily users, and the terms customer and user cannot be used interchangeably. So talking about customers, whom Cockburn sees as being both "sponsors and end users", can sometimes obscure distinctions usability professionals would make between different end user profiles (abilities, attitudes, environments, etc.). Even if all end users have the same basic functional requirements (what the system does), it does not follow that they have the same user requirements (the way the system needs to do it to meet user needs).

The central tenet of usability is to test with a representative user population. Here I think usability testing and agile functional testing have widely different approaches. I have suspected that agile approaches lack the thoroughness of usability in seeking wide participation of users in the process, because the needs of functional testing (finding functional limitations) are fundamentally different from those of usability testing (finding interaction limitations). Usability professionals, even when using lightweight "discount" approaches, might need to test five people to get a meaningful understanding of interaction issues and problems that need refinement. Cockburn sees a successful "access to a customer" under an agile scenario as being an "hour or two" a week -- considerably shorter than usability approaches, and probably not involving as many individuals either. He also emphasizes the importance of access to "expert users." Experts are indeed useful for functional specifications, but not necessarily ideal for determining user requirements.
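
The figure of five participants isn't arbitrary; it comes from the problem-discovery model associated with Nielsen and Landauer. If each participant independently uncovers a given problem with probability L, then n participants are expected to uncover 1 - (1 - L)^n of the problems. A minimal sketch of that arithmetic, assuming the average value L = 0.31 that Nielsen and Landauer reported (real projects vary widely):

```python
# Problem-discovery model behind the "five users" rule of thumb
# (Nielsen and Landauer). Assumes every problem is equally likely to be
# found by any participant; 0.31 is their published average rate.

def proportion_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n_users."""
    return 1 - (1 - discovery_rate) ** n_users

for n in range(1, 9):
    print(f"{n} users: {proportion_found(n):.0%} of problems found")
# Five participants surface roughly 85% of problems under these assumptions,
# which is why small "discount" tests can still be informative.
```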

For those of us in the usability community, the podcast highlights some of the attitudes of the agile orientation, particularly the aversion to "process heavy" approaches. Notwithstanding the debate and diversity in the agile development community, agile developers generally embrace methodological minimalism and skepticism toward formalism, for example asking for a justification of the value of non-code activities such as documentation. Within agile, there is lively discussion over which activities are essential and which aren't. It remains to be seen whether agile software developers will consider user centered design another "process heavy" activity.

Thursday, May 04, 2006

 

talking usability in the meeting room

"We need to get a decision about this -- I'll book a meeting so we can get resolution on it."

In business, meetings are where action happens, where information is disclosed and debated and decisions are made. Meetings allow many parties to be involved in discussion and decisions more efficiently than one-on-one conversations, or (in many cases) group emails. Business culture gripes about the time meetings take (often simply a sign that people are too busy generally), adding to the pressure to make meetings productive. A meeting is not successful unless issues are closed out and next steps are agreed that move matters beyond what was discussed in the meeting itself. Employees are drilled in this culture, taking courses on running meetings and being effective participants in them. Meeting attendees who don't follow the code of conduct are admonished in front of their colleagues.

The culture of efficient business meetings doesn't transplant well to issues involving design and user needs. Because meetings are so deeply ingrained into the corporate mindset, organizations often view meetings as the best way to make decisions about the usability of designs. Although meetings have their place in a user centered design process, they can equally hinder a user centered solution.

A standard organizational behavior is to call a meeting to get "resolution" about an issue. If we aren't sure how to design something, let's call a meeting to get resolution on it. The assumption is that if you get a cross-section of stakeholders in a room together, they can have a rational discussion and reach a decision then and there. What is also assumed, sometimes without full consideration, is that everyone involved in the discussion has adequate knowledge and information to make a good decision.

The more finely tuned the meeting process, the less obvious it is that participants may not have sufficient information to decide usability issues. A carefully considered invitation list, a well structured agenda, and PowerPoint slides can give the appearance that decisions can be made. For many business issues this is in fact possible -- participants have the top-of-the-head knowledge necessary to respond to topics that arise in the meeting, so decisions can be made then. But for usability-related issues, the top-of-the-head knowledge of general business participants is usually inadequate (this would not necessarily be an issue for a more specialized group meeting, such as an in-house design team). The stakeholders at decision-related meetings are generally not the same people who will be using a system, and are not the right people to comment on user needs. And users aren't the right people to make final design decisions. Research and decision making are separate activities.

Some consultants have tried to adapt usability techniques to fit corporate meeting rituals, but I question the quality of information such techniques develop. The temptation is to borrow techniques familiar to corporate employees from decision-making and training workshops, such as voting or role playing, and use them as the basis for making decisions about designs. One evaluation technique I have seen discussed involves a set of rules for different participants to pretend they are different kinds of users. The participants fill out worksheets, follow participation rules, then make decisions based on what they saw in the role play. I can imagine that participants, committed to doing a good job and making the effort to follow the process faithfully, would believe they are accomplishing something valuable at such a meeting. But we can't expect Hal in marketing to know how a customer who lives or works in very different circumstances will behave. People struggle enough trying to recall how they do their own work on computers when they aren't sitting in front of one. Getting a surrogate to imagine how a mythical user does a task strains credibility even more.

So when are meetings useful in user centered design? In general, meetings are useful at the beginning and at the end, but not in the middle, of design research. In the beginning, meetings can be useful for exploring issues early. At this point, we aren't making design decisions; we are just trying to get a picture of what issues might need detailed consideration, so top-of-the-head comments are useful. We'll verify this information later through contextual or behavioral research, or user testing. Research needs to be done outside of a meeting setting. Group workshops are fine, since unlike meetings their purpose is simply to elicit user data, not to make decisions. At the end of a design research phase, we have some results to report at a meeting, which can provide a basis for decisions about next steps.

The pressure will always be on for instant answers to allow quick decisions. The seduction of meetings is that they collapse the answering and decision-making processes, which need to be separate activities in usability. Business people are accustomed, once they show up at a meeting, to being rewarded with a decision at its conclusion. It is therefore vital to emphasize the difference between a workshop and a meeting, and not to agree to meetings unless there is sufficient user data to make design decisions.

But in addition to knowing what is not appropriate about meetings in usability, we need to appreciate why meetings are attractive to business people, and address the underlying needs that make them attractive. By combining discussion (the offering and analysis of information) with decision making, meetings can be fast, and they make clear the connections between data and decisions. This suggests we need to work to make data gathering and analysis as quick and clear as possible: to make data responsive to decision making (gathering data to address the agenda of a scheduled meeting) and transparent (decisions reflect data that was widely understood by decision makers). These aren't new goals, but there's still plenty of room left for improvement.

Wednesday, May 03, 2006

 

task oriented information architecture

Most discussion of information architecture relates to finding information. There are articles people want to read, or catalog items people want to browse. What receives less attention in information architecture is how to organize user interfaces to perform tasks, particularly tasks involving complex, drawn-out processes. After recently working on an enterprise application, I have concluded that task-oriented information architecture involves unique issues.

One thing that makes task oriented information architecture a challenge is that it can be difficult to develop a solid understanding of how users do their tasks, particularly if there is a lot of task variety. Consider the diagram below.



In creating an IA, the central question is whether to organize screens around "objects" (concrete entities such as customers or products) and then have screens relating to subordinate processes coming off each object. For example, first locate object 1, then do process 1 for object 1, then process 2, and so on. Or is it better to organize screens first around processes, and then have subordinate screens relating to objects available while in the midst of a process?
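
To make the contrast concrete, here is a small illustrative sketch of the two hierarchies. The object and process names are hypothetical, anticipating the travel example later in this post:

```python
# Illustrative sketch only: the same screens nested two different ways.
# Object and process names are hypothetical.

# Object-first: locate an object, then run processes against it.
object_first = {
    "customer": ["view itinerary", "book ticket", "change booking"],
    "product": ["view details", "check availability"],
}

# Process-first: enter a process, then pull in whichever objects it needs.
process_first = {
    "book ticket": ["customer", "product"],
    "change booking": ["customer"],
}
```

Either structure can display the same screens; what differs is which choice the user must make first, and therefore which scenarios feel natural and which feel forced.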

For a one-off task, the question can be answered fairly easily. One can structure the task so that there is only one object and several process steps. Or, less commonly, we can fix the process, so that all objects must be processed at the same time. In these cases, if you have more than one object or more than one process, you need to repeat the whole cycle.

A task oriented IA becomes much more complicated when you don't know in advance how many objects or processes might be involved. For example, if the IA needed to address the processing of travel-related information, we might need a certain elasticity in the IA to accommodate different scenarios. Some people will travel individually, others in groups (with several customers doing the same thing). Some people will have accounts involving several trips. Customer representatives may need to process numerous items for unrelated customers at the same time. We don't know how many separate products (tickets, hotels) will be associated with any single journey. Groups may start with the same itinerary, so that doing processes together makes sense, but the group may break apart after a conference ends and then have separate itineraries. While my IA experience was not related to travel (I've tried to choose an example that might be easier to understand than what I worked on), I hope these examples give a flavor of the kinds of issues that can arise.

My experience suggests that traditional card sorting methods are not ideal for task oriented IA. The reason is that labels are often not meaningful outside a scenario. What makes sense when there is just one customer makes less sense when there is a group of customers. For scenarios to be really useful, one needs to cover off all the likely scenarios, not just the major ones.

When experimenting with task-oriented IA, here are some issues to keep in mind:

- whether screens should be organized around objects or around processes, and whether that choice needs to flex by scenario
- how much variation there is in the number of objects and processes a task can involve
- that labels are often not meaningful outside a scenario, so card sorting results may mislead
- that scenarios need to cover all the likely cases, not just the major ones

Tuesday, May 02, 2006

 

user testing and agile methods

I'm looking forward to discussion at next month's UPA conference on incorporating usability into agile software development methods (BTW: if you'll be in Denver and would like to meet up, drop me a line.)

One key question that needs a good discussion is how much user testing is needed in an agile development. How much testing is enough? Is testing always necessary?

These questions may seem to be a matter of opinion at the moment, but there is no reason why the usability community can't work to develop some data to answer them concretely.

At least one prominent advocate of agile usability believes that traditional user centered design methods involve an over-reliance on user testing.

I am concerned that agile software development can result in an under-reliance on user testing. By under-reliance, I mean that flaws in the software design that affect users go undetected because the design wasn't tested with users during the development process.

While I am interested in agile methods, I can't claim special expertise. Insofar as I understand agile usability, there is no unified approach people commonly follow. Some process-oriented methods have been developed, as well as numerous improvisational attempts to find a balance that works. From what I have gleaned, these different approaches have not been explicit on how much user testing to conduct, under what circumstances to conduct it, and at what point. Some have suggested that successful results are possible with agile methods with less user testing, because the design process is more robust, so problems are resolved before users ever see them. Others don't make this claim, though they admit that agile methods typically involve less user testing simply because the timelines and project structure limit the testing element.

What the usability community needs to know is whether doing a more light-weight version of user testing under an agile process is a net benefit, or simply a sacrifice to accommodate usability to the agile framework. Obviously, if there is no need for extensive usability testing in an agile process, and quality for users is as good as if more extensive testing was done, this is a big positive.

I'd like to see comparisons of agile processes using limited user testing against agile processes using more extensive testing. If agile methods can result in better designs that need little user testing, there will be few changes to the original design as a result of user testing. We would also expect that a very small sample of users would be adequate to catch any issues, so that larger numbers of test subjects would yield no new information. And once changes have been made following the first round of user tests, any subsequent round should yield no further information; after one iteration, making further changes to the design would be superfluous.
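
These expectations can be phrased as a testable prediction using the same discovery model sketched earlier. A hypothetical illustration (the latent problem counts are invented purely for the sake of comparison): if an agile design process truly leaves few problems in the design, the number of new problems each additional test participant surfaces should drop below one almost immediately, while a problem-rich design keeps yielding fresh findings.

```python
# Hypothetical comparison: expected new problems found per added participant,
# using the same discovery model as above. The latent problem counts (30 vs 5)
# are invented purely for illustration.

def cumulative_found(n_users: int, total_problems: int,
                     discovery_rate: float = 0.31) -> float:
    """Expected cumulative problems found by n_users out of total_problems."""
    return total_problems * (1 - (1 - discovery_rate) ** n_users)

for total in (30, 5):  # a problem-rich design vs. a nearly clean one
    found = [cumulative_found(n, total) for n in range(1, 6)]
    marginal = [found[0]] + [b - a for a, b in zip(found, found[1:])]
    print(f"{total} latent problems, new finds per user:",
          [f"{m:.1f}" for m in marginal])
```

If real test data looked like the nearly clean case, testing beyond a handful of participants would indeed be superfluous; if it looked like the problem-rich case, the limited testing is a sacrifice rather than a benefit.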

Usability has a strong empirical tradition, in which we examine and debate topics vital to our effectiveness, such as the optimal number of test subjects. Perhaps we can start developing some data about the effectiveness of testing with agile methods. We need to move beyond talking about over-reliance and under-reliance on user testing without clear criteria, and toward concrete data measuring how well those criteria are met.
