Monday, May 14, 2007


transitions

It has been a long while since I last posted on this blog. The combination of a very full schedule and a desire to take a break from the blogging routine has created a hiatus far longer than I would have anticipated. Over the past year I have been fortunate to have been involved in a long term project that goes to the very core of what user centered design is about: redesigning companies!

For any friends who have wondered about my recent movements, I can report that I will be returning to the Washington DC area after an absence of 7 years. I will be much closer to many friends and professional colleagues in the States and the UK (New Zealand is just too far from everywhere), though obviously this will place me a long distance from the friends I have made in New Zealand over the past 3 years.

Tuesday, August 01, 2006


non-executive dashboards

Some follow-up to my posting last week on productive dashboards.

Interaction designers need to consider the different needs of traditional "executive" dashboards, which provide a big picture of activity throughout an enterprise and are typically packed with minute data, and employee-centric dashboards, which reflect information relating to the activities of an individual employee or a small team they belong to.

To see what is happening in enterprise dashboards, one should consult a fantastic resource, The Dashboard Spy. Nearly all the examples posted are executive level views -- generally amalgamating and summarizing unit level data that used to be handled by Excel spreadsheets. Many of these dashboards look like Excel spreadsheets. (Happily, some, such as SAS/GRAPH, have improved on the jarring appearance of Excel's default graphics.) One would expect the mountains of data -- summarized, and capable of being sorted, drilled through, and otherwise manipulated -- to be useful in analytical decision making such as business intelligence. If one's role is to make decisions for others, such by-the-numbers visualizations can be a powerful aid. It is still possible to drown in such data, to be given so many options to manipulate it that one is never sure whether one has seen all the views one needs to see to make a sound decision. But executives are paid to make such decisions, and often they worry they aren't able to see things from all angles and link disparate variables. We'll defer to the brilliance of the executive to figure out what he or she needs, and not worry about the UI being too complex. On balance, access to many variables with many "degrees of freedom" to manipulate those variables is useful for executives.

Now let us consider line staff. Their job is more concerned with doing things than with thinking about how other entities should be doing things. This distinction is sometimes murky with the rise of self-management. As employees are often responsible for making their own decisions, it may be tempting to think they need a mini-executive dashboard. But the more the UI forces them to think about what they should be doing, the more distracted they can be from doing it.

To illustrate the employee's dilemma, let's look at a demo dashboard created by Visual I/O. Visual I/O is an exciting player in dashboards, creating visual user interfaces with impressive graphical representation and interactivity. The demo on their website concerns baseball; it is a playful illustration of interaction concepts and presentation methods they apply to more mundane subjects. They have designed a dashboard to manipulate variables to assess whether a pitcher should be replaced during a game. It reflects an extreme case of the fetish some baseball fans have for statistics!

Let's play along with the scenario, and imagine someone actually using the dashboard. While I can imagine a manager toying with the dashboard the next day at his desk, trying to figure out patterns for how best to rotate pitchers, I have trouble imagining him doing these tasks in the dugout as the game is being played. At that point, he isn't following the game, he's absorbed in the fantastically data-rich user interface. Now, let's stretch our scenario a bit and imagine that the pitcher also uses the same dashboard, which he has loaded on a PDA in his back pocket. Between pitches, he would toy with the dashboard, trying to figure out if he should ask to be relieved. The likely deterioration of his pitching concentration would provide the answer, regardless of what the dashboard data suggested.

What pitchers and others involved in performing tasks need are simple heuristics to make decisions, not mountains of data. Dashboard design could learn from the field of heuristics. A useful volume on this topic, recommended by Don Norman, is Gigerenzer and Todd (eds), Simple Heuristics That Make Us Smart (Oxford, 1999).
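To make the contrast with data-rich dashboards concrete, here is a minimal sketch of "take-the-best", one of the fast-and-frugal heuristics Gigerenzer and Todd describe: compare two options on one cue at a time, in order of the cues' validity, and decide on the first cue that discriminates, ignoring everything else. The pitching cues and numbers below are entirely hypothetical, purely for illustration.

```python
# "Take the best": run through cues in order of validity and decide on
# the first cue that discriminates, ignoring all remaining data.
def take_the_best(option_a, option_b, cues):
    """Return "A" or "B" from the first discriminating cue, else "no decision".

    cues: (cue_name, higher_is_better) pairs, best predictor first.
    option_a, option_b: dicts of cue values for the two options.
    """
    for cue, higher_is_better in cues:
        a, b = option_a.get(cue), option_b.get(cue)
        if a is None or b is None or a == b:
            continue  # cue doesn't discriminate; try the next one
        return "A" if (a > b) == higher_is_better else "B"
    return "no decision"

# Hypothetical pitching-change cues, best predictor first.
cues = [
    ("innings_of_stamina_left", True),
    ("fastball_mph", True),
    ("strikes_thrown_pct", True),
]
keep_current = {"innings_of_stamina_left": 0, "fastball_mph": 88, "strikes_thrown_pct": 55}
call_reliever = {"innings_of_stamina_left": 2, "fastball_mph": 94, "strikes_thrown_pct": 62}
print(take_the_best(keep_current, call_reliever, cues))  # "B" -- decided on cue 1 alone
```

The point is that the decision rests on one cue, checked between pitches, rather than on a screenful of interacting variables.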

Thursday, July 27, 2006


long tails need organization to happen

Okay, I haven't yet read the best seller of the moment, The Long Tail, but I am skeptical. Lee Gomes writes in today's Wall Street Journal (subscription required) that evidence from nearly all quarters shows that the Long Tail isn't real -- people won't buy stuff just because it is there. Whether at Amazon, Netflix, or the iTunes Store, most revenue comes from hits, and vast amounts of music, books, and other digital content are never downloaded at all. The same can be seen in the noncommercial world, where thousands of academic articles are never read except by their authors and, presumably, their editors and reviewers.

Like many ideas spawned by Wired magazine, the Long Tail is a vaguely libertarian notion that all anyone wants is unfettered access. Give people access, and the Tail will emerge spontaneously. The concept is argued on the basis of idealized statistical behavior and supposed transaction cost economies of data servers. From the perspective of user centered design, I find the Long Tail concept a bit naive.

Why are hits so powerful, despite the very real phenomenon that consumers have access to an ever broader range of content? The constraining factor has little to do with computers and economics; it has to do with human attention, both cognitive and psychic.

Cognitive attention is challenged the more stuff is available. Computers have no trouble storing millions of records, but humans have trouble making sense of them. Browsing, scanning and searching become increasingly difficult the more records are available. I don't want to discount the impressive progress in information architecture over the past decade, but I feel the solutions developed are still primitive compared with the needs posed by millions of records. Consider the "subject" taxonomy in Amazon's book store: it is simply too broad to be helpful in view of the millions of records. No one has developed any universally meaningful way to describe music genres that reflects the narrowcast development of styles and approaches.

What I am calling psychic attention is grounded in the many facets of social psychology. We are drawn to things other people are buying for numerous reasons. People feel comfortable buying products that are already accepted. It is "rational" in terms of expected effort expenditure to buy something others have already tried, and presumably found useful or enjoyable. People experience social validation, extend trust, and have a basis for social connection when going for popular options. Information management has addressed the social dimension through behavioral data mining, showing connections between the purchases of different items, and through recommender systems, where people suggest items of interest, rate items, and rate each other's ratings. These systems can reinforce the popularity of already strong sellers, working against the Long Tail.
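To see why such systems favor strong sellers, here is a toy sketch of item-to-item co-occurrence counting, the rough idea behind "customers who bought X also bought Y" features. This is an illustration only, not any store's actual algorithm, and the item names are invented.

```python
from collections import Counter
from itertools import combinations

# Four shopping baskets; item names are invented for the example.
baskets = [
    {"hit_novel", "bestseller_cd"},
    {"hit_novel", "bestseller_cd", "niche_poetry"},
    {"hit_novel", "obscure_jazz"},
    {"hit_novel", "bestseller_cd"},
]

# Count how often each pair of items is bought together.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, n=3):
    """Items most often co-purchased with `item`, most popular first."""
    scored = [(other, c) for (i, other), c in co_counts.items() if i == item]
    return sorted(scored, key=lambda pair: -pair[1])[:n]

print(also_bought("hit_novel"))
# [('bestseller_cd', 3), ('niche_poetry', 1), ('obscure_jazz', 1)]
# The hit co-occurs with everything, so it dominates every list it
# appears on -- a feedback loop that keeps strong sellers strong.
```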

There has been enormous progress in giving form to the mountains of records, but behavioral and recommender systems often externalize the contradictions of individuals (especially in the low volume end of the Long Tail). Take someone's "my favorites" list: it may contain a list of seemingly random items, books and CDs on unrelated topics or styles. Or people mean vastly different things by common words -- as an experiment, type the word "liberal" into Amazon's Listmania. You will find recommendations for books that are far from your personal preferences (whatever they are), because people use the term in so many ways: as a positive term for either Left or Right wing politics, a derisive term for the same, as a theological orientation for various religions, etc. Sales behavior and recommendations are also not logically correlated, pointing to some gaps in behavioral classification. One Amazon reviewer noted that nearly everyone (several hundred reviewers) gave an anti-virus software package the lowest possible rating, yet it showed up as the most popular seller. A conversation with your next door neighbor might explain such a contradiction, but the user interface doesn't.

To navigate through and evaluate the long tail, people must rely on logical organization or social organization (the opinions and behavior of others). If theorists who argue that humans relate to concepts in ways similar to how they relate to people are right, then information organizations need to be smaller. You can't know everyone in a big organization well, which is one reason organizations divide and splinter. The same may need to happen with the superstore websites. Narrowcast marketing presumes people have some intention behind their interest in a product, band, hobby or lifestyle. The superstores try to infer that intention by observing expressed opinions and behavior, but miss the organic aspect of collective intentions. Intentions are consciously formed, and microsites have much greater coherence in their offerings. Meaningful information management is not inherently self-organizing. When "everyone" (either the broader public or a data mining computer) tries to conceptualize and interpret the meaning of something that has resonance to a core group, the meaning gets lost.

Tuesday, July 18, 2006


productive dashboards

In the enterprise world, dashboards are gaining prominence, though their value to employees is often more presumed than validated by evidence. Dashboards have previously been concerned mostly with so-called "C" level information (the preoccupation of top executives), things like aggregate sales or the stock price. If worker-bee employees had access to a dashboard, they saw this big picture data, as if noticing that the stock price fell in the morning would exhort them to work harder.

New generation dashboards are now presenting data more relevant to front-line employees, particularly their KPIs (key performance indicators). The seamless corporation created by enterprise software is allowing a multitude of data indicators to be collected and presented in ways tailored to the work of individual employees. Such dashboards promise to improve measurement and awareness of activity (enabling improvement) and support long-standing goals to de-layer decision making and give more responsibility to front line staff. Dashboards have moved a big step toward relevance to employees, but few dashboards are truly user centered, because they don't address underlying user motivations.

Dashboards have received scant attention from interaction designers, and what attention has been given tends to view dashboards as just another UI, often likened to data-rich maps. Coping with data richness is certainly an aspect of dashboards, but it can potentially focus attention on the wrong end of the user experience. The question is not necessarily how to cram more information on a dashboard so that users can successfully discriminate between different levels and layers of information. Rather, the question may well be how to make sure that the KPIs presented truly support the employee's performance. Ironically, visually rich cartographic dashboards may be distracting to employee performance, even if they present lots of data people think is relevant and even if they can be understood without difficulty. Unlike a map, where data often represents something as lifeless and impersonal as geological formations, dashboards represent data that is anything but impersonal: it reflects the incentives employees are given and how they are rated.

Dashboards are a good example of the importance of understanding user needs in context, moving beyond static understanding to explore a user's lifeworld. A recent article in the Financial Times discussed recent academic and investment research on the paradox of incentives. It notes: "It seems that incentives work well when the subject is given a repetitive, mindless task to perform, rather like the piece rates that applied in manufacturing plants. But when more thought is involved, incentives may dent performance. Our minds start to worry about the incentives, rather than the task at hand. We choke under pressure."

What research suggests is that with complex knowledge work, where there are many factors to mentally juggle, the more we think about multiple KPIs displayed on a dashboard, the more we are distracted from completing the task at hand. Here, our cognitive make-up collides with the business imperative to measure and monitor everything. This conflict can be resolved in different ways. Perhaps employees are being overloaded with KPIs, and so they need fewer, and therefore a simpler dashboard. Perhaps they indeed need to measure and monitor a multitude of data factors, but they should not be rated on all these factors. We could have a sophisticated dashboard of enterprise data that are not KPIs for an individual employee.
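That last option can be made concrete: separate what a dashboard displays from what an employee is rated on. Here is a minimal sketch of such a configuration; all the metric names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    visible: bool = True   # shown somewhere on the dashboard
    rated: bool = False    # counts toward the employee's evaluation

# Hypothetical call-center metrics: everything is monitored,
# but only two are rated KPIs.
metrics = [
    Metric("calls_handled", rated=True),
    Metric("first_call_resolution", rated=True),
    Metric("average_handle_time"),   # monitored for improvement, not rated
    Metric("queue_depth"),           # ambient context only
    Metric("team_backlog"),
]

foreground = [m.name for m in metrics if m.rated]
background = [m.name for m in metrics if m.visible and not m.rated]
print("demands attention:", foreground)
print("stays in the background:", background)
```

The design choice is that only the rated KPIs compete for foreground attention; the rest of the enterprise data stays available but ambient.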

Dashboards promise to act as a window on performance, but they can influence performance as well as reflect it. Ideally employees shouldn't be thinking too much about the dashboard. Dashboards are tools that should blend into the background to support an employee's work, not be in the foreground, screaming for attention.

Monday, July 17, 2006


usability testing isn't dead, only summative testing is dead

I'm hearing much about the misuse of usability testing from some big names like Don Norman, who writes in the current interactions magazine that usability testing is no more than a minor activity of "catching bugs". While Norman maintains that usability testing is still necessary for clean-up purposes, he argues it shouldn't be used to "determine 'what users need'".

I have enormous respect for Norman, and love his recent contrarian views on User Centered Design, which contain many valuable insights. But on his point about user testing I think Norman is flat wrong, and out of touch with how usability testing has developed in recent years.

Norman, and a few other old-time professionals in the HCI world whom I've seen be critical of user testing, reflect a dated understanding of what user testing is. They equate user testing with the bug-tallying process of summative testing, a test often done at the end of the design and development process that gives a report card on how the application works for users. Large groups of test subjects would work through uniform test protocols. In HCI, summative testing used to be the holy grail of scientific respectability for the field, giving statistically measurable data on what works and what doesn't.

As a practitioner, I don't know anyone relying on summative testing to any extent -- for the very reasons Norman and others cite when they criticize it as "too late." But there is plenty of room for usability testing to inform design -- just don't do it at the end, or try to make it a scientific experiment. There is enormous confusion in the usability community because we sometimes discuss testing without being explicit about whether it is old-fashioned summative testing (largely a white elephant) or nimble and iterative formative testing. Both are usability testing, but formative testing is not simply about finding bugs and glitches. Formative testing can be a powerful tool for understanding user needs and preferences.


It was clear to me at the recent UPA conference that formative testing needs its own identity. User testing is no longer a topic of active research in university computer science and psychology departments, as it was when Norman and other HCI pioneers crafted the academic framework from which the user centered design profession has grown. To the PhDs in HCI, the definition of user testing remains frozen in the 1980s. Even the standard textbooks on user testing date from the early 1990s, before formative testing emerged.

Formative testing has developed in the practitioner world in response to the inability of summative testing to cope with iterative design cycles. But there is no orthodoxy about how formative tests are done or evaluated. In many ways the lack of orthodoxy with formative testing has been a blessing, as it has enabled it to be responsive on projects, and grow creatively outside the straitjacket of scientific method. On the downside, because formative testing has developed on the margins of HCI orthodoxy, it hasn't received the recognition it deserves, and can be misunderstood by even big-name HCI gurus.

Many practicing UCD researchers and interaction designers consider statistical validity irrelevant to the value of testing. Testing is valuable because it offers insight, not because it offers data. User comments and stories about their behavior provide richer insights, more useful to design than bug-seeking data. Sometimes it is unclear how strong an insight is, or whether we have uncovered all we want to. In these cases it can be useful to find new methods to evaluate the robustness and completeness of the qualitative data arising from formative testing, and to work with it in an agile, iterative setting. I was pleased to see the beginning of such a discussion of formative testing at the UPA conference last month. (For example, check out the boundary-pushing work of the team at Alias/Autodesk.) There is plenty to improve with current formative testing methods, but let's not throw the baby out with the bath water.

Tuesday, May 23, 2006


design for motivation

Why do some people put up with difficult-to-use products, while others give up quickly? Designers often dodge this question, instead putting the design in the spotlight: it either "works" or it doesn't. A design-oriented approach in effect absolves the individual of responsibility for how they get on with doing something. If users struggle, the design is too complex, or not fun, or somehow otherwise flawed.

Differences in user motivation are vastly under-emphasized in design research, as I realized last year when I tried, unsuccessfully, to learn to wear contact lenses at age 43. Millions of other people successfully learn to wear contacts, while I found contacts a ridiculously difficult product to use. To the extent people talk about differences in motivation to learn to use new products, it is typically done through the un-insightful language of marketing, talking about early adopters and laggards. People are "segmented" along a mythical bell curve, but we never know why they end up where they are placed.

It may seem strange to suggest that user motivation is mysterious. Major corporations spend billions of research dollars trying to unlock the secrets of what makes us want to use a product. There are plenty of solid insights on why we buy products, but we still don't really understand our relationship to products, the reasons why we choose to use or not use them after purchase.

What is missing in grand approaches that confidently promise to "design compelling user experiences" is consideration of the real differences in people's adoption of and adaptation to a design. The "compelling experiences" approach places the design in the role of hypnotist: an expected behavior is induced automatically and predictably by the suggestion (the design). But we know that even with the simple behaviors commanded by hypnotists, only a minority are susceptible to the suggestion.

Psychologists agree there are two basic types of motivation: intrinsic and extrinsic. Intrinsic motivation relates to doing an activity for itself (no reward is offered; the activity is inherently enjoyable to the person), while extrinsic motivation relates to goals beyond those inherent in an activity, where rewards are offered as an inducement. Psychologists acknowledge that extrinsic rewards are very powerful motivators for inducing short-term behavior. But for people chasing extrinsic rewards, "compliance" worsens over the longer term when compared with intrinsically motivated individuals. Extrinsic rewards can become demotivating, offering decreasing psychic payoff over time, and can also become a distraction from doing the central task (people focus more on getting the reward than on the task itself).

A classic intrinsically motivating activity is rock climbing. You scale up a rock, you scale down a rock, and you have nothing to show for it, unless cut hands count. Why do people bother, one might ask? And how does one make money selling to rock climbers?

Significantly, when designers look to make something more motivating, they often look to add extrinsic rewards. We can see this in gaming, where the reward is reaching a new level. Gamers often treat what level they have reached as a bragging right. The accomplishment is getting the external validation of a new level. Gamers will even try to cheat the computer with insiders' shortcuts in order to progress faster. Once the treadmill of level advancement stops, interest in the game may be over.

Intrinsic motivation, in contrast, is more about the enlargement of activities through self-directed choice. The role of the designer in facilitating the user's ability to realize a richer, self-directed experience is that much more subtle, and the results less automatic. An example of such self-direction is what Kathy Sierra calls "superpowers": letting users do things they couldn't do before. There is a reward, but it is essentially the activity itself. We are assuming users really want to do these things for their own sake, rather than to boast about having the superpower. We see how once-exotic software capabilities lose their cool status once everyone can use them. Exclusivity is not an intrinsic motivator, even if it makes people feel they are superpowerful. But other tools, like wikis, are truly empowering, and making them simpler and more widely accessible is part of making them more rewarding.

Intrinsic motivation is an important concept because it challenges the assumption that everyone will be interested in doing something, if only given the proper inducements. In an informal and highly motivating presentation to a UPA gathering here in Wellington yesterday, Kathy Sierra talked about the role challenge plays in motivation. Drawing on Csikszentmihalyi's flow concept, the balancing of challenge and capabilities, Kathy spoke about how much she enjoys Sudoku. There are Sudoku puzzles for all levels of ability. One can master one level, and move to the next. But it doesn't follow that everyone loves a challenge. I'm left cold by logic puzzles (they remind me too much of school) even though I can do some and am challenged by many of them. My intrinsic motivation for doing logic puzzles just isn't there.

Even intrinsic motivation isn't a single value. We can have stronger or weaker intrinsic motivation, or even, quite commonly, conflicted motivation. Few pursuits are free of trade-offs, and we are often torn by these, even if we aren't always consciously aware of which other desires undermine our commitment to a goal we are thinking about.

The table below is an attempt to estimate the effects design can have on various states of intrinsic motivation. In general, the possibility that design will deflate our motivation is stronger than its potential to supercharge our motivation. As an example, an old wooden tennis racket might frustrate a novice tennis player, who might do well with a newer design that has a light carbon fiber frame and a bigger racket head. The wooden racket is demotivating, but the new racket is motivating only insofar as it facilitates a feeling of minimal competence. But I don't think the improved design is causing more people to become interested in learning to play tennis, even if it is slightly easier today than it was a generation ago.



Many factors involved in a user's intrinsic motivation lie outside the scope of a design, especially where user goals are more diffuse and involve personally constructed meanings. The technique of laddering, successively asking "why?" in response to personal statements about goals, can reveal that the correspondence between user tasks and broader life goals is rather tangled. A product may contribute to one goal, but is often not sufficient in itself to achieve that goal. At the same time, the product might even detract from other goals, by consuming time, money or emotional energy. With things so complex, it is small wonder that designers focus on the external rewards of a design -- promising popularity or prestige.

Tuesday, May 16, 2006


un-compelling

A certain fatigue sets in when the ear repeatedly hears something artificial, and the brain keeps trying to interpret it.

I'm tired of the phrase "compelling" used to describe interaction. It isn't simply an overworked cliche, it is a perversion of reality. I hear so many people in the user research and design space talk about designing "compelling user experiences," but I never hear actual users talk about wanting "compelling" experiences. They talk about finding interfaces fun, or interesting, or useful, but not "compelling." To speak of something as fun is to speak of it from a natural ego-centric perspective: I have fun. To speak of something as compelling is to speak of it from an object-centric perspective: I am compelled, by forces beyond my control.

My wariness is not simply a philosophical quibble. All this hype about compelling user experiences sounds like mindless corporate propaganda. Researchers and designers sound like zombies when talking about compelling experiences, and make users sound like zombies too. Let's strike the word from our speech, please.

Thursday, May 11, 2006


customers, users and agile software

In an effort to become less ignorant of agile software approaches, I recently listened to a podcast with Alistair Cockburn on the topic. What agile seems to do well is get users involved in the shaping of the functionality of a system. It can do this by offering a tangible prototype for users to react to. Too often, functionality is shaped entirely by business requirements documents, which are at a very high level. Specific functional requirements that users might need can be overlooked in less iterative processes that emphasize development of high level functional requirements, and if these needs are uncovered later in the process, they can be hard to "bolt on."

The podcast also sheds light on how agile approaches conceive of customers. Agile is indeed "customer focused", but customers are not entirely the same as what usability professionals refer to as users. Cockburn speaks of customer collaboration over customer negotiation. I wholeheartedly support these sentiments, which are good business practice as well as good design practice. But usability professionals would never speak about "user collaboration over user negotiation." Customers -- who include people who buy/commission software -- aren't necessarily users, and the terms customer and user cannot be used interchangeably. So talking about customers, whom Cockburn sees as being both "sponsors and end users", can sometimes obscure distinctions usability professionals would make between different end user profiles (abilities, attitudes, environments, etc.). Even if all end users have the same basic functional requirements (what the system does), it does not follow that they have the same user requirements (the way the system needs to do it to meet user needs).

The central tenet of usability is to test with a representative user population. Here I think usability testing and agile functional testing have widely different approaches. I have suspected that agile approaches lack the thoroughness of usability in seeking wide participation of users in the process, because the needs of functional testing (finding functional limitations) are fundamentally different from those of usability testing (finding interaction limitations). Usability professionals, even when using lightweight "discount" approaches, might need to test 5 people to get a meaningful understanding of interaction issues and problems that need refinement. Cockburn sees successful "access to a customer" under an agile scenario as being an "hour or two" a week -- considerably shorter than usability approaches, and probably not involving as many individuals either. He also emphasizes the importance of access to "expert users." Experts are indeed useful for functional specifications, but not necessarily ideal for determining user requirements.
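As an aside, the usual rationale for the figure of 5 is the Nielsen/Landauer problem-discovery model: if each participant independently uncovers a given problem with probability p (about 0.31 in their published data), then n participants uncover it with probability 1 - (1 - p)^n. A quick sketch of the arithmetic:

```python
# Nielsen/Landauer problem-discovery model behind "test with 5 users":
# each participant uncovers a given problem with probability p, so n
# participants uncover it with probability 1 - (1 - p)**n.
def share_of_problems_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {share_of_problems_found(n):.0%}")
# 1 user   -> 31%
# 3 users  -> 67%
# 5 users  -> 84%   (the usual "about 85%" figure)
# 10 users -> 98%
```

The steep diminishing returns after five participants are what make "discount" testing defensible for iterative work, even without statistical validity.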

For those of us in the usability community, the podcast highlights some of the attitudes of the agile orientation, particularly the aversion to "process heavy" approaches. Notwithstanding the debate and diversity in the agile development community, agile developers generally embrace methodological minimalism and skepticism toward formalism, for example, asking for a justification of the value of non-code related activities such as documentation. Within agile, there is lively discussion over which activities are essential and which ones aren't. It remains to be seen if agile software developers will consider user centered design as another "process heavy" activity.
