Modules and Wholes
discussion around human centered design and human potential
by Michael Andrews

transitions (14 May 2007)

It has been a long while since I posted on this blog. The combination of a very full schedule and a desire to take a break from the blogging routine has created a hiatus far longer than I would have anticipated. Over the past year I have been fortunate to be involved in a long term project that goes to the very core of what user centered design is about: redesigning companies!

For any friends who have wondered about my recent movements, I can report that I will be returning to the Washington DC area after an absence of 7 years. I will be much closer to many friends and professional colleagues in the States and the UK (New Zealand is just too far from everywhere), though obviously this will place me a long distance from friends I have made in New Zealand over the past 3 years.

non-executive dashboards (1 August 2006)

Some follow-up to my posting last week on <a href="http://michaelandrews.blogspot.com/2006/07/productive-dashboards.html">productive dashboards</a>.

Interaction designers need to consider the different needs of traditional "executive" dashboards, which provide a big picture of activity throughout an enterprise and are typically packed with minute data, and employee-centric dashboards, which reflect information relating to the activities of an individual employee or a small team they belong to.

To see what is happening in enterprise dashboards, one should consult a fantastic resource, <a href="http://dashboardspy.wordpress.com/">The Dashboard Spy</a>. Nearly all the examples posted are executive level views -- generally amalgamating and summarizing unit level data that used to be handled by Excel spreadsheets. Many of these dashboards look like Excel spreadsheets. (Happily, some, such as SAS/GRAPH, have improved on the jarring appearance of Excel's default graphics.) One would expect the mountains of data summarized, and capable of being sorted, drilled through, and otherwise manipulated, to be useful in analytical decision making such as business intelligence. If one's role is to make decisions for others, such by-the-numbers visualizations can be a powerful aid. It is still possible to drown in such data, to be given so many options for manipulating it that one is never sure whether one has seen all the views needed to make a sound decision. But executives are paid to make such decisions, and often they worry they aren't able to see things from all angles and link disparate variables. We'll defer to the brilliance of the executive to figure out what he or she needs, and not worry about the UI being too complex. On balance, access to many variables, with many "degrees of freedom" to manipulate these variables, is useful for executives.

Now let us consider line staff. Their job is more concerned with doing things, rather than thinking about how other entities should be doing things. This distinction is sometimes murky with the rise of self-management.
As employees are often responsible for making their own decisions, it may be tempting to think they need a mini-executive dashboard. But the more the UI forces them to think about what they should be doing, the more it distracts them from doing it.

To illustrate the employee's dilemma, let's look at a demo dashboard created by Visual I/O. Visual I/O is an exciting player in dashboards, creating visual user interfaces with impressive graphical representation and interactivity. The <a href="http://www.visual-io.com/baseball/">demo</a> on their website concerns baseball; it is a playful illustration of interaction concepts and presentation methods they apply to more mundane subjects. They have designed a dashboard for manipulating variables to assess whether a pitcher should be replaced during a game. It reflects an extreme case of the fetish some baseball fans have for statistics!

Let's play along with the scenario, and imagine someone actually using the dashboard. While I can imagine a manager toying with the dashboard the next day at his desk, trying to figure out patterns for how best to rotate pitchers, I have trouble imagining him doing these tasks in the dugout as the game is being played. At that point, he isn't following the game, he's absorbed in the fantastically data-rich user interface. Now, let's stretch our scenario a bit and imagine that the pitcher also uses the same dashboard, which he had loaded on a PDA in his back pocket. Between pitches, he would toy with the dashboard, trying to figure out if he should ask to be relieved. The likely deterioration in his pitching concentration would provide the answer, regardless of what the dashboard data suggested.

What pitchers and others involved in performing tasks need are simple heuristics to make decisions, not mountains of data. Dashboard design could learn from the field of heuristics. A useful volume on this topic, recommended by Don Norman, is Gigerenzer and Todd (eds), <em>Simple heuristics that make us smart</em> (Oxford, 1999).
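To make the contrast concrete, here is a minimal sketch, in Python, of the kind of fast-and-frugal decision tree Gigerenzer and Todd describe: a few cues consulted in a fixed order, exiting at the first decisive one. The cues and thresholds are invented for illustration, not drawn from any real pitching model.

```python
# A fast-and-frugal tree: consult one cue at a time and exit at the
# first decisive answer. Cues and thresholds are purely illustrative.

def should_replace_pitcher(pitch_count: int, runs_this_inning: int,
                           velocity_drop_mph: float) -> bool:
    if pitch_count > 110:        # cue 1: workload
        return True
    if runs_this_inning >= 3:    # cue 2: immediate results
        return True
    if velocity_drop_mph > 4.0:  # cue 3: visible fatigue
        return True
    return False                 # default: leave the pitcher in

print(should_replace_pitcher(95, 1, 5.2))  # True: the fatigue cue fires
```

A rule like this is deliberately far poorer in data than the Visual I/O demo, and that poverty is the point: it leaves the decision maker's attention free for the game.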
long tails need organization to happen (27 July 2006)

Okay, I haven't yet read the best seller of the moment, <em>The Long Tail</em>, but I am skeptical. Lee Gomes <a href="http://online.wsj.com/article/SB115387606762117314.html?mod=technology_featured_stories_hs">writes</a> in today's <em>Wall Street Journal</em> (subscription required) that evidence from nearly all quarters shows that <a href="http://en.wikipedia.org/wiki/Long_tail">the Long Tail</a> isn't real -- people won't buy stuff just because it is there. Whether at Amazon, Netflix, or the iTunes Store, most revenue comes from hits, and vast amounts of music, books, and other digital content are never downloaded at all. The same can be seen in the noncommercial world, where thousands of academic articles are never read except by their authors, and presumably, their editors and reviewers.

Like many ideas spawned by <em>Wired</em> magazine, the Long Tail is a vaguely libertarian notion that all anyone wants is unfettered access. Give people access, and the Tail will emerge spontaneously. The concept is argued on the basis of idealized statistical behavior and the supposed transaction cost economies of data servers. From the perspective of user centered design, I find the Long Tail concept a bit naive.

Why are hits so powerful, despite the very real phenomenon that consumers have access to an ever broader range of content? The constraining factor has little to do with computers and economics; it has to do with human attention, both cognitive and psychic.

Cognitive attention is challenged the more stuff is available. Computers have no trouble storing millions of records, but humans have trouble making sense of them. Browsing, scanning and searching become more difficult the more records are available. I don't want to discount the impressive progress in information architecture over the past decade, but I feel the solutions developed are still primitive compared with the needs posed by millions of records. Consider the "subject" taxonomy in Amazon's book store: it is simply too broad to be helpful in view of the millions of records. No one has developed a universally meaningful way to describe music genres that reflects the narrowcast development of styles and approaches.

What I am calling psychic attention is grounded in the many facets of social psychology. We are drawn to things other people are buying for numerous reasons. People feel comfortable buying products that are already accepted. It is "rational" in terms of expected effort expenditure to buy something others have already tried, and presumably found useful or enjoyable. People experience social validation, extend trust, and have a basis for social connection when going for popular options. Information management has addressed the social dimension through behavioral data mining, showing connections between the purchases of different items, and through recommender systems, where people suggest items of interest, rate items, and rate each other's ratings. These systems can reinforce the popularity of already strong sellers, working against the Long Tail.

There has been enormous progress in giving form to the mountains of records, but behavioral and recommender systems often externalize the contradictions of individuals (especially at the low volume end of the Long Tail). Take someone's "my favorites" list: it may contain seemingly random items, books and CDs on unrelated topics or styles. Or people mean vastly different things by common words -- as an experiment, type the word "liberal" into Amazon's Listmania. You will find recommendations for books that are far from your personal preferences (whatever they are), because people use the term in so many ways: as a positive term for either Left or Right wing politics, a derisive term for the same, a theological orientation for various religions, etc. Sales behavior and recommendations are also not logically correlated, pointing to some gaps in behavioral classification. One Amazon reviewer noted that nearly everyone (several hundred reviewers) gave an anti-virus software package the lowest possible rating, yet it showed up as the most popular seller. A conversation with your next door neighbor might explain such a contradiction, but the user interface doesn't.

To navigate through and evaluate the long tail, people must rely on logical organization or social organization (the opinion and behavior of others). If theorists who argue that humans relate to concepts in ways similar to how they relate to people are right, then information organizations need to be smaller.
You can't know everyone in a big organization of people well, which is one reason organizations divide and splinter. The same may need to happen with the superstore websites. Narrowcast marketing presumes people have some intention behind their interest in a product, band, hobby or lifestyle. The superstores try to infer that intention by observing expressed opinions and behavior, but miss the organic aspect of collective intentions. Intentions are consciously formed, and microsites have much greater coherence in their offerings. Meaningful information management is not inherently self-organizing. When "everyone" (either the broader public or a data mining computer) tries to conceptualize and interpret the meaning of something that has resonance to a core group, the meaning gets lost.
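Returning to the behavioral data mining point above, here is a toy sketch of item-to-item co-occurrence recommendation, the rough family of techniques behind "customers who bought X also bought Y" features. The purchase baskets are invented; the thing to notice is that the popular item tops every recommendation list simply because it co-occurs with everything, which is the hit-reinforcing dynamic described above.

```python
from collections import Counter, defaultdict

# Toy purchase baskets: the item "hit" appears in most of them.
baskets = [
    {"hit", "indie_a"}, {"hit", "indie_a"},
    {"hit", "indie_b"}, {"hit", "indie_b"},
    {"hit", "indie_c"}, {"hit", "indie_c"},
    {"indie_a", "indie_b"},
]

# Count co-occurrences: for each item, which items share a basket with it.
co = defaultdict(Counter)
for basket in baskets:
    for item in basket:
        for other in basket:
            if other != item:
                co[item][other] += 1

# "Customers who bought X also bought..." = most frequent co-occurrers.
for item in ["indie_a", "indie_b", "indie_c"]:
    print(item, "->", [name for name, _ in co[item].most_common(2)])
# Every niche item recommends "hit" first, reinforcing its popularity.
```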
productive dashboards (18 July 2006)

In the enterprise world, dashboards are gaining prominence, though their value to employees is more often presumed than validated by evidence. Dashboards have previously been concerned mostly with so-called "C" level information (the preoccupation of top executives): things like aggregate sales or the stock price. If worker-bee employees had access to a dashboard, they saw this big picture data, as if they would be exhorted to work harder after noticing the stock price fell that morning.

New generation dashboards are now presenting data more relevant to front-line employees, particularly their KPIs (key performance indicators). The seamless corporation created by enterprise software is allowing a multitude of data indicators to be collected and presented in ways tailored to the work of individual employees. Such dashboards promise to improve measurement and awareness of activity (enabling improvement) and to support long-standing goals to de-layer decision making and give more responsibility to front line staff. Dashboards have moved a big step toward relevance to employees, but few dashboards are truly user centered, because they don't address underlying user motivations.

Dashboards have received scant attention from interaction designers, and what attention has been given tends to view dashboards as just another UI, often likened to data-rich maps. Coping with data richness is certainly an aspect of dashboards, but it can focus attention on the wrong end of the user experience. The question is not necessarily how to cram more information onto a dashboard so that users can successfully discriminate between different levels and layers of information. Rather, the question may well be how to make sure that the KPIs presented truly support the employee's performance. Ironically, visually rich cartographic dashboards may be distracting to employee performance, even if they present lots of data people think is relevant and even if they can be understood without difficulty. Unlike a map, where data often represents something as lifeless and impersonal as geological formations, dashboards represent data that is anything but impersonal: it reflects the incentives employees are given and how they are rated.

Dashboards are a good example of the importance of understanding user needs in context, moving beyond static understanding to explore a user's lifeworld. A recent article in the <em>Financial Times</em> discussed recent academic and investment research on the paradox of incentives. It notes: "It seems that incentives work well when the subject is given a repetitive, mindless task to perform, rather like the piece rates that applied in manufacturing plants. But when more thought is involved, incentives may dent performance. Our minds start to worry about the incentives, rather than the task at hand. We choke under pressure."

What this research suggests is that with complex knowledge work, where there are many factors to juggle mentally, the more we think about the multiple KPIs displayed on a dashboard, the more we are distracted from completing the task at hand. Here, our cognitive make-up collides with the business imperative to measure and monitor everything. This conflict can be resolved in different ways. Perhaps employees are being overloaded with KPIs, and so they need fewer of them, and therefore a simpler dashboard. Perhaps they do need to measure and monitor a multitude of data factors, but they should not be rated on all these factors. We could have a sophisticated dashboard of enterprise data points that are not KPIs for an individual employee (a sketch of this separation follows below).

Dashboards promise to act as a window on performance, but they can influence performance as well as reflect it. Ideally employees shouldn't be thinking too much about the dashboard. Dashboards are tools that should blend into the background to support an employee's work, not sit in the foreground, screaming for attention.
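As a footnote to the resolution suggested above, here is a minimal sketch of what separating "data worth monitoring" from "data employees are rated on" might look like. Every metric name and the call-centre framing are hypothetical; the point is only that the two flags are independent.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    monitored: bool  # shown on the employee's dashboard
    rated: bool      # counts toward the employee's performance rating

# Hypothetical call-centre metrics: monitor many things, rate on few.
metrics = [
    Metric("calls_handled",         monitored=True,  rated=True),
    Metric("first_call_resolution", monitored=True,  rated=True),
    Metric("average_handle_time",   monitored=True,  rated=False),
    Metric("queue_length",          monitored=True,  rated=False),
    Metric("stock_price",           monitored=False, rated=False),
]

dashboard = [m.name for m in metrics if m.monitored]
scorecard = [m.name for m in metrics if m.rated]
print("display:", dashboard)
print("rate on:", scorecard)
```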
usability testing isn't dead, only summative testing is dead (17 July 2006)

I'm hearing much about the misuse of usability testing from some big names like Don Norman, who writes in the current <em>interactions</em> magazine that usability testing is no more than a minor activity of "catching bugs". While Norman maintains that usability testing is still necessary for clean-up purposes, he argues it shouldn't be used to "determine 'what users need'".

I have enormous respect for Norman, and love his recent contrarian views on User Centered Design, which contain many valuable insights. But on his point about user testing I think Norman is flat wrong, and out of touch with how usability testing has developed in recent years.

Norman, and a few other old-time professionals in the HCI world whom I've seen be critical of user testing, reflect a dated understanding of what user testing is. They equate user testing with the bug-tallying process of summative testing, a test often done at the end of the design and development process that gives a report card on how the application works for users. Large groups of test subjects would work through uniform test protocols. In HCI, summative testing used to be the holy grail of scientific respectability for the field, giving statistically measurable data on what works and what doesn't.

As a practitioner, I don't know anyone relying on summative testing to any extent -- for the very reasons Norman and others cite when criticizing it as "too late." But there is plenty of room for usability testing to inform design -- just don't do it at the end, or try to make it a scientific experiment. There is enormous confusion in the usability community because we sometimes discuss testing without being explicit about whether it is old-fashioned summative testing (largely a white elephant) or nimble and iterative formative testing. Both are usability testing, but formative testing is not simply about finding bugs and glitches. Formative testing can be a powerful tool for understanding user needs <em>and</em> preferences:

<ul>
<li>Formative user testing gives users something concrete to react to. While pre-design user research can be valuable to identify abstract user needs, concrete design alternatives provide the bridge to developing optimal solutions. You often can't know all the user requirements through pre-design research. It isn't always a matter of giving a design a pass/fail rating, but of exploring the effectiveness of alternatives that often involve trade-offs for the user (and perhaps the sponsor organization as well). Such formative testing is becoming increasingly common, but some people doing it seem reluctant to refer to it as usability testing (perhaps because usability testing sounds cumbersome, or because formative testing isn't as rigorous as "proper" usability testing is meant to be). Even fewer people refer to this testing as formative user testing -- it is a quasi-informal activity that is never given a proper name or due status.</li>
<li>Users aren't idiots, and often can successfully understand and use different design alternatives, though they might not like all the alternatives equally well. A small example from my work: do users initially want to see a list of billing items in chronological or reverse-chronological order? I can ask this question orally, but I get a much stronger indication of user preferences when I present alternative designs. Note that users could understand and use either one if compelled to, but it doesn't follow that they will bother to use it simply because it is usable.</li>
</ul>

It was clear to me at the recent UPA conference that formative testing needs its own identity. User testing is no longer a topic of active research in university computer science and psychology departments, as it was when Norman and other HCI pioneers crafted the academic framework from which the user centered design profession has grown. To the PhDs in HCI, the definition of user testing remains frozen in the 1980s. Even the standard textbooks on user testing date from the early 1990s, before formative testing emerged.

Formative testing has developed in the practitioner world in response to the inability of summative testing to cope with iterative design cycles. But there is no orthodoxy about how formative tests are done or evaluated. In many ways the lack of orthodoxy has been a blessing, as it has enabled formative testing to be responsive on projects and to grow creatively outside the straitjacket of scientific method. On the downside, because formative testing has developed on the margins of HCI orthodoxy, it hasn't received the recognition it deserves, and can be misunderstood by even big-name HCI gurus.

Many practicing UCD researchers and interaction designers consider statistical validity irrelevant to the value of testing. Testing is valuable because it offers insight, not because it offers data. User comments and stories about their behavior provide richer insights for design than bug-seeking data.
Sometimes it is unclear how strong an insight is, or whether we have uncovered everything we wanted to. In these cases it can be useful to find new methods to evaluate the robustness and completeness of the qualitative data arising from formative testing, and how to work with it in an agile, iterative setting. I was pleased to see the beginning of such a discussion of formative testing at the UPA conference last month. (For example, check out the boundary-pushing work of the team at Alias/Autodesk.) There is plenty to improve in current formative testing methods, but let's not throw the baby out with the bath water.

design for motivation (23 May 2006)

Why do some people put up with difficult-to-use products, while others give up quickly? Designers often dodge this question, instead putting the design in the spotlight: it either "works" or it doesn't. A design-oriented approach in effect absolves the individual of responsibility for how they get along with doing something. If users struggle, the design is too complex, or not fun, or somehow otherwise flawed.

Differences in user motivation are vastly under-emphasized in design research, as I realized last year when I tried, unsuccessfully, to <a href="http://michaelandrews.blogspot.com/2005/04/user-motivation-under-explored-issue.html">learn to wear contact lenses at age 43</a>. Millions of other people successfully learn to wear contacts, while I found contacts a ridiculously difficult product to use. To the extent people talk about differences in motivation to learn new products, it is typically through the un-insightful language of marketing, talking about early adopters and laggards. People are "segmented" along a mythical bell curve, but we never know why they end up where they are placed.

It may seem strange to suggest that user motivation is mysterious. Major corporations spend billions of research dollars trying to unlock the secrets of what makes us want to use a product. There are plenty of solid insights on why we buy products, but we still don't really understand our relationship to products: the reasons why we choose to use or not use them after purchase.

What is missing in grand approaches that confidently promise to "design compelling user experiences" is consideration of the real differences in people's <em>adoption of</em> and <em>adaptation to</em> a design. The "compelling experiences" approach places the design in the role of hypnotist: an expected behavior is induced automatically and predictably by the suggestion (the design). But we know that even with the simple behaviors commanded by hypnotists, only a minority are susceptible to the suggestion.

Psychologists agree there are two basic types of motivation: intrinsic and extrinsic. Intrinsic motivation relates to doing an activity for itself (no reward is offered; the activity is inherently enjoyable to the person), while extrinsic motivation relates to goals beyond those inherent in an activity, where rewards are offered as an inducement. Psychologists acknowledge that extrinsic rewards are very powerful motivators for inducing short term behavior. But for people chasing extrinsic rewards, "compliance" worsens over the longer term when compared with intrinsically motivated individuals.
Extrinsic rewards can become demotivating, offering decreasing psychic payoff over time, and can also become a distraction from the central task (people focus more on getting the reward than on the task itself).

A classic intrinsically motivating activity is rock climbing. You scale up a rock, you scale down a rock, and you have nothing to show for it, unless cut hands count. Why do people bother, one might ask? And how does one make money selling to rock climbers?

Significantly, when designers look to make something more motivating, they often look to add extrinsic rewards. We can see this in gaming, where the reward is reaching a new level. Gamers often treat the level they have reached as a bragging right. The accomplishment is getting the external validation of a new level. Gamers can even try to cheat the computer with insiders' shortcuts in order to progress faster. Once the treadmill of level advancement stops, interest in the game may be over.

Intrinsic motivation, in contrast, is more about the enlargement of activities through self-directed choice. The role of the designer in facilitating the user's ability to realize a richer, self-directed experience is that much more subtle, and the results less automatic. An example of such self-direction is what <a href="http://headrush.typepad.com/">Kathy Sierra</a> calls "superpowers": letting users do things they couldn't do before. There is a reward, but it is essentially the activity itself. We are assuming users really want to do these things for their own sake, rather than to boast about having the superpower. We see how once-exotic software capabilities lose their cool status once everyone can use them. Exclusivity is not an intrinsic motivator, even if it makes people feel superpowerful. But other tools, like wikis, are truly empowering, and making them simpler and more widely accessible is part of making them more rewarding.

Intrinsic motivation is an important concept because it challenges the assumption that <em>everyone</em> will be interested in doing something, if only given the proper inducements. In an informal and highly motivating presentation to a UPA gathering here in Wellington yesterday, Kathy Sierra talked about the role challenge plays in motivation. Drawing on Csikszentmihalyi's flow concept, the balancing of challenge and capabilities, Kathy spoke about how much she enjoys Sudoku. There are Sudoku puzzles for all levels of ability. One can master one level, and move to the next. But it doesn't follow that everyone loves a challenge. I'm left cold by logic puzzles (they remind me too much of school), even though I can do some and am challenged by many of them. My intrinsic motivation for doing logic puzzles isn't there.

Even intrinsic motivation isn't a single value. We can have stronger or weaker intrinsic motivation, or even, quite commonly, conflicted motivation. Few pursuits are free of trade-offs, and we are often torn by them, sometimes without even being consciously aware of which other desires undermine our commitment to a goal we are considering.

The table below is an attempt to estimate the effects design can have on various states of intrinsic motivation. In general, the possibility that design will deflate our motivation is stronger than its potential to supercharge our motivation.
As an example, an old wooden tennis racket might frustrate a novice tennis player, who might do well with a newer design that has a light carbon fiber frame and a bigger racket head. The wooden racket is demotivating, but the new racket is motivating only insofar as it facilitates a feeling of minimal competence. I don't think the improved design is causing more people to become interested in learning to play tennis, even if that is slightly easier today than it was a generation ago.

[Table: estimated effects of design on various states of intrinsic motivation]

Many factors involved in a user's intrinsic motivation lie outside the scope of a design, especially where user goals are more diffuse and involve personally constructed meanings. The technique of laddering, successively asking "why?" in response to personal statements about goals, can reveal that the correspondence between user tasks and broader life goals is rather tangled. A product may contribute to one goal, but is often not sufficient in itself to achieve that goal. At the same time, the product might even detract from other goals, by consuming time, money or emotional energy. With things so complex, it is small wonder that designers focus on the external rewards of a design -- promising popularity or prestige.

un-compelling (16 May 2006)

A certain fatigue sets in when the ear repeatedly hears something artificial, and the brain keeps trying to interpret it.

I'm tired of the phrase "compelling" used to describe interaction. It isn't simply an overworked cliche, it is a perversion of reality. I hear so many people in the user research and design space talk about designing "compelling user experiences," but I never hear actual users talk about wanting "compelling" experiences. They talk about finding interfaces fun, or interesting, or useful, but not "compelling." To speak of something as fun is to speak of it from a natural ego-centric perspective: I have fun. To speak of something as compelling is to speak of it from an object-centric perspective: I am compelled, by forces beyond my control.

My wariness is not simply a philosophical quibble. All this hype about compelling user experiences sounds like mindless corporate propaganda. Researchers and designers sound like zombies when talking about compelling experiences, and make users sound like zombies too. Let's strike the word from our speech, please.

customers, users and agile software (11 May 2006)

In an effort to become less ignorant of agile software approaches, I recently listened to a <a href="http://www.itconversations.com/shows/detail175.html">podcast with Alistair Cockburn</a> on the topic. What agile seems to do well is get users involved in shaping the functionality of a system. It can do this by offering a tangible prototype for users to react to. Too often, functionality is shaped entirely by business requirements documents, which are at a very high level.
Specific functional requirements that users might need can be overlooked in less iterative processes that emphasize the development of high level functional requirements, and if these needs are uncovered later in the process, they can be hard to "bolt on."

The podcast also sheds light on how agile approaches conceive of customers. Agile is indeed "customer focused", but customers are not entirely the same as what usability professionals refer to as users. Cockburn speaks of customer collaboration over customer negotiation. I wholeheartedly support these sentiments, which are good business practice as well as good design practice. But usability professionals would never speak about "user collaboration over user negotiation." Customers -- who include people who buy or commission software -- aren't necessarily users, and the terms customer and user cannot be used interchangeably. So talking about customers, whom Cockburn sees as being both "sponsors and end users", can sometimes obscure distinctions usability professionals would make between different end user profiles (abilities, attitudes, environments, etc.). Even if all end users have the same basic functional requirements (what the system does), it does not follow that they have the same user requirements (the way the system needs to do it to meet user needs).

The central tenet of usability is to test with a representative user population. Here I think usability testing and agile functional testing take widely different approaches. I have suspected that agile approaches lack the thoroughness of usability in seeking wide participation of users in the process, because the needs of functional testing (finding functional limitations) are fundamentally different from those of usability testing (finding interaction limitations). Usability professionals, even when using lightweight "discount" approaches, might need to test 5 people to get a meaningful understanding of interaction issues and problems that need refinement. Cockburn sees successful "access to a customer" under an agile scenario as being an "hour or two" a week -- considerably less than usability approaches require, and probably not involving as many individuals either. He also emphasizes the importance of access to "expert users." Experts are indeed useful for functional specifications, but not necessarily ideal for determining user requirements.

For those of us in the usability community, the podcast highlights some of the attitudes of the agile orientation, particularly the aversion to "process heavy" approaches. Notwithstanding the debate and diversity in the agile development community, agile developers generally embrace methodological minimalism and skepticism toward formalism, for example asking for a justification of the value of non-code-related activities such as documentation. Within agile, there is lively discussion over which activities are essential and which aren't. It remains to be seen whether agile software developers will consider user centered design just another "process heavy" activity.

talking usability in the meeting room (4 May 2006)

"We need to get a decision about this -- I'll book a meeting so we can get resolution on it."

In business, meetings are where action happens, where information is disclosed and debated and decisions are made.
Meetings allow many parties to be involved in discussion and decisions more efficiently than one-on-one discussions or, in many cases, group emails. Business culture gripes about the time meetings take (often simply a sign that people are too busy generally), adding to the pressure to make meetings productive. A meeting is not successful unless issues are closed out and next steps are agreed that move substantively beyond what was discussed in the meeting itself. Employees are drilled in this culture, taking courses on running meetings and on being effective participants in them. Meeting attendees who don't follow the code of conduct are admonished in front of their colleagues.

The culture of efficient business meetings doesn't transplant well to issues involving design and user needs. Because meetings are so deeply ingrained in the corporate mindset, organizations often view meetings as the best way to make decisions about the usability of designs. Although meetings have their place in a user centered design process, they can equally hinder a user centered solution.

A standard organizational behavior is to call a meeting to get "resolution" on an issue. If we aren't sure how to design something, let's call a meeting to get resolution on it. The assumption is that if you get a cross sample of stakeholders in a room together, they can have a rational discussion and reach a decision then and there. What is also assumed, sometimes without full consideration, is that everyone involved in the discussion has adequate knowledge and information to make a good decision.

The more finely-tuned the meeting process, the less obvious it is that meeting participants may not have sufficient information to decide usability issues. A carefully considered invitation list, a well structured agenda, and PowerPoint slides can give the appearance that decisions can be made. For many business issues this is in fact possible -- participants have the top-of-the-head knowledge necessary to respond to topics that arise in the meeting, so decisions can be made then. But for usability-related issues, the top-of-the-head knowledge of general business participants is generally inadequate (this would not necessarily be an issue for a more specialized group meeting, such as an in-house design team). The stakeholders at decision-making meetings are generally not the same people who will be using a system, and are not the right people to comment on user needs. And users aren't the right people to make final design decisions. Research and decision making are separate activities.

Some consultants have tried to adapt usability techniques to fit corporate meeting rituals, but I question the quality of information such techniques develop. The temptation is to borrow techniques familiar to corporate employees from decision making and training workshops, such as voting or role playing, and use them as the basis for making decisions about designs. One evaluation technique I have seen discussed involves a set of rules for different participants to pretend they are different kinds of users. The participants fill out worksheets, follow participation rules, then make decisions based on what they saw in the role play. I can imagine that participants, committed to doing a good job and spending effort to follow the process faithfully, would believe they are accomplishing something valuable at such a meeting.
But we can't expect Hal in marketing to know how a customer who lives or works in very different circumstances will behave. People struggle enough trying to recall how they do their own work on computers when they aren't sitting in front of one. Getting a surrogate to imagine how a mythical user does a task strains credibility even more.

So when are meetings useful in user centered design? In general, meetings are useful at the beginning and at the end, but not in the middle, of design research. In the beginning, meetings can be useful to explore issues early. At this point, we aren't making design decisions; we are just trying to get a picture of what issues might need detailed consideration, so top-of-the-head comments are useful. We'll verify this information later through contextual or behavioral research, or user testing. Research needs to be done outside of a meeting setting. Group workshops are fine, since unlike meetings their purpose is simply to elicit user data, not to make decisions. At the end of a design research phase, we have results to report at a meeting, which can provide a basis for decisions about next steps.

The pressure will always be on for instant answers to allow quick decisions. The seduction of meetings is that they collapse the answering and decision making processes, which need to be separate activities in usability. Business people are accustomed, once they show up at a meeting, to being rewarded with a decision at its conclusion. It is therefore vital to emphasize the difference between a workshop and a meeting, and not to agree to meetings unless there is sufficient user data to make design decisions.

But in addition to knowing what is not appropriate about meetings in usability, we need to appreciate why meetings are attractive to business people, and address the underlying needs that make them attractive. By combining discussion (the offering and analysis of information) with decision making, meetings can be fast, and they make clear the connections between the data and the decisions. This suggests we need to make data gathering and analysis as quick and clear as possible: responsive to decision making (gathering data to address the agenda of a scheduled meeting) and transparent (decisions reflect data that was widely understood by decision makers). These aren't new goals, but there's still plenty of room for improvement.

task oriented information architecture (3 May 2006)

Most discussion of information architecture relates to finding information. There are articles people want to read, or catalog items people want to browse. What receives less attention in information architecture is how to organize user interfaces to perform tasks, particularly tasks involving complex, drawn-out processes. After recently working on an enterprise application, I have concluded that task-oriented information architecture involves unique issues.

One thing that makes task oriented information architecture a challenge is that it can be difficult to develop a solid understanding of how users do their tasks, particularly if there is a lot of task variety.
Consider the diagram below.

[Diagram: alternative task-oriented IA structures, organizing screens by object or by process]

In creating an IA, the central question is whether to organize screens around "objects" (concrete entities such as customers or products), with screens for subordinate processes hanging off each object. For example, first locate object 1, then do process 1 for object 1, then process 2, etc. Or is the better way to organize screens first around processes, with subordinate screens relating to objects available while in the midst of a process?

For a one-off task, the question can be answered fairly easily. One can structure the task so that there is only one object and several process steps. Or, less commonly, we fix the process, and the processing of all objects must be done at the same time. If you have more than one object or more than one process in these cases, you need to repeat the whole cycle.

A task oriented IA becomes much more complicated when you don't know in advance how many objects or processes might be involved. For example, if the IA needed to address the processing of travel-related information, we might need a certain elasticity in the IA to accommodate different scenarios. Some people will travel individually, others in groups (with several customers doing the same thing). Some people will have accounts involving several trips. Customer representatives may need to process numerous items for unrelated customers at the same time. We don't know how many separate products (tickets, hotels) will be associated with any single journey. Groups may start with the same itinerary, then break apart after a conference ends and continue on separate itineraries. While my IA experience was not related to travel (I've tried to choose an example that might be easier to understand than what I worked on), I hope these examples give a flavor of the kinds of issues that can arise.

My experience suggests that traditional card sorting methods are not ideal for task oriented IA. The reason is that labels are often not meaningful outside a scenario. What makes sense when there is just one customer makes less sense when there is a group of customers. For scenarios to be really useful, one needs to cover off all the likely scenarios, not just the major ones.

When experimenting with task-oriented IA, here are some issues to keep in mind (a sketch contrasting the two organizing structures follows this list):

<ul>
<li><strong>Activity structure</strong>. Are tasks batched around a group of items, or around a sequence of events? Interestingly, the same mix of objects and processes may be handled in different ways, depending on who is doing the work and what the context is. How a customer representative processes a form will differ depending on whether the customer dropped in to his local branch to deliver it, or whether it was mailed in and is sitting on a stack with other people's forms.</li>
<li><strong>Inputs</strong>. How is information received and entered? Does it come in a clump, or in dribbles? Are inputs calendar-driven, so you can predict when you will receive them, or can they arrive at any time?</li>
<li><strong>Time dimension</strong>. Are tasks done in parallel on the same timeline, or do they run on divergent timelines? Are sequences of activities fixed or flexible? Sometimes activities start at the same time and are processed concurrently, though the services themselves involve different durations.</li>
</ul>
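Here is the sketch promised above: the same hypothetical travel screens arranged object-first and process-first. All names are invented, and neither tree reflects any real system; the point is how the same work regroups under each structure.

```python
# Two hypothetical IA trees for the same travel tasks. Names invented.

# Object-first: locate the object, then pick a process for it.
object_first = {
    "customer:jones": ["book_ticket", "amend_itinerary", "issue_refund"],
    "customer:smith": ["book_ticket", "amend_itinerary"],
}

# Process-first: pick the process, then select objects within it.
process_first = {
    "book_ticket":     ["customer:jones", "customer:smith"],
    "amend_itinerary": ["customer:jones", "customer:smith"],
    "issue_refund":    ["customer:jones"],
}

# A group scenario strains the object-first tree: amending the shared
# itinerary of many customers means repeating the object cycle once per
# customer, whereas a process-first screen can batch all of them.
group = [obj for obj, procs in object_first.items()
         if "amend_itinerary" in procs]
print(group)  # ['customer:jones', 'customer:smith']
```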
user testing and agile methods (2 May 2006)

I'm looking forward to discussion at next month's UPA conference on incorporating usability into agile software development methods (BTW: if you'll be in Denver and would like to meet up, drop me a line).

One key question that needs a good discussion is how much user testing is needed in an agile development. How much testing is enough? Is testing always necessary?

These questions may seem a matter of opinion at the moment, but there is no reason why the usability community can't develop some data to answer them concretely.

At least one prominent advocate of agile usability believes that traditional user centered design methods involve an <em>over-reliance</em> on user testing.

I am concerned that agile software development can result in an <em>under-reliance</em> on user testing. By under-reliance, I mean that flaws in software design that affect users go undetected because they weren't tested for during the development process.

While I am interested in agile methods, I can't claim special expertise. Insofar as I understand agile usability approaches, there is no unified approach people commonly follow. Some process-oriented methods have been developed, as well as numerous improvisational attempts to find a balance that works. From what I have gleaned, these different approaches have not been explicit about how much user testing to conduct, under what circumstances to conduct it, and at what point. Some have suggested that successful results are possible with agile methods with less reliance on user testing, because the design process is more robust, so problems are resolved before users ever see them. Others don't make this claim, though they admit that agile methods typically involve less user testing simply because the timelines and project structure limit the testing element.

What the usability community needs to know is whether doing a more lightweight version of user testing under an agile process is a net benefit, or simply a sacrifice to accommodate usability to the agile framework. Obviously, if there is no need for extensive usability testing in an agile process, and quality for users is as good as if more extensive testing were done, this is a big positive.

I'd like to see some comparisons of agile processes with limited user testing against those with more extensive user testing. If agile methods can produce better designs that need little user testing, there will be few changes to the original design as a result of user testing. We would also expect that a very small sample of users would be adequate to catch any issues, so that larger numbers of test subjects would yield no new information. Once changes have been made to the design following the first round of user tests, any subsequent test would yield no further information.
After one iteration, making any further changes to the design would be superfluous.

Usability has a strong empirical tradition, where we examine and debate topics vital to our effectiveness, such as the optimal number of test subjects. Perhaps we can start developing some data about the effectiveness of testing within agile methods. We need to move beyond talking about over-reliance and under-reliance on user testing without clear criteria, and without concrete data measuring the achievement of those criteria.
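One well-known starting point for such data is the problem-discovery model associated with Nielsen and Landauer, which estimates the share of usability problems found by n test users as 1 - (1 - L)^n, where L is the probability that a single user exposes a given problem (roughly 0.31 in their published data). A quick sketch, assuming that figure:

```python
# Expected share of usability problems found after n test users,
# per the Nielsen/Landauer discovery model: 1 - (1 - L)**n.
L = 0.31  # per-user discovery rate reported in their studies

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users -> {found:.0%} of problems found")
```

On this curve, five users already find roughly 84% of problems, which is the classic argument for small formative rounds; the same curve makes visible what a one- or two-user agile round would leave undetected. Whether L is anywhere near 0.31 for a given agile project is exactly the kind of empirical question raised above.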
what design can learn from health research (30 April 2006)

Interdisciplinary teams can offer more perspectives on an issue, and more insights for creating a design. Fortunately the culture wars between "creatives" and usability engineers during the dot-com era are mostly forgotten now. But tensions remain below the surface, and interdisciplinary teams often involve more horse trading than a single collective understanding. The reason for this is that true interdisciplinary research involves acknowledging what one doesn't have an answer to.

I was recently reading a book on health research and recognized some of the same opposing attitudes that affect the design community. To simplify, there are two camps: the "holists", who see health as a difficult-to-articulate but complex interaction of mental, social and physical processes; and the "hard scientists", who believe only in hard data that is unambiguous. The holists deride the hard scientists for narrow-minded "reductionism," while the hard scientists dismiss the holists as given to woolly-minded New Age thinking.

What seems to be happening in at least some health research is the formation of real interdisciplinary research. Whereas previously anthropologists and immunologists both studied disease, they did so independently, without consulting each other's work. Now it is common for social and behavioral scientists to work together with biological scientists to study the interplay of biological and non-biological factors in a certain health issue. The upshot is that the biologists often find that reality is more complex than previously thought, that strict genetic and biological factors don't explain certain variations. The non-biologists find reality more complex as well: folk wisdom is only partly accurate, and their intuitions about the behavioral side of health effects cannot easily be generalized to the wider population.

In the design world, creatives worry about the reductionism of usability missing the big picture. Usability engineers worry that the loosey-goosey decision making of creatives plays roulette with design. What interdisciplinary design can highlight, and attempt to overcome, are the limitations of both the intuitive and data-driven approaches. Intuitives can recognize many patterns in "soft data" that are difficult for hard data methods to find. Hard data can miss nuances because it is too aggregated, or is looking at an irrelevant or lagging variable. But intuitive insights can be wrong as well, either over-generalizing, or even becoming a false dogma because they sound like common sense when the reality is counter-intuitive.

Personally, I look forward to interdisciplinary design opening up thinking for all people involved in interaction design. Too much faith is placed in best practice rules and methods, or in data collection and decomposition. Too much hope is placed on inspiration as the divine source of good design.

hotel patrons: users and customers (29 April 2006)

I'm back from a week's stay at a "deluxe" hotel in Sydney, right in the financial district. Very nice place, full of smiling staff and polished marble. The hotel even boasted it was voted one of the top 25 hotels in the Asia-Pacific. Clearly, customer experience is a paramount concern for this hotel.

But for all the concern about customer experience, there seemed to be little concern for user experience. On arriving, I looked around for the standard hotel portfolio binder explaining available services, when they were accessible and what the charges were. Unable to find such a binder, I noticed the serious looking remote control by the television, and thought "this information must be all online." I turned on the television and started surfing. It was a struggle to bypass the pay-per-view movie selections. I couldn't figure out how to go back, and while looking for the hotel information I got sucked into various infomercials touting the hotel chain's other hotels in faraway places. After a seemingly random traversal through unrelated screens and meaningless menu option labels, the only piece of hotel-related information I discovered was a set of stern instructions on what to do in case of a fire. Not exactly where I would expect to look for that information, though happily the hotel marble was not ablaze.

We had to ring the concierge to get the information we sought, but that wasn't simple either. There are two reasons I'd rather be a "user" dealing with an interactive device than a "customer" dealing with a human. First, the human encounter requires certain nice formalities that can be irrelevant to the task at hand, such as when the well-trained hotel staff solicitously enquire how I'm enjoying my stay, even though I've just arrived and am simply trying to get some information. Second, it can sometimes be difficult to articulate what you want; it is easier to scan for and recognize it. The conversation about the folder/binder thing with hotel information (do these things have a name?) did not quickly produce the required information. The staff kept asking exactly what information we wanted, while what we wanted was general information we would recognize as useful once we were aware of it. Our ever-attentive hotel staff attempted to satisfy us by giving us the hotel's corporate newsletter, which had no information relevant to us at all. The confusion was finally resolved when the hotel staff realized that we didn't have the mysterious binder of information in our room as we should have. There was a binder, it had the information we wanted, it just wasn't in our room.

The incident was hardly traumatic, but it was amusing, considering the enormous stock hotels place in the customer experience. While my Sydney hotel focused on the personal touch, it had a klutzy interactive TV that took several minutes to download the balance of one's room charges.

Hotels can ironically misunderstand the needs of their patrons on account of their people-oriented systems of delivering services.
I don't mean to suggest that patrons don't want personal attention for such matters as getting restaurant recommendations and bookings, or theater tickets. But people, apart from the desperately lonely, don't want all encounters to be personal, and hotels don't recognize this.

Labor economist Robert Reich notes he prefers ATMs to bank tellers. He admits what many feel deep down: "I'd prefer to save my scarce social energies for more important encounters." I think some hotel functions fall into the same category. But while ATMs are famous for being simple to use, hotel IT services lack such distinction. Some hotels have embraced wireless technology, but many, including big name outfits in major global business centers, haven't even grasped the importance of having user-friendly information technology available for patrons. Far too many punish patrons for wanting IT access, viewing it as just another way to gouge customers. Hotel business center fees can make the hotel spa services look like a bargain in comparison. My Sydney hotel charged A$36 an hour for internet access. As far as I'm concerned, that's like charging $36 for a newspaper.

monster mash: the opportunity and its problems (6 April 2006)

It seems I am reading more and more about "mashed" applications: amalgamations of different applications, wrapped together by a savvy integrator. The concept isn't new, though it has recently become common, and seems poised to explode. Developments in software are allowing easier integration of modules designed by different parties: often parties who would never have imagined their respective children playing together. It can result in marvelous creative fusion, but it also poses some unique challenges for the user experience.

Mashed applications bring "hacks" (web APIs, RSS syndication) to the masses. Until recently, integration has been difficult. In a recently completed dissertation, <a href="http://ethesis.helsinki.fi/julkaisut/mat/tieto/pg/myller/">Mika Myller</a> writes: "the challenging usability problem of digital environment of everyday life is that people are forced to act as 'systems integrator'. However, from our point of view the problem is not that people have to integrate products but that they cannot or they have to 'integrate' the products most of the time they are using them because there are not enough possibilities (e.g. open interfaces) for people or third parties easily to compose independent products to systems of systems."

Mashed applications empower individuals by integrating different knowledge, sometimes in ways not previously imagined. Fantastic. But such a concoction can have a life of its own, and without supervision, cause confusion and disappointment.

<a href="http://www.cs.ncl.ac.uk/research/pubs/articles/abstract.php?id=462">Panayiotis Periorellis</a> notes that mashed applications, known formally as a "system of systems," can be unstable. Consider the case of a travel website, which brings together hotel, airline and insurance offerings. One can integrate systems from different sources, but the goals of these sources differ, and can potentially change without notice. Something as simple as the length of notice required to cancel a reservation can differ.
From the users' point of view, they are dealing with a single entity, the travel website, and expect a predictable and uniform experience. But the single entity can be a mirage.<br /><br />I expect the usability issues arising from systems of systems will become increasingly important in the future. Mashed applications offer a lot, but will need to deliver what they promise.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1143927873206882512006-04-02T09:44:00.000+12:002006-04-02T11:12:32.190+12:00remote UCD and the offshore factorLooking over the latest CHI <em>Interactions</em> on Offshoring, I ask myself: Is usability about people, or data and specifications? That is perhaps the central question when looking at how global outsourcing might affect usability over the next five or ten years.<br /><br />Princeton economist Alan Blinder believes the only jobs immune from offshoring are those where hands-on or face-to-face contact is essential. Many standardized jobs can nearly as easily be done offshore by people following detailed predictable procedures. Onshore jobs will be "in the delivery of services where personal presence is either imperative or highly beneficial. Thus, the U.S. workforce of the future will likely have more divorce lawyers and fewer attorneys who write routine contracts." (see, for example, <a href="http://www.washingtonpost.com/wp-dyn/content/article/2006/03/21/AR2006032101133.html">Will Your Job Survive?</a>)<br /><br />If one sees usability primarily as the collection and analysis of quantifiable data on user behavior, outsourcing these tasks seems possible, given adequate infrastructure. There are numerous firms selling click-stream solutions to track user behavior, and VPN technologies are sure to improve to allow better self-administered user tests. There are even a few companies developing remote elicitation tools to collect data on user wants or mental models, to provide some raw data to shape new designs.<br /><br />For "mature" user interfaces, where a system of specifications has been defined extensively, offshore designers can design variants without difficulty. Modularity in UIs is good design practice, and makes it easy for third parties to create new UIs consistent with existing ones.<br /><br />There are inexorable pressures on user interface design to develop practices that yield more predictability, reusability, and speed in the production of user interfaces. These pressures are driving the creation of in-house corporate, and industry-wide, standards. And standards are the lifeblood of the outsourcing industry. If a process can be standardized, it can be outsourced.<br /><br />Despite the pressures to define standards, there are many, many things about UI design that remain, and will likely remain, messy. Usability professionals are like divorce lawyers, and UCD practitioners like psychotherapists soothing the traumatized divorcee. People want to be happy, and formulaic, standardized responses will not give them the satisfaction they seek as they embark on a new, happier life with a new technology.<br /><br />Telecommunications doesn't seem likely to displace the face-to-face communication needed to understand the <em>why</em> of an issue. Self-administered questionnaires and remote discussions are hardly a robust source for insight. Innovation is a counterbalance to standardization. 
Innovation can be augmented by telecommunications, but face-to-face discussion seems vital.<br /><br />While I fully expect an offshore aspect of UCD to develop, I also believe that context is too crucial for offshore usability to become a full-fledged alternative to onsite usability.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1141458494213122372006-03-04T20:23:00.000+13:002006-03-06T11:32:12.956+13:00help! usability in need of information architectureFor a number of months I have been involved in a project designing user interfaces for a new banking system. A big challenge has been to develop a design language to handle diverse processes, such that patterns can be standardized (and hence learnable), while also being flexible enough to handle very complex, and ad hoc, situations.<br /><br />I have been mining the Internet for UI examples to leverage. As I have sought every last permutation of UI design patterns, looking for the best ideas for my project, I have been struck by how inconsistent professionals in the UI design world are in the terms they use to describe widgets.<br /><br />If anyone should understand the value of consistent, meaningful information architecture, it should be UI designers. But I find little evidence they do, at least when it comes to the terms they use to discuss widgets. You might recognize the widget when you see it, but try to Google it. What term do you use?<br /><br />We don't even have a standard term for a drop-down list (a pick list? drop box?) What is a list box? Can it include tick boxes (check boxes)?<br /><br />Some of these terminology issues are legacies of Microsoft versus Apple. The widget terminology wars continue, especially as new widgets are created in the Rich Internet world, and their authors seek to give these subtle variations special names. Is the toggle in a Windows dialog box the same as an Ajax show/hide toggle? Or is the status bar of a web app the same as the task bar of a client application?<br /><br />I don't want binding advice, but would appreciate some movement toward a consensus.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com1tag:blogger.com,1999:blog-8981782.post-1140513461546827532006-02-21T21:49:00.000+13:002006-02-21T22:41:08.590+13:00having something to formatMultimedia is the poster child of software development. We seem obsessed with making media "richer": adding interactive graphics, data visualization, video, sound, ticker tape updates and entertainment-like controls. Apart from some not terribly useful efforts in "experimental typography," old-fashioned words seem to get ignored.<br /><br />Most text-related software seems devoted to navigating text, extracting concepts from text, and perhaps extracting snippets of text into an information manager. However, little attention these days goes to tools for creating text documents. I find this situation unsatisfying, as I believe existing text creation tools are wanting.<br /><br />Microsoft Word does many things, but it does not necessarily do all of them well. Every competitor I've seen simply tries to copy Word, not to better it. Word experienced some genuine innovation in the 1990s, adding features like auto summary, but it has been stagnant since then.<br /><br />One of the worst features of Word is its outlining capabilities. 
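<br /><br />Word's model, and that of most of its successors, amounts to a collapsible hierarchy. A minimal Python sketch of that hide-and-show idea (the node structure and sample content are invented for illustration):<br /><pre>
# A minimal sketch of the hide-and-show outline model: nodes with
# children and a collapsed flag, flattened for display.

class Node:
    def __init__(self, text, children=None, collapsed=False):
        self.text = text
        self.children = children or []
        self.collapsed = collapsed

def visible_lines(node, depth=0):
    # Yield each visible heading; skip the descendants of collapsed nodes.
    yield "  " * depth + node.text
    if not node.collapsed:
        for child in node.children:
            yield from visible_lines(child, depth + 1)

outline = Node("Report", [
    Node("Findings", [Node("Detail A"), Node("Detail B")], collapsed=True),
    Node("Recommendations"),
])

print("\n".join(visible_lines(outline)))
# Prints Report, Findings, Recommendations; the details stay hidden.
</pre><br />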
In the DOS era, there were several outlining programs that were interesting, but they never gained traction on Windows, as Word came to dominate word processing. Word's outlining is just such a basic hide-and-show hierarchy, and a clumsy one at that. There are many other possibilities for outlining.<br /><br />I like some features of the Mac-based OmniOutliner program, particularly the ability to add columns such as tick boxes and numeric fields, as in a spreadsheet. But OmniOutliner remains a hide-and-show program.<br /><br />The most interesting outlining program I have found is designed for lawyers. Developed by CaseSoft, <a href="http://www.casesoft.com/notemap/index.shtml">NoteMap</a> has several nifty features. It allows gathering non-contiguous items, and putting them in another branch of the outline. This is very useful for connecting thoughts that have not thus far been related to each other. NoteMap has better annotation and formatting capabilities than many outliners, plus a basic sort facility (though more could be done with sorting.) But more important than its features, it offers smooth performance, where Word feels shaky.<br /><br />It seems strange to me that something as basic as outlining receives so little attention in word processors. Too much attention in word processors is placed on formatting, and not enough on how users draft their thoughts.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1139807984376746782006-02-13T17:54:00.000+13:002006-02-13T19:23:06.690+13:00the shrinking world of customized softwareWhat has become of customized software -- software designed to an organization's or group's unique needs? I have <a href="http://michaelandrews.blogspot.com/2005/09/does-best-practice-make-us-zealots.html">previously argued </a>that the shift from custom built to off-the-shelf software has largely sidelined traditional user centered design approaches premised on bottom-up design. I am coming to realize bottom-up design is under more pressure than ever, thanks to the rise of "on demand" web applications.<br /><br />Businesses have largely given up on customized software, because they deem it too expensive. They switched to homogeneous off-the-shelf software, with ready-made interchangeable components. Much of this software was sold as kits, which would get a modest customization done by a systems integration house. The integrators made far more money than the kit vendors. Various forms of software customization and service could represent $7 for every $1 spent on the actual software license. Add to that the aggravating "plumbing" problems that the systems integrators seemed to find, and often never completely resolved, and it is no surprise that corporate customers have sought greater simplicity from their software spending.<br /><br />Vendors have responded with "on-demand" software solutions delivered over the web. No need to hire a systems integrator, we'll give you a complete package that meets all your needs, vendors promise. A good example is Salesforce.com, which offers a "customer relationship management" (CRM) system remotely delivered via the web. The solution is cheaper for the customer, and simpler too. Who could complain about that?<br /><br />The drive to reduce costs and complexity is understandable. But businesses are often suckered by the myopic logic of software vendors, especially the new web application vendors. 
They speak of "cost of ownership", but restrict their definition of costs to only those items that appear in the IT budget. Now, IT budgets are considerable, but they generally aren't the predominant expense in service companies: employees are. How IT costs affect labor productivity is never addressed.<br /><br />On-demand software pretends that business processes are so uniform and standard that you don't need to customize anything. Unfortunately, things are a bit more complicated than that.<br /><br />If any business process could be successfully supported by uncustomized software it ought to be CRM. Sales is hardly an intellectually complex activity. But CRM has been a big failure, even after three generations of trying to get it right. I recently read a roundtable discussion by CRM vendors and IT analysts, and all admitted usability is still a massive problem. Sales people don't feel CRM supports their needs -- it is just a monitoring tool for the benefit of management.<br /><br />Selling involves personal style, something one-size-fits-all software doesn't accommodate well. Forcing users to follow a set of rigid procedures doesn't translate into helping them make more sales.<br /><br />What companies need is the ability to customize -- in a meaningful way. Yes, customization was expensive and fraught with technical glitches. But the problem wasn't the concept of customization, but rather what was required to achieve it. Too much focus and energy went into solving plumbing problems, not user problems.<br /><br />On-demand vendors need to offer easy-to-use tools to allow customization of their offering. This customization needs to be fundamental, not cosmetic. Real customization isn't just about the UI, it is about the process of using the application in conjunction with one's daily work. Understanding real usage processes is gained through contextual research across a range of potential users. One size doesn't work. The task is to learn just how many different sizes are needed.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1139290633795392432006-02-07T18:30:00.000+13:002006-02-15T19:26:45.413+13:00innate and learned interactive behaviorsThe gold standard for social science is research that is non-trivial, non-obvious, and widely repeatable. User research sometimes yields seismic discoveries, but small discoveries -- or even ambiguous ones -- are more the norm. But it would be wrong to conclude that because everyday user research yields only mundane discoveries, we know the major things we need to know, or that pedestrian user research simply isn't worth the effort.<br /><br />Okay, few people are so bold as to argue user research is <span style="font-style: italic;">counterproductive.</span> But the concept of user research as a routine aspect of usable design is being questioned from many quarters, with alternative concepts sometimes promoted as being more time- or cost-effective. I see three often-mooted ideas that can imply routine, classic empirical user research isn't necessary. One is that smart consultants already know the answers -- only dumb ones don't. Another is that there are newer, better theories that deliver the answers. 
The third is that there are better methods to deliver the answers.<br /><br />The first skepticism toward user research I will call "charismatic usability." Advisors and consultants can take so much satisfaction in what they know, they can lose sight of what they don't know. With usability charismatics, attention shifts from the credibility of the conclusion to the "credibility" of the messenger. Perhaps the consultant has secret knowledge, gained through previous work with very select clients, that is not known in the wider usability community. We also see this fantasy played out in scenarios imagining that "user experience" evangelism will storm the board room and become the pet interest of the CEO. Charismatic messages appeal to individuals and organizations in crisis, and charismatic seekers admit a desperation, as well analyzed by Harvard business professor Rakesh Khurana in <em>Searching for a Corporate Savior: the Irrational Quest for Charismatic CEOs. </em> The lovefest is often disappointing, with the charismatic expert over-promising, and under-delivering, a solution to a problem that is bigger than any single personality can solve. That you can convince a client she has a problem doesn't necessarily endow you with the capability to solve it.<br /><br />Other people doubt that user centered design, which dates from the 1980s, is still a viable theory, and believe it needs to be replaced with something newer. Some believe a more complex theory is what is needed, perhaps activity theory. As a theory, UCD has never been coherent, and traditional cognitive science approaches don't offer many of the answers sought. While activity theory involves user research, it seems to have priorities reversed: in AT, theory drives user research, instead of user research driving development of theory. Like most Marxist-derived "theory", AT is not really a theory that can be proven, but simply a framework that guides one's focus of attention, for better or worse. How much value can be found by deciphering impenetrable jargon about the "zone of proximal development" or Hegelian dialectics is open to question. Researchers get excited by the phenomenology of artifact-mediated activities -- say, how office workers use a magnetic white board -- while in offices across the globe, artifacts themselves are disappearing into the ether. There is less and less physical manipulation to watch and theorize about. Theory isn't always in sync with what is happening on the ground.<br /><br />Another approach might be called the "re-engineering" of usability: obliterate all the unnecessary steps. Some imagine that definitive answers can be found "agilely" through quick polling techniques: just ask a few people, and your problem dissolves. So-called "discount usability" -- a concept that has merit when used judiciously -- is rapidly degenerating into a wholesale dumbing-down of usability. The technique of testing five people -- plausible only under highly understood circumstances -- has collapsed into testing "3-5" people, soon to be "1-5" people, promising answers to all questions, irrespective of how many facets the problem has.<br /><br />If user research, which for me includes old-fashioned usability testing, is losing its mojo, how far can we rely on our pre-existing knowledge of users? To simplify, I propose we consider user behaviors of two kinds: innate behaviors, and learned behaviors. 
Both are valuable, but both are limited, for different reasons.<br /><br />Innate behaviors are essentially biological in character. Classical ergonomics, focused on the physical dimension of activity, looks at innate behavior: how easily people can grip a knob, or point a mouse. Innate behavior exists in the mental realm as well: the psychology of perception describes innate behaviors. Memory and attention functions are often innate, more a function of intrinsic capabilities than previous experiences. Generally speaking, people vary in their innate behavior by <em>degree</em> of ability, rather than by <em>kind</em> of ability (the obvious exception is for persons with a disability -- lacking an ability that exists in the general population.) If there are wide differences in a behavior, such that some people can be successful accomplishing a task while others cannot, it seems unlikely the behavior is innate (an exception being if the performance required, or the person involved, is at one or the other end of a bell curve distribution.)<br /><br />Ergonomics and empirical cognitive science focus extensively on innate behavior. It is important to recognize that the number of behaviors that are innate is small in comparison to all observable human behavior. Still, these behaviors are often fundamental, and there is plenty to be learned about them. What is most interesting about innate behaviors is understanding the range of performance, especially the upper limits of what people can do. I am very excited by recent work in the field of "cognitive systems engineering" looking at information overload. This research is greatly extending, and qualifying, research from half a century ago on short-term memory. Information overload is one of the most vital issues confronting design today, but we are only just starting to understand its complex dynamics outside simple lab experiments.<br /><br />Designers can utilize findings on innate user behaviors knowing that if they were to do their own user tests, it would yield no new information. Unfortunately, the findings are sometimes not as useful as needed. The general finding may be valid, but the data does not address one's user group in adequate detail. Even basic anthropometric data is sometimes not available for certain population subgroups. Other times the finding does not fit the design problem appropriately: it is too general to predict how people would behave in a given circumstance, or the finding, while appearing robust (it has been repeated), has not been demonstrated with sufficiently diverse user groups and contexts to merit being considered a universal, innate behavior (the problem of how innate the behaviors of university psychology students really are). Truth is, studies on innate behavior are only of limited use to designers. Most design questions are not answered by data on innate human performance.<br /><br />Other user behavior is not biological but learned. This distinction is not clear cut, because one can learn to improve one's performance of an innate behavior. The general population is able to remember facts (an innate ability), though it may be possible to improve one's recall of facts through training or techniques (a learned ability.) Learning effects exist for both physical and mental performance. But a more serious problem is to treat a learned behavior as an innate one.<br /><br />Most HCI research addresses learned behaviors, even though it generally fails to stipulate that limitation. 
HCI research tends to report its results categorically, as in "subjects behaved" in such and such a manner, with very limited discussion of historical, contextual and conditional factors relating to the subjects. (We may simply learn that subjects were heavy Internet users and averaged 22 years of age. Sometimes we learn more about the computer equipment used in the experiment, which is often thoroughly dissected.) Sometimes we find cryptic findings in the HCI literature, stating some tentative finding about a highly particular, even artificial, activity, with recommendations that further study be considered (and sometimes only the investigator is sufficiently interested in the problem to do the further investigation.) Practitioners -- people who need to design things -- often can't use such information to any extent. Even with these limitations, HCI research<span style="font-style: italic;"> is</span> valuable, not least because it tracks what we don't know, and nudges us to learn more about it. Much HCI research consciously focuses on novel technologies we know little about. There are benefits of knowledge for knowledge's sake, and all practitioners owe their academic colleagues appreciation for investigating areas that don't have an obvious payoff. But exploratory research, especially involving bespoke and idiosyncratic design configurations, is difficult to generalize from.<br /><br />Two things are vexing about learned behaviors. First, the practicality of applying any finding about learned behavior is difficult, since the finding is highly susceptible to which user population you studied. Just because you found a strong user behavior among American college students doesn't mean you will find the same behavior with middle-aged Americans, or Egyptian college students, or some other user group. Second, learned behaviors change. People learn new behaviors, and old ones fall into disfavor. What is taken as gospel about the Web today will become idiosyncratic once Web 2.0 (or whatever it ultimately becomes known as) gains currency.<br /><br />Much of our knowledge about users is highly perishable. It might be correct today, but we can't be sure how much longer it will remain valid. I am skeptical of many findings done in the 1990s, simply because technology and user behavior have moved on since then. There are few eternal truths in an out-of-date HCI textbook. What remains useful are the methods of HCI (e.g., task analysis), not the specific findings of user trials.<br /><br />We have a human tendency to want to think through the implications of a finding, to generalize from it. This tendency may be even stronger among commercial researchers than academic ones. Commercial researchers can be less cautious in their generalization. If we encounter research data on something not previously studied, we want to treat it as the foundation of something of wide and lasting consequence, not as data limited to a particular group or a particular time.<br /><br />Over the past quarter century, starting with seminal research by Tversky and Kahneman, cognitive science has developed a clearer understanding of expert judgment. Humans have a strong and consistent tendency to be over-confident about their understanding of a situation, and their ability to predict outcomes. In HCI, we see these findings confirmed when competing expert usability teams arrive at widely different conclusions about what needs fixing. 
Indeed, one "strong" conclusion of HCI research, according to the strength-of-evidence index in the National Cancer Institute's usability research summary, is to <span style="font-style: italic;">avoid</span> relying heavily on expert reviews and cognitive walkthroughs, the very kinds of activities that dispense with user research.<br /><br />At least we have the routine feedback of user research to curb our temptation toward overconfidence. Let's hope.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1138387371696971392006-01-28T07:26:00.000+13:002006-01-28T12:08:56.966+13:00formulas and confusionEarlier this week Don Norman picked up on a comment I made on a discussion list and said I was confused about some fundamental things. Indeed I <em>can get</em> confused, and believe the confusion arises from our field's quest for universal solutions. The tidy advice of HCI that is out there glosses over a big variable on projects: what level of knowledge about user behavior is required to design a user interface.<br /><br />Let's arbitrarily divide user centered designs into two types: formulaic designs, and non-formulaic ones. Formulaic designs are the stock-in-trade of web agencies designing for the public -- things like news sites and online catalogs. They are ubiquitous, and generally all look and work the same, regardless of whose site it is. Generally people involved in such projects have a good idea about user needs even prior to starting: they are folks just like us, after all, and we have already talked to countless people like them for other similar projects. When we design for formulaic projects, we already have a good grasp of the solutions available to us. Designing an online shopping cart is not rocket science: it has been done countless times, users have expectations about how carts operate, and the work is mostly a matter of polishing the details.<br /><br />If you work exclusively on formulaic projects, the tidy advice of usability will serve you well. You can find rules on common issues such as how to display error messages, and find plug-and-play models for how to incorporate usability into the development cycle. I don't want to suggest that formulaic projects are trivial and unimportant, and certainly do not consider all consumer web projects as formulaic. My point is that if a project is addressing well-tilled ground, you have tools to deal with it. Confusion avoided.<br /><br />Now the rest of the interactive design universe is less than formulaic. The interaction domain might be novel, or complex, or highly specialized. Here the generic advice often falls short. HCI claims a scientific heritage based in cognitive psychology and computer science, and presumes there are universal truths that will be uncovered and will guide software development. And while I am happy to grasp and use any universal finding about human interactive behavior, the number of such findings is far smaller than the range of issues that confronts us. The reality is that for many issues, there is no universal user response, so there is no generic advice available.<br /><br />The inadequacy of usability "best practice" is illustrated by a simple example: what is best practice for using disabled buttons? A consultant posed this question on a discussion list recently, and I smiled with recognition at the problem. 
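<br /><br />To make the question concrete, here are two common and mutually contradictory treatments, sketched in Python with tkinter; the form, labels, and precondition are invented for illustration:<br /><pre>
import tkinter as tk

root = tk.Tk()

# Treatment A: disable Submit until a precondition is met. Users cannot
# make an invalid request, but get no hint why the button is greyed out.
agree_a = tk.BooleanVar()
submit_a = tk.Button(root, text="Submit (A)", state="disabled")

def on_toggle_a():
    submit_a.config(state="normal" if agree_a.get() else "disabled")

tk.Checkbutton(root, text="I agree (A)", variable=agree_a,
               command=on_toggle_a).pack()
submit_a.pack()

# Treatment B: leave Submit enabled, and explain the problem on click.
# The button always responds, but a failed attempt is possible.
agree_b = tk.BooleanVar()
feedback = tk.Label(root, text="")

def on_submit_b():
    if agree_b.get():
        feedback.config(text="Submitted.")
    else:
        feedback.config(text="Please tick 'I agree' first.")

tk.Checkbutton(root, text="I agree (B)", variable=agree_b).pack()
tk.Button(root, text="Submit (B)", command=on_submit_b).pack()
feedback.pack()

root.mainloop()
</pre><br />Both treatments are defensible, which is precisely the trouble.<br /><br />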
I have also struggled with the issue, and dutifully consulted my vast HCI library as well as my Google bookmarks, but found concrete advice on the issue wanting. Sure, there were a few references to how to do things very badly, some very abstract principles about not causing users grief, but not much on what to do so as to assure no one gets confused. Replies to the question on the discussion list yielded as many different, contradictory answers as there were respondents. All the answers were thoughtful, and plausible, but no one seemed able to give a definitive answer that could be applied by a designer in another context without worry. Users just don't have a common understanding and set of expectations about disabled buttons. One of the most common, plain-vanilla issues eludes development of best practice guidance.<br /><br />The collective body of usability evidence -- published test results demonstrating what is effective -- is often murky. Sometimes it resembles seesaw headlines about diet and nutrition benefits. This week tofu makes you smarter, last week tofu was ineffective in preventing cancer, next week tofu will heighten cholesterol [I jest]. What to do?<br /><br />If you are Don Norman, you tell UI designers to "Have faith in our ability to design well-crafted, understandable interfaces and procedures without continual testing." I don't understand why user testing is falling out of favor, because it seems more relevant than ever. True, for formulaic designs, usability testing is not the eye-opener it once was, which is exactly how it should be. But I sense the IT industry, broadly defined, may be creating diversity in interactive design faster than we are bringing interactive behavior into the fold of formulaic design. We extend online applications to a greater range of activities where no precedent exists, to smaller subsegments of users about whose behavior we know little, and to a growing range of platforms, mobile especially. And diversity is a self-reinforcing loop -- the more diverse systems become, the more variety users are exposed to, and the more their expectations of how systems behave diverge. Diversity is leading to a fragmentation of user expectations, which makes it a challenge to find hard-and-fast rules to guide our designs. Usability testing seems more needed than ever.<br /><br />In Norman's view, usability testing should be for debugging a design only. "UI and Beta testing are simply to find bugs, not to redesign." I think I understand his point -- that if a problem could have been avoided before testing, it should have been designed properly to begin with. But I am concerned that Norman offers an overly restrictive view of the role usability testing can play. The debugging view of usability testing implies that the design tested is formulaic. As I have argued, that is often not the case. Even if we are designing the ubiquitous shopping cart, there is no reason why we <em>have</em> to follow the standard formula that is out there. We can experiment, and breathe a bit of innovation into the design. Testing allows experimentation. Rather than grasp for certainty by sticking only to formulaic designs, which may work but might not be the best that can be offered, we can explore alternatives. For example, the addition of instant messaging help to online checkouts is an innovation that didn't happen without a few redesigns arising from usability tests. 
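<br /><br />The arithmetic behind such experimentation can be simple: split test participants across design variants and compare task completion. A toy Python sketch, with invented session data:<br /><pre>
# Compare task completion between two checkout variants observed in a
# usability test. The variant names and counts are invented.
from collections import Counter

sessions = [
    ("standard", True), ("standard", False), ("standard", True),
    ("experimental", True), ("experimental", True), ("experimental", False),
    # ... further observed sessions ...
]

completed = Counter(variant for variant, ok in sessions if ok)
total = Counter(variant for variant, _ in sessions)

for variant in total:
    rate = completed[variant] / total[variant]
    print(f"{variant}: {completed[variant]}/{total[variant]} completed ({rate:.0%})")
</pre><br />With samples this small the numbers only suggest where to look next, which is the point of an iterative test.<br /><br />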
AJAX promises great innovations for UIs, but there is no knowing how users will respond to a concept until the testing is done; bold experiments will often entail a series of redesigns until the design is right.<br /><br />Several years ago Nico Macdonald raised the concern that usability, by enforcing consistency, might stifle innovation in interaction design. That concern is valid if we believe there is only one way to design something. By enabling experimentation, usability testing actually fosters innovation. But more importantly, we need testing to explore our understanding of users, who can vary considerably in their expectations of how a system should behave.<br /><br />Projects obviously don't fall into a simple category of formulaic or otherwise. Projects might be mostly formulaic (dealing with domain issues largely understood) but still present significant unknown issues for designers (perhaps an AJAX widget.) Novel, complex and highly specialized projects will of course build on formulas where available and useful, but will need to explore more about user behavior. Whatever the mix, I think we need to get explicit about the borders between what we can safely assume about user behavior, and what we are assuming unsafely.<br /><br />Perhaps we need a central repository about what we as a discipline don't know, an issue tracker. A document on usability evidence produced by the US National Institutes of Health a few years ago included the concept of "strength of evidence." Contradictory evidence is not a bad thing, even if it is sometimes maddening. For some of these issues we may eventually be able to separate factors that explain these differences (user profile, intervening contextual factors, etc.) Results can change over time as well, as users learn new behaviors, and potentially even lose familiarity or patience with old ones. A central tracking system would require wide participation and a history before it would be useful, and for these reasons it might not be a viable solution. But even on a case-by-case basis, we need to let each other know what is unresolved in our field. For the sake of developing useful formulas, let's externalize our confusion.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1137049561515033812006-01-12T19:44:00.000+13:002006-01-12T21:42:53.376+13:00up the business value chainRestructuring business processes has been one of the most important -- if problematic -- activities in business strategy over the past decade. Reducing waste can cut costs and even enhance revenue, through increased responsiveness to market conditions. But poorly considered restructuring can be a nightmare.<br /><br />I don't pretend that usability holds all the answers to how businesses should structure their processes -- business is too complex for any one "formula" to work magic. That said, I would modestly suggest that usability can be an important resource in developing an effective implementation of the restructuring of business processes.<br /><br />Usability often is focused on the micro, rather than the macro. It might look at what individuals do, their specific tasks. Through task decomposition and task analysis, it seeks to optimize how the task can be done by people.<br /><br />Usability also looks at how groups of people coordinate tasks, that is, how they perform activities. 
Originally, these group tasks were focused on teams, who needed a common view and shared understanding of what they were trying to accomplish. But as the world has become more joined-up through the Internet, the team has become a more amorphous concept, less of a cohesive social unit. Groupware has given way to portals that anyone with a password can access. Even the distinction between employees and customers is getting blurred. When customers access a portal to track their package shipments, they see the same data as employees. Questions now arise: should they see the same view, or a custom view?<br /><br />Just as tasks can be simplified to reduce the time and number of steps needed for an individual, entire activities involving numerous people can be simplified. But usability is commonly associated only with optimizing tasks done by individuals or small groups. As a result, it is sometimes dismissed as irrelevant to restructuring larger processes. Sometimes user research is criticized as merely tweaking an existing process, rather than as enabling a radical new process that is much more efficient. I believe such an attitude is shortsighted.<br /><br />Most work on business process restructuring looks askance at people. Indeed, much process reengineering is aimed at reducing the numbers of people involved in a process in order to gain greater efficiencies. But too often a focus on process automation can lull strategists into ignoring the reality that people never disappear, they are sometimes simply marginalized.<br /><br />One of the most common ways businesses reengineer their processes is by outsourcing their activities to their own customers. This is more commonly referred to as "self service." Businesses get to reduce staff, and trumpet the fact they have automated, even if they have mostly shifted the annoying data entry responsibilities onto their customers. Customers, whether businesses or individuals, agree to this provided they are given sufficient incentive. The incentives vary, but at a minimum the burden can't be too great. In other words, it must be usable.<br /><br />Even when businesses don't outsource functions, usability plays a critical role in the effective implementation of a reengineered process. Consider a common target of process reengineering: eliminating "unnecessary" internal approvals. One approach is to streamline the process, to simplify. It can yield a faster process cycle, and make the process more transparent to employees, who understand the simplified process more easily. Many businesses have found that streamlining processes reduces costs enough to offset whatever benefits the prior approvals provided.<br /><br />The other approach to internal approvals is to automate them. The danger of eliminating them outright is that it can lead to looser standards, and more risk. Many businesses choose to continue to collect all the data, but to feed it to an automated decision program. Because a computer program acts on the data, it might be no slower than if the approval had been removed, and it reduces risk and improves decision-making precision. Only one small downside: it can make things cognitively complex, as employees need to decipher the decisions of the computer agent. Usability can potentially help untangle this problem, looking at how these decisions are presented to users. 
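<br /><br />For instance, a decision agent can be made to explain itself in the employee's language, so that its verdicts can be relayed to a customer. A minimal Python sketch; the rule, thresholds, and field names are invented:<br /><pre>
# An automated approval agent that returns its reasons along with its
# decision, instead of an opaque code. All rules and figures invented.

def assess_purchase(amount, credit_limit, overdue_invoices):
    reasons = []
    if amount > credit_limit:
        reasons.append(f"amount {amount} exceeds credit limit {credit_limit}")
    if overdue_invoices > 0:
        reasons.append(f"{overdue_invoices} invoice(s) overdue")
    approved = not reasons
    return {"approved": approved,
            "reasons": reasons or ["within policy"]}

print(assess_purchase(amount=12000, credit_limit=10000, overdue_invoices=1))
</pre><br />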
Perhaps better messages or visualization solutions can increase employee comprehension.<br /><br />However businesses choose to reduce procedural complexity, they must make their processes understandable to their employees -- and their customers. Employees need to be able to tell customers, who might be anyone outside the immediate process, what is going on. (Don't limit the concept of customer to individual consumers. Many companies have internal units that competitively bid for business from their own parent. It's an unforgiving world in business these days.) Companies look incompetent if employees have to say "it's somewhere in the computer system" or "the system rejected the request." The entire process must be comprehensible, that is, usable. Anything else is false economy: cost-effective in a spreadsheet model, but not sustainably effective.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com1tag:blogger.com,1999:blog-8981782.post-1136171858848245342006-01-02T16:00:00.000+13:002006-01-02T18:20:43.953+13:00is usability a functional or non-functional requirement?Outside of projects involving consumer gadgets or consumer websites, usability professionals often feel they work on the margins of software projects. True, there is growing recognition that usability is important in principle, but in practice, many project plans simply don't reflect a realistic role for UCD, in terms of budget (do many of us get 10% of the project?), timelines, or process. Unfortunately, other than to groan, most fixes focus on highly variable soft solutions such as improving communication between team members, and sharing one's perspective. Such approaches are laudable, but inadequate. Except for the smallest projects, improved communication will not accomplish much when the structure of the project has been cast by the project plan and room for addressing changes in scope has been eliminated.<br /><br />I believe a major reason UCD is on the outside looking in is a simple phrase most of us hardly even notice. The phrase is "functional requirement." Nearly all our colleagues -- business analysts, programmers, project managers, and business stakeholders -- are under the impression that usability is a nonfunctional requirement. They are both right and wrong. And while usability professionals did not create this confusion -- we don't even use the terms functional and nonfunctional -- we are at least partly responsible for confusing others about what is essential in what we do.<br /><br />Functional requirements are important for people involved with developing systems in many ways. In larger projects, functional requirements are articulated in a written specification. Once a specification is written, the scope to make changes has been reduced considerably. But what constitutes a functional requirement -- something so essential it must be done or else -- is open to considerable debate.<br /><br />I am not a professional programmer, so I can only offer a flavor of how programmers view functional requirements. Here are some distinctions others make between functional and non-functional requirements:<br /><ul><li><strong>Functional</strong>: quantitative tasks a system performs. <strong>Non-functional</strong>: how the system fits into its context.</li><li><strong>Functional</strong>: actions a system must <em>perform</em>, input-output behavior. 
<strong>Non-functional</strong>: system properties and constraints.</li><li><strong>Functional</strong>: system behavior expressed in Noun-Verb form. <strong>Non-functional</strong>: adverbial qualifiers of what the system does.</li><li><strong>Functional</strong>: what a system <em>must</em> perform, <em>what</em> a system does. <strong>Non-functional</strong>: <em>how</em> a system does it.</li></ul><p>Although the above distinctions differ, one element is clear: functional stuff looks important, while non-functional stuff looks a bit less so. </p><p>So what do our colleagues think about where usability fits? Generally, people concerned with functional requirements don't even address usability: it is not part of the functional requirements process. The closest explicit statement to that effect I can find comes from Leffingwell and Widrig in <em>Managing Software Requirements</em> (edited by the famous Booch/Jacobson/Rumbaugh), who give two pages to usability as a non-functional requirement. They complain about usability being a "fuzzy" notion that is hard to judge a system by. Given existing usability specifications, how do we know the system is performing the functions it needs to do? In some respects, I think that is a fair criticism. Too many usability requirements are fuzzy, and hard for others to respond to predictably.</p><p>Software requirements don't often specify UI behavior, which creates a host of problems. Agile methods, about which I wrote recently, offer a work-around by trying to remove documentation of requirements so that decisions about the UI aren't choked off by pre-existing functional requirements. Lucy Lockwood, codeveloper of usage centered design, argues that UI design is intrinsically associated with system behavior. "The best user interface design will offer little to users if crucial details are lost in implementation." She comes close to viewing usability as a functional requirement. While I agree that the UI is deeply affected by what the system can offer the user, I balk at her belief that programmers need to drive the UI. She says: "most detail decisions affecting product usability are made by the people writing the code." Moreover, "good interface design is closely tied to the programming that supports it. Usability is a function of both appearance and behavior, and behavior implies programming." She believes a week's training in UI for programmers is sufficient for most to develop usable, if not excellent, designs. I can't speak for her experiences offering such training, but my experience has been that most programmers are not that interested in UI design and would prefer for someone else to make these decisions. Lockwood cites a shortage of usability professionals as limiting their involvement in UI design, but I think a bigger problem is the lack of explicit requirements processes for UIs, especially how these requirements are formalized, reviewed and communicated to coders.</p><p>Part of the confusion about the role of usability in the requirements process stems from its elastic meaning. Sometimes people refer to usability as user needs, sometimes as UI design. User needs in turn can be needs that relate to specific functions of a system (such as specific scenarios of use that are discovered from contextual inquiry), and needs that apply to the system as a whole (will users rely on mice or keyboards, will users need to conduct wildcard searches or sort results)? 
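</p><p>One answer to the "fuzzy" complaint above is to state usability requirements in verifiable form, so they can be judged like any other requirement. A minimal Python sketch, with the requirement and the observed sessions invented:</p><pre>
# A sketch of a usability requirement stated in verifiable form:
# "a first-time user completes checkout in at most 6 steps and 120
# seconds." The threshold and the observed sessions are invented.

REQUIREMENT = {"max_steps": 6, "max_seconds": 120}

observed_sessions = [
    {"steps": 5, "seconds": 95},
    {"steps": 7, "seconds": 140},  # breaches the requirement
]

def meets_requirement(session):
    return (session["steps"] <= REQUIREMENT["max_steps"]
            and session["seconds"] <= REQUIREMENT["max_seconds"])

for i, session in enumerate(observed_sessions, 1):
    verdict = "pass" if meets_requirement(session) else "fail"
    print(f"session {i}: {verdict}")
</pre><p>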
User requirements of both kinds can feed into both the functional requirements (via the business requirements) and the UI design. When usability is viewed solely as a non-functional requirement (screen layout issues that are independent of the system behavior), then major problems can arise in assuring that the system does what it must do to satisfy user goals.</p><p>Getting usability considered as a functional requirement is an essential step to getting it built into the project plan, getting time and resources for front-end activities essential to the success of a project from the user's perspective. Sometimes even seemingly trivial requests, such as wanting a summary screen drawing together different pieces of information, will precipitate a functional change request, and be viewed with reluctance. In these cases, usability is viewed as a project spoiler, and is further marginalized.</p><p>Unfortunately, the more UCD professionals talk about "user experience", the more others view usability as mostly a matter of presentation, and as a non-functional requirement. This aspect of usability does exist, and is both important and non-trivial, even if non-functional. There is much that usability can do to improve things for users without tinkering with the underlying behavior of the system. But such surface usability has limits. We need to get wiser about communicating the specific impacts of UCD, how they affect other non-UCD activities, and how project plans need to be constructed to assure that UCD addresses important functional aspects of the system.</p>Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1135889696680954702005-12-30T09:44:00.000+13:002005-12-30T17:27:29.873+13:00agile usability by committeeYou know something is brewing when the end-of-year issues of both SIGCHI's <em>Interactions</em> and the UPA's <em>User Experience</em> have articles on so-called "agile usability." I welcome the exploration of new approaches, especially hearing about real-world experiences with these. Agile usability is an attempt to incorporate usability (sometimes loosely defined) into agile software development methods. It addresses a screaming problem in software development: the complete lack of an explicit role for UCD input in the standard frameworks of software development methods, particularly the "Rational Unified Process" that has such a stranglehold on development these days. IBM should be taken to court for promoting RUP as good practice when UCD at most is considered an optional bolt-on accessory (supply your own bolts and hope you can find the right size.)<br /><br />First the good news: agile software development methods are generally better than RUP in reflecting user needs. The bad news: agile is not that much better. Can fusing usability into agile programming make the product truly reflect user needs? The jury is still out, the experiment is still unfolding. But I am wary. Agile usability seems like a band-aid solution to a trauma wound.<br /><br />Agile programmers seem like cool people. They experiment, listen to others, and dislike stuffy paperwork. Not only are they cool, they even use a few words used by the UCD community, notably "iteration." (Alas, what agile programmers consider an iteration and what UCD folks consider an iteration differ widely, a <em>faux ami</em>. 
)<br /><br />While discussion is an endearing quality of agile methods, an important party seems to be missing from the discussion: those anonymous folks known as users. Agile advocates will protest that I exaggerate here: they invite a person variously called a "customer representative" or a "user surrogate" to chat with the programmers. But it is important not to let the conversation get too big, otherwise the lightweight character of agility is lost.<br /><br />At the heart of agile methods are meetings, generally very small meetings (we want to be agile, after all), but meetings all the same. At these meetings, programmers try to model what should be happening with the program. The two or three people meeting decide what to do next with the program. What do they base their decisions on? I see two major sources of feedback for agile programmers: how the code is performing in meeting perceived needs, and conceptual models that the programmers create to think through what users need from the program.<br /><br />There are some significant risks associated with using functional prototypes as a foundation for the final product. You get invested in your solution when you choose not to throw away alternatives -- and throwing things away is the beauty of paper prototyping. Your UI can get enmeshed in your functional domain model, making it difficult to change. You are prisoner to the fundamental problem associated with any form of incremental design, namely that you invariably develop a solution that "satisfices" (is the first solution to minimally meet the needs at hand), rather than a solution that looks at broader and longer term issues, and seeks to find the best alternative given the wider trade-offs. Sometimes you end up in a blind alley, as your evolving solution fails to scale to the growing complexity of needs.<br /><br />I find troubling the notion that conceptual models can serve as an adequate proxy for user needs. Programmers are smart people, and love models, especially high level abstract ones. It should be no surprise that techniques that promise to model user needs are the techniques of choice embraced by agile programmers. The models favored are variants of either scenario-based models, or usage-based models. What are these user models based on? Sometimes they are just based on a conversation between a pair of programmers. If more elaborate, the programmer pair might call a meeting to get input from some other stakeholders, such as the customer representative. But what these models aren't based upon is proper user research. The scenarios and "usage" reflect what a bunch of people sitting around a conference table said, and no more. All kinds of assumptions are made, and never verified, in such scenario and usage modeling. Models reflect the tunnel vision of their creators. They lack the peripheral vision gained by widespread consultation with users before design and during development.<br /><br />What is most lacking from agile usability is a formal role for user testing. There may be a grudging acknowledgement that user testing is useful in limited circumstances, and a few UCD consultants have managed to sneak in testing on an agile development project. But generally agile programmers see usability testing as a time waster, and unless and until that attitude changes, agile usability will only be agile without the usability. Some agile advocates, particularly Larry Constantine, claim you really don't need to test to attain usability. 
"Our view [of usability testing] is not uncritically positive," he writes:<br /><br /><br /><br /><blockquote>Our own view of usability testing is that it <em>can be</em> an important and useful tool in service of enhanced usability<em> so long</em> as it is recognized as only one <em>specialized</em> tool among many. Particularly in the <em>absence of good models</em> or methods of design, usability testing is indispensable. Testing, however, is <em>never sufficient</em> in itself to<br />deliver highly usable software. [my emphasis]<br /></blockquote><br />That quote might even sound reasonable out of context, until you see that Constantine devotes only a few pages to usability testing in his 500 page book (<em>Software for Use</em>) that is supposedly about usability. Constantine is a critic of usability testing, considering it too expensive and inefficient (if it weren't for the fact that usability testing "plays such a prominent role in the business of software development", I wonder if he would even acknowledge the limited benefits he concedes it offers). He proposes to "upgrade usability" through is own methodology of usage centered design, which which in his mind effectively eliminates the need for testing. Somehow Constantine fashions himself as a usability expert, but he dismisses what 99% of other usability experts consider the foundation of usability: usability testing. How Constantine can call usability testing "a specialized tool", as though it was on the fringes of common use, escapes me.<br /><br />I happen to think Constantine's usage centered design (to quote his own phrasing) "<em>can be</em> an important and useful tool in service of enhanced usability<em> so long</em> as it is recognized as only one <em>specialized</em> tool among many." But Constantine would have you believe his approach is the only one that matters (the book's list of references contain mainly his own writings). We are back to the old days, when checking real world usability is an afterthought, merely tidying up a few minor details. If only things were that simple.<br /><br />The hubris of usage centered design is the conceit that a select few can know the needs of many through enlightened processes. Constantine does briefly speak about the need to get information from real users, but he devotes most of this discussion to how to get information about users at arm's length (from surveys, for example, rather from realistic settings.) Whenever users are mentioned, the discussion is short (not enough to act on), perfunctory (by acknowledging that true, some people do direct user research, which in limited circumstances might be useful for some people, if interested look elsewhere for details) and ambivalent (dealing with users involves "chaos"; a lack of enthusiasm for UCD abounds.)<br /><br />Constantine wants to "move away from purely user centered approaches to software design." I'm all in favor of improving design methods, and reducing the amount of testing needed, even iterative testing. There <em>is</em> too much software to test properly, so testing needs to be prioritized. But while Constantine has identified a valid problem, and even offered some additional tools to deal with the problem (mostly task modeling), it is highly grandiose to imagine he has solved the problem. In Constantine's view, modeling will produce mostly usable software (what he calls "built-in usability"). 
Any remaining problems can be addressed through "collaborative usability inspections", in other words, more people chatting while sitting around a conference table.<br /><br />I may be entirely wrong: perhaps one can design completely usable software without doing either user research or user testing. One can simply rely on design methods and usability rules, and presto, a usable software system emerges. But even though I respect the power of methods and rules to improve products for users, I am unaware of any combination of methods/rules that would guarantee fully usable software. Best practice is useful, but insufficient. There are too many variations for best practice to address, too many unknowns about users, too much innovation happening, too little certainty about how all these factors interact. Perhaps years from now, when user research has uncovered its last discovery and when technology has evolved to a point of standing still, we will have a science that won't require users to offer their inputs into requirements or to show their performance during testing. Until then, modeling and inspections seem like a recipe for missed requirements, unforeseen interaction problems, and confused people.<br /><br />I focus on Constantine's views in particular because for many people in the agile programming world, he is the face of usability. [Disclosure: I've never met Constantine or even know anyone who has. My criticisms are of the methods he advocates, not of him as a person.] Constantine is a major writer on the Yahoo agile usability list, a list more dominated by programmers than usability professionals. The people-free "usability solution" offered by usage centered design is no doubt appealing to some programmers. But if agile programmers are going to learn what usability is about, they need to get a representative presentation of usability, especially the importance of user testing.<br /><br /><p>However broken existing software development processes may be, with their inability to reflect user needs, I hope we can develop a meaningful solution to the problem, and not a lesser-of-two-evils solution. For the moment, agile usability is just somewhat better than RUP. Let's hope we can get a real UCD solution embedded in the software development process, before agile usability gets entrenched, with everyone believing the problem has been solved.</p>Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1135840411608409732005-12-29T19:41:00.000+13:002005-12-29T20:35:13.463+13:00switching costs and usabilityI <a href="http://michaelandrews.blogspot.com/2005/11/open-source-blues.html">recently groaned </a>about how supposedly "open source" Firefox didn't really offer a vendor-neutral solution for storing bookmarks. Firefox forces users to rely on its own implementation of bookmarks, which is a usability negative. Firefox imposes a penalty for switching to another browser, making it difficult to export one's bookmarks, and tries to hold its user base captive.<br /><br />Firefox is using a "lock-in" ploy used by many vendors (Microsoft, Adobe, Macromedia, etc.). Economists refer to lock-in as a "switching cost." Switching costs are directed at two parties: at rival vendors, to make it more difficult for them to sell to your clients, and at one's own customers, to prevent them from defecting to a rival vendor. 
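<br /><br />To be clear about what vendor-neutral portability would involve: Firefox of this era keeps bookmarks in the Netscape-style bookmarks.html file, and walking that file into a plain, neutral format is a small job. A minimal Python sketch, assuming that file format:<br /><pre>
# Export Netscape-style bookmarks.html entries (each link an A element
# with an HREF attribute) to vendor-neutral JSON. Sketch only; real
# bookmark files also carry folders, dates, and other attributes.
import json
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.bookmarks = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href:
            self.bookmarks.append({"title": data.strip(), "url": self._href})
            self._href = None

parser = BookmarkParser()
parser.feed('<DL><DT><A HREF="http://example.com">Example</A></DT></DL>')
print(json.dumps(parser.bookmarks, indent=2))
</pre><br />The switching cost, in other words, is a choice rather than a technical necessity.<br /><br />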
From a user perspective, switching costs diminish user choice and sovereignty, and consequently users' ability to perform activities independently of a specific vendor's solution.<br /><br />Switching costs are a concrete example of how complete usability is not necessarily in the short-term interests of a specific firm. Consider the broader issue of standards. On balance, standards benefit users, who can interact with data (numbers, images, whatever) without worrying about implementation idiosyncrasies. But market leaders, or insurgents with a following, consider standards a threat, since they have the potential to deflate their market share or market momentum. Standards make it easier for users to hop between competing products, instead of being invested in one. While standards are good for usability overall, they can have negative consequences for specific firms. Generally, dominant firms embrace standards only when their rivals have enough market share that it makes sense to say "We <em>are</em> the market leaders, but we play well with any minnows you might also deal with."<br /><br />Dominant firms are often only half interested in the usability aspects of standards. If they are truly dominant, they would like user-recognized standards not to exist, but they can never be sure how complacent they can afford to be. Often, firms are in limbo, using a half-standard, perhaps shared with other firms, but not truly universal or authoritatively endorsed by a leading standards body. In this case, they are concerned with user perceptions about the importance of standards. Do they stand to gain more market share by opening up the standards, or lose market share by doing so? A small player in a competitive market will be interested in promoting standards, which reduce the cost of acquiring new customers.<br /><br />Firms may abhor standards because standards would appear to reduce their differentiation. The question is, does this differentiation matter to users, or is it just narcissism by the firm? Embracing standards generally reduces costs for a firm's product development, and so promotes cost leadership. Lower costs benefit firms and users alike.<br /><br />When it comes to reporting how users experience switching costs, usability professionals are simply messengers. Companies may see danger or opportunity in the message, but that is for them to interpret.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0tag:blogger.com,1999:blog-8981782.post-1135313164831683252005-12-23T17:34:00.000+13:002005-12-23T18:35:18.323+13:00data quality for enterprise usabilityIT productivity is a tricky subject. Many commentators focus on transactions, data, and the associated software and hardware costs, rather than on employee activities and labor costs. A data-centric view of IT productivity can lead one to undervalue the role of people in creating and utilizing the data. On the other hand, an emphasis on employee satisfaction, such as that advocated by HR departments, can lead one to undervalue the business importance of data. Data may not be as warm and fuzzy as people are, but it is important all the same.<br /><br />Data management has always been a massive topic in IT, and that shows no sign of abating. Companies often proclaim that data is the key to being customer-centric. Sometimes they mean having data available about a specific customer transaction while speaking to a customer. Other times customer-centric means mining data to predict what customers will do in the future.
A good example of both these dimensions is insurance. Data collected from current policies and claims is important for resolving issues to the customer's satisfaction. And historical data on past policies and claims can be used to predict customer behavior.<br /><br />The problem for companies is that while they collect volumes of data, it is not always useful. One study claims that data quality problems cost businesses $600 billion annually. While that figure sounds exaggerated, one can reasonably assume that poor data quality is costly to business.<br /><br />Usability can play many roles in improving data quality. It can improve data labeling and taxonomies to enable better sharing and aggregation of data. It can explore how to streamline the collection of data by employees and from customers. It can map the touchpoints where data can easily be verified with customers, allowing data such as addresses to be updated and corrected. It can improve retrieval and analysis of data, for example through drill-down techniques, so the data more often sees the light of day. Most data collected by companies is never used again beyond a week after it was collected.<br /><br />In short, usability can help with the accuracy, completeness and relevance of data. A fair amount of data collection and analysis is automated, and usability has little to offer those processes. But if the automation worked as well as it is supposed to, data quality wouldn't be a problem. It always comes back to people.
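<br /><br />To make the notions of completeness and accuracy concrete, here is a minimal Python sketch of the kind of checks an automated data-quality pass might run over customer records; the field names and rules are invented for the example, not drawn from any particular system.<br /><pre>
# A minimal sketch of completeness and accuracy checks over
# customer records; field names and rules are made up.
import re

REQUIRED_FIELDS = ["name", "address", "postcode"]
POSTCODE_PATTERN = re.compile(r"^\d{5}$")  # assumes US-style ZIP codes

def audit(record):
    """Return a list of data-quality problems found in one record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not record.get(field, "").strip():
            problems.append(f"missing {field}")   # completeness check
    postcode = record.get("postcode", "")
    if postcode and not POSTCODE_PATTERN.match(postcode):
        problems.append("malformed postcode")     # accuracy check
    return problems

records = [
    {"name": "A. Customer", "address": "1 Main St", "postcode": "20001"},
    {"name": "", "address": "2 Elm St", "postcode": "2OOO1"},  # dirty row
]

for i, record in enumerate(records):
    for problem in audit(record):
        print(f"record {i}: {problem}")
</pre>Checks like these catch the mechanical problems; deciding which fields matter, and which touchpoints are the right place to fix them, is where the usability work comes in.Michael Andrewshttp://www.blogger.com/profile/16866433480397621479noreply@blogger.com0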