Friday, December 30, 2005

 

agile usability by committee

You know something is brewing when the end-of-year issues of both SIGCHI's Interactions and the UPA's User Experience carry articles on so-called "agile usability." I welcome the exploration of new approaches, especially hearing about real world experiences with them. Agile usability is an attempt to incorporate usability (sometimes loosely defined) into agile software development methods. It addresses a screaming problem in software development: the complete lack of an explicit role for UCD input in the standard frameworks of software development methods, particularly the "Rational Unified Process" that has such a stranglehold on development these days. IBM should be taken to court for promoting RUP as good practice when UCD is at most considered an optional bolt-on accessory (supply your own bolts and hope you can find the right size).

First the good news: agile software development methods are generally better than RUP at reflecting user needs. The bad news: agile is not that much better. Can fusing usability into agile programming make the product truly reflect user needs? The jury is still out; the experiment is still unfolding. But I am wary. Agile usability seems like a band-aid solution to a trauma wound.

Agile programmers seem like cool people. They experiment, listen to others, and dislike stuffy paperwork. Not only are they cool, they even share a few words with the UCD community, notably "iteration." (Alas, what agile programmers consider an iteration and what UCD folks consider an iteration differ widely, a faux ami.)

While discussion is an endearing quality of agile methods, an important party seems to be missing from the discussion: those anonymous folks known as users. Agile advocates will protest that I exaggerate here: they invite a person variously called a "customer representative" or a "user surrogate" to chat with the programmers. But it is important not to let the conversation get too big, otherwise the stealthy character of agileness is lost.

At the heart of agile methods are meetings, generally very small meetings (we want to be agile, after all), but meetings all the same. At these meetings, programmers try to model what should be happening with the program. The two or three people meeting decide what to do next with the program. What do they base their decisions on? I see two major sources of feedback for agile programmers: how the code is performing in meeting perceived needs, and conceptual models that the programmers create to think through what users need from the program.

There are some significant risks associated with using functional prototypes as a foundation for the final end product. You get invested in your solution when you choose not to throw away alternatives; the beauty of paper prototyping is precisely that alternatives are cheap to discard. Your UI can get enmeshed in your functional domain model, making it difficult to change. You are prisoner to the fundamental problem associated with any form of incremental design, namely that you invariably develop a solution that "satisfices" (is the first solution to minimally meet the needs at hand), rather than a solution that looks at broader and longer term issues, and seeks to find the best alternative given the wider trade-offs. Sometimes you end up in a blind alley, as your evolving solution fails to scale to the growing complexity of needs.

I find troubling the notion that conceptual models can serve as an adequate proxy for user needs. Programmers are smart people, and love models, especially high level abstract ones. It should be no surprise that techniques that promise to model user needs are the techniques of choice embraced by agile programmers. The models favored are variants of either scenario-based models or usage-based models. What are these user models based on? Sometimes they are just based on a conversation between a pair of programmers. If more elaborate, the programmer pair might call a meeting to get input from some other stakeholders, such as the customer representative. But what these models aren't based upon is proper user research. The scenarios and "usage" reflect what a bunch of people sitting around a conference table said, and no more. All kinds of assumptions are made, and never verified, in such scenario and usage modeling. Models reflect the tunnel vision of their creators. They lack the peripheral vision gained by widespread consultation with users before design and during development.

What is most lacking from agile usability is a formal role for user testing. There may be a grudging acknowledgement that user testing is useful in limited circumstances, and a few UCD consultants have managed to sneak testing into an agile development project. But generally agile programmers see usability testing as a time waster, and unless and until that attitude changes, agile usability will only be agile without the usability. Some agile advocates, particularly Larry Constantine, claim you really don't need to test to attain usability. "Our view [of usability testing] is not uncritically positive," he writes:



Our own view of usability testing is that it can be an important and useful tool in service of enhanced usability so long as it is recognized as only one specialized tool among many. Particularly in the absence of good models or methods of design, usability testing is indispensable. Testing, however, is never sufficient in itself to deliver highly usable software. [my emphasis]

That quote might even sound reasonable out of context, until you see that Constantine devotes only a few pages to usability testing in his 500-page book (Software for Use) that is supposedly about usability. Constantine is a critic of usability testing, considering it too expensive and inefficient (if it weren't for the fact that usability testing "plays such a prominent role in the business of software development", I wonder if he would even acknowledge the limited benefits he concedes it offers). He proposes to "upgrade usability" through his own methodology of usage centered design, which in his mind effectively eliminates the need for testing. Somehow Constantine fashions himself a usability expert, yet he dismisses what 99% of other usability experts consider the foundation of usability: usability testing. How Constantine can call usability testing "a specialized tool", as though it were on the fringes of common use, escapes me.

I happen to think Constantine's usage centered design (to quote his own phrasing) "can be an important and useful tool in service of enhanced usability so long as it is recognized as only one specialized tool among many." But Constantine would have you believe his approach is the only one that matters (the book's list of references contains mainly his own writings). We are back to the old days, when checking real world usability was an afterthought, merely tidying up a few minor details. If only things were that simple.

The hubris of usage centered design is the conceit that a select few can know the needs of many through enlightened processes. Constantine does briefly speak about the need to get information from real users, but he devotes most of this discussion to how to get information about users at arm's length (from surveys, for example, rather than from realistic settings). Whenever users are mentioned, the discussion is short (not enough to act on), perfunctory (acknowledging that, true, some people do direct user research, which in limited circumstances might be useful for some people; if interested, look elsewhere for details) and ambivalent (dealing with users involves "chaos"; a lack of enthusiasm for UCD abounds).

Constantine wants to "move away from purely user centered approaches to software design." I'm all in favor of improving design methods, and of reducing the amount of testing needed, even iterative testing. There is too much software to test properly, so testing needs to be prioritized. But while Constantine has identified a valid problem, and even offered some additional tools to deal with it (mostly task modeling), it is highly grandiose to imagine he has solved it. In Constantine's view, modeling will produce mostly usable software (what he calls "built-in usability"). Any remaining problems can be addressed through "collaborative usability inspections", in other words, more people chatting while sitting around a conference table.

I may be entirely wrong: perhaps one can design completely usable software without doing either user research or user testing. One can simply rely on design methods and usability rules, and presto, a usable software system emerges. But even though I respect the power of methods and rules to improve products for users, I am unaware of any combination of methods and rules that would guarantee fully usable software. Best practice is useful, but insufficient. There are too many variations for best practice to address, too many unknowns about users, too much innovation happening, too little certainty about how all these factors interact. Perhaps years from now, when user research has uncovered its last discovery and technology has evolved to a point of standing still, we will have a science that won't require users to offer their input into requirements or demonstrate their performance during testing. Until then, modeling and inspections seem like a recipe for missed requirements, unforeseen interaction problems, and confused people.

I focus on Constantine's views in particular because for many people in the agile programming world, he is the face of usability. [Disclosure: I've never met Constantine, nor do I know anyone who has. My criticisms are of the methods he advocates, not of him as a person.] Constantine is a major writer on the Yahoo agile usability list, a list more dominated by programmers than usability professionals. The people-free "usability solution" offered by usage centered design is no doubt appealing to some programmers. But if agile programmers are going to learn what usability is about, they need to get a representative presentation of usability, especially the importance of user testing.

However broken existing software development processes may be, with their inability to reflect user needs, I hope we can develop a meaningful solution to the problem, and not a lesser-of-two-evils solution. For the moment, agile usability is just somewhat better than RUP. Let's hope we can get a real UCD solution embedded in the software development process before agile usability gets entrenched, with everyone believing the problem has been solved.


Thursday, December 29, 2005

 

switching costs and usability

I recently groaned about how supposedly "open source" Firefox didn't really offer a vendor-neutral solution for storing bookmarks. Firefox forces users to rely on its own implementation of bookmarks, which is a usability negative. Firefox imposes a penalty for switching to another browser, making it difficult to export one's bookmarks, in an attempt to hold its user base captive.
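To make the export penalty concrete, consider what it takes just to get one's bookmarks into a neutral form that a person, or a rival browser, can use. What follows is a minimal sketch, assuming the bookmarks sit in a Netscape-style bookmarks.html file; the file location, the markup details, and the tab-separated output are my assumptions for illustration, not anything Firefox documents or guarantees.

    # Sketch: pull bookmarks out of a Netscape-style bookmarks.html file
    # into plain title/URL pairs. Format assumptions are mine, not Firefox's.
    import re

    def extract_bookmarks(path="bookmarks.html"):
        """Return (title, url) pairs found in a bookmark file."""
        with open(path, encoding="utf-8", errors="replace") as f:
            html = f.read()
        # Each bookmark is an anchor like: <DT><A HREF="http://...">Title</A>
        anchor = re.compile(r'<A\s+HREF="([^"]+)"[^>]*>([^<]*)</A>', re.IGNORECASE)
        return [(title.strip(), url) for url, title in anchor.findall(html)]

    # Dump tab-separated pairs, a format anything can import.
    for title, url in extract_bookmarks():
        print(title + "\t" + url)

The extraction is trivial for a programmer, which is exactly the point: the switching cost falls on ordinary users, who should never have needed a script in the first place.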

Firefox is using a "lock-in" ploy used by many vendors (Microsoft, Adobe, Macromedia, etc.). Economists refer to lock-in as a "switching cost." Switching costs are directed at two parties: at rival vendors, to make it more difficult for them to sell to your clients, and at one's own customers, to prevent them from defecting to a rival vendor. From a user perspective, switching costs diminish user choice and sovereignty, and consequently the ability to perform activities independently of a specific vendor's solution.

Switching costs are a concrete example of how complete usability is not necessarily in the short term interests of a specific firm. Consider the broader issue of standards. On balance, standards benefit users, who can interact with data (numbers, images, whatever) without worrying about implementation idiosyncrasies. But market leaders, or insurgents with a following, consider standards a threat, since standards have the potential to deflate their market share or market momentum. Standards make it easier for users to hop between competing products, instead of being invested in one. While standards are good for usability overall, they can have negative consequences for specific firms. Generally, dominant firms embrace standards only when their rivals have enough market share that it makes sense to say "We are the market leaders, but we play well with any minnows you might also deal with."

Dominant firms are often half interested in the usability aspects of standards. If they are truly dominant, they would like user-recognized standards not to exist, but they can never be sure how complacent they can afford to be. Often, firms are in limbo, using a half-standard, perhaps shared with other firms, but not truly universal or authoritatively endorsed by a leading standards body. In this case, they are concerned with user perceptions about the importance of standards. Do they stand to gain more market share by opening up the standards, or lose market share by doing so? A small player in a competitive market will be interested in promoting standards, which reduce the costs of acquiring new customers.

Firms may abhor standards because standards would appear to reduce their differentiation. The question is, does this differentiation matter to users, or is it just narcissism by the firm? Embracing standards generally reduces costs for a firm's product development, and so promotes cost leadership. Lower costs benefit firms and users alike.

When it comes to reporting how users experience switching costs, usability professionals are simply messengers. Companies may see danger or opportunity in the message, but that is for them to interpret.

Friday, December 23, 2005

 

data quality for enterprise usability

IT productivity is a tricky subject. Many commentators focus on transactions and data and associated software and hardware costs, rather than employee activities and labor costs. A data centric view of IT productivity can lead one to undervalue the role of people in creating and utilizing the data. On the other hand, an emphasis on employee satisfaction, such as that advocated by HR departments, can lead one to undervalue the business importance of data. Data may not be as warm and fuzzy as people are, but it is important all the same.

Data management has always been a massive topic in IT, and that shows no sign of abating. Companies often proclaim that data is the key to being customer-centric. Sometimes they mean having data available about a specific customer transaction while speaking to a customer. Other times customer-centric means mining data to predict what customers will do in the future. A good example of both these dimensions is insurance. Data collected from current policies and claims is important for resolving issues to the customer's satisfaction. And historical data on past policies and claims can be used to predict customer behavior.

The problem for companies is that while they collect volumes of data, it is not always useful. One study claims that data quality problems cost businesses $600 billion annually. While that figure sounds exaggerated, one can reasonably assume that poor data quality is costly to business.

Usability can play many roles in improving data quality. It can improve data labeling and taxonomies to enable better sharing and aggregation of data. It can explore how to streamline the collection of data by employees and from customers. It can map touchpoints where data can easily be verified with customers, to allow data such as addresses to be updated and corrected. It can improve retrieval and analysis of data, for example through drill down techniques, so it more often sees the light of day. Most data collected by companies is never touched again a week after it was collected.
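To make the touchpoint idea concrete, here is a minimal sketch of the kind of verification usability work might push to the point of collection, so errors get caught while the customer is still present. The field names and validation rules are hypothetical illustrations, not any particular company's schema.

    # Sketch: validate and normalize a customer address at the point of
    # collection. Field names and rules are hypothetical examples.
    import re

    def normalize_address(record):
        """Return (cleaned_record, problems) for a simple address dict."""
        cleaned = {key: value.strip() for key, value in record.items()}
        problems = []
        if not cleaned.get("street"):
            problems.append("street is missing")
        # Made-up rule for illustration: postal codes are five digits.
        if not re.fullmatch(r"\d{5}", cleaned.get("postal_code", "")):
            problems.append("postal code should be five digits")
        cleaned["city"] = cleaned.get("city", "").title()
        return cleaned, problems

    record, issues = normalize_address(
        {"street": "12 main st ", "city": "springfield", "postal_code": "1234"}
    )
    print(issues)  # ['postal code should be five digits']

Catching the bad postal code during the conversation costs seconds; reconciling it months later, across systems, is where the real money goes.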

In short, usability can help with the accuracy, completeness and relevance of data. A fair amount of data collection and analysis is automated, and usability has little to offer those processes. But if the automation worked as well as it is supposed to, data quality wouldn't be a problem. It always comes back to people.

Thursday, December 22, 2005

 

design patterns and mental models

For those of us who design UIs, a few books are appearing that speak to "design patterns." I have just received Jenifer Tidwell's Designing Interfaces, a welcome addition to other design pattern books, such as Susan Fowler's excellent Web Application Design Handbook and Douglas K. van Duyne's The Design of Sites: Patterns, Principles, and Processes. I like all these books, and don't consider any of them a duplicate of the others. Still, I hunger for even more examples. "Design patterns" approach interaction at a very high level. I want a catalog of every variation ever done, as a source of ideas for the exact solution needed. I want a UI equivalent of Owen Jones' Victorian design "pattern book", The Grammar of Ornament.

There are activity patterns (the sequencing of tasks) and widget patterns (how tasks are done within a screen). But patterns are a designer-centric approach. If they are familiar to users, it is because designers use them frequently.

Mental models are slightly different. A mental model may be strong in a user's mind, but not often used by designers. It only takes one application using an approach for users to develop a new mental model. Users develop expectations of how something should work based on what they are familiar with. Consider something as ordinary as email. Users have a mental model of email based on what they use regularly. If you use Gmail, you expect email to offer the archiving abilities Gmail offers. If you use Lotus Notes, you see email functionality as incorporating unstructured databases. Outlook might be the most common of its genre, but it is irrelevant to the mental model of a given user.

What is becoming difficult for UI designers is the proliferation of user mental models. There are many application variants in use, and one can never be sure which experiences have shaped a user's mental model. What we need is a catalog of the interaction behaviors of widely used software applications. We need to be able to easily look up what mental models are being formed, even when we are not using those applications ourselves.
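Even a trivial lookup structure would be a start for such a catalog. Here is a minimal sketch of what I have in mind; the applications and behaviors listed are hypothetical examples, not survey data.

    # Sketch of a mental-model catalog: map applications to the interaction
    # behaviors a habitual user comes to expect. Entries are hypothetical.
    catalog = {
        "gmail": {"archive instead of delete", "conversation threading"},
        "lotus notes": {"mail as views over unstructured databases"},
        "outlook": {"folder hierarchy", "calendar integrated with mail"},
    }

    def expected_behaviors(apps_used):
        """Union of the behaviors a user of these applications may expect."""
        expectations = set()
        for app in apps_used:
            expectations |= catalog.get(app, set())
        return expectations

    print(sorted(expected_behaviors(["gmail", "outlook"])))

The real work, of course, is populating and maintaining the entries, which is why it would need to be a shared resource rather than one designer's notes.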

 

icing or cake?

People who know me know I have misgivings about the phrase "user experience". Sometimes the phrase is appropriate (e.g., when discussing a purely recreational interaction), but oftentimes it is used in a lazy way to describe a raft of benefits of UCD. Today I encountered a good example of how the phrase "user experience" can pollute the minds of people.

I was talking with someone who had previously engaged a well known "user experience" consultancy for advice. The consultancy has a fine, international reputation, and I have seen some of their work, which is good. I'm not slagging their ability in the least. But what I do object to is how they cast the benefits of UCD. They talked the standard talk about improving user experience. The message the client took away came through clearly: yes, user experience is good, but face it, it really isn't our highest priority. Sure, it is nice to offer a good user experience, I'm sure our employees would be grateful, but the really important issues are functionality, and how we can extract value from our databases.

Most people in UCD can't speak the language of business, so they talk user experience. They can't tie benefits to existing business goals, so they rely on the feel-good factor, with vague suggestions that unhappy people make mistakes or leave in frustration (not only an insufficiently substantiated suggestion, but a negative one to boot).

But businesses aren't intrinsically empathic. Quite the opposite: business organizations don't care about happiness unless there is a monetary consequence involved. Until we develop a physics of happiness, with certain, immutable laws resulting, appealing to happiness won't affect the impersonal outcomes produced by an organization.

We need to stop looking for admiration by acting like nice guys, and win respect by solving organizational problems.

Thursday, December 01, 2005

 

design for the unknown

My koan (riddle) for today is how to design for the unknown. I can't seem to nail down what the user requirements are. Asking other people what they need doesn't bring any more clarity. I don't have the luxury of trying out different alternatives and testing them. No, I simply have to make a poorly informed guess about what users need, and live with it. I am unfamiliar enough with the subject domain that I have learned not to trust my instincts; they aren't a reliable indicator of what is required. A very frustrating situation to be in, though a common one, especially where a new product or process is involved.
