Saturday, January 28, 2006
formulas and confusion
Let's arbitrarily divide user-centered designs into two types: formulaic designs, and non-formulaic ones. Formulaic designs are the stock-in-trade of web agencies designing for the public -- things like news sites and online catalogs. They are ubiquitous, and generally all look and work the same, regardless of whose site it is. Generally people involved in such projects have a good idea about user needs even prior to starting: the users are folks just like us, after all, and we have already talked to countless people like them on other similar projects. When we design for formulaic projects, we already have a good grasp of the solutions available to us. Designing an online shopping cart is not rocket science: it has been done countless times, users have expectations about how carts operate, and the work is mostly a matter of polishing the details.
If you work exclusively on formulaic projects, the tidy advice of usability will serve you well. You can find rules on common issues such as how to display error messages, and find plug-and-play models for how to incorporate usability into the development cycle. I don't want to suggest that formulaic projects are trivial and unimportant, and I certainly do not consider all consumer web projects formulaic. My point is that if a project addresses well-tilled ground, you have tools to deal with it. Confusion avoided.
Now the rest of the interactive design universe is less than formulaic. The interaction domain might be novel, or complex, or highly specialized. Here the generic advice often falls short. HCI claims a scientific heritage based in cognitive psychology and computer science, and presumes there are universal truths that can be uncovered to guide software development. And while I am happy to grasp and use any universal finding about human interactive behavior, such findings are far fewer than the range of issues that confronts us. The reality is that for many issues there is no universal user response, so there is no generic advice available.
The inadequacy of usability "best practice" is illustrated by a simple example: what is best practice for using disabled buttons? A consultant posed this question on a discussion list recently, and I smiled with recognition at the problem. I have also struggled with the issue, and dutifully consulted my vast HCI library as well as my Google bookmarks, but found concrete advice on the issue wanting. Sure, there were a few references to how to do things very badly, and some very abstract principles about not causing users grief, but not much on what to do to ensure no one gets confused. Replies to the question on the discussion list yielded as many different, contradictory answers as there were respondents. All the answers were thoughtful and plausible, but no one seemed able to give a definitive answer that a designer could apply to another context without worry. Users just don't have a common understanding and set of expectations about disabled buttons. One of the most common, plain-vanilla issues eludes the development of best practice guidance.
The collective body of usability evidence -- published test results demonstrating what is effective -- is often murky. Sometimes it resembles seesaw headlines about diet and nutrition benefits. This week tofu makes you smarter, last week tofu was ineffective in preventing cancer, next week tofu will heighten cholesterol [I jest]. What to do?
If you are Don Norman, you tell UI designers to "Have faith in our ability to design well-crafted, understandable interfaces and procedures without continual testing." I don't understand why user testing is falling out of favor, because it seems more relevant than ever. True, for formulaic designs, usability testing is not the eye opener it once was, which is exactly how it should be. But I sense the IT industry, broadly defined, may be creating diversity in interactive design faster than we are bringing interactive behavior into the fold of formulaic design. We extend online applications to a greater range of activities where no precedent exists, to smaller subsegments of users about whose behavior we know little, and to a growing range of platforms, mobile especially. And diversity is a reinforcing loop -- the more diverse systems become, the more variety users are exposed to, and the more their expectations of how systems behave diverge. Diversity is leading to a fragmentation of user expectations, which makes it a challenge to find hard-and-fast rules to guide our designs. Usability testing seems more needed than ever.
In Norman's view, usability testing should be for debugging a design only. "UI and Beta testing are simply to find bugs, not to redesign." I think I understand his point -- that if a problem could have been avoided before testing, the design should have been done properly to begin with. But I am concerned that Norman offers an overly restrictive view of the role usability testing can play. The debugging view of usability testing implies that the design tested is formulaic. As I have argued, that is often not the case. Even if we are designing the ubiquitous shopping cart, there is no reason why we have to follow the standard formula that is out there. We can experiment, and breathe a bit of innovation into the design. Testing allows experimentation. Rather than grasp for certainty by sticking only to formulaic designs, which may work but might not be the best that can be offered, we can explore alternatives. For example, the addition of instant messaging help to online checkouts is an innovation that didn't happen without a few redesigns arising from usability tests. AJAX promises great innovations for UIs, but there is no knowing how users will respond to a concept until the testing is done; bold experiments will often entail a series of redesigns until the design is right.
Several years ago Nico Macdonald raised the concern that usability, by enforcing consistency, might stifle innovation in interaction design. That concern is valid if we believe there is only one way to design something. By enabling experimentation, usability testing actually fosters innovation. But more importantly, we need testing to explore our understanding of users, who can vary considerably in their expectations of how a system should behave.
Projects obviously don't fall into a simple category of formulaic or otherwise. A project might be mostly formulaic (dealing with domain issues that are largely understood) but still present significant unknown issues for designers (perhaps an AJAX widget). Novel, complex, and highly specialized projects will of course build on formulas where available and useful, but will need to explore more about user behavior. Whatever the mix, I think we need to get explicit about the border between what we can safely assume about user behavior and what we are assuming unsafely.
Perhaps we need a central repository of what we as a discipline don't know -- an issue tracker. A document on usability evidence produced by the US National Institutes of Health a few years ago included the concept of "strength of evidence." Contradictory evidence is not a bad thing, even if it is sometimes maddening. For some of these issues we may eventually be able to separate out the factors that explain the differences (user profile, intervening contextual factors, etc.). Results can change over time as well, as users learn new behaviors, and potentially even lose familiarity or patience with old ones. A central tracking system would require wide participation and a history before it would be useful, and for these reasons it might not be a viable solution. But even on a case-by-case basis, we need to let each other know what is unresolved in our field. For the sake of developing useful formulas, let's externalize our confusion.