Saturday, January 28, 2006

 

formulas and confusion

Earlier this week Don Norman picked up on a comment I made on a discussion list and said I was confused about some fundamental things. Indeed I can get confused, and I believe the confusion arises from our field's quest for universal solutions. The tidy advice of HCI that is out there glosses over a big variable on projects: what level of knowledge about user behavior is required to design a user interface.

Let's arbitrarily divide user centered designs into two types: formulaic designs, and non-formulaic ones. Formulaic designs are the stock-in-trade of web agencies designing for the public -- things like news sites and online catalogs. They are ubiquitous, and generally all look and work the same, regardless of whose site it is. Generally people involved in such projects have a good idea about user needs even prior to starting: they are folks just like us, after all, and we have already talked to countless people like them for other similar projects. When we design for formulaic projects, we already have a good grasp of the solutions available to us. Designing an online shopping cart is not rocket science: it has been done countless times, users have expectations about how carts operate, and the work is mostly a matter of polishing the details.

If you work exclusively on formulaic projects, the tidy advice of usability will serve you well. You can find rules on common issues such as how to display error messages, and find plug-and-play models for how to incorporate usability into the development cycle. I don't want to suggest that formulaic projects are trivial and unimportant, and certainly do not consider all consumer web projects formulaic. My point is that if a project is addressing well-tilled ground, you have tools to deal with it. Confusion avoided.

Now the rest of the interactive design universe is less than formulaic. The interaction domain might be novel, or complex, or highly specialized. Here the generic advice often falls short. HCI claims a scientific heritage based in cognitive psychology and computer science, and presumes there are universal truths that will be uncovered and will guide software development. And while I am happy to grasp and use any universal finding about human interactive behavior, the number of such findings is far smaller than the range of issues that confronts us. The reality is that for many issues, there is no universal user response, so there is no generic advice available.

The inadequacy of usability "best practice" is illustrated by a simple example: what is best practice for using disabled buttons? A consultant posed this question on a discussion list recently, and I smiled with recognition at the problem. I have also struggled with the issue, and dutifully consulted my vast HCI library as well as my Google bookmarks, but found concrete advice on the issue wanting. Sure, there were a few references to how to do things very badly, and some very abstract principles about not causing users grief, but not much on what to do to ensure no one gets confused. Replies to the question on the discussion list yielded as many different, contradictory answers as there were respondents. All the answers were thoughtful and plausible, but no one seemed able to give a definitive answer that could be applied by a designer in another context without worry. Users just don't have a common understanding and set of expectations about disabled buttons. One of the most common, plain-vanilla issues eludes the development of best practice guidance.
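
To make the disagreement concrete, here is a minimal sketch of the two approaches most often pitted against each other in that kind of thread: disabling the button until the form is valid, versus leaving it enabled and explaining the problem on click. The element IDs and the validation check are invented for illustration, and neither version is being put forward as the "right" answer.

```typescript
// Two contested approaches sketched with plain DOM APIs.
// The element IDs and isFormValid() are invented for this example.

const form = document.querySelector<HTMLFormElement>("#checkout")!;
const submit = document.querySelector<HTMLButtonElement>("#submit")!;
const errorBox = document.querySelector<HTMLElement>("#error")!;

function isFormValid(): boolean {
  // Stand-in for whatever validation the real form actually needs.
  return form.checkValidity();
}

// Approach A: disable the button until the form is valid.
// Users cannot click prematurely, but get no explanation of why the button is off.
function wireUpDisabledButton(): void {
  submit.disabled = !isFormValid();
  form.addEventListener("input", () => {
    submit.disabled = !isFormValid();
  });
}

// Approach B: leave the button enabled and explain the problem on click.
// Users always get feedback, but they can "click and fail."
function wireUpEnabledButton(): void {
  submit.addEventListener("click", (event) => {
    if (!isFormValid()) {
      event.preventDefault();
      errorBox.textContent = "Please complete the required fields before continuing.";
    }
  });
}
```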

The collective body of usability evidence -- published test results demonstrating what is effective -- is often murky. Sometimes it resembles the seesaw headlines about diet and nutrition benefits. This week tofu makes you smarter, last week tofu was ineffective in preventing cancer, next week tofu will raise cholesterol [I jest]. What to do?

If you are Don Norman, you tell UI designers to "Have faith in our ability to design well-crafted, understandable interfaces and procedures without continual testing." I don't understand why user testing is falling out of favor, because it seems more relevant than ever. True, for formulaic designs, usability testing is not the eye opener it once was, which is exactly how it should be. But I sense the IT industry, broadly defined, may be creating diversity in interactive design faster than we are bringing interactive behavior into the fold of formulaic design. We extend online applications to a greater range of activities where no precedent exists, to smaller subsegments of users about whose behavior we know little, and to a growing range of platforms, mobile especially. And diversity is a reinforcing loop -- the more diverse systems become, the more variety users are exposed to, and the more their expectations of how systems behave diverge. Diversity is leading to a fragmentation of user expectations. It is a challenge to find hard-and-fast rules to guide our designs. Usability testing seems more needed than ever.

In Norman's view, usability testing should be for debugging a design only. "UI and Beta testing are simply to find bugs, not to redesign." I think I understand his point -- that if a problem could have been avoided before testing, it should have been designed properly to begin with. But I am concerned that Norman offers an overly restrictive view of the role usability testing can play. The debugging view of usability testing implies that the design tested is formulaic. As I have argued, that is often not the case. Even if we are designing the ubiquitous shopping cart, there is no reason why we have to follow the standard formula that is out there. We can experiment, and breathe a bit of innovation into the design. Testing allows experimentation. Rather than grasp for certainty by sticking only to formulaic designs, which may work but might not be the best that can be offered, we can explore alternatives. For example, the addition of instant messaging help to online checkouts is an innovation that didn't happen without a few redesigns arising from usability tests. AJAX promises great innovations for UIs, but there is no knowing how users will respond to a concept until the testing is done; bold experiments will often entail a series of redesigns until the design is right.

Several years ago Nico Macdonald raised the concern that usability, by enforcing consistency, might stifle innovation in interaction design. That concern is valid if we believe there is only one way to design something. By enabling experimentation, usability testing actually fosters innovation. But more importantly, we need testing to explore our understanding of users, who can vary considerably in their expectations of how a system should behave.

Projects obviously don't fall into a simple category of formulaic or otherwise. Projects might be mostly formulaic (dealing with domain issues largely understood) but still present significant unknown issues for designers (perhaps an AJAX widget). Novel, complex and highly specialized projects will of course build on formulas where available and useful, but will need to explore more about user behavior. Whatever the mix, I think we need to get explicit about the borders between what we can safely assume about user behavior, and what we are assuming unsafely.

Perhaps we need a central repository about what we as a discipline don't know, an issue tracker. A document on usability evidence produced by the US National Institutes of Health a few years ago included the concept of "strength of evidence." Contradictory evidence is not a bad thing, even if it is sometimes maddening. For some of these issues we may eventually be able to separate out the factors that explain these differences (user profile, intervening contextual factors, etc.). Results can change over time as well, as users learn new behaviors, and potentially even lose familiarity or patience with old ones. A central tracking system would require wide participation and a history before it would be useful, and for these reasons it might not be a viable solution. But even on a case-by-case basis, we need to let each other know what is unresolved in our field. For the sake of developing useful formulas, let's externalize our confusion.

Thursday, January 12, 2006

 

up the business value chain

Restructuring business processes has been one of the most important -- if problematic -- activities in business strategy over the past decade. Reducing waste can cut costs and even enhance revenue, through increased responsiveness to market conditions. But poorly considered restructuring can be a nightmare.

I don't pretend that usability holds all the answers to how businesses should structure their processes -- business is too complex for any one "formula" to work magic. That said, I would modestly suggest that usability can be an important resource for implementing the restructuring of business processes effectively.

Usability often is focused on the micro, rather than the macro. It might look at what individuals do, their specific tasks. Through task decomposition and task analysis, it seeks to optimize how a task can be done by people.

Usability also looks at how groups of people coordinate tasks, that is, how they perform activities. Originally, these group tasks were focused on teams, who needed a common view and shared understanding of what they were trying to accomplish. But as the world has become more joined-up through the Internet, the team has become a more amorphous concept, less of a cohesive social unit. Groupware has given way to portals that anyone with a password can access. Even the distinction between employees and customers is getting blurred. When customers access a portal to track their package shipments, they see the same data as employees. Questions now arise: should they see the same view, or a custom view?

Just as tasks can be simplified to reduce the time and number of steps needed for an individual, entire activities involving numerous people can be simplified. But usability is commonly associated only with optimizing tasks done by individuals or small groups. As a result, it is sometimes dismissed as irrelevant to restructuring larger processes. Sometimes user research is criticized as merely tweaking an existing process, rather than as enabling a radically new process that is much more efficient. I believe such an attitude is shortsighted.

Most work on business process restructuring looks askance at people. Indeed, much process reengineering is aimed at reducing the number of people involved in a process in order to gain greater efficiencies. But too often a focus on process automation can lull strategists into ignoring the reality that people never disappear; they are sometimes simply marginalized.

One of the most common ways businesses reengineer their processes is by outsourcing their activities to their own customers. This is more commonly referred to as "self service." Businesses get to reduce staff, and trumpet the fact that they have automated, even if they have mostly shifted the annoying data entry responsibilities onto their customers. Customers, whether businesses or individuals, agree to this provided they are given sufficient incentive. The incentives vary, but at a minimum the burden can't be too complicated. In other words, it must be usable.

Even when businesses don't outsource functions, usability plays a critical role in the effective implementation of a reengineered process. Consider a common target of process reengineering: eliminating "unnecessary" internal approvals. One approach is to streamline the process, to simplify. It can yield a faster process cycle, and make the process more transparent to employees, who understand the simplified process more easily. Many businesses have found that streamlining processes has reduced costs enough to offset the loss of any benefits associated with the prior process.

The other approach to internal approvals is to automate them. The danger of eliminating approvals is that it can lead to looser standards, and more risk. Many businesses choose to continue to collect all the data, but to feed it to an automated decision program. Because a computer program acts on the data, the process might be no slower than if the approval had been removed, but it eliminates risk and improves decision-making precision. There is only one small downside: it can make things cognitively complex, as employees need to decipher the meaning of the computer decision agent's output. Usability can potentially help untangle this problem by looking at how these decisions are presented to users. Perhaps analyzing the messages, or offering visualization solutions, can increase employee comprehension.
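
As a hypothetical illustration of that last point, consider the difference between a decision agent that returns a bare approve/reject flag and one that returns a reason an employee can actually repeat. The interface, thresholds, and rules below are invented for the sketch; the point is only that the explanation travels with the decision.

```typescript
// Hypothetical shape of an automated approval decision. The rules are made up;
// what matters is that the agent returns a reason a person can read back to a
// customer, rather than an opaque "rejected" flag.

interface ApprovalRequest {
  amount: number;
  requesterId: string;
  hasManagerSignoff: boolean;
}

interface ApprovalDecision {
  approved: boolean;
  reason: string;                 // plain language an employee can relay
  reviewedBy: "auto" | "human";
}

function decide(request: ApprovalRequest): ApprovalDecision {
  if (request.amount <= 500) {
    return {
      approved: true,
      reason: "Amounts up to $500 are approved automatically.",
      reviewedBy: "auto",
    };
  }
  if (!request.hasManagerSignoff) {
    return {
      approved: false,
      reason: "Requests over $500 need a manager's sign-off before they can be processed.",
      reviewedBy: "auto",
    };
  }
  return {
    approved: true,
    reason: "Approved: a manager's sign-off is on file.",
    reviewedBy: "auto",
  };
}
```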

However businesses choose to reduce procedural complexity, they must make their processes understandable to their employees -- and their customers. Employees need to be able to tell customers, who might be anyone outside the immediate process, what is going on. (Don't limit the concept of customer to individual consumers. Many companies have internal units that competitively bid for business from their own parent. It's an unforgiving world in business these days.) Companies look incompetent if employees have to say "it's somewhere in the computer system" or "the system rejected the request." The entire process must be comprehensible, that is, usable. Anything else is false economy: cost-effective in a spreadsheet model, but not sustainably effective.

Monday, January 02, 2006

 

is usability a functional or non-functional requirement?

Outside of projects involving consumer gadgets or consumer websites, usability professionals often feel they work on the margins of software projects. True, there is growing recognition that usability is important in principle, but in practice, many project plans simply don't reflect a realistic role for UCD, in terms of budget (do many of us get 10% of the project?), timelines, or process. Unfortunately, beyond groaning, most proposed fixes focus on highly variable soft solutions such as improving communication between team members, and sharing one's perspective. Such approaches are laudable, but inadequate. Except for the smallest projects, improved communication will not accomplish much when the structure of the project has been cast by the project plan and room for addressing changes in scope has been eliminated.

I believe a major reason UCD is on the outside looking in is a simple phrase most of us hardly even notice. The phrase is "functional requirement." Nearly all our colleagues -- business analysts, programmers, project managers, and business stakeholders -- are under the impression that usability is a non-functional requirement. They are both right and wrong. And while usability professionals did not create this confusion -- we don't even use the terms functional and non-functional -- we are at least partly responsible for confusing others about what is essential in what we do.

Functional requirements are important to people involved with developing systems in many ways. In larger projects, functional requirements are articulated in a written specification. Once a specification is written, the scope to make changes is reduced considerably. But what constitutes a functional requirement -- something so essential it must be done or else -- is open to considerable debate.

I am not a professional programmer, so I can only offer a flavor of how programmers view functional requirements. Here are some distinctions others make between functional and non-functional requirements:

Although the above distinctions differ, one element is clear: functional stuff looks important, while non-functional stuff looks a bit less so.

So what do our colleagues think about where usability fits? Generally, people concerned with functional requirements don't even address usability: it is not part of the functional requirements process. The closest explicit statement to that effect I can find comes from Leffingwell and Widrig in Managing Software Requirements (edited by the famous Booch/Jacobson/Rumbaugh), who give two pages to usability as a non-functional requirement. They complain about usability being a "fuzzy" notion that is hard to judge a system by. Given existing usability specifications, how do we know the system is performing the functions it needs to do? In some respects, I think that is a fair criticism. Too many usability requirements are fuzzy, and hard for others to respond to predictably.

Software requirements don't often specify UI behavior, which creates a host of problems. Agile methods, about which I wrote recently, offer a work-around by trying to remove documentation of requirements so that decisions about the UI aren't choked off by pre-existing functional requirements. Lucy Lockwood, codeveloper of usage-centered design, argues that UI design is intrinsically associated with system behavior. "The best user interface design will offer little to users if crucial details are lost in implementation." She comes close to viewing usability as a functional requirement. While I agree that the UI is deeply affected by what the system can offer the user, I balk at her belief that programmers need to drive the UI. She says: "most detail decisions affecting product usability are made by the people writing the code." Moreover, "good interface design is closely tied to the programming that supports it. Usability is a function of both appearance and behavior, and behavior implies programming." She believes a week's training in UI for programmers is sufficient for most to develop usable, if not excellent, designs. I can't speak for her experiences offering such training, but my experience has been that most programmers are not that interested in UI design and would prefer for someone else to make these decisions. Lockwood cites a shortage of usability professionals as limiting their involvement in UI design, but I think a bigger problem is the lack of explicit requirements processes for UIs, especially how these requirements are formalized, reviewed and communicated to coders.

Part of the confusion about the role of usability in the requirements process stems from its elastic meaning. Sometimes people refer to usability as user needs, sometimes as UI design. User needs in turn can be needs that relate to specific functions of a system (such as specific scenarios of use that are discovered through contextual inquiry), and needs that apply to the system as a whole (will users rely on mice or keyboards, will users need to conduct wildcard searches or sort results?). These user requirements can feed into both the functional requirements (via the business requirements) and the UI design. When usability is viewed solely as a non-functional requirement (screen layout issues that are independent of the system behavior), major problems can arise in assuring that the system does what it must do to satisfy user goals.

Getting usability considered as a functional requirement is an essential step to getting it built into the project plan, and to getting time and resources for the front-end activities essential to the success of the project from the user's perspective. Sometimes even seemingly trivial requests, such as wanting a summary screen drawing together different pieces of information, will precipitate a functional change request, and be viewed with reluctance. In these cases, usability is viewed as a project spoiler, and is further marginalized.

Unfortunately, the more UCD professionals talk about "user experience", the more others view usability as mostly a matter of presentation, and as a non-functional requirement. This aspect of usability does exist, and is both important and non-trivial, even if non-functional. There is much that usability can do to improve things for users without tinkering with the underlying behavior of the system. But such surface usability has limits. We need to get wiser about communicating the specific impacts of UCD, how they affect other non-UCD activities, and how project plans need to be constructed to assure that UCD addresses important functional aspects of the system.

