Tuesday, February 07, 2006

 

innate and learned interactive behaviors

The gold standard for social science is research that is non-trivial, non-obvious, and widely repeatable. User research sometimes yields seismic discoveries, but small discoveries -- or even ambiguous ones -- are more the norm. But it would be wrong to conclude that because everyday user research yields only mundane discoveries, we know the major things we need to know, or that pedestrian user research simply isn't worth the effort.

Okay, few people are so bold as to argue user research is counterproductive. But the concept of user research as a routine aspect of usable design is being questioned from many quarters, with alternative concepts sometimes promoted as being more time- or cost-effective. I see three often-mooted ideas that imply routine, classic empirical user research isn't necessary. One is that smart consultants already know the answers -- only dumb ones don't. Another is that there are newer, better theories that deliver the answers. The third is that there are better methods to deliver the answers.

The first skepticism toward user research I will call "charismatic usability." Advisors and consultants can take so much satisfaction in what they know that they lose sight of what they don't know. With usability charismatics, attention shifts from the credibility of the conclusion to the "credibility" of the messenger. Perhaps the consultant has secret knowledge, gained through previous work with very select clients, that is not known in the wider usability community. We also see this fantasy played out in scenarios imagining that "user experience" evangelism will storm the board room and become the pet interest of the CEO. Charismatic messages appeal to individuals and organizations in crisis, and charismatic seekers betray a desperation, as well analyzed by Harvard business professor Rakesh Khurana in Searching for a Corporate Savior: The Irrational Quest for Charismatic CEOs. The lovefest is often disappointing, with the charismatic expert over-promising, and under-delivering, a solution to a problem that is bigger than any single personality can solve. Being able to convince a client she has a problem doesn't necessarily endow you with the capability to solve it.

Other people doubt that user-centered design, which dates from the 1980s, is still a viable theory, and believe it needs to be replaced with something newer. Some believe a more complex theory is what is needed, perhaps activity theory. As a theory, UCD has never been coherent, and traditional cognitive science approaches don't offer many of the answers sought. While activity theory involves user research, it seems to have its priorities reversed: in AT, theory drives user research, instead of user research driving the development of theory. Like most Marxist-derived "theory", AT is not really a theory that can be proven, but simply a framework that guides one's focus of attention, for better or worse. How much value can be found by deciphering impenetrable jargon about the "zone of proximal development" or Hegelian dialectics is open to question. Researchers get excited by the phenomenology of artifact-mediated activities -- say, how office workers use a magnetic white board -- while in offices across the globe, artifacts themselves are disappearing into the ether. There is less and less physical manipulation to watch and theorize about. Theory isn't always in sync with what is happening on the ground.

Another approach might be called the "re-engineering" of usability: obliterate all the unnecessary steps. Some imagine that definitive answers can be found "agilely" through quick polling techniques: just ask a few people, and your problem dissolves. So-called "discount usability" -- a concept that has merit when used judiciously -- is rapidly degenerating into a wholesale dumbing-down of usability. The technique of testing five people -- plausible only under well-understood circumstances -- has collapsed into testing "3-5" people, soon to be "1-5" people, promising answers to all questions, irrespective of how many facets the problem involves.
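As a back-of-the-envelope illustration, the five-user claim rests on Nielsen and Landauer's binomial discovery model: the expected share of usability problems found by n test users is 1 - (1 - p)^n, where p is the probability that a single user exposes a given problem. The sketch below shows how quickly the claim unravels when problems are harder to detect; the 0.31 value is the average commonly cited from their data, while the lower values are hypothetical, chosen for contrast:

def share_of_problems_found(p: float, n: int) -> float:
    """Expected proportion of usability problems uncovered by n test
    users, assuming each problem is found independently with
    probability p per user (the Nielsen-Landauer model)."""
    return 1 - (1 - p) ** n

for p in (0.31, 0.15, 0.05):  # per-user detection probability
    results = ", ".join(f"n={n}: {share_of_problems_found(p, n):.0%}"
                        for n in (1, 3, 5, 15))
    print(f"p={p:.2f} -> {results}")

With p = 0.31, five users do uncover about 84% of problems; drop p to 0.05 and the same five users uncover less than a quarter -- which is precisely the sense in which the technique is plausible only in narrow circumstances.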

If user research, which for me includes old-fashioned usability testing, is losing its mojo, how far can we rely on our pre-existing knowledge of users? To simplify, I propose we consider user behaviors of two kinds: innate behaviors, and learned behaviors. Both are valuable, but both are limited, for different reasons.

Innate behaviors are essentially biological in character. Classical ergonomics, focused on the physical dimension of activity, looks at innate behavior: how easily people can grip a knob, or point a mouse. Innate behavior exists in the mental realm as well: the psychology of perception describes innate behaviors. Memory and attention functions are often innate, more a function of intrinsic capabilities than previous experiences. Generally speaking, people vary in their innate behavior by degree of ability, rather than by kind of ability (the obvious exception is for persons with a disability -- lacking an ability that exists in the general population.) If there are wide differences in a behavior, such that some people can succeed at a task while others cannot, it seems unlikely the behavior is innate (an exception being if the performance required, or the person involved, is at one end or the other of a bell curve distribution.)

Ergonomics and empirical cognitive science focus extensively on innate behavior. It is important to recognize that the number of behaviors that are innate is small in comparison to all observable human behavior. Still, these behaviors are often fundamental, and there is plenty to be learned about them. What is most interesting about innate behaviors is understanding the range of performance, especially the upper limits of what people can do. I am very excited by recent work in the field of "cognitive systems engineering" looking at information overload. This research is greatly extending, and qualifying, research from half a century ago on short term memory. Information overload is one of the most vital issues confronting design today, but we are only just starting to understand its complex dynamics outside simple lab experiments.

Designers can utilize findings on innate user behaviors knowing that if they were to do their own user tests, those tests would yield no new information. Unfortunately, the findings are sometimes not as useful as needed. The general finding may be valid, but the data does not address one's user group in adequate detail. Even basic anthropometric data is sometimes not available for certain population subgroups. Other times the finding does not fit the design problem appropriately: it is too general to predict how people would behave in a given circumstance, or the finding, while appearing robust (it has been repeated), has not been demonstrated with sufficiently diverse user groups and contexts to merit being considered a universal, innate behavior (the problem of how innate the behaviors of university psychology students really are). Truth is, studies on innate behavior are only of limited use to designers. Most design questions are not answered by data on innate human performance.

Other user behavior is not biological but learned. This distinction is not clear cut, because one can learn to improve one's performance of an innate behavior. The general population is able to remember facts (an innate ability), though it may be possible to improve one's recall of facts through training or techniques (a learned ability.) Learning effects exist for both physical and mental performance. But a more serious problem is to treat a learned behavior as if it were an innate one.

Most HCI research addresses learned behaviors, even though it generally fails to stipulate that limitation. HCI research tends to report its results categorically, as in "subjects behaved" in such and such a manner, with very limited discussion of historical, contextual and conditional factors relating to the subjects. (We may simply learn that subjects were heavy Internet users and averaged 22 years of age. Sometimes we learn more about the computer equipment used in the experiment, which is often thoroughly dissected.) Sometimes we find cryptic findings in the HCI literature, stating some tentative finding about a highly particular, even artificial, activity, with recommendations that further study be considered (and sometimes only the investigator is sufficiently interested in the problem to do the further investigation.) Practitioners -- people who need to design things -- often can't use such information to any extent. Even with these limitations, HCI research is valuable, not least because it tracks what we don't know, and nudges us to learn more about it. Much HCI research consciously focuses on novel technologies we know little about. There are benefits of knowledge for knowledge's sake, and practitioners owe their academic colleagues appreciation for investigating areas that don't have an obvious payoff. But exploratory research, especially involving bespoke and idiosyncratic design configurations, is difficult to generalize from.

Two things are vexing about learned behaviors. First, applying any finding about learned behavior is difficult, since the finding is highly sensitive to which user population you have studied. Just because you found a strong user behavior among American college students doesn't mean you will find the same behavior with middle-aged Americans, or Egyptian college students, or some other user group. Second, learned behaviors change. People learn new behaviors, and old ones fall into disfavor. What is taken as gospel about the Web today will become idiosyncratic once Web 2.0 (or whatever it ultimately becomes known as) gains currency.

Much of our knowledge about users is highly perishable. It might be correct today, but we can't be sure how much longer it will remain valid. I am skeptical of many findings from the 1990s, simply because technology and user behavior have moved on since then. There are few eternal truths in an out-of-date HCI textbook. What remains useful are the methods of HCI (e.g., task analysis), not the specific findings of user trials.

We have a human tendency to want to think through the implications of a finding, to generalize from it. This tendency may be even stronger among commercial researchers than academic ones. Commercial researchers can be less cautious in their generalizations. If we encounter research data on something not previously studied, we want to treat it as the foundation of something of wide and lasting consequence, not as data limited to a particular group or a particular time.

Over the past quarter century, starting with seminal research by Tversky and Kahneman, cognitive science has developed a clearer understanding of expert judgment. Humans have a strong and consistent tendency to be over-confident about their understanding of a situation, and their ability to predict outcomes. In HCI, we see these findings confirmed when different expert usability teams, evaluating the same product, arrive at widely different conclusions about what needs fixing. Indeed, one "strong" conclusion of HCI research, according to the strength-of-evidence index in the National Cancer Institute's usability research summary, is to avoid relying heavily on expert reviews and cognitive walkthroughs, the very kinds of activities that dispense with user research.

At least we have the routine feedback of user research to curb our temptation toward overconfidence. Let's hope.
