Tuesday, February 21, 2006

 

having something to format

Multimedia is the poster child of software development. We seem obsessed with making media "richer": adding interactive graphics, data visualization, video, sound, ticker-tape updates and entertainment-like controls. Apart from some not terribly useful efforts in "experimental typography," old-fashioned words seem to get ignored.

Most text-related software seems devoted to navigating text, extracting concepts from text, and perhaps extracting snippets of text into an information manager. There seems to be little attention these days, however, to tools for creating text documents. I find this situation unsatisfying, as I believe existing text creation tools are wanting.

Microsoft Word does many things, but it does not necessarily do all of them well. Every competitor I've seen simply tries to copy Word, not to better it. Word saw some genuine innovation in the 1990s, adding features like AutoSummarize, but it has been stagnant since then.

One of the worst features of Word is its outlining capability. In the DOS era, there were several interesting outlining programs, but they never made a viable transition to Windows, as Word came to dominate word processing. Word's outlining is a basic hide-and-show hierarchy, and a clumsy one at that. There are many other possibilities for outlining.

I like some features of the Mac-based OmniOutliner program, particularly the ability to add columns such as tick boxes and numeric fields, as in a spreadsheet. But OmniOutliner remains a hide-and-show program.

The most interesting outlining program I have found is designed for lawyers. Developed by CaseSoft, NoteMap has several nifty features. It allows gathering non-contiguous items and putting them in another branch of the outline. This is very useful for connecting thoughts that have not thus far been related to each other. NoteMap has better annotation and formatting capabilities than many outliners, plus a basic sort facility (though more could be done with the sort). But more important than its features, it offers smooth performance where Word feels shaky.
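
To make the distinction concrete, here is a minimal sketch of what separates a plain hide-and-show outliner from one supporting spreadsheet-style columns and NoteMap-style gathering. Every name below is a hypothetical illustration, not any vendor's actual design.

```python
# A minimal sketch of an outline node supporting hide-and-show state,
# spreadsheet-style columns (as in OmniOutliner), and a NoteMap-style
# "gather" of non-contiguous items into a new branch. Hypothetical
# illustration only -- not any vendor's implementation.

class OutlineNode:
    def __init__(self, text, columns=None):
        self.text = text
        self.columns = columns or {}   # e.g. {"tag": "follow-up", "hours": 2}
        self.children = []
        self.collapsed = False         # the whole of hide-and-show outlining

    def add(self, child):
        self.children.append(child)
        return child

    def gather(self, predicate, branch_title):
        """Move every matching descendant, however scattered across
        the outline, under one new branch (matched subtrees move whole)."""
        matches = []
        self._collect(predicate, matches)
        new_branch = self.add(OutlineNode(branch_title))
        for parent, node in matches:
            parent.children.remove(node)
            new_branch.children.append(node)
        return new_branch

    def _collect(self, predicate, out):
        for child in self.children:
            if predicate(child):
                out.append((self, child))
            else:
                child._collect(predicate, out)


# Usage: pull every item tagged "follow-up", wherever it sits,
# into its own branch.
root = OutlineNode("Case notes")
a = root.add(OutlineNode("Witness A"))
a.add(OutlineNode("Check alibi", {"tag": "follow-up"}))
b = root.add(OutlineNode("Witness B"))
b.add(OutlineNode("Verify timeline", {"tag": "follow-up"}))
root.gather(lambda n: n.columns.get("tag") == "follow-up", "Follow-ups")
```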

It seems strange to me that something as basic as outlining receives so little attention in word processors. Too much attention is placed on formatting, and not enough on how users draft their thoughts.

Monday, February 13, 2006

 

the shrinking world of customized software

What has become of customized software -- software designed for an organization's or group's unique needs? I have previously argued that the shift from custom-built to off-the-shelf software has largely sidelined traditional user centered design approaches premised on bottom-up design. I am coming to realize that bottom-up design is under more pressure than ever, thanks to the rise of "on demand" web applications.

Businesses have largely given up on customized software, because they deem it too expensive. They switched to homogeneous off-the-shelf software, with ready-made interchangeable components. Much of this software was sold as a kit, to which a systems integration house would apply a modest customization. The integrators made far more money than the kit vendors: various forms of software customization and service could represent $7 for every $1 spent on the actual software license. Add to that the aggravating "plumbing" problems that the systems integrators seemed to find, and often never completely resolve, and it is no surprise corporate customers have sought greater simplicity from their software spending.

Vendors have responded with "on-demand" software solutions delivered over the web. No need to hire a systems integrator, we'll give you a complete package that meets all your needs, vendors promise. A good example is Salesforce.com, which offers a "customer relationship management" (CRM) system remotely delivered via the web. The solution is cheaper for the customer, and simpler too. Who could complain about that?

The drive to reduce costs and complexity is understandable. But businesses are often suckered by the myopic logic of software vendors, especially the new web application vendors. They speak of "cost of ownership", but restrict their definition of costs to only those items that appear in the IT budget. Now, IT budgets are considerable, but they generally aren't the predominant expense in service companies: employees are. How IT costs affect labor productivity is never addressed.
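
A back-of-the-envelope calculation shows why the omission matters. The figures below are entirely hypothetical, but the arithmetic is the point: a small daily productivity drag across a workforce can dwarf a visible saving on the IT budget.

```python
# Hypothetical illustration: all figures are invented, not data.
employees = 200
loaded_cost_per_hour = 50      # dollars; assumed fully loaded labor cost
minutes_lost_per_day = 10      # assumed drag from a poorly fitting tool
working_days_per_year = 230

it_budget_savings = 100_000    # assumed annual saving on licenses

productivity_cost = (employees
                     * (minutes_lost_per_day / 60)
                     * loaded_cost_per_hour
                     * working_days_per_year)

print(f"Visible IT saving: ${it_budget_savings:,.0f}")
print(f"Hidden labor cost: ${productivity_cost:,.0f}")   # ~$383,000
```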

On-demand software pretends that business processes are so uniform and standard that you don't need to customize anything. Unfortunately, things are a bit more complicated than that.

If any business process could be successfully supported by uncustomized software, it ought to be CRM. Sales is hardly an intellectually complex activity. But CRM has been a big failure, even after three generations of trying to get it right. I recently read a roundtable discussion among CRM vendors and IT analysts, and all admitted usability is still a massive problem. Sales people don't feel CRM supports their needs -- it is just a monitoring tool for the benefit of management.

Selling involves personal style, something one-size-fits-all software doesn't accommodate well. Forcing users to follow a set of rigid procedures doesn't translate into helping them make more sales.

What companies need is the ability to customize -- in a meaningful way. Yes, customization was expensive and fraught with technical glitches. But the problem wasn't the concept of customization, but rather what was required to achieve it. Too much focus and energy went into solving plumbing problems, not user problems.

On-demand vendors need to offer easy-to-use tools that allow customization of their offering. This customization needs to be fundamental, not cosmetic. Real customization isn't just about the UI; it is about the process of using the application in conjunction with one's daily work. Understanding real use processes is gained through contextual research across a range of potential users. One size doesn't work. The task is to learn just how many different sizes are needed.
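
The cosmetic-versus-fundamental distinction can be sketched in code. The snippet below is a hypothetical illustration, not any vendor's API: relabeling fields changes only the surface, while reshaping the workflow changes the process itself.

```python
# Hypothetical sketch: cosmetic customization (relabeling fields)
# versus fundamental customization (changing what steps the sales
# process actually contains). Not modeled on any real CRM product.
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    name: str
    required: bool = True

@dataclass
class SalesProcess:
    steps: list = field(default_factory=list)
    labels: dict = field(default_factory=dict)   # the cosmetic layer

    def rename(self, old, new):
        """Cosmetic: the screens change, the process does not."""
        self.labels[old] = new

    def replace_steps(self, steps):
        """Fundamental: reshape the process to fit how a particular
        sales team actually works."""
        self.steps = steps

process = SalesProcess(steps=[
    WorkflowStep("log lead"),
    WorkflowStep("qualify"),
    WorkflowStep("enter forecast"),
    WorkflowStep("close"),
])

# A relationship-driven team drops forecast entry and adds steps
# that mirror how its people really sell.
process.replace_steps([
    WorkflowStep("log lead"),
    WorkflowStep("intro meeting"),
    WorkflowStep("tailored proposal"),
    WorkflowStep("close"),
    WorkflowStep("post-sale check-in", required=False),
])
```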

Tuesday, February 07, 2006

 

innate and learned interactive behaviors

The gold standard for social science is research that is non-trivial, non-obvious, and widely repeatable. User research sometimes yields seismic discoveries, but small discoveries -- or even ambiguous ones -- are more the norm. But it would be wrong to conclude that because everyday user research yields only mundane discoveries, we know the major things we need to know, or that pedestrian user research simply isn't worth the effort.

Okay, few people are so bold as to argue user research is counterproductive. But the concept of user research as a routine aspect of usable design is being questioned from many quarters, with alternative concepts sometimes promoted as being more time- or cost-effective. I see three often-mooted ideas that can imply routine, classic empirical user research isn't necessary. One is that smart consultants already know the answers -- only dumb ones don't. Another is that there are newer, better theories that deliver the answers. The third is that there are better methods to deliver the answers.

The first skepticism toward user research I will call "charismatic usability." Advisors and consultants can take so much satisfaction in what they know that they lose sight of what they don't know. With usability charismatics, attention shifts from the credibility of the conclusion to the "credibility" of the messenger. Perhaps the consultant has secret knowledge, gained through previous work with very select clients, that is not known in the wider usability community. We also see this fantasy played out in scenarios imagining that "user experience" evangelism will storm the board room and become the pet interest of the CEO. Charismatic messages appeal to individuals and organizations in crisis, and charismatic seekers betray a desperation, as well analyzed by Harvard business professor Rakesh Khurana in Searching for a Corporate Savior: The Irrational Quest for Charismatic CEOs. The lovefest is often disappointing, with the charismatic expert over-promising, and under-delivering, a solution to a problem that is bigger than any single personality can solve. Just because you can convince a client she has a problem doesn't necessarily endow you with the capability to solve it.

Other people doubt that user centered design, which dates from the 1980s, is still a viable theory, and believe it needs to be replaced with something newer. Some believe a more complex theory is what is needed, perhaps activity theory. As a theory, UCD has never been coherent, and traditional cognitive science approaches don't offer many of the answers sought. While activity theory involves user research, it seems to have its priorities reversed: in AT, theory drives user research, instead of user research driving the development of theory. Like most Marxist-derived "theory", AT is not really a theory that can be proven, but simply a framework that guides one's focus of attention, for better or worse. How much value can be found by deciphering impenetrable jargon about the "zone of proximal development" or Hegelian dialectics is open to question. Researchers get excited by the phenomenology of artifact-mediated activities -- say, how office workers use a magnetic white board -- while in offices across the globe, artifacts themselves are disappearing into the ether. There is less and less physical manipulation to watch and theorize about. Theory isn't always in sync with what is happening on the ground.

Another approach might be called the "re-engineering" of usability: obliterate all the unnecessary steps. Some imagine that definitive answers can be found "agilely" through quick polling techniques: just ask a few people, and your problem dissolves. So-called "discount usability" -- a concept that has merit when used judiciously -- is rapidly degenerating into a wholesale dumbing-down of usability. The technique of testing five people -- plausible only under well-understood circumstances -- has collapsed into testing "3-5" people, soon to be "1-5" people, promising answers to all questions, irrespective of how many facets the problem involves.
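
The arithmetic behind the five-user figure is worth spelling out. It traces to a problem-discovery model (associated with Nielsen and Landauer) in which each test user exposes any given problem with some fixed probability L. The sketch below uses the oft-cited average of L = 0.31; the model's assumptions -- a uniform L and a well-bounded problem space -- are precisely what hold only in well-understood circumstances.

```python
# Problem-discovery model behind the five-user rule of thumb:
# if one test user exposes a given problem with probability L,
# n users are expected to expose 1 - (1 - L)**n of all problems.
# L = 0.31 is the oft-cited average; real studies vary widely.

def proportion_found(L, n):
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {proportion_found(0.31, n):.0%}")
#  1 users -> 31%
#  3 users -> 67%
#  5 users -> 84%
# 15 users -> 100%  (99.6%, rounded)
```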

If user research, which for me includes old-fashioned usability testing, is losing its mojo, how far can we rely on our pre-existing knowledge of users? To simplify, I propose we consider user behaviors of two kinds: innate behaviors and learned behaviors. Both are valuable, but both are limited, for different reasons.

Innate behaviors are essentially biological in character. Classical ergonomics, focused on the physical dimension of activity, looks at innate behavior: how easily people can grip a knob, or point a mouse. Innate behavior exists in the mental realm as well: the psychology of perception describes innate behaviors. Memory and attention functions are often innate, more a function of intrinsic capabilities than previous experiences. Generally speaking, people vary in their innate behavior by degree of ability, rather than by kind of ability (the obvious exception is for persons with a disability, who lack an ability that exists in the general population). If there are wide differences in a behavior, such that some people can successfully accomplish a task while others cannot, it seems unlikely the behavior is innate (an exception being if the performance required, or the person involved, is at one or the other end of a bell curve distribution).

Ergonomics and empirical cognitive science focus extensively on innate behavior. It is important to recognize that the number of behaviors that are innate is small in comparison to all observable human behavior. Still, these behaviors are often fundamental, and there is plenty to be learned about them. What is most interesting about innate behaviors is understanding the range of performance, especially the upper limits of what people can do. I am very excited by recent work in the field of "cognitive systems engineering" looking at information overload. This research is greatly extending, and qualifying, research from half a century ago on short term memory. Information overload is one of the most vital issues confronting design today, but we are only just starting to understand its complex dynamics outside simple lab experiments.

Designers can use findings on innate user behaviors knowing that if they were to do their own user tests, those tests would yield no new information. Unfortunately, the findings are sometimes not as useful as needed. The general finding may be valid, but the data does not address one's user group in adequate detail. Even basic anthropometric data is sometimes not available for certain population subgroups. Other times the finding does not fit the design problem appropriately: it is too general to predict how people would behave in a given circumstance, or the finding, while appearing robust (it has been repeated), has not been demonstrated with sufficiently diverse user groups and contexts to merit being considered a universal, innate behavior (the problem of how innate the behaviors of university psychology students really are). Truth is, studies on innate behavior are only of limited use to designers. Most design questions are not answered by data on innate human performance.

Other user behavior is not biological but learned. This distinction is not clear cut, because one can learn to improve one's performance of an innate behavior. The general population is able to remember facts (an innate ability), though it may be possible to improve one's recall of facts through training or techniques (a learned ability). Learning effects exist for both physical and mental performance. But a more serious problem is to treat a learned behavior as an innate one.

Most HCI research addresses learned behaviors, even though it generally fails to stipulate that limitation. HCI research tends to report its results categorically, as in "subjects behaved" in such and such a manner, with very limited discussion of historical, contextual and conditional factors relating to the subjects. (We may simply learn that subjects were heavy Internet users and averaged 22 years of age. Sometimes we learn more about the computer equipment used in the experiment, which is often thoroughly dissected.) Sometimes we find cryptic findings in the HCI literature, stating some tentative result about a highly particular, even artificial, activity, with recommendations that further study be considered (and sometimes only the investigator is sufficiently interested in the problem to do the further investigation). Practitioners -- people who need to design things -- often can't use such information to any extent. Even with these limitations, HCI research is valuable, not least because it tracks what we don't know, and nudges us to learn more about it. Much HCI research consciously focuses on novel technologies we know little about. There are benefits to knowledge for knowledge's sake, and all practitioners owe their academic colleagues appreciation for investigating areas that don't have an obvious payoff. But exploratory research, especially involving bespoke and idiosyncratic design configurations, is difficult to generalize from.

Two things are vexing about learned behaviors. First, applying any finding about learned behavior is difficult, since the finding is highly sensitive to which user population you have studied. Just because you found a strong user behavior among American college students doesn't mean you will find the same behavior among middle-aged Americans, or Egyptian college students, or some other user group. Second, learned behaviors change. People learn new behaviors, and old ones fall into disfavor. What is taken as gospel about the Web today will become idiosyncratic once Web 2.0 (or whatever it ultimately becomes known as) gains currency.

Much of our knowledge about users is highly perishable. It might be correct today, but we can't be sure how much longer it will remain valid. I am skeptical of many findings from the 1990s, simply because technology and user behavior have moved on since then. There are few eternal truths in an out-of-date HCI textbook. What remains useful are the methods of HCI (e.g., task analysis), not the specific findings of user trials.

We have a human tendency to want to think through the implications of a finding, to generalize from it. This tendency may be even stronger among commercial researchers than academic ones. Commercial researchers can be less cautious in their generalizations. If we encounter research data on something not previously studied, we want to treat it as the foundation of something of wide and lasting consequence, not as data limited to a particular group or a particular time.

Over the past quarter century, starting with seminal research by Tversky and Kahneman, cognitive science has developed a clearer understanding of expert judgment. Humans have a strong and consistent tendency to be over-confident about their understanding of a situation, and about their ability to predict outcomes. In HCI, we see these findings confirmed when comparisons of different expert usability teams show them arriving at widely different conclusions about what needs fixing. Indeed, one "strong" conclusion of HCI research, according to the strength-of-evidence index in the National Cancer Institute's usability research summary, is to avoid relying heavily on expert reviews and cognitive walkthroughs, the very kinds of activities that dispense with user research.

At least we have the routine feedback of user research to curb our temptation toward overconfidence. Let's hope.
