Monday, November 28, 2005

 

performance and usability

User Acceptance Testing -- determining how well a system performs functionally -- should not be confused with Usability Testing -- determining how well a system meets user needs. Geeks fixate on technical performance, while usability advocates focus on human performance in the context of technical systems. But now that usability specialists have established that what matters is not the system but the user, it seems this distinction is getting blurred again.

Enter Ajax. I've been slogging through some detailed writings on Ajax technology recently in an effort to catch up with the buzz. While only half understanding it all, I see indications that technical implementation is the difference between a positive and a negative user experience. Paper prototyping is useless for Ajax. Forget how Ajax works in theory; worry about how it works in practice. Getting a browser to steal a moment to request data from a server takes some programming finesse, dealing with subtle timings on both the client and server ends. JavaScript would seem especially unruly, given that it lacks the disciplinary structure of other programming languages and is prone to memory leaks. There seem to be ample possibilities for conflicts and hang-ups, which can grind user sessions to a slow march. I wonder if and when we will start seeing bad Ajax apps appearing on the net.

The more invasive web technologies become, downloading themselves onto a user's PC and affecting a user's memory resources, the more important live tests will become.

Sunday, November 27, 2005

 

give me network computing that's flexible

Computers are cheaper than ever, and more capable to boot. There is a strong temptation to put one in every room. A temptation tempered by the hassle of keeping these beasts up to date with patches for bugs and Trojan horses, security programs, the latest plug-ins, extensions, compatibility problems, upgrades to new standards, etc.

Our household has three "active" computers, plus two moth-balled ones with valuable info that needs occasional access. But I have become tired of being the in-house help desk. I want to outsource that function. Isn't outsourcing supposed to be the future?

Ten years ago Sun Microsystems talked up the idea of the network being the computer. It hasn't happened, really. Bandwidth has improved, Web apps are more capable, but the experience isn't the same as having programs on one's own PC. Even Ajax, valuable as it is, is just a workaround for the chasm between people wanting immediate interaction from quick devices and the cumbersome protocols of speaking to remote server farms.

Before I can ditch my computer for a dumb terminal that doesn't require my attention, several things need to happen. First, connectivity needs to be fast -- several orders of magnitude faster than current broadband. We seem to make content more memory-intensive as fast as we improve connection speed. It is much like the average speed of traffic in London being 10 km an hour in both 1905 and 2005 -- nothing changes. We need things faster in the future to handle what the future will bring. I don't want to download anything -- applications or content.

Second, we need to let users choose what they want, not just have a standard package of software choices forced on them. Many of my personal dramas involve hiccups between minor applications, not the major ones. I want choice -- the grand promise of our market system -- and I want service too -- the other promise. I don't want to be stuck trying to figure out which combination of configurations is causing a problem. I don't even want to buy a program to do these tasks automatically (the responsibility for fixing them is still mine.) I want a company that offers everything I might ever want, and guarantees it will all work together. But the current model of the system administrator couldn't be further from my ideal. I don't want to be punished for expecting things to work well together, being told I have to use software that is five years out of date. I want the newest innovations, screened and patched for compatibility.

The first obstacle, speed, is at least partially technical (though there is a political element -- where there is a will, there is often a way.)

The second obstacle requires a radical rethinking of how the software industry works. I would like to see someone take responsibility for certifying software for robustness. The "leave it to the market to decide" approach doesn't work well enough. Currently, if vendors offer incompatible software, they simply declare the birth of a new standard, and expect that others will follow it. If something doesn't work, it is someone else's fault, or vendors claim it is a minor annoyance for the privilege of getting a preview of something innovative. Microsoft enforces some level of compliance, but since it is not a neutral party, it can both act unfairly, and be ignored even when it is not throwing its weight around. Voluntary committees aren't fast, and don't produce consensus. What is needed are commercial, independent organizations with the credibility to disclose which applications don't cooperate with which. If companies knew their sales depended on it, they wouldn't be so eager to pass the time drain of compatibility on to consumers.

I don't want to buy (oops, rent) software directly from a vendor. I want, as a consumer, to have a "software maintenance organization" [SMO] do that for me, and supply what I need as I need it. Using a "dumb terminal" (today more likely just a screen), I don't care what server an application comes from. Different servers may speak different languages, but they can all speak to my dumb screen -- compatibility problems disappear. SMOs will bear the costs of dealing with upgrades and compatibility. When the real cost of incompatibility is accounted for, SMOs will bargain hard with vendors to clean up their act. If consumers value an innovation enough to bear the extra cost a disruptive change in software standards creates, then they will pay more for their service. But otherwise, vendors will need to contain the disruption they typically unleash, by adhering to standards, and testing their software in real situations. Even the biggest names in software rarely do in situ user testing -- seeing how well their products work on the home PCs of average families loaded up with heaven knows what.

It seems a long way off, but maybe the network will someday be the computer.

Sunday, November 20, 2005

 

is tabbed browsing changing the structure of content?

Tabbed browsing has been hailed as a revolution, though Ben Goodger at Firefox cites Microsoft research saying that having lots of windows open -- a problem tabs are meant to solve -- is not that big a problem for most users. I'll admit I have been slow to understand why users need multiple web pages open at once, though I'm starting to see more possibilities. Without doing testing myself (probably only browser developers would commission such research), I see the following possibilities for ordinary users:

The last possibility, a new tab opened when clicking on an embedded link, breaks a paradigm for web content. Web content developers have been told not to use embedded links within the body of an article, because users will leave the site and probably never return. Associated links are supposed to be placed at the side, or at the bottom of an article, to avoid the possibility of users being sucked away.

Among sites I routinely read, I notice that the New York Times now has embedded links within the body of articles, whereas previously they put links at the end of articles. I don't know if this change in practice is because of the rise of tabbed browsing, or because the Times is trying to sell their premium service and is tempting people at every opportunity.


Saturday, November 19, 2005

 

drama therapy

Ideo are moving ever more into Oprah territory. Their mission is to rid company employees of their negativity. If the brand of the moment is innovation, then employees don't dare be off brand. Ideo's answer to negativity is personas, happy personas.

You can read the "exclusive" excerpt of Tom Kelley's new book published in Fast Company. If I were to give Fast Company a persona, it would be a business version of Hello! magazine, fawning over profiled business personalities.

With a judgmental attitude not generally associated with the "California persona," Tom is very harsh on people who play "devil's advocate." He calls them toxic. Rather than allow employees to question ideas, better to trust Tom to tell you what employees should be thinking.

Buried in the article are some worthwhile ideas, but the presentation is so buzzword-laden and gratuitous it is hard to take seriously. Why is the "Hurdler" a necessary persona and not someone we might call the "Magician" or the "Fairy"? Sorry to play devil's advocate, but my intuitive side detects a lot of bullshit, and my logical side isn't persuaded either. There is a Tony Robbins quality when Tom says "the personas are about 'being innovation' rather than merely 'doing innovation.'" Why not "be innovation" by trying fire walking? Is the goal to feel good, or to achieve something? Innovation is more toil, and less happy talk, than Tom admits.

Good ideas need articulate champions, not thought police. Kelley's innovation personas borrow liberally from Edward de Bono's Six Thinking Hats. The difference is that de Bono recognizes a role for the devil's advocate (the black hat), while Kelley just wants -- and expects -- everyone to buy in to a new idea. That is not only unrealistic (outside a sheltered environment such as a beginning art school class), it is counter-productive. Even hard-nosed experimentation doesn't necessarily produce clear answers in which the facts speak for themselves. Facts need to be interpreted, and interpretations need to be argued over to achieve clarity.

Thursday, November 17, 2005

 

the quiet revolution in information architecture

While I can't escape "doing" information architecture, I don't consider myself an information architect -- someone who truly focuses on the discipline. When I first encountered information architecture maybe five years ago, it seemed like common sense to me. I worked for many years as a technical information specialist, so working with classification schemes for text databases delivered over the Internet seemed old hat.

How times have changed. Information architecture is now a discipline with great depth. It is no longer just common sense. There has been a quiet revolution in the past two years, exploring new standards and web technology, and experimenting with new conceptual approaches. Information architecture has gone from a field that seemed overly concerned with defining itself to a field that innovates, being experimental and empirical. I notice the gap between what I have a good knowledge of in IA and what is good practice is widening.

What hasn't changed is the capacity of IAs to be modest and practical. I found a posting on IA slash lamenting how boring information architects can be. IAs need to give themselves more credit.

Tuesday, November 15, 2005

 

games people (and interfaces) play

Computers are complex. People are complex too. Who gets the last laugh?

I've identified at least three games that people and computer interfaces play with each other.

1) "Make a wish (but be careful what you wish for.)" In this game, people tell the computer exactly what they want, and the computer happily obliges. We assume here that people are very clever, and computers are rather less so. This game has a "feel good" factor to it: we are superior to computers.

Now, imagine you could get your computer to cook for you. You might see an interface a bit like one of those "design your own" noodle menus. You choose ramen (noodle type) + chicken (meat) + black sauce (sauce). You get back an assemblage of gunk that tastes of nothing. You realize that cooking something tasty isn't that simple.

2) "Spot the difference (that makes a difference)." In this game, the computer has done all the hard work of gathering the details about different options. Every conceivable option under the sun is available. You just need to choose what you want.

Imagine the computer does your grocery shopping. You want to get some potato chips, and are offered 400 choices. Which one do you want? You feel a bit overwhelmed trying to figure out which one would be best (the extensive listing of ingredients doesn't seem to help either.) You decide that other people must know the answer, so you look at user reviews. Every option has 80% of feedback comments giving five stars, with a disgruntled 20% minority giving one star. It doesn't seem to matter which chips you pick; it's always the same distribution of star ratings.

3) "Guess what I want." In this game, the computer keeps guessing what you want for dinner. Finally, the perfect servant!

The computer finally figures out that you want tofu, after an hour of wrong guesses that assumed you were a carnivore. It cooks the tofu, serves it to you, and you pout. "No, no, no, this isn't what I want at all!" The computer asks what is wrong, and you comment that the tofu needs more salt. After adding more salt, the computer is again told it isn't right. It needs more pepper. And so on, and so on. At the next meal you are offered a decent tofu dish, only you are so tired of thinking about tofu that you now want a cheese sandwich. More guessing for the computer.

Computer interfaces play their strongest game with fussy eaters. If you hope to avoid being humiliated by your computer, try eating brown rice only.

Thursday, November 10, 2005

 

open source blues

Open source is a lovely idea -- kind of like world peace, or a pollution-free world. Sadly, ideals don't translate into reality without a bit of disappointment.

I want to give Firefox the opportunity to shine. Heaven knows browsers can stand improvement. But whenever I invest a bit of time to enhance Firefox, I get burned.

I spent the weekend adding various extensions -- things that actually make Firefox qualitatively different -- which required me to download a newer version than the one previously loaded on one of our home PCs, and to deal with annoying questions from the Firefox development team. Firefox is annoyingly preachy, by the way. Note to developers: you are making software, not saving the world. You won't be getting a Nobel Peace Prize for your efforts.

A few days later, something has blown up, and now nothing is there, apart from a reloaded, completely clueless version of Firefox that doesn't even know my bookmarks.

Yes, I am getting this for free, but I'd rather pay someone, even Microsoft, a few bucks to assure some kinks are worked out ahead of time. My time is more valuable than the retail cost of the software. What problem does open source solve? Companies aren't the evil, payment isn't the evil; the evil is how monopoly can stifle innovation. When open source encourages innovation, great. But what problems does open source unleash? Often crappy beta software, enough to turn even an experienced computer user into a cynic. What a tedious waste of time for users to try to choke-proof software, creating back-up files, because there are no robust systems in place to verify the compatibility of add-ins. The wild west of software can be exciting, but it is dangerous. Trouble is, you don't know that you are on the bleeding edge until something innocent-looking fails. I am not a sucker for the taunts saying "hey, you don't have our latest build!" Even supposedly stable builds can be too flaky.

UPDATE

If we are going to live in a multi-browser world, we should have a common file for bookmarks. I am hoping to find a way to synchronize bookmarks saved in either Firefox or IE. IE has a clumsy utility to import and export bookmarks (I have had no success figuring it out). Firefox only lets you import bookmarks from IE, but doesn't let you export them to a common file folder. Who is playing fair? If users are being offered a real choice, and your product is supposed to be better than everyone else's, why try to prevent them from defecting to a competing product?

If I can never get bookmarks saved in Firefox into IE, but can do the reverse, then I will only want to save bookmarks in IE, which encourages me to use IE more. What is obnoxious is the entire notion of a "default" browser, where everything is supposed to live.
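In principle, the common file already half-exists: both IE's export utility and Firefox store bookmarks in the old Netscape bookmark HTML format. Here is a minimal sketch -- function names are my own, and real bookmark files also carry folders, dates, and icons that this ignores -- of pulling entries out of such a file into a browser-neutral structure that either side could re-import:

```python
import re

# Netscape-format bookmark files are HTML full of <A HREF="...">title</A>
# entries; this pattern grabs the URL and the link text of each one.
LINK = re.compile(r'<A\s+[^>]*HREF="([^"]+)"[^>]*>([^<]*)</A>', re.IGNORECASE)

def extract_bookmarks(netscape_html):
    """Return a list of (title, url) pairs from a Netscape-format bookmark file."""
    return [(title.strip(), url) for url, title in LINK.findall(netscape_html)]

# A tiny sample in the shape both browsers export (attributes vary per entry):
sample = '''<DL><p>
<DT><A HREF="http://www.nytimes.com/" ADD_DATE="1132500000">NY Times</A>
<DT><A HREF="http://www.mozilla.org/">Mozilla</A>
</DL><p>'''

print(extract_bookmarks(sample))
# [('NY Times', 'http://www.nytimes.com/'), ('Mozilla', 'http://www.mozilla.org/')]
```

From a neutral list like this, writing out a fresh bookmark file for the other browser is mostly a matter of reversing the template -- the hard part is everything the sketch skips, like folder hierarchy.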

Wednesday, November 09, 2005

 

issues in enterprise usability

Jakob Nielsen has posted an "Alertbox" column on enterprise usability, a topic that has become a keen interest of mine over the past year. It isn't terribly informative, especially given the magnitude of the issues involved. He does admit that "group-level usability and enterprise usability are less well defined: they've been researched less and are more variable" than consumer usability.

A couple points of disagreement:

JN: "Total cost of ownership (TCO) is often one of the most important usability metrics at the enterprise level. "

Michael: TCO is a vendor metric used to sell replacement systems. It really isn't about usability at all. The real metric is ROI, which involves the very elusive concept of measuring the productivity a software application delivers to workers. Developing methods to assess human productivity in enterprise systems is just beginning. It is very difficult, but almost anything is better than the total neglect it receives currently.

JN: "for enterprise usability, we need to study the people who run the organization and who know the pain points at levels above an individual contributor's job. Customer roundtables are a good supplement to field studies: they bring together a small group of sysadmins or managers to discuss their own experiences with larger issues of the product's use."

Michael: Talking to managers is important, but customer roundtables are the wrong way to do it. Customer roundtables are no different from inviting managers as stakeholders to a requirements roundtable. In my experience you get a few anecdotes that seem interesting, but have no way to know how big the problem might be, or what else you are missing. Research with managers needs to be grounded in the same contextual research approaches used for end users, combining observation and discussion around artifacts.

Tuesday, November 08, 2005

 

how much can we rely on user testing alone?

For many projects, user testing is the only proper user research conducted -- research with real people who offer concrete insights into user needs. User testing is certainly valuable, but is it enough?

I believe the usability community has become distracted by the question "how many test subjects are enough?" Champions of "discount usability" developed a mathematical formula that supposedly proves that adding more test subjects after five or six yields little new information. The formula works if you don't care about the underlying assumptions, but if you are curious about them, you find the formula only works in ideal situations, where you are doing a "health check" on something you expect mostly to work, with a homogeneous group of test subjects. If either of these conditions isn't true, all bets are off about how many users you need.
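The formula in question is presumably the problem-discovery curve popularized by Nielsen and Landauer: the share of problems found with n subjects is 1 - (1 - p)^n, where p is the chance any single subject exposes a given problem. A quick sketch (p = 0.31 is the figure usually quoted, not something established here) shows how much the "5 users" conclusion leans on that assumption:

```python
# The "discount usability" problem-discovery curve: the expected share of
# usability problems found by n test subjects, assuming every problem has
# the same probability p of surfacing with any one subject.

def problems_found(p, n):
    """Expected proportion of problems uncovered by n subjects."""
    return 1 - (1 - p) ** n

# With the commonly quoted p = 0.31, five subjects find roughly 84% of
# problems -- the basis of the "5 users is enough" claim.
print(f"p=0.31, n=5: {problems_found(0.31, 5):.0%}")

# But if problems are subtle (say p = 0.10), five subjects find only
# about 41%, and the rule of thumb quietly collapses.
print(f"p=0.10, n=5: {problems_found(0.10, 5):.0%}")
```

The single shared p is exactly the "homogeneous subjects, routine health check" assumption: once different user groups see different problems, or problems vary widely in subtlety, no one number for n falls out of the math.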

User testing is an opportunity to test hypotheses about what users need, within the context of other design constraints. Despite the obvious annoyance of running tests with users who offer no further enlightenment, and the extra cost of such superfluous testing, one also needs to acknowledge that one never knows in a preliminary test what will test poorly, and can't prejudge the scope and scale of issues. Proper iterative testing, where every design requirement is subjected to multiple tests, will throw up issues everywhere. Because the scope of testing is fluid in iterative testing, one can't say how a result will necessarily settle. You play with alternatives to get reactions as long as there is diversity in reaction, and project time and budget to explore this diversity.

Another complication arises when a single design is meant to serve diverse users. One client I have worked with on many projects segments users into various age categories, and also whether they are consumers or business customers. All these segments need to be covered in testing because the client's stakeholders organize their products and processes around these segments. But prior to testing, one is never sure how these segments might differ in reaction to prototype designs. They might all react the same way, in which case checking all their reactions seems like overkill in retrospect. What often happens is that one or two subjects differ from the overall test subject population. Is this an expression of their segment preferences, or is it noise? Because the groups have been subdivided so finely, it can be hard to tell.
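A rough back-of-the-envelope calculation shows why the noise question is so hard. Suppose (purely hypothetically) that 20% of all users react negatively to a prototype regardless of segment; with only two subjects per segment, a "deviant" subject in some segment is close to a statistical certainty:

```python
# Sketch of why tiny per-segment samples make segment effects hard to
# separate from noise. Hypothetical assumption: 20% of ALL users react
# negatively to a prototype, with no real segment effect at all.

def p_deviant_in_segment(p, n):
    """Chance a segment of n subjects contains at least one divergent reaction."""
    return 1 - (1 - p) ** n

def p_some_segment_deviant(p, n, segments):
    """Chance that at least one of several equal-sized segments looks deviant."""
    return 1 - ((1 - p) ** n) ** segments

# Two subjects per segment; six segments (e.g., three age bands crossed
# with consumer/business):
print(f"a given segment looks deviant: {p_deviant_in_segment(0.20, 2):.0%}")
print(f"some segment looks deviant: {p_some_segment_deviant(0.20, 2, 6):.0%}")
```

Under these assumptions a given segment shows a divergent subject about a third of the time, and some segment almost always does -- even though, by construction, the segments don't differ at all. That is why one or two outliers in a finely subdivided test population tell you so little.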

User testing produces wonderful data, but it can be difficult to draw over-arching conclusions from it. I therefore encourage clients to do pre-design user research, so user preferences and needs can be at least partly established before designs are actually tested. Such user research also lets one learn whether a design is bombing because of a design compromise forced by an external requirement -- you guessed users wouldn't be keen on the compromise, and indeed they weren't.

Monday, November 07, 2005

 

user research is about data

I am worried about how methods are dictating design outcomes without the involvement of user research. The signs of method-itis are often hard to detect, because many methods purport to infer what users want without doing the donkey work of actually consulting users themselves.

User centered design is about looking at what users need and want, which comes from extensive research. No extensive research, no user centered design. But some people imagine that because they want to help users, and because they are ever so empathic toward users, they are therefore user centric. They simply "know" what users want, through their own experiences, by dissecting hypothetical issues, or by walking in the shoes of users, imagining what the user would do. Somehow they forget the central issue: you can't know what users want except by actually looking at their behavior and preferences from all perspectives. People who think they know what users want without doing research have either worked on a very narrow issue too long, or are not very competent. The road to hell is paved with good intentions. The road to user centered design is paved with facts.

Data may seem lifeless, and a sideshow to the main story. Market researchers are often criticized for generating confusing data that doesn't point the way forward. Market researchers are frequently guilty of developing shallow data: surveys that don't answer why users behave as they do, and focus groups that don't reveal what users actually do and truly need, apart from a random collection of impressionistic feedback.

Useful user research, whether quantitative or qualitative, involves structure in collection and analysis. Unfortunately, such research structure is often lacking in design approaches that take the end as the beginning (i.e., design to fit users to a preconceived activity, scenario, or use.) Frequently these approaches are based on a fictional person: an imagined typical user, or an imagined extreme user (an outlier case). The users weren't identified according to how representative they were, and they weren't studied over enough time, or across enough variant circumstances, to determine which themes are genuinely common and which are unique.

I am a big fan of the possibilities of qualitative research, but I find that math phobes make the worst qualitative researchers, because they don't understand the notions of sampling and significance. One can be qualitative by doing a detailed structured sample of a small group of people to probe inter-relationships, or light observation of a wide group of people to find common themes. But whatever the approach, it needs to be robust, ideally drawing on multiple perspectives. I highly recommend the books of DVL Smith and JH Fletcher on the relationship between qualitative and quantitative research.

The major question any method needs to answer is: how do you know your conclusions are right? Unacceptable answers are that people say it sounds right when you tell them, or that other people who follow the same method reach the same conclusion. Acceptable answers are that you used multiple research techniques to search for disconfirming evidence, and that you tested the design implications of your research conclusions through user testing.

Sunday, November 06, 2005

 

whiteboard politics

The humble whiteboard is perhaps the best example of an "external representation." External representation is about the power of a physical representation of a concept to facilitate discussion among people. A fair body of theory and evidence has developed to support what many would consider common sense: being able to look at something while discussing it is helpful. The question is: helpful for whom?

I have a tendency to grab a marker during a discussion and write on a whiteboard. I idealistically imagine everyone is benefiting. I benefit by getting my thoughts down in front of me, where I can see them and critique them, if I have too many to sort through mentally. Others can follow more easily what it is I am talking about, especially the connections between the concepts.

While whiteboards are a cognitive facilitator, they are not necessarily a social facilitator. I notice a different dynamic around whiteboards than around conference tables generally.

While whiteboards should ideally be treated as scratch paper, they often are treated as powerpoint slides. Blame it on years of cutting arts funding in public schools.

 

user agency

The longer I ponder the question of how users' motivation affects their behavior, the more I ponder the question of agency. Agency is closely related to motivation, but at the same time different. People are motivated based on their perception of how much in control they are, and how that shapes their expectations of what will happen. Agency is a psychological construct, defining how much people believe outcomes are the result of what they do. Agency is defined by the individual, but shaped also by society, and human-computer interaction is being shaped by both ends.

Garden variety psychologists -- the kind who write advice columns, rather than study rats and undergraduates -- often talk about attribution. Is the consequence of my action the result of my skill, or some external factor? Some people are very "me" focused, others see wider circumstances as determining outcomes. We can see parallels in the wider contemporary debate about human and social nature. Some commentators talk about our (often genetic) drives, others talk about chaos, connections and non-linear determined external events.

In the computer realm, agency has been most clearly articulated in gaming. There are games of skill, and games of chance. Agency in gaming is often a function of the amount of feedback a game offers, or the lack of it, or the lag in it.

What I sense is that interaction design is starting to shift away from the notion that users define outcomes. Compared with a decade or two ago, users may be more willing to give up control to their computers. Life is too complex to be a control freak.

The organizational psychologist Yiannis Gabriel has talked about the increasing tendency for workers to desire to "be discovered." Previously, hard steady work was the recipe for advancement. Now, "luck, self promotion, image and find[ing] oneself in the right place at the right time" are the formula.

We can see the shift from self-agency to reliance on externals reflected in computer applications. The old ideas of user control -- WIMP interfaces, for example -- seem a bit old-fashioned now. Software, especially web applications, promises the benefits of luck and opportunity over user control. We roll the dice awaiting "personalized" recommendations based on impersonal data-mining techniques. We sign up for networking schemes online, hoping to make useful connections. A raft of social software and ambient computing solutions is being developed to fulfill our desire that the wider world will know what we want, and respond accordingly.

Will our hopes be gratified, or will we be disappointed? If the latter, we will demand more control again, and focus on how we should be telling computers what we want, instead of the reverse.
