Life, the User Interface, and Everything

Timeless lessons in usability from the restaurant at the end of the universe.

Ah, the founding gurus of usability: Jef Raskin, Ben Shneiderman, Jakob Nielsen, Alan Cooper, Don Norman, and Douglas Adams. Douglas Adams? The author of The Hitchhiker’s Guide to the Galaxy and subsequent humorous sci-fi books featuring time travel, planetary construction, the original (and literal) Babel fish, and parties attended by Norse gods? Why, yes, in addition to being the master of the convulsive convoluted compound-complex sentence, Adams was also a keen observer of technology and its relation to its users. Like nearly all sci-fi and fantasy, the five-volume Hitchhiker’s trilogy [sic] is not really about an alternative reality somewhere or sometime else, but about our own reality right here and now, exaggerated along some dimensions to explore an issue or make a point.

In the case of the Hitchhiker’s series, Adams pokes fun at the fundamental conflict of interest between government and individuals, the subversion of the will of the people by the powerful few for the sake of commercial interests, the futility of war, and the complete inability of perfectly intelligent human beings to figure out how to divide a restaurant bill, among other things. He also took technology very seriously. He was generally positive about the possibilities of technology meeting human needs, some of which comes through in the Hitchhiker series (e.g., the simulation of day and night aboard an interstellar ship). However, he also used his books to ridicule our mis-design and misuse of technology. Most sci-fi stories have a generally positive view of the human-machine interface. Even when advanced technology is used for evil, it is at least easy to use for evil. When has Darth Vader ever said “oops”?

Detail of my dog-eared copy of HHGG.

Various sci-fi stories feature technology maliciously turning on its users, but in Adams’ case, the technology is truly trying to fulfill its intended function. Nonetheless, it fails due to problems at the interface between technology and its users. Thirty years ago, Adams presciently wrote of the pitfalls in high-tech user interfaces, yet we still repeatedly fall into them. Here, for example, is how he starts The Hitchhiker’s Guide to the Galaxy (HHGG):

Orbiting at a distance of roughly ninety-eight million miles is an utterly insignificant little blue-green planet whose ape-descended life forms are so amazingly primitive that they still think digital watches are a pretty neat idea. (p1)

(Editorial note: All quotes edited for brevity, even though it makes them less funny. All ellipses in original.)

(Spoiler note: If you don’t know the significance of the value of 42, and you plan to read HHGG, you may want to stop here. I’ll wait; you can probably knock it off in an afternoon. Otherwise, I don’t think I’m giving anything away.)

Technology as Style

It’s worth pointing out that the digital watches of 30 years ago were grossly inferior to what’s available today. They were expensive, bulky, feature-poor (forget about alarms or stopwatches; they might display a date, but not simultaneously with the time), and, most horrifically, they used power-sucking LEDs for display. This meant that to read the watch, the user had to push a button to light up the display temporarily. But technology marched on: LCDs replaced LEDs, the form factor shrank, prices fell, features were added, and these impressive advancements have yielded today’s digital watches, which are still not quite as usable as old mechanical watches. The difference now is that many of the ape-descended life forms have caught on, doffed these digital dead ends from their wrists, and instead use… their cell phones. Right. We’re back in 1870, using pocket watches, only at least pocket watches had an analog display.

What do you expect? Watches aren’t just worn to tell time. They’re worn (or not worn) to make a fashion statement. Digital watches were made and sold in the 1970s on the basis of technical cool-hood, not usability. We live in a society that values advanced technology, and for good reason. Technology has brought us many cures, comforts, and conveniences. However, this positive regard for technology generalizes to its misapplication in such things as LED digital watches.

In Adams’ Restaurant at the End of the Universe (REU), blind, enthusiastic application of the latest technology extends from the humanoid users to the automated products themselves, such as a Frogstar Class D robot tank. Here, it stops on a high covered bridge to confront an unarmed android named Marvin standing at the far end.

“Out of my way, little robot,” growled the tank.

“I’m afraid,” said Marvin, “that I have been left here to stop you.”

“You? Stop me? What are you armed with?” roared the tank in disbelief.

“Guess,” said Marvin.

“Er…how about an electron ram?”

This was new to Marvin. “What’s that?”

“One of these,” said the machine with enthusiasm. From its turret emerged a sharp prong which spat out a single lethal blaze of light. Behind Marvin, the wall roared and collapsed in a heap of dust.

“No,” said Marvin, “not one of those.”

“Good, though, isn’t it?”

“I’ll tell you what they gave me to protect myself with. Nothing,” said Marvin.

There was a dangerous pause.

“NOTHING?” The machine heaved about in fury. “Just don’t think, do they? Hell, that makes me angry! I think I’ll smash that wall down!” The electron ram stabbed out another searing blaze of light and took out the wall next to the machine. “Just ran off and left you, did they?” the machine thundered. “I think I’ll shoot down their bloody ceiling as well!” It took out the ceiling of the bridge.

“That’s very impressive,” murmured Marvin.

“You ain’t seen nothing yet,” promised the machine. “I can take out this floor too, no problem!”

It took out the floor too.

“Hells, bells!” roared the machine as it plummeted and smashed itself to bits on the ground below. (p52-56)

Well, we can’t criticize the electron ram for not working. It worked all too well. The problem was applying it where it shouldn’t be applied, just for the sake of impressing others with its coolness.

Since the era of LED watches, and despite the ascent of usability, little has changed. We designers continue to hop on the latest technological trend to cash in on coolness. Early web sites sported animated GIFs, then frames, then Flash intro pages, for no benefit to the user but merely because we didn’t have that capability before. Today we see gratuitous use of AJAX for things like carousels and lightboxes, which nearly always have inferior usability to more conventional designs. Thanks to the Apple iPhone defining touch screens as the latest cool technology (even though they have been around for years), touch screens are springing up everywhere in unusable places. Operating systems superficially ape the web because the web is where it’s at.

Take this pursuit of coolness to its logical extreme, as sci-fi tends to do, and you get your Joo Janta 200 Super-Chromatic Peril Sensitive Sunglasses from REU. These sunglasses don’t do anything to shield the user’s eyes from the sun, but who wears sunglasses for that? Sunglasses are for looking cool, and the Joo Jantas not only make you look cool, they encourage you to act cool:

[Joo Jantas] were specially designed to help people develop a relaxed attitude towards danger. At the first hint of trouble, they turn totally black and thus prevent you from seeing anything that might alarm you. (REU, p35)

Style Versus Substance

If cool technology can be attractive despite poor usability, so can attractive visual design. We make hardware, software, and web sites with fashionable colors and understated controls to achieve an uncompromised aesthetic. Adams acknowledges that striking visual design makes a good first impression, getting your users in the door. Take, for instance, the moment when it comes time for REU protagonists Ford Prefect and Zaphod Beeblebrox to, um, borrow a spaceship to make a discreet departure:

Zaphod’s attention was riveted on a ship standing [nearby]. It was a ship of classic simple design, like a flattened salmon, very clean, very sleek. There was just one remarkable thing about it.

“It’s so black!” said Ford Prefect. “You can hardly make out its shape. Light just seems to fall into it!”

Zaphod said nothing. He had simply fallen in love. (REU p141)

Unfortunately, the ship was programmed to fly headlong into the sun, which constitutes an altogether incompatible environment for the enjoyment of an uncompromised aesthetic. Trapped aboard as the ship sped to its destination and incineration, our heroes were hampered in changing this programming because, well, you can probably guess:

Said Zaphod, whose love affair with the ship lasted almost three minutes into the flight, “Every time you try to operate these weird black controls that are labeled in black on a black background, a little black light lights up black to let you know you’ve done it. What is this? Some kind of galactic hyperhearse?” (REU p152)

We design our own black ships, essentially the same except ours are at least theoretically intended to be used by users, rather than programmed to fly uninhabited to destruction. We hide physical buttons to achieve a sleek shape. We use minimalist icons and meaningless symbols rather than busy words for labels. We choose foreground and background colors to capture a mood rather than promote good contrast or consistent visual representation of controls. Sometimes aesthetics will conflict with usability, but usually a little design imagination can resolve these conflicts to the betterment of both aesthetics and usability.
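Contrast, at least, is one aesthetic trade-off we can quantify. Below is a minimal sketch in TypeScript (my own illustration, not anything from Adams or any particular design tool) of the WCAG relative-luminance and contrast-ratio formulas, which a design review could run to catch a black-on-black control before it ships:

```typescript
// Convert an 8-bit sRGB channel to linear light, per the WCAG definition.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of a "#rrggbb" color.
function luminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const [r, g, b] = [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// WCAG contrast ratio between foreground and background, from 1 to 21.
function contrastRatio(fg: string, bg: string): number {
  const [l1, l2] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// Black labels on a black background score a perfect 1:1, far below the
// WCAG AA minimum of 4.5:1 for normal text.
console.log(contrastRatio("#000000", "#000000").toFixed(2)); // "1.00"
console.log(contrastRatio("#767676", "#ffffff") >= 4.5);     // true
```

A check like this won’t design the ship for you, but it will at least flag the galactic hyperhearse.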

Natural Language, Natural Misunderstandings

As the exchange between the robot tank and Marvin indicates, advanced technology will bring us the long-sought natural language user interface. Well, we always knew that would be the ultimate UI; it has been forecast in countless other sci-fi books, movies, and TV shows. In the real world, efforts continue to achieve this holy grail of UI. It’s only a matter of having the right algorithm and sufficient computational power. Natural language UIs have an intuitive appeal. All sorts of subtle connotations are consciously or unconsciously embedded in verbal communication, making it very efficient. We often think with language, hearing the words in our heads. If we could just express those words to a computer, we’d have a more direct connection between our thoughts and computer use.

Adams, however, is perhaps one of the few sci-fi authors to present natural language UIs as not particularly effective for human-machine communication, no matter how much hardware is thrown at the problem. Take, for example, Hactar, the giant main strategic war computer of the Silastic Armorfiends of Striterax in Adams’ Life, the Universe, and Everything (LUE).

The Silastic Armorfiends of Striterax were engaged in one of their regular wars with the Strenuous Garfighters of Strug, and were not enjoying it as much as usual, when the Strangulous Stillettans of Jajazikstak joined in, [so] they ordered Hactar to design for them an Ultimate Weapon.

“What do you mean,” asked Hactar, “by Ultimate?”

To which the Silastic Armorfiends of Striterax said, “Read a bloody dictionary,” and plunged back into the fray. (p167)

Given these instructions, Hactar of course produced a small bomb that would destroy the entire universe, the Silastic Armorfiends of Striterax included. While that was certainly a perfectly reasonable interpretation of “ultimate weapon,” there can be little doubt that it was not really what the Silastic Armorfiends of Striterax wanted when they tried to use it to destroy the Strangulous Stillettan ammunition dump in one of the Gamma Caves of Carfrax.

Then there was Deep Thought, the city-sized computer built for the far more humanitarian purpose of ending the universal existential angst plaguing all sentient species:

“O Deep Thought computer,” [Fook] said, “the task we have designed you to perform is this. We want you to tell us…” he paused, “the Answer!”

“The Answer?” said Deep Thought. “The Answer to what?”

“Life!” urged Fook.

“The Universe!” said [his colleague] Lunkwill.

“Everything!” they said in chorus.

Deep Thought paused for a moment of reflection. “Tricky,” he said finally.

“But can you do it?”

Again, a significant pause. “Yes. But,” he added, “I’ll have to think about it. The program will take me a little while to run.”

Fook glanced impatiently at his watch. “How long?” he said.

“Seven and a half million years,” said Deep Thought. (HHGG p170-173)

In stark contrast to most software estimates, Deep Thought dutifully produced its deliverable as requested, on schedule. However, as in the case of Hactar, the deliverable was an entirely correct response to the spoken instructions, but simultaneously entirely useless for its intended purpose.

“Seventy-five thousand generations ago, our ancestors set this program in motion,” said [Phouchg to Loonquawl]. “We are the ones who will hear the answer to the great question of Life…!”

“The Universe…!” said Loonquawl.

“And Everything…!”

“Shh!” said Loonquawl. “I think Deep Thought is preparing to speak!”

“Good morning,” said Deep Thought at last.

“Er… Good morning, O Deep Thought,” said Loonquawl nervously. “Do you have… er, that is….”

“An Answer for you?” interrupted Deep Thought majestically. “Yes, I have. Though, I don’t think that you’re going to like it.”

“Doesn’t matter!” said Phouchg. “We must know it! Now!”

“All right,” said Deep Thought, and settled into silence again. The two men fidgeted. The tension was unbearable. “You really aren’t going to like it,” observed Deep Thought.

“Tell us!”

“All right,” said Deep Thought. “The Answer to the Great Question…”

“Yes…!”

“Of Life, the Universe, and Everything…”

“Yes!”

“Is…” said Deep Thought, and paused.

“Yes…!!!…?”

“Forty-two,” said Deep Thought, with infinite majesty and calm. (HHGG p178-180).

In the case of both Hactar and Deep Thought, the questions the users asked had a certain vagueness that the users didn’t appreciate. In effect, it was garbage in, garbage out. Far from being the ultimate human-computer interface, natural language is a fundamentally limited approach: mimicking natural human conversation in HCI will inherently produce imprecise results, often not what the user wants, because even natural human-to-human conversation produces imprecise results. A husband and wife know each other better than any computer can hope to; how often do they misunderstand each other? With a lot more work, a natural language UI may be okay when the user is only looking for a sufficing response, something vaguely close to what is wanted, like an answer from Wikipedia, but there will never be a time when all computer interaction is through natural language. At least I hope not.

Ironically, when humans need to be precise with each other, when there must be no mistakes and fast performance is key, they start communicating like a user with a computer: they go through great efforts to purge fuzziness from their communication. Consider ship captains, pilots and air traffic controllers, engineers, and lawyers. Each uses a formal, highly structured, domain-specific language that can take years of training to use, a language that can be incomprehensible to outsiders, very much like a computer language.
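To make the contrast concrete, here is a toy sketch in TypeScript (entirely my own invention, in the spirit of air-traffic phraseology rather than any real system): a tiny command language where an utterance either matches the strict grammar exactly or is refused outright, never guessed at.

```typescript
// The entire grammar: "HEADING <0-359>" or "ALTITUDE <feet>". Nothing else.
type Command =
  | { kind: "heading"; degrees: number }
  | { kind: "altitude"; feet: number };

function parse(input: string): Command | null {
  const heading = /^HEADING (\d{1,3})$/.exec(input);
  if (heading) {
    const degrees = Number(heading[1]);
    return degrees < 360 ? { kind: "heading", degrees } : null;
  }
  const altitude = /^ALTITUDE (\d+)$/.exec(input);
  if (altitude) return { kind: "altitude", feet: Number(altitude[1]) };
  return null; // Anything fuzzy is rejected rather than interpreted.
}

console.log(parse("HEADING 270"));            // { kind: "heading", degrees: 270 }
console.log(parse("turn sort of left a bit")); // null: say it precisely or not at all
```

The rigidity is the feature: both sides know exactly what was said, or they know that communication failed, which beats an Ultimate Weapon built from a dictionary definition.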

Well, if natural language has its limits in communication, maybe we should just bypass it and go straight to thought-controlled computers. On our own planet, labs are pursuing thought-controlled UIs, and it’s easy to believe that this will be the ultimate UI: no words, no actions at all, just thought. While I can see definite applications for such a UI, I’m more skeptical of its widespread use, because it so happens that many of our thoughts are even less precise than our speech.

“Forty-two!” yelled Loonquawl. “Is that all you’ve got to show for seven and a half million years’ of work?”

“I checked it very thoroughly,” said [Deep Thought], “and that quite definitely is the answer. I think the problem, to be honest with you, is that you’ve never actually known what the question is.”

“But it was the Great Question! The Ultimate Question of Life, the Universe, and Everything!” howled Loonquawl.

“Yes,” said Deep Thought, “but what actually is it?”

A stupefied silence crept over the men as they stared at the computer then each other.

“Well, you know, it’s Everything… everything,” offered Phouchg weakly. (HHGG p181)

Likewise, the Silastic Armorfiends of Striterax evidently didn’t think through the full implications of an “ultimate” weapon.

On the other hand, when computers provide a concrete structured representation of the problem and potential solutions, whether it be through a programming language or direct-manipulation GUI, they can help the user formulate his or her thoughts, enforcing a certain rigor to help users work out their true goals.

This seems to be a general characteristic of tool use, not just computer use. Working out a problem with pencil and paper using tools such as scaled drawings, diagrams, or mathematics has a way of illuminating contradictions and problems in our ideas that we would miss in a strictly mental representation. Even human language, for all its vagueness, helps clarify thought more than thinking alone. The very act of writing can help people realize things they hadn’t realized before. I hadn’t realized this until I wrote this paragraph. Maybe as designers we should be developing tools to help users think better, rather than attempting to develop a UI that provides a more “direct” connection to their thoughts as is.

Personality Problems

The idea behind a natural language UI reflects a deeper assumption that UIs would be better if the computer acted more like a human. Good UI design often employs metaphors to make the unfamiliar understandable to users, and it might seem appropriate to metaphorically represent technology as another human. Rarely does this work out.

[Trillian said to Zaphod,] ”I’ll send the robot down to bring [the interstellar hitchhikers] up here. Hey Marvin!”

In the corner, the robot’s head swung up sharply. It pulled itself to its feet and made what an outside observer would have thought was a heroic effort to cross the room. It stopped in front of Trillian and seemed to stare through her left shoulder. “I think you ought to know that I’m feeling very depressed,” it said. Its voice was low and hopeless.

“Oh, god,” muttered Zaphod, and slumped in his seat.

“Well,” said Trillian in a bright compassionate tone. “Here is something to keep your mind off things.”

“It won’t work,” droned Marvin. “I have an exceptionally large mind.”

“Marvin!” warned Trillian.

“All right,” said Marvin, “what do you want me to do?”

“Go down to number two entry bay and bring the two aliens up here under surveillance.”

With a microsecond pause, and a finely calculated micro-modulation of pitch and timbre (not enough that you could actually take offense), Marvin managed to convey his utter contempt and horror of all things human. “Just that?” he said. He turned hopelessly on his heel and lugged himself out of the cabin.

“I don’t think I can stand that robot much longer, Zaphod,” growled Trillian. (HHGG, p90-91)

The logic seems to work something like this: computers are intelligent; humans are intelligent. Humans have personality; therefore computers should have personality. Sort of the same reasoning behind why witches float. Anthropomorphizing technology typically fails because, first of all, it’s very hard to make technology act like people in all their complexity. However, even if we could make the technology work, representing technology as a human is still the wrong metaphor for the vast majority of situations. Considering the frequency of interpersonal conflicts and problems in day-to-day living, there’s a good chance you’ll just end up with a Marvin. Some people have annoying personalities, a feature which has been successfully replicated in artificial agents like Microsoft’s Bob and Clippy.

“Yeah, but that’s one wild coincidence, isn’t it?” [said Zaphod to Trillian.] “That’s just too…. I want to work this out. Computer!”

The Sirius Cybernetics Shipboard Computer switched into communication mode. “Hi there!” it said brightly.

Zaphod hadn’t worked with this computer for long but had already learned to loathe it.

The computer continued, brash and cheery as if it were selling detergent. “I want you to know that whatever your problem, I am here to help you solve it.”

“Yeah, yeah,” said Zaphod. “Look, I think I’ll just use a piece of paper.”

“Sure thing,” said the computer. “I understand. If you ever want…”

“Shut up!” said Zaphod, and snatching up a pencil sat down next to Trillian.

“Okay, okay,” said the computer in a hurt tone of voice, and closed down its speech channel. (HHGG p99-100)

What we have here is a failure to communicate what “user friendly” is supposed to mean. Microsoft’s Bob was apparently intended to introduce computers to people who were afraid of computers. The assumption by the designers (possibly geeks who knew no fear of computers themselves) was that such fear is irrational and emotional, and therefore should be addressed through emotion. They figured that if they just made the computer nice enough, fun enough, even cute, it would overcome users’ fears. We see some of the same tendencies today in certain web sites and apps with excessive text saying welcome to this most wonderful site/app, and please if you could be so kind as to do this, and thank you for doing that so very well, and terribly sorry but unfortunately you can’t do that now, would you like to hear a joke instead, and to solve this problem, simply just do that, that’s all, it’s simple, really, you’re not afraid are you?

Sorry, your session expired. To log in, please blah blah...

A little common courtesy is appropriate in some circumstances, but it shouldn’t get to the point where it interferes with the user’s tasks and goals. Take, for instance, the scene in LUE, when Zaphod Beeblebrox attempts a stealthy reconnaissance of a squad of ruthless white killer robot soldiers that have boarded his spaceship and seized the bridge. The problem: the door to the bridge. Anticipating ubiquitous computing, Adams recognized that with advanced technology ordinary objects would incorporate computers, including doors. Like the ship’s computer, the ship’s doors have a natural language user interface, high intelligence, and, most alarmingly, a friendly personality.

[Zaphod] inched his way up the corridor as if he’d rather be yarding his way down it, which was true. He was within two yards of the door to the bridge when he suddenly realized to his horror that it was going to be nice to him, and he stopped dead. He hadn’t been able to turn off the door’s courtesy voice circuits. The doorway to the bridge was concealed from view within it and he had been hoping to enter unobserved. He could just about make out the Sensor Field that extended out into the corridor and told the door when there was someone there to whom it must make a cheery and pleasant remark.

He edged himself towards the door, took a series of shallow breaths, then said as quickly and quietly as he could, “Door, if you can hear me, say so very, very quietly.”

Very, very quietly, the door murmured, “I can hear you.”

“Good. Now in a moment, I’m going to ask you to open. When you open, I do not want you to say you enjoyed it, okay?”

“Okay.”

“And I don’t want you to say that I made a simple door happy, or that it is your pleasure to open for me and your satisfaction to close again with the knowledge of a job well done, okay?”

“Okay.”

“And I don’t want you to ask me to have a nice day, understand?”

“I understand.”

“Okay,” said Zaphod, tensing himself, “open now.”

The door slid open quietly. Zaphod slipped through quietly. The door closed quietly behind him.

“Is that the way you like it, Mr. Beeblebrox?” said the door out loud.

The group of white robots swung round to stare at him. (LUE p78-80)

Users aren’t really afraid of computers. They’re not even afraid of ruthless white killer robot soldiers (well, maybe some should be). Users are afraid of not getting their work done, of wasting their time, of looking like an idiot in front of others. And the truth is, users do get stuck when using a computer, unable to figure out how to proceed; they do waste time, working on something only to have the computer blow it all away; and they do find themselves looking helpless and incompetent in front of others. A whole lot of welcomes, pleases, thank-yous, sorrys, and jokes aren’t going to do anything about that.

Sorry, but all your work for the day has been irrecoverably hosed. OK?

A Sirius Cybernetics Corporation Happy People Vertical Transporter [i.e., an elevator] took them down deep into the substrata. They were happy to see that it had been vandalized and didn’t try to make them happy as well as take them down. (REU p135)

The concept of giving user interfaces a human-like personality also ignores the fact that, except for those who are desperately lonely, users don’t need or want a personal relationship with their personal computers. Personality distracts the user towards the tool and away from the task. The way towards good human-computer relations is for the computer to allow the users to reach their goals as quickly and easily as possible without getting in the way. However, by representing the technology as an agent, you place an intermediary between the user and his or her work. Now instead of the user doing the task, the user is micromanaging some other agent to do the task, a situation bound to be frustrating to the user and annoying to the computer.

“Good afternoon, boys.” The voice was oddly familiar, but oddly different. It announced itself as they approached the airlock hatchway that would let them out on the planet surface.

“It’s [Eddie] the computer,” explained Zaphod. “I discovered it had an emergency backup personality that I thought might work out better.”

“Now this is going to be your first day out on a strange new planet,” continued Eddie’s new voice, “so I want you all wrapped up snug and warm, and no playing with any naughty bug-eyed monsters.”

Zaphod tapped impatiently on the hatch. “I’m sorry,” he said, “I think we might be better off with a slide rule.”

“Right!” snapped the computer, “Who said that?”

“Will you open up the exit hatch, please, computer?” said Zaphod, trying not to get angry.

“Not until whoever said that owns up,” urged the computer.

“Oh, god,” muttered Ford slumped against the bulkhead.

“Computer,” said Zaphod, again, “If you don’t open that exit hatch this moment I shall zap straight off to your major data banks and reprogram you with a very large ax, got that?”

Finally, Eddie said quietly, “I can see this relationship is something we’re all going to have to work at,” and the hatchway opened. (HHGG p136-137)

Smart is Not-so-smart

Closely connected with giving UIs a human-like personality is giving them human-like intelligence. Who wouldn’t want smarter software? If we can make products smart enough to understand human goals, wouldn’t they be better at fulfilling those goals? What if the computer could anticipate your needs like a human servant? No, make that better than a human servant?

This is the vision behind Smart Things. There have been ideas to make Smart workstations, Smart email, Smart cell phones, Smart furniture and appliances, Smart houses, Smart e-commerce and other web sites, and Smart location-based services. These products are intended to have the intelligence to infer user needs and goals by detecting patterns in the environment and in users’ behavior. It’s what makes Microsoft Word change your manually enumerated list into a numbered or bulleted format. Clearly, that’s what the user wanted, isn’t it?

“Hello,” said the elevator sweetly, “I am to be your elevator for this trip to the floor of your choice. I have been designed by the Sirius Cybernetics Corporation. If you enjoy your ride, which will be swift and pleasurable, then you may care to experience some of the other elevators that have been installed.”

“Yeah,” said Zaphod stepping into it, “what else do you do besides talk?”

“I go up,” said the elevator, “or down.”

“Good,” said Zaphod, “we’re going up.”

“Or down,” reminded the elevator.

“Yeah, okay, up please.”

There was a moment of silence.

“Down is very nice,” suggested the elevator hopefully.

“Good,” said Zaphod, “now will you take us up?”

“May I ask you,” inquired the elevator in its sweetest, most reasonable voice, “if you’ve considered all the possibilities that down might offer you?”

“Like what other possibilities?” he said wearily.

“Well,” the voice trickled like honey on biscuits, “there’s the basement, the microfiles, the heating system… er….” It paused. “Nothing particularly exciting,” it admitted, “but they are alternatives.”

“Holy Zarquod,” muttered Zaphod, “did I ask for an existential elevator?” (REU p43-45)

The problem with Smart Things is that they are, in effect, machines with a will. Rather than dumbly doing as they’re specifically commanded, smart things are free to interpret what they are told. They follow their own goals, which don’t always align with the user’s. Intelligence implies complexity: the behaviors of a Smart machine result from complex processing of many inputs interfacing with many memory units following an algorithm of many steps. Such complexity necessarily challenges the user’s understanding. In attempting to predict user needs in all their complexity, the machine itself becomes unpredictable. The result is that users no longer control the machine, but instead only influence it.

Rather than shielding the user from complexity, Smart approaches are more likely to force the user to develop a more complex model of the machine. Of course you should automate various processes in your technology, but because the result is likely to be less controllable, such automation should be focused on things the user doesn’t really care about. Ironically, technology should devote the greatest intelligence to the least important things. Tea, for example, is most important, as any tea drinker will tell you.

Arthur Dent had set out from his cabin in search of a cup of tea. The only source of hot drinks was a Nutri-Matic Drinks Synthesizer. It claimed to produce the widest possible range of drinks personally matched to the tastes and metabolism of whoever cared to use it. However, it invariably produced a plastic cup filled with a liquid that was almost, but not quite, entirely unlike tea.

He attempted to reason with the thing. “Tea,” he said.

“Share and Enjoy,” the machine replied and provided him with another cup of sickly liquid.

He threw it away.

“Share and Enjoy,” the machine repeated and produced another one.

Arthur threw away a sixth cup of the liquid. “Listen, you machine,” he said, “you claim you can synthesize any drink in existence, so why do you keep giving me the same undrinkable stuff?”

“Nutritional and pleasurable sense data,” burbled the machine. “Share and Enjoy.”

“It tastes filthy!”

“If you enjoyed the experience of this drink,” continued the machine, “why not share it with your friends?”

“Because,” said Arthur tartly, “I want to keep them. Will you comprehend what I’m telling you? That drink…”

“That drink,” said the machine sweetly, “was individually tailored to meet your personal requirements for nutrition and pleasure.”

Arthur decided to give up. (REU p9-11).

In contrast, when a technology follows a few simple but powerful rules, users can learn them and plan with them to accomplish their goals. For example, a simple interface that allows users to cut, copy, and paste a selection can be used to re-arrange, re-associate, copy, convert, export, and import data of various sorts. Yes, the user has to learn how to cut/copy/paste, but once it’s learned, it’s widely applicable. Yes, to actually get the end result the user wants, the user may have to do lots of cutting, copying, and pasting, but that often takes less time than trying to persuade a smart thing to do what is wanted. Yes, the user has to plan how to accomplish the task with the basic tools provided, but that’s often easier than figuring out how complex automation will behave in a given instance.
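For what it’s worth, the whole cut/copy/paste model fits in a few lines. Here is a minimal sketch in TypeScript (my own toy, not any particular editor’s implementation) showing how three dumb, predictable primitives compose into arbitrary rearrangements:

```typescript
// Three primitives, no intelligence, fully predictable.
class Editor {
  private clipboard = "";
  constructor(public text: string) {}

  copy(start: number, end: number): void {
    this.clipboard = this.text.slice(start, end);
  }

  cut(start: number, end: number): void {
    this.copy(start, end);
    this.text = this.text.slice(0, start) + this.text.slice(end);
  }

  paste(at: number): void {
    this.text = this.text.slice(0, at) + this.clipboard + this.text.slice(at);
  }
}

// The user supplies the plan; the tool just obeys.
const e = new Editor("world hello ");
e.cut(0, 6);   // clipboard = "world ", text = "hello "
e.paste(6);    // text = "hello world "
console.log(e.text);
```

Because each operation does exactly one obvious thing, the user can predict the outcome of any sequence of them, which is precisely what the Nutri-Matic denies Arthur.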

Dumb machines are good. They’re easy to understand, predict, and therefore control. Smart humans are good. The better they can understand and predict the machine, the better they can use it to accomplish their tasks, maybe even in ways the designer never anticipated. Smart humans may be hard to control, but maybe we shouldn’t be trying to control them.

Coercion and Trickery

Through the gloom huge shapes loomed, covered in debris. Most of them were split open or falling apart. They were all spacecraft, all derelict. Toward the rear of the building lay one old ship buried beneath even deeper piles of dust and cobwebs. Its outline, however, appeared unbroken.

Zaphod approached it with interest. He wiped away some grime and laid an ear against the ship’s side. What he heard made his brains turn somersaults.

“Transtellar Cruise Lines would like to apologize to passengers for the continuing delay of this flight. We are currently awaiting the loading of our complement of small lemon-soaked paper napkins for your comfort, refreshment and hygiene during the journey. Meanwhile, we thank you for your patience.”

Zaphod made some brief calculations. His eyes widened. “Nine hundred years….” he breathed to himself. That was how late the ship was. Two minutes later he was on board. He arrived on the flight deck.

From somewhere, a metallic voice addressed him. “Passengers are not allowed on the flight deck. Please return to your seat and wait for the ship to take off. This is your autopilot speaking.”

“You’re in charge of this ship?”

“Yes,” said the voice, “there has been a delay. Departure will take place when the flight stores are complete. We apologize for the delay.”

Zaphod approached the flight console. “Delay?” he cried. “Have you seen the world outside this ship? It’s a wasteland, a desert. Civilization’s been and gone, man. There are no lemon-soaked paper napkins on the way from anywhere!”

“The statistical likelihood,” continued the autopilot primly, “is that other civilizations will arise. There will one day be lemon-soaked paper napkins. Till then there will be a short delay. Please return to your seat.” (REU p82-87)

If there’s one thing computers are good at, it’s following rules. That’s good for making technology predictable, but too often the rules are arbitrary, forced on the user, and interfere with the users’ goals. Some UI designs leave the impression that they mostly exist to enforce some system’s rules, that it’s important that users behave just as regularly as a computer. We make systems with unnecessary required fields, or mandate that users create unnecessary accounts or comply with unnecessary formatting requirements. There are several ways humans represent dates, for example, but a website might only accept one, rejecting even minor variations on it.

Date fields accept yyyy/mm/dd, but not yyyy/m/d.
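Accepting the human variations costs almost nothing. Here is a minimal sketch in TypeScript (my own toy validator, not any particular site’s code) that normalizes the common separator and digit-count variations instead of rejecting them:

```typescript
// Accept yyyy/mm/dd, yyyy/m/d, and the -, /, or . separators people
// actually type, while still rejecting genuinely impossible dates.
function parseDate(input: string): Date | null {
  const m = /^(\d{4})[-/.](\d{1,2})[-/.](\d{1,2})$/.exec(input.trim());
  if (!m) return null;
  const [year, month, day] = [Number(m[1]), Number(m[2]), Number(m[3])];
  const date = new Date(year, month - 1, day);
  // new Date() silently rolls 2024/2/30 over to March; detect and refuse.
  const valid =
    date.getFullYear() === year &&
    date.getMonth() === month - 1 &&
    date.getDate() === day;
  return valid ? date : null;
}

console.log(parseDate("2024/07/04")); // accepted
console.log(parseDate("2024/7/4"));   // same date; why reject it?
console.log(parseDate("2024-7-4"));   // common variant, accepted
console.log(parseDate("2024/2/30"));  // null: genuinely impossible
```

The system still gets its one canonical date; only the burden of conversion moves from the user to the machine, where it belongs.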

Passwords are required to have special characters, but, bizarrely, some accounts reject certain special characters, making it difficult for users to manage their passwords. I’ve had one account that required passwords longer than 11 characters, while another required passwords to be exactly eight characters. How does this serve either the user or security?

Such formatting rules serve the system more than the users. Other characteristics of modern technology serve business interests more than users. We have the World Wide Web; in Adams’ universe, they have the room of Informational Illusions, as witnessed by earthling Arthur Dent:

[Slartibartfast] located the slot in the wall for which he had been searching, and clicked the instrument he was holding into it. At that moment a star battleship the size of a small Midland industrial city plunged toward them, star lasers ablaze and smacked a fair bit off the planet directly behind them.

Behind was an immense man speaking immense words. “These then were the Krikket Wars, the greatest devastation ever visited upon our Galaxy. What you have experienced…”

Slartibartfast floated past, waving. “It’s just a documentary,” he called out. “This isn’t the good bit. Terribly sorry. Trying to find the rewind control….”

“…is what billions upon billions of innocent…”

“Do not,” said Slartibartfast, as he floated by again, fiddling furiously with the thing he stuck in the wall, “agree to buy anything at this point.”

“…people, creatures, your fellow beings…” Music swelled; it was immense music, immense chords. “…experienced, lived through, or in many cases, failed to live through. Let us not forget, and I shall suggest a way that will help us always to remember, as represented by the symbol of the Wikkit Gate! There is not a world,” thrilled the man’s expert voice, “not a civilized world in the Galaxy where this symbol is not revered.” And with a flourish, the man produced in his hands a model of the Wikkit Gate. “Not the original, of course. This is a remarkable replica, hand tooled by skilled craftsmen, lovingly assembled into a memento you will be proud to own, in memory of those who fell.”

Slartibartfast floated by again. “Found it,” he said, “we can lose this rubbish. Just don’t nod, that’s all.”

“Now let us bow our heads in payment,” intoned the voice, and then said it again only faster and backwards. The man gabbled himself backward into nothing.

“You get the gist?” said Slartibartfast. (LUE p49-63)

For forcing its way on users, marketing efforts are probably the worst. Pop-ups, click-through ads, animated banners, and upgrade reminders all attempt to steal user interaction away from what users really want to what a salesperson wants. Perhaps users should accept this if they want free content, but it also appears when the user is already paying. I pay Verizon a pretty reasonable sum each month to be my ISP, yet when I use the web interface for my Verizon email, I get just as many obnoxious ads as a Hotmail account. When I buy a plane ticket from Northwest Airlines, I get confronted with so many interfering upselling offers that I fail to see the required fields. Like prompting the user to nod, these designs attempt to trick the user into buying. While the control to accept an upsell offer is bold and easy to see, the “No Thanks” control is muted. The visual hierarchy is manipulated to direct user attention to what the sales department wants users to do, not what the user wants to do.

User in Control

Adams’ Hitchhiker series describes a technological dystopia where users are often at odds with their own technology. User loss of control is the common thread joining designing for fashion, natural language user interfaces, agents with personalities, Smart things, and system- or sales-driven forcing features. In contrast, user-in-control is a basic usability principle critical to a positive user experience. As a technology designer, the key to not creating a black spaceship, a Marvin, an Eddie, and so forth is to recognize that the technology you create is first and foremost a tool. It’s not a fashion statement, not a friend, not an agent, not a rule-enforcer, and not a marketing channel. Its primary function is to be used by the user to get something done.

Summary Checklist

Problem: Evading poor user-interface designs as satirized by Douglas Adams in the Hitchhiker’s Guide to the Galaxy series.

Potential Solution: Keep the user in control:

  • Do not use trendy technology just to be trendy.
  • Seek aesthetics that do not interfere with usability.
  • Be skeptical about natural language or other “natural” communication interfaces.
  • Develop precise, structured communication methods that let users communicate precisely with technology.
  • Consider how technology’s representation of the task or problem to the user can help the user think better.
  • Avoid representing your technology as an anthropomorphized agent; allow users to work directly on important tasks.
  • Recognize that users’ fear of technology represents practical concerns that need to be directly addressed; it is not an irrational response to be handled emotionally.
  • Remember that “user friendly” means fast and easy to use, not polite and verbose.
  • Avoid trying to guess user intention in various contexts. Rather, provide a consistent interface of simple interactions that users can assemble to fulfill their intentions.
  • Avoid unnecessary requirements or arbitrary limits to make things easy for the system.
  • Avoid subverting usability for sales purposes.
