Archive for the ‘Learning from Lusers’ Category

Of Windows and Workarounds

Friday, June 4th, 2010

Sometimes luserhood is a cry for help. The penultimate post in the Learning from Lusers series.

[I notice] a heavy rag on top of the monitor, covering the cooling slits. Thinking I should remove it so it doesn’t overheat, I lifted the rag…. Under the rag was a chicken breast and another piece of food in baggies. [A user] said some of the guys heat their lunch that way…. I asked what was wrong with the microwave in the break room. He said the PC was closer and on their way out the door. Umm-mmm! Nothing tastes as good as chicken warmed by the radiation coming off a CRT! (Shark Tank)

There’s no denying the brilliance of users. One has to wonder why monitors aren’t designed with a lunch-heating feature. It would make a great complement to the cup holders standard on most CPUs. Maybe we would never have migrated to LCD displays since they lack a native nacho-nuking feature, and there wouldn’t now be great mountains of cathode-ray tubes leaching heavy metals into groundwater outside various major third-world cities. At the very least, we could’ve recycled the CRTs as microwaves.

We all know that average users are not very good designers. However, nobody told them that. It’s perhaps the most human activity to modify things to make them better for oneself. We’re all tool users. We modify things in order to modify other things to get what we want. It all started 100,000 years ago when a nice little rock was minding its own business, decorating a dried stream bed like God intended, when some hominid scooped it up, and the next thing it knew it was bashing open wildebeest bones for yummy and nutritious marrow. That wasn’t how the rock thought the day would end.

The same thing happens with more advanced technology. We designers created these products to support specific intended tasks, users, and operational environments. We put this sophisticated technology in the hands of hominids with slightly more intelligence than that bone-bashing fellow of 100,000 years ago, and it should be no surprise that they end up using it in surprising ways. What they’re really doing is overcoming limitations we put in the product by the very process of building for specific intended purposes. The users are creating work-arounds for conditions we failed to appreciate, re-purposing, re-applying, and re-designing the technology we provide to deal with a goal that we didn’t consider.

Due to ignorance of all the issues that go into a design (such as the need to keep a CRT from overheating), user work-arounds may have dubious value (on the other hand, as long as the user doesn’t have to pay for a toasted monitor, perhaps they are achieving optimal personal value). Nonetheless, in working around a limitation in the technology we provide, users follow the same design principles we do, or, at least, the same principles we should be following.

If you had only 15 minutes to explain how to make something usable, you’d probably go through the basic design principles. There are various sets of user interface design principles out there. Style guides typically start with a list of principles, and perhaps each of us has a personal set. They all pretty much cover the same ground, varying mostly by emphasis and nomenclature. Here is my set:

A user interface should have:

  • Efficiency
  • Clarity
  • Flexibility
  • Consistency
  • Tolerance

Users can’t recite these principles (or other versions of them), and yet whenever they deviate from our intended use of our products, it’s nearly always to improve the product with regard to these principles. For example, roasting your chicken on the monitor is more efficient because it means accomplishing the same goal with less walking than using the microwave.

Efficiency

Users can access… a big application in the data center… by way of a link on their procedures web page [that] points to the [primary] applications server. Time comes for IT to do maintenance [so they] change the link on the procedures page [to] point to the backup server during maintenance…. During maintenance, users claim they can’t access the application…. The users thought going through the procedures page took too many mouse clicks, so they created their own shortcuts on their desktops [that] continued to point to the primary app server [that was] down for maintenance. IT tells the customer that it wasn’t really an outage, but a case of user error. “No way,” says customer. “It’s still IT’s fault. You should have known that users don’t follow established procedures, and taken that into account in your maintenance plan.” (Shark Tank)

Ideally you would plan for every stupid thing the users might do, but that has some practical limits. On the other hand, it’s inadequate to plan only for “established procedures,” especially when the procedures were established by you. You have to expect some deviation from the normative path users follow in using your product. Specifically, you can count on users finding and employing an easier way of using your product. That is, they want greater efficiency, the ability to get their work done with less time and effort, physical or mental. They are merely following the same design principle you should have followed in designing the product. Frankly, your users have more enjoyable and important ways to spend their precious moments here on Earth than clicking the mouse and waiting for screen refreshes.

In this case, to be blunt about it, maybe IT shouldn’t be doing IT if it doesn’t understand that making shortcuts and favorites is an “established procedure,” one far more established than anything specific to a particular web app. You have a choice in your designs: make things convenient for your users or try to force your users to make things convenient for you and your developers.

Guess how much cooperation you’re going to get from the latter? Do you think users want to do additional work to make life easier for you? Chances are they acquired your product because they thought it would make their life easier. In the business world, users are the reason you have a business at all that needs IT. As far as the users are concerned, IT only exists to serve the users.
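
If you accept that users will make their own shortcuts, the design answer to the maintenance story is to give them something stable to point at: a single front-door address that IT can silently repoint, so every bookmark, desktop shortcut, and procedures-page link keeps working. Here’s a minimal sketch of the idea in Python; the hostnames, the port, and the little redirect service itself are all hypothetical, just one of several ways (DNS aliases, load balancers) to get the same effect.

    # Sketch only: a stable "front door" that IT can repoint during maintenance.
    # Hostnames and port are hypothetical; a DNS alias or load balancer would do
    # the same job without any code.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The one setting IT flips during maintenance. Users never see it change.
    ACTIVE_BACKEND = "https://app-primary.example.com"   # or app-backup.example.com

    class FrontDoor(BaseHTTPRequestHandler):
        def do_GET(self):
            # Whatever path the user bookmarked gets forwarded to the live server.
            self.send_response(302)
            self.send_header("Location", ACTIVE_BACKEND + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), FrontDoor).serve_forever()

The point isn’t the mechanism; it’s that the address users copy into their shortcuts is one you can keep honest.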

Company changes e-mail systems and begins to remove deleted e-mail automatically after two weeks instead of leaving that to users. And that makes one user howl…. He stores all his e-mails in the Deleted Items folder as soon as he reads them, because he can get them out of his in-box with just a click on the big X. [It was] suggested that he create a folder for items he didn’t want to disappear. His first reaction: “You mean I have to make a new folder called Deleted Mail when there already is one?” (Shark Tank)

Part of the issue is a bit of confusion on what “delete” means. Unless they come from a professional editing background, users often think Delete means “remove” (as in “remove this from my Inbox”) rather than “erase.” But the other issue is a limitation of the software. Users need to archive old emails for later reference, perhaps because we place limits on the size of Inboxes. This user, and presumably every user, was expected to create a folder, which means figuring out how, and then figuring out how to move a message to the new folder. That’s a lot of work when there’s already a convenient Deleted Items folder that users know how to use. Worse, even after the folder is created, moving a letter to it means going through a drawn-out process. It takes four clicks in my email client to do it through the menus. Drag-and-drop is much better (for users who know how to drag and drop), but it’s still not as easy as clicking the Big X on the toolbar. The archiving-by-deleting usage anti-pattern is the user’s way of telling you to make archiving easier.

Or maybe it’s for dealing with another limitation you set.

This company’s e-mail system has a 200MB limit for mailboxes — but not for the “deleted items” folder. More than one person had set up elaborately nested folders in their deleted items where they happily had 5 or 6 GB of mail. (Shark Tank)
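
Both stories are users asking for the same thing: archiving that costs one action, like the big X, instead of create-a-folder-then-drag. A minimal sketch of what that could look like; the Mailbox class and folder names are hypothetical, not any real client’s API.

    # Sketch: a one-action "archive" that is as cheap as the big X.
    # Folder names and this Mailbox structure are hypothetical.
    class Mailbox:
        def __init__(self):
            self.folders = {"Inbox": [], "Archive": [], "Deleted Items": []}

        def archive(self, message):
            # One call, bindable to a single toolbar button or keystroke,
            # so archiving competes with Delete on effort.
            self.folders["Inbox"].remove(message)
            self.folders["Archive"].append(message)

    box = Mailbox()
    box.folders["Inbox"].append("Q3 budget follow-up")
    box.archive("Q3 budget follow-up")
    print(box.folders["Archive"])   # ['Q3 budget follow-up']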

Nowhere are user work-arounds for efficiency more likely than with security limitations, where users will come up with all kinds of inventive ways to subvert carefully designed security measures.

[Technician] discovers that one user’s workstation is generating a barrage of network traffic that’s maxing out the server. [The user] confessed that to defeat the pesky screen-lock timeout foisted upon them by the security folks, he and his buddies would bring up the e-mail client, put a pipe cleaner around the F9 key to keep it stuck down, and voila! — the screen wouldn’t lock. That night, he forgot to unstick the key, so it ran all night. Unbeknownst to the user, the F9 key forces a rather server-intensive refresh operation. (Shark Tank)

Partly this sort of hardware redesign is due to users not seeing security as really being their job. Making sure their subordinates have completed their TPS reports, now that’s clearly their job. Security is just something they have to work through to get to doing their job. It’s utilization work, not operational work. Users will share their passwords with other users in the name of getting the “real” work done today, and worry about security tomorrow. The other factor in security is that users get immediate rewards for cutting corners, while harmful consequences are very low probability events. Depending on the organization, that may be an optimal balance of competing job requirements.

Ratcheting up required password length or complexity can hurt security more than it helps. Passwords are hard to type, what with all those special characters and masking, so of course users will thwart screen locks if they can figure out how. Passwords are hard to remember, so of course users will write them down or re-use them or both. They’ll avoid changing them unless forced to. It’s not just about convenience. Required password complexity is approaching the point where it is no longer humanly possible to remember passwords.
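
A back-of-envelope calculation shows the squeeze. Even under the generous assumption that every character is chosen at random (real users choose words and patterns, which are worth far fewer bits), the policies keep demanding more length and more character classes than memory comfortably holds; the numbers below are purely illustrative.

    import math

    # Back-of-envelope only: bits of guessing entropy if every character were
    # drawn uniformly at random from the allowed set. Humans don't do this,
    # so real passwords are weaker and policies ratchet up even further.
    def entropy_bits(length, charset_size):
        return length * math.log2(charset_size)

    print(round(entropy_bits(8, 26), 1))    # 37.6 bits: 8 lowercase letters
    print(round(entropy_bits(12, 94), 1))   # 78.7 bits: 12 chars of full printable ASCII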

The best solution is not to try ever harder to force users to conform to security policies. It’s to design systems that make it easier to achieve the desired level of security. In the case of access control for example, it’s probably time we stopped relying on multiple long complex memorized passwords and moved to a more humane technology.

Clarity

She gets the prospect e-mails from [the real estate] service every day, she [then] forwards each message to herself at the same e-mail address…. When I suggested that she just keep the original e-mail and use that, her only answer was, “But I don’t have time to look at the e-mail when it comes in!” (Shark Tank)

One solution to the problem of providing efficient email archiving is to simply not have any. Rather than impose an arbitrary limit on the users’ Inbox size, just let everything accumulate in there by default, like GMail does. If users need to find an email, there’s always searching and sorting. This is an excellent solution, removing from the user the sizable utilization work of regularly archiving email. However, the Inbox is more than just a storage area for most users. As the name metaphorically implies, the Inbox serves as a list of things to do: correspondence needing attention. While in general only recent email needs attention, depending on the volume of correspondence, email that still needs attention can pretty easily be pushed out of sight, and out of sight means out of mind. Re-emailing important letters to oneself is this user’s crafty work-around for keeping key work items in awareness. It improves clarity, better communicating the state of the task, the actions required of the user, and the effects of the user’s actions.

Clarity is where you’re most likely to see glaring user-interface failures, especially with low-end users. The failure is undeniable: the user is either stuck and cannot figure out how to proceed, or the user does something most definitely detrimental to completing their task with a computer, like failing to get a computer in the first place. Most of this Learning from Lusers series has been concerned with clarity, achieving it through analogy, proximity, appropriate terminology, and in general balancing the familiar with the magic that’s inherent in advanced technology. Users likewise realize when things are unclear and will take matters into their own hands to deal with it.

Many things can interfere with clarity, and a cluttered UI is just one of them. If you don’t provide a means to declutter the UI of irrelevant data, users will try to do it themselves, and even succeed:

Every morning, every order in the division’s order log would delete…. The tech who has spent months investigating the problem notices that one particular user always seems to be logged in just before the orders disappear. So he calls him. When you logged on this morning, did you notice if the order log was empty? tech asks. “No, it was full,” user replies. “That’s why I delete all those orders.” What? asks tech. “I clear out the order log on my computer as soon as I log on,” says user. “It’s really hard to see today’s orders if yesterday’s orders are in there, too.” (Shark Tank)

Once again, the user intelligently deals with overwhelming data by using the tools available. Apparently, the log was not sorted chronologically (despite being a “log”), so the user had to impose his own filtering to be able to track the data relevant to him. He made the task information clear for himself because the UI didn’t. That would be the happy ending, except that wasn’t the only thing the UI failed to make clear. It also failed to make clear that the log was shared data used by multiple users and systems; the user thought he was working on his personal copy of it.
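
The view the user was manufacturing by hand, today’s orders only, is exactly what the UI could have supplied without touching the shared data. A minimal sketch; the order records and field names are hypothetical.

    from datetime import date

    # Sketch: filter what the user sees; never mutate the shared log.
    shared_order_log = [
        {"order_id": 101, "placed": date(2010, 6, 3), "item": "widgets"},
        {"order_id": 102, "placed": date(2010, 6, 4), "item": "sprockets"},
    ]

    def todays_orders(log, today=None):
        today = today or date.today()
        return [order for order in log if order["placed"] == today]

    # The clerk gets a clear view of today; yesterday's orders stay put for
    # everyone else who depends on the log.
    print(todays_orders(shared_order_log, today=date(2010, 6, 4)))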

Professor Oldy McOlderton picked up email with relative ease. He’d frequently check for new messages and his students appreciated his attentiveness. One quirk that [IT technician] Victor noticed, though, was that he’d always BCC himself on each message he sent. He was aware but skeptical of the “Sent Mail” folder, favoring his inbox as a repository for all received and  sent messages. Because of the Professor’s preference, he asked Victor to change the software so that his BCC’d messages would appear to be coming from the person he was sending the email to. Victor explained that this was, in fact, impossible with the way email works, and went on to explain why it works the way it does. “Well, call the person in charge of Email and get it changed!” (The Daily WTF)

It makes perfect sense to have an Inbox and a separate Sent folder. Everyone knows that. Every email client and web app out there does this. And every app is wrong. None of us think of our communications as Sent versus Received, not at the top level anyway. We think of them as ongoing exchanges between the parties. If you needed to review some correspondence, it would be far clearer what happened if you could go through the exchanges chronologically, first what you said, then what they said, then how you replied, and so on. Having a separate Sent folder and Inbox is like recording a phone conversation with each speaker on a separate tape.

Professor McOlderton is far more sophisticated than the rest of us, relying as we do on included messages in email replies to keep track of the thread of the conversation. He’s fixing the problem by BCCing letters to himself so his Inbox accurately represents the chronology of exchanges. The only problem is that the stupid email client doesn’t know to show self-BCC’d messages as having “come from” the receiver so that he can separate out exchange threads by sorting on the sender’s name. We need to put McOlderton on an open-source email client project. Maybe then we’d finally get an app where searching by email address searches both the From and To fields.
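
What the professor is approximating by hand is a conversation view: one chronological stream per correspondent, built by matching an address against both From and To. A minimal sketch of that query; the message records are hypothetical, not any real client’s storage format.

    # Sketch: treat sent and received mail as one stream, match From OR To,
    # and sort chronologically to reconstruct the exchange.
    messages = [
        {"from": "prof@uni.edu",    "to": "student@uni.edu", "when": 1, "body": "Draft comments attached."},
        {"from": "student@uni.edu", "to": "prof@uni.edu",    "when": 2, "body": "Thanks, revised version inline."},
        {"from": "prof@uni.edu",    "to": "student@uni.edu", "when": 3, "body": "Looks good, submit it."},
    ]

    def exchange_with(address, mail):
        # The search the professor needs: the correspondent in either direction.
        thread = [m for m in mail if m["from"] == address or m["to"] == address]
        return sorted(thread, key=lambda m: m["when"])

    for m in exchange_with("student@uni.edu", messages):
        print(m["from"], "->", m["to"], ":", m["body"])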

Flexibility

The service representative who is setting up the printer… needs to reboot the PC [so he first checks] with the woman who handles shipping…. The shipping software can’t be shut down and restarted during the working day [because it] will automatically close out the current day and start up with the date of the next business day…. “Sure, I’ll take care of it,” shipping woman says…. Then, with the shipping software still running, she reaches down to the power strip and turns off the machine…. After a few seconds, shipping woman calmly turns the computer back on…. Since it ended in an error state, it rebuilds and corrects some data files and picks up where it left off, on the current day…. [The] programmer [made] sure his software can recover from a major power outage, but [didn’t] give the user a clean way to shut down and restart on the same day. (Shark Tank)

Any time you add automation, you need to consider building in overrides. If you don’t, the user will override it one way or the other. If all else fails, they’ll pull the plug. Including overrides is basic to designing in flexibility, where flexibility is a product’s capacity to allow the user to complete the tasks by whatever means they feel is best. In the case above, user-created flexibility means compensating for a capability the designer left out.

We saw another example earlier of a user re-purposing a product’s feature to improve flexibility. The user who forwarded real estate service email to herself was improving the clarity of the task, but also improving its flexibility. She didn’t want the email app dictating when she should deal with the incoming email. She’d do it when she’s good and ready. And in the meantime, keep on forwarding it.

Avoiding arbitrary limits is another basic technique for making your product flexible. For example, you might think three separate Phone Number fields are enough (home, business, and cell), but get that kind of non-normalized data scheme in front of the right users and next thing you know they’re creating two records for the same business object to be able to have sufficient Phone Number fields. Comment fields are another way of providing flexibility for unexpected business conditions. If there isn’t anywhere to track essential but arbitrary information, users will just have to improvise.

This regional retail chain relies heavily on fliers it mails to customers, but not everyone is clear on how to use the customer list…. It seems some store clerks have been using the name and address fields for notes, so mailings are going out to “Bad Check Smith” and “Deceased Jones.” While going through and removing the inappropriate data, I ran across this comment: “Customer informs us that customer is dead.” “Was that by e-mail, voice mail or snail mail?” (Shark Tank)

I’m pretty sure Windows 7 can communicate with The Other Side. It was a design idea a user had.
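
Circling back to the Phone Number fields: avoiding arbitrary limits translates directly into the data design. Instead of exactly three hard-coded columns, store a contact’s numbers as a labeled one-to-many list. A minimal sketch, with hypothetical field names.

    from dataclasses import dataclass, field

    # Sketch: a contact that holds however many labeled numbers the business
    # actually has, instead of fixed home/business/cell columns.
    @dataclass
    class PhoneNumber:
        label: str      # "home", "business", "cell", "warehouse", "after-hours"...
        number: str

    @dataclass
    class Contact:
        name: str
        phones: list = field(default_factory=list)   # zero or many; the user decides

    acme = Contact("Acme Supply")
    acme.phones.append(PhoneNumber("business", "555-0100"))
    acme.phones.append(PhoneNumber("warehouse", "555-0101"))
    acme.phones.append(PhoneNumber("after-hours", "555-0102"))
    acme.phones.append(PhoneNumber("fax", "555-0103"))   # no duplicate record required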

Consistency

I live in Italy…. One day, my brother-in-law told me his brother’s laptop wouldn’t work anymore and asked if I can help…. The machine hadn’t been able to boot for the last three days, though it worked perfectly before then…. I told him there’s probably some corrupted driver, and the first thing to do is back up his documents. I booted from a floppy and checked his folders. When I looked into the Windows directory, I noticed a bunch of files named “A,” “B,” “C,” “1,” “2,” and so on — and a few Italian translations of original file names, like FINESTRE.EXE instead of WINDOWS.EXE. “Why on earth did you do this?” “Well, I was looking into the folders one day, and I saw that if you clicked on a filename you could rename it. So I did. Took me three days, too.” (Rinkworks)

We’ve seen this user design anti-pattern before as an example of a failure of the desktop metaphor, which represents all files as documents (in “folders”) when in fact some files are machinery (better shown residing in an electronics cabinet). But more to the point of this post, why do users feel the need to go through system and program directories and change things around? Frequently, it’s to get more consistency, where the system’s displays and controls have an appearance and behavior similar to some other reference the user is working from. That reference may be another part of the same system or, as in this case, something outside the system: here, the native language of an Italian user who naturally wanted files to have names consistent with it.

Tolerance

It’s the late 1980s, and… a user… complains that his cat has reformatted his hard drive. It all started when he pulled the manual out of the box and began to read the thing. [The user] gets especially excited about [DOS] batch files. He carefully and systematically wrote a neat little batch file to perform each DOS command: copy, mode, delete and, oh yeah,… format c:/s. Then the user discovers the hot-key utility. This would let him take a command or batch file and associate it with a function key…. One by one, he made each function key (F1, F2 and so on) represent a batch file that he had just created. Once that was complete,… his feline friend… jumped off a shelf and walked over his keyboard. One little paw was all it took to format the hard drive. (Shark Tank)

For those keeping score, the user earns +1 for reading the manual, +1 for designing for high efficiency, and -1,000,000 for not designing for tolerance, for resisting the tendencies for error and providing means for rapid recovery in the event of an error. The cat gets +15 votes on UX Exchange for astutely demonstrating the flaw in the design.
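
The missing million points were one cheap barrier between a paw and an irreversible command. A minimal sketch of the idea: routine commands stay at one keystroke, while the destructive one demands a deliberate confirmation. The command names and the prompt are hypothetical.

    # Sketch: keep frequent, harmless commands instant, but make the
    # irreversible one require something a stray paw can't type.
    def copy_files():
        print("copying...")

    def format_drive():
        print("formatting C: ...")        # irreversible

    DESTRUCTIVE = {format_drive}

    def run(command):
        if command in DESTRUCTIVE:
            answer = input("This will erase the drive. Type YES to continue: ")
            if answer != "YES":
                print("cancelled")
                return
        command()

    run(copy_files)      # runs immediately, hot-key friendly
    run(format_drive)    # stops and asks first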

No one likes to think they’ll make a mistake, so tolerance in user work-arounds often takes a back seat to efficiency, although to be fair, the above example wasn’t so much a human factors issue as a cat factors issue. If users do work-arounds for greater tolerance, it’s usually only after painful experience. As professional designers we know to build tolerance into our products by making certain errors impossible (e.g., by using a dropdown list rather than a free-form text box), providing means to recover from other errors (e.g., Undo and helpful error messages), and, if all else fails, warning the user of potentially dangerous actions (e.g., with verification messages). We design for efficiency but are wary when that same efficiency also introduces intolerance. Well, usually.

Remote user calls about a Windows problem, and this help desk… begins by telling him to press Control-Escape. User: “I don’t have a Control key.” Sure you do, at the lower corners of your keyboard. User: “No, I don’t have Control keys because I pried them off. They kept getting in the way while I was typing Word documents and messing everything up. What do you use Control keys for anyway?” (Shark Tank)

It’s a little known fact that most word processors will blow away your entire document if you type the magic phrase, “A fish, a frog, a two-foot log.” Yes, it will -if instead of hitting the shift key for “A” you instead hit the control key, resulting in all content being selected and then replaced by the space typed next. Lucky is the user that figures out something happened related to the control key. For many users, they were just typing along and bang! Their whole document vaporized in a flash like someone somewhere hit it with a powerful Mizmo beam.

Like hot-keys in DOS, accelerators like Ctrl-A in Windows are there for efficiency, and are marvelously good at that, so we certainly don’t want to do away with them all, like the user above did. However, sometimes we forget to consider tolerance when selecting commands to have accelerators. While a Select All accelerator makes sense for small text boxes, it’s probably not a good idea for an entire document. Users don’t need to select the entire document often enough to justify an accelerator; it just won’t save that much time of the user’s life over using the pulldown menu. Meanwhile, Ctrl-A lies there waiting to trap the unwary. I’d wager users more frequently hit Ctrl-A by accident in a word processor than on purpose.

The Insert key as an accelerator for Overtype mode is another trap, easily tripped by accident and difficult to diagnose. First, like Caps Lock, the Insert key functions as a mode key, but, unlike Caps Lock, there’s no feedback for it on the keyboard (Caps Lock at least gets a distant LED that lights up when activated, which is hardly adequate anyway). Second, it’s labeled “Insert,” but since the default mode is Insert, what it usually does is enter Overtype mode (in contrast to Caps Lock, which indeed primarily enters Capitals Locked mode). A user will have a hard time understanding why text is overtyping because the user hit “Insert.” Overtype mode is used so rarely that it’s questionable if we even need it in modern office productivity apps (Office 2007 stopped letting the Insert key toggle it by default). Users are thus far more likely to accidentally enter Overtype mode than to use it on purpose. If you really need overtype, then make it accessible only via the menu. Personally, I’d like to see the Insert key be an accelerator for, oh, I don’t know, maybe inserting (e.g., a spreadsheet row).

Fortunately, we provide some tolerance for accidental accelerator activation by providing Undo. Undo fixes typing Ctrl-A-something, along with Ctrl-X, Ctrl-V, and Delete, among others. However, Undo only partially fixes accidentally hitting Insert: we can bring back the text we over-typed, but only by erasing the new text. The users have to pick which is less work to redo manually. Also, not all accidental accelerator activations are fixed by Undo. In many office productivity apps, hitting Ctrl-N when the window is maximized can also give the appearance of blowing away the user’s entire document, but Undo has no effect. The correct recovery from hitting Ctrl-N is to close the window, which just so happens to be precisely the thing not to do to recover from hitting Ctrl-A-something. For users who don’t understand what the control keys are for, let alone which control combination does what, how should they react if their document suddenly appears to go blank? I know: pry off those frigging control keys to make sure it never happens again.
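
Since Undo carries so much of the tolerance load, it’s worth remembering how little machinery the basic version takes: record enough of the prior state to reverse each edit, and replay it on demand. A toy sketch with a whole-buffer history; real editors are far more granular, and the class here is hypothetical.

    # Toy sketch of undo: every edit pushes the previous state onto a stack.
    class Buffer:
        def __init__(self, text=""):
            self.text = text
            self.history = []                 # previous states, newest last

        def replace_all(self, new_text):
            # Roughly what Ctrl-A followed by a keystroke does to a document.
            self.history.append(self.text)
            self.text = new_text

        def undo(self):
            if self.history:
                self.text = self.history.pop()

    doc = Buffer("A fish, a frog, a two-foot log.")
    doc.replace_all(" ")    # the whole document vanishes in a flash
    doc.undo()              # ...and comes right back
    print(doc.text)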

Accommodation: The Sixth Principle

We cannot anticipate every way users may want to use our product. If users are going to work around our designs anyway to fit their idiosyncratic needs, then perhaps we as designers should help them. And indeed, many lists of UI design principles cover accommodating the users, where the UI provides the means to adjust to variations among the users or their usage of the product. Humans all vary in their skills, experience, inclinations, goals, and tastes, so a usable product should have the means to adapt to individuals. Typical accommodations include:

  • Adjustable default field values, sort orders, file directories, and peripheral selections.
  • Variable language, units of measure, and other internationalization.
  • Environmental lighting compensation (e.g., daylight versus nighttime display).
  • Variable use of aural alarms and other means to control the stridency of alerts.
  • Selectable display of fields or table columns.
  • A favorites or bookmarks feature.
  • Command, toolbar, menu, and keyboard shortcut customization.
  • Adjustable text sizes.
  • Designing for accessibility.

Accommodation is a sound principle, but challenging to implement. One approach is to make the UI as accommodating to as wide a range of users and uses as possible, often by building in redundancy. For example, the UI can provide keyboard access to the menu through both the F10 key (as is the platform standard) and the slash key (as was the convention in a legacy system) so users used to either can use the UI unimpeded. However, a wide-range UI can mean design compromises and complexity (e.g., how does a user enter a slash now?), and sometimes redundant alternatives are mutually exclusive (e.g., maybe the legacy app used F10 for something else).
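
Redundant bindings of the F10-and-slash sort are cheap when both keys simply map to the same command, so each population keeps its habit. A minimal sketch of such a keymap; the key names and the menu action are hypothetical.

    # Sketch: two keys, one action, so platform-standard users (F10) and
    # legacy-system users (/) both get the menu their way.
    def open_menu_bar():
        print("menu opened")

    keymap = {
        "F10": open_menu_bar,   # platform standard
        "/":   open_menu_bar,   # legacy convention
    }

    def on_key(key):
        action = keymap.get(key)
        if action:
            action()

    on_key("F10")
    on_key("/")

The compromise mentioned above shows up immediately: once the slash is a command key, typing a literal slash needs some escape hatch.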

Another approach is to have the product attempt to automatically modify itself to the particular user based on inferences from the user’s behavior. For example, the File menu can include menu items for the most recently used documents, inferring that users often want to work on the same document they worked on previously. This can work but can also harm usability if the automation makes the wrong inferences, so you need to be selective on what you automate.
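
The recent-documents menu is about the simplest version of this kind of inference: remember what the user touched, most recent first, and offer it back. A minimal sketch; the file names are made up.

    # Sketch: a most-recently-used list, on the inference that the document
    # opened last is the one the user probably wants next.
    RECENT_LIMIT = 4
    recent_files = []

    def record_open(path):
        if path in recent_files:
            recent_files.remove(path)        # bump it back to the top
        recent_files.insert(0, path)
        del recent_files[RECENT_LIMIT:]      # keep the menu short

    for doc in ["budget.xls", "memo.doc", "budget.xls", "tps_report.doc"]:
        record_open(doc)

    print(recent_files)   # ['tps_report.doc', 'budget.xls', 'memo.doc']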

A third approach is to provide the user with explicit controls to re-work the product to fit their needs. This is typically done through features such as an Options dialog or a macro facility. This also works, but the problem is that it pushes design decisions back onto the user. The user has to first discover that the accommodating features exist, then make the correct decision on how to use them, then figure out how to activate the feature. If they make the wrong choice, for example, failing to take into account the Cat Factor, they may end up worse off than when they started. We need to design and document our accommodating features such that they help guide users to make the right choices. You have to accommodate the accommodations to the users’ skill, knowledge, and inclinations.

So we should certainly provide means to accommodate our users, but only after taking our best shot at following and balancing the other design principles, and adapting the product as much as we can to the typical user, including adapting the accommodations themselves. If you don’t adapt your product to your users, then one way or another, your users will. And it ain’t pretty.

Summary Checklist

Problem: Adapting our products to the users so that they don’t have to.

Potential Solution: Following the basic principles of good user interface design as applied to your expected users, their tasks, and their work environment. Specifically, design the product to be:

  • Efficient, minimizing the time and the physical and mental effort needed to complete a task; when two tasks cannot both be accomplished efficiently, then the more frequent or important task should be the more efficient one.
  • Clear, maximizing speed and accuracy of communication from the system to the user on the state of the task, the available or required actions for the user, the effects of those actions (feedback), and the implications of it all.
  • Flexible, providing users a wide range of capability and means of accomplishing tasks, such as by minimizing modes, required formats, and required sequences of actions.
  • Consistent, possessing displays and controls whose appearance and behavior is similar within the system and within other contexts experienced by the user.
  • Tolerant, resisting tendencies for human error and providing means for rapid recovery in the event of an error.
  • Accommodating, including the ability to adjust to the idiosyncratic characteristics of the specific user, task, and environment.

Last of Learning from Lusers

Next, in the final installment of Learning from Lusers, we’ll take a look back at all we have covered and ask what it takes to adapt technology to our users.