Easy, Intuitive and Metaphor, and other meaningless words

What is easy? I find many things easy: making tea, speaking English, driving. I also find many things hard: speaking Spanish, understanding modern art (which I privately suspect is because there isn’t much to understand) and keeping my desk tidy. Is this because these things are inherently easy or hard? Or is it more about me and my specific abilities, or lack of them? After all, speaking Spanish can’t be that hard; almost 400 million people around the world seem to manage it effortlessly (just as I do speaking my native language, English).

Rather than describing something as being easy or hard, it would be more accurate to describe it as being easy for me, or hard for me (or you). Without that context of “who”, the question of ease or difficulty is a meaningless one.

Many tasks that were once hard can become easy. Learning to ride a bicycle as a child is precarious, often involving falls, scuffed knees and occasional tears. But for an experienced cyclist, riding a bike is easy. The process of transition from hard to easy is one of learning. All the time we spend in education is aimed at turning the hard into the easy - not by changing the tasks at all, but by changing us.

With physical and kinaesthetic tasks such as riding a bike, driving a car, even walking, the form of learning involves repeated practice. It is well understood that there is no way of avoiding this learning process if you are to become proficient. But in the world of software design we look for the quick answer: intuitiveness.

Intuition

Intuition is the ability to look at a situation and weigh it up in a single leap. It often appears to be an effortless process, because it’s extremely rapid and doesn’t require conscious thought. In fact, conscious thought can often derail intuition and make it less reliable.

It’s highly likely that intuition is a more basic ability than conscious thought, and probably one we share with many other animals. Cows will often stay close together if a storm is on the way. How do they know? I’d imagine that if we spent all our lives standing out in a field we’d pay more attention to the weather too, and we’d have their insight. In fact, those who work outdoors often do have a keen weather sense. They’ve seen many storms coming before.

And that’s the point: ultimately, intuition comes down to experience. Police officers often say they can tell if someone is up to no good. And they’re often right. But most people can’t do this. Police officers can because they have greater experience of criminal behaviour. Doctors can often diagnose patients by asking the most obscure questions - again, based on their deep experience.

Of course, intuition isn’t always right. Sometimes we encounter a situation that only mimics something we’ve seen before, and in those cases our intuition may be telling us something quite inaccurate. But those failures all contribute to our experience, making our intuition more accurate next time.

Naïve designers often talk of making things intuitive. What they really mean is intuitable - able to be understood through intuition. A thing can’t be intuitive unless it happens to be one of those rather rare and special things that contain a brain - like you, me or my dog. To be intuitable a thing must give clues as to how we should interact with it. Those clues need to help us connect it to our previous experience of similar things.

I remember the first time I encountered a public toilet with automatic taps on the washbasin. Up to that point every washbasin I’d encountered had taps. If you wanted to switch on the water you turned, pulled, pushed, or even stamped on the taps. But in this case there were no taps. I had precisely no idea what I should do to get water to flow. The basin had failed to give me sufficient clues; my intuition had no prior experience on which to draw. With the various kinds of taps I’d met before, even if the design was somewhat different, there would be some clue. Here - nothing. The only way I eventually got some water (after feeling somewhat stupid for much longer than I would have liked) was to watch what someone else using the basin did. And then everything became clear.

It’s worth noting that Villeroy & Boch call their tap-less basin the “Magic Basin”. The name itself provides some clue as to how it might work. It tells you to set aside your prior experience with the mechanical and think Magic. Could that clue be enough to engage your intuition? Possibly. Of course, now I know how it works I find the “Magic Basin” delightful in its elegance, but now my experience is deeper, and I know that missing taps need not be a problem.

Metaphor

Intuition, then, is a subconscious process which uses our prior experience to help us predict new situations. But what if we have no prior experience of a situation? How can we proceed? This is essentially the problem that the designers of graphical user interfaces faced when they started to use icons and windows. Since these objects didn’t exist in the real world, users had no prior experience on which to draw.

Metaphor at its best helps people take experience from the real world of physical objects and transfer their understanding across to the virtual world of windows and icons. So an e-mail program has an “Inbox”, which is a metaphor for an office “In” tray. In real-world offices mail is delivered into an “In” tray, so it’s natural to assume that the Inbox does something comparable in the virtual world.

For those people who were used to the office “In” tray the Inbox metaphor might have helped them build a mental model of the behaviour of the e-mail system. But I’m not one of those people. By the time I entered the world of work, e-mail was in common use, and physical mail was simply dumped on my desk in the morning. So for me the Inbox metaphor simply did not exist - it referred to a way of working that I had never seen, and therefore it did nothing to help me understand how e-mail worked.

But in the early days, when e-mail programs were simple creatures, the metaphor was at least helpful for those people making the transition from an office “In” tray to e-mail.

But as e-mail became more sophisticated and new features were added, the metaphor stopped being able to explain all the behaviours of the system. The metaphor started to break down. Many e-mail programs allow you to set up folders of mailboxes. You can put boxes in folders? Try doing that with your filing cabinet. And there’s another metaphor that’s just crept in, adding to the complexity again.

The ability to filter e-mail into different mailboxes automatically also meant that the Inbox wasn’t the only place e-mail arrived any more, invalidating another of the implications the metaphor makes. At this point the metaphor no longer has explanatory power; instead it has become misleading. E-mail is only like real mail in some fairly limited fashion. The problem for those users trying to use the metaphor to help build a mental model is that they have no way of knowing which parts of the metaphor are valid and which parts are not.

Metaphor suffers in translation too. The mailbox is a North American idea. The traditional, idealised American household has a mailbox sitting at the end of the drive, by the side of the street. Outgoing mail is collected from there, and incoming mail is delivered there. In England we have letter boxes, built into the front doors of our houses. To send mail we drop it in a bright red post-box down the street or take it to the post office. In US culture the mailbox is well known and understood. In English culture it quite simply doesn’t exist. And that is with two cultures who at least nominally speak the same language!

Metaphors and (In)Efficiency

Is it inevitable that metaphors will eventually break down? Is it not possible that we could design a system to match our chosen metaphor perfectly, and therefore avoid the discrepancies between the real and the digital imitation?

In principle, yes it is. But to do so we end up constraining the digital to be too much like the real. This means that we have to forgo potentially superior solutions simply in order to maintain the metaphor. In the digital realm the opportunities for improving on the real world are huge, and to deny ourselves those advantages is to deny much of the benefit of the technology.

For an example, look at the world of academic publishing. Once upon a time academics wrote papers and submitted these papers to journals. The journals were published on paper and distributed to other academics, who would presumably read them.

If you wanted to cite a specific paper you needed to provide enough information to identify it, and to locate it within the complete body of academic work. To start with you needed to be able to specify the correct journal. But if your library didn’t have that journal you’d need to locate it, so knowing the publisher was a useful piece of information. Of course, once you’d located the correct journal you would need to know which volume to look in, and since each volume was typically published in a number of issues over the course of a year, knowing which issue it was in was also helpful. And if you actually wanted to read the paper after all this, it wouldn’t hurt to know which page it was on. But then, of course, you’d only want to read it if it was written by someone who mattered, so knowing who wrote it was convenient too.

The end result is a citation which looks far too much like this...

Smith, J., Bloggs, J., Doe, J., Implausible names and unlikely aliases in scientific literature. Journal of Implausible Research, 23, 4 (2010), 1027-1156.

Actually going through the process of tracking down a paper means you have to unpick all the information in the citation and trawl through library indexes to find which shelf the journal is on, before you finally discover that most of the paper is written in Spanish, which I for one can’t read.

Then, one day, Tim Berners-Lee had a bright idea. Why not build a system which would allow academics to publish their work online, with a simple scheme for naming each document and a way of linking one document to another - and lo, the web was born.

The irony of the situation is that while the web has been embraced by people from all walks of life, the one group that has really dragged its heels is the one group it was designed for in the first place. So in academic papers you still see citations quoted in a way more appropriate to the pre-web age, even when the document is available online. What’s worse is that the URL, the mechanism for getting the paper without all the palaver of libraries, is frequently missing.
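
To illustrate, sticking with the fictional citation from earlier, one addition is all the web age really asks for (the address here is, of course, entirely hypothetical):

Smith, J., Bloggs, J., Doe, J., Implausible names and unlikely aliases in scientific literature. Journal of Implausible Research, 23, 4 (2010), 1027-1156. Available at: https://journal-of-implausible-research.example/smith2010 (hypothetical URL, for illustration only)

The other fields still identify the paper, but the address alone is enough to fetch it.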

In the world of the web, keeping to the model of publishing houses, volumes, issues and so on is unhelpful. It was once the mechanism, but now, in the web age, it is merely a metaphor: the actual publication method has changed utterly. By clinging to that outdated metaphor we are denying ourselves the advantages of a superior technology.

Despite these cautions about the use of metaphor, it would be an overstatement to insist that metaphors never be used. Metaphors can be useful. As a designer it’s vital to make sure that their limits are well understood. If it’s clear to users where the metaphor breaks down then the harm can be eliminated, so in the end it’s better to break a metaphor than to slavishly adhere to one that restricts innovative solutions.

Intuitable, but not Metaphorical

If we choose not to use metaphor, what other options do we have? To make something intuitable we rely on users’ prior experience. To ensure that prior experience guides users in the right way, we have to make our designs build on patterns of design that they have already met.

Consistency has, for many years, been a sacred principle in UI design. There is nothing to be gained by being different simply for the sake of being different. If each application on a desktop computer redesigned scrollbars, popup menus and the like, it would be much harder for people to rapidly intuit how to use those controls.

It’s been said that a million monkeys with typewriters would eventually reproduce the works of Shakespeare. Now, with the power of the Internet, we know this to be false. In a similar way, we have ample proof that breaking consistency makes users’ lives harder. Many websites using Flash, JavaScript and the like have been forced to re-implement standard UI widgets, such as the scrollbar. But they generally do this inconsistently, requiring users to recognise their versions as something that behaves (somewhat) like the “real” thing.

A vital skill for designers is to notice fine detail in the other designs which form part of the technological ecosystem in which their design will live. For example, on Mac OS there are now two different styles of text entry field for forms. One has square corners and is used for general data entry. The other has rounded ends and is used for entering searches. I was recently outraged to find a piece of software that used the rounded style for data entry. This kind of design vandalism muddies the rules which users would otherwise learn, and devalues all software on the platform.
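
That distinction is not just a visual convention; on modern macOS the two styles correspond to two different standard AppKit controls, so respecting the idiom is largely a matter of choosing the right class. A minimal sketch, assuming AppKit and Swift (neither of which the original observation depends on):

    import AppKit

    // Square-cornered field: general data entry, e.g. a person's name.
    let nameField = NSTextField(string: "")
    nameField.placeholderString = "Full name"

    // Rounded-ended field: searching. NSSearchField carries the rounded
    // style and the search behaviour; it is not meant for ordinary data entry.
    let searchField = NSSearchField()
    searchField.placeholderString = "Search"

Nothing in the toolkit stops you using the rounded search control for ordinary data entry; it will compile and run happily, which is exactly why the discipline has to come from the designer rather than the framework.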

These design details become the idioms of the design language, and just as children learn new vocabulary, so users learn the design language as they become more proficient. When designers use the language precisely, we can create a world that we can all navigate without expending needless effort. Formalisations such as platform style guides and Pattern Languages are helpful tools, but ultimately getting this right relies on a sensitive understanding of the prevalent design languages. But even while understanding the value of consistency, it’s important to know when the language needs a few new words: when to break the consistency. Making that judgement well is part of what separates the great designer from the herd.
