Flying the User Interface - Unconscious Interfaces

Compare the feeling you get driving a car with the feeling you get driving your word processor. When driving a car (assuming you’ve been driving for a little while), you don’t actually think about how you interact with the car at all; instead you “become the car”, and it becomes an extension of your will. You think “slow down”, and your right foot applies pressure to the brakes. Indicators switch on and off with barely a thought, and all this seems natural.

Using a word processor is different. Typing the text is an unconscious process, but using any of the features, such as bold text or tables, requires conscious thought.

The difference is where in the brain the work is being done. Driving and typing (assuming you type properly!) are both manual coordinations that are carried out in the mid-brain. The mid-brain coordinates movement, and is the part of the brain that gets trained by practice.

Comparing the mid-brain to the conscious brain yields some interesting observations. The mid-brain reacts much faster, which is why learner drivers have so much trouble steering: their mid-brain has not yet learnt the skill, so steering is being done consciously. The mid-brain takes longer to train, though, which is why practice is important.

So what has this got to do with UI design? The answer is that by determining which parts of an interface are open to being controlled by the mid-brain rather than the conscious brain, it is possible to greatly improve user performance.

Unconscious Interfaces

To succinctly describe UIs which are driven by the mid-brain I’ve coined the term “unconscious interface”. Many already exist. Typing is an unconscious interface, including many of the special keystrokes used to manipulate text in an editor. Cars have largely unconscious interfaces, as do arcade computer games and downhill skis; even your legs have unconscious interfaces! All these take practice to master, but once you’ve done so they cease to consume conscious attention.

Exceptions in an Unconscious Interface

There is a direct analogy between how the mid-brain works and exceptions in OO programming. When all is going well and the mid-brain is controlling things, the user’s awareness is free to wander. However, as soon as the mid-brain encounters something it doesn’t know about, in other words something that hasn’t been practised, it throws an exception, which is caught by the conscious brain. By this mechanism the mid-brain and conscious brain can, between them, react very fast to normal cases, without sacrificing the power of the conscious brain to think round the unusual ones.

However, each time an exception is thrown the conscious brain is stopped from doing whatever it was previously doing, akin to receiving an interrupt, and this can be distracting to the foreground task (i.e. whatever you happened to be thinking about). For example, say you’re writing a long document and accidentally hit the end-of-document key instead of end-of-line; this distracts from what you were typing, since you now have to consciously recover from the error. Smaller errors may be less distracting: simply hitting the wrong key while typing does not cause such irritation, though several errors in a row will.
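To make the analogy concrete, here is a minimal sketch in Python. All the names here (MidBrain, UnpractisedAction, the skill table) are invented for illustration, not anything the essay itself defines: practised coordinations live in a fast lookup, and anything unpractised raises an exception that the conscious layer must catch.

    class UnpractisedAction(Exception):
        """Raised when the mid-brain has no trained pattern for an intent."""

    class MidBrain:
        def __init__(self):
            # Coordinations built up by practice.
            self.skills = {
                "slow down": "apply brake pressure",
                "indicate left": "flick the stalk down",
            }

        def handle(self, intent):
            try:
                return self.skills[intent]
            except KeyError:
                # Nothing practised matches: escalate.
                raise UnpractisedAction(intent)

    def act(mid_brain, intent):
        try:
            # Fast path: practised cases consume no conscious attention.
            return mid_brain.handle(intent)
        except UnpractisedAction as exc:
            # Slow path: the conscious brain is interrupted, like
            # servicing an interrupt, and must think the case through.
            return "consciously work out how to " + str(exc)

Calling act(MidBrain(), "slow down") returns instantly from the trained table, while act(MidBrain(), "parallel park") falls through to the slow, conscious path; this mirrors how a mistyped keystroke ejects you from the flow of writing.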

Context

A vitally important feature of the mid-brain is that while it is capable of storing the patterns for coordinating thousands of different skills, it is also capable of discriminating between them. When you get on a bike, the bike “program” is activated. Each coordination has a context in which it is valid, and is only expressed in that context. But the brain is not a digital database, so as contexts become more similar the likelihood of activating the wrong “program” increases. Take as an example using two otherwise identical keyboards on which two keys have been exchanged. The contexts are near identical, so mistakes are made, although in this case relearning is relatively easy. Constantly switching between almost identical keyboards would be irritating until the brain became sufficiently proficient at discriminating between the subtle changes in context.
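One way to picture this is recall by nearest context rather than exact lookup. The sketch below is purely illustrative (the skills and context features are invented for the example): each stored “program” is keyed by the context features it was learnt in, and recall activates the closest match, so near-identical contexts can fire the wrong one.

    # Skills keyed by the context features they were trained in.
    skills = {
        ("qwerty", "laptop"): "layout A finger pattern",
        ("qwerty", "desktop"): "layout B finger pattern",  # two keys swapped
        ("drop bars", "outdoors"): "ride the bike",
    }

    def recall(current_context):
        # Activate whichever stored context shares the most features
        # with the current one.  No exact match is required, which is
        # what makes the mechanism flexible but also error-prone.
        def overlap(stored):
            return len(set(stored) & set(current_context))
        return skills[max(skills, key=overlap)]

Here recall(("drop bars", "outdoors")) is unambiguous, but recall(("qwerty",)) ties between the two keyboard contexts and simply fires one of them, just as swapping two keys leaves too few cues for the brain to pick the right coordination.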

Language Processing

Another example of an unconscious process is the way in which the brain parses spoken or written language.

The auditory and language centres of the brain behave in a similar manner to the coordination centres with regard to exceptions. Take the phenomenon of the unnecessary pardon. Someone says something which you don’t quite hear properly. The auditory centre throws an exception, which causes you to say “pardon”. Shortly afterwards the auditory centre does another pass at interpreting the utterance and succeeds, just before the speaker repeats themselves. Hence the “you heard me” complaint of the speaker!

Contexts also behave similarly. For an English-speaking person fluent in a foreign language, reading that foreign language fully activates the parts of the brain responsible for parsing it correctly. However, take the case of English versus American and you start to get parsing errors, since there are so few clues to tell the brain which context you’re in. This essay is written in British English, but up until the phrase “the case of English versus American”, American readers would be forgiven for believing that it was written in American. Other examples, which cause not so much parsing errors as jolts to the reader, are words like got or gotten, and spelling differences like colour or color. In extreme cases the parsing errors are subtle enough that spotting them can take a very significant time. An example here is the word “gas”, meaning natural gas to an Englishman, but gasoline to an American. That particular example is one the author has experienced, with comical effect. This could be thought of as a linguistic inconsistency problem.

Flying the User Interface

So, what is meant by “flying the UI”? The idea is that user interfaces should be driven not so much by the conscious mind as by the unconscious processes the brain uses to coordinate physical actions. Using one should feel more like flying a plane or driving a car: done by feel, rather than conscious thought. Technology, when it becomes completely accepted, becomes transparent and is taken for granted; take as examples the phone, or the electric light. The user interface still has a long way to go before it drops out of the perception of the user. When it does, we will have arrived.

Now that people are surrounded to such an extent by machines of all sorts, it is much more important that the designers of those machines take care to make them as easy to use effectively as possible. However, this does not always mean making them obvious. Where a device is used heavily, the equation stacks up heavily in favour of making the interface efficient for the expert user, despite some training cost (as we already accept with driving). So learning how to fly becomes an important skill, just as learning to read, write, and drive are at the moment. To achieve all this we need to push more of the UI into the unconscious realm.

Making unconscious interfaces is still a young art. At the moment the techniques available are very limited because of the limited range of input devices. However, in the next few years a few technologies will reach the point where they become viable as unconscious interface methods. Speech recognition is a fairly obvious one; gesture recognition is less obvious, but potentially of more use in crowded offices. Eye tracking may also develop to the point where, combined with speech recognition or gestures, it may replace the normal pointer. Being able to look at an icon and say what you wish to do with it would make using computers much simpler for the vast majority of people.

Last edited: 18 Jul 1999
