Norman, D. A. (1988). The Design of Everyday Things. New York: Doubleday. ISBN: 0-385-26774-6. Call Number: TS171.4 .N67 1990.
A popular book that will motivate the importance of human factors in the design of everything we use. This reading is also included as an introduction to concepts such as "affordances" and "knowledge in the world" versus "knowledge in the head" [DS]
Jump To:
- About the Author / Reason for Writing the Book
- Chapter 1: The Psychopathology of Everyday Things
- Chapter 2: The Psychology of Everyday Actions
- Chapter 3: Knowledge in the Head and in the World
- Chapter 4: Knowing What to Do
- Chapter 5: To Err is Human
- Chapter 6: The Design Challenge
- Chapter 7: User-Centered Design
Donald Norman wrote this book and “The Invisible Computer”. His primary area of research is human-centered design. He is a professor of Cognitive Science and Psychology at UCSD, and has held senior positions at Apple and HP.
Norman wants to fully utilize the potential of technology and the computer by supporting human tasks first, while making the supporting technology transparent to users: easy to use, easy to learn, and easy to understand.
Reason for writing the book:
Donald Norman wrote the book for many reasons. The initial spark was the frustration he encountered with ‘everyday things’. After repeatedly feeling flustered and confused by his inability to operate simple devices, he realized that much of the problem was due to poorly designed interfaces. People shouldn’t feel guilty or stupid for their inability to operate devices; the fault lies with the unintuitive interface, which shifts the problem space to the design of a good interface.
(top)
Summary of the Book:
Chapter 1: The Psychopathology of Everyday Things
Users shouldn’t need an engineering degree to figure out what a device does
He uses the example of aesthetically pleasing glass doors: we can get trapped by them, or be unable to pass through them, because they give no clues about how they should be used
VISIBILITY - one of the most important aspects of design – the interface must have visible features that convey the right messages to us
Natural Signals – the ‘natural’ or common understanding of objects and their perceived use
Natural Design – design that takes advantage of ‘natural signals’
MAPPINGS – the link between what you want to do and what is perceived possible; the relationship between moving a control and the result in the real world.
Natural Mapping – takes advantage of physical analogies and cultural standards for immediate understanding
AFFORDANCES – the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used
(e.g. a chair affords sitting; glass affords seeing through, breaking; wood affords solidity, opacity, support, carving). Affordances provide us clues on how to operate a device
CONSTRAINTS – limits to the perceived operation of a device (e.g. a small hole vs. a large hole- we might be able to use only one finger in the small hole, while we might be able to use multiple fingers in a large hole)
CONCEPTUAL MODEL – our mental simulation of a device’s operation (mental model?) These can be based on MAPPINGS, AFFORDANCES and CONSTRAINTS.
MENTAL MODEL – models people have of themselves, others, their environment, and the things they interact with (CONCEPTUAL MODELS are part of this)
The mental model of a device is formed by interpreting its perceived actions and its visible structure.
System Image – the visible part of the device being used. If incomplete / contradictory, the user cannot easily use the device.
Feedback – sending information back to the user about what action has actually been done and what result was accomplished
Two principles of designing for people:
good conceptual model
make things visible
Norman’s Conclusion: Design is not an easy task. Technology is a paradox: it is supposed to make our lives easier, yet it often makes them more difficult. However, this is not an excuse for poor design.
(top)
Chapter 2: The Psychology of Everyday Actions
People feel bad, sorry, frustrated, stupid for not knowing how to operate mechanical things, especially if the task appears to be trivial
The world, and everyday things, are filled with misconceptions
Aristotle's naive physics - our 'naive' way of explaining the phenomena we witness in everyday life - often very practical but incorrect. People often hold naive, incorrect explanations for real-world phenomena (e.g. believing that cranking the thermostat all the way up will reach the desired temperature faster)
Coincidence can set our ‘causal’ wheels rolling: what matters is that we perceive causality, and whether or not that causality actually exists, we think it is there. Often we perceive causality that isn’t there and ignore the real cause. This can create a problem or crisis later because we are working from a bad explanation of what is happening (Three Mile Island)
Spiral of silence / conspiracy of silence - not reporting errors / misconceptions that you think are your fault (you assume you're in the minority and don't want to be singled out). This may not be true: the majority might be having the same problem, and the only way to find out is to report it.
Learned helplessness - after failing at a task multiple times, people often decide that they cannot do the task (they are helpless)
Taught helplessness - perceived difficulty in one task generalizes to the whole, so that we feel (self-blame) that we cannot do tasks (such as in mathematics, where each successive task requires complete understanding of previous tasks). A sort of self-fulfilling prophecy that we are unable to accomplish a task due to previous difficulty / failure.
7 Stages of Action: An Approximate Model
The cycle runs from goals, down the execution side into the world, and back up the evaluation side:
- Execution side: Goals → Intention to act → Sequence of actions → Execution of the action sequence → THE WORLD
- Evaluation side: THE WORLD → Perceiving the state of the world → Interpreting the perception → Evaluation of interpretations → Goals
7 Stages of Action: 1 for goals, 3 for execution, 3 for evaluation:
- Forming the Goal
- Forming the intention
- Specifying an action
- Executing the action
- Perceiving the state of the world
- Interpreting the state of the world
- Evaluating the outcome
THE GULF OF EXECUTION: does the system provide actions that correspond to the intentions of the user?
THE GULF OF EVALUATION: does the system provide a physical representation that can be directly perceived and that is directly interpretable in terms of the intentions and expectations of the user?
Each of the seven stages is a good place to check that the gulfs of execution and evaluation are bridged. How easily can one:
- Determine the function of the device?
- Tell what actions are possible?
- Determine the mapping from intention to physical movement?
- Perform the action?
- Tell if the system is in the desired state?
- Determine the mapping from system state to interpretation?
- Tell what state the system is in?
These questions boil down to the principles of design from Chapter 1: Visibility, A good conceptual model, Good mappings, and Feedback
A Great Explanation of Norman's Gulfs of Execution and Evaluation: http://www.it.bton.ac.uk/staff/rng/teaching/notes/NormanGulfs.html
(top)
Chapter 3: Knowledge in the Head and in the World (Memory)
We can exhibit precise behavior in a task without precise knowledge of the task, for four reasons:
- Information is in the world: much of the information required to do the task can reside in the world. Behavior results from combining information in the head with information in the world.
- Great precision is not required: precision, accuracy and completeness of knowledge are seldom required. Perfect behavior will happen if there is sufficient knowledge to distinguish the correct choice from the others.
- Natural constraints are present. The world restricts the allowed behavior. The physical properties of objects constrain possible operations (ways we can use / manipulate objects). Each object has a set of physical features that limit its relationships to other objects, the operations that can be done on it, etc.
- Cultural constraints are present. Society has evolved numerous artificial conventions that govern acceptable social behavior. These cultural conventions must be learned, but once learned apply to a wide variety of circumstances.
These four reasons reduce the number of alternatives and reduce the amount of information required to be stored in memory to successfully complete the task.
Memory is knowledge in the head
- Often grouped into short term memory and long term memory
- Three important categories of memory:
- Memory for arbitrary things (without meaning / relationships)
- Memory for meaningful relationships (with something else)
- Memory through explanation (some explanatory mechanism)
- Knowledge in the head typically requires learning and is efficient to use, but is not easily retrieved without reminding or search
Memory is also knowledge in the world
- Reminding (signal and a message)
- Natural Mappings (arrangement, like stove controls example)
- Knowledge in the world is typically easily retrieved whenever it is visible or audible and requires no learning, but slowed down by the need to interpret the external information
There are three aspects to mental models (types of conceptual models?):
- the design model (the conceptualization the designer had in mind)
- the user’s model (what the user develops to explain the operation of the system)
- and the system image (the system’s appearance, operation, way it responds, manuals / instructions included with it)
Ideally, the design model and user model are the same. The designer must ensure that the system image is consistent with and operates according to the proper conceptual model.
(top)
Chapter 4: Knowing What to Do
We tend to mess up when there is more than one possible thing to do
Building the Lego motorcycle: semantic and cultural constraints, as well as the shape (clues) of the pieces allow us to figure out easily how the pieces are assembled together
Constraints:
Physical constraints – physical limitations, based on shape, size, etc.
Semantic constraints – limitations based on the meaning of the situation (Lego motorcycle: rider must face forward… windshield goes in front of face, etc.)
Cultural constraints – limitations based on accepted cultural conventions. (Lego motorcycle: signs are meant to be read, thus the ‘police’ sign should be right side up. The red light goes on the rear, because red is culturally defined to mean ‘stop’, etc.)
Logical constraints – logically induced limitations (Lego motorcycle: all pieces should be used, with no gaps, etc.)
Constraints are important in suggesting what we should do- so they should not be deceiving. An object should suggest (afford) what it does (only one predictable outcome- GOOD MAPPING).
For example: an array of identical looking switches is a bad design
While the above mainly focuses on constraints and mappings, we must remember to use good visibility and feedback. Crucial parts must be visible (doors must have door handles) and we need feedback to verify we completed the task successfully (a good display, showing what just happened)
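As a hedged software analogy of these ideas (my own sketch; the media-player example and names below are not from the book), constraints can do the suggesting for us: only the actions that are valid in the current state are offered at all, so the interface itself rules out impossible operations.

```python
# Hypothetical media player: the set of offered actions is constrained by the
# current state, so the user is never shown an operation that cannot work.
ALLOWED_ACTIONS = {
    "stopped": {"play"},
    "playing": {"pause", "stop"},
    "paused": {"play", "stop"},
}

def available_actions(state: str) -> set[str]:
    """Return only the actions that make sense right now (a logical constraint)."""
    return ALLOWED_ACTIONS.get(state, set())

def perform(state: str, action: str) -> str:
    """Apply an action, refusing anything the constraints rule out."""
    if action not in available_actions(state):
        raise ValueError(f"'{action}' is not possible while {state}")
    transitions = {"play": "playing", "pause": "paused", "stop": "stopped"}
    return transitions[action]

state = "stopped"
print(available_actions(state))   # only 'play' is offered
state = perform(state, "play")    # state is now 'playing'
```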
(top)
Chapter 5: To Err is Human
Language has built-in mechanisms that allow us to correct ourselves when we stumble or mess up. Artificial devices often do not - a single mistake can cause chaos.
Slips are the most common error: when we intend to do one thing and accidentally do another (automatic behavior problem)
Types of Slips:
- Capture Errors (two action sequences have common initial stages - an alternative action 'captures' your attention)
- Description Errors (two objects are physically similar enough that we act on the wrong one - like throwing dirty clothes into the toilet instead of the laundry basket)
- Data-Driven Errors (external data intrudes on an automatic action - e.g. dialing a number you happen to be looking at instead of the one you intended)
- Associative Activation Errors (event activates a similar but wrong response)
- Loss-of-Activation Errors (forgetting to do something or part of the act)
- Mode Errors (when devices have multiple modes and our actions are for the wrong mode)
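As a software analogy for mode errors (a hedged sketch of my own, not an example from the book), the same button below means different things in different modes; a user who forgets the current mode gets a surprise, which is why making the mode visible, or giving each function its own control, is the usual remedy.

```python
class Recorder:
    """Toy modal device: one button whose meaning depends on the current mode."""

    def __init__(self) -> None:
        self.mode = "playback"          # easy to forget if it is never displayed

    def set_mode(self, mode: str) -> None:
        self.mode = mode
        print(f"[MODE: {self.mode}]")   # displaying the mode is cheap feedback

    def red_button(self) -> str:
        # Same physical action, different result per mode: a recipe for mode errors.
        return "recording started" if self.mode == "record" else "playback stopped"

deck = Recorder()
deck.set_mode("record")
print(deck.red_button())   # a user who believed they were in playback mode errs here
```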
Well-designed things should allow us to detect slips through feedback (a clear discrepancy between the actual and intended result). For example, when a computer is about to destroy a file, it is good to ask for confirmation to verify that the user really wants to take an irrevocable action.
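To make the confirmation example concrete, here is a minimal sketch (my own; the function and prompt are assumptions, not the book's): the irrevocable action is confirmed first, and the outcome is reported back as feedback either way.

```python
from pathlib import Path

def delete_file(path: str) -> bool:
    """Ask for confirmation before an irrevocable delete, then report what happened."""
    target = Path(path)
    if not target.exists():
        print(f"Nothing to do: {path} does not exist.")
        return False

    answer = input(f"Permanently delete {path}? This cannot be undone. [y/N] ")
    if answer.strip().lower() != "y":
        print("Cancelled; no files were changed.")   # feedback on the non-action
        return False

    target.unlink()
    print(f"Deleted {path}.")                        # feedback on the result
    return True
```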
Human cognition is extremely complex and difficult to understand, but better understanding of this will allow us to design better systems with less human error.
- Conscious vs. subconscious
- Deep / narrow vs. shallow / wide tasks
- if shallow, width is acceptable (choosing a flavor of ice cream: many choices, but only one decision)
- if narrow, depth is acceptable (following a recipe: few decisions, many steps)
Design should allow for human error:
- Understand causes of error and try to minimize them
- Make it possible to undo actions (see the sketch after this list)
- Make it easier to discover when errors occur and make them easy to fix
- Think of tasks as imperfect approximations of what the user wants to do
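A minimal sketch of the undo point above (the class and method names are my own assumptions, not from the book): every change first saves the previous state, so a slip can be backed out rather than being fatal.

```python
class UndoableBuffer:
    """Toy text buffer that keeps a history so any edit can be undone."""

    def __init__(self) -> None:
        self.text = ""
        self._history: list[str] = []    # snapshots of earlier states

    def insert(self, s: str) -> None:
        self._history.append(self.text)  # save state before changing it
        self.text += s

    def undo(self) -> None:
        if self._history:
            self.text = self._history.pop()

buf = UndoableBuffer()
buf.insert("Hello, ")
buf.insert("wrold")   # a slip
buf.undo()            # the error is recoverable, not fatal
buf.insert("world")
assert buf.text == "Hello, world"
```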
Forcing Functions: if need be, use interlocks or lockout devices (force the proper sequence of actions, or keep the user out of a dangerous state)
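A hedged sketch of a forcing function in software terms (my own example, loosely modeled on a door interlock): the unsafe action is simply unavailable until the required prior step has been taken, so the correct sequence is enforced rather than merely documented.

```python
class Microwave:
    """Illustrative interlock: the oven cannot start while the door is open."""

    def __init__(self) -> None:
        self.door_closed = False

    def close_door(self) -> None:
        self.door_closed = True

    def open_door(self) -> None:
        self.door_closed = False

    def start(self, seconds: int) -> None:
        if not self.door_closed:
            # Forcing function: the dangerous action is not available out of sequence.
            raise RuntimeError("Close the door before starting.")
        print(f"Cooking for {seconds} seconds.")

oven = Microwave()
oven.close_door()
oven.start(30)   # without close_door() first, start() would refuse to run
```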
A good design philosophy: p. 140 - summarizes principles discussed thus far
(top)
Chapter 6: The Design Challenge
Often good design is an evolving process: a design is tested, problems are found, and the design is modified. The cycle repeats until resources run out.
Design is a constant battle between usability and aesthetics. Problems occur when one dominates over the other too much.
Designers are not typical end-users, and often clients are not either.
Often we have selective attention: we focus too much on one thing and reduce attention to other vital things (such as sticking a knife into a toaster to get the burning bread out). Designers have a hard time conceiving of all the possible ways people will use things!
Often designers mess with convention when designing things (faucet examples, p. 166)
Two deadly temptations for the designer:
- Creeping featurism – keep adding useless features until it’s too difficult to use
- The Worshipping of False Images - make it complex because it looks cool
Often there is no perfect answer to a problem. We MUST consider the tradeoffs of our design, and weigh the options to come up with the best solution.
“THE INVISIBLE COMPUTER OF THE FUTURE” is mentioned at the end of the chapter… where we do tasks and the computer is transparent (we are not using the computer, we are completing a task)
(top)
Chapter 7: User-Centered Design
The point of the book is to advocate user-centered design: a philosophy that things should be designed with the needs and interests of the user in mind, making products that are easy to use and understand.
Design Should:
- make it easy to determine what actions are possible at any moment
- make things visible, including the conceptual model of the system, the alternative actions, and the results of actions.
- Make it easy to evaluate the current state of the system
- Follow natural mappings between intentions and the required actions; between actions and the resulting effect; between the information that is visible and the interpretation of the system state
Basically we should be able to (1) figure out what to do (2) tell what is going on
Principles for making difficult tasks simple ones:
- Use both knowledge in the world and knowledge in the head
- Simplify the structure of tasks
- Make things visible: bridge the gulfs of Execution and Evaluation
- Get the mappings right
- Exploit the power of constraints, both natural and artificial
- Design for error
- When all else fails, standardize
There are three aspects to mental models:
- the design model (the conceptualization the designer had in mind)
- the user’s model (what the user develops to explain the operation of the system)
- and the system image (the system’s appearance, operation, way it responds, manuals / instructions included with it)
Ideally, the design model and user model are the same. The designer must ensure that the system image is consistent with and operates according to the proper conceptual model.
Ways to simplify the structure of tasks:
- keep the task much the same, but provide mental aids (simple mental aids provide cues about what we should do)
- use technology to make visible what would otherwise be invisible, thus improving feedback and the ability to keep control (and hide stuff that is irrelevant to completing the task)
- Automate, but keep the task much the same (remove unnecessary steps of a task)
- Change the nature of the task (use technology to simplify something)
But remember NOT TO TAKE AWAY CONTROL FROM THE USER!
Bridge the gulfs of Execution and Evaluation:
- Make things visible so users know what actions are possible
- Make things visible so people can see the results of their actions
- The system should have actions that match the users’ intentions
Design for Error:
- Make it so mistakes aren’t too critical, undoable, etc.
Make things difficult?
- Sometimes a difficult design is good: it forces us to focus deliberately on what we are doing
- Good for dangerous equipment, operations, secret doors, etc.
Make things easy to use?
- To make something easy to use, match the number of controls to the number of functions and organize the panels according to function.
- To make something LOOK easy, minimize the number of controls.
Remember, tools not only control WHAT we do, but HOW we do it and the way we VIEW ourselves, society and the world! Our design can change a task, a society, and the world.
The world of the future can be anticipated with pleasure, contemplation, and dread. How will we handle ever more, and ever more complex, information while keeping it easy to control? The answer lies in the design of everyday things: we must fight for and reward good design, and do the opposite for bad design.
(top)