Unit 1 - HCI Assignment Lectures
NOTES
Assignment on HCI by
Michael Matardi
STUDY NOTES
Unit Aim
This unit aims to help learners to understand the principles of interaction, including the physical
attributes that influence human computer interaction. The unit also enables learners to design
user interfaces and evaluate user interface designs.
Unit Overview
The unit is for learners considering a career in computing and information systems and who wish
to gain an understanding of human computer interaction. The unit will equip learners with the
skills and knowledge to review the principles of interaction, to examine the physical attributes
that influence human computer interaction and to develop models of interaction for an
application. It will also enable learners to design user interfaces and evaluate user interface
design.
Page 1 of 165
Unit 1: Human Computer Interaction
Contents
Outcome 1: Understand physical attributes that influence human computer interaction ...............6
Introduction ................................................................................................................................ 6
1. Primary modes of response for inputting information .............................................................. 7
HCI interaction Mode ..............................................................................................................7
Inputting Mode ........................................................................................................................ 8
Multimodal HCI systems .........................................................................................................9
Modes of response/feedback for inputting information .......................................................... 10
Types of Feedback............................................................................................................. 10
Including Feedback in Design............................................................................................ 13
Input flow.............................................................................................................................. 14
Mode confusion..................................................................................................................... 14
Haptic interaction .................................................................................................................. 15
2. Human thinking processes for computer-based tasks ............................................................. 16
Human thinking processes ..................................................................................................... 16
Different types of thinking..................................................................................................... 16
Human-Computer Model....................................................................................................... 18
Cognitive Psychology............................................................................................................ 19
Cognitive psychology & Information Processing ................................................................... 20
Levels of Processing .............................................................................................................. 21
Shallow Processing............................................................................................................ 22
Deep Processing ................................................................................................................ 22
Influence of human thinking processes on HCI ...................................................................... 23
Meaning of Design Thinking ............................................................................................. 23
Importance of Design Thinking today ................................................................................ 23
Stages of Design Thinking ................................................................................................. 24
Computer programmers thinking in terms of solving problems .............................................. 25
Computer programming..................................................................................................... 25
Problem Solving ................................................................................................................ 25
Problem Solving Strategies ................................................................................................ 26
Outcome 2: Understand principles of interaction ....................................................................... 28
1. Developing models of interaction for an application .............................................................. 28
Meaning of Design Principles ................................................................................................ 28
HCI Design principles ........................................................................................................... 28
Theoretical frameworks and methodologies ........................................................................... 33
HCI Models ....................................................................................................................... 35
HCI Analysis Methodology ............................................................................................... 37
Heuristic evaluation ........................................................................................................... 37
Cooper's Persona System ...................................................................................... 39
Personas in the User Interface Design ................................................................................ 39
Personas – A Simple Introduction...................................................................................... 39
Personas in Design Thinking ............................................................................................. 40
Four Different Perspectives on Personas ............................................................................ 40
Creating Personas and Scenarios........................................................................................ 43
Significant Models in HCI ..................................................................................................... 44
Developing a model of interaction for an application ............................................................. 48
Factors in selecting suitable interaction models design .......................................................... 52
Translations between user and system ................................................................................... 54
Models of interaction ......................................................................................................... 54
The terms of interaction ..................................................................................................... 54
The execution–evaluation cycle ......................................................................................... 55
The interaction framework................................................................................................. 57
2. Interaction style for a given task ............................................................................................ 59
Human Interface/Human Error .............................................................................................. 59
Introduction ....................................................................................................................... 59
Sources of Human Error .................................................................................................... 60
HCI Problems .................................................................................................................... 61
Available tools, techniques, and metrics ............................................................................ 61
Relationship to other topics ............................................................................................... 64
Conclusions ....................................................................................................................... 64
Interaction styles supported by the system ............................................................................. 65
Human-computer Interfaces Styles .................................................................................... 65
Conceptual Interaction Models .......................................................................................... 67
The nature of user/system dialogue ........................................................................................ 69
Dialogue system ................................................................................................................ 69
Introduction ....................................................................................................................... 69
Components ...................................................................................................................... 70
Types of systems ............................................................................................................... 70
Natural dialogue systems ................................................................................................... 71
Applications ...................................................................................................................... 72
Select and evaluate an appropriate interaction style ............................................................... 72
Outcome 3: Be able to design user interfaces ............................................................................. 74
1. User interface requirements of a task ..................................................................................... 74
Interactive Systems ............................................................................................................... 74
Usability ............................................................................................................................ 74
Difficulty in Designing Interactive Systems ....................................................................... 75
Key Principles of Interactive Design...................................................................................... 75
Interaction Component Styles ................................................................................................ 75
Command Line Dialogue ................................................................................................... 76
Menu Interaction Style ...................................................................................................... 76
Form Fill-In Interaction Style ............................................................................................ 77
Direct Manipulation .......................................................................................................... 77
Natural Language Interfaces .............................................................................................. 78
User interfaces and usability .................................................................................................. 78
User-centered design versus task-oriented design .................................................................. 80
Task Analysis and Modeling ................................................................................................. 81
Hierarchical Tasks Analysis .............................................................................................. 82
GOMS ............................................................................................................................... 84
2. User interface requirements ............................................................................................... 85
Input/output Design User Interface Design ............................................................................ 85
User Interface Design ............................................................................................................ 85
Output Design ....................................................................................................................... 87
Input Design .......................................................................................................................... 89
3. Prototyping ........................................................................................................................... 91
User Interface Prototyping ..................................................................................................... 91
Basic prototype categories ..................................................................................................... 91
Human factors and ergonomics ............................................................................................. 92
Cognitive Ergonomics ....................................................................................................... 93
Sources of Information...................................................................................................93
Approach to Ergonomics ................................................................................................... 94
Ergonomics Techniques..................................................................................................... 97
Application Areas ............................................................................................................ 101
People-centred Issues ...................................................................................................... 103
High-Fidelity and Low-Fidelity Prototyping ........................................................................ 106
Creating Paper Prototypes ................................................................................................... 108
Coming up & using ideas in your Interface Design .............................................................. 108
4. Design Interface & Components ...................................................................................... 110
Interface and Components ................................................................................................... 110
User Interface (UI) Design Patterns ..................................................................................... 110
Navigation considerations ................................................................................................... 111
Navigational directions .................................................................................................... 111
Global Navigation best design ......................................................................................... 111
UI widgets ........................................................................................................................... 112
Effective Visual Communication for Graphical User Interfaces ........................................... 113
Design Considerations ..................................................................................................... 113
Visible Language............................................................................................................. 113
Color ............................................................................................................................... 120
Principles of user interface design & Fitts' Law .................................................. 121
User interface designs & human abilities ............................................................................. 122
The simplicity principle in perception and cognition............................................................ 127
Online Controlled Experiment ............................................................................................. 128
Web design as the anchoring domain ................................................................................... 128
How Anchoring Can Improve the User Experience ............................................................. 128
Interface design Rules ......................................................................................................... 129
Outcome 4: Evaluating user interface designs .......................................................................... 131
1. Designing a review process for a user interface ................................................................... 131
Review of user interfaces .................................................................................................... 131
User interface review process .............................................................................................. 131
What Does a Comprehensive UI Review Involve? ......................................................... 132
Increased cost if changes needed further along the development process ............................. 134
How to minimize cost in the software development process ............................................. 134
Cost Of Change ............................................................................................................... 134
Implications Of The Timeline .......................................................................................... 134
Costs in an Engineering Change Request ......................................................................... 135
Reputation costs .................................................................................................................. 137
Design guidelines, rules and standards................................................................................. 137
Standards vs Guidelines ................................................................................................... 137
Design Guides ................................................................................................................. 138
Design review ..................................................................................................................... 138
Design Review Process ....................................................................................................... 139
Review of different aspects of interface ............................................................................... 140
What Does a Comprehensive UI Review Involve? ........................................................... 141
Reviewing Design Elements ............................................................................................ 141
Reviewing Text Elements ................................................................................................ 144
Reviewing Link Elements ................................................................................................ 148
Reviewing Visual Elements ............................................................................................. 148
Reviewing User Interactions ............................................................................................ 152
Documenting/Reporting Issues ........................................................................................... 154
Using a Bug Tracking System.......................................................................................... 154
Creating a Formal Report ................................................................................................ 155
Reporting Issues Less Formally ....................................................................................... 156
Why Tackle a User Interface Review? ............................................................................. 156
Accessibility tools and HTML validation tools .................................................................... 156
Using/developing style guides ............................................................................................. 157
Meaning of a style guide.................................................................................................. 157
Developing a style guide ................................................................................................. 158
2. Designing written instruments to gather user interface feedback .......................................... 159
Meaning & Importance of feedback..................................................................................... 159
Written instruments for gathering interface feedback ........................................................... 159
Designing written instruments for gathering interface feedback ........................................... 165
Outcome 1: Understand physical attributes that influence
human computer interaction
Introduction
Human–computer interaction (HCI) is the study of the design and use of computer technology,
focused on the interfaces between people (users) and computers: how people interact with
computers, and to what extent computers are, or are not, developed for successful interaction
with human beings.
As its name implies, HCI consists of three parts: the user, the computer itself, and the ways they
work together.
User
By "user", we may mean an individual user or a group of users working together. An appreciation
of the way people's sensory systems (sight, hearing, touch) relay information is vital. Different
users also form different conceptions, or mental models, of their interactions, and have
different ways of learning and retaining knowledge. In addition, cultural and national
differences play a part.
Computer
When we talk about the computer, we're referring to any technology, ranging from desktop
computers to large-scale computer systems. For example, if we were discussing the design of a
Website, then the Website itself would be referred to as "the computer". Devices such as mobile
phones or VCRs can also be considered to be "computers".
Interaction
There are obvious differences between humans and machines. In spite of these, HCI attempts to
ensure that they both get on with each other and interact successfully. In order to achieve a usable
system, you need to apply what you know about humans and computers, and consult with likely
users throughout the design process. In real systems, the schedule and the budget are important,
and it is vital to find a balance between what would be ideal for the users and what is feasible in
reality.
1. Primary modes of response for inputting information
HCI interaction Mode
A mode is a distinct setting within a computer program or any physical machine interface, in which
the same user input will produce perceived results different from those that it would in other
settings.
A mode in HCI is a context where user actions (keypresses, mouse clicks etc.) are treated in a
specific way. That is, the same action may have a different meaning depending on the mode.
Modes are known to cause problems: you may press a key expecting it to do one thing because you
believe the system is in a particular mode, but something unexpected happens because it is in
fact in another mode.
However, modes are often essential as there are only a finite number of input options (keys and
mouse actions), but often a large number of things you would like them to do.
The crucial thing is to ensure, as far as possible, that the current mode is apparent to the
user via visual, aural, or other forms of feedback. Furthermore, it is important to ensure that
mode errors, because they are common, do not lead to catastrophic consequences.
Examples: on an (old-fashioned!) mobile phone, pressing "999" would be expected to ring the
police (in the UK), but doing the same thing while editing an SMS message would produce 'y'
(assuming multi-tap input): there is a dialling mode and a text-entry mode. In a graphics
editor, click and drag on the canvas might mean draw a line, circle, or rectangle depending on
what tool is selected; that is, a mode for each tool. The distinction may be more major: in the
example in the video, we see that the Google Docs drawing app has a 'normal' mode, where
clicking over a menu selects it, but when the polyline tool is selected, clicking adds another
line segment; here there is a 'normal' mode and a polyline mode.
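The mode problem described above can be sketched as a small dispatch table, where the same user action maps to a different operation depending on the active mode. The mode names and actions below are invented for illustration, not taken from any real editor.

```python
# Sketch of mode-dependent input handling: the same event (a click at a
# position) means something different depending on the current mode/tool.
# Mode names and actions are hypothetical.

ACTIONS = {
    "select": lambda pos: f"selected object at {pos}",
    "line":   lambda pos: f"drew line segment to {pos}",
    "circle": lambda pos: f"drew circle centred at {pos}",
}

def handle_click(mode, pos):
    """Interpret a click according to the active mode."""
    try:
        return ACTIONS[mode](pos)
    except KeyError:
        raise ValueError(f"unknown mode: {mode}")

print(handle_click("select", (10, 20)))  # same click...
print(handle_click("line", (10, 20)))    # ...different meaning in another mode
```

A design consequence follows directly from the sketch: if nothing on screen tells the user which entry of the dispatch table is active, the user cannot predict what a click will do, which is exactly the mode-error problem.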
Inputting Mode
A designer looks at the interaction tasks necessary for a particular application.
Interaction tasks are low-level primitive inputs required from the user, such as entering a text
string or choosing a command. For each such task, the designer chooses an appropriate
interaction device and interaction technique. An interaction technique is a way of using a
physical device to perform an interaction task. There may be several different ways of using the
same device to perform the same task. For example, one could use a mouse to select a command
by using a pop-up menu, a fixed menu (or palette), multiple clicking, circling the desired
command, or even writing the name of the command with the mouse.
User performance with many types of manual input depends on the speed with which the user
can move his or her hand to a target. Fitts' Law provides a way to predict this, and is a key
foundation in input design. It predicts the time required to move based on the distance to be
moved and the size of the destination target. The time is proportional to the logarithm of the
distance divided by the target width. This leads to a tradeoff between distance and target width: it
takes as much additional time to reach a target that is twice as far away as it does to reach one
that is half as large.
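The relationship can be made concrete with the common Shannon formulation of Fitts' Law, MT = a + b·log2(D/W + 1), where a and b are constants fitted per device and user. A minimal sketch, with invented constants:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts' Law. a and b are device- and user-specific constants that
    must be fitted from experiment; the defaults here are invented."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Doubling the distance or halving the target width gives the same D/W
# ratio, hence the same predicted time: the distance/width tradeoff.
print(fitts_movement_time(distance=200, width=20))
print(fitts_movement_time(distance=400, width=20))  # twice as far
print(fitts_movement_time(distance=200, width=10))  # half as large
```

The last two calls illustrate the tradeoff stated above: a target twice as far away and a target half as large have the same index of difficulty, so the model predicts the same movement time for both.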
Another way of characterizing many input devices is by their control-display ratio. This is the
ratio between the movement of the input device and the corresponding movement of the object it
controls. For example, if a mouse (the control) must be moved one inch on the desk in order to
move a cursor two inches on the screen (the display), the device has a 1:2 control display ratio.
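As a sketch, the implied cursor movement follows directly from the ratio (the 1:2 default mirrors the example above):

```python
def cursor_movement(device_movement_inches, cd_ratio=(1, 2)):
    """Screen movement implied by a control-display ratio.
    cd_ratio=(1, 2) means 1 inch of device movement produces
    2 inches of cursor movement on screen."""
    control, display = cd_ratio
    return device_movement_inches * display / control

print(cursor_movement(1.0))          # 2.0 inches of cursor movement
print(cursor_movement(1.0, (2, 1)))  # 0.5 inches: finer, slower control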
Hands — Discrete Input
Keyboards, attached to workstations, terminals, or portable computers, are one of the principal
input devices in use today. Most use a typewriter-like "QWERTY" keyboard layout, typically
augmented with additional keys for moving the cursor, entering numbers, and special functions.
There are other layouts and also chord keyboards, where a single hand presses combinations of
up to five keys to represent different characters.
• Number of Dimensions. Mouse measures two linear dimensions; knob measures one angular
dimension; and Polhemus measures three linear dimensions and three angular.
• Direct vs. Indirect Control. Mouse is indirect (move it on the table to point to a spot on the
screen); touch screen is direct (touch the desired spot on the screen directly).
• Position vs. Rate Control. Moving a mouse changes the position of the cursor; moving a rate-
control joystick changes the speed with which the cursor moves.
• Integral vs. Separable Dimensions. Mouse allows easy, coordinated movement across
two dimensions simultaneously (integral); a pair of knobs (as in an Etch-a-Sketch toy) does not
(separable).
Devices within this taxonomy include one-dimensional valuators (e.g., knob or slide pot), 2-D
locators (mouse, joystick, trackball, data tablet, touch screen), and 3-D locators (Polhemus and
Ascension magnetic trackers, Logitech ultrasonic tracker, Spaceball). Glove input devices
report the configuration of the fingers of the user's hand, allowing gestures to be used as input.
Voice
Another type of input comes from the user's speech. Carrying on a full conversation with a
computer as one might do with another person is well beyond the state of the art today—and,
even if possible, may be a naive goal. Nevertheless, speech can be used as input with
unrecognized speech, discrete word recognition, or continuous speech recognition.
Even if the computer could recognize all the user's words in continuous speech, the problem of
understanding natural language is a significant and unsolved one. It can be avoided by using an
artificial language of special commands or even a fairly restricted subset of natural language.
Multimodal HCI systems
Multimodal interfaces are a new class of emerging systems that aim to recognize naturally
occurring forms of human language and behavior, with the incorporation of one or more
recognition-based technologies (e.g., speech, pen, vision). Multimodal interfaces represent a
paradigm shift away from conventional graphical user interfaces. They are being developed
largely because they offer a relatively expressive, transparent, efficient, robust, and highly
mobile form of human–computer interaction. They represent users' preferred interaction style,
and they support users' ability to flexibly combine modalities or to switch from one input mode
to another that may be better suited to a particular task or setting.
Modes of response/feedback for inputting information
Because humans themselves are complex systems, they require feedback from others to meet
psychological and cognitive processing needs discussed earlier in this chapter. Feedback also
increases human confidence. How much feedback is required is an individual characteristic.
When users interface with machines, they still need feedback about how their work is progressing.
As designers of user interfaces, systems analysts need to be aware of the human need for
feedback and build it into the system. In addition to text messages, icons can often be used. For
example, displaying an hourglass while the system is processing encourages the user to wait a while
rather than repeatedly hitting keys to get a response.
Feedback to the user from the system is necessary in seven distinct situations. Feedback that is ill
timed or too plentiful is not helpful, because humans possess a limited capacity to process
information. Web sites should display a status message or some other way of notifying the user
that the site is responding and that input is either correct or in need of further information.
Types of Feedback
Acknowledging acceptance of input
The first situation in which users need feedback is to learn that the computer has accepted the
input. For example, when a user enters a name on a line, the computer provides feedback to the
user by advancing the cursor one character at a time when the letters are entered correctly. A
Web example would be a Web page displaying a message that
"Your payment has been processed. Your confirmation number is 1234567. Thank you for using
our services."
A poor example of feedback that tells the user that input is in the correct form is the message
"INPUT OK", because that message takes extra space, is cryptic, and does nothing to encourage
the input of more data. When placing an order on the Web or making a payment, a confirmation page
often displays, requesting that the user review the information and click a button or image to
confirm the order or payment.
Feedback informs the user that input was not in the correct form and lists options.
Notice that the message concerning an error in inputting the subscription length is polite and
concise but not cryptic, so that even inexperienced users will be able to understand it. The
subscription length for the online newsletter is entered incorrectly, but the feedback does not dwell
on the user's mistake. Rather, it offers options (13, 26, or 52 weeks) so that the error can be
corrected easily. On a GUI screen, feedback is often in the form of a message box with an OK
button on it.
Web messages have a variety of formats. One method is to return a new page with the message on
the side of the field containing the error. The new Web page may have a link for additional help.
This method works for all Web sites, and the error detection and formatting of the new page are
controlled by the server. Another method uses JavaScript to detect the error and display a message
box on the current screen with details about the specific error. An advantage of this method is that
the Web page does not have to be sent to the server, and the page is more responsive.
Disadvantages are that, if JavaScript is turned off, the error will not be detected, and only one error
is displayed at a time. There must also be a way of detecting the error on the server. A second
disadvantage is that JavaScript may not detect errors that involve reading database tables, such as
verifying a credit card number. This may be offset by using Ajax, which can send the number to
the server and return an error to the Web page. Remember, however, that some users intentionally turn off their JavaScript capability, so analysts need to follow a variety of tactics when communicating errors.
Web pages may also use JavaScript to detect multiple errors and display text messages on the page.
Caution must be used so that the error messages are bold enough for the user to notice. A small
red line of text may go unnoticed. A message box or audible beeps may be used to alert the users
that one or more errors have occurred.
The analyst must decide whether to detect and report errors when a Submit button or link is clicked, called batch validation, or to detect errors one at a time, such as when a user enters a month of 14 and leaves the field. The second method is a riskier approach, since poor coding may put the browser into a loop, and the user will have to shut down the browser.
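The batch-validation approach can be sketched as a plain server-side function that checks every field in one pass and reports all errors together, so the user sees every problem after a single Submit. This is an illustrative sketch only; the field names and rules are invented for the example, not taken from any specific framework.

```python
# Sketch of server-side batch validation: collect every error in one pass
# rather than stopping at the first one. Field names and validation rules
# are hypothetical.

def validate_order(form: dict) -> list[str]:
    errors = []

    month = form.get("month", "")
    if not (month.isdigit() and 1 <= int(month) <= 12):
        errors.append("Month must be a number between 1 and 12.")

    weeks = form.get("subscription_weeks", "")
    if weeks not in ("13", "26", "52"):
        errors.append("Subscription length must be 13, 26, or 52 weeks.")

    return errors  # an empty list means the input passed validation

# Two bad fields produce two messages in one round trip.
print(validate_order({"month": "14", "subscription_weeks": "10"}))
```

Because every error is gathered before responding, the page can display all messages at once, which is exactly the behavior batch validation promises over one-error-at-a-time checking.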
So far, we have discussed visual feedback in text or iconic form, but many systems have audio
feedback capabilities as well. When a user inputs data in the incorrect form, the system might beep
instead of providing a window. But audio feedback alone is not descriptive, so it is not as helpful
to users as onscreen instructions. Use audio feedback sparingly, perhaps to denote urgent
situations. The same advice also applies to the design of Web sites, which may be viewed in an open office, where sounds carry and a coworker's desktop speakers are within earshot of several other people.
The figure below shows a display providing feedback in a window for a user who has just requested a printout of the electronic newsletter's subscription list. The display shows a sentence reassuring the user that the request is being processed, as well as a sign in the upper right corner instructing the user to "WAIT" until the current command has been executed. The display also provides a way to stop the operation if necessary.
Feedback tells the user that there will be a delay during printing.
Sometimes during delays, while new software is being installed, a short tutorial on the new
application is run, which is meant to serve as a distraction rather than feedback about the
installation. Often, a list of files that are being copied and a status bar are used to reassure the user
that the system is functioning properly. Web browsers usually display the Web pages that are
being loaded and the time remaining.
It is critical to include feedback when using Ajax to update Web forms. Because a new Web page
does not load, the user may not be aware that data are being retrieved from the server that will
change the current Web page. When a drop-down list is changing, a message such as "Please wait while the list is being populated" informs the user that the Web page is changing.
Timing feedback of this sort is critical. Too slow a system response could cause the user to input
commands that impede or disrupt processing.
When designing Web interfaces, hyperlinks can be embedded to allow the user to jump to the
relevant help screens or to view more information. Hyperlinks are typically highlighted with
underlining, italics, or a different color. Hyperlinks can be graphics, text, or icons.
The fourth type of help is a wizard, which asks the user a series of questions and then takes action
accordingly. Wizards help users through complicated or unfamiliar processes such as setting up
network connections or booking an airline seat online. Most users are familiar with wizards
through creating a PowerPoint presentation or choosing a style for a word processing memo.
Besides building help into an application, software manufacturers offer online help (either
automated or personalized with live chat) or help lines (most customer service telephone lines are
not toll free, however). Some COTS software manufacturers offer a fax-back system. A user can
request a catalog of various help documents to be sent by fax, and then can order from the catalog
by entering the item number with a touch-tone phone.
Finally, users can seek and find support from other users through software forums. This type of
support is, of course, unofficial, and the information thus obtained may be true, partially true, or
misleading. The principles regarding the use of software forums are the same as those mentioned later in the chapter "Quality Assurance and Implementation", where folklore and recommendation systems are discussed. Approach any software fixes posted on bulletin boards, blogs, discussion groups, or chat rooms with wariness and skepticism.
Besides informal help on software, vendor Web sites are extremely useful for updating drivers,
viewers, and the software itself. Most online computer publications have some sort of "driver watch" or "bug report" that monitors the bulletin boards and Web sites for useful programs that
can be downloaded. Programs will forage vendor Web sites for the latest updates, inform the
user of them, assist with downloads, and actually upgrade user applications.
Input flow
Input flow is the flow of information that begins in the task environment, when the user has some task that requires using the computer. Output flow is the flow of information that originates in the machine environment.
An input stream is a flowing sequence of data that can be directed to specific functions. The directed channel through which the data stream flows is known as a pipeline, and changing its direction is known as piping.
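The stream-and-pipeline idea can be illustrated with Python generators, where each stage consumes the stream produced by the one before it. This is a loose conceptual sketch, not tied to any operating-system pipe mechanism; the stage names are invented for the example.

```python
# Sketch: an input stream flows through a pipeline of stages. Each
# generator stage reads from the previous one; "piping" the stream
# elsewhere simply means handing it to a different stage.

def source(lines):
    for line in lines:          # the raw input stream
        yield line

def strip_blanks(stream):
    for line in stream:         # stage 1: drop empty lines
        if line.strip():
            yield line

def upper(stream):
    for line in stream:         # stage 2: transform the data
        yield line.upper()

# Build the pipeline by composing stages, then consume it.
pipeline = upper(strip_blanks(source(["hello", "", "world"])))
print(list(pipeline))           # ['HELLO', 'WORLD']
```

Swapping `upper` for a different stage redirects the same stream through new processing, which is the essence of piping.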
Mode confusion
In human-machine systems, a user operates a machine using information about the machine's behavior displayed at a user interface. If the abstraction of that information is insufficient for the user to anticipate the machine's state, mode confusion occurs.
A mode confusion occurs when the observed behaviour of a technical system is out of sync with the user's mental model of it. The notion is often described only informally in the literature, which is why precise definitions of "mode" and "mode confusion" are needed for safety-critical systems.
Haptic interaction
A haptic interface is a system that allows a human to interact with a computer through bodily sensations and movements. Haptics refers to a type of human-computer interaction technology that uses tactile feedback or other bodily sensations to perform actions or processes on a computing device.
Communication is basically the act of transferring information from one place to another. Feedback is a system in which the reaction or response of the receiver arrives at the sender after the receiver has interpreted the message. Feedback is essential to making two-way communication effective; without feedback, communication remains incomplete. At times feedback is verbal, whether written or oral; in other cases it is nonverbal. Feedback is mainly a response from your audience, and it allows you to evaluate the effectiveness of your message. In fact, research shows that the majority of messages sent are nonverbal, and the ability to understand and use nonverbal communication is a powerful tool that helps people connect with each other.
Just as nonverbal cues carry much of the weight in communication, the sense of touch, known as haptics, plays an important role in this new phase of technology. Haptics is the science of applying touch sensation and control to interaction with computer applications by using special input/output devices. It gives users a slight jolt of energy at the point of touch, providing instant sensory feedback while reducing the audio, visual, or audio-visual demand. Haptic technology is an evolutionary step toward interacting with objects as an extension of our minds, and it allows for more socially appropriate and subtle interaction.
Previous studies on haptic communication indicate that the benefits of interpersonal touch exist
even when touch is artificially mediated between people that are physically apart.
In the current study an evaluation of three input gestures (i.e., moving, squeezing, and stroking)
was conducted to identify preferred methods for creating haptic messages using a hand-held
device. Furthermore, two output methods (i.e., one or four haptic actuators) were investigated in
order to determine whether representing spatial properties of input gestures haptically provides
additional benefit for communication. Participants created haptic messages in four example
communication scenarios. The results of subjective ratings, post-experimental interviews, and
observations showed that squeezing and stroking were the preferred ways to interact with the
device. Squeezing was an unobtrusive and quick way to create haptic content. Stroking, on the
other hand, enabled crafting of more detailed haptic messages. Spatial haptic output was
appreciated especially when using the stroking method. These findings can help in designing haptic
communication methods for hand-held devices.
Computer-based tasks
A task is a term used to describe a software program, or a section of a program, that is running in a multitasking environment. Tasks help the user and the computer distinguish between the programs running on the computer.
a) Abstract
Abstract thinkers have the ability to look beyond what is obvious and search for hidden meanings. They can read between the lines and enjoy solving cryptic puzzles. They don't like routine and get bored easily.
b) Analytical
Analytical thinkers like to separate a whole into its basic parts in order to examine these parts
and their relationships. They are great problem-solvers and have a structured and methodical way
of approaching tasks.
This type of thinker will seek answers and use logic rather than emotional thinking in life.
However, they have a tendency to overthink things and can ruminate on the same subject for
months.
c) Creative
Creative thinkers think outside the box and will come up with ingenious solutions to solve their
dilemmas in life. They like to break away from the traditions and norms of society when it comes
to new ideas and ways of thinking.
They can sometimes be ridiculed, as society prefers to keep the status quo. Creative thinkers can also court jealousy if they manage to follow their dreams and work in a creative field.
d) Concrete thinking
Concrete thinking focuses on the physical world, rather than the abstract one. It is all about
thinking of objects or ideas as specific items, rather than as a theoretical representation of a more
general idea.
Concrete thinkers like hard facts, figures and statistics. For example, you will not get any
philosophers who think in concrete terms. Children think in concrete terms as it is a very basic and
literal form of understanding.
e) Critical thinking
Critical thinking takes analytical thinking up a level. Critical thinkers exercise careful
evaluation or judgment in order to determine the authenticity, accuracy, worth, validity, or value
of something. And rather than strictly breaking down the information, critical thinking explores
other elements that could have an influence on conclusions.
f) Convergent thinking
Convergent thinking is a process of combining a finite number of perspectives or ideas to find
a single solution. Convergent thinkers will target these possibilities, or converge them inwards, to
come up with a solution.
One example is a multiple choice question in an exam. You have four possible answers but only
one is right. In order to solve the problem, you would use convergent thinking.
g) Divergent thinking
By contrast, divergent thinking is the opposite of convergent thinking. It is a way of exploring
an infinite number of solutions to find one that is effective. So, instead of starting off with a set
number of possibilities and converging on an answer, it goes as far and wide as necessary and
moves outwards in search of the solution.
Convergent thinking
Includes analytical and concrete types of thinking
If you are a convergent thinker, you are more likely to be an analytical or concrete thinker. You
are generally able to logically process thoughts so you can use your ability to remain cool and
calm in times of turmoil.
You are also more likely to be a natural problem solver. Think of any famous super sleuth, from
Sherlock Holmes to Inspector Frost, and you will see convergent thinking at play. By gathering
various bits of information, convergent thinkers are able to put the pieces of a puzzle together and
come up with a logical answer to the question of "Who has done it?"
Concrete thinkers will look at what is visible and reliable. Concrete thinking will only consider the
literal meaning while abstract thinking goes deeper than the facts to consider multiple or hidden
meanings. However, if you are a concrete thinker it means you are more likely to consider literal meanings and you're unlikely to get distracted by "what ifs" or other minor details.
Divergent thinking
Includes abstract and creative types of thinking
Divergent thinking is all about looking at a topic or problem from many different angles. Instead
of focusing inwards it branches outwards. It is an imaginative way of looking at the world. As
such, it uses abstract thinking to come up with new ideas and unique solutions to problems.
Abstract thinking goes beyond all the visible and present things to find hidden meanings and
underlying purpose. For example, a concrete thinker will look at a flag and only see specific
colours, marking, or symbols that appear on the cloth. An abstract thinker would see the flag as a
symbol of a country or organization. They may also see it as a symbol of liberty and freedom.
Divergent thinkers like to go off on tangents. They'll take the meandering route, rather than the
tried and trusted straight and narrow approach. If you are a divergent thinker, you are more likely
to be a good storyteller or creative writer. You are good at setting scenes and a natural entertainer.
You like to be creative in your approach to problem-solving.
Human-Computer Model
In cognitive models there is a strong mapping between human and computer processing. Computer input is mapped to human perception; processing and memory in computers are aligned with human contemplation; and the machine's output corresponds to human actions and behaviors. It is a useful model for simplifying the complexities of humans by comparing our behaviors to those of machines.
The input and output terminals in the human-computer model are understandable, but the processing chain, the steps between input and output, is vastly different.
It is hard to replicate human processing with hardware
Information processing in humans is vastly more complicated than in any computer system, as we have recently discovered. Researchers were able to simulate a single second of human brain activity with 82,944 processors in 40 minutes, consuming 1 petabyte of system memory. It is also worth noting that this process only created an artificial neural network of 1.73 billion nerve cells, compared to our brain's 80-100 billion nerve cell network.
Research on progress bars has found that backwards-moving animations seem faster to users than forward-moving ones, which illustrates that humans do not perceive time the way a computer does. In the study, users perceived the loading time to be 11% longer when the progress bar's animation moved forward, despite the loading times being the same in each trial. Users experience events through perception in the mind; it is not a binary experience as in a computing machine.
Cognitive Psychology
Cognitive psychology is the science of how we think. It's concerned with our inner mental processes such as attention, perception, memory, action planning, and language. Each of these components is pivotal in forming who we are and how we behave.
Cognitive psychology is the branch of psychology that focuses on the way people process
information. It looks at how we process information we receive and how the treatment of this
information leads to our responses. In other words, cognitive psychology is interested in what is
happening within our minds that links stimulus (input) and response (output).
Cognitive psychology & Information Processing
At the very heart of cognitive psychology is the idea of information processing.
Cognitive psychology sees the individual as a processor of information, in much the same way
that a computer takes in information and follows a program to produce an output.
Basic Assumptions
The information processing approach is based on a number of assumptions, including:
(1) information made available by the environment is processed by a series of processing
systems (e.g. attention, perception, short-term memory);
(2) these processing systems transform or alter the information in systematic ways;
(3) the aim of research is to specify the processes and structures that underlie cognitive
performance;
(4) information processing in humans resembles that in computers.
The computer gave cognitive psychologists a metaphor, or analogy, to which they could compare
human mental processing. The use of the computer as a tool for thinking how the human mind
handles information is known as the computer analogy.
Essentially, a computer codes (i.e., changes) information, stores information, uses information,
and produces an output (retrieves info). The idea of information processing was adopted by
cognitive psychologists as a model of how human thought works.
For example, the eye receives visual information and codes it into electric neural activity, which is relayed to the brain where it is "stored" and "coded". This information can be used by other parts of the brain for mental activities such as memory, perception and attention. The output (i.e., behavior) might be, for example, to read what you can see on a printed page.
Hence the information processing approach characterizes thinking as the environment providing
input of data, which is then transformed by our senses. The information can be stored, retrieved and transformed using "mental programs", with the results being behavioral responses.
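The computer analogy (a computer codes, stores, uses, and outputs information) can be illustrated with a toy program. Purely a sketch: the function names and the lower-casing "encoding" are invented for this example and stand in for the much richer processes the analogy describes.

```python
# Toy version of the computer analogy: information is encoded (changed),
# stored, then retrieved to produce an output. Illustrative only.

memory: dict[str, str] = {}

def encode(stimulus: str) -> str:
    return stimulus.lower()        # change the raw input into a stored code

def store(key: str, code: str) -> None:
    memory[key] = code             # hold the encoded information

def retrieve(key: str) -> str:
    return memory.get(key, "")     # produce output from what was stored

store("greeting", encode("HELLO"))
print(retrieve("greeting"))        # prints "hello"
```

The point of the analogy is the staged flow, input transformed, held, and later retrieved, rather than any claim that the brain literally works like a dictionary lookup.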
Cognitive psychology has influenced and integrated with many other approaches and areas of
study to produce, for example, social learning theory, cognitive neuropsychology and artificial
intelligence (AI).
Information Processing & Attention
When we are selectively attending to one activity, we tend to ignore other stimulation, although
our attention can be distracted by something else, like the telephone ringing or someone using
our name.
Psychologists are interested in what makes us attend to one thing rather than another (selective
attention); why we sometimes switch our attention to something that was previously unattended
(e.g. Cocktail Party Syndrome), and how many things we can attend to at the same time
(attentional capacity).
One way of conceptualizing attention is to think of humans as information processors who can
only process a limited amount of information at a time without becoming overloaded.
Broadbent and others in the 1950s adopted a model of the brain as a limited-capacity information processing system, through which external input is transmitted.
Information processing models consist of a series of stages, or boxes, which represent stages of
processing. Arrows indicate the flow of information from one stage to the next.
Levels of Processing
The levels-of-processing approach focuses on the depth of processing involved in memory, and predicts that the deeper information is processed, the longer a memory trace will last.
Processing depth has been defined as "the meaningfulness extracted from the stimulus rather than in terms of the number of analyses performed upon it."
Unlike the multi-store model it is a non-structured approach. The basic idea is that memory is
really just what happens as a result of processing information.
Memory is just a by-product of the depth of processing of information, and there is no clear
distinction between short term and long term memory.
Therefore, instead of concentrating on the stores/structures involved (i.e. short term memory &
long term memory), this theory concentrates on the processes involved in memory.
Shallow Processing
- This takes two forms
1. Structural processing (appearance), which is when we encode only the physical qualities of something, e.g. the typeface of a word or how the letters look.
2. Phonemic processing, which is when we encode a word's sound, e.g. noticing that it rhymes with another word.
Shallow processing only involves maintenance rehearsal (repetition to help us hold something in the STM) and leads to fairly short-term retention of information.
This is the only type of rehearsal to take place within the multi-store model.
Deep Processing
- This involves
3. Semantic processing, which happens when we encode the meaning of a word and relate it to
similar words with similar meaning.
Deep processing involves elaboration rehearsal which involves a more meaningful analysis
(e.g. images, thinking, associations etc.) of information and leads to better recall.
For example, giving words a meaning or linking them with previous knowledge.
Summary
Levels of processing: The idea that the way information is encoded affects how well it is
remembered. The deeper the level of processing, the easier the information is to recall.
Influence of human thinking processes on HCI
Meaning of Design Thinking
Design thinking is a non-linear, iterative process which seeks to understand users, challenge
assumptions, redefine problems and create innovative solutions to prototype and test. The
method consists of 5 phases—Empathize, Define, Ideate, Prototype and Test and is most useful
when you want to tackle problems that are ill-defined or unknown.
Design teams use design thinking to tackle ill-defined or unknown problems (otherwise known as wicked problems) because the process reframes these problems in human-centric ways, and allows designers to focus on what's most important for users. Design thinking offers us a means to think outside the box and also dig that bit deeper into problem solving. It helps designers carry out the right kind of research, create prototypes and test out products and services to uncover new ways to meet users' needs.
The design thinking process has become increasingly popular over the last few decades because
it was key to the success of many high-profile, global organizations—companies such as Google,
Apple and Airbnb have wielded it to notable effect, for example. This outside the box thinking is
now taught at leading universities across the world and is encouraged at every level of business.
Design thinking improves the world around us every day because of its ability to generate
ground-breaking solutions in a disruptive and innovative way. Design thinking is more than just
a process, it opens up an entirely new way to think, and offers a collection of hands-on methods
to help you apply this new mindset.
Stages of Design Thinking
Design thinking is described as a five-stage process. It's important to note that these stages are not always sequential; designers can run the stages in parallel, out of order, and repeat them in an iterative fashion.
The various stages of design thinking should be understood as different modes which contribute
to the entire design project, rather than sequential steps. The ultimate goal throughout is to derive
as deep an understanding of the product and its users as possible.
Problem Solving
Computer programmers are problem solvers. In order to solve a problem on a computer you must (1) represent the information that describes the problem, and (2) define an algorithm that operates on that information to produce a solution.
Information Representation
A computer, at heart, is really dumb. It can only really know about a few things... numbers,
characters, booleans, and lists (called arrays) of these items. (See Data Types). Everything else
must be "approximated" by combinations of these data types.
A good programmer will "encode" all the "facts" necessary to represent a problem in variables
(See Variables). Further, there are "good ways" and "bad ways" to encode information. Good
ways allow the computer to easily "compute" new information.
Algorithm
An algorithm (see Algorithm) is a set of specific steps to solve a problem. Think of it this way: if you were to tell your 3-year-old niece to play your favorite song on the piano (assuming the niece has never played a piano), you would have to tell her where the piano was, how to sit on the bench, how to open the cover, which keys to press, and in which order to press them, and so on.
The core of what good programmers do is being able to define the steps necessary to accomplish
a goal. Unfortunately, a computer, only knows a very restricted and limited set of possible steps.
For example a computer can add two numbers. But if you want to find the average of two
numbers, this is beyond the basic capabilities of a computer. To find the average, you must:
1. First: add the two numbers and save the result in a variable.
2. Then: divide this new number by two, and save the result in a variable.
3. Finally: provide this number to the rest of the program (or print it for the user).
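The three averaging steps map directly onto code. A minimal sketch:

```python
# The computer's primitive steps for "find the average of two numbers":
# add, then divide, then hand the result onward.

def average(a: float, b: float) -> float:
    total = a + b        # step 1: add and save the result in a variable
    result = total / 2   # step 2: divide the new number by two
    return result        # step 3: provide it to the rest of the program

print(average(4, 10))    # prints 7.0
```

Each line corresponds to one of the numbered steps above, which is the sense in which an algorithm is just a sequence of primitive operations the machine already knows.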
Research in cognitive psychology has made our lives easier to some extent. There are concrete psychological steps involved in problem solving which, if properly followed, can help us tackle every sort of problem.
One of the important aspects of solving a problem is forming a good strategy. A strategy might be well thought out, rigorous and a sure winner, but might not be viable given the resources at hand.
The core strategies involved in solving problems are given below.
Problem-Solving Strategies
1) Algorithms
The step-by-step procedure involved in figuring out the correct answer to a problem is called an algorithm. The step-by-step procedure involved in solving a mathematical problem using a formula is a perfect example of a problem-solving algorithm. An algorithm is the strategy that results in an accurate answer; however, it's not always practical, since it can be highly time-consuming and involve taking many steps.
For instance, attempting to open a door lock by using an algorithm to try every possible number combination would take a really long time.
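The lock example shows why an exhaustive algorithm is accurate but slow: the number of combinations grows exponentially with the number of dials. A sketch, assuming a hypothetical 4-digit lock:

```python
from itertools import product

# Exhaustive (algorithmic) search of a digit lock: guaranteed to find
# the answer, but only after trying up to 10**n combinations.

def crack(secret: tuple) -> int:
    attempts = 0
    for combo in product(range(10), repeat=len(secret)):
        attempts += 1
        if combo == secret:
            return attempts  # number of tries needed
    return attempts

print(crack((9, 9, 9, 9)))   # worst case for 4 digits: 10000 attempts
```

The guarantee of correctness is exactly what makes the algorithm attractive, and the 10,000-attempt worst case is exactly what makes it impractical at a real door.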
2) Heuristics
Heuristics refers to a mental strategy based on rules of thumb. There is no guarantee that a heuristic will always produce the best solution. However, the rule-of-thumb strategy does help to simplify complex problems by narrowing the possible solutions, which makes it easier to reach the correct solution using other strategies.
The heuristic strategy of problem solving can also be referred to as a mental shortcut. For instance, suppose you need to reach the other side of the city in a limited amount of time. You'll obviously seek the shortest route and means of transportation. The rule of thumb allows you to make up your mind about the fastest route based on your past commutes; you might choose the subway instead of hiring a cab.
3) Trial-and-Error
Trial and error is the approach of trying a number of different solutions and ruling out the ones that do not work. Using this strategy as the first method of attack on a problem can be highly time-consuming, so it's best to use it as a follow-up, to figure out the best possible solution after you have narrowed down the candidates using other techniques.
For instance, suppose you're trying to open a lock. Entering every possible combination directly onto the lock as a trial-and-error method can be highly time-consuming. Instead, if you've narrowed the possible combinations down to 20, you'll have a much easier time solving the particular problem.
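The narrow-then-try combination can be sketched as: a heuristic first shrinks the candidate set, and trial and error only runs over the survivors. The candidate rule here (the owner uses a year as the code) is an invented assumption for the example.

```python
# Trial and error as a follow-up strategy: a rule of thumb narrows
# thousands of codes down to a handful, and only those are tried.

def try_lock(code: int, secret: int) -> bool:
    return code == secret          # stand-in for physically trying the lock

def crack_narrowed(candidates, secret):
    for attempts, code in enumerate(candidates, start=1):
        if try_lock(code, secret):
            return code, attempts  # found it after this many trials
    return None, len(candidates)   # exhausted the narrowed set

candidates = range(1990, 2010)     # heuristic: 20 candidates, not 10000
print(crack_narrowed(candidates, 2004))   # (2004, 15)
```

At worst this takes 20 trials instead of 10,000, which is the payoff of combining a heuristic with trial and error rather than using either alone.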
4) Insight
Insight is something that just occurs suddenly. Researchers suggest that insight can occur if you've dealt with similar problems in the past. For instance, knowing that you've solved a particular algebra question in the past will make it much easier for you to solve similar questions at present. However, the mental processes involved need not always relate to past problems; in fact, most mental processes leading to insight happen outside of consciousness.
Outcome 2: Understand principles of interaction
1. Developing models of interaction for an application
Meaning of Design Principles
Design principles are widely applicable laws, guidelines, biases and design considerations which
designers apply with discretion. Professionals from many disciplines—e.g., behavioral science,
sociology, physics and ergonomics—provided the foundation for design principles via their
accumulated knowledge and experience.
Interaction design should create a great user experience. That requires experience with, and an understanding of, the basic principles of interaction design across most UI disciplines. It's about designing for the entire interconnected system: the device, the interface, the context, the environment, and the people. Interaction designers strive to create meaningful relationships between people and the products and services that they use, from computers, to mobile devices, to appliances, and beyond.
Ten basic principles of interaction design to consider are given below:
1. Match Experiences and Expectations
Matching your audience's prior experiences and expectations is achieved by using common conventions or UI patterns.
2. Consistency
As well as matching people's expectations through terminology, layout and interactions, the way in which these are used should be consistent throughout the process and between related applications.
Maintaining consistency helps users learn more quickly: they can re-apply in one part of the application their prior experience from another.
An added bonus of keeping elements consistent is that you can then use inconsistency to
indicate to users where things do not work the way they might expect. Breaking consistency is
similar to knowing when to be unconventional as mentioned above.
3. Functional Minimalism
"Everything should be made as simple as possible, but no simpler." (Albert Einstein)
The range of possible actions should be no more than is absolutely necessary. Providing too many options can detract from the primary functions and reduce usability by overwhelming the user with choices; avoiding this is the zen of 'functional minimalism'.
4. Cognitive Loads
Cognition is the scientific term for the "process of thought". When designing interactions, we need to minimize the amount of "thinking work" required to complete a particular task. Another way of putting it: a good assistant uses their own skills so that the master can focus on the master's skills.
For instance, while designing we need to understand how much concentration a task requires and create a user interface that reduces cognitive load as much as possible. A good way to reduce the amount of "thinking work" the user has to do is to focus on what the computer is good at and build a system that uses the computer's skills to the best of its abilities. Remember that a computer is good at:
Math‘s
Remembering things
Keeping track of things
Comparing things
Spell Checking and spotting/correcting errors
Knowing qualities of product and users, one can create a design for better user experience.
5. Engagement
In User Experience terms, engagement measures the extent to which a consumer has a
meaningful experience. An engaging experience is not only more enjoyable, but also easier and
more productive. As with many things, engagement is subjective, so the system you're designing
must engage the desired audience; what appeals to a teenager is not necessarily what their
grandparent would also find engaging. Beyond aligning with the appropriate users, control,
achievement and creation are key.
The user should feel like they are in control of the experience at all times; they must constantly
feel like they're achieving something and be able to see the results through positive feedback,
or alternatively feel like they've created something.
6. Flow
When in a state of flow, people are so engaged in the activity they're doing that the rest of the
world falls away. Flow is what we're looking to achieve through engaging interactions. We should
allow users to concentrate on their work, not on the user interface. In short, keep out of the way!
7. Perceivability
People should be aware of the opportunities to interact with interactive media. As interface
designers, we must avoid developing hidden interactions, which decrease the usability, efficiency,
and user experience of interactive media. In other words, people should not have to guess or look
for opportunities to interact.
When developing interactive media, users should have the ability to review an interface and
identify where they can interact. We must remember that not everyone experiences and interacts
with an interface in the same way others do. Make it a habit to provide hints and indicators –
visual cues such as buttons, icons, textures and text fields – and allow the user to see that these
visual cues can actually be clicked or tapped. We should always take into consideration the
usability and accessibility of our interactive media and how the user sees and perceives the
objects in the interface.
8. Learnability
Another very important core principle is the ability to easily learn and use an interface after using
it for the first time. Remember that engaging interfaces allow users to easily learn and remember
their interactions.
Learnability makes interaction simpler and more intuitive. Even simple interfaces may require a
certain amount of experience to learn. People tend to interact with an interface in similar ways
to how they interact with other interfaces. For this reason, we must understand the importance
of design patterns and consistency. This allows the user to avoid having to learn something new
and will give them a sense of achievement. They will feel smart and capable of grasping and
utilizing newer interfaces, allowing them to feel confident while navigating through the
interface.
9. Error Prevention, Detection and Recovery
Error Prevention
Prevent errors by:
Error Detection
Anticipate possible errors and provide feedback that helps users verify that:
It's important to remember that providing feedback by changing the visual state of an object or
item is more noticeable than a written message.
Error Recovery
If the error is unavoidable, provide clearly marked ways for the user to recover from it. For
example, provide "back", "undo" or "cancel" commands.
If a specific action is irreversible it should be classed as critical, and you should make the
user confirm first in order to prevent slip-ups. Alternatively, you can create a system that
naturally defaults to a less harmful state. For example, if I close a document without saving it,
the system should be intelligent enough to know that it is unlikely that I intended the action and
therefore either auto-save or clearly warn me before closing.
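The recovery ideas above can be sketched in code. This is a minimal, illustrative example (the class and method names are invented, not from any real toolkit): state is saved before each change so it can be undone, and an irreversible "close with unsaved work" action requires explicit confirmation.

```python
# Minimal sketch of error recovery: an undo history plus a
# confirmation step for an irreversible action. All names here
# are illustrative, not from any real toolkit.

class Editor:
    def __init__(self):
        self.text = ""
        self.history = []                # snapshots for "undo"

    def type_text(self, s):
        self.history.append(self.text)   # save state before the change
        self.text += s

    def undo(self):
        if self.history:                 # recover from the last action
            self.text = self.history.pop()

    def close(self, saved, confirm):
        # Irreversible action: closing with unsaved work requires
        # explicit confirmation (a real system might auto-save instead).
        if not saved and not confirm():
            return False                 # user cancelled - nothing lost
        return True

editor = Editor()
editor.type_text("hello")
editor.type_text(" world")
editor.undo()                            # back to "hello"
```

The pattern generalises: every destructive command either has an inverse ("undo") or a confirmation gate, so slips never cost the user work.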
10. Affordance
Affordance is the quality of an object that allows an individual to perform an action. For example,
a standard household light switch has good 'affordance', in that it innately invites pressing. In
short, the physical properties of an object should suggest how it can be used. In the context of
user interfaces, affordance can be achieved by:
Interaction design is not only about creating a better interface for users; it is also about using
technology in the way people want. It is necessary to know the target users in order to design a
desirable product for them. Interaction design is the basis for the success of any product. These
10 principles of interaction design are based on the study and experience of our team in designing
mobile and web apps for a broad product portfolio and on multiple mobile and web platforms.
• Manage the transitions between tightly and loosely coupled collaboration
Coupling is the degree to which people are working together. People continually
shift back and forth between loosely and tightly coupled collaboration as they move
between individual and group work
• Support people with the coordination of their actions
Members of a group mediate their interactions by taking turns negotiating the sharing
of a common workspace. Groups regularly reorganize the division of work based upon
what other participants have done or are doing.
• Facilitate finding collaborators and establishing contact
Meetings are normally facilitated by physical proximity and can be scheduled,
spontaneous or initiated. The lack of physical proximity in virtual environments
requires other mechanisms to compensate.
Others have produced alternative sets of rules. However, the important issue is that there is
no consensus as to which set of rules should be applied in any given case. In other words,
HCI is a fragmented discipline which, according to Diaper (2005), shows a lack of coherent
development.
HCI Models
A variety of different models have been put forward which are designed to provide an HCI
theory in a particular context. These include Norman's model, Abowd and Beale's model and
the audience participation model of Nemirovsky (2003), which presents a new theoretical
basis for audience participation in HCI.
Norman’s model of interaction
This has probably been the most influential (Dix et al 1992 p105) because it mirrors
human intuition. In essence this model is based on the user formulating a plan of
action and then carrying it out at the interface. Norman has divided this into seven stages:
1. establishing the goal
2. forming the intention
3. specifying the action sequence
4. executing the action
5. perceiving the system state
6. interpreting the system state
7. evaluating the system state with respect to the goals and intentions
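The seven stages above form a cycle of execution and evaluation. As an illustrative sketch (not from Norman's own writing; the goal, action and state values are hypothetical), the cycle can be walked through in code:

```python
# Illustrative walk-through of Norman's seven stages of action.
# Only the stage names come from the model; the goal, the action
# plan and the system states are hypothetical.

def one_interaction_cycle(goal, execute, read_state):
    """Run a single pass through the seven stages, logging each one."""
    log = []
    log.append(("establish the goal", goal))
    log.append(("form the intention", f"intend to achieve: {goal}"))
    actions = ["press button"]                       # hypothetical plan
    log.append(("specify the action sequence", actions))
    result = execute(actions)                        # crossing the gulf of execution
    log.append(("execute the action", result))
    state = read_state()                             # crossing the gulf of evaluation
    log.append(("perceive the system state", state))
    log.append(("interpret the system state", f"system is now: {state}"))
    log.append(("evaluate against the goal", state == goal))
    return log

log = one_interaction_cycle(
    goal="light on",
    execute=lambda actions: "done",
    read_state=lambda: "light on",
)
```

If the final evaluation fails (the state does not match the goal), the user loops back to stage 1 and reformulates the plan, which is exactly how the model explains iterative interaction.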
Figure: The General Interaction Framework
The interface sits between the user and the system, and there is a four-step interactive cycle
as shown by the labelled arrows. The user formulates a task to achieve a goal. The user
manipulates the machine through input articulated in the input language. The input
language is translated into the system's core language to perform the operation. The system
is then in a new state, which is featured as the output. The output is communicated to the user
through observation.
This latter and more radical model is unlikely to have any real application to the CSCL problem
at hand, since its sphere of application is designed to emulate the creative inspiration of the artist
battling against the mechanical controls of the machine.
• The decision where to begin and end the analysis needs to be clarified.
• Deciding how to document and present the outcome
Heuristic evaluation
A heuristic evaluation is a usability inspection method for computer software that helps to
identify usability problems in the user interface (UI) design. It specifically involves evaluators
examining the interface and judging its compliance with recognized usability principles (the
"heuristics"). These evaluation methods are now widely taught and practiced in the new media
sector, where UIs are often designed in a short space of time on a budget that may restrict the
amount of money available to provide for other types of interface testing.
Jakob Nielsen's heuristics are probably the most-used usability heuristics for user interface
design. Nielsen developed the heuristics based on work together with Rolf Molich in 1990. The
final set of heuristics that are still used today were released by Nielsen in 1994. The heuristics as
published in Nielsen's book Usability Engineering are as follows:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation – any such information should be easy to search, focused on the
user's task, list concrete steps to be carried out, and not be too large.
It's easy to assemble a set of user characteristics and call it a persona, but it's not so easy to create
personas that are truly effective design and communication tools. If you have begun to create your
own personas, here are some tips to help you perfect them.
The idea of personas originated with Alan Cooper, an interaction designer and consultant.
It is part of an approach to software design called goal-directed design. It may be referred
to as an approach or an idea, because it is not a theory about users or interaction, nor is it a
complete process for software development. Cooper introduced the ideas of personas
and goal-directed design in the book The Inmates Are Running the Asylum (Cooper 1999) and
in articles (Cooper 1996). In addition, goal-directed design is promoted through Cooper's
consulting firm, so he has a commercial interest in it as well.
Personas – A Simple Introduction
Personas are fictional characters, which you create based upon your research in order to
represent the different user types that might use your service, product, site, or brand in a similar
way. Creating personas will help you to understand your users’ needs, experiences, behaviours
and goals. Creating personas can help you step out of yourself. It can help you to recognise that
different people have different needs and expectations, and it can also help you to identify with the
user you’re designing for. Personas make the design task at hand less complex, they guide your
ideation processes, and they can help you to achieve the goal of creating a good user experience
for your target user group.
As opposed to designing products, services, and solutions based upon the preferences of the design
team, it has become standard practice within many human centred design disciplines to collate
research and personify certain trends and patterns in the data as personas. Hence, personas do not
describe real people, but you compose your personas based on real data collected
from multiple individuals. Personas add the human touch to what would largely remain cold facts
in your research. When you create persona profiles of typical or atypical (extreme) users, it will
help you to understand patterns in your research, which synthesises the types of people you seek
to design for. Personas are also known as model characters or composite characters.
Personas provide meaningful archetypes which you can use to assess your design development
against. Constructing personas will help you ask the right questions and answer those questions
in line with the users you are designing for. For example: "How would Peter, Joe, and Jessica
experience, react, and behave in relation to feature X or change Y within the given context?",
"What do Peter, Joe, and Jessica think, feel, do and say?" and "What are the underlying needs
we are trying to fulfill?"
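Since a persona is a structured record synthesised from research data, the idea can be sketched in code. The fields and the example values below are hypothetical, not from any standard persona template:

```python
# A persona as a simple structured record. Field names and example
# values are illustrative, not a standard persona template.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    age: int
    role: str
    goals: list = field(default_factory=list)
    frustrations: list = field(default_factory=list)

    def question(self, feature):
        # The kind of design question a persona helps us ask.
        return f"How would {self.name} react to {feature}?"

peter = Persona(
    name="Peter",
    age=34,
    role="warehouse supervisor",
    goals=["check stock levels quickly"],
    frustrations=["too many clicks to reach a report"],
)
```

Treating personas as explicit records like this makes them easy to share across a team and to revisit when asking questions such as "How would Peter react to feature X?".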
1. Goal-directed Personas
This persona cuts straight to the nitty-gritty. It focuses on the question: "What does my typical
user want to do with my product?" The objective of a goal-directed persona is to examine the
process and workflow that your user would prefer to utilise in order to achieve their objectives
in interacting with your product or service. There is an implicit assumption that you have already
done enough user research to recognise that your product has value to the user, and that by
examining their goals, you can bring their requirements to life. Goal-directed personas are based
upon the perspectives of Alan Cooper, an American software designer and programmer who is
widely recognized as the "Father of Visual Basic".
2. Role-Based Personas
The role-based perspective is also goal-directed, and it also focuses on behaviour. The personas
of the role-based perspective are heavily data-driven and incorporate data from both qualitative
and quantitative sources. The role-based perspective focuses on the user's role in the organisation.
In some cases, our designs need to reflect upon the part that our users play in their organisations
or wider lives. An examination of the roles that our users typically play in real life can help inform
better product design decisions. Where will the product be used? What is this role's purpose? What
business objectives are required of this role? Who else is impacted by the duties of this role? What
functions are served by this role? Jonathan Grudin, John Pruitt, and Tamara Adlin are advocates
for the role-based perspective.
3. Engaging Personas
"The engaging perspective is rooted in the ability of stories to produce involvement and insight.
Through an understanding of characters and stories, it is possible to create a vivid and realistic
description of fictitious people. The purpose of the engaging perspective is to move from
designers seeing the user as a stereotype with whom they are unable to identify and whose life
they cannot envision, to designers actively involving themselves in the lives of the personas. The
other persona perspectives are criticized for causing a risk of stereotypical descriptions by not
looking at the whole person, but instead focusing only on behavior."
– Lene Nielsen
Engaging personas can incorporate both goal- and role-directed personas, as well as the more
traditional rounded personas. These engaging personas are designed so that the designers who
use them can become more engaged with them. The idea is to create a 3D rendering of a user
through the use of personas. The more people engage with the persona and see them as 'real', the
more likely they will be to consider them during the design process and want to serve them with
the best product. These personas examine the emotions of the user, their psychology and
backgrounds, and make them relevant to the task in hand. The perspective emphasises how stories
can engage and bring the personas to life. One of the advocates for this perspective is Lene Nielsen.
One of the main difficulties of the persona method is getting participants to use it (Browne, 2011).
In a short while, we'll let you in on Lene Nielsen's model, which sets out to cover this problem
through a 10-step process of creating an engaging persona.
4. Fictional Personas
The fictional persona does not emerge from user research (unlike the other personas); instead, it
emerges from the experience of the UX design team. It requires the team to make assumptions
based upon past interactions with the user base, and with products, to deliver a picture of what,
perhaps, typical users look like. There is no doubt that these personas can be deeply flawed (and
there are endless debates on just how flawed). You may be able to use them as an initial sketch
of user needs. They allow for early involvement with your users in the UX design process, but
they should not, of course, be trusted as a guide for your development of products or services.
The 10 steps are an ideal process but sometimes it is not possible to include all the steps in the
project. Here we outline the 10-step process as described by Lene Nielsen in her Interaction Design
Foundation encyclopedia article, Personas.
1. Collect data. Collect as much knowledge about the users as possible. Perform high-quality user
research of actual users in your target user group. In Design Thinking, the research phase is the
first phase, also known as the Empathise phase.
2. Form a hypothesis. Based upon your initial research, you will form a general idea of the various
users within the focus area of the project, including the ways users differ from one another – For
instance, you can use Affinity Diagrams and Empathy Maps.
3. Everyone accepts the hypothesis. The goal is to support or reject the first hypothesis about the
differences between the users. You can do this by confronting project participants with the
hypothesis and comparing it to existing knowledge.
4. Establish a number. You will decide upon the final number of personas, which it makes sense
to create. Most often, you would want to create more than one persona for each product or service,
but you should always choose just one persona as your primary focus.
5. Describe the personas. The purpose of working with personas is to be able to develop solutions,
products and services based upon the needs and goals of your users. Be sure to describe personas
in such a way as to express enough understanding and empathy to understand the users.
You should include details about the user's education, lifestyle, interests, values, goals,
needs, limitations, desires, attitudes, and patterns of behaviour.
Add a few fictional personal details to make the persona a realistic character.
Give each of your personas a name.
Create one to two pages of description for each persona.
6. Prepare situations or scenarios for your personas. This engaging persona method is directed
at creating scenarios that describe solutions. For this purpose, you should describe a number of
specific situations that could trigger use of the product or service you are designing. In other words,
situations are the basis of a scenario. You can give each of your personas life by creating scenarios
that feature them in the role of a user. Scenarios usually start by placing the persona in a specific
context with a problem they want to or have to solve.
7. Obtain acceptance from the organization. It is a common thread throughout all 10 steps that the
goal of the method is to involve the project participants. As such, as many team members as
possible should participate in the development of the personas, and it is important to obtain the
acceptance and recognition of the participants of the various steps. In order to achieve this, you
can choose between two strategies: You can ask the participants for their opinion, or you can let
them participate actively in the process.
8. Disseminate knowledge. In order for the participants to use the method, the persona
descriptions should be disseminated to all. It is important to decide early on how you want to
disseminate this knowledge to those who have not participated directly in the process, to future
new employees, and to possible external partners. The dissemination of knowledge also includes
how the project participants will be given access to the underlying data.
9. Everyone prepares scenarios. Personas have no value in themselves; a persona only gains real
value once it becomes part of a scenario – the story about how the persona uses a future product.
10. Make ongoing adjustments. The last step is the future life of the persona descriptions. You
should revise the descriptions on a regular basis. New information and new aspects may affect
the descriptions. Sometimes you would need to rewrite the existing persona descriptions, add new
personas, or eliminate outdated personas.
Model-Based User Interface Development (MBUID) is one approach that aims at decreasing the
effort needed to develop UIs while ensuring UI quality. The purpose of model-based design is
to identify high-level models that allow designers to specify and analyse interactive software
applications at a more semantic level, rather than starting immediately at the implementation
level. This allows them to concentrate on the more important aspects without being
immediately confused by many implementation details, and then to have tools which update the
implementation so that it remains consistent with the high-level choices.
Significant Models in HCI
Task models
Cognitive architectures
User models
Domain Models
Context Models
Presentation Models
Dialogue models
There are many reasons for developing task models. In some cases the task model of an existing
system is created in order to better understand the underlying design and analyse its potential
limitations and how to overcome them. In other cases designers create the task model of a new
application yet to be developed. In this case, the purpose is to indicate how activities should be
performed in order to obtain a new, usable system that is supported by some new technology.
Features
Assumes that the tasks are related in the sense that they all share a small set of attributes
Representation of commonalities and variabilities in a hierarchical diagram
Represented at various abstraction levels.
The subject of a task model can be either an entire application or one of its parts. The
application can be either a complete, running interactive system or a prototype under
development.
Cognitive Model
A cognitive model is a descriptive account or computational representation of human thinking
about a given concept, skill, or domain. Here, the focus is on cognitive knowledge and skills, as
opposed to sensori-motor skills, and can include declarative, procedural, and strategic
knowledge. A cognitive model of learning, then, is an account of how humans gain accurate and
complete knowledge. This is closely related to metacognitive reasoning and can come about as a
result of (1) revising (i.e., correcting) existing knowledge, (2) acquiring and encoding new
knowledge from instruction or experience, and (3) combining existing components to infer and
deduce new knowledge. A cognitive model of learning should explain or simulate these mental
processes and show how they produce relatively permanent changes in the long-term memory of
learners. It is also common to consider impoverished...
Features
Categorises perceptual objects
Uses processes to accomplish complex tasks
Described in mathematical or computational languages
Derived from basic principles of cognition
Based mainly on experiments.
User model
A user model is a (data) structure that is used to capture certain characteristics about an
individual user, and a user profile is the actual representation in a given user model. The
process of obtaining the user profile is called user modeling.
Description
User models are explicit representations of the properties of an individual user including needs,
preferences as well as physical, cognitive and behavioral characteristics. The characteristics are
represented by variables. The user model is established by the declaration of these variables. Due
to the wide range of applications, it is often difficult to have a common format. A user model can
be seen from a functional/procedural point of view or from a more declarative point of view. In
the first case the focus is laid on processes and actions that the user carries out to accomplish
tasks. In the second case the focus is set on definitions and descriptions of the properties and
characteristics of the user.
A user profile is an instantiation of a user model representing either a specific real user or a
representative of a group of real users.
Beside the user model there are other related models e.g. task model, device model or
environment model. Task models describe how to perform activities to reach users' goals. A
device model is a representation of the features and capabilities of one or several physical
components involved in user interaction. The device model expresses capabilities of the device.
An environment model is a set of characteristics used to describe the use environment. It
includes all required contextual characteristics.
User modeling
User models are used to generate or adapt user interfaces at runtime, to address particular user
needs and preferences. User models are also known as user profiles, personas or archetypes.
They can be used by designers and developers for personalization purposes and to increase the
usability and accessibility of products and services.
Features
Represents the properties of an individual user including needs, preferences as well as
physical, cognitive and behavioral characteristics
Characteristics are represented by variables
Covers both the processes and actions the user carries out to accomplish tasks, and
definitions and descriptions of the properties and characteristics of the user.
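As a sketch of the idea (the variable names and adaptation rules below are hypothetical), a user model can be a small structure of characteristic variables, a user profile is one instantiation of it, and the profile drives runtime adaptation of the interface:

```python
# A user model as a set of characteristic variables; a profile is one
# instantiation of the model. Variables and adaptation rules are
# purely illustrative.
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    font_scale: float = 1.0      # physical/visual characteristic
    expertise: str = "novice"    # behavioural characteristic
    prefers_dark: bool = False   # preference

def adapt_ui(profile):
    """Derive interface settings at runtime from the profile."""
    return {
        "font_size": round(12 * profile.font_scale),
        "theme": "dark" if profile.prefers_dark else "light",
        # Novices get extra hints; experts do not.
        "show_hints": profile.expertise == "novice",
    }

alice = UserProfile("Alice", font_scale=1.5, expertise="expert")
settings = adapt_ui(alice)
```

This mirrors the two views described above: the profile holds the declarative characteristics, while `adapt_ui` represents the functional side that acts on them.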
Domain model
A domain model is a system of abstractions that describes selected aspects of a sphere of
knowledge, influence or activity (a domain). The model can then be used to solve problems related
to that domain.
Domain Modeling
Domain Modeling is a way to describe and model real world entities and the relationships
between them, which collectively describe the problem domain space. Derived from an
understanding of system-level requirements, identifying domain entities and their relationships
provides an effective basis for understanding and helps practitioners design systems for
maintainability, testability, and incremental development. Because there is often a gap between
understanding the problem domain and the interpretation of requirements, domain modeling is a
primary modeling activity in product development at scale. Driven in part by object-oriented design
approaches, domain modeling envisions the solution as a set of domain objects that collaborate
to fulfill system-level scenarios.
Features
Helps to understand the project problem description and to translate the requirements of that
project into software components of a solution.
The software components are commonly implemented in an object-oriented programming
language.
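To illustrate the object-oriented view (the library-lending domain below is invented purely for illustration), domain objects model real-world entities and their relationships, and collaborate to fulfil a scenario:

```python
# A tiny domain model: real-world entities (Book, Member) and a
# relationship (loans) that collaborate to fulfil a "borrow a book"
# scenario. The lending domain is invented for illustration only.

class Book:
    def __init__(self, title):
        self.title = title
        self.on_loan = False

class Member:
    def __init__(self, name):
        self.name = name
        self.loans = []          # relationship: member -> borrowed books

    def borrow(self, book):
        # Domain rule: a book can be on loan to at most one member.
        if book.on_loan:
            return False
        book.on_loan = True
        self.loans.append(book)
        return True

hobbit = Book("The Hobbit")
ann, ben = Member("Ann"), Member("Ben")
ann.borrow(hobbit)     # succeeds
ben.borrow(hobbit)     # fails - already on loan
```

Note that the model encodes a rule of the problem domain (one loan per book) rather than anything about the user interface, which is exactly the separation domain modeling aims for.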
Context model
Context models are used to illustrate the operational context of a system - they show what lies
outside the system boundaries. Social and organizational concerns may affect the decision on
where to position system boundaries. Architectural models show the system and its relationship
with other systems. Context models simply show the other systems in the environment, not how
the system being developed is used in that environment.
Features
Context models are used to illustrate the operational context of a system
They show what lies outside the system boundaries
They show how IT applications fit into the context of the people and the organization they serve
They simply show the other systems in the environment, not how the system being developed is
used in that environment
Producing an architectural model is the first step in context modeling
Social and organizational concerns may affect the decision on where to position system boundaries
Dialogue model
A dialogue model is an abstract model that is used to describe the structure of the dialogue
between a user and an interactive computer system. Dialogue models form the basis of the
notations that are used in user interface management systems (UIMS). ... It is shown that the event
model has the greatest descriptive power.
Features
Meaningful communication, so that the computer understands what people are entering and
people understand what the computer is presenting or requesting.
Minimal user action.
Standard operation and consistency.
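A common way to describe the structure of a dialogue is as a set of states and events, in the spirit of the event model mentioned above. The sketch below is illustrative only; the states and events of this minimal login dialogue are invented:

```python
# Dialogue structure as a state/event table: states of the
# conversation and the user events that move between them.
# States and events are invented for illustration.

TRANSITIONS = {
    ("start", "open_login"): "awaiting_credentials",
    ("awaiting_credentials", "submit_ok"): "logged_in",
    ("awaiting_credentials", "submit_bad"): "error_shown",
    ("error_shown", "retry"): "awaiting_credentials",
    ("logged_in", "logout"): "start",
}

def run_dialogue(events):
    """Feed a sequence of user events through the dialogue model."""
    state = "start"
    for event in events:
        # Events with no transition leave the dialogue unchanged,
        # which enforces "standard operation and consistency".
        state = TRANSITIONS.get((state, event), state)
    return state

final = run_dialogue(["open_login", "submit_bad", "retry", "submit_ok"])
# final == "logged_in"
```

Writing the dialogue down this way makes the legal interaction paths explicit, which is precisely what UIMS notations use dialogue models for.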
Presentation Model
Represents the state and behavior of the presentation independently of the GUI controls used in
the interface
Features
Constituted of global message designer constructs to communicate the domain and
interaction messages to the user
Contains elements such as tasks, command panels and domain object displays
These elements are used to organize the presentation of domain and interaction elements
The organization is specified using the presentation and interaction structures
It defines how all of the objects and actions that are part of an application interrelate, in ways
that mirror and support real-life user interactions. It ensures that users always stay oriented and
understand how to move from place to place to find information or perform tasks. It provides a
common vision for an application. It enables designers, developers, and stakeholders to understand
and explain how users move from objects to actions within a system.
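The core idea, keeping presentation state and behavior out of the GUI controls themselves, can be sketched as follows. The counter example and its names are hypothetical; any real toolkit's widgets would simply read from this object:

```python
# Sketch of the presentation-model idea: presentation state and
# behavior live in a plain object, independent of any GUI toolkit.
# The counter example is invented for illustration.

class CounterPresentation:
    """Holds presentation state; no GUI toolkit is involved."""
    def __init__(self):
        self.count = 0

    @property
    def label_text(self):          # what the view should display
        return f"Clicked {self.count} times"

    @property
    def reset_enabled(self):       # behavior: when is reset allowed?
        return self.count > 0

    def click(self):
        self.count += 1

    def reset(self):
        self.count = 0

pm = CounterPresentation()
pm.click()
pm.click()
# Any GUI control can now sync from pm.label_text / pm.reset_enabled.
```

Because the object has no widget dependencies, the presentation logic can be tested directly, and the same model can back different GUI control sets.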
Interactive modeling is a feature that enables you to create an output object with which the end
user can interact before generating a model. This interactive output object is placed on the
Outputs tab of the manager pane and contains an interim dataset. The interim dataset can be used
to refine or simplify the model before it is generated. Interactive modeling is achieved by adding
some extra elements to the specification of a regular model builder node:
The Constructors section of the Node definition includes a
CreateInteractiveModelBuilder element.
The extension includes a dedicated InteractiveModelBuilder element.
User interaction with the interim dataset takes place by means of a window known as the
interaction window, which is displayed as soon as the output object is created.
Interaction is specific to the algorithm used and is implemented by the extension. The interaction
window is defined in the User Interface section of the InteractiveModelBuilder
element. You can define an interaction window by specifying either of the following:
A frame class (see User Interface Section), which defines the window completely
A panel class, specified as an attribute of an extension object panel (see Extension Object
Panel), for each tab of the window
Once closed, the interaction window can be redisplayed by double-clicking the object name on
the Outputs tab.
The interaction window specification needs to include code to generate the model once the user
has completed interaction. In the example illustrated, this is done by means of the toolbar button
with the gold nugget icon, which is associated with an action to generate the model. The code for
this is shown in the InteractiveModelBuilder section under Example of Interactive
Modeling.
User Interface Section
The user interface for an object is declared in a UserInterface element within the object
definition in the specification file. There can be more than one UserInterface element in the
same specification file, depending on how many objects (for example, model builder node, model
output object, model applier node) are defined in the file.
Note: In the element definitions that follow (normally identified by the heading Format), element
attributes and child elements are optional unless indicated as "(required)." For the complete
syntax of all elements.
For each user interface, you specify the processing to be performed. You can do this by means of
either an action handler or a frame class attribute, both of which are optional. Where neither an
action handler nor a frame class attribute is specified, the processing to be performed is specified
elsewhere in the file.
Format
The basic format of the UserInterface element is:
<UserInterface>
<Icons>
<Icon ... />
...
</Icons>
<Controls>
<Menu ... />
<MenuItem ... />
...
<ToolbarItem ... />
...
</Controls>
<Tabs>
<Tab ... />
...
</Tabs>
</UserInterface>
An extension object UI delegate is associated with either a node dialog or a model- or document-
output window and allows the extension to handle custom actions invoked on the dialog.
Note that extensions should not assume that the UI can be invoked. For example, extensions may
be run within "headless" (no UI) environments such as batch mode and application servers.
An action handler is used where you are adding custom actions to a standard IBM SPSS Modeler
window. It is similar to the extension object UI delegate although an action handler can also
be used when there is no extension node or output window such as when defining a new tool that
can be invoked directly from the IBM SPSS Modeler main window. The action handler specifies
the Java class that is called when a user chooses a custom menu option or toolbar button on
a node dialog, a model output window, or a document output window. It is an implementation
of either an ExtensionObjectFrame class or an ActionHandler class. In either case, the
standard window components are included automatically, such as the standard menus, tabs, and
toolbar buttons.
Here is an example of a model apply node dialog containing an extension object panel.
Format
where:
id is a unique identifier that can be used to reference the panel in Java code.
panelClass (required) is the name of the Java class that the panel implements.
The advanced custom layout options provide a fine degree of control over the positioning and
display of screen components.
Example
The following example illustrates the Tab section that defines the extension object panel shown
earlier:
The following UX processes are good starting points to empathize with users in order to elicit the
best design response:
User Personas: Based on the psychologies and behaviors of the target audience, you need
to create user personas and refer to them when making crucial design decisions.
User Scenarios: These help to explain how a user persona will act when using the
product. Designers need to explore the context in which the user interacts with the product.
User Experience Maps: Also known as journey maps, user experience maps record all
the conditions of a single interaction, including external circumstances and emotions.
This approach helps interaction designers to first acknowledge their constraints before devising a
solution. You consider the goals users have in mind when using your product and design it
accordingly.
2. It must be Easy to Use
This is the bare minimum for any product. If your product lacks usability, no one will want to
use it. Usability is the ease with which someone uses your product to achieve their desired
goal. Just like UX design, interaction design needs to consider the inherent usability of the
interfaces, making the underlying system easy to comprehend and use.
For example, suppose you are designing an online movie and events ticketing app with a high
level of detail, where users can choose rows, seat numbers and so on. The app typically needs to
consolidate a multi-step, multi-program process and transform the whole experience into a
single linear path.
The usability needs to be effortless if you want people to use the app extensively. If they need
to invest a lot of time just to understand the system, chances are they will abandon your app.
The best practice would be something like:
Open the app → Select a show/event → Select date & place → Select row & seat → Payment
and checkout
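The linear path above can be sketched as a tiny state machine that accepts steps only in their fixed order. This is an illustrative sketch: the step names and the `BookingFlow` class are assumptions for this example, not any real app's API.

```python
# Minimal sketch of the linear booking path described above.
# Step names are illustrative assumptions, not a real app's API.

BOOKING_STEPS = ["open_app", "select_show", "select_date_place",
                 "select_row_seat", "payment_checkout"]

class BookingFlow:
    """Accept booking steps only in their linear order."""

    def __init__(self):
        self.position = 0  # index of the next expected step

    def complete(self, step):
        expected = BOOKING_STEPS[self.position]
        if step != expected:
            # Out-of-order steps are rejected, keeping the flow linear.
            raise ValueError(f"expected {expected!r}, got {step!r}")
        self.position += 1

    @property
    def done(self):
        return self.position == len(BOOKING_STEPS)
```

Rejecting out-of-order steps is one simple way to keep a multi-step process on a single linear path, rather than leaving the user to navigate between independent screens.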
3. Learnability
Using UI patterns like breadcrumbs or WYSIWYG editors is one of the best ways to improve
learnability. Users are already familiar with these patterns, since many websites and applications
use them. Besides, they offer enough room for customization and creativity.
4. Signifiers & Affordances
Signifiers basically work as metaphors, telling people why they should interact with a particular
item by communicating its purpose. Affordances, on the other hand, define the relation between
the user and their environment: they express the underlying functionality of your product.
Designers need to use signifiers and affordances consistently throughout the design.
5. What's the Feedback & Response Time?
Finally, we have feedback and response time as one of the "good" characteristics of interaction
design. Feedback is key to interaction design. Your product must communicate to users
(provide feedback) whether the desired task was accomplished and what they need to do next.
Take the example of Hootsuite's owl, which goes to sleep if you remain inactive for a long
time. It is funny, intelligent feedback, and many other websites and apps are following this
example. The response time of feedback is also a key factor. Feedback should feel immediate:
anywhere from 0.1 second up to at most 10 seconds.
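The timing limits above can be sketched as a simple classifier. The 0.1-second and 10-second figures come from the text; the intermediate 1-second tier is an assumption based on commonly cited usability guidance.

```python
# Sketch of response-time limits for feedback design.
# The 0.1 s and 10 s figures come from the text above; the 1 s tier is an
# assumed intermediate threshold often quoted in usability literature.

def feedback_requirement(seconds):
    """Suggest what feedback a UI should give for a given response time."""
    if seconds <= 0.1:
        return "instantaneous"        # feels immediate; no extra feedback needed
    if seconds <= 1.0:
        return "noticeable delay"     # user's flow of thought is preserved
    if seconds <= 10.0:
        return "show busy indicator"  # user needs a cue that work is happening
    return "show progress bar"        # beyond ~10 s users lose attention
```

A designer can use such a table to decide, per operation, whether a spinner or progress bar is required rather than adding them everywhere.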
Models of interaction
Interaction involves at least two participants: the user and the system. Both are complex,
and are very different from each other in the way that they communicate and view the
domain and the task. The interface must therefore effectively translate between them to
allow the interaction to be successful. This translation can fail at a number of points and for
a number of reasons. The use of models of interaction can help us to understand exactly
what is going on in the interaction and identify the likely root of difficulties. They also
provide us with a framework to compare different interaction styles and to consider
interaction problems.
We begin by considering the most influential model of interaction, Norman's execution–evaluation
cycle; then we look at another model which extends the ideas of Norman's cycle.
Both of these models describe the interaction in terms of the goals and actions of the user.
We will therefore briefly discuss the terminology used and the assumptions inherent in the
models, before describing the models themselves.
The terms of interaction
Traditionally, the purpose of an interactive system is to aid a user in accomplishing goals
from some application domain. A domain defines an area of expertise and knowledge in
some real-world activity. Some examples of domains are graphic design, authoring and
process control in a factory. A domain consists of concepts that highlight its important
aspects. In a graphic design domain, some of the important concepts are geometric shapes,
a drawing surface and a drawing utensil. Tasks are operations to manipulate the concepts of
a domain. A goal is the desired output from a performed task. For example, one task within
the graphic design domain is the construction of a specific geometric shape with particular
attributes on the drawing surface. A related goal would be to produce a solid red triangle
centred on the canvas. An intention is a specific action required to meet the goal.
Task analysis involves the identification of the problem space for the user of an interactive
system in terms of the domain, goals, intentions and tasks. We can use our knowledge of
tasks and goals to assess the interactive system that is designed to support them. The
concepts used in the design of the system and the description of the user are separate, and
so we can refer to them as distinct components, called the System and the User, respectively.
The System and User are each described by means of a language that can express concepts
relevant in the domain of the application. The System's language we will refer to as the core
language and the User's language we will refer to as the task language. The core language
describes computational attributes of the domain relevant to the System state, whereas the
task language describes psychological attributes of the domain relevant to the User state.
The system is assumed to be some computerized application, in the context of this book,
but the models apply equally to non-computer applications. It is also a common assumption
that by distinguishing between user and system we are restricted to single-user applications.
This is not the case. However, the emphasis is on the view of the interaction from a single
user's perspective. From this point of view, other users, such as those in a multi-party
conferencing system, form part of the system.
The execution–evaluation cycle
In Norman's model of interaction, the user first establishes a goal, which must be translated into an
intention, and then into the actual actions that will reach the goal, before it can be executed by the
user. The user perceives the new state of the system, after execution of the action sequence,
and interprets it in terms of his expectations. If the system state reflects the user's goal then
the computer has done what he wanted and the interaction has been successful; otherwise
the user must formulate a new goal and repeat the cycle.
Norman uses a simple example of switching on a light to illustrate this cycle. Imagine you
are sitting reading as evening falls. You decide you need more light; that is you establish
the goal to get more light. From there you form an intention to switch on the desk lamp, and
you specify the actions required, to reach over and press the lamp switch. If someone else is
closer the intention may be different – you may ask them to switch on the light for you. Your
goal is the same but the intention and actions are different. When you have executed the
action you perceive the result – either the light is on or it isn't – and you interpret this, based on
your knowledge of the world. For example, if the light does not come on you may interpret
this as indicating the bulb has blown or the lamp is not plugged into the mains, and you will
formulate new goals to deal with this. If the light does come on, you will evaluate the new
state according to the original goals – is there now enough light? If so, the cycle is complete.
If not, you may formulate a new intention to switch on the main ceiling light as well.
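The desk-lamp example can be written as a minimal sketch of one pass through the cycle. The `Lamp` class and the `one_cycle` function are illustrative assumptions, not Norman's own notation.

```python
# Sketch of one pass of Norman's execution-evaluation cycle using the
# desk-lamp example above. Class and function names are illustrative.

class Lamp:
    def __init__(self):
        self.on = False

    def press_switch(self):
        # The action the user specifies and then executes.
        self.on = not self.on

def one_cycle(lamp, enough_light):
    # Execution phase: goal -> intention -> action specification -> execution.
    lamp.press_switch()
    # Evaluation phase: perceive the new state, interpret it, and evaluate
    # it against the original goal ("is there now enough light?").
    perceived = lamp.on
    return enough_light(perceived)
```

If `one_cycle` returns False, the user formulates a new intention, such as switching on the ceiling light as well, and repeats the cycle, which mirrors the loop described above.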
Norman uses this model of interaction to demonstrate why some interfaces cause problems
to their users. He describes these in terms of the gulfs of execution and the gulfs of
evaluation. As we noted earlier, the user and the system do not use the same terms to describe
the domain and goals – remember that we called the language of the system the core language
and the language of the user the task language. The gulf of execution is the difference
between the user's formulation of the actions to reach the goal and the actions allowed by
the system. If the actions allowed by the system correspond to those intended by the user,
the interaction will be effective. The interface should therefore aim to reduce this gulf.
The gulf of evaluation is the distance between the physical presentation of the system state
and the expectation of the user. If the user can readily evaluate the presentation in terms of
his goal, the gulf of evaluation is small. The more effort that is required on the part of the
user to interpret the presentation, the less effective the interaction.
Norman's model is a useful means of understanding the interaction, in a way that is clear and
intuitive. It allows other, more detailed, empirical and analytic work to be placed within
a common framework. However, it only considers the system as far as the interface. It
concentrates wholly on the user's view of the interaction. It does not attempt to deal with the
system's communication through the interface. An extension of Norman's model, proposed
by Abowd and Beale, addresses this problem. This is described in the next section.
The interaction framework
The interaction framework attempts a more realistic description of interaction by including
the system explicitly, and breaks it into four main components, as shown in the figure below.
The nodes represent the four major components in an interactive system – the System, the
User, the Input and the Output. Each component has its own language. In addition to the
User’s task language and the System’s core language, there are languages for both the Input
and Output components. Input and Output together form the Interface.
As the interface sits between the User and the System, there are four steps in the interactive
cycle, each corresponding to a translation from one component to another, as shown by the
labelled arcs in Figure above. The User begins the interactive cycle with the formulation of
a goal and a task to achieve that goal. The only way the user can manipulate the machine is
through the Input, and so the task must be articulated within the input language. The input
language is translated into the core language as operations to be performed by the System.
The System then transforms itself as described by the operations; the execution phase of the
cycle is complete and the evaluation phase now begins. The System is in a new state, which
must now be communicated to the User. The current values of system attributes are rendered
as concepts or features of the Output. It is then up to the User to observe the Output and
assess the results of the interaction relative to the original goal, ending the evaluation phase
and, hence, the interactive cycle. There are four main translations involved in the interaction:
articulation, performance, presentation and observation.
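The four translations above can be sketched as a pipeline of functions, one per arc of the framework. The framework names the translations; the toy translation functions in the example are illustrative assumptions.

```python
# Sketch of one pass of the interaction framework's four translations.
# The framework names the stages; the toy translations below are assumptions.

def interactive_cycle(task, articulate, perform, present, observe):
    input_expr = articulate(task)    # articulation: task language -> input language
    new_state = perform(input_expr)  # performance: input language -> core language
    output = present(new_state)      # presentation: core language -> output language
    return observe(output)           # observation: output language -> user assessment

# Toy example: the "get more light" task from the earlier discussion.
result = interactive_cycle(
    "get more light",
    articulate=lambda task: "press lamp switch",
    perform=lambda command: {"lamp": "on"},
    present=lambda state: "the room is lit",
    observe=lambda rendered: "lit" in rendered,
)
```

Viewing the cycle this way makes the framework's assessment questions concrete: each translation is a function whose coverage and ease can be judged independently.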
The User's formulation of the desired task to achieve some goal needs to be articulated in
the input language. The tasks are responses of the User and they need to be translated to
stimuli for the Input. As pointed out above, this articulation is judged in terms of the coverage
from tasks to input and the relative ease with which the translation can be accomplished. The
task is phrased in terms of certain psychological attributes that highlight
the important features of the domain for the User. If these psychological attributes map
clearly onto the input language, then articulation of the task will be made much simpler. An
example of a poor mapping, as pointed out by Norman, is a large room with overhead lighting
controlled by a bank of switches. It is often desirable to control the lighting so that only one
section of the room is lit. We are then faced with the puzzle of determining which switch
controls which lights. The result is usually repeated trials and frustration. This arises from the
difficulty of articulating a goal (for example, 'Turn on the lights in the front of the room') in
an input language that consists of a linear row of switches, which may or may not be oriented
to reflect the room layout.
Conversely, an example of a good mapping is in virtual reality systems, where input devices
such as datagloves are specifically geared towards easing articulation by making the user's
psychological notion of gesturing an act that can be directly realized at the interface. Direct
manipulation interfaces, such as those found on common desktop operating systems like
the Macintosh and Windows, make the articulation of some file handling commands easier.
On the other hand, some tasks, such as repetitive file renaming or launching a program
whose icon is not visible, are not at all easy to articulate with such an interface.
At the next stage, the responses of the Input are translated to stimuli for the System. Of
interest in assessing this translation is whether the translated input language can reach as
many states of the System as is possible using the System stimuli directly. For example, the
remote control units for some compact disc players do not allow the user to turn the power
off on the player unit; hence the off state of the player cannot be reached using the remote
control's input language. On the panel of the compact disc player, however, there is usually
a button that controls the power. The ease with which this translation from Input to System
takes place is of less importance because the effort is not expended by the user. However,
there can be a real effort expended by the designer and programmer. In this case, the ease
of the translation is viewed in terms of the cost of implementation.
Once a state transition has occurred within the System, the execution phase of the interaction
is complete and the evaluation phase begins. The new state of the System must be
communicated to the User, and this begins by translating the System responses to the
transition into stimuli for the Output component. This presentation translation must preserve
the relevant system attributes from the domain in the limited expressiveness of the output
devices. The ability to capture the domain concepts of the System within the Output is a
question of expressiveness for this translation.
For example, while writing a paper with some word-processing package, it is necessary at
times to see both the immediate surrounding text where one is currently composing, say,
the current paragraph, and a wider context within the whole paper that cannot be easily
displayed on one screen (for example, the current chapter).
Ultimately, the user must interpret the output to evaluate what has happened. The
response from the Output is translated to stimuli for the User which trigger assessment.
The observation translation will address the ease and coverage of this final translation.
For example, it is difficult to tell the time accurately on an unmarked analog clock,
especially if it is not oriented properly. It is difficult in a command line interface to
determine the result of copying and moving files in a hierarchical file system. Developing
a web site using a mark-up language like HTML would be virtually impossible without
being able to preview the output through a browser.
Introduction
In any complex system, most errors and failures in the system can be traced to a human source.
Incomplete specifications, design defects, and implementation errors such as software bugs and
manufacturing defects, are all caused by human beings making mistakes. However, when looking
at human errors in the context of embedded systems, we tend to focus on operator errors and errors
caused by a poor human-computer interface (HCI).
Human beings have common failure modes and certain conditions will make it more likely for a
human operator to make a mistake. A good HCI design can encourage the operator to perform
correctly and protect the system from common operator errors. However, there is no well-defined
procedure for constructing an HCI for safety critical systems.
In an embedded system, cost, size, power, and complexity are especially limited, so the interface
must be relatively simple and easy to use without sacrificing system safety. Also, a distinction
must be made between highly domain specific interfaces, like nuclear power controls or airplane
pilot controls, and more general "walk up and use" interfaces, like automated teller machines or
VCR onscreen menus. However, this is not a hard and fast distinction, because there are interfaces
such as the one in the common automobile that specifically require some amount of training and
certification (most places in the world require a driver's license test) but are designed to be
relatively simple and universal. However, not all cars have the same interface, and even small
differences may cause an experienced driver to make a mistake when operating an unfamiliar car.
In safety critical systems, the main goal of the user interface is to prevent the operator from
making a mistake and causing a hazard. In most cases usability is a complementary goal in that a
highly usable interface will make the operator more comfortable and reduce anxiety. However,
there are some tradeoffs between characteristics that make the interface usable and characteristics
that make it safe. For example, a system that allows the user to commit a procedure by simply
pressing the enter key a series of times may make it extremely usable, but allow the operator to
bypass important safety checks or easily confirm an action without assessing the consequences.
This was one of the problems with the Therac-25 medical radiation device. Operators could easily
bypass error messages on the terminal and continue to apply treatment, not realizing they were
administering lethal doses of radiation to the patient. Also, the error messages were not particularly
descriptive, which points to another problem with user interfaces: providing appropriate feedback. It
is also important to recognize that not all systems are safety critical, and in those cases, usability
is the main goal of the HCI. If the user must operate the system to perform a task, the interface
should guide the user to take the appropriate actions and provide feedback to the user when
operations succeed or fail.
However, if the human operator must routinely be involved in the control of the system, he or she
will tend to make mistakes and adapt to the common mode of operation. Also, if the operator has
a persistent mental model of the system in its normal mode of operation, he or she will tend to
ignore data indicating an error unless it is displayed with a high level of prominence. The HCI
must be designed so that it provides enough novelty to keep the user alert and interested in his or
her job, but not so complicated that the user will find it difficult to operate.
Stress is also a major contributing factor to human error. Stressful situations include unfamiliar
or exceptional occurrences, incidents that may cause a high loss of money, data, or life, or time
critical tasks. Human performance tends to degrade when stress levels are raised. Intensive training
can reduce this effect by making unusual situations familiar through drills. However, the
cases where human beings must perform at their best to avoid hazards are often the cases of most
extreme stress and worst error rates. The failure rate can be as high as thirty percent in extreme
situations. Unfortunately, the human operator is our only option, since a computer system usually
cannot correct for truly unique situations and emergencies. The best that can be done is to design
the user interface so that the operator will make as few mistakes as possible.
HCI Problems
The HCI must provide intuitive controls and appropriate feedback to the user. Many HCIs can
cause information overload. For example, if an operator must watch several displays to observe
the state of a system, he or she may be overwhelmed and not be able to process the data to gain an
appropriate view of the system. This may also cause the operator to ignore displays that are
perceived as having very low information content. This can be dangerous if one particular
display is in control of a critical sensor. Another way to overwhelm the operator is to have alarm
sensitivity set too high. If an operator gets an alarm for nearly every action, most of which are
false, he or she will ignore the alarm when there is a real emergency condition.
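This "cry wolf" effect can be quantified with Bayes' rule: when false alarms are frequent and real emergencies are rare, the probability that any given alarm is real becomes tiny. The numeric rates in the usage note are illustrative assumptions.

```python
# Sketch: probability that an alarm reflects a real emergency, given the base
# rate of emergencies, the alarm's hit rate, and its false alarm rate.
# The rates used in testing are illustrative assumptions.

def prob_alarm_is_real(p_emergency, p_hit, p_false_alarm):
    """Bayes' rule: P(real emergency | alarm sounded)."""
    # Total probability that an alarm sounds at all.
    p_alarm = p_hit * p_emergency + p_false_alarm * (1 - p_emergency)
    return (p_hit * p_emergency) / p_alarm
```

With emergencies occurring 0.1% of the time, a 99% hit rate, and a 5% false alarm rate, fewer than 2% of alarms are real, which illustrates why operators learn to ignore an over-sensitive alarm.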
The HCI must also have a confidence level that will allow the operator to assess the validity of its
information. The operator should not have to rely on one display for several sensors. The system
should have some redundancy built into it. Also, several different displays should not relay
information from the same sensor. This would give the user the unsubstantiated notion that he or
she had more information than what was available, or that several different sources were in
agreement. The operator should not trust the information from the HCI to the exclusion of the
rest of his or her environment.
There are several heuristics for judging a well-designed user interface, but there is no systematic
method for designing safe, usable HCIs. It is also difficult to quantitatively measure the safety and
usability of an interface, or to find and correct its defects.
Inspection methods can be applied during the design phase, before the system is built. The fact
that a real interface is not being tested also limits what can be determined about the HCI design.
Empirical methods like protocol analysis actually have
real users test the user interface, and do lengthy analyses on all the data collected during the
session, from keystrokes to mouse clicks to the user's verbal account during interaction.
HCI Design
There are no structured methods for user interface design. There are several guidelines and
qualities that are desirable for a usable, safe HCI, but the method of achieving these qualities is
not well understood. Currently, the best method available is iterative design, evaluation, and
redesign. This is why evaluation methods are important. If we can perform efficient evaluations
and correctly identify as many defects as possible, the interface will be greatly improved. Also,
accurate evaluations earlier in the design phase can save money and time. However, it is easier
to find HCI defects when you have a physical interface to work with. It is also important to separate
the design of the HCI from other components in the system, so defects in the interface do not
propagate faults through the system. One proposed approach outlines an architecture for decoupling
the HCI from the application in hard real-time systems so that complexity is reduced and timing
constraints can be dealt with.
Heuristic Evaluation
Heuristic evaluation involves having a set of people (the evaluators) inspect a user interface design
and judge it based on a set of usability guidelines. These guidelines are qualitative and cannot be
concretely measured, but the evaluators can make relative judgments about how well the user
interface adheres to the guidelines. A sample set of usability heuristics (for example, Nielsen's)
would include: visibility of system status, a match between the system and the real world,
consistency, error prevention, and recognition rather than recall.
This technique is usually applied early in the life cycle of a system, since a working user interface
is not necessary to carry it out. Each individual evaluator can inspect the user interface on his or
her own, judging it according to the set of heuristics without actually having to operate the
interface. This is purely an inspection method. It has been found that to achieve optimal coverage
for all interface problems, about five or more independent evaluators are necessary.
However, if this is cost prohibitive, heuristic evaluation can achieve good results with as few as
three evaluators.
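The "three to five evaluators" advice is often justified with the Nielsen-Landauer estimate, sketched below. The 31% single-evaluator rate is the figure commonly quoted in the usability literature and is treated as an assumption here.

```python
# Sketch of the Nielsen-Landauer estimate of the share of usability problems
# found by i independent evaluators, assuming each evaluator alone finds a
# fraction L of the problems (L ~ 0.31 is the commonly quoted figure).

def problems_found(i, L=0.31):
    """Expected fraction of problems found by i independent evaluators."""
    return 1 - (1 - L) ** i
```

Under this model, three evaluators find roughly two-thirds of the problems and five find about 84%, with quickly diminishing returns beyond that, which matches the cost trade-off described above.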
Heuristic evaluation is good at uncovering errors and explaining why there are usability problems
in the interface. Once the causes are known, it is fairly easy to implement a solution to fix the
interface. This can save considerable time and cost, since problems can be corrected before the user
interface is actually built. However, the merits of heuristic evaluation are very dependent on the
merits of the evaluators. Skilled evaluators who are trained in the domain of the system and can
recognize interface problems are necessary for very domain specific applications.
Cognitive Walkthrough
Another usability inspection method is the cognitive walkthrough. Like the heuristic evaluation,
the cognitive walkthrough can be applied to a user interface design without actually operating a
constructed interface. However, the cognitive walkthrough evaluates the system by focusing on
how a theoretical user would go about performing a task or goal using the interface. Each step the
user would take is examined, and the interface is judged based on how well it will guide the user
to perform the correct action at each stage. The interface should also provide an appropriate level
of feedback to ensure to the user that progress is being made on his or her goal.
Since the cognitive walkthrough focuses on steps necessary to complete a specific task, it can
uncover disparities in how the system users and designers view these tasks. It can also uncover
poor labeling and inadequate feedback for certain actions. However, the method's tight focus loses
sight of some other important usability aspects. This method cannot evaluate global consistency
or extensiveness of features. It may also judge an interface that is designed to be comprehensive
poorly because it provides too many choices to the user.
In order for a user interface to be designed well and as many flaws as possible to be caught, several
inspection methods should be applied. There is a trade-off between how thoroughly the interface
is inspected and how many resources can be committed at this early stage in the system life
cycle. Empirical methods can also be applied at the prototype stage to actually observe the
performance of the user interface in action.
Protocol Analysis
Protocol analysis is an empirical method of user interface evaluation that focuses on the test user's
vocal responses. The user operates the interface and is encouraged to "think out loud" when going
through the steps to perform a task using the system. Video and audio data are recorded, as well
as keystrokes and mouse clicks. Analyzing the data obtained from one test session can be an
extremely time consuming activity, since one must draw conclusions from the subjective vocal
responses of the subject and draw inferences from his or her facial expressions.
This can be very tedious simply because the volume of data is very high and lengthy analysis is
required for each second.
MetriStation
MetriStation is a tool being developed at Carnegie Mellon University to automate the normally
tedious task of gathering and analyzing all the data gathered from empirical user interface
evaluations. The system consists of software that will synchronize and process data drawn from
several sources when a test user is operating the interface being evaluated. Keystrokes, mouse
clicks, tracking the user's eye movements, and the user's speech during a test session are recorded
and analyzed. The system is based on the premise that if the interface has good usability
characteristics, the user will not pause during the test session, but logically proceed from one step
to the next as he or she completes an assigned task. Any statistically unusual pauses between actions
would indicate a flaw in the interface and can be detected automatically.
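The idea of flagging statistically unusual pauses can be sketched as a z-score test over the gaps between a user's actions. The 2-standard-deviation threshold is an assumption for illustration, not MetriStation's actual criterion.

```python
# Sketch of MetriStation-style pause detection: flag inter-action gaps that
# are unusually long relative to the session's own statistics.
# The z-score threshold is an illustrative assumption.
from statistics import mean, stdev

def unusual_pauses(gaps, z_threshold=2.0):
    """Return indices of gaps more than z_threshold std devs above the mean."""
    if len(gaps) < 2:
        return []
    m, s = mean(gaps), stdev(gaps)
    if s == 0:
        return []  # perfectly regular session: nothing to flag
    return [i for i, g in enumerate(gaps) if (g - m) / s > z_threshold]
```

In a session where actions normally follow each other within about half a second, a five-second gap stands out and would be flagged for a human analyst to inspect.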
MetriStation seems like a promising tool for aiding empirical analysis. It can give more quantitative
results and can greatly reduce the time spent collecting and processing data from test sessions.
However, this tool may only flag problems that cause the user to hesitate in a task. It can do
nothing about problems in the interface that do not slow the user down. For instance, it may be
extremely easy to crash a system through the user interface quickly, but this is clearly not a desired
outcome. The premise that most usability problems will cause the user to hesitate has limited scope
and applicability.
Conclusions
The following ideas are the important ones to take away from reading about this topic:
Humans are the most unpredictable part of any system and therefore the most difficult to
model for HCI design.
Humans have higher failure rates under high stress levels, but are more flexible in
recovering from emergency situations and are the last hope in a potential disaster.
The HCI must provide an appropriate level of feedback without overloading the operator
with too much information.
If the human operator is out of the control loop in an automated task, the operator will
tend to adapt to the normal operation mode and not pay close attention to the system.
When an emergency condition occurs, the operator's response will be degraded.
There is a trade-off between making the HCI relatively easy and intuitive and ensuring
that system safety is not compromised by lulling the operator into a state of complacency.
Evaluation techniques for user interfaces are not mature and can be costly. They focus
more on qualitative than quantitative metrics. However, evaluation and iterative
design is the best method we have for improving the interface.
1. interaction style
2. user concept of the interaction
3. interactions for mobile devices
We will go through the list, identifying each technique in terms of how fast it is for the user,
how flexible or expressive it is for the user, how long it takes for the user to learn,
and how hard it is for the programmer to implement.
Command line interfaces are the original interfaces for the computer and are still used.
Examples of command line interfaces are UNIX operating system commands and the vi editor.
They are loved by system administrators for their accuracy, speed and flexibility. If the user knows
the commands, then typing is faster than searching for them in menus. Consequently, some
applications try to offer both; for example, AutoCAD or Microsoft Excel. Users can directly create
script files and specify the command sequence explicitly. However, some commands can be hard to
visualize, and searching for commands or files can be frustrating and slow. The interface is easy
for programmers to implement.
Menus are a basis of WIMP interfaces and a favorite with inexperienced users. Novice users can
easily find commands. Searching the menus, the user builds up a metaphor for the application.
Using menus is slow but fun. Using toolkits, menus are not too bad to program.
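The menu style can be sketched as a minimal text menu: the options are always visible, so the user recognises rather than recalls commands. The option labels below are invented for illustration.

```python
# A minimal menu-driven interaction: options are visible on screen,
# and selection gives immediate feedback.
MENU = {"1": "New file", "2": "Open file", "3": "Quit"}

def render_menu(menu):
    """Return the menu as the text a user would see on screen."""
    return "\n".join(f"{key}. {label}" for key, label in menu.items())

def select(menu, choice):
    """Immediate feedback: either the chosen option or an error prompt."""
    return menu.get(choice, "Invalid choice - please pick a listed number")

print(render_menu(MENU))
print(select(MENU, "2"))  # -> Open file
```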
Dialogs or "question and answer" boxes are another old interface style. They are windows that
pop up asking for information, with fields to be filled in or buttons to be pressed. Dialogue windows
have been around since before WIMP interfaces; they first appeared in database entry application
programs. They are a rather inflexible form of interface. Dialog boxes are easy to understand
but not very flexible or fast. They are easy to program.
Forms are sophisticated dialog boxes and not much more flexible. Spreadsheets are a flexible
and powerful form of interface, especially if the user can specify cell types. Data entry in
spreadsheets is typically slow. They are more difficult to program.
Point and click interfaces were made popular by the web. They were very suitable for early
text-based browsers (and Gopher clients) when web pages were all text. Users knew to interpret
underlined text as a link to another web page. Now, links are often hidden, for example in images.
Icons on the desktop are another example of the point and click style. The notion of point and click is a short
interaction that results in a very specific result. Because the user must move the mouse, this
interface style is slow. It is flexible because many different kinds of UI objects can be pointed at.
Shortcut key interaction is a point and click interaction style without the point. Shortcut keys are generally
easy to implement.
The most common examples of natural language interfaces are the interfaces of search engines.
They are hard to implement, but can be very flexible.
The command gesturing interface style selects an object and uses a gesture to issue commands. In
essence, it is a generalization of "point and click" interfaces. One example is Opera, a web
browser that supports mouse gestures. Other examples of command gesturing are the hand cursor in Adobe
applications. Some games, such as Brothers in Arms, use command gesturing. Keith Ruthowski has demonstrated that a pie menu
can become a form of command gesturing, and once the gestures are learned it is nearly as
accurate and fast as text entry. Because algorithms for interpreting gestures are in their infancy, the
flexibility of command gesturing is not known, and it is difficult to implement. Learning a large set of
gestures can take a long time.
General gesturing is a more general interface style than command gesturing. There does not have
to be an object, and the gesture does not have to represent a command. Examples of general
gesturing are drawing applications and handwritten text entry, as on tablet notebooks. The Wii is
advancing general gesturing in games. Because this is a very new interaction style, it is unknown
how easy it is to learn, but it should be more flexible. It is more difficult to implement.
Tangible interactions refer to manipulating physical objects other than the mouse and keyboard.
There are few current popular examples, but RFID and NFC technology does make some tangible
interactions possible. Low-tech examples of tangible interactions are real buttons, switches and
sliders. They can be fast or slow to use, but should be easy to learn, and can be hard to implement.
Preece, Rogers and Sharp, in Interaction Design, propose that understanding users' conceptual
models for interaction can guide HCI designers to the proper interaction techniques for their
system.
The most important thing to design is the user's conceptual model. Everything else should be
subordinated to making that model clear, obvious, and substantial. That is almost exactly the
opposite of how most software is designed. (David Liddle, 1996, Design of the conceptual model.
In Bringing Design to Software, Addison-Wesley, 17-31)
The HCI designers' goal is to understand the interaction in the terms in which users understand
it. Preece, Rogers and Sharp propose four conceptual models for interaction, based
on the type of activities users perform during the interaction.
Passive Instrumental – the system provides passive feedback to user actions
Passive informative – the system provides passive information to the users.
Users may interact with a system using more than one conceptual interaction model.
Instructional Interactions
Issuing commands is an example of instructional interactions. Instructional interactions are
probably the most common form of conceptual interactions. It allows the user the most control
over the system. Specific examples vary from using a VCR to programming. In most cases
operating system interactions are instructional interactions. Icons, menus and control keys are
examples of improving the usability of command line instructional interaction. Instructional
interactions tend to be quick and efficient.
Conversational Interactions
Conversational Interactions model the interaction as a user-system dialog. Examples of systems
that are primarily conversational are help-systems and search engines. Agents (such as the paper
clip) use conversational interaction. Implementing a conversational model may require voice
recognition and text parsing, or could use forms. The advantage of the conversational model is that it
can be more natural, but it can also be a slower interaction. For example, using automated phone-based
systems is a slow conversational interaction. Another disadvantage of
conversational interaction is that the user may believe that the system is smarter than it really is,
especially if the system uses an animated agent.
Apple was the first computer company to design an operating system using direct manipulation
on the desktop. Direct manipulation and navigational interactions have a lot of benefits. They are
easy to learn, easy to recall, tend to produce fewer errors, give immediate feedback, and produce
less user anxiety. But they have several disadvantages: the interactions are slower, and the user
may believe that the system is more capable than it really is. Poor metaphors, such as dragging the
floppy disk icon to the trash to eject it, can confuse the user.
Explorative and Browsing Interactions
Explorative and browsing interactions refer to searching structured information. Examples of
systems using explorative interactions are music CDs, movie DVDs, the web, and portals. Not much
progress has been made in this conceptual model for interactions, probably because structuring
information is more than a trivial task and is hard to model.
We can make a table summarizing interaction styles and conceptual interaction models. The related
conceptual interaction model is the most common model that is supported by the interaction
style. Implementation is how hard the style is for the designer to implement. Because I made the table, we
should go through it and correct it.
The elements of a dialogue system are not rigidly defined. The typical GUI wizard engages in a sort of
dialog, but it includes very few of the common dialogue system components, and dialog state is
trivial.
Introduction
After dialogue systems based only on written text processing, starting from the early Sixties, the
first speaking dialogue system was issued by a DARPA (Defense Advanced Research
Projects Agency) project in the USA in 1977. After the end of this 5-year project, some
European projects issued the first dialogue systems able to speak many languages (including French,
German and Italian). Those first systems were used in the telecom industry to provide various
phone services in specific domains, e.g. automated agenda and train timetable services.
Components
What sets of components are included in a dialogue system, and how those components divide
up responsibilities, differ from system to system. Central to any dialogue system is the dialog
manager, a component that manages the state of the dialog and the dialog strategy. A
typical activity cycle in a dialogue system contains the following phases:
1. The user speaks, and the input is converted to plain text by the system's input
recognizer/decoder, which may include:
o automatic speech recognizer (ASR)
o gesture recognizer
o handwriting recognizer
2. The text is analyzed by a Natural language understanding unit (NLU), which may
include:
o Proper Name identification
o part of speech tagging
o Syntactic/semantic parser
3. The semantic information is analyzed by the dialog manager, which keeps the history and
state of the dialog and manages the general flow of the conversation.
4. Usually, the dialog manager contacts one or more task managers, which have knowledge
of the specific task domain.
5. The dialog manager produces output using an output generator, which may include:
o natural language generator
o gesture generator
o layout manager
6. Finally, the output is rendered using an output renderer, which may include:
o text-to-speech engine (TTS)
o talking head
o robot or avatar
Dialogue systems that are based on a text-only interface (e.g. text-based chat) contain only stages
2–5.
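The activity cycle above can be sketched as a pipeline. The components here are trivial stand-ins invented purely to show how the phases connect: keyword matching takes the place of a real ASR, NLU unit and task manager.

```python
# A toy dialogue system following the activity cycle above.
# Every component is a stand-in; real systems use ASR, parsers, NLG, TTS.

def input_recognizer(raw):          # phase 1: e.g. ASR -> plain text
    return raw.strip().lower()

def nlu(text):                      # phase 2: extract a crude "intent"
    if "train" in text:
        return {"intent": "timetable"}
    return {"intent": "unknown"}

def task_manager_lookup():          # phase 4: domain-specific knowledge
    return "The next train leaves at 10:15."

class DialogManager:                # phase 3: keep state, manage the flow
    def __init__(self):
        self.history = []
    def respond(self, semantics):
        self.history.append(semantics)
        if semantics["intent"] == "timetable":
            return task_manager_lookup()
        return "Sorry, I did not understand."

def output_renderer(text):          # phases 5-6: e.g. NLG + TTS
    return text

dm = DialogManager()
reply = output_renderer(dm.respond(nlu(input_recognizer("When is the TRAIN?"))))
print(reply)  # -> The next train leaves at 10:15.
```

The dialog manager's `history` list is the (trivial) dialog state; a real system would track much richer state, such as slots filled so far and the current dialog strategy.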
Types of systems
Dialogue systems fall into the following categories, which are listed here along a few
dimensions. Many of the categories overlap and the distinctions may not be well established.
by modality
o text-based
o spoken dialogue system
o graphical user interface
o multi-modal
by device
o telephone-based systems
o PDA systems
o in-car systems
o robot systems
o desktop/laptop systems (native, in-browser, or in a virtual machine)
o in-virtual environment
o robots
by style
o command-based
o menu-driven
o natural language
o speech graffiti
by initiative
o system initiative
o user initiative
o mixed initiative
Although most of these aspects are issues in many different research projects, there is a lack of
tools that support the development of dialogue systems addressing these topics. Apart from
VoiceXML, which focuses on interactive voice response systems and is the basis for many spoken
dialogue systems in industry (customer support applications), and AIML (Artificial Intelligence
Markup Language), which is famous for the A.L.I.C.E. (Artificial Linguistic Internet Computer
Entity) chatbot, none of these integrate linguistic features like dialog acts or language
generation. Therefore, NADIA (Natural DIAlogue system), a research prototype, gives an idea of
how to fill that gap and combines some of the aforementioned aspects, like natural language
generation, adaptive formulation and sub-dialogues.
Applications
Dialogue systems can support a broad range of applications in business enterprises, education,
government, healthcare, and entertainment. For example:
Responding to customers' questions about products and services via a company's website
or intranet portal
Customer service agent knowledge base: Allows agents to type in a customer's question
and guide them with a response
Guided selling: Facilitating transactions by providing answers and guidance in the sales
process, particularly for complex products being sold to novice customers
Help desk: Responding to internal employee questions, e.g., responding to HR questions
Website navigation: Guiding customers to relevant portions of complex websites—a
Website concierge
Technical support: Responding to technical problems, such as diagnosing a problem with
a product or device
Personalized service: Conversational agents can leverage internal and external databases
to personalize interactions, for example answering questions about account balances,
providing portfolio information, or delivering frequent flier or membership information
Training or education: They can provide problem-solving advice while the user learns
Simple dialogue systems are widely used to decrease human workload in call centres. In
this and other industrial telephony applications, the functionality provided by dialogue
systems is known as interactive voice response or IVR.
In some cases, conversational agents can interact with users using artificial characters. These
agents are then referred to as embodied agents.
Guidelines on interaction style design choice
Outcome 3: Be able to design user interfaces
Alternatively, we could define this as any complex system with a UI. That is, something we need
to give instructions to and know the status of in carrying out those instructions. These things are
called systems (or artifacts), with which we have dialogues. Having dialogues with non-humans
(other than animals) is a relatively new concept, developed over the past 50 years.
Usability
For many years, the key has been thought to be making interactive systems usable, i.e., giving
them usability.
To be usable a system needs to be effective. The system supports the tasks the user wants to do
(and the subcomponents of such tasks).
We could also consider other components to make things usable. Efficiency - the system allows
users to do tasks very quickly without making (many) errors; Learnable - the system is easy
enough to learn to use (commensurate with the complexity of the tasks the users want to
undertake); Memorable - the system is easy enough to remember how to use (once learnt) when
users return to tasks after periods of non-use.
We are now moving beyond that to consider satisfaction - the system should make users feel
satisfied with their experience of using it.
Positive user experience, rather than usability, has now become the focus of the design of
interactive systems, particularly as we have so many systems that are for leisure rather than for
work. This expands usability to cover issues such as:
Enjoyable
Fun
Entertaining
Aesthetically pleasing
Supportive of creativity
Another way to think about usability is for the user interface to be transparent/translucent to the
user - the user should be concentrating on their task, and not on how to get the system to do the
task. This might not be the case initially, however. For example, when you first learned to use a
pen you had to think about how to use it; now you don't.
Affordance
Some things, e.g., doors, have a method of use which is ambiguous. Doors should afford opening
in the appropriate way: their physical appearance should immediately tell you what to do.
Affordance could be formally defined as: "The perceived and actual properties of an object,
primarily those properties that could determine how the object could possibly be used" (Norman,
1998).
Affordances could be innate, or perhaps culturally learnt, but a lifetime of experience with doors,
etc, means the interface to these systems is (or should be) transparent. Well designed objects,
both traditional and interactive have the right affordances.
One metaphor for this is to consider interacting with these systems as a dialogue - you tell/ask
the system to do something, it tells you things back; not too dissimilar to having a conversation
with a human being. Another metaphor is to consider these systems as objects that you do things
with and interact with (for example, putting waste in a waste paper bin), or as navigating through
a space and going places (the web).
The conversational metaphor applies here, where you're talking to the computer through your
keyboard and it reacts. However, the language you speak in must be correct to the last dot and in
the correct order, much like speaking a foreign language. The system doesn't give you any clues
on what to do, so you must remember (or use a crib sheet), the syntax of the language. These
commands can get quite long and complex, especially when passing lots of options and (in
UNIX), piping one command to the other.
You also get limited feedback about what is happening; a command such as rm may return you
directly to the command line, after deleting 0 or 100 files. The later versions of DOS took the
feedback step too far, however, asking for a confirmation of every file by default. This is
programmed by the interaction designer, and they have to remember to do this and get the level
of interaction right.
If you do get the command correct, however, this can be a very efficient way of operating a
complex system. A short, but powerful, language allows you to achieve a great deal and people
are willing to invest the time to learn this language to get the most efficient use of the system.
However, although computers are good at dealing with cryptic strings, complex syntax and an
exact reproduction of the syntax every time, humans aren't. This interaction system stemmed
from the fact that processing power was expensive, so humans had to adapt to the way computers
needed to interact, not vice versa. This is no longer the case.
Menus are simple, as not much needs to be remembered as the options are there on screen and
the physical interface corresponds directly to the options available. Feedback is immediate -
selecting an option will either take you to another screen of menus or to perform the selected
task.
Feedback is natural and built in - you can see whether you are going the right way because
something relevant occurs, much like handling objects gives you instant built-in feedback.
Selections from objects (such as radio buttons) can be thought of as a menu, even though the
selection method is different.
Menu systems should be usable without any prior knowledge or memory of previous use, as it
leads you through the interaction. This is excellent for novice and infrequent users (hence their
use in public terminals of many varieties). However, a complex menu structure can make it
unclear where particular features will be found (e.g., IVR systems).
Expert and frequent users get irritated by having to move through the same menu structure every
time they do a particular task (hence shortcut keys in applications such as Word), and menu
systems also present a lack of flexibility - you can only do what menu options are present for
you, so menu driven systems only work where there is a fairly limited number of options at any
point. Items also need to be logically grouped.
How many characters can be typed in the field - this is often not clear, and there is no
inherent indication of it (this needs to be explicitly built in by the interaction designer) -
this violates the feedback principle.
What format is supposed to be used - particularly for things like dates.
Lack of information - what if all the information required by the form isn't available
immediately? The design may not let you proceed beyond the dialogue box.
However, this system does have strengths. If well designed, any novice can use a form-based
interface; it has a strong analogy to paper forms, and the user can be led easily through the
process. However, forms are easy to design badly, and hence confuse and irritate the user.
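The weaknesses noted above (no indication of field length, unclear format) can be addressed by building the feedback in explicitly. Below is a minimal sketch, not from the notes: the `validate_field` function and the date-field specification are invented for illustration.

```python
import re

def validate_field(value, max_len, pattern, format_hint):
    """Return (ok, message) so a form can give immediate, explicit feedback.

    The designer states the maximum length and expected format up front,
    instead of leaving the user to discover them by trial and error.
    """
    if len(value) > max_len:
        return False, f"Too long: at most {max_len} characters allowed"
    if not re.fullmatch(pattern, value):
        return False, f"Expected format: {format_hint}"
    return True, "OK"

# A hypothetical date field, DD/MM/YYYY, at most 10 characters.
print(validate_field("25/12/2024", 10, r"\d{2}/\d{2}/\d{4}", "DD/MM/YYYY"))
print(validate_field("25-12-2024", 10, r"\d{2}/\d{2}/\d{4}", "DD/MM/YYYY"))
```

Returning a message rather than silently rejecting input is the point: the feedback principle says the user should always be told what went wrong and what is expected.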
Direct Manipulation
The heart of modern graphical user interfaces (GUIs, sometimes referred to as WIMP -
Windows, Icons, Mouse/Menus, Pointers) or "point and click" interfaces is direct manipulation
(note, GUI and WIMP themselves are not interaction styles).
The idea of direct manipulation is that you have objects (often called widgets) in the interface
(icons, buttons, windows, scrollbars, etc) and you manipulate those like real objects.
One of the main advantages of direct manipulation is that you get (almost) immediate natural
feedback on the consequences of your action by the change in the object and its context in the
interface. Also, with the widgets being onscreen, you are provided with clues as to what you can
do (doesn't help much with items buried in menu hierarchies). The nature of widgets should tell
you something about what can be done with them (affordance), and with widgets, many more
objects can be presented simultaneously (more than menus).
However, interface actions don't always have real world metaphors you can build on (e.g.,
executing a file) and information rich interactions don't always tend to work well with the object
metaphor and direct manipulation becomes tedious for repetitive tasks, such as dragging,
copying, etc...
This kind of system should be "walk up and use", just speak your own language and this should
suit both novices and expert users. This isn't yet technologically feasible, as the AI is largely fake
and can only recognise key words. If speech recognition fails, it is easy to get into nasty loops,
where it is not clear how you get out again (violating the feedback principle). There is also the
problem of different accents and recognising older voices, and privacy and security issues are
introduced.
1. What is usability?
Usability is a measure of how well a product can be used in a specific scenario by specific users
to achieve a specific goal effectively and satisfactorily. First, usability is not only
related to interface design, but also involves the technical level of the entire system.
Second, usability is reflected by human factors, and evaluated by operating a variety of tasks.
Third, usability describes how effectively a user can interact with a product and how easily
a product can be operated.
Usability is "the extent to which the product can be used by specified users to achieve specified
goals with effectiveness, efficiency and satisfaction in a specified context of use."
Usability is defined by five quality components:
Learnability
Efficiency
Memorability
Errors
Satisfaction
3. What product is considered a usable design?
Efficiency - the degree to which users can finish a particular task and achieve a specific goal
correctly and completely in a short time;
Satisfaction — the user's subjective satisfaction and degree of acceptance in the process of using a
product.
A site with good usability has to follow these factors: fast connection, good design, complete
description and testing, simple operations, friendly and meaningful information interaction, and a
unique style.
The ideal way to ensure usability is to test actual users on a working system. Achieving
high usability requires the design work to focus on the end users of the system. There are many
ways to determine who the major users are, how they work and what must be done. However, the
project's schedule and budget sometimes may discourage this ideal approach.
In user-centered design, designers use a mixture of investigative methods and tools (e.g., surveys
and interviews) and generative ones (e.g., brainstorming) to develop an understanding of user
needs.
Visibility: Users should be able to see from the beginning what they can do with the
product, what it is about, and how they can use it.
Accessibility: Users should be able to find information easily and quickly. They should
be offered various ways to find information, for example call-to-action buttons, a search
option, a menu, etc.
Legibility: Text should be easy to read. As simple as that.
Language: Short sentences are preferred here. The easier the phrase and the words, the
better.
Task-centered system design is a technique that helps developers design and evaluate interfaces
based on users' real-world tasks. As part of design, it becomes a user-centered requirements
analysis (with the requirements being the tasks that need to be satisfied).
As part of evaluation, the evaluator can do a walk-through of the prototype, using the tasks to
generate a step by step scenario of what a user would have to do with the system. Each step in
the walkthrough asks the questions: is it believable that a person would do this; and does the
person have the knowledge to do it? If not, then a bug has been found.
A task is the set of activities (physical and/or cognitive) in which a user engages to achieve a
goal. Therefore, we can distinguish between a goal, the desired state of a system, and a task, the
sequence of actions performed to achieve a goal (i.e., it is a structured set of activities). Goals,
tasks and actions will be different for different people.
Task analysis and modelling are the techniques for investigating and representing the way people
perform activities: what people do, why they do it, what they know, etc. They are primarily about
understanding, clarifying and organising knowledge about existing systems and work. There is a
lot in common with systems analysis techniques, except that the focus here is solely on the user
and includes tasks other than those performed with an interactive system. The techniques are
applied in the design and evaluation of training, jobs and work, equipment and systems, to
inform interactive system design.
Task analysis involves the analysis of work and jobs and involves collecting data (using
techniques such as interviews and observations) and then analysing it. The level of granularity of
task analysis depends on various factors, notably the purpose of the analysis. Stopping rules are
used to define the depth of a task analysis - the point at which it is appropriate to cease
decomposing. User manuals often stop too early.
Hierarchical Tasks Analysis
Hierarchical tasks analysis (HTA) is concerned with observable behavior and the reason for this
behavior. It is less detailed than some techniques, but is a keystone for understanding what users
do.
HTA represents tasks as a hierarchical decomposition of subtasks and operations, with associated
plans to describe sequencing:
A HTA can be represented by a structured, indented text, or using a structured chart notation.
Plan 0: Do 1-2-4-5 in that order; when the defaults are incorrect, do 3
Plan 3: Do any of 3.1 or 3.2 in any order; depending on the default settings
Is waiting part of a plan, or a task? Generally, it is a task if it is a 'busy' wait (you are actively
waiting), or part of a plan if the end of the delay is the event, e.g., "when the alarm rings", etc...
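The hierarchy-plus-plans structure can be represented directly as nested data. The task names below are invented (the original HTA figure is not reproduced in these notes); only the shape, with goals, subtasks and plans, is the point.

```python
# An HTA as nested data: each goal may carry subtasks plus a plan
# describing the order in which they are carried out.
hta = {
    "goal": "0. Make a hot drink",
    "plan": "Do 1-2-4-5 in that order; when the defaults are incorrect, do 3",
    "subtasks": [
        {"goal": "1. Boil water"},
        {"goal": "2. Get cup"},
        {"goal": "3. Change settings",
         "plan": "Do any of 3.1 or 3.2 in any order",
         "subtasks": [{"goal": "3.1 Choose drink"},
                      {"goal": "3.2 Choose strength"}]},
        {"goal": "4. Pour water"},
        {"goal": "5. Serve"},
    ],
}

def print_hta(node, indent=0):
    """Render the hierarchy as structured, indented text."""
    print("  " * indent + node["goal"])
    if "plan" in node:
        print("  " * indent + "Plan: " + node["plan"])
    for sub in node.get("subtasks", []):
        print_hta(sub, indent + 1)

print_hta(hta)
```

A stopping rule in this representation simply means not adding a `subtasks` list below a certain depth.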
To do HTA, you first need to identify the user groups and select representatives and identify the
main tasks of concern. The next step is to design and conduct data collection to elicit information
about these tasks.
These steps can be done with documentation, interviews, questionnaires, focus groups,
observation, ethnography, experiments, etc...
The data collected then needs to be analyzed to create specific task models initially.
Decomposition of tasks, the balance of models and the stopping rules need to be considered. The
specific tasks models then should be generalized to create a generic task model - from each task
model for the same goal, produce a generic model that includes all the different ways of
achieving the goal. Models should then be checked with all users, other stakeholders, analysts
and the process should then iterate.
To generate a hierarchy, you need to get a list of goals and then group the goals into a part-whole
structure and decompose further where necessary, applying stopping rules when appropriate.
Finally, plans should be added to capture the ordering of goal achievement. Some general things
to remember about modeling are:
Insight and experience are required to analyze and model tasks effectively and to then use
the models to inform design
When you are given an initial HTA, you need to consider how to check/improve it. There are
some heuristics that can be used:
paired actions (for example, place kettle on stove can be broken down to include turning
on the stove as well)
restructure
balance
generalize
There are limitations to HTA, however. It focuses on a single user, but many tasks involve the
interaction of groups of people (there is a current shift towards emphasizing the social and
distributed nature of much cognitive activity). It is also poor at capturing contextual information
and sometimes the 'why' information and can encourage a focus on getting the model and
notation 'right', which detracts from the content. There is a danger of designing systems which
place too much emphasis on current tasks or which are too rigid in the ways they support the
task.
There are many sources of information that can be used in HTA. One such is documentation -
although the manuals say what is supposed to happen, they are good for key words and
prompting interviews, and another may be observation (as discussed above) and interviews (the
expert: manager or worker? - interview both).
We could also consider contextual analysis, where the physical, social and organizational settings of
design need to be investigated and taken into account in design.
GOMS
GOMS is an alternative to HTA that is very different. It is a lot more detailed and in depth and is
applied to systems where timing and the number of keystrokes are vital. It is more complex to
use than HTA, and it is not commonly used. GOMS is an acronym for:
Goals - the end state the user is trying to reach; involves hierarchical decomposition into
tasks
Operators - basic actions, such as moving a mouse
Methods - sequences of operators or procedures for achieving a goal or sub-goal
Selection rules - invoked when there is a choice of methods
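GOMS has a keystroke-level variant (the Keystroke-Level Model, KLM) in which the execution time of a method is estimated by summing published operator times. A minimal sketch: the times are the commonly cited averages from Card, Moran and Newell, and the "select a menu item" method below is an invented example.

```python
# Keystroke-Level Model (KLM) operator times in seconds, after
# Card, Moran & Newell. Values are the commonly cited averages.
OPERATOR_TIME = {
    "K": 0.2,   # keystroke or mouse click (skilled typist)
    "P": 1.1,   # point with mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def execution_time(operators):
    """Sum operator times for one method, written as a string of codes."""
    return sum(OPERATOR_TIME[op] for op in operators)

# Selecting a menu item: mentally prepare, home to mouse, point, click.
print(round(execution_time("MHPK"), 2))  # 1.35 + 0.4 + 1.1 + 0.2 = 3.05
```

Comparing the sums for alternative methods (e.g. menu selection versus a shortcut key, "MK" = 1.55 s) is exactly the kind of analysis GOMS is used for.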
2. User interface requirements
Input/Output Design and User Interface Design
The user interface is not just input screens but is a communication between users and the system.
It is a two-way communication. Input data and instruction (what users want to do) go into the
system in many forms. System replies with outputs, error messages, feedback, warning, help
functions, etc. Also the interface includes how users navigate through the system. It could be
commands, menus, or links that lead users to the next screen. It is a structure of the system from
the users' viewpoint (while DFDs and flowcharts are the structure of the system from the
programmers' viewpoint).
System Boundary
Outputs are data going out from the system to the users. The users may be an external entity, or
may be internal (the boundary between the computer system and the manual system). You have to be careful
when determining the output data flow. Notice that the dataflow 'inquiry form' is a form that induces
input from the user (Customer); that means that although the dataflow goes out of the system
boundary, the form is an input form and not an output.
radio, or the visual layout of a webpage. User interface designs must not only be attractive to
potential users, but must also be functional and created with users in mind.
UI design principles
• UI design must take account of the needs, experience and capabilities of the system users
• Designers should be aware of people's physical and mental limitations (e.g. limited short-term
memory) and should recognize that people make mistakes
• UI design principles underlie interface designs although not all principles are applicable to all
designs
Keep the interface simple. The best interfaces are almost invisible to the user. They avoid
unnecessary elements and are clear in the language they use on labels and in messaging.
Create consistency and use common UI elements. By using common elements in your UI,
users feel more comfortable and are able to get things done more quickly. It is also important
to create patterns in language, layout and design throughout the site to help facilitate
efficiency. Once a user learns how to do something, they should be able to transfer that skill
to other parts of the site.
Be purposeful in page layout. Consider the spatial relationships between items on the page
and structure the page based on importance. Careful placement of items can help draw
attention to the most important pieces of information and can aid scanning and readability.
Strategically use color and texture. You can direct attention toward or redirect attention
away from items using color, light, contrast, and texture to your advantage.
Use typography to create hierarchy and clarity. Carefully consider how you use typefaces.
Different sizes, fonts, and arrangements of text help to increase scannability, legibility and
readability.
Make sure that the system communicates what’s happening. Always inform your users
of location, actions, changes in state, or errors. The use of various UI elements to
communicate status and, if necessary, next steps can reduce frustration for your user.
Think about the defaults. By carefully thinking about and anticipating the goals people
bring to your site, you can create defaults that reduce the burden on the user. This becomes
particularly important when it comes to form design where you might have an opportunity to
have some fields pre-chosen or filled out.
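The idea of anticipating goals and pre-choosing values can be sketched in code. The following is a minimal, hypothetical example (the field names and default values are invented for illustration, not taken from any real form library): user-submitted values are merged over sensible defaults, so the user only has to change what differs from the common case.

```python
# Sketch: pre-filling form defaults to reduce user effort.
# Field names and defaults below are hypothetical.

def apply_defaults(submitted: dict, defaults: dict) -> dict:
    """Merge user-submitted values over sensible defaults, so the user
    only has to fill in what differs from the common case."""
    merged = dict(defaults)  # start from the anticipated values
    # keep only fields the user actually filled in
    merged.update({k: v for k, v in submitted.items() if v not in (None, "")})
    return merged

DEFAULTS = {"country": "United Kingdom", "newsletter": False, "quantity": 1}

# The user only changed the quantity; everything else keeps its default.
form = apply_defaults({"quantity": 3}, DEFAULTS)
print(form)
```

Here the burden on the user is reduced to a single field, while the defaults carry the anticipated values for everyone else.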
Output Design
Output design is one of the most important tasks in any system. During output design, developers
identify the types of output needed, and consider the necessary output controls and prototype
report layouts. The objectives of output design are as follows.
• To develop an output design that serves the intended purpose and eliminates the production
of unwanted output.
• To develop an output design that meets the end user's requirements.
• To deliver the appropriate quantity of output.
• To form the output in the appropriate format and direct it to the right person.
• To make the output available on time for making good decisions.
External Outputs
External outputs leave the system to trigger actions on the part of their recipients or to confirm
actions to their recipients; they are typically designed to be printed. Some external outputs are
designed as turnaround outputs, which are implemented as a form and re-enter the system as an
input.
Internal outputs
Internal outputs are present inside the system, and used by end-users and managers. They support
the management in decision making and reporting.
Detailed Reports − They present information with little or no filtering or restriction, and
are generated to assist management planning and control.
Summary Reports − They contain trends and potential problems, categorized and
summarized, and are generated for managers who do not want details.
Exception Reports − They contain exceptions: data filtered against some condition or
standard before being presented to the manager as information.
Printed or screen-format reports should include a date/time indicating when the report was printed and when its data was captured.
Multipage reports contain report title or description, and pagination. Pre-printed forms usually
include a version number and effective date.
Outputs present information to system users. Outputs, the most visible component of a working
information system, are the justification for the system. During systems analysis, you defined
output needs and requirements, but you didn't design those outputs. In this section, you will learn
how to design effective outputs for system users.
Output Media
• Paper
• Screen
• Microfilm/Microfiche
• Video/Audio
• CDROM, DVD
• Other electronic media
Although the paperless office (and business) has been predicted for several years, it has not yet
become a reality. Perhaps there is an irreversible psychological dependence on paper as a medium.
In any case, paper output will be with us for a long time.
The use of film does present its own problems - microfiche and microfilm can only be produced
and read by special equipment. Therefore, other than paper, the most common output medium is
screen (monitor).
Although the screen medium provides the system user with convenient access to information, the
information is only temporary. When the image leaves the screen, that information is lost unless
it is redisplayed. If a permanent copy of the information is required, paper and film are superior
media. Often, when system designers build systems that include screen outputs, they may also
provide the user with the ability to obtain that output on paper.
Output Formats
• Tabular output
• Zoned output
• Graphic output
• Narrative output
Most computer programs ever written have probably generated tabular reports.
Zoned output is often used in conjunction with tabular output. For example, an order output
contains zones for customer and order data in addition to tables (or rows of columns) for ordered
items.
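A zoned output of this kind can be sketched as a small report generator. This is an illustrative example only; the customer fields, item columns and layout widths are all hypothetical, not part of any prescribed format.

```python
# Sketch: a zoned order output combining a customer "zone" with a
# tabular section for the ordered items (hypothetical data and layout).

def render_order(customer: dict, items: list) -> str:
    lines = []
    # Zone: customer and order data as label/value pairs
    for label, value in customer.items():
        lines.append(f"{label:<10}: {value}")
    lines.append("")
    # Table: one row of columns per ordered item
    lines.append(f"{'Item':<12}{'Qty':>5}{'Price':>10}")
    for name, qty, price in items:
        lines.append(f"{name:<12}{qty:>5}{price:>10.2f}")
    return "\n".join(lines)

print(render_order(
    {"Customer": "A. Smith", "Order No": "1042"},
    [("Widget", 2, 3.50), ("Bracket", 10, 0.99)],
))
```

The zone carries the one-per-order data, while the table carries the repeating rows, mirroring the structure described above.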
Input Design
In an information system, input is the raw data that is processed to produce output. During
input design, the developers must consider the input devices to be used, such as PC keyboards,
MICR and OMR readers. The quality of system input determines the quality of system output.
Well-designed input forms and screens have the following properties:
• They should serve a specific purpose effectively, such as storing, recording, and retrieving
information.
• They should ensure proper completion with accuracy.
• They should be easy to fill in and straightforward.
• They should focus the user's attention and promote consistency and simplicity.
All these objectives are obtained using the knowledge of basic design principles
regarding:
o What inputs are needed for the system?
o How do end users respond to different elements of forms and screens?
The objectives of input design are:
• To design source documents for data capture, or devise other data capture methods.
• To design input data records, data entry screens, user interface screens, etc.
• To use validation checks and develop effective input controls.
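Typical validation checks can be sketched as follows. This is a minimal illustration; the field names (`name`, `date`, `quantity`) and the specific rules are hypothetical examples of presence, format and range checks, not a prescribed set.

```python
# Sketch: typical input validation checks (presence, format, range)
# applied before data enters the system. Field names are hypothetical.
import re

def validate_record(record: dict) -> list:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    if not record.get("name"):                                   # presence check
        errors.append("name is required")
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", record.get("date", "")):
        errors.append("date must be YYYY-MM-DD")                 # format check
    qty = record.get("quantity")
    if not isinstance(qty, int) or not 1 <= qty <= 999:          # range check
        errors.append("quantity must be an integer from 1 to 999")
    return errors

print(validate_record({"name": "Ink", "date": "2024-01-31", "quantity": 5}))
print(validate_record({"name": "", "date": "31/01/24", "quantity": 0}))
```

Returning all errors at once, rather than stopping at the first, lets the input screen report every problem to the user in a single pass.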
Audit trails for data entry and other system operations are created using transaction logs, which
give a record of all changes introduced in the database, providing security and a means of
recovery in case of any failure.
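The idea of a transaction log supporting recovery can be shown with a minimal sketch. Everything here (the in-memory `db` dictionary, the log entry fields, the `undo_last` helper) is invented for illustration; a real system would log to durable storage.

```python
# Sketch: a minimal transaction log giving an audit trail of database
# changes, from which an earlier state can be recovered (illustrative only).
import datetime

log = []  # append-only transaction log

def apply_change(db: dict, key: str, new_value, user: str):
    """Record who changed what, and the old value, before applying the change."""
    log.append({
        "when": datetime.datetime.now().isoformat(),
        "user": user,
        "key": key,
        "old": db.get(key),   # recording the old value enables recovery
        "new": new_value,
    })
    db[key] = new_value

def undo_last(db: dict):
    """Recover from the most recent change using the log."""
    entry = log.pop()
    if entry["old"] is None:
        del db[entry["key"]]
    else:
        db[entry["key"]] = entry["old"]

db = {"price": 10}
apply_change(db, "price", 12, user="alice")
undo_last(db)
print(db)  # back to the original state
```

Because every entry captures the user, timestamp and prior value, the same log serves both as an audit trail and as the basis for rollback.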
3. Prototyping
Users often can't say what they want, but as soon as you give them something and they get to use
it, they know what they don't want. A bridge is needed between talking to users in the abstract
about what they might want and building a full-blown system (with all the expense and effort
that involves). The prototype is that bridge: it might be a paper-based outline of screens, a video
simulation of interaction or a 3D cardboard mock-up of a device.
Prototypes are very useful for discussing ideas with stakeholders; they encourage reflection
on the design, allow you to explore alternative designs without committing to one idea, and
make ideas from scenarios more concrete.
A prototype can serve several purposes:
• As an analysis artifact that enables you to explore the problem space with your
stakeholders.
• As a design artifact that enables you to explore the solution space of your system.
• As a basis from which to explore the usability of your system.
• As a vehicle for you to communicate the possible UI design(s) of your system.
• As a potential foundation from which to continue developing the system (if you intend to
throw the prototype away and start over from scratch then you don't need to invest the
time writing quality code for your prototype).
A prototype is an early sample, model, or release of a product built to test a concept or process.
It is a term used in a variety of contexts, including semantics, design, electronics, and software
programming. A prototype is generally used by system analysts and users to evaluate a new
design and to improve its precision. Prototyping serves to provide specifications for a real,
working system rather than a theoretical one. In some design workflow models, creating a
prototype (a process sometimes called materialization) is the step between the formalization
and the evaluation of an idea.
A Proof-of-Principle Prototype serves to verify some key functional aspects of the intended
design, but usually does not have all the functionality of the final product.
A Working Prototype represents all or nearly all of the functionality of the final product.
A Visual Prototype represents the size and appearance, but not the functionality, of the
intended design. A Form Study Prototype is a preliminary type of visual prototype in
which the geometric features of a design are emphasized, with less concern for color, texture,
or other aspects of the final appearance.
A User Experience Prototype represents enough of the appearance and function of the
product that it can be used for user research.
A Functional Prototype captures both the function and the appearance of the intended design,
though it may be created with different techniques and even at a different scale from the final design.
A Paper Prototype is a printed or hand-drawn representation of the user interface of a
software product. Such prototypes are commonly used for early testing of a software design,
and can be part of a software walkthrough to confirm design decisions before more costly
levels of design effort are expended.
Human factors and ergonomics is concerned with the "fit" between the user, equipment, and
environment or "fitting a job to a person". It accounts for the user's capabilities and limitations in
seeking to ensure that tasks, functions, information, and the environment suit that user.
To assess the fit between a person and the used technology, human factors specialists or
ergonomists consider the job (activity) being done and the demands on the user; the equipment
used (its size, shape, and how appropriate it is for the task), and the information used (how it is
presented, accessed, and changed).
Howell (2003) outlines some important areas of research and practice in the field of human factors.
Areas of Study in Human Factors Psychology
Area − Description
Attention − Includes vigilance and monitoring, recognizing signals in noise, mental
resources, and divided attention
Cognitive engineering − Includes human-software interactions in complex automated systems,
especially the decision-making processes of workers as they are supported by the software
system
Task analysis − Breaking down the elements of a task
Cognitive task analysis − Breaking down the elements of a cognitive task
Cognitive Ergonomics
Cognitive ergonomics is the field of study that focuses on how well the use of a product matches
the cognitive capabilities of users. It draws on knowledge of human perception, mental processing,
and memory. Rather than being a design discipline, it is a source of knowledge for designers to
use as guidelines for ensuring good usability.
Cognitive ergonomics mainly focuses on the mental demands that work activities place on the person.
Ergonomics is a people-centered approach that aims to fit the work to the person, not the person
to the work. This topic explains how it can be used in all aspects of work from tool, equipment
and software design to work organization and job design.
In recent years, the growth and development of new technology has pushed ergonomics in new
directions. A major role of ergonomics today is to make sense of this new technology and ensure
it fits the needs and expectations of the people who use it. This is likely to continue as one of the
field's greatest challenges.
Sources of Information
Ergonomics draws its knowledge from a number of different disciplines, depending on the aspect
of work being addressed.
Anatomy: defines the structure of the body and enables understanding of the key components
and structures involved in human movement and work.
Biomechanics: concerned with the movement of body parts and the forces placed on them
during activity. It draws knowledge from physical, engineering and biological sciences.
Ergonomics uses the information for the design of physical work tasks and equipment.
Physiology: the study of how the body functions, down to the chemical level (as in metabolism),
and of the changes which occur during activity. Ergonomics uses this information to try to
obtain maximum output from minimum effort.
Psychology: ergonomics uses psychological information to design tasks to meet the mental
capabilities of people. Research into perception, memory, reasoning, concentration and alertness
is relevant to the design of tasks. This should ensure they are intuitive and logical, making the
task easier for the person to complete and so minimising the risk of stress for users.
Anthropometry: the science that deals with body measurements. Sources of anthropometric data
vary in their presentation of the dimensions. For a given body dimension a measurement is
normally provided in millimetres for a male or female of a given nationality, sometimes within a
given age group (eg standing eye height, female, 19–65 years of age, British). These
measurements are used to establish the heights, lengths, depths and sizes for elements of
equipment, workstations or workplace design. This ensures the design can be used by the
specified group of people.
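A common way to turn such measurements into design limits is to work with population percentiles. The sketch below assumes the dimension is approximately normally distributed; the mean and standard deviation used for standing eye height are invented for illustration, not real survey data.

```python
# Sketch: deriving design limits from anthropometric data, assuming an
# approximately normal distribution. The mean and standard deviation
# below are illustrative numbers, not real survey figures.
from statistics import NormalDist

# Hypothetical standing eye height (mm) for a given population
eye_height = NormalDist(mu=1515, sigma=62)

# Design for the 5th-95th percentile range of the population
p5 = eye_height.inv_cdf(0.05)
p95 = eye_height.inv_cdf(0.95)
print(f"5th percentile:  {p5:.0f} mm")
print(f"95th percentile: {p95:.0f} mm")
```

Designing equipment to accommodate the 5th to 95th percentile range, rather than the average, is what ensures the design can be used by the specified group of people.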
Ergonomics takes a multi-disciplinary approach. It draws on knowledge from other disciplines as
required and combines the knowledge to assess or design the work being studied.
For example, biomechanical knowledge can be combined with psychological knowledge when
designing physical tasks which require concentration. The biomechanical knowledge relates to
how and what the body can physically achieve, while psychology provides the information
concerning the ability to perform the processes and retain attention in order to do so.
The result: a stated load, work posture and work height, with a guideline time for completing the
work before concentration fails and errors occur.
Approach to Ergonomics
Ergonomics takes a holistic approach. It looks at all aspects of the work, from the capabilities of
the person through to the structure of the overall work system and their interactions.
A concentric model can be used to illustrate this approach and its common stages.
Figure: Concentric Model
Ergonomics focuses on the needs of the person before looking at the work task to be completed.
By establishing the physical and mental capabilities of the person, certain criteria or constraints
can be established for the task.
The workstation is designed in relation to the physical requirements of the person and the
operational requirements of the task. For example, the height and reach of the person will be
relevant to the dimensions of their workstation, as will the objects required to complete the task
and their arrangement.
The work environment is reviewed in relation to the person, task and workstation requirements.
For example, the temperature should be within acceptable limits for human work but also
appropriate to the requirements of the task and operation of the workstation.
The work system is the organisation within which the previous elements sit and which ultimately
affect the entire work. For example, this may include performance targets or shiftworking
patterns imposed by senior management. It should be assessed in order to complement, not
constrain, the effectiveness of the interactions between the person, task, workstation and
environment.
Person
The person is the central point of the work. The person combines knowledge from other elements
in the concentric model to complete the work. Effective work is designed by optimising the
person's ability to complete the work, eg designing it so that it is within their mental and physical
capabilities.
The person should be able to work in a comfortable and effective manner. Equipment, furniture,
the environment and task activities should all support this.
Mental capabilities, such as memory, concentration, reasoning and interpretation, should all be
considered when designing tasks, environments and work systems. This should aim to optimise a
person's capacity to interact with the work.
An important, but often overlooked, point is that the person is the one aspect of the work system
that cannot be changed. Therefore the purpose of the concentric model of ergonomics is to adapt
all other elements of the work system to fit the capabilities of the person.
Work Task
Ergonomics addresses the work task by ensuring that the elements of the task are structured to
suit the person carrying it out.
The factors to be considered in relation to the work task include the following.
• Tools and equipment must be easily reached to avoid poor working postures.
• Tasks requiring the exertion of force should be automated or semi-automated wherever
possible, always aiming to minimise the degree of exertion.
• Repetitive tasks should be automated where possible, or frequently interrupted by
changes of activity or work breaks, and should preferably be self-paced.
• Static work postures should be changed regularly; movement during task completion
should be encouraged.
• Changes of activity and job rotation should be encouraged to enable changes of posture,
especially if some or all of the tasks are repetitive.
• Rest breaks should be short but frequent, to prevent fatigue.
• Instructions or information about the task, such as from displays or controls, should be
provided clearly and logically.
• Training should be provided for the task, use of equipment, furniture and health and
safety matters. Training is particularly important when any element undergoes changes
that impact the completion of the task.
• Information required to complete the task, eg security codes or operation sequences,
should be logical and easy to remember.
Understanding the psychological capabilities of the person, such as the limits of his or her
memory and concentration, will help to design a task within those limits. This should make the
task accessible to the person. The provision of any information or instruction should also account
for the way in which people process information. Postural and biomechanical knowledge will
also place the physical elements of the task within the capabilities of the person.
Workstation
The workstation is the area where the task is conducted. Commonly this will be a desk or work
surface, but may equally be at a conveyor belt or within a driver's cab.
The workstation should be designed so that it is accessible for the person using it.
The factors to be considered in relation to the workstation include the following.
• The surface height should be sufficient to avoid stooping or reaching to its surface.
• There should be sufficient legroom for standing and writing tasks so that the person is
able to get close enough to the workstation.
• The surface properties should not present a risk to the person, eg no sharp edges or
unprotected hot/cold surfaces.
• Surrounding workstations or other items in the environment, eg columns or posts, should
not constrain the user's posture.
• Any furniture should support a good working posture, eg chairs should provide lumbar
support.
• The workstation should be suitable for the work task such that all equipment can be
located and arranged for the person to complete the task.
Adjustable equipment and furniture will increase the flexibility of a workstation making it more
accessible to a range of work tasks and/or workers.
Work Environment
The work environment is the place of work which includes the physical environment and general
layout. Environmental concerns include the temperature, humidity, lighting and noise.
The factors to be considered in relation to the work environment include the following.
• Temperature and humidity levels will directly affect the person's comfort and capacity to
work. Excessive temperatures can lead to fatigue and loss of concentration as well as loss
of dexterity.
• Illumination of the task and information is important. Eliminating glare or reflections is
also vital.
• Noise levels can affect concentration or lead to physical discomfort. Noise sources should
be reduced or eliminated, or protective equipment supplied.
• Health and safety regulations must be adhered to and, to encourage this, the work tasks
themselves need to be configured to safe systems of work.
Work System
The work system refers to the structure of the organisation and the way it operates. It includes
communications, policies and working standards.
Industry standards or health and safety regulations may place constraints or requirements on the
organisation, which affect the way in which the work is completed.
For example, a company may have to meet quality standards which require comprehensive
quality information to be supplied. This requires the worker to fill in a quality report for all work
completed. This may affect the ergonomics by increasing the activity required in the task (which
may affect the posture or rest breaks), and requires additional material to be stored at the
workstation, ie report forms. The storage of additional material and the need to complete writing
tasks requires additional space at the workstation and may compromise the work posture adopted
or the completion of the task.
Poor job scheduling or job deadline pressures, considered as part of the work system, may also
place constraints or high demands on work tasks and workers. Shift patterns and job rotation are
also elements which affect work and the way it is completed.
Work can be dramatically improved or constrained by work systems. The system must be
reviewed in relation to all of the other elements of work to avoid unforeseen constraints being
imposed or potential improvements remaining unrecognised.
Ergonomics Techniques
The nature of ergonomics requires thorough and comprehensive investigation of all parts of the
work. In order to ensure that all aspects are considered there are specific techniques which can be
employed.
Task Analysis
A task analysis provides information for the design of new work tasks or the evaluation of
existing work tasks. The process identifies what people actually do when they perform tasks,
highlighting connections between activities and identifying unplanned or unusual activities
within the work.
A task analysis is completed in three stages.
1. Task description: this details the activities identified. The format for a task
description may vary depending on the target audience reviewing the information. For
example, it may be shown to senior managers who are used to reading flow charts or it may
be for workers who are more familiar with procedural lists. Most commonly a diagram is
used, although lists, flow charts or tables are equally valid, as long as the analyst's findings
are clear and concise.
2. Data collection: activities are identified through observation, review of records and
manuals or discussions with workers and managers.
3. Data analysis: findings are interpreted in relation to how the system should work. In
particular, any mismatches between the findings and the theoretical completion of the task
can be identified and the cause investigated. For example, the analysts may have recorded a
worker repeatedly pressing a control button when only one press was instructed in the
manual. This may reveal that the button is faulty, the system requires confirmation or the
button press is not being registered with the system. These all have implications for the
effectiveness of the system, particularly in terms of time spent and accuracy of operation.
All boxes (activities) in the hierarchy can be numbered to clarify their sequence and dependence
within the work task.
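The numbering of a task hierarchy can be sketched programmatically. The task names below are invented for illustration; the recursive scheme simply mirrors the 1, 1.1, 1.2, 2, ... numbering convention used in hierarchical task analysis.

```python
# Sketch: numbering the activities in a hierarchical task analysis so
# their sequence and dependence are clear (task names are hypothetical).

def number_tasks(tasks, prefix=""):
    """Yield (number, name) pairs, e.g. 1, 1.1, 1.2, 2, ..."""
    for i, (name, subtasks) in enumerate(tasks, start=1):
        number = f"{prefix}{i}"
        yield number, name
        # subtasks inherit their parent's number as a prefix
        yield from number_tasks(subtasks, prefix=number + ".")

hta = [
    ("Prepare order", [("Check stock", []), ("Print picking list", [])]),
    ("Pack order", [("Select box", []), ("Seal and label", [])]),
]
for number, name in number_tasks(hta):
    print(number, name)
```

Each subtask number embeds its parent's number, so both the sequence of activities and their dependence within the work task are visible at a glance.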
Workflow Analysis
The flow of work is important to the design of workstations and workplaces. Workflow analysis
plots the flow of activity on a plan of the workplace. This helps to identify the pattern of
activities, ensuring they are optimised to avoid wasting time and energy. It also ensures that the
workplace is appropriately designed.
Workflow analysis can also be used to optimise the arrangement of equipment and components
at the workstation. For example, by ensuring that items used together or in sequence are placed
close to each other to improve the completion of the task.
Figure 3: Examples of Workflow Identification and Change
Frequency Analysis
The frequency with which tools and equipment are used can be an important factor in work design.
Frequently-used items may be positioned closer to the worker for ease of access and to prevent
unnecessary hold-ups to the process. Equally, the frequency analysis may be used in the
assessment of postures (how often a posture is adopted) or the repetition involved in the work
task.
A simple tick-box technique can be used to record the frequency that an activity or posture
occurs, or an item is used.
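The tick-box tally translates directly into a frequency count in code. The observed activities below are invented for illustration; the tick marks are rendered as a simple text column.

```python
# Sketch: the tick-box frequency analysis as a simple tally. The observed
# activities below are invented for illustration.
from collections import Counter

observations = [
    "reach to shelf", "type", "type", "reach to shelf",
    "type", "lift box", "type", "reach to shelf",
]

tally = Counter(observations)
for activity, count in tally.most_common():
    # one tick ("x") per observed occurrence, most frequent first
    print(f"{activity:<15} {'x' * count}  ({count})")
```

Sorting by frequency immediately highlights which activities, postures or items dominate the work and so deserve the closest positioning or the most attention in the assessment.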
Video Analysis
Digital or video recordings provide permanent, reusable records of activities and work tasks.
Using video has the advantage of providing a record which is accessible to a range of analysts
after the event and away from the work environment. As the record is reusable it provides the
scope to record and analyse data in great detail. It is also a relatively unobtrusive method of
observation and data capture.
The main disadvantage of using digital or other recording techniques is that the camera angle
restricts what is recorded, and therefore constrains what can be analysed without invalidating the
analysis or making it unreliable. The detail in which analysis of data can be recorded can also
make the process time-consuming and therefore costly.
Risk Assessments
Ergonomic risk assessments can be conducted to assess the work.
The assessment should be systematic and comprehensive, accounting for all elements of the
work. Checklists can be used to ensure all elements, ie the person, work task, workstation, work
environment and work system, are covered.
All hazards should be identified and recorded and an assessment of the risk posed by the hazard
should be made. All hazards must be addressed, eliminated where possible or reduced as far as
practicable.
Any checklist used should incorporate the five elements of the concentric approach, but can be
tailored to suit the work within the business. For example, by placing more emphasis on the
environmental issues involved in the safe use, storage and handling of chemicals where the
business focuses on their use; or by emphasising the importance of good software design for
call-centre work, where the software is relied on to provide information to the worker and the
customer.
Allocation of Function
Allocation of function is the process of assigning activities to either the person or the machine in
order to complete the work task. By allocating activities to the person or machine most suited to
complete them, the overall efficiency and effectiveness of the work is improved.
Each activity needs to be assessed to identify the requirements for completing it and then
assigned according to the abilities of the person or machine. Some general guidelines are shown
in Table below.
Table : Guidance for Allocating Functions
Person: Good At
• Sensing stimuli at low levels over background noise
• Recognising patterns of stimuli
• Sensing unexpected occurrences
• Decision making
• Information recall
• Creating alternatives
• Interpreting subjective data
• Adapting responses

Machine: Good At
• Sensing stimuli outside the human range
• Monitoring
• Calculating
• Applying force
• High-speed operations
• Repetitive tasks
• Sustained accuracy and reliability
• Counting or measuring
• Simultaneous operations
Allocation of function may also be affected by economic or social factors. For example,
mechanising work tasks may incur high machine costs, and lead to unemployment for
workers. The "best" performance may not always be required, and contact with people is
sometimes preferable to dealing with machines.
It should also be noted that to optimise job design, retaining some interesting tasks for people
may be more important than mechanising them, leaving only machine monitoring or
maintenance tasks for workers.
Human-machine Interaction
Human-machine interaction deals with the interaction between people and machinery facilitated
by the physical operation of controls and mental interpretation of displays.
This interaction relates to tasks as diverse as driving, operating equipment and monitoring
systems. Knowledge from psychology and anatomy are particularly relevant.
Software Design
Software can be designed to make using systems more intuitive and accessible for work.
Previously systems were designed based on the capacity of the technology with little thought for
the "front-end", or software interface. Now, with increasing demand to make systems more
accessible to everyone and not just to computer experts, the use and arrangement of colour and
text can be developed to enhance the usability of the software. Equally, the structure of menus,
commands and programs, which the user is not so aware of, can be designed to improve the
interface or make it more predictive.
The development of software technology that hides complex programming code and replaces it
with commands familiar from everyday work has greatly advanced the use of computers and their
accessibility to a wide range of workers.
The use of intuitive, or what might be described as "intelligent", software also has implications
for ergonomics, and assumptions should not be made that it will always minimise ergonomic
risks. This may be especially true where different software programs, sometimes at different
levels of maturity, may need to be used by the same worker to complete a task.
Interface Design
Communication with machines and computers is done through interfaces. These consist of
displays to relay information about the machine/computer state, and controls to enable people to
react to the display, instructing the machine/computer with a response.
Smartphones and tablets are progressively becoming more important as daily interfaces used by
staff, and so the procurement of these devices should consider ergonomic factors.
Job Design
The design of the job or work task is important for obtaining the maximum performance from the
person for the minimum effort, but also in terms of creating a level of job satisfaction. There are
three stages considered to be influential in successfully designing jobs.
1. Job rotation: this involves people being moved from one task to another to allow changes
in posture, movement, and mental and physical dexterity. Job rotation can help to
overcome boredom, particularly in repetitive jobs, and to prevent fatigue, loss of
concentration and deterioration in performance.
2. Job enlargement: this involves increasing the scope and range of tasks carried out by one
person, which provides greater flexibility and incorporates wider skills.
3. Job enrichment: this involves enriching the job by incorporating motivating factors, such
as increased responsibility and involvement. Job enrichment aims to provide greater
autonomy over the planning, execution and control of work.
When the posture deviates from neutral, ie when stretching the arm forward or rotating the wrist,
effort is exerted and tension is held in the muscles to maintain the posture. This can, depending
on the degree of deviation and the duration of the posture, lead to fatigue, discomfort, strain and
possibly injury. These can all have an effect on the person, from distracting their attention to
physically preventing them from continuing work.
Ergonomics is also concerned with identifying static postures, ie those in which there is no
movement. These require the muscles to act under tension in order to maintain the posture. As
this tension will reduce the blood flow into the muscle, fatigue and discomfort can occur if the
tension is not released.
Muscle Operation
Muscles operate by contracting and relaxing, creating movement of joints or body parts. They
require oxygen via the blood supply, which is pumped through the muscle as it contracts and
relaxes. The muscle uses the oxygen and produces waste products as a result of the energy
expenditure. The waste products are carried away from the muscle by the blood supply.
When the muscle is held in tension there is no movement, the muscles do not contract and relax
and consequently the blood is not pumped through the muscle as efficiently. As the muscle
sustains the tension and the oxygen is used up (energy is required to hold the muscle in tension),
fatigue is experienced. There is also no means of removing the waste products from the muscle,
and these build up as a result of the energy consumption. This leads to fatigue, discomfort and
subsequently pain.
Another issue to consider is static work. Seated work is static: it requires little movement of the
upper body and places tension in the muscles of the lower back, particularly where the back is
insufficiently supported by the seat.
Static postures can be alleviated by introducing changes of posture and movement into the work,
eg job rotation from seated work to tasks that involve moving around the office, stretching
exercises and work breaks away from the workstation. A good understanding of the task
requirements will help to design work so that poor and awkward postures are avoided.
Good working postures can be achieved by careful positioning of the equipment and the furniture
required to complete the work and to support the posture of the person. Achieving good postures
need not require special equipment. Careful consideration of the ergonomics of the work can
establish and enhance good working postures.
Musculoskeletal Disorders
Ergonomics can help to prevent injury to the muscular or skeletal structures of the body by
ensuring work is designed for the person's physical capabilities and that the individual worker is
appropriately trained to minimise the risk of injuries occurring.
In recent years musculoskeletal disorders (or MSDs), such as back injuries and upper limb
disorders, have become major causes of work-related injury with diagnosis, treatment and
prognosis becoming complex in some cases. Sometimes MSDs can become chronic conditions.
This has contributed to more personal injury cases being brought against employers through civil
courts.
Muscles require a rest and recovery period to avoid fatigue. The more frequently a
muscle works, the more energy it requires. The body may be unable to supply enough
energy or remove waste products to prevent fatigue or discomfort from occurring.
Consequently a rest and recovery period will be required.
Posture: poor postures can lead to injury and discomfort by placing the muscles and
joints under unnecessary strain.
Product teams choose a prototype's fidelity based on the goals of prototyping, completeness of
design, and available resources.
Low-fidelity (lo-fi) prototypes are often paper-based and do not allow user
interactions. They range from a series of hand-drawn mock-ups to printouts. In theory, low-
fidelity sketches are quicker to create. Low-fidelity prototypes are helpful in enabling early
visualization of alternative design solutions, which helps provoke innovation and
improvement. An additional advantage to this approach is that when using rough sketches,
users may feel more comfortable suggesting changes.
High-fidelity prototypes are computer-based, and usually allow realistic (mouse-keyboard)
user interactions. High-fidelity prototypes take you as close as possible to a true
representation of the user interface. High-fidelity prototypes are assumed to be much more
effective in collecting true human performance data (e.g., time to complete a task), and in
demonstrating actual products to clients, management, and others.
Note
Low fidelity (lo-fi) prototypes are not very like the end product and are very obviously rough
and ready. They are cheap and simple to make and modify, and their roughness makes it clear to
stakeholders that the design can be criticized. They are fairly low risk for designers, but they do
not allow realistic use.
One form of lo-fi prototyping is storyboarding, a technique used in the film industry. A series of
sketches, usually accompanied with some text (e.g., a scenario), show the flow of interaction.
This is useful if you're good at sketching; otherwise it can be daunting.
Another method is using paper screenshots, where a sheet of paper or an index card is used for
each page, and overview diagrams can be used to show links between screenshots. Some tools
(such as Denim and Silk) exist to support this process while keeping the "sketchy" look of
the prototype.
Another method is the Wizard of Oz technique. In the 1980s, a prototype system was created for
speech recognition. At the time, real speech recognition was not available, so a fake system
was used in which a human actually did the recognition. The term is now used for any system
where the processing is not implemented and a human "fakes" it. There is no need to
deceive the user: a Wizard of Oz study can work perfectly well even when the user knows a
human is behind the system.
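The pattern above can be sketched in code. This is a minimal, illustrative Wizard-of-Oz setup (all names here are invented for the example): the application is written against a pluggable "recognizer" interface, and during prototyping a hidden human operator plays that role; later the same interface can be backed by real software.

```python
# Minimal Wizard-of-Oz sketch. The application depends only on a
# recognize() callable; whether a human or software provides the
# result is invisible to the rest of the system.

from typing import Callable

class VoiceAssistant:
    def __init__(self, recognize: Callable[[bytes], str]):
        # recognize maps raw audio to text; its implementation is hidden
        self.recognize = recognize

    def handle(self, audio: bytes) -> str:
        text = self.recognize(audio)
        return f"You said: {text}"

def wizard_recognizer(audio: bytes) -> str:
    # In a live study this would prompt the hidden human operator;
    # here we fake the operator's reply for demonstration.
    return "open my inbox"

assistant = VoiceAssistant(wizard_recognizer)
print(assistant.handle(b"<raw audio>"))  # You said: open my inbox
```

Swapping `wizard_recognizer` for a real recognizer later requires no change to `VoiceAssistant`, which is the point of the technique.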
High fidelity prototyping uses materials that you would expect to find in the end product or
system, and looks much more like the final system. For software, people can use tools such as
Macromedia Dreamweaver or Visual Basic or, in the case of web pages, write prototype
HTML code. With high-fidelity prototyping you get the real look and feel of some of the
functionality; it serves as a living specification; it is good for exploration of design features and
evaluation; and it is a good marketing and sales tool.
When creating prototypes, you have to consider depth vs. breadth: do you prototype all of the
functionality without going into any specific detail (horizontal prototyping), or do you
prototype one or a few functions in a lot of depth (vertical prototyping)?
Evolutionary prototyping also exists, where you build the real system bit by bit and
evaluate as you go along. The opposite is throw-away prototyping, where a
prototype is built in one system (perhaps repeatedly, with changes), then thrown away
entirely and the real system is built from scratch (sometimes for efficiency, security, etc.).
A final type of prototyping is experience prototyping, where prototypes allow the
designers to really understand the users' experience, e.g., using a wheelchair for a day to
understand how users experience technologies such as ATMs.
Use one piece of paper for each Web page you create and then have users try them out in a usability
test. Users indicate where they want to click to find the information and you change the page to
show that screen.
The process helps you to gather feedback early in the design process, make changes quickly, and
improve your initial designs.
3. Stay consistent
“The more users’ expectations prove right, the more they will feel in control of the system and
the more they will like it.” – Jakob Nielsen
Your users need consistency. They need to know that once they learn to do something, they will
be able to do it again. Language, layout, and design are just a few interface elements that need
consistency. A consistent interface enables your users to have a better understanding of how
things will work, increasing their efficiency.
5. Provide feedback
Your interface should at all times speak to your user, whether their actions are right, wrong, or
misunderstood. Always inform your users of actions, changes in state, and errors or
exceptions that occur. Visual cues or simple messaging can show the user whether their
actions have led to the expected result.
6. Be forgiving
No matter how clear your design is, people will make mistakes. Your UI should allow for and
tolerate user error. Design ways for users to undo actions, and be forgiving with varied inputs.
Interface components are the parts of the computing system you will be dealing/interacting with.
Problem: The usability problem faced by the user when using the system.
Context of use: The situation (in terms of the tasks, users, and context of use) giving rise to
the usability problem.
Navigation considerations
Navigation is the act of moving between screens of an app to complete tasks. It's enabled
through several means: dedicated navigation components, embedding navigation behavior into
content, and platform affordances.
Navigational directions
Based on your app's information architecture, a user can move in one of three navigational
directions:
Lateral navigation refers to moving between screens at the same level of hierarchy. An
app's primary navigation component should provide access to all destinations at the top level
of its hierarchy.
Forward navigation refers to moving between screens at consecutive levels of hierarchy,
steps in a flow, or across an app. Forward navigation embeds navigation behavior into
containers (such as cards, lists, or images), buttons, links, or by using search.
Reverse navigation refers to moving backwards through screens either chronologically
(within one app or across different apps) or hierarchically (within an app). Platform
conventions determine the exact behavior of reverse navigation within an app.
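The three directions can be modelled as operations on a screen stack. The sketch below is a hypothetical illustration (the `Navigator` class and its method names are invented, not a real framework API): forward navigation pushes a deeper screen, lateral navigation replaces the current screen with a sibling, and reverse navigation pops back up the hierarchy.

```python
# Toy model of lateral / forward / reverse navigation as a stack.

class Navigator:
    def __init__(self, root: str):
        self.stack = [root]          # hierarchy path, root first

    def forward(self, screen: str):
        self.stack.append(screen)    # deeper level, or next step in a flow

    def lateral(self, screen: str):
        self.stack[-1] = screen      # sibling at the same hierarchy level

    def back(self) -> str:
        if len(self.stack) > 1:      # reverse navigation (platform Back)
            self.stack.pop()
        return self.stack[-1]

nav = Navigator("Home")
nav.forward("Album")     # Home > Album
nav.forward("Photo")     # Home > Album > Photo
nav.lateral("Photo 2")   # Home > Album > Photo 2  (same level)
print(nav.back())        # Album
```

Real platforms layer conventions on top of this (e.g., Back may also move across apps), but the stack captures the basic distinction between the three directions.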
UI widgets
Widget is a broad term that can refer to either any GUI (graphical user interface) element or a
tiny application that can display information and/or interact with the user. A widget can be as
rudimentary as a button, scroll bar, label, dialog box or check box; or it can be something slightly
more sophisticated like a search box, tiny map, clock, visitor counter or unit converter.
UI widgets are a self-contained visualization of high-level data that can be placed on a page.
UI widgets are built for display within the web shell user interface and will not display within
the ClickOnce smart client user interface.
Note: The type of data displayed in a UI widget is usually strategic high-level rollup-type data,
which means efficiency is extremely important when retrieving the data. Asking strategic
questions from the production tables within the OLTP database is generally not considered a
good practice as the OLTP database is used to hold day to day, tactical information pertinent to
running your day to day fundraising operations. Therefore, populating a UI Widget with data will
usually mean pulling data from the Blackbaud Data Warehouse or the OLAP cube or some type
of snapshot output table from which to retrieve the data. A custom business process could
generate a snapshot output table which could be generated as the output of a specific business
process instance.
Blackbaud Data Warehouse: the data warehouse is composed of data structures populated by
data extracted from the OLTP database and transformed to fit a flatter schema. Ultimately the
warehouse structures are exposed as star schemas through views of fact and dimension tables.
There are some cases where the data structures reflect the shape of the OLTP structures closely
enough to be seen as parallel structures. But those parallels quickly disappear as tables are joined
together or multiple views are used.
Design Considerations
There are three factors that should be considered in the design of a successful user interface:
development factors, visibility factors and acceptance factors.
Visibility factors take into account human factors and express a strong visual identity. These
include human abilities, product identity, a clear conceptual model, and multiple representations.
Acceptance factors include an installed base, corporate politics, international markets, and
documentation and training.
Visible Language
Visible language refers to all of the graphical techniques used to communicate the message or
context. These include:
Organize: provide the user with a clear and consistent conceptual structure
Economize: do the most with the least amount of cues
Communicate: match the presentation to the capabilities of the user.
Organize
Consistency, screen layout, relationships and navigability are important concepts of organization.
Consistency
There are four views of consistency: internal consistency, external consistency, real-world
consistency, and when not to be consistent.
The first point, internal consistency, states that the same conventions and rules should be applied
to all elements of the GUI; elements with different kinds of behavior have their own distinct
appearance.
External consistency, the second point, says that existing platform and cultural conventions
should be followed across user interfaces. For example, icons from different desktop publishing
applications generally have the same meaning.
Screen Layout
There are three ways to design spatial screen layout: use a grid structure, standardize the screen
layout, and group related elements.
A grid structure can help locate menus, dialogue boxes or control panels. Generally 7 +/- 2 is the
maximum number of major horizontal or vertical divisions. This will help make the screen less
cluttered and easier to understand.
Relationships
Linking related items and disassociating unrelated items can help achieve visual organization.
Example (from an omitted figure): shape, location, and value can all create strong visual
relationships, which may be inappropriate when the items are unrelated.
Navigability
There are three important navigation techniques: provide an initial focus for the viewer's
attention; direct attention to important, secondary, or peripheral items; and assist in navigation
throughout the material.
Economize
Four major points to be considered: simplicity, clarity, distinctiveness, and emphasis.
Simplicity
Simplicity includes only the elements that are most important for communication. The design
should also be as unobtrusive as possible.
Clarity
All components should be designed so their meaning is not ambiguous.
Distinctiveness
The important properties of the necessary elements should be distinguishable.
Emphasis
The most important elements should be easily perceived. Non-critical elements should be de-
emphasized and clutter should be minimized so as not to hide critical information.
Communicate
The GUI must keep in balance legibility, readability, typography, symbolism, multiple views,
and color or texture in order to communicate successfully.
Readability: the display must be easy to identify and interpret; it should also be appealing and
attractive.
Typography: includes characteristics of individual elements (typefaces and typestyles) and their
groupings (typesetting techniques). A small number of typefaces which must be legible, clear,
and distinctive (i.e., distinguish between different classes of information) should be used.
Recommendations: use a maximum of 3 typefaces in a maximum of 3 point sizes; keep to a
maximum of 40-60 characters per line of text; set text flush left and numbers flush right; avoid
centered text in lists and short justified lines of text; and use upper and lower case characters
whenever possible.
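The 40-60 characters-per-line recommendation lends itself to a quick automated check. This is a rough heuristic sketch (the function name and thresholds are chosen for illustration), not a substitute for visual review of the layout.

```python
# Flag lines of text that fall outside the recommended 40-60
# characters-per-line range. Returns (line number, length) pairs.

def line_length_report(text: str, low: int = 40, high: int = 60):
    issues = []
    for n, line in enumerate(text.splitlines(), start=1):
        if line and not (low <= len(line) <= high):
            issues.append((n, len(line)))
    return issues

sample = "A short line.\n" + "x" * 50
print(line_length_report(sample))  # [(1, 13)] - line 1 is too short
```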
Multiple Views: provide multiple perspectives on the display of complex structures and
processes. Make use of multiple forms of representation, multiple levels of abstraction,
simultaneous alternative views, links and cross references, and metadata, metatext and
metagraphics.
Color
Color is one of the most complex elements in achieving successful visual communication. Used
correctly, it can be a powerful tool for communication. Colors should be combined so they make
visual sense.
Some advantages of using color to help communication: emphasizing important information;
identifying subsystems of structures; portraying natural objects realistically; portraying time and
progress; reducing errors of interpretation; adding coding dimensions; and increasing
comprehensibility, believability and appeal.
When color is used correctly, people will often learn more. Memory for color information seems
to be much better than information presented in black-and-white.
There are also disadvantages to using color: it requires more expensive and complicated
display equipment; it may not accommodate color-deficient vision; some colors can cause
visual discomfort and afterimages; and color may contribute to visual clutter or lead to negative
associations through cross-disciplinary and cross-cultural associations.
Color organization pertains to consistency of organization. Color should be used to group related
items. A consistent color code should be applied to screen displays, documentation, and training
materials.
Similar colors should imply a similarity among objects. One needs to be complete and consistent
when grouping objects by the same color. Once a color coding scheme has been established, the
same colors should be used throughout the GUI and all related publications.
Color emphasis suggests using strong contrasts in value and chroma to draw the user's attention
to the most important information. Confusion can result if too many figures or background fields
compete for the viewer's attention. The hierarchy of highlighted, neutral, and low-lighted states
for all areas of the visual display must be designed carefully to provide the maximum simplicity
and clarity.
Color communication deals with legibility, including using appropriate colors for the central and
peripheral areas of the visual field. Color combinations influenced least by the relative area of
each color should be used.
Red and green should not be used in the periphery of the visual field, but in the center. If they
must be used in the periphery, provide a way to capture the viewer's attention, for example a
size change or blinking. Blue, black, white, and yellow should be used near the periphery of the
visual field, where the retina remains sensitive to these colors.
If colors change in size in the imagery, the following should be considered: as color areas
decrease in size, their value (lightness) and chroma will appear to change.
Use colors that differ in both chroma and value. Avoid red/green, blue/yellow, green/blue, and
red/blue combinations unless a special visual effect is needed. They can create vibrations,
illusions of shadows, and afterimages.
For dark viewing situations, light text, then lines, and small shapes on medium to dark
backgrounds should be used in slide presentations, workstations and videos. For light-viewing
situations, use dark (blue or black) text, thin lines and small shapes on light background. These
viewing situations include overhead transparencies and paper.
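Legibility of text/background color pairs can be checked numerically. One widely used measure is the WCAG 2.x contrast ratio, computed from relative luminance; the sketch below implements that published formula for sRGB channels in the 0-255 range.

```python
# WCAG 2.x contrast ratio between two sRGB colors.

def _linear(c: int) -> float:
    # Linearize one sRGB channel (0-255) per the WCAG formula.
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

WCAG recommends a ratio of at least 4.5:1 for normal body text, which gives a concrete target when choosing the light-on-dark or dark-on-light schemes described above.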
Color Symbolism
The purpose of color is to communicate, so color codes should respect existing cultural
and professional usage. Connotations vary strongly among different kinds of viewers, especially
those from different cultures, so color connotations should be used with great care. For example,
mailboxes are blue in the United States, bright red in England and bright yellow in Greece. If
using color in an electronic mail icon on the screen, the color might be changed for different
countries to reflect these differences in international markets.
Fitts’ law is widely applied in user experience (UX) and user interface (UI) design. For example,
this law influenced the convention of making interactive buttons large (especially on finger-
operated mobile devices)—smaller buttons are more difficult (and time-consuming) to click.
Likewise, the distance between a user's task/attention area and the task-related button should be
kept as short as possible.
The law is applicable to rapid, pointing movements, not continuous motion (e.g., drawing). Such
movements typically consist of one large motion component (ballistic movement) followed by
fine adjustments to acquire (move over) the target. The law is particularly important in visual
interface design—or any interface involving pointing (by finger or mouse, etc.): we use it to
assess the appropriate sizes of interactive elements according to the context of use and highlight
potential design usability problems. By following Fitts' law, standard interface elements such as
the right-click pop-up menu or short drop-down menus have had resounding success, minimizing
the user's travel distance with a mouse in selecting an option, reducing time and increasing
productivity. Conversely, long drop-downs, title menus, etc., impede users' actions, raising
movement-time demands.
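Fitts' law can be stated quantitatively. In the commonly used Shannon formulation, movement time is MT = a + b * log2(D/W + 1), where D is the distance to the target, W is the target's width along the axis of motion, and a and b are empirically fitted constants. The sketch below uses illustrative (not measured) values for a and b to show why larger, closer targets are faster to acquire.

```python
import math

# Fitts' law, Shannon formulation: MT = a + b * log2(D/W + 1).
# The constants a and b below are illustrative placeholders; in
# practice they are fitted to data from real pointing experiments.

def movement_time(d: float, w: float, a: float = 0.1, b: float = 0.15) -> float:
    return a + b * math.log2(d / w + 1)

# Doubling the button width lowers the index of difficulty, so the
# predicted movement time drops:
print(round(movement_time(d=400, w=20), 3))  # 0.759  (small target)
print(round(movement_time(d=400, w=40), 3))  # 0.619  (larger target)
```

This is why touch targets on mobile devices are made large, and why frequently used controls are placed near where the user's attention already is.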
1. Clear
2. Concise
3. Familiar
4. Responsive
5. Consistent
6. Attractive
7. Efficient
8. Forgiving
1. Clear
What does that do? Hover over buttons in WordPress and a tooltip will pop up explaining their
functions.
2. Concise
Clarity in a user interface is great, however, you should be careful not to fall into the trap of
over-clarifying. It is easy to add definitions and explanations, but every time you do that you add
mass. Your interface grows. Add too many explanations and your users will have to spend too
much time reading through them.
Keep things clear but also keep things concise. When you can explain a feature in one sentence
instead of three, do it. When you can label an item with one word instead of two, do it. Save the
valuable time of your users by keeping things concise. Keeping things clear and concise at the
same time isn't easy and takes time and effort to achieve, but the rewards are great.
The volume controls in OS X use little icons to show each side of the scale from low to high.
3. Familiar
Many designers strive to make their interfaces 'intuitive'. But what does intuitive really mean? It
means something that can be naturally and instinctively understood and comprehended. But how
can you make something intuitive? You do it by making it 'familiar'.
Familiar is just that: something which appears like something else you've encountered before.
When you're familiar with something, you know how it behaves and you know what to expect.
Identify things that are familiar to your users and integrate them into your user interface.
4. Responsive
Responsive means a couple of things. First of all, responsive means fast. The interface, if not the
software behind it, should work fast. Waiting for things to load and using laggy and slow
interfaces is frustrating. Seeing things load quickly, or at the very least, an interface that loads
quickly (even if the content is yet to catch up) improves the user experience.
Responsive also means the interface provides some form of feedback. The interface should talk
back to the user to inform them about what's happening. Have you pressed that button
successfully? How would you know? The button should display a 'pressed' state to give that
feedback. Perhaps the button text could change to "Loading…" and its state to disabled. Is the
software stuck or is the content loading? Play a spinning wheel or show a progress bar to keep
the user in the loop.
Instead of gradually loading the page, Gmail shows a progress bar when you first go to your
inbox. This allows for the whole page to be shown instantly once everything is ready.
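The button-feedback idea above can be captured as a tiny state machine. This is a toy model with invented names, not any real UI toolkit's API: pressing the button immediately changes its label and disables it, so the user knows the action registered even before the work finishes.

```python
# Toy model of button feedback states: idle -> loading -> idle.

class SubmitButton:
    def __init__(self):
        self.label, self.enabled = "Submit", True

    def press(self):
        if self.enabled:
            # Immediate feedback: show progress and block double-submits.
            self.label, self.enabled = "Loading…", False

    def finish(self):
        # Work done: restore the idle state.
        self.label, self.enabled = "Submit", True

btn = SubmitButton()
btn.press()
print(btn.label, btn.enabled)  # Loading… False
```

Disabling the button while loading also prevents the classic double-click problem of submitting the same form twice.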
5. Consistent
Now, I've talked before about the importance of context and how it should guide your design
decisions. I think that adapting to any given context is smart; however, there is still a level of
consistency that an interface should maintain throughout.
Consistent interfaces allow users to develop usage patterns: they'll learn what the different
buttons, tabs, icons and other interface elements look like, will recognize them, and will realize
what they do in different contexts. They'll also learn how certain things work, and will be able to
work out how to operate new features more quickly, extrapolating from those previous
experiences.
6. Attractive
This one may be a little controversial but I believe a good interface should be attractive.
Attractive in the sense that it makes the use of that interface enjoyable. Yes, you can make your
UI simple, easy to use, efficient and responsive, and it will do its job well; but if you can go that
extra step further and make it attractive, then you will make the experience of using that interface
truly satisfying. When your software is pleasant to use, your customers or staff will not simply be
using it; they'll look forward to using it.
There are of course many different types of software and websites, all produced for different
markets and audiences. What looks 'good' to any one particular audience will vary. This means
that you should fashion the look and feel of your interface for your audience. Aesthetics should
also be used in moderation and to reinforce function: adding a level of polish to the interface
is different from loading it with superfluous eye-candy.
7. Efficient
A user interface is the vehicle that takes you places. Those places are the different functions of
the software application or website. A good interface should allow you to perform those
functions faster and with less effort. Now, 'efficient' sounds like a fairly vague attribute: if you
combine all of the other things on this list, surely the interface will end up being efficient?
Almost, but not quite.
What you really need to do to make an interface efficient is to figure out what exactly the
user is trying to achieve, and then let them do exactly that without any fuss. You have to identify
how your application should 'work': what functions does it need to have, and what are the goals
you're trying to achieve? Implement an interface that lets people easily accomplish what they
want instead of simply implementing access to a list of features.
Apple has identified three key things people want to do with photos on their iPhone, and
provides buttons to accomplish each of them in the photo controls.
8. Forgiving
Nobody is perfect, and people are bound to make mistakes when using your software or website.
How well you handle those mistakes is an important indicator of your software's
quality. Don't punish the user; build a forgiving interface to remedy issues that come up.
A forgiving interface is one that can save your users from costly mistakes. For example, if
someone deletes an important piece of information, can they easily retrieve it or undo this
action? When someone navigates to a broken or nonexistent page on your website, what do they
see? Are they greeted with a cryptic error or do they get a helpful list of alternative destinations?
Trashed the wrong email by mistake? Gmail lets you quickly undo your last action.
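The undo behaviour described above is usually implemented by recording each destructive action so it can be reversed. Below is a minimal, hypothetical sketch (the `Mailbox` class is invented for illustration) in the spirit of Gmail's undo: deleting pushes the removed item onto an undo stack, and `undo` restores it to its original position.

```python
# Minimal undo support for a forgiving interface: every destructive
# action is recorded so the user can reverse it.

class Mailbox:
    def __init__(self, messages):
        self.messages = list(messages)
        self._undo = []                      # stack of (index, message)

    def delete(self, index: int):
        self._undo.append((index, self.messages.pop(index)))

    def undo(self):
        if self._undo:
            index, message = self._undo.pop()
            self.messages.insert(index, message)

box = Mailbox(["invoice", "newsletter", "receipt"])
box.delete(1)                # oops - wrong email
box.undo()                   # quickly undo the last action
print(box.messages)          # ['invoice', 'newsletter', 'receipt']
```

Storing the index along with the message means undo restores the item exactly where it was, which matters when the user is scanning a familiar list.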
To conclude...
Working on achieving some of these characteristics may actually clash with working on others.
For example, by trying to make an interface clear, you may add too many descriptions and
explanations that end up making the whole thing big and bulky. Cutting stuff out in an effort to
make things concise may have the opposite effect of making things ambiguous. Achieving a
perfect balance takes skill and time, and each solution will depend on the case at hand.
What do you think? Do you disagree with any of these characteristics or have any more to add?
I'd love to read your comments.
(1) processing of the sensory input, which transforms this low-level information into higher-level
information (e.g., extracting shapes for object recognition);
(2) processing connected with a person's concepts and expectations (or knowledge), and with
restorative and selective mechanisms (such as attention) that influence perception.
The simplicity principle, traditionally referred to as Occam's razor, is the idea that simpler
explanations of observations should be preferred to more complex ones. In recent decades the
principle has been clarified via the incorporation of modern notions of computation and
probability, allowing a more precise understanding of how exactly complexity minimization
facilitates inference. The simplicity principle has found many applications in modern cognitive
science, in contexts as diverse as perception, categorization, reasoning, and neuroscience. In all
these areas, the common idea is that the mind seeks the simplest available interpretation of
observations, or, more precisely, that it balances a bias toward simplicity with a somewhat
opposed constraint to choose models consistent with perceptual or cognitive observations. This
brief tutorial surveys some of the uses of the simplicity principle across cognitive science,
emphasizing how complexity minimization in a number of forms has been incorporated into
probabilistic models of inference.
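The balance between simplicity and fit can be made concrete with a toy scoring rule. The sketch below is an illustrative, MDL/BIC-flavoured example (not any specific published model): each candidate model is scored by its squared error on the data plus a penalty per free parameter, and the model with the lowest total wins.

```python
# Toy "simplicity vs. fit" comparison: squared error plus a
# complexity penalty per parameter.

def sse(data, predict):
    return sum((y - predict(x)) ** 2 for x, y in data)

def score(data, predict, n_params, penalty=1.0):
    return sse(data, predict) + penalty * n_params

data = [(0, 0.1), (1, 0.9), (2, 2.1)]     # roughly the line y = x
constant = lambda x: 1.0                   # 1 free parameter
line = lambda x: x                         # 2 free parameters

print(score(data, constant, 1))
print(score(data, line, 2))   # the line wins despite its extra parameter
```

The constant model is simpler but fits poorly; the line pays a complexity penalty yet still scores better, mirroring the trade-off the simplicity principle describes.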
usability problems related to the layout, logical flow, and structure of the interface and
inconsistencies in the design
non-compliance with standards
ambiguous wording in labels, dialog boxes, error messages, and onscreen user assistance
functional errors
Design elements - When assessing the design elements of a user interface, you need to
consider their overall aesthetics—Do they look good?—and functionality—Do they work
well?
Text elements- When reviewing text user interface elements, consider these three Cs of
communication:
Clarity - The textual messages a user receives from an application must be clear.
Consistency - Use terms and labels for user interface controls that are familiar to users.
Conciseness - Be as brief as possible, but don’t sacrifice meaning for brevity.
Link elements - There are links in all Web applications and in some desktop applications, but
not all of them are blue underlined text. Links help users navigate — both within an
application and to external locations.
Navigational links include menus of all types, breadcrumb trails, links to other pages, links to
Help, and even email addresses. Often links in applications display something on the same
page—such as a popup dialog box, an expandable panel, or user assistance—or clickable
image maps with multiple hotspots display related information.
Ensure the application uses similar link elements consistently. For example, if an application
has a page browser, do the link elements always look the same, or are they different like those
in Figure 6?
Review the font, color, and style of active and visited links.
Check the behavior when users hover over links.
Make sure the text matches the link's action.
Verify that links take users where they would expect to go.
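Parts of this link review can be automated. The sketch below uses only Python's standard-library `html.parser` to collect each anchor's target and visible text, then flags links whose text is empty or a bare "click here" (a common failure of the "text matches the link's action" check). The class name and flagging rule are illustrative choices, not a standard tool.

```python
# Collect (href, visible text) pairs from HTML and flag anchors
# with empty or uninformative text. Standard library only.

from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:       # only collect text inside <a>
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, self._text.strip()))
            self._href = None

audit = LinkAudit()
audit.feed('<a href="/help">Help</a> <a href="/x"></a>')
flagged = [h for h, t in audit.links if not t or t.lower() == "click here"]
print(flagged)  # ['/x']
```

A script like this cannot judge whether a link goes where users expect, but it cheaply surfaces candidates for a manual review pass.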
Visual elements - Visual elements include graphics, color, and display mechanisms. You
don't have to be a graphic designer to review the visual elements of a user interface. As with
text and links, you're looking for usefulness and consistency. You'll notice consistency by its
absence.
User interactions - User interactions are the dynamic aspects of a user interface—such as
clicking a button, performing a search, or saving a record. What happens when you perform
such actions? What results or feedback do you get? Do you get logical and appropriate
results? Look out for results that don’t behave as you expect them to.
When reviewing a user interface, test these user interactions:
Click every menu item, button, and toolbar button. Do they do what you expect them to
do? Do they take you where you expect them to go? For example, is there a clear
difference between what happens when you click Apply, OK, or Cancel?
Click all other controls, including tabs, spin boxes, drop-down list boxes, and other user
interface elements.
Try to access all functionality using the keyboard. For example, Alt+<Character> should
display the relevant menu, Tab should move the insertion point to the next field, Enter
should execute a command if the focus is on a button, and Esc should close a dialog box.
Check what happens when you press the function keys. For example, on a PC, F1 should
display Help and F5 should refresh the browser window or a directory.
Cost Of Change
Sure, we can change that. No problem. Engineering Change Request(s) are common through
development (and often well beyond), but what is the REAL cost? Of course, change and
improvement are good — that‘s what design and development are all about. However, there are
costs. So, how can you minimize the costs in making changes?
The product development process (very simplified) looks something like this graphic. It starts
with ideation, then goes into the development cycle of Design > Prototype > Test. The form of
that cycle depends greatly on the product. If it's a software product, you might go through that
cycle in a day. If it's an industrial product, it can take months to go around the cycle.
With each rotation around the development cycle, we review where we're going, learn more,
and hopefully refine the product. These refinements, or changes, improve the product.
I argue there is not a "Design Freedom" curve, but only a shift in the "Costs" curve with each
Engineering Change Request. We are, after all, free to make changes at any time. The impact, or
cost, of such changes (implicit or explicit) in money, time, reputation, opportunity, etc., is
really the issue. For grins, we can show a little shift in the Design Freedom curve as a nod to
project inertia, however. Let's look at a different perspective.
An Example:
Company A has a great idea for a product improvement. They progress through the design
phases in a normal sort of way (RED curve). If they continue along the normal path (light
BLUE curve) in the graphic below, we see the potential result. However, during the design cycle,
Engineer Q gets some key feedback from the manufacturer and sees an opportunity for cost
savings in production. The Engineering Change Request is implemented, then development
proceeds along the RED curve. A big part of the cost "blip" for the new idea is time. Remember,
the bottom axis is not a timeline, nor are the steps in symmetric increments.
Reputation costs
"Reputation Costs" means the reasonable and necessary fees, costs and expenses charged by any
public relations firm, crisis management firm or law firm retained by or on behalf of an Executive
to mitigate the adverse effects on such Executive's reputation as a result of a negative public
statement made about him or her by a Regulator.
Reputation cost is the various forms of economic and other harm that an entity may experience
as a result of damage to its reputation with customers or others. For example, if a company
experiences a cyber-attack in which its customer records are stolen or compromised, and
the attack is made public, customers may switch to other companies for which attacks have not
been made public, whether or not they have occurred.
The best user interface guidelines are high level and contain widely applicable design principles.
The designer who intends to apply these principles should know which theoretical evidence
supports them and apply the guidelines at an early stage of the design life cycle.
Standards vs Guidelines
Standards: high authority; little overlap; limited application (e.g. display area); minimal
interpretation, so little specialist knowledge is required.
Guidelines: lower authority; conflicts, overlap and trade-offs; less focused; interpretation
required, drawing on expert HCI knowledge.
Interactive system design standards are most commonly set by national and international bodies
such as the British Standards Institution (BSI) and the International Organisation for
Standardisation (ISO). Design guidelines can be found in various forms, for example, journal
articles, technical reports, general handbooks, and company house style guides. A good example
of this is the guidelines produced by Smith and Mosier (1986). An example of a company house
style guide is Apple's Human Interface Guidelines. Standardisation in interface design can
provide a number of benefits.
Design Guides
Design rules, in the form of standards and guidelines, provide designers with directions for a
good and quality design with the intention of increasing the usability of a system.
Software companies produce sets of guidelines for their own developers to follow. These
collections are called house standards, sometimes referred to as house style guides. These are
more important in industry, especially in large organisations where commonality can be vital.
There are two main types of style guides: commercial style guides and corporate style guides.
Hardware and software manufacturers produce commercial style guides, and corporate style
guides are developed by companies for their own internal use. The advantage of using these
guides is to enhance the usability of a product through consistency. However, there are other
reasons; for example, software developed for Microsoft, Apple Macintosh or IBM PC maintains
its "look and feel" across many product lines.
Though commercial guides often contain high level design principles, in most cases these
documents are based on low level design rules which have been developed from high level
design principles. If an organisation aims to develop their own corporate style guides for
software, generally the starting point will be one of the commercial style guides.
Many design style questions have to be addressed more specifically when developing corporate
style guides. For example, how should dialog windows be organised? Which styles (colours,
fonts, etc.) should be used?
The roots of design rules may be psychological, cognitive, ergonomic, sociological or
computational, but they may or may not be supported by empirical evidence.
Design review
A design review is a milestone within a product development process whereby a design is
evaluated against its requirements in order to verify the outcomes of previous activities and
identify issues before committing to further work.
3. UI Design Review
Use this step to review specific interaction behaviors and to provide guidance to designers on
problematic issues.
4. Creative Direction Review
Use this step to ensure that the visual design maps to the creative direction of the project.
Scheduling Product Design Reviews
Use this section to list the regularly scheduled meeting times for Product Design Reviews.
Include the review meeting days and times (for example, "Wednesdays from 9 am to 10
am"), and any other information about review meeting schedules (for example, indicate
here if no formal review meeting is scheduled for a particular step in the Design Review
Process).
Sending Review Materials Out in Advance
Use this section to indicate how the materials to be reviewed at the meetings should be
made available to the reviewers. Include the following types of information:
How far in advance the review materials need to be made available before the
scheduled review meeting (for example, one business day, two business days, one
week, etc.)
How materials should be distributed to the reviewers (for example, placed on a web
server, sent as e-mail attachments, etc.)
The format materials should use (for example, .doc, .pdf, .html, .jpg, .gif, all of the
above, etc.)
A list of related materials that should also be made available (for example, meeting
minutes from earlier, related reviews, a Creative Brief, etc.)
Product Design Review Minutes
Use this section to describe when and how meeting minutes should be generated:
Indicate if any meetings do not require minutes.
Indicate who is responsible for recording and distributing minutes at each meeting.
You can indicate a specific person or a role (for example, “Design Lead”).
Indicate how or where the minutes will be made available to team members.
While user interface (UI) reviews often occur at the end of the development cycle, I recommend
that you get involved early in the process, preferably when the designers create the initial
wireframes or paper prototypes. Why? Making changes early in the process reduces development
costs. Plus, if you identify usability issues early, it's much more likely the team can remedy them
before launch, preventing bad reviews like that shown in the Figure below, negative
word-of-mouth, and the lost sales that result from them.
Before reviewing an application's user interface, make sure you know who the users will be,
have clear goals in mind, and remember, it's all about improving the user experience. A
thorough review covers:
design elements
text elements
link elements
visual elements
user interactions
When assessing the design elements of a user interface, you need to consider their overall
aesthetics (do they look good?) and functionality (do they work well?).
Tip: If your company does not have user interface guidelines or a style guide, create them.
Creating guidelines or style guides is a big undertaking. To save a lot of effort, base them on the
standards from Microsoft, Apple, or Sun, and document only the exceptions that are specific to
your organization. Whether you're working on desktop or Web applications, a wide variety
of style guides is available online.
The Figure below shows examples of command buttons that are too small for their labels and
buttons for which there isn't sufficient space.
Figure—Poorly rendered command buttons
Correct Punctuation
Review the punctuation of text elements for correctness and consistency. In particular, look for
the following:
Error messages must be clear and useful. We've all seen plenty of error messages that are
neither. Good error messages tell users what went wrong (and possibly why), provide one or
more solutions for fixing the error, and offer a set of buttons that relate directly to the next action
a user should take. Options like OK, Continue, and Cancel are not appropriate when an error
message is a question. The possible responses to a question should be Yes and No. Too many
options in an error message can confuse users, so they have no idea what will happen when they
click a button. When reviewing error message text, your aim is to reduce confusion.
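The guidance above can be sketched as a small data structure: a well-formed error message carries what went wrong, possibly why, a suggested fix, and buttons that match the message type (Yes/No for questions, OK otherwise). The `ErrorMessage` class and field names below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class ErrorMessage:
    """Illustrative model of a well-formed error message (hypothetical names)."""
    what_went_wrong: str
    why: str = ""
    suggested_fix: str = ""
    is_question: bool = False

    def buttons(self):
        # A question must offer Yes/No; a plain notification offers OK.
        return ["Yes", "No"] if self.is_question else ["OK"]

msg = ErrorMessage(
    what_went_wrong="The file report.doc could not be saved.",
    why="The disk is full.",
    suggested_fix="Free some space, then try again.",
)
confirm = ErrorMessage(
    what_went_wrong="Discard unsaved changes?",
    is_question=True,
)
print(msg.buttons())      # ['OK']
print(confirm.buttons())  # ['Yes', 'No']
```

Modelling messages this way makes the "too many options" problem visible at review time: any message whose buttons do not answer its own text fails the check.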
Tip: Ask the developers on your team for your application's resource file, which contains all of
the error messages and other UI text, so you don't have to try to create every error condition to
review its text.
Visual elements include graphics, color, and display mechanisms. You don't have to be a graphic
designer to review the visual elements of a user interface. As with text and links, you're looking
for usefulness and consistency. You'll notice consistency by its absence.
Color is often a matter of personal preference. Be aware that people will interpret color names
differently. For example, if I recommend using purple in a user interface, which purple do I
mean: violet, mauve, lilac, burgundy, maroon, or just plain purple? The Figure below shows
many shades of the color purple.
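The ambiguity is easy to demonstrate: each of those names maps to a quite different RGB value. The hex codes below are common web/CSS approximations (mauve, lilac and burgundy have no single standard value, so those three are assumptions):

```python
# Approximate hex values for colours people may all call "purple".
purples = {
    "violet":   "#EE82EE",  # CSS named colour
    "mauve":    "#E0B0FF",  # common approximation, no single standard
    "lilac":    "#C8A2C8",  # common approximation
    "burgundy": "#800020",  # common approximation
    "maroon":   "#800000",  # CSS named colour
    "purple":   "#800080",  # CSS named colour
}

def rgb(hex_code):
    """Convert '#RRGGBB' to an (r, g, b) tuple of ints."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# No two of these are the same colour; "purple" is not one colour.
for name, code in purples.items():
    print(f"{name:8} {code} -> {rgb(code)}")
```

This is why a style guide should specify colours by value (hex or RGB), never by name alone.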
Figure (After): Changing colors in the CSS did not change the hard-coded colors and
graphics.
User interactions are the dynamic aspects of a user interface—such as clicking a button,
performing a search, or saving a record. What happens when you perform such actions? What
results or feedback do you get? Do you get logical and appropriate results? Look out for results
that don’t behave as you expect them to.
Click every menu item, button, and toolbar button. Do they do what you expect them to do?
Do they take you where you expect them to go? For example, is there a clear difference
between what happens when you click Apply, OK, or Cancel?
Click all other controls, including tabs, spin boxes, drop-down list boxes, and other user
interface elements.
Try to access all functionality using the keyboard. For example, Alt+<Character> should
display the relevant menu, Tab should move the insertion point to the next field, Enter should
execute a command if the focus is on a button, and Esc should close a dialog box.
Check what happens when you press the function keys. For example, on a PC, F1 should
display Help and F5 should refresh the browser window or a directory.
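Checks like these are easy to capture as a reusable, data-driven checklist so that every screen gets the same pass. The structure below is a sketch, not a test framework; the key names and expected behaviours simply mirror the list above:

```python
# A data-driven keyboard-access checklist (illustrative structure only).
KEYBOARD_CHECKS = [
    ("Alt+<Character>", "displays the relevant menu"),
    ("Tab",             "moves the insertion point to the next field"),
    ("Enter",           "executes the command when focus is on a button"),
    ("Esc",             "closes the dialog box"),
    ("F1",              "displays Help"),
    ("F5",              "refreshes the browser window or directory"),
]

def record_results(observed):
    """Pair each check with a pass/fail observation, keyed by the key name.

    Checks the reviewer has not exercised yet are marked 'not tested'.
    """
    return [(key, expected, observed.get(key, "not tested"))
            for key, expected in KEYBOARD_CHECKS]

results = record_results({"Tab": "pass", "Esc": "fail"})
for key, expected, outcome in results:
    print(f"{key:16} {expected:48} -> {outcome}")
```

Keeping the checklist as data means the same list can be re-run on every screen and attached, filled in, to the review report.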
Before measuring system response times, make sure your system configuration matches the
design specifications for the application. Otherwise, the developers might reject or ignore your
comments about performance.
System response times determine how long it takes to complete specific actions. Are the times
within acceptable ranges—that is, do they meet your expectations based on your experience
performing similar tasks using other software? Some interactions whose performance you should
check include the following:
opening and closing an application
loading new pages, dialog boxes, and tabs
loading new and existing records
responses to user interactions such as clicking a button
running processes such as saving a record or displaying a report
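A quick way to put numbers on these checks is a small timing wrapper around each action, compared against an acceptable threshold. The threshold and the placeholder action below are assumptions; real limits should come from the application's design specification:

```python
import time

def timed(action):
    """Run an action and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = action()
    return result, time.perf_counter() - start

def check_response(name, action, threshold_s):
    """Report whether an action completes within its acceptable range."""
    _, elapsed = timed(action)
    ok = elapsed <= threshold_s
    print(f"{name}: {elapsed:.3f}s (limit {threshold_s}s) -> {'OK' if ok else 'SLOW'}")
    return ok

# Placeholder standing in for a real interaction such as loading a record.
def load_record():
    time.sleep(0.01)

check_response("load record", load_record, threshold_s=0.5)
```

Recording elapsed times this way also gives the developers something concrete to reproduce, rather than a subjective "it feels slow".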
Documenting/Reporting Issues
As you review each Web page or screen of an application, record your findings. No matter how
you report issues, always provide constructive criticism and minimize personal opinion.
Certainly avoid comments such as "I don't like that." Back up your assertions and any suggestions
you make for improving the application's user interface with objective reasons relating to the
following:
usability
readability
accessibility
familiarity
consistency
conformance with accepted industry standards—for example, the Checklist of Checkpoints
for Web Content Accessibility Guidelines 1.0
legislative compliance—for example, Section 508 of the Rehabilitation Act (USA), the
British Disability Discrimination Act (UK), or the Disability Discrimination Act (Australia)
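One way to keep findings objective is to require every recorded issue to cite one of the reasons above. The sketch below (hypothetical class and field names) rejects any issue that carries no accepted justification:

```python
from dataclasses import dataclass

# The accepted objective reasons, as listed above.
REASONS = {"usability", "readability", "accessibility", "familiarity",
           "consistency", "industry standards", "legislative compliance"}

@dataclass
class Issue:
    """One review finding: where it is, what it is, why it matters, what to do."""
    screen: str
    description: str
    reason: str
    suggestion: str

    def __post_init__(self):
        if self.reason not in REASONS:
            raise ValueError(
                f"'{self.reason}' is not an objective reason; "
                f"choose one of: {sorted(REASONS)}")

ok = Issue("Login", "Button label is cut off", "readability",
           "Widen the button to fit its label")
print(ok.reason)  # readability
```

An issue created with a reason such as "I don't like it" raises `ValueError`, which is exactly the kind of comment the text above says to avoid.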
Once you've created a formal report of your findings, don't let the product team ignore it. Set up
times to talk directly with the developers, team leads, or project managers who can help you get
your feedback addressed. Present your findings as an advocate for users, and let the team see
your passion for meeting users' needs.
Sometimes you just want to report an issue to the one person who can fix the problem. In such
instances, consider one of these less formal reporting methods:
Sit with the developer, show him the problems on your computer, and confirm the fixes as he
makes them.
If you cannot sit down with a developer to show him the problems, capture screen videos that
show the issues. A video is also very useful when a developer says he can't replicate a
problem. Capture them using tools such as Captivate, Camtasia, or ViewletBuilder. Refer to
any videos in your formal report.
Complete a checklist showing the items you've checked as you reviewed each part of an
application, one checklist per screen. Developers can reuse your checklist as they continue
developing the application. Here is a link to a comprehensive checklist (PDF) that covers all of
the guidelines discussed in this article.
Reviewing an application's user interface is not about your opinions. Your aim is to improve the
user experience to reduce potential user confusion and meet users' needs. By thoroughly
reviewing design elements, text elements, link elements, visual elements, and user interactions,
you can ensure users have a positive experience with your application.
Although you can use some software tools to help you with this task, ultimately, your knowledge
and understanding of user interface guidelines, standards, and interactions are your best tools for
reviewing a user interface.
Many of today's top companies, like Apple and Target, use their logos prominently and
consistently, making their brand and the quality it promises immediately recognizable. For
companies who sell services in addition to products, company marketing relies heavily on what
you say, or write, and how you say it. Just like with logos, consistency is key. An inconsistent
writing style can reduce the perceived quality of your product or services. Enter the style guide.
Developing a style guide, an internal reference of writing and design standards, can help staff
at organizations of any size navigate the often complex world of communication. It can save time
and allow your content's message to shine through without distraction. Own your brand!
1. Think about who will use the style guide and what they need to know. Should your guide
be short, or would it be more useful if it were comprehensive and easy to navigate?
2. Look through your organisation's documents and identify key requirements for consistent
branding. A review of both drafts and final products will also help you identify common
errors and inconsistencies among colleagues.
3. Assess existing style guides for ideas on how to organize a list of common errors alongside
your desired style elements (such as company fonts, colors, and spacing).
4. If your company relies on client style guides for communications, be sure to differentiate
between the internal style and the client style. For instance, some of our clients use the
word "cybersecurity" while others prefer this to be two words, "cyber security."
5. Include examples for correct and incorrect style usage, particularly for complex issues, so
that your message is clear.
6. If your company requires a comprehensive style guide, consider making a desktop
reference for your colleagues. A one-pager that can be used quickly is extremely helpful in
preventing common errors.
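A small script can enforce the terminology side of such a guide, flagging whichever variant your house style rejects. The rule table below is a hypothetical example (it assumes a house style that prefers "cybersecurity" and "email"):

```python
import re

# Hypothetical house rules: preferred term -> pattern matching rejected variants.
STYLE_RULES = {
    "cybersecurity": re.compile(r"\bcyber\s+security\b", re.IGNORECASE),
    "email":         re.compile(r"\be-mail\b", re.IGNORECASE),
}

def check_style(text):
    """Return a list of (preferred_term, offending_match) pairs found in text."""
    findings = []
    for preferred, pattern in STYLE_RULES.items():
        for match in pattern.finditer(text):
            findings.append((preferred, match.group()))
    return findings

draft = "Our cyber security team will e-mail the client."
for preferred, found in check_style(draft):
    print(f'Found "{found}"; house style prefers "{preferred}".')
```

Running a checker like this over drafts catches exactly the kind of inconsistency between colleagues that step 2 above asks you to look for.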
Once you've created a guide, consult with your colleagues to make sure that you have covered
all relevant topics and agree on style conventions. A style guide is only useful if you have buy-in
and if you can defend your reasoning for including certain standards. Is a certain rule too fluid to
be useful? Is an Oxford comma really necessary?
Here at Nexight, we develop products that distill immense amounts of information to its core. By
using standards for writing style, we can ensure that our communications products are consistent,
effective, and high quality.
After corporate style and branding, often the next most important use of the style guide is to
answer internal questions about presentation. Your style guide should make clear how authors
present:
Tools like PerfectIt can help to ensure that presentation is consistent. What's more, there are
free user guides which show how you can customize PerfectIt and share its style sheets among
colleagues so that all documents in your organization are checked the same way.
The key to determining what goes in the style guide is to find out how usage differs in your
company. The best way to do that is to bring more people into the process of building the style
guide.
1. Responsiveness: It signals to the user that these buttons are clickable, and that the button
has two separate parts since they are highlighted separately.
2. Engagement: The visual effect engages the attention. If done well, the visual effect will
delight the user. It gives life to the static elements on the screen.
3. Activity: Visual changes indicate that the underlying system is changing. Animation for
its own sake can distract, but as part of the visual feedback can help explain how the
system works.