
Unit 1 - HCI Assignment Lectures


COMPUTER ENGINEERING

NOTES
Assignment on HCI by
Michael Matardi

STUDY NOTES
Unit Aim
This unit aims to help learners to understand the principles of interaction, including the physical
attributes that influence human computer interaction. The unit also enables learners to design
user interfaces and evaluate user interface designs.
Unit Overview
The unit is for learners considering a career in computing and information systems and who wish
to gain an understanding of human computer interaction. The unit will equip learners with the
skills and knowledge to review the principles of interaction, to examine the physical attributes
that influence human computer interaction and to develop models of interaction for an
application. It will also enable learners to design user interfaces and evaluate user interface
design.

Human Computer Interaction

Unit 1: Human Computer Interaction

Contents
Outcome 1: Understand physical attributes that influence human computer interaction ...............6
Introduction ................................................................................................................................ 6
1. Primary modes of response for inputting information .............................................................. 7
HCI interaction Mode ..............................................................................................................7
Inputting Mode ........................................................................................................................ 8
Multimodal HCI systems .........................................................................................................9
Modes of response/feedback for inputting information .......................................................... 10
Types of Feedback............................................................................................................. 10
Including Feedback in Design............................................................................................ 13
Input flow.............................................................................................................................. 14
Mode confusion..................................................................................................................... 14
Haptic interaction .................................................................................................................. 15
2. Human thinking processes for computer-based tasks ............................................................. 16
Human thinking processes ..................................................................................................... 16
Different types of thinking..................................................................................................... 16
Human-Computer Model....................................................................................................... 18
Cognitive Psychology............................................................................................................ 19
Cognitive psychology & Information Processing ................................................................... 20
Levels of Processing .............................................................................................................. 21
Shallow Processing............................................................................................................ 22
Deep Processing ................................................................................................................ 22
Influence of human thinking processes on HCI ...................................................................... 23
Meaning of Design Thinking ............................................................................................. 23
Importance of Design Thinking today ................................................................................ 23
Stages of Design Thinking ................................................................................................. 24
Computer programmers thinking in terms of solving problems .............................................. 25
Computer programming..................................................................................................... 25
Problem Solving ................................................................................................................ 25
Problem Solving Strategies ................................................................................................ 26
Outcome 2: Understand principles of interaction ....................................................................... 28
1. Developing models of interaction for an application .............................................................. 28
Meaning of Design Principles ................................................................................................ 28
HCI Design principles ........................................................................................................... 28
Theoretical frameworks and methodologies ........................................................................... 33
HCI Models ....................................................................................................................... 35
HCI Analysis Methodology ............................................................................................... 37
Heuristic evaluation ........................................................................................................... 37
Cooper's Persona System ...................................................................... 39
Personas in the User Interface Design ................................................................................ 39
Personas – A Simple Introduction...................................................................................... 39
Personas in Design Thinking ............................................................................................. 40
Four Different Perspectives on Personas ............................................................................ 40
Creating Personas and Scenarios........................................................................................ 43
Significant Models in HCI ..................................................................................................... 44
Developing a model of interaction for an application ............................................................. 48
Factors in selecting suitable interaction models design .......................................................... 52
Translations between user and system ................................................................................... 54
Models of interaction ......................................................................................................... 54
The terms of interaction ..................................................................................................... 54
The execution–evaluation cycle ......................................................................................... 55
The interaction framework................................................................................................. 57
2. Interaction style for a given task ............................................................................................ 59
Human Interface/Human Error .............................................................................................. 59
Introduction ....................................................................................................................... 59
Sources of Human Error .................................................................................................... 60
HCI Problems .................................................................................................................... 61
Available tools, techniques, and metrics ............................................................................ 61
Relationship to other topics ............................................................................................... 64
Conclusions ....................................................................................................................... 64
Interaction styles supported by the system ............................................................................. 65
Human-computer Interfaces Styles .................................................................................... 65
Conceptual Interaction Models .......................................................................................... 67
The nature of user/system dialogue ........................................................................................ 69
Dialogue system ................................................................................................................ 69
Introduction ....................................................................................................................... 69
Components ...................................................................................................................... 70
Types of systems ............................................................................................................... 70
Natural dialogue systems ................................................................................................... 71
Applications ...................................................................................................................... 72
Select and evaluate an appropriate interaction style ............................................................... 72
Outcome 3: Be able to design user interfaces ............................................................................. 74
1. User interface requirements of a task ..................................................................................... 74
Interactive Systems ............................................................................................................... 74
Usability ............................................................................................................................ 74
Difficulty in Designing Interactive Systems ....................................................................... 75
Key Principles of Interactive Design...................................................................................... 75
Interaction Component Styles ................................................................................................ 75
Command Line Dialogue ................................................................................................... 76
Menu Interaction Style ...................................................................................................... 76
Form Fill-In Interaction Style ............................................................................................ 77
Direct Manipulation .......................................................................................................... 77
Natural Language Interfaces .............................................................................................. 78
User interfaces and usability .................................................................................................. 78
User-centered design versus task-oriented design .................................................................. 80
Task Analysis and Modeling ................................................................................................. 81
Hierarchical Tasks Analysis .............................................................................................. 82

GOMS ............................................................................................................................... 84
2. User interface requirements ............................................................................................... 85
Input/output Design User Interface Design ............................................................................ 85
User Interface Design ............................................................................................................ 85
Output Design ....................................................................................................................... 87
Input Design .......................................................................................................................... 89
3. Prototyping ........................................................................................................................... 91
User Interface Prototyping ..................................................................................................... 91
Basic prototype categories ..................................................................................................... 91
Human factors and ergonomics ............................................................................................. 92
Cognitive Ergonomics ....................................................................................................... 93
Sources of Information...................................................................................................93
Approach to Ergonomics ................................................................................................... 94
Ergonomics Techniques..................................................................................................... 97
Application Areas ............................................................................................................ 101
People-centred Issues ...................................................................................................... 103
High-Fidelity and Low-Fidelity Prototyping ........................................................................ 106
Creating Paper Prototypes ................................................................................................... 108
Coming up & using ideas in your Interface Design .............................................................. 108
4. Design Interface & Components ...................................................................................... 110
Interface and Components ................................................................................................... 110
User Interface (UI) Design Patterns ..................................................................................... 110
Navigation considerations ................................................................................................... 111
Navigational directions .................................................................................................... 111
Global Navigation best design ......................................................................................... 111
UI widgets ........................................................................................................................... 112
Effective Visual Communication for Graphical User Interfaces ........................................... 113
Design Considerations ..................................................................................................... 113
Visible Language............................................................................................................. 113
Color ............................................................................................................................... 120
Principles of user interface design & Fitts' Law .................................................. 121
User interface designs & human abilities ............................................................................. 122
The simplicity principle in perception and cognition............................................................ 127
Online Controlled Experiment ............................................................................................. 128
Web design as the anchoring domain ................................................................................... 128
How Anchoring Can Improve the User Experience ............................................................. 128
Interface design Rules ......................................................................................................... 129
Outcome 4: Evaluating user interface designs .......................................................................... 131
1. Designing a review process for a user interface ................................................................... 131
Review of user interfaces .................................................................................................... 131
User interface review process .............................................................................................. 131
What Does a Comprehensive UI Review Involve? ......................................................... 132
Increased cost if changes needed further along the development process ............................. 134
How to minimize cost in the software development process ............................................. 134
Cost Of Change ............................................................................................................... 134

Implications Of The Timeline .......................................................................................... 134
Costs in an Engineering Change Request ......................................................................... 135
Reputation costs .................................................................................................................. 137
Design guidelines, rules and standards................................................................................. 137
Standards vs Guidelines ................................................................................................... 137
Design Guides ................................................................................................................. 138
Design review ..................................................................................................................... 138
Design Review Process ....................................................................................................... 139
Review of different aspects of interface ............................................................................... 140
What Does a Comprehensive UI Review Involve? ........................................................... 141
Reviewing Design Elements ............................................................................................ 141
Reviewing Text Elements ................................................................................................ 144
Reviewing Link Elements ................................................................................................ 148
Reviewing Visual Elements ............................................................................................. 148
Reviewing User Interactions ............................................................................................ 152
Documenting/Reporting Issues ........................................................................................... 154
Using a Bug Tracking System.......................................................................................... 154
Creating a Formal Report ................................................................................................ 155
Reporting Issues Less Formally ....................................................................................... 156
Why Tackle a User Interface Review? ............................................................................. 156
Accessibility tools and HTML validation tools .................................................................... 156
Using/developing style guides ............................................................................................. 157
Meaning of a style guide.................................................................................................. 157
Developing a style guide ................................................................................................. 158
2. Designing written instruments to gather user interface feedback .......................................... 159
Meaning & Importance of feedback..................................................................................... 159
Written instruments for gathering interface feedback ........................................................... 159
Designing written instruments for gathering interface feedback ........................................... 165

Outcome 1: Understand physical attributes that influence
human computer interaction

Introduction
Human–computer interaction (HCI) researches the design and use of computer technology,
focused on the interfaces between people (users) and computers.

HCI (human-computer interaction) is the study of how people interact with computers and to
what extent computers are or are not developed for successful interaction with human beings.

As its name implies, HCI consists of three parts: the user, the computer itself, and the ways they
work together.

User
By "user", we may mean an individual user or a group of users working together. An appreciation
of the way people's sensory systems (sight, hearing, touch) relay information is vital. Also,
different users form different conceptions or mental models about their interactions and have
different ways of learning and retaining knowledge. In addition, cultural and national
differences play a part.

Computer
When we talk about the computer, we're referring to any technology ranging from desktop
computers to large-scale computer systems. For example, if we were discussing the design of a
Website, then the Website itself would be referred to as "the computer". Devices such as mobile
phones or VCRs can also be considered to be "computers".

Interaction
There are obvious differences between humans and machines. In spite of these, HCI attempts to
ensure that they both get on with each other and interact successfully. In order to achieve a usable
system, you need to apply what you know about humans and computers, and consult with likely
users throughout the design process. In real systems, the schedule and the budget are important,
and it is vital to find a balance between what would be ideal for the users and what is feasible in
reality.

1. Primary modes of response for inputting information
HCI interaction Mode
A mode is a distinct setting within a computer program or any physical machine interface, in which
the same user input will produce perceived results different from those that it would in other
settings.

A mode in HCI is a context where user actions (keypresses, mouse clicks etc.) are treated in a
specific way. That is, the same action may have a different meaning depending on the mode.

Modes are known to cause problems: you may press a key expecting one result because you think
the system is in a particular mode, but something unexpected happens because the system is in
fact in another mode.

However, modes are often essential as there are only a finite number of input options (keys and
mouse actions), but often a large number of things you would like them to do.

The crucial thing is to ensure, as far as possible, that the current mode is apparent to the user via
visual, aural or other forms of feedback. Furthermore, because mode errors are common, it is
important to ensure that they do not lead to catastrophic consequences.

Examples: If you have an (old-fashioned!) mobile phone and press "999" you would expect to be
about to ring the police (in the UK), but if you do the same thing while editing an SMS message
you would get 'y' (assuming multi-tap input) - there is a dialling mode and a text-entry mode. In a
graphics editor, click and drag on the canvas might mean draw a line, circle, or rectangle
depending on what tool is selected - that is, a mode for each tool. This may be more significant: in
the example in the video, we see that the Google Docs drawing app has a 'normal' mode where
clicking over a menu selects it, but when the polyline tool is selected, clicking adds another line
segment - here a 'normal' mode and a polyline mode.
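
The sketch below is a minimal TypeScript illustration of this idea (the class, tool names, and
messages are assumptions made for the example, not part of any real editor): the same click is
routed through the currently selected mode, so identical input produces different results, and
announcing the mode change gives the kind of feedback that helps prevent mode errors.

type Mode = "select" | "line" | "rectangle" | "polyline";

interface Point { x: number; y: number; }

class DrawingCanvas {
  private mode: Mode = "select";

  setMode(mode: Mode): void {
    this.mode = mode;
    // Making the current mode visible is the feedback that reduces mode errors.
    console.log(`Current mode: ${mode}`);
  }

  handleClick(p: Point): void {
    // The same user action is interpreted differently in each mode.
    switch (this.mode) {
      case "select":    console.log(`Select the object at (${p.x}, ${p.y})`); break;
      case "line":      console.log(`Start a line at (${p.x}, ${p.y})`); break;
      case "rectangle": console.log(`Start a rectangle at (${p.x}, ${p.y})`); break;
      case "polyline":  console.log(`Add a polyline segment at (${p.x}, ${p.y})`); break;
    }
  }
}

const canvas = new DrawingCanvas();
canvas.handleClick({ x: 10, y: 20 }); // interpreted as a selection
canvas.setMode("polyline");
canvas.handleClick({ x: 10, y: 20 }); // same input, different meaning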

All aspects of human-computer interaction, from the high-level concerns of organizational
context and system requirements to the conceptual, semantic, syntactic, and lexical levels of user
interface design, are ultimately funneled through physical input and output actions and devices.
The fundamental task in computer input is to move information from the brain of the user to the
computer. Progress in this discipline attempts to increase the useful bandwidth across that
interface by seeking faster, more natural, and more convenient means for a user to transmit
information to a computer. This section mentions some of the technical background for this area,
surveys the range of input devices currently in use and emerging, and considers future trends in
input.

Inputting Mode
A designer looks at the interaction tasks necessary for a particular application.
Interaction tasks are low-level primitive inputs required from the user, such as entering a text
string or choosing a command. For each such task, the designer chooses an appropriate
interaction device and interaction technique. An interaction technique is a way of using a
physical device to perform an interaction task. There may be several different ways of using the
same device to perform the same task. For example, one could use a mouse to select a command
by using a pop-up menu, a fixed menu (or palette), multiple clicking, circling the desired
command, or even writing the name of the command with the mouse.
User performance with many types of manual input depends on the speed with which the user
can move his or her hand to a target. Fitts' Law provides a way to predict this, and is a key
foundation in input design. It predicts the time required to move based on the distance to be
moved and the size of the destination target. The time is proportional to the logarithm of the
distance divided by the target width. This leads to a tradeoff between distance and target width: it
takes as much additional time to reach a target that is twice as far away as it does to reach one
that is half as large.
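
As a worked illustration, the sketch below expresses Fitts' Law in TypeScript using the common
Shannon formulation MT = a + b * log2(D/W + 1). The coefficients a and b are device-dependent
constants that must be measured empirically; the values used here are purely illustrative
assumptions.

// Predicted movement time (in seconds) to reach a target of width `width`
// at distance `distance`, for illustrative coefficients a and b.
function fittsMovementTime(distance: number, width: number,
                           a = 0.1, b = 0.15): number {
  return a + b * Math.log2(distance / width + 1);
}

console.log(fittsMovementTime(200, 20).toFixed(2)); // ~0.62 s
console.log(fittsMovementTime(400, 20).toFixed(2)); // doubling the distance adds only ~0.14 s
console.log(fittsMovementTime(200, 10).toFixed(2)); // halving the width has a similar effect
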
Another way of characterizing many input devices is by their control-display ratio. This is the
ratio between the movement of the input device and the corresponding movement of the object it
controls. For example, if a mouse (the control) must be moved one inch on the desk in order to
move a cursor two inches on the screen (the display), the device has a 1:2 control-display ratio.
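
A one-line sketch of the control-display ratio calculation follows (the function name is an
assumption): the on-screen cursor movement is the physical device movement scaled by the
display side of the ratio over the control side.

// For a control:display ratio of c:d, moving the device x units moves the cursor x * (d / c) units.
function cursorDisplacement(deviceMove: number, control: number, display: number): number {
  return deviceMove * (display / control);
}

console.log(cursorDisplacement(1, 1, 2)); // 1 inch of mouse travel -> 2 inches of cursor travel (1:2 ratio)
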
Hands — Discrete Input
Keyboards, attached to workstations, terminals, or portable computers, are among the principal
input devices in use today. Most use a typewriter-like "QWERTY" keyboard layout, typically
augmented with additional keys for moving the cursor, entering numbers, and special functions.
There are other layouts and also chord keyboards, where a single hand presses combinations of
up to five keys to represent different characters.

Hands — Continuous Input


A much wider variety of devices is in use for continuous input from the hands. A number of
studies and taxonomies attempt to organize this range of possibilities; most devices used for
manual pointing or locating can be categorized in these ways (a small sketch expressing this
taxonomy as a data type follows below):
• Type of Motion: Linear vs. Rotary. For example, a mouse measures linear motion (in two
dimensions); a knob, rotary.
• Absolute or Relative Measurement. Mouse measures relative motion; Polhemus magnetic
tracker, absolute.
• Physical Property Sensed: Position (or Angle) or Force (Torque). Mouse measures position;
isometric joystick, force.
• Number of Dimensions: One, Two, or Three Linear and/or One, Two, or Three Angular. Mouse
measures two linear dimensions; knob measures one angular dimension; and Polhemus measures
three linear dimensions and three angular.
• Direct vs. Indirect Control. Mouse is indirect (move it on the table to point to a spot on the
screen); touch screen is direct (touch the desired spot on the screen directly).
• Position vs. Rate Control. Moving a mouse changes the position of the cursor; moving a rate-
control joystick changes the speed with which the cursor moves.
• Integral vs. Separable Dimensions. Mouse allows easy, coordinated movement across
two dimensions simultaneously (integral); a pair of knobs (as in an Etch-a-Sketch toy) does not
(separable).
Devices within this taxonomy include one-dimensional valuators (e.g., knob or slide pot), 2-D
locators (mouse, joystick, trackball, data tablet, touch screen), and 3-D locators (Polhemus and
Ascension magnetic trackers, Logitech ultrasonic tracker, Spaceball). Glove input devices
report the configuration of the fingers of the user's hand, allowing gestures to be used as input.
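
As promised above, here is a small TypeScript sketch (the field names are assumptions) that
expresses the taxonomy as a data type, so that different devices can be described and compared
along the same dimensions; the mouse entry mirrors the classification given in the list.

interface PointingDevice {
  name: string;
  motion: "linear" | "rotary";
  measurement: "absolute" | "relative";
  propertySensed: "position" | "force";
  linearDimensions: 0 | 1 | 2 | 3;
  angularDimensions: 0 | 1 | 2 | 3;
  control: "direct" | "indirect";
  controlMapping: "position" | "rate";
  dimensionIntegration: "integral" | "separable";
}

const mouse: PointingDevice = {
  name: "mouse",
  motion: "linear",
  measurement: "relative",
  propertySensed: "position",
  linearDimensions: 2,
  angularDimensions: 0,
  control: "indirect",
  controlMapping: "position",
  dimensionIntegration: "integral",
};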

Other Body Movements


Foot position, head position (with a 3-D tracker), and even the direction of gaze of the eyes [1, 5]
are also usable as computer inputs.

Voice
Another type of input comes from the user's speech. Carrying on a full conversation with a
computer as one might do with another person is well beyond the state of the art today—and,
even if possible, may be a naive goal. Nevertheless, speech can be used as input with
unrecognized speech, discrete word recognition, or continuous speech recognition.
Even if the computer could recognize all the user's words in continuous speech, the problem of
understanding natural language is a significant and unsolved one. It can be avoided by using an
artificial language of special commands or even a fairly restricted subset of natural language.

Virtual Reality Inputs


Virtual reality systems rely on combinations of the 3-D devices discussed above, typically a
magnetic tracker to sense head position and orientation to determine the position of the virtual
camera for scene rendering plus a glove or other 3-D hand input device to allow the user to reach
into the displayed environment and interact with it.

Multimodal HCI systems


Multimodal interaction provides the user with multiple modes of interacting with a system. A
multimodal interface provides several distinct tools for input and output of data. For example, a
multimodal question answering system employs multiple modalities (such as text and photo) at
both question (input) and answers (output) level.
Multimodal interfaces process two or more combined user input modes, such as speech, pen, touch,
manual gestures, and gaze, in a coordinated manner with multimedia system output.

They are a new class of emerging systems that aim to recognize naturally occurring forms of
human language and behavior, with the incorporation of one or more recognition-based
technologies (e.g., speech, pen, vision). Multimodal interfaces represent a paradigm shift away
from conventional graphical user interfaces. They are being developed largely because they offer
a relatively expressive, transparent, efficient, robust, and highly mobile form of human-computer
interaction. They represent users' preferred interaction style, and they support users' ability to
flexibly combine modalities or to switch from one input mode to another that may be better suited
to a particular task or setting.
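
The following TypeScript sketch illustrates one simple way such coordination can work: late
fusion of two input modes, where a recognized spoken command is paired with a pointing gesture
that arrives within a short time window. All of the type names, fields, and the time window are
assumptions for illustration; real multimodal systems use far more sophisticated fusion strategies.

interface SpeechInput { kind: "speech"; text: string; time: number; }
interface PointInput  { kind: "point"; x: number; y: number; time: number; }
type ModalInput = SpeechInput | PointInput;

// Pair each spoken command with a pointing gesture that occurred close to it in time.
function fuse(inputs: ModalInput[], windowMs = 1500): string[] {
  const speech = inputs.filter(i => i.kind === "speech") as SpeechInput[];
  const points = inputs.filter(i => i.kind === "point") as PointInput[];
  return speech.map(s => {
    const p = points.find(pt => Math.abs(pt.time - s.time) <= windowMs);
    return p ? `${s.text} @ (${p.x}, ${p.y})` : s.text;
  });
}

console.log(fuse([
  { kind: "speech", text: "delete this", time: 1000 },
  { kind: "point", x: 120, y: 80, time: 1300 },
]));
// -> [ "delete this @ (120, 80)" ]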

Modes of response/feedback for inputting information


All systems require feedback to monitor and change behavior. Feedback usually compares current
behavior with predetermined goals and gives back information describing the gap between actual
and intended performance.

Because humans themselves are complex systems, they require feedback from others to meet
psychological and cognitive processing needs. Feedback also
increases human confidence. How much feedback is required is an individual characteristic.

When users interface with machines, they still need feedback about how their work is progressing.
As designers of user interfaces, systems analysts need to be aware of the human need for
feedback and build it into the system. In addition to text messages, icons can often be used. For
example, displaying an hourglass while the system is processing encourages the user to wait a while
rather than repeatedly hitting keys to get a response.

Feedback to the user from the system is necessary in seven distinct situations. Feedback that is
ill-timed or too plentiful is not helpful, because humans possess a limited capacity to process
information. Web sites should display a status message or some other way of notifying the user
that the site is responding and that input is either correct or in need of further information.

Types of Feedback
Acknowledging acceptance of input
The first situation in which users need feedback is to learn that the computer has accepted the
input. For example, when a user enters a name on a line, the computer provides feedback to the
user by advancing the cursor one character at a time when the letters are entered correctly. A Web
example would be a Web page displaying a message that
"Your payment has been processed. Your confirmation number is 1234567. Thank you for using
our services."

Recognizing that input is in the correct form


Users need feedback to tell them that the input is in the correct form. For example, a user inputs
a command, and the feedback states "READY" as the program progresses to a new point. A poor
example of feedback that tells the user that input is in the correct form is the message "INPUT
OK," because that message takes extra space, is cryptic, and does nothing to encourage the input
of more data. When placing an order on the Web or making a payment, a confirmation page
often displays, requesting that the user review the information and click a button or image to
confirm the order or payment.

Notifying that input is not in the correct form


Feedback is necessary to warn users that input is not in the correct form. When data are incorrect,
one way to inform the user is to generate a window that briefly describes the problem with the
input and explains how the user can correct it, as shown in the figure below.

Feedback informs the user that input was not in the correct form and lists options.

Notice that the message concerning an error in inputting the subscription length is polite and
concise but not cryptic, so that even inexperienced users will be able to understand it. The
subscription length for the online newsletter is entered incorrectly, but the feedback does not dwell
on the user's mistake. Rather, it offers options (13, 26, or 52 weeks) so that the error can be
corrected easily. On a GUI screen, feedback is often in the form of a message box with an OK
button on it.

Web messages have a variety of formats. One method is to return a new page with the message on
the side of the field containing the error. The new Web page may have a link for additional help.
This method works for all Web sites, and the error detection and formatting of the new page are
controlled by the server. Another method uses JavaScript to detect the error and display a message
box on the current screen with details about the specific error. An advantage of this method is that
the Web page does not have to be sent to the server, and the page is more responsive.

Disadvantages are that, if JavaScript is turned off, the error will not be detected, and only one error
is displayed at a time. There must also be a way of detecting the error on the server. A second
disadvantage is that JavaScript may not detect errors that involve reading database tables, such as
verifying a credit card number. This may be offset by using Ajax, which can send the number to
the server and return an error to the Web page. Remember, however, that some users
intentionally turn off their JavaScript capability, so analysts need to follow a variety of tactics
when communicating errors.

Web pages may also use JavaScript to detect multiple errors and display text messages on the page.
Caution must be used so that the error messages are bold enough for the user to notice. A small
red line of text may go unnoticed. A message box or audible beeps may be used to alert the users
that one or more errors have occurred.

The analyst must decide whether to detect and report errors when a Submit button or link is clicked,
called batch validation, or detect errors one at a time, such as when a user enters a month of 14 and
leaves the field. The second method is a riskier approach since poor coding may put the browser
into a loop, and the user will have to shut down the browser.
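
The sketch below contrasts the two approaches in TypeScript (the field names, the 13/26/52-week
rule, and the messages are assumptions based on the subscription example above): batch
validation collects every error when Submit is clicked, while per-field validation checks a single
field as soon as the user leaves it.

interface SubscriptionForm { name: string; weeks: number; }

// Batch validation: run when the Submit button is clicked and report all errors together.
function validateForm(form: SubscriptionForm): string[] {
  const errors: string[] = [];
  if (form.name.trim() === "") {
    errors.push("Please enter your name.");
  }
  if (![13, 26, 52].includes(form.weeks)) {
    errors.push("Subscription length must be 13, 26 or 52 weeks.");
  }
  return errors;
}

// Per-field validation: run as soon as the field loses focus.
function validateWeeksField(weeks: number): string | null {
  return [13, 26, 52].includes(weeks)
    ? null
    : "Subscription length must be 13, 26 or 52 weeks.";
}

console.log(validateForm({ name: "", weeks: 14 }));
// -> [ "Please enter your name.", "Subscription length must be 13, 26 or 52 weeks." ]
console.log(validateWeeksField(26)); // -> null (no error)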

So far, we have discussed visual feedback in text or iconic form, but many systems have audio
feedback capabilities as well. When a user inputs data in the incorrect form, the system might beep
instead of providing a window. But audio feedback alone is not descriptive, so it is not as helpful
to users as onscreen instructions. Use audio feedback sparingly, perhaps to denote urgent
situations. The same advice also applies to the design of Web sites, which may be viewed in an
open office, where sounds carry and a coworker's desktop speakers are within earshot of several
other people.

Explaining a delay in processing


One of the most important kinds of feedback informs the user that there will be a delay in
processing his or her request. Delays longer than 10 seconds or so require feedback so that the user
knows the system is still working.

The figure below shows a display providing feedback in a window for a user who has just requested
a printout of the electronic newsletter's subscription list. The display shows a sentence reassuring
the user that the request is being processed, as well as a sign in the upper right corner instructing
the user to "WAIT" until the current command has been executed. The display also provides a
way to stop the operation if necessary.

Feedback tells the user that there will be a delay during printing.

Sometimes during delays, while new software is being installed, a short tutorial on the new
application is run, which is meant to serve as a distraction rather than feedback about the
installation. Often, a list of files that are being copied and a status bar are used to reassure the user
that the system is functioning properly. Web browsers usually display the Web pages that are
being loaded and the time remaining.

It is critical to include feedback when using Ajax to update Web forms. Because a new Web page
does not load, the user may not be aware that data are being retrieved from the server that will
change the current Web page. When a drop-down list is changing, a message, such as
"Please wait while the list is being populated," informs the user that the Web page is changing.
Timing feedback of this sort is critical. Too slow a system response could cause the user to input
commands that impede or disrupt processing.
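
A minimal TypeScript sketch of this advice follows (the element id, URL, and messages are
assumptions): a status message appears before the Ajax request is sent, and is replaced when the
response arrives or the request fails, so the user always knows the page is working.

async function reloadCityList(countryCode: string): Promise<void> {
  const status = document.getElementById("status")!;
  status.textContent = "Please wait while the list is being populated...";
  try {
    const response = await fetch(`/cities?country=${countryCode}`);
    const cities: string[] = await response.json();
    status.textContent = `Loaded ${cities.length} cities.`;           // request completed
  } catch {
    status.textContent = "Unable to load the list. Please try again."; // request not completed
  }
}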

Acknowledging that a request is completed


Users need to know when their request has been completed and new requests may be input. Often
a specific feedback message is displayed when an action has been completed by a user, such as
"Employee record has been added," "Customer record has been changed," or "Item number
12345 has been deleted."

Notifying that a request was not completed


Feedback is also needed to let the user know that the computer is unable to complete a request. If
the display reads "Unable to process request. Check request again," the user can then go back
and check to see if the request has been input correctly rather than continue to enter commands
that cannot be executed.

Offering the user more detailed feedback


Users need to be reassured that more detailed feedback is available, and they should be shown how
they can get it. Commands such as Assist, Instruct, Explain, and More may be employed. Or the
user may type a question mark or click on an appropriate icon to get more feedback. Using the
command Help as a way to obtain further information has been questioned, because users may feel
helpless or caught in a trap from which they must escape. This convention is in use, and its
familiarity to users may overcome this concern.

When designing Web interfaces, hyperlinks can be embedded to allow the user to jump to the
relevant help screens or to view more information. Hyperlinks are typically highlighted with
underlining, italics, or a different color. Hyperlinks can be graphics, text, or icons.

Including Feedback in Design


If used correctly, feedback can be a powerful reinforcer of users' learning processes, serve to
improve user performance with the system, increase motivation to produce, and improve the fit
among the user, the task, and the technology.

A variety of help options


Feedback on personal computers has developed over the years. "Help" originally started as a
response to the user who pressed a function key, such as F1; the GUI alternative is the pull-down
help menu. This approach was cumbersome, because end users had to navigate through a table of
contents or search via an index. Next came context-sensitive help. Users could simply click on the
right mouse button, and topics or explanations about the current screen or area of the screen would
be revealed. A third type of help on personal computers occurs when the user places the arrow
over an icon and leaves it there for a couple of seconds. At this point, some programs pop up a
balloon similar to those found in comic strips. This balloon explains a little bit about the icon
function.
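
A small TypeScript sketch of the third style of help described above follows (the function name,
delay, and wording are assumptions): a balloon appears after the pointer has rested on an icon for
a couple of seconds and disappears when the pointer leaves.

function attachBalloonHelp(icon: HTMLElement, text: string, delayMs = 2000): void {
  let timer: number | undefined;
  const balloon = document.createElement("div");
  balloon.className = "balloon-help";
  balloon.textContent = text;

  icon.addEventListener("mouseenter", () => {
    // Show the balloon only after the pointer has rested on the icon for a while.
    timer = window.setTimeout(() => icon.appendChild(balloon), delayMs);
  });
  icon.addEventListener("mouseleave", () => {
    window.clearTimeout(timer);
    balloon.remove();
  });
}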

The fourth type of help is a wizard, which asks the user a series of questions and then takes action
accordingly. Wizards help users through complicated or unfamiliar processes such as setting up
network connections or booking an airline seat online. Most users are familiar with wizards
through creating a PowerPoint presentation or choosing a style for a word processing memo.
Besides building help into an application, software manufacturers offer online help (either
automated or personalized with live chat) or help lines (most customer service telephone lines are
not toll free, however). Some COTS software manufacturers offer a fax-back system. A user can
request a catalog of various help documents to be sent by fax, and then can order from the catalog
by entering the item number with a touch-tone phone.

Finally, users can seek and find support from other users through software forums. This type of
support is, of course, unofficial, and the information thus obtained may be true, partially true, or
misleading. The principles regarding the use of software forums are the same for those mentioned
later on in Chapter ―Quality Assurance and Implementation‖, where folklore and
recommendation systems are discussed. Approach any software fixes posted on bulletin boards,
blogs, discussion groups, or chat rooms with wariness and skepticism.

Besides informal help on software, vendor Web sites are extremely useful for updating drivers,
viewers, and the software itself. Most online computer publications have some sort of "driver
watch" or "bug report" that monitors the bulletin boards and Web sites for useful programs that
can be downloaded. Programs will forage vendor Web sites for the latest updates, inform the
user of them, assist with downloads, and actually upgrade user applications.

Input flow
Input flow: the flow of information that begins in the task environment, when the user has some
task that requires using their computer. Output flow: the flow of information that originates in the
machine environment.

An input stream is a flowing sequence of data that can be directed to specific functions. The
channel through which the data stream flows is known as a pipeline, and redirecting it is
known as piping.
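
As a concrete (assumed) illustration of streams and piping, the following TypeScript sketch uses
Node.js streams: data flows from a source file, through a compression stage, and on to a
destination file by piping each stream into the next. The file names are placeholders.

import { createReadStream, createWriteStream } from "fs";
import { createGzip } from "zlib";

createReadStream("input.log")               // source of the data stream
  .pipe(createGzip())                       // transformation stage in the pipeline
  .pipe(createWriteStream("input.log.gz")); // destination of the stream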

Mode confusion
In human-machine systems, a user operates a machine using information on the machine's behavior
displayed at a user interface. If the abstraction of that information is insufficient for the user to
anticipate the machine's state, mode confusions occur in the human-machine system.

A mode confusion occurs when the observed behaviour of a technical system is out of sync with
the behaviour predicted by the user's mental model of it. The notion is often described only
informally in the literature, which has motivated attempts to propose precise definitions of
"mode" and "mode confusion" for safety-critical systems.

Haptic interaction
A haptics interface is a system that allows a human to interact with a computer through bodily
sensations and movements. Haptics refers to a type of human-computer interaction technology
that encompasses tactile feedback or other bodily sensations to perform actions or processes on a
computing device.

Communication is basically the act of transferring information from one place to another. Feedback
is a system where the reaction or response of the receiver arrives at the sender after he or she has
interpreted the message. Feedback is essential to make two-way communication
effective; in fact, without feedback communication remains incomplete. At times, feedback is
verbal, whether written or oral; in other cases it is nonverbal. Feedback is mainly
a response from your audience; it allows you to evaluate the effectiveness of your message.
Research shows that the majority of the messages we send are nonverbal, and the
ability to understand and use nonverbal communication is a powerful tool that helps people
connect with each other.
Just as nonverbal cues carry much of the weight in communication, the sense of touch, known
as haptics, plays an important role in this new phase of technology. Haptics is the science of
applying touch sensation and control to interaction with computer applications by using special
input/output devices. It gives users a slight jolt of energy at the point of touch, providing instant
sensory feedback while reducing the audio, visual or audio-visual demand. Haptic technology is
an evolutionary step toward interacting with objects as an extension of our mind and allows for more
socially appropriate and subtle interaction.
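
As a simple, hedged example of tactile feedback in practice, the TypeScript sketch below uses the
web Vibration API, which many mobile browsers support: a short pulse confirms a touch on a
control, reducing the need for audio or visual confirmation. The function name and pulse length
are assumptions.

function hapticConfirm(button: HTMLElement): void {
  button.addEventListener("click", () => {
    if ("vibrate" in navigator) {
      navigator.vibrate(50); // a 50 ms pulse gives instant sensory feedback at the point of touch
    }
  });
}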

The Benefits of Haptic Feedback in Mobile Phone Camera


The sense of touch is a fundamental part of social interaction as even a short touch from another
person can elicit emotional experiences.

Previous studies on haptic communication indicate that the benefits of interpersonal touch exist
even when touch is artificially mediated between people that are physically apart.

In one study, an evaluation of three input gestures (i.e., moving, squeezing, and stroking)
was conducted to identify preferred methods for creating haptic messages using a hand-held
device. Furthermore, two output methods (i.e., one or four haptic actuators) were investigated in
order to determine whether representing spatial properties of input gestures haptically provides
additional benefit for communication. Participants created haptic messages in four example
communication scenarios. The results of subjective ratings, post-experimental interviews, and
observations showed that squeezing and stroking were the preferred ways to interact with the
device. Squeezing was an unobtrusive and quick way to create haptic content. Stroking, on the
other hand, enabled crafting of more detailed haptic messages. Spatial haptic output was
appreciated especially when using the stroking method. These findings can help in designing haptic
communication methods for hand-held devices.

2. Human thinking processes for computer-based tasks


Human thinking processes
Thought (also called thinking) is the mental process in which beings form psychological
associations and models of the world. Thinking is manipulating information, as when we form
concepts, engage in problem solving, reason and make decisions. A thought may be an idea,
an image, a sound or even an emotional feeling.

Computer-based tasks
Task is a term used to describe a software program or section of a program that is running in a
multitasking environment. Tasks are used to help the user and the computer distinguish between
each of the programs running on the computer.

Different types of thinking


a) Abstract
Abstract thinkers are able to relate seemingly random things with each other. This is because
they can see the bigger picture. They make the connections that others find difficult to see.

They have the ability to look beyond what is obvious and search for hidden meanings. They can
read between the lines and enjoy solving cryptic puzzles. They don't like routine and get bored
easily.

b) Analytical
Analytical thinkers like to separate a whole into its basic parts in order to examine these parts
and their relationships. They are great problem-solvers and have a structured and methodical way
of approaching tasks.

This type of thinker will seek answers and use logic rather than emotional thinking in life.
However, they have a tendency to overthink things and can ruminate on the same subject for
months.

c) Creative
Creative thinkers think outside the box and will come up with ingenious solutions to solve their
dilemmas in life. They like to break away from the traditions and norms of society when it comes
to new ideas and ways of thinking.

They can sometimes be ridiculed, as society prefers to keep the status quo. Creative thinkers can
also attract jealousy if they manage to follow their dreams and work in a creative field.

d) Concrete thinking
Concrete thinking focuses on the physical world, rather than the abstract one. It is all about
thinking of objects or ideas as specific items, rather than as a theoretical representation of a more
general idea.

Concrete thinkers like hard facts, figures and statistics. For example, you will not get any
philosophers who think in concrete terms. Children think in concrete terms as it is a very basic and
literal form of understanding.

e) Critical thinking
Critical thinking takes analytical thinking up a level. Critical thinkers exercise careful
evaluation or judgment in order to determine the authenticity, accuracy, worth, validity, or value
of something. And rather than strictly breaking down the information, critical thinking explores
other elements that could have an influence on conclusions.

f) Convergent thinking
Convergent thinking is a process of combining a finite number of perspectives or ideas to find
a single solution. Convergent thinkers will target these possibilities, or converge them inwards, to
come up with a solution.

One example is a multiple choice question in an exam. You have four possible answers but only
one is right. In order to solve the problem, you would use convergent thinking.

g) Divergent thinking
By contrast, divergent thinking is the opposite of convergent thinking. It is a way of exploring
an infinite number of solutions to find one that is effective. So, instead of starting off with a set
number of possibilities and converging on an answer, it goes as far and wide as necessary and
moves outwards in search of the solution.

How can you take advantage of the different types of thinking?

Convergent thinking
Includes analytical and concrete types of thinking
If you are a convergent thinker, you are more likely to be an analytical or concrete thinker. You
are generally able to logically process thoughts so you can use your ability to remain cool and
calm in times of turmoil.

You are also more likely to be a natural problem solver. Think of any famous super sleuth, from
Sherlock Holmes to Inspector Frost, and you will see convergent thinking at play. By gathering
various bits of information, convergent thinkers are able to put the pieces of a puzzle together and
come up with a logical answer to the question of "Who has done it?"

Concrete thinkers will look at what is visible and reliable. Concrete thinking will only consider the
literal meaning while abstract thinking goes deeper than the facts to consider multiple or hidden
meanings. However, if you are a concrete thinker it means you are more likely to consider literal
meanings and you're unlikely to get distracted by "what ifs" or other minor details.

Divergent thinking
Includes abstract and creative types of thinking
Divergent thinking is all about looking at a topic or problem from many different angles. Instead
of focusing inwards it branches outwards. It is an imaginative way of looking at the world. As
such, it uses abstract thinking to come up with new ideas and unique solutions to problems.

Abstract thinking goes beyond all the visible and present things to find hidden meanings and
underlying purpose. For example, a concrete thinker will look at a flag and only see specific
colours, marking, or symbols that appear on the cloth. An abstract thinker would see the flag as a
symbol of a country or organization. They may also see it as a symbol of liberty and freedom.

Divergent thinkers like to go off on tangents. They'll take the meandering route, rather than the
tried and trusted straight-and-narrow approach. If you are a divergent thinker, you are more likely
to be a good storyteller or creative writer. You are good at setting scenes and are a natural entertainer.
You like to be creative in your approach to problem-solving.

Take a look at yourself!


Whenever you're considering your next course of action, why not take a minute to consider how
you are forming your opinions or conclusions. Ask yourself if you're sure you've considered all
alternatives available to you and pay attention to whether you're assuming you have limited
choices. You might just find your mind takes you on an interesting journey!

Human-Computer Model
In cognitive models, there is a strong mapping between human and computer processing.
Computer input corresponds to human perception; processing and memory in
computers are aligned with human thought; the machine's output corresponds to human actions and
behaviors. It is a useful model for simplifying the complexities of humans by comparing our
behavior to that of machines.

The input and output terminals in the human-computer model are understandable, but the
processing chain, the steps between input and output, is vastly different.

It is hard to replicate human processing with hardware
Information processing in humans is vastly more complicated than in any computer system,
as we have recently discovered. Researchers have been able to simulate a single second of
human brain activity with 82,944 processors in 40 minutes, consuming 1 Petabyte of system
memory. It is also worth noting that this process only created an artificial neural network of 1.73
billion nerve cells compared to our brain's 80-100 billion nerve cell network.

Emotions affect our cognitive processing


Human emotions affect every aspect of our cognitive processing. For example, emotional recall
is the basis of method acting and is used to recall a particular feeling from a person's physical
surroundings, and memory for odors is associated with highly emotional experiences. The heavy
influence of emotions on human behavior is one reason why there is immense value in
qualitative research. The complete human-computer experience cannot be strictly defined by
response times; the level of enjoyment and perceived effectiveness must also be measured in
order to create better user experiences.

Research on progress bars has determined that backwards-moving animations seem faster to
users than forward-moving animations. Humans do not perceive time the same way
a computer does, as illustrated by this research. In the study, users perceived the
loading time to be 11% longer when the progress bar was forward-moving, despite the fact that the
loading times were the same in each trial. Users experience events filtered through perceptions
in the mind; it is not a binary experience as in a computing machine.

Humans have bodies


Like emotions, we often overlook the effect our physical self has on information processing. We
use our physical surroundings as cues for memory and accelerated functioning. Unlike
computers, we do not store every pixel of visual information about our surroundings. Instead, we
rely on our additional senses to construct visual representations of our surroundings. Intuitively,
we have a host of sensory devices and inputs that cannot be accounted for in a machine. Touch,
smell, sound, as well as visuals, all have a place in the memory recollection process.

Cognitive Psychology
Cognitive psychology is the science of how we think. It‘s concerned with our inner mental
processes such as attention, perception, memory, action planning, and language. Each of these
components are pivotal in forming who we are and how we behave.
Cognitive psychology is the branch of psychology that focuses on the way people process
information. It looks at how we process information we receive and how the treatment of this
information leads to our responses. In other words, cognitive psychology is interested in what is
happening within our minds that links stimulus (input) and response (output).

Cognitive psychology & Information Processing
At the very heart of cognitive psychology is the idea of information processing.

Cognitive psychology sees the individual as a processor of information, in much the same way
that a computer takes in information and follows a program to produce an output.

Basic Assumptions
The information processing approach is based on a number of assumptions, including:
(1) information made available by the environment is processed by a series of processing
systems (e.g. attention, perception, short-term memory);
(2) these processing systems transform or alter the information in systematic ways;
(3) the aim of research is to specify the processes and structures that underlie cognitive
performance;
(4) information processing in humans resembles that in computers.

Computer - Mind Analogy


The development of the computer in the 1950s and 1960s had an important influence on
psychology and was, in part, responsible for the cognitive approach becoming the dominant
approach in modern psychology (taking over from behaviorism).

The computer gave cognitive psychologists a metaphor, or analogy, to which they could compare
human mental processing. The use of the computer as a tool for thinking how the human mind
handles information is known as the computer analogy.

Essentially, a computer codes (i.e., changes) information, stores information, uses information,
and produces an output (retrieves info). The idea of information processing was adopted by
cognitive psychologists as a model of how human thought works.

For example, the eye receives visual information and codes information into electric neural
activity which is fed back to the brain where it is ―stored‖ and ―coded‖. This information can
be used by other parts of the brain relating to mental activities such as memory, perception and
attention. The output (i.e. behavior) might be, for example, to read what you can see on a printed
page.

Hence the information processing approach characterizes thinking as the environment providing
input of data, which is then transformed by our senses. The information can be stored, retrieved
and transformed using ―mental programs‖, with the results being behavioral responses.

Cognitive psychology has influenced and integrated with many other approaches and areas of
study to produce, for example, social learning theory, cognitive neuropsychology and artificial
intelligence (AI).

Information Processing & Attention
When we are selectively attending to one activity, we tend to ignore other stimulation, although
our attention can be distracted by something else, like the telephone ringing or someone using
our name.

Psychologists are interested in what makes us attend to one thing rather than another (selective
attention); why we sometimes switch our attention to something that was previously unattended
(e.g. Cocktail Party Syndrome), and how many things we can attend to at the same time
(attentional capacity).

One way of conceptualizing attention is to think of humans as information processors who can
only process a limited amount of information at a time without becoming overloaded.

Broadbent and others in the 1950's adopted a model of the brain as a limited capacity
information processing system, through which external input is transmitted.

STIMULUS → Input processes → Storage processes → Output processes → RESPONSE

Information processing models consist of a series of stages, or boxes, which represent stages of
processing. Arrows indicate the flow of information from one stage to the next.

• Input processes are concerned with the analysis of the stimuli.

• Storage processes cover everything that happens to stimuli internally in the brain and can
include coding and manipulation of the stimuli.
• Output processes are responsible for preparing an appropriate response to a stimulus (a short
code sketch of this three-stage flow is given below).
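As a rough illustration only (our own sketch, not part of Broadbent‘s model), the flow through these three stages can be written as a small Python pipeline; the stage functions and the example stimulus are invented placeholders.

# A minimal sketch of a staged information-processing model (illustrative only).
# Each stage is a function; the arrows in the model correspond to passing the
# result of one stage on to the next.

def input_processes(stimulus):
    # Analyse the raw stimulus (e.g. pick out the features we attend to).
    return {"features": stimulus.split()}

def storage_processes(analysed):
    # Code and manipulate the stimulus internally (e.g. attach a simple coding).
    analysed["coded"] = [w.upper() for w in analysed["features"]]
    return analysed

def output_processes(stored):
    # Prepare an appropriate response to the stimulus.
    return "response to: " + " ".join(stored["coded"])

stimulus = "red traffic light"   # hypothetical external input
response = output_processes(storage_processes(input_processes(stimulus)))
print(response)                  # -> response to: RED TRAFFIC LIGHT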

Levels of Processing
The levels of processing framework focuses on the depth of processing involved in memory, and
predicts that the deeper information is processed, the longer a memory trace will last.

Processing depth: ―the meaningfulness extracted from the stimulus rather than in terms of the
number of analyses performed upon it.‖

Unlike the multi-store model it is a non-structured approach. The basic idea is that memory is
really just what happens as a result of processing information.

Memory is just a by-product of the depth of processing of information, and there is no clear
distinction between short term and long term memory.

Therefore, instead of concentrating on the stores/structures involved (i.e. short term memory &
long term memory), this theory concentrates on the processes involved in memory.

We can process information in 3 ways:

Shallow Processing
- This takes two forms
1. Structural processing (appearance) which is when we encode only the physical qualities of
something. E.g. the typeface of a word or how the letters look.

2. Phonemic processing – which is when we encode its sound.

Shallow processing only involves maintenance rehearsal (repetition to help us hold something
in the STM) and leads to fairly short-term retention of information.

This is the only type of rehearsal to take place within the multi-store model.

Deep Processing
- This involves
3. Semantic processing, which happens when we encode the meaning of a word and relate it to
similar words with similar meaning.

Deep processing involves elaboration rehearsal which involves a more meaningful analysis
(e.g. images, thinking, associations etc.) of information and leads to better recall.

For example, giving words a meaning or linking them with previous knowledge.

Summary
Levels of processing: The idea that the way information is encoded affects how well it is
remembered. The deeper the level of processing, the easier the information is to recall.

Influence of human thinking processes on HCI
Meaning of Design Thinking
Design thinking is a non-linear, iterative process which seeks to understand users, challenge
assumptions, redefine problems and create innovative solutions to prototype and test. The
method consists of five phases (Empathize, Define, Ideate, Prototype and Test) and is most useful
when you want to tackle problems that are ill-defined or unknown.

Importance of Design Thinking today


Over recent decades, it has become crucial to develop and refine skills which allow us to
understand and act on rapid changes in our environment and behavior. The world has become
increasingly interconnected and complex, and design thinking offers a means to grapple with all
this change in a more human-centric manner.

Design teams use design thinking to tackle ill-defined or unknown problems (otherwise known
as wicked problems) because the process reframes these problems in human-centric ways, and
allows designers to focus on what‘s most important for users. Design thinking offers us a means
to think outside the box and also dig that bit deeper into problem solving. It helps designers carry
out the right kind of research, create prototypes and test out products and services to uncover
new ways to meet users‘ needs.

The design thinking process has become increasingly popular over the last few decades because
it was key to the success of many high-profile, global organizations—companies such as Google,
Apple and Airbnb have wielded it to notable effect, for example. This outside the box thinking is
now taught at leading universities across the world and is encouraged at every level of business.

Design thinking improves the world around us every day because of its ability to generate
ground-breaking solutions in a disruptive and innovative way. Design thinking is more than just
a process, it opens up an entirely new way to think, and offers a collection of hands-on methods
to help you apply this new mindset.

Stages of Design Thinking
Design thinking is described as a five-stage process. It‘s important to note these stages are not
always sequential and designers can often run the stages in parallel, out of order and repeat them
in an iterative fashion.

The various stages of design thinking should be understood as different modes which contribute
to the entire design project, rather than sequential steps. The ultimate goal throughout is to derive
as deep an understanding of the product and its users as possible.

Stage 1: Empathize—Research Your Users' Needs


The first stage of the design thinking process allows you to gain an empathetic understanding of
the problem you‘re trying to solve, typically through user research. Empathy is crucial to a
human-centered design process like design thinking because it allows you to set aside your own
assumptions about the world and gain real insight into users and their needs.

Stage 2: Define—State Your Users' Needs and Problems


In the Define stage, you accumulate the information you created and gathered during the
Empathize stage. You analyze your observations and synthesize them to define the core
problems you and your team have identified so far. You should always seek to define the
problem statement in a human-centered manner as you do this.

Stage 3: Ideate—Challenge Assumptions and Create Ideas


Designers are ready to generate ideas as they reach the third stage of design thinking. The solid
background of knowledge from the first two phases means you can start to ―think outside the
box‖, look for alternative ways to view the problem and identify innovative solutions to the
problem statement you‘ve created.

Stage 4: Prototype—Start to Create Solutions


This is an experimental phase, and the aim is to identify the best possible solution for each of the
problems identified during the first three stages. Design teams will produce a number of
inexpensive, scaled-down versions of the product (or specific features found within the product)
to investigate the problem solutions generated in the previous stage.

Stage 5: Test—Try Your Solutions Out


Designers or evaluators rigorously test the complete product using the best solutions identified in
the Prototype phase. This is the final phase of the model but, in an iterative process such as
design thinking, the results generated are often used to redefine one or more further problems.
Designers can then choose to return to previous stages in the process to make further iterations,
alterations and refinements to rule out alternative solutions.

Computer programmers thinking in terms of solving problems


Computer programming
Computer programming is the process of designing and building an executable computer
program for accomplishing a specific computing task. Programming involves tasks such as:
analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and
the implementation of algorithms in a chosen programming language (commonly referred to as
coding).

Problem Solving

Computer Programmers are problem solvers. In order to solve a problem on a computer you
must:

1. Know how to represent the information (data) describing the problem.


2. Determine the steps to transform the information from one representation into another.

Information Representation
A computer, at heart, is really dumb. It can only really know about a few things... numbers,
characters, booleans, and lists (called arrays) of these items. (See Data Types). Everything else
must be "approximated" by combinations of these data types.

A good programmer will "encode" all the "facts" necessary to represent a problem in variables
(See Variables). Further, there are "good ways" and "bad ways" to encode information. Good
ways allow the computer to easily "compute" new information.

Algorithm
An algorithm (see Algorithm) is a set of specific steps to solve a problem. Think of it this way: if
you were to tell your 3 year old niece to play your favorite song on the piano (assuming the niece
has never played a piano), you would have to tell her where the piano was, and how to sit on the
bench, and how to open the cover, and which keys to press, and which order to press them in,
etc, etc, etc.

The core of what good programmers do is being able to define the steps necessary to accomplish
a goal. Unfortunately, a computer, only knows a very restricted and limited set of possible steps.
For example a computer can add two numbers. But if you want to find the average of two
numbers, this is beyond the basic capabilities of a computer. To find the average, you must:

1. First: Add the two numbers and save this result in a variable
2. Then: Divide this new number by two, and save this result in a variable.
3. Finally: provide this number to the rest of the program (or print it for the user).
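Expressed in Python, the same three steps might look like the short sketch below (the numbers and variable names are ours, chosen only for illustration).

# Finding the average of two numbers, broken into the basic steps a computer knows.
a = 7
b = 12

total = a + b          # Step 1: add the two numbers and save the result in a variable
average = total / 2    # Step 2: divide this new number by two and save the result
print(average)         # Step 3: provide the number to the rest of the program (or the user)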

Problem Solving Strategies


Problem solving is something that we go through on a daily basis. As problems never end, the
need to solve them is also everlasting. From managing your books properly on a shelf to deciding
the next step for your career, the problems can be small or big but they need to be solved on a
daily basis.

Research in cognitive psychology has definitely made our lives easier to some extent. There are
concrete psychological steps involved in problem solving which, if properly followed, can help us
tackle every sort of problem.

One of the important aspects of solving a problem is forming a good strategy. A strategy might
be well thought out, rigorous and a sure winner, but might not be viable given the resources
available at hand.

The core strategies involved in solving any problem are given below.

Problem-Solving Strategies
1) Algorithms
The step-by-step procedure involved in figuring out the correct answer to any problem is called an
algorithm. The step-by-step procedure involved in solving a mathematical problem using a math
formula is a perfect example of a problem-solving algorithm. An algorithm is the strategy that results
in an accurate answer; however, it‘s not always practical. The strategy is highly time-consuming and
involves taking lots of steps.

For instance, attempting to open a door lock by using an algorithm to try out every possible number
combination would take a really long time.
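As a rough sketch of this strategy (our own example, not from the source), a brute-force algorithm for a three-digit lock simply enumerates every combination until the correct one is found; the correct value below is a hypothetical stand-in for physically trying the lock.

from itertools import product

correct = (4, 8, 2)   # hypothetical combination we are trying to discover

def crack_by_algorithm():
    # Systematically try every possible 3-digit combination: guaranteed to
    # succeed eventually, but it may take up to 10**3 attempts.
    for attempt in product(range(10), repeat=3):
        if attempt == correct:
            return attempt

print(crack_by_algorithm())   # -> (4, 8, 2)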

2) Heuristics
Heuristics refers to a mental strategy based on rules of thumb. There is no guarantee that it will
always work out to produce the best solution. However, the rule of thumb strategy does help to
simplify complex problems by narrowing the possible solutions. It makes it easier to reach the
correct solution using other strategies.

Heuristic strategy of problem solving can also be referred to as the mental shortcut. For instance,
you need to reach the other part of the city in a limited amount of time. You‘ll
obviously seek the shortest route and means of transportation. The rule of thumb allows you to
make up your mind about the fastest route depending on your past commutes. You might choose
subway instead of hiring a cab.

3) Trial-and-Error
Trial and error strategy is the approach that deals with trying a number of different solutions and
ruling out the ones that do not work. Approaching this strategy as the first method in an attempt to
solve any problem can be highly time-consuming. So, it‘s best to use this strategy as a follow up
to figure out the best possible solution, after you have narrowed down the possible number of
solutions using other techniques.

For instance, you‘re trying to open a lock. Trying every possible combination directly on the
lock as a trial-and-error method can be highly time-consuming. Instead, if you‘ve
narrowed down the possible combinations to 20, you‘ll have a much easier time solving the
particular problem.
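Continuing the same hypothetical lock example, trial-and-error becomes practical once other strategies have narrowed the options down to a short candidate list.

# Trial-and-error over a narrowed-down candidate list (illustrative only).
candidates = [(1, 2, 3), (4, 8, 2), (9, 9, 1)]   # in practice perhaps ~20 likely combinations
correct = (4, 8, 2)                              # hypothetical lock combination

def crack_by_trial_and_error(candidates):
    for attempt in candidates:
        if attempt == correct:    # try it; rule it out if it does not work
            return attempt
    return None                   # none of the narrowed candidates worked

print(crack_by_trial_and_error(candidates))      # -> (4, 8, 2)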

4) Insight
Insight is something that just occurs suddenly. Researchers suggest that insight can occur if you‘ve
dealt with similar problems in the past. For instance, knowing that you‘ve solved a particular
algebra question in the past will make it much easier for you to solve similar questions at
present. However, it‘s not always necessary that the mental processes be related to past problems.
In fact, most cases of mental processes leading to insight happen outside of consciousness.

Outcome 2: Understand principles of interaction
1. Developing models of interaction for an application
Meaning of Design Principles
Design principles are widely applicable laws, guidelines, biases and design considerations which
designers apply with discretion. Professionals from many disciplines—e.g., behavioral science,
sociology, physics and ergonomics—provided the foundation for design principles via their
accumulated knowledge and experience.

HCI Design principles


The definition for Interaction Design is, ―it is the behavior and structure of the interactive
systems‖. In other words, it is the relationship between the user and the product, and the service
they use.

The interaction design should create a great user experience. It requires experience and
understanding of basic principles of the interaction design in most of the UI disciplines. It‘s
about designing for the entire interconnected system: the device, the interface, the context, the
environment, and the people. Interaction designers strive to create meaningful relationships
between people and the products and services that they use, from computers, to mobile devices,
to appliances, and beyond.

Interaction Design principles are important to keep in mind as we develop complex


applications. There are some key elements of an interaction design that cannot be neglected
while creating an interface for the user.

Ten basic principles of interaction design that are needed to be considered are given below:

1. Match User Experience and Expectations


By matching the sequence of steps, layout of information and terminology used with the
expectations and prior experiences of the user, the friction and discomfort of learning a new
system will be reduced.

Matching your audience‘s prior experiences and expectations is achieved by using common
conventions or UI patterns.

2. Consistency
As well as matching people’s expectations through terminology, layout and interactions, the
way in which they are used should be consistent throughout the process and between related
applications.

By maintaining consistency, users learn more quickly; they can re-apply in one
part of the application their prior experience from another.

An added bonus of keeping elements consistent is that you can then use inconsistency to
indicate to users where things do not work the way they might expect. Breaking consistency is
similar to knowing when to be unconventional as mentioned above.

3. Functional Minimalism
―Everything should be made as simple as possible, but no simpler.‖ Albert Einstein

The range of possible actions should be no more than is absolutely necessary. Providing too
many options can detract from the primary functions and reduce usability by overwhelming the
user with choices. To achieve the zen of ‗functional minimalism‘:

• Avoid unnecessary features and functions
• Break complex tasks into manageable sub-tasks
• Limit functions rather than the user experience.

4. Cognitive Loads
Cognition is the scientific term for the ―process of thought‖. When designing interactions we
need to minimize the amount of ―thinking work‖ required to complete a particular task. Another
way of putting it is that a good assistant uses their own skills to help the master focus on
theirs.

For instance, while designing, we need to understand how much concentration the task
requires to complete it and create a user interface that reduces cognitive load as much as
possible. A good way to reduce the amount of ‗thinking work‘ the user has to do is to focus on
what the computer is good at and build a system that uses the computer‘s skills to the best of its
abilities. Remember that a computer is good at:

• Maths
• Remembering things
• Keeping track of things
• Comparing things
• Spell checking and spotting/correcting errors

Knowing the qualities of the product and its users, one can create a design for a better user experience.

5. Engagement
In User Experience terms engagement measures the extent to which a consumer has a
meaningful experience. An engaging experience is not only more enjoyable, but also easier and
more productive. As with many things engagement is subjective, so the system you‘re designing
must engage with the desired audience; what appeals to a teenager is not necessarily what their
grandparent would also find engaging. Beyond aligning with the appropriate users, control
achievement and creation are key.

The user should feel like they are in control of the experience at all times; they must constantly
feel like they‘re achieving something and also be able to see the results through positive feedback,
or alternatively feel like they‘ve created something.

When people are so engaged in an activity that the rest of the world falls away, they are in a state
of flow. Flow is what we‘re looking to achieve through engaging interactions. We should allow
users to concentrate on their work, not on the user interface. In short, keep out of the way!

6. Control, Trust and Explorability


These three elements are fundamentally important to any system. If users feel in control of the
process they will be more comfortable using the system. If the user is comfortable and in control
they will trust that the system will protect them from making unrecoverable or unrecognized
errors or from feeling stupid. Trust inspires confidence and with confidence the user is free to
explore further.

7. Perceivability
People are aware of the opportunity to interact with interactive media. As interface designers, we
must avoid developing hidden interactions, which decrease the usability, efficiency, and user
experience of interactive media. In other words, people should not have to guess or look for
opportunities to interact.

When developing interactive media, users should have the ability to review an interface and
identify where they can interact. We must remember that not everyone experiences and interacts
with an interface in the same way others do. Make it a habit to provide hints and indicators,
such as visual cues like buttons, icons, textures, etc. Allow the user to see that these

visual cues can actually be clicked or tapped with their fingers. We should always take into
consideration the usability and accessibility of our interactive media and how the user sees and
perceives the objects in the interface.

8. Learnability
Another very important core principle is the ability to easily learn and use an interface after using
it for the first time. Remember that engaging interfaces allow users to easily learn and remember
their interactions.

Learnability makes interaction simpler and more intuitive. Even simple interfaces may require a
certain amount of experience to learn. People tend to interact with an interface in similar ways
they interact with other interfaces. For this reason, we must understand the importance of
design patterns and consistency. This allows the user to avoid having to learn something new
and will give him/her a sense of achievement. They will feel smart and capable of grasping and
utilizing newer interfaces, allowing them to feel confident while navigating through the
interface.

9. Error Prevention, Detection and Recovery.


The best way to reduce the amount of errors a user makes is to anticipate possible mistakes and
prevent them from happening in the first place. If the errors are unavoidable we need to make
them easy to spot and help the user to recover from them quickly and without unnecessary
friction.

Error Prevention
Prevent errors by:

• Disabling functions that aren‘t relevant to the user
• Using appropriate controls to constrain inputs (e.g. radio buttons, dropdowns; a short code
sketch of this follows the list)
• Providing descriptive, clear instructions and considering preemptive help
• As a last resort, provide clear warning messages
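The sketch below illustrates the ―constrain inputs‖ idea in Python: rather than accepting free text, the interface offers only the valid options. The shipping options are invented for illustration and are not from the source.

# Preventing errors by constraining input to a fixed set of valid choices.
SHIPPING_OPTIONS = ["standard", "express", "overnight"]

def choose_shipping(selection):
    # Because the UI only presents SHIPPING_OPTIONS (e.g. as radio buttons),
    # invalid values should never reach this point; the check is a last resort.
    if selection not in SHIPPING_OPTIONS:
        raise ValueError(f"Unknown shipping option: {selection!r}")
    return selection

print(choose_shipping("express"))   # -> express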

Error Detection
Anticipate possible errors and provide feedback that helps users verify that:

• They‘ve done what they intended to do
• What they intended to do was correct

It‘s important to remember that providing feedback by changing the visual state of an object or
item is more noticeable than a written message.

Error Recovery
If the error is unavoidable, provide clearly marked ways for the user to recover from it. For
example, provide ―back‖, ―undo‖ or ―cancel‖ commands.

If a specific action is irreversible it should be classed as critical and you should make the
user confirm first in order to prevent slip-ups. Alternatively you can create a system that
naturally defaults to a less harmful state. For example if I close a document without saving it the
system should be intelligent enough to know that it is unlikely that I intended the action and
therefore either auto-save or clearly warn me before closing.
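As a rough sketch (not from the source), an ―undo‖ command can be supported by keeping a history of previous states that the user can step back to.

# A minimal undo mechanism: keep a history of states so the user can recover
# from an unintended action (illustrative sketch only).
class Document:
    def __init__(self):
        self.text = ""
        self._history = []

    def edit(self, new_text):
        self._history.append(self.text)   # remember the previous state
        self.text = new_text

    def undo(self):
        if self._history:                 # a clearly marked way back to the prior state
            self.text = self._history.pop()

doc = Document()
doc.edit("first draft")
doc.edit("second draft")
doc.undo()
print(doc.text)   # -> first draft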

10. Affordance
Affordance is the quality of an object that allows an individual to perform an action, for example
a standard household light switch has good ‗affordance‘, in that it appears innately clickable. In
short the physical properties of an object should suggest how it can be used. In the context of
user interfaces, affordance can be achieved by:

• Simulating ‗physical world‘ affordances e.g. buttons or switches
• or keeping consistency with modern web standards or other interface elements e.g.
underlined links or default button styles.

Interaction design is not always about creating a better interface for the users, but it is also
about using the technology in the way people want. It is necessary to know the target users in
order to design desirable product for them. An interactive design is the basis for the success of any
product. These 10 principles of interaction design are based on the study and experiences of our
team in designing mobile and web apps for a broad product portfolio and on multiple mobile and
web platforms.

Theoretical frameworks and methodologies


There are typically many thousands of rules which have been developed for the assessment
of usability (Nielsen, J. 1993, p19), and there have been many attempts to reduce the
complexity to a manageable set of rules (Nielsen, J. 1993, Baker, Greenberg and Gutwin,
2002). Jakob Nielsen has produced 10 rules which he calls usability heuristics and which are
designed to explain a large proportion of problems observed in interface design, which he
recommends should be followed by all user interface designers.
1. Simple and natural dialogue
Efforts should be made to avoid irrelevant information. Nielsen says that every extra
unit of information competes with units of relevant information and diminishes its
visibility.
2. Speak the Users’ language
All information should be expressed in concepts which are familiar to the user rather than
familiar to the operator or the system.
3. Minimize the Users’ memory load
It is important that the user should not have to remember information from one part of a
dialogue to another. Help should be available at easily retrievable points in the system.
4. Consistency
Words situations and actions should always mean the same thing no matter where they occur
in the system.
5. Feedback
Users should always be informed about what is going on in the system in a timely and
relevant way.
6. Clearly marked Exits
Errors are often made in choosing functions which are not required and there needs to be a
quick emergency exit to return to the previous state without having to engage in extended
dialogue.
7. Shortcuts
Required by the expert user (and unseen by the novice user) to speed the interaction with
the system.

8. Good error messages


These need to be expressed in a plain language that the user understands which are specific
enough to identify the problem and suggest a solution.
9. Prevent Errors
A careful design will prevent a problem from occurring.

10. Help and documentation


The best systems can be used without documentation. However, when such help is needed
it should be easily to hand, focused on the user‘s task and list specific steps to solutions.
Baker, Greenberg and Gutwin (2002) have taken Jakob Nielsen‘s heuristic evaluation a stage
further and considered the problems posed by groupware usability concerns. They have
adapted Nielsen‘s heuristic evaluation methodology to collaborative work within small scale
interactions between group members. They have produced what they call 8 groupware
heuristics.
• Provide the means for intentional and appropriate verbal communication.
The most basic form of communication in groups is verbal conversation.
Intentional communication is used to establish common understanding of the
task at hand and this occurs in one of three ways.
o People talk explicitly about what they are doing
o People overhear others‘ conversations
o People listen to running commentary that people produce describing their actions.
• Provide the means for intentional and appropriate gestured communication.
Explicit gestures are used alongside verbal communication to convey information.
Intentional gestures take various forms. Illustration is acted out speech, Emblems are
actions that replace words and Deixis is a combination of gestures and voice
• Provide consequential communication of an individual’s embodiment
Bodily actions unintentionally give off information about who is in the workspace,
where they are and what they are doing. Unintentional body language is fundamental
for sustaining teamwork.
• Provide consequential communication of shared artefacts
A person manipulating an artefact in a workspace unintentionally gives information
about how it is to be used and what is being done with it
• Provide Protection
People should be protected from inadvertently interfering with the work of others or
altering or destroying work that others have done

• Manage the transitions between tightly and loosely coupled collaboration
Coupling is the degree to which people are working together. People continually
shift back and forth between loosely and tightly coupled collaboration as they move
between individual and group work
• Support people with the coordination of their actions
Members of a group mediate their interactions by taking turns negotiating the sharing
of a common workspace. Groups regularly reorganize the division of work based upon
what other participants have done or are doing.
• Facilitate finding collaborators and establishing contact
Meetings are normally facilitated by physical proximity and can be scheduled,
spontaneous or initiated. The lack of physical proximity in virtual environments
requires other mechanisms to compensate.
Others have produced alternative sets of rules. However, the important issue is that there is
no consensus as to which set of rules should be applied in any given case. In other words,
HCI is a fragmented discipline which, according to Diaper (2005), shows a lack of coherent
development.

HCI Models
A variety of different models have been put forward which are designed to provide an HCI
theory in a particular context. This includes Norman‘s Model, Abowd and Beale‘s model and
the audience participation model of Nemirovsky (2003) which presents a new theoretical
basis for audience participation in HCI.
Norman’s model of interaction
This has probably been the most influential (Dix et al 1992 p105) because it mirrors
human intuition. In essence this model is based on the user formulating a plan of
action and then carrying it out at the interface. Norman has divided this into seven stages:
1. establishing the goal
2. forming the intention
3. specifying the action sequence
4. executing the action
5. perceiving the system state
6. interpreting the system state
7. evaluating the system state with respect to the goals and intentions

The Interaction Model


Abowd and Beale (Dix et al 1992 p106) have produced an interaction framework
built on Norman‘s model but theirs is designed to be a more realistic model.

Figure: The General Interaction Framework
The interface sits between the user and the system and there is a four-step interactive cycle,
as shown by the labeled arrows. The user formulates a task to achieve a goal. The user
manipulates the machine through input articulated in the input language. The input
language is translated into the system‘s core language to perform the operation. The system
is then in a new state, which is presented as the output. The output is communicated to the user
through observation.
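Very loosely, the four translations in this cycle can be sketched in Python. Everything below (the function names and the toy volume-control ―system‖) is our own illustration under assumed details, not part of Abowd and Beale‘s framework itself.

# A loose sketch of the four-step interaction cycle: articulation, performance,
# presentation and observation (names and the toy system are illustrative only).
system_state = {"volume": 3}

def articulate(goal):
    # The user formulates the task as an input-language action (e.g. pressing a key).
    return {"action": "increase", "attribute": "volume"} if goal == "louder" else {}

def perform(input_action):
    # The input language is translated into the system's core language.
    if input_action.get("action") == "increase":
        system_state[input_action["attribute"]] += 1

def present():
    # The new system state is rendered as output.
    return "volume = {}".format(system_state["volume"])

def observe(output):
    # The user observes the output and evaluates it against the goal.
    return "goal met" if output == "volume = 4" else "try again"

perform(articulate("louder"))
print(observe(present()))   # -> goal met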

Audience Participation Model


Nemirovsky (2003) considers that the old perspective is that of computers as deterministic
boxes blindly following their commands while users are incapable of changing the course of
the program running on the computer. To this he presents an alternative and proposes that users
should be considered as an audience rather than participants. Old models include the idea that
the mass of people wish to be entertained rather than to be creative, and that any creative
thinking while using a computer is punished because it is regarded as making a mistake. As a
consequence computer users do not have a proper framework to express themselves. This is a
strikingly radical approach. Instead Nemirovsky is concerned with users as an audience that
explore the media space. He goes on to discuss the emonic environment which he defines as
a framework for creation, modification, exchange and performance of audio visual media. This
is composed of the three layers:
• Input (interfaces for sampling)
• Structural (a neural network for providing structural control)
• Perceptual (direct media modification)

This latter and more radical model is unlikely to have any real application to the CSCL problem
at hand, since its sphere of application is designed to emulate the creative inspiration of the artist
battling against the mechanical controls of the machine.

HCI Analysis Methodology


A number of different methodologies have been created to determine the effectiveness of HCI
measurements. These have been refined resulting in the User Needs Analysis of Lindgaard et al
(2006) that suggests how and where user centred design and requirements engineering
approaches should be integrated. After reviewing various process models for user centred design
analysis they suggest a refined approach and identified the main problems as:

• The decision where to begin and end the analysis needs to be clarified.
• Deciding how to document and present the outcome

Lindgaard‘s user needs analysis method involves the following steps


• First: Identify user groups and interview key players from all groups to find the different roles
and tasks of the primary and secondary users
• Second: Communicate this information to the rest of the team by constructing task analysis
data and translating this into workflow diagrams supporting the user interface design. Create a
table that shows the information about user roles and data input
• Third: Upon submitting the first draft of the user needs analysis report create the first iterative
design prototype of the user interface based on minimizing the path of data flow. (Initially
prototyping in PowerPoint was faster and more effective than prototyping in Dreamweaver).
• Fourth: Prototypes were handed over to developers as part of the user interface specification
package.
• Fifth: Usability testing was used to determine the adequacy of the interface. Feedback from
watching users work with the prototype and discussing with them what they were doing always
resulted in more information.
• Sixth: Prototype usability testing meant that the requirements became clearer which resulted in
more changes to the user interface design and the prototype.
• Seventh: The formal plan involved three iterations of design- prototype- usability test for each
user role (they could not keep to this and had no more than two test iterations and in most cases
only one)
• Eighth: Practical issues of feasibility should not be overlooked in the quest to meet users‘
needs. A highly experienced software developer is a necessity on the user interface design team
in order to ensure that the changes proposed were feasible (in some cases interface ideas
were dropped because they were not feasible, would take too long or would cost too much).

Heuristic evaluation
A heuristic evaluation is a usability inspection method for computer software that helps to
identify usability problems in the user interface (UI) design. It specifically involves evaluators
examining the interface and judging its compliance with recognized usability principles (the
"heuristics"). These evaluation methods are now widely taught and practiced in the new media
sector, where UIs are often designed in a short space of time on a budget that may restrict the
amount of money available to provide for other types of interface testing.

Jakob Nielsen's heuristics are probably the most-used usability heuristics for user interface
design. Nielsen developed the heuristics based on work together with Rolf Molich in 1990. The
final set of heuristics that are still used today were released by Nielsen in 1994. The heuristics as
published in Nielsen's book Usability Engineering are as follows:

1. Visibility of system status:


The system should always keep users informed about what is going on, through
appropriate feedback within reasonable time.
2. Match between system and the real world:
The system should speak the user's language, with words, phrases and concepts familiar
to the user, rather than system-oriented terms. Follow real-world conventions, making
information appear in a natural and logical order.
3. User control and freedom:
Users often choose system functions by mistake and will need a clearly marked
"emergency exit" to leave the unwanted state without having to go through an extended
dialogue. Support undo and redo.
4. Consistency and standards:
Users should not have to wonder whether different words, situations, or actions mean the
same thing. Follow platform conventions.
5. Error prevention:
Even better than good error messages is a careful design which prevents a problem from
occurring in the first place. Either eliminate error-prone conditions or check for them and
present users with a confirmation option before they commit to the action.
6. Recognition rather than recall:
Minimize the user's memory load by making objects, actions, and options visible. The
user should not have to remember information from one part of the dialogue to another.
Instructions for use of the system should be visible or easily retrievable whenever
appropriate.
7. Flexibility and efficiency of use:
Accelerators—unseen by the novice user—may often speed up the interaction for the
expert user such that the system can cater to both inexperienced and experienced users.
Allow users to tailor frequent actions.
8. Aesthetic and minimalist design:
Dialogues should not contain information which is irrelevant or rarely needed. Every
extra unit of information in a dialogue competes with the relevant units of information
and diminishes their relative visibility.
9. Help users recognize, diagnose, and recover from errors:
Error messages should be expressed in plain language (no codes), precisely indicate the
problem, and constructively suggest a solution.
10. Help and documentation:
Even though it is better if the system can be used without documentation, it may be
necessary to provide help and documentation. Any such information should be easy to

search, focused on the user's task, list concrete steps to be carried out, and not be too
large.

Cooper’s Persona System


Personas in the User Interface Design
A persona is a user archetype you can use to help guide decisions about product features,
navigation, interactions, and even visual design. By designing for the archetype—whose goals and
behavior patterns are well understood—you can satisfy the broader group of people represented
by that archetype. In most cases, personas are synthesized from a series of ethnographic interviews
with real people, then captured in 1-2 page descriptions that include behavior patterns, goals,
skills, attitudes, and environment, with a few fictional personal details to bring the persona to life.
For each product, or sometimes for each set of tools within a product, there is a small set of
personas, one of whom is the primary focus for the design.
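Although personas are normally captured as short written documents rather than code, design teams sometimes keep them as simple structured records. The sketch below, and the example persona ―Jessica‖, are invented purely for illustration.

from dataclasses import dataclass, field

# A minimal persona record (illustrative only).
@dataclass
class Persona:
    name: str
    goals: list = field(default_factory=list)
    behaviour_patterns: list = field(default_factory=list)
    skills: list = field(default_factory=list)
    attitudes: list = field(default_factory=list)
    environment: str = ""
    personal_details: str = ""   # a few fictional details to bring the persona to life

jessica = Persona(
    name="Jessica",
    goals=["submit weekly reports quickly"],
    behaviour_patterns=["checks email first thing each morning"],
    skills=["comfortable with spreadsheets"],
    attitudes=["impatient with slow software"],
    environment="open-plan office, dual monitors",
    personal_details="keen runner, two children",
)
print(jessica.name, jessica.goals)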

It‘s easy to assemble a set of user characteristics and call it a persona, but it‘s not so easy to create
personas that are truly effective design and communication tools. If you have begun to create your
own personas, here are some tips to help you perfect them.

The idea of personas originated from Alan Cooper, an interaction designer and consultant.
It‘s a part of an approach to software design called Goal-directed design. It may also be referred
to as an approach or idea, because it‘s not a theory about users or interaction, nor is it a
complete process for software development. Cooper introduced the ideas about personas
and goal-directed design in the book The Inmates Are Running the Asylum (Cooper 1999) and
in articles (Cooper 1996). In addition, goal-directed design is promoted through Cooper‘s
consulting firm, so he has a commercial interest in it as well.
Personas – A Simple Introduction
Personas are fictional characters, which you create based upon your research in order to
represent the different user types that might use your service, product, site, or brand in a similar
way. Creating personas will help you to understand your users’ needs, experiences, behaviours
and goals. Creating personas can help you step out of yourself. It can help you to recognise that
different people have different needs and expectations, and it can also help you to identify with the
user you’re designing for. Personas make the design task at hand less complex, they guide your
ideation processes, and they can help you to achieve the goal of creating a good user experience
for your target user group.

As opposed to designing products, services, and solutions based upon the preferences of the design
team, it has become standard practice within many human centred design disciplines to collate
research and personify certain trends and patterns in the data as personas. Hence, personas do not
describe real people, but you compose your personas based on real data collected
from multiple individuals. Personas add the human touch to what would largely remain cold facts
in your research. When you create persona profiles of typical or atypical (extreme) users, it will
help you to understand patterns in your research, which synthesises the types of people you seek
to design for. Personas are also known as model characters or composite characters.

Personas provide meaningful archetypes which you can use to assess your design development
against. Constructing personas will help you ask the right questions and answer those questions
in line with the users you are designing for. For example, ―How would Peter, Joe, and Jessica
experience, react, and behave in relation to feature X or change Y within the given context?‖ and
―What do Peter, Joe, and Jessica think, feel, do and say?‖ and ―What are their underlying needs
we are trying to fulfill?‖

Personas in Design Thinking


In the Design Thinking process, designers will often start creating personas during the second
phase, the Define phase. In the Define phase, Design Thinkers synthesise their research and
findings from the very first phase, the Empathise phase. Using personas is just one method, among
others, that can help designers move on to the third phase, the Ideation phase. The personas will
be used as a guide for ideation sessions such as Brainstorm, Worst Possible Idea and SCAMPER.

Four Different Perspectives on Personas


In her Interaction Design Foundation encyclopedia article, Personas, PhD and specialist in
personas Lene Nielsen describes four perspectives that your personas can take to ensure that they
add the most value to your design project: the goal-directed, the role-based, the engaging and
the fiction-based perspective. Let‘s take a look at each of them:

1. Goal-directed Personas
This persona cuts straight to the nitty-gritty. ―It focusses on: What does my typical user want to
do with my product?‖. The objective of a goal-directed persona is to examine the process and
workflow that your user would prefer to utilise in order to achieve their objectives in interacting
with your product or service. There is an implicit assumption that you have already done enough
user research to recognise that your product has value to the user, and that by examining their
goals, you can bring their requirements to life. The goal-directed personas are based upon the
perspectives of Alan Cooper, an American software designer and programmer who is widely
recognized as the ―Father of Visual Basic‖.

2. Role-Based Personas
The role-based perspective is also goal-directed and it also focusses on behaviour. The personas
of the role-based perspectives are massively data-driven and incorporate data from both qualitative
and quantitative sources. The role-based perspective focusses on the user‘s role in the organisation.
In some cases, our designs need to reflect upon the part that our users play in their organisations
or wider lives. An examination of the roles that our users typically play in real life can help inform
better product design decisions. Where will the product be used? What‘s this role‘s purpose? What
business objectives are required of this role? Who else is impacted by the duties of this role? What
functions are served by this role? Jonathan Grudin, John Pruitt, and Tamara Adlin are advocates
for the role-based perspective.

3. Engaging Personas
―The engaging perspective is rooted in the ability of stories to produce involvement and insight.
Through an understanding of characters and stories, it is possible to create a vivid and realistic
description of fictitious people. The purpose of the engaging perspective is to move from
designers seeing the user as a stereotype with whom they are unable to identify and whose life
they cannot envision, to designers actively involving themselves in the lives of the personas. The
other persona perspectives are criticized for causing a risk of stereotypical descriptions by not

looking at the whole person, but instead focusing only on behavior.‖
– Lene Nielsen

Engaging personas can incorporate both goal and role-directed personas, as well as the more
traditional rounded personas. These engaging personas are designed so that the designers who
use them can become more engaged with them. The idea is to create a 3D rendering of a user
through the use of personas. The more people engage with the persona and see them as ‘real‘, the
more likely they will be to consider them during the process design and want to serve them with
the best product. These personas examine the emotions of the user, their psychology,
backgrounds and make them relevant to the task in hand. The perspective emphasises how stories
can engage and bring the personas to life. One of the advocates for this perspective is Lene Nielsen.

One of the main difficulties of the persona method is getting participants to use it (Browne, 2011).
In a short while, we‘ll let you in on Lene Nielsen‘s model, which sets out to cover this problem
through a 10-step process of creating an engaging persona.

4. Fictional Personas
The fictional persona does not emerge from user research (unlike the other personas) but it emerges
from the experience of the UX design team. It requires the team to make assumptions based upon
past interactions with the user base, and products to deliver a picture of what, perhaps, typical users
look like. There‘s no doubt that these personas can be deeply flawed (and there are endless debates
on just how flawed). You may be able to use them as an initial sketch

of user needs. They allow for early involvement with your users in the UX design process, but
they should not, of course, be trusted as a guide for your development of products or services.

Creating Personas and Scenarios


As described above, engaging personas can incorporate both goal and role-directed personas, as
well as the more traditional rounded personas. Engaging personas emphasise how stories can
engage and bring the personas to life. This 10-step process covers the entire process from
preliminary data collection, through active use, to continued development of personas. There are
four main parts:

• Data collection and analysis of data (steps 1, 2),
• Persona descriptions (steps 4, 5),
• Scenarios for problem analysis and idea development (steps 6, 9),
• Acceptance from the organization and involvement of the design team (steps 3, 7, 8, 10).

The 10 steps are an ideal process but sometimes it is not possible to include all the steps in the
project. Here we outline the 10-step process as described by Lene Nielsen in her Interaction Design
Foundation encyclopedia article, Personas.

1. Collect data. Collect as much knowledge about the users as possible. Perform high-quality user
research of actual users in your target user group. In Design Thinking, the research phase is the
first phase, also known as the Empathise phase.

2. Form a hypothesis. Based upon your initial research, you will form a general idea of the various
users within the focus area of the project, including the ways users differ from one another – For
instance, you can use Affinity Diagrams and Empathy Maps.

3. Everyone accepts the hypothesis. The goal is to support or reject the first hypothesis about the
differences between the users. You can do this by confronting project participants with the
hypothesis and comparing it to existing knowledge.

4. Establish a number. You will decide upon the final number of personas, which it makes sense
to create. Most often, you would want to create more than one persona for each product or service,
but you should always choose just one persona as your primary focus.

5. Describe the personas. The purpose of working with personas is to be able to develop solutions,
products and services based upon the needs and goals of your users. Be sure to describe personas
in such a way as to express enough understanding and empathy to understand the users.

• You should include details about the user‘s education, lifestyle, interests, values, goals,
needs, limitations, desires, attitudes, and patterns of behaviour.
• Add a few fictional personal details to make the persona a realistic character.
• Give each of your personas a name.
• Create 1–2 pages of description for each persona.

6. Prepare situations or scenarios for your personas. This engaging persona method is directed
at creating scenarios that describe solutions. For this purpose, you should describe a number of
specific situations that could trigger use of the product or service you are designing. In other words,
situations are the basis of a scenario. You can give each of your personas life by creating scenarios
that feature them in the role of a user. Scenarios usually start by placing the persona in a specific
context with a problem they want to or have to solve.

7. Obtain acceptance from the organization. It is a common thread throughout all 10 steps that the
goal of the method is to involve the project participants. As such, as many team members as
possible should participate in the development of the personas, and it is important to obtain the
acceptance and recognition of the participants of the various steps. In order to achieve this, you
can choose between two strategies: You can ask the participants for their opinion, or you can let
them participate actively in the process.

8. Disseminate knowledge. In order for the participants to use the method, the persona
descriptions should be disseminated to all. It is important to decide early on how you want to
disseminate this knowledge to those who have not participated directly in the process, to future
new employees, and to possible external partners. The dissemination of knowledge also includes
how the project participants will be given access to the underlying data.

9. Everyone prepares scenarios. Personas have no value in themselves; until a persona becomes
part of a scenario – the story about how the persona uses a future product – it does not have real
value.

10. Make ongoing adjustments. The last step is the future life of the persona descriptions. You
should revise the descriptions on a regular basis. New information and new aspects may affect
the descriptions. Sometimes you would need to rewrite the existing persona descriptions, add new
personas, or eliminate outdated personas.

Significant Models in HCI


In HCI design, models are used to guide the creation of an interface; this guidance is often less
technical than a theory. Models are often more limited in scope than a theory will be.

Model-Based User Interface Development (MBUID) is one approach that aims at decreasing the
effort needed to develop UIs while ensuring UI quality. The purpose of Model-Based Design is

to identify high-level models that allow designers to specify and analyse interactive software
applications from a more semantic oriented level rather than starting immediately to address the
implementation level. This allows them to concentrate on more important aspects without being
immediately confused by many implementation details and then to have tools which update the
implementation in order to be consistent with high-level choices.
The significant models in HCI include:

• Task models
• Cognitive architectures
• User models
• Domain models
• Context models
• Presentation models
• Dialogue models

Task Models for User Interface Design


Task Models describe how to perform activities to reach users' goals. Of the relevant models in
the human-computer interaction field, task models play an important role because they
represent the logical activities that should support users in reaching their goals.
Task models represent the intersection between user interface design and more systematic
approaches by providing designers with a means of representing and manipulating an abstraction
of activities that should be performed to reach user goals.

There are many reasons for developing task models. In some cases the task model of an existing
system is created in order to better understand the underlying design and analyse its potential
limitations and how to overcome them. In other cases designers create the task model of a new
application yet to be developed. In this case, the purpose is to indicate how activities should be
performed in order to obtain a new, usable system that is supported by some new technology.

Features
• Assumes that the tasks are related in the sense that they all share a small set of attributes
• Representation of commonalities and variabilities in a hierarchical diagram (a small example
of such a hierarchy is sketched below)
• Represented at various abstraction levels.
• The subject of a task model can be either an entire application or one of its parts. The
application can be either a complete, running interactive system or a prototype under
development.
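One common way to write such a hierarchy down is as a simple tree of tasks and subtasks. The ―book a flight‖ example below is our own illustration, not taken from the source.

# A hierarchical task model written as a nested structure (illustrative only).
task_model = {
    "Book a flight": {
        "Specify the trip": ["Choose origin", "Choose destination", "Choose dates"],
        "Select a flight": ["Compare offers", "Pick a fare"],
        "Pay": ["Enter card details", "Confirm booking"],
    }
}

def print_tasks(node, depth=0):
    # Walk the hierarchy, printing each task indented under its parent.
    if isinstance(node, dict):
        for task, subtasks in node.items():
            print("  " * depth + task)
            print_tasks(subtasks, depth + 1)
    else:
        for task in node:
            print("  " * depth + task)

print_tasks(task_model)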

Cognitive Model
A cognitive model is a descriptive account or computational representation of human thinking
about a given concept, skill, or domain. Here, the focus is on cognitive knowledge and skills, as
opposed to sensori-motor skills, and can include declarative, procedural, and strategic

knowledge. A cognitive model of learning, then, is an account of how humans gain accurate and
complete knowledge. This is closely related to metacognitive reasoning and can come about as a
result of (1) revising (i.e., correcting) existing knowledge, (2) acquiring and encoding new
knowledge from instruction or experience, and (3) combining existing components to infer and
deduce new knowledge. A cognitive model of learning should explain or simulate these mental
processes and show how they produce relatively permanent changes in the long-term memory of
learners. It is also common to consider impoverished...

Features
 Categorize perceptual objects
 Use processes to accomplish complex tasks
 Described in mathematical or computational languages
 Derived from basic principles of cognition
 Based mainly on experiments.

User model
A user model is a (data) structure that is used to capture certain characteristics about an
individual user, and a user profile is the actual representation in a given user model. The
process of obtaining the user profile is called user modeling.

Description
User models are explicit representations of the properties of an individual user including needs,
preferences as well as physical, cognitive and behavioral characteristics. The characteristics are
represented by variables. The user model is established by the declaration of these variables. Due
to the wide range of applications, it is often difficult to have a common format. A user model can
be seen from a functional/procedural point of view or from a more declarative point of view. In
the first case the focus is on the processes and actions that the user carries out to accomplish
tasks. In the second case the focus is on definitions and descriptions of the properties and
characteristics of the user.

A user profile is an instantiation of a user model representing either a specific real user or a
representative of a group of real users.

Besides the user model there are other related models, e.g. the task model, device model or
environment model. Task models describe how to perform activities to reach users' goals. A
device model is a representation of the features and capabilities of one or several physical
components involved in user interaction. The device model expresses capabilities of the device.
An environment model is a set of characteristics used to describe the use environment. It
includes all required contextual characteristics.

User modeling
User models are used to generate or adapt user interfaces at runtime, to address particular user
needs and preferences. User models are also known as user profiles, personas or archetypes.
They can be used by designers and developers for personalization purposes and to increase the
usability and accessibility of products and services.

Features
 Represents the properties of an individual user including needs, preferences as well as
physical, cognitive and behavioral characteristics
 Characteristics are represented by variables
 Can be viewed either in terms of the processes and actions that the user carries out to accomplish
tasks, or in terms of definitions and descriptions of the user's properties and characteristics.
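
As a minimal illustration of these features, the Python sketch below represents a user model as a
set of variables and a user profile as one instantiation of that model. The variable names
(preferred_language, expertise_level and so on) are hypothetical and chosen only for the example.

# A minimal sketch: a user model as a set of variables, and a user
# profile as an instantiation of that model (all names hypothetical).
from dataclasses import dataclass

@dataclass
class UserModel:
    # Needs and preferences
    preferred_language: str
    font_size: int
    # Physical, cognitive and behavioural characteristics
    colour_blind: bool
    expertise_level: str    # e.g. "novice" or "expert"

# A user profile: an instantiation of the model for one specific user
profile_anna = UserModel(
    preferred_language="en",
    font_size=14,
    colour_blind=False,
    expertise_level="novice",
)

# The interface could be adapted at runtime from the profile
if profile_anna.expertise_level == "novice":
    print("Show guided menus and extra feedback")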

Domain model
A domain model is a system of abstractions that describes selected aspects of a sphere of
knowledge, influence or activity (a domain). The model can then be used to solve problems related
to that domain.

Domain Modeling
Domain Modeling is a way to describe and model real world entities and the relationships
between them, which collectively describe the problem domain space. Derived from an
understanding of system-level requirements, identifying domain entities and their relationships
provides an effective basis for understanding and helps practitioners design systems for
maintainability, testability, and incremental development. Because there is often a gap between
understanding the problem domain and the interpretation of requirements, domain modeling is a
primary modeling activity in product development at scale. Driven in part by object-oriented design
approaches, domain modeling envisions the solution as a set of domain objects that collaborate
to fulfill system-level scenarios.

Features
 Helps practitioners understand the project problem description and translate the requirements of
that project into software components of a solution.
 The software components are commonly implemented in an object-oriented programming
language.
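
As a minimal illustration, the Python sketch below models two entities from a graphic-design
domain (shapes and a drawing surface) and the relationship between them. The class names are
hypothetical and are intended only to show the idea of domain objects collaborating.

# A minimal sketch of a domain model: real-world entities of a
# graphic-design domain and the relationships between them
# (class and attribute names are hypothetical).
class Shape:
    def __init__(self, kind, colour):
        self.kind = kind        # e.g. "triangle"
        self.colour = colour    # e.g. "red"

class DrawingSurface:
    def __init__(self):
        self.shapes = []        # a surface contains shapes

    def place(self, shape):
        self.shapes.append(shape)

canvas = DrawingSurface()
canvas.place(Shape("triangle", "red"))
print(len(canvas.shapes), "shape(s) on the canvas")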

Context model
Context models are used to illustrate the operational context of a system - they show what lies
outside the system boundaries. Social and organizational concerns may affect the decision on
where to position system boundaries. Architectural models show the system and its relationship
with other systems. Context models simply show the other systems in the environment, not how
the system being developed is used in that environment.

Features
 Context models are used to illustrate the operational context of a system
 They show what lies outside the system boundaries
 They show how IT applications fit into the context of the people and the organization they serve
 They simply show other systems in the environment, not how the system being developed is used in
that environment
 Producing an architectural model is first step in context modeling
 Social and organizational concerns may affect the decision on where to position system boundaries

Dialogue model
A dialogue model is an abstract model that is used to describe the structure of the dialogue
between a user and an interactive computer system. Dialogue models form the basis of the
notations that are used in user interface management systems (UIMS). Of these notations, the event
model has been shown to have the greatest descriptive power.

Features
 Meaningful communication, so that the computer understands what people are entering and
people understand what the computer is presenting or requesting.
 Minimal user action.
 Standard operation and consistency.
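
One common way to express the structure of a dialogue is as a state machine that defines which
user events are allowed in which system states. The Python sketch below is a minimal, hypothetical
example (a cash-machine style dialogue) and is not tied to any particular UIMS notation.

# A minimal sketch of a dialogue model as a state machine
# (states and events are hypothetical).
transitions = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "enter_pin"): "authenticated",
    ("authenticated", "withdraw"): "dispensing",
    ("dispensing", "take_cash"): "idle",
}

state = "idle"
for event in ["insert_card", "enter_pin", "withdraw", "take_cash"]:
    state = transitions.get((state, event), state)   # ignore disallowed events
    print("after", event + ":", state)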

Presentation Model
A presentation model represents the state and behavior of the presentation independently of the
GUI controls used in the interface.

Features
 Constituted of constructs the designer uses to communicate the domain and interaction messages to
the user
 Contains elements such as tasks, command panels and domain object displays
 These elements are used to organize the presentation of domain and interaction elements
 The organization is specified using the presentation and interaction structures
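
As a minimal illustration of keeping presentation state separate from the GUI controls, the Python
sketch below holds the "Save button enabled" logic in a presentation model object; a real view in
any GUI toolkit would simply read that property. All names are hypothetical.

# A minimal sketch of a presentation model: the state and behaviour of
# the presentation live here, independent of any GUI toolkit.
class SavePresentationModel:
    def __init__(self):
        self.document_dirty = False

    def edit(self):
        self.document_dirty = True

    @property
    def save_button_enabled(self):
        # Presentation logic: Save is only enabled when there are unsaved changes.
        return self.document_dirty

# A trivial "view" that merely reads the model; a real GUI control would
# bind to the same property.
pm = SavePresentationModel()
print("Save enabled:", pm.save_button_enabled)   # False
pm.edit()
print("Save enabled:", pm.save_button_enabled)   # True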

Developing a model of interaction for an application


An interaction model is a design model that binds an application together in a way that supports
the conceptual models of its target users. It is the glue that holds an application together.

It defines how all of the objects and actions that are part of an application interrelate, in ways
that mirror and support real-life user interactions. It ensures that users always stay oriented and
understand how to move from place to place to find information or perform tasks. It provides a
common vision for an application. It enables designers, developers, and stakeholders to understand
and explain how users move from objects to actions within a system.

Interactive modeling is a feature that enables you to create an output object with which the end
user can interact before generating a model. This interactive output object is placed on the
Outputs tab of the manager pane and contains an interim dataset. The interim dataset can be used
to refine or simplify the model before it is generated. Interactive modeling is achieved by adding
some extra elements to the specification of a regular model builder node:

 The Constructors section of the Node definition includes a
CreateInteractiveModelBuilder element.
 The extension includes a dedicated InteractiveModelBuilder element.

User interaction with the interim dataset takes place by means of a window known as the
interaction window, which is displayed as soon as the output object is created.

Figure. Interactive output object and interaction window

Interaction is specific to the algorithm used and is implemented by the extension. The interaction
window is defined in the User Interface section of the InteractiveModelBuilder
element. You can define an interaction window by specifying either of the following:

 A frame class (see User Interface Section), which defines the window completely
 A panel class, specified as an attribute of an extension object panel (see Extension Object
Panel), for each tab of the window

Once closed, the interaction window can be redisplayed by double-clicking the object name on
the Outputs tab.

The interaction window specification needs to include code to generate the model once the user
has completed interaction. In the example illustrated, this is done by means of the toolbar button
with the gold nugget icon, which is associated with an action to generate the model. The code for
this is shown in the InteractiveModelBuilder section under Example of Interactive
Modeling.

User Interface Section
The user interface for an object is declared in a UserInterface element within the object
definition in the specification file. There can be more than one UserInterface element in the
same specification file, depending on how many objects (for example, model builder node, model
output object, model applier node) are defined in the file.

Within each User Interface section, you can define:

 Icons for display on the canvas or palettes


 Controls (custom menu and toolbar items) to appear on dialogs or output windows
 Tabs that define sets of property controls (for dialogs or output windows)

Note: In the element definitions that follow (normally identified by the heading Format), element
attributes and child elements are optional unless indicated as "(required)." For the complete syntax
of all elements, refer to the reference documentation.

For each user interface, you specify the processing to be performed. You can do this by means of
either an action handler or a frame class attribute, both of which are optional. Where neither an
action handler nor a frame class attribute is specified, the processing to be performed is specified
elsewhere in the file.

Format
The basic format of the UserInterface element is:
<UserInterface>
  <Icons>
    <Icon ... />
    ...
  </Icons>
  <Controls>
    <Menu ... />
    <MenuItem ... />
    ...
    <ToolbarItem ... />
    ...
  </Controls>
  <Tabs>
    <Tab ... />
    ...
  </Tabs>
</UserInterface>

An extension object UI delegate is associated with either a node dialog or a model- or
document-output window and allows the extension to handle custom actions invoked on the dialog.

Note that extensions should not assume that the UI can be invoked. For example, extensions may
be run within "headless" (no UI) environments such as batch mode and application servers.

An action handler is used where you are adding custom actions to a standard IBM SPSS Modeler
window. It is similar to the extension object UI delegate although an action handler can also
be used when there is no extension node or output window such as when defining a new tool that
can be invoked directly from the IBM SPSS Modeler main window. The action handler specifies
the Java class that is called when a user chooses a custom menu option or toolbar button on
a node dialog, a model output window, or a document output window. It is an implementation
of either an ExtensionObjectFrame class or an ActionHandler class. In either case, the
standard window components are included automatically, such as the standard menus, tabs, and
toolbar buttons.

Extension Object Panel


An extension object panel works in a similar way to a text browser panel, but instead of displaying
the text content of a container, it creates an instance of a specified Java class that implements the
ExtensionObjectPanel interface defined by the CLEF Java API.

Here is an example of a model apply node dialog containing an extension object panel.

Figure. Model output window with extension object panel highlighted

Format

<ExtensionObjectPanel id="identifier" panelClass="Java_class" >


-- advanced custom layout options --
</ExtensionObjectPanel>

where:

id is a unique identifier that can be used to reference the panel in Java code.

panelClass (required) is the name of the Java class that the panel implements.

The advanced custom layout options provide a fine degree of control over the positioning and
display of screen components.

Example

The following example illustrates the Tab section that defines the extension object panel shown
earlier:

<Tab label="Model" labelKey="Model.LABEL"
     helpLink="selflearnnode_output.htm">
  <ExtensionObjectPanel id="SelfLearningPanel"
    panelClass="com.spss.clef.selflearning.SelfLearningPanel"/>
</Tab>

For Reference and Practice


https://www.ibm.com/support/knowledgecenter/en

Factors in selecting suitable interaction models design


What Makes a "Good" Interaction Design?
There have been several discussions about what needs to be included in a solid user interface.
Based on those insights and industry standards, let's explore the "good" characteristics of
interaction model interface design.
1. Design must be Goal-Driven
One of the major practices of interaction design is goal-driven design. Interaction designers
therefore need to know how to build customer insights into their design, regardless of whether or
not they are personally conducting user research.

The following UX processes are good starting points to empathize with users in order to elicit the
best design response:

 User Personas: Based on the psychology and behaviors of the target audience, create user personas
and refer to them when making crucial design decisions.
 User Scenarios: These explain how a user persona will act when using the product. Designers need
to explore the context in which the user interacts with the product.
 User Experience Maps: Also known as journey maps, user experience maps record all the conditions
of a single interaction, including external circumstances and emotions.

This approach helps interaction designers to first acknowledge their constraints before devising a
solution. You consider the goals users have in mind when using your product and design it
accordingly.
2. It must be Easy to Use
This is the bare minimum for any product. If your product lacks usability, it is obvious that no
one will desire it. Usability is the ease with which someone uses your product to achieve their
desired goal. Just like UX design, interaction design needs to consider the inherent usability of
the interfaces, making the underlying system easy to comprehend and use.

For example, suppose you are designing an online movie and events ticketing app with a high level
of detail, where users can choose rows, seat numbers and so on. The app typically needs to
consolidate a multi-step, multi-program process and transform the whole experience into a single
linear path.

The usability needs to be effortless if you want people to use the app extensively. If they need to
invest a lot of time just to understand the system, chances are they will abandon it. The best
practice would be something like:

Open the app → Select a show/event → Select date & place → Select row & seat → Payment
and checkout

3. The Interface must be Easily Learned


You need to design intuition and familiarity into every interface, as users don't really remember
all functions after using a product. In order to boil down complexity, you need to create
consistency and predictability. A simple example is when a designer uses a lightbox for some
images and has others opening in a new tab. This breaks both consistency and predictability and
will only confuse users, if not annoy them.
You need to maintain consistency throughout the design to create predictability, which in turn
improves learnability.

Using UI patterns like breadcrumbs or WYSIWYG editing is one of the best ways to improve
learnability. Users are already familiar with these patterns, since various websites and
applications use them. Besides, they offer enough room for customization and creativity.

4. Keep an Eye on Signifiers & Affordances


It is important to focus on signifiers as well. Your website menu should look like a menu,
otherwise it will leave your users confused. Follow the norm. Without it, users will find it
difficult to perceive the affordance.

Signifiers basically work as metaphors, telling people why they should interact with a particular
item by communicating the purpose. Affordances, on the other hand, define the relation between
the user and their environment. What is the underlying functionality of your product? This is
what affordance explains. Designers need to utilize signifiers and affordances consistently
throughout the design.

5. What’s the Feedback & Response Time?
Finally, we have feedback and response time as one of the "good" characteristics of interaction
design. Feedback is key to interaction design. Your product must communicate with your users
(provide feedback) about whether the desired task was accomplished and what they need to do next.

Take, for example, Hootsuite's owl, which goes to sleep if you remain inactive for a long time. It
is funny, intelligent feedback, and many other websites and apps are following this example. The
response time of feedback is also a key factor: it should feel immediate, anywhere between 0.1
second and 10 seconds.

Translations between user and system


User intentions translate into actions at the interface. These translate into alterations of system
state, reflected in the output display interpreted by the user.
Translation is the communication of the meaning of a source-language text by means of an
equivalent target-language text. The English language draws a terminological distinction (not all
languages do) between translating (a written text) and interpreting (oral or sign-language
communication between users of different languages); under this distinction, translation can begin
only after the appearance of writing within a language community.

Models of interaction
Interaction involves at least two participants: the user and the system. Both are complex,
and are very different from each other in the way that they communicate and view the
domain and the task. The interface must therefore effectively translate between them to
allow the interaction to be successful. This translation can fail at a number of points and for
a number of reasons. The use of models of interaction can help us to understand exactly
what is going on in the interaction and identify the likely root of difficulties. They also
provide us with a framework to compare different interaction styles and to consider
interaction problems.
We begin by considering the most influential model of interaction, Norman‘s execution–
evaluation cycle; then we look at another model which extends the ideas of Norman‘s cycle.
Both of these models describe the interaction in terms of the goals and actions of the user.
We will therefore briefly discuss the terminology used and the assumptions inherent in the
models, before describing the models themselves.
The terms of interaction
Traditionally, the purpose of an interactive system is to aid a user in accomplishing goals
from some application domain. A domain defines an area of expertise and knowledge in
some real-world activity. Some examples of domains are graphic design, authoring and
process control in a factory. A domain consists of concepts that highlight its important
aspects. In a graphic design domain, some of the important concepts are geometric shapes,
a drawing surface and a drawing utensil. Tasks are operations to manipulate the concepts of
a domain. A goal is the desired output from a performed task. For example, one task within
the graphic design domain is the construction of a specific geometric shape with particular
attributes on the drawing surface. A related goal would be to produce a solid red triangle
centred on the canvas. An intention is a specific action required to meet the goal.
Task analysis involves the identification of the problem space for the user of an interactive
system in terms of the domain, goals, intentions and tasks. We can use our knowledge of
tasks and goals to assess the interactive system that is designed to support them. The
concepts used in the design of the system and the description of the user are separate, and
so we can refer to them as distinct components, called the System and the User, respectively.
The System and User are each described by means of a language that can express concepts
relevant in the domain of the application. The System's language we will refer to as the core
language and the User's language we will refer to as the task language. The core language
describes computational attributes of the domain relevant to the System state, whereas the
task language describes psychological attributes of the domain relevant to the User state.

The system is assumed to be some computerized application, in the context of this book,
but the models apply equally to non-computer applications. It is also a common assumption
that by distinguishing between user and system we are restricted to single-user applications.
This is not the case. However, the emphasis is on the view of the interaction from a single
user‘s perspective. From this point of view, other users, such as those in a multi-party
conferencing system, form part of the system.

The execution–evaluation cycle


Norman's model of interaction is perhaps the most influential in Human–Computer
Interaction, possibly because of its closeness to our intuitive understanding of the interaction
between human user and computer. The user formulates a plan of action, which is then
executed at the computer interface. When the plan, or part of the plan, has been executed, the
user observes the computer interface to evaluate the result of the executed plan, and to
determine further actions.
The interactive cycle can be divided into two major phases: execution and evaluation.
These can then be subdivided into further stages, seven in all. The stages in Norman's
model of interaction are as follows:

 establishing the goal


 forming the intention
 specifying the action sequence
 executing the action
 perceiving the system state
 interpreting the system state
 evaluating the system state with respect to the goals and intentions.
Each stage is of course an activity of the user. First the user forms a goal. This is the user's
notion of what needs to be done and is framed in terms of the domain, in the task language.
It is liable to be imprecise and therefore needs to be translated into the more specific
intention, and the actual actions that will reach the goal, before it can be executed by the
user. The user perceives the new state of the system, after execution of the action sequence,
and interprets it in terms of his expectations. If the system state reflects the user's goal then
the computer has done what he wanted and the interaction has been successful; otherwise
the user must formulate a new goal and repeat the cycle.

Norman uses a simple example of switching on a light to illustrate this cycle. Imagine you
are sitting reading as evening falls. You decide you need more light; that is you establish
the goal to get more light. From there you form an intention to switch on the desk lamp, and
you specify the actions required, to reach over and press the lamp switch. If someone else is
closer the intention may be different – you may ask them to switch on the light for you. Your
goal is the same but the intention and actions are different. When you have executed the
action you perceive the result, either the light is on or it isn't, and you interpret this, based on
your knowledge of the world. For example, if the light does not come on you may interpret
this as indicating the bulb has blown or the lamp is not plugged into the mains, and you will
formulate new goals to deal with this. If the light does come on, you will evaluate the new
state according to the original goals – is there now enough light? If so, the cycle is complete.
If not, you may formulate a new intention to switch on the main ceiling light as well.
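
To make the ordering of the seven stages concrete, the Python sketch below walks through the
desk-lamp example as a single pass of the execution–evaluation cycle. The functions and data are
hypothetical and exist only to label the stages.

# A minimal sketch of Norman's execution-evaluation cycle applied to
# the desk-lamp example (all functions and data are hypothetical).
lamp_is_on = False

def execute(action):
    # The "world": pressing the switch turns the lamp on.
    global lamp_is_on
    if action == "press lamp switch":
        lamp_is_on = True

def perceive():
    return {"light_on": lamp_is_on}

goal = "get more light"                                # 1. establish the goal
intention = "switch on the desk lamp"                  # 2. form the intention
actions = ["reach over", "press lamp switch"]          # 3. specify the action sequence
for action in actions:                                 # 4. execute the actions
    execute(action)
state = perceive()                                     # 5. perceive the system state
interpretation = ("the lamp is on" if state["light_on"]
                  else "the bulb may have blown")      # 6. interpret the system state
goal_met = state["light_on"]                           # 7. evaluate against the goal
print(interpretation, "- goal met:", goal_met)
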
Norman uses this model of interaction to demonstrate why some interfaces cause problems
to their users. He describes these in terms of the gulf of execution and the gulf of
evaluation. As we noted earlier, the user and the system do not use the same terms to describe
the domain and goals – remember that we called the language of the system the core language
and the language of the user the task language. The gulf of execution is the difference
between the user's formulation of the actions to reach the goal and the actions allowed by
the system. If the actions allowed by the system correspond to those intended by the user,
the interaction will be effective. The interface should therefore aim to reduce this gulf.
The gulf of evaluation is the distance between the physical presentation of the system state
and the expectation of the user. If the user can readily evaluate the presentation in terms of
his goal, the gulf of evaluation is small. The more effort that is required on the part of the
user to interpret the presentation, the less effective the interaction.
Norman's model is a useful means of understanding the interaction, in a way that is clear and
intuitive. It allows other, more detailed, empirical and analytic work to be placed within
a common framework. However, it only considers the system as far as the interface. It
concentrates wholly on the user's view of the interaction. It does not attempt to deal with the
system's communication through the interface. An extension of Norman's model, proposed
by Abowd and Beale, addresses this problem. This is described in the next section.

The interaction framework
The interaction framework attempts a more realistic description of interaction by including
the system explicitly, and breaks it into four main components, as shown in the figure below.
The nodes represent the four major components in an interactive system – the System, the
User, the Input and the Output. Each component has its own language. In addition to the
User’s task language and the System’s core language, there are languages for both the Input
and Output components. Input and Output together form the Interface.

As the interface sits between the User and the System, there are four steps in the interactive
cycle, each corresponding to a translation from one component to another, as shown by the
labelled arcs in the figure above. The User begins the interactive cycle with the formulation of
a goal and a task to achieve that goal. The only way the user can manipulate the machine is
through the Input , and so the task must be articulated within the input language. The input
language is translated into the core language as operations to be performed by the System.
The System then transforms itself as described by the operations; the execution phase of the
cycle is complete and the evaluation phase now begins. The System is in a new state, which
must now be communicated to the User. The current values of system attributes are rendered
as concepts or features of the Output. It is then up to the User to observe the Output and
assess the results of the interaction relative to the original goal, ending the evaluation phase
and, hence, the interactive cycle. There are four main translations involved in the interaction:
articulation, performance, presentation and observation.
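
The round trip through the four components can be sketched as four functions applied in sequence;
the Python sketch below is hypothetical (a tiny "draw a shape" example) and only illustrates the
order of articulation, performance, presentation and observation.

# A minimal sketch of the four translations in the interaction framework
# (User -> Input -> System -> Output -> User); all names are hypothetical.
def articulation(task):
    # User's task language -> input language
    return {"command": "draw", "shape": task["shape"], "colour": task["colour"]}

def performance(input_expression, system_state):
    # Input language -> core language: operations performed by the System
    system_state["objects"].append((input_expression["shape"],
                                    input_expression["colour"]))
    return system_state

def presentation(system_state):
    # Core language -> output language: render the new System state
    return str(len(system_state["objects"])) + " object(s) on the canvas"

def observation(output_text):
    # Output -> stimuli for the User, who assesses the result against the goal
    return output_text.startswith("1")

goal = {"shape": "triangle", "colour": "red"}   # the User's goal, in task terms
system = {"objects": []}
output = presentation(performance(articulation(goal), system))
print(output, "- goal satisfied:", observation(output))
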
The User's formulation of the desired task to achieve some goal needs to be articulated in
the input language. The tasks are responses of the User and they need to be translated to
stimuli for the Input. As pointed out above, this articulation is judged in terms of the coverage
from tasks to input and the relative ease with which the translation can be accomplished. The
task is phrased in terms of certain psychological attributes that highlight
the important features of the domain for the User. If these psychological attributes map
clearly onto the input language, then articulation of the task will be made much simpler. An
example of a poor mapping, as pointed out by Norman, is a large room with overhead lighting
controlled by a bank of switches. It is often desirable to control the lighting so that only one
section of the room is lit. We are then faced with the puzzle of determining which switch
controls which lights. The result is usually repeated trials and frustration. This arises from the
difficulty of articulating a goal (for example, 'Turn on the lights in the front of the room') in
an input language that consists of a linear row of switches, which may or may not be oriented
to reflect the room layout.

Conversely, an example of a good mapping is in virtual reality systems, where input devices
such as datagloves are specifically geared towards easing articulation by making the user's
psychological notion of gesturing an act that can be directly realized at the interface. Direct
manipulation interfaces, such as those found on common desktop operating systems like
the Macintosh and Windows, make the articulation of some file handling commands easier.
On the other hand, some tasks, such as repetitive file renaming or launching a program
whose icon is not visible, are not at all easy to articulate with such an interface.

At the next stage, the responses of the Input are translated to stimuli for the System. Of
interest in assessing this translation is whether the translated input language can reach as
many states of the System as is possible using the System stimuli directly. For example, the
remote control units for some compact disc players do not allow the user to turn the power
off on the player unit; hence the off state of the player cannot be reached using the remote
control‘s input language. On the panel of the compact disc player, however, there is usually
a button that controls the power. The ease with which this translation from Input to System
takes place is of less importance because the effort is not expended by the user. However,
there can be a real effort expended by the designer and programmer. In this case, the ease
of the translation is viewed in terms of the cost of implementation.
Once a state transition has occurred within the System, the execution phase of the interaction
is complete and the evaluation phase begins. The new state of the System must be
communicated to the User, and this begins by translating the System responses to the
transition into stimuli for the Output component. This presentation translation must preserve
the relevant system attributes from the domain in the limited expressiveness of the output
devices. The ability to capture the domain concepts of the System within the Output is a
question of expressiveness for this translation.
For example, while writing a paper with some word-processing package, it is necessary at
times to see both the immediate surrounding text where one is currently composing, say,
the current paragraph, and a wider context within the whole paper that cannot be easily
displayed on one screen (for example, the current chapter).

Ultimately, the user must interpret the output to evaluate what has happened. The
response from the Output is translated to stimuli for the User which trigger assessment.
The observation translation will address the ease and coverage of this final translation.
For example, it is difficult to tell the time accurately on an unmarked analog clock,
especially if it is not oriented properly. It is difficult in a command line interface to
determine the result of copying and moving files in a hierarchical file system. Developing
a web site using a mark-up language like HTML would be virtually impossible without
being able to preview the output through a browser.

2. Interaction style for a given task


Human Interface/Human Error
Human operators are one of the biggest sources of errors in any complex system. Many operator
errors are attributed to a poorly designed human-computer interface (HCI). However, human
beings are often needed to be the fail-safe in an otherwise automated system. Even the most highly
trained and alert operators are prone to boredom when they are usually not needed for normal
operation, and to panic when an unusual situation occurs, stress levels are raised, and lives are at
stake. The HCI must give appropriate feedback to the operator to allow him or her to make well
informed decisions based on the most up to date information on the state of the system. High false
alarm rates will make the operator ignore a real alarm condition. Methods for determining the
effectiveness of an HCI, such as heuristic evaluation, cognitive walkthroughs, and empirical
evaluations like protocol analysis, exist, but are often cumbersome and do not provide conclusive
data on the safety and usability of an HCI. System designers must ensure that the HCI is easy and
intuitive for human operators to use, but not so simple that it lulls the operator into a state of
complacency and lowers his or her responsiveness to emergency situations.

Introduction
In any complex system, most errors and failures in the system can be traced to a human source.
Incomplete specifications, design defects, and implementation errors such as software bugs and
manufacturing defects, are all caused by human beings making mistakes. However, when looking
at human errors in the context of embedded systems, we tend to focus on operator errors and errors
caused by a poor human-computer interface (HCI).

Human beings have common failure modes and certain conditions will make it more likely for a
human operator to make a mistake. A good HCI design can encourage the operator to perform
correctly and protect the system from common operator errors. However, there is no well-defined
procedure for constructing an HCI for safety critical systems.

In an embedded system, cost, size, power, and complexity are especially limited, so the interface
must be relatively simple and easy to use without sacrificing system safety. Also, a distinction
must be made between highly domain specific interfaces, like nuclear power controls or airplane
pilot controls, and more general "walk up and use" interfaces, like automated teller machines or
VCR onscreen menus. However, this is not a hard and fast distinction, because there are interfaces
such as the one in the common automobile that specifically require some amount of training and
certification (most places in the world require a driver's license test) but are designed to be
relatively simple and universal. However, all cars do not have the same interface, and even small
differences may cause an experienced driver to make a mistake when operating an unfamiliar car.

In safety critical systems, the main goal of the user interface is to prevent the operator from
making a mistake and causing a hazard. In most cases usability is a complementary goal, in that a
highly usable interface will make the operator more comfortable and reduce anxiety. However,
there are some tradeoffs between characteristics that make the interface usable and characteristics
that make it safe. For example, a system that allows the user to commit a procedure by simply
pressing the enter key a series of times may be extremely usable, but may allow the operator to
bypass important safety checks or easily confirm an action without assessing the consequences.
This was one of the problems with the Therac-25 medical radiation device. Operators could easily
bypass error messages on the terminal and continue to apply treatment, not realizing they were
administering lethal doses of radiation to the patient. Also, the error messages were not particularly
descriptive, which points to another common problem with user interfaces: providing appropriate feedback. It
is also important to recognize that not all systems are safety critical, and in those cases, usability
is the main goal of the HCI. If the user must operate the system to perform a task, the interface
should guide the user to take the appropriate actions and provide feedback to the user when
operations succeed or fail.

Sources of Human Error


Automated systems are extremely good at repetitive tasks. However, if an unusual situation occurs
and corrective action must be taken, the system usually cannot react well. In this situation, a human
operator is needed to handle an emergency. Humans are much better than machines at handling novel
occurrences, but cannot perform repetitive tasks well. Thus the operator is left to passively monitor
the system when there is no problem, and is only a fail-safe in an emergency. This is a major
problem in HCI design, because when the user is not routinely involved in the control of the
system, they will tend to become bored and be lulled into complacency. This is known as operator
drop-out. Since the user's responsiveness is dulled, in a real emergency situation, he or she may
not be able to recover as quickly and will tend to make more mistakes.

However, if the human operator must routinely be involved in the control of the system, he or she
will tend to make mistakes and adapt to the common mode of operation. Also, if the operator has
a persistent mental model of the system in its normal mode of operation, he or she will tend to
ignore data indicating an error unless it is displayed with a high level of prominence. The HCI
must be designed so that it provides enough novelty to keep the user alert and interested in his or
her job, but not so extremely complicated that the user will find it difficult to operate.

Stress is also a major contributing factor to human error. Stressful situations include unfamiliar
or exceptional occurrences, incidents that may cause a high loss of money, data, or life, or time
critical tasks. Human performance tends to degrade when stress levels are raised. Intensive training
can reduce this effect by making unusual situations familiar through drills. However, the
cases where human beings must perform at their best to avoid hazards are often the cases of most
extreme stress and worst error rates. The failure rate can be as high as thirty percent in extreme
situations. Unfortunately, the human operator is our only option, since a computer system usually
cannot correct for truly unique situations and emergencies. The best that can be done is to design
the user interface so that the operator will make as few mistakes as possible.

HCI Problems
The HCI must provide intuitive controls and appropriate feedback to the user. Many HCI's can
cause information overload. For example, if an operator must watch several displays to observe
the state of a system, he or she may be overwhelmed and not be able to process the data to gain an
appropriate view of the system. This may also cause the operator to ignore displays that are
perceived as having very low information content. This can be dangerous if one particular
display is in control of a critical sensor. Another way to overwhelm the operator is to have alarm
sensitivity set too high. If an operator gets an alarm for nearly every action, most of which are
false, he or she will ignore the alarm when there is a real emergency condition.

The HCI must also have a confidence level that will allow the operator to assess the validity of its
information. The operator should not have to rely on one display for several sensors. The system
should have some redundancy built into it. Also, several different displays should not relay
information from the same sensor. This would give the user the unsubstantiated notion that he or
she had more information than what was available, or that several different sources were in
agreement. The operator should not trust the information from the HCI to the exclusion of the
rest of his or her environment.

There are several heuristics for judging a well-designed user interface, but there is no systematic
method for designing safe, usable HCI's. It is also difficult to quantitatively measure the safety and
usability of an interface, as well as find and correct for defects.

Available tools, techniques, and metrics


Several techniques exist for evaluating user interface designs, but they are not mature and do not
provide conclusive data about an HCI's safety or usability. Inspection methods like heuristic
evaluation and cognitive walkthrough have the advantage that they can be applied at the design
phase, before the system is built. The fact that a real interface is not being tested also limits what
can be determined about the HCI design. Empirical methods like protocol analysis actually have
real users test the user interface, and do lengthy analyses on all the data collected during the
session, from keystrokes to mouse clicks to the user's verbal account during interaction.

HCI Design
There are no structured methods for user interface design. There are several guidelines and
qualities that are desirable for a usable, safe HCI, but the method of achieving these qualities is
not well understood. Currently, the best method available is iterative design, evaluation, and
redesign. This is why evaluation methods are important. If we can perform efficient evaluations
and correctly identify as many defects as possible, the interface will be greatly improved. Also,
accurate evaluations earlier in the design phase can save money and time. However, it is easier
to find HCI defects when you have a physical interface to work with. It is also important to separate
the design of the HCI from other components in the system, so defects in the interface do not propagate
faults through the system. One proposed approach outlines an architecture for decoupling the HCI from
the application in hard real-time systems so that complexity is reduced and timing constraints can be
dealt with.

Heuristic Evaluation
Heuristic evaluation involves having a set of people (the evaluators) inspect a user interface design
and judge it based on a set of usability guidelines. These guidelines are qualitative and cannot be
concretely measured, but the evaluators can make relative judgments about how well the user
interface adheres to the guidelines. A sample set of usability heuristics from designs would be:

 Simple and natural dialog


 Speak the user's language
 Minimize the user's memory load
 Consistency
 Feedback
 Clearly marked exits
 Shortcuts
 Precise and constructive error messages
 Prevent errors
 Help and documentation

This technique is usually applied early in the life cycle of a system, since a working user interface
is not necessary to carry it out. Each individual evaluator can inspect the user interface on his or
her own, judging it according to the set of heuristics without actually having to operate the
interface. This is purely an inspection method. It has been found that to achieve optimal coverage
for all interface problems, about five or more independent evaluators are necessary.

However, if this is cost prohibitive, heuristic evaluation can achieve good results with as few as
three evaluators.
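
In practice, each evaluator typically records a severity rating for the problems found under each
heuristic, and the ratings are then aggregated. The Python sketch below shows one simple way this
aggregation might be done; the ratings and the subset of heuristics used are invented for
illustration.

# A minimal sketch of aggregating severity ratings from several
# independent evaluators (invented data).
# Severity scale: 0 = no problem ... 4 = usability catastrophe.
ratings = {
    "Consistency":          [1, 2, 1],   # one rating per evaluator
    "Feedback":             [3, 4, 3],
    "Clearly marked exits": [0, 1, 0],
}

for heuristic, scores in ratings.items():
    average = sum(scores) / len(scores)
    print(heuristic, "- average severity", round(average, 1))

worst = max(ratings, key=lambda h: sum(ratings[h]) / len(ratings[h]))
print("Fix first:", worst)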

Heuristic evaluation is good at uncovering errors and explaining why there are usability problems
in the interface. Once the causes are known, it is fairly easy to implement a solution to fix the
interface. This can be extremely time and cost saving since things can be corrected before the user
interface is actually built. However, the merits of heuristic evaluation are very dependent on the
merits of the evaluators. Skilled evaluators who are trained in the domain of the system and can
recognize interface problems are necessary for very domain specific applications.

Cognitive Walkthrough
Another usability inspection method is the cognitive walkthrough. Like the heuristic evaluation,
the cognitive walkthrough can be applied to a user interface design without actually operating a
constructed interface. However, the cognitive walkthrough evaluates the system by focusing on
how a theoretical user would go about performing a task or goal using the interface. Each step the
user would take is examined, and the interface is judged based on how well it will guide the user
to perform the correct action at each stage. The interface should also provide an appropriate level
of feedback to ensure to the user that progress is being made on his or her goal.

Since the cognitive walkthrough focuses on steps necessary to complete a specific task, it can
uncover disparities in how the system users and designers view these tasks. It can also uncover
poor labeling and inadequate feedback for certain actions. However, the method's tight focus loses
sight of some other important usability aspects. This method cannot evaluate global consistency
or extensiveness of features. It may also judge a deliberately comprehensive interface poorly,
simply because it offers the user many choices.
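
A walkthrough is usually recorded step by step, noting for each user action whether the interface
guides the user to the correct action and whether the feedback is adequate. The Python sketch below
illustrates such a record for an invented printing task; it is not a prescribed walkthrough format.

# A minimal sketch of recording a cognitive walkthrough for one task
# (the steps and judgements below are invented for illustration).
steps = [
    {"action": "Select 'Print' from the File menu",
     "guides_user": True, "adequate_feedback": True},
    {"action": "Choose double-sided printing",
     "guides_user": False, "adequate_feedback": True},    # poorly labelled option
    {"action": "Press 'OK'",
     "guides_user": True, "adequate_feedback": False},    # no progress indication
]

for number, step in enumerate(steps, start=1):
    problems = []
    if not step["guides_user"]:
        problems.append("does not guide the user to the correct action")
    if not step["adequate_feedback"]:
        problems.append("feedback is inadequate")
    verdict = "; ".join(problems) if problems else "OK"
    print("Step", number, "-", step["action"], "->", verdict)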

In order for a user interface to be designed well and as many flaws as possible to be caught, several
inspection methods should be applied. There is a trade-off between how thoroughly the interface
is inspected and how many resources are able to be committed at this early stage in the system life
cycle. Empirical methods can also be applied at the prototype stage to actually observe the
performance of the user interface in action.

Protocol Analysis
Protocol analysis is an empirical method of user interface evaluation that focuses on the test user's
vocal responses. The user operates the interface and is encouraged to "think out loud" when going
through the steps to perform a task using the system. Video and audio data are recorded, as well
as keystrokes and mouse clicks. Analyzing the data obtained from one test session can be an
extremely time consuming activity, since one must draw conclusions from the subjective vocal
responses of the subject and draw inferences from his or her facial expressions.

This can be very tedious simply because the volume of data is very high and lengthy analysis is
required for each second.

MetriStation
MetriStation is a tool being developed at Carnegie Mellon University to automate the normally
tedious task of gathering and analyzing all the data gathered from empirical user interface
evaluations. The system consists of software that will synchronize and process data drawn from
several sources when a test user is operating the interface being evaluated. Keystrokes, mouse
clicks, tracking the user's eye movements, and the user's speech during a test session are recorded
and analyzed. The system is based on the premise that if the interface has good usability
characteristics, the user will not pause during the test session, but logically proceed from one step
to the next as he or she completes an assigned task. Any statistically unusual pauses between actions
would indicate a flaw in the interface and can be detected automatically.
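
The idea that statistically unusual pauses signal a possible flaw can be illustrated with a short
Python sketch that flags gaps between logged events which are much longer than the user's typical
pace. The timestamps and the threshold rule (mean plus two standard deviations) are invented for
illustration and are not MetriStation's actual algorithm.

# A minimal sketch of flagging statistically unusual pauses between
# logged user actions (invented timestamps; not MetriStation's code).
from statistics import mean, stdev

timestamps = [0.0, 1.2, 2.1, 3.0, 9.8, 10.5, 11.3]   # seconds at which events occurred
gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]

threshold = mean(gaps) + 2 * stdev(gaps)             # "unusual" = longer than mean + 2 SD
for index, gap in enumerate(gaps, start=1):
    if gap > threshold:
        print("Unusual pause of", round(gap, 1), "seconds before event",
              index + 1, "- possible interface problem here")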

MetriStation seems like a promising tool in aiding empirical analysis. It can give more quantitative
results, and can reduce greatly the time spent collecting and processing data from test sessions.
However, this tool may only flag problems that cause the user to hesitate in a task. It can do
nothing about problems in the interface that do not slow the user down. For instance, it may be
extremely easy to crash a system through the user interface quickly, but this is clearly not a desired
outcome. The premise that most usability problems will cause the user to hesitate has limited scope
and applicability.

Relationship to other topics


Since human error is the largest source of system failures, it must be a large factor in safety critical
system analysis.

 Safety Critical Systems/Analysis - Human error is a major factor in making systems


safety critical. It is difficult to model human behavior in a system analysis, but the
human operator is often a major impediment to making a system safe.
 Exception Handling - The human operator is often a source of exceptional inputs to the
system. However, when a truly unanticipated condition occurs, the human operator is the
default exception handler and the only mechanism able to prevent system failure.
 Security - Defects in the user interface can sometimes be exploited and introduce security
vulnerabilities to the system.
 Social and Legal Concerns - If the user interface was poorly designed and caused the
operator to make a mistake that cost lives or property, who is at fault? The
operator? The system designer? These issues must be addressed by society and the legal
system.

Conclusions
The following ideas are the important ones to take away from reading about this topic:

 Humans are the most unpredictable part of any system and therefore the most difficult to
model for HCI design.
 Humans have higher failure rates under high stress levels, but are more flexible in
recovering from emergency situations and the last hope in a potential disaster.
 The HCI must provide an appropriate level of feedback without overloading the operator
with too much information
 If the human operator is out of the control loop in an automated task, the operator will
tend to adapt to the normal operation mode and not pay close attention to the system.
When an emergency condition occurs, the operator's response will be degraded.
 There is a trade-off between making the HCI relatively easy and intuitive and ensuring
that system safety is not compromised by lulling the operator into a state of complacency.
 Evaluation techniques for user interfaces are not mature and can be costly. They focus
more on qualitative than on quantitative metrics. However, evaluation and iterative
design is the best method we have for improving the interface.

Interaction styles supported by the system


The concept of Interaction Styles refers to all the ways the user can communicate or otherwise
interact with the computer system.

We will consider human-computer interactions from several perspectives:

1. interaction style
2. user concept of the interaction
3. interactions for mobile devices

Human-computer Interfaces Styles


Although WIMP interfaces, which are a combination of menus, dialog boxes, and 'point and
click', currently dominate interface design, there are other forms of interfaces:

 Command line interface


 Menus
 Question-answer/Dialog boxes
 Forms
 Spreadsheets
 Point and click
 Natural language
 Command Gesturing
 General Gesturing
 Direct manipulation
 Tangible interaction

We will go through the list, identifying each technique in terms of how fast it is for the user,
how flexible or expressive it is for the user, how long it takes for the user to learn, and how
hard it is for the programmer to implement.

Command line interfaces are the original interfaces for the computer and are still used.
Examples of command line interfaces are UNIX operating system commands and the VI editor.
They are loved by system administrators for their accuracy, speed and flexibility. If the user
knows the commands, then typing is faster than searching for them in menus. Consequently, some
applications try to offer both; for example, AutoCAD or Excel on Windows. Users can directly
create script files and verbally specify the command sequence. Some commands can be hard to
visualize, and searching for commands or files can be frustrating and slow. The interface is easy
for programmers to implement.

Menus are a basis of WIMP interfaces and a favorite with inexperienced users. Novice users can
easily find commands. Searching the menus, the user builds up a metaphor for the application.
Using menus is slow but fun. Using toolkits, menus are not too bad to program.

Dialogs or "question and answer" boxes are another old interface style. They are windows that
pop up asking for information, fields to be filled or buttons to be pressed. Dialogue windows
have been around since before WIMP interfaces; they first appeared in database entry application
programs. They are a rather inflexible form of interface. Dialog boxes are easy to understand
but not very flexible or fast. They are easy to program.

Forms are sophisticated dialog boxes and not much more flexible. Spreadsheets are a flexible
and powerful form of interface, especially if the user can specify cell types. Data entry in
spreadsheets is typically slow. They are more difficult to program.

Point and click interfaces were made popular by the web. They were very suitable for the initial
web browsers (gopher) when web pages were all text. Users knew to interpret the underscore as a
link to another web page. Now, links can be hidden, for example in images. Icons on the desktop
are another example of the point and click style of interface. The notion of point and click is a
short interaction that produces a very specific result. Because the user must move the mouse, this
interface style is slow. It is flexible because many different kinds of UI objects can be pointed
at. Shortcut-key interaction is a point and click interaction style without the point. These
interfaces are generally easy to implement.

The most common examples of natural language interfaces are the interfaces of search engines.
They are hard to implement, but can be very flexible.

The command gesturing interface style selects an object and uses a gesture to issue commands. In
essence, it is a generalization of "point and click" interfaces. Examples can be found in the Opera
web browser. Another example of command gesturing is the hand cursor in Adobe applications. Some
games, such as Brothers in Arms, use command gesturing. Keith Ruthowski has demonstrated that a
pie menu can become a form of command gesturing and, once the gestures are learned, be nearly as
accurate and fast as text entry. Because algorithms for interpreting gestures are in their infancy,
the flexibility of command gesturing is not known, and it is difficult to implement. Learning a
large set of gestures can take a long time.

General gesturing is a more general interface style than command gesturing. There does not have
to be an object and the gesture does not have to represent a command. Examples of general
gesturing are drawing applications and text entry using scripting as on notebooks. The Wii is
advancing general gesturing in games. Because this is a very new interaction style, it is unknown
how easy it is to learn, but it should be more flexible. It is more difficult to implement.

Direct manipulation is closely related to command gesturing. An example of direct manipulation
is dragging and dropping files into folders or the trash. Drawing applications use direct
manipulation. Such interfaces can be slow to use but are fast to learn. They can be difficult to
implement.

Tangible interactions refer to manipulating physical objects other than the mouse and keyboard.
There are few current popular examples, but RFID and NFC technology does make some tangible
interactions possible. Low-tech examples of tangible interactions are real buttons, switches and
sliders. They can be fast or slow to use, should be easy to learn, and can be hard to implement.

Conceptual Interaction Models

Preece, Rogers and Sharp, in Interaction Design, propose that understanding users' conceptual
models for interaction can guide HCI designers to the proper interaction techniques for their
system.

The most important thing to design is the user's conceptual model. Everything else should be
subordinated to making that model clear, obvious, and substantial. That is almost exactly the
opposite of how most software is designed. (David Liddle, 1996, Design of the conceptual model,
in Bringing Design to Software, Addison-Wesley, 17-31)

The HCI designer's goal is to understand the interaction in the terms that users themselves
understand. Preece, Rogers and Sharp propose four conceptual models for interaction, based on
the type of activities users perform during the interaction.

 Instructing – issuing commands to the system


 Conversing – users ask the system questions
 Manipulating and Navigating – users interact with virtual objects or an environment
 Exploring and Browsing – the system provides structured information

I propose additional conceptual interactions that are more passive:

 Passive Instrumental – the system provides passive feedback to user actions
 Passive informative – the system provides passive information to the users.

Users may interact with a system using more than one conceptual interaction model.

Instructional Interactions
Issuing commands is an example of an instructional interaction. Instructional interactions are probably the most common form of conceptual interaction, and they give the user the most control over the system. Specific examples range from using a VCR to programming. In most cases operating system interactions are instructional interactions. Icons, menus and control keys are examples of improving the usability of command-line instructional interaction. Instructional interactions tend to be quick and efficient.

Conversational Interactions
Conversational interactions model the interaction as a user-system dialog. Examples of systems that are primarily conversational are help systems and search engines. Agents (such as the paper clip) use conversational interaction. Implementing the conversational model may require voice recognition and text parsing, or could use forms. The advantage of the conversational model is that it can be more natural, but it can also be a slower interaction; automated phone-based systems, for example, are slow conversational interfaces. Another disadvantage of conversational interaction is that the user may believe the system is smarter than it really is, especially if the system uses an animated agent.

Manipulating and Navigational Interactions


This model describes the interaction of manipulating virtual objects or navigating virtual worlds. Navigational interactions are popular in computer games. Manipulating interactions occur in drawing software, and navigational interactions occur even in word processors, for example when zooming or using the scroll bar. Direct manipulations are manipulating interactions. Ben Shneiderman (1983) coined the phrase and proposed three properties:

 continuous representations of objects
 rapid reversible incremental actions with immediate feedback
 physical actions

Apple was the first computer company to design an operating system using direct manipulation in the desktop. Direct manipulation and navigational interactions have many benefits: they are easy to learn, easy to recall, tend to produce fewer errors, give immediate feedback, and produce less user anxiety. But they have several disadvantages: the interactions are slower, and the user may believe that the interaction is more capable than it really is. Poor metaphors, such as dragging the icon of a floppy disk to the trash to eject it, can confuse the user.

Explorative and Browsing Interactions
Explorative and browsing interactions refer to searching structured information. Examples of systems using explorative interactions are music CDs, movie DVDs, the web and portals. Not much progress has been made in this conceptual model for interactions, probably because structuring information is more than a trivial task and is hard to model.

Passive Instrumental Interactions


Passive instrumental interactions are similar to instruments, for example the speedometer in a car dashboard. They can provide feedback on users' actions or movements, such as a GPS interface. They can also provide information about changes in the environment, such as a light meter or an image in a viewfinder. Smartphones are frequently used as instruments and make use of passive instrumental interactions.

Passive Informative Interactions


Smartphones are frequently used to read books. The interaction is very passive and one-way: the system provides information to the user, who primarily gestures to progress through the book. Viewing images is another passive informative interaction, with interactions only for zooming and panning. Passive informative interaction may be a simplified form of manipulating and navigational interaction.

We can make a table summarizing interaction styles and conceptual interaction models. The related conceptual interaction model is the most common model supported by each interaction style, and the implementation column indicates how hard the style is for the designer to implement. Because I made the table, we should go through it and correct it.

The nature of user/system dialogue


Dialogue system
A dialogue system, or conversational agent (CA), is a computer system intended to converse with a human. Dialogue systems employ one or more of text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channels.

There is no fixed set of elements that a dialogue system must contain. The typical GUI wizard engages in a sort of dialog, but it includes very few of the common dialogue system components, and its dialog state is trivial.

Introduction
After dialogue systems based only on written text processing appeared from the early Sixties onwards, the first speaking dialogue system was issued by the DARPA (Defense Advanced Research Projects Agency) Project in the USA in 1977. After the end of this 5-year project, some European projects issued the first dialogue systems able to speak many languages (including French, German and Italian). Those first systems were used in the telecom industry to provide various phone services in specific domains, e.g. automated agenda and train timetable services.

Components
Which components are included in a dialogue system, and how those components divide up responsibilities, differs from system to system. Central to any dialogue system is the dialog manager, a component that manages the state of the dialog and the dialog strategy. A typical activity cycle in a dialogue system contains the following phases:

1. The user speaks, and the input is converted to plain text by the system's input
recognizer/decoder, which may include:
o automatic speech recognizer (ASR)
o gesture recognizer
o handwriting recognizer
2. The text is analyzed by a Natural language understanding unit (NLU), which may
include:
o Proper Name identification
o part of speech tagging
o Syntactic/semantic parser
3. The semantic information is analyzed by the dialog manager, which keeps the history and state of the dialog and manages the general flow of the conversation.
4. Usually, the dialog manager contacts one or more task managers, which have knowledge of the specific task domain.
5. The dialog manager produces output using an output generator, which may include:
o natural language generator
o gesture generator
o layout manager
6. Finally, the output is rendered using an output renderer, which may include:
o text-to-speech engine (TTS)
o talking head
o robot or avatar

Dialogue systems that are based on a text-only interface (e.g. text-based chat) contain only stages
2–5.
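To make the division of responsibilities concrete, the following short Python sketch wires the stages of the activity cycle together for a text-only system (stages 2-5 above). All class and function names here are invented for illustration; they are not part of any real dialogue toolkit.

# Minimal sketch of the dialogue activity cycle for a text-only system.
# Every component below is a hypothetical placeholder, not a real library.

class SimpleNLU:
    """Stage 2: turn the user's text into a (very naive) semantic frame."""
    def parse(self, text):
        return {"intent": "query", "text": text.lower()}

class TimetableTaskManager:
    """Stage 4: domain knowledge - here, a canned train timetable answer."""
    def lookup(self, frame):
        if "train" in frame["text"]:
            return "The next train leaves at 10:15."
        return None

class DialogManager:
    """Stage 3: keeps the dialog history/state and drives the conversation."""
    def __init__(self, task_manager):
        self.history = []
        self.task_manager = task_manager

    def handle(self, frame):
        self.history.append(frame)
        answer = self.task_manager.lookup(frame)
        return answer or "Sorry, I did not understand that."

def activity_cycle(user_text, nlu, dialog_manager):
    # Stages 1 and 6 (speech recognition and text-to-speech) are omitted.
    frame = nlu.parse(user_text)              # stage 2: language understanding
    response = dialog_manager.handle(frame)   # stages 3-4: dialog + task manager
    return response                           # stage 5: (trivial) output generation

dialog_manager = DialogManager(TimetableTaskManager())
print(activity_cycle("When is the next train?", SimpleNLU(), dialog_manager))

Running this prints the canned timetable answer; in a real system each placeholder would be replaced by the corresponding recognizer, parser, generator or renderer component.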

Types of systems
Dialogue systems fall into the following categories, which are listed here along a few
dimensions. Many of the categories overlap and the distinctions may not be well established.

 by modality
o text-based
o spoken dialogue system
o graphical user interface
o multi-modal
 by device
o telephone-based systems
o PDA systems
o in-car systems
o robot systems
o desktop/laptop systems
   native
   in-browser systems
   in-virtual machine
o in-virtual environment
o robots
 by style
o command-based
o menu-driven
o natural language
o speech graffiti
 by initiative
o system initiative
o user initiative
o mixed initiative

Natural dialogue systems


"A Natural Dialogue System is a form of dialogue system that tries to improve usability and user
satisfaction by imitating human behaviour" (Berg, 2014). It addresses the features of a human-to-
human dialog (e.g. sub dialogues and topic changes) and aims to integrate them into dialogue
systems for human-machine interaction. Often, (spoken) dialogue systems require the user to adapt to the system, because the system is only able to understand a very limited vocabulary, is not able to react to topic changes, and does not allow the user to influence the dialogue flow. Mixed initiative is a way to enable the user to take an active part in the dialogue instead of only answering questions. However, the mere existence of mixed initiative is not sufficient for a system to be classified as a natural dialogue system. Other important aspects include:

 Adaptivity of the system
 Support of implicit confirmation
 Usage of verification questions
 Possibilities to correct information that has already been given
 Over-informativeness (giving more information than has been asked for)
 Support for negations
 Understanding references by analyzing discourse and anaphora
 Natural language generation to prevent monotonous and recurring prompts
 Adaptive and situation-aware formulation
 Social behavior (greetings, same level of formality as the user, politeness)
 Quality of speech recognition and synthesis

Although most of these aspects are issues in many different research projects, there is a lack of tools that support the development of dialogue systems addressing these topics. Apart from VoiceXML, which focuses on interactive voice response systems and is the basis for many spoken dialogue systems in industry (customer support applications), and AIML (Artificial Intelligence Markup Language), which is famous for the A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) chatbot, none of these integrate linguistic features like dialog acts or language generation. Therefore NADIA (Natural DIAlogue system), a research prototype, gives an idea of how to fill that gap and combines some of the aforementioned aspects, such as natural language generation, adaptive formulation and sub-dialogues.

Applications
Dialogue systems can support a broad range of applications in business enterprises, education,
government, healthcare, and entertainment. For example:

 Responding to customers' questions about products and services via a company's website
or intranet portal
 Customer service agent knowledge base: Allows agents to type in a customer's question
and guide them with a response
 Guided selling: Facilitating transactions by providing answers and guidance in the sales
process, particularly for complex products being sold to novice customers
 Help desk: Responding to internal employee questions, e.g., responding to HR questions
 Website navigation: Guiding customers to relevant portions of complex websites—a
Website concierge
 Technical support: Responding to technical problems, such as diagnosing a problem with
a product or device
 Personalized service: Conversational agents can leverage internal and external databases to personalize interactions, for example answering questions about account balances, providing portfolio information, or delivering frequent flier or membership information
 Training or education: They can provide problem-solving advice while the user learns
 Simple dialogue systems are widely used to decrease human workload in call centres. In
this and other industrial telephony applications, the functionality provided by dialogue
systems is known as interactive voice response or IVR.

In some cases, conversational agents can interact with users using artificial characters. These
agents are then referred to as embodied agents.

Select and evaluate an appropriate interaction style


 Heads down is the default interaction style for mobiles. Users stare down at the screen while
prodding and swiping.
 We are information omnivores and we are driven to consume and create content. It’s not
surprising then that heads-down screen time is popular—the screen offers a rich visual display
that can communicate a great deal of content, quickly and pleasingly.
 Think instead of what face-on interactions could offer your users. Face on is about giving your
users more of a chance to maintain eye contact with the world around them.

Guidelines on interaction style design choice

1. Regarding the screen design

 Use relatively less arbitrary metaphors to represent objects
 Display only objects which can be manipulated at the given time
 Represent the state of the object too, possibly by color coding.
 Keep consistency by putting common objects at the same place on all screens.

2. Regarding the interaction design

 Make interactions as direct as possible by using selecting, dragging, etc.
 Make operations reversible when possible.
 Issue a "warning" message before any destructive operation.
 Always display a clearly marked object for exiting the program.
 Provide keyboard shortcuts for the most often used commands.

3. Regarding the user support

 Provide both context-sensitive and object-sensitive help.

Outcome 3: Be able to design user interfaces

1. User interface requirements of a task


Interactive Systems
At its broadest, an interactive system can be any technology intended to help people complete a task and achieve their goals. However, this could include things like pens, which could help achieve the goal of leaving a message for a friend, so we tighten this definition to: any technology incorporating some form of electronic logic that is designed to help people complete tasks and achieve goals.

Alternatively, we could define this as any complex system with a UI - that is, something we need to give instructions to and whose status we need to know while it carries out those instructions. These things are called systems (or artifacts), and we have dialogues with them. Having dialogues with things other than humans (and other animals) is a relatively new concept that has emerged over the past 50 years.

Usability
For many years, the key has been thought to be making interactive systems usable, i.e., giving
them usability.

To be usable a system needs to be effective. The system supports the tasks the user wants to do
(and the subcomponents of such tasks).

We could also consider other components that make things usable. Efficiency - the system allows users to do tasks very quickly without making (many) errors; Learnable - the system is easy enough to learn to use (commensurate with the complexity of the tasks the users want to undertake); Memorable - the system is easy enough to remember how to use (once learnt) when users return to tasks after periods of non-use.

We are now moving beyond that to consider satisfaction - the system should make users feel
satisfied with their experience of using it.

Positive user experience, rather than usability has now become the focus of the design of
interactive systems, particularly as we have so many systems that are for leisure, rather than for
work. This expands usability to cover issues such as:

 Enjoyable
 Fun
 Entertaining
 Aesthetically pleasing
 Supportive of creativity

Another way to think about usability is for the user interface to be transparent/translucent to the user - the user should be concentrating on their task, and not on how to get the system to do the task. This might not be the case originally, though. For example, with the pen, you had to think about how to use it when you were younger, and now you don't.

Difficulty in Designing Interactive Systems


Designing interactive computer systems for non-experts has only been practised for around 25 years, whereas something like the teapot has had 4000 years to be perfected (and still has a dripping spout!). Books have had 500 years to develop the best design features (page numbers, table of contents, index, chapter numbers, headings, etc.), and books can be used as interactive systems.

Affordance
Some things, e.g. doors, have a method of use which is ambiguous. Doors should afford opening in the appropriate way: their physical appearance should immediately tell you what to do.

Affordance could be formally defined as: "The perceived and actual properties of an object,
primarily those properties that could determine how the object could possibly be used" (Norman,
1998).

Affordances could be innate, or perhaps culturally learnt, but a lifetime of experience with doors, etc., means the interface to these systems is (or should be) transparent. Well-designed objects, both traditional and interactive, have the right affordances.

Are Current Systems Sufficiently Usable?


It's been estimated that 95% of functions of current systems are not used, either because the user
does not know how, or doesn't want to. One of the causes of this is that anyone can use a
computer these days.

Key Principles of Interactive Design


 Learnability - how easy the system is to learn to use
 Memorability - how easy is the system to remember how to use
 Consistency - to what extent are similar tasks conducted in similar ways within the
system (this will contribute to learnability)
 Visibility - to what extent does the system make it clear what you can/should do next
 Constraints - design the system so that users won't make mistakes and won't be led into
frustrating/irritating dead ends
 Feedback - provide good feedback on the status of the system, what has happened, what
is happening, what will happen

Interaction Component Styles


We need to get computers and complex systems to do things - somehow we have to "tell" them
what to do. In turn, they have to give us information - the status of the system, what to do next,
what's wrong, how to fix it.

One metaphor for this is to consider interacting with these systems as a dialogue - you tell/ask
the system to do something, it tells you things back; not too dissimilar to having a conversation
with a human being. Another metaphor is to consider these systems as objects that you do things
with and interact with (for example, putting waste in a waste paper bin), or as navigating through
a space and going places (the web).

Command Line Dialogue


This style is not widely used, but it is useful to understand it in order to consider more recent and long-term developments. It was the first interaction style that appeared on PCs, taking over from mainframe systems.

The conversational metaphor applies here: you're talking to the computer through your keyboard and it reacts. However, the language you speak must be correct to the last dot and in the correct order, much like speaking a foreign language. The system doesn't give you any clues about what to do, so you must remember (or use a crib sheet) the syntax of the language. These commands can get quite long and complex, especially when passing lots of options and (in UNIX) piping one command into another.
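To see how unforgiving this syntax is, the short Python sketch below defines a hypothetical copy command using the standard argparse module; the command name and its options are invented for illustration, not taken from any real shell.

# Hypothetical "copy" command: every flag and argument must be exactly right.
import argparse

parser = argparse.ArgumentParser(prog="copy")
parser.add_argument("source")                    # required positional argument
parser.add_argument("destination")               # required positional argument
parser.add_argument("-r", "--recursive", action="store_true",
                    help="copy directories recursively")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="report each file as it is copied")

# A correct invocation parses silently...
args = parser.parse_args(["-rv", "notes/", "backup/"])
print(args.source, args.destination, args.recursive, args.verbose)

# ...whereas a misspelled flag such as "--recurse", or a missing destination
# argument, makes parse_args() stop with a terse usage error - the burden of
# getting the syntax exactly right falls entirely on the user.

The point of the sketch is simply that the machine accepts one exact form and gives little help when the user deviates from it.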

You also get limited feedback about what is happening: a command such as rm may return you directly to the command line after deleting 0 or 100 files. The later versions of DOS took the feedback step too far, however, asking for confirmation of every file by default. This level of feedback is programmed by the interaction designer, who has to remember to do this and get the level of interaction right.

If you do get the command correct, however, this can be a very efficient way of operating a
complex system. A short, but powerful, language allows you to achieve a great deal and people
are willing to invest the time to learn this language to get the most efficient use of the system.

However, although computers are good at dealing with cryptic strings, complex syntax and an
exact reproduction of the syntax every time, humans aren't. This interaction system stemmed
from the fact that processing power was expensive, so humans had to adapt to the way computers
needed to interact, not vice versa. This is no longer the case.

Menu Interaction Style


Although the command line style was good for experts, it wasn't for novice or infrequent users,
so an interaction style was developed which is almost the complete opposite of command line
dialogue in terms of strengths and weaknesses - menus.

Menus are simple, as not much needs to be remembered as the options are there on screen and
the physical interface corresponds directly to the options available. Feedback is immediate -
selecting an option will either take you to another screen of menus or to perform the selected
task.

Feedback is natural and built in - you can see whether you are going the right way because something relevant occurs, much like handling objects gives you instant built-in feedback.
Selections from objects (such as radio buttons) can be thought of as a menu, even though the
selection method is different.

Menu systems should be usable without any prior knowledge or memory of previous use, as they lead you through the interaction. This is excellent for novice and infrequent users (hence their use in public terminals of many varieties). However, a complex menu structure can make it hard to work out where particular features will be found (e.g., in IVR systems).

Expert and frequent users get irritated by having to move through the same menu structure every time they do a particular task (hence the shortcut keys in applications such as Word), and menu systems also present a lack of flexibility - you can only do what the available menu options allow, so menu-driven systems only work where there is a fairly limited number of options at any point. Items also need to be logically grouped.
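The toy Python loop below illustrates the trade-off: the options are always visible and a wrong choice gets immediate feedback, but every task means stepping through the same menu. The menu items are invented for illustration.

# Toy menu-driven interaction: nothing has to be remembered because the
# options are always displayed, but every task repeats the same menu walk.
MENU = {
    "1": "Check balance",
    "2": "Transfer money",
    "3": "Exit",
}

def show_menu():
    for key, label in MENU.items():
        print(key + ". " + label)

def run():
    while True:
        show_menu()
        choice = input("Select an option: ").strip()
        if choice == "1":
            print("Your balance is 120.50")        # immediate, built-in feedback
        elif choice == "2":
            print("Transfer is not available in this demo")
        elif choice == "3":
            break
        else:
            print("Please choose 1, 2 or 3")       # constrained set of options

if __name__ == "__main__":
    run()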

Form Fill-In Interaction Style


A form is provided for the user to fill in. This is perhaps not the most revolutionary or interesting of interaction styles, but it is based on another real-world metaphor - filling out a paper form. It also illustrates another problem with metaphors: not everything transfers well from one medium to another.

Forms present lots of little problems:

 How many characters can be typed in the field - this is often not clear, and there is no inherent indication of it (it needs to be explicitly built in by the interaction designer) - this violates the feedback principle.
 What format is supposed to be used - particularly for things like dates.
 Lack of information - what if all the information requested by the form isn't available immediately? The design may not let you proceed beyond the dialogue box.

However, this style does have strengths. If the system is well designed, any novice can use it, it has a strong analogy to paper forms, and the user can be led easily through the process. On the other hand, forms are easy to design badly, and hence confuse and irritate the user.

Direct Manipulation
The heart of modern graphical user interfaces (GUIs, sometimes referred to as WIMP -
Windows, Icons, Mouse/Menus, Pointers) or "point and click" interfaces is direct manipulation
(note, GUI and WIMP themselves are not interaction styles).

The idea of direct manipulation is that you have objects (often called widgets) in the interface
(icons, buttons, windows, scrollbars, etc) and you manipulate those like real objects.

One of the main advantages of direct manipulation is that you get (almost) immediate natural
feedback on the consequences of your action by the change in the object and its context in the
interface. Also, with the widgets being onscreen, you are provided with clues as to what you can
do (doesn't help much with items buried in menu hierarchies). The nature of widgets should tell
you something about what can be done with them (affordance), and with widgets, many more
objects can be presented simultaneously (more than menus).

However, interface actions don't always have real-world metaphors you can build on (e.g., executing a file); information-rich interactions don't always work well with the object metaphor; and direct manipulation becomes tedious for repetitive tasks, such as dragging, copying, etc.

Natural Language Interfaces


The ultimate goal of the conversational metaphor is for us to have a dialogue with the computer
in our own natural language. This might be written, but spoken seems easier. Voice in/voice out
systems have been developed, where the computer recognises what you say (using speech
recognition technology) and produces a response, spoken via text-to-speech (TTS).

This kind of system should be "walk up and use" - just speak your own language - and this should suit both novice and expert users. This isn't yet technologically feasible, as the AI is largely fake and can only recognise key words. If speech recognition fails, it is easy to get into nasty loops where it is not clear how you get out again (this violates the feedback principle). There is also the problem of different accents and of recognising older voices, and privacy and security issues are introduced.

User interfaces and usability


Usability design aims to improve the usability of a product and is an important guide for the actual design; it can also be regarded as "user-centered design". It includes two important parts. The first is usability testing, which is based on psychological research into the target users (user models, user needs, usage processes, etc.). The second is to combine the basic principles of Cognitive Psychology, Ergonomics, Industrial Psychology and other disciplines and apply them flexibly in the design work.

Here are Jakob Nielsen's 10 usability heuristics:

1) Visibility of system status
2) Match between system and the real world
3) User control and freedom
4) Consistency and standards
5) Error prevention
6) Recognition rather than recall
7) Flexibility and efficiency of use
8) Aesthetic and minimalist design
9) Help users recognize, diagnose, and recover from errors
10) Help and documentation

1. What is usability?
Usability is a measure of how well a product can be used in a specific scenario by specific users to achieve a particular goal effectively and with satisfaction. First, usability is not only related to the interface design but also involves the technical level of the entire system. Second, usability is reflected by human factors and is evaluated by carrying out a variety of tasks. Third, usability describes how effectively a user can interact with a product and how easily the product can be operated.

Usability is "the extent to which the product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."
Usability is defined by five quality components:

 Learnability
 Efficiency
 Memorability
 Errors
 Satisfaction

Usability Evaluation is an assessment of the usability of a product, item, system or interface.

2. Why is the usability design so important?


From the user's point of view, usability is important because it enables users to complete tasks accurately and to operate the product in a pleasant mood rather than feeling stupid. From the developer's point of view, usability is an important principle that determines the success of a system. From the manager's point of view, poor usability of products will greatly reduce productivity, and people will not buy the products. Any product that lacks usability will waste time and energy.

3. What makes a product a usable design?
Efficiency - the degree to which users can finish a particular task and achieve a specific goal correctly and completely in a short time.

Satisfaction - the user's subjective satisfaction and degree of acceptance in the process of using a product.

A site with good usability has to get these factors right: a fast connection, good design, complete description and testing, simple operations, friendly and meaningful information interaction, and a unique style.

4. How to achieve usability design?


The key to achieving high usability is iterative design: the design is optimized gradually through evaluation from the early stages, and the evaluation process enables designers and developers to gather user feedback until the system reaches an acceptable level of usability.

The best way to ensure usability is to test actual users on a working system. Achieving high usability requires the design work to focus on the end users of the system. There are many ways to determine who the major users are, how they work and what must be done. However, the users' schedules and the budget may sometimes discourage this ideal approach.

User-centered design versus task-oriented design


User-centered design (UCD) is an iterative design process in which designers focus on the users
and their needs in each phase of the design process. In UCD, design teams involve users
throughout the design process via a variety of research and design techniques, to create highly
usable and accessible products for them.

In user-centered design, designers use a mixture of investigative methods and tools (e.g., surveys
and interviews) and generative ones (e.g., brainstorming) to develop an understanding of user
needs.

There are five major UCD principles

1. A clear understanding of user and task requirements.
2. Incorporating user feedback to define requirements and design.
3. Early and active involvement of the user to evaluate the design of the product.
4. Integrating user-centred design with other development activities.
5. Iterative design process

The Essential Elements of User-Centered Design

 Visibility: Users should be able to see from the beginning what they can do with the product, what it is about, and how they can use it.
 Accessibility: Users should be able to find information easily and quickly. They should be offered various ways to find information, for example call-to-action buttons, a search option, a menu, etc.
 Legibility: Text should be easy to read. As simple as that.
 Language: Short sentences are preferred here. The easier the phrases and the words, the better.

Task-centered system design is a technique that helps developers design and evaluate interfaces
based on users' real-world tasks. As part of design, it becomes a user-centered requirements
analysis (with the requirements being the tasks that need to be satisfied).

As part of evaluation, the evaluator can do a walk-through of the prototype, using the tasks to
generate a step by step scenario of what a user would have to do with the system. Each step in
the walkthrough asks the questions: is it believable that a person would do this; and does the
person have the knowledge to do it? If not, then a bug has been found.

Note: Task-oriented design is a descriptive term for a methodology of building computer systems which stresses the various tasks that users would like to perform inside the systems and then follows this separation during the development phase. While task-oriented design is not formally defined, the term is typically used to describe the approach itself, rather than any specific kind of design methodology.

Task Analysis and Modeling


A user has goals and needs to know methods of achieving these goals. The user needs feedback
when the goals are achieved. A good interface can make it more or less easy to achieve these
goals. We need to understand the user needs and their goals to design good interfaces.

A task is the set of activities (physical and/or cognitive) in which a user engages to achieve a
goal. Therefore, we can distinguish between a goal, the desired state of a system, and a task, the
sequence of actions performed to achieve a goal (i.e., it is a structured set of activities). Goals,
tasks and actions will be different for different people.

Task analysis and modelling are the techniques for investigating and representing the way people
perform activities: what people do, why they do it, what they know, etc. They are primarily about
understanding, clarifying and organising knowledge about existing systems and work. There is a
lot in common with systems analysis techniques, except that the focus here is solely on the user
and includes tasks other than those performed with an interactive system. The techniques are
applied in the design and evaluation of training, jobs and work, equipment and systems, to
inform interactive system design.

Task analysis involves the analysis of work and jobs and involves collecting data (using
techniques such as interviews and observations) and then analysing it. The level of granularity of
task analysis depends on various factors, notably the purpose of the analysis. Stopping rules are
used to define the depth of a task analysis - the point at which it is appropriate to cease
decomposing. User manuals often stop too early.
Hierarchical Task Analysis
Hierarchical task analysis (HTA) is concerned with observable behavior and the reasons for this
behavior. It is less detailed than some techniques, but is a keystone for understanding what users
do.

HTA represents tasks as a hierarchical decomposition of subtasks and operations, with associated
plans to describe sequencing:

 tasks/subtasks - activities to achieve particular goals/sub-goals
 operations - lowest level of decomposition, this is the level defined by the stopping rule
 plans - these specify the sequencing of activities associated with a task and the conditions under which the activities are carried out

An HTA can be represented by structured, indented text, or using a structured chart notation.

A text version of an HTA for photocopying a sheet of paper may be:

 0. To photocopy a sheet of A4 paper:
1. Enter PIN number on the photocopier
2. Place document face down on glass
3. Select copy details
   3.1 Select A4 paper
   3.2 Select 1 copy
4. Press copy button
5. Collect output

 Plan 0: Do 1-2-4-5 in that order; when the defaults are incorrect, do 3
 Plan 3: Do any of 3.1 or 3.2 in any order; depending on the default settings
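The same hierarchy and plans can be recorded directly as data. The Python sketch below holds the photocopying HTA in a nested dictionary and prints it back as indented text; the field names (goal, plan, subtasks) are an informal choice for illustration, not a standard notation.

# The photocopying HTA above, written as a nested dictionary.
photocopy_hta = {
    "goal": "0. Photocopy a sheet of A4 paper",
    "plan": "Do 1-2-4-5 in that order; when the defaults are incorrect, do 3",
    "subtasks": [
        {"goal": "1. Enter PIN number on the photocopier"},
        {"goal": "2. Place document face down on glass"},
        {"goal": "3. Select copy details",
         "plan": "Do any of 3.1 or 3.2 in any order, depending on the defaults",
         "subtasks": [
             {"goal": "3.1 Select A4 paper"},
             {"goal": "3.2 Select 1 copy"},
         ]},
        {"goal": "4. Press copy button"},
        {"goal": "5. Collect output"},
    ],
}

def print_hta(node, indent=0):
    """Print the hierarchy as the indented-text variant of an HTA."""
    print("  " * indent + node["goal"])
    if "plan" in node:
        print("  " * indent + "  Plan: " + node["plan"])
    for subtask in node.get("subtasks", []):
        print_hta(subtask, indent + 1)

print_hta(photocopy_hta)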

We can consider different types of plan:

 fixed sequence (e.g., 1.1 then 1.2 then 1.3)
 optional tasks (e.g., if the pot is full then 2)
 wait for events (e.g., when the kettle boils, 1.4)
 cycles (e.g., do 5.1-5.2 while there are still empty cups)
 time-sharing (e.g., do 1 and at the same time, do 2)
 discretionary (e.g., do any of 3.1, 3.2 or 3.3 in any order)
 mixtures - most plans involve several of the above

Is waiting part of a plan, or a task? Generally, it is a task if it is a 'busy' wait - you are actively waiting - or part of a plan if the end of the delay is the event, e.g., "when the alarm rings", etc.

To do HTA, you first need to identify the user groups and select representatives and identify the
main tasks of concern. The next step is to design and conduct data collection to elicit information
about these tasks:

 The goals that the users are trying to achieve


 The activities they engage in to achieve these goals
 The reasons underlying these activities
 The information resources they use

These steps can be done with documentation, interviews, questionnaires, focus groups,
observation, ethnography, experiments, etc...

The data collected then needs to be analyzed to create specific task models initially.
Decomposition of tasks, the balance of models and the stopping rules need to be considered. The
specific tasks models then should be generalized to create a generic task model - from each task
model for the same goal, produce a generic model that includes all the different ways of
achieving the goal. Models should then be checked with all users, other stakeholders, analysts
and the process should then iterate.

To generate a hierarchy, you need to get a list of goals and then group the goals into a part-whole
structure and decompose further where necessary, applying stopping rules when appropriate.
Finally, plans should be added to capture the ordering of goal achievement. Some general things
to remember about modeling are:

 Model specific tasks first, then generalize


 Base models on real data to capture all the wonderful variations in how people do tasks
 Model why people do things, as well as how
 Remember, there is no single, correct model

 Insight and experience are required to analyze and model tasks effectively and to then use
the models to inform design

When you are given an initial HTA, you need to consider how to check/improve it. There are
some heuristics that can be used:

 paired actions (for example, place kettle on stove can be broken down to include turning
on the stove as well)
 restructure
 balance
 generalize

There are limitations to HTA however. It focuses on a single user, but many tasks involve the
interaction of groups of people (there is a current shift towards emphasizing the social and
distributed nature of much cognitive activity). It is also poor at capturing contextual information
and sometimes the 'why' information and can encourage a focus on getting the model and
notation 'right', which detracts from the content. There is a danger of designing systems which
place too much emphasis on current tasks or which are too rigid in the ways they support the
task.

There are many sources of information that can be used in HTA. One such is documentation -
although the manuals say what is supposed to happen, they are good for key words and
prompting interviews, and another may be observation (as discussed above) and interviews (the
expert: manager or worker? - interview both).

We could also consider contextual analysis, where physical, social and organizational settings of
design need to be investigated and taken into account in design. For example:

 Physical (dusty, noisy, light, heat, etc...)


 Social (sharing of information/displays, communication, privacy, etc...)
 Cultural (etiquette, tone, reading/scanning style, terminology, data formats, etc...)
 Organizational (hierarchy, management style, user support, availability of training, etc.)

GOMS
GOMS is an alternative to HTA that is very different. It is a lot more detailed and in depth and is
applied to systems where timing and the number of keystrokes are vital. It is more complex to
use than HTA, and it is not common. GOMS is an acronym for:

 Goals - the end state the user is trying to reach; involves hierarchical decomposition into
tasks
 Operators - basic actions, such as moving a mouse
 Methods - sequences of operators or procedures for achieving a goal or sub-goal
 Selection rules - invoked when there is a choice of methods
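A worked example helps here. The GOMS family includes the keystroke-level model (KLM), in which each operator is assigned an approximate standard time; the Python sketch below uses the commonly quoted values from Card, Moran and Newell (about 0.2 s per keystroke, 1.1 s per pointing action, 0.4 s per hand movement between devices, 1.35 s per mental preparation). The exact figures vary between sources, so the numbers are illustrative only.

# Keystroke-Level Model estimate for a small task, using commonly quoted
# operator times in seconds (these values differ slightly between sources).
OPERATOR_TIMES = {
    "K": 0.2,   # K: press a key or button
    "P": 1.1,   # P: point at a target with the mouse
    "H": 0.4,   # H: move the hand between keyboard and mouse
    "M": 1.35,  # M: mental preparation
}

def estimate_seconds(sequence):
    """Sum the operator times for a sequence such as 'H P K M K K K K'."""
    return sum(OPERATOR_TIMES[op] for op in sequence.split())

# Example: reach for the mouse, point at a field, click, think, type "2021".
task = "H P K M K K K K"
print("Estimated time: %.2f s" % estimate_seconds(task))

This is the sense in which GOMS is applied where timing and keystroke counts are vital: the designer compares such estimates across alternative interface designs.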

2. User interface requirements
Input/Output Design and User Interface Design
The user interface is not just input screens; it is communication between users and the system, and it is two-way. Input data and instructions (what users want to do) go into the system in many forms. The system replies with outputs, error messages, feedback, warnings, help functions, etc. The interface also includes how users navigate through the system - the commands, menus, or links that lead users to the next screen. It is the structure of the system from the users' viewpoint (while DFDs and flowcharts are the structure of the system from the programmers' viewpoint).

System Boundary

Outputs are data going out from the system to the users. The users may be an external entity, or may be internal (at the boundary between the computer system and the manual system). You have to be careful when determining the output data flow. Notice that a dataflow such as an inquiry form is a form that induces input from the user (the customer); that means that although the dataflow goes out of the system boundary, the form is an input form and not an output.

User Interface Design


User interface (UI) design is the process of making interfaces in software or computerized
devices with a focus on looks or style.
User interface design or UI design generally refers to the visual layout of the elements that a user
might interact with in a website or technological product. This could be the control buttons of a radio, or the visual layout of a webpage. User interface designs must not only be attractive to
potential users, but must also be functional and created with users in mind.

Importance of user interface


• System users often judge a system by its interface rather than its functionality
• A poorly designed interface can cause a user to make catastrophic errors
• Poor user interface design is the reason why so many software systems are never used

UI design principles
• UI design must take account of the needs, experience and capabilities of the system users
• Designers should be aware of people‘s physical and mental limitations (e.g. limited short-term
memory) and should recognize that people make mistakes
• UI design principles underlie interface designs although not all principles are applicable to all
designs

Choosing Interface Elements


Users have become familiar with interface elements acting in a certain way, so try to be consistent
and predictable in your choices and their layout. Doing so will help with task completion,
efficiency, and satisfaction.

Interface elements include but are not limited to:


 Input Controls: buttons, text fields, checkboxes, radio buttons, dropdown lists, list boxes,
toggles, date field
 Navigational Components: breadcrumb, slider, search field, pagination, slider, tags, icons
 Informational Components: tooltips, icons, progress bar, notifications, message boxes,
modal windows
 Containers: accordion
There are times when multiple elements might be appropriate for displaying content. When this happens, it's important to consider the trade-offs. For example, elements that help save space can put more of a mental burden on the user by forcing them to guess what is within the dropdown or what the element might be.
Best Practices for Designing an Interface
Everything stems from knowing your users, including understanding their goals, skills,
preferences, and tendencies. Once you know about your user, make sure to consider the
following when designing your interface:

 Keep the interface simple. The best interfaces are almost invisible to the user. They avoid
unnecessary elements and are clear in the language they use on labels and in messaging.
 Create consistency and use common UI elements. By using common elements in your UI,
users feel more comfortable and are able to get things done more quickly. It is also important
to create patterns in language, layout and design throughout the site to help facilitate
efficiency. Once a user learns how to do something, they should be able to transfer that skill
to other parts of the site.
 Be purposeful in page layout. Consider the spatial relationships between items on the page
and structure the page based on importance. Careful placement of items can help draw
attention to the most important pieces of information and can aid scanning and readability. 
 Strategically use color and texture. You can direct attention toward or redirect attention
away from items using color, light, contrast, and texture to your advantage.
 Use typography to create hierarchy and clarity. Carefully consider how you use typefaces: different sizes, fonts, and arrangements of the text help to increase scannability, legibility and readability.
 Make sure that the system communicates what’s happening. Always inform your users
of location, actions, changes in state, or errors. The use of various UI elements to
communicate status and, if necessary, next steps can reduce frustration for your user.
 Think about the defaults. By carefully thinking about and anticipating the goals people
bring to your site, you can create defaults that reduce the burden on the user. This becomes
particularly important when it comes to form design where you might have an opportunity to
have some fields pre-chosen or filled out.

Output Design
The design of output is the most important task of any system. During output design, developers
identify the type of outputs needed, and consider the necessary output controls and prototype
report layouts.

Objectives of Output Design


The objectives of output design are −

 To develop output design that serves the intended purpose and eliminates the production
of unwanted output.
 To develop the output design that meets the end users' requirements.
 To deliver the appropriate quantity of output.
 To form the output in appropriate format and direct it to the right person.
 To make the output available on time for making good decisions.

Let us now go through various types of outputs −

External Outputs
Manufacturers create and design external outputs for printers. External outputs enable the system to trigger actions on the part of their recipients or to confirm actions to their recipients.

Some of the external outputs are designed as turnaround outputs, which are implemented as a
form and re-enter the system as an input.

Internal outputs
Internal outputs are present inside the system, and used by end-users and managers. They support
the management in decision making and reporting.

There are three types of reports produced by management information systems −

 Detailed Reports − They contain present information with almost no filtering or restriction, and are generated to assist management planning and control.
 Summary Reports − They contain trends and potential problems which are categorized and summarized, generated for managers who do not want details.
 Exception Reports − They contain exceptions: data filtered against some condition or standard before being presented to the manager as information.

Output Integrity Controls


Output integrity controls include routing codes to identify the receiving system, and verification messages to confirm successful receipt of messages, handled by the network protocol.

Printed or screen-format reports should include a date/time of report printing and of the data. Multipage reports contain a report title or description, and pagination. Pre-printed forms usually include a version number and effective date.

Outputs present information to system users. Outputs, the most visible component of a working
information system, are the justification for the system. During systems analysis, you defined
output needs and requirements, but you didn't design those outputs. In this section, you will learn
how to design effective outputs for system users.

Output Media
• Paper
• Screen
• Microfilm/Microfiche
• Video/Audio
• CDROM, DVD
• Other electronic media
Although the paperless office (and business) has been predicted for several years, it has not yet become a reality. Perhaps there is an irreversible psychological dependence on paper as a medium. In any case, paper output will be with us for a long time.
The use of film does present its own problems - microfiche and microfilm can only be produced
and read by special equipment. Therefore, other than paper, the most common output medium is
screen (monitor).

Although the screen medium provides the system user with convenient access to information, the
information is only temporary. When the image leaves the screen, that information is lost unless
it is redisplayed. If a permanent copy of the information is required, paper and film are superior
media. Often times when system designers build systems that include screen outputs, they also
may provide the user with the ability to obtain that output on paper.

Output Formats
• Tabular output
• Zoned output
• Graphic output
• Narrative output
Most of the computer programs ever written have probably generated tabular reports.

Zoned output is often used in conjunction with tabular output. For example, an order output
contains zones for customer and order data in addition to tables (or rows of columns) for ordered
items.
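As a small illustration of these two formats, the Python sketch below prints an order output with a zoned area (customer and order data) followed by a tabular area (rows of columns for the ordered items); the field names and figures are invented for the example.

# Invented order data: a zoned header area plus a tabular detail area.
order = {
    "customer": "A. Mwangi", "order_no": "SO-1042", "date": "2020-03-14",
    "items": [
        ("Stapler", 2, 4.50),
        ("A4 paper (ream)", 10, 3.20),
    ],
}

# Zoned output: labelled fields grouped into zones of the page.
print("Customer: %-20s Order no: %s" % (order["customer"], order["order_no"]))
print("Date:     %s" % order["date"])
print()

# Tabular output: rows of columns under a header line.
print("%-20s%5s%8s%8s" % ("Item", "Qty", "Unit", "Total"))
for name, qty, price in order["items"]:
    print("%-20s%5d%8.2f%8.2f" % (name, qty, price, qty * price))
print("%-33s%8.2f" % ("Grand total", sum(q * p for _, q, p in order["items"])))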

Input Design
In an information system, input is the raw data that is processed to produce output. During the
input design, the developers must consider the input devices such as PC, MICR, OMR, etc.

Therefore, the quality of system input determines the quality of system output. Well-designed input forms and screens have the following properties −

 It should serve a specific purpose effectively, such as storing, recording, and retrieving information.
 It ensures proper completion with accuracy.
 It should be easy to fill in and straightforward.
 It should focus the user's attention and support consistency and simplicity.
 All these objectives are obtained using knowledge of basic design principles regarding −
o What inputs are needed for the system?
o How end users respond to different elements of forms and screens.

Objectives for Input Design


The objectives of input design are −

 To design data entry and input procedures
 To reduce input volume
 To design source documents for data capture or devise other data capture methods
 To design input data records, data entry screens, user interface screens, etc.
 To use validation checks and develop effective input controls.

Data Input Methods


It is important to design appropriate data input methods to prevent errors while entering data.
These methods depend on whether the data is entered by customers in forms manually and later
entered by data entry operators, or data is directly entered by users on the PCs.

A system should prevent users from making mistakes by −

 Clear form design, leaving enough space for writing legibly.
 Clear instructions on how to fill in the form.
 Clear form design.
 Reducing key strokes.
 Immediate error feedback.

Some of the popular data input methods are −

 Batch input method (Offline data input method)


 Online data input method
 Computer readable forms
 Interactive data input

Input Integrity Controls


Input integrity controls include a number of methods to eliminate common input errors by end-users. They also include checks on the values of individual fields, both for format and for the completeness of all inputs.
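A minimal sketch of such field-level checks in Python is given below; the particular rules (a required name, a date in one fixed format, a positive whole-number quantity) are assumptions chosen for illustration rather than a prescribed standard.

# Field-level input integrity checks: completeness and format of each field.
# The rules below are illustrative only.
from datetime import datetime

def validate_order_input(fields):
    errors = []
    if not fields.get("customer_name", "").strip():
        errors.append("customer_name is required")                  # completeness
    try:
        datetime.strptime(fields.get("order_date", ""), "%Y-%m-%d")
    except ValueError:
        errors.append("order_date must use the YYYY-MM-DD format")  # format
    try:
        if int(fields.get("quantity", "")) <= 0:
            errors.append("quantity must be a positive whole number")
    except ValueError:
        errors.append("quantity must be a positive whole number")
    return errors

# Immediate error feedback for a partially wrong submission.
print(validate_order_input(
    {"customer_name": "A. Mwangi", "order_date": "14/03/2020", "quantity": "0"}))

The same checks could run as the user types (for immediate error feedback) or in a batch after data entry; either way the aim is to catch format and completeness errors before they reach the database.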

Audit trails for data entry and other system operations are created using transaction logs, which give a record of all changes introduced into the database, to provide security and a means of recovery in case of any failure.

3. Prototyping
Users often can't say what they want, but as soon as you give them something and they get to use
it, they know what they don't want. A bridge is needed between talking to users in the abstract
about what they might want and building a full-blown system (with all the expense and effort
that involves). The prototype is that bridge, it might be a paper-based outline of screens, a video
simulation of interaction or a 3D cardboard mock-up of a device.

Prototypes are very useful for discussing ideas with stakeholders; they encourage reflection on the design, allow you to explore alternative designs without committing to one idea, and make ideas from scenarios more concrete.

User Interface Prototyping


User interface (UI) prototyping is an iterative development technique in which users are actively
involved in the mocking-up of the UI for a system. UI prototypes have several purposes:

 As an analysis artifact that enables you to explore the problem space with your
stakeholders.
 As a design artifact that enables you to explore the solution space of your system.
 A basis from which to explore the usability of your system.
 A vehicle for you to communicate the possible UI design(s) of your system.
 A potential foundation from which to continue developing the system (if you intend to throw the prototype away and start over from scratch then you don't need to invest the time writing quality code for your prototype).

A prototype is an early sample, model, or release of a product built to test a concept or process.
It is a term used in a variety of contexts, including semantics, design, electronics, and software
programming. A prototype is generally used to evaluate a new design to enhance precision by
system analysts and users. Prototyping serves to provide specifications for a real, working system
rather than a theoretical one. In some design workflow models, creating aprototype (a process
sometimes called materialization) is the step between the formalization andthe evaluation of an
idea.

Basic prototype categories


Prototypes explore different aspects of an intended design:

 A Proof-of-Principle Prototype serves to verify some key functional aspects of the intended
design, but usually does not have all the functionality of the final product.
 A Working Prototype represents all or nearly all of the functionality of the final product.
 A Visual Prototype represents the size and appearance, but not the functionality, of the
intended design. A Form Study Prototype is a preliminary type of visual prototype in
which the geometric features of a design are emphasized, with less concern for color, texture,
or other aspects of the final appearance.

 A User Experience Prototype represents enough of the appearance and function of the
product that it can be used for user research.
 A Functional Prototype captures both function and appearance of the intended design,
though it may be created with different techniques and even different scale from final design.
 A Paper Prototype is a printed or hand-drawn representation of the user interface of a
software product. Such prototypes are commonly used for early testing of a software design,
and can be part of a software walkthrough to confirm design decisions before more costly
levels of design effort are expended.

Human factors and ergonomics


Human factors and ergonomics (commonly referred to as human factors) is the application of
psychological and physiological principles to the engineering and design of products, processes,
and systems. The goal of human factors is to reduce human error, increase productivity, and
enhance safety and comfort with a specific focus on the interaction between the human and the
thing of interest.

Human factors and ergonomics is concerned with the "fit" between the user, equipment, and
environment or "fitting a job to a person". It accounts for the user's capabilities and limitations in
seeking to ensure that tasks, functions, information, and the environment suit that user.
To assess the fit between a person and the used technology, human factors specialists or
ergonomists consider the job (activity) being done and the demands on the user; the equipment
used (its size, shape, and how appropriate it is for the task), and the information used (how it is
presented, accessed, and changed).
Howell (2003) outlines some important areas of research and practice in the field of human factors.
Areas of Study in Human Factors Psychology

Attention – Includes vigilance and monitoring, recognizing signals in noise, mental resources, and divided attention
Cognitive engineering – Includes human-software interactions in complex automated systems, especially the decision-making processes of workers as they are supported by the software system
Task analysis – Breaking down the elements of a task
Cognitive task analysis – Breaking down the elements of a cognitive task

Cognitive Ergonomics
Cognitive ergonomics is the field of study that focuses on how well the use of a product matches
the cognitive capabilities of users. It draws on knowledge of human perception, mental processing,
and memory. Rather than being a design discipline, it is a source of knowledge for designers to
use as guidelines for ensuring good usability.
Cognitive ergonomics mainly focuses on work activities which:

 have an emphasized cognitive component (e.g., calculation, decision-making)
 are in safety-critical environments
 are in a complex, changeable environment (i.e., where tasks cannot be predetermined)

Ergonomics is a people-centered approach that aims to fit the work to the person, not the person
to the work. This topic explains how it can be used in all aspects of work from tool, equipment
and software design to work organization and job design.
In recent years, the growth and development of new technology has pushed ergonomics in new
directions. A major role of ergonomics today is to make sense of this new technology and ensure
it fits the needs and expectations of the people who use it. This is likely to continue as one of the
field‘s greatest challenges.

Sources of Information
Ergonomics draws its knowledge from a number of different disciplines, depending on the aspect
of work being addressed.
Anatomy: defines the structure of the body and enables understanding of the key components
and structures involved in human movement and work.
Biomechanics: concerned with the movement of body parts and the forces placed on them
during activity. It draws knowledge from physical, engineering and biological sciences.
Ergonomics uses the information for the design of physical work tasks and equipment.
Physiology: the study of how the body functions down to the chemical level, as in metabolism,
and the changes which occur during activity. Ergonomics uses this information to try and
produce maximum production from minimum effort.
Psychology: ergonomics uses psychological information to design tasks to meet the mental
capabilities of people. Research into perception, memory, reasoning, concentration and alertness is relevant to the design of tasks. This should ensure they are intuitive and logical, making the task easier for the person to complete and so minimising the risk of stress for users.
Anthropometry: the science that deals with body measurements. Sources of anthropometric data
vary in their presentation of the dimensions. For a given body dimension a measurement is
normally provided in millimetres for a male or female of a given nationality, sometimes within a
given age group (eg standing eye height, female, 19–65 years of age, British). These
measurements are used to establish the heights, lengths, depths and sizes for elements of
equipment, workstations or workplace design. This ensures the design can be used by the
specified group of people.
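
As a purely illustrative aside (not part of the original notes), anthropometric tables are usually
quoted as percentiles, and a percentile value can be estimated from a mean and standard deviation
if the dimension is assumed to be approximately normally distributed. The short Python sketch
below shows the idea; the figures used are placeholder values, not real survey data.

# Illustrative sketch (not from the source): estimating a percentile value for a body
# dimension, assuming the dimension is approximately normally distributed.
# The mean and standard deviation below are placeholder figures, not survey data.

from statistics import NormalDist

def dimension_percentile(mean_mm: float, sd_mm: float, percentile: float) -> float:
    """Return the body dimension (mm) at the given percentile (0-100)."""
    return NormalDist(mu=mean_mm, sigma=sd_mm).inv_cdf(percentile / 100)

# Example: placeholder figures for standing eye height, female, 19-65 years
mean_mm, sd_mm = 1505.0, 61.0
p5 = dimension_percentile(mean_mm, sd_mm, 5)    # smaller user to accommodate
p95 = dimension_percentile(mean_mm, sd_mm, 95)  # larger user to accommodate
print(f"5th percentile: {p5:.0f} mm, 95th percentile: {p95:.0f} mm")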

Ergonomics takes a multi-disciplinary approach. It draws on knowledge from other disciplines as
required and combines the knowledge to assess or design the work being studied.
For example, biomechanical knowledge can be combined with psychological knowledge when
designing physical tasks which require concentration. The biomechanical knowledge relates to
how and what the body can physically achieve, while psychology provides the information
concerning the ability to perform the processes and retain attention in order to do so.
The result: a stated load, work posture and work height, with a guideline time for completing the
work before concentration fails and errors occur.

Approach to Ergonomics
Ergonomics takes a holistic approach. It looks at all aspects of the work, from the capabilities of
the person through to the structure of the overall work system and their interactions.
A concentric model can be used to illustrate this approach and its common stages.

Figure: Concentric Model

Ergonomics focuses on the needs of the person before looking at the work task to be completed.
By establishing the physical and mental capabilities of the person, certain criteria or constraints
can be established for the task.
The workstation is designed in relation to the physical requirements of the person and the
operational requirements of the task. For example, the height and reach of the person will be
relevant to the dimensions of their workstation, as will the objects required to complete the task
and their arrangement.

The work environment is reviewed in relation to the person, task and workstation requirements.
For example, the temperature should be within acceptable limits for human work but also
appropriate to the requirements of the task and operation of the workstation.
The work system is the organisation within which the previous elements sit and which ultimately
affect the entire work. For example, this may include performance targets or shiftworking
patterns imposed by senior management. It should be assessed in order to complement, not
constrain, the effectiveness of the interactions between the person, task, workstation and
environment.
Person
The person is the central point of the work. The person combines knowledge from other elements
in the concentric model to complete the work. Effective work is designed by optimising the
person's ability to complete the work, eg designing it so that it is within their mental and physical
capabilities.
The person should be able to work in a comfortable and effective manner. Equipment, furniture,
the environment and task activities should all support this.
Mental capabilities, such as memory, concentration, reasoning and interpretation, should all be
considered when designing tasks, environments and work systems. This should aim to optimise a
person's capacity to interact with the work.
An important, but often overlooked, point is that the person is the one aspect of the work system
that cannot be changed. Therefore the purpose of the concentric model of ergonomics is to adapt
all other elements of the work system to fit the capabilities of the person.

Work Task
Ergonomics addresses the work task by ensuring that the elements of the task are structured to
suit the person carrying it out.
The factors to be considered in relation to the work task include the following.
 Tools and equipment must be easily reached to avoid poor working postures.
 Tasks requiring the exertion of force should be automated or semi-automated wherever
possible, always aiming to minimise the degree of exertion.
 Repetitive tasks should be automated where possible, or frequently interrupted by
changes of activity or work breaks, and should preferably be self-paced.
 Static work postures should be changed regularly — movement during task completion
should be encouraged.
 Changes of activity and job rotation should be encouraged to enable changes of posture,
especially if some or all the tasks are repetitive.
 Rest breaks should be short, but frequent, to prevent fatigue.
 Instructions or information about the task, such as from displays or controls, should be
provided clearly and logically.
 Training should be provided for the task, use of equipment, furniture and health and
safety matters. Training is particularly important when any element undergoes changes
that impact the completion of the task.

 Information required to complete the task, eg security codes or operation sequences,
should be logical and easy to remember.
Understanding the psychological capabilities of the person, such as the limits of his or her
memory and concentration, will help to design a task within those limits. This should make the
task accessible to the person. The provision of any information or instruction should also account
for the way in which people process information. Postural and biomechanical knowledge will
also place the physical elements of the task within the capabilities of the person.

Workstation
The workstation is the area where the task is conducted. Commonly this will be a desk or work
surface, but may equally be at a conveyor belt or within a driver's cab.
The workstation should be designed so that it is accessible for the person using it.
The factors to be considered in relation to the workstation include the following.
 The surface height should be sufficient to avoid stooping or reaching to its surface.
 There should be sufficient legroom for standing and writing tasks so that the person is
able to get close enough to the workstation.
 The surface properties should not present a risk to the person, eg no sharp edges or
unprotected hot/cold surfaces.
 Surrounding workstations or other items in the environment, eg columns or posts, should
not constrain the user's posture.
 Any furniture should support a good working posture, eg chairs should provide lumbar
support.
 The workstation should be suitable for the work task such that all equipment can be
located and arranged for the person to complete the task.
Adjustable equipment and furniture will increase the flexibility of a workstation making it more
accessible to a range of work tasks and/or workers.

Work Environment
The work environment is the place of work which includes the physical environment and general
layout. Environmental concerns include the temperature, humidity, lighting and noise.
The factors to be considered in relation to the work environment include the following.
 Temperature and humidity levels will directly affect the person's comfort and capacity to
work. Excessive temperatures can lead to fatigue and loss of concentration as well as loss
of dexterity.
 Illumination of the task and information is important. Eliminating glare or reflections is
also vital.
 Noise levels can affect concentration or lead to physical discomfort. Noise sources should
be reduced, eliminated or protective equipment supplied.
 Health and safety regulations must be adhered to and, to encourage this, the work tasks
themselves need to be configured to safe systems of work.

Work System
The work system refers to the structure of the organisation and the way it operates. It includes
communications, policies and working standards.
Industry standards or health and safety regulations may place constraints or requirements on the
organisation, which affect the way in which the work is completed.
For example, a company may have to meet quality standards which require comprehensive
quality information to be supplied. This requires the worker to fill in a quality report for all work
completed. This may affect the ergonomics by increasing the activity required in the task (which
may affect the posture or rest breaks), and requires additional material to be stored at the
workstation, ie report forms. The storage of additional material and the need to complete writing
tasks requires additional space at the workstation and may compromise the work posture adopted
or the completion of the task.
Poor job scheduling or job deadline pressures, considered as part of the work system, may also
place constraints or high demands on work tasks and workers. Shift patterns and job rotation are
also elements which affect work and the way it is completed.
Work can be dramatically improved or constrained by work systems. The system must be
reviewed in relation to all of the other elements of work to avoid unforeseen constraints being
imposed or potential improvements remaining unrecognised.

Ergonomics Techniques
The nature of ergonomics requires thorough and comprehensive investigation of all parts of the
work. In order to ensure that all aspects are considered there are specific techniques which can be
employed.

Task Analysis
A task analysis provides information for the design of new work tasks or the evaluation of
existing work tasks. The process identifies what people actually do when they perform tasks,
highlighting connections between activities and identifying unplanned or unusual activities
within the work.
A task analysis is completed in three stages.
1. Task description: this details the activities identified. The format for a task
description may vary depending on the target audience reviewing the information. For
example, it may be shown to senior managers who are used to reading flow charts or it may
be for workers who are more familiar with procedural lists. Most commonly a diagram is
used, although lists, flow charts or tables are equally valid, as long as the analyst's findings
are clear and concise.
2. Data collection: activities are identified through observation, review of records and
manuals or discussions with workers and managers.
3. Data analysis: findings are interpreted in relation to how the system should work. In
particular, any mismatches between the findings and the theoretical completion of the task
can be identified and the cause investigated. For example, the analysts may have recorded a
worker repeatedly pressing a control button when only one press was instructed in the
manual. This may reveal that the button is faulty, the system requires confirmation or the
button press is not being registered with the system. These all have implications for the
effectiveness of the system, particularly in terms of time spent and accuracy of operation.

Hierarchical Task Analysis


Hierarchical task analysis (HTA) is a commonly-used method for breaking the task into its
component activities. Beginning with the overall goal of the task, a structure is created showing
the levels of activity required to complete the task.

Figure: Hierarchical Task Analysis for a Simple Assembly Task

All boxes (activities) in the hierarchy can be numbered to clarify their sequence and dependence
within the work task.
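
As a hypothetical illustration of this numbering (the task names below are invented and not taken
from the figure), an HTA can be held as a simple nested structure and printed with hierarchical
numbers:

# Hypothetical sketch: representing a hierarchical task analysis (HTA) as a nested
# structure and printing numbered activities. Task names are invented examples.

hta = ("Assemble product", [
    ("Collect components", [
        ("Fetch housing", []),
        ("Fetch fixings", []),
    ]),
    ("Fit components", [
        ("Align parts", []),
        ("Insert and tighten screws", []),
    ]),
    ("Inspect and pack", []),
])

def print_hta(node, prefix="0"):
    name, subtasks = node
    print(f"{prefix} {name}")
    for i, sub in enumerate(subtasks, start=1):
        # Sub-activities are numbered relative to their parent (e.g. 1, 1.2, 1.2.1)
        child_prefix = f"{i}" if prefix == "0" else f"{prefix}.{i}"
        print_hta(sub, child_prefix)

print_hta(hta)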

Workflow Analysis
The flow of work is important to the design of workstations and workplaces. Workflow analysis
plots the flow of activity on a plan of the workplace. This helps to identify the pattern of
activities, ensuring they are optimised to avoid wasting time and energy. It also ensures that the
workplace is appropriately designed.
Workflow analysis can also be used to optimise the arrangement of equipment and components
at the workstation. For example, by ensuring that items used together or in sequence are placed
close to each other to improve the completion of the task.
Figure 3: Examples of Workflow Identification and Change

Frequency Analysis
The frequency with which tools and equipment are used can be important factors in work design.
Frequently-used items may be positioned closer to the worker for ease of access and to prevent
unnecessary hold-ups to the process. Equally, the frequency analysis may be used in the
assessment of postures (how often a posture is adopted) or the repetition involved in the work
task.
A simple tick-box technique can be used to record the frequency that an activity or posture
occurs, or an item is used.
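
As a minimal sketch of the same tallying idea (the recorded observations below are invented
examples), a frequency count can be produced directly from a list of observations:

# Minimal sketch (invented observations): tallying how often each posture or
# activity is observed, as a simple alternative to a paper tick-box sheet.

from collections import Counter

observations = [
    "neutral wrist", "wrist deviated", "neutral wrist", "reaching forward",
    "neutral wrist", "wrist deviated", "reaching forward", "reaching forward",
]

frequency = Counter(observations)
for posture, count in frequency.most_common():
    print(f"{posture}: {count}")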

Video Analysis
Digital or video recordings provide permanent, reusable records of activities and work tasks.
Using video has the advantage of providing a record which is accessible to a range of analysts
after the event and away from the work environment. As the record is reusable it provides the
scope to record and analyse data in great detail. It is also a relatively unobtrusive method of
observation and data capture.
The main disadvantage of using digital or other recording techniques is that the camera angle
restricts what is recorded, and therefore constrains what can be analysed without invalidating the
analysis or making it unreliable. The level of detail in which data can be recorded and analysed
can also make the process time-consuming and therefore costly.

Risk Assessments

Ergonomic risk assessments can be conducted to assess the work.
The assessment should be systematic and comprehensive, accounting for all elements of the
work. Checklists can be used to ensure all elements, ie the person, work task, workstation, work
environment and work system, are covered.
All hazards should be identified and recorded and an assessment of the risk posed by the hazard
should be made. All hazards must be addressed, eliminated where possible or reduced as far as
practicable.
Any checklist used should incorporate the five elements of the concentric approach, but can be
tailored to suit the work within the business. For example, by placing more emphasis on the
environmental issues involved in the safe use, storage and handling of chemicals where the
business focuses on their use; or by emphasising the importance of good software design for call-
centre work, where the software is relied on to provide information to the worker and the
customer.

Allocation of Function
Allocation of function is the process of assigning activities to either the person or the machine in
order to complete the work task. By allocating activities to the person or machine most suited to
complete them the overall efficiency and effectiveness of the work is improved.
Each activity needs to be assessed to identify the requirements for completing it and then
assigned according to the abilities of the person or machine. Some general guidelines are shown
in Table below.
Table: Guidance for Allocating Functions
Person: good at
 sensing stimuli at low levels over background noise
 recognising patterns of stimuli
 sensing unexpected occurrences
 decision making
 information recall
 creating alternatives
 interpreting subjective data
 adapting responses
Machine: good at
 sensing stimuli outside the human range
 monitoring
 calculating
 applying force
 high-speed operations
 repetitive tasks
 sustained accuracy and reliability
 counting or measuring
 simultaneous operations

Allocation of function may also be affected by economic or social factors. For example,
mechanising work tasks may require high machinery costs and lead to unemployment for
workers. The "best" performance may not always be required, and contact with people is
sometimes preferable to dealing with machines.
It should also be noted that to optimise job design, retaining some interesting tasks for people
may be more important than mechanising them, leaving only machine monitoring or
maintenance tasks for workers.



Application Areas
There are some specialist areas of application in the field of ergonomics.
Human-computer Interaction
Human-computer interaction (HCI) deals with the interchange between people and computer
technology. It has developed as a specialism of ergonomics, as the use of and dependence on
computer technology has grown.
The main disciplines of reference are psychology and cognitive science, which are concerned
with how information is interpreted, processed and responded to. This knowledge helps to
identify how information and systems can be best organised and presented such that people can
easily and accurately use them. Computer science knowledge is also important, providing
information about the technology and the systems being used. New and emerging information
technologies can also influence decisions concerning ongoing interaction.

Human-machine Interaction
Human-machine interaction deals with the interaction between people and machinery facilitated
by the physical operation of controls and mental interpretation of displays.
This interaction relates to tasks as diverse as driving, operating equipment and monitoring
systems. Knowledge from psychology and anatomy is particularly relevant.

Software Design
Software can be designed to make using systems more intuitive and accessible for work.
Previously systems were designed based on the capacity of the technology with little thought for
the ―front-end‖, or software interface. Now, with increasing demand to make systems more
accessible to everyone and not just to computer experts, the use and arrangement of colour and
text can be developed to enhance the usability of the software. Equally, the structure of menus,
commands and programs, which the user is not so aware of, can be designed to improve the
interface or make it more predictive.
The development of software technology that hides complex programming code and replaces it
with commands familiar from everyday work has greatly advanced the use of computers and their
accessibility to a wide range of workers.
The use of intuitive, or what might be described as "intelligent", software also has implications for
ergonomics, and assumptions should not be made that it will always minimise ergonomic
risks. This may be especially true where different software programs — sometimes at different
levels of maturity — need to be used by the same worker to complete a task.

Interface Design
Communication with machines and computers is done through interfaces. These consist of
displays to relay information about the machine/computer state, and controls to enable people to
react to the display, instructing the machine/computer with a response.
Smartphones and tablets are progressively becoming more important as daily interfaces
used by staff, and so the procurement of these devices should consider ergonomic factors.



Information can be collated, arranged and presented in formats which psychological studies have
shown to be quicker and easier to read and interpret. Colours, shapes and sizes as well as audible
messages can also be used to aid understanding and improve the interaction between the machine
and the person. For example, alarms or warning signals can be designed to provoke different
types of responses, eg a loud alarm or a soft feminine voice advising a warning.
Logical arrangement of displays and controls will improve the interface, particularly where the
arrangement relates to the sequence or frequency of operations. Good design of displays,
controls and commands can mitigate accidental or unintended commands by panel layout.
Psychology is the dominant knowledge base for interface design.

Displays and Controls


Displays and controls are the mechanisms used to enable people and machines to interact. The
state of the machine is relayed to the person through a display (eg liquid crystal or LED displays,
readings and even dials on some older machines) and the person instructs the machine by
operating controls (eg touch, swipe or tap, buttons, switches and levers). Developments in touch
screen and voice recognition technologies can also influence such interaction.
Displays and controls can be ergonomically designed to enable an effective interaction between
the person and the machine. Shapes, colours, locations, text, arrangement and responsiveness can
all affect whether the person is able to accurately obtain information from the machine and
precisely operate it thereafter.

Job Design
The design of the job or work task is important for obtaining the maximum performance from the
person for the minimum effort, but also in terms of creating a level of job satisfaction. There are
three stages considered to be influential in successfully designing jobs.
1. Job rotation: this involves people being moved from one task to another to allow changes
in posture, movement, and mental and physical dexterity. Job rotation can help to
overcome boredom, particularly in repetitive jobs, and to prevent fatigue, loss of
concentration and deterioration in performance.
2. Job enlargement: this involves increasing the scope and range of tasks carried out by one
person, which provides greater flexibility and incorporates wider skills.
3. Job enrichment: this involves enriching the job by incorporating motivating factors, such
as increased responsibility and involvement. Job enrichment aims to provide greater
autonomy over the planning, execution and control of work.

Hand Tool Design


Ergonomics has increasingly been used in the design of hand tools. Use of hand tools requires
movement, effort and, usually, application of force.
Until recently, modern-day tools were mass produced and often designed to suit their purpose
and not to suit the person using them. More recent interest in the problems associated with using
hand tools, such as poor postures and unnecessary exertion of effort, has led to the incorporation
of ergonomics in hand tool design.



An example of hand tool re-design is illustrated in Figure 4. A material cutting tool was re-
designed to facilitate a better arm posture, improving grip of the handle and control over the tool.
Figure 4: Hand Tool Design and Re-design

Particular consideration is given to the following issues.


 Handles: handle length and size relates to the amount of force which can be exerted and
the extent of effort required. The material and grip are important in enabling a good
posture to be established and maintained during the use of the tool. 
 Power assistance: power-assisted tools have greatly increased the speed of task
completion and reduced some postures such as rotating the wrist. However, problems of
vibration have been introduced, as well as issues surrounding the repetitive nature of
tasks.
People-centred Issues
Posture
Ergonomics is concerned with creating good working postures. Good working postures can be
classified as neutral postures, ie those which place a minimal amount of strain on the body and
those in which there is some degree of movement.
Neutral postures occur where minimal effort is required and the least tension occurs in the joints
and muscles (other than to counteract opposing muscles or to overcome gravity). Neutral
postures are usually defined as being at the middle of the range of motion of a joint. For
example:
 when the natural ―S‖ curve of the spine is maintained — as when standing
 the wrist is in a neutral posture when it is not deviated or rotated in any direction
 the arm is in a neutral posture when resting by the side of the body and not reaching
forward.
Figure: Neutral Hand Posture



Figure: View of the “S” Shaped Curve of the Spine

When the posture deviates from neutral, ie when stretching the arm forward or rotating the wrist,
effort is exerted and tension is held in the muscles to maintain the posture. This can, depending
on the degree of deviation and the duration of the posture, lead to fatigue, discomfort, strain and
possibly injury. These can all have an effect on the person, from distracting their attention to
physically preventing them from continuing work.
Ergonomics is also concerned with identifying static postures, ie those in which there is no
movement. These require the muscles to act under tension in order to maintain the posture. As
this tension will reduce the blood flow into the muscle, fatigue and discomfort can occur if the
tension is not released.
Muscle Operation
Muscles operate by contracting and relaxing, creating movement of joints or body parts. They
require oxygen via the blood supply, which is pumped through the muscle as it contracts and
relaxes. The muscle uses the oxygen and produces waste products as a result of the energy
expenditure. The waste products are carried away from the muscle by the blood supply.
When the muscle is held in tension there is no movement: the muscle does not contract and relax,
and consequently blood is not pumped through the muscle as efficiently. As the muscle
sustains the tension and the oxygen is used up (energy is required to hold the muscle in tension),
fatigue is experienced. There is also no means of removing the waste products from the muscle,
which build up as a result of the energy consumption. This leads to fatigue, discomfort and,
subsequently, pain.
Another issue to consider is static work. Seated work is static: it requires little movement of the
upper body and places tension in the muscles of the lower back, particularly where the back is
insufficiently supported by the seat.
Static postures can be alleviated by introducing changes of posture and movement into the work,
eg job rotation from seated work to tasks that involve moving around the office, stretching
exercises and work breaks away from the workstation. A good understanding of the task
requirements will help to design work so that poor and awkward postures are avoided.
Good working postures can be achieved by careful positioning of the equipment and the furniture
required to complete the work and to support the posture of the person. Achieving good postures
need not require special equipment. Careful consideration of the ergonomics of the work can
establish and enhance good working postures.

Musculoskeletal Disorders
Ergonomics can help to prevent injury to the muscular or skeletal structures of the body by
ensuring work is designed for the person's physical capabilities and that the individual worker is
appropriately trained to minimise the risk of such injuries occurring.
In recent years musculoskeletal disorders (or MSDs), such as back injuries and upper limb
disorders, have become major causes of work-related injury with diagnosis, treatment and
prognosis becoming complex in some cases. Sometimes MSDs can become chronic conditions.
This has contributed to more personal injury cases being brought against employers through civil
courts.



MSDs at work are thought to be associated with three factors — force, repetition and posture.
When evaluating and designing work, these three factors should be assessed and optimised to
avoid injury.
 Force: muscles are designed to exert force. Exceeding the force which a muscle can
safely mechanically exert will place a strain on the muscle and may lead to injury. The
duration over which the force must be exerted will also have an effect as it will require
the muscle to be held in tension. Fatigue will occur as the tension continues, making the
muscle less able to exert the amount of force required.
 Repetition: frequent, repetitive movements require the same muscles to repeatedly
contract and relax. If this activity is very rapid or continues for a long period of time the
muscle can become fatigued. 

Muscles require a rest and recovery period to avoid fatigue. The more frequently a
muscle works, the more energy it requires. The body may be unable to supply enough
energy or remove waste products to prevent fatigue or discomfort from occurring.
Consequently a rest and recovery period will be required.

 Posture: poor postures can lead to injury and discomfort by placing the muscles and
joints under unnecessary strain.

Upper Limb Disorders


Upper limb disorder is a term used to refer to injuries occurring in the upper limbs (hands, wrists,
arms, shoulders and neck). It covers strains, sprains, injury and discomfort, including conditions
such as tenosynovitis and carpal tunnel syndrome. The cause of some of these disorders can be
related specifically to the work undertaken, in which case they are known as work-related upper
limb disorders.
These disorders can be caused by any one, or most likely a combination, of the three factors —
force, repetition and posture.

High-Fidelity and Low-Fidelity Prototyping


Prototypes don‘t necessarily look like final products — they can have different fidelity. The
fidelity of a prototype refers to how it conveys the look-and-feel of the final product (basically,
its level of detail and realism).
Fidelity can vary in the areas of:
 Visual design
 Content
 Interactivity
There are many types of prototypes, ranging anywhere between these two extremes:
 Low-Fidelity
 High-Fidelity

Product teams choose a prototype‘s fidelity based on the goals of prototyping, completeness of
design, and available resources.
 Low-fidelity (lo-fi) prototypes are often paper-based and do not allow user
interactions. They range from a series of hand-drawn mock-ups to printouts. In theory, low-
fidelity sketches are quicker to create. Low-fidelity prototypes are helpful in enabling early
visualization of alternative design solutions, which helps provoke innovation and
improvement. An additional advantage to this approach is that when using rough sketches,
users may feel more comfortable suggesting changes.
 High-fidelity prototypes are computer-based, and usually allow realistic (mouse-keyboard)
user interactions. High-fidelity prototypes take you as close as possible to a true
representation of the user interface. High-fidelity prototypes are assumed to be much more
effective in collecting true human performance data (e.g., time to complete a task), and in
demonstrating actual products to clients, management, and others.

Note
Low fidelity (lo-fi) prototypes are not very like the end product and are very obviously rough
and ready. They are cheap and simple to make and modify and it makes it clear to stakeholders
that they can be criticized. They are fairly low risk; designers do not have much to risk with
them, but they do not allow realistic use.

One form of lo-fi prototyping is storyboarding, a technique used in the film industry. A series of
sketches, usually accompanied with some text (e.g., a scenario), show the flow of interaction.
This is useful if you're good at sketching, otherwise it can be daunting.

Another method is using paper screenshots, where a sheet of paper or an index card is used for
each page, and overview diagrams can be used to show links between screenshots. Some tools
(such as Denim and Silk) exist to support this process while keeping the "sketchy" look of
the prototype.

Another method is the Wizard of Oz system. In the 1980s, a prototype system was created for
speech recognition. At the time, speech recognition technology was not yet available, so a fake
system was used, where a human actually did the recognition. The term is now used for any system
where the processing power is not implemented and a human "fakes" it. There is no need to
deceive the user for a Wizard of Oz system as it can work perfectly well if the user knows it is a
Wizard of Oz system.
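
As a toy illustration of the idea (the dialogue and function names below are invented), the sketch
fakes a speech recogniser by letting a human "wizard" type what the participant said, so the rest of
the prototype can still be exercised before the real component exists:

# Illustrative Wizard of Oz sketch (not from the source): the "speech recogniser"
# is faked by a human wizard typing what the participant said, so the rest of the
# prototype can be tested before the real recognition component exists.

def wizard_recognise(audio_prompt: str) -> str:
    # A human operator listens to the participant and types the transcription.
    return input(f"[wizard] Participant said ({audio_prompt}): ")

def prototype_dialogue():
    command = wizard_recognise("ask the user for a command")
    if "lights" in command.lower():
        print("System: Turning the lights on.")
    else:
        print(f"System: Sorry, I don't know how to '{command}' yet.")

if __name__ == "__main__":
    prototype_dialogue()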

High fidelity prototyping uses materials that you would expect to find in the end product or
system and looks much more like the final system. For software, people can use software such as
Macromedia Dreamweaver or Visual Basic, or in the case of web pages, writing prototype
HTML code. With high-fidelity prototyping, you get the real look and feel of some of the
functionality; it serves as a living specification; it is good for exploring and evaluating design
features; and it is a good marketing and sales tool.



It can be quite expensive and time-consuming, however, especially if radical changes are still
possible; developers are often unwilling to scrap high-fidelity prototypes. Evaluators tend to
comment on superficial design features rather than the underlying design, and user expectations
can be set too high.

When creating prototypes, you have to consider depth vs. breadth: do you prototype all of the
functionality without going into any specific detail (horizontal prototyping), or do you
prototype one or more functions in a lot of depth (vertical prototyping)?

Evolutionary prototyping also exists, which is where you build the real system bit by bit and
evaluate as you go along. The opposite is throw-away prototyping, where a prototype is built in
one system (perhaps repeatedly, with changes) and is then thrown away entirely, and the real
system is built from scratch (sometimes for reasons of efficiency, security, etc.).

A final type of prototyping is experience prototyping, where using prototypes allows the
designers to really understand the users' experience, e.g., spending a day in a wheelchair to
understand how technologies such as ATMs are experienced by wheelchair users.

Creating Paper Prototypes


Paper-based prototyping is the quickest way to get feedback on your preliminary site information
architecture, design, and content. Paper prototypes are easy to create and require only paper,
scissors and sticky notes.

Use one piece of paper for each Web page you create and then have users try them out in a usability
test. Users indicate where they want to click to find the information and you change the page to
show that screen.

The process helps you to gather feedback early in the design process, make changes quickly, and
improve your initial designs.

Coming up & using ideas in your Interface Design


It‘s the starting point of every design process: The development of ideas, also referred to
as ideation.
A good user interface 'stays out of the way', meaning it does not distract your users; rather, it lets
them complete their goals. The result? A reduction in training and support costs, and happier,
satisfied and highly engaged users.
When getting started on a new interface, make sure to remember these fundamentals:

1. Know your user



―Obsess over customers: when given the choice between obsessing over competitors or
customers, always obsess over customers. Start with customers and work backward.‖ – Jeff Bezos
Your user‘s goals are your goals, so learn them. Restate them, repeat them. Then, learn about
your user‘s skills and experience, and what they need. Find out what interfaces they like and sit
down and watch how they use them. Do not get carried away trying to keep up with the
competition by mimicking trendy design styles or adding new features. By focusing on your user
first, you will be able to create an interface that lets them achieve their goals.

2. Pay attention to patterns


Users spend the majority of their time on interfaces other than your own (Facebook, MySpace,
Blogger, Bank of America, school/university, news websites, etc). There is no need to reinvent
the wheel. Those interfaces may solve some of the same problems that users perceive within the
one you are creating. By using familiar UI patterns, you will help your users feel at home.

3. Stay consistent
“The more users’ expectations prove right, the more they will feel in control of the system and
the more they will like it.” – Jakob Nielsen
Your users need consistency. They need to know that once they learn to do something, they will
be able to do it again. Language, layout, and design are just a few interface elements that need
consistency. A consistent interface enables your users to have a better understanding of how
things will work, increasing their efficiency.

4. Use visual hierarchy


“Designers can create normalcy out of chaos; they can clearly communicate ideas through the
organizing and manipulating of words and pictures.” – Jeffery Veen, The Art and Science of Web
Design
Design your interface in a way that allows the user to focus on what is most important. The size,
color, and placement of each element work together, creating a clear path to understanding your
interface. A clear hierarchy will go a long way towards reducing the appearance of complexity (even
when the actions themselves are complex).

5. Provide feedback
Your interface should at all times speak to your user, whether his or her actions are right,
wrong or misunderstood. Always inform your users of actions, changes in state, and errors or
exceptions that occur. Visual cues or simple messaging can show the user whether his or her
actions have led to the expected result.

6. Be forgiving
No matter how clear your design is, people will make mistakes. Your UI should allow for and
tolerate user error. Design ways for users to undo actions, and be forgiving with varied inputs (no

one likes to start over because he/she put in the wrong birth date format). Also, if the user does
cause an error, use your messaging as a teachable situation by showing what action was wrong,
and ensure that she/he knows how to prevent the error from occurring again.

7. Empower your user


Once a user has become experienced with your interface, reward him/her and take off the
training wheels. The breakdown of complex tasks into simple steps will become cumbersome
and distracting. Providing more abstract ways, like keyboard shortcuts, to accomplish tasks will
allow your design to get out of the way.

8. Speak their language


“If you think every pixel, every icon, every typeface matters, then you also need to believe
every letter matters. ” – Getting Real
All interfaces require some level of copywriting. Keep things conversational, not sensational.
Provide clear and concise labels for actions and keep your messaging simple. Your users will
appreciate it, because they won‘t hear you – they will hear themselves and/or their peers.

4. Design Interface & Components


Interface and Components
In computing, an interface is a shared boundary across which two or more separate components
of a computer system exchange information.
hardware, peripheral devices, humans, and combinations of these. Some computer hardware
devices, such as a touchscreen, can both send and receive data through the interface, while others
such as a mouse or microphone may only provide an interface to send data to a given system.

Interface components are the parts of the computing system that you will be dealing and interacting with.

User Interface (UI) Design Patterns


User interface design patterns are descriptions of best practices within user interface design. They
are general, reusable solutions to commonly occurring problems. As such, they form the backbone
of ―technical support.‖ However, as design patterns can be applied to a wide variety of instances,
designers should adapt them to the specific context of use within each design project.
A UI design pattern usually consists of these elements:

 Problem: The usability problem faced by the user when using the system. 
 Context of use: The situation (in terms of the tasks, users, and context of use) giving rise to
the usability problem. 



 Principle: A pattern is usually based on one or more design principles, such as error
management or the consistency of user guidance.
 Solution: A proven solution to the problem. A solution describes only the core of the
problem, and the designer has the freedom to implement it in many ways. 
 Why: How and why the pattern actually works, including an analysis of how it may affect
certain attributes of usability. 
 Examples: Each example shows how the pattern has been successfully applied in a real-life
system. This is often accompanied by a screenshot and short description. 
 Implementation: Some patterns provide implementation details. 
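
As a purely illustrative sketch (the field names simply mirror the list above and the example
pattern content is invented), a pattern catalogue entry could be captured as a small structured
record:

# Hypothetical sketch: storing a UI design pattern as a structured record, using
# the elements listed above. The example content is invented for illustration.

from dataclasses import dataclass, field

@dataclass
class UIDesignPattern:
    name: str
    problem: str
    context_of_use: str
    principle: str
    solution: str
    why: str
    examples: list = field(default_factory=list)
    implementation: str = ""

breadcrumbs = UIDesignPattern(
    name="Breadcrumbs",
    problem="Users lose track of where they are in a deep hierarchy.",
    context_of_use="Multi-level websites or applications.",
    principle="User guidance and consistency.",
    solution="Show the path from the home page to the current page as links.",
    why="Reduces disorientation and supports reverse navigation.",
    examples=["E-commerce category pages"],
)
print(breadcrumbs.name, "-", breadcrumbs.solution)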
A specific category of design patterns is termed "dark patterns". These are patterns that
suggest how a designer can persuade users to perform certain actions. Such patterns are
called "dark" because they typically feature in e-commerce in situations where designers
seek to trick people into spending more money or providing personal information.
Responsible application of design patterns therefore requires designers to have an ethical and
empathetic awareness of user concerns, particularly because user trust and feedback can make
or break an organization's reputation overnight.
Navigation considerations
Navigation is the act of moving between screens of an app to complete tasks. It‘s enabled
through several means: dedicated navigation components, embedding navigation behavior into
content, and platform affordances.

Navigational directions
Based on your app‘s information architecture, a user can move in one of three navigational
directions:

 Lateral navigation refers to moving between screens at the same level of hierarchy. An
app‘s primary navigation component should provide access to all destinations at the top level
of its hierarchy. 
 Forward navigation refers to moving between screens at consecutive levels of hierarchy,
steps in a flow, or across an app. Forward navigation embeds navigation behavior into
containers (such as cards, lists, or images), buttons, links, or by using search. 
 Reverse navigation refers to moving backwards through screens either chronologically
(within one app or across different apps) or hierarchically (within an app). Platform
conventions determine the exact behavior of reverse navigation within an app.
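
A hypothetical sketch of how reverse (back) navigation is commonly modelled, using a simple
stack of visited screens (the screen names below are invented):

# Minimal sketch (invented screen names): modelling forward and reverse navigation
# with a stack of visited screens, as many platforms do for "back" behaviour.

class NavigationStack:
    def __init__(self, start_screen: str):
        self._stack = [start_screen]

    def forward(self, screen: str) -> None:
        # Forward navigation pushes the new destination onto the stack.
        self._stack.append(screen)

    def back(self) -> str:
        # Reverse navigation pops the current screen and returns to the previous one.
        if len(self._stack) > 1:
            self._stack.pop()
        return self._stack[-1]

    @property
    def current(self) -> str:
        return self._stack[-1]

nav = NavigationStack("Home")
nav.forward("Album list")    # forward moves deeper into the hierarchy
nav.forward("Album detail")
print(nav.current)           # Album detail
print(nav.back())            # Album list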

Global Navigation best design


 First, establish your groups of content; if the number of categories exceeds twelve, you
might be better off combining global navigation with another user interface design pattern,
such as a mega dropdown menu. 
 Assign each of the groups of content a fitting title (e.g., from the top-level navigation bar
above: ‗Your Amazon.com‘, ‗Today‘s Deals‘, ‗Gift Cards & Registry‘, ‗Sell‘, and ‗Help‘). 



 Arrange these category titles in a logical order. For each group of categories, there likely
will be an appropriate order, but the first option is usually the homepage, as this represents
the first tier in the whole user interface. When used in combination with tabs, top-level
navigation bars serve as the second tier of the user interface. In these instances, the first tab
should be used to take the user back to the homepage.
 Implement the same top-level navigation bar in all regions of the user interface. Then,
link each title to the appropriate contents, so when the users click it, they are navigated
directly to the associated information. 
 Inform the users of their current position by changing the appearance of the individual
category label. In the WordPress example below, the color of the selected category label
changes from light white to blue, while all other options remain the same color.
 When you‘re using a hyperlinked logo or name in your global navigation, it does not
expressly state that it also operates as a homepage link. Therefore, you should include a
number of other small touches to help the user identify this functionality. Firstly, you
can use a simple tooltip to display the word ‗homepage‘. This is not commonly implemented,
but it would help novice users determine that this facility is afforded. Secondly, changing the
appearance of the cursor when users hover over the hyperlinked logo or name shows them
they can click on it.

UI widgets
Widget is a broad term that can refer to either any GUI (graphical user interface) element or a
tiny application that can display information and/or interact with the user. A widget can be as
rudimentary as a button, scroll bar, label, dialog box or check box; or it can be something slightly
more sophisticated like a search box, tiny map, clock, visitor counter or unit converter.

UI widgets are a self-contained visualization of high-level data that can be placed on a page.

UI widgets are built for display within the web shell user interface and will not display within
the ClickOnce smart client user interface.
Note: The type of data displayed in a UI widget is usually strategic high-level rollup-type data,
which means efficiency is extremely important when retrieving the data. Asking strategic
questions from the production tables within the OLTP database is generally not considered a
good practice as the OLTP database is used to hold day to day, tactical information pertinent to
running your day to day fundraising operations. Therefore, populating a UI Widget with data will
usually mean pulling data from the Blackbaud Data Warehouse or the OLAP cube or some type
of snapshot output table from which to retrieve the data. A custom business process could
generate a snapshot output table which could be generated as the output of a specific business
process instance.

OLTP (On-line Transaction Processing) is characterized by a large number of short on-line
transactions (INSERT, UPDATE, DELETE). The main emphasis for OLTP systems is put on very
fast query processing, maintaining data integrity in multi-access environments, and effectiveness
measured by the number of transactions per second. An OLTP database holds detailed, current
data, and the schema used to store transactional data is the entity model (usually 3NF).
OLAP (On-line Analytical Processing) is characterized by a relatively low volume of
transactions. Queries are often very complex and involve aggregations. For OLAP systems,
response time is an effectiveness measure. OLAP applications are widely used in data mining.
An OLAP database holds aggregated, historical data, stored in multi-dimensional schemas
(usually a star schema). For example, a bank storing years of historical records of check
deposits could use an OLAP database to provide reporting to business users.
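
As a toy, hypothetical contrast between the two styles of access (the table and column names are
invented), the sketch below performs OLTP-style inserts and then an OLAP-style aggregation over
the same data, using an in-memory SQLite database:

# Hypothetical contrast of OLTP-style and OLAP-style access, using an in-memory
# SQLite database. Table and column names are invented for illustration only.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE deposits (branch TEXT, year INTEGER, amount REAL)")

# OLTP: many short transactions inserting/updating individual, current records.
cur.executemany(
    "INSERT INTO deposits VALUES (?, ?, ?)",
    [("North", 2023, 120.0), ("North", 2024, 80.0), ("South", 2024, 200.0)],
)
conn.commit()

# OLAP: fewer, more complex analytical queries aggregating historical data.
cur.execute(
    "SELECT branch, year, SUM(amount) FROM deposits GROUP BY branch, year ORDER BY branch, year"
)
for branch, year, total in cur.fetchall():
    print(branch, year, total)
conn.close()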

Blackbaud Data Warehouse: the data warehouse is composed of data structures populated by
data extracted from the OLTP database and transformed to fit a flatter schema.
warehouse structures are exposed as star schemas through views of fact and dimension tables.
There are some cases where the data structures reflect the shape of the OLTP structures closely
enough to be seen as parallel structures. But those parallels quickly disappear as tables are joined
together or multiple views are used.

Effective Visual Communication for Graphical User Interfaces


Typography, symbols, color, and other static and dynamic graphics are used to convey
facts, concepts and emotions. Together they make up an information-oriented, systematic graphic
design which helps people understand complex information. Successful visual communication
through such design relies on some key principles of graphic design.

Design Considerations
There are three factors that should be considered for the design of a successful user interface:
development factors, visibility factors and acceptance factors.

Development factors help by improving visual communication. These include: platform


constraints, tool kits and component libraries, support for rapid prototyping, and customizability.

Visibility factors take into account human factors and express a strong visual identity. These
include: human abilities, product identity, clear conceptual model, and multiple representations.

Included as acceptance factors are an installed base, corporate politics, international markets, and
documentation and training.

Visible Language
Visible language refers to all of the graphical techniques used to communicate the message or
context. These include:

 Layout: formats, proportions, and grids; 2-D and 3-D organization


 Typography: selection of typefaces and typesetting, including variable width and fixed
width



 Color and Texture: color, texture and light that convey complex information and pictorial
reality
 Imagery: signs, icons and symbols, from the photographically real to the abstract
 Animation: a dynamic or kinetic display; very important for video-related imagery
 Sequencing: the overall approach to visual storytelling
 Sound: abstract, vocal, concrete, or musical cues
 Visual identity: the additional, unique rules that lend overall consistency to a user
interface. The overall decisions as to how the corporation or the product line expresses
itself in visible language.

Visual Language Design Principles


There are three fundamental principles involved in the use of the visible language.

 Organize: provide the user with a clear and consistent conceptual structure
 Economize: do the most with the least amount of cues
 Communicate: match the presentation to the capabilities of the user.

Organize
Consistency, screen layout, relationships and navigability are important concepts of organization.

Consistency
There are four views of consistency: internal consistency, external consistency, real-world
consistency, and when not to be consistent.

The first point, internal consistency, states that the same conventions and rules should be applied to all
elements of the GUI.



Example: Internal Consistency - Dialogue Boxes

Same kinds of elements are shown in the same places.

Those with different kinds of behavior have their own special appearance.

External consistency, the second point, says the existing platforms and cultural conventions
should be followed across user interfaces.

Example: External Consistency for Text Tool Icons

These icons come from different desktop publishing applications but generally
have the same meaning.

Real-world consistency means conventions should be made consistent with real-world


experiences, observations and perceptions of the user.

Example: Real-World Consistency



The last point, innovation, deals with when not to be consistent. Deviating from existing
conventions should only be done if it provides a clear benefit to the user.

Screen Layout
Three ways to design display spatial layout: use a grid structure, standardize the screen layout, and
group related elements.

A grid structure can help locate menus, dialogue boxes or control panels. Generally 7 +/- 2 is the
maximum number of major horizontal or vertical divisions. This will help make the screen less
cluttered and easier to understand.

Relationships
Linking related items and disassociating unrelated items can help achieve visual organization.

Example: Relationships
Left: Shape, location, and value can all create strong visual
relationships which may be inappropriate.

Right: Clear, consistent, appropriate, and strong relationships.

Navigability
There are three important navigation techniques:
 provide an initial focus for the viewer's attention
 direct attention to important, secondary, or peripheral items
 assist in navigation throughout the material.



Example: Navigation
LEFT: Poor design.
RIGHT: Improved design; spatial layout and color help focus viewer's
attention to most important titlebar areas. Bulleted items
guide the viewer through the secondary contents.

Economize
Four major points to be considered: simplicity, clarity, distinctiveness, and emphasis.

Simplicity
Simplicity means including only the elements that are most important for communication. The
design should also be as unobtrusive as possible.

Example: Complicated and Simpler Designs

Clarity
All components should be designed so their meaning is not ambiguous.



Example: Ambiguous and Clear Icons

Distinctiveness
The important properties of the necessary elements should be distinguishable.

Emphasis
The most important elements should be easily perceived. Non-critical elements should be de-
emphasized and clutter should be minimized so as not to hide critical information.

Communicate
The GUI must keep in balance legibility, readability, typography, symbolism, multiple views,
and color or texture in order to communicate successfully.



Example: Illegible and Legible Texts

Readability: the display must be easy to identify and interpret; it should also be appealing and
attractive.

Example: Unreadable and Readable Texts. The figure shows the same sample sentences ("Design
components to be easy to interpret and understand. Design components to be inviting and
attractive.") first in a hard-to-read presentation and then in a readable one.

Typography: includes characteristics of individual elements (typefaces and typestyles) and their
groupings (typesetting techniques). A small number of typefaces which must be legible, clear,
and distinctive (i.e., distinguish between different classes of information) should be used.
Recommendations:
 use a maximum of 3 typefaces in a maximum of 3 point sizes
 use a maximum of 40-60 characters per line of text
 set text flush left and numbers flush right; avoid centered text in lists and short justified lines of text
 use upper and lower case characters whenever possible.

Example: Recommended typefaces and typestyles

Multiple Views: provide multiple perspectives on the display of complex structures and
processes. Make use of these multiple views:
 multiple forms of representation
 multiple levels of abstraction
 simultaneous alternative views
 links and cross references
 metadata, metatext, metagraphics.



Example: Verbal and Visual Multiple Views

Color
Color is one of the most complex elements in achieving successful visual communication. Used
correctly, it can be a powerful tool for communication. Colors should be combined so they make
visual sense.

Some advantages for using color to help communication:
 emphasize important information
 identify subsystems of structures
 portray natural objects in a realistic manner
 portray time and progress
 reduce errors of interpretation
 add coding dimensions
 increase comprehensibility
 increase believability and appeal

When color is used correctly, people will often learn more. Memory for color information seems
to be much better than information presented in black-and-white.

There are some disadvantages of using color:
 it requires more expensive and complicated display equipment
 it may not accommodate color-deficient vision
 some colors can potentially cause visual discomfort and afterimages
 it may contribute to visual clutter or may lead to negative associations through cross-disciplinary and cross-cultural association.

Color Design Principles


The three basic principles can also be applied to color: color organization, color economy, and
color communication.

Color organization pertains to consistency of organization. Color should be used to group related
items. A consistent color code should be applied to screen displays, documentation, and training
materials.

Similar colors should infer a similarity among objects. One needs to be complete and consistent
when grouping objects by the same color. Once a color coding scheme has been established, the
same colors should be used throughout the GUI and all related publications.



The second principle of color, color economy, suggests using a maximum of 5+/-2 colors where
the meaning must be remembered. The fundamental idea is to use color to augment black-and-
white information, i.e. design the display to first work well in black-and-white.

Color emphasis suggests using strong contrasts in value and chroma to draw the user's attention
to the most important information. Confusion can result if too many figures or background fields
compete for the viewer's attention. The hierarchy of highlighted, neutral, and low-lighted states
for all areas of the visual display must be designed carefully to provide the maximum simplicity
and clarity.

Color communication deals with legibility, including using appropriate colors for the central and
peripheral areas of the visual field. Color combinations influenced least by the relative area of
each color should be used.

Red or green should not be used in the periphery of the visual field, but in the center. If used in
the periphery, you need a way to capture the attention of the viewer, size change or blinking for
example. Blue, black, white, and yellow should be used near the periphery of the visual field,
where the retina remains sensitive to these colors.

If colors change in size in the imagery, the following should be considered: as color areas
decrease in size, their value (lightness) and chroma will appear to change.

Use colors that differ in both chroma and value. Avoid red/green, blue/yellow, green/blue, and
red/blue combinations unless a special visual effect is needed. They can create vibrations,
illusions of shadows, and afterimages.

For dark viewing situations, light text, then lines, and small shapes on medium to dark
backgrounds should be used in slide presentations, workstations and videos. For light-viewing
situations, use dark (blue or black) text, thin lines and small shapes on light background. These
viewing situations include overhead transparencies and paper.

Color Symbolism
The importance of color is to communicate. Therefore color codes should respect existing cultural
and professional usage.
from different cultures. Color connotations should be used with great care. For example: mailboxes
are blue in the United States, bright red in England and bright yellow in Greece. If using color in
an electronic mail icon on the screen, color sets might be changed for different countries to reflect
the differences in international markets.

Principles of user interface design & Fitts’ Law


Fitts‘ law states that the amount of time required for a person to move a pointer (e.g., mouse
cursor) to a target area is a function of the distance to the target divided by the size of the target.
Thus, the longer the distance and the smaller the target‘s size, the longer it takes.

In 1954, psychologist Paul Fitts, examining the human motor system, showed that the time
required to move to a target depends on the distance to it, yet relates inversely to its size. By his
law, fast movements and small targets result in greater error rates, due to the speed-accuracy
trade-off. Although multiple variants of Fitts‘ law exist, all encompass this idea.

Fitts’ law is widely applied in user experience (UX) and user interface (UI) design. For example,
this law influenced the convention of making interactive buttons large (especially on finger-
operated mobile devices)—smaller buttons are more difficult (and time-consuming) to click.
Likewise, the distance between a user‘s task/attention area and the task-related button should be
kept as short as possible.

The law is applicable to rapid, pointing movements, not continuous motion (e.g., drawing). Such
movements typically consist of one large motion component (ballistic movement) followed by
fine adjustments to acquire (move over) the target. The law is particularly important in visual
interface design—or any interface involving pointing (by finger or mouse, etc.): we use it to
assess the appropriate sizes of interactive elements according to the context of use and highlight
potential design usability problems. By following Fitts‘ law, standard interface elements such as
the right-click pop-up menu or short drop-down menus have had resounding success, minimizing
the user‘s travel distance with a mouse in selecting an option—reducing time and increasing
productivity. Conversely, long drop-downs, title menus, etc., impede users‘ actions, raising
movement-time demands.
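As a worked illustration of the law (not from the original text), the Python sketch below uses the common Shannon formulation, MT = a + b * log2(D/W + 1), to compare a small, distant target with a large, nearby one. The constants a and b are illustrative placeholders; in practice they are fitted from pointing experiments for a given device and user population.

import math

def movement_time(distance, width, a=0.1, b=0.1):
    """Predicted pointing time in seconds using the Shannon formulation of Fitts' law.

    distance: distance from the pointer to the centre of the target
    width:    target size along the axis of motion (same units as distance)
    a, b:     device- and user-specific constants; the defaults are placeholders
    """
    index_of_difficulty = math.log2(distance / width + 1)   # difficulty in bits
    return a + b * index_of_difficulty

print(round(movement_time(distance=800, width=20), 3))   # small, far-away target: slower
print(round(movement_time(distance=200, width=80), 3))   # large, nearby target: faster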

User interface designs & human abilities


There is a lot of information out there about various interface design techniques and patterns you
can use when crafting your user interfaces and websites, solutions to common problems and
general usability recommendations. Following guidelines from experts will likely lead you
towards creating a good user interface – but what exactly is a good interface? What are the
characteristics of an effective user interface?

Here are 8 things I consider a good user interface needs to be:

1. Clear
2. Concise
3. Familiar
4. Responsive
5. Consistent
6. Attractive
7. Efficient
8. Forgiving

Let‘s take a closer look at each.

1. Clear

Clarity is the most important element of user interface design. Indeed, the whole purpose of user
interface design is to enable people to interact with your system by communicating meaning and
function. If people can‘t figure out how your application works or where to go on your website
they‘ll get confused and frustrated.

What does that do? Hover over buttons in WordPress and a tooltip will pop up explaining their
functions.

2. Concise
Clarity in a user interface is great, however, you should be careful not to fall into the trap of
over-clarifying. It is easy to add definitions and explanations, but every time you do that you add
mass. Your interface grows. Add too many explanations and your users will have to spend too
much time reading through them.

Keep things clear but also keep things concise. When you can explain a feature in one sentence
instead of three, do it. When you can label an item with one word instead of two, do it. Save the
valuable time of your users by keeping things concise. Keeping things clear and concise at the
same time isn‘t easy and takes time and effort to achieve, but the rewards are great.

The volume controls in OS X use little icons to show each side of the scale from low to high.

3. Familiar
Many designers strive to make their interfaces ‗intuitive‘. But what does intuitive really mean? It
means something that can be naturally and instinctively understood and comprehended. But how
can you make something intuitive? You do it by making it ‗familiar‘.

Familiar is just that: something which appears like something else you‘ve encountered before.
When you‘re familiar with something, you know how it behaves – you know what to expect.
Identify things that are familiar to your users and integrate them into your user interface.

GoPlan's tabbed interface. Tabs are familiar because they mimic tabs on folders. You figure out
that clicking on a tab will navigate you to that section and that the rest of the tabs will remain
there for further navigation.

4. Responsive
Responsive means a couple of things. First of all, responsive means fast. The interface, if not the
software behind it, should work fast. Waiting for things to load and using laggy and slow
interfaces is frustrating. Seeing things load quickly, or at the very least, an interface that loads
quickly (even if the content is yet to catch up) improves the user experience.

Responsive also means the interface provides some form of feedback. The interface should talk
back to the user to inform them about what‘s happening. Have you pressed that button
successfully? How would you know? The button should display a ‗pressed‘ state to give that
feedback. Perhaps the button text could change to "Loading…" and its state to disabled. Is the
software stuck or is the content loading? Play a spinning wheel or show a progress bar to keep
the user in the loop.

Instead of gradually loading the page, Gmail shows a progress bar when you first go to your
inbox. This allows for the whole page to be shown instantly once everything is ready.

5. Consistent
Now, I‘ve talked before about the importance of context and how it should guide your design
decisions. I think that adapting to any given context is smart, however, there is still a level of
consistency that an interface should maintain throughout.

Consistent interfaces allow users to develop usage patterns – they‘ll learn what the different
buttons, tabs, icons and other interface elements look like and will recognize them and realize
what they do in different contexts. They‘ll also learn how certain things work, and will be able to
work out how to operate new features quicker, extrapolating from those previous experiences.

The Microsoft Office user interface is consistent for a reason.

6. Attractive
This one may be a little controversial but I believe a good interface should be attractive.
Attractive in a sense that it makes the use of that interface enjoyable. Yes, you can make your UI
simple, easy to use, efficient and responsive, and it will do its job well – but if you can go that
extra step further and make it attractive, then you will make the experience of using that interface
truly satisfying. When your software is pleasant to use, your customers or staff will not simply be
using it – they‘ll look forward to using it.

There are of course many different types of software and websites, all produced for different
markets and audiences. What looks ‗good‘ for any one particular audience will vary. This means
that you should fashion the look and feel of your interface for your audience. Also, aesthetics
should be used in moderation and to reinforce function. Adding a level of polish to the interface
is different to loading it with superfluous eye-candy.

Google are known for their minimalist interfaces that focus on function over form, yet they
clearly spent time polishing the Chrome user interface elements like buttons and icons to
make them look just right, as evidenced by the subtle gradients and pixel-thin highlights.

7. Efficient
A user interface is the vehicle that takes you places. Those places are the different functions of
the software application or website. A good interface should allow you to perform those
functions faster and with less effort. Now, ‗efficient‘ sounds like a fairly vague attribute – if you
combine all of the other things on this list, surely the interface will end up being efficient?
Almost, but not quite.

What you really need to do to make an interface efficient is to figure out what exactly the
user is trying to achieve, and then let them do exactly that without any fuss. You have to identify
how your application should ‗work‘ – what functions does it need to have, what are the goals
you‘re trying to achieve? Implement an interface that lets people easily accomplish what they
want instead of simply implementing access to a list of features.

Apple has identified three key things people want to do with photos on their iPhone, and
provides buttons to accomplish each of them in the photo controls.

8. Forgiving
Nobody is perfect, and people are bound to make mistakes when using your software or website.
How well you can handle those mistakes will be an important indicator of your software‘s
quality. Don‘t punish the user – build a forgiving interface to remedy issues that come up.

A forgiving interface is one that can save your users from costly mistakes. For example, if
someone deletes an important piece of information, can they easily retrieve it or undo this
action? When someone navigates to a broken or nonexistent page on your website, what do they
see? Are they greeted with a cryptic error or do they get a helpful list of alternative destinations?

Trashed the wrong email by mistake? Gmail lets you quickly undo your last action.
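Underneath, a forgiving interface usually rests on a simple mechanism: every destructive action is recorded together with a way to reverse it. The Python sketch below is an illustration of that pattern, not Gmail's actual implementation.

class UndoStack:
    """Minimal undo support: record each action together with its inverse."""
    def __init__(self):
        self._history = []

    def do(self, action, undo_action):
        action()
        self._history.append(undo_action)

    def undo_last(self):
        if self._history:
            self._history.pop()()   # run the most recent inverse action

inbox, trash = ["invoice", "newsletter"], []

def delete(msg):
    inbox.remove(msg)
    trash.append(msg)

def restore(msg):
    trash.remove(msg)
    inbox.append(msg)

undo = UndoStack()
undo.do(lambda: delete("invoice"), lambda: restore("invoice"))   # user trashes the wrong email
undo.undo_last()                                                 # one click brings it back
print(inbox)                                                     # the message is in the inbox again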

To conclude...
Working on achieving some of these characteristics may actually clash with working on others.
For example, by trying to make an interface clear, you may add too many descriptions and
explanations that end up making the whole thing big and bulky. Cutting stuff out in an effort to
make things concise may have the opposite effect of making things ambiguous. Achieving a
perfect balance takes skill and time, and each solution has to be judged on a case-by-case basis.

What do you think? Do you disagree with any of these characteristics or have any more to add?
I‘d love to read your comments.

The simplicity principle in perception and cognition


Perception is the organization, identification, and interpretation of sensory information in order
to represent and understand the presented information, or the environment.
Perception can be split into two processes,

 (1) processing the sensory input, which transforms this low-level information to higher-level
information (e.g., extracts shapes for object recognition);
 (2) processing which is connected with a person's concepts and expectations (or knowledge),
restorative and selective mechanisms (such as attention) that influence perception.

The simplicity principle, traditionally referred to as Occam's razor, is the idea that simpler
explanations of observations should be preferred to more complex ones. In recent decades the
principle has been clarified via the incorporation of modern notions of computation and
probability, allowing a more precise understanding of how exactly complexity minimization
facilitates inference. The simplicity principle has found many applications in modern cognitive
science, in contexts as diverse as perception, categorization, reasoning, and neuroscience. In all
these areas, the common idea is that the mind seeks the simplest available interpretation of
observations- or, more precisely, that it balances a bias toward simplicity with a somewhat
opposed constraint to choose models consistent with perceptual or cognitive observations. This
brief tutorial surveys some of the uses of the simplicity principle across cognitive science,
emphasizing how complexity minimization in a number of forms has been incorporated into
probabilistic models of inference.
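A rough way to make "prefer the simpler interpretation" concrete is to compare description lengths: of two encodings of the same observation, the shorter one counts as the better explanation. The Python sketch below uses compressed length as a crude stand-in for description length; it illustrates the idea only and is not a claim about how the mind actually measures complexity.

import random
import zlib

def description_length(data: str) -> int:
    """Crude proxy for complexity: the length of the compressed representation."""
    return len(zlib.compress(data.encode("utf-8")))

random.seed(0)
patterned = "ab" * 50                                                # highly regular observation
irregular = "".join(random.choice("abcdefgh") for _ in range(100))   # no obvious structure

# The regular sequence admits a much shorter description, so a simplicity-based
# account prefers the structured interpretation ("ab repeated") for it.
print(description_length(patterned), description_length(irregular))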

Online Controlled Experiment
The term "online controlled experiment" is a generic term that encompasses all kinds of online
experiments/testings set up for learning from data. It is the golden standard for drawing objective
causal inferences in online marketing, user experience and conversion rate optimization. While
the term A/B test is often used instead, OCE is the technically-correct one to use if one is
speaking for something more complex than a simple test of one variant versus a control.
An online controlled experiment, by definition, employs a control group and one or more
treatment groups among which users are assigned randomly. Ranodmization allows modelling
the effect of all confounding factors as random variables and thus allows one to estimate the
effect of the treatment in a statistically rigorous way.
From the point of an online business we can distinguish between two main goals of using online
controlled experiments: to control the business risk associated with making a certain decision
and to estimate the magnitude and direction of an effect. The first part is achieved
through hypothesis testing while the second is accomplished through estimation. Often a
practitioner will want both. Usually the former is the primary goal and the latter the secondary.
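As a hedged illustration of the hypothesis-testing side (not part of the original text), the Python sketch below runs a two-sided, two-proportion z-test on conversion counts from a control and a treatment arm; the traffic and conversion numbers are made up.

import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value from the normal distribution
    return p_b - p_a, z, p_value

# Hypothetical experiment: 10,000 users per arm, 500 vs 560 conversions.
lift, z, p = two_proportion_z_test(500, 10_000, 560, 10_000)
print(f"observed lift={lift:.4f}  z={z:.2f}  p={p:.3f}")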

Web design as the anchoring domain


Anchoring (or focalism) bias refers to the tendency to rely on a single piece of information or
aspect of an event (the ―anchor‖) to inform decision making.
Anchoring is a judgment heuristic. Anchoring and other judgment heuristics, such
as framing and priming, are helpful in expediting everyday decisions, particularly in the absence
of information, resources, or time. They tend to be automatic for most people and can sometimes
lead to erroneous estimates or judgment calls.
There are some people who benefit greatly from anchoring; for example, domain experts with
deep experience directly related to the decision or judgment at hand. Because they are so familiar
with the situation, their early responses are likely to be correct.
Framing: When choosing among several alternatives, people avoid losses and optimize for sure
wins because the pain of losing is greater than the satisfaction of an equivalent gain. UX designs
should frame decisions (i.e., questions or options given to users) accordingly.
Priming: Exposure to a stimulus influences behavior in subsequent, possibly unrelated tasks.
This is called priming; priming effects abound in usability and web design.

How Anchoring Can Improve the User Experience


Anchoring can set novice users up for success and establish clear expectations about how a
process or experience might go. Consider the following applications in your next redesign.
Good Defaults and Suggested Values
Not only do well-chosen numerical default values minimize the interaction cost of typing, but
they can also serve as anchors for the expected variation of the corresponding numerical
parameters. They can help users figure out what a big value is and what a small value is. For
example, for people who have never used an image-editing program, a default gamma value of
2.2 creates an understanding of the normal, most used value, and allows users to adjust that value
in the direction that they want, according to the effect they seek.
If you are new to shopping for a house, a good mortgage calculator can form expectations by
providing defaults that best represent the user population. For example, Bankrate.com‘s
mortgage calculator shows default values that are close to the typical value most users will enter.
These defaults help users gauge what‘s expected and what might be off the norm.
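To show how a well-chosen default can act as an anchor, here is a small Python sketch (not Bankrate's actual calculator) using the standard amortised-payment formula; the default principal, rate, and term are illustrative values meant to represent a typical user, not real market figures.

def monthly_payment(principal=300_000, annual_rate=0.065, years=30):
    """Standard amortised monthly payment; the default arguments act as anchors."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of monthly payments
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

print(round(monthly_payment(), 2))                    # the anchoring default shown to every user
print(round(monthly_payment(principal=250_000), 2))   # the user adjusts away from the anchor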
Interface design Rules
Regardless of the domain, user interface, or intended device (computer, tablet or phone) for a
particular website or application, there are certain universal "Golden Rules" of user interface
design.
Theo Mandel describes the golden rules of user interface design in great detail in Chapter 5 of
his book, ―The Elements of User Interface Design.‖
Mandel’s Golden Rules
The golden rules are divided into three groups:
1. Place Users in Control
2. Reduce Users‘ Memory Load
3. Make the Interface Consistent
Each of these groups contains a number of specific rules. The rules (and a keyword for each
rule) for each group are:

Place Users in Control


1. Use modes judiciously (modeless)
2. Allow users to use either the keyboard or mouse (flexible)
3. Allow users to change focus (interruptible)
4. Display descriptive messages and text (helpful)
5. Provide immediate and reversible actions, and feedback (forgiving)
6. Provide meaningful paths and exits (navigable)
7. Accommodate users with different skill levels (accessible)
8. Make the user interface transparent (facilitative)
9. Allow users to customize the interface (preferences)
10. Allow users to directly manipulate interface objects (interactive)
Reduce Users’ Memory Load
1. Relieve short-term memory (remember)
2. Rely on recognition, not recall (recognition)
3. Provide visual cues (inform)
4. Provide defaults, undo, and redo (forgiving)
5. Provide interface shortcuts (frequency)
6. Promote an object-action syntax (intuitive)
7. Use real-world metaphors (transfer)
8. Use progressive disclosure (context)
9. Promote visual clarity (organize)
Make the Interface Consistent
1. Sustain the context of users‘ tasks (continuity)
2. Maintain consistency within and across products (experience)
3. Keep interaction results the same (expectations)
4. Provide aesthetic appeal and integrity (attitude)
5. Encourage exploration (predictable)

Outcome 4: Evaluating user interface designs

1. Designing a review process for a user interface

Review of user interfaces


Reviewing a user interface typically means identifying

 usability problems related to the layout, logical flow, and structure of the interface and
inconsistencies in the design
 non-compliance with standards
 ambiguous wording in labels, dialog boxes, error messages, and onscreen user assistance
 functional errors

User interface review process


While user interface (UI) reviews often occur at the end of the development cycle, I recommend
that you get involved early in the process, preferably when the designers create the initial
wireframes or paper prototypes. Why? Making changes early in the process reduces development
costs. Plus, if you identify usability issues early, it‘s much more likely the team can remedy them
before launch, preventing bad reviews like that shown in Figure 1, negative word-of-mouth, and
the lost sales that result from them.
Figure - A bad review of an application in Australian Personal Computer

Before reviewing an application‘s user interface, make sure you know who the users will be,
have clear goals in mind, and remember, it‘s all about improving the user experience.

What Does a Comprehensive UI Review Involve?


Based on my experience working with desktop, enterprise, and Web applications over the past
decade, I‘ve identified five general areas you should consider when reviewing a user interface:

 Design elements - When assessing the design elements of a user interface, you need to
consider their overall aesthetics—Do they look good?—and functionality—Do they work
well?
 Text elements- When reviewing text user interface elements, consider these three Cs of
communication:
 Clarity - The textual messages a user receives from an application must be clear.
 Consistency - Use terms and labels for user interface controls that are familiar to users.
 Conciseness - Be as brief as possible, but don’t sacrifice meaning for brevity.
 Link elements - There are links in all Web applications and in some desktop applications, but
not all of them are blue underlined text. Links help users navigate — both within an
application and to external locations. 
Navigational links include menus of all types, breadcrumb trails, links to other pages, links to
Help, and even email addresses. Often links in applications display something on the same
page—such as a popup dialog box, an expandable panel, or user assistance—or clickable
image maps with multiple hotspots display related information.

When reviewing a user interface:

 Ensure it uses similar link elements consistently. For example, if an application has a
page browser, do the link elements always look the same? Or are they different like those
in Figure 6?
 Review the font, color, and style of active and visited links.
 Check the behavior when users hover over links.
 Make sure the text matches the link‘s action.
 Verify that links take users where they would expect to go.
 Visual elements - Visual elements include graphics, color, and display mechanisms. You
don‘t have to be a graphic designer to review the visual elements of a user interface. As with
text and links, you‘re looking for usefulness and consistency. You‘ll notice consistency by its
absence. 

As a general rule, if a graphic image is not necessary, don‘t use it. Such an image just
becomes noise and uses up bandwidth. However, some graphics can help guide users through
a process or provide visual cues showing progress along a sequence of steps.
When reviewing a user interface‘s visual elements, consider the following:
 Are any graphics or graphic elements unnecessary?
 Are any graphic elements too big or too small?
 If images don‘t display, what happens to the user interface? Does it display alternate or
title text, or do you get a placeholder box?
 Did your developers use consistent, familiar, and recognizable images for common
functions? For example, although we no longer use floppy disks, that icon is instantly
recognizable as a Save icon. If you use a different image for save, such as a lifesaver,
your users may not understand that it means save and might assume it means help.
 Are any of the graphic elements blurred or jagged?
 Do the graphic elements use transparency appropriately?
 Do hotspots on image maps cover the relevant parts of the image—for example, the
countries on a map?

 User interactions - User interactions are the dynamic aspects of a user interface—such as
clicking a button, performing a search, or saving a record. What happens when you perform
such actions? What results or feedback do you get? Do you get logical and appropriate
results? Look out for results that don’t behave as you expect them to.
When reviewing a user interface, test these user interactions:

 Click every menu item, button, and toolbar button. Do they do what you expect them to
do? Do they take you where you expect them to go? For example, is there a clear
difference between what happens when you click Apply, OK, or Cancel?
 Click all other controls, including tabs, spin boxes, drop-down list boxes, and other user
interface elements.
 Try to access all functionality using the keyboard. For example, Alt+<Character> should
display the relevant menu, Tab should move the insertion point to the next field, Enter
should execute a command if the focus is on a button, and Esc should close a dialog box.
 Check what happens when you press the function keys. For example, on a PC, F1 should
display Help and F5 should refresh the browser window or a directory.

Increased cost if changes needed further along the development process
How to minimize cost in the software development process
Software development is a necessity for all companies, but costs tend to skyrocket. As these
companies seek to build cutting-edge software to drive growth, determining the overall budget
becomes very tricky. Therefore, having regular interaction and obtaining a budget estimate from
the software development team is highly recommended. This helps make room for additional
expenses that may suddenly arise in the software development process. Besides a cost estimate,
there are ways companies can minimize expenses in the software development process:

Cost Of Change
Sure, we can change that. No problem. Engineering Change Requests are common throughout
development (and often well beyond), but what is the REAL cost? Of course, change and
improvement are good — that‘s what design and development are all about. However, there are
costs. So, how can you minimize the costs in making changes?

Implications Of The Timeline


The product development process starts with a bright idea. Beginning at that moment, changes
to the initial idea begin. At first, dreaming has no monetary cost and takes only a little time. Then
there's the searching on the internet, the investment of time into thinking, sketching, and maybe
making papier-mâché prototypes.
After thinking, exploring, and maybe some experimentation, at some point, the decision is made
to pursue the idea. Typically this is where time begins to have meaning, and costs become real.

The product development process (very simplified) looks something like this graphic. It starts
with ideation, then goes into the development cycle of Design > Prototype > Test. The form of
that cycle depends greatly on the product. If it‘s a software product, you might go through that
cycle in a day. If it‘s an industrial product, it can take months to go around the cycle.

With each rotation around the development cycle, we review where we‘re going to learn more,
and hopefully refine the product. These refinements, or changes — which improve the product
— all have some kind of cost. The magnitude depends on when the engineering change request
occurs.
Note: For the sake of argument, we use the term ―Engineering Change Request‖ very
liberally. It's a structured event in some companies, and in others it means only changes after
production begins. For this article, it means anything that moves the design in a different
direction — big or small.

Costs in an Engineering Change Request


The cost of each change increases because we have more time, money and energy
invested. Sometimes the cost is not actual, but inferred — because we don‘t ―want‖ to make the
change. We‘ve put a lot of effort into something and change means discarding at least some of
what we‘ve done. A graphical representation of the generally accepted Cost vs. Time with
respect to Design Freedom is this image.

Generally Accepted Inverse Relationship of Project Cost & Design Freedom

I argue there is not a ―Design Freedom‖ curve, but only a shift in the ―Costs‖ curve with each
Engineering Change Request. We are, after all, free to make changes any time. The impact, or
cost, of such changes (implicit or explicit) in money, time, reputation, opportunity, etc., are
really the issue. For grins, we can show a little shift in the Design Freedom curve as a nod to
project inertia, however. Let‘s look at a different perspective.

A Wider Perspective Of Cost & Change Relationships

In truth, the effects of activities in the earlier stages are amplified in the later stages. This
includes design changes, enhancements, Pivots and just getting distracted on things. An
Engineering Change Request fits this too. As in the graph, a little difference in the design at the
beginning certainly causes differences in the outcome. — This is not to say that an Engineering
Change Request is bad, quite to the contrary. Changes that improve your product or improve
your ability to sell the product, or reduce the cost of production, or enhance the customer
experience are almost always worth doing. The trick is to identify them as early as possible so
the cost impact of the changes are minimal.
In the graph above, the goal is to achieve the GREEN line of optimization. Please note: the
GREEN line and the BLUE line can represent a similar value in actual dollars when the
difference in costs is primarily time. If the BLUE line represents a product that takes 8 months
more to come to market, then the difference is really the opportunity costs of product not sold in
the 8 months.

An Example:
Company A has a great idea for a product improvement. They progress through the design
phases in a normal sort of way (RED curve). If they continue along the normal path (Light
BLUE curve) in the graphic below we see the potential result. However, during the design cycle,
Engineer Q gets some key feedback from the manufacturer and sees an opportunity for cost
savings in production. The Engineering Change Request is implemented, and development
proceeds along the RED curve. A big part of the cost ―blip‖ for the new idea is time. Remember,
the bottom axis is not a timeline, nor are the steps in symmetric increments.

Illustrated Engineering Change Request Effect

As noted, Company A had a significant cost involved in the Engineering Change Request, but as
shown, it is more than offset by the overall decrease in cost over time. So, what would have
happened if the cost saving idea had happened much earlier in the design process? How can we
help to find those items earlier?

Reputation costs
Reputation Costs means the reasonable and necessary fees, costs and expenses charged by any
public relations firm, crisis management firm or law firm retained by or on behalf of an Executive to
mitigate the adverse effects to such Executive‘s reputation as a result of a negative public statement
made about him or her by a Regulator.
Reputation cost is the various forms of economic and other harm that an entity may experience
as a result of damage to its reputation with customers or others. For example, if a company
experiences a cyber-attack in which its customer records are stolen or compromised, and
the attack is made public, customers may switch to other companies for which attacks have not
been made public, whether or not such attacks have occurred.

Design guidelines, rules and standards


Standards vs Guidelines
The difference between these is that standards are high in authority and limited in application,
whereas design guidelines are low in authority and are more general in application.

The best user interface guidelines are high level and contain widely applicable design principles.
The designer who intends to apply these principles should know which theoretical evidence
supports them and apply the guidelines at an early stage of the design life cycle.

Standards:
- High authority
- Little overlap
- Limited application (e.g. display area)
- Minimal interpretation; little knowledge required

Guidelines:
- Lower authority
- Conflicts, overlap, trade-offs
- Less focused
- Interpretation required; expert HCI knowledge needed

National and international bodies most commonly set interactive system design standards such as
British Standards Institution (BSI), the International Organisation for Standardisation (ISO).

Design guidelines can be found in various forms, for example, journal articles, technical reports,
general handbooks, and company house style guides. A good example of this is the guidelines
produced by Smith and Mosier (1986). An example of company house style guides is Apple's
Human Interface Guidelines. Standardisation in interface design can provide a number of benefits.

Design Guides
Design rules, in the form of standards and guidelines, provide designers with directions for a
good and quality design with the intention of increasing the usability of a system.

Software companies produce sets of guidelines for their own developers to follow. These
collections are called house standards, sometimes referred to as house style guides. These are
more important in industry, especially in large organisations where commonality can be vital.
There are two main types of style guides: commercial style guides and corporate style guides.

Hardware and software manufacturers produce commercial style guides and corporate style
guides are developed by companies for their own internal use. The advantage of using these
guides is to enhance usability of a product through consistency. However, there are other
reasons, for example, software developed for Microsoft, Apple Macintosh or IBM PC maintains
its "look and feel;" across many product lines.

Though commercial guides often contain high level design principles, in most cases these
documents are based on low level design rules which have been developed from high level
design principles. If an organisation aims to develop their own corporate style guides for
software, generally the starting point will be one of the commercial style guides.

Many design style questions have to be addressed more specifically when developing corporate
style guides. For example, how should dialog windows be organised? Which styles (colour,
fonts, etc.) should be used?

The roots for design rules may be psychological, cognitive, ergonomic, sociological or
computational, but may, or may not be, supported by empirical evidence.

Design review
A design review is a milestone within a product development process whereby a design is
evaluated against its requirements in order to verify the outcomes of previous activities and
identify issues before committing to further work, and, if need be, re-prioritise it. The ultimate
design review, if successful, therefore triggers the product launch or product release.
The conduct of design reviews is compulsory as part of design controls, when developing
products in certain regulated contexts such as medical devices.
By definition, a review must include persons who are external to the design team.
Design Review Process
Overview
The Design Review Process is a mechanism for ensuring design standards, alignment, and
diligence throughout the course of the product design process:
Standards
 Ensure that designs meet appropriate standards for consistency, accessibility, usability,
internationalizability, rebrandability, download time, etc.
Alignment
 Ensure that designs meet business goals.
 Ensure that designs can be well integrated into the brands (if any) on which they will be
deployed.
 Minimize late-stage changes to product requirements and concepts.
Diligence
 Realize maximum value from early-stage design methodologies.
 Increase accountability and clarity of action plans by keeping minutes of each design review.
 Involve people outside the design team at appropriate junctures.
Steps in the Design Review Process
1. Conceptual Design Preview
Use this step to ensure that the initial design direction maps to the business goals and user
needs, and to review the design for alignment with broader initiatives and possible integration
with other product designs.
2. Design Standards Checkpoint
Use this step to confirm that the design meets required design standards.

3. UI Design Review
Use this step to review specific interaction behaviors and to provide guidance to designers on
problematic issues.
4. Creative Direction Review
Use this step to ensure that the visual design maps to the creative direction of the project.
Scheduling Product Design Reviews
Use this section to list the regularly scheduled meeting times for Product Design Reviews.
Include the review meeting days and times (for example, “Wednesdays from 9 am to 10
pm”), and any other information about review meeting schedules (for example, indicate
here if no formal review meeting is scheduled for a particular step in the Design Review
Process).
Sending Review Materials Out in Advance
Use this section to indicate how the materials to be reviewed at the meetings should be
made available to the reviewers. Include the following types of information:
 How far in advance the review materials need to be made available before the
scheduled review meeting (for example, one business day, two business days, one
week, etc.)
 How materials should be distributed to the reviewers (for example, placed on a web
server, sent as e-mail attachments, etc.)
 The format materials should use (for example, .doc, .pdf, .html, .jpg, .gif, all of the
above, etc.)
 A list of related materials that should also be made available (for example, meeting
minutes from earlier, related reviews, a Creative Brief, etc.)
Product Design Review Minutes
Use this section to describe when and how meeting minutes should be generated:
 Indicate if any meetings do not require minutes.
 Indicate who is responsible for recording and distributing minutes at each meeting.
You can indicate a specific person or a role (for example, “Design Lead”).
 Indicate how or where the minutes will be made available to team members.

Review of different aspects of interface


Whether you are an interaction designer, usability professional, technical communicator, quality
assurance engineer, or developer, reviewing a user interface typically means identifying
 usability problems related to the layout, logical flow, and structure of the interface and
inconsistencies in the design
 non-compliance with standards
 ambiguous wording in labels, dialog boxes, error messages, and onscreen user assistance
 functional errors

While user interface (UI) reviews often occur at the end of the development cycle, I recommend
that you get involved early in the process, preferably when the designers create the initial
wireframes or paper prototypes. Why? Making changes early in the process reduces development
costs. Plus, if you identify usability issues early, it‘s much more likely the team can remedy them
before launch, preventing bad reviews like that shown in the Figure below, negative word-of-
mouth, and the lost sales that result from them.

Figure —A bad review of an application in Australian Personal Computer

Before reviewing an application‘s user interface, make sure you know who the users will be,
have clear goals in mind, and remember, it‘s all about improving the user experience.

What Does a Comprehensive UI Review Involve?


Based on my experience working with desktop, enterprise, and Web applications over the past
decade, I‘ve identified five general areas you should consider when reviewing a user interface:

 design elements
 text elements
 link elements
 visual elements
 user interactions

Reviewing Design Elements


"Design is not about decoration. It's about communication and problem solving."

When assessing the design elements of a user interface, you need to consider their overall
aesthetics—Do they look good?—and functionality— Do they work well?

In many ways, what makes a user interface look good is entirely subjective. What looks good to
me might not look good to you. But reviewing certain elements in the overall design of a window
or Web page removes much of this subjectivity—for example, the labels; the size, placement,
and alignment of user interface elements; and the way a user interface communicates its
workflows.
Your company or product user interface guidelines should cover all of these design
characteristics. Then, you can objectively judge whether a product‘s UI design follows the
guidelines, and your review has nothing to do with your personal opinion. Consistency is an
important design element of user interfaces. Developing UI guidelines helps ensure consistency.

Tip—If your company does not have user interface guidelines or a style guide, create them.
Creating guidelines or style guides is a big undertaking. To save a lot of effort, base them on the
standards from Microsoft, Apple, or Sun, and document only the exceptions that are specific to
your organization. Whether you‘re working on desktop or Web applications, a wide variety
of style guides is available online.

Guidelines for Design Elements


When reviewing design elements, consider the following:
 Are all user interface elements labeled?
 Are related features or functions grouped together and labeled appropriately?
 Are there visual cues that separate elements from one another and from other groups of
elements—for example, group boxes, tabs, expandable sections, or areas with different
background colors?
 Are all labels correctly aligned with their user interface elements? Check the horizontal
alignment of labels with boxes, check boxes, and option buttons. Use a screen ruler
like Screen Ruler from Micro Fox Software to confirm alignment if you aren‘t sure.
 If most of your users are from Western cultures, does the placement of user interface
elements follow the reading pattern from upper left to lower right?
 Does the user interface accommodate users from other cultures who read from right to left?
 Is the vertical spacing between individual elements the same for similar elements? Use a
screen ruler to measure the spacing if you‘re not sure.
 Is there sufficient space in windows, on Web pages, on tabs, or in boxes to accommodate
labels and other text in all required languages? For example, labels in German can take up to
30% more space than English labels. Have the developers allowed for the possibility of other
languages, or have they hard-coded fixed-length values and labels? To test whether a text
box has a fixed length, try typing more characters than it appears it can contain.
Here are some examples of the kinds of problems you might find when assessing design
elements. The tabbed page shown in Figure below contains misaligned labels, check boxes, and
text boxes, as the dashed lines show.

Figure—Misaligned design elements

The Figure below shows examples of command buttons that are too small for their labels and
buttons for which there isn‘t sufficient space.
Figure—Poorly rendered command buttons

The form in the Figure below is a jumble. Text boxes and combo boxes have different heights,
descenders are cut off, arrow buttons aren‘t the same size, and labels are in title case, are overly
long, and aren‘t parallel. (See ―Reviewing Text Elements‖ for more information about text
issues.)

Figure —Inconsistent sizes, misaligned interface controls, and inconsistent labels

Assessing Logical Flow


When reviewing a user interface, you must assess its logical flow. By logical flow, I mean
whether the user interface is logical to your audience. Ask these questions:
 When a user presses the Tab key, where does the focus jump? Does it jump all over the user
interface, or does it follow your audience‘s expectations based on their reading patterns—for
example, left to right, top to bottom?
 Which element has the focus when a page, window, or tab first appears? Is it on the first
element, the OK or Cancel button, or some other element? Is the focus sufficiently visible, so
users can easily see where any insertion point is by default?
 Can users access all elements in the user interface using the Tab key or other keys? Are there
elements users can select only using a mouse or other pointing device?
 Can a screen reader such as JAWS read the user interface?
 Can users control the user interface using voice commands?
 Do users have to scroll horizontally or vertically to see the entire user interface, even on a
screen of the target resolution and size? (Vertical scrolling is usually acceptable, but
generally, horizontal scrolling is not.)

Reviewing Text Elements


When reviewing text user interface elements, consider these three Cs of communication:
 Clarity—the textual messages a user receives from an application must be clear.
 Consistency—Use terms and labels for user interface controls that are familiar to users.
 Conciseness—be as brief as possible, but don’t sacrifice meaning for brevity.
Getting each of these aspects right for every text element in a user interface helps reduce that
other C—confusion.
When reviewing a user interface, be sure to review these textual elements:

 title bars
 status bars
 sidebars
 navigation and menu items—all levels
 items in list boxes and drop-down lists
 group box labels
 headings
 text box labels
 column headers
 icon labels
 button labels
 ToolTips
 link text—usually in Web applications
 graphic elements containing text
 user assistance text
 Help text
 confirmation messages
 error messages

Consistency in Terminology and Spelling


When reviewing a user interface, look for:
 misspellings and typos
 consistent spelling of product names and terms
 appropriate use of case and capitalization, as specified by your style guide
Many languages have distinct regional variations in spelling to which a user interface might need
to conform. For example, if your user interface is in English, should the spelling conform to US
English or British English?
Remember that correctness means the user interface complies with your company or product
style guide.

Correct Punctuation
Review the punctuation of text elements for correctness and consistency. In particular, look for
the following:

 inconsistencies in word hyphenation


 appropriate use of closing punctuation for text elements
For example, your style guide probably specifies a closing colon following a text box or list
label. Make sure all labels conform to the guidelines.

Appropriate Use of Fonts


When reviewing user interface text, check the following:
 Is the font family appropriate to the application and familiar to the audience? For example,
Times New Roman rarely, if ever, appears in a desktop application, and Comic Sans never
appears in an enterprise Web application.
 Are bold, italics, and underlining used consistently, in compliance with your style guide? For
example, Web style guides specify that no text in a user interface should be underlined unless
it is a link.
 In Web applications, can a user increase or decrease the font size, or is the size hard-coded?

Appropriate Use of Language


Check the use of language in a user interface:

 Ensure the user interface uses simple language.


 Ensure the use of language is appropriate for the expected audience.
 Watch out for abbreviations and acronyms. Avoid using them unless you are absolutely sure
your audience understands them.
 Look for parallel structure in text elements such as headings, lists, and the labels of text
boxes, check boxes, and option buttons. For example, do they consistently use the gerund
form, ending in -ing, or the imperative form of verbs? See Figure below for an example of
parallel labels.

Figure—Ensure that labels are parallel in structure

Assessing Internationalization and Localization Issues


When reviewing text elements, consider what impact other languages and alphabets might have
on how the interface text displays.

 Is the space for titles, labels, and values in fields flexible enough to allow for other
languages? Or might labels wrap badly, overlap, and be obscured by other labels or other
user interface elements or be truncated so they are unreadable?
 What happens to text when it‘s translated to a right-to-left or double-byte language?
 Is the user interface text hard-coded into the application, or is it stored in linked resource
files? When developing a new application, make sure developers use linked resource files—
even if they aren't currently considering future support for other languages. In the long term,
maintaining text in resource files is much easier for everyone—particularly developers,
technical writers, and technical editors—even if the application never gets translated. (See the
sketch after this list.)
 Is there anything culturally specific about the user interface text? For example, the United
States uses ZIP code when referring to a postal code, but other countries use other terms for
postal codes. In the finance industry, many terms, abbreviations, and acronyms apply only in
specific regions, even when countries use the same language. For example, the United States
has the IRS, while Australia has the ATO; the United States refers to IRAs and retirement
funds, while Australia refers to superannuation.
 Are there any regional language differences within a single country you need to consider?
For example, does a drop-down list of beverages use soda or pop or another word for a
carbonated drink?
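
As promised above, here is a minimal Python sketch of the resource-file idea (the structure is assumed for illustration; real projects would load strings from .po, .resx, or similar files rather than hard-coding dictionaries): UI code looks labels up by key, per-locale tables supply the translations, and the lookup falls back to English when a translation is missing.

# Hypothetical per-locale resource tables; real projects would load these from
# .po, .properties, or .resx files rather than hard-coding them.
RESOURCES = {
    "en":    {"save": "Save", "cancel": "Cancel", "postal_code": "ZIP code"},
    "de":    {"save": "Speichern", "cancel": "Abbrechen", "postal_code": "Postleitzahl"},
    "en-AU": {"postal_code": "Postcode"},   # regional override only where it differs
}

def label(key, locale="en"):
    """Look up a UI string, falling back to the base language and then to English."""
    for candidate in (locale, locale.split("-")[0], "en"):
        if key in RESOURCES.get(candidate, {}):
            return RESOURCES[candidate][key]
    return key   # last resort: show the key rather than failing

print(label("postal_code", "en-AU"))   # "Postcode"
print(label("save", "de"))             # "Speichern" - note it is longer than "Save"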

Reviewing Error Messages


Error messages must be clear and useful. We‘ve all seen plenty of error messages that are
neither. Good error messages tell users what went wrong—and possibly why—provide one or
more solutions for fixing the error, and offer a set of buttons that relate directly to the next action
a user should take. Options like OK, Continue, and Cancel are not appropriate when an error
message is a question. The possible responses to a question should be Yes and No. Too many
options in an error message can confuse users, so they have no idea what will happen when they
click a button. When reviewing error message text, your aim is to reduce confusion.
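The Python sketch below (an illustration, not from the original notes) captures that structure as a small data type: every error carries what went wrong, the likely cause, a remedy, and only the actions that make sense as a next step.

from dataclasses import dataclass, field

@dataclass
class ErrorMessage:
    """A well-formed error: what went wrong, why, how to fix it, and relevant actions only."""
    problem: str
    cause: str
    remedy: str
    actions: list = field(default_factory=lambda: ["OK"])

    def render(self):
        return f"{self.problem}\n{self.cause}\n{self.remedy}\nButtons: {' | '.join(self.actions)}"

msg = ErrorMessage(
    problem="The report could not be saved.",
    cause="The network drive H: is not available.",
    remedy="Reconnect to the network and try again, or save a local copy.",
    actions=["Retry", "Save Locally", "Cancel"],   # each button maps to a sensible next step
)
print(msg.render())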

Tip—Ask the developers on your team for your application‘s resource file, which contains all of
the error messages and other UI text, so you don't have to try to create every error condition to
review the message text. Without a resource file, you're likely to miss some error messages,
because you may never create the error condition that displays them.

Reviewing Link Elements


There are links in all Web applications and in some desktop applications, but not all of them are
blue underlined text. Links help users navigate—both within an application and to external
locations.
Navigational links include menus of all types, breadcrumb trails, links to other pages, links to
Help, and even email addresses. Often links in applications display something on the same
page—such as a popup dialog box, an expandable panel, or user assistance—or clickable image
maps with multiple hotspots display related information. When reviewing a user interface:
 Ensure it uses similar link elements consistently. For example, if an application has a page
browser, do the link elements always look the same? Or are they different like those in the
Figure below?
 Review the font, color, and style of active and visited links.
 Check the behavior when users hover over links.
 Make sure the text matches the link‘s action.
 Verify that links take users where they would expect to go.

Figure—Inconsistent page browsers within a single application

Reviewing Visual Elements


Visual elements include graphics, color, and display mechanisms. You don‘t have to be a graphic
designer to review the visual elements of a user interface. As with text and links, you‘re looking
for usefulness and consistency. You‘ll notice consistency by its absence.

Graphics Must Add Value
As a general rule, if a graphic image is not necessary, don‘t use it. Such an image just becomes
noise and uses up bandwidth. However, some graphics can help guide users through a process or
provide visual cues showing progress along a sequence of steps. When reviewing a user
interface‘s visual elements, consider the following:
 Are any graphics or graphic elements unnecessary?
 Are any graphic elements too big or too small?
 If images don‘t display, what happens to the user interface? Does it display alternate or title
text, or do you get a placeholder box?
 Did your developers use consistent, familiar, and recognizable images for common
functions? For example, although we no longer use floppy disks, that icon is instantly
recognizable as a Save icon. If you use a different image for save, such as a lifesaver, your
users may not understand that it means save and might assume it means help.
 Are any of the graphic elements blurred or jagged?
 Do the graphic elements use transparency appropriately?
 Do hotspots on image maps cover the relevant parts of the image—for example, the countries
on a map?

Use Color Carefully


Color is often a matter of personal preference. Be aware that people will interpret color names
differently. For example, if I recommend using purple in a user interface, which purple do I
mean—violet, mauve, lilac, burgundy, maroon, or just plain purple? The Figure below shows
many shades of the color purple.

Figure 7—Many shades of the color purple

Color also has different meanings in different cultures. In Western societies, white means purity,
while in some Asian cultures, white means death. Depending on where you live, red might
represent danger or good luck.
Most importantly, though, colors render differently on a screen than they do on paper. Different
monitors, resolutions, flicker rates, brightness and contrast settings, gamma settings, and angles
of view affect how color appears on a screen. And ambient light conditions from overhead lights,
sunlight, and fluorescent lights can drastically change a color‘s appearance.
Finally, a small percentage of the population has some form of color-deficient vision or other
visual impairment. This also contributes to the different ways in which users perceive color. On
the Vischeck Web site, you can see how those with various types of color-deficient vision see
Web pages.
Some things to consider when reviewing the color in a user interface include the following:
 If there is information in an application that users are likely to print, try printing it on various
color and black-and-white printers and on different paper stocks. See how the color renders
when printed. Set the print options to gray scale and test that printout, too.
 Check the color palette your developers used. Is it a limited range of colors, or does it use all
16 million colors? Older monitors might not display the full range of colors, although this is
far less likely today than it was five years ago.
 What happens if a user or administrator changes the color scheme? For example, savvy users
can change their Windows color scheme and display fonts or the colors, fonts, and cascading
style sheets (CSS) their browser uses by default. If graphic user interface elements—for
example, the background color for buttons—are hard-coded to the classic gray of older
Windows interfaces, they‘ll look completely out of place if users customize their Windows
colors. See Figures 1 and 2 below for examples of elements whose color did not change to
match user settings.
 Has the designer used colors to identify standard or required elements—for example, a
required field? Some Web applications indicate required fields with a red asterisk or some
other symbol, color, or font weight. Other Web applications shade required text boxes with a
background color. Are all required fields treated in the same way?

Figure —Before: A skinnable Web application‘s default colors and graphics

Figure —After: Changing colors in the CSS did not change the hard-coded colors and
graphics

Consider Resolutions, Screen Sizes, and Displays


When reviewing the visual elements of a user interface, you must also consider various aspects
of the computer display—resolutions, screen sizes, window sizes, and so on.
 Test how pages display at various resolutions. Change the computer settings to reflect an
800x600 screen, a 1024x768 screen, a 1440x1920 screen, and any other screen sizes your
users might use. Also, resize the fonts—make them very large and very small—and see what
happens. You‘ll readily notice what visual elements have potential readability problems and
accessibility issues for users with visual impairments.
 Test how the pages display on various devices. Web applications now must consider multiple
browsers and multiple devices, including those that have small screens—such as PDAs,
mobile phones, iPods, and BlackBerry devices—and those that have large screens—such as
wide-screen monitors or large LCD television screens. If you do not have access to all of
these different types of devices, try resizing your browser window to the dimensions of their
screens. Even if the application was not designed for such screens, test them anyway, just in
case a user tries to use the application on another device.
 For Web applications:
o Test how the pages display in the most popular browsers. Microsoft Internet Explorer and
Mozilla Firefox make up around 90% of the market, so at least test in those browsers. If
Mac users are likely to use a Web application, test using the Safari browser. You can test
how Web pages display in all major browsers using Browsershots.
o Turn off CSS, JavaScript, cookies, and frames in your Web browser. Is the page still
usable? It might look ugly, but users should still be able to use it. Can you see the
navigation links and the content? If your browser doesn‘t let you turn off these elements,
try one of these free browser add-ons:
 for Firefox—the Web Developer toolbar
 for Internet Explorer—the Developer Toolbar or the Web Accessibility Toolbar
o Validate all Web pages. Use some of the markup validation tools that validate a page‘s
DOCTYPE, HTML, and CSS, such as the W3C Markup Validation Service and CSS
Validator, or an all-in-one validator like the one from AI Internet Solutions.
 Resize all resizable windows. Do the elements in the window resize appropriately? For
example, if the window contains a list box, do the dimensions of this box resize when you
make the window smaller or larger? Does the application save the size and position of a
window once a user has resized or moved it?
 If a user is likely to print a page or window, print it. While most browsers have a print
preview option or your application might have one, actually print it on paper to see how it
really looks. What happens to the headers, footers, sidebars, and navigation elements? Is any
content missing or cut off—such as large images or wide tables? A well-designed user
interface needs to print readable content.
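If manually changing the display settings for every resolution and device listed above becomes tedious, a browser-automation script can capture the same page at several viewport sizes for side-by-side comparison. The sketch below assumes the Playwright library is installed (npm install playwright) and uses https://example.com/app as a placeholder URL; it is a starting point rather than a complete test suite.

// Minimal sketch: capture one page at several viewport sizes with Playwright.
// The URL is a placeholder; substitute the application under review.
import { chromium } from "playwright";

const viewports = [
  { width: 800, height: 600 },
  { width: 1024, height: 768 },
  { width: 1440, height: 900 },
  { width: 375, height: 667 },  // a small, phone-sized screen
];

async function captureAtResolutions(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  for (const size of viewports) {
    await page.setViewportSize(size);
    await page.goto(url);
    // One full-page screenshot per resolution, for side-by-side review.
    await page.screenshot({ path: `review-${size.width}x${size.height}.png`, fullPage: true });
  }
  await browser.close();
}

captureAtResolutions("https://example.com/app").catch(console.error);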

Reviewing User Interactions


User interactions are the dynamic aspects of a user interface—such as clicking a button,
performing a search, or saving a record. What happens when you perform such actions? What
results or feedback do you get? Do you get logical and appropriate results? Look out for results
that don’t behave as you expect them to.

Test All Buttons, Links, and Keys
When reviewing a user interface, test these user interactions:

 Click every menu item, button, and toolbar button. Do they do what you expect them to do?
Do they take you where you expect them to go? For example, is there a clear difference
between what happens when you click Apply, OK, or Cancel?
 Click all other controls, including tabs, spin boxes, drop-down list boxes, and other user
interface elements.
 Try to access all functionality using the keyboard. For example, Alt+<Character> should
display the relevant menu, Tab should move the insertion point to the next field, Enter should
execute a command if the focus is on a button, and Esc should close a dialog box.
 Check what happens when you press the function keys. For example, on a PC, F1 should
display Help and F5 should refresh the browser window or a directory.
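The keyboard behaviour being checked here is typically wired up with handlers along the following lines. This is a minimal, hypothetical sketch of the Esc-to-close behaviour a reviewer expects to find, not the application's actual code; the element id is an assumption.

// Minimal sketch of the Esc-to-close behaviour a reviewer expects to see.
// The "settings-dialog" id is an assumption for illustration only.
const dialog = document.getElementById("settings-dialog");

document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "Escape" && dialog && !dialog.hidden) {
    dialog.hidden = true;  // Esc closes the open dialog
  }
  // Enter on a focused native button is handled by the browser itself, so any
  // custom control styled to look like a button needs the same behaviour added.
});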

Check System Response Times


Before measuring system response times, make sure your system configuration matches the
design specifications for the application. Otherwise, the developers might reject or ignore your
comments about performance.

System response times determine how long it takes to complete specific actions. Are the times
within acceptable ranges—that is, do they meet your expectations based on your experience
performing similar tasks using other software? Some interactions whose performance you should
check include the following:
 opening and closing an application
 loading new pages, dialog boxes, and tabs
 loading new and existing records
 responses to user interactions such as clicking a button
 running processes such as saving a record or displaying a report
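To put numbers on these checks rather than relying on perception alone, you can time an interaction with the browser's performance clock. The sketch below is illustrative; saveRecord() is a placeholder for whatever operation is being measured.

// Minimal sketch: time a user-visible action with performance.now().
// `saveRecord` is a placeholder for the real operation being measured.
async function saveRecord(): Promise<void> {
  // ...the application's actual save logic would run here...
}

async function measure(label: string, action: () => Promise<void>): Promise<void> {
  const start = performance.now();
  await action();
  const elapsedMs = performance.now() - start;
  console.log(`${label}: ${elapsedMs.toFixed(0)} ms`);  // compare against your expectations
}

measure("Save record", saveRecord).catch(console.error);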

Test Any Search Facility


If the application includes a search facility, perform several searches to check what search
functions are available and what results they produce.

 Search for single words—both whole and partial words. Try any wildcard characters—such
as ? and *—that let you match just part of a word.
 Search for phrases and multiple terms, with and without using quotation marks. Do the
search results differ? Can you use either single or double quotation marks, and do the results
differ if you do?
 Search for multiple terms using Boolean operators such as +, -, !, |, =, AND, OR, and NOT.
What works? What doesn‘t?
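Keeping the test queries in a small script makes it easy to rerun the same set after every change. The sketch below is illustrative only: the /api/search endpoint and the shape of its JSON response are assumptions, so adapt both to the application being reviewed.

// Minimal sketch: run a fixed set of search queries and log the result counts.
// The /api/search endpoint and its JSON response shape are assumptions.
const testQueries = [
  "report",            // whole word
  "rep*",              // wildcard
  '"monthly report"',  // quoted phrase
  "report AND 2023",   // Boolean operators
  "report -draft",
];

async function runSearchChecks(): Promise<void> {
  for (const query of testQueries) {
    const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
    const results: unknown[] = await response.json();
    console.log(query, "->", results.length, "results");
  }
}

runSearchChecks().catch(console.error);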

Documenting/Reporting Issues
As you review each Web page or screen of an application, record your findings. No matter how
you report issues, always provide constructive criticism and minimize personal opinion.
Certainly avoid comments such as I don’t like that. Back up your assertions and any suggestions
you make for improving the application‘s user interface with objective reasons relating to the
following:
 usability
 readability
 accessibility
 familiarity
 consistency
 conformance with accepted industry standards—for example, the Checklist of Checkpoints
for Web Content Accessibility Guidelines 1.0
 legislative compliance—for example, Section 508 of the Rehabilitation Act (USA), the
British Disability Discrimination Act (UK), or the Disability Discrimination Act (Australia)

Using a Bug Tracking System


If possible, record both the issues you‘ve found and your suggestions for making improvements
in the development team‘s bug tracking system, so the developers will be more likely to review
and address your comments and recommendations. Avoid labeling your bug reports as cosmetic
bugs, because developers rarely review or fix cosmetic issues, especially if there‘s a strong push
to get the product released as soon as possible.

Creating a Formal Report
If you document your findings in a formal report to the development team or to management,
clearly illustrate the issues and the changes you‘re recommending. To illustrate issues and
document your suggested changes, do the following:

 Take screen shots, using tools such as SnagIt from TechSmith.


 Capture an entire Web page that requires scrolling using SnagIt or Paparazzi!
 Add callouts to screen shots, as shown in the Figure below.
 Create a PDF document and add comments to the file, then either refer to it or incorporate it
in your report.
Figure —A screen shot with callouts

Once you‘ve created a formal report of your findings, don‘t let the product team ignore it. Set up
times to talk directly with the developers, team leads, or project managers who can help you get
your feedback addressed. Present your findings as an advocate for users, and let the team see
your passion for meeting users‘ needs.

Reporting Issues Less Formally
Sometimes you just want to report an issue to the one person who can fix the problem. In such
instances, consider one of these less formal reporting methods:
 Sit with the developer, show him the problems on your computer, and confirm the fixes as he
makes them.
 If you cannot sit down with a developer to show him the problems, capture screen videos that
show the issues. A video is also very useful when a developer says he can‘t replicate a
problem. Capture them using tools such as Captivate, Camtasia, or ViewletBuilder. Refer to
any videos in your formal report.
 Complete a checklist showing the items you‘ve checked as you reviewed each part of an
application—one checklist per screen. Developers can reuse your checklist as they continue
developing the application. Here is a link to a comprehensive checklist (PDF) that covers all of
the guidelines discussed in this article.

Why Tackle a User Interface Review?


Reviewing an application‘s user interface is not about your opinions. Your aim is to improve a
user experience to reduce potential user confusion and meet users‘ needs. By thoroughly
reviewing design elements, text elements, link elements, visual elements, and user interactions,
you can ensure users have a positive experience with your application.
Although you can use some software tools to help you with this task, ultimately, your knowledge
and understanding of user interface guidelines, standards, and interactions are your best tools for
reviewing a user interface.

Accessibility tools and HTML validation tools


Web accessibility evaluation tools are software programs or online services that help you
determine if web content meets accessibility guidelines.
A list of evaluation tools you can use, and how they can assist you, includes:
 508checker.com lets you quickly check a webpage for Section 508 compliance and learn more
about how to become 508 compliant across your entire organization

 A-Tester checks the pre-enhanced version of a web page designed with progressive
enhancement against Evaluera's "WCAG 2.0 Level-AA conformance statements for HTML5
foundation markup" making a report that can serve as a broad and easily confirmed WCAG
2.0 Level-AA claim, even for enhanced versions.
 a11yTools is a collection of HTML Web Accessibility Testing Tools in one location on your
iPhone and iPad for quick and easy Accessibility testing. Run your favorite Accessibility
testing tool and easily take a screenshot on your phone showing the a11y error to developers
and designers.
 a11y-checker warns about accessibility issues in HTML markup; you can see the issues by
opening the browser console and reading the warnings.
 a11y.css is a CSS file that warns developers about possible risks and mistakes in HTML code. It
can also be used to roughly evaluate a site's quality by simply including it as an external
stylesheet, as a bookmarklet, or as a user style.
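As an example of how lightweight some of these tools are, a stylesheet such as a11y.css can be dropped onto the page under review with a few lines of script, which is roughly how its bookmarklet works. The sketch below is illustrative; the stylesheet URL is a placeholder, so substitute the link published by the a11y.css project.

// Minimal sketch: inject a diagnostic stylesheet (such as a11y.css) into the
// current page. The URL is a placeholder, not the project's real address.
const auditStylesheet = document.createElement("link");
auditStylesheet.rel = "stylesheet";
auditStylesheet.href = "https://example.com/a11y.css";  // placeholder URL
document.head.appendChild(auditStylesheet);
// The stylesheet then highlights markup it considers risky or invalid.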

Using/developing style guides


A style guide clearly demonstrates how all interface elements and components are to be visually
represented. It‘s essentially a master reference for the user interface (UI).

Many of today‘s top companies—like Apple and Target—use their logos prominently and
consistently, making their brand and the quality it promises immediately recognizable. For
companies who sell services in addition to products, company marketing relies heavily on what
you say, or write, and how you say it. Just like with logos, consistency is key. An inconsistent
writing style can reduce the perceived quality of your product or services. Enter the style guide.

Developing a style guide—an internal reference of writing and design standards—can help staff
at organizations of any size navigate the often complex world of communication. It can save time
and allow your content‘s message to shine through without distraction. Own your brand!

Meaning of a style guide


Style guides come in many shapes and forms, ranging from highly detailed references like
the Chicago Manual of Style—which has more than 1,000 pages—to concise, single-page
documents. The information included should help clear up any common misconceptions and
ambiguities in communication that your team may face. Ultimately, for a style guide to help
bolster your brand, your staff needs to actually use it.
Elements of a style guide may include:
 Punctuation rules, such as whether or not to use the Oxford comma
 Spelling of commonly used terms
 Citation conventions
 Design guidelines (e.g., company colors, font, spacing)
 Guidance on writing for different audiences

Developing a style guide
Thankfully, you don‘t need to start from scratch. There are already several excellent resources
that can answer your questions about subject-verb agreement, compound sentences, or citations.
Consider the following process to develop a style guide that best fits your company‘s needs:

1. Think about who will use the style guide and what they need to know. Should your guide
be short, or would it be more useful if it were comprehensive and easy to navigate?
2. Look through your organization‘s documents and identify key requirements for consistent
branding. A review of both drafts and final products will also help you identify common
errors and inconsistencies among colleagues.
3. Assess existing style guides for ideas on how to organize a list of common errors alongside
your desired style elements (such as company fonts, colors, and spacing).
4. If your company relies on client style guides for communications, be sure to differentiate
between the internal style and the client style. For instance, some of our clients use the
word "cybersecurity" while others prefer this to be two words, "cyber security."
5. Include examples for correct and incorrect style usage, particularly for complex issues, so
that your message is clear.
6. If your company requires a comprehensive style guide, consider making a desktop
reference for your colleagues. A one-pager that can be used quickly is extremely helpful in
preventing common errors.
Once you‘ve created a guide, consult with your colleagues to make sure that you have covered
all relevant topics and agree on style conventions. A style guide is only useful if you have buy-in
and if you can defend your reasoning for including certain standards. Is a certain rule too fluid to
be useful? Is an Oxford comma really necessary?

Here at Nexight, we develop products that distill immense amounts of information to its core. By
using standards for writing style, we can ensure that our communications products are consistent,
effective, and high quality.

After corporate style and branding, often the next most important use of the style guide is to
answer internal questions about presentation. Your style guide should make clear how authors
present:

 Headings (and how they are capitalized)


 Lists (whether they are capitalized and how they are punctuated)
 Numbers (when they should be spelled in full)
 Rules for chapter, figure and table headings (including numbering)

Tools like PerfectIt can help to ensure that presentation is consistent. What's more, there are
free user guides which show how you can customize PerfectIt and share its style sheets among
colleagues so that all documents in your organization are checked the same way.
The key to determining what goes in the style guide is to find out how usage differs in your
company. The best way to do that is to bring more people into the process of building the style

guide. That process is reviewed below, but first this article looks at common mistakes in the
preparation of style guides.

2. Designing written instruments to gather user interface feedback
Meaning & Importance of feedback
User feedback is the information, insights, issues, and input shared by your community about
their experiences with your company, product, or services. This feedback guides improvements
of the customer experience and can empower positive change in any business — even (and
especially) when it‘s negative.

Why is User feedback important?


User feedback is important because it serves as a guiding resource for the growth of your
company. Don‘t you want to know what you‘re getting right — and wrong — as a business in
the eyes of your customers?
Within the good and the bad, you can find gems that make it easier to adjust and adapt the
customer experience over time. In short, feedback is the way to keep your community at the
heart of everything you do.
Visual feedback, such as a button highlighting when the user hovers over it or clicks it, has three
important functions.

1. Responsiveness: It signals to the user that the element is clickable and, where a control has
separate parts, which part is active, since each part is highlighted separately.
2. Engagement: The visual effect engages the attention. If done well, the visual effect will
delight the user. It gives life to the static elements on the screen.
3. Activity: Visual changes indicate that the underlying system is changing. Animation for
its own sake can distract, but as part of the visual feedback can help explain how the
system works.
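As a rough idea of how such feedback is wired up, the sketch below toggles a pressed state while a button is held down and shows a brief confirmation once the action completes. The element id, class name, and sendEmail() function are assumptions for illustration only.

// Minimal sketch of visual feedback on a button: a pressed state while the
// pointer is down, and a brief confirmation once the action finishes.
// The "send-button" id, "is-pressed" class, and sendEmail() are assumptions.
const sendButton = document.getElementById("send-button") as HTMLButtonElement;

async function sendEmail(): Promise<void> {
  // ...the application's real send logic would run here...
}

sendButton.addEventListener("pointerdown", () => sendButton.classList.add("is-pressed"));
sendButton.addEventListener("pointerup", () => sendButton.classList.remove("is-pressed"));

sendButton.addEventListener("click", async () => {
  await sendEmail();
  sendButton.textContent = "Sent!";                          // quick confirmation
  setTimeout(() => (sendButton.textContent = "Send"), 1500); // restore the label
});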

Written instruments for gathering interface feedback


Every project manager knows that you need the right information to make your project a success.
Vital information such as a project's requirements, the client's expectations and assumptions, and an
understanding of the risks is a key part of the planning process.
But how do you get the right information? Here are the pros and cons of ten techniques:
Interviews
Interviewing stakeholders and users is often the first port of call when gathering requirements for
projects.

Pros:
+ You can achieve a deeper level of engagement when you‘re talking to someone one on one.
+ It‘s easier to arrange a session with one person than to co-ordinate a group.
+ You gain greater 'buy-in' from stakeholders, helping them to feel more valued and have greater
ownership of the project. As a result, they could offer more support in the later stages of your
project.
Cons:
– It‘s more time consuming than arranging one session with a group of people
– You miss out on people bouncing ideas off one another in a group session
Our tip: As a top priority, you may want to focus your time on interviewing the stakeholders with
the most expertise and/or those with the greatest power over whether your project is a success or
failure.
Focus groups
Focus groups typically include around 6 to 12 people who represent your project's end users or
customers, usually to gather research about their views or perceptions towards a specific concept,
product or service.
Pros:
+ This is geared towards understanding the real needs of your end users, rather than relying on
assumptions about what those needs are, so you'll be far better informed.
+ Users feel their opinions are respected and valued, and may help make the project a success by
'championing it' when it goes live.
Cons:
– It takes a lot of logistics to find the right people and get them in the same room at the same
time.
– You may be raising expectations among users that you can‘t meet (if suggestions have been
overly optimistic) and causing disappointment when the project goes live.
– You‘ll need an experienced facilitator with expertise – either external (which costs money), or
an internal member of staff (who may take time and money to train, if they‘re not skilled enough).
Our tip: Manage expectations up front: let everyone know that their suggestions will provide
valuable insight to help you improve, but may not turn into deliverables in the final project.

Facilitated workshops
A facilitated workshop is an intensive, structured meeting led by an impartial facilitator. This
allows you, the project manager, to focus on gathering project requirements.
Pros:
+ You‘re able to bounce ideas off of multiple people
+ With a bank of knowledge in the room, you get more for your time.
+ You can tap into diverse knowledge and experience
Cons:
– To get the best results you‘ll need an experienced facilitator – either external (which costs
money), or an internal member of staff (who may take time and money to train, if they're not
skilled enough).
– Without a facilitator there is the danger of the quieter participants being drowned out by louder
ones
– It can be hard to co-ordinate stakeholders' busy schedules to get them together
Our tip: Where possible, you should have an experienced facilitator to lead the workshop, and a
separate person to record the information. If there is a surge of information from the group, the
facilitator can concentrate on managing the conversation rather than trying to record it as well and
potentially falling behind.
Brainstorming
Brainstorming is a popular method for generating ideas, especially in a group environment. This
normally takes place at the beginning of a new project, with multiple stakeholders involved to
capture initial ideas. Everyone is encouraged to contribute ideas without fear of judgement at this
early stage.
Pros:
+ Your process is given a kick-start with an abundance of ideas generated
+ You get a large amount of information in a short space of time.
+ You can uncover insights and information that may have been lost with other methods.
+ Brainstorming sessions can be motivating for stakeholders and make them feel more engaged.
Cons:
– You need to have the right format and a good facilitator, or the ideas could dry up, or go off on
a tangent.
– It‘s easy to get overloaded with information.

– Participants can‘t sense check information on the spot (the rule is: there‘s no such thing as a bad
idea)
Our tip: Be sure to capture the information from your brainstorm in a clear and organised way.
Your information won‘t be much use if you can‘t read it afterwards.
Mind mapping
Gathering project requirements isn‘t just about how you collect the information, but also, how you
capture, structure and make sense of it all. Mind mapping can provide a very effective way of
documenting a brainstorming session.
Pros:
+ It helps kick-start a flow of information either individually or in a group so you can gather lots
of meaningful information quickly.
+ It works the way your mind works – in a hierarchy – and forces you to be methodical so you can
spot gaps and trends.
+ The format helps you progress to the next stage (sorting the information) before the
requirement gathering ends, saving you time and effort.
Cons:
– Mind mapping manually (using pen and paper) is slow and unorganised.
– It can be difficult to turn ideas and plans into actionable tasks unless you have the right tools.
Our tip: When using mind mapping in a group, it's best when everyone can see the map so they
can make connections and generate even more ideas – so you really need software to help you do
this (MindGenius is great, but feel free to choose another).
Questionnaire and surveys
Questionnaires and surveys are an effective way to gather data that you can analyse to
understand the perceptions and opinions of a group of people. When it comes to gathering
project requirements, you would ask key players from your organisation.
Pros:
+ It‘s quick and you can send it to a large database very quickly.
+ You can tailor your questions to different stakeholders to address different business objectives.
+ You save time from attending meetings and recording the information – the participants do it
for you.
Cons:
– It can be time-consuming to design and write a good survey.

– Participation can be limited. People may put it to the bottom of the to-do list.
– It‘s one way: there‘s no opportunity to probe or clarify on the spot.
Our tip: Software such as Survey Monkey can help take some of the hassle out of conducting
larger surveys, but for a small pool of participants, you may be just as well using email. Just make
sure you carefully plan the questions.
Observations
Observation is the study of users in their natural environment. By observing users, you can
identify a process flow, awkward steps, pain points and opportunities for improvement.
Pros:
+ You get real-time specific insights from user behaviour.
+ You avoid the bias of people telling you what you want to hear, rather than what they need.
Cons:
– Observations are limited to a point in time.
– They require a large workforce
– It can be difficult to recruit participants and gain approval to go ahead.
Our tip: Observations won't always fit with every project; however, they can be used for software
projects. It‘s insightful for analysts to witness how users interact with the software at different
stages.
Prototypes
A prototype represents the shell of an actual production application and is built early on in the
development lifecycle. Prototypes are normally used to present a physical product to your end users to
seek feedback and provide insight that will shape the development of the product, making it
more suitable to their requirements.
Pros:
+ You get real-time, in-depth user feedback.
+ It‘s the only way to really test how people use physical products.
+ Works well in agile environments.
Cons:
– Only works for certain projects.
– It can be very expensive and time-consuming.
– Not always suited to waterfall-based project management.
Our tip: Think about the prototype: is it to be disposable, or are there elements you may wish to
reuse for your project?
Benchmarking
This method requires you to look at exceptional organisations within your industry and study
how they have developed their project.
Pros:
+ Learn from the successes and failures of others at no risk to yourself.
+ You can set standards for the deliverables by looking at the best in class.
+ It can be useful to help you validate your approach.
Cons:
– For it to be meaningful you need to find benchmarks that very closely match your project, which
can be difficult – as no two projects are exactly the same.
– It could take you in the wrong direction, away from the original business idea.
– You are limited to information that is publicly available.
Our tip: When observing other organisations remember to choose those that match your project
well. Be realistic while aiming to match the best in class.
Document analysis
Document analysis is a common method to kick-start requirements gathering. Analysing materials,
studying relevant information, and then following this up using some of the methods mentioned
earlier, creates a strong approach.
Documents could include project documentation from similar projects that didn't work,
statistics-based reports on past performance, or minutes from meetings.
Pros:
+ Chances are that you already have documents readily available.
+ Someone may have done a lot of the job for you already.
+ It can provide you with the research you need to inform other information gathering techniques.
Cons:
– You may suffer from information overload.
– You will have to wade through lots of irrelevant information to get something useful.

Our tip: It‘s unlikely that you‘ll be able to rely on this approach alone. You‘ll need to combine it
with other techniques.

Designing written instruments for gathering interface feedback


In the real world, the environment gives us feedback. We speak, and others respond (usually).
We scratch a cat, and it purrs or hisses (depending on its moodiness and how much we suck at cat
scratching).
All too often, digital interfaces fail to give much back, leaving us wondering whether we should
reload the page, restart the laptop, or just fling it out the nearest available window.
So give me that loading animation. Make that button pop and snap back when I tap it—but not too
much. And give me a virtual high-five when I do something you and I agree is awesome.
Figure — An email application that offers both feedback and encouragement when users schedule or send an email.
Just make sure it all happens fast. Usability.gov defines any delay over 1 second as an interruption.
Over 10 seconds, a disruption. And the latter‘s generous: for about half the U.S. population, 3
seconds is enough to cause a bounce.
If a page will load in under 5 seconds, don‘t display a progress bar, as it‘ll actually make the
loading time seem longer. Instead, use a visualization that doesn‘t imply progress, like Mac‘s
infamous "pinwheel of death." But not that. If you do use progress bars on your site, consider
trying some visual tricks to make the load seem faster.
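One common way to respect those thresholds is to show a loading indicator only if the wait actually becomes noticeable. The sketch below delays the spinner by one second and skips it entirely for fast loads; the spinner id and loadPage() function are assumptions for illustration only.

// Minimal sketch: show a spinner only when loading takes longer than ~1 second,
// so fast loads never flash an indicator. The spinner starts hidden in the markup;
// the "spinner" id and loadPage() are assumptions.
const spinner = document.getElementById("spinner") as HTMLElement;

async function loadPage(): Promise<void> {
  // ...fetch and render the page content here...
}

async function loadWithFeedback(): Promise<void> {
  const timer = window.setTimeout(() => (spinner.hidden = false), 1000);
  try {
    await loadPage();
  } finally {
    window.clearTimeout(timer);  // never fires if the load finished quickly
    spinner.hidden = true;
  }
}

loadWithFeedback().catch(console.error);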
The formulation of written feedback comments
What are the features of good written comments? The following is a set of recommendations for
good practice. These are based on investigations of users‘ perceptions of what constitutes helpful
feedback and on researchers‘ suggestions about how to translate these ideas into practice.
Written feedback should be:
• Understandable: Expressed in a language that students will understand
• Selective: Commenting on two or three things that the student can do something about.
• Specific: Pointing to examples in the user‘s submission where the feedback applies
• Timely: Provided in time to inform the next piece of work
• Contextualized: Framed with reference to the learning outcomes and/or assessment criteria
• Non-judgemental: Descriptive rather than evaluative, focused on learning goals not just
performance goals.
• Balanced: Pointing out the positive as well as areas in need of improvement.
• Forward Looking: Suggesting how students might improve subsequent assignments
• Transferable: Focused on processes, skills and self-regulatory abilities
