KCAU - Distance Learning - HCI Module
BIT 2308 Human Computer Interaction
Contact Hours 48
Pre-requisite N/A
Purpose/Aim This course presents the principles of creating systems that are
usable by and useful to human beings.
Course Objectives (Indicative Learning Outcomes):
The learner can give an overview of the main criteria for system usability.
The learner can apply human computer interaction principles to the design of user interfaces.
Course Content Introduction
Usability paradigms and principles
The design process
Models of the user in design
Task analysis
Dialogue notations and design
Models of the system
Implementation support
Evaluation techniques
Help and documentation
Groupware
Computer-supported cooperative work and social issues
Current developments in HCI
Hypertext, multimedia and the world-wide web
Introduction
This unit introduces students to concepts in human computer interfaces and the various
techniques used to build interactive computer systems.
Objectives
By the end of this unit you should be able to:
i. Give an overview of the main criteria for system usability.
ii. Apply human computer interaction principles to the design of user interfaces.
iii. Define human computer interaction.
iv. Describe HCI interface development.
v. Provide an introduction to theories and methodologies in use in human computer interaction,
concentrating on the area of user interface design.
vi. Gain practical experience in designing and evaluating interfaces.
vii. Develop a simple user interface
1. Explain the capabilities of both humans and computers from the viewpoint of human
information processing.
2. Describe typical human–computer interaction (HCI) models, styles, and various
historic HCI paradigms.
3. Apply an interactive design process and universal design principles to designing HCI
systems.
4. Describe and use HCI design principles, standards and guidelines.
5. Analyze and identify user models, user support, socio-organizational issues, and
stakeholder requirements of HCI systems.
6. Discuss tasks and dialogs of relevant HCI systems based on task analysis and dialog
design.
7. Analyze and discuss HCI issues in groupware, ubiquitous computing, virtual reality,
multimedia, and World Wide Web-related environments.
Table of Contents
2.3. Chapter review questions...........................................................................................37
2.4. Lecture Summary.....................................................................................................37
2.5. Suggestion for further reading...................................................................................37
LECTURE 3: HCI HUMAN FACTORS - COGNITION......................................................38
Lecture Objectives...............................................................................................................38
3.1 Introduction....................................................................................................................38
3.2 Cognitive Psychology....................................................................................................39
3.3. Cognition and Cognitive Frameworks.......................................................................39
3.3.1. Cognition Modes....................................................................................................40
3.4. Cognitive Frameworks..............................................................................................41
3.4.1. Human Information Processing.............................................................................41
3.4.1.1. The Extended Human Information Processing model.......................................42
3.4.1.2. The Model Human Processor.............................................................................43
3.4.2. Mental Models.......................................................................................................45
3.4.3. Gulfs of Execution and Evaluation........................................................................45
3.4.4. Distributed Cognition............................................................................................47
3.4.5. External Cognition.................................................................................................48
3.5. Recent Development in Cognitive Psychology.........................................................49
3.6. Lecture Summary......................................................................................................50
3.7. Review Questions......................................................................................................50
3.8. Suggestion for Further Reading.................................................................................50
LECTURE 4: HCI HUMAN FACTORS – Perception and Representation............................52
4.2. The Gestalt Laws of perceptual organization (Constructivist)......................................53
4.3. Affordances (Ecological)..............................................................................................55
4.4. Affordances in Software................................................................................................55
4.4.1. Perceived Affordances in Software............................................................................56
4.5. Link affordance in web sites.........................................................................................56
4.6. Influence of Theories of perception in HCI..................................................................56
4.7. Lecture Summary......................................................................................................57
4.8. Lecture review questions...........................................................................................57
4.9. Further Reading.........................................................................................................57
LECTURE 5: HUMAN FACTORS -ATTENTION AND MEMORY..................................58
Lecture Objectives................................................................................................................58
5.1 Introduction....................................................................................................................58
5.2. Attention........................................................................................................................58
5.3. Models of Attention......................................................................................................59
5.3.1. Focused Attention......................................................................................................59
5.3.2. Divided Attention.......................................................................................................60
5.4. Focusing attention at the interface................................................................................60
5.4.1. Structuring Information..............................................................................................61
5.5. Multitasking and Interruptions......................................................................................61
5.6. Automatic Processing....................................................................................................62
5.7. Memory Constraints.....................................................................................................62
5.8. Levels of Processing Theory........................................................................................63
5.8.1. Meaningful Interfaces...............................................................................................63
5.8.2. Meaningfulness of Commands..................................................................................64
5.8.3. Meaningfulness of Icons............................................................................................64
5.9. Lecture Summary......................................................................................................65
5.10. Lecture review questions.......................................................................................65
5.11. Further Reading.....................................................................................................65
LECTURE 6: MENTAL MODELS AND KNOWLEDGE..................................66
Lecture Objectives................................................................................................................66
6.1. Mental Models..............................................................................................................66
6.2. Types of Mental Models...............................................................................................67
6.3. Applicability in HCI..................................................................................................68
6.4. Mental Models in HCI...............................................................................................68
6.5. Knowledge Representation........................................................................................70
6.6. Knowledge in the Head vs. Knowledge in the World...............................................71
6.7. Lecture Summary......................................................................................................72
6.8. Lecture Review Questions.........................................................................................72
6.9. Further Reading.........................................................................................................72
LECTURE 7: COMPUTER FACTORS IN HCI.....................................................................73
USER INTERFACE & USER SUPPORT..............................................................................73
Lecture Objectives................................................................................................................73
7.1. Introduction...................................................................................................................73
7.2. Input Devices.............................................................................................................74
7.3.1. Keyboard................................................................................................................75
7.3.2. QWERTY keyboard..............................................................................................75
7.3.3. Alphabetic keyboard..............................................................................................75
7.3.4. Phone pad and T9 entry.........................................................................................76
7.3.5. Dvorak Keyboard...................................................................................................76
7.3.6. Chord Keyboards...................................................................................................76
7.3.7. Handwriting Recognition.......................................................................................76
7.3.8. Speech Recognition...............................................................................................77
7.3.9. Dedicated Buttons..................................................................................................78
7.3.10. Positioning, Pointing and Drawing....................................................................78
7.3.11. Cursor Keys........................................................................................................81
7.4. Choosing Input Devices...........................................................................................81
7.5. Output Devices..........................................................................................................82
7.5.1. Purposes of Output.................................................................................................83
7.6. Sound Output.............................................................................................................83
7.6.1. Natural sounds.......................................................................................................84
7.6.2. Speech....................................................................................................................84
7.7. User Support..............................................................................................................84
7.10. Designing user Support Systems...........................................................................89
7.11. User Support Presentation issues...........................................................................89
7.12. Lecture Summary...................................................................................................91
7.13. Lecture review questions.......................................................................................91
7.14. Further Reading.....................................................................................................91
LECTURE 8: HUMAN COMPUTER INTERACTION DESIGN.........................................92
8.1. Interaction Cycle and Framework.................................................................................92
8.1.1. The interaction framework.........................................................................................93
8.2. Ergonomics...................................................................................................................95
8.3. Interaction Styles...........................................................................................................97
8.3.1. Command Line Languages.........................................................................................97
8.3.2. Menus........................................................................................................................97
8.3.3. Direct manipulation....................................................................................................98
8.3.4. Form fill-in.................................................................................................................98
8.3.5. Natural Language......................................................................................................99
8.3.6. Question/Answer and Query Dialogue.....................................................................99
8.3.7. WIMP interface.......................................................................................................100
8.3.9. Buttons.....................................................................................................................101
8.3.10. Toolbars..................................................................................................................101
8.3.11. Palettes...................................................................................................................101
8.3.13. Dialog boxes...........................................................................................................102
8.4. Interaction Design.......................................................................................................102
8.5. Interaction Design versus Interface Design...............................................................103
8.6. Why Good HCI design................................................................................................103
8.6.1. Increase in Worker Productivity, Satisfaction, and Commitment............................104
8.6.2. Reduction in Training Costs, Errors, and Production Costs....................................105
8.7. Design Principles.......................................................................................................106
8.8. Lecture Summary....................................................................................................107
8.9. Lecture Review Questions.......................................................................................107
8.10. Further Reading...................................................................................................108
LECTURE 9: HCI DESIGN PROCESS................................................................................109
Lecture Objectives..............................................................................................................109
9.1. The Design Process.....................................................................................................109
9.2. HCI Design Approaches.............................................................................................110
9.3. HCI Design Rules, Principles, Standards and Guidelines..........................................114
9.3.1. HCI Design Rules....................................................................................................114
9.3.2. Principles of Human-Computer Interface Design:..................................................115
9.3.3. HCI Design Standards..............................................................................................119
9.3.4. Guidelines............................................................................................................119
9.4. Web Interface Design Considerations.....................................................................120
Information Architecture and Web Navigation..................................................................120
9.6. Web Sites Navigation Systems................................................................................123
9.6.1. Global navigation.....................................................................................................123
9.6.2. Local navigation.......................................................................................................123
9.6.3. Supplementary navigation........................................................................................124
9.6.4. Contextual navigation..........................................................................................124
9.6.5. Courtesy navigation.............................................................................................124
9.6.6. Personalisation and Social Navigation.................................................................125
9.8. Navigation Aids.......................................................................................................126
9.9. Diagramming Information Architecture and Navigation........................................127
9.9.1. Blueprints.................................................................................................................127
9.9.2. Wireframes...............................................................................................................127
9.10. Lecture Summary.................................................................................................128
9.11. Lecture Review Questions...................................................................................128
9.12. Further Reading...................................................................................................128
LECTURE ONE: INTRODUCTION TO HCI
Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction.
After studying it you will be able to:
i. Describe the formal definition of HCI
ii. Describe the goals of HCI
iii. Define Usability goals
iv. Discuss the History and Evolution of HCI
v. Explain the HCI Guidelines, Principles and Theories
1.1. Definition
Human Computer Interaction (HCI) is the study of how people interact with computers and to
what extent computers are or are not developed for successful interaction with human beings.
"Human-computer interaction is a discipline concerned with the design, evaluation and
implementation of interactive computing systems for human use and with the study of major
phenomena surrounding them" (Association for Computing Machinery).
Human Computer Interaction (HCI) involves the study, planning, and design of the
interaction between people and computers. Because human–computer interaction studies a
human and a machine in conjunction, it draws from supporting knowledge on both the
machine and the human side. HCI is also sometimes referred to as man–machine interaction
(MMI) or computer–human interaction (CHI). HCI can be viewed as two powerful
information processors (human and computer) attempting to communicate with each other
via a narrow-bandwidth, highly constrained interface.
As its name implies, HCI consists of three parts: the user, the computer itself, and
the ways they work together (the interaction).
User: By "user", we may mean an individual user, a group of users or a sequence of users
in an organization working together.
Computer: When we talk about the computer, we're referring to any technology ranging
from desktop computers, to large scale computer systems. For example, if we were
discussing the design of a Website, then the Website itself would be referred to as "the
computer". Devices such as mobile phones or VCRs can also be considered to be
“computers”.
Interaction: There are obvious differences between humans and machines. In spite of
these, HCI attempts to ensure that they both get on with each other and interact
successfully. In order to achieve a usable system, you need to apply what you know about
humans and computers, and consult with likely users throughout the design process. In
real systems, it is vital to find a balance between what would be ideal for the users and
what is feasible in reality.
Menus: Selection relies on recognition of option names, so the names used should be
meaningful. The visibility of options makes the system easier to use and reduces the
need for recall. The method also allows hierarchical grouping of options.
Direct Manipulation: Direct manipulation is a style of interaction which features a natural
representation of task objects and actions, promoting the notion of people performing a
task themselves (directly) rather than through an intermediary.
Question/answer and query dialogue: In this kind of interaction, questions are asked one at
a time, and the next question may depend on the previous answer. Question and answer
dialogues are often used in tasks where information is elicited from users in a prescribed
and limited form. The user is led through the interaction via a series of questions. It is
suitable for novice users but has restricted functionality. Query dialogues are typically
expressed in a fourth-generation language and are often used in information systems.
Form fill-in and spreadsheets: Used for data entry and retrieval, and based on the paper
form metaphor. They are used for both input and output.
WIMP: The most common interaction style on PCs. It uses:
• Windows
• Icons
• Menus
• Pointers (point and click)
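The question/answer style can be sketched as a short program in which the next question depends on the previous answer. This is only an illustrative sketch; the dialogue, field names, and function are hypothetical, not taken from any particular system:

```python
# Minimal sketch of a question/answer dialogue: questions are asked one
# at a time, and the next question depends on the previous answer.
def ticket_dialogue(answers):
    """Walk a user through a (hypothetical) ticket-booking dialogue.

    `answers` is an iterator of user responses, so the flow can be
    driven by real input() calls or by a scripted test.
    """
    booking = {}
    booking["trip"] = next(answers)             # "single" or "return"
    if booking["trip"] == "return":
        booking["return_date"] = next(answers)  # only asked if relevant
    booking["passengers"] = int(next(answers))
    return booking

# Example run with scripted answers:
result = ticket_dialogue(iter(["return", "2024-06-01", "2"]))
```

In a real console interface the iterator would be replaced by repeated calls to input(); the branching structure is what makes this a question/answer dialogue rather than a fixed form.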
The goals of HCI are to produce usable and safe systems, as well as functional systems.
These goals can be summarized as ‘to develop or improve the safety, utility, effectiveness,
efficiency and usability of systems that include computers’ (Interacting with computers,
1989). In this context the term ‘system’ derives from systems theory, and it refers not just to
the hardware and software but to the entire environment, be it an organization of people at work,
at home, or engaged in leisure pursuits, that uses or is affected by the computer technology in
question. Utility refers to the functionality of a system or, in other words, the things it can do.
Improving effectiveness and efficiency are self-evident and ubiquitous objectives. The
promotion of safety in relation to computer systems is of paramount importance in the design
of safety-critical systems. Usability, a key concept in HCI, is concerned with making systems
easy to learn and easy to use. A poorly designed computer system can be extremely annoying to
users, as the incidents described above illustrate.
A basic goal of HCI is to improve the interactions between users and computers by making
computers more usable (usability), and receptive to the user's needs (functionality).
Functionality of a system is defined by the set of actions or services that it provides to its
users. However, the value of functionality becomes visible only when the user can
utilise it efficiently.
Usability of a system with a certain functionality is the range and degree by which the
system can be used efficiently and adequately to accomplish certain goals for certain
users. The actual effectiveness of a system is achieved when there is a proper balance
between its functionality and usability.
Specifically, the goal of HCI is to design a user interface that meets the following
characteristics;
a) Effectiveness: effective to use - concerned with whether the system does what it
says it will do
b) Efficiency: efficient to use
c) Safety: It involves protecting users from dangerous conditions and undesirable
situations, i.e. safe to use.
Preventing the user from making serious errors by reducing the risk of wrong
keys/buttons being mistakenly activated (an example is not placing the quit or delete-
file command right next to the save command on a menu).
Providing users with various means of recovery should they make errors. Safe
interactive systems should engender confidence and allow users the opportunity to
explore the interface to carry out new operations. The HCI should provide means of
recovering from errors, such as undo options and confirmation dialogs.
d) Utility: It refers to the extent to which the system provides the right kind of functionality
so that users can do what they need or want to do. The HCI should have sufficient
functionality to accommodate the range of users' tasks. An example of a system with high
utility is an accounting software package providing a powerful computational tool that
accountants can use to work out tax returns.
e) Learnability: It refers to how easy a system is to learn to use. It is well known that people
do not like spending a long time learning how to use a system. They want to get started
straight away and become competent at carrying out tasks without too much effort. This is
especially so for interactive products intended for everyday use (for example, interactive
TV, email) and, to a certain extent, those used only infrequently (for example, video
conferencing).
f) Memorability: It refers to how easy a system is to remember how to use, once learned.
This is especially important for interactive systems that are used infrequently. If users
haven’t used a system or an operation for a few months or longer, they should be able to
remember or at least rapidly be reminded how to use it. Users shouldn’t have to keep
relearning how to carry out tasks. Unfortunately, this tends to happen when the operations
required are obscure, illogical, or poorly sequenced. Users need to be
helped to remember how to do tasks. There are many ways of designing the interaction to
support this. For example, users can be helped to remember the sequence of operations at
different stages of a task through meaningful icons, command names, and menu options.
Also, structuring options and icons so they are placed in relevant categories of options
(for example, placing all the drawing tools in the same place on the screen) can help the
user remember where to look to find a particular tool at a given stage of a task.
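The safety goals described above (confirmation before destructive actions, and undo as a means of recovery) can be sketched in a few lines of code. The class and method names below are illustrative only, not from any particular toolkit:

```python
class SafeEditor:
    """Sketch of two safety mechanisms: undo and confirmation."""

    def __init__(self):
        self.text = ""
        self._history = []                 # snapshots for recovery

    def type_text(self, s):
        self._history.append(self.text)    # save state before each change
        self.text += s

    def undo(self):
        # Recover from the most recent change, if any.
        if self._history:
            self.text = self._history.pop()

    def delete_all(self, confirm):
        # Destructive action guarded by an explicit confirmation, so a
        # mistakenly activated key or button cannot wipe the document.
        if confirm:
            self._history.append(self.text)
            self.text = ""

ed = SafeEditor()
ed.type_text("hello")
ed.delete_all(confirm=False)   # ignored: the user did not confirm
ed.undo()                      # reverts the typing
```

The two mechanisms are complementary: confirmation prevents serious errors from happening, while undo lets users recover and explore the interface with confidence.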
A long term goal of HCI is to design systems that minimize the barrier between the human's
cognitive model of what they want to accomplish and the computer's understanding of the
user's task. In order to produce computer systems with good usability, developers must
attempt to:
a) understand the factors that determine how people use technology
b) develop tools and techniques to enable building suitable systems
c) achieve efficient, effective, and safe interaction
d) put people first
Underlying the whole theme of HCI is the belief that people using a computer system should
come first. Their needs, capabilities and preferences for conducting various tasks should
direct developers in the way that they design systems. People should not have to change the
way that they use a system in order to fit in with it. Instead, the system should be designed to
match their requirements.
For example, a photocopier's control panel might include a button labelled "C" and a
button marked with a "line in a diamond" symbol.
Imagine that you just put your document into the photocopier and set the photocopier to make
15 copies, sorted and stapled. Then you push the big button with the "C" to start making your
copies.
What do you think will happen?
(a) The photocopier makes the copies correctly.
(b) The photocopier settings are cleared and no copies are made.
If you selected (b) you are right! The "C" stands for clear, not copy. The copy button is
actually the button on the left with the "line in a diamond" symbol. This symbol is widely
used on photocopiers, but is of little help to someone who is unfamiliar with it. Everyday
systems should be easy, effortless, and enjoyable to use.
The principles of visibility and affordance were identified by HCI pioneer Donald Norman.
a) Visibility is the mapping between a control and its effect. For example, controls in cars
are generally visible – the steering wheel has just one function, there is good feedback
and it is easy to understand what it does. Mobile phones and VCRs often have poor
visibility – there is little visual mapping between controls and the users’ goals, and
controls can have multiple functions.
b) The affordance of an object is the sort of operations and manipulations that can be done
to it. A door affords opening; a chair affords support. Important types include:
Perceived Affordances are the actions a user perceives to be possible. For example,
does the design of a door suggest that it should be pushed or pulled open?
Real Affordances are the actions which are actually possible.
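The distinction between perceived and real affordances can be made concrete in code: a control's appearance suggests certain actions, while the handlers actually wired up determine what is really possible. This is an illustrative sketch with hypothetical names, not a real GUI toolkit:

```python
# Sketch: perceived vs. real affordances of on-screen controls.
class Control:
    def __init__(self, label, looks_clickable, on_click=None):
        self.label = label
        self.looks_clickable = looks_clickable  # drives perceived affordance
        self.on_click = on_click                # drives real affordance

    def perceived_actions(self):
        # What the visual design suggests the user can do.
        return {"click"} if self.looks_clickable else set()

    def real_actions(self):
        # What the control can actually do.
        return {"click"} if self.on_click else set()

# A flat label styled like a button: users perceive an affordance that
# is not really there -- a classic source of confusion.
fake_button = Control("Submit", looks_clickable=True, on_click=None)
mismatch = fake_button.perceived_actions() - fake_button.real_actions()
```

A well-designed interface keeps the mismatch set empty: everything that looks actionable is actionable, and vice versa.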
contemporary applications, while usability principles describe how these paradigms work. In
their book Human-Computer Interaction, Dix, Finlay, Abowd and Beale propose three
principles of usability design:
a) Learnability – the ease with which new users can begin effective interaction and achieve
maximal performance.
b) Flexibility – the multiplicity of ways in which the user and system exchange information.
c) Robustness – the level of support provided to the user in determining successful
achievement and assessment of goals.
Details of these principles will be discussed in Lecture 9 – HCI Design Process.
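The flexibility principle, for instance, shows up in interfaces that offer several equivalent ways to invoke the same command. A minimal sketch (all class, method, and command names here are hypothetical):

```python
# Sketch of the flexibility principle: the same action can be reached
# through several interaction channels (a menu entry or a keyboard
# shortcut), so users can choose whichever suits them.
class CommandRegistry:
    def __init__(self):
        self._by_name = {}    # menu / typed-command lookup
        self._by_key = {}     # keyboard-shortcut lookup

    def register(self, name, key, action):
        self._by_name[name] = action
        self._by_key[key] = action

    def from_menu(self, name):
        return self._by_name[name]()

    def from_shortcut(self, key):
        return self._by_key[key]()

reg = CommandRegistry()
reg.register("Save", "Ctrl+S", lambda: "saved")
# Both routes trigger the same underlying action:
via_menu = reg.from_menu("Save")
via_key = reg.from_shortcut("Ctrl+S")
```

Routing every channel through one registry keeps the channels consistent: adding or changing a command automatically updates both the menu and the shortcut.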
Jakob Nielsen also proposed ten general principles for user interface design. They are
called "heuristics" because they are more in the nature of rules of thumb than specific
usability guidelines.
1. Visibility of system status: The system should always keep users informed about what is
going on, through appropriate feedback within reasonable time.
2. Match between system and the real world: The system should speak the users' language,
with words, phrases and concepts familiar to the user, rather than system-oriented terms.
Follow real-world conventions, making information appear in a natural and logical order.
3. User control and freedom: Users often choose system functions by mistake and will need
a clearly marked "emergency exit" to leave the unwanted state without having to go
through an extended dialogue. Support undo and redo.
4. Consistency and standards: Users should not have to wonder whether different words,
situations, or actions mean the same thing. Follow platform conventions.
5. Error prevention: Even better than good error messages is a careful design which prevents
a problem from occurring in the first place. Either eliminate error-prone conditions or
check for them and present users with a confirmation option before they commit to the
action.
6. Recognition rather than recall: Minimize the user's memory load by making objects,
actions, and options visible. The user should not have to remember information from one
part of the dialogue to another. Instructions for use of the system should be visible or
easily retrievable whenever appropriate.
7. Flexibility and efficiency of use: Accelerators (unseen by the novice user) may often
speed up the interaction for the expert user, so that the system can cater to both
inexperienced and experienced users. Allow users to tailor frequent actions.
8. Aesthetic and minimalist design: Dialogues should not contain information which is
irrelevant or rarely needed. Every extra unit of information in a dialogue competes with
the relevant units of information and diminishes their relative visibility.
9. Help users recognize, diagnose, and recover from errors: Error messages should be
expressed in plain language (no codes), precisely indicate the problem, and constructively
suggest a solution.
10. Help and documentation: Even though it is better if the system can be used without
documentation, it may be necessary to provide help and documentation. Any such
information should be easy to search, focused on the user's task, list concrete steps to be
carried out, and not be too large.
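Heuristic 3 (user control and freedom) asks for an "emergency exit" plus undo and redo. A minimal sketch of one common way to support this is a command history that records, for every action, how to reverse it. All class and variable names below are hypothetical, not taken from any particular toolkit:

```python
class UndoStack:
    """Keeps executed commands so every action can be reversed (heuristic 3)."""

    def __init__(self):
        self._done = []    # commands already executed
        self._undone = []  # commands available for redo

    def execute(self, do, undo):
        """Run an action and remember how to reverse it."""
        do()
        self._done.append((do, undo))
        self._undone.clear()  # a fresh action invalidates the redo path

    def undo(self):
        if self._done:
            do, undo = self._done.pop()
            undo()
            self._undone.append((do, undo))

    def redo(self):
        if self._undone:
            do, undo = self._undone.pop()
            do()
            self._done.append((do, undo))


# Usage: editing a document modelled as a list of lines.
doc = []
stack = UndoStack()
stack.execute(lambda: doc.append("hello"), lambda: doc.pop())
stack.execute(lambda: doc.append("world"), lambda: doc.pop())
stack.undo()  # doc is back to ["hello"]
stack.redo()  # doc is ["hello", "world"] again
```

Because every action carries its own inverse, the user can explore freely without fear of irreversible mistakes, which is exactly the behaviour the heuristic calls for.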
Three systems that provide landmarks along this evolutionary path are the Dynabook, the Star
and the Apple Lisa, predecessor of today’s Apple Macintosh machines. An important
unifying theme present in all three computer systems is that they provided a form of
interaction that proved effective and easy for novices and experts alike. They were also easy
to learn, and provided a visual-spatial interface whereby, in general, objects could be directly
manipulated, while the system gave immediate feedback.
1.6.1 Dynabook
Alan Kay designed the first object-oriented programming language in the 1970s. Called
Smalltalk, it was the basis for what is now known as windowing technology: the
ability to open more than one program at a time on a personal computer. However, when he
first developed the idea, personal computers were only a concept. In fact, the idea of personal
computers and laptops also belongs to Kay. He envisioned the Dynabook - a notebook sized
computer, with a keyboard on the bottom and a high resolution screen at the top.
1.6.2. Star
The Xerox Star was born out of PARC's creative ferment, designing an integrated system
that would bring PARC's new hardware and software ideas into a commercially viable
product for use in office environments. The Star drew on the ideas that had been developed,
and went further in integrating them and in designing for a class of users who were far less
technically knowledgeable than the engineers who had been both the creators and the prime
users of many PARC systems (one of PARC's favorite mottoes was "Build what you use, use
what you build.") The Star designers were challenged to make the personal computer usable
for a community that did not have previous computer experience.
This section lists some of the key developments and people in the evolution of HCI.
a) Human factors engineering (Frank Gilbreth, post World War 1) – study of operator’s
muscular capabilities and limitations.
b) Aircraft cockpits (World War 2) – emphasis switched to perceptual and decision making
capabilities
c) Symbiosis (J.C.R. Licklider, 1960’s) - human operator and computer form two distinct
but interdependent systems, augment each other’s capabilities
d) Cognitive psychology (Donald Norman and many others, late 1970’s, early 1980’s) -
adapting findings to design of user interfaces
e) Development of GUI interface (Xerox, Apple, early 1980’s)
f) Field of HCI came into being (mid 1980’s) – key principles of User Centred Design and
Direct Manipulation emerged.
g) Development of software design tools (e.g. Visual Basic, late 1980’s, early 1990’s)
h) Usability engineering (Jakob Nielsen, 1990’s) - mainly in industry rather than academic
research.
i) Web usability (late 1990’s) – the main focus of HCI research today.
4. Ergonomics/Human Factors
hardware design
display readability
5. Linguistics
natural language interfaces
6. Artificial Intelligence
intelligent software
7. Philosophy, Sociology & Anthropology
Computer supported cooperative work (CSCW)
Engineering & Design
a. graphic design
b. engineering principles
1.8.1. Guidelines
Guidelines cover issues such as terminology, appearance, action sequences, input/output
formats, and the dos and don'ts of graphic styles. They provide a good starting point but need
management processes to facilitate their enforcement. Written guidelines help to develop a
“shared language” and, based on it, promote consistency among multiple designers and
designs/products. Examples of guidelines include:
a) how to provide ease of interface navigation;
b) how to organize the display;
c) how to draw the user's attention; and
d) how to best facilitate data entry.
Guidelines will be discussed in detail in Lecture 9 – HCI Design Process.
1.8.2. Design Principles
Design principles are more fundamental, more widely applicable and more enduring than
guidelines (guidelines often need to be individually specified for every project/organization,
while principles are project-independent). Five tasks/principles may be followed:
1. Determine your target audience (in particular user skill-level)
Every designer should start with trying to understand the intended user (e.g., population
profile, gender, physical and cognitive abilities, education, cultural/language background,
knowledge, usage pattern, attitudes, etc.). Specifying the intended user is a very important yet
very difficult task. One particular challenge is: Many applications cater for several diverse
user groups at the same time. Determining the user skill level is especially important. Two
kinds of skill are relevant:
A. Skills in regard to interface usage in general
B. Skills in regard to the particular application/task domain
Provide context-dependent help
Use well-organized reference manuals
c) Expert frequent users: Expert frequent users are thoroughly familiar with the task concept
and general interface concepts. Main goal of such “power users” is to get work done fast
and efficiently. Some specific guidelines for this group are:
Ensure rapid response times
Provide brief and non-distracting feedback
Provide shortcuts to carry out frequent actions
Provide string commands and abbreviations
It is crucial that a system's users are clearly defined. Where there are multiple diverse user
groups, the following approaches can be used during the design process:
Use a “multi-layer” design/approach to learning: structure your UI so that novice
users work with only a small set of functions and objects, while advanced users can
access advanced functionality
Give the user control over the density of informative feedback (e.g., error or confirmation
messages), the number of elements on the display, or the pace of interaction
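The two approaches above (a layered command set and user-controlled feedback density) can be sketched in a few lines. Everything here, including the skill-level names and the command sets, is a hypothetical illustration rather than a real API:

```python
# A "multi-layer" UI: each layer exposes a larger command set than the last.
FEATURES = {
    "novice":       {"open", "save", "print"},
    "intermediate": {"open", "save", "print", "export", "macros"},
    "expert":       {"open", "save", "print", "export", "macros", "scripting"},
}


def visible_commands(skill_level):
    """Return only the commands this layer of the UI should show."""
    return FEATURES[skill_level]


def feedback(message, importance, verbosity):
    """Show a message only if it matters at the user's chosen verbosity.

    importance / verbosity scale: 1 = errors only ... 3 = everything.
    """
    return message if importance <= verbosity else None


# A novice sees a small, safe command set...
assert "scripting" not in visible_commands("novice")
# ...and an expert who set verbosity to 1 is not distracted by confirmations,
# but still sees errors.
assert feedback("File saved.", importance=3, verbosity=1) is None
assert feedback("Disk full!", importance=1, verbosity=1) == "Disk full!"
```

The point of the sketch is structural: skill level selects a layer of functionality, while verbosity is a separate, user-controlled dial over informative feedback.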
2. Identify the tasks that users perform
This process often involves interviewing and observing the user, which also helps to
understand the task frequencies and sequences
Be careful regarding the extent of provided functionality (inadequate vs. cluttered)
Start with high-level tasks, decompose them into smaller steps and finally atomic
actions
Choose the right level of granularity (e.g., depending on task frequencies)
3. Choose an appropriate interaction style
One possible style is natural language, in which the user states the required task in
everyday language.
Pros: Users do not need to learn a syntax, may be successful in applications where the
scope is limited and task sequences are clear from the beginning
Cons: Clarification dialog required (what to do next?), hard to determine the context,
may be unpredictable, in applications with broad scope most likely less efficient
4. Apply the “8 golden rules” of HCI Design
Shneiderman’s 8 Golden Rules (1987):
a) Strive for consistency
Page 24
b) Enable frequent users to use shortcuts
c) Offer informative feedback
d) Design dialogs to yield closure
e) Offer error prevention and simple error handling
f) Permit easy reversal of actions
g) Support internal locus of control
h) Reduce short-term memory load
5. Always try to prevent user errors
General issues that should be considered in order to prevent errors:
Use functionally organized screens and menus
Design menu choices and commands to be distinctive
Make it difficult for users to perform irreversible actions
Provide feedback about the state of the UI
Design for consistency of actions
Consider universal usability
Other more specific issues:
Prevent incorrect use of the UI (e.g., grey out unavailable functions, provide pull-
down menus for specific data entry tasks, etc.)
Aggregate a sequence of steps into one single action (e.g., specify a lengthy task once,
save it and execute it again later)
Provide reminders that indicate that specific actions are required to complete a task
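Three of the tactics listed above can be made concrete in code: constraining input with a fixed choice list (the pull-down idea), greying out actions that are unavailable in the current state, and demanding confirmation before an irreversible action. The function names and states below are hypothetical illustrations:

```python
# Pull-down style entry: only listed values can be selected at all,
# so a typing error is impossible by construction.
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]


def choose_month(value):
    if value not in MONTHS:
        raise ValueError(f"{value!r} is not a selectable month")
    return value


def available_actions(document_open):
    """Grey out (omit) actions that make no sense in the current UI state."""
    actions = {"new": True, "open": True,
               "save": document_open, "close": document_open}
    return {name for name, enabled in actions.items() if enabled}


def delete_all(confirmed):
    """Irreversible actions demand an explicit confirmation step first."""
    return "deleted" if confirmed else "cancelled"


# With no document open, "save" is simply not offered.
assert "save" not in available_actions(document_open=False)
# Without confirmation, the destructive action does nothing.
assert delete_all(confirmed=False) == "cancelled"
```

Each function removes a whole class of errors before the user can commit them, which is the spirit of heuristic 5 (error prevention) rather than error recovery.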
1.8.3. Theories
Theories are more high-level than principles, and are largely abstract. HCI theories serve two
purposes: to support collaboration and teaching through the provision of consistent
terminologies for objects and actions (descriptive, explanatory), and to enhance the quality of
HCI by providing predictions for motor task performance, perceptual activities, execution
times, error rates, emotional reactions, etc. (predictive). Predictive characteristics enable
designers, for instance, to better serve users' needs and to compare proposed designs more
objectively.
Four classes of theories;
1. Stages-of-action models (SOAM)
These deal with explaining the stages that users go through when using the user interface,
i.e. the HCI interactive cycle. The user formulates a plan of action, which is then executed at the
computer interface. When the plan, or part of the plan, has been executed, the user
observes the computer interface to evaluate the result of the executed plan, and to
determine further actions.
The interactive cycle can be divided into two major phases: execution and evaluation.
These can then be subdivided into further stages, seven in all:
a) Forming the goal
b) Forming the intention
c) Specifying the action
d) Executing the action
e) Perceiving the system state
f) Interpreting the system state
g) Evaluating the outcome
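The seven stages above can be sketched as an ordered cycle: goal formation, three execution stages, three evaluation stages, and then back to a new goal. The sketch below (hypothetical names, not from any library) is the kind of structure an evaluator might walk through when analysing one interaction:

```python
# Norman's interactive cycle as an ordered list of the seven stages.
STAGES = [
    "form the goal",
    "form the intention",           # execution phase begins
    "specify the action",
    "execute the action",
    "perceive the system state",    # evaluation phase begins
    "interpret the system state",
    "evaluate the outcome",
]

GOAL = STAGES[0]
EXECUTION = STAGES[1:4]   # intention, specification, execution
EVALUATION = STAGES[4:]   # perception, interpretation, evaluation


def next_stage(current):
    """Advance through the cycle; after evaluation the user forms a new goal."""
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]


assert len(STAGES) == 7
assert next_stage("evaluate the outcome") == "form the goal"  # the cycle closes
```

Representing the cycle explicitly makes the two-phase split visible: the first half of the list is the user acting on the system, the second half is the user reading the system back.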
2. GOMS (Goals, Operators, Methods, and Selection rules)
A goal is something the user must accomplish. An operator is an action performed in pursuit
of a goal. A method is a sequence of operators that accomplishes a goal. Selection rules
specify which method satisfies a given goal, based on context. This theory decomposes user
actions into small, measurable steps:
User formulates a goal (e.g., publish draft to obtain client feedback) and sub-goals
(e.g., upload graphic)
User executes a variety of operators in order to change his/her mental state or to affect
the task environment (e.g., locate file on hard drive - in the user’s mind)
An operator is an elementary perceptual, motor, or cognitive act
User achieves goals by using methods (e.g., move mouse to click appropriate button
on the UI to start the upload procedure)
User bases his/her decision as to what method to use in order to accomplish a goal on
selection rules (e.g., mouse vs. key)
GOMS works well for describing the steps in the decision-making process of users while
they carry out interaction tasks
3. Widget-level theories (WLT)
This is an alternative approach to “hierarchical decomposition”. Hierarchical decomposition
reduces complexity by breaking down tasks into atomic actions, or complex UIs into
atomic components; WLTs instead deal with higher-level components that are self-contained
and reusable
4. Context-of-use theories
Many theories are based on controlled lab experiments, isolated environments and only
consider isolated phenomena
Problem: the context (physical and social environment) in which the user operates
may also play an important role in influencing usage patterns
Context ≈ Users’ interactions with other people (e.g., colleagues) and resources,
unexpected interruptions, ...
Theory: Knowledge is not always in the user’s mind but rather distributed in his/her
environment (e.g., manuals, Internet, etc.)
The relevance of Context-of-Use theories increases with growing use of mobile or
context-aware devices
1.9. Lecture Summary
1.10. Lecture Review Activities
1. Suggest some ways in which the design of the copier buttons on page 3 could be
improved.
2. Consider the factors involved in the design of a new library catalogue system using HCI
principles.
3. Use the internet to find information on the work of Donald Norman and Jakob Nielsen.
LECTURE 2: HCI PARADIGMS
Lecture Objectives
The aim of this lecture is to introduce you to the paradigms of human computer interaction.
After studying it you will be able to:
i. Describe the main human computer interaction paradigms.
ii. Explain the principal historical advances in interactive design.
2.1 Introduction
The great advances in computer technology have increased the power of machines and
enhanced the bandwidth of communication between human and computer. The impact of the
technology alone, however, is not sufficient to enhance its usability. As our machines have
become more powerful, the key to increased usability has come from the creative and
considered application of the technology to accommodate and augment the power of the
human. Paradigms for interaction have for the most part been dependent upon technological
advances and their creative application to enhance interaction.
A recent trend has been to promote paradigms that move beyond the desktop. With the advent
of wireless, mobile, and handheld technologies, developers started designing applications that
could be used in a diversity of ways besides running only on an individual’s desktop
machine.
to encourage new ideas about how best to apply the burgeoning computing technology. One
of the major contributions to come out of this new emphasis in research was the concept of
time sharing, in which a single computer could support multiple users. Previously, the
human (or more accurately, the programmer) was restricted to batch sessions, in which
complete jobs were submitted on punched cards or paper tape to an operator who would then
run them individually on the computer. Time-sharing systems of the 1960s made
programming a truly interactive venture and brought about a subculture of programmers
known as ‘hackers’ – single-minded masters of detail who took pleasure in understanding
complexity. Though the purpose of the first interactive time-sharing systems was simply to
augment the programming capabilities of the early hackers, it marked a significant stage in
computer applications for human use.
Ivan Sutherland's Sketchpad allowed a computer operator to use the computer to create, very
rapidly, sophisticated visual models on a display screen that resembled a television set. The visual
patterns could be stored in the computer’s memory like any other data, and could be
manipulated by the computer’s processor. Sketchpad demonstrated two important ideas. First,
computers could be used for more than just data processing. They could extend the user’s
ability to abstract away from some levels of detail, visualizing and manipulating different
representations of the same information. Those abstractions did not have to be limited to
representations in terms of bit sequences deep within the recesses of computer memory.
Rather, the abstractions could be made truly visual. To enhance human interaction, the
information within the computer was made more amenable to human consumption. The
computer was made to speak a more human language, instead of the human being forced to
speak more like a computer. Secondly, Sutherland’s efforts demonstrated how important the
contribution of one creative mind (coupled with a dogged determination to see the idea
through) could be to the entire history of computing.
According to Engelbart, the secret to producing computing equipment that aided human
problem-solving ability was in providing the right toolkit. The idea of building components of
a computer system that will allow you to rebuild a more complex system is called
bootstrapping and has been used to a great extent in all of computing. The power of
programming toolkits is that small, well-understood components can be composed in fixed
ways in order to create larger tools. Once these larger tools become understood, they can
continue to be composed with other tools, and the process continues.
A personal computer system which forces the user to progress in order through all of the
tasks needed to achieve some objective, from beginning to end without any diversions, does
not correspond to that standard working pattern. If the personal computer is to be an effective
dialog partner, it must be as flexible in its ability to change the topic as the human is.
One presentation mechanism for achieving this dialog partitioning is to separate physically
the presentation of the different logical threads of user–computer conversation on the display
device. The window is the common mechanism associated with these physically and logically
separate display spaces. Interaction based on windows, icons, menus and pointers – the
WIMP interface – is now commonplace.
2.2.8. 3D Interfaces
3D interfaces are also used as part of WIMP interfaces, including buttons, scroll bars, and
icons. These elements are made to appear as if they have three dimensions. This type of
interface addresses the same usability principles as the WIMP interface
judicious choice of metaphor. The Xerox Alto and Star were the first workstations based on
the metaphor of the office desktop. The majority of the management tasks on a standard
workstation have to do with file manipulation.
Rapid feedback is just one feature of the interaction technique known as direct manipulation.
Other features include:
a) Visibility of the objects of interest
b) Incremental action at the interface with rapid feedback on all actions
c) Reversibility of all actions, so that users are encouraged to explore without severe
penalties
d) Syntactic correctness of all actions, so that every user action is a legal operation
e) Replacement of complex command language with actions to manipulate directly the
visible objects.
2.2.12. Hypertext
In 1945, Vannevar Bush, then the highest-ranking scientific administrator in the US war
effort, published an article entitled ‘As We May Think’ in The Atlantic Monthly. Bush was in
charge of over 6000 scientists who had greatly pushed back the frontiers of scientific
knowledge during the Second World War. He recognized that a major drawback of these
prolific research efforts was that it was becoming increasingly difficult to keep in touch with
the growing body of scientific knowledge in the literature. In his opinion, the greatest
advantages of this scientific revolution were to be gained by those individuals who were able
to keep abreast of an ever-increasing flow of information. To that end, he described an
innovative and futuristic information storage and retrieval apparatus – the memex – which
was constructed with technology wholly existing in 1945 and aimed at increasing the human
capacity to store and retrieve connected pieces of knowledge by mimicking our ability to
create random associative links.
2.2.13. Multi-modality
The majority of interactive systems still use the traditional keyboard and a pointing device,
such as a mouse, for input, and are restricted to a color display screen with some sound
capabilities for output. Each of these input and output devices can be considered a
communication channel for the system, and they correspond to certain human
communication channels. A multi-modal interactive system is a system that relies on the use
of multiple human communication channels. Each different channel for the user is referred to
as a modality of interaction. In this sense, all interactive systems can be considered multi-
modal, for humans have always used their visual and haptic channels in manipulating a
computer. In fact, we often use our audio channel to hear whether the computer is actually
running properly.
The main distinction between CSCW systems and interactive systems designed for a single
user is that the designer can no longer neglect the society within which any single user
operates. CSCW systems are built to allow interaction between humans via the computer. A
prime example of a CSCW system is electronic mail – email – yet another metaphor by
which individuals at physically separate locations can communicate via electronic messages
that work in a similar way to the conventional postal system.
The Internet is simply a collection of computers, each linked by some sort of data connection,
whether a slow telephone line and modem or a high-bandwidth optical connection. The
computers of the Internet all communicate using common data transmission protocols and
addressing systems. This makes it possible for anyone to read anything from anywhere, in
theory, if it conforms to the protocol. The web builds on this with its own layer of network
protocol, a standard markup notation for laying out pages of information and a global naming
scheme. Web pages can contain text, color images, movies, sound and, most important,
hypertext links to other web pages. Hypermedia documents can therefore be published by
anyone who has access to a computer connected to the Internet.
"The most profound technologies are those that disappear. They weave themselves into the
fabric of everyday life until they are indistinguishable from it." These words of Mark Weiser
have inspired a new generation of researchers in the area of ubiquitous computing. Another
popular term for this emerging paradigm is pervasive computing, first coined by IBM. The
intention is to create a computing infrastructure that permeates our physical environment so
much that we do not notice the computer any longer. A good analogy for the vision of ubiquitous
computing is the electric motor. When the electric motor was first introduced, it was large,
loud and very noticeable. Today, the average household contains so many electric motors that
we hardly ever notice them anymore. Their utility led to ubiquity and, hence, invisibility.
2.3. Chapter review questions
1. Explain the following HCI Usability Paradigms
i. Ubiquitous Computing
ii. Direct Manipulation
iii. Time sharing
iv. WIMP
2. What new paradigms do you think may be significant in the future of interactive
computing?
LECTURE 3: HCI HUMAN FACTORS - COGNITION
Lecture Objectives
The aim of this lecture is to introduce the role of human cognition in human computer
interaction. After studying it you will be able to:
i. Explain cognitive psychology with respect to HCI designs
ii. Understand the importance of Cognition in HCI design
iii. Explain types of Human Cognition
iv. Understand different cognitive frameworks used in HCI design.
v. Describe Cognitive Psychology theories with respect to HCI
3.1 Introduction
Human/computer interaction is characterized as a dialogue or interchange between the human
and the computer because the output of one serves as the input for the other in an exchange of
actions and intentions. Early characterizations of the human-computer interface were more
computer-centric. Today, the interface is viewed primarily from the other direction. Starting
from human tasks and intentions, we follow the path of actions until we come to the machine.
Human-computer interaction (HCI) involves the activities of humans using computers.
Interaction refers to a dialogue generated by the command and data input to the computer and
the display output of the computer and the sensory/perceptual input to the human and motor
response output of the human. Interaction takes place at the interface, which is made up of a
set of hardware devices and software tools from the computer side and a system of sensory,
motor, and cognitive processes from the human side.
Displays are human-made artifacts designed to support the perception of relevant system
variables and to facilitate further processing of that information. Before a display is designed,
the task that the display is intended to support must be defined (e.g. navigating, controlling,
decision making, learning, entertaining, etc.). A user or operator must be able to process
whatever information that a system generates and displays; therefore, the information must be
displayed according to principles in a manner that will support perception, situation
awareness, and understanding.
Many technologies have not been designed with users in mind, and many do not fit users'
tasks. Technology systems need to be built to effectively support human tasks. Failing to
design and develop information technologies with user characteristics in mind can lead to a
lack of system functionality, increase in user dissatisfaction, and increase in ineffective work
practices.
When designing HCI, useful principles can be drawn from the sub-domains of sensation,
perception, attention, memory, and decision-making to guide us on issues surrounding screen
layout, information grouping, menu length, depth and breadth.
Cognitive psychologists have attempted to apply relevant psychological principles to HCI by
using a variety of methods, including the development of guidelines, the use of models to
predict human performance, and the use of empirical methods for testing computer systems.
A key aim of HCI is to understand how humans interact with computers, and to represent
how knowledge is passed between the two.
In order to make human-computer interactions that are easy to learn, easy to remember, and
easy to apply to new problems, computer scientists must understand something about human
learning, memory, and problem solving. When designing the user interface of these systems,
the cognitive processes by which users interact with computers must be taken into account,
because users' attributes usually do not match computer attributes. We should also take into
account that computer systems can have non-cognitive effects on the user, for example the
user's response to virtual worlds.
3.4. Cognitive Frameworks
A number of conceptual frameworks and theories have been developed to explain and predict
user behavior based on theories of cognition. In this section, we outline three early internal
frameworks that focus primarily on mental processes together with three more recent external
ones that explain how humans interact and use technologies in the context in which they
occur. These are:
Internal
• Information processing.
• Mental models
• Gulfs of execution and evaluation
External
• Distributed cognition
• External cognition
• Embodied interaction.
One of the many approaches to conceptualizing how the mind works has been to use
metaphors and analogies. A number of comparisons have been made, including
conceptualizing the mind as a reservoir, a telephone network, and a digital computer. One
prevalent metaphor from cognitive psychology is the idea that the mind is an information
processor: everything that is sensed (sight, hearing, touch, smell, and taste) is considered to
be information, which the mind processes. Information is thought to enter and exit the mind
through a series of ordered processing stages.
Human information processing analyses are used in HCI in several ways.
• basic facts and theories about information-processing capabilities are taken into
consideration when designing interfaces and tasks
• information-processing methods are used in HCI to conduct empirical studies
evaluating the cognitive requirements of various tasks in which a human uses a
computer
• computational models developed in HCI are intended to characterize the information
processing of a user interacting with a computer, and to predict, or model, human
performance with alternative interfaces.
The idea of human information processing is that information enters and exits the human
mind through a series of ordered stages.
The model assumes that information is unidirectional and sequential and that each of the
stages takes a certain amount of time, generally thought to depend on the complexity of the
operation performed.
In the extended model, cognition is viewed in terms of:
1. how information is perceived (via perceptual processors),
2. how that information is attended to, and
3. how that information is processed and stored in memory.
In this extended human information processing model, attention and memory interact with
all the stages of processing.
An important question when researching memory is how it is structured. Memory can be
broadly categorized into three parts, with links between them along which information
coming in through the senses moves. These parts are:
• sensory information store
• short-term memory (more recently known as working memory)
• long-term memory
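The three-store structure can be made concrete with a toy model. The capacity figure below is the commonly cited "seven plus or minus two" chunks for short-term memory; treat it, and all the names in the sketch, as illustrative rather than as a precise cognitive claim:

```python
from collections import deque

SHORT_TERM_CAPACITY = 7  # rough textbook figure: "seven plus or minus two" chunks


class Memory:
    """Toy model of the short-term and long-term stores and the link between them."""

    def __init__(self):
        # A bounded deque: once full, new items displace the oldest ones,
        # mimicking the limited capacity of short-term (working) memory.
        self.short_term = deque(maxlen=SHORT_TERM_CAPACITY)
        self.long_term = set()

    def perceive(self, item):
        """Sensed input passes from the sensory store into short-term memory."""
        self.short_term.append(item)

    def rehearse(self, item):
        """Rehearsal moves an item from short-term into long-term memory."""
        if item in self.short_term:
            self.long_term.add(item)


m = Memory()
for digit in "0712345891":  # a ten-digit number overflows short-term memory
    m.perceive(digit)

# Only the last seven digits survive; the earliest were displaced.
assert len(m.short_term) == SHORT_TERM_CAPACITY
```

The design point for interfaces follows directly: anything the user must hold in the bounded store risks being displaced, which is why "recognition rather than recall" keeps information on the screen instead.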
Based on the information-processing model, cognition is conceptualized as a series of
processing stages where perceptual, cognitive, and motor processors are organized in relation
to one another. The model predicts which cognitive processes are involved when a user
interacts with a computer, enabling calculations to be made of how long a user will take to
carry out various tasks. This can be very useful when comparing different interfaces. For example, it
has been used to compare how well different word processors support a range of editing
tasks. The information processing approach is based on modeling mental activities that
happen exclusively inside the head. However, most cognitive activities involve people
interacting with external kinds of representations, like books, documents, and computers, not
to mention one another. For example, when we go home from wherever we have been, we do
not need to remember the details of the route because we rely on cues in the environment
(e.g., we know to turn left at the red house, right when the road comes to a T-junction, and so
on). Similarly, when we are at home we do not have to remember where everything is
because the information is "out there." We decide what to eat and drink by scanning the items
in the fridge, find out whether any messages have been left by glancing at the answering
machine to see if there is a flashing light, and so on.
The MHP model was used as the basis for the GOMS (Goals, Operators, Methods, and
Selection Rules) family of techniques proposed for quantitatively modeling and describing
human task performance.
a) Goals: These are the user’s goals, describing what the user wants to achieve. Further, in
GOMS the goals are taken to represent a ‘memory point’ for the user, from which he can
evaluate what should be done and to which he may return should any errors occur.
b) Operators: These are the lowest level of analysis. They are the basic actions that the user
must perform in order to use the system. They may affect the system (e.g., press the ‘X’
key) or only the user’s mental state (e.g., read the dialogue box). There is still a degree of
flexibility about the granularity of operators; we may take the command level (‘issue the
select command’) or be more primitive (‘move mouse to menu bar, press center mouse
button...’).
c) Methods: As we have already noted, there are typically several ways in which a goal can
be split into sub-goals.
d) Selection rules: These are the means of choosing between competing methods.
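The GOMS family includes the Keystroke-Level Model (KLM), which predicts an expert's error-free execution time for a method by summing standard per-operator times. The sketch below uses commonly quoted approximate textbook values for the operator times; treat the exact numbers, and the two example methods, as illustrative assumptions rather than definitive data:

```python
# Keystroke-Level Model sketch. Operator times (seconds) are approximate
# textbook figures, not definitive measurements.
OPERATOR_TIME = {
    "K": 0.2,   # keystroke or button press (average skilled typist)
    "P": 1.1,   # point with a mouse to a target on screen
    "H": 0.4,   # "home" the hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}


def predict_time(operators):
    """Sum the unit times of an operator sequence, e.g. 'MHPK'."""
    return sum(OPERATOR_TIME[op] for op in operators)


# Method A: click a toolbar button -> think, reach for mouse, point, click.
method_a = "MHPK"
# Method B: type a two-key shortcut -> think, two keystrokes.
method_b = "MKK"

# A selection rule might prefer B for frequent actions: it is predicted faster.
assert predict_time(method_b) < predict_time(method_a)
```

This is exactly the "predictive" use of theory described in section 1.8.3: two candidate methods for the same goal can be compared numerically before either is built.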
One of the problems of abstracting a quantitative model from a qualitative description of user
performance is ensuring that the two are connected. In particular, it has been noted that the
form and content of the GOMS family of models are relatively unrelated to the form and
content of the model human processor, and that the approach oversimplifies human behavior.
Figure 4. Cycle of interaction between a user and a system
The Gulf of Evaluation is the amount of effort a user must exert to interpret the physical state
of the system and how well their expectations and intentions have been met.
1. Users can bridge this gulf by changing their interpretation of the system image, or
changing their mental model of the system.
2. Designers can bridge this gulf by changing the system image.
The Gulf of Execution is the difference between the user’s goals and what the system allows
them to do – it describes how directly their actions can be accomplished.
1. Users can bridge this gulf by changing the way they think and carry out the task
toward the way the system requires it to be done
2. Designers can bridge this gulf by designing the input characteristics to match the
users’ psychological capabilities.
Design considerations:
Systems should be designed to help users form the correct productive mental models.
Common design methods include the following factors:
1. Affordance: Clues provided by certain objects properties on how this object will be used
and manipulated.
2. Simplicity: Frequently accessed functions should be easily accessible. The interface
should be simple and transparent enough for the user to concentrate on the actual task at
hand.
3. Familiarity: As mental models are built upon prior knowledge, it's important to use this
fact in designing a system. Relying on the user's familiarity with an old, frequently
used system gains the user's trust and helps in accomplishing a large number of tasks.
Metaphors in user interface design are an example of applying the familiarity factor
within the system.
4. Availability: Since recognition is always better than recall, an efficient interface should
always provide cues and visual elements to relieve the user from the memory load
necessary to recall the functionality of the system.
5. Flexibility: The user should be able to use any object, in any sequence, at any time.
6. Feedback: The system should give complete and continuous feedback throughout the
course of action of the user. Fast feedback helps the user assess the correctness of a
sequence of actions.
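The six factors above can be sketched as a simple review checklist. This is a hypothetical illustration: the factor names come from the text, but the checklist structure, the helper function and the example review are invented for demonstration.

```python
# Hypothetical sketch: the six design factors from the text expressed
# as a checklist for reviewing an interface design. The structure and
# example review below are invented for illustration.
DESIGN_FACTORS = [
    "affordance",    # do object properties hint at their use?
    "simplicity",    # are frequent functions easy to reach?
    "familiarity",   # does it build on users' prior knowledge?
    "availability",  # does it favour recognition over recall?
    "flexibility",   # can objects be used in any order, any time?
    "feedback",      # is feedback complete, continuous and fast?
]

def missing_factors(review):
    """Return the factors a design review has not yet satisfied."""
    return [f for f in DESIGN_FACTORS if not review.get(f, False)]

# Example: a review that has only confirmed two of the six factors.
review = {"affordance": True, "simplicity": True, "feedback": False}
print(missing_factors(review))
# ['familiarity', 'availability', 'flexibility', 'feedback']
```

Used this way, the factors serve as a lightweight heuristic checklist rather than a formal evaluation method.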
A main goal of the distributed cognition approach is to analyze how the different components
of the functional system are coordinated. This involves analyzing how information is
propagated through the functional system in terms of technological, cognitive, social and
organizational aspects. To achieve this, the analysis focuses on the way information moves
and transforms between different representational states of the objects in the functional
system and the consequences of these for subsequent actions.
One property of distributed cognition that is often discovered through analysis is situation
awareness (Norman, 1993) which is the silent and inter-subjective communication that is
shared among a group. When a team is working closely together the members will monitor
each other to keep abreast of what each member is doing. This monitoring is not explicit;
rather, the team members monitor each other through glancing and inadvertent overhearing.
c) Annotating and cognitive tracing: Another way in which we externalize our cognitions is
by modifying representations to reflect changes that are taking place that we wish to
mark. For example, people often cross things off in a to-do list to show that they have been
completed. They may also reorder objects in the environment; say by creating different
piles as the nature of the work to be done changes. These two kinds of modification are
called annotating and cognitive tracing:
Annotating involves modifying external representations, such as crossing off or
underlining items.
Cognitive tracing involves externally manipulating items into different orders or structures.
This has occurred in parallel with the reduced importance of the model human processor
within HCI and the development of other theoretical approaches. These cognitive theories are
classed as either computational or connectionist.
The computational approach
This approach uses the computer as a metaphor for how the brain works, similar to the
information processing models described above. Computational approaches continue to
adopt the computer metaphor as a theoretical framework, but they no longer adhere to the
information-processing framework. Instead, the emphasis is on modeling human
performance in terms of what is involved when information is processed rather than when
and how much. Primarily, computational models conceptualize the cognitive system in
terms of the goals, planning and action that are involved in task performance. These
aspects include modeling: how information is organized and classified, how relevant
stored information is retrieved, what decisions are made and how this information is
reassembled. Thus tasks are analyzed not in terms of the amount of information processed
per se in the various stages but in terms of how the system deals with new information.
Connectionist Approaches
The connectionist approach rejects the computer metaphor in favour of the brain
metaphor, in which cognition is represented by neural networks. Approaches of this kind,
otherwise known as neural networks or parallel distributed processing, simulate behavior
using programming models. However, they differ from computational approaches in that
they reject the computer metaphor as a theoretical framework. Instead, they adopt the
brain metaphor, in which cognition is represented at the level of neural networks
consisting of interconnected nodes. Hence all cognitive processes are viewed as
activations of the nodes in the network and the connections between them rather than the
processing and manipulation of information.
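As a minimal illustration of the connectionist view, the sketch below treats "cognition" as activation spreading through weighted connections between simple nodes rather than as symbol manipulation. The weights, inputs and network size are arbitrary values chosen purely for demonstration.

```python
# Minimal sketch of the connectionist idea: activation spreading
# through weighted connections between nodes. Weights and inputs
# are arbitrary illustrative values.
import math

def sigmoid(x):
    """Squash a node's net input into an activation between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def propagate(activations, weights):
    """Compute each output node's activation from the input layer.

    weights[j][i] is the strength of the connection from input
    node i to output node j."""
    return [sigmoid(sum(w * a for w, a in zip(row, activations)))
            for row in weights]

inputs = [1.0, 0.0, 0.5]         # activations of three input nodes
weights = [[0.8, -0.4, 0.3],     # connections into output node 0
           [-0.2, 0.9, 0.6]]     # connections into output node 1
print(propagate(inputs, weights))  # two output activations in (0, 1)
```

The point of the sketch is the representational stance: nothing here stores or manipulates explicit symbols; the "cognitive process" is just the pattern of activations produced by the connections.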
4. Shneiderman B., Designing the User Interface: Strategies for Effective Human-Computer
Interaction, Addison-Wesley
5. Beyer H. and Holtzblatt K., Contextual Design: Defining Customer-Centered Systems,
Morgan Kaufmann
LECTURE 4: HCI HUMAN FACTORS – Perception and Representation
Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so
that after studying it you will be able to:
i. Explain the role of human perception in HCI design
4.1. Perception
Perception is the process, or the capability, of attaining awareness and understanding of the
surrounding environment by interpreting, selecting and organizing different types of
information. All perceptions involve stimuli in the central nervous system. These stimuli
result from the stimulation of our sense organs, such as an auditory stimulus when one hears
a sound or a taste when one eats something.
Perception is not purely passive; it can be shaped by our learning, experiences and education.
By training your brain and your cognitive abilities you can improve the different skills you
use to perceive the world around you, become more aware and improve your learning
capacity.
An understanding of the way humans perceive visual information is important in the design
of visual displays in computer systems. Several competing theories have been proposed to
explain the way we see. These can be split into two classes: constructivist and ecological.
Constructivist theorists believe that seeing is an active process in which our view is
constructed from both information in the environment and from previously stored
knowledge. Perception involves the intervention of representations and memories. What
we see is not a replica or copy; rather a model that is constructed by the visual system
through transforming, enhancing, distorting and discarding information.
Ecological theorists believe that perception is a process of 'picking up' information from
the environment, with no construction or elaboration needed. Users intentionally engage
in activities that cause the necessary information to become apparent. We explore objects
in the environment.
4.2. The Gestalt Laws of perceptual organization (Constructivist)
These laws imply that users' ability to interpret the meaning of scenes and objects is based on
innate human laws of organization. Look at the following figure:
What does it say? What do you notice about the middle letter of each word?
You probably read this as ‘the cat’. You interpreted the middle letter in each word according
to the context. Your prior knowledge of the world helped you make sense of ambiguous
information. This is an example of the constructivist process.
Gestalt psychology is a movement in experimental psychology that began just prior to World
War I. It made important contributions to the study of visual perception and problem solving.
The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather
than separate component parts. The Gestalt psychologists were constructivists.
The focal point of Gestalt theory is the idea of "grouping," or how we tend to interpret a
visual field or problem in a certain way. According to the Gestalt laws of perceptual
organization the main factors that determine grouping are:
proximity - elements tend to be grouped together depending on their closeness: the dots
appear as groups rather than a random cluster of elements.
similarity - elements of the same shape or colour tend to be seen as belonging together,
i.e. items that are similar in some way (in shape, colour, etc.) tend to be grouped
together.
closure - missing parts of a figure are filled in to complete it, so that it appears as a
whole circle, i.e. items are grouped together if they tend to complete a pattern.
good continuation/continuity - the stimulus appears to be made of two lines of dots
traversing each other, rather than a random set of dots; we tend to assign objects to an
entity that is defined by smooth lines or curves.
• Symmetry - regions bounded by symmetrical borders tend to be perceived as coherent
figures
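The proximity law can be illustrated computationally. The hypothetical sketch below merges points into groups whenever they fall within a distance threshold, so two runs of dots separated by a wide gap come out as two groups; the point coordinates and threshold are invented for demonstration.

```python
# Illustrative sketch of the proximity law: points whose mutual
# distance falls below a threshold are treated as one perceptual
# group, via a simple single-linkage merge.
import math

def group_by_proximity(points, threshold):
    """Greedily merge points into groups of mutually close neighbours."""
    groups = []
    for p in points:
        # Find every existing group containing a point near p.
        near = [g for g in groups
                if any(math.dist(p, q) <= threshold for q in g)]
        # Merge p with all of those groups into one.
        merged = [p] + [q for g in near for q in g]
        groups = [g for g in groups if g not in near]
        groups.append(merged)
    return groups

# Two horizontal runs of dots separated by a wide gap: perceived as
# two groups, not seven unrelated dots.
dots = [(0, 0), (1, 0), (2, 0), (8, 0), (9, 0), (10, 0), (11, 0)]
print(len(group_by_proximity(dots, threshold=1.5)))  # prints 2
```

This is only an analogy for the perceptual effect, of course; human grouping also depends on the other Gestalt factors (similarity, closure, continuity, symmetry) acting together.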
Norman's first and ongoing example is that of a door. With some doors it is difficult to see
whether they should be pushed or pulled; with others it is obvious. The same is true of ring
controls on a cooker. How do you turn on the right rear ring?
Both scrollbars afford movement up or down.
What visual clues in the design on the right make this affordance obvious?
In some of these cases the affordances of GUI objects rely on prior knowledge or learning.
We have learned that something that looks like a button on the screen is for clicking. A text
box is for writing in, etc. For example, saying that a button on a screen affords clicking,
whereas the rest of the screen does not, is inaccurate. You could actually click anywhere on
the screen. We have learned that clicking on a button shaped area of the screen results in an
action being taken.
b) paying careful attention to the affordances of objects ensures that the information
required to use them can easily be detected by the user.
LECTURE 5: HUMAN FACTORS -ATTENTION AND MEMORY
Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so
that after studying it you will be able to:
i. Explain the cognitive process of Attention and Memory in the context of HCI
ii. Describe models of the cognitive process of attention
iii. Describe memory models
iv. Explain the importance of attention to the Human Information Processing model
5.1 Introduction
In this lecture we will study two cognitive processes: attention and memory. You have seen
the importance of these two in the Extended Human Processing model, studied in Lecture 3,
as shown in the figure.
5.2. Attention
Attention is the process of selecting things to concentrate on, at a point in time, from the range of
possibilities available. Attention involves our auditory and/or visual senses. An example of
auditory attention is waiting in the dentist's waiting room for our name to be called out to know
when it is our time to go in; auditory attention is based on pitch, timbre and intensity. An example
of attention involving the visual senses is scanning the football results in a newspaper to attend to
information about how our team has done; visual attention is based on colour and location.
Attention allows us to focus on information that is relevant to what we are doing. The extent to
which this process is easy or difficult depends on whether we have clear goals and whether the
information we need is salient in the environment.
The human brain is limited in capacity. It is important to design user interfaces which take
into account the attention and memory constraints of the users. This means that we should
design meaningful and memorable interfaces. Interfaces should be structured to be attention-
grabbing and require minimal effort to learn and remember. The user should be able to deal
with information and not get overloaded. Our ability to attend to one event from what seems
like a mass of competing stimuli has been described psychologically as focused attention.
The "cocktail party effect" -- the ability to focus one's listening attention on a single talker
among a cacophony of conversations and background noise has been recognized for some
time.
We know from psychology that attention can be focused on one stream of information (e.g.
what someone is saying) or divided (e.g. focused both on what someone is saying and what
someone else is doing). We also know that attention can be voluntary (we are in an attentive
state already) or involuntary (attention is grabbed). Careful consideration of these different
states of attention can help designers to identify situations where a user’s attention may be
overstretched, and therefore needs extra prompts or error protection, and to devise
appropriate attention attracting techniques.
A further property of attention is that it can be voluntary, as when we make a conscious effort
to change our attention. Attention may also be involuntary, as when the salient
characteristics of the competing stimuli grab our attention. An everyday example of an
involuntary act is being distracted from working when we can hear music or voices in the
next room. Another thing is that frequent actions become automatic actions, that is, they do
not need any conscious attention and they require no conscious decisions.
Note
1. Important information should be displayed in a prominent place to catch the user's eye.
Less important information can be relegated to the background in specific areas – the user
should know where to look.
2. Information not often requested should not be on the screen, but should be accessible
when needed.
Note that the concepts of attention and perception are closely related.
In complex environments, users may be performing one primary task which is the most
important at that time, and also one or more less important secondary tasks. For example, a
pilot’s tasks include attending to air traffic control communications, monitoring flight
instruments, dealing with system malfunctions which may arise, and so on. At any time, one
of these will be the primary task, which is said to be fore-grounded, while other activities are
momentarily suspended.
People are in general good at multitasking but are often prone to distraction. On returning to
an activity, they may have forgotten where they left off. People often develop their own
strategies, to help them remember what actions they need to perform when they return to an
activity.
Such external representations, or cognitive aids (Norman, 1992), may include writing lists or
notes, or even tying a knot in a handkerchief. Cognitive aids have applications in HCI, where
the system can be designed to:
• remind the user where he was
• remind the user of common tasks
The human memory system is very versatile, but it is by no means infallible. We find some
things easy to remember, while other things can be difficult to remember. The same is true
when we try to remember how to interact with a computer system. Some operations are
simple to remember while others take a long time to learn and are quickly forgotten. An
understanding of human memory can be helpful in designing interfaces that people will find
easy to remember how to use.
In addition, memory can be classified as declarative memories that include facts or statements
about the world and procedural memories that are used to perform procedures. More
specifically, implicit memories are not reportable and explicit memories can be reported.
Human factors workers must also consider human learning abilities and how to design
information technologies to support different learning styles.
Sometimes a command name may be a word familiar to the user in a different context. For
example, the word ‘CUT’ to a computer novice will mean to sever with a sharp instrument,
rather than to remove from a document and store for future use. This can make the CUT
command initially confusing.
The extent to which the meaning of an icon is understood depends on how it is represented.
Representational form of icons can be classified as follows:
a. Resemblance icons – depict the underlying concept through an analogous image.
E.g. the road sign for "falling rocks" presents a clear resemblance of the roadside hazard;
similarly, an icon that resembles a calculator represents the Windows calculator
application.
E.g. a knife and fork used in a public information sign to represent "restaurant services":
the image shows the most basic attribute of what is done in a restaurant, i.e. eating.
Similarly, an icon showing a clock and a letter represents Microsoft Outlook – examples
of the tasks this application does (calendar and email tasks).
c. Symbolic icons – convey the meaning at a higher level of abstraction.
E.g. the picture of a wine glass with a fracture conveys the concept of fragility; an icon of
a globe represents a connection to the internet, the globe conveying the concept of the
internet.
d. Arbitrary icons – bear no relation to the underlying concept.
E.g. the bio-hazard sign consists of three partially overlaid circles; similarly, the icon of
the software design application Enterprise Architect has no obvious meaning to tell you
what task you can do with the application.
Note that arbitrary icons should not be regarded as poor designs, even though they must
be learned. Such symbols may be chosen to be as unique and/or compact as possible, such
as a red no-entry sign with a white horizontal bar, designed to avoid dangerous
misinterpretation.
e. Combination Icons
Icons are often favoured as an alternative to commands. It is common for users who use a
system infrequently to forget commands, while they are less likely to forget icons once
learnt. However, the meaning of icons can sometimes be confusing, and it is now quite
common to use a redundant form of representation where the icons are displayed together
with the command names.
LECTURE 6: MENTAL MODELS AND KNOWLEDGE
Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so
that after studying it you will be able to:
i. Explain the concept of Mental models in HCI
ii. Explain the characteristics of Mental Models
iii. Explain the applicability of Mental Models to HCI
iv. Discuss the types of Mental Models
v. Explain Knowledge representation in Memory
Mental models are representations in the mind of real or imaginary situations. Conceptually,
the mind constructs a small scale model of reality and uses it to reason, to underlie
explanations and to anticipate events. These models can be constructed from perception,
imagination, or interpretation of discourse. A mental model represents explicitly what is true,
but not what is false. The greater the number of mental models a task suggests, and the
greater the complexity of each model, the poorer the performance. These models are more
than just pictures or images; sometimes the model itself cannot be visualized, or the image of
the model depends on underlying models. Models can also represent abstract notions like
negation or ownership, which are impossible to visualize.
Within cognitive psychology the term mental model has since been explicated by Johnson-
Laird (1983, 1988) with respect to its structure and function in human reasoning and
language understanding. In terms of the structure of mental models, he argues that mental
models are either analogical representations or a combination of analogical and propositional
representations. They are distinct from, but related to, images. A mental model represents the
relative position of a set of objects in an analogical manner that parallels the structure of the
state of objects in the world. An image also does this, but more specifically in terms of view
of a particular model.
An important difference between images and mental models is in terms of their function.
Mental models are usually constructed when we are required to make an inference or a
prediction about a particular state of affairs. In constructing the mental model a conscious
mental simulation may be ‘run’ from which conclusions about the predicted state of affairs
can be deduced. An image, on the other hand, is considered to be a one-off representation.
Having developed a mental model of an interactive product, it is assumed that people will use
it to make inferences about how to carry out tasks when using the interactive product. Mental
models are also used to fathom what to do when something unexpected happens with a
system and when encountering unfamiliar systems. The more someone learns about a system
and how it functions, the more their mental model develops.
b) Functional models, also known as task-action mapping models, are procedural knowledge
about how to use the system. The main advantage of functional models is that they can be
constructed from existing knowledge about a similar domain or system.
Structural models are context free while functional models are context sensitive.
Structural models can answer unexpected questions and make predictions, while functional
models are based round a fixed set of tasks. Most of the time users will tend to apply
functional models as they are usually easier to construct.
The theory of knowledge representation and mental models is applicable in designing
everyday things. By knowing what users know about the system and how they can infer the
system's functionality from the provided interface, it becomes possible to predict and
improve the learning curve, reduce users' errors, improve the ease of use of the system and,
finally, design interfaces that support the acquisition of an appropriate user model.
2. System's model of the user which is the model constructed inside the system as it runs
through different sources of information such as profiles, user settings, logs, and even
errors.
3. Conceptual model which is an accurate and consistent representation of the target
system held by the designer or an expert user
4. Designer's model of the user's model, which is basically constructed before the system
exists, by looking at similar systems or prototypes, or by cognitive models or task
analysis.
Several factors influence the way these models are built and maintained. At the users' side:
their physical and sensory abilities, their previous experience dealing with similar systems,
their domain knowledge and finally ergonomics and environments in which users live. At the
designers' side, the need is to influence the user's model to perceive the conceptual model
underlying the relevant aspects of the system. This can be accomplished using metaphors,
graphics, icons, language, documentation and tutorials. It is important that all these materials
work together to encourage the same model.
Design considerations:
As stated above, systems should be designed to help users form correct, productive mental
models. The common design factors discussed earlier (affordance, simplicity, familiarity,
availability, flexibility and feedback) apply here as well.
6.6. Knowledge in the Head vs. Knowledge in the World
Psychologist and cognitive scientist Donald A. Norman published a book titled The
Psychology of Everyday Things. In it he reviews the factors that affect our ability to use the
items we encounter in our everyday lives. He relates amusing but pointed stories of people’s
attempts to use VCRs, computers, slide projectors, telephones, refrigerator controls, etc.
In his book Norman distinguishes between the elements of good design for items that are
encountered infrequently or used only occasionally and those with which the individual
becomes intimately familiar through constant use. Items encountered infrequently need to be
obvious. An example used by Norman is the common swinging door. The individual
intending to pass through the door needs to know whether to push the door or pull it. We all
have experienced doors where it was not obvious what to do, or two doors appeared the same,
but only one would swing. However, when a door is well designed, it is obvious whether one
is to push or pull. Norman refers to knowledge of how to use such items as being in the
world.
However, when one uses something frequently, efficiency and speed are important, and the
knowledge of how to use it needs to reside in the head. Most people can relate to the common
typewriter or computer keyboard. When one uses it infrequently and has never learned to
type, the knowledge of which key produces which character on the screen comes from
visually scanning the keyboard and finding the key with the desired character. The
knowledge comes from the world. However, people who frequently use a computer learn to
touch type, transferring the knowledge to the head. Their efficiency and speed far exceed that
of the hunt-and-peck typist. Norman describes the trade-off between knowledge in the world
and knowledge in the head as follows:
LECTURE 7: COMPUTER FACTORS IN HCI
USER INTERFACE & USER SUPPORT
Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so
that after studying it you will be able to:
i. Describe the advantages and disadvantages of different input output devices keeping
in view different aspects of HCI
ii. Describe the Approaches to User Support in HCI
iii. Explain the requirements for User Support in HCI
iv. Explain types of User Support
7.1. Introduction
A user interface is the system by which people interact with a machine. The user
interface includes hardware (physical) and software (logical) components. User interfaces
exist for various systems and provide a means of:
a) Input, allowing the users to manipulate a system
b) Output, allowing the system to indicate the effects of the users' manipulation
The user interface is a component of a computer or its software which can be visualized,
heard, touched, interacted with, run and understood by the common people or users of the
computer. A computer system is made up of various elements; each of these elements affects
the interaction:
a) input devices – text entry and pointing
b) output devices – screen (small & large), digital paper
c) virtual reality – special interaction and display devices
d) physical interaction – e.g. sound, haptic, bio-sensing
e) paper – as output (print) and input (scan)
f) memory – RAM & permanent media, capacity & access
g) processing – speed of processing, networks
The devices dictate the styles of interaction that the system supports. If we use different
devices, then the interface will support a different style of interaction.
7.2. Input Devices
An input device is "a device that, together with appropriate software, transforms information
from the user into data that a computer application can process".
Input is concerned with recording and entering data into a computer system and issuing
instructions to the computer. In order to interact with computer systems effectively, users
must be able to communicate their intentions in such a way that the machine can interpret
them.
One of the key aims in selecting an input device and deciding how it will be used to control
events in the system is to help users to carry out their work safely, effectively, efficiently and,
if possible, to also make it enjoyable. The choice of input device should contribute as
positively as possible to the usability of the system. In general, the most appropriate input
device will be the one that:
Matches the physiological and psychological characteristics of users, their training and
their expertise. For example, older adults may be hampered by conditions such as
arthritis and may be unable to type; inexperienced users may be unfamiliar with
keyboard layout.
Is appropriate for the tasks that are to be performed. For example, a drawing task
requires an input device that allows continuous movement; selecting an option
from a list requires an input device that permits discrete movement.
Is suitable for the intended work and environment. For example, speech input is
useful where there is no surface on which to put a keyboard but is unsuitable in noisy
conditions; automatic scanning is suitable if there is a large amount of data to be
generated.
Many systems use two or more complementary input devices together, such as a keyboard
and a mouse. There should also be appropriate system feedback to guide, reassure and, if
necessary, correct users’ errors, for example:
a) On screen - text appearing, cursor moves across screen
b) Auditory – alarm, sound of mouse button clicking
c) Tactile – feel of button being pressed, change in pressure
7.3. Text Entry Devices
There are many text entry devices as given below:
7.3.1. Keyboard
The most common method of entering information into the computer is through a keyboard.
When considering the design of keyboards, both individual keys and grouping arrangements
need to be considered. The physical design of keys is obviously important. Alterations in the
arrangement of the keys can affect a user’s speed and accuracy. Various studies have shown
that typing involves a great deal of skill: analyses of trained typists suggest that typing is not
a sequential act, with each key being sought out and pressed as the letters occur in the words
to be typed. Rather, the typist looks ahead, processes text in chunks, and then types it in chunks.
The QWERTY arrangement of keys was chosen in order to reduce the incidence of keys
jamming in the manual typewriters of the time, rather than because of any optimal
arrangement for typing. For example, the letters 's', 't' and 'h' are far apart even though they
are frequently used together.
These keyboards are used in some pocket electronic personal organizers, perhaps because the
layout looks simpler to use than the QWERTY one. Also, it dissuades people from attempting
to use their touch-typing skills on a very small keyboard and hence avoids criticisms of
difficulty of use.
The Dvorak approach is said to lead to faster typing. It was named after its inventor, Dr.
August Dvorak. Dr. Dvorak also invented systems for people with only one hand.
this form of input and converting it to text, we can see that it is an intuitive and simple way of
interacting with the computer. However, there are a number of disadvantages with
handwriting recognition. Current technology is still fairly inaccurate and so makes a significant
number of mistakes in recognizing letters, though it has improved rapidly. Moreover,
individual differences in handwriting are enormous, and make the recognition process even
more difficult.
Speech input is useful in applications where the use of hands is difficult, either due to the
environment or to a user’s disability. It is not appropriate in environments where noise is an
issue. Much progress has been made, but we are still a long way from the image we see in
science fiction of humans conversing naturally with computers. Speech recognition is a
promising area of text entry, but it has been promising for a number of years and is still only
used in very limited situations. However, speech input offers a number of advantages over
other input methods:
Since speech is a natural form of communication, training new users is much easier
than with other input devices.
Since speech input does not require the use of hands or other limbs, it enables
operators to carry out other actions and to move around more freely.
Speech input offers disabled people, such as the blind and those with severe motor
impairments, the opportunity to use new technology.
Mouse
The mouse has become a major component of the majority of desktop computer systems sold
today, and is the little box with the tail connecting it to the machine in our basic computer
system picture. It is a small, palm-sized box housing a weighted ball- as the box is moved on
the tabletop; the ball is rolled by the table and so rotates inside the housing. This rotation is
Page 78
detected by small rollers that are in contact with the ball, and these adjust the values of
potentiometers. The mouse operates in a planar fashion, moving around the desktop, and is
an indirect input device, since a transformation is required to map from the horizontal nature
of desktop to the vertical alignment of the screen. Left-right motion is directly mapped, whilst
up-down on the screen is achieved by moving the mouse away-towards the user.
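As a rough sketch of this indirect mapping, the transformation from desk movement to screen position might look like the following; the coordinate conventions and screen size are assumptions for illustration, not any real driver API.

```python
# Illustrative sketch (not a real driver API): mapping mouse movement on the
# horizontal desktop to cursor position on the vertical screen. Left-right
# motion maps directly; moving the mouse away from the user (negative dy)
# moves the cursor up the screen (decreasing y in screen coordinates).
def move_cursor(cursor, dx, dy_towards_user, width=1920, height=1080):
    x, y = cursor
    x = min(max(x + dx, 0), width - 1)                # direct left-right mapping
    y = min(max(y + dy_towards_user, 0), height - 1)  # away (-) maps to up
    return (x, y)
```

The clamping keeps the cursor on screen, mirroring the way a real pointer stops at the display edge however far the mouse travels on the desk.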
Foot mouse
Although most mice are hand operated, not all are: there have been experiments with a device
called the foot mouse. As the name implies, it is a foot-operated device, although more akin to
an isometric joystick than a mouse. The cursor is moved by foot pressure on one side or the
other of a pad. This allows the hands to be dedicated to the keyboard. A rare device, the foot
mouse has not found common acceptance.
Touch pad
Touchpads are touch-sensitive tablets usually around 2-3 inches square. They were first used
extensively in Apple PowerBook portable computers but are now used in many other
notebook computers and can be obtained separately to replace the mouse on the desktop.
They are operated by stroking a finger over their surface, rather like using a simulated
trackball. The feel is very different from other input devices, but as with all devices users
quickly get used to the action and become proficient.
In the absolute joystick, movement is the important characteristic, since the position of the
joystick in the base corresponds to the position of the cursor on the screen. In the isometric
joystick, the pressure on the stick corresponds to the velocity of the cursor, and when
released, the stick returns to its usual upright centered position.
Touch Screen
Touch displays allow the user to input information into the computer simply by touching an
appropriate part of the screen. This kind of screen is bi-directional – it both receives input and
outputs information. Using appropriate software, different parts of a screen can represent
different responses as different displays are presented to the user.
Advantages:
a) Easy to learn – ideal for an environment where use by a particular user may only occur
once or twice
b) Require no extra workspace
c) No moving parts (durable)
d) Provide very direct interaction
Disadvantages:
a) Lack of precision
b) High error rates
c) Arm fatigue
d) Screen smudging
Touch screens are used mainly for:
1. Kiosk devices in public places, for example for tourist information
2. Handheld computers
Pen Input
Touch screens designed to work with pen devices rather than fingers have become very
common in recent years. Pen input allows more precise control, and, with handwriting
recognition software, also allows text to be input. Handwriting recognition can work with
ordinary handwriting or with purpose designed alphabets such as Graffiti.
Pen input is used in handheld computers (PDAs) and specialised devices, and more recently
in tablet PCs, which are similar to notebook computers, running a full version of the
Windows operating system, but with a pen-sensitive screen and with operating system and
applications modified to take advantage of the pen input. Pen input is also used in graphics
tablets, which are designed to provide precise control for computer artists and graphic
designers.
7.3.11. Cursor Keys
Cursor keys are available on most keyboards. Four keys on the keyboard are used to control
the cursor, one each for up, down, left and right. There is no standardized layout for the keys.
Some layouts are shown in the figure, but the most common now is the inverted ‘T’. Cursor keys
used to be more heavily used in character-based systems before windows and mice were the
norm. However, when logging into remote machines such as web servers, the interface is
often a virtual character-based terminal within a telnet window.
Cursor keys can be used to move a cursor, but it is difficult to accomplish dragging. Using
keys can provide precise control of movement by moving in discrete steps, for example when
moving a selected object in a drawing program. Some handheld computers have a single
cursor button which can be pressed in any of four directions.
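The discrete-step movement described above can be sketched as follows; the key names are assumptions rather than any real toolkit's key codes.

```python
# A sketch of discrete cursor-key movement, for example nudging a selected
# object in a drawing program. Screen y grows downwards, as is conventional,
# so 'up' decreases y. The key names are invented for illustration.
STEP = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def nudge(position, key, step=1):
    """Move the selection one discrete step per key press."""
    dx, dy = STEP[key]
    return (position[0] + dx * step, position[1] + dy * step)
```

Because each press moves the object an exact, repeatable amount, this gives the precise control that dragging with a mouse cannot.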
Note that although these three tasks are similar, the precise nature of the task can have a large
influence on the best choice of device.
7.5.1. Purposes of Output
Output has two primary purposes:
a) Presenting information to the user
b) Providing the user with feedback on actions
Visual Output
Visual display of text or data is the most common form of output. The key design
considerations are that the information displayed be legible and easy to locate and process.
These issues relate to aspects of human cognition described in previous chapters. Visual
output can be affected by poor lighting, eye fatigue, screen flicker and quality of text
characters.
Visual Feedback
Users need to know what is happening on the computer’s side of an interaction. The system
should provide responses which keep users well informed and feeling in control. This
includes providing information about both normal processes and abnormal situations (errors).
Where possible, a system should be able to:
a) Tell a user where he or she is in a file or process
b) Indicate how much progress has been made
c) Signify that it is the user’s turn to provide input
d) Confirm that input has been received
e) Tell the user that input just received is unsuitable.
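Point (b) above, indicating how much progress has been made, can be sketched as a minimal textual progress indicator; the bar format is invented for illustration.

```python
# A minimal sketch of visual feedback: a textual progress indicator that
# tells the user how much of a process has been completed.
def progress_bar(done, total, width=20):
    filled = int(width * done / total)        # portion of the bar to fill
    percent = int(100 * done / total)
    return "[" + "#" * filled + "-" * (width - filled) + "] " + str(percent) + "%"
```

Updating such an indicator as work proceeds keeps the user well informed and feeling in control, even during a long-running operation.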
In order to increase bandwidth for information reaching the user, it is an important goal to use
more channels in addition to visual output. One commonly used supplement for visual
information is sound, but its true potential is often not recognized. Audible feedback can
make interaction substantially more comfortable for the user, providing unambiguous
information about the system state and success or failure of interaction (e.g. a button press),
without putting still more load onto the visual channel.
a) Applications where eyes and attention are required away from the screen – e.g. flight
decks, medical applications, industrial machinery
b) Applications involving process control in which alarms must be dealt with
c) Applications addressing the needs of blind or partially sighted users.
d) Data sonification – situations where data can be explored by listening, given an
appropriate mapping between data and sound. The Geiger counter is a well-known
example of sonification.
7.6.2. Speech
Speech output is one of the most obvious means of using sound for providing users with
feedback. Successful speech output requires a method of synthesizing spoken words which
can be understood by the user. Speech can be synthesized using one of two basic methods:
Quick reference is used primarily as a reminder to the user of the details of tools a user is
basically familiar with and has used before. It may, for example, be used to find a particular
command option, or to remind the user of the syntax of the command. Task-specific help is
required when the user has encountered a problem in performing a particular task or when he
is uncertain how to apply the tool to his particular problem. The help that is offered is directly
related to what is being done. The more experienced or inquisitive user may require a full
explanation of a tool or command to enable him to understand it more fully. This explanation
will almost certainly include information that the user does not need at that time. The fourth
type of support required by users is tutorial help. This is particularly aimed at new users of a
tool and provides step-by-step instruction (perhaps by working through examples) of how to
use the tool.
Each of these types of user support is complementary – they are required at different points in
the user’s experience with the system and fulfill distinct needs. Within these types of required
support there will be numerous pieces of information that the user wants – definitions,
examples, known errors and error recovery information, command options and accelerators,
to name but a few. Some of these may be provided within the design of the interface itself but
others must be included within the help or support system.
At the other end of the spectrum, an adaptive help system that can provide help actively on its
own initiative, rather than at the request of the user, can intrude on the user and so become a
hindrance rather than a help. It is important with these types of system that the ‘suggest’
option can be overridden by the user and switched off!
7.9. Approaches to User Support
There are a number of different approaches to providing help, each of which meets a
particular need. These vary from simple captions to full adaptive help and tutoring systems.
User support comes in a number of styles:
a) Command-based methods: Perhaps the most basic approach to user support is to provide
assistance at the command level – the user requests help on a particular command and is
presented with a help screen or manual page describing it. This type of help is simple and
efficient if the user knows what he wants to know about and is seeking either a reminder
or more detailed information. However, it assumes that the user does know what he is
looking for, which is often not the case. The approach is good for quick reference.
b) Command prompts: In command line interfaces, command prompts provide help when
the user encounters an error, usually in the form of correct usage prompts. Such prompts
are useful if the error is a simple one, such as incorrect syntax, but again they assume
knowledge of the command.
Another form of command prompting, which is not specifically intended to provide help
but which supports the user to a limited degree, is the use of menus and selectable icons.
These provide an aid to memory as well as making explicit what commands are available
at a given time.
c) Context sensitive help: Some help systems are context sensitive. These range from those
that have specific knowledge of the particular user (which we will consider under
adaptive help) to those that provide a simple help key or function that is interpreted
according to the context in which it is called and will present help accordingly. Such
systems are not necessarily particularly sophisticated. However, they do move away from
placing the onus on the user to remember the command. They are often used in menu-
based systems to provide help on menu options.
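A simple help key of this kind might be sketched as follows; the context names and help texts are invented for illustration.

```python
# A deliberately simple sketch of a context-sensitive help key: the same key
# is interpreted according to the context from which it is called. The
# context names and help texts are invented, not from any real system.
HELP_TEXTS = {
    "file_menu": "Open, save or print the current document.",
    "edit_menu": "Cut, copy and paste the current selection.",
}

def help_key(context):
    # The system, not the user, maps the current context to the right topic.
    return HELP_TEXTS.get(context, "No help available for this context.")
```

Even this trivial lookup moves the onus away from the user: pressing help in a menu yields help about that menu, with no command to remember.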
d) On-line tutorials: Online tutorials allow the user to work through the basics of an
application within a test environment. The user can progress at his own speed and can
repeat parts of the tutorial if needed. However, online tutorials are often inflexible and
unforgiving.
An alternative to the traditional online tutorial is to allow the user to learn the system by
exploring and experimenting with a version with limited functionality. This is the idea
behind the Training Wheels interface proposed by Carroll and his colleagues at IBM [60].
The user is presented with a version of the full interface in which some of the
functionality has been disabled. He can explore the rest of the system freely but if he
attempts to use the blocked functionality he is told that it is unavailable. This approach
allows the user freedom to investigate the system as he pleases but without risk.
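The blocking behaviour described might be sketched as follows; the command names and messages are invented, not taken from the original IBM system.

```python
# A sketch of the Training Wheels idea: the full command set is visible, but
# commands outside the reduced 'training' subset report that they are
# unavailable instead of executing. All command names are illustrative.
FULL_COMMANDS = {"open", "save", "print", "mail_merge", "macros"}
ENABLED = {"open", "save", "print"}  # the reduced training subset

def invoke(command):
    if command not in FULL_COMMANDS:
        return "Unknown command: " + command
    if command not in ENABLED:
        return command + " is not available in the training interface."
    return "Executing " + command
```

The user can explore freely; attempting a blocked command does nothing harmful, it simply explains that the feature is disabled.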
e) Online documentation (integrated with the application): Online documentation effectively
makes the existing paper documentation available on computer. This makes the material
available continually (assuming the machine is running!) in the same medium as the
user’s work and, potentially, to a large number of users concurrently.
The use of hypertext can make online documentation more accessible to the inexperienced
user. Hypertext stores text in a network and connects related sections of text using selectable
links.
f) Wizards and assistants
A wizard is a task-specific tool that leads the user through the task, step by step, using
information supplied by the user in response to questions along the way. They are distinct
from demonstrations in that they allow the user actually to complete the task.
Wizards are common in modern applications and provide support for task completion. The
user can perform quite complex tasks safely, quickly and accurately. This is particularly
helpful in the case of infrequent tasks where the learning cost of doing the task manually may
be prohibitive or where there are many steps that must be completed in a specific sequence.
However, wizards can be unnecessarily constraining: they may not offer the user the options
he wants, or they may request information that the user does not have. A well-designed
wizard will allow the user to move back a step as well as forward, will provide a progress
indicator showing how much of the task is completed and how many steps remain, and will
offer sufficient information to allow the user to answer the questions.
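A minimal sketch of such a wizard, with a progress indicator and the ability to move back a step, could look like this; the steps and the scripted answers are invented for illustration.

```python
# A minimal wizard sketch: step-by-step questions, a progress indicator, and
# the ability to move back a step. 'answers' stands in for user input, with
# the string "back" meaning the user pressed the Back button.
def run_wizard(steps, answers):
    collected, i, shown = {}, 0, []
    for answer in answers:
        if i >= len(steps):
            break
        # progress indicator: which step this is and how many there are
        shown.append("Step " + str(i + 1) + " of " + str(len(steps)) + ": " + steps[i])
        if answer == "back":
            i = max(i - 1, 0)           # allow the user to move back a step
        else:
            collected[steps[i]] = answer
            i += 1
    return collected, shown
```

Answering a step again after going back simply replaces the earlier answer, which is exactly the safety net a well-designed wizard should provide.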
Implementation issues
Alongside the presentation issues the designer must make implementation decisions. Some of
these may be forced by physical constraints, others by the choices made regarding the user’s
requirements for help.
Another issue the designer must decide is how the help data is to be structured: in a single
file, a file hierarchy, a database? Again this will depend on the type of help that is required,
but any structure should be flexible and extensible – systems are not static and new topics
will inevitably need to be added to the help system. The data structure used will, to an extent,
determine the type of search or navigation strategy that is provided. Will users be able to
browse through the system or only request help on one topic at a time? The user may also
want to make a hard copy of part of the help system to study later (this is particularly true of
manuals and documentation). Will this facility be provided as part of the support system?
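One possible answer to the structuring question, a topic hierarchy with simple navigation, can be sketched as follows; the topics are invented, and a real system might instead use a single file, a file hierarchy or a database.

```python
# A sketch of help data structured as a topic hierarchy in a plain
# dictionary. Each node has its help text and a (possibly empty) set of
# subtopics; navigation walks down the hierarchy by topic name.
HELP = {
    "printing": {
        "text": "How to print a document.",
        "subtopics": {
            "duplex": {"text": "Printing on both sides.", "subtopics": {}},
        },
    },
}

def find_topic(tree, path):
    """Navigate the hierarchy, e.g. find_topic(HELP, ['printing', 'duplex'])."""
    node = {"subtopics": tree}
    for name in path:
        node = node["subtopics"].get(name)
        if node is None:
            return None          # unknown topic at this level
    return node["text"]
```

A hierarchy like this remains extensible: new topics are added by inserting nodes, without disturbing the navigation code.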
Finally, the designer should consider the authors of help material as well as its users. It is
likely that, even if the designer writes the initial help texts, these will be extended by other
authors at different times. Clear conventions and constraints on the presentation and
implementation of help facilitate the addition of new material.
LECTURE 8: HUMAN COMPUTER INTERACTION DESIGN
Lecture Objectives
The aim of this lecture is to introduce you to the study of human computer interaction
design, so that after studying it you will be able to:
i. Describe the various types of interactions.
ii. Explain the HCI design principles
iii. Explain the HCI design process
The interactive cycle can be divided into two major phases: execution and evaluation. These
can then be subdivided into further stages, seven in all. The stages in Norman’s model of
interaction are as follows:
1. Establishing the goal.
2. Forming the intention.
3. Specifying the action sequence.
4. Executing the action.
5. Perceiving the system state.
6. Interpreting the system state.
7. Evaluating the system state with respect to the goals and intentions.
Norman uses this model of interaction to demonstrate why some interfaces cause problems to
their users. He describes these in terms of the gulf of execution and the gulf of evaluation.
As we noted earlier, the user and the system do not use the same terms to describe the domain
and goals – remember that we called the language of the system the core language and the
language of the user the task language. The gulf of execution is the difference between the
user’s formulation of the actions to reach the goal and the actions allowed by the system. If
the actions allowed by the system correspond to those intended by the user, the interaction
will be effective. The interface should therefore aim to reduce this gulf.
Norman’s model is a useful means of understanding the interaction, in a way that is clear and
intuitive. It allows other, more detailed, empirical and analytic work to be placed within a
common framework. However, it only considers the system as far as the interface. It
concentrates wholly on the user’s view of the interaction. It does not attempt to deal with the
system’s communication through the interface. The interaction framework tries to address
this.
As the interface sits between the User and the System, there are four steps in the interactive
cycle, each corresponding to a translation from one component to another, as shown by the
labeled arcs in figure 6 below
The User begins the interactive cycle with the formulation of a goal and a task to achieve that
goal. The only way the user can manipulate the machine is through the Input, and so the task
must be articulated within the input language. The input language is translated into the core
language as operations to be performed by the System. The System then transforms itself as
described by the operations; the execution phase of the cycle is complete and the evaluation
phase now begins. The System is in a new state, which must now be communicated to the
User. The current values of system attributes are rendered as concepts or features of the
Output. It is then up to the User to observe the Output and assess the results of the interaction
relative to the original goal, ending the evaluation phase and, hence, the interactive cycle.
There are four main translations involved in the interaction: articulation, performance,
presentation and observation.
The ACM SIGCHI Curriculum Development Group has further developed the framework to
show other aspects related to HCI, as shown below:
Fig 7. A framework of Human-Computer Interaction. Adapted from ACM
SIGCHI Curriculum Development Group
Ergonomics addresses issues on the user side of the interface, covering both input and output,
as well as the user’s immediate context. Dialog design and interface styles can be placed
particularly along the input branch of the framework, addressing both articulation and
performance.
Presentation and screen design relates to the output branch of the framework. The entire
framework can be placed within a social and organizational context that also affects the
interaction. Each of these areas has important implications for the design of interactive
systems and the performance of the user.
8.2. Ergonomics
Ergonomics (or human factors) is traditionally the study of the physical characteristics of the
interaction: how the controls are designed, the physical environment in which the interaction
takes place, and the layout and physical qualities of the screen. A primary focus is on user
performance and how the interface enhances or detracts from this. In seeking to evaluate
these aspects of the interaction, ergonomics will certainly also touch upon human psychology
and system constraints. It is a large and established field, which is closely related to but
distinct from HCI, and full coverage would demand a book in its own right.
suggest will depend on the domain and the application, but possible organizations include
the following:
Functional: controls and displays are organized so that those that are functionally
related are placed together;
Sequential: controls and displays are organized to reflect the order of their use in a
typical interaction (this may be especially appropriate in domains where a particular
task sequence is enforced, such as aviation);
Frequency: controls and displays are organized according to how frequently they are
used, with the most commonly used controls being the most easily accessible.
2. The physical environment of the interaction: As well as addressing physical issues in the
layout and arrangement of the machine interface, ergonomics is concerned with the
design of the work environment itself. Where will the system be used? By whom will it
be used? Will users be sitting, standing or moving about? Again, this will depend largely
on the domain and will be more critical in specific control and operational settings than in
general computer use.
However, the physical environment in which the system is used may influence how well
it is accepted and even the health and safety of its users. It should therefore be considered
in all design.
without discomfort or eyestrain. The light source should also be positioned to avoid
glare affecting the display.
Noise: Excessive noise can be harmful to health, causing the user pain and, in acute
cases, loss of hearing. Noise levels should be maintained at a comfortable level in the
work environment.
Time: The time users spend using the system should also be controlled.
4. The use of color: Colors used in the display should be as distinct as possible and the
distinction should not be affected by changes in contrast. Blue should not be used to
display critical information. If color is used as an indicator it should not be the only cue:
additional coding information should be included. The colors used should also
correspond to common conventions and user expectations. Red, green and yellow are
colors frequently associated with stop, go and standby respectively. Therefore, red may
be used to indicate emergency and alarms; green, normal activity; and yellow, standby
and auxiliary function. These conventions should not be violated without very good
cause.
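The rule that color should not be the only cue can be sketched as follows; the status names and their coding are illustrative assumptions, following the stop/go/standby conventions above.

```python
# A sketch of redundant coding: each status pairs a conventional color with
# a textual label, so the meaning survives even if the color distinction is
# lost (poor contrast, color-blind users). The coding here is illustrative.
STATUS_CODING = {
    "emergency": ("red", "ALARM"),
    "normal":    ("green", "OK"),
    "standby":   ("yellow", "STANDBY"),
}

def render_status(status):
    color, label = STATUS_CODING[status]
    return "[" + color.upper() + "] " + label   # label is the redundant cue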
8.3.2. Menus
Menus are a set of options displayed on the screen for choosing an action or entering data;
they can be defined as “a set of options displayed on the screen where the selection and execution of
one (or more) of the options results in a change in the state of the interface”. There are three
types of menus:
1. Pull-down menus
2. Pop-up menus
3. Hierarchical menus
Unlike command-driven systems, menus have the advantage that users do not have to
remember the item they want; they only need to recognize it. The menu options need to be
grouped logically and meaningfully, so that the user can easily recognize the
needed option.
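The recognition-over-recall property can be sketched with a simple numbered menu; the helper names and option labels are invented for illustration.

```python
# A sketch of recognition over recall: the options are displayed on screen,
# and the user picks one by its position rather than remembering a command
# name. The option labels are invented for illustration.
def show_menu(options):
    return [str(i + 1) + ". " + opt for i, opt in enumerate(options)]

def select(options, choice):
    """choice is the 1-based number of the option the user recognized."""
    if 1 <= choice <= len(options):
        return options[choice - 1]
    return None          # out-of-range choices are simply rejected
```

The displayed list does the remembering for the user; the only skill required is recognizing the wanted item among those shown.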
Direct manipulation interfaces “present a set of objects on a screen and provide the user a
repertoire of manipulations that can be performed on any of them”. Each operation on the
interface is done directly and graphically. This reduces syntax errors (for example, you
cannot compile non-existent code, since it is not on the screen when you click the compile icon)
and leads to faster performance of a task. Advantages of direct manipulation of objects:
a) Novices can learn basic functionality quickly, usually through a demonstration.
b) Experts can work extremely rapidly to carry out a wide range of tasks.
c) Knowledgeable intermittent users can retain operational concepts.
d) Error messages are rarely needed.
e) Users can see immediately if their actions are furthering their goals, and if not, they
can simply change the direction of their activity.
Most form-filling interfaces allow easy movement around the form and allow some fields to
be left blank. They also require correction facilities, as users may change their minds or make
a mistake about the value that belongs in each field. The dialog style is useful primarily for
data entry applications and, as it is easy to learn and use, for novice users. However,
assuming a design that allows flexible entry, form filling is also appropriate for expert users.
Natural language interaction benefits users who do not have access to keyboards or who have
limited experience, although the ambiguities of natural language may cause unexpected effects
and make it very difficult for a computer to understand.
Query languages, on the other hand, are used to construct queries to retrieve information from
a database. They use natural-language-style phrases, but in fact require specific syntax, as
well as knowledge of the database structure. Queries usually require the user to specify an
attribute or attributes for which to search the database, as well as the attributes of interest to
be displayed. This is straightforward where there is a single attribute, but becomes complex
when multiple attributes are involved. Most query languages do not provide direct
confirmation of what was requested, so that the only validation the user has is the result of the
search. The effective use of query languages therefore requires some experience. A
specialized example is the web search engine.
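A toy query evaluator illustrating this idea might look like the following; the records and field names are invented, and real query languages require their own specific syntax and knowledge of the database structure.

```python
# A sketch of a query: the user specifies the attributes to search on
# ('where') and the attributes of interest to display ('select').
# The data and field names are invented for illustration.
def query(records, where, select):
    return [{attr: r[attr] for attr in select}   # project selected attributes
            for r in records
            if all(r.get(k) == v for k, v in where.items())]  # match filters
```

As the text notes, the only validation the user gets is the result itself: a misspelt attribute value silently returns an empty list rather than an error.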
Other tools of computer interface design are menus, dialog boxes, check boxes, and radio
buttons and so on. These make use of visualization methods and computer graphics to
provide a more accessible interface than command-line-based displays. The fundamental goal
of WIMP designs is to give the user a meaningful working metaphor, for example an office
or ‘desktop’ representation as opposed to the command-line interface. Its advantages are its
general applicability, the fact that it makes functions explicit, and that it provides immediate feedback.
Humans are highly attuned to images, and visual information can communicate some kinds
of information much more rapidly and effectively than any other method; as is said, “a
picture is worth a thousand words”.
when you click you see a definition of the word. You may point at a recognizable iconic
button and when you click some action is performed.
This point-and-click interface style is obviously closely related to the WIMP style. It clearly
overlaps in the use of buttons, but may also include other WIMP elements. However, the
philosophy is simpler and more closely tied to ideas of hypertext.
8.3.9. Buttons
Buttons are individual and isolated regions within a display that can be selected by the user to
invoke specific operations. These regions are referred to as buttons because they are
purposely made to resemble the push buttons you would find on a control panel. ‘Pushing’
the button invokes a command, the meaning of which is usually indicated by a textual label
or a small icon. Buttons can also be used to toggle between two states, displaying status
information such as whether the current font is italicized or not in a word processor, or
selecting options on a web form. Such toggle buttons can be grouped together to allow a user
to select one feature from a set of mutually exclusive options, such as the size in points of the
current font. These are called radio buttons, since the collection functions much like the old-
fashioned mechanical control buttons on car radios. If a set of options is not mutually
exclusive, such as font characteristics like bold, italics and underlining, then a set of toggle
buttons can be used to indicate the on/off status of the options. This type of collection of
buttons is sometimes referred to as check boxes.
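The distinction between radio buttons and check boxes can be sketched as follows; the class names and option values are invented for illustration.

```python
# A sketch of the distinction above: a radio group holds exactly one
# selected option (mutually exclusive), while check boxes hold any subset
# of independent on/off options. All option values are illustrative.
class RadioGroup:
    def __init__(self, options, selected):
        self.options, self.selected = options, selected

    def choose(self, option):
        if option in self.options:
            self.selected = option      # replaces the previous choice

class CheckBoxes:
    def __init__(self, options):
        self.options, self.on = options, set()

    def toggle(self, option):
        if option in self.on:
            self.on.discard(option)     # independent on/off per option
        else:
            self.on.add(option)
```

Choosing a new font size deselects the old one; toggling bold has no effect on italics, which is precisely why the two widget types must not be confused.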
8.3.10. Toolbars
Many systems have a collection of small buttons, each with icons, placed at the top or side of
the window and offering commonly used functions. The function of this toolbar is similar to
a menu bar, but as the icons are smaller than the equivalent text more functions can be
simultaneously displayed. Sometimes the content of the toolbar is fixed, but often users can
customize it, either changing which functions are made available, or choosing which of
several predefined toolbars is displayed.
8.3.11. Palettes
In many application programs, interaction can enter one of several modes. The defining
characteristic of modes is that the interpretation of actions, such as keystrokes or gestures
with the mouse, changes as the mode changes. Palettes are a mechanism for making the set of
possible modes and the active mode visible to the user. A palette is usually a collection of
icons that are reminiscent of the purpose of the various modes. An example in a drawing
package would be a collection of icons to indicate the pixel color or pattern that is used to fill
in objects, much like an artist’s palette for paint.
Some systems allow the user to create palettes from menus or toolbars. In the case of pull-
down menus, the user may be able ‘tear off’ the menu, turning it into a palette showing the
menu items. In the case of toolbars, he may be able to drag the toolbar away from its normal
position and place it anywhere on the screen. Tear-off menus are usually those that are
heavily graphical anyway, for example line-style or color selection in a drawing package.
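The way a mode changes the interpretation of the same action can be sketched as follows; the mode names are invented for illustration.

```python
# A sketch of modes in a drawing package: the same click is interpreted
# differently depending on the mode currently active in the palette.
# The mode names and actions are invented for illustration.
def handle_click(mode, point):
    actions = {
        "pen":    lambda p: "draw at " + str(p),
        "eraser": lambda p: "erase at " + str(p),
        "fill":   lambda p: "fill region at " + str(p),
    }
    return actions[mode](point)   # active mode decides what the click means
```

Because the click itself carries no hint of the active mode, the palette's job is to keep that mode visible so the user is never surprised by the result.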
The design process requires a solid understanding of the theory behind successful human-
computer interaction, as well as an awareness of established procedures for good user
interface design, including the 'usability engineering' process. Design requires understanding
the specific task domain, users and requirements.
8.5. Interaction Design versus Interface Design
Interface design deals with the process of developing a method for two (or more) modules in
a system to connect and communicate. These modules can apply to hardware, software or the
interface between a user and a machine. Interface design suggests an interface between code
on one side and users on the other side, passing messages between them. It gives
programmers a license to code as they please, because the interface will be slapped on when
they are done.
Interaction design is heavily focused on satisfying the needs and desires of the people who
will use the product. It is about shaping digital things for people’s use, i.e. it refers to function,
behavior, and final presentation.
Good human computer interaction work and design is important for obtaining these
measurable outcomes:
Increasing worker productivity
Increasing worker satisfaction and commitment
Reducing training costs
Reducing errors during interface interaction
Reducing production costs
Good human computer interaction work and design is important for increasing job
commitment by reducing worker turnover. Poor quality interfaces can lead to stress and strain
on users both mentally and physically. Users may experience sore muscles or eye strain due
to poor HCI interfaces and computer design. Workers may leave their jobs if they are
dissatisfied with their HCI experience.
Good human computer interaction work and design is important for reducing production
costs. Effective interfaces allow workers to produce better quality products and services. For
example, an effective Website can assist users to view products and services offered by
companies, including better customer services.
In order to design good human–computer interaction, we have to choose the type of
interface and interaction style appropriately, to fit the class of users the system is designed
for, while taking human factors into consideration. The following is recommended:
a) to investigate the advantages and disadvantages of interaction styles and interface
types that best support the activities and styles of learning of users the system is
aimed at;
b) to choose the type of interface and interaction styles that best supports the system
goals;
c) to choose the interaction styles that are compatible with user attributes and that support
the users’ needs, which means choosing the styles that are more advantageous for the
aimed users (for example, in a system for learning and practicing programming, the
direct manipulation style is more advantageous, as stressed in more detail in section
3.1); and
d) to define the user class (experts, intermediate or novices) that the system is designed
for, where the human factors must be taken in consideration.
1. Visibility: The more visible functions are, the more likely users will be able to know what
to do next. In contrast, when functions are “out of sight,” it is more difficult to find them
and to know how to use them.
2. Affordance: Affordance is a term used to refer to an attribute of an object that allows
people to know how to use it. For example, a mouse button invites pushing by the way it
is physically constrained in its plastic shell. At a very simple level, to afford means “to
give a clue.” When the affordances of a physical object are perceptually obvious it is easy
to know how to interact with it.
3. Constraints: The design concept of constraining refers to determining ways of restricting
the kind of user interaction that can take place at a given moment. There are various ways
this can be achieved. A common design practice in graphical user interfaces is to
deactivate certain menu options by shading them, thereby restricting the user to only
actions permissible at that stage of the activity. One of the advantages of this form of
constraining is that it prevents the user from selecting incorrect options, thereby reducing
the chances of making a mistake. The use of different kinds of graphical representations
can also constrain a person’s interpretation of a problem or information space. There are
three types of constraints:
a) Physical constraints: Physical constraints refer to the way physical objects restrict
the movement of things
b) Logical constraints: Logical constraints rely on people’s understanding of the way
the world works. They rely on people’s common-sense reasoning about actions and
their consequences.
c) Cultural constraints: Cultural constraints rely on learned conventions, like the use of
red for warning, the use of certain kinds of signals for danger, and the use of the
smiley face to represent happy emotions. Most cultural constraints are arbitrary in the
sense that their relationship with what is being represented is abstract, and could have
equally evolved to be represented in another form (e.g., the use of yellow instead of
red for warning). Accordingly, they have to be learned. Once learned and accepted by
a cultural group, they become universally accepted conventions. Two universally
accepted interface conventions are the use of windowing for displaying information
and the use of icons on the desktop to represent operations and documents.
4. Mapping: This refers to the relationship between controls and their effects in the world.
Nearly all artifacts need some kind of mapping between controls and effects, whether it is
a flashlight, car, power plant, or cockpit. The mapping of the relative position of controls
and their effects is also important
5. Consistency: This refers to designing interfaces to have similar operations and use similar
elements for achieving similar tasks. In particular, a consistent interface is one that
follows rules, such as using the same operation to select all objects.
6. Feedback: Related to the concept of visibility is feedback. Feedback is about sending
back information about what action has been done and what has been accomplished,
allowing the person to continue with the activity. Various kinds of feedback are available
for interaction design: audio, tactile, verbal, visual, and combinations of these. Deciding
which combinations are appropriate for different kinds of activities and interactivities is
central. Using feedback in the right way can also provide the necessary visibility for user
interaction.
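The graying-out constraint described under principle 3 can be modelled very simply: each menu item carries a predicate over the application state, and an item whose predicate fails is rendered disabled. This is an illustrative sketch, not any particular toolkit's API:

```python
# Minimal sketch of the "graying out" constraint: a menu item is enabled
# only when it is permissible in the current application state, which
# prevents the user from ever selecting an incorrect option.

class MenuItem:
    def __init__(self, label, is_permitted):
        self.label = label
        self.is_permitted = is_permitted  # predicate over application state

    def state(self, app_state):
        return "enabled" if self.is_permitted(app_state) else "grayed out"

items = [
    MenuItem("Copy",  lambda s: s["selection"] is not None),
    MenuItem("Paste", lambda s: s["clipboard"] is not None),
]

state = {"selection": "some text", "clipboard": None}
for item in items:
    print(f"{item.label}: {item.state(state)}")
# Copy is enabled (there is a selection); Paste is grayed out
# (the clipboard is empty), so the invalid action cannot be chosen.
```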
2. Describe briefly four different interaction styles used to accommodate the dialog between
user and computer.
LECTURE 9: HCI DESIGN PROCESS
Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so
that after studying it you will be able to:
i. Describe the HCI Design Process.
ii. Explain the HCI Design Approaches
iii. Describe Web Interface Design Considerations
The HCI design process can include the following steps:
a) Analyzing the users and determining their needs.
Users can be categorized as novice and first-time users, knowledgeable intermittent
users, or expert frequent users.
Choose the interaction style by weighing its advantages and disadvantages against the
users’ task specifications/requirements.
b) Drafting an initial design based on the users' needs analysis.
Create a design using:
Principles of interaction,
Graphics design principles,
Following guidelines.
c) Testing the initial design with users in an HCI testing laboratory or in a real user work
environment.
d) Developing a prototype system based on the initial design and users' feedback.
e) Testing the prototype system with users in an HCI testing laboratory or real user work
environment.
f) Designing and refining each specific interface and screen.
g) Testing the interface with users in an HCI testing laboratory or a real user work
environment.
h) Refining the interface based on users' feedback.
i) Implementing the interface.
Anthropomorphic Approach
The anthropomorphic approach to human-computer interaction involves designing a user
interface to possess human-like qualities. For instance, an interface may be designed to
communicate with users in a human-to-human manner, as if the computer empathizes with
the user. This approach makes use of:
Affordances: Human affordances are perceivable potential actions that a person can do with
an object. In terms of HCI, icons, folders, and buttons afford mouse-clicking, scrollbars
afford sliding a button to view information off-screen, and drop-down menus show the user a
list of options from which to choose. Similarly, pleasant sounds are used to indicate when a
task has completed, signaling that the user may continue with the next step in a process.
Examples of this are notifications of calendar events, new emails, and the completion of a file
transfer.
Constraints: Constraints complement affordances by indicating the limitations of user actions.
A grayed-out menu option and an unpleasant sound (sometimes followed by an error
message) indicate that the user cannot carry out a particular action. Affordances and
constraints can be designed to non-verbally guide user behaviors through an interface and
prevent user errors in a complex interface.
Cognitive Approach
The cognitive approach to human-computer interaction considers the abilities of the human
brain and sensory-perception in order to develop a user interface that will support the end
user.
Metaphoric Design: Using metaphors can be an effective way to communicate an abstract
concept or procedure to users, as long as the metaphor is used accurately. Computers use a
“desktop” metaphor to represent data as document files, folders, and applications. Metaphors
rely on a user’s familiarity with another concept, as well as human affordances, to help users
understand the actions they can perform with their data based on the form it takes. For
instance, a user can move a file or folder into the “trashcan” to delete it.
A benefit of using metaphors in design is that users who can relate to the metaphor are able to
learn to use a new system very quickly. A potential problem can ensue, however, when users
expect a metaphor to be fully represented in a design, and in reality, only part of the metaphor
has been implemented.
Attention and Workload Models: When designing an interface to provide good usability, it is
important to consider the user’s attention span, which may be based on the environment of
use, and the perceived mental workload involved in completing a task. Typically, users can
focus well on one-task-at-a-time. For example, when designing a web-based form to collect
information from a user, it is best to contextually collect information separately from other
information. The form may be divided into “Contact Information” and “Billing Information”,
rather than mixing the two and confusing users.
By “chunking” this data into individual sections or even separate pages when there is a lot of
information being collected, the perceived workload is also reduced. If all the data were
collected on a single form that makes the user scroll the page to complete, the user may
become overwhelmed by the amount of work that needs to be done to complete the form, and
he may abandon the website. Workload can be measured by the amount of information being
communicated to each sensory system (visual, auditory, etc.) at a given moment. Some
websites incorporate Adobe Flash in an attempt to impress the user. If a Flash presentation
does not directly support a user’s task, the user’s attention may become distracted by too
much auditory and visual information.
A/B Testing: If two of three design concepts were rated highly during user testing, it may be
advantageous to conduct an A/B Test during post-production. One way to do this is to set up
a Google Analytics account, which allows a researcher to set up multiple variations of a web
page to test. When a user visits the website, Google will display one variation of the web
page according to the end user’s IP address. As the user navigates the website, Google tracks
the user’s clicks to see if one version of the web page produces more sales than another
version. Other “conversion” goals may be tracked as well, such as registering a user account
or signing up for a newsletter.
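A minimal sketch of how deterministic variant assignment can work (the hashing scheme shown is illustrative and is not Google Analytics' actual mechanism):

```python
import hashlib

# Illustrative A/B variant assignment: each visitor is deterministically
# bucketed by hashing a stable identifier, so a returning user always
# sees the same page variant while traffic splits roughly 50/50.
def assign_variant(user_id: str, variants=("A", "B")) -> str:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

for uid in ("203.0.113.7", "198.51.100.22"):
    print(uid, "->", assign_variant(uid))
# Conversion counts per variant can then be compared once the test runs.
```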
Selection Rules refer to a user’s personal decision about which method will work best
in a particular situation in order to reach a goal.
The GOMS model is based on human information processing theory, and certain
measurements of human performance are used to calculate the time it takes to complete a
goal.
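As an illustration, the Keystroke-Level Model (a simplified GOMS variant) predicts task time by summing published average operator times; the figures below are the commonly cited Card, Moran and Newell estimates:

```python
# Sketch of a Keystroke-Level Model (KLM) estimate: a task is decomposed
# into primitive operators whose average times (Card, Moran & Newell)
# are summed to predict how long the task takes.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average typist), seconds
    "P": 1.10,  # point with mouse to a target
    "B": 0.10,  # mouse button press or release
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators: str) -> float:
    """Sum operator times for a sequence like 'MHPBB'."""
    return sum(KLM_TIMES[op] for op in operators)

# Example: think, move hand to mouse, point at a menu item, click
# (button press + release)
print(round(predict_time("MHPBB"), 2), "seconds")  # 3.05 seconds
```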
b). Personas
Goal-oriented design advocates for the use of personas, which are created after interviewing a
significant number of users.
The aim of a persona is to “develop a precise description of our user and what he wishes to
accomplish”. The method is to fabricate users, with names and back stories, who represent
real users of a given product. These users are not so much a fabrication as a product of the
investigation process. The reason for constructing back stories for a persona is to make
them believable, such that they can be treated as real people and their needs can be argued
for.
Design rules for interactive systems can be supported by psychological, cognitive,
ergonomic, sociological, economic or computational theory. Shneiderman’s eight golden
rules provide a convenient and succinct summary of the key principles of interface design.
They are intended to be used during design but can also be applied, like Nielsen’s heuristics,
to the evaluation of systems.
1. Strive for consistency in action sequences, layout, terminology, command use and so on.
2. Enable frequent users to use shortcuts, such as abbreviations, special key sequences and
macros, to perform regular, familiar actions more quickly.
3. Offer informative feedback for every user action, at a level appropriate to the magnitude
of the action.
4. Design dialogs to yield closure so that the user knows when they have completed a task.
5. Offer error prevention and simple error handling so that, ideally, users are prevented
from making mistakes and, if they do, they are offered clear and informative instructions
to enable them to recover.
6. Permit easy reversal of actions in order to relieve anxiety and encourage exploration,
since the user knows that he can always return to the previous state.
7. Support internal locus of control so that the user is in control of the system, which
responds to his actions.
8. Reduce short-term memory load by keeping displays simple, consolidating multiple page
displays and providing time for learning action sequences.
The HCI usability principles proposed by Dix, Finlay, Abowd and Beale in the ‘Usability
Paradigms and Principles’ chapter of their book Human–Computer Interaction include:
a) Learnability: The ease with which new users can learn the HCI. How easy is it for users
to accomplish basic tasks the first time they encounter the design? This includes:
Predictability:
Does the application produce results that are in accord with previous commands and
states? Although the computer is ultimately deterministic, the user’s perspective of the
application is as a black box. The user knows the system only by the sequence of
states resulting from the user’s interaction. Predictability might require knowing only
the current state, or all the previous states and their order; the latter is a high demand
on the user. The point is: can the user predict the results of the system from the
system’s history? For example, consider the sequence: 0, 1, 4, ... what comes next?
Synthesizability:
Does the user construct the proper model of the system? Or does the system display
the correct clues to construct a proper model? While the user learns the system, the
user constructs a picture of the innards of the black box.
Familiarity:
Do new users get good clues to use the system properly? Due to the prolific use of
metaphors and WIMP interfaces, naive users expect to use applications without study. The
clues for how to use the system must come from visual organization, names and icons
that are common to the user's domain.
Generalizability:
Can the user guess at the functionality of new commands? Even advanced users do
not read manuals; they learn by making generalizations of commands and associating
commands from one application to another.
Consistency
Does the operation perform similarly for similar inputs? More specific examples:
consistent naming of commands, or consistent syntax of commands and arguments.
b). Flexibility: Multiplicity of ways to interact with the system
Dialog initiative:
Do dialog boxes hold the user prisoner? Old dialog boxes were modal and
prevented the user from interacting with any other part of the system, called system
pre-emptive. Modern dialog boxes are user pre-emptive. The system may request
information from a dialog box, but should not prevent the user from performing
auxiliary activity even in the same application.
Multi-threading:
Can the user perform simultaneous tasks? Tasks represent threads, and multi-
threading allows the user to perform simultaneous tasks.
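A minimal sketch of this idea: running a slow task on its own thread so that it does not block the user's concurrent activity (the task names are illustrative):

```python
import threading
import time

# Each user task runs on its own thread, so a long task (e.g. a file
# download) does not block an interactive task such as editing.
def long_task(results):
    time.sleep(0.2)           # stands in for slow work
    results["download"] = "done"

results = {}
worker = threading.Thread(target=long_task, args=(results,))
worker.start()                 # background task proceeds...
results["editing"] = "active"  # ...while the user keeps working
worker.join()                  # wait for the background task to finish
print(results)  # {'editing': 'active', 'download': 'done'}
```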
Task migratability:
Can the user perform the task or have the computer perform the tasks? Can the user
control computer automated tasks? At a minimum the user ought to be able to
interrupt an application task.
Substitutivity:
Can arguments to commands assume different forms for equivalent values? Can the
output of a command be represented differently? For example, the user ought to be able
to use inches or centimeters or perhaps an expression. The user should be able to see
the results in the units of the user's choice.
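Substitutivity of input and output can be sketched as follows, assuming an illustrative length field that accepts either unit and renders results in the unit of the user's choice:

```python
import re

# Substitutivity sketch: the same length argument may be given in inches
# or centimetres; internally everything is normalised to centimetres,
# and output can be rendered in whichever unit the user prefers.
CM_PER_INCH = 2.54

def parse_length(text: str) -> float:
    """Parse '2.5cm', '1in' or '3"' into centimetres."""
    m = re.fullmatch(r"\s*([\d.]+)\s*(cm|in|\")\s*", text)
    if not m:
        raise ValueError(f"unrecognised length: {text!r}")
    value, unit = float(m.group(1)), m.group(2)
    return value if unit == "cm" else value * CM_PER_INCH

def show_length(cm: float, unit: str = "cm") -> str:
    """Render an internal centimetre value in the user's chosen unit."""
    return f"{cm:g} cm" if unit == "cm" else f"{cm / CM_PER_INCH:g} in"

print(parse_length("1in"))                             # 2.54
print(show_length(parse_length("2.54cm"), unit="in"))  # 1 in
```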
Customizability:
Can the user modify the interface in order to improve efficiency? Are the
customizing features easily accessible? This does not mean changing the color of the
GUI. Rather, can the user add commands, or change font size for better visibility?
c). Robustness: Level of support for error handling
Observability:
Can the user evaluate the state of the application? At a minimum there ought to be an
accurate progress bar; better is for the user to browse the system state. Persistence is
how long system states or user inputs remain visible to the user.
Recoverability:
If the user makes a mistake or the application fails can the user recover the work?
Can the user recover from a mistake in a command by escaping? At a minimum the
user ought to be able to correct the output (forward recovery). Users have grown
accustomed to the undo command (backward recovery). How far back can the user
recover his work? Does the system save snapshots?
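Backward recovery via snapshots can be sketched with a simple undo stack (a minimal illustration, not a production design):

```python
# Backward recovery sketch: every edit pushes a snapshot of the previous
# state onto an undo stack, so the user can step back through mistakes.
class Editor:
    def __init__(self, text=""):
        self.text = text
        self._undo = []  # snapshots of previous states

    def edit(self, new_text):
        self._undo.append(self.text)  # snapshot before the change
        self.text = new_text

    def undo(self):
        if self._undo:                # nothing to undo otherwise
            self.text = self._undo.pop()

doc = Editor()
doc.edit("Hello")
doc.edit("Hello wrold")  # mistake
doc.undo()               # backward recovery
print(doc.text)          # Hello
```

How far back the user can recover depends simply on how many snapshots are retained on the stack.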
Responsiveness:
Does the system respond in suitable time? Some applications require msec response,
others seconds and some applications can run overnight.
Task conformance:
Does the system perform all the tasks that the user needs or wants? At a minimum the
application should cover all the tasks of the domain; better is for the system to allow
new tasks to be added.
Other HCI usability principles include Norman’s Seven Principles for Transforming
Difficult Tasks:
1) Use both knowledge in the world and knowledge in the head. People work better when
the knowledge they need to do a task is available externally – either explicitly or through
the constraints imposed by the environment. But experts also need to be able to
internalize regular tasks to increase their efficiency. So systems should provide the
necessary knowledge within the environment and their operation should be transparent to
support the user in building an appropriate mental model of what is going on.
2) Simplify the structure of tasks. Tasks need to be simple in order to avoid complex
problem solving and excessive memory load. There are a number of ways to simplify the
structure of tasks. One is to provide mental aids to help the user keep track of stages in a
more complex task. Another is to use technology to provide the user with more
information about the task and better feedback. A third approach is to automate the task
or part of it, as long as this does not detract from the user’s experience. The final
approach to simplification is to change the nature of the task so that it becomes something
more simple. In all of this, it is important not to take control away from the user.
3) Make things visible: bridge the gulfs of execution and evaluation. The interface should
make clear what the system can do and how this is achieved, and should enable the user
to see clearly the effect of their actions on the system.
4) Get the mappings right. User intentions should map clearly onto system controls. User
actions should map clearly onto system events. So it should be clear what does what and
by how much. Controls, sliders and dials should reflect the task so a small movement has
a small effect and a large movement a large effect.
5) Exploit the power of constraints, both natural and artificial. Constraints are things in the
world that make it impossible to do anything but the correct action in the correct way. A
simple example is a jigsaw puzzle, where the pieces only fit together in one way. Here the
physical constraints of the design guide the user to complete the task.
6) Design for error. To err is human, so anticipate the errors the user could make and design
recovery into the system.
7) When all else fails, standardize. If there are no natural mappings then arbitrary mappings
should be standardized so that users only have to learn them once. It is this
standardization principle that enables drivers to get into a new car and drive it with very
little difficulty – key controls are standardized. Occasionally one might switch on the
indicator lights instead of the windscreen wipers, but the critical controls (accelerator,
brake, clutch, steering) are always the same.
9.3.3. HCI Design Standards
Standards for interactive system design are usually set by national or international bodies to
ensure compliance with a set of design rules by a large community. Such international
organizations include the British Standards Institution (BSI) and the International
Organization for Standardization (ISO). Standards can apply specifically to either the hardware or the
software used to build the interactive system.
9.3.4. Guidelines
Guidelines are best practice, based on practical experiences or empirical studies. They are
software development documents which offer application developers a set of
recommendations. Their aim is to improve the experience for the users by making application
interfaces more intuitive, learnable, and consistent.
Written guidelines help to develop a “shared language” and thereby promote consistency
among multiple designers and designs/products. Examples of guidelines:
a). How to provide ease of interface navigation.
Five ways to enhance navigation
Standardize Task Sequences
Ensure that embedded links are descriptive
Use unique and descriptive headings
Use checkboxes for binary choices
Use thumbnails to preview larger images
b). How to organize the display.
Five high-level goals of display organization:
Ensure consistency of data display
Promote efficient information assimilation by the user
Put minimal memory load on the user
Ensure compatibility of data display with data entry
Allow for flexibility in data display (user controlled)
c). How to draw the user’s attention.
Seven techniques that can be used to draw the user’s attention:
Intensity: Use high intensity to draw attention
Marking: e.g. underlining, using arrows, borders, etc.
Font Size: Use large fonts to draw attention
Font Style: Use exceptional font styles to draw attention
Blinking/Animation: Use blinking to draw attention (careful!)
Color: Use exceptional colors to draw attention
Audio: Use harsh sounds for exceptional conditions
d). How to best facilitate data entry.
Five high-level objectives should be pursued in order to best facilitate data entry. If applied
meaningfully, productivity will increase, time-to-learn will be reduced and the error
probability will be lower.
Ensure consistency of data entry transactions
Require minimal input by the user (e.g., button vs. typed command)
Require minimal memory load on user (e.g. command line vs. forms)
Ensure compatibility of data entry with data display
Enable flexibility of data entry (user controlled)
The HCI principles involved in user interface design were developed in the context of
graphical user interface (GUI) design, but they apply equally well to Web interface design.
There are, however, several features which are unique to Web interfaces.
h) Web navigation is user controlled
The Web was originally created to provide access to information, and this is still a key
function of many web sites. Alongside this, applications have emerged for interactive web
sites. Online shopping sites are probably the most familiar example of this kind of
interactivity. Many sites rely on both information and interaction. For example, Amazon has
information about the products it sells and requires interaction to allow a customer to make a
purchase.
Successful web sites therefore should make use of both interaction design, and information
architecture. Information Architecture is closely related to navigation – navigation systems
provide the way for the user to find the site content which has been organised by the
information architect.
Organic structures have no consistent pattern or sections. They can encourage free-form
exploration, as on some education or entertainment sites, but can make it difficult to find
information reliably.
Sequential structures are familiar in offline media (e.g. a book). They are often used in web
sites for small sections, such as articles or tutorials, or in task-based aspects.
9.6.3. Supplementary navigation
Provides shortcuts to related content, which is not easily accessible through the global and
local navigation.
Supplementary and contextual navigation provide similar paths through the site structure, but
differ in the way they appear on the page.
Like the previous two systems, courtesy navigation provides links which are not easily
accessible through the global and local navigation.
9.6.6. Personalisation and Social Navigation
Some sites track some aspect of a user to provide easy navigation to content which is of
specific interest. For example some shopping sites recommend products based on past
purchases.
Social navigation is based on the idea that value for the individual can be derived from
observing the actions of others.
An example of a label is “Contact Us” which is found on many web sites. This label
represents a chunk of information, such as telephone, fax and email information. The label
works as a shortcut which triggers the right association in the user’s mind. Successful
labelling triggers such associations and guides users to the appropriate part of a web site to
find the information they require. Good labelling systems are:
• Consistent in:
a) style and presentation
b) syntax - do not mix verb-based, noun-based or question-based labels in a single
system (“Grooming your dog”, “Diets for dogs”, “How do you train your dog?”)
c) audience - do not mix technical and non-technical names for topics in a single system
(“lymphoma”, “tummy-ache”)
Representative and clear
User-centric (for example meaningful to the user, not using corporate jargon or other
terms which might only be meaningful to the site owner)
Many labels in navigation systems are familiar to most web users and can be used quite
safely. Some common variants are listed below:
Main, Main Page, Home
Search, Find, Browse
Site Map, Contents, Table of Contents, Index
Contact, Contact Us
Help, FAQ, Frequently asked Questions
News, News & Events, Announcements
About, About Us, About <company name>, Who We Are
Links: Links can be within blocks of text, headlines, or groups of links organised to allow
access to a large number of locations in a small space:
Buttons, menus, navigation bars and icons: Menus and navigation bars are usually placed
at the left or top of a page. Menus on the left are often used for local navigation systems
alongside global navigation at the top of the page. Pop-up menus can be used to include a
larger number of menu options in a small space.
Drop-down (or pop-up) lists: These provide a very compact way of providing access to a
list of topics.
Site maps: Central collections of links to all areas of a web site. These support a user’s
mental model of using a map.
History trails: Show users the route they have followed – can help their route knowledge.
These are sometimes known as breadcrumbs.
Search engines: Allow keyword searches – can be a very efficient way of finding specific
information or products
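As an illustrative sketch, a breadcrumb trail can be derived directly from the URL path of the current page (the example path and formatting rules are assumptions):

```python
# Breadcrumb (history trail) sketch: the URL path of the current page is
# turned into a visible trail, so users can see where they are in the
# site hierarchy and retrace their route one level at a time.
def breadcrumbs(path: str, separator: str = " > ") -> str:
    parts = [p for p in path.strip("/").split("/") if p]
    trail = ["Home"] + [p.replace("-", " ").title() for p in parts]
    return separator.join(trail)

print(breadcrumbs("/products/dog-care/diets"))
# Home > Products > Dog Care > Diets
```

In a real site each segment of the trail would also be a link back to that level of the hierarchy.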
9.9.2. Wireframes
Wireframes depict how individual pages should look from an architectural perspective. They
are related to both the information architecture and visual design of the site. The wireframe
forces the architect to consider such issues as where the navigation systems might be located
on a page. It translates the navigation systems from blueprint into a “page”. Ideas can be tried
out at this stage, and if the navigation does not seem to work well, this can lead to the
blueprint being revised.
The wireframe also helps the information architect to decide how to group content on the
page. Items near the top left of the page tend to be scanned first by the user, so
information can be prioritized by its position in the “page”.
Wireframes are typically created for the site’s most important pages (main page, major
category pages, etc.) and can describe consistent templates that can be applied to many
pages.
A wireframe is not a finished web page. The final product should also involve graphic
designers to define the aesthetic nature of the site. Where the wireframe represents an
interactive page, such as a form, it may be appropriate to involve specialist interaction
designers and programmers in creating the final product.