Robotics
CAD/CAM, Robotics, and Computer Vision
Consulting Editor
Herbert Freeman, Rutgers University
Fu, Gonzalez, and Lee: Robotics: Control, Sensing, Vision, and Intelligence
Groover, Weiss, Nagel, and Odrey: Industrial Robotics: Technology, Programming,
and Applications
Levine: Vision in Man and Machine
Parsons: Voice and Speech Processing
ROBOTICS:
Control, Sensing, Vision, and Intelligence
K. S. Fu
School of Electrical Engineering
Purdue University
R. C. Gonzalez
Department of Electrical Engineering
University of Tennessee
and
Perceptics Corporation
Knoxville, Tennessee
C. S. G. Lee
School of Electrical Engineering
Purdue University
Copyright © 1987 by McGraw-Hill, Inc. All rights reserved. Printed in the United States
of America. Except as permitted under the United States Copyright Act of 1976, no part
of this publication may be reproduced or distributed in any form or by any means, or
stored in a data base or retrieval system, without the prior written permission of the
publisher.
2 3 4 5 6 7 8 9 0 DOCDOC 8 9 8 7
ISBN 0-07-022625-3
ABOUT THE AUTHORS
K. S. Fu made milestone contributions in both basic and applied research. Often termed the
"father of automatic pattern recognition," Dr. Fu authored four books and more
than 400 scholarly papers. He taught and inspired 75 Ph.D.s. Among his many
honors, he received an award in engineering education in 1981, and was awarded the IEEE's Education Medal in 1982. He was a
Fellow of the IEEE, a 1971 Guggenheim Fellow, and a member of Sigma Xi, Eta
Kappa Nu, and Tau Beta Pi honorary societies. He was the founding president of
the International Association for Pattern Recognition, the founding editor in chief of
the IEEE Transactions on Pattern Analysis and Machine Intelligence, and the editor
in chief or editor for seven leading scholarly journals. Professor Fu died of a heart
attack in 1985.
R. C. Gonzalez is a Fellow of the IEEE.
C. S. G. Lee is on the faculty of the School of Electrical Engineering at Purdue Univer-
sity. He received his B.S.E.E. and M.S.E.E. degrees from Washington State
University, and a Ph.D. degree from Purdue in 1978. From 1978 to 1985, he was
a faculty member at Purdue and the University of Michigan, Ann Arbor. Dr. Lee
has authored or coauthored more than 40 technical papers and taught robotics short
courses at various conferences. His current interests include robotics and automa-
tion, and computer-integrated manufacturing systems. Dr. Lee has been doing
extensive consulting work for automotive and aerospace industries in robotics.
Preface xi
1. Introduction
1.1. Background
1.2. Historical Development
1.3. Robot Arm Kinematics and Dynamics
1.4. Manipulator Trajectory Planning and Motion Control 7
1.5. Robot Sensing 8
1.6. Robot Programming Languages 9
1.7. Machine Intelligence 10
1.8. References 10
2. Robot Arm Kinematics 12
2.1. Introduction 12
2.2. The Direct Kinematics Problem 13
2.3. The Inverse Kinematics Solution 52
2.4. Concluding Remarks 75
References 76
Problems 76
3. Robot Arm Dynamics 82
3.1. Introduction 82
3.2. Lagrange-Euler Formulation 84
3.3. Newton-Euler Formulation 103
3.4. Generalized D'Alembert Equations of Motion 124
3.5. Concluding Remarks 142
References 142
Problems 144
5.8. Adaptive Control
5.9. Concluding Remarks
References
Problems
6. Sensing
6.1. Introduction
6.2. Range Sensing
6.3. Proximity Sensing
6.4. Touch Sensors
6.5. Force and Torque Sensing
6.6. Concluding Remarks
References
Problems
7. Low-Level Vision
7.1. Introduction
7.2. Image Acquisition
7.3. Illumination Techniques
7.4. Imaging Geometry
7.5. Some Basic Relationships Between Pixels
7.6. Preprocessing
7.7. Concluding Remarks
References
Problems
8.6. Interpretation 439
8.7. Concluding Remarks 445
References 445
Problems 447
10.10. Expert Systems and Knowledge Engineering 516
10.11. Concluding Remarks 519
References 520
Appendix 522
A Vectors and Matrices 522
B Manipulator Jacobian 544
Bibliography 556
Index 571
PREFACE
This textbook was written to provide engineers, scientists, and students involved
in robotics and automation with a comprehensive, well-organized, and up-to-
date account of the basic principles underlying the design, analysis, and syn-
thesis of robotic systems.
The study and development of robot mechanisms can be traced to the
mid-1940s when master-slave manipulators were designed and fabricated at the
Oak Ridge and Argonne National Laboratories for handling radioactive materi-
als. The first commercial computer-controlled robot was introduced in the late
1950s by Unimation, Inc., and a number of industrial and experimental devices
followed suit during the next 15 years. In spite of the availability of this tech-
nology, however, widespread interest in robotics as a formal discipline of study
and research is rather recent, being motivated by a significant lag in produc-
tivity in most nations of the industrial world.
Robotics is an interdisciplinary field that ranges in scope from the design of
mechanical and electrical components to sensor technology, computer systems,
and artificial intelligence. The bulk of material dealing with robot theory,
design, and applications has been widely scattered in numerous technical jour-
nals, conference proceedings, research monographs, and some textbooks that
either focus attention on some specialized area of robotics or give a "broad-
brush" look of this field. Consequently, it is a rather difficult task, particularly
for a newcomer, to learn the range of principles underlying this subject matter.
This text attempts to put between the covers of one book the basic analytical
techniques and fundamental principles of robotics, and to organize them in a
unified and coherent manner. Thus, the present volume is intended to be of use
both as a textbook and as a reference work. To the student, it presents in a logi-
cal manner a discussion of basic theoretical concepts and important techniques.
For the practicing engineer or scientist, it provides a ready source of reference
in systematic form.
The mathematical level in all chapters is well within the grasp of seniors and
first-year graduate students in a technical discipline such as engineering and com-
puter science, which require introductory preparation in matrix theory, probability,
computer programming, and mathematical analysis. In presenting the material,
emphasis is placed on the development of fundamental results from basic concepts.
Numerous examples are worked out in the text to illustrate the discussion, and exer-
cises of various types and complexity are included at the end of each chapter. Some
of these problems allow the reader to gain further insight into the points discussed
in the text through practice in problem solution. Others serve as supplements and
extensions of the material in the book. For the instructor, a complete solutions
manual is available from the publisher.
This book is the outgrowth of lecture notes for courses taught by the authors at
Purdue University, the University of Tennessee, and the University of Michigan.
The material has been tested extensively in the classroom as well as through
numerous short courses presented by all three authors over a 5-year period. The
suggestions and criticisms of students in these courses had a significant influence in
the way the material is presented in this book.
We are indebted to a number of individuals who, directly or indirectly, assisted
in the preparation of the text. In particular, we wish to extend our appreciation to
Professors W. L. Green, G. N. Saridis, R. B. Kelley, J. Y. S. Luh, N. K. Loh, W.
T. Snyder, D. Brzakovic, E. G. Burdette, M. J. Chung, B. H. Lee, and to Dr. R. E.
Woods, Dr. Spivey Douglass, Dr. A. K. Bejczy, Dr. C. Day, Dr. F. King, and Dr.
L-W. Tsai. As is true with most projects carried out in a university environment,
our students over the past few years have influenced not only our thinking, but also
the topics covered in this book. The following individuals have also worked with us in
the preparation of the text: Ms. Susan Merrell, Ms. Denise Smiddy, Ms. Mary Bearden, Ms. Frances Bourdas, and
Ms. Mary Ann Pruder, who typed numerous versions of the manuscript. In addition,
we express our appreciation to the National Science Foundation, the Air Force
Office of Scientific Research, the Office of Naval Research, the Army Research
Office, Westinghouse, Martin Marietta Aerospace, Martin Marietta Energy Systems,
Union Carbide, Lockheed Missiles and Space Co., The Oak Ridge National Labora-
tory, and the University of Tennessee Measurement and Control Center for their
sponsorship of our research activities in robotics, computer vision, machine intelli-
gence, and related areas.
K. S. Fu
R. C. Gonzalez
C. S. G. Lee
CHAPTER
ONE
INTRODUCTION
1.1 BACKGROUND
With a pressing need for increased productivity and the delivery of end products of
uniform quality, industry is turning more and more toward computer-based auto-
mation. At the present time, most automated manufacturing tasks are carried out
by special-purpose machines designed to perform predetermined functions in a
manufacturing process. The inflexibility and generally high cost of these
machines, often called hard automation systems, have led to a broad-based interest
in the use of robots capable of performing a variety of manufacturing functions in a
more flexible working environment and at lower production costs.
The word robot originated from the Czech word robota, meaning work.
Webster's dictionary defines robot as "an automatic device that performs functions
ordinarily ascribed to human beings." With this definition, washing machines may
be considered robots. A definition used by the Robot Institute of America gives a
more precise description of industrial robots: "A robot is a reprogrammable,
multifunctional manipulator designed to move materials, parts, tools, or specialized
devices through variable programmed motions for the performance of a variety of
tasks." In short, a robot is a reprogrammable general-purpose manipulator with
external sensors that can perform various assembly tasks. With this
definition, a robot must possess intelligence, which is normally due to computer
algorithms associated with its sensing and control systems.
An industrial robot is a general-purpose, computer-controlled manipulator
consisting of several rigid links connected in series by revolute or prismatic joints;
motion of the joints results in relative motion of the links. Mechanically, a robot
is composed of an arm (or mainframe) and a wrist subassembly plus a tool. It is
[n'
designed to reach a workpiece located within its work volume. The work volume
is the sphere of influence of a robot whose arm can deliver the wrist subassembly
unit to any point within the sphere. The arm subassembly generally can move
with three degrees of freedom. The combination of the movements positions the
wrist unit at the workpiece. The wrist subassembly unit usually consists of three
rotary motions. The combination of these motions orients the tool according to the
configuration of the object for ease in pickup. These last three motions are often
called pitch, yaw, and roll. Hence, for a six jointed robot, the arm subassembly is
the positioning mechanism, while the wrist subassembly is the orientation mechan-
ism. These concepts are illustrated by the Cincinnati Milacron T3 robot and the
Unimation PUMA robot arm shown in Fig. 1.1.
Figure 1.1 (a) Cincinnati Milacron T3 robot arm. (b) PUMA 560 series robot arm.
Most commercially available industrial robots fall into one of four basic motion-defining categories (Fig. 1.2):
Cartesian coordinates (three linear axes) (e.g., IBM's RS-1 robot and the Sigma
robot from Olivetti)
Cylindrical coordinates (two linear and one rotary axes) (e.g., Versatran 600 robot
from Prab)
Spherical coordinates (one linear and two rotary axes) (e.g., Unimate 2000B from
Unimation Inc.)
Revolute or articulated coordinates (three rotary axes) (e.g., T3 from Cincinnati
Milacron and PUMA from Unimation Inc.)
Figure 1.2 Cartesian, cylindrical, spherical, and revolute robot arm categories.
1.2 HISTORICAL DEVELOPMENT

The word robot was introduced into the English language in 1921 by the play-
wright Karel Capek in his satirical drama, R.U.R. (Rossum's Universal Robots).
In this work, robots are machines that resemble people, but work tirelessly. Ini-
tially, the robots were manufactured for profit to replace human workers but,
toward the end, the robots turned against their creators, annihilating the entire
human race. Capek's play is largely responsible for some of the views popularly
held about robots to this day, including the perception of robots as humanlike
machines endowed with intelligence and individual personalities. This image was
reinforced by the 1926 German robot film Metropolis, by the walking robot Elec-
tro and his dog Sparko, displayed in 1939 at the New York World's Fair, and more
7."
recently by the robot C3PO featured in the 1977 film Star Wars. Modern indus-
trial robots certainly appear primitive when compared with the expectations created
by the communications media during the past six decades.
Early work leading to today's industrial robots can be traced to the period
immediately following World War II. During the late 1940s research programs
were started at the Oak Ridge and Argonne National Laboratories to develop
remotely controlled mechanical manipulators for handling radioactive materials.
These master-slave systems were designed so that the operator guided the master
unit through a sequence of motions while the slave manipulator
duplicated the master unit as closely as possible. Later, force feedback was added
CD"
by mechanically coupling the motion of the master and slave units so that the
operator could feel the forces as they developed between the slave manipulator and
its environment. In the mid-1950s the mechanical coupling was replaced by elec-
tric and hydraulic power in manipulators such as General Electric's Handyman and
the Minotaur I built by General Mills. Subsequent development of programmable
manipulators by George C. Devol and Joseph F.
Engelberger led to the first industrial robot, introduced by Unimation Inc. in 1959.
The key to this device was the use of a computer in conjunction with a manipula-
tor to produce a machine that could be "taught" to carry out a variety of tasks
automatically. Unlike hard automation machines, these robots could be repro-
grammed and retooled at relatively low cost to perform other jobs as manufacturing
requirements changed.
While programmed robots offered a novel and powerful manufacturing tool, it
became evident in the 1960s that the flexibility of these machines could be
enhanced significantly by the use of sensory feedback. Early in that decade, H. A.
Ernst [1962] reported the development of a computer-controlled mechanical hand
with tactile sensors. This device, called the MH-1, could "feel" blocks and use
this information to control the hand so that it stacked the blocks without operator
assistance. This work is one of the first examples of a robot capable of adaptive
behavior and an early use of a sensor-equipped manipulator in machine perception
research. During the same period, Tomovic and
Boni [1962] developed a prototype hand equipped with a pressure sensor which
sensed the object and supplied an input feedback signal to a motor to initiate one
of two grasp patterns. Once the hand was in contact with the object, information
proportional to object size and weight was sent to a computer by these pressure-
sensitive elements. In 1963, the American Machine and Foundry Company (AMF)
introduced the VERSATRAN commercial robot. Starting in this same year, vari-
ous arm designs for manipulators were developed, such as the Roehampton arm
and the Edinburgh arm.
In the late 1960s, McCarthy [1968] and his colleagues at the Stanford
Artificial Intelligence Laboratory reported development of a computer with hands,
eyes, and ears (i.e., manipulators, TV cameras, and microphones). They demon-
strated a system that recognized spoken messages, "saw" blocks scattered on a
2V"
"vim
.c?
table, and manipulated them in accordance with instructions. During this period,
Pieper [1968] studied the kinematic problem of a computer-controlled manipulator
while Kahn and Roth [1971] analyzed the dynamics and control of a restricted arm
using bang-bang (near minimum time) control.
Meanwhile, other countries (Japan in particular) began to see the potential of
industrial robots. As early as 1968, the Japanese company Kawasaki Heavy Indus-
tries negotiated a license with Unimation for its robots. One of the more unusual
developments in robots occurred in 1969, when an experimental walking truck was
developed by the General Electric Company for the U.S. Army. In the same year,
the Boston arm was developed, and in the following year the Stanford arm was
developed, which was equipped with a camera and computer controller. Some of
the most serious work in robotics began as these arms were used as robot manipu-
lators. One experiment with the Stanford arm consisted of automatically stacking
blocks according to various strategies. This was very sophisticated work for an
automated robot at that time. In 1974, Cincinnati Milacron introduced its first
computer-controlled industrial robot. Called "The Tomorrow Tool," or T3, it
could lift over 100 lb as well as track moving objects on an assembly line.
During the 1970s a great deal of research work focused on the use of external
sensors to facilitate manipulative operations. At Stanford, Bolles and Paul [1973],
using both visual and force feedback, demonstrated a computer-controlled Stanford
arm connected to a PDP-10 computer for assembling automotive water pumps. At
about the same time, Will and Grossman [1975] at IBM developed a computer-
controlled manipulator with touch and force sensors to perform mechanical assem-
bly tasks. Inoue [1974] at the MIT Artificial Intelligence
Laboratory worked on the artificial intelligence aspects of force feedback. A
landfall navigation search technique was used to perform initial positioning in a
precise assembly task. At the Draper Laboratory Nevins et al. [1974] investigated
sensing techniques based on compliance. This work developed into the instrumen-
tation of a passive compliance device called remote center compliance (RCC)
which was attached to the mounting plate of the last joint of the manipulator for
close parts-mating assembly. Bejczy [1974], at the Jet Propulsion Laboratory,
implemented a computer-based torque-control technique on an extended Stanford arm.
Today, we view robotics as a much broader field of work than we did just a
few years ago, dealing with research and development in a number of interdisci-
plinary areas, including kinematics, dynamics, planning systems, control, sensing,
programming languages, and machine intelligence. These topics, introduced
briefly in the following sections, constitute the core of the material in this book.
1.3 ROBOT ARM KINEMATICS AND DYNAMICS

Robot arm kinematics deals with the analytical study of the geometry of motion of
a robot arm with respect to a fixed reference coordinate system without regard to
the forces/moments that cause the motion. Thus, kinematics deals with the analyti-
cal description of the spatial displacement of the robot as a function of time, in
particular the relations between the joint-variable space and the position and orien-
tation of the end-effector of a robot arm.
There are two fundamental problems in robot arm kinematics. The first prob-
lem is usually referred to as the direct (or forward) kinematics problem, while the
second problem is the inverse kinematics (or arm solution) problem. Since the
independent variables in a robot arm are the joint variables, and a task is usually
stated in terms of the reference coordinate frame, the inverse kinematics problem
is used more frequently. Denavit and Hartenberg [1955] proposed a systematic
and generalized approach of utilizing matrix algebra to describe and represent the
spatial geometry of the links of a robot arm with respect to a fixed reference
coordinate frame. The
inverse kinematics problem can be solved by several techniques. The most com-
monly used methods are the matrix algebraic, iterative, or geometric approach.
Detailed treatments of direct kinematics and inverse kinematics problems are given
in Chap. 2.
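The following short Python sketch (not from the text; numpy and the two-link geometry are assumptions made here for illustration) shows the direct and inverse kinematics of a hypothetical planar two-link arm, the simplest setting in which both problems can be stated.

import numpy as np

def direct_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    # Forward problem: end-effector position from the joint angles.
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y, l1=1.0, l2=0.8):
    # Inverse problem: one (elbow) branch of the joint angles reaching (x, y).
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    theta2 = np.arccos(np.clip(c2, -1.0, 1.0))
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2), l1 + l2 * np.cos(theta2))
    return theta1, theta2

# Round trip: the inverse solution reproduces the commanded hand position.
t1, t2 = 0.3, 0.7
p = direct_kinematics(t1, t2)
print(np.allclose(direct_kinematics(*inverse_kinematics(*p)), p))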
Robot arm dynamics, on the other hand, deals with the mathematical formula-
tions of the equations of robot arm motion. The dynamic equations of motion of a
manipulator are a set of mathematical equations describing the dynamic behavior
of the manipulator. Such equations of motion are useful for computer simulation
of the robot arm motion, the design of suitable control equations for a robot arm,
and the evaluation of the kinematic design and structure of a robot arm. The
actual dynamic model of an arm can be obtained from known physical laws such
as the laws of newtonian and lagrangian mechanics. This leads to the development
of dynamic equations of motion for the various articulated joints of the manipulator
in terms of specified geometric and inertial parameters of the links. Conventional
approaches like the Lagrange-Euler and the Newton-Euler formulations can then be
applied systematically to develop the actual robot arm motion equations. Detailed
discussions of robot arm dynamics are presented in Chap. 3.
1.4 MANIPULATOR TRAJECTORY PLANNING AND MOTION CONTROL

With the knowledge of kinematics and dynamics of a serial link manipulator, one
would like to servo the manipulator's joint actuators to accomplish a desired task
by controlling the manipulator to follow a desired path. Before moving the robot
arm, it is of interest to know whether there are any obstacles present in the path
that the robot arm has to traverse (obstacle constraint) and whether the manipulator
hand needs to traverse along a specified path (path constraint). The control prob-
lem of a manipulator can be conveniently divided into two coherent
subproblems-the motion (or trajectory) planning subproblem and the motion con-
trol subproblem.
The space curve that the manipulator hand moves along from an initial loca-
tion (position and orientation) to the final location is called the path. The trajectory
planning (or trajectory planner) interpolates and/or approximates the desired path
by a class of polynomial functions and generates a sequence of time-based control
set points for the control of the manipulator from the initial location to its destination.
Motion control is then usually carried out in two distinct
phases. The first is the gross motion control in which the arm moves from an ini-
tial position/orientation to the vicinity of the desired target position/orientation
along a planned trajectory. The second is the fine motion control in which the
end-effector of the arm dynamically interacts with the object using sensory feed-
back information from the sensors to complete the task.
Current industrial approaches to robot arm control treat each joint of the robot
arm as a simple joint servomechanism. The servomechanism approach models the
varying dynamics of a manipulator inadequately because it neglects the motion and
configuration of the whole arm mechanism. These changes in the parameters of
the controlled system sometimes are significant enough to render conventional
feedback control strategies ineffective. The result is reduced servo response speed
and damping, limiting the precision and speed of the end-effector and making it
appropriate only for limited-precision tasks. Manipulators controlled in this
manner move at slow speeds with unnecessary vibrations. Any significant perfor-
mance gain in this and other areas of robot arm control requires the consideration
of more efficient dynamic models, sophisticated control approaches, and the use of
dedicated computer architectures and parallel processing techniques. Chapter 5
focuses on deriving gross motion control laws and strategies which utilize the
dynamic models discussed in Chap. 3 to efficiently control a manipulator.
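As a minimal sketch of the "simple joint servomechanism" viewpoint criticized above (not a method from the text; the gains, inertia, and time step are made-up values), the following Python fragment applies an independent-joint PD law to a single joint modeled as a constant inertia, exactly the simplification that ignores the configuration-dependent arm dynamics.

import numpy as np

def pd_joint_servo(q, qd, q_ref, kp=50.0, kv=10.0):
    # Torque from position and velocity errors only; no dynamic model of the arm.
    return kp * (q_ref - q) - kv * qd

I, dt = 0.5, 0.001          # assumed constant joint inertia and integration step
q, qd, q_ref = 0.0, 0.0, 1.0
for _ in range(5000):
    tau = pd_joint_servo(q, qd, q_ref)
    qdd = tau / I            # the real arm's inertia varies with configuration
    qd += qdd * dt
    q += qd * dt
print(round(q, 3))           # settles near the 1.0-rad set point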
1.5 ROBOT SENSING

The use of external sensing mechanisms allows a robot to interact with its environ-
ment in a flexible manner. This is in contrast to preprogrammed operations in
which a robot is "taught" to perform repetitive tasks via a set of programmed
functions. Although the latter is by far the most predominant form of operation of
present industrial robots, the use of sensing technology to endow machines with a
greater degree of intelligence in dealing with their environment is indeed an active
topic of research and development in the robotics field.
The function of robot sensors may be divided into two principal categories:
internal state and external state. Internal state sensors deal with the detection of
variables such as arm joint position, which are used for robot control. External
state sensors, on the other hand, deal with the detection of variables such as range,
proximity, and touch. External sensing, the topic of Chaps. 6 through 8, is used
for robot guidance, as well as for object identification and handling. The focus of
Chap. 6 is on range, proximity, touch, and force-torque sensing. Vision sensing
and techniques are discussed in Chaps. 7 and 8.
Robot vision may be defined as the process of extracting, characterizing, and inter-
preting information from images of a three-dimensional world. This process, also
commonly referred to as machine or computer vision, may be subdivided into six
principal areas: (1) sensing, (2) preprocessing, (3) segmentation, (4) description,
(5) recognition, and (6) interpretation. It is convenient to group these areas
according to the sophistication involved in their implementation. We consider three levels of processing:
low-, medium-, and high-level vision. While there are no clearcut boundaries
between these subdivisions, they do provide a useful framework for categorizing
the various processes that are inherent components of a machine vision system. In
our discussion, we shall treat sensing and preprocessing as low-level vision func-
tions. This will take us from the image formation process itself to compensations
such as noise reduction, and finally to the extraction of primitive image features
such as intensity discontinuities. We will associate with medium-level vision those
processes that extract, characterize, and label components in an image resulting
from low-level vision. In terms of our six subdivisions, we will treat segmenta-
tion, description, and recognition of individual objects as medium-level vision
functions. High-level vision refers to processes that attempt to emulate cognition.
The material in Chap. 7 deals with sensing, preprocessing, and with concepts and
techniques required to implement low-level vision functions. Topics in higher-level
vision are discussed in Chap. 8.
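The short Python sketch below (not from the text; numpy only, with made-up image size, kernel size, and threshold) illustrates the kind of low-level operations just described: simple neighborhood smoothing for noise reduction, followed by extraction of intensity discontinuities.

import numpy as np

def smooth(image, k=3):
    # Neighborhood averaging: a basic preprocessing (noise reduction) step.
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def gradient_magnitude(image):
    # First differences expose intensity discontinuities (primitive edge features).
    gx = np.zeros_like(image, dtype=float)
    gy = np.zeros_like(image, dtype=float)
    gx[:, :-1] = np.diff(image, axis=1)
    gy[:-1, :] = np.diff(image, axis=0)
    return np.hypot(gx, gy)

img = np.zeros((32, 32))
img[:, 16:] = 100.0                         # synthetic step edge
edges = gradient_magnitude(smooth(img)) > 10.0
print(edges.sum() > 0)                      # the vertical edge is detected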
1.6 ROBOT PROGRAMMING LANGUAGES

There are several ways to communicate with a robot, and the three major
approaches to achieve it are discrete word recognition, teach and playback, and
high-level programming languages. Current discrete word recognition systems are
quite primitive and generally speaker-dependent; they can recognize only a limited
vocabulary and usually require a pause between the words presented
for recognition.
..t
The method of teach and playback involves teaching the robot by leading it
through the motions to be performed. This is usually accomplished in the follow-
ing steps: (1) leading the robot in slow motion using manual control through the
entire assembly task, with the joint angles of the robot at appropriate locations
being recorded in order to replay the motion; (2) editing and playing back the
taught motion; and (3) if the taught motion is correct, then the robot is run at an
appropriate speed in a repetitive mode. This teach-and-playback method, also known
as guiding, is adequate for simple repetitive tasks such as
arc welding, spot welding, and paint spraying. These tasks require no interaction
between the robot and the environment and can be easily programmed by guiding.
However, the use of robots to perform assembly tasks generally requires high-level
programming techniques. This effort is warranted because the manipulator is usu-
ally controlled by a computer, and the most effective way for humans to communi-
cate with computers is through a high-level programing language. Furthermore,
using programs to describe assembly tasks allows a robot to perform different jobs
by simply executing the appropriate program. This increases the flexibility and
versatility of the robot. Chapter 9 discusses the use of high-level programming
languages.

1.7 MACHINE INTELLIGENCE

A basic problem in robotics is planning motions to solve some prespecified task.
In a typical formulation of this problem we have a robot that is equipped with sensors and a set of primitive actions
that it can perform in some easy-to-understand world. Robot actions change one
state, or configuration, of the world into another. In the "blocks world," for
example, we imagine a world of several labeled blocks resting on a table or on
each other and a robot consisting of a TV camera and a movable arm and hand
that is able to pick up and move blocks. In some situations, the robot is a mobile
vehicle with a TV camera that performs tasks such as pushing objects from place
to place.
In Chap. 10, we introduce several basic methods for problem solving and their
applications to robot planning. The discussion emphasizes the problem-solving or
planning aspect of a robot. A robot planner attempts to find a path from our ini-
tial robot world to a final robot world. The path consists of a sequence of opera-
tions that are considered primitive to the system. A solution to a problem could
be the basis of a corresponding sequence of physical actions in the physical world.
Robot planning, which provides the intelligence and problem-solving capability to a
robot system, is still an active area of research. For real-time robot applica-
tions, we still need powerful and efficient planning algorithms that will be executed
by high-speed special-purpose computer systems.
1.8 REFERENCES
The general references cited below are representative of publications dealing with
topics of interest in robotics and related fields. References given at the end of
later chapters are keyed to specific topics discussed in the text. The bibliography
at the end of the book is organized in alphabetical order by author, and it contains
all the pertinent information for each reference cited in the text.
Some of the major journals and conference proceedings that routinely contain
articles on various aspects of robotics include: IEEE Journal of Robotics and Auto-
mation; International Journal of Robotics Research; Journal of Robotic Systems;
Robotica; IEEE Transactions on Systems, Man and Cybernetics; Artificial Intelli-
gence; IEEE Transactions on Pattern Analysis and Machine Intelligence; Computer
Graphics, Vision, and Image Processing; Proceedings of the International Sympo-
sium on Industrial Robots; Proceedings of the International Joint Conference on
Artificial Intelligence; Proceedings of IEEE International Conference on Robotics
and Automation. Complementary reading for the material in this book may be found in the books by
Snyder [1985], Lee, Gonzalez, and Fu [1986], Tou [1985], and Craig [1986].
CHAPTER
TWO
ROBOT ARM KINEMATICS
2.1 INTRODUCTION
A mechanical manipulator can be modeled as an open-loop articulated chain with
several rigid bodies (links) connected in series by either revolute or prismatic
joints driven by actuators. One end of the chain is attached to a supporting base
while the other end is free and attached with a tool (the end-effector) to manipulate
objects or perform assembly tasks. The relative motion of the joints results in the
motion of the links that positions the hand in a desired orientation. In most
robotic applications, one is interested in the spatial description of the end-effector
fl"
space and the position and orientation of the end-effector of a robot arm. This
chapter addresses two fundamental questions of both theoretical and practical
interest in robot arm kinematics:
1. For a given manipulator, given the joint angle vector q(t) = (q1(t),
q2(t), . . . , qn(t))T and the geometric link parameters, where n is the number
of degrees of freedom, what is the position and orientation of the end-effector
of the manipulator with respect to a reference coordinate system?
2. Given a desired position and orientation of the end-effector of the manipulator
and the geometric link parameters with respect to a reference coordinate sys-
tem, can the manipulator reach the desired prescribed manipulator hand position
."T'
The first question is usually referred to as the direct (or forward) kinematics prob-
lem, while the second question is the inverse kinematics (or arm solution) problem.
Figure 2.1 The direct and inverse kinematics problems. Direct kinematics: given the joint
angles and link parameters, find the position and orientation of the end-effector. Inverse
kinematics: given the link parameters and the desired position and orientation of the
end-effector, find the joint angles.
Since the independent variables in a robot arm are the joint variables and a task is
usually stated in terms of the reference coordinate frame, the inverse kinematics
problem is used more frequently. A simple block diagram indicating the relation-
ship between these two problems is shown in Fig. 2.1.
Since the links of a robot arm may rotate and/or translate with respect to a
reference coordinate frame, the total spatial displacement of the end-effector is due
to the angular rotations and linear translations of the links. Denavit and Harten-
berg [1955] proposed a systematic and generalized approach of utilizing matrix
algebra to describe and represent the spatial geometry of the links of a robot arm
with respect to a fixed reference frame. The inverse kinematics problem can be
solved by several tech-
niques. Most commonly used methods are the matrix algebraic, iterative, or
geometric approaches. A geometric approach based on the lifxk coordinatd'systems
,0,
't7
C($
I Vectors are represented in lowercase bold letters; matrices are in uppercase bold.
2.2 THE DIRECT KINEMATICS PROBLEM

Vector and matrix algebra are utilized here to develop a systematic and generalized
approach to describe and represent the location of the links of a robot arm with
respect to a fixed reference frame. Since the links of a robot arm may rotate and/
or translate with respect to a reference coordinate frame, a body-attached coordi-
nate frame will be established along the joint axis for each link. The direct
kinematics problem is reduced to finding a transformation matrix that relates the
body-attached coordinate frame to the reference coordinate frame. A 3 x 3 rota-
tion matrix is used to describe the rotational operations of the body-attached frame
with respect to the reference frame. The homogeneous coordinates are then used
to represent position vectors in a three-dimensional space, and the rotation matrices
will be expanded to 4 x 4 homogeneous transformation matrices to include the
translational operations of the body-attached coordinate frames. This matrix
representation of a rigid mechanical link to describe the spatial geometry of a
robot-arm was first used by Denavit and Hartenberg [1955]. The advantage of
using the Denavit-Hartenberg representation of linkages is its algorithmic univer-
sality in deriving the kinematic equation of a robot arm.
A 3 x 3 rotation matrix can be defined as a transformation matrix that operates on a
position vector and maps its coordinates from a rotated (body-attached) coordinate
system OUVW into a reference coordinate system OXYZ. Figure 2.2 shows two rec-
tangular coordinate systems, namely, the OXYZ coordinate system with OX, OY,
and OZ as its coordinate axes and the OUVW coordinate system with OU, OV, and
OW as its coordinate axes. Both coordinate systems have their origins coincident at
point O. The OXYZ coordinate system is fixed in the three-dimensional space and
is considered to be the reference frame. The OUVW coordinate frame is rotating
with respect to the reference frame OXYZ. Physically, one can consider the OUVW
coordinate system to be a body-attached coordinate frame; that is, it is
Figure 2.2 Reference and body-attached coordinate systems.
permanently and conveniently attached to the rigid body (e.g., an aircraft or a link
of a robot arm) and moves together with it. Let (ix, jy, kz) and (iu, jv, kw) be
the unit vectors along the coordinate axes of the OXYZ and OUVW systems,
respectively. A point p in the space can be represented by its coordinates with
respect to both coordinate systems. For ease of discussion, we shall assume that p
is at rest and fixed with respect to the OUVW coordinate frame. Then the point p
can be represented by its coordinates with respect to the OUVW and OXYZ coordi-
nate systems, respectively, as

    puvw = (pu, pv, pw)T    and    pxyz = (px, py, pz)T     (2.2-1)

We are interested in finding a 3 x 3 transformation matrix R that will transform the
coordinates of puvw to the coordinates expressed with respect to the OXYZ coordinate
system after the OUVW coordinate system has been rotated; that is,

    pxyz = R puvw     (2.2-2)

Note that physically the point puvw has been rotated together with the OUVW coor-
dinate system.
Recalling the definition of the components of a vector, we have

    p = pu iu + pv jv + pw kw     (2.2-3)
where px, py, and pz represent the components of p along the OX, OY, and OZ
axes, respectively, or the projections of p onto the respective axes. Thus, using
the definition of a scalar product and Eq. (2.2-3),
    [px]   [ix·iu  ix·jv  ix·kw] [pu]
    [py] = [jy·iu  jy·jv  jy·kw] [pv]     (2.2-5)
    [pz]   [kz·iu  kz·jv  kz·kw] [pw]
    R = [ix·iu  ix·jv  ix·kw]
        [jy·iu  jy·jv  jy·kw]     (2.2-6)
        [kz·iu  kz·jv  kz·kw]
Similarly, one can obtain the coordinates of puvw from the coordinates of pxyz:
    puvw = Q pxyz     (2.2-7)
or
    [pu]   [iu·ix  iu·jy  iu·kz] [px]
    [pv] = [jv·ix  jv·jy  jv·kz] [py]     (2.2-8)
    [pw]   [kw·ix  kw·jy  kw·kz] [pz]
Since dot products are commutative, one can see from Eqs. (2.2-6) to (2.2-8)
that
Q = R-' = RT (2.2-9)
Of particular interest are
the rotation matrices that represent rotations of the OUVW coordinate system about
each of the three principal axes of the reference coordinate system OXYZ. If the
OUVW coordinate system is rotated an a angle about the OX axis to arrive at a
new location in the space, then the point puvw, having coordinates (pu, pv, pw)T
with respect to the OUVW system will have different coordinates (px, py, pz )T
with respect to the reference system OXYZ. The necessary transformation matrix
Rx,α is called the rotation matrix about the OX axis with α angle. Rx,α can be
derived from the above transformation matrix concept, that is
with ix ≡ iu, and

    Rx,α = [ix·iu  ix·jv  ix·kw]   [1     0        0   ]
           [jy·iu  jy·jv  jy·kw] = [0   cos α   -sin α ]     (2.2-12)
           [kz·iu  kz·jv  kz·kw]   [0   sin α    cos α ]
Similarly, the 3 x 3 rotation matrices for rotation about the OY axis with φ angle
and about the OZ axis with θ angle are, respectively (see Fig. 2.3),

    Ry,φ = [ cos φ   0   sin φ]        Rz,θ = [cos θ   -sin θ   0]
           [   0     1     0  ]               [sin θ    cos θ   0]     (2.2-13)
           [-sin φ   0   cos φ]               [  0        0     1]
The matrices Rx,a, Ry,,5, and RZ,0 are called the basic rotation matrices. Other
finite rotation matrices can be obtained from these matrices.
Example: Given two points auvw = (4, 3, 2)T and buvw = (6, 2, 4)T with
respect to the rotated OUVW coordinate system, determine the corresponding
points axyz, bxyz with respect to the reference coordinate system if it has been
rotated 60° about the OZ axis.

SOLUTION: axyz = Rz,60° auvw and bxyz = Rz,60° buvw

    axyz = [0.500  -0.866   0] [4]   [-0.598]
           [0.866   0.500   0] [3] = [ 4.964]
           [  0       0     1] [2]   [ 2.0  ]

    bxyz = [0.500  -0.866   0] [6]   [ 1.268]
           [0.866   0.500   0] [2] = [ 6.196]
           [  0       0     1] [4]   [ 4.0  ]

Thus, axyz and bxyz are equal to (-0.598, 4.964, 2.0)T and (1.268, 6.196, 4.0)T,
respectively.
Example: If axyz = (4, 3, 2)T and bxyz = (6, 2, 4)T are the coordinates
with respect to the reference coordinate system, determine the corresponding
points auvw, buvw with respect to the rotated OUVW coordinate system if it has
been rotated 60° about the OZ axis.

SOLUTION: auvw = (Rz,60°)T axyz and buvw = (Rz,60°)T bxyz

    auvw = [ 0.500   0.866   0] [4]   [4(0.5) + 3(0.866) + 2(0) ]   [ 4.598]
           [-0.866   0.500   0] [3] = [4(-0.866) + 3(0.5) + 2(0)] = [-1.964]
           [   0       0     1] [2]   [4(0) + 3(0) + 2(1)       ]   [ 2.0  ]

    buvw = [ 0.500   0.866   0] [6]   [ 4.732]
           [-0.866   0.500   0] [2] = [-4.196]
           [   0       0     1] [4]   [ 4.0  ]
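A minimal Python sketch (not from the text; numpy is assumed) that builds the basic rotation matrix Rz,θ, reproduces the two worked examples above, and checks the properties R⁻¹ = Rᵀ and det R = +1:

import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

Rz60 = rot_z(np.radians(60))
print(Rz60 @ np.array([4.0, 3.0, 2.0]))   # approx (-0.598, 4.964, 2.0)
print(Rz60 @ np.array([6.0, 2.0, 4.0]))   # approx ( 1.268, 6.196, 4.0)
# Coordinates in the rotated frame are recovered with the transpose, Eq. (2.2-9).
print(Rz60.T @ np.array([4.0, 3.0, 2.0])) # approx ( 4.598, -1.964, 2.0)
print(np.allclose(np.linalg.inv(Rz60), Rz60.T), np.isclose(np.linalg.det(Rz60), 1.0))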
Basic rotation matrices can be multiplied together to represent a sequence of finite
rotations about the principal axes of the OXYZ coordinate system. Since matrix
multiplications do not commute, the order or sequence of performing rotations is
important. For example, to develop a rotation matrix representing a rotation of α
angle about the OX axis followed by a rotation of θ angle about the OZ axis fol-
lowed by a rotation of φ angle about the OY axis, the resultant rotation matrix is

    R = Ry,φ Rz,θ Rx,α = [Cφ   0   Sφ] [Cθ  -Sθ   0] [1    0     0 ]
                         [0    1   0 ] [Sθ   Cθ   0] [0   Cα   -Sα]     (2.2-14)
                         [-Sφ  0   Cφ] [0    0    1] [0   Sα    Cα]

That is different from the rotation matrix which represents a rotation of φ angle
about the OY axis followed by a rotation of θ angle about the OZ axis fol-
lowed by a rotation of α angle about the OX axis. The resultant rotation matrix is:

    R = Rx,α Rz,θ Ry,φ = [1    0     0 ] [Cθ  -Sθ   0] [Cφ   0   Sφ]
                         [0   Cα   -Sα] [Sθ   Cθ   0] [0    1   0 ]     (2.2-15)
                         [0   Sα    Cα] [0    0    1] [-Sφ  0   Cφ]
In addition to rotating about the principal axes of the reference frame OXYZ,
the rotating coordinate system OUVW can also rotate about its own principal axes.
In this case, the resultant or composite rotation matrix may be obtained from the
following simple rules:
1. Initially both coordinate systems are coincident, hence the rotation matrix is a
   3 x 3 identity matrix, I3.
2. If the rotating coordinate system OUVW is rotating about one of the principal
   axes of the OXYZ frame, then premultiply the previous (resultant) rotation
   matrix by an appropriate basic rotation matrix.
3. If the rotating coordinate system OUVW is rotating about its own principal
   axes, then postmultiply the previous (resultant) rotation matrix by an
   appropriate basic rotation matrix.

Example: Find the resultant rotation matrix that represents a rotation of φ
angle about the OY axis followed by a rotation of θ angle about the OW axis
followed by a rotation of α angle about the OU axis.

SOLUTION:

    R = Ry,φ Rw,θ Ru,α = [Cφ   0   Sφ] [Cθ  -Sθ   0] [1    0     0 ]
                         [0    1   0 ] [Sθ   Cθ   0] [0   Cα   -Sα]
                         [-Sφ  0   Cφ] [0    0    1] [0   Sα    Cα]
Note that this example is chosen so that the resultant matrix is the same as Eq.
(2.2-14), but the sequence of rotations is different from the one that generates
Eq. (2.2-14).
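The following Python sketch (an illustration added here, with arbitrary angle values) checks numerically what the example asserts: the fixed-axis sequence of Eq. (2.2-14), built by premultiplication, and the moving-axis sequence of this example, built by postmultiplication, produce the same matrix, while reversing the fixed-axis order gives the different matrix of Eq. (2.2-15).

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

alpha, theta, phi = 0.3, 0.5, 0.7
# Fixed OXYZ axes: alpha about OX, then theta about OZ, then phi about OY (premultiply).
R_fixed = rot_y(phi) @ rot_z(theta) @ rot_x(alpha)
# Moving OUVW axes: phi about OY, then theta about OW, then alpha about OU (postmultiply).
R_moving = rot_y(phi) @ rot_z(theta) @ rot_x(alpha)
print(np.allclose(R_fixed, R_moving))                        # True
print(np.allclose(R_fixed, rot_x(alpha) @ rot_z(theta) @ rot_y(phi)))   # False: order matters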
Sometimes the rotating coordinate system OUVW may be required to rotate with φ
angle about an arbitrary axis r, a unit vector with components rx, ry, and rz passing
through the origin O. Such a rotation can be synthesized from rota-
tions about the principal axes of the OUVW and/or OXYZ coordinate frames. To
derive this rotation matrix Rr,o, we can first make some rotations about the princi-
pal axes of the OXYZ frame to align the axis r with the OZ axis. Then make the
rotation about the r axis with 0 angle and rotate about the principal axes of the
OXYZ frame again to return the r axis back to its original location. With refer-
ence to Fig. 2.4, aligning the OZ axis with the r axis can be done by rotating
about the OX axis with a angle (the axis r is in the XZ plane), followed by a rota-
tion of -a angle about the OY axis (the axis r now aligns with the OZ axis).
After the rotation of q5 angle about the OZ or r axis, reverse the above sequence
of rotations with their respective opposite angles. The resultant rotation matrix is
    Rr,φ = Rx,-α Ry,β Rz,φ Ry,-β Rx,α

         = [1    0     0 ] [Cβ   0   Sβ] [Cφ  -Sφ   0] [Cβ   0  -Sβ] [1    0     0 ]
           [0   Cα    Sα] [0    1   0 ] [Sφ   Cφ   0] [0    1   0 ] [0   Cα   -Sα]
           [0  -Sα    Cα] [-Sβ  0   Cβ] [0    0    1] [Sβ   0   Cβ] [0   Sα    Cα]
From Fig. 2.4, we easily find that

    sin α = ry / √(ry² + rz²)        cos α = rz / √(ry² + rz²)
    sin β = rx                       cos β = √(ry² + rz²)

Substituting into the above equation, we obtain

    Rr,φ = [rx² Vφ + Cφ         rx ry Vφ - rz Sφ     rx rz Vφ + ry Sφ]
           [rx ry Vφ + rz Sφ    ry² Vφ + Cφ          ry rz Vφ - rx Sφ]
           [rx rz Vφ - ry Sφ    ry rz Vφ + rx Sφ     rz² Vφ + Cφ     ]

where Vφ = vers φ = 1 - cos φ, Cφ = cos φ, and Sφ = sin φ.

Figure 2.4 Rotation about an arbitrary axis r, obtained by the sequence (1) Rx,α,
(2) Ry,-β, (3) Rz,φ, (4) Ry,β, (5) Rx,-α.
Example: Find the rotation matrix Rr,φ that represents the rotation of φ angle
about the vector r = (1, 1, 1) T.
SOLUTION: Since the vector r is not a unit vector, we need to normalize it and
find its components along the principal axes of the OXYZ frame. Therefore,
    rx = ry = rz = 1/√3

    Rr,φ = [1/3 Vφ + Cφ           1/3 Vφ - (1/√3) Sφ    1/3 Vφ + (1/√3) Sφ]
           [1/3 Vφ + (1/√3) Sφ    1/3 Vφ + Cφ           1/3 Vφ - (1/√3) Sφ]
           [1/3 Vφ - (1/√3) Sφ    1/3 Vφ + (1/√3) Sφ    1/3 Vφ + Cφ       ]
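As a numerical check (an added Python sketch, not part of the text; numpy and the sample angle are assumptions), the closed-form Rr,φ above can be compared against the five-rotation composition Rx,-α Ry,β Rz,φ Ry,-β Rx,α for the axis r = (1, 1, 1)T of this example.

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def rot_axis(r, phi):
    # Closed-form R_{r,phi} with V = 1 - cos(phi), as in the formula above.
    rx, ry, rz = r / np.linalg.norm(r)
    V, C, S = 1 - np.cos(phi), np.cos(phi), np.sin(phi)
    return np.array([[rx*rx*V + C,    rx*ry*V - rz*S, rx*rz*V + ry*S],
                     [rx*ry*V + rz*S, ry*ry*V + C,    ry*rz*V - rx*S],
                     [rx*rz*V - ry*S, ry*rz*V + rx*S, rz*rz*V + C]])

r, phi = np.array([1.0, 1.0, 1.0]), 0.8
rx, ry, rz = r / np.linalg.norm(r)
alpha = np.arctan2(ry, rz)                    # align r with OZ: alpha about OX ...
beta = np.arctan2(rx, np.hypot(ry, rz))       # ... then -beta about OY
R_seq = rot_x(-alpha) @ rot_y(beta) @ rot_z(phi) @ rot_y(-beta) @ rot_x(alpha)
print(np.allclose(rot_axis(r, phi), R_seq))   # True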
Matrix representation for the rotation of a rigid body simplifies many operations,
but it needs nine elements to completely describe the orientation of a rotating rigid
body. It does not lead directly to a complete set of generalized coordinates. Such
a set of generalized coordinates can describe the orientation of a rotating rigid
body with respect to a reference coordinate frame. They can be provided by three
angles called Euler angles 0, 0, and >G. Although Euler angles describe the orien-
tation of a rigid body with respect to a fixed reference frame, there are many
different types of Euler angle representations. The three most widely used Euler
angles representations are tabulated in Table 2.1.
The first Euler angle representation in Table 2.1 is usually associated with
gyroscopic motion. This representation is usually called the eulerian angles, and
corresponds to the following sequence of rotations (see Fig. 2.5):
1. A rotation of φ angle about the OZ axis
2. A rotation of θ angle about the rotated OU axis
3. Finally, a rotation of ψ angle about the rotated OW axis

Figure 2.5 Eulerian angles φ, θ, and ψ.

The resultant eulerian rotation matrix is
    Rφ,θ,ψ = [Cφ  -Sφ   0] [1    0     0 ] [Cψ  -Sψ   0]
             [Sφ   Cφ   0] [0   Cθ   -Sθ] [Sψ   Cψ   0]
             [0    0    1] [0   Sθ    Cθ] [0    0    1]

           = [CφCψ - SφCθSψ    -CφSψ - SφCθCψ     SφSθ]
             [SφCψ + CφCθSψ    -SφSψ + CφCθCψ    -CφSθ]     (2.2-17)
             [SθSψ              SθCψ               Cθ  ]
The above eulerian angle rotation matrix Rφ,θ,ψ can also be specified in terms
of the rotations about the principal axes of the reference coordinate system: a rota-
tion of ψ angle about the OZ axis followed by a rotation of θ angle about the OX
axis, and finally a rotation of φ angle about the OZ axis.
Another set of Euler angles φ, θ, and ψ representation corresponds to the
following sequence of rotations (see Fig. 2.6):
1. A rotation of φ angle about the OZ axis
2. A rotation of θ angle about the rotated OV axis
3. Finally, a rotation of ψ angle about the rotated OW axis
The resultant rotation matrix is
    Rφ,θ,ψ = [Cφ  -Sφ   0] [Cθ   0   Sθ] [Cψ  -Sψ   0]
             [Sφ   Cφ   0] [0    1   0 ] [Sψ   Cψ   0]
             [0    0    1] [-Sθ  0   Cθ] [0    0    1]

           = [CφCθCψ - SφSψ    -CφCθSψ - SφCψ     CφSθ]
             [SφCθCψ + CφSψ    -SφCθSψ + CφCψ     SφSθ]     (2.2-18)
             [-SθCψ             SθSψ               Cθ  ]
The above Euler angle rotation matrix Rφ,θ,ψ can also be specified in terms of
the rotations about the principal axes of the reference coordinate system: a rotation
of ψ angle about the OZ axis followed by a rotation of θ angle about the OY axis
and finally a rotation of φ angle about the OZ axis.
Another set of Euler angles representation for rotation is called roll, pitch, and
yaw (RPY). This is mainly used in aeronautical engineering in the analysis of
flight motion. It corresponds to the following sequence of rotations:
1. A rotation of ψ angle about the OX axis
2. A rotation of θ angle about the OY axis
3. A rotation of φ angle about the OZ axis
The resultant rotation matrix is

    Rφ,θ,ψ = [Cφ  -Sφ   0] [Cθ   0   Sθ] [1    0     0 ]
             [Sφ   Cφ   0] [0    1   0 ] [0   Cψ   -Sψ]
             [0    0    1] [-Sθ  0   Cθ] [0   Sψ    Cψ]

           = [CφCθ    CφSθSψ - SφCψ    CφSθCψ + SφSψ]
             [SφCθ    SφSθSψ + CφCψ    SφSθCψ - CφSψ]     (2.2-19)
             [-Sθ     CθSψ             CθCψ          ]
The above rotation matrix Rφ,θ,ψ for roll, pitch, and yaw can be specified in
terms of the rotations about the principal axes of the reference coordinate system
and the rotating coordinate system: a rotation of φ angle about the OZ axis fol-
lowed by a rotation of θ angle about the rotated OV axis and finally a rotation of ψ
angle about the rotated OU axis (see Fig. 2.7).
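A short Python sketch (an added illustration with arbitrary angle values; numpy is assumed) of the three conventions just described, each formed as a product of basic rotation matrices, with a check that every result is a proper rotation:

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

phi, theta, psi = 0.4, 0.6, 0.2
R_eulerian = rot_z(phi) @ rot_x(theta) @ rot_z(psi)   # eulerian angles (z, then rotated u, then rotated w)
R_euler    = rot_z(phi) @ rot_y(theta) @ rot_z(psi)   # second Euler angle set (z, then rotated v, then rotated w)
R_rpy      = rot_z(phi) @ rot_y(theta) @ rot_x(psi)   # roll, pitch, and yaw
for R in (R_eulerian, R_euler, R_rpy):
    # Each is orthonormal with determinant +1, but the three generally differ.
    print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))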
The columns of a rotation matrix give the directions of the principal axes of the rotated
frame with respect to the reference frame, and one can draw the location of all the principal
axes of the OUVW coordinate frame with respect to the reference frame. In other
words, a rotation matrix geometrically represents the principal axes of the rotated
coordinate system with respect to the reference coordinate system.
Since the inverse of a rotation matrix is equivalent to its transpose, the row
vectors of the rotation matrix represent the principal axes of the reference system
OXYZ with respect to the rotated coordinate system OUVW. This geometric
interpretation of the rotation matrices is an important concept that provides insight
into many robot arm kinematics problems. Several useful properties of rotation
matrices are listed as follows:
1. Each column vector of the rotation matrix is a representation of the rotated axis
unit vector expressed in terms of the axis unit vectors of the reference frame,
and each row vector is a representation of the axis unit vector of the reference
frame expressed in terms of the rotated axis unit vectors of the OUVW frame.
2. Since each row and column is a unit vector representation, the magnitude of
each row and column should be equal to 1. This is a direct property of ortho-
normal coordinate systems. Furthermore, the determinant of a rotation matrix
is +1 for a right-handed coordinate system.
3. The inner pro-
duct (dot product) of each row with each other row equals zero. Similarly, the
inner product of each column with each other column equals zero.
4. The inverse of a rotation matrix is the transpose of the rotation matrix.
R-1 = RT and RRT = I3
Properties 3 and 4 are especially useful in checking the results of rotation matrix
multiplications.
Example: If the OU, OV, and OW coordinate axes were rotated with a angle
about the OX axis, what would the representation of the coordinate axes of the
reference frame be in terms of the rotated coordinate system OUVW?
SOLUTION: The new coordinate axis unit vectors become iu = (1, 0, 0)T,
jv = (0, 1, 0)T, and kw = (0, 0, 1)T since they are expressed in terms of
the OUVW coordinate system.
Applying property 1 and considering these as rows of the rotation matrix, the
Rx,a matrix can be reconstructed as
    [1      0        0   ]
    [0    cos α    sin α ]
    [0   -sin α    cos α ]
A position vector augmented with a fourth (scale) component is said to be expressed
in homogeneous coordi-
nates. In this section, we use a "hat" (i.e., p) to indicate the representation of a
cartesian vector in homogeneous coordinates. Later, if no confusion exists, these
"hats" will be lifted. The concept of a homogeneous-coordinate representation of
points in a three-dimensional euclidean space is useful in developing matrix
transformations that include rotation, translation, scaling, and perspective transformation.
In general, a homogeneous transformation matrix is a 4 x 4 matrix that can be
partitioned into four submatrices:

    T = [ R3x3   p3x1 ]  =  [ rotation matrix              position vector ]     (2.2-20)
        [ f1x3   1x1  ]     [ perspective transformation   scaling         ]
The upper left 3 x 3 submatrix represents the rotation matrix; the upper right
3 x 1 submatrix represents the position vector of the origin of the rotated coordi-
nate system with respect to the reference system; the lower left 1 x 3 submatrix
represents perspective transformation; and the fourth diagonal element is the global
scaling factor. The homogeneous transformation matrix can be used to explain the
geometric relationship between the body attached frame OUVW and the reference
coordinate system OXYZ.
If a position vector p in a three-dimensional space is expressed in homogene-
ous coordinates [i.e., p = (px, py, pz, 1)T], then using the transformation matrix
concept, a 3 x 3 rotation matrix can be extended to a 4 x 4 homogeneous
transformation matrix Trot for pure rotation operations. Thus, Eqs. (2.2-12) and
(2.2-13), expressed as homogeneous rotation matrices, become
    Tx,α = [1      0        0      0]        Ty,φ = [ cos φ   0   sin φ   0]
           [0    cos α   -sin α   0]                [   0     1     0     0]
           [0    sin α    cos α   0]                [-sin φ   0   cos φ   0]
           [0      0        0      1]                [   0     0     0     1]

    Tz,θ = [cos θ   -sin θ   0   0]
           [sin θ    cos θ   0   0]     (2.2-21)
           [  0        0     1   0]
           [  0        0     0   1]
These 4 x 4 rotation matrices are called the basic homogeneous rotation matrices.
The upper right 3 x 1 submatrix of the homogeneous transformation matrix
has the effect of translating the OUVW coordinate system which has axes parallel
to the reference coordinate system OXYZ but whose origin is at (dx, dy, dz) of the
reference coordinate system:
    Ttran = [1   0   0   dx]
            [0   1   0   dy]     (2.2-22)
            [0   0   1   dz]
            [0   0   0   1 ]
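The following Python sketch (an added illustration; numpy and the sample angle and offsets are assumptions) builds the basic homogeneous rotation and translation matrices of Eqs. (2.2-21) and (2.2-22) and applies their product to a point expressed in homogeneous coordinates:

import numpy as np

def T_rot_x(a):
    # Basic homogeneous rotation about OX: rotation block plus zero translation.
    T = np.eye(4)
    c, s = np.cos(a), np.sin(a)
    T[:3, :3] = [[1, 0, 0], [0, c, -s], [0, s, c]]
    return T

def T_trans(dx, dy, dz):
    # Basic homogeneous translation: identity rotation plus position vector.
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

p_uvw = np.array([4.0, 3.0, 2.0, 1.0])            # point in homogeneous coordinates
T = T_trans(5, 0, -3) @ T_rot_x(np.radians(30))   # rotate about OX, then translate
print(T @ p_uvw)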
The lower left 1 x 3 submatrix of the homogeneous transformation matrix
represents perspective transformation, which is useful for computer vision and the
calibration of camera models, as discussed in Chap. 7. In the present discussion,
the elements of this submatrix are set to zero to indicate null perspective transfor-
mation.
The principal diagonal elements of a homogeneous transformation matrix pro-
duce local and global scaling. The first three diagonal elements produce local
stretching or scaling, as in
    [a   0   0   0] [x]   [ax]
    [0   b   0   0] [y] = [by]     (2.2-23)
    [0   0   c   0] [z]   [cz]
    [0   0   0   1] [1]   [1 ]
Thus, the coordinate values are stretched by the scalars a, b, and c, respectively.
Note that the basic rotation matrices, Trot, do not produce any local scaling effect.
The fourth diagonal element produces global scaling as in
    [1   0   0   0] [x]   [x]
    [0   1   0   0] [y] = [y]     (2.2-24)
    [0   0   1   0] [z]   [z]
    [0   0   0   s] [1]   [s]

so that the physical cartesian coordinates of the vector are

    x = px/s        y = py/s        z = pz/s        w = s/s = 1     (2.2-25)

Therefore, the fourth diagonal element
has the effect of globally reducing the coordinates if s > 1 and of enlarging the
coordinates if 0 < s < 1.
In summary, a 4 X 4 homogeneous transformation matrix maps a vector
expressed in homogeneous coordinates with respect to the OUVW coordinate sys-
tem to the reference coordinate system OXYZ. That is,

    pxyz = T puvw     (2.2-26a)

and

    T = [nx   sx   ax   px]
        [ny   sy   ay   py]  =  [n   s   a   p]     (2.2-26b)
        [nz   sz   az   pz]     [0   0   0   1]
        [0    0    0    1 ]
A homogeneous transformation matrix can be represented as in Eq. (2.2-26b). Let us choose a point p fixed in the OUVW
coordinate system and expressed in homogeneous coordinates as (0, 0, 0, 1) T; that
is, p,,ti.,,, is the origin of the OUVW coordinate system. Then the upper right 3 x 1
submatrix indicates the position of the origin of the OUVW frame with respect to
the OXYZ reference coordinate frame. Next, let us choose the point p to be
(1, 0, 0, 1)T, that is, the unit vector iu. Furthermore, we assume that the origins of both
coordinate systems coincide at a point 0. This has the effect of making the ele-
ments in the upper right 3 x 1 submatrix a null vector. Then the first column (or
n vector) of the homogeneous transformation matrix represents the coordinates of
the OU axis of OUVW with respect to the OXYZ coordinate system. Similarly,
choosing p to be (0, 1, 0, 1)T and (0, 0, 1, 1)T, one can identify that the
second-column (or s vector) and third-column (or a vector) elements of the homo-
geneous transformation matrix represent the OV and OW axes, respectively, of the
OUVW coordinate system with respect to the reference coordinate system. Thus,
the first three column vectors of the homogeneous transformation matrix represent
the principal axes of the OUVW coordinate system with
respect to the reference coordinate frame.
geneous transformation matrix represents the position of the origin of the OUVW
`3'
coordinate system with respect to the reference system. In other words, a homo-
geneous transformation matrix geometrically represents the location of a rotated
coordinate system (position and orientation) with respect to the reference coordinate
system.
Since the inverse of a rotation submatrix is equivalent to its transpose, the row
vectors of a rotation submatrix represent the principal axes of the reference coordi-
nate system with respect to the rotated coordinate system OUVW. However, the
inverse of a homogeneous transformation matrix is not equivalent to its transpose.
In general, the inverse of a homogeneous transformation matrix can be found to be

    T-1 = [nx   ny   nz   -nTp]     [              -nTp]
          [sx   sy   sz   -sTp]  =  [  R3x3T       -sTp]     (2.2-27)
          [ax   ay   az   -aTp]     [              -aTp]
          [0    0    0     1  ]     [0    0    0     1 ]
Thus, from Eq. (2.2-27), the column vectors of the inverse of a homogeneous
transformation matrix represent the principal axes of the reference system with
respect to the rotated coordinate system OUVW, and the upper right 3 x 1 subma-
trix represents the position of the origin of the reference frame with respect to the
OUVW system. This geometric interpretation of the homogeneous transformation
matrices is an important concept used frequently throughout this book.
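A small Python sketch (added for illustration; numpy and the sample matrix are assumptions) implementing the closed-form inverse of Eq. (2.2-27) and checking it against a general numerical inverse:

import numpy as np

def inverse_T(T):
    # Eq. (2.2-27): transpose the rotation block and carry the position vector across.
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p        # the rows give -n.p, -s.p, -a.p
    return Ti

c, s = np.cos(0.6), np.sin(0.6)
T = np.array([[c, -s, 0, 1.0],
              [s,  c, 0, 2.0],
              [0,  0, 1, 3.0],
              [0,  0, 0, 1.0]])
print(np.allclose(inverse_T(T), np.linalg.inv(T)))   # True
print(np.allclose(inverse_T(T) @ T, np.eye(4)))      # True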
Homogeneous rotation and translation matrices can be multiplied together to obtain
a composite homogeneous transformation matrix (the T matrix). Since matrix
multiplication is not commutative, the follow-
ing rules are useful for finding a composite homogeneous transformation matrix:
1. Initially both coordinate systems are coincident, hence the homogeneous
   transformation matrix is a 4 x 4 identity matrix, I4.
2. If the rotating frame OUVW is rotating or translating about the principal axes
   of the OXYZ frame, then premultiply the previous (resultant) homogeneous
   transformation matrix by an appropriate basic homogeneous rotation or
   translation matrix.
3. If the rotating frame OUVW is rotating or translating about its own principal
   axes, then postmultiply the previous (resultant) homogeneous transformation
   matrix by an appropriate basic homogeneous rotation or translation matrix.
Example: Two points auvw = (4, 3, 2)T and buvw = (6, 2, 4)T are to be
translated a distance + 5 units along the OX axis and - 3 units along the OZ
axis. Using the appropriate homogeneous transformation matrix, determine
the new points axyz and bxyz.
SOLUTION:

    axyz = [1   0   0    5] [4]   [ 9]        bxyz = [1   0   0    5] [6]   [11]
           [0   1   0    0] [3] = [ 3]               [0   1   0    0] [2] = [ 2]
           [0   0   1   -3] [2]   [-1]               [0   0   1   -3] [4]   [ 1]
           [0   0   0    1] [1]   [ 1]               [0   0   0    1] [1]   [ 1]

The translated points are axyz = (9, 3, -1)T and bxyz = (11, 2, 1)T.
Example: Find a homogeneous transformation matrix T that represents a rotation of α
angle about the OX axis, followed by a translation of b units along the rotated
OV axis.
SOLUTION: This problem can be tricky but illustrates some of the fundamental
components of the T matrix. Two approaches will be utilized, an unorthodox
approach which is illustrative, and the orthodox approach, which is simpler.
After the rotation Tx,α, the rotated OV axis is (in terms of the unit vectors
ix, jy, kz of the reference system) jv = cos α jy + sin α kz; i.e., column 2 of
Eq. (2.2-21). Thus, a translation along the rotated OV axis of b units is bjv =
b cos α jy + b sin α kz. So the T matrix is

    T = Tv,b Tx,α = [1   0   0      0   ] [1      0        0      0]
                    [0   1   0   b cos α] [0    cos α   -sin α   0]
                    [0   0   1   b sin α] [0    sin α    cos α   0]
                    [0   0   0      1   ] [0      0        0      1]

                  = [1      0        0         0   ]
                    [0    cos α   -sin α   b cos α]
                    [0    sin α    cos α   b sin α]
                    [0      0        0         1   ]
In the orthodox approach, following the rules as stated earlier, one should
realize that since the T.,,,, matrix will rotate the OY axis to the OV axis, then
translation along the OV axis will accomplish the same goal, that is,
    T = Tx,α Tv,b = [1      0        0      0] [1   0   0   0]
                    [0    cos α   -sin α   0] [0   1   0   b]
                    [0    sin α    cos α   0] [0   0   1   0]
                    [0      0        0      1] [0   0   0   1]

                  = [1      0        0         0   ]
                    [0    cos α   -sin α   b cos α]
                    [0    sin α    cos α   b sin α]
                    [0      0        0         1   ]
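The Python sketch below (an added check, with arbitrary values for α and b; numpy is assumed) confirms numerically that the two approaches of this example, translating along the rotated OV axis expressed in reference coordinates versus postmultiplying by a translation along OY, give the same T matrix.

import numpy as np

alpha, b = 0.5, 2.0
c, s = np.cos(alpha), np.sin(alpha)
T_rot = np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])
T_tr_y = np.eye(4)
T_tr_y[1, 3] = b
# Unorthodox approach: translate along the rotated OV axis, written in reference coordinates.
T_unorthodox = np.eye(4)
T_unorthodox[:3, 3] = [0, b * c, b * s]
T_unorthodox = T_unorthodox @ T_rot
# Orthodox approach: postmultiply by a translation along OY, which the rotation carries to OV.
T_orthodox = T_rot @ T_tr_y
print(np.allclose(T_unorthodox, T_orthodox))   # True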
Example: Find a homogeneous transformation matrix T that represents a rotation
of θ angle about the OZ axis, followed by a translation of d units along the OZ
axis, followed by a translation of a units along the OX axis, followed by a rotation
of α angle about the OX axis.

SOLUTION:

    T = Tz,θ Tz,d Tx,a Tx,α

      = [cos θ   -sin θ   0   0] [1   0   0   0] [1   0   0   a] [1      0        0      0]
        [sin θ    cos θ   0   0] [0   1   0   0] [0   1   0   0] [0    cos α   -sin α   0]
        [  0        0     1   0] [0   0   1   d] [0   0   1   0] [0    sin α    cos α   0]
        [  0        0     0   1] [0   0   0   1] [0   0   0   1] [0      0        0      1]

      = [cos θ   -sin θ cos α    sin θ sin α    a cos θ]
        [sin θ    cos θ cos α   -cos θ sin α    a sin θ]
        [  0        sin α          cos α           d   ]
        [  0          0              0             1   ]
A point pi expressed in the link i coordinate system can then be expressed in the
link i - 1 coordinate system as pi-1 = T pi, where
T = 4 x 4 homogeneous transformation matrix relating the two coordinate
systems
pi = 4 x 1 augmented position vector (xi, yi, zi, 1)T representing a point
in the link i coordinate system expressed in homogeneous coordinates
pi-1 = 4 x 1 augmented position vector (xi-1, yi-1, zi-1, 1)T
representing the same point pi in terms of the link i - 1 coordinate
system
A mechanical manipulator consists of a sequence of rigid bodies, called links, con-
nected by either revolute or prismatic joints (see Fig. 2.8). Each joint-link pair
constitutes 1 degree of freedom. Hence, for an N degree of freedom manipulator,
there are N joint-link pairs with link 0 (not considered part of the robot) attached
to a supporting base where an inertial coordinate frame is usually established for
this dynamic system, and the last link is attached with a tool. The joints and links
are numbered outwardly from the base; thus, joint 1 is the point of connection
between link 1 and the supporting base. Each link is connected to, at most, two
others.
In general, two links are connected by a lower pair joint which has two sur-
faces sliding over one another while remaining in contact. Only six different
lower-pair joints are possible: revolute (rotary), prismatic (sliding), cylindrical,
spherical, screw, and planar (see Fig. 2.9). Of these, only rotary and prismatic
joints are common in manipulators.
A joint axis (for joint i) is established at the connection of two links (see Fig.
2.10). This joint axis will have two normals connected to it, one for each of the
links. The relative position of two such connected links (link i - 1 and link i) is
given by di which is the distance measured along the joint axis between the nor-
mals. The joint angle Oi between the normals is measured in a plane normal to the
joint axis. Hence, di and Oi may be called the distance and the angle between the
5;.
links.
A link i (i = 1, ... , 6 ) is connected to, at most, two other links (e.g., link
i - 1 and link i + 1); thus, two joint axes are established at both ends of the con-
nection. The significance of links, from a kinematic perspective, is that they main-
tain a fixed configuration between their joints which can be characterized by two
Figure 2.9 The six lower pair joints: revolute, planar, cylindrical, prismatic, spherical, and screw.
parameters: ai and αi. The parameter ai is the shortest distance measured along
the common normal between the joint axes (i.e., the zi-1 and zi axes for joint i
and joint i + 1, respectively), and αi is the angle between the joint axes measured
in a plane perpendicular to ai. Thus, ai and αi may be called the length and the
twist angle of the link i, respectively. They determine the structure of link i.
In summary, four parameters, ai, αi, di, and θi, are associated with each link
of a manipulator. If a sign convention for each of these parameters has been esta-
.7"
blished, then these parameters constitute a sufficient set to completely determine
the kinematic configuration of each link of a robot arm. Note that these four
parameters come in pairs: the link parameters (ai, αi) which determine the struc-
ture of the link and the joint parameters (di, θi) which determine the relative posi-
tion of neighboring links.
Denavit and Hartenberg [1955] proposed a matrix method of systematically estab-
lishing a coordinate system (a body-attached frame) for each link of an articulated
chain. The Denavit-Hartenberg (D-H) representation results in a 4 x 4 homogene-
ous transformation matrix representing each link's coordinate system at the joint
with respect to the previous link's coordinate system. Thus, through sequential
transformations, the end-effector expressed in the "hand coordinates" can be
transformed and expressed in the "base coordinates" which make up the inertial
frame of this dynamic system.
An orthonormal cartesian coordinate system (xi, yi, zi)† can be established
for each link at its joint axis, where i = 1, 2, ... ,n (n = number of degrees of
freedom) plus the base coordinate frame. Since a rotary joint has only 1 degree of
freedom, each (xi, yi, zi) coordinate frame of a robot arm corresponds to joint
i + 1 and is fixed in link i. When the joint actuator activates joint i, link i will
'"r
move with respect to link i - 1. Since the ith coordinate system is fixed in link i,
C",
v>'
it moves together with the link i. Thus, the nth coordinate frame moves with the
hand (link n). The base coordinates are defined as the 0th coordinate frame (xo,
yo, zo) which is also the inertial coordinate frame of the robot arm. Thus, for a
six-axis PUMA-like robot arm, we have seven coordinate frames, namely,
(x0, y0, z0), (x1, y1, z1), . . . , (x6, y6, z6).
Every coordinate frame is determined and established on the basis of three
rules:
1. The zi-1 axis lies along the axis of motion of the ith joint.
2. The xi axis is normal to the zi-1 axis, and pointing away from it.
3. The yi axis completes the right-handed coordinate system as required.
By these rules, one is free to choose the location of coordinate frame 0 any-
where in the supporting base, as long as the zo axis lies along the axis of motion
of the first joint. The last coordinate frame (nth frame) can be placed anywhere in
the hand, as long as the xn axis is normal to the zn-1 axis.
The D-H representation of a rigid link depends on four geometric parameters
associated with each link. These four parameters completely describe any revolute
† (xi, yi, zi) actually represent the unit vectors along the principal axes of the coordinate frame i,
respectively, but are used here to denote the coordinate frame i.
a`7
ROBOT ARM KINEMATICS 37
or prismatic joint. Referring to Fig. 2.10, these four parameters are defined as
follows:
Oi is the joint angle from the xi_ I axis to the x1 axis about the zi -I axis (using the
right-hand rule).
s..
di is the distance from the origin of the (i -1)th coordinate frame to the intersec-
tion of the z, _ I axis with the xi axis along the zi -I axis.
ai is the offset distance from the intersection of the zi_ I axis with the xi axis to
OT".
the origin of the ith frame along the xi axis (or the shortest distance between
the zi _ I and zi axes).
ai is the offset angle from the zi -I axis to the zi axis about the xi axis (using the
+'9
right-hand rule).
Y6 (s)
Z6 (a)
y5
(Lift) xa Z3
en
Joint i ai a; d;
n.-
0i
I 0,_-90 -90 0 d,
2 02 = -90 90 0 d2
3 -90 0 0 d3
4 04 = 0 -90 0 0
5 05 = 0 90 0 0
6 06=0 0 0 d6
For a rotary joint, di, ai, and ai are the joint parameters and remain constant
i1.
for a robot, while Bi is the joint variable that changes when link i moves (or
rotates) with respect to link i - 1. For a prismatic joint, Bi, ai, and ai are the
..S
joint parameters and remain constant for a robot, while di is the joint variable.
For the remainder of this book, joint variable refers to Bi (or di), that is, the vary-
ing quantity, and joint parameters refer to the remaining three geometric constant
values (di, ai, ai) for a rotary joint, or (0i, ai, ai) for a prismatic joint.
With the above three basic rules for establishing an orthonormal coordinate
system for each link and the geometric interpretation of the joint and link parame-
ters, a procedure for establishing consistent orthonormal coordinate systems for a
robot is outlined in Algorithm 2.1. Examples of applying this algorithm to a six-
ROBOT ARM KINEMATICS 39
axis PUMA-like robot arm and a Stanford arm are given in Figs. 2.11 and 2.12,
respectively.
00"
link of the robot arm according to arm configurations similar to those of human
CD.
arm geometry. The labeling of the coordinate systems begins from the supporting
base to the end-effector of the robot arm. The relations between adjacent links can
be represented by a 4 x 4 homogeneous transformation matrix. 'The significance
of this assignment is that it will aid the development of a consistent procedure for
deriving the joint solution as discussed in the later sections. (Note that the assign-
ment of coordinate systems is not unique.)
D1. Establish the base coordinate system. Establish a right-handed orthonormal
coordinate system (xo, yo, zo) at the supporting base with the zo axis lying
along the axis of motion of joint 1 and pointing toward the shoulder of the
robot arm. The x0 and yo axes can be conveniently established and are nor-
mal to the zo axis.
D2. Initialize and loop. For each i, i = 1, ... , n - 1, perform steps D3 to D6.
D3. Establish joint axis. Align the zi with the axis of motion (rotary or sliding)
w.'
CAD
°a°
axes are pointing away from the shoulder and the "trunk" of the robot arm.
D4. Establish the origin of the ith coordinate system. Locate the origin of the ith
coordinate system at the intersection of the zi and zi _ i axes or at the inter-
section of common normal between the zi and zi_ I axes and the zi axis.
D5. Establish xi axis. Establish xi = t (zi_I x zi )/I1zi_i x zill or along the
common normal between the zi -I and zi axes when they are parallel.
D6. Establish yi axis. Assign yi = + (zi x xi)/Ilzi x xill to complete the
N.°
tem to the intersection of the zi_I axis and the xi axis along the zi_I axis. It
is the joint variable if joint i is prismatic.
D10. Find ai. ai is the distance from the intersection of the zi_i axis and the xi
axis to the origin of the ith coordinate system along the xi axis.
Dl l. Find 8i. 8i is the angle of rotation from the xi_ i axis to the xi axis about
the zi_ I axis. It is the joint variable if joint i is rotary.
D12. Find ai. ai is the angle of rotation from the zi_I axis to the zi axis about
the xi axis.
40 ROBOTICS. CONTROL, SENSING, VISION, AND INTELLIGENCE
Once the D-H coordinate system has been established for each link, a homo-
geneous transformation matrix can easily be developed relating the ith coordinate
frame to the (i - 1)th coordinate frame. Looking at Fig. 2.10, it is obvious that a
point ri expressed in the ith coordinate system may be expressed in the (i - 1)th
coordinate system as ri_ by performing the following successive transformations:
!'K
I
1. Rotate about the zi-I axis an angle of 01 to align the xi_I axis with the xi axis
,.p
(xi -I axis is parallel to xi and pointing in the same direction).
2. Translate along the zi- I axis a distance of di to bring the xi- I and xi axes into
coincidence.
3. Translate along the xi axis a distance of ai to bring the two origins as well as
the x axis into coincidence.
4. Rotate about the xi axis an angle of ai to bring the two coordinate systems into
's..
coincidence.
CAD
C'3
i- IAi, known as the D-H transformation matrix for adjacent coordinate frames, i
!-t
s"'
and i - 1. Thus,
- sin 0 0
'°_"i
1
0 0 0 cos 0j 0 1 0 0 ai 1 0 0 0
0 1 0 0 sin 0i cos O 0 0 0 1 0 0 0 cos ai - sin ai 0
0 0 1 di 0 0 1 0 0 0 1 0 0 sin ai cos ai 0
0 0 0 0 0 0
,--01
0 0 0 1 1 1 0 0 0 1
0 0 0 1
cos 0i sin Oi 0 - ai
i
- cos ai sin Oi cos ai cos 0i sin ai - di sin ai
['-'Ail-' = Ai-I =
sin ai sin Oi - sin ai cos 0i cos ai - di cos ai
0 0 0 1
(2.2-30)
where ai, ai, di are constants while 0i is the joint variable for a revolute joint.
For a prismatic joint, the joint variable is di, while ai, ai, and 0i are con-
CAD
0 0 0 1
(2.2-32)
Using the `-IAi matrix, one can relate a point pi at rest in link i, and expressed in
.ate
The six i-IAi transformation matrices for the six-axis PUMA robot arm have
'>~
f~/)
been found on the basis of the coordinate systems established in Fig. 2.11. These
`- IAi matrices are listed in Fig. 2.13.
xi Yi Zi Pi °R. °Pi
d^.
(2.2-34)
0 0 0 1 0 1 i
where
[xi, yi, zi ] = orientation matrix of the ith coordinate system established at link
i with respect to the base coordinate system. It is the upper left
3 x 3 partitioned matrix of °Ti.
M,C
pi = position vector which points from the origin of the base coordi-
nate system to the origin of the ith coordinate system. It is the
upper right 3 X 1 partitioned matrix of °Ti.
42 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
C
C, 0 - S1 0 C2 - S2 0 a2 C2
N('
Sl 0 C1 0 S2 C2 0 a2 S2
°Al = 1A2 =
0 -1 0 0 0 0 1 d2
0 0 0 1 L 0 0 0 1
C3 0 S3 a3 C3 C4 0 - S4 0
S3 0 - C3 a3 S3 S4 0 C4 0
2A3 =
0 1 0 0 3A4 = 0 -1 0 d4
0 0 0 1 0 0 1
C5 0 S5 0 C6 - S6 0 0
S5 0 - C5 0 S6 C6 0 0
a SA6 =
A5 = 0 1 0 0 0 0 1 d6
0 0 0 1 0 0 0 1
C4 C5 C6 - S4 S6 - C4 C5 S6 - S4 C6 C4 S5 d6 C4 S5
C/]
S4 C5 C6 + C4 S6 - S4 C5 S6 + C4 C6 S4 S5 d6 S4 S5
3 A4 4 5A6
T2 = A5 = - S5 C6 S5 S6 C5 d6 C5 + d4
0 - 0 0 1
.r+
where C, = cos 8,; Si = sin 8;; Cii = cos (8, + 8j); S11 = sin (8, + 8j).
III
III
U+"
Specifically, for i = 6, we obtain the T matrix, T = °A6, which specifies the posi-
tion and orientation of the endpoint of the manipulator with respect to the base
coordinate system. This T matrix is used so frequently in robot arm kinematics
that it is called the "arm matrix." Consider the T matrix to be of the form
ROBOT ARM KINEMATICS 43
arc
X6 Y6 Z6 P6 °R6 °P6 n s a p
T =
0 0 0 1 0 1 10 0 0 1
I
nx sx ax Px
ny sy ay Py
nz (2.2-35)
s2 az Pz
0 0 0 1
tion B and has a tool attached to its last joint's mounting plate described by H,
then the endpoint of the tool can be related to the reference coordinate frame by
multiplying the matrices B, °T6, and H together as
refTcool
= B °T6 H (2.2-36)
f3,
matter of calculating T = °A6 by chain multiplying the six '-IA; matrices and
a..
evaluating each element in the T matrix. Note that the direct kinematics solution
aCD
yields a unique T matrix for a given q = (q, , q2 , ... , q6 )T and a given set of
coordinate systems, where q; = 8; for a rotary joint and q; = d; for a prismatic
joint. The only constraints are the physical bounds of B; for each joint of the robot
CAD
arm. The table in Fig. 2.11 lists the joint constraints of a PUMA 560 series robot
based on the coordinate system assigned in Fig. 2.11.
Having obtained all the coordinate transformation matrices '-1A; for a robot
'_»
arm, the next task is to find an efficient method to compute T on a general purpose
f3,
digital computer. The most efficient method is by multiplying all six '- 'AI
matrices together manually and evaluating the elements of T matrix out explicitly
coo
p,-
i..
to multiply all six '-1A; matrices together, and (2) the arm matrix is applicable
..S
only to a particular robot for a specific set of coordinate systems (it is not flexible
enough). On the other extreme, one can input all six '- 'A; matrices and let the
computer do the multiplication. This method is very flexible but at the expense of
r-.
computation time as the fourth row of '- 1A; consists mostly of zero elements.
'?»
A method that has both fast computation and flexibility is to "hand" multiply
the first three '- IA; matrices together to form T, = °A, 'A2 2A3 and also the last
three '- 'A; matrices together to form T2 = 3A4 4A5 5A6, which is a fairly straight-
forward task. Then, we express the elements of T, and T2 out in a computer pro-
gram explicitly and let the computer multiply them together to form the resultant
..t
2
TI = OA3
A3 = A, Az AS
I
T2 = 3A6 = 3A44A55A6
C4 C5 C6 - S4 S6 - C4 C5 S6 - S4 C6 C4 S5 d6 C4 S5
S4 C5 C6 + C4 S6 - S4 C5 S6 + C4 C6 S4 S5 d6 S4 S5
- S5 C6 S5 S6 C5 d6 C5 + d4
L 0 0 0 1
(2.2-38)
The arm matrix T for the PUMA robot arm shown in Fig. 2.11 is found to be
Px I
Py
T = T1 T2 = °A1 1A22A33A44A55A6 = (2.2-39)
Pz
1
where
nz = - S23 [ C4 C5 C6 - S4 S6 I - C23 S5 C6
az = - S23 C4 S5 + C23 C5
0 -1 0 -149.09
T = 0 0 1 921.12
-1 0 0 20.32
0 0 0 1
length of the terminal device, then d6 = 0 and the new tool length will be
increased by d6 unit. This reduces the computation to 12 transcendental function
calls, 35 multiplications, and 16 additions.
Example: A robot work station has been set up with a TV camera (see the
figure). The camera can see the origin of the base coordinate system where a
,.O
six joint robot is attached. It can also see the center of an object (assumed to
be a cube) to be manipulated by the robot. If a local coordinate system has
been established at the center of the cube, this object as seen by the camera
can be represented by a homogeneous transformation matrix T1. If the origin
of the base coordinate system as seen by the camera can also be expressed by
a homogeneous transformation matrix T, and
.r.
0 1 0 1 1 0 0 -10
1 0 0 10 0 -1 0 20
T1 = T2 =
0 0 -1 9 0 0 -1 10
0 0 0 1 0 0 0 1
(a) What is the position of the center of the cube with respect to the base
coordinate system?
(b) Assume that the cube is within the arm's reach. What is the orientation
matrix [n, s, a] if you want the gripper (or finger) of the hand to be
CAD
aligned with the y axis of the object and at the same time pick up the
object from the top?
ROBOT ARM KINEMATICS 47
SOLUTION:
0 1 0 1
cameral
cube =T= I
1 0 0 10
0 0 -1 9
0 0 0 1
and
1 0 0 -10
camera
= T2 = 0 -1 0 20
lbase -
0 0 -1 10
0 0 0 1
1 0 0 10 0 1 0 1
baser
cube -
- 0 -1 0 20 1 0 0 10
0 0 -1 10 0 0 -1 9
0 0 0 1 0 0 0 1
0 1 0 11
-1 0 0 10
0 0 1 1
0 0 0 1
Therefore, the cube is at location (11, 10, 1)T from the base coordinate sys-
.-v
tem. Its x, y, and z axes are parallel to the - y, x, and z axes of the base
coordinate system, respectively.
To find [ n, s, a], we make use of
n s a p
0 0 0 1
where p = (11, 10, 1)7'from the above solution. From the above figure, we
want the approach vector a to align with the negative direction of the OZ axis
of the base coordinate system [i.e., a = (0, 0, -1)T]; the s vector can be
aligned in either direction of the y axis of base Tcabe [i.e., s = (± 1, 0, 0)T];
48 ROBOTICS. CONTROL, SENSING, VISION, AND INTELLIGENCE
and the n vector can be obtained from the cross product of s and a:
i j k 0
n = t1 0 0 t1
0 0 -1 0
0 1 01
[n,s,a] _ +1 0 0 or -1 0 0
0 0 -1 0 0 -1
Another advantage of using Euler angle representation for the orientation is that
the storage for the position and orientation of an object is reduced to a six-element
vector XYZZO '. From this vector, one can construct the arm matrix °T6 by Eq.
(2.2-44).
Roll, Pitch, and Yaw Representation for Orientation. Another set of Euler
angle representation for rotation is roll, pitch, and yaw (RPY). Again, using Eq.
ROBOT ARM KINEMATICS 49
(2.2-19), the rotation matrix representing roll, pitch, and yaw can be used to obtain
the arm matrix °T6 as:
0 0 0 1
1 0 0 PX 0
°R6 0
0 1 0 Py
°T6 - (2.2-46)
0 0 1 Pz 0
0 0 0 1 0 0 0 1
where °R6 = rotation matrix expressed in either Euler angles or [n, s, a] or roll,
pitch, and yaw.
1 0 0 0 Ca - Sa 0 0
0 1 0 0 Sa Ca 0 0
Tcylindrical = Tz, d Tz, c, Tx, r= 0 0 1 d 0 0 1 0
0 0 0 1 0 0 0 1
1 0 0 r Ca - Sa 0 rCa
0 1 0 0 Sa Ca 0 rSa
x
d
r-+
'ZS
'--.
0 0 1 0 0 0 1 (2.2-47)
0 0 0 1 0 0 0 1
Since we are only interested in the position vectors (i.e., the fourth column of
Tcylindrical), the arm matrix OT6 can be obtained utilizing Eq. (2.2-46).
0
°R6 0
°T6 - (2.2-48)
0
0 0 0 1
spherical coordinate system for specifying the position of the end-effector. This
--h
.X.
r-,
Ca - Sa 0 0 Co 0 S3 0
Sa Ca 0 0 0 1 0 0
III
Tsph = Tz, a Ry, 0 Tz, r =
0 0 1 0 -so 0 co 0
0 0 0 1 0 0 0 1
1 0 0 0 Cu CQ - Sa CaS(3 rCaSf3
0 0 0 1 0 0 0 1
Again, our interest is the position vector with respect to the base coordinate sys-
'C3
tem, therefore, the arm matrix °T6 whose position vector is expressed in spherical
coordinates and the orientation matrix is expressed in [n, s, a] or Euler angles or
roll, pitch, and yaw can be obtained by:
1 0 0 rCaS(3
0 1 0 rSaS(3 °R6
°T6 = (2.2-50)
0 0 1 rC(3
0 0 0 1
III
In summary, there are several methods (or coordinate systems) that one can
choose to describe the position and orientation of the end-effector. For position-
-',",
ing, the position vector can be expressed in cartesian (ps, p y, pZ )T, cylindrical
(rCa, rsa, d)T, or spherical (rCaS/3, rSaS(3, rC0 )T terms. For describing the
orientation of the end-effector with respect to the base coordinate system, we have
,a_
cartesian [n, s, a], Euler angles (0, 0, 1G), and (roll, pitch, and yaw). The result of
the above discussion is tabulated in Table 2.2.
Positioning Orientation
0
[n, s, a ] or R06 , yr 0
Tposition - Trot = 0
0 0 0 1',
,..
(the joints) have only 1 degree of freedom. With this restriction, two types of
mot,
joints are of interest: revolute (or rotary) and prismatic. A revolute joint only per-
mits rotation about an axis, while the prismatic joint allows sliding along an axis
with no rotation (sliding with rotation is called a screw joint). These links are
T.'
connected and powered in such a way that they are forced to move relative to one
cue
another in order to position the end-effector (a hand or tool) in a particular posi-
tion and orientation.
Hence, a manipulator, considered to be a combination of links and joints, with
the first link connected to ground and the last link containing the "hand," may be
>,3
classified by the type of joints and their order (from the base to the hand). With
this convention, the PUMA robot arm may be classified as 6R and the Stanford
'ti
This section addresses the second problem of robot arm kinematics: the inverse
kinematics or arm solution for a six-joint manipulator. Computer-based robots are
44-
(qI, q2, q3, q4, q5, q6 )T of the robot so that the end-effector can be positioned as
desired.
CAD
Ziegler [1984]). Pieper [1968] presented the kinematics solution for any 6 degree
of freedom manipulator which has revolute or prismatic pairs for the first three
joints and the joint axes of the last three joints intersect at a point. The solution
can be expressed as a fourth-degree polynomial in one unknown, and closed form
solution for the remaining unknowns. Paul et al. [1981] presented an inverse
transform technique using the 4 x 4 homogeneous transformation matrices in solv-
ing the kinematics solution for the same class of simple manipulators as discussed
by Pieper. Although the resulting solution is correct, it suffers from the fact that
the solution does not give a clear indication on how to select an appropriate solu-
tion from the several possible solutions for a particular arm configuration. The
ROBOT ARM KINEMATICS 53
user often needs to rely on his or her intuition to pick the right answer. We shall
discuss Pieper's approach in solving the inverse solution for Euler angles. Uicker
ova
0C'
et al. [1964] and Milenkovic and Huang [19831 presented iterative solutions for
°.r
most industrial robots. The iterative solution often requires more computation and
it does not guarantee convergence to the correct solution, especially in the singular
^-t
and degenerate cases. Furthermore, as with the inverse transform technique, there
,.t
is no indication on how to choose the correct solution for a particular arm
configuration.
It is desirable to find a closed-form arm solution for manipulators. For-
L:.
tunately, most of the commercial robots have either one of the following sufficient
conditions which make the closed-form arm solution possible:
Both PUMA and Stanford robot arms satisfy the first condition while ASEA and
4'b
MINIMOVER robot arms satisfy the second condition for finding the closed-form
solution.
From Eq. (2.2-39), we have the arm transformation matrix given as
r nr s,r ax Px
ny sY ay py,
T6 = = °AI'A22A33A44A55A6 (2.3-1)
nz sz az Pz
0 0 0 1
The above equation indicates that the arm matrix T is a function of sine and cosine
of 0 1 , 01 , ... , 06. For example, for a PUMA robot arm, equating the elements
of the matrix equations as in Eqs. (2.2-40) to (2.2-43), we have twelve equations
with six unknowns (joint angles) and these equations involve complex tri-
gonometric functions. Since we have more equations than unknowns, one can
immediately conclude that multiple solutions exist for a PUMA-like robot arm.
We shall explore two methods for finding the inverse solution: inverse transform
technique for finding Euler angles solution, which can also be used to find the joint
solution of a PUMA-like robot arm, and a geometric approach which provides
more insight into solving simple manipulators with rotary joints.
nx sx ax
ny sy ay = Rz, 4 R11, o
nz sz az
54 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
(2.3-2)
we would like to find the corresponding value of c1, 0, > . Equating the elements of
the above matrix equation, we have:
nZ = Sos> (2.3-3c)
sz = SOC>G (2.3-3f)
aX = SOSO (2.3-3g)
ay = -CgSO (2.3-3h)
az = co (2.3-3i)
Using Eqs. (2.3-3i), (2.3-3f), and (2.3-3h), a solution to the above nine equations
.fl
is:
r sz
-11
-
>/i = cos'
N
(2.3-5)
So
(2.3-6)
S oy
1. The arc cosine function does not behave well as its accuracy in determining the
angle is dependent on the angle. That is, cos (0) = cos (- 0).
2. When sin (0) approaches zero, that is, 0 = 0° or 0 r t 180°, Eqs. (2.3-5)
and (2.3-6) give inaccurate solutions or are undefined.
r-.
angle solution and a more consistent arc trigonometric function in evaluating the
ROBOT ARM KINEMATICS 55
angle solution. In order to evaluate 0 for - 7r < 0 < 7r, an arc tangent function,
atan2 (y, x), which returns tan-'(y/x) adjusted to the proper quadrant will be
used. It is defined as:
for +x and + y
0
0° 0 < 90 °
for -x and + y
0
90- 0 180-
0 = atan2 (y, x) _ -180 ° 0 -90' for -x and - y (2.3-7)
Using the arc tangent function (atan2) with two arguments, we shall take a look at
a general solution proposed by Paul et al. [1981].
From the matrix equation in Eq. (2.3-2), the elements of the matrix on the left
hand side (LHS) of the matrix equation are given, while the elements of the three
matrices on the right-hand side (RHS) are unknown and they are dependent on
0, 0, >G. Paul et al. [1981] suggest premultiplying the above matrix equation by its
unknown inverse transforms successively and from the elements of the resultant
matrix equation determine the unknown angle. That is, we move one unknown (by
COD
its inverse transform) from the RHS of the matrix equation to the LHS and solve
for the unknown, then move the next unknown to the LHS, and repeat the process
until all the unknowns are solved.
Premultiplying the above matrix equation by RZj1, we have one unknown
on the LHS and two unknowns (0, >') on the RHS of the matrix equation, thus we
have
or
(2.3-8)
which gives
aX
0 = tan- = atan2(a, - ay) (2.3-10)
56 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Equating the (1, 1) and (1, 2) elements of the both matrices, we have:
vac
W Cq s, - SOS" (2.3-11b)
Equating the (2, 3) and (3, 3) elements of the both matrices, we have:
SO = Sg ax - Ccba,
CO = az (2.3-13)
Scbax - Ccba y.
6=tan-1 = tan - = atan2 (SOa,, - CCa y, az)
8 a
J
(2.3-14)
Since the concept of inverse transform technique is to move one unknown to
Q4-
the LHS of the matrix equation at a time and solve for the unknown, we can try to
solve the above matrix equation for cp, 0, 1 by postmultiplying the above matrix
equation by its inverse transform R,v, 1
Ci so 0 Co - So 0 1 0 0
- So co 0 so Co 0 0 co - S6
0 0 1 0 0 1 0 S6 co
CD'
Again equating the (3, 1) elements of both matrices in the above matrix equation,
we have
which gives
00
SO = nZSI/i + szCt/i (2.3-18a)
CO = aZ (2.3-18b)
nZS>/i + sZCt/i 1
0=tan ' = atan2 (nzSt' + szCtk, az) (2.3-19)
I. az
which gives
nyCo-syS
0 = tan - I
nxCi/i - sxSi/i J
O (orientation) is the angle formed from the yo axis to the projection of the tool a
axis on the XY plane about the zo axis.
A (altitude) is the angle formed from the XY plane to the tool a axis about the s
axis of the tool.
T (tool) is the angle formed from the XY plane to the tool s axis about the a axis
of the tool.
Initially the tool coordinate system (or the hand coordinate system) is aligned
with the base coordinate system of the robot as shown in Fig. 2.18. That is, when
O = A = T = 0 °, the hand points in the negative yo axis with the fingers in a
horizontal plane, and the s axis is pointing to the positive x0 axis. The necessary
58 ROBOTICS- CONTROL, SENSING, VISION, AND INTELLIGENCE
0, a measurement of the
angle formed between
the WORLD Y axis and
a projection of the
TOOL Z on the WORLD
XY plane
Figure 2.17 Definition of Euler angles 0, A, and T. (Taken from PUMA robot manual
398H.)
transform that describes the orientation of the hand coordinate system (n, s, a)
with respect to the base coordinate system (xo, yo, zo) is given by
0 1 0
0 0 - 1 (2.3-22)
-1 0 0
ROBOT ARM KINEMATICS 59
zo
ai Yo
xo
From the definition of the OAT angles and the initial alignment matrix [Eq. (2.3-
22)], the relationship between the hand transform and the OAT angle is given by
7
nx sX ax 0 1 0
ny sy ay RZ. o 0 0 -1 RS, A Ra, T
nZ sZ aZ -1 0 0
j
r-,
CO - SO 0 0 1 0 CA 0 SA CT - ST 0
SO CO 0 0 0 -1 0 1 0 ST CT 0
0 0 1 -1 0 0 - SA 0 CA 0 0 1
CT ST 0 CO - SO 0 0 1 0
- ST CT 0 SO CO 0 0 0 -1
t`'
0 0 1 0 0 1 -1 0 0
CA 0 SA
x 0 1 0
- SA 0 CA
S
T = tan-' = atan2 (s, -nt) (2.3-25)
- nZ
Equating the (3, 1) and (3, 3) elements of the both matrices, we have:
SA = -as (2.3-26a)
and
- az
A = tan - I = atan2 ( -as, -nZCT + sZST) (2.3-27)
-nzCT + sZST J
Equating the (1, 2) and (2, 2) elements of the both matrices, we have:
n>,ST + syCT
n,ST + sECT
PUMA robot arm joint solution can be found in Paul et al. [1981].
Although the inverse transform technique provides a general approach in
determining the joint solution of a manipulator, it does not give a clear indication
-Q.
on how to select an appropriate solution from the several possible solutions for a
particular arm configuration. This has to rely on the user's geometric intuition.
Thus, a geometric approach is more useful in deriving a consistent joint-angle
solution given the arm matrix as in Eq. (2.2-39), and it provides a means for the
user to select a unique solution for a particular arm configuration. This approach
is presented in Sec. 2.3.2.
v-.
identified with the assistance of three configuration indicators (ARM, ELBOW, and
WRIST)-two associated with the solution of the first three joints and the other
with the last three joints. For a six-axis PUMA-like robot arm, there are four pos-
sible solutions to the first three joints and for each of these four solutions there are
two possible solutions to the last three joints. The first two configuration indica-
O<7
tors allow one to determine one solution from the possible four solutions for the
first three joints. Similarly, the third indicator selects a solution from the possible
0
two solutions for the last three joints. The arm configuration indicators are
prespecified by a user for finding the inverse solution. The solution is calculated
,_h
in two stages. First, a position vector pointing from the shoulder to the wrist is
f3.
derived. This is used to derive the solution of each joint i (i = 1, 2, 3) for the
first three joints by looking at the projection of the position vector onto the
xi _ I yi _, plane. The last three joints are solved using the calculated joint solution
iii
,,N
from the first three joints, the orientation submatrices of °Ti and
0R°
`- I Ai (i = 4, 5, 6), and the projection of the link coordinate frames onto the
xi_, yi_, plane. From the geometry, one can easily find the arm solution con-
row
y0,
multiplying rerT,oo, by B-1 and H- 1, respectively, and the joint-angle solution can
00.
_°o
nx sx a.% px
ny, s>, ay py
°T6 = T = B-, refTtoo1 H-, = (2.3-30)
nZ sz az PZ
0 0 0 1
Definition of Various Arm Configurations. For the PUMA robot arm shown in
Fig. 2.11 (and other rotary robot arms), various arm configurations are defined
according to human arm geometry and the link coordinate systems which are esta-
blished using Algorithm 2.1 as (Fig. 2.19)
RIGHT (shoulder) ARM: Positive 02 moves the wrist in the positive z° direction
while joint 3 is not activated.
LEFT (shoulder) ARM: Positive 02 moves the wrist in the negative z° direction
while joint 3 is not activated.
ABOVE ARM (elbow above wrist): Position of the wrist of the
RIGHT
LEFT arm with respect to the shoulder coordinate system has
62 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
negative
positive coordinate value along the Y2 axis.
positive
negative coordinate value along the Y2 axis.
WRIST DOWN: The s unit vector of the hand coordinate system and the y5 unit
vector of the (x5, y5, z5) coordinate system have a positive dot product.
WRIST UP: The s unit vector of the hand coordinate system and the y5 unit vector
of the (x5, y5, z5) coordinate system have a negative dot product.
(Note that the definition of the arm configurations with respect to the link coordi-
nate systems may have to be slightly modified if one uses different link coordinate
systems.)
With respect to the above definition of various arm configurations, two arm
configuration indicators (ARM and ELBOW) are defined for each arm
configuration. These two indicators are combined to give one solution out of the
possible four joint solutions for the first three joints. For each of the four arm
configurations (Fig. 2.19) defined by these two indicators, the third indicator
(WRIST) gives one of the two possible joint solutions for the last three joints.
These three indicators can be defined as:
+1 RIGHT arm
ARM = (2.3-31)
-1 LEFT arm
+1 ABOVE arm
ELBOW = -1 BELOW arm (2 3-32)
.
+1 WRIST DOWN
WRIST = (2.3-33)
-1 WRIST UP
In addition to these indicators, the user can define a "FLIP" toggle as:
finding the inverse kinematics solution. These indicators can also be set from the
.`.
knowledge of the joint angles of the robot arm using the corresponding decision
bow
equations. We shall later give the decision equations that determine these indicator
ROBOT ARM KINEMATICS 63
Arm Solution for the First Three Joints. From the kinematics diagram of the
PUMA robot arm in Fig. 2.11, we define a position vector p which points from
the origin of the shoulder coordinate system (x°, y0, z°) to the point where the last
three joint axes intersect as (see Fig. 2.14):
Pz d4 C23 - a3 S23 - a2 S2
64 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Joint 1 solution. If we project the position vector p onto the x0 yo plane as in Fig.
2.20, we obtain the following equations for solving 01:
of = IX 0 R = 7r + 0+ IX (2.3-37)
r= +p?-d; R (2.3-38)
(2.3-39)
d2
sin IX = -
R
cosa - R- r
(2.3-40)
x0Y0 plane
Yo
OA = d2
AB=r= PX+Py-d2
OB = Px+PZ
xii
where the superscripts L and R on joint angles indicate the LEFT/RIGHT arm
configurations. From Eqs. (2.3-37) to (2.3-40), we obtain the sine and cosine
functions of 0I for LEFT/RIGHT arm configurations:
p}r
sin I = sin ( - a) = sin 0 cos a - cos 0 sin a = - pXdz (2.3-41)
R2
pxr + pyd2
cos 0i = cos ( - a) = cos 0 cos a + sin 0 sin a = R2 (2.3-42)
sin OR
I
= sin (7r + d + a) _ -PyrR2- pd2 (2.3-43)
Combining Eqs. (2.3-41) to (2.3-44) and using the ARM indicator to indicate the
LEFT/RIGHT arm configuration, we obtain the sine and cosine functions of 01,
respectively:
cos 01 =
- ARM px P? + p,2 - d22 + Pyd2 (2.3-46)
Px + P Y
where the positive square root is taken in these equations and ARM is defined as
in Eq. (2.3-31). In order to evaluate 01 for -7r s 01 5 ir, an arc tangent func-
tion as defined in Eq. (2.3-7) will be used. From Eqs. (2.3-45) and (2.3-46), and
using Eq. (2.3-7), 01 is found to be:
r sin 01 1
01 = tan-1
COS01
r
= tan-1
- ARM Py px + p? - d2 - Pxd2 -7rz 01 <7r
- ARM Px px2 + py - d2 + Pyd2
(2.3-47)
Joint 2 solution. To find joint 2, we project the position vector p onto the x1 yt
plane as shown in Fig. 2.21. From Fig. 2.21, we have four different arm
configurations. Each arm configuration corresponds to different values of joint 2
as shown in Table 2.3, where 0 ° 5 a 5 360 ° and 0 ° < 0 5 90 °.
66 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
From the above table, 02 can be expressed in one equation for different arm
and elbow configurations using the ARM and ELBOW indicators as:
where the combined arm configuration indicator K = ARM ELBOW will give
an appropriate signed value and the "dot" represents a multiplication operation on
the indicators. From the arm geometry in Fig. 2.21, we obtain:
_ PZ PZ
sin a = (2.3-50)
R px +plz+p -d2
ARM r ARM px + Pv - d,
cosa =
.N..
(2.3-51)
R p2 2
pX +Py dz
a2 + R2 - (d4 + a3 )
cos 0 = (2.3-52)
2a2R
pX +py+pZ +a2-d2-(d2+a3)
2a2 VPX2
+ P2 + p - d2
sin (3 = COS2 (2.3-53)
From Eqs. (2.3-48) to (2.3-53), we can find the sine and cosine functions of 02:
cos0z = cos(a +
cos a cos 0 - (ARM ELBOW) sin a sin (3
C11
(2.3-55)
ROBOT ARM KINEMATICS 67
OA=d, EF=P,
AB=a, EG=P,
BC = a3 DE = P,
CD = d4
AD=R= P2+P2+P2-
AE=r=
C0
From Eqs. (2.3-54) and (2.3-55), we obtain the solution for 02:
F sin 02 1
."3
(2.3-56)
L cOS 02 J
Joint 3 solution. For joint 3, we project the position vector p onto the x2y2 plane
as shown in Fig. 2.22. From Fig. 2.22, we again have four different arm
4-y
as shown in Table 2.4, where (2p4)y is the y component of the position vector
i+1
from the origin of (x2, y2, z2) to the point where the last three joint axes intersect.
From the arm geometry in Fig. 2.22, we obtain the following equations for
finding the solution for 03:
a2 + (d4 + a3) - R2
cos 0 _ (2.3-58)
2a2 d4 + a3
sin = ARM ELBOW
x3
03=4>-0
Left and below arm
From Table 2.4, we can express 03 in one equation for different arm
configurations:
03 = 0 - a (2.3-60)
From Eq. (2.3-60), the sine and cosine functions of 03 are, respectively,
From Eqs. (2.3-61) and (2.3-62), and using Eqs. (2.3-57) to (2.3-59), we find the
C17
'LS
Arm configurations (ZP4)y 03 ARM ELBOW ARM ELBOW
LEFT and ABOVE arm a0 0-Q -1 +1 -1
Aviv/n\
LEFT and BELOW arm 0-a -1 -1 +1
-co
0
RIGHT and ABOVE arm 0 -/3 +1
-e-
+1 +1
RIGHT and BELOW arm 0 0-(3 +1 -1 -1
r sin 03 1
03 = tan - I - 7r < 03 < it (2.3-63)
COS 03
Arm Solution for the Last Three Joints. Knowing the first three joint angles, we
can evaluate the °T3 matrix which is used extensively to find the solution of the
last three joints. The solution of the last three joints of a PUMA robot arm can be
found by setting these joints to meet the following criteria:
1. Set joint 4 such that a rotation about joint 5 will align the axis of motion of
joint 6 with the given approach vector (a of T).
2. Set joint 5 to align the axis of motion of joint 6 with the approach vector.
3. Set joint 6 to align the given orientation vector (or sliding vector or Y6) and
normal vector.
In Eq. (2.3-64), the vector cross product may be taken to be positive or nega-
tive. As a result, there are two possible solutions for 04. If the vector cross pro-
duct is zero (i.e., z3 is parallel to a), it indicates the degenerate case. This hap-
pens when the axes of rotation for joint 4 and joint 6 are parallel. It indicates that
..d
at this particular arm configuration, a five-axis robot arm rather than a six-axis one
would suffice.
Joint 4 solution. Both orientations of the wrist (UP and DOWN) are defined by
looking at the orientation of the hand coordinate frame (n, s, a) with respect to
the (x5, Y5, z5) coordinate frame. The sign of the vector cross product in Eq.
(2.3-64) cannot be determined without referring to the orientation of either the n
or s unit vector with respect to the x5 or y5 unit vector, respectively, which have a
70 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
fixed relation with respect to the z4 unit vector from the assignment of the link
coordinate frames. (From Fig. 2.11, we have the z4 unit vector pointing'at the
same direction as the y5 unit vector.)
We shall start with the assumption that the vector cross product in Eq.
,_,
SZ =
ifs 0
IIz3 x all
(z3 X a)
if s (z3 x a) = 0
CC:
If our assumption of the sign of the vector cross product in Eq. (2.3-64) is not
correct, it will be corrected later using the combination of the WRIST indicator
ate,
and the orientation indicator 0. The 12 is used to indicate the initial orientation of
the z4 unit vector (positive direction) from the link coordinate systems assignment,
acs
while the WRIST indicator specifies the user's preference of the orientation of the
wrist subsystem according to the definition given in Eq. (2.3-33). If both the
orientation 12 and the WRIST indicators have the same sign, then the assumption of
the sign of the vector cross product in Eq. (2.3-64) is correct. Various wrist
.CO
orientations resulting from the combination of the various values of the WRIST
.~.
DOWN >1 0 +1 +1
DOWN <0 +1 -1
UP >1 0 -1 -1
UP <0 -1 +1
ROBOT ARM KINEMATICS 71
Again looking at the projection of the coordinate frame (x4, Y4, z4) on the
X3 y3 plane and from Table 2.5 and Fig. 2.23, it can be shown that the following
are true (see Fig. 2.23):
sign (x) _
f+1 if x '> 0
(2.3-70)
L-1 if x < 0
Thus, the solution for 84 with the orientation and WRIST indicators is:
F sin 84 1
84 = tan - I
cos 04 J
M(Clay - Slax)
= tan - I - 7r < 04 < 7r (2.3-71)
M(CI C23ax + SI C23ay - S,3a,) J
If the degenerate case occurs, any convenient value may be chosen for 84 as long
as the orientation of the wrist (UP/DOWN) is satisfied. This can always be
r-+
Hip
ensured by setting 84 equals to the current value of 04. In addition to this, the user
can turn on the FLIP toggle to obtain the other solution of 04, that is,
04 = 84 + 1800.
Joint 5 solution. To find 85, we use the criterion that aligns the axis of rotation of
E-+
a)-0
joint 6 with the approach vector (or a = z5). Looking at the projection of the
cos 04= Z4 - Y3
X3
where x4 and y4 are the x and y column vectors of °T4, respectively, and a is the
approach vector. Thus, the solution for 05 is:
C sin 05
05 = tan-1 - 7r < 05 < 7r
LCOS 05 J
(CiC23C4-SiS4)ax+(SiC23C4+C1S4)ay-C4S23az
= tan-I
..i
(2.3-73)
C1 S23ax+S1 S23ay+C23az
Joint 6 solution. Up to now, we have aligned the axis of joint 6 with the approach
yon
vector. Next, we need to align the orientation of the gripper to ease picking up
CD'
the object. The criterion for doing this is to set s = Y6. Looking at the projection
of the hand coordinate frame (n, s, a) on the x5 Y5 plane, it can be shown that the
ran
following are true (see Fig. 2.25):
where y5 is the y column vector of °T5 and n and s are the normal and sliding
"'h
r sin 06
06 = tan-] - 7r ' < 06 < 7r
COS 06
N/1
(-S1C4-C1C23S4)nx+(CIC4-SiC23S4)ny+(S4S23)nz
= tan'
(-SIC4-C1C23S4)Sx+(CIC4-S1C23S4)SY+(S4S23)sz J
(2.3-75)
Y4
sin BS = a x4
cos B5 = - (a Y4)
sin 06 =n - ys
cos 06 = S ys
x5
The above derivation of the inverse kinematics solution of a PUMA robot arm is
based on the geometric interpretation of the position of the endpoint of link 3 and
the hand (or tool) orientation requirement. There is one pitfall in the above
derivation for 04, 05, and 86. The criterion for setting the axis of motion of joint 5
b0.0
equal to the cross product of z3 and a may not be valid when sin 85 = 0, which
CJ"
,O`1.
means that 05 = 0. In this case, the manipulator becomes degenerate with both
the axes of motion of joints 4 and 6 aligned. In this state, only the sum of 04 and
own
a)"
06 is significant. If the degenerate case occurs, then we are free to choose any
U4'
value for 04, and usually its current value is used and then we would like to have
84 + 86 equal to the total angle required to align the sliding vector s and the nor-
A..
mal vector n. If the FLIP toggle is on (i.e., FLIP = 1), then 04 = 04 + ir, 85 =
,--.
85, and 06 = 86 + 7r .
In summary, there are eight solutions to the inverse kinematics problem of a
C).
'C3
six joint PUMA-like robot arm. The first three joint solution (81, 82, 03) positions
the arm while the last three-joint solution, (04, 05, 06), provides appropriate orien-
p,;
tation for the hand. There are four solutions for the first three joint solutions-two
for the right shoulder arm configuration and two for the left shoulder arm
configuration. For each arm configuration, Eqs. (2.3-47), (2.3-56), (2.3-63), (2.3-
71), (2.3-73), and (2.3-75) give one set of solutions (01, 82, 83, 04, 05, 86) and
(81, 02, 03, 04 + ir, - 85, 86 + ir) (with the FLIP toggle on) gives another set of
solutions.
Decision Equations for the Arm Configuration Indicators. The solution for the
PUMA-like robot arm derived in the previous section is not unique and depends
''d
on the arm configuration indicators specified by the user. These arm configuration
indicators (ARM, ELBOW, and WRIST) can also be determined from the joint
L..
angles. In this section, we derive the respective decision equation for each arm
configuration indicator. The signed value of the decision equation (positive, zero,
'J'
(2.3-31) to (2.3-33).
w-+
74 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
For the ARM indicator, following the definition of the RIGHT/LEFT arm, a
decision equation for the ARM indicator can be found to be:
p,
i .1 k
ZI x 1
g(0,p) = z0. z0 - sin 01 cos 01 0
11zl X P'll = Ikzl X p'I!
PX Py 0
- py sin 01 - pX cos 01
(2.3-76)
Hzl X p'll
where p' _ (P_" py., 0)T is the projection of the position vector p [Eq. (2.3-36)]
onto the x0 y0 plane, zI = (- sin 01, cos 01, 0)T from the third column vector of
°T1, and z0 = (0, 0, 1)T. We have the following possibilities:
000
Since the denominator of the above decision equation is always positive, the
determination of the LEFT/RIGHT arm configuration is reduced to checking the
'77
where the sign function is defined in Eq. (2.3-70). Substituting the x and y com-
ponents of p from Eq. (2.3-36), Eq. (2.3-77) becomes:
ARM = sign [g(0, p)] = sign [g(0)] = sign ( -d4S23 -a3C23 - a2 C2) (2.3-78)
Hence, from the decision equation in Eq. (2.3-78), one can relate its signed value
to the ARM indicator for the RIGHT/LEFT arm configuration as:
n s a p
() I) () I
Decision equations
Error
ARM, ELBOW, WRIST
Inverse kinematics
I-
+1 ELBOW above wrist (2 3-80)
ELBOW = ARM sign (d4 C3 - a3 S3) =
-1 ' ELBOW below wrist
For the WRIST indicator, we follow the definition of DOWN/UP wrist to
obtain a positive dot product of the s and y5 (or z4 ) unit vectors:
+1 if s z4 > 0
WRIST = -1 if s z4 < 0 = sign (s z4) (2.3-81)
+1 if n - z4 > 0
WRIST = -1 if n - Z4 < 0 = sign (n z4) (2.3-82)
solution which should agree to the joint angles fed into the direct kinematics rou-
tine previously. A computer simulation block diagram is shown in Fig. 2.26.
We have discussed both direct and inverse kinematics in this chapter. The param-
:-t
eters of robot arm links and joints are defined and a 4 x 4 homogeneous transfor-
mation matrix is introduced to describe the location of a link with respect to a
fixed coordinate frame. The forward kinematic equations for a six-axis PUMA-
like robot arm are derived.
The inverse kinematics problem is introduced and the inverse transform tech-
nique is used to determine the Euler angle solution. This technique can also be
used to find the inverse solution of simple robots. However, it does not provide
CDN
CAD
arm-four solutions for the first three joints and for each arm configuration, two
tin
more solutions for the last three joints. The validity of the forward and inverse
kinematics solution can be verified by computer simulation. The geometric
approach, with appropriate modification and adjustment, can be generalized to
((DD
other simple industrial robots with rotary joints. The kinematics concepts covered
in this chapter will be used extensively in Chap. 3 for deriving the equations of
motion that describe the dynamic behavior of a robot arm.
REFERENCES
Further reading on matrices can be found in Bellman [1970], Frazer et al. [1960],
and Gantmacher [1959]. Utilization of matrices to describe the location of a rigid
Q'5
mechanical link can be found in the paper by Denavit and Hartenberg [1955] and
in their book (Hartenberg and Denavit [1964]). Further reading about homogene-
-7,
-CD
ous coordinates can be found in Duda and Hart [1973] and Newman and Sproull
[1979]. The discussion on kinematics is an extension of a paper by Lee [1982].
O-,
More discussion in kinematics can be found in Hartenberg and Denavit [1964] and
Suh and Radcliffe [1978]. Although matrix representation of linkages presents a
,.C
7r0
for with rotary joints was based on the paper by Lee and Ziegler [1984]. The arm
solution of a Stanford robot arm can be found in a report by Lewis [1974]. Other
techniques in solving the inverse kinematics can be found in articles by Denavit
[1956], Kohli and Soni [1975], Yang and Freudenstein [1964], Yang [1969], Yuan
and Freudenstein [1971], Duffy and Rooney [1975], Uicker et al. [1964]. Finally,
the tutorial book edited by Lee, Gonzalez, and Fu [1986] contains numerous recent
a0-+
papers on robotics.
PROBLEMS
2.1 What is the rotation matrix for a rotation of 30° about the OZ axis, followed by a rota-
tion of 60° about the OX axis, followed by a rotation of 90° about the OY axis?
2.2 What is the rotation matrix for a rotation of 0 angle about the OX axis, followed by a
'0-
rotation of >li angle about the OW axis, followed by a rotation of B angle about the OY axis?
2.3 Find another sequence of rotations that is different from Prob. 2.2, but which results in
the same rotation matrix.
2.4 Derive the formula for sin (0 + 0) and cos (0 + 0) by expanding symbolically two
m-1
rotations of 0 and B using the rotation matrix concepts discussed in this chapter.
2.5 Determine a T matrix that represents a rotation of a angle about the OX axis, followed
ivy
2,7 For the figure shown below, find the 4 x 4 homogeneous transformation matrices A;
and °A; for i = 1, 2, 3, 4.
4 in
2.8 A robot workstation has been set up with a TV camera, as shown in the example in
Sec. 2.2.11. The camera can see the origin of the base coordinate system where a six-link
'-'
robot arm is attached, and also the center of a cube to be manipulated by the robot. If a
local coordinate system has been established at the center of the cube, then this object, as
.J.
seen by the camera, can be represented by a homogeneous transformation matrix T1. Also,
.^y
the origin of the base coordinate system as seen by the camera can be expressed by a homo-
'pp
>,3
0 1 0 1 1 0 0 -10
1 0 0 10 0 -1 0 20
T = ,
0 0 -1 9
and T2 = 0 0 -1 10
0 0 0 0 0 0 1
(a) Unfortunately, after the equipment has been set up and these coordinate systems have
been taken, someone rotates the camera 90° about the z axis o the camera. What is the
position/orientation of the camera with respect to the robots base coordinate system?
(b) After you have calculated the answer for question (a), the same person rotated the
object 90° about the x axis of the object and translated it 4 units of distance along the
ROBOT ARM KINEMATICS 79
rotated y axis. What is the position/orientation of the object with respect to the robot's base
QC-.
coordinate system? To the rotated camera coordinate system?
2.9 We have discussed a geometric approach for finding the inverse kinematic solution of a
?D+
PUMA robot arm. Find the computational requirements of the joint solution in terms of
L1.
multiplication and addition operations and the number of transcendental calls (if the same
term appears twice, the computation should be counted once only).
2.10 Establish orthonormal link coordinate systems (xi, y;, z,) for i = 1, 2, ... , 6 for
the PUMA 260 robot arm shown in the figure below and complete the table.
Waist rotation 330°
Flange
rotation
360°
Wrist
rotation
6
80 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
2.11 Establish orthonormal link coordinate systems (xi, yi, zi) for i = 1, 2, ... , 5 for
ABC
the MINIMOVER robot arm shown in the figure below and complete the table.
yo
'2.12 A Stanford robot arm has moved to the position shown in the figure below. The joint
variables at this position are: q = (90°, -120°, 22 cni, 0°, 70°, 90°)T. Establish the
orthonormal link coordinate systems (xi, yi, zi) for i = 1, 2, ... ,6, for this arm and
complete the table.
2.13 Using the six "Ai matrices ( i = 1 , 2, ... , 6) of the PUMA robot arm in Fig.
2.13, find its position error at the end of link 3 due to the measurement error of the first
three joint angles (MB1, OB2, MB3). A first-order approximation solution is adequate.
2.14 Repeat Prob. 2.13 for the Stanford arm shown in Fig. 2.12.
ROBOT ARM KINEMATICS 81
2.15 A two degree-of-freedom manipulator is shown in the figure below. Given that the
.fl
length of each link is 1 m, establish its link coordinate frames and find °A, and 'A2. Find
;-.
the inverse kinematics solution for this manipulator.
'2.16 For the PUMA robot arm shown in Fig. 2.11, assume that we have found the first
C7.
three joint solution (0,, 02, 03) correctly and that we are given '-'A;, i = 1, 2, ... , 6
and °T6. Use the inverse transformation technique to find the solution for the last three
joint angles (04, 05, 06). Compare your solution with the one given in Eqs. (2.3-71), (2.3-
73), and (2.3-75).
2.17 For the Stanford robot arm shown in Fig. 2.12, derive the solution of the first three
.z.
bow
joint angles. You may use any method that you feel comfortable with.
2.18 Repeat Prob. 2.16 for the Stanford arm shown in Fig. 2.12.
CHAPTER
THREE
ROBOT ARM DYNAMICS
3.1 INTRODUCTION
Robot arm dynamics deals with the mathematical formulations of the equations of
:t7
.'3
robot arm motion. The dynamic equations of motion of a manipulator are a set of
mathematical equations describing the dynamic behavior of the manipulator. Such
equations of motion are useful for computer simulation of the robot arm motion,
°°'
the design of suitable control equations for a robot arm, and the evaluation of the
kinematic design and structure of a robot arm. In this chapter, we shall concen-
trate on the formulation, characteristics, and properties of the dynamic equations of
motion that are suitable for control purposes. The purpose of manipulator control
acs
,.o
'L7
ance with some prespecified system performance and desired goals. In general,
pt-
control algorithms and the dynamic model of the manipulator. The control prob-
.3''
lem consists of obtaining dynamic models of the physical robot arm system and
',.
then specifying corresponding control laws or strategies to achieve the desired sys- Obi
tem response and performance. This chapter deals mainly with the former part of
..y
CAD
the manipulator control problem; that is, modeling and evaluating the dynamical
properties and behavior of computer-controlled robots.
The actual dynamic model of a robot arm can be obtained from known physi-
.°.
cal laws such as the laws of newtonian mechanics and lagrangian mechanics. This
At'
leads to the development of the dynamic equations of motion for the various arti-
culated joints of the manipulator in terms of specified geometric and inertial
parameters of the links. Conventional approaches like the Lagrange-Euler (L-E)
and Newton-Euler (N-E) formulations could then be applied systematically to
develop the actual robot arm motion equations. Various forms of robot arm
motion equations describing the rigid-body robot arm dynamics are obtained from
these two formulations, such as Uicker's Lagrange-Euler equations (Uicker [1965],
E3..
r3,
`/d
°t=
[1980]), Luh's Newton-Euler equations (Luh et al. [1980a]), and Lee's generalized
d'Alembert (G-D) equations (Lee et al. [1983]). These motion equations are
Y',
`CD
"equivalent" to each other in the sense that they describe the dynamic behavior of
p.,
.,,
the same physical robot manipulator. However, the structure of these equations
82
ROBOT ARM DYNAMICS 83
may differ as they are obtained for various reasons and purposes. Some are
obtained to achieve fast computation time in evaluating the nominal joint torques in
servoing a manipulator, others are obtained to facilitate control analysis and syn-
pr'
thesis, and still others are obtained to improve computer simulation of robot
motion.
The derivation of the dynamic model of a manipulator based on the L-E for-
[17
mulation is simple and systematic. Assuming rigid body motion, the resulting
Q..
'-'
(1)
.^,
lash, and gear friction, are a set of second-order coupled nonlinear differential
equations. Bejczy [1974], using the 4 x 4 homogeneous transformation matrix
,d,
representation of the kinematic chain and the lagrangian formulation, has shown
that the dynamic motion equations for a six joint Stanford robot arm are highly
nonlinear and consist of inertia loading, coupling reaction forces between joints
(Coriolis and centrifugal), and gravity loading effects. Furthermore, these
torques/forces depend on the manipulator's physical parameters, instantaneous joint
configuration, joint velocity and acceleration, and the load it is carrying. The L-E
equations of motion provide explicit state equations for robot dynamics and can be
r+'
a lesser extent, they are being used to solve for the forward dynamics problem,
that is, given the desired torques/forces, the dynamic equations are used to solve
',-
.fl
for the joint accelerations which are then integrated to solve for the generalized
coordinates and their velocities; or for the inverse dynamics problem, that is, given
-..
the desired generalized coordinates and their first two time derivatives, the general-
','G.+
a'-
the dynamic coefficients Dik, h,k,,,, and c, defined in Eqs. (3.2-31), (3.2-33), and
(74
requires a fair amount of arithmetic operations. Thus, the L-E equations are very
ice..
difficult to utilize for real-time control purposes unless they are simplified.
S].
4°)
.fl
o">
based on the N-E equations of motion (Armstrong [1979], Orin et al. [1979], Luh
..Q
et al. [1980a]). The derivation is simple, but messy, and involves vector cross-
product terms. The resulting dynamic equations, excluding the dynamics of the
control device, backlash, and gear friction, are a set of forward and backward
recursive equations. This set of recursive equations can be applied to the robot
ice.
ue.
C's
°s'
.fl
..t
.s6
tions at the center of mass of each link-from the inertial coordinate frame to the
hand coordinate frame. The backward recursion propagates the forces and
c"4"
moments exerted on each link from the end-effector of the manipulator to the base
.-r
O("
reference frame. The most significant result of this formulation is that the compu-
s0.
number of joints of the robot arm and independent of the robot arm configuration.
'.3
b1)
With this algorithm, one can implement simple real-time control of a robot arm in
.-t
(CS
The inefficiency of the L-E equations of motion arises partly from the 4 x 4
homogeneous matrices describing the kinematic chain, while the efficiency of the
N-E formulation is based on the vector formulation and its recursive nature. To
further improve the computation time of the lagrangian formulation, Hollerbach
s..
[1980] has exploited the recursive nature of the lagrangian formulation. However,
the recursive equations destroy the "structure" of the dynamic model which is
quite useful in providing insight for designing the controller in state space. For
s-.
state-space control analysis, one would like to obtain an explicit set of closed-form
Cam'
differential equations (state equations) that describe the dynamic behavior of a
manipulator. In addition, the interaction and coupling reaction forces in the equa-
tions should be easily identified so that an appropriate controller can be designed
to compensate for their effects (Huston and Kelly [1982]). Another approach for
obtaining an efficient set of explicit equations of motion is based on the generalized
CND
d'Alembert principle to derive the equations of motion which are expressed expli-
D~,
'-s
faster computation of the dynamic coefficients than the L-E equations of motion,
the G-D equations of motion explicitly identify the contributions of the transla-
'-'
tional and rotational effects of the links. Such information is useful for designing
TV'
°-h
derived and discussed, and the motion equations of a two-link manipulator are
worked out to illustrate the use of these equations. Since the computation of the
dynamic coefficients of the equations of motion is important both in control
o'.
analysis and computer simulation, the mathematical operations and their computa-
^'.
tional issues for these motion equations are tabulated. The computation of the
applied forces/torques from the generalized d'Alembert equations of motion is of
order 0(n3), while the L-E equations are of order 0(n4) [or of order 0(n3) if
optimized] and the N-E equations are of order 0(n), where n is the number of
degrees of freedom of the robot arm.
tion to describe the spatial displacement between the neighboring link coordinate
frames to obtain the link kinematic information, and they employ the lagrangian
dynamics technique to derive the dynamic equations d a manipulator. The direct
Cam,
expressed by matrix operations and facilitates both analysis and computer imple-
CAD
explicit terms will be based on the compact matrix algorithm derived in this sec-
tion.
The derivation of the dynamic equations of an n degrees of freedom manipula-
tor is based on the understanding of:
d
dt
r aL 1
L aqr J
aL
= T, i = 1, 2, ... , n (3.2-1)
aq,
where
L = lagrangian function = kinetic energy K - potential energy P
a0.
positions of the joints are readily available because they can be measured by poten-
,_.
this section, the velocity of a point fixed in link i will be derived and the effects of
'C3
the motion of other joints on all the points in this link will be explored.
With reference to Fig. 3.1, let 'r; be a point fixed and at rest in a link i and
expressed in homogeneous coordinates with respect to the ith link coordinate
frame,
xi
Yi
1)T
zi
= (xi, Yi, zi, (3.2-2)
Let °ri be the same point 'r1 with respect to the base coordinate frame, -'Ai the
homogeneous coordinate transformation matrix which relates the spatial displace-
ment of the ith link coordinate frame to the (i -1)th link coordinate frame, and
°A, the coordinate transformation matrix which relates the ith coordinate frame to
the base coordinate frame; then °ri is related to the point 'ri by
where
°Ai = °A1'A2 ...'-1Ai (3.2-4)
If joint i is revolute, it follows from Eq. (2.2-29) that the general form of '-'A; is
given by
or, if joint i is prismatic, from Eq. (2.2-31), the general form of - 'Ai is
cos 0i - cos ai sin 0i sin ai sin 0i 0
'-'A_ _ sin 0i cos ai cos Oi - sin ai cos 0i 0
(3.2-6)
0 sin ai cos ai di
0 0 0 1
In general, all the nonzero elements in the matrix °Ai are a function of
(0,, 02, ... , 0i), and ai, ai, di are known parameters from the kinematic structure
'-s
of the arm and 0i or di is the joint variable of joint i. In order to derive the equa-
tions of motion that are applicable to both revolute and prismatic joints, we shall
4-"
use the variable qi to represent the generalized coordinate of joint i which is either
vii
points as well as the point 'ri fixed in the link i and expressed with respect to the
'_'
ith coordinate frame will have zero velocity with respect to the ith coordinate
frame (which is not an inertial frame). The velocity of 'ri expressed in the base
`"'
°O.
a °Ai
+ °A, ... i- IAi'r. + °A, ir. = 'ri (3.2-7)
aq;
J
The above compact form is obtained because 'ii = 0. The partial derivative of
..C
°Ai with respect to qj can be easily calculated with the help of a matrix Qi which,
for a revolute joint, is defined as
`..
0 -1 0 0
1 0 0 0
Qi = (3.2-8a)
0 0 0 0
0 0 0 0
88 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
0 0 0 0
Qi = (3 2-8b)
.
0 0 0 1
0 0 0 0
a'-'Ai
= Q '- 'Ai
r (3 . 2-9)
aq1
For example, for a robot arm with all rotary joints, qi = 0i, and using Eq. (3.2-
5),
0 -1 0 0
1 0 0 0 sin 0i cos ai cos0i - sin ai cos 0i ai sin 0i
0 0 0 0 0 sin ai cos ai di
0 0 0 0 0 0 0 1
Qi'-'Ai
points on link i. In order to simplify notations, let us define Uij ° a °Ai/aqj, then
Eq. (3.2-10) can be written as follows for i = 1, 2, . , n, . .
It is worth pointing out that the partial derivative of-'Ai with respect to qi results
in a matrix that does not retain the structure of a homogeneous coordinate transfor-
s..
mation matrix. For a rotary joint, the effect of premultiplying '-'Ai by Qi is
equivalent to interchanging the elements of the first two rows of-'Ai, negating all
the elements of the first row, and zeroing out all the elements of the third and
fourth rows. For a prismatic joint, the effect is to replace the elements of the third
row with the fourth. row of '-'Ai and zeroing out the elements in the other rows.
The advantage of using the Q matrices is that we can still use the '-'Ai matrices
and apply the above operations to '-'Ai when premultiplying it with the Q.
Next, we need to find the interaction effects between joints as
auir o
°Ay-,QJ'-'Ak-,Qkk-'Ai
°Ak-1Qkk-'Aj_iQj'-'Ai
ikj j
i k (3.2-13)
aqk
0 i < j or i < k
For example, for a robot arm with all rotary joints, i = j = k = 1 and q, = 81,
so that
a B,l
= To (Q1°A,) = QIQI°Ai
Eq. (3.2-13) can be interpreted as the interaction effects of the motion of joint j
and joint k on all the points on link i.
the base coordinate system, and let dKi be the kinetic energy of a particle with
differential mass dm in link i; then
where a trace operatort instead of a vector dot product is used in the above equa-
a)
ate)
tion to form the tensor from which the link inertia matrix (or pseudo-inertia
matrix) Ji can be obtained. Substituting vi from Eq. (3.2-12), the kinetic energy
of the differential mass is
t Tr A a;,.
90 ROBOTICS: CONTROL. SENSING, VISION, AND INTELLIGENCE
Ti
yr.
E Uipgp F, Uirgr din
P=1 Lr=1 J J
r i i
Uip 1ri `rfTUirgpgr dm
L p = I r=I
i i
The matrix Uii is the rate of change of the points ('ri) on link i relative to
'L3
the base coordinate frame as qi changes. It is constant for all points on link i and
independent of the mass distribution of the link i. Also c, are independent of the
in.
mass distribution of link i, so summing all the kinetic energies of all links and put-
ting the integral inside the bracket,
f i i
The integral term inside the bracket is the inertia of all the points on link i, hence,
Ji = S'ri'r[T dm = (3.2-17)
Sxizi din $yizi dm $z? dm $zi dm
where 'r1 = (x,, y1, z1, 1)T as defined before. If we use inertia tensor Ii.l which is
3-0
defined as
r i-
bit
xk xixi dm
L Lk J
where the indices i, j, k indicate principal axes of the ith coordinate frame and bii
".Y
-'a+Iyy+Izz mixi l
2
Ixy I. - Iyy + Izz Iyz mi Yi
Ji = 2 (3.2-18)
I. + Iyy - Izz
v_~
or using the radius of gyration of the rigid body m; in the (x,, yi, zi) coordinate
system, Ji can be expressed as
N,^
2 z 2 2 2
- kilI + k;22 + k;33 ki12 ki13 xi
2
2 2 2
z
k,12
ki 1 i - 4222 + ki33
kiz23 Yi
Ji = mi 2
2 2 23
z z kill + ki22 - k;33
ki13 k i23
2
x; Y; Z; 1
(3.2-19)
where ki23 is the radius of gyration of link i about the yz axes and 'fi =
(xi, Yi> zi, 1)T is the center of mass vector of link i from the ith link coordinate
frame and expressed in the ith link coordinate frame. Hence, the total kinetic
energy K of a robot arm is
n n
,1 1 1
which is a scalar quantity. Note that the Ji are dependent on the mass distribution
of link i and not their position or rate of motion and are expressed with respect to
the ith coordinate frame. Hence, the Ji need be computed only once for evaluating
the kinetic energy of a robot arm.
and the total potential energy of the robot arm can be obtained by summing all the
.y;
where g = (gg, gy, gZ, 0) is a gravity row vector expressed in the base coordi-
nate system. For a level system, g = (0, 0, - g I, 0) and g is the gravitational
constant (g = 9.8062 m/sec2).
92 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
s.,
arm [Eq. (3.2-23)] yields the necessary generalized torque Ti for joint i actuator to
;f'
r..
drive the ith link of the manipulator,
d aL aL
Ti =
dt
a 4i J aqi
j n j j n
T
_
-I-
or in a matrix form as
where
T(t) = n x 1 generalized torque vector applied at joints i = 1, 2, ... , n; that is,
'i7
a.+
q(t) = an n x 1 vector of the joint variables of the robot arm and can be
expressed as
q(t) = an n x 1 vector of the joint velocity of the robot arm and can be
expressed as
ij(t) = an n x 1 vector of the acceleration of the joint variables q(t) and can be
expressed as
D(O) = (3.2-35)
D14 D24 D34 D44 D45 D46
where
+ Tr (U52J5U51) + Tr (U62J6U6 )
D13 = D31 = Tr (U33J3U31) + Tr (U43J4U41) + Tr (U53J5U51 + Tr (U63J6U6 )
hip
D14 = D41 = Tr (U44J4U41) + Tr (U54J5U51) + Tr (U64J6U61 )
hip
D15 = D51 = Tr (U55J5U51) + Tr (U65J6U61)
hip
D 16 = D61 = Tr (U66J6U61)
+ Tr (U52J5U52) + Tr (U62J6U62 )
D66 = Tr (U66J6U66)
The Coriolis and Centrifugal Terms, h(6, 0). The velocity-related coefficients in
the Coriolis and centrifugal terms in Eqs. (3.2-32) and (3.2-33) can be expressed
separately by a 6 x 6 symmetric matrix denoted by H;,,, and defined in the follow-
ing way:
ROBOT ARM DYNAMICS 95
Then, Eq. (3.2-32) can be expressed in the following compact matrix-vector pro-
duct form:
hi = BT Hi t, 6 (3.2-38)
where the subscript i refers to the joint (i = 1... , 6) at which the velocity-
induced torques or forces are "felt."
The expression given by Eq. (3.2-38) is a component in a six-dimensional
column vector denoted by h(0, 0):
hl OTHI, v0
h2 OTH2, v0
h3 6TH3 0
h(0, 0) _ (3.2-39)
h4 0TH4, Y0
h5 BTHS, ,6
h6 BTH6,,,6
(3.2-40)
where
c5 = -(m5gU555r5 + m6gU656r6)
C6 = - m6gU666r6
The coefficients ci, Dik, and hik,,, in Eqs. (3.2-31) to (3.2-34) are functions of
d^,
both the joint variables and inertial parameters of the manipulator, and sometimes
are called the dynamic coefficients of the manipulator. The physical meaning of
these dynamic coefficients can easily be seen from the Lagrange-Euler equations of
motion given by Eqs. (3.2-26) to (3.2-34):
1. The coefficient ci represents the gravity loading terms due to the links and is
defined by Eq. (3.2-34).
2. The coefficient Dik is related to the acceleration of the joint variables and is
defined by Eq. (3.2-31). In particular, for i = k, Dii is related to the accelera-
tion of joint i where the driving torque Ti acts, while for i #k, Dik is related to
CAD
the reaction torque (or force) induced by the acceleration of joint k and acting
at joint i, or vice versa. Since the inertia matrix is symmetric and
Tr (A) = Tr (AT), it can be shown that Dik = Dki.
3. The coefficient hik,,, is related to the velocity of the joint variables and is defined
by Eqs. (3.2-32) and (3.2-33). The last two indices, km, are related to the
velocities of joints k and m, whose dynamic interplay induces a reaction torque
chi,
(or force) at joint i. Thus, the first index i is always related to the joint where
the velocity-induced reaction torques (or forces) are "felt." In particular, for
.-,
noted that, for a given i, we have hik,,, = hi,,,k which is apparent by physical
reasoning.
fro
instance, the centrifugal force will not interact with the motion of that joint
..,
which generates it, that is, hiii = 0 always; however, it can interact with
motions at the other joints in the chain, that is, we can have hjii # 0.)
'H.
COD
n 5111 - 45
-mjgUji jr, 4n(9n - 7)
2
It
fugal and Coriolis, and gravitational effects of the links. For a given set of applied
torques Ti (i = 1, 2, ... , n) as a function of time, Eq. (3.2-26) should be
integrated simultaneously to obtain the actual motion of the manipulator in terms
of the time history of the joint variables q(t). Then, the time history of the joint
variables can be transformed to obtain the time history of the hand motion (hand
..1
the time history of the joint variables, the joint velocities, and the joint accelera-
tions is known ahead of time from a trajectory planning program, then Eqs. (3.2-
26) to (3.2-34) can be utilized to compute the applied torques T(t) as a function of
time which is required to produce the particular planned manipulator motion. This
0-.
0.°
from the closed-loop control viewpoint in that they give a set of state equations as
o-1
in Eq. (3.2-26). This form allows design of a control law that easily compensates
t3.
for all the nonlinear effects. Quite often in designing a feedback controller for a
f)'
manipulator, the dynamic coefficients are used to minimize the nonlinear effects of
the reaction forces (Markiewicz [1973]).
98 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
3..
cal operations (multiplications and additions) that are required to compute Eq.
L1.
(3.2-26) for every set point in the trajectory. Computationally, these equations of
motion are extremely inefficient as compared with other formulations. In the next
section, we shall develop the motion equations of a robot arm which will prove to
L'.
be more efficient in computing the nominal torques.
To show how to use the L-E equations of motion in Eqs. (3.2-26) to (3.2-34), an
example is worked out in this section for a two-link manipulator with revolute
joints, as shown in Fig. 3.2. All the rotation axes at the joints are along the z axis
.,--1
ono
normal to the paper surface. The physical dimensions such as location of center of
mass, mass of each link, and coordinate systems are shown below. We would like
to derive the motion equations for the above two-link robot arm using Eqs. (3.2-
26) to (3.2-34).
We assume the following: joint variables = 01, 82; mass of the links = nil,
m2; link parameters = a1 = a2 = 0; d, = d2 = 0; and a1 = a2 = 1. Then,
from Fig. 3.2, and the discussion in the previous section, the homogeneous coordi-
nate transformation matrices'-'Ai (i = 1, 2) are obtained as
C, - S1 0 1C, C2 - S2 0 1C2
S1 C, 0 is, S2 C2 0 1S2
°AI = 'A2 =
0 0 1 0 0 0 1 0
0 0 0 1 0 0 0 1
where Ci = cos Bi ; Si = sin 9i ; Cii = cos (Bi + 8j) ; Sid = sin (Bi + Of).
X17
0 -1 0 0
1 0 0 0
Qi =
0 0 0 0
0 0 0 0
0 -1 0 0
a°A, 0 0 0
U_ ae,
= Qi°A, =
0
1
0 0 0
0 0 0 0
-1 -S12 l(C,2 + C, )
.-O
0 0 0 C,2 0
a °A 2 1 0 0 0 S12 C12 0 l(S,2 + S, )
U21 = = Q, °A2 =
aB, 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 1
0 0 0 0
a°A2
U22 =
ae2
= °A,Q2'A2
100 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
,-+
S, C1 0 is, 1 0 0 0 S2 C2 0 IS2
0 0 1 0 0 0 0 0 0 0 1 0
1-'
0 0 0 1 0 0 0 0 0 0 0 1
0 0 0 0
0 0 0 0
From Eq. (3.2-18), assuming all the products of inertia are zero, we can derive the
pseudo-inertia matrix J; :
D = Tr(U1,J,U11) + Tr(U21J2U2i)
D22 = Tr ( U22J2U22)
To derive the Coriolis and centrifugal terms, we use Eq. (3.2-32). For i = 1,
using Eq. (3.2-32), we have
2 2
_ Y' hlk0k0m = hlll0l +
2
h1120102 + h1210102 + h12202
k=1 m=1
Using Eq. (3.2-33), we can obtain the value of h;k,». Therefore, the above value
which corresponds to joint 1 is
'hm2S2120
Therefore,
- '/2 m2 S2 l2 02 - m2 S212 01 02
h(0, 0) _
'hm2S2128
Next, we need to derive the gravity-related terms, c = (c1, c2 )T. Using Eq.
(3.2-34), we have:
0 -1512
cl
'/2M191CI + '/sm2glCI2 + m2glC1
C2
'hm2g1Ct2 J
Finally, the Lagrange-Euler equations of motion for the two-link manipulator are
found to be
TI
r '/3mI l2 + 4/3 m212 + m2 C212 '/sm212 + 'hm212C2 B1
T2
'/3m212 + 1hm212C2 '/3m212 B2
-'/zm2S212B2 - m2S2l26I B2
'hm2S2l2BI
loop control. The problem is due mainly to the inefficiency of the Lagrange-Euler
equations of motion, which use the 4 x 4 homogeneous transformation matrices.
f)'
In order to perform real-time control, a simplified robot arm dynamic model has
been proposed which ignores the Coriolis and centrifugal forces. This reduces the
computation time for the joint torques to an affordable limit (e.g., less than 10 ms
s..
for each trajectory point using a PDP 11/45 computer). However, the Coriolis and
CZ.
centrifugal forces are significant in the joint torques when the arm is moving at
,-.
fast speeds. Thus, the simplified robot arm dynamics restricts a robot arm motion
.,,
to slow speeds which are not desirable in the typical manufacturing environment.
Furthermore, the errors in the joint torques resulting from ignoring the Coriolis
,.,
s..
and centrifugal forces cannot be corrected with feedback control when the arm is
4°;
,N-.
al. [1979], Luh et al. [1980a], Walker and Orin [1982]). This formulation when
`J'
applied to a robot arm results in- a set of forward and backward recursive equa-
tions with "messy" vector cross-product terms. The most significant aspect of this
CAD
formulation is that the computation time of the applied torques can be reduced
C1'
rotating coordinate system and a fixed inertial coordinate frame, and then extend
the concept to include a discussion of the relationship between a moving coordinate
system (rotating and translating) and an inertial frame. From Fig. 3.3, two right-
X17
and a starred coordinate system OX*Y*Z* (rotating frame), whose origins are
(!Q
coincident at a point 0, and the axes OX*, OY*, OZ* are rotating relative to the
O`.
axes OX, OY, OZ, respectively. Let (i, j, k) and (i*, j*, k*) be their respective
unit vectors along the principal axes. A point r fixed and at rest in the starred
i..
coordinate system can be expressed in terms of its components on either set of axes:
104 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
r = xi + yj + zk (3.3-1)
or r = x* i* + y* j* + z*k* (3.3-2)
We would like to evaluate the time derivative of the point r, and because the coor-
'C3
dinate systems are rotating with respect to each other, the time derivative of r(t)
can be taken with respect to two different coordinate systems. Let us distinguish
these two time derivatives by noting the following notation:
d( o
ti me d er i vati ve with respec t to the fixe d re f erence coordi nat e
dt
system which is fixed = time derivative of r(t)
v.,
(3.3-3)
dt
411
Then, using Eq. (3.3-1), the time derivative of r(t) can be expressed as
C].
dr.
= xi + yj + ik + x di + y dj + z dk
I0.
dt dt dt dt
= xi + yj + ik (3.3-5)
= x* i* + y* j* + z* k* + x* d* + y* d*L + z* d* k*
d* r t*
dt dt dt dt
= x* i* + y* j* + i* k* (3.3-6)
ROBOT ARM DYNAMICS 105
Using Eqs. (3.3-2) and (3.3-6), the time derivative of r(t) can be expressed as
dr _
d =z*ix + y * * +i *k* +z +y
* di*
dt
* dj*
dt +
Z
* dk*
dt
d*r +x*di* + y*dj* +Z*dk*
(3.3-7)
dt dt dt dt
In evaluating this derivative, we encounter the difficulty of finding di*/dt, dj*/dt,
CF'
..d
and dk*ldt because the unit vectors, i*, j*, and k*, are rotating with respect to the
unit vectors i, j, and k.
In order to find a relationship between the starred and unstarred derivatives,
let us suppose that the starred coordinate system is rotating about some axis OQ
passing through the origin 0, with angular velocity co (see Fig. 3.4), then the
b-0
angular velocity w is defined as a vector of magnitude w directed along the axis
-4n
(3.3-10)
dt Al-0 At
With reference to Fig. 3.4, and recalling that a vector has magnitude and
direction, we need to verify the correctness of Eq. (3.3-10) both in direction and
..C
ds
= I co x sI = cos sing (3.3-11)
dt
which is obvious in Fig. 3.4. The direction of w x s can be found from the
definition of the vector cross product to be perpendicular to s and in the plane of
the circle as shown in Fig. 3.4.
If Eq. (3.3-8) is applied to the unit vectors (i*, j'*, k*), then Eq. (3.3-7)
becomes
106 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
0
Figure 3.4 Time derivative of a rotating coordinate system.
dr
dtr + x* (w X i*) + y*(co x j*) + z* (w x k*)
dt
d*
dtr + w X r (3.3-13)
This is the fundamental equation establishing the relationship between time deriva-
tives for rotating coordinate systems. Taking the derivative of right- and left-hand
sides of Eq. (3.3-13) and applying Eq. (3.3-8) again to r and d* r/dt, we obtain
the second time derivative of the vector r(t):
d2r
= d Fd*r1 + w x r+ dw
xr
dt2 dt dt dt at
'- J
+ w x d* r d*r
CIE
d*2r
dt2 dt + w X
dt
+wxr
d*2r dw
=
at r + co x (wxr) +
+ 2w x d*
dt
xr (3.3-14)
dt2
Equation (3.3-14) is called the Coriolis theorem. The first term on the right-hand
..y
side of the equation is the acceleration relative to the starred coordinate system.
1'.
The second term is called the Coriolis acceleration. The third term is called the
centripetal (toward the center) acceleration of a point in rotation about an axis.
0
One can verify that w X (w x r) points directly toward and perpendicular to the
axis of rotation. The last term vanishes for a constant angular velocity of rotation
about a fixed axis.
ROBOT ARM DYNAMICS 107
translation motion of the starred coordinate system with respect to the unstarred
.cam
coordinate system. From Fig. 3.5, the starred coordinate system O*X*Y*Z* is
rotating and translating with respect to the unstarred coordinate system OXYZ
CAD
which is an inertial frame. A particle p with mass m is located by vectors r* and
r with respect to the origins of the coordinate frames O* X* Y* Z* and OXYZ,
respectively. Origin O* is located by a vector h with respect to the origin O. The
relation between the position vectors r and r* is given by (Fig. 3.5)
r = r* + h (3.3-15)
dt dt + dt
where v* and v are the velocities of the moving particle p relative to the coordi-
f3,
nate frames O* X* Y* Z* and OXYZ, respectively, and v1, is the velocity of the
starred coordinate system O*X*Y*Z* relative to the unstarred coordinate system
OXYZ. Using Eq. (3.3-13), Eq. (3.3-16) can be expressed as
alt
Similarly, the acceleration of the particle p with respect to the unstarred coordinate
system is
d2* d'r
SIT
d
a(t)
dtt) dt2 + dth - a* + all
2
(3.3-18)
Z*
where a* and a are the accelerations of the moving particle p relative to the coor-
dinate frames O*X*Y*Z* and OXYZ, respectively, and a1, is the acceleration of
the starred coordinate system O* X* Y* Z* relative to the unstarred coordinate sys-
tem OXYZ. Using Eq. (3.3-14), Eq. (3.3-17) can be expressed as
+ co x (w x r*) + dw x r* + ate
d*2r* d* r* d2h
a(t) = + 2w x (3.3-19)
dt2 dt dt
With this introduction to moving coordinate systems, we would like to apply
this concept to the link coordinate systems that we established for a robot arm to
obtain the kinematics information of the links, and then apply the d'Alembert prin-
ciple to these translating and/or rotating coordinate systems to derive the motion
equations of the robot arm.
respectively. Origin 0' is located by a position vector p; with respect to the origin
0 and by a position vector p;* from the origin 0* with respect to the base coordi-
°0.O
nate system. Origin 0* is located by a position vector p;_I from the origin 0
with respect to the base coordinate system.
BCD
Let v_ i and w; _ I be the linear and angular velocities of the coordinate system
,U,
(x_1, y; _ 1, z1) with respect to the base coordinate system (xo, yo, zo), respec-
s..
tively. Let w;, and w;* be the angular velocity of 0' with respect to (xo, yo, zo)
and (x1_1, y; _ 1, z1_1), respectively. Then, the linear velocity v; and the angular
4-.
velocity w; of the coordinate system (x;, y;, z;) with respect to the base coordinate
system are [from Eq. (3.3-17)], respectively,
d* P;*
v; = dt + w; -, x P;* + v; -1 (3.3-20)
where d*( )ldt denotes the time derivative with respect to the moving coordinate
system (x,_1, y;_1, z;_1). The linear acceleration v; and the angular acceleration
(. i; of the coordinate system (x;, y;, z;) with respect to the base coordinate system
are [from Eq. (3.3-19)], respectively,
ROBOT ARM DYNAMICS 109
zr_I
d* 2p * d* p*
+ 6;_1 x p* + 2w;_, x (3.3-22)
dt2 dt
then, from Eq. (3.3-13), the angular acceleration of the coordinate system
(x;, y;, z;) with respect to (x;_1, y;_,, z;_1) is
d *w
+ w; _ i x w;* (3.3-24)
dt
t17
d* w,*
+ dt + w; _ I X w;* (3.3-25)
Recalling from the definition of link-joint parameters and the procedure for
s..
establishing link coordinate systems for a robot arm, the coordinate systems
(x,_1, y;_1, z;_1) and (x;, y;, z;) are attached to links i - 1 and i, respectively.
If link i is translational in the coordinate system (x;_1, y;_,, z;_ I), it travels in the
coo
110 ROBOTICS- CONTROL, SENSING, VISION, AND INTELLIGENCE
`-C
angular motion of link i is about the z,_1 axis. Therefore,
wi -I
Using Eq. (3.3-8), the linear velocity and acceleration of link i with respect to
.5C
a.+
d*P
x._
_ X P7 if link i is rotational
c>~
7c"
wr
(3.3-30)
dt L z, I qr if link i is translational
(a x b) (3.3-33)
ROBOT ARM DYNAMICS 111
and Eqs. (3.3-26) to (3.3-31), the acceleration of link i with respect to the refer-
ence system is [from Eq. (3.3-22)]
if link i is
r cii x p + wi x (wi X pi*) + rotational
zi_1 qi + coi x p,' + 2wi x (zi-19i) if link i is (3.3-35)
For any body, the algebraic sum of externally applied forces and the forces
'Z2
-00i
Consider a link i as shown in Fig. 3.7, and let the origin 0' be situated at its
vii
center of mass. Then, by corresponding the variables defined in Fig. 3.6 with
variables defined in Fig. 3.7, the remaining undefined variables, expressed with
respect to the base reference system (xo, Yo' zo), are:
frame
si = position of the center of mass of link i from the origin of the coordinate
system (xi, yi, zi )
pi* ; the origin of the ith coordinate frame with respect to the (i - 1)th coordi-
11,
nate system
dri
linear velocity of the center of mass of link i
.14
CAD
dt ,
dvi
ai = dt , linear acceleration of the center of mass of link i
AEI
Then, omitting the viscous damping effects of all the joints, and applying the
d'Alembert principle to each link, we have
d(mi vi)
Fi = = mi ai (3.3-36)
dt
d(Ii wi)
and Ni = dt = Ii cui + wi x (Iiwi) (3.3-37)
where, using Eqs. (3.3-32) and (3.3-35), the linear velocity and acceleration of the
center of mass of link i are, respectively,t
Vi = wi x si + vi (3.3-38)
Then, from Fig. 3.7, and looking at all the forces and moments acting on link i,
4-,
the total external force Fi and moment Ni are those exerted on link i by gravity
and neighboring links, link i - 1 and link i + 1. That is,
Fi = fi - fi+ I (3.3-40)
The above equations are recursive and can be used to derive the forces and
moments (fi, ni) at the links for i = 1, 2, ... ,n for an n-link manipulator, not-
ti.
ing that and are, respectively, the forces and moments exerted by the
".c
actually rotates qi radians in the coordinate system (xi-1, yi_ 1, zi- t) about the
zi_ I axis. Thus, the input torque at joint i is the sum of the projection of n, onto
s..
the zi_1 axis and the viscous damping moment in that coordinate system. How-
ever, if joint i is translational, then it translates qi unit relative to the coordinate
ti.
q..,
system (xi _ 1, yi _ 1, zi _ I ) along the z, _ I axis. Then, the input force Ti at that joint
bow
is the sum of the projection of fi onto the zi_ I axis and the viscous damping force
in that coordinate system. Hence, the input torque/force for joint i is
if link i is rotational
...
n,7 zi _ i+ b; qi
(3.3-45)
f,T zi-, + bigi if link i is translational
...
where bi is the viscous damping coefficient for joint i in the above equations.
If the supporting base is bolted on the platform and link 0 is stationary, then
0.,O
gx
pi)
C0)
tion of each individual link, are propagated from the base reference system to the
end-effector. For the backward recursive equations, the torques and forces exerted
on each link are computed recursively from the end-effector to the base reference
..t
system. Hence, the forward equations propagate kinematics information of each
link from the base reference frame to the hand, while the backward equations com-
pute the necessary torques/forces for each joint from the hand to the base refer-
(1]
ence system.
if link i is translational
if link i is translational
a; = wi x si + wi x (w; x -9i) + vi
Backward equations: i = n, n -1, ... , 1
Fi = mi ai
Ni = Ii w; + wi x (I; wi )
fi = Fi + fi+ i
ni = ni+ + p* x fi+ i + (pi* + s;) x Fi + Ni
nilzi_i + bi4i if link i is rotational
Ti =
fiTz _, + bi4i if link i is translational
where bi is the viscous damping coefficient for joint i.
The "usual" initial conditions are wo = cuo = vo = 0 and vo = (gs, gy, gZ)T (to include
gravity), where I g I = 9.8062 m/s2.
ROBOT ARM DYNAMICS 115
ence to coordinate frame (x;, y;, z;) to the coordinate system (x;-,, y;-,, z;-,).
This is the upper left 3 x 3 submatrix of '- 'Ai.
E"'
where
cos 0; sin 0; 0
and ['-'Rr]-l = - cos a; sin 0; cos a; cos 0; sin a; (3.3-49)
Instead of computing w;, ci;, v;, a;, p;*, s;, F;, Ni, f;, n;, and Ti which are refer-
enced to the base coordinate system, we compute 'Row;,'Roui;,'Rov;,'Roa;,'RoF;,
'RoN;, 'Rof;, 'Ron;, and 'Ror; which are referenced to its own link coordinate sys-
116 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
tem (xi, yi, zi). Hence, Eqs. (3.3-28), (3.3-29), (3.3-35), (3.3-39), (3.3-36),
(3.3-37), (3.3-43), (3.3-44), and (3.3-45), respectively, become:
'Roy; = (3.3-52)
'R;-;(zo9i + ''Roy;-1) + ('Row;) x ('Rop;*)
+ 2('Row;) x ('R;-izoQ;)
+ ('Row;) x [ ('Row;) x ('Rop;*)1 if link i is translational
(3.3-58)
(iRofi)T(`Ri-Izo) + bi'1 if link i is translational
where z° = (0, 0, 1)T, 'Rosi is the center of mass of link i referred to its own
link coordinate system (xi, yi, zi ), and 1Ropi' is the location, of (xi, yi, zi) from
the origin of (x_1, y; - I, z._1) with respect to the ith coordinate frame and is
found to be
ROBOT ARM DYNAMICS 117
ai
'R0 pi* = d, sin ai (3.3-59)
di cos a;
and (`RoIioRi) is the inertia matrix of link i about its center of mass referred to its
own link coordinate system (xi, yi, zi).
Hence, in summary, efficient Newton-Euler equations of motion are a set of
forward and backward recursive equations with the dynamics and kinematics of
each link referenced to its own coordinate system. A list of the recursive equa-
tions are found in Table 3.3.
Initial conditions:
n = number of links (n joints)
Wo=wo=vo=0 vo=g=(gx,By,Sz)T where IgI = 9.8062 m/s2
Joint variables are qi, 4r, 4i for i = 1, 2, ... , n
Link variables are i, Fi, fi, ni, Ti
Forward iterations:
N1. [Set counter for iteration] Set i -- 1.
N2. [Forward iteration for kinematics information] Compute 'Ro wi, iRowi,
'Rovi, and 'Roai using equations in Table 3.3.
N3. [Check i = n?] If i = n, go to step N4; otherwise set i - i + 1 and
.ti
'Row,
'Ri i('-'Rowi-I) if link i is translational
3
'Roi, =
'Ri-,(z0gi + '-'Ro',-1) + ('R0 ,) x ('R0pi*)
+ 2('Row,) X ('R,_,zocli)
+ ('Row,) x [ ('Row,) x ('Rop,*) ] if link i is translational
ICE
where z° = (0, 0, 1) T and b, is the viscous damping coefficient for joint i. The usual initial
conditions are coo = uio = vo = 0 and vo = (gX, g,., g_)T (to include gravity), where
IgI = 9.8062 m/s2.
Backward iterations:
N4. [Set and n,,.. ] Set and to the required force and moment,
respectively, to carry the load. If no load, they are set to zero.
ROBOT ARM DYNAMICS 119
'Row; 9nt 7n
R0 1 9n 9n
'Knit 27n 22n
Roa, 15n 14n
'RoF, 3n 0
Rof, 9(n-1) 9n-6
'RoN, 24n 18n
'Ron, 21n - 15 24n - 15
N5. [Compute joint force/torque] Compute `RoF;, 'RoN;, `Rof, , 'R0n,, and T,
with and given.
N6. [Backward iteration] If.i = 1, then stop; otherwise set i -- i - 1 and
z
go to step N5.
manipulator with revolute joints as shown in Fig. 3.2 is worked out in this section.
All the rotation axes at the joints are along the z axis perpendicular to the paper
surface. The physical dimensions, center of mass, and mass of each link and coor-
dinate systems are given in Sec. 3.2.6.
First, we obtain the rotation matrices from Fig. 3.2 using Eqs. (3.3-48) and
(3.3-49):
C, - S1 0 C2 - S2 0 -S,2
OR, _ S, C, 0 'R2 = S2 C2 0 °R2 = C12
S12 C12 0I
0
0 0 1 0 0 1 0 0 1
I
C, S, 0 I C2 S2 0 C12 S12 0
'R0 = -S1 C, 0 2R, _ -S2 C2 0 2R0 - -S12 C12 0
0 0 1 0 0 1 0 0 1
120 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
From the equations in Table 3.3 we assume the following initial conditions:
0 0 1 1 1
For i = 2, we have:
C2 S2 0 0 [0
- S2 C2 0 0 02 0 (81 + 82)
0 0 1 1 1
Using Eq. (3.3-51), compute the angular acceleration for revolute joints for
i = 1, 2:
For i = 1, with wo = wo = 0, we have:
'Bowl = 'Ro(c o + zo01 + woxzo01) = (0, 0, 1) T O1
For i = 2, we have:
' Roi, = (' Roc 1) x (' Rop1*) + ('Row,) x [('Row,) x (' Rop,*) l + ' Rovo
0 0 0 l
0 0 0 01 x 0
1 1 1 0
L J
ROBOT ARM DYNAMICS 121
For i = 2, we have:
1 0 I
x 0 x 0 x 0
0 01 +02 0
C2 S2 0
- S2 C2 0
0 0 1
Using Eq. (3.3-53), compute the linear acceleration at the center of mass for links
1 and 2:
For i = 1, we have:
.--I
!Y,
sl = 'Ross = -Si 0
- i s,
2 0
C1
0 1
- i s,
2
0
0
0 0
Thus,
l
0 0
2
1ROa1 = 0 81 x 0 0
1
j 0 61
.-.
SIN
122 ROBOTICS. CONTROL, SENSING, VISION, AND INTELLIGENCE
For i = 2, we have:
l
- 1C12 C 12 S 12 0 - ZC 12
0 0
Thus,
l
'SIN
0 0 0
2
2ROa2 = 0 x 0 x 0 X
0
8I + 62
0
61 + 62 01 + 02
III
For i = 1, we have:
's7
2 2 2
m21[-01-'h C2(01+62)
2-C20102-' S2(01+02)]-m2$(C12S2-C2S12)-'/zm1101+m1$S1
Using Eq. (3.3-57), compute the moment exerted on link i for i = 2, 1: For
i = 2, with n3 = 0, we have:
0 0 0 1 0 0
Thus,
1
rm21(S201 - C201 - 'h01 - 'h02 - 0102) + gm2S12
2
2Ron2 = x m21(C20, + 520 + '201 + '/z02) + gm2C12
0
0 0
10 0 0 0
+ 0 '/12m212 0 0
0 0 '/12 m212 0, + 02
0
0
For i = 1, we have:
1C, [ lC2 I 1
Ro P1* = 0
0 0
124 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Thus,
-, T
2
+ 'RON1
Finally, we obtain the joint torques applied to each of the joint actuators for both
links, using Eq. (3.3-58):
For i = 2, with b2 = 0, we have:
?2 = (2Ron2)T(2R1zo)
Tl = ('Ron1)T('Rozo)
= '/3m1 1281 + 4/3 m21201 + '/3m21202 + m2 C212B1
The above equations of motion agree with those obtained from the Lagrange-Euler
formulation in Sec. 3.2.6.
CAD
information of each link, obtain the kinetic and potential energies of the robot arm
to form the lagrangian function, and apply the Lagrange-Euler formulation to
obtain the equations of motion. In this section, we derive a Lagrange form of
d'Alembert equations of motion or generalized d'Alembert equations of motion
(G-D). We shall only focus on robot arms with rotary joints.
Assuming that the links of the robot arm are rigid bodies, the angular velocity
w, of link s with respect to the base coordinate frame can be expressed as a sum
of the relative angular velocities from the lower joints (see Fig. 3.8),
W, = E 0; (3.4-1)
ROBOT ARM DYNAMICS 125
Xe
where zj_1 is the axis of rotation of joint j with reference to the base coordinate
frame. Premultiplying the above angular velocity by the rotation matrix SRo
G.,
In Fig. 3.8, let T., be the position vector to the center of mass of link s from
the base coordinate frame. This position vector can be expressed as
s-1
rs = E pl* + cs (3.4-3)
1='
where cs is the position vector of the center of mass of link s from the (s - 1)th
coordinate frame with reference to the base coordinate frame.
Using Eqs. (3.4-1) to (3.4-3), the linear velocity of link s, vs, with respect to
the base coordinate frame can be computed as a sum of the linear velocities from
the lower links,
a-1 k
Vs = E Bjzj-1 x pk* + rj=1
` OiZj-1 x c, (3.4-4)
k=1 J=1 J J
126 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
The kinetic energy of link s (1 < s < n) with mass ms can be expressed as
the summation of the kinetic energies due to the translational and rotational effects
at its center of mass:
Ks = (Ks)tran + (K5).t = t/2ms(vs vs) + t/2(sROws)T I5(sROws) (3.4-5)
where IS is the inertia tensor of link s about its center of mass expressed in the sth
coordinate system.
For ease of discussion and derivation, the equations of motion due to the
CID
translational, rotational, and gravitational effects of the links will be considered and
treated separately. Applying the Lagrange-Euler formulation to the above transla-
tional kinetic energy of link s with respect to the generalized coordinate Bi
(s > i), we have
d 3(K5)tran 8(K5 )tran
dt ae; ae;
d avs I- avs
ms VS Ms Vs
dt
d''
ae; aet
ms vs avs
+ msvs .
d avs
- ms Vs
as (3.4-6)
a8; dt ae; a8;
where
avs
= z;-i x (PI, + Pi+i* + + Ps- c + cs) (3.4-7)
aOi
d a(Ks)tran a(Ks)tran
I)] (3.49)
dt ae; aei
Summing all the links from i to n gives the reaction torques due to the transla-
tional effect of all the links,
d a(K.E.)tran 1- a(K.E.)tran n
d a(Kstran a(Ks)tran
d ae; a0i dt [ aei J a0
)t
where, using Eqs. (3.4-4) and (3.3-8), the acceleration of link s is given by
s-1 k
s--`1 k k
E ejzj-I X pk* + E [e1_ 1 x Ei ejzj _ 1
= k=1 j=I J j1 J j=I J
1 rs .
r
cs
+ I E ej zj _ I X ejzj-1 X es
C is X
,
l.j=l J
I
j=1 J
t
r
P-I
E egzq_I x epzp-I X pk*
_q=1
r
P-I.
(-1
Next, the kinetic energy due to the rotational effect of link s is:
(Ks)rot = I/Z(SR0w)TIs(sR0ws)
s T s
1
ej sRozj- I Is ej sRozj_ 1
(3.4-12)
2
J=I j=1
Since
a(Ks)rot
= (sRozj-I)T Is ej sRozj_ I s>i (3.4-13)
aei j=I
ae
(SRozj-1) = SRozj-1 x SRozi-I i j (3.4-14)
1
and
s de
dt (sRozj-1) _ sRozj I
j=i aeJ dt
r m
s
0.0
S
dt
Lj=1 J
r., sRozj-1
+ (5Rozi-1)"1,
'+"
F, B''Rozj-1 I + (sRoZi-I)TIS.
E Bj dd
dt
j=1 J j=1 J J
,s T
sRozi-I X EOjsRozj-1J Is E BjsRozj-1
j=i j=1
S
[sRozi_Ij
s
+ (sRozi-I)TIs
Nom
j=1
r r 1
"W°
S s
T
a(Ks)rot
`m.
= [e.SRoz.I1 x sRozi-I Is
[e.sRoz.I1 (3.4-17)
aei
j=1 j=I
4-,
Subtracting Eq. (3.4-17) from Eq. (3.4-16) and summing all the links from i to n
.!]
gives us the reaction torques due to the rotational effects of all the links,
I-1
.-.
(sRozi-1)TIs EBjsRozj- I
s=i L j=I
S
Bj sRozj- 1 X Bk cRozk-
+ (sRozi-I)TI5
C k=j+I
T
sRoZi-I X 1S S
The potential energy of the robot arm equals to the sum of the potential ener-
gies of each link,
P.E. = E PS (3.4-19)
S=1
acts - pi-1)
= g'mS = g'mS [zi-1 X (rs - pi-1)] (3.4-21)
ae
where pi-1 is not a function of O. Summing all the links from i to n gives the
reaction torques due to the gravity effects of all the links,
The summation of Eqs. (3.4-10), (3.4-18), and (3.4-22) is equal to the general-
ized applied torque exerted at joint i to drive link i,
a(K.E.)tran a(K.E.)rot
ae;
+
d
dt
ae,
- a(K.E.)rot
aei
+ a(P.E.)
aei
s-I k S
k=I
Eej Zj -
j=1
X pk* + [ezii1
r,
j=1
Xes )'[z;-I X (rs-Pi-I)]
11
it r rs-I k
+E ms Fi X [Oqz_1
rd X pk*
S=i k=1 HI J 1q=1 J J
[Opz_
i 11 X [e_I X cs
11
+S ' [ms r, J
1
P - I,
F Bqz _ 1 X BpZp_ I .[Zi-1 X (rs-Pi-1)1
g=1 J
r r
M._
urS
.s
r
SRozi-I X F, BvsRozn-I I,s E BgSRoZq_
P=1
J q=1 J J
_g ZiIX [Emi(fiPi_i)
j
Its
(3.4-23)
j=i
(for i = 1, 2, , n): . . .
/I
where, for i = 1, 2,
cup
. . . , n,
r»
Dij = D;jr` + D;Jan = G.i [(SRozi-I)TIs (sRoZj-1)]
S=j
11 s-1
+ s=j
E [ms Zj - 1 X pk* + cs [Zi-I x RS - pi-1)] i < j
L k=j
_ [(SRozi-I)TIS (SRozj-1)]
s=j
r
n
s=j
(3.4-25)
ROBOT ARM DYNAMICS 131
also,
k
htran(e 0) X
ins Fd X Pk*
S=i t J q=1 BgZq-
I rIi'egzq-1
X BPZP X pk* [zi-1 X (rs - Pi-1)]
q_' J J J J J
=w5
+r,
II
S'6PZP-1 BgZq-I X Cs
ms [f[ J X 9=1
S J
and
11
hirot(8 0) = EJ (SRozi-1)TIS
S=i J
'I"
Finally, we have
ci = -g
11
The dynamic coefficients Did and ci are functions of both the joint variables
and inertial parameters of the manipulator, while the hits" and h[ot are functions of
the joint variables, the joint velocities and inertial parameters of the manipulator.
These coefficients have the following physical interpretation:
1. The elements of the Di,j matrix are related to the inertia of the links in the
manipulator. Equation (3.4-25) reveals the acceleration effects of joint j acting
on joint i where the driving torque Ti acts. The first term of Eq. (3.4-25) indi-
cates the inertial effects of moving link j on joint i due to the rotational motion
of link j, and vice versa. If i = j, it is the effective inertias felt at joint i due
to the rotational motion of link i; while if i # j, it is the pseudoproducts of
inertia of link j felt at joint i due to the rotational motion of link j. The second
term has the same physical meaning except that it is due to the translational
motion of link j acting on joint i.
132 ROBOTICS: CONTROL, SENSING. VISION, AND INTELLIGENCE
2. The h,yan (0, 0) term is related to the velocities of the joint variables. Equation
(3.4-26) represents the combined centrifugal and Coriolis reaction torques felt at
joint i due to the velocities of joints p and q resulting from the translational
motion of links p and q. The first and third terms of Eq. (3.4-26) constitute,
respectively, the centrifugal and Coriolis reaction forces from all the links
below link s and link s itself, due to the translational motion of the links. If
p = q, then it represents the centrifugal reaction forces felt at joint i. If
p q, then it indicates the Coriolis forces acting on joint i. The second and
fourth terms of Eq. (3.4-26) indicate, respectively, the Coriolis reaction forces
^L7
contributed from the links below link s and link s itself, due to the translational
.a)
motion of the links.
3. The K0'(8, 0) term is also related to the velocities of the joint variables. Simi-
lar to h,tran (0, 8), Eq. (3.4-27) reveals the combined centrifugal and Coriolis
reaction torques felt at joint i due to the velocities of joints p and q resulting
boo
`gyp
from the rotational motion of links p and q. The first term of Eq. (3.4-27) indi-
p4.
cates purely the Coriolis reaction forces of joints p and q acting on joint i due
s.,
to the rotational motion of the links. The second term is the combined centrifu-
gal and Coriolis reaction forces acting on joint i. If p = q, then it indicates the
centrifugal reaction forces felt at joint i, but if p # q, then it represents the
...
Coriolis forces acting on joint i due to the rotational motion of the links.
c°',
4. The coefficient c= represents the gravity effects acting on joint i from the links ti.
above joint i.
At first sight, Eqs. (3.4-25) to (3.4-28) would seem to require a large amount
of computation. However, most of the cross-product terms can be computed very
O'5
citly showing the procedure in calculating these coefficients for every set point in
`CS
3.9. Table 3.5 summarizes the computational complexities of the L-E, N-E, and
G-D equations of motion in terms of required mathematical operations per trajec-
+-'
t n = number of degrees of freedom of the robot arm. No effort is spent here to optimize the computation.
(18n + 36)M,(15n + 15)A,(28)S (6n )A,(7n )S (3 )S
Oi , °i , P,, g
1, Orzr 1
P=1
} IJ
09z9 11 x Orzr Okrl;otk 09 rRoz9 P
r. - Pi- I 9 X Pk I 91 +1
1
1
a=1 c=1
3 'Rozi 1 I
Lr-I Z9 IJ x Pk
O
Li 1 x ( r, - Pi-I ) Or 9=11O9Z9
I) I
1
(P= ,
1 x =
'IOrzr
1]
x
[Ia=lOaza
O
n
(ins t5n2+ 3n)M (3n3t 2n't 12n)M
11," °"(0 , 0 ) 11,rn1(0 0 )
(Zn2+Zn)S (3n2+3n)S
( o3n2 + I2 n + 33)A
(20n2 + Ila -3)A (3112 - 3n )A
Figure 3.9 Computational procedure for Dip hiy,n, h f °t, and c1.
134 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
1567
Magnitude of elements
Magnitude of elements
2.879
0 000
s'
0600
'
1200 1800
Time (ms)
a
2400
' b
3000
- 3842
0 000
'
0600 1200
I
Time (ms)
1800 2400 3000
x)Magnitude of elements
Magnitude
- 00014n' I
' 0 000 - i i
_. '
--.----L-__.
0 000 0600 1200 1800 2400 3000 0 000 0600 1200 1800 2400 3000
Time (ms) Time (ms)
Acceleration effects on joint 3 (D13) Acceleration effects on joint 4 (D14)
0100 3 920
o_
X
x)
Magnitude of elements
0066 0V 1 423
0033 / / C)
w
E
C)
0 -1 073
B
0000 f'
0 000 0600 1200 1800
' I . I
on
`° -3 570
2400 3000 0 000 0600 1200 1800 2400 3000
Time (ms) Time (ms)
Acceleration effects on joint 5 (D15) Acceleration effects on joint 6 (D16)
<`°
motion are explicitly expressed in vector-matrix form and all the interaction and
a`,
coupling reaction forces can be easily identified. Furthermore, D, an, D. t httran
hP t and ci can be clearly identified as coming from the translational and the rota-
tional effects of the motion. Comparing the magnitude of the translational and
rotational effects for each term of the dynamic equations of motion, the extent of
Magnitude of elements
Magnitude of elements
0307.
0 000 0600 1200 1800 2400 3000
0247
0 000
-- --.--0- J
0600 1200 1800 2400
--L--3
3000
Time (ms) Time (ms)
Acceleration effects on joint 2 (D22) Acceleration effects on joint 3 (D23)
Magnitude of elements
Magnitude of elements
Time (ms)
Acceleration effects on joint 6 (D26)
dominance from the translational and rotational effects can be computed for each
set point along the trajectory. The less dominant terms or elements can be
neglected in calculating the dynamic equations of motion of the manipulator. This
coo
greatly aids the construction of a simplified dynamic model for control purpose.
As an example of obtaining a simplified dynamic model for a specific trajec-
tory, we consider a PUMA 560 robot and its dynamic equations of motion along a
ova
preplanned trajectory. The D, a" DIJ ht`ra", and h.ro` elements along the trajectory
are computed and plotted in Figs. 3.10 and 3.11. The total number of trajectory
set points used is 31. Figure 3.10 shows the acceleration-related elements D,t an
and DT`. Figure 3.11 shows the Coriolis and centrifugal elements hr`ra" and hTo'
_-.
These figures show the separate and combined effects from the translational and
rotational terms.
From Fig. 3.10, we can approximate the elements of the D matrix along the
trajectory as follows: (1) The translational effect is dominant for the D12, D22,
D23, D33, D45, and D56 elements. (2) The rotational effect is dominant for the
D44, D46, D55, and D66 elements. (3) Both translational and rotational effects are
dominant for the remaining elements of the D matrix. In Fig. 3.11, the elements
tran and D45" show a staircase shape which is due primarily to the round-off
10-3)
9245
Magnitude of elements
Magnitude of elements
a)
d 6246
O N
0V7 Q)
3247 -5319
Cp
c
00
0247 ___ - 7 9781 L_ _
----
W _ 1 I
0133
elements
Magnitude of elements
3933
c
Magnitude
- 1 053
o0
c
00161 I I I 2 -2 5(X)
0 000 0600 120)) 1800 24110 3(X%) 0 000 060() 1_100 180() 2400 3000
error generated by the VAX-11/780 computer used in the simulation. These ele-
ments are very small in magnitude when compared with the rotational elements.
Similarly, we can approximate the elements of the h vector as follows: (1) The
translational effect is dominant for the hl , h2, and h3 elements. (2) The rotational
effect is dominant for the h4 element. (3) Both translational and rotational effects
are dominant for the h5 and h6 elements.
7 275 - (88X)
O_
Magnitude of elements (x
x)
- 4000
Magnitude of elements
4 853
u
0
2.431 O - 8000
d
00
_ - -I LW
0090,
0 000 0600 1200 1800 2400 0 000 0600 1200 1800
3 010 8111 41
x)
x)
Magnitude of elements
Magnitude of elements
2010 5 444
1 010 2 778
0100
0 000 0600 1200 1800 2400 3000
11106 __t'__t
0 000 0600 1200
T---_i
1800 2400 3000
Time (ms) Time (ms)
Acceleration effects on joint 6 (D46) Acceleration effects on joint 5 (DS5)
10-6)
(9-01
0000 7 3 010
o_ 0
X X
Magnitude of elements
Magnitude of elements
- 1 333 2 010
O 1 010
-2 667
v
-c
en
4 o(K) 0100 - -I - i _. I 1 -.
0 000 0600 1200 1800 2400 3000 0 000 0600 1200 1800 2400 3000
6209 0000
Magnitude of elements
Magnitude of elements
7- 2
0000 12771 i i r91 I I
0 000 0600 1200 1800 2400 3000 0 (XX) 060X) 1200 1800 2400 M5%)
1170
Magnitude of elements
Magnitude of elements
0770
0371
- 0027
O (XX) OM81 12(X) )81X1 24)X) 7(XX) (( ON) 0600 1200 1800 2400 3000
Time (ms) Time (ms)
Coriolis and centrifugal effects on joint 3, h3 Coriolis and centrifugal effects on joint 4, h4
elements (x
x)
Magnitude of elements
I 113 E - 5667
E E
u
u
5033
Magnitude
b -q
() (XX) (M) 12)X) ( 80X) 24(8) 1(8X) 5(1() (X8) 1((0) 12(8) (8(8) 2_4(X) l(Xx)
Time (ms) Time (ms)
Coriolis and centrifugal effects on joint 5, h5 Coriolis and centrifugal effects on joint 6, h6
Consider the two-link manipulator shown in Fig. 3.2. We would like to derive the
generalized d'Alembert equations of motion for it. Letting m, and m2 represent
the link masses, and assuming that each link is 1 units long, yields the following
expressions for the link inertia tensors:
10 0 0 0 0 0
I1 = 0 '/12m112 0 0 '112 m212 0
0 0 112m,12 0 0 'h2 m212
Cl _S1 0 C2 - S2 0
OR, = 0 'R2 =
S1 C, S2 C2 0
0 0 1 0 0 1
C'2 -S12 01
OR2=oR
R2 = S'2 C12 0
0 0 1
and
'R0 = (OR, )T
2R0 = (OR2)T
cI 1
C12 1 [ 1C1 + 1 C12 1
2l l
cl = r1 =
+
0
=(0,0,1)I1 +(0,0,1)I2
FBI
0 0
1
NIA
r0 fCI
0 0 x
+ m1 x 1S
1
2 '
+ m, 0 x 0 x
1 1_
0 0
Thus,
To derive the hf' (0, 0) and h/01(0, 0) components, we need to consider only
the following terms in Eqs. (3.4-26) and (3.4-27) in our example because the other
terms are zero.
ROBOT ARM DYNAMICS 141
ran
m2[01 zo X (01 zo x P1*)] (zo X r2) + m1 [01 zo X (01 zo x C1)]
= 0
Thus,
'h m212S201
Therefore,
where g = (0, -g, 0)T. Thus, the gravity loading vector c becomes
m2S2l2B2 - M2 S2120162
'/2m2 S2 l28
Three different formulations for robot arm dynamics have been presented and dis-
cussed. The L-E equations of motion can be expressed in a well structured form,
but they are computationally difficult to utilize for real-time control purposes
7-+
unless they are simplified. The N-E formulation results in a very efficient set of
s..
recursive equations, but they are difficult to use for deriving advanced control
laws. The G-D equations of motion give fairly well "structured" equations at the
expense of higher computational cost. In addition to having faster computation
time than the L-E equations of motion, the G-D equations of motion explicitly
indicate the contributions of the translational and rotational effects of the links.
Such information is useful for control analysis in obtaining an appropriate approxi-
't7
REFERENCES
Further reading on general concepts on dynamics can be found in several excellent
mechanics books (Symon [1971] and Crandall et al. [1968]). The derivation of
ROBOT ARM DYNAMICS 143
.fl
matrix was first carried out by Uicker [1965]. The report by Lewis [1974] con-
tains a more detailed derivation of Lagrange-Euler equations of motion for a six-
joint manipulator. An excellent report written by Bejczy [1974] reviews the details
of the dynamics and control of an extended Stanford robot arm (the JPL arm).
The report also discusses a scheme for obtaining simplified equations of motion.
Exploiting the recursive nature of the lagrangian formulation, Hollerbach [1980]
further improved the computation time of the generalized torques based on the
lagrangian formulation.
Simplification of L-E equations of motion can be achieved via a differential
transformation (Paul [1981]), a model reduction method (Bejczy and Lee [1983]),
and an equivalent-composite approach (Luh and Lin [1981b]). The differential
transformation technique converts the partial derivative of the homogeneous
gyp,
simpler form. However, the Coriolis and centrifugal term, h;k,,,, which contains
the second-order partial derivative was not simplified by Paul [1981]. Bejczy and
"C3
Lee [1983] developed the model reduction method which is based on the homo-
geneous transformation and on the lagrangian dynamics and utilized matrix
numeric analysis technique to simplify the Coriolis and centrifugal term. Luh and
bon
Lin [1981b] utilized the N-E equations of motion and compared their terms in a
t-+
Nay
ate)
computer to eliminate various terms and then rearranged the remaining terms to
rya
s.,
efficient algorithms for computing the generalized forces/torques based on the N-E
z-.
°rn
equations of motion. Armstrong [1979], and Orin et al. [1979] were among the
first to exploit the recursive nature of the Newton-Euler equations of motion. Luh
et al. [1980a] improved the computations by referencing all velocities, accelera-
>a'
tions, inertial matrices, location of the center of mass of each link, and forces/
moments, to their own link coordinate frames. Walker and Orin [1982] extended
'-t
the N-E formulation to computing the joint accelerations for computer simulation
of robot motion.
Though the structure of the L-E and the N-E equations of motion are different,
fl..
Turney et al. [1980] explicitly verified that one can obtain the L-E motion equa-
tions from the N-E equations, while Silver [1982] investigated the equivalence of
the L-E and the N-E equations of motion through tensor analysis. Huston and
Kelly [1982] developed an algorithmic approach for deriving the equations of
-°o
0 4..
o'-
o,0
motion suitable for computer implementation. Lee et al. [1983], based on the
,.,
Neuman and Tourassis [1983] and Murray and Neuman [1984] developed
computer software for obtaining the equations of motion of manipulators in sym-
bolic form. Neuman and Tourassis [1985] developed a discrete dynamic model of
.b'
a manipulator.
144 ROBOTICS: CONTROL, SENSING, VISION. AND INTELLIGENCE
PROBLEMS
3.1 (a) What is the meaning of the generalized coordinates for a robot arm? (b) Give two
((D
different sets of generalized coordinates for the robot arm shown in the figure below. Draw
two separate figures of the arm indicating the generalized coordinates that you chose.
,72
3.2 As shown in the figure below, a particle fixed in an intermediate coordinate frame
(xi, y', x1) is located at (-1, 1, 2) in that coordinate frame. The intermediate coordinate
frame is moving translationally with a velocity of 3ti + 2tj + 4k with respect to the refer-
ence frame (xo, yo, xo) where i, j, and k are unit vectors along the x0, yo, and zo axes,
respectively. Find the acceleration of the particle with respect to the reference frame.
zp
xi
Yo
---------y
xo
3.3 With reference to Secs. 3.3.1 and 3.3.2, a particle at rest in the starred coordinate sys-
,-'
tem is located by a vector r(t) = 3ti + 2tj + 4k with respect to the unstarred coordinate
system (reference frame), where (i, j, k) are unit vectors along the principal axes of the
reference frame. If the starred coordinate frame is only rotating with respect to the refer-
ence frame with w = (0, 0, 1) T, find the Coriolis and centripetal accelerations.
3.4 Discuss the differences between Eq. (3.3-13) and Eq. (3.3-17) when (a) h = 0 and
((DD
of the cube. (a) Find the inertia tensor in the (x0, yo, zo) coordinate system. (b) Find the
inertia tensor at the center of mass in the (xcm, ycm, zcm) coordinate system.
zo
Ycm
3.6 Repeat Prob. 3.5 for this rectangular block of mass M and sides 2a, 2b, and 2c:
zo
yn
to
146 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
3.7 Assume that the cube in Prob. 3.5 is being rotated through an angle of a about the z°
axis and then rotated through an angle of 0 about the u axis. Determine the inertia tensor in
the (x°, y0, z°) coordinate system.
7i'
3.8 Repeat Prob. 3.7 for the rectangular block in Prob. 3.6.
3.9 We learned that the Newton-Euler formulation of the dynamic model of a manipulator
,V.
is computationally more efficient than the Lagrange-Euler formulation. However, most
researchers still use the Lagrange-Euler formulation. Why is this so? (Give two reasons.).
-3.10 A robotics researcher argues that if a robot arm is always moving at a very slow
speed, then its Coriolis and centrifugal forces/torques can be omitted from the equations of
motion formulated by the Lagrange-Euler approach. Will these "approximate" equations of
motion be computationally more efficient than the Newton-Euler equations of motion?
Explain and justify your answer.
3.11 We discussed two formulations for robot arm dynamics in this chapter, namely, the
i7'
Eat
Lagrange-Euler formulation and the Newton-Euler formulation. Since they describe the
--i
same physical system, their equations of motion should be "equivalent." Given a set point
on a preplanned trajectory at time t, , (q°(t, ), gd(t1 ), gd(ti )), one should be able to find
.01
the D(gd(t1 )), the h(q"(tI )), gd(ti )), and the c(gd(t1 )) matrices from the L-E equations
r'+
of motion. Instead of finding them from the L-E equations of motion, can you state a pro-
was
cedure indicating how you can obtain the above matrices from the N-E equations of motion
chi
3.12 The dynamic coefficients of the equations of motion of a manipulator can be obtained
from the N-E equations of motion using the technique of probing as discussed in Prob.
-O"
3.11. Assume that N multiplications and M additions are required to compute the torques
pN.
.fl
applied to the joint motors for a particular robot. What is the smallest number of multiplica-
tions and additions in terms of N, M, and n needed to find all the elements in the D(q)
..h
CAD
matrix in the L-E equations of motion, where n is the number of degrees of freedom of the
'°h
°+,
..t
robot?
3.13 In the Lagrange-Euler derivation of equations of motion, the gravity vector g given in
's.
Eq. (3.3-22) is a row vector of the form (0, 0, - IgI, 0), where there is a negative sign
for a level system. In the Newton-Euler formulation, the gravity effect as given in Table 3.2
...
is (0, 0, g I ) ' for a level system, and there is no negative sign. Explain the discrepancy.
3.14 In the recursive Newton-Euler equations of motion referred to its own link coordinate
frame, the matrix ('1° Ii °R,) is the inertial tensor of link i about the ith coordinate frame.
Derive the relationship between this matrix and the pseudo-inertia matrix J; of the
'-'
energy of the Lagrange-Euler and Newton-Euler equations of motion in the following table
-a=
"'+
Lagrange-Euler Newton-Euler
Angular velocity
Kinetic energy
ROBOT ARM DYNAMICS 147
3.16 The two-link robot arm shown in the figure below is attached to the ceiling and under
the influence of the gravitational acceleration g = 9.8062 m/sec'-; (x0, y0, z0) is the refer-
ence frame; 01, 02 are the generalized coordinates; d1, d2 are the lengths of the links; and
~O.
m1 , m2 are the respective masses. Under the assumption of lumped equivalent masses, the
mass of each link is lumped at the end of the link. (a) Find the link transformation matrices
'-'A;, i = 1, 2. (b) Find the pseudo-inertia matrix J; for each link. (c) Derive the
Cry
Lagrange-Euler equations of motion by first finding the elements in the D(O), h(O, 8), and
'-+
c(0) matrices.
C/
3.17 Given the same two-link robot arm as in Prob. 3.16, do the following steps to derive
the Newton-Euler equations of motion and then compare them with the Lagrange-Euler
CAD
equations of motion. (a) What are the initial conditions for the recursive Newton-Euler
~^.
equations of motion? (b) Find the inertia tensor 'R01, °R; for each link. (c) Find the other
constants that will be needed for the recursive Newton-Euler equations of motion, such as
'Rosi and 'Ro p,*. (d) Derive the Newton-Euler equations of motion for this robot arm,
assuming that 1 and 1 have zero reaction force/torque.
148 ROBOTICS- CONTROL, SENSING, VISION, AND INTELLIGENCE
3.18 Use the Lagrange-Euler formulation to derive the equations of motion for the two-link
B-d robot arm shown below, where (xo, yo, zo) is the reference frame, B and d are the
y4?
generalized coordinates, and in,, in, are the link masses. Mass in, of link 1 is assumed to
be located at a constant distance r, from the axis of rotation of joint 1, and mass m2 of link
2 is assumed to be located at the end point of link 2.
CHAPTER
FOUR
PLANNING OF MANIPULATOR TRAJECTORIES
4.1 INTRODUCTION
With the discussion of kinematics and dynamics of a serial link manipulator in the
previous chapters as background, we now turn to the problem of controlling the
manipulator so that it follows a preplanned path. Before moving a robot arm, it is
of considerable interest to know whether there are any obstacles present in its path
(obstacle constraint) and whether the manipulator hand must traverse a specified
path (path constraint). These two constraints combined give rise to four possible
control modes, as tabulated in Table 4.1. From this table, it is noted that the con-
trol problem of a manipulator can be conveniently divided into two coherent
subproblems-motion (or trajectory) planning and motion control. This chapter
focuses attention on various trajectory planning schemes for obstacle-free motion.
It also deals with the formalism of describing the desired manipulator motion as
sequences of points in space (position and orientation, of the manipulator) through
`O0
which the manipulator must pass, as well as the space curve that it traverses. The
.p"
space curve that the manipulator hand moves along from the initial location (posi-
tion and orientation) to the final location is called the path. We are interested in
developing suitable formalisms for defining and describing the desired motions of
A..
CAD
4-,
not suitable as a working coordinate system because the joint axes of most manipu-
lators are not orthogonal and they do not separate position from orientation. If
C],
joint coordinates are desired at these locations, then the inverse kinematics solution
routine can be called upon to make the necessary conversion.
Quite frequently, there exists a number of possible trajectories between the
two given endpoints. For example, one may want to move the manipulator along
149
150 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
'ZS
and orientation constraints at both endpoints (joint-interpolated trajectory). In this
chapter, we discuss the formalisms for planning both joint-interpolated and
straight-line path trajectories. We shall first discuss simple trajectory planning that
satisfies path constraints and then extend the concept to include manipulator
dynamics constraints.
A systematic approach to the trajectory planning problem is to view the trajec-
tory planner as a black box, as shown in Fig. 4.1. The trajectory planner accepts
`r1
input variables which indicate the constraints of the path and outputs a sequence of
'"'
coo
nates, from the initial location to the final location. Two common approaches are
used to plan manipulator trajectories. The first approach requires the user to expli-
citly specify a set of constraints (e.g., continuity and smoothness) on position,
velocity, and acceleration of the manipulator's generalized coordinates at selected
locations (called knot points or interpolation points) along the trajectory. The tra-
jectory planner then selects a parameterized trajectory from a class of functions
C),
(usually the class of polynomial functions of degree n or less, for some n, in the
mph
time interval [to, tf]) that "interpolates" and satisfies the constraints at the inter-
polation points. In the second approach, the user explicitly specifies the path that
the manipulator must traverse by an analytical function, such as a straight-line path
=me
Path constraints
Manipulator's
dynamics
constraints
the manipulator hand traverses. Hence, the manipulator hand may hit obstacles
with no prior warning. In the second approach, the path constraints are specified
in cartesian coordinates, and the joint actuators are servoed in joint coordinates.
Hence, to find a trajectory that approximates the desired path closely, one must
convert the Cartesian path constraints to joint path constraints by some functional
approximations and then find a parameterized trajectory that satisfies the joint path
constraints.
The above two approaches for planning manipulator trajectories should result
(D'
in simple trajectories that are meant to be efficient, smooth, and accurate with a
fast computation time (near real time) for generating the sequence of control set
points along the desired trajectory of the manipulator. However, the sequences of
the time-based joint-variable space vectors {q(t), 4(t), q(t)} are generated
without taking the dynamics of the manipulator into consideration. Thus, large
tracking errors may result in the servo control of the manipulator. We shall dis-
cuss this problem in Sec. 4.4.3. This chapter begins with a discussion of general
issues that arise in trajectory planning in Sec. 4.2; joint-interpolated trajectory in
i,3
Sec. 4.3; straight-line trajectory planning in Sec. 4.4; and a cubic polynomial tra-
jectory along a straight-line path in joint coordinates with manipulator dynamics
taken into consideration in Sec. 4.4.3. Section 4.5 summarizes the results.
manipulator hand's position, velocity, and acceleration are planned, and the
corresponding joint positions, velocities, and accelerations are derived from the
hand information. Planning in the joint-variable space has three advantages: (1)
the trajectory is planned directly in terms of the controlled variables during
motion, (2) the trajectory planning can be done in near real time, and (3) the joint
trajectories are easier to plan. The associated disadvantage is the difficulty in
determining the locations of the various links and the hand during motion, a task
that is usually required to guarantee obstacle avoidance along the trajectory.
In general, the basic algorithm for generating joint trajectory set points is quite
simple:
t=to;
loop: Wait for next control interval;
t = t + At;
h (t) = where the manipulator joint position should be at time t;
If t = t f, then exit;
go to loop;
function (or trajectory planner) h(t) which must be updated in every control inter-
val. Thus, four constraints are imposed on the planned trajectory. First, the tra-
jectory set points must be readily calculable in a noniterative manner. Second,
.-'
the continuity of the joint position and its first two time derivatives must be
Cat
The above four constraints on the planned trajectory will be satisfied if the
r93
the joint trajectory for a given joint (say joint i) uses p polynomials, then
3(p + 1) coefficients are required to specify initial and terminal conditions (joint
position, velocity, and acceleration) and guarantee continuity of these variables at
the polynomial boundaries. If an additional intermediate condition such as position
..y
C.,
'C7 'L3
.CD
tion. In general, two intermediate positions may be specified: one near the initial
position for departure and the other near the final position for arrival which will
.a:
(4-3-4) trajectory segments, two cubics and one quintic (3-5-3) trajectory segments,
P')
or five cubic (3-3-3-3-3) trajectory segments. This will be discussed further in the
next section.
PLANNING OF MANIPULATOR TRAJECTORIES 153
For Cartesian path control, the above algorithm can be modified to:
CDW.
t=to; °`0
loop: Wait for next control interval;
t=t+At;
H (t) = where the manipulator hand should be at time t;
Q [ H (t) ] = joint solution corresponding to H (t) ;
'C!
If t = t f, then exit;
go to loop;
Here, in addition to the computation of the manipulator hand trajectory function
CAD
H(t) at every control interval, we need to convert the Cartesian positions into their
corresponding joint solutions, Q[H(t)]. The matrix function H(t) indicates the
desired location of the manipulator hand at time t and can be easily realized by a
4 x 4 transformation matrix, as discussed in Sec. 4.4.
Generally, Cartesian path planning can be realized in two coherent steps:
'CJ
`G'
ing a class of functions to link these knot points (or to approximate these path seg-
ments according to some criteria. For the latter step, the criteria chosen are quite
p'.
.-.
often dictated by the following control algorithms to ensure the desired path track-
o...
ing. There are two major approaches for achieving it: (1) The Cartesian space-
oriented method in which most of the computation and optimization is performed
in Cartesian coordinates and the subsequent control is performed at the hand
level. t The servo sample points on the desired straight-line path are selected at a
fixed servo interval and are converted into their corresponding joint solutions in
real time while controlling the manipulator. The resultant trajectory is a piecewise
straight line. Paul [1979], Taylor [1979], and Luh and Lin [1981] all reported
r'3
methods for using a straight line to link adjacent cartesian knot points. (2) The
joint space-oriented method in which a low-degree polynomial function in the
ono
joint-variable space is used to approximate the path segment bounded by two adja-
cent knot points on the straight-line path and the resultant control is done at the
joint level. $ The resultant cartesian path is a nonpiecewise straight line. Taylor's
.-.
bounded deviation joint path (Taylor [1979]) and Lin's cubic polynomial trajectory
rte.
method (Lin et al. [1983]) all used low-degree polynomials in the joint-variable
space to approximate the straight-line path.
"-w
straight-line path. However, since all the available control algorithms are invari-
ably based on joint coordinates because, at this time, there are no sensors capable
s..
j' The error actuating signal to the joint actuators is computed based on the error between the target
0
cartesian position and the actual cartesian position of the manipulator hand.
$ The error actuating signal to the joint actuators is computed based on the error between the target
joint position and the actual joint position of the manipulator hand.
154 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
C13
a-"
planning requires transformations between the cartesian and joint coordinates in
'-'
real time-a task that is computationally intensive and quite often leads to longer
control intervals' Furthermore, the transformation from cartesian coordinates to
U-+
joint coordinates is ill-defined because it is not a one-to-one mapping. In addition,
'-'
if manipulator dynamics are included in the trajectory planning stage, then path
constraints are specified in cartesian coordinates while physical constraints, such as
x"1
torque and force, velocity, and acceleration limits of each joint motor, are bounded
in joint coordinates. Thus, the resulting optimization problem will have mixed
a..
constraints in two different coordinate systems.
Because of the various disadvantages mentioned above, the joint space-oriented
method, which converts the cartesian knot points into their corresponding joint
coordinates and uses low-degree polynomials to interpolate these joint knot points,
is widely used. This approach has the advantages of being computationally faster
and makes it easier to deal with the manipulator dynamics constraints. However,
it loses accuracy along the cartesian path when the sampling points fall on the
fitted, smooth polynomials. We shall examine several planning schemes in these
r-,
To servo a manipulator, it is required that its robot arm's configuration at both the
initial and final locations must be specified before the motion trajectory is planned.
In planning a joint-interpolated motion trajectory for a robot arm, Paul [1972]
showed that the following considerations are of interest:
1. When picking up an object, the motion of the hand must be directed away from
an object; otherwise the hand may crash into the supporting surface of the
'-n
object.
2. If we specify a departure position (lift-off point) along the normal vector to the
surface out from the initial position and if we require the hand (i.e., the origin
of the hand coordinate frame) to pass through this position, we then have an
1U+
point out from the surface and then slow down to the final position) so that the
correct approach direction can be obtained and controlled.
4. From the above, we have four positions for each arm motion: initial, lift-off,
set-down, and final (see Fig. 4.2).
5. Position constraints
(a) Initial position: velocity and acceleration are given (normally zero).
...
Joint i
0(tj) Final
9(t2)
Time
Figure 4.2 Position conditions for a joint trajectory.
6. In addition to these constraints, the extrema of all the joint trajectories must be
within the physical and geometric limits of each joint.
7. Time considerations
.CD
(a) Initial and final trajectory segments: time is based on the rate of approach
of the hand to and from the surface and is some fixed constant based on the
characteristics of the joint motors.
(b) Intermediate points or midtrajectory segment: time is based on maximum
velocity and acceleration of the joints, and the maximum of these times is
Sao
used (i.e., the maximum time of the slowest joint is used for normaliza-
tion).
The constraints of a typical joint trajectory are listed in Table 4.2. Based on
these constraints, we are concerned with selecting a class of polynomial functions
`CD
of degree n or less such that the required joint position, velocity, and acceleration
at these knot points (initial, lift-off, set-down, and final positions) are satisfied, and
the joint position, velocity, and acceleration are continuous on the entire time inter-
CS,
.`3
joint i,
where the unknown coefficients aj can be determined from the known positions
and continuity conditions. However, the use of such a high-degree polynomial to
interpolate the given knot points may not be satisfactory. It is difficult to find its
extrema and it tends to have extraneous motion. An alternative approach is to split
the entire joint trajectory into several trajectory segments so that different interpo-
lating polynomials of a lower degree can be used to interpolate in each trajectory
156 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Intermediate positions:
4. Lift-off position (given)
5. Lift-off position (continuous with previous trajectory segment)
6. Velocity (continuous with previous trajectory segment)
7. Acceleration (continuous with previous trajectory segment)
8. Set-down position (given)
9. Set-down position (continuous with next trajectory segment)
10. Velocity (continuous with next trajectory segment)
11. Acceleration (continuous with next trajectory segment)
Final position:
12. Position (given)
13. Velocity (given, normally zero)
14. Acceleration (given, normally zero)
segment. There are different ways a joint trajectory can be split, and each method
possesses different properties. The most common methods are the following:
4-3-4 Trajectory. Each joint has the following three trajectory segments: the first
segment is a fourth-degree polynomial specifying the trajectory from the initial
position to the lift-off position. The second trajectory segment (or midtrajec-
tory segment) is a third-degree polynomial specifying the trajectory from the
lift-off position to the set-down position. The last trajectory segment is a
fourth-degree polynomial specifying the trajectory from the set-down position
to the final position.
3-5-3 Trajectory. Same as 4-3-4 trajectory, but uses polynomials of different
degrees for each segment: a third-degree polynomial for the first segment, a
fifth-degree polynomial for the second segment, and a third-degree polynomial
000
Note that the foregoing discussion is valid for each joint trajectory; that is,
each joint trajectory is split into either a three-segment or a five-segment trajec-
tory. The number of polynomials for a 4-3-4 trajectory of an N-joint manipulator
will have N joint trajectories or N x 3 = 3N trajectory segments and 7N polyno-
mial coefficients to evaluate plus the extrema of the 3N trajectory segments. We
PLANNING OF MANIPULATOR TRAJECTORIES 157
shall discuss the planning of a 4-3-4 joint trajectory and a 5-cubic joint trajectory
J n the next section.
T - Ti 1
and h, (t) = an4t4 + an3t3 + aii2t2 + anlt + ari0 (last segment) (4.3-4)
The subscript of each polynomial equation indicates the segment number, and n
indicates the last trajectory segment. The unknown coefficient aji indicates the ith
a-+
coefficient for the j trajectory segment of a joint trajectory. The boundary condi-
N-.
a-+
tions that this set of joint trajectory segment polynomials must satisfy are:
The boundary conditions for the 4-3-4 joint trajectory are shown in Fig. 4.3. The
first and second derivatives of these polynomial equations with respect to real time
T can be written as:
dhi(t) dhi(t) dt _ 1 dhi(t)
Vi(t)
= dr dt dT Ti - Ti_I dt
dhi(t)
1 -hi (t) i = 1, 2, n (4.3-5)
ti dt ti
and
d2hi(t) 1 d2hi(t)
ai(t) _
dT2 (Ti - Ti_ 1 )2 dt2
1 d2hi(t) _ 1
-hi (t) i = 1, 2, n (4.3-6)
IN.
For the first trajectory segment, the governing polynomial equation is of the
fourth degree:
0(r) = 0(7'+)
Joint i
0(72) = B(T2)
e(72) = 6(7-2)
0(r,,)
0(72) B(7n) = Of
of
of
TO T1 72 T Real time
From Eqs. (4.3-5) and (4.3-6), its first two time derivatives with respect to real
time are
and
1. For t = 0 (at the initial position of this trajectory segment). Satisfying the
boundary conditions at this position leads to
which yields
a0ti
a12 2
alt 2 1
h1 (t) = a14t4 + a13t3 + i I t2 + (v0t, )t + 00 t c- [0, 1] (4.3-13)
2. For t = 1 (at the final position of this trajectory segment). At this position,
'r7
1.O
we relax the requirement that the interpolating polynomial must pass through the
004
position exactly. We only require that the velocity and acceleration at this position
160 ROBOTICS- CONTROL, SENSING, VISION, AND INTELLIGENCE
have to be continuous with the velocity and acceleration, respectively, at the begin-
ning of the next trajectory segment. The velocity and acceleration at this position
are:
+ anti +
N..
= h1(1) 4a14 + 3a13 vote
all
V1 (1) VI (4.3-14)
tl tI
.'.
For the second trajectory segment, the governing polynomial equation is of the
'S.
third degree:
1. For t = 0 (at the lift-off position). Using Eqs. (4.3-5) and (4.3-6), the
velocity and acceleration at this position are, respectively,
which gives
a21 = v1 t2
and
which yields
a1 t2
a22
2
Since the velocity and acceleration at this position must be continuous with the
i7.
velocity and acceleration at the end of the previous trajectory segment respectively,
we have
L t2 t=0 tl
J t=1
(4.3-21)
or
and
1
6a23 t + 2a22 1 12a14t2 + 6a13t + 2a12
(4.3-23)
t22
J t=0 ...
t12
J t=1
or
2. For t = 1 (at the set-down position). Again the velocity and acceleration at
this position must be continuous with the velocity and acceleration at the beginning
of the next trajectory segment. The velocity and acceleration at this position are
obtained, respectively, as:
and
For the last trajectory segment, the governing polynomial equation is of the
obi
fourth degree:
w..
Using Eqs. (4.3-5) and (4.3-6), its first and second derivatives with respect to real
time are
and
1. For t = 0 (at the final position of this segment). Satisfying the boundary
conditions at this final position of the trajectory, we have
4n(0) and
Vf =
tn
_ - (4.3-33)
tn
which gives
and = Vftn
and
_ hn (0) _ 2an2
of t2 (4.3-34)
n t2n
which yields
a f to
ant 2
aftnz
ai4 - an3 + - Vftn + Of = 02(1) (4.3-35)
PLANNING OF MANIPULATOR TRAJECTORIES 163
and
and
(4.3-37)
2
tn
The velocity and acceleration continuity conditions at this set-down point are
h,,(-1) hn(-1)
Y-+
h2(1) h2(1)
t2 to
and 2
t2
- 2
to
(4.3-38)
or
to t2 t2 t2
and
6a23 2a22
2
=0
n
The difference of joint angles between successive trajectory segments can be found
.fl
to be
2
(4.3-41)
`0N
and
2
a n
S,, = 0 f - 02 = hn (0) - hn (-1) ai4 + an3 - (4.3-43)
"`"
2 + V ftn
y = Cx (4.3-44)
where
a0ti2
y= 61 - - v0t, , -aot1 - vo , - a0 , S2, (4.3-45)
2
T
aftn
- aftn + Vf,'af, Sn + 2 - VftnJ
1 1 0 0 0 0 0
4/t1 -1/t2 0
r-.
31t1 0 0 0
61t 2 12/t2 0 -2/t2 0 0 0
C = 0 0 1 1 1 0 0 (4.3-46)
0 0 11t2 2/t2 3/t2 -31t, 4/tn
0 0 0 2/t22 6/t22 6/t,2 -12/t,?
0 0 0 0 0 1 -1
Then the planning of the joint trajectory (for each joint) reduces to solving the
matrix vector equation in Eq. (4.3-44):
Yi = E cijxj (4.3-48)
j=1
or x = C -ly (4.3-49)
The structure of matrix C makes it easy to compute the unknown coefficients and
the inverse of C always exists if the time intervals ti, i = 1, 2, n are positive
values. Solving Eq. (4.3-49), we obtain all the coefficients for the polynomial
Cs'
for the last trajectory segment, after obtaining the coefficients ani from the above
A'+
PLANNING OF MANIPULATOR TRAJECTORIES 165
matrix equation, we need to reconvert the normalized time back to [0, 1]. This
can be accomplished by substituting t = t + 1 into t in Eq. (4.3-29). Thus we
obtain
ai4t4 + ( -4an4 + an3)t3 + (6an4 - 3a,,3 + an2)t2
h,, (t) =
+( 3a,,3 - 2an2 + anI)t
The resulting polynomial equations for the 4-3-4 trajectory, obtained by solving the
above matrix equation, are listed in Table 4.3. Similarly, we can apply this tech-
nique to compute a 3-5-3 joint trajectory. This is left as an exercise to the reader.
The polynomial equations for a 3-5-3 joint trajectory are listed in Table 4.4.
continuity in the first and second derivatives at the interpolation points is known as
'-'
cubic spline functions. The degree of approximation and smoothness that can be
'"'
C-.
case of cubic splines, the first derivative represents continuity in the velocity and
ax)
the second derivative represents continuity in the acceleration. Cubic splines offer
several advantages. First, it is the lowest degree polynomial function that allows
'CS
The general equation of five-cubic polynomials for each joint trajectory seg-
ment is:
hi(t) = aj3t3 + aj2t2 + at + ado j = 1, 2, 3, 4, n (4.3-51)
with r j -I < z < r 1 and t e [ 0, 1 ] . The unknown coefficient aj1 indicates the ith
coefficient for joint j trajectory segment and n indicates the last trajectory segment.
In using five-cubic polynomial interpolation, we need to have five trajectory
segments and six interpolation points. However, from our previous discussion, we
I--.
only have four positions for interpolation, namely, initial, lift-off, set-down, and
final positions. Thus, two extra interpolation points must be selected to provide
enough boundary conditions for solving the unknown coefficients in the polynomial
BCD _O_
sequences. We can select these two extra knot points between the lift-off and set-
down positions. It is not necessary to know these two locations exactly; we only
require that the time intervals be known and that continuity of velocity and
o2.
acceleration be satisfied at these two locations. Thus, the boundary conditions that
this set of joint trajectory segment polynomials must satisfy are (1) position con-
straints at the initial, lift-off, set-down, and final positions and (2) continuity of
velocity and acceleration at all the interpolation points. The boundary conditions
166 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
h1(1) 461 a
V1 3v0 - a0t1
t1 t1 tI
t1
h2(t) _ 52 V, t2 -
a, tz 1
t3 +
C2a1 t2
+ (V1t2)t + 81
2 J 2
t2 + (V2t,,)t + 02
`'0
t3 +
2
L J J
t1 t1 t2 3 t2 t, 2t,,
g =t -+ -2t + 2 + -
c+7
3t2
t2 t, tI
PLANNING OF MANIPULATOR TRAJECTORIES 167
_,N
2
aot l
2 z
2 2
NIA
32 tz + a2 2 al t2
NIA
+ t3 + t2
J L
2J
+ (v1t2)t + B1
V2 =
h2(1)
t2
36
to
-2vf+ aft
2
6n - Vfto +
anz t3 + (-36 + 3vft - aftn )t2
hn(t) =
r aftn2
36 - 2v t' + t + 82
n f 2 J
168 ROBOTICS: CONTROL, SENSING, VISION. AND INTELLIGENCE
q(t)
for a five-cubic joint trajectory are shown in Fig. 4.4, where the underlined vari-
ables represent the known values before calculating the five-cubic polynomials.
The first and second derivatives of the polynomials with respect to real time
are:
ti ti
Given the positions, velocities, and accelerations at the initial and final positions,
'C3
the polynomial equations for the initial and final trajectory segments [h1 (t) and
h, (t) ] are completely determined. Once these, polynomial equations are calcu-
.L1
lated, h2(t), h3 (t), and h4 (t) can be determined using the position constraints and
continuity conditions.
For the first trajectory segment, the governing polynomial equation is
fin
vo o hi (0) = all
(4 . 3 - 56 )
tl tI
PLANNING OF MANIPULATOR TRAJECTORIES 169
from which
all = vote
h1 (0) 2a12
ao p-
all
=
'r7
and (4.3-57)
t2 t2
which yields
aotlz
a12 2
aotl
a13 =61 -vote - 2
(4.3-59)
With this polynomial equation, the velocity and acceleration at t = 1 are found to
be
-o -
h1(1) 361 - (aoti)/2 - 2vot1 = aotl
- 2v o -
all
361 (4.3-61)
tI VI1
t1 tI 2
..y
and t2
= a1 = 2
ti
_
t - - - 2ao
tI
(4.3-62)
...
The velocity and acceleration must be continuous with the velocity and
acceleration at the beginning of the next trajectory segment.
(4.3-65)
hn (1) = an3 + ant + and + 04 = Of
h,, (1) 3an3 + 2an2 + an
to
=vf= 1
(4.3-66)
tn
Solving the above three equations for the unknown coefficients ai3, an2, an1, we
obtain
aft"
S"-vft"+ t3 + (-36, + 3vftn - aftn2)t2
'-,
hn(t) = 2
1
aft,,2
3Sn - 2v ftn +
+
tie
t + 04 (4.3-68)
2
so that
a21 = VI t2
alt2
a22 =
2
PLANNING OF MANIPULATOR TRAJECTORIES 171
a,t22
h2(t) = a23t3 + t2 + (vl t2 )t + 01 (4.3-73)
L 2 J
h2(1) 3a23 + a1 t2 + v1 t2
= V2 = = v1 + a1 t2 + 3a23 (4.3-75)
t2 t2 t2
'.O
Note that 02, v2, and a2 all depend on the value of a23 .
A h3(0)
_-_ a31 h2(1)
III
V2 = (4.3-79)
t3 t3 t2
so that
a31 = V2 t3
which yields
a2 t3
a32 =
2
172 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
a2 t3
h3 (t) = a33 t3 + t2 + V2 t3 t + 82 (4.3-81)
L2J
At t = 1, we obtain the velocity and acceleration which are continuous with the
'en
velocity and acceleration at the beginning of the next trajectory segment.
''C
,
a232
=83=02 + v2 t3 + + a33
..m
h3 (1) (4.3-82)
``N
(4.3-83)
t3 t3 t3
6a33 + a2 3 6a33
and = a3 = = a2 + (4.3-84)
t2
3 t33
Note that B3, v3, and a3 all depend on a33 and implicitly depend on a23 .
.ti
2
t3
h4 (0) = a40 = 63 = 02 + V2 t3 + a + a33 (4.3-86)
2
V3 =
h4(0)
t4
_-_ a4I
t4 ,
h3(1)
t3
(4.3-87)
which gives
a41 = v3 4
_ _ _
and a3 - h4(0)
t4
2
2a42
t4
2
h3(1)
t3 (4.3-88)
which yields
2
a3 t4
a42 2
PLANNING OF MANIPULATOR TRAJECTORIES 173
where 03, v3, and a3 are given in Eqs. (4.3-82), (4.3-83), and (4.3-84), respec-
l+J
tively, and a23, a33, a43 remain to be found. In order to completely determine the
O.'
polynomial equations for the three middle trajectory segments, we need to deter-
j..
mine the coefficients a23, a33, and a43. This can be done by matching the condi-
tion at the endpoint of trajectory h4 (t) with the initial point of h5 (t):
2
(4.3-91)
t4 t4 t 2
a23, a33, and a43 . Solving for a23, a33, and a43, the five-cubic polynomial equa-
.--O
aot2i 1 aotI 1
S1 - v0t1 -
l
h1(t) = t
3
+ t2 + (v0t1 )t + 0 (4.3-93)
2 2
V, =
38,
t
- 2v0 -
a0t1
2 a, =
66,
t2
- 6v0
- 2a0 (4.3-94)
aI tz 1
h2 (t) = a23 t3 + t2+(vlt2)t+01
+
(4.3-95)
2 J
2
a
82 = a23 + 2 2 + VI t2 + 81 (4.3-96)
3a23 6a23
V2 = VI + a1 t2 + a2 = aI + (4.3-97)
INN
2
t2 t2
a2 t3
t2 + V2 t3 t + 92
1
h3 (t) = a33t3 +
W-4
(4.3-98)
2 J
2
a2 t3
03 = 02 + V2 t3 + + a33 (4.3-99)
2
174 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
3a33 6a33
V3 = v2 + a2 t3 + a3 = a2 + (4.3-100)
t3 t3
z
h. (t) _ r6,: - V ft, + a2" t3 + (-36 + 3Vft - aft,Z)t2 (4.3-102)
r aftn
36 - 2v ftn + 2 t + 04
L. J
36 aft. -66 6vf
v4 = tt- - 2vf + 2
a4 = 2
+ - 2af (4.3-103)
to tn
2x3
2x1
a23 = t2 D a33 - t32x2
D
a43 t4 D
(4.3-104)
with
111
(4.3-105)
U = t2 + t3 + t4 (4.3-109)
k1 = 04 - 01 - v1u - a1 2
V4 - V1 - a1 u - (a4 - aI )u/2
k2 =
3
a4 - a1
k3 =
6
d=3t4+3t3t4 +t3
A.,
(4.3-114)
So, it has been demonstrated that given the initial, the lift-off, the set-down,
and the final positions, as well as the time to travel each trajectory (ti), five-cubic
polynomial equations can be uniquely determined to satisfy all the position con-
PLANNING OF MANIPULATOR TRAJECTORIES 175
straints and continuity conditions. What we have just discussed is using a five-
cubic polynomial to spline a joint trajectory with six interpolation points. A more
general approach to finding the cubic polynomial for n interpolation points will be
`CS
discussed in Sec. 4.4.3.
"-y
the manipulator joint coordinates are not orthogonal and they do not separate posi-
tion from orientation. For a more sophisticated robot system, programming
C)'
CIO
languages are developed for controlling a manipulator to accomplish a task. In
BCD
1-t
`..1
such systems, a task is usually specified as sequences of cartesian knot points
°'.
through which the manipulator hand or end effector must pass. Thus, in describ-
CAD 'o'
ing the motions of the manipulator in a task, we are more concerned with the for-
,-.
CIO
malism of describing the target positions to which the manipulator hand has to
move, as well as the space curve (or path) that it traverses.
Paul [1979] describes the design of manipulator Cartesian paths made up of
`J°
"C7
straight-line segments for the hand motion. The velocity and acceleration of the
CAD
hand between these segments are controlled by converting them into the joint coor-
dinates and smoothed by a quadratic interpolation routine. Taylor [1979] extended
and refined Paul's method by using the dual-number quaternion representation to
,P..'.
examine their approaches in designing straight-line Cartesian paths in the next two
sections.
cartesian knot points can be computed from the inverse kinematics solution routine
and a quadratic polynominal can be used to smooth the two consecutive joint knot
points in joint coordinates for control purposes. Thus, the manipulator hand is
controlled to move along a straight line connected by these knot points. This tech-
(OD
nique has the advantage of enabling us to control the manipulator hand to track
moving objects. Although the target positions are described by transforms, they
do not specify how the manipulator hand is to be moved from one transform to
another. Paul [1979] used a straight-line translation and two rotations to achieve
CD'
176 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
the motion between two consecutive cartesian knot points. The first rotation is
about a unit vector k and serves to align the tool or end-effector along the desired
approach angle and the second rotation aligns the orientation of the tool about the
tool axis.
In general, the manipulator target positions can be expressed in the following
fundamental matrix equation:
where
°T6 = 4 x 4 homogeneous transformation matrix describing the manipula-
tor hand position and orientation with respect to the base coordinate
frame.
6Ttoo1 = 4 x 4 homogeneous transformation matrix describing the tool posi-
tion and orientation with respect to the hand coordinate frame. It
r~.
describes the tool endpoint whose motion is to be controlled.
°Cbase(t) = 4 x 4 homogeneous transformation matrix function of time describing
the working coordinate frame of the object with respect to the base
coordinate frame.
basepobj
= 4 x 4 homogeneous transformation matrix describing the desired
gripping position and orientation of the object for the end-effector
with respect to the working coordinate frame.
If the 6Ttoo1 is combined with °T6 to form the arm matrix, then 6Ttoo1 is a 4 x 4
identity matrix and can be omitted. If the working coordinate system is the same
as the base coordinate system of the manipulator, then °Cbase(t) is a 4 x 4 identity
matrix at all times.
Looking at Eq. (4.4-1), one can see that the left-hand-side matrices describe
the gripping position and orientation of the manipulator, while the right-hand-side
matrices describe the position and orientation of the feature of the object where we
-CDs
would like the manipulator's tool to grasp. Thus, we can solve for °T6 which
describes the configuration of the manipulator for grasping the object in a correct
cps
If °T6 were evaluated at a sufficiently high rate and converted into corresponding
joint angles, the manipulator could be servoed to follow the trajectory.
Utilizing Eq. (4.4-1), a sequence of N target positions defining a task can be
expressed as
0 base
O T6 ( Ttool) I =
T6 Cbase(Z) 1 ( pobj ) 1
=
1°Cbase(t)IN(basep
T6t001TN = CN(t)
PN
From the positions defined by C1(t) P; we can obtain the distance between con-
secutive points, and if we are further given linear and angular velocities, we can
obtain the time requested T to move from position i to position i + 1. Since tools
°a.
and moving coordinate systems are specified at positions with respect to the base
coordinate system, moving from one position to the next is best done by specifying
both positions and tools with respect to the destination position. This has the
advantage that the tool appears to be at rest from the moving coordinate system.
via
In order to do this, we need to redefine the present position and tools with respect
to the subsequent coordinate system. This can easily be done by redefining the P,
transform using a two subscript notation as Pi1 which indicates the position Pr
M'.
expressed with respect to the jth coordinate system. Thus, if the manipulator
C1.
The purpose of the above equation is to find P12 given P11. Thus, the motion
between any two consecutive positions i and i + 1 can be stated as a motion from
where Pr,r+1 and Pr+l,i+l represent transforms, as discussed above. Paul [1979]
used a simple way to control the manipulator hand moving from one transform to
the other. The scheme involves a translation and a rotation about a fixed axis in
space coupled with a second rotation about the tool axis to produce controlled
linear and angular velocity motion of the manipulator hand. The first rotation
serves to align the tool in the required approach direction and the second rotation
serves to align the orientation vector of the tool about the tool axis.
178 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
where
which gives
A A A
nx sx ax px
0 nA SA aA PA ny sy ay PA
Pi,i+I =A= 0 PA
(4.4-13)
0 0 1
nZ sZ aZ
0 0 0 1
B B B B
nx sx ax Px
B B B B
all
=0 nB SB aB PB ny sy ay py
and Pi+I,i+I B= (4.4-14)
0 0 0 1 B B B B
nZ sZ aZ P
0 0 0 1
Using Eq. (2.2-27) to invert Pi,i+I and multiply with Pi+I,i+,, we obtain
D(X) will correspond to a constant linear velocity and two angular velocities. The
translational motion can be represented by a homogeneous transformation matrix
C].
L(X) and the motion will be along the straight line joining P; and Pi+1. The first
rotational motion can be represented by a homogeneous transformation matrix
RA (X) and itserves- to rotate the approach vector from P; to the approach vector
at P; + ,. The second rotational motion represented by RB (X) serves to rotate the
orientation vector from P; into the orientation vector at P;+I about the tool axis.
Thus, the drive function can be represented as
CAD
where
1 0 0 Xx
0 1 0 Xy
L(X) = (4.4-17)
0 0 1 Xz
0 0 0 1
(4.4-18)
C(Xq) -S(Xq) 0 0
RB(X) = S(Xg5) C(X0) 0 0 (4.4-19)
0 0 1 0
0 0 0 1
where
[dn do da dp1
D(X) = (4.4-21)
0 0 0 1
where
-S(Xq)[S20V(X0) + C(X0)] + C(Xc)[-S0C1GV(X0)]
.-:
do = -S(Xq)[-S>GC0V(X0)] + C(x0)[00V(x0) + C(X0)]
-S(Xq)[-C0(X0)] + C(X0)[-SIS(X0)]
C>GS(X8) Xx
da = dp = Xy
C(XO) Xz
and do = do x da
Using the inverse transform technique on Eq. (4.4-16), we may solve for x, y, z
by postmultiplying Eq. (4.4-16) by RB ' (X) RA 1(X) and equating the elements of
the position vector,
x=nA(PB-PA)
Y = SA (PB - PA) (4.4-22)
z = aA (PB - PA)
By postmultiplying both sides of Eq. (4.4-16) by RB' (X) and then premultiplying
by L -' (X ), we can solve for 0 and >G by equating the elements of the third
column with the elements of the third column from Eq. (4.4-16),
then
So
0 = tan-' - Sir < 0 < 7 (4.4-27)
LCOJ
Transition Between Two Path Segments. Quite often, a manipulator has to move
on connected straight-line segments to satisfy a task motion specification or to
avoid obstacles. In order to avoid discontinuity of velocity at the endpoint of each
segment, we must accelerate or decelerate the motion from one segment to another
004
segment. This can be done by initiating a change in velocity T unit of time before
004
the manipulator reaches an endpoint and maintaining the acceleration constant until
T unit of time into the new motion segment (see Fig. 4.5). If the acceleration for
qua
each variable is maintained at a constant value from time -T to T, then the
0))
acceleration necessary to change both the position and velocity is
(_,
YBC YBA
ZBC AB = ZBA
BBC BBA
OBC OBA
where AC and AB are vectors whose elements are cartesian distances and angles
from points B to C and from points B to A, respectively.
Path
AB
A
7
Timc
From Eq. (4.4-28), the velocity and position for -T < t < T are given by
AB
ACT + AB (4.4-29)
T
r I
q(t) = ACT + AB X-2AB X + AB (4.4-30)
where
r.
all
7-
q(t) = 2T
2 T
(4.4-31)
It is noted that, as before, X represents normalized time in the range [0, 1]. The
.^1.
reader should bear in mind, however, that the normalization factors are usually
°0w
where 'JAB and OBC are defined for the motion from A to B and from B to C,
respectively, as in Eq. (4.4-23). Thus, >G will change from Y'AB to V'BC
In summary, to move from a position P; to a position P1+1, the drive function
D(X) is computed using Eqs. (4.4-16) to (4.4-27); then T6(X) can be evaluated by
Eq. (4.4-10) and the corresponding joint values can be calculated from the inverse
kinematics routine. If necessary, quadratic polynominal functions can be used to
interpolate between the points obtained from the inverse kinematics routine.
SOLUTION: Let Pi be the Cartesian knot points that the manipulator hand must
v0]
(4.4-35)
At P4. (4.4-38)
where [WORLD], [BASE], [INIT], [BO], [BR], [T6], [E], [PN], [P1],
[P2], [P3], [P4], and IN are 4 x 4 coordinate matrices. [BASE], [INIT],
[BO], and [BR] are expressed with respect to [WORLD]; [T6] is expressed
.R.
expressed with respect to [INIT ] ; [PI], [P2], and [ P3 ] are expressed with
`-+
respect to [BO]; and [P4] and IN are expressed with respect to [BR].
To move from location P0 to location P1, we use the double subscript to
describe Eq. (4.4-34) with respect to P1 coordinate frame. From Eq. (4.4-34),
!ti
we have
stand and use. However, the matrices are moderately expensive to store and com-
putations on them require more operations than for some other representations.
Furthermore, matrix representation for rotations is highly redundant, and this may
lead to numerical inconsistencies. Taylor [1979] noted that using a quaternion to
represent rotation will make the motion more uniform and efficient. He proposed
,..
(CD
technique but using a quaternion representation for rotations. This method is sim-
s.,
.CD
ple and provides more uniform rotational motion. However, it requires consider-
able real time computation and is vulnerable to degenerate manipulator
..'
configurations. The second approach, called bounded deviation joint path, requires
a motion planning phase which selects enough knot points so that the manipulator
can be controlled by linear interpolation of joint values without allowing the mani-
pulator hand to deviate more than a prespecified amount from the straight-line
..d
path. This approach greatly reduces the amount of computation that must be done
at every sample interval.
i2=j2=k2= -1
ij = k jk=i ki=j
ji= -k kj -i ik= -j
The units i, j, k of a quaternion may be interpreted as the three basis vectors of a
Cartesian set of axes. Thus, a quaternion Q may be written as a scalar part s and a
vector part v:
Q = [s + v] = s + ai + bj + ck = (s, a, b, c) (4.4-44)
Vector part of Q: ai + bj + ck
Conjugate of Q: s - (ai + bj + ck)
Norm of Q: s2 + a2 + b2 + C2
Reciprocal of Q:
s - (ai+bj+ck)
s2 + a2 + b2 + c2
Unit quaternion: s + ai + bj + ck, where s2+a2+b2+c2 = 1
It is important to note that quaternions include the real numbers (s, 0, 0, 0) with
a single unit 1, the complex numbers (s, a, 0, 0) with two units 1 and i, and the
vectors (0, a, b, c) in a three-dimensional space. The addition (subtraction) of two
quaternions equals adding (subtracting) corresponding elements in the quadruples.
The multiplication of two quaternions can be written as
Q1 Q2 = (s1 + a1 i + b1 j + c1 k)(s2 + a2i + b2 j + c2 k)
_ (s] s2 - v1 v2 + s2 V1 + S1 V2 + v1 x v2) (4.4-45)
and is obtained by distributing the terms on the right as in ordinary algebra, except
that the order of the units must be preserved. In general, the product of two vec-
tors in three-dimensional space, expressed as quaternions, is not a vector but a
quaternion. That is, Q1 = [0 + vil _ (0, a1, bl, c1) and Q2 = [0 + v2] _
(0, a2, b2, c2) and from Eq. (4.4-45),
Q1 Q2 = - V1 V2 + V1 X V2
With the aid of quaternion algebra, finite rotations in space may be dealt with
CJ'
0
S = sin and C = cos
186 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
'/2 +
i+j+k -,r3-
2
cos 60 ° + sin 60 °
i+j+k1
i+j+k
`J.
= Rot 120°
the i, j, k axes. Note that we could represent the rotations about the j and k
a"+
axes using the rotation matrices discussed in Chap. 2. However, the quaternion
0.i
(Im
'L7
gives a much simpler representation. Thus, one can change the representation
for rotations from quaternion to matrix or vice versa.
For the remainder of this section, finite rotations will be represented in quaternion
as Rot (n, 0) = [cos (0/2) + sin (0/2)n] for a rotation of angle 0 about an axis
n. Table 4.5 lists the computational requirements of some common rotation opera-
tions, using quaternion and matrix representations.
III
coordinate frame along a straight-line path between two knot points specified by F0
and F1 in time T, where each coordinate frame is represented by a homogeneous
transformation matrix,
F1 =
The motion along the path consists of translation of the tool frame's origin from po
to pi coupled with rotation of the tool frame orientation part from Ro to R1. Let
X(t) be the remaining fraction of the motion still to be traversed at time t. Then
ham.
for uniform motion, we have
t
fi(t) = T
T
(4.4-47)
where T is the total time needed to traverse the segment and t is time starting from
°,o
the beginning of the segment traversal. The tool frame's position and orientation
at time t are given, respectively, by
where Rot (n, 0) represents the resultant rotation of Ro 1R1 in quaternion form.
It is worth noting that p, - po in Eq. (4.4-48) and n and 0 in Eq. (4.4-49) need
to be evaluated only once per segment if the frame F1 is fixed. On the other hand,
if the destination point is changing, then F1 will be changing too. In this case,
p, - po, n, and 0 should be evaluated per step. This can be accomplished by the
pursuit formulation as described by Taylor [1979]. ,
If the manipulator hand is required to move from one segment to another
while maintaining constant acceleration, then it must accelerate or decelerate from
one segment to the next. In order to accomplish this, the transition must start T
time before the manipulator reaches the intersection of the two segments and com-
plete the transition to the new segment at time T after the intersection with the new
..C
segment has been reached. From this requirement, the boundary conditions for the
segment transition are
T API
P(TI - T) = P1 - T 1
(4.4-51)
188 ROBOTICS- CONTROL, SENSING, VISION, AND INTELLIGENCE
TOP2
P(T1 + T) = Pl + (4.4-52)
T2
d API
(4.4-53)
1
d OP2
dt P(t)11=T+1 = T2
(4.4-54)
where Opi = PI - P2, AP2 = P2 - PI, and T1 and T2 are the traversal times
1a.
for the two segments. If we apply a constant acceleration to the transition,
d2
p(t) = ap (4.4-55)
dtz
then integrating the above equation twice and applying the boundary conditions
gives the position equation of the tool frame,
where t' = T1 - t is the time from the intersection of two segments. Similarly,
the orientation equation of the tool frame is obtained as
(T + t')2
n2, 02 (4.4-57)
4TT2
where
The above equations for the position and orientation of the tool frame along the
straight-line path produce a smooth transition between the two segments. It is
worth pointing out that the angular acceleration will not be constant unless the axes
n1 and n2 are parallel or unless one of the spin rates
02
z
or 02 =
T2
2
is zero.
Bounded Deviation Joint Path. The cartesian path control scheme described
above requires a considerable amount of computation time, and it is difficult to
deal with the constraints on the joint-variable space behavior of the manipulator in
real time. Several possible ways are available to deal with this problem. One
PLANNING OF MANIPULATOR TRAJECTORIES 189
could precompute and store the joint solution by simulating the real time algorithm
c°).0
coo
before the execution of the motion. Then motion execution would be trivial as the
`'.
servo set points could be read readily from memory. Another possible way is to
precompute the joint solution for every nth sample interval and then perform joint
interpolation using low-degree polynominals to fit through these intermediate points
'C7
to generate the servo set points. The difficulty of this method is that the number
of intermediate points required to keep the manipulator hand acceptably close to
`C1
the cartesian straight-line path depends on the particular motion being made. Any
predetermined interval small enough to guarantee small deviations will require a
tea)
043H
cob
Taylor [1979] proposed a joint variable space motion strategy called bounded devi-
Q..
ation joint path, which selects enough intermediate points during the preplanning
phase to guarantee that the manipulator hand's deviation from the cartesian
straight-line path on each motion segment stays within prespecified error bounds.
.o,
coo
The scheme starts with a precomputation of all the joint solution vectors q;
corresponding to the knot points F; on the desired cartesian straight-line path. The
CAD
joint-space vectors q; are then used as knot points for a joint-variable space inter-
polation strategy analogous to that used for the position equation of the cartesian
Q.°
control path. That is, for motion from the knot point q0 to q1, we have
T1 - t
q(t) = q1 - Ti
Oqi (4.4-58)
q(t')=qi- (T - t')2
4TT1
q +
(T _ t')2
47-T2
Oq2 (4.4-59)
where Oq, = q, - q2, Oq2 = q2 - q1, and T1, T2, r, and t' have the same
meaning as discussed before. The above equations achieve uniform velocity
'CD
between the joint knot points and make smooth transitions with constant accelera-
tion between segments. However, the tool frame may deviate substantially from
the desired straight-line path. The deviation error can be seen from the difference
.D,
between the Fi(t), which corresponds to the manipulator hand frame at the joint
'l7
knot point qj(t), and Fd(t), which corresponds to the the manip ilator hand frame
at the cartesian knot point Fl(t). Defining the displacement and rotation deviations
respectively as
= 101
190 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
and specifying the maximum deviations, Sp ax and SR ax for the displacement and
orientation parts, respectively, we would like to bound the deviation errors as
Smax G smax
bp p and bR R (4.4-62)
With this deviation error bounds, we need to select enough intermediate points
between two consecutive joint knot points such that Eq. (4.4-62) is satisfied. Tay-
lor [1979] presented a bounded deviation joint path which is basically a recursive
bisector method for finding the intermediate points such that Eq. (4.4-62) is
satisfied. The algorithm converges quite rapidly to produce a good set of inter-
mediate points, though they are not a minimal set. His algorithm is as follows.
Err
Algorithm BDJP: Given the maximum deviation error bounds SP ax and SR 'ax
for the position and orientation of the tool frame, respectively, and the carte-
sian knot points Fi along the desired straight-line path, this algorithm selects
enough joint knot points such that the manipulator hand frame will not deviate
more than the prespecified error bounds along the desired straight-line path.
Si. Compute joint solution. Compute the joint solution vectors q0 and qI
corresponding to F0 and F1, respectively.
S2. Find joint space midpoint. Compute the joint-variable space midpoint
q»z =qi- 1/z Aqi
where Oq1 = qI - q0, and use q,,, to compute the hand frame F,,,
corresponding to the joint values q,,,.
S3. Find cartesian space midpoint. Compute the corresponding cartesian path
midpoint Fc:
Po + Pi
PC =
2
and R, = RI Rot ni, - 2 1
where Rot (n, 0) = Ro I R1.
S4. Find the deviation errors. Compute the deviation error between Fand
F,
Sp = I An - PC SR = I angle part of Rot (n, 0) = RC I R,,, I
= ICI
S5. Check error bounds. If Sp 5 Sp ax and S R S gR ax, then stop. Otherwise,
compute the joint solution vector qc corresponding to the cartesian space
midpoint FC, and apply steps S2 to S5 recursively for the two subseg-
ments by replacing FI with Fc and Fc with F0.
Taylor [1979] investigated the rate of convergence of the above algorithm for a
c-.
cylindrical robot (two prismatic joints coupled with a rotary joint) and found that it
ranges from a factor of 2 to a factor of 4 depending on the positions of the mani-
pulator hand.
In summary, the bounded deviation joint path scheme relies on a preplanning
phase to interpolate enough intermediate points in the joint-variable space so that
the manipulator may be driven in the joint-variable space without deviating more
than a prespecified error from the desired straight-line path.
'-.
{q(t), 4(t), q(t)} along the desired cartesian path without taking the dynamics of
the manipulator into consideration. However, the actuator of each joint is subject
.w.
to saturation and cannot furnish an unlimited amount of torque and force. Thus,
CI-
torque and force constraints must be considered in the planning of straight-line tra-
CAD
jectory. This suggests that the control of the manipulator should be considered in
two coherent phases of execution: off-line optimum trajectory planning, followed
(~=p
must either convert the cartesian path into joint paths by some low-degree polyno-
.-.
mial function approximation and optimize the joint paths and control the robot at
,.d
the joint level (Lee and Chung [1984]); or convert the joint torque and force
00p
bounds into their corresponding cartesian bounds and optimize the cartesian path
CQt
,._
and control the robot at the hand level (Lee and Lee [1984]).
:D_
04x.. ,..
the joint-variable space. Lin et al. [1983] proposed a set of joint spline functions
to fit the segments among the selected knot points along the given cartesian path.
i00
This approach involves the conversion of the desired cartesian path into its func-
tional representation of N joint trajectories, one for each joint. Since no transfor-
C/1
mation is known to map the straight-line path into its equivalent representation in
the joint-variable space, the curve fitting methods must be used to approximate the
A..
space, one can select enough knot points along the path, and each path segment
specified by two adjacent knot points can then be interpolated by N joint polyno-
CS.
mial functions, one function for each joint trajectory. These functions must pass
`?. .a?
,On..
through the selected knot points. Since cubic polynomial trajectories are smooth
and have small overshoot of angular displacement between two adjacent knot
points, Lin et al. [1983] adopted the idea of using cubic spline polynomials to fit
the segment between two adjacent knots. Joint displacements for the n - 2
192 ROBOTICS: CONTROL, SENSING. VISION, AND INTELLIGENCE
in.
tion on the entire trajectory for the Cartesian path, two extra knot points with
0.°r
unspecified joint displacements must be added to provide enough degrees of free-
Q,.
'L7
dom for solving the cubic polynomials under continuity conditions. Thus, the total
number of knot points becomes n and each joint trajectory consists of n - 1
'-'
CD'
piecewise cubic polynomials. Using the continuity conditions, the two extra knot
'=t
points are then expressed as a combination of unknown variables and known con-
stants. Then, only n - 2 equations need to be solved. The resultant matrix equa-
tion has a banded structure which facilitates computation. After solving the matrix
'°G
equation, the resulting spline functions are expressed in terms of time intervals
between adjacent knots. To minimize the total traversal time along the path, these
':V
a-°
time intervals must be adjusted subject to joint constraints within each of the
n - 1 cubic polynomials. Thus, the problem reduces to an optimization of
`C7
'-t
minimizing the total traveling time by adjusting the time intervals.
Let H(t) be the hand coordinate system expressed by a 4 X 4 homogeneous
transformation matrix. The hand is required to pass through a sequence of n
cartesian knot points, [H(t1 ) , H(t2 ) , ... , H(t,,)]. The corresponding joint posi-
r-3
tion vectors, ( q 1 I , q2I, ... .qNI), ( q 1 2 , q22, ... ,qN2).... , (q111, q21, ... ,
qN ), at these n cartesian knot points can be solved using the inverse kinematics
routine, where q11 is the angular displacement of joint j at the ith knot point
corresponding to Hi(t). Thus, the objective is to find a cubic polynomial trajec-
..1
tory for each joint j which fits the joint positions [qj1(t1 ) , qj2(t2), ... ,
s..
where t1 < t2 < < t, is an ordered time sequence indicating when the
.s:
hand should pass through these joint knot points. At the initial time t = t1 and
the final time t = t,,, the joint displacement, velocity, and acceleration are
specified, respectively, as qj1, vj1, all and qj,,, vj,,, aj,,. In addition, joint displace-
a.-
ments q j k at t = to for k = 3, 4, ... , n - 2 are also specified for the joint tra-
jectory to pass through. However, q2 and I are not specified; these are the two
`-'
extra knot points required to provide the freedom for solving the cubic polynomi-
.^y
als.
Let Qui(t) be the piecewise cubic polynomial function for joint j between the
knot points Hi and Hi+ 1, defined on the time interval [ti, ti I ] . Then the problem
is to spline Qui(t), for i = 1, 2, ... , n-1, together such that the required dis-
placement, velocity, and acceleration are satisfied and are continuous on the entire
time interval [ti, Since the polynomial Qji(t) is cubic, its second-time deriva-
tive QJi (t) must be a linear function of time t, i = 1, ... , n -1
qi,l+I uiQji(ti+I)
+ 1 (t - ti)
Ui 6
q1,, uiQji(ti)
ui 6
(ti+I - t)
i = 1, ... n-1
j = 1, ... , N (4.4-64)
AQ = b (4.4-65)
where
Qj2(t2 )
Qj3(t3 )
Q =
Qj,n-1(tn-I)
2
I
3u, +2u2 + U2 0 0 0 0 0 0
U2
u2
U2-- i
U2
2(u2+U3) u3
0 U3 2(u3+Ua)
0 U4 2(U4+Un-3) Un-3
0 2(Un-3+Un-2)
Un-2
0 0 0 0 0 0 Ui_2
194 ROBOTICS: CONTROL, SENSING. VISION, AND INTELLIGENCE
and
qj3+qjs 1 r z 1
o:)
I I Ut
6 U' -6 +ul vjl + 3 aj,J -ulaj!
U2
`U1 + U2 J 1. qjl
r
2
6
qjl +ul vjl +u"ajl +6qj4 -6 qj3
U2 3 U3 U2 U3 I
r
qjs - qj4 q4 - qj3
6
U4 U3
L J
_
Ui-2
6
u,, +
u,z
3
-i ap,
1
6
( I
Ui-2
+
U»-3
11 qj,»-2 + 6
U»-3
qj,n-3
6q»-z
-6 r I
U»-i
+ 111 qj»-vjnan+
u»2
3
a1,, +
6qj»
+
Un-hilt-2
-U»-lajn
L J L
The banded structure of the matrix A makes it easy to solve for Q which is substi-
tuted into Eq. (4.4-63) to obtain the resulting solution Qji(t). The resulting solu-
tion Qji(t) is given in terms of time intervals ui and the given values of joint dis-
o-'1
`.3
velocity, acceleration, jerk, and torque constraints. The problem can then be
ox..,
stated as:
Minimize the objective function
n-I
T = ui (4.4-66)
i=I
i= 1, ... ,N
Acceleration constraint: I Qji(t) I < Aj
i = 1,...,n-1
Jerk constraint:
3
dt3 Qji(t) < Jj
= 1,...N
i = 1,...,
Torque constraint: I rj(t) I < 1'j j = 1, ... , N
where T is the total traveling time, Vj, Aj, J1, and rj are, respectively, the velo-
city, acceleration, jerk, and torque limits of joint j.
r-.
Qji(t) =
G.) ji
(t 1 - t)2 +
..
wj,,+1
(t - ,
F qj,i+l - uiwj,i+I 1
2ui `+ 2u,
qji
Ui
Li1
and Qji(t) =
Ui
(t - ti) - i-,
Ui
(t - ti+1)
where w ji is the acceleration at Hi and equal to Qji (ti) if the time instant at which
Qji(t) passes through Hi is ti. _
°14
The maximum absolute value of velocity exists at ti, ti+1, or ti, where
mar.
-
ti e [ti, t1] and satisfies Qji (ti) = 0. The velocity constraints then become
max IQjil = max [IQji(ti)I, IQji(ti+1)I, IQji(ti)I ] < Vj
t E [t,, t, I
i = 1, 2, ... , n - 1 (4.4-69)
j = 1, 2,...,N
where
wji qj, i + I - qji (Wji - wj,i+1)ui
I Qji(ti) I = 2 u` + u, + 6
and
occurs at either ti or ti+ I and equals the maximum of { I wji I , I wj, i+ I I } . Thus, the
acceleration constraints become
Torque Constraints. The torque r(t) can be computed from the dynamic equa-
,-.
-"+
7_j (t) N N N
= EDjk(Qi(t))Qji(t) + E E hjk,,(Qi(t))Qki(t)Qmi(t) + Cj(Qi(t))
k=I k=tin=1
(4.4-72)
where
j = 1,2,...,N
Qi(t) = (Q11(t), Q2i(t), . . . ,QN,(t))T
i = 1, 2,... ,n - 1
If the torque constraints are not satisfied, then dynamic time scaling of the trajec-
tory must be performed to ensure the satisfaction of the torque constraints (Lin and
CAD
rithm that will minimize the total traveling time subject to the velocity, accelera-
tion, jerk, and torque constraints. There are several optimization algorithms avail-
able, and Lin et al. [1983] utilized Nelder and Mead's flexible polyhedron search
to obtain an iterative algorithm which minimizes the total traveling time subject to
the constraints on joint velocities, accelerations, jerks, and torques. Results using
this optimization technique can be found in Lin et al. [1983].
PLANNING OF MANIPULATOR TRAJECTORIES 197
Two major approaches for trajectory planning have been discussed: the joint-
0))
Coop
approach plans polynomial sequences that yield smooth joint trajectory. In order
C`5
a'°
to yield faster computation and less extraneous motion, lower-degree polynomial
sequences are preferred. The joint trajectory is split into several trajectory seg-
ments and each trajectory segment is splined by a low-degree polynomial. In par-
ticular, 4-3-4 and five-cubic polynomial sequences have been discussed.
Several methods have been discussed in the cartesian space planning. Because
servoing is done in the joint-variable space while a path is specified in cartesian
coordinates, the most common approach is to plan the straight-line path in the
joint-variable space using low-degree polynomials to approximate the path. Paul
f3.
[1979] used a translation and two rotations to accomplish the straight-line motion
of the manipulator hand. Taylor [1979] improved the technique by using a
quaternion approach to represent the rotational operation. He also developed a
bounded deviation joint control scheme which involved selecting more intermediate
interpolation points when the joint polynomial approximation deviated too much
from the desired straight-line path. Lin et al. [1983] used cubic joint polynomials
to spline n interpolation points selected by the user on the desired straight-line
path. Then, the total traveling time along the knot points was minimized subject to
joint velocity, acceleration, jerk, and torque constraints. These techniques
represent a shift away from the real-time planning objective to an off-line planning
phase. In essence, this decomposes the control of robot manipulators into off-line
motion planning followed by on-line tracking control, a topic that is discussed in
'u.
detail in Chap. 5.
REFERENCES
Further reading on joint-interpolated trajectories can be found in Paul [1972],
Lewis [1973, 1974], Brady et al. [1982], and Lee et al. [1986]. Most of these
joint-interpolated trajectories seldom include the physical manipulator dynamics
and actuator torque limit into the planning schemes. They focused on the require-
coo
ment that the joint trajectories must be smooth and continuous by specifying velo-
city and acceleration bounds along the trajectory. In addition to the continuity
constraints, Hollerbach [1984] developed a time-scaling scheme to determine
CAD
whether a planned trajectory is realizable within the dynamics and torque limits
which depend on instantaneous joint position and velocity.
The design of a manipulator path made up of straight line segments in the
.t,
'-'
CC'
matrix to represent target positions for the manipulator hand to traverse. Move-
ment between two consecutive target positions is accomplished by two sequential
ate)
operations: a translation and a rotation to align the approach vector of the manipu-
lator hand and a final rotation about the tool axis to align the gripper orientation.
'CD
198 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
-`='
and uniform motion. In order to achieve real-time trajectory planning objective,
((DD
both approaches neglect the physical manipulator torque constraint.
Other existing cartesian planning schemes are designed to satisfy the continuity
and the torque constraints simultaneously. To include the torque constraint in the
trajectory planning stage, one usually assumes that the maximum allowable torque
°°?
is constant at every position and velocity. For example, instead of using varying
r.'
torque constraint, Lin et al. [1983] and Luh and Lin [1984] used the velocity,
acceleration, and jerk bounds which are assumed constant for each joint. They
'-j
"'t
selected several knot points on the desired cartesian path, solved the inverse
kinematics, and found appropriate smooth, lower-degree polynomial functions
r..
which guaranteed the continuity conditions to fit through these knot points in the
CD.
'+,
joint-variable space. Then, by relaxing the normalized time to the servo time, the
`o'
°Q.
dynamic constraint with the constant torque bound assumption was included along
the trajectory. Due to the joint-interpolated functions, the location of the manipu-
lator hand at each servo instant may not be exactly on the desired path, but rather
on the joint-interpolated polynomial functions.
Lee [1985] developed a discrete time trajectory planning scheme to determine
-?'r
COD
the trajectory set points exactly on a given straight-line path which satisfies both
o.. 'LS
Chi)
the smoothness and torque constraints. The trajectory planning problem is formu-
0.0
'C3
PROBLEMS
4.1 A single-link rotary robot is required to move from 0(0) = 30° to 0(2) = 100° in
i-+
`D_
2 s. The joint velocity and acceleration are both zero at the initial and final positions.
(a) What is the highest degree polynomial that can be used to accomplish the motion?
°a°
(b) What is the lowest degree polynomial that can be used to accomplish the motion?
4.2 With reference to Prob. 4.1, (a) determine the coefficients of a cubic polynomial that
accomplishes the motion; (b) determine the coefficients of a quartic polynomial that accom-
plishes the motion; and (c) determine the coefficients of a quintic polynomial that accom-
n?,
plishes the motion. You may split the joint trajectory into several trajectory segments.
4.3 Consider the two-link robot arm discussed in Sec. 3.2.6, and assume that each link is 1
m long. The robot arm is required to move from an initial position (xo, yo) = (1.96, 0.50)
"fit
to a final position (xf, yf) _ (1.00, 0.75). The initial and final velocity and acceleration are
s..
zero. Determine the coefficients of a cubic polynomial for each joint to accomplish the
motion. You may split the joint trajectory into several trajectory segments.
PLANNING OF MANIPULATOR TRAJECTORIES 199
4.4 In planning a 4-3-4 trajectory one needs to solve a matrix equation, as in Eq. (4.3-46).
Does the matrix inversion of Eq. (4.3-46) always exist? Justify your answer.
4.5 Given a PUMA 560 series robot arm whose joint coordinate frames have been esta-
-°;
CND
blished as in Fig. 2.11, you are asked to design a 4-3-4 trajectory for the following condi-
C1.
'-n
tions: The initial position of the robot arm is expressed by the homogeneous transformation
matrix Tinitial:
.N.
0.047 0.789 -0.612 -34.599
0 0 0 1
The final position of the robot arm is expressed by the homogeneous transformation matrix
Tfinal
The lift-off and set-down positions of the robot arm are obtained from a rule of thumb by
c`"
...
taking 25 percent of d6 (the value of d6 is 56.25 mm). What are the homogeneous transfor-
mation matrices at the lift-off and set-down positions (that is, Tl,ft-t,ff and Tset-down)?
4.6 Given a PUMA 560 series robot arm whose joint coordinate frames have been esta-
blished as in Fig. 2.11, you are asked to design a 4-3-4 trajectory for the following condi-
tions: The initial position of the robot arm is expressed by the homogeneous transformation
matrix Tinitial
-1 0 0 0
0 1 0 600.0
T initial -
0 0 -1 -100.0
0 0 0 1
The set-down position of the robot arm is expressed by the homogeneous transformation
matrix Tset-down
0 1 0 100.0
1 0 0 400.0
Tset-down -
0 0 -1 -50.0
0 0 0 1
(a) The lift-off and set-down positions of the robot arm are obtained from a rule of thumb
by taking 25 percent of d6 ( the value of d6 is 56.25 mm) plus any required rotations.
What is the homogeneous transformation matrix at the lift-off (that is, Ti,ft-off) if the hand is
rotated 60° about the s axis at the initial point to arrive at the lift-off point? (b) What is
the homogeneous transformation matrix at the final position (that is, Tfinal) if the hand is
.,.
rotated -60' about the s axis at the set-down point to arrive at the final position?
r7'
0
200 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
4.7 A manipulator is required to move along a straight line from point A to point B, where
A and B are respectively described by
-1 0 0 5 0 -1 0 20
0 1 0 10 0 0 1 30
A= 0 0 -1 15
and B =
-1 0 0 5
0 0 0
.-.
0 0 0 1 1
The motion from A to B consists of a translation and two rotations, as described in Sec.
4.4.1. Determine 0, >G, 0 and x, y, z for the drive transform. Aslo find three intermediate
transforms between A and B.
4.8 A manipulator is required to move along a straight line from point A to point B rotat-
ing at constant angular velocity about a vector k and at an angle 0. The points A and B are
given by a 4 x 4 homogeneous transformation matrices as
-1 0 0 10 0 -1 0 10
0 1 0 10 0 0 1 30
A = 0 0 -1 10
B = -1 0 0 10
0 0 0 1 0 0 0 1
way
Find the vector k and the angle 0. Also find three intermediate transforms between A and
.4'
B.
4.9 Express the rotation results of Prob. 4.8 in quaternion form.
4.10 Give a quaternion representation for the following rotations: a rotation of 60 ° about j
followed by a rotation of 120 ° about i. Find the resultant rotation in quaternion representa-
tion.
4.11 Show that the inverse of the banded structure matrix A in Eq. (4.4-65) always exists.
CHAPTER
FIVE
CONTROL OF ROBOT MANIPULATORS
may
5.1 INTRODUCTION
Given the dynamic equations of motion of a manipulator, the purpose of robot arm
.fl
stated in such a simple manner, its solution is complicated by inertial forces, cou-
row
L."
pling reaction forces, and gravity loading on the links. In general, the control
problem consists of (1) obtaining dynamic models of the manipulator, and (2)
using these models to determine control laws or strategies to achieve the desired
pas
system response and performance. The first part of the control problem has been
.O'
Current industrial approaches to robot arm control system design treat each
joint of the robot arm as a simple joint servomechanism. The servomechanism
approach models the varying dynamics of a manipulator inadequately because it
neglects the motion and configuration of the whole arm mechanism. These
changes in the parameters of the controlled system are significant enough to render
conventional feedback control strategies ineffective. The result is reduced servo
response speed and damping, limiting the precision and speed of the end-effector
and making it appropriate only for limited-precision tasks. As a result, manipula-
tors controlled this way move at slow speeds with unnecessary vibrations. Any
significant performance gain in this and other areas of robot arm control require
fro
niques, and the use of computer architectures. This chapter focuses on deriving
.AC."
.G,
201
202 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
strategies which utilize the dynamic models discussed in Chap. 3 to efficiently con-
trol a manipulator.
Considering the robot arm control as a path-trajectory tracking problem (see
Fig. 5.1), motion control can be classified into three major categories for the pur-
pose of discussion:
For these control methods, we assume that the desired motion is specified by a
s..
tions.
Disturbances
Trajectory
H
(}-
Controller Manipulator
planning
Sensors
and
estimators
---
Interface
.4:
control structure is hierarchically arranged. At the top of the system hierarchy is
the LSI-11/02 microcomputer which serves as a supervisory computer. At the
.U+
0"o
lower level are the six 6503 microprocessors-one for each degree of freedom (see
0Th
Fig. 5.2). The LSI-11/02 computer performs two major functions: (1) on-line user
interaction and subtask scheduling from the user's VALt commands, and (2) sub-
task coordination with the six 6503 microprocessors to carry out the command.
The on-line interaction with the user includes parsing, interpreting, and decoding
"O'
the VAL commands, in addition to reporting appropriate error messages to the
user. Once a VAL command has been decoded, various internal routines are
called to perform scheduling and coordination functions. These functions, which
reside in the EPROM memory of the LSI-11/02 computer, include:
0
updates corresponding to each set point to each joint every 28 ms.
3. Acknowledging from the 6503 microprocessors that each axis of motion has
C's
At the lower level in the system hierarchy are the joint controllers, each of
;-.
which consists of a digital servo board, an analog servo board, and a power
amplifier for each joint. The 6503 microprocessor is an integral part of the joint
controller which directly controls each axis of motion. Each microprocessor
c'°)
c0)
resides on a digital servo board with its EPROM and DAC. It communicates with
the LSI-11/02 computer through an interface board which functions as a demulti-
5°`0
plexer that routes trajectory set points information to each joint controller. The
interface board is in turn connected to a 16-bit DEC parallel interface board
(DRV-11) which transmits the data to and from the Q-bus of the LSI-11/02 (see
Fig. 5.2). The microprocessor computes the joint error signal and sends it to the
analog servo board which has a current feedback designed for each joint motor.
There are two servo loops for each joint control (see Fig. 5.2). The outer
loop provides position error information and is updated by the 6503 microproces-
.s.
sor about every 0.875 ms. The inner loop consists of analog devices and a com-
t VAL is a software package from Unimation Inc. for control of the PUMA robot arm.
204 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Floppy Manual
Terminal Accessory
disk box
6503 JOINT
D/A AMPLIFIER
Pp MOTOR I
DLV - I I1 0.875 nn
ENCODER
VAL
EPROM
Q D R V -I I IN TE RFAC Ti
B
U
RAM
Tq = 28 nn 6503 JOINT °6
D/A AMPLIFIER
Pp MOTOR6
CPU 0 875 nn
ENCODER
pensator with derivative feedback to dampen the velocity variable. Both servo
loop gains are constant and tuned to perform as a "critically damped joint system"
at a speed determined by the VAL program. The main functions of the micropro-
cessors include:
1. Every 28 ms, receive and acknowledge trajectory set points from the LSI-11/02
computer and perform interpolation between the current joint value and the
desired joint value.
2. Every 0.875 ms, read the register value which stores the incremental values
from the encoder mounted at each axis of rotation.
3. Update the error actuating signals derived from the joint-interpolated set points
and the values from the axis encoders.
4. Convert the error actuating signal to current using the DACs, and send the
current to the analog servo board which moves the joint.
It can be seen that the PUMA robot control scheme is basically a proportional
plus integral plus derivative control method (PID controller). One of the main
disadvantages of this control scheme is that the feedback gains are constant and
prespecified. It does not have the capability of updating the feedback gains under
varying payloads. Since an industrial robot is a highly nonlinear system, the iner-
tial loading, the coupling between joints and the gravity effects are all either
position-dependent or position- and velocity-dependent terms. Furthermore, at
high speeds the inertial loading term can change drastically. Thus, the above con-
trol scheme using constant feedback gains to control a nonlinear system does not
perform well under varying speeds and payloads. In fact, the PUMA arm moves
CONTROL OF ROBOT MANIPULATORS 205
with noticeable vibration at reduced speeds. One solution to the problem is the
use of digital control in which the applied torques to the robot arm are obtained by
a computer based on an appropriate dynamic model of the arm. A version of this
method is discussed in Sec. 5.3.
tion scheme and the computed torque is converted to the applied motor voltage (or
current). This applied voltage is computed at such a high rate that sampling
effects generally can be ignored in the analysis.
Because of modeling errors and parameter variations in the model, position
can
and derivative feedback signals will be used to compute the correction torques
which, when added to the torques computed based on the manipulator model, pro-
coo
motion. The analysis here treats the "single joint" robot arm as a continuous time
system, and the Laplace transform technique is used to simplify the analysis.
Most industrial robots are either electrically, hydraulically, or pneumatically
actuated. Electrically driven manipulators are constructed with a dc permanent
magnet torque motor for each joint. Basically, the dc torque motor is a permanent
BCD
con
con
The motor shaft is coupled to a gear train to the load of the link. With refer-
ence to the gear train shown in Fig. 5.4, the total linear distance traveled on each
gear is the same. That is,
CAD
where r,, and rL are, respectively, the radii of the input gear and the output gear.
Since the radius of the gear is proportional to the number of teeth it has, then
N. BL
=n<1
or = (5.3-3)
NL Brn
CONTROL OF ROBOT MANIPULATORS 207
(b) (c)
If a load is attached to the output gear, then the torque developed at the motor
shaft is equal to the sum of the torques dissipated by the motor and its load. That
is;"
shaft J motor
a.,
I
or, in equation form,
TL(t)OL(t) _
TL*(t) = (t) = fTL(t) (5.3-11)
0,,(t)
Using Eqs. (5.3-10) and (5.3-12), the torque developed at the motor shaft [Eq.
^v,
(5.3-8)] is
where Jeff = J,n + n2JL is the effective moment of inertia of the combined motor
and load referred to the motor shaft and feff = f,,, + n2fL is the effective viscous
friction coefficient of the combined motor and load referred to the motor shaft.
0
Based on the above results, we can now derive the transfer function of this
single joint manipulator system. Since the torque developed at the motor shaft
increases linearly with the armature current, independent of speed and angular
position, we have
where eb is the back electromotive force (emf) which is proportional to the angular
velocity of the motor,
Taking the Laplace transform of Eq. (5.3-14), and substituting Ia(s) from Eq.
(5.3-17), we have
Va(s) - sKb®,n(s)
T(s) = K-L.(s) = K (5.3-19)
IL Ra +sLa
Equating Eqs. (5.3-18) and (5.3-19) and rearranging the terms, we obtain the
transfer function from the armature voltage to the angular displacement of the
motor shaft,
®m(s) _ K.
(5.3-20)
Va(s) 5 [5 Jeff La + (Lafeff + RaJeff)s + Rafeff + KaKb]
Since the electrical time constant of the motor is much smaller than the mechanical
time constant, we can neglect the armature inductance effect, La. This allows us to
simplify the above equation to
r-.
__», (s) - K. _ K
(5.3-21)
Va(s) s(sRaJeff + Rafeff + KaKb) s(T,ns + 1)
where
K.
K motor gain constant
Rafeff + KaKb
and T. = RaJeff
motor time constant
Rafeff + KaKb
Since the output of the control system is the angular displacement of the joint
[®L(s)], using Eq. (5.3-4) and its Laplace transformed equivalence, we can relate
the angular position of the joint OL (s) to the armature voltage V,, (s),
_____ nKa
(5.322)
Va(s) s(SRaJeff + Rafeff + KaKb)
210 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
V "(s)
T (s) se,,,(s) 6(s) OL(S)
I _ 1
Eq. (5.3-22) is the transfer function of the "single joint" manipulator relating the
applied voltage to the angular displacement of the joint. The block diagram of the
system is shown in Fig. 5.5.
where KP is the position feedback gain in volts per radian, e(t) = Bi(t) - OL(t)
is the system error, and the gear ratio n is included to compute the applied voltage
,AN
Q-''
referred to the motor shaft. Equation (5.3-23) indicates that the actual angular dis-
+U+
.U.
placement of the joint is fed back to obtain the error which is amplified by the
position feedback gain KP to obtain the applied voltage. In reality, we have
;t4
4U)
changed the single joint robot system from an open-loop control system [Eq. (5.3-
40000
22)] to a closed-loop control system with unity negative feedback. This closed-
con
loop control system is shown in Fig. 5.6. The actual angular position of the joint
4-'
a.+
= KPE(s)
--1
KP[Di(s) - ®L(S)]
Va(s) = (5.3-24)
n n
and substituting Va(s) into Eq. (5.3-22), yields the open-loop transfer function
relating the error actuating signal [E(s) ] to the actual displacement of the joint:
OL(S) KKP
G(s) = (5.3-25 )
E(s) S(SRaJeff + Raffeff + KaKb)
CONTROL OF ROBOT MANIPULATORS 211
Va(s) r(a)
1_
sK, n +R stir + .fen
n
After some simple algebraic manipulation, we can obtain the closed-loop transfer
function relating the actual angular displacement EL(s) to the desired angular dis-
placement ® (s):
CIO
error, one can increase the positional feedback gain Kp and incorporate some
damping into the system by adding a derivative of the positional error. The angu-
lar velocity of the joint can be measured by a tachometer or approximated from
the position data between two consecutive sampling periods. With this added feed-
back term, the applied voltage to the joint motor is linearly proportional to the
position error and its derivative; that is,
Kpe(t) +
(5.3-27)
n
where K,, is the error derivative feedback gain, and the gear ratio n is included to
compute the applied voltage referred to the motor shaft. Equation (5.3-27) indi-
cates that, in addition to the positional error feedback, the velocity of the motor is
measured or computed and fed back to obtain the velocity error which is multi-
Coo
plied by the velocity feedback gain K. Since, as discussed in Chap. 4, the desired
joint trajectory can be described by smooth polynomial functions whose first two
time derivatives exist within [to, tf], the desired velocity can be computed from
212 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
the polynomial function and utilized to obtain the velocity error for feedback pur-
poses. The summation of these voltages is then applied to the joint motor. This
closed-loop control system is shown in Fig. 5.6.
can
Taking the Laplace transform of Eq. (5.3-27) and substituting Va(s) into Eq.
(5.3-22) yields the transfer function relating the error actuating signal [E(s) ] to
the actual displacement of the joint:
OL(S) Ka(Kp + sKy,)
E(s) = GPD(s) =
s(SRaJeff + Rafeff + KaKb)
KaKvS + KaKp
S(SRaJeff + Rafeff + KaKb)
Some simple algebraic manipulation yields the closed-loop transfer function relat-
ing the actual angular displacement [OL(S)] to the desired angular displacement
[OL(s)]:
OL(S) GPD(s)
1-+
CSC
K in the left half plane of the s plane. Depending on the location of this zero, the
system could have a large overshoot and a long settling time. From Fig. 5.7, we
notice that the manipulator system is also under the influence of disturbances
[D(s)] which are due to gravity loading and centrifugal effects of the link.
Because of this disturbance, the torque generated at the motor shaft has to compen-
sate for the torques dissipated by the motor, the load, and also the disturbances.
Thus, from Eq. (5.3-18),
T(s) = [SZJeff + sfeff]O,n(s) + D(s) (5.3-30)
D(s)
I 1
+ sK,. n
n sL + R - etI + ferr
Y&I,
where D(s) is the Laplace transform equivalent of the disturbances. The transfer
function relating the disturbance inputs to the actual joint displacement is given by
®L (s)
- nR
(5.3-31)
+
D(s)
From Eqs. (5.3-29) and (5.3-31) and using the superposition principle, we can
obtain the actual displacement of the joint from these two inputs, as follows:
time. We shall first investigate the bounds for the position and velocity feedback
gains. Assuming for a moment that the disturbances are zero, we see that from
Q''
Eqs. (5.3-29) and (5.3-31) that the system is basically a second-order system with
('n
.fl
a finite zero, as indicated in the previous section. The effect of this finite zero
usually causes a second-order system to peak early and to have a larger overshoot
s..
(than the second-order system without a finite zero). We shall temporarily ignore
4U.
the effect of this finite zero and try to determine the values of Kp and Kv to have a
.L"
s2 + w, = 0 (5.3-33)
where and w,, are, respectively, the damping ratio and the undamped natural fre-
..t
quency of the system. Relating the closed-loop poles of Eq. (5.3-29) to Eq. (5.3-
33), we see that
KaKp
w2 -- (5.3-34)
JeffR.
Rafeff + KaKb + KaKv
and 2 w,, = (5.3-35)
Jeff R,,
'"'
...
overdamped system, which requires that the system damping ratio be greater than
or equal to unity. From Eq. (5.3-34), the position feedback gain is found from the
natural frequency of the system:
z
R.
Kp = w" K >0 (5.3-36)
K,
where the equality of the above equation gives a critically damped system response
and the inequality gives an overdamped system response. From Eq. (5.3-37), the
'C7
In order not to excite the structural oscillation and resonance of the joint, Paul
[1981] suggested that the undamped natural frequency w, may be set to no more
than one-half of the structural resonant frequency of the joint, that is,
and solving the above characteristic equation gives the structural resonant fre-
quency of the system
r lI2
kstiff I
wr = (5.3-42)
L Jeff J
Although the stiffness of the joint is fixed, if a load is added to the manipulator's
end-effector, the effective moment of inertia will increase which, in effect, reduces
the structural resonant frequency. If a structural resonant frequency wo is meas-
CONTROL OF ROBOT MANIPULATORS 215
ured at a known moment of inertia J0, then the structural resonant frequency at the
other moment of inertia Jeff is given by
1 1/2
Jo
Wr = WO (5.3-43)
L Jeff J
JeffRa
0 < K_p < (5.3-44)
4Ka
After finding Kp, the velocity feedback gain K can be found from Eq. (5.3-38):
Next we investigate the steady-state errors of the above system for step and
ramp inputs. The system error is defined as e(t) = Bi(t) - OL(t). Using Eq.
(5.3-32), the error in the Laplace transform domain can be expressed as
For a step input of magnitude A, that is, Bi(t) = A, and if the disturbance input is
v,'
"F+
unknown, then the steady-state error of the system due to a step input can be
C3.
found from the final value theorem, provided the limits exist; that is,
o
ess(step) essP = lim e(t) = sim sE(s)
t-CO
(5.3-48)
which is a function of the disturbances. Fortunately, we do know some of the dis-
turbances, such as gravity loading and centrifugal torque due to the velocity of the
216 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
joint. Other disturbances that we generally do not know are the frictional torque
COD
due to the gears and the system noise. Thus, we can identify each of these torques
separately as
TD(t) = TG(t) + 7-C(t) + Te (5.3-49)
where TG(t) and TC(t) are, respectively, torques due to gravity and centrifugal
s"'
effects of the link, and Te are disturbances other than the gravity and centrifugal
torques and can be assumed to be a very small constant value. The corresponding
Laplace transform of Eq. (5.3-49) is
pr,
Te
D(s) = TG(s) + Tc(s) + (5.3-50)
s
To compensate for gravity loading and centrifugal effects, we can precompute these
torque values and feed the computed torques forward into the controller to minim-
4-"
ize their effects as shown in Fig. 5.8. This is called feedforward compensation.
Let us denote the computed torques as TCOmp(t) whose Laplace transform is
Tromp (s). With this computed torque and using Eq. (5.3-50), the error equation of
Eq. (5.3-47) is modified to
fir'
S2Rajeff + S(Ra.feff
For the steady-state position error, the contribution from the disturbances due to
the centrifugal effect is zero as time approaches infinity. The reason for this is
that the centrifugal effect is a function of Bi(t) and, as time approaches infinity,
8L(cc) approaches zero. Hence, its contribution to the steady-state position error
..d
Caw
is zero. If the computed torque Tcomp(t) is equivalent to the gravity loading of the
link, then the steady-state position error reduces to
nRa Te
e= P
K
KaKp
(5.3-53)
Since K,, is bounded by Eq. (5.3-45), the above steady-state position error reduces
to
4nTe
essp = (5.3-54)
WOJo
CONTROL OF ROBOT MANIPULATORS 217
A/D Tachometer
Attitude rate
error feedback
compensation
B,5(r)
Planning
Ti
program Torque Voltage-torque Drive Gear
D/A H-o Load
computation I I
characteristic motor train
Compute:
Gravity loading
Coriolis
Centrifugal
Inertial effect
_ (Rafeff + KaKb )A
KaKp
Again, in order to reduce the steady-state velocity error, the computed torque
,-.
[Tcomp(t)] needs to be equivalent to the gravity and centrifugal effects. Thus, the
steady-state velocity error reduces to
(Raffeff + KaKb )A
eSSV = + eSSp (5.3-56)
Ka Kp
which has a finite steady-state error. The computation of TcOmp(t) depends on the
dynamic model of the manipulator. In general, as discussed in Chap. 3, the
Lagrange-Euler equations of motion of a six joint manipulator, excluding the
dynamics of the electronic control device, gear friction, and backlash, can be writ-
v>'
6 a °T;
- Emg
j=i aqi J
r; f o r i = 1 , 2, ... ,--6 (5.3-57)
where Ti(t) is the generalized applied torque for joint i to drive the ith link, 4i(t)
and 4i(t) are the angular velocity and angular acceleration of joint i, respectively,
and qi is the generalized coordinate of the manipulator and indicates its angular
position. °Ti is a 4 x 4 homogeneous link transformation matrix which relates the
spatial relationship between two coordinate frames (the ith and the base coordinate
frames), ri is the position of the center of mass of link i with respect to the ith
Is;
coordinate system, g = (gX, gy, gZ, 0) is the gravity row vector and
0.0 C).
gI = 9.8062 m/s2, and Ji is the pseudo-inertia matrix of link i about the ith
104
where
T
6 a°T.
Dik = Tr
a°T.
6 a2 °T; a °T; T
hikm = E Tr Jj i,k,m = 1, 2, ... ,6
j=max(i, k, in) agkagm aqi
(5.3-60)
CONTROL OF ROBOT MANIPULATORS 219
Ci = 6
-mfg a °TJ rj i=1, 2, ... ,6 (5.3-61)
l=i aqi
41(t)
42(t)
43 (t)
Ti(t) = [Di1, Dig, Di3, Di4, Di5, Did^^d
44 (t)
45 (t)
46(t)
.Q,
(5.3-62)
+ 141 M, 42(t), 43(t), 44(t), 45 (t), 46(t)]
41(t)
hill hi 12 hi13 hi14 hi 15 hi16
q2 (t)
hi21 hi 22 hi23 hi24 hi 25 hi26
43 (t)
X hi3l hi32 hi33 hi34 hi 35 hi36 + Ci
44 (t)
45 (t)
hi6l hi 62 hi63 hi64 hi 65 hi66
q6 (t)
41(t)
hi16
hill hi12 h113 hil4 hil5
q2(t)
hill hi22 hi23 hi24 hi25 hi26
43 (t)
X hi3l hi32 hi33 hi34 hi35 hi36 i = 1,2,...,6
44 (t)
45(t)
L hi61 hi62 hi63 hi64 hi65 hi66
q6(t)
(5.3-64)
220 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
This compensation leads to what is usually known as the "inverse dynamics prob-
lem" or "computed torque" technique. This is covered in the next section.
computed torque technique based on the L-E or the N-E equations of motion.
Basically the computed torque technique is a feedforward control and has feedfor-
ward and feedback components. The control components compensate for the
interaction forces among all the various joints and the feedback component com-
putes the necessary correction torques to compensate for any deviations from the
desired trajectory. It assumes that one can accurately compute the counterparts of
D(q), h(q, 4), and c(q) in the L-E equations of motion [Eq. (3.2-26)] to minim-
ize their nonlinear effects, and use a proportional plus derivative control to servo
the joint motors. Thus, the structure of the control law has the form of
T(t)=DQ(q){9d(t)+Kv[gd(t)-g(t)I+KP[gd(t)-q(t)+ha(q, g)+ca(q)
(5.3-65)
where K,, and KP are 6 x 6 derivative and position feedback gain matrices,
respectively, and the manipulator has 6 degrees of freedom.
Substituting r(t) from Eq. (5.3-65) into Eq. (3.2-26), we have
D(q)q(t)+h(q, 4)+c(q)=Da(q){qd(t)+Kv[gd(t)-4(t)I+Kp[gd(t)-q(t)l}
If Da(q), ha(q, 4), ca(q) are equal to D(q), h(q, 4), and c(q), respectively,
then Eq. (5.3-66) reduces to
r..'
the characteristic roots of Eq. (5.3-67) have negative real parts, then the position
C's
matrix Da (q) . In this case, the structure of the control law has the form
+ ca (q) (5.3-68)
CONTROL OF ROBOT MANIPULATORS 221
A computer simulation study had been conducted to which showed that these terms
cannot be neglected when the robot arm is moving at high speeds (Paul [1972]).
a,_,
An analogous control law in the joint-variable space can be derived from the
N-E equations of motion to servo a robot arm. The control law is computed
recursively using the N-E equations of motion. The recursive control law can be
obtained by substituting 4i(t) into the N-E equations of motion to obtain the neces-
sary joint torque for each actuator:
n n
gr(t) = q (t) + K1[4a(t) - qj(t)] + KK[gq(t) - q1(t)] (5.3-69)
J=I i=I
where K,I and Kn are the derivative and position feedback gains for joint i respec-
'BCD
tively and ea(t) = qj(t) - qj(t) is the position error for joint j. The physical
interpretation of putting Eq. (5.3-69) into the N-E recursive equations can be
viewed as follows:
1. The first term will generate the desired torque for each joint if there is no
modeling error and the physical system parameters are known. However, there
are errors due to backlash, gear friction, uncertainty about the inertia parame-
ters, and time delay in the servo loop so that deviation from the desired joint
trajectory will be inevitable.
2. The remaining terms in the N-E equations of motion will generate the correc-
tion torque to compensate for small deviations from the desired joint trajectory.
°.7
The above recursive control law is a proportional plus derivative control and
has the effect of compensating for inertial loading, coupling effects, and gravity
loading of the links. In order to achieve a critically damped system for each joint
subsystem (which in turn loosely implies that the whole system behaves as a criti-
cally damped system), the feedback gain matrices KP and K (diagonal matrices)
a..,
on the dynamic coefficients of D(q), h(q, 4), and c(q) in the equations of
'°A
.fl
motion.
i.e., velocity is expressed as radians per At rather than radians per second. This
BCD
has the effect of scaling the link equivalent inertia up by fs , where ff is the sam-
pling frequency (f, = 1/At).
222 ROBOTICS. CONTROL, SENSING, VISION, AND INTELLIGENCE
'C3
tinuous time system is more stringent than that. To minimize any deterioration of
the controller due to sampling, the rate of sampling must be much greater than the
-CD
natural frequency of the arm (inversely, the sampling period must be much. less
than the smallest time constant of the arm). Thus, to minimize the effect of sam-
off
pling, usually 20 times the cutoff frequency is chosen. That is,
1 _ 1
At
- 20 w,, /27r 20f
(5 . 3-70)
characteristics at high torques, the actual voltage-torque curves are not linear. For
Gas
CAD
is usually accomplished via lookup tables or calculation from piecewise linear
approximation formulas. The output voltage is usually a constant value and the
E-+
voltage pulse width varies. A typical voltage-torque curve is shown in Fig. 5.9,
where Vo is the motor drive at which the joint will move at constant velocity
exerting zero force in the direction of motion, and F0 is the force/torque that the
joint will exert at drive level V, with a negative velocity. The slopes and slope
BCD
X1(t) = X2(t)
and %2(t) = f2[x(t)] + b[xl(t)]u(t) (5.4-4)
transfers the system from the initial state x0 to the final state x f, while minimizing
the performance index in Eq. (5.4-7) and subject to the constraints of Eq. (5.4-3),
`-'
tfdt
J
to
= tf - to (5.4-7)
224 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
and H(x*, p*, u*) < H(x*, p*, u) for all t e [to, tf] (5.4-11)
'CS
and for all admissible controls. Obtaining v*(t) from Eqs. (5.4-8) to (5.4-11), the
'yam
optimization problem reduces to a two point boundary value problem with boun-
dary conditions on the state x(t) at the initial and final times. Due to the non-
..d
tions. Hence, the computations of the optimal control have to be performed for
4-'°
time control.
The suboptimal feedback control is obtained by approximating the nonlinear
system [Eq. (5.4-4)] by a linear system and analytically finding an optimal control
s..
for the linear system. The linear system is obtained by a change of variables fol-
'C)
decouple the controls in the linearized system. Defining a new set of dependent
b-0
The first nE;(t) is the error of the angular position, and the second n%j(t) is the
error of the rate of the angular position. Because of this change of variables, the
control problem becomes one of moving the system from an initial state (to) to
.C+
gin of the space. In addition, all sine and cosine functions of i are replaced by
their series representations. As a result, the linearized equations of motion are
J2i-1(0 = vi (5.4-15)
2i(t) = J2i-I i = 1, 2, 3
Vi F = (Ui)max + Cl (5.4-16)
Vi- = -(Ui)max + Ci
t Recall that time-optimal controls are piecewise constant functions of time, and thus we are
interested in regions of the state space over which the control is constant. These regions are separated
by curves in two-dimensional space, by surfaces in three-dimensional space, and by hypersurfaces in
.7w
n-dimensional space. These separating surfaces are called switching curves, switching surfaces, and
switching hypersurfaces, respectively.
226 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
In 1978, Young [1978] proposed to use the theory of variable structure systems for
the control of manipulators. Variable structure systems (VSS) are a class of sys-
cad
tems with discontinuous feedback control. For the last 20 years, the theory of
variable structure systems has found numerous applications in control of various
processes in the steel, chemical, and aerospace industries. The main feature, of
VSS is that it has the so-called sliding mode on the switching surface. Within the
sliding mode, the system remains insensitive to parameter variations and distur-
bances and its trajectories lie in the switching surface. It is this insensitivity
,-+
,'T
property of VSS that enables us to eliminate the interactions among the joints of a
manipulator. The sliding phenomena do not depend on the system parameters and
have a stable property. Hence, the theory of VSS can be used to design a variable
structure controller (VSC) which induces the sliding mode and in which lie the
robot arm's trajectories. Such design of the variable structure controller does not
require accurate dynamic modeling of the manipulator; the bounds of the model
parameters are sufficient to construct the controller. Variable structure control
differs from time-optimal control in the sense that the variable structure controller
induces the sliding mode in which the trajectories of the system lie. Furthermore,
the system is insensitive to system parameter variations in the sliding mode.
Let us consider the variable structure control for a six-link manipulator. From
Eq. (5.4-1), defining the state vector xT(t) as
and introducing the position error vector e1(t) = p(t) - pd and the velocity
'C7
error vector e2(t) = v(t) (with vd = 0), we have changed the tracking problem
to a regulator problem. The error equations of the system become
e1(t) = v(t)
where f2(.) and b(.) are defined in Eq. (5.4-5). For the regulator system prob-
CCs
and the synthesis of the control reduces to choosing the feedback controls as in Eq.
(5.5-3) so that the sliding mode occurs on the intersection of the switching planes.
By solving the algebraic equations of the switching planes,
where C = diag [ cl , C2, ... , c6 ]. Then, the sliding mode is obtained from Eq.
(5.5-4) as
ei = - ciei i = 1, ... , 6 (5.5-7)
The above equation represents six uncoupled first-order linear systems, each
representing 1 degree of freedom of the manipulator when the system is in the
sliding mode. As we can see, the controller [Eq. (5.5-3)] forces the manipulator
can
Vii
into the sliding mode and the interactions among the joints are completely elim-
inated. When in the sliding mode, the controller [Eq. (5.5-6)] is used to control
..C
the manipulator. The dynamics of the manipulator in the sliding mode depend
a.+
only on the design parameters ci. With the choice of ci > 0, we can obtain the
C1,
asymptotic stability of the system in the sliding mode and make a speed adjustment
of the motion in sliding mode by varying the parameters ci.
In summary, the variable structure control eliminates the nonlinear interactions
among the joints by forcing the system into the sliding mode. However, the con-
'-.
troller produces a discontinuous feedback control signal that change signs rapidly.
The effects of such control signals on the physical control device of the manipula-
0.O
tor (i.e., chattering) should be taken into consideration for any applications to
robot arm control. A more detailed discussion of designing a multi-input con-
troller for a VSS can be found in Young [1978].
and obtained decoupled subsystems, postural stability, and desired periodic trajec-
tories. Their approach is different from the method of linear system decoupling
7N.
228 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
where the system to be decoupled must be linear. Saridis and Lee [1979] pro-
posed an iterative algorithm for sequential improvement of a nonlinear suboptimal
control law. It provides an approximate optimal control for a manipulator. To
achieve such a high quality of control, this method also requires a considerable
amount of computational time. In this section, we shall briefly describe the gen-
OMs
eral nonlinear decoupling theory (Falb and Wolovich [1967], Freund [1982]) which
will be utilized together with the Newton-Euler equations of motion to compute a
nonlinear decoupled controller for robot manipulators.
Given a general nonlinear system as in
where x(t) is an n-dimensional vector, u(t) and y(t) are m-dimensional vectors,
and A(x), B (x) , and Q x) are matrices of compatible order. Let us define a
nonlinear operator NA as
K= 1,2,...,n
'G(x) A(x) (5.6-2)
NAC,(x) I ax NA
i = 1, 2, ... , m
where Ci(x) is the ith component of C(x) and NACi(x) = Ci(x). Also, let us
define the differential order d, of the nonlinear system as
where
F*(x) = -D*-I(x)C*(x)
Fz (x) = -D*-I(x)M*(x)
F*(x) represents the state feedback that yields decoupling, while F *(x) performs
the control part with arbitrary pole assignment. The input gain of the decoupled
part can be chosen by G(x), and D*(x) is an m x m matrix whose ith row is
given by
b-0
d,-I
Mi"(x) _ aKiNACi(x) for d, # 0 (5.6-9)
K=O
where y*(t) t is an output vector whose ith component is Yj (`O (t). That is,
(d) (d-I)
y; (t) + ad, - I,i Yr (t) + . . .
+ ao,iyi(t) _ Xiwi(t) (5.6-12)
To show that the ith component of y*(t) has the form of Eq. (5.6-11), let us
assume that di = 1. Then, yi (t) = Ci (x) and, by differentiating it successively,
we have
9C,(x)
Y10)(t) = yi(t) = x(t)
ax
8C,(x)
[A(x) + B(x)F(x) + B(x)G(x)w(t)]
ax
3C1(x)
= NA+BFCi(x) + [B(x)G(x)w(t)]
ax
o.,
8C1(x)
YiU)(t) = NAC1(x) + ax Ci(x) B(x)F(x) + [B(x)G(x)w(t)]
ax
Similar comments hold for di =2, 3, . . to yield Eq. (5.6-11). Thus, the resul-
'"'3
where u(t) is a 6 X 1 applied torque vector for joint actuators, 0(t) is the angular
positions, 6 (t) is the angular velocities, 0 (t) is a 6 x 1 acceleration vector, c(6)
is a 6 x 1 gravitational force vector, h(6, 6) is a 6 x 1 Coriolis and centrifugal
force vector, and D(6) is a 6 x 6 acceleration-related matrix. Since D(6) is
always nonsingular, the above equation can be rewritten as
or, explicitly,
D
[D16 66 J [U6 (t) J
The above dynamic model consists of second-order differential equations for each
joint variable; hence, d, = 2. Treating each joint variable 0i(t) as an output vari-
C'"
where
(5.6-18)
and [D-'(0)]i is the ith row of the D-'(0) matrix. Thus, the controller u(t) for
the decoupled system [Eq. (5.6-5)] must be
(5.6-21)
232 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
From the above equation, we note that the controller ui(t) for joint i depends
only on the current dynamic variables and the input w(t). Substituting u(t) from
Eq. (5.6-20) into Eq. (5.6-14), we have
V..
which leads to
(5.6-24)
lator dynamics. An efficient way of computing the controller u(t) is through the
'C7
s...
L""
G7.
In the last section, several methods were discussed for controlling a mechanical
O..
CD.
motors are combined and resolved into separately controllable hand motions along
CD--
the world coordinate axes. This implies that several joint motors must run simul-
taneously at different time-varying rates in order to achieve desired coordinated
hand motion along any world coordinate axis. This enables the user to specify the
CONTROL OF ROBOT MANIPULATORS 233
direction and speed along any arbitrarily oriented path for the manipulator to fol-
low. This motion control greatly simplifies the specification of the sequence of
one
motions for completing a task because users are usually more adapted to the Carte-
sian coordinate system than the manipulator's joint angle coordinates.
In general, the desired motion of a manipulator is specified in terms of a
time-based hand trajectory in cartesian coordinates, while the servo control system
requires that the reference inputs be specified in joint coordinates. The mathemati-
C.'
cal relationship between these two coordinate systems is important in designing
_Q.
efficient control in the Cartesian space. We shall briefly describe the basic kinemat-
ics theory relating these two coordinate systems for a six-link robot arm that will
lead us to understand various important resolved motion control methods.
The location of the manipulator hand with respect to a fixed reference coordi-
nate system can be realized by establishing an orthonormal coordinate frame at the
hand (the hand coordinate frame), as shown in Fig. 5.10. The problem of finding
the location of the hand is reduced to finding the position and orientation of the
hand coordinate frame with respect to the inertial frame of the manipulator. This
can be conveniently achieved by a 4 x 4 homogeneous transformation matrix:
0
s(t)
0
a(t)
0
p(t)
0 0 0 1
(5.7-1)
Sweep
where p is the position vector of the hand, and n, s, a are the unit vectors along
the principal axes of the coordinate frame describing the orientation of the hand.
Instead of using the rotation submatrix [n, s, a] to describe the orientation, we
can use three Euler angles, yaw a(t), pitch 0(t), and roll y(t), which are defined
as rotations of the hand coordinate frame about the x0, yo, and z0 of the reference
frame, respectively. One can obtain the elements of [n, s, a] from the Euler
rotation matrix resulting from a rotation of the a angle about the x0 axis, then a
(Do
rotation of the 0 angle about the yo axis, and a rotation of the y angle about the z0
axis of the reference frame [Eq. (2.2-19)]. Thus:
cy -Sy 0 co 0 sot 1 0 0
Sy Cy 0 0 1 0 0 Ca - Sa
0 0 1 -so 0 co 0 Sa ca
where sin a = Sa, cos a = Ca, sin 0 = So, cos 0 = C0, sin y = Sy, and
fem.'
111
III
cosy = Cy.
Iii
Let us define the position p(t), Euler angles 4 (t), linear velocity v(t), and
"C3
angular velocity Q(t) vectors of the manipulator hand with respect to the reference
.-.
frame, respectively:
o
all
411
Can
The linear velocity of the hand with respect to the reference frame is equal to the
time derivative of the position of the hand:
v(t) d
dtt)
= p(t) (5.7-4)
Since the inverse of a direction cosine matrix is equivalent to its transpose, the
instantaneous angular velocities of the hand coordinate frame about the principal
CONTROL OF ROBOT MANIPULATORS 235
0 -Wz Wy
RdRT =- dRRT= wz 0 - co,
dt dt
-wy w.x 0
From the above equation, the relation between the [cwx(t), wy(t), wz(t)]T and
.ca
[«(t), 0(t), y(t)]T can be found by equating the nonzero elements in the
matrices:
CyCo - Sy 0
S7C/3 Cy 0 (5.7-6)
-so 0 1
«(t) Cy Sy 0 wx(t)
4(t)
L'2
(5.7-7)
Based on the moving coordinate frame concept, the linear and angular veloci-
ties of the hand can be obtained from the velocities of the lower joints:
V(t)
[N(q)]4(t) _ N, (q), N2(q), ... , N6(q)] 4(t) (5.7-9)
L 0(t)
236 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
where 4(t) = (41 , ... , cq6 )T is the joint velocity vector of the manipulator, and
N(q) is a 6 x 6 jacobian matrix whose ith column vector Ni(q) can be found to
be (Whitney [1972]):
t!7
t`.
where x indicates the vector cross product, p!_1 is the position of the origin of
the (i - 1)th coordinate frame with respect to the reference frame, zi_1 is the unit
c$0
vector along the axis of motion of joint i, and p is the position of the hand with
respect to the reference coordinate frame.
If the inverse jacobian matrix exists at q(t), then the joint velocities 4(t) of
the manipulator can be computed from the hand velocities using Eq. (5.7-9):
v(t)
4(t) = N-'(q) 0(t) (5.7-11)
Given the desired linear and angular velocities of the hand, this equation computes
the joint velocities and indicates the rates at which the joint motors must be main-
tained in order to achieve a steady hand motion along the desired cartesian direc-
tion.
The accelerations of the hand can be obtained by taking the time derivative of
the velocity vector in Eq. (5.7-9):
(5.7-12)
Q(t)
where q(t) = [q1(t), ... ,46(t)]T is the joint acceleration vector of the manipu-
lator. Substituting 4(t) from Eq. (5.7-11) into Eq. (5.7-12) gives
C/)
v(t) v(t)
= N(q, 4)N-'(q) + N(q)q(t) (5.7-13)
St(t) 0(t)
and the joint accelerations q(t) can be computed from the hand velocities and
accelerations as
F V(t)
q(t) = N-'(q) - N-'(q)N(q, 4)N-l(q)
rte.
(5.7-14)
Q(t)
The above kinematic relations between the joint coordinates and the cartesian
coordinates will be used in Sec. 5.7.1 for various resolved motion control methods
CONTROL OF ROBOT MANIPULATORS 237
and in deriving the resolved motion equations of motion of the manipulator hand in
Cartesian coordinates.
The relationship between the linear and angular velocities and the joint velocities
of a six-link manipulator is given by Eq. (5.7-9).
For a more general discussion, if we assume that the manipulator has m
degrees of freedom while the world coordinates of interest are of dimension n,
then the joint angles and the world coordinates are related by a nonlinear function,
as in Eq. (5.7-15).
If we differentiate Eq. (5.7-15) with respect to time, we have
dx(t)
= *(t) = N(q)4(t) (5.7-16)
dt
where N(q) is the jacobian matrix with respect to q(t), that is,
af,
'-.
We see that if we work with rate control, the relationship is linear, as indicated by
Eq. (5.7-16). When x(t) and q(t) are of the same dimension, that is, m = n, then
the manipulator is nonredundant and the jacobian matrix can be inverted at a par-
ticular nonsingular position q(t):
From Eq. (5.7-18), given the desired rate along the world coordinates, one can
easily find the combination of joint motor rates to achieve the desired hand motion.
Various methods of computing the inverse jacobian matrix can be used. A
resolved motion rate control block diagram is shown in Fig. 5.11.
238 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Joint 6, 6
ill N-1(x) [ i Arm
controller
Joint
sensing
If m > n, then the manipulator is redundant and the inverse jacobian matrix
does not exist. This reduces the problem to finding the generalized inverse of the
jacobian matrix. In this case, if the rank of N(q) is n, then 4(t) can be found by
.~f
(5.7-19)
Substituting 4(t) from Eq. (5.7-20) into Eq. (5.7-21), and solving for X, yields
X= [N(q)A-lNT(q)l-li(t)
(5.7-22)
4(t) = A-INT(q)[N(q)A-1NT(q)1-li(t)
(5.7-23)
If the matrix A is an identity matrix, then Eq. (5.7-23) reduces to Eq. (5.7-18).
Quite often, it is of interest to command the hand motion along the hand coor-
dinate system rather than the world coordinate system (see Fig. 5.10). In this
case, the desired hand rate motion h(t) along the hand coordinate system is related
to the world coordinate motion by
where °R,, is an n X 6 matrix that relates the orientation of the hand coordinate
system to the world coordinate system. Given the desired hand rate motion h(t)
CONTROL OF ROBOT MANIPULATORS 239
with respect to the hand coordinate system, and using Eqs. (5.7-23) to (5.7-24), the
joint rate 4(t) can be computed by:
4(t) = A-'NT(q)[N(q)A-INT(q)]-IORh
i(t) (5.7-25)
In Eqs. (5.7-23) and (5.7-25), the angular position q(t) depends on time t, so we
need to evaluate N-1(q) at each sampling time t for the calculation of 4(t). The
added computation in obtaining the inverse jacobian matrix at each sampling time
and the singularity problem associated with the matrix inversion are important
issues in using this control method.
The resolved motion acceleration control (RMAC) (Luh et al. [1980b]) extends the
concept of resolved motion rate control to include acceleration control. It presents
an alternative position control which deals directly with the position and orientation
of the hand of a manipulator. All the feedback control is done at the hand level,
and it assumes that the desired accelerations of a preplanned hand motion are
specified by the user.
The actual and desired position and orientation of the hand of a manipulator
can be represented by 4 x 4 homogeneous transformation matrices, respectively, as
nd(t)
and Hd(t) = (5.7-26)
0 0 0 1
sd(t) ad(t) pd(t)
where n, s, a are the unit vectors along the principal axes x, y, z of the hand
`'0
coordinate system, respectively, and p(t) is the position vector of the hand with
respect to the base coordinate system. The orientation submatrix [n, s, a] can be
,",
defined in terms of Euler angles of rotation (a, /3, y) with respect to the base
+-+
pd(t) - PX(t)
en(t) = pd(t) - P(t) = py(t) - py(t) (5.7-27)
pd(t) - pz(t)
Similarly, the orientation error is defined by the discrepancies between the desired
and actual orientation axes of the hand and can be represented by
Thus, control of the manipulator is achieved by reducing these errors of the hand
to zero.
Considering a six-link manipulator, we can combine the linear velocities v(t)
and the angular velocities W(t) of the hand into a six-dimensional vector as i(t),
V(t)
= N(q)4(t) (5.7729)
W(t)
basis for resolved motion rate control where joint velocities are solved from the
Cow
hand velocities. If this idea is extended further to solve for the joint accelerations
from the hand acceleration K(t), then the time derivative of x(t) is the hand
"CU)
acceleration
C2.
velocity vd(t), and the desired acceleration vd(t) of the hand are known with
respect to the base coordinate system. In order to reduce the position error, one
`t7
may apply joint torques and forces to each joint actuator of the manipulator. This
essentially makes the actual linear acceleration of the hand, v(t), satisfy the equa-
r-.
tion
where ep(t) = pd(t) - p(t). The input torques and forces must be chosen so as
C2.
to guarantee the asymptotic convergence of the position error of the hand. This
C3.
requires that kI and k2 be chosen such that the characteristic roots of Eq. (5.7-32)
c,..,
Similarly, to reduce the orientation error of the hand, one has to choose the
"_'
input torques and forces to the manipulator so that the angular acceleration of the
hand satisfies the expression
Let us group vd and cod into a six-dimensional vector and the position and orienta-
tion errors into an error vector:
CONTROL OF ROBOT MANIPULATORS 241
Substituting Eqs. (5.7-29) and (5.7-30) into Eq. (5.7-35) and solving for q(t) gives
Equation (5.7-36) is the basis for the closed-loop resolved acceleration control for
manipulators. In order to compute the applied joint torques and forces to each
joint actuator of the manipulator, the recursive Newton-Euler equations of motion
CD'
are used. The joint position q(t), and joint velocity 4(t) are measured from the
potentiometers, or optical encoders, of the manipulator. The quantities
CS'
AO)
coo
v, w, N, N-I, N, and H(t) can be computed from the above equations. These
values together with the desired position pd(t), desired velocity vd(t), and the
c.'
desired acceleration vd(t) of the hand obtained from a planned trajectory can be
y..,
used to compute the joint acceleration using Eq. (5.7-36). Finally the applied joint
'TI
torques and forces can be computed recursively from the Newton-Euler equations
t3"
.fl
mation.
the position control. A control block diagram of the RMFC is shown in Fig. 5.12.
We shall briefly discuss the mathematics that governs this control technique.
A more detailed discussion can be found in Wu and Paul [1982]. The basic con-
trol concept of the RMFC is based on the relationship between the resolved
242 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
force vector, F = (F, Fy, F, Mz, My, MZ)T, and the joint torques, r=
(r1, 7-2, 7-")T
which are applied to each joint actuator in order to counterbal-
I
ance the forces felt at the hand, where (Fx, FY, FZ )T and (Mx, My, MZ ) T are the
cartesian forces and moments in the hand coordinate system, respectively. The
underlying relationship between these quantities is
..o
Since the objective of RMFC is to track the cartesian position of the end-
effector, an appropriate time-based position trajectory has to be specified as func-
.t"
tions of the arm transformation matrix °A6(t), the velocity (vx, vy, vZ)T, and the
.-t
angular velocity (wx, co)" U) w )T about the hand coordinate system. That is, the
00o
as
0 0 0 1
then, the desired cartesian velocity xd(t) = (vx, vy, vZ, Wx, wy, w)T can be
obtained from the element of the following equation
a4)
^^^
0 0 0 1 (5.7-39)
The cartesian velocity error id - x can be obtained using the above equation.
The velocity error xy x used in Eq. (5.7-31) is different from the above velo-
CONTROL OF ROBOT MANIPULATORS 243
city error because the above error equation uses the homogeneous transformation
matrix method. In Eq. (5.7-31), the velocity error is obtained simply by
differentiating pd(t) - p(t).
Similarly, the desired cartesian acceleration xd(t) can be obtained as:
Xd(t) =
xd(t + At) - xd(t)
(5.7-40)
At
By choosing the values of K,, and Kp so that the characteristic roots of Eq. (5.7-
42) have negative real parts, x(t) will converge to xd(t) asymptotically.
Based on the above control technique, the desired cartesian forces and
moments to correct the position errors can be obtained using Newton's second law:
where M is the mass matrix with diagonal elements of total mass of the load m
...
and the moments of inertia I,x, lyy, I,., at the principal axes of the load. Then,
using the Eq. (5.7-37), the desired cartesian forces Fd can be resolved into the
joint torques:
T(t) = NT(q)Fd = NT(q) MX(t) (5.7-44)
In general, the above RMFC works well when the mass of the load is
negligible compared with the mass of the manipulator. But if the mass of the
load approaches the mass of the manipulator, the position of the hand usually does
not converge to the desired position. This is due to the fact that some of the joint
torques are spent to accelerate the links. In order to compensate for these loading
and acceleration effects, a force convergence control is incorporated as a second
part of the RMFC. Its objective is to make the measured cartesian force at the hand
converge to the desired cartesian force Fd obtained from the above position control
technique. If the error ΔF(k) = Fd(k) − F0(k) between the desired cartesian force
and the measured force vector F0 is greater than a user-designed threshold,
then the commanded cartesian force is updated accordingly (the update law is given
in Wu and Paul [1982]).
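The following sketch illustrates the force convergence idea only; the proportional correction with gain gamma and the threshold test on the norm of the error are assumptions made for illustration and are not the exact update law of Wu and Paul [1982].

    import numpy as np

    def force_convergence_step(F_cmd, F_desired, F_measured, gamma=0.5, threshold=1.0):
        """Hypothetical sketch: if the force error exceeds a threshold, nudge the
        commanded cartesian force toward the desired value with gain gamma."""
        dF = F_desired - F_measured            # delta F(k) = F_d(k) - F_0(k)
        if np.linalg.norm(dF) > threshold:
            F_cmd = F_cmd + gamma * dF         # assumed proportional correction
        return F_cmd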
Most of the schemes discussed in the previous sections control the arm at the hand
or joint level and emphasize nonlinear compensations of the interaction forces
between the various joints. These control algorithms sometimes are inadequate
because they require accurate modeling of the arm dynamics and neglect the
changes of the load in a task cycle. These changes in the payload of the controlled
system often are significant enough to render the above feedback control strategies
ineffective. The result is reduced servo response speed and damping, which limits
the precision and speed of the end-effector. Any significant gain in performance
for tracking the desired time-based trajectory as closely as possible over a wide
range of manipulator motion and payloads requires the consideration of adaptive
control techniques.
A model-referenced adaptive control system is composed of a
reference model and an adaptation algorithm which modifies the feedback gains to the
actuators of the actual system. The adaptation algorithm is driven by the errors
between the reference model outputs and the actual system outputs. A general
control block diagram of this scheme is shown in Fig. 5.13. In the approach of
Dubowsky and DesForges [1979], the payload is lumped with the final link, and its
dimension is assumed to be small compared with the length of other links. Then,
the selected reference model provides an effective and flexible means of specifying
desired closed-loop performance of the controlled system. A linear, second-order,
time-invariant differential equation is selected as the reference model for each
joint of the manipulator.
Figure 5.13 A general control block diagram for model-referenced adaptive control.
Assuming that the manipulator is controlled by adjusting the position and velocity
feedback gains, and that the coupling terms are negligible, the manipulator dynamic
equation for joint i can be written as
where the system parameters αi(t) and βi(t) are assumed to vary slowly with
time.
Several techniques are available to adjust the feedback gains of the controlled
system. Due to its simplicity, a steepest descent method is used to minimize a
quadratic function of the system error, which is the difference between the
response of the actual system [Eq. (5.8-3)] and the response of the reference model
[Eq. (5.8-1)]:
where ei = yi − xi, and the values of the weighting factors k0, k1, and k2 are selected from
stability considerations to obtain stable system behavior.
Using a steepest descent method, the system parameters adjustment mechanism
which will minimize the system error is governed by
α̇i(t) = [k2 ëi(t) + k1 ėi(t) + k0 ei(t)] [k2 üi(t) + k1 u̇i(t) + k0 ui(t)]          (5.8-5)
where ui(t) and wi(t) and their derivatives are obtained from the solutions of the
differential equations in Eqs. (5.8-7) and (5.8-8), and ẏi(t) and ÿi(t) are the first
two time derivatives of the response of the reference model.
model. The closed-loop adaptive system involves solving the reference model
equations for a given desired input; then the differential equations in Eqs. (5.8-7)
and (5.8-8) are solved to yield ui(t) and wi(t) and their derivatives for Eqs. (5.8-
5) and (5.8-6). Finally, solving the differential equations in Eqs. (5.8-5) and
(5.8-6) yields the adjusted feedback gains αi(t) and βi(t).
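A single Euler step of this steepest-descent gain adjustment can be sketched as below. Only the structure of Eq. (5.8-5) is reproduced; the adaptation-rate constant, the sign convention, and the function name are assumptions.

    def mrac_gain_update(e, e_dot, e_ddot, u, u_dot, u_ddot, k, gain_rate, dt):
        """One Euler step of the steepest-descent adaptation of Eq. (5.8-5).

        e, e_dot, e_ddot : model-following error y - x and its derivatives
        u, u_dot, u_ddot : sensitivity signal u_i(t) and its derivatives
        k                : (k0, k1, k2) weighting factors chosen for stability
        gain_rate        : assumed adaptation-rate tuning constant
        Returns the increment to the adjustable feedback gain alpha_i.
        """
        k0, k1, k2 = k
        weighted_error = k2 * e_ddot + k1 * e_dot + k0 * e
        weighted_sens = k2 * u_ddot + k1 * u_dot + k0 * u
        return gain_rate * weighted_error * weighted_sens * dt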
The fact that this control approach is not dependent on a complex mathemati-
cal model is one of its major advantages, but stability considerations of the
closed-loop adaptive system are critical. A stability analysis is difficult, and
Dubowsky and DesForges [1979] carried out an investigation of this adaptive sys-
tem using a linearized model. However, the adaptability of the controller can
become questionable if the interaction forces among the various joints are severe.
Koivo and Guo [1983] proposed an adaptive, self-tuning controller using an autore-
gressive model to fit the input-output data from the manipulator. The control algo-
rithm assumes that the interaction forces among the joints are negligible. A block
diagram of the control system is shown in Fig. 5.14. Let the input torque to joint
i be ui, and the output angular position of the manipulator be yi. The input-output
pairs (ui, yi) may be described by an autoregressive model which matches these
pairs as closely as possible:

yi(k) = ai0 + Σ m=1..N [ aim yi(k − m) + bim ui(k − m) ] + ei(k)          (5.8-9)
where ai0 is a constant forcing term, and ei(k) is the modeling error, which is assumed
to be white gaussian noise with zero mean and independent of ui and yi(k − m),
m ≥ 1. The parameters aim and bim are determined so as to obtain the best least-
squares fit of the measured input-output data pairs. These parameters can be
estimated recursively using the least-squares identification scheme of Eqs. (5.8-13)
and (5.8-14), with the parameter vector for joint i defined as

αi = (ai0, ai1, . . . , aiN, bi1, . . . , biN)T
In order to track the trajectory set points, a performance criterion for joint i is
defined as
The optimal control ui(k + 1) that minimizes the above performance criterion
[Eq. (5.8-17)] is expressed in terms of âim, b̂im, and âi0, the estimates of the
parameters obtained from Eqs. (5.8-13) and (5.8-14).
In summary, this adaptive control uses an autoregressive model [Eq. (5.8-9)]
to fit the input-output data from the manipulator. The recursive least-squares
identification scheme [Eqs. (5.8-13) and (5.8-14)] is used to estimate the parame-
ters which are used in the optimal control [Eq. (5.8-17)] to servo the manipulator.
The purpose of adaptive control is to track the desired time-based trajectory as closely
as possible for all times over a wide range of manipulator motion and payloads.
Adaptive perturbation control differs from the above adaptive schemes in the sense
that it takes all the interactions among the various joints into consideration. The
controlled system is characterized by a feedforward component and a feedback
component. The feedforward component com-
putes the nominal torques which compensate for all the interaction forces between the
various joints along the nominal trajectory. The feedback component computes the
perturbation torques which reduce the position and velocity errors of the manipula-
tor to zero along the nominal trajectory. An efficient, recursive, real-time, least-
squares identification scheme is used to identify the system parameters in the per-
turbation equations. A one-step optimal control law is designed to control the
linearized perturbation system about the nominal trajectory. The parameters and
the feedback gains of the linearized system are updated and adjusted in each sam-
pling period to obtain the necessary control effort. The total torques applied to the
joint actuators then consist of the nominal torques computed from the Newton-
Euler equations of motion and the perturbation torques computed from the one-step
optimal control law of the linearized system. This adaptive control strategy is
described below.
Suppose that the nominal states xn(t) of the system [Eq. (5.4-4)] are known
from the planned trajectory, and the corresponding nominal torques un(t) are also
known from the computations of the joint torques using the N-E equations of
motion. Then both xn(t) and un(t) satisfy Eq. (5.4-4):

ẋn(t) = f[xn(t), un(t)]          (5.8-18)

Using the Taylor series expansion on Eq. (5.4-4) about the nominal trajectory, sub-
tracting Eq. (5.8-18) from it, and assuming that the higher order terms are negligi-
ble, the associated linearized perturbation model for this control system can be
expressed as

δẋ(t) = ∇xf δx(t) + ∇uf δu(t) ≜ A(t) δx(t) + B(t) δu(t)          (5.8-19)

where ∇xf and ∇uf are the jacobian matrices of f[x(t), u(t)] evaluated at
xn(t) and un(t), respectively, δx(t) = x(t) − xn(t), and δu(t) = u(t) − un(t).
The system parameters, A(t) and B(t), of Eq. (5.8-19) depend on the instan-
taneous manipulator position and velocity along the nominal trajectory and thus,
vary slowly with time. Because of the complexity of the manipulator equations of
motion, it is extremely difficult to find the elements of A(t) and B(t) explicitly.
However, the design of a feedback control law for the perturbation equations
requires that the system parameters of Eq. (5.8-19) be known at all times. Thus,
parameter identification techniques must be used to identify the unknown elements
in A(t) and B(t).
As a result of this formulation, the manipulator control problem is reduced to
determining δu(t), which drives δx(t) to zero at all times along the nominal tra-
jectory. The overall controlled system is thus characterized by a feedforward com-
ponent and a feedback component. Given the planned trajectory set points qd(t),
q̇d(t), and q̈d(t), the feedforward component computes the corresponding nominal
torques un(t) from the N-E equations of motion. The feedback component com-
putes the corresponding perturbation torques δu(t) which provide control effort to
compensate for small deviations from the nominal trajectory. The computation of
the perturbation torques is based on a one-step optimal control law. The main
advantages of this formulation are twofold. First, it reduces a nonlinear control
problem to a linear control problem about a nominal trajectory; second, the com-
putations of the nominal and perturbation torques can be performed separately and
in parallel. With a sampling period T, the linearized perturbation model of Eq.
(5.8-19) can be discretized as

δx[(k + 1)T] = F(kT) δx(kT) + G(kT) δu(kT)          k = 0, 1, . . .          (5.8-20)
[Block diagram of the adaptive perturbation control: trajectory planning system, Newton-Euler equations of motion (feedforward torques), robot manipulator and environment, one-step optimal controller, and recursive least-squares identification scheme.]
where Γ(kT, t0) is the state-transition matrix of the system. F(kT) and G(kT) are,
respectively, 2n × 2n and 2n × n matrices. With this model, a total of 6n² parameters
in the F(kT) and G(kT) matrices need to be identified. Without confusion, we shall
drop the sampling period T from the rest of the equations for clarity and simplicity.
Various identification algorithms, such as the methods of least squares, max-
imum likelihood, instrumental variable, cross correlation, and stochastic approxi-
mation, can be applied to this problem. A recursive least-squares
parameter identification scheme is selected here for identifying the system parame-
ters in F(k) and G(k). In the parameter identification scheme, we make the fol-
lowing assumptions: (1) the parameters of the system are slowly time-varying but
the variation speed is slower than the adaptation speed; (2) measurement noise is
negligible; and (3) the state variables x(k) of Eq. (5.8-20) are measurable.
Let the ith row of the unknown parameters of the system at the kth instant of time
be expressed as a 3n-dimensional vector

θi(k) = [fi1(k), . . . , fi,2n(k), gi1(k), . . . , gin(k)]T          i = 1, 2, . . . , p

or, in matrix form, as

Θ(k) = [θ1(k), θ2(k), . . . , θp(k)]          (5.8-25)

where p = 2n. Similarly, defining the outputs and inputs of the perturbation sys-
tem [Eq. (5.8-20)] at the kth instant of time in a 3n-dimensional vector

z(k) = [δxT(k), δuT(k)]T

we have that the corresponding system equation in Eq. (5.8-20) can be written as

δxi(k + 1) = θiT(k) z(k)          i = 1, 2, . . . , p
A standard least-squares algorithm uses all the measurement data, weighted equally,
to estimate the unknown parameters. Unfortunately, this algorithm cannot be
applied to time-varying parameters. Furthermore, the solution requires a matrix
inversion which is computationally intensive. In order to reduce the number of
numerical computations and to track the time-varying parameters Θ(k) at each
sampling period, a sequential least-squares identification scheme which updates
the unknown parameters at each sampling period based on the new set of measure-
ments at each sampling interval provides an efficient algorithmic solution to the
identification problem. Such a recursive, real-time, least-squares parameter
identification algorithm can be found by minimizing an exponentially weighted
error criterion which has the effect of placing more weight on the squared errors of
the more recent measurements; that is,

JN = Σ j=1..N ρ^(N−j) ei²(j)          (5.8-30)

where the error ei(j) is the difference between the measured state δxi(j + 1) and
its prediction θ̂iT(k) z(j), 0 < ρ < 1, the hat notation is used to indicate the
estimate of the parameters θi(k), and P(k) = ρ[Z(k)ZT(k)]⁻¹ is a 3n × 3n sym-
metric positive definite matrix, where Z(k) = [z(1), z(2), . . . , z(k)] is the meas-
urement matrix up to the kth sampling instant. If the errors ei(k) are identically
distributed and independent with zero mean and variance σ², then P(k) can be
interpreted as the covariance matrix of the estimate if ρ is chosen as σ².
The above recursive equations indicate that the estimate of the parameters
θ̂i(k + 1) at the (k + 1)th sampling period is equal to the previous estimate
θ̂i(k) plus a correction term based on the error between the measured state and
the state predicted from the previous parameters θ̂i(k) and the measurement vector z(k).
The components of the vector γ(k)P(k)z(k) are weighting factors which indicate how the
corrections and the previous estimate should be weighted to obtain the new estimate
θ̂i(k + 1). The parameter ρ is a weighting factor and is commonly used for tracking
slowly time-varying parameters by exponentially discounting past measurement
sequences. We can compromise between fast adaptation capabilities and loss of
accuracy in parameter identification by adjusting the weighting factor ρ. In most
applications for tracking slowly time-varying parameters, ρ is usually chosen to be
0.90 < ρ < 1.0.
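The structure described above, with the scalar term [zT(k)P(k)z(k) + ρ] and the correction vector γ(k)P(k)z(k), is that of a standard recursive least-squares update with exponential forgetting. The sketch below mirrors that structure; the variable names and the default value of ρ are illustrative assumptions.

    import numpy as np

    def rls_forgetting_update(theta_i, P, z, dx_next_i, rho=0.95):
        """Generic recursive least-squares step with forgetting factor rho,
        written to mirror the structure of Eqs. (5.8-32) to (5.8-34).

        theta_i   : 3n-vector, current estimate of the ith parameter row
        P         : 3n x 3n symmetric positive definite matrix
        z         : 3n-vector of perturbed states and inputs at time k
        dx_next_i : measured ith perturbed state at time k + 1
        """
        gamma = 1.0 / (z @ P @ z + rho)          # scalar, so "inversion" is trivial
        error = dx_next_i - theta_i @ z          # prediction error
        theta_new = theta_i + gamma * (P @ z) * error
        P_new = (P - gamma * np.outer(P @ z, z @ P)) / rho
        return theta_new, P_new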
Finally, the above identification scheme [Eqs. (5.8-32) to (5.8-34)] can be
started by choosing the initial values of P(0) to be a sufficiently large positive
definite diagonal matrix, and the initial system parameters as

F(0) = I2n + (∂f/∂x)[xn(0), un(0)] T + (∂f/∂x)²[xn(0), un(0)] T²/2          (5.8-36)

G(0) = (∂f/∂u)[xn(0), un(0)] T + (∂f/∂x)[xn(0), un(0)] (∂f/∂u)[xn(0), un(0)] T²/2          (5.8-37)
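These initial values are the second-order truncation of the usual continuous-to-discrete conversion. A minimal sketch, assuming the Jacobians have already been evaluated at the initial nominal state and torque:

    import numpy as np

    def initial_discrete_parameters(A, B, T):
        """Second-order approximation of Eqs. (5.8-36) and (5.8-37).

        A : Jacobian of f with respect to x at (x_n(0), u_n(0)), 2n x 2n
        B : Jacobian of f with respect to u at (x_n(0), u_n(0)), 2n x n
        T : sampling period
        """
        n2 = A.shape[0]
        F0 = np.eye(n2) + A * T + (A @ A) * (T ** 2) / 2.0
        G0 = B * T + (A @ B) * (T ** 2) / 2.0
        return F0, G0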
With the identified system parameters, feedback control
laws can be designed to obtain the required correction torques to reduce the posi-
tion and velocity errors of the manipulator along a nominal trajectory. This can be
done by finding an optimal control u*(k) which minimizes the performance index
J(k) while satisfying the constraints of Eq. (5.8-20):

J(k) = δxT(k + 1) Q δx(k + 1) + δuT(k) R δu(k)          (5.8-38)

where Q is a semipositive definite weighting matrix and R is a posi-
tive definite weighting matrix. The one-step performance index in Eq. (5.8-38)
indicates that the objective of the optimal control is to drive the position and velo-
city errors of the manipulator to zero along the nominal trajectory in a coordinated
position and rate control per interval step while, at the same time, attaching a cost
to the use of control effort. The optimal control solution which minimizes the
functional in Eq. (5.8-38) subject to the constraints of Eq. (5.8-20) is well known
and is given by

δu(k) = −[R + GT(k) Q G(k)]⁻¹ GT(k) Q F(k) δx(k)          (5.8-39)

where F(k) and G(k) are the system parameters obtained from the identification
algorithm [Eqs. (5.8-32) to (5.8-34)] at the kth sampling instant.
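A direct transcription of this one-step law is sketched below, assuming the standard one-step-ahead form given above; NumPy's linear solver is used in place of an explicit matrix inverse.

    import numpy as np

    def one_step_optimal_control(F, G, Q, R, dx):
        """One-step optimal perturbation control, Eq. (5.8-39):
        minimizes dx(k+1)^T Q dx(k+1) + du(k)^T R du(k) subject to
        dx(k+1) = F dx(k) + G du(k)."""
        gain = -np.linalg.solve(R + G.T @ Q @ G, G.T @ Q @ F)
        return gain @ dx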
The identification and control algorithms in Eqs. (5.8-32) to (5.8-34) and
Eq. (5.8-39) do not require complex computations. In Eq. (5.8-34),
[zT(k)P(k)z(k) + ρ] gives a scalar, so its inversion is trivial. Although the
weighting factor ρ can be adjusted for each ith parameter vector θi(k) as desired,
this requires excessive computations in the P(k + 1) matrix. For real-time robot
arm control, such adjustments are not desirable. P(k + 1) is computed only once
at each sampling time using the same weighting factor ρ. Moreover, since P(k) is
a symmetric positive definite matrix, only the upper triangular part of P(k) needs
to be computed. The combined identification and control algorithm can be com-
puted in O(n³) time. The computational requirements of the adaptive perturbation
control are tabulated in Table 5.1. Based on the specifications of a DEC PDP 11/45
computer, a MULF (floating point multiply) instruction requires 7.17 µs. If we assume that for
each ADDF and MULF instruction, we need to fetch data from the core memory
twice and the memory cycle time is 450 ns, then the adaptive perturbation control
requires approximately 7.5 ms to compute the necessary joint torques to servo the
first three joints of a PUMA robot arm for a trajectory set point.
A computer simulation study was con-
ducted (Lee and Chung [1984, 1985]) to evaluate and compare the performance of
the adaptive controller with the controller [Eq. (5.3-65)], which is basically a pro-
portional plus derivative control (PD controller). The study was carried out for
various loading conditions along a given trajectory. The performances of the PD
and adaptive controllers are compared and evaluated for three different loading
[Table 5.2 Comparison of the PD and adaptive controllers under various loading conditions: maximum error (degrees), maximum error (mm), and final position error (degrees) for each joint, for each controller.]
conditions and the results are tabulated in Table 5.2: (1) no-load and 10 percent
error in inertia tensor, (2) half of maximum load and 10 percent error in inertia
tensor, and (3) maximum load (5 lb) and 10 percent error in inertia tensor. In
each case, a 10 percent error in inertia matrices means ±10 percent error about
the measured inertial values. For all the above cases, the adaptive controller shows
better performance than the PD controller with constant feedback gains both in tra-
jectory tracking and the final position errors. Plots of angular position errors for
the above cases for the adaptive control are shown in Figs. 5.16 to 5.18. Addi-
tional details of the simulation results can be found in Lee and Chung [1984, 1985].
[Figures 5.16 to 5.18: plots of joint position errors (deg) of the adaptive control under the various loading conditions.]
The adaptive control strategy described above can be extended to control the
manipulator hand in cartesian coordinates under various loading
conditions by adopting the ideas of resolved motion rate and acceleration controls.
The resolved motion adaptive control is performed at the hand level and is based
on the linearized perturbation system along a desired time-based hand trajectory.
The resolved motion adaptive control differs from the resolved motion acceleration
control by minimizing the position/orientation and angular and linear velocity errors of
the manipulator hand along the hand coordinate axes instead of position and orien-
tation errors. Similar to the previous adaptive control, the controlled system is
characterized by feedforward and feedback components which can be computed
separately and in parallel. The feedforward component resolves the specified
positions, velocities, and accelerations of the hand into a set of values of joint
positions, velocities, and accelerations from which the nominal joint torques are
computed using the Newton-Euler equations of motion to compensate for all the
interaction forces among the various joints. The feedback component computes the
perturbation joint torques which reduce the manipulator hand position and velocity
errors to zero along the nominal hand trajectory. Differentiating the kinematic
relation between the hand velocities and the joint velocities gives the hand accelerations

[ v̇(t) ]
[ Ω̇(t) ]  =  Ṅ(q, q̇) q̇(t) + N(q) q̈(t)          (5.8-40)
In order to include the dynamics of the manipulator into the above kinematics
equation [Eq. (5.8-40)], we need to use the L-E equations of motion given in Eq.
(3.2-26). Since D(q) is always nonsingular, q̈(t) can be obtained from Eq. (3.2-
26) and substituted into Eq. (5.8-40) to obtain the accelerations of the manipulator
hand:

[ v̇(t) ]
[ Ω̇(t) ]  =  Ṅ(q, q̇) q̇(t) + N(q) D⁻¹(q) [τ(t) − h(q, q̇) − c(q)]          (5.8-41)

For convenience, the matrices N(q), N⁻¹(q), and D⁻¹(q) and the vectors h(q, q̇),
c(q), and τ(t) are partitioned as

N(q) ≜ [ N11(q)  N12(q) ]          N⁻¹(q) ≜ [ K11(q)  K12(q) ]          (5.8-42a,b)
       [ N21(q)  N22(q) ]                    [ K21(q)  K22(q) ]

E(q) ≜ D⁻¹(q) = [ E11(q)  E12(q) ]          h(q, q̇) = [ h1(q, q̇) ]          (5.8-43a,b)
                [ E21(q)  E22(q) ]                      [ h2(q, q̇) ]

c(q) = [ c1(q) ]          τ(t) = [ τ1(t) ]          (5.8-44a,b)
       [ c2(q) ]                  [ τ2(t) ]
Combining Eqs. (5.7-4), (5.7-8), and (5.8-41), and using Eqs. (5.8-42) to (5.8-44),
we can obtain the state equations of the manipulator in cartesian coordinates:
[ ṗ(t) ]     [ 0   0   I3                   0                 ] [ p(t) ]
[ Φ̇(t) ]  =  [ 0   0   0                    S(Φ)              ] [ Φ(t) ]
[ v̇(t) ]     [ 0   0   Ṅ11K11 + Ṅ12K21      Ṅ11K12 + Ṅ12K22   ] [ v(t) ]
[ Ω̇(t) ]     [ 0   0   Ṅ21K11 + Ṅ22K21      Ṅ21K12 + Ṅ22K22   ] [ Ω(t) ]

              [ 0                       0                   ]
           +  [ 0                       0                   ]  [ −h1(q, q̇) − c1(q) + τ1(t) ]
              [ N11E11 + N12E21     N11E12 + N12E22         ]  [ −h2(q, q̇) − c2(q) + τ2(t) ]
              [ N21E11 + N22E21     N21E12 + N22E22         ]          (5.8-45)

(for compactness, the argument q of the Nij, Kij, and Eij blocks and the argument (q, q̇) of the Ṅij blocks have been omitted)
where 0 is a 3 x 3 zero matrix. It is noted that the leftmost and middle vectors are
12 x 1, the center left matrix is 12 x 12, the right matrix is 12 x 6, and the
rightmost vector is 6 x 1. Equation (5.8-45) represents the state equations of the
manipulator and will be used to derive an adaptive control scheme in cartesian
coordinates.
Defining the state vector for the manipulator hand as

x(t) ≜ (x1, x2, . . . , x12)T ≜ (pT, ΦT, vT, ΩT)T

and the input vector as u(t) = (τ1T, τ2T)T, Eq. (5.8-45) can be written as a set of
nonlinear state equations ẋi(t) = fi(x, u), i = 1, 2, . . . , 12, where the components
ẋi+6(t) = fi+6(x, u) involve the matrix formed from the Nij(q) and Eij(q) blocks
of Eq. (5.8-45) and the vector −h(q, q̇) − c(q) partitioned as above; in compact form,

ẋ(t) = f[x(t), u(t)]          (5.8-49)
Equation (5.8-49) describes the complete manipulator dynamics in cartesian coordi-
nates, and the control problem is to find a feedback control law u(t) = g[x(t)] to
minimize the manipulator hand error along the desired hand trajectory over a wide
range of payloads. Again, perturbation theory is used and Taylor series expansion
is applied to Eq. (5.8-49) to obtain the associated linearized system and to design a
feedback control law about the desired hand trajectory. The determination of the
feedback control law for the linearized system is identical to the one in the joint
coordinates [Eqs. (5.8-32) to (5.8-34) and Eq. (5.8-39)]. The resolved motion
adaptive control block diagram is shown in Fig. 5.19.
The resolved motion adaptive control proceeds as follows: (1) the
hand trajectory set points pd(t), vd(t), Ωd(t), v̇d(t), and Ω̇d(t) are resolved
into a set of values of desired joint positions, velocities, and accelerations; (2) the
desired joint torques along the hand trajectory are computed from the Newton-
Euler equations of motion using the computed sets of values of joint positions,
velocities, and accelerations; and (3) the feedback component computes the perturbation
torques δu(t) the same way as in Eq. (5.8-39), using the recursive least-squares
identification scheme in Eqs. (5.8-32) to (5.8-34).
A feasibility study of implementing the adaptive controller based on a 60-Hz
sampling frequency and using present-day low-cost microprocessors can be con-
ducted by looking at the computational requirements in terms of mathematical mul-
tiplications and additions.

[Figure 5.19 The resolved motion adaptive control block diagram, showing the inverse kinematics routine, the Newton-Euler equations of motion for the feedforward torques, the one-step optimal controller, and the recursive least-squares identification scheme.]

The computation of the controller can be performed essenti-
ally in three separate stages. It requires about 3348 multiplications and 3118 addi-
tions for a six joint manipulator. Since the feedforward and feedback components
can be computed in parallel, the resolved motion adaptive control requires a total
of 3348 multiplications and 3118 additions in each sampling period. Computa-
tional requirements in terms of multiplications and additions for the adaptive con-
troller for an n-joint manipulator are tabulated in Table 5.3.
Based on the specification sheet of an INTEL 8087 microprocessor, an integer
multiply requires 19 µs, an addition requires 17 µs, and a memory fetch or store
requires 9 µs. Assuming that two memory fetches are required for each multipli-
cation and addition operation, the proposed controller can be computed in about
233 ms, which is not fast enough for closing the servo loop. (Recall from Sec.
5.3.5 that the computation must be completed within approximately 16 ms if the
sampling frequency is 60 Hz.)
Similarly, looking at the specification sheet of a Motorola MC68000 microproces-
sor, where an integer multiply requires 5.6 µs, an addition requires 0.96 µs, and a
memory fetch or store requires 0.32 µs, the proposed controller can be computed
in about 26.24 ms, which is still not fast enough for closing the servo loop.
Finally, looking at the specification sheet of a PDP 11/45 computer, where an integer
multiply requires 3.3 µs, an addition requires 300 ns, and a memory fetch or store
requires 450 ns, the proposed controller can be computed in about 18 ms, which
translates to a sampling frequency of approximately 55 Hz. However, the PDP
11/45 is a uniprocessor machine and the parallel computation assumption is not
valid. This exercise should give the reader an idea of the required processing
speed for adaptive control of a manipulator. We anticipate that faster microproces-
sors, capable of computing the proposed resolved motion adaptive controller within
the required servo time, will become available in the near future.
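The timing comparison above can be reproduced with a few lines of arithmetic, using the operation counts and per-instruction times quoted in the text and the stated assumption of two memory fetches per operation. Small differences from the quoted figures reflect rounding.

    # Rough reproduction of the timing estimates quoted above (times in microseconds).
    ops = {"mult": 3348, "add": 3118}   # resolved motion adaptive control, n = 6
    cpus = {
        "INTEL 8087":       {"mult": 19.0, "add": 17.0, "fetch": 9.0},
        "Motorola MC68000": {"mult": 5.6,  "add": 0.96, "fetch": 0.32},
        "PDP 11/45":        {"mult": 3.3,  "add": 0.3,  "fetch": 0.45},
    }
    for name, t in cpus.items():
        # two memory fetches assumed per multiplication and per addition
        total_us = (ops["mult"] * (t["mult"] + 2 * t["fetch"])
                    + ops["add"] * (t["add"] + 2 * t["fetch"]))
        print(f"{name}: {total_us / 1000.0:.1f} ms per sampling period")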
We have reviewed various robot manipulator control methods. They vary from a
simple servomechanism to advanced control schemes such as adaptive control.
The joint motion and resolved motion control methods discussed servo the arm at the hand or the
joint level and emphasize nonlinear compensations of the coupling forces among
the various joints. We have also discussed various adaptive control strategies.
The model-referenced adaptive control is easy to implement, but suitable reference
models are difficult to choose and it is difficult to establish any stability analysis of
Table 5.3 Computational requirements of the resolved motion adaptive control
(numbers in parentheses are for n = 6)

Computation                      Multiplications                Additions
Compute qd                       4n²                 (144)     4n² − 3n           (126)
Compute (pT, ΦT)T                                     (48)                         (22)
Compute (vT, ΩT)T                n² + 27n − 21       (177)     n² + 18n − 15      (129)
Compute τ                        117n − 24           (678)     103n − 21          (597)
Compute adaptive controller      8n³ + 4n² + n + 1  (1879)     8n³ − n           (1722)
the controlled system. Self-tuning adaptive control fits the input-output data of the
system with an autoregressive model. Both methods neglect the coupling forces
between the joints, which may be severe for manipulators with rotary joints. Adap-
tive control using perturbation theory may be more appropriate for various mani-
pulators because it takes all the interaction forces between the joints into considera-
tion. The adaptive perturbation control strategy was found suitable for controlling
the manipulator in both the joint coordinates and cartesian coordinates. An adap-
tive perturbation control system is characterized by a feedforward component and a
feedback component which can be computed separately and in
parallel. The computations of the adaptive control for a six-link robot arm may be
implemented in low-cost microprocessors for controlling in the joint variable
space, while the resolved motion adaptive control cannot be implemented in
present-day low-cost microprocessors because they still do not have the required
speed to compute the controller parameters for the "standard" 60-Hz sampling fre-
quency.
REFERENCES
Further readings on computed torque control techniques can be found in Paul
[1972], Bejczy [1974], Markiewicz [1973], Luh et al. [1980b], and Lee [1982].
Minimum-time control can be found in Kahn and Roth [1971], and minimum-time
control with torque constraint is discussed by Bobrow and Dubowsky [1983].
Young [1978] discusses the design of a variable structure control for the control of
manipulators. More general theory in variable structure control can be found in
Utkin [1977] and Itkis [1976]. Various researchers have discussed nonlinear
decoupled control, including Falb and Wolovich [1967], Hemami and Camana
[1976], Saridis and Lee [1979], Horowitz and Tomizuka [1980], Freund [1982],
Tarn et al. [1984], and Gilbert and Ha [1984].
Further readings on resolved motion control can be found in Whitney [1969,
1972], who discussed resolved motion rate control. Luh et al. [1980b] extended
this to resolved motion acceleration control. To cope with the
changing loads that a manipulator carries, various adaptive control schemes, both in joint and
cartesian coordinates, have been developed. These adaptive control schemes can
be found in Dubowsky and DesForges [1979], Horowitz and Tomizuka [1980],
Koivo and Guo [1983], Lee and Chung [1984, 1985], Lee and Lee [1984], and
Lee et al. [1984].
An associated problem relating to control is the investigation of efficient con-
trol system architectures for computing the control laws within the required servo
time. Papers written by Lee et al. [1982], Luh and Lin [1982], Orin [1984],
Nigam and Lee [1985], and Lee and Chang [1986b], are oriented toward this
goal.
PROBLEMS
5.1 Consider the development of a single joint positional controller, as discussed in Sec.
5.3.2. If the applied voltage Va(t) is linearly proportional to the position error and to the
rate of the output angular position, what is the open-loop transfer function ΘL(s)/E(s) and
the closed-loop transfer function ΘL(s)/Θd(s) of the system?
5.3 In the computed torque control technique, if the Newton-Euler equations of motion are
used to compute the applied joint torques for a 6 degree-of-freedom manipulator with rotary
joints, what is the required number of multiplications and additions per trajectory set point?
5.4 In the computed torque control technique, the analysis is performed in continuous
time, while the actual control on the robot arm is done in discrete time (i.e., by a sampled-
data system) because we use a digital computer for implementing the controller. Explain
the practical implications of this discrepancy.
5.5 Given the equations of motion of a robot arm expressed in terms of the coefficients
dij, βij, and ci, where g is the gravitational constant, choose an appropriate state variable vector x(t) and a
control vector u(t) for this dynamic system. Assuming that D⁻¹(θ) exists, express the
equations of motion of this robot arm explicitly in terms of the dij's, βij's, and ci's in a state-
space representation with the chosen state-variable vector and control vector.
5.6 Design a variable structure controller for the robot in Prob. 5.5. (See Sec. 5.5.)
5.7 Design a nonlinear decoupled feedback controller for the robot in Prob. 5.5. (See Sec.
5.6.)
5.8 Find the jacobian matrix in the base coordinate frame for the robot in Prob. 5.5. (See
Appendix B.)
5.9 Give two main disadvantages of using the resolved motion rate control.
5.10 Give two main disadvantages of using the resolved motion acceleration control.
5.11 Give two main disadvantages of using the model-referenced adaptive control.
5.12 Give two main disadvantages of using the adaptive perturbation control.
CHAPTER
SIX
SENSING
6.1 INTRODUCTION
The use of external sensing mechanisms allows a robot to interact with its environ-
ment in a flexible manner. This is in contrast to preprogrammed operation in
which a robot is "taught" to perform repetitive tasks via a set of programmed
functions. Although the latter is by far the most predominant form of operation of
present industrial robots, the use of sensing technology to endow machines with a
greater degree of intelligence in dealing with their environment is, indeed, an
active topic of research and development in the robotics field. A robot that can
"see" and "feel" is easier to train in the performance of complex tasks while, at
the same time, requires less stringent control mechanisms than preprogrammed
machines. A sensory, trainable system is also adaptable to a much larger variety
of tasks, thus achieving a degree of universality that ultimately translates into
lower production and maintenance costs.
The function of robot sensors may be divided into two principal categories:
internal state and external state. Internal state sensors deal with the detection of
variables such as arm joint position, which are used for robot control, as discussed
in Chap. 5. External state sensors, on the other hand, deal with the detection of
variables such as range, proximity, and touch. External sensing, the topic of
Chaps. 6 to 8, is used for robot guidance, as well as for object identification and
handling.
External state sensors may be further classified as contact or noncontact sen-
sors. As their name implies, the former class of sensors respond to physical con-
tact, such as touch, slip, and torque. Noncontact sensors rely on the response of a
detector to variations in acoustic or electromagnetic radiation. The most prominent
examples of noncontact sensors measure range, proximity, and visual properties of
an object.
The focus of this chapter is on range, proximity, touch, and force-torque sens-
ing. Vision sensors and techniques are discussed in detail in Chaps. 7 and 8. It is
of interest to note that vision and range sensing generally provide gross guidance
information for a manipulator, while proximity and touch are associated with the
terminal stages of object grasping. Force and torque sensors are used as feedback
devices to control manipulation of an object once it has been grasped (e.g., to
avoid crushing the object or to prevent it from slipping).
6.2 RANGE SENSING

6.2.1 Triangulation
One of the simplest methods for measuring range is through triangulation tech-
niques. This approach can be easily explained with the aid of Fig. 6.1. An object
is illuminated by a narrow beam of light which is swept over the surface. The
sweeping motion is in the plane defined by the line from the object to the detector
and the line from the detector to the source. If the detector is focused on a small
portion of the surface then, when the detector sees the light spot, its distance D to
the illuminated portion of the surface can be calculated from the geometry of Fig.
6.1 since the angle of the source with the baseline and the distance B between the
source and detector are known.
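The computation implied by Fig. 6.1 reduces to one trigonometric relation. The sketch below assumes, for illustration only, that the detector's line of sight is perpendicular to the baseline; the general case follows from the same triangle geometry.

    import math

    def triangulation_range(baseline_B, source_angle_deg):
        """Distance D from the detector to the lit surface patch (Fig. 6.1),
        assuming the detector's line of sight is perpendicular to the baseline.
        baseline_B       : distance B between source and detector
        source_angle_deg : angle of the light beam with the baseline when the
                           detector sees the spot
        """
        return baseline_B * math.tan(math.radians(source_angle_deg))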
The above approach yields a point measurement. If the source-detector
arrangement is moved in a fixed plane (up and down and sideways on a plane per-
pendicular to the paper and containing the baseline in Fig. 6.1), then it is possible
Figure 6.1 Range sensing by triangulation. (Adapted from Jarvis [1983a], © IEEE.)
Figure 6.2 (a) An arrangement of objects scanned by a triangulation ranging device. (b)
Corresponding image with intensities proportional to range. (From Jarvis [1983a],
© IEEE.)
to obtain a set of points whose distances from the detector are known. These dis-
tances are easily transformed to three-dimensional coordinates by keeping track of
the location and orientation of the detector as the objects are scanned. An example
is shown in Fig. 6.2. Figure 6.2a shows an arrangement of objects scanned in the
manner just explained. Figure 6.2b shows the results in terms of an image whose
intensity (darker is closer) is proportional to the range measured from the plane of
motion of the source-detector pair.
Specific range values are computed by first calibrating the system. One of the
simplest arrangements is shown in Fig. 6.3b, which represents a top view of Fig.
6.3a. In this arrangement, the light source and camera are placed at the same
height, and the sheet of light is perpendicular to the line joining the origin of the
light sheet and the center of the camera lens. We call the vertical plane containing
this line the reference plane. Clearly, the reference plane is perpendicular to the
sheet of light, and any vertical flat surface that intersects the sheet will produce a
vertical stripe of light (see Fig. 6.3a) in which every point will have the same
perpendicular distance to the reference plane. The objective of the arrangement
shown in Fig. 6.3b is to position the camera so that every such vertical stripe also
appears vertical in the image plane. In this way, every point along the same
column in the image will be known to have the same distance to the reference
plane.
Figure 6.3 (a) Range measurement by structured lighting approach. (b) Top view of part
(a) showing a specific arrangement which simplifies calibration.
Most systems based on the sheet-of-light approach use digital images. Sup-
pose that the image seen by the camera is digitized into an N × M array (see Sec.
7.2), and let y = 0, 1, 2, . . . , M − 1 be the column index of the image. With
reference to Fig. 6.3b, let λ be the distance from the center of the lens to the image
plane, and let d be the distance, measured on the image plane, between the image
center (column M/2) and the column on which the stripe at distance D0 is imaged;
that is,

d = λ tan θ          (6.2-1)

where

θ = αc − α0          (6.2-2)

A stripe imaged on column k then corresponds to an offset

dk = kd/(M/2) = 2kd/M          (6.2-3)

from column 0 on the image plane and, therefore, to the angle

αk = αc − θk          (6.2-4)

where

tan θk = (d − dk)/λ          (6.2-5)

and 0 ≤ k ≤ M/2. For the remaining values of k (i.e., on the other side of the
optical axis), we have

αk = αc + θk          (6.2-7)

where

θk = tan⁻¹ [ d(2k − M)/(Mλ) ]          (6.2-8)

In either case, the perpendicular distance Dk between the light stripe imaged on
column k and the reference plane is

Dk = B tan αk          (6.2-9)
It is important to note that once B, α0, αc, M, and λ are known, the column
number in the digital image completely determines the distance between the refer-
ence plane and all points in the stripe imaged on that column. Since M and λ are
fixed parameters, the calibration procedure consists simply of measuring B and
determining αc and α0, as indicated above. To determine αc we place a flat verti-
cal surface so that its intersection with the sheet of light is imaged on the center of
the image plane (i.e., at y = M/2). We then physically measure the perpendicular
distance Dc between the surface and the reference plane. From the geometry of
Fig. 6.3b it follows that

αc = tan⁻¹ (Dc/B)          (6.2-10)

In order to determine α0, we move the surface closer to the reference plane
until its light stripe is imaged at y = 0 on the image plane. We then measure D0
and, from Fig. 6.3b,

α0 = tan⁻¹ (D0/B)          (6.2-11)
This completes the calibration procedure.
The principal advantage of the arrangement just discussed is that it results in a
relatively simple range measuring technique. Once calibration is completed, the
distance associated with every column in the image is computed using Eq. (6.2-9)
with k = 0, 1, 2, ... ,M - 1 and the results are stored in memory. Then, dur-
ing normal operation, the distance of any imaged point is obtained simply by deter-
mining its column number in the image and addressing the corresponding location
in memory.
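A lookup table of this kind can be built with a few lines of code. The sketch below combines the two angular branches into a single signed offset and assumes the calibration angles are given in radians; the function and variable names are illustrative.

    import math

    def column_to_distance_table(B, alpha_c, alpha_0, M, lam):
        """Column-to-distance lookup table following Eqs. (6.2-1) to (6.2-9).
        B, alpha_c, alpha_0 : calibration quantities (angles in radians)
        M                   : number of image columns
        lam                 : lens-to-image-plane distance
        """
        d = lam * math.tan(alpha_c - alpha_0)            # Eqs. (6.2-1), (6.2-2)
        table = []
        for k in range(M):
            offset = d * (2 * k - M) / M                 # signed offset from image center
            alpha_k = alpha_c + math.atan(offset / lam)  # Eqs. (6.2-4)/(6.2-7) combined
            table.append(B * math.tan(alpha_k))          # Eq. (6.2-9)
        return table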
Before leaving this section, we point out that it is possible to use the concepts
discussed in Sec. 7.4 to solve a more general problem in which the light source
and camera are placed arbitrarily with respect to each other. The resulting expres-
sions, however, would be considerably more complicated and difficult to handle
from a computational point of view.
Another approach to range sensing is based on the time-of-flight concept. A pulsed
laser system measures the distance as D = cT/2, where T is the pulse transit time
and c is the speed of light. It is of
interest to note that, since light travels at approximately 1 ft/ns, the supporting
electronic instrumentation must be capable of 50-ps time resolution in order to
achieve a ±1/4-inch accuracy in range.
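The relation between timing resolution and range resolution follows directly from D = cT/2, as the short sketch below illustrates (the rounded value of c is an assumption made for readability).

    C_M_PER_S = 3.0e8   # speed of light, approximate

    def pulse_range(transit_time_s):
        """Range from a round-trip pulse transit time: D = cT/2."""
        return C_M_PER_S * transit_time_s / 2.0

    # A 50-ps timing uncertainty corresponds to roughly a quarter inch of range:
    print(pulse_range(50e-12) * 1000.0, "mm")   # about 7.5 mm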
Scanning of the laser beam over a scene is typically accom-
plished by deflecting the laser light via a rotating mirror. The working range of
this device is on the order of 1 to 4 m, with an accuracy of ±0.25 cm. An exam-
ple of the output of this system is shown in Fig. 6.4. Part (a) of this figure shows
an arrangement of objects, and part (b) shows the corresponding range image, in
which the intensity at each point is proportional to
the distance between the sensor and the reflecting surface at that point (darker is
closer). The bright areas around the object boundaries represent discontinuity in
range determined by postprocessing in a computer.
"-s
(DD
Figure 6.4 (a) An arrangement of objects. (b) Image with intensity proportional to range.
(From Jarvis [1983b], © IEEE.)
An alternative to a pulsed laser is to use a continuous laser beam and measure the
phase shift between the outgoing and returning beams, as illustrated in Fig. 6.5a.
The outgoing beam of wavelength λ is split into two: a reference beam which travels
a distance L to a phase measuring device, and a beam which travels a distance D
out to a reflecting surface. The total distance traveled by the reflected beam is
D' = L + 2D. Suppose that D = 0. Under this condition D' = L and both the
reference and reflected beams arrive simultaneously at the phase measuring device.
If we let D increase, the reflected beam travels a longer path and, therefore, a
phase shift is introduced between the two beams at the point of measurement, as
illustrated in Fig. 6.5b. In this case we have that

D' = L + (θ/360) λ          (6.2-12)
It is noted that if θ = 360° the two waveforms are again aligned and we cannot
differentiate between D' = L and D' = L + nλ, n = 1, 2, . . . , based on meas-
urements of phase shift alone. Thus, a unique solution can be obtained only if we
Figure 6.5 (a) Principles of range measurement by phase shift. (b) Shift between outgoing
and returning light waveforms.
require that θ < 360° or, equivalently, that 2D < λ. Since D' = L + 2D, we
have by substitution into Eq. (6.2-12) that

D = (θ/720) λ          (6.2-13)
Because the wavelength of laser light is extremely small (well under 1 µm for a typical
laser), the method sketched in Fig. 6.5 is impractical for robotic applications. A
simple solution to this problem is to modulate the amplitude of the laser light by
using a waveform of much higher wavelength (for example, recalling that
wavelength equals propagation velocity divided by frequency, a modulating frequency of a
few megahertz corresponds to a wavelength of tens of meters). The basic technique is as
before, but the reference signal is now the modulating function. The modulated laser sig-
nal is sent out to the target and the returning beam is stripped of the modulating
signal, which is then compared against the reference to determine phase shift.
Equation (6.2-13) still holds, but we are now working in a more practical range of
wavelengths.
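The range computation is the same arithmetic as Eq. (6.2-13), applied to the modulating wavelength. A minimal sketch, with an illustrative example value:

    def phase_shift_range(phase_deg, modulating_wavelength):
        """Range from the measured phase shift of the modulated beam,
        D = (theta/360) * lambda / 2, valid for theta < 360 degrees (Eq. 6.2-13)."""
        return (phase_deg / 360.0) * modulating_wavelength / 2.0

    # Example: a 36-degree shift of a 30-m modulating wavelength gives D = 1.5 m.
    print(phase_shift_range(36.0, 30.0))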
An important advantage of the continuous vs. the pulsed-light technique is that
the former yields intensity as well as range information (Jarvis [1983a]). How-
ever, continuous systems require considerably higher power. Uncertainties in dis-
tance measurements obtained by either technique require averaging the returned
signal to reduce the error. If we treat the problem as that of measurement noise
being added to a true distance, and we assume that measurements are statistically
independent, then it can be shown that the standard deviation of the average is
equal to 1/√N times the standard deviation of the noise, where N is the number of
samples averaged. In other words, the longer we average, the smaller the uncer-
tainty will be in the distance estimate.
An example of results obtainable with a continuous, modulated laser beam
scanned by a rotating mirror is shown in Fig. 6.7b. Part (a) of this figure is the
range array displayed as an intensity image (brighter is closer). The true intensity
information obtained with the same device is shown in part (b). Note that these
two images complement each other. For example, it is difficult to count the
Figure 6.6 Amplitude-modulated waveform. Note the much larger wavelength of the
modulating function.
Figure 6.7 (a) Range array displayed as an image. (b) Intensity image. (From Duda, Nit-
zan, and Barrett [1979], © IEEE.)
number of objects on top of the desk in Fig. 6.7a, a simple task in the intensity
image. Conversely, it is not possible to determine the distance between the near
and far edges of the desk top by examining the intensity image, while this informa-
tion is readily available in the range array. Techniques for processing this type of
information are discussed in Chaps. 7 and 8.
An ultrasonic range finder is another major exponent of the time-of-flight con-
cept. The basic idea is the same as that used with a pulsed laser. An ultrasonic
chirp is transmitted over a short time period and, since the speed of sound is
known for a specified medium, a simple calculation involving the time interval
between the outgoing pulse and the return echo yields an estimate of the distance
to the reflecting surface.
In an ultrasonic ranging system manufactured by Polaroid, for example, a 1-
ms chirp, consisting of 56 pulses at four frequencies, 50, 53, 57, and 60 kHz, is
transmitted by the transducer. The echo reflected from an
object is detected by the same transducer and processed by an amplifier and other
circuitry capable of measuring range from approximately 0.9 to 35 ft, with an
accuracy of about 1 inch. The mixed frequencies in the chirp are used to reduce
signal cancellation. The beam pattern of this device is around 30 °, which intro-
duces severe limitations in resolution if one wishes to use this device to obtain a
range image similar to those discussed earlier in this section. This is a common
problem with ultrasonic sensors and, for this reason, they are used primarily for
navigation and obstacle avoidance. The construction and operational characteristics
of ultrasonic sensors are discussed in further detail in Sec. 6.3.
6.3 PROXIMITY SENSING

The range sensors discussed in the previous section yield an estimate of the dis-
tance between a sensor and a reflecting object. Proximity sensors, on the other
hand, generally have a binary output which indicates the presence of an object
within a specified distance interval. Typically, proximity sensors are used in robot-
ics for near-field work in connection with object grasping or avoidance. In this
section we consider several fundamental approaches to proximity sensing and dis-
cuss the basic operational characteristics of these sensors.
Inductive proximity sensors respond to the presence of ferromagnetic material. The
operation of these sensors can be explained with the aid of Figs. 6.8 and 6.9. Fig-
ure 6.8a shows a schematic diagram of an inductive sensor which basically con-
sists of a wound coil located next to a permanent magnet packaged in a simple,
rugged housing.
The effect of bringing the sensor in close proximity to a ferromagnetic
material causes a change in the position of the flux lines of the permanent magnet,
as shown in Fig. 6.8b and c. Under static conditions there is no movement of the
flux lines and, therefore, no current is induced in the coil. However, as a fer-
romagnetic object enters or leaves the field of the magnet, the resulting change in
Figure 6.8 (a) An inductive sensor. (b) Shape of flux lines in the absence of a ferromag-
netic body. (c) Shape of flux lines when a ferromagnetic body is brought close to the sen-
sor. (Adapted from Canali [1981a], © Società Italiana di Fisica.)
Figure 6.9 (a) Inductive sensor response as a function of speed. (b) Sensor response as a
function of distance. (Adapted from Canali [1981a], © Società Italiana di Fisica.)
the flux lines induces a current pulse whose amplitude and shape are proportional
to the rate of change in the flux.
The voltage waveform observed at the output of the coil provides an effective
means for proximity sensing. Figure 6.9a illustrates how the voltage measured
across the coil varies as a function of the speed at which a ferromagnetic material
is introduced in the field of the magnet. The polarity of the voltage out of the sen-
sor depends on whether the object is entering or leaving the field. Figure 6.9b
shows the sensor response as a function of the distance to the object. A practical
approach for generating a binary signal is to integrate this waveform. The binary
output remains low as long as the integral value remains below a specified thresh-
old, and then switches to high (indicating proximity of an object) when the thresh-
old is exceeded.
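The integrate-and-threshold scheme just described can be sketched in a few lines; the code below assumes sampled coil-voltage data for an object entering the field and uses illustrative parameter names.

    import numpy as np

    def inductive_binary_output(voltage_samples, dt, threshold):
        """Binary proximity signal obtained by integrating the coil voltage:
        the output goes high once the running integral of the induced voltage
        exceeds a specified threshold."""
        integral = np.cumsum(voltage_samples) * dt   # running integral of v(t)
        return integral > threshold                  # True where an object is declared present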
The Hall effect relates the voltage between two points in a conducting or
semiconducting material to a magnetic
field across the material. When used by themselves, Hall-effect sensors can only
detect magnetized objects. However, when used in conjunction with a permanent
magnet in a configuration such as the one shown in Fig. 6.10, they are capable of
detecting all ferromagnetic materials. When used in this way, a Hall-effect device
senses a strong magnetic field in the absence of a ferromagnetic metal in the near
field (Fig. 6.10a). When such a material is brought in close proximity with the
device, the magnetic field weakens at the sensor due to bending of the field lines
through the material, as shown in Fig. 6.10b.
Hall-effect sensors are based on the principle of a Lorentz force which acts on
a charged particle traveling through a magnetic field. This force acts on an axis
perpendicular to the plane established by the direction of motion of the charged
particle and the direction of the field. That is, the Lorentz force is given by
F = q(v × B), where q is the charge, v is the velocity vector, and B is the magnetic
field vector. In the sensor, this force deflects
the electrons, which would tend to collect at the bottom of the material and thus
produce a voltage across it which, in this case, would be positive at the top.
[Figure 6.10 A Hall-effect sensor used in conjunction with a permanent magnet: (a) in the absence of a ferromagnetic body; (b) with a ferromagnetic body in the near field.]
Bringing a ferromagnetic object close to the magnet-semiconductor arrangement acts to
decrease the strength of the magnetic field, thus reducing the Lorentz force and,
ultimately, the voltage across the semiconductor. This drop in voltage is the key
for sensing proximity with Hall-effect sensors. Binary decisions regarding the
presence of an object are made by thresholding the voltage out of the sensor.
It is of interest to note that using a semiconductor, such as silicon, has a
number of advantages in terms of size, ruggedness, and immunity to electrical
interference. In addition, the use of semiconducting materials allows the construc-
tion of electronic circuitry for amplification and detection directly on the sensor
itself, thus reducing sensor size and cost.
The basic components of a capacitive proximity sensor are shown in Fig. 6.12. The
sensing element is a capacitor whose plates are separated by a
dielectric material. A cavity of dry air is usually placed behind the capacitive ele-
ment to provide isolation. The rest of the sensor consists of electronic circuitry
which can be included as an integral part of the unit, in which case it is normally
sealed within the sensor housing.
Figure 6.12 A capacitive proximity sensor. (From Canali [1981a], © Società Italiana di
Fisica.)
One common detection approach includes the capacitive element in an
oscillator circuit designed so that the oscillation starts only when the capacitance of
the sensor exceeds a predefined threshold value. The start of oscillation is then
translated into an output voltage which indicates the presence of an object. This
method provides a binary output whose triggering sensitivity depends on the
threshold value.
A more complicated approach utilizes the capacitive element as part of a cir-
cuit which is continuously driven by a reference sinusoidal waveform. A change
in capacitance produces a phase shift between the reference signal and a signal
derived from the capacitive element. The phase shift is proportional to the change
in capacitance and can thus be used as a basic mechanism for proximity detection.
Figure 6.13 illustrates how capacitance varies as a function of distance for a
proximity sensor based on the concepts just discussed. It is of interest to note that
sensitivity decreases sharply past a few millimeters, and that the shape of the
response curve depends on the material being sensed. Typically, these sensors are
operated in a binary mode so that a change in the capacitance greater than a preset
threshold T indicates the presence of an object, while changes below the threshold
indicate the absence of an object with respect to detection limits established by the
value of T.
The response of all the proximity sensors discussed thus far depends strongly on
the material being sensed. This dependence can be reduced considerably by using
ultrasonic sensors, whose operation for range detection was introduced briefly at
the end of Sec. 6.2.3. In this section we discuss in more detail the construction
and operation of these sensors and illustrate their use for proximity sensing.
Figure 6.14 shows the structure of a typical ultrasonic transducer used for
proximity sensing. The basic element is an electroacoustic transducer, often of the
piezoelectric ceramic type. The resin layer protects the transducer against humi-
dity, dust, and other environmental factors; it also acts as an acoustical impedance
matcher. Since the same transducer is generally used for both transmitting and
receiving, fast damping of the acoustic energy is necessary to detect objects at
Resin
Ceramic transducer
Acoustic absorber
Figure 6.14 An ultrasonic proximity sensor. (Adapted from Canali [1981b], © Elsevier
Sequoia.)
Figure 6.15 Waveforms associated with an ultrasonic proximity sensor. (Adapted from
Canali [1981b], © Elsevier Sequoia.)
The output signal F in Fig. 6.15 is set high on the positive edge of a pulse in E and is reset to low when E is low and a
pulse occurs in A. In this manner, F will be high whenever an object is present in
the distance interval specified by the parameters of waveform D. That is, F is the
output of interest in an ultrasonic sensor operating in a binary mode.
Figure 6.16 Optical proximity sensor. (From Rosen and Nitzan [1977], © IEEE.)
A typical mode of operation of the optical proximity sensor
shown in Fig. 6.16 is one in which a binary signal is generated when the
received light intensity exceeds a threshold value.
6.4 TOUCH SENSORS

Touch sensors provide information on the contact between a manipulator hand and an
object. They can be subdivided into binary and analog types. Binary sensors are basi-
cally switches which respond to the presence or absence of an object. Analog sen-
sors, on the other hand, output a signal proportional to a local force. These dev-
ices are discussed below.
Multiple binary touch sensors can be used on the inside surface of each finger
to provide further tactile information. In addition, they are often mounted on the
external surfaces of a manipulator hand to provide control signals useful for guid-
ing the hand throughout the work space. This latter use of touch sensing is analo-
Figure 6.17 A simple robot hand equipped with binary touch sensors.
A typical analog touch sensor consists of a spring-loaded rod whose displacement is measured digi-
tally using a code wheel. Knowledge of the spring constant yields the force
corresponding to a given displacement.
During the past few years, considerable effort has been devoted to the
development of tactile sensing arrays capable of yielding touch information over a
wider area than that afforded by a single sensor. The use of these devices is illus-
trated in Fig. 6.19, which shows a robot hand in which the inner surface of each
finger has been covered with a tactile sensing array. The external sensing plates
are typically binary devices and have the function described at the end of Sec.
6.4.1.
Although sensing arrays can be formed by using multiple individual sensors,
one of the most promising approaches to this problem consists of utilizing an array
of electrodes in electrical contact with a compliant conductive material (e.g., a
conductive rubber or elastomer) whose resistance varies as a function of compression.
Several basic approaches used in the construction of artificial skins are shown
in Fig. 6.20. The scheme shown in Fig. 6.20a is based on a "window" concept,
characterized by a conductive material sandwiched between a common ground and
an array of electrodes etched on a fiberglass printed-circuit board. Each electrode
consists of a rectangular area (and hence the name window) which defines one
touch point. Current flows from the common ground to the individual electrodes
as a function of the compression of the conductive material. In the scheme of
Fig. 6.20b, the sensing electrode pairs are located on
the same substrate plane with active electronic circuits using LSI technology. The
conductive material is placed above this plane and insulated from the substrate
plane, except at the electrodes. Resistance changes resulting from material
"C3
compression are measured and interpreted by the active circuits located between
the electrode pairs.
Figure 6.20 Four approaches for constructing artificial skins (see text).
Another possible technique is shown in Fig. 6.20c. In this approach the con-
ductive material is located between two arrays of thin, flat, flexible electrodes that
intersect at right angles (the X and Y electrodes in the figure), with each intersec-
tion defining one touch point. The approach shown in Fig. 6.20d is based on an anisotrop-
ically conductive material. Such materials have the property of being electrically
conductive in only one direction. The sensor is constructed by using a linear array
of thin, flat electrodes in the base. The conductive material is placed on top of
this, with the conduction axis perpendicular to the electrodes and separated from
them by a mesh so that there is no contact between the material and electrodes in
the absence of a force. Application of sufficient force results in contact between
°-n
the material and electrodes. As the force increases so does the contact area,
resulting in lower resistance. As with the method in Fig. 6.20c, one array is
externally driven and the resulting current is measured in the other. It is noted
that touch sensitivity depends on the thickness of the separator.
The methods in Fig. 6.20c and d are based on sequentially driving the ele-
ments of one of the arrays. This often leads to difficulties in interpreting signals
resulting from current flowing through alternate paths between intersections. One
solution is to place a diode element at each
intersection to eliminate current flow through the alternate paths. Another method
is to ground all paths, except the one being driven. By scanning the receiving
array one path at a time, we are basically able to "look" at the contribution of the
individual element intersections.
All the touch sensors discussed thus far deal with measurements of forces nor-
mal to the sensor surface. The measurement of tangential motion to determine slip
is another important aspect of touch sensing. Before leaving this section, we illus-
trate this mode of sensing by describing briefly a method proposed by Bejczy
[1980] for sensing both the direction and magnitude of slip. The device, illustrated
in Fig. 6.21, consists of a free-moving dimpled ball which deflects a thin rod
mounted on the axis of a conductive disk. A number of electrical contacts are
Figure 6.21 A device for sensing the magnitude and direction of slip. (Adapted from
Bejczy [1980], © AAAS.)
evenly spaced under the disk. Ball rotation resulting from an object slipping past
the ball causes the rod and disk to vibrate at a frequency which is proportional to
the speed of the ball. The direction of ball rotation determines which of the con-
tacts touch the disk as it vibrates, pulsing the corresponding electrical circuits and
thus providing signals that can be analyzed to determine the average direction of
the slip.
6.5 FORCE AND TORQUE SENSING

Force and torque sensors are used primarily for measuring the reaction forces
developed at the interface between mechanical assemblies. The principal
approaches for doing this are joint and wrist sensing.† A joint sensor measures the
cartesian components of force and torque acting on a robot joint and adds them
vectorially. For a joint driven by a dc motor, sensing is done simply by measur-
ing the armature current. Wrist sensors, the principal topic of discussion in this
section, are mounted between the tip of a robot arm and the end-effector. They
consist of strain gauges that measure the deflection of the mechanical structure due
to external forces. The characteristics and analysis methodology for this type of
sensor are summarized in the following discussion.
Wrist force sensors are small, compact devices
with a dynamic range of up to 200 lb. In order to reduce hysteresis and increase
the accuracy in measurement, the hardware is generally constructed from one solid
piece of metal, typically aluminum. As an example, the sensor shown in Fig. 6.22
uses eight pairs of semiconductor strain gauges mounted on four deflection bars-
one gauge on each side of a deflection bar. The gauges on the opposite open ends
of the deflection bars are wired differentially to a potentiometer circuit whose out-
put voltage is proportional to the force component normal to the plane of the strain
gauge. The differential connection of the strain gauges provides automatic com-
pensation for variations in temperature. However, this is only a crude first-order
compensation. Since the eight pairs of strain gauges are oriented normal to the
x, y, and z axes of the force coordinate frame, the three components of force F and the three components of torque M can be obtained by properly combining the gauge outputs, as discussed below.
† Another category is pedestal sensing, in which strain gauge transducers are installed between the
base of a robot and its mounting surface in order to measure the components of force and torque acting
on the base. In most applications, however, the base is firmly mounted on a solid surface and no pro-
visions are made for pedestal sensing. The analysis of pedestal sensing is quite similar to that used for
wrist sensing, which is discussed in detail in this section.
Most wrist force sensors function as transducers for transforming forces and moments exerted at the hand into measurable deflections at the wrist. It is important that these deflections do not affect the positioning accuracy of the manipulator. Thus, the required performance specifications can be summarized as follows:
1. High stiffness. High stiffness ensures that disturbing forces are quickly damped out to permit accurate readings during short time intervals. Furthermore, it reduces the magnitude of the deflections of an applied force/moment, which could otherwise add to the positioning error of the hand.
2. Compact design. Since the deflections are measured at the force sensor, it is important to place the sensor as close to the tool as possible to reduce positioning error as a result of the hand rotating through small angles; thus, minimizing the distance between the hand and the sensor reduces the lever arm of forces applied at the hand.
3. Linearity. Good linearity between the output readings and the applied forces/moments permits resolving the forces and moments by simple matrix operations.
4. Low hysteresis and internal friction. Internal friction reduces the sensitivity of the force sensing elements because forces have to overcome this friction before a measurable deflection can be produced. It also produces hysteresis effects that do not restore the position measuring devices back to their original readings.
The wrist force sensor shown in Fig. 6.22 was designed with these criteria
taken into consideration.
Assume that the strain gauges produce readings which vary linearly with respect to changes in their elongation. Then the sensor shown in Fig. 6.22 produces eight raw readings which can be resolved by computer software, using a simple force-torque balance technique, into three orthogonal force and torque components with reference to the force sen-
sor coordinate frame. Such a transformation can be realized by specifying a 6 x 8
matrix, called the resolved force matrix RF (or sensor calibration matrix), which is
postmultiplied by the force measurements to produce the required three orthogonal
force and three torque components. With reference to Fig. 6.22, the resolved
force vector directed along the force sensor coordinates can be obtained mathemat-
ically as
F = R_F W                                                  (6.5-1)

where F = (F_x, F_y, F_z, M_x, M_y, M_z)^T is the vector of resolved force and torque components, W = (w_1, w_2, ..., w_8)^T is the vector of raw gauge readings, and

R_F = \begin{bmatrix} r_{11} & \cdots & r_{18} \\ \vdots & & \vdots \\ r_{61} & \cdots & r_{68} \end{bmatrix}                    (6.5-2)
In Eq. (6.5-2), the r_ij ≠ 0 are the factors required for conversion from the raw readings W (in volts) to force (in newtons) and moment (in newton-meters). If the coupling effects between the gauges are negligible, then by looking at Fig. 6.22 and summing the forces and moments about the origin of the sensor coordinate frame located at the center of the force sensor, we can obtain the above equation with some of the r_ij
equal to zero. With reference to Fig. 6.22, the resolved force matrix in Eq. (6.5-2) becomes

R_F = \begin{bmatrix} 0 & 0 & r_{13} & 0 & 0 & 0 & r_{17} & 0 \\ r_{21} & 0 & 0 & 0 & r_{25} & 0 & 0 & 0 \\ & & & \vdots & & & & \\ 0 & r_{52} & 0 & 0 & 0 & r_{56} & 0 & 0 \\ & & & \vdots & & & & \end{bmatrix}          (6.5-3)
Quite often, this assumption is not valid and some coupling does exist. For some
force sensors, this may produce as much as 5 percent error in the calculation of
force resolution. Thus, in practice, it is usually necessary to replace the resolved
force matrix RF by a matrix which contains 48 nonzero elements. This "full"
matrix is used to calibrate the force sensor, as discussed in Sec. 6.5.3. The
resolved force vector F is used to generate the necessary error actuating control
signal for the manipulator. The disadvantage of using a wrist force sensor is that
it only provides force vectors resolved at the assembly interface for a single con-
tact.
The calibration of the wrist force sensor is done by finding a calibration matrix, here written R_C (an 8 x 6 matrix), which satisfies

W = R_C F                                                  (6.5-4)

The resolved force matrix is then obtained as the pseudoinverse of the calibration matrix:

R_F = [(R_C)^T R_C]^{-1} (R_C)^T                           (6.5-8)

The calibration matrix can be identified by placing known weights along the axes of the sensor coordinate frame. Details about the experimental procedure for calibrating the resolved force matrix can be found in a paper by Shimano and Roth [1979].
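The resolution and calibration computations above are easy to carry out numerically. The following is a minimal sketch, assuming NumPy is available; the load and reading arrays are placeholders standing in for real calibration measurements, and the least-squares identification shown here only illustrates Eqs. (6.5-1), (6.5-4), and (6.5-8), not the experimental procedure of Shimano and Roth.

```python
import numpy as np

# Hypothetical calibration data (illustration only):
# F_known: each row is a known applied load (Fx, Fy, Fz, Mx, My, Mz)
# W_meas:  each row holds the eight raw gauge readings observed for that load
rng = np.random.default_rng(0)
F_known = rng.normal(size=(20, 6))
W_meas = rng.normal(size=(20, 8))

# Identify the 8 x 6 calibration matrix R_C in W = R_C F by least squares;
# stacking the trials row-wise gives  W_meas ≈ F_known @ R_C.T
X, *_ = np.linalg.lstsq(F_known, W_meas, rcond=None)
R_C = X.T                                    # 8 x 6

# Resolved force matrix, Eq. (6.5-8): R_F = [(R_C)^T R_C]^{-1} (R_C)^T
R_F = np.linalg.inv(R_C.T @ R_C) @ R_C.T     # 6 x 8 (same as np.linalg.pinv(R_C))

# Resolve a new set of raw readings into force/torque components, Eq. (6.5-1)
w_new = rng.normal(size=8)
F_resolved = R_F @ w_new                     # (Fx, Fy, Fz, Mx, My, Mz)
```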
6.6 CONCLUDING REMARKS

The material presented in this chapter is representative of the state of the art in external robot sensors. It must be kept in mind, however, that the performance of these sensors is still rather primitive when compared with human capabilities.
As indicated at the beginning of this chapter, the majority of present industrial robots perform their tasks using preprogrammed techniques and without the aid of sensory feedback. The relatively recent widespread interest in flexible automation, however, has led to increased efforts in the area of sensor-driven robotic systems. Sensor development is indeed a dynamic field where new techniques and applications are commonplace in the literature. For this reason, the topics included in this chapter were selected primarily for their value as fundamental material which would serve as a foundation for further study in this field.
REFERENCES
Several survey articles on robotic sensing are Rosen and Nitzan [1977], Bejczy
[1980], Galey and Hsia [1980], McDermott [1980], and Merritt [1982]. Further
reading on laser range finders may be found in Duda et al. [1979] and Jarvis
[1983a, 1983b]. For further reading on the material in Sec. 6.3 see Spencer
[1980], Catros and Espiau [1980], and Canali et al. [1981a, 1981b]. Further read-
ing for the material in Sec. 6.4 may be found in Harmon [1982], Hillis [1982],
Marck [1981], and Raibert and Tanner [1982]. See also the papers by Beni et al.
[1983], McDermott [1980], and Hackwood et al. [1983]. Pedestal sensors (Sec.
PROBLEMS
6.2 A sheet-of-light range sensor illuminating a work space with two objects produced the
following output on a television screen:
Assuming that the ranging system is set up as in Fig. 6.3b with M = 256, D0 = 1 m, D = 2 m, B = 3 m, and λ = 35 mm, obtain the distance between the objects in the direction of the light sheet.
6.3 (a) A helium-neon (wavelength 632.8 nm) continuous-beam laser range finder is modu-
lated with a 30-MHz sine wave. What is the distance to an object that produces a phase
shift of 180°? (b) What is the upper limit on the distance for which this device would pro-
duce a unique reading?
6.4 Compute the upper limit on the frequency of a modulating sine wave to achieve a
finder.
6.5 (a) Suppose that the accuracy of a laser range finder is corrupted by noise with a gaussian distribution of mean 0 and standard deviation of 100 cm. How many measurements would have to be averaged to obtain an accuracy of ±0.5 cm with a 0.95 probability? (b) If, instead of being 0, the mean of the noise were 5 cm, how would you compensate the range measurements for this effect?
6.6 With reference to Fig. 6.15, give a set of waveforms for an ultrasonic sensor capable of
measuring range instead of just yielding a binary output associated with proximity.
6.7 Suppose that an ultrasonic proximity sensor is used to detect the presence of objects
within 0.5 m of the device. At time t = 0 the transducer is pulsed for 0.1 ms. Assume
that it takes 0.4 ms for resonances to die out within the transducer and 20 ms for echoes in
the environment to die out. Given that sound travels at 344 m/s: (a) What range of time
should be used as a window? (b) At what time can the device be pulsed again? (c) What is
6.8 A proximity sensor operates by the intersection of two identical beams. The cone formed by each beam originates at the lens and has a
vertex located 4 cm in front of the center of the opposite lens. Given that each lens has a
diameter of 4 mm, and that the lens centers are 6 mm apart, over what approximate range
will this sensor detect an object? Assume that an object is detected anywhere in the sensi-
tive volume.
6.9 A 3 x 3 touch array is scanned by driving the rows (one at a time) with 5 V. A column is read by holding it at ground and measuring the current. Assume that the undriven rows and unread columns are left in high impedance. A given force pattern against the array results in the following resistances at each electrode intersection (row, column): 100 Ω at (1, 1), (1, 3), (3, 1), and (3, 3); and 50 Ω at (2, 2) and (3, 2). All other
intersections have infinite resistance. Compute the current measured at each row-column
intersection in the array, taking into account the cross-point problem.
6.10 Repeat Prob. 6.9 assuming (a) that all undriven rows and all columns are held at ground; and (b) that a diode (0.6-V drop) is in series with the resistance at each junction.
6.11 A wrist force sensor is mounted on a PUMA robot equipped with a parallel jaw
gripper and a sensor calibration procedure has been performed to obtain the calibration
matrix RF. Unfortunately, after you have performed all the measurements, someone
remounts a different gripper on the robot. Do you need to recalibrate the wrist force sen-
sor? Justify your answer.
CHAPTER
SEVEN
LOW-LEVEL VISION
7.1 INTRODUCTION
Vision is a powerful sensing mechanism that allows the machine to respond to its environment in an "intelligent" and flexible manner. The use of vision and other sensing schemes, such as those discussed in Chap. 6, is motivated by the continuing need to increase the flexibility and scope of applications of robotic systems. While proximity, touch, and force sensing play a significant role in the improvement of robot performance, vision is recognized as the most powerful of robot sensory capabilities. As may be expected, the sensors, concepts, and processing hardware associated with machine vision are considerably more complex than those associated with the sensing approaches discussed in Chap. 6.
Robot vision may be defined as the process of extracting, characterizing, and interpreting information from images of a three-dimensional world. This process may be subdivided into six principal areas: (1) sensing, (2) preprocessing, (3) segmentation, (4) description, (5) recognition, and (6) interpretation. Sensing is the process that yields a
visual image. Preprocessing deals with techniques such as noise reduction and enhancement of details. Segmentation is the process that partitions an image into objects of interest. Description deals with the computation of features (e.g., size, shape) suitable for differentiating one type of object from another. Recognition is the process that identifies these objects (e.g., wrench, bolt, engine block). Finally, interpretation assigns meaning to an ensemble of recognized objects.
It is convenient to group these areas according to the sophistication involved in their implementation, into low-, medium-, and high-level vision. While there are no clear-cut boundaries between these subdivisions, they do provide a useful framework for categorizing the various processes that are inherent components of a machine vision system. For instance, we associate with low-level vision those processes that are primitive in the sense that they may be considered "automatic reactions," requiring no intelligence on the part of the vision system. In our discussion, we shall treat sensing and preprocessing as low-level vision functions. This will take us from the image formation process itself to compensations such as noise reduction, and finally to the extraction of primitive image features such as intensity discontinuities.
We will associate with medium-level vision those processes that extract, characterize, and label components in an image. In terms of the six subdivisions given above, we will treat segmentation, description, and recognition of individual objects as medium-level vision functions. High-level vision
refers to processes that attempt to emulate cognition. While algorithms for low- and medium-level vision are reasonably well understood, our knowledge of high-level vision processes is considerably more vague and speculative. As discussed in Chap. 8, these limitations lead
to the formulation of constraints and idealizations intended to reduce the complex-
ity of this task.
The categories and subdivisions discussed above are suggested to a large
extent by the way machine vision systems are generally implemented. It is not
implied that these subdivisions represent a model of human vision nor that they are
carried out independently of each other. We know, for example, that recognition
and interpretation are highly interrelated functions in a human. These relation-
ships, however, are not yet understood to the point where they can be modeled
analytically. Thus, the subdivision of functions addressed in this discussion may
be viewed as a practical approach for implementing state-of-the-art machine vision
systems, given our level of understanding and the analytical tools currently avail-
able in this field.
The material in this chapter deals with sensing, preprocessing, and with concepts and techniques needed to implement low-level vision functions. Although vision is inherently a three-dimensional activity, most of the work in machine vision is carried out using two-dimensional images.

7.2 IMAGE ACQUISITION

In this section we are interested in three main topics: (1) the principal imaging techniques used for robotic vision, (2) the effects of sampling on spatial resolution, and (3) the effects of amplitude quantization on intensity resolution. The mathematics of image formation are discussed in Sec. 7.4.
The principal devices used for robotic vision are television cameras, consisting either of a tube or a solid-state imaging sensor, and associated electronics. Although an in-depth treatment of these devices is beyond the scope of the present discus-
sion, we will consider the principles of operation of the vidicon tube, a commonly
used representative of the tube family of TV cameras. Solid-state imaging sensors
will be introduced via a brief discussion of charge-coupled devices (CCDs), which
are one of the principal exponents of this technology. Solid-state imaging devices
offer a number of advantages over tube cameras, including lighter weight, smaller
size, longer life, and lower power consumption. However, the resolution of cer-
tain tubes is still beyond the capabilities of solid-state cameras.
As shown schematically in Fig. 7.1a, the vidicon camera tube is a cylindrical
glass envelope containing an electron gun at one end, and a faceplate and target at
the other. The beam is focused and deflected by voltages applied to the coils
shown in Fig. 7.1a. The deflection circuit causes the beam to scan the inner sur-
face of the target in order to "read" the image, as explained below. The inner
surface of the glass faceplate is coated with a transparent metal film which forms an electrode available at the outside of the tube.
Figure 7.1 (a) Schematic of a vidicon tube. (b) Electron beam scanning pattern.
A thin photosensitive "target" layer is deposited onto the metal film; this layer consists of very
small resistive globules whose resistance is inversely proportional to light intensity.
Behind the photosensitive target there is a positively charged fine wire mesh which
decelerates electrons emitted by the gun so that they reach the target surface with
essentially zero velocity.
In normal operation, a positive voltage is applied to the metal coating of the
faceplate. In the absence of light, the photosensitive material behaves as a dielec-
tric, with the electron beam depositing a layer of electrons on the inner surface of
the target surface to balance the positive charge on the metal coating. As the elec-
tron beam scans the surface of the target layer, the photosensitive layer thus
becomes a capacitor with negative charge on the inner surface and positive charge
on the other side. When light strikes the target layer, its resistance is reduced and
electrons are allowed to flow and neutralize the positive charge. Since the amount
of electronic charge that flows is proportional to the amount of light in any local
area of the target, this effect produces an image on the target layer that is identical
to the light image on the faceplate of the tube; that is, the remaining concentration
of electron charge is high in dark areas and lower in light areas. As the beam
again scans the target it replaces the lost charge, thus causing a current to flow in
the metal layer and out one of the tube pins. This current is proportional to the
number of electrons replaced and, therefore, to the light intensity at a particular
location of the scanning beam. This variation in current during the electron beam
scanning motion produces, after conditioning by the camera circuitry, a video sig-
nal proportional to the intensity of the input image.
The principal scanning standard used in the United States is shown in Fig. 7.1b. The electron beam scans the entire surface of the target 30 times per second, each complete scan (called a frame) consisting of 525 lines of which 480
contain image information. If the lines were scanned sequentially and the result displayed on a monitor, the image would flicker perceptibly. This is avoided by dividing each frame into two interlaced fields, each consisting of 262.5 lines and scanned 60 times each second, or twice the frame rate. The first field of each frame scans the odd lines (shown
dashed in Fig. 7.1b), while the second field scans the even lines. This scanning scheme is the standard used for broadcast television in the United States. Other standards exist which yield higher line rates per frame, but their principle of operation is essentially the same. For example, a popular scanning approach in computer vision and digital image processing is based on 559 lines, of which 512 contain image data. Working with integer powers of 2 has a number of advantages for digital processing.
CCD sensors can be divided into two principal categories: line scan sensors and area sensors. The basic component of a line
scan CCD sensor is a row of silicon imaging elements called photosites. Image
photons pass through a transparent polycrystalline silicon gate structure and are
absorbed in the silicon crystal, thus creating electron-hole pairs. The resulting
photoelectrons are collected in the photosites, with the amount of charge collected at each photosite being proportional to the illumination intensity at that location.
Figure 7.2 (a) CCD line scan sensor. (b) CCD area sensor.
Charge-coupled area arrays are similar to the line scan sensors, with the
exception that the photosites are arranged in a matrix format and there is a gate-
transport register combination between columns of photosites, as shown in Fig.
7.2b. The contents of odd-numbered photosites are sequentially gated into the
vertical transport registers and then into the horizontal transport register. The con-
tent of this register is fed into an amplifier whose output is a line of video.
Repeating this procedure for the even-numbered lines completes the second field of
a TV frame. This "scanning" mechanism is repeated 30 times per second.
Line scan cameras obviously yield only one line of an input image. These
devices are ideally suited for applications in which objects are moving past the
sensor (as in conveyor belts). The motion of an object in the direction perpendicu-
lar to the sensor produces a two-dimensional image. Line scan sensors with resolutions ranging between 256 and 2048 elements are not uncommon. The resolu-
tions of area sensors range between 32 x 32 elements at the low end and 256 x 256 elements or more. In the following discussion an image is denoted by f(x, y), where the value of f at spatial coordinates (x, y) is proportional to the brightness (intensity) of the image at that point. Figure 7.3 illustrates this concept, as well as the coordinate convention on which all subsequent discus-
sions will be based. We will often use the variable z to denote intensity variations
in an image when the spatial location of these variations is of no interest.
The intensities of a monochrome image are assumed to vary from black to white in shades of gray. The terms intensity and gray level
will be used interchangeably.
Suppose that a continuous image is sampled uniformly into an array of N rows
and M columns, where each sample is also quantized in intensity. This array,
called a digital image, may be represented as
f(x, y) = \begin{bmatrix} f(0, 0) & f(0, 1) & \cdots & f(0, M-1) \\ f(1, 0) & f(1, 1) & \cdots & f(1, M-1) \\ \vdots & \vdots & & \vdots \\ f(N-1, 0) & f(N-1, 1) & \cdots & f(N-1, M-1) \end{bmatrix}

Figure 7.3 Coordinate convention for image representation. The value of any point (x, y) is given by the value (intensity) of f at that point.

Each element of the array is referred to as an image element, picture element, or pixel. It is common practice to let N, M, and the number of discrete intensity levels
of each quantized pixel be integer powers of 2.
In order to gain insight into the effect of sampling and quantization, consider
Fig. 7.4. Part (a) of this figure shows an image sampled into an array of N x N
pixels with N = 512; the intensity of each pixel is quantized into one of 256
discrete levels. Figure 7.4b to e shows the same image, but with N = 256, 128,
64, and 32. In all cases the number of allowed intensity levels was kept at 256.
Since the display area used for each image was the same (512 x 512 display
points), pixels in the lower resolution images were duplicated in order to fill the
entire display field. This produced a checkerboard effect that is particularly visible
in the low-resolution images. It is noted that the 256 x 256 image is reasonably
close to Fig. 7.4a, but image quality deteriorated rapidly for the other values of N.
Figure 7.5 illustrates the effect produced by reducing the number of intensity
levels while keeping the spatial resolution constant at 512 x 512. The 256-, 128-,
and 64-level images are of acceptable quality. The 32-level image, however, shows some degradation as a result of using too few gray levels. This effect is considerably more visible as ridgelike structures (called false contours) in the image
displayed with 16 levels, and increases sharply thereafter.
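The two degradations illustrated in Figs. 7.4 and 7.5 are easy to reproduce with array code. The sketch below is only an illustration under the assumption that NumPy is available and that the image is a square 8-bit array; it subsamples the image to an n x n grid and requantizes it to a smaller number of gray levels.

```python
import numpy as np

def reduce_resolution(img, n):
    """Subsample a square image to n x n pixels, then duplicate pixels back to the
    original size so the checkerboard effect of Fig. 7.4 becomes visible."""
    step = img.shape[0] // n
    small = img[::step, ::step][:n, :n]
    return np.repeat(np.repeat(small, step, axis=0), step, axis=1)

def reduce_gray_levels(img, levels):
    """Requantize an 8-bit image to the given number of intensity levels (Fig. 7.5)."""
    bin_size = 256 // levels
    return (img // bin_size) * bin_size

# Example with a synthetic 512 x 512 ramp image
image = np.tile((np.arange(512) % 256).astype(np.uint8), (512, 1))
coarse = reduce_resolution(image, 64)       # 64 x 64 sampling grid
few_levels = reduce_gray_levels(image, 16)  # 16 gray levels (false contours)
```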
The number of samples and intensity levels required to produce a useful (in
the machine vision sense) reproduction of an original image depends on the image itself and on the intended application. As a basis for comparison, the requirements for obtaining images of monochrome broadcast-television quality are on the order
of 512 x 512 pixels with 128 intensity levels. As a rule, a minimum system for general-purpose vision work should have spatial resolution capabilities on the order
Figure 7.4 Effects of reducing sampling-grid size. (a) 512 x 512. (b) 256 x 256. (c)
128 x 128. (d) 64 x 64. (e) 32 x 32.
Figure 7.5 A 512 x 512 image displayed with 256, 128, 64, 32, 16, 8, 4, and 2 levels.

7.3 ILLUMINATION TECHNIQUES

The diffuse-lighting arrangement shown in Fig. 7.6a is used for objects characterized by smooth, regular surfaces. This lighting scheme is gen-
erally employed in applications where surface characteristics are important. An
example is shown in Fig. 7.7. Backlighting, as shown in Fig. 7.6b, produces a
black and white (binary) image. This technique is ideally suited for applications in
which silhouettes of objects are sufficient for recognition or other measurements.
An example is shown in Fig. 7.8.
The structured-lighting approach shown in Fig. 7.6c consists of projecting points, stripes, or grids onto the work surface. This lighting technique has two
important advantages. First, it establishes a known light pattern on the work
space, and disturbances of this pattern indicate the presence of an object, thus sim-
plifying the object detection problem. Second, by analyzing the way in which the
light pattern is distorted, it is possible to gain insight into the three-dimensional characteristics of the object. Two examples of the structured-lighting approach are shown in Fig. 7.9. The first shows a block illuminated by parallel light planes
which become light stripes upon intersecting a flat surface. The example shown in
Fig. 7.9b consists of two light planes projected from different directions, but con-
verging on a single stripe on the surface, as shown in Fig. 7.10a. A line scan
camera, located above the surface and focused on the stripe would see a continu-
ous line of light in the absence of an object. This line would be interrupted by an
object which breaks both light planes simultaneously. This particular approach is
ideally suited for objects moving on a conveyor belt past the camera. As shown in
Fig. 7.10b, two light sources are used to guarantee that the object will break the
Figure 7.6 Four basic illumination schemes. (From Mundy [1977], © IEEE.)
light stripe only when it is directly below the camera. It is of interest to note that
the line scan camera sees only the line on which the two light planes converge, but
two-dimensional information can be accumulated as the object moves past the cam-
era.
The directional-lighting approach shown in Fig. 7.6d is useful primarily for
inspection of object surfaces. Defects on the surface, such as pits and scratches,
can be detected by using a highly directed light beam (e.g., a laser beam) and
measuring the amount of scatter. For flaw-free surfaces little light is scattered
upward to the camera. On the other hand, the presence of a flaw generally increases the amount of light scattered to the camera, thus facilitating detection of
a defect. An example is shown in Fig. 7.11.
7.4 IMAGING GEOMETRY

Translation. Suppose that we wish to translate a point with coordinates (X, Y, Z) to a new location by using displacements (X_0, Y_0, Z_0). The translation is easily accomplished with the equations

X* = X + X_0
Y* = Y + Y_0                                               (7.4-1)
Z* = Z + Z_0

where (X*, Y*, Z*) are the coordinates of the new point. Equation (7.4-1) can be expressed in matrix form by writing

\begin{bmatrix} X^* \\ Y^* \\ Z^* \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & X_0 \\ 0 & 1 & 0 & Y_0 \\ 0 & 0 & 1 & Z_0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}          (7.4-2)
Figure 7.9 Two examples of structured lighting. (Part (a) is from Rocher and Keissling
[1975], © Kaufmann, Inc.; part (b) is from Myers [1980], © IEEE.)
Figure 7.10 (a) Top view of two light planes intersecting in a line of light. (b) Object will
be seen by the camera only when it interrupts both light planes. (Adapted from Holland
[1979], © Plenum.)
This representation is simplified considerably by using square matrices. With this in mind, we write Eq. (7.4-2) in the form

\begin{bmatrix} X^* \\ Y^* \\ Z^* \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & X_0 \\ 0 & 1 & 0 & Y_0 \\ 0 & 0 & 1 & Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}          (7.4-3)
In terms of the values of X*, Y*, and Z*, Eqs. (7.4-2) and (7.4-3) are clearly
equivalent.
Throughout this section, we will use the unified matrix representation

v* = A v                                                   (7.4-4)

where A is a 4 x 4 transformation matrix, v is a column vector containing the original homogeneous coordinates,

v = \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}                                     (7.4-5)

and v* is a column vector whose components are the transformed coordinates X*, Y*, Z*, 1.
With this representation, translation is accomplished by using the transformation matrix

T = \begin{bmatrix} 1 & 0 & 0 & X_0 \\ 0 & 1 & 0 & Y_0 \\ 0 & 0 & 1 & Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}                    (7.4-7)

Scaling. Scaling by factors S_x, S_y, and S_z along the X, Y, and Z axes is given by the transformation matrix

S = \begin{bmatrix} S_x & 0 & 0 & 0 \\ 0 & S_y & 0 & 0 \\ 0 & 0 & S_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}                    (7.4-8)
Rotation. The transformations used for three-dimensional rotation are inherently
more complex than the transformations discussed thus far. The simplest form of
these transformations is for rotation of a point about the coordinate axes. To rotate
a given point about an arbitrary point in space requires three transformations: The
first translates the arbitrary point to the origin, the second performs the rotation,
and the third translates the point back to its original position.
With reference to Fig. 7.12, rotation of a point about the Z coordinate axis by
an angle θ is achieved by using the transformation

R_θ = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}            (7.4-9)
The rotation angle θ is measured clockwise when looking at the origin from a point on the +Z axis. It is noted that this transformation affects only the values of the X and Y coordinates.
Figure 7.12 Rotation of a point about each of the coordinate axes. Angles are measured
clockwise when looking along the rotation axis toward the origin.
Rotation of a point about the X coordinate axis by an angle α is performed by using the transformation

R_α = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha & 0 \\ 0 & -\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}            (7.4-10)

and rotation of a point about the Y coordinate axis by an angle β is performed by using the transformation

R_β = \begin{bmatrix} \cos\beta & 0 & -\sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ \sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}            (7.4-11)
The application of several transformations can be represented by a single 4 x 4 matrix. For example, translation, scaling, and rotation about the Z axis of a point v are given by

v* = R_θ [S(Tv)] = Av                                      (7.4-12)
Although our discussion thus far has been limited to transformations of a sin-
gle point, the same ideas extend to transforming a set of m points simultaneously
by using a single transformation. With reference to Eq. (7.4-5), let
VI, v2, ... , v,,, represent the coordinates of m points. If we form a 4 x m matrix
V whose columns are these column vectors, then the simultaneous transformation
of all these points by a 4 x 4 transformation matrix A is given by
V* = AV (7.4-13)
Many of the transformations discussed above have simple inverses. For example, the inverse of the translation matrix T is obtained by negating the displacements:

T^{-1} = \begin{bmatrix} 1 & 0 & 0 & -X_0 \\ 0 & 1 & 0 & -Y_0 \\ 0 & 0 & 1 & -Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}              (7.4-14)

The inverse of more complex transformation matrices is usually obtained by numerical techniques.
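These basic transformations compose by ordinary matrix multiplication, as in Eq. (7.4-12). A minimal sketch, assuming NumPy; the displacement, scale factors, and angle below are arbitrary illustrative values.

```python
import numpy as np

def translation(x0, y0, z0):
    T = np.eye(4)
    T[:3, 3] = [x0, y0, z0]                  # Eq. (7.4-7)
    return T

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])        # Eq. (7.4-8)

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    # Eq. (7.4-9): clockwise-positive rotation about the Z axis
    return np.array([[ c,  s, 0, 0],
                     [-s,  c, 0, 0],
                     [ 0,  0, 1, 0],
                     [ 0,  0, 0, 1]])

# Composite transformation of Eq. (7.4-12): v* = R_theta S T v
A = rotation_z(np.deg2rad(30.0)) @ scaling(2.0, 2.0, 2.0) @ translation(1.0, 2.0, 3.0)

v = np.array([1.0, 0.0, 0.0, 1.0])           # homogeneous point (X, Y, Z, 1)
v_star = A @ v
# Several points at once, Eq. (7.4-13): columns of V are homogeneous points
V = np.column_stack([v, [0.0, 1.0, 0.0, 1.0]])
V_star = A @ V
```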
Figure 7.13 Basic model of the imaging process. The camera coordinate system (x, y, z) is aligned with the world coordinate system (X, Y, Z).

Figure 7.13 shows the basic model of the image formation process. The camera coordinate system (x, y, z) has the image plane coincident with the xy plane and the optical axis (established by the center of the lens) along the z axis. Thus, the center of the image plane is at the origin, and the center of the
lens is at coordinates (0, 0, λ). If the camera is in focus for distant objects, λ is the focal length of the lens. In this section, it is assumed that the camera coordi-
nate system is aligned with the world coordinate system (X, Y, Z). This restric-
tion will be removed in the following section.
Let (X, Y, Z) be the world coordinates of any point in a 3D scene, as shown
o..
in Fig. 7.13. It will be assumed throughout the following discussion that Z > λ;
that is, all points of interest lie in front of the lens. What we wish to do first is
obtain a relationship that gives the coordinates (x, y) of the projection of the point
(X, Y, Z) onto the image plane. This is easily accomplished by the use of similar
triangles. With reference to Fig. 7.13, it follows that

x/λ = -X/(Z - λ) = X/(λ - Z)                               (7.4-16)

and

y/λ = -Y/(Z - λ) = Y/(λ - Z)                               (7.4-17)
where the negative signs in front of X and Y indicate that image points are actually inverted, as can be seen from the geometry of Fig. 7.13. Solving for the image-plane coordinates gives

x = λX/(λ - Z)                                             (7.4-18)

and

y = λY/(λ - Z)                                             (7.4-19)
It is important to note that these equations are nonlinear because they involve divi-
sion by the variable Z. Although we could use them directly as shown above, it is
often convenient to express these equations in matrix form as we did in the previ-
ous section for rotation, translation, and scaling. This can be accomplished easily
by using homogeneous coordinates.
The homogeneous coordinates of a point with cartesian coordinates (X, Y, Z)
are defined as (kX, kY, kZ, k), where k is an arbitrary, nonzero constant. Clearly,
conversion of homogeneous coordinates back to cartesian coordinates is accom-
plished by dividing the first three homogeneous coordinates by the fourth. A point
in the cartesian world coordinate system may be expressed in vector form as
w = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}                                          (7.4-20)

and in homogeneous form as

w_h = \begin{bmatrix} kX \\ kY \\ kZ \\ k \end{bmatrix}                                (7.4-21)

The perspective transformation matrix is defined as

P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1/λ & 1 \end{bmatrix}                    (7.4-22)
Then the product P w_h yields a vector which we shall denote by c_h:

c_h = P w_h = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1/λ & 1 \end{bmatrix} \begin{bmatrix} kX \\ kY \\ kZ \\ k \end{bmatrix} = \begin{bmatrix} kX \\ kY \\ kZ \\ -kZ/λ + k \end{bmatrix}          (7.4-23)
The elements of c_h are the camera coordinates in homogeneous form. As indicated above, these coordinates can be converted to cartesian form by dividing each of the first three components of c_h by the fourth. Thus, the cartesian coordinates
of any point in the camera coordinate system are given in vector form by
c = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} λX/(λ - Z) \\ λY/(λ - Z) \\ λZ/(λ - Z) \end{bmatrix}              (7.4-24)
The first two components of c are the (x, y) coordinates in the image plane of a
projected 3D point (X, Y, Z), as shown earlier in Eqs. (7.4-18) and (7.4-19). The
third component is of no interest to us in terms of the model in Fig. 7.13. As will
be seen below, this component acts as a free variable in the inverse perspective
transformation.
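The projection defined by Eqs. (7.4-22) through (7.4-24) is simple to verify numerically. A minimal sketch, assuming NumPy; the focal length and world point below are arbitrary.

```python
import numpy as np

def project(world_point, lam):
    """Project a cartesian world point (X, Y, Z) onto the image plane using the
    homogeneous perspective matrix P of Eq. (7.4-22); lam is the focal length."""
    X, Y, Z = world_point
    P = np.array([[1, 0, 0,        0],
                  [0, 1, 0,        0],
                  [0, 0, 1,        0],
                  [0, 0, -1.0/lam, 1]])
    wh = np.array([X, Y, Z, 1.0])        # homogeneous coordinates (k = 1)
    ch = P @ wh                          # Eq. (7.4-23)
    return ch[:2] / ch[3]                # (x, y) image coordinates, Eq. (7.4-24)

# Same result as Eqs. (7.4-18) and (7.4-19): x = lam*X/(lam - Z), y = lam*Y/(lam - Z)
x, y = project((1.0, 0.5, 10.0), lam=0.035)
```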
The inverse perspective transformation maps an image point back into 3D.
Thus, from Eq. (7.4-23),

w_h = P^{-1} c_h                                           (7.4-25)

where

P^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/λ & 1 \end{bmatrix}                    (7.4-26)
Suppose that a given image point has coordinates (x0, yo, 0), where the 0 in
the z location simply indicates the fact that the image plane is located at z = 0.
This point can be expressed in homogeneous vector form as
c_h = \begin{bmatrix} kx_0 \\ ky_0 \\ 0 \\ k \end{bmatrix}                             (7.4-27)
Application of Eq. (7.4-25) then yields the homogeneous world coordinate vector
w_h = \begin{bmatrix} kx_0 \\ ky_0 \\ 0 \\ k \end{bmatrix}                             (7.4-28)
or, in cartesian coordinates,

w = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \\ 0 \end{bmatrix}                              (7.4-29)
This is obviously not what one would expect since it gives Z = 0 for any 3D
point. The problem here is caused by the fact that mapping a 3D scene onto the
image plane is a many-to-one transformation. The image point (xo, yo)
corresponds to the set of colinear 3D points which lie on the line that passes
through (x_0, y_0, 0) and (0, 0, λ). The equations of this line in the world coordinate system are obtained from Eqs. (7.4-18) and (7.4-19); that is,

X = (x_0/λ)(λ - Z)                                         (7.4-30)

and

Y = (y_0/λ)(λ - Z)                                         (7.4-31)
These equations show that, unless we know something about the 3D point which
generated a given image point (for example, its Z coordinate), we cannot com-
pletely recover the 3D point from its image. This observation, which is certainly
not unexpected, can be used as a way to formulate the inverse perspective transfor-
mation simply by using the z component of Ch as a free variable instead of 0.
Thus, letting

c_h = \begin{bmatrix} kx_0 \\ ky_0 \\ kz \\ k \end{bmatrix}                            (7.4-32)

it follows from Eq. (7.4-25) that

w_h = \begin{bmatrix} kx_0 \\ ky_0 \\ kz \\ kz/λ + k \end{bmatrix}                     (7.4-33)

which, in cartesian coordinates, gives the world point

w = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} λx_0/(λ + z) \\ λy_0/(λ + z) \\ λz/(λ + z) \end{bmatrix}              (7.4-34)
Solving for z in terms of Z in the last equation and substituting in the first two expressions yields

X = (x_0/λ)(λ - Z)    and    Y = (y_0/λ)(λ - Z)
which agrees with the above observation that recovering a 3D point from its image
by means of the inverse perspective transformation requires knowledge of at least
one of the world coordinates of the point. This problem will be addressed again in
Sec. 7.4.5.
Figure 7.14 shows a world coordinate system (X, Y, Z) used to locate both the camera and 3D points (denoted by w). The figure also shows the camera coordinate system (x, y, z) and image points
(denoted by c). It is assumed that the camera is mounted on a gimbal which
allows pan through an angle 0 and tilt through an angle a. In this discussion, pan
is defined as the angle between the x and X axes, and tilt as the angle between the z and Z axes. The offset of the center of the gimbal from the origin of the world
coordinate system is denoted by vector w0, and the offset of the center of the
imaging plane with respect to the gimbal center is denoted by a vector r, with components r_1, r_2, and r_3.
The concepts developed in the last two sections provide all the necessary tools
to derive a camera model based on the geometrical arrangement of Fig. 7.14. The
approach is to bring the camera and world coordinate systems into alignment by applying a set of transformations. After this has been accomplished, we simply apply the perspective transformation given in Eq. (7.4-22) to obtain the image-
plane coordinates of any given world point. In other words, we first reduce the
problem to the geometrical arrangement shown in Fig. 7.13 before applying the
perspective transformation.
Suppose that, initially, the camera was in normal position, in the sense that the
gimbal center and origin of the image plane were at the origin of the world coordi-
nate system, and all axes were aligned. Starting from normal position, the
geometrical arrangement of Fig. 7.14 can be achieved in a number of ways. We
assume the following sequence of steps: (1) displacement of the gimbal center
from the origin, (2) pan of the x axis, (3) tilt of the z axis, and (4) displacement of the image plane with respect to the gimbal center.
The sequence of mechanical steps just discussed obviously does not affect the world points, although the set of points seen by the camera after it was moved from
normal position is quite different. However, we can achieve normal position again
simply by applying exactly the same sequence of steps to all world points. Since a
camera in normal position satisfies the arrangement of Fig. 7.13 for application of the perspective transformation given in Eq. (7.4-22), the problem reduces to applying to every world point a set of transformations which correspond to the steps given above.
Translation of the origin of the world coordinate system to the location of the
gimbal center is accomplished by using the following transformation matrix:
G = \begin{bmatrix} 1 & 0 & 0 & -X_0 \\ 0 & 1 & 0 & -Y_0 \\ 0 & 0 & 1 & -Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}                (7.4-38)
Recall that pan is the angle between the x and X axes; in normal position, these two axes are aligned. In order to pan the x axis through the desired angle, we simply rotate it by θ. The rotation is with respect to the z axis and is accomplished by using the transformation matrix R_θ given in Eq. (7.4-9). In
other words, application of this matrix to all points (including the point Gwh )
effectively rotates the x axis to the desired location. When using Eq. (7.4-9), it is
important to keep clearly in mind the convention established in Fig. 7.12. That is,
angles are considered positive when points are rotated clockwise, which implies a
counterclockwise rotation of the camera about the z axis. The unrotated (0°) posi-
tion corresponds to the case when the x and X axes are aligned.
At this point in the development the z and Z axes are still aligned. Since tilt is
the angle between these two axes, we tilt the camera an angle a by rotating the z
axis by a. The rotation is with respect to the x axis and is accomplished by apply-
ing the transformation matrix Ra given in Eq. (7.4-10) to all points (including the
point RBGwh). As above, a counterclockwise rotation of the camera implies posi-
tive angles, and the 0° mark is where the z and Z axes are aligned.†
† A useful way to visualize these transformations is to construct an axis system (e.g., with pipe
cleaners), label the axes x, y, and z, and perform the rotations manually, one axis at a time.
According to the discussion in Sec. 7.4.4, the two rotation matrices can be
concatenated into a single matrix, R = R_α R_θ. It then follows from Eqs. (7.4-9)
and (7.4-10) that
R = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta\cos\alpha & \cos\theta\cos\alpha & \sin\alpha & 0 \\ \sin\theta\sin\alpha & -\cos\theta\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}          (7.4-39)
Finally, displacement of the origin of the image plane with respect to the gimbal center is achieved by the transformation matrix

C = \begin{bmatrix} 1 & 0 & 0 & -r_1 \\ 0 & 1 & 0 & -r_2 \\ 0 & 0 & 1 & -r_3 \\ 0 & 0 & 0 & 1 \end{bmatrix}                (7.4-40)

Applying the perspective transformation to a world point that has undergone this complete series of transformations yields its homogeneous image coordinates,

c_h = P C R G w_h                                          (7.4-41)

which reduces to the simple model of Fig. 7.13 when X_0 = Y_0 = Z_0 = 0, r_1 = r_2 = r_3 = 0, and α = θ = 0°.
Example: As an illustration of the concepts just discussed, consider the camera shown in Fig. 7.15. The camera is offset from the origin and is viewing the scene with a
pan of 135 ° and a tilt of 135 °. We will follow the convention established
above that transformation angles are positive when the camera rotates in a
counterclockwise manner when viewing the origin along the axis of rotation.
Let us examine in detail the steps required to move the camera from nor-
mal position to the geometry shown in Fig. 7.15. The camera is shown in
normal position in Fig. 7.16a, and displaced from the origin in Fig. 7.16b. It
is important to note that, after this step, the world coordinate axes are used
only to establish angle references. That is, after displacement of the world-
coordinate origin, all rotations take place about the new (camera) axes. Figure
7.16c shows a view along the z axis of the camera to establish pan. In this
case the rotation of the camera about the z axis is counterclockwise so world
points are rotated about this axis in the opposite direction, which makes 0 a
positive angle. Figure 7.16d shows a view after pan, along the x axis of the
camera to establish tilt. The rotation about this axis is counterclockwise,
which makes a a positive angle. The world coordinate axes are shown dashed
in the latter two figures to emphasize the fact that their only use is to establish
the zero reference for the pan and tilt angles. We do not show in this figure
the final step of displacing the image plane from the center of the gimbal.
For this arrangement the parameter values are

X_0 = 0 m
Y_0 = 0 m
Z_0 = 1 m
α = 135°
θ = 135°
r_1 = 0.03 m
r_2 = r_3 = 0.02 m
λ = 35 mm = 0.035 m

Carrying out the computation for the corner of the block yields the image-plane coordinates

x = 0.0007 m    and    y = 0.009 m
It is of interest to note that these coordinates are well within a 1 x 1 inch
(0.025 x 0.025 m) imaging plane. If, for example, we had used a lens with a
200-mm focal length, it is easily verified from the above results that the corner
of the block would have been imaged outside the boundary of a plane with
these dimensions (i.e., it would have been outside the effective field of view of
the camera).
Finally, we point out that all coordinates obtained via the use of Eqs.
(7.4-42) and (7.4-43) are with respect to the center of the image plane. A
change of coordinates would be required to use the convention established ear-
lier, in which the origin of an image is at its top left corner.
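The chain of transformations in Eq. (7.4-41) can be coded directly. A minimal sketch, assuming NumPy; because the world coordinates of the block corner are not reproduced in the text, the world point used below is made up purely to exercise the model, so its image coordinates will not match the numbers in the example.

```python
import numpy as np

def camera_image_coords(w, w0, r, pan, tilt, lam):
    """Image-plane coordinates of world point w for the gimbaled camera model of
    Eq. (7.4-41): c_h = P C R G w_h.  Angles are in radians."""
    X0, Y0, Z0 = w0
    r1, r2, r3 = r
    G = np.eye(4); G[:3, 3] = [-X0, -Y0, -Z0]               # Eq. (7.4-38)
    ct, st = np.cos(pan), np.sin(pan)
    Rtheta = np.array([[ ct, st, 0, 0], [-st, ct, 0, 0],
                       [  0,  0, 1, 0], [  0,  0, 0, 1]])   # Eq. (7.4-9)
    ca, sa = np.cos(tilt), np.sin(tilt)
    Ralpha = np.array([[1,   0,  0, 0], [0,  ca, sa, 0],
                       [0, -sa, ca, 0], [0,   0,  0, 1]])   # Eq. (7.4-10)
    C = np.eye(4); C[:3, 3] = [-r1, -r2, -r3]               # Eq. (7.4-40)
    P = np.eye(4); P[3, 2] = -1.0 / lam                     # Eq. (7.4-22)
    wh = np.array([*w, 1.0])
    ch = P @ C @ Ralpha @ Rtheta @ G @ wh
    return ch[:2] / ch[3]

# Gimbal, pan/tilt, and lens values from the example; the world point is hypothetical
x, y = camera_image_coords(w=(1.0, 1.0, 0.2),
                           w0=(0.0, 0.0, 1.0), r=(0.03, 0.02, 0.02),
                           pan=np.deg2rad(135.0), tilt=np.deg2rad(135.0),
                           lam=0.035)
```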
Figure 7.16 (a) Camera in normal position. (b) Gimbal center displaced from origin. (c)
Observer view of rotation about z axis to determine pan angle. (d) Observer view of rota-
tion about x axis for tilt.
In practice, the camera parameters in Eq. (7.4-41) are often unknown, but they can be estimated by using the camera itself as a measuring device. This requires a set
of image points whose world coordinates are known, and the computational pro-
cedure used to obtain the camera parameters using these known points is often
referred to as camera calibration.
With reference to Eq. (7.4-41), let A = PCRG. The elements of A contain
all the camera parameters, and we know from Eq. (7.4-41) that Ch = Awh. Let-
ting k = 1 in the homogeneous representation, we may write
\begin{bmatrix} c_{h1} \\ c_{h2} \\ c_{h3} \\ c_{h4} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}          (7.4-44)
From the discussion in the previous two sections we know that the camera coordi-
nates in cartesian form are given by
x = c_{h1}/c_{h4}                                          (7.4-45)

and

y = c_{h2}/c_{h4}                                          (7.4-46)
Substituting c_{h1} = x c_{h4} and c_{h2} = y c_{h4} in Eq. (7.4-44) and expanding the matrix product yields

x c_{h4} = a_{11} X + a_{12} Y + a_{13} Z + a_{14}
y c_{h4} = a_{21} X + a_{22} Y + a_{23} Z + a_{24}          (7.4-47)
c_{h4} = a_{41} X + a_{42} Y + a_{43} Z + a_{44}

where expansion of c_{h3} has been ignored because it is related to z. Substituting c_{h4} into the first two equations of (7.4-47) and rearranging terms gives

a_{11} X + a_{12} Y + a_{13} Z - a_{41} xX - a_{42} xY - a_{43} xZ - a_{44} x + a_{14} = 0            (7.4-48)

a_{21} X + a_{22} Y + a_{23} Z - a_{41} yX - a_{42} yY - a_{43} yZ - a_{44} y + a_{24} = 0            (7.4-49)
The calibration procedure then consists of (1) obtaining m ≥ 6 world points with known coordinates (X_i, Y_i, Z_i), i = 1, 2, ..., m (there are two equations involving the coordinates of each of these points, so at least six points are needed), (2) imaging these points with the camera in a given position to obtain the corresponding image points (x_i, y_i), i = 1, 2, ..., m, and (3) using these results in Eqs. (7.4-48) and (7.4-49) to solve for the unknown coefficients. There are many numerical techniques for finding an optimal solution to a linear system of equations such as (7.4-48) and (7.4-49) (see, for example, Noble [1969]).
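One straightforward way to set up Eqs. (7.4-48) and (7.4-49) is as an overdetermined linear system solved by least squares. The sketch below assumes NumPy and, because A is defined only up to a scale factor, fixes a_44 = 1; that normalization is an assumption made for this sketch, not part of the text.

```python
import numpy as np

def calibrate_camera(world_pts, image_pts):
    """Estimate the camera matrix A of Eq. (7.4-44) from m >= 6 point pairs by
    stacking Eqs. (7.4-48) and (7.4-49); a_44 is fixed to 1 (sketch assumption)."""
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        # unknowns: a11..a14, a21..a24, a41..a43 (a31..a34 do not appear)
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z]); rhs.append(x)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z]); rhs.append(y)
    sol, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    A = np.zeros((4, 4))
    A[0, :] = sol[0:4]       # a11..a14
    A[1, :] = sol[4:8]       # a21..a24
    A[3, :3] = sol[8:11]     # a41..a43
    A[3, 3] = 1.0            # normalization; the third row is not recoverable here
    return A
```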
7.4.5 Stereo Imaging

It was noted in Sec. 7.4.2 that mapping a 3D scene onto an image plane is a many-to-one transformation. That is, an image point does not uniquely determine
the location of a corresponding world point. It is shown in this section that the
missing depth information can be obtained by using stereoscopic (stereo for short)
imaging techniques.
As shown in Fig. 7.17, stereo imaging involves obtaining two separate image
views of an object of interest (e.g., a world point w). The distance between the
centers of the two lenses is called the baseline, and the objective is to find the
coordinates (X, Y, Z) of a point w given its image points (x_1, y_1) and (x_2, y_2).
It is assumed that the cameras are identical and that the coordinate systems of both
cameras are perfectly aligned, differing only in the location of their origins, a con-
dition usually met in practice. Recall our convention that, after the camera and
world coordinate systems have been brought into coincidence, the xy plane of the
image is aligned with the XY plane of the world coordinate system. Then, under
the above assumption, the Z coordinate of w is exactly the same for both camera
coordinate systems.
Suppose that we bring the first camera into coincidence with the world coordi-
nate system, as shown in Fig. 7.18. Then, from Eq. (7.4-30), w lies on the line with (partial) coordinates

X_1 = (x_1/λ)(λ - Z_1)                                     (7.4-50)
where the subscripts on X and Z indicate that the first camera was moved to the
origin of the world coordinate system, with the second camera and w following,
but keeping the relative arrangement shown in Fig. 7.17. If, instead, the second
camera had been brought to the origin of the world coordinate system, then we
would have that w lies on the line with (partial) coordinates
X_2 = (x_2/λ)(λ - Z_2)                                     (7.4-51)
However, due to the separation between cameras and the fact that the Z coordinate
of w is the same for both camera coordinate systems, it follows that
X_2 = X_1 + B                                              (7.4-52)
and Z_2 = Z_1 = Z                                          (7.4-53)
Figure 7.18 Top view of Fig. 7.17 with the first camera brought into coincidence with the
world coordinate system.
Substitution of Eqs. (7.4-52) and (7.4-53) into Eqs. (7.4-50) and (7.4-51)
results in the following equations:
X_1 + B = (x_2/λ)(λ - Z)                                   (7.4-54)

and

X_1 = (x_1/λ)(λ - Z)                                       (7.4-55)

Subtracting Eq. (7.4-55) from (7.4-54) and solving for Z yields the expression

Z = λ - λB/(x_2 - x_1)                                     (7.4-56)
which indicates that if the difference between the corresponding image coordinates x_2 and x_1 can be determined, and the baseline and focal length are known, calculating the Z coordinate of w is straightforward. The key task is to find two corresponding points in different images of the same scene (the correspondence problem). A common approach is to select a point within a small region in one of the image views and then attempt to find the best
matching region in the other view by using correlation techniques, as discussed in
328 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Chap. 8. When the scene contains distinct features, such as prominent corners, a
feature-matching approach will generally yield a faster solution for establishing
correspondence.
Before leaving this discussion, we point out that the calibration procedure
developed in the previous section is directly applicable to stereo imaging by simply
treating the cameras independently.
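Equation (7.4-56) reduces depth recovery to a subtraction and a division once correspondence is established. A minimal sketch; the image coordinates, baseline, and focal length below are illustrative values only.

```python
def stereo_depth(x1, x2, baseline, lam):
    """Depth (world Z coordinate) of a point from its two image x coordinates,
    using Eq. (7.4-56): Z = lam - lam*B/(x2 - x1)."""
    return lam - lam * baseline / (x2 - x1)

# Example: corresponding image points 1 mm apart, 6-cm baseline, 35-mm lens
Z = stereo_depth(x1=0.001, x2=0.000, baseline=0.06, lam=0.035)
```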
7.5 SOME BASIC RELATIONSHIPS BETWEEN PIXELS

7.5.1 Neighbors of a Pixel

A pixel p at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by

(x + 1, y),  (x - 1, y),  (x, y + 1),  (x, y - 1)
This set of pixels, called the 4-neighbors of p, will be denoted by N4 (p). It is
noted that each of these pixels is a unit distance from (x, y) and also that some of
the neighbors of p will be outside the digital image if (x, y) is on the border of
the image.
The four diagonal neighbors of p have coordinates

(x + 1, y + 1),  (x + 1, y - 1),  (x - 1, y + 1),  (x - 1, y - 1)

and are denoted by N_D(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N_8(p).
7.5.2 Connectivity
Let V be the set of intensity values of pixels which are allowed to be connected;
for example, if only connectivity of pixels with intensities of 59, 60, and 61 is
desired, then V = {59, 60, 61}. We consider three types of connectivity:

Figure 7.19 (a) Arrangement of pixels. (b) 8-neighbors of the pixel labeled "2." (c) m-neighbors of the same pixel.

1. 4-connectivity. Two pixels p and q with values from V are 4-connected if q is in the set N_4(p).
2. 8-connectivity. Two pixels p and q with values from V are 8-connected if q is in the set N_8(p).
3. m-connectivity (mixed connectivity). Two pixels p and q with values from V are m-connected if
(a) q is in N4(p), or
(b) q is in N_D(p) and the set N_4(p) ∩ N_4(q) is empty. (This is the set of pixels that are 4-neighbors of both p and q and whose values are from V.)
Mixed connectivity is a modification of 8-connectivity and is introduced to
eliminate the multiple connections which often cause difficulty when 8-connectivity
is used. For example, consider the pixel arrangement shown in Fig. 7.19a.
Assuming V = {1, 2}, the 8-neighbors of the pixel with value 2 are shown by
dashed lines in Fig. 7.19b. It is important to note the ambiguity that results from
multiple connections to this pixel. This ambiguity is removed by using m-
connectivity, as shown in Fig. 7.19c.
A pixel p is adjacent to a pixel q if they are connected. We may define 4-,
8-, or m-adjacency, depending on the type of connectivity specified. Two image
subsets S_1 and S_2 are adjacent if some pixel in S_1 is adjacent to some pixel in S_2.
A path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t)
is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)

where (x_0, y_0) = (x, y) and (x_n, y_n) = (s, t), (x_i, y_i) is adjacent to (x_{i-1}, y_{i-1}), 1 ≤ i ≤ n, and n is the length of the path. We may define 4-, 8-,
or m-paths, depending on the type of adjacency used.
If p and q are pixels of an image subset S, then p is connected to q in S if
there is a path from p to q consisting entirely of pixels in S. For any pixel p in S,
the set of pixels in S that are connected to p is called a connected component of S.
It then follows that any two pixels of a connected component are connected to each
other, and that distinct connected components are disjoint.
7.5.3 Distance Measures

Given pixels p and q with coordinates (x, y) and (s, t), respectively, the euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)² + (y - t)²]^{1/2}                    (7.5-1)

For this distance measure, the pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).
The D4 distance (also called city-block distance) between p and q is defined as
D_4(p, q) = |x - s| + |y - t|                              (7.5-2)
In this case the pixels having a D4 distance less than or equal to some value r
from (x, y) form a diamond centered at (x, y). For example, the pixels with D4
distance ≤ 2 from (x, y) (the center point) form the following contours of con-
stant distance:
2
2 1 2
2 1 0 1 2
2 1 2
2
It is noted that the pixels with D4 = 1 are the 4-neighbors of (x, y).
The D8 distance (also called chessboard distance) between p and q is defined
as

D_8(p, q) = max(|x - s|, |y - t|)                          (7.5-3)

In this case the pixels with D_8 distance less than or equal to some value r form a
square centered at (x, y). For example, the pixels with D_8 distance ≤ 2 from
(x, y) (the center point) form the following contours of constant distance:
2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2
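The D_4 and D_8 measures are simple enough to state directly in code. A short, self-contained sketch (no libraries needed); the sample points are arbitrary.

```python
def d4_distance(p, q):
    """City-block distance, Eq. (7.5-2)."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8_distance(p, q):
    """Chessboard distance, Eq. (7.5-3)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

# The 4-neighbors of a pixel are exactly the pixels at D4 distance 1,
# and the 8-neighbors are exactly the pixels at D8 distance 1.
assert d4_distance((2, 3), (3, 3)) == 1
assert d8_distance((2, 3), (3, 4)) == 1
```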
It is noted that the D_4 distance between two points p and q is equal to the length of the shortest 4-path between these two points. Similar com-
ments apply to the D8 distance. In fact, we can consider both the D4 and D8 dis-
tances between p and q regardless of whether or not a connected path exists
between them, since the definition of these distances involve only the coordinates
of these points. When dealing with m-connectivity, however, the value of the dis-
tance (length of the path) between two pixels depends on the values of the pixels
LOW-LEVEL VISION 331
along the path as well as their neighbors. For instance, consider the following
arrangement of pixels, where it is assumed that p, p_2, and p_4 are valued 1 and p_1 and p_3 may be valued 0 or 1:

    p_3  p_4
    p_1  p_2
7.6 PREPROCESSING
7.6.1 Foundation
In this section we consider two basic approaches to preprocessing. The first is
based on spatial-domain techniques and the second deals with frequency-domain
concepts via the Fourier transform. Together, these approaches encompass most of the preprocessing algorithms used in robot vision systems.

Spatial-Domain Methods. The spatial domain refers to the aggregate of pixels composing an image, and spatial-domain methods are procedures that operate directly on these pixels. Preprocessing functions in the spatial domain may be expressed as

g(x, y) = h[f(x, y)]                                       (7.6-1)
where f(x, y) is the input image, g(x, y) is the resulting (preprocessed) image,
and h is an operator on f, defined over some neighborhood of (x, y). It is also
possible to let h operate on a set of input images, such as performing the pixel-
by-pixel sum of K images for noise reduction, as discussed in Sec. 7.6.2.
The principal approach used in defining a neighborhood about (x, y) is to use
a square or rectangular subimage area centered at (x, y), as shown in Fig. 7.20.
The center of the subimage is moved from pixel to pixel starting, say, at the top
left corner, and applying the operator at each location (x, y) to yield g(x, y).
Although other neighborhood shapes, such as a circle, are sometimes used, square
arrays are by far the most predominant because of their ease of implementation.
The simplest form of h is obtained when the neighborhood is 1 x 1 and, therefore, g depends only on the value of f at (x, y). In this case h becomes an intensity mapping or transformation T of the form

s = T(r)                                                   (7.6-2)
where, for simplicity, we have used s and r as variables denoting, respectively, the
intensity of f(x, y) and g(x, y) at any point (x, y). This type of transformation
is discussed in more detail in Sec. 7.6.3.
One of the spatial-domain techniques used most frequently is based on the use
of so-called convolution masks (also referred to as templates, windows, or filters).
Basically, a mask is a small (e.g., 3 x 3) two-dimensional array, such as the one
shown in Fig. 7.20, whose coefficients are chosen to detect a given property in an
image. As an introduction to this concept, suppose that we have an image of
constant intensity which contains widely isolated pixels whose intensities are
different from the background. These points can be detected by using the mask
shown in Fig. 7.21. The procedure is as follows: The center of the mask (labeled
8) is moved around the image, as indicated above. At each pixel position in the
image, we multiply every pixel that is contained within the mask area by the
corresponding mask coefficient; that is, the pixel in the center of the mask is multi-
plied by 8, while its 8-neighbors are multiplied by - 1. The results of these nine
multiplications are then summed. If all the pixels within the mask area have the
same value (constant background), the sum will be zero. If, on the other hand, the
center of the mask is located at one of the isolated points, the sum will be different
Figure 7.21 A mask for detecting isolated points different from a constant background.
from zero. If the isolated point is in an off-center position, the sum will also be
different from zero, but the magnitude of the response will be weaker. These
weaker responses can be eliminated by comparing the sum against a threshold.
As shown in Fig. 7.22, if we let w_1, w_2, ..., w_9 represent mask coefficients and consider the 8-neighbors of (x, y), we may generalize the preceding discussion as that of performing the following operation:

h[f(x, y)] = w_1 f(x-1, y-1) + w_2 f(x-1, y) + w_3 f(x-1, y+1)
           + w_4 f(x, y-1) + w_5 f(x, y) + w_6 f(x, y+1)
           + w_7 f(x+1, y-1) + w_8 f(x+1, y) + w_9 f(x+1, y+1)                (7.6-3)
Figure 7.22 A general 3 x 3 mask showing coefficients and corresponding image pixel
locations.
Before leaving this section, we point out that the concept of neighborhood pro-
cessing is not limited to 3 x 3 areas nor to the cases treated thus far. For
instance, we will use neighborhood operations in subsequent discussions for noise
reduction, to obtain variable image thresholds, to compute measures of texture, and
to obtain the skeleton of an object.
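The mask operation of Eq. (7.6-3) and the point-detection mask of Fig. 7.21 can be illustrated with a direct (unoptimized) implementation. A minimal sketch assuming NumPy; the test image and the detection threshold are made-up values.

```python
import numpy as np

def apply_mask(image, mask):
    """Response of Eq. (7.6-3): at every interior pixel, multiply the 3 x 3
    neighborhood by the mask coefficients and sum the products."""
    rows, cols = image.shape
    out = np.zeros_like(image, dtype=float)
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            out[x, y] = np.sum(mask * image[x-1:x+2, y-1:y+2])
    return out

# Mask of Fig. 7.21 for detecting isolated points on a constant background
point_mask = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)

img = np.full((7, 7), 10.0)
img[3, 3] = 50.0                       # isolated bright point
response = apply_mask(img, point_mask)
detected = np.abs(response) > 100.0    # threshold chosen arbitrarily for the example
```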
Frequency-Domain Methods. The frequency domain refers to an aggregate, of
complex pixels resulting from taking the Fourier transform of an image. The con-
cept of "frequency" is often used in interpreting the Fourier transform and arises
from the fact that this particular transform is composed of complex sinusoids. Due
to extensive processing requirements, frequency-domain methods are not nearly as
widely used in robotic vision as are spatial-domain techniques. However, the
Fourier transform does play an important role in areas such as the analysis of
object motion and object description. In addition, many spatial techniques for
enhancement and restoration are founded on concepts whose origins can be traced
to a Fourier transform formulation. The material in this section will serve as an
introduction to these concepts. Let f(x), x = 0, 1, 2, ..., N - 1, denote a one-dimensional, sampled function. The forward Fourier transform of f(x) is defined as

F(u) = (1/N) Σ_{x=0}^{N-1} f(x) exp[-j2πux/N]              (7.6-4)

for u = 0, 1, 2, ..., N - 1. Similarly, for an N x N image f(x, y), the two-dimensional forward Fourier transform is defined as

F(u, v) = (1/N) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x, y) exp[-j2π(ux + vy)/N]      (7.6-6)

for u, v = 0, 1, 2, ..., N - 1.
It follows by direct substitution that each of these equations can be expressed as separate one-dimensional
summations of the form shown in Eq. (7.6-4). This leads to a straightforward pro-
cedure for computing the two-dimensional Fourier transform using only a one-dimensional FFT algorithm: We first compute and save the transform of each row of f(x, y), thus producing a two-dimensional array of intermediate results. These
results are multiplied by N and the one-dimensional transform of each column is
computed. The final result is F(u, v). Similar comments apply for computing
f(x, y) given F(u, v). The order of computation from a row-column approach
can be reversed to a column-row format without affecting the final result.
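The row-column procedure just described can be checked against a library FFT. A minimal sketch assuming NumPy; note that NumPy's FFT applies no 1/N factor, so the result is rescaled at the end to match the normalization of Eq. (7.6-6).

```python
import numpy as np

def dft2_rowcol(f):
    """Two-dimensional DFT computed with one-dimensional FFTs: transform the rows,
    then the columns, following the row-column procedure described in the text."""
    N = f.shape[0]
    rows = np.fft.fft(f, axis=1)       # 1D transform of each row
    F = np.fft.fft(rows, axis=0)       # 1D transform of each column
    return F / N                       # scale to match Eq. (7.6-6)

# The result agrees with NumPy's built-in 2D FFT (up to the same scale factor)
f = np.random.rand(8, 8)
assert np.allclose(dft2_rowcol(f), np.fft.fft2(f) / 8)
```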
The Fourier transform can be used in a number of ways by a vision system,
as will be shown in Chap. 8. For example, by treating the boundary of an object
as a one-dimensional array of points and computing their Fourier transform,
selected values of F(u) can be used as descriptors of boundary shape. The one-
dimensional Fourier transform has also been used as a powerful tool for detecting
object motion. Applications of the discrete two-dimensional Fourier transform in image processing are abundant but, as mentioned earlier, the usefulness of this approach in industrial machine vision is still limited by the extensive computational requirements needed to implement this transform. We point out before leaving this section, however, that the two-dimensional, continuous Fourier transform can be computed (at the speed of light) by optical means. This approach, which requires the use of precisely
aligned optical equipment, is used in industrial environments for tasks such as the
inspection of finished metal surfaces. Further treatment of this topic is outside the
scope of our present discussion, but the interested reader is referred to the book by
7.6.2 Smoothing
Smoothing operations are used for reducing noise and other spurious effects that
may be present in an image as a result of sampling, quantization, transmission, or
disturbances in the environment during image acquisition. In this section we con-
sider several fast smoothing methods that are suitable for implementation in the
hardware of an industrial vision system.

Neighborhood Averaging. Neighborhood averaging is a straightforward spatial-domain technique for image smoothing. Given an image f(x, y), the procedure is to generate a smoothed image g(x, y) whose intensity at every point (x, y) is obtained by averaging the intensity values of the pixels of f contained in a predefined neighborhood of (x, y). In other words, the smoothed image is obtained by using the relation

g(x, y) = (1/P) Σ_{(n, m) ∈ S} f(n, m)                     (7.6-8)
for all x and y in f(x, y). S is the set of coordinates of points in the neighbor-
hood of (x, y), including (x, y) itself, and P is the total number of points in the
neighborhood. If a 3 x 3 neighborhood is used, we note by comparing Eqs. (7.6-
8) and (7.6-3) that the former equation is a special case of the latter with w; = 1/9.
Of course, we are not limited to square neighborhoods in Eq. (7.6-8) but, as men-
tioned in Sec. 7.6.1, these are by far the most predominant in robot vision sys-
tems.
Example: Figure 7.23 illustrates the smoothing effects produced by neighborhood averaging. Figure 7.23a shows an image corrupted by noise, and Fig.
7.23b is the result of averaging every pixel with its 4-neighbors. Similarly,
Figures 7.23c through f show the results of using neighborhoods of sizes 3 x 3, 5 x 5, 7 x 7, and 11 x 11, respectively. One of the principal difficulties of neighborhood averaging is that it blurs edges and other sharp details. This blurring can often be reduced
significantly by the use of so-called median filters, in which we replace the inten-
sity of each pixel by the median of the intensities in a predefined neighborhood of
that pixel. Recall that the median M of a set of values is such that half the values in the set are less than M and half the values are greater than M. In order to perform
median filtering in a neighborhood of a pixel, we first sort the values of the pixel
and its neighbors, determine the median, and assign this value to the pixel. For
example, suppose that a 3 x 3 neighborhood has values (10, 20, 20, 20, 15, 20, 20, 25, 100). These
values are sorted as (10, 15, 20, 20, 20, 20, 20, 25, 100), which results in a
median of 20. A little thought will reveal that the principal function of median
filtering is to force points with very distinct intensities to be more like their neigh-
bors, thus actually eliminating intensity spikes that appear isolated in the area of
the filter mask.
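A direct (unoptimized) median filter makes the mechanism concrete. A minimal sketch assuming NumPy; border handling is simplified here by leaving edge pixels unchanged, which is a choice made for this sketch only.

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each interior pixel by the median of the size x size neighborhood
    centered on it (border pixels are left unchanged in this simple sketch)."""
    k = size // 2
    out = image.copy()
    rows, cols = image.shape
    for x in range(k, rows - k):
        for y in range(k, cols - k):
            out[x, y] = np.median(image[x-k:x+k+1, y-k:y+k+1])
    return out

# Neighborhood from the text: the median of (10, 20, 20, 20, 15, 20, 20, 25, 100) is 20
patch = np.array([[10, 20, 20],
                  [20, 15, 20],
                  [20, 25, 100]])
assert median_filter(patch)[1, 1] == 20
```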
Example: Figure 7.24a shows an original image, and Fig. 7.24b shows the same image corrupted by impulse noise. The result of 5 x 5 neighborhood averaging is
Figure 7.23 (a) Noisy image. (b) Result of averaging each pixel along with its 4-neighbors.
(c) through (f) are the results of using neighborhood sizes of 3 x 3, 5 x 5, 7 x 7, and
11 x 11, respectively.
Figure 7.24 (a) Original image. (b) Image corrupted by impulse noise. (c) Result of 5 x 5
neighborhood averaging. (d) Result of 5 x 5 median filtering. (Courtesy of Martin Connor,
Texas Instruments, Inc., Lewisville, Texas.)
shown in Fig. 7.24c and the result of a 5 x 5 median filter is shown in Fig.
(Z,
CD' ,r'
C].
9'0
needs no explanation. The three bright dots remaining in Fig. 7.24d resulted
...
''C3
from a large concentration of noise at those points, thus biasing the median
'-r
calculation. Two or more passes with a median filter would eliminate those
".s
points.
Image Averaging. Consider a noisy image g(x, y) which is formed by the addition of noise n(x, y) to an uncorrupted image f(x, y); that is, g(x, y) = f(x, y) + n(x, y), where it is assumed that the noise is uncorrelated and has zero average value. The objective of the following procedure is to obtain a smoothed result by adding a given set of noisy images, g_i(x, y), i = 1, 2, ..., K.
If the noise satisfies the constraints just stated, it is a simple matter to show (Papoulis [1965]) that if an image ḡ(x, y) is formed by averaging K different noisy images,

    ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

then it follows that

    E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) σ²_n(x, y)        (7.6-12)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_n(x, y) are the variances of ḡ and n, all at coordinates (x, y). The standard deviation at any point in the average image is given by

    σ_ḡ(x, y) = (1/√K) σ_n(x, y)        (7.6-13)
Equations (7.6-12) and (7.6-13) indicate that, as K increases, the variability of the
pixel values decreases. Since E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) will approach the uncorrupted image f(x, y) as the number of noisy images used in the
averaging process increases.
It is important to note that the technique just discussed implicitly assumes that all noisy images are registered spatially, with only the pixel intensities varying. In terms of robotic vision, this means that all objects in the work space must be at rest with respect to the camera during the averaging process. Many vision systems have the capability of performing an entire image addition in one frame time interval (i.e., one-thirtieth of a second). Thus, the addition of, say, 16 images will take on the order of one-half of a second.
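A minimal sketch of the averaging procedure, with synthetic zero-mean gaussian noise standing in for the sensor noise; the frame size, noise level, and K = 16 are arbitrary values chosen only to illustrate the 1/√K reduction predicted by Eq. (7.6-13).

    import numpy as np

    def average_noisy_images(frames):
        """Average a set of spatially registered noisy images g_i(x, y).
        By Eq. (7.6-12) the variance of the result decreases as 1/K."""
        stack = np.stack([np.asarray(g, dtype=float) for g in frames])
        return stack.mean(axis=0)

    rng = np.random.default_rng(0)
    f = np.zeros((64, 64))                                     # "true" image
    K = 16
    frames = [f + rng.normal(0.0, 10.0, f.shape) for _ in range(K)]
    g_bar = average_noisy_images(frames)
    print(g_bar.std())                                         # close to 10 / sqrt(16) = 2.5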
Smoothing Binary Images. Binary images result from using backlighting or struc-
tured lighting, as discussed in Sec. 7.3, or from processes such as edge detection
or thresholding, as discussed in Secs. 7.6.4 and 7.6.5. We will use the convention
of labeling dark points with a 1 and light points with a 0. Thus, since binary images are two-valued, noise in this case produces effects such as irregular boundaries, small holes, missing corners, and isolated points.

Figure 7.25 (a) Sample noisy image. (b) through (f) are the results of averaging 4, 8, 16, 32, and 64 such images.
The basic idea underlying the methods discussed in this section is to specify a
boolean function evaluated on a neighborhood centered at a pixel p, and to assign
to p a 1 or 0, depending on the spatial arrangement and binary values of its neigh-
bors. Due to limitations in available processing time for industrial vision tasks,
the analysis is typically limited to the 8-neighbors of p, which leads us to the
3 x 3 mask shown in Fig. 7.26. The smoothing approach (1) fills in small (one
pixel) holes in otherwise dark areas, (2) fills in small notches in straightedge seg-
ments, (3) eliminates isolated 1's, (4) eliminates small bumps along straightedge
segments, and (5) replaces missing corner points.
With reference to Fig. 7.26, the first two smoothing processes just mentioned are accomplished by using the Boolean expression

    B1 = p + b·g·(d + e) + d·e·(b + g)        (7.6-14)

where "·" and "+" denote the logical AND and OR, respectively. Following the convention established above, a dark pixel contained in the mask area is assigned a logical 1 and a light pixel a logical 0. Then, if B1 = 1, we assign a 1 to p; otherwise this pixel is assigned a 0. Equation (7.6-14) is applied to all pixels simultaneously, in the sense that the next value of each pixel location is determined before any of the other pixels have been changed.
Steps 3 and 4 in the smoothing process are similarly accomplished by evaluating the Boolean expression

    B2 = p·[(a + b + d)·(e + g + h) + (b + c + e)·(d + f + g)]        (7.6-15)
    a   b   c
    d   p   e
    f   g   h

Figure 7.26 Neighbors of p used for smoothing binary images. Dark pixels are denoted by 1 and light pixels by 0.
Missing top right corner points are filled in by means of the expression

    B3 = p‾·(d·f·g)·(a + b + c + e + h)‾ + p        (7.6-16)

where the overbar denotes the logical complement. Similarly, lower right, top left, and lower left missing corner points are filled in by using the expressions

    B4 = p‾·(a·b·d)·(c + e + f + g + h)‾ + p        (7.6-17)
    B5 = p‾·(e·g·h)·(a + b + c + d + f)‾ + p        (7.6-18)
    B6 = p‾·(b·c·e)·(a + d + f + g + h)‾ + p        (7.6-19)
Example: The concepts just discussed are illustrated in Fig. 7.27. Figure 7.27a shows a noisy binary image, and Fig. 7.27b shows the result of applying B1. Note that the notches along the boundary and the hole in the dark area were filled in. Figure 7.27c shows the result of applying B2 to the image in Fig. 7.27b. As expected, the bumps along the boundary of the dark area and all isolated points were removed (the image was implicitly extended with 0's for points on the image border). Finally, Fig. 7.27d shows the result of applying B3 through B6 to the image in Fig. 7.27c.
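As an illustration of how one simultaneous pass of such a Boolean expression can be carried out, the following NumPy sketch evaluates B1 of Eq. (7.6-14) over an entire binary image; the use of shifted array views and the zero padding of the border are implementation choices patterned after the example above, not prescriptions from the text.

    import numpy as np

    def smooth_binary_b1(img):
        """One simultaneous pass of B1 = p + b*g*(d + e) + d*e*(b + g) over a
        binary image (1 = dark, 0 = light).  The border is padded with 0's and
        every output pixel is computed from the same input image."""
        p = np.asarray(img, dtype=bool)
        z = np.pad(p, 1, constant_values=False)
        b = z[:-2, 1:-1]   # neighbor above p
        g = z[2:, 1:-1]    # neighbor below p
        d = z[1:-1, :-2]   # neighbor to the left of p
        e = z[1:-1, 2:]    # neighbor to the right of p
        b1 = p | (b & g & (d | e)) | (d & e & (b | g))
        return b1.astype(np.uint8)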
7.6.3 Enhancement
One of the principal difficulties in many low-level vision tasks is to be able to
automatically adapt to changes in illumination. The capability to compensate for
effects such as shadows and "hot-spot" reflectances quite often plays a central role
in determining the success of subsequent processing algorithms. In this subsection
we consider several enhancement techniques which address these and similar prob-
lems. The reader is reminded that enhancement is a major area in digital image
processing and scene analysis, and that our discussion of this topic is limited to
En"
sample techniques that are suitable for robot vision systems. In this context, "suit-
able" implies having fast computational characteristics and modest hardware
requirements.
Histogram Equalization. Let the variable r represent the intensity of pixels in the image to be enhanced. It is assumed initially that r is a normalized, continuous variable lying in the range 0 ≤ r ≤ 1. The discrete case is considered later in this section.
For any r in the interval [0, 1], attention will be focused on transformations of
the form
s = T(r) (7.6-20)
Figure 7.27 (a) Original image. (b) Result of applying BI. (c) Result of applying B2. (d)
Final result after application of B3 through B6.
which produce an intensity value s for every pixel value r in the input image. It
is assumed that the transformation function T satisfies the conditions:
1. T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1.
2. 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
Condition 1 preserves the order from black to white in the intensity scale, and con-
dition 2 guarantees a mapping that is consistent with the allowed 0 to 1 range of
pixel values. A transformation function satisfying these conditions is illustrated in
Fig. 7.28.
The inverse transformation function from s back to r is denoted by
r = T-1(s) (7.6-21)
where it is assumed that T-1(s) satisfies the two conditions given above.
The intensity variables r and s are random quantities in the interval [0, 1] and, as such, can be characterized by their probability density functions (PDFs), p_r(r) and p_s(s). An image whose intensity PDF is like the one shown in Fig. 7.29a would have fairly dark characteristics, since the majority of pixel values would be concentrated on the dark end of the intensity scale. On the other hand, an image whose pixels have an intensity distribution like the one shown in Fig. 7.29b would have predominantly light characteristics.

Figure 7.29 (a) Intensity PDF of a "dark" image and (b) a "light" image.
It follows from elementary probability theory that if p_r(r) and T(r) are known, and T⁻¹(s) satisfies condition 1, then the PDF of the transformed intensities is given by

    p_s(s) = [ p_r(r) · (1 / p_r(r)) ]_{r = T⁻¹(s)} = 1        0 ≤ s ≤ 1        (7.6-25)
It is also noted that using the transformation function given in Eq. (7.6-23) yields transformed intensities that always have a flat PDF, independent of the shape of p_r(r), a property that is ideally suited for automatic enhancement. The net effect of this transformation is to spread the intensities over the full available range and, as shown below, this process can have a rather dramatic effect on the appearance of an image.
In order to be useful for digital processing, the concepts developed above must
be formulated in discrete form. For intensities that assume discrete values we deal
with probabilities given by the relation
    p_r(r_k) = n_k / n        0 ≤ r_k ≤ 1,  k = 0, 1, 2, ..., L − 1        (7.6-26)
where L is the number of discrete intensity levels, pr (rk) is an estimate of the pro-
bability of intensity rk, nk is the number of times this intensity appears in the
image, and n is the total number of pixels in the image. A plot of pr(rk) versus
rk is usually called a histogram, and the technique used for obtaining a uniform
histogram is known as histogram equalization or histogram linearization.
The discrete form of Eq. (7.6-23) is given by
    s_k = T(r_k) = Σ_{j=0}^{k} n_j / n = Σ_{j=0}^{k} p_r(r_j)        (7.6-27)
C^'"
22
in order to obtain the mapped value Sk corresponding to rk, we simply sum the his-
togram components from 0 to rk.
The inverse discrete transformation is given by

    r_k = T⁻¹(s_k)        0 ≤ s_k ≤ 1        (7.6-28)

where both T(r_k) and T⁻¹(s_k) are assumed to satisfy conditions 1 and 2 stated earlier. Figure 7.30 shows the result of equalizing the histogram of an image in this manner; the improvement over the original image is evident. It is noted that the equalized histogram is not perfectly flat, a condition generally encountered when applying to discrete values a method derived for continuous quantities.

Figure 7.30 (a) Original image and (b) its histogram. (c) Histogram-equalized image and (d) its histogram. (From Woods and Gonzalez [1981], © IEEE.)
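A compact sketch of discrete histogram equalization as given by Eqs. (7.6-26) and (7.6-27); the assumption that intensities are integers in [0, L − 1] and the rescaling of s_k back to that range are implementation choices, not requirements of the text.

    import numpy as np

    def histogram_equalize(f, levels=256):
        """Discrete histogram equalization (Eqs. 7.6-26 and 7.6-27).
        f is assumed to contain integer intensities in [0, levels - 1]."""
        f = np.asarray(f)
        n = f.size
        hist = np.bincount(f.ravel(), minlength=levels)   # n_k
        p_r = hist / n                                     # Eq. (7.6-26)
        s = np.cumsum(p_r)                                 # Eq. (7.6-27), s_k in [0, 1]
        mapping = np.round(s * (levels - 1)).astype(f.dtype)
        return mapping[f]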
Histogram Specification. Histogram equalization is limited in the sense that its only function is histogram linearization, a process that is not applicable when a particular histogram shape, other than a uniform one, is desired for the enhanced image.
Starting again with continuous quantities, let p_r(r) and p_z(z) be the original and desired intensity PDFs. Suppose that a given image is first histogram-equalized by using Eq. (7.6-23); that is,

    s = T(r) = ∫_0^r p_r(w) dw        (7.6-29)

If the desired image were available, its levels could also be equalized by using the transformation function

    v = G(z) = ∫_0^z p_z(w) dw        (7.6-30)

The inverse process, z = G⁻¹(v), would then yield the desired levels back. This, of course, is a hypothetical formulation since the z levels are precisely what we are trying to obtain. It is noted, however, that p_s(s) and p_v(v) would be identical uniform densities, since the use of Eqs. (7.6-29) and (7.6-30) guarantees a uniform density regardless of the shape of the PDF inside the integral. Thus, if instead of using v in the inverse process we use the uniform levels s obtained from the original image, the resulting levels z = G⁻¹(s) would have the desired PDF, p_z(z).
Assuming that G⁻¹(s) is single-valued, the procedure can be summarized as follows: (1) equalize the levels of the original image using Eq. (7.6-29); (2) specify the desired intensity PDF and obtain the transformation function G(z) using Eq. (7.6-30); and (3) apply the inverse function z = G⁻¹(s) to the levels obtained in step 1. Alternatively, the two transformations can be combined by requiring that T(r) be determined and combined with G⁻¹(s) into a single transformation that is applied directly to the input image. The real problem in using the two transformations or their combined representation for continuous variables lies in obtaining the inverse function analytically. In the discrete case this problem is circumvented by the fact that the number of distinct intensity levels is usually relatively small (e.g., 256) and it becomes feasible to calculate and store a mapping for each possible integer pixel value.
The discrete formulation of the foregoing procedure parallels the development in Eqs. (7.6-26) through (7.6-28): the specified PDF is summed to form the discrete transformation G(z_k) = Σ_{j=0}^{k} p_z(z_j), and the mapping z_k = G⁻¹(s_k) is applied to the equalized levels, where p_r(r_j) is computed from the input image and p_z(z_j) is specified.
Figure 7.31 (a) Input image. (b) Result of histogram equalization. (c) A specified histo-
gram. (d) Result of enhancement by histogram specification. (From Woods and Gonzalez
[1981], © IEEE.)
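The discrete histogram-specification procedure just outlined can be sketched as follows; mapping each equalized level s to the level z whose cumulative sum G(z) is nearest is one simple way of realizing z = G⁻¹(s), and the names and argument conventions used here are illustrative assumptions.

    import numpy as np

    def histogram_specify(f, p_z, levels=256):
        """Map image f so that its histogram approximates the specified PDF p_z.
        p_z must be a length-`levels` array of nonnegative values summing to 1."""
        f = np.asarray(f)
        p_r = np.bincount(f.ravel(), minlength=levels) / f.size
        T = np.cumsum(p_r)                              # s = T(r), discrete Eq. (7.6-29)
        G = np.cumsum(np.asarray(p_z, dtype=float))     # v = G(z), discrete Eq. (7.6-30)
        # For each input level r, find the level z whose G(z) is closest to T(r).
        z_of_r = np.array([int(np.argmin(np.abs(G - s))) for s in T])
        return z_of_r[f].astype(f.dtype)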
Local Enhancement. The histogram equalization and specification methods discussed above are global, in the sense that the transformation function is based on the intensity distribution of an entire image. A useful alternative is to compute, for each pixel, the histogram of an n × m neighborhood centered at that pixel, obtain an equalization (or specification) transformation function from this local histogram, and finally use it to map the intensity of the pixel centered in the neighborhood. The center of the n × m region is then moved to an adjacent pixel location and the procedure is repeated. Since only one new row or column of the neighborhood changes during a pixel-to-pixel translation of the region, it is possible to update the histogram obtained in the previous location with the new data introduced at each motion step. This approach has obvious advantages over repeatedly computing the histogram over all n × m pixels every time the region is moved one pixel location.

Example: An illustration of how an image is affected as the center of the neighborhood is moved from pixel to pixel is shown in Fig. 7.32. Part (a) of this
figure shows an image with constant background and five dark square areas.
The image is slightly blurred as a result of smoothing with a 7 x 7 mask to
reduce noise (see Sec. 7.6.2). Figure 7.32b shows the result of histogram
equalization. The most striking feature in this image is the enhancement of noise, a problem that commonly occurs when using this technique on noisy images, even if they have been smoothed prior to equalization. Figure 7.32c shows the result of local histogram equalization using a neighborhood of size 7 × 7. Note that the dark areas have been enhanced to reveal an inner structure that was not visible in either of the previous two images. Noise was also enhanced, but its texture is much finer due to the local nature of the enhancement approach. This example clearly demonstrates the necessity for using local enhancement when the details of interest are too small to influence significantly the overall characteristics of a global technique.

Figure 7.32 (a) Original image. (b) Result of global histogram equalization. (c) Result of local histogram equalization with a 7 × 7 neighborhood.
Instead of using histograms, one could base local enhancement on other properties of the pixel intensities in a neighborhood. The intensity mean and variance (or standard deviation) are two such properties which are frequently used because of their relevance to the appearance of an image. That is, the mean is a measure of average brightness and the variance is a measure of contrast. A typical local transformation based on these concepts maps the intensity of an input image f(x, y) into a new image g(x, y) by performing the following transformation at each pixel location (x, y):
    g(x, y) = A(x, y) [f(x, y) − m(x, y)] + m(x, y)        (7.6-35)

where

    A(x, y) = k M / σ(x, y)        0 < k < 1        (7.6-36)

In this formulation, m(x, y) and σ(x, y) are the intensity mean and standard deviation computed in a neighborhood centered at (x, y), M is the global mean of f(x, y), and k is a constant in the range indicated above.
In practice, it is often desirable to add back a fraction of the local mean and to restrict the variations of A(x, y) between two limits [A_min, A_max] in order to balance large excursions of intensity in isolated regions.

Figure 7.33 Images before and after local enhancement. (From Narendra and Fitch [1981], © IEEE.)
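A direct, unoptimized sketch of Eq. (7.6-35); the neighborhood size, the constant k in A(x, y) = kM/σ(x, y), and the clipping limits [A_min, A_max] are illustrative values, and the small constant guarding against σ = 0 is purely an implementation detail.

    import numpy as np

    def local_enhance(f, size=7, k=0.4, a_min=0.5, a_max=5.0):
        """Local enhancement following Eq. (7.6-35), with A clipped to [a_min, a_max]."""
        f = np.asarray(f, dtype=float)
        M = f.mean()                                   # global mean of f
        pad = size // 2
        rows, cols = f.shape
        g = np.empty_like(f)
        for x in range(rows):
            for y in range(cols):
                w = f[max(x - pad, 0):x + pad + 1, max(y - pad, 0):y + pad + 1]
                m, sigma = w.mean(), w.std()
                A = np.clip(k * M / max(sigma, 1e-6), a_min, a_max)
                g[x, y] = A * (f[x, y] - m) + m        # Eq. (7.6-35)
        return g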
7.6.4 Edge Detection
Basic Formulation. Basically, the idea underlying most edge detection techniques is the computation of a local derivative operator. This concept can be easily illustrated with the aid of Fig. 7.34. Part (a) of this figure shows an image of a simple light object on a dark background, the intensity profile along a horizontal scan line of the image, and the first and second derivatives of the profile. It is noted from the profile that an edge (transition from dark to light) is modeled as a ramp, rather than as an abrupt change of intensity. This model is representative of the fact that edges in digitized images are generally slightly blurred as a result of sampling. The first derivative of the profile is zero in all regions of constant intensity, and assumes a constant value during an intensity transition. The second derivative, on the other hand, is zero in all locations, except at the onset and termination of an intensity transition. Based on these remarks and the concepts illustrated in Fig. 7.34, it is evident that the magnitude of the first derivative can be used to detect the presence of an edge, while the sign of the second derivative can be used to determine whether an edge pixel lies on the dark (background) or light (object) side of an edge. The sign of the second derivative in Fig. 7.34a, for example, is positive for pixels lying on the dark side of both the leading and trailing edges of the object, while the sign is negative for pixels on the light side of these edges. Similar comments apply to the case of a dark object on a light background, as shown in Fig. 7.34b. It is of interest to note that identically the same interpretation regarding the sign of the second derivative is true for this case.
Although the discussion thus far has been limited to a one-dimensional horizontal profile, a similar argument applies to an edge of any orientation in an image. We simply define a profile perpendicular to the edge direction at any given point and interpret the results as in the preceding discussion. As will be shown below, the first derivative at any point in an image can be obtained by using the magnitude of the gradient at that point, while the second derivative is given by the Laplacian.
Figure 7.34 Elements of edge detection by derivative operators (image, intensity profile of a horizontal line, first derivative, second derivative). (a) Light object on a dark background. (b) Dark object on a light background.
The gradient of an image f(x, y) at location (x, y) is defined as the two-dimensional vector

    G = [Gx  Gy]^T = [∂f/∂x  ∂f/∂y]^T

It is well known from vector analysis that the vector G points in the direction of maximum rate of change of f at location (x, y). For edge detection, however, we are interested in the magnitude of this vector, generally referred to as the gradient and denoted by G[f(x, y)], where

    G[f(x, y)] = [Gx² + Gy²]^(1/2)        (7.6-38)

A common practice is to approximate the gradient by the sum of the absolute values of its components,

    G[f(x, y)] = |Gx| + |Gy|        (7.6-39)

which is considerably easier to implement. Note from these equations that computation of the gradient requires that the partial derivatives ∂f/∂x and ∂f/∂y be obtained at every pixel location. There are several ways
for doing this in a digital image. One approach is to use first-order differences between adjacent pixels; that is,

    Gx = ∂f/∂x = f(x + 1, y) − f(x, y)        (7.6-40)
and
    Gy = ∂f/∂y = f(x, y + 1) − f(x, y)        (7.6-41)

A slightly more general approach is to compute the derivatives over a 3 × 3 region:

    Gx = ∂f/∂x = [f(x+1, y−1) + 2f(x+1, y) + f(x+1, y+1)] − [f(x−1, y−1) + 2f(x−1, y) + f(x−1, y+1)]
       = (g + 2h + i) − (a + 2b + c)        (7.6-42)
and
    Gy = ∂f/∂y = [f(x−1, y+1) + 2f(x, y+1) + f(x+1, y+1)] − [f(x−1, y−1) + 2f(x, y−1) + f(x+1, y−1)]
       = (c + 2f + i) − (a + 2d + g)        (7.6-43)
where we have used the letters a through i to represent the neighbors of point (x, y). The 3 × 3 neighborhood of (x, y) using this simplified notation is shown in Fig. 7.35a. It is noted that the pixels closest to (x, y) are weighted by 2 in these particular definitions of the digital derivative. Computing the gradient over a 3 × 3 area rather than using Eqs. (7.6-40) and (7.6-41) has the advantage of increased averaging, thus tending to make the gradient less sensitive to noise. It is possible to define the gradient over larger neighborhoods (Kirsch [1971]), but 3 × 3 operators are by far the most popular in industrial computer vision because of their computational speed and modest hardware requirements.
It follows from the discussion in Sec. 7.6.1 that Gx, as given in Eq. (7.6-42), can be computed by using the mask shown in Fig. 7.35b. Similarly, Gy may be obtained by using the mask shown in Fig. 7.35c. These two masks are commonly referred to as the Sobel operators. The responses of these two masks at any point (x, y) are combined using Eqs. (7.6-38) or (7.6-39) to obtain an approximation to the gradient at that point. Moving these masks throughout the image f(x, y) yields the gradient at all points in the image.
There are numerous ways by which one can generate an output image, g(x, y), based on gradient computations. The simplest approach is to let the value of g at coordinate (x, y) be equal to the gradient of the input image f at that point; that is,

    g(x, y) = G[f(x, y)]        (7.6-44)

An example of using this approach is shown in Fig. 7.36.
    a   b   c
    d   e   f
    g   h   i
       (a)

    -1  -2  -1        -1   0   1
     0   0   0        -2   0   2
     1   2   1        -1   0   1
       (b)                (c)

Figure 7.35 (a) 3 × 3 neighborhood of point (x, y). (b) Mask used to compute Gx. (c) Mask used to compute Gy.
Figure 7.36 (a) Input image. (b) Result of using Eq. (7.6-44).
An alternative approach of considerable practical interest is to create a binary image by using the relation

    g(x, y) = 1   if G[f(x, y)] > T
            = 0   otherwise                (7.6-45)
where T is a nonnegative threshold. In this case, only edge pixels whose gradients exceed T are considered important. Thus, the use of Eq. (7.6-45) may be viewed as a procedure which extracts only those pixels that are characterized by significant (as determined by T) transitions in intensity. Further analysis of the resulting pixels is usually required to delete isolated points and to link pixels along proper boundaries which ultimately determine the objects segmented out of an image. The use of Eq. (7.6-45) in this context is discussed and illustrated in Sec. 8.2.1.
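The Sobel masks and the thresholded gradient of Eq. (7.6-45) can be sketched with shifted array views as follows; leaving the one-pixel border at zero and the function names are implementation choices, not details from the text.

    import numpy as np

    def sobel_gradient(img):
        """Gradient magnitude G[f] = |Gx| + |Gy| (Eq. 7.6-39) using the Sobel
        masks of Fig. 7.35.  Border pixels are left at zero."""
        img = np.asarray(img, dtype=float)
        Gx = np.zeros_like(img)
        Gy = np.zeros_like(img)
        # Shifted views of the 3 x 3 neighborhood, labeled as in Fig. 7.35a
        # (a b c / d e f / g h i, with e the pixel at (x, y)).
        a, b, c = img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:]
        d, f    = img[1:-1, :-2],                img[1:-1, 2:]
        g, h, i = img[2:, :-2],  img[2:, 1:-1],  img[2:, 2:]
        Gx[1:-1, 1:-1] = (g + 2 * h + i) - (a + 2 * b + c)   # Eq. (7.6-42)
        Gy[1:-1, 1:-1] = (c + 2 * f + i) - (a + 2 * d + g)   # Eq. (7.6-43)
        return np.abs(Gx) + np.abs(Gy)

    def gradient_edge_map(img, T):
        """Binary edge image of Eq. (7.6-45): 1 where the gradient exceeds T."""
        return (sobel_gradient(img) > T).astype(np.uint8)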
Laplacian. The Laplacian is a second-order derivative operator. For digital images it may be approximated in a 3 × 3 neighborhood by

    L[f(x, y)] = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4 f(x, y)        (7.6-47)

This digital formulation of the Laplacian is zero in constant areas and on the ramp section of an edge, as expected of a second-order derivative. The implementation of Eq. (7.6-47) can be based on the mask shown in Fig. 7.37.

    0   1   0
    1  -4   1
    0   1   0

Figure 7.37 Mask used to compute the Laplacian.

Being a second-derivative operator, the Laplacian is quite sensitive to noise and is seldom used by itself for edge detection; its principal use, as discussed earlier in this section, is to determine whether a given pixel lies on the dark or light side of an edge.
7.6.5 Thresholding
Image thresholding is one of the principal techniques used by industrial vision sys-
tems for object detection, especially in applications requiring high data
L"'
throughputs. In this section we are concerned with aspects of thresholding that fall
in the category of low-level processing. More sophisticated uses of thresholding
techniques are discussed Chap. 8.
Suppose that the intensity histogram shown in Fig. 7.38a corresponds to an image, f(x, y), composed of light objects on a dark background, such that object and background pixels have intensities grouped into two dominant modes. One obvious way to extract the objects from the background is to select a threshold T which separates the intensity modes. Then, any point (x, y) for which f(x, y) > T is called an object point; otherwise, the point is called a background point. A slightly more general case of this approach is shown in Fig. 7.38b. In this case the image histogram is characterized by three dominant modes (for example, two types of light objects on a dark background). Here, we can use the same basic approach and classify a point (x, y) as belonging to one object class if T1 < f(x, y) ≤ T2, to the other object class if f(x, y) > T2, and to the background if f(x, y) ≤ T1. This type of multilevel thresholding is generally less reliable than its single-threshold counterpart because of the difficulty in establishing multiple thresholds that effectively isolate regions of interest, especially when the number of corresponding histogram modes is large.

Figure 7.38 Intensity histograms that can be partitioned by (a) a single threshold and (b) multiple thresholds.

Based on the foregoing discussion, thresholding may be viewed as an operation that involves tests against a function T of the form

    T = T[x, y, p(x, y), f(x, y)]        (7.6-48)
where f(x, y) is the intensity of point (x, y), and p(x, y) denotes some local pro-
perty of this point, for example, the average intensity of a neighborhood centered
at (x, y). We create a thresholded image g(x, y) by defining
    g(x, y) = 1   if f(x, y) > T
            = 0   if f(x, y) ≤ T                (7.6-49)
Thus, in examining g(x, y), we find that pixels labeled 1 (or any other convenient intensity level) correspond to objects, while pixels labeled 0 correspond to the background.
When T depends only on f(x, y), the threshold is called global (Fig. 7.38a shows an example of such a threshold). If T depends on both f(x, y) and p(x, y), then the threshold is called local. If, in addition, T depends on the spatial coordinates x and y, it is called a dynamic threshold.

Figure 7.39 (a) Original image. (b) Histogram of intensities in the range 0 to 255. (c) Image obtained by using Eq. (7.6-49) with a global threshold T = 90.

In the remainder of this section we consider, in the context of low-level vision, those thresholding techniques which are based on a single, global value of T. More general formulations are discussed in Sec. 8.2.2.
7.7 CONCLUDING REMARKS

The material presented in this chapter spans a broad range of processing functions
normally associated with low-level vision. Although, as indicated in Sec. 7.1,
vision is a three-dimensional problem, most machine vision algorithms, especially
those used for low-level vision, are based on images of a three-dimensional scene.
The range sensing methods discussed in Sec. 7.2, the structured-lighting
approaches in Sec. 7.3, and the material in Sec. 7.4 are important techniques for
deriving depth from image information.
Our discussion of low-level vision and other relevant topics, such as the nature
of imaging devices, has been at an introductory level, and with a very directed
focus toward robot vision. It is important to keep in mind that many of the areas
we have discussed have a range of application much broader than this. A good
example is enhancement, which for years has been an important topic in digital
image processing. One of the salient features of industrial applications, however, is the ever-present (and often contradictory) requirement of low cost and high computational speed. The selection of topics included in this chapter has been influenced by these requirements and also by the value of these topics as fundamental material which would serve as a foundation for further study in this field.
REFERENCES
Further reading on image acquisition devices may be found in Fink [1957], Herrick [1976], and Fairchild [1983]. The discussion in Sec. 7.3 is based on Mundy [1977], Holland et al. [1979], and Myers [1980]. The transformations discussed in Secs. 7.4.1 and 7.4.2 can be found in most books on computer graphics. A discussion of camera modeling and calibration can be found in Duda and Hart [1973] and Yakimovsky and Cunningham [1979]. The survey article by Barnard and Fischler [1982] contains a comprehensive review of work in this area.
The material in Sec. 7.5 is based on Toriwaki et al. [1979] and Rosenfeld and Kak [1982]. The discussion in Sec. 7.6.1 is adapted from Gonzalez and Wintz [1977]. For details on implementing median filters see Huang et al. [1979], Wolfe and Mannos [1979], and Chaudhuri [1983]. The concept of smoothing by image averaging is discussed by Kohler and Howell [1963]. The smoothing technique for binary images discussed in Sec. 7.6.2 is based on an early paper by Unger [1959].
The material in Sec. 7.6.3 is based on Gonzalez and Fittes [1977] and Woods and Gonzalez [1981]. For further details on local enhancement see Ketcham [1976], Harris [1977], and Narendra and Fitch [1981]. Early work on edge detection can be found in Roberts [1965]. A survey of techniques used in this area a decade later is given by Davis [1975]. More recent work in this field emphasizes computational speed, as exemplified by Lee [1983] and Chaudhuri [1983]. For an introduction to edge detection see Gonzalez and Wintz [1977]. The book by Rosenfeld and Kak [1982] contains a detailed description of threshold selection techniques.
PROBLEMS
7.1 How many bits would it take to store a 512 × 512 image in which each pixel can have 256 possible intensity values?
7.2 Propose a technique that uses a single light sheet to determine the diameter of cylindrical objects. Assume a linear array camera with a resolution of N pixels and also that the distance between the camera and the center of the cylinders is fixed.
7.3 (a) Discuss the accuracy of your solution to Prob. 7.2 in terms of camera resolution (N points on a line) and maximum expected cylinder diameter, D_max. (b) What is the maximum …
7.4 Determine if the world point with coordinates (1/2, 1/2, 1/2) is on the optical axis of a camera located at (0, 0, …), panned 135° and tilted 135°. Assume a 50-mm lens and let r1 = r2 = r3 = 0.
7.5 Start with Eq. (7.4-41) and derive Eqs. (7.4-42) and (7.4-43).
7.6 Show that the D4 distance between two points p and q is equal to the shortest 4-path
between these points. Is this path unique?
7.7 Show that a Fourier transform algorithm that computes F(u) can be used without modification to compute the inverse transform. (Hint: The answer lies in using complex conjugates.)
7.8 Verify that substitution of Eq. (7.6-4) into Eq. (7.6-5) yields an identity.
7.9 Give the boolean expression equivalent to Eq. (7.6-16) for a 5 x 5 window.
7.10 Develop a procedure for computing the median in an n x n neighborhood.
7.11 Explain why the discrete histogram equalization technique will not, in general, yield a
flat histogram.
7.12 Propose a method for updating the local histogram for use in the enhancement tech-
nique discussed in Sec. 7.6.3.
7.13 The results obtained by a single pass through an image of some two-dimensional masks can also be achieved by two passes of a one-dimensional mask. For example, the result of using a 3 × 3 smoothing mask with coefficients 1/9 (see Sec. 7.6.2) can also be obtained by first passing through an image the mask [1 1 1], followed by a pass of its vertical counterpart [1 1 1]^T; the final result is then scaled by 1/9. Show that the Sobel masks (Fig. 7.35) can be implemented by one pass of a differencing mask of the form [−1 0 1] (or its vertical counterpart) followed by a smoothing mask of the form [1 2 1] (or its vertical counterpart).
7.14 Show that the digital Laplacian given in Eq. (7.6-47) is proportional (by the factor −1/4) to subtracting from f(x, y) an average of the 4-neighbors of (x, y). [The process of subtracting a blurred version of f(x, y) from itself is called unsharp masking.]
CHAPTER
EIGHT
HIGHER-LEVEL VISION
8.1 INTRODUCTION
1"1
i_+
For the purpose of categorizing the various techniques and approaches used in machine vision, we introduced in Sec. 7.1 three broad subdivisions: low-, medium-, and high-level vision. Low-level vision deals with basic sensing and preprocessing, topics which were covered in some detail in Chap. 7. We may view the material in that chapter as being instrumental in providing image and other relevant information that is in a form suitable for subsequent intelligent visual processing.
Although the concept of "intelligence" is somewhat vague, particularly when applied to a machine, several types of behavior are generally associated with it: (1) the ability to extract pertinent information from a background of irrelevant details, (2) the capability to learn from examples and to generalize this knowledge so that it will apply in new and different circumstances, (3) the ability to infer facts from incomplete information, and (4) the capability to generate self-motivated goals, and to formulate plans for meeting these goals. While vision systems exhibiting some of these characteristics can be built for limited environments, we do not yet know how to endow a machine with a range and depth of adaptability approaching that of human vision. Although research continues to produce new and promising concepts, the state of the art in machine vision is for the most part based on analytical formulations tailored to meet specific tasks. The time
frame in which we may have machines that approach human visual and other sensory capabilities is open to speculation. It is of interest to note, however, that imitating nature is not the only solution to this problem. The reader is undoubtedly familiar with early experimental airplanes equipped with flapping wings and other birdlike features. Given that the objective is to fly between two points, our present solution is quite different from the examples provided by nature; in terms of speed and achievable altitude, this solution exceeds the capabilities of these examples by a wide margin.
We conclude the chapter with a discussion of issues on the interpretation of visual information.
8.2 SEGMENTATION
Segmentation is the process that subdivides a sensed scene into its constituent parts
or objects. Segmentation is one of the most important elements of an automated
vision system because it is at this stage of processing that objects are extracted
from a scene for subsequent recognition and analysis. Segmentation algorithms are
generally based on one of two basic principles: discontinuity and similarity. The
principal approach in the first category is based on edge detection; the principal
approaches in the second category are based on thresholding and region growing.
These concepts are applicable to both static and dynamic (time-varying) scenes. In
the latter case, however, motion can often be used as a powerful cue to improve
the performance of segmentation algorithms.
8.2.1 Edge Linking and Boundary Detection

Local Analysis. One of the simplest approaches for linking edge points is to analyze the characteristics of pixels in a small neighborhood (e.g., 3 × 3 or 5 × 5)
about every point (x, y) in an image that has undergone an edge detection process. All points that are similar (as defined below) are linked, thus forming a boundary of pixels that share some common properties.
There are two principal properties used for establishing similarity of edge pixels in this kind of analysis: (1) the strength of the response of the gradient operator used to produce the edge pixel, and (2) the direction of the gradient. The first property is given by the value of G[f(x, y)], as defined in Eqs. (7.6-38) or (7.6-39). Thus, we say that an edge pixel with coordinates (x', y') in the predefined neighborhood of (x, y) is similar in magnitude to the pixel at (x, y) if

    |G[f(x, y)] − G[f(x', y')]| ≤ T        (8.2-1)

where T is a threshold.
The direction of the gradient may be established from the angle of the gradient vector; that is,

    θ = tan⁻¹ (Gy / Gx)        (8.2-2)

where θ is the angle (measured with respect to the x axis) along which the rate of change has the greatest magnitude, as indicated in Sec. 7.6.4. Then, we say that an edge pixel at (x', y') in the predefined neighborhood of (x, y) has an angle similar to the pixel at (x, y) if

    |θ − θ'| < A        (8.2-3)

where A is an angle threshold. It is noted that the direction of the edge at (x, y)
is, in reality, perpendicular to the direction of the gradient vector at that point.
However, for the purpose of comparing directions, Eq. (8.2-3) yields equivalent
results.
Based on the foregoing concepts, we link a point in the predefined neighbor-
hood of (x, y) to the pixel at (x, y) if both the magnitude and direction criteria
are satisfied. This process is repeated for every location in the image, keeping a
record of linked points as the center of the neighborhood is moved from pixel to
pixel. A simple bookkeeping procedure is to assign a different gray level to each
set of linked edge pixels.
Example: The concepts just discussed are illustrated by Fig. 8.1a, which shows an image of the rear of a vehicle. The objective is to find rectangles whose sizes make them suitable license plate candidates. The formation of these rectangles can be accomplished by detecting strong horizontal and vertical edges. Figure 8.1b and c shows the horizontal and vertical components of the Sobel operators discussed in Sec. 7.6.4. Finally, Fig. 8.1d shows the results of linking all points which, simultaneously, had a gradient value greater than 25 and whose gradient directions did not differ by more than a specified angle threshold.

Figure 8.1 (a) Input image. (b) Horizontal component of the gradient. (c) Vertical component of the gradient. (d) Result of edge linking. (Courtesy of Perceptics Corporation.)
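A minimal sketch of local edge linking based on Eqs. (8.2-1) and (8.2-3); the union-find bookkeeping, the min_grad cutoff used to ignore non-edge pixels, and the fixed 3 × 3 neighborhood are illustrative choices rather than details taken from the text.

    import numpy as np

    def link_edge_points(grad_mag, grad_angle, T, A, min_grad=1e-6):
        """Link neighboring edge pixels whose gradient magnitudes differ by at most T
        (Eq. 8.2-1) and whose gradient angles differ by at most A (Eq. 8.2-3).
        Returns an integer image giving each linked set a distinct label (-1 elsewhere)."""
        rows, cols = grad_mag.shape
        parent = np.arange(rows * cols)

        def find(k):                       # union-find with path compression
            while parent[k] != k:
                parent[k] = parent[parent[k]]
                k = parent[k]
            return k

        candidate = grad_mag > min_grad
        for x in range(rows):
            for y in range(cols):
                if not candidate[x, y]:
                    continue
                for u in range(max(x - 1, 0), min(x + 2, rows)):
                    for v in range(max(y - 1, 0), min(y + 2, cols)):
                        if (u, v) == (x, y) or not candidate[u, v]:
                            continue
                        if (abs(grad_mag[x, y] - grad_mag[u, v]) <= T and
                                abs(grad_angle[x, y] - grad_angle[u, v]) <= A):
                            parent[find(x * cols + y)] = find(u * cols + v)

        labels = np.full((rows, cols), -1, dtype=int)
        for x in range(rows):
            for y in range(cols):
                if candidate[x, y]:
                    labels[x, y] = find(x * cols + y)
        return labels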
Global Analysis via the Hough Transform. In this section we consider the link-
ing of boundary points by determining whether or not they lie on a curve of
specified shape. Suppose initially that, given n points in the xy plane of an image,
we wish to find subsets that lie on straight lines. One possible solution is to first find all lines determined by every pair of points and then find all subsets of points that are close to particular lines. The problem with this procedure is that it involves finding n(n − 1)/2 ~ n² lines and then performing n[n(n − 1)]/2 ~ n³ comparisons of every point to all lines, which is computationally prohibitive in all but the most trivial applications.
This problem may be viewed differently using an approach commonly referred to as the Hough transform. Consider a point (x_i, y_i) and the general equation of a straight line in slope-intercept form, y_i = a x_i + b. There is an infinite number of lines that pass through (x_i, y_i), but they all satisfy this equation for varying values of a and b. However, if we write the equation as b = −x_i a + y_i and consider the ab plane (also called parameter space), then we have the equation of a single line for a fixed pair (x_i, y_i). Furthermore, a second point (x_j, y_j) will also have a line in parameter
space associated with it, and this line will intersect the line associated with (x_i, y_i) at (a', b'), where a' is the slope and b' the intercept of the line containing both (x_i, y_i) and (x_j, y_j) in the xy plane. In fact, all points contained on this line will have lines in parameter space which intersect at (a', b'). These concepts are illustrated in Fig. 8.2.
The computational attractiveness of the Hough transform arises from subdividing the parameter space into so-called accumulator cells, as illustrated in Fig. 8.3, where (a_min, a_max) and (b_min, b_max) are the expected ranges of slope and intercept values. Accumulator cell A(i, j) corresponds to the square associated with parameter space coordinates (a_i, b_j). Initially, these cells are set to zero. Then, for every point (x_k, y_k) in the image plane, we let the parameter a equal each of the allowed subdivision values on the a axis and solve for the corresponding b using the equation b = −x_k a + y_k. The resulting b's are then rounded off to the nearest allowed value on the b axis. If a choice of a_p results in solution b_q, we let A(p, q) = A(p, q) + 1. At the end of this procedure, a value of M in cell A(i, j) corresponds to M points in the xy plane lying on the line y = a_i x + b_j. The accuracy of the colinearity of these points is established by the number of subdivisions in the ab plane.
Figure 8.3 Quantization of the parameter plane into cells for use in the Hough transform.
It is noted that if we subdivide the a axis into K increments, then for every point (x_k, y_k) we obtain K values of b corresponding to the K allowed values of a, so that for n image points the procedure involves on the order of nK computations and is linear in n.
A problem with using the equation y = ax + b to represent a line is that both the slope and the intercept approach infinity as the line approaches a vertical position. One way around this difficulty is to use the normal representation of a line:

    x cos θ + y sin θ = ρ        (8.2-4)

The meaning of the parameters used in Eq. (8.2-4) is illustrated in Fig. 8.4a. The use of this representation in constructing a table of accumulators is identical to the method discussed above for the slope-intercept representation; the only difference is that, instead of straight lines, we now have sinusoidal curves in the θρ plane. As before, M colinear points lying on a line x cos θ_i + y sin θ_i = ρ_j will yield M sinusoidal curves which intersect at (θ_i, ρ_j) in the parameter space. When we use the method of incrementing θ and solving for the corresponding ρ, the procedure will yield M entries in accumulator A(i, j) associated with the cell determined by (θ_i, ρ_j). The subdivision of the parameter space is illustrated in Fig. 8.4b.
Figure 8.4 (a) Normal representation of a line. (b) Quantization of the θρ plane into cells. Note that the maximum value of ρ is equal to the distance from corner to corner in the original image.
Although our attention has been focused thus far on straight lines, the Hough
transform is applicable to any function of the form g(x, c) = 0, where x is a
vector of coordinates and c is a vector of coefficients. For example, the locus of
points lying on the circle

    (x − c1)² + (y − c2)² = c3²        (8.2-5)

can easily be detected by using the approach discussed above. The basic difference is that we now have three parameters, c1, c2, and c3, which result in a three-dimensional parameter space with cubelike cells and accumulators of the form A(i, j, k). The procedure is to increment c1 and c2, solve for the c3 that satisfies Eq. (8.2-5), and update the accumulator corresponding to the cell associated with the triple (c1, c2, c3). Clearly, the complexity of the Hough transform is strongly dependent on the number of coordinates and coefficients in a given functional representation.
Before leaving this section, we point out that further generalizations of the Hough transform to detect curves with no simple analytic representations are possible. These concepts, which are extensions of the material presented above, are treated in detail by Ballard [1981].
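A small sketch of the accumulator procedure for the normal representation of Eq. (8.2-4); the numbers of θ and ρ subdivisions and the use of the image diagonal as the ρ range are illustrative choices.

    import numpy as np

    def hough_lines(edge_img, n_theta=180, n_rho=200):
        """Accumulate rho = x cos(theta) + y sin(theta) (Eq. 8.2-4) for the nonzero
        pixels of a binary edge image.  Returns the accumulator A and the theta
        and rho values associated with its cells."""
        ys, xs = np.nonzero(edge_img)              # coordinates of edge points
        diag = np.hypot(*edge_img.shape)           # rho ranges over [-diag, diag]
        thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
        rhos = np.linspace(-diag, diag, n_rho)
        A = np.zeros((n_theta, n_rho), dtype=int)
        for x, y in zip(xs, ys):
            # increment theta and solve for the corresponding rho, as in the text
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            j = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
            A[np.arange(n_theta), j] += 1
        return A, thetas, rhos

    # The cell with the largest count corresponds to the most colinear points:
    # i, j = np.unravel_index(np.argmax(A), A.shape)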
Figure 8.5 (a) Image of a work piece. (b) Gradient image. (c) Hough transform table. (d) Detected lines superimposed on the original image. (Courtesy of D. Cate, Texas Instruments, Inc.)
Global Analysis via Graph-Theoretic Techniques. The method discussed in the previous section is based on having a set of edge points obtained typically through a gradient operation, and it can yield poor results in situations characterized by high noise content. We now discuss a global approach based on representing edge segments in the form of a graph structure and searching the graph for low-cost paths which correspond to significant edges. This representation provides a rugged approach which performs well in the presence of noise. As might be expected, the procedure is considerably more complicated and requires more processing time than the methods discussed thus far.
We begin the development with some basic definitions. A graph G = (N, A) is a finite, nonempty set of nodes N, together with a set A of unordered pairs of distinct elements of N. Each pair (n_i, n_j) of A is called an arc. A graph in which its arcs are directed is called a directed graph. If an arc is directed from node n_i to node n_j, then n_j is said to be a successor of its parent node n_i. The process of identifying the successors of a node is called expansion of the node. In each graph we will define levels, such that level 0 consists of a single node, called the start node, and the nodes in the last level are called goal nodes. A cost c(n_i, n_j) can be associated with every arc (n_i, n_j). A sequence of nodes n_1, n_2, ..., n_k, with each node n_i being a successor of node n_{i−1}, is called a path from n_1 to n_k, and the cost of the path is given by

    c = Σ_{i=2}^{k} c(n_{i−1}, n_i)        (8.2-6)
Finally, we define an edge element as the boundary between two pixels p and q,
such that p and q are 4-neighbors, as illustrated in Fig. 8.6. In this context, an
edge is a sequence of edge elements.
In order to illustrate how the foregoing concepts apply to edge detection, consider the 3 × 3 image shown in Fig. 8.7, where the outer numbers are pixel coordinates and the numbers in parentheses represent intensity. With each edge element defined by pixels p and q we associate the cost

    c(p, q) = H − [f(p) − f(q)]        (8.2-7)

where H is the highest intensity value in the image (7 in this example), f(p) is the intensity value of p, and f(q) is the intensity value of q. As indicated above, p and q are 4-neighbors.
The graph for this problem is shown in Fig. 8.8. Each node in this graph
corresponds to an edge element, and an arc exists between two nodes if the two
corresponding edge elements taken in succession can be part of an edge. The cost
of each edge element, computed using Eq. (8.2-7), is shown by the arc leading into
it, and goal nodes are shown in double rectangles. Each path between the start
node and a goal node is a possible edge. For simplicity, it has been assumed that
the edge starts in the top row and terminates in the last row, so that the first element of an edge can only be [(0, 0), (0, 1)] or [(0, 1), (0, 2)] and the last element [(2, 0), (2, 1)] or [(2, 1), (2, 2)]. The minimum-cost path, computed using Eq. (8.2-6), is shown dashed, and the corresponding edge is shown in Fig. 8.9.

          0      1      2
    0    (7)    (2)    (2)
    1    (5)    (7)    (2)
    2    (5)    (1)    (0)

Figure 8.7 A 3 × 3 image; the outer numbers are pixel coordinates and the numbers in parentheses are intensity values.
In general, the problem of finding a minimum-cost path is not trivial from a computational point of view. Typically, the approach is to sacrifice optimality for the sake of speed, and the algorithm discussed below is representative of a class of procedures which use heuristics in order to reduce the search effort.

Figure 8.8 Graph used for finding an edge in the image of Fig. 8.7. The pair (a, b)(c, d) in each box refers to points p and q, respectively. Note that p is assumed to be to the right of the path as the image is traversed from top to bottom. The dashed lines indicate the minimum-cost path. (Adapted from Martelli [1972], © Academic Press.)

Figure 8.9 Edge corresponding to the minimum-cost path in the graph of Fig. 8.8.

Let r(n) be an estimate of the cost of a minimum-cost path from the start node s to a goal node, where the path is constrained to go through n. This cost can be expressed as the estimate of the cost of a minimum-cost path from s to n, plus an estimate of the cost of that path from n to a goal node; that is,

    r(n) = g(n) + h(n)        (8.2-8)

Here, g(n) can be chosen as the lowest-cost path from s to n found so far, and h(n) is obtained by using any available heuristic information (e.g., expanding only certain nodes based on previous costs in getting to that node). An algorithm that uses r(n) as the basis for performing a graph search proceeds by repeatedly selecting and expanding the open node with the smallest value of r(n), keeping pointers from each node back to its lowest-cost predecessor, until a goal node is selected; the minimum-cost path is then recovered by tracing these pointers back to the start node.
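A generic Python sketch of a search driven by r(n) = g(n) + h(n); the successor, cost, and heuristic functions are supplied by the caller, and details such as the tie-breaking counter and the choice not to re-expand closed nodes are implementation decisions of this sketch, not of any particular published listing.

    import heapq

    def heuristic_graph_search(start, goals, successors, cost, h=lambda n: 0):
        """Search from start toward any node in goals, expanding at each step the
        open node with the smallest r(n) = g(n) + h(n) (Eq. 8.2-8).
        Returns (path, path_cost), or (None, inf) if no goal is reachable."""
        goals = set(goals)
        g = {start: 0.0}
        came_from = {}
        counter = 0                                   # tie-breaker for the heap
        open_heap = [(h(start), counter, start)]
        closed = set()
        while open_heap:
            r, _, n = heapq.heappop(open_heap)
            if n in closed:
                continue
            if n in goals:                            # trace pointers back to the start
                total = g[n]
                path = [n]
                while n in came_from:
                    n = came_from[n]
                    path.append(n)
                return path[::-1], total
            closed.add(n)
            for m in successors(n):
                new_g = g[n] + cost(n, m)
                if new_g < g.get(m, float("inf")):
                    g[m] = new_g
                    came_from[m] = n
                    counter += 1
                    heapq.heappush(open_heap, (new_g + h(m), counter, m))
        return None, float("inf")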
Figure 8.10 (a) Noisy image. (b) Result of edge detection by using the heuristic graph search technique. Heuristics were brought into play by not expanding those nodes whose cost exceeded a given threshold.
8.2.2 Thresholding
The concept of thresholding was introduced in Sec. 7.6.5 as an operation involving
tests against a function of the form

    T = T[x, y, p(x, y), f(x, y)]        (8.2-9)

where f(x, y) is the intensity of point (x, y) and p(x, y) denotes some local property measured in a neighborhood of this point. A thresholded image, g(x, y), is created by defining

    g(x, y) = 1   if f(x, y) > T
            = 0   if f(x, y) ≤ T                (8.2-10)
so that pixels in g(x, y) labeled 1 correspond to objects, while pixels labeled 0
correspond to the background. Equation (8.2-10) assumes that the intensity of
objects is greater than the intensity of the background. The opposite condition is
handled by reversing the sense of the inequalities.
Global vs. Local Thresholds. As indicated in Sec. 7.6.5, when T in Eq. (8.2-9)
depends only on f(x, y), the threshold is called global. If T depends on both
f(x, y) and p(x, y), then it is called a local threshold. If, in addition, T depends
on the spatial coordinates x and y, it is called a dynamic threshold.
Global thresholds have application in situations where there is clear definition
between objects and background, and where illumination is relatively uniform.
The backlighting and structured lighting techniques discussed in Sec. 7.3 usually
yield images that can be segmented by global thresholds. For the most part, arbi-
trary illumination of a work space yields images that, if handled by thresholding,
require some type of local analysis to compensate for effects such as nonuniformi-
ties in illumination, shadows, and reflections.
In the following discussion we consider a number of techniques for selecting
segmentation thresholds. Although some of these techniques can be used for glo-
bal threshold selection, they are usually employed in situations requiring local
threshold analysis.
Optimum Threshold Selection. Suppose that an image contains two principal intensity regions (objects and background), and that its histogram can be approximated by a mixture probability density function of the form

    p(z) = P1 p1(z) + P2 p2(z)        (8.2-11)

where z is a random variable denoting intensity, p1(z) and p2(z) are the probability density functions, and P1 and P2 are called a priori probabilities. These last two quantities are simply the probabilities of occurrence of the two types of intensity levels in an image; an intensity histogram such as the one shown in Fig. 8.11a may thus be approximated by the sum of two probability density functions, as shown in Fig. 8.11b. If it is known that light pixels represent objects and also that 20 percent of the image area is occupied by object pixels, then the a priori probability of object pixels is 0.2; since the two probabilities must sum to 1, the a priori probability of background pixels is 0.8, which simply says that, in this case, the remaining 80 percent are background pixels.

Figure 8.11 (a) Intensity histogram. (b) Approximation as the sum of two probability density functions.
Let us form two functions of z, as follows:

    d1(z) = P1 p1(z)        (8.2-13)
and
    d2(z) = P2 p2(z)        (8.2-14)
It is known from decision theory (Tou and Gonzalez [1974]) that the average error
of misclassifying an object pixel as background, or vice versa, is minimized by
using the following rule: Given a pixel with intensity value z, we substitute that value of z into Eqs. (8.2-13) and (8.2-14). Then, we classify the pixel as an object
pixel if d1 (z) > d2 (z) or as background pixel if d2 (z) > d1(z). The optimum
threshold is then given by the value of z for which d1 (z) = d2 (z). That is, set-
ting z = T in Eqs. (8.2-13) and (8.2-14), we have that the optimum threshold
satisfies the equation

    P1 p1(T) = P2 p2(T)        (8.2-15)
Thus, if the functional forms of p1(z) and p2(z) are known, we can use this equation to solve for the optimum threshold that separates objects from the background. Once this threshold is known, Eq. (8.2-10) can be used to segment a given image.
As an important illustration of the use of Eq. (8.2-15), suppose that p1(z) and p2(z) are gaussian probability density functions; that is,

    p1(z) = [1/(√(2π) σ1)] exp[ −(z − m1)² / 2σ1² ]        (8.2-16)
and
    p2(z) = [1/(√(2π) σ2)] exp[ −(z − m2)² / 2σ2² ]        (8.2-17)

where m1, σ1 and m2, σ2 are the means and standard deviations of the two densities. Substituting these expressions into Eq. (8.2-15) and simplifying yields a quadratic equation in T:

    A T² + B T + C = 0        (8.2-18)

where

    A = σ1² − σ2²
    B = 2 (m1 σ2² − m2 σ1²)        (8.2-19)
    C = σ1² m2² − σ2² m1² + 2 σ1² σ2² ln (σ2 P1 / σ1 P2)
The possibility of two solutions indicates that two threshold values may be
required to obtain an optimal solution.
If the standard deviations are equal, σ1 = σ2 = σ, a single threshold is sufficient:

    T = (m1 + m2)/2 + [σ² / (m1 − m2)] ln (P2 / P1)        (8.2-20)
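Once the means, standard deviations, and a priori probabilities of the two gaussian populations have been estimated, the optimum threshold of Eqs. (8.2-18) to (8.2-20) can be computed directly; the sketch below assumes m1 ≠ m2 and returns both roots of the quadratic in the unequal-variance case.

    import numpy as np

    def optimum_threshold(m1, s1, m2, s2, P1, P2):
        """Optimum threshold(s) between two gaussian intensity populations."""
        if np.isclose(s1, s2):
            s = s1
            # Eq. (8.2-20): single threshold (assumes m1 != m2)
            return ((m1 + m2) / 2.0 + (s**2 / (m1 - m2)) * np.log(P2 / P1),)
        A = s1**2 - s2**2
        B = 2.0 * (m1 * s2**2 - m2 * s1**2)
        C = (s1**2 * m2**2 - s2**2 * m1**2
             + 2.0 * s1**2 * s2**2 * np.log(s2 * P1 / (s1 * P2)))
        roots = np.roots([A, B, C])                 # the two candidate thresholds, Eq. (8.2-18)
        return tuple(np.sort(roots.real))

    # Equal spreads and equal priors place the threshold midway between the means:
    print(optimum_threshold(m1=180.0, s1=15.0, m2=60.0, s2=15.0, P1=0.5, P2=0.5))  # (120.0,)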
Example: As an illustration of the preceding concepts, consider the segmentation of the mechanical parts shown in Fig. 8.12a, where, for the moment, we ignore the grid superimposed on the image. Figure 8.12b shows the result of computing a global histogram, fitting it with a bimodal gaussian density, establishing an optimum global threshold, and finally using this threshold in Eq. (8.2-10) to segment the image. As expected, the variations in intensity rendered this approach virtually useless. A similar approach, however, can be carried out on a local basis by subdividing the image into subimages, as defined by the grid in Fig. 8.12a.
After the image has been subdivided, a histogram is computed for each subimage and a test of bimodality is conducted. The bimodal histograms are fitted by a mixed gaussian density and the corresponding optimum threshold is computed using Eqs. (8.2-18) and (8.2-19). No thresholds are computed for subimages without bimodal histograms; instead, these regions are assigned thresholds computed by interpolating the thresholds from neighboring subimages that are bimodal. The histograms for each subimage are shown in Fig. 8.12c, where the horizontal lines provide an indication of the relative scales of the histograms. The resulting dynamic threshold is displayed in Fig. 8.12d, and Fig. 8.12e shows the final segmentation. Note that this procedure involves local analysis to establish the threshold for each cell, and that these local thresholds are interpolated to create a dynamic threshold which is finally used for segmentation.
The concepts discussed above extend to images containing n intensity categories by defining a function d_k(z) = P_k p_k(z) for each category. A given pixel with intensity z is assigned to the kth category if d_k(z) > d_j(z) for j = 1, 2, ..., n; j ≠ k. As before, the optimum threshold between category k and an adjacent category is given by the value of z at which the corresponding decision functions are equal. As indicated in Sec. 7.6.5, the real problem with using multiple histogram thresholds lies in establishing meaningful histogram modes.

Figure 8.12 (a) Image of mechanical parts showing local-region grid. (b) Result of global thresholding. (c) Histograms of subimages. (d) Display of dynamic threshold. (e) Result of dynamic thresholding. (From Rosenfeld and Kak [1982], courtesy of A. Rosenfeld.)
Threshold Selection Based on Boundary Characteristics. One of the most important aspects of selecting a threshold is the capability to identify reliably the mode peaks in a given histogram, particularly in situations where image characteristics can change over a broad range of intensity distributions. Based on the discussion in the last two sections, it is intuitively evident that the chances of selecting a "good" threshold should be considerably enhanced if the histogram peaks are tall, narrow, symmetric, and separated by deep valleys.
One approach for improving the shape of histograms is to consider only those pixels that lie on or near the boundary between objects and the background. One immediate and obvious improvement is that this makes histograms less dependent on the relative size between objects and the background. For instance, the intensity histogram of an image composed of a large, dominant background area and one small object would be dominated by a large peak due to the concentration of background pixels. If, on the other hand, only the pixels on or near the boundary between the object and the background were used, the resulting histogram would have peaks whose heights are more balanced. In addition, the probability that a given pixel lies near the edge of an object is usually equal to the probability that it lies on the edge of the background, thus improving the symmetry of the histogram peaks. Finally, as will be seen below, using pixels that satisfy some simple measures based on gradient and Laplacian operators has a tendency to deepen the valley between histogram peaks.
The principal problem with the foregoing comments is that they implicitly assume that the boundary between objects and background is known. This information is clearly not available during segmentation, since finding a division between objects and background is precisely the purpose of the methods discussed here. However, we know from the material in Sec. 7.6.4 that an indication of whether a pixel is on an edge may be obtained by computing its gradient. In addition, use of the Laplacian can yield information regarding whether a given pixel lies on the dark (e.g., background) or light (object) side of an edge. Since, as discussed in Sec. 7.6.4, the Laplacian is zero on the interior of an ideal ramp edge, we may expect in practice that the valleys of histograms formed from the pixels selected by a gradient/Laplacian criterion will be sparsely populated, producing the deep valleys mentioned earlier in this section.
The gradient, G[f(x, y)], at any point in an image is given by Eq. (7.6-38) or (7.6-39). Similarly, the Laplacian L[f(x, y)] is given by Eq. (7.6-47). We may use these two quantities to form a three-level image,

    s(x, y) = 0   if G[f(x, y)] < T
            = +   if G[f(x, y)] ≥ T and L[f(x, y)] ≥ 0        (8.2-24)
            = −   if G[f(x, y)] ≥ T and L[f(x, y)] < 0

where the symbols 0, +, and − represent any three distinct gray levels, and T is a threshold. Assuming a dark object on a light background, as in Fig. 7.34b, the use of Eq. (8.2-24) produces an image s(x, y) in which all pixels which are not on an edge (as determined by G[f(x, y)] being less than T) are labeled "0", all pixels on the dark side of an edge are labeled "+", and all pixels on the light side of an edge are labeled "−". The symbols + and − in Eq. (8.2-24) are reversed for a light object on a dark background. Figure 8.13 shows the labeling produced by Eq. (8.2-24) for an image of a dark, underlined stroke written on a light background.

Figure 8.13 Image of a handwritten stroke coded by using Eq. (8.2-24). (From White and Rohrer [1983], © IBM.)

The information obtained by using the procedure just discussed can be used to generate a segmented, binary image in which 1's correspond to objects of interest
and 0's correspond to the background. First we note that the transition (along a
horizontal or vertical scan line) from a light background to a dark object must be
characterized by the occurrence of a - followed by a + in s(x, y). The interior
of the object is composed of pixels which are labeled either 0 or +. Finally, the
transition from the object back to the background is characterized by the
occurrence of a + followed by a -. Thus we have that a horizontal or vertical
scan line containing a section of an object has the following structure:
(···)(−, +)(0 or +)(+, −)(···)
Figure 8.14 (a) Original image. (b) Segmented image. (From White and Rohrer [1983], © IBM.)
Figure 8.15 Histogram of pixels with gradients greater than 5. (From White and Rohrer [1983], © IBM.)
Thresholds Based on Several Variables. The techniques discussed thus far deal with thresholding a single intensity variable. In some cases, such as color sensing, red, green, and blue (RGB) components are used to form a composite color image.
In this case, each pixel is characterized by three values and it becomes possible to
construct a three-dimensional histogram. The basic procedure is the same as that
used for one variable. For example, given three 16-level images corresponding to
the RGB components of a color sensor, we form a 16 x 16 x 16 grid (cube) and
insert in each cell of the cube the number of pixels whose RGB components have intensities corresponding to that cell. Suppose that we find two significant clusters of points in this histogram, where one cluster corresponds to objects and the other to the background. Keeping in mind that each
pixel now has three components and, therefore, may be viewed as a point in
three-dimensional space, we can segment an image by using the following pro-
cedure: For every pixel in the image we compute the distance between that pixel
and the centroid of each cluster. Then, if the pixel is closer to the centroid of the
object cluster, we label it with a 1; otherwise, we label it with a 0. This concept
is easily extendible to more pixel components and, certainly, to more clusters.
The reader interested in further pursuing techniques for cluster seeking can consult, for exam-
ple, the book by Tou and Gonzalez [1974]. This and other related techniques for
segmentation are surveyed by Fu and Mui [1981].
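As a concrete illustration of the cluster-distance labeling just described, the following Python sketch (not from the text; the function name, the use of NumPy, and the assumption that the two cluster centroids have already been found from the histogram are ours) labels each pixel by its nearest RGB centroid:

```python
import numpy as np

def segment_by_cluster_distance(rgb_image, object_centroid, background_centroid):
    """Label each pixel 1 (object) or 0 (background) by its nearest RGB centroid.

    rgb_image: H x W x 3 array of RGB values.
    object_centroid, background_centroid: length-3 arrays (cluster centers).
    """
    pixels = rgb_image.reshape(-1, 3).astype(float)
    d_obj = np.linalg.norm(pixels - object_centroid, axis=1)   # distance to object cluster
    d_bg = np.linalg.norm(pixels - background_centroid, axis=1)
    labels = (d_obj < d_bg).astype(np.uint8)                   # 1 if closer to object cluster
    return labels.reshape(rgb_image.shape[:2])
```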
In the original image of Fig. 8.16a, the scarf and one of the flowers were a vivid red, and the hair and facial colors were light and different in spectral characteristics from the window and other background features. Figure 8.16b was obtained by thresholding about a histogram cluster which was known to contain RGB components representative of flesh tones. It is important to note that the window, which in the monochrome image has a range of intensities close to those of the hair, does not appear in the segmented image because its multispectral characteristics are quite different. The fact that some small regions on top of the subject's hair appear in the segmented image indicates that their color is similar to flesh tones. Figure 8.16c was obtained by thresholding about a cluster close to the red axis. In this case
Figure 8.16 Segmentation by multivariable threshold approach. (From Gonzalez and Wintz [1977], © Addison-Wesley.)
only the scarf, the red flower, and a few isolated points appeared in the seg-
mented image. The threshold used to obtain both results was a distance of
one cell. Thus, any pixels whose components placed them within a unit dis-
tance from the centroid of the cluster under consideration were coded white.
All other pixels were coded black.
Region-Oriented Segmentation. Let R represent the entire image region. We may view segmentation as a process that partitions R into n subregions, R1, R2, ..., Rn, such that:

1. R1 ∪ R2 ∪ · · · ∪ Rn = R
2. Ri is a connected region, i = 1, 2, ..., n
3. Ri ∩ Rj = ∅ for all i and j, i ≠ j
4. P(Ri) = TRUE for i = 1, 2, ..., n
5. P(Ri ∪ Rj) = FALSE for i ≠ j

where P(Ri) is a logical predicate defined over the points in set Ri, and ∅ is the null set.
Condition 1 indicates that the segmentation must be complete; that is, every
pixel must be in a region. The second condition requires that points in a region
must be connected (see Sec. 7.5.2 regarding connectivity). Condition 3 indicates
that the regions must be disjoint. Condition 4 deals with the properties that must
be satisfied by the pixels in a segmented region. One simple example is: P(Ri) =
TRUE if all pixels in Ri have the same intensity. Finally, condition 5 indicates
that regions Ri and Rj are different in the sense of predicate P. The use of these
conditions in segmentation algorithms is discussed in the following subsections.
Region Growing by Pixel Aggregation. The simplest region-growing procedure starts with a set of "seed" points and grows regions by appending to each seed those neighboring pixels that have similar properties (e.g., intensity, texture, or color). As a simple illustration of this procedure consider Fig. 8.17a, where the numbers inside the cells represent intensity values. Let the points with coordinates (3, 2) and (3, 4) be used as seeds. Using two starting points will result in a segmentation consisting of, at most, two regions: R1 associated with seed (3, 2) and R2 associated
0 0 5 6 7
1 1 5 8 7
0 1 6 7 7
2 0 7 6 6
0 1 5 6 5
(a)
a a b b b
a a b b b
a a b b b
a a b b b
a a b b b
(b)
a a a a a
a a a a a
a a a a a
a a a a a
a a a a a
(c)
Figure 8.17 Example of region growing using known starting points. (a) Original image array. (b) Segmentation result using an absolute difference of less than 3 between intensity levels. (c) Result using an absolute difference less than 8. (From Gonzalez and Wintz [1977], © Addison-Wesley.)
with seed (3, 4). The property P that we will use to include a pixel in either region is that the absolute difference between the intensity of the pixel and the intensity of the seed be less than a threshold T (any pixel that satisfies this property simultaneously for both seeds is arbitrarily assigned to region R1). The
result obtained using T = 3 is shown in Fig. 8.17b. In this case, the segmentation consists of two regions, where the points in R1 are denoted by a's and the points in R2 by b's. It is noted that any starting point in either of these two resulting regions would have yielded the same result. If, on the other hand, we had chosen T = 8, a single region would have resulted, as shown in Fig. 8.17c.
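A minimal Python sketch of this pixel-aggregation procedure follows. It is not from the text: the function name, the use of NumPy and a queue, and the 4-connectivity choice are our own assumptions; the array reproduces the values of Fig. 8.17a as given above.

```python
import numpy as np
from collections import deque

def grow_region(image, seed, threshold):
    """Grow a region from `seed` (row, col): append 4-neighbors whose intensity
    differs from the seed intensity by less than `threshold`."""
    rows, cols = image.shape
    seed_value = int(image[seed])
    region = np.zeros((rows, cols), dtype=bool)
    region[seed] = True
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):      # 4-neighbors
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols and not region[rr, cc]
                    and abs(int(image[rr, cc]) - seed_value) < threshold):
                region[rr, cc] = True
                frontier.append((rr, cc))
    return region

# Intensity array of Fig. 8.17a; seeds (3, 2) and (3, 4) in the text's 1-based coordinates.
img = np.array([[0, 0, 5, 6, 7],
                [1, 1, 5, 8, 7],
                [0, 1, 6, 7, 7],
                [2, 0, 7, 6, 6],
                [0, 1, 5, 6, 5]])
R1 = grow_region(img, (2, 1), threshold=3)   # 0-based equivalent of seed (3, 2)
R2 = grow_region(img, (2, 3), threshold=3)   # 0-based equivalent of seed (3, 4)
```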
The preceding example, while simple in nature, points out some important
problems in region growing. Two immediate problems are the selection of initial
seeds that properly represent regions of interest and the selection of suitable pro-
perties for including points in the various regions during the growing process.
Selecting a set of one or more starting points can often be based on the nature of
the problem. For example, in military applications of infrared imaging, targets of
interest are hotter (and thus appear brighter) than the background. Choosing the
brightest pixels is then a natural starting point for a region-growing algorithm.
When a priori information is not available, one may proceed by computing at
every pixel the same set of properties that will ultimately be used to assign pixels
to regions during the growing process. If the result of this computation shows
clusters of values, then the pixels whose properties place them near the centroid of
these clusters can be used as seeds. For instance, in the example given above, a
histogram of intensities would show that points with intensity of 1 and 7 are the
most predominant.
The selection of similarity criteria is dependent not only on the problem under
consideration, but also on the type of image data available. For example, the
analysis of land-use satellite imagery is heavily dependent on the use of color.
This problem would be significantly more difficult to handle by using monochrome
images alone. Unfortunately, the availability of multispectral and other comple-
mentary image data is the exception, rather than the rule, in industrial computer
vision. Typically, region analysis must be carried out using a set of descriptors
based on intensity and spatial properties (e.g., moments, texture) of a single image source. Descriptors alone can yield misleading results if connectivity information is not used in the growing process. Consider, for example, a random arrangement of pixels with only three distinct intensity values. Grouping pixels with the same intensity to form a "region" without paying attention to connectivity would yield a segmentation result that is meaningless in the context of this discussion. Another problem in region growing is the formulation of a stopping
rule. Basically, we stop growing a region when no more pixels satisfy the criteria
for inclusion in that region. We mentioned above criteria such as intensity, tex-
ture, and color, which are local in nature and do not take into account the "his-
tory" of region growth. Additional criteria that increase the power of a region-
growing algorithm incorporate the concept of size, likeness between a candidate
pixel and the pixels grown thus far (e.g., a comparison of the intensity of a candi-
date and the average intensity of the region), and the shape of a given region being
grown. The use of these types of descriptors is based on the assumption that a
model of expected results is, at least, partially available.
Region Splitting and Merging. The procedure discussed above grows regions
starting from a given set of seed points. An alternative is to initially subdivide an
image into a set of arbitrary, disjoint regions and then merge and/or split the
regions in an attempt to satisfy the conditions stated at the beginning of this sec-
tion. A split and merge algorithm which iteratively works toward satisfying these
constraints may be explained as follows.
Let R represent the entire image region, and select a predicate P. Assuming a
square image, one approach for segmenting R is to successively subdivide it into
smaller and smaller quadrant regions such that, for any region Ri, P(Ri) =
TRUE. The procedure starts with the entire region R. If P(R) = FALSE, we
divide the image into quadrants. If P is FALSE for any quadrant, we subdivide
that quadrant into subquadrants, and so on. This particular splitting technique has
a convenient representation in the form of a so-called quadtree (i.e., a tree in
which each node has exactly four descendants). A simple illustration is shown in
Fig. 8.18. It is noted that the root of the tree corresponds to the entire image and
that each node corresponds to a subdivision. In this case, only R4 was subdivided
further.
Figure 8.18 (a) Partitioned image. (b) Corresponding quadtree. (Only R4 is subdivided further, into R41, R42, R43, and R44.)
If we used only splitting, it is likely that the final partition would contain adja-
cent regions with identical properties. This may be remedied by allowing merg-
ing, as well as splitting. In order to satisfy the segmentation conditions stated ear-
lier, we merge only adjacent regions whose combined pixels satisfy the predicate
P; that is, we merge two adjacent regions Rj and Rk only if P(Rj ∪ Rk) =
TRUE.
The preceding discussion may be summarized by the following procedure in
which, at any step, we
1. Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE
2. Merge any adjacent regions Rj and Rk for which P(Rj U Rk) = TRUE
3. Stop when no further merging or splitting is possible
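The following Python sketch illustrates the split-and-merge idea under simplifying assumptions (square, power-of-two image; the merge step is greedy and, for brevity, omits the adjacency test required by step 2). Function names and the uniformity predicate are ours, not the book's.

```python
import numpy as np

def split_and_merge(image, predicate):
    """Sketch of split-and-merge: recursively split square regions for which the
    predicate is FALSE, then greedily merge leaf blocks whose union satisfies it."""
    leaves = []

    def split(r, c, size):
        block = image[r:r + size, c:c + size]
        if size == 1 or predicate(block):
            leaves.append((r, c, size))
            return
        h = size // 2
        for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):      # four quadrants
            split(r + dr, c + dc, h)

    split(0, 0, image.shape[0])                               # assumes square, power-of-two image

    labels = np.zeros(image.shape, dtype=int)
    next_label = 1
    for r, c, size in leaves:
        sl = (slice(r, r + size), slice(c, c + size))
        assigned = False
        for lab in range(1, next_label):
            candidate = np.concatenate((image[labels == lab], image[sl].ravel()))
            if predicate(candidate):                          # P(Ri U Rj) = TRUE -> merge
                labels[sl] = lab
                assigned = True
                break
        if not assigned:
            labels[sl] = next_label
            next_label += 1
    return labels

# Example predicate: TRUE if all pixels in the region have the same intensity.
same_intensity = lambda region: region.size > 0 and np.ptp(region) == 0
```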
A number of variations of this basic theme are possible (Horowitz and Pavlidis
[1974]). For example, one possibility is to initially split the image into a set of
square blocks. Further splitting is carried out as above, but merging is initially
limited to groups of four blocks which are descendants in the quadtree representa-
tion and which satisfy the predicate P. When no further mergings of this type are
possible, the procedure is terminated by one final merging of regions satisfying
step 2 above. At this point, the regions that are merged may be of different sizes.
The principal advantage of this approach is that it uses the same quadtree for split-
ting and merging, until the final merging step.
Example: An illustration of the split-and-merge algorithm is shown in Fig. 8.19. The image consists of a single object and a background, and we let P(Ri) = TRUE if all pixels in Ri
have the same intensity. Then, for the entire image region R, it follows that
P(R) = FALSE, so the image is split as shown in Fig. 8.19a. In the next
step, only the top left region satisfies the predicate so it is not changed, while
the other three quadrant regions are split into subquadrants, as shown in Fig.
8.19b. At this point several regions can be merged, with the exception of the
two subquadrants that include the lower part of the object; these do not satisfy
the predicate and must be split further. The results of the split and merge
operation are shown in Fig. 8.19c. At this point all regions satisfy P, and
merging the appropriate regions from the last split operation yields the final, segmented result shown in Fig. 8.19d.

Figure 8.19 (a) to (d) Example of the split-and-merge procedure.
Basic Approach. One of the simplest approaches for detecting changes between
two image frames f (x, y, ti) and f (x, y, tj) taken at times ti and tj, respectively,
is to compare the two images on a pixel-by-pixel basis. One procedure for doing
this is to form a difference image.
Suppose that we have a reference image containing only stationary components. Comparing this image against a subsequent image of the same environment but including a moving object, the difference of the two images will cancel the stationary components, leaving only nonzero entries that correspond to the nonstationary image components. A difference image between the two images taken at times ti and tj may be defined as

dij(x, y) = 1   if |f(x, y, ti) − f(x, y, tj)| > θ
dij(x, y) = 0   otherwise                                    (8.2-25)

where θ is a threshold. Note that dij(x, y) has a value of 1 at spatial coordinates
(x, y) only if the intensity difference between the two images is appreciably
different at those coordinates, as determined by the threshold θ.
In dynamic image analysis, all pixels in dij(x, y) with value 1 are considered
the result of object motion. This approach is applicable only if the two images are
registered and the illumination is relatively constant within the bounds established
by θ. In practice, 1-valued entries in dij(x, y) often arise as a result of noise.
Typically, these will be isolated points in the difference image and a simple
approach for their removal is to form 4- or 8-connected regions of 1's in dij(x, y)
and then ignore any region that has less than a predetermined number of entries.
This may result in ignoring small and/or slow-moving objects, but it enhances the
chances that the remaining entries in the difference image are truly due to motion.
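A short Python sketch of Eq. (8.2-25) and the connected-region noise filtering just described might look as follows; the function names and the use of NumPy and scipy.ndimage for connected-component labeling are our own assumptions.

```python
import numpy as np
from scipy import ndimage

def difference_image(frame_i, frame_j, theta):
    """Eq. (8.2-25): d(x, y) = 1 where |f(x, y, ti) - f(x, y, tj)| > theta."""
    return (np.abs(frame_i.astype(int) - frame_j.astype(int)) > theta).astype(np.uint8)

def remove_small_regions(d, min_size):
    """Drop 8-connected regions of 1's with fewer than `min_size` entries (noise)."""
    labeled, num = ndimage.label(d, structure=np.ones((3, 3)))    # 8-connectivity
    sizes = ndimage.sum(d, labeled, index=range(1, num + 1))
    keep_labels = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return np.isin(labeled, keep_labels).astype(np.uint8)
```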
Figure 8.20 (a) Image taken at time ti. (b) Image taken at time tj. (c) Difference image. (From Jain [1981], © IEEE.)
The foregoing concepts are illustrated in Fig. 8.20. Part (a) of this figure
shows a reference image frame taken at time ti and containing a single object of
constant intensity that is moving with uniform velocity over a background surface,
also of constant intensity. Figure 8.20b shows a current frame taken at time tj,
and Fig. 8.20c shows the difference image computed using Eq. (8.2-25) with a
threshold larger than the constant background intensity. It is noted that two disjoint
regions were generated by the differencing process: one region is the result of the
leading edge and the other of the trailing edge of the moving object.
In practice, a difference image will often contain isolated entries that are due to noise. Although the number of these entries
can be reduced or completely eliminated by a thresholded connectivity analysis,
this filtering process can also remove small or slow-moving objects. The approach
discussed in this section addresses this problem by considering changes at a pixel
location on several frames, thus introducing a "memory" into the process. The
basic idea is to ignore those changes which occur only sporadically over a frame sequence and can therefore be attributed to noise. An accumulative difference image is formed by comparing a reference image with every subsequent image
in the sequence. A counter for each pixel location in the accumulative image is
incremented every time that there is a difference at that pixel location between the
reference and an image in the sequence. Thus, when the kth frame is being com-
pared with the reference, the entry in a given pixel of the accumulative image
gives the number of times the intensity at that position was different from the
corresponding pixel value in the reference image. Differences are established, for
example, by use of Eq. (8.2-25).
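As a rough Python sketch (ours, not the book's), absolute and signed accumulative difference images can be built by repeatedly applying the thresholded comparison of Eq. (8.2-25) against the reference frame; the sign convention (reference minus current frame) is an assumption of this sketch.

```python
import numpy as np

def absolute_adi(reference, frames, theta):
    """Absolute accumulative difference image: count, at each pixel, how many
    frames differ from the reference by more than theta (cf. Eq. 8.2-25)."""
    adi = np.zeros(reference.shape, dtype=int)
    ref = reference.astype(int)
    for frame in frames:
        adi += (np.abs(ref - frame.astype(int)) > theta).astype(int)
    return adi

def signed_adis(reference, frames, theta):
    """Positive and negative ADIs; assumes object intensities exceed the background
    and uses (reference - frame) as the signed difference."""
    padi = np.zeros(reference.shape, dtype=int)
    nadi = np.zeros(reference.shape, dtype=int)
    ref = reference.astype(int)
    for frame in frames:
        diff = ref - frame.astype(int)
        padi += (diff > theta).astype(int)
        nadi += (diff < -theta).astype(int)
    return padi, nadi
```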
The foregoing concepts are illustrated in Fig. 8.21. Parts (a) through (e) of
this figure show a rectangular object (denoted by 0's) that is moving to the right at constant speed, with successive frames taken at instants of time corresponding to one pixel displacement. Figure 8.21a is the reference image
frame, Figs. 8.21b to d are frames 2 to 4 in the sequence, and Fig. 8.21e is the
eleventh frame. Figures 8.21f to i are the corresponding accumulative images,
which may be explained as follows. In Fig. 8.21f, the left column of l's is due to
differences between the object in Fig. 8.21a and the background in Fig. 8.21b.
The right column of l's is caused by differences between the background in the
reference image and the leading edge of the moving object. By the time of the
fourth frame (Fig. 8.21d), the first nonzero column of the accumulative difference
image shows three counts, indicating three total differences between that column in
the reference image and the corresponding column in the subsequent frames.
It is common practice to use three types of accumulative difference images: absolute (AADI), positive (PADI), and negative (NADI). The latter two quantities
Figure 8.21 (a) Reference image frame. (b) to (e) Frames 2, 3, 4, and 11. (f) to (i) Accu-
mulative difference images for frames 2, 3, 4, and 11. (From Jain [1981], ©IEEE.)
are obtained by using Eq. (8.2-25) without the absolute value and by using the
reference frame instead of f(x, y, ti). Assuming that the intensities of an object
are numerically greater than the background, if the difference is positive, it is com-
pared with a positive threshold; if it is negative, the difference is compared with a
negative threshold. This definition is reversed if the intensities of the object are
less than the background.
Example: Figure 8.22a to c show the AADI, PADI, and NADI for a 20 x 20
pixel object whose intensity is greater than the background, and which is mov-
ing with constant velocity in a south-easterly direction. It is important to note
that the spatial growth of the PADI stops when the object is displaced from its
original position. In other words, when an object whose intensities are greater
than the background is completely displaced from its position in the reference
image, there will be no new entries generated in the positive accumulative
difference image. Thus, when its growth stops, the PADI gives the initial
location of the object in the reference frame. As will be seen below, this pro-
perty can be used to advantage in creating a reference from a dynamic
sequence of images. It is also noted in Fig. 8.22 that the AADI contains the
regions of both the PADI and NADI, and that the entries in these images give
an indication of the speed and direction of object movement. The images in
Fig. 8.22 are shown in intensity-coded form in Fig. 8.23.
An essential requirement of the approaches discussed in the previous two sections is having a reference image against which subsequent
comparisons can be made. As indicated earlier, the difference between two images
in a dynamic imaging problem has the tendency to cancel all stationary
components, leaving only image elements that correspond to noise and to the mov-
Figure 8.22 (a) Absolute, (b) positive, and (c) negative accumulative difference images for
a 20 x 20 pixel object with intensity greater than the background and moving in a south-
easterly direction. (From Jain [1983], courtesy of R. Jain.)
Figure 8.23 Intensity-coded accumulative difference images for Fig. 8.22. (a) AADI, (b)
PADI, and (c) NADI. (From Jain [1983], courtesy of R. Jain.)
ing objects. The noise problem can be handled by the filtering approach discussed
earlier or by forming an accumulative difference image.
In practice, it is not always possible to obtain a reference image with only stationary elements, and it becomes necessary to build a reference from a set of images containing one or more moving objects. This is particularly true in situa-
tions describing busy scenes or in cases where frequent updating is required. One procedure for generating a reference image is as follows: when a nonstationary component has moved completely out of its position in the reference frame, the corresponding background in the current frame can be duplicated in the location originally occupied by the object in the reference frame. When all moving objects have moved completely out of their original positions, a reference image contain-
ing only stationary components will have been created. Object displacement can
be established by monitoring the growth of the PADI.
Figure 8.24 Two image frames of a traffic scene. There are two principal moving objects:
a white car in the middle of the picture and a pedestrian on the lower left. (From Jain
[1981], ©IEEE.)
8.3 DESCRIPTION
The description problem in vision is one of extracting features from an object for
the purpose of recognition. Ideally, descriptors should be independent of object
size, location, and orientation and should contain enough discriminatory informa-
Figure 8.25 (a) Image with automobile removed and background restored. (b) Image with
pedestrian removed and background restored. The latter image can be used as a reference.
(From Jain [1981], ©IEEE.)
tion to uniquely identify one object from another. Description is a central issue in
the design of vision systems in the sense that descriptors affect not only the com-
plexity of recognition algorithms but also their performance. In Secs. 8.3.1, 8.3.2,
and 8.4, respectively, we subdivide descriptors into three principal categories:
boundary descriptors, regional descriptors, and descriptors suitable for representing
three-dimensional structures.
8.3.1 Boundary Descriptors

Chain Codes. Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. Typically, this representation is based on 4- or 8-connectivity of the segments, with the direction of each segment coded as shown in Fig. 8.26. Of course, it is possible to specify chain codes with more directions, but the codes
shown in Fig. 8.26 are the ones most often used in practice.
To generate the chain code of a given boundary we first select a grid spacing,
as shown in Fig. 8.27a. Then, if a cell is more than a specified amount (usually
50 percent) inside the boundary, we assign a 1 to that cell; otherwise, we assign it
a 0. Figure 8.27b illustrates this process, where cells with value 1 are shown
dark. Finally, we code the boundary between the two regions using the direction
codes given in Fig. 8.26a. The result is shown in Fig. 8.27c, where the coding
was started at the dot and proceeded in a clockwise direction. An alternate pro-
cedure is to subdivide the boundary into segments of equal length (i.e., each seg-
ment having the same number of pixels), connecting the endpoints of each segment
with a straight line, and assigning to each line the direction closest to one of the
allowed chain-code directions. An example of this approach using four directions is shown in Fig. 8.28.
Figure 8.26 (a) 4-directional chain code. (b) 8-directional chain code.
Figure 8.27 Steps in obtaining a chain code. The dot in (c) indicates the starting point.
It is important to note that the chain code of a given boundary depends upon the starting point. To normalize with respect to starting-point position, we treat the code as a circular sequence of direction numbers and redefine the start-
ing point so that the resulting sequence of numbers forms an integer of minimum
magnitude. We can also normalize for rotation by using the first difference of the
chain code, instead of the code itself. The difference is computed simply by
counting (in a counterclockwise manner) the number of directions that separate two
Figure 8.28 Generation of chain code by boundary subdivision (resulting code: 130322211).
adjacent elements of the code. For instance, the first difference of the 4-direction
chain code 10103322 is 3133030. If we treat the code as a circular sequence, then
the first element of the difference is computed using the transition between the last
and first components of the chain. In this example the result is 33133030. Size
normalization can be achieved by subdividing all object boundaries into the same
number of equal segments and adjusting the code segment lengths to fit these subdivisions. The preceding normalizations are exact only if the boundaries themselves are invariant to rotation and scale change. In practice, this is seldom the case. For
instance, the same object digitized in two different orientations will in general have
different boundary shapes, with the degree of dissimilarity being proportional to
image resolution. This effect can be reduced by selecting chain elements which
are large in proportion to the distance between pixels in the digitized image or by
orienting the grid in Fig. 8.27 along the principal axes of the object to be coded.
This is discussed below in the section on shape numbers.
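A small Python sketch of the first-difference and minimum-magnitude normalizations just described (function names are ours); the example reproduces the 4-direction code 10103322 from the text.

```python
def first_difference(chain, directions=4):
    """Counterclockwise direction changes between adjacent chain-code elements,
    treating the code as a circular sequence."""
    return [(chain[i] - chain[i - 1]) % directions for i in range(len(chain))]

def min_magnitude_rotation(code):
    """Rotate the circular sequence so that it forms the integer of minimum magnitude
    (the shape-number normalization discussed below)."""
    rotations = [code[i:] + code[:i] for i in range(len(code))]
    return min(rotations)

chain = [1, 0, 1, 0, 3, 3, 2, 2]
diff = first_difference(chain)        # circular first difference of 10103322 -> 33133030
shape_number = min_magnitude_rotation(diff)
```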
Signatures. A signature is a one-dimensional functional representation of a boundary. One of the simplest ways to generate a signature is to plot the distance from the centroid to the boundary as a function of angle, as illustrated in Fig. 8.29. Signatures generated by this approach are obvi-
ously dependent on size and starting point. Size normalization can be achieved
simply by normalizing the r(θ) curve to, say, unit maximum value. The starting-
point problem can be solved by first obtaining the chain code of the boundary and
then using the approach discussed in the previous section.
Distance vs. angle is, of course, not the only way to generate a signature. We
could, for example, traverse the boundary and plot the angle between a line
tangent to the boundary and a reference line as a function of position along the
boundary (Ambler et al. [1975]). The resulting signature, although quite different
Figure 8.29 Two simple boundary shapes and their corresponding distance vs. angle signatures. In (a), r(θ) is constant, while in (b), r(θ) = A sec θ.
from the r(θ) curve, would carry information about basic shape characteristics.
For instance, horizontal segments in the curve would correspond to straight lines
along the boundary since the tangent angle would be constant there. A variation
of this approach is to use the so-called slope density function as a signature (Nahin
[1974]). This function is simply a histogram of tangent angle values. Since a his-
togram is a measure of concentration of values, the slope density function would
respond strongly to sections of the boundary with constant tangent angles (straight
or nearly straight segments) and have deep valleys in sections producing rapidly
varying angles (corners or other sharp inflections).
Once a signature has been obtained, we are still faced with the problem of
describing it in a way that will allow us to differentiate between signatures
corresponding to different boundary shapes. This problem, however, is generally
easier because we are now dealing with one-dimensional functions. An approach
often used to characterize a signature is to compute its moments. Suppose that we
treat a as a discrete random variable denoting amplitude variations in a signature,
and let p(ai), i = 1, 2, ..., K, denote the corresponding histogram, where K is
the number of discrete amplitude increments of a. The nth moment of a about its
mean is defined as
μn = Σ_{i=1}^{K} (ai − m)^n p(ai)    (8.3-1)
where

m = Σ_{i=1}^{K} ai p(ai)    (8.3-2)
The quantity m is recognized as the mean or average value of a and µ2 as its vari-
ance. Only the first few moments are generally required to differentiate between
signatures of clearly distinct shapes.
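A brief Python sketch of signature moments per Eqs. (8.3-1) and (8.3-2); the histogram binning of the amplitude values and the function name are our own assumptions.

```python
import numpy as np

def signature_moments(signature, bins=32, max_order=4):
    """Histogram-based moments of a signature r(theta), per Eqs. (8.3-1)/(8.3-2)."""
    counts, edges = np.histogram(signature, bins=bins)
    p = counts / counts.sum()                      # p(a_i): normalized histogram
    a = 0.5 * (edges[:-1] + edges[1:])             # amplitude value of each bin (bin center)
    m = np.sum(a * p)                              # mean, Eq. (8.3-2)
    mu = {n: np.sum((a - m) ** n * p) for n in range(2, max_order + 1)}   # Eq. (8.3-1)
    return m, mu

# Example: signature of a circular boundary of radius 5 (r(theta) is constant).
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
mean, moments = signature_moments(np.full_like(theta, 5.0))
```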
Polygonal Approximations. A digital boundary can be approximated with arbitrary accuracy by a polygon. For a closed curve, the approximation is exact when the number of segments in the polygon is equal to the number of points in the boundary so that each pair of adjacent points defines a segment in the polygon. In
practice, the goal of a polygonal approximation is to capture the "essence" of the
boundary shape with the fewest possible polygonal segments. Although this prob-
lem is in general not trivial and can very quickly turn into a time-consuming
iterative search, there are a number of polygonal approximation techniques whose
modest complexity and processing requirements make them well suited for robot
vision applications. Several of these techniques are presented in this section.
We begin the discussion with a method proposed by Sklansky et al. [1972] for
finding minimum-perimeter polygons. The procedure is best explained by means
of an example. With reference to Fig. 8.30, suppose that we enclose a given
boundary by a set of concatenated cells, as shown in Fig. 8.30a. We can visual-
ize this enclosure as consisting of two walls corresponding to the outside and
inside boundaries of the strip of cells, and we can think of the object boundary as
a rubberband contained within the walls. If we now allow the rubberband to
shrink, it will take the shape shown in Fig. 8.30b, thus producing a polygon of
minimum perimeter which fits in the geometry established by the cell strip. If the
cells are chosen so that each cell encompasses only one point on the boundary,
then the error in each cell between the original boundary and the rubberband
approximation would be at most √2·d, where d is the distance between pixels. This
error can be reduced in half by forcing each cell to be centered on its correspond-
ing pixel.
Merging techniques based on error or other criteria have been applied to the
problem of polygonal approximation. One approach is to merge points along a
boundary until the least-squares error line fit of the points merged thus far exceeds
a preset threshold. When this occurs, the parameters of the line are stored, the
error is set to zero, and the procedure is repeated, merging new points along the
boundary until the error again exceeds the threshold. At the end of the procedure
the intersections of adjacent line segments form the vertices of a polygon. One of
the principal difficulties with this method is that vertices do not generally
correspond to inflections (such as corners) in the boundary because a new line is
not started until the error threshold is exceeded. If, for instance, a long straight
line were being tracked and it turned a corner, a number (depending on the thres-
hold) of points past the corner would be absorbed before the threshold is exceeded.
Figure 8.30 (a) Object boundary enclosed by cells. (b) Minimum-perimeter polygon.

Splitting techniques, discussed next, do not present this difficulty.
One approach to boundary segment splitting is to successively subdivide a seg-
ment into two parts until a given criterion is satisfied. For instance, we might
require that the maximum perpendicular distance from a boundary segment to the
line joining its two endpoints not exceed a preset threshold. If it does, the furthest
point becomes a vertex, thus subdividing the initial segment into two subsegments.
This approach has the advantage that it "seeks" prominent inflection points. For a
closed boundary, the best starting pair of points is usually the two furthest points
in the boundary. An example is shown in Fig. 8.31. Part (a) of this figure shows
an object boundary, and Fig. 8.31b shows a subdivision of this boundary (solid
line) about its furthest points. The point marked c has the largest perpendicular
distance from the top segment to line ab. Similarly, point d has the largest dis-
tance in the bottom segment. Figure 8.31c shows the result of using the splitting
procedure with a threshold equal to 0.25 times the length of line ab. Since no
point in the new boundary segments has a perpendicular distance (to its
corresponding straight-line segment) which exceeds this threshold, the procedure
terminates with the polygon shown in Fig. 8.31d.
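The splitting procedure just illustrated can be sketched in Python as follows (ours, not the book's); the routine recursively subdivides an open boundary segment at the point of maximum perpendicular distance from the chord joining its endpoints.

```python
import numpy as np

def split_segment(points, threshold):
    """Recursively split an open boundary segment at the point of maximum
    perpendicular distance from the chord joining its endpoints; returns the
    indices of the polygon vertices. `threshold` is an absolute distance."""
    points = np.asarray(points, dtype=float)

    def recurse(lo, hi):
        if hi - lo < 2:
            return []
        p0, p1 = points[lo], points[hi]
        chord = p1 - p0
        length = max(np.linalg.norm(chord), 1e-12)
        rel = points[lo + 1:hi] - p0
        # Perpendicular distance = |2D cross product| / chord length.
        dists = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / length
        k = int(np.argmax(dists))
        if dists[k] <= threshold:
            return []
        mid = lo + 1 + k                           # farthest point becomes a vertex
        return recurse(lo, mid) + [mid] + recurse(mid, hi)

    return [0] + recurse(0, len(points) - 1) + [len(points) - 1]
```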
Figure 8.31 (a) Original boundary. (b) Boundary subdivided along furthest points. (c) Joining of vertices by straight line segments. (d) Resulting polygon.
We point out before leaving this section that a considerable amount of work
has been done in the development of techniques which combine merging and split-
ting. A comprehensive discussion of these methods is given by Pavlidis [1977].
Shape Numbers. The shape number of a chain-coded boundary, based on the 4-directional code of Fig. 8.26a, is defined as the first difference of smallest magnitude. The order, n, of a shape number is defined as the number of digits in its
representation. It is noted that n is even for a closed boundary, and that its value
limits the number of possible different shapes. Figure 8.32 shows all the shapes of order 4 and 6, along with their chain codes, first differences,
and corresponding shape numbers. Note that the first differences were computed
by treating the chain codes as a circular sequence in the manner discussed earlier.
In practice, the shape number of a chain-coded boundary in general will depend on the orientation of the coding grid shown in Fig. 8.27a. One way to normalize the grid orientation is as follows. The major
axis of a boundary is the straight-line segment joining the two points furthest away
from each other. The minor axis is perpendicular to the major axis and of length
such that a box could be formed that just encloses the boundary. The ratio of the
major to minor axis is called the eccentricity of the boundary, and the rectangle
just described is called the basic rectangle. In most cases a unique shape number
will be obtained by aligning the chain-code grid with the sides of the basic rectan-
Figure 8.32 Shapes of order 4 and 6 (with an example of order 8) and their descriptors. Order 4: chain code 0321, first difference 3333, shape number 3333. Order 6: chain code 003221, first difference 303303, shape number 033033.
gle. Freeman and Shapira [1975] give an algorithm for finding the basic rectangle
of a closed, chain-coded curve.
In practice, given a desired shape order, we find the rectangle of order n
whose eccentricity best approximates that of the basic rectangle, and use this new
rectangle to establish the grid size. For example, if n = 12, all the rectangles of
order 12 (i.e., those whose perimeter length is 12) are 2 x 4, 3 x 3, and 1 x 5. If
the eccentricity of the 2 x 4 rectangle best matches the eccentricity of the basic rectangle, we establish a 2 x 4 grid centered on the basic rectangle and use the procedure already outlined to obtain the chain code. The
shape number follows from the first difference of this code, as indicated above.
Although the order of the resulting shape number will usually be equal to n
because of the way the grid spacing was selected, boundaries with depressions comparable with this spacing will sometimes yield shape numbers of order greater
than n. In this case, we specify a rectangle of order lower than n and repeat the procedure; we then obtain the new chain code and use its first difference to compute the shape number, as shown
in Fig. 8.33d.
Fourier Descriptors. The coordinates of the points along a boundary can be treated as complex numbers of the form x + jy. The discrete Fourier transform of this sequence of complex numbers, F(u), yields the Fourier descriptors of the boundary and can be computed using an FFT algorithm, as discussed in Sec. 7.6.1. The motivation for
this approach is that only the first few components of F(u) are generally required
Figure 8.34 Representation of boundary points as complex numbers (real axis, imaginary axis).
to distinguish between shapes that are reasonably distinct. For example, the
objects shown in Fig. 8.35 can be differentiated by using less than 10 percent of their Fourier descriptors. Simple geometric changes of a boundary are reflected in simple transformations of F(u). For example, if all the coefficients F(u) are multiplied by a constant, then, by the linearity of the Fourier transform pair, this is equivalent to multiplying (scaling) the boundary by the same factor.
Rotation by an angle θ is similarly handled by multiplying the elements of F(u) by exp(jθ). Finally, it can be shown that shifting the starting point of the contour in the spatial domain corresponds to multiplying the kth coefficient F(k) by exp(jkT), where T is in the interval [0, 2π]. As T goes from 0 to 2π, the start-
ing point traverses the entire contour once. This information can be used as the
basis for normalization (Gonzalez and Wintz [1977]).
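A minimal Python sketch of Fourier descriptors along these lines (treating boundary points as complex numbers and truncating the DFT) is shown below; the function names and the use of NumPy's FFT are assumptions of this sketch, not the book's notation.

```python
import numpy as np

def fourier_descriptors(boundary, keep):
    """Fourier descriptors of a closed boundary given as an (N, 2) array of (x, y)
    points; only the `keep` lowest-frequency coefficients are retained."""
    z = boundary[:, 0] + 1j * boundary[:, 1]       # treat each point as x + jy
    F = np.fft.fft(z)
    freqs = np.fft.fftfreq(len(F))                 # low |frequency| first (incl. negative)
    order = np.argsort(np.abs(freqs))
    truncated = np.zeros_like(F)
    truncated[order[:keep]] = F[order[:keep]]
    return F, truncated

def reconstruct(truncated):
    """Approximate boundary from the truncated descriptors (inverse DFT)."""
    z = np.fft.ifft(truncated)
    return np.column_stack((z.real, z.imag))
```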
Figure 8.35 Two shapes easily distinguishable by Fourier descriptors. (From Persoon and
Fu [1977], ©IEEE.)
8.3.2 Regional Descriptors

The area of a region is defined as the number of pixels contained within its boundary. This is a useful descriptor when the viewing geometry is fixed and
objects are always analyzed approximately the same distance from the camera. A
typical application is the recognition of objects moving on a conveyor belt past a
vision station.
The major and minor axes of a region are defined in terms of its boundary
(see Sec. 8.3.1) and are useful for establishing the orientation of an object. The
ratio of the lengths of these axes, called the eccentricity of the region, is also an
important global descriptor of its shape.
The perimeter of a region is the length of its boundary. Although the perime-
by a curve lying entirely in the region. For a set of connected regions, some of
which may have holes, it is useful to consider the Euler number as a descriptor.
The Euler number is defined simply as the number of connected regions minus the
number of holes. As an example, the Euler numbers of the letters A and B are 0 and −1, respectively.
Figure 8.36 Examples of (a) smooth, (b) coarse, and (c) regular texture.
Texture. One of the simplest approaches for describing texture is to use moments of the intensity histogram of an image or region. Let z be a random variable denoting intensity and let p(zi), i = 1, 2, ..., L, be the corresponding histogram, where L is the number of distinct intensity levels. The nth moment of z about the mean is given by

μn(z) = Σ_{i=1}^{L} (zi − m)^n p(zi)    (8.3-3)

where m is the mean (average) intensity:

m = Σ_{i=1}^{L} zi p(zi)    (8.3-4)

The second moment, the variance σ²(z) = μ2(z), is of particular importance because it is a measure of intensity contrast. For example, the measure

R = 1 − 1/(1 + σ²(z))    (8.3-5)
is 0 for areas of constant intensity [σ²(z) = 0 if all zi have the same value] and approaches 1 for large values of σ²(z). The third moment is a measure of the
skewness of the histogram while the fourth moment is a measure of its relative
flatness. The fifth and higher moments are not so easily related to histogram
shape, but they do provide further quantitative discrimination of texture content.
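A short Python sketch of these histogram-based texture measures, Eqs. (8.3-3) to (8.3-5), assuming an 8-bit intensity image (the function name is ours):

```python
import numpy as np

def histogram_texture_measures(region, levels=256):
    """Histogram moments of an intensity region: mean, variance, third moment,
    and the smoothness measure R of Eq. (8.3-5)."""
    counts = np.bincount(region.ravel(), minlength=levels)
    p = counts / counts.sum()                 # normalized histogram p(z_i)
    z = np.arange(levels)
    m = np.sum(z * p)                         # mean intensity, Eq. (8.3-4)
    mu2 = np.sum((z - m) ** 2 * p)            # variance sigma^2(z)
    mu3 = np.sum((z - m) ** 3 * p)            # third moment (skewness)
    R = 1.0 - 1.0 / (1.0 + mu2)               # Eq. (8.3-5)
    return m, mu2, mu3, R
```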
Measures of texture computed using only histograms suffer from the limitation
that they carry no information regarding the relative position of pixels with respect
to each other. One way to bring this type of information into the texture analysis
process is to consider not only the distribution of intensities but also the positions
of pixels with equal or nearly equal intensity values. Let P be a position operator
and let A be a k x k matrix whose element aij is the number of times that points with intensity zi occur (in the position specified by P) relative to points with intensity zj, with 1 ≤ i, j ≤ k. For instance, consider an image with three intensities,
z1 = 0, z2 = 1, and z3 = 2, as follows:
0 0 0 1 2
1 1 0 1 1
2 2 1 0 0
1 1 0 2 0
0 0 1 0 1
If we define the position operator P as "one pixel to the right and one pixel
below," then we obtain the following 3 x 3 matrix A:
4 2 1
A = 2 3 2
0 2 0
where, for example, a11 (top left) is the number of times that a point with intensity
level z1 = 0 appears one pixel location below and to the right of a pixel with the
same intensity, while a13 (top right) is the number of times that a point with level
z1 = 0 appears one pixel location below and to the right of a point with intensity
z3 = 2. It is important to note that the size of A is determined strictly by the
number of distinct intensities in the input image. Thus, application of the concepts
discussed in this section usually require that intensities be requantized into a few
bands in order to keep the size of A manageable.
Let n be the total number of point pairs in the image which satisfy P (in the
above example n = 16). If we define a matrix C formed by dividing every ele-
ment of A by n, then cij is an estimate of the joint probability that a pair of points
satisfying P will have values (zi, z1). The matrix C is called a gray-level co-
occurrence matrix, where "gray level" is used interchangeably to denote the
intensity of a monochrome pixel or image. Since C depends on P, it is possible to
detect the presence of given texture patterns by choosing an appropriate position
operator. For instance, the operator used in the above example is sensitive to
bands of constant intensity running at -45 ° (note that the highest value in A was
a11 = 4, partially due to a streak of points with intensity 0 and running at
-45°). In a more general situation, the problem is to analyze a given C matrix
in order to categorize the texture of the region over which C was computed. A
set of descriptors proposed by Haralick [1979] includes the following:

1. Maximum probability: max_{i,j} (cij)

2. Element-difference moment of order k: Σ_i Σ_j (i − j)^k cij

3. Inverse element-difference moment of order k: Σ_i Σ_j cij/(i − j)^k,  i ≠ j

4. Entropy: −Σ_i Σ_j cij log cij

5. Uniformity: Σ_i Σ_j cij²
The basic idea is to characterize the "content" of C via these descriptors. For
example, the first property gives an indication of the strongest response to P (as in
the above example). The second descriptor has a relatively low value when the
high values of C are near the main diagonal since the differences (i - j) are
smaller there. The third descriptor has the opposite effect. The fourth descriptor
is a measure of randomness, achieving its highest value when all elements of C
are equal. Conversely, the fifth descriptor is lowest when the cij are all equal.
One approach for using these descriptors is to "teach" a system representative
descriptor values for a set of different textures. The texture of an unknown region
is then subsequently determined by how closely its descriptors match those stored
in the system memory. This approach is discussed in more detail in Sec. 8.4.
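A Python sketch of the co-occurrence computation for the position operator used in the example, together with the five descriptors listed above, follows; the function names are ours, and the descriptor formulas follow the reconstructed list, so treat them as illustrative.

```python
import numpy as np

def cooccurrence(image, levels):
    """Matrix A for the position operator 'one pixel below and one to the right':
    a_ij counts points of intensity z_i located below-right of points of intensity z_j.
    Returns A and the normalized matrix C = A / n."""
    A = np.zeros((levels, levels), dtype=int)
    rows, cols = image.shape
    for r in range(rows - 1):
        for c in range(cols - 1):
            A[image[r + 1, c + 1], image[r, c]] += 1
    return A, A / A.sum()

def cooccurrence_descriptors(C, k=2):
    i, j = np.indices(C.shape)
    off_diag = i != j
    return {
        "max_probability": C.max(),
        "element_difference_moment": np.sum((i - j) ** k * C),
        "inverse_element_difference_moment":
            np.sum(C[off_diag] / (i[off_diag] - j[off_diag]) ** k),
        "entropy": -np.sum(C[C > 0] * np.log(C[C > 0])),
        "uniformity": np.sum(C ** 2),
    }

img = np.array([[0, 0, 0, 1, 2],
                [1, 1, 0, 1, 1],
                [2, 2, 1, 0, 0],
                [1, 1, 0, 2, 0],
                [0, 0, 1, 0, 1]])
A, C = cooccurrence(img, levels=3)     # A[0, 0] is 4, as in the text's example
```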
Structural concepts may also be used to describe texture. Suppose that we have a rule of the form S → aS, where a represents a circular texture primitive (Fig. 8.37a) and a string aaa ... means "circles to the right." Then the rule S → aS allows us to generate a texture pattern of the form shown in Fig.
8.37b.
Suppose next that we add some new rules to this scheme: S → bA, A → cA, A → c, A → bS, S → a, such that the presence of a b means "circle down" and
the presence of a c means "circle to the left." We can now generate a string of the
form aaabccbaa which corresponds to a three-by-three matrix of circles. Larger
texture patterns, such as the one shown in Fig. 8.37c can easily be generated in the
same way. (It is noted, however, that these rules can also generate structures that
are not rectangular).
The basic idea in the foregoing discussion is that a simple "texture primitive"
can be used to form more complex texture patterns by means of some rules which
limit the number of possible arrangements of the primitive(s). These concepts lie
at the heart of structural pattern generation and recognition, a topic which will be
treated in considerably more detail in Sec. 8.5.
Figure 8.37 (a) Texture primitive. (b) Pattern generated by the rule S → aS. (c) Two-dimensional texture pattern generated by this plus other rules.
Skeleton of a Region. One way to represent the shape of a region is via its skeleton, obtained by the medial axis transformation (MAT): for each point p in a region R with border B, we find its closest neighbor in B, and if p has more than one such neighbor it is said to belong to the medial axis (skeleton) of R. The notion of "closest" depends on the definition of a distance, and the results of a MAT operation are therefore influenced by the choice of a given metric. Some examples using the euclidean
distance are shown in Fig. 8.38.
Although the MAT of a region yields an intuitively pleasing skeleton, a direct
implementation of the above definition is typically prohibitive from a computational
point of view because it potentially involves calculating the distance from every
interior point to every point on the boundary of a region. A number of algorithms
have been proposed for improving computational efficiency while, at the same
time, attempting to produce a medial axis representation of a given region.
Typically, these are thinning algorithms that iteratively delete edge points of a
region subject to the constraints that the deletion of these points (1) does not
remove endpoints, (2) does not break connectedness, and (3) does not cause exces-
sive erosion of the region. Although some attempts have been made to use skele-
tons in gray-scale images (Dyer and Rosenfeld [1979], Salari and Siy [1984]), the discussion here is limited to binary data. The thinning algorithm presented below was developed by Naccache and Shinghal [1984] and, as will be seen below, yields skeletons that are in many cases superior to those
obtained with other thinning algorithms. We begin the development with a few
definitions. Assuming binary data, region points will be denoted by 1's and back-
ground points by 0's. These will be called dark and light points, respectively. An
edge point is a dark point which has at least one light 4-neighbor. An endpoint is
a dark point which has one and only one dark 8-neighbor. A breakpoint is a dark
point whose deletion would break connectedness. As is true with all thinning algo-
rithms, noise and other spurious variations along the boundary can significantly
alter the resulting skeleton (Fig. 8.38b shows this effect quite clearly). Conse-
quently, it is assumed that the boundaries of all regions have been smoothed prior
to thinning by using, for example, the procedure discussed in Sec. 7.6.2.
With reference to the neighborhood arrangement shown in Fig. 8.39, the thin-
ning algorithm identifies an edge point p as one or more of the following four
types: (1) a left edge point having its left neighbor n4 light; (2) a right edge point having n0 light; (3) a top edge point having n2 light; and (4) a bottom edge point
having n6 light. It is possible for p to be classified into more than one of these
types. For example, a dark point p having n0 and n4 light will be a right edge
point and a left edge point simultaneously. The following discussion initially
addresses the identification (flagging) of left edge points that should be deleted.
The procedure is then extended to the other types.
An edge point p is flagged if it is not an endpoint or breakpoint, or if its dele-
tion would cause excessive erosion (as discussed below). The test for these condi-
tions is carried out by comparing the 8-neighborhood of p against the windows
shown in Fig. 8.40, where p and the asterisk are dark points and d and e are
"don't care" points; that is, they can be either dark or light. If the neighborhood
of p matches windows (a) to (c), two cases may arise: (1) If all d's are light, then
p is an endpoint, or (2) if at least one of the d's is dark, then p is a breakpoint.
In either case, p should not be flagged. Similarly, if the neighborhood of p matches window (d) and the d's and e are dark, then p is a breakpoint and should not be flagged. Other arrange-
ments need to be considered, however. Suppose that all d's are light and the e's
can be either dark or light. This condition yields the eight possibilities shown in
Fig. 8.41. Configurations (a) through (c) make p an endpoint, and for several of the remaining configurations it is easy to show by example that deletion of p would cause excessive erosion in slanting regions of the object, so such points should
n3  n2  n1
n4  p   n0
n5  n6  n7
Figure 8.39 Notation for the neighbors of p used by the thinning algorithm. (From Nac-
cache and Shinghal [1984], ©IEEE.)
Figure 8.40 If the 8-neighborhood of a dark point p matches any of the above windows, then p is not flagged. The asterisk denotes a dark point, and d and e can be either dark or light. (From Naccache and Shinghal [1984], © IEEE.)
not be deleted. Finally, if all isolated points are removed initially, the appearance
of configuration (h) during thinning indicates that a region has been reduced to a
single point; its deletion would erase the last remaining portion of the region.
Similar arguments apply if the roles of d and e were reversed or if the d's and e's
were allowed to assume dark and light values. The essence of the preceding dis-
cussion is that any left edge point p whose 8-neighborhood matches any of the
windows shown in Fig. 8.40 should not be flagged.
Testing the 8-neighborhood of p against the four windows in Fig. 8.40 has a compact formulation as a boolean expression, denoted B4 in Eq. (8.3-6),
Figure 8.41 All the configurations that could exist if d is light in Fig. 8.40, and e can be
dark, *, or light. (From Naccache and Shinghal [1984], ©IEEE.)
where the subscript on B indicates that n4 is light (i.e., p is a left edge point), "·" is the logical AND, "+" is the logical OR, the overbar is the logical COMPLEMENT, and the n's are as defined in Fig. 8.39. Equation (8.3-6) is evaluated by letting dark, previously unflagged points be valued 1 (TRUE), and light or flagged points be valued 0 (FALSE). Then if B4 is 1 (TRUE), we flag p. Otherwise, p is
left unflagged. It is not difficult to show that these conditions on B4 implement all
four windows in Fig. 8.40 simultaneously.
Similar expressions (B0, B2, and B6) are obtained for right, top, and bottom edge points, respectively.
Using the above expressions, the thinning algorithm iteratively performs two
scans through the data. The scanning sequence can be either along the rows or
columns of the image, but the choice will generally affect the final result. In the
first scan we use B4 and B0 to flag left and right edge points; in the second scan
we use B2 and B6 to flag top and bottom edge points. If no new edge points were
flagged during the two scans, the algorithm stops, with the unflagged points consti-
tuting the skeleton; otherwise, the procedure is repeated. It is again noted that
previously flagged dark points are treated as 0 in evaluating the boolean expres-
sions. An alternate procedure is to set any flagged point at zero during execution
of the algorithm, thus producing only skeleton and background points at the end.
This approach is easier to implement, at the cost of losing all other points in the
region.
Example: Figure 8.42a shows a binary region, and Fig. 8.42b shows the
skeleton obtained by using the algorithm developed above. As a point of
interest, Fig. 8.42c shows the skeleton obtained by applying to the same data
another, well-known thinning algorithm (Pavlidis [1982]). The fidelity of the
skeleton in Fig. 8.42b over that shown in Fig. 8.42c is evident.
Moment Invariants. It was noted in Sec. 8.3.1 that Fourier descriptors which are
insensitive to translation, rotation, and scale change can be used to describe the
boundary of a region. When the region is given in terms of its interior points, we
can describe it by a set of moments which are invariant to these effects.
Let f(x, y) represent the intensity at point (x, y) in a region. The moment of
order (p + q) for the region is defined as
mpq = Σ_x Σ_y x^p y^q f(x, y)    (8.3-10)
Figure 8.42 (a) Binary region. (b) Skeleton obtained using the thinning algorithm discussed
in this section. (c) Skeleton obtained by using another algorithm. (From Naccache and
Shinghal [1984], ©IEEE.)
where the summation is taken over all spatial coordinates (x, y) of points in the
region. The central moment of order (p + q) is given by

μpq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y)    (8.3-11)

where

x̄ = m10/m00    ȳ = m01/m00    (8.3-12)

The normalized central moments, denoted ηpq, are defined as

ηpq = μpq/μ00^γ    (8.3-13)

where

γ = (p + q)/2 + 1    for p + q = 2, 3, ...    (8.3-14)

The following set of moment invariants can be derived using only the normalized central moments of orders 2 and 3; the first of the set is

φ1 = η20 + η02    (8.3-15)
This set of moments has been shown to be invariant to translation, rotation, and
scale change (Hu [1962]).
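A compact Python sketch of φ1 computed from Eqs. (8.3-10) to (8.3-15); the function name and the row/column-to-(x, y) convention are our own assumptions.

```python
import numpy as np

def phi1(f):
    """First moment invariant, phi_1 = eta_20 + eta_02, for an image region f(x, y)."""
    y, x = np.indices(f.shape).astype(float)    # rows treated as y, columns as x
    f = f.astype(float)
    m00 = f.sum()
    x_bar = (x * f).sum() / m00                 # centroid, Eq. (8.3-12)
    y_bar = (y * f).sum() / m00

    def mu(p, q):                               # central moment, Eq. (8.3-11)
        return ((x - x_bar) ** p * (y - y_bar) ** q * f).sum()

    def eta(p, q):                              # normalized central moment, Eqs. (8.3-13)/(8.3-14)
        gamma = (p + q) / 2 + 1
        return mu(p, q) / m00 ** gamma

    return eta(2, 0) + eta(0, 2)                # Eq. (8.3-15)
```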
Attention was focused in the previous two sections on techniques for segmenting
and describing two-dimensional structures. In this section we consider the problem
of performing these tasks on three-dimensional (3D) scene data.
dimensional scene information. Although research in this area spans more than a
10-year history, we point out that factors such as cost, speed, and complexity have
inhibited the use of three-dimensional vision techniques in industrial applications.
nates, as well as intensity information about each point. In this case, we represent
each point in the form f(x, y, z), where the value of f at (x, y, z) gives the
intensity of that point (the term voxel is often used to denote a 3D point and its
intensity). Finally, we may infer 3D relationships from a single two-dimensional
image of a scene. In other words, it is often possible to deduce relationships
between objects such as "above," "behind," and "in front of." Since the exact 3D
location of scene points generally cannot be computed from a single view, the rela-
tionships obtained from this type of analysis are sometimes referred to as 2½D
information.
One approach for obtaining surface descriptions from an array of 3D points consists of subdividing the space containing the points into cells and grouping points according to the cell which contains them. Then, we fit a plane to
the group of points in each cell and calculate a unit vector which is normal to the
plane and passes through the centroid of the group of points in that cell. A planar
Figure 8.43 Three-dimensional surface description based on planar patches. (From Shirai.)
patch is established by the intersection of the plane and the walls of the cell, with
the direction of the patch being given by the unit normal, as illustrated in Fig.
8.43c. All patches whose directions are similar within a specified threshold are
grouped into elementary regions (R), as shown in Fig. 8.43d. These regions are
then classified as planar (P), curved (C), or undefined (U) by using the directions
of the patches within each region (for example, the patches in a planar surface will
all point in essentially the same direction). This type of region classification is
illustrated in Fig. 8.43e. Finally (and this is the hardest step), the classified
regions are assembled into global surfaces by grouping adjacent regions of the
same classification, as shown in Fig. 8.43f. It is noted that, at the end of this pro-
cedure, the scene has been segmented into distinct surfaces, and that each surface
has been assigned a descriptor (e.g., curved or planar).
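A minimal Python sketch of the first step of this procedure, fitting a least-squares plane to the points of one cell and returning its centroid and unit normal, is given below. The point array, the function name, and the use of a singular-value decomposition are assumptions made only for illustration.

```python
import numpy as np

def planar_patch(points):
    """Fit a plane to the 3D points of one cell; return (centroid, unit normal).

    points: (N, 3) array of (x, y, z) coordinates falling inside the cell.
    The least-squares plane passes through the centroid, and its normal is
    the right singular vector associated with the smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

# Example: noisy points lying roughly on the plane z = 0.1x + 0.2y.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(50, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 0.01 * rng.standard_normal(50)
c, n = planar_patch(np.column_stack([xy, z]))
print(c, n)
```

Patches whose normals agree within an angular threshold would then be grouped into the elementary regions described above.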
An alternative approach, based on gradient operators, also yields planar
patch representations (similar to those discussed in Sec. 8.4.1) which can then be
combined to form surface descriptors. As indicated in Sec. 7.6.4, the gradient
vector points in the direction of maximum rate of change of a function, and the
magnitude of this vector is proportional to the strength of that change. These con-
cepts are just as applicable in three dimensions and they can be used to segment
3D structures in a manner analogous to that used for two-dimensional data.
Given a function f(x, y, z), its gradient vector at coordinates (x, y, z) is
given by
G[f(x, y, z)] = [Gx, Gy, Gz]^T = [∂f/∂x, ∂f/∂y, ∂f/∂z]^T        (8.4-1)
The same operator oriented along the y axis is used to compute Gy, and oriented
along the z axis to compute Gz. A key property of these operators is that they
yield the best (in a least-squares error sense) planar edge between two regions of
different intensities in a 3D neighborhood.
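The sketch below, in Python, approximates Gx, Gy, and Gz with simple central differences (np.gradient) rather than with the specific operator of Fig. 8.44, and forms the gradient magnitude; it assumes the scene is given as a 3D NumPy array of voxel intensities, and the function name is an assumption.

```python
import numpy as np

def gradient_3d(f):
    """Approximate G = (Gx, Gy, Gz) and its magnitude for a 3D voxel array f.

    np.gradient uses central differences in the interior, which plays the
    role of the operator responses; the magnitude corresponds to the
    quantity referred to by Eqs. (8.4-2)/(8.4-3) in the text.
    """
    gx, gy, gz = np.gradient(f.astype(float))
    magnitude = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    return gx, gy, gz, magnitude

# Example: a voxel array containing a planar step edge between two regions.
f = np.zeros((8, 8, 8))
f[:, :, 4:] = 10.0
_, _, gz, mag = gradient_3d(f)
print(mag.max())   # the strongest response lies on the planar edge
```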
The center of each operator is moved from voxel to voxel and applied in
exactly the same manner as their two-dimensional counterparts, as discussed in
Sec. 7.6.4. That is, the responses of these operators at any point (x, y, z) yield
Gx, Gy, and Gz, which are then substituted into Eq. (8.4-1) to obtain the gradient
vector at (x, y, z) and into Eq. (8.4-2) or (8.4-3) to obtain the magnitude. It is of
interest to note that the operator shown in Fig. 8.44 yields a zero output in a region of constant intensity.
Figure 8.45 Planar patch approximation of a cube using the gradient. (From Zucker and
Hummel [1981], ©IEEE.)
An example of the planar patch approximation of a cube obtained using the gradient operators is shown in Fig. 8.45. Since each planar patch sur-
face passes through the center of a voxel, the borders of these patches may not
always coincide. Patches that coincide are shown as larger uniform regions in Fig.
8.45.
Once patches have been obtained, they can be grouped and described in the
form of global surfaces as discussed in Sec. 8.4.1. Note, however, that additional
information in the form of intensity and intensity discontinuities is now available to
aid the merging and description process.
Once a scene has been segmented into surfaces and the edges between them, a finer description of a scene may be
obtained by labeling the lines corresponding to these edges and the junctions which
they form.
As illustrated in Fig. 8.46, we consider three basic types of lines. A convex line
(labeled +) is formed by the intersection of two surfaces which are part of a con-
vex solid (e.g., the line formed by the intersection of two sides of a cube). A con-
cave line (labeled -) is formed by the intersection of two surfaces belonging to
two different solids (e.g., the intersection of one side of a cube with the floor). An
occluding line (labeled with an arrow) is the edge of a surface which obscures a
Figure 8.46 Examples of line labels: a convex line, a concave line, and an occluding line (shown relative to the floor).
surface. The occluding surface is to the right of the line looking in the direction of
the arrow, and the occluded surface is to the left.
After the lines in a scene have been labeled, their junctions provide clues as to
the nature of the 3D solids in the scene. Physical constraints allow only a few
possible combinations of line labels at a junction. For example, in a polyhedral
scene, no line can change its label between vertices. Violation of this rule leads to
impossible physical objects, as illustrated in Fig. 8.47.
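The rule that no line may change its label between vertices can be checked with very little code. The sketch below is illustrative only; the representation of a line as the pair of labels assigned at its two end junctions is an assumption made for the example.

```python
def consistent(line_labels):
    """line_labels: dict mapping a line id to the labels ('+', '-', or an
    occlusion arrow) assigned at its two end junctions.  A polyhedral scene
    is physically possible only if every line keeps a single label."""
    return all(a == b for a, b in line_labels.values())

good = {"e1": ("+", "+"), "e2": ("-", "-")}
bad = {"e1": ("+", "+"), "e2": ("arrow", "+")}   # label changes between vertices
print(consistent(good), consistent(bad))          # True False
```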
The key to using junction analysis is to form a dictionary of allowed junction
types. For example, it is easily shown that the junction dictionary shown in Fig.
8.48 contains all valid labeled vertices of trihedral solids (i.e., solids in which
exactly three plane surfaces come together at each vertex). Once the junctions in a
scene have been classified according to their match in the dictionary, the objective
is to group the various surfaces into objects. This is typically accomplished via a
set of heuristic rules designed to interpret the labeled lines and sequences of neigh-
boring junctions. The basic concept underlying this approach can be illustrated
with the aid of Fig. 8.49. We note in Fig. 8.49b that the blob is composed
entirely of an occluding boundary, with the exception of a short concave line,
Figure 8.47 An impossible physical object. Note that one of the lines changes label from
occluding to convex between two vertices.
indicating where it touches the base. Thus, there is nothing in front of it and it
can be extracted from the scene. We also note that there is a vertex of type (10)
from the dictionary in Fig. 8.48. This is strong evidence (if we know we are deal-
ing with trihedral objects) that the three surfaces involved in that vertex form a
cube. Similar comments apply to the base after the cube surfaces are removed.
Removing the base leaves the single object in the background, which completes the
decomposition of the scene.
Although the preceding short explanation gives an overall view of how line
and junction analysis are used to describe 3D objects in a scene, we point out that
formulation of an algorithm capable of handling more complex scenes is far from a
trivial task. Several comprehensive efforts in this area are referenced at the end of
this chapter.
Figure 8.49 (a) Scene. (b) Labeled lines. (c) Decomposition via line and junction analysis.
For example, one sweeping rule keeps the cross section constant and then allows it to increase linearly past the midpoint of the
spine.
Figure 8.50 Cross sections, spines, and their corresponding generalized cones. In (a) the
cross section remained constant during the sweep, while in (b) its diameter increased during the sweep.
8.5 RECOGNITION
Recognition is a labeling process; that is, the function of recognition algorithms is
to identify each segmented object in a scene and to assign a label (e.g., wrench,
seal, bolt) to that object. For the most part, the recognition stages of present
industrial vision systems operate on the assumption that objects in a scene have
been segmented as individual units. Another common constraint is that images be
acquired in a known viewing geometry (generally perpendicular to the workspace). With
few exceptions, the procedures discussed in this section are generally used to
recognize two-dimensional object representations.
Decision-theoretic recognition is based on the use of decision functions. Given M pattern classes w1, w2, . . . , wM, the basic problem is to find M decision functions d1(x), d2(x), . . . , dM(x) with the property that the following relationship holds for any pattern vector x* belonging to class wi:

d_i(x*) > d_j(x*)        j = 1, 2, . . . , M;  j ≠ i        (8.5-1)
One simple way of establishing decision functions is through prototype matching. Suppose that we represent each object class by a prototype (or average)
vector:
m_i = (1/N) Σ_{k=1}^{N} x_k        i = 1, 2, . . . , M        (8.5-2)
where the xk are sample vectors known to belong to class w;. Given an unknown
x*, one way to determine its class membership is to assign it to the class of its
closest prototype. This can be accomplished by evaluating the decision functions

d_i(x*) = (x*)^T m_i − (1/2) m_i^T m_i        i = 1, 2, . . . , M        (8.5-3)
and selecting the largest value. This formulation agrees with the concept of a
decision function, as defined in Eq. (8.5-1).
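A minimal Python sketch of this prototype-matching scheme, following Eqs. (8.5-2) and (8.5-3) as given above, is shown below; the sample data and the class names (wrench, bolt) are hypothetical and serve only to illustrate the computation.

```python
import numpy as np

def prototypes(samples_by_class):
    """Eq. (8.5-2): the mean (prototype) vector of each class."""
    return {w: np.mean(xs, axis=0) for w, xs in samples_by_class.items()}

def classify(x, protos):
    """Assign x to the class whose decision function of Eq. (8.5-3) is largest;
    this is equivalent to choosing the closest prototype."""
    scores = {w: x @ m - 0.5 * (m @ m) for w, m in protos.items()}
    return max(scores, key=scores.get)

samples = {
    "wrench": np.array([[2.0, 8.0], [3.0, 9.0], [2.5, 8.5]]),
    "bolt":   np.array([[7.0, 1.0], [8.0, 2.0], [7.5, 1.5]]),
}
protos = prototypes(samples)
print(classify(np.array([6.5, 2.0]), protos))   # -> "bolt"
```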
A second approach based on matching is correlation: we wish to find the location of a given subimage (template)
w(x, y) in a larger image f(x, y). At each location (x, y) of f(x, y) we define
the correlation coefficient as

γ(x, y) = Σ_s Σ_t [f(s, t) − m_f][w(s, t) − m_w] / {Σ_s Σ_t [f(s, t) − m_f]² Σ_s Σ_t [w(s, t) − m_w]²}^½        (8.5-5)
where it is assumed that w(s, t) is centered at coordinates (x, y). The summa-
tions are taken over the image coordinates common to both regions, m_w is the average intensity of the template w, and m_f is the average intensity of f in the region coincident with w. It is noted that, in general, γ(x, y) will vary from one location to the
next and that its values are in the range [−1, 1], with a value of 1 corresponding to a perfect match.
Figure 8.51 (a) Subimage w(x, y). (b) Image f(x, y). (c) Location of the best match of w in
f, as determined by the largest correlation coefficient.
The procedure, then, is to compute γ(x, y) at each location
(x, y) and to select its largest value to determine the best match of w in f [the
procedure of moving w(x, y) throughout f(x, y) is analogous to Fig. 7.20].
The quality of the match can be controlled by accepting a correlation
coefficient only if it exceeds a preset value (for example, .9). Since this method
consists of directly comparing two regions, it is clearly sensitive to variations in
object size and orientation. Variations in intensity are normalized by the denomi-
nator in Eq. (8.5-5). An example of matching by correlation is shown in Fig.
8.51.
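The sketch below illustrates matching by correlation in Python: the correlation coefficient of Eq. (8.5-5) is evaluated at every placement of w inside f and the largest value is retained. The brute-force search and the small test arrays are assumptions made only for illustration.

```python
import numpy as np

def best_match(f, w):
    """Slide w over f, evaluate the correlation coefficient at each placement,
    and return the top-left corner (x, y) of the best match and its gamma."""
    fh, fw = f.shape
    wh, ww = w.shape
    w0 = w - w.mean()
    best, best_pos = -2.0, (0, 0)
    for y in range(fh - wh + 1):
        for x in range(fw - ww + 1):
            patch = f[y:y + wh, x:x + ww]
            p0 = patch - patch.mean()
            denom = np.sqrt((p0 ** 2).sum() * (w0 ** 2).sum())
            gamma = (p0 * w0).sum() / denom if denom > 0 else 0.0
            if gamma > best:
                best, best_pos = gamma, (x, y)
    return best_pos, best

f = np.zeros((12, 12))
f[4:7, 5:9] = np.array([[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]], dtype=float)
w = f[4:7, 5:9].copy()          # template cut from the image
print(best_match(f, w))         # best corner (5, 4) with gamma = 1.0
```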
An example of coding an object boundary in terms of primitive elements is shown in Fig.
8.52. Part (a) of this figure shows a simple object boundary, and Fig. 8.52b
shows a set of primitive elements of specified length and direction. By starting at
the top left, tracking the boundary in a clockwise direction, and identifying
instances of these primitives, we obtain the coded boundary shown in Fig. 8.52c.
Figure 8.52 (a) Object boundary. (b) Primitives. (c) Boundary coded in terms of primitives,
resulting in the string aaabcbbbcdddcd.
The degree of similarity, k, between two shapes is defined as the largest order for which their shape numbers still coincide. That is, we have s4(A) = s4(B),
s6(A) = s6(B), s8(A) = s8(B), . . . , sk(A) = sk(B), sk+2(A) ≠ sk+2(B), sk+4(A) ≠ sk+4(B), . . . , where s indicates shape number and the subscript indi-
cates the order. The distance between two shapes A and B is defined as the
inverse of their degree of similarity:
D(A, B) = 1/k        (8.5-6)
If the degree of similarity is used to compare two shapes, then the larger k is, the more similar the shapes are (note that k is infinite for identical shapes). The reverse is true when the distance measure is used.
Proceeding down the tree, we find that shape D has degree 8 with respect
to the remaining shapes, and so on. In this particular case, shape F turned out to be a unique match, with a degree of similarity
higher than any of the other shapes. If E had been the unknown, a unique
match would have also been found, but with a lower degree of similarity. If
A had been the unknown, all we could have said using this method is that it is
similar to the other five figures with degree 6. The same information can be
summarized in the form of a similarity matrix, as shown in Fig. 8.53c.
Similarity matrix (Fig. 8.53c); ∞ denotes identical shapes:

      A    B    C    D    E    F
A     ∞    6    6    6    6    6
B          ∞    8    8   10    8
C               ∞    8    8   12
D                    ∞    8    8
E                         ∞    8
F                              ∞
Figure 8.53 (a) Shapes. (b) Similarity tree. (c) Similarity matrix. (From Bribiesca and
Guzman [1980], ©Pergamon Press.)
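A small Python sketch of this comparison is given below; the dictionaries holding shape numbers indexed by order, and the numeric values used, are hypothetical and serve only to illustrate the computation of k and of D(A, B) = 1/k.

```python
def degree_of_similarity(shape_a, shape_b):
    """shape_a, shape_b: dicts mapping order (4, 6, 8, ...) to the shape number
    (a tuple of digits) at that order.  k is the largest order at which the two
    shape numbers still coincide."""
    k = 0
    for order in sorted(set(shape_a) & set(shape_b)):
        if shape_a[order] == shape_b[order]:
            k = order
        else:
            break
    return k

# Hypothetical shape numbers for two boundaries (orders 4, 6, and 8 only).
A = {4: (0, 0, 0, 0), 6: (0, 0, 1, 0, 0, 1), 8: (0, 3, 0, 1, 0, 3, 0, 1)}
B = {4: (0, 0, 0, 0), 6: (0, 0, 1, 0, 0, 1), 8: (0, 0, 3, 1, 0, 0, 3, 1)}
k = degree_of_similarity(A, B)
print(k, 1.0 / k)     # degree 6, distance 1/6 (Eq. 8.5-6)
```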
String Matching. Suppose that two object contours C1 and C2 are coded into
strings a1a2 . . . an and b1b2 . . . bm, respectively. Let A represent the number
of matches between the two strings, where we say that a match has occurred in the
jth position if aj = bj. The number of symbols that do not match up is given by
B = max(|C1|, |C2|) − A        (8.5-8)

where |Ci| is the number of symbols in the string representation of Ci. It is noted that B = 0 if and only if C1 and C2 are identical strings. A simple measure of similarity between C1 and C2 is the ratio

R = A/B = A/[max(|C1|, |C2|) − A]        (8.5-9)
Based on the above comment regarding B, R is infinite for a perfect match and
zero when none of the symbols in C1 and C2 match (i.e., A = 0 in this case).
Since the matching is done on a symbol-by-symbol basis, the starting point on each
boundary when creating the string representation is important. Alternatively, we
can start at arbitrary points on each boundary, shift one string (with wraparound),
and compute Eq. (8.5-9) for each shift. The number of shifts required to perform
all necessary comparisons is max(|C1|, |C2|).
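A Python sketch of this measure, including the cyclic shifts used to remove the dependence on the starting point, might look as follows; the boundary strings used in the example are illustrative only.

```python
def match_ratio(c1, c2):
    """R = A / [max(|C1|, |C2|) - A] of Eq. (8.5-9)."""
    a = sum(1 for x, y in zip(c1, c2) if x == y)      # number of matches A
    b = max(len(c1), len(c2)) - a                     # Eq. (8.5-8)
    return float("inf") if b == 0 else a / b

def best_match_ratio(c1, c2):
    """Compute R for every cyclic shift of c2 (wraparound) and keep the largest."""
    return max(match_ratio(c1, c2[i:] + c2[:i]) for i in range(max(len(c1), len(c2))))

s1 = "aaabcbbbcdddcd"
s2 = "abcbbbcdddcdaa"          # same boundary coded from a different starting point
print(match_ratio(s1, s2))      # a relatively low value for the raw alignment
print(best_match_ratio(s1, s2)) # infinite (perfect match) after shifting
```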
Example: Figure 8.54a and b shows a sample boundary from each of two
classes of objects. The boundaries were approximated by a polygonal fit (Fig.
8.54c and d) and then strings were formed by computing the interior angle
Figure 8.54 (a), (b) Sample boundaries of two different object classes. (c), (d) Their
corresponding polygonal approximations. (e)-(g) Tabulations of R = A/B. (Adapted from
Sze and Yang [1981], ©IEEE.)
between contiguous boundary segments as each boundary was traversed in a clockwise direction. Angles were coded into one of eight possible symbols which cor-
respond to 45° increments: s1: 0° < θ ≤ 45°; s2: 45° < θ ≤ 90°; . . . ; s8: 315° < θ ≤ 360°.
The results of computing the measure R for five samples of object 1
against themselves are shown in Fig. 8.54e, where the entries correspond to
values of R = A/B and, for example, the notation 1.c refers to the third string
for object class 1. Figure 8.54f shows the results for the strings of the second
object class. Finally, Fig. 8.54g is a tabulation of R values obtained by com-
paring strings of one class against the other. The important thing to note is
that all values of R in this last table are considerably smaller than any entry in
the preceding two tables, indicating that the R measure achieved a high degree
of discrimination between the two classes of objects. For instance, if string
1.a had been an unknown, the smallest value in comparing it with the other
strings of class 1 would have been 4.67. By contrast, the largest value in a
comparison against class 2 would have been 1.24. Thus, classification of this
string into class 1 based on the maximum value of R would have been a simple matter.
Syntactic Methods. Syntactic techniques are by far the most prevalent concepts
used for handling structural recognition problems. Basically, the idea behind syn-
tactic pattern recognition is the specification of structural pattern primitives and a
"C7
set of rules (in the form of a grammar) which govern their interconnection. We
consider first string grammars and then extend these ideas to higher-dimensional
grammars.
String Grammars. Suppose that we have two classes of objects, w1 and w2, each of which is represented by strings of primitives. The strings of a class are generated according to a
grammar, where a grammar is a set of rules of syntax (hence the name syntactic
pattern recognition) for the generation of sentences formed from the given sym-
bols. In the context of the present discussion, these sentences are strings of sym-
bols which in turn represent patterns. It is further possible to envision two gram-
mars, G1 and G2, whose rules are such that G1 only allows the generation of sentences which correspond to objects of class w1, while G2 only allows generation of sentences corresponding to objects of class w2. The set of sentences generated by
a grammar G is called the language of G and is denoted by L(G). Given a pattern expressed as a string of primitives, the recognition problem reduces to determining in which language the pattern represents a valid sentence. If the sentence belongs to L(G1),
we say that the pattern belongs to object class w1. Similarly, we say that the
object comes from class w2 if the sentence is in L(G2). A unique decision cannot
be made if the sentence belongs to both L(G1) and L(G2). If the sentence is found to be invalid over both languages, the pattern is rejected.
When there are more than two pattern classes, the syntactic classification
approach is the same as described above, except that more grammars (at least one
per class) are involved in the process. In this case the pattern is assigned to class
wi if it is a sentence of only L(Gi). A unique decision cannot be made if the sen-
tence belongs to more than one language, and (as above) a pattern is rejected if it
does not belong to any of the languages under consideration.
When dealing with strings, we define a grammar as the four-tuple
G = (N, E, P, S) (8.5-10)
where
N = finite set of nonterminals or variables
E = finite set of terminals or constants
P = finite set of productions or rewriting rules
S in N = the starting symbol
It is required that N and E be disjoint sets. In the following discussion nonter-
minals will be denoted by capital letters: A, B, ... , S. .... Lowercase letters
at the beginning of the alphabet will be used for terminals: a, b, c, .... Strings
of terminals will be denoted by lowercase letters toward the end of the alphabet:
v, w, x, y, z. Strings of mixed terminals and nonterminals will be denoted by
lowercase Greek letters: α, β, γ, . . . . The empty sentence (the sentence with no
symbols) will be denoted by λ. Finally, given a set V of symbols, we will use the
notation V* to denote the set of all sentences composed of elements from V.
Productions are expressed in the form A → α, with A in N and α in the set (N ∪ E)* − λ; that is, α can be any string composed of terminals and nonterminals, except the empty string.
Example: Consider the grammar with productions

1. S → aA
2. A → bA
3. A → bB
4. B → c
where the terminals a, b, and c are as shown in Fig. 8.55b. As indicated
earlier, S is the starting symbol from which we generate all strings in L(G).
Figure 8.55 (a) Object represented by its skeleton. (b) Primitives. (c) Structure generated
using a regular string grammar.
Applying production 1 followed by production 2 twice results in the string abbA. Since there is a nonterminal in the string abbA and a rule which allows us to rewrite it, we can continue
the derivation. For example, if we apply production 2 two more times, fol-
lowed by production 3 and then production 4, we obtain the string abbbbbc
which corresponds to the structure shown in Fig. 8.55c. It is important to
note that no nonterminals are left after application of production 4 so the
derivation terminates after this production is used. A little thought will reveal
that the grammar given above has the language L(G) = {ab^n c | n ≥ 1}, where b^n indicates n repetitions of the symbol b. In other words, G is capable
of generating the skeletons of wrenchlike structures with bodies of arbitrary
length within the resolution established by the length of primitive b.
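The derivation just described can be mimicked directly in Python. The sketch below generates a b^n c using productions 1 through 4 listed above and tests membership in L(G); the function names are assumptions made for illustration.

```python
import re

def generate(n):
    """Derive the sentence a b^n c (n >= 1): production 1, then production 2
    applied n-1 times, then production 3, then production 4."""
    sentence = "aA"                              # production 1
    for _ in range(n - 1):
        sentence = sentence.replace("A", "bA")   # production 2
    sentence = sentence.replace("A", "bB")       # production 3
    return sentence.replace("B", "c")            # production 4

def in_language(s):
    """Membership test for L(G) = { a b^n c | n >= 1 }."""
    return re.fullmatch(r"ab+c", s) is not None

print(generate(5))                                    # abbbbbc, as in the example
print(in_language("abbbbbc"), in_language("aabab"))   # True False
```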
Use of Semantics. In the above example we have implicitly assumed that the
interconnection between primitives takes place only at the dots shown in Fig. 8.55b. In addition, information regarding factors such as primitive length and direction, and the
number of times a production can be applied, must be made explicit. This is usu-
ally accomplished via the use of semantics. Basically, syntax establishes the struc-
ture of an object or expression, while semantics deal with its meaning. For exam-
ple, a semantic rule attached to a production might state that the direction of the primitive is given by the direction of the perpendicular bisector of the line joining the
endpoints of the two undotted segments. These line segments are 3 cm each.
It is noted that, by using semantic information, we are able to use a few rules of
syntax to describe a broad (although limited as desired) class of patterns. For
instance, by specifying the direction θ, we avoid having to specify primitives for
each possible object orientation. Similarly, by requiring that all primitives be
interconnected only at the dots, we fix the manner in which they can be joined. Strings of primitives can be recognized by means of finite automata. A finite automaton is defined as the five-tuple

A = (Q, E, δ, q0, F)        (8.5-11)

where Q is a finite set of states, E is a finite input alphabet, δ is a mapping from Q × E into collections of subsets of Q, q0 (in Q) is the starting state, and F (a subset of Q) is the set of final or accepting states. The concepts embodied in Eq. (8.5-11) are best illustrated by a simple example.
Example: Consider an automaton given by Eq. (8.5-11) where Q =
{q0, q1, q2}, E = {a, b}, F = {q0}, and the mappings are given by
δ(q0, a) = {q2}, δ(q0, b) = {q1}, δ(q1, a) = {q2}, δ(q1, b) = {q0},
δ(q2, a) = {q0}, δ(q2, b) = {q1}. If, for example, the automaton is in
state q0 and an a is input, its state changes to q2. Similarly, if a b is input
next, the automaton moves to state q1, and so forth. It is noted that, in this
case, the initial and final states are the same.
A state diagram for the automaton just discussed is shown in Fig. 8.56. The
state diagram consists of a node for each state, and directed arcs showing the pos-
sible transitions between states. The final state is shown as a double circle and
each arc is labeled with the symbol that causes that transition. A string w of ter-
minal symbols is said to be accepted or recognized by the automaton if, starting in
state q0, the sequence of symbols in w causes the automaton to be in a final state
after the last symbol in w has been input. For example, the automaton in Fig.
8.56 recognizes the string w = abbabb, but rejects the string w = aabab.
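The behavior of this automaton is easy to simulate. The following Python sketch encodes the mapping δ as a table and checks whether a string leaves the automaton in a final state; it reproduces the two test strings mentioned above, and the identifiers used are assumptions.

```python
# Transition table of the automaton in the example (deterministic in this case).
DELTA = {
    ("q0", "a"): "q2", ("q0", "b"): "q1",
    ("q1", "a"): "q2", ("q1", "b"): "q0",
    ("q2", "a"): "q0", ("q2", "b"): "q1",
}

def accepts(w, start="q0", final=frozenset({"q0"})):
    """Return True if the automaton is in a final state after reading w."""
    state = start
    for symbol in w:
        state = DELTA[(state, symbol)]
    return state in final

print(accepts("abbabb"))   # True  (recognized)
print(accepts("aabab"))    # False (rejected)
```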
It can be shown that a language can be recognized by a finite automaton if and only if it is generated by a regular grammar. The procedure for obtaining the automaton
corresponding to a given regular grammar is straightforward. Let the grammar be
denoted by G = (N, E, P, X0), where X0 = S, and suppose that N is composed
of X0 plus n additional nonterminals X1, X2, . . . , Xn. The state set Q for the
automaton is formed by introducing n + 2 states, {q0, q1, . . . , qn, qn+1}, such
that qi corresponds to Xi for 0 ≤ i ≤ n and qn+1 is the final state.
On the other hand, given a finite automaton A = (Q, E, δ, q0, F), we obtain
the corresponding regular grammar G = (N, E, P, X0) by letting N be identified
with the state set Q, with the starting symbol X0 corresponding to q0, and the productions of G obtained as follows: if qj is in δ(qi, a), there is a production Xi → aXj in P; and if a state in F is in δ(qi, a), there is also a production Xi → a in P.
These ideas can be extended to patterns that are richer in structure by means of tree grammars. A tree grammar is defined as the five-tuple

G = (N, E, P, r, S)        (8.5-12)

where N, E, P, and S are as defined above for string grammars, and r is a ranking function which gives the number of direct descendants of a node whose label is a terminal of the grammar.
Example: The skeleton of the structure shown in Fig. 8.57a can be generated
by means of a tree grammar whose productions expand the skeleton from the root, using the primitives shown in Fig. 8.57b.
Figure 8.57 (a) An object and (b) primitives used for representing the skeleton by means of
a tree grammar.
Requiring that certain of the productions be applied the same number of times generates a structure in which all three
legs are of the same length. Similarly, requiring that productions 4 and 6 be
applied the same number of times produces a structure that is symmetrical
about the vertical axis in Fig. 8.57a.
8.6 INTERPRETATION
The depth of our treatment of this topic is limited by the fact that our understanding of this area is really in its infancy. In this section we
touch briefly upon a number of topics which are representative of current efforts
in this area, and we point out a number of factors which make this type of processing a difficult task, including variations
in illumination, occluding bodies, and viewing geometry. In Sec. 7.3 we spent
considerable time discussing illumination techniques. Among the
difficulties caused by uncontrolled illumination we find shadowing effects which complicate edge finding, and the intro-
duction of nonuniformities on smooth surfaces which often results in their being
detected as distinct bodies. Clearly, many of these problems result from the fact
that a single two-dimensional image conveys only limited information about the structure
of 3D scenes. The line and junction labeling techniques discussed in Sec. 8.4
represent an attempt in this direction, but they fall short of explaining the interac-
tion between illumination and three-dimensional structure.
Occlusion problems come into play when we are dealing with a multiplicity of
objects in an unconstrained working environment. Consider, for example, the
scene shown in Fig. 8.63. A human observer would have little difficulty, say, in
determining the presence of two wrenches behind the sockets. For a machine,
however, interpretation of this scene is a totally different story. Even if the system
were able to perform a perfect segmentation of object clusters from the back-
Figure 8.60 Rules used to generate three-dimensional structures. The blank circles indicate
that more than one vertex type is allowed. (Adapted from Gips [1974], ©Pergamon Press.)
ground, all the two-dimensional procedures discussed thus far for description and
recognition would perform poorly on most of the occluded objects. The three-
dimensional descriptors discussed in Sec. 8.4 would have a better chance, but even
they would yield incomplete information. For instance, several of the sockets
would appear as partial cylindrical surfaces, and the middle wrench would appear
fragmented. What is needed is the ability to obtain descriptions which inherently carry shape and volumetric information, and
procedures for establishing relationships between these descriptions, even when
Figure 8.61 Sample derivation using the rules in Fig. 8.60. (Adapted from Gips [1974],
©Pergamon Press.)
they are incomplete. Ultimately, these issues will be resolved only through the
development of machine vision capabilities considerably more powerful than those available today. Suppose, for example, that a system experiences difficulty in correctly analyzing a given scene. The decision to look at the scene from a different
viewpoint (Fig. 8.64) to resolve the issue would be a natural reaction in an intelli-
gent observer.
Figure 8.62 Sample three-dimensional structures generated by the rules given in Fig. 8.60.
(From Gips [1974], ©Pergamon Press.)
Problems such as these must be handled properly if machine vision systems are to operate outside highly controlled environments (the reader will recall numerous comments made about this earlier in the chapter and in Sec. 8.6).
Our treatment of recognition techniques has been at an introductory level.
This is a broad area in which dozens of books and thousands of articles have been
written. The references at the end of this chapter provide a pointer for further
reading on both the decision-theoretic and structural aspects of pattern recognition
and related topics.
REFERENCES
Further reading on the local analysis concepts discussed in Sec. 8.2.1 may be
found in the book by Rosenfeld and Kak [1982]. The Hough transform was first
proposed by P. V. C. Hough [1962] in a U.S. patent and later popularized by
Duda and Hart [1972]. A generalization of the Hough transform for detecting
arbitrary shape has been proposed by Ballard [1981]. The material on graph-
theoretic techniques is based on two papers by Martelli [1972, 1976]. Another
formulation treats boundary detection from a different point of view. For further details on this topic see Ballard and Brown [1982].
The optimum thresholding approach discussed in Sec. 8.2.2 was first utilized
by Chow and Kaneko [1972] for detecting boundaries in cineagiograms (x-ray
pictures of a heart which has been injected with a dye). Further reading on
optimum discrimination may be found in Tou and Gonzalez [1974]. The book by
Rosenfeld and Kak [1982] contains a number of approaches for threshold selection
The material on region-oriented segmentation in Sec. 8.2.3 is based on the survey by Zucker [1976]. Additional reading on this topic may be found in Barrow and
Tenenbaum [1977], Brice and Fennema [1970], Horowitz and Pavlidis [1974], and
Ohlander et al. [1979]. The concept of a quad tree was originally called regular
decomposition (Klinger [1972, 1976]). The material in Sec. 8.2.4 is based on two
papers by Jain [1981, 1983]. Other approaches to dynamic scene analysis may be
found in Thompson and Barnard [1981], Nagel [1981], Rajala et al. [1983], Webb
and Aggarwal [1981], and Aggarwal and Badler [1980].
The chain code representation discussed in Sec. 8.3.1 was first proposed by
""3
v-.
[1977] contains a comprehensive discussion on techniques for polygonal approxi-
mations. The discussion on shape numbers is based on the work of Bribiesca and
Guzman [1980] and Bribiesca [1981]. Further reading on Fourier descriptors may
be found in Zahn and Roskies [1972], Persoon and Fu [1977], and Gonzalez and
Wintz [1977]. For a discussion of 3D Fourier descriptors see Wallace and
Mitchell [1980].
Further reading for the material in Sec. 8.3.2 may be found in Gonzalez and
Wintz [1977]. Texture descriptors have received a great deal of attention during
the past few years. For further reading on the statistical aspects of texture see
Haralick et al. [1973], Bajcsy and Lieberman [1976], Haralick [1978], and Cross
and Jain [1983]. On structural texture, see Lu and Fu [1978] and Tomita et al. [1982]. The moment-invariant approach is due to Hu [1962]. This technique has been extended to three dimen-
sions by Sadjadi and Hall [1980].
The approach discussed in Sec. 8.4.1 has been used by Shirai [1979] for seg-
menting range data. The gradient operator discussed in Sec. 8.4.2 was developed
by Zucker and Hummel [1981]. Early work on line and junction labeling for
scene analysis (Sec. 8.4.3) may be found in Roberts [1965] and Guzman [1969].
For further reading on the decision-theoretic material in Sec. 8.5.1,
see the book by Tou and Gonzalez [1974]. The material in Sec. 8.5.2 dealing with
matching and syntactic recognition is introductory. For further reading on
structural pattern recognition see the books by Pavlidis [1977], Gonzalez and Tho-
mason [1978], and Fu [1982].
Further reading for the material in Sec. 8.6 may be found in Dodd and Rossol
[1979] and in Ballard and Brown [1982]. A set of survey papers on the topics dis-
cussed in that section has been compiled by Brady [1981].
PROBLEMS
8.1 (a) Develop a general procedure for obtaining the normal representation of a line given
its slope-intercept equation y = ax + b. (b) Find the normal representation of the line
y = -2x+ 1.
8.2 (a) Superimpose on Fig. 8.7 all the possible edges given by the graph in Fig. 8.8. (b)
(b) Find the minimum-cost edge for a 3 × 3 subimage in which the numbers in parentheses indicate intensity. Assume that the edge starts on the first
column and ends in the last column.
8.4 Suppose that an image has the following intensity distributions, where p1(z)
corresponds to the intensity of objects and p2(z) corresponds to the intensity of the background. Assuming that P1 = P2, find the optimum threshold between object and back-
ground pixels.
8.5 Segment the image on page 448 using the split and merge procedure discussed in Sec.
8.2.3. Let P(R;) = TRUE if all pixels in R; have the same intensity. Show the quadtree
corresponding to your segmentation.
8.6 (a) Show that redefining the starting point of a chain code so that the resulting sequence
of numbers forms an integer of minimum magnitude makes the code independent of where
we initially start on the boundary. (b) What would be the normalized starting point of the
8.8 (a) Obtain the signature of a square boundary using the approach discussed in Sec. 8.3.1. (b) Repeat for the slope density function. Assume that the square is aligned
with the x and y axes and let the x axis be the reference line. Start at the corner closest to
the origin.
8.9 Give the fewest number of moment descriptors that would be needed to differentiate
pixel on the boundary, then the maximum possible error in that cell is Vd, where d is the
8.11 (a) What would be the effect of setting to zero the error threshold used in the merging method discussed in Sec. 8.3.1? (b) What would be the effect on the splitting method?
8.12 (a) What is the order of the shape number in each of the following figures?
8.13 Compute the mean and variance of a four-level image with histogram p(zi) = 0.1,
p(z2) = 0.4, p(z3) = 0.3, p(z4) = 0.2. Assume that z1 = 0, z2 = 1, z3 = 2, and
z4 = 3.
8.14 Obtain the gray-level co-occurrence matrix of a 5 x 5 image composed of a checker-
board of alternating 1's and 0's if (a) P is defined as "one pixel to the right," and (b) "two
pixels to the right." Assume that the top left pixel has value 0.
8.15 Consider a checkerboard image composed of alternating black and white squares, each
of size m × m. Give a position operator that would yield a diagonal co-occurrence matrix.
8.16 (a) Show that the medial axis of a circular region is a single point at its center. (b)
Sketch the medial axis of a rectangle, the region between two concentric circles, and an
equilateral triangle.
8.17 (a) Show that the boolean expression given in Eq. (8.3-6) implements the conditions
given by the four windows in Fig. 8.40. (b) Draw the windows corresponding to B0 in Eq.
(8.3-7).
8.18 Draw a trihedral object which has a junction of the form
8.19 Show that using Eq. (8.5-4) to classify an unknown pattern vector x* is equivalent to
using Eq. (8.5-3).
8.20 Show that D(A, B) = 1/k satisfies the three conditions given in Eq. (8.5-7).
8.21 Show that B = max(|C1|, |C2|) − A in Eq. (8.5-8) is zero if and only if C1 and C2 are identical strings.
CHAPTER
NINE
ROBOT PROGRAMMING LANGUAGES
9.1 INTRODUCTION
Teaching and playback is typically accomplished by the following steps: (1) leading the robot in
slow motion using manual control through the entire assembly task and recording
the joint angles of the robot at appropriate locations in order to replay the motion;
(2) editing and playing back the taught motion; and (3) if the taught motion is
correct, then the robot is run at an appropriate speed in a repetitive mode.
Leading the robot in slow motion usually can be achieved in several ways:
using a joystick, a set of pushbuttons (one for each joint), or a master-slave mani-
pulator system. Presently, the most commonly used system is a manual box with
pushbuttons. With this method, the user moves the robot manually through the
space, and presses a button to record any desired angular position of the manipula-
tor. The set of angular positions that are recorded form the set-points of the tra-
jectory that the manipulator has traversed. These position set-points are then inter-
polated by numerical methods, and the robot is "played back" along the smoothed
trajectory. In the edit-playback mode, the user can edit the recorded angular posi-
tions and make sure that the robot will not collide with obstacles while completing
the task. In the run mode, the robot will run repeatedly according to the edited
and smoothed trajectory. If the task is changed, then the above three steps are
repeated. The advantages of this method are that it requires only a relatively small
memory space to record angular positions and it is simple to learn. The main
disadvantage is that it is difficult to utilize this method for integrating sensory feed-
back information into the control system.
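As a rough illustration of the playback step described above, the sketch below (in Python) linearly interpolates recorded joint-angle set-points; the function name, the sampling scheme, and the numerical values are assumptions, and an actual controller would normally use a smoother interpolation method.

```python
import numpy as np

def playback_trajectory(set_points, samples_per_segment=10):
    """Linearly interpolate recorded joint-angle set-points for playback.

    set_points: list of joint-angle vectors recorded during guiding.
    Returns an array of intermediate joint vectors along the taught path.
    """
    set_points = np.asarray(set_points, dtype=float)
    path = []
    for q0, q1 in zip(set_points[:-1], set_points[1:]):
        for s in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            path.append((1.0 - s) * q0 + s * q1)
    path.append(set_points[-1])
    return np.array(path)

# Three hypothetical set-points (degrees) recorded for a 6-joint arm.
recorded = [[0, -30, 60, 0, 45, 0], [20, -20, 50, 10, 40, 5], [40, -10, 40, 20, 30, 10]]
print(playback_trajectory(recorded).shape)   # (21, 6)
```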
High-level programming languages provide a more general approach to solv-
ing the human-robot communication problem. In the past decade, robots have
been successfully used in such areas as arc welding and spray painting using guid-
ing (Engelberger [1980]). These tasks require no interaction between the robot
and the environment and can be easily programmed by guiding. However, the use
of robots to perform assembly tasks requires high-level programming techniques
because robot assembly usually relies on sensory feedback, and this type of un-
structured interaction can only be handled by conditionally programmed methods.
Robot programming is substantially different from traditional programming.
We can identify several considerations which must be handled by any robot pro-
gramming method; for example, the objects to be manipulated by a robot are three-dimensional objects. Existing approaches to robot programming fall into two categories: robot-oriented programming and task-level programming. In robot-oriented programming, an assembly task is explicitly described as a sequence of robot motions. The robot is guided and controlled by the program
throughout the entire task with each statement of the program roughly correspond-
ing to one action of the robot. On the other hand, task-level programming
describes the assembly task as a sequence of positional goals of the objects rather
than the motion of the robot needed to achieve these goals, and hence no explicit
robot motion is specified. These approaches are discussed in detail in the follow-
ing two sections.
Most robot-oriented languages are developed by extending an existing high-level language with robot-specific commands and data structures. To a certain extent, this approach is ad hoc and there are no guidelines on how to
implement the extension.
We can easily recognize several key characteristics that are common to all
robot-oriented languages by examining the steps involved in developing a robot
program. Consider the task of inserting a bolt into a hole (Fig. 9.1). This
requires moving the robot to the feeder, picking up the bolt, moving it to the beam
and inserting the bolt into one of the holes. Typically, the steps taken to develop
the program are:
1. The workspace is set up and the parts are fixed by the use of fixtures and
feeders.
2. The location (orientation and position) of the parts (feeder, beam, etc.) and
their features (beam-bore, bolt-grasp, etc.) are defined using the data structures provided by the language.†
† The reader will recall that the use of the underscore symbol is a common practice in program-
ming languages to provide an effective identity in a variable name and thus improve legibility.
AL has influenced the design of many robot-oriented languages and is still actively
being developed. It provides a large set of commands to handle the requirements
of a wide range of tasks. AML, developed by IBM, is currently available as a commercial product for the control of IBM's robots, and
its approach is different from AL. Its design philosophy is to provide a system
environment where different robot programming interfaces may be built. Thus, it
has a rich set of primitives for robot operations and allows the users to design
high-level commands according to their particular needs. These two languages are used in the remainder of this section to illustrate the characteristics of robot-oriented languages.
AML was developed by IBM. It is the control language for the IBM RS-1 robot. It runs on
a Series-1 computer (or IBM personal computer) which also controls the robot. The RS-1
'$0
robot is a cartesian manipulator with 6 degrees of freedom. Its first three joints are
prismatic and the last three joints are rotary.

In setting up the workspace, the parts are usually restricted by fixtures and feeders to minimize
positional uncertainities. Assembly from a set of randomly placed parts requires
vision and is not yet a common practice in industry.
The most common approach used to describe the orientation and the position
of an object is to define a coordinate frame for it, usually represented by a homogeneous transformation matrix (see Table 9.2). The AL frames in Table 9.2 are defined in
cartesian coordinates. On the other hand, AML provides a general structure called
an aggregate which allows the user to design his or her own data structures. The
AML frames defined in Table 9.2 are in cartesian coordinates and the format is
similar. The first statement in Table 9.2 for
AL means the establishment of the coordinate frame base, whose principal axes
are parallel (nilrot implies no rotation) to the principal axes of the reference frame
and whose origin is at location (20, 0, 15) inches from the origin of the reference
frame. The second statement in AL establishes the coordinate frame beam, whose
principal axes are rotated 90 ° about the Z axis of the reference frame, and whose
origin is at location (20, 15, 0) inches from the origin of the reference frame. The
third statement has the same meaning as the first, except for location. The meaning
of the three statements in AML is exactly the same as for those in AL.
A convenient way of referring to the features of an object is to define a frame
(with respect to the object's base frame) for it. An advantage of using a homo-
geneous transformation matrix is that defining frames relative to a base frame can
be simply done by postmultiplying a transformation matrix to the base frame.
Table 9.3 shows the AL and AML statements used to define the features T6, E,
bolt-tip, bolt-grasp, and beam-bore with respect to their base frames, as indicated
in Fig. 9.1. AL provides a matrix multiplication operator (*) and a data structure
to represent transformation matrices. AML has no built-in matrix multiplication
operator, but a system subroutine, DOT, is provided.
In order to illustrate the meaning of the statements in Table 9.3, the first AL
statement means the establishment of the coordinate frame T6, whose principal
axes are rotated 180 ° about the X axis of the base coordinate frame, and whose
origin is at location (15, 0, 0) inches from the origin of the base coordinate frame.
The second statement establishes the coordinate frame E, whose principal axes are
parallel (nilrot implies no rotation) to the principal axes of the T6 coordinate
frame, and whose origin is at location (0, 0, 5) inches from the origin of the T6
coordinate frame. Similar comments apply to the other three AL statements. The
meaning of the AML statements is the same as those for AL.
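The frame definitions just described can be reproduced with ordinary homogeneous transformation matrices. The Python sketch below builds the base, beam, T6, and E frames from the rotations and translations quoted in the text and obtains E by postmultiplication; the helper function names are assumptions, and T6 and E are expressed relative to the arm's base frame.

```python
import numpy as np

def rot_z(deg):
    """Homogeneous rotation about the Z axis."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def rot_x(deg):
    """Homogeneous rotation about the X axis."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def trans(x, y, z):
    """Homogeneous translation (units of inches, as in the text)."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

base = trans(20, 0, 15)                 # nilrot, origin at (20, 0, 15)
beam = trans(20, 15, 0) @ rot_z(90)     # rotated 90 deg about Z, origin (20, 15, 0)
T6   = trans(15, 0, 0) @ rot_x(180)     # relative to the arm's base frame
E    = T6 @ trans(0, 0, 5)              # feature frame obtained by postmultiplication
print(np.round(E, 3))
```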
Figure 9.2a shows the relationships between the frames we have defined in
Tables 9.2 and 9.3. Note that the frames defined for the arm are not needed for
AL because AL uses an implicit frame to represent the position of the end-effector
and does not allow access to intermediate frames (T6, E). As parts are moved or
are attached to other objects, the frames are adjusted to reflect the current state of
the world (see Fig. 9.2b).
POINTY (Grossman and Taylor [1978]), a system designed for AL, allows the user to lead
the robot through the workspace (by hand or by a pendant) and, by pointing the
hand (equipped with a special tool) to objects, it generates AL declarations similar
to those shown in Tables 9.2 and 9.3. This eliminates the need to measure the dis-
tances and angles between frames, which can be quite tedious.
Although coordinate frames are quite popular for representing robot
configurations, they do have some limitations. The natural way to represent robot
configurations is in the joint-variable space rather than the cartesian space. Since
the inverse kinematics problem gives nonunique solutions, the robot's configuration
is not uniquely determined given a point in the cartesian space. As the number of
",.3
features and objects increases, the relationships between coordinate frames become
complicated and difficult to manage. Furthermore, the number of computations
required also increases significantly.
When a motion is specified only by its destination, the path is planned by the system without considering the objects in the workspace, and
obstacles may be present on the planned path. In order for the system to generate
a collision-free path, the programmer must specify enough intermediate or via
points on the path. For example, in Fig. 9.3, if a straight line motion were used
from point A to point C, the robot would collide with the beam. Thus, intermedi-
ate point B must be used to provide a safe path.
The positional goals can be specified either in the joint-variable space or in the
cartesian space, depending on the language. In AL, the motion is specified by
using the MOVE command to indicate the destination frame to which the arm should move. Via points can be specified by using the keyword "VIA" followed by the frame of the via point (see Table 9.4). AML allows the user to specify motion in
the joint-variable space and the user can write his or her own routines to specify
motions in the cartesian space. Joints are specified by joint numbers (1 through 6)
and the motion can be either relative or absolute (see Table 9.4).
One disadvantage of this type of specification is that the programmer must
preplan the entire motion in order to select the intermediate points. The resulting
motion may also be inefficient. In addition to the destination and via points, most languages allow the user to specify other motion parameters, such as the speed, the approach or departure vector, or a time limit. Table 9.5 shows the AL statements for moving
the robot from bolt-grasp to A with departure direction along +Z of feeder and
time duration of 5 seconds (i.e., move slowly). In AML, aggregates of the form
<speed, acceleration, deceleration> can be added to the MOVE statement to specify
the characteristics of the motion. The commands available for controlling the end-effector depend on the type of gripper
and the task. Most languages provide simple commands on gripper motion so that
sophisticated motions can be built using them. For a two-fingered gripper, one can
either move the fingers apart (open) or move them together (close). Both AML and AL support such commands; by using the OPEN and CLOSE (for AL) and MOVE
(for AML) primitives, the gripper can be programmed to move to a certain open-
ing (see Table 9.5).
Sensing commands allow a robot program to examine and verify the state of the assembly. Sensing in robot programming can be
classified into three types:
1. Position sensing is used to identify the current position of the robot. This is
usually done by encoders that measure the joint angles and compute the
corresponding hand position in the workspace.
2. Force and tactile sensing can be used to detect the presence of objects in the
workspace. Force sensing is used in compliant motion to provide feedback for
force-controlled motions. Tactile sensing can be used to detect slippage while
grasping an object.
3. Vision is used to identify objects and provide a rough estimate of their position.
AML allows sensors to be specified as monitors of a motion; if the monitors are triggered, the motion is halted (see Table 9.6). It also has position-sensing
primitives like QPOSITION(joint numbers), which returns the current position of
the joints. Most languages do not explicitly support vision, and the user has to
provide the interface to the vision system. In AL, the force exerted
on the hand along the Z axis of the hand coordinate frame is returned by
FORCE(Z). If the force exceeds 10 ounces, then this indicates that the hand
missed the hole and the task is aborted.
The flow of a robot program is usually governed by the sensory information
acquired. Most languages provide the usual decision-making constructs like "if_
then _ else _ ", "case-", "do-until-", and "while _ do _ " to control the flow of the
program under different conditions.
Certain tasks require the robot to comply with external constraints. For exam-
ple, insertion requires the hand to move along one direction only. Any sideward
forces may generate unwanted friction which would impede the motion. In order
to perform this compliant motion, force sensing is needed. Table 9.6 illustrates the
use of AL's force sensing commands to perform the insertion task with compli-
ance. The compliant motion is indicated by quantifying the motion statement with
the amount of force allowed in each direction of the hand coordinate frame. In
this case, forces are applied only along the Z axis of this frame.
Complex robot programs are difficult to develop and
can be difficult to debug. Moreover, robot programming imposes additional
requirements on the development and debugging facilities:
1. On-line modification and immediate restart. Since robot tasks require complex
motions and long execution time, it is not always feasible to restart the program
upon failure. The robot programming system must have the ability to allow
programs to be modified on-line and restart at any time.
2. Sensor outputs and program traces. Real-time interactions between the robot
and the environment are not always repeatable; the debugger should be able to
record sensor values along with program traces.
3. Simulation. This feature allows testing of programs without actually setting up
robot and workspace. Hence, different programs can be tested more efficiently.
Example: Table 9.7 shows a complete AL program for performing the inser-
tion task shown diagrammatically in Fig. 9.1. The notation and meaning of the
statements have already been explained in the preceding discussion. Keep in
mind that a statement is not considered terminated until a semicolon is encoun-
tered.
DO
    CLOSE bhand TO 0.9*bolt_diameter;
    IF bhand < bolt_diameter THEN BEGIN { failed to grasp the bolt, try again }
        OPEN bhand TO bolt_diameter + 1*inches;
        MOVE barm TO ⊖ - 1*Z*inches;
    END ELSE grasped ← true;
    tries ← tries + 1;
UNTIL grasped OR (tries > 3);
{ Abort the operation if the bolt is not grasped in three tries. }
IF NOT grasped THEN ABORT("failed to grasp bolt");
{ Move the arm to B }
MOVE barm TO B
VIA A
WITH DEPARTURE = Z WRT feeder;
ON FORCE(Z) > 10*ounces DO ABORT("No hole");
{ Do insertion with compliance }
MOVE barm TO beam-bore DIRECTLY
An assembly task can be described in terms of the states of the objects being manipulated rather than by the robot motions. Task-level languages make
use of this fact and simplify the programming task.
A task-level programming system allows the user to describe the task in a
high-level language (task specification); a task planner will then consult a database
(world models) and transform the task specification into a robot-level program
(robot program synthesis) that will accomplish the task. Based on this description,
we can conceptually divide task planning into three phases: world modeling, task
specification, and program synthesis. It should be noted that these three phases are not independent; in practice they are closely coupled. In the following paragraphs we discuss some of the problems encountered in task planning and some of the solutions that have
been proposed to solve them.
been proposed to solve them.
Geometric and Physical Models. For the task planner to generate a robot pro-
gram that performs a given task, it must have information about the objects and
the robot itself. These include the geometric and physical properties of the objects
which can be represented by models.
A geometric model provides the spatial information (dimension, volume,
shape) of the objects in the workspace. As discussed in Chap. 8, numerous tech-
niques exist for modeling three-dimensional objects (Baer et al. [1979], Requicha
[1980]). The most common approach is constructive solid geometry (CSG), where objects are defined as combinations of primitive solids (e.g., through union, intersection, and difference operations).
Figure 9.4 Conceptual structure of a task planner: a task specification is processed by a task decomposer to produce a robot program.
In AUTOPASS, for example, objects are modeled by utilizing a modeling system called GDP (geometric design processor), in which an object is described procedurally by calling other procedures and applying the MERGE subroutine to them. Table 9.8
shows a description of the bolt used in the insertion task discussed in Sec. 9.2.
Physical properties such as inertia, mass, and coefficient of friction may limit
the type of motion that the robot can perform. Instead of storing each of the pro-
perties explicitly, they can be derived from the object model. However, no model
can be 100 percent accurate and identical parts may have slight differences in their
physical properties. To deal with this, tolerances must be introduced into the
model (Requicha [1983]).
Representing World States. The task planner must be able to simulate the
assembly steps in order to generate the robot program. Each assembly step can be
succinctly represented by the current state of the world. One way of representing
these states is to use the configurations of all the objects in the workspace.
AL provides an attachment relation called AFFIX that allows frames to be
attached to other frames. This is equivalent to physically attaching a part to
another part and if one of the parts moves, the other parts attached will also move.
AL automatically updates the locations of the frames by multiplying the
appropriate transformations. For example,
AFFIX beam-bore TO beam RIGIDLY;
beam-bore = FRAME(nilrot, VECTOR(1,0,0) *inches);
describes that the frame beam-bore is attached to the frame beam.
AUTOPASS uses a graph to represent the world state. The nodes of the
graph represent objects and the edges represent relationships such as attachment and constraint.
As the assembly proceeds, the graph is updated to reflect the current state of
the assembly.
Ideally, a task specification would state only the final goal, leaving all details to the planner; such capability, however, is still
far away. Not even omitting the assembly sequence is possible. The current
approach is to use an input language with a well-defined syntax and semantics,
in which the assembly steps are described in terms of the states or spatial relationships of
the objects. For example, consider the block world shown in Fig. 9.5. We define
a spatial relation AGAINST to indicate that two surfaces are touching each other.
Then the statements in Table 9.9 can be used to describe the two situations dep-
icted in Fig. 9.5. If we assume that state A is the initial state and state B is the
goal state, then they can be used to represent the task of picking up Block3 and
placing it on top of Block2. If state A is the goal state and state B is the initial
state, then they would represent the task of removing Block3 from the stack of
blocks and placing it on the table. The advantage of using this type of representa-
tion is that they are easy to interpret by a human, and therefore, easy to specify
and modify. However, a serious limitation of this method is that it does not
specify all the necessary information needed to describe an operation. For exam-
ple, the torque required to tighten a bolt cannot be incorporated into the state
description.
An alternate approach is to describe the task as a sequence of symbolic opera-
tions on the objects. Typically, a set of spatial constraints on the objects are also
given to eliminate any ambiguity. This form of description is quite similar to
those used in an industrial assembly sheet. Most robot-oriented languages have
adopted this type of specification.
Figure 9.5 Two states of a block world (state A and state B).
AL provides a limited way of describing a task using this method. With the
AFFIX statements, an object frame can be attached to barm to indicate that the
hand is holding the object. Then moving the object to another point can be
described by moving the object frame instead of the arm. For example, the insert-
ing process in Fig. 9.1 can be specified as
Popplestone et al. [1978] have proposed a language called RAPT which uses
contact relations AGAINST, FIT, and COPLANAR to specify the relationship
between object features. Object features, which can be planar or spherical faces,
cylindrical shafts and holes, edges, and vertices, are defined by coordinate frames
similar to those used in AL. For example, the two operations in the block world example above can be expressed in RAPT in terms of such relations between object features.
The spatial relationships are then extracted and solved for the configuration con-
straints on the objects required to perform the task.
AUTOPASS also uses this type of specification but it has a more elaborate
syntax. It divides its assembly related statements into three groups:
The syntax of these statements is complicated (see Table 9.10). For example, a PLACE statement followed by an OPERATE statement
would be used to describe the operation of inserting a bolt and tightening it.
Table 9.10 The syntax of the state change and tool statements in AUTOPASS
State change statement
PLACE <object> <preposition> <object> <grasping> <final-condition>
<constraint> <then-hold>
where
<object> Is a symbolic name for the object.
<preposition> Is either IN or ON; it is used to determine the type of opera-
tion.
<grasping> Specifies how the object should be grasped.
<constraint> Specifies the constraints to be met during the execution of the
command.
<then-hold> Indicates that the hand is to remain in position on completion
of the command.
Tool statement
OPERATE <tool> <load-list> <at-position> <attachment> <final-condition>
< tool-parameters > < then-hold >
where
< tool > Specifies the tool to be used.
< load-list > Specifies the list of accessories.
< final-condition > Specifies the final condition to be satisfied at the completion of
the command.
< tool-parameters > Specifies tool operation parameters such as direction of rotation
and speed.
< then-hold > Indicates that the hand is to remain in position on completion
of the command.
The spatial relationships specified are converted into a set of equations with the configuration parameters of the objects as unknowns. These equations are then solved symbolically by
using a set of rewrite rules to simplify them. The result obtained is a set of con-
straints on the configurations of each object that must be satisfied to perform the
operation.
Grasping planning is probably the most important problem in task planning
because the way the object is grasped affects all subsequent operations. The way
the robot can grasp an object is constrained by the geometry of the object being
grasped and the presence of other objects in the workspace. A usable grasping
configuration is one that is reachable and stable. The robot must be able to reach
the object without colliding with other objects in the workspace and, once grasped,
the object must be stable during subsequent motions of the robot.
Most of the current methods for grasp planning focus only on finding reachable
grasping positions, and only a subset of constraints are considered. Grasping in
the presence of uncertainties is more difficult and often involves the use of sensing.
After the object is grasped, the robot must move the object to its destination
and accomplish the operation. This motion can be divided into four phases: departure, free (gross) motion, approach, and fine motion.
One of the important problems here is planning the collision-free motion. Several
algorithms have been proposed for planning collision-free path and they can be
grouped into three classes:
1. Hypothesis and test. In this method, a candidate path is chosen and the path is
tested for collision at a set of selected configurations. If a collision occurs, a
correction is made to avoid the collision (Lewis and Bejczy [1973]). The main
advantage of this method is its simplicity and most of the tools needed are
already available in the geometric modeling system. However, generating the
correction is difficult, particularly when the workspace is cluttered with obstacles.
2. Penalty functions. In this method, a penalty function is associated with each obstacle, and the total penalty is formed by summing the individual obstacle
penalty functions and possibly a penalty term relating to minimum path. Then
the derivatives of the total penalty function with respect to the configuration
parameters are estimated and the collision-free path is obtained by following the
local minima of the total penalty function. This method has the advantage that
adding obstacles and constraints is easy. However, the penalty functions gen-
erally are difficult to specify.
3. Explicit free space. Several algorithms have been proposed in this class.
Lozano-Perez [1982] proposed to represent the free space (space free of obstacles) in terms of the robot's configuration space. Basically,
the idea is equivalent to transforming the robot's hand holding the object into a
point, and expanding the obstacles in the workspace appropriately. Then,
finding a collision-free path amounts to finding a path that does not intersect
any of the expanded obstacles. This algorithm performs reasonably well when
only translation is considered. However, with rotation, approximations must be
made to generate the configuration space and the computations required increase
significantly. Brooks [1983a, 1983b] proposed another method, representing
the free space as overlapping generalized cones and computing the volume swept by the moving object; a collision-free path is then sought within these freeways.
† A C-surface is defined on a C-frame. It is a task configuration which allows only partial free-
dom in position. Along its tangent is the positional freedom and along its normal is the force freedom.
A C-frame is an orthogonal coordinate system in the cartesian space. The frame is so chosen that the
task freedoms are defined to be translation along and rotation about each of the three principal axes.
Table 9.11 compares several representative robot programming languages in terms of the manipulators controlled (e.g., the Stanford Arm), whether the language is robot- or object-level, the base language (concurrent Pascal, Lisp, APL, PL/I, Pascal), the geometric data types provided (e.g., frames), how motion is specified (translation and rotation, frames, or joints), the control structures available (if-then-else, while-do, Pascal, Fortran), the sensing commands supported (position, force, vision), parallel-processing facilities (e.g., INPAR, semaphores), and multiple-robot capability. Sources: Oldroyd [1981]; Takase et al. [1981]; Franklin and Vanderbrug [1982]; Park [1981]; User's Guide to VAL, version II, second edition, Unimation, Inc., Danbury, Conn., 1979; Craig [1980]; Darringer and Blasgen [1975].
must be solved before they can be used effectively. We conclude this chapter with
BCD
REFERENCES
-ate
et al. [1982]. Languages for describing objects can be found in Barr et al. [1981,
1982], Grossman and Taylor [1978], Lieberman and Wesley [1977], and Wesley et
al. [1980]. Takase et al. [1981] presented a homogeneous transformation matrix
CAS
Oho
'C3
[1983], Lewis and Bejczy [1973], Lozano-Perez [1983a], Lozano-Perez and Wesley
[1979]. In task planning, Lozano-Perez [1982, 1983b] presented a configuration
ONO
space approach for moving an object through a crowded workspace.
CAD
iii
intelligence (Barr et al. [1981, 1982]) and utilize "knowledge" to perform reason-
ing (Brooks [1981]) and planning for robotic assembly and manufacturing.
PROBLEMS
9.1 Write an AL statement for defining a coordinate frame grasp which can be obtained by
rotating the coordinate frame block through an angle of 65 ° about the Y axis and then
translating it 4 and 6 inches in the X and Y axes, respectively.
9.2 Repeat Prob. 9.1 with an AML statement.
9.3 Write an AL program to palletize nine parts from a feeder to a tray consisting of a 3 x
`C7
3 array of bins. Assume that the locations of the feeder and tray are known. The program
has to index the location for each pallet and signal the user when the tray is full.
9.4 Repeat Prob. 9.3 with an AML program.
9.5 Repeat Prob. 9.3 with a VAL program.
9.6 Repeat Prob. 9.3 with an AUTOPASS program.
9.7 Tower of Hanoi problem. Three pegs, A, B, and C, whose coordinate frames are,
respectively, (xA, yA, zA ), (XB, YB, ZB), and (xc, yc, zc), are at a known location from
the reference coordinate frame (xo, yo, zo), as shown in the figure below. Initially, peg A
has two disks of different sizes, with disks having smaller diameters always on the top of
disks with larger diameters. You are asked to write an AL program to control a robot
equipped with a special suction gripper (to pick up the disks) to move the two disks from
peg A to peg C so that at any instant of time disks of smaller diameters are always on the
top of disks with larger diameters. Each disk has an equal thickness of 1 inch.
Lo
Xo
10.1 INTRODUCTION
those actions. Here, planning means deciding on a course of action before acting.
This action synthesis part of the robot problem can be solved by a problem-solving
t3,
system that will achieve some stated goal, given some initial situation. A plan is,
thus, a representation of a course of action for achieving the goal.
-ti
Research on robot problem solving has led to many ideas about problem-
CDRy
solving systems in artificial intelligence. In a typical formulation of a robot prob-
lem we have a robot that is equipped with sensors and a set of primitive actions
that it can perform in some easy-to-understand world. Robot actions change one
state, or configuration, of the world into another. In the "blocks world," for
(^p
that is able to pick up and move blocks. In some problems the robot is a mobile
vehicle with a TV camera that performs tasks such as pushing objects from place
con
One method for finding a solution to a problem is to try out various possible
approaches until we happen to produce the desired solution. Such an attempt
CDD
`c°
state is produced. Methods of organizing such a search for the goal state are most
conveniently described in terms of a graph representation.
briefly some basic examples as a means of introducing the reader to the concepts
discussed in this chapter.
Blocks World. Consider that a robot's world consists of a table T and three
blocks, A, B, and C. The initial state of the world is that blocks A and B are on
the table, and block C is on top of block A (see Fig. 10.1). The robot is asked to
CCD
change the initial state to a goal state in which the three blocks are stacked with
block A on top, block B in the middle, and block C on the bottom. The only
'ox
operator that the robot can use is MOVE X from Y to Z, which moves object X
.PP.
from the top of object Y onto object Z. In order to apply the operator, it is
required that (1) X, the object to be moved, be a block with nothing on top of it,
and (2) if Z is a block, there must be nothing on it.
v°>
'1.
We can simply use a graphical description like the one in Fig. 10.1 as the state
representation. The operator MOVE X from Y to Z is represented by
MOVE(X, Y,Z) . A graph representation of the state space search is illustrated in
...
Fig. 10.2. If we remove the dotted lines in the graph (that is, the operator is not
i.>
(3a
to be used to generate the same operation more than once), we obtain a state space
,DD
C Initial State
A B
C MOVE (C, B, T)
B
DUB MOVE (C, T. B)
MOVE (A, T, C)
MOVE-
(B, T, C)
A
C
B B
B FA
A B B
C
C B N
A
Goal
State
search tree. It is easily seen from Fig. 10.2 that a solution that the robot can
obtain consists of the following operator sequence:
Path Selection. Suppose that we wish to move a long thin object A through a
crowded two-dimensional environment as shown in Fig. 10.3. To, map motions of
the object once it is grasped by a robot arm, we may choose the state space
representation (x, y, a) where
Both position and orientation of the object are quantized. The operators or robot
ROBOT INTELLIGENCE AND TASK PLANNING 477
i
3 4
commands are:
The state space appears in Fig. 10.4. We assume for illustration that each "move"
is of length 2, and each "rotate" of length 3. Let the object A be initially at loca-
tion (2,2), oriented parallel to the y axis, and the goal is to move A to (3,3) and
t=.
oriented parallel to the x axis. Thus the initial state is (2,2,1) and the goal state is
(3,3,0).
There are two equal-length solution paths, shown in Fig. 10.5 and visualized
on a sketch of the task site in Fig. 10.6. These paths may not look like the most
direct route. Closer examination, however, reveals that these paths, by initially
c'.
i..
.-0
moving the object away from the goal state, are able to save two rotations by util-
1:90
bunch of bananas (Fig. 10.7). The bananas are hanging from the ceiling out of
reach of the monkey. How can the monkey get the bananas?
The four-element list (W,x,Y,z) can be selected as the state representation, where
cam.
_-r
= i coordinate of object
End
Stuff
T/
L 0
ex = orientation
of object
x = x coordinate
5 of object
i
I -lL'-- ---4,
I
I I I
I , , i I , ,
L
I
It should be noted that, in order to apply the operator climbbox, the monkey
must be at the same position W as the box, but not on top of it.
A C B
grasp
(C,l,C,O) (C,I,C,l)
where C is the location on the floor directly under the bananas. It should be
CAD
noted that in order to apply the operator grasp, the monkey and the box should
both be at position C and the monkey should already be on the top of the box.
It is noted that both the applicability and the effects of the operators are
expressed by the production rules. For example, in rule 2, the operator
pushbox(V) is only applicable when its precondition is satisfied. The effect of the
operator is that the monkey has pushed the box to position V. In this formulation,
the set of goal states is described by any list whose last element is 1.
Let the initial state be (A,O,B,O). The only operator that is applicable is
goto (U), resulting in the next state (U,O,B,O). Now three operators are applica-
'-.
ble; they are goto(U), pushbox(V) and climbbox (if U = B). Continuing to
apply all operators applicable at every state, we produce the state space in terms of
the graph representation shown in Fig. 10.8. It can be easily seen that the
sequence of operators that transforms the initial state into a goal state consists of
goto(B), pushbox(C), climbbox, and grasp.
For small graphs, such as the one shown in Fig. 10.8, a solution path from the ini-
tial state to a goal state can be easily obtained by inspection. For a more compli-
cated graph a formal search process is needed to move through the state (problem)
space until a path from an initial state to a goal state is found. One way to
describe the search process is to use production systems. A production system
consists of:
fl.
Step 1. Create a search graph G consisting solely of the start node s. Put s on a
list called OPEN.
Step 2. Create a list called CLOSED that is initially empty.
ate)
Step 4. Select the first node on OPEN, remove it from OPEN, and put it on
CLOSED. Call this mode n.
Step 5. If n is a goal node, exit successfully with the solution obtained by tracing
a path along the pointers from n to s in G (pointers are established in step
'-'
7).
Step 6. Expand node n, generating the set M of its successors that are not ances-
tors of n. Install these members of M as successors of n in G.
Step 7. Establish a pointer to n from those members of M that were not already in
OPEN or CLOSED. Add these members of M to OPEN. For each
member of M that was already on OPEN or CLOSED, decide whether or
482 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
GAD
CLOSED, decide for each of its descendants in G whether or not to
^O.
redirect its pointerl.
Step 8. Reorder the list OPEN, either according to some arbitrary criterion or
according to heuristic merit.
Step 9. Go to LOOP.
C).
cedure orders the nodes on OPEN in increasing order of their depth in the search
..O
tree.$ The search that results from such an ordering is called breadth-first search.
It has been shown that breadth-first search is guaranteed to find a shortest-length
path to a goal node, providing that a path exists. The second type of blind search
orders the nodes on OPEN in descending order of their depth in the search tree.
coo
The deepest nodes are put first in the list. Nodes of equal depth are ordered
.,.
arbitrarily. The search that results from such an ordering is called depth-first
search. To prevent the search process from running away along some fruitless
ors
path forever, a depth bound is set. No node whose depth in the search tree
exceeds this bound is ever generated.
(.D
The blind search methods described above are exhaustive search techniques for
finding paths from the start node to a goal node. For many tasks it is possible to
use task-dependent information to help reduce the search. This class of search
procedures is called heuristic or best-first search, and the task-dependent informa-
tion used is called heuristic information. In step 8 of the graph search procedure,
heuristic information can be used to order the nodes on OPEN so that the search
expands along those sectors of the graph thought to be the most promising. One
important method uses a real-valued evaluation function to compute the "promise"
of the nodes. Nodes on OPEN are ordered in increasing order of their values of
the evaluation function. Ties among the nodes are resolved arbitrarily, but always
in favor of goal nodes. The choice of evaluation function critically determines
"'.
t If the graph being searched is a tree, then none of the successors generated in step 6 has been
generated previously. Thus, the members of M are not already on either OPEN or CLOSED. In this
case, each member of M is added to OPEN and is installed in the search tree as successors of n. If the
graph being searched is not a tree, it is possible that some of the members of M have already been gen-
erated, that is, they may already be on OPEN or CLOSED.
T To promote earlier termination, goal nodes should be put at the very beginning of OPEN.
ROBOT INTELLIGENCE AND TASK PLANNING 483
h(n) is an estimate of the additional cost from node n to a goal node. That is,
f(n) represents an estimate of the cost of getting from the start node to a goal
node along the path constrained to go through node n.
The A* Algorithm
Step 1. Start with OPEN containing only the start node. Set that node's g value
to 0, its h value to whatever it is, and its f value to h + 0, or h. Set
CLOSED to the empty list.
Step 2. Until a goal node is found, repeat the following procedure: If there are no
nodes on OPEN, report failure. Otherwise, pick the node on OPEN with
the lowest f value. Call it BESTNODE. Remove it from OPEN. Place it
on CLOSED. See if BESTNODE is a goal node. If so, exit and report a
solution (either BESTNODE if all we want is the node, or the path that
0
has been created between the start node and BESTNODE if we are
interested in the path). Otherwise, generate the successors of BEST-
NODE, but do not set BESTNODE to point to them yet. (First we need
to see if any of them have already been generated.) For each such SUC-
CESSOR, do the following:
a. Set SUCCESSOR to point back to BESTNODE. These back links will
Zoo
call the node on CLOSED OLD, and add OLD to the list of
BESTNODE's successors. Check to see if the new path or the old path
is better just as in step 2c, and set the parent link and g and f values
ti,
appropriately.
propagate the improvement to OLD's successors. This is a bit tricky.
OLD points to its successors. Each successor in turn points to its suc-
cessors, and so forth, until each branch terminates with a node that
484 ROBOTICS- CONTROL, SENSING, VISION, AND INTELLIGENCE
o..
cost downward, do a depth-first traversal of the tree starting at OLD,
changing each node's g value (and thus also its f value), terminating
0°°
each branch when you reach either a node with no successors or a
node to which an equivalent or better path has already been found.
This condition is easy to check for. Each node's parent link points
back to its best known parent. As we propagate down to a node, see if
its parent points to the node we are coming from. If so, continue the
(DD
propagation. If not, then its g value already reflects the better path of
which it is part. So the propagation may stop here. But it is possible
that with the new value of g being propagated downward, the path we
.fl
are following may become better than the path through the current
parent. So compare the two. If the path through the current parent is
still better, stop the propagation. If the path we are propagating
CD.,
CAD
put it on OPEN, and add it to the list of BESTNODE's successors.
Compute f(SUCCESSOR) = g(SUCCESSOR) + h(SUCCESSOR).
It is easy to see that the A* algorithm is essentially the graph search algorithm
000
using f(n) as the evaluation function for ordering nodes. Note that because g(n)
.`S.
and h(n) must be added, it is important that h(n) be a measure of the cost of get-
CAD
ting from node n to a goal node.
0-0
from the goal states. The rules in the production system model can be used to rea-
son forward from the initial states and to reason backward from the goal states.
To reason forward, the left' sides or the preconditions are matched against the
O..
current state and the right side (the results) are used to generate new nodes until
the goal is reached. To reason backward, the right sides are matched against the
current state and the left sides are used to generate new nodes representing new
goal states to be achieved. This continues until one of these goal states is matched
by a initial state.
By describing a search process as the application of a set of rules, it is easy to
describe specific search algorithms without reference to the direction of the search.
a..
Of course, another possibility is to work both forward from the initial state and
backward from the goal state simultaneously until two paths meet somewhere in
between. This strategy is called bidirectional search.
Another approach to problem solving is problem reduction. The main idea of this
approach is to reason backward from the problem to be solved, establishing sub-
ROBOT INTELLIGENCE AND TASK PLANNING 485
reduction operators that are applicable. Each of these produces an alternative set
'C3
may have to try several operators in order to produce a set whose members are all
solvable. Thus it again requires a search process.
p-+'
solving all of its three subproblems B, C, and D; an AND arc will be marked on
(7.
the incoming arcs of the nodes B, C, and D. The nodes B, C, and D are called
°o.
coo
AND nodes. On the other hand, if problem B can be solved by solving any one of
°O°
methods discussed in Sec. 10.2 are for OR graphs through which we want to find
a single path from the start node to a goal node.
'o"
(S,F,G), where S is the set of starting states, F is the set of operators, and G
°f°
the set of goal states. Since the operator set F does not change in this prob-
lem and the initial state is (A,0,B,0), we can suppress the symbol F and
denote the problem simply by ({ (A, O,B, 0) } , G) . One way of selecting
0.1
0r-
CAD
The state described by (A,0,B,0) is not in Gf, because (1) the box is not at C,
(2) the monkey is not at C, and (3) the monkey is not on the box. The opera-
tors relevant to reduce these differences are, respectively, f2 = pushbox(C),
fI = goto(C), and f3 = climbbox. Applying operator f2 results in the sub-
problems ({(A,0,B,0)},Gf,) and (f2(s1I),Gf,), where sII eGf, is obtained as
a consequence of solving the first subproblem.
Since ({(A,0,B,0)},Gf,) must be solved first, we calculate its difference.
The difference is that the monkey is not at B, and the relevant operator is
fI = goto(B). This operator is then used to reduce the problem to a pair of
$=,
subproblems ({(A,0,B,0)},Gf,) and (fI (s1II ),Gf,). Now the first of these
problems is primitive; its difference is zero since (A,0,B,0) is in the domain
of fI and fI is applicable to solve this problem. Note that fI (sIII ) =
`ti
In an AND/OR graph, one of the nodes, called the state node, corresponds to
the original problem description. Those nodes in the graph corresponding to prim-
5.,
itive problem descriptions are called terminal nodes. The objective of the search
process carried out on an AND/OR graph is to show that the start node is solved.
The definition of a solved node can be given recursively as follows:
1. The terminal nodes are solved nodes since they are associated with primitive
problems.
2. If a nonterminal node has OR successors, then it is a solved node if and only if
at least one of its successors is solved.
3. If a nonterminal node has AND successors, then it is a solved node if and only
if all of its successors are solved.
A solution graph is the subgraph of solved nodes that demonstrates that the
start node is solved. The task of the production system or the search process is to
find a solution graph from the start node to the terminal nodes. Roughly speaking,
moo.
({A,O,B,O},G),)
.513 E G),
({A,0,B,0},Gf) ({C,O,C,0},G),)
s111eGf,
(IJ'1(sI11)1,G1,)
Sill ({/3(5I21>},G/,)
e GJ5
yo')o.
arc is directed, we continue to select one outgoing arc, and so on until eventually
every successor thus produced is an element of N.
In order to find solutions in an AND/OR graph, we need an algorithm similar
to A*, but with the ability to handle the AND arc appropriately. Such an algo-
rithm for performing heuristic search of an AND/OR graph is the so-called AO*
algorithm.
r-'
a. Trace the marked arcs from INIT and select for expansion one of the
as yet unexpanded nodes that occurs on this path. Call the selected
node NODE.
b. Generate the successors of NODE. If there are none, then assign
FUTILITY as the h value of NODE. This is equivalent to saying that
NODE is not solvable. If there are successors, then for each one
(called SUCCESSOR) that is not also an ancestor of NODE do the fol-
lowing:
(1) Add SUCCESSOR to G.
(2) If SUCCESSOR is a terminal node, label it SOLVED and assign it
an h value of 0.
(3) If SUCCESSOR is not a terminal node, compute its h value.
c. Propagate the newly discovered information up the graph by doing the
following: Let S be a set of nodes that have been marked SOLVED or
'..
whose h values have been changed and so need to have values pro-
pagated back to their parents. Initialize S to NODE. Until S is empty,
repeat the following procedure:
(1) Select from S a node none of whose descendants in G occurs in S.
v°)
(In other words, make sure that for every node we are going to
process, we process it before processing any of its ancestors.) Call
this node CURRENT, and remove it from S.
(2) Compute the cost of each of the arcs emerging from CURRENT.
The cost of each arc is equal to the sum of the h values of each of
the nodes at the end of the arc plus whatever the cost of the arc
itself is. Assign as CURRENT's new h value the minimum of the
costs just computed for the arcs emerging from it.
(3) Mark the best path out of CURRENT by marking the arc that had
'+i
CAD
`,1
CURRENT was just changed, then its new status must be pro-
fit
pagated back up the graph. So add to S all the ancestors of
Can
CURRENT.
It is noted that rather than the two lists, OPEN and CLOSED, that were used
'''
in the A* algorithm, the AO* algorithm uses a single structure G, representing the
F.,
portion of the search graph that has been explicitly generated so far. Each node in
man
the graph points both down to its immediate successors and up to its immediate
predecessors. Each node in the graph is associated with an h value, an estimate of
the cost of a path from itself to a set of solution nodes. The g value (the cost of
getting from the start node to the current node) is not stored as in the A* algo-
CIF
rithm, and h serves as the estimate of goodness of a node. A quantity FUTILITY
is needed. If the estimated cost of a solution becomes greater than the value of
.°.y
FUTILITY, then the search is abandoned. FUTILITY should be selected to
'T7
correspond to a threshold such that any solution with a cost above it is too
expensive to be practical, even if it could ever be found.
A breadth-first algorithm can be obtained from the AO* algorithm by assign-
ing h = 0.
Robot problem solving requires the capability for representing, retrieving, and
manipulating sets of statements. The language of logic or, more specifically, the
first-order predicate calculus, can be used to express a wide variety of statements.
f].
of deriving new knowledge from old (i.e., mathematical deduction). In this formal-
ism, we can conclude that a new statement is true by proving that it follows from
the statements that are already known to be true. Thus the idea of a proof, as
0
t At this point, readers who are unfamiliar with propositional and predicate logic may want to con-
sult a good introductory logic text before reading the rest of this chapter. Readers who want a more
complete and formal presentation of the material in this section should consult the book by Chang and
Lee [1973].
490 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
It is sunny.
SUNNY
It is foggy.
FOGGY
If it is raining then it is not sunny.
RAINING -SUNNY
Using these propositions, we could, for example, deduce that it is not sunny if
'L1
',D
0°o
logic. Suppose we want to represent the obvious fact stated by the sentence
John is a man
We could write
JOHNMAN
But if we also wanted to represent
n..
Paul is a man
we would have to write something such as
PAULMAN
which would be a totally separate assertion, and we would not be able to draw any
conclusions about similarities between John and Paul. It would be much better to
Nro
In this section, we briefly introduce the language and methods of predicate logic.
The elementary components of predicate logic are predicate symbols, variable
C).
ran
ing statement is false. Thus INROOM(ROBOT,ri) has value T, and
INROOM(ROBOT,r2) has value F. Atomic formulas are the elementary building
blocks of predicate logic. We can combine atomic formulas to form more complex
wffs by using connectives such as A (and), V (or), and (implies). Formulas
built by connecting other formulas by A's are called conjunctions. Formulas built
by connecting other formulas by V's are called disjunction. The connective
" " is used for representing "if-then" statements, e.g., as in the sentence "If
the monkey is on the box, then the monkey will grasp the bananas":
ON(MONKEY, BOX) GRASP(MONKEY, BANANAS)
The symbol " - " (not) is used to negate the truth value of a formula; that is,
it changes the value of a wff from T to F and vice versa. The (true) sentence
"Robot is not in Room r2 " might be represented as
--- INROOM(ROBOT,r2 )
Sometimes an atomic formula, P(x), has value T for all possible values of x.
This property is represented by adding the universal quantifier (Vx) in front of
P(x). If P(x) has value T for at least one value of x, this property is represented
by adding the existential quantifier (3x) in front of P(x). For example, the sen-
tence "All robots are gray" might be represented by
(Vx)[ROBOT(x) ==> COLOR(x,GRAY)]
The sentence "There is an object in Room r, " might be represented by
(3x)INROOM(x,ri )
If P and Q are two wffs, the truth values of composite expressions made up of
these wffs are given by the following table:
P Q PvQ PAQ P Q -P
T T T T T F
F T T F T T
'>y
`r1 'r1
T F T F F F
F F F F T T
492 ROBOTICS. CONTROL, SENSING, VISION. AND INTELLIGENCE
If the truth values of two wffs are the same regardless of their interpretation,
these two wffs are said to be equivalent. Using the truth table, we can establish
the following equivalences:
--- ( _P) is equivalent to P
10.
rye
QAP
''y-.
o'0
(PV Q) V R is equivalent to PV (Q V R)
0.,
rte-'
Contrapositive law:
P Q is equivalent to -Q -P
In addition, we have
-(3x)P(x) is equivalent to (Vx)[ -P(x)]
-(Vx)P(x) is equivalent to (3x)[-P(x)]
.O.
In predicate logic, there are rules of inference that can be applied to certain
"'y
,-a
wffs and sets of wffs to produce new wffs. One important inference rule is modus
ponens, that is, the operation to produce the wff W2 from wffs of the form WI and
WI W2. Another rule of inference, universal specialization, produces the wff
W(A) from the wff (Vx)W(x), where A is any constant symbol. Using modus
ore
ponens and universal specialization together, for example, produces the wff W2 (A)
(IQ
Inference rules are applied to produce derived wffs from given ones. In the
.,,
predicate logic, such derived wffs are called theorems, and the sequence of infer-
cum
ence rule applications used in the derivation constitutes a proof of the theorem. In
artificial intelligence, some problem-solving tasks can be regarded as the task of
41)
finding a proof for a theorem. The sequence of inferences used in the proofs gives
a solution to the problem.
AT(BOX,B)
AT( BANANAS , C)
-- HB
The predicate ONBOX has value T only when the monkey is on top of the
box, and the predicate HB has value T only when the monkey has the bana-
nas.
The effects of the three operators can be described by the following wffs:
1. grasp
meaning "For all s, if the monkey is on the box and the box is at C in
state s, then the monkey will have the bananas in the state attained by
C3'
applying the operator grasp to state s." It is noted that the value of
grasp(s) is the new state resulting when the operator is applied to state s.
2. climbbox
meaning "For all s, the monkey will be on the box in the state attained by
applying the operator climbbox to state s."
3. pushbox
(VxVs){-ONBOX(s) AT(BOX,x,pushbox(x,s))}
meaning "For a1 x and s, if the monkey is not on the box in state s, then
0'C
the box will be at position x in the state attained by applying the operator
.ti
(3s)HB(s)
So far, we have discussed several search methods that reason either forward or
backward but, for a given problem, one direction or the other must be chosen.
Often, however, a mixture of the two directions is appropriate. Such a mixed stra-
tegy would make it possible to solve the main parts of a problem first and then go
back and solve the small problems that arise in connecting the big pieces together.
494 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
°°A
goal state. Once such a difference is determined, an operator that can reduce the
'.Y
difference must be found. It is possible that the operator may not be applicable to
the current state. So a subproblem of getting to a state in which it can be applied
CJ)
C's
is generated. It is also possible that the operator does not produce exactly the goal
state. Then we have a second subproblem of getting from the state it does produce
CAD
to the goal state. If the difference was determined correctly, and if the operator is
really effective at reducing the difference, then the two subproblems should be
CAD
CAD
easier to solve than the original problem. The means-ends analysis is applied
'-'
recursively to the subproblems. From this point of view, the means-ends analysis
could be considered as a problem-reduction technique.
In order to focus the system's attention on the big problems first, the
differences can be assigned priority levels. Differences of higher priority can then
be considered before lower priority ones. The most important data structure used
in the means-ends analysis is the "goal." The goal is an encoding of the current
problem situation, the desired situation, and a history of the attempts so far to
Cam)
change the current situation into the desired one. Three main types of goals are
provided:
Transform A to B
Reduce difference
Transform
between A and B,
producing output A'
A' to B
Reduce difference
Apply operator Q to A
between A and B
Associated with the goal types are methods or procedures for achieving them.
These methods, shown in a simplified form in Fig. 10.11, can be interpreted as
problem-reduction operators that give rise either to AND nodes, in the case of
transform or apply, or to OR nodes, in the case of a reduce goal.
The first program to exploit means-ends analysis was the general problem
sue..
solver (GPS). Its design was motivated by the observation that people often use
this technique when they solve problems. For GPS, the initial task is represented
as a transform goal, in which A is the initial object or state and B the desired
object or the goal state. The recursion stops if, for a transform goal, there is no
difference between A and B, or for an apply goal the operator Q is immediately
"CJ
applicable. For a reduce goal, the recursion may stop, with failure, when all
CAD
matching process to discover the differences between the two objects. The
"t7
difference with the highest priority is the one chosen for reduction. A difference-
operator table lists the operators relevant to reducing each difference.
Consider a simple robot problem in which the available operators are listed as
follows:
A CLEAR(OBJ)
A HANDEMPTY
CARRY(OBJ,LOC)
2. AT (ROBOT , OBJ ) > AT ( OBJ , LOC )
A SMALL(OBJ) A AT(ROBOT,LOC)
WALK(LOC)
3. N one AT( ROBOT , LOC )
4. AT (ROBOT , OBJ) -
PICKUP(OBJ)
PUTDOWN(OBJ)
HOLDING(OBJ)
Fig. 10.12 shows the difference-operator table that describes when each of the
operators is appropriate. Notice that sometimes there may be more than one
operator that can reduce a given difference, and a given operator may be able to
reduce more than one difference.
496 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Move object
Move robot
Clear object
Be holding object
Suppose that the robot were given the problem of moving a desk with two
.Y.
objects on it from one room to another. The objects on top must also be moved.
"ti
s."
(°i
The main difference between the initial state and the goal state would be the loca-
tion of the desk. To reduce the difference, either PUSH or CARRY could be
ti.
chosen. If CARRY is chosen first, its preconditions must be met. This results in
two more differences that must be reduced: the location of the robot and the size
of the desk. The location of the robot can be handled by applying operator
WALK, but there are no operators that can change the size of an object. So the
path leads to a dead end. Following the other possibility, operator PUSH will be
attempted.
PUSH has three preconditions, two of which produce differences between the
C1'
initial state and the goal state. Since the desk is already large, one precondition
creates no difference. The robot can be brought to the correct location by using
the operator WALK, and the surface of the desk can be cleared by applying opera-
.o°
(t,
tor PICKUP twice. But after one PICKUP, an attempt to apply the second time
7..
Once PUSH is performed, the problem is close to the goal state, but not quite.
The objects must be placed back on the desk. The operator PLACE will put them
there. But it cannot be applied immediately. Another difference must be elim-
`D-.
G1.
inated, since the robot must be holding the objects. The operator PICKUP can be
applied. In order to apply PICKUP, the robot must be at the location of the
""'
^;,
C1, --)
objects. This difference can be reduced by applying WALK. Once the robot is at
the location of the two objects, it can use PICKUP and CARRY to move the
objects to the other room.
The order in which differences are considered can be critical. It is important
that significant differences be reduced before less critical ones. Section 10.6
describes a robot problem-solving system, STRIPS, which uses the means-ends
L-'
analysis.
ROBOT INTELLIGENCE AND TASK PLANNING 497
The simplest type of robot problem-solving system is a production system that uses
the state description as the database. State descriptions and goals for robot prob-
lems can be constructed from logical statements. As an example, consider the
robot hand and configurations of blocks shown in Fig. 10.1. This situation can be
represented by the conjunction of the following statements:
CLEAR(B) Block B has a clear top
00000'.x.
Robot actions change one state, or configuration, of the world into another.
One simple and useful technique for representing robot action is employed by a
robot problem-solving system called STRIPS (Fikes and Nilsson [1971]). A set of
..d
rules is used to represent robot actions. Rules in STRIPS consist of three com-
ponents. The first is the precondition that must be true before the rule can be
.-D
applied. It is usually expressed by the left side of the rule. The second com-
ponent is a list of predicates called the delete list. When a rule is applied to a
state description, or database, delete from the database the assertions in the delete
`O'
'_'
a)'
list. The third component is called the add list. When a rule is applied, add the
'C7
'-'
assertions in the add list to the database. The MOVE action for the block-stacking
example is given below:
MOVE(X,Y,Z) Move object X from Y to Z
Precondition: CLEAR(X), CLEAR(Z), ON(X,Y)
Delete list: ON(X,Y), CLEAR(Z)
GL/
If MOVE is the only operator or robot action available, the search graph (or tree)
't7
1. PICKUP(X)
Precondition and delete list: ONTABLE(X), CLEAR(X), HANDEMPTY
Add list: HOLDING(X)
2. PUTDOWN(X)
Precondition and delete list: HOLDING(X)
Add list: ONTABLE(X), CLEAR(X), HANDEMPTY
498 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
3. STACK(X,Y)
Precondition and delete list: HOLDING(X), CLEAR(Y)
Add list: HANDEMPTY, ON(X,Y), CLEAR(X)
U.'
4. UNSTACK(X, Y)
Precondition and delete list: HANDEMPTY, CLEAR(X), ON(X,Y)
Add list: HOLDING(X), CLEAR(Y)
Suppose that our goal is ON(B,C) A ON(A,B). Working forward from the ini-
tial state description shown in Fig. 10.1, we obtain the complete state space for
,,,.,
i..
CAD
this problem as shown in Fig. 10.13, with a solution path between the initial state
°.'3
and the goal state indicated by dark lines. The solution sequence of actions con-
sists of: {UNSTACK(C,A), PUTDOWN(C), PICKUP(B), STACK(BC),
PICKUP(A), STACK(A,B)}. It is called a "plan" for achieving the goal.
^t7
If a problem-solving system knows how each operator changes the state of the
'°o
0.1
world or the database and knows the preconditions for an operator to be executed,
O--+,
(y,
coo
We have just seen how STRIPS computes a specific plan to solve a particular
co)
robot problem. The next step is to generalize the specific plan by replacing con-
stants by new parameters. In other words, we wish to elevate the particular plan
s-.
f3.
to a plan schema. The need for a plan generalization is apparent in a learning sys-
b11
QCD
tem. For the purpose of saving plans so that portions of them can be used in a
'l7
later planning process, the preconditions and effects of any portion of the plan
ors
need to be known. To accomplish this, plans are stored in a triangle table with
rows and columns corresponding to the operators of the plan. The triangle table
reveals the structure of a plan in a fashion that allows parts of the plan to be
extracted later in solving related problems.
An example of a triangle table is shown in Fig. 10.14. Let the leftmost
column be called the zeroth column; then the jth column is headed by the jth
operator in the sequence. Let the top row be called the first row. If there are N
its
'07°
operators in the plan sequence, then the last row is the (N + 1)th row. The
entries in cell (i, j) of the table, for j > 0 and i < N + 1, are those statements
O.,
,..
added to the state description by the jth operator that survive as preconditions of
COD
the ith operator. The entries in cell (i, 0) for i < N + 1 are those statements in
...
...
goo
the initial state description that survive as preconditions of the ith operator. The
entries in the (N + 1)th row of the table are then those statements in the original
'J'
.., '-'
state description, and those added by the various operators, that are components of
..t
the goal.
Triangle tables can easily be constructed from the initial state description, the
+-'
operators in the sequence, and the goal description. These tables are concise and
s..
CD"
convenient representations for robot plans. The entries in the row to the left of the
ROBOT INTELLIGENCE AND TASK PLANNING 499
CLEAR(4)
ON FABLE)B)
CLEAR(C) ONTABLE(C)
HANDEMPTY
.tack(B.C
tlack(B A)
-tck(B.C)
umit B_A)
k(C.B)
ZZ tack(C.A)
untitack(C.B)
umtack(C.A)
Ilack(4.C)//
/ I.ack(4.B)\\umtack(f.B)
un,tack(4 C)
CLEAR( CLEAR(A) CLEAR(S) CLEAR(B) CLEARIC
ON(B C ON(CB) ON(CA) ON(-I. C) ON(4 B)
CLEAR( CLEAR(C) `.W
CLEAR(C) CLEAR(A) CLEAR(.))
SO ONTABLE(B)
ONTABI ONTABLE(4) ONTA BLE(A) ON TABLET B)
ONTABI ONTA BLE(B) ONTABLE(B) ONIA BLE(C) ONTABLEIC
HANDE H ANDEM PTY HANDEMPTY H ANDEM PT', HANDEMPTY
non
.n(C) pie n( plc n( C)
(C IC f)
Ila B
CLEAR(S) CLEAR(S)
%R,
CLEAR(C)
ON(A.C) ON(B CI ON (C. f)
G ON(C.B) ON(CA) ON(4 K
ONTA BLE(B) ONTABLB)A ) ONTABLE(B)
HANDEMPTY HANDEMPTY HANDEMp
ith operator are precisely the preconditions of the operator. The entries in the
column below the ith operator are precisely the add formula statements of that
operator that are needed by subsequent operators or that are components of the
goal.
Let us define the ith kernel as the intersection of all rows below, and includ
ing, the ith row with all columns to the left of the ith column. The fourth kernel
is outlined by double lines in Fig. 10.14. The entries in the ith kernel are then
'-s
precisely the conditions that must be matched by a state description in order that
the sequence composed of the ith and subsequent operators be applicable and
r-.
C))
CAD
achieve the goal. Thus, the first kernel (i.e., the wroth column), contains those
:7.
m-.
conditions of the initial state needed by subsequent operators and by the goal; the
(N + 1)th kernel [i.e., the (N + 1)th row] contains the goal conditions them-
CAD
These properties of triangle tables are very useful for monitoring the
CD'
selves.
actual execution of robot plans.
Since robot plans must ultimately be executed in the real world by a mechani-
'-'
cal device, the execution system must acknowledge the possibility that the actions
.-r
500 ROBOTICS: CONTROL. SENSING, VISION, AND INTELLIGENCE
HANDEMPTY
I CLEAR(C) I
ON(C,A) unstack(C,A)
2 HOLDING(C) 2
putdown(C)
ONTABLE(B)
3 HANDEMPTY 3
CLEAR(B)
pickup(B)
4 CLEAR(C) HOLDING(B) 4
stack(B. C)
moo,
6 CLEAR(B) HOLDING(A) 6
stack(A,B)
7 ON(B,C) ON(A,B)
in the plan may not accomplish their intended tasks and that mechanical tolerances
o-'
may introduce errors as the plan is executed. As actions are executed, unplanned
effects might either place us unexpectedly close to the goal or throw us off the
track. These problems could be dealt with by generating a new plan (based on an
updated state description) after each execution step, but obviously, such a strategy
would be too costly, so we instead seek a scheme that can intelligently monitor
°-'
such a plan execution system. At the beginning of a plan execution, we know that
`°-A
C1.
CAD
the entire plan is applicable and appropriate for achieving the goal because the
statements in the first kernel are matched by the initial state description, which was
.?'
used when the plan was created. (Here we assume that the world is static; that is,
no changes occur in the world except those initiated by the robot itself.) Now sup-
pose the system has just executed the first i - 1 actions of a plan sequence.
Then, in order for the remaining part of the plan (consisting of the ith and subse-
quent actions) to be both applicable and appropriate for achieving the goal, the
CAD
statements in the ith kernel must be matched by the new current state description.
(We assume that a sensory perception system continuously updates the state
description as the plan is executed so that this description accurately models the
w^.
f1,
current state of the world.) Actually, we can do better than merely check to see if
"°'h
CAD
the expected kernel matches the state description after an action; we can look for
(1.
CAD
ROBOT INTELLIGENCE AND TASK PLANNING 501
closer to the goal, we need only execute the appropriate remaining actions; and if
an execution error destroys the results of previous actions, the appropriate actions
can be reexecuted.
To find the appropriate matching kernel, we check each one in turn starting
with the highest numbered one (which is the last row of the table) and work back-
ward. If the goal kernel (the last row of the table) is matched, execution halts;
CAD
otherwise, supposing the highest numbered matching kernel is the ith one, then we
.O,
o.~
CAD
know that the ith operator is applicable to the current state description. In this
case, the system executes the action corresponding to this ith operator and checks
the outcome, as before, by searching again for the highest numbered matching ker-
nel. In an ideal world, this procedure merely executes in order each action in the
CAD
I-<
c°'
plan. In a real-world situation, on the other hand, the procedure has the flexibility
to omit execution of unnecessary actions or to overcome certain kinds of failures
4-,
stacking problem and the plan represented by the triangle table in Fig. 10.14.
Suppose that the system executes actions corresponding to the first four operators
and that the results of these actions are as planned. Now suppose that the system
-,'
attempts to execute the pickup block A action, but the execution routine (this time)
mistakes block B for block A and picks up block B instead. [Assume again that
t7,
...
HOLDING(A). ] If there were no execution error, the sixth kernel would now be
G-.
matched; the result of the error is that the highest numbered matching kernel is
`"+
"'t
scan the table efficiently for the highest numbered matching kernel. Starting in the
bottom row, we scan the table from left to right, looking for the first cell that con-
4-.
tains a statement that does not match the current state description. If we scan the
'°.
whole row without finding such a cell, the goal kernel is matched; otherwise, if we
:.y
find such a cell in column i, the number of the highest numbered matching kernel
CAD
cannot be greater than i. In this case, we set a boundary at column i and move up
to the next-to-bottom row and begin scanning this row from left to right, but not
past column i. If we find a cell containing an unmatched statement, we reset the
column boundary and move up another row to begin scanning that row, etc. With
the column boundary set to k, the process terminates by finding that the kth kernel
is the highest numbered matching kernel when it completes a scan of the kth row
(from the bottom) up to the column boundary.
Example: Consider the simple task of fetching a box from an adjacent room
"ti
by a robot vehicle. Let the initial state of the robot's world model be as
.-.
shown in Fig. 10.15. Assume that there are two operators, GOTHRU and
PUSHTHRU.
C/'
502 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
ROOM Ri ROOM R,
I BOX
DOOR Di
ROBOT DOOR
D,
ROOM R3
Initial data base Mo:
INROOM (ROBOT,R4)
CONNECTS (D1,R1,R2)
CONNECTS (D2,R2,R3)
BOX (Bi)
INROOM (Bi,R2)
Goal G0:
(3x) [BOX(x) A INROOM (x,R1)]
into room r2
Precondition: INROOM(b,rl) A INROOM(ROBOT,rl )
A CONNECTS(d,rl,r2)
Delete list: INROOM(ROBOT,S), INROOM(b,S)
Add list: INROOM(ROBOT,r2), INROOM(b,r2)
The difference-operator table is shown in Fig. 10.16.
When STRIPS is given the problem, it first attempts to achieve the goal Go
from the initial state Mo. This problem cannot be solved immediately. However,
if the initial database contains a statement INROOM(BI,Ri ), the problem-solving
process could continue. STRIPS finds the operator PUSHTHRU(BI,d,rl ,RI )
whose effect can provide the desired statement. The precondition GI for
PUSHTHRU is
Location
of box
Location
of robot
Location of
box and robot
From the means-ends analysis, this precondition is set up as a subgoal and STRIPS
tries to accomplish it from Mo.
Although no immediate solution can be found to solve this problem, STRIPS
finds that if rl = R2, d = DI and the current database contains
INROOM(ROBOT, R2 ), the process could continue. Again STRIPS finds the
operator GOTHRU(d,rl,R2) whose effect can produce the desired statement. Its
precondition is the next subgoal, namely:
INROOM(BI,R2 ), .. .
Now STRIPS attempts to achieve the subgoal GI from the new database MI .
It finds the operator PUSHTHRU(BI,DI ,R2 ,RI) with the substitutions rI = R2
and d = DI. Application of this operator to MI yields
INROOM(BI,RI ), .. .
I NROOM (ROBOT.R i )
I
CONNECTS(D1,Ri,R,) GOTHRU(D1,R1,R,)
INROOM(B1.R,)
CONNECTS(D,,R1.R,) INROOM(ROBOT.R,)
CONNECTS(x,)-.c) PUSHTHRU(Bi.Di.R,.R1)
CONNECTS(x.v,z)
INROOM(ROBOT.R1)
3
INROOM(B,,R1)
Next, STRIPS attempts to accomplish the original goal Go from M2. This attempt
is successful and the final operator sequence is
We would like to generalize the above plan so that it could be free from the
specific constants DI, RI, R2, and BI and used in situations involving arbitrary
doors, rooms, and boxes. The triangle table for the plan is given in Fig. 10.17,
.'4
and the triangle table for the generalized plan is shown in Fig. 10.18. Hence the
L1.
GOTHRU(dl,rl ,r2 )
PUSHTHRU(b,d2,r2,r3)
and could be used to go from one room to an adjacent second room and push a
box to an adjacent third room.
We have discussed the use of triangle tables for generalized plans to control the
execution of robot plans. Triangle tables for generalized plans can also be used by
STRIPS to extract a relevant operator sequence during a subsequent planning pro-
cess. Conceptually, we can think of a single triangle table as representing a family
of generalized operators. Upon the selection by STRIPS of a relevant add list, we
coo
must extract from this family an economical parameterized operator achieving the
add list. Recall that the (i + 1)th row of a triangle table (excluding the first cell)
`D-
represents the add list, AI .. ,;, of the ith head of the plan, i.e., of the sequence
d.,
, ,
ROBOT INTELLIGENCE AND TASK PLANNING 505
OPI , ... , OP;. An n-step plan presents STRIPS with n alternative add lists, any
one of which can be used to reduce a difference encountered during the normal
planning process. STRIPS tests the relevance of each of a generalized plan's add
lists in the usual fashion, and the add lists that provide the greatest reduction in the
difference are selected. Often a given set of relevant statements will appear in
more than one row of the table. In that case only the lowest-numbered row is
selected, since this choice results in the shortest operator sequence capable of pro-
ducing the desired statements.
Suppose that STRIPS selects the ith add list AI, ,;, i < n. Since this add
list is achieved by applying in sequence OPI , ... , OP;, we will obviously not be
interested in the application of 0P;+ I, ... , OP? , and will therefore not be
4"'
3..
steps. If we lost interest in a tail of a plan, then the relevant instance of the gen-
eralized plan need not contain those operators whose sole purpose is to establish
preconditions for the tail. Also, STRIPS will, in general, have used only some
subset of A1, ... ,; in establishing the relevance of the ith head of the plan. Any of
a.+
the first i operators that does not add some statement in this subset, or help estab-
lish the preconditions for some operator that adds a statement in the subset, is not
needed in the relevant instance of the generalized plan.
In order to obtain a robot planning system that can not only speed up the
planning process but can also improve its problem-solving capability to handle
more complex tasks, one could design the system with a learning capability.
STRIPS uses a generalization scheme for machine learning. Another form of
learning would be the updating of the information in the difference-operator table
CD'
INROOM( p,,.p,)
CONNECTS( pH.py.ps) IN ROOM (ROBOT-P5)
CONNECTS(h.\.: ) PUSHTHRU( p(,.pa.ps.po)
CONNECTS(i.v.: )
for a solution. A semantic network, instead of predicate logic, is used as the inter-
nal representation of tasks. Initially a set of basic task examples is stored in the
system as knowledge based on past experience. The analogy of two task state-
CAD
ments is used to express the similarity between them and is determined by a
semantic matching procedure. The matching algorithm measures the semantic
ago
"closeness"; the smaller the value, the closer the meaning. Based on the semantic
matching measure, past experience in terms of stored information is retrieved and
a candidate plan is formed. Each candidate plan is then checked by its operators'
preconditions to ensure its applicability to the current world state. If the plan is
not applicable, it is simply dropped out of the candidacy. After the applicability
check, several candidate plans might be found. These candidate plans are listed in
A..
in the capability of forming complex plans from the learned basic task examples.
The robot planners discussed in the previous section require only a description of
the initial and final states of a given task. These planning systems typically do not
specify the detailed robot motions necessary to achieve an operation. These sys-
tems issue robot commands such as: PICKUP(A) and STACK(X,Y) without speci-
a-,
fying the robot path. In the foreseeable future, however, robot task planners will
need more detailed information about intermediate states than these systems pro-
vide. But they can be expected to produce a much more detailed robot program.
In other words, a task planner would transform the task-level specifications into
`CS
must have a description of the objects being manipulated, the task environment, the
mow.'
robot carrying out the task, the initial state of the environment, and the desired
final (goal) state. The output of the task planner would be a robot program to
C?'
0'Q
achieve the desired final state when executed in the specified initial state.
..d
-ti.\ There are three phases in task planning: modeling, task specification, and
'+r"+
manipulator program synthesis. The world model for a task must contain the fol-
lowing information: (1) geometric description of all objects and robots in the task
environment; (2) physical description of all objects; (3) kinematic description of all
linkages; and (4) descriptions of robot and sensor characteristics. Models of task
A..
awe
states also must include the configurations of all objects and linkages in the world
'C7
model.
ROBOT INTELLIGENCE AND TASK PLANNING 507
10.8.1 Modeling
The geometric description of objects is the principal component of the world
ate
CAD
model. The major sources of geometric models are computer-aided design (CAD)
CAD
systems and computer vision. There are three major types of three-dimensional
object representation schemes (Requicha and Voelcker [1982]):
1. Boundary representation
t""
2. Sweep representation
3. Volumetric representation
There are three types of volumetric representations: (1) spatial occupancy, (2) cell
decomposition, and (3) constructive solid geometry (CSG). A system based on
constructive solid geometry has been suggested for task planning. In CSG, the
basic idea is that complicated solids are constructed by performing set operations
on a few types of primitive solids. The object in Fig. 10.19a can be described by
the structure given in Fig. 10.19b.
'-'
The legal motions of an object are constrained by the presence of other objects
in the environment, and the form of the constraints depends on the shapes of the
objects. This is the reason why a task planner needs geometric descriptions of
tea.
L".
structure of the robot itself. The kinematic models provide the task planner with
the information required to plan manipulator motions that are consistent with exter-
nal constraints.
Many of the physical characteristics of objects play important roles in planning
robot operations. The mass and inertia of parts, for example, determine how fast
they can be moved or how much force can be applied to them before they fall
over. Another important aspect of a robot system is its sensing capabilities. For
task planning, vision enables the robot to obtain the configuration of an object to
some specified accuracy at execution time; force sensing allows the use of compli-
ant motions; touch information could serve in both capacities. In addition to sens-
ing, there are many individual characteristics of manipulators that must be
described; for example, velocity and acceleration bounds, and positioning accuracy
of each of the joints.
models of the objects at the desired configurations, (2) using the robot itself to
specify robot configurations and to locate features of the objects, and (3) using
symbolic spatial relationships among object features to constrain the configurations
CAD
(a)
(b)
Figure 10.19 Constructive solid geometry (CSG). Attributes of A and B: length, width,
height; attributes of C: radius, height. Set relational operators: U, union, n, intersection,
-, difference.
of symbolic spatial relationships that are required to hold between objects in that
configuration.
Since model states are simply sets of configurations and task specifications are
sequences of model states, given symbolic spatial relationships for specifying
Sao
configurations, we should be able to specify tasks. Assume that the model includes
names for objects and object features. The first step in the task planning process
C^,
'`n
1 0 0 0 0 1 0 0
0 1 0 0 -1 0 0 0
fI = 0 0 1 0 f2 = 0 0 1 0
0 1 1 1 1 0 1 1
1 0 0 0 0 -1 0 0
0 1 0 0 1 0 0 0
f3 = 0 0 1 0 f4 = 0 0 1 0
1 1 1 1 1 0 1 1
Let twix(O) be the transformation matrix for a rotation of angle B around the x
axis, trans(x,y,z) the matrix for a translation x, y, and z, and let M be the matrix
for the rotation around the y axis that rotates the positive x axis into the negative x
axis, with M = M- I. Each against relationship between two faces, say, face f on
510 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
The two against relations in the example of Fig. 10.15 generate the following
a.+
equations:
tz
}z
Figure 10.21 Axes embedded in objects and features from Fig. 10.20.
ROBOT INTELLIGENCE AND TASK PLANNING 511
where the primed matrix denotes the rotational component of the transformation,
obtained by setting the last row of the matrix to [0,0,0,1].
,_y
.-.
It can be shown that the rotational and translational components of this type of
o-`1
'0".
replacing each of the trans matrices by the identity and only using the rotational
'-.
Also, since
[0 -1 0 0
1 0 0 0
(f'2) = M(f'4)-'M =
0 0 1 0
0 0 0 1
(10.9-6), we obtain
-1 0 0 0 -1 0 0 0
0 1 0 0 0 1 0 0
Blockl = (10.9-8)
0 0 -1 0 0 0 -1 0
1 yI 2 +z1 1 2-y2 0 2 +z2 1
512 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
2-y2 =1
Y, = 0
2+z, =2+z2
Hence, yz = 1, y, = 0, and z, = z2; that is, the position of Blockl has 1
slippage of the screwdriver in the gripper. After simplifying the equalities and
inequalities, a set of linear constraints are derived by using differential approxima-
tions for the rotations around a nominal configuration. The values of the
configuration parameters satisfying the constraints can be bounded by applying
linear programming techniques to the linearized constraint equations.
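To make the bounding step concrete, the following sketch (not from the text) applies a linear-programming solver to a made-up set of linearized constraints A x <= b; both the constraint values and the use of scipy are illustrative assumptions.

```python
# Illustrative sketch: bound each configuration parameter by linear programming.
# A and b are made-up stand-ins for the linearized constraints A x <= b.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, -1.0]])
b = np.array([1.0, 0.0, 2.0, 0.0, 0.5])

def bound_parameter(i, n=2):
    """Return (min, max) of configuration parameter x[i] subject to A x <= b."""
    c = np.zeros(n); c[i] = 1.0
    lo = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    hi = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    return lo.x[i], hi.x[i]

print([bound_parameter(i) for i in range(2)])
```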
The methods that have been proposed for robot obstacle avoidance can be grouped into the following classes: (1) hypothesize and test, (2) penalty function, and (3) explicit free space.
The hypothesize and test method was the earliest proposal for robot obstacle
avoidance. The basic method consists of three steps: first, hypothesize a candidate
path between the initial and final configuration of the robot manipulator; second,
test a selected set of configurations along the path for possible collisions; third, if a
possible collision is found, propose an avoidance motion by examining the
obstacle(s) that would cause the collision. The entire process is repeated for the
modified motion.
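The three-step loop described above can be sketched as follows; the collision checker and the avoidance heuristic are hypothetical user-supplied routines, so this is only an illustration of the control flow, not the book's algorithm.

```python
# Minimal sketch of the hypothesize-and-test loop (illustrative assumptions only).
import numpy as np

def straight_line_path(q_start, q_goal, n=20):
    """Hypothesize a candidate path: linear interpolation in joint space."""
    return [q_start + t * (q_goal - q_start) for t in np.linspace(0.0, 1.0, n)]

def hypothesize_and_test(q_start, q_goal, collides, propose_avoidance, max_iters=50):
    path = straight_line_path(q_start, q_goal)
    for _ in range(max_iters):
        bad = next((q for q in path if collides(q)), None)   # test sampled configurations
        if bad is None:
            return path                                      # collision-free path found
        path = propose_avoidance(path, bad)                  # modify and repeat
    return None                                              # give up

# toy demo: a disk obstacle near the path midpoint; avoidance nudges nearby points upward
collides = lambda q: np.linalg.norm(q - np.array([0.5, 0.5])) < 0.2
def propose_avoidance(path, bad):
    return [q + np.array([0.0, 0.4]) if np.linalg.norm(q - bad) < 0.4 else q for q in path]

print(hypothesize_and_test(np.array([0.0, 0.0]), np.array([1.0, 1.0]), collides, propose_avoidance))
```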
The main advantage of the hypothesize and test technique is its simplicity.
The method's basic computational operations are detecting potential collisions and modifying proposed paths to avoid collisions. The first operation, detecting potential collisions, requires the ability to detect intersections between the manipulator and obstacle models. This capability is part of the repertoire of most geometric modeling systems. We have pointed out in Sec. 10.8 that the second operation, modifying a proposed path, can be very difficult. Typical approaches approximate the manipulator and the obstacles by simple shapes, such as enclosing spheres. These methods work fairly well when the obstacles are
sparsely located so that they can be dealt with one at a time. When the space is
cluttered, however, attempts to avoid a collision with one obstacle will typically
lead to another collision with a different obstacle. Under such conditions, a more
accurate detection of potential collisions could be accomplished by using the information in the geometric models more fully.

The second class of methods is based on penalty functions that encode the presence of
objects. In general, the penalty is infinite for configurations that cause collisions
and drops off sharply with distance from obstacles. The total penalty function is
computed by adding the penalties from individual obstacles and, possibly, adding a
penalty term for deviations from the shortest path. At any configuration, we can
compute the value of the penalty function and estimate its partial derivatives with
respect to the configuration parameters. On the basis of this local information, the
path search function must decide which sequence of configurations to follow. The
decision can be made so as to follow local minima in the penalty function. Such local decisions, however, may produce paths that pass too close to obstacles. The penalty function methods are attractive because they
provide a relatively simple way of combining the constraints from multiple objects.
Ideally, however, the penalty should be a function of the distance to the configuration space obstacle. Otherwise, motions of the robot that reduce the
value of the penalty function will not necessarily be safe. The distinction between
these two types of penalty functions is illustrated in Fig. 10.22. It is noted that in
Fig. 10.22a moving along decreasing values of the penalty function is safe,
whereas in Fig. 10.22b moving the tip of the manipulator in the same way leads to
a collision.
An approach proposed by Khatib [1980] is intermediate between these two
extremes. The method uses a penalty function which satisfies the definition of a
potential field; the gradient of this field at a point on the robot is interpreted as a
repelling force acting on that point. In addition, an attractive force from the desti-
nation is added. The motion of the robot results from the interaction of these two
forces, subject to kinematic constraints. By using many points of the robot, rather
than a single one, it is possible to avoid many situations such as those depicted in
Fig. 10.22.
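A minimal sketch of this potential-field idea for a single point robot in the plane is given below; the gains, the circular obstacle model, and the step-size clipping are assumptions made for illustration and are not Khatib's full formulation.

```python
# Illustrative potential-field step: attraction to the goal plus repulsion from obstacles.
import numpy as np

def attractive_force(q, goal, k_att=1.0):
    return k_att * (goal - q)

def repulsive_force(q, obstacles, k_rep=0.05, d0=0.5):
    """obstacles: list of (center, radius); force vanishes beyond influence distance d0."""
    f = np.zeros(2)
    for center, radius in obstacles:
        diff = q - center
        d = np.linalg.norm(diff) - radius
        if 0.0 < d < d0:
            f += k_rep * (1.0 / d - 1.0 / d0) / d**2 * diff / np.linalg.norm(diff)
    return f

def step(q, goal, obstacles, dt=0.01, max_speed=1.0):
    f = attractive_force(q, goal) + repulsive_force(q, obstacles)
    speed = np.linalg.norm(f)
    if speed > max_speed:
        f = f / speed * max_speed          # clip the step to keep the integration stable
    return q + dt * f

q, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [(np.array([2.0, 3.0]), 0.5)]
for _ in range(2000):
    q = step(q, goal, obstacles)
print("final position:", q)
```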
The key drawback of using penalty functions to plan safe paths is the strictly
local information that they provide for path searching. Pursuing the local minima
of the penalty function can lead to situations where no further progress can be
Figure 10.22 Illustration of penalty function for (a) simple circular robot and (b) the two-
link manipulator. (Numbers in the figure indicate values of the penalty function.)
made. In these cases, the algorithm must choose a previous configuration where the search is to be resumed, but in a different direction from the previous time. These backup points are difficult to identify from local information. This suggests that the penalty function method might be combined profitably with a more global method of hypothesizing paths. Penalty functions are more suitable for applications that require only local modifications to a known path.

The third class of methods, explicit free space, is based on building representations
of subsets of robot configurations that are free of collisions, the free space. Obsta-
cle avoidance is then the problem of finding a path, within these subsets, that con-
nects the initial and final configurations. The algorithms differ primarily on the
basis of the particular subsets of free-space which they represent and in the
representation of these subsets. The advantage of free space methods is that their
use of an explicit characterization of free space allows them to define search
methods that are guaranteed to find paths if one exists within the known subset of
free space. Moreover, it is feasible to search for short paths, rather than simply
finding the first path that is safe. The disadvantage is that the computation of the
free space may be expensive. In particular, other methods may be more efficient
for uncluttered spaces. However, in relatively cluttered spaces other methods will
either fail or expend an undue amount of effort in path searching.
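As an illustration (not from the text) of an explicit free-space method, the sketch below enumerates free configurations as cells of a small occupancy grid and finds a shortest cell path by breadth-first search; the grid itself is a made-up example.

```python
# Illustrative free-space search: BFS over free cells of an occupancy grid.
from collections import deque

grid = [  # 0 = free space, 1 = configuration-space obstacle
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def shortest_free_path(start, goal):
    rows, cols = len(grid), len(grid[0])
    parents, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                       # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                                # no path within the known free space

print(shortest_free_path((0, 0), (4, 4)))
```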
The surfaces of the robot used for grasping, such as the inside of the fingers, are gripping surfaces. The manipulator configuration which has it grasping the target object at that object's initial configuration is the initial-grasp configuration. The manipulator configuration which places the grasped object at its destination is the final-grasp configuration.
There are three principal considerations in choosing a grasp configuration for
objects whose configuration is known. The first is safety: the robot must be safe at
the initial and final grasp configurations. The second is reachability: the robot
must be able to reach the initial grasp configuration and, with the object in the
hand, to find a collision-free path to the final grasp configuration. The third is sta-
bility: the grasp should be stable in the presence of forces exerted on the grasped
object during transfer motions and parts mating operations. If the initial
configuration of the target object is subject to substantial uncertainty, an additional
consideration in grasping is certainty: the grasp motion should reduce the
uncertainty in the target object's configuration.
Choosing grasp configurations that are safe and reachable is related to obstacle
avoidance; there are significant differences, however. The first difference is that
the goal of grasp planning is to identify a single configuration, not a path. The
second difference is that grasp planning must consider the detailed interaction of
the manipulator's shape and that of the target object. Note that candidate grasp
configurations are those having the gripping surfaces in contact with the target
object while avoiding collisions between the manipulator and other objects. The
third difference is that grasp planning must deal with the interaction of the choice of grasp with subsequent operations involving the grasped object. Because of these differences, most existing methods for grasp planning treat it independently from obstacle avoidance.
Most approaches to choosing safe grasps can be viewed as instances of the fol-
lowing method:
parallel-jaw grippers, a common choice is grasp configurations that place the gripping surfaces in contact with parallel faces of the target object.
Most techniques in the area of artificial intelligence fall far short of the com-
petence of humans or even animals. Computer systems designed to see images,
CAD
hear sounds, and understand speech can only claim limited success. However, in
one area, the expert or knowledge-based system, considerable success has been achieved. Expert systems embody collections of facts and rules about a given field, coupled with methods of applying those rules, to make inferences. They solve problems in such specialized fields as medical diagnosis, mineral exploration, and oil-well log interpretation. They differ substantially from conventional programs in that they must often reason with judgmental, uncertain, or incomplete information. In building such expert systems, researchers have found that amassing a large amount of knowledge, rather than sophisticated reasoning techniques, is
responsible for most of the power of the system. Such high-performance expert
systems, previously limited to academic research projects, are beginning to enter
the commercial marketplace.
Building an expert system is generally considered feasible only when the following prerequisites are met:

1. There must be at least one human expert who is acknowledged to perform the task well.
2. The primary sources of the expert's abilities must be special knowledge, judg-
ment, and experience.
3. The expert must be able to articulate that special knowledge, judgement, and
experience and also explain the methods used to apply it to a particular task.
4. The task must have a well-bounded domain of application.
Sometimes an expert system can be built that does not exactly match these prere-
quisites; for example, the abilities of several human experts, rather than one, might
be brought to bear on a problem.
The structure of an expert system is modular. Facts and other knowledge about a particular domain can be separated from the inference procedure (or control structure) for applying those facts, while another part of the system, the global database, is the model of the "world" associated with a specific problem, its status, and its history. It is desirable, though not yet common, to have a natural-language interface to facilitate the use of the system both during development and use. In this arrangement, general knowledge about the domain is kept separate from information about the current problem (the input data) and from the methods (the inference machine) for applying the general knowledge to the problem. With this separation the program can be changed by simple modifications of the knowledge base. This is particularly true of rule-based systems, where the system can be changed by the simple addition or deletion of rules. An example of a production rule is: IF the power supply on the space shuttle fails, AND a backup power supply is available, AND the reason for the first failure no longer exists, THEN switch to the backup power supply. Rule-based systems work by applying rules, noting the results, and applying new rules based on the changed situation. They can also work by directed logical inference, either starting with the initial evidence in a situation and working toward a solution (forward chaining), or starting with hypotheses about possible solutions and working backward to find existing evidence, or a deduction from existing evidence, that supports a particular hypothesis (backward chaining).
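A minimal sketch of forward chaining in a rule-based system is given below; the rule set, written around the power-supply example above, is purely illustrative.

```python
# Illustrative forward chaining: apply any rule whose conditions hold, add its conclusion,
# and repeat until nothing new can be inferred.
rules = [
    ({"power_supply_failed", "backup_available", "failure_cause_cleared"}, "switch_to_backup"),
    ({"switch_to_backup"}, "power_restored"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)        # note the result and re-examine the rules
                changed = True
    return facts

print(forward_chain({"power_supply_failed", "backup_available", "failure_cause_cleared"}, rules))
```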
One of the earliest and most often applied expert systems is Dendral (Barr et
al. [1981, 1982]). It was devised in the late 1960s by Edward A. Feigenbaum and
Joshua Lederberg at Stanford University to generate plausible structural representa-
tions of organic molecules from mass spectrogram data. The approach called for deriving constraints from the data, generating candidate structures, predicting mass spectrograms for the candidates, and comparing the predictions with the data. This rule-based system, chaining forward from the data, illustrates the very common AI problem-solving approach of "generation and test." Dendral has been used
as a consultant by organic chemists for more than 15 years. It is currently recog-
nized as an expert in mass-spectral analysis.
One of the best-known expert systems is MYCIN (Barr et al. [1981, 1982]), designed by Edward Shortliffe at Stanford University in the mid-1970s. It is an interactive system that diagnoses bacterial infections and recommends antibiotic therapy. MYCIN represents expert judgmental reasoning as condition-conclusion rules, linking patient data to infection hypotheses, and at the same time it provides the expert's "certainty" estimate for each rule. It chains backward from hypothesized diagnoses, using rules to estimate the certainty factors of conclusions based on the certainty factors of their antecedents, to see if the evidence supports a diagnosis. If there is not enough information to narrow the hypotheses, it asks the physician for additional data, exhaustively evaluating all hypotheses. When it has finished, MYCIN matches treatments to all diagnoses that have high certainty values.
Another rule-based system, R1, has been very successful in configuring VAX computer systems from a customer's order of various standard and optional components. The initial version of R1 was developed by John McDermott in 1979 at Carnegie-Mellon University. Since the configuration problem can be solved without backtracking and without undoing previous steps, the system's approach is to break the problem up into an ordered sequence of subtasks that are handled in turn. At each point in the configuration development, several rules for what to do next
are usually applicable. Of the applicable rules, R1 selects the rule having the most
IF clauses for its applicability, on the assumption that that rule is more specialized
for the current situation. (R1 is written in OPS 5, a special language for executing
production rules.) The system now has about 1200 rules for VAXs, together with
information about some 1000 VAX components. The total system has about 3000
rules and knowledge about PDP-11 as well as VAX components.
10.10.3 Remarks
The application areas of expert systems include medical diagnosis and prescription, medical-knowledge automation, chemical-data interpretation, chemical and biological synthesis, mineral and oil exploration, planning and scheduling, signal interpretation, military threat assessment, tactical targeting, space defense, air-traffic control, circuit analysis, VLSI design, structure damage assessment, equipment fault diagnosis, computer-configuration selection, speech understanding, computer-aided instruction, knowledge-base access and management, and manufacturing process planning. As experience with such systems accumulates, the limitations of rule-based systems are becoming apparent: not all knowledge can be conveniently structured as rules. Newer systems therefore also employ other knowledge representations; besides providing representations more appropriate to the specific problem, they also tend to simplify the reasoning required. Some expert systems, using the "blackboard" approach, coordinate several sources of knowledge through a shared working memory.
In late 1971 and early 1972, two main approaches to robot planning were proposed. One approach, typified by the STRIPS system, is to have a fairly general robot planner which can solve robot problems in a great variety of worlds. The second approach is to select a specific robot world and, for that world, to write a computer program to solve problems. The first approach, like any general problem-solving process, requires a great deal of computing power for searching and inference in order to solve a reasonably complex real-world problem and, hence, has been regarded as computationally infeasible. On the other hand, the second approach lacks generality, in that a new set of computer programs must be written for each operating environment and, hence, significantly limits its flexibility. Powerful and efficient task planning algorithms are certainly in demand. Again, special-purpose computers can be used to speed up the computations in order to meet the real-time requirements.
Robot planning, which provides the intelligence and problem-solving capability
to a robot system, is still a very active area of research. For real-time robot appli-
cations, we still need powerful and efficient planning algorithms that will be exe-
cuted by high-speed special-purpose computer systems.
REFERENCES
Further general reading for the material in this chapter can be found in Barr et al.
[1981, 1982], Nilsson [1971, 1980], and Rich [1983]. The discussion in Secs. 10.2
and 10.3 is based on the material in Whitney [1969], Nilsson [1971], and Winston
[1984]. Further basic reading for Sec. 10.4 may be found in Chang and Lee
[1973]. Complementary reading for the material in Secs. 10.5 and 10.6 may be
found in Fikes and Nilsson [1971] and Rich [1983]. Additional reading and refer-
ences for the material in Sec. 10.7 can be found in Tangwongsan and Fu [1979].
Early representative references on robot task planning (Secs. 10.8 and 10.9)
are Doran [1970], Fikes et al. [1972], Siklossy and Dreussi [1973], Ambler and
Popplestone [1975], and Taylor [1976]. More recent work may be found in Kha-
tib [1980], Requicha and Voelcker [1982], and Davis and Comacho [1984]. Addi-
tional reading for the material in Sec. 10.10 may be found in Nau [1983], Hayes-Roth et al. [1983], and Weiss and Allanheld [1984].
PROBLEMS
10.1 Suppose that three missionaries and three cannibals seek to cross a river from the right bank to the left bank by boat. The maximum capacity of the boat is two persons. If the missionaries are outnumbered at any time by the cannibals, the cannibals will eat the missionaries. Propose a computer program to find a solution for the safe crossing of all six persons. Hint: Using the state-space representation and search methods described in Sec. 10.2, one can represent the state description by (Nm, Nc), where Nm and Nc are the numbers of missionaries and cannibals on the left bank, respectively. The initial state is (0,0), i.e., no missionaries or cannibals are on the left bank; the goal state is (3,3); and the possible intermediate states are (0,1), (0,2), (0,3), (1,1), (2,2), (3,0), (3,1), (3,2).
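One possible answer to Prob. 10.1, sketched below, uses the suggested state-space representation with breadth-first search; the state encoding (Nm, Nc, boat-on-left) and the move set are assumptions of this sketch.

```python
# Illustrative breadth-first state-space search for missionaries and cannibals.
from collections import deque

MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # people carried by the boat

def safe(m, c):
    """Both banks are safe: no bank has missionaries outnumbered by cannibals."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve(start=(0, 0, False), goal=(3, 3, True)):
    parents, frontier = {start: None}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        m, c, boat_left = state
        for dm, dc in MOVES:
            # people move toward the left bank when the boat is on the right, and back otherwise
            nm, nc = (m + dm, c + dc) if not boat_left else (m - dm, c - dc)
            nxt = (nm, nc, not boat_left)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and nxt not in parents:
                parents[nxt] = state
                frontier.append(nxt)
    return None

for step in solve():
    print(step)
```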
10.2 Imagine that you are a high school geometry student and find a proof for the theorem:
"The diagonals of a parallelogram bisect each other." Use an AND/OR graph to chart the
steps in your search for a proof. Indicate the solution subgraph that constitutes a proof of
the theorem.
10.3 Represent the following sentences by predicate logic wffs. (a) A formula whose main
connective is a = is equivalent to some formula whose main connective is a V. (b) A
robot is intelligent if it can perform a task which, if performed by a human, requires intelli-
gence. (c) If a block is on the table, then it is not also on another block.
10.4 Show how the monkey-and-bananas problem can be represented so that STRIPS would
generate a plan consisting of the following actions: go to the box, push the box under the
bananas, climb the box, grasp the bananas.
10.5 Show, step by step, how means-ends analysis could be used to solve the robot plan-
ning problem described in the example at the end of Sec. 10.4.
10.6 Show how the monkey-and-bananas problem can be represented so that STRIPS would
generate a plan consisting of the following actions: go to the box, push the box under the
bananas, climb the box, grab the bananas. Use means-ends analysis as the control strategy.
APPENDIX
A
VECTORS AND MATRICES
This appendix contains a review of basic vector and matrix algebra. In the follow-
ing discussion, vectors are represented by lowercase bold letters, while matrices
are in uppercase bold type.
The quantities of physics can be divided into two classes, namely, those having
magnitude only and those having magnitude and direction. A quantity character-
ized by magnitude only is called a scalar. Time, mass, density, length, and coor-
dinates are scalars. A scalar .is usually represented by a real number with some
unit of measurement. Scalars can be compared only if they have the same units. A quantity characterized by both magnitude and direction is called a vector. A vector is commonly represented by a directed line segment whose length and direction correspond to the magnitude and direction of the quantity under con-
sideration. Vectors can be compared only if they have the same physical meaning
and dimensions.
Two vectors a and b are equal if they have the same length and direction. The
notation −a is used to represent a vector having the same magnitude as a but in the opposite direction. Associated with vector a is a positive scalar equal to its magnitude (or length),

    a = |a|                                   (A.1)

A vector whose magnitude is unity is called a unit vector,

    |a| = 1                                   (A.2)
    a + b = b + a                              (A.4)

    (a + b) + c = a + (b + c) = a + b + c      (A.5)

The product of a scalar m and a vector a is a vector b whose magnitude is |m| times the magnitude of a, in the same direction as a if m > 0 and in the opposite direction if m < 0. Thus,

    b = ma                                     (A.6)

    (m + n)a = ma + na

where m and n are scalars.
A linear vector space V is a nonempty set of vectors defined over a real number
field F, which satisfies the following conditions of vector addition and multiplica-
tion by scalars:
1. For any two vector elements of V, the sum is also a vector element belonging
to V.
2. For any two vector elements of V, vector addition is commutative.
3. For any three vector elements of V, vector addition is associative.
4. There is a unique element called the zero vector in V (denoted by 0) such that
    0 + a = a + 0 = a

5. For every vector element a ∈ V, there is a unique vector (−a) ∈ V such that

    a + (−a) = 0

6. For every vector element a ∈ V and for any scalar m ∈ F, the product of m and a is another vector element in V. If m = 1, then

    ma = 1a = a1 = a
7. For any scalars m and n in F, and any vectors a and b in V, multiplication by
scalars is distributive.
m(a + b) = ma + mb
(m + n)a = ma + na
8. For any scalars m and n in F, and any vector a in V, multiplication by scalars is associative: (mn)a = m(na).
Examples of linear vector space are the sets of all real one-, two- or three-
dimensional vectors.
A finite set of vectors {x1, x2, ..., xn} in V is linearly dependent if and only if there exist n scalars {c1, c2, c3, ..., cn} in F (not all equal to zero) such that

    c1x1 + c2x2 + c3x3 + ··· + cnxn = 0        (A.9)

If the only way to satisfy this equation is for each scalar ci to be identically equal to zero, then the set of vectors {xi} is said to be linearly independent.
Two linearly dependent vectors in a three-dimensional space are collinear.
That is, they lie in the same line. Three linearly dependent vectors in three-
dimensional space are coplanar, that is, they lie in the same plane.
For example, three vectors a, b, and c constitute a linearly dependent set if
3a - 2b - c = 0
These vectors also are coplanar.
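Numerically, linear dependence can be checked from the rank of the matrix whose rows are the vectors; the sketch below (with made-up vectors constructed so that 3a − 2b − c = 0) is illustrative only.

```python
# Illustrative check of linear dependence and coplanarity with numpy.
import numpy as np

a = np.array([1.0, 2.0, 0.0])      # hypothetical example vectors
b = np.array([0.0, 1.0, 1.0])
c = 3 * a - 2 * b                  # forces the dependence 3a - 2b - c = 0

rank = np.linalg.matrix_rank(np.vstack([a, b, c]))
print("rank =", rank, "-> linearly dependent" if rank < 3 else "-> independent")
print("coplanar (scalar triple product = 0):", np.isclose(np.dot(a, np.cross(b, c)), 0.0))
```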
If there exists a subset of vectors {e1, e2, ..., en} in V and a set of scalars {c1, c2, ..., cn} in F such that

    x = c1e1 + c2e2 + ··· + cnen               (A.10)

then we say that x is a linear combination of the vectors {ei}. The set of vectors {ei} is said to span the vector space V.
The basis vectors for a vector space V are a set of linearly independent vectors that span the vector space V. In other words, the basis vectors are the minimum number of vectors that span the vector space V. One can choose different sets of basis vectors to span a vector space V. However, once a set of basis vectors is chosen to span a vector space V, every vector x ∈ V can be expressed uniquely as a linear combination of the basis vectors.

The dimension of a vector space V is equal to the number of basis vectors that span the vector space V. Thus, an n-dimensional linear vector space has n basis vectors. We shall use the notation Vn to represent a vector space of dimension n.
Given a set of n basis vectors {e1, e2, ..., en} for a vector space Vn, it follows from Sec. A.6 that any vector r ∈ Vn can be expressed uniquely as a linear combination of the basis vectors,

    r = r1e1 + r2e2 + ··· + rnen               (A.11)

In three-dimensional space it is customary to let the unit vectors {i, j, k} denote the basis vectors instead of {e1, e2, e3} (see Fig. A.4).
If a right-handed rotation of 90° about OZ carries OX into OY, the coordinate system is called a right-handed coordinate system; if the principal axes are such that a left-handed rotation of 90° about OZ carries OX into OY, then the coordinate system is called a left-handed coordinate system. Throughout this book, we use only right-handed coordinate systems.
The scalar (or dot) product of two vectors a and b is defined as

    a · b = |a| |b| cos θ                      (A.12)

where θ is the angle between the two vectors (see Fig. A.5). The scalar quantity |b| cos θ is the projection of b on a, and |a| cos θ is the projection of a on b, so that

    a · b = |a| (|b| cos θ) = |b| (|a| cos θ)  (A.13)

The scalar product is commutative:

    a · b = b · a                              (A.15)
If the scalar product of a and b is zero, then either (or both) of the vectors is zero or they are perpendicular to each other, because cos(±90°) = 0. Thus, two nonzero vectors a and b are orthogonal if and only if their scalar product is zero. Since the inner product may be zero when neither vector is zero, it follows that division by a vector is prohibited; if a · b = a · c, it does not necessarily follow that b = c. The scalar product is distributive over addition:

    a · (b + c) = a · b + a · c                (A.18)

    (b + c) · a = b · a + c · a                (A.19)
The vector (or cross) product of two vectors a and b is a vector

    c = a × b                                  (A.20)

whose magnitude is |a| |b| sin θ and whose direction is perpendicular to the plane of a and b in the right-handed sense (see Fig. A.6). In Fig. A.6, since h = |b| sin θ, the cross product a × b has a magnitude equal to the area of the parallelogram formed with sides a and b.
The cross product a × b can also be considered as the result obtained by projecting b on the plane WXYZ perpendicular to the plane of a and b, rotating the projection 90° in the positive direction about a, and then multiplying the resulting vector by |a|.
The cross product b × a has the same magnitude as a × b but points in the opposite direction. Hence,

    b × a = −(a × b)                           (A.22)

and the cross product is not commutative. If vectors a and b are parallel, then θ is 0° or 180° and

    a × b = 0                                  (A.23)
Conversely, if the cross product is zero, then one (or both) of the vectors is zero or else they are parallel. Also, we note that the cross product is distributive over addition, that is,

    a × (b + c) = a × b + a × c                (A.24)

    (b + c) × a = b × a + c × a                (A.25)
Applying the scalar and cross products to the unit vectors i, j, k along the principal axes of a right-handed cartesian coordinate system, we have

    i · i = j · j = k · k = 1
    i · j = j · k = k · i = 0
    i × i = j × j = k × k = 0                  (A.26)
    i × j = −j × i = k
    j × k = −k × j = i
    k × i = −i × k = j
Using the definition of components and Eq. (A.26), the scalar product of a and b can be written as

    a · b = a1b1 + a2b2 + a3b3 = aTb           (A.27)

where aT indicates the transpose of a (see Sec. A.12). The cross product of a and b can be written as a determinant operation (see Sec. A.15),

            | i   j   k  |
    a × b = | a1  a2  a3 |                     (A.28)
            | b1  b2  b3 |

Three products involving three vectors are of interest: (a · b)c, the scalar triple product a · (b × c), and the vector triple product a × (b × c) (A.29). The scalar triple product is a scalar whose value can be positive
or negative; its absolute value is

    |a · (b × c)| = hA = volume of the parallelepiped

where h and A are, respectively, the height and base area of the parallelepiped formed with sides a, b, and c.
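The component formulas above are easy to verify numerically; the following sketch (with made-up vectors) checks the dot product, the cross product, and the parallelepiped-volume interpretation of the scalar triple product.

```python
# Illustrative numeric check of the dot, cross, and scalar triple products.
import numpy as np

a = np.array([1.0, 2.0, 3.0])     # hypothetical example vectors
b = np.array([0.0, 1.0, 4.0])
c = np.array([2.0, 0.0, 1.0])

print("a . b  =", np.dot(a, b))                 # a1*b1 + a2*b2 + a3*b3, Eq. (A.27)
print("a x b  =", np.cross(a, b))               # determinant rule, Eq. (A.28)
print("b x a  =", np.cross(b, a))               # equals -(a x b), Eq. (A.22)
volume = abs(np.dot(a, np.cross(b, c)))         # |a . (b x c)| = parallelepiped volume
print("volume =", volume)
```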
Expressing the vectors in terms of their components in a three-dimensional vector space yields

                  | ax  ay  az |
    a · (b × c) = | bx  by  bz |               (A.31)
                  | cx  cy  cz |
Note that the parentheses around the vector product b × c can be omitted without confusion, since a · b × c cannot be interpreted as (a · b) × c, which is undefined. The scalar triple product is unchanged by a cyclic permutation of the three vectors,

    a · (b × c) = b · (c × a) = c · (a × b)    (A.32)

and changes sign under an anticyclic permutation,

    a · (b × c) = −a · (c × b) = −b · (a × c) = −c · (b × a)    (A.33)

These results can be readily shown from the properties of determinants (see Sec. A.15). An illustration of cyclic permutation is shown in Fig. A.8. By following a clockwise traversal along the circle, we obtain Eq. (A.32). Similarly, reversing the direction of the arrows, we obtain Eq. (A.33). Finally,
the scalar triple product can be used to prove linear dependence of three coplanar vectors. If three vectors a, b, and c are coplanar, then (abc) = 0. Hence, if two of the three vectors are equal, the scalar triple product vanishes. It follows that if e1, e2, and e3 are basis vectors for a vector space V3, then (e1e2e3) ≠ 0; they form a right-handed coordinate system if (e1e2e3) > 0 and a left-handed coordinate system if (e1e2e3) < 0.
The vector triple product, a × (b × c), is a vector perpendicular to (b × c) and lying in the plane of b and c (see Fig. A.9). Suppose that the vector a × (b × c) is expressed as a linear combination of b and c,

    a × (b × c) = mb + nc                      (A.34)

Since a × (b × c) is perpendicular to a, taking the scalar product of Eq. (A.34) with a gives m(a · b) + n(a · c) = 0, so that

    m/(a · c) = −n/(a · b) = λ                 (A.36)

where m, n, and λ are scalars, and Eq. (A.34) becomes

    a × (b × c) = (a · c)b − (a · b)c          (A.38)
More complicated cases involving four or more products can be simplified by the use of these results. For example,

    (a × b) × (c × d) = (a × b · d)c − (a × b · c)d = (abd)c − (abc)d           (A.40)

and

    (a × b) · (c × d) = a · [b × (c × d)] = (b · d)(a · c) − (b · c)(a · d)     (A.41)
If r(t) is a vector function of the scalar variable t, the derivative of r with respect to t is defined as

    dr/dt = lim(Δt→0) Δr/Δt = lim(Δt→0) [r(t + Δt) − r(t)]/Δt        (A.42)

In terms of components,

    dr/dt = (drx/dt) i + (dry/dt) j + (drz/dt) k                     (A.43)

and, for the nth derivative,

    dⁿr/dtⁿ = (dⁿrx/dtⁿ) i + (dⁿry/dtⁿ) j + (dⁿrz/dtⁿ) k             (A.44)
Using Eq. (A.42), the following rules for differentiating vector functions can be obtained:

    (1)  d(a + b)/dt = da/dt + db/dt

    (2)  d(ma)/dt = m da/dt          where m is a scalar

    (3)  d(a · b)/dt = (da/dt) · b + a · (db/dt)

    (4)  d(a × b)/dt = (da/dt) × b + a × (db/dt)

    (5)  d(abc)/dt = (da/dt · b × c) + (a · db/dt × c) + (a · b × dc/dt)

    (6)  d[a × (b × c)]/dt = (da/dt) × (b × c) + a × (db/dt × c) + a × (b × dc/dt)
Integration of a vector function is carried out component by component; if a(t) is the integral of b(t), then

    ax(t) = ∫ bx(τ) dτ + cx        ay(t) = ∫ by(τ) dτ + cy        az(t) = ∫ bz(τ) dτ + cz

where cx, cy, and cz are constants of integration.
An m × n matrix A is an ordered rectangular array of elements arranged in m rows and n columns,

    A = [aij]        i = 1, 2, ..., m;   j = 1, 2, ..., n            (A.48)

where aij denotes the element in the ith row and jth column. The transpose of A, denoted by AT, is the n × m matrix obtained by interchanging the rows and columns of A: if A = [aij] (A.49), then

    AT = [aji]        i = 1, 2, ..., n;   j = 1, 2, ..., m           (A.50)
In particular, the transpose of a column matrix is a row matrix and vice versa.
A square matrix of order n has an equal number of rows and columns (i.e., m = n). A diagonal matrix is a square matrix of order n whose off-diagonal elements are zero; that is, aij = 0 for i ≠ j. A unit matrix of order n is a diagonal matrix whose diagonal elements are all unity; that is, aij = 1 if i = j and aij = 0 if i ≠ j. This matrix is called the identity matrix and is denoted by In or I.

A symmetric matrix is a square matrix of order n whose transpose is identical to itself; that is, A = AT, or aij = aji for all i and j. If the elements of a square matrix satisfy

    aij = −aji        and        aii = 0        (A.52)

then the matrix is called a skew matrix. It is noted that if A is skew, then A = −AT. Any square matrix A can be written as the sum of a symmetric matrix B and a skew matrix C, where

    B = (A + AT)/2        C = (A − AT)/2        (A.53)
A null matrix is a matrix whose elements are all identically equal to zero. Two matrices of the same order are equal if their respective elements are equal; that is, if aij = bij for all i and j, then A = B.

Two matrices A and B of the same order can be added (subtracted), forming a resultant matrix C of the same order, by adding (subtracting) corresponding elements, so that cij = aij ± bij. Matrix addition satisfies the following properties:
(1) A + B = B + A
(2) (A+B)+C=A+(B+C)
(3) A + 0 = A (0 is the zero or null matrix) (A.56)
(4) A + (-A) = 0
Multiplication of a matrix by a scalar satisfies the following properties:
(1) a(A+B)=aA+aB
(2) (a+b)A=aA+bA
(3) a(bA) = (ab)A (A.57)
(4) 1A=A
where a and b are scalars.
Two matrices can be multiplied together only if they are conformable; that is, the number of columns of A must equal the number of rows of B. The product C = AB of an m × n matrix A and an n × p matrix B is the m × p matrix with elements cij = Σk aik bkj. In general, matrix multiplication is not commutative, AB ≠ BA. If AB = BA, then we say the matrices are commutative. The unit matrix commutes with any square matrix,

    IA = AI = A                                (A.59)

Matrix multiplication is associative and distributive over addition:

    (1)  A(B + C) = AB + AC
    (2)  (AB)C = A(BC)
    (3)  (A + B)C = AC + BC
    (4)  C(A + B) = CA + CB

assuming that the matrix multiplications are defined. From rule (2), we see that for the product of three matrices, we can either postmultiply B by C or premultiply B by A first and then multiply the result by the remaining matrix. In general, AB = 0 does not imply that A = 0 or B = 0. It is also worth noting how dimensions combine in matrix products; for example, a 1 × n row matrix times an n × m matrix yields a 1 × m row matrix.
A.15 DETERMINANTS
The determinant of a square matrix A of order n is a scalar, denoted by

    |A| = det A                                (A.61)

and is equal to the sum of the products of the elements of any row or column and their respective cofactors; that is,

    |A| = Σ(j=1..n) aij Aij = Σ(i=1..n) aij Aij        (A.62)

where the cofactor is

    Aij = (−1)^(i+j) Mij                       (A.63)
where Mij is the complementary minor, obtained by deleting the elements in the ith row and the jth column of |A| and taking the determinant of the remaining (n − 1) × (n − 1) array. For a 2 × 2 matrix,

    |A| = | a11  a12 | = a11 a22 − a21 a12     (A.64)
          | a21  a22 |
The following properties are useful for simplifying the evaluation of determinants:

1. If all the elements of any row (or column) of A are zero, then |A| = 0.
2. |A| = |AT|.
3. If any two rows (or columns) of A are interchanged, then the sign of its determinant is changed.
4. If A and B are of order n, then |AB| = |A| |B|.
5. If all the elements of any row (or column) of A are multiplied by a scalar k, then the determinant is multiplied by k.
6. If any two rows (or columns) of A are identical, then the determinant is zero.
7. If a multiple of any row (or column) is added to another row (or column), then the value of the determinant is unchanged.
Example: Let

        | 1  a  a² |
    A = | 1  b  b² |
        | 1  c  c² |

Then,

    |A| = (a − b)(b − c)(c − a)

This is the Vandermonde determinant of order 3.
If A is a square matrix and Aij is the cofactor of aij in |A|, then the transpose of the matrix formed from the cofactors, [Ac] = [Aij], is called the adjoint of A, and

    A⁻¹ = [Ac]T / |A| = adj A / |A|            (A.67)
The product (in either order) of a nonsingular n × n matrix A and its inverse is the identity matrix In; that is,

    A A⁻¹ = In        and        A⁻¹ A = In    (A.70)
If A1, A2, ..., An are nonsingular square matrices of order n, then the inverse of their product is the product of the inverses of each matrix in reverse order:

    (A1 A2 ··· An)⁻¹ = An⁻¹ ··· A2⁻¹ A1⁻¹      (A.71)

Similarly, if the matrix product A1 A2 ··· An is conformable, then the transpose of their product is the product of the transposes of each matrix in reverse order:

    (A1 A2 A3 ··· An)T = AnT ··· A3T A2T A1T   (A.72)
In general, a 2 × 2 matrix

    A = | a  c |
        | b  d |

has the inverse

    A⁻¹ = 1/(ad − bc) |  d  −c |
                      | −b   a |

Similarly, a 3 × 3 matrix

        | a  b  c |
    A = | d  e  f |
        | g  h  i |

has the inverse A⁻¹ = adj A / |A|, where |A| = a(ei − fh) − b(di − fg) + c(dh − eg).
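The adjoint formula can be checked numerically; the sketch below (with a made-up matrix) builds the matrix of cofactors, forms adj A / |A|, and compares the result with a library inverse.

```python
# Illustrative check of A^{-1} = adj(A) / |A| using the cofactor definition.
import numpy as np

def inverse_by_adjoint(A):
    n = A.shape[0]
    cof = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # complementary minor M_ij
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)      # cofactor A_ij
    return cof.T / np.linalg.det(A)                                 # adj(A) / |A|

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(inverse_by_adjoint(A))
print(np.allclose(inverse_by_adjoint(A), np.linalg.inv(A)))   # agrees with numpy's inverse
```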
An important result, called the matrix inversion lemma, may be stated as follows: if A and C are nonsingular, then

    (A + BCD)⁻¹ = A⁻¹ − A⁻¹B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹
REFERENCES
Further reading for the material in this appendix may be found in Frazer et al. [1960], Bell-
man [1970], Pipes [1963], Thrall and Tornheim [1963], and Noble [1969].
APPENDIX
B
MANIPULATOR JACOBIAN
One advantage of resolved motion is that there exists a linear mapping between the
infinitesimal joint motion space and the infinitesimal hand motion space. This
mapping is defined by a jacobian. This appendix reviews three methods for
obtaining the jacobian for a six-link manipulator with rotary or sliding joints.
The position and orientation of the hand can be described by a six-dimensional vector x(t) = [pT(t), φT(t)]T, where, as before, the superscript T denotes the transpose operation. Based on the moving coordinate frame concept (Whitney [1972]), the linear and angular velocities of the hand can be obtained from the velocities of the lower joints:

    | v(t) |  =  J(q) q̇(t)                                            (B.2)
    | Ω(t) |
where J(q) is a 6 × 6 matrix whose ith column vector Ji(q) is given by (Whitney [1972]):

    Ji(q) = | zi-1 × (i-1)p6 |   if joint i is rotational
            |      zi-1      |
                                                                      (B.3)
    Ji(q) = |      zi-1      |   if joint i is translational
            |       0        |

and q̇(t) = [q̇1(t), ..., q̇6(t)]T is the joint velocity vector of the manipulator, × indicates the cross product, (i-1)p6 is the position of the origin of the hand coordinate frame from the (i − 1)th coordinate frame expressed in the base coordinate frame, and zi-1 is the unit vector along the axis of motion of joint i expressed in the base coordinate frame.
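As an illustration of Eq. (B.3), the sketch below assembles the geometric jacobian column by column for a hypothetical planar arm with revolute joints about z; the arm and its link lengths are assumptions of this sketch, not the PUMA of the text.

```python
# Illustrative geometric jacobian: column i is [ z_{i-1} x (p6 - p_{i-1}) ; z_{i-1} ].
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def jacobian(thetas, link_lengths):
    """Geometric jacobian of a planar arm with all-revolute joints about z."""
    n = len(thetas)
    # forward pass: frame origins p_i and joint axes z_i, all in base coordinates
    p, z, R = [np.zeros(3)], [np.array([0.0, 0.0, 1.0])], np.eye(3)
    for theta, a in zip(thetas, link_lengths):
        R = R @ rot_z(theta)
        p.append(p[-1] + R @ np.array([a, 0.0, 0.0]))
        z.append(R @ np.array([0.0, 0.0, 1.0]))
    p_end = p[-1]
    J = np.zeros((6, n))
    for i in range(n):
        J[:3, i] = np.cross(z[i], p_end - p[i])   # linear-velocity part
        J[3:, i] = z[i]                           # angular-velocity part
    return J

print(jacobian([0.3, 0.5], [1.0, 0.8]))
```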
For a six-link manipulator with rotary joints, each column of the jacobian takes the rotational form given above. For the PUMA robot manipulator shown in Fig. 2.11 and its link coordinate transformation matrices in Fig. 2.13, the elements of the jacobian are found to be:
    J1(θ) = | −S1[d6(C23C4S5 + S23C5) + S23d4 + a3C23 + a2C2] − C1(d6S4S5 + d2) |
            |  C1[d6(C23C4S5 + S23C5) + S23d4 + a3C23 + a2C2] − S1(d6S4S5 + d2) |
            |  0                                                                |
            |  0                                                                |
            |  0                                                                |
            |  1                                                                |
r d6 C1 d2 CI
d6SI S4
S4S5
S5 ++ d2SI
J2z
J2(0) =
- SI
CI
0
where
Cl
d6S1S4S5
d6 S4 S5 I
J3z
J3(0) =
- S1
C1
0
where
J3z = −S1[d6S3C4S5 − d6C3C5 − d4C3 + a3S3] − C1[d6C3C4S5 + d6S3C5 + d4S3 + a3C3]
SI S23
C23
d6S23S4C5
d6 S23 S4 S5
d6 Cl C23 S4 C5 + d6 Sl C4 C5 + d6 S1 C23 S4 S5 - d6 Cl C4 S5
J5(6) = - C1 C23 S4 - S1 C4
- S1 C23 S4 + C1 C4
S23 S4
If it is desired to control the manipulator hand along or about the hand coordinate axes, then one needs to express the linear and angular velocities in hand coordinates. This can be accomplished by premultiplying v(t) and Ω(t) by the 3 × 3 rotation matrix [0R6]T, where 0R6 is the hand rotation matrix which relates the orientation of the hand coordinate frame to the base coordinate frame. Thus,

    | 6v(t) |     | [0R6]T     0    |
    |        |  =  |                |  J(q) q̇(t)                       (B.5)
    | 6Ω(t) |     |    0    [0R6]T  |
A differential change in the homogeneous transformation T, caused by a differential translation d = (dx, dy, dz)T and a differential rotation δ = (δx, δy, δz)T with respect to the base coordinate frame, can be written as

    T + dT = trans(dx, dy, dz) rot(δ) T                              (B.6)

or

    dT = [trans(dx, dy, dz) rot(δ) − I] T = Δ T                      (B.7)

where

          |  0   −δz   δy  dx |
    Δ  =  |  δz    0  −δx  dy |                                       (B.8)
          | −δy   δx    0  dz |
          |  0     0    0   0 |

δ = (δx, δy, δz)T is the differential rotation about the principal axes of the base coordinate frame, and d = (dx, dy, dz)T is the differential translation along the principal axes of the base coordinate frame. Similarly, if the differential translation and rotation are made with respect to the T coordinate frame itself, then
    T + dT = T trans(dx, dy, dz) rot(δ)                              (B.9)

or

    dT = T [trans(dx, dy, dz) rot(δ) − I] = (T)(TΔ)                   (B.10)
where TΔ has the same structure as Δ in Eq. (B.8), except that the definitions of δ and d are different: δ = (δx, δy, δz)T is the differential rotation about the principal axes of the T coordinate frame, and d = (dx, dy, dz)T is the differential translation along the principal axes of the T coordinate frame. From Eqs. (B.7) and (B.10),

    Δ T = (T)(TΔ)

or

    TΔ = T⁻¹ Δ T                                                      (B.11)
the principal axes of the base coordinate frame. Using the vector identities

    x · (y × z) = −y · (x × z) = y · (z × x)        and        x · (x × y) = 0
we can carry out the multiplication in Eq. (B.11) with T = [n, s, a, p] to obtain Eq. (B.13). Noting that

    n × s = a        s × a = n        a × n = s

Eq. (B.13) becomes

           |   0      −δ · a    δ · s    δ · (p × n) + d · n |
    TΔ  =  |  δ · a      0     −δ · n    δ · (p × s) + d · s |        (B.14)
           | −δ · s    δ · n      0      δ · (p × a) + d · a |
           |   0         0        0              0           |
Since TΔ has the same structure as Eq. (B.8), with Tδ and Td in place of δ and d [Eq. (B.15)], equating the matrix elements of Eqs. (B.14) and (B.15) gives

    Tdx = n · [(δ × p) + d]
    Tdy = s · [(δ × p) + d]
    Tdz = a · [(δ × p) + d]
    Tδx = n · δ                                                       (B.16)
    Tδy = s · δ
    Tδz = a · δ
or, in matrix form,

    | Tdx |     |                                              |   | dx |
    | Tdy |     |  [n, s, a]T    [(p × n), (p × s), (p × a)]T  |   | dy |
    | Tdz |  =  |                                              |   | dz |
    | Tδx |     |      0                 [n, s, a]T            |   | δx |     (B.17)
    | Tδy |     |                                              |   | δy |
    | Tδz |     |                                              |   | δz |

where 0 is a 3 × 3 zero submatrix. Equation (B.17) shows the relation of the differential translation and rotation in the base coordinate frame to the differential translation and rotation with respect to the T coordinate frame.
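The relations (B.11) and (B.16) can be checked numerically; in the sketch below the transformation T and the differential motions are made-up values, and the comparison confirms that T⁻¹ Δ T reproduces the closed-form translation and rotation components.

```python
# Illustrative numeric check of T_delta = T^{-1} Delta T against Eq. (B.16).
import numpy as np

def delta_matrix(delta, d):
    dx, dy, dz = d
    ax, ay, az = delta
    return np.array([[0.0, -az,  ay, dx],
                     [ az, 0.0, -ax, dy],
                     [-ay,  ax, 0.0, dz],
                     [0.0, 0.0, 0.0, 0.0]])

# hypothetical transformation T = [n s a p] and small differential motions
T = np.array([[0.0, 0.0, 1.0, 0.2],
              [1.0, 0.0, 0.0, 0.5],
              [0.0, 1.0, 0.0, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
delta = np.array([0.01, -0.02, 0.03])
d     = np.array([0.001, 0.002, -0.001])

T_delta = np.linalg.inv(T) @ delta_matrix(delta, d) @ T      # Eq. (B.11)

n, s, a, p = T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3]
Td  = np.array([v @ (np.cross(delta, p) + d) for v in (n, s, a)])    # Eq. (B.16), translation
Trot = np.array([v @ delta for v in (n, s, a)])                      # Eq. (B.16), rotation
print(np.allclose(T_delta[:3, 3], Td),
      np.allclose([T_delta[2, 1], T_delta[0, 2], T_delta[1, 0]], Trot))
```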
Applying Eq. (B.10) to the kinematic equation of a serial six-link manipulator, we obtain the differential of 0T6 as the sum, over the six joints, of the differential changes contributed by each joint [Eq. (B.19)], where i-1Δi is the differential change transformation along/about the joint i axis of motion and is defined to be

             |  0   −dθi   0   0 |
    i-1Δi =  | dθi    0    0   0 |        if link i is rotational
             |  0     0    0   0 |
             |  0     0    0   0 |
                                                                      (B.20)
             |  0     0    0   0  |
    i-1Δi =  |  0     0    0   0  |       if link i is translational
             |  0     0    0  ddi |
             |  0     0    0   0  |
From Eq. (B.19), we obtain T6Δ due to the differential change in joint i motion:

    T6Δ = (i-1Ai iAi+1 ··· 5A6)⁻¹ (i-1Δi) (i-1Ai iAi+1 ··· 5A6)        (B.21)

Let

          | nx  sx  ax  px |
    Ui =  | ny  sy  ay  py |                                           (B.22)
          | nz  sz  az  pz |
          | 0   0   0   1  |

denote the product i-1Ai iAi+1 ··· 5A6. Using Eqs. (B.20) and (B.22) for the case of rotary joint i, Eq. (B.21) becomes

           |  0   −az   sz   px ny − py nx |
    T6Δ =  |  az    0  −nz   px sy − py sx |  dθi                      (B.23)
           | −sz   nz    0   px ay − py ax |
           |  0     0    0         0       |
Similarly, for a translational joint i,

           | 0  0  0  nz |
    T6Δ =  | 0  0  0  sz |  ddi                                        (B.24)
           | 0  0  0  az |
           | 0  0  0  0  |
From the elements of T6Δ defined in Eq. (B.15), equating the elements of the matrices in Eq. (B.15) and Eq. (B.23) [or Eq. (B.24)] yields

    | T6dx |     | px ny − py nx |
    | T6dy |     | px sy − py sx |
    | T6dz |  =  | px ay − py ax |  dθi        if link i is rotational        (B.25)
    | T6δx |     | nz            |
    | T6δy |     | sz            |
    | T6δz |     | az            |
Thus, the jacobian of a manipulator can be obtained from Eq. (B.25) for
i = 1,2,...,6:
    | T6dx |            | dq1 |
    | T6dy |            | dq2 |
    | T6dz |  =  J(q)   | dq3 |                                        (B.26)
    | T6δx |            | dq4 |
    | T6δy |            | dq5 |
    | T6δz |            | dq6 |
where the columns of the jacobian matrix are obtained from Eq. (B.25). For the
PUMA robot manipulator shown in Fig. 2.11 and its link coordinate transforma-
tion matrices in Fig. 2.13, the jacobian is found to be:
Jlx
fly
Jl z
JI(0) _ - S23 (C4 C5 C6 - S4 S6) + C23 S5 C6
-523C455 + C23C5
where
Jlx = [ d6 (C23 C4 S5 + S23 C5) + d4 S23 + a3 C23 + a2 C2 ] (S4 C5 C6 + C4 S6 )
J2x
J2y
J2z
J2(0) = S4 C5 C6 + C4 S6
-S4C5S6 + C4 S6
S4 S5
where
- (- d6 C3 C5 + d6 S3 C4 S5 - d4 C3 + a3 S3) (C4 C5 C6 - S4 S6 )
J2 y = - (d6 S3 C5 + d6 C3 C4 S5 + d4 S3 + a3 C3 + a2 ) (S5 S6 )
+ (- d6 C3 C5 + d6 S3 C4 S5 - d4 C3 + a3 S3 ) (C4 C5 S6 + S4 C6 )
J2z = - (d6 S3 C5 + d6 C3 C4 S5 + d4 S3 + a3 C3 + a2 ) C5
S4S5
d6 S5 S6
d6 S5 C6
0
J4(0) =
- S5 C6
S5 S6
C5
I d6 C6 I
-d6 S6
0
J5(0) =
S6
C6
0
JACOBIAN FROM THE NEWTON-EULER EQUATIONS OF MOTION
The above two methods derive the jacobian in symbolic form. It is possible to
numerically obtain the elements of the jacobian at time t explicitly from the
Newton-Euler equations of motion. This is based on the observation that the ratios
of infinitesimal hand accelerations to infinitesimal joint accelerations are the ele-
ments of the jacobian if the nonlinear components of the accelerations are deleted
from the Newton-Euler equations of motion. From Eq. (B.2), the accelerations of
the hand can be obtained by taking the time derivative of the velocity vector:
    | v̇(t) |  =  J(q) q̈(t) + J̇(q, q̇) q̇(t)                            (B.27)
    | Ω̇(t) |

For a rotary joint i, the recursive Newton-Euler equations give the angular acceleration of link i as

    iR0 ω̇i = iRi-1 [ i-1R0 ω̇i-1 + z0 q̈i + (i-1R0 ωi-1) × z0 q̇i ]      (B.29)
The terms in Eqs. (B.29) and (B.30) involving wi represent nonlinear Coriolis and
centrifugal accelerations as indicated by the third term in Eq. (B.29) and the
second term in Eq. (B.30). Omitting these terms in Eqs. (B.29) and (B.30) give us
the linear relation between the hand accelerations and the joint accelerations. Then
if we successively apply input unit joint acceleration vectors (q̈1, q̈2, ..., q̈6)T = (1, 0, 0, ..., 0)T, (0, 1, 0, ..., 0)T, ..., (0, 0, 0, ..., 1)T, the columns of the jacobian matrix
can be "strobed" out because the first term in Eq. (B.27) is linear and the second
(nonlinear) term is neglected. This numerical technique takes about 24n(n + 1)/2
multiplications and 19n(n + 1)/2 additions, where n is the number of degrees of
freedom. In addition, we need 18n multiplications and 12n additions to convert
the hand accelerations from referencing its own link coordinate frame to referenc-
ing the hand coordinate frame.
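The strobing idea can be sketched as follows; since the book's Newton-Euler routine is not reproduced here, a hypothetical linear function stands in for the simplified (velocity-free) equations of motion.

```python
# Illustrative "strobing" of jacobian columns with unit joint accelerations.
import numpy as np

def strobe_jacobian(hand_acceleration, q, n=6):
    J = np.zeros((6, n))
    qd = np.zeros(n)                       # zero joint velocities: no Coriolis/centrifugal terms
    for i in range(n):
        qdd = np.zeros(n)
        qdd[i] = 1.0                       # unit acceleration at joint i
        J[:, i] = hand_acceleration(q, qd, qdd)   # reads out the i-th column of J(q)
    return J

# usage with a made-up linear model standing in for the Newton-Euler equations:
J_true = np.random.default_rng(0).normal(size=(6, 6))
fake_dynamics = lambda q, qd, qdd: J_true @ qdd
print(np.allclose(strobe_jacobian(fake_dynamics, q=np.zeros(6)), J_true))
```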
Although these three methods are "equivalent" for finding the jacobian, this
"strobing" technique is well suited for a controller utilizing the Newton-Euler
equations of motion. Since parallel computation schemes have been discussed and
developed for computing the joint torques from the Newton-Euler equations of
motion (Lee and Chang [1986b]), the jacobian can be computed from these
schemes as a by-product. However, the method suffers from the fact that it only
gives the numerical values of the jacobian and not its analytic form.
REFERENCES
Further reading for the material in this appendix may be found in Whitney [1972], Paul
[1981], and Orin and Schrader [1984].
BIBLIOGRAPHY
Aggarwal, J. K., and Badler, N. I. (eds.) [1980]. "Motion and Time Varying Imagery,"
Special Issue, IEEE Trans. Pattern Anal. Machine Intelligence, vol. PAMI-2, no. 6,
pp. 493-588.
Agin, G. J. [1972]. "Representation and Description of Curved Objects," Memo AIM-173,
Artificial Intelligence Laboratory, Stanford University, Palo Alto, Calif.
Albus, J. S. [1975]. "A New Approach to Manipulator Control: The Cerebellar Model
Articulation Controller," Trans. ASME, J. Dynamic Systems, Measurement and Control,
obi
pp. 220-227.
Ambler, A. P., et al. [1975]. "A Versatile System for Computer Controlled Assembly,"
Artificial Intelligence, vol. 6, no. 2, pp. 129-156.
Ambler, A. P., and Popplestone, R. J. [1975]. "Inferring the Positions of Bodies from
Specified Spatial Relationships," Artificial Intelligence, vol. 6, no. 2, pp. 157-174.
amp
Armstrong, W. M. [1979]. "Recursive Solution to the Equations of Motion of an N-link
N.,
Manipulator," Proc. 5th World Congr., Theory of Machines, Mechanisms, vol. 2,
pp. 1343-1346.
Astrom, K. J. and Eykhoff, P. [1971]. "System Identification-A Survey," Automatica,
vol. 7, pp. 123-162.
Baer, A., Eastman, C., and Henrion, M. [1979]. "Geometric Modelling: A Survey," Com-
C70
Cliffs, N.J.
Barnard, S. T., and Fischler, M. A. [1982]. "Computational Stereo," Computing Surveys,
`'+
Barrow, H. G., and Tenenbaum, J. M. [1977]. "Experiments in Model Driven Scene Seg-
Bejczy, A. K. [1974]. "Robot Arm Dynamics and Control," Technical Memo 33-669, Jet
Propulsion Laboratory, Pasadena, Calif.
Bejczy, A. K. [1979]. "Dynamic Models and Control Equations for Manipulators,"
Technical Memo 715-19, Jet Propulsion Laboratory, Pasadena, Calif.
'-'
.n°
Bejczy, A. K. [1980]. "Sensors, Controls, and Man-Machine Interface for Advanced
ono
Binford, T. O. [1979]. "The AL Language for Intelligent Robots," in Proc. IRIA Sem.
Languages and Methods of Programming Industrial Robots (Rocquencourt, France),
pp. 73-87.
Blum, H., [1967]. "A Transformation for Extracting New Descriptors of Shape," in Models
for the Perception of Speech and Visual Form (W. Wathen-Dunn, ed.), MIT Press,
Cambridge, Mass.
Bobrow, J. E., Dubowsky, S. and Gibson, J. S. [1983]. "On the Optimal Control of Robot
Manipulators with Actuator Constraints," Proc. 1983 American Control Conf., San
Francisco, Calif., pp. 782-787.
oho
Bolles, R., and Paul, R. [1973]. "An Experimental System for Computer Controlled
Mechanical Assembly," Stanford Artificial Intelligence Laboratory Memo AIM-220,
Stanford University, Palo Alto, Calif.
Bonner, S., and Shin, K. G. [1982]. "A Comparative Study of Robot Languages," IEEE
Computer, vol. 15, no. 12, pp. 82-96.
moo
Brady, J. M., et al. (eds.) [1982]. Robot Motion: Planning and Control, MIT Press, Cam-
\.o
bridge, Mass. -
Bribiesca, E. [1981]. "Arithmetic Operations Among Shapes Using Shape Numbers," Pat-
tern Recog., vol. 13, no. 2, pp. 123-138.
Bribiesca, E., and Guzman, A. [1980]. "How to Describe Pure Form and How to Measure
Differences in Shape Using Shape Numbers," Pattern Recog., vol. 12, no. 2,
.mob
pp. 101-112.
Brice, C., and Fennema, C. [1970]. "Scene Analysis Using Regions," Artificial Intelli-
gence, vol. 1, no. 3, pp. 205-226.
Brooks, R. A., [1981]. "Symbolic Reasoning Among 3-D Models and 2-D Images,"
Space," IEEE Trans. Systems, Man, Cybern., vol. SMC-13, pp. 190-197.
Brooks, R. A. [1983b]. "Planning Collision-Free Motion for Pick-and-Place Operations,"
'17
Space for Find-Path with Rotation," Proc. Intl. Joint Conf. Artificial Intelligence (Karl-
suhe, W. Germany), pp. 799-808.
Bryson A. E. and Ho, Y. C. [1975]. Applied Optimal Control, John Wiley, New York.
°_?
Canali, C., et al. [1981a]. "Sensori di Prossimita Elettronici," Fisica e Tecnologia, vol. 4,
ono
no. 2, pp. 95-123 (in Italian).
Canali, C., et al. [1981b]. "An Ultrasonic Proximity Sensor Operating in Air," Sensors
and Actuators, vol. 2, no. 1, pp. 97-103.
Catros, J. Y., and Espiau, B. [1980]. "Use of Optical Proximity Sensors in Robotics,"
Nouvel Automatisme, vol. 25, no. 14, pp. 47-53 (in French).
'LS
Chase, M. A. [1963]. "Vector Analysis of Linkages," Trans. ASME, J. Engr. Industry,
Series B, vol. 85, pp 289-297.
Chase, M. A., and Bayazitoglu, Y. O. [1971]. "Development and Application of a General-
ized d'Alembert Force for Multifreedom Mechanical Systems," Trans. ASME, J. Engr.
Industry, Series B, vol. 93, pp. 317-327.
CAD
Chang, C. L., and Lee, R. C. T. [1973]. Symbolic Logic and Mechanical Theorem Prov-
pp. 1166-1169.
'S7
Chow, C. K., and Kaneko, T. [1972]. "Automatic Boundary Detection of the Left Ventricle
Chung, M. J. [1983]. "Adaptive Control Strategies for Computer-Controlled Manipula-
tip
tors," Ph.D. Dissertation, The Computer, Information, and Control Engineering Pro-
gram, University of Michigan, Ann Arbor, Mich.
Cowart, A. E., Snyder, W. E., and Ruedger, W. H. [1983]. "The Detection of Unresolved
Targets Using the Hough Transform," Comput. Vision, Graphics, and Image Proc.,
'"j
Cross, G. R., and Jain, A. K. [1983]. "Markov Random Field Texture Models," IEEE
Trans. Pattern Anal. Mach. Intell., vol. PAMI-5, no. 1, pp. 25-39.
Darringer, J. A., and Blasgen, M. W. [1975]. "MAPLE: A High Level Language for
Research," in Mechanical Assembly, IBM Research Report RC 5606, IBM T. J.
:C1
Davis, L. S. [1975]. "A Survey of Edge Detection Techniques," Comput. Graphics Image
Davis, R. H., and Comacho, M. [1984]. "The Application of Logic Programming to the
,-,
Robot Planning," Tech. Note 65, Stanford Research Institute, Menlo Park, Calif.
Dijkstra, E. [1959]. "A Note on Two Problems in Connection with Graphs," Numerische
Mathematik, Vol. 1, pp. 269-271.
Dodd, G. G., and Rossol, L. (eds.) [1979]. Computer Vision and Sensor-Based Robots,
Plenum, New York.
Doran, J. E. [1970]. "Planning and Robots," in Machine Intelligence, vol. 5 (B. Meltzer
and D. Michie, eds.), American Elsevier, New York, pp. 519-532.
Dorf, R. C. [1983]. Robotics and Automated Manufacturing, Reston Publishing Co., Res-
vii
ton, Va.
Drake, S. H. [1977]. "Using Compliance in Lieu of Sensory Feedback for Automatic
tive Control to Robotic Manipulators," Trans. ASME, J. Dynamic Systems, Measure-
yam,
ment and Control, vol. 101, pp. 193-200.
Duda, R. 0., and Hart, P. E. [1972]. "Use of the Hough Transformation to Detect Lines
and Curves in Pictures," Comm. ACM, vol. 15, no. 1, pp. 11-15.
Duda, R. 0., and Hart, P. E. [1973]. Pattern Classification and Scene Analysis, John
!s"
Duda, R. 0., Nitzan, D., and Barrett, P. [1979]. "Use of Range and Reflectance Data to
a0.
Find Planar Surface Regions," IEEE Trans. Pattern Anal. Machine Intell.,
vol. PAMI-1, no. 3, pp. 259-271. o<°
Duffy, J. [1980]. Analysis of Mechanisms and Robot Manipulators, John Wiley, New York.
Duffy, J., and Rooney, J. [1975]. "A Foundation for a Unified Theory of Analysis of Spa-
tial Mechanisms," Trans. ASME, J. Engr. Industry, vol.97, no. 4, Series B,
C",
pp. 1159-1164.
Dyer, C. R., and Rosenfeld, A. [1979]. "Thinning Algorithms for Grayscale Pictures,"
IEEE Trans. Pattern Anal. Machine Intelligence, vol. PAMI-1, no. 1, pp. 88-89.
Engelberger, J. F. [1980]. Robotics in Practice, AMACOM, New York.
Fairchild [1983]. CCD Imaging Catalog, Fairchild Corp., Palo Alto, Calif.
Falb, P. L., and Wolovich, W. A. [1967]. "Decoupling in the Design and Synthesis of
Multivariable Control Systems," IEEE Trans. Automatic Control, vol. 12, no. 6,
pp. 651-655.
Featherstone, R. [1983]. "The Calculation of Robot Dynamics Using Articulated-Body
Inertia," Intl. J. Robotics Res., vol. 2, no. 1, pp. 13-30.
Fikes, R. E., Hart, P. E., and Nilsson, N. J. [1972]. "Learning and Executing Generalized
Robot Plans," Artificial Intelligence, vol. 3, no. 4, pp. 251-288.
Fikes, R. E., and Nilsson, N. J. [1971]. "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence, vol. 2, pp. 189-208.
Frazer, R. A., Duncan, W. T., and Collan, A. R. [1960]. Elementary Matrices, Cambridge
chi
University Press, Cambridge, England.
Freeman, H. [1961]. "On the Encoding of Arbitrary Geometric Configurations," IEEE
ago
Trans. Elec. Computers, vol. EC-10, pp. 260-268.
Freeman, H. [1974]. "Computer Processing of Line Drawings," Comput. Surveys, vol. 6,
(~D
pp. 57-97.
Freeman, H., and Shapira, R. [1975]. "Determining the Minimum-Area Encasing Rectangle
for an Arbitrary Closed Curve," Comm. ACM, vol. 18, no. 7, pp. 409-413.
Freund, E. [1982]. "Fast Nonlinear Control with Arbitrary Pole Placement for Industrial
Robots and Manipulators," Intl. J. Robotics Res., vol. 1, no. 1, pp. 65-78.
Fu, K. S. [1971]. "Learning Control Systems and Intelligent Control Systems: An Intersec-
tion of Artificial Intelligence and Automatic Control," IEEE Trans. Automatic Control,
vol. AC-16, no. 2, pp. 70-72.
Fu, K. S. [1982a]. Syntactic Pattern Recognition and Applications, Prentice-Hall, Engle-
wood Cliffs, N.J.
co)
Fu, K. S. (ed.) [1982b]. Special Issue of Computer on Robotics and Automation, vol. 15,
`°n
no. 12.
Fu, K. S., and Mui, J. K. [1981]. "A Survey of Image Segmentation," Pattern Recog.,
vol. 13, no. 1, pp. 3-16.
a°"
Galey, B., and Hsia, P. [1980]. "A Survey of Robotics Sensor Technology," Proc. 12th
00o
Application to Robotics," IEEE Trans. Systems, Man, Cybern., vol. SMC-14, no. 2, pp.
...
101-109.
Gips, J. [1974]. "A Syntax-Directed Program that Performs a Three-Dimensional Perceptual
-00
tern Recognition and Image Processing, (T. Young and K.S. Fu, eds.), Academic
..,
Image Enhancement," Mechanism and Machine Theory, vol. 12, pp. 111-122.
.-y
Gonzalez, R. C., and Safabakhsh, R. [1982]. "Computer Vision Techniques for Industrial
4,°
Applications and Robot Control," Computer, vol. 15, no. 12, pp. 17-32.
Gonzalez, R. C., and Thomason, M. G. [1978]. Syntactic Pattern Recognition: An Intro-
duction, Addison-Wesley, Reading, Mass.
Gonzalez, R. C., and Wintz, P. [1977]. Digital Image Processing, Addison-Wesley, Read-
ing, Mass.
Goodman, J. W. [1968]. Introduction to Fourier Optics, McGraw-Hill, New York.
Green, C. [1969]. "Application of Theorem Proving to Problem Solving," Proc. 1st Intl.
Joint Conf. Artificial Intelligence, Washington, D.C.
moo
Grossman, D. D., and Taylor, R. H. [1978]. "Interactive Generation of Object Models with
a Manipulator," IEEE Trans. Systems, Man, Cybern., vol. SMC-8, no. 9, pp. 667-679.
Gruver, W. A., et al., [1984]. "Industrial Robot Programming Languages: A Comparative
Evaluation," IEEE Trans. Systems, Man, Cybern., vol. SMC-14, no. 4, pp. 321-333.
Automatic Interpretation and Classification of Images (A. Grasseli, ed.), Academic
Press, New York.
7C'
Hackwood, S., et al. [1983]. "A Torque-Sensitive Tactile Array for Robotics," Intl.. J.
Robotics Res., vol. 2, no. 2, pp. 46-50.
Haralick, R. M. [1979]. "Statistical and Structural Approaches to Texture," Proc. 4th Intl.
C].
pp. 3-32.
Harris, J. L. [1977]. "Constant Variance Enhancement-A Digital Processing Technique,"
Hillis, D. W. [1982]. "A High-Resolution Imaging Touch Sensor," Intl. J. Robotics Res.,
vol. 1, no. 2, pp. 33-44.
Holland, S. W., Rossol, L., and Ward, M. R. [1979]. "CONSIGHT-I: A Vision-Controlled
Robot System for Transferring Parts from Belt Conveyors," in Computer Vision and
'ti
Sensor-Based Robots (G. G. Dodd and L. Rossol, eds.), Plenum, New York.
Hollerbach, J. M. [1980]. "A Recursive Lagrangian Formulation of Manipulator Dynamics
Hough, P. V. C. [1962]. "Method and Means for Recognizing Complex Patterns," U.S. Patent 3,069,654.
00q
Filtering Algorithm," IEEE Trans. Acoust., Speech, Signal Proc., vol. ASSP-27,
pp. 13-18.
pp. 259-266.
Huston, R. L., Passerello, C. E., and Harlow, M. W. [1978]. "Dynamics of Multirigid-
Body Systems," J. Appl. Mech., vol. 45, pp. 889-894.
Inoue, H. [1974]. "Force Feedback in Precise Assembly Tasks," MIT Artificial Intelligence
Laboratory Memo 308, MIT, Cambridge, Mass.
Ishizuka, M., Fu, K. S., and Yao, J. T. P. [1983]. "A Rule-Based Damage Assessment
System for Existing Structures," SM Archives, vol. 8, pp. 99-118.
`.'
Itkis, U. [1976]. Control Systems of Variable Structure, John Wiley, New York.
CO)
Katushi, I., and Horn, B. K. P. [1981]. "Numerical Shape from Shading and Occluding
Boundaries," Artificial Intelligence, vol. 17, pp. 141-184.
c°°
Khatib, O. [1980]. "Commande Dynamique dans 1'Espace Operationnel des Robots Mani-
..r
tion of Multiple Images," Phot. Sci. Engr., vol. 7, no. 4, pp. 241-245.
Kohli, D., and Soni, A. H. [1975]. "Kinematic Analysis of Spatial Mechanisms via Succes-
sive Screw Displacements," J. Engr. for Industry, Trans. ASME, vol. 2, series B,
pp. 739-747.
Koivo, A. J., and Guo, T. H. [1983]. "Adaptive Linear Controller for Robotic Manipula-
tors," IEEE Trans. Automatic Control, vol. AC-28, no. 1, pp. 162-171.
tea`
Landau, Y. D. [1979]. Adaptive Control-The Model Reference Approach, Marcel Dekker,
New York.
bye
Lee, B. H. [1985]. "An Approach to Motion Planning and Motion Control of Two Robots
in a Common Workspace," Ph.D. Dissertation, Computer Information and Control
Engineering Program, University of Michigan, Ann Arbor, Mich.
Lee, C. C. [1983]. "Elimination of Redundant Operations for a Fast Sobel Operator,"
IEEE Trans. Systems, Man, Cybern., vol. SMC-13, no. 3, pp. 242-245.
Lee, C. S. G. [1982]. "Robot Arm Kinematics, Dynamics, and Control," Computer,
vol. 15, no. 12, pp. 62-80.
Lee, C. S. G. [1983]. "On the Control of Robot Manipulators," Proc. 27th Soc. Photo-
C,4
optical Instrumentation Engineers, vol. 442, San Diego, Calif., pp. 58-83.
Electrical Engineering, Purdue University, West Lafayette, Ind.
Lee, C. S. G., and Chang, P. R. [1986b]. "Efficient Parallel Algorithm for Robot Inverse
"'.
Dynamics Computation," IEEE Trans. Systems, Man, Cybern., vol. SMC-16, no. 4.
Lee, C. S. G., and Chung, M. J. [1984]. "An Adaptive Control Strategy for Mechanical
.-'
Manipulators," IEEE Trans. Automatic Control, vol. AC-29, no. 9, pp. 837-840.
Lee, C. S. G., and Chung, M. J. [1985]. "Adaptive Perturbation Control with Feedforward
Compensation for Robot Manipulators," Simulation, vol. 44, no. 3, pp. 127-136. yam'
Lee, C. S. G., Chung, M. J., and Lee, B. H. [1984]. "An Approach of Adaptive Control
ANN
Lee, C. S. G., Chung, M. J., Mudge, T. N., and Turney, J. L. [1982]. "On the Control of
a°.
Lee, C. S. G., Gonzalez, R. C., and Fu, K. S. [1986]. Tutorial on Robotics, 2d ed., IEEE
Lee, C. S. G., and Huang, D. [1985]. "A Geometric Approach to Deriving Position/Force
Trajectory in Fine Motion," Proc. 1985 IEEE Intl. Conf. Robotics and Automation, St.
Louis, Mo, pp. 691-697.
Lee, C. S. G., and Lee, B. H. [1984]. "Resolved Motion Adaptive Control for Mechanical
Manipulators," Trans. ASME, J. Dynamic Systems, Measurement and Control, vol. 106,
A°°
Lee, C. S. G., Mudge, T. N., Turney, J. L. [1982]. "Hierarchical Control Structure Using
Special Purpose Processors for the Control of Robot Arms," Proc. 1982 Pattern
pry
Recognition and Image Processing Conf., Las Vegas, Nev., pp. 634-640.
Lee, C. S. G., and Ziegler, M. [1984]. "A Geometric Approach in Solving the Inverse
(4W
Calif.
Lewis, R. A., and Bejczy, A. K. [1973]. "Planning Considerations for a Roving Robot
Sao
with Arm," Proc. 3rd Intl. Joint Conf. Artificial Intelligence, Stanford University, Palo
Alto, Calif.
Lieberman, L. I., and Wesley, M. A. [1977]. "AUTOPASS: An Automatic Programming
System for Computer Controlled Mechanical Assembly," IBM J. Res. Devel., vol. 21,
no. 4, pp. 321-333.
tar
Lin, C. S., Chang, P. R., and Luh, J. Y. S. [1983]. "Formulation and Optimization of
Cubic Polynomial Joint Trajectories for Industrial Robots," IEEE Trans. Automatic
Control, vol. AC-28, no. 12, pp. 1066-1073.
Trans. Systems, Man, Cybern., vol. SMC-11, no. 10, pp. 691-698.
Lozano-Perez, T. [1982]. "Spatial Planning, A Configuration Space Approach," IEEE
Trans. Comput., vol. C-32, no. 2, pp. 108-120.
Lozano-Perez, T. [1983a]. "Robot Programming," Proc. IEEE, vol. 71, no. 7, pp. 821-841.
Lozano-Perez, T. [1983b]. "Task Planning," in Robot Motion: Planning and Control, (M.
Brady, et al., eds.), MIT Press, Cambridge, Mass.
Lozano-Perez, T., and Wesley, M. A. [1979]. "An Algorithm for Planning Collision-Free
Paths Among Polyhedral Obstacles," Comm. ACM, vol. 22, no. 10, pp. 560-570.
Lu, S. Y., and Fu, K. S. [1978]. "A Syntactic Approach to Texture Analysis," Comput.
Graph. Image Proc., vol. 7, no. 3, pp. 303-330.
Luh, J. Y. S. [1983a]. "An Anatomy of Industrial Robots and their Controls," IEEE Trans.
Automatic Control, vol. AC-28, no. 2, pp. 133-153.
Luh, J. Y. S., and Lin, C. S. [1981a]. "Optimum Path Planning for Mechanical Manipula-
tors," Trans. ASME, J. Dynamic Systems, Measurement and Control, vol. 102,
pp. 142-151.
Luh, J. Y. S., and Lin, C. S. [1981b]. "Automatic Generation of Dynamic Equations for
Mechanical Manipulators," Proc. Joint Automatic Control Conf., Charlottesville, Va.,
pp. TA-2D.
Luh, J. Y. S., and C. S. Lin. [1982]. "Scheduling of Parallel Computation for a Computer
trial Robots Along Cartesian Path," IEEE Trans. Systems, Man, Cybern., vol. SMC-14,
Luh, J. Y. S., Walker, M. W., and Paul, R. P. [1980a]. "On-Line Computational Scheme
tor," IEEE Trans. Systems, Man, Cybern., vol. SMC-11, no. 6, pp. 418-432.
McCarthy, J., et al. [1968]. "A Computer with Hands, Eyes, and Ears," 1968 Fall Joint
Computer Conf., AFIPS Proceedings, pp. 329-338.
McDermott, J. [1980]. "Sensors and Transducers," EDN, vol. 25, no. 6, pp. 122-137.
s"°
Meindl, J. D., and Wise, K. D. (eds.) [1979]. "Special Issue on Solid-State Sensors, Actuators, and Interface Electronics," IEEE Trans. Elect. Devices, vol. 26, pp. 1861-1978.
Merritt, R. [1982]. "Industrial Robots: Getting Smarter All The Time," Instruments and
CSD, Stanford University, Palo Alto, Calif.
Mujtaba, M. S., Goldman, R. A., and Binford, T. [1982]. "The AL Robot Programming
F"`
Binary Patterns," IEEE Trans. Systems, Man, Cybern., vol. SMC-14, no. 3, pp. 409-
418.
Nagel, H. H. [1981]. "Representation of Moving Rigid Objects Based on Visual Observations," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-3, no. 6, pp. 655-661.
Nau, D. S. [1983]. "Expert Computer Systems," Computer, vol. 16, pp. 63-85.
°'"
,fl
Nevatia, R., and Binford, T. O. [1977]. "Description and Recognition of Curved Objects,"
Artificial Intelligence, vol. 8, pp. 77-98.
Nevins, J. L., and Whitney, D. E. [1978]. "Computer-Controlled Assembly," Sci. Am.,
Third Yale Workshop on Applications of Adaptive Systems Theory, Yale University,
New Haven, Conn., pp. 179-189.
Nigam, R., and Lee, C. S. G. [1985]. "A Multiprocessor-Based Controller for the Control of Mechanical Manipulators," IEEE J. Robotics and Automation, vol. RA-1, no. 4, pp. 173-182.
Nilsson, N. J. [1971]. Problem-Solving Methods in Artificial Intelligence, McGraw-Hill, New York.
Nilsson, N. J. [1980]. Principles of Artificial Intelligence, Tioga Pub., Palo Alto, Calif.
Noble, B. [1969]. Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, N.J.
Oldroyd, A. [1981]. "MCL: An APT Approach to Robotic Manufacturing," presented at
sive Region Splitting Method," Comput. Graphics Image Proc., vol. 8, no. 3,
pp. 313-333.
Orin, D. E. [1984]. "Pipelined Approach to Inverse Plant Plus Jacobian Control of Robot
trolled Arm," Memo AIM-177, Stanford Artificial Intelligence Laboratory, Palo Alto, Calif.
Paul, R. P. [1976]. "WAVE: A Model-Based Language for Manipulator Control,"
Simple Manipulators," IEEE Trans. Systems, Man, Cybern., vol. SMC-11, no. 6,
pp. 449-455.
Pavlidis, T. [1977]. Structural Pattern Recognition, Springer-Verlag, New York.
Pavlidis, T. [1982]. Algorithms for Graphics and Image Processing, Computer Science
Press, Rockville, Md.
Persoon, E., and Fu, K. S. [1977]. "Shape Discrimination Using Fourier Descriptors,"
IEEE Trans. Systems, Man, Cybern., vol. SMC-7, no. 2, pp. 170-179.
Pieper, D. L. [1968]. "The Kinematics of Manipulators under Computer Control," Artificial Intelligence Project Memo No. 72, Computer Science Department, Stanford University, Palo Alto, Calif.
Pieper, D. L. and Roth, B. [1969]. "The Kinematics of Manipulators under Computer Con-
trol," Proc. II Intl. Congr. Theory of Machines and Mechanisms, vol. 2, pp. 159-168.
Pipes, L. A. [1963]. Matrix Methods in Engineering, Prentice-Hall, Englewood Cliffs, N.J.
Popplestone, R. J., Ambler, A. P., and Bellos, I. [1978]. "RAPT, A Language for Describ-
ing Assemblies," Industrial Robot, vol. 5, no. 3, pp. 131-137.
Popplestone, R. J., Ambler, A. P., and Bellos, I. [1980]. "An Interpreter for a Language for Describing Assemblies," Artificial Intelligence, vol. 14, no. 1, pp. 79-107.
Raibert, M. H., and Craig, J. J. [1981]. "Hybrid Position/Force Control of Manipulators,"
Trans. ASME, J. Dynamic Systems, Measurement, and Control, vol. 102, pp. 126-133.
Raibert, M. H., and Tanner, J. E. [1982]. "Design and Implementation of a VLSI Tactile Sensing Computer," Intl. J. Robotics Res., vol. 1, no. 3, pp. 3-18.
Rajala, S. A., Riddle, A. N., and Snyder, W. E. [1983]. "Application of the One-Dimensional Fourier Transform for Tracking Moving Objects in Noisy Environments,"
"t7
CD.
CS.
ova
.ti
New York.
Roth, B., Rastegar, J., and Scheinman, V. [1973]. "On the Design of Computer Controlled Manipulators," 1st CISM-IFTMM Symp. Theory and Practice of Robots and Manipulators,
Sacerdoti, E. D. [1977]. A Structure for Plans and Behavior, Elsevier, New York.
Sadjadi, F. A., and Hall, E. L. [1980]. "Three-Dimensional Moment Invariants," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-2, no. 2, pp. 127-136.
Salari, E., and Siy, P. [1984]. "The Ridge-Seeking Method for Obtaining the Skeleton of Digital Images," IEEE Trans. Systems, Man, Cybern., vol. SMC-14, no. 3, pp. 524-528.
Saridis, G. N. [1983]. "Intelligent Robotic Control," IEEE Trans. Automatic Control,
Saridis, G. N., and Lee, C. S. G. [1979]. "An Approximation Theory of Optimal Control for Trainable Manipulators," IEEE Trans. Systems, Man, Cybern., vol. SMC-9, no. 3, pp. 152-159.
Saridis, G. N., and Lobbia, R. N. [1972]. "Parameter Identification and Control of Linear Discrete-Time Systems," IEEE Trans. Automatic Control, vol. AC-17, no. 1, pp. 52-60.
Saridis, G. N., and Stephanou, H. E. [1977]. "A Hierarchical Approach to the Control of a
Prosthetic Arm," IEEE Trans. Systems, Man, Cybern., vol. SMC-7, no. 6,
pp. 407-420.
Scheinman, V. D. [1969]. "Design of a Computer Manipulator," Artificial Intelligence
Laboratory Memo AIM-92, Stanford University, Palo Alto, Calif.
Shani, U. [1980]. "A 3-D Model-Driven System for the Recognition of Abdominal Anatomy from CT Scans," Proc. 5th Intl. Joint Conf. Pattern Recog., pp. 585-591.
Shimano, B. [1979]. "VAL: A Versatile Robot Programming and Control System," Proc. 3rd Intl. Computer Software Applications Conf., Chicago, Ill., pp. 878-883.
Shimano, B. E., and Roth, B. [1979]. "On Force Sensing Information and its Use in Con-
trolling Manipulators," Proc. 9th Intl. Symp. on Industrial Robots, Washington, D.C.,
pp. 119-126.
Shirai, Y. [1979]. "Three-Dimensional Computer Vision," in Computer Vision and
Sensor-Based Robots (G. G. Dodd and L. Rossol, eds.), Plenum, New York.
Siklossy, L. [1972]. "Modelled Exploration by Robot," Tech. Rept. 1, Computer Science
Procedures, Proc. 3rd Intl. Joint Conf. Artificial Intelligence, pp. 423, 430.
Silver, W. M. [1982]. "On the Equivalence of the Lagrangian and Newton-Euler Dynamics
for Manipulators," Intl. J. Robotics Res., vol. 1, no. 2, pp. 60-70.
pp. 274-289.
Takegaki, M., and Arimoto, S. [1981]. "A New Feedback Method for Dynamic Control of Manipulators," Trans. ASME, J. Dynamic Systems, Measurement and Control, vol. 102, pp. 119-125.
Tangwongsan, S., and Fu, K. S. [1979]. "An Application of Learning to Robotic Plan-
ning," Intl. J. Computer and Information Sciences, vol. 8, no. 4, pp. 303-333.
Tarn, T. J., et al. [1984]. "Nonlinear Feedback in Robot Arm Control," Proc. 1984 Conf. Decision and Control, Las Vegas, Nev., pp. 736-751.
IBM J. Res. Devel., vol. 23, no. 4, pp. 424-436.
Taylor, R. H., Summers, P. D., and Meyer, J. M. [1983]. "AML: A Manufacturing
Language," Intl. J. Robotics Res., vol. 1, no. 3, pp. 19-41.
Thompson, W. B., and Barnard, S. T. [1981]. "Lower-Level Estimation and Interpretation of Visual Motion," Computer, vol. 14, no. 8, pp. 20-28.
Thrall, R. M., and Tornheim, L. [1963]. Vector Spaces and Matrices, John Wiley, New York.
Tomita, F., Shirai, Y., and Tsuji, S. [1982]. "Description of Texture by a Structural
Analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-4, no. 2, pp. 183-191.
Tomovic, R., and Boni, G. [1962]. "An Adaptive Artificial Hand," IRE Trans. Automatic
Control, vol. AC-7, no. 3, pp. 3-10.
Toriwaki, J. I., Kato, N., and Fukumura, T. [1979]. "Parallel Local Operations for a New
Distance Transformation of a Line Pattern and Their Applications," IEEE Trans. Sys-
Reading, Mass.
Turney, J. L., Mudge, T. N., and Lee, C. S. G. [1980]. "Equivalence of Two Formulations
for Robot Arm Dynamics," SEL Report 142, ECE Department, University of Michi-
gan, Ann Arbor, Mich.
Turney, J. L., Mudge, T. N., and Lee, C. S. G. [1982]. "Connection Between Formulations of Robot Arm Dynamics with Applications to Simulation and Control," CRIM Technical Report No. RSD-TR-4-82, the University of Michigan, Ann Arbor, Mich.
Uicker, J. J. [1965]. "On the Dynamic Analysis of Spatial Linkages using 4 x 4 Matrices,"
Ph.D. dissertation, Northwestern University, Evanston, Ill.
Uicker, J. J., Jr., Denavit, J., and Hartenberg, R. S. [1964]. "An Iterative Method for the Displacement Analysis of Spatial Mechanisms," Trans. ASME, J. Appl. Mech., vol. 31,
Robotic Mechanisms," Trans. ASME, J. Systems, Measurement and Control, vol. 104,
pp. 205-211.
Wallace, T. P., and Mitchell, O. R. [1980]. "Analysis of Three-Dimensional Movements Using Fourier Descriptors," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, no. 6, pp. 583-588.
`"'
Webb, J. A., and Aggarwal, J. K. [1981]. "Visually Interpreting the Motion of Objects in
Wesley, M. A., et al., [1980]. "A Geometric Modeling System for Automated Mechanical
Assembly," IBM J. Res. Devel., vol. 24, no. 1, pp. 64-74.
White, J. M., and Rohrer, G. D. [1983]. "Image Thresholding for Optical Character Recognition and Other Applications Requiring Character Image Extraction," IBM J. Res. Devel., vol. 27, no. 4, pp. 400-411.
Whitney, D. E. [1969a]. "Resolved Motion Rate Control of Manipulators and Human Prostheses," IEEE Trans. Man-Machine Systems, vol. MMS-10, no. 2, pp. 47-53.
Whitney, D. E. [1969b]. "State Space Models of Remote Manipulation Tasks," Proc. Intl. Joint Conf. Artificial Intelligence, Washington, D.C., pp. 495-508.
Whitney, D. E. [1972]. "The Mathematics of Coordinated Control of Prosthetic Arms and Manipulators," Trans. ASME, J. Dynamic Systems, Measurement and Control, vol. 122, pp. 303-309.
Will, P., and Grossman, D. [1975]. "An Experimental System for Computer Controlled Mechanical Assembly," IEEE Trans. Comput., vol. C-24, no. 9, pp. 879-888.
Wise, K. D. (ed.) [1982]. "Special Issue on Solid-State Sensors, Actuators, and Interface
bra to the Analysis of Spatial Mechanisms," Trans. ASME, J. Appl. Mech., vol. 31,
series E, pp. 152-157.
pp. 101-109.
Yuan, M. S. C., and Freudenstein, F. [1971]. "Kinematic Analysis of Spatial Mechanisms by Means of Screw Coordinates," Trans. ASME, J. Engr. Industry, vol. 93, no. 1, pp. 61-73.
Zahn, C. T., and Roskies, R. Z. [1972]. "Fourier Descriptors for Plane Closed Curves,"
Trans. Pattern Anal. Mach. Intell., vol. PAMI-3, no. 3, pp. 324-331.
INDEX
A
Acceleration-related terms, 93, 96 (see also Dynamic coefficients of manipulators)
Adaptive controls, 202, 244
  model-referenced, 244
  perturbation, 248
  resolved motion, 256
Adjacency, 329
Adjoint of a matrix, 541
Aggregate, 454
Approach vector of hand, 42
Area, 406
Arm (see also Robot arm)
  configurations, 61
Axis of rotation, 14, 312
B
Base coordinates, 36
Basic
  homogeneous rotation matrix, 28
  homogeneous translation matrix, 28
  rectangle, 402
Binary image
  creation, 358
  smoothing, 339
Blocks world, 475
Body-attached coordinate frame, 14
Boundary, 396
  description, 396
  detection, 363
  linking, 363
Bounded deviation joint path, 184, 188
C
C-frame, 470
C-surface, 470
Camera
  solid-state, 298
  television, 297
  vidicon, 298
Capacitive sensors, 280
Cartesian
  path control, 184, 187
  path trajectory, 175
  robot, 3
  space control, 202
Centrifugal
  forces/torques, 83
Composite
homogeneous transformation matrix, 31
rotation matrix, 19
acceleration constraint, 196
jerk constraint, 196
torque constraint, 196
Connected
  component, 329
  region, 406
Connectivity, 328
D
d'Alembert
Degrees of freedom, 1
minor axis, 402
moments, 399, 407, 414
perimeter, 406
three-dimensional, 416
Difference image, 389
cartesian coordinates, 259
Lagrange-Euler, 92
Newton-Euler, 114, 118
number, 406
Eulerian angles, 22
city-block, 330
definition of, 329
euclidean, 330, 425
mixed, 330
F
False contour, 302
Fast Fourier transform, 334
Feedback compensation, 219, 249
acceleration-related, 96
centrifugal and Coriolis, 96
gravity, 96
Dynamics of robot arm, 82
E
Fourier
  descriptors, 404
  transform, 334
Freeman chain codes, 446
Frequency domain, 334
G
Gradient
  definition, 353
  direction, 364
  magnitude, 354
  three-dimensional, 418
Grammar, 431, 436, 438
Graph, 369
  AND/OR, 485
  edge detection, 369
  search, 475, 481
  solution, 486
Gravity
  loading forces, 83, 93
  terms, 95
Gray level, 301
Guiding, 450
H
Hall-effect sensors, 279
I
Ill-conditioned solution, 54
Illumination, 304
  backlighting, 304
  diffuse, 304
  directional, 306
  structured, 269, 305
Image
  acquisition, 297
  averaging, 338
  binary, 339, 358
  difference, 389
  digital, 297, 301
  element, 301
  enhancement, 342
  gray level, 301
  intensity, 301
  motion, 388
  preprocessing, 331
K
Kinematic equations for manipulators, 42
Kinematics inverse solution (see Inverse kinematics solution)
Kinetic energy, 85, 89
Knot points, 150
L
Lagrange multiplier, 238
Master-slave manipulator, 4
Matching, 425, 428, 429
Matrix, 535
  adjoint, 541
  arm, 42
  determinant, 538
  equation, 176
  homogeneous, 13, 27
  inversion lemma, 542
  multiplication of, 537
Minotaur I robot, 4
Model-referenced adaptive control, 244
Moments, 399, 407, 414
Motion, 388
  specification, 456
Moving coordinate system, 107
N
Near-minimum-time control, 223
P
Parameter identification, 250
  residual, 252
Path, 149, 329
  constraint, 149
  length, 329
  selection, 476
  trajectory tracking problem, 202
Pattern
  primitive, 427
  recognition (see Recognition)
averaging, 335
criterion, 213, 248
definition, 328
index, 254
processing, 331
Perimeter, 406
Newton-Euler
  equations of motion, 82, 114, 118
  formulation, 103
Photodiode, 283
Photosite, 299
Physical coordinates, 27
Picture element (see Pixel)
Pitch, 22, 48
aggregation, 384
228
connectivity, 328
distance, 329
gray level, 301
intensity, 301
neighbors of, 328
OAT solution, 57
Obstacle
  avoidance, 268, 512
  constraint, 149
Open-loop
  control, 97
  transfer function, 210
Optical proximity sensors, 283
Orientation
Plane fitting, 417
Polaroid range sensor, 276
Polygonal approximation, 400
Pontryagin minimum principle, 224
Position
  error, 239
  sensing, 459
  specification, 454
Prototype, 425
Proximity sensing, 276
Pseudo-inertia matrix, 90
PUMA
  control strategy, 203
  link coordinate transformation matrices, 41
  robot, 2, 41, 79, 203
Q
Quantization, 302
Quaternion, 184
R
Robot (see also Robot arm)
  arm categories, 3
  cartesian, 3
  cylindrical, 3
  definition of, 1
  dynamics, 6, 82
  historical development, 4
  intelligence, 10, 474
  kinematics, 6, 12
  learning, 504
  manipulator, 1, 4
  programming, 9, 450
  programming synthesis, 468
  sensing, 8, 267
  spherical, 3
  task planning, 7, 474
correlation, 426
decision-theoretic, 425
PAL, 472
RAIL, 472
robot-oriented, 451
RPL, 472
task-level, 451, 462
VAL, 472
Robotic manipulators (see Manipulators)
Robotic vision (see Vision)
control (see Controls)
laser, 273
noncontact, 267
optical, 283
proximity, 276
range, 268
slip, 267
structure light, 269
S
Sigma robot, 3
Signatures, 398
Sampling, 302
  frequency, 221
Similarity, 384, 428 (see also Description; Recognition)
Smoothing, 335
edge linking, 363
Sobel operators, 355, 361
Sensing, 267
  capacitive, 280
  contact, 267
  external state, 8, 267
Specification
  end-effector, 47
  motion, 456
  position, 454
  task, 466
Spherical
  coordinates for positioning subassembly, 49
  robot, 3
Spur, 413
Stanford robot, 6, 80
3-5-3 trajectory, 156, 167
4-3-4 trajectory, 156
5-cubic trajectory, 156, 165
cartesian path, 175
joint-interpolated, 154
State space, 474
Steady-state errors, 215
Stereo imaging, 325
String, 427
  grammar, 431
  matching, 429
  recognition, 434
Structural
  pattern recognition, 425, 427
  resonant frequency, 214
Structured light, 269
Switching surfaces, 225
Trajectory segment, 152
transition, 181
Transfer function of a single joint, 205
Transformation
  orthogonal, 16
  orthonormal, 16
Transformation matrices, 13, 27, 307
Transition between path segments, 181
Translation, 308
Translational kinetic energy, 126
Tree
  grammar, 436
  quad, 387
  similarity, 428
Triangulation, 268
T
Task
  planning, 474
  specification, 466
cameras, 297
field, 299
frame, 299
Template, 332
Texture, 406
Twist angle of link, 34
Two point boundary value problem, 224
Undamped natural frequency, 213, 245
Unimate robot, 3
Unimation PUMA 600 arm (see Robot arm)
Vector (Cont.):
  product of, 527
  subtraction, 523
Versatran robot, 3, 5
Via points, 457
Vidicon, 298
Vision
  definition of, 296
  higher-level, 362
  illumination for, 304
  low-level, 296
  sensors, 297
  steps in, 296
Voltage-torque conversion, 222
Voxel, 416
W
Window, 332
World
  coordinates, 308
  modeling, 463
  world states, 464
Wrist sensor, 289
Y
Yaw, 22, 48