Introduction to Robotics: Lecture Notes
H. Harry Asada
Ford Professor of Mechanical Engineering
Chapter 1
Introduction
Many definitions have been suggested for what we call a robot. The word may conjure up various
levels of technological sophistication, ranging from a simple material handling device to a
humanoid. The image of robots varies widely with researchers, engineers, and robot
manufacturers. However, it is widely accepted that today’s robots used in industries originated in
the invention of a programmed material handling device by George C. Devol. In 1954, Devol
filed a U.S. patent for a new machine for part transfer, and he claimed the basic concept of teach-
in/playback to control the device. This scheme is now extensively used in most of today's
industrial robots.
Devol's industrial robots have their origins in two preceding technologies: numerical control for
machine tools, and remote manipulation. Numerical control is a scheme to generate control
actions based on stored data. Stored data may include coordinate data of points to which the
machine is to be moved, clock signals to start and stop operations, and logical statements for
branching control sequences. The whole sequence of operations and its variations are prescribed
and stored in a form of memory, so that different tasks can be performed without requiring major
hardware changes. Modern manufacturing systems must produce a variety of products in small
batches, rather than a large number of the same products for an extended period of time, and
frequent changes of product models and production schedules require flexibility in the
manufacturing system. The transfer line approach, which is most effective for mass production,
is not appropriate when such flexibility is needed (Figure 1-1). When a major product change is
required, a special-purpose production line becomes useless and often ends up being abandoned,
despite the large capital investment it originally involved. Flexible automation has been a central
issue in manufacturing innovation for a few decades, and numerical control has played a central
role in increasing system flexibility. Contemporary industrial robots are programmable machines
that can perform different operations by simply modifying stored data, a feature that has evolved
from the application of numerical control.
Another origin of today's industrial robots can be found in remote manipulators. A remote
manipulator is a device that performs a task at a distance. It can be used in environments that
human workers cannot easily or safely access, e.g. for handling radioactive materials, or in some
deep sea and space applications. The first master-slave manipulator system was developed by
1948. The concept involves an electrically powered mechanical arm installed at the operation
site, and a control joystick of geometry similar to that of the mechanical arm (Figure 1-2). The
joystick has position transducers at individual joints that measure the motion of the human
operator as he moves the tip of the joystick. Thus the operator's motion is transformed into
electrical signals, which are transmitted to the mechanical arm and cause the same motion as the
one that the human operator performed. The joystick that the operator handles is called the
master manipulator, while the mechanical arm is called the slave manipulator, since its motion is
ideally the replica of the operator's commanded motion. A master-slave manipulator has
typically six degrees of freedom to allow the gripper to locate an object at an arbitrary position
and orientation. Most joints are revolute, and the whole mechanical construction is similar to that
of the human arm. This analogy with the human arm results from the need of replicating human
motions. Further, this structure allows dexterous motions in a wide range of workspaces, which
is desirable for operations in modern manufacturing systems.
Contemporary industrial robots retain some similarity in geometry with both the human arm and
remote manipulators. Further, their basic concepts have evolved from those of numerical control
and remote manipulation. Thus a widely accepted definition of today’s industrial robot is that of a
numerically controlled manipulator, where the human operator and the master manipulator in the
figure are replaced by a numerical controller.
Figure 1-3 White body assembly lines using spot welding robots
The merging of numerical control and remote manipulation created a new field of engineering, and
with it emerged a number of scientific issues in design and control that are substantially different
from those of the two original technologies.
Robots are required to have much higher mobility and dexterity than traditional machine tools.
They must be able to work in a large reachable range, access crowded places, handle a variety of
workpieces, and perform flexible tasks. The high mobility and dexterity requirements result in the
unique mechanical structure of robots, which parallels the human arm structure. This structure,
however, significantly departs from traditional machine design. A robot mechanical structure is
basically composed of cantilevered beams, forming a sequence of arm links connected by hinged
joints. Such a structure has inherently poor mechanical stiffness and accuracy, hence is not
appropriate for the heavy-duty, high-precision applications required of machine tools. Further, it
also implies a serial sequence of servoed joints, whose errors accumulate along the linkage. In
order to exploit the high mobility and dexterity uniquely featured by the serial linkage, these
difficulties must be overcome by advanced design and control techniques.
The serial linkage geometry of manipulator arms is described by complex nonlinear equations.
Effective analytical tools are necessary to understand the geometric and kinematic behavior of
the manipulator, globally referred to as the manipulator kinematics. This represents an important
and unique area of robotics research, since research in kinematics and design has traditionally
focused upon single-input mechanisms with single actuators moving at constant speeds, while
robots are multi-input spatial mechanisms which require more sophisticated analytical tools.
The dynamic behavior of robot manipulators is also
complex, since the dynamics of multi-input spatial
linkages are highly coupled and nonlinear. The motion
of each joint is significantly affected by the motions of
all the other joints. The inertial load imposed at each
joint varies widely depending on the configuration of the
manipulator arm. Coriolis and centrifugal effects are
prominent when the manipulator arm moves at high
speeds. The kinematic and dynamic complexities create
unique control problems that are not adequately handled
by standard linear control techniques, and thus make
effective control system design a critical issue in
robotics.
Finally, robots are required to interact much more heavily with peripheral devices than traditional
numerically-controlled machine tools. Machine tools are essentially self-contained systems that
handle workpieces in well-defined locations. By contrast, the environment in which robots are
used is often poorly structured, and effective means must be developed to identify the locations
of the workpieces as well as to communicate to peripheral devices and other machines in a
coordinated fashion. Robots are also critically different from master-slave manipulators, in that
they are autonomous systems. Master-slave manipulators are essentially manually controlled
systems, where the human operator takes the decisions and applies control actions. The operator
interprets a given task, finds an appropriate strategy to accomplish the task, and plans the
procedure of operations. He/she devises an effective way of achieving the goal on the basis of
his/her experience and knowledge about the task. His/her decisions are then transferred to the
slave manipulator through the joystick. The resultant motion of the slave manipulator is
monitored by the operator, and necessary adjustments or modifications of control actions are
provided when the resultant motion is not adequate, or when unexpected events occur during the
operation. The human operator is, therefore, an essential part of the control loop. When the
operator is eliminated from the control system, all the planning and control commands must be
generated by the machine itself. The detailed procedure of operations must be set up in advance,
and each step of motion command must be generated and coded in an appropriate form so that
the robot can interpret it and execute it accurately. Effective means to store the commands and
manage the data file are also needed. Thus, programming and command generation are critical
issues in robotics. In addition, the robot must be able to fully monitor its own motion. In order to
adapt to disturbances and unpredictable changes in the work environment, the robot needs a
variety of sensors, so as to obtain information both about the environment (using external
sensors, such as cameras or touch sensors) and about itself (using internal sensors, such as joint
encoders or joint torque sensors). Effective sensor-based strategies that incorporate this
information require advanced control algorithms. But they also imply a detailed understanding of
the task.
Robotics has found a number of important application areas in broad fields beyond
manufacturing automation. These range from space and under-water exploration, hazardous
waste disposal, and environment monitoring to robotic surgery, rehabilitation, home robotics, and
entertainment. Many of these applications entail some locomotive functionality so that the robot
can freely move around in an unstructured environment. Most industrial robots sit on a
manufacturing floor and perform tasks in a structured environment. In contrast, those robots for
non-manufacturing applications must be able to move around on their own. See Figure 1-8.
Locomotion and navigation are increasingly important, as robots find challenging applications in
the field. This opened up new research and development areas in robotics. Novel mechanisms are
needed to allow robots to move through crowded areas, rough terrain, narrow channels, and even
staircases. Various types of legged robots have been studied, since, unlike standard wheels, legs
can negotiate uneven floors and rough terrain. Among others, biped robots have been
studied most extensively, resulting in the development of humanoids, as shown in Figure 1-9.
Combining leg mechanisms with wheels has accomplished superior performance in both
flexibility and efficiency. The Mars Rover prototype shown below has a rocker-bogie
mechanism combined with advanced wheel drives in order to adapt itself to diverse terrain
conditions. See Figure 1-10.
Advanced mobile robots are equipped with a variety of sensors and are capable of interpreting the
data to locate themselves. Often the robot has a map of the environment, and uses it for estimating
its location. Furthermore, based on the real-time data obtained in the field, the robot is capable of
updating and augmenting the map, which is incomplete and uncertain in an unstructured
environment. As depicted in Figure 1-10, location estimation and map building are executed
simultaneously in an advanced navigation system. Such Simultaneous Localization And Mapping
(SLAM) is exactly what we humans do in our daily life, and is an important functionality of
intelligent robots.
The goal of robotics is thus two-fold: to extend our understanding about manipulation,
locomotion, and other robotic behaviors and to develop engineering methodologies to actually
perform desired tasks. The goal of this book is to provide entry-level readers and experienced
engineers with fundamentals of understanding robotic tasks and intelligent behaviors as well as
with enabling technologies needed for building and controlling robotic systems.
Figure 1-10 JPL’s planetary exploration robot: an early version of the Mars Rover
Chapter 2
Actuators and Drive Systems
Actuators are one of the key components contained in a robotic system. A robot has many
degrees of freedom, each of which is a servoed joint generating desired motion. We begin with
basic actuator characteristics and drive amplifiers to understand behavior of servoed joints.
Most of today’s robotic systems are powered by electric servomotors. Therefore, we
focus on electromechanical actuators.
2.1 DC Motors
Figure 2-1 illustrates the construction of a DC servomotor, consisting of a stator, a rotor,
and a commutation mechanism. The stator consists of permanent magnets, creating a magnetic
field in the air gap between the rotor and the stator. The rotor has several windings arranged
symmetrically around the motor shaft. An electric current applied to the motor is delivered to
individual windings through the brush-commutation mechanism, as shown in the figure. As the
rotor rotates the polarity of the current flowing to the individual windings is altered. This allows
the rotor to rotate continually.
The torque τ_m generated at the rotor is proportional to the current i applied to the motor:

τ_m = K_t i   (2.1.1)
where the proportionality constant K_t is called the torque constant, one of the key parameters
describing the characteristics of a DC motor. The torque constant is determined by the strength of
the magnetic field, the number of turns of the windings, the effective area of the air gap, the
radius of the rotor, and other parameters associated with material properties.
In an attempt to derive other characteristics of a DC motor, let us first consider an
idealized energy transducer having no power loss in converting electric power into mechanical
power. Let E be the voltage applied to the idealized transducer. The electric power is then given
by E·i, which must be equivalent to the mechanical power:

P_in = E i = τ_m ω_m   (2.1.2)

where ω_m is the angular velocity of the motor rotor. Substituting eq.(1) into eq.(2) and dividing
both sides by i yield the second fundamental relationship of a DC motor:

E = K_t ω_m   (2.1.3)
The above expression dictates that the voltage across the idealized power transducer is
proportional to the angular velocity and that the proportionality constant is the same as the torque
constant given by eq.(1). This voltage E is called the back emf (electro-motive force) generated at
the air gap, and the proportionality constant is often called the back emf constant.
Note that, based on eq.(1), the unit of the torque constant is Nm/A in the metric system,
whereas that of the back emf constant is V/rad/s based on eq.(2).
Exercise 2.1 Show that the two units, Nm/A and V/rad/s, are identical.
The actual DC motor is not a loss-less transducer, having resistance at the rotor windings
and the commutation mechanism. Furthermore, windings may exhibit some inductance, which
stores energy. Figure 2.1.2 shows the schematic of the electric circuit, including the windings
resistance R and inductance L. From the figure,
u = R i + L (di/dt) + E   (2.1.4)

Figure 2.1.2 Electric circuit model of the DC motor armature with winding resistance R and inductance L
Combining eqs.(1), (3) and (4), we can obtain the actual relationship among the applied
voltage u, the rotor angular velocity ω_m, and the motor torque τ_m:

τ_m = (K_t / R) u − T_e (dτ_m / dt) − (K_t² / R) ω_m   (2.1.5)

where the time constant T_e = L / R, called the motor reactance, is often negligibly small. Neglecting
this second term, the above equation reduces to an algebraic relationship:

τ_m = (K_t / R) u − (K_t² / R) ω_m   (2.1.6)
This is called the torque-speed characteristic. Note that the motor torque increases in proportion
to the applied voltage, but the net torque reduces as the angular velocity increases. Figure 2.1.3
illustrates the torque-speed characteristics. The negative slope of the straight lines, −K_t²/R,
implies that the voltage-controlled DC motor has an inherent damping in its mechanical behavior.
The power dissipated in the DC motor is given by

P_dis = R i² = (R / K_t²) τ_m²   (2.1.7)

or, equivalently,

P_dis = (τ_m / K_m)²,   K_m = K_t / √R   (2.1.8)

where the parameter K_m is called the motor constant. The motor constant represents how
effectively electric power is converted to torque. The larger the motor constant, the larger the
torque generated for a given power dissipation. A DC motor with more powerful
magnets, thicker winding wires, and a larger rotor diameter has a larger motor constant. A motor
with a larger motor constant, however, has a larger damping, as the negative slope of the torque-
speed characteristics becomes steeper, as illustrated in Figure 2.1.3.
Figure 2.1.3 Torque-speed characteristics and net output power P_out of a DC motor for a given armature voltage u
Taking into account the internal power dissipation, the net output power of the DC motor
is given by

P_out = τ_m ω_m = ( (K_t / R) u − K_m² ω_m ) ω_m   (2.1.9)
This net output power is a parabolic function of the angular velocity, as illustrated in Figure 2.1.3.
It should be noted that the net output power becomes maximum in the middle point of the
velocity axis, i.e. 50 % of the maximum angular velocity for a given armature voltage u. This
implies that the motor is operated most effectively at 50 % of the maximum speed. As the speed
departs from this middle point, the net output power decreases, and it vanishes at the zero speed
as well as at the maximum speed. Therefore, it is important to select the motor and gearing
combination so that maximum power transfer is achieved.
The ratio of the output torque to the output angular velocity defines the impedance of the motor
axis:

Z_m = τ_m / ω_m   (2.2.1)

Figure 2.2.1 DC motor driving an arm link through gearing at the joint axis
Although a robotic system has multiple axes driven by multiple actuators having dynamic
interactions, we consider behavior of an independent single axis in this section, assuming that all
the other axes are fixed.
To fill the gap we need a gear reducer, as shown in Figure 2.2.1. Let r > 1 be the gear reduction
ratio (if d₁ and d₂ are the diameters of the two gears, the gear reduction ratio is r = d₂/d₁). The
torque and angular velocity are changed to:

τ_load = r τ_m,   ω_load = (1/r) ω_m   (2.2.2)

where τ_load and ω_load are the torque and angular velocity at the joint axis, as shown in the
figure. Note that the gear reducer of gear ratio r makes the impedance r² times larger than that
of the motor axis Z_m:

Z_load = r² Z_m   (2.2.3)
Let I_m be the inertia of the motor rotor. From the free body diagram of the motor rotor,

I_m ω̇_m = τ_m − (1/r) τ_load   (2.2.4)

where (1/r) τ_load is the torque acting on the motor shaft from the joint axis through the gears, and
ω̇_m is the time rate of change of angular velocity, i.e. the angular acceleration. Let I_l be the
inertia of the arm link about the joint axis, and b the damping coefficient of the bearings
supporting the joint axis. Considering the free body diagram of the arm link and joint axis yields

I_l ω̇_load + b ω_load = τ_load   (2.2.5)

Eliminating τ_load from the above two equations and using eqs.(2.1.6) and (2.2.2) yields

I ω̇_load + B ω_load = k u   (2.2.6)

where I, B, k are the effective inertia, damping, and input gain reflected to the joint axis:

I = I_l + r² I_m   (2.2.7)

B = b + r² K_m²   (2.2.8)

k = r K_t / R   (2.2.9)
Note that the effective inertia of the motor rotor is r² times larger than the original value I_m when
reflected to the joint axis. Likewise, the damping associated with the motor constant becomes r²
times larger when reflected to the joint axis. The gear ratio of a robotic system is typically 20 ~ 100,
which means that the effective inertia and damping become 400 ~ 10,000 times larger than those of
the motor itself.
For fast dynamic response, the inertia of the motor rotor must be small. This is a crucial
requirement as the gear ratio gets larger, as in robotics applications. There are two ways of
reducing the rotor inertia in motor design. One is to reduce the diameter and make the rotor
longer, as shown in Figure 2.2.2-(a). The other is to make the motor rotor very thin, like a
pancake, as shown in Figure 2.2.2-(b).
Most robots use long and slender motors, as in Figure 2.2.2-(a), and some heavy-duty robots use the
pancake type motor. Figure 2.2.3 shows a pancake motor by Mavilor Motors, Inc.
Exercise 2-2 Assuming that the angular velocity of a joint axis is approximately zero, obtain the
optimal gear ratio r in eq.(7) that maximizes the acceleration of the joint axis.
The power loss at the transistor of a linear power amplifier is given by

P_loss = (V − u) i = (1/R) (V − u) u   (2.3.1)

where R is the armature resistance. Figure 2.3.2 plots the internal power loss at the transistor
where R is the armature resistance. Figure 2.3.2 plots the internal power loss at the transistor
against the armature voltage. The power loss becomes the largest in the middle, where half the
supply voltage V/2 acts on the armature. This large heat loss is not only wasteful but also harmful,
burning the transistor in the worst case scenario. Therefore, this type of linear power amplifier is
seldom used except for driving very small motors.
Figure 2.3.1 Linear power amplifier: a transistor in series with the motor armature drops the voltage V − u
Figure 2.3.2 Power loss at the transistor plotted against the armature voltage u; the loss is largest in the middle of the range, at u = V/2, and vanishes at the complete OFF and ON states
No power is dissipated in the transistor when it is either completely blocking the current (OFF) or
fully admitting the current (ON). For all armature voltages other than these complete ON-OFF states,
some fraction of power is dissipated in the transistor. Pulse Width Modulation (PWM) is a
technique to control the effective armature voltage by using ON-OFF switching alone. It varies
the ratio of the time length of the complete ON state to that of the complete OFF state. Figure 2.3.3
illustrates PWM signals. A single cycle of ON and OFF states is called the PWM period, whereas
the percentage of the ON state in a single period is called the duty rate. The first PWM signal in the
figure has a 60% duty rate, and the second one 25%. If the supply voltage is V = 10 volts, the
average voltage is 6 volts and 2.5 volts, respectively.
The PWM period is set to be much shorter than the time constant associated with the
mechanical motion. The PWM frequency, that is the reciprocal to the PWM period, is usually 2 ~
20 kHz, whereas the bandwidth of a motion control system is at most 100 Hz. Therefore, the
discrete switching does not influence the mechanical motion in most cases.
Figure 2.3.3 PWM signals at 60% and 25% duty rates; one ON-OFF cycle is the PWM period T_PWM
As modeled in eq.(2.1.4), the actual rotor windings have some inductance L. If the
electric time constant Te is much larger than the PWM period, the actual current flowing to the
motor armature is a smooth curve, as illustrated in Figure 2.3.4-(a). In other words, the inductance
works as a low-pass filter, filtering out the sharp ON-OFF profile of the input voltage. In contrast,
if the electric time constant is too small, compared to the PWM period, the current profile
becomes zigzag, following the rectangular voltage profile, as shown in Figure 2.3.4-(b). As a
result, unwanted high frequency vibrations are generated at the motor rotor. This happens for
some types of pancake motors with low inductance and low rotor inertia.
Figure 2.3.4 Armature current profiles: (a) T_e >> T_PWM, smooth current; (b) T_e << T_PWM, zigzag current following the rectangular voltage
The PWM frequency should also be higher than the human audible range, say 15 kHz, so that the
switching does not produce audible noise. Therefore, a higher PWM frequency is in general
desirable. However, it causes a few adverse effects. As the PWM frequency increases:
• The heat loss increases and the transistor may overheat,
• Harmful large voltage spikes and noise are generated, and
• Radio frequency interference and electromagnetic interference become prominent.
The first adverse effect is the most critical one, which limits the capacity of a PWM
amplifier. Although no power loss occurs at the switching transistor when it is completely ON or
OFF, a significant amount of loss is caused during the transition. As the transistor state is switched
from OFF to ON or vice versa, the transistor in Figure 2.3.1 goes through intermediate states,
which entail heat loss, as shown in Figure 2.3.2. Since it takes some finite time for a
semiconductor to make a transition, every time it is switched, a certain amount of power is
dissipated. As the PWM frequency increases, more power loss and, more importantly, more heat
generation occur. Figure 2.3.5 illustrates the turn-on and turn-off transitions of a switching
transistor. When turned on, the collector current Ic increases and the voltage Vce decreases. The
product of these two values provides the switching power loss as shown by broken lines in the
figure. Note that turn-off takes a longer time, hence it causes more heat loss.
Figure 2.3.5 Transient responses of transistor current and voltage and associated power loss
during turn-on and turn-off state transitions
From Figure 2.3.5 it is clear that a switching transistor having fast turn-on and turn-off
characteristics is desirable, since it causes less power loss and heat generation. Power MOSFETs
(Metal-Oxide-Semiconductor Field-Effect Transistors) have very fast switching characteristics,
enabling 15 ~ 100 kHz of switching frequencies. For relatively small motors, MOSFETs are
widely used in industry due to their fast switching characteristics. For larger motors, IGBTs
(Insulated Gate Bipolar Transistors) are the rational choice because of their larger capacity and
relatively fast response.
As the switching speed increases, the heat loss becomes smaller. However, fast switching
causes other problems. Consider eq.(2.1.4) again, the dynamic equation of the armature:
u = R i + L (di/dt) + E   (2.1.4)

High speed switching means that the time derivative of the current i is large. This generates a large
inductance-induced kickback voltage L (di/dt) that often damages switching semiconductors. As
illustrated in Figure 2.3.6-(a), a large spike is induced when turning on the semiconductor. To get
rid of this problem a free-wheeling-diode (FWD) is inserted across the motor armature, as shown
in Figure 2.3.6-(b). As the voltage across the armature exceeds a threshold level, FWD kicks in to
bypass the current so that the voltage may be clamped, as shown in figure (c).
Figure 2.3.6 Voltage spike induced by inductance (a), free-wheeling diode (b),
and the clamped spike with FWD (c)
When one diagonal pair of gates is turned ON, as shown in figure (i), the current flows through the
armature in the forward direction. When the gate states are reversed, as shown in figure (ii), the
direction of the current is reversed. Furthermore, the motor coasts off when all the gates are turned
OFF, since the armature is totally isolated or disconnected, as shown in figure (iii). On the other
hand, the armature windings are short-circuited when both gates C and D are turned ON and A and
B are turned OFF. See figure (iv). This shorted circuit provides a “braking” effect when the motor
rotor is rotating.
Figure 2.3.7 H-bridge gate states: (iii) with all gates OFF the motor armature coasts off; (iv) with gates C and D ON the motor windings are shorted, causing a braking effect
It should be noted that there is a fundamental danger in the H-bridge circuit. A direct
short circuit can occur if the top and bottom switches connected to the same armature terminal are
turned on at the same time. A catastrophic failure results when one of the switching transistors on
the same vertical line in Figure 2.3.7 fails to turn off before the other turns on. Most of H-bridge
power stages commercially available have several protection mechanisms to prevent the direct
short circuit.
Figure 2.4.2 On-board and stationary controllers (Quanser). The stationary host PC runs MATLAB/Simulink and the WinCon server, and communicates over a wireless network bridge with the on-board robot controller running the WinCon client. On-board controller specifications:
• Processor: 266 MHz Pentium II
• Memory: 128 MB RAM
• File system: 256 MB Compact Flash solid-state disk
• Operating system: Windows XP with VentureCom RTX real-time extension
• Control system software: WinCon
• 8 A/Ds
• 8 D/As
• 8 digital outputs
• 8 digital inputs
• 8 encoder channels
• 2 clock sources
• Supports keyboard, mouse, video, LAN, USB, printer, and 2 serial ports
Figure 2.5.1 Optical shaft encoder with a translucent disk bearing a grid pattern and a photodetector
This optical shaft encoder has no mechanical component making sliding contact, and hence
no component wear. An optical circuit is not disturbed by electric noise, and the photodetector
output is a digital signal, which is more stable than an analogue signal. These make an optical
shaft encoder reliable and robust; it is a suitable choice as a feedback sensor for servomotors.
Figure 2.5.2 Double-track encoder for detection of the direction of rotation: the photodetector signals of tracks A and B are 90° out of phase
It should be noted that this type of encoder requires initialization of the counter prior to
actual measurement. Usually a robot is brought to a home position and the up-down counters are
set to the initial state corresponding to the home position. This type of encoder is referred to as an
incremental encoder, since A-phase and B-phase signals provide relative displacements from an
initial point. Whenever the power supply is shut down, the initialization must be performed for
incremental encoders.
² For simplicity only an incremental encoder is considered.
The density of encoder pulses is proportional to the angular velocity of the rotating shaft. The pulse
density can be measured by counting the number of incoming pulses in
every fixed period, say T=10 ms, as shown in the figure. This can be done with another up-down
counter that counts A phase and B phase pulses. Counting continues only for the fixed sampling
period T, and the result is sent to a controller at the end of every sampling period. Then the
counter is cleared to re-start counting for the next period.
As the sampling period gets shorter, the velocity measurement is updated more
frequently, and the delay of velocity feedback gets shorter. However, if the sampling period is too
short, discretization error becomes prominent. The problem is more critical when the angular
velocity is very small. Not many pulses are generated, and just a few pulses can be counted for a
very short period. As the sampling period gets longer, the discretization error becomes smaller,
but the time delay may cause instability of the control system.
An effective method for resolving these conflicting requirements is to use a dual mode
velocity measurement. Instead of counting the number of pulses, the interval of adjacent pulses is
measured at low speed. The reciprocal to the pulse interval gives the angular velocity. As shown
in Figure 2.5.6, the time interval can be measured by counting clock pulses. The resolution of this
pulse interval measurement is much higher than that of encoder pulse counting in a lower speed
range. In contrast, the resolution gets worse at high speed, since the adjacent pulse interval
becomes small. Therefore, these two methods supplement to each other. The dual mode velocity
measurement uses both counters and switches them depending on the speed.
The brushes and mechanical commutator of a traditional DC motor are subject to wear, and in many
applications they are replaced by brushless motors and other types of motors having no mechanical
commutator. Since brushless motors, or AC synchronous motors, are increasingly used in robotic
systems and other automation systems, this section briefly describes their principle and drive
methods.
A drawback of this brushless motor design is that the torque may change discontinuously
when switches are turned on and off as the rotor position changes. In the traditional DC motor
this torque ripple is reduced by simply increasing the commutator segments and dividing the
windings to many segments. For the brushless motor, however, it is expensive to increase the
number of electronic switching circuits. Instead, in the brushless motor the currents flowing into
individual windings are varied continuously so that the torque ripple be minimum. A common
construction of the windings is that of a three-phase windings, as shown in Figure 2.6.2.
Let I_A, I_B and I_C be the individual currents flowing into the three windings shown in the
figure. These three currents are varied such that:

I_A = I₀ sin θ
I_B = I₀ sin(θ − 2π/3)   (2.6.1)
I_C = I₀ sin(θ − 4π/3)

where I₀ is the scalar magnitude of the desired current, and θ is the rotor position. The torque
generated is the summation of the three torques generated at the three windings. Taking into
account the angle between the magnetic field and the force generated at each air gap, we obtain

τ_m = k₀ [ I_A sin θ + I_B sin(θ − 2π/3) + I_C sin(θ − 4π/3) ]   (2.6.2)

Substituting eq.(2.6.1) into eq.(2.6.2) and using the identity sin²θ + sin²(θ − 2π/3) + sin²(θ − 4π/3) = 3/2 yields

τ_m = (3/2) k₀ I₀   (2.6.3)
The above expression indicates a linear relationship between the output torque and the scalar
magnitude of the three currents. The torque-current characteristics of a brushless motor are thus
essentially the same as those of the traditional DC motor.
Chapter 3
Robot Mechanisms
A robot is a machine capable of physical motion for interacting with the environment.
Physical interactions include manipulation, locomotion, and any other tasks changing the state of
the environment or the state of the robot relative to the environment. A robot has some form of
mechanisms for performing a class of tasks. A rich variety of robot mechanisms has been
developed in the last few decades. In this chapter, we will first overview various types of
mechanisms used for generating robotic motion, and introduce some taxonomy of mechanical
structures before going into a more detailed analysis in the subsequent chapters.
Figure 3.1.1 Primitive joint types: (a) a prismatic joint and (b) a revolute joint
Combining these two types of primitive joints, we can create many useful mechanisms
for robot manipulation and locomotion. These two types of primitive joints are simple to build
and are well grounded in engineering design. Most of the robots that have been built are
combinations of only these two types. Let us look at some examples.
¹ It is interesting to note that the joints of biological creatures are all of the revolute type; there are
no sliding joints in their extremities.
A mechanism with three degrees of freedom may suffice for this positioning requirement. Figures 3.1.2 ~ 4 show three types of robot arm
structures corresponding to the Cartesian coordinate system, the cylindrical coordinate system,
and the spherical coordinate system respectively. The Cartesian coordinate robot shown in Figure
3.1.2 has three prismatic joints, corresponding to three axes denoted x, y , and z. The cylindrical
robot consists of one revolute joint and two prismatic joints, with θ, r, and z representing the
coordinates of the end-effecter. Likewise, the spherical robot has two revolute joints, denoted θ
and φ, and one prismatic joint, denoted r.
There are many other ways of locating an end-effecter in three-dimensional space. Figure
3.1.5 ~ 7 show three other kinematic structures that allow the robot to locate its end-effecter in
three-dimensional space. Although these mechanisms have no analogy with common coordinate
systems, they are capable of locating the end-effecter in space, and have salient features desirable
for specific tasks. The first one is the so-called SCARA robot, consisting of two revolute joints and
one prismatic joint. This robot structure is particularly desirable for assembly automation in
manufacturing systems, having a wide workspace in the horizontal direction and an independent
vertical axis appropriate for insertion of parts.
The second type, called an articulated robot or an elbow robot, consists of all three
revolute joints, like a human arm. This type of robot has a great degree of flexibility and
versatility, being the most standard structure of robot manipulators. The third kinematic structure,
also consisting of three revolute joints, has a unique mass balancing structure. The counter
balance at the elbow eliminates gravity load for all three joints, thus reducing torque requirements
for the actuators. This structure has been used for the direct-drive robots having no gear reducer.
Note that all the above robot structures are made of serial connections of primitive joints.
This class of kinematic structures, termed a serial linkage, constitutes the fundamental makeup of
robot mechanisms. They have no kinematic constraint in each joint motion, i.e. each joint
displacement is a generalized coordinate. This facilitates the analysis and control of the robot
mechanism. There are, however, different classes of mechanisms used for robot structures.
Although more complex, they do provide some useful properties. We will look at these other
mechanisms in the subsequent sections.
Figure 3.2.1 Five-bar-link parallel mechanism: active joints 1 and 3 are mounted on the base link (link 0), while joints 2, 4, and 5 are passive, forming a closed loop that carries the end-effecter
This type of parallel linkage, having a closed-loop kinematic chain, has significant
features. First, placing both actuators at the base link makes the robot arm lighter, compared to
the serial link arm with the second motor fixed to the tip of link 1. Second, a larger end-effecter
load can be borne with the two serial linkage arms sharing the load. Figure 3.2.2 shows a heavy-
duty robot having a parallel link mechanism.
Figure 3.2.3 shows the Stewart mechanism, which consists of a moving platform, a fixed
base, and six powered cylinders connecting the moving platform to the base frame. The position
and orientation of the moving platform are determined by the six independent actuators. The load
acting on the moving platform is borne by the six “arms”. Therefore, the load capacity is generally
large, and the dynamic response is fast for this type of robot mechanism. Note, however, that this
mechanism has spherical joints, a different type of joints than the primitive joints we considered
initially.
Chapter 4
Kinematics
Kinematics is Geometry of Motion. It is one of the most fundamental disciplines in
robotics, providing tools for describing the structure and behavior of robot mechanisms. In this
chapter, we will discuss how the motion of a robot mechanism is described, how it responds to
actuator movements, and how the individual actuators should be coordinated to obtain desired
motion at the robot end-effecter. These are questions central to the design and control of robot
mechanisms.
To begin with, we will restrict ourselves to a class of robot mechanisms that work within
a plane, i.e. Planar Kinematics. Planar kinematics is much more tractable mathematically,
compared to general three-dimensional kinematics. Nonetheless, most of the robot mechanisms of
practical importance can be treated as planar mechanisms, or can be reduced to planar problems.
General three-dimensional kinematics, on the other hand, needs special mathematical tools, which
will be discussed in later chapters.
Example 4.1 Consider a three degree-of-freedom, planar robot arm shown in Figure 4.1.1. The
arm consists of one fixed link and three movable links that move within the plane. All the links
are connected by revolute joints whose joint axes are all perpendicular to the plane of the links.
There is no closed-loop kinematic chain; hence, it is a serial link mechanism.
Figure 4.1.1 Three dof planar robot with three revolute joints
To describe this robot arm, a few geometric parameters are needed. First, the length of
each link is defined to be the distance between adjacent joint axes. Let points O, A, and B be the
locations of the three joint axes, respectively, and point E be a point fixed to the end-effecter.
Then the link lengths are ℓ₁ = OA, ℓ₂ = AB, ℓ₃ = BE. Let us assume that Actuator 1 driving
link 1 is fixed to the base link (link 0), generating angle θ₁, while Actuator 2 driving link 2 is
fixed to the tip of Link 1, creating angle θ₂ between the two links, and Actuator 3 driving Link 3
is fixed to the tip of Link 2, creating angle θ₃, as shown in the figure. Since this robot arm
performs its task by moving the end-effecter at point E, we are concerned with the location of the
end-effecter. To describe its location, we use a coordinate system, O-xy, fixed to the base link
with the origin at the first joint, and describe the end-effecter position with coordinates x_e and
y_e. We can relate the end-effecter coordinates to the joint angles determined by the three
actuators by using the link lengths and joint angles defined above:

x_e = ℓ₁ cos θ₁ + ℓ₂ cos(θ₁ + θ₂) + ℓ₃ cos(θ₁ + θ₂ + θ₃)   (4.1.1)

y_e = ℓ₁ sin θ₁ + ℓ₂ sin(θ₁ + θ₂) + ℓ₃ sin(θ₁ + θ₂ + θ₃)   (4.1.2)
This three dof robot arm can locate its end-effecter at a desired orientation as well as at a desired
position. The orientation of the end-effecter can be described with the angle of the centerline of
the end-effecter measured from the positive x coordinate axis. This end-effecter orientation φ_e is
related to the actuator displacements as

φ_e = θ₁ + θ₂ + θ₃   (4.1.3)
The above three equations describe the position and orientation of the robot end-effecter
viewed from the fixed coordinate system in relation to the actuator displacements. In general, a
set of algebraic equations relating the position and orientation of a robot end-effecter, or any
significant part of the robot, to actuator displacements, or displacements of active joints, is called
Kinematic Equations, or more specifically, Forward Kinematic Equations in the robotics
literature.
Exercise 4.1
Shown below in Figure 4.1.2 is a planar robot arm with two revolute joints and one
prismatic joint. Using the geometric parameters and joint displacements, obtain the kinematic
equations relating the end-effecter position and orientation to the joint displacements.
Now that the above Example and Exercise problems have illustrated kinematic equations,
let us obtain a formal expression for kinematic equations. As mentioned in the previous chapter,
two types of joints, prismatic and revolute joints, constitute robot mechanisms in most cases. The
displacement of the i-th joint is described by a distance d_i if it is a prismatic joint, and by an angle
θ_i for a revolute joint. For a formal expression, let us use a generic notation q_i. Namely, the joint
displacement q_i represents either the distance d_i or the angle θ_i, depending on the type of joint:

q_i = d_i for a prismatic joint,   q_i = θ_i for a revolute joint   (4.1.4)

We collectively represent all the joint displacements involved in a robot mechanism with a
column vector q = [q₁ q₂ ⋯ q_n]^T, where n is the number of joints. Kinematic equations
relate these joint displacements to the position and orientation of the end-effecter. Let us
collectively denote the end-effecter position and orientation by a vector p. For planar mechanisms,
the end-effecter location is described by three variables:

p = [x_e  y_e  φ_e]^T   (4.1.5)

Using these notations, we represent kinematic equations as a vector function relating p to q:

p = f(q),   p ∈ ℝ^(3×1),   q ∈ ℝ^(n×1)   (4.1.6)
For a serial link mechanism, all the joints are usually active joints driven by individual
actuators. Except for some special cases, these actuators uniquely determine the end-effecter
position and orientation as well as the configuration of the entire robot mechanism. If there is a
link whose location is not fully determined by the actuator displacements, such a robot
mechanism is said to be under-actuated. Unless a robot mechanism is under-actuated, the
collection of the joint displacements, i.e. the vector q, uniquely determines the entire robot
configuration. For a serial link mechanism, these joints are independent, having no geometric
constraint other than their stroke limits. Therefore, these joint displacements are generalized
coordinates that locate the robot mechanism uniquely and completely. Formally, the number of
generalized coordinates is called degrees of freedom. Vector q is called joint coordinates, when
they form a complete and independent set of generalized coordinates.
The vector kinematic equation derived in the previous section provides the functional
relationship between the joint displacements and the resultant end-effecter position and
orientation. By substituting values of joint displacements into the right-hand side of the kinematic
equation, one can immediately find the corresponding end-effecter position and orientation. The
problem of finding the end-effecter position and orientation for a given set of joint displacements
is referred to as the direct kinematics problem. This is simply to evaluate the right-hand side of
the kinematic equation for known joint displacements. In this section, we discuss the problem of
moving the end-effecter of a manipulator arm to a specified position and orientation. We need to
find the joint displacements that lead the end-effecter to the specified position and orientation.
This is the inverse of the previous problem, and is thus referred to as the inverse kinematics
problem. The kinematic equation must be solved for joint displacements, given the end-effecter
position and orientation. Once the kinematic equation is solved, the desired end-effecter motion
can be achieved by moving each joint to the determined value.
In the direct kinematics problem, the end-effecter location is determined uniquely for any
given set of joint displacements. On the other hand, the inverse kinematics is more complex in the
sense that multiple solutions may exist for the same end-effecter location. Also, solutions may not
always exist for a particular range of end-effecter locations and arm structures. Further, since the
kinematic equation is comprised of nonlinear simultaneous equations with many trigonometric
functions, it is not always possible to derive a closed-form solution, which is the explicit inverse
function of the kinematic equation. When the kinematic equation cannot be solved analytically,
numerical methods are used in order to derive the desired joint displacements.
Example 4.2 Consider the three dof planar arm shown in Figure 4.1.1 again. To solve its
inverse kinematics problem, the kinematic structure is redrawn in Figure 4.2.1. The problem is to
find the three joint angles θ₁, θ₂, θ₃ leading the end-effecter to a desired position and orientation,
x_e, y_e, φ_e. We take a two-step approach. First, we find the position of the wrist, point B, from
x_e, y_e, φ_e. Then we find θ₁, θ₂ from the wrist position. Angle θ₃ can then be determined from
these angles and the end-effecter orientation.
Figure 4.2.1 Geometry of the three dof planar arm for inverse kinematics: wrist point B at (x_w, y_w), distance r from O to B, and angles α, β, γ
Let x_w and y_w be the coordinates of the wrist. As shown in Figure 4.2.1, point B is at
distance ℓ₃ from the given end-effecter position E. Moving in the opposite direction to the end-effecter
orientation φ_e, the wrist coordinates are given by

x_w = x_e − ℓ₃ cos φ_e
y_w = y_e − ℓ₃ sin φ_e   (4.2.1)

Note that the right-hand sides of the above equations are functions of x_e, y_e, φ_e alone. From these
wrist coordinates, we can determine the angle α shown in the figure:¹

α = tan⁻¹(y_w / x_w)   (4.2.2)
Next, let us consider the triangle OAB and define the angles β, γ, as shown in the figure.
This triangle is formed by the shoulder O, the elbow A, and the wrist B. Applying the cosine law
to the elbow angle β yields

ℓ₁² + ℓ₂² − 2 ℓ₁ ℓ₂ cos β = r²   (4.2.3)

where r² = x_w² + y_w² is the squared distance between O and B. Solving this for angle β yields

θ₂ = π − β = π − cos⁻¹( (ℓ₁² + ℓ₂² − x_w² − y_w²) / (2 ℓ₁ ℓ₂) )   (4.2.4)
Similarly,

r² + ℓ₁² − 2 r ℓ₁ cos γ = ℓ₂²   (4.2.5)

Solving this for γ yields

θ₁ = α − γ = tan⁻¹(y_w / x_w) − cos⁻¹( (x_w² + y_w² + ℓ₁² − ℓ₂²) / (2 ℓ₁ √(x_w² + y_w²)) )   (4.2.6)

From the above θ₁, θ₂ we can obtain

θ₃ = φ_e − θ₁ − θ₂   (4.2.7)
Eqs. (4), (6), and (7) provide a set of joint angles that locates the end-effecter at the
desired position and orientation. It is interesting to note that there is another way of reaching the
same end-effecter position and orientation, i.e. another solution to the inverse kinematics
problem. Figure 4.2.2 shows two configurations of the arm leading to the same end-effecter
location: the elbow down configuration and the elbow up configuration. The former corresponds
to the solution obtained above. The latter, having the elbow position at point A’, is symmetric to
the former configuration with respect to line OB, as shown in the figure. Therefore, the two
solutions are related as

θ₁′ = θ₁ + 2γ
θ₂′ = −θ₂   (4.2.8)
θ₃′ = φ_e − θ₁′ − θ₂′ = θ₃ + 2θ₂ − 2γ
Inverse kinematics problems often possess multiple solutions, like the above example,
since they are nonlinear. Specifying end-effecter position and orientation does not uniquely
determine the whole configuration of the system. This implies that vector p, the collective
position and orientation of the end-effecter, cannot be used as generalized coordinates.
The existence of multiple solutions, however, provides the robot with an extra degree of
flexibility. Consider a robot working in a crowded environment. If multiple configurations exist
for the same end-effecter location, the robot can take a configuration having no interference with
the environment. Due to physical limitations, however, the solutions to the inverse kinematics
problem do not necessarily provide feasible configurations. We must check whether each solution
satisfies the constraint of the movable range, i.e. the stroke limit of each joint.

¹ Unless noted specifically we assume that the arc tangent function takes an angle in the proper
quadrant, consistent with the signs of the two operands.
Figure 4.2.2 Multiple solutions to the inverse kinematics problem of Example 4.2
(a) Obtain the kinematic equations relating the endpoint coordinates, x_e, y_e, z_e, to the joint
displacements θ₁, θ₂, d₃.
(b) Solve the inverse kinematics problem, i.e. obtain the joint coordinates, given the endpoint
coordinates. Obtain all of the multiple solutions, assuming that each revolute joint is
allowed to rotate 360 degrees and that the prismatic joint is restricted to d₃ ≥ 0.
(c) Sketch the arm configuration for each of the multiple solutions.
Figure 4.3.1 Schematic of 3 dof articulated robot arm
As discussed in Chapter 3, parallel link mechanisms have been used for many
applications, particularly for heavy duty and precision applications. Since a parallel link
mechanism contains a closed kinematic chain, formulating kinematic equations is more involved.
In general, closed-form kinematic equations relating its end effecter location to joint
displacements cannot be obtained directly. A standard procedure for obtaining kinematic
equations includes:
• Break down the closed kinematic chain into multiple open kinematic chains,
• Formulate kinematic equations for each of the open kinematic chains, and
• Solve the set of simultaneous kinematic equations for the independent joint displacements
and the end point coordinates.
Let us apply this procedure to a five-bar-link robot with a closed kinematic chain.
Example 4.3 Consider the five-bar-link planar robot arm shown in Figure 4.4.1. Joints 1 and 3
are active joints driven by independent actuators, but the other joints are free joints, which are
determined by the active joints. Let us first break the closed kinematic chain at Joint 5, and create
two open kinematic chains: Links 1 and 2 vs. Links 3 and 4. From the figure, we obtain:

x_e = ℓ₁ cos θ₁ + ℓ₂ cos θ₂
y_e = ℓ₁ sin θ₁ + ℓ₂ sin θ₂   (4.4.1)

for Links 1 and 2, and

x_A = ℓ₃ cos θ₃ + ℓ₄ cos θ₄
y_A = ℓ₃ sin θ₃ + ℓ₄ sin θ₄   (4.4.2)

for Links 3 and 4. Note that, in Eq. (1), Joint 2 is a passive joint. Hence, angle θ₂ is a dependent
variable. Using θ₂, however, we can obtain the coordinates of point A:

x_A = ℓ₁ cos θ₁ + ℓ₅ cos θ₂
y_A = ℓ₁ sin θ₁ + ℓ₅ sin θ₂   (4.4.3)

Equating (2) and (3) yields two constraint equations:

ℓ₁ cos θ₁ + ℓ₅ cos θ₂ = ℓ₃ cos θ₃ + ℓ₄ cos θ₄
ℓ₁ sin θ₁ + ℓ₅ sin θ₂ = ℓ₃ sin θ₃ + ℓ₄ sin θ₄   (4.4.4)

Note that there are four variables and two constraint equations. Therefore, two of them, e.g.
θ₁, θ₃, are independent. Solving Eq.(4) for θ₂ and θ₄ and substituting θ₂ into Eq.(1) yields the
forward kinematic equations relating the end point coordinates to the independent joint
displacements. It should be noted that multiple solutions exist for these constraint equations (4).
Although the forward kinematic equations are difficult to write out explicitly, the inverse
kinematic equations can be obtained for this parallel link mechanism. The problem is to find
θ₁, θ₃ that lead the endpoint to a desired position, x_e, y_e.
Exercise 4.3 Obtain the joint angles of the dog’s legs, given the body position and orientation.
Figure 4.4.2 Planar model of the legged (dog) robot: body (link 0) at position (x_B, y_B) with orientation φ_B, and legs formed by links 1, 2 and links 3, 4 with joint angles θ₁ through θ₄
A manipulator arm must have at least six degrees of freedom in order to locate its end-
effecter at an arbitrary point with an arbitrary orientation in space. Manipulator arms with less
than 6 degrees of freedom are not able to perform such arbitrary positioning. On the other hand, if
a manipulator arm has more than 6 degrees of freedom, there exist an infinite number of solutions
to the kinematic equation. Consider for example the human arm, which has seven degrees of
freedom, excluding the joints at the fingers. Even if the hand is fixed on a table, one can change
the elbow position continuously without changing the hand location. This implies that there exist
an infinite set of joint displacements that lead the hand to the same location. Manipulator arms
with more than six degrees of freedom are referred to as redundant manipulators. We will discuss
redundant manipulators in detail in the following chapter.
Chapter 5
Differential Motion
In the previous chapter, the position and orientation of the manipulator end-effecter were evaluated in
relation to joint displacements. The joint displacements corresponding to a given end-effecter location
were obtained by solving the kinematic equation for the manipulator. This preliminary analysis
permitted the robotic system to place the end-effecter at a specified location in space. In this chapter,
we are concerned not only with the final location of the end-effecter, but also with the velocity at
which the end-effecter moves. In order to move the end-effecter in a specified direction at a specified
speed, it is necessary to coordinate the motion of the individual joints. The focus of this chapter is the
development of fundamental methods for achieving such coordinated motion in multiple-joint robotic
systems. As discussed in the previous chapter, the end-effecter position and orientation are directly
related to the joint displacements; hence, in order to coordinate joint motions, we derive the
differential relationship between the joint displacements and the end-effecter location, and then solve
for the individual joint motions.
Figure 5.1.1 Two dof planar robot with two revolute joints

Consider the two dof planar robot arm shown in Figure 5.1.1, having two revolute joints with joint
angles θ₁ and θ₂ and link lengths ℓ₁ and ℓ₂. The end-effecter coordinates x_e and y_e are related to
the joint angles by the kinematic equations

x_e = ℓ₁ cos θ₁ + ℓ₂ cos(θ₁ + θ₂)   (5.1.1)

y_e = ℓ₁ sin θ₁ + ℓ₂ sin(θ₁ + θ₂)   (5.1.2)
We are concerned with “small movements” of the individual joints at the current position,
and we want to know the resultant motion of the end-effecter. This can be obtained by the total
derivatives of the above kinematic equations:
dx_e = (∂x_e(θ₁, θ₂)/∂θ₁) dθ₁ + (∂x_e(θ₁, θ₂)/∂θ₂) dθ₂   (5.1.3)

dy_e = (∂y_e(θ₁, θ₂)/∂θ₁) dθ₁ + (∂y_e(θ₁, θ₂)/∂θ₂) dθ₂   (5.1.4)
where x_e, y_e are functions of both θ₁ and θ₂, hence two partial derivatives are involved in the
total derivatives. In vector form the above equations reduce to

dx = J dq   (5.1.5)

where

dx = (dx_e, dy_e)^T,   dq = (dθ₁, dθ₂)^T   (5.1.6)

and J is the 2 by 2 matrix given by

J = [ ∂x_e/∂θ₁   ∂x_e/∂θ₂ ]
    [ ∂y_e/∂θ₁   ∂y_e/∂θ₂ ]   (5.1.7)
The matrix J comprises the partial derivatives of the functions x_e(θ₁, θ₂) and y_e(θ₁, θ₂) with
respect to the joint displacements θ₁ and θ₂. The matrix J, called the Jacobian Matrix, represents the
differential relationship between the joint displacements and the resulting end-effecter motion.
Note that most robot mechanisms have a multitude of active joints, hence a matrix is needed for
describing the mapping of the vectorial joint motion to the vectorial end-effecter motion.
For the two-dof robot arm of Figure 5.1.1, the components of the Jacobian matrix are
computed as

J = [ −ℓ₁ sin θ₁ − ℓ₂ sin(θ₁ + θ₂)    −ℓ₂ sin(θ₁ + θ₂) ]
    [  ℓ₁ cos θ₁ + ℓ₂ cos(θ₁ + θ₂)     ℓ₂ cos(θ₁ + θ₂) ]   (5.1.8)

Dividing both sides of eq.(5.1.5) by the infinitesimal time increment dt yields the relationship
between the joint velocities and the end-effecter velocity:

dx/dt = J dq/dt,   or   v_e = J q̇   (5.1.9)
Thus the Jacobian determines the velocity relationship between the joints and the end-effecter.
The Jacobian plays an important role in the analysis, design, and control of robotic
systems. It will be used repeatedly in the following chapters. It is worth examining basic
properties of the Jacobian, which will be used throughout this book.
We begin with dividing the 2-by-2 Jacobian of eq.(5.1.8) into two column vectors:
J = ( J₁   J₂ ),   J₁, J₂ ∈ ℝ^(2×1)   (5.2.1)

Then the end-effecter velocity of eq.(5.1.9) can be written as

v_e = J₁ θ̇₁ + J₂ θ̇₂   (5.2.2)
The first term on the right-hand side accounts for the end-effecter velocity induced by the first
joint only, while the second term represents the velocity resulting from the second joint motion
only. The resultant end-effecter velocity is given by the vectorial sum of the two. Each column
vector of the Jacobian matrix represents the end-effecter velocity generated by the corresponding
joint moving at a unit velocity when all other joints are immobilized.
Figure 5.2.1 illustrates the column vectors J₁, J₂ of the 2 dof robot arm in the two-
dimensional space. Vector J₂, given by the second column of eq.(5.1.8), points in the direction
perpendicular to link 2. Note, however, that vector J₁ is not perpendicular to link 1 but is
perpendicular to the line OE, the line from joint 1 to the endpoint E. This is because J₁ represents the
endpoint velocity induced by joint 1 when joint 2 is immobilized. In other words, links 1 and 2
are rigidly connected, becoming a single rigid body of length OE, and J₁ is the tip velocity
of the link OE.
Figure 5.2.1 Jacobian column vectors J₁ and J₂ of the two dof planar arm
In general, each column vector of the Jacobian represents the end-effecter velocity and
angular velocity generated by the individual joint velocity while all other joints are immobilized.
Let ṗ be the end-effecter velocity and angular velocity, or the end-effecter velocity for short, and
Ji be the i-th column of the Jacobian. The end-effecter velocity is given by a linear combination
of the Jacobian column vectors weighted by the individual joint velocities:

ṗ = J1 q̇1 + ⋯ + Jn q̇n     (5.2.3)
where n is the number of active joints. The geometric interpretation of the column vectors is that
Ji is the end-effecter velocity and angular velocity when all the joints other than joint i are
immobilized and only the i-th joint is moving at a unit velocity.
Exercise Consider the two-dof articulated robot shown in Figure 5.2.1 again. This time we
use “absolute” joint angles measured from the positive x-axis, as shown in Figure 5.2.2. Note that
angle θ2 is measured from the fixed frame, i.e. the x-axis, rather than a relative frame, e.g. link 1.
Obtain the 2-by-2 Jacobian and illustrate the two column vectors on the xy plane. Discuss the
result in comparison with the previous case shown in Figure 5.2.1.
Figure 5.2.2 Two-dof robot arm described by absolute joint angles
Note that the elements of the Jacobian are functions of joint displacements, and thereby
vary with the arm configuration. As expressed in eq.(5.1.8), the partial derivatives,
∂xe/∂θi, ∂ye/∂θi, are functions of θ1 and θ2. Therefore, the column vectors J1, J2 vary
depending on the arm posture. Remember that the end-effecter velocity is given by the linear
combination of the Jacobian column vectors J1, J2. Therefore, the resultant end-effecter velocity
varies depending on the direction and magnitude of the Jacobian column vectors J1, J2 spanning
the two dimensional space. If the two vectors point in different directions, the whole two-
dimensional space is covered with the linear combination of the two vectors. That is, the end-
effecter can be moved in an arbitrary direction with an arbitrary velocity. If, on the other hand,
the two Jacobian column vectors are aligned, the end-effecter cannot be moved in an arbitrary
direction. As shown in Figure 5.2.3, this may happen for particular arm postures where the two
links are fully contracted or extended. These arm configurations are referred to as singular
configurations. Accordingly, the Jacobian matrix becomes singular at these singular
configurations. Using the determinant of a matrix, this condition is expressed as
det J = 0     (5.2.4)

In fact, the Jacobian degenerates at the singular configurations, where joint 2 is 0 or 180
degrees. Substituting θ2 = 0, π into eq.(5.1.8) yields

For θ2 = 0:   J = [ −(ℓ1+ℓ2) sinθ1   −ℓ2 sinθ1 ]
                  [  (ℓ1+ℓ2) cosθ1    ℓ2 cosθ1 ]

For θ2 = π:   J = [ −(ℓ1−ℓ2) sinθ1    ℓ2 sinθ1 ]
                  [  (ℓ1−ℓ2) cosθ1   −ℓ2 cosθ1 ]     (5.2.5)

Note that in both cases the two column vectors point in the same direction and thereby the determinant becomes
zero.
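A small sketch of the singularity test of eq.(5.2.4): since det J = ℓ1 ℓ2 sinθ2 for this arm, the determinant vanishes exactly at θ2 = 0 and θ2 = π. The tolerance and test angles below are illustrative choices.

import numpy as np

def jacobian(theta1, theta2, l1=1.0, l2=1.0):
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def is_singular(theta1, theta2, l1=1.0, l2=1.0, tol=1e-6):
    """Singularity test of eq.(5.2.4): det J = l1 * l2 * sin(theta2) vanishes."""
    return abs(np.linalg.det(jacobian(theta1, theta2, l1, l2))) < tol

# det J depends only on theta2: zero at theta2 = 0 (extended) and theta2 = pi (contracted).
for theta2 in (0.0, np.pi / 2, np.pi):
    print(theta2, is_singular(0.7, theta2))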
Figure 5.2.3 Jacobian column vectors at a non-singular configuration and at the two singular configurations (A: arm fully extended, B: arm fully contracted)
Now that we know the basic properties of the Jacobian, we are ready to formulate the
inverse kinematics problem for obtaining the joint velocities that allow the end-effecter to move
at a given desired velocity. For the two dof articulated robot, the problem is to find the joint
velocities q̇ = (θ̇1, θ̇2)ᵀ for the given end-effecter velocity ve = (vx, vy)ᵀ. If the arm
configuration is not singular, this can be obtained by taking the inverse of the Jacobian matrix in
eq.(5.1.9),

q̇ = J⁻¹ ve     (5.3.1)
Note that the solution is unique. Unlike the inverse kinematics problem discussed in the previous
chapter, the differential kinematics problem has a unique solution as long as the Jacobian is non-
singular.
The above solution determines how the end-effecter velocity ve must be decomposed, or
resolved, to individual joint velocities. If the controls of the individual joints regulate the joint
velocities so that they can track the resolved joint velocities q̇, the resultant end-effecter velocity
will be the desired one ve. This control scheme is called Resolved Motion Rate Control, attributed
to Daniel Whitney (1969). Since the elements of the Jacobian matrix are functions of joint
displacements, the inverse Jacobian varies depending on the arm configuration. This means that
although the desired end-effecter velocity is constant, the joint velocities are not. Coordination is
thus needed among the joint velocity control systems in order to generate a desired motion at the
end-effecter.
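The sketch below shows one possible implementation of a resolved motion rate control loop for the two-dof arm: at every control cycle the desired endpoint velocity is resolved into joint velocities by eq.(5.3.1) and the joint commands are integrated. The control period, starting configuration, and desired velocity are illustrative assumptions.

import numpy as np

def jacobian(q, l1=1.0, l2=1.0):
    t1, t2 = q
    return np.array([[-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
                     [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)]])

def resolved_motion_rate_step(q, v_desired, dt):
    """One control cycle: resolve v_e into joint velocities (eq. 5.3.1) and integrate."""
    q_dot = np.linalg.solve(jacobian(q), v_desired)   # ill-conditioned near a singularity
    return q + q_dot * dt, q_dot

q = np.array([0.5, 1.2])               # illustrative starting configuration [rad]
v_desired = np.array([0.1, 0.0])       # constant desired endpoint velocity [m/s]
for _ in range(100):                   # 100 cycles of a 10 ms control period
    q, q_dot = resolved_motion_rate_step(q, v_desired, dt=0.01)
print(q, q_dot)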
Example Consider the two dof articulated robot arm again. We want to move the endpoint
of the robot at a constant speed along a path starting at point A on the x-axis, (+2, 0), going around
the origin through points B (+H, 0) and C (0, +H), and reaching the final point D (0, +2) on the y-axis.
See Figure 5.3.1. For simplicity each arm link is of unit length. Obtain the profiles of the
individual joint velocities as the end-effecter tracks the path at the constant speed.
Solving eq.(5.3.1) with the Jacobian of eq.(5.1.8) for unit link lengths yields

θ̇1 = [ vx cos(θ1+θ2) + vy sin(θ1+θ2) ] / sinθ2     (5.3.2)

θ̇2 = −[ vx {cosθ1 + cos(θ1+θ2)} + vy {sinθ1 + sin(θ1+θ2)} ] / sinθ2     (5.3.3)
Figure 5.3.1 Endpoint path from A (2, 0) to D (0, 2), passing near the origin through B and C; the arm is at a singular configuration at both A and D
Figure 5.3.2 shows the resolved joint velocities θ̇1, θ̇2 computed along the specified
trajectory. Note that the joint velocities are extremely large near the initial and final points, and
are unbounded at points A and D. These are at the arm’s singular configurations, θ2 = 0. As the
end-effecter gets close to the origin, the velocity of the first joint becomes very large in order to
quickly turn the arm around from point B to C. At these configurations, the second joint is almost
–180 degrees, meaning that the arm is near singularity. This result agrees with the singularity
condition using the determinant of the Jacobian:

det J = sinθ2 = 0

In eqs.(2) and (3) above, the numerators are divided by sinθ2, the determinant of the Jacobian.
Therefore, the joint velocities θ̇1, θ̇2 blow up as the arm configuration gets close to the singular
configuration.
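To reproduce the qualitative behavior of Figure 5.3.2 numerically, the sketch below evaluates eqs.(5.3.2) and (5.3.3) at a few points as the endpoint moves from A toward the origin along the x-axis. The inverse-kinematics branch (elbow-up) and the sample points are illustrative assumptions.

import numpy as np

def joint_rates(theta1, theta2, vx, vy):
    """Resolved joint velocities of eqs. (5.3.2) and (5.3.3) for unit link lengths."""
    s2 = np.sin(theta2)                    # equals det J; vanishes at the singularities
    t1_dot = (vx*np.cos(theta1+theta2) + vy*np.sin(theta1+theta2)) / s2
    t2_dot = -(vx*(np.cos(theta1) + np.cos(theta1+theta2))
               + vy*(np.sin(theta1) + np.sin(theta1+theta2))) / s2
    return t1_dot, t2_dot

def inverse_kinematics(x, y):
    """One branch (elbow-up) of the inverse kinematics for unit link lengths."""
    c2 = (x**2 + y**2 - 2.0) / 2.0
    theta2 = np.arccos(np.clip(c2, -1.0, 1.0))
    theta1 = np.arctan2(y, x) - np.arctan2(np.sin(theta2), 1.0 + np.cos(theta2))
    return theta1, theta2

# Move inward along the x-axis from A(2, 0) toward the origin at 0.1 m/s.
# theta2 = 0 at point A, so the joint rates grow without bound near the start of the path.
for x in (1.999, 1.9, 1.5, 1.0, 0.5):
    t1, t2 = inverse_kinematics(x, 0.0)
    print(x, joint_rates(t1, t2, vx=-0.1, vy=0.0))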
Figure 5.3.2 Joint velocity profiles for tracking the trajectory in Figure 5.3.1
Furthermore, the arm’s behavior near the singular points can be analyzed by substituting
θ2 = 0, π into the Jacobian, as obtained in eq.(5.2.5). For ℓ1 = ℓ2 = 1 and θ2 = 0, the Jacobian
column vectors reduce to vectors pointing in the same direction:

J1 = (−2 sinθ1, 2 cosθ1)ᵀ,   J2 = (−sinθ1, cosθ1)ᵀ,   for θ2 = 0     (5.3.5)
As illustrated in Figure 5.2.3 (singular configuration A), both joints θ1, θ2 generate the endpoint
velocity along the same direction. Note that no endpoint velocity can be generated in the direction
perpendicular to the aligned arm links. For θ2 = π,

J1 = (0, 0)ᵀ,   J2 = (sinθ1, −cosθ1)ᵀ,   for θ2 = π     (5.3.6)

The first joint cannot generate any endpoint velocity, since the arm is fully contracted. See
singular configuration B in Figure 5.2.3.
At a singular configuration, there is at least one direction in which the robot cannot
generate a non-zero velocity at the end-effecter. This agrees with the previous discussion; the
Jacobian is degenerate at a singular configuration, and the linear combination of the Jacobian
column vectors cannot span the whole space.
Exercise 5.2
A three-dof spatial robot arm is shown in the figure below. The robot has three revolute
joints that allow the endpoint to move in the three dimensional space. However, this robot
mechanism has singular points inside the workspace. Analyze the singularity, following the
procedure below.
Step 1 Obtain each column vector of the Jacobian matrix by considering the endpoint velocity
created by each of the joints while immobilizing the other joints.
Step 2 Construct the Jacobian by concatenating the column vectors, and set the determinant of
the Jacobian to zero for singularity: det J = 0.
Step 3 Find the joint angles that make det J = 0.
Step 4 Show the arm posture that is singular. Show where in the workspace it becomes singular.
For each singular configuration also show in which direction the endpoint cannot have a non-zero
velocity.
Figure for Exercise 5.2: three-dof spatial robot arm with revolute joints 1, 2, 3, link lengths ℓ1 and ℓ2, and endpoint coordinates (xe, ye, ze)
We have seen in this chapter that singular configurations exist for many robot
mechanisms. Sometimes such singular configurations exist in the middle of the workspace,
seriously degrading mobility and dexterity of the robot. At a singular point the robot cannot move
in certain directions with a non-zero velocity. To overcome this difficulty, several methods can be
considered. One is to plan a trajectory of the robot motion such that it does not pass through singular
configurations. Another method is to add more degrees of freedom, so that even when some
degrees of freedom are lost at a certain configuration, the robot can maintain the degrees of
freedom necessary for the task. Such a robot is referred to as a redundant robot. In this section we will discuss
singularity and redundancy, and obtain general properties of differential motion for general n
degrees of freedom robots.
As studied in Section 5.3, a unique solution exists to the differential kinematic equation,
(5.1.15), if the arm configuration is non-singular. However, when a planar (spatial) robot arm has
more than three (six) degrees of freedom, we can find an infinite number of solutions that provide
the same motion at the end-effecter. Consider for instance the human arm, which has seven
degrees of freedom excluding the joints at the fingers. When the hand is placed on a desk and
fixed in its position and orientation, the elbow position can still vary continuously without
moving the hand. This implies that a certain ratio of joint velocities exists that does not cause any
velocity at the hand. Such joint velocities do not contribute to the resultant
endpoint velocity; even if they are superimposed on other joint velocities, the
resultant end-effecter velocity is the same. Consequently, we can find different solutions of the
instantaneous kinematic equation for the same end-effecter velocity. In the following, we
investigate the fundamental properties of the differential kinematics when additional degrees of
freedom are involved.
To formulate a differential kinematic equation for a general n degrees-of-freedom robot
mechanism, we begin by modifying the definition of the vector dxe representing the end-effecter
motion. In eq. (5.1.6), dxe was defined as a two-dimensional vector that represents the
infinitesimal translation of an end-effecter. This must be extended to a general m-dimensional
vector. For planar motion m is at most three, and for spatial motion at most six. However, the
number of variables that we need to specify for performing a task is not necessarily three or six.
In arc welding, for example, only five independent variables of torch motion need be controlled.
Since the welding torch is usually symmetric about its centerline, we can locate the torch with an
arbitrary orientation about the centerline. Thus five degrees of freedom are sufficient to perform
arc welding. In general, we describe the differential end-effecter motion by m independent
variables dp that must be specified to perform a given task.
Then the differential kinematic equation for a general n degree-of-freedom robot is given by

dp = J dq     (5.4.2)

or, dividing both sides by the time increment dt,

ṗ = J q̇     (5.4.3)

Equation (5.4.3) can be regarded as a linear mapping from the n-dimensional vector space Vn to the
m-dimensional space Vm. To characterize the solution, let us interpret this equation using
the linear mapping diagram shown in Figure 5.4.1. The subspace R(J) in the figure is the range
space of the linear mapping. The range space represents all the possible end-effecter velocities
that can be generated by the n joints at the present arm configuration. If J is of full
row rank, the range space covers the entire vector space Vm. Otherwise, there exists at least one
direction in which the end-effecter cannot be moved with non-zero velocity. The subspace N(J)
of Figure 5.4.1 is the null space of the linear mapping. Any element in this subspace is mapped
into the zero vector in Vm. Therefore, any joint velocity vector q that belongs to the null space
does not produce any velocity at the end-effecter. Recall the human arm discussed before. The
elbow can move without moving the hand. Joint velocities for this motion are involved in the null
space, since no end-effecter motion is induced. If the Jacobian is of full rank, the dimension of the
null space, dim N(J), is the same as the number of redundant degrees of freedom (n-m). When the
Jacobian matrix is degenerate, i.e. not of full rank, the dimension of the range space, dim R(J),
decreases, and at the same time the dimension of the null space increases by the same amount.
The sum of the two is always equal to n:

dim R(J) + dim N(J) = n
Let q̇* be a particular solution of eq.(5.4.3) and q̇0 be a vector in the null space;
then any vector of the form q̇ = q̇* + k q̇0 is also a solution of eq.(5.4.3), where k is an arbitrary
scalar quantity. Namely,

J (q̇* + k q̇0) = J q̇* + k J q̇0 = J q̇* = ṗ

Since the second term k q̇0 can be chosen arbitrarily within the null space, an infinite number of
solutions exist for the linear equation, unless the dimension of the null space is 0. The null space
accounts for the arbitrariness of the solutions. The general solution to the linear equation involves
the same number of arbitrary parameters as the dimension of the null space.
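For a redundant arm the pseudoinverse provides one particular solution q̇* and the associated null-space projector generates the self-motion term k q̇0 discussed above. The sketch below illustrates this for a planar three-link arm whose endpoint position (m = 2) is controlled by three joints (n = 3); the arm, its Jacobian, and all numerical values are illustrative assumptions, not taken from the text.

import numpy as np

def jacobian_3link(q, lengths=(1.0, 1.0, 1.0)):
    """Position Jacobian (2 x 3) of a planar three-link arm: an illustrative redundant
    case with m = 2 endpoint coordinates and n = 3 joints, so dim N(J) = 1 away from
    singularities."""
    J = np.zeros((2, 3))
    cumulative = np.cumsum(q)                   # theta1, theta1+theta2, theta1+theta2+theta3
    for i in range(3):
        # d(xe)/d(theta_i) and d(ye)/d(theta_i) of the planar forward kinematics
        J[0, i] = -sum(l * np.sin(a) for l, a in zip(lengths[i:], cumulative[i:]))
        J[1, i] =  sum(l * np.cos(a) for l, a in zip(lengths[i:], cumulative[i:]))
    return J

q = np.array([0.3, 0.6, -0.4])                  # illustrative configuration [rad]
v_e = np.array([0.05, 0.10])                    # desired endpoint velocity [m/s]

J = jacobian_3link(q)
J_pinv = np.linalg.pinv(J)                      # minimum-norm particular solution
q_dot_particular = J_pinv @ v_e
N = np.eye(3) - J_pinv @ J                      # projector onto the null space N(J)
q_dot_null = N @ np.array([1.0, 0.0, 0.0])      # a self-motion (null-space) velocity

# Both choices produce the same endpoint velocity: J q_dot_null is (numerically) zero.
print(J @ q_dot_particular, J @ (q_dot_particular + 2.0 * q_dot_null))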
Figure 5.4.1 Linear mapping diagram of the differential kinematic equation: joint velocities q̇ ∈ Vn are mapped by J onto end-effecter velocities ṗ ∈ Vm; R(J) is the range space and N(J) the null space
Chapter 6
Statics
Robots physically interact with the environment through mechanical contacts. Mating
work pieces in a robotic assembly line, manipulating an object with a multi-fingered hand, and
negotiating a rough terrain in leg locomotion are just a few examples of mechanical interactions.
All of these tasks entail controls of the contacts and interference between the robot and the
environment. Force and moment acting between the robot end-effecter and the environment must
be accommodated in order to control the interactions. In this chapter we will analyze the force
and moment that act on the robot when it is at rest.
Consider the free body diagram of an individual link (Figure 6.1.1). The balance of linear forces acting on link i, including the coupling forces exerted by the adjacent links and gravity, is given by

f i−1,i − f i,i+1 + mi g = 0,   i = 1, …, n     (6.1.1)

Note that all the vectors are defined with respect to the base coordinate system O-xyz.
Next we derive the balance of moments. The moment applied to link i by link i-1 is
denoted Ni-1,i, and therefore the moment applied to link i by link i+1 is –Ni,i+1. Furthermore, the
linear forces fi-1,i and –fi,i+1 also cause moments about the centroid Ci. The balance of moments
with respect to the centroid Ci is thus given by
N i−1,i − N i,i+1 − (r i−1,i + r i,Ci) × f i−1,i + (−r i,Ci) × (−f i,i+1) = 0,   i = 1, …, n     (6.1.2)
where ri-1,i is the 3x1 position vector from point Oi-1 to point Oi with reference to the base
coordinate frame, and ri,Ci represents the position vector from point Oi to the centroid Ci.
Figure 6.1.1 Free body diagram of link i, showing the coupling forces and moments from the adjacent links, gravity, and the actuator torque τi
Figure 6.1.2 Force and moment that the base link exerts on link 1 (a), and the ones that the
environment exerts on the end-effecter, the final link
The above equations can be derived for all the link members except for the base link, i.e.
i=1,2, …, n. This allows us to form 2n simultaneous equations of 3x1 vectors. The number of
coupling forces and moments involved is 2(n+1). Therefore two of the coupling forces and
moments must be specified; otherwise the equations cannot be solved. The final coupling force
and moment, f n , n +1 and N n , n +1 , are the force and moment that the end-effecter applies to the
environment. It is this pair of force and moment that the robot needs to accommodate in order to
perform a given task. Thus, we specify this pair of coupling force and moment, and solve the
simultaneous equations. For convenience we combine the force f n , n +1 and the moment N n , n +1 ,
to define the following six-dimensional vector:
F = [ f n,n+1 ]
    [ N n,n+1 ]     (6.1.3)
We call the vector F the endpoint force and moment vector, or the endpoint force for
short.
For a prismatic joint, such as the (j+1)-st joint illustrated in Figure 6.2.1, the actuator generates a
linear force in the direction of the joint axis. Therefore, the joint torque is the component of the linear coupling
force f i−1,i projected onto the joint axis bi−1:

τi = b i−1ᵀ · f i−1,i     (6.2.2)

Note that, although we use the same notation as that of a revolute joint, the scalar quantity τi has
the unit of a linear force for a prismatic joint. To unify the notation we use τi for both types of
joints, and call it a joint torque regardless of the type of joint.
Figure 6.2.1 Coupling forces and moments, joint torques τi and τi+1, and joint axis vectors bi−1, bi at a revolute joint and a prismatic joint
Collecting the individual joint torques, we define the joint torque vector

τ = (τ1, τ2, …, τn)ᵀ     (6.2.3)
The joint torque vector collectively represents all the actuators’ torque inputs to the linkage
system. Note that all the other components of the coupling forces and moments are borne by the
mechanical structure of the joints. Therefore, the constraint forces and moments, which are irrelevant to the
energy formulation, have been separated from the net energy inputs to the linkage system.
In the free body diagram the individual links are disjointed, leaving constraint forces and
moments at both sides of the link. The freed links are allowed to move in any direction. In the
energy formulation, we describe the link motion using independent variables alone. Remember
that in a serial-link open kinematic chain the joint coordinates q = (q1, …, qn)ᵀ are a complete and
independent set of generalized coordinates that uniquely locate the linkage system with
independent variables. Therefore, these variables conform to the geometric and kinematic
constraints. We use these joint coordinates in the energy-based formulation.
The explicit relationship between the n joint torques and the endpoint force F is given by
the following theorem:
Theorem 6.1
Consider an n degree-of-freedom, serial link robot having no friction at the joints. The joint
torques τ ∈ ℜn×1 that are required for bearing an arbitrary endpoint force F ∈ ℜ6×1 are given by

τ = Jᵀ · F     (6.2.4)
Note that the joint torques in the above expression do not account for gravity and friction.
They are the net torques that balance the endpoint force and moment. We call τ of eq.(6.2.4) the
equivalent joint torques associated with the endpoint force F.
Proof
We prove the theorem by using the Principle of Virtual Work. Consider virtual
displacements at individual joints, δq = (δq1, …, δqn)ᵀ, and at the end-effecter, δp, consisting of
the virtual translation and virtual rotation of the end-effecter, as shown in Figure 6.2.2. Virtual
displacements are arbitrary infinitesimal displacements of a mechanical system that conform to
the geometric constraints of the system. Virtual displacements are different from actual
displacements, in that they must only satisfy geometric constraints and do not have to meet other
laws of motion. To distinguish the virtual displacements from the actual displacements, we use
the Greek letter δ rather than the roman d.
We assume that the joint torques τ = (τ1, τ2, …, τn)ᵀ and the endpoint force and moment, −F, act on
the serial linkage system, while the joints and the end-effecter are moved in the directions
geometrically admissible. Then, the virtual work done by the forces and moments is given by

δWork = τᵀ δq − Fᵀ δp     (6.2.6)
According to the principle of virtual work, the linkage system is in equilibrium if, and only if, the
virtual work δWork vanishes for arbitrary virtual displacements that conform to the geometric
constraints. Note that the virtual displacements δq and δp are not independent, but are related by
the Jacobian matrix:

δp = J · δq     (6.2.5)

The kinematic structure of the robot mechanism dictates that the virtual displacement δp is
completely dependent upon the virtual displacement of the joints, δq. Substituting eq.(6.2.5) into
eq.(6.2.6) yields
δWork = τᵀ δq − Fᵀ J · δq = (τ − Jᵀ F)ᵀ · δq     (6.2.7)
Note that the vector of the virtual displacements δq consists of all independent variables, since
the joint coordinates of an open kinematic chain are generalized coordinates that are complete
and independent. Therefore, for the above virtual work to vanish for arbitrary virtual
displacements we must have:
τ = Jᵀ F
This is eq.(6.2.4), and the theorem has been proven.
The above theorem has broad applications in robot mechanics, design, and control. We
will use it repeatedly in the following chapters.
Example 6.1
Figure 6.2.1 shows a two-dof articulated robot having the same link dimensions as the
previous examples. The robot is interacting with the environment surface in a horizontal plane.
Obtain the equivalent joint torques τ = (τ1, τ2)ᵀ needed for pushing the surface with an endpoint
force of F = (Fx, Fy)ᵀ. Assume no friction.
The Jacobian matrix relating the end-effecter coordinates xe and ye to the joint
displacements θ1 and θ 2 has been obtained in the previous chapter:
[ τ1 ]   [ −ℓ1 sinθ1 − ℓ2 sin(θ1+θ2)    ℓ1 cosθ1 + ℓ2 cos(θ1+θ2) ] [ Fx ]
[ τ2 ] = [ −ℓ2 sin(θ1+θ2)                ℓ2 cos(θ1+θ2)            ] [ Fy ]     (6.2.8)
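A quick numerical check of eq.(6.2.8) (illustrative values only): the equivalent joint torques are obtained by multiplying the transposed Jacobian of the two-dof arm by the endpoint force.

import numpy as np

def jacobian(theta1, theta2, l1=1.0, l2=1.0):
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def equivalent_joint_torques(theta1, theta2, F, l1=1.0, l2=1.0):
    """Equivalent joint torques tau = J^T F (eq. 6.2.4); gravity and friction ignored."""
    return jacobian(theta1, theta2, l1, l2).T @ F

# Illustrative case: push with 10 N along +x at an arbitrary arm configuration.
F = np.array([10.0, 0.0])
print(equivalent_joint_torques(0.4, 1.1, F))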
Figure: Two-dof articulated robot pushing the environment surface with the endpoint force (Fx, Fy)ᵀ
We have found that the equivalent joint torques are related to the endpoint force by the
Jacobian matrix, which is the same matrix that relates the infinitesimal joint displacements to the
end-effecter displacement. Thus, the static force relationship is closely related to the differential
kinematics. In this section we discuss the physical meaning of this observation.
To interpret the similarity between differential kinematics and statics, we can use the
linear mapping diagram of Figure 5.4.1. Recall that the differential kinematic equation can be
regarded as a linear mapping when the Jacobian matrix is fixed at a given robot configuration.
Figure 6.3.1 reproduces Figure 5.4.1 and completes it with a similar diagram associated with the
static analysis. As before, the range space R(J) represents the set of all the possible end-effecter
velocities generated by joint motions. When the Jacobian matrix is degenerate or the robot
configuration is singular, the range space does not span the whole vector space. Namely, there
exists a direction in which the end-effecter cannot move with a non-zero velocity. See the
subspace S2 in the figure. The null space N(J), on the other hand, represents the set of joint
velocities that do not produce any velocity at the end-effecter. If the null space contains a non-
zero element, the differential kinematic equation has an infinite number of solutions that cause
the same end-effecter velocity.
Figure 6.3.1 Linear mapping diagrams of differential kinematics (upper half: q̇ ∈ Vn mapped by J to ṗ ∈ Vm) and statics (lower half: F ∈ Vm mapped by Jᵀ to τ ∈ Vn), with the subspaces S1 ⊥ N(J), S2 ⊥ R(J), S3 ⊥ R(Jᵀ), and S4 ⊥ N(Jᵀ)
The lower half of Figure 6.3.1 is the linear mapping associated with the static force
relationship given by eq.(6.2.4). Unlike differential kinematics, the mapping of static forces is
given by the transpose of the Jacobian, generating a mapping from the m-dimensional vector
space Vm, associated with the Cartesian coordinates of the end-effecter, to the n-dimensional
vector space Vn, associated with the joint coordinates. Therefore the joint torques τ are always
determined uniquely for any arbitrary endpoint force F. However, for given joint torques, a
balancing endpoint force does not always exist. As in the case of the differential kinematics, let us
define the null space N(JT) and the range space R(JT) of the static force mapping. The null space
N(JT) represents the set of all endpoint forces that do not require any torques at the joints to bear
the corresponding load. In this case the endpoint force is borne entirely by the structure of the
linkage mechanism, i.e. constraint forces. The range space R(JT), on the other hand, represents the
set of all the possible joint torques that can balance the endpoint forces.
The ranges and null spaces of J and JT are closely related. According to the rules of linear
algebra, the null space N(J) is the orthogonal complement of the range space R(JT). Namely, if a
non-zero n-vector x is in N(J) , it cannot also belong to R(JT), and vice-versa. If we denote by S1
the orthogonal complement of N(J), then the range space R(JT) is identical to S1, as shown in the
figure. Also, space S3, i.e., the orthogonal complement of R(JT) is identical to N(J). What this
implies is that, in the direction in which joint velocities do not cause any end-effecter velocity, the
joint torques cannot be balanced with any endpoint force. In order to maintain a stationary
configuration, the joint torques in this space must be zero.
There is a similar correspondence in the end-effecter coordinate space Vm. The range
space R(J) is the orthogonal complement to the null space N(JT). Hence, the subspace S2 in the
figure is identical to N(JT), and the subspace S4 is identical to R(J). Therefore, no joint torques are
required to balance the end point force when the external force acts in the direction in which the
end-effecter cannot be moved by joint movements. Also, when the external endpoint force is
applied in the direction along which the end-effecter can move, the external force must be borne
entirely by the joint torques. When the Jacobian matrix is degenerate or the arm is in a singular
configuration, the null space N(JT) has a non-zero dimension, and the external force can be borne
in part by the mechanical structure. Thus, differential kinematics and statics are closely related.
This relationship is referred to as the duality of differential kinematics and statics.
The relationship between joint torques and the endpoint force obtained in Theorem 6.1
can be extended to a class of parallel-link mechanisms with closed kinematic-chains. It can also
be extended to multi-fingered hands, leg locomotion, and other robot mechanisms having closed
kinematic chains. In this section we discuss classes of closed kinematic chains based on the
principle of virtual work.
We begin by revisiting the five-bar-link planar robot shown in Figure 6.4.1. This robot
has two degrees of freedom, comprising two active joints, Joints 1 and 3, and three passive joints,
Joints 2, 4, and 5. Therefore the virtual work associated with the endpoint force and joint torques
is given by
We assume no friction at the joints. Therefore the three passive joints cannot bear any torque load
about their joint axis. Substituting τ 2 = τ 4 = τ 5 = 0 into the above yields
For any given configuration of the robot, the virtual displacements of the end-effecter are
uniquely determined by the virtual displacements of Joints 1 and 3. In fact, the former is related
to the latter via the Jacobian matrix:

δp = J δq     (6.4.4)

where

δq = (δθ1, δθ3)ᵀ,   δp = (δxe, δye)ᵀ     (6.4.5)

Substituting this relation into the virtual work and requiring it to vanish for arbitrary δq implies

τ = Jᵀ · F     (6.4.6)
Corollary 6.1
Consider an n degree-of-freedom robot mechanism with n active joints. Assume that all
the joints are frictionless, and that, for a given configuration of the robot mechanism, there exists
a unique Jacobian matrix relating the virtual displacements of its end-effecter, δp ∈ ℜm×1, to the
virtual displacements of the active joints, δq ∈ ℜn×1:

δp = J δq     (6.4.7)

Then the equivalent joint torques τ ∈ ℜn×1 needed to bear an arbitrary endpoint force F ∈ ℜm×1 are given
by

τ = Jᵀ · F     (6.4.8)
Note that the joint coordinates associated with the active joints are not necessarily
generalized coordinates that uniquely locate the system. For example, the arm configuration of
the five-bar-link robot shown in Figure 6.4.1 is not uniquely determined with joint angles θ1 and
θ3 alone. There are two configurations for given θ1 and θ3 . The corollary requires the
differential relation to be defined uniquely in the vicinity of the given configuration.
If an n degree-of-freedom robot system has more than n active joints, or fewer than n active
joints, the above corollary does not apply. These are called over-actuated and under-actuated
systems, respectively. Over-actuated systems are of particular importance in many manipulation
and locomotion applications. In the following we will consider the static relationship among joint
torques and endpoint forces for a class of over-actuated systems.
Figure 6.4.2 shows a two-fingered hand manipulating an object within a plane. Note that
both fingers are connected at the fingertips holding the object. While holding the object, the
system has three degrees of freedom. Since each finger has two active joints, the total number of
active joints is four. Therefore the system is over-actuated.
Using the notation shown in the figure, the virtual work is given by
Note that only three virtual displacements of the four joint angles are independent. There exists a
differential relationship between one of the joints, say θ 4 , and the other three due to the
kinematic constraint. Let us write it as
δθ 4 = J c ⋅ δ q (6.5.2)
where δq = (δθ1, δθ2, δθ3)ᵀ are independent, and Jc is the 1x3 Jacobian associated with the
constraint due to the closed kinematic chain. Substituting this equation, together with the Jacobian
relating the end-effecter displacements to the three joint displacements, into eq.(1),
The two-fingered hand is at equilibrium only when the above condition is met. When the external
endpoint force is zero: F=0, we obtain
τ0 = −Jcᵀ τ4     (6.5.5)
This gives a particular combination of joint torques that do not influence the force balance with
the external endpoint load F. The joint torques having this particular proportion generate the
internal force applied to the object, as illustrated in the figure. This internal force is a grasp force
that is needed for performing a task.
Figure 6.5.1 Two-fingered hand grasping an object at point (xA, yA), showing the grasp force, the external endpoint force, and the joint angles θ1 through θ4
Exercise 6.2
Define geometric parameters needed in Figure 6.5.1, and obtain the two Jacobian
matrices associated with the two-fingered hand holding an object. Furthermore, obtain the grasp
force using the Jacobian matrices and the joint torques.
Chapter 7
Dynamics
In this chapter, we analyze the dynamic behavior of robot mechanisms. The dynamic
behavior is described in terms of the time rate of change of the robot configuration in relation to
the joint torques exerted by the actuators. This relationship can be expressed by a set of
differential equations, called equations of motion, that govern the dynamic response of the robot
linkage to input joint torques. In the next chapter, we will design a control system on the basis of
these equations of motion.
Two methods can be used in order to obtain the equations of motion: the Newton-Euler
formulation, and the Lagrangian formulation. The Newton-Euler formulation is derived by the
direct interpretation of Newton's Second Law of Motion, which describes dynamic systems in
terms of force and momentum. The equations incorporate all the forces and moments acting on
the individual robot links, including the coupling forces and moments between the links. The
equations obtained from the Newton-Euler method include the constraint forces acting between
adjacent links. Thus, additional arithmetic operations are required to eliminate these terms and
obtain explicit relations between the joint torques and the resultant motion in terms of joint
displacements. In the Lagrangian formulation, on the other hand, the system's dynamic behavior
is described in terms of work and energy using generalized coordinates. This approach is the
extension of the indirect method discussed in the previous chapter to dynamics. Therefore, all the
workless forces and constraint forces are automatically eliminated in this method. The resultant
equations are generally compact and provide a closed-form expression in terms of joint torques
and joint displacements. Furthermore, the derivation is simpler and more systematic than in the
Newton-Euler method.
The robot’s equations of motion are basically a description of the relationship between
the input joint torques and the output motion, i.e. the motion of the robot linkage. As in
kinematics and in statics, we need to solve the inverse problem of finding the necessary input
torques to obtain a desired output motion. This inverse dynamics problem is discussed in the last
section of this chapter. Efficient algorithms have been developed that allow the dynamic
computations to be carried out on-line in real time.
In this section we derive the equations of motion for an individual link based on the direct
method, i.e. Newton-Euler Formulation. The motion of a rigid body can be decomposed into the
translational motion with respect to an arbitrary point fixed to the rigid body, and the rotational
motion of the rigid body about that point. The dynamic equations of a rigid body can also be
represented by two equations: one describes the translational motion of the centroid (or center of
mass), while the other describes the rotational motion about the centroid. The former is Newton's
equation of motion for a mass particle, and the latter is called Euler's equation of motion.
We begin by considering the free body diagram of an individual link. Figure 7.1.1 shows
all the forces and moments acting on link i. The figure is the same as Figure 6.1.1, which
describes the static balance of forces, except for the inertial force and moment that arise from the
dynamic motion of the link. Let v ci be the linear velocity of the centroid of link i with reference
to the base coordinate frame O-xyz, which is an inertial reference frame. The inertial force is then
given by −mi v̇ci, where mi is the mass of the link and v̇ci is the time derivative of v ci. Based
on D’Alembert’s principle, the equation of motion is then obtained by adding the inertial force to
the static balance of forces in eq.(6.1.1) so that

f i−1,i − f i,i+1 + mi g − mi v̇ci = 0,   i = 1, …, n     (7.1.1)

where, as in Chapter 6, f i−1,i and −f i,i+1 are the coupling forces applied to link i by links i-1 and
i+1, respectively, and g is the acceleration of gravity.
Figure 7.1.1 Free body diagram of link i in motion, including the coupling forces and moments, gravity, the joint torque τi, and the centroidal velocity v ci
Rotational motions are described by Euler's equations. In the same way as for
translational motions, adding “inertial torques” to the static balance of moments yields the
dynamic equations. We begin by describing the mass properties of a single rigid body with
respect to rotations about the centroid. The mass properties are represented by an inertia tensor,
or an inertia matrix, which is a 3 x 3 symmetric matrix defined by
I = [  ∫{(y−yc)² + (z−zc)²} ρ dV      −∫(x−xc)(y−yc) ρ dV           −∫(z−zc)(x−xc) ρ dV        ]
    [ −∫(x−xc)(y−yc) ρ dV              ∫{(z−zc)² + (x−xc)²} ρ dV    −∫(y−yc)(z−zc) ρ dV        ]
    [ −∫(z−zc)(x−xc) ρ dV             −∫(y−yc)(z−zc) ρ dV            ∫{(x−xc)² + (y−yc)²} ρ dV ]
                                                                                        (7.1.2)
where ρ is the mass density, xc , yc , zc are the coordinates of the centroid of the rigid body, and
each integral is taken over the entire volume V of the rigid body. Note that the inertia matrix
varies with the orientation of the rigid body. Although the inherent mass property of the rigid
body does not change when viewed from a frame fixed to the body, its matrix representation
when viewed from a fixed frame, i.e. inertial reference frame, changes as the body rotates.
The inertial torque acting on link i is given by the time rate of change of the angular
momentum of the link at that instant. Let ωi be the angular velocity vector and Ii be the
centroidal inertia tensor of link i; then the angular momentum is given by Ii ωi. Since the inertia
tensor varies as the orientation of the link changes, the time derivative of the angular momentum
includes not only the angular acceleration term Ii ω̇i, but also a term resulting from changes in the
inertia tensor viewed from a fixed frame. This latter term is known as the gyroscopic torque and
is given by ωi × (Ii ωi). Adding these terms to the static balance of moments, eq.(6.1.2), yields

N i−1,i − N i,i+1 − (r i−1,i + r i,Ci) × f i−1,i + (−r i,Ci) × (−f i,i+1) − Ii ω̇i − ωi × (Ii ωi) = 0,   i = 1, …, n     (7.1.3)
using the notations of Figure 7.1.1. Equations (1) and (3) govern the dynamic behavior of an
individual link. The complete set of equations for the whole robot is obtained by evaluating both
equations for all the links, i = 1, …, n.
The Newton-Euler equations we have derived are not in an appropriate form for use in dynamic
analysis and control design. They do not explicitly describe the input-output relationship, unlike
the relationships we obtained in the kinematic and static analyses. In this section, we modify the
Newton-Euler equations so that explicit input-output relations can be obtained. The Newton-Euler
equations involve coupling forces and moments fi−1, i and N i−1, i . As shown in eqs.(6.2.1) and
(6.2.2), the joint torque τi, which is the input to the robot linkage, is included in the coupling force
or moment. However, τi is not explicitly involved in the Newton-Euler equations. Furthermore,
the coupling force and moment also include workless constraint forces, which act internally so
that individual link motions conform to the geometric constraints imposed by the mechanical
structure. To derive explicit input-output dynamic relations, we need to separate the input joint
torques from the constraint forces and moments. The Newton-Euler equations are described in
terms of centroid velocities and accelerations of individual arm links. Individual link motions,
however, are not independent, but are coupled through the linkage. They must satisfy certain
kinematic relationships to conform to the geometric constraints. Thus, individual centroid
position variables are not appropriate for output variables since they are not independent.
The appropriate form of the dynamic equations therefore consists of equations described
in terms of all independent position variables and input forces, i.e., joint torques, that are
explicitly involved in the dynamic equations. Dynamic equations in such an explicit input-output
form are referred to as closed-form dynamic equations. As discussed in the previous chapter, joint
displacements q are a complete and independent set of generalized coordinates that locate the
whole robot mechanism, and joint torques τ are a set of independent inputs that are separated
from constraint forces and moments. Hence, dynamic equations in terms of joint displacements q
and joint torques τ are closed-form dynamic equations.
Example 7.1
Figure 7.1.1 shows the two dof planar manipulator that we discussed in the previous
chapter. Let us obtain the Newton-Euler equations of motion for the two individual links, and
then derive closed-form dynamic equations in terms of joint displacements θ1 and θ 2 , and joint
torques τ1 and τ2. Since the link mechanism is planar, we represent the velocity of the centroid of
each link by a 2-dimensional vector v ci and the angular velocity by a scalar velocity ωi. We
assume that the centroid of link i is located on the center line passing through adjacent joints at a
distance ℓci from joint i, as shown in the figure. The axis of rotation does not vary for the planar
linkage. The inertia tensor in this case is reduced to a scalar moment of inertia denoted by Ii.
From eqs. (1) and (3), the Newton-Euler equations for link 1 are given by
Note that all vectors are 2 x 1, so that the moment Ni-1,i and the other vector products are scalar
quantities. Similarly, for link 2,

f 1,2 + m2 g − m2 v̇c2 = 0,
N 1,2 − r 1,c2 × f 1,2 − I2 ω̇2 = 0     (7.1.5)
Figure: Two dof planar manipulator with link masses mi, centroidal moments of inertia Ii, link lengths ℓi, and centroid distances ℓci from the joints
To obtain closed-form dynamic equations, we first eliminate the constraint forces and separate
them from the joint torques, so as to explicitly involve the joint torques in the dynamic equations.
For the planar manipulator, the joint torques τ1 and τ2 are equal to the coupling moments:
N i−1,i = τ i , i = 1, 2 (7.1.6)
Next, we rewrite v ci , ωi , and ri ,i +1 using joint displacements θ1 and θ 2 , which are independent
variables. Note that ω2 is the angular velocity relative to the base coordinate frame, while θ 2 is
measured relative to link 1. Then, we have
v c1 = ( −ℓc1 θ̇1 sinθ1,  ℓc1 θ̇1 cosθ1 )ᵀ
Substituting eqs. (9) and (10) along with their time derivatives into eqs. (7) and (8), we obtain the
closed-form dynamic equations in terms of θ1 and θ2:

τ1 = H11 θ̈1 + H12 θ̈2 − h θ̇2² − 2h θ̇1 θ̇2 + G1     (7.1.11-a)
τ2 = H22 θ̈2 + H12 θ̈1 + h θ̇1² + G2     (7.1.11-b)

where

H11 = m1 ℓc1² + I1 + m2 (ℓ1² + ℓc2² + 2 ℓ1 ℓc2 cosθ2) + I2     (7.1.12-a)
H12 = m2 (ℓc2² + ℓ1 ℓc2 cosθ2) + I2     (7.1.12-b)
H22 = m2 ℓc2² + I2     (7.1.12-c)
h = m2 ℓ1 ℓc2 sinθ2     (7.1.12-d)
G1 = m1 ℓc1 g cosθ1 + m2 g {ℓc2 cos(θ1+θ2) + ℓ1 cosθ1}     (7.1.12-e)
G2 = m2 g ℓc2 cos(θ1+θ2)     (7.1.12-f)

The scalar g represents the acceleration of gravity along the negative y-axis.
More generally, the closed-form dynamic equations of an n degree-of-freedom robot take the form

τi = Σ_{j=1}^{n} Hij q̈j + Σ_{j=1}^{n} Σ_{k=1}^{n} hijk q̇j q̇k + Gi,   i = 1, …, n     (7.1.13)
where the coefficients Hij, hijk, and Gi are functions of the joint displacements q1, q2, …, qn. When
external forces act on the robot system, the left-hand side must be modified accordingly.
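The following sketch evaluates the closed-form dynamics of the two-dof arm, i.e. the inverse dynamics τ = f(q, q̇, q̈), using the coefficients listed in eq.(7.1.12). The masses, lengths, and inertias are illustrative values, not parameters given in the text.

import numpy as np

def two_link_inverse_dynamics(q, q_dot, q_ddot, params):
    """Joint torques of the two-link arm from the closed-form equations (7.1.11),
    using the coefficients of eq.(7.1.12). All numerical parameters are illustrative."""
    m1, m2, l1, lc1, lc2, I1, I2, g = params
    t1, t2 = q
    H11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*np.cos(t2)) + I2
    H12 = m2*(lc2**2 + l1*lc2*np.cos(t2)) + I2
    H22 = m2*lc2**2 + I2
    h   = m2*l1*lc2*np.sin(t2)
    G1  = m1*lc1*g*np.cos(t1) + m2*g*(lc2*np.cos(t1 + t2) + l1*np.cos(t1))
    G2  = m2*g*lc2*np.cos(t1 + t2)
    tau1 = H11*q_ddot[0] + H12*q_ddot[1] - h*q_dot[1]**2 - 2*h*q_dot[0]*q_dot[1] + G1
    tau2 = H22*q_ddot[1] + H12*q_ddot[0] + h*q_dot[0]**2 + G2
    return np.array([tau1, tau2])

params = (1.0, 1.0, 1.0, 0.5, 0.5, 0.1, 0.1, 9.81)    # m1, m2, l1, lc1, lc2, I1, I2, g
print(two_link_inverse_dynamics(q=np.array([0.3, 0.6]),
                                q_dot=np.array([0.2, -0.1]),
                                q_ddot=np.array([0.5, 0.5]),
                                params=params))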
In this section, we interpret the physical meaning of each term involved in the closed-
form dynamic equations for the two-dof planar robot.
The last term in each of eqs. (11-a, b), Gi, accounts for the effect of gravity. Indeed, the
terms G1 and G2, given by (12-e, f), represent the moments created by the masses m1 and m2 about
their individual joint axes. The moments depend upon the arm configuration: when the
arm is fully extended along the x-axis, the gravity moments are largest.
Next, we investigate the first terms in the dynamic equations. When the second joint is
immobilized, i.e. θ̇2 = 0 and θ̈2 = 0, the first dynamic equation reduces to τ1 = H11 θ̈1, where the
gravity term is neglected. From this expression it follows that the coefficient H11 accounts for the
moment of inertia seen by the first joint when the second joint is immobilized. The coefficient H11
given by eq. (12-a) is interpreted as the total moment of inertia of both links reflected to the first
joint axis. The first two terms, m1 ℓc1² + I1, in eq. (12-a) represent the moment of inertia of link 1
with respect to joint 1, while the other terms are the contribution from link 2. The inertia of the
second link depends upon the distance L between the centroid of link 2 and the first joint axis, as
illustrated in Figure 7.1.3. The distance L is a function of the joint angle θ2 and is given by

L² = ℓ1² + ℓc2² + 2 ℓ1 ℓc2 cosθ2     (7.1.14)

Using the parallel axes theorem of moment of inertia (Goldstein, 1981), the inertia of link 2 with
respect to joint 1 is m2 L² + I2, which is consistent with the last two terms in eq. (12-a). Note that
the inertia varies with the arm configuration. The inertia is maximum when the arm is fully
extended (θ2 = 0), and minimum when the arm is completely contracted (θ2 = π).
Figure 7.1.3 The distance L from the first joint axis to the centroid of link 2 varies with the joint angle θ2, changing the inertia of link 2 as seen from joint 1
Let us now examine the second terms on the right hand side of eq. (11). Consider the
instant when θ̇1 = θ̇2 = 0 and θ̈1 = 0; then the first equation reduces to τ1 = H12 θ̈2, where the
gravity term is again neglected. From this expression it follows that the second term accounts for
the effect of the second link motion upon the first joint. When the second link is accelerated, the
reaction force and torque induced by the second link act upon the first link. This is clear in the
original Newton-Euler equations (4), where the coupling force −f1,2 and moment −N1,2 from link 2
are involved in the dynamic equation for link 1. The coupling force and moment cause a torque
τint about the first joint axis, where N1,2 and f1,2 are evaluated using eq. (5) for θ̇1 = θ̇2 = 0 and
θ̈1 = 0. This torque agrees with the second term in eq. (11-a). Thus, the second term accounts for
the interaction between the two joints.
The third terms in eq. (11) are proportional to the square of the joint velocities. We
consider the instant when θ̇2 = 0 and θ̈1 = θ̈2 = 0, as shown in Figure 7.1.4-(a). In this case, a
centrifugal force acts upon the second link. Let fcent be the centrifugal force. Its magnitude is
given by

fcent = m2 L θ̇1²     (7.1.16)

where L is the distance between the centroid C2 and the first joint O. The centrifugal force acts in
the direction of the position vector r O,C2. This centrifugal force causes a moment τcent about the
second joint. Using eq. (16), the moment τcent can be computed, and it agrees with the third term
h θ̇1² in eq. (11-b). Thus we conclude that the third term is caused by the centrifugal effect on the
second joint due to the motion of the first joint. Similarly, rotating the second joint at a constant
velocity causes a torque of −h θ̇2² due to the centrifugal effect upon the first joint.
Figure 7.1.4 (a) Centrifugal force fcent acting on link 2 when only joint 1 rotates; (b) Coriolis force fCor acting on link 2 when both joints rotate, described in the moving frame Ob-xbyb attached to the tip of link 1
Finally we discuss the fourth term of eq. (11-a), which is proportional to the product of
the joint velocities. Consider the instant when the two joints rotate at velocities θ̇1 and θ̇2 at the
same time. Let Ob-xbyb be the coordinate frame attached to the tip of link 1, as shown in Figure
7.1.4-(b). Note that the frame Ob-xbyb is parallel to the base coordinate frame at the instant
shown. However, the frame rotates at the angular velocity θ̇1 together with link 1. The mass
centroid of link 2 moves at a velocity of ℓc2 θ̇2 relative to link 1, i.e. the moving coordinate frame
Ob-xbyb. When a mass particle m moves at a velocity of vb relative to a moving coordinate frame
rotating at an angular velocity ω, the mass particle experiences the so-called Coriolis force, given by
−2m (ω × vb). Let fCor be the force acting on link 2 due to the Coriolis effect. The Coriolis force
is given by

fCor = ( 2 m2 ℓc2 θ̇1 θ̇2 cos(θ1+θ2),  2 m2 ℓc2 θ̇1 θ̇2 sin(θ1+θ2) )ᵀ     (7.1.18)
This Coriolis force causes a moment τCor about the first joint; this moment agrees with the fourth
term in eq. (11-a). Since the Coriolis force given by eq. (18) acts in parallel with link 2, the force
does not create a moment about the second joint in this particular case.
Thus, the dynamic equations of a robot arm are characterized by a configuration-
dependent inertia, gravity torques, and interaction torques caused by the accelerations of the other
joints and the existence of centrifugal and Coriolis effects.
d/dt (∂L/∂q̇i) − ∂L/∂qi = Qi,   i = 1, …, n     (7.2.2)

where L = T − U is the Lagrangian, i.e. the difference between the kinetic energy T and the
potential energy U of the system, and Qi is the generalized force corresponding to the generalized
coordinate qi. The generalized forces acting on the system can be identified by considering the
virtual work done by the non-conservative forces.
Figure: Two dof planar manipulator with centroidal velocities v c1, v c2 and angular velocities ω1, ω2
The total kinetic energy stored in the two links moving at linear velocity v ci and angular
velocity ωi at the centroids, as shown in the figure, is given by

T = Σ_{i=1}^{2} ( ½ mi |v ci|² + ½ Ii ωi² )     (7.2.3)
where |v ci| represents the magnitude of the velocity vector. Note that the linear velocities and the
angular velocities are not independent variables, but are functions of the joint angles and joint
angular velocities, i.e. the generalized coordinates and the generalized velocities that locate the
dynamic state of the system uniquely. We need to rewrite the above kinetic energy in terms of
θi and θ̇i. The angular velocities are given by

ω1 = θ̇1,   ω2 = θ̇1 + θ̇2     (7.2.4)

and the squared magnitude of the centroidal velocity of the first link is

|v c1|² = ℓc1² θ̇1²     (7.2.5)
However, the centroidal linear velocity of the second link, v c2, needs more computation. Treating
the centroid C2 as an endpoint and applying the formula for computing the endpoint velocity yield
the centroidal velocity. Let Jc2 be the 2x2 Jacobian matrix relating the centroidal velocity vector
to the joint velocities. Then,

|v c2|² = |Jc2 q̇|² = q̇ᵀ Jc2ᵀ Jc2 q̇     (7.2.6)

where q̇ = (θ̇1, θ̇2)ᵀ. Substituting eqs.(4)~(6) into eq.(3) yields the total kinetic energy in terms of
the joint angles and joint velocities.
Now we are ready to obtain Lagrange’s equations of motion by differentiating the above
kinetic energy and potential energy. For the first joint,
∂L/∂q1 = −∂U/∂q1 = −[ m1 ℓc1 g cosθ1 + m2 g {ℓc2 cos(θ1+θ2) + ℓ1 cosθ1} ] = −G1     (7.2.9)

∂L/∂q̇1 = H11 θ̇1 + H12 θ̇2

d/dt (∂L/∂q̇1) = H11 θ̈1 + H12 θ̈2 + (∂H11/∂θ2) θ̇2 θ̇1 + (∂H12/∂θ2) θ̇2²     (7.2.10)
Substituting the above two equations into eq.(2) yields the same result as eq.(7.1.11-a). The
equation of motion for the second joint can be obtained in the same manner, which is identical to
eq.(7.1.11-b). Thus, the same equations of motion have been obtained based on Lagrangian
Formulation. Note that the Lagrangian Formulation is simpler and more systematic than the
Newton-Euler Formulation. To formulate kinetic energy, velocities must be obtained, but
accelerations are not needed. Remember that the acceleration computation was complex in the
Newton-Euler Formulation, as discussed in the previous section. This acceleration computation is
automatically dealt with in the computation of Lagrange’s equations of motion. The difference
between the two methods is more significant when the degrees of freedom increase, since many
workless constraint forces and moments are present and the acceleration computation becomes
more complex in Newton-Euler Formulation.
The same procedure extends to a general n degree-of-freedom robot. The kinetic energy stored in
the i-th link is

Ti = ½ mi v ciᵀ v ci + ½ ωiᵀ Ii ωi,   i = 1, …, n     (7.2.11)
where ωi and Ii are, respectively, the 3x1 angular velocity vector and the 3x3 inertia matrix of
the i-th link viewed from the base coordinate frame, i.e. the inertial reference frame. The total kinetic
energy stored in the whole robot linkage is then given by

T = Σ_{i=1}^{n} Ti     (7.2.12)

The centroidal linear velocities and the angular velocities are related to the joint velocities by

v ci = JiL q̇,   ωi = JiA q̇     (7.2.13)
where JLi and JAi are, respectively, the 3 x n Jacobian matrices relating the centroid linear
velocity and the angular velocity of the i-th link to joint velocities. Note that the linear and
angular velocities of the i-th link are dependent only on the first i joint velocities, and hence the
last n-i columns of these Jacobian matrices are zero vectors. Substituting eq.(13) into eqs.(11) and
(12) yields
T = ½ Σ_{i=1}^{n} ( mi q̇ᵀ JiLᵀ JiL q̇ + q̇ᵀ JiAᵀ Ii JiA q̇ ) = ½ q̇ᵀ H q̇     (7.2.14)

where H is an n x n matrix given by

H = Σ_{i=1}^{n} ( mi JiLᵀ JiL + JiAᵀ Ii JiA )     (7.2.15)
The matrix H incorporates all the mass properties of the whole robot mechanism, as reflected to
the joint axes, and is referred to as the Multi-Body Inertia Matrix. Note the difference between the
multi-body inertia matrix and the 3 x 3 inertia matrices of the individual links. The former is an
aggregate inertia matrix including the latter as components. The multi-body inertia matrix,
however, has properties similar to those of individual inertia matrices. As shown in eq. (15), the
multi-body inertia matrix is a symmetric matrix, as is the individual inertia matrix defined by eq.
(7.1.2). The quadratic form associated with the multi-body inertia matrix represents kinetic
energy, so does the individual inertia matrix. Kinetic energy is always strictly positive unless the
system is at rest. The multi-body inertia matrix of eq. (15) is positive definite, as are the
individual inertia matrices. Note, however, that the multi-body inertia matrix involves Jacobian
matrices, which vary with linkage configuration. Therefore the multi-body inertia matrix is
configuration-dependent and represents the instantaneous composite mass properties of the whole
linkage at the current linkage configuration. To manifest the configuration-dependent nature of
the multi-body inertia matrix, we write it as H(q), a function of joint coordinates q.
Using the components of the multi-body inertia matrix H={Hij}, we can write the total
kinetic energy in scalar quadratic form:
T = ½ Σ_{i=1}^{n} Σ_{j=1}^{n} Hij q̇i q̇j     (7.2.16)
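The sketch below assembles the multi-body inertia matrix H(q) of eq.(7.2.15) from the link Jacobians. The two-link numerical data (link lengths, centroid locations, inertias) are illustrative assumptions, padded to three-dimensional vectors so that the general formula applies.

import numpy as np

def multibody_inertia(jacobians_linear, jacobians_angular, masses, inertias):
    """Multi-body inertia matrix H(q) of eq.(7.2.15), assembled from the link Jacobians
    evaluated at the current configuration."""
    n = jacobians_linear[0].shape[1]
    H = np.zeros((n, n))
    for JL, JA, m, I in zip(jacobians_linear, jacobians_angular, masses, inertias):
        H += m * JL.T @ JL + JA.T @ I @ JA
    return H

# Illustrative planar two-link data (l1 = 1, lc1 = lc2 = 0.5), padded to 3-D vectors.
t1, t2 = 0.3, 0.6
JL1 = np.array([[-0.5*np.sin(t1),                  0.0],
                [ 0.5*np.cos(t1),                  0.0],
                [ 0.0,                             0.0]])
JL2 = np.array([[-np.sin(t1) - 0.5*np.sin(t1+t2), -0.5*np.sin(t1+t2)],
                [ np.cos(t1) + 0.5*np.cos(t1+t2),  0.5*np.cos(t1+t2)],
                [ 0.0,                             0.0]])
JA1 = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0]])   # omega1 = theta1_dot about z
JA2 = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])   # omega2 = theta1_dot + theta2_dot
I1 = I2 = np.diag([0.0, 0.0, 0.1])                     # only the z-axis inertia matters here
H = multibody_inertia([JL1, JL2], [JA1, JA2], [1.0, 1.0], [I1, I2])
print(H)    # symmetric, positive definite, and configuration-dependent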
Most of the terms involved in Lagrange’s equations of motion can be obtained directly by
differentiating the above kinetic energy. From the first term in eq.(2),
d/dt (∂T/∂q̇i) = d/dt ( Σ_{j=1}^{n} Hij q̇j ) = Σ_{j=1}^{n} Hij q̈j + Σ_{j=1}^{n} (dHij/dt) q̇j     (7.2.17)
The first term of the last expression, Σ_{j=1}^{n} Hij q̈j, comprises the diagonal term Hii q̈i as well as the off-
diagonal terms Σ_{j≠i} Hij q̈j, representing the dynamic interactions among the multiple joints due to
accelerations, as discussed in the previous section. It is important to note that a pair of joints, i
and j, have the same coefficient of the dynamic interaction, Hij = Hji, since the multi-body inertia
matrix H is symmetric. In vector-matrix form these terms can be written collectively as
H q̈ = [ H11  ⋯  H1n ] [ q̈1 ]
      [  ⋮   ⋱   ⋮  ] [  ⋮ ]
      [ Hn1  ⋯  Hnn ] [ q̈n ]     (7.2.18)
It is clear that the interactive inertial torque Hij q̈j caused by the j-th joint acceleration upon the i-
th joint has the same coefficient as that of Hji q̈i caused by joint i upon joint j. This property is
called Maxwell’s Reciprocity Relation.
The second term of eq.(17) is non-zero in general, since the multi-body inertia matrix is
configuration-dependent, being a function of joint coordinates. Applying the chain rule,
dHij/dt = Σ_{k=1}^{n} (∂Hij/∂qk)(dqk/dt) = Σ_{k=1}^{n} (∂Hij/∂qk) q̇k     (7.2.19)
The second term in eq.(2), Lagrange’s equation of motion, also yields the partial derivatives of
Hij. From eq.(16),
∂T/∂qi = ∂/∂qi [ ½ Σ_{j=1}^{n} Σ_{k=1}^{n} Hjk q̇j q̇k ] = ½ Σ_{j=1}^{n} Σ_{k=1}^{n} (∂Hjk/∂qi) q̇j q̇k     (7.2.20)
Substituting eq.(19) into the second term of eq.(17) and combining the resultant term with
eq.(20), let us write these nonlinear terms as
hi = Σ_{j=1}^{n} Σ_{k=1}^{n} Cijk q̇j q̇k     (7.2.21)

where

Cijk = ∂Hij/∂qk − ½ ∂Hjk/∂qi     (7.2.22)
This coefficient Cijk is called Christoffel’s Three-Index Symbol. Note that eq.(21) is nonlinear,
having products of joint velocities. Eq.(21) can be divided into the terms proportional to square
joint velocities, i.e. j=k, and the ones for j ≠ k : the former represents centrifugal torques and the
latter Coriolis torques.
hi = Σ_{j=1}^{n} Cijj q̇j² + Σ_{j=1}^{n} Σ_{k≠j} Cijk q̇j q̇k = (Centrifugal) + (Coriolis)     (7.2.23)
These centrifugal and Coriolis terms are present only when the multi-body inertia matrix is
configuration dependent. In other words, the centrifugal and Coriolis torques are interpreted as
nonlinear effects due to the configuration-dependent nature of the multi-body inertia matrix in
Lagrangian formulation.
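Since the Christoffel symbols of eq.(7.2.22) only require partial derivatives of H(q), they can be obtained numerically once H(q) is available. The sketch below does this by finite differences for an illustrative two-link inertia matrix; the parameters are arbitrary.

import numpy as np

def christoffel(H_func, q, eps=1e-6):
    """Christoffel three-index symbols C[i, j, k] of eq.(7.2.22), computed by numerically
    differentiating a configuration-dependent inertia matrix H(q)."""
    n = len(q)
    dH = np.zeros((n, n, n))                   # dH[:, :, k] approximates dH/dq_k
    for k in range(n):
        dq = np.zeros(n)
        dq[k] = eps
        dH[:, :, k] = (H_func(q + dq) - H_func(q - dq)) / (2 * eps)
    C = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j, k] = dH[i, j, k] - 0.5 * dH[j, k, i]
    return C

def H_two_link(q, m1=1.0, m2=1.0, l1=1.0, lc1=0.5, lc2=0.5, I1=0.1, I2=0.1):
    """Illustrative two-link inertia matrix with the coefficients of eq.(7.1.12)."""
    c2 = np.cos(q[1])
    H11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2
    H12 = m2*(lc2**2 + l1*lc2*c2) + I2
    H22 = m2*lc2**2 + I2
    return np.array([[H11, H12], [H12, H22]])

C = christoffel(H_two_link, np.array([0.3, 0.6]))
print(C[0, 1, 1], C[1, 0, 0])   # the centrifugal coefficients -h and +h of eq.(7.1.11)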
The potential energy stored in the robot linkage under gravity is

U = −Σ_{i=1}^{n} mi gᵀ r 0,ci     (7.2.24)
where r 0,ci is the position vector of the centroid Ci, which depends on the joint coordinates.
Substituting this potential energy into Lagrange’s equations of motion yields the following
gravity torque seen by the i-th joint:

Gi = ∂U/∂qi = −Σ_{j=1}^{n} mj gᵀ (∂r 0,cj/∂qi) = −Σ_{j=1}^{n} mj gᵀ JL j,i     (7.2.25)

where JL j,i is the i-th column vector of the 3 x n Jacobian matrix relating the linear centroid
velocity of the j-th link to joint velocities.
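The gravity torque formula of eq.(7.2.25) is easy to evaluate once the centroid Jacobians are known. The sketch below does so for illustrative two-link centroid Jacobians; the gravity vector points along the negative y-axis as in the planar example.

import numpy as np

def gravity_torques(jacobians_linear, masses, g_vec=np.array([0.0, -9.81, 0.0])):
    """Gravity torque vector of eq.(7.2.25): G_i = -sum_j m_j g^T JL_{j,i}."""
    n = jacobians_linear[0].shape[1]
    G = np.zeros(n)
    for JL, m in zip(jacobians_linear, masses):
        G -= m * (g_vec @ JL)        # adds -m_j g^T JL_j, one component per joint
    return G

# Illustrative two-link centroid Jacobians (l1 = 1, lc1 = lc2 = 0.5) at t1 = 0.3, t2 = 0.6.
t1, t2 = 0.3, 0.6
JL1 = np.array([[-0.5*np.sin(t1), 0.0], [0.5*np.cos(t1), 0.0], [0.0, 0.0]])
JL2 = np.array([[-np.sin(t1) - 0.5*np.sin(t1+t2), -0.5*np.sin(t1+t2)],
                [ np.cos(t1) + 0.5*np.cos(t1+t2),  0.5*np.cos(t1+t2)],
                [ 0.0,                             0.0]])
print(gravity_torques([JL1, JL2], [1.0, 1.0]))   # should match G1, G2 of eq.(7.1.12)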
Non-conservative forces acting on the robot mechanism are represented by generalized
forces Qi in Lagrangian formulation. Let δWork be virtual work done by all the non-conservative
forces acting on the system. Generalized forces Qi associated with generalized coordinates qi, e.g.
joint coordinates, are defined by
δWork = Σ_{i=1}^{n} Qi δqi     (7.2.26)
If the virtual work is given by the inner product of joint torques and virtual joint displacements,
τ1 δq1 + ⋯ + τn δqn, the joint torque itself is the generalized force corresponding to the joint
coordinate. However, generalized forces are often different from joint torques. Care must be
taken to find the correct generalized forces. Let us work out the following example.
Example 7.2
Consider the same 2 d.o.f. planar robot as Example 7.1. Instead of using joint angles θ1
and θ 2 as generalized coordinates, let us use the absolute angles, φ1 and φ2 , measured from the
positive x-axis. See the figure below. Changing generalized coordinates entails changes to
generalized forces. Let us find the generalized forces for the new coordinates.
Figure: Two dof arm described by the absolute angles φ1 and φ2; joint torque τ2 acts on link 2 while τ1 and the reaction torque −τ2 act on link 1
As shown in the figure, joint torque τ2 acts on the second link, whose virtual
displacement is δφ2, while joint torque τ1 and the reaction torque −τ2 act on the first link for
virtual displacement δφ1. Therefore the virtual work is

δWork = (τ1 − τ2) δφ1 + τ2 δφ2     (7.2.27)

Comparing this equation with eq.(26), where the generalized coordinates are φ1 = q1, φ2 = q2, we can
conclude that the generalized forces are:
Q1 = τ 1 − τ 2 , Q2 = τ 2 (7.2.28)
The two sets of generalized coordinates θ1 and θ 2 vs. φ1 and φ2 are related as
φ1 = θ1 , φ2 = θ1 + θ 2 (7.2.29)
This confirms that the generalized forces associated with the original generalized coordinates, i.e.
joint coordinates, are τ 1 and τ 2 .
Non-conservative forces acting on a robot mechanism include not only these joint torques
but also any other external force Fext. If an external force acts at the endpoint, the generalized
forces Q = (Q1, …, Qn)ᵀ associated with the generalized coordinates q are, in vector form, given by

Q = τ + Jᵀ Fext

where J is the end-effecter Jacobian. When the external force acts at a position r, the above
Jacobian must be replaced by

Jr = dr/dq     (7.2.32)
Note that, since generalized coordinates q can uniquely locate the system, the position vector r
must be written as a function of q alone.
Chapter 9
Force and Compliance Controls
A class of simple tasks may need only trajectory control where the robot end-effecter is
moved merely along a prescribed time trajectory. However, a number of complex tasks, including
assembly of parts, manipulation of tools, and walking on a floor, entail the control of physical
interactions and mechanical contacts with the environment. Achieving a task goal often requires
the robot to comply with the environment, react to the force acting on the end-effecter, or adapt
its motion to uncertainties of the environment. Strategies are needed for performing those tasks.
Force and compliance controls are fundamental task strategies for performing a class of
tasks entailing the accommodation of mechanical interactions in the face of environmental
uncertainties. In this chapter we will first present hybrid position/force control: a basic principle
of strategic task planning for dealing with geometric constraints imposed by the task environment.
An alternative approach to accommodating interactions will also be presented based on
compliance or stiffness control. Characteristics of task compliances and force feedback laws will
be analyzed and applied to various tasks.
9.1 Hybrid Position/Force Control

9.1.1 Principle
To begin with let us consider a daily task. Figure 9.1.1 illustrates a robot drawing a line
with a pencil on a sheet of paper. Although we humans can perform this type of task without
considering any detail of hand control, the robot needs specific control commands and an
effective control strategy. To draw a letter, “A”, for example, we first conceive a trajectory of the
pencil tip, and command the hand to follow the conceived trajectory. At the same time we
accommodate the pressure with which the pencil is contacting the sheet of paper. Let o-xyz be a
coordinate system with the z-axis perpendicular to the sheet of paper. Along the x and y axes, we
provide positional commands to the hand control system. Along the z-axis, on the other hand, we
specify a force to apply. In other words, the controlled variables differ between the horizontal
and vertical directions: the former are the x and y coordinates, i.e. a position,
while the latter is a force in the z direction. Namely, two types of control loops
are combined in the hand control system, as illustrated in Figure 9.1.2.
Figure 9.1.1 Robot drawing a line with a pencil on a sheet of paper
[Figure 9.1.2 Hand control system combining a position control loop (position reference) and a force control loop (force reference); the robot's controlled variables include both position and force.]
The above example is one of the simplest tasks illustrating the need for integrating
different control loops in such a way that the control mode is consistent with the geometric
constraint imposed on the robot system. As the geometric constraint becomes more complex and
the task objective is more involved, an intuitive method may not suffice. In the following we will
obtain a general principle that will help us find proper control modes consistent with both
geometric constraints and task objectives. Let us consider the following six-dimensional task to
derive a basic principle behind our heuristics and empiricism.
Example 9.1
Shown below is a task to pull up a peg from a hole. We assume that the peg can move in
the vertical direction without friction when sliding in the hole. We also assume that the task
process is quasi-static in that any inertial force is negligibly small. A coordinate system O-xyz,
referred to as C-frame, is attached to the task space, as shown in the figure. The problem is to find
a proper control mode for each of the axes: three translational and three rotational axes.
[Figure 9.1.3 Peg-in-hole task: C-frame O-xyz attached to the task space, with linear velocities vx, vy, vz along the axes and angular velocities ωx, ωy, ωz about them.]
The key question is how to assign a control mode, position control or force control, to
each of the axes in the C-frame in such a way that the control action may not conflict with the
geometric constraints and physics. M. Mason addressed this issue in his seminal work on hybrid
position/force control. He called conditions dictated by physics Natural Constraints, and
conditions determined by task goals and objectives Artificial Constraints. Table 9.1.1 summarizes
these conditions.
From Figure 9.1.3 it is clear that the peg cannot be moved in the x and y directions due to
the geometric constraint. Therefore, the velocities in these directions must be zero:
vx = 0, vy = 0. Likewise, the peg cannot be rotated about the x and y axes. Therefore, the
angular velocities are zero: ωx = 0, ωy = 0. These conditions constitute the natural constraints in
the kinematic domain. The remaining directions are the linear and angular z axes. Velocities along
these two directions can be assigned arbitrarily, and may be controlled with position control mode.
The reference inputs to these position control loops must be determined such that the task
objectives may be satisfied. Since the task is to pull up the peg, an upward linear velocity must be
given: vz = V > 0. The orientation of the peg about the z-axis, on the other hand, does not have to
be changed. Therefore, the angular velocity remains zero: ωz = 0. These reference inputs
constitute the artificial constraints in the kinematic domain.
Table 9.1.1 Natural and artificial constraints for the peg-in-hole task

                            Kinematic                            Static
  Natural Constraints       vx = 0, vy = 0, ωx = 0, ωy = 0       fz = 0, τz = 0
  Artificial Constraints    vz = V > 0, ωz = 0                   fx = 0, fy = 0, τx = 0, τy = 0
In the statics domain, forces and torques are specified in such a way that the quasi-static
condition is satisfied. This means that the peg motion must not be accelerated with any
unbalanced force, i.e. non-zero inertial force. Since we have assumed that the process is friction-
less, no resistive force acts on the peg in the direction that is not constrained by geometry.
Therefore, the linear force in the z direction must be zero: fz = 0. The rotation about the z axis,
too, is not constrained. Therefore, the torque about the z axis must be zero: τz = 0. These
conditions are dictated by physics, and are called the natural constraints in the statics domain. The
remaining directions are geometrically constrained. In these directions, forces and torques can be
assigned arbitrarily, and may be controlled with force control mode. The reference inputs to these
control loops must be determined so as to meet the task objectives. In this task, it is not required
to push the peg against the wall of the hole, nor to twist it. Therefore, the reference inputs are set to
zero: fx = 0, fy = 0, τx = 0, τy = 0. These constitute the artificial constraints in the statics domain.
Note that the above assignment of control modes rests on the following conditions:
• Each C-frame axis must have only one control mode, either position or force,
• The process is quasi-static and frictionless, and
• The robot motion must conform to the geometric constraints.
In general, the axes of a C-frame are not necessarily the same as the direction of a separate
control mode. Nevertheless, the orthogonality properties hold in general. We prove this next.
Let V⁶ be a six-dimensional vector space, and Va ⊂ V⁶ be an admissible motion space,
that is, the entire collection of admissible motions conforming to the geometric constraints
involved in a given task. Let Vc be a constraint space that is the orthogonal complement of the
admissible motion space:
Vc = Va^⊥        (9.1.1)
Consider an endpoint force F and an infinitesimal endpoint displacement Δp, and the work done by
the force over the displacement:
ΔWork = F^T Δp        (9.1.2)
Both vectors can be decomposed into components in the admissible motion space and the constraint space:
F = Fa + Fc ;  Fa ∈ Va , Fc ∈ Vc
Δp = Δpa + Δpc ;  Δpa ∈ Va , Δpc ∈ Vc        (9.1.3)
ΔWork = (Fa + Fc)^T (Δpa + Δpc) = Fa^T Δpa + Fa^T Δpc + Fc^T Δpa + Fc^T Δpc
       = Fa^T Δpa + Fc^T Δpc        (9.1.4)
since Fa ⊥ Δpc and Fc ⊥ Δpa by definition. For the infinitesimal displacement Δp to be a virtual
displacement δp, its component in the constraint space must be zero: Δpc = 0. Then Δpa = δp
becomes a virtual displacement, and eq.(4) reduces to the virtual work. Since the system is in a static
equilibrium, the virtual work must vanish for all virtual displacements δpa:
δWork = Fa^T δpa = 0 ,  ∀ δpa        (9.1.5)
This implies that any force and moment in the admissible motion space must be zero, i.e. the
natural static constraints:
Fa = 0 ,  Fa ∈ Va        (9.1.6)
Furthermore, the properties of the artificial static constraints can be derived from eqs.(4) and (5).
Since Δpc = 0 in eq.(4), the static equilibrium condition holds although Fc ∈ Vc takes an
arbitrary value. This implies that, to meet a task goal, we can assign arbitrary values to the force
and moment in the constraint space, i.e. the artificial static constraints:
arbitrary :  Fc ∈ Vc        (9.1.7)
Likewise, in the kinematic domain, the motion in the constraint space must be zero, while the
motion in the admissible motion space may be assigned arbitrarily to meet the task objective:
arbitrary :  Δpa ∈ Va
Δpc = 0 ,  Δpc ∈ Vc        (9.1.8)
These conditions, i.e. Mason's principle, are summarized in the following table:

                            Kinematic                      Static
  Natural Constraints       Δpc = 0 ,  Δpc ∈ Vc            Fa = 0 ,  Fa ∈ Va
  Artificial Constraints    arbitrary :  Δpa ∈ Va          arbitrary :  Fc ∈ Vc
The reader will appreciate Mason’s Principle when considering the following exercise
problem, in which the partition between admissible motion space and constraint space cannot be
described by a simple division of C-frame axes. Rather, the admissible motion space lies along a
direction in which a translational axis and a rotational axis are coupled.
If the feedback signals are noise-less and the C-frame is perfectly aligned with the actual
task process, the position signal of the feedback loop must lie in the admissible motion space, and
the force being fed back must lie in the constraint space. However, the feedback signals are in
general corrupted with sensor noise and the C-frame may be misaligned. Therefore, the position
signal may contain some component in the constraint space, and some fraction of the force signal
may lie in the admissible motion space. These components contradict the natural
constraints, and therefore should not be fed back to the individual position and force controls. To
filter out the contradicting components, the feedback errors are projected onto their own subspaces,
i.e. the positional error ep is mapped to the admissible motion space Va and the force feedback error
ef is mapped to the constraint space Vc. In the block diagram these filters are shown by the projection
matrices Pa and Pc:
êp = Pa ep ,  êf = Pc ef        (9.1.9)
When the C-frame axes are aligned with the directions of the individual position and force control
loops, the projection matrices are diagonal, consisting of only 1 and 0 in the diagonal components.
In the case of the peg-in-the-hole problem, with the C-frame coordinates ordered as the x, y, z
translations followed by the x, y, z rotations, they are:
Pa = diag( 0, 0, 1, 0, 0, 1 ) ,  Pc = diag( 1, 1, 0, 1, 1, 0 )        (9.1.10)
In the case of Example 9.2, where the C-frame axes are not the directions of the individual position
and force control loops, the projection matrices are not diagonal.
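The projection matrices can be constructed directly from a basis of the admissible motion space, which also covers such non-diagonal cases. A minimal sketch, assuming NumPy and the axis ordering used above (the function name is illustrative):

    import numpy as np

    def projection_matrices(Va_basis):
        """Return (Pa, Pc): orthogonal projectors onto the admissible motion
        space Va spanned by the columns of Va_basis, and onto Vc = Va-perp."""
        A = np.atleast_2d(Va_basis)
        Pa = A @ np.linalg.solve(A.T @ A, A.T)   # A (A^T A)^-1 A^T
        Pc = np.eye(A.shape[0]) - Pa             # projector onto the complement
        return Pa, Pc

    # Peg-in-hole: admissible motions are translation and rotation about z,
    # with coordinates ordered (x, y, z, theta_x, theta_y, theta_z)
    A = np.zeros((6, 2)); A[2, 0] = 1.0; A[5, 1] = 1.0
    Pa, Pc = projection_matrices(A)
    print(np.diag(Pa))   # [0. 0. 1. 0. 0. 1.]
    print(np.diag(Pc))   # [1. 1. 0. 1. 1. 0.]

    # A coupled translation/rotation direction gives non-diagonal projectors,
    # as in Example 9.2.
    B = np.zeros((6, 1)); B[2, 0] = 1.0; B[5, 0] = 0.5
    Pa2, _ = projection_matrices(B)
    print(np.round(Pa2[np.ix_([2, 5], [2, 5])], 3))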
[Block diagram of the hybrid position/force controller: the position reference inputs and the position feedback form the error ep, which is projected by Pa and mapped through J⁻¹ to the position/velocity control compensator; the force reference inputs and the force feedback form the error ef, which is projected by Pc and mapped through J^T to the force/torque control compensator; the two commands are combined and drive the robot interacting with the task environment.]
These feedback errors, êp and êf, are in the C-frame, hence they must be converted to
the joint space in order to generate control commands to the actuators. Assuming that the
positional error vector is small and that the robot is not at a singular configuration, the position
feedback error in joint coordinates is given by
eq = J⁻¹ êp        (9.1.11)
where J is the Jacobian relating the end-effecter velocities in the C-frame to the joint velocities. The
force feedback error in the joint coordinates, on the other hand, is obtained based on the duality
principle:
eτ = J^T êf        (9.1.12)
These two error signals in the joint coordinates are combined after going through dynamic
compensation in the individual joint controls.
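A minimal sketch of eqs.(9.1.11) and (9.1.12), assuming NumPy, a square non-singular Jacobian of a two-link planar arm, and illustrative error values:

    import numpy as np

    def cframe_errors_to_joint_space(J, ep_hat, ef_hat):
        """Map the projected C-frame errors to joint space (eqs. 9.1.11-9.1.12):
        the position error through J^-1 (J assumed square and non-singular),
        the force error through J^T (duality principle)."""
        e_q   = np.linalg.solve(J, ep_hat)   # e_q = J^-1 ep_hat
        e_tau = J.T @ ef_hat                 # e_tau = J^T ef_hat
        return e_q, e_tau

    # A two-link planar arm Jacobian at an arbitrary (illustrative) configuration
    t1, t2 = 0.4, 0.9
    J = np.array([[-np.sin(t1) - np.sin(t1 + t2), -np.sin(t1 + t2)],
                  [ np.cos(t1) + np.cos(t1 + t2),  np.cos(t1 + t2)]])
    e_q, e_tau = cframe_errors_to_joint_space(J, ep_hat=np.array([0.01, 0.0]),
                                              ef_hat=np.array([0.0, -1.5]))
    print(e_q, e_tau)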
9.2 Compliance Control

9.2.1 Task strategy
Use of both position and force information is a unique feature in the control of robots
physically interacting with the environment. In hybrid position/force control, separation was
made explicitly between position and force control loops through projections of feedback signals
onto admissible motion space and constraint space. An alternative to this space separation
architecture is to control a relationship between position and force in the task space. Compliance
Control is a basic control law relating the displacement of the end-effecter to the force and
moment acting on it. Rather than totally separating the task space into subspaces of either position
or force control, compliance control reacts to the endpoint force such that a given functional
relationship, typically a linear map, is held between the force and the displacement. Namely, a
functional relationship to be generated is given by
Δp = C F        (9.2.1)
where C is an m x m Compliance Matrix, and Δp and F are the endpoint displacement and force
represented in an m-dimensional task coordinate system. Note that the inverse of the compliance
matrix is a stiffness matrix:
K = C⁻¹        (9.2.2)
Consider, for example, the task of opening a door by holding its doorknob, as illustrated in the
figure below. The doorknob must follow the circular trajectory determined by the door hinge,
which is not exactly known to the robot. Along the direction normal to the doorknob trajectory, a
small stiffness, i.e. a large compliance, is assigned;
such a small spring constant generates only a small restoring force in response to the discrepancy
between the actual doorknob trajectory and the reference trajectory of the robot hand. Along the
direction tangent to the doorknob trajectory, on the other hand, a large stiffness, or a small
compliance, is assigned. This is to force the doorknob to move along the trajectory despite
friction and other resistive forces. The stiffness matrix is therefore given by
K = [ kx   0
      0    ky ] ;   kx ≪ 1 ,  ky ≫ 1        (9.2.3)
with reference to the task coordinate system O-xy. Holding the doorknob with this stiffness, the
robot can open the door smoothly and dexterously, although the exact trajectory of the
doorknob is not known.
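A minimal numerical sketch of how the anisotropic stiffness of eq.(9.2.3) responds to a trajectory discrepancy; the gain values below are illustrative only:

    import numpy as np

    # Anisotropic endpoint stiffness of eq.(9.2.3): compliant normal to the doorknob
    # trajectory (x), stiff along the tangent (y). Gains are illustrative values.
    K = np.diag([5.0, 2000.0])          # kx << ky   [N/m]

    # Discrepancy between the reference hand position and the actual doorknob
    # position, expressed in the task frame O-xy
    dp = np.array([0.02, 0.001])        # 2 cm normal error, 1 mm tangential error [m]

    F = K @ dp                          # restoring force F = K dp
    print(F)                            # small normal force, large tangential force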
[Figure: opening a door. The task coordinate system O-xy is attached at the doorknob, with a small stiffness kx ≪ 1 along the x direction (normal to the doorknob trajectory) and a large stiffness ky ≫ 1 along the y direction (tangent to the doorknob trajectory).]
9.2.2 Compliance control synthesis
Now that a desired compliance is given, let us consider the method of generating the
desired compliance. There are multiple ways of synthesizing a compliance control system. The
simplest method is to accommodate the proportional gains of joint feedback controls so that
desired restoring forces are generated in proportion to discrepancies between the actual and
reference joint angles. As shown in Figure 9.2.2, a feedback control error ei is generated when a
disturbance force or torque acts on the joint. At steady state a ststic balance is made, as an
actuator torque W i proportional to the control error ei cancels out the disturbance torque. The
proportionality constant is determined by the position feedback gain ki, when friction is neglected.
Therefore a desired stiffness or compliance can be obtained by tuning the position feedback gain.
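This single-joint relationship can be checked with a minimal simulation sketch; the inertia, gains, and disturbance torque below are illustrative values only:

    # A PD-controlled joint under a constant disturbance torque settles with a
    # deflection e = -tau_dist / kp, i.e. the effective joint stiffness equals kp.
    I, kp, kd = 0.1, 50.0, 2.0          # inertia, position gain, velocity gain
    tau_dist = 1.0                      # constant disturbance torque [Nm]
    q, qd, q_ref, dt = 0.0, 0.0, 0.0, 1e-3

    for _ in range(20000):              # simple forward-Euler integration
        e = q_ref - q
        tau = kp * e - kd * qd          # PD control torque
        qdd = (tau + tau_dist) / I
        qd += qdd * dt
        q += qd * dt

    print(q_ref - q, -tau_dist / kp)    # steady-state error approaches -tau_dist/kp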
Compliance synthesis is trivial for single joint control systems. For general n degree-of-
freedom robots, however, multiple feedback loops must be coordinated. We now consider how to
generate a desired m x m compliance or stiffness matrix specified at the endpoint by tuning joint
feedback gains.
[Figure 9.2.2 Single-joint feedback control: the reference qri(t) and the measured joint angle qi form the error ei; the PD control and actuator generate the torque τi driving the robot mechanism, on which a disturbance force/torque also acts.]
Theorem  Let J be the Jacobian relating the endpoint velocity ṗ ∈ R^{m×1} to the joint velocities
q̇ ∈ R^{n×1}, and τ ∈ R^{n×1} be the joint torques associated with the joint coordinates q. Let
Δp ∈ R^{m×1} be the m x 1 vector of the endpoint displacement measured from a nominal position
p0, and F ∈ R^{m×1} be the endpoint force associated with the endpoint displacement Δp. Let Kp
be a desired endpoint stiffness matrix defined as:
F = Kp Δp        (9.2.4)
The necessary condition for the joint feedback gain matrix Kq to generate the endpoint stiffness
Kp is given by
Kq = J^T Kp J        (9.2.5)
Proof
Using the Jacobian, Δp = J Δq, and the duality principle, τ = J^T F, as well as eq.(4),
τ = J^T F = J^T Kp Δp = J^T Kp J Δq        (9.2.6)
Comparing this with the joint stiffness relation
τ = Kq Δq        (9.2.7)
yields eq.(5).
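A minimal sketch of eq.(9.2.5), assuming NumPy and purely illustrative numbers:

    import numpy as np

    def joint_gain_from_endpoint_stiffness(J, Kp):
        """Joint feedback gain matrix realizing a desired endpoint stiffness,
        Kq = J^T Kp J (eq. 9.2.5)."""
        return J.T @ Kp @ J

    # Illustrative desired endpoint stiffness and a 2 x 2 Jacobian
    Kp = np.diag([100.0, 400.0])
    J = np.array([[-0.6, -0.9],
                  [ 0.8,  0.4]])
    print(joint_gain_from_endpoint_stiffness(J, Kp))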
Example 9.2.1  Consider a two-link, planar robot arm with absolute joint angles and joint torques,
as shown in Figure 9.2.3. Obtain the joint feedback gain matrix producing the endpoint stiffness Kp:
Kp = [ k1   0
       0    k2 ]        (9.2.8)
Assuming that the link length is 1 for both links, the Jacobian is given by
J = [ −s1  −s2
       c1   c2 ]        (9.2.9)
where si = sin φi and ci = cos φi. From eq.(5),
Kq = J^T Kp J = [ kq1  kq3
                  kq3  kq2 ]        (9.2.10)
where
kq1 = k1 s1² + k2 c1² ,  kq2 = k1 s2² + k2 c2² ,  kq3 = k1 s1 s2 + k2 c1 c2        (9.2.11)
Note that the joint feedback gain matrix Kq is symmetric and that the matrix Kq degenerates when
the robot is at a singular configuration. If it is non-singular, then
Δp = J Δq = J Kq⁻¹ τ = J Kq⁻¹ J^T F = J (J^T Kp J)⁻¹ J^T F = Kp⁻¹ F = C F        (9.2.12)
Therefore, the obtained joint feedback gain provides the desired endpoint stiffness given by eq.(8),
or the equivalent compliance.
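The result can be verified numerically for the two-link arm of this example; the stiffness values and joint angles below are illustrative only:

    import numpy as np

    # Numerical check of eqs.(9.2.5) and (9.2.12) for the two-link arm of Example 9.2.1
    # (unit link lengths, absolute angles; the stiffness and angle values are illustrative).
    k1, k2 = 100.0, 400.0
    Kp = np.diag([k1, k2])

    phi1, phi2 = 0.7, 1.4                          # a non-singular configuration
    s1, c1 = np.sin(phi1), np.cos(phi1)
    s2, c2 = np.sin(phi2), np.cos(phi2)
    J = np.array([[-s1, -s2],
                  [ c1,  c2]])

    Kq = J.T @ Kp @ J                              # eq.(9.2.5)
    C_endpoint = J @ np.linalg.inv(Kq) @ J.T       # eq.(9.2.12): J Kq^-1 J^T
    print(np.allclose(C_endpoint, np.linalg.inv(Kp)))   # True: equals Kp^-1 = C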
[Figure 9.2.3 Two-link planar arm with absolute joint angles φ1, φ2 measured in the O-xy frame, joint torques τ1, τ2, and endpoint force F with the associated endpoint displacement Δp.]