Industrial Robotics - Programming Simulation and Applications
Edited by
Kin-Huat Low
Abstracting and non-profit use of the material is permitted with credit to the source. Statements and
opinions expressed in the chapters are those of the individual contributors and not necessarily those of
the editors or publisher. No responsibility is accepted for the accuracy of information contained in the
published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or
property arising out of the use of any materials, instructions, methods or ideas contained inside. After
this work has been published by Advanced Robotic Systems International, authors have the right to
republish it, in whole or in part, in any publication of which they are an author or editor, and to make
other personal use of the work.
Printed in Croatia
A catalog record for this book is available from the German Library.
Industrial Robotics: Programming, Simulation and Applications / Edited by Low Kin Huat.
p. cm.
ISBN 3-86611-286-6
1. Manipulators. 2. Calibration. 3. Design I. Title.
Contents
Preface ............................................................................................ IX
1. A Design Framework for Sensor Integration in Robotic
Applications .....................................................................................................................1
D. I. Kosmopoulos
10. Interfacing the Gryphon Robot with the World Wide Web......... 195
R. T. F. Ah King, S. Mootien, H. C. S. Rughooputh
Preface
About 86 years ago, in 1921, Karel Capek introduced the word “robot” in his book, Ros-
sum’s Universal Robots. The term “robotics” was then coined and first used by Isaac Asimov in a short
story published in 1942. Subsequently, after World War II, automatic machines were de-
signed to increase productivity, and machine-tool manufacturers made numerically-
controlled (NC) machines to enable manufacturers to produce better products. In the end, a
marriage between the NC capability of machine tools and the manipulators created a simple
robot. The first such robot was sold in 1961 by Unimation, the first robot company, established
by Engelberger. This happened 40 years after the word “robot” was introduced.
Today, after another 40 years, robotic arms are widely used in industrial manufacturing.
There is no doubt that robots have increased in capability and performance through im-
proved mechanisms, controllers, software development, sensing, drive systems, and materi-
als.
The ARS presents the state-of-the-art developments in this book on “Industrial Robotics:
Programming, Simulation and Applications,” which collects 35 chapters of recent robotics
topics in different applications. The articles are contributed by research scientists, computer
scientists, professors, and engineers from 18 countries in different regions of the world. It
contains a good representation of today’s advanced robotics, maintaining a good balance
between practical applications and scientific research.
The works presented in this book can be divided into several groups:
This book is dedicated to robotics researchers, practicing engineers, and R&D managers,
who wish to enhance and broaden their knowledge and expertise on the fundamentals,
technologies and applications of industrial robots.
Providing an overview of contemporary research on advanced robotic systems, the
articles offer up-to-date advances and solutions to theoretical and application problems
encountered in various areas of industrial robotics. By no means, however, can we
cover the whole area of industrial robots in this single book. The editor is indebted to all
authors for their valuable contributions to this book. Special thanks go to the Editors in Chief of the
International Journal of Advanced Robotic Systems for their effort in making this book possi-
ble.
1. Introduction
The benefits of an open robot controller architecture have not been ignored by robot
manufacturers, and in recent years the closed proprietary architectures have been partially
modified to allow access for the integration of external sensors. However, application
integrators are still not able to exploit this openness to the full extent, and installations that
employ sensors remain the exception in most industrial plants, mainly due to development
difficulties. What the developers of robotic solutions need is a generic method for rapid
development and deployment of sensors and robots that enables quick responses to market
needs. This need is mainly dictated by the fact that many such applications sharing common
functionality are required in industry. By sharing and reusing design patterns it is
possible to reduce costs significantly. Therefore, a careful domain analysis that identifies the
similarities and differences between the various types of applications and provides the
infrastructure for the development of new ones is of obvious importance.
A major hindrance in the development of sensor-guided robotic applications is the
complexity of the domain, which requires knowledge from the disciplines of robot vision,
sensor-based control, parallel processing and real-time systems. A framework defines how
the available scientific knowledge can be used to create industrial systems by embodying
the theory of the domain; it clarifies the core principles and methods and the algorithmic
alternatives. New algorithms may therefore be easily integrated through the addition or
substitution of modular components.
A design framework should provide the means for the integration of the entailed
components and give an overview of their collaboration. Technological advances may
then be easily adopted, provided that the same cooperation rules are applied.
In the past, many academic and commercial frameworks have been presented, covering
parts of the disciplines entailed in sensor-guided robotic applications. However, the
available frameworks that can be used for sensor integration in robotic applications are
either limited in scope or were not developed with that general goal in mind. From the point
of view of the developer of sensor-guided robots, the available frameworks treat the
general problem in a rather fragmented manner.
This work presents a design framework that defines the basic components that comprise
robotic sensor guided systems. It also shows how these components interact to implement
the most common industrial requirements and provides some application examples.
The rest of this work is structured as follows: the next section defines the context of this
research; section 3 summarizes the contribution of this work; section 4 identifies the
commonalities and differences between various industrial applications; section 5 describes
the user and design requirements; section 6 describes the basic components, while section
7 shows how these components interact to implement the basic use cases; section 8
demonstrates the validity of the approach by presenting two applications: a sunroof-fitting
robot and a depalletizing robot. The chapter ends with section 9, which presents the
conclusions and future work.
2. Related work
The literature on the integration of robotic systems is quite rich. On the one hand, there
are many middleware systems, which are quite general, treat sensor-data processing and
control issues separately, and do not address the issue of their interaction in typical use
cases. On the other hand, some methodologies handle the cooperation between
sensors and actuators but are limited to certain sensor types, control schemes or application-
specific tasks. Thus, methods that bridge this gap are needed.
The middleware systems that have been proposed implement the interfaces to the
underlying operating system and hardware and they provide programming interfaces that
are more familiar to control application builders than the real-time operating system
primitives. Some of the most interesting of these frameworks are OSACA (Sperling & Lutz
1996) and Orca (Orca 2006). The scope of these frameworks is to provide the infrastructure
and not to specify what the application will functionally be able to do, what data it
will accept and generate, and how the end users and the components will interact. They can
be used in combination with our framework, which specifies the functionality and the
behavior for sensor-guided applications. Some other popular middleware platforms for
mobile robots are the Player/Stage/Gazebo (Vaughan et al. 2003), the CARMEN
(Montemerlo et al. 2003), the MIRO (Utz et al. 2002), and the CLARAty (Nesnas et al. 2006).
Many other attempts have been made to implement open control objects through the
introduction of class or component libraries. Such libraries providing motion control
functionality or the related components are the Orca (Orca 2006), the SMART (Anderson
1993) and the START (Mazer et al. 1998). The integration of sensors for robot guidance is
addressed by libraries such as the Servomatic (Toyama et al. 1996) and XVision (Hager &
Toyama 1998), but these libraries are limited to certain sensors, control schemes or
application-specific tasks and are thus inappropriate for general use.
For object detection in sensory data a large number of frameworks have been presented by
researchers and industry. Examples of such frameworks are the Image Understanding
Environment (ECV-Net –IUE 2000) and the vxl (VXL 2006). A large number of libraries are also
available, such as OpenCV (Intel 2006), Halcon (MVTec 2006), etc. Such libraries or
frameworks may be used in combination with the proposed methodology.
3. Contribution
The proposed design framework capitalizes on the increased “openness” of modern robot
controllers, by extending the capabilities provided by the manufacturer, through intelligent
handling and fusion of the acquired information. It provides a modular and systematic way
to encompass different types of sensory input; obviously, this approach can be fully profitable
only when the control system architecture allows this information exchange to take place. The
presented work proposes how to develop software that undertakes robot control at the
end-effector level by exploiting the sensory data. More specifically, it presents:
• A unified approach for implementing sensor guided robot applications considering
the restrictions posed by the controller manufacturers and the capabilities of the
available control and vision methods.
• A generalization of the available approaches to enable the integration of the
popular sensors to any open controller using a wide variety of control schemes.
• Patterns of use including open loop and closed loop control schemes.
• Concrete examples of the development of industrial applications facilitated by the
presented framework.
The proposed design is independent of the organizational layer (procedure management, user
interface, knowledge base) and the execution level (actuators, sensors) of the application, and of the
remaining layers of the robot controller (integration platform, operating system and hardware).
4. Domain description
A unified approach to the integration of industrial sensors in robotic systems through a
common object-oriented framework is reasonable because the necessary abstractions are
present for the sensors, the robotic actuators and their interactions. This will be
clarified through a domain description, which leads to the requirements.
4.1 Sensors
The sensors that are frequently used for industrial robotic tasks may be categorized into contact
and non-contact ones. Typical contact sensors used in industry are force-torque sensors
and tactile sensors. Popular non-contact sensors are cameras, laser sensors and ultrasound sensors,
which give similar output, and inductive, capacitive or Hall sensors for detecting the presence
of an object. The processing of the sensory data aims at the identification of features, which in
this context normally means the parts of the image with local properties (e.g., points, edges etc.),
from which useful information about the environment can be extracted. The use of sensors in
most cases presupposes that the inverse sensor model is available, which allows calculation of
the environment state from the sensory data.
The sensors can be treated in a unified manner in the design because the sensory data
can be described as vectors or 2-D arrays; what differs is the type of the sensory data
(bits, bytes, double-precision numbers etc.). Examples of one-dimensional (vector) images
are the outputs of force sensors and laser sensors; examples of two-dimensional
images are the outputs of monochrome cameras, tactile sensors and arrays of
“binary” sensors; an example of an image with higher dimensions is the color image, which
consists of three monochrome channels (e.g., red, green and blue).
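The unified treatment described above can be sketched as follows. This is a minimal Python illustration, not part of the chapter's framework; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class SensorImage:
    """Generic sensory-data container: a flat buffer plus a shape, so that
    1-D force/laser vectors and 2-D camera/tactile arrays share one type."""
    data: List[float]
    shape: Sequence[int]  # (n,) for vectors, (rows, cols) for 2-D images

    @property
    def dimensions(self) -> int:
        return len(self.shape)


# A 1-D laser scan and a 2-D monochrome image use the same representation:
scan = SensorImage(data=[0.12, 0.10, 0.11], shape=(3,))
frame = SensorImage(data=[0.0] * 6, shape=(2, 3))
```

Only the element type and shape differ between sensor families; all downstream processing can be written against the common container.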
The robot states are usually described in Cartesian space due to easier comprehension for humans. Therefore the controller has to
solve the inverse kinematic problem (the calculation of the joint states from the end-effector
state) to achieve a desired state in the Cartesian space for the end-effector; the
controller must then calculate the proper current values to drive the joint motors in order to
produce the desired torques at the sampling moments. Industrial robot controllers are
designed to provide good solutions to this problem within certain subspaces.
In closed architectures the user is able to program the robot movement by giving the target
states and optionally by defining the movement type, e.g., linear, circular etc. The controller
then decides how to reach these states by dividing the trajectory into smaller parts defined
by interpolation points. The time to perform the movement through a pair of interpolation
points is defined by the interpolation frequency; in order to reach the interpolation points,
closed-loop control at the joint level is performed at a much higher frequency. Generally,
control at the joint level is not open to the programmer of industrial robots.
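The division of a programmed move into interpolation points can be illustrated with a short sketch. This is a hypothetical rendering of what a closed-architecture controller does internally, using a linear move for simplicity.

```python
def interpolation_points(start, target, interpolation_freq_hz, duration_s):
    """Divide a linear Cartesian move into the interpolation points that a
    closed-architecture controller generates between the user-given states."""
    n = max(1, int(interpolation_freq_hz * duration_s))
    points = []
    for k in range(1, n + 1):
        t = k / n  # fraction of the trajectory covered at point k
        points.append([s + t * (g - s) for s, g in zip(start, target)])
    return points


# A 1 s linear move of 0.4 m along x, at a 4 Hz interpolation frequency:
pts = interpolation_points([0.0, 0.0, 0.0], [0.4, 0.0, 0.0], 4, 1.0)
```

The joint-level loop then tracks each of these points at a much higher rate, which is exactly the part that remains closed to the programmer.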
Fig. 1. Closed loop control scheme at the end-effector level. The end–effector’s state x is
measured by the sensing devices and the corresponding measurement x̂ is given as
feedback; then x̂ is compared to the desired state xd, the difference is fed to the regulator
and the regulation output is sent to the robot controller to move the manipulator. The joint-
level loop is also displayed.
Fig. 2. The cycles of the robot and the sensor and their synchronization for compensating for a target
movement. The total delay from the initial target movement to the new robot pose that
compensates for this movement depends mainly on the robot and sensor cycles (adapted from
(Coste-Maniere & Turro 1997)).
For tasks requiring low accuracy, a “blind” trajectory execution with closed-loop control at
the joint level is acceptable. However, for tasks that require high accuracy, closed-loop
control at the end-effector level is necessary, during which the next interpolation points
have to be calculated in each interpolation cycle using external input; this need is satisfied
by open-architecture robot controllers, which, unlike typical closed-architecture
controllers, permit the user or an external system to define the interpolation points precisely
and oblige the end-effector to pass strictly through them. This restriction, however, may
produce oscillations at the end-effector level, depending partially on the coarseness of the
trajectory. These oscillations may be avoided if an additional regulator is used. The control
scheme presented in Fig. 1 is then applied.
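One cycle of the regulated loop of Fig. 1 can be sketched in a few lines. This is a minimal illustration assuming a simple proportional regulator; the chapter does not prescribe a specific control law.

```python
def regulate(x_desired, x_measured, gain=0.5):
    """One cycle of the end-effector loop: the state error is scaled by a
    regulator gain below 1, so the commanded correction is damped and the
    oscillations caused by strict pass-through of coarse interpolation
    points are avoided."""
    return [m + gain * (d - m) for d, m in zip(x_desired, x_measured)]


# Repeated cycles converge smoothly to the desired state:
x = [0.0, 0.0]
for _ in range(10):
    x = regulate([1.0, -1.0], x)
```

With a gain of 1 the end-effector would jump straight to each target point; a smaller gain trades response speed for smoothness, which is the role of the additional regulator described above.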
Ideally, the robot would move according to sensory data that depict the current environmental situation. However, this is usually not
possible due to differences in the sensor and robot cycles and due to the time required for
processing and transmission of the sensory data (see Fig. 2). Therefore, synchronization
schemes are necessary, which may use prediction to calculate the currently required
state based on previous data. For open-loop systems this problem does not exist, because
the robot moves only after reading and processing the sensory data.
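A simple form of the prediction mentioned above is linear extrapolation from the most recent sensor samples. This is an illustrative sketch, not the chapter's prescribed scheme; sample times and values are hypothetical.

```python
def predict_state(t_query, samples):
    """Linearly extrapolate the target state at time t_query from the two
    most recent (time, value) sensor samples, compensating for the delay
    between slow sensor cycles and fast robot interpolation cycles."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    rate = (x1 - x0) / (t1 - t0)  # estimated target velocity
    return x1 + rate * (t_query - t1)


# Sensor samples arrive every 40 ms; the robot needs the state at t = 100 ms:
history = [(0.020, 1.0), (0.060, 1.2)]
predicted = predict_state(0.100, history)
```

More elaborate predictors (e.g., Kalman filtering) follow the same pattern: the synchronization scheme supplies a state estimate at the robot's cycle boundary even though no fresh measurement exists.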
Fig. 3. Typical layered architecture of robotic applications. The organizational level includes
the Procedure Manager, the Knowledge Base and the User Interface; the execution level
includes the Sensors and the Actuators; the processing level includes the robot controller,
which may be decomposed into the open-architecture layers.
5. Requirements
5.1 Functional and non-functional
A generic framework has to satisfy some functional and non-functional requirements
derived from the description of the sensors, the robot controllers and the control
schemes using sensors. These requirements, as seen from the user perspective, are briefly
presented in the following.
As regards the basic functional requirements, the framework should provide the infrastructure
and patterns of use for modelling and interfacing the sensors; should allow sensor integration
with industrial robot controllers in open-loop and closed-loop control schemes; should
support multiple sensors of the same or different types; should allow object recognition from the
sensory data; and should support virtual sensory data and manipulators.
The basic non-functional requirements are related to the software quality factors. The
composed systems must be characterized by reliability, efficiency, integrity, usability,
maintainability, flexibility, testability, reusability, portability and interoperability.
Fig. 4. The decomposition of the processing-layer system into control objects (capsules) and
their interconnection with modules of the organization and execution layers (black and
white boxes indicate client and server ports respectively). It is composed of the AM
(which communicates with the manufacturer’s controller), the SAI, the SC and the SM
capsules. The capsules’ activation and states are controlled by the ProcedureManager. The
system parameters are read from the ParameterDB, and the predefined states are provided
by the StateServer. The interface with the user is handled by the user interface, which
communicates with the processing layer through the ParameterDB.
The framework has to be independent of sensor-specific and robot-controller-specific
implementations in order to allow the integration of any sensor
with any industrial controller type. This is achieved by common communication rules
(protocols) for all sensors and controllers.
6. Control objects
This work focuses on the control objects layer, which offers its services independently of the
organization and execution layers and of its underlying layers with regard to the open controller
architecture. The control objects are usually associated with a particular sensor or actuator
type, or they may simply provide services based on input from other control objects. The
sequence of the provided services is defined by the organization level (procedure manager,
user interface) and is influenced by the environment.
For the modeling of the system structure we use the UML notation for real-time systems
that is proposed in (Selic & Rumbaugh 1998), where the notions of capsules, ports and
protocols are defined. Capsules are complex, physical, possibly distributed components
that interact with their surroundings through one or more signal-based boundary objects
called ports. A port is a physical part of the implementation of a capsule that mediates
the interaction of the capsule with the outside world; it is an object that implements a
specific interface. Each port of a capsule plays a particular role in a collaboration that the
capsule has with other objects. To capture the complex semantics of these interactions,
ports are associated with a protocol that defines the valid flow of information (signals)
between connected ports of capsules. In the presented design the capsules are the high
level system components (control objects), the ports are the interface classes and the
protocols are the communication schemes between the components. These are presented
in Fig. 4 and explained in the following.
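The capsule/port/protocol notions above can be made concrete with a small sketch. This is an illustrative Python rendering under our own naming, not code from the framework; real capsules would be distributed components rather than in-process objects.

```python
import queue


class Port:
    """A port mediates a capsule's interaction with the outside world.
    Here it is modeled as a mailbox wired to a single peer port; the
    'protocol' is simply the agreed format of the signals exchanged."""

    def __init__(self):
        self._inbox = queue.Queue()
        self.peer = None

    def connect(self, other):
        self.peer, other.peer = other, self

    def send(self, signal):
        self.peer._inbox.put(signal)  # signal travels to the connected port

    def receive(self):
        return self._inbox.get()


class Capsule:
    """A capsule owns its ports and interacts only through them."""

    def __init__(self, name):
        self.name = name
        self.port = Port()


# Two capsules collaborating through connected ports:
sm, sc = Capsule("SM"), Capsule("SC")
sm.port.connect(sc.port)
sm.port.send("feature_measurements")
received = sc.port.receive()
```

The point of the structure is that a capsule never names another capsule directly; replacing one side of a connection does not affect the other as long as the port protocol is respected.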
1 /* Initialization */
2 {COMMUNICATION WITH DB AND INITIALIZATION OF LOCAL PARAMETERS}
3
4 /*Main loop*/
5 while {CONDITION}
6 {
7 {RECEIVING INPUT}
8 {PROCESSING INPUT}
9 {SENDING OUTPUT}
10 }
11
12 /* Exit */
13 {STORING TO DB AND MEMORY DEALLOCATION}
Table 1. Execution scheme of a system process with the initialization phase, the main loop
and the exit phase.
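The scheme of Table 1 can be rendered as a compact, runnable sketch. The function and parameter names below are our own illustration; the numbered comments refer to the corresponding lines of Table 1.

```python
def run_capsule(parameter_db, receive, process, send, cycles):
    """Python rendering of the Table 1 scheme: initialization from the
    parameter database, a receive/process/send main loop, and an exit
    phase that stores results back to the database."""
    params = dict(parameter_db)             # initialization (line 2)
    results = None
    for _ in range(cycles):                 # main loop (lines 5-10)
        data = receive()                    # receiving input (line 7)
        results = process(data, params)     # processing input (line 8)
        send(results)                       # sending output (line 9)
    parameter_db["last_result"] = results   # exit: store to DB (line 13)


# A toy capsule that doubles its input according to a stored gain:
db = {"gain": 2.0}
out = []
run_capsule(db, receive=lambda: 3.0,
            process=lambda d, p: d * p["gain"],
            send=out.append, cycles=2)
```

In the real system each capsule runs this loop in its own process or thread, and the receive/process/send callbacks are the port communications of Fig. 4.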
Generally, the role of the Controller presented in Fig. 3 is to receive the sensory data
from the sensors, to calculate the desired state and to communicate the next state to the
actuator. It may be divided into the Manufacturer’s Controller (MC), which includes the
software and hardware provided by the robot vendor for motion planning and motion
control, and the sensor-integration modules. These modules are displayed as capsules in Fig.
4 and are the Actuator Manager (AM), the Sensor Manager (SM), the State Calculator (SC) and
the Sensor-Actuator Interface (SAI); their communication with the organization- and execution-
layer modules is also presented.
Before we proceed to the description of the processing-layer capsules it is necessary to
describe briefly the organizational level. The role of the Procedure Manager is to issue
high level commands using events coming from the processing layer. It activates and
defines the states of all the processing layer capsules. The StateServer is the module of the
organization level that maintains the pre-defined robot states, which may include pose,
velocity and acceleration. The states may be retrieved from a database or may come as
input after online calculation. The ParameterDB stores all the parameters and attributes
that are useful for the execution of the capsules in the processing layer. The parameters’
update is performed online and offline. ParameterDB may also act as a link between the
User Interface and the processing layer.
We begin the description of the processing-layer capsules with the AM. Its role is to calculate
the next robot state and to forward it to the MC for execution. The AM receives at each robot
cycle the current end-effector state from the MC and the desired state from the sensory
modules; it then calculates the next state (e.g., using a control law - state regulation -
which helps to avoid unwanted end-effector oscillations) and sends it to
the MC. Different control laws can be applied to each DOF. The robot interpolation cycle
and the sensor cycle may differ significantly (multi-rate system). Therefore, for closed-loop
sensor-guided tasks the robot needs to be coupled with the sensors through the
intermediate capsule SAI; the SAI receives the desired state asynchronously from the sensory
modules whenever it is made available by the SC, and forwards the current desired state - 
calculated using sensory input - synchronously to the AM at each robot interpolation cycle.
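The asynchronous-to-synchronous coupling performed by the SAI can be sketched as a thread-safe latest-value slot. This is a minimal illustration under our own naming; a real implementation would live on the communication layer of Fig. 3.

```python
import threading


class SensorActuatorInterface:
    """Couples the slow sensor side to the fast robot cycle: the sensor
    side writes the desired state asynchronously whenever the SC produces
    one; the robot side reads the most recent value (or None if no new
    value has arrived) once per interpolation cycle, without blocking."""

    def __init__(self):
        self._lock = threading.Lock()
        self._desired = None

    def put_desired_state(self, state):
        """Called by the SC at the sensor rate."""
        with self._lock:
            self._desired = state

    def get_desired_state(self):
        """Called by the AM at the robot interpolation rate; consumes the
        stored value so stale data is never applied twice."""
        with self._lock:
            state, self._desired = self._desired, None
            return state


sai = SensorActuatorInterface()
before = sai.get_desired_state()       # no sensor data yet
sai.put_desired_state([0.1, 0.2])      # SC publishes asynchronously
after = sai.get_desired_state()        # AM picks it up synchronously
```

When `get_desired_state` returns None, the AM can hold the previous command or apply a zero correction, which matches the "current state error or zero" behavior described for the closed-loop task in Fig. 8.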
The role of the SM is to operate at the image level and to extract the image features from the
sensory data. In contrast, the role of the SC is to operate at the workspace level and to
use the image-level measurements of the SM to calculate the environmental state (e.g., physical
pose) of the end-effector or the target objects. The SM sends data requests to the sensors and
receives the sensory data from them. After image processing it outputs the feature
measurements to the SC. The SC receives the feature measurements from the SM along with the
current actuator state from the AM through the SAI. It outputs the desired state of the
end-effector to the AM through the SAI.
Each of the aforementioned capsules operates within a loop in a system process or thread.
The scheme of each of these system processes is displayed in Table 1. During initialization
the initial local parameters are loaded into memory from the ParameterDB and the appropriate
memory blocks are initialized (2). The main loop (5-10) performs the processing work. At
the beginning of the loop the data from other processes and the parameter database (when
appropriate) are read (7). Then the data are processed (8) and the results are sent to the
other system capsules or to the organization level (9). The integration platform may
implement synchronous or asynchronous communication from either or both sides. The
main-loop commands may vary according to the capsule state. During the exit procedure,
which is executed when the task is finished, the memory is released and the data may be
stored into the ParameterDB.
The implementation of the design framework uses the services of a communication layer
(Fig. 3), which has to be deterministic to cover the needs of a real-time industrial system.
The respective communication is performed through the ports presented in Fig. 4.
The communication that is performed within the system follows the producer-consumer
concept. A method for the communication of modules that reside in different processes is the
“proxy” pattern. In this pattern a pair of proxy objects is used, each residing in a different
thread or process. Local calls to the proxy objects are mapped to inter-process
messages (message destinations may be located either at compile time or at run time
through broker objects).
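The proxy idea can be shown in miniature: a local method call on one side becomes a message on a shared channel. This is an illustrative sketch with hypothetical names; in a real system the channel would be an inter-process transport rather than an in-process queue.

```python
import queue


class Proxy:
    """One half of a proxy pair: a local call is serialized into a message
    and placed on the pair's shared channel, where the remote half (or a
    broker) would pick it up and invoke the real object."""

    def __init__(self, channel):
        self._channel = channel

    def call(self, method, *args):
        # The caller sees an ordinary method invocation; the proxy maps it
        # to an inter-process message.
        self._channel.put((method, args))


channel = queue.Queue()
caller = Proxy(channel)
caller.call("measure", "region_a")  # looks local, travels as a message
```

The receiving side would unpack `(method, args)` and dispatch to the real capsule, which is what keeps the caller unaware of process boundaries.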
The synchronization protocols are divided into synchronous and asynchronous ones. The
“message queuing” pattern can be used to implement the asynchronous port
communication protocol, e.g., through queued messages of unit length. A synchronous
protocol can be implemented via the “rendezvous” pattern. The basic behavioral model is
that as each capsule becomes ready to rendezvous, it registers with a Rendezvous class and
then blocks until the Rendezvous class releases it to run. A detailed description of the use of
the proxy, message queuing and rendezvous patterns for inter-process communication and
synchronization may be found in (Douglass 2002). In the applications that we created using
the proposed framework the ports were implemented as proxy objects.
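The rendezvous behavior described above (each capsule registers and blocks until all parties are released together) maps naturally onto a barrier primitive. This is a minimal sketch of that behavioral model, not the framework's actual implementation.

```python
import threading

# Each capsule registers at the rendezvous point and blocks until all
# parties have arrived; then all are released to run.
rendezvous = threading.Barrier(parties=2)
order = []


def capsule(name):
    rendezvous.wait()       # register with the rendezvous and block
    order.append(name)      # runs only after both parties have arrived


threads = [threading.Thread(target=capsule, args=(n,)) for n in ("AM", "SC")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Neither capsule proceeds past the barrier until the other has registered, which is exactly the synchronous port protocol needed, e.g., when the SC must block until the SM's measurement completes.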
4, 6, 8). This communication was previously mentioned in line 2 of Table 1. After the task
execution the participating capsules are deactivated or eventually set to the idle state (messages
9, 10, 11, 12). Before deactivation the new parameters or results are stored into the
ParameterDB (line 13 of Table 1).
In the tasks presented in the following, these pre- and post-conditions apply, but we
will omit their presentation for simplicity.
7.3 Parameterization
During the offline parameterization task the user manually changes the parameters used by
one or more capsules and is able to evaluate the impact of the change. The parameters are
written to the ParameterDB through the user interface; they are then read
asynchronously by the active capsules and applied. If appropriate, the parameters may
be saved by the user for use in future tasks. The interactions for the parameterization of
an SM capsule are given in the following:
The ProcedureManager instructs the SM to measure (1). The measurement is executed (2) and the
results are sent to the ParameterDB (3), from where they are accessed (asynchronously) by the
User Interface (4). Steps (2-5) are repeated until the user changes an SM parameter, e.g., a
region of interest (6). The new parameters are read by the SM (7), and new measurements are
performed (8) and written to the ParameterDB (9). The new measurements are read
asynchronously by the User Interface (10). Steps 7-10 last for a sensor cycle and are
repeated until the user decides to permanently store the content of the ParameterDB (11).
Fig. 6. A parameterization example for the SM capsule. The user changes the SM parameters
and the SM reads them through the ParameterDB and applies them in processing. Finally the
parameters are saved by the user.
7.4 Training
A sub-use case of parameterization of particular interest is training, which is the
automated calculation of some sensor-related system parameters. An example of training is
the calculation of a feature Jacobian, which relates the image
features to the system state; another example is the calculation of the camera pose relative to the robot end-
effector (calibration). In these training tasks the end-effector moves stepwise along a
predefined training trajectory for each degree of freedom of the task space. During each
training step the sensor subsystem measures the features on the images that are acquired
synchronously by the cameras. When the training movement is completed, the required
parameters are calculated based on the feature measurements; they may be task-specific
(e.g., the feature Jacobian matrix) or system-specific (e.g., the camera pose with respect to the robotic
hand). Finally, the calculated parameters are stored into the database.
[Fig. 7 sequence diagram: the RobotServer, ProcedureManager, AM, SC, SM, StateServer and ParameterDB lifelines exchange the messages 1: PM_ReadState, 2: GetState, 3: CalculateNextState, 4: SetState, 5: SetState, 6: Measure, 7: Measure, 8: AppendMeasurements, 9: Calculate, 10: SetParameters, 11: PM_TaskFinished; steps 3-4 are repeated until the threshold is reached, and steps 2-8 are repeated for all training positions.]
Fig. 7. The capsule interaction during training. When a task requiring sensory input is to be
trained the end-effector moves stepwise along a predefined training trajectory for each degree of
freedom of the task space and the sensor subsystem measures the image features. When the
training movement is completed the parameters are calculated and then stored into the database.
The capsules that participate in training are the AM, SC and SM, which communicate with the
StateServer and the ParameterDB. The StateServer holds the states defining the training movement. The
training interaction using a single SM instance is presented in detail in Fig. 7 and proceeds as follows:
1. The ProcedureManager commands AM to read the desired training state from the
StateServer.
2. AM requests synchronously from SC the state vector that defines the next training
End Effector State (EES).
3. After receiving the requested state the next EES is calculated using a control law.
4. AM sets the next EES to the RobotServer to move the robot. The current EES is
returned.
Steps 3-4 may be repeated until the distance between the current and the desired training
state becomes smaller than a predefined threshold, because high positioning accuracy is
required. Another scenario is that a timeout occurs without achieving the desired positioning
accuracy, in which case an exception either stops training or warns the user.
Assuming the first case, we have:
5. The AM sends to the SC the currently achieved state, which acts as a trigger to activate
the sensors and measure. The AM waits until the measurement is executed.
6. The SC requests a measurement from the SM and remains blocked waiting for the response.
7. The SM executes the measurement. After measuring, the SC receives the measurements
and becomes unblocked.
8. The SC stores the measurement vector. Then the AM becomes unblocked.
Interaction steps 2 to 8 are repeated for each training step for all degrees of freedom of the
task space. After loop completion:
9. SC calculates the required parameters.
10. The parameters are sent to ParameterDB where they are stored.
11. The ProcedureManager is informed about the task termination.
After step (11) training is finished. The same procedure has to be repeated for all tasks that
need training. The steps (2-4) are executed in the robot interpolation cycle, while the
duration of step 7 is decided mainly by the duration of the sensor cycle.
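The training loop above can be sketched in compact form. The capsule names follow the chapter, but every class member, the control law, and the measurement model below are hypothetical simplifications, since the chapter does not list the actual interfaces:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal sketch of the training interaction (steps 1-10).
// All class and method names are hypothetical; the chapter defines
// only the capsule roles (AM, SC, SM, StateServer, ParameterDB).

struct StateServer {                 // holds the training states
    std::vector<double> states;
    std::size_t next = 0;
    bool hasNext() const { return next < states.size(); }
    double read() { return states[next++]; }            // steps 1-2
};

struct SM {                          // sensor manager: performs a measurement
    double measure(double ees) { return 2.0 * ees; }    // step 7 (dummy model)
};

struct SC {                          // sensor controller: collects measurements
    SM& sm;
    std::vector<double> samples;
    void trigger(double ees) { samples.push_back(sm.measure(ees)); } // steps 5-8
    double calibrate() const {       // step 9: here, a trivial gain estimate
        double sum = 0.0;
        for (double s : samples) sum += s;
        return samples.empty() ? 0.0 : sum / samples.size();
    }
};

struct AM {                          // actuator manager: positioning loop
    double current = 0.0;
    // steps 3-4, repeated until close enough to the desired training state
    bool moveTo(double desired, double threshold, int maxCycles) {
        for (int c = 0; c < maxCycles; ++c) {
            current += 0.5 * (desired - current);       // simple control law
            if (std::fabs(desired - current) < threshold) return true;
        }
        return false;                // timeout: training exception
    }
};

double runTraining(StateServer& ss, AM& am, SC& sc) {
    while (ss.hasNext()) {                    // loop over training states
        double desired = ss.read();           // steps 1-2
        bool ok = am.moveTo(desired, 1e-6, 100);
        assert(ok && "positioning timeout");  // exception case
        sc.trigger(am.current);               // steps 5-8
    }
    return sc.calibrate();                    // steps 9-10 -> ParameterDB
}
```

The per-state inner loop mirrors the repeated steps 3-4, while the outer loop mirrors the repetition of steps 2-8 over all training states.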
Fig. 8. The capsule interactions during sensor-guided movement for a closed loop system.
The actuator manager requests from the SAI the measured error of the EES, the SAI returns
the current state error or zero (due to the different camera and robot interpolation rate) and
the regulated pose is sent to the robot controller. The procedure is repeated until the state
error becomes very small.
The task of sensor-guided movement in open-loop configurations is much simpler than the
respective closed-loop task. We will examine the “look-then-move” and the “scan-then-move”
cases. In the former the sensors acquire a local view of the target, while in the latter
the sensors acquire a global view of the workspace before executing the task.
In the “look-then-move” scheme, when the robot is at a nominal pose (without moving) the
sensors acquire the data, these data are processed, the new state is extracted and then the
end-effector moves to reach the required pose. Here there is no need for an AM capsule due
to the fact that there is no need to apply a regulator at the end-effector level.
Fig. 9 describes the general interactions at the control object layer. The role of the
StateServer here is to provide the current end-effector state to the SC and to communicate the
states calculated from the sensory data to the remaining modules of the organizational level
and to the robot controller.
The StateServer (after receiving a command at the organizational level) sends to the SC the
current end-effector state (1) and then the SC requests synchronously from the SM to
acquire sensory data and provide image measurements (2). When the SM is ready with data
processing (3) the SC receives the measurements and continues with state calculation (4).
The calculated state is then sent to the StateServer (5). Then the state is forwarded to the
robot controller.
In the “scan-then-move” approach the robot moves initially to a predefined set of poses,
where the sensor is activated and thus a part of the workspace is scanned. The acquired
images are processed for feature extraction and registered for each pose using the end-
effector pose (or perhaps the images are processed after the execution of the scan
movement). From the extracted image features the end-effector states are calculated in order
to perform the task.
The StateServer (after receiving a command from the execution layer) sends the current state
to the SC (1). The SC requests synchronously from the SM to acquire data and measure (2);
when this is done (3) the measurements are registered (4) and the ProcedureManager is
informed about the end of the registration for the current end-effector pose. Steps 1-4 are
repeated for all predefined poses; then the next end-effector states are calculated (6) and sent
to the StateServer. The StateServer forwards them appropriately to the robot controller.
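As a minimal illustration of the scan-then-move data flow, the sketch below registers each scan line into world coordinates using the end-effector pose at acquisition time and then combines the lines into one image. The one-dimensional geometry and all type names are hypothetical simplifications:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the "scan-then-move" interaction (steps 1-6) in a 1-D world.
struct ScanLine { std::vector<double> depths; };       // raw SM output

// Step 4: registration expresses each sample in world coordinates using
// the end-effector pose at acquisition time.
std::vector<double> registerLine(const ScanLine& line, double eePose) {
    std::vector<double> world;
    for (double d : line.depths) world.push_back(eePose + d);
    return world;
}

// Steps 1-5: visit every predefined pose, scan, register, accumulate;
// the combined image is the input to the state calculation (step 6).
std::vector<double> scanThenMove(const std::vector<double>& poses,
                                 const std::vector<ScanLine>& lines) {
    std::vector<double> image;                         // combined range image
    for (std::size_t i = 0; i < poses.size(); ++i) {
        std::vector<double> w = registerLine(lines[i], poses[i]);
        image.insert(image.end(), w.begin(), w.end());
    }
    return image;
}
```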
8. Applications
The applicability of the framework has been verified through the implementation of
industrial applications. The UML language has been used for modeling and the C/C++
languages have been used for coding. The first application presented here is the sunroof
placement fitting, which aims to fit a sunroof onto a car body on the automobile production
line (Kosmopoulos et al. 2002).
We used the 6-DOF KUKA K-125 manipulator with the KRC-1 controller, which permits
the employment of external software for control at the end-effector level. The task of fitting
the sunroof on the car body was performed using four CCD cameras, monitoring the four
corners of the sunroof opening. The corners were identified as the intersection points of the
monitored edges (Fig. 10).
The general scheme introduced in Fig. 4 has been used. For this application four instances of
the SM capsule have been employed and one of the AM, SAI and SC. As regards the SM
capsule, the acquired data were of type char due to the monochrome camera images and
thus image objects of type unsigned char were defined. In the SC capsule we extracted the
desired robot state through an image-based, endpoint open loop approach. For the system
state calculation we used an extended Kalman filter. The system state was given by the
vector:
Wk = [x, x′, y, y′, z, z′, a, a′, b, b′, c, c′]k^T
which refers to the end-effector pose error and the corresponding velocities with respect to a
nominal state, which is achieved when the sunroof is correctly placed on the target. For the
calculation of the end-effector pose we have used an over-determined system through
measuring the image coordinates xi, yi of the four corner points. The measurement vector fk
is given by the following equation:
fk = [x1, y1, x2, y2, x3, y3, x4, y4]^T
The system Jacobian has been assumed to be locally constant and has been calculated
through training. We have also used classes for coordinate transformations from the target
coordinate system to the end-effector, the robot base, and other coordinate systems.
For the AM capsule we have implemented PID control laws for each degree of freedom. The
SAI was programmed to send the calculated state to the AM as soon as it was requested. A
zero vector was sent until the next calculated state was available from the SC.
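A per-DOF PID regulator of this kind might look as follows. The gains, the cycle time, and the pure-integrator plant used in the small simulation are illustrative assumptions, not values from the application:

```cpp
#include <cassert>
#include <cmath>

// Sketch of one PID regulator, as applied per degree of freedom in the
// AM capsule; the gains and sample time are illustrative only.
struct Pid {
    double kp, ki, kd;      // proportional, integral, derivative gains
    double dt;              // controller cycle (robot interpolation cycle)
    double integral = 0.0;
    double prevError = 0.0;
    double update(double error) {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Simple closed loop on a pure-integrator plant: the regulated pose
// correction drives the state error toward zero, as in Fig. 8.
double simulate(Pid& pid, double initialError, int cycles) {
    double state = initialError;
    for (int i = 0; i < cycles; ++i) {
        double u = pid.update(state);    // SAI supplies the state error
        state -= u * pid.dt;             // robot executes regulated pose
    }
    return state;                        // residual error
}
```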
All the use cases described in Section 5.2 have been addressed. The closed-loop
communication sequences described in Section 7 for a single SM capsule were, in this
application, executed in parallel for the four SM capsules.
A second industrial application implemented using our generic framework, addresses the
depalletizing (or robotic bin picking, or pick and place problem) in the context of which a
number of boxes of arbitrary dimensions, texture and type must be automatically located,
grasped and transferred from a pallet (a rectangular platform), on which they reside, to a
specific point defined by the user (Katsoulas & Kosmopoulos 2006).
We used the KR 15/2 KUKA manipulator, a square vacuum-gripper, and a time-of-flight
laser range sensor (model SICK LMS200), which provides 3D points detected by a reflected
laser beam. The sensor is mounted on the gripper and the latter is seamlessly attached to the
robot's hand (Figure 10).
Due to the fact that only a line is acquired in each sensor cycle, the robot moves the sensor
over the pile while acquiring data so that the whole pile can be viewed. The “scan then
move” process is utilized for registering one dimensional scan lines obtained by the laser
sensor during the robot hand movement, into a 2.5D range image. Finally the object states
are extracted from the constructed image and the boxes lying on the upper layer, which is
visible to the sensor, are grasped one by one and placed at a new position. For the next layer
the same procedure is executed until the pallet is empty.
The general scheme presented in Figure 4 has been implemented omitting the AM (due to
the fact that no closed-loop control functionality is needed), and the SAI (because the robot
moves only when the data acquisition is finished and no additional synchronisation is
needed); their connections are also omitted.
The communication with the manufacturer’s robot controller is achieved through the
organizational level (StateServer).
The processing layer of the application consists of one instance of the SM and the SC capsules;
the SM controls the sensor for data acquisition and the SC calculates the state of the target objects,
in other words their pose in the world coordinate system (target localization).
Sensor-guided movement follows the “scan-then-move” pattern described in Fig. 9b.
Firstly, while the robot is at the initial scan position the StateServer sends the current robot
state to the SC thus triggering a message from SC to SM to acquire data. The SM acquires an
image (scan line), which is returned to SC. The SC receives as input the range image
corresponding to the current configuration of the object on the pallet; using the current
robot state the coordinates of the image elements are calculated in the world coordinate
system. These steps are repeated for all predefined end-effector poses and then the scan
lines are combined to a single image. This range image, which holds information about the
environment, is processed and from it the target object states (poses) are finally extracted
and sent to the organizational level through StateServer.
Fig. 10. (a) The sunroof fitting robot using four cameras in a closed loop control scheme and
(b), (c) the depalletizing system during scanning and grasping
9. Conclusion
Due to the well-defined interfacing rules it is possible to replace modules, e.g., for
implementing control laws for the actuator DOFs or for processing the sensor data,
without much effort. This enables the use of freely available or commercial
libraries/components, maximizing the benefit of reusing well-tested modules. It is also
possible to integrate multiple instances of the related processes/capsules provided that
the interfaces remain the same. The need for a synchronisation module between sensors
and actuator in closed-loop control schemes has been underlined.
The most common use cases for industrial applications have been presented and the
related interaction patterns have been demonstrated. They include some typical open
loop and closed loop control schemes. Furthermore, two concrete examples of the
development of industrial applications facilitated by the presented framework were
demonstrated. Some more applications, not presented here due to confidentiality
issues, ranging from robotic measurement to grasping have already employed this
approach.
Although the development of design solutions and frameworks for robots is a challenging
task, we have managed to identify the similarities and differences for the sub-domain of
sensor-guided industrial manipulators. The development time of the related applications
has been reduced to approximately one third of the initial development time, owing to
the reusability of design and components. A good trade-off between over-generalization
and application-specific design has been found by building concrete examples based on
the general principles.
The presented work targets industry-specific systems, where the environment is structured
to a significant level. This means that the decisions/use cases taken at the organization level
are limited. More complex behaviors e.g., concerning mobile robots moving in unstructured
environments are not addressed here.
Anyone who deals with the development of robotic applications can benefit from this work,
especially those that seek to couple the mechanical flexibility of industrial robots, with the
flexibility to “build” various diverse applications with common characteristics.
In the future, the integration into our scheme of a sophisticated configuration mechanism,
such as glue code (code level) and a configuration tool in textual or graphical mode
(integration and application level), may improve the flexibility and the rapid deployment of
sensor-specific industrial applications, thus making our framework highly reconfigurable in
a dynamic manner. Another objective is to provide functionality through a black-box
framework within an open integrated environment for the development and testing of the
control objects. This environment will support the use of many sensors and actuators and
will include their models for simulation purposes, employing a variety of vision and
control algorithms.
10. References
Anderson, R. J. (1993). SMART: A Modular Architecture for Robots and Teleoperation, IEEE
International Conference on Robotics and Automation, pp. 416-421, Atlanta, Georgia,
May 1993
Corke, P. I. (1993). “Video rate robot visual servoing”, In:Visual Servoing, K. Hashimoto
(Ed.), vol. 7, Robotics and Automated Systems, pp. 257-283, World Scientific, ISBN
9810213646, Singapore
Coste-Manière, E., Turro, N. (1997). The MAESTRO language and its environment:
Specification, validation and control of robotic missions. IEEE International
Conference on Intelligent Robots and Systems, Vol. 2, pp. 836-841, Grenoble, France,
September 1997.
Douglass, B.P. (2002). Real-Time Design Patterns: Robust Scalable Architecture for Real-Time
Systems, Addison Wesley, ISBN 0201699567.
Hager, G., Toyama, K. (1998). XVision: A portable Substrate for Real-Time Vision
Applications, Computer Vision and Image Understanding, Vol. 65, No 1, January 1998,
pp.14-26, ISSN 1077-3142
Hutchinson, S., Hager, G., Corke, P. (1996). A tutorial introduction to visual servo control,
IEEE Transactions on Robotics and Automation, Vol. 12, No 5, May 1996, pp. 651-670,
ISSN 0882-4967.
Intel (2006). Open source computer vision library, http://www.intel.com/research/
mrl/research/opencv/
Katsoulas, D.K., Kosmopoulos, D.I. (2006). Box-like Superquadric Recovery in Range Images
by Fusing Region and Boundary based Information, Int. Conf. on Pattern
Recognition, Hong Kong, to appear.
Kosmopoulos, D.I., Varvarigou, T.A., Emiris, D.M., Kostas, A. (2002). MD-SIR: A
methodology for developing sensor-guided industry robots, Robotics and Computer-
Integrated Manufacturing, Vol. 18, No 5-6, Oct-Dec 2002, pp. 403-419, ISSN 0736-
5845.
Mazer, E., Boismain, G., Bonnet des Tuves, J.M., Douillard, Y., Geoffroy, S., Doubourdieu,
J.M., Tounsi M., Verdot, F., (1998). START: An application builder for industrial
robotics, Proceedings of IEEE International Conference on Robotics and Automation, Vol.
2, pp. 1154-1159, May 1998, Leuven, Belgium.
MVTec (2006). http://www.mvtec.com/
Selic, B., Rumbaugh, J. (1998). “Using UML for modeling complex real-time systems”, White
Paper.
Sperling, W., Lutz, P., (1996). Enabling open control systems – an introduction to the
OSACA system platform, Robotics and Manufacturing, Vol. 6., ISRAM 97, New York,
ASME Press, ISBN 0-7918-0047-4.
Orca (2006). Orca: components for robotics http://orca-robotics.sourceforge.net/index.html
Toyama, K., Hager, G., Wang, J. (1996). “Servomatic: a modular system for robust
positioning using stereo visual servoing”, In Proceedings International Conference on
Robotics and Automation, Vol.3, pp. 2636-2643, April 1996, Minneapolis, USA.
Valavanis, K.P., Saridis, G.N. (1992). Intelligent Robotics Systems: Theory, Design and
Applications, ISBN 0792392507, Boston.
ECV-Net –IUE (2000). http://www-sop.inria.fr/robotvis/projects/iue/main.html
VXL (2006). The vxl homepage, http://vxl.sourceforge.net
Vaughan, R. T., Gerkey, B., and Howard, A. (2003). On device abstractions for portable,
reusable robot code, In Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and
Systems, pp. 2421-2427, October 2003, Las Vegas, USA.
Montemerlo, M., N. Roy and S. Thrun (2003). Perspectives on standardization in mobile
robot programming: The Carnegie Mellon Navigation (CARMEN) toolkit. In
Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2436-2441,
October 2003, Las Vegas, USA.
22 Industrial Robotics - Programming, Simulation and Applications
Utz, H., Sablatnog, S., Enderle, S., and Kraetzschmar, G. (2002). MIRO - middleware for
mobile robot applications, IEEE Transactions on Robotics and Automation, Vol. 18, No
4, August 2002, pp. 493-497, ISSN 0882-4967.
Nesnas, I. A.D., Simmons, R., Gaines, D., Kunz, C., Diaz-Calderon, A., Estlin, T., Madison,
R., Guineau, J., McHenry, M, Hsiang Shu, I., Apfelbaum, D., (2006). CLARAty:
Challenges and Steps Toward Reusable Robotic Software, International Journal of
Advanced Robotic Systems, Vol. 3, No. 1, March 2006, pp. 023-030, ISSN 1729-8806.
2
Joint Torque Sensory in Robotics
1. Introduction
Joint Torque sensory Feedback (JTF) can substantially improve the performance of a robotic
system. JTF makes it possible to achieve dynamic control of a robotic manipulator without
the need for modeling its link dynamics. Moreover, it has been proved that JTF can achieve
a precise torque tracking in a manipulator joint by compensating the effect of joint friction
and actuator nonlinearities. Despite these advantages, accurate joint torque measurement
encounters several practical challenges. Since much of the torque/force reaction of the link
load on the joints appears in the form of nontorsional components, e.g. overhung forces and
bending moments, the torque sensing device has to not only bear these force/moment
components but also remain insensitive to them. In addition, it is desirable to design the structure of the
sensor such that it generates a large strain for a given load torque and therefore has a high
sensitivity. However, this is in conflict with the high-stiffness requirement for minimizing
the joint angle error introduced by the sensor.
The main objectives of this chapter are twofold: Firstly, in Sections 2 and 3, we describe the
technical challenges and practical solutions to accurate measurement of joint torques in the
presence of the non-torsional components in a robot joint. For a torque sensing device,
different mechanical structures will be examined and their properties, such as sensitivity and
stiffness in different directions and decoupling, will be explored. This allows a systematic
design of a sensor structure which is suitable for torque measurement in a robot joint. Finally,
the state-of-the-art and design of a torque sensor for robots will be presented in Section 4. The
design achieves the conflicting requirements of high stiffness for all six force and torque
components, high sensitivity for the one driving torque of interest, yet very low sensitivity for
the other five force/torque components. These properties, combined with its donut shape and
small size, make this sensor an ideal choice for direct-drive robotic applications. Experimental
data validates the basic design ideas underlying the sensor’s geometry, the finite element
model utilized in its optimization, and the advertised performance.
The second objective is to describe the application of joint torque sensory feedback (JTF) in
robot control. The main advantages claimed for JTF are (i) it can simplify the complexity of
the system dynamics by eliminating its link dynamics; and (ii) the control system is
inherently robust with respect to both external force disturbance and parameter variation.
First, assuming both actuator and torque sensor are ideal, we describe robot control with
JTF in Section 5. Then, the development of an adaptive JTF is presented in Section 6, which requires
only the incorporation of uncalibrated joint torque signals, i.e., the gains and offsets of
multiple sensors are unknown. Also, all physical parameters of the joints including inertia
of the rotors, link twist angles, and friction parameters are assumed unknown to the
24 Industrial Robotics - Programming, Simulation and Applications
controller. Finally, in Section 7, JTF is modified to cope with the actuator's finite-bandwidth
dynamics, i.e., a non-ideal actuator. An optimal JTF is designed in the frequency
domain that minimizes the control system’s sensitivity to load torque disturbances and load
dynamics. Experimental results demonstrate that the JTF remarkably improves the
disturbance attenuation and load decoupling properties of a joint servo controller.
and Lim, 1985; Jacobsen et al., 1991; deSilva et al., 1987). Hashimoto et al. (Hashimoto,
1989b) utilized the elasticity of the flex-spline in a harmonic drive to measure the joint
torque. This technique has the advantage of using the existing structural flexibility of the
robots. However, eliminating the error caused by rotation of the wave generator is
difficult, it requires a nontrivial gear modification, and this approach cannot be used in
direct drive systems. Many researchers (Pfeffer et al., 1989; Wu, 1985; Luh et al.,
1983; deSilva et al., 1987; Asada and Lim, 1985; Vischer and Khatib, 1995) choose not to
place the sensor directly at the joint shaft to avoid the detrimental effects of the support
forces and moments. Pfeffer et al. (Pfeffer et al., 1989) replaced standard attachment bolts
in the PUMA 500 joints with a flexure instrumented with strain gauges. Wu (Wu, 1985)
used a shaft with a thin hollow circular section that is supported by two bearings. Strain
gauges are mounted on the thin section. Luh et al. (Luh et al., 1983) cemented strain
gauges on the connecting output shaft which is mounted to the flex-spline of the
harmonic drive for each joint of the Stanford manipulator. Vischer (Vischer and Khatib,
1995) integrated a torque sensor with the gear transmission, while Asada et al. (Asada
and Lim, 1985) integrated strain gauges in the hub supporting the rotor of a direct-drive
motor. The strain gauges are cemented on three beams connecting the outer ring,
mounted to the motor rotor, and the inner ring, which is coupled to the motor shaft.
Asada et al. (Asada and Lim, 1985) cemented strain gauges inside the rotor of a direct-
drive motor for torque measurement. Since these sensors are not mounted directly on the
joints of a manipulator, the entire set of forces and moments are supported by the bearing
set rather than the sensor structure. However, these sensors are not ideal because they
cannot account for the friction in the joint bearings. Moreover, the mechanical joints are
complicated and sometimes bulky. In commercial torque sensors, non-torsional
components are not permitted or are highly restricted. Furthermore, they usually come in
bulky packages, are built for shaft mounting, and thus are not suitable for integration in a
robot joint.
to non-torsional components has been reported for this type of sensor (Vischer and Khatib,
1995), the structure is not adequate for a modular robot joint. This is because of the drastic
change in the physical size between the input (inner ring) and output (outer ring) of the
sensor. Rather, this type of sensor should be integrated with the gear or with the rotor of a
direct-drive motor, and hence it suffers from the same drawbacks as the type (a) and (b) sensors.
The hollow cruciform design (Fig. 1(d)) is used in commercially available torque
sensors (Lebow, 1997). In this design, strain is induced mainly by bending of the wing
elements. In order to achieve good sensitivity, the wing and sensor height is large, and as a
result, the stiffness is low and non-torsional torques must be kept small. The proposed
hollow hexaform sensor (Fig. 1(e)) is similar in its basic geometry to the hollow cruciform
sensor (d), which has only four wings. However, there are fundamental functional differences.
Due to the increased number of wing pairs, and the shorter height, strain is induced
primarily in torsion, resulting in a much stiffer sensor, and improved sensitivity. In
addition, this design can be optimized to support non-torsional torques, making it suitable
for direct drive robotic applications.
3. Sensor Design
In this section we describe how the new hollow-hexaform sensor achieves
(i) high sensitivity to torsion,
(ii) low sensitivity to non-torsional components, and
(iii) high stiffness in all axes of forces and moment.
(Omega, 1995) of the strain gauges. Then, assuming every strain gauge pair constitutes a
half-bridge circuit, the overall voltage output is given by v = κ t^T ε, where κ is the gain of
the strain gauges and t = [t1, t2, t3, t4]^T, ti ∈ {−1, +1}, represents the gain signs
corresponding to the negative and positive branches of a Wheatstone bridge circuit.
Substituting ε = Cp from (1) into the latter equation, we obtain
v = κ t^T C p = κ(w1 p1 + w2 p2 + … + w6 p6),   (2)
where wi is the inner product of vector t and the ith column of C. It is evident from (2) that,
in the most general case, the sensor output is the superposition of weighted components of
the generalized forces and moments transmitted through the sensor unless all weights
related to the exogenous forces and moments are zero. That is, the decoupling is achieved if
w1 = w2 = … = w5 = 0 and w6 ≠ 0. In this case, the sensor output is solely proportional to the
torsion torque p6, i.e.,
v = α p6,   (3)
where α = κw6 represents the overall sensitivity of the sensor. That vector t is orthogonal to
all columns of the C matrix except the last one underlines a condition on the structure of a
torque sensor by which the sensor exhibits the decoupling. As a result, such a sensor is not
sensitive to the supporting forces and moments transmitted through its structure.
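The decoupling condition is easy to check numerically: compute the weights wi = t · ci and verify that only the last one survives. The sign vector and the toy sensitivity matrix in the usage example are illustrative, not the sensor's actual values:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Compute the weights w_i = t . c_i for bridge sign vector t and
// sensitivity matrix C (gauges x load components, row-major).
std::vector<double> weights(const std::vector<double>& t,
                            const std::vector<std::vector<double>>& C) {
    std::vector<double> w(C[0].size(), 0.0);
    for (std::size_t i = 0; i < C.size(); ++i)
        for (std::size_t j = 0; j < C[i].size(); ++j)
            w[j] += t[i] * C[i][j];
    return w;
}

// Decoupling: the weights of the five exogenous components vanish,
// while the torsion weight (last entry) is nonzero.
bool decoupled(const std::vector<double>& w, double tol = 1e-9) {
    for (std::size_t j = 0; j + 1 < w.size(); ++j)
        if (std::fabs(w[j]) > tol) return false;       // w1..w5 must be 0
    return std::fabs(w.back()) > tol;                  // w6 must be nonzero
}
```

In the example below, the first five columns are identical across the four gauges, so the alternating bridge signs cancel them, while the last column alternates with the signs and adds up.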
Fig. 2. Basic torque sensor structure. A: solid discs; B: elastic element; C: strain gauge.
For the candidate geometry in Fig. 2 and with four strain gauges located at the numbered
locations, the parametric form of the sensitivity matrix can be derived from the symmetry of
the structure with respect to the external forces and torques.
The jth column of the matrix represents the strain sensitivities at the four site locations with
respect to the jth load case; e.g., the third column represents the sensitivity due to f3. The
identical elements in the matrix follow from the symmetric configuration of the structure
with respect to the different load cases. For instance, the symmetry of the strain gauges
with respect to the axial load, f3, implies identical elements in the third column. Now one
can readily verify that t is orthogonal to the first five columns of the matrix while t^T c6 ≠ 0,
and hence the structure satisfies the condition for mechanical decoupling.
There are two main reasons in practice that violate the above assumption of exact
symmetry among the measured strains. First, strain gauges exhibit variations in their
gauge factor. Second, the strain gauges are placed in areas with high strain gradients,
which makes the gauge outputs sensitive to placement errors; this too can be modeled
as a gauge gain error. As a consequence, exact cancellation of the non-torsional
components may not be achieved with the theoretical gain vector. By virtue of the
linear mapping (1), the non-torsional components produce negligible output only if all
elements of the sensitivity matrix outside the last column are sufficiently small. This
implies that the strain sensitivity to the non-torsional components has to be held to a
minimum by mechanical design. This condition, in conjunction with the decoupling
property of the sensitivity matrix, determines the capability of the sensor to reject the
effect of non-torsional force/torque components on the output and to provide a
high-fidelity output signal.
where δ is the torsional deflection. As mentioned earlier, the gain of the strain gauge, κ, is
Joint Torque Sensory in Robotics 29
one can conclude that high sensitivity and stiffness are achievable simultaneously only by
use of a high-strength material.
Because a linear response is desired from the sensor, the chosen sensor material must have a
linear strain-stress relationship. Steel is the best available industrial material that has good
linearity properties within a large stress range. Moreover, due to the oscillatory nature of
the loading, steel can work with infinite fatigue life, as the allowable strains are determined
based on the endurance limit. The endurance limit or fatigue limit is the maximum stress
under which mechanical failure will not occur, independent of the number of load cycles.
Only ferrous metals and alloys have an endurance limit.
The sensor is designed for a nominal torque of 300 Nm, based on the endurance limit of
mild steel, which is half of the yield point. Hence the safety factor in torque
overloading is two. Remarkably, FEM results demonstrated that the stress induced by
bending moment is very low for the proposed structure. As a result the structure can resist
bending moments as high as 2000 Nm, which is almost an order of magnitude higher than
the nominal torque.
respectively. A preliminary stress analysis showed that the axial and shear forces have
negligible elastic effects because they produce a uniform strain/stress field in the
elastic body, resulting in a very weak maximum strain. In fact, the bending moment is
the critical non-torsional component, and consequently two different load cases
corresponding to the external torsion and bending torques are established for FEM. It
is important to note that in robotic applications the maximum angular deflection due
to external torques (which is amplified by the robot links) is a more restrictive
constraint than the linear deflection due to the forces. It has been found that the
worst-case strain due to the bending load occurs when its axis lies perpendicular to
one of the wings, and consequently that axis is chosen for the bending load case.
The IDEAS solver uses the steepest-descent (gradient) method with a penalty function
for finding the local minimum; the penalty function simply adds the weighted constraint
equation to the objective function.
The post-processing stage was guided by the following design benchmarks: the tangential
and axial displacement of the disc's outer diameter; the principal strain in the axial
direction and parallel to the gauge axes, due to both load cases; and the maximum von
Mises stress/strain due to a combination of all load cases. Hence, the performance index
can be obtained from Figures 3(c) and 3(e), while the constraint condition is given by
Fig. 3(b). These design criteria were checked against the FEM results to modify the geometry
of the sensor iteratively. The FEM results of the elastic body's final design are shown in
Fig. 3. The worst-case von Mises stress, i.e. the combination of the two load cases, is
shown in Fig. 3(b), where its maximum reaches 150 MPa. This is close to the endurance
limit of mild steel with a reasonable factor of safety. Figs. 3(c) and 3(d) illustrate the
tangential and axial displacement fields, from which the torsional and bending stiffnesses
are derived: kz = 2.7 × 10^5 Nm/rad and kx = 4.9 × 10^6 Nm/rad, respectively. The
axisymmetric pattern in the figure confirms the correctness of the imposed boundary
conditions. Figs. 3(e) and 3(f) show the strain contour in the axial direction, in which the
strain gauges are oriented, for the 1st and 2nd load cases, respectively. The FEM results
demonstrate that the strain sensitivity in torsion is seven times higher than in bending,
while the bending stiffness is 18 times higher than the torsional stiffness.
in which load cells (MLP-50 from Transducer Techniques (Techniques, n.d.)) were installed.
The lever varied the tension in the cord gradually between zero and maximum. During
loading and unloading, the reference load cell output and the torque sensor output (device
under test) were recorded.
Fig. 7 depicts the ith motor axis and joint axis of a robot with n revolute-joints where each
joint is driven by a geared motor with gear ratio ni and assuming that the motor shaft is cut
right at its junction to the link. In the following the subscript i denotes quantities related to
the ith link or joint.
Let ωzi be the z-component of the absolute angular velocity ωi of the rotor; θi and Izi
denote the rotor angle and the rotor inertia about the z-axis. Also, τMi, τJi, and τFi denote
the torques of the motor, the joint, and the bearing friction, respectively, all of them seen
after the gearbox. Let us assume the following: (i) the principal axes of all rotor inertias are
parallel to their own joint axes; (ii) all rotors are statically balanced, i.e., their centers of mass
coincide with the rotation axes; and (iii) the rotors are axisymmetric, i.e., their inertias about the x
and y axes are identical. Also, as in (Kosuge et al., 1990), we assume that the deformation of
the joint torque sensors is negligible. Moreover, let the coordinate system {xi, yi, zi} be
attached to the ith joint according to the Denavit-Hartenberg convention. Since the rotors are
axisymmetric, the equation of motion of each rotor can be simply described by
Izi ω̇zi = τMi − τJi − τFi.   (5)
In the above equation, we need to obtain an expression for the absolute angular velocity in
terms of joint-angle quantities. Let zj represent a unit vector in the direction of the jth
joint axis. Then, since the deformations of the torque sensors are assumed negligible, one
can compute the link velocities by a technique similar to that of Luh et al. (1980)
(6)
(7)
Equations (6) and (7) can be combined in the vectorial form as
(8)
where N = diag{ni}, and D is a lower triangular matrix whose elements are
It is worth pointing out that the unit vectors zj constitute the columns of the manipulator's
rotation Jacobian (Sciavicco and Siciliano, 2000). Hence, the D matrix can be constructed
from the Jacobian by
where the matrix function tri(·) returns only the lower-triangular elements of the input matrix,
while the rest of the elements of the returned matrix are zero. Defining the corresponding
stacked quantities and using (8) in (5), we get
and using (8) in (5), we get
(9)
where the right-hand side is the net torque acting on the rotors.
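The tri(·) operation described above is simple to realize. A minimal Python sketch follows (an illustration, not the chapter's code; keeping the diagonal is an assumption consistent with D being lower triangular):

```python
def tri(M):
    """Keep only the lower-triangular elements of a square matrix M
    (diagonal included, by assumption); zero out everything else."""
    return [[M[i][j] if j <= i else 0.0 for j in range(len(M))]
            for i in range(len(M))]
```

For example, tri([[1, 2], [3, 4]]) returns [[1, 0.0], [3, 4]].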
Example 1. The coupling matrix D of a general 3-DOF robot is
where α1 and α2 are the twist angles, and sαi and cαi represent sin(αi) and cos(αi).
Joint Torque Sensory in Robotics 37
Now, one can readily show that the following inverse-dynamics control scheme
(10)
where GD > 0 and GP > 0 are the feedback gains, can achieve dynamic tracking of the desired
trajectory θd.
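Equation (10) itself is not reproduced in this excerpt, but the structure of such an inverse-dynamics (computed-torque) law is standard. The sketch below, for a single joint with assumed dynamics I·θ̈ + b·θ̇ = τ, is an illustration only; the scalar form and the symbols I and b are assumptions, not the chapter's multi-joint formulation:

```python
def computed_torque(theta, dtheta, theta_d, dtheta_d, ddtheta_d,
                    I, b, GD, GP):
    """Inverse-dynamics law for assumed dynamics I*theta_dd + b*dtheta = tau:
    feed forward the desired acceleration, add PD action on the tracking
    error (gains GD > 0, GP > 0), and scale by the inertia."""
    e = theta_d - theta          # position error
    de = dtheta_d - dtheta       # velocity error
    return I * (ddtheta_d + GD * de + GP * e) + b * dtheta
```

With perfect tracking (e = de = 0) the law reduces to the pure feedforward torque I·θ̈d + b·θ̇, which is why the closed-loop error dynamics become linear and exponentially stable for positive gains.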
(16)
and L > 0 and Λ > 0 are the gains. The matrix inversion in (15) should not be a problem,
because the determinant is always nonzero, meaning that the matrix is nonsingular
regardless of the value of its arguments.
Consider the parameter update law
(17)
where Γ > 0 is the gain matrix. In the following, the stability of the control error will be
investigated.
Theorem 1. The control law (15), together with the parameter update law (17), ensures the
following properties: the error signals belong to L∞ and L2, where L∞ and L2 stand for the
spaces of bounded and square-integrable signals, respectively.
Proof: Substituting the control signal (15) in the system dynamics (13) yields the error
dynamics described by
(18)
Here we have used the property that the above identity holds for any dimensionally
compatible vector x. Taking a Lyapunov function candidate and differentiating it along the
trajectories of the error dynamics, using Remark 1, yields
(19)
From this, a standard argument (Ioannou and Sun, 1996) proves the theorem.
Eliminating the joint accelerations from equations (13) and (20) and solving the resulting
equation for τS yields
(21)
where the remaining terms are as defined above. Substituting τS from (21) into (15) gives an
expression for the control input independent of τS. That is
(22)
where the quantity defined above is the normalized estimate of the torque-sensor gain.
Since all variables of the function are bounded, it is bounded too. Hence, we can say τM is
bounded if and only if the matrix is invertible, i.e.
where
(23)
Therefore, the control input τM remains bounded and the control error asymptotically
converges to zero if this part of the system state does not enter the region defined above. In
this case, the estimate is bounded and hence s converges asymptotically to zero, which, in
turn, implies convergence of the tracking error to zero.
In the following, we derive a sufficient condition on the sensor-gain estimate to achieve a
bounded control input.
Remark 2. Using norm properties, we can obtain a conservative condition satisfying (23) as follows
(24)
Therefore, joint torques remain bounded provided that an over-estimation of the sensor
gains does not exceed this bound. Let us choose a constant so that the condition always
holds. Then, by virtue of Remark 2, one can show that (24) is satisfied if
(25)
Based on the projection adaptation law (Ioannou and Sun, 1996), it is possible to modify the
parameter update law (17) to ensure that inequality (25) holds. Let αi and ψi be the
ith elements of the parameter estimate and of −ΓY^T s, respectively. Then, consider the
following projection parameter adaptation law:
A standard argument (Ioannou and Sun, 1996) shows that the modified parameter update
law guarantees (25), while keeping (19) negative, i.e.
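The projection mechanism referred to above follows the general scheme in (Ioannou and Sun, 1996): integrate the nominal update, but zero any component that would push a parameter past its admissible bound. A discrete-time Python sketch (the single upper bound alpha_max and the scalar form are illustrative assumptions, not the chapter's exact law):

```python
def projected_step(alpha, psi, alpha_max, dt):
    """One Euler step of the update alpha_dot = psi, with projection:
    when an element sits on the upper bound and psi points outward,
    the update for that element is frozen so the bound is never left."""
    out = []
    for a, p in zip(alpha, psi):
        if a >= alpha_max and p > 0.0:
            p = 0.0                  # projection: stay in the admissible set
        out.append(a + dt * p)
    return out
```

For example, with alpha_max = 1.0 and dt = 0.1, an element already at the bound with an outward-pointing update stays at 1.0, while interior elements integrate normally.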
40 Industrial Robotics - Programming, Simulation and Applications
Then, one can show that addition of the JTF loop changes the overall compliance of the
motion servo to
which is equivalent to the weighted disturbance function (Aghili et al., 2001) if the
weighting function is chosen accordingly. Now, the problem is to find a stable
and realizable filter Q(s) (in the class of rational H∞ functions) such that the
maximum weighted sensitivity of the system is minimized, that is
(28)
Note that the first weighting function, W1(s), shapes the disturbance gain over the frequency
band of interest, while the second weighting function, W2(s), shapes the magnitude of the
optimal filter Q(s). Note that W2(s) causes the magnitude of the torque filter Q(s) to roll off
at high frequencies, where the magnitude of W1(s) is sufficiently low. Problem (28) is a
standard H∞ problem, and the optimal solution is available.
7.3 Setup
Fig. 9 illustrates the experimental setup, which consists of the McGill/MIT direct-drive
electric motor (Aghili et al., 2002b) mounted on the dynamometer, instrumented with our
custom torque sensor described in Section 4, and coupled to a hydraulic rack-and-pinion
rotary motor (Parker 113A129BME). The role of the hydraulic motor is to generate random
disturbance torques in order to measure the disturbance sensitivity of the servo controller.
7.4 Identification
The performance of joint torque feedback critically relies on knowledge of the actuator
dynamics, which, in turn, can be obtained from disturbance sensitivity tests. Let us consider
again the control system shown in Fig. 8. The challenge in identifying the system H(s) is that
the signal τM is not measurable. However, the actuator transfer function can be extracted
from measurements of the disturbance sensitivity functions with respect to two different
torque feedbacks. Denoting GJTF as the sensitivity function corresponding to the JTF case,
one can obtain the ratio from (27). Let us assume that the corresponding spectral densities
of τJ are measured in both cases. Then, one can obtain the empirical (non-parametric)
transfer function estimate from the last equation as
(29)
The next step of the identification involves a numerical procedure to represent the complex-valued
function (29) by a rational transfer function as closely as possible. Several parametric models
were examined, and it turned out that a second-order system is sufficient to match the
input-output behavior adequately. The parametric models approximating G(jω) and H(jω),
which are used for the feedback design, are
8. Conclusion
Motivated by the need for accurate joint torque sensing in robots, we designed a new torque
sensor based on the hollow hexaform geometry. Its key features are its extremely high
stiffness and its insensitivity to the support forces and moments which persist in a
robot joint. These features permit the sensor to be mounted directly in the joints of a robot
manipulator, leading to accurate joint torque sensing and to a compact and modular design.
The structure of the sensor also exhibits strain concentration under torsion loads, which
maximizes the sensitivity to torsion without sacrificing torsional stiffness. Other design
issues, such as practical shape considerations, material properties and overloading, were also
considered. The sensor geometry was analyzed and optimized using the finite element
method. The sensor was tested extensively to confirm its design goals, and it is well suited as
a torque-sensing device in robots and other industrial high-performance motion control
applications. A quantitative comparison with different types of sensors is shown in Table 1.
The table indicates that our sensor's performance characteristics compare very favorably.
The application of adaptive control in conjunction with joint-torque sensory feedback to
dynamic motion control of manipulators was then presented. The control system has the
advantage of requiring neither the computation of link dynamics nor the precise
measurement of joint torques, i.e., the torque sensor's gains and offsets are unknown to the
controller. The adaptive controller can also tune all the joint parameters, including the rotor
inertia, twist angles of the joint axes, and joint friction parameters. The stability of the
control system was investigated analytically. It was shown that the control error
asymptotically converges to zero if an upper bound on the estimated sensor gain is
respected. Subsequently, the parameter update law was modified based on the projection
algorithm to satisfy the boundedness condition.
Next, we formulated the problem of optimal positive JTF in the presence of the actuator's
finite-bandwidth dynamics. The theory of JTF was further developed to suppress the effect of
load-torque disturbances on a motion control system in the presence of actuator dynamics. An
experimental setup comprising a direct-drive electric motor, torque sensor, and hydraulic
motor was constructed to measure the disturbance sensitivity of a motion servo mechanism. The
results demonstrated that when the servo controller was cascaded with the optimal JTF, a
significant reduction in sensitivity was achieved. In our second experiment, a single link with
adjustable inertia was attached to the motor. In the absence of any torque feedback, the tracking
error increased due to load nonlinearity and grew with payload, while the optimal feedback
maintained the tracking accuracy.
9. References
Aghili, F. and M. Namvar (2006). Adaptive control of manipulator using uncalibrated joint-
torque sensing. IEEE Trans. Robotics 22(4), 854-860.
Aghili, F., M. Buehler and J. M. Hollerbach (2002a). Design of a hollow hexaform torque
sensor for robot joints. Int. Journal of Robotics Research 20(12), 967–976.
Aghili, F., M. Buehler and J. M. Hollerbach (2002b). Development of a high performance
joint. Int. Journal of Advanced Robotics 16(3), 233–250.
Aghili, Farhad, Martin Buehler and John M. Hollerbach (2001). Motion control systems with
h-infinity positive joint torque feedback. IEEE Trans. Control Systems Technology
9(5), 685–695.
An, C. H., C. G. Atkeson and J. M. Hollerbach (1988). Model-Based Control of a Robot
Manipulator. MIT Press. Cambridge, MA.
Asada, H. and S.-K. Lim (1985). Design of joint torque sensors and torque feedback control
for direct-drive arms. In: ASME Winter Annual Meeting: Robotics and Manufacturing
Automation, PED-Vol. 15. Miami Beach. pp. 277–284.
de Wit, C. Canudas, H. Olsson, K. J. Åström and P. Lischinsky (1995). A new model for
control of systems with friction. IEEE Transactions on Automatic Control 40(3), 419–
425.
deSilva, C. W., T. E. Price and T. Kanade (1987). Torque sensor for direct-drive manipulator.
ASME Journal of Engineering for Industry 109, 122–127.
Hashimoto, M. (1989a). Robot motion control based on joint torque sensing. In: Proc. IEEE
Int. Conf. Robotics and Automation. pp. 1256–1261.
Hashimoto, M. (1989b). Robot motion control based on joint torque sensing. In: IEEE Int.
Conf. Robotics and Automation. pp. 256–261.
Hirose, S. and K. Yoneda (1990). Development of optical 6-axial force sensor and its signal
calibration considering non-linear interference. In: Proc. IEEE Int. Conf. Robotics and
Automation. Cincinnati. pp. 46–53.
1. Introduction
Robot automation increases productivity and product quality, and frees humans from
involuntary, unhealthy and dangerous work. While computational power has increased
exponentially during the last decades, limitations in flexibility constitute today a
bottleneck in the further evolution of robotic systems.
One application for intelligent systems is seam tracking in robotic welding. Seam tracking is
used, among other things, in the manufacturing of large ships such as cruisers and oil tankers,
where relatively high tolerances in the sheet material are allowed to minimize the
manufacturing costs (Bolmsjö, 1997; Fridenfalk & Bolmsjö, 2002a). Today, seam tracking is
typically performed only in 2D, by a constant motion in x and compensation of the errors in
the y and z directions, see Fig. 1. There exist solutions for higher autonomy that include seam
tracking in 3D, where compensation is also performed in the o direction or around an
orientation axis. These limitations, however, make seam tracking possible only for work-pieces
with relatively simple geometrical shapes.
The next step in the evolution of sensor systems in robotic welding is the introduction of a
full 6D sensor-guided control system for seam tracking, able to correct the TCP in x,
y and z and around roll, pitch and yaw. Such an ultimate system is by definition able to follow
any continuous 3D seam with moderate curvature. This system has many similarities with
an airplane control system, where a change in either position or orientation affects all other
degrees of freedom.
Though seam tracking has been performed in 2D for at least 40 years, the hypothetical
design of 6D seam tracking systems was probably first addressed at the beginning of the last
decade, with suggested models based on force sensors (Bruyninckx et al., 1991a;
Bruyninckx et al., 1991b) and laser scanners (Nayak & Ray, 1990a; Nayak & Ray, 1990b). It is,
however, difficult to evaluate these systems, since no experimental results are presented,
nor are any explicit control systems for creating balance between the subsystems.
The contribution of this paper is the invention of a universal 6D seam tracking system for
robotic welding, validated by simulation experiments based on physical experiments, which
proved that 6D seam tracking is possible and even very robust for laser scanning (Fridenfalk
& Bolmsjö, 2002b). On a more detailed level, the contributions of this paper are considered
to be the introduction of: (1) differential control in laser scanning, using the same techniques
as used in arc sensing (Cook et al., 1987), (2) trajectory tangent vector control using
differential vector interpolation, and (3) pitch control by vector interpolation. These
components constitute the foundation of the 6D seam tracking system.
Fig. 1. Definition of Tool Center Point (TCP) and the orthonormal coordinate system {n, o, a}.
Here, o is opposite to the direction of welding and perpendicular to the plane Ω, and a is the
direction of approach. {x, y, z} is a local coordinate system, defined for the work-piece.
The authors have found only a few references in the literature which describe similar
work, namely the introduction of a trajectory tangent vector by curve fitting (Nayak & Ray,
1990a; Nayak & Ray, 1990b). There exist, however, some essential differences in how
such a trajectory vector was used and implemented. The differences consist of (1) curve fitting
by 2nd- instead of 3rd-degree polynomials, for faster computation and still high control
stability, (2) using an analytic solver for curve fitting of 2nd-degree polynomials, developed
and implemented for this system, increasing the calculation speed further, and (3) using
differential vector interpolation instead of direct use of the trajectory tangent vector, which
proved to be essential for maintaining control stability.
The accuracy needed to perform robotic arc welding depends on the welding process. In our
case we were working with GMAW (Gas Metal Arc Welding). The accuracy needed to
follow the seam and position the weld torch relative to the weld joint is on the order of ± half
the diameter of the welding wire, or ±0.6 mm in our case. As indicated above and shown in our
simulations, the curvature or radius is an important measure of the performance of the
system. As shown by the simulation experiments, seam tracking of curves with a radius down
to 20 mm produces no added positional error during continuous path motion in the sensor
feedback loop which controls the robot. However, the torch angle changes from a 4-degree
deviation at 50 mm radius to 12 degrees at 20 mm radius. Thus, the accuracy needed for this
process can be obtained by the system, provided the sensor and robot have matching
specifications. As indicated by the results, the system can also be used for welding processes
which require higher accuracy, provided the robot and sensor can meet this as well.
A 6D seam tracking system increases the capability and flexibility in manufacturing systems
based on robotic welding using laser scanning. This reduces the need for accurate robot
trajectory programming and geometrical databases. The simulation results (based on
physical experiments) showed that the initial objective to design a 6D seam tracking system
Design and Validation of a Universal 6D Seam Tracking System in Robotic
Welding Based on Laser Scanning 49
was reached, with the system managing a radius of curvature down to 200 mm. Furthermore, if
low-speed seam tracking is performed without welding, to find the geometry of the work-piece,
there is basically no limit to how small a radius of curvature this system is able to manage.
Fig. 2. Cross section, plane Ω. Vector interpolation used for position and pitch control at laser
scanning. The input from the laser scanner system is the intersection point of the seam walls v0
and the unit vector v1 on Ω between the seam walls. D denotes the distance between v0 and p’.
Since GMAW in general requires a relatively low accuracy, however, arc sensors are a
competitive alternative to laser scanners. Arc sensing may also be used as a complementary
sensor to laser scanners in cases where the laser scanner is unable to access the seam track.
Fig. 3. The design of the robot system for 3D seam tracking based on arc sensing in the ROWER-2
project (left) was based on simulations in FUSE (right). The 6D seam tracking system presented
here was developed based on experience and results gained from the ROWER-2 project.
Fig. 4. Example of a 6D seam tracking experiment performed in Matlab. Each laser scan is
plotted at each control cycle while seam tracking is performed 180º around a pipe
intersection. The work-piece object corresponds to the object in the upper experiment in
Fig. 7.
where Pt is the current TCP and PΩt is the desired TCP for any sample point t. The errors
around n, o and a were calculated by the subsequent conversion of the resulting
transformation matrix to roll, pitch and yaw, here defined as rotation around, in turn, a, o
and n. Rotation in the positive direction is defined as counter-clockwise rotation from the
perspective of a viewer towards whom the axis points.
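The conversion from a rotation matrix to roll, pitch and yaw used for the error calculation can be sketched as below. This is a generic ZYX-style extraction in Python; the chapter defines the rotations around a, o and n, so the axis naming and convention here are assumptions for illustration:

```python
import math

def rpy_from_rotation(R):
    """Extract (roll, pitch, yaw) from a 3x3 rotation matrix R,
    assuming the composition R = Rz(yaw) * Ry(pitch) * Rx(roll)."""
    pitch = math.atan2(-R[2][0], math.hypot(R[0][0], R[1][0]))
    yaw = math.atan2(R[1][0], R[0][0])
    roll = math.atan2(R[2][1], R[2][2])
    return roll, pitch, yaw

def rotz(t):
    """Elementary rotation about z, used here only to build test matrices."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
```

For a pure rotation of 0.3 rad about z, the extraction returns roll = pitch = 0 and yaw = 0.3, as expected.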
3. Modelling
In this section the design of the 6D seam tracking system is described. The principal control
scheme of this system is presented in Fig. 5. The control system is based on three main
components, position, trajectory tangent vector and pitch control. Pitch denotes rotation around
o. The main input to this control system is the 4×4 transformation matrix PN−1, denoting the current
TCP position and orientation. The output is the next TCP, PN. N is the number of samples used for
orientation control. The input parameters are N, the proportional control constants K1, K2 and K3,
and the generic sensor input values ξ = [v0 v1] and ζ = v1, where v0 and v1 are defined in Fig. 2.
The proportional constants K1, K2 and K3 denote the control response to errors in position,
trajectory tangent vector and pitch control. N is also used in trajectory tangent vector control and
is a measure of the memory size of the algorithm. The seam profile presented in Fig. 1 is a fillet
joint. Since v0 and v1 may be extracted for any joint with a predefined signature, no other joints
were required in the experiments.
Fig. 5. The principal scheme of the 6D seam tracking control system. PN−1 and PN denote
current and next TCP. PN is calculated for each control sample designated for the seam
tracking motion.
Fig. 6. Trajectory tangent vector control is performed by the least square curve fitting of the
last N trajectory points (left) to a 2nd degree polynomial curve for x, y and z, followed by
vector interpolation (right). Here N = 5. X, c0, c1 and c2 denote vector entities. The trajectory
tangent vector F is for a 2nd degree polynomial equal to c1 + 2Nc2.
polynomials (for x, y and z). F = ∂X/∂t|t=N with X = C · T, where C = [c0 c1 c2] = [cx cy
cz]^T and T = [1 t t^2]^T (note that A^T is the transpose of a vector or matrix A
and has no correlation with the vector T). This gives F = C · ∂T/∂t|t=N = C · [0 1 2t]^T
|t=N = C · [0 1 2N]^T. C is calculated by the estimation of the parameters ci for i = {x, y,
z}, where ci = (A^T A)^−1 (A^T pi) and pi are the row vectors of xN, which may also be
written as xN = [px py pz]^T. The matrix A consists of N rows of [1 t t^2] with t = 1...N,
where t is the row number. It should be mentioned that C may be directly calculated
by the expression C = ((A^T A)^−1 (A^T xN^T))^T, but we have chosen to use the notation
above for pedagogical reasons.
4. oN = normalize(oN−1 − K2· (normalize(F) + oN−1)). Normalization of a vector v denotes
v divided by its length.
5. a'' = aN−1 − K3 · (aN−1 + v1).
6. aN = normalize(a'' − dot(oN, a'') · oN), where dot(oN, a'') is the dot product oN^T · a''.
7. PN = [oN × aN  oN  aN  pN; 0 0 0 1].
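Steps 4-7 above can be collected into a short Python sketch (plain-list vector helpers; the function name and the numeric test values are illustrative assumptions, not the authors' implementation):

```python
def norm(v):
    l = sum(c * c for c in v) ** 0.5
    return [c / l for c in v]

def sub(u, v): return [a - b for a, b in zip(u, v)]
def add(u, v): return [a + b for a, b in zip(u, v)]
def scale(k, v): return [k * c for c in v]
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def orientation_update(o_prev, a_prev, F, v1, K2, K3):
    # Step 4: differential vector interpolation of o toward -normalize(F)
    o_next = norm(sub(o_prev, scale(K2, add(norm(F), o_prev))))
    # Step 5: pitch interpolation of a toward -v1
    a_tmp = sub(a_prev, scale(K3, add(a_prev, v1)))
    # Step 6: Gram-Schmidt step keeps a orthogonal to o, then renormalize
    a_next = norm(sub(a_tmp, scale(dot(o_next, a_tmp), o_next)))
    # Step 7: n completes the right-handed frame {n, o, a}
    n_next = cross(o_next, a_next)
    return n_next, o_next, a_next
```

By construction the returned {n, o, a} triple is orthonormal, which is the property the TCP frame in step 7 relies on.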
The initial value of x is determined by linear extrapolation with respect to the estimated
direction at the beginning of the seam. The elements of F(i), with i = {1, 2, 3} for {x, y, z}, were
here calculated by the Matlab function pfit((1:N)',x(i,:)') (the input parameters x and y in this
function are [1 2 ... N]^T and pi as defined above). The pfit function was developed in this
work using Maple (Waterloo Maple Inc.) and gives an analytical solution to 2nd-degree
polynomial curve fitting for this application, which proved to work 8 times faster for N = 10
and 4 times faster for N = 100 than the standard Matlab function polyfit. The analytical
function pfit was implemented as follows:
function k = pfit(x,y)
n = length(x); sx = sum(x); sy = sum(y); sxx = x'*x; sxy = x'*y;
sx3 = sum(x.^3); sx4 = sum(x.^4); sxxy = sum(x.^2.*y);
t2 = sx*sx; t7 = sx3*sx3; t9 = sx3*sxx; t12 = sxx*sxx;
t15 = 1/(sx4*sxx*n-sx4*t2-t7*n+2.0*t9*sx-t12*sxx); % inverse determinant of A'*A
t21 = (sx3*n-sxx*sx)*t15;
k = 2.0*n*((sxx*n-t2)*t15*sxxy-t21*sxy+(sx3*sx-t12)*t15*sy)- ...
    t21*sxxy+(sx4*n-t12)*t15*sxy+(t9-sx4*sx)*t15*sy; % k = c1 + 2*n*c2
The calculation speed was optimized by applying Cramer's rule through an analytical
inversion of A^T A in Maple and the elimination of c0. In the function above, x and y are N × 1
column vectors, x' equals x^T, sum(x) is the sum of the vector elements of x, and a point
before an operator, such as the exponential operator ^, denotes element-wise operation. This
implementation may be further optimized by reusing terms that depend only on x in pfit
(which is equal to t outside pfit). Since such optimization is, however, outside the scope of
this paper, we have chosen to keep this presentation as simple as possible.
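As a cross-check, the closed-form solution in pfit can be transcribed directly to Python (a verification sketch only; the Matlab function above is the reference). For data sampled exactly from a 2nd-degree polynomial c0 + c1·t + c2·t^2, the returned value equals the tangent slope c1 + 2Nc2 at t = N:

```python
def pfit(x, y):
    """Closed-form least-squares 2nd-degree fit of (x, y), returning the
    derivative c1 + 2*n*c2 of the fitted polynomial at t = n = len(x).
    Direct transcription of the Cramer's-rule expressions."""
    n = len(x)
    sx = sum(x); sy = sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sx3 = sum(xi ** 3 for xi in x)
    sx4 = sum(xi ** 4 for xi in x)
    sxxy = sum(xi * xi * yi for xi, yi in zip(x, y))
    t2 = sx * sx; t7 = sx3 * sx3; t9 = sx3 * sxx; t12 = sxx * sxx
    t15 = 1.0 / (sx4 * sxx * n - sx4 * t2 - t7 * n + 2.0 * t9 * sx - t12 * sxx)
    t21 = (sx3 * n - sxx * sx) * t15
    return (2.0 * n * ((sxx * n - t2) * t15 * sxxy - t21 * sxy
                       + (sx3 * sx - t12) * t15 * sy)
            - t21 * sxxy + (sx4 * n - t12) * t15 * sxy
            + (t9 - sx4 * sx) * t15 * sy)
```

For y = 1.5 − 0.7t + 0.3t^2 with N = 10, the fit is exact, so the result is c1 + 2Nc2 = −0.7 + 6.0 = 5.3 (up to round-off).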
4. Experimental results
4.1 Work-piece samples
The position and orientation errors at seam tracking were calculated based on simulation
experiments of basically 8 different work-pieces, see Figs. 7-11. It should be mentioned that the
experiments in the bottom of Fig. 8 and Fig. 9 are not actually possible in practice due to collision
problems, but are performed for the analysis of the control system. The work-pieces in Matlab
were approximated by lines orthogonal to the direction of the weld and the resolution was defined
as the nominal distance between these lines. The resolution was here set to 0.02 mm. The reason
why no noise was added to the profile obtained from the laser scanner (except perhaps for
decreasing the number of measurement points compared to the M-SPOT, which may use up to
512 points per scan) is that a large number of measurements allows for high-precision
estimations of v0 and v1, with a constant offset error less than the accuracy of the laser scanner.
Since the 6D seam tracking system corrects the position and the orientation of the TCP relative to
the extracted profile, such an error will influence the behaviour of the control system only
marginally, basically adding a small offset to the experimental results in Figs. 7-11.
The work-pieces are categorized into two groups, continuous and discrete. In the
experiments presented in this paper, the range was 0−180º for continuous work-pieces and
12 mm for discrete ones. In seam tracking of a continuous work-piece, sudden changes are
interpreted as curves with a small radius of curvature, see Figs. 7-8, while in seam tracking
of a discrete work-piece a sudden change is regarded as a disturbance, see Figs. 10-11. The
settings of the control parameters alone decide the mode of the control system: continuous
or discrete. For seam tracking of objects with a large radius of curvature, the discrete mode
works well for both continuous and discrete work-pieces, see Fig. 9.
Fig. 7. Seam tracking, laser scanning. Pipe intersection (top) and work-piece for isolating
control around a circular seam (bottom). K1 = 1.0, K2 = K3 = 0.8, N = 10. Error denotes the
difference between desired and current pose.
Fig. 8. Seam tracking, laser scanning. Isolating rotation around n, following inner and outer
curves. Note that in this figure, the case in the bottom is not very realistic, but only included
for the completeness of the experiments. K1 = 1.0, K2 = K3 = 0.8, N = 10.
[Figure panels: angular error γ (deg) around n, o and a, and position error ε (mm), plotted against θ (deg) from 0 to 180º, for r = 50 mm.]
Fig. 9. Seam tracking, laser scanning. The same as in the previous figure, except for K2 = 0.1 and
r = 50 mm. This shows that K2 = 0.1 works fine as well for continuous seams with moderate
radius of curvature. Further, the same experiment was also performed with r = 20 mm,
which proved to be basically identical to this figure, but with saturation at 12 degrees
instead of 4. Note that while other laser scanning experiments were performed on the verge
of instability to find the limits of the seam tracking system, using the M-SPOT laser scanner,
the system is highly stable at moderate curvatures using low values for K2.
Fig. 10. Seam tracking, laser scanning. The step is introduced at x = 0 mm. K1 = 1.0, K2 = 0.1,
K3 = 0.8, N = 10.
The control parameters K1, K2, K3 and N used in the presented experiments proved to work well
for each category, continuous and discrete. These settings were found through more than 50 laser
scanning simulation experiments in Matlab; large deviations from these parameters proved
to create instability in the system, making the control components work against each other
instead of in synergy, resulting in divergence from the seam and ultimately failure at seam
tracking. The seam tracking system is theoretically able to handle any angle α (in Fig. 2) between
0 and 180º. In practice, however, a small angle may cause collision between torch and work-piece.
For simplicity, α was set to 90º for all experiments except for pipe intersections.
[Figure panels: angular error γ (deg) and position error ε (mm) plotted against x (mm) from −2 to 10 mm, for a position step Δn = 5 mm (top) and a 30º orientation step (bottom).]
Fig. 11. Seam tracking, laser scanning. In this figure, the case in the bottom denotes a step of
−30º performed around o. K1 = 1.0, K2 = 0.1, K3 = 0.8, N = 10.
As indicated in Section 2, an adaptation of the algorithm presented here can be used for arc
sensing. Depending on the actual welding application, either of the two sensors can be used. In
general, laser scanning can be used for searching, pre-scanning and during welding with very
high accuracy, while arc sensing can only be used during welding and weaving. Arc sensing in
general has a lower accuracy, though sufficient for GMAW, while laser scanning can be applied
to most welding processes, provided there is space for the laser scanner in front of the welding
torch. As laser scanning provides richer information about the weld joint, i.e. joint geometry and
distance to the weld joint in addition to the specific information used in seam tracking, it can be
used for optimized control of the welding process based on information in a WPS (Welding
Procedure Specification). Seen from a general perspective, our results form a foundation for
sensor guidance using a laser scanner in robotic arc welding. A future development from this
point will be to integrate the WPS to achieve the specified productivity and quality, which will
be important when automation of low-volume down to one-off production becomes possible.
6. Future work
Though the 6D seam tracking system proved to be very robust, optimization of the presented
algorithm is still possible. The control system worked well using proportional constants alone,
but if needed, for instance PID or adaptive control may be included to enhance the performance.
As an example, the control parameter K2 could be defined as a function of the "radius of
curvature" (or more specifically as a function of, for example, c2 in Fig. 6) of the 2nd-degree
polynomial used for calculation of the trajectory tangent vector, thereby enabling the system to
automatically change between normal operation, used for maximum control stability, and
extraordinary operation with fast response, used for management of small radii of curvature.
7. References
Arroyo Leon, M.A.A., Ruiz Castro, A. and Leal Ascencio, R.R. (1999). An artificial neural
network on a field programmable gate array as a virtual sensor. In Proceedings of the
Third International Workshop on Design of Mixed-Mode Integrated Circuits and
Applications, pages 114–117, Puerto Vallarta, Mexico.
Bolmsjö, G. (1997). Sensor systems in arc welding. Technical report, Robotics Group, Dep. of
Production and Materials Engineering, Lund University, Sweden, 1997.
Bolmsjö, G. (1999). Programming robot welding systems using advanced simulation tools.
In Proceedings of the International Conference on the Joining of Materials, pages 284–291,
Helsingør, Denmark.
Bruyninckx, H., De Schutter, J. and Allotta, B. (1991a). Model-based constrained motion: A.
modeling, specification and control. In Advanced Robotics, 1991. ’Robots in
Unstructured Environments’, 91 ICAR, pages 976–981.
Bruyninckx, H., De Schutter, J. and Allotta, B. (1991b). Model-based constrained motion: B.
introducing on-line identification and observation of the motion constraints. In Advanced
Robotics, 1991. ’Robots in Unstructured Environments’, 91 ICAR, pages 982–987.
Cook, G. E., Andersen, K., Fernandez, K. R., Shepard, M. E. and Wells, A. M. (1987). Electric
arc sensing for robot positioning control. Robot Welding, pages 181–216. Editor: J. D.
Lane. IFS Publications Ltd, UK, Springer-Verlag.
Corke, P. (1996). A robotics toolbox for MATLAB. IEEE Robotics and Automation Magazine,
3(1):24–32, Mar. 1996.
Doggett, W. and Vazquez, S. (2000). An architecture for real-time interpretation and visualization
of structural sensor data in a laboratory environment. In Digital Avionics Systems
Conferences, 2000. Proceedings. DASC. Volume 2, pages 6D2/1–6D2/8.
European Commission (2002). ROWER-2: On-Board Automatic Welding for Ship Erection –
Phase 2. Record Control Number 39785. www.cordis.lu, update 2002-11-05.
Fridenfalk, M. (2002). Eliminating position errors caused by kinematics singularities in robotic wrists for
use in intelligent control and telerobotics applications. Patent application SE0203180-5.
Fridenfalk, M. and Bolmsjö, G. (2001). The Unified Simulation Environment — Envision
telerobotics and Matlab merged in one application. In Proceedings of the Nordic
Matlab Conference, pages 177–181, Oslo, Norway.
Fridenfalk, M. and Bolmsjö, G. (2002a). Design and validation of a sensor guided robot
control system for welding in shipbuilding. International Journal for the Joining of
Materials, 14(3/4):44–55.
Fridenfalk, M. and Bolmsjö. G. (2002b). Intelligent system eliminating trajectory programming
and geometrical databases in robotic welding. Patent application SE0203239-9.
Fridenfalk, M., Lorentzon, U. and Bolmsjö, G. (1999). Virtual prototyping and experience of
modular robotics design based on user involvement. In Proceedings of the 5th
European Conference for the Advancement of Assistive Technology, pages 308–315,
Düsseldorf, Germany.
Fridenfalk, M., Olsson, M. and Bolmsjö, G. (1999). Simulation based design of a robotic
welding system. In Proceedings of the Scandinavian Symposium on Robotics 99, pages
271–280, Oulu, Finland.
Hailu, G. and Zygmont, A. (2001). Improving tracking behavior with virtual sensors in the
loop. In Proceedings of the IEEE Aerospace Conference, pages 1729–1737.
Nayak, N. and Ray, A. (1990a). An integrated system for intelligent seam tracking in robotic
welding. Part I. Conceptual and analytical development. In Proceedings of the IEEE
Conf. on Robotics and Automation, pages 1892–1897.
Nayak, N. and Ray, A. (1990b). An integrated system for intelligent seam tracking in robotic
welding. Part II. Design and implementation. In Proceedings of the IEEE Conf. on
Robotics and Automation, pages 1898–1903.
Oosterom, M. and Babuska, R. (2000). Virtual sensor for fault detection and isolation in
flight control systems — fuzzy modeling approach. In Proceedings of the IEEE Conf.
on Decision and Control, pages 2645–2650.
Pasika, H., Haykin, S., Clothiaux, E. and Stewart, R. (1999). Neural networks for sensor
fusion in remote sensing. In Proceedings of the Int. Joint Conf. on Neural Networks,
IJCNN ’99. pages 2772– 2776.
Ridao, P., Battle, J., Amat, J. and Carreras, M.(2001). A distributed environment for virtual
and/or real experiments for underwater robots. In Proceedings of IEEE Int. Conf. on
Robotics and Automation, pages 3250–3255.
4
Designing Distributed, Component-Based Systems for Industrial Robotic Applications
Italy
1. Introduction
The design of industrial robotic applications is changing dramatically as a result of rapid
developments in the underlying technologies and of novel user requirements. Communication
and computing technologies provide new standards and higher levels of performance.
Meanwhile, novel and more capable robotic platforms and sensors have become available. In
industrial automation, this technology push has opened up new application areas, while raising
the expectations cast upon robotic systems in terms of overall performance and functionality.
In next-generation robotic applications, robots will turn from traditional preprogrammed,
stationary systems into machines capable of modifying their behaviour based on interaction
with the environment and with other robots. Changes are also induced by user requirements
demanding higher efficiency, functionality and performance in the control of industrial plants.
In a modern manufacturing scenario, interaction with customers and suppliers must
increasingly occur remotely and on-line, e.g. for just-in-time manufacturing, remote
maintenance, and quality monitoring. For a number of reasons, transparent and widespread
access to run-time production data and to the internal control variables of robotic systems must
be provided. This requirement marks a distinct transition beyond the conventional idea of the
isolated robotic workcell. As an added element of complexity, integration between different
technologies on the shop floor and in the design and production management departments is
required to achieve the full potential of computer-integrated manufacturing.
The growth in complexity in industrial robotic applications demands higher levels of
abstraction. A first step toward the improvement of software quality and cost reduction has
been the increasing adoption of object-oriented reuse techniques, based on components and
frameworks. The next step is to gain a comparable acceptance of reusability and
transparency concepts as far as the distributed nature of industrial automation systems is
concerned. Indeed, industrial automation applications fall naturally into distributed
architectures, since they require the ability to connect multiple robotic and sensory devices,
which must be controlled in a coordinated manner. As discussed in this chapter,
communication middlewares are the key technology to achieve these goals.
for different robot programming methods (on-line, off-line, hybrid) should be provided,
along with transparency (i.e., all accessible applications should appear as if they were local).
Moreover, modern industrial systems need to be reconfigurable, able to produce different
parts depending on which machines are used and in which order. Control software and
physical resources, such as sensor and robot controllers, need to be easily connected to and
disconnected from the system. Hence, system devices must support multiple clients, along
with suitable authentication and concurrency protocols that ensure safe access according to
prescribed policies and priorities. Control actions, in fact, may require different access
levels: from simple sensor monitoring, to robot control, to system supervision.
Servers should ensure real-time operation, allowing implementation of the appropriate
control laws with guaranteed performance. Moreover, they should accept and manage
requests while preserving their ordering, and exhibit differentiated reactions depending on
their urgency. An industrial robotic system should also guarantee correct execution
priorities of application tasks at the server. Consistent, end-to-end propagation of priority
semantics is especially difficult when heterogeneous peers, with different operating systems
and programming languages, must be accommodated.
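The requirement that servers react to urgency while still preserving the ordering of requests can be illustrated with a small Python sketch (this is illustrative only, not CORBA code; the class name and the sample requests are invented): a priority queue with a monotonic tie-breaking counter keeps FIFO order among equally urgent requests.

```python
import itertools
import queue

class PriorityRequestQueue:
    """Orders requests by urgency while preserving FIFO order among
    requests of equal priority (lower number = more urgent)."""

    def __init__(self):
        self._q = queue.PriorityQueue()
        self._seq = itertools.count()  # tie-breaker preserving arrival order

    def submit(self, priority, request):
        self._q.put((priority, next(self._seq), request))

    def next_request(self):
        priority, _, request = self._q.get()
        return priority, request

rq = PriorityRequestQueue()
rq.submit(2, "log sensor data")
rq.submit(0, "emergency stop")
rq.submit(2, "update display")
print([rq.next_request()[1] for _ in range(3)])
# ['emergency stop', 'log sensor data', 'update display']
```

The urgent request is served first, while the two equal-priority requests retain their arrival order.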
Control of several devices and sensors in a networked environment requires concurrency in
server operations and a multithreaded server structure. This capability enables execution of
parallel independent or coordinated actions in the target workcell, improves the response
time, and simplifies CPU sharing among computational and communication services.
Portable thread synchronization mechanisms should therefore be available to achieve
reliable and efficient concurrency in servers.
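As a rough illustration of such a multithreaded server structure (a hedged Python sketch, not part of any system described here; `move_joint` is a hypothetical stand-in for a device command), a bounded thread pool can execute independent device actions in parallel while limiting resource usage:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def move_joint(joint, angle):
    """Stand-in for a device command; a real server would talk to hardware."""
    time.sleep(0.05)  # simulated actuation delay
    return f"joint {joint} -> {angle} deg"

# A thread pool lets independent device commands run concurrently,
# improving response time, while max_workers bounds CPU/thread usage.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(move_joint, j, a) for j, a in [(1, 30), (2, -15), (3, 90)]]
    results = [f.result() for f in futures]
print(results)
```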
The system should allow asynchronous (non-blocking) client requests to servers
controlling networked devices. This feature is not always required for simple applications
consisting of a limited number of distributed requests, possibly operated in stop-and-go
mode. However, it becomes necessary for advanced scenarios where multiple concurrent
activities can be invoked at the server. Examples of such tasks are coordinated operation of
multiple arms or concurrent sensing and manipulation.
In industrial applications, the timely availability of adequate sensory data is required for
plant control, production monitoring, remote maintenance, data logging, and post-
production analysis. Efficient and scalable data distribution techniques must be available
to return sensory information to a dynamic and possibly large set of data consumers.
Finally, while early approaches to distributed robotic systems were based on the development
of in-house solutions, new industrial applications take advantage of the availability of
standard infrastructures and technologies. In this way, the total cost of ownership and
management of the architecture controlling the robotic system can be reduced through the
adoption of commercial off-the-shelf (COTS) components and component-based design.
Approaches promoting software reuse, such as developing a common framework and
exploiting COTS components, have the potential of drastically reducing maintenance and
integration costs.
3. Distributed Middlewares
A distributed middleware is connectivity software designed to mediate the interaction of
software components in complex, distributed applications. During the 1990s, this technology
evolved greatly to provide interoperability and application portability in client/server
architectures. Distributed middleware has also emerged as an enabling technology for
distributed industrial robotic applications, which require the ability to interconnect multiple
and possibly heterogeneous platforms while ensuring location transparency and
extensibility. Among the most prominent actors in the distributed middleware arena are
Web Services and CORBA technologies.
Enabling several clients to control a single robot through a coherent and logically safe pattern
of interaction can be achieved by exploiting the CORBA Concurrency Control Service [OMG,
2000]. This service allows several clients to coordinate their concurrent accesses to a shared
resource so that its consistent state is not compromised. Of course, it is up to the developer to
define the resources and to identify situations where concurrent accesses to them conflict.
For these purposes, the Concurrency Service provides the lock as the basic coordination
mechanism. Each shared resource should be associated with a lock, and a client must get the
appropriate lock to access a shared resource. Several lock modes are defined, allowing
different conflict resolution policies among concurrent clients. We show how the CORBA
Concurrency Service can be exploited by means of an example. This pattern can be adapted
to several situations involving conflicting reading and writing access rights to one or more
devices across the distributed system.
In a scenario where several clients compete to control a single robot and/or access data from
multiple sensors, exclusive control of the robot must be granted to only one client in a given
interval of time, while simultaneous reading of sensor data should be allowed to other
consumers as well. For each robot, a Robot and a RobotStatus object are created. The
RobotStatus class maintains information about a robotic device, whereas the Robot class
controls movements and sets parameters. Then, for each Robot object, a lock object is created
and registered in the Naming Service.
As shown in Figure 2 (scenario 1), the client invoking commands on the Robot object holds
an exclusive lock ensuring exclusive control. Indeed, as the exclusive lock conflicts with any
other lock, a client requesting a lock on the same resource is suspended while waiting for its
release. Clients invoking methods on the RobotStatus object, instead, are not required to
hold locks, as the class has only "accessor" methods.
Fig. 2. Two scenarios of concurrent access to a single robot device from several clients.
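The locking scheme just described can be mimicked in plain Python (an illustrative analogue only, not the CORBA Concurrency Service API; `ResourceLock` and the lock-mode names are invented for this sketch): read locks are mutually compatible, while an exclusive lock conflicts with everything, so a client requesting the robot while another holds exclusive control is suspended until release.

```python
import threading

class ResourceLock:
    """Toy analogue of a Concurrency Service lock: 'read' locks are
    mutually compatible; an 'exclusive' lock conflicts with every lock."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._exclusive = False

    def acquire(self, mode):
        with self._cond:
            if mode == "exclusive":
                while self._exclusive or self._readers:
                    self._cond.wait()   # suspended until release
                self._exclusive = True
            else:  # "read"
                while self._exclusive:
                    self._cond.wait()
                self._readers += 1

    def release(self, mode):
        with self._cond:
            if mode == "exclusive":
                self._exclusive = False
            else:
                self._readers -= 1
            self._cond.notify_all()

robot_lock = ResourceLock()
robot_lock.acquire("exclusive")   # controlling client holds the robot
# ... invoke motion commands on the Robot object ...
robot_lock.release("exclusive")   # waiting clients may now proceed
```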
To preserve generality and cover a wider domain of applications [Chong et al, 2000], an
alternative scenario can be outlined, where a group of clients wants to control a single robot
Another client-side mechanism, enabling clients to take advantage of the concurrency available in
the server, has been introduced in the CORBA 2.3 Messaging Specification. Standard service
requests in CORBA systems rely on the Synchronous Method Invocation (SMI) model, which
blocks the client until the server notifies the end of the requested activity. Non-blocking
invocations with earlier CORBA versions either relied on methods that did not guarantee
delivery of the request or on techniques requiring significant programming effort and known to
be error prone [Arulanthu et al., 2000].
The Asynchronous Method Invocation (AMI) model [OMG A, 2004] provides a more
efficient way to perform non-blocking invocations with either a Polling or a Callback
approach. The AMI interface allows a CORBA-based system to efficiently activate multiple
concurrent actions at a remote teleoperated site. Moreover, as AMI and SMI share the same
object interface, clients can choose between synchronous or asynchronous calls, whereas
server implementation is not affected.
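The Polling and Callback styles can be approximated with Python futures (an analogue for illustration only, not the CORBA AMI API; `remote_invocation` is a hypothetical stand-in for a request to a remote teleoperated server):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def remote_invocation(action):
    """Stand-in for a request sent to a remote teleoperated server."""
    time.sleep(0.05)
    return f"{action}: done"

pool = ThreadPoolExecutor()

# Polling variant: the client keeps working and checks completion later.
fut = pool.submit(remote_invocation, "grasp")
while not fut.done():
    pass                         # the client is free to do other work here
print(fut.result())              # "grasp: done"

# Callback variant: a handler fires when the reply arrives.
replies = []
fut2 = pool.submit(remote_invocation, "release")
fut2.add_done_callback(lambda f: replies.append(f.result()))
pool.shutdown(wait=True)
print(replies)                   # ["release: done"]
```

Because both variants submit the same call, the "server" side (here, `remote_invocation`) is unaffected by the client's choice, mirroring the fact that AMI and SMI share the same object interface.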
The AMI interface, however, cannot guarantee that a set of parallel actions will actually be
executed at the "same" time. When a server receives non-blocking requests from a client, it
dispatches them to the Thread Pool according to their priorities, and they are started as soon as
possible. Due to the communication delays of the distributed system, requests reach the server
at different and unpredictable times, so synchronized parallel action execution is unlikely. This
behaviour is not satisfactory for many robotic applications, where a set of parallel actions must
often begin at the "same" time and coordination of their execution is required to ensure logical
correctness or safety. This problem can be solved by implementing a waiting rendezvous strategy
[Douglass, 1999] based on the available CORBA mechanisms. We have developed an
implementation where an instruction termed cobegin(n) prefixes the invocation of parallel actions
in a server, acting as a barrier for the next n method invocations, whose execution therefore does
not start until all the calls have reached the server. Without cobegin(n), the server schedules
actions invoked by AMI requests as soon as they arrive.
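The waiting-rendezvous idea behind cobegin(n) can be sketched with a barrier (a simplified Python analogue that ignores priorities and the actual CORBA mechanisms; the class and action names are invented):

```python
import threading

class Cobegin:
    """Sketch of the rendezvous behind cobegin(n): the first n invocations
    block at a barrier and start together once all have arrived."""

    def __init__(self, n):
        self._barrier = threading.Barrier(n)

    def invoke(self, action, log):
        self._barrier.wait()    # hold until all n requests reach the server
        log.append(action())    # then execute; a real server would honor priorities

log = []
gate = Cobegin(3)
threads = [threading.Thread(target=gate.invoke,
                            args=(lambda i=i: f"action {i} started", log))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))   # all three actions started, none before the last call arrived
```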
Fig. 3. Precedence graph of a concurrent task, consisting of an initial action start(), followed
by four concurrent actions with different priorities.
Experiments involving a simulated workload have been carried out to evaluate the
correctness and robustness of the server, which has been tested with a variety of sets of
concurrent actions with different priority levels and synchronization requirements. We
have also verified the effectiveness of cobegin(n) in avoiding priority inversion in the
execution of parallel actions. One of the experiments is described in Figure 3, which shows
the precedence relations, duration and priority of each method call. The correct outcome of
this experiment requires that the four concurrent methods be executed according to their
priority. Figure 4 compares two experimental executions of the task. Without cobegin(n)
(top diagram), the medium-priority action (ID 4), whose request is the first to reach the
server, is executed before the high-priority action (ID 1). With cobegin(n) (bottom diagram),
the priority of threads is always respected and no priority inversion occurs.
Fig. 4. Experimental results of concurrent actions without (top) and with (bottom) cobegin(n).
Solutions currently available in CORBA for the transmission of time-critical streaming data
are still quite limited. A standard for Audio/Video Streaming [OMG B, 2000] is included
among the CORBA specifications, but it only defines common interfaces for the negotiation
of connections among distributed applications, lacking details on their use and control. Due
to the weakness of this standard, most existing CORBA implementations do not take
advantage of the CORBA Audio/Video Streaming specification to handle multimedia
streams. It should be mentioned that OMG has recently defined a new specification to
facilitate the exchange of continuous data in the CORBA Component Model [OMG, 2005].
Designing Distributed, Component-Based Systems for Industrial Robotic Applications 71
A data distribution technique alternative to Audio/Video Streaming, less efficient but more
portable, adopts a Publisher/Subscriber communication model [Buschmann et al., 1996].
Whenever the Publisher (sensor) changes state, it sends a notification to all its Subscribers.
Subscribers in turn retrieve the changed data at their discretion. OMG introduced two variants of
the Publisher/Subscriber communication model in the CORBA standard, namely the Event and
Notification Services, that strongly decouple Publisher and Subscribers by means of an ‘‘Event
Channel.’’ Exploitation of these services for data distribution is investigated next.
Event Service. The CORBA Event Service [OMG B, 2004] allows suppliers and consumers to
exchange data without requiring the peers to know each other explicitly. The general idea of
the Event Service is to decouple suppliers and consumers using an Event Channel that acts
as a proxy consumer for the real suppliers and as a proxy supplier towards the real
consumers. Therefore, a supplier can perform a non-blocking send of sensory data into the
Event Channel, while the interested consumers connect to that channel to get the "event"
(Figure 5). This also allows a transparent broadcast of sensory data to multiple consumers.
The CORBA standard proposes four different models interleaving active and passive roles
of suppliers and consumers. We discarded the models with active consumers, as they can
produce blocking communications when new data are not available at the sensor proxy. For
distributed robotic applications, the most appropriate model seems to be the Canonical Push
Model, where an active supplier pushes an event towards the passive consumers registered
with the Event Channel.
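A minimal sketch of the decoupling an Event Channel provides under the Canonical Push Model might look as follows (illustrative Python only, not the CORBA Event Service API; the class name and event format are invented):

```python
class EventChannel:
    """Decouples suppliers from consumers: the supplier pushes once, and
    the channel forwards the event to every registered consumer."""

    def __init__(self):
        self._consumers = []

    def connect(self, consumer):
        self._consumers.append(consumer)

    def push(self, event):
        for consumer in self._consumers:
            consumer(event)   # push toward passive consumers

channel = EventChannel()
received_a, received_b = [], []
channel.connect(received_a.append)
channel.connect(received_b.append)
channel.push({"sensor": "force", "value": 1.8})   # one send, broadcast to all
print(received_a, received_b)   # both consumers received the event
```

Note that the supplier never names its consumers: adding a third consumer requires only another `connect` call, which is the decoupling property the text describes.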
Despite the benefits introduced by an Event Channel, experimenting with this service
highlighted several issues. Firstly, to avoid compile-time knowledge of the actual type of the
"event," sensor data must be communicated as the OMG IDL any data type. The
communication is therefore type-unsafe, and consumers are charged with the duty of
converting between any and the data type they need. Secondly, the Event Service
specification lacks event-filtering features: everything is conveyed through the Event
Channel, which in turn sends everything to every connected consumer. The burden of this
missing feature thus falls on the consumers, which are forced to filter the whole data set for
the items they really need. Moreover, the flow of unrequested data can again introduce the
problem of network saturation. Finally, the Event Service specification does not consider
QoS properties related to priority, reliability, and ordering. Attempting to ensure these
properties in an application results in proprietary solutions that prevent ORB interoperability.
Notification Service. A second solution investigated for the data distribution subsystem is
based on the CORBA Notification Service [OMG C, 2004], which has been introduced in the
CORBA Standard to overcome the limitations of the CORBA Event Service.
The Notification Service is essentially a superset of the Event Service; most
components of the Event Service architecture (Figure 5) have been enhanced in the
Notification Service. Notable improvements with respect to the Event Service include
filtering and QoS management. In the Notification Service, each client subscribes to the
precise set of events it is interested in receiving through the use of filter objects
encapsulating one or more constraints. Two filter types are defined: a forwarding filter,
which decides whether an event can continue toward the next component, and a
mapping filter, which defines event priority and lifetime. Moreover, QoS properties for
reliability, priority, ordering, and timeliness can be associated with a channel, with a
proxy, or with a single event.
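The forwarding-filter and per-event-priority ideas can be sketched as follows (again an illustrative Python analogue, not the Notification Service API; the class name, predicates, and event formats are invented):

```python
class NotifyChannel:
    """Sketch of two Notification Service ideas: each subscriber registers a
    forwarding-filter predicate, and events carry a priority tag that a
    consumer could use to order delivery."""

    def __init__(self):
        self._subs = []   # list of (filter predicate, inbox) pairs

    def subscribe(self, predicate):
        inbox = []
        self._subs.append((predicate, inbox))
        return inbox

    def push(self, event, priority=0):
        for predicate, inbox in self._subs:
            if predicate(event):            # forwarded only if the filter passes
                inbox.append((priority, event))

channel = NotifyChannel()
force_events = channel.subscribe(lambda e: e["sensor"] == "force")
all_events = channel.subscribe(lambda e: True)
channel.push({"sensor": "force", "value": 2.0}, priority=1)
channel.push({"sensor": "camera", "frame": 7}, priority=5)
print(len(force_events), len(all_events))   # 1 2
```

Unlike the plain Event Service, where every consumer sees every event, here the channel itself drops events a subscriber did not ask for.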
To summarize, for robot servers performing several complex tasks (e.g., sensor data
acquisition and distribution, motion planning, actuator control) and dealing with a large
number of attached clients, the Event Channel (offered by the Event and Notification
Services) represents an effective tool to keep the overall workload under control. When
consumers have different QoS needs, and at the expense of a slight overhead, the
Notification Service is the most appropriate solution, thanks to configurability options
not available in Distributed Callback and Event Service implementations.
The operator can dynamically configure the environment, adding new sensing capabilities
and moving the available hardware. The inset graph in Figure 10 shows the GUI with
windows displaying sensory information received from the stereo camera.
An advanced client station also supports control of the robot using an 18-sensor
CyberTouch (a virtual reality glove with tactile feedback from Immersion Corp.) and a
six-degree-of-freedom Polhemus tracker (Figure 10).
Fig. 10. An example of the client setup, and a snapshot of the GUI with a window displaying
sensory information received from the remote site.
Figure 11 shows the Z trajectory of the gripper during two teleoperated peg-in-hole tasks.
The solid line shows the gripper height during a task performed in the Ethernet LAN
environment. The dashed line shows the same data obtained with the same operator
remotely connected to the servers over a high-speed DSL link. In both cases the GUI was
used and proved very useful in ensuring effective interaction with the remote system. It can
be observed that the task duration was essentially the same for both experiments, even
though in the case of teleoperation over the LAN the operator was able to send more motion
commands (65 versus 44 in the other case) thanks to the better response time of the system,
i.e., motion commands (uplink) and sensor data (downlink) were delivered more quickly.
Fig. 11. Z trajectory of the gripper during the peg-in-hole task performed over an Ethernet
LAN (continuous line), and over a WAN (dashed line).
Fig. 12. Trajectory of the gripper in the peg-in-hole experiment (over an Ethernet LAN).
In the second test, four clients participated in the execution of a stacking manipulation
task (Figure 13). The clients were connected to the robotic server through a Fast Ethernet
switch with negligible latency. The main goal of the experiment was to investigate the
reliability of the mechanisms for concurrency control and access arbitration among
clients described in Section 4.1.
Fig. 13. A stacking experiment with the Puma 560 manipulator. The images are taken from
the left eye of the stereo camera.
Figure 14 traces the main environmental values during a task segment. The first row shows
the evolution of the distance between the robotic gripper and the table (Z), sampled at 1 Hz,
during the execution of the manipulation task. Management of concurrent client access is
described by the next four rows of Figure 14.
Each client can be in one of four states: exclusive control, collaborative control, waiting for
control, or observing. Waiting slots are experienced by clients asking for control when the
robot is already controlled in an incompatible mode by other clients. The bottom row of
Figure 14 shows the total number of clients connected to the system at each time, including
the time slots in which they are connected as observers.
Fig. 14. Tracing of a teleoperation experiment involving four clients that concurrently access
the robot arm.
Several task sessions demonstrated the reliability of the CORBA concurrency control
mechanisms in providing safe arbitration to critical resources. Tasks similar to the one
described were executed, and the presence of multiple clients controlling the robot never
prevented their successful completion.
6. Conclusion
The viability and cost effectiveness of new distributed industrial robotic applications can be
greatly enhanced by exploiting COTS-based software components. However, systems
implementing those applications must face demanding challenges: they should be dynamically
reconfigurable and highly scalable to deal with a potentially large number of peers, and they
should provide real-time features, guaranteed performance, and efficient concurrency
mechanisms, both locally and in a distributed environment. The CORBA middleware, especially
with its recent extensions, has proven well suited to the needs of many distributed robotic
systems. In this chapter, we have summarized our experience resulting from the development of
distributed robotic applications based on Real-Time CORBA.
Results show that exploitation of CORBA brings a number of remarkable advantages in the
distributed robotic domain, enabling portable, highly reconfigurable applications with support for
concurrency and real-time features. Furthermore, CORBA standard services for naming
resolution, data distribution and concurrency control avoid the need for ad hoc solutions, which are
often error prone, require significant development effort, and prevent portability.
Of course, providing transparency and high-level services in CORBA is not without cost.
Nevertheless, the recent scaling of network and Internet infrastructures to higher levels of
performance and QoS guarantees has made the overhead introduced by CORBA less
significant. The drawbacks encountered in our experience stem from the incompleteness of
the implementations of the CORBA standard suite. None of the available CORBA ORBs
offers a full implementation of the CORBA standard, i.e., one covering aspects such as
dynamic scheduling, fault tolerance, and fully compliant CORBA services. Furthermore,
some extensions of the standard would be useful, e.g., higher-level synchronization
mechanisms (condition variables, barriers) and more effective and usable services for
streaming and continuous media management. These deficiencies, however, are widely
acknowledged by OMG and the CORBA standards community, and for some of them
solutions are in the making.
7. Acknowledgements
The research described in this chapter has been supported by LARER (Laboratorio di
Automazione of Regione Emilia-Romagna, Italy).
8. References
Arulanthu, A.; O'Ryan, C.; Schmidt, D. C.; Kircher, M. & Parsons, J. (2000). The Design and
Performance of a Scalable ORB Architecture for CORBA Asynchronous Messaging.
Proceedings of the ACM/IFIP Middleware 2000 Conference, pp. 208-230, ISBN 3-
540-67352-0, New York, NY, USA, April 2000, Springer (Lecture Notes in
Computer Science)
Bayardo, R.J.; Gruhl, D.; Josifovski, V. & Myllymaki, J. (2004). An Evaluation of Binary XML
Encoding Optimizations for Fast Stream Based XML Processing. Proceedings of 13th
Int'l Conf. World Wide Web, pp. 345-354, ISBN 1581139128, New York, NY, USA, May
2004, ACM Press
Bichler, M. & Lin, K.-J. (2006). Service-Oriented Computing. IEEE Computer, Vol. 39, No. 3,
pp. 99-101, ISSN 0018-9162
Buschmann, F.; Meunier, R.; Rohnert, H.; Sommerlad, P. & Stal, M. (1996). Pattern-Oriented
Software Architecture, Volume 1: A System of Patterns. Wiley and Sons, ISBN
0471958697, New York
Chiu, K.; Govindaraju, M. & Bramley, R. (2002). Investigating the Limits of SOAP
Performance for Scientific Computing. Proceedings of 11th IEEE Int'l Symp. High-
Performance Distributed Computing, pp. 246-254, ISBN 0-7695-1686-6, Edinburgh,
Scotland, July 2002, IEEE CS Press
Chong, N.; Kotoku, T.; Ohba, K.; Komoriya, K.; Matsuhira, N. & Tanie, K. (2000). Remote
Coordinated Controls in Multiple Telerobot Cooperation. Proceedings of the IEEE
International Conference on Robotics and Automation (ICRA 2000), pp. 3138-3143, ISBN
0-7803-5889-9, San Francisco, CA, USA, April 2000, IEEE Press.
DiPippo, L.; Fay-Wolfe, V.; Zhang, J.; Murphy, M. & Gupta, P. (2003). Patterns in Distributed
Real-Time Scheduling, Proceedings of the 10th Conference on Pattern Language of
Programs 2003, Monticello, IL, USA, September 2003.
Douglass, B. (1999). Doing Hard Time: Developing Real-time Systems with UML, Objects,
Frameworks, and Patterns. Addison-Wesley, ISBN 0201498375, Reading, MA
Gamma, E.; Helm, R.; Johnson, R. & Vlissides, J. (1995). Design Patterns – Elements of Reusable
Object-Oriented Software. Addison-Wesley, ISBN 0201633612, Reading, MA
Gill, C.; Schmidt, D. C.; Krishnamurthy, Y.; Pyarali, I.; Mgeta, L.; Zhang, Y. & Torri, S. (2004).
Enhancing Adaptivity via Standard Dynamic Scheduling Middleware, The Journal of
the Brazilian Computer Society (JBCS) special issue on Adaptive Software Systems, Vol.
10, No. 1, pp. 19-30, ISSN 0104-6500
Liefke, H. & Suciu, D. (2000). XMill: An Efficient Compressor for XML Data. Proceedings of
ACM SIGMOD Conference on Management of Data, pp. 153-164, ISBN 1-58113-218-2,
Dallas, TX, USA, June 2000, ACM Press
Motahari Nezhad, H. R.; Benatallah, B.; Casati, F. & Toumani, F. (2006). Web Services
Interoperability Specifications. IEEE Computer, Vol. 39, No. 5, pp. 24-32, ISSN
0018-9162
OMG A (2000). Concurrency Service Specification, Version 1.0,
http://www.omg.org/technology/documents/formal/concurrency_service.htm
OMG B (2000). Audio/Video Streams, Version 1.0,
http://www.omg.org/technology/documents/formal/audio.htm
OMG (2002). Real-time CORBA 1.0 Specification,
http://www.omg.org/docs/formal/02-08-02.pdf
OMG (2003). Review draft of the 1.2 revision to the Real-time CORBA Specification (OMG
document realtime/03-08-01), previously designated Real-time CORBA 2.0,
http://www.omg.org/docs/ptc/01-08-34.pdf
OMG A (2004). CORBA/IIOP 3.0.3 Specification,
http://www.omg.org/docs/formal/04-03-01.pdf
OMG B (2004). Event Service Specification, Version 1.2,
http://www.omg.org/technology/documents/formal/event_service.htm
OMG C (2004). Notification Service Specification, Version 1.1,
http://www.omg.org/technology/documents/formal/notification_service.htm
OMG (2005). Streams for CCM Specification, http://www.omg.org/docs/ptc/05-07-01.pdf
Schantz, R.E.; Schmidt, D.C.; Loyall, J. P. & Rodrigues, C. (2006). Controlling Quality-of-
Service in Distributed Real-time and Embedded Systems via Adaptive
Middleware. To appear in The Wiley Software Practice and Experience Journal special
issue on Experiences with Auto-adaptive and Reconfigurable Systems, ISSN 0038-0644
Schmidt, D. C.; Levine, D. & Mungee, S. (1998). The Design and Performance of the TAO
Real-Time Object Request Broker. Elsevier Computer Communications Journal, special
issue on Building Quality of Service into Distributed Systems, Vol. 21, No. 4, pp. 294-
324, ISSN 0140-3664
Tian, M.; Voigt, T.; Naumowicz, T.; Ritter, H. & Schiller, J. (2004). Performance
Considerations for Mobile Web Services. Elsevier Computer Communications Journal,
Vol. 27, No. 11, July 2004, pp. 1097-1105, ISSN 0140-3664
W3C (2004). Web Services Architecture, Working Group Note.
Zhang, J.; DiPippo, L.; Fay-Wolfe, V.; Bryan, K. & Murphy, M. (2005). A Real-Time
Distributed Scheduling Service For Middleware Systems. Proceedings of the 10th
International Workshop on Object-Oriented, Real-Time, Dependable Systems, pp. 59-65,
ISBN 0-7695-2347-1, Sedona, AZ, February 2005
5
1. Introduction
Industrial robots are designed for tasks such as grasping, welding or painting, where the
working conditions are well defined. However, if the working conditions change, those
robots may not be able to work properly. Robots therefore require sensor-based control to
perform complex operations and to react to changes in the environment. Achieving such
complex applications requires the integration of several research areas in vision and control,
such as visual matching, visual tracking and visual servoing (Petersson et al. 2002).
Information about the actual system and its environment can be obtained from a great
variety of sensors. Vision is probably the sensor that provides the greatest and richest
sensing information, but the processing of that information is complicated. Nevertheless,
computer vision has improved considerably in recent years and is now frequently used in
robotic systems, although serious limitations appear in real-time applications due to the
time necessary for image processing.
The use of computer vision as a feedback transducer strongly affects the closed-loop dynamics of
the overall system. Latency is the most significant dynamic characteristic of vision transducers,
and it has many sources, including the transport delay of pixels from the camera to the vision
system, image processing algorithms, control algorithms, and communications with the robot.
This delay can cause instability in visual closed-loop systems. To achieve fast response and high
control accuracy, the design of a specific visual feedback controller is required.
Visual servoing is the result of merging several techniques from different fields, including image processing, kinematics, dynamics, control theory and real-time computing. The task in visual servoing is to control a robot so that it manipulates its environment with the help of vision as a feedback sensor. An excellent overview of the main issues in visual servoing is given in (Nelson & Papanikolopoulos, 1998).
In this work we present a complete methodology for designing visual servoing controllers for robotic systems built on a modular conception. The design method is applied in particular to industrial manipulators equipped with a camera mounted on the end effector, known as the eye-in-hand configuration. The modular conception means that the overall system is composed of a set of independent modules, or subsystems, that are put together to configure a modular system. In this kind of system any module can be replaced by another one with the same functionality, and the visual controller computation procedure will not change. The goal of any visual servoing
82 Industrial Robotics - Programming, Simulation and Applications
system is the same regardless of how it is constructed: to control the robot's end effector pose relative to the target object pose. The main advantage of our approach is that if a module has to be changed for any reason, it is not necessary to replace the others or to redesign the full platform, but just to compute a new controller.
This scheme allows us to deal with the problem of controller design from a more generic point of view. We propose a new strategy for designing the control subsystem, which offers advantages in terms of computation time, simplicity and object position estimation errors. One purpose is to demonstrate that competitive real-time performance can be obtained with respect to the more closed solutions presented in other works and other control configurations, while maintaining the advantage of easy system adaptability.
The remainder of this paper is structured as follows. In section 2 we review the relevant fundamentals of visual control architectures. In sections 3 and 4 we describe the proposed design methodology and then analyze the performance of the designed experimental platform to deduce the mathematical models of the involved components. In section 5 we design various control strategies for the cases of a static and a moving object, respectively. After that, we carry out a study of the designed controllers in section 6, and present and discuss some experimental results obtained on the platform in section 7. Finally, conclusions and future directions of our work are stated in section 8.
Fig. 1. Conventional robot position control scheme: the inverse kinematics transform converts the Cartesian reference Xref into joint references θref, which are tracked by the joint controllers acting on the robot geometry.
such error. Visual sensors represent an excellent option to measure and feed back the actual robot position in this loop and help to reduce positioning errors. The only drawback of using vision as a position sensor is the large amount of computing time required to obtain it.
The earliest applications combining computer vision and robot control were implemented under an open-loop scheme, known as the static look-and-move architecture. Figure 2 shows a control structure of this type that works by sequentially executing three independent steps (Weiss, 1984). The step-by-step sequence is repeated until the precision required by the task is achieved; therefore, the number of necessary iterations depends on the particular task. Note that the nature of this robot positioning process is completely iterative and sequential.
Fig. 2. Static look-and-move architecture. STEP 1 ("LOOK"): vision; STEP 2: computer calculations; STEP 3 ("MOVE"): robot.
Fig. 3. Closed-loop visual control structure: controller and robot with vision feedback.
attending to two main criteria: first, the way the signal generated by the controller is input to the robotic system and, second, the nature of the visual information supplied by the vision system, i.e. the nature of the controlled variable.
Concerning the first criterion, visual servoing architectures for controlling manipulators can be classified into two fundamental categories: dynamic look-and-move structures and visual servo structures. When the control architecture is hierarchical and uses the vision system to calculate the set of inputs to the joint-level controller, making use of inner joint feedback loops to stabilize the robot, it is referred to as a dynamic look-and-move structure. A very important advantage of this structure is that it separates the kinematic singularities of the mechanism from the visual controller, allowing the robot to be considered as an ideal Cartesian motion device. In contrast, the visual servo structure eliminates the robot controller, replacing it with a visual controller that directly computes the joint inputs; the visual loop is then the only mechanism used to stabilize the robot.
Concerning the controlled variable, visual servoing systems are classified into two groups: image-based control systems and position-based control systems. In an image-based control system, the error variable is computed in the 2D image space. This approach potentially reduces the computational delay, eliminates the need for image interpretation and eliminates errors due to sensor modeling and camera calibration. However, it requires online computation of the image Jacobian. Unfortunately, this quantity inherently depends on the distance from the camera to the target, which is particularly difficult to calculate in a monocular system. Many systems use a constant image Jacobian, which is computationally efficient but valid only over a small region of the task space (Hutchinson et al., 1996). On the other hand, in a position-based control system, the error variable is computed in the 3D Cartesian space. The main advantage of the latter approach is that the camera trajectory is controlled directly in Cartesian space.
Figures 4, 5, 6 and 7 show the schemes of the four possible structures resulting from the combination of the two criteria, for an eye-in-hand camera configuration. Notice that visual servo structures require direct access to the robot operating system in order to govern the joint variables directly, whereas dynamic look-and-move structures consider the robot as an independent system and can handle it as a black box.
There has been a significant amount of research activity on image-based control methods (Weiss et al., 1987), (Feddema & Mitchell, 1989), (Chaumette et al., 1991), (Espiau et al., 1992), (Khosla et al., 1993), (Corke, 1993), whereas only a few researchers have worked on position-based control methods (Koivo & Houshangi, 1991), (Allen et al., 1993), (Wilson et al., 1996), (Vargas et al., 1999). This tendency can be justified because image-based systems usually reduce computation delays and eliminate errors due to sensor modeling and camera calibration processes.
Fig. 4. Dynamic position-based look-and-move structure.
Designing and Building Controllers for 3D Visual Servoing Applications under a Modular Scheme 85
Fig. 5. Position-based visual servo structure.
Fig. 6. Dynamic image-based look-and-move structure.
Fig. 7. Image-based visual servo structure.
More recent works (Malis, 1999) consider a new visual servo structure that combines the advantages of image-based and position-based control schemes and avoids some of their disadvantages. This approach, known as 2½D visual servoing, is characterized by a controlled variable composed of two parts, one represented in the Cartesian space and the other in the image plane. It needs neither a 3D model of the object nor a precise camera calibration. However, the method is more sensitive to noise in the image.
Some final considerations can be made concerning the adequacy of the considered visual control architectures for designing and building systems from a modular conception. With this idea in mind, the most appropriate selection seems to be position-based systems combined with dynamic look-and-move structures. Several reasons support this choice:
• Position-based methods allow a direct and more natural specification of the desired trajectories for the end effector in Cartesian coordinates.
• Many robots have an interface for accepting Cartesian velocity or incremental position commands. In that sense they can be exchanged, and all of them can be considered as black boxes.
• The visual controller design of visual servo structures is very complex because the plant is nonlinear and highly coupled.
Fig. 8. Architecture of the platform: control computer and robot controller connected by a communication link.
3. Control subsystem: an estimator (optional) and a controller form the control subsystem. In principle, this subsystem should be implemented on a specific control computer, but it can also be included in the vision computer, in the robot controller operating system, or both.
4. Communications: the communication links between the vision and control subsystems, and between the control and robotic subsystems, form this module.
Notice that the overall system is multirate in nature: the sampling period of the manipulator is Tr, while the vision subsystem sampling period is Tv, with Tv larger than Tr. To simplify the study from the control point of view, the vision period can be adjusted to be n times the robot period, n being an integer:

Tv = n·Tr (1)
For notation purposes, we will use z to represent the z-transform of a system sampled with period Tv and ẑ for the z-transform of a system sampled with period Tr.
Fig. 9. Basic geometry of the robotic and vision subsystems in the considered configuration.
Let (xi, yi, zi) be the world coordinates of a given point pi located anywhere in the camera workspace, (cxi, cyi, czi) the camera coordinates of this point, and (fxi, fyi) the coordinates of the projection of the point in the image coordinate system. These coordinates are related by a (3×4) transformation matrix A which includes the perspective projection elements and the scale factors, as well as the relative translation and rotation between the two coordinate systems when they exist. In short, they are related by the expression:
\[
\begin{bmatrix} \omega\,{}^{f}x_i \\ \omega\,{}^{f}y_i \\ \omega \end{bmatrix}
= A \cdot {}^{c}p_i
= \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix}
\begin{bmatrix} {}^{c}x_i \\ {}^{c}y_i \\ {}^{c}z_i \\ 1 \end{bmatrix}
\tag{2}
\]
where the 12 parameters of the A matrix are estimated in the off-line calibration process.
To estimate the object position we take advantage of a known two-point geometric pattern. Both points are black circles lying in a plane, D being the constant distance between them, and one circle being double the size of the other. We assume that the pattern plane always remains perpendicular to the camera optical axis, i.e., the two points are at the same cz coordinate. Let (cx1, cy1, cz1) and (cx2, cy2, cz2) be the camera coordinates of the big and the small pattern points, respectively, in a given position, and let (fx1, fy1) and (fx2, fy2) be their corresponding coordinates in the projected image. We identify the actual object position in camera coordinates with the position of the biggest point, i.e., cpobj = (cx1, cy1, cz1). Let cpd be the desired object position in camera coordinates, i.e. the reference position of the object with respect to the camera, which remains constant during the experiment. The actual object position can be determined by solving the following system of five equations in five unknowns:
\[
\begin{cases}
(a_{11} - a_{31}\,{}^{f}x_1)\,{}^{c}x_1 + (a_{12} - a_{32}\,{}^{f}x_1)\,{}^{c}y_1 = -(a_{13} - a_{33}\,{}^{f}x_1)\,{}^{c}z_1 + (a_{34}\,{}^{f}x_1 - a_{14}) \\
(a_{21} - a_{31}\,{}^{f}y_1)\,{}^{c}x_1 + (a_{22} - a_{32}\,{}^{f}y_1)\,{}^{c}y_1 = -(a_{23} - a_{33}\,{}^{f}y_1)\,{}^{c}z_1 + (a_{34}\,{}^{f}y_1 - a_{24}) \\
(a_{11} - a_{31}\,{}^{f}x_2)\,{}^{c}x_2 + (a_{12} - a_{32}\,{}^{f}x_2)\,{}^{c}y_2 = -(a_{13} - a_{33}\,{}^{f}x_2)\,{}^{c}z_1 + (a_{34}\,{}^{f}x_2 - a_{14}) \\
(a_{21} - a_{31}\,{}^{f}y_2)\,{}^{c}x_2 + (a_{22} - a_{32}\,{}^{f}y_2)\,{}^{c}y_2 = -(a_{23} - a_{33}\,{}^{f}y_2)\,{}^{c}z_1 + (a_{34}\,{}^{f}y_2 - a_{24}) \\
({}^{c}x_1 - {}^{c}x_2)^2 + ({}^{c}y_1 - {}^{c}y_2)^2 = D^2
\end{cases}
\tag{3}
\]
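For illustration, system (3) can be solved in closed form: for a fixed cz1, the first four equations are linear, so each point's camera coordinates are affine functions of the common depth, and the distance constraint then becomes a quadratic in cz1. The following sketch (our own illustrative code, with the calibration matrix A and image coordinates assumed given) implements that idea:

```python
import numpy as np

def point_affine_in_depth(A, fx, fy):
    """From two rows of system (3): express (cx, cy) = p0 + p1 * cz
    for a point with image coordinates (fx, fy)."""
    M = np.array([[A[0, 0] - A[2, 0] * fx, A[0, 1] - A[2, 1] * fx],
                  [A[1, 0] - A[2, 0] * fy, A[1, 1] - A[2, 1] * fy]])
    b0 = np.array([A[2, 3] * fx - A[0, 3], A[2, 3] * fy - A[1, 3]])
    b1 = np.array([-(A[0, 2] - A[2, 2] * fx), -(A[1, 2] - A[2, 2] * fy)])
    return np.linalg.solve(M, b0), np.linalg.solve(M, b1)

def estimate_object_position(A, f1, f2, D):
    """Solve system (3) and return cpobj = (cx1, cy1, cz1)."""
    p0a, p1a = point_affine_in_depth(A, *f1)   # big circle
    p0b, p1b = point_affine_in_depth(A, *f2)   # small circle
    d0, d1 = p0a - p0b, p1a - p1b
    # ||d0 + d1*cz||^2 = D^2  ->  quadratic in the common depth cz
    coeffs = [d1 @ d1, 2.0 * (d0 @ d1), d0 @ d0 - D ** 2]
    cz = max(r.real for r in np.roots(coeffs)
             if abs(r.imag) < 1e-9 and r.real > 0)  # keep positive depth
    cx, cy = p0a + p1a * cz
    return np.array([cx, cy, cz])
```

With a trivially aligned projection matrix, for instance A = [[1,0,0,0],[0,1,0,0],[0,0,1,0]], the routine recovers the depth directly from the apparent distance between the two circles.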
The error vector cΔp = (cΔx, cΔy, cΔz) is then computed by subtracting the actual position from the reference position, cΔp = cpd − cpobj. This vector is equal but opposite to the robot incremental movement Δp required to cancel such error: Δp = −cΔp = cpobj − cpd. From figure 9 it can be seen that cpobj is the object position in the camera system and coincides with (pobj − pr) in the world system, i.e. Δp = pobj − pr − cpd. Therefore, every sampling period the vision subsystem provides a triplet of values Δp = (Δx, Δy, Δz) in the world coordinate system, representing the position increment the robot has to move so that the target object lies at the right position with respect to the camera. To obtain those values, every sampling period the vision subsystem must acquire an image of the pattern, digitize it, process it to obtain (fx1, fy1) and (fx2, fy2), evaluate and solve the system of equations (3) to obtain cpobj, and finally compute Δp = cpobj − cpd. Because of the required computation time, this information is not available until the next sampling instant. In that sense, when the overall system is working with period Tv, the vision subsystem can be considered a pure delay. Its transfer functions matrix is therefore:
V(z) = diag(z⁻¹, z⁻¹, z⁻¹) (4)
Figure 10 shows the representative block of the vision subsystem. This diagram also considers a noise signal rs, which can be produced, for example, by bad illumination conditions or by the digitizing process itself.
90 Industrial Robotics - Programming, Simulation and Applications
Fig. 10. Block representation of the vision subsystem: inputs pobj, pr and cpd, output Δp, additive noise rs, sampling period Tv.
\[
G(\hat{z}) = \operatorname{diag}\!\left( \frac{n_x}{(\hat{z}-1)(\hat{z}-\beta_x)},\; \frac{n_y}{(\hat{z}-1)(\hat{z}-\beta_y)},\; \frac{n_z}{(\hat{z}-1)(\hat{z}-\beta_z)} \right)
\tag{5}
\]
Figure 11 shows the representative block of the robotic subsystem. The sampling time is Tr
and e is a noise signal that represents small errors due to data acquisition, which will be
used in the identification process. That noise is modeled as a sequence of independent and
identically distributed random variables with zero mean.
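Looking ahead to the identification in section 4.3, a second-order model of this kind can be fitted to recorded input/output data by least squares. The sketch below uses a simplified ARX structure rather than the full ARMAX fit with Gauss-Newton search described later; all signal names are illustrative:

```python
import numpy as np

def identify_arx2(u, y):
    """Least-squares fit of the second-order difference equation
    y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2].
    Returns the parameter vector [a1, a2, b1, b2]."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    # regressor matrix: one row per sample k = 2 .. N-1
    Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    return theta
```

Exciting the plant with a sufficiently rich input (steps, ramps, random signals) makes the regressor matrix well conditioned, so the estimate is unique.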
Fig. 11. Block representation of the robotic subsystem: input v, output pr, additive noise, sampling period Tr.
Fig. 12. Block diagram of the overall system: vision subsystem V(z), control subsystem R(z) and robotic subsystem G(ẑ).
\[
T(\hat{z}) = \operatorname{diag}\!\left( \frac{1-\hat{z}^{-n}}{1-\hat{z}^{-1}},\; \frac{1-\hat{z}^{-n}}{1-\hat{z}^{-1}},\; \frac{1-\hat{z}^{-n}}{1-\hat{z}^{-1}} \right)
\tag{6}
\]
Fig. 13. Equivalent representation of the block diagram for velocity control.
At this point it is important to remark that the previous block diagrams correspond to multirate control systems, because two different sampling periods appear in all the representations. In order to design the controllers at the slowest sampling rate, we need the transfer functions matrix of the robotic subsystem corresponding to the sampling period Tv, whereas we know these transfer functions sampled with period Tr. To obtain the desired transfer functions, the following two-step procedure can be applied: (1) estimate the continuous transfer functions matrix from the discrete transfer functions matrix sampled with period Tr; (2) calculate the discrete transfer functions matrix by sampling the continuous transfer functions matrix with period Tv. Applying this procedure to G(ẑ), we obtain G(z). Then the
block diagram of figure 13 can be converted into a more generic block diagram as shown in
figure 14. Notice that Tv is the sampling period of this new diagram.
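In state-space form, the two-step procedure can be sketched as follows, under a zero-order-hold assumption: step 1 inverts the Tr discretization with a matrix logarithm, and step 2 rediscretizes at Tv. This is our own equivalent of the transfer-function procedure described above, not the method of (Feliu, 1986):

```python
import numpy as np
from scipy.linalg import logm
from scipy.signal import cont2discrete

def resample_zoh(Ad, Bd, Tr, Tv):
    """Convert a ZOH-discretized model (Ad, Bd) at period Tr to period Tv."""
    n, m = Bd.shape
    # Step 1: [[Ad, Bd], [0, I]] = expm([[Ac, Bc], [0, 0]] * Tr),
    # so the matrix logarithm recovers the continuous-time (Ac, Bc)
    M = np.block([[Ad, Bd], [np.zeros((m, n)), np.eye(m)]])
    L = logm(M).real / Tr
    Ac, Bc = L[:n, :n], L[:n, n:]
    # Step 2: rediscretize the continuous model with the vision period Tv
    Ad2, Bd2, *_ = cont2discrete(
        (Ac, Bc, np.eye(n), np.zeros((n, m))), Tv, method='zoh')
    return Ad2, Bd2
```

With Tr = 0.016 s and Tv = 0.16 s (n = 10), this reproduces what direct ZOH discretization of the continuous model at Tv would give.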
Fig. 14. Equivalent single-rate block diagram of the overall system, with sampling period Tv.
4.3 Identification
Once a model has been selected to represent each subsystem, an identification process is required to estimate the unknown parameters. In general terms, an identification experiment can be performed by exciting the system (using inputs such as steps, sinusoids or random signals) and observing the input and the output over a time interval. These signals are normally recorded in a computer for processing. A statistically based method can then be used to estimate the unknown model parameters, such as the coefficients of the difference equation. The model obtained is then tested to verify whether it is an appropriate representation of the system. If it is not, a more complex model must be considered, its parameters estimated and the new model validated. For our experimental platform, the identification process can be summarized as follows.
As explained in the modeling section, the vision subsystem behaves as a pure delay, so no unknown parameter must be estimated for it. The robotic subsystem operates through its ALTER line. This command allows us to introduce new reference values every 16 milliseconds, which means that on this platform Tr = 16 ms. As already mentioned, we consider a second-order linear system to model each coordinate of the robotic subsystem. To identify the respective parameters we have excited each Cartesian coordinate of the robot with different input signals (steps, ramps and parabolas) through its ALTER line. In order to take into account possible errors produced in the data acquisition process, an autoregressive moving average model with exogenous input (ARMAX) has been used to fit the robotic subsystem parameters, since this model includes a random term that can be interpreted as noise. The parameters are finally identified by minimizing the estimated square error function using an iterative Gauss-Newton algorithm. The transfer functions matrix obtained is given by:
(7). To this end, we have used the method described in (Feliu, 1986). The resulting continuous transfer functions matrix G(s) is then sampled with period Tv using a zero-order hold. Taking n = 10 in equation (1) and applying this discretization procedure to the transfer functions matrix of the robotic subsystem, the following G(z) is obtained:
Fig. 15. Task 1 symbolic drawing: desired object position and actual object position in the image.
For this task we are going to study various conventional and new methods to design the visual controller. System performance will be evaluated afterwards.
Root locus based controller
The first controller considered can be designed via root locus analysis. These controllers give the lowest performance, but they also produce the smallest values of the Cartesian acceleration reference. All the transfer functions of the matrix G(z) have one pole at z = 1 in the open-loop chain (they are type-1 systems), so the steady state error is zero for a unit step input in all of them. A proportional controller is therefore enough to control the robotic subsystem from the point of view of the steady state performance. The controller gains are chosen so that all the closed-loop poles are real, with the intention of avoiding oscillations while obtaining the fastest possible response. The obtained controllers are given by:
where the modified z-transform notation is used, as is customary in this type of approach. Finally, W(z) has the following expression:
\[
W(z) = \frac{R(z)}{1 + R(z)\,VG(z, m)}
\tag{12}
\]
Fig. 16. Block diagram used for the optimal controller design, with smoothing filter M(z), closed loop W(z) and error ξ.
(ISE). The error vector usually considered is ξ = pr − pobj, the difference between the applied input and the achieved output. Obviously, the final solution depends on the type of input chosen. For task 1 under consideration, we propose to modify the applied input by means of a smoothing transfer function matrix M(z), as indicated in figure 16. This modified input, denoted prd, is intended to be a more realistic reference for comparison with the robot response than the applied input itself. Notice that high-valued step references are not physically feasible for the robotic subsystem, because they involve high torques that could exceed the maximum allowed values, provoking saturation effects in the manipulator actuators. Therefore we have tested the following transfer function matrix:
\[
M(z) = \operatorname{diag}\!\left( \frac{1-a_x}{(z-a_x)\,z},\; \frac{1-a_y}{(z-a_y)\,z},\; \frac{1-a_z}{(z-a_z)\,z} \right)
\tag{13}
\]
where the (ax, ay, az) values are chosen to obtain the best output behavior with respect to speed and cancellation of oscillations under the existing constraints. In the simulations the best value was 0.3 for the three Cartesian coordinates, and the same values were kept in the experimental tests. The controller obtained is:
\[
R(z) = \operatorname{diag}\!\left( \frac{0.0446z^3 + 0.0435z^2}{z^3 + 0.9849z^2 + 0.5180z + 0.0717},\; \frac{0.0441z^3 + 0.0449z^2}{z^3 + 0.9777z^2 + 0.5163z + 0.0663},\; \frac{0.0437z^3 + 0.0451z^2}{z^3 + 0.9754z^2 + 0.5159z + 0.0644} \right)
\tag{14}
\]
Deadbeat controller
Another design possibility is a deadbeat controller, which produces the minimum settling time. Computing this controller for the three coordinates gives the following transfer functions matrix:
\[
R(z) = \operatorname{diag}\!\left( \frac{0.0979z^2}{z^2 + z + 0.1615},\; \frac{0.0995z^2}{z^2 + z + 0.1469},\; \frac{0.0995z^2}{z^2 + z + 0.1419} \right)
\tag{15}
\]
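The deadbeat design rule can be checked in simulation. The sketch below is not the chapter's controller (the identified G(z) coefficients are not reproduced here); it designs a minimum-settling deadbeat controller for a generic type-1 second-order plant, keeping the plant zero uncancelled:

```python
import numpy as np

def deadbeat_controller(b, a):
    """For G(z) = B(z)/A(z) with B = [b1, b0] and A = [1, a1, a0] (type 1),
    choose the closed loop T(z) = B(z) / (B(1) * z^2), which settles a step
    in the minimum number of samples without cancelling the plant zero.
    Then R(z) = T / (G * (1 - T)) = A(z) / (B(1)*z^2 - B(z))."""
    b, a = np.asarray(b, float), np.asarray(a, float)
    num_r = a
    den_r = np.polysub(b.sum() * np.array([1.0, 0.0, 0.0]),
                       np.concatenate([[0.0], b]))
    return num_r, den_r
```

Closing the loop with this R(z) around the assumed plant gives a step response that reaches the reference exactly at the second sample and stays there.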
Fig. 17. Task 2 symbolic drawing.
Due to the delay introduced by the vision subsystem, most reported experimental systems use some type of estimator to determine the future position of the object from previous visual information (Houshangi, 1990), (Wang & Varshney, 1991), (Westmore & Wilson, 1991), (Corke, 1993). Nevertheless, we will demonstrate in this work that it is possible to eliminate the estimator while achieving performance similar to that obtained with an estimator, just by selecting and tuning an adequate control subsystem. We therefore consider two types of control subsystems in this task. The first is formed by an estimator and a controller, and will be called the control subsystem with estimator. The second is formed by a controller only, and will be called the control subsystem without estimator. Both are analyzed in the following sections.
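For the control subsystem with estimator, a typical choice is a constant-velocity Kalman filter that predicts the target position one vision period ahead, compensating the V(z) = z⁻¹ delay. The following is a generic one-axis sketch, not the exact filter of the cited works; the noise intensities q and r are tuning assumptions:

```python
import numpy as np

class ConstantVelocityKF:
    """1-D constant-velocity Kalman filter; update() ingests a position
    measurement and returns the position predicted one period T ahead."""
    def __init__(self, T, q=1.0, r=1e-4):
        self.F = np.array([[1.0, T], [0.0, 1.0]])      # state transition
        self.H = np.array([[1.0, 0.0]])                # position is measured
        self.Q = q * np.array([[T**4 / 4, T**3 / 2],
                               [T**3 / 2, T**2]])      # process noise
        self.R = np.array([[r]])                       # measurement noise
        self.x = np.zeros(2)                           # [position, velocity]
        self.P = np.eye(2) * 1e3

    def update(self, z):
        # time update (predict)
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # measurement update (correct)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float((self.F @ self.x)[0])  # one-step-ahead position
```

For a target moving at constant velocity, the model matches the motion exactly, so the one-step-ahead prediction converges to the true future position.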
we propose to design the control subsystem as a deadbeat controller system, following the conventional methodology for this type of controller. Imposing as design specification that the steady state tracking error for a ramp reference must be cancelled in a minimum number of samples, the transfer function matrix obtained is shown in table 1.
Notice that these transfer functions have poles outside the unit circle. In order to guarantee the stability of the output signal, we propose a modification of the design criterion: the deadbeat controller is obtained by requiring the steady state tracking error to be zero in a finite (rather than minimum) number of samples. When applying the Truxal method with these specifications, more unknowns than equations appear, and a constrained minimization is required to solve the system. Specifically, we have imposed the additional constraint that all the poles of the regulator lie inside the unit circle. Table 1 also presents the controllers finally obtained.
Minimum number of samples (identical for the three Cartesian coordinates):
\[
R(z) = \operatorname{diag}\!\left( \frac{0.3094z^3 - 0.2116z^2}{z^3 + z^2 - 1.651z - 0.349},\; \frac{0.3094z^3 - 0.2116z^2}{z^3 + z^2 - 1.651z - 0.349},\; \frac{0.3094z^3 - 0.2116z^2}{z^3 + z^2 - 1.651z - 0.349} \right)
\]
Finite number of samples (identical for the three Cartesian coordinates):
\[
R(z) = \operatorname{diag}\!\left( \frac{0.1954z^4 + 0.016z^3 - 0.1128z^2}{z^4 + z^3 - 0.6737z^2 - 1.1332z - 0.1931},\; \frac{0.1954z^4 + 0.016z^3 - 0.1128z^2}{z^4 + z^3 - 0.6737z^2 - 1.1332z - 0.1931},\; \frac{0.1954z^4 + 0.016z^3 - 0.1128z^2}{z^4 + z^3 - 0.6737z^2 - 1.1332z - 0.1931} \right)
\]
Table 1. Controllers obtained without estimator in task 2.
parameter occurs with a conventional controller, but the settling time is excessive. A trade-off is achieved with the optimal controller: the steady state error becomes zero in 6 samples, and the maximum Cartesian acceleration is reduced by half. It can therefore be considered the best strategy for this task.
                              ns    ra (mm/sec²)
Classic controller            12    13.375
Pole placement                 5    53.2255
Optimal controller             6    27.8702
Deadbeat                       4    61.1710
Table 2. Settling time ns (in samples) and maximum Cartesian acceleration reference ra for the task 1 controllers.
Fig. 18. X coordinate step responses of the pole placement, optimal controller, minimum deadbeat system and root-locus designs.
comparing the tracking responses corresponding to the different controllers under the same trajectory profile conditions. This parameter is the square root of the mean square error for each Cartesian coordinate, measured in millimeters, and allows tracking performance to be compared in terms of trajectory fitting. However, it is convenient to have another criterion to compare all the control strategies. As in task 1, this can be done by taking into account the values of the Cartesian acceleration reference. Notice that sometimes, due to torque capability constraints, it is necessary to limit the magnitude of the Cartesian acceleration reference so that it remains low enough.
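The first comparison metric, the root of the mean square tracking error used in Table 3, is straightforward to compute (function name is ours):

```python
import numpy as np

def rms_tracking_error(reference, response):
    """Root of the mean square tracking error, in the units of the inputs
    (mm in the experiments reported here)."""
    e = np.asarray(reference, float) - np.asarray(response, float)
    return float(np.sqrt(np.mean(e ** 2)))
```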
(Four panels: tracking error (mm) of the X coordinate over 100 intervals, comparing the pole placement and optimal controllers, and the minimum-samples and finite-samples deadbeat systems.)
Fig. 19. X coordinate tracking error when evolving through a circular trajectory.
Table 3 shows the values obtained for the X coordinate under three different trajectories, for all the control strategies, with and without estimator, considered in the previous section for this task. Table 4 shows the values of the Cartesian acceleration reference corresponding to each case. No relevant differences have been observed in the other two coordinates, so they are not included here. Analyzing the results of these tables, it can be observed that, in general, the deadbeat controller without estimator presents the best results in terms of tracking error for practically all the trajectory profiles, as well as in terms of Cartesian acceleration. Nevertheless, the control subsystem composed of the Kalman filter and an optimal controller also gives good tracking error results, although its Cartesian acceleration values are higher.
                               Ramp     Sinusoid   Triangular profile
Pole placement    1st L.R.     4.5895   3.6214     5.1655
                  Kalman       4.4088   3.1421     5.8296
                  AR           5.6858   4.8866     5.6705
Optimal           1st L.R.     2.6057   3.5623     4.7244
                  Kalman       2.5803   3.3114     5.5170
                  AR           4.0926   4.7006     5.1734
Minimum deadbeat system        2.4259   2.5945     4.6648
Finite deadbeat system         2.5678   3.2612     5.3475
Table 3. Root of the mean square tracking error (in mm) for several controllers.
                               Ramp       Sinusoid   Triangular profile
Pole placement    1st L.R.     225.8460   216.0358   232.7056
                  Kalman       166.4976   109.3901   190.8991
                  AR           424.6092   461.6932   284.0734
Optimal           1st L.R.     218.6440   209.1466   256.7252
                  Kalman       161.1881   100.9431   180.5773
                  AR           367.0614   356.5014   275.3864
Minimum deadbeat system        186.8742   105.0531   194.4872
Finite deadbeat system         118.0194    82.6676   126.0523
Table 4. Maximum Cartesian acceleration (mm/sec²) for several controllers.
In conclusion, if both criteria are considered, the finite deadbeat system is the most suitable strategy for this task. This study demonstrates that the control subsystem without estimator produces competitive results and has two further advantages: (1) no estimator, of whatever complexity, is required, and (2) the control subsystem is simple and easy to implement.
manipulator trajectory is controlled via the Unimate controller's ALTER line, which requires path control updates every 16 msec. A photograph of the experimental platform is shown in figure 20.
In all the experiments presented in this work, we have kept the reference for the Z-depth, i.e. the vertical distance from the camera to the target, at 330 mm. Furthermore, to achieve a particular motion, the object has been grasped by the end effector of a second robot, a Stäubli RX60 manipulator. This makes it possible to define any trajectory in space, as required in the second task, and to keep the two-point target at a constant orientation during the trajectory evolution. Camera calibration and inverse calibration algorithms have been specifically developed for this camera and environment, and are described in (Gonzalez, 1998). In any case, the vision model used for these experiments has been explained in section 4 of this paper.
Fig. 21. Error registered by the vision subsystem when the input is a step of 40 mm.
(a) Object and robot linear trajectory (b) X coordinate tracking error
Fig. 22. Measured tracking performance for target moving along a linear trajectory with the
finite deadbeat system.
(a) Object and robot circular trajectory (b) X coordinate tracking error
Fig. 23. Measured tracking performance for target moving along a circular trajectory with
the finite deadbeat system.
Fig. 24. Object and robot trajectories in three dimensions (X, Y, Z in mm).
The visual servoing structure used in this work has the characteristic of being composed of independent subsystems, with the corresponding advantages. Its principal disadvantage is that the torque vector acting on the robot's motors does not appear explicitly and, therefore, it is not possible to know when a commanded movement may saturate any of the motors. For this reason, future experiments will focus on aspects such as the study of different schemes that could keep the exerted torque small and, in any case, under control.
Only one camera has been used in this work, under the assumption that, because of its simplicity, the information supplied by such a vision subsystem does not introduce uncertainties. However, when the robot operates in an unstructured environment, such sensor information and such an approach are not always satisfactory. The problem grows when three-dimensional information must be managed, as in the multiple-camera case.
From this point of view, another visual servoing classification, not considered in this work, should be taken into account, based on the number of sensors used. The following cases can be distinguished: (1) monocular visual servoing, which uses one camera, either fixed in the workspace (fixed camera configuration) or mounted on the end effector of the robot (eye-in-hand configuration); (2) multi-camera vision systems, where multiple cameras placed in the workspace are used to obtain the task-specific information. Monocular visual servoing is the simplest solution; however, the depth information of the workspace is lost. This information can be recovered when the vision system uses multiple cameras, but it is not always possible to extract all the 6-DOF information. Additional difficulties appear when the extrinsic camera parameters (position and orientation of the camera within the workspace) are inexactly obtained due to uncertainties in the camera calibration process.
Motivated by this, the focus in visual servoing has shifted towards 3D problems and towards compensating for the uncertainties. Saedan & Ang (2002) designed a PI-controller-based 3D pose servoing scheme. This system gives accurate position and orientation control for stationary target applications, but was unable to avoid error fluctuations in tracking problems. Sahin & Zergeroglu (2005) used an adaptive-controller-based 3D position servoing scheme. This controller is designed to compensate for the uncertainties associated with the mechanical parameters of the robot manipulator and the intrinsic parameters of the cameras. However, that work only presents simulation results to illustrate the performance of the proposed controller. Another solution (Malis, 2004) presents a new visual servoing scheme which is invariant to changes in camera intrinsic parameters. That work shows how to position a camera with respect to a non-planar object, even if the intrinsic parameters change.
9. Acknowledgements
This work has been supported by the Spanish Government under the DPI2005-03769 project, and by the Autonomous Community of Madrid under the RoboCity2030 project, reference S-0505/DPI/0176.
10. References
Allen, P. K.; Timcenko, A.; Yoshimi, B. & Michelman, P. (1993). Automated tracking and grasping of a moving object with a robotic hand-eye system, IEEE Transactions on Robotics and Automation, Vol. 9, No. 2, pp. 152-165.
Bachiller, M.; Cerrada, J. A. & Cerrada, C. (2003). A Modular Scheme for Controller Design and Performance Evaluation in 3D Visual Servoing, Journal of Intelligent & Robotic Systems.
1. Introduction
Most currently used industrial robots are still programmed by a teach pendant, especially in the automotive industry. However, the importance of off-line programming in industry as an alternative to teach-in programming is steadily increasing. The main reason for this trend is the need to minimize machine downtime and thus to improve the rate of robot utilization. Nonetheless, for a successful accomplishment of off-line programming, robots need to be not only repeatable but also accurate.
In addition to improving robot accuracy through software (rather than by changing the mechanical structure or design of the robot), calibration techniques can also minimize the risk of having to change application programs due to slight changes or drifts (wearing of parts, dimension drifts or tolerances, and component replacement effects) in the robot system. This is especially important in applications that may involve a large number of task points.
Theoretically, with the availability of robot calibration systems integrated with off-line programming systems, it would be possible to implement off-line programming in industry on a large scale (currently such systems are not ready for that), where multiple robots are used, by programming only one robot and copying the programs from one robot to the others. One of the most evident advantages of this type of programming is the reduction in maintenance expenditure, since the robot can return to service faster and easy reprogramming means the software becomes operational with little effort.
Robot calibration is an integrated process of modeling, measurement, numeric identification of
actual physical characteristics of a robot, and implementation of a new model. The calibration
procedure first involves the development of a kinematic model whose parameters represent
accurately the actual robot. Next, specifically selected robot characteristics are measured using
measurement instruments with known accuracy. Then a parameter identification procedure is
used to compute the set of parameter values which, when introduced in the robot nominal
model, accurately represents the measured robot behavior. Finally, the model in the position
control software is corrected.
Many factors, such as numerical problems, measurement system deficiencies, cumbersome setups and commercial interests, have undermined the confidence of industry in such systems, especially if one realizes that a robot will not normally need full calibration more than once or twice a year.
108 Industrial Robotics - Programming, Simulation and Applications
The main objective of this chapter is to present and discuss several theoretical and practical aspects involving methods and systems for robot calibration. Firstly, the implementation of techniques to optimize kinematic models for robot calibration, through numerical optimization of the mathematical model, is discussed. The optimized model is then used to compensate for the model errors in an off-line programming system, enhancing significantly the accuracy of the robot kinematic model. The optimized model can be constructed in an easy and straightforward operation, through automatic assignment of joint coordinate systems and geometric parameters to the robot links. Assignment of coordinate systems by this technique avoids the model singularities that usually spoil robot calibration results. Secondly, calibration results are discussed using a Coordinate Measuring Arm as a measurement system on an ABB IRB 2000 Robot.
Further in the chapter, a 3-D vision-based measurement system developed specifically for robot calibration requirements is presented and discussed, proving to be a feasible alternative to high-cost measurement systems. The measurement system is portable, accurate and low cost, consisting of a single CCD camera mounted on the robot tool flange to measure the robot end-effector's pose relative to a world coordinate system. Radial lens distortion is included in the photogrammetric model. Scale factors and image centers are obtained with innovative techniques, making use of a multiview approach. Experimentation is performed on three industrial robots to test their position accuracy improvement using the proposed calibration system: an ABB IRB-2400, an IRB 6400 and a PUMA-500. The proposed off-line robot calibration system is fast, accurate and easy to set up.
Finally, many theoretical and practical aspects are discussed concerning the relationship
between measuring volumes, positions, robot types and measurement systems and the final
accuracy expected after the calibration process.
The Robot Calibration problem has been investigated for more than two decades, but some of its obstacles are still around. Usually, one can tackle the problem by implementing model-based or modeless methods. Modeless methods do not need any kinematic model, using only a grid of known points located in the robot workspace as a standard calibration board. The robot is moved through all the grid points, and the position errors are stored for future compensation by using a bilinear interpolation method and polynomial fitting (Zhuang & Roth, 1996, Park, Xu and Mills, 2002) or error mapping (Bai and Wang, 2004). Although modeless methods are simpler, the calibrated workspace region is small, and each time the robot is to work outside that region the calibration process has to be repeated. On the other hand, model-based methods allow a large workspace region to be calibrated, leading to full model calibration. However, essential to model-based methods is an accurate kinematic model that is complete, minimal and continuous and has identifiable parameters (Schröer, Albright and Grethlein, 1997). Researchers have used specific kinematic models that depend on a particular robot geometry and/or calibration method. Model identifiability has already been addressed (e.g., Everett and Hsu, 1988, Zhuang, 1992), and Motta and McMaster (1999) and Motta, Carvalho and McMaster (2001) have shown experimental and simulation results using a rational technique to find an optimal model for a specific joint configuration, requiring few measurement points (for a 6-DOF robot, only 15 measurement points) for a model with only geometric parameters (30), in opposition to the hundreds of measurement points claimed by other authors (Drouet et al., 2002, Park, Xu and Mills, 2002). A model with singularities or a quasi-singular parameterization makes the identification process ill-conditioned, leading to solutions that cannot satisfy accuracy requirements when the manipulator moves outside the measurement points. Furthermore, the time to convergence increases, or convergence may not occur at all, and the number of measurement points may be ten-fold larger than necessary (Motta, 1999).
T = Tx(px).Ty(py).Tz(pz).Rz(γ).Ry(β).Rx(α) (1)
where px, py, pz are translation coordinates and α, β, γ are rotation coordinates for the axes x,
y and z respectively. Then,
$$
T = \begin{bmatrix}
C\gamma\,C\beta & C\gamma\,S\beta\,S\alpha - S\gamma\,C\alpha & C\gamma\,S\beta\,C\alpha + S\gamma\,S\alpha & p_x \\
S\gamma\,C\beta & S\gamma\,S\beta\,S\alpha + C\gamma\,C\alpha & S\gamma\,S\beta\,C\alpha - C\gamma\,S\alpha & p_y \\
-S\beta & C\beta\,S\alpha & C\beta\,C\alpha & p_z \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (2)
$$
which, after decomposition of the singularities through the Jacobian determinant, results in a manifold of singularities
$$
T_s = \begin{bmatrix}
0 & o_x & a_x & p_x \\
0 & o_y & a_y & p_y \\
\pm 1 & 0 & 0 & p_z \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (4)
$$
This manifold represents γ = ±π/2 and β = ±π/2 (eq. 2), which means there is a singularity when the y and z axes are parallel.
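As a quick consistency check, the composition of eq. (1) and the closed form of eq. (2) can be sketched with NumPy (function names are illustrative, not from the chapter's software):

```python
import numpy as np

def Tx(p):  # translation along x
    T = np.eye(4); T[0, 3] = p; return T

def Ty(p):  # translation along y
    T = np.eye(4); T[1, 3] = p; return T

def Tz(p):  # translation along z
    T = np.eye(4); T[2, 3] = p; return T

def Rx(a):  # rotation about x by alpha
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4); T[1:3, 1:3] = [[c, -s], [s, c]]; return T

def Ry(b):  # rotation about y by beta
    c, s = np.cos(b), np.sin(b)
    T = np.eye(4); T[0, 0] = c; T[0, 2] = s; T[2, 0] = -s; T[2, 2] = c; return T

def Rz(g):  # rotation about z by gamma
    c, s = np.cos(g), np.sin(g)
    T = np.eye(4); T[0:2, 0:2] = [[c, -s], [s, c]]; return T

def pose(px, py, pz, alpha, beta, gamma):
    """Eq. (1): T = Tx(px).Ty(py).Tz(pz).Rz(gamma).Ry(beta).Rx(alpha)."""
    return Tx(px) @ Ty(py) @ Tz(pz) @ Rz(gamma) @ Ry(beta) @ Rx(alpha)

def pose_closed(px, py, pz, a, b, g):
    """Closed form of eq. (2) for the same parameters."""
    Ca, Sa, Cb, Sb, Cg, Sg = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(g), np.sin(g)
    return np.array([
        [Cg*Cb, Cg*Sb*Sa - Sg*Ca, Cg*Sb*Ca + Sg*Sa, px],
        [Sg*Cb, Sg*Sb*Sa + Cg*Ca, Sg*Sb*Ca - Cg*Sa, py],
        [-Sb,   Cb*Sa,            Cb*Ca,            pz],
        [0.0,   0.0,              0.0,              1.0]])
```

Evaluating both forms with the same arbitrary parameters yields identical matrices, which is a convenient way to verify the symbolic expansion.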
The Denavit-Hartenberg (D-H) convention (parameterization) (Paul, 1981) leads to an
elementary transformation represented by
T(θ,pz,px,α) = Rz(θ).Tz(pz).Tx(px).Rx(α) (5)
Following the same procedure as before, the manifold of singularities is

$$
T_s = \begin{bmatrix}
n_x & o_x & 0 & p_x \\
n_y & o_y & 0 & p_y \\
n_z & o_z & \pm 1 & p_z \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (6)
$$
This result consists of all elements represented as parallel rotary joints. This can be verified by observing the third column, which shows the representation of one joint axis (z) in the previous one.
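As a sketch, the D-H elementary transformation of eq. (5) can be written in closed form, and evaluating it with α = 0 (consecutive parallel joint axes) reproduces the singular third column of eq. (6); the function name is illustrative:

```python
import numpy as np

def dh(theta, pz, px, alpha):
    """D-H elementary transformation of eq. (5): Rz(theta).Tz(pz).Tx(px).Rx(alpha),
    expanded as the closed-form 4x4 homogeneous matrix."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, px * ct],
        [st,  ct * ca, -ct * sa, px * st],
        [0.0,      sa,       ca,      pz],
        [0.0,     0.0,      0.0,     1.0]])

# Parallel consecutive rotary joints (alpha = 0): the third column collapses
# to (0, 0, +/-1), which is exactly the singular manifold of eq. (6).
A_parallel = dh(0.3, 0.1, 0.2, 0.0)
```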
The Hayati-convention (Hayati & Mirmirani, 1985) can be represented by
T(θ,px,α,β) = Rz(θ).Tx(px).Rx(α).Ry(β) (7)
There are two manifolds of singularities,
$$
T_s = \begin{bmatrix}
n_x & o_x & a_x & p_x \\
n_y & o_y & a_y & p_y \\
n_z & o_z & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\quad \text{or} \quad
T_s = \begin{bmatrix}
n_x & o_x & \lambda p_x & p_x \\
n_y & o_y & \lambda p_y & p_y \\
n_z & o_z & \lambda p_z & p_z \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (8)
$$
Robot Calibration: Modeling, Measurement and Applications 111
These representations show that if the distal z axis lies in the same x-y plane as the proximal coordinate system (thus perpendicular to its z axis), or points to its origin, then there is a singularity.
The Veitschegger convention (Veitschegger & Wu, 1986) is a 5-dimensional
parameterization as
T = Rz(θ).Tz(pz).Tx(px).Rx(α).Ry(β) (9)
The manifolds of singularities are
$$
T_s = \begin{bmatrix}
n_x & o_x & a_x & 0 \\
n_y & o_y & a_y & 0 \\
n_z & o_z & a_z & p_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\quad \text{or} \quad
T_s = \begin{bmatrix}
n_x & o_x & a_x & \lambda a_x \\
n_y & o_y & a_y & \lambda a_y \\
n_z & o_z & 0 & p_z \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (10)
$$
These representations show that if the origin of the distal joint coordinate system is on the z-axis of the proximal one, or if the z axes are perpendicular and intersect each other, then there is a singularity. However, this convention is not minimal.
Using the same technique presented so far, sets of parameterizations can likewise be derived for prismatic joints, and so forth. The results can be outlined in a list together with their application ranges.
The set is a complete, minimal and model-continuous kinematic model (Schröer et al., 1997).
The transformations are modeled from a current frame C to a revolute joint, JR, or to a
prismatic joint, JP, or a fixed and arbitrarily located target frame, TCP-frame. Some of them
are not unique since other solutions can fulfill the requirements. Additional elementary
transformations are possible, but any added elementary transformation is redundant and
thus, cannot be identified. A non-expanded model can always be found, which describes the
same kinematic structure as the expanded model.
Elementary transformations that are part of the complete, minimal and model-continuous
sub-model being identified are marked bold, i.e. those elementary transformations including
model parameters to be identified (errors): (symbols ⊥ and ⏐⏐ mean orthogonal and parallel
axes respectively)
• Transformation from robot base frame (B) to first joint where joint is translational (JT):
B ⊥ JT : P = ( TX, TY, TZ, RZ, RX) (11)
B ⏐⏐ JT : P = ( TX, TY, TZ, RX, RY) (12)
And, if first joint is rotational (JR):
B ⊥ JR : PX = ( TY, TZ, RZ, RX ) (13)
PY = ( TX, TZ, RZ, RX ) (14)
(If joint axis is near x-axis of frame B)
B ⏐⏐ JR : PZ = ( TX, TY, RX, RY ) (15)
(If joint axis is near y-axis of frame B)
• Transformations between consecutive joint frames:
JR ⊥ JR : P = ( RZ, TZ, TX, RX, TZ) (16)
(D-H parameterization)
JR ⏐⏐JR : P = ( RZ, TX, RX, RY, TZ) (17)
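A parameterization such as eq. (16) can be composed mechanically from its list of elementary transformations; a minimal sketch (helper names are illustrative, not from the model software):

```python
import numpy as np

def elem(name, v):
    """One elementary transformation as a 4x4 homogeneous matrix:
    'TX'/'TY'/'TZ' are translations, 'RX'/'RY'/'RZ' are rotations."""
    T = np.eye(4)
    if name[0] == 'T':
        T[{'X': 0, 'Y': 1, 'Z': 2}[name[1]], 3] = v
        return T
    c, s = np.cos(v), np.sin(v)
    i, j = {'X': (1, 2), 'Y': (2, 0), 'Z': (0, 1)}[name[1]]
    T[i, i] = T[j, j] = c
    T[i, j], T[j, i] = -s, s
    return T

def link_transform(param_names, values):
    """Compose a link transformation from its elementary-transformation list,
    e.g. the D-H parameterization of eq. (16): P = (RZ, TZ, TX, RX, TZ)."""
    T = np.eye(4)
    for name, v in zip(param_names, values):
        T = T @ elem(name, v)
    return T

# Eq. (16), consecutive orthogonal rotary joints (illustrative values)
A = link_transform(('RZ', 'TZ', 'TX', 'RX', 'TZ'), (0.5, 0.10, 0.30, -0.2, 0.05))
```

The same `link_transform` call then serves every case from eqs. (11) to (17) by swapping the name list, which is what makes the automatic frame-assignment procedure described later in the chapter practical.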
first joint is normal to the x-y plane. This location for the base frame coincides with many
manufacturers’ defined base frame.
Afterwards, coordinate frames are attached to the link at its distal joint (the joint farthest from the base). A frame is internal to the link it is attached to (there are no movements relative to it), and the succeeding link moves relative to it. Thus, coordinate frame i is at joint i+1, that is, the joint that connects link i to link i+1.
The origin of the frame is placed as follows: if the joint axes of a link intersect, then the origin of the frame attached to the link is placed at the intersection of the joint axes; if the joint axes are parallel or do not intersect, then the frame origin is placed at the distal joint; subsequently, if a frame origin is described relative to another coordinate frame by using more than one direction, then it must be moved to make use of only one direction, if possible. Thus, the frame origins will be described using the minimum number of link parameters.
Fig. 2. Skeleton of the PUMA 560 Robot with coordinate frames in the zero position and geometric variables for kinematic modeling. (Not to scale.)
The x-axis or the y-axis has its direction defined according to the convention used to parameterize the transformations between links (e.g. eqs. 16 to 23). At this point the homogeneous transformations between joints must already have been determined. The other axis (x or y) can be determined using the right-hand rule.
A coordinate frame can be attached to the end of the final link, within the end-effector or
tool, or it may be necessary to locate this coordinate frame at the tool plate and have a
separate hand transformation. The z-axis of the frame is in the same direction as the z-axis of
the frame assigned to the last joint (n-1).
The end-effector or tool frame location and orientation are defined according to the controller conventions. Geometric parameters of length are defined to have an index of joint and direction. The length pni is the distance between coordinate frames i-1 and i, and n is the parallel axis in the coordinate system i-1. Figs. 2 and 3 show the above rules applied to a PUMA-560 and an ABB IRB-2400 robot, respectively, with all the coordinate frames and geometric features.
Fig. 3. Skeleton of the ABB IRB-2400 Robot with coordinate frames in the zero position and geometric variables for kinematic modeling. (Not to scale.)
where N is the number of joints (or coordinate frames), p = [p1T p2T ... pNT]T is the parameter
vector for the manipulator, and pi is the link parameter vector for the joint i, including the
joint errors. The exact link transformation $A_i^{i-1}$ is (Driels & Pathre, 1990):

$$ A_i^{i-1} = T_i^{i-1} + \Delta T_i, \qquad \Delta T_i = \Delta T_i(\Delta p_i) \quad (36) $$

where $\Delta p_i$ is the link parameter error vector for joint i.
The exact manipulator transformation $\hat{A}_N^0$ is
$$ \hat{A}_N^0 = \prod_{i=1}^{N} \left( T_i^{i-1} + \Delta T_i \right) = \prod_{i=1}^{N} A_i^{i-1} \quad (37) $$
Thus,
$$ \hat{A}_N^0 = \hat{T}_N^0 + \Delta\hat{T}, \qquad \Delta\hat{T} = \Delta\hat{T}(q, \Delta p) \quad (38) $$

where $\Delta p = [\Delta p_1^T \; \Delta p_2^T \; \ldots \; \Delta p_N^T]^T$ is the manipulator parameter error vector and $q = [\theta_1, \theta_2, \ldots, \theta_N]^T$ is the vector of joint variables. It must be stated here that $\Delta\hat{T}$ is a non-linear function of the manipulator parameter error vector $\Delta p$.
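Eq. (37) is just a chain product of the perturbed link matrices; a minimal NumPy sketch with synthetic, purely illustrative link errors:

```python
import numpy as np

rng = np.random.default_rng(0)

def tz(d):
    """Nominal link transform used here for illustration: a pure z translation."""
    T = np.eye(4); T[2, 3] = d; return T

nominal = [tz(0.3), tz(0.2), tz(0.1)]                            # T_i (illustrative)
errors = [1e-3 * rng.standard_normal((4, 4)) for _ in nominal]   # dT_i (synthetic)
for dT in errors:
    dT[3, :] = 0.0                  # keep the homogeneous bottom row (0, 0, 0, 1) valid

exact = [T + dT for T, dT in zip(nominal, errors)]   # eq. (36): A_i = T_i + dT_i
A_hat = np.linalg.multi_dot(exact)                   # eq. (37): product of exact links
T_hat = np.linalg.multi_dot(nominal)                 # nominal manipulator transform
dT_hat = A_hat - T_hat                               # eq. (38): non-linear in dp
```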
Considering m the number of measured positions, it can be stated that

$$ \hat{A} = \hat{A}_N^0 = \hat{A}(q, p) \quad (39) $$
All matrices or vectors in bold are functions of m. The identification itself is the computation
of those model parameter values p*=p+Δp which result in an optimal fit between the actual
measured positions and those computed by the model, i.e., the solution of the non-linear
equation system
$$ B(q, p^*) = M(q) \quad (42) $$

where B is a vector formed with the position and orientation components of $\hat{A}$, and

$$ M(q) = (M(q_1), \ldots, M(q_m))^T \in \mathbb{R}^{\varphi m} \quad (43) $$

are all measured components, φ being the number of measurement equations provided by each measured pose. If orientation measurement can be provided by the measurement system, then 6 measurement equations can be formulated per pose. If the measurement system can only measure position, each pose supplies data for 3 measurement equations, and then B includes only the position components of $\hat{A}$.
When one is attempting to fit data to a non-linear model, the non-linear least-squares
method arises most commonly, particularly in the case that m is much larger than n (Dennis
& Schnabel, 1983). In this case we have from eq. (36), eq. (38) and eq. (42):
$$ B(q, p^*) = M(q) = B(q, p) + C(q, \Delta p) \quad (44) $$

where C is the differential motion vector formed by the position and rotation components of $\Delta\hat{T}$. From the definition of the Jacobian matrix, and ignoring second-order products,

$$ C(q, \Delta p) = J \cdot \Delta p \quad (45) $$

and so,

$$ M(q) - B(q, p) = J \cdot \Delta p \quad (46) $$
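In practice, the identification step solves the overdetermined linear system of eq. (46) for Δp at each Gauss-Newton iteration; a synthetic sketch with a stand-in Jacobian (sizes and values illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: m poses x 3 position equations each, n model parameters
m, n = 20, 6
J = rng.standard_normal((3 * m, n))        # identification Jacobian (stand-in)
dp_true = 0.01 * rng.standard_normal(n)    # "real" parameter errors (synthetic)
residual = J @ dp_true                     # M(q) - B(q, p), eq. (46)

# One Gauss-Newton step: least-squares solution of J.dp = residual
dp, *_ = np.linalg.lstsq(J, residual, rcond=None)
```

With noiseless synthetic data the step recovers the parameter errors exactly; on real measurements the step is iterated, re-evaluating B and J at the updated parameters.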
The following notation can be used
6. Experimental Evaluation
To check the complete robot calibration system, an experimental evaluation was carried out on an ABB IRB-2000 Robot. This robot was manufactured in 1993 and is used only in laboratory research, with little wear of mechanical parts due to its low number of working hours. The robot is very similar to the ABB IRB-2400 Robot; the differences between the two robots exist only in link 1, shown in Fig. 3, where px1 becomes zero.
Fig. 6. Coordinate Measuring Arm - ITG ROMER and ABB IRB-2000 manipulator
(University of Brasilia).
Fig. 7 represents graphically the calibration results within the regions and volumes of the workspace shown in Fig. 4 for the IRB-2000 Robot. The results show that the average position errors before and after calibration were higher when the Volumes were larger, for both Regions tested. This robot was also calibrated locally; that is, the robot was recalibrated in each Region.
A point that deserves attention is that if a robot is calibrated in a sufficiently large calibration volume, the position accuracy can be substantially improved compared to calibration with smaller joint motions. The accuracy expected of a robot in a certain task after calibration is analogous to the evaluation accuracy reported here under various conditions.
Every time a robot moves from one portion of the workspace to another, the base has to be recalibrated. However, in an off-line programmed robot, with or without calibration, that has to be done anyway. If the tool has to be replaced, or after an accident damaging it, it is not necessary to recalibrate the entire robot, only the tool. For that, all that has to be done is to place the tool at a few physical marks with known world coordinates (not more than six if only the tool is to be calibrated) and run the off-line calibration system to find the actual tool coordinates represented in the robot base frame.
Fig. 7. Experimental evaluation of the robot model accuracy for positioning in each of the volumes.
On the other hand, a single camera setup needs full camera re-calibration at each pose.
The goal of camera calibration is to develop a mathematical model of the transformation
between world points and observed image points resulting from the image formation
process. The parameters which affect this mapping can be divided into three categories
(Prescott & McLean, 1997, Zhuang & Roth, 1996): a) extrinsic (or external) parameters,
which describe the relationship between the camera frame and the world frame,
including position (3 parameters) and orientation (3 parameters); b) intrinsic (or
internal) parameters, which describe the characteristics of the camera, and include the
lens focal length, pixel scale factors, and location of the image center; c) distortion
parameters, which describe the geometric nonlinearities of the camera. Some authors
include distortion parameters in the group of intrinsic parameters (Tsai, 1987, Weng et
al., 1992). Distortion parameters can be present in a model or not.
The algorithm developed here to obtain the camera parameters (intrinsic, extrinsic and
distortions) is a two-step method based on the Radial Alignment Constraint (RAC) Model (see
Tsai, 1987). It involves a closed-form solution for the external parameters and the effective focal
length of the camera. Then, a second stage is used to estimate three parameters: the depth
component in the translation vector, the effective focal length, and the radial distortion
coefficient. The RAC model is recognized as a good compromise between accuracy and
simplicity, which means short processing time (Zhuang & Roth, 1996). A few modifications were introduced here to the original RAC algorithm, and they will be explained later.
(Figure: object point P expressed in world coordinates xw, yw, zw.)
$$ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \cdot \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T \quad (53) $$
where R is the orthogonal rotation matrix, which aligns the camera coordinate system with
the object coordinate system, and it can be represented as
$$ R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix} \quad (54) $$
$$ Y = s_v \cdot v = f \cdot s_v \cdot \frac{y}{z} = f_y \cdot \frac{y}{z} \quad (59) $$
The equations above, combined with eq. (53), produce the distortion-free camera model

$$ X = f_x \cdot \frac{r_1 x_w + r_2 y_w + r_3 z_w + T_x}{r_7 x_w + r_8 y_w + r_9 z_w + T_z} \quad (60) $$

$$ Y = f_y \cdot \frac{r_4 x_w + r_5 y_w + r_6 z_w + T_y}{r_7 x_w + r_8 y_w + r_9 z_w + T_z} \quad (61) $$
which relates the world coordinate system (xw, yw, zw) to the image coordinate system
(X,Y). fx and fy are non-dimensional constants defined in eqs. (58) and (59).
Radial distortion can be included in the model as (Tsai, 1987, Weng et al., 1992):

$$ X \cdot (1 + k r^2) = f_x \cdot \frac{r_1 x_w + r_2 y_w + r_3 z_w + T_x}{r_7 x_w + r_8 y_w + r_9 z_w + T_z} \quad (62) $$

$$ Y \cdot (1 + k r^2) = f_y \cdot \frac{r_4 x_w + r_5 y_w + r_6 z_w + T_y}{r_7 x_w + r_8 y_w + r_9 z_w + T_z} \quad (63) $$
where $r^2 = \mu^2 X^2 + Y^2$ and μ is the ratio of scale, to be defined further in the text. Whenever all distortion effects other than radial lens distortion are zero, a radial alignment constraint (RAC) equation is maintained. In Fig. 8, the distorted image point of the object point, Pd, is shown.
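Under stated assumptions (known R and T, a single radial coefficient k), the camera model of eqs. (53) and (60)-(63) can be sketched as follows; the fixed-point loop used to obtain the distorted coordinates is an implementation choice, not part of the chapter:

```python
import numpy as np

def project(pw, R, T, fx, fy, k, mu=1.0):
    """Project a world point pw = (xw, yw, zw) to distorted image
    coordinates (Xd, Yd) with one radial-distortion coefficient k.
    Sketch of eqs. (53) and (60)-(63); mu is the scale ratio."""
    x, y, z = R @ pw + T                 # eq. (53): camera coordinates
    X, Y = fx * x / z, fy * y / z        # distortion-free model, eqs. (60)-(61)
    # eqs. (62)-(63): Xd.(1 + k.r^2) = X, with r^2 from the distorted point;
    # solve for (Xd, Yd) by fixed-point iteration (converges for small k.r^2)
    Xd, Yd = X, Y
    for _ in range(20):
        d = 1.0 + k * ((mu * Xd) ** 2 + Yd ** 2)
        Xd, Yd = X / d, Y / d
    return Xd, Yd
```

With k = 0 the function reduces to the pure pinhole model of eqs. (60)-(61); a small positive k pulls image points toward the image center, as radial lens distortion does.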
Eqs. (62) and (63) can be linearized as (Zhuang & Roth, 1993):

$$ \frac{X}{1 - k r^2} \cong f_x \cdot \frac{r_1 x_w + r_2 y_w + r_3 z_w + T_x}{r_7 x_w + r_8 y_w + r_9 z_w + T_z} \quad (64) $$

$$ \frac{Y}{1 - k r^2} \cong f_y \cdot \frac{r_4 x_w + r_5 y_w + r_6 z_w + T_y}{r_7 x_w + r_8 y_w + r_9 z_w + T_z} \quad (65) $$

This transformation can be done under the assumption that $k r^2 \ll 1$, i.e. $(1 + k r^2) \cong 1/(1 - k r^2)$ when $k r^2 \ll 1$. The first stage of the solution of eqs. (64) and (65) determines the rotation matrix R (eq. 54), Ty and Tx. The algorithm for that is found in Zhuang & Roth (1996) and Tsai (1987).
The second stage is proposed here as

$$ \begin{bmatrix} -X_i & x_i & -x_i r_i^2 \\ -Y_i & \mu y_i & -\mu y_i r_i^2 \end{bmatrix} \begin{bmatrix} T_z \\ f_x \\ k f_x \end{bmatrix} = \begin{bmatrix} X_i w_i \\ Y_i w_i \end{bmatrix} \quad (66) $$
where $x_i = r_1 x_{wi} + r_2 y_{wi} + T_x$, $y_i = r_4 x_{wi} + r_5 y_{wi} + T_y$, $w_i = r_7 x_{wi} + r_8 y_{wi}$, and zwi is made null (all calibration points are coplanar). The overdetermined linear system in eq. (66) can be solved by a linear least-squares routine. The calibration points used to solve the system were on the border of a square grid of 9 x 9 circular points of 1 mm diameter, totaling 32 points. The grid measures approximately 200 x 200 mm. The angle between the calibration plane and the image plane should not be smaller than 30 degrees, to avoid ill-conditioned solutions.
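The second stage, eq. (66), is an overdetermined linear system in (Tz, fx, k·fx); a sketch on synthetic data (all numeric values below are illustrative, not measured):

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu = 32, 1.0                          # 32 coplanar points (zw = 0), scale ratio mu
xi = rng.uniform(-0.1, 0.1, n)           # xi = r1.xw + r2.yw + Tx (stand-in values)
yi = rng.uniform(-0.1, 0.1, n)           # yi = r4.xw + r5.yw + Ty
wi = rng.uniform(-0.02, 0.02, n)         # wi = r7.xw + r8.yw

Tz_t, fx_t, k_t = 0.8, 1500.0, 8e-8      # "true" parameters used to make the data

# Image coordinates consistent with eq. (66), via a short fixed-point iteration
X0, Y0 = fx_t * xi / (wi + Tz_t), fx_t * mu * yi / (wi + Tz_t)
X, Y = X0.copy(), Y0.copy()
for _ in range(15):
    r2 = (mu * X) ** 2 + Y ** 2
    X, Y = X0 * (1.0 - k_t * r2), Y0 * (1.0 - k_t * r2)

# Stack both image directions, as proposed in the text, and solve eq. (66)
r2 = (mu * X) ** 2 + Y ** 2
A = np.vstack((np.column_stack((-X, xi, -xi * r2)),
               np.column_stack((-Y, mu * yi, -mu * yi * r2))))
b = np.concatenate((X * wi, Y * wi))
(Tz, fx, kfx), *_ = np.linalg.lstsq(A, b, rcond=None)
k = kfx / fx
```

On this consistent synthetic data the least-squares solution recovers Tz, fx and k; with real image measurements the stacked x and y rows are what gives the accuracy improvement reported below.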
The second stage of the algorithm originally proposed by Tsai (1987) was not a linear solution, as it was based on eq. (63), with components only in the y direction. Tsai's argument was that the y components were not contaminated by the mismatch between the frame grabber and camera frequencies. However, it was experimentally observed during this research that, because of the various angles and distances from which the images of the target plate were acquired, a linear solution considering both directions (x and y) proved to be substantially more accurate than using only one direction. The improvement in accuracy of the solutions using eq. (66) was evaluated by comparing three different images at different orientations and similar distances, and observing the focal length calculated from the second stage of the RAC algorithm using eq. (63) (only y components), eq. (64) (only x components), and eq. (66) (both x and y components). Ideally, the focal length has to be the same for all images, whatever orientation the camera is placed in. The use of eq. (66) resulted in considerably closer values of fx for the three images than using only one direction (x or y).
The Radial Alignment Constraint model holds true, and is independent of radial lens distortion, when the image center is chosen correctly. Otherwise, a residual exists and, unfortunately, the RAC is highly non-linear in terms of the image center coordinates.
The method devised to find the image center was to search for the "best" image center as an average over all images in a sequence of robot measurements, using the RAC residuals. This method could actually find the optimal image center in the model, which could easily be checked by calculating the overall robot errors after the calibration. The tests with a coordinate milling machine (explained further in the text) also showed that determining the image center by this method led to the best measurement accuracy in each sequence of constant camera orientation.
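The image-center search can be sketched as a grid search minimizing the mean RAC residual over a measurement sequence; the residual function below is a stand-in (a smooth bowl with a hypothetical optimum), since the real one comes from the first-stage RAC fit of each image:

```python
import numpy as np

def mean_rac_residual(cx, cy):
    """Stand-in for the mean RAC residual over all images of a sequence,
    as a function of the candidate image centre (cx, cy). In the real
    system this is computed from the first-stage RAC fit of each image;
    the quadratic bowl below is for illustration only."""
    true_cx, true_cy = 321.0, 238.5   # hypothetical optimum
    return (cx - true_cx) ** 2 + 0.5 * (cy - true_cy) ** 2

# Exhaustive search on a coarse grid around the nominal image centre
cxs = np.arange(310.0, 330.0, 0.5)
cys = np.arange(230.0, 250.0, 0.5)
res = np.array([[mean_rac_residual(cx, cy) for cy in cys] for cx in cxs])
icx, icy = np.unravel_index(np.argmin(res), res.shape)
best_centre = (cxs[icx], cys[icy])
```

A coarse-to-fine refinement of the grid is a natural extension when the residual is expensive to evaluate over many images.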
Fig. 9. Diagram showing coordinate systems and the experimental setup for the evaluation
of the measurement system accuracy.
The average values for fx and k should be kept constant from then on, and considered the "best" ones. To check this assumption, the distance traveled by the camera, calculated through the Euclidean norm of (XW, YW, ZW) for each camera position, and the distance read from the CMM display were compared to each other. Errors were calculated as the difference between the camera travel distances obtained by the two methods. It was then observed that the average value of fx did not minimize the errors.
Subsequently, an optimal value of fx that minimized the errors explained above was searched for, by changing the previous value slightly. The value of k did not show an important influence on the errors when changed slightly. The optimal values of fx and k were checked in the same way for the other two measurement sequences at different distances, and proved to produce the best accuracy. The optimal values of fx and k were found to be 1566 and 7.9 x 10-8, and were kept constant from then on.
Assessment of 3-D measurement accuracy using a single camera is not straightforward. The method designed here to assess the measurement accuracy was to compare each component XW, YW and ZW corresponding to each camera position with the same vector calculated assuming an average value for the rotation matrix. This method is justified considering that, since there are no rotations between the camera and the calibration board during the tests, the rotation matrix should ideally be the same for each camera position. Since the vector (XW, YW, ZW) depends on R in eq. (54) and on T in eq. (55), and since T is calculated from R, all measurement errors would be accounted for in the rotation matrix, apart from the errors due to the ground accuracy of the calibration board. Of course, this assumption is valid only if another method, comparing the camera measurements to an external measurement system (in physical units), is used to validate it. That is, the accuracy calculated from the traveled distance validates the accuracy calculated from the rotation matrix. The first method does not consider each error component independently (x, y, z), but serves as a basis to optimize the camera parameters. The second method takes into account the 3-D measurement error, assuming that the "correct" rotation matrix is the average of the ones calculated for each of the 25 positions. The maximum position errors for each measurement sequence were 0.43, 0.97 and 0.72 mm, respectively. Fig. 10 shows graphically the average, median and standard deviation of the measurement system position accuracy, calculated as explained before, as a function of the average distance from the camera (optical center) to the central point of the calibration board.
Fig. 10. Measurement system 3-D position accuracy versus the average distance from the
camera focal point to the central point of the target.
Fig. 11. Side and top view of the IRB-2400 Robot workspace showing Regions, Volumes and
their dimensions and locations.
The results from the calculation of the average errors and their standard deviations in each Region can be seen in the graphs shown in Fig. 12, calculated before and after the calibration. For calculated and measured data in different coordinate systems to be compared to each other, the robot base coordinate frame was moved to coincide with the world coordinate system at the measurement target plate. This procedure was carried out through a recalibration of the robot base in each Region.
The results presented in Fig. 12 show that the average position errors after calibration were very close in value for the three calibration regions tested.
(Bar chart, IRB-2400: median error before and after calibration, and standard deviation before and after, for Volumes V1, V2 and V3.)
Fig. 12. Average Error and Standard Deviation calculated before and after calibration in
each Region.
Within the PUMA-500 robot workspace, two calibration Regions were defined to collect data. In each Region three Volumes were defined, with different dimensions. If two different Volumes had the same volume, they also had the same dimensions, whichever Region they were in. Fig. 13 represents graphically all Regions and Volumes within the PUMA-500 workspace.
The results presented in Figs. 15 and 16 show that the average position errors before and after calibration were higher when the Volumes were larger, for both Regions tested. This robot was also calibrated locally; that is, the robot was recalibrated in each Region.
Fig. 13. Side and top view of the PUMA-500 Robot workspace showing Regions, Volumes
and their dimensions and locations.
Fig. 14. Average Error and Standard Deviation calculated before and after calibration in each
Volume in Region 1.
Fig. 15. Average Error and Standard Deviation calculated before and after calibration in each
Volume in Region 2.
12. References
Bai, Y. and Wang, D. (2004). Improve the Robot Calibration Accuracy Using a Dynamic Online Fuzzy Error Mapping System, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 34, No. 2, pp. 1155-1160.
Baker, D. R. (1990). Some topological problems in robotics, The Mathematical Intelligencer, Vol. 12, No. 1, pp. 66-76.
Bernhardt, R. (1997). Approaches for commissioning time reduction, Industrial Robot, Vol. 24,
No. 1, pp. 62-71.
Dennis JE, Schnabel RB. (1983). Numerical Methods for Unconstrained Optimisation and Non-
linear Equations, New Jersey: Prentice-Hall.
Driels MR, Pathre US. (1990). Significance of Observation Strategy on the Design of Robot
Calibration Experiments, Journal of Robotic Systems, Vol. 7, No. 2, pp. 197-223.
Drouet, Ph., Dubowsky, S., Zeghloul, S. and Mavroidis, C. (2002). Compensation of
geometric and elastic errors in large manipulators with an application to a high
accuracy medical system, Robotica, Vol. 20, pp. 341-352.
Everett, L.J. and Hsu, T-W. (1988). The Theory of Kinematic Parameter Identification for
Industrial Robots, Transaction of ASME, No. 110, pp. 96-100.
Gottlieb, D.H. (1986). Robots and Topology, Proceedings of the IEEE International Conference on
Robotics and Automation, pp. 1689-1691.
Hayati, S. and Mirmirani, M., (1985). Improving the Absolute Positioning Accuracy of
Robots Manipulators, Journal of Robotic Systems, Vol. 2, No. 4, pp. 397-413.
Lenz RK, Tsai RY. (1987). Techniques for Calibration of the Scale Factor and Image Centre
for High Accuracy 3D Machine Vision Metrology, IEEE Proceedings of the
International Conference on Robotics and Automation, pp. 68-75.
McKerrow, P. J. (1995). Introduction to Robotics, 1st ed., Ed. Addison Wesley, Singapore.
Motta, J. M. S. T. and McMaster, R. S. (1999). Modeling, Optimizing and Simulating Robot
Calibration with Accuracy Improvements, Journal of the Brazilian Society of
Mechanical Sciences, Vol. 21, No. 3, pp. 386-402.
Motta, J. M. S. T. (1999). Optimised Robot Calibration Using a Vision-Based Measurement System
with a Single Camera, Ph.D. thesis, School of Industrial and Manufacturing Science,
Cranfield University, UK.
Motta, J. M. S. T., Carvalho, G. C. and McMaster, R. S. (2001), Robot Calibration Using a 3-D
Vision-Based Measurement System With a Single Camera, Robotics and Computer
Integrated-Manufacturing, Ed. Elsevier Science, U.K., Vol. 17, No. 6, pp. 457-467.
Park, E. J., Xu, W and Mills, J. K. (2002). Calibration-based absolute localization of parts for
multi-robot assembly, Robotica, Vol. 20, pp. 359-366.
Paul, R. P., (1981). Robot Manipulators - Mathematics, Programming, and Control, MIT Press,
Cambridge, Massachusetts, USA.
Prescott B, McLean GF. (1997). Line-Based Correction of Radial Lens Distortion, Graphical
Models and Image Processing, Vol. 59, No. 1, pp. 39-47.
Press WH, Teukolsky SA, Flannery BP, Vetterling WT. (1994). Numerical Recipes in Pascal -
The Art of Scientific Computing, New York: Cambridge University Press.
Roth, Z.S., Mooring, B.W. and Ravani, B. (1987). An Overview of Robot Calibration, IEEE
Journal of Robotics and Automation, RA-3, No. 3, pp. 377-85.
Schröer, K. (1993). Theory of kinematic modeling and numerical procedures for robot
calibration, Robot Calibration, Chapman & Hall, London.
Schröer, K., Albright, S. L. and Grethlein, M. (1997), Complete, Minimal and Model-
Continuous Kinematic Models for Robot Calibration, Robotics & Computer-Integrated
Manufacturing, Vol. 13, No. 1, pp. 73-85.
Tsai RY. (1987). A Versatile Camera Calibration Technique for High-Accuracy 3D Machine
Vision Metrology Using Off-the-Shelf TV Cameras and Lenses. IEEE Journal of
Robotics and Automation, RA-3, No. 4, pp. 323-344.
Veitscheggar, K. W. and Wu, C. W. (1986). Robot Accuracy Analysis based on Kinematics.
IEEE Journal of Robotics and Automation, Vol. 2, No. 3, pp. 171-179.
Weng J, Cohen P, Herniou M. (1992). Camera Calibration with Distortion Models and
Accuracy Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence,
Vol. 14, No. 10, pp. 965-980.
Wolf PR. (1983). Elements of Photogrammetry, McGraw-Hill, Singapore.
Zhuang H, Roth ZS, Xu X, Wang K. (1993). Camera Calibration Issues in Robot Calibration
with Eye-on-Hand Configuration. Robotics & Computer Integrated Manufacturing,
Vol. 10, No. 6, pp. 401-412.
Zhuang H, Roth ZS. (1993). A Linear Solution to the Kinematic Parameter Identification of
Robot Manipulators. IEEE Transactions on Robotics and Automation, Vol. 9, No. 2, pp.
174-185.
Zhuang H, Roth ZS. (1996). Camera-Aided Robot Calibration, CRC Press, Boca Raton, FL, USA.
Zhuang, H. (1992). A Complete and Parametrically Continuous Kinematic Model for Robot
Manipulators, IEEE Transactions on Robotics and Automation, Vol. 8, No. 4, pp. 451-63.
7. Accuracy and Calibration Issues of Industrial Manipulators
1. Introduction
Since the initial stages of industrial robot adoption, the driver for their development has been
to replace humans in repetitive, tedious and hazardous manual work. However, owing to rising
production volumes and falling prices, the use of robots has become an economically viable solution
in more and more applications (Hefele & Brenner, 2001). Nowadays, industrial robot
manipulators are an important component of most automated manufacturing systems,
especially in the automotive industry. They have been used efficiently in industry for many
years in mass-production applications, where simple sequences of movements are pre-recorded
in the memory of the robot controller. Nevertheless, manipulators are still rarely
used in industrial applications with frequently changing operations, where continuous
reprogramming is needed, since this requires teaching the points for each operation and
specifying the trajectories for the new sequences of motion using the robots' teach pendants.
This problem can be overcome only if the programs and the points are designed off-line
using manipulator models and simulated work cells. The implementation of off-line
programming (OLP) requires very good manipulator accuracy, which can be achieved
by a process of robot calibration that reduces the effects of kinematic model errors due to
the manufacturing tolerances of the robot elements and the assembly processes, in addition
to changing working-environment conditions and the effects of wear and tear. This means that
the model of the manipulator in the controller should be a precise representation of the
mechanism, allowing the system to achieve the desired manipulator pose (position and
orientation) with minimum offsets.
Calibration is therefore used to improve robot positioning accuracy through software rather
than by changing the mechanical structure or design of the robot itself (Elatta et al., 2004). Since
robots are mechanical devices, they can be affected by slight changes due to manufacturing
tolerances, changes of working-environment temperature, wear of parts and component
replacement (Elatta et al., 2004). Calibration is therefore meant to minimise these effects,
avoid having to modify operation programs, and allow the use of off-line programming
and the effective use of manipulators in more industrial and non-industrial applications.
To achieve high accuracy, the kinematic model used in the controller should be a faithful
mathematical description of the relationship relating the end-effector pose to the
individual joint values. Ideally, this model should account for geometrical and non-
geometrical parameters, which can affect this relationship. However, all industrial
manipulator controllers in the market contain nominal kinematic representations which are
common to all manipulators of the same model. It is for this reason that improving
manipulator accuracy passes through improving the kinematic model, by generating the real
kinematic parameters for each particular manipulator after its assembly.
The work described in this chapter focuses on kinematic model improvement as a means of
improving the accuracy of robots (calibration) without having to impose stricter
manufacturing tolerances, and presents an overview of the major research achievements in
the field. The chapter discusses the issue of calibration in general and the research results
achieved, and describes the procedure developed by the authors in previous work
(Abderrahim & Whittaker, 2000) for the direct identification of the D-H kinematic model
parameters. In addition, the chapter describes recent developments concerning the
improvement of robot manipulator accuracy, which have made their way to the
market and have made “Absolute Accuracy” of industrial robots a reality nowadays ((Renders,
2006) and (Fixel, 2006)).
The remainder of this chapter is organized as follows. The next section describes
repeatability and accuracy, the most important values determining the precision
performance of an industrial manipulator; error sources are also highlighted in that section.
Section 3 gives an overview of industrial manipulator calibration, followed by a detailed
description of calibration methods in Section 4. Examples of the most commonly used robot
simulation software are presented in Section 5. In Section 6, two examples of commercial
calibration procedures are described, and finally conclusions are summarized in Section 7.
orientation of the joints. During manufacturing, variation of part dimensions from one robot
to another is inevitable due to tolerances. According to (Kevin et al., 2000), in
revolute 6-DOF manipulators the lengths of the first three links contribute to the position, while joints
4, 5 and 6 (the wrist) contribute primarily to the orientation of the tool frame. Other parameters
that are not included in kinematic models but affect the accuracy include: elasticity of joints
and links, load proportion and joint deflection, actuator heating, sensor stability and drift,
gearbox flexibility and eccentricity, and environment working temperature (Hefele &
Brenner, 2001). In addition, the end-effector fixing position and the determination of the world
coordinate frame also affect the pose accuracy in the work environment.
3. Calibration Overview
There has been a lot of work on the subject of improving the positioning accuracy of industrial
robot manipulators ((Hayati & Mirmirani, 1985), (Mooring & Pack, 1987), (Driels & Swaze, 1994)
and (Roth et al., 1987) among many others). Most of the authors considered the main source of
errors to be only geometric, with the exception of (Chen et al., 1987) and (Whitney
et al., 1986), who explicitly included non-geometric errors as well. Although some introduced
their own models (Whitney et al., 1986), the majority used models that are universally valid and
widely accepted, such as the Denavit-Hartenberg or modified versions of it ((Khalil et al., 1991),
(Driels, 1993) and (Harb & Burdekin, 1994)). Much previous work is based only on computer
simulations, but some validation work used real measurements ((Chen et al., 1987), (Whitney et
al., 1986), (Stone, 1987), (Stanton & Parker, 1992), (Khalil et al., 1991), (Driels, 1993), (Driels & Pathre,
1994) and (Abderrahim & Whittaker, 2000)), and very recently there are even commercial
applications dedicated to robot calibration ((Renders, 2006) and (Fixel, 2006)). Stone (Stone, 1987)
developed a model and also a novel general identification method to estimate his S-model
parameters, based on the circle-point analysis (Mooring et al., 1991), which led to what he
designated as joint features. These joint features are the plane of rotation, centre of rotation and radius
of rotation. If required, the D-H parameters can then be extracted from the S model. Stone's
motivation for developing his model was that “the D-H Model is not amenable to direct
identification” (Stone, 1987). However, with small modification, the D-H Model has been used
successfully for calibration in several of the papers referred to above. Although the S-Model can
be identified using the method proposed in (Stone, 1987), it still suffers from the same ill-conditioning as
the D-H model when consecutive joint axes are parallel or near parallel. The real problem here is
that the control software of existing industrial manipulators does not use the S-model parameters.
The physically explicit significance of the D-H model and its widespread use in robot control
software make it very desirable to develop identification methods that obtain the D-H
parameters directly, whilst at the same time being able to deal with the case where consecutive joint
axes are parallel. Our procedure proposed in (Abderrahim & Whittaker, 2000) makes use of the
features introduced by Stone, but differs from Stone's work by making more use of the radius of
rotation and by translating the plane of rotation along the axis of rotation to make it coincide with
the D-H X-Y plane. Using the idea developed by (Hayati & Mirmirani, 1985), a new parameter
was introduced to modify the model so that it can deal with the ill-conditioning at parallel or near-
parallel joint axes. In this way the proposed method can be viewed as a combination of Stone's
and Hayati's methodologies. In the method, the position of the end-effector caused by the
movement of only one joint axis is measured, and the process is repeated for each joint in an
inward sequence, starting from joint n and ending at joint 1. In a similar sequence, the measured
Cartesian positions of the end-effector are then used to estimate the features for each
link (joint), which are in turn used to estimate the link parameters.
4. Calibration Methods
Robot calibration is a process by which the real values of the manipulator parameters affecting its
accuracy are established through direct or indirect measurement, and then used to improve the
accuracy by modifying the positioning software. The calibration procedure involves
modelling, measurement, parameter identification and implementation of compensation. The
first step is establishing a model to relate the 3D Cartesian position of the end-effector in terms
of joint values and all other kinematic parameters that affect this function. The next step is to
perform an external accurate measuring of the end-effector Cartesian pose corresponding to
each set of joint values. The measurement should be performed in a number of positions
sufficient for the process of identification. The identification method is chosen according to the
model and how the measurements were taken. Using the measured poses and the corresponding joint
angles, the true kinematic values should be estimated through an identification process.
These new parameters, which deviate from their nominal values, improve the
manipulator's accuracy when implemented in the controller. It has to be noted that, in the case
of establishing the non-geometric sources of errors and evaluating their effects, a similar
approach and steps have to be followed. Nowadays, calibration techniques and absolute
accuracy are provided by major manipulator manufacturers such as ABB (Fixel, 2006) and
KUKA. Access to modify the controller is not straightforward, and may not be possible for
third parties. In this case, the alternative method of compensation can be established off-line,
where compensated end-effector Cartesian positions are calculated using the “real” estimated
inverse kinematic model. The latter method is offered nowadays by calibration and metrology
equipment manufacturers such as Metris (Metris, 2005). In the rest of this section we develop
the mentioned steps covering the method described in our previous work and other new
developments by others with emphasis on existing industrial solutions.
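The four steps above (modelling, measurement, identification, compensation) can be sketched on a deliberately simple, hypothetical example: a planar two-link arm whose second link carries a small manufacturing error. Everything below (the arm, the 4 mm error, the variable names) is invented for illustration and is not taken from the chapter:

```python
import numpy as np

# Toy planar 2R arm (all values invented for illustration).
L1, L2_NOMINAL = 0.40, 0.30          # nominal link lengths [m]
L2_REAL = L2_NOMINAL + 0.004         # real arm carries a 4 mm manufacturing error

def fk(theta1, theta2, l2):
    """Forward kinematics: end-effector position for a given second link length."""
    return np.array([L1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2),
                     L1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)])

# 1) Measurement: end-effector positions recorded at 20 joint configurations.
rng = np.random.default_rng(0)
joints = rng.uniform(-1.2, 1.2, size=(20, 2))
measured = np.array([fk(t1, t2, L2_REAL) for t1, t2 in joints])

# 2) Identification: the position is linear in l2, so a linear least-squares
#    regression of the residual (measurement minus first-link contribution)
#    on [cos(t1+t2), sin(t1+t2)] recovers the real length.
A = np.concatenate([[np.cos(t1 + t2), np.sin(t1 + t2)]
                    for t1, t2 in joints]).reshape(-1, 1)
b = np.concatenate([m - L1 * np.array([np.cos(t1), np.sin(t1)])
                    for m, (t1, t2) in zip(measured, joints)])
l2_identified, *_ = np.linalg.lstsq(A, b, rcond=None)

# 3) Compensation: the controller model is updated with the identified value.
print(f"identified L2 = {l2_identified[0]:.4f} m")   # recovers 0.304
```

Real calibration involves many coupled, partly nonlinear parameters, but the structure — excite the robot, measure externally, fit the model, feed the fitted parameters back — is the same.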
4.1. Modelling
Coordinate frame kinematic models are based upon the assignment of Cartesian coordinate
frames fixed relative to each link of the robot arm. The Denavit-Hartenberg (D-H) model has
been widely adopted in robotics due to its explicit physical interpretation of the mechanisms
and the relatively easy implementation in the programming of robot manipulators. Other
models are mentioned in the overview section above, where the mentioned references can
be consulted for in-depth information. Since the D-H model was used in our work and due
to its practical relevance it is the only one covered in this section. The D-H model is
presented as a 4×4 matrix that results from the following product:
Accuracy and Calibration Issues of Industrial Manipulators 135
T = A1 ⋅ A2 ⋅ … ⋅ An , (1)
where T denotes the transformation of the link n coordinate frame into the base coordinate frame
of the robot arm. Ai designates the D-H transformation matrix relating frame i to frame i-1. The
nth link frame coincides with the end-effector's coordinate frame. Fig. 2 illustrates the spatial
relative position of two consecutive links and their associated coordinate frames.
[Fig. 2: links i−1 and i with their joint axes and the D-H parameters θi and αi−1.]
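Eq. (1) can be sketched as a chain product of homogeneous transforms. The following uses the standard D-H form of the individual Ai (as in Paul, 1981); the three-link parameter table is an arbitrary illustration, not a real robot:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard D-H homogeneous transform A_i relating frame i to frame i-1."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Eq. (1): T = A1 . A2 . ... . An, link-n frame expressed in the base frame."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Illustrative 3-link table of (theta, d, a, alpha) values -- not a real robot.
params = [(np.pi / 4, 0.3, 0.0, -np.pi / 2),
          (np.pi / 6, 0.0, 0.4, 0.0),
          (-np.pi / 3, 0.0, 0.3, 0.0)]
T = forward_kinematics(params)
print(T[:3, 3])   # end-effector position in the base frame
```

Running the same routine with nominal and with identified parameter tables quantifies the pose offset that calibration removes.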
4.2. Measurement
This section describes systems that are capable of providing measurements accurate
enough for use in the calibration process. The first system described is the
one used in our previous work (Abderrahim & Whittaker, 2000), and the second is chosen
because it represents one of the few effective industrial solutions existing in the market.
During the mentioned work the instrument used to measure the Cartesian end-effector position of
the Puma manipulator, relative to the robot base frame, was a laser tracking system called
“Optotrac” designed and built at the University of Surrey (Mayer & Parker, 1988). The instrument
consists of two tracking sub-systems that each directs a laser beam towards a retro-reflective
target, attached to the end-effector of the robot arm, by using two orthogonally mounted optical
scanner units. The resultant tracking errors from driving the scanners are used to calculate the 3-
dimensional position of the target. The instrument has a repeatability of ±0.1 mm (Stanton &
Parker, 1992). Simultaneously with the target position measurements, the angular positions of the
first three links were measured by extracting the relevant encoder signals from the robot
controller, using hardware developed at Glasgow University (Abderrahim & Whittaker, 1996), as
shown in fig. 3. Hand-shaking signals were established to synchronise the measurements taken by the two
systems. The Optotrac set-up takes a very long time, which makes using it for industrial applications
much less effective than existing industrial solutions nowadays, such as the Krypton K600 and
K400 (Renders, 2006) and the Leica Lasertracker LTD 500 (Fixel, 2006). It is worth mentioning that
all these systems would have served our measurement requirements well.
Fig. 3. The measurement system including the Optotrac & motor angles and currents
acquisition hardware.
The second measurement system described here is the Krypton K600 solution. The main
piece of the K series measurement system is the camera unit, consisting of three linear
CCD cameras, shown in fig. 4. The camera system relies on active infrared LEDs, which
therefore cannot be seen by the human eye. When a LED is picked up by the three linear
cameras, the computer calculates its exact position in 3D space.
The calculation is achieved by comparing the images of the 3 linear CCD cameras, which
amounts to having 3 planes intersecting at the LED position; this position is then calculated relative
to the pre-calibrated camera, as illustrated in fig. 5. According to the manufacturer, the system is
capable of tracking up to 256 LEDs simultaneously through computer-controlled strobing.
This simultaneous multiple-point tracking allows the measurement of the position and
orientation of objects by attaching 3 or more LEDs to them and measuring their positions
simultaneously. The system has a single-point accuracy starting at 60 μm and is capable of
measuring targets at a 6 m distance from the camera.
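The triangulation amounts to intersecting three planes, one per linear camera, at the LED; with known plane equations this reduces to a single 3×3 linear solve. The plane coefficients below are invented for illustration:

```python
import numpy as np

# Each linear CCD constrains the LED to one plane n_i . p = d_i
# (coefficients are illustrative, not real camera geometry).
N = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.3],
              [0.2, 0.1, 1.0]])     # stacked plane normals, one row per camera
d = np.array([0.5, 0.8, 1.1])      # plane offsets

# The LED position is the unique intersection point of the three planes.
led = np.linalg.solve(N, d)
print("LED position:", led)
```

In the real system the plane equations come from the pre-calibrated camera model, and the solve is repeated per LED per strobe cycle.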
The joint features are identified link by link. In a similar manner to Stone, we identify first the
plane of rotation: the plane containing a circle described by a point on a rotating link, followed by
the centre of rotation: the centre of that circle which will be situated on the joint axis. We also
explicitly identify the radius of rotation. We next introduce a novel translation of the plane of
rotation along the axis of rotation so that it coincides with the D-H X-Y plane. This translation,
along with the three identified features, allows the D-H model parameters to be extracted.
z = Ex + Fy + G = φᵀΘ    (5)
where φ = [x, y, 1]ᵀ and Θ = [E, F, G]ᵀ are defined as the information and
parameter vectors. The z coordinate is defined to be the output of Eq. (4). A simple
regression of z on x, y and 1 corresponds to the minimisation of the sum of the squared
errors in the z coordinate.
Ξ = Σ_{j=1..m} (z − zj)² = Σ_{j=1..m} (φᵀΘ − zj)²    (6)
The minimisation of this expression, by equating its derivative to zero, yields the linear least-
squares solution (Hsia, 1977), where it is assumed that the coordinates of the
information vector are independent variables measured without error. Three
points of the measured target positions are used for the formation of the new
coordinate frame. These three points uniquely form an initial approximation of the x-y
plane, hence the plane of rotation. The points are chosen to be mutually most distant
from one another and are denoted by p_k, p_l and p_m, with k < l < m, in order to
preserve the sense of rotation. Fig. 6 assists in the understanding of the formation of
the new coordinate frame where the plane of rotation is estimated.
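Under the assumptions of Eqs. (5)–(6), the plane coefficients follow from an ordinary linear least-squares regression. A minimal sketch with synthetic target positions (all values invented):

```python
import numpy as np

# Synthetic measured target positions lying near the plane z = E*x + F*y + G.
E_TRUE, F_TRUE, G_TRUE = 0.2, -0.5, 1.3
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
z = E_TRUE * x + F_TRUE * y + G_TRUE + rng.normal(0, 1e-4, 30)

# Information matrix: one row phi = [x, y, 1] per measured point.
phi = np.column_stack([x, y, np.ones_like(x)])

# Minimise sum((phi . Theta - z)^2) over Theta = [E, F, G] (Eq. (6)).
theta, *_ = np.linalg.lstsq(phi, z, rcond=None)
E, F, G = theta
print(f"E={E:.3f}, F={F:.3f}, G={G:.3f}")
```

As the text notes, this treats x and y as error-free regressors, so in practice the data are first rotated into a frame where the plane is nearly horizontal before fitting.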
Fig. 6. The formation of new coordinate frame from the measured points.
The x-axis is chosen to be parallel to the line joining p_k to p_l. The z-axis is perpendicular to
this axis and to the line joining p_k to p_m. The y-axis completes the orthogonal system. The
origin of this initial coordinate frame is coincident with the origin of the frame in which the
data are measured (presented), which in our case is the robot arm base coordinate frame.
The homogeneous transformation matrix between the measurement frame and the new
frame C0 is given by:
Ri = ⎡ n  o  a  0 ⎤    (7)
     ⎣ 0  0  0  1 ⎦
where n, o and a are the unit vectors of the newly formed coordinate frame, so that Ri is
a pure rotation. The measured data can be transferred to the newly formed frame C0,
where a least-squares regression is executed to calculate the coefficients of the plane of
rotation, which in turn are transferred back to the measurement coordinate frame. The detailed
procedure and justification are treated in (Abderrahim & Whittaker, 2000). In the procedure
used in the current work, unless the target position is already on the nominal D-H X-Y
plane, the plane of rotation for each link should now be translated to coincide with the D-H
X-Y plane. This is first done by using the offsets between the nth axis and the target attached
to the nth link. Consequently, accurate knowledge of the target location with reference to
the last link is essential. In the case of other links, the previously estimated parameters are
used. If for some reason this information is not available, nominal lengths could be used.
The translation is along the z-axis of the Ci for each link.
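The frame construction from the three chosen points can be sketched as follows; the point coordinates are illustrative, and n, o, a are the unit vectors that appear in Eq. (7):

```python
import numpy as np

def frame_from_points(pk, pl, pm):
    """Build the orthonormal frame C0: x-axis along pk->pl, z-axis normal to
    the plane through pk, pl, pm; returns the unit vectors n, o, a."""
    n = pl - pk
    n = n / np.linalg.norm(n)          # x-axis (vector n)
    a = np.cross(n, pm - pk)
    a = a / np.linalg.norm(a)          # z-axis (vector a), normal to the plane
    o = np.cross(a, n)                 # y-axis completes the right-handed set
    return n, o, a

# Illustrative measured target positions (not real data).
pk = np.array([0.1, 0.0, 0.2])
pl = np.array([0.6, 0.1, 0.2])
pm = np.array([0.3, 0.5, 0.2])

n, o, a = frame_from_points(pk, pl, pm)
R = np.column_stack([n, o, a])         # rotation part of Eq. (7)
```

Choosing p_k, p_l, p_m mutually most distant, as the text prescribes, keeps the cross products well conditioned.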
By a similar least-squares procedure, the coordinates of the centre of rotation p_i can be
identified. The radius of rotation is then easily established.
The next step is the coordinate frames construction and the D-H parameters identification.
This part makes use of the set of the n estimated normal vectors to the planes of rotation,
a_i, and the n centres of rotation, p_i, to specify the locations and orientations of the D-H model
link coordinate frames. First, partial D-H models are specified to define the location and
orientation of the individual D-H coordinate frames in terms of the base frame of the robot
arm. These are given by:
Ti = A1 ⋅ A2 ⋅… ⋅ Ai (8)
Ti = Ti-1 ⋅ Ai   and   Ai = Ti-1⁻¹ ⋅ Ti    (9)

Ti = ⎡ ni  oi  ai  pi ⎤    (10)
     ⎣ 0   0   0   1  ⎦
Using the results of expressions (9) and (10) and equating them to the expression of Ai given in
(3), the parameters forming the matrices can be calculated in a backward iteration from the
last joint to the first, and the errors affecting the kinematic model parameters can be extracted.
This method was tested experimentally, and the results proved its effectiveness (Abderrahim &
Whittaker, 2000).
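A minimal sketch of Eq. (9): given the cumulative frames Ti, the individual link transforms Ai are recovered by inverting the previous frame, iterating backwards from the last joint. The link transforms used here for the round-trip check are invented:

```python
import numpy as np

def extract_link_transforms(T_list):
    """Eq. (9): A_i = T_{i-1}^{-1} . T_i, computed in a backward iteration
    from the last joint to the first (T_0 is the base frame, i.e. identity)."""
    A_list = [None] * len(T_list)
    for i in range(len(T_list) - 1, -1, -1):
        T_prev = T_list[i - 1] if i > 0 else np.eye(4)
        A_list[i] = np.linalg.inv(T_prev) @ T_list[i]
    return A_list

# Round-trip check with invented link transforms: build the cumulative
# frames of Eq. (8), then recover the individual A_i.
def rot_z(q):
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0.2], [0, 0, 0, 1]])

A_true = [rot_z(0.3), rot_z(-0.7), rot_z(1.1)]
T_list, T = [], np.eye(4)
for A in A_true:
    T = T @ A
    T_list.append(T.copy())

A_rec = extract_link_transforms(T_list)
```

In the identification itself the Ti come from the measured features rather than from known Ai, and the extracted Ai are then matched term by term against the D-H form to read off the parameters.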
reduced or completely eliminated if the robot is calibrated and the “true” compensated model is
transferred to the simulation environment. In this case the whole production line can be
simulated and robot programs can be simply downloaded from the simulation environment to
the actual robot on the shop floor with no “touch-up”. This also means that a failing robot can be
replaced by another “absolute accuracy” robot with minimum disruption to the production line
(Hefele & Brenner, 2001).
As samples of such computer simulation software, a brief description of the three packages
best known to the authors is given next:
• IGRIP (Interactive Graphics Robot Instruction Program): developed by the
company Delmia, is a Robotic Simulation and Off-Line Programming Solution.
According to the company, it permits the design, simulation and analysis of the
complete manufacturing cell. IGRIP includes the most comprehensive library of
robot models available, with more than 500 industrial robot models, including the
latest from FANUC, ABB, Motoman and Nachi. Application-specific devices such as
weld guns, work piece positioners, rails, gantries, and workpiece clamps are
available in options such as Arc, Spot, and Paint (Delmia, 2003).
• RobotStudio: is simulation software developed by ABB and reflects the group’s
great effort to produce such a mature product. RobotStudio is built on the ABB
VirtualRobot, which is an exact copy of the real software that runs the robots in
production, and can generate exact RAPID programs. It allows very realistic
simulations to be performed, using real robot programs and configuration files
identical to those used on the shop floor. It is therefore easy to transfer robot
programs and configuration parameters between a robot and the computer.
According to ABB, the virtual robot technology concept makes the company the only
supplier offering true off-line programming, with no translation or touch-up. In
addition, the company’s calibration software is built on the RobotStudio platform,
and that is how it uses a virtual model identical to the real calibrated robots (see
www.abb.com/ for more details). RobotStudio provides the tools to increase the
robot system profitability by allowing users to perform tasks such as training and
optimization without disturbing production.
• Festo COSIMIR Professional: is a PC-based software package for 3D modelling
(including fault finding and design optimization), programming and performance
emulation of various industrial robot systems. Modelling with COSIMIR Professional
is made simple by virtue of the libraries of robots, end effectors and automation
components. The program also imports several CAD formats. The programming
feature supported by COSIMIR Professional permits off-line programming of the most
commonly used robots, allowing syntax checks and automatic integration of position
lists, automatic creation of complex parameter assignments, and upload and
download to and from Kuka, ABB, and all Mitsubishi controllers. The simulation
mode permits the testing of cell and tool designs. All motion sequences and handling
processes can be instantly simulated in the modelled work cell to avoid collisions
and optimise cycle times (see www.festo.com/ for more details). Compilers for more
robot models can be integrated into the software at any time to produce realistic
simulations and analysis of the designed process (Karras, 2003). According to recent
consultation with Festo GmbH, the software has the capability of connecting the real
robot controller to the simulation PC to produce precise cycle times, permitting real
optimisation and analysis.
Fig. 7. Geometry deviations and joint deflection due to load (Courtesy of ABB).
The calibration itself uses 100 positions randomly distributed within the robot working
area, giving excitation to all robot axes. From the results, the set of parameters that best
minimizes the error between the model and the real robot is calculated. The robot positioning
accuracy is then verified by running 50 more positions and calculating the difference
between the ordered and measured positions, according to the set-up shown in fig. 8.
[Fig. 8. Calibration set-up: ≈100 robot positions measured by a laser tracker are processed by CalibWare to produce the robot birth certificate and compensation parameters (CFG file).]
Fig. 9. A minimum of 3 LEDs are attached to the tool frame during calibration to obtain the
complete pose (6D). (Courtesy of Metris).
Since Metris is not a robot manufacturer, their system of measurement and calibration targets all
robot makes and models. According to (Renders, 2006), it takes only about 1 hour of work to
include a new robot model; all that is needed is a description of all joint axes with respect to the base
frame of the robot. As for the compensation of the errors, this is done on the target positions of the
program (off-line). Compensation at the controller level is more difficult and requires more
effort. If this is required, another software module (RETRAC) and the new robot model are
provided by Metris to be integrated into the controller path planner. Any robot that
connects to that controller can then run the Absolute Accuracy version.
The combination of the ROCAL software and the K series measurement system is able to calibrate
the entire cell and hence provide the link between simulation and the real world. The software
provides three types of calibration routines: robot calibration, tool calibration and environment
calibration. In tool calibration, LEDs are fixed to the tool and measurements are performed to
establish the tool TCP frame, as illustrated in fig. 10. In environment calibration, the
CMM capabilities of the measurement system are used to register reference points of the
environment relative to the robot and establish a fixture frame.
Fig. 10. During tool calibration LEDs are fixed to the tool (Courtesy of Metris).
For illustration, a real example of the accuracy achieved after calibration of a 159 kg payload
industrial robot manipulator holding its full load is presented next. The original pose accuracy
before calibration was 3.25 mm and 5.43 mrad. After complete calibration these figures are
brought down to 0.29 mm and 0.35 mrad.
If only a subset of parameters is calculated in a partial calibration, the achieved accuracy is a little
worse than mentioned above, but it still improves the Cartesian positioning of the robot a
great deal. For the same manipulator, encoder-offset calibration achieves a
complete pose accuracy of 1.44 mm and 3.87 mrad, while geometry-only calibration
guarantees an accuracy of about 0.72 mm and 2.12 mrad.
The ROCAL software and the K600 system together make a powerful tool, which makes it possible to
move a 150 kg robot load with under 0.5 mm average error in its entire working volume.
7. Conclusion
This chapter discussed the accuracy issues of robot manipulators and the importance of
searching for absolute accuracy for industrial manipulators. It shows how there is an
increased need to use off-line programming and robotic and manufacturing simulation
software to analyse and optimise the processes and trajectories of the manipulators involved.
Throughout the chapter it has been emphasised that without absolute accuracy there can be no
off-line programming of robots, and without calibration there can be no absolute accuracy.
With the combination of OLP and absolute accuracy, production line design and development
in manufacturing can therefore be carried out in record time through realistic simulations.
Programs can be downloaded directly to the robots without touch-up and corrections, which
translates into reduced downtime, greater efficiency, and easy and rapid duplication of
production lines or even complete cells.
The text therefore presents an overview of the work and methods of accuracy improvement
and explains the main steps to be followed in this process. Existing commercial solutions
have been described to give the reader an idea of the state of the art of the technology and
where to find it. Obviously these are not the only commercial solutions, but they have been
chosen because, to the best knowledge of the authors, they are among the best solutions and
they provided all the information needed for our evaluation.
8. Acknowledgements
The authors are grateful for the feedback and the constructive discussions with Peter Fixel from
ABB and Steven Renders from Metris, especially for providing documentation and permission to
use the graphics. Thanks are extended to Fred Zierau from Festo USA for his assistance.
9. References
Abderrahim, M. & Whittaker, A. R. (1996). Design and construction of a low cost
        instrumentation system for an industrial robot, Proc. of the Int. Conf.
        MECHATRONICS and M2VIP, pp. 369-74, Guimaraes, Portugal, September 1996.
Abderrahim, M. & Whittaker, A. R. (2000). Kinematic model identification of industrial
        manipulators, Robotics and Computer-Integrated Manufacturing, Vol. 16, Nº 1,
        February 2000, pp. 1-8.
Beyer, L. & Wulfsberg, J. (2004). Practical robot calibration with ROSY, Robotica, Vol. 22,
        pp. 505-12, Sep 2004, Cambridge University Press (UK), ISSN: 0263-5747.
Chen, J. & Chao, L-M. (1987). Positioning error analysis for robot manipulators with all rotary
        joints, IEEE J. of Robotics and Automat., Vol. 3, Nº 6, 1987, pp. 539-45.
DELMIA Corporation (2003). IGRIP Documentation, Auburn Hills, Mich.: DELMIA
        Corporation, 2003. www.delmia.com
Driels, M. R. & Pathre, U. S. (1994). Robot calibration using an automated theodolite, Int. J.
        Adv. Manuf. Technol., Vol. 9, Nº 2, pp. 114-25, 1994.
Driels, M. R. & Swaze, W. E. (1994). Automated partial pose measurement system for manipulator
        calibration experiments, IEEE Trans. Robotics Automat., Vol. 10, Nº 4, 1994, pp. 430-440.
Driels, M. R. (1993). Using passive end-point motion constraints to calibrate robot manipulators,
        J. Dyn. Systems Meas. Control, Trans. ASME, Vol. 115, Nº 3, 1993, pp. 560-6.
Elatta, A. Y.; Gen, L. P.; Zhi, F. L.; Daoyuan, Y. & Fei, L. (2004). An overview of robot calibration,
        Information Technology Journal, Vol. 3, Nº 1, 2004, pp. 74-78, ISSN 1682-6027.
Fixel, P. (2006). Absolute Accuracy Marketing Presentation, ABB Automation Technologies AB,
        peter.fixell@se.abb.com, June 2006.
Harb, S. M. & Burdekin, M. (1994). A systematic approach to identify the error motion of an n-degree
        of freedom manipulator, Int. J. Adv. Manuf. Technol., Vol. Nº 2, 1994, pp. 126-33,
Calibration of Serial Manipulators: theory and applications
giovanni.legnani@ing.unibs.it
1. Introduction
Kinematic calibration is a procedure to improve manipulator accuracy without mechanical
means, by acting on the manipulator controller.
Although manipulators are composed of accurate mechanical components, the precision of their
motion is affected by many sources of error (Mooring et al, 1991). The final position accuracy is
mainly influenced by: kinematic inaccuracy (due to manufacturing and assembly errors in both
actuated and passive joints), load deformation (due to external forces, including gravity) and
thermal deformation (Reinhar et al, 2004). This is true for serial (Mooring et al, 1991) as well as
for parallel (Wildenberg, 2000) manipulators. Each of these factors should be addressed with
an appropriate compensation or calibration methodology. This work deals with kinematic
inaccuracy, related to robot geometry, assembly and joint motion.
One possibility to compensate for geometrical errors is to perform a kinematic calibration. The
robot is requested to reach some desired poses and the actual poses reached are measured.
Then, the exact robot geometry is estimated by analyzing the difference between the desired and
the reached poses. This procedure requires a parametric identification of the manipulator, which
consists in the formulation of a geometrical model of the robot in which each source of error is
represented by a parameter. The parameter set includes link lengths, joint axes inclinations and
joint coordinate offsets. The calibration consists in identifying the actual values of all these
parameters. Once this operation is performed, it is possible to predict the pose error for any
robot configuration and so to compensate for it by suitable joint motions.
The aim of this work is to address all the steps of the procedure which includes:
1. The development of a suitable kinematic model of a general serial manipulator;
2. One example of collection of experimental data;
3. The estimation of the numerical value of the manipulator parameters;
4. The error compensation.
For each phase different alternatives are described and critically compared.
A relevant part of the chapter summarises, compares and extends the existing techniques
used to generate a suitable parametric kinematic model for a general serial manipulator.
Rules for the automatic generation of models with different characteristics are developed.
These results will be used in a subsequent chapter for the development of a parametric
kinematic model for a general parallel manipulator (PKM).
An effective model for robot calibration must describe all the possible sources of error (it should
be Complete) and, to avoid singularities, small geometrical errors must be described by small
changes in the values of the corresponding parameters (Parametrically Continuous or
Proportional). Such a model is often referred to as CPC (Zhuang & Roth, 1992; Mooring et al, 1991).
Furthermore, if each source of error is described by one parameter only (absence of
redundant parameters), the model is called Minimum; this allows a unique numerical
solution to be obtained from the calibration process. In this paper such models are called MCPC
(Minimum, Complete and Parametrically Continuous).
For a given robot, more than one valid MCPC model can be defined; generally speaking they
are equivalent to each other, but each of them has different characteristics. Some
comparisons between the different models are contained in the following Sections. A discussion
of the accuracy achievable with the different models in the presence of noise or measuring
errors is outside the scope of this paper.
Since the aim of this work is the analysis of geometrical errors, it will be assumed that the
robot is composed of rigid links connected by ‘ideal’ joints (without backlash). All the
possible sources of error will be considered constant. It will also be assumed that the
actuators are directly connected to the manipulator joints, so that errors in the
kinematics of the transmissions can be neglected here. These hypotheses are
reasonable for many industrial manipulators.
In Sections 2 and 3 the basic concepts of the calibration procedure are presented. In Section
4 the general formula for the determination of the number of parameters is discussed. Some
different approaches used for serial robots are reviewed (Secs. 5 and 6) and compared (Sec. 7),
and a modified one is proposed (Sec. 8). After the explanation of an elimination procedure
for the redundant parameters (Sec. 9), the calibration procedure for two different robots is
discussed (Secs. 10 and 11). Eventually, Sec. 12 draws the conclusions.
Ji : i-th joint                      T(u, a) : translation of a along axis u          // : parallel
R : revolute joint                   R(u, ϕ) : rotation of ϕ around axis u            ∦ : not parallel
P : prismatic joint                  R(x, y, z, α, β, γ) : generic 3D rotation        ⊥ : orthogonal
R : number of revolute joints        Δa, Δb, Δc : displacements along x, y, z
P : number of prismatic joints       Δα, Δβ, Δγ : rotations around x, y, z
Table 1. Symbols and abbreviations.
2. Methodological Bases
When choosing a parameter set to describe errors in a manipulator geometry, many
different approaches can be followed; two of the most common are:
• Extended Denavit and Hartenberg approach (‘ED&H’);
• Incremental approach (‘Inc’).
When an ‘ED&H’ approach is adopted, a specific set of parameters is chosen to describe the
robot structure (Denavit & Hartenberg, 1955) and errors are represented by variations of
these parameters. The direct kinematics is represented as

S = F(Q, Λn + ΔΛ)     (1)

where S = [x, y, z, α, β, γ]t represents the gripper pose, Q = [q1, …, qn]t is the vector of the joint
coordinates, Λn = [λ1, λ2, …, λN]t is the vector of the nominal structural parameters and ΔΛ is
the vector of the parameter errors.
Calibration of Serial Manipulators: theory and applications. 149
The joint coordinates Qd = G(Sd, Λn) are evaluated using the inverse kinematics model G(…),
based on the nominal robot parameters Λn. The error in the gripper pose will be:
ΔS = Sa − S d (4)
Assuming that the magnitude of the parameter errors is sufficiently small, the equations can
be linearized as:

ΔS ≅ JΛ · ΔΛ,   JΛ = ∂F/∂Λ     (5)

where JΛ is the jacobian matrix evaluated for Q = Qd and Λ = Λn.
If the value of ΔS can be measured for a sufficient number of poses, it is possible to estimate
the value of ΔΛ. The required equations are obtained rewriting Eq. (5) for each pose. A
graphical representation of the procedure is presented in Figure 1.
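To make Eq. (5) concrete, here is a minimal numerical sketch (an illustration only: the 2-link planar arm, its link lengths taken as the structural parameters Λ, and all numeric values are assumptions, not the chapter's robot). The jacobian JΛ is built column by column with finite differences and the linearized pose error JΛ·ΔΛ is compared with the true one:

```python
import numpy as np

def fk(q, lam):
    """Forward kinematics F(Q, Lambda) of a toy planar 2R arm: tip position (x, y)."""
    l1, l2 = lam
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def jacobian_lambda(q, lam, eps=1e-7):
    """J_Lambda = dF/dLambda, one column per structural parameter (central differences)."""
    lam = np.asarray(lam, float)
    cols = []
    for i in range(len(lam)):
        d = np.zeros_like(lam); d[i] = eps
        cols.append((fk(q, lam + d) - fk(q, lam - d)) / (2 * eps))
    return np.column_stack(cols)

q = np.array([0.3, 0.7])           # joint coordinates Q
lam_n = np.array([1.0, 0.8])       # nominal parameters Lambda_n
dlam = np.array([0.003, -0.002])   # small parameter errors Delta_Lambda

dS_true = fk(q, lam_n + dlam) - fk(q, lam_n)   # actual pose error Delta_S
dS_lin = jacobian_lambda(q, lam_n) @ dlam      # linearized prediction J*Delta_Lambda
print(np.allclose(dS_true, dS_lin, atol=1e-6))
```

Since the tip position of a 2R arm is linear in the link lengths, the linearization is exact here; for angular parameters the approximation of Eq. (5) holds only for small ΔΛ.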
Fig. 1. Calibration of a robot using the pose measuring approach. The desired pose Sd is
converted by the inverse kinematics (based on the nominal parameters) into the joint
coordinates Qd; the actual robot reaches the pose Sa, which is compared with the pose
S* = F(Qd, Λn + ΔΛ) predicted by the direct kinematic model; the structural parameters ΔΛ
are estimated by minimising ||ΔS||.
ΔΛ is the vector containing the geometrical parameter errors, and ∂G/∂Λ is evaluated for S = Sd
and Λ = Λn. Eq. (8) for pose matching is the equivalent of Eq. (5) for pose measuring. Since
∂G/∂Λ is generally not available, an alternative formulation can be used.
Linearizing the direct kinematics equation we have:

Sa ≅ F(Qd, Λn) + (∂F/∂Λ) ΔΛ + (∂F/∂Q) (Qa − Qd)

and, since Sa has been forced to be equal to F(Qd, Λn), remembering Eq. (8) we get:

(∂F/∂Λ) ΔΛ + (∂F/∂Q) ΔQ ≅ 0   ⇒   ∂G/∂Λ = −(∂F/∂Q)⁻¹ (∂F/∂Λ)     (9)
The number of geometrical parameters N is greater than the number of joint parameters
that can be measured for each gripper pose, but if the value of ΔQ can be measured for a
sufficient number of poses, it is possible to estimate the value of ΔΛ. The required
equations are obtained by rewriting Eq. (9) for all the poses. A graphical representation of
the procedure is given in Figure 2.
Fig. 2. Calibration of a robot using the pose matching method. The robot is forced to the
pose Sd and the actual joint coordinates Qa are read; the structural parameters ΔΛ are
estimated by minimising ||ΔQ||.
Fig. 3. Calibration of a robot using the pose matching method (alternative approach). The
robot is forced to the pose Sa and the achieved joint coordinates Qa are read; the pose
S* = F(Qa, Λn + ΔΛ) predicted by the direct kinematic model is compared with Sa, and the
structural parameters ΔΛ are estimated by minimising ||ΔS||.
Eop = (1/k) Σh=1..k ||Sah − F(Qah, Λn + ΔΛ)||

where the subscript h is used to scan the k measured poses, Sah is the h-th imposed gripper
pose, Qah is the vector of the corresponding joint coordinates, and F(·) is the predicted value
of S. In the theoretical error-free case, if the value of ΔΛ were exactly evaluated, Eop would be
exactly null. The algorithm, after several (often many) iterations, gives an estimation of the
value of ΔΛ which minimises Eop; Eop is also called the residual.
Amoeba, the optimization procedure we adopted, is adapted from (Flannery et al. 1992).
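As an illustration of this derivative-free approach, the sketch below implements a minimal downhill-simplex (Nelder-Mead) loop in the spirit of the Amoeba routine and uses it to minimise a residual Eop for a hypothetical planar 2R arm whose two link lengths are the unknown parameters. This is an assumption-laden sketch, not the authors' implementation (which handles 25+ parameters):

```python
import numpy as np

def fk(q, lam):  # tip position of a toy planar 2R arm (assumed model)
    l1, l2 = lam
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

# Simulated noise-free measurements S_ah at joint configurations Q_ah.
lam_true = np.array([1.03, 0.78])
Q = [np.array([a, b]) for a in (0.2, 0.9, 1.6) for b in (0.4, 1.1)]
S = [fk(q, lam_true) for q in Q]

def E_op(lam):  # residual: mean norm of the pose errors, as in the text
    return np.mean([np.linalg.norm(s - fk(q, lam)) for q, s in zip(Q, S)])

def amoeba(f, x0, step=0.05, iters=300):
    """Minimal Nelder-Mead simplex: reflect, expand, contract, shrink."""
    pts = [np.array(x0, float)]
    for i in range(len(x0)):
        p = np.array(x0, float); p[i] += step; pts.append(p)
    for _ in range(iters):
        pts.sort(key=f)                       # pts[0] best, pts[-1] worst
        centroid = np.mean(pts[:-1], axis=0)
        refl = centroid + (centroid - pts[-1])
        if f(refl) < f(pts[0]):
            exp = centroid + 2.0 * (centroid - pts[-1])
            pts[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(pts[-2]):
            pts[-1] = refl
        else:
            contr = centroid + 0.5 * (pts[-1] - centroid)
            if f(contr) < f(pts[-1]):
                pts[-1] = contr
            else:                             # shrink towards the best vertex
                pts = [pts[0] + 0.5 * (p - pts[0]) for p in pts]
    return min(pts, key=f)

lam_est = amoeba(E_op, x0=[1.0, 0.8])  # start from the nominal parameters
```

Starting from the nominal geometry, the simplex shrinks onto the true link lengths without ever computing a derivative of Eop, which is the main appeal of this family of methods.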
Iterative linearization of the equations (‘linear’): The second method tested consists in
writing Eq. (5) for a sufficient number of poses and grouping all the equations in a linear
system that can be solved to find ΔΛ:

A · ΔΛ = b,   A = [A1; …; Ah; …; Ak],   b = [b1; …; bh; …; bk]     (10)

where the row blocks Ah and bh, one per pose, are stacked vertically.

Λj+1 = Λj + ΔΛj   with   Λ0 = Λn
This procedure is repeated iteratively until the average error Eit reaches a stable minimum:

Eit = (1/k) bT b

Eit is also called the residual.
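The iterative linear scheme can be sketched as follows, again for a hypothetical planar 2R arm with the two link lengths as parameters (an illustrative stand-in for a full robot model): at each iteration the rows JΛ·ΔΛ = ΔS of Eq. (5) are stacked for all poses into A·ΔΛ = b, the system is solved in the least-squares sense, and the parameters are updated as Λj+1 = Λj + ΔΛj:

```python
import numpy as np

def fk(q, lam):  # tip of a toy planar 2R arm
    l1, l2 = lam
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def jac(q, lam, eps=1e-7):  # finite-difference dF/dLambda
    cols = []
    for i in range(len(lam)):
        d = np.zeros(len(lam)); d[i] = eps
        cols.append((fk(q, lam + d) - fk(q, lam - d)) / (2 * eps))
    return np.column_stack(cols)

lam_true = np.array([1.04, 0.77])   # actual geometry (to be recovered)
lam = np.array([1.0, 0.8])          # current estimate, starts at the nominal values
Q = [np.array([a, b]) for a in (0.1, 0.8, 1.5) for b in (0.3, 1.2)]
S = [fk(q, lam_true) for q in Q]    # "measured" poses (noise-free here)

for _ in range(5):                  # Lambda(j+1) = Lambda(j) + Delta_Lambda(j)
    A = np.vstack([jac(q, lam) for q in Q])                      # stacked A_h
    b = np.concatenate([s - fk(q, lam) for q, s in zip(Q, S)])   # stacked b_h
    dlam, *_ = np.linalg.lstsq(A, b, rcond=None)
    lam = lam + dlam
    E_it = b @ b / len(Q)           # residual index E_it of the current iteration
print(lam)  # converges to the actual link lengths
```

With noise-free data and a model that is linear in the chosen parameters, one iteration is already exact; for angular parameters or noisy measurements several iterations are needed before Eit stabilises.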
An extended Kalman filter (‘kalman’): A Kalman filter is a mathematical procedure to
estimate the state of a system when uncertainties are present. Assuming that the vector of
the geometrical parameter errors ΔΛ represents the state of a stationary system, an extended
Kalman filter can be defined to give an estimation of ΔΛ starting from ΔS (Legnani and
Trevelyan, 1996), (Cleary, 1997).
The filter gives a new estimation of ΔΛ each time a new measure of ΔS is considered. The
estimation ΔΛh+1 of ΔΛ, after the h-th pose has been measured, is:

ΔΛh+1 = ΔΛh + Mh (Sah − F(Qah, Λn + ΔΛh))
Mh = Ph ChT (R + Ch Ph ChT)⁻¹
Ph+1 = (I − Mh Ch) Ph
ΔΛ0 = 0
where Ch is the jacobian JΛ evaluated in the h-th pose, Mh is the filter gain matrix after h
steps, and Ph is the covariance matrix of the parameters. R is the covariance matrix of the
measures; extra-diagonal elements in position i, j of matrix P (or R) indicate that the i-th
and j-th parameters (or the i-th and j-th measures) are correlated. P0, representing the
initial uncertainty of Λ, should be initialised with suitably large values. The diagonal of
P contains a prediction of the accuracy of the estimation of ΔΛ, while R contains an
estimation of the noise present in the measuring procedure. R and a series of Sah and Qah
must be given to the algorithm, which estimates ΔΛ and P.
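The update equations above translate almost line by line into code. The sketch below is illustrative only: it uses a hypothetical one-parameter model F(q, λ) = λ·q (e.g. the reach of a single prismatic link), noise-free measurements, and made-up values for R and P0:

```python
import numpy as np

def F(q, lam):           # toy model: measured position is reach times joint value
    return lam * q

lam_n = 1.0              # nominal parameter Lambda_n
lam_true = 1.05          # actual value, i.e. a parameter error of 0.05

dlam = np.zeros(1)                   # Delta_Lambda_0 = 0
P = np.eye(1) * 1.0                  # P_0: large initial uncertainty
R = np.eye(1) * 1e-6                 # measurement covariance (assumed tiny)

for h in range(1, 21):
    q = float(h)                             # h-th joint configuration
    S_meas = F(q, lam_true)                  # measured pose S_ah
    C = np.array([[q]])                      # C_h = dF/dLambda for this model
    M = P @ C.T @ np.linalg.inv(R + C @ P @ C.T)    # gain M_h
    innov = S_meas - F(q, lam_n + dlam[0])          # S_ah - F(Q_ah, Ln + dL_h)
    dlam = dlam + M @ np.atleast_1d(innov)          # Delta_Lambda_(h+1)
    P = (np.eye(1) - M @ C) @ P                     # P_(h+1)

print(float(dlam[0]))    # converges to about 0.05, the true parameter error
```

Each measurement tightens P, so the filter weighs later innovations less: this recursive behaviour is what distinguishes it from the batch least-squares methods above.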
After the processing of all the poses, an error index Eka can be evaluated as

Eka = (1/k) Σh=1..k ||Sah − F(Qah, Λn + ΔΛ)||
The total number of parameters (Eq. (11)) is N = 4R + 2P + 6, R and P being respectively the
number of revolute and of prismatic joints in the kinematic chain (n = R + P). This formula
is derived under the hypothesis that:
• Serial manipulators make use of n revolute or prismatic one-DOF joints (no
spherical or cylindrical joints);
• All the joints are actuated (and so their motion is measured by the control system);
• The measures of all the 6 coordinates of the gripper are available for a number of
different manipulator poses.
As stated in (Mooring et al, 1991), the proof of Eq. (11) is based on the observation that for
revolute joints what matters is not the position of the mechanical hinge but only the location of
its geometrical axis, which can be expressed in terms of two translations and two rotations
(Fig. 4a). For prismatic joints only the axis direction is relevant, and it can be described by
two rotations (Fig. 4b). Finally, 6 parameters are necessary to define the gripper frame in an
arbitrary location (Fig. 4c); this concept is directly applied in the ‘Inc’ approach (Sec. 6).
When only a partial measure of the gripper pose is available, the number of identifiable
parameters is reduced accordingly (Omodei et al, 2001):

N = 4R + 2P + G     (12)

G being the number of measurable coordinates of the gripper (G ≤ 6). In milling
applications, for example, the tool pose is identified by 5 coordinates, the rotation about
the tool axis being redundant with the spindle motion.
Fig. 4. The structural parameters necessary to define the location of: a) a revolute joint
axis (Δa, Δb, Δα, Δβ); b) a prismatic joint axis (Δα, Δβ); c) the gripper frame (Δa, Δb,
Δc, Δα, Δβ, Δγ).
However, it is evident that 6 of the N parameters correspond to a wrong location of the
robot base, and they can be compensated simply by repositioning it. During experimental
measures for the robot calibration, it is impossible to separate the effects of these errors
from errors in the location of the measuring instrumentation. Similarly, the last 6
parameters describe the pose of the end effector with respect to the last link (i.e. the 'shape'
and size of the gripper), so each time the end-effector is changed, these parameters change
their values. We can conclude that only N − 12 parameters are intrinsically related to the
manipulator structure. We call them internal parameters; their number is

Ni = N − 12 = 4R + 2P − 6     (13)

The internal parameters can then be compensated during the manipulator construction or
with a proper calibration procedure performed in the factory. The other 12, called external
parameters, can be calibrated and compensated only in the user's working cell.
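Equations (12) and (13) reduce to simple arithmetic. As a quick check (a sketch; the Stanford arm example anticipates Table 3 later in the chapter), a robot with R revolute and P prismatic joints and full pose measurement (G = 6) gives:

```python
def n_total(R, P, G=6):
    """Identifiable parameters, Eq. (12): N = 4R + 2P + G (G = 6 for a full pose)."""
    return 4 * R + 2 * P + G

def n_internal(R, P):
    """Internal parameters, Eq. (13): Ni = 4R + 2P - 6."""
    return 4 * R + 2 * P - 6

# Stanford arm: 5 revolute joints and 1 prismatic joint.
print(n_total(5, 1))     # 28 parameters in total, as in Table 3
print(n_internal(5, 1))  # 16 of them are intrinsic to the structure
```

The difference between the two counts is always the 12 external parameters (6 for the base location, 6 for the gripper frame), whatever the joint sequence.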
As is well known, the links are numbered from 0 (the base) to n (the gripper). A reference
frame is embedded on each link (base and gripper included) in such a way that the i-th
joint axis coincides with the axis zi−1. The pose of the i-th link with respect to the previous
one is expressed by a 4 × 4 transformation matrix

Ai = [ Ri        Ti ]
     [ 0  0  0   1  ]

where Ri is a 3 × 3 rotation matrix and Ti is the vector of the origin coordinates of the
i-th frame.
The direct kinematics of the manipulator can be expressed as
M = A0 A1 A2 … Ai … An (14)
where M is the matrix describing the gripper pose with respect to the base frame, n is the
number of DOF. A0 is a constant matrix representing the location of the first joint with
respect to the base frame.
When the axes zi−1 and zi are not parallel to each other, standard Denavit and Hartenberg
rules can be used (Fig. 5) to position reference frames:
• Axis zi coincides with i + 1 -th joint axis;
• Axis xi is orthogonal both to zi and zi−1 and directed from zi−1 to zi ;
• Axis yi is chosen in order to create a right-handed frame.
The base frame and the origin of the gripper frame are freely located.
The four D&H parameters of the i -th link are:
• The distance hi between axes xi−1 and xi which is called link offset;
• The distance li between axes zi−1 and zi which is called link length;
• The angle ϕ i between zi−1 and zi which is called twist;
• The angle θ i between xi−1 and xi which is called rotation.
The relative location of frame i with respect to frame i − 1 is then
Ai = R( z , θ i )T ( z , hi )T ( x, l i ) R ( x, ϕ i ) (15)
where R (u ,φ ) is a rotation of φ around axis u and T (u , t ) is a translation t along u .
It can be noted that li and ϕi represent intrinsic geometric properties of the link, while θi and hi
represent the joint motion and the assembly condition with respect to the previous link. This is
quite evident for revolute joints, and similar considerations can be made for prismatic ones. As
is well known, for prismatic joints 2 out of the 4 parameters are redundant (Paul, 1981).
The D&H approach is often considered a good standardized way to describe robot
kinematics. However, calibration requires MCPC models, and for several manipulator
structures Eq. (15) must be modified as described in the following.
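Equation (15) is easy to verify numerically. The sketch below (illustrative; the numeric values are arbitrary) composes the four elementary transforms and compares the product with the closed-form D&H matrix that appears later in the chapter (cf. Eq. (34), with a = li, d = hi, α = ϕi):

```python
import numpy as np

def R(axis, ang):
    """Homogeneous rotation of `ang` around the x or z axis."""
    c, s = np.cos(ang), np.sin(ang)
    M = np.eye(4)
    if axis == 'x':
        M[1:3, 1:3] = [[c, -s], [s, c]]
    else:  # 'z'
        M[0:2, 0:2] = [[c, -s], [s, c]]
    return M

def T(axis, t):
    """Homogeneous translation of `t` along the x or z axis."""
    M = np.eye(4)
    M[{'x': 0, 'z': 2}[axis], 3] = t
    return M

def dh(theta, h, l, phi):
    """Eq. (15): Ai = R(z, theta) T(z, h) T(x, l) R(x, phi)."""
    return R('z', theta) @ T('z', h) @ T('x', l) @ R('x', phi)

def dh_closed(theta, d, a, alpha):
    """Closed form of the same product (cf. Eq. (34))."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0.,  sa,     ca,    d],
                     [0.,  0.,     0.,    1.]])

A = dh(0.4, 0.2, 0.9, -0.3)
print(np.allclose(A, dh_closed(0.4, 0.2, 0.9, -0.3)))  # True
```

Building the link matrix as an explicit product of elementary transforms is also convenient for calibration, because extra error rotations (e.g. the Hayati R(y, βi) factor below) can be inserted without re-deriving the closed form.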
When two consecutive joint axes are parallel to each other, the parameter hi is not
univocally defined and can be freely assigned. However, if a small geometrical error
equivalent to a rotation Δβi around the yi axis occurs, the joints are no longer parallel and
hi ≅ li/Δβi. So, for small variations of ±Δβi around 0 the value of hi changes abruptly from
−∞ to +∞, and therefore the model is no longer proportional.
Fig. 5. Frame positions according to the Denavit and Hartenberg conventions, for the
i-th link with a revolute (left) or a prismatic (right) joint.
In order to obtain a parametrically continuous model, when the i-th joint is revolute (Fig.
6a), the Hayati modification should be adopted (Hayati & Mirmirani, 1985)
Ai = R ( z , θ i )T ( x, l i ) R ( x, ϕ i ) R ( y, β i ) (16)
while, when the i-th joint is prismatic (Fig. 6b), the PR modification is required:
Ai = T ( z , li )T ( y , hi ) R ( x, ϕ i ) R ( y , β i ). (17)
Fig. 6. Parallel joint axes: Hayati conventions for RR and RP (nearly)-parallel joint axes
and the case of the PR link. In both cases the frames in the figures are represented for
the nominal values of the rotations around xi and y i ( ϕ i = β i = 0 ).
In both cases the nominal values of ϕ and β are zero, but they can be used to represent
errors. In brief we can write:

Ai = R(z, θi) T(z, hi) T(x, li) R(x, ϕi)       ‘D&H’      if zi−1 ∦ zi
Ai = R(z, θi) T(x, li) R(x, ϕi) R(y, βi)       ‘Hayati’   if zi−1 // zi, link RR or RP     (18)
Ai = T(z, li) T(y, hi) R(x, ϕi) R(y, βi)       ‘PR’       if zi−1 // zi, link PR
For prismatic joints, 4 parameters are used; however, to avoid redundancy, 2 of them,
suitably chosen for each specific robot, are kept constant at their nominal values and
eliminated from the calibration model (Mooring et al, 1991). The elimination process can be
performed as described in Sec. 9.
The complete set of the geometrical parameters Λ is obtained by collecting all the variables
used to describe the quoted matrices Ai. It is important to notice that some of the
parameters coincide with the joint coordinates qi:

i < n:   qi = θi   if Ji = R,    qi = hi   if Ji = P
i = n:   qn = γn   if Jn = R,    qn = cn   if Jn = P     (20)
6. Incremental Approach
When an incremental approach is adopted, Eq. (14) can be reformulated as

M = A0 B0 A1 B1 A2 B2 … Ai Bi … Bn−1 An C     (21)

where the matrices Ai describe the nominal geometry of the links and the joint motions, while
Bi and C describe the geometric inaccuracies. The matrices Ai can be freely defined, with the
only constraint that the zi axis coincides with the (i+1)-th joint axis, while Bi and C have the
following form:

Bi = R(x, Δαi) R(y, Δβi) T(x, Δai) T(y, Δbi)     if Ji-1 = R
Bi = R(x, Δαi) R(y, Δβi)                          if Ji-1 = P
(22)
C = R(x, Δαn) R(y, Δβn) R(z, Δγn) T(x, Δan) T(y, Δbn) T(z, Δcn)
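A sketch of how the error matrices Bi and the gripper matrix C can be assembled (illustrative code with locally defined elementary-transform helpers; parameter names mirror Eq. (22)). When all the error parameters are zero, Bi and C reduce to the identity and the incremental model collapses back to the nominal kinematics of Eq. (14):

```python
import numpy as np

def rot(axis, ang):
    """Homogeneous rotation around x, y or z."""
    c, s = np.cos(ang), np.sin(ang)
    i, j = {'x': (1, 2), 'y': (2, 0), 'z': (0, 1)}[axis]  # rotated plane (i, j)
    M = np.eye(4)
    M[i, i] = M[j, j] = c
    M[i, j] = -s
    M[j, i] = s
    return M

def trans(axis, t):
    """Homogeneous translation along x, y or z."""
    M = np.eye(4)
    M['xyz'.index(axis), 3] = t
    return M

def B(d_alpha, d_beta, d_a=0.0, d_b=0.0, prev_joint='R'):
    """Link error matrix Bi of Eq. (22); the two translations
    are present only when the previous joint is revolute."""
    M = rot('x', d_alpha) @ rot('y', d_beta)
    if prev_joint == 'R':
        M = M @ trans('x', d_a) @ trans('y', d_b)
    return M

def C(d_alpha, d_beta, d_gamma, d_a, d_b, d_c):
    """Gripper error matrix C: three rotations followed by three translations."""
    return (rot('x', d_alpha) @ rot('y', d_beta) @ rot('z', d_gamma)
            @ trans('x', d_a) @ trans('y', d_b) @ trans('z', d_c))

print(np.allclose(B(0, 0, 0, 0), np.eye(4)))  # True: no error, no correction
print(B(0, 0, 0.002, 0.003)[:3, 3])           # pure translation (0.002, 0.003, 0)
```

In a full model, one Bi is interleaved with every nominal Ai of Eq. (21), and the entries of all the Bi plus C form the incremental parameter set.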
M = A0 B0 D1 A1B1 D2 A2 … Dn An C (24)
ΔS′i = ΔMi M⁻¹ = (∂M/∂λi) M⁻¹ Δλi = Li Δλi =

[   0     −dγi    dβi    dxi ]
[  dγi     0     −dαi    dyi ]     (26)
[ −dβi    dαi     0      dzi ]
[   0      0      0       0  ]

Li = [  0    −δγ    δβ    δx ]
     [  δγ    0    −δα    δy ]     (28)
     [ −δβ    δα    0     δz ]
     [  0     0     0     0  ]
which simplifies to Lr for rotational parameters and to Lp for translational ones:

Lr = [  0    −uz    uy    tx ]      Lp = [ 0  0  0  ux ]
     [  uz    0    −ux    ty ]           [ 0  0  0  uy ]     (29)
     [ −uy    ux    0     tz ]           [ 0  0  0  uz ]
     [  0     0     0     0  ]           [ 0  0  0  0  ]
The jacobian can be constructed by transforming each matrix Li into the i-th column of JΛ:

JΛ = [ j1 … ji … jN ]     (31)
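The assembly of a column of JΛ from the matrices Lr and Lp can be sketched as follows (illustrative code; the packing order of the six pose-error components follows the convention of Eq. (26), i.e. dα = L[2,1], dβ = L[0,2], dγ = L[1,0]):

```python
import numpy as np

def L_rot(u, t):
    """L^r for a rotational parameter, built from the unit axis u and the vector t."""
    ux, uy, uz = u
    return np.array([[0., -uz,  uy, t[0]],
                     [uz,  0., -ux, t[1]],
                     [-uy, ux,  0., t[2]],
                     [0.,  0.,  0., 0.]])

def L_pri(u):
    """L^p for a translational parameter: pure translation along u."""
    M = np.zeros((4, 4))
    M[:3, 3] = u
    return M

def column(L):
    """Pack L into a 6-component jacobian column [dx dy dz d_alpha d_beta d_gamma]."""
    return np.array([L[0, 3], L[1, 3], L[2, 3], L[2, 1], L[0, 2], L[1, 0]])

j = column(L_rot([0., 0., 1.], [0.1, -0.2, 0.]))
print(j)  # translation part (0.1, -0.2, 0) plus a unit rotation about z
```

Stacking one such column per structural parameter yields the matrix JΛ of Eq. (31) for the current robot configuration.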
Fig. 7. Some links of a manipulator used to illustrate the elimination of the parameter
redundant to Δqi .
Suppose that we want to find the parameters redundant to Δqi. For simplicity, and
according to Eq. (26), the jacobian is expressed in frame i − 1, so dx, dy and dz
represent the displacement of a point embedded on frame i + 1 which initially lies in
the origin of frame i − 1 (Legnani, Casolo et al, 1996). The relevant columns of JΛ are
presented in Table 2. Columns 5, 9 and 12 are linearly dependent on each other
( j5 − j9 − li j12 = 0 ), so one of the parameters Δqi, Δβi or Δbi+1 must be eliminated
from the model.
An alternative numerical approach for the elimination of the redundant parameters is
presented in the example of Section 11. Another one is described in the chapter devoted to
the calibration of PKM.
Stanford Manipulator

link     | joint | model  | ED&H nominal values                   | Incremental             | Modified Incr.
0 (base) |  -    | Hayati | θ0 = 0, l0 = d0, ϕ0 = 0, β0 = 0       | Δα0, Δβ0, Δa0, Δb0      | Δα0, Δβ0, Δa0, Δb0
1        |  R    | D&H    | θ1 = q1, h1 = d1, l1 = 0, ϕ1 = 90°    | Δα1, Δβ1, Δa1, Δb1      | Δq1, Δα1, Δa1, Δb1
2        |  R    | D&H    | θ2 = q2, h2 = d2, l2 = 0, ϕ2 = 90°    | Δα2, Δβ2                | Δq2, Δα2
3        |  P    | PR     | l3 = q3, (h3 = 0), ϕ3 = 0, β3 = 0     | Δα3, Δβ3, Δa3, Δb3      | Δq3, Δα3, Δβ3, Δa3, Δb3
4        |  R    | D&H    | θ4 = q4, (h4 = 0), l4 = 0, ϕ4 = −90°  | Δα4, Δβ4, Δa4, Δb4      | Δq4, Δα4, Δa4
5        |  R    | D&H    | θ5 = q5, h5 = 0, l5 = 0, ϕ5 = 90°     | Δα5, Δβ5, Δa5, Δb5      | Δq5, Δα5, Δa5, Δb5
6        |  R    | R6     | γ6 = q6, β6 = 0                       | Δγ6, Δβ6                | Δq6, Δβ6
gripper  |  -    |  -     | α6 = 0, c6 = d6, b6 = 0, a6 = 0       | Δα6, Δc6, Δb6, Δa6      | Δα6, Δc6, Δb6, Δa6

External parameters, base:    θ0 = 0, l0 = d0, ϕ0 = 0, β0 = 0, h1 = d1, Δq1 | Δα0, Δβ0, Δa0, Δb0, Δb1, Δβ1 | Δα0, Δβ0, Δa0, Δb0, Δb1, Δq1
External parameters, gripper: γ6 = q6, β6 = 0, α6 = 0, c6 = d6, b6 = 0, a6 = 0 | Δγ6, Δβ6, Δα6, Δc6, Δb6, Δa6 | Δq6, Δβ6, Δα6, Δc6, Δb6, Δa6

Table 3. The 28 parameters of the 3 MCPC models of the Stanford manipulator (external
calibration) and the 12 to be removed for internal calibration. For the ED&H approach, the
values after the ‘=’ sign are the nominal values; parameters in parentheses are redundant.
The measuring robot is used to manually measure the shape of an object by touching it with
the robot end-effector. The robot is requested to measure the position of the distal point of
the end-effector and the direction of its axis.
The dime displayed in Figure 10 is used to force the manipulator to known poses for
calibration purposes.
Fig. 9. The 5 DOF measuring robot.
Fig. 10. Matching a pose using the dime.
Figure 11 shows a schematic view of the 5 DOF robot. Frame {0} is embedded on the robot
base, while frames {1} to {5} are embedded on the corresponding links using the D&H
convention (Denavit and Hartenberg, 1955) (axis zi coincident with joint axis i+1).
The absolute reference frame {G} is the frame with respect to which each measure is taken.
The nominal position of {G} is on the robot base, with the z axis parallel to the first joint axis
and coincident with frame {0}. In practice z0 is not parallel to zG; this non-parallelism is caused
by very small constant rotations δχ0 and δψ0 around xG and yG. The theoretical position of frame
{0} with respect to {G} is (X0, Y0, 0)T and two position errors ΔX0 and ΔY0 are identified.
Translations in the z direction are incorporated in the length of link 1.
Fig. 11. Frame positions: {G} is the absolute reference frame, {0} is on the robot base,
frames {1} to {5} are embedded on the links, and {P} is the end-effector frame.
The pose of frame {0} with respect to {G} is then represented by AG0:

AG0 = [  cψ        0     sψ       X ]
      [  sχ·sψ     cχ   −sχ·cψ    Y ]     (33)
      [ −cχ·sψ     sχ    cχ·cψ    0 ]
      [  0         0     0        1 ]

with cα = cos(α), sα = sin(α), χ = χ0 + δχ0, ψ = ψ0 + δψ0, X = X0 + ΔX0, Y = Y0 + ΔY0.
According to Eq. (15), with the exception of A2, we get:

Ai = [ cϑ   −sϑ·cα    sϑ·sα    a·cϑ ]
     [ sϑ    cϑ·cα   −cϑ·sα    a·sϑ ]     (34)
     [ 0     sα        cα       d   ]
     [ 0     0         0        1   ]

with ϑ = ϑi + δϑi, α = αi + δαi, a = ai + Δai, d = di + Δdi, for i = 1, 3, 4, 5.
Since joint axes 2 and 3 are parallel, in order to avoid singularities in the calibration
procedures, the transformation between frame {1} and {2} should be expressed accordingly,
as described in Section 5. A suitable formulation is:
A2 = R(z1 , ϑ 2 + δϑ 2 ) ⋅ T (z1 , d1 )⋅ T (x 2 , a 2 + Δa 2 ) ⋅ R (x 2 , α 2 + δα 2 )⋅ R( y 2 ,ψ 2 + δψ 2 )
We get:

A2 = [ cϑ·cψ − sϑ·sα·sψ    −sϑ·cα    cϑ·sψ + sϑ·sα·cψ    a·cϑ ]
     [ sϑ·cψ + cϑ·sα·sψ     cϑ·cα    sϑ·sψ − cϑ·sα·cψ    a·sϑ ]     (35)
     [ −cα·sψ               sα        cα·cψ               d    ]
     [ 0                    0         0                   1    ]

with ϑ = ϑ2 + δϑ2, α = α2 + δα2, a = a2 + Δa2, d = d2 and ψ = ψ2 + δψ2.
A6 = [ 1  0  0  0        ]
     [ 0  1  0  0        ]     (36)
     [ 0  0  1  ZP + ΔZP ]
     [ 0  0  0  1        ]
To estimate the robot repeatability, all the dime poses were recorded twice by two different
operators, recording the corresponding joint rotations. The gripper positions were evaluated
with the nominal values of the parameters and the differences between the two sets were
evaluated. The maximum difference was 0.176 mm, the average was d = 0.082 mm and its
standard deviation was σ = 0.062 mm. Borrowing a definition from international standards
(ISO 9283), we estimated the robot repeatability as d + 3σ = ±0.268 mm.
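The repeatability estimate d + 3σ is straightforward to compute. A sketch (with made-up difference values, not the experimental data; the population standard deviation is used here, since the chapter does not specify which estimator was adopted):

```python
import statistics

def repeatability(diffs):
    """ISO 9283-style estimate: mean difference plus three standard deviations."""
    d = statistics.mean(diffs)
    sigma = statistics.pstdev(diffs)  # population standard deviation
    return d + 3 * sigma

# Hypothetical position differences (mm) between two measurement sessions.
diffs = [0.10, 0.20, 0.30]
print(round(repeatability(diffs), 4))  # 0.4449
```

Applied to the measured values above (d = 0.082 mm, σ = 0.062 mm), this definition reproduces the ±0.268 mm figure quoted in the text.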
The calibration procedure was performed as follows.
A total of 81 poses were collected (we call these data the complete set). The complete set was
divided into two sub-sets: the calibration set and the control set. The calibration set was used
as input for the program which evaluates the geometric parameters. The control set was
used to verify the quality of the calibration.
In order to investigate the algorithm convergence, all the mathematical procedures were
repeated 3 times using different sizes of the calibration and control set:
1. 40 calibration poses, 41 control poses;
2. 60 calibration poses, 21 control poses;
3. 81 calibration poses.
To verify the quality of the calibration, for each of the 81 poses we evaluated the distance
between the known gripper positions and those estimated using the measured joint
rotations and the evaluated structural parameter errors.
Table 4 contains the average residual, its maximum value and the standard deviation
evaluated for all the situations considered. Considering the complete set, the average
position error before the calibration procedure was about 4.702 mm, with a standard
deviation of 1.822 mm and a maximum value of 8.082 mm.
Calibration set (mm) Control set (mm) Complete set (mm)
Poses average s. dev. max average s. dev. max average s. dev. max
a 40 4.789 1.784 8.082 4.616 1.854 7.572 4.702 1.822 8.082
b 60 4.692 1.785 7.574 4.732 1.921 8.082 4.702 1.822 8.082
c 81 4.702 1.822 8.082 ----- ----- ----- 4.702 1.822 8.082
Table 4. Position errors (residual) before calibration procedure.
Table 5 presents the results of the calibration process performed using three algorithms and
different number of poses. For each case we give the average position error (Eop, Eit, Eka),
their standard deviation and the maximum error.
The column labelled ‘time’ reports the time needed to process the data on a Pentium
100 MHz PC, highlighting the different efficiency of the algorithms.
The results are not very different from those obtained with the complete set of 25
parameters; in some cases they are even better. This means that some redundant parameter
was present, hindering the convergence of the calibration process.
As a final test we designed an improved calibration program which is able to decide which
parameters are important.
The program works in this way. Initially, it performs the calibration using the
reduced set of parameters ΔΛ* and the corresponding residual error E* is evaluated. We
indicate the number of parameters in the complete set as nc, while only nr of them are
present in the reduced set. Then the calibration process is repeated nc − nr times, considering
each time the reduced set plus one of the other additional parameters. For each of these
estimations a new value Ei* is evaluated. The parameter which generates the lowest value
of Ei* is selected. If Ei* is significantly lower than E*, the i-th parameter is added to the
reduced set and the estimation procedure is repeated from the beginning.
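The selection loop just described is a greedy forward search. The sketch below abstracts it (hypothetical code: `residual(params)` stands for a full calibration run returning the residual error, and the numeric values are invented), adding one candidate per round while the best candidate improves the residual by more than a relative threshold (5% in the authors' implementation, as stated below):

```python
def select_parameters(candidates, residual, threshold=0.05):
    """Greedy forward selection: repeatedly add the candidate parameter that lowers
    the residual most, while the relative improvement exceeds `threshold`."""
    selected = []
    e_star = residual(selected)
    while True:
        remaining = [p for p in candidates if p not in selected]
        if not remaining:
            break
        # Re-run the calibration with the reduced set plus each extra parameter.
        trials = {p: residual(selected + [p]) for p in remaining}
        best = min(trials, key=trials.get)
        if e_star - trials[best] > threshold * e_star:
            selected.append(best)
            e_star = trials[best]
        else:
            break
    return selected, e_star

# Invented residuals for three candidate parameters (stand-in for real runs).
table = {(): 10.0, (1,): 9.5, (2,): 4.0, (3,): 9.8, (1, 2): 3.9, (2, 3): 3.95}
residual = lambda params: table[tuple(sorted(params))]
print(select_parameters([1, 2, 3], residual))  # ([2], 4.0)
```

In this toy run, parameter 2 is added (it cuts the residual from 10.0 to 4.0), while the further gain offered by parameter 1 falls below the threshold and the search stops.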
                        | Calibration set (mm)  | Control set (mm)      | Complete set (mm)
Alg.    time     poses  | average s. dev. max   | average s. dev. max   | average s. dev. max
amoeba  1h 09'   40     | 0.238   0.175   0.693 | 0.240   0.153   0.722 | 0.239   0.164   0.722
amoeba  1h 49'   60     | 0.209   0.183   0.939 | 0.250   0.207   0.977 | 0.219   0.190   0.977
amoeba  1h 44'   81     | 0.232   0.182   0.891 | -----   -----   ----- | 0.232   0.182   0.891
kalman  02' 20"  40     | 1.371   0.652   2.496 | 1.233   0.630   2.563 | 1.301   0.645   2.563
kalman  04' 43"  60     | 5.231   2.610   12.14 | 5.697   2.249   11.16 | 5.352   2.530   12.14
kalman  07' 21"  81     | 5.614   2.846   13.68 | -----   -----   ----- | 5.614   2.846   13.68
linear  00' 52"  40     | 0.711   0.214   1.256 | 0.669   0.214   1.129 | 0.690   0.215   1.256
linear  03' 00"  60     | 0.469   0.185   0.896 | 0.536   0.183   1.022 | 0.487   0.187   1.022
linear  10' 23"  81     | 0.409   0.185   0.991 | -----   -----   ----- | 0.409   0.185   0.991
Table 7. Comparison between calibration algorithms: residual error (reduced set plus
automatically selected parameters).
The iterative process is terminated when the inclusion of a new parameter improves the
residual error by less than 5%. Results obtained with this procedure are presented in Table 7.
Table 8 shows the parameters that have been added in the different trials; Δa3 and δα3 seem
to be the most important because they have been selected most frequently by the algorithms.
Comparing Table 7 with Table 5 it is clear that the last version of the algorithm is slower but
gives the best results, producing lower residual errors. For example, with the 'amoeba' algorithm
the average residual on the complete set is reduced from 0.3 - 0.4 mm to about 0.2 mm. This
means that the calibration procedure works better if the redundant parameters are removed.
Added Parameters
Algorithm Poses Δa1 δα1 δα2 δy2 Δd3 Δa3 δα3 Δa4 δα4 Δa5 δα5
40 x x x
amoeba 60 x x x x x x x
81 x x
40 x
kalman 60 x
81 x
40
linear 60 x x
81 x x x x x x x x
Table 8. Parameters automatically added to the reduced set Λ*.
12. Conclusions
All the steps necessary to perform a kinematic calibration of serial manipulators have been
discussed and applied to an actual manipulator.
The first part of the chapter has compared and extended two procedures for the identification of
the geometrical parameters necessary to describe the inaccuracies in the kinematic structure of a
generic serial robot. The discussion led to a new methodology, called 'Modified Incremental', which
retains the positive characteristics of the others. Each procedure generates a Minimum, Complete
and Parametrically Continuous set of parameters.
First of all, the formula for the determination of the total number of parameters has been
discussed, pointing out the distinction between internal and external parameters.
The D&H approach for parameter identification has been extended to deal with all
the special cases (adjacent parallel joint axes, prismatic joints and gripper frame). This
approach has the advantage of including the joint offsets in the parameter set and
shows that some parameters ( l and ϕ ) are intrinsic to the link while the others
represent the joint motion and the assembly condition.
The Incremental approach is the simplest to apply (only the type of the joints must
be considered) and can be more easily automated. Its main drawback is that the joint
offsets are not explicitly included in the parameter set. The Modified Incremental approach
solves this problem, and the proposed procedure for the elimination of the redundant
parameters ensures the Minimality of the model.
The last part of this chapter fully describes a practical calibration of a measuring robot,
discussing different estimation algorithms and their performance.
All these concepts will be reviewed in the next chapter in order to be applied to a generic
parallel manipulator.
13. Acknowledgment
This research work has been partially supported by MIUR grants cofin/prin (Ministero
Università e Ricerca, Italia).
170 Industrial Robotics - Programming, Simulation and Applications
14. References
Cleary, W. (1997). Robot Calibration, Development and Testing of a Laser Measurement
System, Honours thesis, The University of Western Australia, October 1997.
Denavit, J. & Hartenberg, R. S. (1955). A Kinematic Notation for Lower-Pair Mechanisms
Based on Matrices, Trans. ASME Journal of Applied Mechanics, Vol. 22, June 1955, pp. 215-221.
Everett, L. J.; Driels, M. & Mooring, B. W. (1987). Kinematic Modelling for Robot Calibration,
Proceedings of IEEE Int. Conf. on Robotics and Automation, pp. 183-189, Vol. 1, Raleigh, NC, Mar. 1987.
Flannery, B. P.; Press, W. H.; Teukolsky, S. A. & Vetterling, W. T. (1992). Numerical Recipes in C:
The Art of Scientific Computing, Cambridge University Press, 1992.
Hayati, S. & Mirmirani, M. (1985). Improving the Absolute Positioning Accuracy of robot
manipulators, Journal of Robotic Systems, Vol. 2, No. 4, pp. 397-413.
International Standard ISO 9283, Manipulating industrial robots - Performance criteria and
related test methods.
Legnani, G. ; Casolo, F. ; Righettini, P. & Zappa B. (1996). A Homogeneous Matrix Approach
to 3D Kinematics and Dynamics. Part 1: theory, Part 2: applications, Mechanisms
and Machine Theory, Vol. 31, No. 5, pp. 573-605.
Legnani, G. & Trevelyan, J. P. (1996). Static Calibration of Industrial Manipulators: a
Comparison Between Two Methodologies, Robotics Towards 2000, 27th Int.
Symp. on Industrial Robots, pp. 111-116, 6-8 October 1996.
Khalil, W. ; Gautier, M. & Enguehard, C. (1991). Identifiable parameters and optimum
configurations for robot calibration, Robotica, Vol.9, pp. 63-70.
Khalil, W. & Gautier, M. (1991). Calculation of the identifiable parameters for robot
calibration, Proceedings of IFAC 91, pp. 888-892.
Meggiolaro, M. & Dubowsky, S. (2000). An Analytical Method to Eliminate the Redundant
Parameters in Robot Calibration, Proceedings of International Conference on Robotics
and Automation (ICRA '2000), IEEE, pp. 3609-3615, San Francisco, CA, USA.
Mooring, B.W.; Roth, Z. S. & Driels, M. R. (1991). Fundamentals of manipulator calibration, John
Wiley & Sons, Inc., USA.
Omodei, A.; Legnani, G. & Adamini, R. (2001). Calibration of a Measuring Robot:
Experimental Results on a 5 DOF Structure, Journal of Robotic Systems, Vol. 18, No. 5,
pp. 237-250.
Paul, R. (1981). Robot Manipulators: Mathematics, Programming, and Control, MIT Press, 1981.
Reinhart, G.; Zaeh, M. F. & Bongardt, T. (2004). Compensation of Thermal Errors at
Industrial Robots, Prod. Engineering, Vol. XI, No. 1.
Wildenberg, F. (2000). Calibrations for Hexapode CMW, Proceedings of 2nd Chemnitzer Parallel
kinematik-Seminar: Working Accuracy of Parallel Kinematics, pp. 101-112, Verlag
Wissenschaftliche Scripten.
Wang, J. ; Masory, O. & Zhuang, H. (1993). On the Accuracy of a Stewart Platform-Part II:
Kinematic Calibration and Compensation, Proceedings of IEEE Int. Conf. on Robotics
and Automation, Atlanta May 1993.
Zhuang, H. & Roth, Z. S. (1992), Robot calibration using the CPC error model, Robotics &
Computer-Integrated Manufacturing, Vol. 9, No. 3, pp. 227-237.
Zhuang, H. ; Roth, Z. S. & Hamano, F. (1992). A Complete and Parametrically Continuous
Kinematic Model for Robot Manipulator, IEEE Transactions on Robotics and
Automation, Vol. 8, No. 4, August 1992, pp. 451-463.
9
Calibration of Parallel Kinematic Machines: Theory and Applications
1. Introduction
As already stated in the chapter addressing the calibration of serial manipulators, kinematic
calibration is a procedure for the identification and the consequent compensation of the
geometrical pose errors of a robot. This chapter extends the discussion to Parallel
Manipulators (also called PKM Parallel Kinematic Machines). As described in the following
(Section 2) this extension is not obvious but requires special care.
Although for serial manipulators some procedures for calibration based on the automatic
generation of a MCPC (Minimum Complete Parametrically Continuous) model exist, for PKMs only
methodologies for individual manipulators have been proposed, and a general strategy has not been
presented until now. A few examples of the numerous approaches for the calibration of individual
PKMs are proposed in (Parenti-Castelli & Di Gregorio, 1995), (Jokiel et al., 2000) for direct
calibration and (Neugebauer et al., 1999), (Smollett, 1996) for indirect or self calibration techniques.
This chapter makes a significant step by integrating available results with new ones and reordering
them into simple rules that can be automatically applied to any PKM with general kinematic chains. In
all cases a MCPC kinematic model for geometrical calibration is automatically obtained.
In Section 2 the main features of PKM calibration are pointed out and the total number of the
necessary parameters is determined; this is an original contribution. In Sections 3 and 4 two
novel approaches for the generation of a MCPC model are described. Sections 5 and 6 are
dedicated to the analysis of the singular cases and to the procedure for the elimination of the
redundant parameters, respectively; actual cases are discussed. Section 7 presents several
examples of application of the two proposed procedures to many existing PKMs. Section 8
eventually draws the conclusions.
Note: in this chapter it is assumed that the reader is already familiar with the notation
and the basic concepts of calibration described in the chapter devoted to serial manipulators.
2. PKMs Calibration
2.1 Differences with respect to serial robots
The identification of the parameter set to be utilized for parallel manipulators can be
performed generalizing the methodology adopted for the serial ones. However some
remarkable differences must be taken into account, namely:
• most of the joints are 'unsensed', that is the joint motion is not measured by a sensor;
• PKMs make use of multi-degree-of-freedom joints (cylindrical, spherical);
• some links have more than two joints;
• links form one or more closed kinematic loops;
• PKMs can be calibrated by 'internal calibration', which does not require an absolute
external measuring system.
Internal calibration (also called indirect or self calibration) is performed using the
measurements of extra sensors monitoring the motion of non-actuated joints and comparing
the sensor readings with the values predicted using the nominal manipulator
kinematics (Parenti-Castelli & Di Gregorio, 1995), (Ziegert et al., 1999), (Weck et al.,
1999). In these cases, some of the manipulator parameters cannot be identified and just
a partial robot calibration can be performed. A typical full internal calibration is able to
identify all the 'internal parameters' (relative position of the joints, joint offsets) but
not the location of the arbitrary 'user' frames of the fixed and of the mobile bases (6
parameters for the base and 6 for the gripper).
External calibration (also called direct calibration) mainly consists in calibrating the
pose of the frame attached to the mobile base with respect to that of the fixed base, and
it is similar to the calibration of serial manipulators (Neugebauer et al., 1999),
(Smollett, 1996). To perform it, it is necessary to measure the 6 absolute coordinates (3
translations and 3 rotations) of the mobile base. When the instrumentation measures
less than 6 coordinates (e.g. when calibration is performed using a double ball bar -
DBB), some of the robot parameters cannot be identified and a proper complete
external calibration is not possible.
N = 3R + P + SS + E + 6L + 6(F − 1)    (3)
where SS is the number of the links composed simply by two spherical joints connected by
a rod, E is the number of the encoders (or of the joint sensors), L is the number of the
independent kinematic loops and F is the number of the arbitrary reference frames.
Vischer, however, does not give a proof of Eq. (3) but simply states that 'this equation has been
empirically tested on several examples and seems to be valid...'. Moreover, some cases are not
included, and in the following we prove that the correct equation is
N = 3R + P + 2C + SI + E + 6L + 6(F − 1)    (4)
where C is the number of cylindrical joints and SI is the number of ‘singular’ links which
include the SS links with SI = +1 (Section 5.1) and the SPS legs with SI = −1 (Section 5.2).
As proved in the following, spherical joints do not require any parameter and their number
is not present in Eq. (4).
Eq. (4), which works both for serial and parallel manipulators, can be explained by generalizing
Eq. (1) and Eq. (2) given for serial robots. First of all, in the case of serial manipulators
we get SS = 0 (hence SI = 0), C = 0, E = R + P, L = 0 and F = 2 for external calibration, while F = 0 for internal
calibration, and so Eq. (4) reduces to Eq.s (1) and (2).
Moreover it is evident that, after choosing an absolute reference frame, each further ‘user’
frame needs 6 parameters (3 rotations and 3 translations) to calibrate its pose; this explains
the term 6( F − 1) .
To explain the term 6L we initially consider a serial robot with two 'branches' (Fig. 1 left).
Its calibration can be performed considering two kinematic chains B1 and B2 from the
base frame to the two gripper frames (Khalil et al., 1991). Each joint requires the usual
number of parameters; however, since we have two grippers, the total number of external
parameters is incremented by 6 and Eq. (1) is transformed into N = 6 + 6 + 4R + 2P. If now we
'weld' link m to link d, the number of the parameters is clearly unchanged because the
parameters describing the links geometry are still the same and the 6 parameters just added
are used to describe the location of the 'welding' between links d and m. We conclude that
for each closed loop we add 6 parameters to Eq. (1).
Fig. 1. A serial robot with two branches (left) and one with a closed loop (right).
As a further step, we remember that for serial manipulators using the ED&H approach one
parameter for each link corresponds to the joint variable. Since in general in PKMs
most of the joints are unsensed, we must remove one parameter for each of them; so the
term 4R + 2P of Eq. (1) transforms into 3R + P + E (if all the joints are sensed, E = R + P).
A cylindrical joint can be considered as composed of a revolute joint plus a prismatic one,
so, apparently, it would require 3 + 1 = 4 parameters. However, the directions of the two joint
axes coincide and so two parameters must be removed; this explains the term 2C.
Universal joints are kinematically equivalent to a sequence of 2 revolute joints, and can be
modelled accordingly.
Spherical joints (ball and socket) can be realised with high accuracy, so they can often be
considered ideal and their modelling requires only three coordinates to represent the location
of their center. However, spherical joints have 3 DOF and are always unsensed, and so we must
subtract three parameters. As a result we get zero, and so S joints do not appear in Eq. (4).
The explanation of the term SI is more complex and will be analysed in Section 5. At the moment we
just observe that for SS links it is evident that the rod length has to be counted as a parameter.
We also observe that PKMs with SS links have internal degrees of freedom because the rods can
freely rotate about their axes. Other 'singular links' will be analyzed in Section 5.
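Eq. (4) is straightforward to tabulate. The sketch below encodes it and checks a couple of the manipulators later listed in Table 4 (external calibration, F = 2); the function name is ours:

```python
def n_parameters(R=0, P=0, C=0, SI=0, E=0, L=0, F=2):
    """Total number of calibration parameters, Eq. (4):
    N = 3R + P + 2C + SI + E + 6L + 6(F - 1)."""
    return 3 * R + P + 2 * C + SI + E + 6 * L + 6 * (F - 1)

# Stewart-Gough 6[SPS]: 6 prismatic joints, 6 encoders, 5 loops, 2 user frames
print(n_parameters(P=6, E=6, L=5, F=2))        # 48
# Hexa 6[RSS]: 6 revolute joints, 6 SS links (SI = +6), 6 encoders, 5 loops
print(n_parameters(R=6, SI=6, E=6, L=5, F=2))  # 66
```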
We indicate with H the matrices that describe the relative position of two frames embedded onto
the same link (link transformation), while the matrices G describe the relative location of the frames of
two links connected by a joint, that is, the assembly and the motion of the joint (Fig. 2 and Fig. 3).
The matrices H depend on some geometrical parameters called ‘link parameters‘.
The fixed base and the mobile base have an extra ‘user‘ frame. Their pose with respect to the
intrinsic frame will be described by matrices E F (fixed base) and E M (mobile base).
A PKM with F user frames and L loops requires L + F − 1 kinematic equations: one for
each (closed) loop and one for each user (gripper) frame to represent its pose with respect to
the fixed one ('branch' or 'open loop'). For each PKM different alternative choices are
possible. For example, for the PKM of Fig. 1-right three alternatives are possible: one
equation for each of the two branches B1 and B2, or one equation for loop L1 and one
equation for one of the two branches (B1 or B2).
Fig. 2. Transformation matrices between user, intrinsic and joint auxiliary frames for the fixed
base (example of a PKM with 3 legs). The auxiliary frame of the first leg coincides with the
intrinsic frame and so HF1 = I. The matrices of the mobile base are similar.
Fig. 3. Transformation matrices between the frames of three consecutive links with 2 joints.
The pose of the joint frame Ji+1,2 with respect to Ji−1,1 is obtained as M = Hi−1 Gi Hi Gi+1 Hi+1.
The kinematic equations are obtained in matrix form by multiplying all the matrices that describe
the relative position of the frames encountered by travelling along the considered branches or loops.
For each branch we follow this path:
• Start from the absolute user frame and move to the intrinsic reference frame of the
fixed base. This transformation is described by the matrix E F
• Move to the joint frame of the first joint of the branch (matrix H F ).
• Move to the joint frame embedded on the next link (matrix G ).
• Move to the last joint frame of the same link (matrix H ).
• Repeat the two previous steps until the last link (mobile base).
• Move to the user frame of the mobile base (matrix E M ).
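Travelling along a branch therefore amounts to multiplying the 4×4 homogeneous matrices in order. A minimal numpy sketch with illustrative elementary transforms (the helper names are ours, not from the chapter):

```python
import numpy as np

def T(axis, d):
    """Homogeneous translation of d along x, y or z."""
    M = np.eye(4)
    M[{'x': 0, 'y': 1, 'z': 2}[axis], 3] = d
    return M

def R(axis, q):
    """Homogeneous rotation of q radians about x, y or z."""
    c, s = np.cos(q), np.sin(q)
    i = {'x': 0, 'y': 1, 'z': 2}[axis]
    j, k = (i + 1) % 3, (i + 2) % 3
    M = np.eye(4)
    M[j, j] = c; M[j, k] = -s
    M[k, j] = s; M[k, k] = c
    return M

def branch_pose(E_F, chain, E_M):
    """Pose of the mobile-base user frame along one branch:
    M = E_F * (G1 H1) * ... * (Gn Hn) * E_M, with `chain` the ordered
    list of joint (G) and link (H) matrices."""
    M = E_F.copy()
    for A in chain:
        M = M @ A
    return M @ E_M
```

For instance, a chain made of a 90° rotation about z followed by a unit translation along x places the end frame at (0, 1, 0).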
where I denotes the identity matrix, Mk is the pose of the gripper at the end of the k-th
branch, n is the number of the joints encountered on the branch and m is the number of the
links composing the j-th loop. b and ℓ are the numbers of equations for branches and
loops (b ≥ F − 1, ℓ ≤ L). Subscripts k and j will often be omitted for simplicity. Matrices
EF and EM are identical for all the branches.
A complete set of rules to assign the location of the frames is given in the following Sections
which also discuss the singular cases.
• R and C joints: the axis z is chosen coincident with the joint axis. The axis x is chosen
to lie on the common normal to the joint axis and one axis (generally z) of the
intrinsic frame which is not parallel to it.
• P joints: the axis z is parallel to the joint axis. The axis x is defined as for revolute
joints. As for the standard D&H links, the origin of the frame can be freely translated so
that in the assembly configuration it coincides with the origin of the frame associated
with the first revolute or spherical joint which follows it in the kinematic chain.
• S joints: the origin of the frame coincides with the center of the joints and the frame
axes are parallel to those of the intrinsic frame.
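Locating the x axis of an R or C joint frame requires the common normal between the joint axis and an axis of the intrinsic frame. A hypothetical helper for computing it (which also detects the parallel-axes singular case):

```python
import numpy as np

def common_normal(p1, d1, p2, d2):
    """Common normal between two non-parallel lines, each given by a
    point p and a unit direction d. Returns the (unit) normal direction
    and its foot point on the first line."""
    n = np.cross(d1, d2)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        # Parallel axes: the common normal is not unique (singular case).
        raise ValueError("parallel axes: x must be chosen differently")
    n = n / norm
    # Solve p1 + t1*d1 + h*n = p2 + t2*d2 for (t1, t2, h).
    A = np.column_stack((d1, -d2, n))
    t1, t2, h = np.linalg.solve(A, p2 - p1)
    return n, p1 + t1 * d1
```

For example, between the z axis and a line parallel to y through (2, 0, 5), the common normal is along x with foot point (0, 0, 5).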
RR -links
Rules: the z i axis coincident with the first joint axis, the xi axis
coincident with the common normal.
2 Intrinsic parameters: the distance between the two joint axes
and the angle between them.
Singular case: when the axes of the two joints are parallel.
SSS -links
Rules: the origin of the frame in S1, the axis xi toward S2, the axis
y i in such a way that the z coordinate of S3 is null.
3 Intrinsic parameters: the coordinate x of S2 and the
coordinates x and y of S3.
Singular case: when the three joints are aligned.
RS -links
Rules: The axis z i coincident with the revolute joint axis, the axis
xi toward the spherical joint.
1 Intrinsic parameter: The coordinate x of the spherical joint.
Singular case: When the center of S is on the axis of R .
PPS -links
Rules: The origin of the frame in the spherical joint, the axis z i
parallel to P1 joint axis, the axis x i defined by the cross product
of the two prismatic joint axes.
1 Intrinsic parameter: The angle between P1 and P2.
Singular case: When the two P axes are parallel to each other.
PSS -links
Rules: the origin of the frame in the spherical joint S1, the axis z i
parallel to P axis, the axis xi laying in the plane passing through
z i and S2.
2 Intrinsic parameters: the coordinates x and z of S2.
Singular case: when the axis of P is parallel to the segment
connecting S1 and S2.
Table 2. Rules to position the intrinsic frame on a link.
SS -links
Rules: the origin of the frame in S1, the axis z toward
S2, the orientation of x and y axis arbitrary but
orthogonal to the z axis.
1 Intrinsic parameter: the coordinate z of S2.
1 Free parameter: the rotation around z axis.
PS -links
Rules: the origin of the frame in S, the axis z parallel to
P , the orientation of x and y axis arbitrary but
orthogonal to z axis.
No intrinsic parameters.
1 Free parameter: the rotation around z axis.
RP -links
Rules: the axis z coincident to the R joint axis, the axis
x in such a way that the P axis is parallel to the yz
plane.
1 Intrinsic parameter: the angle between the joint axes.
1 Free parameter: the translation along z.
PP -links
Rules: the axis z parallel to the P1 joint axis, the axis x
in such a way that the P2 axis is parallel to the yz plane
( x = P2 × P1).
1 Intrinsic parameter: the angle between the joint axes.
3 Free parameters: the translations along x , y and z .
Table 3. Rules to position the intrinsic frames on a link with free parameters (singular cases).
In all the mentioned cases a constant matrix H (the 'link matrix') can be built to represent the
location of the auxiliary joint frame with respect to the intrinsic one. This matrix generally has the
form of the D&H-like standard matrices defined by two translations and two rotations. However, for
S joints three translations are required. As usual, when a prismatic joint is present, its location can
be arbitrarily assigned and so two redundant parameters are inserted.
During the construction of the matrices H for the joints which have been involved in the
definition of the intrinsic frame, some parameters necessarily have a constant null value and
so they are not inserted in the calibration parameter set Λ. For instance, in a link with ns > 3
spherical joints labelled S1, S2, …, Sns, where the intrinsic frame is assigned with the rules of
Table 2, the matrices describing the joint frame locations are:
H1 = I    H2 = T(x, x2)    H3 = T(x, x3)T(y, y3)
Hi = T(x, xi)T(y, yi)T(z, zi)    i = 4…ns
where I is the identity matrix. So we need 3(ns − 2) parameters to describe their relative
position.
As a further example, in a link with 3 revolute joints R1, R2 and R3 and a spherical one
(labelled 4), assuming that the intrinsic frame has been defined using the revolute joints R1
and R2, we get
Calibration of Parallel Kinematic Machines: Theory and Applications 179
H1 = I    H2 = T(x, l2)R(x, ϕ2)    H3 = T(z, h3)R(z, θ3)T(x, l3)R(x, ϕ3)
H4 = T(x, x4)T(y, y4)T(z, z4)
Note: for all the link types the frame of the first joint coincides with the intrinsic one and so
H1 = I .
These matrix equations are equivalent to Eq. (5) with A0k = EF HFk, Aik = Gik Hik and
Ank = Gnk Hnk EM. Eq.s (6) are then updated by adding the matrices Bi describing errors in
the joint locations, the matrices EF′ and EM′ describing the errors in the location of the frames
of the fixed and of the mobile bases, and the matrices Di describing the offsets in the joint
coordinates. Assuming that the intrinsic frame is attached to the first joint of each link
encountered travelling from the base to the gripper, we get
M = EF′ A0 B0 D1 A1 B1 D2 A2 … Dn−1 An−1 Bn−1 Dn Bn An EM′    for each branch
I = D1 A1 B1 D2 A2 B2 … Dm Am Bm    for each loop    (7)
where the subscripts k and j have been omitted for simplicity. If the intrinsic frame is
attached to the second joint of the link, or for i = n, the product Ai Bi commutes to Bi Ai.
The matrices EF′ and EM′ are identical for all the branches.
Assuming that the frames are created in such a way that the axes of R, P and C joints
coincide with z axes, the matrices Bi, EF′ and EM′ are as follows:
Bi = R(x, Δαi)R(y, Δβi)T(x, Δai)T(y, Δbi)    if Ji+1 = R or C
Bi = R(x, Δai)R(y, Δbi)    if Ji+1 = P
Bi = T(x, Δai)T(y, Δbi)T(z, Δci)    if Ji+1 = S
Ek′ = T(x, Δak)T(y, Δbk)T(z, Δck)R(x, Δαk)R(y, Δβk)R(z, Δγk)    base frames, k = F, M
Then the Eq.s (7) are analyzed in order to remove from matrices Bi the parameters which
are redundant to the joint offsets of matrices Di (Section 6). Finally all the parameters
describing the joint offsets of the unsensed joints are also removed.
To reduce from the beginning the number of the redundant parameters to be removed, the
matrices Bi can be defined taking into account some concepts developed in the ED&H
methodology. On each link it is necessary to define the intrinsic and the joint frames; a
matrix Bi is defined for each joint frame, but some of its parameters can be immediately
removed. The parameters removed are those whose value is implicitly fixed to zero by the
definition of the intrinsic frame.
For instance, in a link with ns > 3 spherical joints Si, we get
B1 = I    B2 = T(x, Δa2)    B3 = T(x, Δa3)T(y, Δb3)
Bi = T(x, Δai)T(y, Δbi)T(z, Δci)    i = 4…ns
or, in a link with 3 revolute joints (R1, R2 and R3) and a spherical one (labelled 4), assuming
that the intrinsic frame was defined using the revolute joints R1 and R2, we get:
B1 = I    B2 = T(x, Δa2)R(x, Δα2)    B3 = T(z, Δc3)R(z, Δγ3)T(x, Δa3)R(x, Δα3)
B4 = T(x, Δa4)T(y, Δb4)T(z, Δc4)
Note that for all the links we have B1 = I (see Section 3.3 for the analogies).
5. Singular Links
5.1 SS links
Fig. 4 shows a SS link connecting link i − 1 with link i + 1. Adopting the Modified
Incremental approach, among the other parameters it could be necessary to consider 3 joint
offsets Δqx,i−1, Δqy,i−1 and Δqz,i−1 in joint i − 1 and 3 translation parameters Δai+1,
Δbi+1 and Δci+1 in joint i + 1.
An infinitesimal analysis (Section 6) of the error propagation in the kinematic chain gives
the following results. The error Δai is equivalent to −Δqz,i−1 · li and Δci is equivalent to
Δqx,i−1 · li; therefore, Δai and Δci must be ignored. The joint rotations Δqx,i−1, Δqy,i−1 and
Δqz,i−1 are to be discarded because the joint is unsensed, and so only Δbi survives. The
result is that SS links require just one parameter (the link length); that is, SI = +1.
The matrix JΛ has 6(L + F − 1) rows. The i-th parameter is considered redundant to the others if the
i-th column of JΛ can be expressed as a linear combination of the columns of the quoted parameters.
The pose error of the mobile base in the i-th measurement configuration can be written as
ΔSi = JSi ΔΛ, where F is the direct kinematics function (Si = F(Qi, Λ)) and JSi = ∂F/∂Λ is the
jacobian evaluated in the i-th configuration.
The different jacobians evaluated for all the considered m poses, as well as the variations in
the mobile base pose coordinates, can be merged in one global equation
ΔStot = [ΔS1; ΔS2; …; ΔSm] = [JS1; JS2; …; JSm] ΔΛ = JStot ΔΛ    (9)
The global jacobian can be split into three matrices by Singular Value Decomposition (Press
et al., 1993), JStot = U Ss Vᵀ, converting Eq. (9) into
ΔS*tot = Ss ΔΛ*
with
ΔS*tot = Uᵀ ΔStot    ΔΛ* = Vᵀ ΔΛ    ΔΛ* = [Δλ*1 Δλ*2 … Δλ*N]ᵀ
ΔS̄tot = PS⁻¹ ΔStot    ΔΛ̄ = PΛ⁻¹ ΔΛ    (10)
ΔΛ̄ is called the 'scaled' parameters vector. Merging Eq.s (10) and Eq. (9), it can be found that
the proposed procedure based on SVD must be applied to the normalized matrices
J̄Stot = PS⁻¹ JStot PΛ    ΔS̄tot = J̄Stot ΔΛ̄    (11)
In this case, a proper threshold value σS for the singular values depends only on the
dimension of the vector ΔΛ̄: for a singular value below the threshold, even the maximum
expected parameter error ΔΛ̄ causes an error on a coordinate si of the platform smaller than
the acceptable one. We can write
σS = ||ΔS̄tot||∞ / ||ΔΛ̄||2
where, for a generic vector A, the norms are defined as ||A||∞ = max(|ai|) and ||A||2 = √(Σ ai²).
From the definition of the matrix PΛ it follows that |Δλ̄i| ≤ 1; then, using the euclidean norm,
||ΔΛ̄||2 ≤ √N
where N is the number of the parameters. From the definition of the matrix PS we have
||ΔS̄tot||∞ = 1
then, a proper threshold value is
σS = 1/√N
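The resulting selection rule can be sketched on a synthetic normalized jacobian; in this example one column is deliberately a linear combination of two others, so one singular value falls below the threshold σS = 1/√N (the data here is illustrative, not from the chapter):

```python
import numpy as np

def identifiable_count(J_stot):
    """Number of identifiable parameter combinations: singular values of
    the normalized global jacobian above the threshold 1/sqrt(N)."""
    N = J_stot.shape[1]
    s = np.linalg.svd(J_stot, compute_uv=False)
    return int(np.sum(s > 1.0 / np.sqrt(N)))

rng = np.random.default_rng(0)
J = rng.standard_normal((60, 5))          # m poses x N parameters
J[:, 4] = 0.5 * J[:, 0] + 0.5 * J[:, 1]   # a redundant parameter
print(identifiable_count(J))              # 4
```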
Reference                                       R   P   C   SI  E   L   F   N
(parameters per unit)                          (3) (1) (2) (±1)(1) (6) (6)
Stewart-Gough 6[SPS] (Jokiel et al., 2000)      0   6   0   0   6   5   2   48 (42 ♣)
Stewart-Gough 6[URPU] (Wang et al., 1993)      30   6   0   0   6   5   2  138
3[CCR] (Callegari & Tarantini, 2003)            3   0   6   0   3   2   2   40
3[(RP)(RP)R] Section 7.4                        9   6   0   0   3   2   2   46
6[PSS] (Ryu & Rauf, 2001)                       0   6   0   6   6   5   2   54
Cheope 3[P(2S/2S)] (Tosi & Legnani, 2003)       0   3   0   6   3   5   2   48
Hexa 6[RSS] Fig. 10                             6   0   0   6   6   5   2   66
Delta4 3[R(2S/2S)] (Clavel, 1991)               3   0   0   6   3   5   2   54
Table 4. Number of parameters for external calibration (F = 2) of some serial or parallel
manipulators (Eq. (4)). For internal calibration subtract 12 parameters (F = 0). For extra sensors
add the suitable number of parameters. ♣ Generally one parameter for each of the 6 SPS legs can
be numerically neglected.
If the parameter set is composed of the coordinates of the centers of the spherical joints and
of 2 parameters for each leg (joint offset and λ), this would give 56 parameters. To achieve
minimality, the number of parameters can be reduced to 44 with a proper choice of the
reference frames on the two bases: the origin is located on one sphere S1, the x axis is
directed toward a second sphere S2, and the y axis is chosen in such a way that a third sphere
S3 lies in the x − y plane. As a result, the corresponding coordinate errors must be neglected
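This frame choice (origin in S1, x toward S2, S3 in the x − y plane) can be constructed directly from the three measured sphere centers; a sketch under these assumptions:

```python
import numpy as np

def base_frame(s1, s2, s3):
    """Homogeneous frame with origin in s1, x axis toward s2, and the
    y axis chosen so that s3 lies in the x-y plane (null z coordinate)."""
    x = (s2 - s1) / np.linalg.norm(s2 - s1)
    z = np.cross(x, s3 - s1)          # normal to the plane of the spheres
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, s1
    return T
```

Expressing the three centers in this frame, S1 maps to the origin, S2 onto the x axis, and the z coordinate of S3 vanishes, which is why those coordinate errors drop out of the parameter set.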
Fig. 8. A 3 DOF PKM with CCR legs. Fig. 9. A PKM with PSS legs.
Fig. 10. Reference frames and transformation matrices for nominal kinematics on i -th leg of a
RSS PKM. The actuator is on the R joint.
Internal Parameters
Leg Fixed Base R Joint RS Link RS Link SS
i H Fi G1i H 1i H 2i H Mi
1 - Δc, Δq Δa Δa -
2 Δl , Δϕ Δc, Δq Δa Δa Δa
3 Δh, Δθ , Δl , Δϕ Δc, Δq Δa Δa Δa, Δb
4 Δh, Δθ , Δl , Δϕ Δc, Δq Δa Δa Δa, Δb, Δc
5 Δh, Δθ , Δl , Δϕ Δc, Δq Δa Δa Δa, Δb, Δc
6 Δh, Δθ , Δl , Δϕ Δc, Δq Δa Δa Δa, Δb, Δc
External Parameters
Fixed Base EF Δa, Δb, Δc, Δα , Δβ , Δγ
Mobile Base EM Δa, Δb, Δc, Δα , Δβ , Δγ
Table 5. The 66 parameters (54 internal, 12 external) of the RSS PKM (ED&H Model).
External calibration: for external calibration, 12 more parameters are added to describe the
matrices EF and EM containing the errors in the pose of the user frames with respect to the
intrinsic ones (compare Table 5 and Table 7).
As a second alternative, we can omit defining the intrinsic frames on the two bases, and the
errors of the joints can be defined with respect to the user frames (EF = EM = I, Table 6 and Table 8).
Fig. 11. Photo of the hybrid robot ‘CHEOPE’ and scheme of the parallel kinematic part.
Figure 12 and Figure 13 show the singular values of Cheope respectively for external and
internal calibration. For each case three different situations are investigated: a) when both
position and rotational errors of the mobile base are important, b) when only position errors
are of interest, c) when only rotational errors are considered.
a) S = [xp yp zp θ ϕ ψ]ᵀ    b) S = [xp yp zp]ᵀ    c) S = [θ ϕ ψ]ᵀ
Fig. 12. Singular values of the matrix J̄Stot in the case of external calibration: (a) complete pose,
(b) position only, (c) rotation only; σS = 1/√48 ≈ 0.14.
a) S = [xp yp zp θ ϕ ψ]ᵀ    b) S = [xp yp zp]ᵀ    c) S = [θ ϕ ψ]ᵀ
Fig. 13. Singular values of the matrix J̄Stot in the case of internal calibration: (a) complete pose,
(b) position only, (c) rotation only; σS = 1/√36 ≈ 0.17.
The dashed line represents the threshold σS; only the parameters associated with the larger
singular values must be considered. For example, when both linear and angular errors are of
concern, the external calibration requires 30 combined parameters (Figure 12-a) while the
internal one can be performed with 24 combined parameters only (Figure 13-a).
The thresholds have been evaluated considering manufacturing errors and joint sensor
offsets with a maximum magnitude of 10⁻⁵ m, a required PKM accuracy of 10⁻⁴ m, and
measuring errors of 10⁻⁵ m for the linear sensors and 10⁻⁵ rad for the rotational ones.
8. Conclusions
This chapter has presented a complete methodology for the identification of the geometrical
parameter set necessary to describe the inaccuracies in the kinematic structure of a generic
parallel manipulator.
The methodology, that can be applied to any existing PKM, supplies a minimum, complete
and parametrically continuous model of the manipulators. It can be applied both to intrinsic
(internal) and extrinsic (external) calibration.
The methodology identifies the parameters describing geometric errors of the manipulator,
joint offset errors and errors in the external devices used for calibration (e.g. double ball
bar). Furthermore, a general formula to determine the total number of necessary parameters
has been presented and discussed.
The paper shows how, for some manipulators, the number of theoretically necessary parameters is quite large, and a numerical methodology to remove the less significant ones is mentioned.
The final methodology is general and algorithmic, which means that it can be applied automatically to any given PKM.
Numerous practical illustrative examples are also given.
M_ij = G_j H_Bj = R(z, θ_j) T(z, h_j) T(x, l_j) R(x, ϕ_j)
M_ik = G_j H_Bk = R(z, θ_j) T(z, h_j) R(z, θ_B) T(z, h_B) T(x, l_k) R(x, ϕ_k)    (12)
10. Acknowledgment
This research work has been partially supported by MIUR grants cofin/prin (Ministero
Università e Ricerca, Italia).
10. Interfacing the Gryphon Robot with the World Wide Web
R. T. F. Ah King, S. Mootien, H. C. S. Rughooputh
1. Introduction
The field of robotics encompasses mechanics, electronics, intelligent computer control and network communication technology. It has applications in industry, exploration, public service and medicine. Through supervisory control modes, a robotic system can be controlled from a remote location, thus preventing the user from coming into direct contact with hazardous or inaccessible environments. Moreover, with the development of computer networks and the Internet, connectivity can be achieved to remote devices virtually anywhere in the world. In addition, distance education is becoming a widely used way of teaching different disciplines (Khan, 1997). Along this line, in addition to introducing the field of robotics
through the World Wide Web, this medium can be used as a link to program and control the
Gryphon robot remotely. The motivation behind the development of this system was to
develop a low cost implementation (You, 2001) to enable students to learn robotics, program
and run the robot remotely with existing facilities at the Faculty of Engineering of the
University of Mauritius. The proposed system (Mootien, 2003) is based on the client/server
model, with a suitable front-end for viewing and interactive sessions with web users and a
back-end relational database for storing and retrieving relevant information at the server
side. A Web-based interface is also developed to enable online training of students on the
programming and control of the robot. This chapter presents a detailed description of the
work that has been carried out from the design to the implementation and application (use)
of the system. The hardware and software technologies that have been used for
implementing the system are Microsoft Internet Explorer, Microsoft Internet Information
Services (IIS), VBScript, JavaScript and Hyper Text Markup Language (HTML), Microsoft
Active Server Pages (ASP), Microsoft Access, Virtual Network Computing (VNC), Microsoft
NetMeeting, WALLI3 for Windows and Creative Web Cam.
one actually available is the vacuum gripper. The gripper requires an air supply with a
pressure of about 5 to 8 bar. The gripper entry is either 0 or 1, indicating open or closed for the pneumatically actuated gripper. A control box is supplied together with the robot. Any
other compatible devices may be connected to form a workcell via the control box. The
articulated arm is under the control of 4 microprocessors and, if properly programmed, will
accurately work with components in a workcell. Each axis is powered by a stepper motor
with optical encoder feedback to provide closed-loop control. In the controller there is one
microprocessor to monitor the positions of the axes, two more to control the motors and
another one to supervise the first three and to communicate with the host computer.
Programming may be accomplished in a variety of ways. Data for each axis may be directly
entered on-screen. Alternatively, the Gryphon EC Robot may be moved by the teach
pendant or by hand (lead-by-the-nose).
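The closed-loop arrangement described above — a stepper motor per axis with optical encoder feedback — can be sketched as a simple correction loop. The hardware interface (read_encoder, issue_steps) is hypothetical, and a simulated lossy axis stands in for the real motor:

```python
def drive_axis(target_steps, read_encoder, issue_steps, max_iters=10):
    """Command an axis toward target_steps, re-issuing correction pulses
    until the encoder confirms the position (hypothetical interface)."""
    for _ in range(max_iters):
        error = target_steps - read_encoder()
        if error == 0:
            return True          # position confirmed by encoder feedback
        issue_steps(error)       # correction pulses toward the target
    return False

class FakeAxis:
    """Simulated axis that loses roughly 10% of commanded steps."""
    def __init__(self):
        self.pos = 0
    def read(self):
        return self.pos
    def step(self, n):
        lost = abs(n) // 10
        self.pos += n - lost if n > 0 else n + lost

axis = FakeAxis()
print(drive_axis(200, axis.read, axis.step), axis.pos)   # True 200
```

Open-loop stepping would stop at the first, short, move; the encoder feedback is what lets the loop converge on the exact position.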
positional feedback through encoders. The robot may be moved using push buttons on the Teach Pendant, without the host computer supporting the WALLI3 software being connected. Continuous programming is not possible by this means; it is only a suitable alternative for point-to-point control.
Simulator
The Gryphon Robot has an additional accessory, called a simulator, which mechanically mimics the robot in a miniature and simplified form. The movement of each of the simulator axes is monitored through potentiometer sensors, so that manual movement of the simulator may be reflected through the control system to move the robot. When a robot is being moved in this remote manner, it is often referred to as being in Teledictor control mode.
[Figure: block diagram of the Gryphon control system — the host computer links to a master processor supervising three slave processors; stepper-motor controllers with dual-bridge pulse drivers drive each axis through a reduction gearbox and stepper motor, with encoder feedback.]
The following activities were included within the linear sequential model shown in Fig. 5:
• System/Information Engineering:
o The requirements for the project were established.
o Information research and background study for different aspects and
subsets of the system/project were dealt with accordingly.
o Interfaces with other elements such as hardware, people and databases were considered.
o Initial designs were proposed.
• Analysis:
o Research was more specifically focused on software and technologies to
be used.
o The software constraints and performance were established and
evaluated.
o Choice of software and other requirements were specified.
o A reliable solution proposal was retained.
• Design:
o The steps towards the implementation were explained explicitly.
o The configuration of the web site and subsequent pages were set.
o Flowcharts and state diagrams were used to initialize the coding part.
• Code Generation:
o The design was coded with selected programming languages.
o The code generated was combined with the appropriate information for particular web pages.
o Interfacing between different programming languages and tools were
dealt with accordingly.
• Testing:
o All functions were tested and proper corrections were made.
o Interactions between the hardware, software and human operator were considered and the necessary changes were made to the system.
[Fig. 5. The linear sequential model: System/Information Engineering, Analysis, Design, Code Generation, Testing.]
5. Proposed System
Since the RJ45 Interface Board is a proprietary design for the Gryphon EC Robot and the WALLI technology, the proposed system should inevitably incorporate the Virtual Network Computing
(VNC) technology. VNC is, in essence, a remote display system which allows one to view a
computing 'desktop' environment not only on the machine where it is running, but from
anywhere on the Internet and from a wide variety of machine architectures. In addition to this tool, the proposed system must have video feedback of the robot and its environment. For this to be implemented, the NetMeeting technology developed by Microsoft should be used. NetMeeting delivers an open, extensible platform for real-time communications, offering audio and video. Besides these technologies, it is necessary to build a web site to act as the link between these tools and the Gryphon system. Moreover, it is mandatory for the web site to include an introduction to the field of robotics and associated topics. Fig. 6 describes the overall proposed solution.
[Fig. 6. The overall proposed solution: remote users connect via the Internet to the web site, VNC and a web cam (NetMeeting) on the Gryphon host PC.]
ASP. Netscape runs on Windows-based platforms and supports multimedia and graphics images. However, Netscape lacks certain requirements of ASP: Netscape Communicator does not recognize all VBScript syntax, recognizing JavaScript instead. The solution is to opt for the Internet Explorer 5.0 browser because it efficiently supports the ASP technology and VBScript. Moreover, since Internet Explorer is widely used and currently the most common browser on the Internet, it is the logical choice.
VNC is, in essence, a remote display system which allows one to view a computing 'desktop' environment not only on the machine where it is running, but from anywhere on the Internet and from a wide variety of machine architectures. VNC consists of two types of component: a server, which generates a display, and a viewer, which actually draws the display on the remote screen. The server and the viewer may be on different machines and on different architectures. The protocol which connects the server and viewer is simple, open, and platform-independent. The current VNC software requires a TCP/IP connection between the server and the viewer. Thus, VNC is ideal for calling the WALLI3 software for programming and operating the robot remotely.
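The simple, open protocol behind VNC is RFB, which opens with a plain-text version exchange over TCP. A minimal sketch of the client side of that handshake (host and port are assumptions; VNC servers listen on 5900 by default):

```python
import socket

def rfb_version(host, port=5900, timeout=5.0):
    """Read the 12-byte RFB ProtocolVersion banner that a VNC server sends
    on connect (e.g. b'RFB 003.008\\n') and echo it back as our version."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = sock.recv(12)
        if not banner.startswith(b"RFB "):
            raise ValueError("not a VNC server: %r" % banner)
        sock.sendall(banner)      # agree to the server's protocol version
        return banner.decode("ascii").strip()
```

A real viewer would continue with security negotiation and framebuffer updates; this sketch only confirms that the endpoint speaks the VNC protocol.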
6. System Development
The proposed system mainly consists of client and server web pages linked together using ASP and HTML, communicating with a back-end database server. The web pages should give an introduction to robotic technology and the other fields related to this project. As stated in the objectives, the web pages should also give a detailed description of the mechanical device involved and the means used for communicating with it, i.e. VNC and NetMeeting.
Fig. 8. Homepage
The web page on the History of Robots shown in Fig. 9 traces the origins of robots and the milestones of the robotics industry.
Applications and uses of robots in different sectors are emphasized: Material Handling (Fig. 10), Automobile Industry, Agriculture, Medical, Space Exploration, Emergency Response and Robotic Toys.
The web page dedicated to the Gryphon robot, shown in Fig. 12, describes its specifications, modes of programming and digital control system.
through the remote interface. Details regarding the connection procedures are described in the next paragraph. The performance of the robot is demonstrated using a video clip embedded in the web page, and this clip is downloadable. The user can compare the robot's action with the clip to check whether he has programmed the robot correctly. Moreover, a more elaborate program is also available, which involves the robot picking and placing objects on a conveyor.
Authentication is an important process for this project. The control part of the web site is not freely available to every visitor. For this sensitive part, ASP technology is used extensively, and entry is possible only after the user name and password have been validated. After acceptance of the user ID and password (Fig. 16), a web page provides the information, e.g. the IP address, needed to establish a connection to the remote Gryphon host PC (Fig. 17).
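The validation step can be sketched server-side as a lookup in the back-end database of authorised users; the table layout, hashing scheme and credentials below are illustrative assumptions, not the actual ASP/Access implementation:

```python
import hashlib
import sqlite3

def make_user_db():
    """In-memory stand-in for the back-end database of authorised users."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT PRIMARY KEY, pw_hash TEXT)")
    db.execute("INSERT INTO users VALUES (?, ?)",
               ("student1", hashlib.sha256(b"secret").hexdigest()))
    db.commit()
    return db

def authenticate(db, name, password):
    """True only if the user exists and the password hash matches."""
    row = db.execute("SELECT pw_hash FROM users WHERE name = ?",
                     (name,)).fetchone()
    return (row is not None and
            row[0] == hashlib.sha256(password.encode()).hexdigest())

db = make_user_db()
print(authenticate(db, "student1", "secret"))   # True
print(authenticate(db, "student1", "wrong"))    # False
```

Storing a hash rather than the password itself means a leaked user table does not directly expose credentials.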
forwarded to the main program. The next instruction from the software moves the index table through 90 degrees. This table can rotate continuously, and digital sensors may be used as for the conveyor belt; however, in this context, the index table has been ordered to move through only 90 degrees. As soon as the index table has completed its task, the program is again redirected to the Gryphon subroutine PICKUP 2, whose function is to pick up the object on the index table and place it in a safe position as required by the user. Speeds from 1 to 3 may be used to move the robot, 3 being the fastest. The robot has been programmed to dispose of the object slowly in a safe position. When the subroutine's task is complete, the program executes the final lines of code and the robot moves to a safe position determined by the user.
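The cycle described above can be sketched as a supervisory program. The device interfaces below are hypothetical stand-ins for the WALLI3 calls, with a stub device that records commands instead of driving hardware:

```python
class StubDevice:
    """Records commands instead of driving the real workcell hardware."""
    def __init__(self):
        self.log = []
    def rotate(self, deg):
        self.log.append(("ROTATE", deg))
    def call(self, sub, speed):
        self.log.append((sub, speed))
    def move_to(self, pose, speed):
        self.log.append(("MOVE", pose, speed))

def run_cycle(robot, table, speed=3):
    """One cycle as described in the text: rotate the index table through
    90 degrees, run subroutine PICKUP 2 to lift the object off the table,
    then dispose of it slowly at the user-defined safe position.
    Speeds range from 1 (slowest) to 3 (fastest)."""
    assert 1 <= speed <= 3
    table.rotate(90)                     # index table task
    robot.call("PICKUP 2", speed=speed)  # pick object off the index table
    robot.move_to("SAFE", speed=1)       # dispose slowly, safe position

dev = StubDevice()
run_cycle(dev, dev, speed=3)
print(dev.log)   # [('ROTATE', 90), ('PICKUP 2', 3), ('MOVE', 'SAFE', 1)]
```

The stub makes the supervisory logic testable without the robot connected, which is how the remote sessions can be rehearsed offline.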
The remote user has the flexibility of changing the robot parameters and experimenting with the program again to see how the robot can be effectively controlled, and in the process learn the basics of robot programming. To end the session, the remote user simply closes VNC and signs out. The user is then directed to the login page, from which he can be linked to the rest of the site. At the closing stage of the site, a Comments Form has been included to enable further interaction with any user. The form is available upon request and the user has to fill in all the available fields. On clicking the Submit button, all the fields are validated on the client side itself. If any information is missing or incorrectly formatted, the user is prompted accordingly. If all the fields are correctly filled, a connection is opened to the database and the data is stored. To confirm acceptance of the records by the database, feedback is supplied to the user. All the records are stored on the same server as the mechanical device.
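The client-side validation of the Comments Form can be sketched as follows; the field list and the rules are assumptions (the original used VBScript/JavaScript in the browser):

```python
import re

REQUIRED = ("name", "email", "comments")   # assumed field list

def validate(form):
    """Return a list of problems; an empty list means the form may be
    submitted and the data sent on to the database."""
    problems = [f + " is missing"
                for f in REQUIRED if not form.get(f, "").strip()]
    email = form.get("email", "")
    if email and not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        problems.append("email has an incorrect format")
    return problems

print(validate({"name": "A", "email": "a@b.mu", "comments": "hi"}))  # []
print(validate({"name": "A", "email": "bad", "comments": ""}))
```

Validating before submission saves a round trip to the server; the server would still re-check before inserting the record.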
8. Conclusions
An introductory web site on robotics, with remote control of the Gryphon robot via the
World Wide Web, has been successfully designed and implemented. This chapter has
described the development of the web-based interface using the tools and technologies
currently available. The web site gives concise and up-to-date facts about robots and the genuine industries which employ them, and it acts as a teaching tool for all those with a particular interest in robotics. Moreover, by including the control of a mechanical device in the system, a remote user can easily make the link between the theoretical approach and real-life systems. The user can change the robot parameters remotely and experiment with them to learn its control. At present the remote control of the robot is restricted to a single user, but priorities may be set for either local or remote control. Part of the system is suitable for engineering courses such as Robotics Technology, which has been newly added to the Mechatronics course at the University of Mauritius. Further work and
programs may be developed for other courses in the future, where the World Wide Web can
help students enhance their knowledge about other disciplines.
9. References
AT&T (1999). http://www.uk.research.att.com/vnc/index.html
Italtec (1997). WALLI for Windows. Italtec Company, Italy, revised 10/03/1997.
Jones, A. R. (2000). Mastering Active Server Pages 3. BPB Publications, Delhi.
Khan, B. H. (1997). Web-Based Instruction. Educational Technology Publications, Englewood
Cliffs.
1. Introduction
Many Internet robotic systems have been developed on the Web over the last few years.
Internet telerobotic systems have typically been operated using two-way communication links between a client and a remote robotic system (Dalton et al., 1998): one approach uses HTTP (HyperText Transfer Protocol) combined with CGI (Common Gateway Interface), and the other uses Java. The University of Southern California's Mercury Project (Goldberg et al., 2000) used HTTP combined with CGI to
develop a tele-excavation system that enabled web users to manipulate a remote “real
world”. CMU project developed Xavier autonomous indoor robot on the Web using CGI
(Reid Simmons et al, 1998, Reid Simmons et al, 2000). KhepOnTheWeb (Patrick Saucy et al,
1998, Patrick Saucy et al, 2000) uses CGI, which allows the user to control a Khepera robot in
a static environment. Another CGI system is the WebPioneer (WebPioneer, 1998), in which
the user drives a Pioneer robot in an unknown room. CGI system causes latency and poor
scalability because a new process must be started for each user request. The USA Puma
Paint project (Matthew R. Stein et al, 1998, Matthew R. Stein, 2000) is a Web site allowing
any user with a Java TM compatible Web browser to control a PUMA760 robot to paint on an
easel with real brushes and paint. Mobile robot on the Web project (Roland Siegwart et al,
1998) uses Java to establish a modular framework for mobile robots on the Web. Java is
executable within a Web page. Java programs can be run as applets within the browser and
supply an interactive interface instead of a static one. But a Java client must know the location of all servers to which it wants to connect, and Java allows applets to connect only to the host they were served from. In this paper, we propose using CORBA to implement the networking connections between a client and a remote robotic system.
Our goal was to propose an advanced architecture for Internet robotic systems that overcomes the shortcomings of the other Internet robotic systems. CORBA uses an Object Request Broker (ORB) as the middleware that establishes client/server relationships between objects. This allows a client to invoke a method on a server across the network transparently, without needing to know where the application servers are located or what programming language and operating system are used.
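CORBA itself requires an ORB such as omniORB, which is beyond a short sketch; the same location-transparency idea can be illustrated with Python's standard xmlrpc machinery. This is an analogy, not CORBA — with a real ORB even the URL below would be replaced by an object reference resolved by the ORB:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# "Servant": the object whose methods remote clients invoke.
class RobotControl:
    def move_to(self, x, y):
        return "moved to (%d, %d)" % (x, y)

# Server side: bind to an ephemeral port and serve in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(RobotControl())
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call reads like a local method call; the middleware
# marshals the arguments, finds the server and returns the result.
robot = ServerProxy("http://%s:%d" % (host, port))
reply = robot.move_to(10, 20)
print(reply)   # moved to (10, 20)
server.shutdown()
```

The pattern — servant object, registration, transparent remote invocation — is exactly what the chapter's CORBA application servers provide, with the ORB additionally hiding language and platform differences.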
Our secondary goal was to implement a network-based robotic system to assist the aged or disabled in their homes, because the elderly population is growing while the number of people available to take care of them is declining. The primary task of the system would be to supply food and a cup of water or juice to aged and disabled persons. To
accomplish these tasks, the human caregivers can control the telerobotic system to
retrieve and manipulate the desired tableware or the other objects, by viewing live
image feedback from a remote computer. In our research, we first implemented an Internet-based hand-eye robotic system using CORBA as a basic platform. The user can control the remote hand-eye robotic system at a task level to recognize and manipulate the spoon, the fork, and so on through the intuitive user interface. Then we developed multiple robots to perform service tasks for supporting the aged or disabled. A video/audio conference system based on Robot Technology Middleware (RTM) was also developed to improve the interaction between the users and the local robotic system, enable web users to monitor the working state of the robotic system and the state of the aged or disabled person, and give a better understanding of what is going on in the local environment.
In summary, the paper's contributions are:
• Proposing the use of the Common Object Request Broker Architecture (CORBA) to implement the networking connections between a client and a remote robotic system. This gives the system the following distinct features:
o Clients and servers do not need direct knowledge of each other; a CORBA-based system has high scalability, allowing clients and server objects written in different languages, running on different operating systems and connected to different networks to inter-operate.
o The CORBA-based Internet robotic system can be implemented as a number of independent components, and these components can be run independently and easily integrated with other technologies into new comprehensive application systems. This facilitates network-distributed software sharing and reduces the cost of writing and maintaining software.
o The other components of the system continue to work normally even if some of them fail.
• Developing a CORBA-based Internet hand-eye robotic system and CORBA application servers that allow the remote user to control the robot arm to recognize and manipulate objects at a task level.
• Developing a multi-functional service robotic system and the key technologies that enable the developed system to provide daily service tasks supporting the aged or disabled.
• Experimental results verifying that using CORBA to implement the networking connections between a client and a remote robotic system is very flexible, effective and robust.
The rest of the paper consists of seven sections. Section 2 presents the Common Object Request Broker Architecture (CORBA) in detail. Section 3 describes the structure of the system hardware. Section 4 details the functions and interfaces of the task-level robot arm control server, the live image feedback server, the mobile robot control server, and the iGPS server. Section 5 introduces the developed network monitoring system based on RT middleware. Section 6 explains the intuitive Web-based user interface. Experimental results are given in Section 7. Section 8 concludes the paper.
3. System Description
The problem of population aging is increasingly pressing society. According to statistical data from the Ministry of Welfare of Japan, those 65 years or older will comprise 25.2% of the Japanese population in 2015. Meanwhile, the population of the aged worldwide is increasing at approximately a 12% rate. This group requires special care and functional aids due to a progressively reduced degree of autonomy. Because of rising costs and the shortage of nursing home personnel, the current trends in long-term care for the elderly are at-home care or residence at small-scale distributed facilities. Thus, the development of supporting systems to reduce care costs and improve the QoL (Quality of Life) of the elderly could not come at a better time.
Our HARSP (Human-Assistance Robotic System Project) consists of developing a human-assistance robotic distributed system in order to provide daily services to aid the aged or elderly (Jia et al., 2004). The proposed system has the potential to provide elderly persons with local services in:
• Intelligent reminding: remind elderly persons about important activities such as taking medication, eating meals, visiting the bathroom, or scheduling medical appointments.
• Data collection and surveillance: many emergency conditions can be avoided with systematic data collection. For instance, robots can support the aged or disabled in using medical peripherals, such as a telephonic stethoscope or vital-sign and glucose meters. Robotic systems may soon be able to call caregivers for assistance if they detect that an elderly person has fallen or another emergency has occurred.
• Daily services: many elderly persons with chronic diseases give up independent living because of difficulty in moving their bodies to fetch something, such as objects in a refrigerator. An omnidirectional mobile robot integrated with a skillful robot arm could help the aged or disabled overcome these barriers and supply them with the necessary daily services.
• Mobility aid: support the aged or disabled in getting up from the bed or a chair, and implement an intelligent walking aid.
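The intelligent-reminding service, for instance, reduces to checking a schedule against the current time; the schedule and the time window below are illustrative assumptions:

```python
from datetime import time

# Assumed daily schedule, drawn from the activities listed above.
SCHEDULE = [
    (time(8, 0),  "take medication"),
    (time(12, 0), "eat lunch"),
    (time(15, 0), "medical appointment"),
]

def due_reminders(now, window_minutes=15):
    """Return the activities scheduled within the next `window_minutes`
    of the current time `now` (a datetime.time)."""
    now_min = now.hour * 60 + now.minute
    due = []
    for t, activity in SCHEDULE:
        t_min = t.hour * 60 + t.minute
        if 0 <= t_min - now_min <= window_minutes:
            due.append(activity)
    return due

print(due_reminders(time(11, 50)))   # ['eat lunch']
```

In the full system such reminders would be delivered through the robot's speech or display interface rather than printed.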
Internet-Based Service Robotic System Using CORBA as Communication Architecture 219
Multi-robot cooperation is indispensable in order to perform service tasks for supporting the aged or disabled. A mobile platform equipped with a dexterous manipulator is convenient, but it is very difficult to handle objects (such as those in a refrigerator) because of the difficulty of controlling the position and orientation of the mobile platform and of the manipulator mounted on it. In our system, we adopted a robot arm with five degrees of freedom cooperating with a mobile robot to implement the service tasks. This method makes it easier to manipulate objects such as those in a refrigerator. Figure 2 illustrates the hardware configuration of the developed system, including the robots, cameras, server computer and users' computers.
Each method on the task-level robot control server consists of the following parts (Figure 5).
omniORB conforms to The Open Group's Open Brand for CORBA. This means that omniORB has been tested and certified CORBA 2.1 compliant. omniORB implements specification 2.3 of the Common Object Request Broker Architecture (CORBA). We develop the robotic functional elements as "RT software components", which makes application and system integration easier.
QuickCam Orbit cameras were used in our system, providing high-quality video at true CCD 640x480 resolution with automatic face tracking and a mechanical pan/tilt feature. The camera mechanically and automatically turns left and right for an almost 180-degree horizontal view, and up and down for an almost 90-degree top-to-bottom view. Its maximum video frame rate is 30 fps (frames per second), and it works with both USB 2.0 and 1.1. We implemented a video stream RT component based on RT Middleware, with OmniCORBA IIOP employed as the message communication protocol between the RT component and the requester. Two QuickCam Orbit cameras were set up in the environment; they can overlook the area the service robots move in by adjusting the mechanical pan and tilt of the cameras. The structure of the developed network monitoring system is shown in Figure 8.
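A control layer for such a camera typically clamps requested pan/tilt commands to the mechanical range quoted above. A small sketch, where the symmetric limits and the interface are assumptions rather than the actual driver API:

```python
# Assumed limits from the text: ~180 deg pan and ~90 deg tilt, centred at 0.
PAN_RANGE = (-90.0, 90.0)
TILT_RANGE = (-45.0, 45.0)

def clamp(value, lo, hi):
    """Restrict value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def move_camera(pan, tilt):
    """Clamp a requested pan/tilt command to the mechanical range of the
    camera before it is issued (hypothetical interface)."""
    return clamp(pan, *PAN_RANGE), clamp(tilt, *TILT_RANGE)

print(move_camera(120.0, -30.0))   # (90.0, -30.0)
```

Clamping in software keeps a remote user's out-of-range request from stalling the pan/tilt mechanism against its end stops.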
In addition, we developed a graphical user interface (GUI) for the video stream system that provides a remote video stream (the 320x288 format was selected because of image-data transmission constraints), camera zoom and pan-tilt adjustment, and a chat function that allows a remote user to communicate with a local user. When the user sends a request for video, the system automatically displays the GUI. The user can click "Connect" and input the IP address of the computer on which the RT video component is running to view a real-time video feed. The RT video stream component was implemented in Visual C++ with Microsoft Visual Studio .NET 2003. A performance test of the developed real-time video stream was conducted to examine the possibility of using a live video feed to monitor the state of the robotic system. The video server runs on Windows 2000 Professional (1.9 GHz Pentium 4) and the video client on Windows XP (2.4 GHz Pentium 4). The average frame rate is approximately 16.5 fps.
7. Experimental Results
Performance Evaluation of CORBA
CORBA uses an Object Request Broker (ORB) as the middleware that establishes
client/server relationships between objects. The client can invoke a method on a server
object across a network transparently without knowing where the application servers are
located, or what programming language and operating system are used. This lets the system overcome the shortcomings of other Internet robotic systems. In addition, the components of the CORBA-based system can be implemented and run independently, and can also be integrated easily into new systems.
Experiments evaluating the performance of CORBA have been done. The client (located at PC1,
Pentium III, 600 MHz, NT 4.0) sends a remote method call with 64 long words (4 bytes/word)
to the CORBA server (located at PC2, Pentium III, 700 MHz, NT 4.0) in a LAN (100BASE-TX), and the
CORBA server returns the identical data to the client. This cycle is repeated 1000 times, which is
called one set. Twenty sets have been done, and the best-performing set is chosen as the
representative result; its average round-trip time is about 0.29 ms. The same
experiment has also been done with ten simultaneous users, and there is no significant
difference between the single-user and multi-user results.
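The round-trip measurement loop is easy to reproduce in outline. The sketch below is a stand-in, not the original setup: it replaces the ORB with a plain TCP echo server on the loopback interface (an assumption), while keeping the 64-long-word payload and the repeated request/reply cycle of one set:

```python
import socket
import threading
import time

PAYLOAD = b"\x00" * 256  # 64 long words x 4 bytes/word, as in the experiment

def _echo_once(server_sock):
    """Accept one client and echo everything back until it disconnects."""
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(256)
            if not data:
                break
            conn.sendall(data)  # return the identical data, as the server did

def measure_round_trip(cycles=1000):
    """Average round-trip time over `cycles` request/reply exchanges."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # any free loopback port
    server.listen(1)
    threading.Thread(target=_echo_once, args=(server,), daemon=True).start()

    with socket.create_connection(server.getsockname()) as client:
        start = time.perf_counter()
        for _ in range(cycles):     # one "set"; the best of 20 sets was kept
            client.sendall(PAYLOAD)
            received = b""
            while len(received) < len(PAYLOAD):
                received += client.recv(256)
        elapsed = time.perf_counter() - start
    server.close()
    return elapsed / cycles         # seconds per round trip
```

On a loopback interface this typically reports well under a millisecond per call; the actual ORB adds marshalling overhead on top of the transport time measured here.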
We have also run experiments in which the task-level robot control server is working
while the live image feedback server is down. In such a case, the user can still access the
robot control server normally. Similarly, the user can still access the live image feedback
server normally even when the robot control server is down.
Remote Accesses to the Developed Hand-Eye Robotic System
We use CORBA as a basic platform and connect our hand-eye robotic system to the Internet.
A Web-based user interface was designed; it consists of a live image feedback part, a
robot commands part, and a message text box. This system has been accessed by operators
from Tama City, Saitama City, and Kyoto University in Japan, and from Chicago in the United States.
These users’ computers have different specifications and network connections, and run
different operating systems. Users operate this remote robotic system to retrieve and
manipulate the tableware that they want, and they can also monitor the state of the robot
executing the task by viewing the live image feedback coming from the robot site.
Fig. 10. Remote access to the hand-eye system. The remote user uses the telerobot system to
manipulate the juice, fork, green cup, and vitamin drink and place them on the tray mounted
on the mobile platform.
228 Industrial Robotics - Programming, Simulation and Applications
When the remote user pushes the command “Juice, Please”, the system automatically
finds the juice and executes this task robustly. Figure 10 shows some on-line images of the
remote user accessing the developed hand-eye system to manipulate the fork, green cup,
and vitamin drink and place them on the tray mounted on the mobile platform. When the
user pushes the “start” button, a new image with the latest information is shown.
Remote Access to the Developed Multi-Robot System to Perform a Service Task
Figure 11 illustrates some on-line images of the user operating the multiple robots to perform a
service task. We developed a hardware base, including a robot arm and an omnidirectional
mobile robot, and CORBA application servers, including a task-level robot arm control server,
a live image feedback server, a mobile robot control server, and an iGPS server, all of which can be
distributed on the Internet and executed in parallel.
By remotely controlling a mobile robot to cooperate with a robot arm through the designed user
interface, the developed system can provide some basic services to support the aged and disabled.
When the operator links to the homepage, the operator first specifies the best route for the mobile
robot and directs it to move to the place where it can cooperate with the robot arm, using the
“specify mobile robot route” user interface. Through live image feedback or the video stream, the user can
observe whether the mobile robot has arrived at a place where it can receive objects passed to it by
the robot arm. If there is a displacement, the user can adjust the position and rotation of the mobile
robot. Then the user operates the robot arm to manipulate the objects requested by the disabled or
aged person, such as food, medicine, or tableware. Lastly, the user can direct the mobile robot to move to
the place where the disabled or aged person is. Of course, if the remote user uses the task-level
control command, the developed system can perform a service task autonomously.
Fig. 11. On-line images of a remote user interacting with the mobile robot to perform a local service task.
In order to enable a remote user to get a better understanding of the local environment, it is
necessary to receive and transmit media streams in real time to improve interaction in the
network-distributed service robotic system. A network monitoring system using QuickCam
Orbit cameras was developed to visualize the robotic system at work and the state of the aged
and disabled, and to enable the remote user to know what is going on in the local environment.
8. Conclusion
We used CORBA as the communication architecture to implement networking connections
between a client and the developed hand-eye robotic system. CORBA uses an Object
Request Broker (ORB) as the middleware that establishes client/server relationships
between objects. The client can invoke a method on a server object across a network
transparently, without knowing where the application servers are located or what
programming language and operating system are used. This lets the system overcome the
shortcomings of other Internet robotic systems. In addition, the components of the
CORBA-based system can be implemented and run independently to implement the
application, and can also be integrated easily into new systems. This facilitates network-
distributed software sharing and reduces the cost of writing and maintaining software.
The other components of the system can work normally even if there are problems
with some of them. Thus, using CORBA to implement an Internet robotic system is very effective,
flexible, and robust.
Using CORBA as the communication architecture, we also developed a hardware base,
including a robot arm and an omnidirectional mobile robot system, and CORBA application servers,
including a robot arm control server, a live image feedback server, a mobile robot control server,
and an iGPS server. With the user controlling a mobile robot to cooperate with a robot arm through the
Web-based user interface, the developed system can provide some basic services to support
the aged or disabled.
A network monitoring system based on RTM using QuickCam Orbit cameras was developed
to transmit media streams in real time to improve interaction in the network-distributed human-
assistance robotic system. It enables remote users to get a better understanding of the
state of the robots at work and the state of the aged or disabled.
Some experiments in which the remote user accesses the developed system to provide a service
task to support the aged and the elderly have been done, and the experimental results
verified the effectiveness of the developed system. For future work, improvements in the
navigation of the mobile robot and the recognition of obstacles using RFID technology and a
stereo camera are a main topic. In addition, we will develop the other additional functions of
the system as RT components in order to improve the flexibility of the system.
9. References
Barney Dalton and Ken Taylor (1998). A Framework for Internet Robotics, Proceedings of
1998 IEEE/RSJ Conference on Intelligent Robots and Systems, Workshop on Web Robots, pp.
15-23, Victoria, B.C., Canada, October.
Hada, Y., Takase, K, Ohagaki, K. et al. (2004), Delivery Service Robot Using Distributed
Acquisition, Actuators and Intelligence, Proceeding of 2004 IEEE/RSJ International
Conference on Intelligent Robots and System, pp. 2997-3002.
Java remote method invocation:
http://java.sun.com/products/jdk/rmi/index.html.
Ken Goldberg, Steven Gentner, Carl Sutter, et al. (2000), The Mercury Project: A Feasibility Study
for Internet Robotics, IEEE Robotics and Automation Magazine, 7(1), pp. 35-40.
Maja Matijašević, Kimon P. Valavanis, Denis Gračanin et al. (1999), Application of a Multi-
User Distributed Virtual Environment Framework to Mobile Robot Teleoperation
over the Internet, Machine Intelligence and Robotic Control, 1(1), pp. 11-26.
Matthew R. Stein (2000). Interactive Internet Artistry, IEEE Robotics and Automation
Magazine, 7(1), 28-32.
Matthew Stein (1998), Painting on the Web, The Puma Paint Project, Proceedings of 1998
IEEE/RSJ, pp.37-43, Canada, October 12-17.
MOM, Message-orientated middleware:
http://sims.berkeley.edu/courses/is206/f97/GroupB/mom/.
OMG, Object Management Group,
http://www.omg.org.
OOC, Object Oriented Concepts, Inc.,
http://www.omg.org.
Patrick Saucy, Francesco Mondada (2000), Open Access to a Mobile Robot on the Internet,
IEEE Robotics and Automation Magazine, 7(1), 41-47.
Patrick Saucy (1998), KhepOnTheWeb: One Year of Access to a Mobile Robot on the Internet,
Proceedings of IEEE/RSJ Conference on Intelligent Robots and Systems; Workshop on Web
Robots, pp. 23-31, Victoria, B.C. Canada, October.
Reid Simmons, Joaquin L. Fernandez, Richard Goodwin et al. (2000), Lessons Learned from
Xavier, IEEE Robotics and Automation Magazine, 7(2), pp. 33-39.
Reid Simmons: Xavier (1998), An Autonomous Mobile Robot on the Web, Proceedings of
IEEE/RSJ Conference on Intelligent Robots and Systems; Workshop on Web Robots, pp.
43-49, Victoria, B.C. Canada.
Roland Siegwart, Cedric Wannaz, Patrick Garcia et al. (1998), Guiding Mobile Robots through
the Web, Proceedings of 1998 IEEE/RSJ Conference on Intelligent Robots and Systems,
Workshop on Web Robots, Victoria, B.C. Canada, October 12-17 1998, pp. 1-7.
RTM
http://www.is.aist.go.jp/rt/.
Seán Baker (1997), CORBA Distributed Objects Using Orbix, ACM Press, UK.
SJT Condie (1999), Distributed computing, tomorrow’s panacea-an introduction to current
technology, BT Technol. J, 17(2), pp.13-23.
Songmin Jia and Kunikatsu Takase (2001), Internet-based robotic system using CORBA as
communication architecture, Journal of Intelligent and Robotic System, 34(2), pp. 121-134.
Songmin Jia, Yoshiro Hada, Gang Ye and Kunikatsu Takase (2002), Network Distributed Telecare
Robotic Systems Using CORBA as Communication Architecture, Proceeding of 2002 IEEE
International Conference on Robotics and Automation, pp. 2002-2007, USA
Songmin Jia, Weiguo Lin, Kaizhong Wang, Takafumi Abe, Erzhe Shang, and Kunikatsu
Takase (2004), Improvements in Developed Human-Assisting Robotic System,
Proceedings of International Conference on Intelligent Mechatronics and Automation,
Invited paper, pp. 511-516.
WebPioneer Website, ActivMedia, Inc. (1998).
http://webpion.mobilerobots.com.
12

Path Coordination Planning and Control in Robotic Material Handling and Processing

1. Abstract
This chapter presents a unified approach to coordination planning and control for robotic
position and orientation trajectories in Cartesian space and its applications in robotic
material handling and processing. The unified treatment of the end-effector positions and
orientations is based on the robot pose ruled surface concept and used in trajectory
interpolations. The focus of this chapter is on the determination and control of the
instantaneous change laws of position and orientation, i.e., the generation and control of
trajectories with good kinematics and dynamics performances along such trajectories. The
coordination planning and control is implemented through controlling the motion laws of
two end points of the orientation vector and calculating the coordinates of instantaneous
corresponding points. The simulation and experiment in robotic surface profiling/finishing
processes are presented to verify the feasibility of the proposed approach and demonstrate
the capabilities of planning and control models.
Keywords: Robot pose ruled surface, Unified approach, Trajectory planning and control,
Off-line programming, Robotic polishing
2. Introduction
Motion coordination planning and control play a crucial role in robot applications to
Cartesian task operations with the consideration of kinematics and dynamics constraints. To
effectively carry out the design and application of robots, an efficient algorithm for motion
planning needs to be developed by generating, simulating and evaluating, and optimizing
robot trajectories in a virtual CAD off-line programming environment (Zha 1993, Zha et al
1994, Zha and Du 2001). Many joint-space and Cartesian-space based planning schemes
have been worked out by proposing new algorithms or improving existing algorithms to
describe and plan robot motions and trajectories (Thompson and Patel 1987; Bobrow et al
1985; Shin and McKay 1985; Pfeiffer and Johanni 1987; Yang and Jin 1988; Slotine and Yang
1989; Shiller and Lu 1990; Patel and Lin 1995; Konjovic and Vukobratovic 1994; Lee 1995;
Seereeram and Wen 1995). Some criteria are identified for comparing and evaluating both
trajectories and trajectory planners of robot (Thompson and Patel 1987):
(1) Trajectories should be effective both to compute and to execute.
(2) Trajectories should be predictable and accurate, and they should not degenerate
unacceptably near a singularity.
(3) The position, velocity, and acceleration, and even the rate of change of acceleration,
called the jerk, should be smooth functions of time.
(4) It should be possible for the trajectory planner to determine efficiently whether a
proposed trajectory requires the robot end effector to move to a point outside its
workspace or to move with a velocity or acceleration that is physically impossible.
Both of these are controlled with a good model.
The coordination planning and control for robot motions in Cartesian space has long been
recognized as a most interesting but difficult research field not only for motion control but
also for advanced virtual/real design and planning. Some of the existing methods are, for
the most part, based on kinematics considerations and geometric flavor. They may render to
have limited capabilities for handling some cases where the limits of maximum acceleration
and maximum deceleration along the solution curve are no longer met, or where singular
points or critical points of robot configuration exist. Another problem with existing methods
to plan trajectories is that the unmanageable complexity could occur both in computation
cost and storage.
This chapter aims to present a unified approach to coordination planning and control of
pose trajectories for robot end effector in Cartesian space. The organization of the chapter is
as follows. Section 2 introduces some issues related to robot task analysis and control,
including the evaluation of robot kinematics performance; Section 3 proposes a new
approach to robot pose trajectory planning by generating robot pose ruled surface. Section 4
deals with the optimization solutions to the problem of pose trajectory planning. Section 5
provides simulation of the proposed approach in virtual environment and examples. Section
6 demonstrates the industrial application for robotic polishing. Section 7 summarizes the
chapter with concluding remarks.
X = [P, \Phi]^T \qquad (1)
Path Coordination Planning and Control in Robotic Material Handling and Processing 233
\begin{cases} P(t) = r_1(t) \\ Q(t) = r_2(t) \\ \Phi(t) = r_2(t) - r_1(t) \end{cases} \qquad (2)
where, t ∈[t1,t2], t1 and t2 are the start and end time of motion respectively. Using a standard
mathematical equation to express the robot pose ruled surface, Eq.(2) can be rewritten as
(Zha and Du 2001)
r(t, \lambda) = r_1(t) + \lambda [r_2(t) - r_1(t)] = r_1(t) + \lambda \Phi(t) \qquad (3)
where, λ ∈ [0,1] .
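Eq. (3) is directly computable. The following sketch samples the pose ruled surface on a (t, λ) grid; the trajectories r1 and r2 are arbitrary illustrative choices, not taken from the chapter:

```python
import numpy as np

def r1(t):
    """Position trajectory P(t) (illustrative choice, not from the chapter)."""
    t = np.asarray(t, dtype=float)
    return np.stack([t, np.sin(np.pi * t), np.zeros_like(t)], axis=-1)

def r2(t):
    """End point Q(t) of the orientation vector (illustrative)."""
    t = np.asarray(t, dtype=float)
    offset = np.stack([np.zeros_like(t), np.zeros_like(t),
                       np.ones_like(t)], axis=-1)
    return r1(t) + offset

def pose_ruled_surface(t, lam):
    """Eq. (3): r(t, lambda) = r1(t) + lambda * Phi(t), with Phi = r2 - r1."""
    lam = np.asarray(lam, dtype=float)
    phi = r2(t) - r1(t)                      # orientation vector Phi(t)
    return r1(t) + lam[..., None] * phi

t_grid, lam_grid = np.meshgrid(np.linspace(0.0, 1.0, 50),
                               np.linspace(0.0, 1.0, 10))
surface = pose_ruled_surface(t_grid, lam_grid)   # shape (10, 50, 3)
```

At λ = 0 the surface coincides with the position trajectory P(t), and at λ = 1 with Q(t), matching the boundary curves in Eq. (2).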
(Figure: robot pose ruled surface S spanned by the position trajectory from Ps to Pe, the orientation end-point trajectory from Qs to Qe, and the orientation vector Φ, with the base frame o-xyz and the normal n and approach a vectors.)
\dot{X} = [\dot{P}, \dot{\Phi}]^T = J \dot{q} = [\dot{r}_1(t), \dot{\Phi}(t)]^T \qquad (4)

\ddot{X} = [\ddot{P}, \ddot{\Phi}]^T = J \ddot{q} + \dot{J} \dot{q} = [\ddot{r}_1(t), \ddot{\Phi}(t)]^T \qquad (5)

where \dot{P} and \ddot{P} are the Cartesian linear velocity and acceleration of the robot position; \dot{q} and \ddot{q} are the joint velocity and acceleration; \dot{\Phi} and \ddot{\Phi} represent the orientation velocity and acceleration; and J is a Jacobian matrix. Based on the theory of velocity addition, \dot{\Phi} can be written as

\dot{\Phi} = \sum_{j=1}^{n} \dot{q}_j \delta_j e_j \qquad (6)

where e_j = \left( \prod_{i=0}^{j} E_{\hat{e}_i \delta_i q_i} \right) \hat{e}_j is a unit vector of the j-th joint axis in the base coordinate system; \hat{e}_j is a unit vector of the j-th joint axis in the dynamic coordinate system; \delta_j has the same meaning as \delta_i mentioned in (Makino et al 1982). Hence, the Jacobian matrix J(q) can be expressed as
J = \begin{bmatrix} J_v \\ J_w \end{bmatrix} = \begin{bmatrix} \dfrac{\partial P}{\partial q_1} & \dfrac{\partial P}{\partial q_2} & \cdots & \dfrac{\partial P}{\partial q_n} \\ \delta_1 e_1 & \delta_2 e_2 & \cdots & \delta_n e_n \end{bmatrix} \qquad (7)

where J_v and J_w are the velocity Jacobian and the angular Jacobian, corresponding to V and \dot{\Phi} respectively.
motion mode, an efficient way is to minimize the joint motion, i.e., \min \left( \sum_{i=1}^{n} | q_i(t + \Delta t) - q_i(t) |^2 \right), or the sum weighted distance or path (Section 6). However, it is not sufficient to consider
only the position and orientation of manipulators for task planning. In fact, in many cases, it
is required to satisfy certain constraints for not only the position and orientation but also the
velocity and even the acceleration, i.e., kinematics and dynamics and control constraints.
3.3.1 Manipulability
The most important and commonly used concept is the robot manipulability, a
measure of the manipulating ability of robotic mechanisms in positioning and orienting the
end effector, which is beneficial for the design and control of robots and for task planning. The
manipulability measure was first defined by Yoshikawa (1985) as

W = \sqrt{\det(J J^T)} \qquad (8)
Since W equals zero at singular positions, keeping W ≠ 0 and increasing it enlarges the dexterity
of the system and helps avoid "avoidable" singularities. For a non-redundant manipulator,
i.e., m = n, the measure reduces to W = |det(J)|. The manipulability constraint is only used to
detect whether a manipulator is at or near a singular pose when its end-effector reaches each
of the task points, and thus it is a local manipulability index. For convenience, the condition index
(CI) can also be employed to formulate the manipulability constraint (Angeles 1992):
σmin /σmax ≥ ε (9)
where ε is the user-defined lower bound of CI for singularity avoidance, which usually
takes a small value, e.g., 0.001.
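Both the manipulability measure of Eq. (8) and the condition index of Eq. (9) reduce to singular-value computations on the Jacobian. A minimal numpy sketch, with an illustrative planar two-link Jacobian (the pose below is an example, not from the chapter):

```python
import numpy as np

def manipulability(J):
    """Yoshikawa's measure W = sqrt(det(J J^T)); equals |det J| when square."""
    return np.sqrt(np.linalg.det(J @ J.T))

def condition_index(J):
    """CI = sigma_min / sigma_max, in [0, 1]; values near 0 mean near-singular."""
    sigma = np.linalg.svd(J, compute_uv=False)   # singular values, descending
    return sigma[-1] / sigma[0]

# Planar 2-link Jacobian at q = (0, pi/2), unit link lengths (illustrative)
l1 = l2 = 1.0
q1, q2 = 0.0, np.pi / 2
J = np.array([[-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
              [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)]])

W = manipulability(J)                 # positive away from singularities
ok = condition_index(J) >= 0.001      # Eq. (9) with epsilon = 0.001
```

At q2 = 0 or q2 = π the same Jacobian becomes rank-deficient, W drops to 0, and the Eq. (9) test fails, which is exactly the local singularity check described above.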
where dA is the differential of area of ruled surface. From Eq (3), the area of robot pose
ruled surface A can be further written as
A = \int_0^1 d\lambda \int_{t_1}^{t_2} \left| \dot{r}_1(t) + \lambda \dot{\Phi}(t) \right| \left| \Phi(t) \right| dt \qquad (11)

With respect to time t, the first-order and the second-order change ratios of the pose ruled surface, dA/dt and d^2A/dt^2, can be obtained respectively as

\frac{dA}{dt} = \int_0^1 \left| \dot{r}_1(t) + \lambda \dot{\Phi}(t) \right| \left| \Phi(t) \right| d\lambda \qquad (12)

\frac{d^2 A}{dt^2} = \frac{d}{dt} \left( \int_0^1 \left| \dot{r}_1(t) + \lambda \dot{\Phi}(t) \right| \left| \Phi(t) \right| d\lambda \right) \qquad (13)
From Eqs. (11)-(13), both the area of the robot pose ruled surface and its change ratios are functions
of the pose trajectory equations, r_1(t) and r_2(t), and the velocity equations, \dot{r}_1(t) and \dot{r}_2(t).
When a robot operates at a certain speed, the area of the robot pose ruled surface and its change
ratios can indicate the kinematics and dynamics performances of the robot manipulator.
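For general trajectories the integral in Eq. (11) has no closed form and must be evaluated numerically. A trapezoidal-quadrature sketch, checked against a degenerate case (straight unit path, constant unit orientation vector) whose area is exactly 1:

```python
import numpy as np

def area_of_pose_ruled_surface(r1, phi, t1=0.0, t2=1.0, nt=400, nl=40):
    """Trapezoidal quadrature of Eq. (11):
    A = int_0^1 dlam int_{t1}^{t2} |r1'(t) + lam*phi'(t)| |phi(t)| dt."""
    t = np.linspace(t1, t2, nt)
    lam = np.linspace(0.0, 1.0, nl)
    dt, dlam = t[1] - t[0], lam[1] - lam[0]
    r1_dot = np.gradient(r1(t), dt, axis=0)      # finite-difference r1'(t)
    phi_dot = np.gradient(phi(t), dt, axis=0)    # finite-difference Phi'(t)
    phi_norm = np.linalg.norm(phi(t), axis=-1)   # |Phi(t)|
    # integrand on the (lambda, t) grid
    vel = r1_dot[None, :, :] + lam[:, None, None] * phi_dot[None, :, :]
    integrand = np.linalg.norm(vel, axis=-1) * phi_norm[None, :]
    inner = ((integrand[:, 1:] + integrand[:, :-1]) / 2.0).sum(axis=1) * dt
    return float(((inner[1:] + inner[:-1]) / 2.0).sum() * dlam)

# Degenerate check: unit straight path, constant unit orientation vector -> A = 1
r1_line = lambda t: np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=-1)
phi_unit = lambda t: np.stack([np.zeros_like(t), np.zeros_like(t),
                               np.ones_like(t)], axis=-1)
A = area_of_pose_ruled_surface(r1_line, phi_unit)
```

The same routine, applied to candidate trajectory coefficients, provides the area term used in the optimization objectives later in the chapter.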
The joint-space dynamics of the robot can be written as

\tau = I(q)\ddot{q} + V\dot{q} + f(q, \dot{q}) + g(q) \qquad (14)

where τ is the joint driving force; I(q) is the n×n inertia matrix; V is the n×n damping matrix;
f(q, \dot{q}) is the n-dimensional nonlinear function of centrifugal and Coriolis terms; and g(q) is the
gravitational term. Furthermore, the corresponding dynamics equations in the Cartesian
space can be derived as follows:

F = V(q)\ddot{X} + U(q, \dot{q}) + P(q) \qquad (15)

where F is the generalized operational force in Cartesian space; V(q) is the m×m kinetic matrix,
i.e., the inertia matrix in Cartesian space; U(q, \dot{q}) is the nonlinear function of centrifugal and
Coriolis terms in Cartesian space; and P(q) is the gravitational term in Cartesian space.
As discussed above, the problem of planning robot pose trajectories is equivalent to that
of generating the robot pose ruled surface. It means that the motion locus of the
configuration can be planned for the robot end effector in task space by generating the
robot pose ruled surface. The change laws of the robot pose can be determined by fitting or
interpolating key or knot pose points obtained from artificial teaching-by-showing or
measurement, or even by the latest technologies (e.g., a data glove) in a virtual environment.
Thus, the corresponding points of the entire motion path can be calculated by
interpolation.
(e) The robot motion cannot exceed the maximum joint accelerations and the maximum
joint driving forces/torques, i.e., | \ddot{q}_i | \le \ddot{q}_i^{\max}, | \tau_i | \le \tau_i^{\max} (i = 1, 2, 3, \ldots, n).
(f) The robot must have better kinematics performances, e.g., maximum velocity-space
or shortest motion path, or minimum area of pose ruled surface, or minimum
change ratios of area of ruled surface.
There are many methods for fitting or interpolating key pose points to generate robot
motion trajectories. These include segment interpolation (e.g. straight-line segment and
transition segment), polynomial curves, cubic curves (e.g., Hermite cubic splines, Bezier
curve, B-spline, NURBS), and so on (Boehm 1985, Patel and Lin 1988, Dierckx 1993, Ge and
Ravani 1994, Ge and Kang 1995, Gerald Farin 1991). Suppose that PsPe and QsQe are
formulated by one of the methods mentioned above, as shown in Figure 2, and they are
described as follows:
\begin{cases} P:\; r_1(t) = x_1(t)\, i + y_1(t)\, j + z_1(t)\, k \\ Q:\; r_2(t) = x_2(t)\, i + y_2(t)\, j + z_2(t)\, k \\ \Phi:\; \Phi(t) = r_2(t) - r_1(t) \end{cases} \qquad (16)
where t ∈ [t1, t2]. This means that both the position and orientation of the robot vary from Ps to Pe
and from Qs to Qe along the curves, respectively. If r_1(t) and r_2(t), or r_1(t) and Φ(t), are
determined, the trajectory planning process is accomplished, and the coordinates of
corresponding points on the pose trajectories can be calculated.
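Componentwise fitting of key pose points, as in Eq. (16), can be sketched with numpy polynomials; the knot points below are illustrative placeholders, not values from the chapter:

```python
import numpy as np

# Illustrative key pose points: times, positions P, orientation end points Q
t_knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
P_knots = np.array([[0.0, 0.0, 0.0], [0.2, 0.3, 0.0], [0.5, 0.4, 0.0],
                    [0.8, 0.3, 0.0], [1.0, 0.0, 0.0]])
Q_knots = P_knots + np.array([0.0, 0.0, 1.0])    # constant offset, illustrative

def fit_poly_trajectory(t, points, degree=4):
    """Fit each Cartesian component with a polynomial, as in Eq. (16)/(17)."""
    return [np.polyfit(t, points[:, k], degree) for k in range(3)]

coeffs_P = fit_poly_trajectory(t_knots, P_knots)
coeffs_Q = fit_poly_trajectory(t_knots, Q_knots)

def evaluate(coeffs, t):
    """Evaluate a fitted trajectory r(t) as an (..., 3) array."""
    return np.stack([np.polyval(c, t) for c in coeffs], axis=-1)

t = np.linspace(0.0, 1.0, 100)
P, Q = evaluate(coeffs_P, t), evaluate(coeffs_Q, t)
Phi = Q - P                       # orientation vector Phi(t) = r2(t) - r1(t)
```

With five knots and degree 4 the fit interpolates the key points exactly; lower degrees give a least-squares approximation instead, which is sometimes preferable for noisy measured knots.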
(Figure 2: pose trajectory P(t) from P(0) = Ps to P(1) = Pe, with the orientation vector Φ(t) sweeping the ruled surface S(t) from Φ(0) to Φ(1), shown in the x-y frame.)
\Phi:\; \Phi(t) = \left[ \sum_{i=0}^{k} (a_{2i} - a_{1i}) t^i,\; \sum_{i=0}^{k} (b_{2i} - b_{1i}) t^i,\; \sum_{i=0}^{k} (c_{2i} - c_{1i}) t^i \right]^T
can be obtained through the corresponding fitting or interpolating algorithms, and then the
pose trajectories can therefore be obtained.
where Ps and Pe are the starting point and end point of the position trajectory, respectively; Qs
and Qe are the starting point and end point of the orientation trajectory, respectively; dA is a
differential of the area of the ruled surface; and PI is the performance index, e.g., W, the manipulability measure.
Based on Eq. (11), Eq. (19) can be rewritten as

F = \int_0^1 d\lambda \int_{t_1}^{t_2} \frac{1}{PI} \left| \dot{P}[q(t)] + \lambda \dot{\Phi}[q(t)] \right| \left| \Phi(t) \right| dt \qquad (20)
After all the optimization variables or coefficients are determined, the optimal pose
trajectories are obtained. The models discussed above are suitable for the generation of not
only the position but also orientation trajectory of robot.
In some cases, a simplified model can be used. For example, for position trajectory
planning, λ = 0 and the area differential dA of the robot ruled surface becomes the arc length differential
dS; the objective function then becomes the shortest path, which can be specified and simplified as

F = \int_{P_s}^{P_e} \frac{1}{PI} \, dS = \int_{t_1}^{t_2} \frac{1}{PI} \left| \dot{P}(t) \right| dt \qquad (21)
The objective function can also be the sum weighted distance as described in Section 6. The
optimal trajectories can be found with a commercial optimization software package such as the
MATLAB Optimization Toolbox (1998), in which the two most typical optimization methods,
the grid method and the stochastic constraint method, were employed. Details about the
simulation and optimization will be discussed in Sections 5 and 6.
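The grid method is simple to sketch. The toy example below minimizes the shortest-path objective of Eq. (21) with PI = 1 over a single free coefficient of a quadratic trajectory; the full coefficient search works the same way over a higher-dimensional grid (this is an illustration, not the chapter's actual search):

```python
import numpy as np

def path_length(c, n=500):
    """Eq. (21) with PI = 1: F = int_0^1 |P'(t)| dt for the quadratic arc
    P(t) = (t, c*t*(1-t)) from (0, 0) to (1, 0)."""
    t = np.linspace(0.0, 1.0, n)
    x_dot = np.ones_like(t)                  # dx/dt
    y_dot = c * (1.0 - 2.0 * t)              # dy/dt
    speed = np.hypot(x_dot, y_dot)           # |P'(t)|
    return ((speed[1:] + speed[:-1]) / 2.0).sum() * (t[1] - t[0])

# Grid method: evaluate the objective over a grid of candidate coefficients
grid = np.linspace(-2.0, 2.0, 401)
best_c = grid[np.argmin([path_length(c) for c in grid])]
```

The straight line (c near 0) wins, since any bulge only lengthens the path; with a nontrivial performance index PI in the integrand, the same grid sweep trades path length against dexterity.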
where l_1 = l_1(t), l_2 = l_2(t). This means that the robot pose trajectories can be controlled by the path
variables l_1 and l_2, which obey specified motion laws. The conditions to be satisfied for the
coordination control of pose trajectories can be derived as follows:

l_1 = l_1(t) = \int_{t_1}^{t} \Delta l_1 \, dt, \quad l_2 = l_2(t) = \int_{t_1}^{t} \Delta l_2 \, dt, \quad \begin{cases} p_s = 0 \\ p_e = \int_{t_1}^{t_2} \Delta l_1 \, dt \end{cases}, \quad \begin{cases} q_s = 0 \\ q_e = \int_{t_1}^{t_2} \Delta l_2 \, dt \end{cases} \qquad (24)
As discussed above, the trajectory coordination and the calculation of corresponding pose
point coordinates depend on and are controlled by the motion laws of the path parameters,
such as uniform motion, constant acceleration, and uniform and constant deceleration motion.
Two typical motion laws of path parameters were discussed in (Zha and Chen 2004).
According to Eq. (22), the corresponding pose coordinates can be calculated for each pair of
possible curves, such as line-line, line-arc, arc-line, arc-arc, and high-order polynomials.
Consequently, the pose and its velocities and accelerations at any time can be determined or
controlled. The problem of determining the control laws of pose in Cartesian coordinate space
is thus solved.
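One common motion law, the constant-acceleration / uniform / constant-deceleration (trapezoidal) profile, can be sketched as a path-parameter function l(t). The phase structure below is a standard formulation and an assumption, not taken from (Zha and Chen 2004):

```python
def trapezoidal_path_parameter(t, total_length, v_max, a_max):
    """Path parameter l(t) under an accelerate / cruise / decelerate motion law.
    Assumes the path is long enough for a cruise phase to exist
    (total_length >= v_max**2 / a_max)."""
    t_acc = v_max / a_max                      # duration of each ramp phase
    l_acc = 0.5 * a_max * t_acc**2             # distance covered while ramping
    t_cruise = (total_length - 2.0 * l_acc) / v_max
    t_total = 2.0 * t_acc + t_cruise
    if t <= 0.0:
        return 0.0
    if t < t_acc:                              # constant acceleration
        return 0.5 * a_max * t**2
    if t < t_acc + t_cruise:                   # uniform motion
        return l_acc + v_max * (t - t_acc)
    if t < t_total:                            # constant deceleration
        dt = t_total - t
        return total_length - 0.5 * a_max * dt**2
    return total_length
```

Feeding l1(t) and l2(t) generated this way into Eq. (22) yields the instantaneous corresponding points, so the pose, velocity, and acceleration along the trajectory follow the chosen motion law.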
(Flowchart: determination of Φ and S of the knot points; fitting or interpolating key path points by a spline method; control simulation; an "optimized?" decision loop leading to the robot control parameters.)
J = \begin{bmatrix} -l_1 s_1 - l_2 s_{12} & -l_2 s_{12} & 0 \\ l_1 c_1 + l_2 c_{12} & l_2 c_{12} & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (27)
(Figure: schematic of the planar manipulator with links l1, l2, l3 and joint angle θ, showing the position trajectory Ps-P(t)-Pe, the orientation end-point trajectory Qs-Q(t)-Qe, and the orientation vector Φ(t), together with a 3D plot of the resulting pose ruled surface.)
The pose trajectories PsPe and QsQe for the manipulator are supposed to be 6th-order space polynomial curves, i.e.,

r_1(t) = \left( \sum_{i=0}^{6} a_{1i} t^i,\; \sum_{i=0}^{6} b_{1i} t^i,\; 0 \right)^T \qquad (29)

r_2(t) = \left( \sum_{i=0}^{6} a_{1i} t^i,\; \sum_{i=0}^{6} b_{1i} t^i,\; -\pi/2 \right)^T \qquad (30)
where t ∈ [0, 1]; t = 0 and t = 1 correspond to Ps, Qs and Pe, Qe respectively. Thus, the optimization
objective or fitness function can be chosen as

F = \frac{\pi}{2 l_1 l_2} \int_0^1 \frac{1}{s_2} \sqrt{ \left( \sum_{i=1}^{6} i a_{1i} t^{i-1} \right)^2 + \left( \sum_{i=1}^{6} i b_{1i} t^{i-1} \right)^2 } \, dt \qquad (31)
Based on the fact that t = 0 and t = 1 correspond to Ps, Qs and Pe, Qe respectively, the
optimization constraints are expressed as

a_{10} = -0.400, \quad b_{10} = 0.200, \quad \sum_{i=0}^{6} a_{1i} = 0.400, \quad \sum_{i=0}^{6} b_{1i} = 0.200
The second example is for the PUMA 560 robot used for arc welding (Zha 2002). Assume that the
end effector of the PUMA 560 robot is required to move along a trajectory passing through the
configuration points in task space. These key pose points are supposed to be fitted by 6th-order
space polynomial curves. Using the proposed model and approach, the optimal polynomial
coefficients for the trajectory equations satisfying the kinematics and dynamics constraints, with
the minimum path or pose ruled surface area and maximum flexibility, can be obtained. Table 2
lists the optimized coefficients for the trajectories. Figures 5-7 demonstrate the simulation and
animation results for the PUMA 560 robot moving along the optimized trajectories.
i 0 1 2 3 4 5 6
*a1i 0.8000 0.0000 -3.7909 48.2443 -167.4964 198.5294 -79.3606
*b1i 0.4000 0.0000 16.8877 -54.3584 39.1373 17.4540 -19.8360
*c1i 0.2000 0.0000 -15.4005 66.5505 -103.0331 63.3715 -14.2356
*a2i 0.6000 0.0000 17.5912 -58.9944 28.1370 43.7332 -32.8612
*b2i 0.4000 0.0000 15.0318 -28.7452 -49.2458 121.6272 -60.6053
*c2i 0.4000 0.0000 -25.8490 101.9592 -126.8347 48.5275 0.8091
Table 2. Optimal coefficients of trajectory equations for PUMA 560 robot.
Fig. 5. (a) Robot pose ruled surface; (b) area of the pose ruled surface and its change ratios (A, Av, Aa) over time.
Fig. 6. Simulation results along the optimized trajectories: manipulability W, objective function (Faw, Favw, Faaw), joint angles q1 to q6, and dynamic joint torques τd1 to τd6 versus time.
Fig. 7. Motion animations for PUMA 560 robot moving along the optimized trajectories.
(Figure: template of the sectional profile created from design data using cubic spline interpolation, together with the measurement points near the trailing edge and the actual sectional profile obtained by OPF.)
rotation values for the template, so that after the transformation, the sum distance from the
template to the measure points at the given height is minimized. In other words, the optimal
fitting problem is a multi-dimensional minimization problem, and the sum distance is used as the
performance index function in the minimization process. We have developed and implemented
the Search-Extend-Focus (SEF) minimization algorithm (Chen, X.Q. et al., 2002). SEF finds the
optimal point at which the index function is minimum.
By varying the definition of the performance index function (i.e. the sum distance), different
optimal fitting results can be obtained. Here, the following definition of the sum distance is used:
d_{sum} = w_h h^2 + \sum_i w_i d_i^2 \qquad (32)
where d_sum is the sum distance defined, d_i is the distance from the i-th measure point to the
profile, h is the distance from the given height to the profile, and w_i and w_h are the respective
weights in the sum distance. When different weights w_i and w_h are chosen for different measure
points, different priorities are given to them. By choosing high weights for groups of measure
points in turn, multiple optimal fittings can be carried out for one sectional optimal fitting.
The weights used in the optimal profile fitting are given in Table 3.
                   Weights for Convex Side        Weights for Concave Side       Weight for
                   Measure Points                 Measure Points                 Height
                   1     2     3     4     5      6     7     8     9     10
Convex Fitting     10.0  2.0   2.0   2.0   2.0    1.0   1.0   1.0   1.0   1.0   10.0
Concave Fitting    1.0   1.0   1.0   1.0   1.0    2.0   2.0   2.0   2.0   10.0  10.0
Table 3. Weights for Measure Points and Height in Optimal Fitting.
Note: Measure Points 1 and 10 are on the trailing edge of the profile, without brazing
material on the surface; all other measure points are on the part of the profile with brazing
material on the surface.
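Eq. (32) is straightforward to evaluate once the per-point distances are available. The sketch below uses the convex-fitting weights of Table 3; the distances themselves are illustrative placeholders, not measured values:

```python
import numpy as np

def sum_distance(d, h, w, w_h):
    """Eq. (32): d_sum = w_h * h^2 + sum_i w_i * d_i^2."""
    d, w = np.asarray(d, dtype=float), np.asarray(w, dtype=float)
    return w_h * h**2 + float(np.sum(w * d**2))

# Convex-fitting weights from Table 3 (points 1-5 convex side, 6-10 concave)
w_convex = [10.0, 2.0, 2.0, 2.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0]
w_height = 10.0

d_i = np.full(10, 0.1)       # illustrative template-to-point distances
h = 0.05                     # illustrative height deviation
d_sum = sum_distance(d_i, h, w_convex, w_height)
```

An SEF-style minimizer then varies the template's translation and rotation and picks the pose where this index is smallest; swapping in the concave-fitting weight row re-prioritizes the other side of the profile.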
Combining the multiple optimal fitting results, the following fitting objectives are achieved:
1. All portions of the profile are optimally fitted to the measure points.
2. The whole profile is smooth.
3. There is no distortion of the profile shape in each portion. Minimal distortion exists only at
the adjacent areas of two portions of the profile, for smooth transition.
parameters, such as the polishing wheel size and the global locations of all tool heads, have been
built into the model. With this feature, intuitive re-calibration of tooling can be done rapidly
and accurately. Fig. 12 shows one path point computed by ARP.
With the Euler angle rotation matrix defined above, the orientation of the robot end-effector
can be derived with respect to the reference global coordinate system. Adding the
translation of the robot end-effector then gives the exact location of the robot end-effector (path point)
after moving the robot coordinates (X, Y, Z, A, B, C) in the global coordinate system.
(Figure: layout of the coordinate systems used for path generation, with the robot between the left and right tool heads CC, EE, and GG. Legend: A: global coordinate system; B: robot hand coordinate system; C: tool coordinate system; D: part coordinate system.)
After the part’s 3D profile is obtained by using the optimal template fitting method, the
part coordinate system is constructed on the 3D profile for each polishing point. The part
and tool coordinate systems are constructed such that, for each polishing point, the part is at
the desired polishing position when the two coordinate systems are overlapped. Based on this,
we can derive the position of the robot end-effector’s coordinate system B by coordinate system
transformation and, furthermore, the robot coordinates (X, Y, Z, A, B, C).
The robot path generation is the inverse process of coordinate transformation by
translation and Euler angle rotation. The following is the procedure to compute the robot
coordinates of a polishing point:
Step 1. Compute the polishing points in the robot end-effector coordinate system.
Step 2. Construct a part coordinate system for a polishing point.
Step 3. Assuming that the part coordinate system is overlapped with the tool coordinate
system, compute the robot end-effector coordinate system by using the
coordinate system transformation.
Step 4. From the position of the robot coordinate system in the global coordinate system,
derive the robot coordinates (X, Y, Z, A, B, C).
Step 5. Repeat Step 2 to Step 4 to obtain a series of robot coordinates (X, Y, Z, A, B, C).
Through the above steps, a series of robot coordinates (X, Y, Z, A, B, C) is developed for each
desired polishing position, which forms the robot polishing path.
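Steps 1 to 4 amount to composing homogeneous transforms and reading the translation and Euler angles back out. The sketch below assumes a Z-Y-X Euler convention for (A, B, C); the chapter does not state which convention the controller uses, so that is an assumption:

```python
import numpy as np

def euler_zyx_to_matrix(a, b, c):
    """Rotation for Euler angles A (about z), B (about y), C (about x)."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cc, -sc], [0, sc, cc]])
    return Rz @ Ry @ Rx

def matrix_to_euler_zyx(R):
    """Inverse of the composition above (non-degenerate case, |B| < pi/2)."""
    b = np.arcsin(-R[2, 0])
    a = np.arctan2(R[1, 0], R[0, 0])
    c = np.arctan2(R[2, 1], R[2, 2])
    return a, b, c

def robot_coordinates(T):
    """Step 4: read (X, Y, Z, A, B, C) from a 4x4 homogeneous transform."""
    x, y, z = T[:3, 3]
    a, b, c = matrix_to_euler_zyx(T[:3, :3])
    return x, y, z, a, b, c

# Example: the end-effector pose produced by Step 3's frame overlap
T = np.eye(4)
T[:3, :3] = euler_zyx_to_matrix(0.3, -0.2, 0.1)
T[:3, 3] = [0.5, 0.1, 0.8]
X, Y, Z, A, B, C = robot_coordinates(T)
```

Near B = ±π/2 the Euler extraction degenerates and a different branch is needed, which is one reason path planners check poses against joint and orientation limits before emitting them.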
Path Optimisation Knowledge Base. It contains optimum process parameters such as the Z-axis
offset, approaching angle, and robot travel speed.
(Block diagram: the supervisory control layer links the system configuration and product design data to the adaptive path planner and optimal profile fitting modules, while the device/process control layer links the tool compensation knowledge base to data acquisition/exchange, all backed by the knowledge base.)
Fig. 17. Robot holding workpiece in contact with the polishing belt.
Figure 17 shows the robot holding the workpiece in compliant contact with the polishing belt. The system carries out polishing with the planned tool path and optimum process parameters. The optimum parameters can be inferred from the part conditions: curvature, braze thickness, leading edge height, and trailing edge distortion. As a result, a smooth finished profile, free of transition lines, overlapping marks, and burn marks, can be obtained.
The Supervisory Controller contains the following control functions:
• Internal / External Communication. Internal communication involves information
and data exchange between control modules in DDSC. External communication
allows data exchange with KBPC.
Path Coordination Planning and Control in Robotic Material Handling and Processing 251
Fig. 18. Vanes before (a) and after (b) robotic grinding and polishing.
planning and control. During the planning and control of pose trajectories, the position, velocity, and acceleration, and even the jerk and the torque limits of the robot joints, are taken into consideration as constraints with smooth functions of time. This ensures that the robot moves along the planned trajectories with good kinematic and dynamic performance. The obtained trajectories are thus efficient both to compute and to execute, and the planned motions are uniform and rule out bang-bang trajectories. Simulation and experimental studies of the arc welding and surface finishing processes show that the planned trajectories are optimal with respect to the performance indices and can be realized through animation without violating any constraints of the robot actuators. Potentially, the proposed model and approach are also useful for computer graphics, CAD-based off-line programming, and CAD/CAM (e.g., CNC, wire-EDM) in virtual/real design environments.
9. Disclaimer
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors. No approval or endorsement of any commercial product, service, or company by the National Institute of Standards and Technology is intended or implied.
10. References
Angeles, J. (1992). The design of isotropic manipulator architectures in the presence of
redundancies, International. Journal of Robotics Research, 11(3), pp.196-201
Barr, A., Curin, B. Gabriel, S. and Hughes, J. (1992). Smooth interpolation for orientations with
angular velocity constraints using quaternions, Computer Graphics, Vol.26, pp.313-320
Bokelberg, E. H., Hunt, K. H. and Ridley, P. R. (1992). Spatial motion-I: acceleration and the differential geometry of screws, Mech. Mach. Theory, Vol. 27, No. 1, pp. 1-15
Bobrow, J.E., Dubowski, S. and Gibson, J.S. (1985). Time-optimal control of robotic
manipulators along specified paths, International Journal of Robotics Research., Vol.4,
No.3, pp.3-17
Boehm,W. (1985). Curvature continuous curves and surfaces, Computer-Aided Geometric
Design, Vol.2, pp.313-323
Craig, J. J. (1989). Introduction to Robotics: Mechanics and Control, Addison Wesley Longman,
ISBN 0-201-09528-9, Singapore.
Chen X. Q., Gong Z. M., Huang H., Ge S. S. , and Zhu Q. (1999). Development of SMART 3D
polishing system, Industrial Automation Journal, Vol. 8, no. 2, pp. 6-11
Chen X. Q., Chang H., Chan K., Fong A. M., Kwek T. Y., Kerisnan B., Tan C. S., Lim P. H. (1998).
Design and implementation of multi-sensory autonomous welding system for high
quality precision welding of large boxed structure, Proceedings of the Fifth International
Conference on Control, Automation, Robotics and Vision, pp.15-19, Singapore
Chen, X. Q.; Gong, Z. M.; Huang, H.; Ge, S. Z. & Zhou, L. B. (2002). Adaptive robotic system
for 3D profile grinding and polishing, In: Advanced Automation Techniques in
Adaptive Material Processing, Chen, X. Q.; Devanathan, R. & Fong, A. M. (Ed.), pp.
55-90, World Scientific, ISBN 981-02-4902-0, Singapore.
Corke, P. I. (1993-1999). Robotics Toolbox for MATLAB Reference Guide
Fu, K. S.; Gonzalez, R. C. & Lee, C. S. G (1987). Robotics: Control, Sensing, Vision, and
Intelligence, McGraw-Hill Inc., ISBN 0070226253, New York.
Ge, Q.J. and Ravani, B. (1994). Geometric construction of Bezier motions, ASME Journal of
Mechanical Design, Vol.116, No.3, pp.749-755
Ge, Q.J. and Kang, D.L. (1995). Motion interpolation with G2 composite Bezier motions,
Transactions of ASME, Journal of Mechanical Design, Vol.117 ,No.3, pp.520-525
Gong Z. M., Chen X. Q., and Huang H. (2000). Optimal profile generation in surface finishing, Proceedings of the 2000 IEEE International Conference on Robotics and Automation, pp. 1557-1562, San Francisco
Farin, G. (Ed.) (1991). NURBS for Curve and Surface Design, Society for Industrial and Applied Mathematics
Grudic, G.Z. and Lawrence, P.D. (1993). Iterative inverse kinematics with manipulator
configuration control, IEEE Trans. on Robotics and Automation, Vol.9, No.4, pp. 476-483
Huang, H.; Gong, Z. M.; Chen, X. Q., and Zhou, L. B. (2002). Robotic Grinding/Polishing for
Turbine Vane Overhaul. Journal of Materials Processing Technology, Volume 127
(2002), pp. 140-145.
Hunt, K.H. (1978). Kinematics Geometry of Mechanisms, Clarendon Press, Oxford
Jun, Bong-Huan, Lee, Jihong, and Lee, Pan-Mook (2006), Repetitive Periodic Motion Planning and
Directional Drag Optimization of Underwater Articulated Robotic Arms, International
Journal of Control, Automation, and Systems, vol. 4, no. 1, pp. 42-52, February 2006
Konjovic,Z., Vukobratovic, M. and Surla, D. (1994). Synthesis of the optimal trajectories for
robotic manipulators based on their complete dynamic models, International Journal
of Robotics and Automations, Vol.9, No.1, pp. 36-47
Kyriakopoulos, K.J. and Saridis, G. N. (1994). Minimum jerk for trajectory planning and
control, Robotica, 12, pp.109-113
Lee, J.H. (1995). A dynamic programming approach to near minimum-time trajectory
planning for two robots, IEEE Trans. on Robotics and Automations, Vol.11,No.1,
pp.160-164,1995
Leu, M.C., and Singh, S.K. (1989). Optimal planning of trajectories robots, CAD Based
Programming for Sensory Robots, B. Ravani (ed.), Springer-Verlag
Kim, J. Y. (2004), CAD-Based Automated Robot Programming in Adhesive Spray Systems
for Shoe Outsoles and Uppers, Journal of Robotic Systems,21(11), 625–634 (2004)
Manocha,D. and Canny, J. F. (1994). Efficient inverse kinematics for general 6R
manipulators, IEEE Trans. on Robotics and Automations, Vol.10, No.5, pp.648-657
Mitsi,S., Bouzakis, K.-D., and Mansour, G. (1995). Optimization of robot links motion in
inverse kinematics solution considering collision avoidance and joint limits, Mech.
Mach. Theory, Vol.30,No.5, pp.653-663
Makino,H., Xie, C.X. and Zhen, S.X. (1982). Spatial mechanisms and robotic mechanisms,
China Mechanical Industry Press, Beijing
MATLAB 5.3 Reference Guide, The MathWorks Inc., 1998
Gursoy, O. (1992). Some results on closed ruled surfaces and closed space curves, Mech. Mach. Theory, Vol. 27, No. 3, pp. 323-330
Patel, R.V. and Lin, Z. (1988). Collision-free trajectory planning for robot manipulators, Proceedings
of 1988 IEEE Int. Conf. on Systems, Man, and Cybernetics, Vol.2, pp.787-790
Dierckx, P. (1993). Curve and Surface Fitting with Splines, Oxford University Press
Pfeiffer,F. and Johanni, R. (1987). A Concept for manipulator trajectory planning, IEEE
Journal of Robotics and Automation, Vol.RA-3, No.3, pp.115-123
Pachidis, T.P. Tarchanidis, K.N., Lygouras, J.N., Tsalides, P. G. (2005), Robot Path
Generation Method for a Welding System Based on Pseudo Stereo Visual Servo
Control, EURASIP Journal on Applied Signal Processing, 2005:14, 2268–2280
Ridley, P. R., Bokelberg, E. H. and Hunt, K. H. (1992). Spatial motion-II: acceleration and the differential geometry of screws, Mech. Mach. Theory, Vol. 27, No. 1, pp. 17-35
Roth, B. (1967). On the screw axes and other special lines associated with spatial
displacements of a rigid body, ASME Journal of Engineering for Industry, Series B,
Vol.89,No.1,pp.102-110
Seereeram, S. and Wen, J.T. (1995). A global approach to path planning for redundant
manipulators, IEEE Trans. on Robotics and Automation, Vol.11, No.1, pp.152-160
Shin, K.G. and McKay, N.D. (1985). Minimum-time control of robotic manipulators with
geometric path constraints, IEEE Trans. Automat. And Cont., Vol. AC-30,
No.6,pp.531-541
Slotine, J.J.E and Yang, H.S. (1989). Improving the efficiency of time-optimal path-following
algorithms, IEEE Trans. Robotics and Automation, Vol.5, pp.118-124
Shiller,Z., and Lu, H.-H. (1990). Robust computation of path constrained time optimal
motions, 1990 Proc. IEEE Int. Conf. Robotics and Automation,pp.144-149
Tabarah, E. , Benhabib, B., and Fenton, R.G. (1994). Optimal motion coordination of two
robots-a polynomial parameterization approach to trajectory resolution, Journal of
Robotic Systems, 11(7), pp.615-629
Thompson, S.E. and Patel, R.V. (1987). Formulation of joint trajectories for industrial robots
using B-splines, IEEE Trans. Industrial Electronics, Vol. 34, No. 2, pp. 192-199
Yang, T. and Jin, W.M. (1988). Study on the kinematics performance of robot manipulators
using the theory of velocity space, Proceedings of 1988 IEEE Int. Conf. on Systems,
Man, and Cybernetics, Vol.2, pp.1364-1366
Yoshikawa,T. (1985). Manipulability of robotic mechanisms, Int. J. of Robotics Research, Vol.4 ,
No.2, pp. 3-9
Zeid,I. (1993). CAD/CAM Theory and Practice, McGraw-Hill International Editions,
Computer Science Series
Zha, X.F. (1993). On the harmonious planning and simulation of the trajectories of position
and orientation for robot end effector, Proceedings of the 9th Int. Conf. on Computer
Aided Production Engineering, pp.296-301, Nanjing, China
Zha, X.F., Wang, M., and Choi, A.C.K. (1994). Optimal trajectory planning for robot
manipulators, Proceedings of the 2nd Asian Conference on Robotics and Applications,
International Academics Publishers, pp.569-575, Beijing, China
Zha, X.F., Du, H. (2001). Generation and simulation of robot trajectories in a virtual CAD-
based off-line programming environment, International Journal of Advanced
Manufacturing Technology, Vol.17:610-624
Zha, X.F. (2002). Optimal Pose Trajectory planning for robot manipulators, Mechanism and
Machine Theory, Vol.37: 1063–1086
Zha, X.F., Chen, X.Q. (2004). Trajectory coordination planning and control for robot
manipulators in automated material handling and processing, International Journal
of Advanced Manufacturing Technology, Vol.23:831-845
13

Optimal Motion Planning for Manipulator Arms Using Nonlinear Programming
1. Introduction
The optimal motion planning problems for manipulator arms have been actively researched
in robotics in the past two or three decades because the optimal motions that minimize
actuator forces, energy consumption, or motion time yield high productivity, efficiency,
smooth motion, durability of machine parts, etc. These merits are more apparent when the
manipulator arms execute repeated motions.
This subject is roughly divided into two categories according to the tasks that the manipulator arms perform: motions with or without geometric path constraints.
If the geometric path of the end-effector of a non-redundant manipulator is predetermined,
the motion has one degree of freedom (DOF) and can be represented by a scalar path
variable. In this case, rigorous solutions were obtained subject to constant bounds on the
actuator forces (Bobrow et al., 1985; Shin & McKay, 1985). Subsequently, the study was
extended to cases where the path included certain singular points on it (Shiller, 1994) or
where the actuator jerks and actuator torques were limited within constant bounds
(Constantinescu & Croft, 2000).
Most manipulator tasks—except arc welding, painting or cutting—are essentially
motions without geometric path constraints. In this case, obstacle avoidance should be
considered simultaneously with motion optimization. These types of manipulator
motions have the same DOF as the number of lower pair joints, and the corresponding
optimal motions are more complicated than those discussed above. The subject of this
chapter lies in this category.
Various types of methods have been developed to solve the optimal motion planning
problem in the presence of obstacles. Optimal control theory (Bryson & Meier, 1990;
Bessonnet & Lallemand, 1994; Formalsky, 1996; Galicki, 1998), nonlinear programming
(Fenton et al., 1986; Bobrow, 1988; Singh & Leu, 1991), dynamic programming (Jouaneh et
al., 1990), tessellation of joint (Ozaki & Lin, 1996) or configuration space (Shiller &
Dubowsky, 1991), and a combination of these (Schlemmer & Gruebel, 1998; Hol et al., 2001)
are the main techniques used.
By the application of the optimal control theory, Pontryagin’s maximum principle leads to a
two-point boundary value problem. Some researchers have attempted to solve these
equations directly (Bryson & Meier, 1990; Formalsky, 1996) while others have attempted to
solve them through parameter optimization (Hol et al., 2001). Although this theory and its solutions are rigorous, it has been applied only to the motions of 2-link or at most 3-link planar manipulators due to the complexity and nonlinearity of the manipulator dynamics.
Approximation methods have been studied to obtain the solutions for three or more DOF
spatial manipulators; however, the solutions obtained have not been proved to be optimal.
These approximation methods are roughly divided into two groups depending on whether
or not they utilize gradients.
Most algorithms based on nonlinear programming use gradients (Fenton et al., 1986; Bobrow,
1988; Singh & Leu, 1991; Bobrow et al., 2001; Wang et al., 2001). For stable convergence, the
objective functions and constraints must be locally convex and their first derivatives must be
continuous. Numerically calculated gradients have been used to find minimum time motions
(Bobrow, 1988); however, the simulation model was a 3-link spatial manipulator in the presence of
a relatively simple obstacle model. Subsequently, analytically calculated gradients were used to
minimize actuator torques for various multibody systems (Bobrow et al., 2001) or spatial 6-link
manipulators (Wang et al., 2001). However, torque or energy minimizations show more stable
convergence properties than the minimum time motions because the motion time is fixed.
Other approximation methods that do not utilize gradients are mainly based on (1)
approximations in small time-intervals (Singh & Leu, 1991; Jouaneh et al., 1990; Hol et al.,
2001) or (2) discretization/tessellation (Ozaki & Lin, 1996; Shiller & Dubowsky, 1991;
Schlemmer & Gruebel, 1998) of joint or configuration spaces. The former requires less CPU
time but may accumulate numerical or modeling errors in small time-intervals and thus
lower the accuracy of the results. The latter assures stable convergence but the CPU time
may increase exponentially for the refinement of tessellation.
Because of the complex dynamics and kinematics of robot manipulators, various
assumptions or simplifications were introduced for the online implementation or
simplification of the algorithms.
Using geometric simplifications, obstacles have been ignored (Bessonnet & Lallemand, 1994;
Fenton et al., 1986; Jouaneh et al., 1990; Hol et al., 2001; Lee, 1995) or modeled as circles (Galicki,
1998; Singh & Leu, 1991; Ozaki & Lin, 1996) or as finite surface points (Schlemmer & Gruebel,
1998), and robot links have been modeled as lines (Galicki, 1998; Ozaki & Lin, 1996; Lee, 1995),
finite surface points (Singh & Leu, 1991), or ellipsoids (Schlemmer & Gruebel, 1998).
Using kinematic simplifications, motions were restricted in a plane (Formalsky, 1996;
Galicki, 1998; Ozaki & Lin, 1996; Shin & Zheng, 1992; Lee, 1995), the orientation of the end-
effector was ignored (Singh & Leu, 1991; Shiller & Dubowsky, 1991), or the joint velocity
profiles (Fenton et al., 1986) or joint acceleration profiles (Jouaneh et al., 1990; Schlemmer &
Gruebel, 1998; Cao et al., 1998) were pre-specified.
Using dynamic simplifications, manipulator dynamics were ignored subject only to the
kinematic constraints (Fenton et al., 1986; Shin & Zheng, 1992; Cao et al., 1998).
To the best of the author’s knowledge, the optimal motions for six or more DOF
manipulators or multiple robot arms have not yet been obtained without simplifying any of
the geometric, kinematic, or dynamic models of manipulators or obstacles.
In this study, we transform the optimal control problem in a function space into a nonlinear
programming problem in a finite-dimensional vector space. Joint displacements are
Optimal Motion Planning for Manipulator Arms Using Nonlinear Programming 257
$$|\tau_i| \le -\frac{\tau_{ic}}{\omega_{ic}}\,|\omega_i| + \tau_{ic}, \qquad i = 1, \ldots, n \quad (1)$$
where τi and ωi are the generalized actuator force and joint velocity of the i-th joint, respectively; τic and ωic are the absolute values of their limits; and n is the DOF of the system, which is equal to the number of lower pair joints.
If we define equivalent actuator forces τie as
$$\tau_{ie} = \tau_i + \frac{\tau_{ic}}{\omega_{ic}}\,\omega_i, \qquad i = 1, \ldots, n \quad (2)$$
then the actuator constraints (1) become
$$|\tau_{ie}| \le \tau_{ic}, \qquad i = 1, \ldots, n \quad (3)$$
If the limits are constant as shown in Fig. 1(b), the actuator and velocity constraints can be
simply expressed as:
$$|\tau_i| \le \tau_{ic}, \qquad i = 1, \ldots, n \quad (4)$$
Fig. 1. Two kinds of actuator characteristics; (a) actuator force limit depends on joint velocity,
(b) the limits of actuator force and joint velocity are constant.
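The constraint transformation of Eqs. (2)–(3) can be sketched as follows; the limit values are purely illustrative:

```python
def equivalent_torque(tau, omega, tau_c, omega_c):
    """Equivalent actuator force tau_ie = tau_i + (tau_ic/omega_ic)*omega_i, Eq. (2)."""
    return tau + (tau_c / omega_c) * omega

def is_feasible(tau, omega, tau_c, omega_c):
    """Velocity-dependent actuator constraint |tau_ie| <= tau_ic, Eq. (3)."""
    return abs(equivalent_torque(tau, omega, tau_c, omega_c)) <= tau_c

# Illustrative limits: tau_c = 100 N*m, omega_c = 4 rad/s, so the slope is 25
print(is_feasible(60.0, 1.0, 100.0, 4.0))   # 60 + 25*1 = 85  <= 100 -> True
print(is_feasible(60.0, 2.0, 100.0, 4.0))   # 60 + 25*2 = 110 > 100  -> False
```

At a higher joint speed the same torque demand becomes infeasible, which is exactly the sloped boundary drawn in Fig. 1(a).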
$$J_{d4} = \frac{1}{T} \int_0^T \sum_{i=1}^{n} \left[ \left\{ \frac{|\tau_i|}{\tau_{ic}} - 1 \right\}_+^2 + \left\{ \frac{|\omega_i|}{\omega_{ic}} - 1 \right\}_+^2 \right] dt \quad (10)$$
where T is the total motion time and the plus operator {•}+ is defined as
$$\{\bullet\}_+ = \begin{cases} \bullet, & \text{if } \bullet \ge 0 \\ 0, & \text{if } \bullet < 0 \end{cases} \quad (11)$$
(7) and (8) are the performance indices for the minimum torque and minimum energy
motions, respectively.
Since τic and ωic in (7) and (8) act simply as weighting factors for each joint, they do not ensure that the actuator forces and the joint velocities remain within their limits. (9) and (10) are the performance indices for the minimum overload motions; "overload" means that the actuator forces or the joint velocities exceed their limits. (9) covers the case in which the actuator force limits depend on the joint velocities, as shown in Fig. 1(a); (10) covers the alternative case, in which the two limits are constant, as shown in Fig. 1(b).
If the total motion time is greater than or equal to the minimum time, the minimum values
of (9) and (10) must be zero. On the other hand, if it is less than the minimum time, they
must be positive. In Section 4, the minimum time motions are defined rigorously by this
concept and they are found successfully by the sequential searches for the minimum
overload trajectories.
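On a uniformly sampled trajectory, the overload index (10) with the plus operator (11) can be sketched as a sample mean; the limits and signals below are toy values:

```python
import numpy as np

def plus(x):
    """Plus operator {x}_+ = x if x >= 0 else 0, Eq. (11), elementwise."""
    return np.maximum(x, 0.0)

def overload_index(tau, omega, tau_c, omega_c):
    """J_d4, Eq. (10): time-average of squared actuator and velocity overloads,
    approximated by the sample mean over a uniformly sampled trajectory.
    tau, omega are (n_joints, n_samples) arrays; tau_c, omega_c are (n_joints,)."""
    integrand = plus(np.abs(tau) / tau_c[:, None] - 1.0) ** 2 \
              + plus(np.abs(omega) / omega_c[:, None] - 1.0) ** 2
    return float(integrand.sum(axis=0).mean())

# Toy check with constant signals (illustrative limits):
tau_c = np.array([100.0]); omega_c = np.array([4.0])
tau_ok = np.full((1, 11), 80.0); omega_ok = np.full((1, 11), 3.0)
print(overload_index(tau_ok, omega_ok, tau_c, omega_c))   # 0.0: within limits
tau_hi = np.full((1, 11), 120.0)
print(overload_index(tau_hi, omega_ok, tau_c, omega_c))   # ~0.04 = (120/100 - 1)^2
```

The index is zero exactly when the trajectory is feasible, which is what makes the sequential minimum-overload searches of Section 4 a usable test for the minimum time.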
To formulate the obstacle avoidance constraints, we use the so-called growth function (Ong &
Gilbert, 1996). We briefly review the approach here. Assume that there exists a convex object
A in a three-dimensional workspace. The object is defined as a set of all the points inside and
on the surface of a rigid body. Let pA be an arbitrary point (seed point) fixed inside A, then
the growth model GA(σ) is defined as
$$G_A(\sigma) = \{\, y \mid y = p_A + \sigma (x - p_A),\ x \in A \,\} \quad (12)$$
$$\sigma^*(A, B) = \min \{\, \sigma \mid G_A(\sigma) \cap G_B(\sigma) \neq \emptyset \,\} \quad (13)$$
The growth function can be calculated by linear programming if A and B are convex
polyhedra. The dimension of this linear programming problem is 4; thus, the active set
method (Best & Ritter, 1985) is efficient for such a low-dimensional LP problem.
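A sketch of the growth-function computation as a 4-variable LP in (y, σ), assuming SciPy's `linprog` and halfspace representations {x : M x ≤ v} of the polyhedra (the chapter's active set method is stood in for by the HiGHS solver; the two unit cubes are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def growth_function(A_mat, a_vec, pA, B_mat, b_vec, pB):
    """Growth function sigma*(A,B), Eq. (13), for convex polyhedra in halfspace
    form {x : M x <= v} with seed points pA, pB, solved as a 4-variable LP."""
    def rows(M, v, p):
        # y in G(sigma)  <=>  M y - sigma * (v - M p) <= M p   (for sigma > 0)
        return np.hstack([M, -(v - M @ p)[:, None]]), M @ p
    A1, b1 = rows(A_mat, a_vec, pA)
    A2, b2 = rows(B_mat, b_vec, pB)
    res = linprog(c=[0.0, 0.0, 0.0, 1.0],          # minimize sigma
                  A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
                  bounds=[(None, None)] * 3 + [(1e-9, None)])
    return res.x[3]

# Two unit cubes whose centers (used as seed points) are 2 m apart:
M = np.vstack([np.eye(3), -np.eye(3)])
v = np.full(6, 0.5)
pA = np.zeros(3)
pB = np.array([2.0, 0.0, 0.0])
vB = v + M @ pB            # same cube translated to pB
sigma = growth_function(M, v, pA, M, vB, pB)
print(round(sigma, 3))     # -> 2.0: each cube must grow to twice its size to touch
```

A value σ* > 1 means the objects are separated, σ* < 1 means one penetrates the other, matching the text below.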
Consider that there are m obstacles in a workspace. We assume that all the obstacle models
O1,…,Om and the link models R1(t),…,Rn(t) are convex polyhedra. However, non-convex
models are permissible if they can be decomposed into multiple convex models. If the
growth function σ* of a link model Ri and obstacle model Oj has a value less than one, one
model penetrates into the other and the following penetration growth distance Dij indicates the
extent of the penetration.
$$D_{ij} = (d_i + d_j)\,\{\, 1 - \sigma^*(R_i, O_j) \,\}_+ \quad (14)$$
where di and dj are the appropriate positive real numbers that represent the actual
geometric sizes of Ri and Oj, respectively, and the plus operator has been defined
above. In general, the penetration growth distance is not equal to the minimum
translational distance separating the objects.
From the above notation, the obstacle avoidance constraints become
260 Industrial Robotics - Programming, Simulation and Applications
$$\mathbf{D} = \mathbf{0}, \qquad \forall t \in [0, T] \quad (15)$$
where D is an (n × m) matrix whose elements are given by (14).
To obtain obstacle-free optimal motions, we define the following augmented performance
indices as
$$J_i = J_{di} + w_o J_o, \qquad i = 1, \ldots, 4 \quad (16)$$
$$J_o = \frac{1}{T} \int_0^T \sum_{i=1}^{n} \sum_{j=1}^{m} D_{ij}^{\,2}\, dt \quad (17)$$
$$B_j(s) = \frac{1}{120}\Big[ \{s-(j-3)\}_+^5 - 6\{s-(j-2)\}_+^5 + 15\{s-(j-1)\}_+^5 - 20\{s-j\}_+^5 + 15\{s-(j+1)\}_+^5 - 6\{s-(j+2)\}_+^5 + \{s-(j+3)\}_+^5 \Big], \qquad s \in (-\infty, \infty) \quad (22)$$
where j is an arbitrary integer. The basis function Bj(s) is positive for j − 3 < s < j + 3 and zero
otherwise. The nodal values of (22) and its s-derivatives are listed in Table 1.
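Equation (22) can be evaluated directly from the plus operator; a quick check confirms the local support and that the shifted bases sum to one:

```python
def Bj(s, j):
    """Quintic B-spline basis of Eq. (22): positive on (j-3, j+3), zero outside."""
    p = lambda x: max(x, 0.0) ** 5          # {x}_+^5
    coeff = [1, -6, 15, -20, 15, -6, 1]     # binomial pattern (-1)^k C(6, k)
    return sum(c * p(s - (j + k - 3)) for k, c in enumerate(coeff)) / 120.0

print(Bj(0.0, 0))                    # -> 0.55 (= 66/120, the peak value at s = j)
print(Bj(3.0, 0), Bj(-3.0, 0))       # -> 0.0 0.0 (support boundary)
print(sum(Bj(0.0, j) for j in range(-3, 4)))   # ~1.0: partition of unity
```

Only B_{j-2}, ..., B_{j+2} are nonzero at s = j, which is why perturbing one coefficient later changes the trajectory only on a small interval.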
If we choose [0, k] as the interval of the parameter s to express the manipulator motions in
the time interval [0, T], then k + 5 splines, i.e., B−2(s), …, Bk+2(s), have nonzero values in [0, k]. The joint trajectories are expressed as linear combinations of the k + 5 splines as follows:
$$\theta(s) = C\, B(s) \quad (23)$$
$$s = \beta t \quad (24)$$
where s is a dummy variable connecting the joint variables with time; C is an n × (k + 5) coefficient matrix; B(s) is a (k + 5) × 1 column vector whose elements are B−2(s), …, Bk+2(s); and β (= k/T) is a time-scale factor that defines the motion time. Thus, k remains constant although the total motion time varies.
Differentiating (23) with respect to time,
$$\omega(s) = \beta\, C\, B'(s) \quad (25)$$
$$\alpha(s) = \beta^2\, C\, B''(s) \quad (26)$$
where the primes (′, ″) denote differentiation with respect to s.
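The representation (23)–(26) can be sketched as follows; k, T, and the coefficient matrix are toy values, and the basis derivatives use d^r/ds^r {x}_+^5 = (5!/(5−r)!) {x}_+^{5−r}:

```python
import numpy as np

k, T = 15, 2.0                 # number of intervals and total motion time
beta = k / T                   # time-scale factor, Eq. (24): s = beta * t
n = 3                          # number of joints
C = np.ones((n, k + 5))        # toy coefficient matrix -> constant trajectory

def basis(s, deriv=0):
    """Vector of B_j(s) (or its deriv-th s-derivative) for j = -2, ..., k+2."""
    fact = [1, 5, 20][deriv]                # 5!/(5-deriv)! for deriv = 0, 1, 2
    coeff = [1, -6, 15, -20, 15, -6, 1]
    return np.array([fact * sum(c * max(s - (j + i - 3), 0.0) ** (5 - deriv)
                                for i, c in enumerate(coeff)) / 120.0
                     for j in range(-2, k + 3)])

t = 0.7
s = beta * t
theta = C @ basis(s)                    # joint positions,     Eq. (23)
omega = beta * (C @ basis(s, 1))        # joint velocities,    Eq. (25)
alpha = beta ** 2 * (C @ basis(s, 2))   # joint accelerations, Eq. (26)
print(theta)                            # ~[1 1 1]: the bases sum to one
```

With all coefficients equal, the trajectory is constant and the computed velocities and accelerations vanish, a quick sanity check on the derivative bookkeeping.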
The initial and final motion conditions (18)–(20) can be used to reduce the dimension of the
coefficient matrix. By algebraic manipulation (Park & Bobrow, 2005), the joint trajectories
satisfying the initial and final motion conditions are written as follows:
$$\theta(s) = F_s(s) + C_m B_m(s) + F_f(s) \quad (27)$$
where the boundary condition splines Fs(s) and Ff(s), each of dimension (n × 1), can be determined from the initial and final motion conditions; Cm is the reduced n × (k − 1) coefficient matrix and Bm(s) is the reduced (k − 1) × 1 B-spline basis vector.
The velocities and accelerations are
$$\omega = \beta \left[ F_s'(s) + C_m B_m'(s) + F_f'(s) \right] \quad (28)$$
$$\frac{J_i^{*\,j-1} - J_i^{*\,j}}{J_i^{*\,j-1}} \le \varepsilon_1, \qquad i = 1, \ldots, 4 \quad (31)$$
where Ji*j is the minimum value of (16) at the end of the jth line search.
If (31) is satisfied, the following condition is tested to terminate the process:
Jo ≤ ε2 (32)
$$(G)_{ij} \cong \frac{J\!\left((C_m)_{ij} + \delta\right) - J\!\left((C_m)_{ij} - \delta\right)}{2\delta}, \qquad i = 1, \ldots, n,\ \ j = 1, \ldots, k-1 \quad (33)$$
The B-spline basis functions have certain merits: since each B-spline is nonzero only in a small interval, the increment of the objective functional (16) over that interval is the only data required to calculate the corresponding component of the gradient.
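A naive version of the central-difference gradient (33) can be sketched as follows (a production implementation would exploit the local support of each B-spline so that only the increment of (16) over a small interval is recomputed; here the functional is re-evaluated in full):

```python
import numpy as np

def gradient_cd(J, Cm, delta=1e-6):
    """Gradient of a scalar functional J(Cm) by central differences, Eq. (33)."""
    G = np.zeros_like(Cm)
    for i in range(Cm.shape[0]):
        for j in range(Cm.shape[1]):
            Cp, Cn = Cm.copy(), Cm.copy()
            Cp[i, j] += delta      # perturb one coefficient up ...
            Cn[i, j] -= delta      # ... and down
            G[i, j] = (J(Cp) - J(Cn)) / (2.0 * delta)
    return G

# Toy quadratic functional with known gradient 2*Cm:
J = lambda C: float((C ** 2).sum())
Cm = np.array([[1.0, -2.0], [0.5, 3.0]])
print(gradient_cd(J, Cm))          # ~[[2, -4], [1, 6]]
```

For a quadratic functional the central difference is exact up to rounding, which makes this a convenient unit test before plugging in the true augmented index (16).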
The radii of the minimum circumscribed spheres of the two models are used as the values of the parameters di and dj in (14). The value of k in Step 1 affects the accuracy of the joint trajectories, and l in Step 4 affects the accuracy of the numerical integrations. A value of 15 or 20 is assigned to k; 10^-12, 10^-7, and 10^-7 are assigned to ε1, ε2, and δ, respectively, in the double-precision numerical process. All programs were written in FORTRAN without using any packaged programs, including the IMSL libraries. It takes about ten minutes on a Hewlett-Packard x2000 workstation to obtain the optimal motions for a PUMA 560 type manipulator arm in the presence of one hexahedral obstacle.
5. Simulations
5.1 Example 1 (Spatial 3-Link Manipulator)
The model is the 3-link arm shown in Fig. 2, where all joints are revolute pairs about their z axes, drawn in the zero-displacement configuration. The base coordinates are the same as the first link coordinates. The specifications are listed in Table 2; in this case, gravity acts in the z0 direction and τc is about twice the static actuator torque necessary to withstand gravity in the fully stretched configuration. The two hexahedral obstacles have the same size: their dimensions are (0.4, 0.4, 0.5) m, and their centers are at (0.76, −0.47, −0.25) and (0.76, 0.47, 0.25) m in base coordinates. The orientations of the obstacles are equal to the base coordinates. The seed points of all the links are at their geometric centers, but those of all the obstacles are located at the corners in order to find the various local paths. The manipulator moves from (−60°, 30°, −60°) to (60°, −30°, −120°) in joint space. The velocities and accelerations at the two end points are zero.
[Fig. 2: spatial 3-link manipulator with revolute joint axes z1, z2, z3 and link frames x1/x2, x3]
[Plot: performance index (log scale, 10^1 to 10^-5) vs. total motion time (sec), 0.6-1.4 s]
[Figure: spatial 6-link manipulator model with link frames z0-z6 and links 1-6]
[Plot panels (c) Jd3 and (d) Jd4: performance indices (log scale) vs. total motion time (sec.)]
Fig. 6. Initial (upper curves) and minimum (lower curves) performance indices vs. total
motion time in Test 1 of Example 2.
The base coordinates are the same as the first link coordinates at zero-displacement. Link 4
is connected to Link 3. The link parameters in the Denavit-Hartenberg notation and the
specifications are listed in Tables 3 and 4, respectively; in this case, gravity acts in the −z0
direction and τc is about twice the static actuator torque necessary to endure gravity in the
fully stretched configuration. The mass of the last link includes that of a tool and is heavier
than Link 5. The dimensions of one hexahedral obstacle are (1.2, 2.0, 1.2) m and the center is
(1.2, 0.0, 0.0) m in base coordinates. The orientation of the obstacle is equal to the base
coordinates. The velocities and accelerations at the start and goal positions are all zero.
[Fig. 7: minimum overload (log scale) vs. total motion time (sec), 0.6-1.3 s; curves for the no-obstacle, inner-course, and outer-course cases]
Similar to Example 1, we repeated the global search at every time step. The results are shown in
Fig. 7. When ignoring the obstacle, the minimum overload decreases gradually until it vanishes at
the minimum time 0.90 s. We can see that the convergence is quite stable and it converges to only
one optimal motion, regardless of the total motion time. On the other hand, when considering the
obstacle, the convergence property in Fig. 7 is not as stable as that in Fig. 3. The searches converged to different local minima, which will be shown in Figs. 9–12.
Fig. 8 depicts the minimum time motion, ignoring the obstacle; in this case, the four frames
are frames 1, 6, 11, and 16 among the total 16 equal time-interval frames. We can observe in
this motion that the manipulator turns around the z1 axis with the last link bent downward
to reduce the moment of inertia about that axis and the moment arm of gravity about the z2
and z3 axes. We can also observe that the manipulator maximizes the joint coupling effect by
the ”overactions” of the ”underloaded” joints to reduce the torques of the ”overloaded”
joints. Here, Joint 1 is ”underloaded” and Joints 2 and 3 are ”overloaded.” We will see these
phenomena in Fig. 13 as well. We can see the “overaction” of Joint 1 at the eleventh frame in
Fig. 8; that is, Joint 1 turns left over the goal position.
By adjusting the location of the seed point of the obstacle, we have found two obstacle-free
courses, namely, the inner and outer courses, as shown in Figs. 9–12, where the four frames
are frames 1, 6, 11, and 16 among the total 16 equal time-interval frames.
In the inner course, the local minimum shown in Fig. 9 has a smaller minimum overload than that in Fig. 10. We can observe this in Fig. 7, where the curve representing the inner course appears to be composed of several segments of two parallel curves. The difference in the minimum overloads between the two parallel curves becomes smaller as the total motion time approaches the minimum time. Both minimum times are equal to 1.12 s.
In the outer course, the local minimum shown in Fig. 11 has a smaller minimum overload than that in Fig. 12. The difference is quite small in Fig. 7, but it becomes larger as the total motion time approaches the minimum time. The minimum times in Figs. 11 and 12 are 1.17 and 1.21 s, respectively.
Fig. 13 shows the saturation state of the actuators. Joints 2, 3, and 5 are overloaded in the initial
motions (dotted lines), where the maximum equivalent torques exceed twice their actuator limits.
After the optimization, the motion time is increased from 0.8 s to 0.90 s in the case of no obstacle
and is increased to 1.12 s to avoid the obstacle. Moreover, all the joints are close to saturation
during the minimum time motions (solid lines). This is consistent with Pontryagin’s maximum
principle since a necessary condition for the minimum time motion (assuming no singular arcs)
is that all the joints should be in saturation during the motions.
Fig. 13. Equivalent torques in Test 2 of Example 2; solid lines are minimum time motions in
local minimum (E) and dotted lines are initial motions, T = 0.8 s.
6. Conclusions
In this chapter, we present a practical and reliable method for finding the minimum torque,
minimum energy, minimum overload, and minimum time motions for the manipulators
moving in an obstacle field, subject to the limits of velocity-dependent actuator forces.
Arbitrary point-to-point manipulator motions are represented by a point in a finite-
dimensional vector space parameterized by quintic B-splines.
The novel idea in this work is the concept of the minimum overload trajectory, in which the
actuator-overloads achieve their minimum values with the total motion time fixed. The
minimum time motion is defined rigorously with this concept and it is obtained by
successive searches for the minimum overload trajectory.
There are various locally minimal motions according to the directions in which the
manipulator links bend. We can perform global searches in a certain obstacle field by
adjusting the locations of the seed points of the obstacle models or the link models.
In the resultant optimal motions, the manipulator turns with the last link bent inward in
order to reduce the moment of inertia about the z1 axis and the moment arms of gravity
about the z2 and z3 axes.
In the resultant minimum time motions, 1) the manipulator maximizes the joint coupling
effect by the overactions of underloaded joints to reduce the torques on the overloaded
joints, and 2) almost all the actuators are close to saturation during the motion, which is
consistent with Pontryagin’s maximum principle.
7. References
Bessonnet, G. & Lallemand, J. P. (1994). On the optimization of robotic manipulator
trajectories with bounded joint actuators or joint kinetic loads considered as control
variables, Trans. ASME J. Dynamic Systems Measurement and Control, Vol. 116, 819-
826, ISSN 0022-0434
Best, M. J. & Ritter, K. (1985). Linear Programming: Active Set Analysis and Computer Programs,
Prentice-Hall
Bobrow, J. E.; Dubowsky, S. & Gibson, J. S. (1985). Time-optimal control of robotic
manipulators along specified paths, International J. Robotics Research, Vol. 4, 3-17,
ISSN 0278-3649
Bobrow, J. E. (1988). Optimal robot path planning using the minimum-time criterion, IEEE J.
Robotics and Automation, Vol. 4, 443-450, ISSN 0882-4967
Optimal Motion Planning for Manipulator Arms Using Nonlinear Programming 271
Bobrow, J. E.; Martin, B.; Sohl, G.; Wang, E. C.; Park, F. C. & Kim, J. (2001). Optimal robot
motions for physical criteria, J. Robotic Systems, Vol. 18, 785-795, ISSN 0741-2223
Bryson, Jr. A. E. & Meier, E. B. (1990). Efficient algorithm for time-optimal control of a two-
link manipulator, J. Guidance, Control, and Dynamics, Vol. 13, 859-866, ISSN 0731-
5090
Cao, B.; Dodds, G. I. & Irwin, G. W. (1998). A practical approach to near time-optimal
inspection-task-sequence planning for two cooperative industrial robot arms,
International J. Robotics Research, Vol. 17, 858-867, ISSN 0278-3649
Constantinescu, D. & Croft, E. A. (2000). Smooth and time-optimal trajectory planning for
industrial manipulators along specified paths, J. Robotic Systems, Vol. 17, 233-249,
ISSN 0741-2223
Craig, J. J. (1986). Introduction to Robotics: Mechanics and Control, Addison-Wesley,
ISBN 0201095289
Fenton, R. G.; Benhabib, B. & Goldenberg, A. A. (1986). Optimal point-to-point motion
control of robots with redundant degrees of freedom, Trans ASME J. Engineering for
Industry, Vol. 108, 120-126, ISSN 0022-0817
Fletcher, R. (1987). Practical Methods of Optimization, 2nd ed., John Wiley & Sons,
ISBN 0471494631
Formalsky, A. M. (1996). The time-optimal control of the bending of a plane two-link
mechanism, J. Applied Mathematics and Mechanics, Vol. 60, 243-251, ISSN 0021-8928
Galicki, M. (1998). The planning of robotic optimal motions in the presence of obstacles,
International J. Robotics Research, Vol. 17, 248-259, ISSN 0278-3649
Galicki, M. & Ucinski, D. (2000). Time-optimal motions of robotic manipulators, Robotica,
Vol. 18, 659-667, ISSN 0263-5747
Hol, C. W. J.; Willigenburg, L. G.; Henten, E. J. & Straten, G. (2001). A new optimization
algorithm for singular and non-singular digital time-optimal control of robots,
Proceedings of IEEE International Conference on Robotics and Automation, 1136-1141,
ISBN078036578X
Jouaneh, M. K.; Dornfeld, D. A. & Tomizuka, M. (1990). Trajectory planning for coordinated
motion of a robot and a positioning table: part 2, IEEE Trans. Robotics and
Automation, Vol. 6, 746-759, ISSN 1042-296X
Lee, J. (1995). Dynamic programming approach to near minimum-time trajectory planning
for two robots, IEEE Trans. Robotics and Automation, Vol. 11, 160-164, ISSN 1042-
296X
Ong, C. J. & Gilbert, E. G. (1996). Growth distances: new measures for object separation and
penetration, IEEE Trans. Robotics and Automation, Vol. 12, 888-903, ISSN 1042-296X
Ozaki, H. & Lin, C.-j. (1996). Optimal B-spline joint trajectory generation for collision-free
movements of a manipulator under dynamic constraints, Proceedings of IEEE
International Conference on Robotics and Automation, 3592-3597, ISBN0780329880
Park, J.-k. & Bobrow, J. E. (2005). Reliable computation of minimum-time motions for
manipulators moving in obstacle fields using a successive search for minimum
overload trajectories, J. Robotic Systems, Vol. 22, 1-14, ISSN 0741-2223
Prenter, P. M. (1975). Splines and Variational Methods, John Wiley & Sons, ISBN0471504025
Schlemmer, M. & Gruebel, G. (1998). Real-time collision-free trajectory optimization of robot
manipulators via semi-infinite parameter optimization, International J. Robotics
Research, Vol. 17, 1013-1021, ISSN 0278-3649
Shiller, Z. & Dubowsky, S. (1991). On computing the global time-optimal motions of robotic
manipulators in the presence of obstacles, IEEE Trans. Robotics and Automation, Vol.
7, 785-797, ISSN 1042-296X
Shiller, Z. (1994). On singular time-optimal control along specified paths, IEEE Trans.
Robotics and Automation, Vol. 10, 561-566, ISSN 1042-296X
Shin, K. G. & McKay, N. D. (1985). Minimum-time control of robotic manipulators with
geometric path constraints, IEEE Trans. Automatic Control, Vol. AC-30, 531-541,
ISSN 0018-9286
Shin, K. G. & Zheng, Q. (1992). Minimum-time collision-free trajectory planning for dual-
robot systems, IEEE Trans. Robotics and Automation, Vol. 8, 641-644, ISSN 1042-296X
Singh, S. K. & Leu, M. C. (1991). Manipulator motion planning in the presence of obstacles and
dynamic constraints, International J. Robotics Research, Vol. 10, 171-186, ISSN 0278-3649
Wang, C.-Y. E.; Timoszyk, W. K. & Bobrow, J. E. (2001). Payload maximization for open
chained manipulators: finding weightlifting motions for a Puma 762 robot, IEEE
Trans. Robotics and Automation, Vol. 17, 218-224, ISSN 1042-296X
14
Video Applications for Robot Arms
1. Introduction
Artificial vision enables industrial automation systems to act in production activities
without human presence, so the acquisition and interpretation of images for automation
purposes is an interesting topic.
Industrial applications concern several technological fields (assembly or disassembly, cutting or stock
removal, electrochemical processes, abrasive processes, cold or hot moulding, design with CAD
techniques, metrology) and several processes (inspection of raw material, machining of
components, assembly, packing and storage, quality control, maintenance).
The main advantages of this technique are:
1) elimination of human errors, particularly in the case of repetitive or monotonous
operations;
2) the possibility of varying production by acting on the throughput of the automatic system
(automatic machines can operate at high rates day and night, every day of the year);
3) greater informative control through the acquisition of historical data; these data can be
used for later processing, for failure analysis, and for real-time statistics;
4) quality control founded on objective parameters, in order to avoid disputes and loss of
reputation.
In addition, another interesting application can be considered: robot kinematic calibration
and trajectory recording. First of all, it is important to consider that, with a suitable camera-
calibration technique, it is possible to record three-dimensional objects and trajectories by means
of a pair of cameras. By means of a perspective transformation it is possible to
associate a point in the geometric space with a point in a plane. In homogeneous coordinates, the
perspective transformation matrix has non-zero elements in the fourth row.
(1)
where sf is the scale factor; more concisely equation (1) can be written as follows:
(2)
where the tilde indicates that the vectors are expressed in homogeneous coordinates. The matrix
T is a generic transformation matrix that is structured according to the following template:
The scale factor will almost always be 1 and the perspective part will be all zeros, except
when modelling cameras.
The fourth row of matrix [T] contains three zeros; by means of the perspective
transformation, three values, generally different from zero, are determined for these elements.
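As an illustrative numerical sketch of equations (1) and (2), with an assumed focal distance and an assumed single non-zero perspective entry (the actual matrices are not reproduced here), the transform and the scale factor sf can be exercised as follows:

```python
import numpy as np

# Assumed toy setup: identity pose plus one perspective entry in the
# fourth row of T, as for a camera looking along the z axis.
f = 0.05                              # assumed focal distance [m]
T = np.eye(4)
T[3, 2] = -1.0 / f                    # the non-zero perspective element

w = np.array([0.2, 0.1, 1.0, 1.0])    # a point in homogeneous coordinates

p = T @ w                             # concise form of eq. (2)
sf = p[3]                             # the scale factor sf of eq. (1)
p_cart = p[:3] / sf                   # divide out sf to get Cartesian values
```

The fourth homogeneous component accumulates the perspective contribution, and dividing by it performs the projection.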
Video Applications for Robot Arms 275
Let us now consider fig. 2: the vector w*, which represents the projection of the vector w on
the plane ξ,η.
(3)
where the terms in (3) are the unit vectors of the {Ω,ξ,η,ζ} frame axes expressed in the frame {O,x,y,z}.
In fig. 2 the vector t indicates the origin of the frame {O,x,y,z} in the frame {Ω,ξ,η,ζ}, and the
projection of P on the plane ξ,η is represented by the point Q, whose position vector is w*. The
latter, in homogeneous coordinates, is given by:
(4)
In the same figure, nr is the unit vector normal to the image plane R, and n is the same unit vector in
the frame {O,x,y,z}. The perspective image of the vector w* can be obtained by assessing a suitable
scale factor, which depends on the distance d between the point P and the image plane. The
distance d is given by the following scalar product:
(5)
Let’s indicate with w{Ω,ξ,η,ζ} the vector w in the frame
Because these are the unit vectors of the {Ω,ξ,η,ζ} frame axes in the frame {O,x,y,z}, it is possible
to write the coordinates of the vector w{Ω,ξ,η,ζ} in that frame:
(5’)
an expression of d is:
it is possible to write:
(5”)
(6)
(7)
The terms Dx, Dy, Dz assume infinite values if one of the coordinates of the vector w is null,
but this does not affect the generality of the relation because, in this case, the
term that assumes an infinite value is multiplied by zero.
(8)
so:
and
(9)
that is to say:
(10)
In this way, a displacement ∆w along the x axis corresponds to a displacement ∆p in the image
plane along a straight line of given slope. So the equation of the x axis in the image plane is:
(11)
Fig. 3. Frames.
The intercept was calculated by imposing that the given point belongs to
the x axis. In the same way, it is possible to obtain the equations of the y axis and the z axis:
(12)
(13)
By means of equations (11), (12) and (13) it is possible to obtain a perspective representation,
in the image plane, of a frame belonging to the Cartesian space; that is to say, for a given
body it is possible to define its orientation (e.g. roll, pitch and yaw) in the image plane.
An example may clarify the above: let us consider a circumference whose equation in
the frame xyz is:
(14)
It is possible to associate the geometric locus described by equation (14) with the corresponding
one in the image plane; in fact, by means of equations (11), (12) and (13) it is possible to write:
(15)
which represents the equation of a conic section: in particular, a circumference centred
at the origin of the xyz frame becomes an ellipse having its foci on a generic
straight line in the image plane.
For a generic robot with n d.o.f., the transformation matrix from the end-effector frame to the base
frame has the following expression:
where the two vectors represent a generic point P in frame 0 and in frame n.
It can be useful to include the perspective concepts in this transformation matrix; in this way it
is possible to obtain a perspective representation, in an image plane, of the robot base frame
belonging to the Cartesian space, as the following expression shows:
(16)
where n is the refractive index of the lens and R1 and R2 are the radii of curvature of the
dioptric surfaces.
Now consider a thin lens, a point P and a plane on which the light rays refracted by the
lens are projected, as shown in fig. 5. The equation for thin lenses gives:
Hence:
If we consider that the distance of a point from the camera lens is generally one metre
or more, while the focal distance is of the order of a few millimetres (d >> f), the following
approximation can be accepted:
So the coordinates of the point in the image plane can be obtained by scaling the
coordinates in the Cartesian space by a factor −f/d. The minus sign is due to the
inversion of the image.
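This approximation can be checked numerically with assumed values of f and d (a sketch, not the chapter's actual camera parameters):

```python
f = 0.008          # assumed focal length [m] (a few millimetres)
d = 1.5            # assumed object distance [m] (about one metre or more)

# Thin-lens equation 1/d + 1/q = 1/f, solved for the image distance q
q = d * f / (d - f)

# Transverse magnification; the minus sign reflects the image inversion
m = -q / d
```

For d much larger than f, q is very close to f, so the magnification tends to −f/d, i.e. the image coordinates are the Cartesian ones scaled down and inverted.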
To each point of the Cartesian space (whose position is given by its Cartesian coordinates) it is
possible to associate a point in the image plane (two coordinates) by means of the camera. So
the expression “model of the camera” means the transform that associates a point in the
Cartesian space with a point in the image space.
It should be noted that in the Cartesian space the position of a point is given by three
coordinates expressed in length units, while in the image plane the two coordinates are
expressed in pixels; the pixel is the smallest length unit that can be resolved by the camera
and is not a normalised length unit. The model of the camera must also take this
aspect into account.
In order to obtain the model of the camera, the scheme reported in fig. 6 can be considered.
Consider a frame xyz in the Cartesian space; the position of a generic point P in the
space is given by the vector w. Then consider a frame having its origin in the lens
centre and the plane ξη coincident with the plane of the lens; hence, the plane ξη is
parallel to the image plane and the ζ axis is coincident with the optical axis. Finally,
consider a frame u,v on the image plane, such that uo and vo are the coordinates of its
origin, expressed in pixels.
(18)
Substantially, the above consists of a change of reference frames and a
scaling based on the rules of geometric optics previously reported.
Assuming xl and yl to be the first two components of the vector wl, the coordinates u and v
(expressed in pixels) of P’ (the image of P) are:
(19)
where δu and δv are, respectively, the horizontal and vertical dimensions of the pixel. So, by
substituting equation (17) in equation (19), we obtain:
(20)
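Since equations (17), (19) and (20) are not reproduced here, the following is only a generic pinhole-style sketch of the pixel conversion: scale the camera-frame coordinates by the optics, divide by the pixel dimensions δu, δv and offset by the principal point (u0, v0). All numeric values are assumptions:

```python
# Assumed intrinsic parameters (illustrative only)
f = 0.008                  # focal length [m]
du, dv = 5e-6, 5e-6        # pixel width and height [m]
u0, v0 = 320.0, 240.0      # pixel coordinates of the optical axis

def to_pixels(xl, yl, d):
    """Convert the first two camera-frame coordinates of a point at
    distance d into pixel coordinates (generic pinhole sketch)."""
    scale = f / d                 # optical scaling for d >> f
    u = u0 - scale * xl / du      # minus sign: image inversion
    v = v0 - scale * yl / dv
    return u, v

u, v = to_pixels(0.10, -0.05, 1.5)
```

The division by δu and δv converts metres on the sensor into pixel counts, which is exactly the unit change the camera model must account for.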
(22)
Where
and
are the position vectors, in homogeneous coordinates, of the point in the image planes of the two cameras, and:
(24)
are the [3x4] transformation matrices (22) from the spatial frame to the image planes of the two cameras.
The first equation of the system (23), in Cartesian (non-homogeneous) coordinates, can be
written as:
(25)
or:
(26)
In the same way, for the camera whose transformation matrix is M’, it can be written:
(27)
(28)
This last equation represents the stereoscopic problem and consists of a system of 4 equations
in 3 unknowns (wx, wy, wz). As there are more equations than unknowns, the system can be
solved by a least-squares algorithm. In this way it is possible to invert the problem that is
described by equations (20) and to recover the position of a generic point starting from its camera images,
where A is a matrix that depends on two pairs of camera coordinates, (u,v) and (u’,v’), and
on the vector w, and B is a vector containing the parameters of the camera configuration.
It is possible to find an explicit form of this problem.
(29)
(30)
(31)
(32)
By arranging equations (31) and (32), it is possible to redefine the stereoscopic problem
expressed by equation (28):
(33)
In equation (33), P is a 4x3 matrix whose elements depend only on (u,v) and (u’,v’), and B is
a 4x1 vector whose elements contain the parameters of the camera configuration.
The expression of matrix P is:
By means of equation (33) it is possible to invert the problem described by eqs. (20) and to
recover the position of a generic point starting from its camera images, by means of the
pseudoinverse matrix P+ of matrix P.
(34)
By means of equation (34), it is possible to solve the stereoscopic problem in all
configurations in which the following condition is verified:
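With NumPy, the pseudoinverse solution of equation (34) is a single line; the 4x3 matrix P and the vector B below contain made-up numbers, since the real entries depend on the camera parameters and on (u,v), (u',v'):

```python
import numpy as np

# Made-up stereoscopic system: 4 equations from two cameras, 3 unknowns
P = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1],
              [1.0, 0.1, 0.0],
              [0.1, 1.0, 0.0]])
w_true = np.array([0.3, -0.2, 1.0])   # point to be recovered (assumed)
B = P @ w_true                        # consistent right-hand side

w = np.linalg.pinv(P) @ B             # eq. (34): w = P+ B
```

Since P has more rows than columns, P+ B is the least-squares solution; when the four equations are consistent, it recovers the point exactly.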
In order to calibrate the cameras, a toolbox developed by Christopher Mei (INRIA Sophia-
Antipolis) was used. By means of this toolbox it is possible to find the intrinsic and extrinsic
parameters of the two cameras that are necessary to solve the stereoscopic problem. In order to carry
out the calibration of a camera, it is necessary to acquire a number of images of the observed space
in which a checkerboard pattern is placed with different positions and orientations (fig. 7).
In each acquired image, after clicking on the four extreme corners of a rectangular checkerboard
area, a corner-extraction engine automatically counts the number of squares in the grid.
These points are used as calibration points (fig. 8).
Fig. 8. Calibration image.
After corner extraction, calibration is done in two steps: first initialization, and then
nonlinear optimization.
The initialization step computes a closed-form solution for the calibration parameters,
not including any lens distortion.
The nonlinear optimization step minimizes the total reprojection error (in the least-squares
sense) over all the calibration parameters: 9 DOF for the intrinsic parameters (focal length (2),
principal point (2), distortion coefficients (5)) and 6n DOF for the extrinsic parameters, with
n the number of images.
The calibration procedure makes it possible to find the 3-D position of the grids with respect to the
camera, as shown in fig. 9.
transducers, dimensional errors, and so on. The last sources of error essentially depend on
the correct evaluation of the Denavit-Hartenberg parameters. Hence, some of the
sources of error can be limited by means of kinematic calibration.
Basically, kinematic calibration assumes that, if the error in the positioning of the
robot’s end effector is evaluated at some points of the working space, these error
evaluations make it possible to predict the error in any other position and thus to offset it.
In short, the main aim of the technique presented in this chapter is to obtain precise
evaluations of those Denavit-Hartenberg parameters that represent, for each link, the
length, the twist and the offset.
(35)
(36)
By means of such matrices it is possible to obtain the transform matrix that gives the
coordinates in frame 0 (the fixed one) from those in frame n (the frame of the last link):
(37)
(38)
(40)
with N≥ 9 .
Since, for each of the camera images, the unknown Denavit-Hartenberg parameters are the
same, equations (40) represent a system of N nonlinear equations in 9 unknowns. This
system can be numerically solved by means of a least-squares technique.
It is known that a least-squares problem can be formulated as follows:
given equation (39), find the solutions that minimize the expression:
(41)
This method can be simplified by substituting the integrals with summations; thus, the
vector that minimizes the following expression must be computed:
(42)
If the problem is formulated in this way, the higher the number of images taken (hence the
more known parameters), the more accurate the solution will be; it is therefore necessary
to take a sufficiently large number of pictures.
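As a toy illustration of the summation form (42), and not of the actual 9-parameter Denavit-Hartenberg problem, consider estimating the two link lengths of a planar 2R arm from N measured end-effector positions; since this model is linear in the lengths, the least-squares minimisation reduces to a single solver call (all numbers are invented):

```python
import numpy as np

# Toy kinematic calibration: estimate two link lengths (stand-ins for the
# Denavit-Hartenberg length parameters) from N noisy position measurements.
rng = np.random.default_rng(0)
a_true = np.array([0.50, 0.35])                # assumed true lengths [m]
theta = rng.uniform(0, np.pi, size=(12, 2))    # N = 12 joint configurations

# Regressor: end-effector position = a1*[c1, s1] + a2*[c12, s12]
c1, s1 = np.cos(theta[:, 0]), np.sin(theta[:, 0])
c12 = np.cos(theta[:, 0] + theta[:, 1])
s12 = np.sin(theta[:, 0] + theta[:, 1])
A = np.zeros((24, 2))
A[0::2, 0], A[0::2, 1] = c1, c12               # x components
A[1::2, 0], A[1::2, 1] = s1, s12               # y components
b = A @ a_true + rng.normal(0, 1e-5, 24)       # simulated noisy measurements

a_est, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimise the sum of squares
```

Adding more configurations (more rows in A) reduces the effect of the measurement noise, which is exactly the point made above about taking a sufficiently large number of pictures.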
6. Trajectories recording
Trajectory recording, which is essential to study the dynamic behaviour of a robot arm, has been
obtained by means of two digital cameras linked to a PC.
The rig that has been developed is based on a pair of cameras; it allows us to obtain
the velocity vector of each point of the manipulator. By means of this rig it is possible:
- to control the motion, giving the instantaneous joint positions and velocities;
- to measure the motions between link and servomotor in the presence of non-rigid
transmissions;
- to identify the dynamic parameters of the robot arm.
An example of these video applications for robot arms is the video acquisition of robot-arm
trajectories in the workspace by means of the techniques reported above.
Figures 11 and 12 show a pair of frames from the right camera and the left one, respectively.
Fig. 13 shows the 3-D trajectory obtained from the aforementioned frames; in this last figure,
for comparison, the trajectory obtained from the encoder signals is also reported.
7. References
R. Brancati, C. Rossi, S. Scocca: Trajectories Recording for Investigations on Robot Dynamics and
Mechanical Calibrations – Proc. of 10th int. Workshop RAAD 2001, Wien, May 16-19, 2001
V. Niola, C. Rossi: A method for the trajectories recording of a robot arm: early results - Proc.
9th WSEAS International Conference on Computers, Athens, 11-16 July 2005.
V. Niola, C. Rossi: Video acquisition of a robot arm trajectories in the work space - WSEAS
Transactions on Computers, Issue 7, Vol. 4, July 2005, pp. 830-836.
G. Nasti, V. Niola, S. Savino: Experimental Study on Vision Motion Measures in Robotics - 9th
WSEAS International Conference on Computers, Vouliagmeni, Athens, Greece, July
11-16, 2005.
V. Niola, C. Rossi, S. Savino: Modelling and Calibration of a Camera for Trajectories
Recording -Proc. 5th WSEAS Int. Conf. on Signal Processing, Robotics and Automation
– Madrid, February 15-17, 2006
V. Niola, C. Rossi, S. Savino: A Robot Kinematic Calibration Technique -Proc. 5th WSEAS Int.
Conf. on Signal Processing, Robotics and Automation – Madrid, February 15-17, 2006
A. Fusiello: Visione Computazionale: appunti delle lezioni – Informatic Department,
University of Verona, 3 March 2005.
R. Sharma, S. Hutchinson: Motion perceptibility and its application to active vision-based
servo control - Technical Report UIUC-BI AI RCV-94-05, The Beckman Institute,
University of Illinois, 1994.
R. Sharma: Active vision for visual servoing: a review - IEEE Workshop on Visual Servoing:
Achievements, Applications and Open Problems, May 1994.
R. Sharma, S. Hutchinson: Optimizing hand/eye configuration for visual-servo system -
IEEE International Conference on Robotics and Automation, pp. 172-177, 1995.
C. Mei: Camera Calibration Toolbox for Matlab -http://www.vision.caltech.edu/
bouguetj/calib_doc.
V. Niola, C. Rossi, S. Savino – “Modelling and calibration of a camera for robot trajectories
recording” -Proc. 5th WSEAS Int. Conf. on “Signal Processing, Robotics and
Automation” – Madrid, February 15-17, 2006.
R. Brancati, C. Rossi, S. Savino, A Method for Trajectory Planning in the Joint Space, Proc. of
14th International Workshop on Robotics in Alpe-Adria-Danube Region, Bucharest,
Romania, May 26-28, 2005, pp.81-85.
R. Sharma, S. Hutchinson, On the observability of robot motion under active camera control,
Proc. IEEE International Conference on Robotics and Automation, May 1999, pp. 162-
167.
J. T. Feddema, C. S. George Lee, O. R. Mitchell, Weighted selection of image features for
resolved rate visual feedback control, IEEE Trans. Robot. Automat., Vol. 7, 1991, pp.
31-47.
V. Niola, C. Rossi, S. Savino – “Perspective Transform and Vision System for Robot
Trajectories Recording” -Proc. 5th WSEAS Int. Conf. on “Signal Processing, Robotics
and Automation” – Madrid, February 15-17, 2006.
15
Kinematic Control of a Robot-Positioner System for Arc Welding Application
1. Introduction
Welding is one of the most successful applications for industrial robots, which encourages
intensive research and development in the area of CAD-based and off-line programming
(Pires et al., 2003; Yagi, 2004). At present, relevant software tools make it possible to optimise the
process parameters, which directly influence the product quality and the system working
cycle. Besides, they enable users to complete most of the process preparation actions in
advance, without access to the workcell, and therefore, to make the robotic systems
competitive for both large and small series, or even for unique products (Ranky, 2004).
However, recent advances in arc welding technology motivate the rethinking of some
postulates and conventions incorporated in the existing off-line programming methods. One
related problem, the kinematic control of a robot-positioner system, is addressed in this
chapter.
The welding position (or, more precisely, the weld joint orientation relative to gravity) is an
essential issue in both manual and robotic welding, associated with the problem of the weld
puddle control. As is known from long-term experience, the highest quality and productivity
are achieved for the downhand (or flat) welding position, where the workpiece is oriented so
that the weld tangent line is horizontal, and the weld normal vector is the opposite of the
gravity direction (Cary 1995). This orientation is preferable because gravity draws the molten
metal downward into the joint allowing it to flow appropriately along the weld contour, which
makes the welding faster and easier. For this reason, the downhand welding has been adopted
in robotics as a de facto standard (Bolmsjo 1987, Fernandez and Cook 1988).
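The downhand condition can be expressed as two angular deviations that both vanish for a perfect flat position; the helper below is hypothetical (gravity is assumed along -z):

```python
import numpy as np

g = np.array([0.0, 0.0, -1.0])       # assumed gravity direction (z up)

def downhand_deviation(tangent, normal):
    """Return (slope, roll) in degrees: slope is the weld tangent's
    deviation from horizontal, roll is the weld normal's deviation from
    the upward vertical. Both are zero for a perfect downhand position."""
    t = tangent / np.linalg.norm(tangent)
    n = normal / np.linalg.norm(normal)
    slope = np.degrees(np.arcsin(abs(np.dot(t, g))))
    roll = np.degrees(np.arccos(np.clip(np.dot(n, -g), -1.0, 1.0)))
    return slope, roll

# A horizontal weld tangent with an upward normal: perfectly downhand
slope, roll = downhand_deviation(np.array([1.0, 0.0, 0.0]),
                                 np.array([0.0, 0.0, 1.0]))
```

Relaxing the downhand constraint, as discussed later in the chapter, amounts to tolerating non-zero values of these two angles.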
To ensure the desired weld orientation, a typical robotic arc welding station (Fig. 1) includes
two separate moving mechanisms: (i) a five- or six-axis industrial robot (welding tool
manipulator) aimed at moving the weld torch with the required speed and orientation
relative to the weld joint; and (ii) a two- or three-axis positioning table (workpiece
manipulator), which ensures the downhand (or close to it) orientation of the weld joint with
respect to gravity.
In contrast to the robot, a skilled human welder is capable of performing such operations in
other positions, such as the horizontal, vertical, or overhead ones. To make this type of
welding easier, several companies have recently proposed technological innovations, such as
flux-cored wires, which create a protective coating that supports the metal against gravity.
This makes it possible to enlarge the area of non-downhand welding and to employ
industrial robots for it (Tolinski 2001). Besides, recent advances in computer vision have
allowed enhancements in robot ability for welding-puddle control (Tarn et al. 2004). Hence,
the existing kinematic control techniques for arc welding robots must be revised in order to
relax the downhand constraint.
2. Related Works
Automatic programming for robotic arc welding incorporates a number of nontrivial steps,
ranging from specifying a manufacturing task to planning the robot-positioner motions
(Kim et al. 1998, Bolmsjo 2002). They include, in particular, the welding feature extraction
from CAD data, welding task planning and sequencing, coordinated robot-positioner
control and numerous implementation issues (interfacing with the operator/controller,
weldseam tracking, workcell calibration, etc.).
Since the beginning of robotic arc welding, many related studies focused on the
kinematic control of the robot-positioner system. Theoretically, arc welding requires
(5+2)-axis manipulation, needed for the positioning/orienting of the weld torch and
the weld joint, respectively. However, because a standard industrial robot has 6
degrees-of-freedom, relevant papers cover topics from (6+2)- to (7+3)-axis kinematical
structures. Accordingly, in all cases, the kinematic control problem is solved by
imposing the following task-specific constraints:
(i) five constraints on the torch position and orientation, defined by three Cartesian
coordinates of the welding tip and two Euler angles of the plasma flow-line;
(ii) two constraints on the weld joint orientation relative to gravity, defined as the verticality
of the weld normal line (‘downhand welding’).
It is implicitly assumed here that both the torch rotation about the plasma flowline and the
weld joint rotation about the vertical line are irrelevant to the welding process.
For the typical (6+2)-case, corresponding to a six-axis robot and a two-axis balance or tilt-roll
positioner, the problem has been studied by several authors (Bolmsjo 1987, Fernandez and
Cook 1988, Nikoleris 1990, Kim et al. 1998). The common approach is based on the strictly
downhand solution for the positioner inverse kinematics and the augmented solutions for
the robot inverse kinematics, which depends on an arbitrary scalar parameter. Then, this
free parameter is used for singularity or collision avoidance, optimisation of the
manipulability, increase in robotic tool reach, etc.
Kinematic Control of a Robot-Positioner System for Arc Welding Application 295
These results were extended to the (7+2)-case by Ahmad and Luo (1989), whose study
focused on a six-axis robot mounted on a rail and
a two-axis positioner; they also used the extra degrees of freedom to keep the robot out of
singular configurations and to increase its reach. Recently, Wu et al. (2000) applied the
genetic algorithm technique to solve a similar (7+3)-problem.
The positioner inverse kinematics, incorporated in the above control methods, was solved
mainly for the strictly downhand weld orientation with respect to gravity. This essentially
simplified the analytical expressions but was in certain disagreement with engineering
practice that admits some tolerances. Hence, several authors considered more general
formulations. In particular, Bolmsjo (1987) and Nikoleris (1990) stated the problem as
aligning of any weld-associated and any given gravity-associated vectors. For this case,
numerical and analytical solutions were obtained for both the roll-tilt and balance
positioners. Later, the problem was reformulated by the authors (Pashkevich et al., 2003)
and solved in terms of the weld slope-roll angles; the least-square solutions were also
obtained for the case when exact ones do not exist.
In spite of common understanding, only Kim et al. (1998) have directly addressed the
issue of the downhand-orientation tolerancing and its relation with the weld quality.
These authors introduced the weld ‘stability’ metrics, thus allowing admissible
orientations for the welding tool and welding joint (defined by work/ travel and
slope/rotation angles, respectively) to be computed. The open question, however, is
assigning reasonable ‘stability limits’ to obtain the required quality, which obviously
needs a welding expert. Besides, no judgements on variations of the welding speed
within the stability region have been proposed.
Another active area of the related research is the arc welding operations planning and
sequencing. These works concentrate on specific non-trivial cases of the travelling-salesman
problem (TSP) known from combinatorial optimisation (see Gutin and Punnen, 2002 for
details and latest results). For arc welding, the scheduling problem with the minimum-time
objective was first addressed by Rubinovitz and Wysk (1988), who suggested a heuristic
algorithm based on the classical TSP nearest-neighbour method. Then, Fukuda and Yoshikawa
(1990) applied to this problem a neural network technique focusing on reduction in the
welding distortions. Later, the TSP-based method was enhanced by Kim et al. (1998, 2002a),
who proposed several heuristics and genetic algorithms, which avoid the distortions caused
by heat by imposing the problem-specific non-precedence constraints. In another paper, Kim
et al. (2002b) reformulated the heat-affected zone constraint by setting the lower bound on the
travel time between two consecutive welding operations (‘cooling time’) and proposed several
heuristics based on the nearest-neighbour, 2-opt, 3-opt, and tabu search methods. Grenestedt
(2003) applied simulated annealing to this problem in combination with approximate
analytical distortion models. In the latest paper by Kim et al. (2005), several enhanced
welding sequencing heuristics were proposed, which also adopt the ‘cooling time’ concept.
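A minimal sketch of the nearest-neighbour idea with the 'cooling time' lower bound on the travel between consecutive welds; the data and function names are invented for illustration:

```python
def sequence_welds(travel, weld_time, cooling):
    """Nearest-neighbour heuristic for ordering welds: greedily pick the
    next weld with the smallest effective travel time, where each move is
    lower-bounded by the cooling time of the weld just completed."""
    n = len(weld_time)
    order, current = [0], 0
    remaining = set(range(1, n))
    total = weld_time[0]
    while remaining:
        nxt = min(remaining,
                  key=lambda j: max(travel[current][j], cooling[current]))
        total += max(travel[current][nxt], cooling[current]) + weld_time[nxt]
        order.append(nxt)
        remaining.discard(nxt)
        current = nxt
    return order, total

# Made-up symmetric travel times [s] between three welds
travel = [[0, 2, 9], [2, 0, 3], [9, 3, 0]]
order, total = sequence_welds(travel, weld_time=[5, 4, 6], cooling=[1, 1, 1])
```

Taking the maximum of the travel time and the cooling time implements the lower bound: a fast move to a nearby weld still waits until the heat-affected zone has cooled.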
This chapter summarises the authors’ results in the area of the robotic arc welding (Pashkevich et
al., 2003; Dolgui and Pashkevich, 2006) and presents techniques for both the closed-form inverse
kinematics solutions and optimal planning of the welding sequences. It contributes to the
research originated from the papers of Bolmsjo (1996), Nikoleris (1990) and Kim et al. (1998),
assuming that the downhand constraint is relaxed and implicitly converted into the penalty
function, which increases the welding time depending on the ‘non-downhand’ degree. The
objective is to minimize the overall manufacturing cycle, by finding a reasonable trade-off
between the positioner motion time and the time required for the welding itself.
[Figure: hierarchy of the robot-positioner control levels]
Level 4 - Robot/Positioner Coordinated Control (inputs: weld sequence, geometrical data; outputs: world locations)
Level 3 - Single Movement Cartesian Control (inputs: kinematical data; outputs: axis coordinates)
Level 2 - Single Movement Axis-Space Control (inputs: axis limits; outputs: encoder pulses)
Level 1 - Servo-Control, Encoder Space (inputs: motor parameters)
(For some manipulators, the interrelation between the joint axis angle and the motor
shaft angle is non-trivial.) At present, control techniques for all the abovementioned
hierarchical levels are being intensively developed. For instance, advanced commercial
controllers already include forward dynamic models, which essentially improve the
operational speed and accuracy. However, various aspects of the fourth and fifth
control levels are still the subject of intensive research.
Fig. 3. Definition of the weld frames for linear (a) and circular (b) welds.
It should be noted that, in practice, it is prudent to define the Yw -axis as the bisectrix of the
corresponding weld joint surfaces.
Taking into account the above definitions, the kinematic model of the linear weld relative to
the WB-frame (i.e., the workpiece base frame, see Fig. 3a) can be described by the following
homogeneous parametric equation:

$$ {}^{WB}W(l) = \begin{bmatrix} n_w^s & s_w^s & n_w^s \times s_w^s & p_w^s + n_w^s\, l \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad l \in [0, L], \qquad (1) $$

where l is the distance measured from the weld starting point and L is the weld length. It
should be stressed that the vectors $n_w^s, s_w^s, p_w^s$ are defined relative to
the WB-frame and, in practice, they are easily derived from the workpiece 3D CAD model.
For the circular joints, a similar approach is used, but the moving frame is computed to
ensure the tangency of the welding path and the Xw-axis at every point (Fig. 3b). It is evident
that the initial frame is subject to a rotational transformation and the weld kinematics is
described by the following parametric equation:

$$ {}^{WB}W(l) = \begin{bmatrix} R_e(\vartheta)\, R_s & R_e(\vartheta)\,(p_w^s - p_e) + p_e \\ 0 & 1 \end{bmatrix}, \qquad (2) $$

where the parameter l and the sub/superscripts ‘‘WB’’, ‘‘w’’, ‘‘s’’ have the same meaning as
in (1), the orthogonal 3×3 matrix $R_s = [\, n_w^s \;\; s_w^s \;\; n_w^s \times s_w^s \,]$ defines the
orientation of the weld frame at the starting point, r is the radius of the circular welding
joint, $\vartheta = l/r$ is the angle of rotation, the vector $p_e$ defines the position of the circle centre,
and $R_e(\vartheta)$ is the general rotation matrix (Fu et al., 1987) around the axis determined
by the unit vector $e = [e_x, e_y, e_z]^T$ (see Fig. 3b):

$$ R_e(\vartheta) = I\cos\vartheta + e\,e^T (1-\cos\vartheta) + \hat e \sin\vartheta, \qquad \hat e = \begin{bmatrix} 0 & -e_z & e_y \\ e_z & 0 & -e_x \\ -e_y & e_x & 0 \end{bmatrix}. \qquad (3) $$
As in the previous case, the required vectors nws, sws, pws and e, pe, as well as the
radius r, may also be easily derived from the workpiece 3D model using the capabilities of
modern graphical simulation systems to generate straight lines, planes and circles.
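The linear and circular parameterisations above are straightforward to prototype. The sketch below is a plain-Python illustration (the helper names and list-based matrices are ours, not from the chapter); the rotation about the axis e uses the standard axis-angle formula:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def mat_vec(m, x):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][j]*x[j] for j in range(3)) for i in range(3)]

def rot_axis(e, t):
    """Rotation matrix about the unit axis e by angle t (standard axis-angle form)."""
    c, s, v = math.cos(t), math.sin(t), 1.0 - math.cos(t)
    ex, ey, ez = e
    return [[ex*ex*v + c,    ex*ey*v - ez*s, ex*ez*v + ey*s],
            [ex*ey*v + ez*s, ey*ey*v + c,    ey*ez*v - ex*s],
            [ex*ez*v - ey*s, ey*ez*v + ex*s, ez*ez*v + c]]

def linear_weld_frame(n_ws, s_ws, p_ws, l):
    """Moving frame of a linear weld at distance l from the start: the
    orientation is constant, the origin slides along the welding direction."""
    cols = [n_ws, s_ws, cross(n_ws, s_ws)]
    p = [p_ws[i] + l*n_ws[i] for i in range(3)]
    return cols, p

def circular_weld_frame(n_ws, s_ws, p_ws, e, p_e, r, l):
    """Moving frame of a circular weld: the starting frame and point are
    rotated by theta = l/r about the axis e through the circle centre p_e."""
    R = rot_axis(e, l/r)
    cols = [mat_vec(R, v) for v in (n_ws, s_ws, cross(n_ws, s_ws))]
    d = mat_vec(R, [p_ws[i] - p_e[i] for i in range(3)])
    p = [d[i] + p_e[i] for i in range(3)]
    return cols, p
```

For a circle of radius r centred at p_e with axis e, the returned origin stays at distance r from the centre, and the first frame column remains tangent to the path.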
Thereby, expressions (1)–(3) completely define the spatial location (i.e., the position and the
orientation) of the weld joint relative to the WB-frame (workpiece base), which should be
adjusted by the positioner to optimise the weld orientation relative to gravity (see Fig. 1).
Hence, the absolute (world) location of the weld joint is described by the product of
homogeneous matrices

$$ {}^{0}W(l) = \left[\, {}^{0}T_{PB} \cdot P(q) \cdot {}^{PF}T_{WB} \,\right] \cdot {}^{WB}W(l) \qquad (4) $$
where the left superscript ‘‘0’’ refers to the world coordinate system, the matrix 0TPB defines the
absolute (world) location of the positioner base PB; the matrix PFTWB describes the workpiece base
WB location relative to the positioner mounting flange PF; and the matrix function P(q) is the
positioner direct kinematic model, while q is the vector of the positioner joint coordinates.
To ensure good product quality and to increase the welding speed, the weld joint should be
properly oriented relative to the gravity. The exact interrelations between these parameters
are not sufficiently well known and require empirical study in each particular case. But
practising engineers have developed a rather simple rule of thumb that is widely used for
both the online and off-line programming: ‘‘the weld should be oriented in the horizontal plane so
that the welding torch is vertical, if possible’’ (Bolmsjo, 1987). It is obvious that the CAD-based
approach requires numerical measures of the ‘‘horizontality’’ and the ‘‘verticality’’, which
are proposed below.
Kinematic Control of a Robot-Positioner System for Arc Welding Application 299
Let us assume that the Z0-axis of the world coordinate system is strictly vertical (i.e. directed
opposite to the gravity vector), and, consequently, the X0Y0-plane is horizontal. Then, the weld
orientation relative to the vector of gravity can be completely defined by two angles (Fig. 4):
• The weld slope θ∈[-π/2; π/2], i.e. the angle between the vector of the welding
direction 0nw and the Cartesian plane X0Y0;
• The weld roll ξ∈(-π; π], i.e. the angle between the vector of the approaching direction
0sw and the vertical plane that is parallel to the vector 0nw and the Cartesian axis Z0.
Fig. 4. Definition of the weld orientation angles.
The numerical expressions for θ, ξ can be obtained directly from the definition of the RPY-angles
(Fu et al., 1987), taking into account that the weld orientation (θ, ξ) = (0, 0) corresponds to the
horizontal direction of the axis Xw and the vertical direction of Yw (see Fig. 4):
$$ {}^{0}W_R = R_z(\psi) \cdot R_y(\theta) \cdot R_x(\pi/2 - \xi) \qquad (5) $$
where 0WR is the 3 × 3 orientation submatrix of the 4 × 4 matrix of the weld location; Rx; Ry; Rz are
the 3 × 3 rotation matrices around the axes X; Y; Z; respectively, and ψ is the yaw angle which is
non-essential for the considered problem. Multiplication of these matrices leads to
$$ {}^{0}W_R = \begin{bmatrix} C_\psi C_\theta & C_\psi S_\theta C_\xi - S_\psi S_\xi & C_\psi S_\theta S_\xi + S_\psi C_\xi \\ S_\psi C_\theta & S_\psi S_\theta C_\xi + C_\psi S_\xi & S_\psi S_\theta S_\xi - C_\psi C_\xi \\ -S_\theta & C_\theta C_\xi & C_\theta S_\xi \end{bmatrix} $$
where C and S denote respectively cos (.) and sin(.) of the corresponding angle specified at
the subscript. Therefore, the weld joint orientation angles θ, ξ can be derived as follows:
$$ \theta = \operatorname{atan2}\!\left( -{}^{0}n_w^z,\; \sqrt{({}^{0}s_w^z)^2 + ({}^{0}a_w^z)^2} \right); \qquad \xi = \operatorname{atan2}\!\left( {}^{0}a_w^z,\; {}^{0}s_w^z \right) \qquad (6) $$
where 0nw, 0sw, 0aw are the corresponding column vectors of the orthogonal matrix 0WR.
Taking into account interrelations between these vectors, the angles θ, ξ can be finally
expressed as functions of the weld joint direction 0nw and approaching direction 0sw
$$ \theta = \operatorname{atan2}\!\left( -{}^{0}n_w^z,\; \sqrt{({}^{0}n_w^x)^2 + ({}^{0}n_w^y)^2} \right); \qquad (7) $$
$$ \xi = \operatorname{atan2}\!\left( {}^{0}n_w^x \cdot {}^{0}s_w^y - {}^{0}n_w^y \cdot {}^{0}s_w^x,\; {}^{0}s_w^z \right) \qquad (8) $$
It should be noted that it is possible to introduce an alternative definition of the weld roll,
which is non-singular for all values of the weld slope. It is ξ′ ∈ [0; π], the angle between the
approach direction 0sw and the vertical axis Z0 (see Fig. 4):
$$ \xi' = \operatorname{atan2}\!\left( \sqrt{({}^{0}s_w^x)^2 + ({}^{0}s_w^y)^2},\; {}^{0}s_w^z \right) \qquad (9) $$
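These slope/roll formulas can be verified numerically by a round trip through Eq. (5): build the orientation matrix from chosen angles, take its n- and s-columns, and recover the angles. The sketch below uses our own plain-Python helpers:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def weld_angles(n, s):
    """Slope, roll and alternative roll of a weld whose world-frame direction
    vector is n and approach vector is s (Eqs. (7)-(9))."""
    theta = math.atan2(-n[2], math.hypot(n[0], n[1]))
    xi = math.atan2(n[0]*s[1] - n[1]*s[0], s[2])      # numerator equals a_z
    xi_alt = math.atan2(math.hypot(s[0], s[1]), s[2])
    return theta, xi, xi_alt

# Round trip: build 0WR = Rz(psi)*Ry(theta)*Rx(pi/2 - xi), then recover angles.
psi, theta, xi = 0.3, 0.5, 0.7
W = mat_mul(rot_z(psi), mat_mul(rot_y(theta), rot_x(math.pi/2 - xi)))
n = [W[i][0] for i in range(3)]   # welding direction (1st column)
s = [W[i][1] for i in range(3)]   # approach direction (2nd column)
```

The recovered angles match (θ, ξ) = (0.5, 0.7), and the alternative roll agrees with the angle between the approach vector and the vertical axis.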
As in the case of the angles (θ, ξ), the description (θ, ξ′) also defines the 3rd row of the weld joint
orientation matrix 0WR, but the sign of $a_w^z$ may be chosen arbitrarily. Hence, the
interrelation between the two definitions of the roll angle, ξ and ξ′, is given by the equation

$$ \xi' = \arccos( \cos\theta \cdot \cos\xi ). \qquad (10) $$
$$ {}^{0}s_w = \left[\, {}^{0}T_{PB} \cdot P(q) \cdot {}^{PF}T_{WB} \,\right]_{3\times3} \cdot s_w, \qquad (11) $$
allow achieving the desired weld slope and roll simultaneously. Therefore, the second formulation of
the inverse problem is less reasonable from a technological point of view.
The only case when the second formulation is sensible is the “optimal weld orientation”, for
which the approach vector is strictly vertical and, consequently, the weld direction vector 0nw
lies in the horizontal plane. But the first formulation also successfully tackles this case, since it
corresponds to (θ, ξ) = (0, 0). However, the second formulation can be successfully applied in
the case that is singular for the first approach (θ = ±π/2), when defining the roll angle does not
make sense. For this reason, both formulations of the inverse problem are considered below.
While applying the inverse formulation to real-life problems, it should also be taken into
account that the engineering meaning of the slope and the roll is not sensitive to the sign of these
angles. For instance, a negative slope can easily be replaced by a positive one if the weld
starting and ending points are interchanged. Also, the positive and negative rolls are equivalent
with respect to gravity forces. Therefore, four cases (±θ, ±ξ) must be investigated while
orienting the weld joint. A similar conclusion is valid for the alternative definition of the weld
orientation angles (θ, ξ′), where ξ′ > 0 but the two cases (±θ, ξ′) yield four different matrices WR.
Fig. 5. The two-axis balance (a) and roll-tilt (b) positioners.
While building the positioner model, it should be taken into account that the intersection point of
the axes may be located above the faceplate, to be closer to the workpiece centre of gravity (Fig. 5a).
Such a design avoids the large payload moments specific to heavy and bulky objects. But in
some cases this point may lie below the faceplate (Fig. 5b). For this reason, it is prudent to relax the
usual constraint that locates the positioner base frame at the intersection of its two axes.
The kinematic model of a general 2-axis positioner is presented in Fig. 6. It includes four linear
parameters (a1, d1, a2, d2) and one angular parameter α that defines the direction of Axis1. Without
loss of generality, Axis2 is assumed to be normal to the faceplate and directed vertically for
q1 = 0. The geometrical meanings of the parameters are clear from the figure.
Similar to other manipulators, the kinematics of a positioner can be described by the
Denavit-Hartenberg model (Fu et al., 1987). However, for the considered 2-axis system
it is more convenient to use a product of elementary transformations that can be
derived directly from Fig. 6:
where, similarly to Section 2, the vectors n, s, a, p define the upper 3×4 block of the matrix P, and
C, S, V denote respectively cos(.), sin(.), vers(.) of the angle specified at the subscript. It should be
noted that, compared to the model proposed by G. Bolmsjo (1987), the developed one includes
fewer geometrical parameters while still describing the general case. Besides, the obtained
expressions are less awkward and more computationally efficient than the known ones.
Therefore, expressions (13)-(16) completely define the direct kinematics of the 2-axis positioner.
But the obtained model can also be reduced to describe the kinematics of a general 1-axis
mechanism. This is achieved by fixing Axis1 or Axis2 and choosing an appropriate value of α.
For instance, for turntables the axis variable is q2 while q1 = 0, but for turning rolls the axis
variable is q1 while q2 = 0 and α = 0.
Kinematic Control of a Robot-Positioner System for Arc Welding Application 303
$$ \eta^T \cdot {}^{0}W_R = \eta^T \cdot \left[\, {}^{0}T_{PB} \cdot P(q) \cdot {}^{PF}T_{WB} \,\right]_{3\times3} \cdot W_R, \qquad (17) $$
where the subscript ‘‘3 × 3’’ denotes the rotational part of the corresponding homogenous
transformation matrix, and ηT=[0 0 1]. Then, after appropriate matrix multiplications, it can
be converted to the form
$$ v^T = \eta^T \cdot P(q)_{3\times3}, \qquad (18) $$

where the matrix 0TPB is assumed not to include rotational components other than Rz. Further
substitution in accordance with (13) yields three mutually dependent scalar equations in
two unknowns (q1, q2):
where vx, vy, vz are the corresponding components of the vector v. The third of these
equations can be easily solved for q1:
$$ q_1 = \pm \arccos \frac{ v_z - S_\alpha^2 }{ C_\alpha^2 } \qquad (20) $$
The value of q2 can be found by solving the first and the second equations for C2 and S2:

$$ q_2 = \operatorname{atan2}\!\left( S_1 v_x - S_\alpha V_1 v_y,\; S_1 v_y + S_\alpha V_1 v_x \right) \qquad (21) $$
Therefore, Eqs. (20) and (21) represent the closed-form solution of the first inverse problem,
which in the general case, for given weld orientation angles (θ, ξ) or (θ, ξ′), yields two pairs
of the positioner axis angles (q1, q2).
$$ {}^{0}W_R \cdot \eta = \left[\, {}^{0}T_{PB} \cdot P(q) \cdot {}^{PF}T_{WB} \,\right]_{3\times3} \cdot W_R \cdot \eta, \qquad (22) $$

where η = [0 1 0]T. Then, after appropriate matrix multiplications, this equation can be
converted to the form
$$ u = P(q)_{3\times3} \cdot w, \qquad (23) $$
where

$$ w = \left[\, {}^{PF}T_{WB} \cdot W \,\right]_{3\times3} \cdot \eta; \qquad u = \left[\, {}^{0}T_{PB} \,\right]_{3\times3}^{T} \cdot {}^{0}W_R \cdot \eta, $$

and the subscript ‘‘3×3’’ means the upper left 3×3 submatrix of the corresponding
homogeneous matrix (i.e. its orthogonal rotational part).
Further expansion of P(q) in accordance with (12) and relevant regrouping yields

$$ \left[ R_y(-\alpha) \cdot R_x(q_1) \cdot R_y(\alpha) \right]_{3\times3} \cdot u = \left[ R_z(q_2) \right]_{3\times3} \cdot w, \qquad (24) $$
$$ \begin{aligned} (1 - S_\alpha^2 V_1)\,u_x + S_\alpha S_1\,u_y + S_\alpha C_\alpha V_1\,u_z &= C_2 w_x - S_2 w_y \\ -S_\alpha S_1\,u_x + C_1\,u_y + C_\alpha S_1\,u_z &= S_2 w_x + C_2 w_y \\ S_\alpha C_\alpha V_1\,u_x - C_\alpha S_1\,u_y + (1 - C_\alpha^2 V_1)\,u_z &= w_z \end{aligned} \qquad (25) $$
from which the third one can be transformed to the form

$$ C_\alpha u_{xz} \cdot C_1 + C_\alpha u_y \cdot S_1 = (u_z - w_z) + C_\alpha u_{xz} $$

and solved for q1:

$$ q_1 = \operatorname{atan2}\!\left( u_y,\; u_{xz} \right) \pm \arccos \frac{ (u_z - w_z) + C_\alpha u_{xz} }{ C_\alpha \sqrt{u_{xz}^2 + u_y^2} }, \qquad (26) $$
where $u_{xz} = S_\alpha u_x - C_\alpha u_z$. It should be noted that these two alternative solutions for q1
correspond to different “configurations” of the positioner, which are strictly defined below.
Besides, both solutions must be adjusted to the feasible interval (-π, π], since the sum of
atan2(.) and acos(.) can be outside the mentioned limits.
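The ± structure of this solution is the standard resolution of an equation of the form A·cos q + B·sin q = D. A small sketch (the naming is ours) makes the two branches and the wrapping to (-π, π] explicit:

```python
import math

def solve_cos_sin(A, B, D):
    """Both solutions of A*cos(q) + B*sin(q) = D, wrapped to (-pi, pi].
    Structurally this mirrors Eq. (26): q = atan2(B, A) +/- acos(D/sqrt(A^2+B^2))."""
    rho = math.hypot(A, B)
    phi = math.acos(max(-1.0, min(1.0, D / rho)))   # requires |D| <= rho for exactness
    base = math.atan2(B, A)
    sols = []
    for m in (+1, -1):                              # m plays the role of the configuration index
        q = base + m*phi
        q = math.atan2(math.sin(q), math.cos(q))    # wrap into (-pi, pi]
        sols.append(q)
    return sols
```

Both returned values satisfy the original equation; the branch choice corresponds to the configuration index discussed below.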
To find the value of q2, let us consider the first two equations of system (25) and solve them
for C2 and S2:

$$ C_2 = \frac{ w_x v_x + w_y v_y }{ w_x^2 + w_y^2 }; \qquad S_2 = \frac{ w_x v_y - w_y v_x }{ w_x^2 + w_y^2 }, \qquad (27) $$

where

$$ v_x = u_x + S_\alpha ( S_1 u_y - V_1 u_{xz} ); \qquad v_y = C_1 u_y - S_1 u_{xz}. $$
It leads to the following expression for q2:

$$ q_2 = \operatorname{atan2}\!\left( w_x v_y - w_y v_x,\; w_x v_x + w_y v_y \right). \qquad (28) $$
Therefore, Eqs. (26) and (28) represent the closed-form solution of the second inverse
problem, which in the general case, for given unit vectors (u, w), yields two pairs of the
positioner axis angles (q1, q2).
kinematic equations are converted to an identity if uz = wz, so any value of q1 is a solution for
such input data. The corresponding condition can also be presented as the parallelism of the
vector u and Axis1, together with the equality of the z-components of u and w, i.e.

$$ u = [\pm C_\alpha \;\; 0 \;\; \pm S_\alpha]^T; \qquad v = [\ast \;\; \ast \;\; \pm S_\alpha]^T, \qquad (33) $$
To ensure definite computing of q1, it is additionally required that the acos argument in Eq. (26)
belongs to the interval [-1; 1]. After appropriate rearranging, this condition can be presented as
After denoting the angles between the vectors u, v and the Axis1, Axis2 as μ, η
and assuming that (μ, η) ∈ [0; π] × (0; π], this inequality can be rewritten as
additional output, configuration index M=±1 describing positioner posture, which is also used as
an additional input for the inverse transformation, to produce a unique result.
For inverse problem 1, the configuration index is defined trivially (see Eq. (20)), as the sign
of the coordinate q1:

$$ M_1 = \operatorname{sgn}(q_1). \qquad (37) $$
But for inverse problem 2, such an index must identify the sign of the second term only (see Eq.
(26)), so it should be defined as

$$ M_2 = \operatorname{sgn}\!\left( q_1 - \operatorname{atan2}( u_y,\; u_{xz} ) \right). \qquad (38) $$
From the geometrical point of view, the index M2 indicates the relative location of two planes passing
through Axis1. The first of them is obtained by rotating the X0Z0-plane around Axis1 by the angle
q1, and the second plane passes through Axis1 and the vector u. It should also be noted that the
index M2 substantially differs from the orientation index M5 = sgn(q5) traditional in robotics, which
describes the wrist configuration of the typical 6-d.o.f. manipulator.
$$ \xi' = \max\{\, 0;\; 2\alpha + \eta - \pi \,\}, \qquad (41) $$

the orientation of such welds can be essentially improved and brought closer to the optimal
one. The corresponding “suboptimal” solution is defined by the axis angles

$$ q_1 = \pi; \qquad q_2 = -\operatorname{atan2}\!\left( s_w^y,\; s_w^x \right), \qquad (42) $$

i.e. exact equalities are achieved for the first and the second equations of system (19), while
for the third one only the residual is minimised.
time” required for the torch and workpiece repositioning between the welding operations.
There are also several other time components related to the workpiece cooling and
downloading/uploading, torch-tip cleaning, equipment adjustment, maintenance, etc., but
these are beyond the scope of the welding process model considered here.
As stressed above, the arc time is minimal in the downhand case, and the torch speed should be
reduced for the “out-of-position” welding. Since the downhand location corresponds to θ = ξ = 0,
we propose approximating the welding speed reduction by the following expression:

$$ V(\theta, \xi) = \left( 1 + k_v\, \| (\theta, \xi)^T \| \right)^{-1} \cdot V_o \qquad (43) $$

where Vo is the downhand welding speed, kv is the scaling factor, and ||.|| denotes the
algebraic norm of the vector (θ, ξ)T. It is obvious that the definition of this norm for a particular
welding task is not trivial and must rely on expert judgements, which assign the
maximum allowable tolerances (θmax, ξmax) and the corresponding reduction factor kmax. An
example of such a definition is given in our paper (Dolgui and Pashkevich, 2005).
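Since the norm in (43) is task-specific, any concrete implementation has to pick one; the sketch below assumes a hypothetical weighted Euclidean norm scaled by the tolerances (θmax, ξmax):

```python
import math

def welding_speed(theta, xi, v0, k_v, theta_max=0.5, xi_max=0.5):
    """Out-of-position speed reduction of Eq. (43). The weighted Euclidean norm
    below is only a placeholder for a task-specific, expert-defined norm of
    (theta, xi); the tolerance values are illustrative."""
    norm = math.hypot(theta / theta_max, xi / xi_max)
    return v0 / (1.0 + k_v * norm)
```

At (θ, ξ) = (0, 0) the downhand speed Vo is recovered, and any deviation monotonically reduces it.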
The robot and positioner motion times highly depend on the welding task sequencing,
which prevents unreasonable movements of the torch and workpiece. For a single trajectory,
the motion time depends on the control mode (simultaneous or sequential), travel distance,
and velocity/acceleration limits incorporated in the path-planning algorithms. It can be
proved that the minimum time for a single joint movement (for both the robot and
positioner) is defined by the equation
$$ \tau = \begin{cases} |\Delta q| / \dot q_{max} + \dot q_{max} / \ddot q_{max}, & \text{if } |\Delta q| > \dot q_{max}^2 / \ddot q_{max} \\ 2 \sqrt{ |\Delta q| / \ddot q_{max} }, & \text{otherwise} \end{cases} \qquad (44) $$

where Δq is the joint displacement, and $\dot q_{max}$, $\ddot q_{max}$ are the velocity and acceleration limits.
Then, the robot motion time for a single torch displacement (simultaneous axis control) is
defined by the slowest axis drive
$$ t_R = \max_i \{ \tau_i \} \qquad (45) $$
where i=1,…6 is the axis number. It should be noted that the latter expression is valid, if the path
planning is performed in the manipulator joint-space. In the alternative case (the Cartesian-space
path planning), the index variable must be extended to i=0,…6, where τ0 is computed via the
Cartesian displacement and relevant velocity/acceleration constraints in a similar way.
In contrast, the positioner motion time for a single workpiece reconfiguration (sequential axis
control) is defined as the sum of the axis motion times, the pauses Δτi between the
successive axis movements and also the auxiliary time τR
$$ t_P = \sum_i \tau_i + \sum_i \Delta\tau_i + \tau_R, \qquad (46) $$
where τR is the time required for removing the welding torch to a safe location, to avoid possible
torch-workpiece collisions. As follows from this equation, it is preferable to avoid achieving
the downhand location for each weld individually, since the extra time required for the torch
removal may overcompensate the downhand welding benefits.
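Expressions (44)-(46) translate directly into code. In the sketch below (the function names and the pause-accounting convention are ours), the first branch of (44) is the trapezoidal velocity profile and the second the triangular one:

```python
import math

def joint_time(dq, v_max, a_max):
    """Minimum time of a single joint displacement dq under velocity and
    acceleration limits (Eq. (44)): trapezoidal profile if the speed limit
    is reached, triangular profile otherwise."""
    dq = abs(dq)
    if dq > v_max**2 / a_max:
        return dq / v_max + v_max / a_max
    return 2.0 * math.sqrt(dq / a_max)

def robot_time(dqs, v_max, a_max):
    """Simultaneous axis control (Eq. (45)): the slowest axis defines the time."""
    return max(joint_time(dq, v, a) for dq, v, a in zip(dqs, v_max, a_max))

def positioner_time(dqs, v_max, a_max, pause, t_retract):
    """Sequential axis control (Eq. (46)): axis times plus inter-axis pauses plus
    the torch retraction time. Charging one pause per non-zero axis move is a
    simplifying assumption of this sketch."""
    moves = [joint_time(dq, v, a)
             for dq, v, a in zip(dqs, v_max, a_max) if dq != 0]
    return sum(moves) + pause * len(moves) + t_retract
```

Note that the two branches of joint_time join continuously at |Δq| = v_max²/a_max, where both give 2·v_max/a_max.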
broken into several hierarchical stages (Kim et al., 1998). This decomposition implements a
problem-specific mechanism, which makes it possible to reduce the size of the related
combinatorial optimisation problems, while maintaining the productivity/quality
compromise. These stages are defined as follows:
(i) The weld seam clustering, which arranges the welds into groups that can be welded
without changing the positioner configuration (preferably in the downhand position,
within the allowable tolerances);
(ii) The intra-cluster sequencing, which determines the seam start/end points and the
welding order for each cluster individually (minimising the robot motions subject to
the heat-distortion-related constraints);
(iii) The inter-cluster sequencing, which determines the cluster processing order that
minimises the positioner motions subject to the downhand-related constraints.
After completing these stages, each operation is further broken down into detailed robot-
positioner movements, which finally yields the workcell control programs.
For the first stage, one can apply the well-developed general clustering techniques (Everitt et
al., 2001) that group together the welds with similar sw-vectors, which indicate the seam normal
directions relative to the workpiece base. However, taking into account the welding specificity,
it is prudent to perform the clustering in terms of the θ- and ξ-angles introduced above.
This poses the following optimisation problem for obtaining the positioner coordinates qp:

$$ \max_{i \in C_j} \left\{ \left\| \left( \theta_i(q_p),\; \xi_i(q_p) \right)^T \right\| \right\} \;\rightarrow\; \min_{q_p}, \qquad (47) $$
which is solved within the usual clustering algorithms while evaluating the cluster diversity. In
this expression, Cj denotes the jth cluster index set, ||.|| is the norm of the vector (θ, ξ)T, and the
functions θi(qp), ξi(qp) define the ith weld orientation relative to gravity for the given positioner
coordinate vector qp. Obviously, in order to ensure the desired welding quality, it is necessary to
constrain the intra-cluster diversity by assigning an upper bound Δmax for the above norm,
which is also easily incorporated in the existing clustering methods. An example of a slightly
different weld clustering technique, based on the “representative weld-line” concept, is
given in Kim et al. (1998).
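A minimal sketch of the minimax orientation problem (47) is a plain grid search over sampled positioner coordinates; it assumes a user-supplied callback that returns a weld's (θ, ξ) for given positioner coordinates, standing in for the kinematic model of the previous sections:

```python
import math

def best_positioner_pose(welds, orientation, q1_range, q2_range):
    """Grid search for the positioner coordinates minimising the worst
    (theta, xi)-norm over a cluster of welds, as in (47). `orientation(q, weld)`
    must return the weld's (theta, xi) for positioner coordinates q; here it is
    a placeholder for the robot-positioner kinematic model."""
    best_q, best_val = None, math.inf
    for q1 in q1_range:
        for q2 in q2_range:
            val = max(math.hypot(*orientation((q1, q2), w)) for w in welds)
            if val < best_val:
                best_q, best_val = (q1, q2), val
    return best_q, best_val
```

With a sampled axis grid, the same routine also supplies candidate per-cluster positioner configurations for the sequencing stage below.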
For the second stage, there already exist a number of problem-specific heuristics, neural-
network-based methods, and genetic algorithms that allow generating the minimum-time intra-
cluster welding sequence. These take into account the heat-related distortions by assigning
the minimum cooling time, the size of the heat-affected zone, etc. A comprehensive review of
recent advances in this area is given by Kim et al. (2005).
This section focuses on the third welding task planning stage, the inter-cluster sequencing, related
to the optimisation of the positioner motions. To our knowledge, the only paper that addresses
this problem directly is that of Kim et al. (1998), devoted to the welding operations planning in a
robotic workcell with a rotating-tilting positioner. However, their approach assumes that each
cluster is oriented using the representative weldline, which is transformed to the strictly
downhand location by the positioner. Besides, after such orienting, the cluster welding time is
assumed to be constant and computed directly from the welding speed and the weld line length.
In contrast to the known approach, the proposed technique admits the out-of-position (i.e. not
exactly downhand) weld location, which is charged by a welding speed reduction. From
this, one can pose a problem of minimum-time cluster sequencing by finding a trade-off
between the positioner motion time and the cluster processing time. Another useful feature
of the proposed approach is the re-clustering ability. This means that the developed
algorithm is able to find the same positioner configuration for several neighbouring clusters
(i.e. to merge them), if the corresponding increase of the welding time is over-compensated
by the reduction of the robot-positioner motion time. This makes it possible to relax the clustering
stage, which may impose very strong constraints on the intra-cluster similarity, and partly
combine the clustering stage with the cluster sequencing one.
where I, K denote the optimal cluster sequence and the corresponding cluster node numbers,
ki defines the optimal node within the ith cluster, the first term accumulates the cluster-
processing times (welding and torch travel times), the second term represents the
positioner motion times (between the clusters), and the third term takes into account the
auxiliary robot motions between/after the positioner re-configurations via the indicator
function 1(.).
[Figure: welding-task clusters (#1-#7) shown in the sampled positioner coordinate space, with the two welding positioner axes as the coordinates.]
node within each cluster, we assign the node-processing time that varies within the cluster,
with the minimum located in the cluster centre. For each pair of nodes, the inter-node travel
times are also given. Besides, it is assumed that each non-zero travel requires some
additional pause time, needed for preparation of the safe movements. The goal is to find the
minimum-time tour, visiting each cluster exactly once, defined by both the cluster
sequence and the set of visiting nodes within the clusters.
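For small instances, this tour problem (a variant of the generalized travelling salesman problem) can be solved exhaustively. The sketch below uses our own data layout and charges the pause only for non-zero travels, which is exactly what makes merging overlapping clusters attractive:

```python
from itertools import permutations, product

def best_cluster_tour(clusters, proc, travel, pause):
    """Exhaustive search for the minimum-time non-closed tour visiting one node
    in each cluster (small instances only). `clusters` maps cluster ids to node
    lists, `proc[node]` is the node-processing time, `travel[a][b]` the
    inter-node travel time; every non-zero travel adds `pause`."""
    ids = list(clusters)
    best = (float("inf"), None)
    for order in permutations(ids):
        for nodes in product(*(clusters[c] for c in order)):
            t = sum(proc[n] for n in nodes)
            for a, b in zip(nodes, nodes[1:]):
                d = travel[a][b]
                t += d + (pause if d > 0 else 0.0)
            if t < best[0]:
                best = (t, list(zip(order, nodes)))
    return best
```

In the toy run below, choosing a more expensive node that overlaps with the next cluster (zero travel, hence no pause) beats the cheaper node that requires a real positioner move.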
The hypothetical solution (see Fig. 7) shows a compromise between visiting cluster centres and
their peripheries, since the node-processing times and the inter-node travel times compete inside
the total sum to be minimised. Moreover, this solution utilises advantages of cluster overlapping
by processing the clusters #3, #6 in the same positioner configuration (this removes the
corresponding pause-time between successive positioner motions). Another interesting feature is
the one-axis motion between the clusters #1 and #2, which is more time-economical than the
two-axis motion requiring a pause between activating the axis drives.
It should be noted that this study focuses on generating the non-closed optimal tours for cluster
visiting and processing, assuming that the workpiece downloading/uploading is performed at the
tour start/end points interchangeably. However, the proposed technique is easily modified for the
case of the closed tour, as well as for the case of a predetermined loading positioner configuration.
Another essential specificity of the studied optimisation problem is related to cluster geometry,
since the welding-task clusters are usually composed of several disjoint regions in the positioner
coordinate space, corresponding to different inverse kinematic solutions. The next subsection
proposes an optimisation algorithm that takes into account this problem specificity.
repeated several times, each time starting from a new random initial solution and
finishing by updating the best solution. The basic idea and outline of the proposed
composite algorithm (called GI2+) are given below:
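In essence, the outer loop is a multi-start scheme: draw a random initial solution, improve it, and keep the best result found. A generic skeleton of this mechanism (all names ours; the actual GI2+ improvement steps are not reproduced) is:

```python
import random

def multi_start(build_random_solution, improve, cost, restarts=20, seed=0):
    """Multi-start skeleton: each restart draws a random initial solution,
    applies a local improvement procedure, and updates the best-so-far.
    The callbacks stand in for the problem-specific GI2+ components."""
    rng = random.Random(seed)
    best_sol, best_cost = None, float("inf")
    for _ in range(restarts):
        sol = improve(build_random_solution(rng))
        c = cost(sol)
        if c < best_cost:
            best_sol, best_cost = sol, c
    return best_sol, best_cost
```

The repeated random restarts trade running time for robustness against poor local optima.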
over the optimum, which is quite acceptable for engineering applications. In contrast, the exact
algorithm (based on branch-and-bound with nested dynamic programming) can handle
only low-dimensional problems (up to 5×100, 6×50, 7×20, 8×10). However, for a smaller number
of clusters (n < 5) the branch-and-bound method outperforms the heuristic. To summarise the
heuristic's empirical performance, the running times were approximated by the expression
Time = c·n^a·m^b using the log-least-squares technique, which yielded a ≈ 1.44 and b ≈ 1.92.
This shows that the heuristic's complexity remains rather moderate with respect to n, while for
the exact algorithm the problem difficulty severely increases with the number of clusters.
7. Conclusion
Recent advances in arc welding technology motivate rethinking some postulates and
conventions incorporated in the existing robot control methods. This chapter addresses
relaxing the downhand-position assumption, which became a de-facto standard in
robotic welding and requires the weld normal vector to be opposite to gravity. In
contrast to the standard techniques, the developed method explicitly assumes that a
weld may be processed in a non-downhand location, within given tolerances. But, to
ensure the prescribed quality, the downhand deviation is charged by a reduction of the
welding speed. For such settings, a novel method for the kinematic control of a
robot-positioner system is proposed, together with a related optimisation algorithm for the
cluster-level planning of the welding operations. By using this technique in combination
with the existing CAD tools, it is possible to essentially reduce the cycle time of the
robotic welding station.
8. References
Ahmad, S. & Luo, S. (1989). Coordinated motion control of multiple robotic devices for
welding and redundancy coordination through constrained optimization in
Cartesian space. IEEE Transactions on Robotics and Automation, Vol. 5, No 4, (Aug.
1989), 409–417.
Bolmsjo, G. (1987). A kinematic description of a positioner and its application in arc welding
robots, In: Proc. of the 2nd Int. Conference on Developments in Automated and Robotic
Welding, pp. 77-84., Nov. 1987, London, UK.
Bolmsjo, G.; Olsson, M. & Cederberg, P. (2002). Robotic arc welding—trends and
developments for higher autonomy. Industrial Robot: An International Journal, Vol.
29, No 2, 98–104.
Cary, H.B. (1995). Arc welding automation, Marcel Dekker, New York.
Dolgui, A., & Pashkevich, A. (2006). Cluster-level operations planning for the out-of-
position robotic arc welding. International Journal of Production Research, Vol. 44, No
4, (Febr. 2006), 675-702.
Everitt, B.S.; Landau, S. & Leese, M. (2001). Cluster Analysis, Oxford University Press, New York.
Fernandez, K. & Cook, G.E. (1988). A generalized method for automatic downhand and wire
feed control of a welding robot and positioner, NASA Technical Paper 2807.
Fu, K. S.; Gonzalez, R. C. & Lee, C. S. G. (1987). Robotics: Control, vision and intelligence.
McGraw-Hill, NewYork.
Fukuda, S. & Yoshikawa, K. (1990). Determination of welding sequence: a neural net approach,
Engineering Analysis with Boundary Elements, Vol. 7, No 2, (June 1990), 78–82.
Grenestedt, J.L. (2003). Optimization of the weld path for overlay coatings. Journal of
Structural and Multi-Disciplinary Optimization, Vol. 25, No 3, (Aug. 2003), 215–224.
Gutin, G. & Punnen, A.P. (Eds), (2002). The Traveling Salesman Problem and its Variations,
Kluwer Academic Publishers, Dordrecht.
Henry-Labordere, A.L. (1969). The record balancing problem: a dynamic programming
solution of a generalized traveling salesman problem. Revue d’informatique et de
recherche operationnelle (RIRO), Vol. B-2, 43–49.
Kim, D.W.; Choi, J.-S. & Nnaji, B.O. (1998). Robot arc welding operations planning with a
rotating/tilting positioner. International Journal of Production Research, Vol. 36, No 4,
(Apr. 1998), 957–979.
Kim, H.J.; Kim, Y.D. & Lee, D.H. (2005). Scheduling for an arc welding robot considering
heat-caused distortion. Journal of the Operational Research Society, Vol. 56, No 1, (Jan.
2005), 39–50.
Kim, K.Y.; Kim, D.W. & Nnaji, B.O. (2002a). Robot arc welding task sequencing using
genetic algorithms. IIE Transactions on Design and Manufacturing, Vol. 34, No 10,
(Oct. 2002), 865–880.
Kim, K.Y.; Norman, B. & Nnaji, B.O. (2002b). Heuristics for single-pass welding task
sequencing. International Journal of Production Research., Vol. 40, No 12, (Aug. 2002),
2769–2788.
Laporte, G. & Palekar, U. (2002). Some applications of the clustered travelling salesman problem.
Journal of the Operational Research Society, Vol. 53, No 9, (Sept. 2002), 972-976.
Nikoleris, G. (1990). A programming system for welding robots. International Journal for the
Joining of Materials, Vol. 2, No 2, 55–61.
Pashkevich, A.; Dolgui, A. & Semkin, K. (2003). Kinematic aspects of a robot-positioner
system in an arc welding application. Control Engineering Practice, Vol. 11, No 6,
(June 2003), 633–647.
Pires, J.N.; Loureiro, A.; Godinho, T.; Ferreira, P.; Fernando, B. & Morgado, J. (2003). Welding
robots. IEEE Robotics and Automation Magazine, Vol. 10, No 2, (June 2003), 45-55.
Ranky, P.G. (2004). Automotive robotics. Industrial Robot: An International Journal; Vol. 31,
No 3, (June 2004), 252 – 257.
Renaud, J. & Boctor, F.F. (1998). An efficient composite heuristic for the symmetric
generalized travelling salesman problem. European Journal of Operational Research,
Vol. 108, No 3, (Aug. 1998), 571-584.
Rubinovitz, J. & Wysk, R.A. (1988). Task level off-line programming system for robotic arc
welding – an overview. Journal of Manufacturing Systems, Vol. 7, No 4, 293–299.
Tarn, T.J.; Chen, S.B. & Zhou, C. (Eds.), (2004). Robotic Welding, Intelligence and Automation: Lecture
Notes in Control and Information Sciences, Vol. 299, Springer-Verlag, New York.
Tolinski, M. (2001). Getting to the Core of Welding Wire. Forming and Fabricating: SME
Journal, Vol. 8, No 1, 42 –49.
Wu, L.; Cui, K. & Chen, S. B. (2000). Redundancy coordination of multiple robotic devices for
welding through genetic algorithm. Robotica, Vol. 18, No 6, (Nov. 2000), 669 – 676.
Yagi, T. (2004). State-of-the-art welding and de-burring robots. Industrial Robot: An
International Journal, Vol. 31, No 1, (Jan. 2004), 48-54.
16
University of Coimbra
Polo II, 3030-290 Coimbra
PORTUGAL
E-mail: mcris@isr.uc.pt
1. Introduction
One of the most recent developments in the area of industrial automation is the concept of
the Flexible Manufacturing Cell (FMC). These are highly computerized systems composed
of several types of equipment, usually connected through a Local Area Network (LAN)
under some hierarchical Computer Integrated Manufacturing (CIM) structure (Kusiak,
1986; Waldner, 1992). FMC are capable of producing a broad variety of products and of
changing their characteristics quickly and frequently. This flexibility provides for more
efficient use of resources, but makes the control of these systems more difficult. Developing
an FMC is not an easy task. Usually, an FMC uses equipment (robots, CNC machines, ASRS,
etc.) from different manufacturers, each having its own (generally local) programming
language and environment. CNC machines and industrial robots thus remain difficult to
program, because it is necessary to be knowledgeable in several programming languages
and environments. Robust and flexible control software is a necessity for developing an FMC.
We present several software applications developed for industrial robots and CNC machines. The objective of this software is to allow such equipment to be integrated, in an easy and efficient way, into modern Flexible Manufacturing Systems (FMS) and FMC. For the industrial robots we present the “winRS232ROBOTcontrol” and “winEthernetROBOTcontrol” software; for the CNC machines we present “winMILLcontrol”, for mills, and “winTURNcontrol”, for lathes.
316 Industrial Robotics - Programming, Simulation and Applications
Genetic algorithms (GA) can provide good solutions for scheduling problems. In this chapter we also present a GA to solve scheduling problems in FMC, which are known to be NP-hard. First, we present a new concept of genetic operators for scheduling problems. Then, we present a software tool, called “HybFlexGA” (Hybrid and Flexible Genetic Algorithm), developed to examine the performance of various crossover and mutation operators through simulations of scheduling problems. Finally, the best genetic operators obtained from our computational tests are applied in the “HybFlexGA”. The computational results obtained with 40, 50 and 100 jobs show the good performance and efficiency of the developed “HybFlexGA” in solving scheduling problems in FMC.
An FMC with industrial characteristics was developed to demonstrate the potential of this software. The “winRS232ROBOTcontrol”, “winEthernetROBOTcontrol”, “winMILLcontrol” and “winTURNcontrol” software allows the coordination, integration and control of the FMC, and the “HybFlexGA” software solves its scheduling problems.
With the work described in this chapter we can:
− Control and monitor all of the functionalities of the robots and CNC machines remotely. This remote control and monitoring is very important for integrating robots and CNC machines in modern FMS and FMC.
− Develop a distributed software architecture based on personal computers using Ethernet networks.
− Develop applications that allow remote operation of robots and CNC machines, always using the same programming language (e.g. C++).
− Use a personal computer as the programming environment, taking advantage of the huge amount of programming and analysis tools available on these platforms.
− Develop Graphical User Interfaces (GUI) for robots and CNC machines in a simple and efficient way.
− Write FMS and FMC control software that is easier to develop, cheaper and more flexible. As a result, the software expertise resides at the companies and software changes can be made by the company’s engineers. Furthermore, the developed software is easy to modify.
− Control and integrate equipment from different manufacturers in FMS and FMC (e.g. conveyors, motors, sensors, and so on).
− Share data, for example programmes and databases, among the client(s) and server(s).
− Solve scheduling problems in FMC with GA.
Suggestions for further research and development work will also be presented.
3.1 winRS232ROBOTcontrol
The “winRS232ROBOTcontrol” was developed for robot manipulators whose controller can only communicate through the RS232 serial port. With the “winRS232ROBOTcontrol” it is possible to develop robot programmes in C++ that are executed at the PC level, not at the robot controller level. Nevertheless, the “winRS232ROBOTcontrol” also allows the development of programmes to be executed at the robot controller level. Furthermore, it is also possible to have programmes running in the PC and in the robot’s controller (mixed solution), individually or simultaneously. The “winRS232ROBOTcontrol” was developed for industrial robot manipulators in general and was implemented for the special case of the Scorbot ER VII robot. It was designed to work with all robots that support RS232 but, of course, whenever a different robot is used, some upgrades to the RobotControlAPI library are necessary, according to the robot used.
The API developed for the “winRS232ROBOTcontrol” application is based on a thread running simultaneously with the main programme. This thread was created to poll the serial port: an event is generated each time data is present at the serial port. To send data to the robot’s controller, the message is placed in the serial port queue and sent asynchronously. For example, to open the robot’s gripper, a data message is placed in the thread queue and then sent to the robot’s controller through the RS232 channel. After the delivery of the data, a waiting cycle is activated. This cycle waits for the robot’s controller and ends when the controller sends back a prompt (‘>’), a timeout error occurs or a cancellation message is sent.
Fig. 1 shows the message cycle of the thread process. The main programme communicates with the parallel process through messages placed in a waiting queue managed by the operating system. A message is processed as soon as it arrives at the parallel process. When there are no messages in the queue, the parallel process enters a loop, waiting for data from the robot’s controller; an event is generated when data arrive. The parallel process is activated at the beginning of the communication with the controller. When the communication with the robot’s controller is finished, the resources allocated to the parallel process are returned to the operating system.
Fig. 1. Message cycle of the thread process.
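The waiting cycle just described can be sketched as follows. This is a minimal C++ sketch, not the actual “winRS232ROBOTcontrol” code: the RS232 transport, the cancellation test and the command syntax are abstracted as callables, so the sketch stays portable.

```cpp
#include <chrono>
#include <functional>
#include <string>

// Outcome of a command sent to the robot's controller.
enum class SendResult { Ok, Timeout, Cancelled };

// Sketch of the waiting cycle: after a command is placed in the serial queue,
// poll the receive side until the controller answers with its prompt ('>'),
// a timeout expires, or a cancellation is requested. A real implementation
// would read the RS232 port instead of the abstract `receive` callable.
SendResult sendAndWaitPrompt(const std::function<void(const std::string&)>& send,
                             const std::function<std::string()>& receive,
                             const std::function<bool()>& cancelled,
                             const std::string& command,
                             std::chrono::milliseconds timeout) {
    send(command);                                   // place message in the queue
    auto deadline = std::chrono::steady_clock::now() + timeout;
    std::string buffer;
    while (std::chrono::steady_clock::now() < deadline) {
        if (cancelled()) return SendResult::Cancelled;
        buffer += receive();                         // event: data arrived
        if (buffer.find('>') != std::string::npos)
            return SendResult::Ok;                   // controller prompt received
    }
    return SendResult::Timeout;
}
```

In the real library the loop would block on the serial-port event rather than spin, but the three exit conditions (prompt, timeout, cancellation) are exactly the ones described above.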
The RobotControlAPI library was developed with access to the controller’s functions in mind, as shown in Fig. 2. This API allows us to communicate with the robot’s controller. The available functions are divided into two groups: the first contains public methods and the second contains methods representing events. Table 1 shows some of these functions.
We have also developed an Ethernet server for the computer that controls the robot. Thus, it is possible to control the robot remotely, in a distributed client/server environment, as shown in Fig. 2.
Fig. 2. Client/server control of the robot: clients connect through Ethernet to the server PC, which communicates with the robot’s controller through RS232.
Function    Description
SpeedA()    Changes group A speed
Home()      Homing the robot
Table 1. Some public methods of the RobotControlAPI library.
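The split between public methods and events in Table 1 can be illustrated with a hypothetical C++ class. Only SpeedA() and Home() come from Table 1; the command strings, the feed() hook and the callback-based event are illustrative assumptions, not the library's real interface.

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of the RobotControlAPI surface: one group of public
// methods that queue commands for the controller, and one group of events
// raised when data arrives from the serial port.
class RobotControlAPI {
public:
    // Public methods: each formats a command for the controller's queue.
    void SpeedA(int speed) { queued_.push_back("SPEEDA " + std::to_string(speed)); }
    void Home()            { queued_.push_back("HOME"); }

    // Event group: subscriber invoked whenever controller data is received.
    std::function<void(const std::string&)> onDataReceived;

    // Called by the (omitted) serial thread when bytes arrive from the robot.
    void feed(const std::string& data) { if (onDataReceived) onDataReceived(data); }

    const std::vector<std::string>& queued() const { return queued_; }

private:
    std::vector<std::string> queued_;  // messages waiting for the RS232 thread
};
```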
3.2 winEthernetROBOTcontrol
The “winEthernetROBOTcontrol” was developed for robot manipulators whose controller can communicate through an Ethernet network. The “winEthernetROBOTcontrol” is a Dynamic Link Library (DLL) that allows the development of simple and fast applications for robots. The developed software uses the original capabilities of the robots’ controllers, in a distributed client/server environment. Fig. 3 shows PC1 and PC4 communicating with the robots through “winEthernetROBOTcontrol”.
Fig. 3. winEthernetROBOTcontrol.
Procedure         Description
Robot()           Add New Alias
S4ProgamLoad()    Load a programme in the robot controller
S4Run()           Switches ON the robot’s motors
Table 2. Some procedures of the “winEthernetROBOTcontrol” library.
Table 2 presents some procedures, functions and events developed for the “winEthernetROBOTcontrol”. The procedure “Robot” makes a connection between one PC (with winEthernetROBOTcontrol) and one or more robots. For that, it is necessary to create and configure a connection between the PC and the robot controller (Add New Alias). After that it is possible to communicate with and control the robot remotely through the connection (Alias). With “winEthernetROBOTcontrol” we can develop control software, for one or more robots, on any PC; with this PC we can communicate with and control the robot (or robots) through an Ethernet network (see Fig. 3).
The “winEthernetROBOTcontrol” library was developed for industrial robot manipulators in general and was implemented for the special case of ABB robots (with the new S4 control system). Nevertheless, this very flexible library was designed to work with all robots that support Ethernet. But, of course, if a different robot is used, some upgrades to this library are necessary, according to the robot used. The basic idea is to define, for any specific equipment, a library of functions that at each moment defines the basic functionality of that equipment for remote use.
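The alias mechanism can be sketched in C++. Only the Robot() / “Add New Alias” idea comes from the text; the Connection type, the sendTo() helper and the host parameter are illustrative assumptions standing in for a real Ethernet link to an S4 controller.

```cpp
#include <map>
#include <string>
#include <vector>

// Stand-in for a real Ethernet connection to one robot controller.
struct Connection {
    std::string host;                 // controller address
    std::vector<std::string> sent;    // commands delivered over this link
};

class EthernetRobotControl {
public:
    // "Add New Alias": create and configure a connection to one controller.
    void Robot(const std::string& alias, const std::string& host) {
        connections_[alias] = Connection{host, {}};
    }
    // Route a command (e.g. load or run a programme) through an alias.
    bool sendTo(const std::string& alias, const std::string& command) {
        auto it = connections_.find(alias);
        if (it == connections_.end()) return false;   // unknown alias
        it->second.sent.push_back(command);
        return true;
    }
    const Connection* get(const std::string& alias) const {
        auto it = connections_.find(alias);
        return it == connections_.end() ? nullptr : &it->second;
    }
private:
    std::map<std::string, Connection> connections_;   // alias -> connection
};
```

One PC can thus hold several aliases and control several robots at once, which is exactly the one-PC-to-many-robots arrangement of Fig. 3.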
− We can develop Graphical User Interfaces (GUI) for CNC machines in a simple and efficient way.
Some CNC machine controllers (e.g. FANUC) only allow external connections through the RS232 serial port (Ferrolho et al., 2005). To surpass this limitation, a “null modem” cable was connected between the COM1 and COM2 serial ports of each PC that controls the respective machine, as shown in Fig. 4. The developed DNC programmes use COM1 to send orders (e.g. open door, close vice, start NC programme, and so on) to the machine, and the CNC machine’s controller receives the same orders on COM2. For that, we need to configure the CNC machine software to receive the DNC orders on COM2. We have developed an Ethernet server for the PCs that control the machines, so as to allow remote control of the machines, as shown in Fig. 4. Each machine’s server receives the information through the Ethernet and then sends it to the machine’s controller through the serial port. For example, if client 1, in Fig. 4, needs to send an order (e.g. close door) to machine A, the developed programme running in the machine’s computer receives the order through the Ethernet and sends the same order to the COM1 port. After that, the machine’s controller receives this order through COM2 (null modem cable) and the machine closes the door.
DNC is an interface that allows a computer to control and monitor one or more CNC machines. The DNC interface creates a connection between a client (e.g. client 1) and the CNC machine’s computer. After activating the DNC mode, the client starts to control the CNC machines through the Ethernet, as shown in Fig. 4. The developed DNC interface establishes a TCP/IP connection between the client computer and the CNC machine’s computer. With the DNC interface developed it is possible to perform the following remotely: transfer NC programmes (files of NC code) to the CNC machines, work with different parts’ NC files, visualize alarms and information regarding the state of the CNC machines, and control all the devices of the CNC machines (e.g. doors, clamping devices, blow-out, refrigeration system, position tools, auxiliary drives, and so on).
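The Ethernet-to-COM1 forwarding step can be sketched as follows. The real servers use TCP/IP sockets and a physical serial port; here the serial side is abstracted as a callable and the order name and line ending are illustrative assumptions.

```cpp
#include <functional>
#include <string>

// Sketch of the DNC forwarding described above: the server on the machine's
// PC receives an order over Ethernet and writes the same order to COM1; the
// null-modem cable then delivers it to the controller listening on COM2.
class DncServer {
public:
    explicit DncServer(std::function<void(const std::string&)> com1Writer)
        : com1_(std::move(com1Writer)) {}

    // Called for each order arriving from a TCP/IP client.
    void onEthernetOrder(const std::string& order) {
        com1_(order + "\r\n");   // forward unchanged to the serial side
    }

private:
    std::function<void(const std::string&)> com1_;   // stand-in for COM1
};
```

The design point is that the server adds no interpretation of its own: the same order text travels from the client, through COM1 and the null-modem cable, to the controller on COM2.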
Fig. 4. Control CNC machines through Ethernet.
Control and Scheduling Software for Flexible Manufacturing Cells 323
Fig. 5. USB software and USB kit.
Fig. 6. Layout and control architecture of the developed FMC: scheduling over Ethernet; manufacturing sector (CNC mill, buffer, ABB IRB140 robot with S4Cplus controller, controlled by “winEthernetROBOTcontrol” and DNC software); assembly sector (Scorbot ER VII, controlled by “winRS232ROBOTcontrol”); handling sector (conveyors and USB data-acquisition software); storage sector (ABB IRB1400 robot).
Manufacturing Sector
Control of the manufacturing sector is carried out by PC1, as shown in Fig. 6. The manufacturing sector is controlled by software, developed in C++, with the following functions: buffer administration, CNC machine administration and ABB IRB140 robot control. Control of the IRB140 robot is carried out by “winEthernetROBOTcontrol”. Control of the mill and lathe is carried out by the DNC software – “winMILLcontrol” and “winTURNcontrol” (Ferrolho et al., 2005). This software has a server which allows remote control of the manufacturing sector.
Assembly Sector
The assembly sector is made up of a Scorbot ER VII robot, a conveyor and an assembly table, as shown in Fig. 6. Responsibility for the control and coordination of this sector falls on PC2; in this sector we used the “winRS232ROBOTcontrol”.
Handling Sector
Control of the handling sector is carried out by PC3 and a USB kit (hardware), as shown in Fig. 6. The USB kit acquires the signals coming from the sensors and sends them to PC3. When it is necessary to activate the stoppers, PC3 sends the control signals to the USB kit.
Storage Sector
Control of the storage sector is carried out by PC4, as shown in Fig. 6. The functions of the developed software are the administration of the database of the whole warehouse and the control of the ABB IRB1400 robot. The database can be updated at any moment by the operator, but it is also automatically updated whenever the ABB IRB1400 robot loads, unloads or replaces pallets. Control of the ABB IRB1400 robot is carried out by “winEthernetROBOTcontrol” functions, procedures and events.
FMC Manager PC
The central computer (FMC manager PC) controls all of the FMC production, connecting the
various computers and data communication networks, which allows real time control and
supervision of the operations, collecting and processing information flow from the various
resources. In this PC the first three layers of the hierarchical structure presented in Fig. 6 are
implemented: engineering, planning and scheduling.
The first layer contains the engineering and product design, where the product is designed
and developed with CAD/CAM software (e.g. MasterCam). The outputs of this layer are
the drawings and the bill of materials.
The second layer is process planning. Here the process plans for manufacturing, assembly and testing are made. The outputs of this layer are the NC codes for the CNC machines; these can also be obtained with CAD/CAM software. After that we put the product design, process plan, NC code, and so on, of each job in the products database. Whenever there is a new product (job), it is necessary to put all the information about this job in the products database.
The third layer is scheduling. The process plans, together with the drawings, the bill of materials and the customer orders, are the input to scheduling. The output of scheduling is the release of the orders to the manufacturing floor. We used Genetic Algorithms (GA) to solve scheduling problems in the FMC (Ferrolho & Crisóstomo, 2005-c). Some scheduling problems are very difficult to solve but, if the GA has been developed correctly, we can obtain good solutions, maybe even the optimal schedule. The scheduling problems studied and implemented in the FMC were: the single machine scheduling problem, the flow-shop scheduling problem and the job-shop scheduling problem.
Concisely, the FMC manager PC is responsible for:
− Developing and designing new products to manufacture – the engineering
layer.
− Production plans, assemblies and product tests – the planning layer.
− Finding the optimum processing sequence so as to optimize CNC machine use –
the scheduling layer.
− Maintaining a database of jobs to manufacture, including the respective NC
programmes.
− Synchronizing the various sectors so as to produce variable lots of different types
of parts depending on the customer’s orders.
− Monitoring the current state of the production.
− Guaranteeing fault tolerance, safety and coherence of the data.
therefore computationally efficient techniques are very important for their solution. Some of
the problems are made even more difficult when the rate of demand for products varies.
This dynamic element violates some of the convenient assumptions about approaches used
in static cases (Rembold et al., 1993).
Description                                          Symbol   Remarks
Number of jobs                                       n
Job i                                                Ji
Machine j                                            Mj
Operation time                                       Oij
Processing time                                      pij
Ready time for Ji                                    ri
Due date for Ji                                      di
Allowance for Ji                                     ai       ai = di − ri
Waiting time of Ji preceding its k-th operation      Wik
Total waiting time of Ji                             Wi       Wi = Σ(k=1..m) Wik
Completion time of Ji                                Ci       Ci = ri + Σ(k=1..m) [Wik + pij(k)]
Table 3. Notation for the scheduling problems.
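The completion-time formula in the table can be computed directly: the ready time plus every waiting time and processing time of the job's m operations. A small C++ helper, with the waiting and processing times supplied as vectors:

```cpp
#include <cstddef>
#include <vector>

// Ci = ri + sum over the m operations of (Wik + pij(k)):
// ready time plus each operation's waiting time and processing time.
double completionTime(double readyTime,
                      const std::vector<double>& waiting,     // Wik, k = 1..m
                      const std::vector<double>& processing)  // pij(k), k = 1..m
{
    double c = readyTime;
    for (std::size_t k = 0; k < waiting.size(); ++k)
        c += waiting[k] + processing[k];
    return c;
}
```

For example, with ri = 2, waiting times {1, 0, 3} and processing times {4, 2, 5}, the completion time is 2 + (1 + 4) + (0 + 2) + (3 + 5) = 17.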
As stated, the general n-job, m-machine job-shop scheduling problem has proved to be very difficult. The difficulty is not in modeling but in computation, since there is a total of (n!)^m possible schedules, and each one must be examined to select the one that gives the minimum makespan or optimizes whatever measure of effectiveness is chosen. There are no known efficient, exact solution procedures. Some authors have formulated integer programming models, but the computational results (computational complexity) have not been encouraging. For small problems, the branch and bound technique has been shown to be very good, and better than the integer programming formulation, but it is computationally prohibitive for large problems.
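The (n!)^m growth is easy to make concrete with a small helper (assuming n and m small enough for exact 64-bit arithmetic):

```cpp
#include <cstdint>

// n! for small n.
std::uint64_t factorial(unsigned n) {
    std::uint64_t f = 1;
    for (unsigned i = 2; i <= n; ++i) f *= i;
    return f;
}

// Number of candidate schedules for the n-job, m-machine job shop: (n!)^m.
std::uint64_t scheduleCount(unsigned n, unsigned m) {
    std::uint64_t base = factorial(n), result = 1;
    for (unsigned j = 0; j < m; ++j) result *= base;
    return result;
}
```

Even a toy instance with n = 5 jobs and m = 3 machines already gives 120^3 = 1,728,000 candidate schedules, which is why exhaustive examination is hopeless for realistic sizes.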
In the next sections we present a GA to solve scheduling problems in FMC. First, we present a new concept of genetic operators. Then, we present the developed software tool “HybFlexGA”. Finally, we show the good performance and efficiency of the developed “HybFlexGA” in solving scheduling problems in FMC.
8. Genetic Algorithms
The strong performance of a GA depends on the choice of good genetic operators. Therefore, the selection of appropriate genetic operators is very important for constructing a high-performance GA. Various crossover and mutation operators have been examined for sequencing problems in the literature (for example, see (Goldberg, 1989), (Oliver et al., 1987), (Syswerda, 1991), (Ferrolho & Crisóstomo, 2005-d) and (Murata & Ishibuchi, 1994)). When the performance of a crossover operator is evaluated, a GA without mutation is employed, and the evaluation of a mutation operator is carried out by a GA without crossover. In the literature, various crossover and mutation operators were examined in this manner (for example, see (Murata & Ishibuchi, 1994)).
In this section, we propose a new concept of genetic operators for scheduling problems. We evaluate various genetic operators with the objective of selecting the best-performing crossover and mutation operators.
Fig. 7. Gantt chart of the operations Oij(k) of job Ji on machines M1 to Mm, showing waiting times Wik, processing times pij(k), ready time ri, due date di, completion time Ci, allowance ai, lateness Li and flow time Fi.
and TPC3CV2 is a mix of TPC1CV2 plus TPC2CV2. The crossover operator with 4 children
is called two-point crossover: 4 children (TPC4C). This operator is a mix of TPC2CV1 plus
TPC2CV2.
In our computational tests in section 4 we also used the following crossover operators:
− Order crossover (OX) in Goldberg (Goldberg, 1989);
− Cycle crossover (CX) in Oliver (Oliver et al., 1987);
− Position based crossover (PBX) in Syswerda (Syswerda, 1991).
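A permutation-preserving two-point crossover, in the spirit of the TPC operators above, can be sketched in C++. The chapter's TPC variants differ in how many children they produce and which parent supplies the segment; this sketch shows only the common core, and the cut points are passed in explicitly (a real GA would draw them at random).

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Two-point crossover for job permutations: the segment [cut1, cut2) is
// copied from parent 1, and the remaining jobs are filled in the order they
// appear in parent 2, so the child is always a valid permutation.
std::vector<int> twoPointCrossover(const std::vector<int>& p1,
                                   const std::vector<int>& p2,
                                   std::size_t cut1, std::size_t cut2) {
    std::vector<int> child(p1.size(), -1);
    // Copy the middle segment from parent 1.
    for (std::size_t i = cut1; i < cut2; ++i) child[i] = p1[i];
    // Fill the remaining positions with parent 2's jobs.
    std::size_t pos = 0;
    for (int job : p2) {
        if (std::find(child.begin() + cut1, child.begin() + cut2, job)
                != child.begin() + cut2) continue;   // already in the segment
        while (pos >= cut1 && pos < cut2) ++pos;     // jump over the segment
        child[pos++] = job;
    }
    return child;
}
```

With parents (J1…J10) and (J5, J2, J10, J3, J7, J1, J8, J4, J6, J9) and cuts at positions 3 and 7, the child keeps J4–J7 from parent 1 and inherits the remaining jobs in parent 2's order.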
Fig. 8. Illustration of the crossover operators, including: (c) two-point crossover: 1 child (Vers. II); (d) one-point crossover: 2 children; (e) two-point crossover: 2 children (Vers. I); (f) two-point crossover: 2 children (Vers. II).
330 Industrial Robotics - Programming, Simulation and Applications
Fig. 12. Architecture of the HybFlexGA: interface module (problem type and scheduling parameters), pre-processing module (instance file read from disk) and scheduling module (returns the solution found: optimal or near-optimal).
generated by PC file, as shown in Fig. 12. This module pre-processes the input information and then sends the data to the next module – the scheduling module.
Step 5 – Elitism
Select the best Npop chromosomes to generate the next population Ψt+1; the other chromosomes are eliminated. Thus, the best chromosomes, i.e. solutions, will survive into the next generation. However, duplicated solutions may occur in Ψt+1. To minimize this, new chromosomes are generated for all duplicated chromosomes.
Step 6 – Termination test
Stop the algorithm if the stopping condition, specified previously in the interface module, is satisfied. Otherwise, update t to t := t + 1 and return to step 2.
Fig. 13. GA implemented in the scheduling module.
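The elitism step of the GA in Fig. 13 can be sketched in C++. The fitness and chromosome-generation callables are stand-ins for the chapter's GA; only the "keep the best Npop and replace duplicates with new chromosomes" logic comes from Step 5.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <set>
#include <vector>

using Chromosome = std::vector<int>;

// Step 5 (elitism): keep the best Npop chromosomes for the next population
// and replace duplicates with freshly generated chromosomes, so identical
// solutions do not crowd the next generation.
std::vector<Chromosome> elitism(std::vector<Chromosome> pool,
                                std::size_t npop,
                                const std::function<double(const Chromosome&)>& fitness,
                                const std::function<Chromosome()>& fresh) {
    // Higher fitness first.
    std::sort(pool.begin(), pool.end(),
              [&](const Chromosome& a, const Chromosome& b) {
                  return fitness(a) > fitness(b);
              });
    if (pool.size() > npop) pool.resize(npop);
    // Replace duplicates of earlier survivors with new chromosomes.
    std::set<Chromosome> seen;
    for (auto& c : pool) {
        if (!seen.insert(c).second) {
            c = fresh();
            seen.insert(c);
        }
    }
    return pool;
}
```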
where xinitial is the best chromosome in the initial population and xend is the best chromosome in the last population. That is, f(xinitial) is the fitness average (over the 20 computational tests) of the best chromosomes in the initial population and f(xend) is the fitness average of the best chromosomes at the end of the 1000 generations. The performance measure in (1) gives the total improvement in fitness during the execution of the genetic algorithm.
We used 20 computational tests to examine each crossover operator. The average value of the performance measure in (1) was calculated for each crossover operator with each crossover probability and each population size.
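The averaging over the 20 tests can be written directly. Equation (1) itself is not shown in this excerpt, so a plain improvement average f(xend) − f(xinitial) over the runs is assumed here.

```cpp
#include <cstddef>
#include <vector>

// Average fitness improvement over a set of runs: for each run i, the
// improvement is fEnd[i] - fInitial[i] (best chromosome of the last
// population versus best chromosome of the initial population).
double averageImprovement(const std::vector<double>& fInitial,
                          const std::vector<double>& fEnd) {
    double sum = 0.0;
    for (std::size_t i = 0; i < fInitial.size(); ++i)
        sum += fEnd[i] - fInitial[i];
    return sum / static_cast<double>(fInitial.size());
}
```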
Table 4 shows the best average value of the performance measure obtained by each crossover operator with its best crossover probability and its best population size. The table lists the crossover operators in order of classification.
Position Crossover Pc Npop Performance
1st TPC4C 1.0 100 3834.1
2nd TPC3CV2 1.0 100 3822.9
3rd TPC2CV2 1.0 100 3821.8
4th PBX 1.0 80 3805.8
5th TPC1CV2 0.8 100 3789.3
6th CX 0.8 80 3788.7
7th TPC3CV1 0.8 80 3680.2
8th TPC2CV1 1.0 80 3662.1
9th OPC2C 0.6 100 3647.8
10th OX 1.0 100 3635.4
11th TPC1CV1 1.0 100 3624.7
12th OPC1C 0.6 100 3570.5
Table 4. Classification of the crossover operators.
Fig. 14 presents the CPU time average and the fitness average along the 1000 generations. As we can see in Fig. 14 a), TPC4C, TPC3CV2 and TPC2CV2 require more CPU time than the other crossover operators. On the other hand, Fig. 14 b) shows that these crossover operators present a very fast evolution at the start (in the first 100 generations).
Fig. 14. The six best crossover operators: a) CPU time average along the 1000 generations; b) fitness average along the 1000 generations.
For the same number of generations, PBX and CX do not need as much CPU time (see Fig. 14 a)), but these operators present a worse fitness average evolution (see Fig. 14 b)). These crossover operators obtained their best performance with Npop=80 (see Table 4), which is one of the reasons both obtained a good CPU time average.
From Fig. 14 b) we can see that TPC4C is the best crossover operator along the 1000 generations, but Fig. 14 a) shows that it needs more CPU time.
Fig. 15. The three best mutation operators: a) CPU time average along the 1000 generations; b) fitness average along the 1000 generations.
For arbitrary positive weights, the SMTWT problem is strongly NP-hard (Lenstra et al., 1977). Because the SMTWT problem is NP-hard, optimal algorithms for this problem require a computational time that increases exponentially with the problem size (Morton & David, 1993), (Baker, 1974), (Lawer, 1977), (Abdul-Razaq et al., 1990) and (Potts & Wassenhove, 1991). Several algorithms have been proposed to generate exact solutions, i.e., solutions that are guaranteed to be optimal, for example branch and bound algorithms (Shwimer, 1972), (Rinnooy et al., 1975), (Potts & Wassenhove, 1985) and dynamic programming algorithms (Schrage & Baker, 1978). But branch and bound algorithms are limited by computational times and dynamic programming algorithms are limited by computer storage requirements, especially when the number of jobs is more than 50. Thereafter, the SMTWT problem was extensively studied with heuristics. These heuristics generate good or even optimal solutions, but do not guarantee optimality. In recent years, several metaheuristics, such as Simulated Annealing, Tabu Search, Genetic Algorithms and Ant Colony (Huegler & Vasko, 1997), (Crauwels et al., 1998), and (Morton & David, 1993), have been proposed to solve the SMTWT problem.
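The SMTWT objective that the GA evaluates can be written directly from its definition: jobs complete in sequence order on the single machine, and each job contributes its weight times its tardiness, max(0, Cj − dj), to the total.

```cpp
#include <algorithm>
#include <vector>

struct Job {
    double p;  // processing time
    double w;  // positive weight
    double d;  // due date
};

// Total weighted tardiness of a job sequence on a single machine:
// completion times accumulate processing times in sequence order, and
// each job adds w_j * max(0, C_j - d_j).
double totalWeightedTardiness(const std::vector<Job>& sequence) {
    double c = 0.0, total = 0.0;
    for (const Job& j : sequence) {
        c += j.p;                                 // completion time C_j
        total += j.w * std::max(0.0, c - j.d);    // weighted tardiness of j
    }
    return total;
}
```

For example, for the sequence (p, w, d) = (3, 2, 2), (2, 1, 10), (4, 3, 5): the completion times are 3, 5 and 9, the tardiness values are 1, 0 and 4, and the objective is 2·1 + 1·0 + 3·4 = 14. The GA searches over permutations of the jobs for the sequence minimizing this value.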
In this section we present the computational results obtained with 40, 50 and 100 jobs, using Genetic Algorithms. From the OR-Library (http#3), we randomly selected some instances of the SMTWT problem with 40, 50 and 100 jobs, and ran 20 computational tests for each instance. We used the six best crossover operators (see Table 4) and the best mutation operator (see Table 5) in the HybFlexGA. Each instance of the SMTWT problem was examined using the following conditions:
− Number of tests: 20.
− Initial population Ψt: randomly generated.
− Number of jobs: 40, 50 and 100.
− Instance used: from the OR-Library (http#3).
− Population size Npop: 80 and 100 (see Table 4 and Table 5).
− Stopping condition: 1000 generations for the instances with 40 and 50 jobs or the
optimal solution, and 5000 generations for the instances with 100 jobs or the
optimal solution.
− Crossover operators: the six best crossover operators in Table 4.
− Crossover probabilities Pc: 0.8 and 1.0 (see Table 4).
− Mutation operators: the best mutation operator in Table 5.
− Mutation probabilities Pm: 1.0 (see Table 5).
Instance                          40A     40B     40C     50A     50B    50C     100A    100B    100C
Optimal solution                  6575    1225    6324    2134    22     2583    5988    8       4267
TPC4C + Arb20%JC
  Tests with optimal solution     16      20      19      20      20     15      16      20      19
  CPU time average (sec.)         362.4   190.0   319.7   88.3    45.5   214.1   2405.1  523.9   3213.5
  Generations average             593     284     475     107     54     256     1611    323     1971
TPC3CV2 + Arb20%JC
  Tests with optimal solution     13      15      15      18      20     12      15      20      15
  CPU time average (sec.)         382.9   231.3   260.5   112.3   50.4   207.9   3851.0  1012.4  4453.7
  Generations average             725     402     448     158     70     290     3042    727     3181
TPC2CV2 + Arb20%JC
  Tests with optimal solution     8       16      15      17      20     12      9       20      5
  CPU time average (sec.)         369.1   216.8   305.8   146.2   92.0   157.8   3921.3  1339.8  4036.2
  Generations average             380     449     627     246     152    263     3685    1142    3423
PBX + Arb20%JC
  Tests with optimal solution     0       8       1       18      20     10      1       20      3
  CPU time average (sec.)         ---     236.1   312.0   144.6   86.3   151.6   2067.0  1413.6  3237.7
  Generations average             ---     747     976     371     218    385     2957    1828    4182
TPC1CV2 + Arb20%JC
  Tests with optimal solution     0       3       2       13      20     5       1       19      2
  CPU time average (sec.)         ---     230.0   363.5   224.7   118.2  180.0   4104.0  2190.7  3847.0
  Generations average             ---     609     956     487     252    385     4948    2405    4192
CX + Arb20%JC
  Tests with optimal solution     0       4       4       17      20     11      3       20      5
  CPU time average (sec.)         ---     165.3   270.5   107.8   51.0   140.0   2594.0  736.4   3224.2
  Generations average             ---     551     892     292     136    375     3912    1011    4390
Table 6. Computational results obtained for the SMTWT problems with 40, 50 and 100 jobs.
Table 6 shows the computational results obtained for the SMTWT problems with 40, 50 and 100 jobs. In this table, we have the number of tests with optimal solution, the CPU time average (in seconds) and the generations average for each instance. For example, we chose TPC4C with Pc=1.0, Arb20%JC with Pm=1.0 and instance 40A (SMTWT problem with 40 jobs, from the OR-Library (http#3)) in the HybFlexGA. We ran 20 computational tests for this instance and obtained the optimal solution in 16 of them. In these 16 tests, the CPU time average was 362.4 seconds and the generations average was 593.
As we can see in Table 6, we obtained good results with the TPC4C+Arb20%JC, TPC3CV2+Arb20%JC and TPC2CV2+Arb20%JC combinations, for all the instances with 40, 50 and 100 jobs. But the best results were obtained with the TPC4C+Arb20%JC combination, with which the HybFlexGA is very efficient. For example, in the three instances with 40 jobs (see Table 6 – 40A, 40B and 40C), the HybFlexGA found the optimal solution in 20 tests for instance 40B, in 19 tests for instance 40C and in 16 tests for instance 40A.
In subsections 10.2 and 10.3, we demonstrated that TPC4C and Arb20%JC need more CPU time (for the same number of generations) than the other genetic operators. But we also demonstrated that TPC4C and Arb20%JC need fewer generations to find good solutions, often the optimal ones. The computational results in Table 6 show that the HybFlexGA with the TPC4C+Arb20%JC combination requires less CPU time than with the other combinations. For example, in the 50B instance the HybFlexGA always found the optimal solution for all combinations (e.g., TPC3CV2+Arb20%JC, TPC2CV2+Arb20%JC, and so on). For this instance, as we can see in Table 6, the combination with the lowest CPU time average was TPC4C+Arb20%JC (45.5 seconds); all the other combinations needed more CPU time on average.
12. Conclusion
Robust control software is a necessity for an automated FMS or FMC, and plays an important role in the attainable flexibility. Often FMS and FMC are built with very flexible machines (robots, CNC machines, ASRS, etc.), but the available control software is unable to exploit that flexibility by adding new machines or parts, changing control algorithms, etc. Present robot and CNC machine control systems are closed and have deficient software interfaces. Nowadays, this software is typically custom written, very expensive, difficult to modify, and often the main source of inflexibility in FMS. We have developed a collection of software tools: “winRS232ROBOTcontrol”, “winEthernetROBOTcontrol”, “winMILLcontrol”, “winTURNcontrol” and the USB software. These tools allow the development of industrial applications for monitoring and controlling robot networks and CNC machines inserted in FMS or FMC. All the developed software can operate over Ethernet. The basic idea is to define, for any specific equipment, a library of functions that at each moment defines the basic functionality of that equipment for remote use.
In this chapter we also propose a new concept of genetic operators for scheduling problems.
We developed a software tool, called HybFlexGA, to examine the performance of various
crossover and mutation operators by running simulations on job scheduling problems.
The HybFlexGA obtained good computational results on the instances of SMTWT problems
with 40, 50 and 100 jobs (see Table 6). As we demonstrated, the HybFlexGA is most efficient
with the TPC4C+Arb20%JC combination. With this combination, the HybFlexGA found
optimal solutions more often than with the other combinations (TPC3CV2+Arb20%JC,
TPC2CV2+Arb20%JC, and so on), and the genetic algorithm required fewer generations and
less CPU time to find them. These results show the good performance and efficiency of the
HybFlexGA with the TPC4C and Arb20%JC genetic operators.
Applying the HybFlexGA, with these same genetic operator combinations, to other
scheduling problems (e.g. job-shop and flow-shop) is left for future work.
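The pluggable-operator evaluation described above can be illustrated with a permutation genetic algorithm skeleton for the SMTWT objective. The TPC4C and Arb20%JC operators themselves are not specified in this chapter, so the sketch below substitutes a generic order crossover and swap mutation in their place; all names and parameter values are illustrative assumptions, not the HybFlexGA implementation.

```python
import random

def total_weighted_tardiness(seq, p, w, d):
    """SMTWT objective: sum of w[j] * max(0, C[j] - d[j]) over the sequence."""
    t, cost = 0, 0
    for j in seq:
        t += p[j]                        # completion time of job j
        cost += w[j] * max(0, t - d[j])  # weighted tardiness of job j
    return cost

def order_crossover(a, b):
    """Keep a random slice of parent a, fill the rest in b's order."""
    i, k = sorted(random.sample(range(len(a)), 2))
    kept = a[i:k]
    rest = [j for j in b if j not in kept]
    return rest[:i] + kept + rest[i:]

def swap_mutation(seq):
    """Exchange two randomly chosen positions."""
    i, k = random.sample(range(len(seq)), 2)
    seq = list(seq)
    seq[i], seq[k] = seq[k], seq[i]
    return seq

def ga(p, w, d, pop_size=30, generations=200,
       crossover=order_crossover, mutation=swap_mutation):
    """GA skeleton with pluggable crossover/mutation operators."""
    n = len(p)
    fit = lambda s: total_weighted_tardiness(s, p, w, d)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fit)
        elite = pop[:pop_size // 2]      # keep the best half
        children = [mutation(crossover(random.choice(elite),
                                       random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=fit)
```

Swapping in a different operator pair only requires passing other functions as the `crossover` and `mutation` arguments, which is the kind of combination study reported in Table 6.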
Control and Scheduling Software for Flexible Manufacturing Cells 339
340 Industrial Robotics - Programming, Simulation and Applications
17
1. Introduction
Surface quality plays an important role for both components and products. Either an exact
form and dimensional accuracy are required to fulfil a certain function, e.g. in the case of
turbine blades, or the visual impression of a surface must meet high demands, e.g. in
sanitary fittings. In order to meet these high demands in material processing, processes such
as grinding and polishing are often used (VDMA, 2000).
Nowadays the processes of grinding and polishing are automated by industrial robots to relieve
the operator and to ensure economical manufacturing. Presently known systems, however, face
the problem that they can be adjusted to other part geometries and processing cycles only at high
cost and are therefore used only for large batches (Schraft & Kaun, 1999). One reason is the labour-
intensive and time-consuming programming and optimisation of the movement paths;
another is that the handling systems do not recognize the defects on the surface resulting
from grinding and polishing. As a defect-related adaptation of the subsequent processing steps is
not possible, the reprocessing of the surfaces is done manually.
In this chapter a concept is presented which makes it possible to automate the reworking
of the design surfaces of high-quality sanitary fittings with the help of industrial robots.
When analysing the manual reworking more closely, one sees that it is an iterative process.
The operator first grinds a small area centred on the defect to be removed and then checks
the result. If the defect has not been removed completely, he grinds it again, this time
working on a bigger area. This is repeated until the defect is removed. Once the defect has
been removed, the operator tries to clear the traces of the processing with further grinding
paths, and finally polishes the reworked surface over a larger area.
The most important aspects of the manual reworking can be summarized as follows:
- Every high-polished fitting is checked for surface defects.
- Big defects are scrapped, small ones are reworked.
- If possible, only small areas are reworked.
- There are two types of grinding paths:
1. Paths to remove the defect.
2. Paths to remove the processing traces.
- The result of the previous processing step is checked directly.
- The processing strategy is adapted to the result of the previous grinding step.
- The defective fittings are processed by means of surface processing, such as
grinding and polishing.
An image processing system is used for checking the surface quality (see also Kuhlenkötter,
et al, 2004). This image processing system has an illumination setup especially developed for
inspecting high-polished components; it detects defects down to a minimum size of 0.3 mm
and classifies them into different defect and contamination groups. For further information
about the classification system see (Zhang, et al., 2006).
Due to the time needed to check the complete surface (up to 25 seconds), it is not
economically advisable to re-inspect the fitting after each processing step, which makes
the continuous adaptation of the processing strategy used in manual rework impossible.
Therefore the robot system must select a suitable strategy for the whole reworking process,
generate processing paths according to that strategy, and adapt them to the present defect
before the reworking process starts.
This entails another change compared to the manual process. Since it is not possible to
check after each grinding or polishing step what condition the surface is in, and which
defects and processing traces are still visible or have newly occurred, the area to be
processed is in most cases enlarged. This ensures that both the defect and the processing
traces are actually removed and that no further reworking becomes necessary.
The division of the processing paths into defect-removal paths and trace-removal paths has
been modified compared to the manual reworking. In robot-aided reworking, the paths for
the removal of processing traces are subdivided into paths that border the defect-removal
paths on the outside and paths in the vicinity of the defects to be removed. This additional
division helps to adapt the generated paths better to the respective location and type of
defect.
The first step in the generation of a complete rework program is the definition of the
movement paths. This definition is subject to different restrictions that have to be
considered, e.g. the geometric coverage of the defect area, the reachability, the retractability
and the collision-freeness of the generated robot paths.
To develop a suitable method, a large number of grinding and polishing tests were
necessary. First, the robot program for the normal manufacturing process had to be created.
Here it was very important that the dependency between the workpiece surface and the
processing paths was defined accurately, to make the identification of special process
correlations possible. Teach-in robot programs could not reach the needed accuracy;
instead, programs were created and optimised from scratch, with mathematical accuracy, in
an offline programming system.
During the testing of different rework strategies it turned out that, besides the mentioned
constraints, preserving the same grinding pattern across reworked and non-reworked areas is
another important requirement. In order to obtain a smooth, invisible transition, an area-based
reworking approach was selected. That means the workpiece surface is divided once into
different rework areas during the setup process. In the rework process, the complete affected
area is reworked: if, e.g., a defect is found in area 3, the whole of area 3 is reworked, and if
defects appear in more than one area, all affected areas are reworked. The most important
criterion for the predefinition of the rework areas is achieving transition-free processing at the
area edges.
As far as possible, the paths of the original processing program should also be used for
rework, because they are already defined to a technologically and geometrically very
accurate extent and fulfil the requirements mentioned above. If this is not possible, new
paths have to be generated.
According to the current state of the art and research, automated path generation cannot
meet all requirements in a satisfactory way. In particular, the accuracy of automatically
generated paths is not good enough, due to a lack of appropriate process models. Therefore
a partly automated approach to the generation of the rework program was selected (see
Fig. 3). The realised system uses a manual predefinition of rework areas and a set of
predefined rework strategies for different defect situations and defect areas. This
predefinition is done during the setup of the rework cell.
In the automated part of the developed process controller, the defect data from the vision
system is pre-processed in a first step. The data is then used to select the suitable
processing paths automatically and to transfer them to the robot control in a structured way,
including appropriate start and end points. This adaptation to the current defect situation
takes place directly before the rework process and is carried out whenever the vision system
detects a defect on the inspected fitting.
Besides the geometrical constraints, the requirements defined by the processes themselves are
taken into account. Thus it is necessary to develop a group of technological strategies for the
rework process that enable the robot system to achieve an optimum rework result from the
technological point of view. In order to realise a fully automated rework in the presented
application, a class of qualitative defect-removal strategies based on technology value
modifications was developed. The developed solution for the generation of a rework
program uses a two-stage method. In the first stage, a geometric movement program for the
rework of the whole affected defect area is generated. This program consists of geometric
robot paths that cover the whole area to be processed, together with standard values for the
technology parameters. In the second stage, the standard values of the technology
parameters, such as feed speed and infeed, are adapted automatically to the existing defect
situation in such a way that the defects are optimally removed and the edges to the
non-reworked areas satisfy the quality requirements.
This means that in the defect vicinity the robot program must be adapted to the shape of the
defect, while at the edge of the rework area it must be adjusted to the neighbouring areas. In
the realised system, a classification of the path points based on their function inside the
rework program is used (see Fig. 4). This classification is done once during the programming
process and, according to the current state of the art and research, it can only be done
manually. To shorten this time-consuming work, the classification into function groups was
integrated into an off-line programming system that was developed in the same project by a
participating company. This makes a graphically interactive, very efficient and descriptive
execution of these operations feasible.
Fig. 4. Classification of the path point into function groups (Kuhlenkötter, et al, 2005).
In order to realise a flexible solution for the technology value modification, scale factors
were integrated into the control structure of the system.
These scale factors help to adapt predefined default values to different defect situations.
Based on the classification of points, different scale factors can be defined both for the
individual rework areas and for the function groups.
These scale factors represent the relationship between the technology parameters within the
defect area and those in the remaining areas, so that the technology parameters can be
adapted to the current defect situation easily and effectively.
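The scale-factor mechanism described above can be sketched as a small lookup step. The chapter gives no concrete parameter values or names, so everything below (the default values, the area and group names, the factor values) is an illustrative assumption:

```python
# Hedged sketch of scale-factor based technology value adaptation.
# All names and numbers are illustrative, not taken from the chapter.

DEFAULTS = {"feed_speed": 100.0, "infeed": 0.2}  # standard technology values

# Scale factors defined per (rework area, function group): the relation
# between parameters inside the defect area and the remaining areas.
SCALE_FACTORS = {
    ("area3", "defect_removal"): {"feed_speed": 0.5, "infeed": 1.5},
    ("area3", "trace_removal"):  {"feed_speed": 0.8, "infeed": 1.0},
}

def adapt_parameters(area, group):
    """Return technology parameters adapted to the current defect situation;
    unknown (area, group) pairs keep the predefined default values."""
    factors = SCALE_FACTORS.get((area, group), {})
    return {name: value * factors.get(name, 1.0)
            for name, value in DEFAULTS.items()}
```

Because the factors are multiplicative, the same predefined default program can be reused for every defect situation; only the small factor table changes.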
In order to meet the demands on the adaptability of the system, an additional layer was
added to the described control structure. Besides the adaptation of the technology
parameters, an adjustment of the rework intensity was implemented in the control
structure, by which a multi-level approach could be realised. One of four levels of rework
intensity is selected depending on the shape, size and class of the defect.
The following levels are available:
- Level 1: polishing and high-gloss polishing
- Level 2: fine grinding, polishing and high-gloss polishing
- Level 3: light rough grinding, fine grinding, polishing and high-gloss polishing
- Level 4: intensive rough grinding, fine grinding, polishing and high-gloss polishing
An analysis during the realisation of this level-based approach showed that in the majority
of the occurring cases level 2 is a sufficient rework intensity; a rough grinding step is
normally not necessary.
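The level selection can be sketched as a simple mapping from defect properties to a process-step sequence. The chapter states only that the level depends on shape, size and class of the defect; the thresholds and class names below are assumptions for illustration:

```python
# Hedged sketch of the four-level rework-intensity selection.
# Size thresholds and the defect class "deep" are assumed, not from the text.

LEVELS = {
    1: ["polishing", "high-gloss polishing"],
    2: ["fine grinding", "polishing", "high-gloss polishing"],
    3: ["light rough grinding", "fine grinding", "polishing",
        "high-gloss polishing"],
    4: ["intensive rough grinding", "fine grinding", "polishing",
        "high-gloss polishing"],
}

def select_level(defect_size_mm, defect_class):
    """Map a defect to one of the four rework-intensity levels."""
    if defect_class == "deep":           # assumed class needing rough grinding
        return 4 if defect_size_mm > 1.0 else 3
    if defect_size_mm > 0.5:
        return 2
    return 1                             # small surface defects: polish only

def process_steps(defect_size_mm, defect_class):
    """Return the ordered process steps for the selected level."""
    return LEVELS[select_level(defect_size_mm, defect_class)]
```

In line with the analysis above, a rule set like this would route most defects to level 2, invoking rough grinding only for the assumed severe cases.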
Adaptive robot based reworking system 347
After analysing the defect data, the control unit generates the rework program, which consists
of the robot paths and the adapted technology values, and transfers it to the robot control. The
defect-affected areas of the fitting are then reworked. After rework the fitting is inspected again
by the vision system and the whole process starts again. If a fitting still shows a defect on its
surface after a third inspection, it is scrapped regardless of the size of the defect.
7. Acknowledgements
This research and development project is funded by the German Federal Ministry for
Education and Research (BMBF) within the national programme “Research in Production
for Tomorrow” and supervised by the Project Agency for Production and Manufacturing
Technologies, Forschungszentrum Karlsruhe (PTKA-PFT), Germany.
8. References
Krewet, C.; Kuhlenkötter, B.; Schüppstuhl, T. (2005): Optimierung robotergestützter
Fertigungsprozesse – Effiziente Optimierung von Bearbeitungsprogrammen für
komplexe Werkstückgeometrien. wt Werkstattstechnik online, Jahrgang 95 (2005)
H. 3, Springer-VDI-Verlag, Düsseldorf, 2005
Kuhlenkötter, B., Schüppstuhl, T. (2005): Vollautomatisierung durch innovative
Robotersysteme. VDI-Berichte 1892.2, Mechatronik 2005, Innovative
Produktentwicklung, VDI Verlag, Düsseldorf 2005
Kuhlenkötter, B.; Steib, S.; Schüppstuhl, T.; Raedt, H.-W. (2004) : Möglichkeiten zur
automatisierten Fehlererkennung, -markierung und -nachbearbeitung auf
Oberflächen geschmiedeter und gegossener Bauteile. Schmiedejournal, September
2004, ISSN 0933-8330
Schraft, R.D. & Kaun, R. (1999). Automatisierung, Stand der Technik, Defizite und Trends in
der Automatisierungstechnik. Verlagsgruppe Handelsblatt GmbH, Wirtschaftswoche,
Düsseldorf, 1999
VDMA, Fachverband Robotik + Automation (2000). Portrait der Branche 1999/2000,
Frankfurt/Main, 2000
Zhang, X.; Krewet, C.; Kuhlenkötter, B. (2006): Automatic classification of defects on the
product surface in grinding and polishing. International Journal of Machine Tools
& Manufacture, Volume 46, Issue 1, pp. 59-69, Elsevier Science, January 2006
18
1. Introduction
In order for a robotic manipulator to perform useful work, it must be programmed to
accomplish the desired task or motion cycle. Nowadays industrial robots generally require a
tremendous amount of programming to make them useful. Their controllers are very
sophisticated, the commercial robot programming environments are typically closed
systems, and the programming languages vary from manufacturer to manufacturer.
Despite the great evolution of industrial robot controllers, in the majority of industrial
applications robot programming is done in one of the following ways:
• Manual on-line programming;
• Off-line programming;
Manual on-line programming refers to physically teaching a robot the required trajectory
through interaction with a teach pendant or other similar device (Lee & ElMaraghy, 1990).
This kind of programming has the following disadvantages: it is very slow, it requires the
robot to be available, the equipment is difficult to handle, and it demands some practice in
the language used by the robot as well as technical knowledge to understand the operation
of the equipment. These disadvantages are very costly in industry because the production
process has to stop for a long time.
One simple approach to overcoming some of the disadvantages described above is off-line
programming environments. These environments are based on graphical simulation
platforms, in which the programming and execution process is shown using models of the
real objects. Consequently, the robot programmer has to learn only the simulation language
and not any of the robot programming languages. Other benefits of off-line programming
environments include libraries of pre-defined high-level commands for certain types of
applications, such as painting or welding, and the possibility of assessing the kinematic
feasibility of a move, thus enabling the user to plan collision-free paths. The simulation may
also be used to determine the cycle time for a sequence of movements. These environments
usually provide a set of primitives commonly used by various robots, and produce a
sequence of robot manipulator language primitives such as "move" or "open gripper" that
are then downloaded to the respective robot controllers.
An IGES file consists of six sections: Flag, Start, Global, Directory Entry, Parameter Data,
and Terminate. Each entity instance consists of a directory entry and a parameter data entry.
The directory entry provides an index and includes attributes that describe the data. The
parameter data defines the specific entity. All parameter data are defined by fixed-length
records according to the corresponding entity. Each entity instance has bi-directional
pointers between the directory entry and the parameter data section (Vuoskoski, 1996).
an evolving standard which will cover the whole product life cycle in terms of data sharing,
storage and exchange. It is the most important and largest effort ever established in the
engineering domain and will replace the current CAD exchange standards.
b) BINARY format
Binary STL files consist of an 80-byte header line that can be interpreted as a comment string.
The following 4 bytes, interpreted as a long integer, give the total number of facets. What
follows is a normal and 3 vertices for each facet, each coordinate represented as a 4-byte
floating-point number (12 bytes in all). There is a 2-byte spacer between each facet. The result
is that each facet is represented by 50 bytes: 12 for the normal, 36 for the 3 vertices, and 2 for
the spacer.
<STL file> := <name> <facet number> <facet 1> <facet 2> ... <facet n>
<name> := 80-byte file name, padded with blanks
<facet number> := 4-byte long integer
<facet> := <normal> <vertex 1> <vertex 2> <vertex 3> <fill-bytes>
<normal> := Nx, Ny, Nz
<vertex> := X Y Z
<fill-bytes> := 2 fill bytes
• The dimensions of the object needed for off-line programming are easily
imported from the STL files;
• The loss of information about layers, colours and other attributes in the export
to the STL format does not cause significant losses for this specific application;
• The main commercial CAD tools export information in the STL format.
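Following the binary layout given above (80-byte header, 4-byte facet count, 50 bytes per facet), a minimal reader can be sketched. This is an illustrative Python sketch, not the chapter's Java implementation:

```python
import struct

def read_binary_stl(path):
    """Parse a binary STL file into (normal, (v1, v2, v3)) facet tuples,
    following the layout described above: 80-byte header, 4-byte facet
    count, then 50 bytes per facet (12 floats + 2 spacer bytes)."""
    facets = []
    with open(path, "rb") as f:
        f.read(80)                                    # header / comment
        (count,) = struct.unpack("<I", f.read(4))     # total number of facets
        for _ in range(count):
            data = struct.unpack("<12fH", f.read(50)) # normal, 3 vertices, spacer
            normal = data[0:3]
            vertices = (data[3:6], data[6:9], data[9:12])
            facets.append((normal, vertices))
    return facets
```

The facet list returned here is exactly the outline information the off-line programming tool needs for positioning the piece and selecting points.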
commands through many languages that run on different robot controllers. This section
explains how programming techniques help to generalise these structures and to organize
machine control commands sequentially.
4.1.1 Factory
This pattern can be understood as an object factory: it centralizes the creation of the objects
most likely to change during the project. The motivation was the need for a class that
centralizes object creation, prohibiting direct instantiation inside the business classes. The
underlying problem is that if a method signature changes in a specific class, every direct
instantiation of that class has to be changed, which becomes hard and complex work when
the object is instantiated by many classes. The proposed solution was to develop a
centralized factory class, responsible for creating and returning instance references for all
requested objects. If an object's construction changes, the change is confined to the factory
class, in the getInstanceX() method, where "X" is the name of the class that models the
object's identity.
The strategy was to analyse the list of classes with a high probability of change and to
implement their getInstance() methods. All getInstance() methods create the objects inside
a centralized factory class. Figure 2 shows the class Program requesting, through the
Factory, a reference to a new Point object instance.
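A minimal sketch of this idea follows. The chapter's tool is written in Java with getInstanceX() methods; this Python version, with an assumed Point constructor, only mirrors that convention:

```python
class Point:
    """A trajectory point; the coordinate attributes are an assumption."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

class Factory:
    """Centralizes object creation: business classes never call Point()
    directly, so a change to Point's constructor is confined here."""
    @staticmethod
    def get_instance_point(x, y, z):
        return Point(x, y, z)

# A business class such as Program requests points through the factory:
p = Factory.get_instance_point(505.5, 200.0, 450.5)
```

If the Point signature later changes, only `get_instance_point` must be updated, not every business class that uses points.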
4.1.2 Singleton
This pattern prohibits the duplication of objects in RAM; its meaning is "unique instance".
It was conceived because, during program execution, many objects are recreated without
any real need, causing memory and processing overhead. The main problem attacked by
this pattern is the waste of memory caused by the under-utilization of objects already
created in memory. It also decreases the overhead caused by the Java virtual machine's
garbage collector service, which monitors and cleans up unreferenced objects in memory.
The proposed solution is to verify, during object creation, whether the object already exists
in memory. The direct consequences of this practice are a decreased processing overhead
caused by the garbage collector and a longer object life cycle, resulting in a better allocation
of memory resources. This strategy is represented in figure 3.
Fig. 3. If the object already exists in memory, a reference to it is returned; otherwise the
object is created and a pointer to the new object is returned.
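The behaviour described in Fig. 3 can be sketched as follows; the class name is an illustrative assumption:

```python
class RobotController:
    """Example singleton (the class name is illustrative). If the object
    already exists in memory a reference to it is returned; otherwise it
    is created first, as described in Fig. 3."""
    _instance = None

    @classmethod
    def get_instance(cls):
        if cls._instance is None:        # created only on the first request
            cls._instance = cls()
        return cls._instance

a = RobotController.get_instance()
b = RobotController.get_instance()
assert a is b  # both references point to the same object in memory
```

Every later `get_instance()` call returns the same object, so nothing is recreated and the garbage collector has less unreferenced work to clean up.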
4.1.3 Facade
The facade pattern decouples the user interface from the business classes through an
acknowledged default data-entry point. The meaning of "facade" here can be understood
as "consensus". It was implemented to make it possible to connect different interfaces to
the same business classes through a centralized data input, decoupling the interface layer
from the business layer. The problem is that if the interface changes, the business classes
also have to be changed to remain compatible. This practice suggests the creation of a
class that hides the system's complexity from the interface layer and allows interaction
with any kind of input, such as command prompts, database queries, applets or encoded
data-input applications. The direct consequences are a decreased interface-interaction
complexity and the decoupling of the interface from the business layer.
Off-line Programming Industrial Robots Based in the Information
Extracted From Neutral Files Generated by the Commercial CAD Tools 357
Thus all dependencies between the entities involved in the use cases are transparent to the
user. Another visible gain is that software maintenance becomes easier, resulting in faster
interface modification and in the creation of new interface inputs.
The strategy was to implement a charger class that knows how to talk to the business layer
and is known by the interfaces that want to use the service. This makes it possible to change
the presentation layer without large adjustments to the software. Figure 4 shows how the
charger class talks to the business layer, and its centralized data-input model.
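A sketch of this "charger" facade follows; the business class and its methods are simplified assumptions:

```python
class Program:
    """Business class (simplified assumption): stores motion points."""
    def __init__(self):
        self.points = []

class Charger:
    """Facade (the "charger" class in the text): the single entry point
    that every interface (prompt, applet, database query) uses to reach
    the business layer."""
    def __init__(self):
        self._program = Program()

    def add_point(self, x, y, z):
        self._program.points.append((x, y, z))

    def point_count(self):
        return len(self._program.points)

# Any interface talks only to the facade, never to Program directly:
charger = Charger()
charger.add_point(505.5, 200.0, 450.5)
```

A new interface, say a web front end, would need only a reference to `Charger`; the business classes behind it stay untouched.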
4.1.4 MVC
MVC is an acronym for Model/View/Controller. It is used to develop object-oriented
interfaces, keeping model and views consistent through the controller.
The pattern is basically composed of three object types:
• The Model, which is the application object.
• The View, which is its graphic representation.
• The Controller, which defines the interface behaviour in reaction to a data input.
Before the MVC pattern was created, these functionalities were grouped in a single object.
The pattern separates the views from the model by establishing a subscription/notification
protocol between them. A view has to reflect the model's state, and if the model changes,
all dependent views have to be notified. The main goal of this pattern is to link an
unrestricted number of views to a model, allowing many different representations of the
same data. It makes it possible to insert new views without huge code modifications.
The aim of the pattern is to separate the data (Model) from the user interface (View) and the
application flow (Controller). The gain of this practice is that the same business logic can be
accessed through different interfaces, as shown in figure 5.
In the MVC architecture, the Model does not know how many Views are showing its state.
The pattern makes it easy to append new human-interface technologies to a specific
solution, holograms for example, by only attaching the needed hardware and adding the
minimum code required to access the model's behaviour.
Interactive applications require dynamic interfaces. Therefore, reducing the coupling
between interface and business classes results in less development effort and easier
interface maintenance in new versions with new functionalities.
Fig. 5. One model, a table of 3D point coordinates, presented simultaneously by different
views (e.g. a topological and a geometric representation); the views send modification
requests to the model and receive change notifications from it.
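The subscription/notification protocol between model and views can be sketched as follows; the model holds a point list, as in Fig. 5, while the view class is an illustrative assumption:

```python
class Model:
    """Holds the data (a list of points) and notifies every subscribed
    view when the data changes."""
    def __init__(self):
        self._points = []
        self._views = []

    def attach(self, view):
        """Subscribe a view; the model never knows how many exist."""
        self._views.append(view)

    def add_point(self, point):
        """Handle a modification request, then send change notifications."""
        self._points.append(point)
        for view in self._views:
            view.update(self._points)

class CountView:
    """One possible view: shows only how many points exist."""
    def __init__(self):
        self.text = ""

    def update(self, points):
        self.text = f"{len(points)} points"

model = Model()
view = CountView()
model.attach(view)
model.add_point((1.0, 0.0, 1.0))
```

Adding a second, completely different view, e.g. one that renders the geometry, requires only another `attach` call and no change to the model.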
4.1.5 Memento
This pattern stores the most recent interactions with the software, making it possible to
undo completed operations; it can be understood as "Undo". The need was perceived when
a user, building a program line by line, wants to undo some interactions in the code. The
problem is how to store the working structures. The proposed solution was to store the
most recent object states in a list of limited size, from which they can be restored if
necessary; the user can thus undo a fixed number of executed steps. The main objective of
this pattern is a user-friendly interface.
The strategy is to create a collection that memorizes the user's steps, defining a structure
that stores and restores the objects' attributes, keeping them persistent in memory. This
collection clones the list shown in figure 6, keeping the inserted and deleted motion
commands persistent for the undo procedure.
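A sketch of this bounded undo list follows; the motion-command strings and the history size are illustrative assumptions:

```python
class CommandList:
    """Motion-command list with a bounded undo history (Memento).
    The command strings below are illustrative."""
    def __init__(self, history_size=10):
        self.commands = []
        self._history = []                  # stored mementos (state snapshots)
        self._history_size = history_size   # limited size, as in the text

    def _save(self):
        """Clone the current state before every modification."""
        self._history.append(list(self.commands))
        if len(self._history) > self._history_size:
            self._history.pop(0)            # forget the oldest snapshot

    def add(self, command):
        self._save()
        self.commands.append(command)

    def undo(self):
        """Restore the most recently saved state, if any."""
        if self._history:
            self.commands = self._history.pop()

prog = CommandList()
prog.add("MoveL Point0")
prog.add("MoveL Point1")
prog.undo()  # the command list is back to ["MoveL Point0"]
```

Because the history list is bounded, the user can undo a fixed number of steps without the snapshots growing without limit in memory.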
4.1.6 Command
This pattern decouples interface commands from the events that cause them. The name of
the pattern gives an idea of its functionality. The Command pattern is necessary because the
user triggers system functionalities through data-input interface objects (menus, buttons,
check boxes, ...). The user-interface generation tool has menu and button objects responsible
for the "commands" entered by the user, but the action itself is the responsibility of the
business classes; the interface objects should not implement the action explicitly, because
only the business layer should know how to do it.
The adopted solution is shown in figure 7. Each instantiated interface object knows only a
reference to the method responsible for executing the user's desired action, which decouples
the interface from the business-layer functions. The chosen strategy is that the interface
objects know only the system's default input point (see the Facade pattern, section 4.1.3).
Interface objects invoked by the user thus reference the method located in the business
layer, using the common input point, the charger class.
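A sketch of this decoupling follows; the button class and the business-layer action are illustrative assumptions:

```python
class Button:
    """Interface object: holds only a reference to the method that
    performs the action, never the action logic itself."""
    def __init__(self, command):
        self._command = command

    def click(self):
        return self._command()   # forward the event to the business layer

# A business-layer action (illustrative name):
log = []
def generate_program():
    log.append("program generated")
    return "ok"

button = Button(generate_program)  # the event is bound to the action
result = button.click()            # the button only forwards the call
```

Rebinding the button to a different business method changes its behaviour without touching the interface code, which is exactly the decoupling the pattern provides.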
4.2 Frameworks
A framework is a working layer in which all knowledge about a predefined activity is
encapsulated. OO characteristics such as dynamic binding, polymorphism and inheritance
make it easy to model such structures; these techniques were widely used here to create
reusable, abstract and specialized objects. Making objects interact and communicate
dynamically is the most important thing in building a framework. The framework concept
was crucial to the development of the robot code generator presented in this research,
because it has to generate many different robot-specific languages. Sometimes, to implant
the framework model, it is necessary to redesign the whole application to support its
interaction. This is why the OODP and framework documentation is so important: it
prevents a complete remodelling of the projected system when a localized change demand
appears, such as inserting the language formulation of a new robot brand.
4.3 RoBott
The RoBott Trajectory Generator (figure 9) was designed using OO concepts allied with the
UML specification in the project analysis phase. In addition, all the pattern concepts shown
above were considered in the software development phase and applied using an iterative,
incremental development process (Jacobson et al., 1998). The language used to develop this
off-line programming tool is Java Enterprise Edition (JEE) version 1.4.2_03, and the
development environment is Oracle JDeveloper version 9.0.5.1 (Build 1605).
5. Results
The tests were carried out with industrial robots from two different manufacturers, ABB
and Mitsubishi. The complete off-line programming test was done using the ABB robot.
First, it was necessary to set up the references of the ABB IRB140 working area
(Murray, 1994), represented in figure 10 as a green rectangle. To obtain the robot's real
working-area references, the piece to be worked on has to be positioned. The blue point
represents the robot base, where it is connected to axis 1. After that, the neutral file is read
to extract the outline of the piece, which can then be positioned using rotation and
translation matrices, respecting the working-area bounds. After piece placement, the
selection layer (yellow rectangle) can be moved to touch the points of interest, selecting
the respective coordinates that will be used to generate the specific robot program. The
file transfer to the ABB Robot Controller (IRC5) was done using RoBott's FTP capability.
Fig. 10. Selected points from a top of a Cube placed in the ABB IRB140 work area.
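The placement step described above, rotating and translating the extracted outline inside the working-area bounds, can be sketched as follows. This is an illustrative sketch only: the function names, the square outline, and the bounds are hypothetical, not taken from RoBott.

```python
import math

def make_pose(theta_deg, tx, ty):
    """Homogeneous 2D transform: rotation by theta_deg, then translation."""
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def transform(points, T):
    """Apply a 3x3 homogeneous transform to a list of (x, y) points."""
    return [(T[0][0]*x + T[0][1]*y + T[0][2],
             T[1][0]*x + T[1][1]*y + T[1][2]) for x, y in points]

def inside_work_area(points, xmin, xmax, ymin, ymax):
    """Check that every placed point respects the working-area bounds."""
    return all(xmin <= x <= xmax and ymin <= y <= ymax for x, y in points)

# Place a 10 x 10 mm square outline extracted from the neutral file.
outline = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
T = make_pose(90.0, 505.5, 200.0)     # rotate 90 degrees, then translate
placed = transform(outline, T)
print(inside_work_area(placed, 400, 700, 100, 400))
```

In 3D the same idea uses 4x4 homogeneous matrices; the 2D case corresponds to the top-view placement shown in figure 10.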
After setting up the working-area reference, extracting the cube boundary from the STL file, and selecting the point cloud, it is possible to generate a robot program (ABB, 2000) such as the following example:
%%%
VERSION:1
LANGUAGE:ENGLISH
%%%
MODULE cubo
PERS robtarget Point0:=[[505.5,200.0,450.5], [0.043,0.384,0.921,0.021],…];
PERS robtarget Point1:=[[505.5,210.0,450.5], [0.043,0.384,0.921,0.021],…];
PERS robtarget Point2:=[[515.5,210.0,450.5], [0.043,0.384,0.921,0.021],…];
PERS robtarget Point3:=[[515.5,200.0,450.5],[0.043,0.384,0.921,0.021],…];
PROC main()
ConfL \Off;
MoveL Point0,v10,fine,Tool0;
MoveL Point1,v10,fine,Tool0;
MoveL Point2,v10,fine,Tool0;
MoveL Point3,v10,fine,Tool0;
MoveL Point0,v10,fine,Tool0;
ConfL \On;
ENDPROC
ENDMODULE
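A generator for modules of this shape can be sketched as below. This is a minimal illustration, not RoBott's actual implementation: the quaternion is reused from the listing as a fixed placeholder, and the remaining robtarget fields are elided with "..." just as in the listing.

```python
def generate_rapid_module(name, points, quat="[0.043,0.384,0.921,0.021]",
                          speed="v10", tool="Tool0"):
    """Emit a minimal RAPID module that visits each selected point with a
    linear move and closes the path back to the first point."""
    lines = ["%%%", "VERSION:1", "LANGUAGE:ENGLISH", "%%%", "MODULE " + name]
    for i, (x, y, z) in enumerate(points):
        lines.append("PERS robtarget Point%d:=[[%.1f,%.1f,%.1f],%s,...];"
                     % (i, x, y, z, quat))
    lines.append("PROC main()")
    lines.append("ConfL \\Off;")
    for i in list(range(len(points))) + [0]:   # close the contour
        lines.append("MoveL Point%d,%s,fine,%s;" % (i, speed, tool))
    lines += ["ConfL \\On;", "ENDPROC", "ENDMODULE"]
    return "\n".join(lines)

pts = [(505.5, 200.0, 450.5), (505.5, 210.0, 450.5),
       (515.5, 210.0, 450.5), (515.5, 200.0, 450.5)]
print(generate_rapid_module("cubo", pts))
```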
Some tests were also done with a Mitsubishi Movemaster industrial robot, but these were restricted to the generation of Melfa Basic IV (Mitsubishi, 2000) motion commands. The 3D coordinate input on the Mitsubishi platform was done by teaching.
6. Conclusion
Robot programming is still hard work (Wörn, 1998). Some causes are the difficulty of reserving available equipment to develop programs, complex handling, and the technological background needed to learn it. This research demonstrates that manufacturing-cell integration can be accelerated, that communication between different robot platforms can be optimized, and that the cost of specialized personnel can be reduced.
The off-line programming method was created to minimize cell-integration time. However, contemporary off-line programming has not brought significant gains to manufacturing-cell integration, nor has it reduced the robot programmer's working hours. Contemporary programming tools for manufacturing cells were designed without the abstraction necessary to generalize the robot programming problem: the tools bundled with robot kits can program and interact only with their own platform, files, and libraries.
The demand is growing for a unified tool that works across different manufacturers' solutions, making robot programming easier for companies. The development focus is thus on portable software capable of writing robot programs in different manufacturers' languages. At present, this research is able to extract reference coordinates from a neutral CAD project file and generate motion commands for different robot platforms, through different input interfaces, running on different operating systems.
Summarizing the process:
• The point cloud is extracted from an STL file.
• The coordinate references are transformed by scaling, translation and rotation matrices.
• The points of interest are selected according to the planned task.
• The proprietary robot programs are generated.
• The programs are sent to the robot controller and run.
Thus, effective off-line programming was tested successfully.
7. Future Research
The abstraction used in developing the robot programming interface allows it to be applied to many different robot tasks. Practicing a right point selection in the
Off-line Programming Industrial Robots Based in the Information
Extracted From Neutral Files Generated by the Commercial CAD Tools 363
8. References
Gamma, E., Helm, R., Johnson, R. & Vlissides, J. (1995). Design Patterns: Elements of Reusable
Object-Oriented Software. Addison Wesley Longman, ISBN 0-201-63361-2, 1st
edition.
Jacobson, I., Booch, G. & Rumbaugh, J. (1998). The Unified Software Development Process, Addison Wesley Longman.
Lee, D. M. A. & ElMaraghy, W. H. (1990). ROBOSIM: a CAD-based off-line programming
and analysis system for robotic manipulators. Computer-Aided Engineering Journal,
Vol. 7, No. 5, (October 1990), pp. 141-148, ISSN 0263-9327
Murray, R. M. (1994). A Mathematical Introduction to Robotic Manipulation. CRC Press,
Florida, 1st edition.
Owen, J. (1994). STEP: An Introduction. Information Geometers Ltd, (April 1994).
Schenck, D. A. & Wilson, P. R. (1994). Information Modeling: The EXPRESS way. Oxford
University Press, (1994)
Smith, B. M., Rinaudot, G. R., Reed, K. A. & Wright, T. (1988). Initial Graphics Exchange
Specification (IGES). Version 4.0. IGES/PDES Organization, (June 1988),
Gaithersburg, MD.
Vuoskoski, J. (1996). Exchange of Product Data between CAD systems and a Physics
Simulation Program. Tampere University of Technology, Pori Unit, (April 1996), pp. 19-23.
Wörn, H. (1998). Automatic off-line programming and motion planning for industrial robots.
In ISR98, 29th International Symposium on Robotics 1998. ISR Press.
ABB (2000). ABB Robotics AB, RAPID Reference Manual – General Characteristics, Västerås, 2000, 172p (3HAC 5780-1).
IGES 1 (1980), NBS, U.S. Department of Commerce, Washington DC 20234, 1980.
Mitsubishi (2000). Mitsubishi Electronics Corporation. Mitsubishi Industrial Robot –
Instruction Manual, Detailed explanations of functions and operations. Nagoya, 2000.
152p (BFP-A5992-C).
19
1. Introduction
The recent shortage of experienced workers and the trend toward better working
environments have accelerated the introduction of robots into manufacturing sites. This in
turn has highlighted the need for more efficient methods for teaching robots. Conventionally, industrial robots are taught online. With this method, the production line must be halted for every teaching sequence, which adversely affects production efficiency. More recently, off-line teaching using computer-based robot models has become common (Kobayashi et al., 2001). While off-line teaching does not stop the operation of the production line, teaching
demand is intensive for the production lines where many different products are
manufactured in small lots, leading to high operator workload and pushing up production
costs (Nagao & Onoue, 2000).
The authors have been engaged in research into an automatic NC data generator named
Kawasaki Common Off-line NC data Generator (KCONG) (Nagao et al., 2000a). KCONG
generates action data for a robot based on the design information from a three-dimensional
CAD unit without the need for any human intervention or teaching. Recently, the authors
have developed some teaching-free robot systems by combining KCONG with welding,
cutting, painting and other industrial robots (Nagao et al., 2000b).
This chapter introduces one such system, a teaching-free robot system for welding large box
structures using KCONG, which was delivered to a client and is now operating successfully
(Nagao et al., 2001).
2. System Overview
2.1 Applicable operations
The system is designed for large box structures more than 4 meters in length and to which
many stiffeners are attached (see Fig. 1). Horizontal and vertical fillet welding and flat,
vertical and horizontal butt-welding can be performed on square, single-bevel and flared
grooves. Weld lines may be straight, curved or a combination. Boxing at the ends,
connecting beads, applying multi-layer beads and continuous / intermittent welding are
also possible.
[Figure: System configuration. A three-dimensional CAD unit in the office supplies structure information and welding-line information over Ethernet to the robot NC data generator KCONG, which draws on welding-condition, torch-motion and touch-sensing databases and delivers NC data to the factory, where the robot, carried on vertical, lateral and longitudinal axes and observed by an ITV camera, works on the workpiece.]
[Figure: KCONG processing flow. CAD data reading turns CAD data (shape data of workpieces) into a workpiece shape data file; welding data reading turns welding data into a welding data file; welding data editing and robot simulation then produce the NC data file and an operating instruction sheet.]
(3) Concatenating weld lines: To minimize the air cut time, weld lines are concatenated
where possible.
When the above tasks are completed, the workpiece configuration and weld lines are
displayed on the screen three-dimensionally. It is possible to zoom in and out of the image
on the display, and change the viewpoint as needed.
3.2 Generating NC data
When creation of NC data is requested, KCONG generates it as follows:
(1) Determining the torch orientation: The torch orientation is determined for each
weld line based on the conditions surrounding the line, and the position of the
base (root) of the robot and its configuration are determined from the torch
motion database.
(2) Determining welding areas: Simulation is carried out at small intervals along each
weld line to check if there is interference between the robot and the workpiece. If
there is interference, KCONG returns to step (1) to select a different torch orientation;
this process is repeated until the interference is resolved. If interference persists, the
torch orientation that minimizes the unwelded area is chosen. In Fig. 4, part B near
weld line PQ and part C that is attached to part A interfere with the welding torch,
making welding impossible in two areas.
[Fig. 4. Plane and bird's-eye views of weld line PQ on part A: part B and attached part C obstruct the torch, so two areas of the line are marked "welding impossible" and the remaining areas "welding possible".]
[Figure: Touch-sensing detecting points ①–⑤ on the welding line between an attached part and the main plate.]
(5) Simulating entire welding procedure: After acquiring from the databases the
welding conditions and torch motion at the ends of the weld lines, KCONG
simulates the approach, touch-sensing, torch motion at the ends, regular welding,
and departure in order to verify the NC data that has been just created. See Fig. 7 for
an example simulation display.
(1) Determining the welding sequence: The welding sequence is determined so that all
the preset conditions are fulfilled and the cycle time is minimized.
(2) Outputting the NC data file: The NC data is output as a file in the robot language
format. The file is then transferred to the line controller via the LAN.
(3) Printing the operating instruction sheet and error report: Instructions for the
operator, error reports and operating time estimates are printed.
The function tree of KCONG is shown in Fig. 8.
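Step (2)'s interference check, which samples each weld line at small intervals, can be illustrated with a toy model. The spherical obstacle, the clearance value, and the function names below are invented for illustration; KCONG's real check simulates the full robot and workpiece geometry.

```python
import math

def collides(p, obstacles, clearance=5.0):
    """Toy interference test: the torch tip 'collides' when it comes within
    `clearance` of a spherical obstacle given as ((cx, cy, cz), radius)."""
    return any(math.dist(p, c) < r + clearance for c, r in obstacles)

def weldable_fraction(p_start, p_end, obstacles, step=1.0):
    """Walk the weld line at small intervals and return the fraction of
    sampled points that can be welded without interference."""
    n = max(int(math.dist(p_start, p_end) / step), 1)
    ok = sum(
        not collides(tuple(a + i / n * (b - a)
                           for a, b in zip(p_start, p_end)), obstacles)
        for i in range(n + 1))
    return ok / (n + 1)

# A part near the middle of weld line PQ blocks the torch there.
frac = weldable_fraction((0, 0, 0), (100, 0, 0), [((50, 0, 0), 10.0)])
print(round(frac, 2))   # about 0.71: the middle of the line is unweldable
```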
4. Line Controller
The line controller controls the entire system. It has the following five functions:
(1) Selecting the correct data for the workpiece: When a workpiece is fed onto the
operation stage, the line controller selects and loads the appropriate data for that
workpiece from the NC data created by KCONG.
(2) Teaching the reference points: This function teaches the robot to measure the three
reference points on the workpiece set by the three-dimensional CAD unit. The
process starts by aligning the crosshairs of the ITV camera installed on the moving
unit with the reference points, thus measuring the workpiece placement in the
longitudinal and lateral directions. The touch-sensing function then enables
automatic measurement of the reference points in the vertical direction.
(3) Controlling sequence: This is the function that sequentially executes welding for each weld
line on the workpiece. After correcting the NC data for the first weld line based on the
measured reference point positions, the line controller sends the data to the robot controller
with the welding start command. While the first weld line is being worked, the NC data
for the next weld line is sent to the robot controller to eliminate any wait.
(4) Troubleshooting: If a problem arises while a weld line is being worked, the sequence
for that weld line or for the entire workpiece is skipped.
(5) Monitoring: The line controller monitors the operation of the robot for any errors that
might occur; the status of the robot is displayed in real-time on the screen as it operates.
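Function (3)'s look-ahead, where the NC data for the next weld line is sent while the current line is being welded, can be sketched with a simple queue. The callback structure and names here are hypothetical:

```python
from collections import deque

def run_lines(weld_lines, send, weld):
    """Sequence-controller sketch: the NC data for line i+1 is sent before
    line i finishes, so the robot never waits for a data transfer."""
    queue = deque(weld_lines)
    if not queue:
        return
    send(queue[0])                 # preload data for the first line
    while queue:
        current = queue.popleft()
        if queue:
            send(queue[0])         # transfer next line during welding
        weld(current)

log = []
run_lines(["L1", "L2", "L3"],
          send=lambda l: log.append(("send", l)),
          weld=lambda l: log.append(("weld", l)))
print(log)
```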
To increase the rate of automated welding for complex workpieces, the robot has three types of
welding torches: straight, angled at 22°, and angled at 45°. They can be changed remotely from
the line controller. The robot is also equipped with a high-voltage touch-sensing unit, a nozzle
cleaner and a wire cutter. A wire reel is mounted on the lateral table of the moving unit.
6. Conclusion
A welding robot system for large box structures has been developed and put to practical use
to automate welding and to reduce labor requirements. The system automatically generates
the robot operation data based on the design information from a three-dimensional CAD
unit without the need for teaching by humans. This system was delivered to a client in
September, 1999, since which time the system has been operating smoothly, allowing the
factory to automate more than 80% of its welding work.
To reduce the effort of preparing welding information on a three-dimensional CAD unit, the authors have already implemented KCONG on SolidWorks, a mechanical three-dimensional CAD system. This new version has a welding-path definition function with which operators can prepare welding information in a simple operation. The authors are now developing a redundancy control function that automatically configures the position and orientation of the robot base and positioning unit, so that the robot moves in a favorable posture.
7. References
Kobayashi, M.; Ogawa, S. & Koike, M. (2001). Off-Line Teaching System of a Robot Cell for
Steel Pipe Processing, Advanced Robotics, Vol.15, No.3, (2001) (327-331).
Nagao, Y. & Onoue, K. (2000). Teaching Method and Open Architecture Controller for
Industrial Robots, Journal of the Robotics Society of Japan, Vol.18, No.4, (May 2000)
(24-28), ISSN 0289-1824.
Nagao, Y.; Urabe, H.; Honda, F. & Kawabata, J. (2000a). Benefits of the Automated Robot
Off-line Teaching System: KCONG, The Structural Technology, Vol.13, No.145, (June
2000) (44-48), ISSN 0916-0426.
Nagao, Y.; Urabe, H.; Honda, F. & Kawabata, J. (2000b). Development of a Panel Welding
Robot System for Subassembly in Shipbuilding Utilizing a Two-Dimensional CAD
System, Advanced Robotics, Vol.14, No.5, (2000) (333-336).
Nagao, Y. ; Urabe, H. ; Honda, F. & Kawabata, J. (2001). Development of a Teaching-free
Robot System for Welding Large Box Structures, Proceedings of the 32nd International
Symposium on Robotics, pp. 971-976, Seoul, April 2001.
20
1. Introduction
To be useful, industrial robots must meet positioning accuracy requirements for their given
applications. Most industrial robots can return repeatedly to the same location in space quite
precisely; they typically meet published repeatability specifications on the order of 0.5 mm.
On the other hand, most industrial robots cannot move as precisely to a specified (x, y, z)
position in space; they typically meet published accuracy specifications roughly an order of
magnitude higher (typically 10 mm or worse) (Owens, 1994).
In many cases, published repeatability specifications meet positioning accuracy needs in
industrial robot applications such as spot welding, spray painting, and assembly. However,
published positioning accuracy specifications often do not meet industry needs, when using
off-line programming rather than manual teaching methods. As a result, to meet application
positioning accuracy requirements, most robot users turn to off-line calibration to bring
positioning accuracy close to robot repeatability levels (Owens, 1994). Off-line calibration
generally consists of the following five steps:
1. Move the robot into several poses (positions and orientations).
2. Measure and record the precise 3D workspace coordinates of the robot tool center
point (TCP) at each pose.
3. Read and record the corresponding position of the robot, from the robot controller, at
each pose.
4. Use the differences between measured 3D workspace coordinates and corresponding
positions read from the robot controller to correct the parameters in the kinematic
model used by the controller to position the robot.
5. During robot operation, use the corrected kinematic model to compute adjusted
positions in space and then command the robot to move to the adjusted positions
(which causes the robot to move to the actual desired positions).
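Steps 4 and 5 can be illustrated in one dimension: fit an error model to commanded-versus-measured positions, then invert it to obtain adjusted commands. Real calibration identifies many kinematic parameters simultaneously; the single scale-and-offset model below is a deliberately simplified, hypothetical stand-in.

```python
def fit_scale_offset(commanded, measured):
    """Least-squares fit of measured = a*commanded + b, a toy stand-in for
    step 4's kinematic-parameter identification."""
    n = len(commanded)
    mx, my = sum(commanded) / n, sum(measured) / n
    sxx = sum((x - mx) ** 2 for x in commanded)
    sxy = sum((x - mx) * (y - my) for x, y in zip(commanded, measured))
    a = sxy / sxx
    return a, my - a * mx

def corrected_command(target, a, b):
    """Step 5: invert the model so the robot lands on the true target."""
    return (target - b) / a

# Simulated robot with a 1% scale error and a 0.4 mm offset (ground truth).
true_a, true_b = 1.01, 0.4
poses = [100.0, 200.0, 300.0, 400.0]
meas = [true_a * p + true_b for p in poses]
a, b = fit_scale_offset(poses, meas)
cmd = corrected_command(250.0, a, b)   # command that actually reaches 250 mm
print(round(true_a * cmd + true_b, 6))
```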
The number of calibration poses used and the corresponding link positions for each pose
must be selected to provide the information needed to accurately compute the kinematic
model parameters (Robinson, Orzechowski, James, & Smith, 1997). For example, Owens
(1994) used 25 different poses, while Rocadas and McMaster (1997) used 50 different poses.
To measure pose positions precisely enough to complete off-line calibration, robot manufacturers
generally use expensive measurement devices, such as theodolites, coordinate measurement
machines, or laser tracking systems (Mayer & Parker, 1994; Nakamura, Itaya, Yamamoto, &
Koyama, 1995; Owens, 1994). Such systems generally cannot be used for recalibrating robots in
factory environments, due to cost and space limitations. However, recalibration may be needed
after robot repair, collisions with the workpiece or other objects in the workspace environment, or
over time as encoders or servo systems drift (Owens, 1994).
As a result, prior research offers many low-cost systems for calibrating robots off-line within
factory environments. Low-cost methods for measuring robot position during calibration
include cables (Owens, 1994), cameras (van Albada, Lagerberg, & Visser, 1994), dial gauges
(Xu & Mills, 1999), and trigger probes with constraint planes (Zhong & Lewis, 1995).
After off-line calibration, industrial robots, run open-loop without additional control or
intervention, have met the in-process accuracy needs of most current industrial applications
(spot welding, material handling, workpiece handling, and assembly).
When open-loop use of robots has not met a given industrial application’s needs, closed-
loop control or passive compliance has been used. For example, for arc welding, laser-based
vision systems have been used to locate and track welding seams (Agapakis, Katz,
Friedman, & Epstein, 1990). For assembly, passive compliance devices, such as remote
center compliance (RCC) devices, have been used to align components for mating
(Bruyninckx et al., 2001; Boubekri & Sherif, 1990).
However, on-line sources of robot position error have been largely ignored. Collisions with the
workpiece or other objects in the workplace environment, encoder errors, or servo drift can cause
robot position to drift out of specification, leading to product faults, scrap, machine damage, and
additional costs. Without in-process monitoring, in-process robot position errors are generally not
detected until product faults are detected during product inspection.
Generally, sensors and methods used for calibrating robots cannot be used for in-process
monitoring, because the mechanisms interfere with in-process robot operation (e.g., cable
measuring systems, pointers, and calibration plates) or do not work well during in-process
operations. Thus, typically, the only countermeasures currently used to prevent in-process
errors are regularly scheduled robot recalibration or production line stops when product
faults are detected in inspection. However, detecting product faults after they occur is
costly. Regular calibration, when not needed, is also expensive. Shop-floor recalibration of a
single robot can take six hours or more (Owens, 1994); this wasted manpower is an unnecessary cost.
To achieve greater operational efficiencies, new non-invasive, non-contact methods for
monitoring in-process robot position are needed (Caccavale & Walker, 1997). Early work
focused on using model-based methods, with existing built-in robot joint position and
velocity sensors, to detect errors with respect to ideal dynamic observer or parameter
estimation models (Caccavale & Walker, 1997). However, modeling accuracy, sensor
characteristics, computational loads, and response times can limit the sensitivity and
usefulness of model-based methods (Visinsky et al., 1994).
As an alternative, optical methods, with external sensors, such as laser systems, cameras, or
machine vision systems, have also been proposed (e.g., Dungern & Schmidt, 1986).
However, optical systems can also suffer from severe limitations. Laser and optical sensors
can be difficult to place, since their optical paths to robot end effectors are easily blocked by
workpieces or parts of the robot. In addition, smoke or sparks from welding, or fluids in
other manufacturing processes, can interfere with proper laser and optical sensor operation.
A Non-Contact Method for Detecting On-Line Industrial Robot Position Errors 375
2. Purpose
To overcome the limitations of optical position monitoring methods, the investigator
developed a simple, low-cost method for detecting in-process robot position errors, which
uses a low-cost Doppler motion detector unit placed at one or more critical robot work
positions. The small detector can be easily located near critical work positions. Detector
position and line of sight are not critical, since the detector measures and generates electrical
signals due to large-scale robot motion, rather than precise robot end effector position.
Motion signals from the motion detector unit (MDU) are monitored as a time series, and
statistical quality control methods indicate when robot position drift or other process faults
occur. When faults are detected, signals can be generated to halt the robot and trigger
alarms. Alarms signal the need for robot service or recalibration. Halting the robot at the
earliest sign of possible position errors can help prevent product faults, scrap, machine
damage, and additional costs.
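The statistical quality control step can be sketched as a Shewhart-style individuals chart: control limits are computed from in-control calibration runs, and an out-of-limits measure raises the alarm. The baseline numbers below are invented for illustration; in this chapter the monitored measure is the RSS value developed in Experiment 3.

```python
def control_limits(calibration_values, k=3.0):
    """Shewhart-style limits (mean +/- k sigma) from in-control data."""
    n = len(calibration_values)
    mean = sum(calibration_values) / n
    sd = (sum((v - mean) ** 2 for v in calibration_values) / (n - 1)) ** 0.5
    return mean - k * sd, mean + k * sd

def monitor(value, limits):
    """Return True (halt robot, trigger alarm) when a value is out of limits."""
    lo, hi = limits
    return not (lo <= value <= hi)

baseline = [2.9, 3.1, 2.7, 3.3, 3.0, 2.8, 3.2, 3.0]   # in-control RSS values
lims = control_limits(baseline)
print(monitor(3.1, lims), monitor(9.5, lims))
```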
The method may be more robust, in industrial environments, than optical condition monitoring
methods, since radar signals can penetrate smoke, light, and other optical contaminants.
3. Experimental Setup
Figure 1 shows the experimental setup used to develop and test the proposed position
error detection method. A Seiko D-TRAN RT-2000 robot was used for testing. The
Seiko robot has a cylindrical configuration with four axes R (radial), T (rotational), Z
(vertical), and A (gripper rotation).
The given robot controller stores a single calibration constant related to the fully extended length
of the robot R axis. To calibrate the robot, the user must attach a rigid fixture, which has a precise
length, to the robot and reset a stored calibration constant. Subsequently, on power-up the robot
must be homed to recalibrate the robot TCP to the zero location of the workspace coordinate
system. Homing moves the robot in each of the four axes to fixed limit switches and, thus,
recalibrates the robot’s internal servo encoders for the power-on position of the TCP zero point.
Other commercial industrial robots use similar means to re-adjust the TCP to an accurate zero
location. For example, Motoman, Inc. uses a special fixture (ToolSight) containing three LED
sensors to re-center their welding robots (Forcinio, 1999).
Axis Repeatability Resolution
R ± 0.025 mm (0.001 in) 0.025 mm (0.001 in)
T ± 0.025 mm (0.001 in) 0.003 deg
Z ± 0.025 mm (0.001 in) 0.012 mm (0.0005 in)
A ± 0.025 mm (0.001 in) 0.005 deg
Table 1. Seiko D-TRAN RT-2000 repeatability and accuracy specifications.
For the experiments conducted, after robot homing, the robot was commanded to move from
home position to a test point in the robot workspace coordinate system. As shown in Figure 2, a
dial gauge, with a scale in English units, was used to accurately measure relative robot positions
around the given test position. Figure 2 also shows the sensor circuit, composed of a Doppler
radar motion detector and a low-pass filter, which was developed for measuring robot motion.
The Doppler radar motion detector used was a model MDU 1620 Motion Detector Unit from
Microwave Solutions (http://www.microwave-solutions.com).
As the robot moved from home position to the test position, the output signal from the Doppler motion detector/low-pass filter combination was measured for 4 seconds with a 0.1 msec sampling period (10 kHz sampling
frequency). As a result, according to the Nyquist Theorem, the sampled data can be used to
reconstruct frequency components up to 5 kHz in the original signal (Swanson, 2000).
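The Nyquist limit can be verified numerically: at a 10 kHz sampling rate, any component above 5 kHz is indistinguishable from a lower-frequency alias. This small check is illustrative and not part of the original experiment.

```python
import math

fs = 10_000.0            # 10 kHz sampling rate, as in the experiment
n = range(40)            # a few sample indices

# A 6 kHz tone lies above the 5 kHz Nyquist limit; its samples coincide
# (up to sign) with those of a 4 kHz tone, i.e. it aliases to 4 kHz.
tone_6k = [math.sin(2 * math.pi * 6000 * i / fs) for i in n]
alias_4k = [-math.sin(2 * math.pi * 4000 * i / fs) for i in n]

print(max(abs(a - b) for a, b in zip(tone_6k, alias_4k)) < 1e-9)
```

This is why the anti-aliasing filter described next must band-limit the detector output before sampling.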
To prevent aliasing during sampling, an electronic filter was used to band-limit the output signal
from the Doppler radar motion detector before sampling, as shown in Figure 3. The low-pass filter
uses an amplifier stage to increase motion detector output signal level, a fourth-order low-pass
filter stage to band limit the measured signal and to eliminate high-frequency noise, and a final
amplifier stage to match the output of the filter to the input range of the data collection system.
Five experiments were run to develop and test the proposed method for detecting on-line robot
position errors in a single axis direction. Future studies will consider multi-axis robot position
errors. Experiments 1 and 2 were run to verify that the robot used for testing met the
manufacturer’s published repeatability and resolution specifications, to verify that a dial gauge
could be used to precisely measure robot position, and to characterize the drift characteristics of
the robot over an extended period of cycling. Experiment 3 was run to develop a measure of robot
position from sensor signals and to determine the precision of the sensor signal measure for robot
moves to a single test position. Experiment 4 was run to establish a linear regression relationship
between the robot position measure, which was developed in Experiment 3, and actual (induced)
robot position errors. Experiment 5 was run to develop a robot position error detection model,
from Experiment 3 and Experiment 4 results, and to test the prediction model for random robot
moves about a single test position. The same experimental setup was used for all five experiments.
The results of each experiment were used to adjust subsequent experiments, if needed. Since an
incremental methodology was used, intermediate conclusions are reported with results from each
experiment. Final conclusions are reported in the conclusions section at the end of the paper.
4. Experiment 1
The objectives of Experiment 1 were to:
1. Experimentally verify the repeatability of the Seiko D-TRAN RT-2000 robot used for
testing,
2. Experimentally verify that a dial gauge can be used to precisely measure robot
position, and
3. Experimentally determine if there is significant drift in the robot during cycling.
The method used to experimentally determine robot repeatability and drift characteristics
consisted of five steps:
1. Command the robot to move to a test position 20 times,
2. Measure the position of the robot using a dial gauge,
3. Cycle the robot, between the workspace origin and the test position, for 3 hours,
4. Command the robot to move to the test position 20 times.
5. Measure the position of the robot using a dial gauge.
To simplify experimental setup and testing, the robot was moved to minimum Z-axis
position, fully extended in the R-axis direction, and then commanded to move cyclically, in
the T-axis direction only, between home position and a test point. The test point selected
was with the robot at minimum Z-axis position, fully extended in the R-axis direction, and
rotated to the 90-degree T-axis position. Minimum Z-axis position was selected to minimize
distance between the sensor and the end effector, at the given test point. The fully-extended
R-axis position was chosen to increase the potential for position errors, since, for the given
robot, position accuracy decreases with distance from the workspace origin (Seiko, 1986).
The 90-degree T-axis position was chosen so that the sensor could be mounted on the same
table as the robot, to minimize potential sensor measurement errors due to robot vibrations.
Table 2 shows robot position dial gauge measurements taken at the test position before and
after cycling the robot for 3 hours. Step 0, in Table 2, indicates the initial position to which
the dial gauge was set, with the robot resting at the correct test position.
A one-way analysis of variance between the two groups of data shows that there is evidence of a
statistically significant difference between the two group means (α = 0.05, p-value < 0.001).
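The one-way ANOVA used here reduces to a short F-statistic computation. The before/after dial-gauge readings below are hypothetical, chosen to mimic a small drift like the one detected in this experiment:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square divided
    by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

before = [0.500, 0.501, 0.499, 0.500, 0.502]
after  = [0.504, 0.505, 0.503, 0.504, 0.506]   # drifted by about 0.004
f = one_way_anova_f([before, after])
print(f > 5.32)   # exceeds the F(1, 8) critical value at alpha = 0.05
```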
5. Experiment 2
The objective of Experiment 2 was to:
1. Experimentally verify that the dial gauge could accurately detect single-axis robot
position errors, for the given robot.
The method used to experimentally verify that the dial gauge could accurately detect single-
axis position errors consisted of two steps:
1. Command the robot to move to the test position +/- 0.03 T-axis degrees, in 0.003
degree increments (the robot’s T-axis accuracy specification is 0.003 degrees, which
corresponds to 0.001 inches at the given test position).
2. Measure the position of the robot using the dial gauge.
Table 3 shows the 21 positions about the test point to which the Seiko D-TRAN RT-2000
robot was commanded to move (values in millimeters), as well as the corresponding dial
gauge measurements (in inches). Note that the robot takes position commands as (x, y, z)
Cartesian coordinate values, with (x, y, z) values in millimeters.
An analysis of variance shows evidence of a statistically significant relationship between
dial gauge measurements and degree values (α = 0.05, p-value < 0.0001).
Step T-axis position (deg) X (mm) Y (mm) Dial gauge (in)
1 -89.970 0.313 -597.056 0.488
2 -89.973 0.281 -597.056 0.490
3 -89.976 0.250 -597.056 0.492
4 -89.979 0.219 -597.056 0.492
5 -89.982 0.188 -597.056 0.494
6 -89.985 0.156 -597.056 0.495
7 -89.988 0.125 -597.056 0.497
8 -89.991 0.094 -597.056 0.499
9 -89.994 0.063 -597.056 0.500
10 -89.997 0.031 -597.056 0.501
11 -90.000 0.000 -597.056 0.502
12 -90.003 -0.031 -597.056 0.504
13 -90.006 -0.063 -597.056 0.506
14 -90.009 -0.094 -597.056 0.507
15 -90.012 -0.123 -597.056 0.508
16 -90.015 -0.156 -597.056 0.509
17 -90.018 -0.188 -597.056 0.511
18 -90.021 -0.219 -597.056 0.513
19 -90.024 -0.250 -597.055 0.514
20 -90.027 -0.281 -597.055 0.515
21 -90.030 -0.313 -597.055 0.516
Table 3. Robot position dial gauge measurements for Experiment 2.
Equation 1 gives the equation of the least squares line shown in Figure 5:
Predicted dial gauge value = -41.69 – 0.4688 * Robot position (1)
The model explains 99.74% of the variability in dial gauge measurements. Random
measurement errors or other unexplained factors account for only a small amount of the
observed variability in the data.
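The least-squares fit behind Equation 1 can be reproduced directly. The three rows used below are taken from Table 3 (steps 1, 11, and 21); with only three equally spaced, perfectly linear points the fit is exact, so the slope (about -0.467 in/deg) comes out close to, but not identical with, the -0.4688 obtained from all 21 points.

```python
def linfit(xs, ys):
    """Least-squares line ys ~ b0 + b1*xs, plus the R^2 statistic."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b0, b1, 1 - ss_res / ss_tot

# T-axis positions (deg) and dial readings (in) from Table 3, steps 1/11/21.
xs = [-89.970, -90.000, -90.030]
ys = [0.488, 0.502, 0.516]
b0, b1, r2 = linfit(xs, ys)
print(round(b1, 4))
```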
Fig. 5. Dial gauge measurements (inches) vs. robot T-axis position (degrees).
6. Experiment 3
The objectives of Experiment 3 were to:
1. Develop a measure from sensor signal samples for determining robot position,
2. Determine how well the sensor signal measure represents robot position,
3. Establish a mean signal to represent the robot moving to the correct test position, and
4. Calibrate the sensor for robot motions without position errors.
The method used to experimentally achieve Experiment 3 objectives consisted of seven steps:
1. Cycle the robot 20 times between home position and the nominal test position.
2. Measure robot position with the dial gauge.
3. Measure the sensor signal as the robot moves between home and the nominal test
position. Sample the sensor signal at 0.1 msec intervals.
4. Average the values of the 20 sensor signals at each sampling time step to find the
mean sensor signal value at each sampling time step.
5. Compute a root sum of squares error measure for each of the 20 sensor signals by
summing squared error for each time sample with respect to the mean sensor signal
value at each sampling time sample.
6. Compare standard deviation of the root sum of squares error measure for the 20
sensor signals to standard deviation of the 20 dial gauge readings.
7. Use the mean sensor signal and the standard deviation of the root sum of squares
error measure as a sensor calibration standard for proper robot motion.
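Steps 4–7 above can be sketched as follows. The signals are synthetic stand-ins (a sine plus noise) for the sampled sensor voltages; the array sizes and names are illustrative assumptions, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 synthetic calibration signals, one row per robot cycle
n_cycles, n_samples = 20, 1000
base = np.sin(np.linspace(0.0, 10.0, n_samples))
signals = base + 0.01 * rng.standard_normal((n_cycles, n_samples))

# Remove any DC bias, as described for Figures 6 and 7
signals -= signals.mean(axis=1, keepdims=True)

# Step 4: mean calibration signal, sample by sample
c_mean = signals.mean(axis=0)

# Step 5: error measure of Eq. (4) for each signal (sum of squared
# deviations from the mean calibration signal at each time sample)
rss = np.sum((signals - c_mean) ** 2, axis=1)

# Step 7: calibration standard for proper robot motion
rss_mean, rss_std = rss.mean(), rss.std(ddof=1)
```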
Figure 6 shows three representative sensor calibration signals, ci and the mean calibration signal cm.
Figure 7 shows an expanded view of Figure 6 in the region near 2.5 seconds. The signals in
Figures 6 and 7 were filtered, in Matlab, to remove any DC bias. The mean value of each
signal was computed and subtracted from each of the signal’s sample values.
382 Industrial Robotics - Programming, Simulation and Applications
As a measure of individual signal variation with respect to the mean of all 20 signals, a root-sum-of-squares error value was computed over the 10,001 samples between 2.5 and 3.5 seconds:

RSSci = ∑_{n=25000}^{35000} [ci(n) − cm(n)]²  (4)
The 10,001 samples between 2.5 and 3.5 seconds were used, rather than all 40,000 samples, to
reduce computation time and to improve signal-to-noise ratio. Table 4 shows the 20 RSSci
measurements and the 20 corresponding dial gauge measurements taken for Experiment 3.
An analysis of variance indicates that there is no statistically significant relationship
between RSSci and dial gauge measurements (α = 0.05, p-value = 0.5462). The analysis of
variance indicates that, with 95% confidence, variation in both measures is probably due to
random measurement error. The mean of the 20 RSSci values is 2.98 and their standard deviation is 2.01; the mean of the 20 dial gauge measurements is 0.50015 and their standard deviation is 0.00049.
Experiment 3 results indicate that:
1. The dial gauge is a more precise method for measuring robot position at the given
test point.
2. However, if the sample of 20 calibration signals accurately represents the population
of all sensor signals produced by the robot moving to the given test position, the
RSSci measure developed may be usable for non-invasive non-contact in-process
robot position error detection, using statistical X control chart techniques.
For the RSSci measure to be usable as an X chart quality measure, when errors occur,
individual RSSci measures, on average, must lie at least three standard deviations from the
mean for the 20 sensor calibration signals (9.01 or larger) (Besterfield, 2001).
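The 9.01 limit follows directly from the calibration statistics (mean 2.98, standard deviation 2.01) via the usual three-sigma rule of an individuals control chart; a minimal check, using only the numbers quoted in the text:

```python
# Calibration statistics reported for the 20 RSSci measures
rss_mean = 2.98
rss_std = 2.01

# Three-sigma upper control limit for an individuals (X) chart
upper_limit = rss_mean + 3 * rss_std  # 9.01

def is_position_error(rss_value, limit=upper_limit):
    """Flag a motion whose RSS error measure exceeds the control limit."""
    return rss_value > limit
```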
Signal RSSci Dial Gauge
c1 11.2241 0.500
c2 3.2244 0.499
c3 2.7502 0.500
c4 3.2171 0.500
c5 2.9028 0.500
c6 2.3334 0.500
c7 1.9100 0.500
c8 1.3153 0.501
c9 1.9060 0.500
c10 2.4277 0.500
c11 2.1314 0.500
c12 1.9294 0.500
c13 2.1702 0.501
c14 2.5620 0.500
c15 2.5384 0.500
c16 3.0110 0.501
c17 2.9737 0.501
c18 3.3289 0.500
c19 2.7221 0.500
c20 2.9593 0.500
Table 4. Error measures for calibration signals.
7. Experiment 4
The objectives of Experiment 4 were to:
1. Determine the feasibility of using sensor signals to detect in-process robot position
errors, and
2. Experimentally establish a relationship between position errors and sensor signals.
The method used to achieve Experiment 4 objectives consisted of three steps:
1. Command the robot to move from the home position +/- 0.03 T-axis degrees to the
test position +/- 0.03 T-axis degrees, in 0.003 degree increments (the robot’s T-axis
accuracy specification is 0.003 degrees).
2. Measure the position of the robot using a dial gauge.
3. Simultaneously measure the signal (ei) generated by the sensor.
The robot was commanded to move from offset positions about the home position to
offset positions about the test position to simulate on-line position errors that would
occur due to collisions with the workpiece or other objects in the workplace
environment, encoder errors, or servo drift.
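The commanded offsets in step 1 form a simple arithmetic sequence; a sketch of how the 21 positions could be generated (positions in degrees; variable names are illustrative):

```python
import numpy as np

# -0.030 to +0.030 T-axis degrees in 0.003-degree increments
# (the robot's T-axis accuracy specification) gives 21 offsets.
offsets = np.round(np.arange(-0.030, 0.030 + 1e-9, 0.003), 3)

home_positions = 0.0 + offsets    # offsets about the home position
test_positions = -90.0 + offsets  # offsets about the test position
```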
Data collected from Experiment 3 and Experiment 4 was analyzed using statistical methods
to establish a relationship between position errors and sensor signals. The resulting
relationship was then used to detect or predict on-line robot position errors (Experiment 5).
The robot was commanded to move incrementally to 21 positions about, and including, the
test position. For Experiment 4, due to the time required to collect and process collected data
by hand, a single replication of the experiment was conducted. However, to determine the
repeatability of Experiment 4 measurements, for Experiment 5, the robot was commanded to
move to the same 21 positions, but in random, rather than incremental, order.
Figure 8 shows three representative sensor error signals, ei and the mean sensor calibration
signal cm. Figure 9 shows an expanded view of Figure 8 in the region near 2.5 seconds. Table
5 shows the 21 positions from which the robot was commanded to move, the 21 positions to
which the robot was commanded to move, and the corresponding final robot workspace x-
coordinate values to which the robot was commanded to move. The final robot workspace
y-coordinate values were the same for all 21 positions to which the robot was commanded to
move (-597.056 mm). Table 5 also shows the 21 resulting RSSei measurements and the 21
corresponding dial gauge measurements for Experiment 4.
Fig. 11. RSSei vs. robot workspace x-coordinate values for position errors.
RSSei = ∑_{n=25000}^{35000} [ei(n) − cm(n)]²  (5)
Figure 9 shows that sensor signals for both positive and negative final robot workspace x-
coordinate values lag the calibration mean cm, whereas e11, the signal generated when the robot
moves without offset from the home and test positions, closely matches the calibration mean. Both
positive and negative final robot workspace x-coordinate values may lead to signals that lag the
calibration mean because the generated sensor signals depend on both the distance between the
sensor and the moving robot arm and the velocity of the moving robot arm.
Figure 10 shows that the RSSei measures calculated for any of the offset robot motions exceed the single-point error limit established in Experiment 3. The method detects the induced robot position errors down to the repeatability specification of the robot. In addition, by excluding the point in Figure 10 corresponding to e11, the non-error condition signal, an approximately linear relationship between RSSei and the final robot workspace x-coordinate can be fitted by least squares (Equation 6):

RSSei = 19.87 + 15.31 × x-coordinate (6)
8. Experiment 5
The objective of Experiment 5 was to:
1. Test the robot position error detection model developed in Experiment 4.
The method used to test the error detection model consisted of six steps:
1. Command the robot to move from the home position +/- 0.03 T-axis degrees to
the test position +/- 0.03 T-axis degrees, in 0.003 degree increments, and in
random order.
2. For each move, measure the position of the robot using a dial gauge.
3. Simultaneously measure the signal (ri) generated by the sensor.
4. Calculate the error detection measure ( RSSri ) for the given sensor signal.
5. For each output signal, use the developed error detection model to predict whether
or not the robot was in an error condition.
6. Compare error detection model predictions to actual robot positions to determine the
system’s capability for detecting position errors.
Table 6 shows the 21 positions from which the robot was commanded to move, the 21
positions to which the robot was commanded to move, and the corresponding final robot
workspace x-coordinate values to which the robot was commanded to move. The final robot
workspace y-coordinate values were the same for all 21 positions to which the robot was
commanded to move (-597.056 mm).
Signal  From (degrees)  To (degrees)  X-Coordinate (mm)  RSSri  Dial Gauge (inches)
r1 -0.030 -90.030 -0.313 15.5917 0.515
r2 0.000 -90.000 0.000 6.9508 0.502
r3 -0.027 -90.027 -0.281 15.8215 0.514
r4 -0.015 -90.015 -0.156 19.4724 0.508
r5 -0.024 -90.024 -0.250 16.5845 0.513
r6 0.018 -89.982 0.188 25.6551 0.493
r7 0.021 -89.979 0.219 25.6022 0.491
r8 0.027 -89.973 0.281 26.9904 0.489
r9 -0.021 -90.021 -0.219 18.4504 0.511
r10 -0.018 -90.018 -0.188 18.2697 0.510
r11 0.006 -89.994 0.063 22.2440 0.498
r12 0.012 -89.988 0.125 23.5641 0.496
r13 0.030 -89.970 0.313 27.1052 0.488
r14 0.024 -89.976 0.250 25.5870 0.491
r15 -0.003 -90.003 -0.031 22.8124 0.504
r16 0.009 -89.991 0.094 23.5282 0.498
r17 -0.009 -90.009 -0.094 20.8457 0.506
r18 -0.012 -90.012 -0.125 20.3149 0.506
r19 -0.006 -90.006 -0.063 22.2316 0.504
r20 0.015 -89.985 0.156 24.6732 0.494
r21 0.003 -89.997 0.031 23.0340 0.501
Table 6. RSSri and dial gauge measurements for Experiment 5.
Table 6 also shows the 21 resulting RSSri measurements and the 21 corresponding dial
gauge measurements for Experiment 5.
Figure 12 shows the 21 RSSri measurements plotted as a function of the 21 corresponding
final robot workspace x-coordinate values to which the robot was commanded to move.
Figure 12 also shows the RSSei error detection limit established in Experiment 3 (9.01).
RSSri measurements were calculated using the procedure described for Experiment 3:
RSSri = ∑_{n=25000}^{35000} [ri(n) − cm(n)]²  (7)
From Experiment 3 results, the error detection model predicts a robot position error for any
RSSri value greater than 9.01. In addition, from Equation 6,
RSSri = 19.87 + 15.31 × x-coordinate (8)
Finally, from the x-coordinate prediction, the direction and magnitude of the single-axis
robot position error can also be predicted. Negative x-coordinate values indicate that the
robot moved past the desired position; positive values indicate that the robot did not reach
the desired position (with respect to the home position). The difference between the
predicted and desired x-coordinate indicates the magnitude of the position error. For
Experiment 5, the desired x-coordinate is always zero. Therefore, the value of the predicted
x-coordinate indicates the magnitude of the position error.
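Putting the Experiment 3 control limit together with Equation 8 gives the complete detection-and-prediction rule. The function below is an illustrative wrapper around the constants quoted in the text; only the numbers 9.01, 19.87, and 15.31 come from the source.

```python
def predict_position_error(rss_r, limit=9.01):
    """Classify one RSSri measurement and predict the position error.

    Returns (error_detected, predicted_x_mm), where predicted_x_mm
    inverts Eq. (8): RSSri = 19.87 + 15.31 * x-coordinate.
    """
    error_detected = rss_r > limit
    predicted_x = (rss_r - 19.87) / 15.31
    # Negative x: the robot moved past the desired position;
    # positive x: the robot fell short of it.
    return error_detected, predicted_x
```

For r13 in Table 6 (RSSri = 27.1052), for example, this predicts an error with x ≈ 0.47 mm against the commanded 0.313 mm offset.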
Table 7 shows commanded (actual) and predicted x-coordinate values for Experiment 5.
Table 7 also shows actual errors and predicted errors, whether or not the direction (sign) of
the predicted error is correct, and the difference between the predicted error magnitude and
the actual (induced) error magnitude (errors due to the prediction model).
Experiment 5 results show that:
1. The robot position error detection model developed in Experiment 4 predicts
Experiment 5 errors with 100% accuracy, error direction with 81% accuracy, and
error magnitude to within 0.223 mm.
Experiment 5 results indicate that the method developed can reliably identify robot
position errors at robot repeatability levels. The method can also, to some degree,
identify the direction of an error relative to the desired (commanded) position and the
magnitude of the error.
In future studies, in addition to improvements recommended in Experiment 4 results, the
proposed method can be improved by using standard X control chart techniques, including
subgroup sampling and averaging. Using standard X control chart techniques could reduce
random variation between prediction model and in-process measurements and, thereby,
improve the accuracy and reliability of all three aspects of error detection and identification
(error detection, error direction, and error magnitude).
9. Conclusions
The investigator developed a non-invasive, non-contact method for detecting on-line
industrial robot position errors. The method uses a low-cost sensor to detect single-axis
position errors. The sensor, composed of a low-cost microwave Doppler radar detector and
a low-pass filter, converts robot motion into electronic signals that are A/D converted and
processed using a computer.
Computer processing reduces captured signals into root-sum-of-squares error measures,
with respect to a mean sensor calibration signal. Root-sum-of-squares error measures are
compared to a threshold value that indicates, statistically, a 99.7% probability that an on-line
position error has occurred. The threshold value can be adjusted to meet different
application needs.
For the prototype constructed, and the experiments run, the sensor detected position errors with
100% accuracy, error direction with 81% accuracy, and error magnitude to within 0.223 mm.
The proposed method offers a low-cost non-invasive non-contact means for detecting on-
line in-process robot position errors. Accurate in-process robot position error detection
indicates the need for corrective action: robot homing, recalibration, or repair.
The proposed method offers advantages over other possible methods. The sensor
developed uses a microwave Doppler radar detector, which is generally less expensive
and/or more robust in industrial environments than optical sensors, such as laser
tracking systems or cameras. The proposed method is generally more practical for in-
process error detection than contact devices used for robot recalibration, such as cable
systems, trigger probes, or dial gauges.
The proposed method may eliminate the need for regularly scheduled robot homing or
recalibration, thus improving productivity. At the same time, the proposed method
identifies error conditions when they exist, reducing scrap, which also lowers costs and
improves productivity.
Future proposed enhancements include:
1. Improving sensor design,
2. Improving sensor placement,
3. Detecting multi-axis position errors by choosing different sensor placement strategies
or by using multiple sensors at a given position,
4. Fully automating the data collection and analysis process,
5. Using control chart techniques to improve error detection capabilities, particularly
advanced error direction and error magnitude prediction capabilities, and
6. Considering different methods for removing DC bias from sensor signals.
10. References
The following references were cited to acknowledge relevant related research. They do not
necessarily reflect the views or beliefs of the author.
Agapakis, J. E., Katz, J. M., Friedman, J. M., & Epstein, G. N. (1990). Vision-aided robotic
welding: an approach and a flexible implementation. International Journal of Robotics
Research, 9(5), 17-34.
Besterfield, D. H. (2001). Quality Control, Sixth Edition. New Jersey: Prentice-Hall.
Boubekri, N., & Sherif, W. (1999). A position control algorithm for a microcomputer
controlled SCARA robot – part 1. Computers & Industrial Engineering, 19(1-4), 477-
480.
Bruyninckx, H., Lefebvre, T., Mihaylova, L., Staffetti, E., De Schutter, J., & Xiao, J. (2001). A
roadmap for autonomous robotic assembly, Proceedings of the 4th IEEE International
Symposium on Assembly and Task Planning, Fukuoka, Japan, May 28-29, pp. 49-54.
Caccavale, F., & Walker, D. (1997). Observer-based fault detection for robot manipulators,
Proceedings of the IEEE International Conference on Robotics and Automation,
Albuquerque, NM, April, pp. 2881-2887.
Dungern, O. V., & Schmidt G. K. (1986). A new scheme for on-line fault detection and
control of assembly operation via sensory information, Proceedings of the 25th
Conference on Decision and Control, Athens, Greece, December, pp. 1891-1892.
Forcinio, H. (1999). Tighter process control ensures weld quality. Robotics World, 17(1),
17-20.
Mayer, J. R., & Parker, G. A. (1994). A portable instrument for 3-D dynamic robot
measurements using triangulation and laser tracking. IEEE Transactions on Robotics
and Automation, 10(4), 504-516.
Microwave Solutions (2002). Small PCB Style - MDU1620. Retrieved May 23, 2002, from
http://www.microwave-solutions.com.
Millman, J., & Halkias, C. C. (1972). Integrated electronics: analog and digital circuits and
systems. New York: Mc-Graw Hill.
Nakamura, H., Itaya, T., Yamamoto, K., & Koyama, T. (1995). Robot autonomous error
calibration method for off-line programming system, IEEE International Conference
on Robotics and Automation, pp. 1775-1982.
Owens, J. (1994). Robotrak: calibration on a shoestring. Industrial Robot, 21(6), 10-13.
Robinson, P., Orzechowski, P., James, P. W., & Smith, C. (1997). An automated robot
calibration system, IEEE International Symposium on Industrial Electronics, 1, pp.
SS285-SS290.
Rocadas, P. S., & McMaster, R. S. (1997). A robot cell calibration algorithm and its use with a
3D measuring system, IEEE International Symposium on Industrial Electronics, 1, pp.
SS297-SS302.
Seiko Instruments USA, Inc. (1986). Seiko D-TRAN Intelligent Robots: RT-3000 Installation,
Programming, and Operation Manual. Torrance, CA: Seiko Instruments USA, Inc.
Swanson, D. C. (2000). Signal processing for intelligent sensor systems. New York: Marcel
Dekker.
van Albada, G. D., Lagerberg, J. M., & Visser, A. (1994). Eye in hand robot calibration.
Industrial Robot, 21(6), 14-17.
Visinsky, M. L., Cavallaro, J. R., & Walker, I. D. (1994). Robotic fault detection and fault
tolerance: a survey, Reliability Engineering and System Safety, 46, 139-158.
Xu, W., & Mills, J. K. (1999). A new approach to the position and orientation calibration of
robots, Proceedings of the 1999 IEEE International Symposium on Assembly and Task
Planning, Porto, Portugal, July.
Zhong, X-L., & Lewis, J. M. (1995). A new method for autonomous robot calibration, IEEE
International Conference on Robotics and Automation.
For example, if the pegs tilt to the left and i=1, one-point contact state (l-1) exists; if contacts exist at i=1 and i=4, two-point contact state (l-8) exists (Fig. 3). If the pegs tilt to the right and i=1, one-point contact state (r-4) exists; if contacts exist at i=1 and i=4, two-point contact state (r-8) exists (Fig. 3). During the insertion, the dual-peg may undergo several contact states. In this paper, certain states are chosen to complete a dual peg-in-hole assembly process: with the pegs tilting to the left, state (l-1), state (l-5), state (l-8), a possible three-point contact state and a mating state; with the pegs tilting to the right, state (r-4), state (r-6), state (r-8), a possible three-point contact state and a mating state. Their three dimensional and two dimensional charts are shown in Fig. 3 and Fig. 4.
(a) Pegs tilt to the left: α1 ∈ [0°, 90°] ∪ [270°, 360°], α2 ∈ [90°, 270°]; r1: (x1, y1, −h1), r2: (x2, y2, 0).
(b) Pegs tilt to the right: α1 ∈ [0°, 90°] ∪ [270°, 360°], α4 ∈ [90°, 270°]; r1: (x1, y1, −h1), r4: (x4, y4, 0).
Fig. 3. Assembly states of a dual peg-in-hole in three dimensions.
Fig. 4. Assembly states of a dual peg-in-hole in two dimensions: (a) pegs tilt to the left; (b) pegs tilt to the right.
Fig. 5. Boundary states: (a) boundary state of state l-1; (b) boundary state of state r-4.
fi = μFi
F̂i is the wrench of the contact force; f̂i is the wrench of the friction force corresponding to the contact force. The real parts of the above equations (7) and (8) express the contact forces and the dual parts express the moments. Thus, we can obtain
F̂i: (Fix, Fiy, Fiz, (Fi × ri)x, (Fi × ri)y, (Fi × ri)z)
f̂i: (fix, fiy, fiz, (fi × ri)x, (fi × ri)y, (fi × ri)z)
(1) One-point contact states l-1 and r-4:
State l-1: the compact forms of the wrenches of the contact forces at the contact point are described as follows,
(f1cα1, f1sα1, 0, h1f1sα1, −h1f1cα1, x1f1sα1 − y1f1cα1)
(0, 0, −μf1, −y1μf1, x1μf1, 0)
Fx + f1cα1 = 0
Fy + f1sα1 = 0
Fz − μf1 = 0
Mx + h1f1sα1 − y1μf1 = 0        (9)
My − h1f1cα1 + x1μf1 = 0
Mz + x1f1sα1 − y1f1cα1 = 0
From Eq. (9), we can obtain the following relationship equations,
Fx/Fz = −cα1/μ
My/Fz = h1cα1/μ − x1            (10)

Fy/Fz = −sα1/μ
Mx/Fz = −h1sα1/μ + y1           (11)
where α1 ∈ [0°, 90°] ∪ [270°, 360°], r1: (x1, y1, −h1).
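The consistency of Eqs. (9)–(11) is easy to verify numerically: choose arbitrary values for the contact geometry (the numbers below are test values, not experimental data), compute the assembly forces and moments from Eq. (9), and check the ratios of Eqs. (10) and (11).

```python
from math import cos, sin, radians

# Arbitrary test values for contact state l-1
mu, f1 = 0.2, 3.0            # friction coefficient, contact force
a1 = radians(30.0)           # alpha_1, inside [0 deg, 90 deg]
h1, x1, y1 = 0.5, 0.1, 0.2   # contact point geometry

# Assembly forces/moments solved from the equilibrium equations (9)
Fx = -f1 * cos(a1)
Fy = -f1 * sin(a1)
Fz = mu * f1
Mx = -(h1 * f1 * sin(a1) - y1 * mu * f1)
My = h1 * f1 * cos(a1) - x1 * mu * f1
Mz = -(x1 * f1 * sin(a1) - y1 * f1 * cos(a1))

# Ratios of Eq. (10) and Eq. (11)
assert abs(Fx / Fz - (-cos(a1) / mu)) < 1e-12
assert abs(My / Fz - (h1 * cos(a1) / mu - x1)) < 1e-12
assert abs(Fy / Fz - (-sin(a1) / mu)) < 1e-12
assert abs(Mx / Fz - (-h1 * sin(a1) / mu + y1)) < 1e-12
```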
State r-4: the wrenches of the contact forces at the contact point are as follows,
(f4cα4, f4sα4, 0, h4f4sα4, −h4f4cα4, x4f4sα4 − y4f4cα4)
(0, 0, −μf4, −y4μf4, x4μf4, 0)
The wrenches of the assembly forces are as follows: (Fx, Fy, Fz, Mx, My, Mz).
According to the static equilibrium equations, we can obtain
Fx + f4cα4 = 0
Fy + f4sα4 = 0
Fz − μf4 = 0
Mx + h4f4sα4 − y4μf4 = 0        (12)
My − h4f4cα4 + x4μf4 = 0
Mz + x4f4sα4 − y4f4cα4 = 0
Thus, we can obtain the following relationship equations,
Fx/Fz = −cα4/μ
My/Fz = h4cα4/μ − x4            (13)

Fy/Fz = −sα4/μ
Mx/Fz = −h4sα4/μ + y4           (14)
where α4 ∈ [90°, 270°], r4: (x4, y4, −h4).
(2) Two-point contact states l-8 and r-8:
State l-8: the compact forms of the wrenches of the contact forces at the contact points are described as follows,
(f1cα1, f1sα1, 0, h1f1sα1, −h1f1cα1, x1f1sα1 − y1f1cα1)
(0, 0, −μf1, −y1μf1, x1μf1, 0)
(f4cα4, f4sα4, 0, 0, 0, x4f4sα4 − y4f4cα4)
(0, 0, −μf4, −y4μf4, x4μf4, 0)
The wrenches of the assembly forces are as follows: (Fx, Fy, Fz, Mx, My, Mz).
According to the static equilibrium equations, we obtain the expressions of the moments My, Mx in terms of the insertion forces as:
My/(DFz) = −μ/(cα1 − cα4) · [(x4 − x1)/D + h1cα1/(μD)] · Fx/Fz + (μx1cα4 − μx4cα1 − h1cα1cα4)/(μD(cα1 − cα4))    (15)
Mx/(DFz) = −μ/(sα1 − sα4) · [−(y4 − y1)/D − h1sα1/(μD)] · Fy/Fz + (−μy1sα4 + μy4sα1 + h1sα1sα4)/(μD(sα1 − sα4))    (16)
Eqs. (15) and (16) show linear relations, respectively. For Eq. (15), when Fx/Fz = −cα1/μ, this equation reduces to Eq. (10): My/(DFz) = h1cα1/(μD) − x1/D. When Fx/Fz = −cα4/μ, we can write My/(DFz) = −x4/D (Fig. 7(a)). For Eq. (16), when Fy/Fz = −sα1/μ, this equation reduces to Eq. (11): Mx/(DFz) = −h1sα1/(μD) + y1/D. When Fy/Fz = −sα4/μ, we can write Mx/(DFz) = y4/D (Fig. 7(b), Fig. 7(c)).
where α1 ∈ [0°, 90°] ∪ [270°, 360°], α4 ∈ [90°, 270°], r1: (x1, y1, −h1), r4: (x4, y4, 0).
State r-8: the wrenches of the contact forces at the contact points are as follows,
(f1cα1, f1sα1, 0, 0, 0, x1f1sα1 − y1f1cα1)
(0, 0, −μf1, −y1μf1, x1μf1, 0)
(f4cα4, f4sα4, 0, h4f4sα4, −h4f4cα4, x4f4sα4 − y4f4cα4)
(0, 0, −μf4, −y4μf4, x4μf4, 0)
The wrenches of the assembly forces are as follows: (Fx, Fy, Fz, Mx, My, Mz).
According to the static equilibrium equations, the following equations can be obtained,
My/(DFz) = −μ/(cα4 − cα1) · [(x1 − x4)/D + h4cα4/(μD)] · Fx/Fz + (μx4cα1 − μx1cα4 − h4cα1cα4)/(μD(cα4 − cα1))    (17)
Mx/(DFz) = −μ/(sα4 − sα1) · [−(y1 − y4)/D − h4sα4/(μD)] · Fy/Fz + (−μy4sα1 + μy1sα4 + h4sα1sα4)/(μD(sα4 − sα1))    (18)
Eqs. (17) and (18) show linear relations. For Eq. (17), when Fx/Fz = −cα1/μ, we can write My/(DFz) = −x1/D; when Fx/Fz = −cα4/μ, we can write My/(DFz) = h4cα4/(μD) − x4/D.
Study on Three Dimensional Dual Peg-in-hole in Robot Automatic Assembly 405
For Eq. (18), when Fy/Fz = −sα1/μ, we can write Mx/(DFz) = y1/D; when Fy/Fz = −sα4/μ, we can write Mx/(DFz) = −h4sα4/(μD) + y4/D.
where α1 ∈ [0°, 90°] ∪ [270°, 360°], α4 ∈ [90°, 270°], r1: (x1, y1, 0), r4: (x4, y4, −h4).
(3) The compact forms of the wrenches of three-point contact can be described as follows,
(f1cα1, f1sα1, 0, h1f1sα1, −h1f1cα1, x1f1sα1 − y1f1cα1)
(0, 0, −μf1, −y1μf1, x1μf1, 0)
(f2cα2, f2sα2, 0, 0, 0, x2f2sα2 − y2f2cα2)
(0, 0, −μf2, −y2μf2, x2μf2, 0)
(f3cα3, f3sα3, 0, h3f3sα3, −h3f3cα3, x3f3sα3 − y3f3cα3)
(0, 0, −μf3, −y3μf3, x3μf3, 0)
Because three-point contact states are transient and considered trivial, their analysis is omitted.
Three dimensional contact states and their force/moment conditions (Table 2, excerpt):

One-point contact states:
(k-1): Fx/Fz = −cα1/μ, My/Fz = j·h1cα1/μ − x1; Fy/Fz = −sα1/μ, Mx/Fz = −j·h1sα1/μ + y1
(k-2): Fx/Fz = −cα2/μ, My/Fz = (1 − j)·h2cα2/μ − x2; Fy/Fz = −sα2/μ, Mx/Fz = −(1 − j)·h2sα2/μ + y2
(k-3): Fx/Fz = −cα3/μ, My/Fz = j·h3cα3/μ − x3; Fy/Fz = −sα3/μ, Mx/Fz = −j·h3sα3/μ + y3
(k-4): Fx/Fz = −cα4/μ, My/Fz = (1 − j)·h4cα4/μ − x4; Fy/Fz = −sα4/μ, Mx/Fz = −(1 − j)·h4sα4/μ + y4
(k-5): My/(DFz) = −μ/(cα1 − cα2) · [(x2 − x1)/D + j·h1cα1/(μD) − (1 − j)·h2cα2/(μD)] · Fx/Fz + (μx1cα2 − μx2cα1 − jh1cα1cα2 + (1 − j)h2cα1cα2)/(μD(cα1 − cα2)),
Mx/(DFz) = −μ/(sα1 − sα2) · [−(y2 − y1)/D − j·h1sα1/(μD) + (1 − j)·h2sα2/(μD)] · Fy/Fz + (−μy1sα2 + μy2sα1 + jh1sα1sα2 − (1 − j)h2sα1sα2)/(μD(sα1 − sα2))
In two dimensions, the jamming diagram is not relative to αi and the contact point positions (xi, yi); it is only relative to μ, the insertion depth and the radii of peg and hole. In three dimensions, the linear relations between My/Fz and Fx/Fz, and between Mx/Fz and Fy/Fz, must be considered. The jamming diagrams are relative not only to μ, the insertion depth and the radii of peg and hole, but also to αi and the contact point positions (xi, yi). Because the relation between Mx/Fz and Fy/Fz is relative to sin αi, the jamming diagrams can be discussed according to the different ranges of αi, as in Fig. 7(b) and Fig. 7(c).
In this section, the jamming diagrams of a three dimensional dual peg-in-hole are analyzed, extending the range of the jamming analysis. The assembly process is thereby analyzed in detail, giving a clear picture of the whole insertion process of the three dimensional dual peg-in-hole.
Table 2 lists the force/moment conditions for 20 possible contact states. Fig. 7 shows the jamming diagrams of a three dimensional dual-peg problem with these contact states. When the force/moment conditions of all possible contact states of the three dimensional dual-peg insertion problem are plotted in force/moment space, the plots form a closed boundary which separates the force/moment space into two regions: the statically unstable space and the jamming space. The unstable space is the area enclosed by the boundary; it represents the conditions on the applied forces and moments that cause the pegs to fall, unstably, into the holes. The jamming space, on the other hand, is the area outside the boundary; it represents the force/moment conditions in which the pegs appear stuck in the holes because the applied forces and moments cannot overcome the net forces and moments due to frictional contact forces (Fig. 7). The jamming diagrams thus provide information for successful insertion: if the conditions on the applied forces and moments fall into the unstable space, the pegs fall into the holes and the continuity of insertion is guaranteed. Since not all 20 contact states can exist under a particular geometric condition, each jamming diagram representing a particular insertion problem is constructed from only some of the straight lines shown in Fig. 7. When the geometric dimensions and the insertion depth are known, the possible contact states can be determined and the corresponding possible jamming diagrams obtained (Fig. 8).
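Operationally, the jamming test implied above is a point-in-region check: an applied (Fx/Fz, My/(DFz)) pair inside the closed boundary lets the pegs fall in; outside it, they jam. The sketch below applies a generic point-in-convex-polygon test to hypothetical boundary vertices; the actual vertices would come from the contact-state lines of Fig. 7.

```python
def inside_jamming_boundary(point, vertices):
    """Return True if `point` = (Fx/Fz, My/(D*Fz)) lies strictly inside
    the convex boundary polygon `vertices` (counter-clockwise order)."""
    px, py = point
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Cross product of the edge with the vertex-to-point vector
        # must stay positive for every edge of a CCW convex polygon.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) <= 0:
            return False
    return True

# Hypothetical parallelogram-like jamming boundary (illustrative only)
boundary = [(-2.0, -1.0), (2.0, -1.5), (2.5, 1.0), (-1.5, 1.5)]
```

A point inside the boundary falls in the unstable space (insertion proceeds); a point outside falls in the jamming space.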
Fig. 7. Jamming diagrams: (a) My/(DFz) ∝ Fx/Fz; (b), (c) Mx/(DFz) ∝ Fy/Fz, with α1 ∈ [270°, 360°], α3 ∈ [270°, 360°], α2 ∈ [90°, 180°], α4 ∈ [90°, 180°] in (c).
Fig. 8. Possible jamming diagrams: (a) My/(DFz) ∝ Fx/Fz; (b) Mx/(DFz) ∝ Fy/Fz, α1 ∈ [0°, 90°], α3 ∈ [0°, 90°], α2 ∈ [180°, 270°], α4 ∈ [180°, 270°]; (c) My/(DFz) ∝ Fx/Fz, α1 ∈ [270°, 360°], α3 ∈ [270°, 360°], α2 ∈ [90°, 180°], α4 ∈ [90°, 180°].
In two dimensions, there are only 9 kinds of jamming diagrams; they are static and relative only to μ, the insertion depth and the radii of peg and hole. In three dimensions, there are 33 kinds of jamming diagrams, which are relative not only to μ, the insertion depth and the radii of peg and hole, but also to αi and the contact point positions (xi, yi). They are dynamic, and the quadrangles are more complicated than those in two dimensions. In three dimensions, possible jamming diagrams such as Fig. 8(a-10) and Fig. 8(a-11) may appear: Fig. 8(a-10) shows the possible jamming diagram of left contact, and Fig. 8(a-11) shows the possible jamming diagram of right contact.
The 33 kinds of possible jamming diagrams in Fig. 8 may be interpreted as follows. The vertical lines in the diagrams describe line contact states. Combinations of Fx, Fz, and My falling on the parallelograms' edges describe equilibrium sliding in. Outside the parallelogram lie combinations which jam the dual-peg, in either one- or two-point contact. Inside, the dual-peg is in disequilibrium, sliding or falling in.
6. Conclusions
In order to know the assembly process of a dual peg-in-hole and plan an insertion
strategy in detail, it is important to analyze the dual peg-in-hole in three dimensions. In
this paper, the assembly contact and jamming of a dual peg-in-hole in three dimensions
are analyzed in detail. Firstly, 20 contact states of the three dimensional dual-peg are
enumerated and geometric conditions are derived. Secondly, the contact forces are
described by the screw theory in three dimensions. With the static equilibrium equations,
the relationship between forces and moments for maintaining each contact state is
derived. Thirdly, the jamming diagrams of the three dimensional dual peg-in-hole are
presented. Different peg and hole geometries and different insertion depths may yield different jamming diagrams; 33 different types of possible jamming diagrams are identified. These jamming diagrams can be used to plan fine motion strategies for successful insertions. In addition, they show that the assembly process in six degrees of freedom is more complicated than that for planar motion. The above analyses provide the theoretical basis for multiple peg-in-hole insertion and constitute a significant advance in robotic assembly.
1. Introduction
In an era when new product styles are being introduced with ever-shortening life cycles, the
cost of high preparation times for automation equipment every time a new part or product
must be produced is becoming prohibitive, both in terms of time and money. In modern
flexible industrial production, the capabilities of the machinery to adapt to changing
production demands are a key factor for success.
Desktop and SCARA robots are universal tools for various industrial applications such as
dispensing, soldering, screw tightening, pick-and-place, welding or marking. This type of
robot is suitable for various production-line systems (e.g. cell-line, in-line) and can be
adapted to meet diverse requirements. They are easy to use and can be applied economically;
nowadays, complex programming in a robot language is unnecessary, which reduces
installation time and adds convenience. These robots are typically programmed off-line
using waypoints and path information. However, the coordinates and types of waypoints
have to be entered manually or taught, so small workpieces with many linear path segments
demand considerable programming effort.
Once a robot has been programmed off-line for a workpiece, the system should be capable of
identifying that workpiece and autonomously initiating the correct working procedure.
Further, a semi-automated system must be able to deal autonomously with misalignments
and to compensate for small deviations during loading, which would otherwise cause
incorrect execution of the stored off-line robot programs.
The concept of sensing is essential for the design of robotic systems in which real-time
operation, flexibility and robustness are major design goals. The ability of a system to sense
its surroundings and perform its task according to the existing conditions is an effective
way to reduce costs and raise flexibility; the result is higher precision and less
programming time. Advanced sensor systems are now emerging in different activities, which
will significantly increase the capabilities of industrial robots and enlarge their
application potential.
In this work, sensor data processing concepts based on fuzzy logic are applied to enable a
robot to deal autonomously with typical uncertainties of an industrial working environment.
Specifically, we propose a flexible, adaptive and low-cost solution to address some problems
which often limit the production rate of small industries. Thus, by
these additional capabilities the robot can autonomously adapt to changing production
needs, compensate for misalignments and avoid damage to the work tool and workpiece.
As a case study, consider a desktop robot executing dispensing applications on
workpieces/fixtures. For each workpiece, the robot is programmed off-line. In order to
improve the performance and flexibility of these industrial systems, we equipped the robot
with a CCD camera. The process is divided into two phases: a learning phase and an
execution phase. In the learning phase, the worker programs the robot off-line so that it
executes the dispensing operation over the workpiece. At this stage, an image of the
workpiece is acquired, a set of descriptors characterizing it is computed, a fuzzy rule
describing the workpiece is generated, and the rule is stored in a database together with the
robot's program. In the execution phase, the worker loads a set of workpieces onto the
robot's working table. An image is acquired, the corresponding descriptors are computed
and, through a parsing and classification procedure, the workpieces are identified.
Alignments and offset values are calculated fully automatically, which allows the robot to
ensure accurate placement of tools. Workers stay busy loading and unloading
workpieces/fixtures while the desktop robot, equipped with a vision system, performs
precision dispensing tasks. The system is also capable of autonomously starting a learning
phase when an unknown workpiece is shown to it, and is robust enough to deal with
common errors such as a missing fixture. This significantly reduces development time for
these tedious processes. Further, it reduces costs by compensating for misalignments of the
workpiece during loading, avoiding damage to both the workpiece and the tool.
Another result of this approach is that computation grounded on sensory information
enables the achievement of autonomy. Autonomy implies that, through sensation and
action, it might be possible for a system to start some high-level conceptualization process.
This chapter is organized as follows. Section II describes the vision system and the software
architecture for the learning and execution phases. The representation and description
schemes are also described in this section as well as calibration and general image
processing procedures. Section III describes the experimental results along with the
hardware requirements of the system. A complete cycle of the execution phase is also
depicted in this section. Finally, Section IV outlines the main conclusions and some future
work is discussed.
Herein, we propose a low-cost solution which improves the overall flexibility of a typical
dispensing application and minimizes the two problems discussed above. Similarly to the
procedure described, for each type of workpiece the robot is first programmed off-line and
the program is stored. The main difference is that, during the production stage, the system
autonomously identifies the loaded workpieces using visual information and a fuzzy
inference classifier.
In this study, a JR2000 Series desktop robot from I&J Fisnar Inc. (Janome, 2004), with
simultaneous control of 3 axes, is used as the test bed. The robot performs 3D linear and arc
interpolation, including compound arcs, in a working area of 150×150 mm. The overall
experimental setup is shown in Fig. 1. A CCD camera has been mounted over the robot and,
despite the algorithm applied to improve light uniformity, a fluorescent light was placed
around the camera to assure that the scene is well illuminated. A front-lighting technique
was applied.
The CCD camera is a TRC TC5173 colour camera with a resolution of 768×576 pixels. Image
digitization is done on a general-purpose image acquisition board, a National Instruments
PCI-1411, mounted inside a 100 MHz Pentium III PC. The PC is connected to the robot by a
serial RS-232C link.
3. System Architecture
Fig. 2 presents the architecture of the processing system, in which two paths were specified:
one for the learning phase (P1) and another for the execution phase (P2).
The first two modules are identical for both P1 and P2 and deal with object analysis. The
Pre-processing module enhances the image quality and extracts the blob objects of the
image. This module is necessary because the acquired images are not perfect for analysis.
The Feature Extraction module extracts the feature vector that best characterizes each object.
P1 has a Fuzzy Grammar module which generates the fuzzy rules that describe the objects.
These rules are stored in a database.
a) b)
Fig. 1. Experimental setup showing the desktop robot with the mounted CCD Camera, the
fluorescent lamp and a mould. a) General front view. b) Detailed view of the front lighting
technique.
In the execution phase P2, the feature vectors extracted for each object are submitted to a
Parsing Procedure module developed with the compilers yacc and lex (Appel, 1998;
Bumble-Bee, 1999). These vectors are submitted to each rule stored in the database and a
value is obtained for each of them. Finally, the Classification module determines which rule
yields the highest value, thus identifying the workpiece under analysis. A threshold is
specified such that an alarm sounds when an unknown or misclassified workpiece is
detected; in that case a learning phase is automatically initiated and an appropriate fuzzy
rule is generated for that workpiece.
4. Preprocessing
To achieve a robust industrial application, and prior to applying the segmentation
procedure that extracts the different objects in the image, the following effects must be
minimized: 1) random noise in the acquisition process; 2) lack of light uniformity, which
depends on the illumination technique; and 3) image distortion due to the type of lens.
Noise was reduced by calculating the final image to process as an average of five
consecutive acquired images.
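The averaging step can be sketched as follows; a minimal illustration assuming the frames arrive as NumPy arrays (all names here are ours, not the chapter's):

```python
import numpy as np

def average_frames(frames):
    """Reduce random acquisition noise by averaging consecutive frames.

    `frames` is a sequence of equally sized grayscale images (2-D uint8
    arrays).  Averaging in float avoids uint8 overflow during the sum.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return np.clip(stack.mean(axis=0).round(), 0, 255).astype(np.uint8)

# Five consecutive acquisitions of the same (synthetically noisy) scene:
rng = np.random.default_rng(0)
scene = np.full((4, 4), 128, dtype=np.uint8)
frames = [np.clip(scene + rng.integers(-20, 21, scene.shape), 0, 255).astype(np.uint8)
          for _ in range(5)]
denoised = average_frames(frames)
```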
Fig. 4. Mapping between a distorted element (Pdi, i = 0, 1, 2, 3) and a non-distorted element
(Pndi, i = 0, 1, 2, 3) of the grid. These elements have (x, y) coordinates.
The homogeneous coordinate transformation between Pdi=(xdi,ydi) and Pndi=(xndi,yndi), for
i=0,1,2,3, is given by
w_i \begin{pmatrix} x_{ndi} \\ y_{ndi} \\ 1 \end{pmatrix} =
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\begin{pmatrix} x_{di} \\ y_{di} \\ 1 \end{pmatrix} \qquad (1)
where wi is a homogeneous scale factor and the aij are the coefficients of the perspective
transformation.
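A direct way to recover the eight unknown coefficients of Eq. (1) (fixing a33 = 1) from the four grid correspondences is to solve the resulting 8×8 linear system. The sketch below is illustrative, not the chapter's implementation, and the sample points are invented:

```python
import numpy as np

def fit_homography(pd, pnd):
    """Solve Eq. (1) for the eight unknown a_ij (a33 fixed to 1) from four
    correspondences Pd_i -> Pnd_i between distorted and ideal grid points."""
    A, b = [], []
    for (xd, yd), (xn, yn) in zip(pd, pnd):
        # xn * (a31*xd + a32*yd + 1) = a11*xd + a12*yd + a13, and likewise for yn
        A.append([xd, yd, 1, 0, 0, 0, -xd * xn, -yd * xn]); b.append(xn)
        A.append([0, 0, 0, xd, yd, 1, -xd * yn, -yd * yn]); b.append(yn)
    a = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(a, 1.0).reshape(3, 3)

def undistort_point(H, p):
    """Apply the homogeneous mapping of Eq. (1) to one distorted point."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)

# Corners of one (hypothetical) distorted grid element and their ideal positions:
pd  = [(0.0, 0.0), (1.1, 0.1), (1.0, 1.2), (-0.1, 1.0)]
pnd = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = fit_homography(pd, pnd)
```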
a) b)
Fig. 5. A sample object and the extracted blob. a) Object sample. b) Extracted blob.
5. Feature extraction
After image segmentation, it is necessary to choose a representation and a description
scheme for the resulting aggregate of segmented pixels in a form suitable for further
computer processing.
As pointed out by Williams (1999), although description of a 2D shape is one of the foremost
issues in pattern recognition, a satisfactory general theory does not exist. Ideally, the selected
features must be invariant to the shape scaling and rotation and should support clustering or
grouping; resulting in the generation of a robust representation. Low order geometric
moments are arguably the most common features. Diameter features, like Feret diameters and
distance-versus-angle signatures, tend to lead to topological and symmetry considerations and
are more robust to noise and small changes of shape (Ballard & Brown, 1982; Williams, 1999;
Micheli-Tzanakou, 2000; Costa & Cesar, 2001; Gonzalez & Woods, 2002; Kindratenko, 2004).
The perimeter, the first and second central moments and the Feret diameter representations
were tested in order to verify those that allow maximum flexibility, meaning to allow the
coexistence of objects with different shapes in the same database. The best results were
obtained using the Feret diameters (longest distance between any two points along the
object boundary) at different rotation angles, θ, of the object, and thus were chosen to build
the feature vectors of the representation scheme in our work.
Fig. 6 presents some Feret diameters for the object depicted in Fig. 5(a). By trial and error,
an increment of 10 degrees between rotation angles was chosen.
This type of external representation scheme is very useful in the computation of
descriptors and is well suited here because the primary focus is on morphological features.
However, the feature vector is highly dependent on the object's orientation, which poses a
difficulty in the identification process. To solve this, we first orient the object by always
setting its axis of highest inertia in a predefined position. Further, the fuzzy inference
system requires the magnitude of each element of the feature vector to lie in the interval
[0,1], so the obtained feature vector must be normalized. Normalization is achieved simply
by rescaling the curve to the interval [0,1], as given by
NFD(\theta) = \frac{FD(\theta) - FD_{\min}}{FD_{\max} - FD_{\min}} \qquad (4)
where FD(θ) is the Feret diameter at angle θ, and FDmax and FDmin are the maximum and
minimum values of the Feret diameters in the feature vector, respectively. The normalized
Feret diameters for the objects depicted in Fig. 7 are illustrated in Fig. 8.
Fig. 6. The Feret diameters for the object depicted in Fig. 5 at angles of 0°, 45° and 90°.
a) b) c)
Fig. 7. Some objects used in the choice of the external representation type. a) Object 1. b)
Object 2. c) Object 3
Fig. 8. Normalized Feret diameters for the objects depicted in Fig. 7. Solid, dashed and
dash-dotted traces represent objects 1, 2 and 3, respectively.
Equation 4 is also independent of object size. For this particular application, this is a
drawback, since objects with different sizes require different robot programs. In order to
identify objects with the same shape but different sizes, we introduced a size-dependent
feature S, which characterizes the object relative to its size and is given by FDmax
normalized to the maximum size allowed for an object.
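The feature extraction of this section can be sketched as follows: Feret (caliper) diameters sampled every 10 degrees, the normalization of Eq. (4), and the size feature S. This is an illustrative reconstruction, not the chapter's code; the 150 mm bound reuses the robot's working-area size as an assumed calibration constant.

```python
import numpy as np

def feret_diameters(boundary, step_deg=10):
    """Feret (caliper) diameter of a boundary point set at angles
    0, step, ..., 180-step degrees: the spread of the boundary points
    projected onto each direction."""
    pts = np.asarray(boundary, float)
    angles = np.deg2rad(np.arange(0, 180, step_deg))
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (18, 2)
    proj = pts @ dirs.T                                        # (N, 18)
    return proj.max(axis=0) - proj.min(axis=0)

def normalize(fd):
    """Eq. (4): map the diameters to the interval [0, 1]."""
    return (fd - fd.min()) / (fd.max() - fd.min())

def size_feature(fd, fd_allowed_max):
    """The size descriptor S: FDmax normalized to the largest admissible
    object size (fd_allowed_max is an assumed calibration constant)."""
    return fd.max() / fd_allowed_max

# A 40 x 10 mm rectangle described by its corner points:
rect = [(0, 0), (40, 0), (40, 10), (0, 10)]
fd = feret_diameters(rect)
nfd = normalize(fd)
s = size_feature(fd, 150.0)
```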
6. Fuzzy Grammar
After the extraction of the feature vector that characterizes an object, it is necessary to
classify the object according to its attributes. Specifically, our application must deal with
the following constraints: a) a high diversity of objects; b) the simultaneous recognition of
several different types of objects; and c) the autonomous detection of a new type of object
during the execution phase, initiating a learning phase that requires the intervention of the
human operator only to program the robot. To accomplish these goals, the learning phase of
the recognition process must be carried out with a single sample of each type of object.
Regarding classifiers and recognizers, there are different approaches based on statistical
and artificial-intelligence methods (Bezdek & Pal, 1992; Kerre & Nachtegael, 2000; Micheli-
Tzanakou, 2000; Costa & Cesar, 2001; Looney, 2002). The most common commercially
available solutions use recognizers based on the calculation of metrics such as the
Euclidean, Minkowski and Mahalanobis distance measures (Williams, 1999). However, these
recognizers, as well as those based on neural, fuzzy-logic and neuro-fuzzy networks,
require a large number of samples from the population to perform learning. Although these
technologies are now firmly established as leading advanced control techniques in industry,
they do not fulfil the constraints of the dispensing application.
In this work, a fuzzy system modelling approach was developed in which a fuzzy inference
system identifies the fuzzy rules representing relationships among 2D shape features. There
are several approaches that generate these fuzzy rules. The most often applied are based on
statistics, neural networks and genetic algorithms (Ivancic & Malaviya, 1998; Peters et al.,
1998; Looney, 2002).
However, none of these methods satisfy the needs of this specific application.
Therefore, we decided to apply a fuzzy grammar approach. Fuzzy grammar is a syntactic
pattern-classification model used to represent the structural relations of patterns (Fu &
Booth, 1986a, 1986b; Bezdek & Pal, 1992; Malaviya, 1996; Stanchev & Green, 2002) and to
describe the syntax of the fuzzy languages that generate the fuzzy rules. This inference
system fulfils the project demands due to its linguistic description approach, which keeps
the number of rules small, and due to its capability to generate a fuzzy rule from only one
sample of a pattern.
Herein, we briefly review some basic concepts of fuzzy grammar (for a full discussion see
Lee & Zadeh, 1969; Pal & Majumber, 1986; Bezdek & Pal, 1992; Yager & Zadeh, 1992).
A fuzzy grammar GF is a quintuple GF=(VN, VT, P, S0, μ), in which VN and VT are finite
disjoint sets of non-terminal and terminal vocabulary, respectively, such that V=VN∪VT is
the total vocabulary of the grammar. P is a finite set of production rules of the type α→β,
where α∈VN and β is a member of the set V* of all strings (including the null string ε).
S0∈VN is the starting symbol, and μ is a mapping P→[0,1] such that μ(p) denotes the grade
of membership (possibility) of the production p∈P.
The syntax of the developed language L(GF) is depicted in Fig. 9 and includes 4 different
steps:
1) The codification of the features to primitives. In this work, the features are the Feret
diameters (NFD(θ)) and the size S, which are coded to the primitives FDθ and SN,
respectively. When more than one sample of an object is presented to the system
the mean value of each feature is used.
2) The definition of the linguistic terms HistVar:#. This setting is done according to
Table 1. The membership function Π(x, b, c) is illustrated in Fig. 10. The parameter c
is chosen such that the eleven membership functions cover the whole universe of
discourse X and have disjoint maxima.
3) The definition of fuzzy modifiers (FM): “More than”, “Less than” and “Between”.
The FM “More than” LT is defined by
\mu_{MT\langle LT\rangle}(x) = \begin{cases} 1 & x \ge L \\ S(x;\; L-lb,\; L-lb/2,\; L) & x < L \end{cases} \qquad (5)
where L is a threshold value and lb is the bandwidth value of the S membership function
(Bezdek & Pal, 1992; Malaviya, 1996). The FM “Less than” LT is given by
\mu_{LT\langle LT\rangle}(x) = \begin{cases} 1 & x \le L \\ 1 - S(x;\; L,\; L+lb/2,\; L+lb) & x > L \end{cases} \qquad (6)
where w1 and w2 are threshold values (Bezdek & Pal, 1992; Malaviya, 1996).
Designation Function
HistVar:1 Π(x,0.2,0.0)
HistVar:2 Π(x,0.2,0.1)
HistVar:3 Π(x,0.2,0.2)
HistVar:4 Π(x,0.2,0.3)
HistVar:5 Π(x,0.2,0.4)
HistVar:6 Π(x,0.2,0.5)
HistVar:7 Π(x,0.2,0.6)
HistVar:8 Π(x,0.2,0.7)
HistVar:9 Π(x,0.2,0.8)
HistVar:10 Π(x,0.2,0.9)
HistVar:11 Π(x,0.2,1.0)
Table 1. Linguistic terms used on the fuzzy grammar.
Fig. 10. The Π membership function Π(x; b, c), shown for b = 0.2 and c = 0.5.
4) The definition of fuzzy operators (FO) which define the relations between the
linguistic terms and primitives. We defined the following FO:
a) &, representing the AND of two primitives. It is given by the Yager
intersection (Pal & Majumber, 1986).
b) >, representing “More than” LT, given by μMT<LT>.
c) <, representing “Less than” LT, given by μLT<LT>.
d) ||, representing “Between” two LT, given by μB<LT1><LT2>.
e) #, a separator between a primitive and a LT.
f) ( ), imposing a hierarchy in the rule.
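The linguistic terms and operators above can be sketched in code. This is our reconstruction: the S- and Π-functions follow Zadeh's standard definitions (which the chapter cites), and the Yager intersection is shown with weight w = 2, a parameter the chapter does not state.

```python
import numpy as np

def s_func(x, a, b, c):
    """Zadeh's S-function: monotone 0 -> 1 between a and c, crossover at b."""
    x = np.asarray(x, float)
    return np.where(x <= a, 0.0,
           np.where(x <= b, 2 * ((x - a) / (c - a)) ** 2,
           np.where(x <= c, 1 - 2 * ((x - c) / (c - a)) ** 2, 1.0)))

def pi_func(x, b, c):
    """The Pi membership function of Fig. 10 (bandwidth b, centre c)."""
    x = np.asarray(x, float)
    return np.where(x <= c,
                    s_func(x, c - b, c - b / 2, c),
                    1 - s_func(x, c, c + b / 2, c + b))

def more_than(x, L, lb):
    """Fuzzy modifier 'More than' LT, Eq. (5)."""
    return np.where(np.asarray(x, float) >= L, 1.0,
                    s_func(x, L - lb, L - lb / 2, L))

def less_than(x, L, lb):
    """Fuzzy modifier 'Less than' LT, Eq. (6)."""
    return np.where(np.asarray(x, float) <= L, 1.0,
                    1 - s_func(x, L, L + lb / 2, L + lb))

def yager_and(mu1, mu2, w=2.0):
    """Yager intersection for the '&' operator (w = 2 is our assumption)."""
    return 1 - min(1.0, ((1 - mu1) ** w + (1 - mu2) ** w) ** (1 / w))
```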
Consider as an example object 2, depicted in Fig. 7. Fig. 11 illustrates the primitive FD20,
obtained from the Feret diameter feature NFD(θ)=0.6 at θ=20 degrees. This primitive has
non-zero degrees of membership for the LTs HistVar:6, HistVar:7 and HistVar:8 (Fig. 11).
The highest fuzzy value is obtained with LT HistVar:7; thus, HistVar:7#FD20 is part of the
fuzzy rule which characterizes object 2. Finally, the rule created by the fuzzy grammar is:
HistVar:1#FD0&HistVar:4#FD10&HistVar:7#FD20&HistVar:9#FD30&HistVar:11#FD40&
HistVar:11#FD50&HistVar:11#FD60&HistVar:9#FD70&HistVar:7#FD80&HistVar:4#FD90&
HistVar:7#FD100&HistVar:9#FD110&>HistVar:10#FD120&HistVar:11#FD130&
>HistVar:10#FD140&HistVar:9#FD150&HistVar:7#FD160&HistVar:4#FD170&HistVar:2#SN.
If more than one linguistic term gives a fuzzy value greater than 0.75, we apply fuzzy
modifiers such as “More than”, “Less than” and “Between” to combine the obtained results.
Fig. 12 illustrates the procedure for the fuzzy modifier “More than” HistVar:10 applied to
the primitive FD140: the final fuzzy value results from the combination of LT HistVar:10
and LT HistVar:11. Similar procedures apply for the fuzzy modifiers “Less than” (Fig. 13)
and “Between” (Fig. 14).
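The generation of a rule string from the normalized primitives can be sketched as follows. The policy of when to emit a modifier (here: whenever two terms exceed 0.75, emit '>' with the lower term) is a simplified reading of the chapter, and all function names are ours:

```python
def pi_mf(x, b=0.2, c=0.5):
    """Zadeh's Pi membership function (bandwidth b, centre c), as in Fig. 10."""
    d = abs(x - c)
    if d >= b:
        return 0.0
    if d <= b / 2:
        return 1 - 2 * (d / b) ** 2
    return 2 * ((d - b) / b) ** 2

# Centres of the eleven linguistic terms HistVar:1 .. HistVar:11 (Table 1).
CENTRES = [k / 10 for k in range(11)]

def rule_atom(name, value, high=0.75):
    """Build one 'HistVar:k#primitive' atom of a fuzzy rule.  When two
    terms both exceed `high`, a 'More than' modifier ('>') combines them
    (a simplified version of the chapter's policy)."""
    mu = {k + 1: pi_mf(value, c=c) for k, c in enumerate(CENTRES)}
    strong = sorted(k for k, m in mu.items() if m > high)
    if len(strong) >= 2:
        return ">HistVar:%d#%s" % (strong[0], name)
    best = max(mu, key=mu.get)
    return "HistVar:%d#%s" % (best, name)

def build_rule(features):
    """Join the atoms of all primitives with the '&' operator."""
    return "&".join(rule_atom(n, v) for n, v in features.items())
```

For the example in the text, NFD(20°) = 0.6 yields `HistVar:7#FD20`, and a value such as 0.95 for FD140 yields the modifier form `>HistVar:10#FD140`.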
Fig. 11. The highest fuzzy value for LV FD20 is obtained using LT HistVar:7.
Fig. 12. Linguistic term for the primitive FD140 – fuzzy modifier “More than” HistVar:10.
7. Parsing procedure
The parsing procedure was developed for the fuzzy grammar. The inputs are the feature
vectors extracted for an object from the Feature Extraction module and the rules stored in
the database. The feature vectors are submitted to each rule of the database. The output of
the parsing procedure is a value in the interval [0,1] reflecting the grade of membership of
the object for each class.
As a simple example, consider the feature vectors presented in Table 2, which describe the
objects depicted in Fig. 7.
8. Classification
This module uses the output of the parsing procedure and determines which rule produces
the highest value for the feature vector. For the example of Table 3, the result of the
classification procedure is shown in Table 4.
Rule       Highest classification result    Object type
Object 1              0.89                       1
Object 2              0.90                       2
Object 3              0.89                       3
Table 4. Results of the classification procedure.
If this value is less than a defined threshold (Tr in Fig. 2), it is assumed that a new type of
object is present. In that case, the feature vector characterizing the new object is submitted
to the fuzzy grammar module in order to generate the appropriate new fuzzy rule.
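The classification step can be sketched as follows; the function and the example scores are illustrative, and the 0.75 default is our assumed value for the threshold Tr:

```python
def classify(rule_values, threshold=0.75):
    """Pick the database rule with the highest parsing value (Section 8).
    Below `threshold` (Tr in Fig. 2) the object is reported as unknown,
    so that a new learning phase can be triggered."""
    best = max(rule_values, key=rule_values.get)
    if rule_values[best] < threshold:
        return None          # NAD: not available in database -> learn it
    return best

# Parsing outputs for one workpiece against three stored rules (Table 4 style):
scores = {"Object 1": 0.12, "Object 2": 0.90, "Object 3": 0.00}
print(classify(scores))      # -> Object 2
```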
9. Experimental Results
The commercial software LabView 6.1 with IMAQ 6.0 was used, in order to increase the
processing speed and to reduce the development time. This was also a requirement from the
company that supports the development of the dispensing application. The fuzzy grammar
was developed in C++. To call the fuzzy grammar from the LabView a DLL was created to
encapsulate the C++ code. Fig. 15 illustrates the three principal LabView panels and vi
diagrams of the application.
a)
b) c)
d) e)
Fig. 15. Different panels of the developed application. A selection must be done among:
learning, execution phase and a statistical option for showing statistical data. a) Main panel.
Learning phase: b) front panel, c) vi diagram. Execution phase: d) front panel, e) vi diagram.
The program that the robot performs over the workpiece is sent to the robot via the RS-232C
protocol under the control of the JR software (trademark). However, the formats of the file
that contains the robot program and of the information sent to the RS-232 port are not
known. This constraint was overcome through the development of a DLL that establishes
the communication between LabVIEW and the JR software. This DLL sends Microsoft
Windows messages to the JR software, providing the appropriate commands to change the
robot program. Finally, a message with the start command is sent to the JR software in
order to initiate the robot program. The JR software must be running during the learning
and execution phases. This development enables the robot to be controlled by the computer
vision software.
During the learning phase, for each object, the robot program used, the generated fuzzy
rule, the orientation and the position are stored together in the database. In the execution
phase, these workpieces are presented to the system in positions and orientations different
from those of the learning phase. The developed approach was able to identify each
workpiece. Rotations R, alignments and offset values in x and y were calculated, and the
robot's stored programs were adjusted accordingly and sent to the robot. Finally, the robot
executed the modified program over each workpiece. The minimum offset that the system
was able to resolve was as small as 0.2 mm; the minimum rotation was 3 degrees.
The second column of Table 5 shows the linguistic terms generated in the learning phase
for object 10 of Fig. 16. An identical object, rotated by 40 degrees and offset in position by
(x, y) = (10, 15) mm, was processed during the execution phase. The third and fourth
columns of Table 5 show the obtained primitive values and linguistic-term values,
respectively. The classification result for the object 10 rule is 0.90, whereas for the other
objects it is 0.0. The calculated offset and orientation were (10.0, 15.1) mm and 39.3 degrees,
respectively.
Table 6 shows the percentage of good classifications when each object is placed in 20
different locations and orientations. In some cases, objects were classified as Not Available
in the Database (NAD).
Primitive   LT (learning phase)   Primitive value (execution phase)   LT value (execution phase)
FD0 HistVar:1 0.00 1.00
FD10 HistVar:2 0.13 0.92
FD20 HistVar:5 0.38 0.99
FD30 HistVar:7 0.60 1.00
FD40 HistVar:9 0.75 0.90
FD50 HistVar:10 0.91 0.98
FD60 HistVar:11 1.00 1.00
FD70 HistVar:11 1.00 1.00
FD80 HistVar:10 0.93 0.92
FD90 HistVar:9 0.84 0.90
FD100 HistVar:10 0.85 0.90
FD110 HistVar:10 0.94 0.90
FD120 HistVar:11 0.96 0.94
FD130 HistVar:10 0.91 0.98
FD140 HistVar:9 0.78 0.99
FD150 HistVar:7 0.59 0.99
FD160 HistVar:5 0.35 0.90
FD170 HistVar:3 0.17 0.96
SN HistVar:5 0.36 0.91
Table 5. Learning- and execution-phase values for object 10.
Object 1 2 3 4 5 6 7 8 9 10 NAD
1 95 5
2 100 0
3 100 0
4 95 5
5 5 90 5
6 95 5
7 90 5 5
8 95 5
9 100
10 95 5
Table 6. Percentage of classifications obtained for each object over 20 different placements
(NAD: Not Available in Database).
As we can see from the above results, the developed approach can be applied when objects
have different locations and orientations, even though only one sample of each was used
during the learning phase. A further advantage is that, in a high percentage of cases
(greater than 90%), an object submitted to the rules of other object types yields 0 as a
result. This means that the system creates disjoint rules and assures a good classification.
10. Conclusion
In this work, we have used sensing technology to endow an industrial Desktop robot
with a greater degree of flexibility in dealing with its environment. The goal was an
adaptive, flexible, low-cost solution to maximize efficiencies in dispensing applications.
Concretely, a CCD Camera was mounted over the robot and the visual information was
used to autonomously change a previously off-line stored robot program to each
workpiece.
The results illustrate the flexibility and robustness of the overall application. Further, the
employed approach assures a good classification of workpieces, with minimum resolvable
offset and rotation values of 0.2 mm and 3 degrees, respectively.
To further improve the classification procedure, we intend to introduce new features into
the rules and to experiment with methods other than fuzzy logic. The overall application
could be improved further so that the robot's program is also generated automatically,
through the extraction of the relevant waypoints and path information.
The proposed solution can easily be extended to other types of machine applications, for
instance to quality-control inspection procedures including dimensional measurement and
gauging, verification of the presence of components, location and type of holes, and
detection of surface flaws and defects. It can also be extended to other categories of
machine-vision applications; in fact, this approach has already been applied to texture
segmentation for tracking purposes (Ferreira, 2006). That application differs from the one
presented here only in the type of features extracted from the images.
Computer Vision and Fuzzy Rules Applied to a
Dispensing Application in an Industrial Desktop Robot 435
11. References
Appel, A.W. (1998), Modern compiler implementation in C, Cambridge University Press, ISBN:
0-521-60765-5, Cambridge.
Ballard, D.H. & Brown, C.M. (1982), Computer vision, Prentice Hall, ISBN:0131653164 New
Jersey.
Bezdek, J.C. & Pal, S.K. (1992), Fuzzy Models for pattern recognition, IEEE Press, ISBN:
0780304225, New York.
Bumble-Bee (1999), Parser Generator Manual [online], Bumble-Bee Software, Available from
http://www.bumblebeesoftware.com.
Costa, L.F. & Cesar Jr., R.M. (2001), Shape Analysis and Classification, Image Processing Series,
Philip A. Laplante, (Ed.), CRC Press, ISBN 0-8493-3493-4, Florida.
Ferreira, M. J., Santos C, (2006), Tracking system using texture cue based on wavelet
transform. 7th Portuguese Conference on Automatic Control, Controlo 2006, September
2006, Lisboa, Portugal.
Fu, K.S. & Booth, T.L. (1986a), Grammatical inference: introduction and survey, Part I, IEEE
Transactions on pattern analysis and machine intelligence, Vol. PAMI -8, No. 3, page
numbers (343-359), ISSN: 0162-8828.
Fu, K.S. & Booth, T.L. (1986b), Grammatical inference: introduction and survey, Part II, IEEE
Transactions on pattern analysis and machine intelligence, Vol. PAMI 8, No. 3, page
numbers (360-375), ISSN: 0162-8828.
Gonzalez, R.C. & Woods, R.E. (2002), Digital Image Processing, Prentice Hall, ISBN:
0201180758, New Jersey.
Ivancic, F. & Malaviya, A. (1998), An Automatic Rule Base Generation Method for Fuzzy
Pattern Recognition with Multi-phased Clustering, Proceedings of KES'98 IEEE
Conference of Knowledge Engineering System, page numbers (66-75), Australia, April
1998, IEEE Press, Adelaide.
Janome (2004), JR 2000 Software manual, Series Desktop Robot, I&J Fisnar Inc.
Kerre, E.E. & Nachtegael, M. (2000), Fuzzy techniques in image processing, Studies in
fuzziness and soft computing, Vol. 52, Physica-Verlag, ISBN: 3790813044, New
York.
Kindratenko, V. (2004), Shape analysis, In: CVonline: Compendium of Computer Vision [online],
R.Fisher, (Ed), Available from http://homepages.inf.ed.ac.uk/rbf/CVonline/.
Lee, E.T. & Zadeh, L.A. (1969), Note on fuzzy languages, Information Sciences, Vol. 1,
(October 1969), page numbers (421-434).
Looney, C.G. (2002), Pattern Recognition, In: Opto-Mechatronic Systems Handbook: Techniques
and Applications - Handbook Series for Mechanical Engineering, Hyungsuck Cho, (Ed.), Vol.
10, CRC Press, ISBN: 0849311624, New York.
Malaviya, A. (1996), On-line handwriting recognition with a fuzzy feature description language,
PHD Thesis, Technische Universitat Berlin, Berlin.
Matrox (1998), Matrox Imaging Library User guide, Manual No. 10513-MN-0500, Matrox
Electronic System Ltd, Canada.
Micheli-Tzanakou, E (2000), Supervised and Unsupervised Pattern recognition, feature extraction
and computational, Industrial Electronics Series, J. David Irwin, (Ed.), CRC Press,
ISBN: 0-8493-2278-2, Florida.
Pal, S.K. & Majumber, D.K. (1986), Fuzzy mathematical approach to pattern recognition, Halsted
Press Book, John Wiley & Sons, ISBN: 0470274638, New Delhi.
Peters, L., Leja, C. & Malaviya, A. (1998), A Fuzzy Statistical Rule Generation Method for
Handwriting Recognition, Expert Systems Journal, Elsevier Press, Vol. 15, No. 1,
page numbers (1591-1604).
Russ, J.C. (1998), The image processing handbook, 3rd ed., CRC Press, ISBN: 0849325323,
London.
Stanchev, P.L. & Green Jr, D. (2002), Formal languages for image primitives description,
International journal information theories and applications, Vol. 9, No. 2, page numbers
(51-60).
Williams, P.S. (1999), The automatic hierarchical decomposition of images into sub-images for
use in image recognition and classification, PHD Thesis, University of Western
Australia.
Yager, R.R. & Zadeh, L.A. (ed) (1992), An introduction to fuzzy logic applications in intelligent
systems, Kluwer Academic, ISBN: 0792391918, Boston.
23
1. Introduction
Virtual Reality (VR) is now a well-established term, employed basically to describe systems
that enable an intuitive and natural real-time interaction between a human and a computer-
animated 3D graphical environment. The media employed to support such human-computer
interaction should ideally involve all human senses (and, more generally, sensori-motor
abilities), that is, not only vision (through the rendering of 3D graphics models) but also
audition, as well as the sense of touch through haptic interaction with virtual objects
(Burdea & Coiffet, 1994). The challenge is to develop computer simulation environments
(simulating an existent or fictitious world) in such a way as to create for the human user an
illusion of reality. VR thus constitutes a multidisciplinary field
covering a variety of scientific and technological subjects, from 3D computer graphics and
multimedia technologies, to the design and control of mechatronic human-computer
interaction devices, human factors and modeling of sensori-motor human skills, as well as
topics related to the field of human perception and psychophysics.
VR and its applications have flourished, particularly during the last decade, not only
through the development of new software and hardware tools, devices and methodologies
of human-computer interaction, but also through the implementation of these techniques in
a constantly increasing number of new application paradigms and human-centered
activities. Augmented and mixed realities constitute in fact one form of applied VR
techniques and refer to the process of overlaying synthetic (virtual) model images onto
real-world images, that is, the combination (mixing) of the virtual and the real world. In
other words, in an augmented reality (AR) system virtual models complement or modify
reality (providing, in a sense, some additional information as needed) instead of completely
replacing it, as is the objective in VR systems. The goal here is to provide the user with the
illusion that the virtual and the real world co-exist in the same time and space.
Augmented Reality, as a research and technology field, has attracted considerable interest
in recent years, with new domains of practical application arising. Much effort is focused on
the new potential opened by the application of AR techniques, and concerns enhancing the
ability of the human user (human operator) to perceive aspects of a real world by overlaying
additional (visual or other) information, increasing the user's ability to interact within this
world. The auxiliary information conveyed through the visual, haptic, or other display of
virtual models may help the user perceive, in a more direct and intuitive way, specific
"hidden" or "fuzzy" characteristics of a real world, which may be needed for the efficient
execution of a certain task. Therefore, while VR in general promises to revolutionize the
way human-computer interaction systems are conceived, AR techniques seem to lead to the
creation of excellent tools, making the execution of complex and demanding tasks easier
and more intuitive for the human.
Virtual, Augmented and Mixed Reality technologies are now recognized as constituting a
challenging scientific and technological field that can provide breakthrough solutions to a
wide spectrum of application domains where intuitive and natural human/computer and
human/machine interaction is needed. Telerobotics, in which a human operator controls a
robot from a remote location, usually via a computer interface and a computer network, is
one of the fields that can directly benefit from the potential offered by VR and AR
human/machine interfacing technologies. The application of VR in Telerobotics (VRT) and
the related concept of telepresence, or tele-symbiosis (Vertut & Coiffet, 1984), are ideas
which have been around for more than twenty years, and have been used mainly for the
telemanipulation/teleoperation of robotic mechanisms in hostile environments (such as in
the nuclear industry, space, underwater, or for other difficult or dangerous
service/intervention tasks, like bomb disposal, civil works etc.). In fact, all the approaches
involving the integration of VR techniques in telerobotics, as will be explained further in
this chapter, constitute basically: (a) a generalization of the concept of "predictive
displays", coping with the problem of time delay and stability in teleoperation systems, and
(b) an attempt to provide human operator assistance and achieve better transparency
characteristics for the teleoperation system.
Nowadays, on the other hand, the rapid development of new, widely deployed networking
technologies, such as those related to the Internet, and the numerous relevant applications,
described by the general term e-commerce/e-business, can give new potential to the use of
VRT in novel application domains. In fact, VRT and Internet technologies can mutually
benefit from ideas developed in the respective fields. This merging of technological
potential can lead to a generalization of the concept of telework, where remote control of
actual physical processes through the network will be possible. One can even think, for
instance, of supervising and actively controlling a whole manufacturing process without
leaving home. A major research objective must of course be to enable and promote the new
potential applications that can derive from the merging of such technologies, so that wider
categories of the population can finally benefit from these technological advances.
In the rest of this chapter, we focus on analysing the theoretical foundations of this field and on
describing the practical application domains that are involved, by presenting some
characteristic case studies. Section 2 starts with a description of the basic principles related to
virtual and augmented reality systems, and then presents an overview of applications related to
the field of robotics. In Section 3 we describe the basic concepts that govern telerobotic systems
and present a historical survey of the field. Section 4, then, presents typical application scenarios
of these technologies, related to the two main robotic systems categories, namely robot
manipulators and mobile robotic vehicles, and highlights the link with the new VR-based field
of haptics. Concluding remarks and future research directions are given in Section 5.
[Figure: general structure of a teleoperation system -- (a) the human operator's senses (vision, touch, etc.) and effectors (hands, etc.) coupled to master-station devices (data glove, joystick, CRT display, haptic feedback, etc.); (b) slave-site sensors (video camera, etc.) and effectors (robotic device, etc.) acting on a physical or virtual worksite.]
Fig. 2. Programming of a robot manipulator using augmented reality (AR) techniques:
overlaying a virtual model (3D wireframe graphics, above) on the real robot image (adapted
from Rastogi et al., 96).
As a consequence, communication time delays cause a certain degradation of the teleoperation
system's performance; even more critically, their presence may jeopardize safe operation and
cause dangerous instabilities, especially when force feedback is involved in a long-distance
bilateral telemanipulation system.
Degradation of sensory feedback may be due not only to the presence of time delays and
limited bandwidth, but also to noise and other kinds of disturbance in the communication
channel. Problems related to the quality of sensory feedback may also derive from the nature of
the task itself, for instance when a slave robot operates in low-visibility conditions (e.g. video
feedback from an underwater remotely operated vehicle, which may, in some cases, be
completely useless or extremely difficult to interpret). In all these cases, when sensory feedback
is deteriorated due to time delays, noise or other sources of signal degradation, some
task-specific methodology or advanced remote-control strategy has to be followed to assist the
human operator in performing the task goals and to ensure safe and efficient operation of the system.
Time delay has long been known in classical control theory as a very challenging problem, and
various predictive control schemes have been proposed based on some a-priori knowledge of the
delay (for instance, the Smith predictor, proposed around 1956; see (Laminat, 1993) for a
survey). In the teleoperation field, more recently, new control schemes have been proposed
to cope with this problem, based on passivity theory (Anderson & Spong, 1992) or on the
concept of adaptive impedance (Niemeyer & Slotine, 1991). All these approaches converge on the
fact that, in any case, stability and transparency (defined in terms of force/trajectory tracking
between the master and the slave) of the teleoperation system are two contradictory objectives, and
some trade-off between them has to be achieved most of the time. All these
approaches in fact slow down the control system coupling the master with the slave, that is, they
diminish the control bandwidth of the system, leading to a more compliant (less stiff)
teleoperator. This ensures the stability (passivity) of the system, under some constraints related to
the magnitude of the time delay, but has the counter-effect of deteriorating the transparency of the
teleoperation system (for instance, the human operator does not feel the real profile of the force
generated at the slave site). The problem becomes even more difficult when the time delay
varies randomly, with no a-priori knowledge available on its order of magnitude.
Another class of techniques for coping with communication time delay is based on the use
of predictive displays. Graphical predictors, supplying visual cues (estimations) of the
evolution of the teleoperation task, are the most commonly used. Bejczy et al. (1990), for
instance, have proposed the use of a wireframe graphical model of the slave robot, overlaid on
the usual video feedback provided to the human operator. This combination of synthetic
and real images (that is, the display of a graphical model directly following the movements of the
human operator and showing what the state of the robot will be before the actual delayed video
images arrive from the slave site) greatly facilitates the task of the human operator. The paradigm
of graphical predictive displays has since been widely adopted, and extended not only to cope
with problems related to the presence of time delays in the bilateral communication loop but also
to perform visual-feedback enhancement and assist the human operator in quickly assessing a
situation and performing teleoperation tasks.
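Stripped of the graphics, a predictive display amounts to driving a local model with the operator's commands immediately, while the confirming feedback arrives only after the transport delay. A toy one-degree-of-freedom sketch (the function name and the pure-delay feedback channel are illustrative assumptions, not any cited system):

```python
from collections import deque

def predictive_display(velocity_cmds, delay):
    """Return (predicted, observed) joint-position traces, delay >= 1.

    The local wireframe model integrates each velocity command at once;
    the 'observed' trace is what the delayed feedback channel shows the
    operator at the same instant.
    """
    pose = 0.0
    channel = deque([0.0] * delay)   # transport delay on the feedback path
    predicted, observed = [], []
    for v in velocity_cmds:
        pose += v                    # the slave will reach this pose too,
        predicted.append(pose)       # but the predictor shows it right away
        observed.append(channel[0])  # stale pose arriving from the slave
        channel.append(pose)
        channel.popleft()
    return predicted, observed
```

The predicted trace leads the observed one by exactly `delay` samples, which is precisely the gap the wireframe overlay of Bejczy et al. bridges for the operator.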
also to other forms of sensory feedback such as haptic or auditory display. Virtual Reality is
in fact a multidisciplinary scientific/technological field which aims to enable a more natural
and intuitive human/computer interaction based on the use of multimodal/multisensory
interfaces. This human/machine interface technology, involving various perceptuo-motor
modalities of the human being (not only vision, but also haptic interaction and auditory
feedback), can provide an excellent technological solution for the human/robot interaction
and communication systems constituting the field of telerobotics. Virtual-environment
simulations of teleoperation systems can indeed be used as predictive models performing
the role of a mediator between the human operator and the remote (slave) robotic system.
This means, in other words, that the human operator could be provided with realistic
three-dimensional graphical images of the remote operation site, while being able to interact with
these images and perform the desired teleoperation task in a natural and intuitive way (that
is, for instance, by feeling the reaction forces during the execution of this virtual task model),
all before the actual (delayed or deteriorated) real sensory-feedback signals arrive
from the remote slave site. In fact, this interaction between the human operator and the
virtual environment (that is, the virtual task performed by the human operator) can be used
to generate the appropriate command signals to be sent to the slave robotic site, and to
guide the on-line execution of the real teleoperation task. The use of such an
intermediate virtual representation of a teleoperation task is reported in (Kheddar et al.,
1997), where a multi-robot long-distance teleoperation experiment is described, as will be
presented in more detail in Section 4.1.1.
VR-based models of teleoperation tasks can also be used in off-line teleprogramming schemes,
in which case the master and slave control loops are completely decoupled. The human
operator performs a virtual task in a completely simulated manner, within a 3D graphic
environment representing the slave site. This virtual task is analyzed, and the appropriate
sequence of robot commands is extracted and recorded. The sequence of command signals is
then evaluated by the human operator before its subsequent transmission to the slave
robotic system, where real task execution will take place. Communication time delay is
generally not a problem in this approach. However, it is not applicable to all kinds of
teleoperation tasks, for instance when fine telemanipulation by a dextrous robotic
mechanism is required, since programming such complex tasks in the form of simple
sensor-based operations is very difficult. The key issue in teleprogramming schemes is the
type of commands that will constitute the robot programs, which must make full use
of any autonomy features supported by the slave robotic system, in the form of
reactive sensor-based behaviours or elementary task operations. Such approaches are
especially applied in super-long-distance teleoperation systems, for instance when guiding
the operation of a rover on the surface of a distant planet such as Mars. Of course, the same
idea of semi-autonomous teleoperation control can also be applied in an on-line direct
teleoperation scheme, where higher-level command primitives can be sent in real time
to the remote robot, instead of the traditional continuous force/position/speed signals. In
this general framework, Hirzinger et al. (1993) have proposed the use of a tele-sensor-based
scheme for the remote control of a robot manipulator in space. Freund and Rossmann (1999)
have proposed a task deduction/action planning approach (called the projective virtual reality
paradigm) tested on a variety of applications, from simple teleoperated robotic assembly
tasks up to the control of multirobot telemanipulation systems for space applications. In
Section 4.1.2, we will present an example of a similar telerobotic system, but with the
application being that of a “remote laboratory” for education and training in robotics.
VR technology and its applications in different scientific fields have developed rapidly
during the last five to ten years. We can now say with confidence that VR has the potential to
become a key technology for the design of modern man-machine interfaces, as is the case for
teleoperation systems. It can provide the tools and techniques to establish a multimodal, natural
and intuitive human-machine interaction, increasing the feeling of telepresence for the human
operator, which constitutes the ultimate goal of any teleoperation/telerobotic system. Of course,
many challenging problems have to be tackled and appropriate (generalized or task-specific)
solutions must be proposed, taking into consideration not only ergonomic issues and human
factors, but also more technical problems such as image calibration (Kim, 96), coping with
discrepancies and modeling uncertainties, as well as control issues and the stability of
human-machine active interfaces.
The use of VR techniques in telerobotics can be seen as an evolution of general computer-
aided teleoperation schemes, developed to facilitate the task of the human operator and
provide assistance in one of the following ways:
by performing the functions of an information provider, that is, by enhancing the
sensory feedback provided to the human operator and helping him to better
understand the state of the remote task execution. Typical examples are the
graphical predictive displays, described above, or some form of artificial haptic
(kinesthetic and/or tactile) feedback. Other VR-based techniques include the use of
virtual fixtures (Rosenberg, 1993) or virtual mechanisms (Joly & Andriot, 1995).
by performing some form of decision-support function, that is, by providing
suggestions or indications concerning the most suitable action plan, assisting the
human operator in the decision-making process.
by interpreting the actions of the human operator and performing a function of
substitution or cooperation, to provide active assistance for the on-line control of a
teleoperation task. This is the case of an active intervention of the master computer,
with typical examples being a system undertaking the control of some degrees of
freedom (dof), or ensuring that the commands issued by the human operator
satisfy some constraints related to safety issues.
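As an illustration of the last kind of assistance, a "virtual fixture" in the spirit of (Rosenberg, 1993) can be reduced to projecting the operator's commanded displacement onto a guide direction and attenuating the off-axis component. The sketch below is only a schematic reading of the idea, not the cited implementation; the function name and the stiffness parameter are invented for the example:

```python
def virtual_fixture(cmd, guide, stiffness=1.0):
    """Filter an operator displacement `cmd` through a guide-line fixture.

    `guide` is a unit vector defining the fixture; stiffness = 1.0 forbids
    off-axis motion entirely, stiffness = 0.0 leaves the command untouched.
    """
    along = sum(c * g for c, g in zip(cmd, guide))       # on-axis magnitude
    on_axis = [along * g for g in guide]
    off_axis = [c - o for c, o in zip(cmd, on_axis)]
    # pass the on-axis part through; attenuate the rest
    return [o + (1.0 - stiffness) * f for o, f in zip(on_axis, off_axis)]
```

A master station applying such a filter before forwarding commands is one concrete way of "ensuring that the commands issued by the human operator satisfy some constraints", as described above.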
All these features (i.e. providing perception, decision or action assistance to the
human operator) concern functions performed by the system within the master
control station and are generally described by the term computer-aided teleoperation.
Similarly, some form of computational intelligence can be embedded in the slave
control system, which is for instance the case of a slave robot supporting some kind of
autonomous sensor-based behaviors. In this case, we refer to a shared-control (or
shared-autonomy control) mode of operation, with the slave robot executing a set of
elementary (or more complex) operations in a completely autonomous mode. The
commands issued by the master control station (that is, by the human operator) are
described at a higher level of abstraction and include some form of implicit task
representation. At an even higher level, one could then think of a telerobotic system
where the human operator is in charge of simply supervising the remote task
execution, intervening actively only in extreme error-recovery situations. All these
paradigms are generally grouped under the term supervisory teleoperation, described in
(Sheridan, 1992). A schematic representation of the evolution of these teleoperation
paradigms is illustrated in Fig. 3. The interaction and merging of machine intelligence
features with the human operator's capacities and skills is the key issue that will lead
to the creation of more advanced telerobotic systems, capable of performing more
complex tasks such as those required in the field of intervention and service robotics. It
is certainly one of the most challenging tasks for the designers of modern
teleoperation systems to find the "optimum line" between robot autonomy and
human operator control, in order to exploit to the fullest extent the potential of such
human/machine interaction and cooperation systems.
2 http://telerobot.mech.uwa.edu.au/
consists of a six-axis robot manipulator, remotely controlled with one fixed observing camera.
The initial system, originally demonstrated in 1994, required users to type in spatial
coordinates to specify relative arm movements. Since then, various user interfaces have been
developed and tested (Taylor & Dalton, 2000), which more recently embed Java technology to
enable the human operator either to choose from a prespecified set of target positions or to
click on the image and issue robot motion commands relative to the position of a cursor. The
problem, of course, still remains to associate the position of the cursor being dragged on a
2D image with the position of the robot end-effector and the other objects in the 3D world.
Another very good example of a robotic manipulator controlled over the Web is the
PumaPaint system (Stein, 2000), which was on-line from June 1998 until March 2000. It
consisted of a Puma 760 robot controlled over the Internet using a Java-compatible web
browser. The task performed by the robot was painting on an easel, physically reproducing the
paintings created by the user on a virtual canvas incorporated in the user interface,
which ran as a Java applet. The interface forwards all commands to the robot so that almost the
same image appears on the real canvas. The system also provides visual feedback in the form
of periodically updated live images from the robot.
Besides these systems consisting of robot manipulators controlled through the Internet,
there is another class of web robots involving the teleoperation of mobile platforms over the
WWW. Most of these systems provide exclusive remote control to a single person or provide
queues to schedule user requests. One of the first mobile robots to operate in a populated
office building, controlled through the web, was Xavier (Simmons et al., 2000). This system
was created at the end of 1995 to test the performance of various navigation algorithms, but
soon became very popular, with more than 40,000 requests and 240 kilometers travelled
to date. The command interface of the robot provides a discrete list of destinations to send
the robot to and a list of simple tasks to perform there. When a user submits a task request, this
task is scheduled for execution and a confirmation web page is sent back indicating when
the robot will most likely carry out this task. If the user has registered with a valid e-mail
address, the system sends an e-mail after completion of the requested task. In addition to
the command interface page, there is a monitoring web page that includes the robot's
current status, a map of the floor the robot is currently on, and a picture of what it currently
sees.
A very interesting application of such web-based systems involves the remote control of mobile
platforms moving in a museum. These are called tour-guide robots (Thrun et al., 1999), like the
Rhino robot deployed in the Deutsches Museum in Bonn, or its successor, Minerva (Schulz et al.,
2000), installed successfully in the Smithsonian's National Museum of American History. These
robots are operated either under exclusive control by remote users on the web (virtual visitors),
or under shared control by both real (on-site) and remote (virtual) visitors of the museum. Under
exclusive web control, the user interface is implemented as one Java applet incorporating a map
of the exhibition area and two live images, one from the robot and the other from a
ceiling-mounted camera. Minerva's shared-control interface was on-line for 91 hours and was accessed
by 2885 people. The robot travelled 38.5 km under shared web and on-site control, providing
information about 2390 exhibits.
There exist many other Web robots on the net, performing a variety of tasks such as those
described in (Goldberg, 2000). The NASA Space Telerobotics program website3 currently
3 http://ranier.oact.hq.nasa.gov/telerobotics_page/realrobots.html
lists over 20 Real Robots on the Web. Reviewing all these web-based teleoperation systems,
it is clear that the main problem is the unpredictable and variable time delay of
communication over the Internet, which calls for the use of some form of supervisory
control or off-line teleprogramming scheme to ensure stability. Most of the systems
currently available on the web incorporate user interfaces implementing basic
functionalities, such as enabling the user to choose from a prespecified set of tasks (e.g.
target locations). These interfaces use some combination of HTML forms or Java consoles to
enter data and issue simple commands for immediate or future execution. Sensory feedback
is usually limited to the display of images captured at the remote site and the
presentation of some status information in text form. It is obvious that this separation
between the actions of the human operator (user) and the response of the system fed back
from the remote robot deteriorates the transparency and telepresence characteristics of the
teleoperation system. More advanced "interactive telepresence" techniques need to be
investigated, such as the integration of VR models and tools within the master
control interface (including predictive displays and automatic active-assistance operations),
to enable a more natural, intuitive, direct and real-time interaction between the user and the
web-based teleoperation system.
This experiment was the first of a research cooperation programme named TWE
(Telepresence World Experiment), linking seven research teams from five countries
(among them, the Laboratoire de Robotique de Paris in France, and the Mechanical
Engineering Laboratory in Tsukuba, Japan).
The main challenge of this "telework experiment" was to demonstrate the possibility offered
by VR technologies to improve the human operator's master control interface and enhance
the capabilities of such robot teleoperation systems. With four different robots controlled in
parallel to perform the same task, a common intermediate representation is imperative,
letting the human operator focus on the task to be performed while "masking" any kinematic
dissimilarities and constraints of the robot manipulators. Fig. 5 shows an overview of the
experimental setup, with the master control interfaces and two robots in parallel operation
(one in France and one in Japan). As can be seen in this figure, the task consisted of
assembling a four-piece puzzle within a fence on a table. The operator performs the virtual
puzzle assembly using his own hand and skill in the master control virtual environment,
which constitutes an intermediate representation of the real remote assembly task. The
visual and haptic feedback is local and concerns only the graphic representation of the
remote task features, without any remote robot. The operator/VE interaction parameters are
sent to another workstation in order to derive robot actions (graphically represented for
software validation and results visualization) and do not involve any direct operator
action/perception. A video feedback channel was kept for safety and error-recovery purposes.
The ultimate goal of such research efforts is to study the role that VR concepts and
techniques can play in the development of "efficient" interaction and communication
interfaces between humans and robots. To improve the transparency of the telerobotic
system, which, as already stated, constitutes a major target, such a human-robot interface
must enable the human operator:
to remotely perform the desired task in a natural and intuitive way, as if physically
present at the remote (slave robot) site, without feeling obstructed or constrained by the
telerobotic system, an objective often described by the term "telepresence",
to use his manual dexterity in remotely conducting the desired manipulation task,
meaning that the system must support natural skill transfer between the human
operator and the remotely controlled (slave) robot.
To approach these key objectives, it is particularly important for the master teleoperation
environment to display information to the human operator not in a static way but through a
multimodal/multisensory dynamic interaction environment. VR concepts and tools play a
significant role in this direction. In particular, interacting via the (active) sense of touch is of
primary importance. This is termed "haptic interaction", or haptics, which is now a very active
research field worldwide, with numerous applications, as we will see later in this chapter.
These haptic systems are often based on the development of special-purpose glove-like
devices, often termed "data gloves", or "force-feedback gloves" if the application of forces on
the human operator's hand is possible. One such example is illustrated in Fig. 6, where the
human operator, wearing a specially designed exoskeleton device on his hand (the LRP
dexterous hand master; see for instance (Tzafestas et al., 1997)), can interact, in a direct and
intuitive way, with a virtual environment representing the task to be performed. The virtual
manipulation actions performed within this VR simulation environment on the master
control site are transformed into an appropriate sequence of commands for a robot
manipulator to execute, and are then transmitted to the slave robot site(s).
[Figure: experimental setup -- the master station (virtual hand, joystick, video feedback) linked via Ethernet to remote robots at LRP and at MEL (Tsukuba), including a FANUC M6i robot.]
We can thus conclude that the application of VR-based concepts and tools in the control of
robotic telemanipulation systems aims principally at the development of a human-robot
interactive communication framework that better exploits: (a) on the one hand, the dexterity
and skills of the human operator in conducting dexterous manipulation tasks and solving
complex decision problems, and (b) on the other hand, the capacity of robot manipulators to
execute, with superior speed and accuracy, a combination of sensor-based primitive tasks.
[Figure annotations: 2D graphical representation panels (showing robot waypoints, the actual (remote) robot position and the commanded robot position), a control/command editing panel, a feedback panel and a status panel.]
Fig. 7. The graphical user interface of the virtual robotic laboratory platform.
Taking into account these considerations, the work described in (Tzafestas et al., 2006)
was directed towards the development of a virtual robot laboratory platform that will
train students on how to program a robot manipulator arm, using the functionality and
programming modalities provided by the real robotic system. The platform developed
incorporates a robot’s Teach Pendant emulator, as well as a virtual 3D robot animation
panel integrated in the graphical user interface. The system enables students to create,
edit and execute robot programs (i.e. complete motion sequences, such as a pick-and-place
task), in exactly the same way as they would if they were using the real robot's
pendant tool. The program created can be previewed "locally" by the student/trainee in
2D and 3D animation modes, and can then be sent for execution: either (a) by the virtual
robot simulation, incorporated as mentioned above in the graphical user interface, or (b)
by a real, remotely located, robot manipulator (such as, in this case, a SCARA-type
AdeptOne manipulator located in the premises of the robotics laboratory).
Fig. 7 shows the graphical user interface (GUI) of the virtual and remote robotic laboratory
platform, which is developed based on Java technologies and integrates the following
control panels:
2D graphical representation panels (top-view and side view), visualizing both
actual and commanded robot configurations,
a real-time video streaming panel, which is based on RTP and implemented using
JMF, showing (when on-line) the real remote manipulator in motion,
a control/command editing panel,
an interactive panel providing an exact emulation of the robot’s Teach Pendant,
called Virtual Pendant,
status and feedback panels providing real-time textual information on current
robot state, and
a virtual robot panel, implemented using Java3D API, providing 3D
visualization of both the commanded (preview animation) and the current
robot configuration.
The overall architecture of the remote laboratory platform is depicted in Fig. 8. The
system is based on a client-server architecture, enabling users to connect via Internet (or
LAN). It supports multiple connected users through the implementation of a specific
protocol using TCP/IP sockets for communication and real-time data exchange with the
"robot server". The robot server supports the following three remote control modes: (i)
direct teleoperation control, (ii) indirect control, for robot teleprogramming via the
command/ editing panel, and (iii) manual control, that is, robot manipulator
programming using the Virtual Pendant functionalities. These control modes are
inspired by the telerobotics field, and particularly by work proposing various
"shared-autonomy" and "supervisory" remote control modalities. In direct teleoperation,
every command issued by the user (human operator) locally, i.e. within the GUI (master
control site), is immediately transferred for execution to the remote (slave) robot. At the
same time two types of feedback displays are active: (a) a predictive display (both in the
2D and 3D graphical panel) immediately visualising the commanded robot motion
according to the human operator issued commands, and (b) a real robot feedback
display (also both in 2D and 3D animation), showing where the robot actually is (that is,
visualising current remote robot configuration, information provided in real-time
through continuous feedback from the remote site).
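As an illustration, the interplay between the predictive display and the real-robot feedback display in direct teleoperation could be sketched as follows (a minimal sketch; the class, method names and joint representation are illustrative assumptions, not part of the actual platform):

```python
# Hypothetical sketch of the direct-teleoperation loop: every operator command
# updates the predictive display immediately and is forwarded to the remote
# robot, while continuous feedback updates the "actual" display separately.

class DirectTeleoperation:
    def __init__(self, send_to_robot):
        self.send_to_robot = send_to_robot      # callable forwarding commands to the slave
        self.predicted = [0.0, 0.0, 0.0, 0.0]   # commanded configuration (preview display)
        self.actual = [0.0, 0.0, 0.0, 0.0]      # last configuration reported by the robot

    def issue_command(self, joint, delta):
        """Operator command: update the predictive display, then transmit."""
        self.predicted[joint] += delta          # (a) predictive display updates at once
        self.send_to_robot((joint, delta))      # command sent for remote execution

    def on_feedback(self, configuration):
        """Continuous feedback from the remote site updates the actual display."""
        self.actual = list(configuration)

sent = []
tele = DirectTeleoperation(sent.append)
tele.issue_command(0, 10.0)         # predicted configuration moves immediately
tele.on_feedback([4.0, 0, 0, 0])    # the real robot is still catching up
```

The separation of the two displays is what lets the operator see both where the robot was commanded to go and where it actually is at any instant.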
Virtual and Mixed Reality in Telerobotics: A Survey 455
[Fig. 8 diagram: remote terminals TE-1 … TE-n connect over the Internet, via TCP/IP sockets, to the Robot Server, which drives the AdeptOne robot through the Adept controller over an RS232 serial port; a video camera and frame grabber feed an RTP video streaming server.]
Fig. 8. Overall Architecture of the Virtual and Remote Robot Laboratory Platform.
As opposed to direct teleoperation, in the indirect "teleprogramming" control mode the
commands are generated off-line, queued in a list and submitted to the robot in a
subsequent time frame, when the human operator decides to do so. The idea is to be able to
create a complete robot program off-line, test its validity and optimality, before actually
sending the command sequences for execution by the real robot. Based on the functionality
(robot command editing routines, waypoints list creation etc.) of this indirect
teleprogramming mode, a third "manual-control" mode has been developed, which
implements exactly the Virtual Pendant robot-programming scheme. This Virtual Pendant
panel supports all the main functions of the real robot’s pendant tool, and enables the
student to learn and practice robot-programming routines locally.
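The off-line queueing logic of the indirect teleprogramming mode could be sketched as follows (a hypothetical sketch; the validation rule and all names are illustrative assumptions, not the platform's actual API):

```python
# Sketch of the teleprogramming mode: commands are edited off-line into a
# queue, validated (previewed), and only submitted to the robot server when
# the operator decides to do so.

class TeleprogrammingSession:
    def __init__(self, workspace_limit=100.0):
        self.queue = []                     # off-line command list (waypoints)
        self.workspace_limit = workspace_limit

    def add_waypoint(self, x, y, z):
        self.queue.append((x, y, z))

    def validate(self):
        """Preview check: every waypoint must lie inside the workspace."""
        return all(max(abs(c) for c in wp) <= self.workspace_limit
                   for wp in self.queue)

    def submit(self, send):
        """Send the whole validated program to the robot server at once."""
        if not self.validate():
            raise ValueError("program contains unreachable waypoints")
        for wp in self.queue:
            send(wp)
        submitted, self.queue = self.queue, []
        return submitted

session = TeleprogrammingSession()
session.add_waypoint(10, 20, 5)     # e.g. a pick position
session.add_waypoint(40, 20, 5)     # e.g. a place position
sent = []
program = session.submit(sent.append)
```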
A pilot study was conducted to evaluate the performance of the virtual and remote robotic
laboratory platform. The objectives of this study were: (1) to explore to which extent the
considered e-laboratory modalities can be efficiently implemented in practice and used by
students to obtain practical training as a supplement to a theoretical course module (in this
case, an introductory course on robotic manipulation), and (2) to explore the relative
importance of various e-learning elements, particularly virtual vs. remote training
modalities, in comparison with traditional hands-on experimentation. Experimental results
show that students trained using the virtual training modality performed equally well as
compared to students trained the classical way on the real robot, and were even seen to be
more motivated, as revealed by the error rate related to the assimilation of high-level
concepts and skills. These results are very interesting since they show that VR technologies
and tools, when combined with telerobotic concepts and techniques in distance training
scenarios, can prove very beneficial and contribute significantly to the development of very
efficient virtual learning platforms, in many engineering (and not only) disciplines where practical hands-on training is essential.
Fig. 9. Teleoperation of a subsea ROV. The upper pictures show the master control station, and the lower pictures show an image of the real submersible robot, as well as snapshots of the video feedback from the robot (JASON ROV) environment (adapted from (Sayers, 1999), the GRASP laboratory, University of Pennsylvania).
Fig. 10. Teleoperated mobile robotic assistant: general architecture of the system (adapted
from (Tzafestas et al., 2000))
These may include tasks such as transportation of specific items (like pharmaceuticals, lab
specimens, medical records etc.), accompanying a patient from one location to another
within the hospital building, or even surveillance of an area, in other words, a combination
of simple displacement and manipulation (fetch-and-carry) operations in a potentially
human-crowded indoor environment.
The global architecture of the system, with the principal functional modules and their
interconnections is shown in Fig. 10. It consists of the following main subsystems:
(a) The navigation and control subsystem, including the task planner, the global and local path
planning and navigation modules, as well as the manipulator control module. Major
issues that must be investigated include here: (i) the real-time collision avoidance to
ensure safe operation of the mobile platform in a dynamic environment, (ii) the
development of a number of autonomous low-level sensor-based behaviors, and (iii) the
coordinated action of the mobile platform and robot manipulator to optimally exploit the
redundancies offered by such an integrated mobile manipulation system,
(b) The sensing and perception subsystem, performing fusion and interpretation from a
variety of sensory feedback information, provided by the odometry and optical
encoders (proprioceptive feedback), as well as from the vision and ultrasonic sensors
(exteroceptive feedback). The goal of this subsystem on its whole is to update internal
representations regarding: (i) the robot's actual state (positioning, i.e. robot
localization) and (ii) the external world map (i.e. dynamic moving obstacles etc.).
(c) The teleoperation subsystem, which aims at integrating the decision and action
capacities of a human operator in the control loop, and consists of: (i) a multimodal
user interface (which will be designed to enable Web-based remote control of the
robotic platform), (ii) a sensory feedback acquisition and processing module, and (iii) a
task deduction and command generation subsystem. All these modules are
coordinated by a teleoperation server, which was designed to support various modes
of operation ranging from direct on-line remote monitoring and control, to off-line
teleprogramming or simple supervisory control of the system.
The work described briefly in the sequel focuses more specifically on the design and
implementation of a multimodal teleoperation system for the mobile robotic assistant,
integrating virtual reality techniques within a Web-based user interface, to assist the human
operator and enhance the functionality and efficiency of the system. Efficiency in remote
operation and control of a robotic system concerns: (a) making "good use" of the available
communication bandwidth between the master and slave systems, and (b) achieving a
"synergy" between the human operator and the robot, by enabling the system to best exploit
and integrate (in terms of speed, precision and error recovery) both (i) the human operator
capacity to take rapid decisions and intuitively indicate the most appropriate (coarse or
detailed) plan for system action (e.g. robot motion) in complex situations, and (ii) the robotic
system capacity to perform, with controlled speed and precision, a variety of autonomous
operations.
To approach these general targets, a set of requirements has to be specified and
fulfilled by the teleoperation system and all its sub-modules. The final system design must
converge towards the merging between a number of often contradictory modalities, in
search of an "optimum" compromise and increased "efficiency". By multimodal teleoperation
interface we mean a system that supports: (a) multiple computer-mediated human/robot
interaction media, including VR models and tools, or even natural gesture recognition etc.,
and (b) multiple modes of operation, with a varying degree of robot autonomy and,
respectively, human intervention. The latter is a very important issue for the design of a
telerobotic system, as has been already cited in previous sections. The modes of operation
that have been considered for this teleoperation system included:
(a) Direct teleoperation control, based on on-line exchange of low-level commands and raw
sensory signals.
(b) Computer-aided teleoperation of the mobile robot, with the master control system providing
some form of assistance to the human operator, such as: (i) performing information feedback
enhancement, like model-based predictive display, (ii) undertaking active control for some
of the degrees of freedom (dof) of the system, substituting or complementing the human
operator, or even (iii) providing some form of active guidance and model-based correction
for the human operator's actions. Two main functions have been integrated in the system: an
active anti-collision and an active motion-guide function, both based on the use of either a
virtual reality model of the robotic platform and its task environment, or of a simple 2D top-
view representation.
(c) Shared-autonomy teleoperation control of the robotic system, using a set of sensor-
based autonomous behaviors of the robot. This mode of teleoperation control can be
extended to incorporate a large set of intermediate-level, behavior-based, hybrid
(qualitative/ quantitative) instructions, such as: move through points A, B, C while
avoiding obstacles, pass through door on the left, move at distance d from wall, follow
corridor etc. These commands trigger and make use of respective behavior-based
control modes of the robot, incorporating automatic path generation functions. In other
words, this mode of teleoperation control is based on some form of basic autonomy
(local path planning and reactive sensor-based behaviors etc.) embedded on the slave robotic system.
The input devices used were: a joystick for virtual robot motion control, and a Spaceball for
virtual camera navigation. Some form of sensory feedback information is also integrated in
this virtual environment, like the actual robot position represented by a wireframe graphic
model of the robot platform.
(ii) The control-panel, containing a 2D top-view graphical representation of the mobile robot
environment (corridors, doors, rooms, obstacles etc.) and a command editing panel. The 2D
model contains accurate map information of the whole indoor environment, where the
robotic assistant is operating, allowing the user to obtain rapidly a top-view of any required
region (using scrollbars or predefined region-buttons). The human operator will also have
the ability, if needed, to directly edit commands that must be sent to the robot.
(iii) The sensory-feedback panel, where all the required sensory feedback signals are displayed
(for instance a sonar map, indicating the location of obstacles detected by the robot). A
visual-feedback panel was also integrated, displaying images obtained by the on-board robot
camera. The refresh-rate of this video feedback is of course reduced, since the bandwidth
available for communication through the Internet is limited and real-time access to other
more critical information, such as actual robot location and sensor status, is indispensable.
(iv) The status panel displaying information on the actual status of the robot in textual form,
as well as messages about actions and events.
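The behavior-based, hybrid (qualitative/quantitative) commands of the shared-autonomy mode described earlier could, for example, be encoded as simple structured messages (a sketch under assumed names; the actual system's command format is not specified in the original):

```python
# Illustrative encoding of intermediate-level, behavior-based instructions
# such as "move through points A, B, C while avoiding obstacles" or
# "follow wall at distance d". The message schema is an assumption.

def make_command(behavior, **params):
    """Build a hybrid qualitative/quantitative instruction for the robot."""
    allowed = {"move_through", "pass_door", "follow_wall", "follow_corridor"}
    if behavior not in allowed:
        raise ValueError("unknown behavior: " + behavior)
    return {"behavior": behavior, "params": params}

cmd1 = make_command("move_through", points=[(0, 0), (2, 1), (4, 1)],
                    avoid_obstacles=True)
cmd2 = make_command("follow_wall", side="left", distance=0.5)
```

Each such message would trigger the corresponding behavior-based control mode on the slave side, with the quantitative parameters filled in by the operator.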
Fig. 12 shows a snapshot of this prototype human operator interface, developed based on
Java technology to facilitate Web-based operation of the system. The 3D graphics rendering
routines for the VR panel are implemented using the Java3D API. An enhanced version of
this system will constitute in the near future the Internet-based teleoperation control
platform for this mobile robotic assistant.
Fig. 12. Multimodal teleoperation interface for the mobile robotic assistant: First prototype
implementation (adapted from (Tzafestas, 2001a))
Experiments with this system were very satisfactory, demonstrating convincingly that VR
techniques and tools can be used efficiently in the teleoperation of mobile service robots. There
remain, however, several issues that need further attention and require future research efforts.
One of the major difficulties is how to enable both: (a) the human operator to perform actions in a
natural and intuitive manner within the master control environment, and (b) the system to
interpret these actions, extract critical task-related parameters and synthesize appropriate robot
commands. The first issue is related to the design of the human operator interface, where we have
opted for the use of VR techniques to enable such an intuitive interaction with the system, while
providing active assistance functionalities, as described above. On-line monitoring and analysis of
the human operator's actions is then necessary, to deduce a correct robot task plan as
specified/indicated by these actions. This means, in other words, to incorporate some form of
"intelligence" in the master control environment, capable of performing these task-deduction and
robot-command-extraction operations, based on observation of human actions within a simulated
virtual representation of the slave site. One approach to this problem of robot action tele-planning
from observation and learning of VR-based human demonstration, is described in (Tzafestas, 2001b).
In the same context of "embedding intelligence" in telerobotic interfaces to facilitate the human
operator task and enhance system performance, an important issue concerns the development of
systems integrating more advanced input/output modalities, inspired from works in the field of
virtual reality (for instance, providing active assistance through haptic devices, as simple as a
force feedback joystick, or more enhanced like a general-purpose haptic device such as the
PHANTOM® devices from SensAble Technologies7). Such a research effort, involving active haptic
display in a mobile robot teleoperation interface, is described in the following section.
7 http://www.sensable.com/products/phantom_ghost/phantom.asp
Fig. 13. Overview of a mobile robot "haptic teleoperation" system with master (left) and
slave (right) side.
In this context, the application of haptic technologies is now considered as a very promising
research direction, offering great potential as a means to revolutionize the way human-robot (and
generally human-machine) interfaces are conceived. The term "haptic" comes from the Greek
word "αφή" (touch), and usually refers to the sense of touch, and especially to the human hand
that can act on the environment and perceive its physical characteristics. Exploiting human
dexterity and manipulative skills within interactive VR systems constitutes a real-challenge for
scientists and engineers in the field. Haptics constitutes now a rapidly evolving field of VR
technologies, with new research communities active worldwide8.
With the spread of low-cost haptic devices, haptic interfaces appear in many areas in the field of
robotics. In telerobotics, the integration of haptic technologies and tools has given rise to a new
research field that is now often termed as "haptic teleoperation". The main motivation behind this
new field is the constant search to enhance transparency and naturalness of teleoperation
systems, and promote performance a step further by enriching the human-telerobot interface
with new sensori-motor interaction modalities, for efficient human-robot skill transfer and
cooperation. Recently, haptic devices have been also used in the field of mobile robot
teleoperation, particularly where mobile robots operate in unknown and dangerous
environments performing specific tasks. Haptic feedback is shown to improve operator
perception of the environment without, however, improving exploration time.
Teleoperated mobile robots are a major tool in the exploration of unknown and risky
environments. Mine removal (Smith et al., 1992) and exploration of underwater structures (Lin et al., 1997) are two common applications carried out with mobile robots. Robot motion is usually controlled by system operators with the help of a camera mounted on the robot or inspecting the area from above. However, although vision systems provide much information about the environment, they require network bandwidth and considerable attention from
the operator. To overcome this problem, haptic devices have been recently introduced as a
way of enhancing operators' perception of the robot environment. They provide operators
with the additional sense of “feeling” the robot workspace, thus making it easier to avoid
obstacles and reducing the average number of collisions.
However, the force rendering process yields a problem regarding how the haptic feedback
affects the exploration time. Ideally, we would like the presence of force feedback to reduce
the exploration time or at least not to increase it. In practice, though, this additional sense
often adds more information for operators to interpret and leads to an increase in the
navigation time, as for instance in (Palafox et al., 2006). Another important issue in mobile
robot teleoperation is the selection of a proper driving mechanism. Usually, operators have
to manually drive the mobile robot through obstacles by explicitly specifying the robot
angular and linear velocity. By doing so, they are fully in charge of the robot motion and, as
a clear viewpoint of the robot environment may sometimes not be available, they could
accidentally drive the robot to collisions or choose longer paths than optimal ones.
In a recent work described in (Mitsou et al., 2006), a teleoperation scheme is presented for the case
of remote exploration of a structured polygonal environment by a miniature mobile robot, with
the use of a haptic device. During robot exploration, robot sensor measurements are used to build
an occupancy grid map of the environment, which is displayed to the operator as a substitute for
camera information. The operator can simultaneously exert two different types of commands: an
"active" and a "guarded" motion command. Each command receives force feedback independently
(without influencing one another) making force origin clear. A behavior-based system is then
responsible for controlling the overall motion performed by the slave mobile robot. The
commands received from the haptic device act as a general policy that the robot must follow.
An overview of the mobile robot haptic teleoperation system is shown in Fig 13. It consists of two
sides: the master side, which contains the haptic device and the master station with the map-
building module, and the slave side, which contains the mobile robot and a slave robot server
with the behavior and the localization module. A more detailed view of the main system
modules and their interconnection is schematically shown in Fig. 14. To develop a teleoperation
interface that will facilitate intuitive teleguidance of a mobile robot exploring unknown
environments (such as a rectangular maze), the first step was to implement a specific "driving
scheme". According to this scheme, the haptic workspace is divided into three types of areas.
These areas are not relative to the local coordinate system of the mobile robot, but rather absolute
and relative to the local coordinate system of the master workstation monitor. There is a "neutral"
area, corresponding to the area in the centre of the haptic workspace and implying a stop
command. When selected, the robot immediately stops its motion. When the haptic end-point
enters one of the Up, Down, Left or Right areas, the robot changes its orientation and moves
according to the operator's command. For instance, if the Up area is selected, the robot will start
moving upwards, as watched in the operator monitor. Finally, when the haptic end-point enters
one of the "bi-directional" areas, a “combined command” will be issued and sent to the slave side.
The robot will be instructed to move towards one direction and simultaneously “wall-follow” a
wall. A combined motion command execution is illustrated in Fig. 15 (upper row).
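The partitioning of the haptic workspace into neutral, directional and bi-directional areas could be sketched as follows (the area size and the command names are illustrative assumptions, not values from the original system):

```python
# Sketch of the "driving scheme": the 2D haptic workspace has a central
# neutral area (stop command), four directional areas (Up/Down/Left/Right),
# and diagonal "bi-directional" areas issuing combined move + wall-follow
# commands. Coordinates are absolute, relative to the monitor as described.

NEUTRAL_RADIUS = 0.2    # half-width of the central neutral area (assumed)

def haptic_to_command(x, y):
    """Map the haptic end-point position to a robot motion command."""
    if abs(x) <= NEUTRAL_RADIUS and abs(y) <= NEUTRAL_RADIUS:
        return "stop"                       # neutral area: robot stops at once
    if x > NEUTRAL_RADIUS:
        horizontal = "right"
    elif x < -NEUTRAL_RADIUS:
        horizontal = "left"
    else:
        horizontal = None
    if y > NEUTRAL_RADIUS:
        vertical = "up"
    elif y < -NEUTRAL_RADIUS:
        vertical = "down"
    else:
        vertical = None
    if horizontal and vertical:             # bi-directional area
        return "move_%s_follow_%s_wall" % (vertical, horizontal)
    return "move_" + (horizontal or vertical)
```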
[Fig. 14 diagram: on the master side, the human operator interacts with the Haptic Interface Module (user input, force feedback, occupancy-grid map); motion commands flow to the slave-side Behavior Module, which issues wheel commands to the mobile robot and returns odometry.]
Cells of the occupancy grid with value greater than a threshold are considered occupied (painted white), cells with value less
than this threshold are considered empty (painted black) and cells with no value are
considered unvisited (uncertain - painted gray). The constructed map is thus sequentially
created and continuously updated on the master control monitor along with the robot
position and orientation (Fig. 15). It forms the visual part of the information feedback
provided by the interface to assist the human operator in monitoring the task execution and
issuing proper navigation commands.
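The cell classification used for the map display could be sketched as follows (the threshold value is an illustrative assumption):

```python
# Sketch of the occupancy-grid display: cells above the occupancy threshold
# are drawn white (occupied), cells below it black (empty), and cells that
# were never sensed gray (unvisited / uncertain).

OCC_THRESHOLD = 0.5   # occupancy threshold (assumed value)

def cell_color(value):
    """value is the cell's occupancy estimate, or None if never sensed."""
    if value is None:
        return "gray"       # unvisited, uncertain
    return "white" if value > OCC_THRESHOLD else "black"

def render_map(grid):
    """grid: 2D list of occupancy values; returns the colour map shown."""
    return [[cell_color(v) for v in row] for row in grid]
```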
The second principal information feedback channel of the teleoperation interface concerns
haptics. As already mentioned, haptic feedback is added to the system to enhance operator's
perception of the remote environment, and assist teleoperation of the robot exploration task. The
force fed back to the operator is generated based on a 2D virtual joystick model. When the haptic
control point exits the "neutral area", a spring force is exerted to the operator attempting to
restore end-point position inside this neutral area. The stiffness coefficient of this virtual spring
depends on the absence or presence of an obstacle in this direction, that is, on whether the
respective motion command is permissible or not. In case that no obstacle hinders the execution
of a specific motion command (move forward/backward, turn left/right), the respective spring
coefficient is set to a minimum value. This feature enables the operator to feel if he/she is actually
exerting an "active" command. In case that an obstacle (e.g. a wall) is detected in the direction of a
motion command, the respective spring coefficient is gradually switched to a much larger (max)
value. The presence of a wall can thus be viewed as the origin of a virtual repulsive spring force
that is applied to the mobile robot. In this case, the operator feels a force on his/her hand as
if the wall is pushing the robot away, thus conveying the important information that the
respective motion command is not permissible.
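The virtual-joystick spring model described above could be sketched for one axis as follows (the numeric stiffness coefficients and the neutral-area size are illustrative assumptions):

```python
# Sketch of the restoring spring force: zero inside the neutral area, minimal
# stiffness when the commanded motion is permissible, and a much larger
# stiffness when an obstacle (e.g. a wall) blocks the motion command.

K_MIN, K_MAX = 0.1, 2.0   # spring stiffness: free vs. blocked motion (assumed)
NEUTRAL = 0.2             # half-width of the neutral area (assumed)

def spring_force(pos, obstacle_ahead):
    """Restoring force along one axis of the haptic workspace."""
    if abs(pos) <= NEUTRAL:
        return 0.0                            # inside the neutral area: no force
    excursion = pos - NEUTRAL if pos > 0 else pos + NEUTRAL
    k = K_MAX if obstacle_ahead else K_MIN    # wall => stiff virtual spring
    return -k * excursion                     # force pulls back toward neutral
```

The sharp contrast between K_MIN and K_MAX is what lets the operator feel, through the hand, whether a command is permissible.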
Fig. 15. Example of combined command execution (upper row, in the left, the map on the
master station, in the right the VR simulator view), and Sequential Map Creation in the
Master Teleoperation Interface (lower row) (adapted from Mitsou et al. 2006).
It is important to note that, in order to calculate the environmental force that is applied to the
human operator via the haptic device, we do not directly use the current sensor values
transmitted by the Localization Module. The force is generated according to the values of the
cells of the occupancy grid map. If more than n (a threshold we use to deal with noisy sensor
values) occupied cells (e.g. forming a virtual wall structure) are found in one direction, then the
respective motion command is considered as not permissible. The virtual interaction force is then
continuously computed according to the virtual joystick model described above, and sent to the
Haptic Device for application. Apart from a spring model, a virtual damper model is also used to
smooth the force and avoid large fluctuations in its value.
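The map-based permissibility test and the spring-damper smoothing could be sketched as follows (the threshold n, the damping coefficient and the helper names are illustrative assumptions):

```python
# Sketch of the force-generation rule: a motion command is declared not
# permissible when more than n occupied cells line up in its direction
# (n filters noisy sensor values), and a virtual damper term smooths the
# force to avoid large fluctuations.

N_OCCUPIED = 3    # noise threshold on occupied cells (assumed)
DAMPING = 0.5     # virtual damper coefficient (assumed)

def command_blocked(cells_in_direction, threshold=N_OCCUPIED):
    """cells_in_direction: occupancy values of grid cells ahead of the robot."""
    occupied = sum(1 for v in cells_in_direction if v is not None and v > 0.5)
    return occupied > threshold               # e.g. a virtual wall structure

def damped_force(spring_force, velocity):
    """Spring-damper model: the damper opposes fast force fluctuations."""
    return spring_force - DAMPING * velocity
```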
Additionally, a local robot model is implemented in the master station, to deal with delays
in the communication between the master and the slave side. The robot position received
from the Localization Module is used to update the local model estimation and the Haptic
Interface Module considers the new updated position as the real robot position. In case of
great latency in master-slave communication, if no local model is used the operator may not
feel a wall coming closer to the robot. However, as the local model estimates the current
robot position, the generated force is going to increase and the operator will feel the real
distance from the wall.
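The latency-masking local model amounts to dead reckoning between feedback updates, and could be sketched as follows (the constant-velocity assumption and all names are illustrative, not the original implementation):

```python
# Sketch of the local robot model on the master station: between position
# updates from the Localization Module, the master extrapolates the robot
# position from the last commanded velocity, so the haptic force grows as the
# (estimated) robot approaches a wall even before fresh feedback arrives.

class LocalRobotModel:
    def __init__(self):
        self.position = (0.0, 0.0)
        self.velocity = (0.0, 0.0)

    def on_feedback(self, position):
        """Fresh position from the slave's Localization Module resets the model."""
        self.position = position

    def on_command(self, velocity):
        self.velocity = velocity

    def estimate(self, dt):
        """Dead-reckoned position after dt seconds without feedback."""
        x, y = self.position
        vx, vy = self.velocity
        self.position = (x + vx * dt, y + vy * dt)
        return self.position

model = LocalRobotModel()
model.on_feedback((1.0, 0.0))
model.on_command((0.5, 0.0))
est = model.estimate(2.0)   # robot assumed to keep moving during the delay
```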
In the slave side, the Mobile Robot is responsible for the exploration of the unknown
environment. The robot used in these experiments was a Hemisson mobile robot9. It has a
differential drive system, is equipped with six low-range infrared sensors and has no wheel
encoders. It sends sensor measurements to a server computer (slave robot server) and
receives from it the new speed commands for each wheel. The communication between
robot and server computer was performed via a wireless link.
The Behavior Module in the slave robot server is responsible for the robot motion. Given a
policy from the master side, the Behavior Module makes sure that the robot will follow it
except for cases of possible collisions. Two different behaviors are implemented: a collision
detection behavior and a wall-follow behavior. The first is activated when an obstacle is
detected by the three front sensors. In this case, the robot stops when the operator's command drives it towards the obstacle, but accepts commands that guide it away from the
obstacle. The wall-follow behavior is activated by the operator policy. Under specific
policies, the robot can follow walls and automatically turn on corners so as not to lose
contact with them.
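The arbitration between the operator's policy and the two slave-side behaviors could be sketched as follows (sensor ranges, thresholds and command names are illustrative assumptions):

```python
# Sketch of the Behavior Module: the operator's policy is followed unless the
# collision-detection behavior (triggered by the three front sensors) vetoes
# it; the wall-follow behavior can turn on corners to keep contact with a wall.

def behavior_module(policy, front_sensors, wall_follow=False, wall_lost=False):
    """Return the motion actually executed for the operator's policy."""
    obstacle_ahead = any(d < 0.1 for d in front_sensors)  # three front IR readings
    if policy == "forward" and obstacle_ahead:
        return "stop"            # collision-detection behavior vetoes the policy
    if wall_follow and wall_lost:
        return "turn_corner"     # wall-follow behavior keeps contact at corners
    return policy                # otherwise the operator's policy is followed
```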
To test the effectiveness of this approach, two different kinds of experiments were performed.
The first one was a comparison between this approach and a common teleoperation method,
while the second one constituted a “drive-by-feel” experiment. Both experiments were
performed on a simulated mobile robot, with a virtual world (the slave robot environment) on a
separate remote computer. Also, a PHANTOM Omni™ haptic device was used for force-feedback
generation in the master teleoperation interface. Results show that exploration of unknown
environments can be completed in less time when haptic feedback is enabled. The proposed
visuo-haptic interface is, thus, found to improve navigation time and decrease operator effort.
Comparative evaluation experiments show that the proposed system, with the addition of haptic
display and autonomous behaviors in the slave robot system, automates the task of exploration
making it much easier for the operators to remotely navigate the robot through an unknown
world. Moreover, with no visual information available to the operator, exploration tasks
consisting of finding the exit in an unknown maze can be completed successfully when force
feedback is enabled ('drive by feel' experiment).
In conclusion, the use of VR technologies in such teleoperation interfaces is seen to improve operators' perception of the environment, and makes a step towards facilitating and
automating teleoperated mobile robot exploration tasks. In the future, work is still needed to
extensively test such approaches in practical scenarios, particularly generalizing related
system architectures so as to successfully cope with applications involving any type of
unstructured environment, as well as to examine system behavior under time delay
conditions and develop algorithms to cope with such latency in master-slave round-trip
communication.
6. References
Anderson, R. J. and Spong, M. W. (1992). “Asymptotic Stability for Force Reflecting
Teleoperators”, Intern. Journal of Robotics Research, vol. 11, no. 2, pp.135-149, 1992.
Azuma, R. T. (1997). “A Survey of Augmented Reality,” Presence, 6 (4), August 1997.
Bathia, P., and Uchiyama, M. (1999). "A VR-Human Interface for Assisting Human Input in
Path Planning for Telerobots", Presence, vol.8, no.3, pp.332-354, June 1999.
Bejczy, A. K., Kim, W. S., and Venema, S. (1990). “The Phantom Robot: Predictive Displays
for Teleoperation with Time Delay”, 1990 IEEE Int. Conf. on Robotics and Automation,
pp. 546-551, 1990.
Burdea, G. and Coiffet, P. (1994). Virtual Reality Technology, John Wiley, 1994.
Ellis, S.R. (1995). “Virtual environments and environmental instruments,” Chapter 2, in:
Simulated and Virtual Realities: Elements of Perception, K. Carr and R. England (eds.),
Taylor & Francis, London, pp. 11-51, 1995.
Freund, E., and Rossmann, J. (1999). “Projective Virtual Reality: Bridging the Gap between
Virtual Reality and Robotics”, IEEE Transactions on Robotics and Automation, vol. 15,
no. 3, pp. 411-422, June 1999.
Goldberg, K. (2000). “Introduction: The Unique Phenomenon of a Distance”, in: The Robot in
the Garden. Telerobotics and Telepistemology in the Age of the Internet. K. Goldberg
(ed.), MIT Press 2000.
Gracanin, D. and Valavanis, K. P. (1999). “Autonomous Underwater Vehicles”, Guest
Editorial, IEEE Robotics and Automation Magazine: Special Issue on Design and
Navigation of Autonomous Underwater Vehicles, vol.6, no.2, June 1999.
Hirzinger, G., Brunner, B., Dietrich J., and Heindl, J. (1993). “Sensor-Based Space Robotics-
ROTEX and Its Telerobotic Features”, IEEE Transactions on Robotics and Automation,
vol. 9, no. 5, pp. 649-663, 1993.
1. Introduction
Visual control of robots has been discussed for many years. Related applications are extensive, encompassing manufacturing, teleoperation and missile-tracking cameras, as well as robotic ping-pong and juggling. Early work fails to meet the strict definition of visual servoing and would now be classed as look-then-move robot control
(Corke, 1996). Gilbert describes an automatic rocket-tracking camera which keeps the target
centered in the camera's image plane by means of pan/tilt controls (Gilbert et al., 1983).
Weiss proposed the use of adaptive control for the non-linear, time-varying relationship
between robot pose and image features in image-based servoing. Detailed simulations of
image-based visual servoing are described for a variety of manipulator structures of 3-DOF
(Webber &.Hollis, 1988). Mana Saedan and Marcelo H. Ang worked on relative target-object
(rigid body) pose estimation for vision-based control of industrial robots. They developed
and implemented a closed-form target pose estimation algorithm (Saedan & Marcelo, 2001).
Skaar et al. use a 1-DOF robot to catch a ball. Lin et al. propose a two-stage algorithm for
catching moving targets; coarse positioning to approach the target in near-minimum time
and 'fine tuning' to match robot acceleration and velocity with the target.
Image-based visual control of robots has been considered by many researchers, who used a closed loop to control the robot joints. Feddema uses an explicit feature-space trajectory
generator and closed-loop joint control to overcome problems due to low visual sampling
rate. Experimental work demonstrates image-based visual servoing for 4-DOF (Kelly &
Shirkey, 2001). Haushangi describes a similar approach using the task function method and
shows experimental results for robot positioning using a target with four circle features
(Haushangi, 1990). Hashimoto et al. present simulations to compare position-based and
image-based approaches (Hashimoto et al., 1991).
Much research has been done on simulating the behavior and environment of robots. Korayem et al. designed and simulated vision-based control and performance tests for a 3P robot in Visual C++. They minimized the positioning error of the end effector, analyzed the error using the ISO 9283 and ANSI/RIA R15.05-2 standards, and suggested ways to reduce it (Korayem et al., 2005, 2006). They used a camera installed on the end effector of the robot to find a target, and controlled the end effector with feature-based visual servoing to reach the target. In the present work, by contrast, the vision-based control is implemented on a 6R robot. The two cameras are mounted on the ground, i.e., the cameras observe the robot, so the system can be called "out-hand" (the term "stand-alone" is generally used in the literature). The closed-form target pose estimation is discussed and used in the
position-based visual control. The advantage of this approach is that the servo control
structure is independent from the target pose coordinates and to construct the pose of a
target-object from 2 dimension image plane, two cameras are used. Image feature points in
each of the two images are to be matched and 3-D information of the coordinates of the
target-object and its feature points are computed by this system.
This chapter starts by introducing the 6R and 3P robots designed and constructed in the
IUST robotics research lab (Figs. 1, 2). The modeling and simulation process used to solve the
direct and inverse kinematics of the robots is described. After discussing the simulation
software of the 6R and 3P robots, we simulate control and performance tests of the robots, and
finally the test results are analyzed according to the ISO9283 and ANSI-RIA R15.05-2 standards using MATLAB.
2. Kinematics Equations
Within kinematics one studies the position, velocity and acceleration, and all higher order
derivatives of the position variables. The kinematics of manipulators involves the study of
the geometric and time based properties of the motion, and in particular how the various
links move with respect to one another and with time.
$$T = \begin{bmatrix} 0 & 1 & 0 & w_1 \\ 0 & 0 & 1 & w_2 \\ 1 & 0 & 0 & w_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2)$$

where w1, w2, w3 are the prismatic joint displacements in the x, y and z directions, respectively.
i    θi    di    ai    αi
4    θ4    0     a4    π/2
5    θ5    0     0     −π/2
6    θ6    d6    0     0
The matrix \( {}^0_6T \) determines the position and orientation of the end effector with
respect to the base coordinate frame. If the joint positions are measured by position
sensors, the pose of the end effector is determined from \( {}^0_6T \). For the 6R robot
this matrix is as follows:
$$ {}^0_6T = \begin{bmatrix} n_x & o_x & a_x & P_x \\ n_y & o_y & a_y & P_y \\ n_z & o_z & a_z & P_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (4)$$
where:

$$n_x = C_1(C_5C_6C_{234} - S_6S_{234}) - S_1S_5C_6, \quad n_y = S_1(C_5C_6C_{234} - S_6S_{234}) + C_1S_5C_6, \quad n_z = -C_5C_6S_{234} - S_6C_{234}$$
$$o_x = -C_1(C_5S_6C_{234} + C_6S_{234}) + S_1S_5S_6, \quad o_y = -S_6(C_5S_1C_{234} + C_1S_5) - C_6S_1S_{234}, \quad o_z = S_{234}C_5S_6 - C_6C_{234}$$
$$a_x = -C_1S_5C_{234} - S_1C_5$$
$$P_x = -C_1(d_6S_5C_{234} - a_4C_{234} - a_3C_{23} - a_2C_2) - d_6S_1C_5, \qquad P_z = d_6S_5S_{234} - a_4S_{234} - a_3S_{23} - a_2S_2 + d_1$$
$$\theta_5 = \tan^{-1}\!\left(\frac{\pm d_6\left[1-(a_yC_1-a_xS_1)^2\right]^{1/2}}{P_yC_1 - P_xS_1}\right) \qquad (6)$$
$$\theta_6 = \tan^{-1}\!\left(\frac{o_xS_1 - o_yC_1}{n_yC_1 - n_xS_1}\right) \ \text{for } \theta_5 > 0, \qquad \theta_6 = \theta_6 + \pi \ \text{for } \theta_5 < 0 \qquad (7)$$
$$\theta_{234} = \tan^{-1}\!\left(\frac{-a_z}{a_xC_1 + a_yS_1}\right) \ \text{if } \theta_5 > 0, \ \text{else } \theta_{234} = \theta_{234} + \pi \qquad (8)$$
$$\theta_2 = -\tan^{-1}\!\left(\frac{\pm\left[1-(w/q)^2\right]^{1/2}}{w/q}\right) + \tan^{-1}\!\left(\frac{u}{t}\right) \qquad (9)$$

$$\theta_3 = \tan^{-1}\!\left(\frac{u - a_2S_2}{t - a_2C_2}\right) - \theta_2, \qquad \theta_4 = \theta_{234} - \theta_2 - \theta_3 \qquad (10,11)$$
where:

$$t = P_xC_1 + P_yS_1 + d_6S_5C_{234} - a_4C_{234}, \qquad u = -P_z + d_1 - a_4S_{234} + d_6S_5S_{234}$$

$$w = \frac{t^2 + u^2 + a_2^2 - a_3^2}{2a_2}, \qquad q = \sqrt{t^2 + u^2}$$
The above equations are used to simulate the robot motion.
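As an aside, the θ2/θ3 portion of these equations is the classical planar two-link solution and can be checked numerically. A minimal Python sketch (the function name and the explicit elbow-branch argument are ours, not from the chapter):

```python
import math

def planar_ik(t, u, a2, a3, elbow=+1):
    """Solve eqs. (9)-(10) for theta2 and theta3 of the planar arm portion,
    given the intermediate coordinates t, u and link lengths a2, a3."""
    q = math.hypot(t, u)                            # q = sqrt(t^2 + u^2)
    w = (t*t + u*u + a2*a2 - a3*a3) / (2.0 * a2)    # eq. (9) intermediate
    c = w / q
    s = elbow * math.sqrt(max(0.0, 1.0 - c*c))      # the +/- branch of eq. (9)
    theta2 = math.atan2(u, t) - math.atan2(s, c)
    theta3 = math.atan2(u - a2*math.sin(theta2),
                        t - a2*math.cos(theta2)) - theta2   # eq. (10)
    return theta2, theta3
```

Feeding back a pose generated from known joint angles recovers those angles on one of the two elbow branches, which is a quick sanity check of the formulas.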
3. Dynamic Equations
Manipulator dynamics is concerned with the equations of motion, the way in which the
manipulator moves in response to torques applied by the actuators, or external forces. The
equations of motion for an n-axis manipulator are given by:
$$Q = M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + F(\dot{q}) + G(q) \qquad (12)$$

where q is the generalized joint coordinates vector, M is the symmetric joint-space inertia
matrix, and C describes the Coriolis and centripetal effects.
F describes viscous and Coulomb friction and is not generally considered part of the rigid-
body dynamics; G is the gravity loading and Q is the vector of generalized forces associated
with the generalized coordinates q.
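Given numerical values of the terms in (12), the generalized forces can be evaluated directly; a minimal numpy sketch with a hypothetical 2-DOF manipulator (all matrices and numbers below are illustrative only, not the 3P or 6R robot data):

```python
import numpy as np

def generalized_forces(M, C, F, G, qd, qdd):
    """Evaluate Q = M(q)*qdd + C(q,qd)*qd + F(qd) + G(q), as in eq. (12)."""
    return M @ qdd + C @ qd + F + G

# Hypothetical 2-DOF numbers, for illustration only.
M = np.array([[2.0, 0.3], [0.3, 1.0]])    # symmetric joint-space inertia
C = np.array([[0.0, -0.1], [0.1, 0.0]])   # Coriolis/centripetal matrix
qd  = np.array([0.5, -0.2])               # joint velocities
qdd = np.array([1.0, 0.0])                # joint accelerations
F = 0.4 * qd                              # viscous friction term
G = np.array([9.81 * 0.5, 0.0])           # gravity loading
Q = generalized_forces(M, C, F, G, qd, qdd)
```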
Simulation and Experimental Tests of Robot Using Feature-Based and
Position-Based Visual Servoing Approaches 475
where m1, m2, m3 and mp are the masses and F1, F2, F3 are the input torques of motors
No. 1, No. 2 and No. 3, respectively.
$$\tau_5 = -\tfrac{1}{2}\, g\,(C_{234}C_5S_1 + C_1S_5)\, d_6 m_6 + \tfrac{1}{4}\Big[\, 4C_5\big(S_{234}S_6C_6(I_{xx6} - I_{yy6})\theta_1 - d_6 m_6 (S_{34}a_2\theta_2 + S_4 a_3(\theta_2 + \theta_3))\big) + 2S_5\big((-a_1 - C_2a_2 - C_{23}a_3 - C_{234}a_4)\, d_6 m_6 \theta_1 - 2S_6C_6(I_{xx6} - I_{yy6})(\theta_2 + \theta_3 + \theta_4)\big) + \big(4(S_6^2 I_{xx6} + I_{yy5} + C_6^2 I_{yy6}) \qquad (14)$$
rapidity and easily changed for real situations. In this software, the picture is taken in
bitmap format, either through the two fixed cameras mounted on the ground or through the
camera installed on the end effector, in the capture-frame module, and it is returned as an
array of pixels. Image capturing is switched between the two cameras. After image
processing, the objects in the picture are saved separately, features are extracted, and
the target-object and end effector are recognized among them according to their features.
Then the position coordinates of the target-object and end effector are estimated. After
each motion of the joints a new picture of the end effector is taken, and this procedure is
repeated until the end effector reaches the target-object.
With images from the two fixed cameras or from the camera on the end effector, the
positions of the objects are estimated in image-plane coordinates. Usually, mapping and
calibration are used to transform from image-plane coordinates to the reference coordinate
system; such a mapping system needs extra work and is complicated compared to a neural
network. In this program, since the mapping is non-linear, we have used a neural network,
acting as a non-linear function estimator, to transform these coordinates to the global
reference frame. To compute the processing matrix, we used a set of points to train the
network. This collection of points was obtained by moving the end effector of the robot
through different points whose coordinates in the global reference system are known, while
their coordinates in the image plane are computed by the vision module of the simulator
software. The position of the end effector is recognized at any time by the two cameras,
which are fixed at a certain distance from each other. Camera No. 1 determines the target
coordinates in the 2-D image plane, and the depth of the picture is computed by the second
camera.
The neural network used is a five-layer backpropagation perceptron: the first layer has a
4-node input (the image-plane coordinates), layers 2, 3 and 4 have 20 nodes each, and the
output layer has 3 nodes (the coordinates x, y and z in the ground reference system).
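A forward pass through such a 4-20-20-20-3 network can be sketched as follows. The weights here are random placeholders standing in for the trained values, and the tanh activation is our assumption; the chapter does not state which activation was used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 4 inputs (image-plane coordinates from the two
# cameras), three hidden layers of 20 nodes, 3 outputs (x, y, z, ground frame).
sizes = [4, 20, 20, 20, 3]
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(p):
    """Map image-plane coordinates (u1, v1, u2, v2) to an (x, y, z) estimate."""
    a = np.asarray(p, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)               # hidden layers: tanh activation
    return weights[-1] @ a + biases[-1]      # linear output layer

xyz = forward([120.0, 85.0, 131.0, 90.0])
```

In the actual system, the weights would be fitted by backpropagation on the training points collected by moving the end effector through known positions.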
same place and, after merging, the task of labeling is continued. The time spent on this
algorithm is thereby decreased to two seconds (Korayem et al., 2005).

effector to it. The visual system carries out this process in 34 steps. Fig. 6 shows the
robot in the first step, and Fig. 7 shows the robot in the last step, after reaching the
target-object (34th step).
In this part of the simulator software, standard performance tests of the robot can also be
carried out without any limitation, and the results can be observed quite naturally, as well
as the control of the end effector to find and reach the target-object automatically through
the vision system.
Fig. 6. Robot picture after the first step.
Fig. 7. Robot picture after reaching the target.
The robot can read and perform its motion in two modes. The first is point-to-point: the
positions of the end effector in space are given in a file previously loaded by the user,
which is read and executed line by line by the program. The second mode is continuous path:
the user can specify paths such as a circle (parameters: center and radius), a rectangle
(parameters: side lengths and corner coordinates) or a line (parameters: coordinates of the
start and end points) for the robot motion.
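A continuous-path generator of this kind can be sketched in a few lines; `circle_path` is a hypothetical helper name, not the simulator's actual routine:

```python
import math

def circle_path(cx, cy, r, n=36):
    """Sample n waypoints along a circular path given its center and radius,
    as in the simulator's continuous-path mode."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n))
            for k in range(n)]

pts = circle_path(3.0, 2.0, 1.5, n=8)
```

Rectangles and lines would be generated analogously from their own parameters (corner coordinates and side lengths, or start and end points).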
In this algorithm, the pictures taken by the two cameras are saved as arrays of pixels and,
after thresholding, segmentation and labeling operations, the objects in the pictures are
extracted and each single frame is stored separately with its number. After this separation,
the target-object and end effector are recognized among the objects according to their
features and properties. The distance between the end effector and the target-object is
estimated; using the inverse kinematics equations of the 6R robot, each joint angle is
computed, and by rotating the joints the end effector approaches the target. In each step
the cameras take an image, and the process is repeated until the end effector reaches the
target and its distance to the target-object is as small as possible. The control procedure
of the robot to reach the target-object is shown briefly in Figs. 8-11.
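The loop just described (capture, recognize, estimate the distance, step the joints, repeat) can be sketched as follows; all the helper callables are hypothetical stand-ins for the simulator's modules:

```python
import math

def visual_servo_loop(capture, segment, ik_step, move_joints,
                      tol=1.0, max_steps=100):
    """Sketch of the control loop described above: capture images, extract
    objects, estimate the end-effector/target distance, and step the joints
    until the distance falls below `tol`. All helpers are hypothetical."""
    for step in range(max_steps):
        images = capture()                      # images from the two cameras
        effector, target = segment(images)      # recognize objects by features
        dist = math.dist(effector, target)
        if dist <= tol:
            return step, dist                   # target reached
        move_joints(ik_step(effector, target))  # inverse kinematics step
    return max_steps, dist
```

With stub helpers that move the end effector a fraction of the remaining distance each step, the loop converges in a handful of iterations, mirroring the 34-step run reported for the simulator.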
Fig. 8. 6R robot in home position (view camera 1).
Fig. 9. Robot picture after some steps to reach the target.
Fig. 10. Robot picture after the end effector reaches the target.
Fig. 11. Robot picture after the end effector reaches the target (view camera 2).
Fig. 12. Error investigation in variant rectangular path (by two cameras).
Then, using the ISO9283 and ANSI-RIA standards, these errors are analyzed and the
performance characteristics and accuracy of the robot are determined. The results of these
standard tests are used to compare different robots and their performance. In this research
we carry out some of these robot-specific tests using the camera and the visual system,
according to standards such as ISO9283 and ANSI-RIA.
Fig. 14. The error schematics in the x, y, z directions for the direct kinematics tests.
Fig. 15. The error schematics in the x, y, z directions for the inverse kinematics tests.
circular path and the wrist slides along the perimeter of the circle. In this way the
sliding, approach and normal vectors are determined and the inverse kinematics equations
can be solved. During the motion of the wrist along the path, 32 images of the end effector
were taken using the two cameras, collecting the end-effector coordinates in the image
plane. Using the neural network, the image-plane coordinates of these points are transformed
to the reference frame. The desired path and the actual path traversed by the robot are
shown in Fig. 17.
Fig. 17. The ideal and real paths traversed by the robot, X (mm) vs. Y (mm).
6.2.2 Robot performance test based on the American standard ANSI-RIA R15.05-2
The aim of this standard is to provide technical information that helps users select the
robot most suitable for their application. The standard defines important path-based
criteria (approximate path accuracy, absolute path accuracy, path repeatability, speed
characteristics, and cornering deviations) and then introduces methods to evaluate them.
Evaluating these criteria is one of the most convenient ways to assess the overall
path-related performance of industrial robots, and measuring them makes it possible to
compare the operation of similar robots. To make the tests more applicable, statistical
analyses according to the ANSI-RIA standard are performed on the outputs obtained in the
former sections (Korayem et al., 2005).
Fig. 19. The error investigated in the circular path according to the ANSI standard.
$$AP_P = \sqrt{(x - x_c)^2 + (y - y_c)^2 + (z - z_c)^2}, \qquad AP_x = (x - x_c), \quad AP_y = (y - y_c), \quad AP_z = (z - z_c) \qquad (15)$$
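Equation (15) is straightforward to evaluate; a minimal sketch (the function name and sample coordinates are ours, chosen only for illustration):

```python
import math

def pose_accuracy(attained, commanded):
    """Positioning accuracy per eq. (15): deviation of the attained position
    (x, y, z) from the commanded position (xc, yc, zc)."""
    dx, dy, dz = (a - c for a, c in zip(attained, commanded))
    return math.sqrt(dx * dx + dy * dy + dz * dz), (dx, dy, dz)

AP, (APx, APy, APz) = pose_accuracy((2.1, 3.2, 0.9), (2.0, 3.0, 1.0))
```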
           AP     RP     AT     RT
dir. kin   0.42   0.4    -      -
inv. kin   0.98   1.05   -      -
line       -      -      0.65   0.85
circle     -      -      1.82   2.04
rectangle  -      -      0.78   0.92
Table 2. Pose accuracy and repeatability according to the ISO9283 standard.
The mean reference path is used in the evaluation of the path repeatability FOM. This
process involves calculating the deviation Dij between each evaluated point and its
corresponding barycenter point (Korayem et al., 2005).
Cornering round off CR is defined as the minimum distance between the corner point and
any point on the attained path. For each of the three corner points, the value for CR is
calculated as follows:
$$CR = \min_{k}\ \sqrt{(X_e - X_{ak})^2 + (Y_e - Y_{ak})^2 + (Z_e - Z_{ak})^2} \qquad (18)$$
where Xe, Ye, Ze are the position coordinates of each reference corner point and Xak, Yak,
Zak are the coordinates of the points along the attained path.
Cornering overshoot CO is defined as the largest deviation outside of the reference path after
the robot has passed the corner. Its value is the maximum distance between the reference
path and the attained path:
$$CO = \max_{k}\ \sqrt{(X_{rk} - X_{ak})^2 + (Y_{rk} - Y_{ak})^2 + (Z_{rk} - Z_{ak})^2} \qquad (19)$$
where Xak, Yak and Zak are the position coordinates of the discrete data points on the
attained path, and Xrk, Yrk and Zrk are the coordinates of the sample data points along the
reference path.
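Equations (18) and (19) can be computed directly from sampled path points; a minimal sketch with hypothetical sample data (the function names are ours):

```python
import math

def corner_round_off(corner, attained):
    """CR, eq. (18): minimum distance between a reference corner point and
    any point on the attained path."""
    return min(math.dist(corner, p) for p in attained)

def corner_overshoot(reference, attained):
    """CO, eq. (19): maximum distance between corresponding sample points of
    the reference and attained paths."""
    return max(math.dist(r, p) for r, p in zip(reference, attained))
```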
To compute the performance criteria of the robot, the end effector is guided along a
rectangular path based on the standard platform (Korayem et al., 2005). The number of
evaluation points is m = 34, repeated 10 times (n = 10). The two cameras, observing the end
effector at a fixed distance, take pictures of the end effector and its environment at
specified intervals, and its coordinates are obtained from the image plane with the
position-based visual system. As mentioned before, we use neural networks to transform the
coordinates of the robot wrist to the reference frame, mapping the image-plane coordinates
into the reference system in order to obtain real distances.
The maximum repeatability is PR = 0.375 and the average repeatability is PR = 0.264; the
corner deviation error CR and the cornering overshoot CO for the simulated tests are listed
in Table 3.
Reference CR CO
P1 0.125 3.251
P2 0.098 3.320
P3 0.178 3.78
Table 3. Repeatability & cornering overshoot according to ANSI standard
Where corner coordinates are:
P1 = (2,3), P2 = (1,1), P3 = (−2,−3)
Fig. 22. The error histogram in x direction for inverse kinematics tests.
Fig. 23. The error histogram in y direction for inverse kinematics tests.
Fig. 24. Histogram with superimposed normal distribution for error in x direction.
To show the normality of the test errors, we superimposed the histograms of the errors with
the corresponding normal distributions, resulting in Figs. 24, 25.
To see whether or not the data are normally distributed, we plot the graphs in Figs. 26, 27.
A fairly large portion of the data lies close to the straight line, leading one to conclude
that the normal distribution is a reasonable approximation for these data.
Fig. 25. Histogram with superimposed normal distribution for error in y direction.
8. Conclusions
It has been shown how to use a vision system to control the industrial 3P and 6R robots. It
was observed that, before using this vision system to control a robot, we need general
information about the robot, the camera behaviour, and the effects of lighting conditions on
the captured images. The system provides better and more accurate control of the robot and
its surroundings. There is no need to know the initial location of the robot in order to
calculate the required movement toward the goal, because the captured images give the
distance from the object to the end effector; this is one of the advantages of this system.
The vision system can also be used for the path-related characteristics of the robot: in
this chapter, the path-related parameters are found for an industrial robot by applying the
vision system, and the calculation of the path accuracy is simplified by finding the
intersection of the attained path.
The simulator package for the 6R robot, its different sections and its capabilities have
also been described. Control and performance tests for the 6R and 3P robots have been
simulated using position-based and feature-based visual systems. In the position-based
visual system it is not necessary to know the initial positions of the target-object and
end effector, thanks to the target pose estimation capability of this method. The direct
and inverse kinematics equations of the 6R robot have
been simulated, and the three-dimensional information of the target and end effector has
been obtained using the two cameras with acceptable accuracy.
The errors have been analyzed using different standards, and MATLAB has been used to
compute performance parameters of the 6R robot such as accuracy, repeatability and
cornering overshoot. According to the ANSI standard the maximum repeatability is PR = 0.375
and the average repeatability is PR = 0.264, while according to the ISO9283 standard the
average repeatability is PR = 0.42; these values are fairly close to each other. MATLAB also
showed that the error data were fairly normally distributed, with a standard deviation of
0.353 and a skewness of -0.184.
9. References
American National Standard for Industrial Robots and Robot Systems: Path-Related and
        Dynamic Performance Characteristics: Evaluation. ANSI/RIA R15.05-2, 2002.
Gilbert, A., Giles, M., Flachs, G., Rogers, R. & Yee, H. (1983). A real-time video tracking
        system, IEEE Trans. Pattern Anal. Mach. Intell., 2(1), pp. 47-56.
Hashimoto, H., Kimoto, T. & Ebin, T. (1991). Manipulator control with image-based visual
        servoing, Proc. IEEE Conf. on Robotics and Automation, pp. 2267-2272.
Haushangi, N. (1990). Control of a robotic manipulator to grasp a moving target using
        vision, Proc. IEEE Conf. on Robotics and Automation, pp. 604-609.
ISO9283 (1998). Manipulating industrial robots: performance criteria and related test
        methods.
Kelly, R., Shirkey, P. & Spong, M. (2001). Fixed camera visual servo control for planar
        robots.
Korayem, M. H., Khoshhal, K. & Aliakbarpour, H. (2005). Vision-based robot simulation and
        experiment for performance tests of robot, International J. of AMT, Vol. 25,
        No. 11-12, pp. 1218-1231.
Korayem, M. H., Jaafari, N., Jamali, Y. & Kiomarsi, M. (2005). Design, manufacture and
        experimental analysis of a 3R robotic manipulator, Paper in TICME.
Korayem, M. H., Jamali, Y., Jaafari, N., Sohrabi, A., Kiomarsi, M., Asadi, M. & Reazaee, S.
        (2005). Design and manufacturing a robot wrist: performance analysis, Paper in
        TICME.
Korayem, M. H., Shiehbeiki, N. & Khanali, T. (2006). Design, manufacturing and experimental
        tests of a prismatic robot for assembly line, International J. of AMT, Vol. 29,
        No. 3-4, pp. 379-388.
Corke, P. I. (1996). Visual Control of Robots: High-Performance Visual Servoing, John Wiley.
Webber, T. & Hollis, R. (1988). A vision-based correlator to actively damp vibrations of a
        coarse-fine manipulator, Watson Research Center.
25
Sheet Incremental Forming: Advantages of Robotised Cells vs. CNC Machines
1. Introduction
In recent years the traditional sheet metal forming processes, suitable for high-volume
batches, have not adequately met new market requirements characterised by high flexibility,
reduced time-to-market, low cost for small-batch production, etc. Moreover, they are not
suitable for producing low-cost prototypes and pre-series components. Thus new sheet metal
forming techniques are very often required and pursued by manufacturing industries
and have been intensively studied by scientific research groups (Siegert et al., 1997;
and have been intensively undertaken by scientific research groups (Siegert et al., 1997;
Amino et al., 2000; Shima, 2001; Kochan, 2001; Shim et al., 2001; Filice et al., 2002; Iseki &
Naganawa, 2002; Kim et al., 2003; Yoon et al., 2003; Ceretti et al., 2002, 2004; McLoughlin et
al., 2003; Allwood et al., 2005; Lamminen, 2005; Meier et al., 2005). Among the new
innovative technologies, the sheet Incremental Forming (IF) can be successfully used for
small pre-series batches or prototypes. IF is a process where common and simple tools
mounted on CNC machines, instead of complex die sets, are used to deform locally a
workpiece. In recent years many studies have been done on IF and many are still in
progress with the aim of finding both the most affecting process parameters and the suitable
machines and working centres to run experiments and production (Park & Kim, 2002, 2003;
Jeswiet et al., 2005a, 2005b; Duflou et al., 2005a, 2005b; Hirt et al., 2005; He et al., 2005a, 2005b;
Ambrogio et al., 2005; Bambach et al., 2005).
Unlike the standard metal forming processes, fast production changes are possible thanks to
the very simple IF machine configuration. Even if the time required to make one product is
much longer than in traditional press forming, IF gains its advantages in tool design and
production during the prototyping phase. IF could also be successfully applied in flexible
completion work cells, for example after hydroforming operations, for slots or the
finishing of small parts. Furthermore, instead of using general-purpose CNC machines, modern
incremental sheet processes can be performed directly in robotised cells. This enhances the
advantages in flexibility and production-time reduction, since a robotised cell equipped
with the proper tools can produce the part and, on the same fixture, carry out the
completion operations such as flanging, trimming and so on.
To form the sheet into the desired shape an ad hoc tool, mounted on the machine spindle or
on a robot gripper, is moved according to the given tool path. Several IF strategies have
been developed which mainly differ for equipment and forming procedure. In particular,
Fig. 1. The set-up components: punch, blank holder and die; (a) fixed blank holder, (b)
blank holder moved by hydraulic actuators.
Sheet Incremental Forming: Advantages of Robotised Cells vs. CNC Machines 495
The die is mounted on the table of the CNC machine and the moving punch deforms the sheet
down to the die. The punch path must be optimized by means of an accurate study of the
final geometry, in order to obtain a good surface finish. When the die has a positive
geometry, the blank holder is moved by four hydraulic actuators to keep the sheet steady in
the working position (Fig. 1b); otherwise it is fixed (Fig. 1a).
The punch has a cylindrical body with a spherical head. The sphere dimension is an
important process variable. In fact, large-headed tools assure better material flow and
reduce production time, while small sphere dimensions are needed to achieve the dimensional
features of the part (the tool radius must be equal to or lower than the minimum radius of
curvature of the part's surface).
To generate the punch trajectory CAD, CAM and CAE techniques were used. Since the
tool movement strategy is very important for obtaining safe components, several tool
paths were investigated in previous studies (Ceretti et al., 2003; Giardini et al., 2004a,
2004b, 2004c, 2005a, 2005b) and the best one must be identified by taking into account the
positive or negative shape of the die, the shape of the part, the tool dimensions and the
sheet material.
A pneumatic gripper holding a punch with a spherical tip is attached to the robot flange.
Various punches with different radii are used, but in any case the small dimensions of the
ball tips make it possible to obtain the required high local pressures by applying forces
considerably smaller than in conventional stamping. The die and the blank are clamped
together by means of several clips, whose design and setting are very important for the
quality of the operation: in fact a rigid constraint at the frame boundaries may be desired
in some cases, as well as the possibility to control the material flow towards the die in
other situations (Ceretti et al., 2003). The die with the supporting frame and the robot are
mounted on the same (thick) metal sheet; an automatic tool-changing system was available
beside the robot, but it has not been used in the present experimentation.
processing: a fixed slicing of the part was used, with a step depth (or pitch) varied from
0.5 mm to 1.0 mm. Both 3-axis (for both the CNC machine and the robot) and 5-axis (for the
robot) paths have been generated, see Figs. 3 and 4.
Fig. 3. Negative forming with concave die: 3-axis path (a) and fixed step slicing (b).
Fig. 4. Negative forming with concave die: 5-axis path (a) and robot wrist (b).
When the die is concave, different punch movement strategies can be used. In particular,
one strategy starts the process from the part boundary and progressively deforms the sheet
through trajectories drawn at constant Z levels (direct negative forming); the other strategy
is characterised by a straight tool movement inside the blank till the maximum depth of the
die is reached and then a spiral movement towards the part boundaries starts (inverse
negative forming), see Fig. 5. This last approach can be used only with limited die depths
and assures a better material flow from the boundary, thus guaranteeing a lower thinning of
the part. Unfortunately, this strategy could not be applied with the robotised cell because
the robot was not able to develop the high lateral thrusts that were required. To overcome
this limit a progressive solution was tested: in this case the punch is moved as in inverse
negative forming, but it moves downwards to the bottom of the die following progressive
spiral paths with different geometrical dimensions, whose effects were investigated.
The velocity of the tool, together with the path variables (spiral width, pitch and so on),
has a great relevance for the quality of the part and must be chosen according to the other
process parameters, such as sheet material and thickness, depth of deformation, etc.
If the surface finish of the part is an important aspect of the process, the path conducted
as a pocketing with constant step depth needs to be modified. In fact, this step-down path
from the top to the bottom of the pocket, in which the tool follows a series of consecutive
contours with fixed step depth, is the simplest one but presents two disadvantages: the
sheet is marked at the transition point between consecutive layers, and the quality of flat
or nearly flat surfaces is poor when a high step depth or pitch (more than 0.5 mm) is used.
In this case, conducting the pocketing with a constant "scallop height" (Sc) can give better
results. This is a step-down path from the top to the bottom of the pocket, in which the
tool follows a series of consecutive contours with variable step depth (whose maximum must
be supplied) in order to keep the value of the scallop height constant (Fig. 5d). This kind
of path reduces the disadvantages of the first type and, in particular, the flat surfaces
show a better quality.
The influences of step depth, scallop height and type of tool path were studied during the
experimental tests by changing their values.
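For a ball-nose tool, the step depth that keeps the scallop height constant can be estimated from the standard chordal relation between step-over and scallop height. The sketch below uses that textbook CAM approximation (it is not the authors' own algorithm), with the local wall angle measured from the horizontal:

```python
import math

def step_depth(tool_radius, scallop, wall_angle_deg, max_step=1.0):
    """Approximate step depth giving a constant scallop height Sc on a wall
    inclined at wall_angle_deg from the horizontal (ball-nose tool).
    Uses the chordal relation s = 2*sqrt(Sc*(2R - Sc)) for the step-over;
    the result is capped at the maximum allowed step depth."""
    s = 2.0 * math.sqrt(scallop * (2.0 * tool_radius - scallop))  # step-over
    dz = s * math.sin(math.radians(wall_angle_deg))
    return min(dz, max_step)
```

On near-vertical walls the computed step saturates at the maximum, while on shallow slopes it shrinks, which is exactly the variable-step behaviour described above.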
Fig. 5. Direct (a) and inverse (b) negative forming with progressive downward movement (c)
at fixed step depth or fixed scallop height (d); the spiral width and pitch are indicated
in (d).
Fig. 6. Sketch of the negative incremental forming setting (a) and planar model of the
arising state of deformation (b).
The contact angle θ between tool and blank is:

$$\theta = \arcsin\!\left[\frac{\rho_b + \rho_d}{\sqrt{h_x^2 + h_z^2}}\right] - \arctan\!\left(\frac{h_z}{h_x}\right) \qquad (1)$$

It is noted that half of the initial thickness of the metal sheet, t0, must be added to both
curvature radii of tool and die, ρb and ρd respectively. The hypothesis of plane strain
leads to:

$$\varepsilon_x = -\varepsilon_z = \ln\!\left[\frac{A_2}{L + R_d - A_1 - A_3}\right] = \ln\!\left[\frac{A_2}{h_x - A_3}\right] \qquad (2)$$

where $A_2 = \rho_b\theta$ and $A_3 = h_x\cos\theta - h_z\sin\theta + \rho_d\theta$.
$$F_x = T(1 - \cos\theta), \qquad F_z = T\sin\theta \qquad (5)$$
Simple inspection of (1)-(5) shows that, as expected, the lateral force to be developed at
the tool increases with the contact angle θ (and therefore with the depth h) and decreases
with the distance between the local centres of curvature of tool and die.
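Equations (1) and (5) can be combined to estimate the force components at the tool; a minimal sketch (the numeric values in the test are illustrative, with the half-thickness already added to the radii as the text prescribes):

```python
import math

def contact_angle(rho_b, rho_d, hx, hz):
    """Tool-blank contact angle, eq. (1); rho_b and rho_d already include
    half of the initial sheet thickness t0."""
    return math.asin((rho_b + rho_d) / math.hypot(hx, hz)) - math.atan2(hz, hx)

def tool_forces(T, theta):
    """Lateral (Fx) and axial (Fz) force components at the tool, eq. (5)."""
    return T * (1.0 - math.cos(theta)), T * math.sin(theta)
```

For small contact angles, Fx grows like θ²/2 while Fz grows like θ, so the lateral component stays well below the axial one, consistent with the discussion above.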
The results of the current model have been compared with the corresponding FEM simulations
and experimental trials, whenever available: good agreement has been found in many
situations, with results generally over-estimated and therefore conservative for the sake
of process safety. On the other hand, the model is not able to closely emulate the actual
thinning of the sheet (and therefore forecast the possible occurrence of tears or wrinkles),
or paths with small curvature radii, where clearly three-axial states of deformation
establish themselves. If these cases must be investigated, an FEM simulation is unavoidable,
while the analytical model still preserves its validity for a first assessment of the forces
occurring at the tool tip, which is an important input for the kineto-static feasibility
check that will be explained in detail in the next section.
Fig. 8. FEM model objects: sheet, punch, die and blank-holder (a); the thickness
representation (b).
Fig. 9. Static performances of the robot Tricept HP1 (Comau, 1995 and 2001).
Manipulability ellipsoids
Let us consider the set of joint velocities θ̇ of constant unit norm:

$$\dot{\boldsymbol{\theta}}^T \dot{\boldsymbol{\theta}} = 1 \qquad (7)$$

It is desired to describe the Cartesian-space velocities of the end-effector that can be
generated by such joint velocities at a given configuration in space. The image of the
six-dimensional sphere (7) in the operational space of tool linear and angular velocities
is called the (velocity) manipulability ellipsoid and is given by the quadratic form:

$$\begin{bmatrix} \mathbf{v}^T & \boldsymbol{\omega}^T \end{bmatrix} \, \Gamma_v \, \begin{bmatrix} \mathbf{v} \\ \boldsymbol{\omega} \end{bmatrix} = 1 \qquad (8)$$

where the 6×6 square matrix Γv is defined by:

$$\Gamma_v(\boldsymbol{\theta}) = J^{-T}(\boldsymbol{\theta}) \, J^{-1}(\boldsymbol{\theta}) \qquad (9)$$
Cartesian-space velocities are high along the direction of the major axis of the ellipsoid
(8), while only low velocities are yielded along its minor axis: an isotropic (kinematic)
behaviour of the manipulator is therefore manifested when the ellipsoid becomes a sphere.
To visualise the matter, in the case of a planar robot with two equal motors, the form
(8)-(9) would represent an ellipse whose major and minor axes (the eigenvectors of Γv)
indicate the directions along which the tool can be moved with the maximum and minimum
velocities respectively (given by the inverse square roots of the corresponding eigenvalues
of Γv).
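The axes of the velocity ellipsoid can be computed from the singular value decomposition of the Jacobian, since the semi-axis lengths of the quadric (8)-(9) are exactly the singular values of J. A sketch for a hypothetical planar 2R arm (link lengths and joint angles are illustrative, not the Tricept's):

```python
import numpy as np

def velocity_ellipsoid_axes(J):
    """Semi-axis lengths and directions of the velocity manipulability
    ellipsoid v^T Gv v = 1 with Gv = J^-T J^-1 (eqs. 8-9).  The semi-axis
    lengths are the singular values of J, sorted from largest to smallest."""
    U, s, _ = np.linalg.svd(J)
    return s, U   # lengths and the corresponding Cartesian directions

# Hypothetical planar 2R arm with unit links at theta1=0.3, theta2=0.9 rad.
l1 = l2 = 1.0
t1, t2 = 0.3, 0.9
J = np.array([
    [-l1*np.sin(t1) - l2*np.sin(t1 + t2), -l2*np.sin(t1 + t2)],
    [ l1*np.cos(t1) + l2*np.cos(t1 + t2),  l2*np.cos(t1 + t2)],
])
lengths, directions = velocity_ellipsoid_axes(J)
```

The product of the semi-axis lengths equals |det J|, so the ellipse degenerates (one axis collapses) exactly at the singular configurations of the arm.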
Due to the well-known duality between kinematics and statics, for every pose of the
manipulator inside its workspace (position and attitude of the tool), the 6 joint torques τ
balancing the external wrench of forces F and moments M applied at the tool tip are given by:
$$\boldsymbol{\tau} = J^T \begin{bmatrix} \mathbf{F} \\ \mathbf{M} \end{bmatrix} \qquad (10)$$
Following the same approach previously outlined for the characterisation of robot
velocities, the ellipsoid representing all the Cartesian forces and moments that correspond
to unit motor torques, τᵀτ = 1, can be derived. In the case of the present application the
external moments at the tool tip are null, therefore only the three-dimensional ellipsoid
of end-effector forces is meaningful:

$$\mathbf{F}^T \, \Gamma_f \, \mathbf{F} = 1 \qquad (11)$$
Fig. 10. Force manipulability ellipsoids of the robot Tricept HP1 in two Cartesian planes:
upper (a) and side (b) view (end-effector parallel to the global frame).
Static performances
For every position of the manipulator inside its workspace (position and attitude of the tool)
the 6 joint torques τ balancing the external wrench of forces F and moments M applied at
the tool tip are given by (10), where the 6x6 manipulator Jacobian matrix varies strongly with
tool position in our case. Once a task has been assigned and the arising forces have been
assessed, the feasibility checks can easily be performed: the joint torques computed by
simulation must be compared with the actual rated torques of the motors driving the
shoulder and wrist axes, about 13 Nm and 3 Nm respectively; of course, the gear ratios of
the Harmonic Drives and the efficiency of the whole transmission have to be accounted for.
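Such a feasibility check can be sketched in a few lines. All numbers below are illustrative placeholders rather than Tricept HP1 data, except the rated motor torques, which follow the approximate 13 Nm (shoulder) and 3 Nm (wrist) values quoted in the text:

```python
import numpy as np

RATED_MOTOR_TORQUE = np.array([13.0, 13.0, 13.0, 3.0, 3.0, 3.0])  # Nm, from the text
GEAR_RATIO = np.full(6, 100.0)   # assumed Harmonic Drive reduction
EFFICIENCY = 0.7                 # assumed overall transmission efficiency

def motor_torques(J, wrench):
    """Joint torques balancing a tool-tip wrench (eq. 10), reflected to the motors."""
    tau_joint = J.T @ wrench                 # tau = J^T [F; M]
    return tau_joint / (GEAR_RATIO * EFFICIENCY)

def feasible(J, wrench):
    """True if every motor stays within its rated torque for this pose and load."""
    return bool(np.all(np.abs(motor_torques(J, wrench)) <= RATED_MOTOR_TORQUE))

rng = np.random.default_rng(0)
J = rng.normal(size=(6, 6))              # stand-in for the pose-dependent Jacobian
wrench = np.array([0.0, 0.0, 15e3, 0.0, 0.0, 0.0])  # 15 kN axial load, no tip moments
print(feasible(J, wrench))
```

In the simulation package described in the text, the Jacobian would of course be evaluated at each pose along the tool path rather than drawn at random.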
Figure 11 shows a sample analysis performed by means of the developed simulation
package: the tool tip spans a horizontal plane at constant height (z=1600 mm) with the tool
perpendicular to the plane itself and charged by an axial load of 15 kN. The figure plots the
value of the maximum torque requested of any of the shoulder motors: it is noted that the
limit value of 13 Nm is easily exceeded when the tool approaches the workspace boundaries;
the wrist motors, on the other hand, are almost idle since the external force passes through
the centre of the spherical wrist.
504 Industrial Robotics - Programming, Simulation and Applications
Fig. 11. Maximum torque delivered by shoulder motors (z=1600 mm, Fz= 15 kN).
In the simulation of Fig. 12, instead, the configuration of the robot is the same but an
external force of 500 N is directed along the x axis: in this case the shoulder motors are
lightly loaded, but the wrist motors sometimes cannot deliver the required torque.
Fig. 12. Maximum torque delivered by shoulder (a) and wrist (b) motors (z=1600 mm, Fz= 0.5
kN).
This set of tests was demanding for the robot due to the particular shape of the die, whose
lateral walls were almost vertical; this geometry produces high reaction forces (due to the
large contact surface between tool and blank) whose lines of action are almost
perpendicular to the (preferential) axial direction, therefore often causing the motors to
stall.
These preliminary experiments showed that, notwithstanding the relatively low tool
velocities during the process, the velocity profile of the robot influences the quality of the
product and, even more, the achievable performance. During accelerations the inertia forces
add to the elasto-plastic reactions of the blank under deformation, while the forming task is
easier for the robot during deceleration ramps. Moreover, it turned out that the smaller the
corner radii, the higher the torques required of the motors, owing to the impending state of
triaxial deformation: better results and lower torques were registered using tools with
smaller sphere diameters.
It must be emphasized that, when the CNC machine is used to perform the experimental
tests, no limits can be identified for this simple geometry. In fact, the machine stiffness and
the motor powers allow the deformation of the sheet since the working loads are low. In
addition, increasing the feed rate gives the possibility of reducing the working time without
affecting the part quality.
Fig. 14. Metallic and wooden dies with blank holder and clamping fixtures.
The first attempts at forming a steel blank were performed using 3-axis paths on a
0.6 mm thick AISI 304 stainless steel sheet; in this way it was possible to draw a comparison
between the results obtained with the robotised cell and with the CNC machine. In the case
of the robotised cell a maximum depth of about 14 mm was reached in “direct” forming, while
the motors stalled at a depth of only 10 mm in inverse forming (see Fig. 15). In the first
case a set of square paths at increasing depths was progressively executed, with a pitch of
0.5 mm, without working the bottom of the pocket; in the second case, after reaching the
desired depth, a spiral path was traced with a step of 1 mm. The quality of the surface
finish is quite different in the two cases, since the pocket bottom does not come into
contact with the tool in direct forming, while in the second case it is deformed by the tool. In
this second case the quality is better, even if the point corresponding to the straight path of
the tool before the start of the spiral which deforms the bottom is still visible on the part
bottom (Fig. 15). A very tight fixing of the blank holder is required in order to avoid the
formation of wrinkles and the rotation of the sheet.
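The two strategies just described, stacked square contours for direct forming and a flat spiral to work the pocket bottom, can be sketched as simple path generators. The dimensions, pitch and discretisation below are illustrative values only:

```python
import math

def square_pitch_paths(side, depth, pitch=0.5):
    """Square contours at increasing depth (the 'direct' strategy): one loop
    per pitch increment, leaving the pocket bottom untouched."""
    paths, z, h = [], 0.0, side / 2.0
    while z < depth:
        z = min(z + pitch, depth)
        paths.append([(-h, -h, -z), (h, -h, -z), (h, h, -z),
                      (-h, h, -z), (-h, -h, -z)])   # closed square loop
    return paths

def spiral_path(radius, step=1.0, pts_per_turn=36):
    """Flat Archimedean spiral with radial step `step` per turn, used to work
    the pocket bottom after the target depth has been reached."""
    n = int(radius / step * pts_per_turn)
    pts = []
    for i in range(n + 1):
        a = 2.0 * math.pi * i / pts_per_turn
        r = step * a / (2.0 * math.pi)              # radius grows linearly per turn
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts
```

A real part program would of course start from the CAM-generated geometry of the die rather than from an idealised square pocket.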
To reduce the working loads, it was necessary to lower the tool pitch from 1 mm to
0.25 mm. The consequence of this change was an increase in the total cycle time. A reduction of
total cycle time was obtained by using a different language primitive for the path
interpolation. Instead of requiring the robot to pass exactly through the programmed points,
and therefore to stop when cornering, a spline interpolation allowed the tool to pass “close”
to the programmed points, without stopping every time. The quality of the part is still good,
while cycle times have been strongly reduced.
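The effect of such a fly-by primitive can be illustrated geometrically: each programmed corner is replaced by two blend points, so the interpolator never has to come to a stop. This is only a sketch of the idea under a pure-geometry assumption; real controller primitives (spline or zone instructions) blend velocity as well as position:

```python
def round_corners(points, zone):
    """Replace each interior waypoint with two 'fly-by' points a distance
    `zone` before and after it; the exact corner is never visited, so the
    tool does not need to stop there. Points are tuples of coordinates."""
    def lerp(a, b, t):
        return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        t_in = min(zone / dist(prev, cur), 0.5)
        t_out = min(zone / dist(cur, nxt), 0.5)
        out.append(lerp(cur, prev, t_in))   # enter the blend zone
        out.append(lerp(cur, nxt, t_out))   # leave it; corner itself is skipped
    out.append(points[-1])
    return out
```

The cycle-time saving comes from the motion planner keeping a non-zero speed through each blend instead of decelerating to zero at every programmed point.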
In a second set of tests the capabilities of the 6-axis robotised cell were used to orient
the tool properly with respect to the local sheet normal and to the feed direction. First
of all, it must be made clear that in robot trajectory planning 3 degrees of freedom are
constrained by the path to be followed by the tool tip, while 2 more degrees of freedom can
be used to orient the punch along a “suitable” direction in space; therefore, with a 6-axis
machine available, infinitely many trajectories can still be specified.
Fig. 15. Deformed steel parts for direct (left) and inverse (right) forming obtained with the
robotized cell.
It must be pointed out that the kinematics of the robot is rather complex; therefore the
“optimal” alignment of the tool does not necessarily mean that the tool itself is aligned
along the direction of robot motion. In our trials, the tilting of the tool with respect to the
feed direction was set so as to “minimise” the moments acting at the wrist centre (in this
case the tool is loaded axially). Moreover, due to the mechanical structure of the wrist and
the limited winding of its joints, it was necessary to invert the sense of rotation of the
motion around the die at every pass. This was also visible on the surface of the final part.
In future work, the redundancy of the robot with respect to the task could be used to:
• automatically take into consideration the constraints arising from possible
mutual collisions between blank/frame and tool/wrist;
• avoid the time-consuming re-winding of the wrist axes, therefore speeding up
the process;
• align the whole robot structure along the main direction of the manipulability
ellipsoids.
It must be said that the comparison with the 3-axis forming previously performed did not
show major differences, apart from a slightly higher quality and precision of the finished
part, mainly due to the smaller bending of the tool.
Considering the experiments conducted with the CNC machine, only the limit of the
maximum reachable slot depth was found since it affects the sheet thinning (when thinning
is too high the part breaks). Several tool paths of the “direct” forming type were tested by
changing the spiral width and the spiral step according to Giardini et al. (2005a). The
experiments conducted showed that roughness of the produced pieces was influenced only
by spiral step and was not influenced by the tool path type. The thickness distribution,
analysed along a part transversal section, is affected by the tool path and by the spiral step.
In particular, summarising the results obtained in terms of part roughness, sheet thickness
and maximum reachable punch depth, it is possible to define the optimal Full Die IF process
parameters: inverse negative forming with a spiral step of 0.5 mm, a tool pitch of 1 mm and
a feed rate of 400 mm/min.
The sheet thickness shows a minimum at the bottom radius of the pocket, see Fig. 16. In
one of the studied cases, the spiral step was so large that the final geometry of the part had
a very low quality due to large wrinkles on the pocket bottom.
Fig. 17. Simulative and experimental thickness distributions along the transversal cross
section of the realised part.
Finally, some automotive components were formed. Starting from CAD models of
the parts, the resin dies were produced and then the part programs for the incremental
forming were developed. Figure 18 shows a component used in chair-back
assembly; two different kinds of steel blanks were formed for this component:
AISI 304 and DC04. The results are satisfactory in both cases, but the higher quality of the
AISI 304 part shows that this material can be worked better with the incremental
forming technique.
During the working of the central slot, a certain compliance of the wrist became apparent,
even if the overall quality of the part was good. Moreover, a little misalignment of the
planes of the blank and the die caused the wrinkles that are visible outside the working
area: a better realisation and tuning of the blank holder would certainly mitigate this
problem.
Fig. 18. Formed chair backs: AISI 304 (left) and DC04 (right).
The research project also analysed the feasibility of some finishing operations, to
show the potential versatility of an automated incremental forming cell: to this aim, the
hatch of a multipurpose vehicle was flanged by pressing the outer edge, initially bent at
90°. The part was first clamped between a punch and the die, then a specially designed
tool, see Fig. 19a, rolled along the whole outside border. The use of this rolling tool to
deform the outer edge locally allowed the part to be flanged without any problem,
avoiding both the expensive hydraulic presses generally used in this case and the time-
consuming, costly manual operations that are otherwise necessary when the attention is
focused on pre-series production. A similar application was studied using a roller mounted
on a 3-axis CNC machine. The results are reported in Fig. 19b.
Fig. 19. Hemming of sheets in a robotised cell (a) and on a CNC machine (b).
5. Conclusions
The incremental forming of metal sheet parts can be an interesting alternative to manual
forging of prototypes and pre-series blanks or to the manufacturing of shells for resin dies
used for the production of small batches. Such characteristics of small-volume production
would call for an increase in the level of flexibility and automation, possibly leading to the
use of CNC machines or robotised cells able to produce or complete the parts. In particular,
the use of robotised cells, with automatic tool change, can dramatically reduce the process
time since on the same fixture it is possible to deform the part, cut the part, bend or flange
the borders, load/unload the part, etc.
The present contribution has described the crossed experiments performed at the
Polytechnic University of Marche in Ancona and at the Universities of Brescia and Bergamo
to assess the feasibility of the automated processing by using both a traditional CNC
machine and an industrial robot. It is noted that the research and industrial processes of
incremental forming realised so far have always used 3-axis CNC milling machines, apart
from the hammering process patented by the Fraunhofer Institute for Manufacturing
Engineering and Automation of Stuttgart (Schafer & Schraft, 2005).
Unfortunately, conventional serial robots do not have the required stiffness and are not
able to apply the forces necessary to deform the blank incrementally, but the rather new
family of parallel robots has characteristics similar to CNC machining centres, while still
keeping the versatility of a robot. The complex kinematics of the machine required the
development of a special-purpose simulation environment to design the experiments
beforehand and assess their feasibility. The necessary force at the tool tip was evaluated
both analytically, with a simplified approach based on a plane strain state, and numerically,
by means of a commercial FEM code: different kinds of tool trajectories were generated
by CAM programs, based on actual part geometries. A final assessment of the whole cell
layout and operation was performed by means of the IGRIP® package, exploiting also the
possibility of coordination with other external axes (e.g. a revolving table during
flanging, a tool changing system, etc.) and its capability to generate the part programs for
the different commercial controllers.
Several experimental tests have been performed in order to validate the complex
methodology for system design and prototyping, and to study the several parameters that
play a significant role in the process, for instance: the different types of material (e.g.
AISI 304 and DC04 steel, copper), the number of axes of the task (3 or 5), the kind of
interpolation between the points, the path imposed on the tool (e.g. depth first or breadth
first), the size of the punch end, etc. Many tests have been performed on a die specifically
developed in-house for the execution of the system trials; then a few tests have been
performed on commercial components, on dies provided by a car manufacturer's supplier,
with the execution of simple flanging operations too.
At the end of the research a development environment has been set up, able to interface the
different software tools in order to support the process designer in making the correct
choices. It must be said that, even if the simulation environment proved to be powerful and
reliable, the whole design process of an experiment was made very complicated by the
complexity of the equipment used: therefore the use of more powerful robots, with larger
workspaces, would be desirable. It is the Authors' opinion that parallel robots can be a
viable alternative to CNC machines for the execution of incremental forming processes,
especially if the high versatility of the machine can be exploited for further completing
operations. More tests would be needed to completely assess the benefits of robotics,
possibly with the more powerful machines that are already available on the market. As for
the system design and prototyping tool that has been developed, it proved to be effective
and reliable, even if it turned out to be quite complex; more integration would be needed
between the single software modules.
6. Acknowledgments
This work has been developed within the research programme “Ultraflex” (see Lamier,
2005), partially funded by the Italian Ministry of Industry and by Lamier SpA (La Loggia,
Italy).
7. References
Allwood, J.M.; Houghton, N.E. & Jackson, K.P. (2005). The Design of an Incremental
Sheet Forming Machine. Proc. Shemet 2005, pp. 471-478, Erlangen, Germany,
April 5-8
Ambrogio, G.; Filice, L.; Gagliardi, F. & Micari, F. (2005) Sheet Thinning Prediction in Single
Point Incremental Forming, Proc. Shemet 2005, pp. 479-486, Erlangen, Germany,
April 5-8.
Amino, H.; Makita, K. & Maki, T. (2000). Sheet Fluid Forming and Sheet Dieless NC
Forming, Proc. New Developments in Sheet Metal Forming 2000, pp. 39-66, Stuttgart,
Germany, May 23-24.
Bambach, M.; Azaouzi, M.; Campagne, L.; Hirt, G. & Batoz, J.L. (2005). Initial Experimental
and Numerical Investigations into a Class of New Strategies for Single Point
Incremental Sheet Forming (SPIF), Proc. Esaform 2005, pp. 671-674, Cluj-Napoca,
Romania, April 27-29.
Callegari, M.; Gabrielli, A.; Palpacelli, M.-C. & Principi, M. (2006). Robotised Cell for the
Incremental Forming of Metal Sheets, Proc. ESDA 2006, 8th Biennial ASME
Conference Engineering Systems Design and Analysis, Turin, July 4-7.
Callegari, M.; Palpacelli, M. & Principi, M. (2005). Analisi della manipolabilità del robot
industriale Tricept, Atti del XVII Congresso AIMeTA di Meccanica Teorica e Applicata,
Firenze, 11-15 Settembre 2005.
Ceretti, E.; Giardini, C.; Attanasio, A. & Maccarini, G. (2002). Some Experimental
Evidences in Sheet Incremental Forming on CNC Machines. Proc. Numisheet 2002,
pp. 429-434, Jeju Island, Korea, October 21-25.
Ceretti, E.; Giardini, C. & Attanasio, A. (2003). Sheet Incremental Forming on CNC Machines,
Proc. SheMet 2003, pp. 49-56, University of Ulster, Newtownabbey, UK, April 14-16.
Ceretti, E.; Giardini, C.; Attanasio A. (2004). Experimental and Simulative Results in Sheet
Incremental Forming on CNC Machines, Journal of Materials Processing Technology,
vol. 152, pp. 176-184.
Comau (1995). Tricept HP1. Maintenance Manual. rel 3.X/4.X
Comau (2001). Tricept HP1. User’s Manual. rel 3.X/4.X
Duflou, J.R.; Lauwers, B.; Verbert, J.; Tunckol, Y. & De Baerdemaeker, H. (2005b).
Achievable Accuracy in Single Point Incremental Forming: Case Studies, Proc.
Esaform 2005, pp. 675-678, Cluj-Napoca, Romania, April 27-29.
Duflou, J.R.; Szekeres, A. & Vanherck, P. (2005a). Force Measurements for Single Point
Incremental Forming: An Experimental Study, Proc. Shemet 2005, pp. 441-448,
Erlangen, Germany, April 5-8.
Filice, L.; Fratini, L. & Micari, F. (2002) Analysis of Material Formability in Incremental
Forming, Annals of CIRP, Vol. 51, No. 1, pp. 199-202.
Giardini, C.; Ceretti, E.; Attanasio, A. & Pasquali, M. (2004a). Analysis of the Influence of
Working Parameters on Final Part Quality in Sheet Incremental Forming, Proc. 3rd
International Conference and Exibition on Design and Production of Dies and Molds, pp.
191-198, Bursa, Turkey, June 17-19.
Giardini, C.; Ceretti, E.; Attanasio, A. & Pasquali, M. (2004b). Feasibility Limits in Sheet
Incremental Forming: Experimental and Simulative Analysis, Proc. Esaform 2004,
pp. 515-518, Trondheim, Norway, April 28-30.
Giardini, C.; Ceretti, E. & Contri, C., (2004c). Analysis of Material Behavior in Sheet
Incremental Forming Operations, Proc. 8th NUMIFORM 2004, pp. 1016-1021, vol.
712, Columbus-Ohio, USA, June 13-17.
Giardini, C.; Ceretti, E. & Attanasio, A. (2005a). Further Experimental Investigations and
FEM Model Development in Sheet Incremental Forming, Proc. Shemet 2005, pp.
501-508, Erlangen, Germany, April 5-8.
Giardini, C.; Ceretti, E. & Attanasio, A. (2005b). Optimization of Sheet Incremental Forming
Process by Means of FE Simulations, Proc. Esaform 2005, pp. 691-694, Cluj-Napoca,
Romania, April 27-29.
He, S.; Van Bael, A.; Van Houtte, P.; Szekeres, A.; Duflou, J.R.; Henrard, C. & Habraken,
A.M. (2005a). Finite Element Modeling of Incremental Forming of Aluminum
Sheets, Proc. Shemet 2005, pp. 525-532, Erlangen, Germany, April 5-8.
He, S.; Van Bael, A.; Van Houtte, P.; Tunckol, Y. & Duflou, J.R. (2005b). Effect of FEM
Choices in the Modelling of Incremental Forming of Aluminium Sheets, Proc.
Esaform 2005, pp. 675-678, Cluj-Napoca, Romania, April 27-29.
Hirt, G.; Bambach, M. & Junk, S. (2003). Modelling of the Incremental CNC Sheet Metal
Forming Process, Proc. SheMet 2003, pp. 495-502, University of Ulster,
Newtownabbey, UK, April 14-16.
Iseki, H. (2001). An Approximate Deformation Analysis and FEM Analysis for the
Incremental Bulging of Sheet Metal Using a Spherical Roller, Journal of Material
Processing Technology, Vol. 111, pp. 150-154.
Iseki, H. & Naganawa, T. (2002). Vertical Wall Surface Forming of Rectangular Shell Using
Multistage Incremental Forming with Spherical and Cylindrical Rollers, Journal of
Materials Processing Technology, Vol. 130-131, pp. 675-679.
Jeswiet, J.; Duflou, J.R. & Szekeres, A. (2005a). Forces in Single Point and Two Point
Incremental Forming, Proc. Shemet 2005, pp. 449-456, Erlangen, Germany, April 5-8.
Jeswiet, J.; Young, D. & Ham, M. (2005b). Non-Traditional Forming Limit Diagrams for
Incremental Forming, Proc. Shemet 2005, pp. 409-416, Erlangen, Germany, April
5-8.
Kim, Y. H. & Park, J. J. (2003). Effects of Process Parameters on Formability in Incremental
Sheet Metal Forming Technique, Journal of Material Processing Technology, Vol. 140,
p. 447-453.
Kochan, A. (2001). Dieless forming. Assembly Automation. Vol. 21, No. 4, pp. 321-322.
Lamier. (2005). New Advances in Process Flexibility: the Perspective of an OEM Supplier,
Proc. 9th International Conference "FLORENCE ATA 2005 - Vehicle architectures:
evolution towards improved safety, low weight, ergonomics and flexibility", Florence,
May 11-13.
Lamminen, L. (2005). Incremental Sheet Forming with an Industrial Robot – Forming Limits
and Their Effect on Component Design, Proc. Shemet 2005, pp. 457-464, Erlangen,
Germany, April 5-8.
McLoughlin, K.; Cognot, A. & Quigley, E. (2003). Dieless Manufacturing of Sheet metal
Components with non Rigid Support, Proc. SheMet 2003, pp. 123-130, University of
Ulster, Newtownabbey, UK, April 14-16.
Meier, H.; Dewald, O. & Zhang, J. (2005). A New Robot-Based Sheet Metal Forming Process,
Proc. Shemet 2005, pp. 465-470, Erlangen, Germany, April 5-8.
Merlet, JP. (2005). Parallel Robots, 2nd Ed., Springer, Dordrecht.
Park, J.-J. & Kim, Y.-H. (2002). Effect of Process Parameters on Formability in Incremental
Forming of Sheet Metal. Journal of Materials Processing Technology. Vol. 130-131, pp.
42-46.
Park, J.-J. & Kim, Y.-H. (2003). Fundamental Studies on the Incremental Sheet Metal
Forming Technique. Journal of Materials Processing Technology. Vol. 140, pp. 447-
453.
Schafer, T. & Schraft, R.D. (2005). Incremental Sheet Metal Forming by Industrial Robots.
Rapid Prototyping Journal. Vol. 11, No. 5, pp.278–286.
Shim, M., & Park, J. (2001). The Formability of Aluminium Sheet in Incremental Forming,
Journal of Materials Processing Technology, Vol. 113, pp. 654-658.
Shima, S. (2001). Incremental Forming: State of the Art, Proc. IPMM 2001: Intelligent
Processing and Manufacturing of Materials, Vancouver, British Columbia, Canada,
July 29 - August 3.
Siciliano, B. (1999). The Tricept Robot: Inverse Kinematics, Manipulability Analysis and
Closed-Loop Direct Kinematics Algorithm. Robotica. Vol. 17, pp. 437-445.
Siegert, K.; Rennet, A. & Fann, K.J. (1997). Prediction of the Final Part Properties in Sheet
Metal Forming by CNC-Controlled Stretch Forming, Journal of Materials Processing
Technology, vol. 71, pp. 141-146.
Yoon, S. J. & Yang, D.Y. (2003). Development of a Highly Flexible Incremental Roll Forming
Process for the Manufacture of a Doubly Curved Sheet Metal, Annals of CIRP, Vol.
52, No. 1, pp. 201-204.
26
Machining with Flexible Manipulators: Critical Issues and Solutions
1. Introduction
The automotive industry represents the fastest-growing market segment of the aluminium
industry, due to the increasing usage of aluminium in cars. The drive behind this is not only
to reduce the vehicle weight in order to achieve lower fuel consumption and improved
vehicle performance, but also the desire for more sustainable transport and the support
from new legislation. Cars produced in 1998, for example, contained on average about 85 Kg
of aluminium. By 2005, the automotive industry will be using more than 125 Kg of
aluminium per vehicle. It is estimated that aluminium for automotive industry alone will be
a 50B$/year market.
Most automotive aluminium parts start from a casting in a foundry plant. The
downstream processes usually include cleaning and pre-machining of the gating system
and risers, machining of high-tolerance surfaces, painting and assembly. Today, most of
the cleaning operations are done manually in an extremely noisy, dusty and unhealthy
environment. Therefore, automation of these operations is highly desirable. However, due
to the variations and highly irregular shapes of automotive casting parts, solutions based
on CNC machining centres usually represent a high-cost, difficult-to-change capital
investment.
To this end, robotics-based flexible automation is considered an ideal solution for its
programmability, adaptivity, flexibility and relatively low cost, especially given that
industrial robots are already applied to tend foundry machines and transport parts in the
process. Nevertheless, the foundry industry has not seen many success stories for such
applications and installations. Currently, more than 80% of the applications of industrial
robots are still limited to the fields of material handling and welding (Figure 1).
The major hurdle preventing the adoption of robots for material removal processes is the
fact that the stiffness of today’s industrial robot is much lower than that of a standard CNC
machine. The stiffness for a typical articulated robot is usually less than 1 N/µm, while a
standard CNC machine center very often has stiffness greater than 50 N/µm.
Most of the existing literature on machining processes, such as process force modelling (Kim
et al., 2003; Stein & Huh, 2002), accuracy improvement (Yang, 1996) and vibration
suppression (Budak & Altintas, 1998), is based on the CNC machine. Research in the field
of robotic machining is still focused on accurate off-line programming and calibration (Chen
& Hu, 1999; Sallinen & Heikkila, 2000; Wang et al., 2003a, 2003b). Akbari and Higuchi
(Akbari & Higuchi, 2000) describe a tool angle adjustment method in a grinding application
with a small robot; in that case the process force is very small. Matsuoka et al. (Matsuoka et al.,
1999) study the characteristics of an articulated robot in a milling process, avoiding large
process forces by using an end mill with a small diameter and a high spindle speed. Without
the capability of real-time force control, the methods to eliminate the force effect on the robotic
machining process have not been fully addressed in the research community or in industry.
Fig. 1. 2003 robot applications in North America. A total of 12,367 robots valued at $876.5
million were ordered. When sales to companies outside North America are added in, the
total for North American robotics suppliers is 12,881 robots valued at $913 million. NOTE:
these numbers include results from North America and outside North America. Source:
Robotic Industries Association.
Machining processes such as grinding, deburring, polishing, and milling are essentially
force tasks whose nature requires the robot end-effector to establish physical contact with
the environment and exert a process-specific force. The inherently lower stiffness of the
robot has presented many challenges to the successful execution of material removal
applications. First and foremost, the lower stiffness makes chatter much easier to occur in a
robot than in a CNC machine. Severe low-frequency chatter has been observed in milling
and deburring processes. Although extensive research on chatter has been conducted, none
of it has focused on the chatter mechanism in robotic machining. The result is that, without
a good understanding or even a rule-of-thumb guideline, robotic engineers and technicians
have to spend tremendous time on trial and error in the hope of stumbling upon a golden
setup, or have to sacrifice productivity by settling on very conservative cutting
parameters.
The second challenge is the structural deformation and loss of accuracy due to the required
machining force. The predominant cutting action in machining involves shear deformation
of the work material to form a chip. The contact between the cutting tool and the workpiece
generates significant forces. As a result, a perfect robot program that does not consider
contact and deformation will immediately become flawed as the robot starts to execute the
machining task. Unlike in multi-axis CNC machining centres, such deformation is coupled
and varies even under the same force at different workspace locations. Such coupling
results in deformation not only in the direction of the reaction force, and can generate
counter-intuitive results.
Lastly, the machining of casting parts presents a unique challenge of non-uniform
cutting depth and width. In conventional robot programming and process planning
practice, the cutting feed rate is constant even with significant variation of the cutting force
from part to part, which dictates a conservative feed rate that does not violate the
operational limits. Therefore, it is desirable to maximize the material removal rate (MRR)
and minimize cycle time by optimizing the cutting feed rate based on a programmed
spindle load. By optimizing the feed rate in real time, one can compensate for
conservative assumptions and process variations and thus reduce cycle time. Every part,
including the first one, is optimized automatically, eliminating the need for manual part-
program optimization.
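A minimal sketch of one step of such a feed-rate optimization loop, assuming a normalised spindle-load measurement; the proportional gain and the feed limits are illustrative choices, not values from the chapter:

```python
def feed_override(spindle_load, target_load, feed, feed_min, feed_max, gain=0.5):
    """One update of a proportional feed-override loop: raise the feed when
    the measured spindle load is below the programmed target, lower it when
    above, clamped to the machine limits."""
    error = (target_load - spindle_load) / target_load
    return max(feed_min, min(feed_max, feed * (1.0 + gain * error)))
```

Run at the controller's interpolation rate, this drives the load toward the programmed target so that light cuts are traversed quickly and heavy cuts slowly, which is the essence of MRR control discussed later in the chapter.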
This chapter addresses the above critical challenges with detailed experimental
observations, in-depth analysis of the underlying root causes, and practical solutions to
improve stability, accuracy and efficiency (Pan et al., 2006; Zhang et al., 2005). If
industrial robots could be made to provide the same performance in contact
situations as they already achieve in free motion, robotic machining would result in
significant cost savings in many potential applications. Moreover, if such applications can
be proven to be economically viable and practically reliable, one would expect to witness
many success stories in the years to come. This chapter is organized in six sections
including the introduction. Section II is devoted to the modelling of the robotic machining
process and serves as the basis for the following three sections, where chatter analysis,
deformation compensation and MRR control are presented respectively. The whole
chapter is concluded in Section VI.
where [G(s)] is the matrix of system transfer functions, and [M], [C] and [K] are the 6×6
system mass, damping and stiffness matrices respectively. These matrices are generally
configuration dependent but, for convenience of analysis, they can be treated as constant
when the robot moves only within a small range.
τ = Kq ⋅ ΔQ            (5)
where τ is the torque load on the six joints and ΔQ is the 6×1 vector of joint deformations.
This simplification is justifiable for industrial robots as they are designed to achieve high
positioning accuracy and high strength; the elastic properties of the arms are insignificant.
As a result, the dominant contributor to a large deflection of the manipulator tip position
is the joint compliance, e.g. due to gear transmission elasticity. The robot stiffness model
can therefore be reduced to six rotational stiffness coefficients in joint space. From the
control point of view, this model is also easy to implement, since all industrial robot
controllers are decoupled into SISO joint control at the servo level. Joint deformation can
be directly compensated on the joint angle references passed to the servo controller.
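Under the diagonal joint-stiffness model of eq. (5), that compensation can be sketched as follows; the stiffness values and the sign convention are assumptions for illustration, not identified parameters:

```python
import numpy as np

K_JOINT = np.array([2e5, 2e5, 2e5, 2e4, 2e4, 2e4])  # Nm/rad, assumed gear stiffnesses

def compensated_reference(q_ref, J, wrench):
    """Pre-offset the joint references by the deflection predicted from eq. (5):
    tau = J^T w, dq = tau / K_q (K_q is diagonal), and command q_ref + dq so
    that the elastically deflected joint lands on the desired angle."""
    tau = J.T @ wrench       # joint torques produced by the tool-tip wrench
    dq = tau / K_JOINT       # per-joint elastic deflection
    return q_ref + dq
```

In practice the wrench would come from a wrist force sensor or a process force model, and the Jacobian would be updated with the robot pose along the path.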
Compared to CNC machines, articulated robots have totally different structural
characteristics (Table 1). First of all, the serial structure of an articulated robot has a much
lower stiffness than that of a standard CNC machine: the stiffness of an articulated robot is
usually less than 1 N/µm, while a standard CNC machine very often has a stiffness greater
than 50 N/µm. Secondly, with a large mass, the base natural frequency of the robot
structure is very low; for a large robot it is typically around 10 Hz, compared with several
hundred Hz or even more than one thousand Hz for the moving components of a CNC
machine. Lastly, the stiffness matrix of the robot is configuration dependent and non-
diagonal. This means that, first, force and deformation in Cartesian space are coupled: a
force applied in one direction will cause deformation in all directions/orientations; second,
at different locations the Cartesian stiffness matrix takes different values. As will be shown
in the following sections, many of the unique problems present in robotic machining have
to do with the low and coupled stiffness matrix.
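This coupling can be made concrete by mapping a diagonal joint stiffness into Cartesian space via Kx = J⁻ᵀ Kq J⁻¹, here for a planar 2R arm; the link lengths, poses and stiffness values are assumed for illustration:

```python
import numpy as np

def jacobian_2r(q1, q2, l1=0.5, l2=0.5):
    """Jacobian of a planar 2R arm."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def cartesian_stiffness(J, k_joint):
    """K_x = J^{-T} K_q J^{-1}: a diagonal joint stiffness becomes a full,
    pose-dependent Cartesian matrix, so a force along one axis also
    deflects the tool along the others."""
    J_inv = np.linalg.inv(J)
    return J_inv.T @ np.diag(k_joint) @ J_inv

k = np.array([2e5, 2e4])                      # assumed joint stiffnesses, Nm/rad
Kx_a = cartesian_stiffness(jacobian_2r(0.2, 1.0), k)
Kx_b = cartesian_stiffness(jacobian_2r(1.0, 0.4), k)
# Kx_a has non-zero off-diagonal terms (coupling) and differs from Kx_b,
# showing the configuration dependence discussed in the text.
```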
F = K ⋅ w^γ ⋅ d^α ⋅ f^β            (8)
and
F(s) = [K ⋅ w^γ ⋅ d^α ⋅ f^β] / (τm⋅s + 1)            (9)
where the constants α, β and γ depend on such factors as tool material, tool geometry,
workpiece material, etc., and do not vary significantly during the machining operation. The
cause of the nonlinearities is not well understood, but is believed to lie in the
strength and temperature of the cutting zone, as well as in the resultant force direction, as the
process parameters vary. These nonlinear effects have long been recognized and have been
mentioned in the force control literature.
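As a concrete sketch, Eqs. (8) and (9) can be evaluated numerically. The constants K, α, β, γ and the time constant τ_m below are illustrative placeholders, not fitted values for any process:

```python
def cutting_force(K, w, d, f, gamma=1.0, alpha=0.8, beta=0.7):
    """Static cutting force model, Eq. (8): F = K * w^gamma * d^alpha * f^beta."""
    return K * w ** gamma * d ** alpha * f ** beta

def first_order_lag(u_seq, tau_m, dt):
    """Discretized first-order lag 1/(tau_m s + 1) of Eq. (9), forward Euler."""
    y, out = 0.0, []
    for u in u_seq:
        y += dt * (u - y) / tau_m
        out.append(y)
    return out

# Static force for one set of cutting parameters (w, d, f in process units)
F_static = cutting_force(K=600.0, w=5.0, d=2.0, f=0.2)
# Dynamic response: the lagged force rises toward F_static with constant tau_m
F_dyn = first_order_lag([F_static] * 200, tau_m=0.05, dt=0.005)
```

The lag in Eq. (9) is what makes the measured force trail the commanded engagement during transients.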
3. Chatter Analysis
One of the major hurdles preventing the adoption of robots for machining processes is chatter.
Tobias (Tobias & Fishwick, 1958) and Tlusty (Tlusty & Polacek, 1963) recognized that the
most powerful sources of chatter and self-excitation are the regenerative and mode
coupling effects. Regenerative chatter is based on the fact that the tool cuts a surface already
cut during the previous revolution, so the cutting force and the depth of cut vary. Mode
coupling chatter is due to the fact that the system mass vibrates simultaneously in the
directions of the degrees of freedom (DOF) of the system, with different amplitudes and
with a difference in phase. Regenerative chatter sets in earlier than mode coupling
chatter in most machining processes, as explained by Altintas (Altintas, 2000).
Although extensive research on chatter has been carried out, none of the existing work
has focused on the chatter mechanism in the robotic machining process. As a result, robotics
engineers and technicians are left to deal with elusive and detrimental chatter issues
without a good understanding or even a rule-of-thumb guideline. Very often, to get a
process working correctly, one has to spend tremendous time on trial and error in the hope
of stumbling upon a golden setup, or has to sacrifice productivity by settling on
conservative cutting parameters far below the possible machining capability. This
section attempts to bridge the gap by explaining the underlying chatter generation
mechanism (Pan et al., 2006). First, the characteristics of chatter in the robotic machining process
are presented, followed by a detailed analysis of the chatter mechanism applying both
regenerative and mode coupling theory. Experimental results are then provided to
verify the theoretical analysis. Finally, stability criteria and insightful guidelines for avoiding
chatter in the robotic machining process are presented.
parameters, which were proposed by other researchers, intended to avoid chatter at the
expense of the loss of productivity.
Fig. 3. Setup of robotic end milling.
Fig. 4. Chatter marks on the workpiece.
In the present work, a robotic milling work cell is set up with an ABB IRB6400 industrial
manipulator. The spindle is mounted on the robot wrist, while the workpiece is fixed on a
steel table. An ATI 6-DOF force/torque sensor is installed between the robot wrist and the spindle,
as shown in Figure 3. After compensating for the gravity of the spindle and tool, the 3-DOF machining
force can be measured accurately. When chatter occurs, the amplitude of the cutting force
increases dramatically, and the chatter frequency is observed from the Fast Fourier
Transform of the force data. The experimental conditions for robotic end milling are
summarized in Table 2.
Test          | End milling                             | Deburring
Robot         | ABB IRB6400                             | ABB IRB6400
Workpiece     | A2024, L300 mm × W38 mm × H150 mm       | A2024, L300 mm × W51 mm × H150 mm
Spindle type  | SETCO, 5 HP, 8000 RPM                   | GCOLOMBO, 11 HP, 24,000 RPM
Tool type     | SECO Φ52 mm, round insert, Φ1.89 mm × 5 | SANDVIK Φ20 mm, helical 2-flute
Cutting fluid | - (dry cutting)                         | - (dry cutting)
Feed rate     | 30 mm/s                                 | 60 mm/s
Spindle speed | 3600 RPM                                | 18,000 RPM
DOC           | 1-4 mm                                  | 5 mm
WOC           | 38 mm                                   | 15 mm
Table 2. Experimental conditions for robotic machining
In most situations the cutting process is stable; the work cell can take a 4-5 mm depth-
of-cut (DOC) until reaching the spindle power limit. Nevertheless, when feeding in the −Z
direction, severe low frequency (10 Hz) chatter occurs at a DOC of only 2 mm. The
characteristics of this low frequency chatter are:
1. The frequency of chatter is the robot base natural frequency at 10 Hz. It does not
change with variation of cutting parameters such as spindle speed, width-of-
cut (WOC), feed speed, or the location of the workpiece.
2. When chatter occurs, the entire robot structure starts to vibrate. The magnitude of
vibration is so large that obvious chatter marks are left on the workpiece surface
(Figure 4).
3. In the cutting setup of Figure 3 and Table 2, using exactly the same cutting
parameters (DOC, RPM, WOC, feed speed), chatter starts to occur when feeding in the −Z
522 Industrial Robotics - Programming, Simulation and Applications
direction with DOC = 2 mm, while the process is stable when feeding in the +Z or ±X directions,
even with DOC = 4 mm. The cutting forces in the unstable (feed in −Z direction) and
stable (feed in +Z direction) conditions are plotted in Figure 5.
4. The cutting process has different chatter limits at different locations in the robot
workspace. Machining experiments were carried out at three different locations
along the table, defined as L1, L2 and L3. They have the same Y, Z coordinates and
different X coordinates. L1 is the location shown in Figure 3, L3 is at the other
corner, and L2 is in the middle. Without changing other cutting parameters, the chatter
limit for L1 is DOC = 2 mm, for L2 it is DOC = 1.5 mm, and for L3 it is DOC = 1.1 mm.
Fig. 5. Cutting force profile. (Left) force plot while low frequency chatter happens, cutting
conditions as listed in Table 2. (Right) force plot while the system is stable; cutting conditions are
the same except that the feed is in the opposite direction.
d = m(ω_c² + 4ω_n²ζ²) / (2K_p)   (11)

Ω = 60ω_c / [2φ − π(1 − 2n)],   n = 1, 2, 3, …   (12)

where d is the DOC, Ω is the spindle speed in RPM, ω_n = √(k/m), ζ = c/(2mω_n), and ω_c = ω_n√(1 − ζ²).
The corresponding stability lobes are plotted in Figure 7. Obviously, at the spindle speed Ω = 3600,
the robot works on the far right side of the first stability lobe, where the DOC is almost unlimited.
Based on regenerative chatter theory, one would therefore not expect to observe the low frequency
chatter. However, the experimental tests show that such chatter behavior indeed exists, which
forced us to look into other theories to explain the underlying mechanism of this kind of
chatter. In the following subsection, we explore mode coupling chatter theory
and find that it offers the most reasonable explanation of the aforementioned problem.
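A numerical reading of Eqs. (11) and (12) is sketched below. The modal parameters are hypothetical values chosen only to reproduce a ~10 Hz base mode, and the phase φ is assumed (its definition falls outside this excerpt); the numbers are illustrative, not predictions for the IRB6400:

```python
import math

# Hypothetical one-DOF modal parameters for a robot base mode near 10 Hz
m, k, c = 300.0, 1.2e6, 2.0e3   # kg, N/m, N*s/m
Kp = 2.0e5                      # process stiffness, N/m (assumed)

wn = math.sqrt(k / m)                  # natural frequency, rad/s
zeta = c / (2.0 * m * wn)              # damping ratio
wc = wn * math.sqrt(1.0 - zeta ** 2)   # chatter (damped) frequency

# Eq. (11): borderline depth of cut
d_lim = m * (wc ** 2 + 4.0 * wn ** 2 * zeta ** 2) / (2.0 * Kp)

# Eq. (12): spindle speeds on the lobe borders; phi is an assumed phase lag
phi = 3.0 * math.pi / 4.0
lobe_rpm = [60.0 * wc / (2.0 * phi - math.pi * (1.0 - 2.0 * n))
            for n in (1, 2, 3)]
# Higher-order lobes (larger n) sit at lower spindle speeds.
```

Running the sketch confirms the qualitative picture in the text: the base mode lands near 10 Hz and the lobe speeds fall off with n.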
Fig. 6. Block diagram of the one-DOF regenerative chatter model: robot structure 1/(ms² + cs + k), process gain K_p, and regenerative delay e^(−sτ).
Fig. 7. Stability lobes (n = 1, 2, 3); to the right of the first lobe lies an unlimited stable region.
Fig. 8. 2D model of the mode coupling chatter system: principal stiffnesses K_x, K_y, cutting force F = K_p·Y, with angles α and γ defined relative to the workpiece and the feed direction.
Fig. 9. TCP locus in stable condition for undamped system.
By performing this similarity transformation, Eq. (13) becomes

{q̈} + [K_Λ]{q} = [V]ᵀ[K_p][V]{q}   (17)

where the modal mass takes the same value M in both principal directions.
Let
K_x′ = K_x − K_p cos(γ)sin(α)
K_y′ = K_y − K_p sin(γ)cos(α)
The characteristic equation of Eq. (22) is:

λ⁴ + [(K_x′ + K_y′)/M]λ² + [K_x′K_y′ − (1/4)K_p² sin(2γ)sin(2α)]/M² = 0   (23)
The solution of Eq. (23) gives:
If (K_x′ − K_y′)² + K_p² sin(2γ)sin(2α) > 0, the two roots λ² are negative real numbers. In this
case the four eigenvalues of the system are located on the imaginary axis, symmetric with
respect to the real axis. The solution of the system consists of Lissajous curves, as shown in Figure 9, and the
system is stable in a BIBO sense. Moreover, since damping always exists, the
structural damping of the real system shifts the eigenvalues toward the left in
the complex plane, giving exponentially decaying modes.
If (K_x′ − K_y′)² + K_p² sin(2γ)sin(2α) < 0, the two roots λ² are complex numbers with negative real
parts. In this case the four eigenvalues of the system are located symmetrically with respect
to the origin of the complex plane, so two of them have positive real parts. Therefore,
instability occurs. The solution of the system grows exponentially, as shown in
Figure 10. When the damping effect is considered, the locus of the TCP is an ellipse.
The unstable region can be represented as:

sin(2γ) < [K_x − K_y + K_p sin(γ − α)]² / (−K_p² sin(2α))   (25)
Fig. 10. TCP locus in unstable condition for undamped system.
Fig. 11. Locus of the force vector (moving clockwise in the X-Y plane, in N, from start to end) while chatter happens.
An important result here is that the unstable condition is only possible when K_p > |K_x′ − K_y′|.
That means mode coupling chatter only occurs when the process stiffness is larger than
the difference between the two principal stiffnesses of the robot. The physical meaning of the equation gives
us the general stability criterion. If a machining system can be modeled as a two-degree-of-
freedom mass-spring system, the dynamic motion of the TCP can take an elliptical path. If
the axis with the smaller stiffness lies within the angle formed by the total cutting force and
the normal to the workpiece surface, energy can be transferred into the machine-tool
structure, thus producing mode coupling chatter. The depth of cut at the threshold of
stable operation depends directly on the difference between the two principal
stiffness values, and chatter tends to occur when the two principal stiffnesses are close in
magnitude.
For a CNC machine, the structural stiffness k is on the order of 10⁸ N/m, while the process
stiffness K_p is usually in the range of 10⁵-10⁶ N/m. As a result, any small difference of k
in the two principal directions is much larger than K_p; before mode coupling
chatter can occur, the spindle power limit or the regenerative chatter limit is already reached.
The story is totally different for an industrial robot, where the stiffness k is on the order of 10⁶
N/m and has close magnitudes in all directions. Since the process stiffness K_p when machining
an aluminum workpiece is usually on the order of 10⁵ N/m, the modal stiffness in each
principal direction can be smaller than the process stiffness in certain robot configurations.
Mode coupling chatter will then occur even in very light cutting conditions. The vibration
limit of a robotic milling operation depends on the relative angles between the machining force
vector and the directions of the principal modal stiffnesses.
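The discriminant of Eq. (23) makes this CNC-versus-robot comparison concrete. A sketch with stiffness magnitudes at the orders just quoted; the force angle γ and principal-axis angle α are illustrative choices, not measured values:

```python
import math

def mode_coupling_unstable(Kx, Ky, Kp, gamma, alpha):
    """Unstable iff (K'x - K'y)^2 + Kp^2 sin(2*gamma) sin(2*alpha) < 0,
    i.e. the roots lambda^2 of Eq. (23) become complex."""
    Kxp = Kx - Kp * math.cos(gamma) * math.sin(alpha)
    Kyp = Ky - Kp * math.sin(gamma) * math.cos(alpha)
    disc = (Kxp - Kyp) ** 2 + Kp ** 2 * math.sin(2 * gamma) * math.sin(2 * alpha)
    return disc < 0.0

g, a = math.radians(60.0), math.radians(-60.0)   # assumed angles
# Robot: principal stiffnesses ~1e6 N/m and close to each other
robot_chatters = mode_coupling_unstable(1.00e6, 1.05e6, 2.0e5, g, a)
# CNC machine: k ~1e8 N/m, so |K'x - K'y| >> Kp
cnc_chatters = mode_coupling_unstable(1.0e8, 1.1e8, 2.0e5, g, a)
# robot_chatters -> True, cnc_chatters -> False
```

With these numbers the robot case trips the instability condition while the CNC case cannot, matching the K_p > |K_x′ − K_y′| criterion above.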
The above analysis also coincides with a rule of thumb in the machine tool industry that the
stiffnesses in two directions should differ by at least 10% to prevent chatter. The
characteristics of the low frequency chatter summarized in Section 3.1 can be perfectly explained by
the mode coupling mechanism:
1. The frequency of mode coupling chatter is the same as the base frequency of the
machine. Process parameters such as spindle speed, width-of-cut (WOC), and
feed speed do not change the frequency of chatter.
2. In the unstable condition, the solution of the system grows exponentially until it is
balanced by damping effects, or simply crashes (Figure 4). The locus of the force vector is
drawn in Figure 11; the locus of the tool tip movement takes the same elliptical path.
3. An unstable cutting process can become stable simply by changing the feed
direction, because the direction of the force vector is changed while the machine structure
stays the same.
4. The chatter limit of the process is configuration dependent due to the coupled robot
structure. The mass and stiffness matrices are not constant; they take different
values at different robot configurations. As a result, even though the force vector is the
same, the chatter limit differs because the machine structure differs.
Fig. 12. Setup for robotic deburring.
Fig. 13. Stability analysis of deburring.
(Figure: range of the cutting force direction relative to the robot feed, for up-milling versus down-milling.)
After investigating the intrinsic mechanism of low frequency chatter, practical guidelines for a
chatter-free robotic machining process are summarized here.
1. Select the proper cutting tool. Tools or inserts with different geometries
distribute the machining force in different directions. A round insert always generates a
larger normal force (in the DOC direction) than a square insert with a zero-degree
lead angle. Since DOC is the parameter to which the machining force is most sensitive,
chatter arises more easily when the process has a larger normal force. A round
insert is therefore not recommended for robotic machining, although it has many advantages
and is widely used on CNC machines.
2. Plan the proper workspace. Since the robot exhibits different mechanical
properties (mass, stiffness and damping) at different locations in the
workspace, a machining location with a higher chatter limit should be selected.
3. Plan the proper path and feed direction. Changing the feed direction is the easiest
way to redirect the machining force without affecting the existing setup. Based on the
theoretical analysis and experimental results, this is an effective method to avoid
mode coupling chatter in many situations.
4. Chatter is more likely to occur in up-milling than in down-milling in the robotic
machining process.
4. Deformation Compensation
Field tests using industrial robots for heavy machining such as milling have often found that a
perfect robot program, written without considering contact and deformation, fails to produce the
desired path once the robot starts to execute the machining task. While thermally induced
error is the largest error component in CNC machining, motion error contributes most of
the total machining error in robots. For example, a 500 N cutting force during a milling
process will cause a position error of about 1 mm for a robot, compared to less than 0.01 mm for a
CNC machine. The major motion error sources in the robotic machining process can be
classified into two categories: (1) machining force induced error, and (2) robot inherent
motion error (kinematic and dynamic errors, etc.). The inherent motion error, typically in
the range of 0.1 mm, results from the robot position controller and appears even in
non-contact cases. Since the machining force in the milling process is typically over several
hundred newtons, the force-induced error, which can easily reach 1 mm, is the
dominant factor in surface error. In order to achieve higher dimensional accuracy in robotic
machining, the deformation due to the interactive force must be accurately estimated and
compensated in real time.
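A minimal sketch of one such compensation step, assuming a 2×2 Cartesian stiffness model identified beforehand and a force reading from the wrist sensor (the sign convention, that the commanded point is offset into the deflection so the tool stays on the nominal path, is assumed):

```python
def deflection(K, F):
    """delta = K^-1 * F for a 2x2 stiffness matrix K [N/m] and force F [N]."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return ((K[1][1] * F[0] - K[0][1] * F[1]) / det,
            (-K[1][0] * F[0] + K[0][0] * F[1]) / det)

def compensated_point(p_desired, K, F_measured):
    """Offset the commanded point against the predicted deflection."""
    dx, dy = deflection(K, F_measured)
    return (p_desired[0] + dx, p_desired[1] + dy)

# Robot-like stiffness: 500 N produces a 1 mm deflection (cf. the text)
K = [[5.0e5, 0.0], [0.0, 5.0e5]]
p_cmd = compensated_point((0.100, 0.250), K, (500.0, 0.0))
# p_cmd = (0.101, 0.250): the path reference is shifted by 1 mm in X
```

In a real controller this update would run inside the servo loop, with K replaced by the configuration-dependent stiffness at the current pose.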
Existing research on robot deformation compensation has focused on gravity
compensation, deflection compensation of large flexible manipulators, etc. Little
attention has been paid to compensating process-force-induced robot deformation, owing
to the lack of an understanding and model of robot structural stiffness, the lack of real-time
force information, and the limited access to the controllers of industrial robots.
speed to keep the MRR constant during the whole machining process. As a result, a much faster
feed speed, instead of a conservative feed speed based on the maximal depth-of-cut and width-
of-cut position, can be adopted (Figure 20).
(Figure 20 legend: conservative feed versus controlled tangential force, F_t = 150-400 N.)
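The constant-MRR feed adaptation described above can be sketched as follows; the target rate and feed limits are illustrative values, not the authors':

```python
def feed_for_constant_mrr(mrr_target, doc, woc, f_min=5.0, f_max=100.0):
    """Feed speed [mm/s] that holds MRR = DOC * WOC * feed constant,
    clamped to the machine's feed limits. mrr_target in mm^3/s, doc/woc in mm."""
    feed = mrr_target / (doc * woc)
    return max(f_min, min(f_max, feed))

# A 4560 mm^3/s target corresponds to a 4 mm x 38 mm cut at 30 mm/s.
full_cut = feed_for_constant_mrr(4560.0, 4.0, 38.0)   # 30.0 mm/s
half_cut = feed_for_constant_mrr(4560.0, 2.0, 38.0)   # 60.0 mm/s
shallow = feed_for_constant_mrr(4560.0, 0.5, 38.0)    # capped at 100.0 mm/s
```

Where the engagement shrinks, the feed rises to keep the removal rate (and hence the cutting force) roughly constant, instead of running the whole path at the conservative worst-case feed.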
variations to help maximize the material removal rate (MRR) and minimize cycle time. Practical
machining experiments conducted in the lab showed a great reduction of cycle time, as well
as better surface accuracy. These results outline a promising and practical use of industrial
robots for machining applications that is not possible at present.
7. References
Akbari, A. & Higuchi, S. (2000). Autonomous tool adjustment in robotic grinding, The
International Conference on Precision Engineering (ICoPE), pp. 121-126
Altintas, Y. (2000). Manufacturing Automation: Metal Cutting Mechanics, Machine Tool
Vibrations, and CNC Design, 1st ed., Cambridge University Press, NY, USA
Budak, E. & Altintas, Y. (1998). Analytical prediction of chatter stability conditions for multi-
degree-of-freedom systems in milling. Part I: modeling, Part II: applications. Transactions of
ASME, Journal of Dynamic Systems, Measurement and Control, Vol. 120, pp. 22-36
Chen, Y. & Hu, Y. (1999). Implementation of a robot system for sculptured surface cutting.
Part 1. rough machining. International Journal of Advanced Manufacturing Technology,
Vol. 15, pp. 624-629
Daneshmend, L. & Pak, H. (1986). Model reference adaptive control of feed force in turning,
ASME Journal of Dynamic Systems, Measurement, and Control, Vol. 108, No. 3, pp.
215-222.
Gasparetto, A. (1998), A system theory approach to mode coupling chatter in machining,
ASME Journal of Dynamic Systems, Measurement, and Control, Vol. 120, pp.545-547.
Kim, S.; Landers, R. & Ulsoy, G. (2003). Robust machining force control with process
compensation. Journal of Manufacturing Science and Engineering, Vol. 125, pp. 423-430
Matsuoka, S., Shimizu, K., Yamazaki, N. & Oki, Y. (1999). High-speed end milling of an
articulated robot and its characteristics, Journal of Materials Processing Technology,
Vol. 95, No. 1-3, pp. 83-89
Merritt, H. (1965). Theory of self-excited machine tool chatter: contribution to machine-tool
chatter research-1, ASME Journal of Engineering for Industry, Vol. 87, No. 4, pp. 447-
454
Pan, Z. ; Zhang, H. ; Zhu, Z. ; Wang, J. (2006). Chatter analysis of robotic machining process,
Journal of Materials Processing Technology, Vol.173, pp.301-309
Sallinen, M. & Heikkilä, T. (2000). Flexible workobject localisation for CAD-based robotics,
Proceedings of SPIE Intelligent Robots and Computer Vision XIX: Algorithms,
Techniques, and Active Vision, Vol.4197, pp. 130 – 139, Boston, USA, Nov. 2000
Stein, J. & Huh, K. (2002). Monitoring cutting forces in turning: a model-based approach.
Journal of Manufacturing Science and Engineering, Vol. 124, pp. 26-31
Tlusty, J. & Polacek, M. (1963). The stability of machine tools against self excited vibrations
in machining, International Research in Production Engineering, ASME, pp. 465-474.
Tobias, S. & Fishwick, W. (1958). The chatter of lathe tools under orthogonal cutting
conditions, Transactions of the ASME, No. 80, pp. 1079-1088
Wang, J.; Sun, Y. et al. (2003a). Process modeling of flexible robotic grinding, International
Conference on Control, Automation and Systems, Gyeongju, Korea, Oct. 2003
Wang, J.; Sun, Y. et al. (2003b). In-process relative robot workcell calibration, International
Conference on Control, Automation and Systems, Gyeongju, Korea, Oct. 2003
Whitney, D. (1977). Force feedback control of manipulator fine motions, ASME Journal of
Dynamic Systems, Measurement, and Control, Vol. 99, No. 2, pp. 91–97
Yang, S. (1996). Real-time compensation for geometric, thermal, and cutting force induced
errors in machine tools. Ph.D. dissertation, The University of Michigan
Zhang, H. et al. (2005). Machining with flexible manipulator: toward improving robotic
machining performance, IEEE/ASME International Conference on Advanced Intelligent
Mechatronics (AIM), California, USA, July 2005.
27
1. Introduction
The shipbuilding industry is steadily advancing by introducing robots to its work fields for
increased productivity and improved working conditions (Nagao et al., 2000).
However, shipbuilding companies still face workers' health problems, an
increasing number of aging workers, a shortage of skilled workers, and environmental protection
issues. Advanced robotic manipulators are therefore still required to overcome these
problems, and how to properly apply commercial robotic systems to meet production
purposes in the shipyard is a key research topic for shipbuilding engineers.
The shipbuilding process is mainly divided into the design, cutting, welding, assembling,
grinding, blasting, and painting processes. Among these manufacturing processes, welding is
the most crucial, expensive, and time-consuming. For that reason, welding robot
applications have yielded a big productivity improvement in hull assembly welding and
have reduced work-related musculoskeletal disorders among workers.
In this chapter, the results of our work in the development of a welding robot system and
PC-based off-line programming for welding robot applications on assembly lines in
shipbuilding are explained. A methodology for implementing PC-based off-line
programming on the user's PC is also presented. The off-line programming system
comprises robot simulation, robot programming, and other functions such as monitoring
and scheduling that let users operate the robot system easily. Off-line programming is
essential for a welding robot system in a shipyard to prepare robot programs and thereby shorten
production time.
Currently, industrial robots are operated through either on-line teaching or off-line
programming (Choi & Lee, 2003; Carvalho et al., 1998; Craig, 1986). On-line teaching is, by
definition, a technique of generating robot programs using a real robot system, whereas off-
line programming is a method using simulations that are set up in advance. On-line
teaching may be suitable for jobs in which a robot only needs to repeat a monotonous
motion using one pre-written program applied to objects of identical shape and size. But in
work places such as shipbuilding, where the shapes and sizes of workpieces vary widely
(there are more than 1200 different shapes of workpieces for grand-assembly; if the sizes of
these differently shaped workpieces are also counted, the number of distinct workpieces
becomes practically uncountable, and new workpiece shapes keep appearing as ship
specifications advance), on-line teaching causes problems, due not only to the loss of
productivity from halting the robots while reprogramming but, more importantly, to
the inability to revise the work errors that on-line programming itself can introduce. Hence, the
more profitable method of building a work program is off-line programming, applying
only programs that have already been verified to be effective for the job. The
advantages of off-line programming are: (1) effective programming of robot-
command logic with debugging facilities; (2) easy verification of the validity of the robot
programs through simulation and visualization; (3) organized documentation through
simulation models with appropriate programs; (4) reuse of existing robot programs and
easy application to other objects; and (5) cost independence of production, owing to the
fact that production can continue while programming. Fig. 1 shows the difference between
the two robot programming methods.
Research on robotic system simulation is prevalent (Ferretti, 1999; Kreuzer & Miojevic,
1998; Richard, 1997), and industrial robot manufacturers provide
commercial software for robot simulation, such as ROBCAD and IGRIP, which
includes developed simulation tools for the robots. However, applying this commercial
software to ship construction requires extra preparation, including users becoming fluent
with the CAD systems, complete modeling of the work objects, and the development of
language translators that will work with robot manufacturing
application of the CAD interface and the robot program generation to the actual welding
plant. In Section 5, the process of path planning for scattering the blocks, based on a block
arrangement simulation, is explained. Section 6 provides conclusions.
Fig. 3 shows the welding robot system being used at Daewoo Shipbuilding and
Marine Engineering Ltd., South Korea. The system is composed of a welding robot, a
controller, a gantry crane, welding equipment, and an on-site industrial computer.
;FR65-74sl.pgm
001 RHOME V=030.0(%) HF=1
002 GETP P01 0 0 0 CRD=BASE
003 GMOVJ T01 V=030.0(%) PL=0
004 RHOME V=030.0(%) HF=2
005 RMOVJ T02 V=030.0(%) PL=0
006 GMOVJ T03 V=030.0(%) PL=0
007 GMOVJ T04 V=030.0(%) PL=0
008 GETP P02 0 0 0 CRD=BASE
009 RMOVL T05 V080.0 PL0 D0
010 RTOUCH V=035.0 X1 Y0 Z0 L100 P00
011 RIMOV V=030.0 x=-10 y=0 z=0 CRD=BASE
012 RTOUCH V=035.0 X0 Y0 Z-1 L100 P00
; FR65-74sl.rpy
T02 = 180 0 0 1 1
T03 = -180 -47 180 1 1
T04 = 180 0 0 1 1
T05 = 90 0 0 1 1
T06 = 180 -47 90 1 1
; FR65-74sl.rule
T02 G52 2064.0 -2389.0 1680.0
T03 A52 2734.0 -2389.0 44.0 2064.0 -2389.0 1680.0 0.0
T04 A52 2734.0 -1889.0 44.0 2064.0 -1889.0 1680.0 0.0
T05 A52 2734.0 -2889.0 44.0 2064.0 -2889.0 1680.0 0.0
T06 G52 2177.6 -2344.2 1680.0
Fig. 4. An example of the standard program, consisting of three parts: a program-file (top), an
rpy-file (middle), and a rule-file (bottom). In the program-file, the second column is the robot
command and the third column holds the inner parameters of each robot command.
Fig. 5. The coordinate systems defined: world, base, object, and tool.
In order to apply variously sized but identically shaped target objects without modifying
the pre-generated robot program, the teaching points in the standard program are written in
variable form. Fig. 4 shows an example of a standard program. In order to be
executed in the robot controller, information regarding the real size of the workpiece and
the welding conditions must also be contained in the standard program. The standard
program including this information is referred to as a job-program. The size information is
obtained through the CAD interface, and the welding conditions are gathered from a database in
which all previous experimental data have been stored.
Fig. 5 depicts the defined coordinate systems: {W} denotes the fixed world (reference)
coordinate system attached to the ground; {B} denotes the base coordinate system affixed to
the base of the robot, whose position is determined by the movement of the gantry; {O}
refers to the object coordinate system attached to the workpiece; and {T} represents the
coordinate system attached to the tool center.
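The chained relation between these frames, ᵂT_T = ᵂT_B · ᴮT_T, can be sketched with plain homogeneous transforms; the gantry and tool poses below are made up for illustration:

```python
import math

def trans(x, y, z):
    """Homogeneous translation matrix."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rot_z(theta):
    """Homogeneous rotation about Z by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

# {W}->{B}: the gantry carries the robot base; {B}->{T}: tool pose from kinematics
W_T_B = trans(2.0, 0.5, 1.2)
B_T_T = mul(rot_z(math.pi / 2), trans(0.8, 0.0, -0.3))
W_T_T = mul(W_T_B, B_T_T)

tip_in_world = apply(W_T_T, (0.0, 0.0, 0.0))  # tool center expressed in {W}
```

A teaching point given in {O} would be handled the same way, composing ᵂT_O with the point before mapping it into the robot base frame.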
(Figure: welding information from the welding D/B; work documents to arrange assemblies; robot program execution sequence; transfer of robot programs to each controller via TCP/IP.)
vision sensor and block arranging to get information of workipiece base position is not
required.
Since myRotor[n] holds each link's angle, the robot simulation can be carried out by changing
the myRotor[n] values using the robot's forward and inverse kinematics. The
m_pLoadBlockSep node is added to this node structure to draw workpieces. Two
kinds of simulation options are usually required in shipbuilding:
Case 1: Grand- and middle-assembly
- Because the workpiece base position is fixed,
- the robot base position is also fixed.
Case 2: Sub-assembly
- Because the workpiece base position is not fixed (variable),
- the robot base position is also movable (variable).
Auxiliary graphic objects such as axes, texts, and welding lines are added directly to the top node,
m_pSceneRoot, in order to be independent of the robot's movements. For
example, to display the axes of the teaching points and the welding line, m_pAxisSep and
m_pLine are attached to the m_pSceneRoot node independently of the motion of the related
nodes m_pLoadBlockSep and GantrySep, as shown in Fig. 8. An axis graphic node is
added to the m_pSceneRoot node whenever the user generates a new teaching point. The
added axis graphic nodes are counted, and every axis in the simulation window has its
own number. By clicking an axis in the simulation window, the simulation window
displays the data of the selected teaching point. In the same way, the m_pLine node containing the
line graphic object is added to the m_pSceneRoot node, and whenever the user selects a
welding line, the line is displayed in the simulation window. In order to render a similar type of
robot and change its link parameters easily, a robot initialization file is used. The robot
initialization file contains the file name of the robot body VRML model, the link parameters,
the limit values of the individual joints, the control parameters, and the home position data. In
addition, the user can arbitrarily specify the robot base position so that the initial position of
the robot system can be easily set. Also, through the manipulation of kinematics data, the
base coordinate frame can be easily placed at a desired position. Hence, the reachability of
the end-effector and possible collisions with the surrounding parts can be easily examined
through the simulations in a virtual environment.
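The exact file format of the robot initialization file is not given in the text; a plausible INI-style sketch and loader, with all section and field names assumed rather than taken from the authors' system:

```python
import configparser

# Hypothetical robot initialization file: VRML body model, joint limits,
# and home position. The layout is an assumption for illustration.
SAMPLE_INI = """
[model]
body = irb6400_body.wrl

[limits]
joint1 = -180, 180
joint2 = -70, 70

[home]
joints = 0, -30, 30, 0, 45, 0
"""

def load_robot_ini(text):
    """Parse the initialization text into a small dictionary."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    limits = {name: tuple(float(v) for v in val.split(","))
              for name, val in cp["limits"].items()}
    home = [float(v) for v in cp["home"]["joints"].split(",")]
    return {"body": cp["model"]["body"], "limits": limits, "home": home}

robot = load_robot_ini(SAMPLE_INI)
```

Loading such a file at startup is what lets a similar robot type be swapped in by editing data rather than code.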
Fig. 8. Simulation environment construction: (a) a hierarchical representation of graphic
nodes realizing the simulation environment; (b) 3D robot body and individual link models.
Two types of simulation modes are provided: a teaching mode and an execution mode.
Robot teaching tells the robot what to do. Because the operator can easily move the robot through
various motions with via-points using the teaching mode, this mode is very helpful for
Welding Robot Applications in Shipbuilding Industry: Off-Line Programming,
Virtual Reality Simulation, and Open Architecture 545
operators to change teaching points in the simulation window. The teaching mode includes two
jog functions: a joint jog function that moves the joint actuators in the joint
coordinates, and a coordinate jog function that moves the robot according to a given
coordinate frame, as shown in Fig. 5. In the program execution mode, robot commands in a
standard program are executed automatically and continuously, line by
line. The left picture of Fig. 9 shows the simulation of a grand-assembly, while the right picture
shows the simulation of a sub-assembly, for which the robot is the hanging type.
The simulated motions closely approach the real ones, because the algorithms of the simulation
program, including the kinematics, the robot motion planning, and the robot language
interpreter, are identical to those of the real controller. Because the control input sampling
time is 16 msec whereas the interpolation time of a robot motion is 5 msec, the robot motion
is updated every 16 msec in simulations. Also, multiple threads driven by a 16 msec timer
are used for multi-robot simulation. In this case, one thread can invoke functions
shared with other threads, such as the robot language interpreter function, the
motion planning function, and the starting command function, at the same time, which
can result in memory leaks or malfunctions. The multiple threads are therefore synchronized
with the CCriticalSection class of VC++.
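The same guard pattern can be sketched in Python, with threading.Lock standing in for the VC++ CCriticalSection; the planner class and command counts are illustrative, not the authors' implementation:

```python
import threading

class SharedMotionPlanner:
    """Shared planning state; the lock plays the role of CCriticalSection."""
    def __init__(self):
        self._lock = threading.Lock()
        self.queue = []

    def interpret(self, robot_id, command):
        # Only one robot thread may touch the shared queue at a time.
        with self._lock:
            self.queue.append((robot_id, command))

planner = SharedMotionPlanner()

def robot_thread(rid):
    # Each simulated robot posts 100 commands through the shared interpreter.
    for _ in range(100):
        planner.interpret(rid, "GMOVJ")

threads = [threading.Thread(target=robot_thread, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock in place, the three robot threads serialize their access to the shared interpreter state instead of corrupting it.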
Fig. 9. Simulation example of a welding robot system (left: grand- and middle-assembly;
right: sub-assembly).
of the welding information file from the CAD interface. A welding information file also contains the
welding condition that is specified by class and welding standards, as well as general
block information such as the number of welding passes, the base coordinate definition, block
names, and block sizes.
By using the boundary representation (BR) method (Sheu & Xue, 1993), the surface of a solid
object is segmented into faces, and each face is modeled by its bounding edges and vertices.
Likewise, the edges and vertices of an object can be extracted from the TRIBON CAD models
by the BR method. Therefore, the 3D drawing of TRIBON CAD is decomposed into edges
and vertices, which are then reconstructed into VRML models. The function IndexedFaceSet
is a VRML command to render a 3D object from edges and vertices. Using the extracted
edges and vertices, all of the elements of the workpiece are composed into solid models.
Next, these elements are assembled on the plates according to the assembly sequence of the
drawings. This method is similar to the real assembly process, in which the operator handles
all of the elements in space and measures the size of each element, except that it is done in
simulation rather than on the real system.
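As a rough sketch of this reconstruction step, the following helper (an invented name) emits a VRML IndexedFaceSet node from extracted vertices and faces; the one-face plate geometry is made up for the example.

```python
def indexed_face_set(vertices, faces):
    """Render extracted vertices/faces as a VRML97 IndexedFaceSet node.

    vertices: list of (x, y, z) tuples
    faces:    list of vertex-index lists; each face is terminated
              with -1 in the coordIndex field, as VRML requires.
    """
    points = ",\n      ".join("%g %g %g" % v for v in vertices)
    indices = ", ".join(
        ", ".join(str(i) for i in face) + ", -1" for face in faces)
    return (
        "Shape {\n"
        "  geometry IndexedFaceSet {\n"
        "    coord Coordinate {\n"
        "      point [\n      " + points + "\n      ]\n"
        "    }\n"
        "    coordIndex [ " + indices + " ]\n"
        "  }\n"
        "}\n")

# A single rectangular plate face, as might come from the BR decomposition.
node = indexed_face_set(
    [(0, 0, 0), (500, 0, 0), (500, 300, 0), (0, 300, 0)],
    [[0, 1, 2, 3]])
```

The emitted text is a plain VRML97 Shape node, so it can be appended directly to a generated .wrl scene file.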
BEGIN_INFO
'S6G9';
BEGIN_JOINT
J-1; 527-FR86A-1;527-FR86A-S1;15.5;18.0;
BEGIN_WELD
W-1;H;0.707,-0.036,-0.706;525;
BEGIN_SEGMENT
1836,-25,314; 0,0,0 1836,500,314;
END_SEGMENT
END_WELD
END_JOINT
END_INFO
Fig. 10. An example of welding information generated from the CAD interface.
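The file of Fig. 10 can be read with a small line-based parser; the block keywords follow the figure, while the interpretation of the individual fields (joint id, torch direction vector, segment end points) is inferred from the surrounding text rather than from a published format specification.

```python
def parse_welding_info(text):
    """Minimal parser for the BEGIN_INFO ... END_INFO welding
    information layout shown in Fig. 10 (field meanings inferred)."""
    info = {"block": None, "joints": []}
    joint = weld = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line in ("BEGIN_INFO", "END_INFO",
                                "BEGIN_SEGMENT", "END_SEGMENT"):
            continue
        if line == "BEGIN_JOINT":
            joint = {"welds": []}
            info["joints"].append(joint)
        elif line == "END_JOINT":
            joint = None
        elif line == "BEGIN_WELD":
            weld = {"segments": []}
            joint["welds"].append(weld)
        elif line == "END_WELD":
            weld = None
        elif weld is not None:
            fields = [f.strip() for f in line.split(";") if f.strip()]
            if "id" not in weld:
                # e.g. W-1;H;0.707,-0.036,-0.706;525;
                weld["id"], weld["type"] = fields[0], fields[1]
                weld["direction"] = tuple(
                    float(x) for x in fields[2].split(","))
            else:
                # segment line: start point ... end point
                pts = [tuple(float(x) for x in p.split(","))
                       for p in line.replace(";", " ").split()]
                weld["segments"].append((pts[0], pts[-1]))
        elif joint is not None:
            if "id" not in joint:
                # e.g. J-1; 527-FR86A-1;527-FR86A-S1;15.5;18.0;
                fields = [f.strip() for f in line.split(";") if f.strip()]
                joint["id"] = fields[0]
                joint["members"] = fields[1:3]
        else:
            info["block"] = line.strip("';")   # e.g. 'S6G9';

sample = """BEGIN_INFO
'S6G9';
BEGIN_JOINT
J-1; 527-FR86A-1;527-FR86A-S1;15.5;18.0;
BEGIN_WELD
W-1;H;0.707,-0.036,-0.706;525;
BEGIN_SEGMENT
1836,-25,314; 0,0,0 1836,500,314;
END_SEGMENT
END_WELD
END_JOINT
END_INFO"""
```

Note that `parse_welding_info` mutates `info` while walking the lines; a `return info` at the end of the function makes it usable as `info = parse_welding_info(sample)`.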
Using the CAD interface, the operator can select the robot-weldable workpieces and extract
the welding information of the selected workpieces on the operator's PC. For grand- and
middle-assembly, because the robot-weldable workpiece types are pre-defined by the
operator and the designer, only the workpiece size is required to make job-programs. For
the sub-assembly, however, the robot-weldable workpieces are not pre-defined, so the
operator has to select the robot's workpiece and input not only the size but also the shape of
the workpiece. The CAD interface therefore plays a more important role for the sub-assembly
line. For example, to plan a robot motion trajectory for a line-type seam in a sub-assembly,
the start point and destination point are required; in the case of an arc-shaped seam, the
start point, mid point, and destination point are required. Accordingly, the required points
are extracted from the vertex data of the TRIBON CAD with respect to the {O} coordinate
frame in Fig. 5. Because a seam is defined as an adjacent line between two elements, the
designated vertices can be separated from the rest of the vertices.
The orientation of the torch, defined by roll-pitch-yaw values, is extracted as shown in Fig.
11, which depicts an example of a specific workpiece shape and of the tool orientation
values for the welding start and end points. For other shapes of welding seam, such as
Welding Robot Applications in Shipbuilding Industry: Off-Line Programming,
Virtual Reality Simulation, and Open Architecture 547
straight, curved, and vertical lines, a normal robot job is generated. For the critical
examples in Fig. 11, the geometric constraints of a workpiece are: (1) all the elements are
made of plate with a thickness of less than 20 mm; (2) the possible welding length of a robot
is longer than 200 mm. After acquiring the welding start and end points of workpieces from
the output of the CAD interface, where the points are listed line by line in the sequence
composing the complete object, 3 adjacent points of each edge in a workpiece are gathered
as p_1(x_1, y_1, z_1), p_2(x_2, y_2, z_2), and p_3(x_3, y_3, z_3). Then, the center positions
p_c and p_o are obtained as (1) and (2).
Fig. 11. An example of torch pose calculation (cut view of the workpiece). (a) Case 1: convex
and available part; (b) Case 2: concave and available part; (c) Case 3: concave and
unavailable part.
p_c = (p_2 + p_3) / 2    (1)

p_o = (p_1 + p_3) / 2    (2)

The values p_c and p_o are depicted in Fig. 11.
Let the distance between two points p_A(x_A, y_A, z_A) and p_B(x_B, y_B, z_B) be

l(p_A, p_B) = sqrt( (x_A − x_B)^2 + (y_A − y_B)^2 + (z_A − z_B)^2 )    (3)

The unit vectors are then obtained as

û_o = (p_o − p_c) / l(p_o, p_c)    (4)

û_1 = (p_2 − p_c) / l(p_2, p_c)    (5)

û_2 = (p_3 − p_c) / l(p_3, p_c)    (6)

u_T1B = û_1 + û_o,   u_T2B = û_2 + û_o    (7)
where u_T1B and u_T2B are the tool direction vectors projected on the plate. Considering the
shape of the seam line and the constraints, u_T1B and u_T2B are translated into real tool
direction vectors according to the following 4 cases: Case 1 is the convex part of a
workpiece; Case 2 is the concave part of a workpiece; Case 3 is a shape impossible for robot
welding; and Case 4 is a normal welding seam line.
Case 1: { l(p_1, p_2) ≤ 20 and l(p_2, p_3) ≥ 200 } or { l(p_1, p_2) ≥ 200 and l(p_2, p_3) ≤ 20 }

u_T1 = R_XYZ(π/4, 0, π) u_T1B,   u_T2 = R_XYZ(π/4, 0, π) u_T2B    (8)

Case 2: l(p_1, p_2) ≥ 200 and l(p_2, p_3) ≥ 200

u_T1 = R_XYZ(π/4, 0, sign(û_1 × û_o) π/2) u_T1B,   u_T2 = R_XYZ(π/4, 0, sign(û_2 × û_o) π/2) u_T2B    (9)

Case 3: l(p_1, p_2) < 20 and l(p_2, p_3) < 20
Indeterminate seam.

Case 4: 20 < l(p_1, p_2) < 200 and 20 < l(p_2, p_3) < 200
Normal seam.
where R_XYZ(γ, β, α) =

[ cα cβ    cα sβ sγ − sα cγ    cα sβ cγ + sα sγ ]
[ sα cβ    sα sβ sγ + cα cγ    sα sβ cγ − cα sγ ]
[ −sβ      cβ sγ               cβ cγ            ]

with c and s denoting cosine and sine.
Finally, the tool orientation value of each seam, R_T1(γ, β, α), according to the world frame
{W} is defined as

R_T1(γ, β, α) = ^W T_O u_T1,   R_T2(γ, β, α) = ^W T_O u_T2    (10)
The value of ^W T_O is shown in Fig. 5. In this way, the teaching points of newly introduced
seams can be calculated using the information of the CAD interface. Also, because rules that
have been defined once can be adapted to other similar cases, the work of adding new rules
decreases over time.
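As an illustration, the seam-point geometry and case classification above can be sketched in Python. The function names are ours; only the 20 mm / 200 mm thresholds and Eqs. (1)-(9) come from the text, and mixed length combinations outside Cases 1-3 are simply treated as normal seams in this sketch.

```python
import math

def dist(a, b):
    # Eq. (3): Euclidean distance between two points
    return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))

def unit(a, b):
    # Unit vector pointing from b towards a, as in Eqs. (4)-(6)
    l = dist(a, b)
    return tuple((ax - bx) / l for ax, bx in zip(a, b))

def r_xyz(gamma, beta, alpha):
    # Fixed-angle rotation matrix R_XYZ(gamma, beta, alpha) of the text
    cg, sg = math.cos(gamma), math.sin(gamma)
    cb, sb = math.cos(beta), math.sin(beta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg],
            [sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg],
            [-sb,     cb * sg,                cb * cg]]

def classify_seam(p1, p2, p3, thin=20.0, weldable=200.0):
    """Classify three adjacent edge points into the four cases."""
    l12, l23 = dist(p1, p2), dist(p2, p3)
    if (l12 <= thin and l23 >= weldable) or (l12 >= weldable and l23 <= thin):
        return "case1"   # convex, weldable part -> Eq. (8)
    if l12 >= weldable and l23 >= weldable:
        return "case2"   # concave, weldable part -> Eq. (9)
    if l12 < thin and l23 < thin:
        return "case3"   # indeterminate seam
    return "case4"       # treated as a normal seam in this sketch
```

A convex corner with one thin flange, for instance, falls into Case 1, so the projected tool vectors would then be rotated by R_XYZ(π/4, 0, π) as in Eq. (8).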
obtained from pre-simulation results computed for all of the shapes of the workpieces. In
the robot program selection routine, the simple geometries of a workpiece are matched to
the respective robot programs. In the compilation routine, the matched robot programs are
combined into a standard program. The program-writing routine rewrites the robot
program to fit the format of the robot language. Moreover, it sorts the teaching points by
number and matches each teaching point in the robot program to the respective values in
the rule- and rpy-files.
coordinate frame {O} of each workpiece depicted in Fig. 5. Therefore, when the robot
program is applied to a workplace where many workpieces are scattered, as in Fig. 14,
the reference coordinates of each robot program are translated into the world coordinate
frame {W}. The translation matrix from {O} to {W} is provided by the results of the block
simulation and a vision sensor.
where x_vision and y_vision are the values of each mark captured with respect to the vision
sensor base, and 300 is the offset of the vision sensor from the base frame {B}. Next, the
marker position in reference to the world frame {W} is obtained as
M_{W} = [ d_x, d_y ]^T = ^W_B T [ x_{B}, y_{B} ]^T    (12)

where M_{W} is the marker position. The rotation values relative to the {W} frame are
calculated using 3 different M_{W} values on the plate. The robot's job program generated
for the {O} frame can be transformed into the {W} frame by using the transformation
^W T_O as follows:
^W T_O =

[ cosθ   −sinθ   0   d_x ]
[ sinθ    cosθ   0   d_y ]
[  0        0    1   d_z ]
[  0        0    0    1  ]    (13)
where d_x, d_y, and d_z are the position components of the origin of the {O} frame with
respect to the {W} frame. In most cases d_z is zero, which means that the workpiece is
placed directly on the workplace. If d_z is not zero, the operator has to measure the height
of the workpiece's base plate and input it to the off-line programs. The origin of the {O}
frame and the positions of the markers are predefined by the designer.
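The calibration transform of (13) can be applied to each teaching point as sketched below; the numerical pose in the example is made up for illustration.

```python
import math

def w_t_o(theta, dx, dy, dz=0.0):
    """Homogeneous transform from workpiece frame {O} to world
    frame {W}, Eq. (13): a rotation theta about z plus a translation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c,  -s,  0.0, dx],
            [s,   c,  0.0, dy],
            [0.0, 0.0, 1.0, dz],
            [0.0, 0.0, 0.0, 1.0]]

def transform(T, p):
    """Map a teaching point p = (x, y, z) from {O} into {W}."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Example: workpiece rotated 90 degrees and shifted by (1000, 500)
# on the workplace (dz = 0: the plate rests on the workplace).
T = w_t_o(math.pi / 2, 1000.0, 500.0)
p_world = transform(T, (100.0, 0.0, 0.0))
```

Here p_world comes out at approximately (1000, 600, 0): the point 100 mm along the workpiece x axis ends up 100 mm along the world y axis after the 90-degree rotation.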
In the case of a multi-robot system, a calibrated job program is allocated to each robot by a
job program distribution algorithm, in which the determining factor is the sum of the
lengths of all of the seams in each robot's working area.
Fig. 15. Structure of the chromosome used for robot’s path planning.
Frequently, a general crossover method may result in a collision path in multi-robot
cooperative work. A general crossover exchanges a gene of parent 1 with a gene of parent 2;
in this case, the robot assigned to the exchanged gene can differ, so the child generated by
the parents may take a colliding path. Therefore, we use PMX (Partially Matched Crossover)
(Goldberg & Lingle, 1985), which guarantees the validity of the solution of the traveling
salesman problem (TSP) presented as a path generation. The children of the PMX inherit
some pieces of the genetic information from the first parent by 2-point crossover, and the
other pieces are inherited according to the gene sequence of the second parent. The random
number of each gene determines the possibility of crossover occurrence. For example,
assume that the parents are defined as P1 and P2, the children are defined as C1 and C2, and
the chromosomes P1 and P2 are initialized as P1 = ( 1 2 3 | 4 5 6 7 | 8 9 ) and
P2 = ( 2 4 5 | 3 7 6 9 | 1 8 ), where '|' is a cut point, physically a district separation for each
robot. The numbers between cut points are allocated path numbers. In the case of P1, the
1st, 2nd, and 3rd paths are allocated to the first robot, the 4th, 5th, 6th, and 7th paths are
assigned to the second robot, and the 8th and 9th paths are assigned to the third robot. The
'×' is an undefined gene. If the children inherit the second robot's genes,
C1 = ( × × × | 3 7 6 9 | × × ) and C2 = ( × × × | 4 5 6 7 | × × ), then the related mapping is
3 ↔ 4, 7 ↔ 5, 6 ↔ 6, 9 ↔ 7. By the transitivity rule, the mapping is changed into 3 ↔ 4,
9 ↔ 5, 6 ↔ 6. If the genes independent of the mapping are inherited, the children become
C1 = ( 1 2 × | 3 7 6 9 | 8 × ) and C2 = ( 2 × × | 4 5 6 7 | 1 8 ). The final children, obtained by
applying the mapping, are C1 = ( 1 2 4 | 3 7 6 9 | 8 5 ) and C2 = ( 2 3 9 | 4 5 6 7 | 1 8 ).
Concurrently, the assigned genes for each robot are changed with respect to the cut points.
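The worked example above can be reproduced with a generic PMX sketch (ours, not the authors' implementation), with the cut points at positions 3 and 7:

```python
def pmx(p1, p2, cut1, cut2):
    """Partially Matched Crossover: the child takes p2's genes between
    the cut points and fills the remaining positions from p1, following
    the mapping between the exchanged segments to avoid duplicates."""
    n = len(p1)
    child = [None] * n
    child[cut1:cut2] = p2[cut1:cut2]          # inherited segment
    segment = set(child[cut1:cut2])
    for k in list(range(cut1)) + list(range(cut2, n)):
        g = p1[k]
        while g in segment:                   # resolve via the mapping
            g = p1[p2.index(g)]
        child[k] = g
    return child

P1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
P2 = [2, 4, 5, 3, 7, 6, 9, 1, 8]
C1 = pmx(P1, P2, 3, 7)   # [1, 2, 4, 3, 7, 6, 9, 8, 5]
C2 = pmx(P2, P1, 3, 7)   # [2, 3, 9, 4, 5, 6, 7, 1, 8]
```

Both children remain valid permutations of the path numbers, which is exactly the TSP-validity property the text relies on.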
Although PMX is suitable for preventing colliding paths, the crossover is restricted in the
sense that it occurs over a limited range. Hence the range of solutions is also restricted, and
the set of solutions does not differ much from the initial population. To search a wider
variety of solutions, an adaptive mutation method that enables the exchange of the work
allocated by PMX is used. The adaptive mutation method works better than a constant
mutation rate and also prevents collisions among the robots. The mutation rate is an
important factor determining the convergence, and the rate of convergence, of the solution.
For a real robot system, because the robot's access to the neighboring seams near the
separation layer is easier than its access to distant seams, the mutation rate of seams near
the separation layer is high and the mutation rate of seams distant from the separation layer
is low. The separation layer is predefined by the block arrangement. Here, we adopt the
Gauss function in (14) as an adaptive mutation rate. Fig. 16 shows the Gauss function.
y_i = k exp{ −(x − x_i)^2 / (2σ^2) }    (14)
where i = 0,…, n , n is the number of robots, k = 1 , σ = 0.25 , xi is the position of the
gantry, and yi is the mutation rate. The dominant mutation rate in (14) is determined by
k , which is the maximum value, and x , which is obtained when y is more than 68% of
the whole area.
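A minimal sketch of the adaptive mutation rate of (14), with the stated k = 1 and σ = 0.25 as defaults; the sample positions are made up for illustration.

```python
import math

def mutation_rate(x, x_i, k=1.0, sigma=0.25):
    """Adaptive mutation rate of Eq. (14): highest (k) for seams at the
    separation-layer position x_i, decaying with distance from it."""
    return k * math.exp(-((x - x_i) ** 2) / (2.0 * sigma ** 2))

# Seams at the separation layer mutate at the maximum rate k; the
# rate decays smoothly for seams farther away.
rates = [mutation_rate(x, 0.5) for x in (0.5, 0.75, 1.0)]
```

With these defaults, a seam exactly at the layer mutates with probability 1.0, while the rate falls off as a Gaussian of the distance.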
We use the rank selection method as a selection routine. First, the chromosomes are sorted
by the fitness of each chromosome. Second, the value is calculated as follows:
Fig. 17. Two examples of an optimal path generated for a single robot system: The dashed
line indicates the optimal path represented by the robot base position.
[Two plots of fitness [mm] versus generation.]
(a) For the case of 3 robots. (b) For the case of a single robot.
Fig. 18. Fitness of the GA algorithm for an example of Fig. 17. The best fitness value of (a) is
7,250 mm and the best fitness value of (b) is 17,000 mm. As we expected, the 3 robot system
is more efficient than the single-robot system. In (a), we can see that due to the best
chromosome generated during initial population computation, the best fitness value is
constant over the entire generation.
Fig. 18 depicts the best fitness value and the average fitness value of each generation for 3
robots and for a single robot. For the multi-robot system, a simple operator-guided
allotment method was used to allocate workpieces to each robot. The two main determining
factors of the allotment are welding length and collision avoidance. To prevent collisions,
each robot's working volume is restricted by dividing the workplace into as many zones as
there are robots, so that the workpieces lying in each zone are initially allotted to the
corresponding robot. In the GA, the initially allotted workpieces are partly modified to
balance the welding length of each robot system. As expected, the 3-robot system is more
efficient than the single robot in terms of computation time and the best fitness value.
6. Implementation
The tools utilized in this work are as follows. Welding robots with a semi-automatic
transfer system are applied to the grand-assembly, and a fully automatic robot system is
applied to the sub-assembly. To implement the off-line programming for both the grand-
and sub-assembly, the following were used: a PC with a Pentium IV 2.4 GHz processor
running Windows 2000; Visual C++ for the main programming and other functions; Open
Inventor for the graphic environment; and TCP/IP for communication among the off-line
programming PC, the on-site monitoring PC, and the controller. Because Open Inventor is a
graphic library that provides collision detection algorithms, it is useful in constructing
graphic environments. To increase the efficiency of the graphic processing, the selection of a
video card is very important; the video card used in our system has a 64 MB frame buffer
memory and a 128 MB texture memory and is optimized for OpenGL. A vision system and
a mark-recognition algorithm were also developed to support the welding robot system in
shipbuilding.
Owing to OOP (Object-Oriented Programming), the user can selectively use each function
of the OLP, which automatically performs all functions in real time. Considering users'
convenience, for the grand- and mid-assembly welding robot systems, mainly the robot
simulation and automatic robot program generation functions were used, whereas for the
sub-assembly welding robot system the whole PC-based off-line programming operation is
used. Table 1 shows the effectiveness of the welding robot system with PC-based off-line
programming.
Approximate time to                                With developed OLP    On-line teaching
Generate a standard program                        under 2 sec           about 20 min
Generate all standard programs                     under 5 min           1 week
Generate job programs                              under 5 min           1 hour
Total welding time for a general type
workpiece (about 3 m length, 3 passes)             under 5 min           about 30 min

Table 1. The efficiency of PC-based OLP compared with on-line teaching.
7. Conclusion
This chapter described the welding robot system in the shipbuilding industry and the PC-
based off-line programming system. The welding robot system is very helpful not only for
increasing productivity but also for relieving workers' health problems. The developed off-
line programming system provides robot simulation, block arrangement simulation,
optimal robot traveling path generation, and automatic robot program generation. Because
the graphic environments are made in VRML, the developed off-line programming system
is highly compatible with other software, which allows the use of off-line programming
over the Internet.
Thereby, adjustments to various robot systems are easy. The developed off-line
programming system is very easy for operators to use and maximizes the operating
efficiency of the welding robot systems of the shipbuilding industry. In the future, thanks to
intelligent robotic techniques such as PC-based OLP, painstaking human labor will be
reduced and manufacturing productivity in shipyards will be increased.
The contributions of this chapter are as follows: (1) a methodology for applying a welding
robot system that works for variously shaped objects, especially for the assembly lines of
shipbuilding, is suggested; (2) the functions required to implement an OLP successfully,
and the development of a PC-based OLP that helps on-site operators handle a robot system
easily, are explained; and (3) the practical implementation of recently developed
algorithms, such as the VRML simulation environment, the geometrical computation of the
CAD interface, the computing techniques for the automatic generation of robot programs,
and the GA for robot path planning, is shown.
Applying the techniques of the welding robot system to other demanding shipbuilding
processes, such as painting and grinding, is left as future work.
8. Acknowledgements
The research described in this chapter has been supported by the Research Center for
Logistics Information Technology (LIT) designated by the Ministry of Education and
Human Resources Development, Korea.
9. References
Aiyama, Y. & Tanzawa, H. (2003). Management system for distributed and
hierarchical robot groups, Advanced Robotics, Vol. 17, No. 1, pp. 3-20, ISSN:
1568-5535
Bae, K. Y.; Lee, J. H. & Jung, C. W. (1998). A study on the development of an arc sensor and
its interface system for a welding robot, Journal of the Korean Welding Society, Vol. 16,
No. 3, pp. 129-140, ISSN: 1225-6153
Borm, J. H. & Choi, J. C. (1992). Improvement of local position accuracy of robots for off-
line programming, KSME International Journal, Vol. 6, No. 1, pp. 31-38, ISSN:
1011-8861
Buchal, R. O.; Cherchas, D. B.; Sasami, F. & Duncan, J. P. (1989). Simulated off-line
programming of welding robots, International Journal of Robotics Research, Vol. 9, No.
3, pp. 31-43, ISSN: 0278-3649
Carvalho, G. C.; Siqueira, M. L. & Absi-Alfaro, S. C. (1998). Off-line programming of flexible
welding manufacturing cells, Journal of Materials Processing Technology, Vol. 78, No.
1-3, pp. 24-28, ISSN: 0924-0136
Choi, M. H. & Lee, W. W. (2003). Quantitative Evaluation of an Intuitive Teaching
Method for Industrial Robot Using a Force / Moment Direction Sensor,
International Journal of Control, Automation, and Systems, Vol. 1, No. 3 pp.395-400,
ISSN: 1598-6446
Craig, J. J. (1986). Introduction to Robotics: Mechanics and Control, Addison-Wesley, ISBN: 0-
201-09528-9, Reading, Massachusetts
Davidor, Y. (1991). Genetic Algorithms and Robotics: A Heuristics for Optimization, World
Scientific, ISBN: 9-810-20217-2, Singapore
Ferretti, G.; Filippi, S.; Maffezzoni, C.; Magnani, G. & Rocco, P. (1999). Modular dynamic
virtual-reality modeling of robotic systems, IEEE Robotics and Automation Magazine,
Vol. 6, No. 4, pp. 13-23, ISSN: 1070-9932
Fonseca, C. M. & Flemming, P. J. (1995). Genetic algorithms for multi objective optimization,
Evolutionary Computation, Vol. 3, No. 1, pp. 1-16, ISSN: 1063-6560
Goldberg, D. & Lingle, R. (1985). Alleles, Loci, and the TSP, Proceedings of the First
International Conference on Genetic Algorithms, pp.154-159, ISBN: 0-805-80426-9,
London, Oct., 1985, Lawrence Erlbaum Associates, Hillsdale, NJ
Hoffmann, C. M. (1989). Geometric and Solid Modeling: An Introduction (The Morgan Kaufmann
Series in Computer Graphics and Geometric Modeling), Morgan Kaufmann, ISBN: 1-
558-60067-1
Kobayashi, N.; Ogawa, S. & Koibe, N. (2001). Off-line teaching system of a robot cell for
steel pipe processing, Advanced Robotics, Vol. 12, No. 3, pp. 327-332, ISSN: 1568-
5535
Kreuzer, B. & Milojevic, D. (1998). Simulation tools improve planning and reliability of paint
finishing lines, Industrial Robot, Vol. 25, No. 2, pp. 117-123, ISSN: 0143-991X
Kunii, T. L. & Shinagawa, Y. (1992). Modern Geometric Computing for Visualization, Springer-
Verlag, ISBN: 0-387-70105-2, Tokyo
Munasinghe, S. R.; Nakamura, M.; Goto, S. & Kyura, N. (2003). Trajectory Planning for
Industrial Robot Manipulators Considering Assigned Velocity and Allowance
Under Joint Acceleration Limit, International Journal of Control, Automation, and
Systems, Vol. 1, No. 1, pp.68-45, ISSN: 1598-6446
Nagao, Y.; Urabe, H.; Honda, F. & Kawabata, J. (2000). Development of a panel welding
robot system for subassembly in shipbuilding utilizing a two-dimensional CAD
system, Advanced Robotics, Vol. 14, No. 5, pp. 333-336, ISSN: 1568-5535
Nielsen, L. F.; Trostmann, S.; Trostmann, E. & Conrad, F. (1992). Robot off-line
programming and simulation as a true CIME-subsystem, Proceedings of 1992 IEEE
International Conference on Robotics and Automation, pp. 1089-1094, ISBN: 0-818-
62720-4, Nice, France, May, 1992, IEEE Computer Society Press
Okumoto, Y. (1997). Advanced welding robot system to ship hull assembly, Journal of Ship
Production, Vol. 13, No. 2, pp. 101-110, ISSN: 1542-0469
Richard, G. (1997). Object-oriented programming for robotic manipulator simulation, IEEE
Robotics and Automation Magazine, Vol. 4, No. 3, pp. 21-29, ISSN: 1070-9932
Sheu, P. C-Y. & Xue, Q. (1993). Intelligent Robotic Planning Systems, World Scientific, ISBN: 9-
810-207558-1, Singapore
Wernecke, J. (1994). The Inventor Mentor: Programming Object-Oriented 3D Graphics with Open
Inventor, Addison-Wesley, ISBN: 0-201-62495-8, Canada
28
Intelligent Robotic Handling of Fabrics towards Sewing
1. Introduction
Handling of flexible materials is one of the most challenging problems that have arisen in
the field of robot manipulators. Besides the difficulties (e.g. geometrical uncertainty,
obstacle avoidance, etc.) that emerge when handling rigid materials using robots, flexible
materials pose additional problems due to their unpredictable, non-linear and complex
mechanical behaviour in conjunction with their high flexibility and high bending
deformations. The fact that sewing fabrics is a “sensitive” operation, since fabrics distort
and change their shape even under small imposed forces, poses barriers to the development
of automated sewing systems. On the other hand, the need for great flexibility in garment
assembly systems is truly imperative, since cloth manufacturing should cope with fast
fashion changes and new fabric materials while responding to consumer demands.
Our research efforts are focused on the development of intelligent control systems with
multi-sensor feedback, enabling robots to perform skillful tasks in realistic environments
towards higher flexibility and automation. In this work, the robot control approaches based
on artificial intelligence for handling fabrics towards feeding in the sewing machine are
described in detail.
In the sewing process, two robotic handling tasks (Gilbert et al., 1995; Koustoumpardis &
Aspragathos, 1999) require sophisticated control. The first deals with the manipulation
(translation and orientation) of the cloth and the second with the control of the cloth
tension.
A method for the manipulation of various fabric shapes was introduced (Torgerson & Paul,
1988) using visual information. The determination of robot motion paths is based on visual
feedback defining the location of the fabric edges in world coordinates. The developed
algorithm was used for handling both polygonal and non-polygonal shapes, whereas the
accuracy of the approximation to the desired seam line depends on the camera resolution.
However, this algorithm is based on geometrical computations, which have to be done
manually for each fabric shape. In addition, the algorithm was constructed ignoring the
buckling or wrinkling that appears during fabric handling.
The first advanced robot sewing system, reported by Gershon & Porat (1986; 1988), is the
FIGARO system, an integrated robotic system for sewing. In the
FIGARO system, the following components are integrated: a robot manipulator endowed
with a two-fingered end-effector, a sewing machine, two miniature CCD cameras mounted
on the sewing machine and a force sensor mounted on one of the two fingers. For the fabric
tension control an estimated cloth velocity was used based on the sewing machine shaft
encoder and a correction to this estimation was computed by a proportional-integral
controller in order to derive the robot velocity. The gains of the controller were adjusted by
trial and error, which should be modified for a new type of fabric (Gershon, 1993). With the
fabric tension control the buckling of the fabric was prevented and good seam quality was
obtained.
A parallel process decomposition of the robot-sewing task was proposed (Gershon, 1990) to
address the problem of robot sewing. A robot arm manipulates the fabric towards the
desired orientation and controls the fabric tension during the sewing process. The sewing
task was decomposed into four concurrent processes within a superposition parallel
architecture. The controlled system is modelled as: a mass-spring-damper model for
representing the robot’s equivalent dynamic behaviour, a nonlinear damped spring for the
fabric, whereas the table friction acting on the fabric is included in the model. The
performance of sewing is decomposed into straight line seams and seams that follow the
edge contour. The complete system demonstrated robustness in the experiments.
After FIGARO system, an automated sewing system coordinating two robots handling the
fabric on the table has been developed (Kudo et al., 2000). On top of robot hands
coordination, fabric tension control and synchronization with the sewing machine speed
were considered. Visual information was used to control seam path and its deviation from
the desired trajectory through a CCD camera mounted on the sewing machine. Sewing
experiments were carried out for a straight-line trajectory using pressing force control, fabric
tension control and fabric position manipulation control. The experiments have been
extended to seams that follow curved-line trajectory using coordinated position/force
control. Extended experimentation under a wide variety of sewing speeds, panel contours,
number of plies and fabric type proved the effectiveness and the robustness of the
developed system.
As far as artificial vision for the control of the manipulator is concerned, the visual servoing
systems are based on two main approaches: position-based and image-based visual servo
control (Hutchinson et al., 1996). In the position-based control systems, the error is
Intelligent Robotic Handling of Fabrics towards Sewing 561
computed in the 3D Cartesian space. The position error is computed using or not the model
of the target depending on the visual features available in the image. The main advantage of
this approach is that the camera trajectory is directly controlled in Cartesian space.
However, this approach has the limitation of being sensitive to calibration errors that arise
either from a coarse calibration of the camera or from errors appeared in the 3D model of the
target.
In the image-based control systems, the error is directly computed in the 2D image space.
The main advantage of this approach is that there is no need for the 3D model of the target
and it is quite robust not only with respect to camera but also to robot calibration errors.
However, the Jacobian matrix is a function of the distance between the camera and the
target, which is not easily calculated and this is a serious limitation of this model-free
control approach.
A combination of the two previous approaches is used in 2 1/2 D visual servo systems,
where the error to be minimized is specified both in the image and in the pose space. This
method avoids their respective disadvantages: contrary to position-based visual servoing, it
does not need any geometric 3D model of the object, and in comparison with image-based
visual servoing, it ensures the convergence of the control law in the whole task space.
For most handling tasks, and essentially in sewing, the cloth must be held taut and
unwrinkled. The seam quality is extremely sensitive to cloth tension variations appearing in
the sewing process. These undesirable tension variations affect the product quality. The
difficulties are more evident when the seam is performed along the cloth bias, owing to the
increased fabric extensibility. Gershon (1990) justified the need for force feedback control in
order to fulfil the above fabric tension requirements. Furthermore, he underlined that
conventional control methods are inadequate for the fabric tension problem due to the
fabric's nonlinear behaviour, the noisy cloth tension signal, etc. Therefore, it is vital to
develop more sophisticated and intelligent control methods.
Stylios (1996) reported that intelligent behaviour can be expressed in terms of sensing,
processing, actuating, learning and adapting without any previous knowledge of the
properties of the object that the human is handling. To apply this approach to robot
handling tasks, control systems based on Neural Networks should be designed. The ability
of Neural Networks to work without any previous knowledge of the controlled system's
behaviour, and their ability to learn from examples and to adapt as they modify themselves
during the training phase, incorporate human-like behaviour into the handling of non-rigid
materials. The benefits of Neural Networks and Fuzzy Logic have been exploited in the
apparel industry (Stylios & Sotomi, 1996), especially in gripper design for fabric handling
(Tsourveloudis et al., 2000).
The objective of the present work is the development of a robotic system for handling non-
rigid materials lying on a working table. The work is focused on handling flexible materials
lying at a random location and orientation on a table and guiding them towards the sewing
needle and along the “seam line”, as well as on the control of the appropriate tensional force
in order to maintain a constantly high seam quality. The handling strategies are developed
for fabrics of various shapes (convex, non-convex, with straight and curved edges). A robot
fuzzy controller is presented for guiding the fabric towards the sewing needle, where the
fuzzy rules are derived by studying human sewing. Our goal is to design a robust
controller, which autonomously determines the motion of the robot, avoids special
geometrical computations, and is independent of the fabric's shape. The appropriate
velocity of the robot end-effector in the sewing process is regulated by a Neuro-Controller.
Likewise, the appropriate tensional force applied to the fabric is determined by the
developed Fuzzy Logic decision mechanism.
Fig. 1. The concept and the experimental layout of the robotized sewing.
The gripper contacts the piece of fabric through rubberized ends that slightly press the
fabric on the sewing table, as shown in Fig. 1. The force sensor mounted on the robot end-
effector measures a tensional force when the distance between the two acting points of the
velocities is greater than the free length of the fabric, and a compressive force when this
distance is less than the free length of the fabric. The first case occurs when the sewing
machine velocity is greater than the robot end-effector velocity, so the fabric is extended; in
the second case, the robot velocity is greater than the machine velocity and the fabric is
compressed. For obtaining pucker-free seams, a constant tensional force should be assured.
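The sign convention described above can be summarized in a small sketch; the function name and the tolerance are illustrative, not part of the original system.

```python
def fabric_force_state(distance, free_length, tolerance=1e-6):
    """Classify the force-sensor reading implied by the geometry:
    tension when the distance between the two acting points exceeds
    the fabric's free length (sewing machine faster than the robot),
    compression when it is smaller (robot faster than the machine)."""
    if distance > free_length + tolerance:
        return "tension"       # fabric extended
    if distance < free_length - tolerance:
        return "compression"   # fabric buckles
    return "neutral"
```

Keeping the system in the "tension" state at a constant force level is precisely the control objective stated in the text.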
The fabric characteristics and properties have to be taken into account in an automated
robot sewing system. The appropriate tensional force depends on the fabric properties
(Koustoumpardis & Aspragathos, 2003), while its variations during the sewing process
affect the seam quality. Thereby, fabrics have to be classified into categories (knitted, woven,
etc.) depending on their physical properties. Another important factor that should be
considered is the shape of the fabric, which can be convex or non-convex, with straight
and/or curved edges.
It can be concluded that an automated sewing system demands a classification of the
fabrics into various categories, as well as a preliminary scheme of the optimum path the
robot should follow in order to produce the desired stitches. In the proposed sewing
system, these demands have been taken into account in the design of a hierarchical control
scheme. The target of this robot controller is to guide the robot to apply a constant tensional
force while the robot end-effector manipulates the fabric in the sewing process.
4. Pre-sewing tasks
The fabric handling tasks for cloth sewing are ply separation, placement on the working
table, manipulation towards the sewing needle and fabric’s tension control during the
sewing process.
For the tasks of ply separation and placement on the working table, work has been done on
the design of a robotic air-jet gripper for destacking fabrics (Zoumponos & Aspragathos,
2004) and on the placement of fabrics on a working table (Zoumponos & Aspragathos, 2005),
where fuzzy motion planning was developed to control the robot. After the fabric has been
placed at a random location on the working table, a number of sub-tasks should be
performed before the sewing process starts. These preliminary sub-tasks are:
1. The recognition of the fabric’s shape. The camera captures the image of the fabric
piece lying on the working table free of wrinkles, before the robot end-effector
touches the fabric. The shape (convex or non-convex, with or without curvatures)
and the location of the fabric are identified and used as the reference shape
while the piece is handled by the robot.
2. The edges that will be sewed. There are two main kinds of stitches: those
performed to join two parts of cloth together, and those performed for
decorative and aesthetic purposes (e.g. on the pockets of trousers
and shirts). However, there are parts of cloths where both types of stitches should
be made. For example, if the fabric piece is a pocket, all edges except for one
should be sewed, and decorative stitches should be added as well. The
information on the stitching lines of the fabric and the type of the stitches is taken
from a CAD system where the cloth has been engineered.
3. Planning of the sewing process. The best sequence of the seam segments is
determined before the sewing process in order to reduce the cycle time of the
sewing. The optimum sequence can be found using Genetic Algorithms (Zacharia
& Aspragathos, 2005), and this is the next step after the stitching lines have been
determined. However, one should keep in mind that some stitches take
precedence over others. In the previous example, the stitches that serve
aesthetic purposes should be made before the pocket is sewed onto the
trouser part.
4. The extraction of the “seam line”. The sewing process is performed on a
“seam line” situated inside the fabric edges. Since the fabric edges are extracted
from the image taken by the camera, the “seam line” is situated some millimetres
inside the fabric’s outer line. For the straight lines, the “seam line” is found by
shifting the outer lines towards the interior of the fabric; the intersections of these
lines constitute the vertices of the “seam line”. Consequently, the “seam line” is parallel
to the outer edge. For the curved lines, the process of determining the “seam line”
is similar, since the curve has been approximated by a polygon. The distance
between the outer edge line and the “seam line” depends on the cloth part that is
going to be sewed and is defined by the clothing industry manufacturer. The
system is capable of automatically extracting the “seam line” after the outer edges
have been defined. It is clear that this distance is greater for a part of a trousers’
leg than for a part of a shirt’s sleeve.
5. The initial position of the end-effector. The fabric lies on the table at a random
location with a random orientation, and the end-effector has to move towards the
fabric and touch it so that it can lead it towards the sewing machine.
The problem of determining the critical buckling length Lc between the front edge
and the end-effector location (see Fig. 2(a)) in order to avoid the buckling of the
fabric has been investigated by Gershon & Grosberg (1992). In Fig. 2(b), the fabric
buckling is illustrated when the initial length is larger than the critical one.
Therefore, the robot gripper should touch the fabric at a distance from the front
edge equal to or shorter than the critical length in order to prevent buckling.
It has been concluded (Koustoumpardis & Aspragathos, 2003)
that the appropriate position of the end-effector on the fabric depends mainly on
the fabric type and its flexibility. A fuzzy decision mechanism is used, as shown in
Fig. 3.
The term “flexibility” represents an index estimated by an expert,
who specifies a flexibility degree between 0 and 100%.
Fig. 2. The buckling of the fabric when the initial length L is larger than the critical length Lc.
[Fig. 3: fuzzy definition of the critical length Lc — inputs: fabric type and fabric flexibility; output: the robot end-effector’s initial position.]
fabric, the position of the end-effector should be such that the area of the fabric covered by it is
as small as possible, to facilitate the identification of the piece by the vision system.
After the execution of the above five preliminary sub-tasks, the sewing process can start.
Throughout the sewing, two main tasks are addressed and respective robot control
algorithms are developed: the manipulation (translation and orientation) of the fabric, and
the regulation of the fabric’s tensional force.
5. Manipulation of fabric
The proposed system is a combination of an image-based and a position-based control system.
The image-based analysis is used for the identification of the fabric’s shape. After the image
acquisition of the fabric, the features (vertices for the straight edges and dominant points for
the curved edges), the needle-point and the sewing line’s orientation are derived from the
image analysis. The position of the needle is also known in the robot coordinate system. The
position of the end-effector on the fabric is random, but is subject to the constraints
discussed in the previous section. This position is unknown in the image coordinate system;
however, the robot system gives feedback on the current position of the robot, so the robot
end-effector is referred to the robot base system.
Moreover, the relation between the robot and the image coordinate systems is known from
the calibration of the camera.
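The mapping from image to robot coordinates obtained from camera calibration can be sketched as a least-squares fit; the affine model, the point pairs and the 0.125 mm/pixel scale below are illustrative assumptions, not the chapter’s actual calibration procedure.

```python
import numpy as np

def fit_image_to_robot(img_pts, robot_pts):
    """Least-squares 2-D affine map from image pixels to robot base coordinates.

    img_pts, robot_pts: corresponding (N, 2) point lists (N >= 3),
    e.g. gathered during camera calibration.
    """
    img = np.asarray(img_pts, dtype=float)
    rob = np.asarray(robot_pts, dtype=float)
    # Homogeneous design matrix: each row [u, v, 1] maps to (x, y)
    A = np.hstack([img, np.ones((len(img), 1))])
    M, *_ = np.linalg.lstsq(A, rob, rcond=None)   # (3, 2) affine parameters
    return M

def image_to_robot(M, uv):
    u, v = uv
    return np.array([u, v, 1.0]) @ M

# Example: a pure scale of 0.125 mm/pixel plus an offset (hypothetical values)
img_pts = [(0, 0), (100, 0), (0, 100), (100, 100)]
robot_pts = [(10.0, 20.0), (22.5, 20.0), (10.0, 32.5), (22.5, 32.5)]
M = fit_image_to_robot(img_pts, robot_pts)
print(np.round(image_to_robot(M, (50, 50)), 2))   # -> [16.25 26.25]
```

With more than three point pairs the least-squares fit also averages out pixel noise in the calibration targets.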
For the manipulation of the fabric towards the needle, the image-based approach is
used, since both the distance and the orientation of the movement are known in the image
coordinate system. For the rotation of the fabric around the needle, the rotation angle is
computed in the image coordinate system, but for the rotation of the robot end-effector
around the needle, the needle position relative to the robot base is used.
based on the identification of the image features. The linear and angular velocities of the
fabric are derived from the position and orientation errors through the designed fuzzy
decision system described in Section 5.3. The position error (er) and the orientation
error (eθ), together with their rates, are the input data, whereas the linear and angular velocities
(u) and (ω) of the end-effector, respectively, are the output data.
The new position and orientation of the end-effector are computed from the
linear and angular velocities, the time step dt and the angle φ, which is computed
from the image. The fabric is transferred to a new position as a result of the
movement of the end-effector, which sticks to the fabric so that the fabric does not
slip relative to the gripper. However, the system can overcome possible
slipping at the cost of a greater cycle time for the task. This manipulation stops
when the edge of the fabric reaches the needle with the desired orientation, within
an acceptable tolerance.
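The pose update described above can be sketched as a plain Euler integration step; the names mirror the text’s u, ω, φ and dt, but the kinematics shown are an illustrative assumption rather than the authors’ exact implementation.

```python
import math

def update_pose(x, y, theta, u, omega, phi, dt):
    """One control step: translate with linear velocity u along the
    image-derived direction phi, and rotate with angular velocity omega."""
    x_new = x + u * math.cos(phi) * dt
    y_new = y + u * math.sin(phi) * dt
    theta_new = theta + omega * dt
    return x_new, y_new, theta_new

# One step driving the end-effector toward the needle (hypothetical values)
pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, u=10.0, omega=0.5, phi=math.pi / 4, dt=0.1)
print(pose)   # -> (≈0.707, ≈0.707, 0.05)
```

In the actual loop this step would repeat until the position and orientation errors fall within the accepted tolerance.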
[Figure: the sewing machine needle, the distance r and the angles φ and θ of the fabric edge relative to the sewing line, and the end-effector position.]
the sewing line within a certain tolerance, it is ready for sewing. The sewing process is
iterated until all the edges of the “seam line” planned for sewing are sewed.
5.3 The fuzzy control system for the manipulation of the fabric
From the standpoint of robot control design, control systems are usually based on plant
models and make use of the relevant geometry for robot path analysis. However, in the
developed robotic system for handling fabrics, the construction of a model running in real
time is practically impossible. To alleviate this difficulty, and simultaneously make the system
more flexible, a fuzzy logic controller is designed for handling the fabric towards the sewing
needle. A further advantage of the fuzzy controller is that it is robust and quite fast in
deriving the control outputs.
The proposed controller (Zacharia et al., 2005) outputs the linear and angular velocities of the
robot’s end-effector. The block diagram of the control system is shown in Fig. 5, where the
symbols in parentheses stand for the fuzzy system that controls the orientation. After the
camera has captured the image, the vertices of the fabric are extracted. Then, the current
position r of the selected vertex and the orientation θ of the fabric edge are computed from
the image, whereas the position of the needle rd and the orientation θd of the sewing line in
the image coordinate system are known a priori. The errors, defined as er = rd − r and eθ = θd − θ,
where rd = 0 and θd = 0, and the error rates, defined as er′ = der/dt and eθ′ = deθ/dt, are the inputs of
the controller. The fuzzy controller outputs the linear and angular velocities and the robot
end-effector moves to its new position.
[Fig. 5 block diagram: image acquisition, feature extraction, fuzzy controller, robot end-effector.]
that controls the rotation of the fabric, similar membership functions are provided through
extended experimentation.
The rule base of each system is composed of 3×3 = 9 linguistic rules. The rule base includes
the knowledge acquired by systematic investigation of fabric handling by human
workers feeding the sewing machine. The acquired knowledge is formulated in if-then rules
relating the position and orientation errors and their rates to the end-effector velocities.
It should be mentioned that for the implication process the ‘min’ operator is used, whereas
for the aggregation process the ‘max’ operator is used, and the defuzzification is
performed using the centroid method.
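A minimal sketch of one Mamdani inference step with the operators named above (min implication, max aggregation, centroid defuzzification); the triangular membership functions and the two toy rules are hypothetical stand-ins for the chapter’s 3×3 rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b, support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_step(e, de, out_universe):
    """One Mamdani step: min implication, max aggregation, centroid."""
    # Firing strengths of two toy rules (min over antecedent memberships)
    w1 = min(tri(e, 0.0, 0.5, 1.0), tri(de, 0.0, 0.5, 1.0))  # medium & medium -> medium
    w2 = min(tri(e, 0.5, 1.0, 1.5), tri(de, 0.5, 1.0, 1.5))  # large  & large  -> large
    # Implication: clip each consequent set at its rule strength ('min')
    c1 = np.minimum(w1, tri(out_universe, 0.0, 0.5, 1.0))
    c2 = np.minimum(w2, tri(out_universe, 0.5, 1.0, 1.5))
    agg = np.maximum(c1, c2)                                 # aggregation: 'max'
    if agg.sum() == 0.0:
        return 0.0
    return float((out_universe * agg).sum() / agg.sum())     # centroid

u_axis = np.linspace(0.0, 1.5, 301)
print(round(mamdani_step(0.5, 0.5, u_axis), 3))   # -> 0.5 (only the "medium" rule fires)
```

The real controller runs two such systems in parallel, one for the linear and one for the angular velocity.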
variables such as soft, hard, extensible or very extensible, etc. are used. For the description of the
applied tension-force similar linguistic variables are used: low force, very low force, etc.
This linguistic form is followed for the determination of the desired tension-force as the
input to the F.N.N. controller. A number of expert seamstresses and tailors were
interviewed and their work was studied. The target of the queries was to acquire
knowledge about the way they handle different types of fabrics. The
handling actions that were mostly investigated relate to how much each type of fabric is
pushed or pulled.
According to the experience of the seamstresses and tailors, (i) the fabric is pulled during the
sewing procedure so as to keep it outstretched, which means that only tensional force must
be applied on the fabric, and (ii) the appropriate tension-force for different fabrics depends
on:
• The extensibility of the fabric. The experts describe the extensibility of the fabric
with linguistic variables: a cloth has medium extensibility, high
extensibility, very high extensibility, low extensibility or very low extensibility.
• The direction of the fabric along which the “seam line” is performed (the bias of the
fabric).
For “the fabric extensibility”, the tensional force applied to different fabrics is expressed by
the following general but essential rule:
“The more extensible the fabric is, the higher the tension-force applied to it.”
According to the experts, this tension-force is expressed with linguistic variables such as
medium, large, very large, small and very small tensional force.
Considering that the extensibility increases along the bias of the fabric, as confirmed by
Potluri & Porat (1996), the tension-force has to become higher in that “direction”. Therefore,
the expert applies a “higher” tension-force to the same fabric when it is sewed along its bias
than when it is sewed along the yarn direction. The maximum
tension-force is applied when the sewing direction is at 45° to the yarns.
The experts were not able to provide us with the specific values of the tension-force
that they apply to each type of fabric. Nevertheless, they produce high-quality
seamed products while using linguistic variables to express the tension-force,
and they use the same linguistic variables when training new
seamstresses and tailors.
The conclusion was that the decision concerning the determination of the appropriate
tension-force that has to be applied to a sewed fabric incorporates fuzziness. Therefore,
fuzzy set theory was deemed appropriate. According to the acquired
knowledge, the two input fuzzy variables “extensibility” and “direction” and one output
called “tension-force” are defined. The membership functions and the corresponding
linguistic values of these fuzzy variables are illustrated in Figs. 8 and 9.
Fig. 8. The membership functions of the fuzzy variables (a) “extensibility” and (b) “direction”.
As illustrated in Fig. 8(a), five linguistic values are defined for the fabric’s extensibility,
with the universe of discourse normalized to a percentage scale that indicates the
expert’s estimation of how much a fabric can be extended without puckering, compared
with other fabrics.
For the fabric’s direction (bias), three linguistic values are defined (Fig. 8(b)), with the
universe of discourse normalized to a percentage scale that indicates the expert’s
estimation of the sewed direction on the fabric. A value of 100% means that the
fabric is sewed at 45° to the yarns.
Similarly, the five linguistic values of the output variable “tension-force” are defined, as
shown in Fig. 9.
The rules derived after knowledge acquisition, are of the following form:
If extensibility is x
And direction is y
Then tension is z
where x ∈ {very low, low, medium, high, very high}, y ∈ {zero, medium, great} and z ∈ {very low,
low, medium, high, very high}. The FAM is presented in Table 2.
[Table 2. The FAM: extensibility (very low, low, medium, high, very high) × direction (zero, medium, great) → tension-force (very low, low, medium, high, very high).]
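The rule form above can be sketched as a FAM lookup. Since Table 2’s actual cells are not reproduced here, the table below is a hypothetical fill that merely follows the experts’ rule that tension grows with extensibility and with the bias direction.

```python
# Linguistic values from the text
EXT = ["very low", "low", "medium", "high", "very high"]
DIR_ = ["zero", "medium", "great"]

# Hypothetical FAM cells: the output shares the 5-value tension scale and
# increases with both inputs; the real Table 2 entries may differ.
FAM = {
    (x, y): EXT[min(len(EXT) - 1, i + j)]
    for i, x in enumerate(EXT)
    for j, y in enumerate(DIR_)
}

def tension_rule(extensibility, direction):
    """If extensibility is x and direction is y, then tension is FAM[x, y]."""
    return FAM[(extensibility, direction)]

print(tension_rule("medium", "zero"))    # -> medium
print(tension_rule("medium", "great"))   # -> very high
```

A full fuzzy implementation would additionally fuzzify the percentage-scale inputs before firing these rules.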
The force error, given by the difference between the desired and the measured feedback
force, is used in the backpropagation method for training the F.N.N.
A Feedforward Neural Network (F.N.N.) is chosen because its topology is simple and
it has been used successfully in the majority of neural network applications (Narendra,
1996; Sun, 1989). The backpropagation algorithm is used for the training of the
formulated F.N.N. (Wasserman, 1989). It is well known that the number of hidden
neurons affects the performance of an F.N.N. In the present work, after a considerable
number of experiments, it was concluded that five hidden neurons are appropriate
considering the response rate, the response smoothness and the overshoot.
Fig. 11. The Neural Network topology.
The F.N.N. consists of three layers with the configuration 1-5-1, i.e. one neuron in the input
layer, five in the hidden layer and one in the output layer, as Fig. 11 illustrates. In the hidden
layer, a threshold parameter is used for each of the five neurons in order to activate the
neuron, and the associated activation function is the sigmoid.
In the input neuron, the linear activation function is used (Wasserman, 1989), while in the
output neuron a threshold parameter is used and the associated activation function is the
sigmoid, since the velocity of the robot end-effector is assumed positive, i.e. the robot end-
effector moves only forward.
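The forward pass of the 1-5-1 network of Fig. 11 can be sketched as follows; the random weights are placeholders, since the trained values are not given in the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fnn_velocity(force, w_in, b_hid, w_out, b_out):
    """Forward pass of the 1-5-1 network: a linear input neuron, five sigmoid
    hidden neurons with thresholds (biases), and one sigmoid output neuron,
    so the commanded velocity stays positive, inside (0, 1)."""
    hidden = sigmoid(w_in * force + b_hid)          # (5,) hidden activations
    return float(sigmoid(hidden @ w_out + b_out))   # scalar velocity

# Placeholder (untrained) parameters for illustration
rng = np.random.default_rng(0)
w_in, b_hid = rng.normal(size=5), rng.normal(size=5)
w_out, b_out = rng.normal(size=5), rng.normal()

v = fnn_velocity(0.12, w_in, b_hid, w_out, b_out)
print(v)   # some value strictly inside (0, 1)
```

Backpropagation of the force error would then adjust w_in, b_hid, w_out and b_out online at every sampling step.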
The Neuro-controller scheme memorizes the trained F.N.N. controller characteristics for
each of the fabrics that have been successfully handled: the final adapted weights and
biases are stored under a label in a file, as shown in Fig. 12. The label of each set of weights
and biases reflects the inputs used to define the specific fabric type. This label corresponds to the
desired tension-force that the fuzzy decision-making part of the system concludes as
appropriate. If a new handling task with a piece of fabric is to be executed, the
“memory” is searched for a label identical or closest to the desired tension-force.
If there is such a label, the weights and biases under this label are uploaded to the F.N.N. as
the initial values. Even in this case, the system does not remain static: the F.N.N. still
adapts its weights and biases, training itself and expanding its experience (on-line, endless
training). Finally, the weights and biases are stored under the same label by overwriting the
old values, or under a new label if the label closest to the desired tension-force had been used
to initialize the F.N.N.
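The “memory” scheme described above can be sketched as a label-indexed store with nearest-label recall; the dictionary layout and the rounding of labels are illustrative assumptions.

```python
memory = {}   # label (desired tension-force) -> stored weights/biases

def store(label, weights):
    """Save the adapted weights/biases under the tension-force label."""
    memory[round(label, 3)] = weights

def recall(label):
    """Return the weights under an identical label, else under the closest
    label; None when the memory is still 'white' (empty)."""
    if not memory:
        return None
    closest = min(memory, key=lambda k: abs(k - label))
    return memory[closest]

store(0.12, {"w_in": [0.1] * 5})   # weights after a successful sewing run
init = recall(0.13)                # closest label (0.12) initializes the F.N.N.
print(init is memory[0.12])        # -> True
```

After the new run, `store` would overwrite the old entry (same label) or create a new one (closest label was used), exactly as described above.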
7. Experimental results
In the following, the simulated and experimental results of the two tasks (manipulation and
force regulation) are presented. The specifications of the hardware, as well as the limitations
of the controller performance, are described.
the manipulation is tested for fabrics where the “seam line” is composed of straight-line
segments. The handling process is repeated many times for the same fabric and for
various types of fabrics, where each time the fabric starts from a different location on the
table and the gripper is attached to the fabric at a different location. The accepted position
and orientation errors are set to 2 pixels (≈ 0.25 mm) and 0.5°, respectively. In the tested
cases, the system shows robustness and efficiency. When the accepted position and
orientation errors were set to lower values, the fabric made many oscillations around the
needle until the needle and the sewing line were reached.
Next, the system is tested for fabrics with arbitrary curved-line segments using the Genetic
Algorithm approach. The maximum number of polygon edges approximating the curved
section is set to 6 and the maximum acceptable deviation ε is arbitrarily set to 8 pixels (≈ 1
mm). The optimum polygonal section resulting from the Genetic Algorithm, which
approximates the arc section of the fabric used in the experiments, is shown in Fig. 14.
The length of the arc section is about 5 cm, and it is approximated by a polygonal section
consisting of three sides; the maximum deviations of the three straight-line segments from the
corresponding arc sections are 6.8722, 5.5480 and 7.0702 pixels, respectively.
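The deviation measure used to evaluate a polygon side can be sketched as the maximum perpendicular distance of the sampled curve points from the chord; the quarter-circle example below is illustrative, not the experimental fabric contour.

```python
import math

def max_deviation(points, a, b):
    """Maximum perpendicular distance (in pixels) of the sampled curve points
    from the chord a-b, i.e. the deviation of one polygon side."""
    (ax, ay), (bx, by) = a, b
    length = math.hypot(bx - ax, by - ay)
    return max(
        abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / length
        for px, py in points
    )

# Quarter-circle arc of radius 50 px approximated by a single chord
arc = [(50 * math.cos(t), 50 * math.sin(t))
       for t in (i * math.pi / 200 for i in range(101))]
dev = max_deviation(arc, arc[0], arc[-1])
print(round(dev, 2))   # -> 14.64 (one side is far too coarse for this arc)
```

A Genetic Algorithm would search over vertex placements so that every side’s deviation stays below the threshold ε while keeping the number of sides small.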
[Figure: the deviation (in pixels) of the polygonal approximation plotted along the curve perimeter.]
Fig. 16. Neuro-Controller’s response: (a) machine and robot velocities, (b) desired and real
force.
From the results shown in Fig. 16(b), it can be concluded that as the robot velocity
approaches the machine velocity, the tensional force reaches the desired value (0.12).
The shapes of the “robot velocity” and “force” curves are expected and are qualitatively
verified. When the sewing machine starts, the robot end-effector does not move with the
appropriate velocity to follow the machine; therefore, the measured force is
tensional and increases rapidly, as illustrated by the first loops in Fig. 16(b). At the
same time, the controller reacts to this increase of the force by adapting its weights, and it
derives a robot velocity greater than the machine velocity in order to reduce the tensional
force to the desired level.
Since the backpropagation training method modifies the F.N.N. weights continuously, after
the last loop the error varies in the range of the third decimal place for the force and the seventh
decimal place for the end-effector velocity. These results were expected, since a very small increase
in the difference between the sewing machine velocity and the end-effector velocity
increases the tensional force considerably.
The controller is also tested for its performance in the presence of disturbances due to
noise interfering with the measured force. Noise with an amplitude of ±10% was added to the
applied force, while the topology of the F.N.N. controller was kept the same as in the
previous test in order to compare the results under the same structure of the experimental
stage. For comparison, the velocity and force results are shown in Fig. 17(a) and (b),
respectively, without noise (black line) and with noise (white line), while the desired values
are shown by grey lines.
(a) (b)
Fig. 17. Neuro-Controller’s response when noise is added: (a) velocities, (b) forces.
The controller can be characterized as robust, since it is capable of regulating the tensional
force applied to the fabric to the desired value while approaching the sewing machine
velocity. The small variations in the applied force, shown in Fig. 17(b) (black line), are the
effects of the noise and tend to increase as the added noise increases. These
variations are acceptable when the noise ranges within ±10% of the applied force, but the seam
quality deteriorates if the noise exceeds this range.
The verification of the system “memory” is performed through a simulated example. For a
specific fabric, the proposed controller first operates with a “white memory” in terms of the
F.N.N. weights and biases. The controller successfully adjusts the force as described in the first
example, with the results shown in Fig. 16(a) and (b), and the final values of the weights are
stored in the “memory” file under the label [0.12], which is the desired tensional force.
In other words, the “experience” was acquired and amassed. In the next test, the controller
operates with its embedded “experience” in order to determine the robot velocity for the
same fabric. Therefore, the initial values of the F.N.N. weights are set up using the values
stored under the label [0.12]. In Fig. 18(a) and (b), the velocity of the robot end-effector and
the force applied to the fabric are presented. By comparing the diagrams before (Fig. 16(a) and
(b)) and after (Fig. 18(a) and (b)) the acquired “experience”, it is clear that the controller’s
performance is improved noticeably when the previous “experience” is used. The overshoot in
both the velocity and the force response diagrams is reduced by about half, and the F.N.N.
training time is reduced remarkably.
(a) (b)
Fig. 18. Neuro-Controller’s response when “experience” is utilized: (a) machine and robot
velocity, (b) desired and real force.
The simulation is performed on a PC with an AMD Duron 700 MHz processor. A complete
training loop, shown in Fig. 10, is performed in 0.96 milliseconds. For the 150 training loops
in which the Neuro-Controller achieves the desired tensional force, the elapsed time is 144
milliseconds. Considering that usual sewing speeds range from 0.1 m/s to 0.3 m/s, the
seam performed during this training time (144 milliseconds) has a length between 14.4
mm and 43.2 mm. Therefore, it can be concluded that the system is capable of achieving the
desired tensional force on the fabric within the first stitches of the sewing process. Ultimately,
the speed efficiency of the proposed system is limited by the refresh rate of the force
sensor device and the I/O of the robot controller.
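The timing figures quoted above can be checked with a few lines of arithmetic:

```python
# Worked check of the quoted timing figures
loop_ms = 0.96                         # one training loop
loops = 150                            # loops until the desired force is reached
elapsed_s = loop_ms * loops / 1000.0   # total training time in seconds

seam_min = 0.1 * elapsed_s * 1000.0    # seam length in mm at 0.1 m/s
seam_max = 0.3 * elapsed_s * 1000.0    # seam length in mm at 0.3 m/s
print(round(elapsed_s, 3), round(seam_min, 1), round(seam_max, 1))
# -> 0.144 14.4 43.2
```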
8. Conclusions
In this work, a system for robotic handling of fabrics towards sewing is introduced. A visual
servoing manipulator controller based on fuzzy logic is designed in order to guide the fabric
towards the sewing machine and produce a good seam quality. The experimental results show
that the proposed approach is an effective and efficient method for guiding the fabric towards the
sewing machine, sewing it and rotating it around the needle. Unlike some of the methods
referred to in the introduction, the proposed controller does not need mathematical models or
calculations, and it proves to be rather simple and robust. It seems that, using intelligent
methods, the robotic system could be independent and autonomous in performing the
sewing process for the majority of fabric types, with little or no human assistance.
It should be stressed that sewing a piece of fabric with a robotic
manipulator instead of a human expert poses additional problems, since it is more difficult
for a robot to cope with buckling, folding or puckering. As for future research
work, the proposed algorithms can be extended to take into account the
distortions that appear during robot handling of the fabric. Finally, the integration of the
systems into a single robotic workcell is the ultimate target.
9. Acknowledgments
This research work is carried out under the project “XPOMA-Handling of non-rigid
materials with robots: application in robotic sewing” PENED 01, funded by the General
Secretariat for Research and Technology of the Greek Government. It is also integrated in the
I*PROMS Network of Excellence.
10. References
Cox, E. (1994). The Fuzzy Systems Handbook, A.P. Professional, pp. 248–249
Gershon, D. (1990). Parallel Process Decomposition of a Dynamic Manipulation Task:
Robotic Sewing, IEEE Trans. on Robotics and Automation, Vol. 6, No. 3, pp. 357-366
Gershon, D. (1993). Strategies for robotic handling of flexible sheet material, Mechatronics,
Vol. 3, No. 5, pp. 611–623
Gershon, D. & Grosberg, P. (1992). The buckling of fabrics during feeding into automatic
sewing stations, J. Text. Inst., 83(1), pp. 35–44
Gershon, D. & Porat, I. (1986). Robotic sewing using multi-sensory feedback, Proceedings of
the 16th International Symposium on Industrial Robots, pp. 823-834, Brussels
Gershon, D. & Porat, I. (1988). Vision servo control of a robotic sewing system, Proceedings of the
IEEE Conference on Robotics and Automation, pp. 1830-1835, Philadelphia, April 1988
Gilbert, J. M.; Taylor, P. M.; Monkman, G. J. & Gunner, M. B. (1995). Sensing in garment
assembly, in: Mechatronics Design in Textile Engineering, NATO ASI Conference,
Side, Turkey, pp. 291–308
Hutchinson, S.; Hager, G. D. & Corke, P. I. (1996). A tutorial on visual servo control, IEEE
Transactions on Robotics and Automation, Vol. 12, No. 5, pp. 651-670
Koustoumpardis, P. N. & Aspragathos, N. A. (1999). Control and sensor integration in
designing grippers for handling fabrics, in: 8th International Workshop on RAAD ’99,
Munich, Germany, pp. 291–296
1. Introduction
Product packaging is one of the key industrial sectors where automation is of paramount interest.
Any product going out to the consumer requires some form of packaging, be it food products, gift
packaging or medical supplies. Hence there is a continuous demand for high-speed product
packaging. This demand increases dramatically with seasonal consumer products and
fancy gifts, which invite innovative designs and attractive ideas to attract potential customers.
Typically such products are delivered in cardboard cartons with delicate and complex designs.
Packaging of such designs is not only tedious and challenging but also time-consuming and
monotonous if manual methods are employed. For simple cardboard packaging, fixed automation
has been in use through dedicated machines along a conveyor belt system; however, these machines can
handle only simple types of cartons, and any change in shape and structure cannot be easily
incorporated into the system. They require, in most cases, a change of more than 40 points (Dai,
1996a) to fit the same type of carton in different dimensions, which means one specific type of
carton requires one packaging line. Capital expenditure increases with the change-over (Dai,
1996b) from one type of carton folding assembly line to another. Thus flexibility is lost
due to these limitations and the cost associated with the change-over. This research demonstrates the
feasibility of designing a versatile packaging machine for folding cartons of complex geometry
and shapes. It begins by studying cartons of different geometries and shapes and classifying them
into suitable types and operations that a machine can understand. It then conceptualizes, through
modeling and simulation, a machine that can handle such cartons, and finally develops the
design of the packaging machine. It is shown that such a versatile machine is a possibility;
all it requires is miniaturization and investment in its development before such machines become
a reality. However, for practical implementation, consideration needs to be given to a compact,
portable system incorporating sensors. The presented design is unique and has
been shown to fold cartons of different complexity.
labor injuries mostly due to hand-twisting motions. Further, the manual line is generally
regarded as a seasonal workforce, so a dedicated machine still has to be used on a year-round
basis to save cost and time. To make the task more difficult, carton designers must pursue
novelty and originality in carton packaging to respond to a highly competitive market. The
frequent change of carton styles and types, and the small production batches, present a
challenge to the carton assembly and packaging line and demand that a flexible machine be
designed. The onus is thus placed on the packaging industries to speed up the change-over
process for different types of cartons with the aid of programmable and reconfigurable systems.
Development of such agile and highly reconfigurable systems requires systematic analysis and
synthesis of each component, i.e. the cartons and their folding patterns, the machine operating on them,
and the complete assembly operation itself. One such approach (Lu & Akella, 2000) has been
reported for folding cartons with fixtures. Whilst this approach generates all folding sequences
for a carton, the implemented work handles only a simple rectangular carton for which fixed
automation is already in place. For cartons of complex geometry, however, synthesis of both the
carton and the folding mechanism needs to be considered together to achieve flexible automation in
the assembly line. An extensive study has been undertaken by the authors on the folding patterns
of complex cartons and their sequence analysis, resorting to graph theory, screw theory and matrix
theory, and representing cartons as spatial mechanisms, thereby studying the mobility and
configuration analysis of the carton (Dai & Rees Jones, 1997a-c, 1999; Dubey et al. 1999a-c, 2001).
This chapter presents the research undertaken to design a reconfigurable carton folding machine
from design inception to the applications that can handle cartons of complex geometry and shapes.
This project had long been on the wish-list of many cosmetic and perfumery suppliers, such as
Elizabeth Arden® and Calvin Klein®, and had been under active consideration by Unilever
Research UK for several years. They were willing to support research into alternative means of
automating the entire packaging process for fancy cartons, if at all possible. As a result, the
project was jointly sponsored by the UK and the Dutch Unilever consortium to explore the
feasibility of developing a flexible packaging machine that can handle a variety of cartons of
different shapes and sizes. The research began with a study of the manual packaging process,
which revealed that a higher degree of dexterity is required as one moves from simple cartons
to complex ones (Dai, 1996a). Such cartons are formed from a pre-broken
cardboard sheet of different geometrical shapes. As the sheet is broken, it has a number of
movable panels that can be rotated about the lines of crease. These crease lines facilitate the
folding, thus resulting in a shape transformation. Figure 1 shows an example of a fancy carton
which, after folding, takes the final shape of a ‘boy-scout tent’; manual processes are generally
resorted to because of the complexity and the typically small batch runs of such cartons.
In the manual packaging process the side panels are bent using the fingers, since this requires
folding along three axes, as shown by the arrows in Figure 2, whilst the top and bottom
panels are bent using the palms, shown by the flat jaws. The arrows representing the fingers
apply a pointed force, called ‘poking’, to the panels, while the flat jaws apply ‘pushing’ forces.
At the intermediate stage of closing, fine finger motions are required to cross-lock the panels
by inserting the projecting panels into the slitted areas, the so-called ‘tucking’ operation.
and down, thus allowing any orientation and elevation of the carton to be achieved that
may be required during the packaging operation. These considerations were
specifically based on providing a high degree of reconfigurability with a minimum
number of axes to be controlled. The conceptual design is based on the agility and
versatility of the human hand, which is capable of performing a variety of dexterous
functions.
β = cos⁻¹[ (R1·Cφ − R3·Sφ) / Cα ]                                          (1)

γ = cos⁻¹[ ((R1·Sφ + R3·Cφ)·Cα·Sβ + R2·Sα) / (S²α + (Cα·Sβ)²) ]           (2)

where,
R1 = (C²δ·Vθ + Cθ)·Cα − Cδ·Sδ·Vθ·Sα,
R2 = (S²δ·Vθ + Cθ)·Sα − Cδ·Sδ·Vθ·Cα,
R3 = Sδ·Sθ·Cα + Cδ·Sθ·Sα,
Vθ = (1 − Cθ),  S = sin,  C = cos.
Complex Carton Packaging with dexterous Robot Hands 589
θ2 = sin⁻¹[ ((c1·l2 + l3·c1·c3)·pz − (px − l1·c1)·s3·l3) / ((l2 + l3·c3)·(c1·l2 + l3·c1·c3) + l3²·c1·s3²) ]     (5)
where px, py, pz are the co-ordinates of the target point, ci and si denote the cosine and sine
of joint angle i, and l1, l2, l3 are the link lengths of the finger corresponding to joint angles
1, 2 and 3, moving from the base to the tip.
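Eq. (5) can likewise be evaluated numerically. In the sketch below the remaining joint angles are assumed to be already known, and the squared terms in the denominator follow our reading of the typeset formula; the sample values in the test are illustrative link lengths in metres.

```python
import math

def finger_joint2(px, pz, theta1, theta3, l1, l2, l3):
    """Evaluate Eq. (5): joint angle theta2 of a 3-link finger from the
    target point co-ordinates px, pz, the other joint angles theta1 and
    theta3, and the link lengths l1..l3 (ci/si = cos/sin of joint i)."""
    c1 = math.cos(theta1)
    c3, s3 = math.cos(theta3), math.sin(theta3)
    num = (c1 * l2 + l3 * c1 * c3) * pz - (px - l1 * c1) * s3 * l3
    den = (l2 + l3 * c3) * (c1 * l2 + l3 * c1 * c3) + l3 ** 2 * c1 * s3 ** 2
    return math.asin(num / den)
```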
The kinematic connectivity between carton and the fingers can be achieved by locating
various contact points on the carton and recording the displacement of these points as the
folding of the carton takes place. The contact points on the carton can be identified by
geometrical interpretation of the folding sequences (Dubey & Dai, 2001). These contact
points are then used to find the displacement of each finger joint. The displacement
data are further interpolated to generate optimal finger paths, minimizing unwanted
finger motion and thus reducing the packaging cycle time. The interpolated data from the
simulation can be downloaded to drive the actual fingers. This whole packaging process can
also be automated based on the study of the geometric features of the panels and their
folding sequences without resorting to the simulation of the carton; this is our current
research effort. The model provides all the kinematic information that may be required to
run the machine (Dubey & Crowder, 2003). A parametric model of the packaging machine
was developed in the software (Workspace 4, 1998), which allows geometrical and
dimensional alterations in the design, including configuration verification, to be
incorporated easily. This also allows the kinematic parameters of the machine components
to be ascertained before the actual fabrication.
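The interpolation of recorded contact-point displacements into a denser finger path can be sketched as follows. Piecewise-linear resampling is shown purely for illustration; the actual system and the Workspace4 output may use a different scheme.

```python
def resample_joint_path(keyframes, n_steps):
    """Resample per-joint displacements recorded at the carton contact
    points into an evenly spaced finger path (piecewise-linear).
    keyframes: list of (joint1, joint2, joint3) tuples, one per fold stage."""
    m = len(keyframes) - 1           # number of key-frame segments
    path = []
    for k in range(n_steps + 1):
        t = k * m / n_steps          # position along the key-frame sequence
        i = min(int(t), m - 1)       # segment index
        u = t - i                    # fraction within the segment
        a, b = keyframes[i], keyframes[i + 1]
        path.append(tuple(a[j] + u * (b[j] - a[j]) for j in range(len(a))))
    return path
```

Denser resampling (larger n_steps) gives the controller smoother set-point sequences at the cost of a larger data file.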
Figure 7 shows the fingers tracking the contact points on the carton while it is folding. The
simulation provides much valuable information for both the design and the control of the
packaging machine. For example, it can be used for geometric and configuration checks
before deciding on the dimensions and structure of the machine. Any new geometrical
information on the machine components can be obtained directly from the model by making
parametric changes to its basic dimensions. The motion data and finger trajectories obtained
during folding of the carton panels can be used for finger control in the actual system.
Currently the motion parameters available from the simulation are not directly integrated
with the actual controller, so the data have to be fed off-line in the form of a data file.
Nevertheless, this technique allows full validation of the folding sequence before the data
are downloaded to the controller.
motion of the turn-table (Dubey & Dai, 2006). Ten high-torque, high-performance motors,
supplied by Yaskawa Motors®, Japan, were used on the finger joints. The specifications of
these motors were:
Dimensions: 30 × 30 mm
Weight: 70 grams
Torque: 0.7 Nm at 22.5 rpm
Gear ratio: 80:1, harmonic drive
Optical encoder with 96 pulses/rev.
This means that a 10 cm long arm connected to the motor can apply a fingertip force of 7 N,
which is high enough to fold the cardboard panels along the crease lines. The controller
architecture of the system is shown in Figure 8. It has four motion control cards, each of
which can control up to 4 axes (Dubey & Crowder, 2003). These cards are supported by a
motion control library in the C programming language; the system also has a G-code
programming interface for quick trials and for running the motors in teach mode. The
system also incorporates pneumatic connections for attaching suction cups, which can be
switched ON/OFF by the controller. Currently these are employed to move the turntable
from one orientation to another and to hold the carton in fixed positions; at a later stage,
however, it is planned to use such cups on the fingertips as well, for an astrictive mode of
grasping. This will be advantageous in handling flat carton panels without any slip
(Dubey et al., 1999).
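The quoted fingertip force follows directly from the motor specification; a quick check, using the 10 cm arm length from the text and assuming the 0.7 Nm figure is the torque at the gearbox output:

```python
# Values from the specification list above (0.7 Nm assumed at the output).
torque = 0.7          # N*m at 22.5 rpm
arm_length = 0.10     # m, finger link attached to the motor
fingertip_force = torque / arm_length   # ~7 N, enough to fold along a crease
```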
equivalence. The data file contains the motion data for the simultaneous operation of the
motors in a single row, while the succeeding lines hold the motion control parameters for
the next stages. Accordingly, the controlling program reads the data file sequentially and
generates the appropriate interrupts to pass the motion commands simultaneously. Thus
the fingers can execute motions in parallel as well as in sequential modes. The
programming capability of the controller can be further enhanced by developing
subroutines specific to the various manipulative functions, giving the controller a modular
structure that can easily be adapted to new carton folding and packaging sequences.
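The data-file handling described above can be sketched roughly as below. The file layout (whitespace-separated set-points, one stage per line) and the send_to_axes callback are illustrative assumptions, not the actual motion-card interface.

```python
def run_motion_file(path, send_to_axes):
    """Controller loop sketch: each line of the data file holds one stage of
    simultaneous motor set-points; successive lines run sequentially.
    send_to_axes is a (hypothetical) callback that passes one stage of
    commands to the motion-control cards."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue                      # skip blanks and comments
            setpoints = [float(v) for v in line.split()]
            send_to_axes(setpoints)           # all motors of this stage together
```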
Reconfigurability of the system was one of the key issues for which this system was
developed. The idea was to use the system to fold different types of carton with little
alteration to the system design. To achieve this, it is important to ensure that the
system is configured exactly as per the developed graphical model and the basic
structural set-up. To run the machine, the data file is included in the main program and
the system is first operated in a stepwise trial mode to verify the fingers’ movements
whilst the carton is being folded. Once this is achieved the system is ready for
automated packaging. The developed machine is shown in Figure 9, where all the
fingers and horizontal pushers are involved in a complicated sequence of tucking
operations. The system was able to erect and fold the carton (Figure 1) successfully in
less than 45 seconds. Although it is difficult to give a directly comparable timing for
manual packaging, we conducted some trial runs for the same carton; manual
packaging took, on average (after learning), around 60 seconds.
6. Conclusions
The chapter has presented a dexterous reconfigurable assembly and packaging system
(D-RAPS) with dexterous robot fingers. The aim of this research was to design a
reconfigurable assembly and packaging system that can handle cardboard cartons of
different geometry and shapes. The initial idea was to develop such a system that can
demonstrate adaptability to cartons of different styles and complexities. It was shown
that the packaging machine could fold two cartons of completely different shapes. The
cycle time for the folding was approximately 45 seconds in each case. Though this is not
an optimized folding time, it is envisaged that the cycle time can be reduced to 30
seconds or less with on-line data transfer. Although many issues need to be addressed
before a fully flexible machine can be realized on the shop floor, the research was
aimed at proving the principle of the quick change-over that faces the packaging
industry.
The future enhancement will include optimization of finger trajectory, use of tactile
sensors for force feedback to avoid excessive pressure on the panel, and astrictive mode
of grasping to involve the vacuum system at the fingertips. It is also proposed to
integrate the simulation model directly with the actual machine to download the
motion data online. The XY-table can be motorized and controlled for auto-
reconfiguration. These advanced techniques will automate the entire packaging process
starting from the two-dimensional drawing of the cardboard, defining its kinematics,
and then generating the motion sequences leading to the finished packaged product. It
is also envisaged, if the hands can be miniaturized, to mount such dexterous
reconfigurable hands directly onto a robotic arm to offer a higher level of flexibility in
packaging. The system would then not only perform carton folding but could also
insert the product into the carton during the folding sequences, reducing the packaging
time and meeting the challenge of high adaptability to the ever-changing demands of
high-end personal product packaging.
7. Acknowledgment
This research was supported by a grant from the Unilever Research, Port Sunlight, Wirral
(UK), which the authors highly appreciate. The authors also thank Prof. John Rees Jones
and Dr. Kevin Stamp for their discussions and advice.
8. References
Dai, J.S. (1996a). Survey and Business Case Study of the Dextrous Reconfigurable Assembly
and Packaging System (D-RAPS), Science and Technology Report, No PS960321,
Unilever Research (UK).
Dai, J.S. (1996b). Conceptual Design of the Dextrous Reconfigurable Assembly and
Packaging System (D-RAPS), Science and Technology Report, No PS960326,
Unilever Research (UK).
Dai, J.S. and Rees Jones, J. (1997a). New Models for identification of distinct configurations
of Cartons during Erection and Assembly, Science and Technology Report, PS 97
0066, Unilever Research (UK).
Dai, J.S. and Rees Jones, J. (1997b). Structure and Mobility of Cartons through Screw Theory,
Science and Technology Report, PS 97 0067, Unilever Research (UK).
Dai, J.S. and Rees Jones, J. (1997c). Theory on Kinematic Synthesis and Motion Analysis of
Cartons, Science and Technology Report, PS 97 0184, Unilever Research (UK).
Dai, J.S. and Rees Jones, J. (1999). Mobility in Metamorphic Mechanisms of
Foldable/Erectable Kinds, ASME Journal of Mechanical Design, Vol. 121, No. 3, pp.
375-382.
Dubey, V. N. and Dai, J. S. (2006). A Packaging Robot for Complex Cartons, Industrial
Robot, Vol. 33, No. 2, pp. 82-87.
Dubey, V.N. and Crowder, R.M., (2003). Designing a dexterous reconfigurable packaging
system for flexible automation, ASME Design Engineering Technical Conference,
DETC2003/DAC-48812, Chicago, Illinois (USA).
Dubey, V.N. and Dai J.S. (2001). Modeling and Kinematic Simulation of a Mechanism
extracted from a Cardboard Fold, International Journal of Engineering Simulation,
Vol. 2, No. 3, pp. 3-10.
Dubey, V.N., Dai, J.S. and Stamp, K.J. (1999a). Advanced Modeling and Kinematic
Simulation of a Complex Carton, Science and Technology Report, Unilever
Research, (UK).
Dubey, V.N., Dai, J.S. and Stamp, K.J. (1999b). Advanced Modelling, Design and
Experimental Work of a New Reconfigurable System for Packaging with a Multi-fingered
Robot Hand, Science and Technology Report, Unilever Research (UK).
Dubey, V.N., Dai, J.S., Stamp, K.J. and Rees Jones, J. (1999). Kinematic Simulation of a
Metamorphic Mechanism, Tenth World Congress on the Theory of Machines and
Mechanisms (IFToMM), pp. 98-103.
Fu, K. S., Gonzalez, R. C. and Lee C. S. G. (1987). Robotics: Control, Sensing, Vision, and
Intelligence, McGraw-Hill International, New York.
Lu, L. and Akella, S. (2000). Folding Cartons with Fixtures: A Motion Planning Approach,
IEEE Transactions on Robotics and Automation, Vol. 16, No. 4, pp. 346-356.
Workspace 4, (1998). User’s Guide, Robot Simulations Ltd., UK.
30
1. Introduction
Robot automation systems are rapidly taking the place of the human work force. One of the
benefits is that this change provides the human work force with the time to spend on more
creative tasks. The highest population of robots is in spot welding, spray painting, material
handling, and arc welding. Spot welding and spray painting applications are mostly in the
automotive industry. However, arc welding and material handling have applications in a
broad range of industries, such as, automotive sub-suppliers, furniture manufacturers, and
agricultural machine manufacturers.
The number of arc welding automation robot stations is growing very rapidly. The two most
common stations are the GMAW (Gas Metal Arc Welding) station and the GTAW (Gas
Tungsten Arc Welding) station. These two stations are the most common because they are
so well suited to robot systems. Typically, a robot arc welding station is composed of a
robot, a robot controller, arc welding equipment, a work clamp and motion devices to hold
the work pieces accurately in position (allowing for heat deformation), robot motion devices
to move the robot around for a larger working range and better weld positions, sensors, and
safety devices. A typical arc welding robot station is illustrated in Figure 1.
A slender welding gun is better for maneuverability but, in case of a collision, sufficient
strength must be guaranteed. It is also necessary to ensure that the torch is equipped with
shock absorption devices such as springs, and it is important to have a cooling system
(generally circulating water) to protect the torch against heat deformation. All the welding
connections, such as the electric power, the wire, and the coolant, are usually integrated
into one cable unit. It is recommended that the cable unit be as short as possible for a
quicker response and better reliability.
2.3 Manipulators
A robot has a limited working range and accessibility; therefore, in many cases a
manipulator has to be considered. A manipulator is a device that holds the work piece and
moves it around (typically with linkages) for better access and welding positions. The
advantages of a manipulator include:
(1) A manipulator can easily be moved around the work piece for the best
welding positions.
(2) A manipulator can reduce the variation in the lead and the lag angles of the tip.
(3) Welding can be performed in a stable flat welding position by a synchronized
and simultaneous control of a robot and a manipulator.
(4) Any hard-to-reach positions can be accessed more easily.
(5) A manipulator increases the working range of a fixed floor mounted robot or
an inverted robot.
In general, through cooperation with a manipulator, a robot can maintain a better flat
welding position, which produces better deposition and thereby reduces repair work. This
also makes higher welding speeds possible, with a corresponding increase in productivity.
There are two types of actuation systems for manipulators: the indexing type and the
servo-controlled type. The indexing type is used in economical models and is commonly
actuated by pneumatic or AC motors; it is usually controlled by signals for the target
position at constant speed. The servo-controlled type controls both speed and direction. It
can be an expensive system, since it has a complex structure with servo motors, worm gear
reducers, and encoders or resolvers. However, the servo-controlled type has higher
accuracy and variable speed control in both directions of rotation, and errors are
compensated for by feedback control.
Various types of manipulators, depending on the types of motion and the degrees of
freedom that are required, are shown in Figs. 2 and 3. Several examples of rather simple
one-degree-of-freedom manipulators are shown in Fig. 2.
Figure 3 shows more sophisticated manipulators for higher maneuverability, but with an
associated higher cost. In selecting the best type of manipulator, it is important to consider
control types, load carrying capacity, and working environment. Also, repeatability,
accuracy, mechanical configuration, and degrees of freedom are important issues that
should be considered.
2.4 Sensors
Sensors collect information from the robot, the peripheral devices, and the process, and
transfer this information to a controller. In arc welding it is critical to consider deformation
from the high heat input and, therefore, closed-loop control with a sensor is necessary.
Also, in an automatic welding system, the errors caused by the manufacturing tolerances of
the manipulator and the work pieces have to be considered. Various types of sensors for
robot arc welding stations are available (see Table 1), and the right type must be chosen for
the application.
Contact type (weld seam tracking):
   Mechanical type: roller spring.
   Electromechanical type:
      1) Two probes across the seam line.
      2) A probe in the seam line.
   Electric control type with probe.
Non-contact type (various purposes):
   A. Physical type:
      1) Acoustic – arc length control.
      2) Capacitance – distance control.
      3) Eddy current – seam tracking.
      4) Induction – seam tracking.
      5) Infrared radiation – penetration control.
      6) Ultrasonic – penetration and weld quality.
      7) Magnetic – detecting electromagnetic field.
   B. Through-the-arc type:
      1) Arc length control (arc voltage).
      2) Weaving with electric measurement (GTAW, GMAW).
   C. Optical/vision (image capture and process):
      1) Vision sensors.
      2) Laser sensors.
      3) Opto-electric sensors.
Table 1. Various Types of Sensors for Robot Arc Welding Stations.
As illustrated in Table 1, and shown in Fig. 5, there are two types of sensors; namely, a
contact type sensor and a non-contact type sensor.
A contact type sensor. Figure 5(a) shows a contact type sensor. A gas nozzle, or a finger, is
used as a probe to detect contact with the work piece. The nozzle senses the existence,
location, and orientation of the work piece and, thereby, the location of the weld seam. A
contact type sensor is less expensive and easier to use than a non-contact type sensor.
However, this type of sensor cannot be used for butt joints or thin lap joints.
A non-contact type sensor. Figure 5(b) shows a non-contact type sensor referred to as a
through-the-arc sensor. This sensor detects changes of welding parameters while the torch is
weaving during the arc welding process. The sensor may be used for tee joints, U and V
grooves, or lap joints over a certain thickness. This type of sensor is appropriate for welding
of bigger pieces with weaving when penetration control is not necessary. However, it is not
applicable to materials such as aluminum.
A Column. A column is developed from the concept of modularization. While a track moves
the robot in a single horizontal direction, a column may hold the robot fixed or move it in
both a vertical and a horizontal direction. A column occupies less space in the plant and
makes efficient utilization of the space possible. Also, the welding wire, power and signal
cables may be placed in a duct inside the column to avoid unnecessary exposure.
As shown in Fig. 10, a robot suspended from a column may have better accessibility and
weld position than a floor mounted robot. There are stationary (fixed), traveling, rotary, and
rotary/traveling types of columns.
2.6 Safety
An arc welding robot system should be on a firm foundation so that vibrations do not
shake the system. The emergency stop button (colored yellow and red) should be located
at an easily accessible position; the switch should stop the robot without shutting off the
power. There should also be a safety fence to protect the work force from spatter and arc
flash. Figure 12 shows a complete safety fence system.
Start data. Start data generates the arc start conditions and stabilizes the electric power. It
contains the following parameters:
- Ignition voltage and current.
- Gas preflow time: the time between shield gas flow and arc start.
- Hot start voltage and current: to stabilize the arc in the first stage.
- Restrike amplitude: a minor change in the position of the torch to start the arc.
- Restrike cross time: the time to stay in the restrike position.
- Arc welding start maximum time: the robot stops the process if the arc does not
start within this time interval.
Main data. Main data contains the following parameters for the actual welding process:
- Welding voltage.
- Welding current.
- Welding speed.
For higher productivity, the welding speed should be increased to the maximum value.
Therefore, a new system should be put through a number of tryouts until the parameters for
maximum speed are determined. The above three parameters are interrelated.
End data. At the end of the welding process there may be craters, or cracks, that can cause
welding defects. Therefore, several parameters for an appropriate finish are required.
These parameters include:
- End voltage and current.
- Gas postflow time.
- Burnback time.
- Cool time.
- Fill time.
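The start, main and end parameters above can be grouped into a single welding-schedule record. The sketch below uses illustrative field names and units, not a specific robot controller's data format.

```python
from dataclasses import dataclass

@dataclass
class WeldSchedule:
    """Grouping of the start, main and end data listed above (a subset of
    the parameters, with assumed units, for illustration)."""
    # start data
    ignition_voltage: float      # V
    ignition_current: float      # A
    gas_preflow_time: float      # s, shield gas flow before arc start
    arc_start_max_time: float    # s, abort if the arc has not started
    # main data (the three interrelated parameters)
    welding_voltage: float       # V
    welding_current: float       # A
    welding_speed: float         # mm/s
    # end data (crater fill / finish)
    end_voltage: float           # V
    end_current: float           # A
    gas_postflow_time: float     # s
    burnback_time: float         # s
```

Keeping the three stages in one record makes it easy to store and replay complete schedules during the tryouts mentioned above.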
Weaving data. A large work piece, to be welded with deep-penetration repetitive welding,
demands a long time and requires a motion plan to the weld start position. Carrying out
this type of welding in one pass is possible with weaving. Weaving has various patterns,
such as zig-zag, V-shape, triangular, and wrist weaving, as shown in Fig. 15. Wrist
weaving uses only the sixth axis of the robot and enables weaving in a narrow space where
weaving with the lower arms of the robot is impossible. It is also useful when high-frequency
weaving is necessary. However, there is an error in the weaving height, as shown
in Fig. 15, so small-amplitude weaving is recommended.
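A zig-zag weave of the kind described above can be generated as a list of TCP via-points superimposed on a straight seam. The parameterization (pitch per half-cycle, lateral amplitude, seam along x) is an illustrative assumption.

```python
def zigzag_weave(seam_length, pitch, amplitude):
    """Generate TCP via-points for a zig-zag weave along a straight seam
    on the x axis; pitch is the travel per half-cycle and amplitude the
    lateral swing (units arbitrary but consistent)."""
    points = [(0.0, 0.0)]            # weld start
    x, side = pitch / 2.0, 1
    while x < seam_length:
        points.append((x, side * amplitude))   # alternate left/right peaks
        side = -side
        x += pitch
    points.append((seam_length, 0.0))          # weld end, back on the seam
    return points
```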
Point to point interpolation. This is the simplest and fastest interpolation technique. Point to
point interpolation commands the TCP to move from one point to another regardless of the
intermediate path. This technique is good when there are no obstacles in between. When
there is an obstacle it may be avoided by defining intermediate via points.
Linear segments with parabolic blends. When a constant speed trajectory in the world
coordinate system is needed, this interpolation technique is used. At the beginning, the
speed linearly increases and near the end, the speed linearly decreases.
Minimum time trajectory. This technique seeks the minimum-time trajectory between two
positions. The trajectory is obtained at maximum acceleration and deceleration and is
called a bang-bang trajectory.
Interpolation for weaving. When weaving is used for thick work piece welding, the
weaving paths are calculated from the basic welding path.
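The "linear segments with parabolic blends" technique described above can be sketched as a trapezoidal speed profile: the position blends are parabolic because the speed ramps linearly up over a blend interval, holds constant, and ramps linearly down. The blend_time parameter is an illustrative name.

```python
def trapezoidal_speed(t, total_time, v_max, blend_time):
    """Speed at time t for 'linear segments with parabolic blends':
    linear ramp-up over blend_time, constant v_max, linear ramp-down."""
    if t < 0.0 or t > total_time:
        return 0.0
    if t < blend_time:                       # acceleration blend
        return v_max * t / blend_time
    if t > total_time - blend_time:          # deceleration blend
        return v_max * (total_time - t) / blend_time
    return v_max                             # constant-speed segment
```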
Fig. 18. A TIG Welding System of Stainless Steel Plates with Nickel Alloy.
(B) Aluminum Welding Automation. Aluminum is a metal that is very difficult to weld. DC
reverse polarity needs to be used in MIG welding and AC needs to be used in TIG welding.
Aluminum has the following unique properties:
- Aluminum oxide surface coating.
- High thermal conductivity.
- High thermal expansion coefficient.
- Low melting temperature.
- No color changes approaching melting point.
In order to weld aluminum, the following is a list of some critical considerations:
- Base metal and filler metal should be kept clean and dry and oxide surface should
be removed.
- Appropriate filler metal should be chosen.
- Welding parameters should be accurate.
- High welding speed should be maintained.
For an arc welding robot system, good coordination of the peripheral devices with
simultaneous control of the external axes is a very important issue.
5. References
[1] J. D. Lane, “Robotic Welding,” IFS, 1987.
[2] H. B. Cary, “Modern Welding Technology,” Third Edition, Regents/Prentice Hall Book
Company, Inc., Upper Saddle River, New Jersey, 1994.
[3] N. S. Seo, “Modern Manufacturing Processes,” Dongmyungsa, Korea, 2002.
[4] P. F. Ostwald, and J. Munoz, “Manufacturing Processes and Systems,” John Wiley and
Sons, Inc., New York, 1997.
[5] S. Kalpakjian, “Manufacturing Engineering and Technology,” Addison Wesley
Publishing Company, Inc., Reading, MA, 1992.
[6] J. J. Craig, “Introduction to Robotics, Mechanics and Control,” Addison Wesley
Publishing Company, Inc., Reading, MA, 1989.
[7] K. S. Fu, “Robotics,” McGraw-Hill International Book Company, Inc., New York, 1987.
31
1. Introduction
Deburring is a significant area of research for robot application. However, it is difficult to
obtain details on this important topic in modern text books and archival journal papers.
Therefore, in general, the engineer is subjected to a time consuming decision making process
that usually involves trial-and-error procedures. This chapter will attempt to organize
important know-how and field experience, in a logical sequence, so that the engineer can
use the information in a straightforward manner. The chapter will provide important details
on various types of tools, material selections, robots and station concepts, and
programming. This information will help reduce the trial-and-error approach, which is a
significant drain on time and money. The contents of this chapter will also provide the
opportunity to obtain an optimal solution to the problems associated with deburring.
The chapter is arranged as follows. Section 2 defines a burr and presents background
information on the automation systems for the deburring process. The process is classified
by the types of burr and the work material. Section 3 gives a brief introduction to robots for
the deburring process. Section 4 discusses the tools of a robot deburring system, including
the spindle motor, tool holders, and tool selection. Section 5 focuses on the rotation-type
burr tool; i.e., deburring by a solid rotating file. The rotating file is the most commonly used
at the present time. Section 6 focuses on the deburring operation and includes a discussion
of potential problems in the deburring process. Section 7 presents a discussion of the
selection of tools for different applications. Finally, Section 8 addresses the problems
associated with the cutting and deburring of polymers and die castings.
The material in this chapter is based on the reports and experiences of experts in Europe, the
United States of America, Japan, and Korea.
major hindrance in assembly work, and the sharp edge of a burr can cut a worker’s hand.
Also, in general, burrs give the appearance of a poorly finished product.
Deburring can occupy as much as five to ten percent of total man-hours, yet in many cases it is
regarded as a minor problem or just a bothersome phenomenon. As the demand for quality
becomes more rigorous, and the role of process automation becomes more important, it is
necessary to devise a substantial and systematic prescription against burrs and flashes. The best
solution is that a burr should not be generated in the first place, or that its effects should be
minimized. Also, the direction of a burr should be controlled in the process planning stage so that it
does not affect the subsequent processes. Finally, the burrs that are generated must be removed, in
which case a deburring methodology, automation, process planning, and justification of the
financial implications must be thoroughly investigated.
Robot automation systems for the deburring process have been developed primarily for the
automotive industry, kitchen products, and the manufacture of plastics. The motivation is due to
the reduction in the available work force, industrial environmental problems such as dust, and the
demand for cycle time reduction. The latter issue is very important in applications such as the
production of crankshafts. The deburring process, however, is a relatively undeveloped process
and it is difficult to obtain a comprehensive list of the literature on the subject. In general,
information on deburring automation is not readily available, it has remained as the know-how of
the experts in the field and in the reports of system houses. The robot automation system of
deburring is composed of a robot (see Section 3) and peripheral equipment. The most important
peripheral equipment is the fixture device of the work piece. This device may be a simple
clamping table, an indexing table, or a sophisticated manipulator depending on the geometry of
the work piece and conditions such as cycle time. When the size of the work piece is bigger
than the robot work envelope, robot track motion should be used.
The selection of the correct deburring tool is very critical for the success of the process and,
therefore, is an essential part of the deburring process. Also, more than thirty-five methods
have been reported for the deburring process. The most important criterion in choosing one
method over another is the minimization of total manufacturing cost. Primary items that
must be considered are the production volume, the cycle time, the material properties,
tolerances, geometry, and dimensions. Secondary items are the investigation of the process
that generates a burr and the purpose of burr removal. The best solution can only be
obtained when all of these conditions are thoroughly investigated.
For deburring the inside of a hole, there are two different cases. First, when the hole
diameter is larger than 25 mm, general rotating tools at high speeds (15,000 rpm and
greater) may be used. Second, when the hole diameter is smaller than 25 mm, special tools,
or countersinking tools, at low speeds (30 - 60 m/min) are used. Figure 2 illustrates the
procedure for deburring the inside of a hole.
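The hole-deburring rule above can be captured in a small helper. The 45 m/min default surface speed is our mid-range choice from the stated 30-60 m/min window, and the conversion from surface speed to spindle speed uses the standard cutting-speed relation v = π·d·n.

```python
import math

def hole_deburr_rpm(diameter_mm, surface_speed_m_min=45.0):
    """Pick a spindle speed for in-hole deburring per the rule above:
    holes over 25 mm use a general rotating tool at high speed; smaller
    holes use a special or countersinking tool run at a 30-60 m/min
    surface speed (45 m/min assumed here as a mid-range default)."""
    if diameter_mm > 25.0:
        return 15000.0                                # rpm, general rotating tool
    # rpm from cutting speed: v [m/min] = pi * d [m] * n [rev/min]
    return surface_speed_m_min * 1000.0 / (math.pi * diameter_mm)
```

For a 10 mm hole this gives roughly 1.4 krpm, orders of magnitude below the high-speed regime, which is why the small-hole case needs dedicated tooling.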
based on speed. The stroke is based on the burr geometry and conditions such as
oscillating tool performance.
(iv) No compliance, see Fig. 4(d). The spindle holder is rigidly fixed without compliance for
soft material deburring.
Fig. 8. The Ball Shaped Cutter for Multi-Directional Compliance and Cutters for No
Compliance Applications.
The spherical shape of the ball shaped cutter is very efficient for deburring along smooth
curves. The cutter wheels and the straight cutters are good for mild materials (such as
plastics and aluminum) but need a device to protect them from vibrations.
The fluting patterns and cutting edges shown in Fig. 5 may be classified as in Fig. 9. The
patterns are categorized by the blade angle relative to the rotation axis, the number and
size of the blades, and the cutting mechanism. It is important to match the pattern to the
size, hardness, and material of the burrs. The standard concept (spiral cut, standard cut,
medium number and size of blades) should be the starting point when designing a system.
When a diamond cut, double cut, or chip breaker is used in deburring, the chip from the
process becomes different. The chip generation mechanism is the same as that of
machining processes and, for more details, the reader is referred to books on the theory of
machining.
simple and accurate element, and is good for one-directional compliance. The contact
force is controlled by a set screw; a preloaded screw can be used for a uniform contact
force over a limited stroke. A gravitational element is mounted on the floor, and the
work, gripped by the robot, approaches the tool so that gravity generates the contact
force.
In the performance of the compliance system, the most important factor is the stability of the
contact force. It is especially important when the tool position is reversed, and in
acceleration or rotation during high speed operation. If the system does not provide
sufficient stability, then a uniform chamfer cannot be obtained during initial contact, at
corners, or during robot servo lag. The main cause of contact force instability is a low contact
force versus inertia ratio. Figure 11 shows the hierarchy of deburring tool systems
considering compliance.
6. Deburring Operation
After the proper tool has been selected and the station is ready, the robot deburring process
can commence. The following is a discussion of the important phases of the robot deburring
automation process.
620 Industrial Robotics - Programming, Simulation and Applications
Path Inaccuracy
When the path is not accurate then the chamfer cannot be uniform due to the following
reasons.
- The burr size deviation should not exceed 1 to 4 mm. If the deviation is bigger than
this, then it should be removed in a previous stage.
- The robot is not securely mounted.
- The work piece is not properly clamped.
- Dust or dirt on the surface of the work piece prevents a tight clamping of the work.
- The tool is dislocated from its original position.
- In corner operation, robot servo lag causes inconsistent chamfer. Simpler tool
geometry and slower speed may solve the problem.
Tool Wear
Important factors to consider when investigating tool wear are:
- The robot should be programmed so as to obtain uniform wear of the tool.
- When the file is damaged, it is a sign of poor working conditions. The damage
should be investigated and work parameters such as RPM, speed, and contact force
and angle should be reviewed. The possibility of tool chattering should be
investigated.
- The wear mechanism of the tool material should be studied.
electric motor with a speed of 54,000 rpm is used as the spindle motor. A pneumatic
spindle is too weak for cutting at such high speeds. When a spindle is attached to a robot, it
should be aligned with the sixth axis of the robot in order to minimize the moment
exerted on the robot wrist. The spindle holder should be sufficiently strong to withstand
the strong reaction forces but care must be taken because the spindle bearing might be
damaged if it is too tight. The tool length and the diameter are selected first based on the
work geometry and accessibility. The pattern of the tool teeth is selected based on the
work material. The fixture is tailor made depending on the work shape and size. The
fixture should be sufficiently large and rigid to hold the work securely without tool
chattering.
32
Robot-based Gonioreflectometer
Andreas Höpe, Dirk Hünerhoff, Kai-Olaf Hauer
Physikalisch-Technische Bundesanstalt, AG 4.52, Reflectometry
Bundesallee 100, 38116 Braunschweig, Germany
1. Introduction
Measurements of diffuse reflection are important for a variety of applications in optical
metrology. In reflectometry both the spectral and the spatial distribution of radiation
diffusely reflected by solid materials are measured and characterised by classification
numbers. In practice, reflection measurements of diffuse
reflecting materials are accomplished predominantly relative to a standard. The primary
standard established for this purpose is the perfectly reflecting diffuser (PRD), which
reflects the incoming radiation loss-free, completely diffuse and with a lambertian
direction characteristic. The PRD is a theoretical concept only, which cannot be realised
experimentally or materially. Since there is no material with these
characteristics, the realisation of primary standards is carried out with physical
methods, i.e. by the measuring apparatus itself, in the context of an absolute
measurement. In contrast to the directed, specular reflection, the incoming light in
diffuse reflection is distributed over all directions in space. If the radiance L of the
reflected radiation is independent of the direction, one speaks of a lambertian emitter.
The classification numbers in the case of reflection are the reflectance ρ, the radiance factor β, and
the BRDF (bidirectional reflectance distribution function) fr. These classification numbers are not
material-specific but depend on a multiplicity of parameters, e.g. the wavelength of the radiation,
the direction of irradiation and reflection as well as the aperture angle.
There are, in principle, nine different measuring geometries for a reflection
measurement. They consist of all possible combinations, where the incident and the
reflected radiation is in each case hemispherical, conical or directional. Therefore it is
always necessary to specify the measurement geometry for a material classification
number. The hemispherical measuring methods can be accomplished with an
integrating sphere, whereas directional geometries are realised with a gonioreflectometer.
The name gonioreflectometer is used in this context for a device which allows the precise
control of the angles of the incident and reflected optical beams in a reflection
measurement. This publication deals with the first time utilization of a commercial 5-axis
industrial robot as a sample holder in a gonioreflectometer.
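The nine measuring geometries mentioned above are simply the 3 × 3 combinations of the three beam modes for the incident and reflected radiation; a one-line enumeration makes this explicit (the names are taken from the text):

```python
from itertools import product

# Each of the incident and reflected beams can be hemispherical,
# conical, or directional, giving 3 x 3 = 9 measuring geometries.
MODES = ("hemispherical", "conical", "directional")
GEOMETRIES = list(product(MODES, repeat=2))  # (incident, reflected) pairs
```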
Fig. 1. Photograph of the gonioreflectometer showing the main components large rotation
stage and 5-axis-robot.
reflection measurements were possible, where the vectors of the incoming radiation, the reflected
radiation and the vector of the surface normal formed a plane. The same situation is true for a
variety of gonioreflectometer facilities in other National Metrology Institutes (Proctor & Barnes,
1996), (Chunnilall et al., 2003), (Nevas et al., 2004). The measurement of the appearance of
surfaces under arbitrary lighting conditions, as in everyday life, however, requires a
gonioreflectometer with multi-angle capabilities. Such a gonioreflectometer was set up at PTB
using standard laboratory and engineering equipment.
As mentioned in the introduction one important classification number for the diffuse
reflection is the radiance factor. The radiance factor β is defined as the ratio of the radiance
Lr of the specimen to that of the perfect reflecting diffuser, LrPRD, identically irradiated.
Fig. 2 shows the geometrical conditions of a gonioreflectometer. The incoming radiation is
denoted with index i and the reflected radiation from the sample area dA is denoted with
index r. For a measurement in an arbitrary geometry the spectral radiance factor β depends
besides the wavelength on all of these geometrical quantities:
β(θi, φi, θr, φr, dωi, dωr, λ) = Lr(θi, φi, θr, φr, dωi, dωr, λ) / LrPRD    (1)
As mentioned above the radiance of the PRD cannot be measured directly but it has to be
calculated from other quantities. This can be accomplished using the radiance Li of the
irradiating lamp.
LrPRD = (cos Θi / π) · (AQ / R²) · Li    (2)
Here, the area of the radiator is denoted by AQ and R is the distance between the irradiating
lamp and the sample, where Θi is the angle between the incident beam and the surface
normal of the sample. The radiance Li of the irradiating lamp can be measured by looking
with the detection system directly into the lamp. The lamp has to be rotated to the 180
degree position and the sample has to be moved out of the beam path. This is something
which can be carried out easily with the robot (see Fig. 3).
Fig. 3. Moving the robot head with a sample attached out of the detection beam path.
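Equations (1) and (2) combine into a short numerical sketch. The function and variable names are ours; the quantities (Li, AQ, R, Θi) are those defined in the text.

```python
import math

def radiance_prd(L_i, A_Q, R, theta_i_deg):
    """Radiance of the perfect reflecting diffuser, Eq. (2):
    L_r_PRD = (cos(theta_i) / pi) * (A_Q / R**2) * L_i
    """
    return math.cos(math.radians(theta_i_deg)) / math.pi * (A_Q / R**2) * L_i

def radiance_factor(L_r, L_i, A_Q, R, theta_i_deg):
    """Radiance factor beta, Eq. (1): ratio of the sample radiance L_r
    to the radiance of the identically irradiated PRD."""
    return L_r / radiance_prd(L_i, A_Q, R, theta_i_deg)
```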
A multi-angle gonioreflectometer can be constructed either classically, where several rotation
and translation stages are assembled in order to achieve the desired degrees of freedom, or
using a robot as a sample holder for the device under test. It is expensive to stack several
rotation and translation stages with many motor control units; we estimated about three
times the price of the robot solution for such an arrangement. For economic reasons, we
selected the robot solution.
Fig. 5. Coordinate system and angles for out of the plane measurements at the robot
gonioreflectometer facility.
Both vectors of the incident and reflected beams are restricted to the xy-plane (i.e. z
= 0) according to the construction of the gonioreflectometer. The lamp generating the
incident light is mounted on the large rotation stage and can only be rotated in this plane.
The detection path for the reflected radiation lies also in this plane and is fixed due to the
table mounted mirror optics. This fixed direction coincides with the zero degree position of
the rotation stage. The orientation of the surface of the sample is characterised by the
vector N of the surface normal, which is perpendicular to the surface and can be rotated with
the robot to every angular position ϕN, ϑN within the whole hemisphere above the xy-plane.
With the calculation of these three dot products in the two different coordinate systems and
some rearrangement of the equations it is possible to compute the required angles ϕi, ϕN and ϑN
that have to be adjusted at the facility for an off-axis measurement.
Fig. 6. Out of plane measurement on the matt and glossy sides of an opal glass reflection standard.
Fig. 7. Diagram showing the geometrical conditions of an out of plane measurement. The
incident angle is varied from 0° to 85° in a plane perpendicular to the plane of the reflection
angle (fixed at Θr = 25°).
In order to vary the incident angle in this manner, three angles of the facility, the angle of
the rotation stage ϕi and the angles of the sample surface normal ϕN and ϑN had to be
adjusted simultaneously. For this measurement the angle of the rotation stage ϕi varies from
25° to 90°, the angle ϕN of the surface normal from 25° to 0° and the angle ϑN from 0° to 25°,
see Tab. 1 and Fig. 8.
Incident angle of radiation Angle of rotation Angle of surface Angle of surface
on the sample [°] stage ϕi [°] normal ϕN [°] normal ϑN [°]
0 25 25 0
10 26.81 23.28 9.37
20 31.61 19.46 16.01
30 38.29 15.40 19.94
40 46.03 11.85 22.18
50 54.37 8.86 23.47
60 63.05 6.31 24.24
70 71.94 4.06 24.69
80 80.95 1.98 24.93
90 90 0 25
Table 1. Values of the involved angles (in 10 degree steps) for the presented out of plane
reflection measurement.
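The three simultaneous angle settings can be reproduced numerically. The closed-form relations below are our own reconstruction from three dot-product conditions (fixed angle Θr between the surface normal and the detection direction, with the incident beam confined to the plane through the normal perpendicular to the reflection plane); they are not quoted from the source, but they reproduce Table 1 to the printed precision.

```python
import math

def out_of_plane_angles(theta_i_deg, theta_r_deg=25.0):
    """Return (phi_i, phi_N, theta_N) in degrees for the out-of-plane
    geometry of Fig. 7 / Table 1.

    Reconstructed conditions (an assumption consistent with Table 1):
      cos(phi_i)                        = cos(theta_i) * cos(theta_r)
      cos(theta_N) * cos(phi_N)         = cos(theta_r)
      cos(theta_N) * cos(phi_i - phi_N) = cos(theta_i)
    """
    ti = math.radians(theta_i_deg)
    tr = math.radians(theta_r_deg)
    phi_i = math.acos(math.cos(ti) * math.cos(tr))
    # out-of-plane component of the normal from the remaining conditions
    s = (math.cos(ti) - math.cos(phi_i) * math.cos(tr)) / math.sin(phi_i)
    phi_n = math.atan2(s, math.cos(tr))
    theta_n = math.acos(math.cos(tr) / math.cos(phi_n))
    return tuple(math.degrees(a) for a in (phi_i, phi_n, theta_n))
```

For example, out_of_plane_angles(30) gives approximately (38.29, 15.40, 19.94), matching the 30° row of Table 1.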
Fig. 8. Photo of the starting and end position for the explained out of plane measurement.
Fig. 9. Orientation dependence of the reflected radiance for two different opal glasses.
Fig. 9 shows the results of measurements on two samples. The samples are in both cases opal
glasses from the same manufacturer but with different batch numbers. The reflected radiance
signal from the samples should in both cases be independent of the rotation angle, which is only
the case for the opal glass #255, where the variances are within the range of the measurement
uncertainty. The reason for these differences has to be analyzed in more detail in the future. On
visual inspection there is no obvious difference between the two samples.
From these measurements the following can be deduced. Standard materials for diffuse reflection
should be handled with caution when they are used in directional reflection geometries. The
absolute reflection depends on the rotational orientation of the sample relative to the plane
spanned by the beams of the incident and reflected radiation. Even small rotations about the
surface normal can lead to deviations in the reflected radiance of almost a percent. This is
more than the measurement uncertainty of most goniometric calibration facilities. This
underlines the necessity to indicate the orientation of samples during their calibration and also to
reproduce this orientation when making subsequent measurements using the calibration data.
6. Conclusion
The new gonioreflectometer of the PTB enables high precision goniometric measurements of
diffuse reflecting samples within the whole hemispheric range above the surface of the
sample, due to its out-of-plane capabilities, by using a 5-axis-robot.
The whole facility was built using standard laboratory and engineering equipment. The
five-axis robot, model RV-2AJ from Mitsubishi Electric, could be used without any
modification for this unusual field of application.
The current wavelength range for measurements and calibrations is 250 nm to 1700 nm. It is
planned to extend this wavelength range up to 2500 nm in the near future. It is also planned
to install an additional lamp with increased UV output power in order to improve the
current uncertainty budget for measurements in the UV range.
7. Acknowledgments
The authors thank B. Campe and S. Teichert for the construction and manufacture of several
parts of the gonioreflectometer facility.
8. References
Chunnilall, C.J.; Deadman, A.J.; Crane, L.; Usadi, U. (2003). NPL scales for radiance factor
and total diffuse reflectance. Metrologia, 40, S192-S195, ISSN 0026-1394
Erb, W. (1979). Kugelstrahler mit homogener Strahldichte. Phys-Techn. Bundesanstalt,
Jahresbericht 1979, 170-171, ISSN 0340-4366
Erb, W. (1980). Computer-controlled gonioreflectometer for the measurement of spectral
reflection characteristics. Appl. Opt., Vol. 19, No. 22, 3789-3794, ISSN: 0003-6935
Hünerhoff, D.; Grusemann, U.; Höpe, A. (2006). New robot-based gonioreflectometer for
measuring spectral diffuse reflection. Metrologia, 43, S11-S16, ISSN 0026-1394
Nevas, S.; Manoocheri, F.; Ikonen, E. (2004). Gonioreflectometer for Measuring Spectral
Diffuse Reflectance. Appl. Opt., Vol. 43, No. 35, 6391-6399, ISSN: 0003-6935
Proctor, J.E.; Barnes, P.Y. (1996). NIST High Accuracy Reference Reflectometer-
Spectrophotometer. J. Res. Natl. Inst. Stand. Technol., 101, 619-627, ISSN 0160-1741.
33
Serpentine Robots for Industrial
Inspection and Surveillance
Grzegorz Granosik1, Johann Borenstein2, Malik G. Hansen2
1Technical University of Lodz, POLAND
2University of Michigan*, Ann Arbor, MI, USA
1. Introduction
Urban search and rescue, industrial inspections, and military intelligence have one need in
common: small-sized mobile robots that can travel across the rubble of a collapsed building,
squeeze through small crawl-spaces to take measurements or perform visual inspections,
and slither into the shelter of insurgents to gather intelligence. Some of these areas are not
only difficult to reach, but may also present safety and health hazards to human inspectors.
One species of mobile robots that promises to deliver such hyper-mobility is the so-called
serpentine or snake robot (see Figure 1). Serpentine robots typically comprise of three or
more rigid segments that are connected by 2- or 3-degree-of-freedom (DOF) joints. The
segments typically have powered wheels, tracks, or legs to propel the vehicle forward, while
the joints may be powered or unpowered. Desired capabilities for such a robot are:
Fig. 1. The OmniTread Model OT-4 serpentine robot entering an “Inverted-J” ventilation
duct at SwRI†.
* The OmniTread work was conducted at the University of Michigan where Dr. Granosik co-
developed the “Omni’s” as a Visiting Researcher from 2002-2004.
† The OmniTread robots were independently tested at the Southwest Research Institute
(SwRI). Most of the OmniTread photographs in this chapter were taken at SwRI during the
successful traverse of the shown obstacle.
• ability to traverse rugged terrain, such as concrete floors cluttered with debris, or
unfinished floors such as those found on constructions sites;
• ability to fit through small openings;
• ability to climb up and over tall vertical steps;
• ability to travel inside and outside of horizontal, vertical, or diagonal pipes such as
electric conduits or water pipes;
• ability to climb up and down stairs;
• ability to pass across wide gaps.
This chapter begins with an extended literature review on serpentine robots in general, and then
focuses on the concept and features of the OmniTread family of serpentine robots, which were
designed and built at the University of Michigan’s (UM’s) Mobile Robotics Lab. Along the way,
we discuss the evolution of OmniTread robots (or “Omnis” in short), showing inheritance of
valuable features, mutation of others, and elimination of disadvantageous designs. In the
Experiment Results Section, photographs of successful obstacle traverses illustrate the abilities of
the Omnis. The chapter concludes with our prognosis for future work in this area.
2. Serpentine Robots
Serpentine robots belong to the group of hyper-redundant articulated mobile robots. This group
can be further divided based on two characteristic features: the way the forward motion of the
robot is generated and the activity of its joints, as shown in Table 1. As our work is focused on
serpentine robots we will limit the following literature review to this scope.
The first practical realization of a serpentine robot, called KR-I, was introduced by Hirose
and Morishima (1990) and later improved with version KR-II (Hirose et al., 1991). This first
serpentine robot was large and heavy, weighing in at 350 kg. The robot comprised
multiple vertical cylindrical segments on powered wheels (tracks in KR-I) that give the
mechanism a train-like appearance. Vertical joint actuators allow a segment to lift its
neighbors up, in order to negotiate steps or span gaps.
More recently, Klaassen and Paap (1999) at the GMD developed the Snake2 vehicle, which
contains six active segments and a head. Each round segment has an array of 12 electrically
driven wheels evenly spaced around its periphery. These wheels provide propulsion regardless
of the vehicle’s roll angle. Segments are interconnected by universal joints actuated by three
additional electric motors through strings. Snake2 is an example of a robot that is inspired by the
physiological structure of snakes where wheels replace tiny scales observed on the bodies of
some real snakes. Snake2 is equipped with six infrared distance sensors, three torque sensors, one
tilt sensor, two angle sensors in every segment, and a video camera in the head segment. Snake2
was specifically designed for the inspection of sewage pipes.
Another serpentine robot designed for sewer inspection was developed by Scholl et al. (2000) at
the Forschungszentrum Informatik (FZI) in Germany. Its segments use only two wheels but the
actuated 3-DOF joints allow full control over each segment’s spatial orientation. The robot is able
to negotiate tight 90° angled pipes and climb over 55 cm high obstacles. One segment and its joint
are about 20 cm long. The sensor suite of this robot is similar to that of Snake2. The development
of sewer inspection robots is continued in the joint project MAKROplus (Streich & Adria, 2004).
While wheeled serpentine robots can work well in smooth-walled pipes, more rugged
terrain requires tracked propulsion. To this effect Takayama and Hirose (2000) developed
the Souryu-I crawler, which consists of three segments. Each segment is driven by a pair of
tracks, which, in turn, are all powered simultaneously by a single motor, located in the
center segment. Torque is provided to the two distal segments through a rotary shaft and
universal joints. Each distal segment is connected to the center segment by a special 2-DOF
joint mechanism, which is actuated by two lead screws driven by two electric motors. The
robot can move forward and backward, and it can change the orientation of the two distal
segments in yaw and pitch symmetrically to the center segment. One interesting feature is
the ability of this robot to adapt to irregular terrain because of the elasticity of its joints. This
elasticity is provided by springs and cannot be actively controlled.
                   External propulsion element:    Movement is generated
                   legs, wheels, tracks            by undulation
Active joints      Serpentine robots:              Snake-like robots:
                   OmniTread                       ACM (Hirose, 1993)
feedback sensors for the passive but rigid joints. Springs are used to protect the electric motors
from impact, although the stiffness of the springs cannot be controlled during operation.
Another robot incorporating a combination of passive and active joints as well as independently
driven and coupled segments is KOHGA developed by Kamegawa et al. (2004). This robot
implements a smart design feature: besides a camera in the front segment, there is a second
camera in the tail section that can be pointed forward, in the way a scorpion points its tail
forward and over-head. This “tail-view” greatly helps teleoperating the robot.
The concept of joining several small robots into a train to overcome larger obstacles was used by
researchers from Carnegie Mellon University in their Millibot Train (Brown et al., 2002). This
robot consists of seven electrically driven, very compact segments. The diameter of the track
sprockets is larger than the height of each segment, which allows the robot to drive upside-down.
Segments are connected by couplers for active connection and disconnection, but the joints have
only one DOF. Each joint is actuated by an electric motor with a high-ratio harmonic gear and
slip clutch. It provides sufficient torque to lift up the three front segments. The robot has been
demonstrated to climb up a regular staircase and even higher steps. However, with only one
DOF in each joint the vehicle is kinematically limited.
A serpentine robot that uses tracks for propulsion and pneumatics for joint actuation is
MOIRA (Osuka & Kitajima, 2003). MOIRA comprises four segments, and each segment has
two longitudinal tracks on each of its four sides, for a total of eight tracks per segment. The
2-DOF joints between segments are actuated by pneumatic cylinders. We believe that the
bellows-based joint actuators used in our OmniTread have a substantial advantage over a
cylinder-based design, as the discussion of our approach in the next section will show.
The newest construction from NREC (National Robotics Engineering Center) is Pipeline
Explorer – robot designed and built for inspection of live gas pipelines (Schempf et al.,
2003). This robot has a symmetric architecture. A seven-element articulated body design
houses a mirror-image arrangement of locomotor (camera) modules, battery carrying
modules, and support modules, with a computing and electronics module in the middle.
The robot’s computer and electronics are protected in purged and pressurized housings.
Segments are connected with articulated joints: the locomotor modules are connected to
their neighbors with pitch-roll joints, while the others are connected via pitch-only joints. These specially
designed joints allow orientation of the robot within the pipe, in any direction needed.
The locomotor module houses a mini fish-eye camera, along with its lens and lighting
elements. The camera has a 190-degree field of view and provides high-resolution color
images of the pipe’s interior. The locomotor module also houses dual drive actuators
designed to allow for the deployment and retraction of three legs equipped with custom-
molded driving wheels. The robot can sustain speeds of up to four inches per second. It is
fully untethered (battery-powered, wirelessly controlled) and can be used in explosive
underground natural gas distribution pipelines.
(here: legs) evenly located around the perimeter of each segment; (2) pneumatic power for
joint actuation; and (3) a single so-called “drive shaft spine” that transfers mechanical power
to all segments from a single drive motor.
Fig. 2. OmniPede.
One of the key features in the design of the OmniPede is that each leg has only one degree of
freedom (DOF). A “leg” and its associated “foot” look like the cross section of an umbrella.
The trajectory of the foot and the orientation of the leg are determined by a simple
mechanism as shown in Figure 3b. The geared 5-bar mechanism moves the leg so that the
foot makes contact with the terrain while performing the backward portion of its motion
(which is the portion that propels the vehicle forward). Then the foot disengages from the
terrain while it performs the forward portion of its motion (as shown in Figure 3c). As a
result the OmniPede moves forward.
By having only one DOF per leg instead of the two or three DOF that most other legged
vehicles have, the number of required actuators is reduced. The price that is paid for the
reduced complexity, weight, and cost is having less control over the position and orientation
of the legs. However, we considered this to be a small sacrifice because with the OmniPede
precise leg positioning is unimportant. Also, the reduced complexity of the legs offers
further advantages, as described below.
The OmniPede consists of seven identical segments, with the tail segment housing the
motor. Each segment has four of the legs shown in Figure 3b arranged circularly on its
circumference and evenly spaced at 90-degree intervals. The legs are arranged this way so
that no matter which part of the OmniPede is in physical contact with the environment,
contact is always made through some of the feet. The segments are connected through
articulated joints, which allow two DOF between the segments. These two DOF are each
independently controlled with a pneumatic piston by means of a four-bar mechanism. This
feature provides the OmniPede with the versatility that was lost by linking the legs
kinematically. The joint actuators enable the OmniPede to lift its front end up and onto
obstacles much the same way a millipede (or a worm, or a snake) does. Another key feature
of the OmniPede design is that the motion of each leg is kinematically linked to a common
drive shaft, called the drive shaft spine, that runs through the centre of the vehicle. This
allows all of the legs to be driven by just one actuator, which supplies torque to the common
drive shaft. Also, because the legs are all kinematically linked by the common drive shaft,
the phase differences between all of the legs are fixed.
Unfortunately, the OmniPede never reached the mobility level of millipedes, partially
because of the scale factor (our robot is much larger than its natural counterpart) and mainly
because we could not produce the same foot density (number of feet per side area of the robot)
as nature did. Therefore, our design needed modifications; one could say it underwent a real
evolution. We abandoned the idea of a few discrete legs altogether, and instead adopted the
abstract idea of a continuous, dense “stream of legs”: we noticed that the trace of each foot can
be seen as a track, as schematically shown in Fig. 3. Using tracks (executing rotation) we
improved the efficiency of the driving mechanism. We also improved the design of the robot’s
joints by introducing the Integrated Joint Actuator (described in detail later). Finally, we
preserved the idea of the “drive shaft spine” with a single drive motor.
Fig. 3. Evolution of driving system: from the legged OmniPede to the tracked OmniTread. a
– millipede, b – 1DOF leg of the OmniPede, c – propulsion idea of OmniPede’s foot, d –
Proof-of-concept prototype of the OmniTread: In an abstract sense, a moving track with
grousers can be seen as a continuous stream of moving legs.
From the study of the OmniPede, and from the observed shortcomings of its legged
propulsion system, we derived important insights about the design of serpentine robots.
These insights led to the development of the far more practical “OmniTread” serpentine
robot, shown in Table 1. This version of the OmniTread, later called “OT-8,” has five
segments and four pneumatically actuated 2-DOF joints. The size of each segment is
20×18.6×18.6 cm (length × width × height). Each joint space is 6.8 cm long. The entire robot is
127 cm long and weighs about 13.6 kg. The OmniTread is teleoperated and has off-board
power sources (electric and pneumatic).
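The quoted OT-8 dimensions are self-consistent, as a quick check shows. The propulsion-ratio definition used here, tracked segment length over total length, is our reading of the term Pr mentioned later in the chapter, not a definition quoted from the source.

```python
# OT-8 geometry as given in the text
segment_len_cm = 20.0
joint_len_cm = 6.8
n_segments, n_joints = 5, 4

# 5 * 20 + 4 * 6.8 = 127.2 cm, matching the quoted overall length of ~127 cm
total_len_cm = n_segments * segment_len_cm + n_joints * joint_len_cm

# Fraction of the robot's length covered by tracked segments
# (assumed reading of the "Propulsion Ratio" Pr).
propulsion_ratio = (n_segments * segment_len_cm) / total_len_cm  # ~0.79
```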
In May 2004 we began work on the latest and current version of the OmniTread, the OT-4.
The number designation comes from its dominant design parameter: the OT-4 can fit
through a hole 4 inches (10 cm) in diameter, whereas the OT-8 can fit through an 8-inch
diameter hole.
The OmniTread OT-4 comprises seven segments and six 2-DOF joints, as shown in Figure 4.
The segment in the centre is called “Motor Segment” because it houses the single drive
motor. All other segments are called “Actuation Segments” because they house, among
others, the control components for the pneumatic actuators. Segments #1 and #7 are able to
hold some payload, such as cameras, microphones, and speakers. Segments #2 and #6 can
hold one micro air-compressor each, for pneumatic power. Segments #3 and #5 hold Li-
Polymer batteries. The OT-4 can carry onboard energy resources for up to 75 minutes of
continuous, untethered driving on easy terrain.
Fig. 4. Overview of the OT-4, showing segments #1 through #7 and joints #1 through #6, with
payload in the distal segments, air compressors in segments #2 and #6, batteries in segments
#3 and #5, and the drive motor in the centre segment.
Fig. 5. Tracks all around: As the OmniTreads drive through rockbeds, it becomes apparent
that side tracks help provide forward propulsion. a – OT-8; b – OT-4.
Fig. 6. OT-4 passes through a 10-cm (4”) diameter hole, high above ground. Extendable
“flipper tracks” in the distal segments allow the robot to straddle the hole without having
to support its weight on the sharp edges of the hole in this test at SwRI.
Fig. 7. Joint strength and stiffness: OT-4 lifting up three lead segments in order to reach the
next step.
3. At times it is necessary to increase the stiffness of a joint, for example to reach over
an obstacle, or for crossing a gap (see Fig. 7). Alternatively, it may be necessary to
adjust the stiffness to an intermediate level, for example, when the lead segment
leans against a vertical wall while being pushed up that wall by the following
segments. Thus, serpentine robots should be capable of adjusting the stiffness of
every DOF individually and proportionally.
4. Large amounts of space dedicated to joints dramatically increase the amount of
inert surface area. Therefore, joint actuators should take up as little space as
possible, to reduce the size of space occupied by joints (called “Joint Space”).
Moreover, it is obvious that the weight of the actuators should be minimal and joint angles
in serpentine robots should be controllable proportionally, to provide full 3D mobility.
Extensive studies of these requirements and of joint actuators potentially meeting these
requirements led to the second unique design feature of the OmniTread, the use of
pneumatic bellows for actuating the joints. Our research (Granosik & Borenstein, 2005)
shows that pneumatic bellows meet all four of the above requirements better than any other
type of actuator. In particular, pneumatic bellows provide a tremendous force-to-weight
ratio, and they fit perfectly into the otherwise unusable (since varying) space between
segments.
Fig. 8. Integrated Joint Actuator: a – In serpentine robots the shape of the so-called “Joint
Space” varies with the angles of the joint. At extreme angles, there are some regions of Joint
Space where there is almost no free space for mounting rigid components. However, the
bellows of the OmniTreads conform to the available space. b – In the OT-8, Joint Space is
only 6.8 cm long while segments are 20 cm long. This design helps produce a very favorable
Propulsion Ratio Pr. The obvious advantage is the OmniTread’s ability to rest its weight on
some obstacle, such as this railroad rail, without getting its Joint Space stuck on it. The sharp
edge of the hole in Fig. 6, however, was meant to penetrate joint space, as an additional
challenge.
The latter point is illustrated in Figure 8a, which shows that parts of Joint Space may be
small at one moment, and large at the next, depending on the bending of the joint. If we
wanted to use Joint Space for housing electronics or other rigid components, then the size of
that component would be limited by the dimensions of the “minimal space” shown in
Figure 8a. Contrary to rigid components, pneumatic bellows fit into such varying spaces
perfectly: bellows expand and contract as part of their intended function, and they happen
to be smallest when the available space is minimal and largest when the available space is
maximal. From the point of space utilization, pneumatic bellows are thus a superbly elegant
solution, because joint actuators take up only Joint Space, and very little of it, for that matter.
Therefore, we call our bellows-based pneumatic system “Integrated Joint Actuator” (IJA). In
contrast, pneumatic cylinders or McKibben muscles, as well as electric or hydraulic
actuators, would all require space within the volume of the segments or much larger Joint
Space. To further illustrate our point about small versus large Joint Spaces, we included
Figure 8b, which shows how the OT-8 successfully traverses a relatively narrow-edged
obstacle, thanks to its very short joints. If the joints were longer than the rail’s width, then
the robot would necessarily get stuck on it.
Fig. 9. CAD drawings of the OmniTread gearboxes: a – OT-8 gearbox, shown here for the
motor segment. b – Schematic of the OT-4 drive system, shown here for the motor
segment. c – CAD drawing of the OT-4 motor segment.
In the case of the OmniTread OT-8, a single 70 W drive motor (Model RE36, made by Maxon) located
in the central segment provides torque to all tracks on all five segments via the drive shaft
spine. The drive shaft spine is an axle that runs longitudinally through all segments.
Universal joints let the axle transfer torque at joint angles of up to 30°. Within each segment
there is a worm on the driveshaft that drives four worm gears offset 90° from each other,
as shown in Figure 9a. Each worm gear runs two spur gears ending in chain drives to
deliver power to the sprocket shafts. The purpose of the spur gears is to bring the chain back
to center again so that the two tracks on each side can be of equal width. The chain drive is
very slim and therefore minimizes the gap between the tracks. The total gear reduction from
the motor to the sprockets is 448:1. The drive system and chain drive are sealed to prevent
dirt from entering the mechanism.
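As a rough sanity check on the drive train, track speed follows directly from motor speed, the 448:1 reduction, and the sprocket circumference. In this sketch only the 448:1 ratio comes from the text; the motor speed and sprocket diameter are illustrative assumptions:

```python
import math

def track_speed_cm_s(motor_rpm: float, reduction: float, sprocket_dia_cm: float) -> float:
    """Linear track speed for a given motor speed and total gear reduction."""
    sprocket_rpm = motor_rpm / reduction          # 448:1 from motor to sprocket
    return sprocket_rpm / 60.0 * math.pi * sprocket_dia_cm

# The motor speed and sprocket diameter below are assumed, not OT-8 data:
print(round(track_speed_cm_s(motor_rpm=6000, reduction=448, sprocket_dia_cm=4.5), 2))
```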
A similar drive train mechanism was developed for the OT-4. The drive shaft spine
comprises seven rigid shafts that are connected by six universal joints. The universal joints
are concentrically located within the gimbal joints that link the segments. On each shaft
segment is a worm. Four worm gears feed off that worm on the drive shaft as shown in
Figure 9b. Each worm gear drives a chain that drives the track sprocket. These worm gears
can be engaged with the worm or disengaged from it by means of electrically actuated bi-stable
clutches. The OT-4 has one such clutch for each of its 7×4 = 28 tracks. These clutches allow
the operator to engage or disengage each track individually. Thus, tracks not in contact with
the environment can be disengaged to reduce drag and waste of energy. If the robot rolls
over, the tracks that come into contact with the ground can be reengaged by the operator.
The drive shaft is supported by two ball bearings at one end of the gearbox to retain good
tolerances within the gearbox. The other end of the drive shaft is floating and only
supported by the universal joint. Not constraining the shaft at three points prevents the
driveshaft from flexing too much, if the structure of the segment warps under high loads.
The OT-4’s motor has too large a diameter to fit into the segment beside the drive shaft spine,
as is the case in the OT-8 (see Figure 9c). However, the OT-4 drive motor has two output shafts,
so that each drives one half of the now-split drive shaft spine.
4.1 Tracks
The OT-8 has 40 tracks and 160 sprockets and rollers. These components make up a
significant portion of the overall weight of the system. In order to minimize system weight,
we sought tracks that were particularly lightweight. In addition, the tracks had to offer low
drag and they had to be able to accommodate debris (especially sand) that could get trapped
between the tracks and the drive sprockets.
A solution was found in the form of a slightly elastic urethane material that would stretch to
accommodate debris without mechanical tensioners, yet was strong enough not to slip over the
sprocket teeth under stress. After testing different track designs we selected the section profile
shown in Figure 10a. This design is an adaptation of the rubber tracks found in the Fast Traxx
remote-controlled toy race car made by Tyco. The trapezoidal extrusion on the bottom of the track
fits into a groove on the sprocket, ensuring that the track stays aligned on the sprocket. For further
testing we rapid-prototyped tracks based on this design using 50 through 90 durometer urethanes.
In the much smaller OT-4 we had to simplify the gearbox; the chain is run off a sprocket
mounted directly on the side of the worm gear. The chain drive is therefore off-center with
respect to the driveshaft and the two tracks per side are therefore not of equal width (see
Figure 10b). The tracks are molded in-house from a silicone mold. That mold is made from a
Stereolithographic (SLA) rapid prototype, based on a CAD model, which we also developed
in-house. The grousers have twice the pitch of the track teeth to better engage features of the
obstacle being scaled. Keeping the grouser pitch a function of the tooth pitch reduces the
stiffness of the track as most of the flexibility of the track comes from the thin area between
the teeth. In order to increase the stability of the robot to minimize roll-overs, we made the
tracks as wide as geometrically possible while still allowing the robot to fit through a 4-inch
hole. The track width is especially important considering the large deflection of the center of
gravity we can impose on the robot by raising three segments in the air. To meet both goals,
we had to minimize the sprocket diameter, as is evident from Fig. 10b.
Discussion:
There are several disadvantages to small sprockets: 1. greater rolling resistance; 2. reduced
ability to transfer torque between the sprocket and the track; and 3. greater sensitivity (i.e.,
failure rate) when driving over natural terrains such as deep sand and underbrush. On these
natural terrains sand and twigs are ingested between the tracks and drive sprocket, forcing
the tracks to overstretch. In most industrial environments and urban search and rescue
operations, however, surfaces are man-made and the OT-4 performs very well on them.
Fig. 10. a – Profile of the OT-8’s urethane tracks, b – Front view of the OT-4. The extra-wide
track areas add stability and reduce the risk of rollovers.
The diameter of the OT-8 track sprockets is much larger than that of the OT-4 track
sprockets. Consequently, the OT-8 performed exceedingly well in deep sand and in
underbrush. In order to transfer more torque, the tooth profile was kept similar to that of a
timing belt, i.e., we maximized the number of engaging teeth.
Fig. 11. The OT-8 outperformed the OT-4 on difficult natural terrains, mostly because of the
OT-8’s larger track sprocket diameter. a – The OT-8 literally plowed through SwRI’s underbrush
test area, aided, in part, by its greater weight and power. b – Unaffected by sand,
the OT-8 had enough power and traction to drive up a 15° inclined bed of deep sand.
4.2 Chassis
The chassis of the OT-8 consists of a duralumin frame with an attached gearbox machined from
Delrin, as shown in Fig. 12a. Most of the components, including sprockets and rollers, were
made in-house. We spent considerable effort reducing the weight of the metal parts and
optimizing their shape. As a result we obtained easy-to-assemble segments with ample spare
space inside. This space can be used for energy storage or payload.
Due to the small size of the OT-4, significant efforts had to be made to organize the
internal components for space efficiency and accessibility (Borenstein et al., 2006). Cables
and pneumatic lines are routed with these goals in mind. For example, the electronic
circuit board on each segment has a connector on each end, with the wires coming from
the neighboring segments plugging into the closer side. This design eliminated the need
for wire runs all the way through the segment. Similarly, instead of using air hoses we
integrated all pneumatic pathways into the chassis. This was possible thanks to SLA rapid
prototyping techniques, which build the parts in layers and allow for such internal
features. The chassis with integrated manifold and “etched-in” pneumatic pathways is
shown in Fig. 12b. SLA rapid prototyping allowed us to create very complex and
otherwise difficult-to-machine structures. The SLA technique also allowed us to design
parts for ease of assembly, maintenance, and space savings. However, SLA resins tend to
warp with time, which is why they are normally used for prototyping only. In our early
OT-4 prototypes, components that were under constant load would creep with time and
would cause problems, especially in the case of the seal between the valves and the
manifold. Aluminum reinforcements were therefore added to the endwalls, joints and
manifold at key points where creep and deformation under load were becoming an issue.
The endwalls were reinforced with a thin aluminum shell and the manifold was
reinforced with a bar screwed on at both ends. The result was a much stiffer segment at a
minor (2.5%) weight penalty.
Fig. 12. a – Aluminum frame of the OT-8 with gearbox, controller boards and manifolds
with white flat cables visible inside, b – Manifold of the OT-4 with two of the eight valves
(white) mounted. Exhaust and supply pathways from and to the bellows are shown in red
and green, respectively. This manifold is also partially the chassis of the six OT-4 actuation
segments.
4.3 Joints
Between any two segments there are two concentric universal joints referred to as the “outer”
and “inner” universal joint. The outer universal joint connects the two adjacent segments. It is
made of two forks and a ball bearing-mounted gimbal connecting the two forks, as shown in
Figure 13a. The inner universal joint (not shown) connects adjacent segments of the drive shaft
spine and is concentrically located inside the gimbal. All components of the outer universal joint
are made from aluminum and each fork is screwed onto the adjacent segment endwall. Two
Hall-effect angle sensors, mounted on one arm of each fork respectively, provide position
feedback for the control of the joint angles. The joint can be actuated at least 33° in any direction
and up to 41° in the four principal directions (up, down and side to side). Wiring and pneumatic
lines between the segments pass through four holes at the corners of the gimbal and the bases of
the forks. The joints are oriented with respect to each other so as to compensate for
gimbal error, the angular twisting deviation that occurs between the two ends of a universal joint
as it is articulated. Without this, three fully articulated joints would lead to each progressive
segment being twisted about the drive spine axis leading to instability and impeding obstacle
traversal. It should be noted that the space available for the mechanical joints is extremely limited
as it is shared with the bellows of the Integrated Joint Actuator (see Fig. 13b). Moreover, space is
limited because we try to dimension the bellows with the largest possible diameter to increase
their force capacity, as explained next.
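The gimbal-error compensation described above follows from standard single-Cardan (universal joint) kinematics, tan(θ_out) = tan(θ_in)/cos(β). A short sketch; the 30° bend matches the OT-8 joint limit, while the 45° shaft position is purely illustrative:

```python
import math

# Single-Cardan relation: tan(theta_out) = tan(theta_in) / cos(beta).
# This textbook kinematic effect is the "gimbal error" the text describes;
# successive joints are oriented so these lead/lag errors cancel.

def output_angle_deg(input_deg: float, bend_deg: float) -> float:
    i, b = math.radians(input_deg), math.radians(bend_deg)
    return math.degrees(math.atan2(math.tan(i), math.cos(b)))

lead = output_angle_deg(45.0, 30.0) - 45.0   # output shaft leads the input
print(round(lead, 2))
```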
Fig. 13. Joints in the OmniTread robots: a – Outer universal joint, with Hall-effect joint angle
sensor as used in OT-4. b – Cross-section of the Integrated Joint Actuator.
4.4 Bellows
The force that a bellows can produce is limited by the cross-section area of its inner whorls, Amin. Yet, the volume of space that the
bellows requires is determined by the diameter of its outer whorls. In the relatively large OT-8,
the ratio between inner and outer diameter of the whorls (we refer to this ratio as “bellows
efficiency”) is fairly close to 1.0. However, in the smaller bellows of the OT-4, the bellows
efficiency is much smaller than 1.0. In many conventional bellows the diameter of the inner
whorl increases when inflated, thereby improving that bellows’ efficiency. However, our OT-8
bellows design uses a metal ring around the inner whorls to prevent the bellows from
ballooning. At the same time, these rings prevent the inner whorls from growing in diameter,
thereby keeping the bellows efficiency low. To overcome this problem in the small-sized OT-4
bellows, we abandoned the metal rings altogether. Instead, we encased the OT-4 bellows in a
tubular polyester mesh. To distinguish between these parts, we call the airtight, elastic part of the
bellows “liner,” and the outer part “mesh.”
The new two-part bellows of the OT-4, shown in Figure 14b, has the significant advantage of
allowing the diameter of the inner whorl to grow when pressurized, until the inner whorl is
practically flush with the mesh. The result is a bellows that has an efficiency of close to 1.0,
when fully pressurized.
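The efficiency notion above can be made concrete: the axial force scales with the inner-whorl cross-section (F = P·Amin), while the occupied volume scales with the outer-whorl diameter. A short sketch, in which all dimensions and the pressure are illustrative assumptions rather than measured OmniTread bellows data:

```python
import math

# "Bellows efficiency": ratio of inner to outer whorl diameter. Force comes
# from the inner-whorl area only, so an efficiency near 1.0 means more force
# per unit of occupied Joint Space. Numeric values below are assumptions.

def bellows_efficiency(inner_dia: float, outer_dia: float) -> float:
    return inner_dia / outer_dia

def axial_force_N(pressure_pa: float, inner_dia_m: float) -> float:
    return pressure_pa * math.pi * (inner_dia_m / 2.0) ** 2

# A mesh bellows whose inner whorl inflates nearly flush with the mesh:
print(round(bellows_efficiency(2.8, 3.0), 2))
print(round(axial_force_N(550e3, 0.028), 1))   # at roughly 80 psi (~550 kPa)
```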
There is, however, one problem with all bellows designs: When the bellows extends beyond the
natural length of the liner, the axial extension force F = PA has to work against the elasticity of the
liner. Similarly, when a bellows in a joint is compressed beyond a certain limit (e.g., because the
bellows on the opposite side is expanding), its liner and mesh develop elastic forces that resist
further compression with increasing force. This makes it harder for the opposing bellows
to expand. As a result of these effects, the moment produced by the bellows when installed
inside a joint is neither constant nor dependent only on the applied pressure differential. Rather,
the produced moment is a non-linear function of the joint’s momentary angle. For extreme joint
angles, the moment produced by the joints may be reduced by as much as 50% compared to the
maximal moment that’s available when the joint is in its neutral position. In the OT-4, however,
we dimensioned the bellows so as to be powerful enough to lift three segments even at near-
maximal joint angles.
4.5 Power
In the OmniTread OT-8 prototype, electric and pneumatic energy, as well as control
signals are provided through a tether – a 1 cm thick and 10 m long cable comprising six
wires and a pneumatic pipe. Compressed air is supplied from an off-board compressor
and distributed to the control valves from a single pipe running through the center of
the robot. In the experiments described in the next section the compressor provided
variable pressure from 85 to 95 psi but the control system limited the maximum
pressure in the bellows to 80 psi.
Of course, a tether is highly undesirable for most inspection and surveillance tasks.
That’s why, when we designed the OT-4, we incorporated all energy resources onboard,
and provided it with wireless communication. The OT-4 has Li-Pol batteries in
Segments #3 and #5. Pneumatic energy, that is, compressed air, is produced onboard by
two miniature air compressors, one each in Segments #2 and #6. Fully charged Li-Pol
batteries allow the OT-4 to drive for 75 minutes on flat terrain. On very difficult terrain
the run time is shorter.
Fig. 14. a – UM-developed rubber bellows used in the OT-8.
b – OT-4 bellows comprising a liner and a mesh. We chose yellow latex liner material for
this photograph because it contrasts better with the black mesh. However, the actual OT-4
bellows have neoprene (black) liners.
A unique means of preserving electric power is the OT-4’s micro-clutches, which allow it to
engage or disengage each individual track from the main drive shaft. Disengagement of tracks
not in contact with the ground results in a significant saving of electric energy. For instance, on
flat ground only three tracks need to be engaged: the ones on the bottom of Segments #2, #3 and
#6, while the other segments are slightly lifted off the ground. This configuration provides stable,
3-point contact and steering is accomplished by the joint between Segments #2 and #3.
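The engage/disengage bookkeeping can be sketched as a small data structure. The flat-ground policy (only the bottom tracks of Segments #2, #3 and #6 engaged) is from the text; the class, its field names, and the bottom-track index convention are hypothetical:

```python
# Bookkeeping sketch for the OT-4's 7x4 = 28 individually switchable track
# clutches. Only the 28-clutch count and the three-track flat-ground policy
# come from the chapter; everything else here is an assumed convention.

NUM_SEGMENTS, TRACKS_PER_SEGMENT = 7, 4

class ClutchBank:
    def __init__(self):
        # All 28 tracks engaged by default.
        self.engaged = [[True] * TRACKS_PER_SEGMENT for _ in range(NUM_SEGMENTS)]

    def set_track(self, segment: int, track: int, state: bool) -> None:
        self.engaged[segment][track] = state

    def engaged_count(self) -> int:
        return sum(sum(seg) for seg in self.engaged)

bank = ClutchBank()
# Flat-ground configuration: bottom track (assumed index 0) of Segments
# #2, #3 and #6 (0-based indices 1, 2, 5) stays engaged.
for seg in range(NUM_SEGMENTS):
    for trk in range(TRACKS_PER_SEGMENT):
        bank.set_track(seg, trk, seg in (1, 2, 5) and trk == 0)
print(bank.engaged_count())  # 3
```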
5. Control System
The task of the control system is to control the position of a joint, as well as its stiffness. The
controlled parameters are the pressures pA and pB in the bellows-pair that actuates the joint. In
order to control pA and pB, the PID controller generates the control signal u as input for the valve
control subsystem. This subsystem realizes the stiffness control and air flow minimization by
activating the four pneumatic valves according to the flow chart in Figure 16. In every control
cycle only one of the four valves is active, i.e. generates airflow to or from one of the bellows.
The SPPS assigns higher priority to stiffness when conflicts between position control and
stiffness control arise. However, the SPPS control system cannot change the stiffness of a
joint in steady state because the valve control subsystem is activated by position errors only.
As our experiments showed, however, this limitation can easily be accommodated by the
teleoperators or by an onboard computer, in future, more advanced OmniTread models.
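The valve-selection logic just described can be sketched in code. The stiffness measure pA + pB, the stiffness-first priority, and the zero-airflow steady state come from the text and its flow chart; the sign conventions and the deadband value are assumptions:

```python
def select_valve(u: float, p_a: float, p_b: float, s_demand: float,
                 deadband: float = 0.01):
    """Pick at most one of the four on-off valves to pulse this control cycle.

    Stiffness (p_a + p_b) has priority over position, and no valve fires at
    steady state, so no compressed air is consumed during straight travel.
    The sign convention for u is assumed, not taken from the chapter.
    """
    if abs(u) < deadband and p_a + p_b >= s_demand:
        return None                          # steady state: zero airflow
    if p_a + p_b < s_demand:                 # stiffness too low: fill a bellows
        return "fill_A" if u < 0 else "fill_B"
    # Stiffness sufficient: correct position by venting the opposing bellows.
    return "vent_B" if u < 0 else "vent_A"

print(select_valve(u=0.0, p_a=40, p_b=45, s_demand=80))   # None
print(select_valve(u=-0.5, p_a=30, p_b=30, s_demand=80))  # fill_A
```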
Fig. 15. Block diagram of the Simplified Proportional Position and Stiffness (SPPS) system
with zero air consumption at steady state.
[Fig. 16 (flow chart, not reproduced): the pulse lengths t1..t4 of the four valves are initialized to zero and then set from the control signal u and the demanded stiffness S via tests such as pA+pB < S and u < 0.]
Fig. 17. Schematic diagram of the OmniTread control system, shown for the OT-8.
We used the same control system structure for the OT-4. However, as this robot has six
joints, the number of controllable parameters is even larger than in the OT-8. To reduce the
number of controllable parameters we decided to control the stiffness of each IJA (i.e., a group of
four bellows) instead of the stiffness of each individual bellows. This reduces the number of
controlled parameters to 6 joints × 2 DOF + 6 × stiffness + 1 × speed = 19 parameters. It was also
necessary to fit the controller boards into the very small space of the OT-4 segments. Moreover,
additional functions in the OT-4, such as micro-clutches, require additional circuitry.
In order to accommodate all of this circuitry, we split the functions into one Main Controller
board and another Auxiliary Controller board, as shown in Fig. 18. Each Main Controller board
manages the position and stiffness control for its adjacent 2-DOF joint. The microcontroller can
receive new position and stiffness commands and return feedback data (two positions and four
pressures) every 10 ms. Each microcontroller sends digital PWM signals to eight on-off
pneumatic valves (two for each bellows) to implement the Simplified Proportional Position and
Stiffness (SPPS) controller described in detail in Section 5.1. Each microcontroller also reads
positions from two Hall-effect angle sensors and pressures from four pressure transducers.
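The per-joint feedback exchange can be sketched as a record type. Only the 10 ms period and the "two positions, four pressures" payload come from the text; the field names and units are assumptions:

```python
from dataclasses import dataclass
from typing import Tuple

# Sketch of the feedback record each Main Controller returns every 10 ms:
# two joint angles (Hall-effect sensors) and four bellows pressures.
# Field names and units are hypothetical, not the OT-4's actual protocol.

CONTROL_PERIOD_S = 0.010  # commands and feedback exchanged every 10 ms

@dataclass
class JointFeedback:
    angle_pitch_deg: float
    angle_yaw_deg: float
    bellows_psi: Tuple[float, float, float, float]

fb = JointFeedback(5.0, -3.2, (20.0, 35.0, 18.0, 22.0))
print(len(fb.bellows_psi), 1.0 / CONTROL_PERIOD_S)  # 4 pressures at 100 Hz
```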
Fig. 18. Electronic control circuit boards in the OT-4.
a Main Joint Controller board. One of these double-sided boards is installed
in each of the six Actuation Segments. These boards perform (i)
communication via the CAN bus, (ii) PWM control of the eight valves per
segment, and (iii) control of the micro-clutches, as well as (iv) measurements of
joint angles and bellows pressures.
b Auxiliary Controller board. One of these angled PCBs is installed in each
of the six Actuation Segments. These boards provide added sensing and control
capability, including: a system pneumatic pressure sensor, a PWM compressor
control switch, four force-sensor inputs (for future work), and a PWM servo
control output. Not all of the outputs are needed in all of the segments.
Fig. 19. Drive Motor Controller Board. This board generates the PWM signals for the off-the-
shelf digital power amplifier for the OT-4 motor.
minutes). On extremely difficult obstacles, where joints are actuated a lot and all tracks are
engaged, the motor battery can be depleted in as little as 25 minutes.
Fig. 20. The 730 mAh Li-Pol battery is in its place in Segment #5. The remaining space will
be completely filled by the 2,000 mAh Li-Pol battery in the engineer’s hand.
Fig. 21. (above) Modified dual-head air compressor installed in Segments #2 and #6.
Fig. 22. (above) The dual-pressure compressor system. The compressor heads can be
switched between (a) Parallel Mode and (b) Series Mode by switching the state of the
solenoid valve.
Fig. 23. Plot of pressure versus time required to fill a fixed test volume (2.1 liter) with air at
different pressures. Comparison of dual-pressure compressor working in series (blue) and
parallel (red) mode. This chart is for one compressor, although the OT-4 uses two compressors.
Fig. 24. Control communication system components: a – tethered system for the OT-8; b –
wireless system for the OT-4.
We measured the output (pressure and flow rate) of both compressor modes by connecting
the outlet of one of the two compressors to a 2.1-liter container and timing how quickly it
built pressure. The results are shown in Fig. 23. As can be seen, low pressures up to 15 psi
can be built faster in Parallel Mode. Pressures above ~15 psi build up faster in Series Mode.
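The two-mode filling strategy these measurements suggest can be sketched as a simple controller that switches the solenoid valve at the crossover pressure. Only the ~15 psi crossover comes from the measurements above; the pressure build rates are invented placeholders:

```python
# Sketch of the dual-pressure strategy: run the two compressor heads in
# Parallel Mode (higher flow) below ~15 psi, then switch the solenoid valve
# to Series Mode (higher pressure) to keep filling. Rates are assumptions.

CROSSOVER_PSI = 15.0

def choose_mode(tank_psi: float) -> str:
    return "parallel" if tank_psi < CROSSOVER_PSI else "series"

def fill_time_s(target_psi: float, dt: float = 0.1) -> float:
    """Simulated fill time with mode switching (assumed psi/s build rates)."""
    rate = {"parallel": 0.5, "series": 0.3}  # psi per second, illustrative
    psi, t = 0.0, 0.0
    while psi < target_psi:
        psi += rate[choose_mode(psi)] * dt
        t += dt
    return t

print(choose_mode(10.0), choose_mode(30.0))  # parallel series
```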
Fig. 25. Onboard components of the OT-4’s wireless control communication system.
Fig. 26. Gear box and micro-clutches: a – CAD drawing; b – photograph.
In practice we implemented the micro-clutches as shown in Fig. 26. To disengage a track, a micro
motor moves one link of a four-bar mechanism so that the worm gear is lifted off the worm.
Micro-switches (not shown here) stop the micro-motor in two stable, self-locking positions. These
positions correspond to the worm gear being fully engaged or disengaged from the worm.
Fig. 27. Flipper track during deployment to its fully extended position.
7. Experimental Results
In order to assess the performance of our robots under realistic and objective conditions, the
OmniTreads (OT-8 and OT-4) were tested at the Small Robotic Vehicle Test Bed at the
Southwest Research Institute (SwRI) in San Antonio, Texas. The OT-8 robot was tested in
2004, while the OT-4 was tested in 2006.
During these tests the OmniTread OT-8 was continuously controlled by two operators who had
audio and visual contact with the robot, allowing them to monitor the robot’s behavior at all
times. The OT-4 required three trained operators to run. The SwRI test facility is well designed
and allows for the objective assessment of vastly different small robot designs. Among the tests
were tasks such as climbing over high steps, ascending through the inside of pipes, traversing
wide gaps, and many more. A detailed description of all tests and their results would far exceed
the scope of this chapter. Rather, we refer the reader to the many photographs in this chapter and
point out that in each of the depicted scenarios, the robot successfully completed the run or
passed the obstacle. The same is true for the photographs in Figure 28, which shows some more
test environments and obstacles (some from SwRI, some from our own lab).
In addition, Table 2 provides some technical specifications and comments on performance.
Lastly, video clips are particularly helpful in conveying a sense of the capabilities of
serpentine robots. We therefore refer the reader to our large video library on the web at
http://www.engin.umich.edu/research/mrl/OmniTread_Video.html.
Table 2. Technical specifications and performance of the OT-8 and OT-4.

Parameter | OT-8 | OT-4

Specifications
Smallest hole robot can pass through | 20 cm (8 inches) diameter | 10 cm (4 inches) diameter
Dimensions (length, width, height) | 127×18.6×18.6 cm | 94×8.2×8.2 cm
Number of segments | 5 | 7
Joint segment length | 20 cm | 10.3 cm
Motor segment length | 20 cm | 10.9 cm
Joint length | 6.8 cm | 3.6 cm
Weight | 13.6 kg | 4.0 kg (incl. batteries & flippers)
Power | Off-board lead-acid batteries and air compressor (80 psi); both resources brought to robot via tether | On-board Li-Pol batteries (43 Wh) and 2 air compressors (45 psi); on-board resources sufficient for 75 minutes of driving on smooth terrain
Control | 2 operators, 2 gamepads, off-board laptop | 3 operators, 3 gamepads, off-board laptop
Control signals | CAN bus, via tether | Wireless serial link, no tether

Special features
Common to both OT-8 and OT-4 | Extra-wide tracks on all four sides of robot; pneumatic joint actuation with position & stiffness control; single drive motor with drive shaft spine
Tether | Requires tether | Completely untethered
Micro-clutches | None needed, since electric power is “free” with tether | Electrically actuated micro-clutches to engage/disengage each track
Flipper tracks | (none) | Flipper tracks extend reach of lead & tail segment by 8 cm
Supply pressure | Variable, since compressor is off-board | On-the-fly switchable maximal supply pressures: 30 & 45 psi
Drive train protection | Very strong components, since not optimized for weight & space | Mechanical overload protection for each branch of drive train by shear pins

Performance
Maximal velocity | 10 cm/s | 15 cm/s
Minimum turning radius (inside) | 53 cm | 16 cm
Performance on rocks or rubble | Excellent; never failed a test | Excellent; never failed a test
Performance on stairs | Poor; can climb only stairs with small slopes and long treads | Excellent; successfully climbed different combinations of rise/tread, up to 40°
Performance in deep sand | Excellent; able to climb 15° slope, can drive indefinitely | Very poor; cannot climb any slope, stalled after 5 meters
Performance in underbrush | Excellent; plows through, can drive indefinitely | Poor; ingests twigs and grass, stalled after 12 meters
Climbing in PVC pipe, diameters | 30 cm | 10 cm, 15 cm, 20 cm
Maximal inclination of pipe | 22° | 90° (vertical)
Maximal height of vertical wall climbed | 46 cm (2.5× own height) | 40 cm (4.9× own height); higher with flipper (not tested)
Width of largest gap crossed | 66 cm (52% of own length) | 49 cm (52% of own length)
Enter hole in wall, smallest diameter | 20 cm | 10 cm
Height of center of hole above ground | Not tested | 42 cm
Fig. 28. OmniTread testing in different challenging environments:
a – OT-4 climbing up and over a 40-cm (15.5”) high wall (5× the robot’s height).
b – OT-4 traversing a gap 49 cm (19.25”) wide. This is 52% of its length.
c – OT-4 climbing up inside a 4-inch diameter PVC pipe. Speed: 8 cm/s.
d – OT-4 climbing up inside an 8-inch diameter PVC pipe. Speed: 6 cm/s.
e – OT-4 climbing up steep stairs sloped 40°.
f – OT-4 inside an 8-inch diameter pipe, halfway through a 90° elbow.
8. Conclusion
This chapter introduced the Omnis family of serpentine robots. Serpentine robots have the
potential to provide hitherto unattainable capabilities, such as climbing over high steps, traveling
inside horizontal or even vertical pipes, or traversing wide gaps. While individual tasks of this
nature have been tackled in the past by special-purpose mobile robots (e.g., pipe crawlers), it
appears that only serpentine robots may be able to perform a large variety of difficult tasks.
We started our project with the design of the legged OmniPede. Technical evolution then led to
the design of the OmniTread OT-8, and finally to our most advanced model, the OT-4. We
believe that the OmniTread robots have a particularly high potential to become truly practical.
Notable in the OmniTread are several innovative features as summarized here:
• Pneumatically actuated 2-DOF joints – The 2 DOF joints of the OmniTread are
actuated pneumatically. Pneumatic actuation provides natural compliance with the
terrain for optimal traction and shock absorption, as well as a very high force-to-
weight ratio.
• Bellows used as pneumatic actuators – Bellows are ideal for serpentine robots
because they fit naturally into the space occupied by the joints. This minimizes the
space requirement, in particular the surface area not covered by propulsion elements. In addition,
bellows can expand to four times their compressed length. (US Patent #6,870,343).
• Proportional Position and Stiffness Control system – The pneumatic control system,
developed at our lab especially for serpentine robots, allows simultaneous,
proportional control over position and stiffness of each individual bellows. In
addition, the system uses compressed air only when changing the position or the
stiffness of a bellows. Thus, during long stretches of straight travel no compressed
air is used at all. (US Patent #6,870,343).
• Maximal coverage of robot surface with moving tracks – In this chapter we
identified and formalized the need for maximizing the so-called “Propulsion
Ratio,” Pr. We implemented this design doctrine by covering all sides of all
segments with extra-wide, moving tracks. The cost for this approach is additional
complexity. The cost for not using this approach is mission failure when the robot
gets stuck on troublesome terrain (US Patent #6,774,597).
• Single drive motor for all segments and drive shaft spine – Our analysis shows
that a single drive motor is more energy-, weight-, and space-efficient than
multiple motor configurations (i.e., one motor in each segment). Motor power
is transferred to the segments via a drive shaft spine that runs the length of the
robot. Within each segment the drive shaft spine is a rigid axle, connected to
the axle in neighboring segments via a universal joint (not a flexible shaft). (US
Patent #6,512,345).
We are currently working on the higher-level control system for serpentine robots. A
problem with high-degree-of-freedom (HDOF) serpentine robots is that they often require
more than one human operator. In the case of the OT-4, three operators simultaneously
control the robot using six individual joysticks as well as auxiliary instrumentation.
In order to reduce the number of operators needed, we developed a “Haptic Operator
Console” (HOC), which we call the “Joysnake” (as in joystick). The premise of the
Joysnake is that the fastest and most intuitive method for a human operator to
command a pose for a high-degree-of-freedom robotic mechanism is to shape an
adjustable replica of the mechanism into the desired pose (Baker & Borenstein, 2006). In
most simple to moderately difficult tasks the Joysnake works well enough to replace
the three operators with just one. However, in very complex tasks the three operators
perform better.
9. Acknowledgements
This work was funded by the U.S. Dept. of Energy under Award No. DE FG52 2004
NA25587 as well as by the Intelligence Technology Innovation Center under Grants
#F013596 and F013019.
The authors are grateful to the OmniTread OT-4 development team for their full-time
efforts: Jack Phan, Russ Miller, Jim Berry, Justin Tesmer, Adam Borrell, Oke Liimatta, Hung
Nguyen, and John Baker. We are also grateful to David Thomas, Jenny Weinberg and the
many machinists who worked on the OT-4 part-time.
We are also grateful to Dr. William Hutchison who developed the “7G” computerized
learning program. This program produced the sophisticated and complex motion strategy
used for traversing the steep staircase in Fig. 7 and Fig. 28c.
10. References
Baker, J. & Borenstein, J. (2006). The Joysnake – A Haptic Operator Console for High-Degree-
of-Freedom Robots. 2006 International Joint Topical Meeting: “Sharing Solutions for
Emergencies and Hazardous Environments,” February 12-15, Salt Lake City, Utah,
USA
Blitch, J.G. (2003). Adaptive Mobility for Rescue Robots, Proc. of SPIE, Sensors, and Command,
Control, Communications, and Intelligence (C3I) Technologies for Homeland Defense and
Law Enforcement II, Vol. 5071, pp. 315-321.
Brown, H.B., Jr.; Vande Weghe, J.M.; Bererton, C.A.; Khosla, P.K. (2002). Millibot trains for
enhanced mobility, IEEE/ASME Transactions on Mechatronics, Vol. 7, Issue 4, pp. 452-461.
Borenstein, J.; Hansen, M.G. & Nguyen, H. (2006). The OmniTread OT-4 Serpentine Robot
for Emergencies and Hazardous Environments, 2006 Int. Joint Topical Meeting:
“Sharing Solutions for Emergencies and Hazardous Environments,” February 2006, Salt
Lake City, Utah, USA.
Granosik, G. & Borenstein, J. (2004) Minimizing Air Consumption of Pneumatic Actuators in
Mobile Robots, Proc. IEEE Int. Conference on Robotics and Automation, New Orleans,
pp. 3634-3639.
Granosik, G. & Borenstein, J. (2005). Integrated Joint Actuator for Serpentine
Robots, IEEE/ASME Transactions on Mechatronics, Vol. 10, pp. 473-481,
October 2005.
Hirose, S. (1993). Biologically Inspired Robots (Snake-like Locomotor and Manipulator), Oxford
University Press.
Hirose, S. & Morishima, A. (1990). Design and Control of a Mobile Robot With an Articulated
Body, The International Journal of Robotics Research, Vol. 9, No. 2, pp. 99-113.
Hirose, S.; Morishima, A.; Tukagosi S.; Tsumaki T.; Monobe, H. (1991). Design of
Practical Snake Vehicle: Articulated Body Mobile Robot KR-II, Fifth Int.
Conference on Advanced Robotics, 'Robots in Unstructured Environments', Vol. 1,
pp. 833-838.
Kamegawa, T.; Yamasaki, T.; Igarashi, H.; Matsuno, F. (2004). Development of the Snake-
like Rescue Robot KOHGA, Proc. IEEE Int. Conference on Robotics and Automation,
New Orleans, LA, April, pp. 5081-5086.
Kimura, H. & Hirose, S. (2002). Development of Genbu: Active wheel passive joint
articulated mobile robot, Proc. IEEE/RSJ Int. Conference on Intelligent Robots and
System, Vol. 1, pp. 823-828.
Klaassen, B. & Paap, K.L. (1999). GMD-SNAKE2: A Snake-Like Robot Driven by Wheels and
a Method for Motion Control, Proc. IEEE Int. Conference on Robotics and Automation,
Detroit, MI, May 10-15, pp. 3014-3019.
Long, G.; Anderson, J.; Borenstein, J. (2002). The OmniPede: A New Approach to Obstacle
Traversion. Proc. IEEE Int. Conf. on Robotics and Automation, USA, pp. 714-719.
Mori, M. & Hirose, S. (2002). Three-dimensional serpentine motion and lateral rolling by
active cord mechanism ACM-R3, IEEE/RSJ International Conference on Intelligent
Robots and System, Vol. 1, pp. 829-834.
Ohno, H. & Hirose, S. (2000). Study on slime robot (proposal of slime robot and design of
slim slime robot), Proc. IEEE/RSJ Int. Conference on Intelligent Robots and Systems,
(IROS 2000), Vol. 3, pp. 2218-2223.
Osuka, K. & Kitajima, H. (2003). Development of Mobile Inspection Robot for Rescue
Activities: MOIRA, Proc. IEEE/RSJ Int. Conference on Intelligent Robots and Systems,
Las Vegas, Nevada, pp. 3373-3377.
Paap, K.L.; Christaller, T.; Kirchner, F. (2000). A robot snake to inspect broken buildings,
Proc. IEEE/RSJ Int. Conference on Intelligent Robots and Systems, pp. 2079-2082.
Schempf, H., Mutschler, E., Goltsberg, V., Skoptsov, G., Gavaert, Vradis, G. (2003). Explorer:
Untethered Real-time Gas Main Assessment Robot System, Proc. of Int. Workshop on
Advances in Service Robotics, ASER’03, Bardolino, Italy.
Scholl, K.U.; Kepplin, V.; Berns, K.; Dillmann, R. (2000). Controlling a multi-joint robot for
autonomous sewer inspection, Proc. IEEE Int. Conference on Robotics and Automation,
ICRA '00, vol.2, pp. 1701-1706.
Streich, H. & Adria, O. (2004). Software approach for the autonomous inspection robot
MAKRO. Proc. IEEE Int. Conference on Robotics and Automation, New Orleans, LA,
USA, pp. 3411-3416.
Takayama, T. & Hirose, S. (2000). Development of Souryu-I connected crawler vehicle for
inspection of narrow and winding space, 26th Annual Conference of the IEEE
Industrial Electronics Society, IECON 2000, Vol. 1, pp. 143-148.
34
1. Introduction
Primates are the living beings with the greatest capacity for manipulation. This skill derives
from the fact that they have two upper limbs equipped with elements adapted to manipulation and grasping.
The simultaneous use of both legs and arms confers on these living beings their special
ability to manipulate and manage different objects. As with many technical developments
inspired by the observation of nature, parallel robots were conceived in a similar way:
the mechanical structure of a parallel robot is composed of a closed chain in which
the end effector is linked to the base by at least two independent kinematic chains.
This definition can be in conflict with developments on coordinated robots, which also
constitute closed kinematic chains. Parallel robots simplify these chains in such a way that
each one has only one actuator. Thus the complexity of the mechanism can be reduced, and
the energy provided by the actuators can be exploited to obtain a higher
payload capacity or a higher speed of movement of the end effector.
The first theoretical works on parallel mechanical structures appeared long ago, even before
the notion of a robot. The first parallel mechanism was patented in 1931 (US Patent
Nº 1789680) and was designed by James E. Gwinnett (Gwinnett, 1931). In 1940 Willard Pollard
presented a robot with 5 degrees of freedom dedicated to painting tasks. The robot was
composed of three legs of two links each, with the three actuators at the base driving the position
of the tool.
However, other more significant parallel mechanisms have been developed since then. In
1947 Eric Gough designed the most popular parallel platform; nowadays numerous such
platforms can be found under the name MAST (Multi-Axis Simulation Table). In 1965,
Stewart (Stewart, 1965) described a motion platform with 6 degrees of freedom (dof)
designed for use as a flight simulator. Contrary to general belief, the Stewart
mechanism is different from the one previously presented by Gough. The work presented by
Stewart had, and still has, great influence in the academic world, and it is considered one of
the first analyses of parallel structures.
At the same time, Klaus Cappel carried out extensive research on parallel platforms with
6 degrees of freedom at the Franklin Institute Research Laboratory. In 1967 Cappel patented a
motion simulator based on a hexapod (Cappel, 1967). Later, in 1978, Hunt (Hunt, 1978)
suggested that the mechanisms of flight simulators could be used as parallel robots. Hunt
also pointed out that parallel robots required a more detailed study in the context of
robotic applications, owing to the accuracy and rigidity advantages of these platforms.
Besides an excellent payload-to-weight ratio, parallel robots have other interesting
features. In 1979, McCallion and Pham (McCallion and Pham, 1979) suggested using the
Stewart platform as a parallel robot in a robotic assembly cell, basically because the end
effector is much less sensitive to positioning errors than that of serial robots. The rigidity
ensures that the distortions in the actuators are minute, which produces high accuracy in
the pose of the robot.
In this paper we present some parallel robots designed and developed to climb along
several kinds of structures, such as pipes and tubular or metallic structures. The proposed
parallel robots can climb inside or outside tubular structures or palm trunks. A parallel
robot with slight modifications has also been developed to climb metallic structures of
bridges or building domes. The purpose of this paper is therefore to show the promising
applications that can be achieved using parallel robots as climbing robots.
The remainder of the paper is structured as follows. In the next section the main features of
parallel robots are presented and discussed, and different possible configurations and
morphologies are suggested. In the third section the kinematics and dynamics of parallel
robots are analysed. In the following section some prototypes of parallel climbing robots
are shown, and their main features are analyzed and discussed. In section five, other
parallel robots developed starting from the structure of the first parallel climbing robot
prototype are presented. Finally, some conclusions and future works are proposed.
Fig. 1. Planar parallel robots with 2 and 3 dof (a); and 3D parallel robots (b).
Fig. 2. Mechanical design of a parallel robot to climb along tubular structures (the figure
labels the linear actuators, universal joints, hinge joints and Ring-1).
Several modifications of this general morphology can be made so that the robot can climb
along other structures. For instance, legs can be added to each of the external rings so that
the parallel robot can work on the outside wall of pipes. These legs fasten one ring to the
pipe while the other ring moves along the structure. These modifications are discussed in
Section 4.
The inverse kinematics of the robot follows from the vector loop closure of each leg:

r_i = r_1 + A_1 s_1^i − s_0^i ,   i = 1,…,6   (1)

where:
• A_1 is the rotation matrix that represents the orientation of the mobile ring with respect to the base;
• s_1^i and s_0^i are the position vectors that locate the universal and spherical joints with respect to the mobile ring and the base reference frame, respectively;
• r_1 is the position vector of the mobile ring expressed in the reference frame of the base;
• and finally r_i is the vector along each of the linear actuators.
These six equations constitute the inverse kinematic model, in which the length of each
actuator is the modulus of the corresponding vector r_i. As can be observed, this
methodology produces the solution in a straightforward way, except at singular
configurations. In some singular cases several solutions can exist, in a similar way as for
serial robots.
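As a minimal illustration of Eq. (1), the following sketch (illustrative geometry and function names, not the authors' code) computes the six actuator lengths for a given ring pose and joint layout:

```python
import math

def mat_vec(A, v):
    """Multiply a 3x3 rotation matrix by a 3-vector."""
    return [sum(A[r][c] * v[c] for c in range(3)) for r in range(3)]

def inverse_kinematics(r1, A1, s1, s0):
    """Return the six actuator lengths |r_i| from Eq. (1):
       r_i = r1 + A1*s1_i - s0_i,  i = 1..6.
    r1: position of the mobile ring in the base frame.
    A1: rotation matrix (orientation of the mobile ring).
    s1: six joint positions on the mobile ring (mobile frame).
    s0: six joint positions on the base (base frame).
    """
    lengths = []
    for s1i, s0i in zip(s1, s0):
        p = mat_vec(A1, s1i)                       # rotate ring joint into base frame
        ri = [r1[k] + p[k] - s0i[k] for k in range(3)]
        lengths.append(math.sqrt(sum(x * x for x in ri)))
    return lengths

# Illustrative geometry (not a real platform): base and ring joints share the
# same angular layout on equal circles, the ring is not rotated and sits 1 m
# above the base, so every leg length is exactly 1 m.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
angles = [k * math.pi / 3 for k in range(6)]
pts = [[math.cos(a), math.sin(a), 0.0] for a in angles]
lengths = inverse_kinematics([0.0, 0.0, 1.0], I3, pts, pts)
```

This mirrors the text's observation: given the pose, the actuator lengths follow directly and uniquely outside singular configurations.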
The dynamic model can be written as the differential-algebraic system

M q̈ + Φ_q^T λ = Q
Φ_q q̇ = v
Φ_q q̈ = γ        (3)
Φ = 0

where:
• q is the vector of generalized positions of the system;
• Q is the vector of external generalized forces applied to the system;
• M is the generalized mass matrix of the system;
• Φ is the vector of motion constraints and Φ_q its Jacobian with respect to q;
• λ is the vector of Lagrange multipliers;
• v is the velocity vector;
• γ is the acceleration vector.
To solve these equations, numerical methods must be employed; as a consequence, an
analytic dynamic model cannot be derived.
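In practice, system (3) is solved numerically at each time step by stacking the dynamic equations and the acceleration-level constraint into one linear system in the unknowns (q̈, λ). The sketch below illustrates this on the simplest possible case, a point mass constrained to a circle, not on the climbing robot's model:

```python
def solve_linear(A, b):
    """Tiny Gaussian elimination with partial pivoting for an n x n system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def constrained_accel(m, g, x, y, vx, vy):
    """Solve [M, Phi_q^T; Phi_q, 0] [qdd; lam] = [Q; gamma] for a point
    mass on the circle Phi = x^2 + y^2 - L^2 = 0 (a planar pendulum)."""
    phi_q = [2.0 * x, 2.0 * y]                  # constraint Jacobian
    gamma = -2.0 * (vx * vx + vy * vy)          # velocity-dependent RHS term
    A = [[m, 0.0, phi_q[0]],
         [0.0, m, phi_q[1]],
         [phi_q[0], phi_q[1], 0.0]]
    b = [0.0, -m * g, gamma]                    # Q = gravity only
    xdd, ydd, lam = solve_linear(A, b)
    return xdd, ydd, lam
```

At the horizontal position with zero velocity the constraint force vanishes and the mass accelerates purely downward, as expected physically.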
The automatic control of a robot that climbs along tubular structures must take into
account geometrical changes in the path of the tubes. For this purpose, three ultrasonic
sensors, spaced 120º apart, were installed on each ring. The three sensors allow computing
the offset between the centre of the ring and the tube (Aracil et al., 2003). Based on this
estimate, an algorithm controls the displacement of the moving ring, keeping it centred
and following the curve of the tube automatically (Almonacid et al., 2003). The climbing
process is composed of four steps working in sequence, shown in figure 4: the robot holds
the tube and moves up by means of holding systems attached to the rings, advancing along
the tube as one ring holds while the other, free ring moves on.
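Under a small-offset assumption, the centering estimate from three sensors 120º apart admits a simple closed form. The following is an illustrative sketch (not the authors' algorithm), assuming each sensor reads the nominal clearance minus the projection of the centre offset onto its pointing direction:

```python
import math

ANGLES = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]  # sensors 120 deg apart

def ring_offset(d):
    """Estimate the (dx, dy) offset of the ring centre from the tube axis
    from three radial distance readings d[i] taken 120 deg apart.
    Assumed linearized model: d_i = c - (dx*cos(a_i) + dy*sin(a_i)),
    where c is the nominal clearance. With the symmetric 120-deg layout
    the least-squares solution reduces to the closed form below."""
    dx = -(2.0 / 3.0) * sum(di * math.cos(a) for di, a in zip(d, ANGLES))
    dy = -(2.0 / 3.0) * sum(di * math.sin(a) for di, a in zip(d, ANGLES))
    return dx, dy

# Synthetic readings generated from a known offset are recovered exactly:
c, dx0, dy0 = 0.10, 0.012, -0.007
readings = [c - (dx0 * math.cos(a) + dy0 * math.sin(a)) for a in ANGLES]
dx, dy = ring_offset(readings)
```

The estimated offset would then feed the centering controller that keeps the moving ring on the tube axis.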
On this basis a climbing parallel robot was developed to climb a palm trunk. Figure 5
shows the robot in different positions on the palm trunk. The first version of this
prototype moved at a velocity of 0.4 m/s. It was composed of 6 pneumatic cylinders, each
controlled through a FESTO MPYE-5 proportional valve and measured by a linear encoder.
The pneumatically activated gripping system can be seen on each ring. A multi-axis Delta
Tau PMC-VME card was used for the control structure of the robot.
Fig. 4. Steps that define the climbing process of the climbing parallel robot.
Parallel robots can also climb inside tubes or pipes. To perform these tasks it is
necessary to modify some aspects, such as the holding systems and the universal joints,
so that the robot is capable of large rotations, as observed in figure 6. These joints must
be mechanically more robust than standard universal joints (Aracil et al., 2006). As can
be observed, the gripping system is radial to the rings.
Another possibility for climbing along tubular structures such as tubes or pipes is by
means of a pair of arms that can be extended and retracted. One pair of arms is connected
to each ring and serves as holding device. The main advantage of this holding system is
that, thanks to its structural design, the robot can climb along complicated structures
with obstacles: it can adopt several forms and postures to overcome complicated zones,
as can be observed in figure 7.
Fig. 5. Different images of the parallel robot climbing on a palm tree trunk.
Fig. 8. Detail of the joints adapted so that the robot can accomplish postures of 90º.
The displacement of the robot is carried out as a sequence of four simple movements,
shown in figure 9:
• The robot grasps the beam with the legs of both rings.
• The robot is held by the legs of ring 1. The legs of ring 2 are released and folded,
and the linear actuators are commanded so that ring 2 acquires the required position.
• The displaced ring (ring 2) grasps the beam through its clamping devices.
• With ring 2 grasped to the beam, ring 1 is released and acquires its new position.
Once ring 1 has reached it, the robot is ready for a new cycle.
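The four-step gait above can be sketched as a cyclic sequence (step names paraphrase the text; this is an illustration, not the robot's controller):

```python
# Minimal sketch of the four-step climbing cycle as a cyclic sequence.
# The step descriptions paraphrase the gait described in the text.
STEPS = [
    "both rings grasp the beam",
    "ring 1 holds; ring 2 releases, folds and is repositioned",
    "ring 2 grasps the beam",
    "ring 1 releases and is repositioned; cycle restarts",
]

def climbing_cycle(n_cycles: int):
    """Yield the ordered step descriptions for n_cycles gait cycles."""
    for _ in range(n_cycles):
        for step in STEPS:
            yield step

first_cycle = list(climbing_cycle(1))
```

A real controller would attach clamp and actuator commands to each step and verify grasp success before advancing.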
Thus the postures needed to generate movements along a straight path are simple and
easily reachable. However, when it is necessary to overcome structural nodes in the
metallic structure, the robot must be able to adopt more complicated postures. A
structural node is composed of three beams forming a corner. At such nodes the robot can
change the direction of its movement, or keep the same direction while passing the node.
Figure 10 presents several postures that the robot can adopt; the sequence shows that the
robot requires a minimum number of postures to pass a structural node.
Fig. 9. Sequence of displacements of the robot along the length of a straight trajectory.
Using Parallel Platforms as Climbing Robots 673
besides the possibility of reaching large depths, because its watertight volume is much
reduced.
We have derived a second prototype of this robot (REMO II) with several improvements.
We have added one motor to each ring to produce the driving forces, so that navigation is
based on combining the forces produced by the motors mounted on the rings. This
approach provides significant advantages over the previous prototype.
Fig. 11. Other parallel robots: ROBOTENIS (UPM), REMO (UPM) and REMO II (UPM).
7. References
Ait-Ahmed, M. & Renaud, M. (1993). Polynomial representation of the forward kinematics
of a 6 dof parallel manipulator, Proceedings of International Symposium on Intelligent
Robotics, Bangalore Ed.
Almonacid, M.; Saltaren, R.; Aracil, R. & Reinoso, O. (2003). Motion planning of a climbing
parallel robot. IEEE Transactions on Robotics and Automation, 19, 3, 485-489
Almonacid, M.; Agrawal, S.; Aracil, R.; Saltaren, R. (2001). Multi-body dynamics analysis of
a 6-dof parallel robot. Proc. of the 2001 ASME Intern. Symposium on Advances in Robot
Dynamics and Control, New York, November
Angel, L.; Sebastián, J.M.; Saltaren, R.; Aracil, R. & Sanpedro, J. (2005). Robotenis: optimal
design of a parallel robot with high performance. IEEE/RSJ Int. Conf. on Intelligent
robots and systems IROS 2005, I.
Aracil, R.; Saltaren, R. & Reinoso, O. (2003). Parallel robots for autonomous climbing along
tubular structures. Robotics and Autonomous Systems, 42, 2, 125-134
Aracil, R.; Saltaren, R.; Ferre, M.; Yime, E. & Alvarez, C. (2005). REMO project: design,
modelling and hydrodynamic simulation of a robot of variable geometry for
actuations on marine disasters. Vertimar: Symposium on Marine accidental oil spills
Aracil, R.; Saltaren, R. & Reinoso, O. (2006). Climbing Parallel Robot CPR. A robot to climb along
tubular and metallic structures. IEEE Robotics and Automation Magazine, 13, 1, 16-22
Bonev, I.A. & Ryu, J. (1999). A simple new closed-form solution of the direct kinematics
using three linear extra sensors. IEEE/ASME Int. Conf. on Advanced Intelligent
Mechatronics, Atlanta, 19-23 Sept., 526-530
Cappel, K.L. (1967). Motion simulator. US Patent Nº 3295224
Dasgupta, B. & Mruthyunjaya, T.S. (1999). Closed form dynamic equations of the general
Stewart platform through the Newton-Euler approach. Mechanism and Machine
Theory, 33, 7, 993-1011
Fichter, E.F. (1986). A stewart platform based manipulator: general theory and practical
construction. Int. Journal of Robotic Research, 5, 2, 157-181
Galt, S. & Luk, M. (1997). Evolutionary design and development techniques for an 8-legged
robot. Genetic Algorithms in Engineering Systems: Innovations and Applications, 446,
259-263
Gwinnett, J.E. (1931). Amusement devices. US Patent Nº 1789680
Huang, T.; Wang, J.; Gosselin, M. & Whitehouse, D. (1999). Determination of closed form
solution to the 2-d orientation workspace of Gough-Stewart parallel manipulators,
IEEE Transactions on Robotics and Automation, 15, 6, 1121-1125
Hunt, K.H. (1978). Kinematic Geometry of Mechanisms, Clarendon Press, Oxford
Liu, K.; Lebret, J.A.; Lowe & Lewis, F.L. (1992). Control of a Stewart platform based robotic
milling cell. Proc. of ASME Winter Annual Meeting, Symp. on Manufacturing and
control issues in a telerobotics assembly workcell, 1, 8-13
Lazard, D. (1992). Stewart platform and Gröbner basis. In ARK, 136-142
McCallion, H. & Pham, D.T. (1979). The analysis of a six degrees of freedom work station
for mechanized assembly. Proc. of ASME Winter Annual Meeting, Symp. on
Manufacturing and control issues in a telerobotics assembly workcell, 1, 8-13
Merlet, J.P. (1990). An algorithm for the forward kinematics of general 6 dof parallel
manipulators. Research Report 1331. INRIA
Merlet, J.P. (1997). Les Robots paralleles. Ed. Hermes, ISBN 2-86601-599-1
Saltaren, R.; Aracil, R. & Reinoso, O. (2004). Analysis of a climbing parallel robot for
construction applications. Computer-aided civil and infrastructure engineering, 19, 436-
445
Saltaren, R., Aracil, R. & Reinoso, O. (2005). Climbing Parallel Robot: A computational and
Experimental Study of its Performance around Structural Nodes. IEEE Transactions
on Robotics, 21, 6, 1056-1066
Stewart, D. (1965). A platform with 6 degrees of freedom. Proc. of the Institution of Mechanical
Engineers, 180, Part 1,15, 371-386
Tsai, L.W. (1999). Robot Analysis: the mechanics of serial and parallel manipulators. Wiley
Interscience. John Wiley and Sons
35
1. Introduction
Despite the wide diffusion of robotic and automated solutions in most production
processes during the last decades, the agricultural sector has only marginally benefited
from automation (one example being the control of climatic parameters in greenhouses).
Robotic applications have so far been confined almost exclusively to research studies,
with few specific applications made available to farmers.
This lack of advanced robotic solutions in agriculture, and particularly in intensive
farming, cannot be explained by a lack of interest in the latest results of technology and
research in this sector. The use of the latest results of genetics for the production of hybrid
flower plants is one of many examples that contradict this kind of argument and show
how keen the agricultural sector is to exploit the opportunities offered by modern
technology and research.
However, from the automation point of view, agriculture remains mainly labour-intensive,
not only in those countries where manpower is relatively cheap, but also for those
enterprises which enthusiastically embrace the latest technologies in an effort to improve
their competitiveness and ensure top-quality products.
The motivation for the limited development of robotic solutions in the agricultural sector
lies elsewhere. On one side it is related to particularities of the sector, such as the fact that
farming is usually performed in an unstructured environment, which is less friendly to
robotic solutions than a well-structured industrial environment (Kassler, 2001). On the
other side, a consistent share of the research effort conducted so far has mainly tried to
adapt standard industrial robotic solutions to the intensive farming sector, instead of
developing brand-new solutions that exploit the specific features of agriculture. Finally,
most research has so far focused on single specific activities or tasks, and less frequently
on the whole production process or a consistent share of it. In our opinion the approach
should be revised, looking for new robotic solutions that take advantage of the
peculiarities and needs of the agricultural sector, which differ from those of industry. In
this context, besides overcoming the specific constraints of the farming context, the
important challenge is to find solutions and develop technology to automate consistent
shares of the agricultural process. From a business point of view it is important to stress
that, for the robotic industry, the agricultural sector represents an important new market
which is ripe for adopting robotic solutions; such solutions can be very effective in
increasing the quality of products, improving the safety of the workers of the sector,
reducing pollution and environmental impact, and ensuring higher productivity.
In this paper the state of the art of robotic research and of available robotic solutions in
intensive agriculture is presented first. The particular features that valuable robotic
solutions should have for agricultural applications are then outlined, discussing also
what are, in the authors' opinion, the most promising research directions for the coming
years. Such research should address different problems in order to develop the
competences and build the new knowledge needed for valuable practical solutions suited
to the specific agricultural sector. Finally, some research results attained by the authors
along some of the lines previously described are illustrated.
as in the works of Kise et al. (2005) and Rovira Más et al. (2004). The second technological
solution which was studied in more recent years is based on the use of global navigation
satellite systems (GNSS) presently represented by the American global positioning system
(GPS) (see e.g. Thuilot et al., 2001, and Bak & Jakobsen, 2004). The joint use of GPS and
artificial vision has been proposed by Benson et al. (2003) and Chen et al. (2003), while
solutions based on lasers (Jimenez et al., 1999; Château et al., 2000) and ultrasonic sensing
(Harper & Mc Kerrow, 2001) have also been investigated.
Other automated solutions studied for specific cultural operations are represented by the
work of (Tillet & Hague, 1999) for the automatic hoe control; the works of (Tillet et al., 2002;
Åstrand & Baerveldt, 2002) for the inter-row raking and tilling for weed control in sugar
beet; the works of Tian et al. (1999), Paice et al. (1995), Tillet et al. (1998) for precision
spraying devoted to weed control and crop treatment; the work of Blasco et al. (2002) for
weed control implemented with electrical discharges.
A final field of study for automated applications in the field is represented by robotic
solutions for harvesting, as illustrated by the works of Monta et al. (1995), Sarig (1993),
Sanders (2005) and Peterson et al. (1999), and by the mapping of yield and fruit quality
(Quiao et al., 2005).
It is worth noting that most studies in the literature deal with one specific agricultural
operation for which automated solutions are studied and presented. In most cases this
ends up with the design and testing of a dedicated machine that is often quite
sophisticated but can perform only one operation. Multipurpose robots that can perform
different tasks are usually an exception, although some studies in this category (Monta et
al., 1995; Van Henten et al., 2002, 2003, and 2006) are also available.
use with respect to standard solutions involving human labour. This obvious consideration
leads to two further guidelines. The first concerns the type of cultures for which robotic
solutions should be studied first: when automating intensive and highly remunerative
cultures it is easier to ensure an economic return that compensates for higher investments.
Note that all cultures in greenhouses are of this kind, and that greenhouses offer
infrastructures (such as power supply, artificial illumination, plinths to which rails can be
hooked, etc.) and represent an environment that is usually partially structured, or can be
partially structured (pots/plants are regularly placed on benches or on the ground,
obstacles are in known positions, etc.).
The second guideline for research and product development concerning robotic solutions
in agriculture is therefore to focus on applications in greenhouses.
The third guideline, stemming from the need to ensure significant improvement over the
standard use of human manpower, indicates that valuable robotic solutions should be
versatile and cover different tasks usually performed by human operators, in order to
reduce the amount of labour needed. If ideally all the tasks to be performed throughout the
growth of one crop (planting, irrigating, fertilizing, spraying, weed control, etc.) could be
managed by a robot, then human operators could just perform supervising tasks. In
connection with this guideline it is important to note that using several dedicated robots,
each one specialized in one specific task, does not seem a convenient strategy. In fact the
different tasks are usually performed in different periods, only a few times per year, so the
robots would work one at a time while the others remained idle. Moreover, sharing a
machine with other farmers in order to reduce costs would not be feasible in most cases,
because neighbouring farms usually follow the same calendar and need the machine at the
same time. The third guideline is then to look for versatile multitask robots that ideally can
manage all the operations needed in the production cycle of different crops.
The fourth and last guideline concerns the development of new applications and tasks that
can be performed only by a robot and are aimed at improving crop quality and safety and
at reducing pollution and costs. A nice example of such a task is offered by precision
spraying for cyclamens. This crop frequently needs treatment, and since the pests to be
targeted live under the leaves, the spraying should occur under the leaf canopy in order to
be really useful. The cyclamens should then be sprayed one by one, properly positioning
the spraying nozzle. This is practically impossible with human operators when the number
of plants to be processed is huge, as is usual in dedicated enterprises. The task, however,
can be done by robots, which do not lose concentration during the operations and can
work indefinitely, day and night. Another field in this line is represented by operations
involving vision and image processing. Besides surveillance for pest and disease detection,
image processing can be used to monitor the growth of each single plant/flower/fruit as
well as to identify the level of ripeness in order to plan the optimal harvesting time.
the plants or retrieving it from video images, convenient trajectories can easily be defined
in a Cartesian reference and are immediately translated (without inverse-kinematics
problems) into commands to the robot if its structure is also Cartesian. It must be noted
that the width of benches is in many cases in the range of 3000 mm, so that a working
space in the range of 3000×1500×1000 mm is representative of several practical situations.
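The one-to-one mapping from Cartesian waypoints to axis commands can be sketched as follows (the function and limit-checking logic are hypothetical; the workspace values come from the dimensions quoted above):

```python
WORKSPACE_MM = (3000.0, 1500.0, 1000.0)  # x, y, z ranges quoted in the text

def to_axis_setpoints(waypoint_mm):
    """For a Cartesian robot a Cartesian waypoint maps one-to-one onto the
    three linear-axis setpoints; no inverse kinematics is needed, only a
    workspace-limit check. Illustrative sketch, not a real controller."""
    for value, limit in zip(waypoint_mm, WORKSPACE_MM):
        if not (0.0 <= value <= limit):
            raise ValueError(f"waypoint {waypoint_mm} outside workspace")
    return tuple(waypoint_mm)  # axis commands equal the Cartesian coordinates

setpoints = to_axis_setpoints((1200.0, 700.0, 350.0))
```

For a non-Cartesian structure this function would instead have to solve the inverse kinematics of the mechanism, which is exactly the extra computation discussed next.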
Drawbacks of a non-Cartesian structure were experienced by the authors in studies and
experiments carried out using a robot with a pantographic structure (Belforte et al., 2006).
It was noticed in practice that its non-Cartesian structure considerably increased the
computation required for trajectory evaluation and motion control, because the
commercially available industrial axis controllers that were used are designed for driving
motions along linear axes: they can be easily integrated into Cartesian robots, while they
require more or less complex customization for non-Cartesian structures. Last but not
least, the mechanical components and technical solutions available on the market allow
fast assembly of relatively low-cost Cartesian robots in a wide range of dimensions, while
their accuracy remains higher than that required in agricultural operations. Such
flexibility in robot size is of primary importance, since greenhouse dimensions are
definitely not standardized and new robots must be adaptable to existing greenhouses if
they are to penetrate this new market.
All the above considerations strongly recommend Cartesian robot structures. In the
following, different possible solutions of this kind are outlined; however, another
distinctive characteristic of the robot has to be discussed first: its capability of
displacement within the greenhouse. The choice is between fixed robots that operate at a
fixed point on mobile benches and mobile robots that operate over the whole surface of the
greenhouse.
The fixed-robot solution has the advantage that the robotic cell can be highly
sophisticated, with different pieces of equipment that do not need to be moved around and
can therefore be quite heavy. The supply of any substance/material that the cell needs must
be ensured at one single point of the greenhouse and is therefore easier, as is the removal
of products/debris. However, the fixed-point robot requires moving benches, a rather
sophisticated and expensive infrastructure that is available only in relatively few cases.
Mobile robots, on the other hand, can operate on fixed benches or directly on the ground,
but they require a moving support that adds complexity to the machine, as well as some
infrastructure in the greenhouse (rails or paths along which the movement can occur). This
kind of robot cannot be very heavy, and it requires a more careful design with regard to the
feeding/removal of the substances/parts required or generated during the robot's
operations. In general these requirements should not constitute a severe constraint, and a
mobile-robot solution seems slightly more flexible and definitely less expensive. In the
following, different possible solutions of Cartesian robots are introduced.
Besides providing a research tool, the construction of the robot was expected to represent a
first attempt in the direction of relatively low-cost robots specifically designed for
agricultural applications. For this reason the robot was developed by assembling, to the
largest possible extent, standard components available on the market.
While for a commercial robot operability throughout the greenhouse is an essential feature,
one that can be ensured with any of the solutions discussed in Section 3, for a research
tool this feature is not important. In fact, the focus of research is on developing tools,
programs and procedures that enable the robot to perform different basic operations within
its operational volume. Once this is achieved, the integration of the robot with moving
benches, or its coupling with a dislocation system, will enable it to operate on the whole
crop throughout the greenhouse. For these reasons the prototype we constructed works in a
fixed position.
The robot structure is represented in Fig. 9. The dimensions of the robot are 3.60 m
(x-axis), 2.50 m (y-axis) and 0.80 m (z-axis), according to the guidelines discussed above.
The electric motors moving the robot axes are standard 48 V DC motors with encoders and
speed transducers. Four independent PWM axis drivers, connected to the velocity sensors,
supply and control the motors.
The robot control logic has been implemented on a National Instruments PXI-1000B controller
unit equipped with a PXI-8170 Pentium III CPU board, a PXI-7344 motion controller and a
PXI-6255 I/O board.
The PWM amplifiers are driven by the digital axis controllers on the PXI-7344 board. The
position measurements of the four axes are fed back to the digital PID controllers on the
PXI-7344, which generate the velocity commands. Input/output signals are managed by the
PXI-6255 board. The maximum velocity along each axis is 1 m/s, which enables the robot to
move fast enough to ensure a satisfactory productivity.
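The position-in, velocity-out loop described above can be sketched as follows. This is a minimal illustrative PID, not the firmware running on the motion controller; the gains and sample time are invented for the example, while the 1 m/s saturation mirrors the axis limit stated in the text.

```python
# Minimal sketch of the control scheme described above: a digital PID
# position loop whose output is a velocity command for the PWM axis
# driver. Gains and sample time are illustrative assumptions, not the
# values tuned on the prototype.

class AxisPid:
    def __init__(self, kp, ki, kd, dt, v_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.v_max = v_max          # 1 m/s axis limit, as stated in the text
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint_m, measured_m):
        """One control cycle: position error in, velocity command out."""
        error = setpoint_m - measured_m
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        v = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.v_max, min(self.v_max, v))  # saturate at axis limit

pid = AxisPid(kp=2.0, ki=0.1, kd=0.05, dt=0.01)
print(pid.step(setpoint_m=0.5, measured_m=0.0))  # large error -> clipped to 1.0 m/s
```

One such loop would run per axis, with the encoder feedback closing the position loop and the PWM driver closing the inner velocity loop.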
The robot has been equipped with a fourth axis that allows complete rotation around the
z-axis. This axis is powered by a standard 48 V DC motor with an encoder and a speed
transducer. The gear ratio has been chosen so as to allow mechanical operations on the soil,
such as weed control.
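The role of the gear ratio can be illustrated with an ideal-gearbox estimate (a sketch with assumed numbers, not prototype data): the reduction multiplies the torque available at the tool, which is what makes soil operations feasible with a small DC motor, at the cost of rotation speed.

```python
# Back-of-the-envelope sketch of the gear-ratio trade-off mentioned above:
# a high reduction ratio multiplies torque at the tool (useful for soil
# operations such as weeding) while dividing the rotation speed by the
# same factor. All numbers are illustrative assumptions.

def output_torque_speed(motor_torque_nm, motor_speed_rpm, ratio, efficiency=0.9):
    """Ideal-gearbox estimate: torque scales up by ratio * efficiency,
    speed scales down by the ratio."""
    return motor_torque_nm * ratio * efficiency, motor_speed_rpm / ratio

torque, speed = output_torque_speed(0.5, 3000.0, ratio=50.0)
print(torque, speed)  # roughly 22.5 N·m at 60 rpm
```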
The robot has been equipped with a vision subsystem consisting of one colour and one NIR
CCD camera (both KC-40H1P Rapitron Day and Night CCD cameras) fixed to the y-axis. This
solution maintains a constant distance from the floor of the bench, i.e. the cameras operate
with a fixed focus. The camera view angles allow only a portion of the bench to be
investigated at a time, but this is not a limitation, since the cameras are used to plan
trajectories in the neighbourhood of the single plants and not to control wide
displacements, which are controlled using encoder feedback.
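The benefit of a constant camera-to-bench distance can be made concrete with a pinhole-camera sketch (our illustration; the focal length, pixel pitch and camera height are assumed values, not the prototype's): with fixed focus and a known height, a single constant converts pixel offsets into metric offsets, which are then combined with the encoder-derived camera position.

```python
# Sketch of why a camera at constant height simplifies vision: under the
# pinhole model, a pixel offset converts to a metric offset on the bench
# by one fixed scale. Focal length, pixel pitch and height are assumptions.

FOCAL_MM = 8.0        # lens focal length (assumed)
PIXEL_MM = 0.0056     # sensor pixel pitch (assumed)
HEIGHT_MM = 800.0     # constant camera-to-bench distance (assumed)

MM_PER_PIXEL = PIXEL_MM * HEIGHT_MM / FOCAL_MM  # fixed pinhole scale

def pixel_to_bench(du_px, dv_px, cam_x_mm, cam_y_mm):
    """Convert a pixel offset from the image centre into bench-frame
    millimetres, given the camera position read from the axis encoders."""
    return (cam_x_mm + du_px * MM_PER_PIXEL,
            cam_y_mm + dv_px * MM_PER_PIXEL)

print(pixel_to_bench(100, -50, cam_x_mm=1500.0, cam_y_mm=700.0))
```

With a variable camera height the scale would change with every move and the conversion would need per-shot depth information, which is exactly what the fixed mounting avoids.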
7. Future activities
In Section 3 it was pointed out that robots should be able to perform as many elementary
operations as possible in order to facilitate their industrial use in intensive farming.
This is because a robot that can automatically manage the whole growing cycle of several
crops is commercially appealing.
To contribute to the definition of this kind of robot, our future lines of study and
research will mainly consist in the derivation of tools and techniques enabling other
significant elementary operations. In doing this, particular attention will be devoted to
integrating the elementary operations with the artificial vision system built into the
robot. The vision system appears to be a very important part of an efficient robot for two
reasons. The first is that several elementary operations need guidance and control: in the
greenhouse, where the environment is not completely structured and changes can be caused by
chance or by the action of human operators, visual control for the localization of plants
and obstacles seems almost obligatory. The second reason is that the artificial vision
system integrated in a robot can itself perform evaluations, comparisons, and monitoring of
growth and of other characteristics that cannot be performed by human operators and could
become highly useful for the optimal management of crops.
8. Acknowledgements
The authors would like to thank National Instruments Italy s.r.l., Agricontrol s.n.c. and
CeRSAA for their support of this research, and R. Deboli, S. Tribocco, P. Pezzuto, G. Perin
and G. Benvegnù for their cooperation in constructing the robot.
9. References
Åstrand, B.; Baerveldt, A. J. (2002). An agricultural mobile robot with vision-based
perception for mechanical weed control. Autonomous Robots, 13, 1, (July 2002) 21-35,
ISSN 0929-5593
Bak, T.; Jakobsen, H. (2004). Agricultural robotic platform with four wheel steering for
weed detection. Biosystems Engineering, 87, 2, (February 2004) 125-136, ISSN
1537-5110
Belforte, G.; Deboli, R.; Gay, P.; Piccarolo, P.; Ricauda Aimonino, D. (2006). Robot Design
and Testing for Greenhouse Applications. Biosystems Engineering, 95, 3, (November
2006) 309-321, ISSN 1537-5110
Benson, E. R.; Reid, J. F.; Zhang, Q. (2003). Machine vision-based guidance system for an
agricultural small-grain harvester. Transactions of the ASAE, 46, 4, (July/August
2003) 1255-1264, ISSN 0001-2351
Blasco, J.; Alexios, N.; Roger, J. M.; Rabatel, G.; Moltò, E. (2002). Robotic weed control using
machine vision. Biosystems Engineering, 83, 2, (October 2002) 149-157, ISSN 1537-5110
Chateau, T.; Debain, C.; Collange, F.; Trassoudaine, L.; Alizon, J. (2000). Automatic
guidance of agricultural vehicles using a laser sensor. Computers and Electronics in
Agriculture, 28, 3, (September 2000) 243-257, ISSN 0168-1699
Chen, B.; Tojo, S.; Watanabe, K. (2003). Machine vision based guidance system for automatic
rice transplanters, Transactions of the ASAE, 46, 1, (January/February 2003) 91-97,
ISSN 0001-2351
Cho, S. I.; Chang, S. J.; Kim, Y. Y. (2002). Development of a three-degrees-of-freedom robot
for harvesting lettuce using machine vision and fuzzy logic control. Biosystems
Engineering, 82, 2, (June 2002), 143-149, ISSN 1537-5110
Hague, T.; Tillet, N. D. (1996). Navigation and control of an autonomous horticultural robot.
Mechatronics, 6, 2, (March 1996) 165-180, ISSN 0957-4158
Hague, T.; Marchant, J. A.; Tillet, N. D. (2000). Ground based sensing system for
autonomous agricultural vehicles. Computers and Electronics in Agriculture, 25, 1-2,
(January 2000) 11-28, ISSN 0168-1699
Harper, N.; McKerrow, P. (2001). Recognising plants with ultrasonic sensing for mobile
robot navigation. Robotics and Autonomous Systems, 34, 2-3, (February 2001) 71-82,
ISSN 0921-8890
Jimenez, A. R.; Ceres, R.; Pons, J. L. (1999). A machine vision system using a laser
radar applied to robotic fruit harvesting. Proceedings of the IEEE Workshop on
Computer Vision Beyond the Visible Spectrum: Methods and Applications, pp. 110 –
119, ISBN 0-7695-0050-1, Ft. Collins CO USA, June 1999, IEEE Inc, Piscataway
NJ USA
Kassler, M. (2001). Agricultural automation in the new millennium, Computers and Electronics
in Agriculture, 30, (February 2001) 237-240, ISSN 0168-1699
Kise, M.; Zhang, Q.; Rovira-Más, F. (2005). Stereovision-based crop row detection method
for tractor-automated guidance. Biosystems Engineering, 90, 4, (April 2005) 357-367,
ISSN 1537-5110
Kondo, N.; Monta, M. (1999). Strawberry harvesting robots. Proceedings of the ASAE/CSAE-CGR
Annual International Meeting, Paper No. 99-3071, July 1999, Toronto Canada, ASAE,
St. Joseph MI USA
Kondo, N.; Ting, K. C. (1998). Robotics for Bioproduction Systems, ASAE, ISBN 0-929355-94-6,
St Joseph MI USA
Marchant, J. A.; Hague, T.; Tillet, N. D. (1997). Row-following accuracy of an autonomous
vision-guided agricultural vehicle. Computers and Electronics in Agriculture, 16, 2,
(January 1997) 165-175, ISSN 0168-1699
Monta, M.; Kondo, N.; Shibano, Y. (1995). Agricultural robot in grape production system,
Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2504 –
2509 ISBN 0-7803-1965-6, Nagoya Japan, May 1995, IEEE Inc, Piscataway NJ USA
Paice, M. E. R.; Miller, P. C. H.; Bodle, J. D. (1995). An experimental sprayer for the selective
application of herbicides. Journal of Agricultural Engineering Research, 60, 2,
(February 1995) 107-116, ISSN 0021-8634
Peterson, D. L.; Bennedsen, B. S.; Anger, W. C.; Wolford, S. D. (1999). A systems approach to
robotic bulk harvesting of apples. Transactions of the ASAE, 42, 4, (July/August
1999) 871-876, ISSN 0021-8634
Qiao, J.; Sasao, A.; Shibusawa, S.; Kondo, N.; Morimoto, E. (2005). Mapping yield and
quality using the mobile fruit grading robot. Biosystems Engineering, 90, 2,
(February, 2005) 135-142, ISSN 1537-5110
Reed, J. N.; Miles, S. J.; Butler, J.; Baldwin, M.; Noble, R. (2001). Automatic mushroom
harvester development, Journal of Agricultural Engineering Research, 78, 1, (January
2001) 15-23, ISSN 0021-8634
Rovira-Más, F.; Zhang, Q.; Kise, M. (2004). Stereo vision navigation of agricultural tractor
in structured fields, Proceedings of the AgEng 2004 International Conference, Paper No. 163,
ISBN 90-76019-258, Leuven Belgium, September 2004, Tecnologisch Instituut,
Leuven
Ryu, K. H.; Kim, G.; Han, J. S. (2001). Development of a robotic transplanter for bedding
plants, Journal of Agricultural Engineering Research, 78, 2, (February 2001) 141-146,
ISSN 0021-8634
Sanders, K. F. (2005). Orange Harvesting Systems Review. Biosystems Engineering, 90, 2,
(February 2005) 115–125, ISSN 1537-5110
Sarig, Y. (1993). Robotics of fruit harvesting: a state-of-the-art review, Journal of
Agricultural Engineering Research, 54, 4, (November 1993) 265-280, ISSN 0021-8634
Tai, Y. W.; Ling, P. P.; Ting, K. C. (1994). Machine vision assisted robotic seedling
transplanting. Transactions of the ASAE, 37, 2, (March/April 1994) 661-667, ISSN
0021-8634
Thuilot, B.; Cariou, C.; Cordesses, L.; Martinet, P. (2001). Automatic guidance of a farm
tractor along curved paths, using a unique CP-DGPS. Proceedings of the IEEE/RSJ
International Conference on Intelligent Robots and Systems, pp. 674-679, October 2001,
IEEE Inc, Piscataway NJ USA
Tian, L.; Reid, J. F.; Hummel, J. W. (1999). Development of a precision sprayer for site-
specific weed management. Transactions of the ASAE, 42, 4, (July/August 1999) 893-
900, ISSN 0021-8634
Tillet, N. D. (1991). Automatic guidance sensor for agricultural field machine. Journal of
Agricultural Engineering Research, 50, 3, (November 1991) 167-187, ISSN 0021-
8634
Tillet, N. D. (1993). Robotic manipulators in horticulture: a review, Journal of Agricultural
Engineering Research, 55, 2, (June 1993) 89-105, ISSN 0021-8634
Tillet, N. D.; Hague, T. (1999). Computer-vision-based hoe guidance for cereals - an initial
trial. Journal of Agricultural Engineering Research, 74, 3, (November 1999) 225-236,
ISSN 0021-8634
Tillet, N. D.; Hague, T.; Marchant, J. A. (1998). A robotic system for plant-scale husbandry.
Journal of Agricultural Engineering Research, 69, 3, (March 1998) 169-178, ISSN 0021-
8634
Tillet, N. D.; Hague, T.; Miles, S. J. (2002). Inter-row vision guidance for mechanical weed
control in sugar beet. Computers and Electronics in Agriculture, 33, (March 2002) 163-
177, ISSN 0168-1699
Van Henten, E. J.; Hemming, J.; Van Tuijl, B. A. J.; Kornet, J. G.; Meuleman, J.; Bontsema,
J.; Van Os, E.A. (2002). An Autonomous Robot for Harvesting Cucumbers in
Greenhouses. Autonomous Robots, 13, 3, (November, 2002) 241–258, ISSN 0929-
5593
Van Henten, E. J.; Hemming, J.; Van Tuijl, B. A. J.; Kornet, J. G.; Bontsema, J.; Van Os, E. A.
(2003). Field test of an autonomous cucumber picking robot. Biosystems Engineering,
86, 3, (November, 2003) 305-313, ISSN 1537-5110
Van Henten, E. J.; Van Tuijl, B. A. J.; Hoogakker, G. J.; Van Der Weerd, M. J.; Hemming, J.;
Kornet, J. G.; Bontsema, J. (2006). An Autonomous Robot for De-leafing Cucumber
Plants grown in a High-wire Cultivation System, Biosystems Engineering, 94, 3, (July
2006) 317–323, ISSN 1537-5110