
7

Computer Considerations
for Robotic Systems

7.0 OBJECTIVES

The purpose of this chapter is to provide an understanding of the computer architecture of robotic systems. The reader will gain an appreciation of the practical considerations that comprise the selection of a computer system from both the hardware and software point of view.
The topics to be treated in this chapter are:

Architectural considerations (operating systems, multitasking, distributed processing, multiprocessors, bus structures, robotic considerations)
Role of computational elements in robotic applications (communication functionality, calculation functionality, coordination functionality)
Real-time processes (event-driven processes, sensor information processing)
Robot programming languages
Robot programming methods
Artificial intelligence
Path planning
Robot's computer system

7.1 MOTIVATION
The use of computers and computational elements in robotic systems is essential, just as the presence of the brain in an intelligent animal is essential. In this chapter we present broad but by no means totally comprehensive coverage of some of the important relevant topics. The material is intended as an introduction to the major topics of importance, and readers are advised to use other, more detailed textbooks and industry articles to round out their background.
One usually thinks of a computer as a device used for computation. In fact, Webster's New World Dictionary defines a computer as "an electronic machine that performs rapid complex calculations or compiles or correlates data." This simple concept is woefully inadequate when dealing with robotic systems, since many of the uses of computational elements are for tasks other than traditional computing. In particular, when one considers that most robotic manipulators utilize numerous microprocessors, the computational concept traditionally defined no longer spans the complete usage of computer components in a robotic system. In many cases, the microprocessors are much more "control" elements than "computational" elements.
Elsewhere in the controller, the computer components may be used for com-
municating to both the outside world and among other components in the robot
controller. Additionally, a computer may be linked to a display unit (e.g., a color
graphics display or CRT terminal) that is used to program the robot or to monitor
its activities.
The more traditional computer tasks of language translation into instructions
usable by the robot controller and path planning* are also performed by the com-
puters in the robot controller.

7.2 ARCHITECTURAL CONSIDERATIONS

As will become evident, computers have a variety of roles to play in robotic systems. The efficient use of computational elements relies heavily on the use of more-or-less standardized or "off-the-shelf" devices, with custom units being kept to a minimum. The benefits of flexible automation through robotics can best be achieved by using standard programmable elements available from a wide variety of vendors and suppliers, reserving the custom engineering and task-specific activities for those tasks that require them. For example, a peripheral interface adapter (PIA) is a general-purpose programmable interface device that may be configured in hundreds of ways under software control. If one were to hardwire such a function every time it is needed, the cost of such interface units would be prohibitive. The programmability of these devices brings the unit cost of robotic flexible automation within reason. Other examples of such devices are stepper motor controllers and communications protocol interface devices.

*Path planning is a method whereby the path or trajectory of the end effector is computed from information about its current position, where it is supposed to go, how it is supposed to get there (e.g., in a straight line), its speed, and other criteria defined by either the user or external sensors.

A descriptive discussion of operating systems, multitasking, distributed processing, multiprocessors, bus structures, and robotic considerations will be presented in this section.
7.2.1 Operating Systems

A computer operating system handles most of the details of management of files, resources, program utilities, peripheral devices, communications among software developers, debugging, and documentation tools, such as code change tracking and automatic backup file creation. Even in the most inexpensive personal computers, an operating system removes much of the menial drudgery of software development, so that the programmer can be dedicated to developing the application software. Without operating systems managing the computer resources available, there is no question that almost no software would ever be developed. The point here is that the operating system makes it possible to develop software, since it manages the low-level details far better than any human being could ever hope to do.
Prior to the existence of operating systems, the human programmer was
required to keep track of where programs were physically located (e.g., which bin
of paper tape), which versions were the most recent, what changes were recently
made, and so on. The operating system has improved the efficiency of software
development and has permitted less skilled programmers to develop useful soft-
ware.
Several standard computer operating systems have evolved and been developed recently. Two of these are listed below, with some of their attributes.

MSDOS: developed for the IBM family of personal computers and compatible
products; provides a development and execution environment for a single
user; provides the support and availability of numerous languages, compilers,
assemblers; provides numerous utilities for file manipulation, directory ma-
nipulation, networking.
UNIX: initially developed by AT&T primarily for use within its corporate structure, it was soon made available for a wide variety of computing environments, from very large computer systems to very small microcomputers. The system supports single- and multiuser environments with a very wide range of system utilities, languages, and communications support. UNIX has been adapted to execute on a very large number of different computers manufactured by many companies. It also provides facilities for transporting software among many different computer environments. Generally, UNIX is not suited to real-time applications, but there are variants, so-called real-time UNIX, that are suitable for real-time operation.
Operating systems provide the "hooks" for both programmers and programming languages to simplify the laborious task of program development. With these, programmers do not have to be intimately familiar with the details of the hardware. For example, by utilizing a standard interface to an output device (such as a printer), the programmer may print messages and data by using a link to a subroutine that takes care of all device-dependent peculiarities, so that the problem may be focused on and not the mechanics of printing.

Using a commercially available operating system in a robot controller, one can speed up the development process and reduce the learning curve of potential users, since features such as file management, batch file generation, and on-line debugging tools are available.
with
Initially, since special microprocessor architectures were designed for use
to the ad-
each particular application, it was difficult to introduce the neophyte
robots that utilize personal computers
vantages of robotic technology. Recently,
as their master
controllers have become commercially available. These systems
educational environ-
facilitate the implementation of the robot in the industrial or
ment, since the time required for learning may be greatly reduced due to the general
familiarity personal computer operating system and hardware. An ex-
with the
ample of this type of robot is the RTX, a SCARA-type robot arm. This robot
can be interfaced directly with an IBM PCIXT or compatible. Installation is simple,

requiring only cable connection from an R$232-C port of the PC to the control
a
robot is equally simple since it may be
port of the robot. Programming*" of the
or it may be
manually driven using the cursor control keys as a teach pendant,
programmed through software using standard programming languages. A library
interface exists for the PASCAL.
high-level language,
There are other proprietary operating systems available, too numerous to
environments have many, if not more than the features
list, but in their own

discussed above.

7.2.2 Multitasking

Multitasking is an attribute of operating systems that permits the execution and management of several (many) processes in the same time frame. This does not mean that the programs are executing at the same time, since a single CPU (central processing unit) or MPU (microcomputer processing unit) can be performing only one instruction at a given time. Multitasking permits numerous users each to be executing several programs at the same time transparently to the other users, and each program operates transparently to the other programs. Since only a single CPU is present, multitasking will slow the execution speed of any single program, although this may or may not be perceptible to the user. For example, if one program is sampling a process through an analog-to-digital converter at 100 samples/s, and another program is printing a program listing, there may be no perceptible difference in program execution or performance. If, however, two programs are both using a floating-point processor to perform complex calculations (e.g., as could be required in path planning algorithms), both programs may be severely affected relative to execution speed. It is also possible to permit different programs to have different priorities in such a way that one program's execution may be compromised (e.g., slowed down), so that another may be allocated more processing time.

Multitasking should not be confused with concurrency, which allows execution of co-processes that may share information. A multitasking operating system does not necessarily allow concurrency, and concurrency does not necessarily allow multitasking.
Multitasking-Robotic Considerations. In a robot controller, many events can occur asynchronously. For example, if the controller is servicing a terminal in order to get commands from the user, it has no way of knowing exactly when a key may be depressed or which key may be depressed. To complicate matters, let us assume that besides servicing the keyboard, the controller is waiting for signals from each joint processor to tell it that the joint has reached its desired set point. Additionally, the controller must monitor the state of a digital input line that informs it if an intruder has entered the workspace, and it must be precomputing the trajectory for the next move it is required to make. These and a host of other tasks warrant the use of a multitasking operating system to implement parts or all of a robot controller. By using a multitasking scheme, the controller can perform a certain task such as precomputing the next trajectory at real-time speeds and service other tasks when the appropriate interrupt occurs. For example, each time a key is depressed on the terminal, an interrupt may be generated that stops the current task, reads the character, places it with any other characters in a buffer, and checks for a termination character (such as a line feed). If no line feed occurs, the previous task continues; otherwise, a new task is started which interprets the command just received and may spawn other tasks to accomplish the directive.
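The interrupt-driven keyboard handling described above can be sketched in software. The following is a minimal, hypothetical illustration (the task names and the command text are invented, and Python threads stand in for operating-system tasks): one thread buffers incoming characters until a line feed arrives, while the main task precomputes a trajectory.

```python
import queue
import threading

def precompute_trajectory(start, goal, steps):
    """Background task: linearly interpolate set points for the next move."""
    return [start + (goal - start) * i / steps for i in range(steps + 1)]

def command_task(char_queue, commands):
    """Runs per 'keyboard interrupt': buffer characters until a line feed,
    then treat the buffered text as a complete command."""
    buffer = []
    while True:
        ch = char_queue.get()  # blocks, like waiting for an interrupt
        if ch is None:         # sentinel: shut the task down
            break
        if ch == "\n":         # termination character found
            commands.append("".join(buffer))
            buffer.clear()
        else:
            buffer.append(ch)

commands = []
chars = queue.Queue()
t = threading.Thread(target=command_task, args=(chars, commands))
t.start()

# Main "task": precompute a trajectory while keystrokes arrive asynchronously.
path = precompute_trajectory(0.0, 10.0, 5)
for ch in "MOVE HOME\n":
    chars.put(ch)
chars.put(None)
t.join()

print(path)      # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
print(commands)  # ['MOVE HOME']
```

A real controller would attach priorities to these tasks and dispatch them from hardware interrupts rather than a queue, but the division of labor is the same.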

7.2.3 Distributed Processors

Most computer systems used in nonrobotic applications have a single processor or a collection of processors that act, from the user's perspective, as if there were only a single processor. However, in robotic systems, there are often distributed processors dedicated to specific tasks. For example, it is often the case that each axis will have a dedicated processor, and all of these are then controlled by a single master processor. This is done so that each axis can respond quickly enough to control (e.g., actuate, sense, and modify) some element outside the computer (e.g., the actuator, gripper, sensor, etc.).

The use of distributed processors typically permits a simpler program structure and more computational power (and therefore controllability) in most applications. Of course, one of the disadvantages is that information must be passed among processors so that their activities can be synchronized.

7.2.4 Multiprocessors

Multiprocessors are a special case of distributed processors. Multiprocessors may be utilized to share the computational load for the same task, to provide redundancy in a computation, or to share the multiaxis controllability load in a system. For example, in space vehicles, multiprocessors execute the same task and the results of all the processors are compared in order to maximize safe operation of the spacecraft. To understand the redundancy issue, one need only remember space launches that were scrubbed because of computers that did not agree.

Shared processing can be visualized by considering weather prediction computer systems that would require 24 hours of computer time to predict the weather 24 hours from the present. One might as well wait the 24 hours and walk outside to observe the weather instead of waiting 24 hours for the computer results. These tasks are so complex that multiprocessors are the only reasonable solution with today's technology. Even with multiprocessors, weather prediction is still not real time (or very accurate).

In robots, to date, such systems have not been utilized. However, it is possible that as the need for higher-performance manipulators and more sophisticated controls (e.g., optimal and/or adaptive) grows, multiprocessor techniques will become more important.
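The redundancy scheme described for space vehicles can be illustrated with a simple majority voter. This is an illustrative sketch, not any particular flight system's algorithm: each redundant processor reports its result, and disagreement without a strict majority corresponds to the "scrub the launch" case.

```python
from collections import Counter

def vote(results):
    """Majority vote over redundant processor outputs.
    Returns (value, agreed), where agreed is False when no strict
    majority exists among the reported results."""
    value, count = Counter(results).most_common(1)[0]
    return value, count > len(results) // 2

# Three redundant processors computing the same quantity:
print(vote([42, 42, 42]))  # (42, True)  -- all agree
print(vote([42, 42, 41]))  # (42, True)  -- one faulty unit is outvoted
print(vote([40, 41, 42]))  # agreed is False: disagreement detected
```

With two processors (as in a simple compare-and-halt scheme) disagreement can be detected but not resolved; three or more are needed to outvote a single faulty unit.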

7.2.5 Bus Structures

A bus is a vehicle for transportation. In computer parlance, a bus is a vehicle for transportation of information. There are a multitude of bus structures that are used and have been standardized so that many standard products are available. Some of the standard buses are:

Type            Originator
IBM PC bus      IBM
Multibus        Intel
Multibus II     Intel
VME bus         Motorola
STD bus
IEEE 488 bus    Hewlett-Packard
Q-bus           Digital Equipment Corp.
Unibus          Digital Equipment Corp.

In addition to these commonly used bus structures, there are proprietary buses designed by manufacturers that have not become industry standards but are used for a single company's products.
Attributes that may be associated with buses are numerous. These include:

Bus width (i.e., how many lines are on the bus?)
Functionality (i.e., are the lines handling addresses, data, control, or power?)
Speed (i.e., what bandwidth of signal transmission is permitted?)
Multiplexing (i.e., do all lines have the same function all the time, or do they have different functions depending on cycle location or periodicity?)
Purpose (i.e., do the lines have specific cyclic functions, or are they programmable?)
User adaptability (i.e., may the user define certain lines differently and permanently for different applications?)

In addition to these attributes, there are physical and mechanical issues, such as the size of the bus, the type of connector, electromagnetic radiation properties, ruggedness, and the ability to withstand shock, vibration, thermal shock, radiation, and so on. The variety of considerations is too large to cover fully in this text, but it should be understood that the issue of bus structure is as dynamic as the evolution of computer architecture itself, and the lifetime of a bus structure affects choices of components as well as basic system design considerations.

Currently, there are no standardized bus structures for robots. For this reason, it is virtually impossible to interface one manufacturer's hardware with another's. This should be contrasted with modern computer manufacturing, where interface standards exist and are used to interconnect different vendors' hardware and software. The lack of a standard has hampered the growth of the robotics industry. Standards have been proposed by the SME and the IEEE which, if adopted, will begin to rectify the situation.

7.3 HARDWARE CONSIDERATIONS

Virtually every robot manufactured today has at least one computer within it. The simplest robot relies on the ability to control data flow and formats (protocols) to some degree and therefore has some sort of logical processing unit. This is generally considered to be a computer, and when so integrated may be a special-purpose hardware computational or control device. Nevertheless, it is still considered a computer, even though it may have no programmable features. Despite the fact that a computer was used to implement the functions required, its programming remains essentially fixed and consequently cannot be changed. Although certain options for its operation may be selected by the user by setting some switches or from a terminal, the sequences of operations remain as the designer originally chose them. The flexibility will not be compromised, however, within the design envelope, as one can program a computer to perform complex calculations even though the programmer cannot change the basic set of machine instructions.

More powerful robots must have the ability to perform coordinate transformations and/or straight-line coordinated motions. As a consequence, the computational tasks become significant and the computational power required increases correspondingly. Also, as sensor inputs and real-time signal processing are required of the robot in the future, the computational burden will increase even more drastically and the addition of more computational elements may well become expensive. In this instance, special-purpose hardware may become cost-effective for handling the external interface problems. At a certain level of complexity, there is no other way to solve the problems except by designing special-purpose hardware with built-in sophisticated functionality. This is especially true for high-speed operations such as are encountered in assembly operations. An example is the use of special-purpose arithmetic-logic units (ALUs) customized for machine vision applications. The problem of parts alignment is often handled by the use of binary correlation. This is a process that is time consuming and requires several nested loops of high order. Performing these using computer language instructions, even with very fast 32-bit microprocessors, still requires at least a second for parts having a moderate area. A specialized ALU may require only a few hundred milliseconds and may cost only a few thousand dollars.
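The nested-loop structure of binary correlation can be made concrete with a small sketch. The image and template below are invented for illustration; a real vision system operates on far larger arrays, which is precisely why dedicated hardware pays off.

```python
def binary_correlation(image, template):
    """Count matching bits between a binary template and every placement
    of the template within a binary image.  The four nested loops are
    the 'high order' loop nest that makes software correlation slow on
    general-purpose processors."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    scores = []
    for r in range(ih - th + 1):
        row = []
        for c in range(iw - tw + 1):
            matches = 0
            for i in range(th):
                for j in range(tw):
                    if image[r + i][c + j] == template[i][j]:
                        matches += 1
            row.append(matches)
        scores.append(row)
    return scores

image = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
template = [[1, 1],
            [1, 1]]
scores = binary_correlation(image, template)
print(scores)  # best score (4) at row 0, column 1: the part's location
```

For an M x M image and an N x N template the inner comparison runs on the order of M*M*N*N times, which is the workload a specialized ALU executes in parallel.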

7.4 COMPUTATIONAL ELEMENTS IN ROBOTIC APPLICATIONS
The use of computational elements in robotics covers a very broad spectrum of applications, including classical computational roles such as complex calculations, operating system and language functions, procedure definitions, and so on. In addition to these classical roles, computers are also used to control the actual joint servos, interface with external sensors, coordinate communications among elements, interface with factory host computers, and coordinate workcell activities. In the following few paragraphs, some of these roles will be detailed and will serve to illustrate the multifaceted role of these computational elements.

Figure 2.2.2 showed the details of a model of a robot controller. Recall that the controller's purpose was to provide the intelligence to cause the manipulator to perform in the manner described by its trainer. To accomplish this functionality, seven subsystems were defined (see Section 2.2), each with a specific task, that taken together could provide relatively complicated robot control.

It is important to note that computational elements play an extensive part in the implementation of these subsystems of a robotic controller. Additionally, one should note that there may be many possible implementations for the controller. Factors such as cost, available technology, required functionality, and designer's choice contribute to the final architecture.

The following sections describe some of the roles of computational components as related to robotic control. The reader is encouraged to correlate the concepts presented in these sections with the general system overview of a robotic controller as presented in Sections 2.2 and 2.4 in order to gain an understanding of the internal functions of the controller.
7.4.1 Control Functionality
element in a robotic application is that of
The simplest role of the computational or deactivating electrome-
for example, activating
a
simple digital control unit,
518 Computer Considerations for Robotic Systems Ch
Chap.7
chanical relays or electrical
switching elements (such as a transistor) in order to
turn motors on or off with a so-called "bang-bang" strategy. The use of
the
computational element in this form is similar to a traffic light controller, whereby
simple activation or deactivation of specific valid pathways for data or mechanical
activation systems is effected.
Slightly more complex roles for control elements may be envisioned as programmable algorithm-level controllers for use in driving servos or displays. For example, a digital-to-analog (D/A) converter is used to convert a multibit signal in the internal domain to an analog, or continuous, signal in the external world. The result of a computational algorithm for control may be a desired drive signal of 3.7 V. In the internal computer world, this signal will have a multibit representation in a unitless, abstract sense. Conversion of this unitless number to a measurable quantity is achieved through the use of the D/A converter. This output signal could, therefore, be used to drive controllers to illuminate a lamp to a certain intensity, to modulate an alarm sensor with a variable "chirp" signal (a "chirp" is a fixed-amplitude, increasing-frequency signal), or in fact to produce a special output signal for communication between the computer and a human being. An important point to be made in conceptualizing computers as control elements is that these are rather nonstandard, nonclassical applications of computational devices.
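The conversion of the 3.7-V example into its unitless internal representation can be sketched as follows, assuming a hypothetical 12-bit unipolar converter with a 10-V full scale (the resolution and range are illustrative, not taken from the text):

```python
def dac_code(voltage, full_scale=10.0, bits=12):
    """Map a desired analog output to the nearest unsigned DAC code.
    Assumes a unipolar converter: 0 V -> code 0, full scale -> 2**bits - 1."""
    lsb = full_scale / (2**bits - 1)   # volts per least-significant bit
    code = round(voltage / lsb)
    return max(0, min(2**bits - 1, code)), lsb

code, lsb = dac_code(3.7)
print(code)  # 1515: the unitless, abstract internal representation of 3.7 V
```

Note that the voltage actually produced (code times the LSB size) differs from 3.7 V by a fraction of one LSB; this quantization error is inherent in any finite-width D/A conversion.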
Although all these examples clearly require computational elements to create
the necessary signals, the output of this information through control actuators is
often overlooked as a major functional role of computational elements.
Computational elements may also be used to interpret sensor data and to plan and control actuation functions: for example, the use of analog-to-digital (A/D) converters to sense conditions in the external world and to format those signals properly so that a computational program may then massage the data for appropriate external manipulation. Using these components as input (A/D) and output (D/A) devices creates the scenario whereby closed-loop control of actuation systems may be achieved.
Often, the individual joints of a servo-controlled robot are controlled by individual microprocessors. These are sometimes called slave or joint processors, and perform a number of control tasks, including acting as a digital summing junction which compares the set point from the master processor with the actual position data obtained from the encoder or other position sensor (see Chapter 4 and Appendix C). Other functions performed include linear interpolation of the set points to produce smoother joint motion, and finally the implementation of a variety of digital computation schemes, including, for example, a PID controller.
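A single sample of such a joint processor's computation might look like the following sketch. The gains, sample period, and signal values are invented for illustration; the subtraction implements the digital summing junction, and the three terms form the PID law.

```python
def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One sample of a digital PID controller.  The subtraction is the
    'digital summing junction' comparing the master's set point with the
    encoder reading; state carries the integral and the previous error."""
    error = setpoint - measured            # summing junction
    state["integral"] += error * dt        # accumulated (integral) term
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Set point 1.0 rad from the master; encoder reads 0.8 rad:
state = {"integral": 0.0, "prev_error": 0.0}
drive = pid_step(1.0, 0.8, state)
print(round(drive, 3))  # 2.401
```

In a real joint processor this routine would run at a fixed sample rate, with the returned drive value sent to the D/A converter feeding the joint's power amplifier.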

7.4.2 Communication Functionality

Another role of computational elements is to provide communication (i.e., exchange of information between and among components). The controlled exchange of information permits processes to proceed according to the designer's plans, even though the processes may be asynchronous, random, or concurrent.

In a crude sense, the control components represent communication between the internal quantized computer world and the external world. They communicate in the sense that information is passed between these two environments. In a more general sense, however, there are no secure, formal mechanisms for guaranteeing data validity between the computer and its input/output devices. For example, if an A/D converter is used, the computer receives a number when it queries the device. However, the computer has no knowledge of the utility of the signal, or its validity.
The next higher level of complexity is that of a secure communication pro-
cedure, a so-called protocol by which elements communicate. This is perhaps best
illustrated by the telephone as a communication element. It is clear that the
telephone is used to pass information between at least two system elements. How-
ever, without the use of protocol, telephone communication would have much less
information content. This need for protocol is illustrated by the following example.
Imagine the difficulties a listener would encounter if a caller started to speak
whatever thoughts or phrases came to mind without first identifying himself or
herself. At an even lower level, one can imagine picking up the phone handset,
punching in a number, and speaking without even waiting for the call to be an-
swered. The point of this absurd example is to indicate the importance of protocol
that permits confirmation and verification that the recipient of the information is
prepared to receive it and that the packaging of the information is correct.
These concepts have been formalized in great detail, and standards have evolved for a variety of these so-called protocols. The following section will introduce some of the basic concepts but will not delve into great detail with regard to a standard definition of communication. Several national committees and organizations, such as the American National Standards Institute (ANSI), the American Society of Mechanical Engineers (ASME), the Electronic Industries Association (EIA), the Institute of Electrical and Electronics Engineers (IEEE), the National Electrical Manufacturers Association (NEMA), Robotics International of the Society of Manufacturing Engineers (RI/SME), and the Semiconductor Equipment and Materials Institute (SEMI), have generally cooperated to define certain standards on these issues.
One of the simpler forms of communication between computers is binary input/output (I/O). In a primitive sense, binary I/O is simply the passage of bit-organized information (i.e., a string of bits, each able to take on a value of 0 or 1, true or false, on or off). The interpretation of the status of these bits resides with the user, and generally there is no security in this type of communication. This is the type of controlled communication scheme discussed in Section 7.4.1. It relies on other devices or elements to be ready and prepared to accept or deliver data when requested, in a so-called asynchronous mode. Asynchrony in this discussion implies that each element has the ability to arbitrarily (in time) assert its data.
Figure 7.4.1. ACK/NACK pairing of states: devices B and A connected by REQ and ACK lines.

It is obvious that without a proper communication protocol, asynchronous data transfer can create havoc with the information flow. A simple mechanism to overcome the potential difficulties associated with asynchronous data transmission is known as handshaking. The simplest form of handshaking requires the dedication of two binary lines (ACK/REQ) in a so-called ACK/NACK pairing of states. This is illustrated in Figure 7.4.1. Using two such data lines to request and/or acknowledge readiness from a device requires careful selection of valid states for either the transmitter or receiver to occupy. These states are shown in Figure 7.4.2, whereby device A is requesting the attention of device B. Generally, 2 bits are required to guarantee absolute security of state preparedness on the part of two potentially independent processes.


The use of these bits is similar in principle to a flag system on a rural mailbox. In this situation, if the postal patron requests a pickup from an RFD mailbox, the patron will raise the red flag on the box. The mail carrier will observe the red flag, stop the vehicle, remove the material, and drop the flag. Confusion can, of course, result from this simple system if the same red flag is used to inform the postal patron that the mail carrier has deposited mail in the RFD box. Therefore, more than simple intelligence is required to understand the meaning of the red flag, and generally the context of the situation dictates its interpretation.

For example, on rural mail routes, the carrier typically arrives in a prescribed time window during a 24-hour period. If the red flag is still raised after that typical time period, either the carrier was unable to keep to a regular schedule, or
Devices B and A are connected by two request/acknowledge pairs: REQ = a with ACK = a', and REQ = b with ACK = b'.

State Sequence
      1  2  3  4  5  6  7  8
a     0  1  1  0  0  0  0  0
a'    0  0  1  1  0  0  0  0
b     0  0  0  0  1  1  0  0
b'    0  0  0  0  0  1  1  0

Figure 7.4.2. Example of secure handshaking. The columns in the table indicate the system state. The numbers 1 through 8 represent the state sequence.

in fact the outgoing mail was picked up and mail deposited in the box at the same time.
Clearly, the use of a red and a blue flag in a so-called ACK/NACK paired scheme would eliminate this confusion (as is illustrated in Figure 7.4.2), since the carrier would raise the blue flag if mail was deposited, and would drop the red flag to indicate that the mail carrier had been there.
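The eight-state sequence of Figure 7.4.2 can be written out as data and its safety property checked in a few lines. This is an illustrative reconstruction: the property verified is that a request line is dropped only after the corresponding acknowledge has been seen, which is what makes the exchange secure.

```python
# Each tuple is one column of the Figure 7.4.2 table: (a, a', b, b').
SEQUENCE = [
    (0, 0, 0, 0),  # 1: idle
    (1, 0, 0, 0),  # 2: A raises its request (a)
    (1, 1, 0, 0),  # 3: request acknowledged (a')
    (0, 1, 0, 0),  # 4: A drops its request
    (0, 0, 1, 0),  # 5: B raises its request (b)
    (0, 0, 1, 1),  # 6: request acknowledged (b')
    (0, 0, 0, 1),  # 7: B drops its request
    (0, 0, 0, 0),  # 8: idle again
]

def request_dropped_only_after_ack(seq):
    """A requester may drop its line only while the matching acknowledge
    is raised; otherwise the drop could go unnoticed (the single-flag
    mailbox confusion)."""
    for (a, aa, b, bb), (a2, _, b2, _) in zip(seq, seq[1:]):
        if a == 1 and a2 == 0 and aa != 1:
            return False
        if b == 1 and b2 == 0 and bb != 1:
            return False
    return True

print(request_dropped_only_after_ack(SEQUENCE))  # True
```

Corrupting the sequence, for instance dropping `a` in state 3 before `a'` rises, makes the check fail, mirroring the ambiguity of the one-flag mailbox.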
This type of handshaking is simple and secure, but it also requires a great deal of overhead, since every communication must utilize the concept of flags. Techniques for exchanging information packets reduce some of this overhead, at the expense of not guaranteeing that every bit of information is sent and/or received only when the receiver/transmitter is certain to be ready. Packeted information transfer allows for error checking on a relatively infrequent basis and detects certain types of errors in transmission so that, at the very least, errors can be logged and appropriate action taken. Such action may be a request for retransmission, or the data could even be ignored.
Figure 7.4.3 illustrates the packeting of information using a simple scheme to transmit a message of N bytes of information. This is known by many different terms, but the SECS1 (Semiconductor Equipment Communication Standard) protocol designates this technique as the "Data Link Protocol." The idea is simple in that first, the receiver is informed as to the length of the message (i.e., how many bytes will be transmitted). This is then followed by the message itself, and finally, a quantity called the checksum is transmitted. With such a format, the integrity of the message is preserved and errors in transmission detected.
Clearly, the key to the protocol is the checksum (which is also known as
longitudinal redundancy check, or LRC). Normally, the checksum is the negative
of the sum of the binary-coded numeric values of the message and is usually
truncated to one or two bytes. As long as the structure of the message is known
by the receiver, an error in transmission can be detected by computing a local
checksum and then comparing it to the transmitted value. This technique will
detect the occurrence of single-byte transmission errors.
It is important to understand that it is possible, although unlikely, to have
multiple byte errors that will still produce the same checksum. If one wants to
prevent this situation from occurring, more complicated error-checking techniques
must be used, such as CRC (cyclic redundancy checks) that can detect multiple-
byte errors. However, these require the evaluation of polynomial error formulas
(as compared to the simple linear sum of the LRC) and are more time consuming.
Moreover, it is usually not found to be necessary for most applications, except
where there is the distinct possibility of noise-corrupted transmission.
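As a sketch of the length/message/checksum framing just described (function names are illustrative, and the real SECS1 block format carries additional header bytes), the LRC can be written as the negative of the byte sum, truncated here to one byte:

```python
def lrc_checksum(message):
    """Longitudinal redundancy check as described in the text: the negative
    of the byte sum, truncated to one byte, so that the message bytes plus
    the checksum sum to zero modulo 256."""
    return (-sum(message)) & 0xFF

def make_packet(message):
    """Frame a message as: length byte, N message bytes, checksum byte."""
    assert 0 < len(message) <= 254          # length must fit in one byte
    return bytes([len(message)]) + bytes(message) + bytes([lrc_checksum(message)])

def packet_ok(packet):
    """Receiver side: recompute a local checksum and compare it with the
    transmitted value; this detects any single-byte transmission error."""
    n = packet[0]
    body, received = packet[1:1 + n], packet[1 + n]
    return lrc_checksum(body) == received
```

Corrupting any single byte of the body changes the local checksum, so the comparison fails; as noted above, certain multiple-byte errors can still cancel out, which is what CRCs guard against.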
In robotics applications, communication between slave and master processors
would probably use LRCs to ensure accurate information up and down the infor-
mation chain. In this respect, robotic communication parallels other multipro-
cessing applications, requiring secure communications but no extraordinary
techniques for implementation.
[Figure 7.4.3. SEMI SECS1 data link protocol. The original figure shows the
receive/idle/send flowchart (listen, read the length byte N (N ≤ 254), receive the
N message bytes, then receive and verify the upper and lower check bytes, replying
ACK or NAK); a table of control character codes (ENQ = request the line to send,
EOT = ready to receive, ACK = correct reception, NAK = incorrect reception);
and tables of the data link protocol parameters (T1 receive timeout, T2 protocol
timeout, T3 reply timeout, RTY retry count, and M/S master/slave contention
resolution) with their functions, typical values, and allowed ranges.]



7.4.3 Calculation Functionality

In addition to the roles just described, more classical calculation roles may be
attributed or assigned to computer components in a robotics system. One is that
of performing a variety of coordinate transformations as will be developed math-
ematically in Chapter 8. Such transformations are necessary to develop drive
signals for the control portions of the robot. For example, moving a gripper or
manipulator from one point to another typically starts with specification of motion
in a rectilinear or Cartesian coordinate system. However, to achieve the desired
motion, these coordinates must be transformed into the specific joint space of the
robot (e.g., Cartesian, cylindrical, spherical).
Usually, these transformations are mathematically complicated and require
transcendental function evaluations. Consequently, some type of relatively so-
phisticated mathematical processing is required. This can be accomplished in a
number of ways, including the use of software routines, hardware evaluation uti-
lizing a floating-point processor, or employing software lookup tables. In a robot,
the decision as to which technique to use is tied to the final system cost, speed,
implementation, expandability, and generality.
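As a deliberately simplified illustration of such a transformation (my sketch, not the book's development, which comes in Chapter 8), the closed-form inverse kinematics of a hypothetical two-link planar arm already exercises the transcendental function evaluations mentioned above:

```python
import math

def inverse_kinematics_2link(x, y, l1, l2):
    """Cartesian (x, y) -> joint angles (theta1, theta2), in radians, for a
    planar arm with link lengths l1 and l2 (one of the two elbow solutions).
    Raises ValueError when the target point is out of reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                            # elbow angle
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)    # shoulder angle
    return theta1, theta2
```

Even this two-joint case needs `acos`, `sin`, `cos`, and `atan2` per point; a six-axis arm multiplies that cost considerably, which is why lookup tables or floating-point hardware may be chosen.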
Another major area where classical computer calculation-type functions are
involved in a robot is in signal processing, e.g., noise removal from a distorted
signal is a common requirement in sensor data analysis. Signal processing may
be accomplished in both the analog and the digital worlds, and may be multidi-
mensional. That is, there may be multiple lines of data coming in from the outside
world in the form of binary input/output or in the form of analog or continuous
signal input and output. An example of this is the processing of an ultrasonic
acoustic signal from the outside world to determine position information, or perhaps
to monitor acoustic emissions from a variety of electromechanical components such
as motors, gears, and so on. In addition, metal surfaces scraping against other
metal surfaces may produce acoustic emissions that are detectable and may be
useful to the robot controller for preventative maintenance scheduling.
Another example of complex computational needs is in the field of vision,
whereby one may be inspecting an outside world environment relative to the robot
for the purpose of alignment: for example, in palletizing objects, alignment of
integrated circuit chips, or in the alignment of surface-mount components on printed
circuit boards. These tasks are relatively computationally heavy and will generally
require a dedicated processor for implementing these functions in a timely and
efficient manner. For example, one may need to direct the robot to position a
camera so that it may "see" the environment. This information may then be
passed to a vision processor which calculates the alignment offsets, passes that
information over to the robot controller, and then through the transformational
pathways or internal routines of the robotic computer. This permits the
information to be translated into specific velocity and acceleration control
signals.
Another possibility for using a vision system to augment the robot's sense of
the environment would be to accept or reject parts, or to characterize or grade
them. An example of this might be in a microchip dicing system, whereby one ic
inspecting a matrix of semiconductor components on a diced wafer. These wafers
are often marked with ink dots to indicate rejection, and the robot may simplv
pick and place the good components into acceptance bins or packages. The poorer
quality components or those that have been rejected may be either left on the
wafer carrier or may, in fact, be taken off the carrier and deposited into a reject
bin. A refined classification of this application would permit multiple ink dots or
multiple coding of the surface of the chip so that one might grade parts into a
variety of different categories. In this manner, one could fabricate variable quality
assemblies by inspection, ranging from those with the highest down to those with
the lowest.
In addition to the vision and coordinate transformation tasks, calculation is
required in the area of direct axis (or joint) control. For example, if one is using
a servomotor to drive a robotic axis, there are a variety of ways to accomplish this
task. As discussed in Chapter 4, a digital-to-analog (D/A) converter could be used
to drive the servo amplifier directly. In general, the calculation of the required
drive signal is not trivial, and in fact the output will usually have to be shaped
rather precisely in order to produce the desired robot performance (i.e., smooth,
vibration-free motion). The so-called "on/off" or "bang-bang" control system,
whereby the input to a servomotor is a step of a known value, is a relatively
straightforward control procedure. However, step signals, as explained in Chap-
ters 3 and 4, will introduce high values of derivatives of position, velocity, and
acceleration, creating untoward effects in the output (e.g., excessive mechanical
vibration). The obvious need to profile motions throughout space and to control
axes simultaneously makes the problems associated with axis control computa-
tionally intensive.
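One common way to shape the drive command so that it avoids step discontinuities is a trapezoidal velocity profile. The sketch below (the specific profile and parameter names are my illustration, not something prescribed by the text) ramps the speed up, cruises, and ramps down, keeping acceleration bounded throughout:

```python
def trapezoidal_profile(distance, v_max, a_max, dt):
    """Position setpoints for a trapezoidal velocity profile: accelerate
    at a_max, cruise at v_max, decelerate at a_max. Falls back to a
    triangular profile when the move is too short to reach v_max.
    Returns a list of (time, position) samples spaced dt apart."""
    t_acc = v_max / a_max                  # time to reach cruise speed
    d_acc = 0.5 * a_max * t_acc ** 2       # distance covered while accelerating
    if 2.0 * d_acc > distance:             # too short: triangular profile
        t_acc = (distance / a_max) ** 0.5
        v_max = a_max * t_acc
        d_acc = distance / 2.0
    t_cruise = (distance - 2.0 * d_acc) / v_max
    t_total = 2.0 * t_acc + t_cruise
    points, t = [], 0.0
    while t <= t_total:
        if t < t_acc:                          # ramp up
            s = 0.5 * a_max * t * t
        elif t < t_acc + t_cruise:             # cruise
            s = d_acc + v_max * (t - t_acc)
        else:                                  # ramp down
            td = t_total - t
            s = distance - 0.5 * a_max * td * td
        points.append((t, s))
        t += dt
    return points
```

Each sample would be handed to the joint servo as a setpoint; computing such profiles for every axis, every cycle, is part of the computational load described above.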
In addition to the direct output requirements, position and/or velocity feed-
back information must be acquired and utilized. The acquisition of real-time
information may be computer resource intensive, since some of those signals may
have to be filtered digitally. Additionally, making use of the feedback signals to
compute new positions, velocities, and accelerations may also present a large com
putational burden. Although not currently done, in the not too distant future
external sensory data will be fed back to the master processor and will be used to
modify the set points sent out to the joint processors. In effect, the system will
then be recomputing the axis transformations to produce the desired manipulator
motions. The role of the computer in this environment is therefore more or less
traditional. Appendix C shows the computational algorithms necessary to accom-
plish these tasks.
Additional computational complexity is introduced by requiring coordinated
motion control whereby all of the robot's joints must start and stop at the same
time. A further level of computational difficulty results when the required motion
must be in a straight line in three-dimensional space. There will invariably be

velocity constraints, or there may be the requirement to use "via" (by-way-of)
points whereby the robot must move through these "via" points to provide the
desired path.
All of the examples above are computationally substantial tasks that may
require floating-point processing, matrix calculations, and transcendental function
evaluations, and as such serve to demonstrate the need for a powerful set of
processors for robot control and implementation.

7.4.4 Coordination Functionality

There are other examples of nontraditional use of a computer element in a robot.
For example, the control and coordination of multiple robots used to execute the
same tasks, possibly with other material-handling equipment, is an important ap-
plication.
As discussed in Chapter 2, the concept of coordination requires a
cell controller to control the entire operation. This device must be able to co-
ordinate the robotic manipulators, all sensory systems, as well as other material
handling systems, and of course must be able to keep track of the work in process
and the location of individual completed subassemblies. The cell controller may
also be required to report all of this to a factory host computer, which can interrupt
or modify the plan of the specific cell controller.
The cell controller may either be located within the robot's controller or else
may be a separate entity. For simple applications, the robot controller may per-
form the functions of the cell controller. However, in more complicated situations
a separate unit may be required. The robot may not be aware that a cell controller
is being used, but instead may just be following a preprogrammed path which is
activated or deactivated remotely. In other situations, complex communication
protocols between the cell controller and the robot may be required. This is
especially true when multiple manipulators are being used.
As suggested earlier, the cell controller may be nothing more than a micro-
processor with very simple processing capabilities, or may be a complicated mini-
computer controlling a variety of components. One good example would be in a
hybrid circuit manufacturing facility, where parts may be shuttled in wafer form
mounted on sticky-tape frames (frames with a film and an adhesive surface) or
waffle packs (plastic carriers with indented pockets for holding individual inte-
grated-circuit dice in their own receptacles). These parts may then need to be
handled onto a small (roughly 2 in. x 2 in.) ceramic substrate which will have
chips bonded to them (either glued or soldered). Wires must be attached internally
between the substrate and the chips. Full testing of the chips may be required,
after which they may be graded and placed into bins. This application requires a
very complicated cell controller with use of a common protocol with common lan-
guages throughout the system (see Figure 7.4.4).
In addition to these types of coordination, the robot computer or computing
elements may need to communicate with CAD, CAM, or CAE data bases. That
is, one may have designed and simulated a specific assembly process on a host

[Figure 7.4.4. Schematic of a hybrid circuit assembly system that utilizes a robotic manip-
ulator. The layout (roughly 44 in. x 36 in.) includes a waffle pack, work platform, robot,
ten component feeders, CCTV monitor, tool changer assembly with seven tools, substrate
feed system, and control panel. The system utilizes a common protocol and language and
requires a complicated cell controller.]

computer. This assembly program may have been downloaded to a variety of
computers, including the cell controller, for example, which may then be required
to direct the assembly operation, including all the robotic manipulators, so that the
product is assembled. Note that in this case the robots were never taught directly,
but obtained their "programs" electronically. Although this is not yet a widespread
practice with robotic systems, it is one area in which to expect developments to be
made.
In addition to the roles of the computers described above, there is the concept
of coordinated path planning for the robot motions. For example, path planning
might require that the work be moved in a prescribed manner through a specific
set of workstations. This might require the integration of a number of robotic
manipulators, perhaps incorporating information from a vision system as well as
from other sensors.
As an example, we may look at the production of a wiring harness, which
requires the stringing of wires of various lengths throughout a specific geometric
pattern in space. Normally, wires will have to be routed around pins located on
the harness board. Thus one must coordinate the stringing of a specific wire based
on previous ones that the robot has installed. Furthermore, the work that the
robot has completed may not be stable after installation. For example, when

stringing a wire, there may be a curl or misposition of the wire after the robot
releases the end of the wire. Then, when the robot goes back to place the next
one, it must have some way to guarantee or to measure the placement of these
previously laid wires so that the harness is produced in a reliable fashion.

7.5 REAL-TIME CONSIDERATIONS
In this section we discuss two important real-time topics: event-driven processes
and sensor information handling. The concept of "real time" is best thought of
as "needed now." This needed-now concept gives the idea of urgency to the topics
in this section, since either type of processing is so time-critical that if either process
cannot be served as soon as possible, the robot and its environment may subse-
quently be uncontrollable, probably with catastrophic consequences (e.g., the robot
may become unstable).

7.5. 1 Event-Driven Processes

In many software applications, a computer must respond to input from the "outside
world." Two methods of achieving this are:

• Program driven
• Event driven
Program-driven response implies that the input occurrence is expected in some
sense. Entry from a keyboard is usually of this type, whereby a program is waiting
for an input via a keystroke. Although the program does not know what the
response will be, it does know that if there is a response, it will occur at a specific
location in the program. Another example of such a response is that caused by
a switch closing, indicating that a robotic gripper has successfully acquired a part.
Here, the robot is expecting the closure of the gripper at a specific point in its
program sequence. This is similar to a program waiting for an input of data prior
to executing a calculation.
The above should be contrasted to an event-driven process, where the timing
of the response as well as the type may be totally unpredictable. As an example,
consider a pedestrian walking up to a busy street that has a pushbutton-activated
street light. If the button is pushed, the traffic light controller will respond in time
by changing the light to a yellow-then-red condition for traffic, and eventually to
green for the pedestrian. If the pedestrian never pushes the button, the light will
never change, and the internal "brain" will perform its normal tasks, keeping the
light green for traffic, checking whether all the lamps are functional (by checking
current through the filaments), and calling the traffic department to replace a bulb
if necessary.

A similar event-driven response would be required if someone enters the
workspace of the robot. If instrumented properly, the work envelope can be
monitored by ultrasonic sensors, photo-optical interrupters, or pressure mats that
will interrupt the robot's controller, and stop the robot motion activity so that the
intruder will not be struck and possibly injured by the manipulator arm.
This concept of responding to random external events is not only useful for
protecting an intruder or a pedestrian, but is a feature that all robotic computers
must have in order to control and interact with their environment. The control
of motors for coordinated motion, the coordination of assembling parts from various
feeders, and an endless variety of robotic assembly tasks require the ability to
respond to random events, because not all robotic sequences are deterministic by
nature. Even though the specific global actions desired may be deterministic, the
specific joint actuation sequences may be random, due to external perturbations
from unknown or variable loads and/or because parts may arrive at pickup points
randomly. The robot controller must be capable of properly handling these situations.
The response to the external events may be required in as short a period as
several microseconds, so the computer's architecture must be such that this is
possible. In some instances, where operating system overhead must be contended
with, an interrupt latency may be experienced. Interrupt latency is the time from
when the external interrupt occurred to the time when the interrupt is serviced or
handled by a software interrupt service routine. If this latency becomes too long,
it may be necessary to bypass or disable the operating system temporarily or
permanentl y, and compose special software so that the interrupt may be handled
in real time (i.e., rapidly enough so that the event requiring attention is handled
in an appropriate and timely fashion). In virtually all modern-day computers, the
ability to respond to interrupts is present, as is the ability to prioritize, queue up,
and process hundreds or thousands of these interrupt requests.
In many parts of the robot, it is important for certain events to occur at known
times. For example, in the digital control of a motor, it is extremely important
for the position sensors (the encoders) to be sampled at a uniform rate (e.g., every
0.5 ms). The control signal must then be output to a D/A converter (DAC) at
the same rate. To accomplish this, a real-time clock that generates a signal every
0.5 ms can be used to trigger an interrupt line. When the interrupt occurs, the
sensor is sampled and the computations necessary to generate the control signal
are performed. The control signal is transferred to the DAC at a known time
after the input was sampled and the sequence repeats.
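A minimal sketch of such a sampled control loop follows (in Python for illustration; a real controller would run this in an interrupt service routine, and `read_encoder`/`write_dac` are hypothetical stand-ins for the actual hardware interfaces):

```python
import time

def servo_loop(read_encoder, write_dac, setpoint, gain,
               period=0.0005, steps=2000):
    """Periodic digital servo loop: every `period` seconds (0.5 ms in the
    text) the encoder is sampled, a proportional control signal is computed,
    and the result is written to the D/A converter. Scheduling against an
    absolute deadline avoids cumulative drift, mimicking the real-time
    clock interrupt described above."""
    next_t = time.monotonic()
    for _ in range(steps):
        position = read_encoder()                # sample at a uniform rate
        write_dac(gain * (setpoint - position))  # output at the same rate
        next_t += period
        delay = next_t - time.monotonic()
        if delay > 0:
            time.sleep(delay)                    # wait for the next "tick"
```

Advancing `next_t` by exactly one period each cycle, rather than sleeping for a fixed interval after the computation, is what keeps the sampling rate uniform regardless of how long the control calculation takes.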
Other uses of real-time clocks are to generate time and date stamps and to
time events. Also, in many applications, it is necessary to inhibit a robot from
moving to the next taught point until mechanical settling occurs. This can be
achieved by delaying the operation using an accurate timing program.
Very often, the operating system is able to assist in managing event-driven
requests by providing utilities to service these requests, and by providing software
tools to assist in the development of appropriate software (e.g., interrupt service
routines).
7.5.2 Sensor Information Processing

Consider what happens if object acquisition is attempted without an external sensor.


In other words, there are no sensory mechanisms required or available to confirm
target location or that the robot has properly grasped an object. If it can be
guaranteed that the object can be positioned within the tolerance of the pickup
mechanism, acquisition can be achieved. However, when an object's position is
not absolutely predictable, one must have some type of sensing mechanism. A
variety of sensory devices that can be utilized in this respect have already been
described in Chapters 5 and 6. In the case of vision-based sensors, computational
requirements are very severe. In general, these high-rate external sensors must
provide integral data compression so that the signals delivered to the robot are
low-rate and the robot controller will not be overwhelmed by massive data handling
requirements. For example, the vision-based sensor should reduce the imagery
from the order of megabits to a few bits (e.g., the centroid location of the object
to be grasped).
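As an illustration of this kind of data reduction (a sketch only; production vision systems do far more), a binary image of perhaps millions of pixels can be collapsed to the single centroid the controller actually needs:

```python
def centroid(binary_image):
    """Reduce a binary image (list of rows of 0/1 pixels) to one (row, col)
    centroid: megabits of imagery compressed to the few values the robot
    controller needs to aim the gripper. Returns None for an empty image."""
    total = rsum = csum = 0
    for r, row in enumerate(binary_image):
        for c, v in enumerate(row):
            if v:
                total += 1
                rsum += r
                csum += c
    if total == 0:
        return None
    return rsum / total, csum / total
```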
The important thing to understand, however, is that these tasks require com-
plex sensory information processing and tend to be computationally large. For
example, if one uses a moderate bandwidth sensor of only 50 to 100 Hz and attempts
to use a high-speed, off-the-shelf processor to perform signal processing, it is readily
discovered that the computer rapidly runs out of computing power. This means
that very frequently , special-purpose hardware processors must be developed in
order to handle the incoming data rates, or data must be simplified immensely so
that the computer can make relatively simple decisions.

EXAMPLE 7.5.1
To illustrate the ideas above, consider a simple filtering operation for noise
reduction. The following equation represents a simple single-pole low-pass
filter:
G(n) = AG(n - 1) + (1 - A)F(n)
where F(n) = input sequence from an A/D converter
      G(n) = output sequence
      A    = filter weight (0 ≤ A ≤ 1)
If A = 0, the input will pass directly to the output, and thus the filter will
behave as an all-pass device. As A approaches unity, the filter properties
will approach those of an ideal integrator. For intermediate values of A,
low-pass filter characteristics result.
Now assume that the following times are valid for a hypothetical high-
speed microprocessor (e.g., a Motorola 68000 with a 12.5-MHz clock
rate).
data conversion = 10.0 µs
add time = 1.0 µs
multiply = 6.0 µs
memory access = 0.5 µs
Assuming that A and 1 - A are precomputed and stored in memory, then
for each new computation, one data conversion, five memory accesses, two
multiplies, and one add will be required to filter the data. This corresponds
to 25.5 µs or a data rate of 39,215 Hz. Assuming a 10 input sensor base,
one can process these data at a rate of less than 4,000 Hz per sensor. Further,
assuming a 5-sample/cycle sampling rate, one can handle signals with fre-
quency content up to 784 Hz with this high-speed processor.* It should be
noted that slower rate processors would be at least proportionally poorer in
performance. For example, a Motorola 6800 operating at 1 MHz would be
approximately 12.5 times slower, which would yield a per sensor data rate of
about 62 Hz. This 62-Hz rate may well be marginal in high speed applications,
especially when other interfering processes, such as linear distortions, exist
which require digital control system compensation.

Other types of sensory considerations have to do with nominal versus ex-
traordinary conditions. This requires the ability to plan for exceptions to normal
circumstances. For example, the robot manipulator may be programmed to go
through a variety of movements. However, if an extraordinary situation occurs
(e.g., the gripper is empty when it was not supposed to be), the robot must have
some plan to handle this deviation from normal behavior. For example, where
the part has not been properly acquired, the robot must then have the ability either
to reacquire it or to inform its controller to execute an emergency stop sequence.
To accomplish the above, we may have a force sensor that produces a rather
simple binary signal that indicates to the robot that either "Yes, the nominal
condition is present and although there may be variations, nothing out of the
ordinary has occurred" or "No, the contrapositive." Without this planning and
use of even simple sensors, it is obvious that the exceptions to normal behavior
that often occur will not be taken care of properly.
Exception handling is application-dependent and is usually left to the appli-
cation programmer. The action to be taken is highly dependent on the nature of

* The theoretical sampling rate is two samples per cycle, but in practice one needs to sample at
least five times per cycle.

the application, and what may be acceptable in one circumstance may be unac-
ceptable in another.
A robot language is usually the medium by which exception handling is ac-
complished. For example, once the application is programmed, a subroutine can
be added to test for the part being present once the gripper has been commanded
to close. If the part is missing, the action dictated by the subroutine (signal for
an operator, retry, continue without the part, etc.) can be carried out.
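A sketch of such a subroutine (the gripper interface, its method names, and the retry policy are hypothetical, illustrating the retry/signal-operator options listed above):

```python
def acquire_part(gripper, max_retries=3, on_failure=lambda msg: None):
    """Exception-handling subroutine sketch: close the gripper, test the
    part-present sensor, retry on failure, and finally report the error.
    `gripper` is a hypothetical interface with close(), open(), and
    part_present() methods; `on_failure` might signal an operator."""
    for _ in range(max_retries):
        gripper.close()
        if gripper.part_present():   # nominal condition: part acquired
            return True
        gripper.open()               # exception: release and retry
    on_failure("part acquisition failed after %d tries" % max_retries)
    return False
```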
There are also some errors that are sensed by the system in all cases. For
example, if the manipulator is commanded to move, a check could be made to
ensure that the motion is occurring (e.g., by monitoring the error signals in each
of the joint servos). If no motion occurs, it is possible that the robot arm has
collided with another object or that one or more of the error signals may have
exceeded a predetermined band. In the event that this type of error is detected,
the arm could be stopped and the servo gains reduced so that the arm becomes
"mushy." (This procedure prevents possible damage to motors and mechanical
components of the robot.) It is also possible to monitor the control signal for each
servo when the arm is not moving. In the event that this signal is too large, one
may be able to conclude that the payload is too big and take appropriate action.
In addition to these considerations, there is the concept of self-adaptation to
the environment. For instance, many robots are designed so that they may respond
to a variety of inertial loads. For example, different payload weights should not
affect the path that a robot takes in general. Often this is accomplished by changing
servo gains to compensate for variations in the axes loads that would create un-
desirable deviations from the proper path. This would be equivalent to a young
child picking up a lightweight toy and moving it from point A to point B in space.
If however that lightweight toy was filled with lead shot, and the child attempted
to follow the original path, difficulty might be experienced in overcoming this
additional inertial load even if more muscle power was employed. That is, the
child might have only limited ability to compensate. Although the computational
algorithm might be there in the child's brain, the ability to handle that level of
load might not exist.
The idea of inertial compensation can be built into the algorithmic control
processes so that when the weight or inertia of the load changes (within limits),
the robot may still move the load over the same path if it is instructed to do so.
This idea of self-adaptation can be mathematically modeled and included in the
robot's internal program.
Another situation that can be detected by proper monitoring of position and/
or current sensors (in each joint) occurs when the robot strikes an object that was
previously not known to be there. In this instance, there will be an increased
amount of resistance to the arm's motion, which results in an unusual increase in
motor current and/or a large position error. It is possible to program the computer
to sense these conditions and take corrective action, such as stopping the motion
or reducing servo gains. For example, if one visualizes a robot picking up the
lead-weighted toy and moving the load from A to B, and one puts a chair in the
way, it would be clearly desirable to have the sensory ability to detect that something
out of the ordinary had occurred and take appropriate action. This self-adaptation
concept more or less fits in well with the previous nominal versus extraordinary
discussion.
As more external state sensors (see Chapter 5) are employed with robots
the information they provide will be used to modify the original progra
m in real
time. For example, tactile sensors placed on a robotic gripper provide
real-time
data to the robot's controller, which then commands the gripper's servo
so that
the right amount of force is generated.
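One step of such a force servo might look like the following proportional sketch (the gain, the stand-in interface, and the control law are illustrative assumptions, not the text's design):

```python
def grip_force_step(measured_force, target_force, command, gain=0.05):
    """One control step for a force-servoed gripper: adjust the grip
    command in proportion to the force error reported by the tactile
    sensor, so the grip tightens until the desired force is reached."""
    return command + gain * (target_force - measured_force)
```

Run repeatedly at the servo rate, the command converges toward whatever grip level produces the target force on the sensed object.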
Sensors placed in robotic grippers are also important when it is necessary
to
handle objects which have specific stability, rigidity, and orientation requir
ements.
There is no reason to expect that one will always have objects of one type,
and a
truly versatile system should be able to handle a variety of shapes and sizes.
It is clear that as external sensors are more heavily utilized, the information
provided by them will increase the computational burden placed on the
robot's
computer systems. This has already been demonstrated in Chapter 6, where
vision
systems and the computational considerations were discussed.

7.6 ROBOT PROGRAMMING

As discussed in Chapter 2, the most sophisticated robot control systems


have a
programming capability that allows for elemental decision making, a capabi
lity
needed to coordinate a robot's actions with ancillary devices and processes
(i.e.,
to interface with its environment). Branching is the ability of the softwa
re to
transfer control during program execution to an instruction other than the
next
sequential command. At a specific point in a task cycle, the robot will
be pro-
grammed to anticipate a branching signal, a special electrical signal sent to the
to the
controller by a designated internal or external sensor. If such a signal is receive
d,
the program will follow a predetermined path or function (branching) . If no
signal
is received, the program will continue to follow the main path. Thus
a robot
interacting with a group of machine tools will perform a given sequence of
oper-
ations , depending on which steps have been completed. For example, after
a raw
part is loaded onto a press, the program will look for a branching signal.
If the
signal is received, the program will branch to a pause, causing the robot
to wait
while an ancillary machine works on that part. After the machine has compl
eted
the prescribed work, an external completion signal is sent to the controller
by a
sensor located on that ancillary machine. Then the robot is directed to take
the
part out of the press and transfer it to another machine. Decision makin
g can
also be used to correct an operational problem. For example, a program
may
have a branch to a taught subprogram for releasing a jammed tool.
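The branching behavior just described can be sketched in ordinary code. The following Python fragment is a hypothetical illustration only; the function and signal names are invented and do not belong to any real robot controller.

```python
# Hypothetical sketch of sensor-driven branching; names are invented for
# illustration and are not part of any actual controller's API.
def run_step(branch_signal, branch_action, main_action):
    """Take the branch path if the signal was received, else the main path."""
    if branch_signal:
        return branch_action()
    return main_action()

# Press-loading example from the text: if the press is still working on the
# part, branch to a pause; otherwise continue with the main sequence.
log = []
run_step(True, lambda: log.append("pause while press works"),
         lambda: log.append("unload part"))
run_step(False, lambda: log.append("pause while press works"),
         lambda: log.append("unload part"))
```

The essential point is only the two-way transfer of control keyed to an external signal; everything else in a real controller (masking of input lines, taught subprograms) layers on top of this decision.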
Robot languages provide flexibility to the user in defining the task to be
performed. Not only do they permit the motion of the task to be defined, but they
also provide the user with the ability to imbue intelligence in the control program.
In its simplest forms, this intelligence may check binary sensors and change a
location, or make a simple decision based on sensory information to handle an
exception. As the capability of the language increases, the intelligence of the
algorithm controlling the robot in a specific application can also increase. Thus
corrections based on sensory inputs (such as vision or tactile sensors) are possible,
along with communication with other computers and databases.
Historically, the initial applications of robots were relatively simple and,
accordingly, their controllers did not require or provide sophisticated sequence
control. Typically, the following sequence was all that was needed:

• Move to a specified location in space


• Control the state of a gripper
• Control the state of output lines
• Provide sequence control based on the state of input lines

As applications became more complex, and computer technology more advanced,


techniques were developed to take advantage of the newer computer architectures.
In the following section, techniques for robot control sequencing are
presented from three progressively more capable perspectives (fixed instruction
sequence control, robotic extensions to general-purpose programming languages,
and robot-specific programming languages). This is followed by a summary of
robot programming languages, and two examples illustrating these methods are
presented. The section concludes with a discussion of how points in space are
taught or "demonstrated" to a robot.

7.6.1 Robot Control Sequencing

Robot sequencing can be accomplished in a variety of ways. As discussed in
Chapter 2, there are certain features of functionality required by a robot control
system in order to facilitate both the training (programming of the sequence of
events) and its use with ancillary equipment. To someone familiar with general-
purpose programming languages, it is obvious how certain aspects of this func-
tionality can be easily provided by a computer language. What may not be as
obvious is that most of the important functions needed for manipulator control and
simple interfacing can be implemented by dedicated sequencers. These sequence
controllers accept commands (possibly given by the setting of switches) and record
the robot's joint positions. The sequencing of the manipulator is achieved by
"playing back" the desired states at a later time. In a certain sense, these se-
quencers also possess the power of programming languages, but without all the
explicit commands and data structures associated with a formal programming lan-
guage.
To contrast various "programming" methods, all of which permit the user to
define the sequence of operations of a manipulator, three distinct implementations
will be discussed. Specifically, they are:

• Fixed Instruction Sequence Control


• Robotic Extensions of General-Purpose Programming Languages
• Robot-Specific Programming Languages

The first is a relatively simple method which makes use of a fixed event sequence
in each instruction. The second is based on extensions of programming languages
which add robot-specific functions (or subroutines) to the standard library, or in
which robot-specific commands have been added to the structure of the language.
The third is a language tailored specifically to the programming or training of
robots.

7.6.1.1 Fixed instruction sequence control


In this mode of implementation, the sequence of the robot's operation is
defined by means of a "teach pendant" which provides the ability to position the
tool point of the manipulator in space by means of buttons or a joystick. Additional
controls allow the trainer to define the state of the gripper (open or closed) and
the state of each of the output lines (on or off) as well as time delays and simple
branching based on the state of input lines. By saving joint position, and other
state data, a sequence of events can then be defined.
To better understand the nature of a fixed instruction sequence controller,
the implementation used on the Mark I controller from United States Robots will
be examined. In general, each program step consists of a series of actions. These
are:

• Check the status of input lines


• Check for a subroutine call
• Perform a robot motion
• Delay a specified time interval
• Set the state of the gripper (open or closed)
• Set the state of output lines

To understand how this relatively simple structure can provide sufficient
program control, and for the sake of discussion, let us assume that the controller
already has a number of programs stored in its memory. A specific program is
first selected (by number) utilizing a series of thumbwheel switches. To begin the
sequence of actions defined by the program, a "start" switch is depressed, which
causes the first instruction to be obtained from memory. First, a logical "AND"
of a subset of the input lines is performed against a "mask" stored in memory. It
should be understood that the program will wait indefinitely until the specified
input line(s) are asserted. Next, if the step is a subroutine (another series of
program steps), then it is executed and the following program step is obtained from
memory (note that the motion and subsequent steps are not performed in this
case). If no subroutine call was indicated, then the robot controller causes the
manipulator to move to a point in space defined by a set of joint variables stored
in memory. Once this location is reached, the remaining actions (for the current
program step) are executed. These include waiting a specified delay time, opening
or closing the gripper, and the final action, which is the setting of the state of the
output lines to a value defined in the programming sequence. Following this, the
next program instruction (step) is fetched from memory and decoded as defined
previously. After all the steps of a particular program are executed, the sequence
repeats from the first step. That is, the controller keeps executing the program
indefinitely.
Due to the nature of the fixed sequence of actions for each program step, it
may be necessary to program additional steps to properly sequence the manipulator.
For example, it is necessary to provide a delay to ensure gripper activation prior
to arm motion. This is due to the fact that it takes a finite time for a gripper to
reach its final state after its activating mechanism receives its control signal. There-
fore, the trainer might want to insert a delay (on the order of a few hundred
milliseconds) prior to the execution of any other manipulator motion. Since the
action sequences of a program step without a subroutine call are check inputs,
perform motion, delay, set gripper state, and set output line states, one easily sees
that it is possible for the next program step to cause a motion (if the input conditions
are satisfied immediately) before the gripper's state has stabilized. To accomplish
a delay prior to the motion of this subsequent step, it is necessary to program an
additional step in which no motion occurs but which makes use of the delay in the
sequence of actions.
While this type of programming may require substantial human activity, it is
still able to produce the desired results (i.e., sequencing a manipulator through a
set of motions). The key to both successful and efficient programming of this type
of controller is knowing the sequence of actions and how to take advantage of
them.
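To make the fixed action sequence concrete, the following Python sketch simulates one plausible interpretation of a Mark I-style program step. The dictionary fields, values, and the simulated event log are all assumptions made for illustration; the real controller stores joint data and drives hardware, not Python tuples.

```python
# Hypothetical simulation of a fixed-instruction-sequence step. Every step
# performs the same actions in the same fixed order, which is the defining
# property of this class of controller.
def execute_step(step, inputs, state, log):
    # 1. Gate on input lines: proceed only if all masked lines are asserted.
    if (inputs & step["mask"]) != step["mask"]:
        log.append("waiting on inputs")
        return
    # 2. A subroutine call replaces the remaining actions of this step.
    if "subroutine" in step:
        for sub in step["subroutine"]:
            execute_step(sub, inputs, state, log)
        return
    # 3-6. Motion, delay, gripper, output lines -- always in this order.
    if "move" in step:
        state["position"] = step["move"]
        log.append(("move", step["move"]))
    if "delay" in step:
        log.append(("delay", step["delay"]))
    if "gripper" in step:
        state["gripper"] = step["gripper"]
    if "outputs" in step:
        state["outputs"] = step["outputs"]

state, log = {"position": None, "gripper": None, "outputs": 0}, []
program = [
    {"mask": 0b01, "move": (10, 0, 5), "gripper": "closed", "outputs": 0b10},
    {"mask": 0b00, "delay": 0.3},   # extra delay-only step before next motion
    {"mask": 0b00, "move": (0, 0, 0), "gripper": "open", "outputs": 0b00},
]
for step in program:
    execute_step(step, inputs=0b01, state=state, log=log)
```

Note how the middle entry of the program mirrors the trainer's trick described in the text: a step with no motion whose only purpose is to exploit the delay slot in the fixed sequence, so the gripper settles before the next move.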
As the complexity of the tasks being performed by robots increased, the
demands for more advanced motion control and decision capability also increased,
thereby requiring more sophisticated programming methods. In some cases, the
simple sequencing controls could be expanded by adding more functionality to the
teach pendant by means of multiple levels and added control switches. Besides
increasing the complexity of the teach pendant, this approach also increased the
programming time and required skill level of the trainer.
An outgrowth of such complex sequence controllers is a "menu-driven" pro-
gramming system that permits the training of the robot using a fixed set of functions.
The menu system differs from the fixed instruction sequence control in that
instructions specific to each function are generated. Unfortunately, the use of a
menu system can be quite awkward and requires a trainer well versed in the concepts
of computer programming.
One major advantage of a menu system, however, is that it may be easily
extended to accommodate new functions and even provide interfaces to external
sensors such as vision. It should be apparent that this concept can also be extended
to a robot-specific language by adding a terminal interface and the typical language
functionality, such as syntax checking of instructions prior to execution (or during
compilation).
Although extensions of fixed instruction sequence control could certainly have
provided additional capability, they lacked flexible program control and data struc-
tures. Consequently, another approach was needed. This approach is discussed
in the next section.

7.6.1.2 Robotic extensions of general-purpose programming languages
Another step in the evolution of robot programming was the incorporation
of a language. The use of a general-purpose programming language with extensions
provides the user with the control and data structures of the language. The robot-
specific operations are handled by subroutines or functions. Clearly, this implies
that the training of a robot now requires a person well versed in the concepts of
computer programming.
Various permutations of this concept are possible, including the use of sub-
routines as compared to extensions of languages. The extensions to the language
include robot-specific commands (and possibly new data types) in addition to the
existing set of commands (and data types), while leaving the general syntax and
program flow intact.
An advantage of using an extension of a general-purpose programming lan-
guage is that the designers can concentrate on the problem at hand, designing a
robot, instead of spending time designing a sequencer, providing editing capabilities,
and so on. The actual implementation may make use of a compiled or interpreted
language, depending on the nature of the base language chosen to be extended and
the objectives of the design team. One other advantage in extending a language
is that more sophisticated cell control can be handled by the robot controller. In
this case, it now has more power to perform nonrobot input/output and has the
ability to perform certain man-machine interfaces, e.g., statistical and error re-
porting.
An example program for the United States Robots' MAKER 22 SCARA robot
is illustrated in Table 7.6.3. (This example is treated in detail in Section 7.6.3.)
It is interesting to note that this is the form used to program most SCARA robots
from Japan.
This programming method (as compared to the fixed instruction technique)
makes use of program control, specifically the FOR...NEXT loop and the STOP
statements. One should also observe that there are statements that do not cause
robot motion, and that the sequence of events is chosen by the programmer or
trainer. Thus it is seen that some of the constraints imposed by the fixed event
instruction are removed.
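The language-extension idea can be sketched as follows in Python; the subroutine names (move_to, open_gripper) and the pallet geometry are invented for illustration. The point is that robot operations are plain subroutine calls, while the host language supplies the loops, variables, and arithmetic.

```python
# Hedged sketch of the language-extension approach: robot actions are
# subroutines of a general-purpose language (names invented), so the host
# language's control and data structures come for free.
trace = []

def move_to(point):
    trace.append(("MOV", point))   # stand-in for a joint-interpolated move

def open_gripper():
    trace.append(("GRIP", "open"))

# Palletizing: compute a 2 x 3 grid of drop points from the pallet geometry,
# something a fixed-instruction sequencer cannot express compactly.
pallet_origin, pitch = (100.0, 200.0), 50.0
for row in range(2):
    for col in range(3):
        move_to((pallet_origin[0] + col * pitch,
                 pallet_origin[1] + row * pitch))
        open_gripper()
```

Here only the pallet origin and pitch are "taught"; the six drop positions are derived by computation, which is exactly the kind of flexibility the fixed sequencer lacks.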
As the available technology became more sophisticated and manufacturing
requirements grew, the limited flexibility of the language extension approach be-
came obvious. This provided the impetus for the development of robot-specific
languages.

7.6.1.3 Robot-specific programming languages


A major motivating factor that led to the development of robot-specific pro-
gramming languages was the need to interface the robot's control system to external
sensors in order to provide "real-time" changes to its programmed sequence based
on sensory information. Other requirements, such as computing the locations for
a palletizing operation based on the geometry of the pallet, or being able to train
a task on one robot system and perform it on another (with minor manual ad-
justment of the points), were also an impetus. Additionally, requirements for off-
line programming, CAD/CAM interfacing, and more meaningful task descriptions
led to various language developments.
Table 7.6.2 shows a complete terminal session of a Westinghouse/Unimation
robot using VAL. This example, discussed more fully in Section 7.6.3, shows
an entire environment for the training of the robot. As shown in the table, the
program is retrieved from a mass storage device, then listed, and the fixed positions
defined in the program are displayed. Finally, the program is executed, and output
indicating the current cycle is displayed on the terminal. As the listing indicates,
this language clearly provides more capability for complex robot control than that
of the fixed instruction sequencer or the extended language examples described
previously.
Section 7.6.2 presents various commercial and research robot programming
languages and a table that compares program control, robot-specific mathematics,
and input/output capability for each language. Once again, it should be noted
that regardless of the complexity of the programming language, the objective is to
define the sequence of operations needed to obtain successful control of the robot.

7.6.2 Selected Summary of Robot Languages

Currently, a large number of robot languages are available, although no standards


for these exist. The more common languages include:

• AL
• AML
• RAIL

• RPL
• VAL

Brief descriptions of each of these are given below. This summary is adapted from
a paper by Gruver et al. [9].

7.6.2.1 AL
AL was the second-generation robot programming language produced at the
Stanford University Artificial Intelligence Laboratory, an early leader in robot
research. Based on concurrent Pascal, it provided constructs for control of multiple
arms in cooperative motion. Commercial arms were integrated into the AL system.
This language has been copied by several research groups around the world. Im-
plementation required a large mainframe computer, but a stand-alone portable
version was marketed for industrial applications. It runs on a PDP-11/45 and is
written almost entirely in OMSI Pascal [9]. In the AL system, programs are
developed and compiled on a PDP-10. The resulting p-code is downloaded into
a PDP-11/45, where it is executed at run time. High-level code is written in SAIL
(Stanford Artificial Intelligence Language). The run-time system is written in
PALX. The PDP-11/45 has a floating-point processor, no cache memory, a single
terminal, and 128 kilobytes of RAM. Two PUMA 600's and two Stanford
Scheinman arms were controlled at the same time by this language.

7.6.2.2 AML
A manufacturing language (AML) was designed by IBM to be a well-
structured, semantically powerful interactive language that would be well adapted
to robot programming. The central idea was to provide a powerful base lan-
guage with simple subsets for use by programmers with a wide range of expe-
rience. An interpreter implements the base language and defines the primitive
operations, such as the rules for manipulating vectors and other "aggregate"
objects that are naturally required to describe robot behavior. A major design
point of the language was that these rules should be as consistent as possible,
with no special-case exceptions. Such a structure provides a ready growth path
as programmers and applications grow more sophisticated. AML is being used
to control the RS/1 assembly robot, a Cartesian arm having linear hydraulic
motors and active force feedback from the end effector. The computer con-
troller on the RS/1 assembly robot consists of an IBM series/1 minicomputer
with a minimum of 192-kilobyte memory. Peripherals include disk and diskette
drive, matrix printer, and keyboard/display terminals. A subset of AML was
employed on the Model 7535 robot that was controlled by the IBM personal
computer. However, the features of this version are not included here since
the 7535 is no longer being marketed by IBM.

7.6.2.3 RAIL

RAIL was developed by Automatix, Inc. of Billerica, Massachusetts, as a high-
level language for the control of both vision and manipulation. It is an interpreter,
loosely based on Pascal. Many constructs have been incorporated into RAIL to
support inspection and arc-welding systems, which are a major product of Auto-
matix. The central processor of the RAIL system is a Motorola 68000. Peripherals
include a terminal and a teach box. RAIL is being supplied with three different
systems: vision only, no arm; a custom-designed Cartesian arm for assembly tasks;
and a Hitachi process robot for arc welding.

7.6.2.4 RPL

RPL was developed at SRI International to facilitate development, testing,
and debugging of control algorithms for modest automatic manufacturing systems
that consist of a few manipulators, sensors, and pieces of auxiliary equipment. It
was designed for use by people who are not skilled programmers, such as factory
production engineers or line foremen. RPL may be viewed as LISP cast in a
FORTRAN-like syntax.
The SRI Robot Programming System (RPS) consists of a compiler that trans-
lates RPL programs into interpretable code and an interpreter for that code. RPS
is written mostly in Carnegie-Mellon's BLISS-11 and cross-compiles from a DEC
PDP-10 to a PDP-11 or LSI-11. The programs written in this language run under
RT-11 with floppy or hard disks. The RPL language is implemented as subroutine
calls. The user sets up the subroutine library and documents it for people who
must write RPL programs. Previously, SRI operated the Unimate 2000A and
2000B hydraulic arms and the SRI vision module with this language.

7.6.2.5 VAL

VAL is a robot programming language and control system originally designed
for use with Unimation robots. Its stated purpose is to provide the ability to define
robot tasks easily. The intended user of VAL will typically be the manufacturing
engineer responsible for implementing the robot in a desired application. Prior
programming knowledge is helpful but not essential. (Eight robot programming
languages are compared in Table 7.6.1.) VAL has the structure of
BASIC, with many new command words added for robot programming. It also
has its own operating system, called the VAL Monitor, which contains the user
interface, editor, and file manager. The central monitor contains a DEC LSI-11/
03 or, more recently, the LSI-11/23. In a PUMA 550 robot, each of the joints is
controlled by a separate 6503 microprocessor. The monitor communicates with
the user terminal, the floppy disk, the teach box, a discrete I/O module, and an
optional vision system. VAL is implemented using the C language and the 6502
assembly language. It has been released for use with all PUMA robots and with
TABLE 7.6.1 LANGUAGE-COMPARISON TABLE

AL AML HELP JARS MCL RAIL RPL VAL

Language Modalities
X X X X
Textual X X X X
Menu X
Language Type
Subroutines X X
Extension X
New language X X X
X X
Geometric Data
Types
Frame (pose) X X X X X
Joint angles X X X
Vector X X X X
Transformation X X X X
Rotation X X X
Path X
Control Modes
Position X X X X X X X
Guarded moves a a a

Bias force X
Stiffness/compliance X b

Visual servoing C C C C

Conveyor tracking X X
Motion Types
Coordinated joint
between two points X X X X X d
X
Straight line
between two points C
X X X d
X
Splined through
several points X X X X d
X
Continuous path
("tape recorder"
mode)
Implicit geometry X
circles
Implicit geometry
patterns X
Signal Lines
Binary input 0 64 0 242 6 32 32
Binary output 0 64 2 242
Analog input 10 32 32
64 0 0 242
Analog output 0 32 0
4 0 0 0 242 0 64 0
Display·and Specification of Rotations
Rotation matrix g h
Angle about a vector X h
Quaternions
Euler angles X X X X
Roll-pitch-yaw X
X
Ability to Control Multiple Arms
Multiple arms X X
X
Control Structures
Statement labels X X X X X
If-then X X X X
X
X X X
If-then-else X X X X
X
X X
While-do X X X X
X
X X
Do-until X X
X
X
X X

TABLE 7.6.1 (Continued)

AL AML HELP JARS MCL RAIL RPL VAL
X
Case X
X X X X
for X
X
X
Begin-end X k X
Cobegin-coend X m

Procedure/function/subroutine X X X X X X X X
Successful Sensor Interfaces
X n
Vision X X X
X X X X
force X X
Proximity
Limit switch X X
X X X X X
Support Modules
p
Text editor X 0 0
p X X X
file system X 0 0
X X X
Hot editor X
Interpreter X X X
Compiler X X X X
Simulator X q
X
MACROs X X X
INCLUDE statement X X
Command files X X
Logging of sessions X
Error logging X
Help functions X X
Tutorial dialogue X
Debugging Features
Single stepping X X X X
Breakpoints X X X X
Trace X X X X
Dump X X X X

Source: Reprinted courtesy of the Society of Manufacturing Engineers. Copyright 1983, from the ROBOTS
7/13th ISIR Conference Proceedings.
a Using force-control or limit-switch action.
b Currently being implemented at Jet Propulsion Laboratory.
c Uses visual inputs to determine set points but does not specifically perform visual servoing.
d Relies on the VAL controller.
e Currently being implemented at Stanford University.
f Custom for each system.
g AL displays rotations as a rotation matrix.
h Normally, JARS does not display these forms; however, the user may write a routine to print them because
JARS has the forms available internally.
i AL accepts directly the specification of an orientation by three Euler angles (or by an angle about a vector).
j AL orientations could also be specified by roll-pitch-yaw angles.
k Since it is a language based on subroutines added to Pascal, JARS has all the structures of Pascal.
l MCL can invoke tasks in parallel using INPAR.
m HELP permits the simultaneous activation of several tasks.
n Reported by the IBM T. J. Watson Research Center, Yorktown Heights, New York; not commercially
available.
o JARS and HELP use the systems support features of the RT-11 operating system.
p AL uses the support features of the PDP-10 operating system.
q A simulator has been developed at the IBM T. J. Watson Research Center, Yorktown Heights, New York.
the Unimate 2000 and 4000 series. The languages described above, as well as three
others, HELP, JARS, and MCL, are compared in Table 7.6.1, adapted from
Gruver et al. [9].

7.6.3 Sample Programs

The following examples illustrate the use of two different robot programming
languages: VAL, and one employed on a particular SCARA-type manipulator.

EXAMPLE 7.6.1 VAL Example

Assume that it is desired to pick up identical objects from a known
location and then stack the objects on top of each other to a maximum stacking
height of four. Figure 7.6.1 shows the application.
Let us consider this application and its implementation in the VAL
programming language. Table 7.6.2 is a listing of a session on the terminal,
which includes loading and listing the program, viewing the value of the stored
locations, and finally, executing the program.
The dot (.) in the leftmost column is the prompt, which tells the user
that VAL is ready to accept a command. The first command given to the

[Figure omitted: diagram showing the robot's origin at (0,0,0), the pickup point, and the deposit point with a four-block stack.]

Figure 7.6.1 Workspace for VAL programming example. The pickup and deposit
points are on the xy-plane, offset (in z) from the robot's origin by -448 mm.
TABLE 7.6.2 LIST OF A VAL TERMINAL SESSION

.LOAD STACK
.PROGRAM STACK
.LOCATIONS
OK
.LISTP STACK
.PROGRAM STACK
1. REMARK
2. REMARK THIS PROGRAM PICKS UP PARTS FROM A FIXED
3. REMARK LOCATION CALLED PICKUP, THEN DEPOSITS THEM AT A
4. REMARK LOCATION CALLED B. IT IS ASSUMED THAT 4 PARTS
5. REMARK ARE TO BE STACKED ON TOP OF ONE ANOTHER.
6. REMARK
7. OPENI
8. SET B = DEPOSIT
9. SETI COUNT = 0.
10. 10 APPROS PICKUP, 200.00
11. MOVES PICKUP
12. CLOSEI
13. DEPARTS 200.00
14. APPRO B, 200.00
15. MOVES B
16. OPENI
17. DEPARTS 200.00
18. SETI COUNT = COUNT + 1
19. TYPEI COUNT
20. REMARK COUNT INDICATES THE TOTAL NUMBER OF ITEMS STACKED
21. IF COUNT EQ 4 THEN 20
22. REMARK MOVE THE LOCATION OF B UP BY 75.00 MM.
23. SHIFT B BY 0.00, 0.00, 75.00
24. GOTO 10
25. 20 SPEED 50.00 ALWAYS
26. READY
27. TYPE *** END OF STACK PROGRAM ***
.END
.LISTL

X/JT1 Y/JT2 Z/JT3 O/JT4 A/JT5 T


DEPOSIT - 445.03 130.59 - 448.44 - 87.654 88.890 - 180.000
PICKUP 163.94 433.84 - 448.38 178.006 88.896 - 180.000

EXEC STACK
COUNT = 1.
COUNT = 2.
COUNT = 3.
COUNT = 4.
*** END OF STACK PROGRAM ***
PROGRAM COMPLETED: STOPPED AT STEP 28


robot controller, LOAD STACK, tells the system to recall the program and
any location data from the disk. The system response is on the next three
lines , indicating successful completion of this request. The following com-
mand to the controller is LISTP STACK, which tells VAL to list the program
which is called STACK. This particular version also delimits the program
listing by printing .PROGRA M STACK at the beginning and .END at the
end. Two more commands that are used in the table are (1) LISTL, which
commands the controller to print all the locations that the controller knows
about (in this case there are two such locations , DEPOSIT and PICKUP),
and (2) EXEC STACK, which tells the controller to execute the program
called STACK, which is stored in its memory. Following the EXEC com-
mand is the output generated by the program STACK. This output is the
value of the variable COUNT as the program is executed. Note that the
value of COUNT is used to terminate execution of the program when the
desired number of items have been stacked.
Examination of the program listing shows that each line has a number
associated with it (i.e., 1 through 27). These numbers are used to identify
a line so that the program may be edited. VAL has an editor that allows
the user to create programs and store them in the controller. Once stored,
a program may be modified by referring to its line numbers. The modifi-
cations include inserting, deleting, or modifying lines.
The operation of the robot based on the program steps will now be
described.

• Lines 1 through 6 are comments.


• Line 7 tells the gripper to open immediately and then wait a small amount
of time to ensure that the action took place.
• Line 8 equates the location of the variable B to a defined location called
DEPOSIT. This step is necessary since the value of B will be modified
each time a new item is stacked.
• Line 9 sets an integer variable called COUNT to zero. The variable
COUNT is used to terminate the program when the proper number of
items have been stacked (i.e., 4 items) .
• Line 10 has a label (10) associated with it. It commands the robot to
move from wherever it is along a straight line to a location 200 mm
above the point called PICKUP. At the end of the motion, the approach
vector of the gripper will be pointing downward. Recall that the ap-
proach vector is defined so that moving along it causes objects to go
toward the inside of the gripper.
• Line 11 tells the robot to move its gripper in a straight line toward the
position defined by PICKUP. In this example, the motion will be along
the approach vector since the gripper is pointing downward. The po-
sition defined by PICKUP is such that when motion ends, the object
will be inside the gripper's jaws.
• Line 12 commands the system to close the gripper and wait a sufficient
amount of time for the action to occur. In some cases it may be nec-
essary to add an additional delay if that provided by the command is
insufficient.
• Line 13 tells the manipulator to move along its approach vector in the
direction opposite from which it originally came to a point 200 mm
above the pickup point.
• Line 14 tells the manipulator to move to within 200 mm of point B,
aligning its approach vector downward.
• Line 15 commands the manipulator to move in a straight line until its
tool point is coincident with location B.
• Line 16 tells the gripper to open so that the part can be deposited. This
also includes some time delay for the action to occur. As stated pre-
viously, additional delay may be necessary to compensate for the actual
valves and mechanics used to implement the gripper and to permit the
manipulator to settle to the desired location.
• Line 17 tells the manipulator to move back along the approach vector
so that it is 200 mm above location B.
• Lines 18 and 19 increment the variable COUNT and display its value.
• Line 20 is a comment.
• Line 21 is a test to see if COUNT is equal to 4. If so , go to the statement
with label 20; otherwise, go to the next line.
• Line 22 is a comment.
• Line 23 modifies the location defined by B so that its z coordinate is
increased by 75.0 mm.
• Line 24 forces the program to go to label 10.
• Line 25, which is labeled, tells the controller to reduce the speed of
motions to 50%.
• Line 26 tells the controller to move the manipulator to its ready position,
which is defined as all of the links in a straight line pointing upward.
• Line 27 tells the controller to print a message to the terminal.

From the description of the program, one can easily see the power
implemented by the instructions. Commands exist to cause the manipulator
to move in a straight line and to manipulate position data. (Note that the
"S" in the statement indicates that straight-line motion is desired.) For
example, the variable B, which represents a location (i.e., a set of six joint
variables), is modified by a single statement in line 23. Similarly, the com-
mands APPROS and DEPARTS are quite interesting because they actually
define positions relative to a variable but do not make it necessary for the
user to define the actual positions for each move that the robot has to make.
This concept is quite important for robot training, since we have really defined
only two positions, PICKUP and B. However, we can move to many po-
sitions relative to them. Using this approach, if it is necessary to modify
either of the points (PICKUP or B), the changes made to them will auto-
matically be reflected in the intermediate points (by the robot's
path planner), which are defined solely on these two positions.
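The relative-position arithmetic of the STACK program can be mimicked in Python. This is a hedged sketch of the geometry only (the helper names are invented; VAL itself executes on the controller): two taught points, with the approach/depart offsets and the SHIFT of B computed from them.

```python
# Sketch of the STACK program's position arithmetic (names invented).
# Only two taught points exist; every other position is derived from them.
def stack_moves(pickup, deposit, count=4, lift=200.0, step=75.0):
    b = list(deposit)                      # SET B = DEPOSIT
    moves = []
    for _ in range(count):
        # APPROS PICKUP, 200: straight-line move to 200 mm above PICKUP
        moves.append(("appro", (pickup[0], pickup[1], pickup[2] + lift)))
        moves.append(("move", tuple(pickup)))   # MOVES PICKUP (grasp)
        moves.append(("depart", (pickup[0], pickup[1], pickup[2] + lift)))
        moves.append(("place", tuple(b)))       # MOVES B (release)
        b[2] += step                            # SHIFT B BY 0.00, 0.00, 75.00
    return moves, tuple(b)

moves, final_b = stack_moves((163.94, 433.84, -448.38),
                             (-445.03, 130.59, -448.44))
```

Changing either taught point changes every derived approach, depart, and stacking position automatically, which is the training advantage the text describes.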

The following example illustrates the programming language used by the
MAKER 22 robot (a 4-axis SCARA; see Figure 1.3.13) from United States
Robots. The reader should contrast the power of this language with that of Ex-
ample 7.6.1.

EXAMPLE 7.6.2 SCARA Programming Example

The MAKER 22 is programmed in a language similar to BASIC, with robot-
specific extensions. For example, positions in space may be referenced by
a single-variable name of the form Pxxx, where xxx is a three-digit number
from 000 to 999. In order that position variables may be referenced by an
index, it is possible to catenate the P with an integer variable such as A and
refer to the point PA. Whatever the value (from 000 to 999) specified by
the programmer, A will then reference the actual position variable. Certain
operations may be performed on these position points, such as addition and
subtraction. Additionally, provisions exist to multiply or divide a position
by a scalar. Only two types of moves are provided in the language: MOV,
which causes the manipulator to move in a joint-interpolated fashion; and
CP, which causes the robot to move in a continuous-path fashion. Whenever
a CP command is encountered, the controller will move the manipulator from
its current location to the point which is the argument of the command while
also looking ahead for the next CP command and its argument. The occur-
rence of the next such command tells the controller to continue moving toward
this next specified position once it has come close to the location defined by
the previous CP command. This process continues until the end of the
program or a MOV command is encountered. It is clear that if one wanted
the manipulator to follow a specific path, all that would be necessary is to
define a sufficient number of points for the path and then write a program
that uses CP moves to connect them.
The example that we explore illustrates the use of topics discussed in
the previous paragraphs. It is desired to cause the MAKER 22 to move in
a straight line. For our discussion, we will assume that two positions have

TABLE 7.6.3 MAKER 22 PROGRAMMING EXAMPLE

10: "STRAIGHT LINE"       Label with a comment
N = 10                    Number of intermediate points plus 1
P100 = P1                 Copy P1 to P100
P101 = P2 - P1            P101 is distance to be moved
P101 = P101 / N           Incremental distance
MOV P100                  Set manipulator at first point
FOR L = 1 TO N            Beginning of loop
P100 = P100 + P101        Compute intermediate point
CP P100                   Do CP move to point
NEXT L                    End of loop
STOP

been defined previously, P1 and P2*, and that we wish to have the manipulator
move in a straight line starting from P1 and ending at P2.
Table 7.6.3 shows a listing of the program and comments defining the
purpose of the instructions. The program in Table 7.6.3 takes the difference
between the initial and terminal points of the line and divides by the number
of intermediate points plus 1 to compute an incremental distance. It then
instructs the manipulator to move to the first point, P100. After attaining
this position, it computes intermediate points by adding P101 to P100 and
then instructs the robot to move in a continuous-path fashion connecting the
10 points to form an approximation to a straight line. Note that the last
point is P2.
It should be apparent that the robot programming language for the
MAKER 22 does not contain as high a level of expression as indicated in the
example using VAL. This is obvious if one recognizes that a straight line is
achieved with one instruction using VAL, whereas it requires the entire program
in Table 7.6.3 to perform the identical maneuver with the MAKER 22.
However, the same functionality, that is, the ability to move in a straight
line, is provided by both languages.
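The incremental computation of Table 7.6.3 can be sketched in Python; this is an illustrative reconstruction of the arithmetic, not the MAKER 22 language itself:

```python
def straight_line_points(p1, p2, n):
    """Return the n points that approximate a straight line from p1 to p2,
    mirroring Table 7.6.3: divide the displacement by n, then repeatedly
    add the increment to the current point."""
    step = [(b - a) / n for a, b in zip(p1, p2)]
    points = []
    current = list(p1)
    for _ in range(n):
        current = [c + s for c, s in zip(current, step)]
        points.append(tuple(current))
    return points

# Ten segments from (0, 0, 0) to (10, 0, 0).
pts = straight_line_points((0, 0, 0), (10, 0, 0), 10)
```

As in the MAKER 22 program, the last generated point coincides with P2 (up to floating-point rounding), since the accumulated increments sum to the full displacement.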

After reviewing these two examples and the discussion on robot programming
languages, it is suggested that Section 2.4 on the functionality of a robot controller
be reviewed in order to relate the desired design functionality to this material.

*P1 and P2 are Cartesian coordinate points in (x, y, z) space.



7.6.4 Demonstration of Points in Space

To program a servo-controlled robot, a skilled operator often breaks down the
assigned task into a series of steps so that the manipulator/tool can be directed
through these steps to complete the task (a program). This program is played
back (and may be repeated several times, i.e., it can be used as a subroutine) until
the task cycle is completed. The robot is then ready to repeat the cycle. The
robot's actions may be coordinated with ancillary devices through special sensors
and/or limit switches. These, in conjunction with the controller, send "start work"
signals to, and receive "completion" signals from, other robots or interfacing devices
with which that robot is interacting.

A servo-controlled robot can be "taught" to follow a program which, once
stored in memory, can be replayed, causing the controller to be instructed to send
power to each joint's motor, which in turn initiates motion. This teaching process
may require that the operator "demonstrate" points in space by causing the end
effector to move (using one of a number of possible methods) to a series of locations
within the work cell.

The robot can also be taught its assembly tasks from a CAD/CAM data base.
Here, the desired points in space are downloaded from such a data base, rather
than being taught (on the robot) by an operator. This has the advantage of not
occupying the robot for the teaching of points and also permits the optimization of the
path using simulation techniques. In addition, it is also likely that within the next
few years artificial intelligence (AI) techniques will permit robot teaching to be
more generalized. For example, AI will allow the robot to place filled bottles in
a case or pallet without having to be explicitly taught a predetermined pattern
and/or having specific points actually demonstrated by an operator or downloaded
from a CAD/CAM system. Before discussing this topic, however, we will consider
more standard techniques of demonstrating points to a robot.

There are several methods currently in use. The method employed depends
on the manufacturer's specifications, control system software, and the robot's computing/memory
capabilities. Teaching typically involves one of the following methods:
continuous path, via points, or programmed points. Each of these is now
briefly discussed.

7.6.4.1 Continuous path (CP)


With the CP method, the operator releases all joint brakes and enables an
automatic sampler. The manipulator is then manually moved through each of the
positions required to perform the task. The controller "remembers" or stores the
coordinates of all the joints for every position. In this manner complex three-dimensional
paths may easily be followed. Teaching may be done at a speed
different from that needed for real-time operation (i.e., playback may be at other
speeds, allowing for different cycle times). This method requires minimal
debugging, allows for continuous-path programming, and requires minimal
knowledge of robotics. However, a thorough understanding of the assigned task
is a prerequisite, and editing requires reprogramming from the error point. This
method is typically used with robots employed in spray-painting and arc-welding
applications.
7.6.4.2 Via points (VP)
Teaching with the VP method does not require that the operator physically
move the manipulator; rather, it is remotely controlled by either a computer terminal
or, more commonly, a teach pendant: a device similar to a remote control
box with the additional capability to record and play back stored commands. The
teach pendant is plugged into the controlling computer during programming (the
on-line method), and the operator then presses the appropriate buttons to position
the arm, with small incremental motions for precise positioning. When the correct
position is achieved, a switch is activated to inform the computer to read and store
positions for all joints. This process is repeated for every spatial point desired to
be "taught." Essentially, only the endpoints of the motions are demonstrated.

The VP method is often employed to program discrete points in space (through
which the end effector is required to pass) and is most commonly used for point-to-point
robots. The teach pendant is most commonly used for heavy-duty robots
and in those lightweight robots that have sophisticated control systems.

There are more advanced systems that allow the movements and endpoints
to be recorded in an unspecified order. This enables new programs to be created
by calling out the points in a sequence that differs from the original order of input,
thus facilitating programming and editing. These systems also allow the programmer
to define velocity and acceleration or deceleration between points. However,
such advanced systems have an inherent danger: the path resulting from a
new sequence of movements may inadvertently bring the end effector in contact
with nearby machinery. For this reason, manufacturers recommend that once the
program is complete, it should be played back at a very slow speed to
minimize the possibility of damage to the robot or other equipment.

7.6.4.3 Programmed points (PP)


The PP method is also an on-line system. The robot operates via a prerecorded
program (i.e., without manual intervention), with the program sequence
having been set up externally. Applications of the PP method using decision
making include orienting (i.e., aligning workpieces in designated positions) for
assembly operations and material-handling work using conveyors. In addition to
the techniques used for programming a robot as described above, there is a new
methodology emerging. This is discussed next.

7.6.5 Artificial Intelligence and Robot Programming

The discipline known as artificial intelligence (AI) is becoming more practical as


new developments in computer hardware and software evolve. Higher memory
density, faster processors, and new languages are bringing the tools of artificial
intelligence to practice. There are "expert systems" development environments
that execute on nominally priced personal computers, and these are already having
an impact in many areas previously the exclusive domain of the human thought
process. Experience is showing that in a complex equipment maintenance milieu,
in certain classes of medical diagnosis, theorem proving, biochemical analysis, and
a plethora of other fields, AI is contributing to productivity. The much-touted
nationalized Japanese fifth-generation computer project is directed toward creating
AI techniques that will reduce software production to a blue-collar job. Whether
or not the Japanese will succeed is yet to be determined, but even if the goal is
not fully reached, there will be significant technological fallout from the effort.

In the programming of robotic systems, the use of AI techniques is certain
to have an impact because of the availability of data base information that can be
used to plan a robot's task efficiently. Although there is no integrated system
available today, laboratory demonstrations such as the assembling of simple structures
from randomly presented and available parts are already accomplished. Moreover,
a number of laboratory facilities are currently implementing AI/expert systems
in a variety of mobile robots. Intended for use in the nuclear power industry
and by the military, these devices are being employed as testbeds for practical
results in the areas of autonomous navigation, collision avoidance, maintenance
and repair, assembly, reconnaissance, and perimeter monitoring.

7.7 PATH PLANNING

Path planning is a critical aspect of robotic manipulator control. Two specific
aspects of path planning are discussed here.

7.7.1 Coordinated Motion

Path planning or trajectory planning algorithms are concerned with the generation
of the intermediate points along a manipulator's trajectory . These are the points
(or positions) that must be fed to the control system so that the joints can be
commanded to move to the correct locations necessary to position the end effector
properly. In addition, it is often desired to start and stop all robotic axes at the
same time. This behavior is referred to as coordinated motion and will modify
the path-planning algorithm.
In a robot, the initial path position is inferred (from the current position)
while the final path position is specified. Along with the final point, some rule
defining the trajectory must be specified and may include the following options:

1. Joints of the robot are to start and stop at the same time as the end effector moves
from the initial to the final point (not exceeding physical constraints or robot
specifications). However, the actual path taken is not specified. This is
called joint-interpolated motion.
2. The "tool point" is to move along a straight line. This is sometimes referred
to as world motion. Note that this implies that all axes start and stop at the
same time.
3. The tool (or end effector) is to move along a straight line defined by extending
the approach, normal, or orientation vectors associated with the tool point.
This is called tool motion (see Chapter 8 and Section 7.6.4).
4. The end effector may be told to follow a straight line as in world motion,
while the initial and final orientation of the gripper may be required to change.
5. The acceleration or velocity may be specified prior to the motion, or may be
commanded to change during the motion based on some external input.
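For option 1, a coordinated joint-interpolated scheme can be sketched as follows. The per-joint velocity limits and the purely linear profile (no acceleration ramps) are illustrative assumptions, not any particular controller's algorithm:

```python
def coordinated_motion_time(q_start, q_end, v_max):
    """The slowest joint sets the pace: the common travel time is the
    largest per-joint travel time at each joint's maximum velocity."""
    return max(abs(b - a) / v for a, b, v in zip(q_start, q_end, v_max))

def joint_setpoint(q_start, q_end, T, t):
    """Linear joint-interpolated set point at time t (0 <= t <= T);
    scaling every joint by the same fraction s makes all joints start
    and stop at the same instant."""
    s = min(max(t / T, 0.0), 1.0)
    return [a + s * (b - a) for a, b in zip(q_start, q_end)]

q0, q1 = [0.0, 10.0, -20.0], [90.0, 40.0, 40.0]
vmax = [45.0, 30.0, 60.0]                  # deg/s, hypothetical limits
T = coordinated_motion_time(q0, q1, vmax)  # 2.0 s (the first joint is slowest)
mid = joint_setpoint(q0, q1, T, T / 2)     # [45.0, 25.0, 10.0]
```

Because each joint covers its own displacement in the same time T, the faster joints simply run below their limits, which is exactly the coordinated-motion behavior described above.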

The mathematics to accomplish these types of motion is discussed in Chapter
8. It should be apparent that the computations must be made in "real
time." This implies that they are completed as soon as (or a significant time
before) the information is needed by the joint servos. Although
the mathematics for accomplishing this is formulated in terms of matrices, the
implementation may take advantage of certain properties or simplifications. That
is, the actual implementation may involve little or no matrix multiplication. Additionally,
and depending on the various types of implementations, it is possible
that the computations may take too long and certain motions may be impossible
or speed limited.
It is also possible to define a series of points that determine the trajectory of
a manipulator. This can be accomplished in a number of ways, including:

1. A series of points are taught or demonstrated (defining a complicated curve).
The manipulator is expected to pass through these points as closely as possible
and perform some interpolation between them in order to faithfully reproduce
the path. Of course, the more points taught, the better the curve is reproduced.
2. Three points defining a circle are demonstrated and the manipulator is ex-
pected to be able to compute any necessary intermediate points so that it can
draw the circle.
3. An equation is defined (in Cartesian, cylindrical, or other space) that the
robot is commanded to follow.
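The second option requires computing the circle determined by three demonstrated points. A planar sketch using the standard circumcenter formulas (an illustration of the geometry, not any particular controller's routine):

```python
def circle_through(p1, p2, p3):
    """Center and radius of the circle through three planar points,
    via the classic circumcenter formulas; collinear points give d == 0."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear; no unique circle")
    a2, b2, c2 = ax * ax + ay * ay, bx * bx + by * by, cx * cx + cy * cy
    ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d
    uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d
    r = ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5
    return (ux, uy), r

center, radius = circle_through((1, 0), (0, 1), (-1, 0))  # (0.0, 0.0), 1.0
```

Once the center and radius are known, the controller can generate as many intermediate points along the arc as its control resolution requires.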

Various curve-fitting routines or series expansions can be used to implement
these features. Once again, it is important to note that time is critical and an
approximation may be needed. For instance, a reasonable trade-off may be made
between the algorithms used and the actual control resolution and accuracy of the
manipulator. It may only be necessary to define a joint angle to the nearest minute,
since the resolution of the encoding device may be similarly limited.
The ideas discussed above need to be implemented in whatever type of programming
language is to be used for a manipulator. Some languages previously
described provide terse, powerful commands. For example, the statement from
VAL

MOVES POINT1

allows the programmer to command the robot to move from its current location
to one defined as POINT1 in a straight line. (The motion will be joint interpolated
if the "S" is deleted.) Other languages may provide the mathematical capability
to compute intermediate points necessary to move in a straight line but without
an explicit command.

7.7.2 Automatic Programming and World Modeling

The concept of automatic programming is associated with a robot "teaching itself."
Essentially, the robot is assigned a task and must develop a plan to accomplish it
autonomously. Various AI techniques are utilized to define the steps that the
robot must take to accomplish the task. However, before any algorithms are used,
it is first necessary to define a three-dimensional world model. This defines the
environment in which the robot must perform its designated task. It includes
models of the pieces or parts that the robot must manipulate as well as any physical
constraints or obstructions in the workcell.
The complexity of the modeling process is staggering, and compromises must
be made. For example, if a printed circuit board with 100 holes were modeled,
one would define the plane and the location of the center of the holes and their
sizes. The model probably would not compensate for manufacturing errors such
as misplaced holes or errors in roundness. Even based on this simple example, it
can be seen that the model is rarely, if ever, the same as the real world. This is
where the problem begins, since the things assumed correct are usually the ones
that will affect the reliability of the process.
Automatic programming is currently a laboratory tool which is only recently
being adapted to real-world applications. It also includes the concept of collision
avoidance, in which the robot ensures that it does not hit anything in the process
of doing its job. It is expected, however, that as processes become more complex
and higher performance is desired, future generations of industrial robots will have
this capability.

7.8 THE ROBOT'S COMPUTER SYSTEM

As discussed previously in this chapter, the role of the computer in a robot's
controller can be quite varied. What we will attempt to do in this section is to
discuss the requirements of a hypothetical robot controller from the computational
point of view, using as a model the generic architecture of a robotic controller
shown in Figure 2.2.2, the capabilities discussed in Section 2.4, and the servo control
loop discussed in Appendix C.

Starting from the designer's and implementer's points of view, a computer
architecture similar to Figures 2.2.3 and 4.1.1 is chosen. The use of distributed
microprocessors to implement the controller has many advantages for the development
phase. Specifically:
• A suitable processor can be chosen for each application, and thereby cost
and complexity can be kept to a minimum.
• The software for each processor can be designed, coded, and tested independently
of the other processors.
• The system can be designed to be modular in nature, and the complexity of
the controller can be reduced when less functionality is needed.
• If the functionality is distributed in the proper way, the task of troubleshooting
the final system is reduced to checking only those modules responsible for
functions that are not operational.
• If single-board computers with identical computational architecture are used
to implement some or all of the processors (which in fact actually execute
completely different software), we obtain commonality of hardware and the
possibility of swapping cards in the field to facilitate troubleshooting.

Of course, we must also understand the disadvantages of employing distributed
microprocessors. These include the following:

• The communications between the processors must be clearly defined.
• Provisions must be made so that testing of each processor can be done independently
of the others. Thus, both hardware and software may be necessary
so as to emulate signals and data from nonexistent pieces of the system.
• If all the processors must be debugged at the same time, multiple logic analyzers
or other test equipment must be available.

For our hypothetical system, the following additional specification will also
be included:

• The robot will be programmed using any commercially available language
and a library of subroutines which perform functions associated with robot
control.
This particular specification makes the system implementation quite simple,
since only subroutines (or functions) associated with robot control have to be
developed, and the remaining control structure of the language, such as looping,
data structures, and syntax, is already available. To simplify the design further,
if we choose a commercial operating system that will run on the processor and
support the language, we have already accounted for the "housekeeping" features
detailed in Section 2.4, since the operating system should provide for file maintenance
and a commercially available editor can be used to create or edit programs.
At this point, the elegant simplicity of our robot controller should be apparent.
In addition to what has been described above, the remaining pieces that need
to be included are the specialized interfaces to the electronics that control the
physical hardware of the manipulator or interface ancillary devices (such as binary
inputs and outputs).
The computer we have chosen certainly includes an interface to the outside
world either in the form of a standard bus (such as VME or STD), serial ports
such as RS-232C, or a custom interface. In any event, these interfaces become
the medium of communication between the control program and the hardware-
specific interfaces.
Figure 7.8.1 shows the proposed architecture of our robot controller. We
have assumed that some type of relatively high-speed interface exists between our
"central control unit" or "host" and each piece of specific hardware. Here the
term "central control unit" or "host" is used to encompass the functionality of
sequencer, memory, and computational unit defined in Chapter 2.
Note that in this system a separate computer is used to control each servo
associated with the robot. This processor essentially executes code as defined in
Table C.4.1 (without the profile generator). Therefore, the only information it
needs is the set point data, which will come from the host over the "common bus"
at a fixed rate. To synchronize the "joint processors," another message is sent
to all of the processors simultaneously, telling them to execute their algorithms
using the new set point data. A timing diagram depicting this data transfer is
shown in Figure 7.8.2. As pointed out in Appendix C, it may not be possible to
send set point data to the joints as fast as we would like (e.g., every 1 ms) because
of the host's computational limit. In this case it is possible that data are sent every
16 ms and the joints themselves have a linear interpolator used to generate intermediate
set point information.
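The joint processor's linear interpolator might look like the following sketch; the 16 ms host period and 1 ms servo period are the example rates from the text, and the function names are hypothetical:

```python
def interpolate_setpoints(prev_sp, next_sp, host_period_ms=16, servo_period_ms=1):
    """Generate the intermediate set points a joint processor could use
    between two successive host set points: one evenly spaced value per
    servo tick, ending exactly on the new host set point."""
    steps = host_period_ms // servo_period_ms
    delta = (next_sp - prev_sp) / steps
    return [prev_sp + delta * k for k in range(1, steps + 1)]

# Host commands a move from 100.0 to 116.0 encoder counts over one 16-ms cycle;
# the servo loop consumes one interpolated set point per 1-ms tick.
ticks = interpolate_setpoints(100.0, 116.0)
```

This is what lets the servo loop run at its fast fixed rate even though fresh data arrives from the host an order of magnitude more slowly.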
Following the concept of data transfer between the host and joint processors
just discussed, other processors on the "common bus" also need to receive commands
and data from the host and send data back to it. The host may use the
information directly or route it to one of the other processors of the system. In
keeping with the communication scheme of Figure 7.8.2, these data are transmitted
to and received from the processors at fixed times. Thus Figure 7.8.2 may be
modified to include slots for communication to the other computers. While on
the surface this type of communication scheme may seem to contradict the need
for extremely fast response, the update rates are generally much faster than the
[Figure: block diagram. A host or central control unit runs the operating system, a high-level programming language, a library of robot control functions, an editor, robot-specific programs (such as position training), application-specific code, device drivers (terminal, disk, joystick, common-bus communication), and mass storage (disk); a vision system and user controls attach to it. A bidirectional high-speed common bus links the host to joint processors #1 through #n (each containing a PID algorithm, a linear interpolator, and a common-bus communication driver) and to binary I/O modules with their I/O control program.]

Figure 7.8.1. Controller architecture from computer perspective.

mechanical devices they are controlling and should not present a problem. Also,
as noted earlier, the servos are interpolating data and are performing at a faster
rate (e.g., 1-ms updates) than the set point information is being sent from the
host (e.g., 16-ms updates).
This scheme is also advantageous in coupling job-specific hardware devices
into the system. For example, if we wished to add vision, it would be a self-contained
system and would merely send or receive position information over the
common bus for use by the host when necessary. Additionally, if we wanted a
force-controlled gripper, hardware specific to this task could be added, which could
perform such tasks as signal processing (for use with a force sensor) and servo
control of the gripper's actuator. The host would merely send a signal to the
[Figure: the data interchange structure (DIS) and its timing. The DIS contains a global command byte (e.g., receive new set point; begin motion using new set point; stop moving immediately; relax joint, i.e., shut off servo; report last position; set current location to value in joint set point; last set point for this motion; etc.), set point/data fields for joints 1 through N, binary output and binary input bit patterns, optional vision processor command and data/response fields, and a CRC or check digit.

Timing of data exchange between host and joint processors:
• Every T seconds the host writes the DIS to the common bus and all joint processors on the bus receive it simultaneously.
• Each processor computes the CRC as a validity check.
• Each processor interprets the global command byte and takes its appropriate data.
• Any information that the processors are to send to the host is written to the appropriate location (as a response to the last command byte).
• T/K seconds after T, the host interprets the information written into the DIS.
• The cycle repeats forever.]

Figure 7.8.2. Data transfer between host and joint processors.

computer to close and exert a force of, say, 8 oz. Once the processor received the
command, it would perform its task independently of the "central control unit"
and, upon completion of its task, send back a signal indicating success or the reason
for failure.
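A minimal sketch of packing and validating a frame like the DIS of Figure 7.8.2 follows; the field layout is hypothetical, and a simple modulo-256 check digit stands in for a true CRC:

```python
import struct

def build_dis(command, joint_setpoints, binary_out):
    """Pack a hypothetical DIS frame: one global command byte, one float
    set point per joint, a 16-bit output bit pattern, and a check digit
    computed as the byte sum modulo 256 (a stand-in for a real CRC)."""
    body = struct.pack("<B", command)
    body += struct.pack("<%df" % len(joint_setpoints), *joint_setpoints)
    body += struct.pack("<H", binary_out)
    check = sum(body) % 256
    return body + struct.pack("<B", check)

def dis_valid(frame):
    """Each joint processor validates the frame before acting on it."""
    return sum(frame[:-1]) % 256 == frame[-1]

frame = build_dis(0x01, [10.0, -5.5, 90.0], 0b1010)
```

Every processor on the bus receives the same frame, validates the check digit, and then extracts only the fields addressed to it, which is exactly the broadcast-and-select behavior the timing diagram describes.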
To facilitate the "teaching" of points, the system may include another program
by which the trainer uses a joystick, keys, or other control to cause the manipulator
to move either in a joint-by-joint fashion or along a straight line. In either case,
set points are generated and fed to the servos, which in turn must report their
positions back to the control program. When the trainer is satisfied with the
position of the manipulator, the values of all the variables relating to the position
of the manipulator are saved and given a name such as "this_point." While the
name "this_point" may be associated with a specific position of the manipulator,
it should be understood that this may be a complex data structure and will carry
information not readily apparent to or needed by the trainer.
Once this type of system has been designed and debugged, it will be used by
people who may be less skilled in the art than its designers. From the user's point
of view, the functionality described in Section 2.4 is implemented primarily by
high-level programming instructions. For example, using the programming language,
the trainer may command the robot to go in a straight line from its current
location to another simply by using a function such as

move_straight_to(next_point);

The argument of the function, "next_point," may have been created by the "teaching
program" discussed previously.
As pointed out earlier, the function "move_straight_to()" was defined by
the designers, and the details of its exact implementation are probably of little
interest to the user, whose primary interest is to be able to use the function along
with some data to cause the manipulator to perform a specific function. Of course,
other functions must also exist in the library. These instructions provide the ability
to perform the following actions:

• Wait a specific amount of time.
• Set the binary outputs to a particular state.
• Read the binary inputs (from external sensors) and the state of the gripper
(i.e., opened or closed).
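Such a library of robot control functions, layered over the common-bus driver, might be organized as follows. All names, the bus interface, and the gripper bit assignment are hypothetical:

```python
import time

class RobotLibrary:
    """A hypothetical library of robot control subroutines layered on a
    commercial language, as the specification in this section suggests."""

    def __init__(self, bus):
        self.bus = bus  # object that handles common-bus communication

    def wait(self, seconds):
        """Wait a specific amount of time."""
        time.sleep(seconds)

    def set_outputs(self, pattern):
        """Set the binary outputs to a particular state."""
        self.bus.write_outputs(pattern)

    def read_inputs(self):
        """Read the binary inputs from external sensors."""
        return self.bus.read_inputs()

    def gripper_closed(self):
        """Report the state of the gripper (opened or closed); here we
        assume bit 0 of the input word reflects the gripper state."""
        return bool(self.read_inputs() & 0x01)
```

The application programmer then calls these functions from ordinary programs in the host language, never touching the bus protocol directly.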

Besides the functions provided for specific robot control, the commercial
operating system gives the user the ability to perform "housekeeping" functions
and supports any other programs needed to support the robot programming environment.
In summary, this section has attempted to give the reader some insight as to
how computational elements and software come together to form a robot controller.
Of paramount importance was the need to implement a certain level of functionality
with standard software and hardware so that the ultimate end user could have the
ability to define a robot task.

7.9 SUMMARY

In this chapter we have discussed many topics relevant to computer considerations


for robotic systems. The picture presented here is a snapshot of numerous tech-
nological considerations that are changing rapidly, and thus the specific material
in the chapter may be quickly outdated. The general topics treated here will not
become outdated, however, and for this reason one must develop a general set of
methods to evaluate new advances in robotic software, communications, cell con-
trollers, and other robotic computer-related subjects. Although the specific robot
languages or the specific interface protocol may change, the role that these tech-
nological components play will be more or less consistent.
The authors cannot emphasize enough the importance of evaluating the com-
puter elements in a robotic system, and being aware of changes in technology that
may alter the specific value of a specific technique. For this reason the authors
expect that the reader will need to be aware of the general importance of these
issues and can analyze them as technology changes.

7.10 PROBLEMS

7.1 Design the software architecture of a robot controller that utilizes a multitasking operating
system to provide the following:

• Service a terminal
• Interpret terminal commands
• Perform computations
• Wait for specific events (joints at set points, intrusion detection)
• Execute the instructions of a robot program

Modify the design so that an editor may run and a program be entered while the
robot controller is running another program and executing its other defined tasks.
If you had to prioritize activities, which would be the most important? The least important?
7.2 Pick a specific processor (such as a 68000 or 8086) and investigate what commercially
available bus architectures, operating systems, and programming languages are available.
Based on robotic considerations as defined in Chapters 2 and 7, what combination
provides the most support for the desired functionality at the least cost? Which provides
the most flexibility?
7.3 Based on the discussions in Chapter 8, determine the time it takes to multiply two
4 × 4 matrices using languages such as BASIC, FORTRAN, Pascal, and C; also determine
the time factor if coded in assembly language. Use a computer with support for more
than one of these languages to simulate this. Consider fixed-point and floating-point
numbers along with a general matrix multiplication routine and one that is specific to
DH matrices.
7.4 Using the fixed instruction sequence control definition of the MAKER 100 Mark 1 controller (see Section 7.6.1.1), program the stacking application (which was accomplished in VAL (see Table 7.6.2) and described in Section 7.6.3). Assume that there are eight binary outputs and that straight-line motion is possible. Use mnemonics to define positions (remember that each position must be taught previously) and other operations such as gripper state and the states of output lines. Since no terminal device is available, use a subset of the binary outputs to indicate which object has been stacked, and a special one to indicate that the program has completed its task. Comment on the fixed instruction sequence versus the VAL implementation.
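One way to satisfy the "no terminal" requirement is to treat the eight outputs as a bit field. In the sketch below, lines 0-2 encode the index of the last object stacked and line 7 is the completion flag; these line assignments are an assumption made for illustration, not part of the MAKER 100 definition.

```python
# Hypothetical output-line encoding for exercise 7.4.
# Lines 0-2: binary index of the last object stacked (up to 8 objects).
# Line 7:    "task complete" flag.

DONE_LINE = 7

def outputs_for(object_index, done=False):
    """Return the 8-bit output pattern as a list of 0/1 line states."""
    assert 0 <= object_index < 8
    bits = [0] * 8
    for line in range(3):                  # lines 0-2 hold the index
        bits[line] = (object_index >> line) & 1
    if done:
        bits[DONE_LINE] = 1
    return bits
```

After each stacking move the program would issue output instructions matching this pattern, so an observer (or test fixture) can follow progress without a display.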
7.5 Obtain descriptions of at least two commercial programming languages from their manufacturers. Use these languages to reprogram the stacking application (see Table 7.6.2 and Section 7.6.3). Compare and comment on the similarity of the functionality of the different languages (for instance, straight-line motion, operations on positions in space, terminal display, and so on). Are any of the languages better for performing this object-stacking application? Next consider the programming skill required for each language and comment on whether people with explicit computer programming skills could easily understand the construction of the language and effectively use it. For the languages you have chosen, are any applications better suited for implementation by one versus the others?
7.6 Using a personal computer, program a robot simulator using two-dimensional graphics and a fixed instruction sequence control scheme. For example, assume that we graphically show the x-y plane of an r-θ manipulator and the annulus of its workspace. Also, the display will be used to trace the trajectory of the tool tip of our simulated robot. Moreover, the state of each of the output lines will be shown along with any other pertinent information.
Define a fixed instruction sequence set (similar to that of the MAKER 100 described in Section 7.6.1.1). One possible suggestion is to input the robot program into a file which is read by your simulation program. As each instruction is read from the file, update the display to show the manipulator position, output line state, gripper state, and so forth. Note that it may be necessary to query for the input states prior to updating the display.
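The file-driven dispatch loop suggested above can be sketched as follows. The four-instruction set (MOVE, GRIP, OUT, END) is a hypothetical subset in the spirit of the MAKER 100, and a real simulator would redraw the graphics after each instruction where the comment indicates.

```python
# Minimal instruction interpreter for the simulator of exercise 7.6.
# Instructions, one per text line:
#   MOVE r theta   -- move tool tip to (r, theta)
#   GRIP open|close
#   OUT line value -- set one binary output line
#   END            -- halt the program

def run_program(lines):
    """Interpret a fixed instruction sequence; return the final state."""
    state = {"pos": (0.0, 0.0), "gripper": "open", "outputs": [0] * 8}
    for line in lines:
        fields = line.split()
        if not fields:
            continue                         # skip blank lines
        op = fields[0]
        if op == "MOVE":
            state["pos"] = (float(fields[1]), float(fields[2]))
        elif op == "GRIP":
            state["gripper"] = fields[1]
        elif op == "OUT":
            state["outputs"][int(fields[1])] = int(fields[2])
        elif op == "END":
            break
        # ...display update (trajectory trace, output states) goes here...
    return state
```

In the full exercise, `lines` would come from `open("program.txt")` and each pass through the loop would also refresh the two-dimensional display.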
7.7 Find commercial examples of robotic extensions to computer languages. Comment on
the similarity of the extensions in terms of functionality.

7.11 FURTHER READING

1. Artwick, Bruce A., Microcomputer Interfacing, Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1980.
2. Bonner, Susan, and Kang G. Shin, "A Comparative Study of Robot Languages," IEEE Computer Society, Computer, Vol. 15, No. 12, December 1982, pp. 82-96.
