Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
FOURTH EDITION
M GOPAL
Professor
Department of Electrical Engineering
Indian Institute of Technology Delhi
New Delhi
McGraw-Hill Offices
New Delhi New York St Louis San Francisco Auckland Bogotá Caracas
Kuala Lumpur Lisbon London Madrid Mexico City Milan Montreal
San Juan Santiago Singapore Sydney Tokyo Toronto
Tata McGraw-Hill
Published by Tata McGraw Hill Education Private Limited,
7 West Patel Nagar, New Delhi 110 008.
Digital Control and State Variable Methods: Conventional and Intelligent Control Systems, 4e
Information contained in this work has been obtained by Tata McGraw-Hill, from sources believed to be reliable.
However, neither Tata McGraw-Hill nor its authors guarantee the accuracy or completeness of any information
published herein, and neither Tata McGraw-Hill nor its authors shall be responsible for any errors, omissions,
or damages arising out of use of this information. This work is published with the understanding that Tata
McGraw-Hill and its authors are supplying information but are not attempting to render engineering or other
professional services. If such services are required, the assistance of an appropriate professional should be
sought.
Typeset at Tej Composers, WZ-391, Madipur, New Delhi 110063, and printed at
Dedicated
with all my love to my
son, Ashwani
and
daughter, Anshu
Preface
Control Engineering is an active field of research and hence there is a steady influx of new concepts,
ideas and techniques. In time, some of these elements develop to the point where they join the list of
things every control engineer must know. To grasp the significance of modern developments, a strong
foundation is necessary in analysis, design and stability procedures applied to continuous-time linear and
nonlinear feedback control systems. Simultaneously, knowledge of the corresponding methods in the
digital version of control systems is also required because of the use of microprocessors, programmable
logic devices and DSP chips as controllers in modern systems. This book aims at presenting the vital
theories required for appreciating the past and present status of control engineering.
When compiling the material for the first edition of the book, decisions had to be made as to what should
be included and what should not. It was decided to place the emphasis on the control of continuous-time
and discrete-time linear systems, based on frequency-domain and state-space methods of design. In the
subsequent editions, we continue to emphasize solid mastery of the underlying techniques for linear
systems; in addition, the subject of nonlinear control has occupied an important place in our presentation.
The availability of powerful low-cost microprocessors has spurred great interest in nonlinear control.
Many practical nonlinear control systems based on conventional nonlinear control theory have been
developed. The emerging trends are to employ intelligent control technology for nonlinear systems. As a
result, the subject of nonlinear control (based on conventional as well as intelligent control methodologies)
has become a necessary part of the fundamental background of control engineers.
The vast array of systems to which feedback control is applied, and the growing variety of techniques
available for the solution of control problems, mean that today's student of control engineering needs
to manage a great deal of information. To help students in this task, and to keep their perspective as
they plow through a variety of techniques, a user-friendly format has been devised for the book. We have
divided the contents into three parts. Part I deals with digital control principles and design in the transform
domain, assuming that the reader has had an introductory course in control engineering concentrating
on the basic principles of feedback control and covering the various classical analog methods of control
system design. The material presented in this part of the book is closely related to the material a student
may already be familiar with, but towards the end a direction to wider horizons is indicated. Basic
principles of feedback control and classical analog methods of design have been elaborately covered in
another book: M Gopal, Control Systems: Principles and Design, 4th edition, Tata McGraw-Hill, 2012.
Part II of the book deals with state variable methods in automatic control. State variable analysis and
design methods are usually not covered in an introductory course. It is assumed that the reader has not been
exposed to the so-called modern control theory. Our approach is to first discuss the state variable methods
for continuous-time systems and then give a compact presentation for discrete-time systems using the
analogy with the continuous-time systems. This formatting is a little different from the conventional one.
Typically, a book on digital control systems starts with transform-domain design and then carries over
to state space. These books give a detailed account of state variable methods for discrete-time systems.
Since the state variable methods for discrete-time systems run quite parallel to those for continuous-time
systems, a full-blown repetition is not appreciated by readers conversant with state variable methods for
continuous-time systems. And for readers with no background of this type, a natural way of introducing
state variable methods is to give the treatment for continuous-time systems, followed by a brief parallel
presentation for discrete-time systems. This sequence of presentation is natural because it evolves from
the sequence of steps in a design procedure. The systems to be controlled (plants) are continuous-
time systems; we, therefore, investigate the properties of these systems using continuous-time models.
Sampling is introduced only to insert a microprocessor in the feedback loop.
Part III of the book deals with nonlinear control schemes. The choice and emphasis of the schemes is
guided by the basic objective of making an engineer or a student gain insights into the current nonlinear
techniques in use for the solution of practical control problems in the industry. Some results of mostly
theoretical interest are not included. Instead, emerging trends in nonlinear control are introduced. The
conventional nonlinear control structures like Feedback Linearization, Model-Reference Adaptive
Control, Self-Tuning Control, Generalized Model Predictive Control, Sliding Mode Control, etc., fall
well short of the requirements of modern complex systems. While extensions and modifications to these
conventional methods of control design based on mathematical models continue to be made, intelligent
control technology is emerging as an alternative to solve complex control problems. This technology is
slowly gaining wider acceptance in both academia and industry. The scientific community and industry
are converging on the view that there is something fundamentally significant about this technology.
Rigorous characterization of theoretical properties of intelligent control methodology is not our aim;
rather we focus on the development of systematic design procedures, which will guide the design of a
controller for a specific problem.
The fundamental aim in preparing the book has been to work from basic principles and to present control
theory in a way that can be easily understood and applied. Solved examples are provided as and when
a new concept is introduced. The section on review examples briefly reiterates the key concepts of
the chapter. A supplement of problems, with final answers, is also made available for pen-and-paper
practice. MATLAB/Simulink tools are introduced in appendices to train the students in computer-aided
design. All the solved examples, review examples, and problems can be done using software tools. Some
problems, specifically designed with a focus on MATLAB solutions, are given in the appendices. A rich
collection of references, classified by topic, has been given for more enthusiastic readers.
Chapter 1 gives an introduction to the digital control problem. A rich variety of practical problems is
presented as examples. A rapid review of the classical procedures used for analog control is also provided.
For the study of classical procedures for digital control, the required mathematical background includes
z-transforms. A review of z-transformation is presented in Chapter 2. With this background, the concepts
of transfer function models and frequency-response models are introduced; and then dynamic response,
steady-state response and stability issues are covered. After taking the student gradually through the
mathematical domain of digital control systems, Chapter 2 introduces the sampling theorem and the
phenomenon of aliasing. Methods to generate discrete-time models which approximate continuous-time
dynamics are also introduced in this chapter.
Chapter 3 briefly describes the digital control hardware including microprocessors, shaft-angle encoders,
stepping motors, programmable logic controllers, etc. Transform-domain models of digital control loops
are developed, with examples of some of the widely used digital control systems. Digital PID controllers,
their implementation and tuning are also included in this chapter.
Chapter 4 establishes a toolkit of design-oriented techniques. It puts forward alternative design methods
based on root locus and Bode plots. Design of digital controllers using z-plane synthesis is also included
in this chapter.
Part II (Chapters 5−8) of the book deals with state variable methods in automatic control. The manner
of presentation followed here is to first discuss state variable methods for continuous-time systems and
then give a compact presentation of the methods for discrete-time systems, using the analogy with the
continuous-time case.
Chapter 5 is on state variable analysis. It treats the problems of state variable representation,
diagonalization, solution of state equations, controllability, and observability. The relationship between transfer function
and state variable models is also given. Although it is assumed that the reader has the necessary
background on vector-matrix analysis, a reasonably detailed account of vector-matrix analysis is provided
in this chapter for convenient reference.
State variable analysis concepts, developed in continuous-time format in Chapter 5, are extended to
digital control systems in Chapter 6.
The techniques of achieving desired system characteristics by pole-placement using complete state
variable feedback are developed in Chapter 7. Also included is the method of using the system output
to form estimates of the states for use in state feedback. Results are given for both continuous-time and
discrete-time systems.
Lyapunov stability analysis is introduced in Chapter 8. In addition to stability analysis, Lyapunov
functions are useful in solving some optimization problems. We discuss in this chapter the solution
of the linear quadratic optimal control problem through Lyapunov synthesis. Results are given for both
continuous-time and discrete-time systems.
Parts I and II exclusively deal with linear systems. In Part III (Chapters 9−14), the focus is on nonlinear
systems. We begin with conventional methods of analysis and design (Chapters 9−10) which are of
current importance in terms of industrial practice. Results of mostly theoretical interest are not included.
Instead, the emerging trends in nonlinear control based on intelligent control technology are presented in
reasonable detail (Chapters 11−14).
In Chapter 9, considerable attention is paid to describing function and phase plane methods, which
have demonstrated great utility in the analysis of nonlinear systems. Also included is stability analysis of
nonlinear systems using Lyapunov functions.
Chapter 10 introduces the concepts of feedback linearization, model reference adaptive control, system
identification and self-tuning control, and variable structure control. In terms of theory, major strides have
been made in these areas. In terms of applications, many practical nonlinear control systems have been
developed.
Neural networks are widely used in intelligent control systems. An informative description of neural
networks is presented in Chapter 11. This chapter contains architectures and algorithms associated
with multi-layer perceptron networks, radial basis function networks and support vector machines.
Application examples from the perspectives of system identification and control are given.
Chapter 12 introduces the concepts of fuzzy sets and knowledge representation using fuzzy rule-based
learning. Conceptual paradigms of fuzzy controllers are presented, with a discussion on the Mamdani
architecture for design. The approach of representing a system, for identification, as linguistic rules using
the popular Takagi–Sugeno fuzzy representation is discussed. A brief description of system identification and control using
neuro-fuzzy systems is also included in this chapter.
The focus in Chapter 13 is on genetic algorithms for optimization. Applications of these algorithms to
the learning of neural networks, as well as to the structural and parameter adaptation of fuzzy systems,
are also described.
Chapter 14 presents a new control architecture that is based on reinforcement learning. Several recent
developments in reinforcement learning have substantially increased its viability as a general approach
to intelligent control.
WEB SUPPLEMENTS
The book includes a wealth of supplements available in the dedicated website:
http://www.mhhe.com/gopal/dc4e
It includes:
For Students
For Instructors
This part of the website is password protected and will be available to instructors who adopt
this text. Requests for access can be sent to a local TMH sales representative.
READERSHIP
The book is intended to be a comprehensive treatment of advanced control engineering for courses at
the senior undergraduate and postgraduate (Master's degree) levels. It is also intended to be a reference
source for PhD research students and practicing engineers.
For the purpose of organizing different courses for students with different backgrounds, the sequencing
of chapters and the dependence of each chapter on previous chapters have been carefully designed in the text.
A typical engineering curriculum at the second-degree level includes core courses on ‘digital control
systems’ and ‘linear system theory’. Parts I and II of the book have been designed to fully meet the
requirements of the two courses. In Part III of the book, a reasonably detailed account of nonlinear
control schemes, both the conventional and the intelligent, is given. The requirements of elective courses
on ‘nonlinear control systems’ and ‘intelligent control’ will be partially or fully (depending on the depth
of coverage of the courses) served by Part III of the book.
A typical engineering curriculum at the first-degree level includes a core course on feedback control
systems, with one or two elective courses on the subject. This book meets the requirements of elective
courses at the first-degree level.
ACKNOWLEDGEMENTS
I would like to acknowledge the contributions of faculty, students and practicing engineers across the
country, whose suggestions through previous editions have made a positive impact on this new edition.
In particular, the considerable help and education I have received from my students and colleagues at
Indian Institute of Technology, Delhi, deserves sincere appreciation.
I also express my appreciation to the reviewers who offered valuable suggestions for the fourth and
previous editions. The reviews have had a great impact on the project.
Finally, I would like to thank The Tata McGraw-Hill Publishing Company, and its executives for providing
professional support for this project through all phases of its development.
The generous participation of instructors, students, and practicing engineers in eliminating errors in the
text (if any), and in refining the presentation, will be gratefully acknowledged.
M. Gopal
mgopal@ee.iitd.ac.in
PUBLISHER’S NOTE
Remember to write to us. We look forward to receiving your feedback, comments and ideas to enhance
the quality of this book. You can reach us at tmh.elefeedback@gmail.com. Please mention the title and
author’s name as the subject.
In case you spot piracy of this book, please do let us know.
Part I
Digital Control: Principles and Design in
Transform Domain
Automatic control systems play a vital role in the (technological) progress of human civilization. These
control systems range from the very simple to the fairly complex in nature. Automatic washing machines,
refrigerators, and ovens are examples of some of the simpler systems used in homes. Aircraft automatic
pilots, robots used in manufacturing, and electric power generation and distribution systems represent
complex control systems. Even such problems as inventory control, and socio-economic systems control,
may be approached from the theory of feedback control.
Our world is one of continuous-time variables. Quantities like flow, temperature, voltage,
position, and velocity are not discrete-time variables but continuous-time ones. If we look back at the
development of automatic control, we find that mass-produced analog (electronic) controllers have
been available since about the 1940s. A first-level introduction to control engineering, provided in the
companion book ‘Control Systems: Principles and Design’, deals with the basics of control, and
covers sufficient material to enable us to design analog (op amp based) controllers for many simple
control loops found in the industry.
From the 1980s onwards, we find microprocessor digital technology to be a dominant industrial
phenomenon. Today, the most complex industrial processes are under computer control. A microprocessor
determines the input to manipulate the physical system, or plant; and this requires facilities to apply this
input to the physical world. In addition, the control strategy typically relies on measured values of the
plant behavior; and this requires a mechanism to make these measured values available to the computing
resources. The plant can be viewed as changing continuously with time. The controller, however, has
a discrete clock that governs its behavior, and so its values change only at discrete points in time. To
obtain deterministic behavior and ensure data integrity, the sensor side must include a mechanism to sample
continuous data at discrete points in time, while the actuators need a hold mechanism to produce a continuous
value between the time points from the discrete-time data.
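As a minimal sketch of this sample-and-hold idea (the signal, sampling period, and numbers below are assumptions for illustration, not values from the text), the following Python fragment samples a continuous-time signal every T seconds and reconstructs a piecewise-constant actuator signal with a zero-order hold:

```python
import math

T = 0.1                      # sampling period (s), assumed for illustration
t_end = 1.0

def plant_output(t):
    # stand-in continuous-time signal, e.g., a sensor voltage
    return math.sin(2 * math.pi * t)

samples = []                 # what the A/D side of the loop sees
k = 0
while k * T <= t_end:
    samples.append(plant_output(k * T))   # sample at t = kT
    k += 1

def zoh(t):
    # zero-order hold: the actuator sees the most recent sample, held constant
    return samples[min(int(t / T), len(samples) - 1)]

print(zoh(0.25), zoh(0.29))  # same held value within one sampling interval
```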
Computer interfacing for data acquisition consists of analog-to-digital (A/D) conversion of the input
(to the controller) analog signals. Prior to the conversion, the analog signal has to be conditioned to meet
the input requirements of the A/D converter. Signal conditioning consists of amplification (for sensors
generating very low power signals), filtering (to limit the amount of noise on the signal), and isolation
(to protect the sensors from interacting with one another and/or to protect the signals from possibly
damaging inputs). Digital-to-analog (D/A) conversion at the output (of the controller) is carried out to
send the signal to an actuator which requires an analog input. The signal has to be amplified by a
transistor, solid-state relay, or power amplifier. Most manufacturers of electronic
instrumentation devices are producing signal conditioners as modules.
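The quantization step at the heart of A/D conversion can be sketched as follows; the input range and resolution here are assumed values, and real converter hardware differs in detail:

```python
def adc(voltage, v_min=0.0, v_max=5.0, bits=10):
    """Quantize a conditioned analog voltage to an n-bit binary code."""
    levels = 2 ** bits
    v = min(max(voltage, v_min), v_max)          # clip to converter range
    return int((v - v_min) / (v_max - v_min) * (levels - 1))

def dac(code, v_min=0.0, v_max=5.0, bits=10):
    """Convert an n-bit code back to an analog voltage (ideal D/A)."""
    levels = 2 ** bits
    return v_min + code * (v_max - v_min) / (levels - 1)

print(adc(2.37))            # -> integer code
print(dac(adc(2.37)))       # -> recovers 2.37 V within one quantization step
```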
The immersion of computing power into the physical world has changed the scene of control system
design. A comprehensive theory of digital ‘sampled’ control has been developed. This theory requires a
sophisticated use of new concepts such as the z-transform. It is, however, quite straightforward to translate
analog design concepts into digital equivalents. After taking a guided tour through the analog design
concepts and op amp technology, the reader will find in Part I of this book sufficient material to enable
him/her to design digital controllers for many simple control loops, and to interface the controllers to
other subsystems in the loop, thereby building complete feedback control systems.
The broad space of digital control applications can be roughly divided into two categories: industrial
control and embedded control. Industrial control applications are those in which control is used as part
of the process of creating or producing an end product. The control system is not a part of the actual end
product itself. Examples include the manufacture of pharmaceuticals and the refining of oil. In the case
of industrial control, the control system must be robust and reliable, since the processes typically run
continuously for days, weeks or years.
Embedded control applications are those in which the control system is a component of the end product
itself. For example, Electronic Control Units (ECUs) are found in a wide variety of products including
automobiles, airplanes, and home appliances. Most of these ECUs implement feedback control tasks
such as engine control, traction control, anti-lock braking, active stability control,
cruise control, and climate control. While embedded control systems must also be reliable, cost is a
more significant factor, since the components of the control system contribute to the overall cost of
manufacturing the product. In this case, much more time and effort is usually spent in the design phase
of the control system to ensure reliable performance without requiring any unnecessary excess of
processing power, memory, sensors, actuators, etc., in the digital control system. Our focus in this book
will be on industrial control applications.
Perhaps more than any other factor, the development of microprocessors has been responsible for the
explosive growth of the computer industry. While early microprocessors required many additional
components in order to perform any useful task, the increasing use of Large-Scale Integration (LSI) or
Very Large-Scale Integration (VLSI) semiconductor fabrication techniques has led to the production of
microcomputers, where all of the required circuitry is embedded on one or a small number of integrated
circuits. A further extension of the integration is the single-chip microcontroller, which adds analog and
binary I/O, timers, and counters so as to be able to carry out real-time control functions with almost
no additional hardware. Examples of such microcontrollers are the Intel 8051 and 8096, and the Motorola
MC68HC11. These chips were developed largely in response to the automotive industry's desire for
computer-controlled ignition, emission control, and anti-skid systems. They are now widely used
in process industries. This digital control practice, along with the theory of sampled-data systems, is
covered in Chapters 2–4 of the book.
Chapter 1
Introduction
respectively, analog-to-digital (A/D) and digital-to-analog (D/A) conversion at the computer input and
output. There are, of course, exceptions; sensors which combine the functions of the transducer and the
A/D converter, and actuators which combine the functions of the D/A converter and the final control
element are available. In most cases, however, our sensors will provide an analog voltage output, and our
final control elements will accept an analog voltage input.
In the control scheme of Fig. 1.1, the A/D converter performs the sampling of the sensor signal (analog
feedback signal) and produces its binary representation. The digital computer (control algorithm)
generates a digital control signal using the information on desired and actual plant behavior. The
digital control signal is then converted to analog control signal via the D/A converter. A real-time clock
synchronizes the actions of the A/D and D/A converters, and the shift registers. The analog control signal
is applied to the plant actuator to control the plant’s behavior.
The overall system in Fig. 1.1 is hybrid in nature; the signals are in the sampled form (discrete-time
signals) in the computer, and in a continuous form in the plant. Such systems have traditionally been
called sampled-data systems; we will use this term as a synonym for computer control systems/digital
control systems.
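A schematic sketch of this loop in code may help fix the idea; the I/O helpers read_adc and write_dac below are hypothetical placeholders, not a real device API, and the gains and timing are assumptions:

```python
import time

T = 0.05                          # sampling period set by the real-time clock

def read_adc():                   # placeholder: sampled, digitized feedback signal
    return 0.0

def write_dac(u):                 # placeholder: digital-to-analog conversion
    pass

def control_law(setpoint, y):     # the control algorithm in the computer
    return setpoint - y           # here: plain proportional action, gain 1

setpoint = 1.0
next_tick = time.monotonic()
for _ in range(100):              # run 100 sampling instants
    y = read_adc()                # A/D: sample the analog feedback signal
    u = control_law(setpoint, y)  # compute the digital control signal
    write_dac(u)                  # D/A: apply analog control to the actuator
    next_tick += T                # synchronize to the real-time clock
    time.sleep(max(0.0, next_tick - time.monotonic()))
```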
The word ‘servomechanism’ (or servo system) is used for a command-following system, wherein the
controlled output of the system is required to follow a given command. When the desired value of the
controlled outputs is more or less fixed, and the main problem is to reject disturbance effects, the control
system is sometimes called a regulator. The command input for a regulator becomes a constant and
is called set-point, which corresponds to the desired value of the controlled output. The set-point may
however be changed in time, from one constant value to another. In a tracking system, the controlled
output is required to follow, or track, a time-varying command input.
To make these definitions more concrete, let us consider some familiar examples of control systems.
The radar scene includes the radar itself, a target, and the transmitted waveform that travels to the target
and back. Information about the target’s spatial position is first obtained by measuring the changes in the
back-scattered waveform relative to the transmitted waveform. The time shift provides information about
the target’s range, the frequency shift provides information about the target’s radial velocity, and the
received voltage magnitude and phase provide information about the target's angle¹ [1].
In a typical radar application, it is necessary to point the radar antenna towards the target and follow its
movements. The radar sensor detects the error between the antenna axis and the target, and directs the
antenna to follow the target. The servomechanism for steering the antenna in response to commands from
the radar sensor is considered here. The antenna is designed for two independent angular motions; one
about the vertical axis, in which the azimuth angle is varied, and the other about the horizontal axis, in
which the elevation angle is varied (Fig. 1.2).
Fig. 1.2
The servomechanism for steering the antenna is described by two controlled variables—azimuth angle β
and elevation angle α. The desired values or commands are the azimuth angle βr and the elevation angle
αr of the target. The feedback control problem involves error self-nulling, under conditions of disturbances
beyond our control (such as wind power).
The control system for steering the antenna can be treated as two independent systems—the azimuth-angle
servomechanism, and the elevation-angle servomechanism. This is because the interaction effects are
usually small. The operational diagram of the azimuth-angle servomechanism is shown in Fig. 1.3.
¹ The bracketed numbers coincide with the list of references given at the end of the book.
The steering command from the radar sensor, which corresponds to the target azimuth angle, is compared
with the azimuth angle of the antenna axis. The occurrence of an azimuth-angle error causes an error
signal to pass through the amplifier, which increases the angular velocity of the servo motor in a direction
that reduces the error. In the scheme of Fig. 1.3, the measurement and processing of signals
(calculation of control signal) is digital in nature. The shaft-angle encoder combines the functions of
transducer and A/D converter.
Figure 1.4 gives the functional block diagrams of the control system. A simple model of the load
(antenna) on the motor is shown in Fig. 1.4b. The moment of inertia J and the viscous friction coefficient
B are the parameters of the assumed model. Nominal load is included in the plant model for the control
design. The main disturbance inputs are the deviations of the load from the nominal estimated value as a
result of uncertainties in our estimate, effect of wind power, etc.
In the tracking system of Fig. 1.4a, the occurrence of error causes the motor to rotate in a direction
that reduces the error. The processing of the error signal (calculation of the control signal) is
based on proportional control logic. Note that the components of our system cannot respond
instantaneously, since any real-world system cannot go from one energy level to another in zero time. Thus,
in any real-world system, there is some kind of dynamic lagging behavior between input and output.
In the servo system of Fig. 1.4a, the control action, on occurrence of the deviation of the controlled
output from the desired value (the occurrence of error), will be delayed by the cumulative dynamic
lags of the shaft-angle encoder, digital computer and digital-to-analog converter, power amplifier, and
the servo motor with load. Eventually, however, the trend of the controlled variable deviation from
the desired value, will be reversed by the action of the amplifier output on the rotation of the motor,
returning the controlled variable towards the desired value. Now, if a strong correction (high amplifier
gain) is applied (which is desirable from the point of view of control system performance, e.g., strong
correction improves the speed of response), the controlled variable overshoots the desired value (the
‘run-out’ of the motor towards an error with the opposite rotation), causing a reversal in the algebraic
sign of the system error. Unfortunately, because of system dynamic lags, a reversal of correction does
not occur immediately, and the amplifier output (acting on ‘old’ information) is now actually driving
the controlled variable in the direction it was already heading, instead of opposing its excursions, thus
leading to a larger deviation. Eventually, the reversed error does cause a reversed correction, but the
controlled variable overshoots the desired value in the opposite direction and the correction is again in
the wrong direction. The controlled variable is thus driven alternately in opposite directions before
it settles to an equilibrium condition. This oscillatory state is unacceptable as the behavior of the antenna-
steering servomechanism. The considerable amplifier gain, which is necessary if high accuracies are to
be obtained, aggravates this unfavorable phenomenon.
The occurrence of these oscillatory effects can be controlled by the application of special compensation
feedback. When a signal proportional to the motor's angular velocity (called the rate signal) is subtracted
from the error signal (Fig. 1.4c), the braking process starts before the error reaches zero.
The ‘loop within a loop’ (velocity feedback system embedded within a position feedback system)
configuration utilized in this application is a classical scheme called the minor-loop feedback scheme.
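The behavior just described can be reproduced numerically. The following sketch uses an assumed motor-load model J·θ'' + B·θ' = K·u and assumed gains (none of these numbers come from the text); it compares the step-response overshoot with proportional control alone against proportional control with the rate signal subtracted from the error:

```python
J, B, K = 0.01, 0.02, 1.0        # inertia, viscous friction, actuator gain
Kp, Kv = 2.0, 0.1                # amplifier gain and rate-signal gain
dt, t_end = 0.001, 3.0           # Euler integration step and horizon

def peak_output(rate_feedback):
    theta = omega = peak = t = 0.0
    while t < t_end:
        e = 1.0 - theta                                  # position error, unit step command
        # with rate feedback, the velocity signal is subtracted from the error
        u = Kp * (e - Kv * omega) if rate_feedback else Kp * e
        omega += dt * (K * u - B * omega) / J            # motor-load dynamics
        theta += dt * omega
        peak = max(peak, theta)
        t += dt
    return peak

print("overshoot without rate feedback:", peak_output(False) - 1.0)
print("overshoot with rate feedback   :", peak_output(True) - 1.0)
```

With these assumed numbers, the proportional-only loop overshoots heavily, while the added rate signal starts the braking earlier and nearly eliminates the overshoot.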
Fig. 1.5
state error between the actual speed and the desired speed exists. The occurrence of steady-state error
can be eliminated by generating the control signal with two components: one component proportional to
the error signal, and the other proportional to the integral of the error signal.
In the liquid-level control system of Fig. 1.6, the command signal (which corresponds to the desired
level of the liquid in the cylinder) is fed through the keyboard; the actual level signal is received through
the A/D conversion card. The digital computer compares the two signals at each sampling instant, and
generates a control signal which is the sum of two components: one proportional to the error signal, and
the other, proportional to the integral of the error signal.
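A minimal discrete-time PI controller of the kind described, sketched with assumed gains and sampling period, is shown below; at each sampling instant the control signal is the sum of a term proportional to the error and a term proportional to the running integral of the error:

```python
class PIController:
    def __init__(self, Kp, Ki, T):
        self.Kp, self.Ki, self.T = Kp, Ki, T   # gains and sampling period (assumed values)
        self.integral = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement             # error at this sampling instant
        self.integral += e * self.T            # rectangular-rule integration of error
        return self.Kp * e + self.Ki * self.integral

pi = PIController(Kp=1.5, Ki=0.8, T=0.1)
print(pi.update(setpoint=2.0, measurement=1.2))   # one control-signal sample
```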
1.2
Digital computers were first applied to industrial process control in the late 1950s. The machines
were generally large-scale 'mainframes' and were used in a so-called supervisory control mode; the
individual feedback loops (temperature, pressure, flow, and the like) were locally controlled by electronic
or pneumatic analog controllers. The main function of the computer was to gather information on how
the overall process was operating, feed this into a technical-economic model of the process (programmed
into computer memory), and then, periodically, send signals to the set-points of all the analog controllers,
so that each individual loop operated in such a way as to optimize the overall operation.
In 1962, Imperial Chemical Industries in England made a drastic departure from this approach—a digital
computer was installed, which measured 224 variables and manipulated 129 valves directly. The name
Direct Digital Control (DDC) was coined to emphasize that the computer controlled the process directly.
In DDC systems, analog controllers were no longer used. The central computer served as a single, time-
shared controller for all the individual feedback loops. Conventional control laws were still used for each
loop, but the digital versions of control laws for each loop resided in the software in the central computer.
Though digital computers were very expensive, one expected DDC systems to have economic advantage
for processes with many (50 or more) loops. Unfortunately, this did not often materialize. Since failures
in the central computer of a DDC system shut down the entire system, it was necessary to provide a
‘fail-safe’ backup system, which usually turned out to be a complete system of individual loop analog
controllers, thus negating the expected hardware savings.
There was a substantial development of digital computer technology in the 1960s. By the early 1970s,
smaller, faster, more reliable, and cheaper computers became available. The term minicomputers was
coined for the new computers that emerged. DEC PDP11 is by far, the best-known example. There were,
however, many related machines from other vendors.
The minicomputer was still a fairly large system. Even as performance continued to increase and prices to
decrease, the price of a minicomputer mainframe in 1975 was still about $10,000. Computer control was still
out of reach for a large number of control problems. However, with the development of the microcomputer, the
price of a card computer, with the performance of a 1975 minicomputer, dropped to $500 in 1980. Another
consequence was that digital computing power in 1980 came in quanta as small as $50. This meant that
computer control could now be considered as an alternative, no matter how small the application [54–57].
Microcomputers have already made a great impact on the process control field. They are replacing analog
hardware even as single-loop controllers. Small DDC systems have been made using microcomputers.
Operator communication has vastly improved with the introduction of color video-graphics displays.
The variety of commercially available industrial controllers ranges from single-loop controllers through
multiloop single-computer systems to multiloop distributed computers. Although the range of equipment
available is large, a number of identifiable trends are apparent.
Single-loop microprocessor-based controllers, though descendants of single-loop analog controllers, have
a greater degree of flexibility. Permitted control actions include on/off control, proportional
action, integral action, derivative action, and the lag effect. Many controllers have a self-tuning option.
During the self-tune sequence, the controller introduces a number of step commands, within the tolerances
allowed by the operator, in order to characterize the system response. From this response, values for
proportional gain, reset time, and rate time are developed. This feature of online tuning in industrial
controllers permits the computer to adjust automatically to changing process conditions [11–12].
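The following sketch illustrates the flavor of such a self-tune sequence under strong simplifying assumptions: a recorded step response is fitted to a first-order model, and controller settings are then computed from a Ziegler–Nichols-style reaction-curve rule. Commercial controllers use their own proprietary variants; nothing here reproduces any particular product's algorithm:

```python
import math

def fit_first_order(t, y, du):
    """Fit gain K and time constant tau of K*(1 - exp(-t/tau)) to a step test."""
    K = y[-1] / du                    # steady-state change divided by step size
    y63 = 0.632 * y[-1]               # first-order response reaches 63.2% at t = tau
    tau = next(ti for ti, yi in zip(t, y) if yi >= y63)
    return K, tau

# synthetic 'recorded' response of a first-order process (K = 2, tau = 5 s)
t = [0.1 * i for i in range(600)]
y = [2.0 * (1.0 - math.exp(-ti / 5.0)) for ti in t]

K, tau = fit_first_order(t, y, du=1.0)
L = 1.0                               # assumed apparent dead time of the process
Kp = 0.9 * tau / (K * L)              # reaction-curve PI rule: proportional gain
Ti = 3.3 * L                          # reset time from the same rule
print(f"K = {K:.2f}, tau = {tau:.2f}, Kp = {Kp:.2f}, Ti = {Ti:.2f}")
```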
Multiloop single-computer systems vary in the available interfaces and software design. Both
single-loop and multiloop controllers may be used in stand-alone mode, or may be interfaced to a host
computer for distributed operation. The falling cost and increasing power of computing systems have
tended to make distributed computing systems for larger installations far more cost-effective than those
built around one large computer. However, the smaller installation may be best catered for by a single
multiloop controller, or even a few single-loop devices.
Control of large and complex processes using Distributed Computer Control Systems (DCCS) is
facilitated by adopting a multilevel or hierarchical viewpoint of control strategy. The multilevel approach
subdivides the system into a hierarchy of simpler control design problems. On the lowest level of control
(direct process control level), the following tasks are handled: acquisition of process data, i.e., collection
of instantaneous values of individual process variables, and status messages of plant control facilities
(valves, pumps, motors, etc.) needed for efficient direct digital control; processing of collected data;
plant hardware monitoring, system check and diagnosis; closed-loop control and logic control functions,
based on directives from the next ‘higher’ level.
The supervisory level copes with the problems of determining optimal plant work conditions, and
generating relevant instructions to be transferred to the next 'lower' level. Adaptive control, optimal
control, plant performance monitoring, plant coordination, and failure detection are the functions
performed at this level.
The production scheduling and control level is responsible for production dispatching, inventory control,
production supervision, production rescheduling, production reporting, etc.
The plant(s) management level, the 'highest' hierarchical level of the plant automation system, is in charge
of a wide spectrum of engineering, economic, commercial, personnel, and other functions.
It is, of course, not to be expected that all four hierarchical levels are already implemented in all available
distributed computer control systems. For automation of small-scale plants, any DCCS having
at least two hierarchical levels, can be used. One system level can be used as a direct process control
level, and the second one as a combined plant supervisory, and production scheduling and control level.
Production planning and other enterprise-level activities can be managed by a separate mainframe
computer or the computer center. For instance, in a LAN (Local Area Network)-based system structure,
shown in Fig. 1.7a, the ‘higher’ automation levels are implemented by simply attaching the additional
‘higher’ level computers to the LAN of the system [89].
For complex process plant monitoring, SCADA (Supervisory Control And Data Acquisition) systems are
available. The basic functions carried out by a SCADA system are as follows:
Data acquisition and communication
Events and alarms reporting
Data processing
Partial process control
Full process control functions are delegated to special control units, connected to the SCADA
system, which are capable of handling emergency shutdown situations.
The separation of SCADA and DCCS is slowly vanishing and SCADA systems are being
brought within the field of DCCS; the hierarchical, distributed, flexible and extremely powerful Computer
Integrated Process System (CIPS) is now a technical reality.
The other main and early application area of digital methods was machine tool numerical control, which
developed at about the same time as computer control in process industries. Earlier, numerically controlled
(NC) machines used ‘hard-wired’ digital techniques. As the price and performance of microcomputers
improved, it became feasible to replace the hard-wired functions with their software-implemented
equivalents, using a microcomputer as a built-in component of the machine tool. This approach has been
called Computerized Numerical Control (CNC) [20]. Industrial robots were developed simultaneously
with CNC systems.
A quiet revolution is ongoing in the manufacturing world, which is changing the look of factories.
Computers are controlling and monitoring the manufacturing processes [21–22]. The high degree of
automation that, until recently, was reserved for mass production only, is also applied now to small
batches. This requires a change from hard automation in the production line, to a Flexible Manufacturing
System (FMS), which can be more readily rearranged to handle new market requirements.
Flexible manufacturing systems, combined with automatic assembly and product inspection on one hand,
and CAD/CAM systems on the other, are the basic components of the modern Computer Integrated
Manufacturing System (CIMS). In a CIMS, the production flow, from the conceptual design to the
finished product, is entirely under computer control and management.
Figure 1.7b illustrates the hierarchical structure of CIMS. The lowest level of this structure contains
stand-alone computer control systems of manufacturing processes and industrial robots. The computer
control of processes includes all types of CNC machine tools, welding, electrochemical machining,
electrical discharge machining, and high-power lasers, as well as the adaptive control of these processes.
When a battery of NC or CNC machine tools is placed under the control of a single computer, the result
is a system known as Direct Numerical Control (DNC).
The operation of several CNC machines and industrial robots, can be coordinated by systems called
manufacturing cells. The computer of the cell is interfaced with the computer of the robot and CNC
machines. It receives ‘completion of job’ signals from the machines and issues instructions to the robot
to load and unload the machines, and change their tools. The software includes strategies permitting the
handling of machine breakdown, tool breakage, and other special situations.
The operation of many manufacturing cells can be coordinated by a Flexible Manufacturing System (FMS).
The FMS accepts incoming workpieces and processes them under computer control, into finished parts.
The parts produced by the FMS must be assembled into the final product. They are routed on a transfer
system to assembly stations. In each station, a robot will assemble parts, either into a sub-assembly or
(for simple units) into the final product. The sub-assemblies will be further assembled by robots located
in other stations. The final product will be tested by an automatic inspection system.
The FMS uses CAD/CAM systems to integrate the design and manufacturing of parts. At the highest
hierarchical level, there will be a supervisory computer, which coordinates participation of computers
in all phases of a manufacturing enterprise: the design of the product, the planning of its manufacture,
the automatic production of parts, automatic assembly, automatic testing, and, of course, computer-
controlled flow of materials and parts through the plant.
In a LAN-based system, the ‘higher’ automation levels (production planning and other enterprise-level
activities), can be implemented by simply attaching the additional ‘higher’ level computers to the LAN
of the system.
One of the most ingenious devices ever devised to advance the field of industrial automation is the
Programmable Logic Controller (PLC). The PLC, a microprocessor-based general-purpose device,
provides a 'menu' of basic operations that can be configured by programming to create a logic control
system for any application [23–25]. So versatile are these devices that they are employed in the
automation of almost every type of industry. CIPS and CIMS provide interfaces to PLCs for handling
high-speed logic (and other) control functions. Thousands of these devices go unrecognized in process
plants and factory environments—quietly monitoring security, manipulating valves, and controlling
machines and automatic production lines.
Thus, we see that the recent appearance of powerful and inexpensive microcomputers has made digital
control practical for a wide variety of applications. In fact, now every process is a candidate for digital
control. The flourishing of digital control is just beginning for most industries, and there is much to be
gained by exploiting the full potential of new technology. There is every indication that a high rate of
growth in the capability and application of digital computers, will continue far into the future.
1.3
The development of control system analysis and design can be divided into three eras. In the first era,
we have the classical control theory, which deals with techniques developed during the 1940s and 1950s.
Classical control methods—Routh–Hurwitz, Root Locus, Nyquist, Bode, Nichols—have in common
the use of transfer functions in the complex frequency (Laplace variable s) domain, and the emphasis
on the graphical techniques. Since computers were not available at that time, a great deal of emphasis
was placed on developing methods that were amenable to manual computation and graphics. A major
limitation of the classical control methods was the use of Single-Input, Single-Output (SISO) control
configurations. Also, the use of the transfer function and frequency domain limited one to linear time-
invariant systems. Important results of this era have been discussed in Part I of this book.
In the second era, we have modern control (which is not so modern any longer), which refers to state-
space-based methods developed in the late 1950s and early 1960s. In modern control, system models are
directly written in the time domain. Analysis and design are also carried out in the time domain. It should
be noted that before Laplace transforms and transfer functions became popular in the 1920s, engineers
were studying systems in the time domain. Therefore, the resurgence of time-domain analysis was not
unusual, but it was triggered by the development of computers and advances in numerical analysis. As
computers were available, it was no longer necessary to develop analysis and design methods that were
strictly manual. Multivariable (Multi-Input, Multi-Output (MIMO)) control configurations could be
analyzed and designed. An engineer could use computers to numerically solve or simulate large systems
that were nonlinear and/or time-varying. Important results of this era—Lyapunov stability criterion,
pole-placement by state feedback, state observers, optimal control—are discussed in Part II of this book.
Modern control methods initially enjoyed a great deal of success in academic circles, but they did not perform
very well in many areas of application. Modern control provided a lot of insight into system structure and
properties, but it masked other important feedback properties that could be studied and manipulated using
the classical control theory. A basic requirement in control engineering is to design control systems that
will work properly when the plant model is uncertain. This issue is tackled in the classical control theory
using gain and phase margins. Most modern control design methods, however, inherently require a precise
model of the plant. In the years since these methods were developed, there have been few significant
implementations and most of them have been in a single application area—the aerospace industry. The
classical control theory, on the other hand, is going strong. It provides an efficient framework for the
design of feedback controls in all areas of application. The classical design methods have been greatly
enhanced by the availability of low-cost computers for system analysis and simulation. The graphical
tools of classical design can now be more easily used with computer graphics for SISO as well as MIMO
systems.
During the past three decades, control theory has experienced a rapid expansion, as a result of the
challenges of the stringent requirements posed by modern systems, such as flight vehicles, weapon control
systems, robots, and chemical processes; and the availability of low-cost computing power. A body of
methods emerged during this third era of control-theory development, which tried to provide answers to the
problems of plant uncertainty. These techniques, commonly known as robust control, are a combination
of modern state-space and classical frequency-domain techniques. For a thorough understanding of these
new methods, we need to have adequate knowledge of state-space methods, in addition to the frequency-
domain methods. This has guided the preparation of this text.
Robust control system design has been dominated by linear control techniques, which rely on the key
assumption of availability of the uncertainty model. When the required operation range is large, and
a reliable uncertainty model cannot be developed, a linear controller is likely to perform very poorly.
Nonlinear controllers, on the other hand, may handle the nonlinearities in large-range operations, directly.
Also, nonlinearities can be intentionally introduced into the controller part of a control system, so that the
model uncertainties can be tolerated. Advances in computer technology have made the implementation
of nonlinear control schemes—feedback linearization, variable structure sliding mode control, adaptive
control, gain scheduling—a relatively simpler task.
The third era of control-theory development has also given an alternative to model-based design methods:
the knowledge-based control method. In this approach, we look for a control solution that exhibits
intelligent behavior, rather than using purely mathematical methods to keep the system under control.
Model-based control techniques have many advantages. When the underlying assumptions are satisfied,
many of these methods provide good stability, robustness to model uncertainties and disturbances, and
speed of response. However, there are many practical deficiencies of these ‘crisp’ (‘hard’ or ‘inflexible’)
control algorithms. It is generally difficult to accurately represent a complex process by a mathematical
model. If the process model has parameters whose values are partially known, ambiguous or vague, then
crisp control algorithms, that are based on such incomplete information, will not usually give satisfactory
results. The environment with which the process interacts may not be completely predictable, and it is
normally not possible for a crisp algorithm to accurately respond to a condition that it did not anticipate,
and that it could not 'understand'.
Intelligent control is the name introduced to describe control systems in which control strategies are
based on AI (Artificial Intelligence) techniques. In this control approach, which is an alternative to
model-based control approach, a behavioral (and not mathematical) description of the process is used,
which is based on qualitative expressions and experience of people working with the process. Actions
can be performed either as a result of evaluating rules (reasoning), or as unconscious actions based on
presented process behavior after a learning phase. Intelligence becomes a measure of the capability to
reason about facts and rules, and to learn about presented behavior. It opens up the possibility of applying
the experience gathered by operators and process engineers. Uncertainty about facts and rules along with
ignorance about the structure of the system can then be handled easily.
Fuzzy logic and neural networks are very good methods for modeling real processes which cannot be
described mathematically. Fuzzy logic deals with linguistic and imprecise rules based on an expert's
knowledge. Neural networks are applied in cases where we do not have any rules, but do have plenty of data.
The main feature of fuzzy logic control is that a control engineering knowledge base (typically in terms of
a set of rules), created using an expert’s knowledge of process behavior, is available within the controller
and the control actions are generated by applying existing process conditions to the knowledge base,
making use of an inference mechanism. The knowledge base and the inference mechanism can handle
noncrisp and incomplete information, and the knowledge itself will improve and evolve through learning
and past experience.
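A toy illustration of such rule-based inference is sketched below; the membership functions and rules are invented for the example and do not come from the text:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_controller(error):
    # degrees to which the current process condition matches each rule antecedent
    neg  = tri(error, -10.0, -5.0, 0.0)   # "error is Negative"
    zero = tri(error, -5.0, 0.0, 5.0)     # "error is Zero"
    pos  = tri(error, 0.0, 5.0, 10.0)     # "error is Positive"
    # rule consequents: Close (-1), Hold (0), Open (+1); weighted-average
    # (centroid-style) defuzzification over the fired rules
    w = neg + zero + pos
    return 0.0 if w == 0.0 else (neg * -1.0 + zero * 0.0 + pos * 1.0) / w

print(fuzzy_controller(3.0))   # -> 0.6: open the valve partially
```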
In neural network based control, the goal of the artificial neural network is to emulate the mechanism of
human brain function and reasoning, and to achieve the same intelligence level as the human brain in
learning, abstraction, generalization, and making decisions under uncertainty.
In conventional design exercises, the system is modeled analytically by a set of differential equations, and
their solution tells the controller how to adjust the system’s control activities for each type of behavior. In
a typical intelligent control scheme, these adjustments are handled by an intelligent controller, a logical
model of thinking processes that a person might go through in the course of manipulating the system.
This shift in focus from the process to the person involved, changes the entire approach to automatic
control problems. It provides a new design paradigm such that a controller can be designed for complex,
ill-defined processes without knowing quantitative input-output relations, which are otherwise required
by conventional methods.
The ever-increasing demands of the complex control systems being built today, and planned for the
future, dictate the use of novel and more powerful methods in control. The potential for intelligent
control techniques in solving many of the problems involved is great, and this research area is evolving
rapidly. The emerging viewpoint is that model-based control techniques should be augmented with
intelligent control techniques in order to enhance the performance of control systems. The developments
in intelligent control methods should be based on firm theoretical foundations (as is the case with model-
based control methods), but such foundations are still at an early stage. Strong theoretical results
guaranteeing control system properties such as stability are still to come, although promising results
for special cases have been reported recently. The potential of intelligent control systems clearly needs to
be further explored and both theory and applications need to be further developed. A brief account of
nonlinear control schemes, both the conventional and the intelligent, is given in Part III of this book.
1.4
The tools of classical linear control system design are the Laplace transform, stability testing, root locus,
and frequency response. Laplace transformation is used to convert system descriptions in terms of integro-
differential equations to equivalent algebraic relations involving rational functions. These are conveniently
manipulated in the form of transfer functions with block diagrams and signal flow graphs [155].
The block diagram of Fig. 1.8 represents the basic structure of feedback control systems. Not all systems
can be forced into this format, but it serves as a reference for discussion.
In Fig. 1.8, the variable y(t) is the controlled variable of the system. The desired value of the controlled
variable is yr(t), the command input. yr(t) and y(t) have the same units. The feedback elements with
transfer function H(s) are system components that act on the controlled variable y(t) to produce the
feedback signal b(t). H(s) typically represents the sensor action to convert the controlled variable y(t) to
an electrical sensor output signal b(t).
The reference input elements with transfer function A(s) convert the command signal yr(t) into a form
compatible with the feedback signal b(t). The transformed command signal is the actual physical input
to the system. This actual signal input is defined as the reference input.
Fig. 1.8
The comparison device (error detector) of the system compares the reference input r(t) with the feedback
signal b(t) and generates the actuating error signal ê(t). The signals r(t), b(t), and ê(t) have the same
units. The controller with transfer function D(s) acts on the actuating error signal to produce the control
signal u(t).
The control signal u(t) has the knowledge about the desired control action. The power level of this signal
is relatively low. The actuator elements with transfer function GA(s), are the system components that act
on the control signal u(t) and develop enough torque, pressure, heat, etc. (manipulated variable m(t)), to
influence the controlled system. GP(s) is the transfer function of the controlled system.
The disturbance w(t) represents the undesired signals that tend to affect the controlled system. The
disturbance may be introduced into the system at more than one location.
The dashed-line portion of Fig. 1.8 shows the system error e(t) = yr(t) – y(t). Note that the actuating error signal ê(t) and the system error e(t) are two different variables.
The basic feedback system block diagram of Fig. 1.8 is shown in an abridged form in Fig. 1.9. The output
Y(s) is influenced by the control signal U(s) and the disturbance signal W(s) as per the following relation:
Y(s) = GP(s) GA(s) U(s) + GP(s) W(s) (1.1a)
= G(s) U(s) + N(s) W(s) (1.1b)
where G(s) is the transfer function from the control signal U(s) to the output Y(s), and N(s) is the transfer
function from the disturbance input W(s) to the output Y(s). Using Eqns (1.1), we can modify the block
diagram of Fig. 1.9 to the form shown in Fig. 1.10. Note that in the block diagram model of Fig. 1.10,
the plant includes the actuator elements.
Fig. 1.9 [Basic feedback system block diagram in abridged form]
Fig. 1.10 [Modified block diagram; the plant includes the actuator elements]
Fig. 1.11 [Block diagram representation of system equations]
A block diagram gives a pictorial representation of the various system equations, rather than writing them out explicitly. Block diagram manipulation is nothing more than the manipulation of a set of algebraic transform equations.
For the analysis of a feedback system, we require the transfer function between the input—either
reference or disturbance—and the output. We can use block diagram manipulations to eliminate all the
signals except the input and the output. The reduced block diagram leads to the desired result.
Consider the block diagram of Fig. 1.13. The feedback system has two inputs. We shall use superposition
to treat each input separately.
When disturbance input is set to zero, the single-input system of Fig. 1.14 results. The transfer function
between the input R(s) and the output Y(s) is referred to as the reference transfer function and will be
denoted by M(s). To solve for M(s), we write the pair of transform equations
Ê(s) = R(s) – H(s) Y(s); Y(s) = G(s) U(s) = G(s) D(s) Ê(s)
and then eliminate Ê(s) to obtain
[1 + D(s) G(s) H(s)] Y(s) = D(s) G(s) R(s)
which leads to the desired result
M(s) = Y(s)/R(s) |_{W(s)=0} = D(s)G(s) / [1 + D(s)G(s)H(s)]  (1.3)
Similarly, we obtain the disturbance transfer function Mw(s) by setting the reference input to zero in
Fig. 1.13 yielding Fig. 1.15, and then solving for Y(s)/W(s). From the revised block diagram,
Ê(s) = – H(s)Y(s); Y(s) = G(s)D(s) Ê(s) + N(s)W(s)
from which Ê(s) can be eliminated to give
Mw(s) = Y(s)/W(s) |_{R(s)=0} = N(s) / [1 + D(s)G(s)H(s)]  (1.4)
The transfer functions given by Eqns (1.3) and (1.4) are referred to as closed-loop transfer functions. The
denominator of these transfer functions has the term D(s)G(s)H(s) which is the multiplication of all the
transfer functions in the feedback loop. It may be viewed as the transfer function between the variables
R(s) and B(s) if the loop is broken at the summing point. D(s)G(s)H(s) may, therefore, be given the name
open-loop transfer function. The roots of the denominator polynomial of D(s)G(s)H(s) are the open-loop poles, and the roots of the numerator polynomial of D(s)G(s)H(s) are the open-loop zeros.
The roots of the characteristic equation
1 + D(s)G(s)H(s) = 0 (1.6)
are the closed-loop poles of the system. These poles indicate whether or not the system is Bounded-
Input Bounded-Output (BIBO) stable, according to whether or not all the poles are in the left half of the
complex plane. Stability may be tested by the Routh stability criterion.
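These closed-loop relations are easy to verify symbolically. Below is a minimal sketch with sympy; the particular D(s), G(s), H(s) and the disturbance path N(s) are illustrative assumptions, not taken from the text:

```python
# A minimal symbolic check of Eqns (1.3), (1.4) and (1.6); the particular
# D(s), G(s), H(s) below are illustrative assumptions, not from the text.
import sympy as sp

s = sp.symbols('s')
D = 2                            # assumed proportional compensator
G = 1 / (s * (s + 1))            # assumed plant (actuator included)
H = 1                            # unity-feedback sensor
N = G                            # assumed disturbance path

M  = sp.simplify(D * G / (1 + D * G * H))    # reference transfer function (1.3)
Mw = sp.simplify(N / (1 + D * G * H))        # disturbance transfer function (1.4)

# Closed-loop poles: roots of 1 + D(s)G(s)H(s) = 0, Eqn (1.6)
char_poly = sp.numer(sp.together(1 + D * G * H))
print(M, Mw, sp.solve(char_poly, s))         # poles at -1/2 +/- j*sqrt(7)/2
```

For this example the closed-loop poles lie in the left half-plane, so the loop is BIBO stable, consistent with a Routh test on the characteristic polynomial.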
A root locus plot consists of a pole-zero plot of the open-loop transfer function of a feedback system,
upon which is superimposed the locus of the poles of the closed-loop transfer function, as some parameter
is varied. Design of the controller (compensator) D(s) can be carried out using the root locus plot.
One begins with simple compensators, increasing their complexity until the performance requirements
can be met. Principal measures of transient performance are peak overshoot, settling time, and rise
time. The compensator poles, zeros, and multiplying constant are selected to give feedback system pole locations that result in acceptable transient response to step inputs. At the same time, the parameters are
constrained so that the resulting system has acceptable steady-state response to important inputs, such
as steps and ramps.
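The root locus idea can be sketched numerically by sweeping a gain and finding the roots of the characteristic polynomial. A minimal sketch, with an assumed third-order open-loop transfer function:

```python
# Sketch: a numerical root-locus computation -- closed-loop pole locations of
# 1 + K/[s(s+1)(s+2)] = 0 as the compensator gain K increases. The plant is
# an assumed example; the procedure is the general one described above.
import numpy as np

den = np.array([1.0, 3.0, 2.0, 0.0])         # s(s+1)(s+2) = s^3 + 3s^2 + 2s
for K in [0.5, 2.0, 6.0, 10.0]:
    poles = np.roots(den + np.array([0.0, 0.0, 0.0, K]))
    print(f"K = {K:4.1f}: poles = {np.round(poles, 3)}")
# The locus crosses the imaginary axis near K = 6 (Routh: stable for K < 6),
# so transient specifications must be met with gains well below this value.
```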
Frequency response characterizations of systems have long been popular because of the ease and
practicality of steady-state sinusoidal response measurements. These methods also apply to systems in
which rational transfer function models are not adequate, such as those involving time delays. They do
not require explicit knowledge of system transfer function models; experimentally obtained open-loop
sinusoidal response data can directly be used for stability analysis and compensator design. A stability
test, the Nyquist criterion, is available. Principal measures of transient performance are gain margin,
phase margin, and bandwidth. The design of the compensator is conveniently carried out using the Bode
plot and the Nichols chart. One begins with simple compensators, increasing their complexity until the
transient and steady-state performance requirements are met.
There are two approaches to carry out the digital controller (compensator) design. The first approach uses
the methods discussed above to design an analog compensator, and then transform it into a digital one. The
second approach first transforms analog plants into digital plants, and then carries out the design using digital techniques. The first approach performs discretization after design; the second approach performs
discretization before design. The classical approach to designing a digital compensator directly using an
equivalent digital plant for a given analog plant, parallels the classical approach to analog compensator
design. The concepts and tools of the classical digital design procedures are given in Chapters 2–4. This
background will also be useful in understanding and applying the state variable methods to follow.
Chapter 2
Signal Processing in Digital Control
2.1.1 Advantages of Digital Control
Flexibility An important advantage offered by digital control is the flexibility in modifying controller characteristics, or, in other words, the adaptability of the controller when plant dynamics change with operating conditions. The ability to 'redesign' the controller by changing software (rather than hardware) is an important feature of digital control as against analog control.
Implementation of advanced control techniques
was earlier constrained by the limitations of analog controllers and the high costs of digital computers.
However, with the advent of inexpensive digital computers with virtually limitless computing power, the
techniques of modern control theory may now be put to practice. For example, in multivariable control
systems with more than one input and one output, modern techniques for optimizing system performance
or reducing interactions between feedback loops can now be implemented.
Feedback control is only one of the functions of
a computer. In fact, most of the information transfer between the process and the computer exploits the
logical decision-making capability of the computer. Real-time applications of information processing
and decision-making, e.g., production planning, scheduling, optimization, operations control, etc., may
now be integrated with the traditional process control functions.
To enable the computer to meet a variety of demands imposed on it, its tasks are time-shared.
The study of emerging applications shows that Artificial
Intelligence (AI) will affect the design and application of control systems, as profoundly as the impact
of microprocessors in the last two decades. It is clear that future generation control systems will have
a significant AI component; the list of applications of computer-based control will continue to expand.
2.1.2 Implementation Problems in Digital Control
The main problems associated with the implementation of digital control are related to the effects of
sampling and quantization.
Most processes that we are called upon to control, operate in continuous-time. This implies, that we are
dealing largely with an analog environment. To this environment, we need to interface digital computers
through which we seek to influence the process.
The interface is accomplished by a system of the form shown in Fig. 2.1. It is a cascade of an analog-to-digital (A/D) conversion system, followed by a computer which is, in turn, followed by a digital-to-analog (D/A) conversion system. The A/D conversion process involves deriving samples of the analog signal at discrete instants of time, separated by the sampling period T sec. The D/A conversion process involves reconstructing continuous-time signals from the samples given by the digital computer.
Fig. 2.1 [A/D conversion system, computer, and D/A conversion system in cascade; signals are discrete-time at the computer ports and continuous-time at the process ends]
The conversion of signals from analog into digital form, and vice versa, is performed by electronic devices (A/D and D/A converters) of finite resolution. A device of n-bit resolution has 2^n quantization levels. Here, the analog signal gets tied to this finite number of quantization levels in the process of conversion to digital form. Therefore, by the sheer act of conversion, a valuable part of the information about the signal is lost.
Furthermore, any computer employed as a real-time controller must perform all the necessary calculations with limited precision, thus introducing a truncation error after each arithmetic operation has been
performed. As computational accuracy is normally much higher than the resolution of real converters,
a further truncation must take place before the computed data are converted into the analog form. The
repetitive process of approximate conversion–computation–conversion may be costly, if not disastrous,
in terms of control system performance.
The process of quantization in signal conversion systems is discussed ahead.
The selection of a sampling period is a fundamental problem in digital control systems. Later in this
chapter, we will discuss the sampling theorem which states that the sampling period T should be chosen
such that
T < π/ωm
where ωm is the strict bandwidth of the signal being sampled. This condition ensures that there is no loss
of information due to sampling and the continuous-time signal can be completely recovered from its
samples using an ideal low-pass filter.
There are, however, two problems associated with the use of this theorem in practical control systems:
(i) Real signals are not band-limited and hence strict bandwidth limits are not defined.
(ii) An ideal low-pass filter, needed for the distortionless reconstruction of continuous-time signals
from its samples, is not physically realizable. Practical devices, such as the D/A converter,
introduce distortions.
Thus, the process of sampling and reconstruction also affects the amount of information available
to the control computer, and degrades control system performance. For example, converting a given
continuous-time control system into a digital control system, without changing the system parameters,
degrades the system stability margin.
The ill-effects of sampling can be reduced, if not eliminated completely, by sampling at a very high rate. However, excessively fast sampling (T → 0) may result in numerical ill-conditioning in the
implementation of recursive control algorithms (described later in this chapter).
With the availability of low-cost, high-performance digital computers and interfacing hardware, the
implementation problems in digital control do not pose a serious threat to its usefulness. The advantages
of digital control outweigh its implementation problems for most of the applications.
This book attempts to provide a modest coverage of digital control theory and practice. In the
present chapter, we focus on digital computers and their interface with signal conversion systems
(Fig. 2.1). The goal is to formulate tools of analysis necessary to understand and guide the design of
programs for a computer acting as a control logic component. Needless to say, digital computers can do
many things other than control dynamic systems; our purpose is to examine their characteristics while
executing the elementary control task.
The digital computer processes the sequence of numbers by means of an algorithm and produces a new
sequence of numbers. Since data conversions and computations take time, there will always be a delay when a
control law is implemented using a digital computer. The delay, which is called computational delay, degrades
the control system performance. It should be minimized by the proper choice of hardware and by the proper
design of software for the control algorithm. Floating-point operations take a considerably longer time to
perform (even when carried out by an arithmetic co-processor) than the fixed-point ones. We, therefore,
try to execute fixed-point operations whenever possible. Alternative realization schemes for a control
algorithm are given in the next chapter.
The D/A conversion system in Fig. 2.2 converts the sequence of numbers in numerical code into a
piecewise continuous-time signal. The output of the D/A converter is fed to the plant through the actuator
(final control element) to control its dynamics.
The basic control scheme of Fig. 2.2 assumes a uniform sampling operation, i.e., only one sampling
rate exists in the system and the sampling period is constant. The real-time clock in the computer synchronizes all the events of A/D conversion–computation–D/A conversion.
The control scheme of Fig. 2.2 shows a single feedback loop. In a control system having multiple loops,
the largest time constant involved in one loop may be quite different from that in other loops. Hence, it
may be advisable to sample slowly in a loop involving a large time constant, while in a loop involving
only small time constants, the sampling rate must be fast. Thus, a digital control system may have
different sampling periods in different feedback paths, i.e., it may have multiple-rate sampling. Although
digital control systems with multirate sampling are important in practical situations, we shall concentrate
on single-rate sampling. (The reader interested in multirate digital control systems may refer to Kuo
[87]).
The overall system in Fig. 2.2 is hybrid in nature; the signals are in a sampled form (discrete-time
signals/digital signals) in the computer and in continuous-time form in the plant. Such systems have
traditionally been called sampled-data control systems. We will use this term as a synonym of computer
control systems/digital control systems.
In the present chapter, we focus on digital computers and their analog interfacing. For the time being, we
delink the digital computer from the plant. The link will be re-established in the next chapter.
2.3
Figure 2.3a shows an analog signal y(t)—it is defined at the continuum of times, and its amplitudes assume a
continuous range of values. Such a signal cannot be stored in digital computers. The signal, therefore, must
be converted to a form that will be accepted by digital computers. One very common method to do this is
to record sample values of this signal at equally spaced instants. For example, if we sample the signal every 10 msec, we would obtain the discrete-time signal sketched in Fig. 2.3b. The sampling interval of
10 msec corresponds to a sampling rate of 100 samples/sec. The choice of sampling rate is important,
since it determines how accurately the discrete-time signal can represent the original signal.
In a practical situation, the sampling rate is determined by the range of frequencies present in the original
signal. Detailed analysis of uniform sampling process, and the related problem of aliasing will appear
later in this chapter.
Notice that the time axis of the discrete-time signal in Fig. 2.3b, is labeled simply ‘sample number’ and
index k has been used to denote this number (k = 0, 1, 2, ...). Corresponding to different values of sample
number k, the discrete-time signal assumes the same continuous range of values assumed by the analog
signal y(t). We can represent the sample values by a sequence of numbers ys (refer to Fig. 2.3b):
ys = {1.7, 2.4, 2.8, 1.4, 0.4, ...}
In general,
ys = {y(k)}; 0 ≤ k < ∞
where y(k) denotes the kth number in the sequence.
The sequence defined above is a one-sided sequence; ys = 0 for k < 0. In digital control applications, we
normally encounter one-sided sequences.
Although, strictly speaking, y(k) denotes the kth number in the sequence, the notation given above is often
unnecessarily cumbersome, and it is convenient and unambiguous to refer to y(k) itself as a sequence.
Throughout our discussion on digital control, we will assume uniform sampling, i.e., sample values of the
analog signal are extracted at equally spaced sampling instants. If the physical time, corresponding to the
sampling interval is T seconds, then the kth sample y(k), gives the value of the discrete-time signal at
t = kT seconds. We may, therefore, use y(kT) to denote a sequence wherein the independent variable is
the physical time.
The signal of Fig. 2.3b is defined at discrete instants of time. The sample values are, however, tied to a continuous range of numbers. Such a signal, in principle, can be stored only in an infinite-bit machine, because a finite-bit machine can store only a finite set of numbers.
A simplified hypothetical two-bit machine can store four numbers, as given in the adjacent table.

Binary number    Decimal equivalent
00               0
01               1
10               2
11               3

The signal of Fig. 2.3b can be stored in such a machine if the sample values are quantized to four quantization levels. Figure 2.3c shows a quantized discrete-time signal for our hypothetical machine. We have assumed that any value in the interval [0.5, 1.5) is rounded to 1, and so forth. The signals for which both time and amplitude are discrete are called digital signals.
After sampling and quantization, the final step required in converting an analog signal to a form
acceptable to digital computers is coding (or encoding). The encoder maps each quantized sample value
into a digital word. Figure 2.3d gives the coded digital signal, corresponding to the analog signal of
Fig. 2.3a for our hypothetical two-bit machine.
The device that performs the sampling, quantization, and coding is an A/D converter. Figure 2.4 is a
block diagram representation of the operations performed by an A/D converter.
It may be noted that the quantized discrete-time signal of Fig. 2.3c and the coded signal of Fig. 2.3d
carry exactly the same information. For the purpose of analytical study of digital systems, we will use
the quantized discrete-time form for digital signals.
The number of binary digits carried by a device is its word length, and this is obviously an important
characteristic related to the resolution of the device—the smallest change in the input signal that will
produce a change in the output signal. The A/D converter that generates signals of Fig. 2.3 has two binary
digits and thus four quantization levels. Any change, therefore, in the input over the interval [0.5, 1.5)
produces no change in the output. With three binary digits, 2^3 = 8 quantization levels can be obtained, and the resolution of the converter could be improved.
Fig. 2.4 [Operations performed by an A/D converter: the continuous-time, continuous-amplitude signal is sampled into a discrete-time, continuous-amplitude signal, quantized into a discrete-time, discrete-amplitude signal, and encoded into digital words]
The A/D converters in common use have word lengths of 8 to 16 bits. For an A/D converter with a word length of 8 bits, an input signal can be resolved to one part in 2^8, or 1 in 256. If the input signal has a range of 10 V, the resolution is 10/256, or approximately 0.04 V. Thus, the input signal must change by at least 0.04 V in order to produce a change in the output.
With the availability of converters with resolution ranging from 8 to 16 bits, the quantization errors do
not pose a serious threat to computer control of industrial processes. In our treatment of the subject, we
assume quantization errors to be zero. This is equivalent to assuming infinite-bit digital devices. Thus we
treat digital signals as if they are discrete-time signals with amplitudes assuming a continuous range of
values. In other words, we make no distinction between the words ‘discrete-time’ and ‘digital.’
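The resolution arithmetic above is easy to check numerically. The following sketch (an illustration, not from the text; the function name and rounding convention are assumptions) quantizes a set of samples with an n-bit uniform quantizer over a 10 V range:

```python
# Sketch: n-bit uniform quantization over a 10 V range; names and the
# rounding convention are illustrative assumptions, not from the text.
import numpy as np

def quantize(x, n_bits, full_scale=10.0):
    """Round x to the nearest of 2**n_bits uniformly spaced levels."""
    q = full_scale / 2**n_bits         # resolution: 10/256 ~ 0.04 V for 8 bits
    return q * np.round(x / q)

x = np.array([1.23, 4.567, 9.01])
print(10.0 / 2**8)                     # 0.0390625 V, the 8-bit resolution
print(quantize(x, n_bits=8))           # quantization error at most q/2 per sample
```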
A typical topology of a single-loop digital control system is shown in Fig. 2.2. It has been assumed that
the measuring transducer and the actuator (final control element) are analog devices, requiring respectively
A/D and D/A conversion at the computer input and output. The D/A conversion is a process of producing
an analog signal from a digital signal and is, in some sense, the reverse of the sampling process discussed
above.
The D/A converter performs two functions: first, generation of output samples from the binary-form digital signals produced by the machine, and second, conversion of these samples to analog form. Figure 2.5 is a block diagram representation of the operations performed by a D/A converter.
Fig. 2.5 [Operations performed by a D/A converter: digital words are mapped by a decoder into a discrete-time signal, which a zero-order hold converts to an analog signal]
The decoder maps each digital word into a sample value of the signal in discrete-time form. It is
usually not possible to drive a load, such as a motor, with these samples. In order to deliver sufficient
energy, the sample amplitude might have to be so large that it may become infeasible to realistically
generate it. Also large-amplitude signals might saturate the system being driven.
The solution to this problem is to smooth the output samples to produce a signal in analog form. The
simplest way of converting a sample sequence into a continuous-time signal is to hold the value of the
sample until the next one arrives. The net effect is to convert a sample to a pulse of duration T—the sample
period. This function of a D/A converter is referred to as a Zero-Order Hold (ZOH) operation. The term
zero-order refers to the zero-order polynomial used to extrapolate between the sampling times (detailed
discussion will appear later in this chapter). Figure 2.6 shows a typical sample sequence produced by the
decoder, and the analog signal¹ resulting from the zero-order hold operation.
Fig. 2.6 (a) A sample sequence y(k) produced by the decoder; (b) the analog signal yh(t) resulting from the zero-order hold operation
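The hold operation itself is equally simple to state in code. A minimal sketch, with an assumed sampling period and assumed sample values:

```python
# Sketch: zero-order-hold reconstruction of a sample sequence (cf. Fig. 2.6);
# the sampling period and sample values are assumed for illustration.
import numpy as np

T = 0.5                                     # sampling period, s
y = np.array([1.0, 3.0, 2.0, 2.5, 1.5])     # decoder output samples y(k)

def zoh(samples, T, t):
    """yh(t) = y(k) for kT <= t < (k+1)T: hold each sample for one period."""
    k = np.clip((t // T).astype(int), 0, len(samples) - 1)
    return samples[k]

t = np.linspace(0.0, 2.4, 9)
print(zoh(y, T, t))                         # piecewise-constant output
```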
2.3.1 D/A Conversion
Most D/A converters use the principle shown in the three-bit form in Fig. 2.7 to convert the HI/LO digital
signals at the computer output to a single analog voltage. The circuit of Fig. 2.7 is an ‘R–2R’ ladder; the
value of R typically ranges from 2.5 to 10 kilohms.
Suppose a binary number b2b1b0 is given. The switch (actually, electronic gate) positions in Fig. 2.7 correspond to the digital word 100, i.e., b2 = 1 and b1 = b0 = 0. The circuit can be simplified to the
equivalent form shown in Fig. 2.8a. The currents in the resistor branches are easily calculated and are
indicated in the circuit (for the high gain amplifier, the voltage at point A is practically zero [155]). The
output voltage is
V0 = 3R (i2 / 2) = (1/2) Vref
If b1 = 1 and b2 = b0 = 0, then the equivalent circuit is as shown in Fig. 2.8b. The output voltage is
V0 = 3R (i1 / 4) = (1/4) Vref
Similarly, if b0 = 1 and b2 = b1 = 0, then the equivalent circuit is as shown in Fig. 2.8c. The output
voltage is
V0 = 3R (i0 / 8) = (1/8) Vref
In this way, we find that when the input data is b2b1b0 (where each bi is either 0 or 1), the output voltage is
V0 = (b2 2^{–1} + b1 2^{–2} + b0 2^{–3}) VFS  (2.1)
where VFS = Vref = full-scale output voltage.
¹ In the literature, including this book, the terms 'continuous-time signal' and 'analog signal' are frequently interchanged.
The circuit and the defining equation for an n-bit D/A converter easily follow from Fig. 2.7 and Eqn. (2.1),
respectively.
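Eqn (2.1) extends naturally to n bits, with V0 = VFS(b_{n–1} 2^{–1} + ··· + b_0 2^{–n}). A minimal sketch of this ideal behavior; the function name and the MSB-first bit ordering are assumptions for illustration:

```python
# Sketch: Eqn (2.1) extended to n bits, V0 = VFS * sum_i b_i 2^-(i+1) with
# bits[0] the MSB; the function name and bit ordering are assumptions.
def dac_output(bits, v_fs=10.0):
    """Ideal n-bit D/A converter output for a bit list given MSB first."""
    return v_fs * sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))

print(dac_output([1, 0, 0]))   # 100 -> VFS/2    = 5.0
print(dac_output([0, 1, 0]))   # 010 -> VFS/4    = 2.5
print(dac_output([1, 1, 1]))   # 111 -> (7/8)VFS = 8.75
```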
2.3.2 A/D Conversion
Most A/D converters use the principle of successive approximation. Figure 2.9 shows the organization
of an A/D converter that uses this method. Its principal components are a D/A converter, a comparator, a
Successive Approximation Register (SAR), a clock, and control and status logic.
On receiving the Start-Of-Conversion (SOC) command, the SAR is cleared to 0s and its most significant bit is set to 1. This results in a V0 value that is one half of the full scale (refer to Eqn. (2.1)). The output of
the comparator is then tested to see whether VIN is greater than or less than V0. If VIN is greater, the most
significant bit is left on; otherwise it is turned off (complemented).
In the next step, the next most significant bit of the SAR is turned on. At this stage, V0 will become either
three quarters or one quarter of the full scale, depending on whether VIN was, respectively, greater than
or less than V0 in the first step. Again, the comparator is tested and if VIN is greater than the new V0, the
next most significant bit is left on. Otherwise it is turned off.
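The successive-approximation loop described above translates almost line for line into code. A sketch, assuming an ideal internal D/A converter (names are illustrative):

```python
# Sketch of the successive-approximation loop described above, assuming an
# ideal internal D/A converter; names are illustrative.
def sar_adc(v_in, n_bits=8, v_fs=10.0):
    """Return the n-bit code for 0 <= v_in < v_fs."""
    code = 0
    for i in range(n_bits - 1, -1, -1):   # most significant bit first
        trial = code | (1 << i)           # tentatively turn the next bit on
        v0 = v_fs * trial / 2**n_bits     # internal DAC output, Eqn (2.1)
        if v_in >= v0:                    # comparator test: keep the bit,
            code = trial                  # otherwise leave it turned off
    return code

print(sar_adc(6.04))                      # 154
print(sar_adc(6.04) * 10.0 / 256)         # quantized value: 6.015625 V
```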
There are a number of basic discrete-time signals which play an important role in the analysis of signals and systems. These signals are direct counterparts of the basic continuous-time signals.² As we shall see, many characteristics of basic discrete-time signals are directly analogous to the properties of basic continuous-time signals. There are, however, several important differences in discrete time, and we will point these out as we examine the properties of these signals.
² Chapter 2 of the companion book [155].
The unit-sample sequence contains only one nonzero element and is defined by (Fig. 2.11a)
δ(k) = 1 for k = 0; 0 otherwise  (2.2a)
The delayed unit-sample sequence, denoted by δ(k – n), has its nonzero element at sample time n (Fig. 2.11b):
δ(k – n) = 1 for k = n; 0 otherwise  (2.2b)
One of the important aspects of the unit-sample sequence is that an arbitrary sequence can be represented as a sum of scaled, delayed unit samples. For example, the sequence r(k) in Fig. 2.11c can be expressed as
r(k) = r(0)δ(k) + r(1)δ(k – 1) + r(2)δ(k – 2) + ···
     = Σ_{n=0}^{∞} r(n)δ(k – n)  (2.3)
r(0), r(1), …, are the sample values of the sequence r(k). This representation of a discrete-time signal is found useful in the analysis of linear systems through the principle of superposition.
As we will see, the unit-sample sequence plays the same role for discrete-time signals and systems,
that the unit-impulse function does for continuous-time signals and systems. For this reason, the unit-
sample sequence is often referred to as the discrete-time impulse. It is important to note that a discrete-
time impulse does not suffer from the same mathematical complexity as a continuous-time impulse. Its
definition is simple and precise.
The quantity Ω is called the frequency of the discrete-time sinusoid, and φ is called the phase. Since k is a dimensionless integer, the dimension of Ω must be radians (we may specify the units of Ω to be radians/sample, and the units of k to be samples).
The fact that k is always an integer in Eqn. (2.6) leads to some differences between the properties of discrete-time and continuous-time sinusoidal signals. An important difference lies in the range of values the frequency variable can take on. We know that for the continuous-time signal r(t) = A cos ωt = real{Ae^{jωt}}, ω can take on values in the range (–∞, ∞). In contrast, for the discrete-time sinusoid r(k) = A cos Ωk = real{Ae^{jΩk}}, Ω can take on values in the range [–π, π].
To illustrate this property of discrete-time sinusoids, consider Ω = π + x, where x is a small number compared with π. Since
e^{jΩk} = e^{j(π + x)k} = e^{j(2π – π + x)k} = e^{j(–π + x)k}
a frequency of (π + x) results in a sinusoid of frequency (–π + x). Suppose now that Ω is increased to 2π. Since e^{j2πk} = e^{j0}, the observed frequency is 0. Thus, the observed frequency is always between –π and π, and is obtained by adding (or subtracting) multiples of 2π to Ω until a number in that range is obtained. The highest frequency that can be represented by a digital signal is, therefore, π radians/sample interval. The implications of this property for sequences obtained by sampling sinusoids and other signals are discussed in Section 2.11.
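The folding property is easy to confirm numerically: two sampled sinusoids whose frequencies differ by 2π radians/sample are sample-for-sample identical. A minimal check (the frequency value is illustrative):

```python
# Sketch: frequencies differing by 2*pi rad/sample give identical sequences,
# the folding property just described. Values are illustrative.
import numpy as np

k = np.arange(8)
Omega = 0.4 * np.pi                          # rad/sample
r1 = np.cos(Omega * k)
r2 = np.cos((Omega + 2 * np.pi) * k)         # folds back to frequency Omega
print(np.allclose(r1, r2))                   # True
```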
A specified initial condition is stored, before the commencement of the algorithm, in the appropriate register (of the digital computer) containing x2(·). This can be diagrammatically represented by adding a signal x2(0)δ(k) to the output of the delayer, where δ(k) is the unit-sample sequence defined by Eqn. (2.2a).
The signal processing function performed by the computer program (2.7) can be represented by the block diagram shown in Fig. 2.13. Various blocks in this figure represent the basic computing operations of a digital computer. The unit delayer is the only dynamic element involved. The signal processing configuration of Fig. 2.13 thus represents a first-order discrete-time system.
Fig. 2.13 [Simulation diagram: r(k) and 0.1·x(k) are summed to give x(k + 1), which a unit delayer converts to x(k)]
The output x(k) of the dynamic element gives the state of the system at any k. If the signal r(k) is switched on to the system at k = 0 (r(k) = 0 for k < 0), the sample value x(0) of the output sequence x(k) represents the initial state of the system. Since the initial state in the computer program (2.7) is zero, a signal of the form x(0)δ(k) does not appear in Fig. 2.13.
The defining equation for the computer program (2.7), obtained by forming an equation of the summing
junction in Fig. 2.13, is
x(k + 1) = 0.1 x(k) + r(k); x(0) = 0 (2.8)
The solution of this first-order linear difference equation for given input r(k) applied at k = 0, and given
initial state x(0), yields the state x(k); k > 0. Equation (2.8) is thus the state equation of the discrete-time
system of Fig. 2.13. Conversely, Fig. 2.13 is the simulation diagram for the mathematical model (2.8).
To solve an equation of the form (2.8) is an elementary matter. If k is incremented to take on values
k = 0, 1, 2, ..., etc., the state x(k); k = 1, 2, ..., can easily be generated by an iterative procedure. The
iterative method, however, generates only a sequence of numbers and not a closed-form solution.
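For instance, the following sketch iterates Eqn. (2.8) for an assumed unit-step input:

```python
# Sketch: generating x(k) of Eqn (2.8) iteratively for an assumed unit-step
# input r(k) = 1, k >= 0.
x = 0.0                            # x(0) = 0
for k in range(5):
    x = 0.1 * x + 1.0              # x(k+1) = 0.1 x(k) + r(k)
    print(k + 1, x)                # 1.0, 1.1, 1.11, ... -> 1/(1 - 0.1)
```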
Example 2.1
In order to introduce discrete-time systems, we study the signal processing algorithm given by the
difference equation:
x(k + 1) = – a x(k) + r(k); x(0) = 0 (2.9)
where a is a real constant.
We shall obtain a closed-form solution of this equation by using a so-called brute force method
(z-transform method of solving linear difference equations is given in Section 2.7). When solved
repetitively, Eqn. (2.9) yields
x(0) = 0; x(1) = r(0)
x(2) = –a r(0) + r(1); x(3) = (–a)² r(0) – a r(1) + r(2)
and, in general,
x(k) = Σ_{i=0}^{k–1} (–a)^{k–1–i} r(i)  (2.10)
For the unit-sample input r(k) = δ(k), the response g(k) is
g(k) = 0 for k = 0; (–a)^{k–1} for k ≥ 1  (2.11)
The question of whether or not the solution decays is more closely related to the magnitude of a than to its sign. In particular, for |a| > 1, g(k) grows with increasing k, while it decays when |a| < 1. The nature of time functions of the form (2.11) for different values of a is examined in Section 2.9.
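The brute-force iteration can be checked against the closed-form impulse response. A sketch, with an illustrative value of a:

```python
# Sketch: brute-force iteration of Eqn (2.9) against the closed-form impulse
# response (2.11), for an illustrative value of a.
a = 0.5
x, iterated = 0.0, []
for k in range(6):
    iterated.append(x)                     # x(k) equals g(k) for the impulse input
    x = -a * x + (1.0 if k == 0 else 0.0)  # Eqn (2.9) with r(k) = delta(k)

closed_form = [0.0] + [(-a) ** (k - 1) for k in range(1, 6)]  # Eqn (2.11)
print(iterated == closed_form)             # True
```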
[Figure: simulation diagram of a second-order discrete-time system with states x1(k), x2(k), feedback gains a1, a2, output gains c1, c2, and initial-condition inputs x01δ(k), x02δ(k) at the delayer outputs]
For a multi-input, multi-output system, the variables are represented by the input vector r(k), the output vector y(k), and the state vector x(k), where
r(k) ≜ [r1(k) r2(k) ··· rp(k)]ᵀ; y(k) ≜ [y1(k) y2(k) ··· yq(k)]ᵀ; x(k) ≜ [x1(k) x2(k) ··· xn(k)]ᵀ
with n state variables, p inputs, and q outputs.
Assuming that the input is switched on to the system at k = 0 (r(k) = 0 for k < 0), the initial state is given by
x(0) ≜ x0, a specified n × 1 vector
The dimension of the state vector defines the order of the system. The dynamics of an nth-order linear
time-invariant system are described by equations of the form
x1(k + 1) = f11 x1(k) + f12 x2(k) + ··· + f1n xn(k) + g11 r1(k) + g12 r2(k) + ··· + g1p rp(k)
x2(k + 1) = f21 x1(k) + f22 x2(k) + ··· + f2n xn(k) + g21 r1(k) + g22 r2(k) + ··· + g2p rp(k)
   ⋮
xn(k + 1) = fn1 x1(k) + fn2 x2(k) + ··· + fnn xn(k) + gn1 r1(k) + gn2 r2(k) + ··· + gnp rp(k)  (2.13)
where the coefficients fij and gij are constants.
In the vector-matrix form, Eqns (2.13) may be written as
x(k + 1) = Fx(k) + Gr(k); x(0) ≜ x0  (2.14)
where F = [fij] and G = [gij] are, respectively, n × n and n × p constant matrices. Equation (2.14) is called the state equation of the system.
The output variables at t = kT are linear combinations of the values of the state variables and input
variables at that time, i.e.,
y(k) = Cx(k) + Dr(k) (2.15)
where C = [cij] and D = [dij] are, respectively, q × n and q × p constant matrices. Equation (2.15) is called the output equation of the system.
The state equation (2.14) and the output equation (2.15) together give the state variable model of the MIMO system⁴:
x(k + 1) = Fx(k) + Gr(k); x(0) ≜ x0  (2.16a)
y(k) = Cx(k) + Dr(k)  (2.16b)
For a single-input (p = 1), single-output (q = 1) system, the state variable model takes the form
x(k + 1) = Fx(k) + g r(k); x(0) ≜ x0  (2.17a)
y(k) = cx(k) + d r(k)  (2.17b)
⁴ We have used lowercase bold letters to represent vectors and uppercase bold letters to represent matrices.
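The state variable model (2.16) is a one-line recursion in matrix form. The sketch below simulates an assumed second-order example; the matrices and input are illustrative, not taken from the text:

```python
# Sketch: simulating the state variable model (2.16); the matrices below are
# an assumed second-order example, not taken from the text.
import numpy as np

F = np.array([[0.5, 0.1],
              [0.0, 0.8]])
G = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))                 # x(0) = 0
r = np.array([[1.0]])                # unit-step input
for k in range(5):
    y = C @ x + D @ r                # output equation (2.16b)
    print(k, y[0, 0])
    x = F @ x + G @ r                # state equation (2.16a)
```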
Example 2.2
The discrete-time system of Fig. 2.16 has one dynamic element (unit delayer); it is, therefore, a first-
order system. The state of the system at any k is described by x(k)—the output of the dynamic element.
The equation of the input summing junction is
x(k + 1) = 0.95 x(k) + r(k); x(0) = 0 (2.18a)
This is the state equation of the first-order system.
The output y(k) is given by the following output equation:
y(k) = 0.0475 x(k) + 0.05 r(k) (2.18b)
Equations (2.18a) and (2.18b) together constitute the state variable model of the first-order system.
Let us study the response of the system of Fig. 2.16 to the unit-step sequence
μ(k) = 1 for k ≥ 0; 0 for k < 0  (2.19a)
and the unit-alternating sequence
r(k) = (–1)^k for k ≥ 0; 0 for k < 0  (2.19b)
We will first solve Eqn. (2.18a) for x(k) and then use Eqn. (2.18b) to obtain y(k).
The solution of Eqn. (2.18a) directly follows from Eqn. (2.10):
x(k) = (0.95)^{k–1} r(0) + (0.95)^{k–2} r(1) + ··· + r(k – 1) = Σ_{i=0}^{k–1} (0.95)^{k–1–i} r(i)  (2.20)
Fig. 2.16 [First-order discrete-time system: x(k + 1) = 0.95 x(k) + r(k); y(k) = 0.0475 x(k) + 0.05 r(k)]
For the unit-step input μ(k), Eqn. (2.20) gives⁵
x(k) = Σ_{i=0}^{k–1} (0.95)^{k–1–i} = [1 – (0.95)^k]/0.05
The output
y1(k) = (0.0475/0.05)[1 – (0.95)^k] + 0.05 = 1 – (0.95)^{k+1}; k ≥ 0  (2.21)
Consider now the system excited by the unit-alternating input given by Eqn. (2.19b). It follows from Eqn. (2.20) that, for this input, the state
x(k) = Σ_{i=0}^{k–1} (0.95)^{k–1–i} (–1)^i = (1/1.95)[(0.95)^k – (–1)^k]
The output
y2(k) = 0.0475 x(k) + 0.05 (–1)^k = (0.05/1.95)[(–1)^k + (0.95)^{k+1}]; k ≥ 0  (2.22)
From Eqns (2.21) and (2.22), we observe that the steady-state values of y1(k) and y2(k) are
y1(k) → 1 for large k; y2(k) → (1/39)(–1)^k for large k
Thus, the discrete-time system of Fig. 2.16 readily transmits a unit step and rejects a unit-alternating
input (reduces its magnitude by a factor of 39). Since the unit-alternating signal is a rapidly fluctuating
sequence of numbers, while the unit step can be viewed as a slowly fluctuating signal, the discrete-time
system of Fig. 2.16 represents a low-pass digital filter. In Example 2.11, we will study the frequency-
domain characteristics of this filter.
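These steady-state values are easy to confirm by iterating the state model (2.18) directly; a minimal sketch:

```python
# Sketch: iterating the state model (2.18) to confirm the steady-state values
# quoted above for the step and the unit-alternating inputs.
def response(r, n=300):
    x = 0.0
    for k in range(n):
        y = 0.0475 * x + 0.05 * r(k)      # output equation (2.18b)
        x = 0.95 * x + r(k)               # state equation (2.18a)
    return y

print(round(response(lambda k: 1.0), 4))               # ~ 1.0
print(round(abs(response(lambda k: (-1.0) ** k)), 4))  # ~ 1/39 ~ 0.0256
```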
Consider the single-input, single-output (SISO) system represented by the state model (2.17). The system has two types of inputs: the external input r(k), and the initial state x(0) representing the initial storage in the appropriate registers (of the digital computer) containing xi(·).
If the dynamic evolution of the state x(k) is not required, i.e., we are interested only in the input-output
relation for k ≥ 0, a linear time-invariant discrete-time system composed of n dynamic elements can be
analyzed using a single nth-order difference equation as its model. A general form of the nth-order linear difference equation relating the output y(k) to the input r(k) is given below:
y(k) + a1 y(k – 1) + ··· + an y(k – n) = b0 r(k) + b1 r(k – 1) + ··· + bm r(k – m)
The coefficients ai and bj are real constants; m and n are integers with m ≤ n.
⁵ Σ_{j=0}^{k} a^j = (1 – a^{k+1})/(1 – a); a ≠ 1
We will consider the general linear difference equation in the following form:
y(k) + a1 y(k – 1) + ··· + an y(k – n) = b0 r(k) + b1 r(k – 1) + ··· + bn r(k – n)  (2.23)
There is no loss of generality in this assumption; the results for m = n can be used for the case of m < n
by setting appropriate bj coefficients to zero.
If the input is assumed to be switched on at k = 0 (r(k) = 0 for k < 0), then the difference equation model
(2.23) gives the output at instant ‘0’ in terms of the past values of the output; y(– 1), y(– 2), ..., y(– n),
and the present input r(0). Thus the initial conditions of the model (2.23) are {y(– 1), y(– 2), ..., y(– n)}.
Since the difference equation model (2.23) represents a time-invariant system, the choice of the initial
point on the time scale is simply a matter of convenience in analysis. Shifting the origin from k = 0 to
k = n, we get the equivalent difference equation model:
y(k + n) + a1 y(k + n – 1) + ··· + an y(k) = b0 r(k + n) + b1 r(k + n – 1) + ··· + bn r(k)  (2.24)
Substituting k = 0 in Eqn. (2.24), we observe that the output at instant ‘n’ is expressed in terms of n values
of the past outputs: y(0), y(1), ..., y(n – 1), and in terms of the inputs: r(0), r(1), ..., r(n). If k is incremented to take on values k = 0, 1, 2, ..., etc., the outputs y(k); k = n, n + 1, ..., can easily be generated by this iterative procedure. Given {y(– 1), y(– 2), ..., y(– n)}, the initial conditions {y(0), y(1), ..., y(n – 1)} of the model (2.24) can be determined by successively substituting k = – n, – n + 1, ..., – 2, – 1 in Eqn. (2.24).
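The iterative procedure for a second-order instance of Eqn. (2.24) can be sketched as follows; the coefficients and input are illustrative, not from the text:

```python
# Sketch: iterative solution of a second-order case of Eqn (2.24); the
# coefficients and input are illustrative.
a1, a2 = -1.0, 0.24
b0, b1, b2 = 0.0, 0.0, 0.04
r = lambda k: 1.0 if k >= 0 else 0.0     # unit-step input

y = [0.0, 0.0]                           # initial conditions y(0), y(1)
for k in range(6):
    y.append(-a1 * y[k + 1] - a2 * y[k]
             + b0 * r(k + 2) + b1 * r(k + 1) + b2 * r(k))
print(y)                                 # settles toward 0.04/0.24 = 1/6
```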
In this book, we have not accommodated the classical methods of solution of linear difference equations
of the form (2.23) for given initial conditions and/or external inputs. Our approach is to transform the
model (2.23) to other forms which are more convenient for analysis and design of digital control systems.
Our emphasis is on the state variable models and transfer functions.
In Chapter 6, we will present methods of conversion of difference equation models of the form (2.23),
to state variable models. We will use state variable models to obtain the system response to given initial
conditions and external inputs, to construct digital computer simulation diagrams, and to design digital
control algorithms using modern methods of design.
Later in this chapter, the z-transform technique for transforming the difference equation model (2.23) to transfer function form is presented. We will use transfer function models to study the input-output behavior of discrete-time systems, and to design digital control algorithms using classical methods of design.
Consider the SISO system represented by the state model (2.17) or the difference equation model (2.23).
The system has two types of inputs: the external input r(k); k ≥ 0, and initial state x(0).
A system is said to be relaxed at k = 0 if the initial state x(0) = 0. In terms of the representation (2.23), a
system is relaxed if y(k) = 0 for k < 0.
We have earlier seen in Eqn. (2.3) that an arbitrary sequence r(k) can be represented as a sum of scaled,
delayed impulse sequences. It follows from this result, that a linear time-invariant, initially relaxed
system can be completely characterized by its impulse response. This can be easily established.
Let g(k) be the response of an initially relaxed, linear time-invariant discrete-time system to an impulse
d(k). Due to time-invariance property, the response to d (k – n) will be g(k – n). By linearity property, the
response to an input signal r(k) given by Eqn. (2.3) will be
y(k) = r(0) g(k) + r(1) g(k – 1) + r(2) g(k – 2) + ···
     = Σ_{j=0}^{k} r(j) g(k – j)  (2.26)
Another important observation concerns the symmetry of the situation. If we let k – j = m in Eqn. (2.26), we get
y(k) = Σ_{m=0}^{k} r(k – m) g(m)
The symmetry shows that we may reverse the roles of r(·) and g(·) in the convolution formula.
We may remind the reader here, that whenever impulse response models are used to describe a system,
the system is always implicitly assumed to be linear, time-invariant, and initially relaxed.
We now transform r(k) and g(k) using the mapping
f(k) → F(z)
where
F(z) = Σ_{k=0}^{∞} f(k) z^{–k}  (2.28)
Applying this mapping to the convolution sum, and noting that g(k – j) = 0 for k < j, we can start the second summation at k = j. Then, defining the index m = k – j, we can write
Y(z) = [Σ_{j=0}^{∞} r(j) z^{–j}][Σ_{m=0}^{∞} g(m) z^{–m}] = R(z)G(z)
We see that by applying the mapping (2.28), a convolution sum is transformed into an algebraic equation.
The mapping (2.28) is, in fact, the definition of z-transform.
The use of z-transform technique for the analysis of discrete-time systems runs parallel to that of Laplace
transform technique for continuous-time systems. The brief introduction to the theory of z-transform
given in this chapter provides the working tools adequate for the purposes of this text.
THE z-TRANSFORM
There are basically two ways to approach the z-transform. One way is to think in terms of systems that are
intrinsically discrete. Signal processing by a digital computer, as we have seen in the previous section,
is an example of such systems. In fact, intrinsically discrete-time systems arise in a number of ways. A
model of the growth of cancer is discrete, because the cancer cells divide at discrete points in time. A
macroeconomic model is usually discrete, because most economic data is usually reported monthly and
quarterly. Representing the discrete instants of time by the integer variable k (k = 0, 1, 2, ...), we denote
the output of a SISO system by the sequence y(k); k ≥ 0, and the input by the sequence r(k); k ≥ 0.
The alternative approach to the z-transform is in terms of sampled-data systems. This is the approach
we will adopt because it best fits the problem we intend to solve, namely, the control of continuous-time
systems by a digital signal processor (refer to Fig. 2.2). Sampling a continuous-time signal defines the
discrete instants of time. Interestingly, we will see that the z-transform (2.28) defined for analyzing
systems that are intrinsically discrete, is equally useful for sampled-data systems.
Consider an analog signal xa(t); t ≥ 0. By substituting t = kT; k = 0, 1, 2, ..., a sequence xa(kT ) is said to
be derived from xa(t) by periodic sampling, and T is called the fixed sampling period. The reciprocal of
T is called the sampling frequency or sampling rate. In a typical digital control scheme (refer to Fig. 2.2),
the operation of deriving a sequence from a continuous-time signal is performed by an analog-to-digital
(A/D) converter. A simple ideal sampler representation of the sampling operation is shown in Fig. 2.17.
x*(t) = Σ_{k=0}^{∞} x(k) δ(t – kT)  (2.30)
Typical signals xa(t), x(k) and x*(t) are shown in Fig. 2.18. The sampler of Fig. 2.17 can thus be viewed as an 'impulse modulator' with the carrier signal
δT(t) = Σ_{k=0}^{∞} δ(t – kT)  (2.31)
and modulating signal xa(t). The modulation process is schematically represented in Fig. 2.19a, and the impulse train δT(t) in Fig. 2.19b:
x*(t) = xa(t) δT(t)  (2.32a)
Fig. 2.18 [Typical signals: the analog signal xa(t), the sequence x(k), and the impulse-modulated signal x*(t)]
We will eliminate the impulse function by simply taking the Laplace transform of x*(t) (refer to Chapter 2 of the companion book [155] for definitions and properties of the impulse function, and the Laplace transform):
X*(s) = Σ_{k=0}^{∞} ∫_0^∞ xa(t) δ(t – kT) e^{–st} dt = Σ_{k=0}^{∞} xa(kT) e^{–kTs}  (2.32b)
This expression for X *(s) represents a Laplace transform, but it is not a transform that is easy to use
because the complex variable s occurs in the exponent of the transcendental function e. By contrast,
the Laplace transforms that we have used previously in the companion book [155], have mostly been
ratios of polynomials in the Laplace variable s, with real coefficients. These latter transforms are easy to
manipulate and interpret.
Ultimately, we will be able to achieve these same ratios of polynomials in a new complex variable z by
transforming X *(s) to reach what we will call the z-plane.
We remind the reader here that X *(s) is the notation used for Laplace transform of impulse modulated
signal x*(t); the ‘star’ distinguishes it from X(s)—the conventional Laplace transform of the unsampled
continuous function xa(t). We have used the same complex plane (s-plane) for Laplace transform
of ‘starred’ functions and conventional functions. This is the most compact approach used almost
universally.
The expression X*(s), given by (2.32b), contains the term e^{–Ts}; T is the fixed sampling period. To transform the irrational function X*(s) into a rational function, we use a transformation from the complex variable s to another complex variable, say, z. An obvious choice for this transformation is
z = e^{Ts}  (2.33a)
although z = e^{–Ts} would be just as acceptable.
Solving for s in Eqn. (2.33a), we obtain
s = (1/T) ln z  (2.33b)
The relationship between s and z in Eqns (2.33) may be defined as the z-transformation. In these two equations, z is a complex variable; its relation to the real and imaginary parts of the complex variable s is given by (with s = σ + jω)
z = e^{T(σ + jω)} = e^{Tσ} e^{jωT} = r e^{jΩ}; r = e^{Tσ}, Ω = ωT  (2.34)
For a fixed value of r, the locus of z is a circle in the complex z-plane. The circle of unit radius will be of specific interest to us; it is called the unit circle (Fig. 2.20).
Fig. 2.20 [The unit circle in the complex z-plane]
When Eqns (2.33) are substituted in Eqn. (2.32b), we have
X*(s = (1/T) ln z) = X(z) = Σ_{k=0}^{∞} xa(kT) z^{–k}  (2.35)
Thus, the z-transformation given by Eqns (2.33) is the same as that defined earlier in Eqn. (2.28) for intrinsically discrete-time systems.
Since T remains fixed, there is no loss of information if the variable x(k) is used to represent xa(kT). The expression
X(z) = Σ_{k=0}^{∞} x(k) z^{–k}
is often used as the definition of the z-transform of the sequence x(k) (an intrinsically discrete-time sequence, or one derived from a continuous-time signal xa(t); x(k) ≜ xa(kT)), denoted symbolically as Z[x(k)]. In our applications, X(z) is typically a rational function of z; in terms of its poles aj, it takes the factored form
X(z) = N(z) / ∏_{j=1}^{n} (z – aj)  (2.38)
where N(z) is the numerator polynomial.
In this subsection, our goal is to find the z-transform of the functions we will subsequently need for
analysis of control systems.
The unit-sample sequence contains only one nonzero element and is defined by
δ(k) = 1 for k = 0; 0 otherwise
The z-transform of this elementary signal is
Z[δ(k)] = Σ_{k=0}^{∞} δ(k) z^{–k} = z^0 = 1; |z| > 0  (2.39)
For the unit-step sequence μ(k),
Z[μ(k)] = Σ_{k=0}^{∞} μ(k) z^{–k} = Σ_{k=0}^{∞} z^{–k}
Using the geometric-series formula
Σ_{k=0}^{∞} x^k = 1/(1 – x); |x| < 1,
this becomes
Z[μ(k)] = 1/(1 – z^{–1}) = z/(z – 1)  (2.40)
Note that this equation holds only if the infinite sum converges, that is, if |z–1| < 1 or |z| > 1. Thus, the
region of convergence is the area outside the unit circle in the z-plane.
Differentiating the defining series X(z) = Σ_{k=0}^{∞} x(k) z^{–k} term by term, we obtain, consequently,
–z dX(z)/dz = Σ_{k=0}^{∞} k x(k) z^{–k}
or
Z[kT x(k)] = –Tz dX(z)/dz  (2.41)
For x(k) = μ(k), we obtain
Z[kT μ(k)] = Y(z) = –Tz (d/dz)[z/(z – 1)] = –Tz [–z/(z – 1)² + 1/(z – 1)] = Tz/(z – 1)²; |z| > 1  (2.42)
For the exponential sequence x(k) = e^{–akT},
X(z) = Σ_{k=0}^{∞} e^{–akT} z^{–k} = Σ_{k=0}^{∞} (e^{aT} z)^{–k} = Σ_{k=0}^{∞} α^{–k}; α ≜ e^{aT} z
     = 1/(1 – α^{–1}) = e^{aT} z/(e^{aT} z – 1) = z/(z – e^{–aT})
This equation holds only if the infinite sum converges, that is, if |e–aT z–1 | < 1 or |z| > | e–aT |. The result
holds for both real and complex a.
For x(k) = A cos(ωkT + φ) = (A/2)e^{jφ} e^{jωkT} + (A/2)e^{–jφ} e^{–jωkT}, we have
X(z) = (A/2)e^{jφ} Σ_{k=0}^{∞} e^{jωkT} z^{–k} + (A/2)e^{–jφ} Σ_{k=0}^{∞} e^{–jωkT} z^{–k}
     = (A/2) z e^{jφ}/(z – e^{jωT}) + (A/2) z e^{–jφ}/(z – e^{–jωT})
     = Az[z cos φ – cos(ωT – φ)] / [z² – 2z cos ωT + 1]; |z| > 1
Given the z-transform of A cos(ωkT + φ), we can obtain the z-transform of Ae^{–akT} cos(ωkT + φ) as follows:
Z[e^{–akT} x(k)] = Σ_{k=0}^{∞} x(k)(z e^{aT})^{–k} = X(z e^{aT})  (2.43)
Noting that
Z[A cos(ωkT + φ)] = Az[z cos φ – cos(ωT – φ)] / [z² – 2z cos ωT + 1]
Example 2.3
Let us find the z-transform of
X(s) = 1/[s(s + 1)]
Whenever a function in s is given, one approach for finding the corresponding z-transform is to convert
X(s) into x(t) and then find the z-transform of x(kT ) = x(k); T is the sampling interval.
The inverse Laplace transform of X(s) is
x(t) = 1–e–t; t ≥ 0
Hence
x(kT) =D x(k) = 1 –e–kT; k ≥ 0
X(z) = Z[x(k)] = z/(z – 1) – z/(z – e^{–T}) = (1 – e^{–T})z / [(z – 1)(z – e^{–T})]; |z| > 1
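The result can be spot-checked numerically by truncating the defining series Σ x(kT) z^{–k} at an assumed test point outside the unit circle:

```python
# Sketch: spot-checking Example 2.3 by truncating the defining series
# sum_k x(kT) z^-k at an assumed test point with |z| > 1.
import numpy as np

T, z = 0.5, 2.0 + 0.5j
k = np.arange(2000)
series = np.sum((1 - np.exp(-k * T)) * z ** (-k))
closed = (1 - np.exp(-T)) * z / ((z - 1) * (z - np.exp(-T)))
print(np.allclose(series, closed))       # True
```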
We summarize the z-transforms we have derived up to this point, plus some additional transforms in
Table 2.1. The table lists commonly encountered functions x(t); t ≥ 0 and z-transforms of sampled
version of these functions, given by x(kT ). We also include in this table, the Laplace transforms X(s)
corresponding to the selected x(t). We have seen in Example 2.3, that whenever a function in s is given,
one approach for finding the corresponding z-transform is to convert X(s) into x(t), and then find its
z-transform. Another approach is to expand X(s) into partial functions and use z-transform table to find
the z-transforms of the expanded terms. Table 2.1 will be helpful for this second approach.
All the transforms listed in the table, can easily be derived from first principles. It may be noted that
in this transform table, regions of convergence have not been specified. In our applications of systems
analysis, which involve transformation from time domain to z-domain and inverse transformation, the
variable z acts as a dummy operator. If transform pairs for sequences of interest to us are available, we
are not concerned with the region of convergence.
z-transformation of difference equations written in terms of advanced versions of the input and output variables (refer to Eqn. (2.24)) requires the following results:
Z[y(k + 1)] = Σ_{k=0}^{∞} y(k + 1) z^{–k}
Table 2.1

X(s)                   x(t); t ≥ 0        x(kT); k ≥ 0        X(z)
–                      –                  δ(k)                1
1/s                    μ(t)               μ(k)                z/(z – 1)
1/(s + a)              e^{–at}            e^{–akT}            z/(z – e^{–aT})
1/s²                   t                  kT                  Tz/(z – 1)²
1/(s + a)²             t e^{–at}          kT e^{–akT}         T e^{–aT} z/(z – e^{–aT})²
a/[s(s + a)]           1 – e^{–at}        1 – e^{–akT}        (1 – e^{–aT})z/[(z – 1)(z – e^{–aT})]
ω/(s² + ω²)            sin ωt             sin ωkT             (sin ωT)z/[z² – (2 cos ωT)z + 1]
s/(s² + ω²)            cos ωt             cos ωkT             [z² – (cos ωT)z]/[z² – (2 cos ωT)z + 1]
ω/[(s + a)² + ω²]      e^{–at} sin ωt     e^{–akT} sin ωkT    (e^{–aT} sin ωT)z/[z² – (2e^{–aT} cos ωT)z + e^{–2aT}]
Letting k + 1 = m yields
Z[y(k + 1)] = z Σ_{m=1}^{∞} y(m) z^{–m} = z[Σ_{m=0}^{∞} y(m) z^{–m} – y(0)] = zY(z) – z y(0)  (2.45)
z-transformation of difference equations written in terms of delayed versions of input and output variables
(refer to Eqn. (2.23)) requires the following results:
Z[y(k – 1)] = Σ_{k=0}^{∞} y(k – 1) z^{–k} = z^{–1} Σ_{m=0}^{∞} y(m) z^{–m} = z^{–1}Y(z)  (2.47)
(since y(k) = 0 for k < 0)
Example 2.4
Let us find the z-transforms of unit-step functions that are delayed by one sampling period, and n
sampling periods, respectively.
Using the shifting theorem given by Eqn. (2.47), we have
Z[μ(k – 1)] = z^{–1} Z[μ(k)] = z^{–1} · 1/(1 – z^{–1}) = z^{–1}/(1 – z^{–1})
Also
Z[μ(k – n)] = z^{–n} Z[μ(k)] = z^{–n}/(1 – z^{–1})
Remember that multiplication of the z-transform X(z) by z has the effect of advancing the signal x(k)
by one sampling period, and that multiplication of the z-transform X(z) by z–1 has the effect of delaying
the signal x(k) by one sampling period. In control engineering and signal processing, X(z) is frequently
expressed as a ratio of polynomials in z–1 as follows (refer to Table 2.1):
Z[e^{–akT} cos ωkT] = [z² – (e^{–aT} cos ωT)z] / [z² – (2e^{–aT} cos ωT)z + e^{–2aT}]  (2.48a)
                    = [1 – (e^{–aT} cos ωT)z^{–1}] / [1 – (2e^{–aT} cos ωT)z^{–1} + e^{–2aT} z^{–2}]  (2.48b)
z–1 is interpreted as the unit-delay operator.
In finding the poles and zeros of X(z), it is convenient to express X(z) as a ratio of polynomials in z, as is done in Eqn. (2.48a). In this and the next chapter, X(z) will be expressed in terms of powers of z, as given by Eqn. (2.48a), or in terms of powers of z^{–1}, as given by Eqn. (2.48b), depending on the circumstances.
Example 2.5
Analogous to the operation of integration, we can define the summation operation
x(k) = Σ_{i=0}^{k} y(i)  (2.49)
In the course of deriving an expression for X(z) in terms of Y(z), we shall need the infinite series sum:
Σ_{k=0}^{∞} (az^{–1})^k = 1/(1 – az^{–1}) = z/(z – a)  (2.50)
X(z) = Σ_{k=0}^{∞} x(k) z^{–k} = x(0) + z^{–1} x(1) + ··· + z^{–k} x(k) + ···
THE INVERSE z-TRANSFORM
We will obtain the inverse z-transform in exactly the same way that we obtained the inverse Laplace
transform (Chapter 2 [155]), namely, by partial fraction expansion. The reason that the partial fraction
expansion method works is that we frequently encounter transforms that are rational functions, i.e.,
ratio of two polynomials in z with real coefficients (refer to Eqn. (2.37)). The fact that the coefficients
are real is crucial, because it guarantees that the roots of the numerator and denominator polynomials
will be either real, or complex-conjugate pairs. This, in turn, means that the individual terms in the
partial fraction expansion of the transform will be simple in form and we will be able to do the inverse
transformation by inspection.
The transform pairs encountered using the partial fraction expansion technique will usually be those
found in Table 2.1. Those not found in the table can easily be derived by using basic properties of
z-transformation. The partial fraction expansion method for z-transforms is very straightforward, and
similar, in most respects, to the partial fraction expansion in Laplace transforms. We first illustrate the
method with some examples from which we can then abstract some general guidelines.
Example 2.6
We observe that the transforms of the elementary functions (see Table 2.1) contain a factor of z in the
numerator, e.g.,
Z[μ(k)] = z/(z – 1)
where μ(k) is a unit-step sequence.
To ensure that the partial fraction expansion will yield terms corresponding to those tabulated, it is customary to first expand the function Y(z)/z, if Y(z) has one or more zeros at the origin, and then multiply the resulting expansion by z.
For instance, if Y(z) is given as
Y(z) = (2z² – 1.5z)/(z² – 1.5z + 0.5) = z(2z – 1.5)/[(z – 0.5)(z – 1)],
we are justified in writing
Y(z)/z = (2z – 1.5)/[(z – 0.5)(z – 1)] = A1/(z – 0.5) + A2/(z – 1)
Constants A1 and A2 can be evaluated by applying the conventional partial fraction expansion rules.
A1 = (z – 0.5)[Y(z)/z] |_{z=0.5} = 1; A2 = (z – 1)[Y(z)/z] |_{z=1} = 1
Therefore,
Y(z)/z = 1/(z – 0.5) + 1/(z – 1)
or
Y(z) = z/(z – 0.5) + z/(z – 1)
Using the transform pairs from Table 2.1,
y(k) = (0.5)^k + 1; k ≥ 0
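The inverse transform can be double-checked by long division of Y(z) into a power series in z^{–1}, whose coefficients are the samples y(k). A minimal sketch:

```python
# Sketch: recovering y(k) of Example 2.6 by long division of Y(z) into a
# power series in z^-1; the series coefficients are the samples y(k).
num = [2.0, -1.5, 0.0]          # 2z^2 - 1.5z
den = [1.0, -1.5, 0.5]          # z^2 - 1.5z + 0.5

rem, coeffs = num[:], []
for k in range(6):
    c = rem[0] / den[0]
    coeffs.append(c)
    rem = [r - c * d for r, d in zip(rem, den)][1:] + [0.0]

print(coeffs)                                   # 2.0, 1.5, 1.25, 1.125, ...
print([0.5 ** k + 1 for k in range(6)])         # matches y(k) = (0.5)^k + 1
```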
Example 2.7
When Y(z) does not have one or more zeros at the origin, we expand Y(z), instead of Y(z)/z, into partial fractions, and utilize the shifting theorem given by Eqn. (2.47) to obtain the inverse z-transform.
The final value theorem is concerned with the evaluation of y(k) as k → ∞, assuming, of course, that y(k) does approach a limit. Using partial fraction expansion for inverting z-transforms, it is a simple matter to show that y(k) approaches a limit as k → ∞ if all the poles of Y(z) lie inside the unit circle (|z| < 1) in the complex z-plane. The unit-circle boundary is, however, excluded, except for a single pole at z = 1. This is because purely sinusoidal signals, whose transforms have poles on the unit circle, do not settle to a constant value as k → ∞. Multiple poles at z = 1 are also excluded because, as we have already seen in the table of z-transforms, these correspond to unbounded signals like ramps. A more compact way of phrasing these conditions is to say that (z – 1)Y(z) must be analytic on the boundary of, and outside, the unit circle in the complex z-plane. The final value theorem states that when this condition on (z – 1)Y(z) is satisfied, then
lim_{k→∞} y(k) = lim_{z→1} (z – 1)Y(z)  (2.52)
The proof is as follows:
Z[y(k + 1) – y(k)] = lim_{m→∞} Σ_{k=0}^{m} [y(k + 1) – y(k)] z^{–k}
or
zY(z) – z y(0) – Y(z) = lim_{m→∞} Σ_{k=0}^{m} [y(k + 1) – y(k)] z^{–k}
Letting z → 1 on both sides, the sum on the right telescopes:
lim_{z→1} (z – 1)Y(z) – y(0) = lim_{m→∞} [y(m + 1) – y(0)]
so that
lim_{k→∞} y(k) = lim_{z→1} (z – 1)Y(z)
Example 2.8
Given
X(z) = z/(z – 1) – z/(z – e^{–aT}); a > 0
By applying the final value theorem to the given X(z), we obtain
lim_{k→∞} x(k) = lim_{z→1} [(z – 1)(z/(z – 1) – z/(z – e^{–aT}))] = lim_{z→1} z[1 – (z – 1)/(z – e^{–aT})] = 1
It is noted that the given X(z) is actually the z-transform of
x(k) = 1 – e^{–akT}
By substituting k = ∞ in this equation, we have
x(∞) = lim_{k→∞} (1 – e^{–akT}) = 1
The convolution sum describing a relaxed linear time-invariant discrete-time system, y(k) = Σ_{j=0}^{∞} g(j) r(k – j), transforms to
Y(z) = G(z)R(z)    (2.54)
where
R(z) ≜ Z[r(k)] and G(z) ≜ Z[g(k)]
We see that by applying the z-transform, a convolution sum is transformed into an algebraic equation.
The function G(z) is called the transfer function of the discrete-time system.
The transfer function of a linear time-invariant discrete-time system is, by definition, the z-transform of
the impulse response of the system.
An alternative definition of the transfer function follows from Eqn. (2.54):
G(z) = Z[y(k)]/Z[r(k)] |_{system initially relaxed} = Y(z)/R(z)    (2.55)
Thus, the transfer function of a linear time-invariant discrete-time system is the ratio of the z-transforms
of its output and input sequences, assuming that the system is initially relaxed.
Figure 2.21 gives the block diagram of a discrete-time system in the transform domain.
Let us use the definition given by Eqn. (2.55) to obtain the transfer function model of a discrete-time system, represented by a difference equation of the form (2.23), relating its output y(k) to the input r(k). We assume that the discrete-time system is initially relaxed:
y(k) = 0 for k < 0
and is excited by an input
r(k); k ≥ 0
Taking the z-transform of all the terms of Eqn. (2.23), under the assumption of zero initial conditions, we obtain
Y(z) + a1 z^{–1} Y(z) + ⋯ + an z^{–n} Y(z) = b0 R(z) + b1 z^{–1} R(z) + ⋯ + bn z^{–n} R(z)
where
Y(z) ≜ Z[y(k)] and R(z) ≜ Z[r(k)]
Solving for Y(z),
Y(z) = [(b0 + b1 z^{–1} + ⋯ + bn z^{–n})/(1 + a1 z^{–1} + ⋯ + an z^{–n})] R(z)
Therefore, the transfer function G(z) of the discrete-time system represented by difference equation (2.23) is
G(z) = Y(z)/R(z) = (b0 + b1 z^{–1} + ⋯ + bn z^{–n})/(1 + a1 z^{–1} + ⋯ + an z^{–n})    (2.56)
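The model (2.56) maps directly onto a recursive program. As a sketch (our illustration, assuming numpy and scipy; the coefficient values below are illustrative and happen to realize G(z) = 0.05z/(z – 0.95), which reappears later in this chapter), scipy's lfilter implements exactly this numerator/denominator form:

```python
# Simulating Y(z)/R(z) = (b0 + b1 z^-1)/(1 + a1 z^-1) with lfilter.
import numpy as np
from scipy.signal import lfilter

b = [0.05, 0.0]         # b0, b1
a = [1.0, -0.95]        # 1, a1
r = np.ones(8)          # unit-step input sequence r(k)
y = lfilter(b, a, r)    # output y(k), zero initial conditions
print(y)                # 0.05, 0.0975, 0.142625, ...
```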
The same result can be obtained by taking the z-transformation of the shifted difference equation (2.24).
We first consider the case with n = 2, and then present the general result.
y(k + 2) + a1y(k + 1) + a2 y(k) = b0r(k + 2) + b1r(k + 1) + b2r(k) (2.57)
The z-transform of Eqn. (2.57) gives (using the shifting theorem given by Eqns (2.45)):
[z2Y(z) – z2y(0) – zy(1)] + a1[zY(z) – zy(0)] + a2Y(z)
= b0[z2R(z) – z2r(0) – zr(1)] + b1[zR(z) – zr(0)] + b2R(z)
or
(z2 + a1z + a2) Y(z) = (b0z2 + b1z + b2) R(z) + z2 [y(0) – b0r(0)]
+ z[y(1) + a1y(0) – b0r(1) – b1r(0)] (2.58)
Since the system is initially at rest, and switched on at k = 0, we have y(k) = 0 for k < 0, and r(k) = 0 for
k < 0. To determine the initial conditions y(0) and y(1), we substitute k = – 2 and k = –1, respectively,
into Eqn. (2.57).
y(0) + a1 y(–1) + a2y(–2) = b0r(0) + b1r(–1) + b2r(–2)
which simplifies to
y(0) = b0r(0) (2.59a)
and
y(1) + a1y(0) + a2 y(–1) = b0r(1) + b1r(0) + b2r(–1)
or
y(1) = –a1y(0) + b0r(1) + b1r(0) (2.59b)
By substituting Eqns (2.59) into Eqn. (2.58), we get
(z2 + a1z + a2)Y(z) = (b0z2 + b1z + b2) R(z)
Therefore,
G(z) = Y(z)/R(z) = (b0 z² + b1 z + b2)/(z² + a1 z + a2) = (b0 + b1 z^{–1} + b2 z^{–2})/(1 + a1 z^{–1} + a2 z^{–2})
Therefore, we can express the general transfer function model (2.56) as
G(z) = Y(z)/R(z) = (b0 z^n + b1 z^{n–1} + ⋯ + bn)/(z^n + a1 z^{n–1} + ⋯ + an)    (2.60a)
We will represent the numerator polynomial of G(z) by N(z), and the denominator polynomial by D(z):
G(z) = N(z)/D(z)    (2.60b)
where
N(z) = b0 z^n + b1 z^{n–1} + ⋯ + bn;  D(z) = z^n + a1 z^{n–1} + ⋯ + an
The terminology used in connection with G(s)—the transfer function of continuous-time systems6—is
directly applicable in the case of G(z).
The highest power of the complex variable z in the denominator polynomial D(z) of the transfer function
G(z) determines the order of the transfer function model. The denominator polynomial D(z) is called the
characteristic polynomial.
6 Chapter 2 of reference [155].
We now give a simple example of transfer function description of discrete-time systems. Figure 2.12
describes the basic operations characterizing a computer program. The unit delayer shown in this figure
is a dynamic system with input x1(k) and output x2(k); x2(0) represents the initial storage in the shift
register.
We assume that the discrete-time system (unit delayer) is initially relaxed:
x2(0) = 0 (2.62a)
and is excited by an input sequence
x1(k); k ≥ 0 (2.62b)
The following state variable model gives the output of the unit delayer at k = 0, 1, 2, ...
x2(k + 1) = x1(k) (2.63)
The z-transformation of Eqn. (2.63) yields
X2(z) = z^{–1} X1(z)
where
X2(z) ≜ Z[x2(k)];  X1(z) ≜ Z[x1(k)]
Therefore, the transfer function of the unit delayer represented by Eqn. (2.63) is
X2(z)/X1(z) = z^{–1}    (2.64)
The discrete-time system of Fig. 2.16 may be equivalently represented by Fig. 2.22a, using the transfer function description of the unit delayer. Use of block-diagram analysis results in Fig. 2.22b, which gives
Y(z) = [0.0475/(z – 0.95) + 0.05] R(z)
Therefore, the transfer function G(z) of the discrete-time system of Fig. 2.22 is
G(z) = Y(z)/R(z) = 0.05z/(z – 0.95)    (2.65)
[Fig. 2.22 (a) Block diagram of the discrete-time system: input R(z), unit delay z^{–1}, and gains 0.05, 0.0475 and 0.95; (b) reduced block diagram]
A standard problem in control engineering is to find the response, y(k), of a system given the input, r(k),
and a model of the system. With z-transforms, we have a means for easily computing the response of
linear time-invariant systems to quite general inputs.
Given a general relaxed, linear discrete-time system and an input signal r(k), the procedure for determining
the output y(k) is given by the following steps:
Step 1 Determine the transfer function G(z) by taking z-transform of equations of motion.
Step 2 Determine the z-transform of the input signal; R(z) = Z [r(k)].
Step 3 Determine the z-transform of the output; Y(z) = G(z)R(z).
Step 4 Break up Y(z) by partial fraction expansion.
Step 5 Invert Y(z) to get y(k); find the components of y(k) in a table of transform pairs and combine the
components to get the total solution in the desired form.
Example 2.9
A discrete-time system is described by the transfer function
G(z) = Y(z)/R(z) = 1/(z² + a1 z + a2);  a1 = –3/4, a2 = 1/8
Find the response y(k) to the input (i) r(k) = δ(k), (ii) r(k) = μ(k).
For the step input r(k) = μ(k), partial fraction expansion of Y(z) = G(z)R(z) gives
y(k) = (16/3)(1/4)^k – 8(1/2)^k + (8/3)(1)^k;  k ≥ 0
where the first two terms constitute the transient response and the last term the steady-state response.
The transient response terms correspond to system poles excited by the input μ(k). These terms vanish as k → ∞.
The second response term arises due to the excitation pole, and has the same nature as the input itself, except for a modification in magnitude caused by the system's behavior to the specified input. Since the input exists as k → ∞, the second response term does not vanish and is called the steady-state response of the system.
The steady-state response can be quickly obtained, without doing the complete inverse transform operation, by use of the final value theorem (Eqn. (2.52)):
lim_{k→∞} y(k) = lim_{z→1} (z – 1)Y(z)
if (z – 1)Y(z) has no poles on the boundary and outside of the unit circle in the complex z-plane.
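As a numeric cross-check (our sketch, assuming scipy), the step response of the system of Example 2.9 indeed settles at the value the final value theorem predicts:

```python
# Step response of G(z) = 1/(z^2 - 0.75z + 0.125) and its final value.
import numpy as np
from scipy.signal import dstep

num = [1.0]
den = [1.0, -0.75, 0.125]
_, y = dstep((num, den, 1.0), n=40)
print(np.squeeze(y)[-1])            # ~2.6667
print(1.0 / (1 - 0.75 + 0.125))     # (z - 1)Y(z) at z = 1 equals G(1) = 8/3
```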
Example 2.10
Consider a discrete-time system described by the difference equation
y(k + 2) + (1/4) y(k + 1) – (1/8) y(k) = 3r(k + 1) – r(k)
The system is initially relaxed (y(k) = 0 for k < 0) and is excited by the input
r(k) = (–1)^k μ(k)
Obtain the transfer function model of the discrete-time system, and therefrom, find the output y(k); k ≥ 0.
Solution The given difference equation is first converted to the equivalent form:
y(k) + (1/4) y(k – 1) – (1/8) y(k – 2) = 3r(k – 1) – r(k – 2)
z-transformation of each term in this equation yields (using shifting theorem (2.47))
Y(z) + (1/4) z^{–1} Y(z) – (1/8) z^{–2} Y(z) = 3z^{–1} R(z) – z^{–2} R(z)
or
(1 + (1/4) z^{–1} – (1/8) z^{–2}) Y(z) = (3z^{–1} – z^{–2}) R(z)
or
(z² + (1/4) z – 1/8) Y(z) = (3z – 1) R(z)
Using the Laplace transform, we were able to show that if we applied the input r(t) = R0 cos ωt to a linear time-invariant system with transfer function G(s), the steady-state output yss was of the form (refer to [155])
yss = lim_{t→∞} y(t) = R0 |G(jω)| cos(ωt + φ)
where φ = ∠G(jω).
A similar result can be obtained for discrete-time systems. Let G(z) be the transfer function of a linear time-invariant discrete-time system, and let the input to this system be r(kT) = R0 cos(ωkT), with T the sampling period. Then
Z[r(kT)] = R0 z(z – cos ωT)/(z² – 2z cos ωT + 1) = R0 z(z – cos ωT)/[(z – e^{jωT})(z – e^{–jωT})]
Suppose
G(z) = K ∏_{i=1}^{m} (z – pi) / ∏_{j=1}^{n} (z – aj);  |aj| < 1    (2.66)
Partial fraction expansion of Y(z)/z = G(z)R(z)/z gives
Y(z)/z = A/(z – e^{jωT}) + A*/(z – e^{–jωT}) + Σ_{j=1}^{n} Bj/(z – aj)
in which the first two terms give the steady-state component and the sum gives the transient component. The coefficient A is
A = (z – e^{jωT}) [R0 (z – cos ωT) G(z) / ((z – e^{jωT})(z – e^{–jωT}))]|_{z = e^{jωT}}
If |aj| < 1 for j = 1, 2, …, n, then
lim_{k→∞} Σ_{j=1}^{n} Bj (aj)^k = 0
Thus, with A = |A|e^{jθ},
yss ≜ lim_{k→∞} y(kT) = Ae^{jωkT} + A*e^{–jωkT} = 2|A| [e^{j(ωkT+θ)} + e^{–j(ωkT+θ)}]/2
    = 2|A| cos(ωkT + θ)
    = R0 |G(e^{jωT})| cos(ωkT + θ)    (2.67)
We have obtained a result that is analogous to that for continuous-time systems. For a sinusoidal input, the steady-state output is also sinusoidal; scaled by the gain factor |G(e^{jωT})|, and shifted in phase by θ = ∠G(e^{jωT}). An important difference is that in the continuous-time case we have |G(jω)|, while in the discrete-time case we have |G(e^{jωT})|. The implications of this difference are discussed later in this chapter.
The steady-state response is also given by (2.67) when the poles of G(z) in Eqn.(2.66) are repeated with
|aj | < 1. This can easily be verified.
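Equation (2.67) is easy to exercise numerically: evaluate G(z) on the unit circle at z = e^{jωT} and read off the gain and phase. A minimal sketch (ours; T, ω and the system are assumed values, the system being that of Example 2.11 below):

```python
# Sinusoidal steady state via Eqn (2.67) for G(z) = 0.05z/(z - 0.95).
import numpy as np

T = 1.0                       # sampling period (assumed)
w = 0.2 * np.pi / T           # input frequency (assumed)
z = np.exp(1j * w * T)        # point on the unit circle
G = 0.05 * z / (z - 0.95)
gain, theta = np.abs(G), np.angle(G)
# steady-state output: y_ss(k) = R0 * gain * cos(w*k*T + theta)
print(gain, np.degrees(theta))
```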
Example 2.11
The discrete-time system of Fig. 2.22 is described by the transfer function (refer to Eqn.(2.65))
G(z) = Y(z)/R(z) = 0.05z/(z – 0.95)
Solution For the input r(k) = R0 cos(Ωk) = Re[R0 e^{jΩk}], the response is
y(k) = Re{ Z^{–1}[ (0.0475 R0/(0.95 – e^{jΩ})) z/(z – 0.95) + G(e^{jΩ}) R0 z/(z – e^{jΩ}) ] }
where
G(e^{jΩ}) = G(z)|_{z = e^{jΩ}} = 0.05 e^{jΩ}/(e^{jΩ} – 0.95)
The inverse z-transform operation yields
y(k) = Re[ (0.0475 R0/(0.95 – e^{jΩ})) (0.95)^k + G(e^{jΩ}) R0 e^{jΩk} ]
in which the first term is the transient component and the second is the steady-state component.
This equation shows that as k increases, the transient component dies out. When this happens, the output expression becomes
yss(k) = R0 |G(e^{jΩ})| cos(Ωk + θ), with θ = ∠G(e^{jΩ})
[Fig. 2.23 Magnitude (falling from 1 at Ω = 0) and phase (falling from 0° toward –90°) of G(e^{jΩ}), plotted over 0 ≤ Ω ≤ 0.4π]
2.9 STABILITY ON THE z-PLANE AND THE JURY STABILITY CRITERION
Stability is concerned with the qualitative analysis of the dynamic response of a system. This section
is devoted to the stability analysis of linear time-invariant discrete-time systems. Stability concepts and
definitions used in connection with continuous-time systems7 are directly applicable here.
A linear time-invariant discrete-time system described by the state variable model (refer to Eqns (2.17)),
x(k + 1) = Fx(k) + g r(k);  x(0) ≜ x0
y(k) = cx(k) + d r(k)
has the following two sources of excitation:
(i) the initial state x0 representing initial internal energy storage; and (ii) the external input r(k).
The system is said to be in equilibrium state xe = 0, when both the initial internal energy storage and the
external input are zero.
In the stability study, we are generally concerned with the questions listed below.
(i) If the system with zero input (r(k) = 0; k ≥ 0) is perturbed from its equilibrium state xe = 0 at
k = 0, will the state x(k) return to xe, remain ‘close’ to xe, or diverge from xe?
(ii) If the system is relaxed, will a bounded input r(k); k ≥ 0, produce a bounded output y(k) for all k?
The first notion of stability is concerned with the ‘boundedness’ of the state of an unforced system in
response to arbitrary initial state, and is called zero-input stability. The second notion is concerned
with the boundedness of the output of a relaxed system in response to the bounded input, and is called
Bounded-Input, Bounded-Output (BIBO) stability.
A relaxed system (zero initial conditions) is said to be BIBO stable if for every bounded input
r(k); k ≥ 0, the output y(k) is bounded for all k.
For a linear time-invariant system to satisfy this condition, it is necessary and sufficient that
Σ_{k=0}^{∞} |g(k)| < ∞    (2.68)
where g(k) is the impulse response of the system.
7 Chapter 5 of reference [155].
To establish sufficiency, note that for a bounded input |r(k)| ≤ M < ∞,
|y(k)| = |Σ_{j=0}^{∞} g(j) r(k – j)| ≤ M Σ_{j=0}^{∞} |g(j)|
and if condition (2.68) holds true, |y(k)| is finite; hence the output is bounded and the system is BIBO stable.
The condition (2.68) is also necessary, for if we consider the bounded input
r(k – j) = +1 if g(j) > 0;  0 if g(j) = 0;  –1 if g(j) < 0
then the output at any fixed value of k is given by
y(k) = Σ_{j=0}^{∞} g(j) r(k – j) = Σ_{j=0}^{∞} |g(j)|
Thus, unless the condition given by (2.68) is true, the system is not BIBO stable.
The condition (2.68) for BIBO stability can be translated into a set of restrictions on the location of the poles of the transfer function G(z) in the z-plane. Consider the discrete-time system shown in Fig. 2.24. The block-diagram analysis gives the following input-output relation for this system:
Y(z)/R(z) = G(z) = z/(z² + a1 z + a2)
or
Y(z) = [z/(z² + a1 z + a2)] R(z)
For a unit-impulse input (R(z) = 1), suppose the characteristic polynomial z² + a1 z + a2 has complex-conjugate roots p = R0 e^{jΩ} and p*. Then
Y(z)/z = A/(z – p) + A*/(z – p*)
where A = |A|∠φ and A* is the complex conjugate of A.
The impulse response
y(k) = A(p)^k + A*(p*)^k = A(p)^k + [A(p)^k]*
     = 2Re[A(p)^k] = 2Re[|A| e^{jφ} R0^k e^{jΩk}]
     = 2|A| R0^k Re[e^{j(Ωk+φ)}] = 2|A| R0^k cos(Ωk + φ)
Therefore, the complex-conjugate pair of poles of the response transform Y(z) gives rise to a sinusoidal or oscillatory response function R0^k cos(Ωk + φ), whose envelope R0^k can be constant, growing, or decaying, depending on whether R0 = 1, R0 > 1, or R0 < 1, respectively (Fig. 2.26).
For an nth-order linear discrete-time system, the response transform Y(z) has an nth-order characteristic polynomial. Assume that Y(z) has a real pole at z = a of multiplicity m; the partial fraction expansion of Y(z)/z then contains the terms 1/(z – a), 1/(z – a)², …, 1/(z – a)^m, giving the response functions of Eqn. (2.69).
[Fig. 2.26 Unit-circle sketches in the z-plane: response sequences a^k for real poles at z = a, and R0^k cos(Ωk + φ) for complex-conjugate pole pairs R0 e^{±jΩ}, inside, on, and outside the unit circle]
The transform pairs for the repeated real pole follow from z/(z – a)² ↔ k a^{k–1} μ(k – 1); differentiating once more,
–z (d/dz)[1/(z – a)²] ↔ k(k – 1) a^{k–2} μ(k – 2)
or
z/(z – a)³ ↔ (1/2!) k(k – 1) a^{k–2} μ(k – 2)
In general,
z/(z – a)^m ↔ [k!/((k – m + 1)!(m – 1)!)] a^{k–m+1} μ(k – m + 1)    (2.70)
It can easily be established, using the final value theorem (Eqn. (2.52)), that each response function in Eqn. (2.69) tends to zero as k → ∞ if |a| < 1. However, the response functions in Eqn. (2.69) grow without bound for |a| ≥ 1.
Similar conclusions can be derived for the response functions corresponding to a complex-conjugate pair of poles (R0 e^{±jΩ}) of multiplicity m. The limit of each response function as k → ∞ equals zero if R0 < 1. The case of R0 ≥ 1 contributes growing response functions.
From the foregoing discussion it follows that the nature of the response terms contributed by the system poles (i.e., the poles of the transfer function G(z)) gives the nature of the impulse response g(k) (= Z^{–1}[G(z)]) of the system. This, therefore, answers the question of BIBO stability through condition (2.68), which says that for a system with transfer function G(z) to be BIBO stable, it is necessary and sufficient that
Σ_{k=0}^{∞} |g(k)| < ∞
The nature of response terms contributed by the various types of poles of G(z) = N(z)/D(z), i.e., the roots of the characteristic equation D(z) = 0, has already been investigated. Observing the nature of the response terms carefully leads us to the following general conclusions on BIBO stability.
(i) If all the roots of the characteristic equation lie inside the unit circle in the z-plane, then the impulse response is bounded and eventually decays to zero. Therefore, Σ_{k=0}^{∞} |g(k)| is finite and the system is BIBO stable.
(ii) If any root of the characteristic equation lies outside the unit circle in the z-plane, g(k) grows without bound and Σ_{k=0}^{∞} |g(k)| is infinite. The system is, therefore, unstable.
(iii) If the characteristic equation has repeated roots on the unit circle in the z-plane, g(k) grows without bound and Σ_{k=0}^{∞} |g(k)| is infinite. The system is, therefore, unstable.
(iv) If one or more nonrepeated roots of the characteristic equation are on the unit circle in the z-plane, then g(k) is bounded but Σ_{k=0}^{∞} |g(k)| is infinite. The system is, therefore, unstable.
An exception to the definition of BIBO stability is brought out by the following observations. Consider a system with transfer function
G(z) = N(z)/[(z – 1)(z – e^{jΩ})(z – e^{–jΩ})]
The system has nonrepeated poles on the unit circle in the z-plane. The response functions contributed by the system poles at z = 1 and z = e^{±jΩ} are, respectively, (1)^k and cos(Ωk + φ). The terms (1)^k and cos(Ωk + φ) are bounded, Σ_{k=0}^{∞} |g(k)| is infinite, and the system is unstable in the sense of our definition of BIBO stability.
Careful examination of the input-output relation
Y(z) = G(z)R(z) = N(z) R(z)/[(z – 1)(z – e^{jΩ})(z – e^{–jΩ})]
shows that y(k) is bounded for all bounded r(k), unless the input has a pole matching one of the system poles on the unit circle. For example, for a unit-step input r(k) = μ(k),
R(z) = z/(z – 1) and Y(z) = z N(z)/[(z – 1)²(z – e^{jΩ})(z – e^{–jΩ})]
The response y(k) is a linear combination of the terms cos(Ωk + φ), (1)^k, and k(1)^k, and therefore, y(k) → ∞ as k → ∞. Such a system, which has bounded output for all bounded inputs except for the inputs having poles matching the system poles, may be treated as acceptable or non-acceptable. We will bring the situations where the system has nonrepeated poles on the unit circle under the class of marginally stable systems.
This concept of stability is based on the dynamic evolution of the system state in response to arbitrary
initial state representing initial internal energy storage. State variable model (refer to Eqn. (2.17))
x(k + 1) = Fx(k) (2.71)
is the most appropriate for the study of dynamic evolution of the state x(k) in response to the initial
state x(0).
We may classify stability as follows:
(i) Unstable: There is at least one finite initial state x(0) such that x(k) grows without bound as k → ∞.
(ii) Asymptotically stable: For all possible initial states x(0), x(k) eventually decays to zero as k → ∞.
(iii) Marginally stable: For all initial states x(0), x(k) remains thereafter within finite bounds for k > 0.
Taking the z-transform on both sides of Eqn. (2.71) yields
zX(z) – z x(0) = FX(z), where X(z) ≜ Z[x(k)]
Solving for X(z), we get
X(z) = (zI – F)^{–1} z x(0) = Φ(z) x(0)
where
Φ(z) = (zI – F)^{–1} z = (zI – F)^{+} z / |zI – F|    (2.72a)
and (zI – F)^{+} denotes the adjoint of (zI – F). The state vector x(k) can be obtained by inverse transforming X(z):
x(k) = Z^{–1}[Φ(z)] x(0)    (2.72b)
Note that for an n × n matrix F, |zI – F| is an nth-order polynomial in z. Also, each element of the adjoint matrix (zI – F)^{+} is a polynomial in z of order less than or equal to (n – 1). Therefore, each element of Φ(z)/z is a strictly proper rational function, and can be expanded in a partial fraction expansion. Using the time-response analysis given earlier in this section, it is easy to establish that
lim_{k→∞} x(k) = 0
if all the roots of the characteristic polynomial |zI – F| lie strictly inside the unit circle of the complex plane. In Chapter 6 we will see that, under mildly restrictive conditions (namely, the system must be both controllable and observable), the roots of the characteristic polynomial |zI – F| are the same as the poles of the corresponding transfer function, and asymptotic stability ensures BIBO stability and vice versa. This implies that stability analysis can be carried out using only the BIBO stability test (or only the asymptotic stability test).
We will use the following terminology and tests for stability analysis of linear time-invariant systems
described by the transfer function G(z) = N(z)/D(z), with the characteristic equation D(z) = 0:
(i) If all the roots of the characteristic equation lie inside the unit circle in the z-plane, the system is
stable.
(ii) If any root of the characteristic equation lies outside the unit circle in the z-plane, or if there is a
repeated root on the unit circle, the system is unstable.
(iii) If condition (i) is satisfied except for the presence of one or more nonrepeated roots on the unit
circle in the z-plane, the system is marginally stable.
It follows from the above discussion that stability can be established by determining the roots of the
characteristic equations. All the commercially available CAD packages ([151–154]) include root-solving
routines. However, there exist tests for determining the stability of a discrete-time system, without finding
the actual numerical values of the roots of the characteristic equation.
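The root-solving route takes only a few lines of code; a minimal sketch (our illustration, assuming numpy; the tolerance handling is deliberately simplistic) classifying D(z) by the moduli of its roots:

```python
# Stability of D(z) = 0 from root moduli.
import numpy as np

def classify(D):
    r = np.abs(np.roots(D))
    if np.any(r > 1.0 + 1e-9):
        return "unstable"
    if np.any(np.isclose(r, 1.0)):
        return "marginally stable (if the unit-circle roots are nonrepeated)"
    return "stable"

print(classify([1.0, -1.5, 0.5]))   # roots at 1.0 and 0.5 -> marginally stable
```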
A well-known criterion to test the location of the zeros of the polynomial
D(z) = a0 z^n + a1 z^{n–1} + ⋯ + a_{n–1} z + a_n,
where a’s are real coefficients, is the Jury stability criterion. The proof of this criterion is quite involved
and is given in the literature (Jury and Blanchard [98]). The criterion gives the necessary and sufficient
conditions for the roots to lie inside the unit circle. In the following, we present the Jury stability criterion
without proof.
In applying the Jury stability criterion to a given characteristic equation D(z) = 0, we construct a table
whose elements are based on the coefficients of D(z).
Consider the general form of the characteristic polynomial D(z) (refer to Eqn. (2.60b)):
D(z) = a0 z^n + a1 z^{n–1} + ⋯ + ak z^{n–k} + ⋯ + a_{n–1} z + a_n;  a0 > 0    (2.73)
The criterion uses the Jury stability table given in Table 2.2.
The Jury stability table is formed using the following rules:
(i) The first two rows of the table consist of the coefficients of D(z), arranged in ascending order of
power of z in row 1, and in reverse order in row 2.
(ii) All even-numbered rows are simply the reverse of the immediately preceding odd-numbered rows.
(iii) The elements of rows 3 through (2n – 3) are given by the following second-order determinants:
b_k = det | a_n      a_{n–1–k} |
          | a_0      a_{k+1}   | ;  k = 0, 1, 2, ..., n – 1
c_k = det | b_{n–1}  b_{n–2–k} |
          | b_0      b_{k+1}   | ;  k = 0, 1, 2, ..., n – 2    (2.74)
⋮
q_k = det | p_3      p_{2–k}   |
          | p_0      p_{k+1}   | ;  k = 0, 1, 2
Table 2.2 The Jury stability table

Row      z^0      z^1      z^2      z^3      …  z^k        …  z^{n–2}  z^{n–1}  z^n
1        a_n      a_{n–1}  a_{n–2}  a_{n–3}  …  a_{n–k}    …  a_2      a_1      a_0
2        a_0      a_1      a_2      a_3      …  a_k        …  a_{n–2}  a_{n–1}  a_n
3        b_{n–1}  b_{n–2}  b_{n–3}  b_{n–4}  …  b_{n–k–1}  …  b_1      b_0
4        b_0      b_1      b_2      b_3      …  b_k        …  b_{n–2}  b_{n–1}
5        c_{n–2}  c_{n–3}  c_{n–4}  c_{n–5}  …  c_{n–k–2}  …  c_0
6        c_0      c_1      c_2      c_3      …  c_k        …  c_{n–2}
⋮
2n – 5   p_3      p_2      p_1      p_0
2n – 4   p_0      p_1      p_2      p_3
2n – 3   q_2      q_1      q_0
The procedure is continued until the (2n – 3)rd row is reached which will contain exactly three elements.
The necessary and sufficient conditions for the polynomial D(z) to have no roots on and outside the unit circle in the z-plane are:
D(1) > 0
D(–1) > 0 for n even;  D(–1) < 0 for n odd
|a_n| < |a_0|    (2.75a)
|b_{n–1}| > |b_0|
|c_{n–2}| > |c_0|
⋮
|q_2| > |q_0|
i.e., (n – 2) constraints in all    (2.75b)
The conditions on D(1), D(–1), and between a0 and an in (2.75a) form necessary conditions of stability
that are very simple to check without carrying out the Jury tabulation.
It should be noted that the test of stability given in (2.75) is valid only if the inequality conditions provide
conclusive results. Jury tabulation ends prematurely if, either the first and the last elements of a row are
zero, or, a complete row is zero. These cases are referred to as singular cases. These problems can be
resolved by expanding and contracting the unit circle infinitesimally, which is equivalent to moving the
roots off the unit circle. The transformation for this purpose is
ẑ = (1 + ε)z
where ε is a very small real number.
Example 2.12
Consider the characteristic polynomial
D(z) = 2z^4 + 7z^3 + 10z^2 + 4z + 1
Employing stability constraints (2.75a), we get
(i) D(1) = 2 + 7 + 10 + 4 + 1 = 24 > 0; satisfied
(ii) D(– 1) = 2 – 7 + 10 – 4 + 1 = 2 > 0; satisfied
(iii) |1| < |2| ; satisfied
Next, we construct the Jury table:

Row    z^0    z^1    z^2    z^3    z^4
1       1      4     10      7      2
2       2      7     10      4      1
3      –3    –10    –10     –1
4      –1    –10    –10     –3
5       8     20     20

For n = 4, the constraints (2.75b) are |b3| > |b0| and |c2| > |c0|. The first is satisfied (|–3| > |–1|), but the second is violated (|8| < |20|). Hence D(z) has at least one root on or outside the unit circle, and the system is unstable.
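The tabulation of (2.74) is mechanical and easily programmed. The following minimal sketch (our illustration, assuming numpy; singular cases are not handled) reproduces the derived rows of the table above:

```python
# Jury tabulation for D(z) = 2z^4 + 7z^3 + 10z^2 + 4z + 1.
import numpy as np

def jury_rows(a):
    """Rows 3, 5, ... of the Jury table; a = [a0, ..., an], a0 > 0."""
    rows, row = [], np.asarray(a, dtype=float)
    while len(row) > 3:
        n = len(row) - 1
        # b_k = a_n * a_{k+1} - a_0 * a_{n-1-k}, the determinant in (2.74)
        row = np.array([row[-1] * row[k + 1] - row[0] * row[n - 1 - k]
                        for k in range(n)])
        rows.append(row)
    return rows

b, c = jury_rows([2, 7, 10, 4, 1])
print(b)   # [-1, -10, -10, -3] -> |b3| = 3 > |b0| = 1, satisfied
print(c)   # [20, 20, 8]        -> |c2| = 8 > |c0| = 20 fails: unstable
```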
Usefulness of the Jury stability test for the design of a digital control system from the stability point of
view, is demonstrated in the next chapter.
The Jury criterion is of marginal use in designing a feedback system; it falls far short of the root locus
method discussed in Chapter 4. The root locus technique of factoring a polynomial is intrinsically
geometric, as opposed to the algebraic approach of the Jury criterion. The root locus method enables us
to rapidly sketch the locus of all solutions to the characteristic polynomial of the closed-loop transfer
function. The sketch is usually only qualitative, but even so, it offers great insight by showing how the
locations of the poles of closed-loop transfer function change as the gain is varied. As we will see in
Chapter 4, the root locus approach has been reduced to a set of ‘rules’. Applied in an orderly fashion,
these rules quickly identify all closed-loop pole locations.
2.10 SAMPLING AND HOLD OPERATIONS
The sampling operation—conversion of a continuous-time function to a sequence—has earlier been
studied in Section 2.6. We developed an impulse modulation model for the sampling operation (refer to
Fig. 2.19).
It is important to emphasize here that the impulse modulation model is a mathematical representation
of sampling; not a representation of any physical system designed to implement the sampling operation.
We have introduced this representation of the sampling operation because it leads to a simple derivation
of a key result on sampling (given in the next section) and because this approach allows us to obtain a
transfer function model of the hold operation.
2.10.1 The Hold Operation
It is the inverse of the sampling operation—conversion of a sequence to a continuous-time function. In
computer-controlled systems, it is necessary to convert the control actions calculated by the computer as
a sequence of numbers, to a continuous-time signal that can be applied to the process.
The problem of hold operation may be posed as follows:
Given a sequence {y(0), y(1), ..., y(k), ...}, we have to construct ya(t), t ≥ 0.
A commonly used solution to the problem of hold operation is polynomial extrapolation. Using a Taylor series expansion about t = kT, we can express ya(t) as
ya(t) = ya(kT) + ẏa(kT)(t – kT) + (ÿa(kT)/2!)(t – kT)² + ⋯ ;  kT ≤ t < (k + 1)T    (2.76)
where
ẏa(kT) ≜ dya(t)/dt|_{t=kT} ≈ (1/T)[ya(kT) – ya((k – 1)T)]
ÿa(kT) ≜ d²ya(t)/dt²|_{t=kT} ≈ (1/T)[ẏa(kT) – ẏa((k – 1)T)]
        = (1/T²)[ya(kT) – 2ya((k – 1)T) + ya((k – 2)T)]
If only the first term in expansion (2.76) is used, the data hold is called a zero-order hold (ZOH). Here we assume that the function ya(t) is approximately constant within the sampling interval, at a value equal to that of the function at the preceding sampling instant. Therefore, for a given input sequence {y(k)}, the output of the ZOH is given by
ya(t) = y(k);  kT ≤ t < (k + 1)T    (2.77)
The first two terms in Eqn. (2.76) are used to realize the first-order hold. For a given input sequence {y(k)}, the output of the first-order hold is given by
ya(t) = y(k) + [(t – kT)/T][y(k) – y(k – 1)];  kT ≤ t < (k + 1)T    (2.78)
It is obvious from Eqn. (2.76) that the higher the order of the derivative to be approximated, the larger will
be the number of delay pulses required. The time-delay adversely affects the stability of feedback control
systems. Furthermore, a high-order extrapolation requires complex circuitry and results in high costs of
construction. The ZOH is the simplest, and most commonly used, data hold device. The standard D/A
converters are often designed in such a way that the old value is held constant until a new conversion is
ordered.
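The two hold laws (2.77) and (2.78) are one-liners in code. A sketch under assumed data (ours; the signal and T are arbitrary choices, and since y(–1) is undefined, the first-order hold's first interval is handled crudely by reusing k = 1):

```python
# Zero-order and first-order hold reconstruction, Eqns (2.77)-(2.78).
import numpy as np

T = 0.5
y = np.sin(np.arange(6) * T)             # sample sequence y(k) (assumed)

def zoh(t):                              # ya(t) = y(k), kT <= t < (k+1)T
    k = np.clip((t // T).astype(int), 0, len(y) - 1)
    return y[k]

def foh(t):                              # ya(t) = y(k) + (t-kT)/T * (y(k)-y(k-1))
    k = np.clip((t // T).astype(int), 1, len(y) - 1)
    return y[k] + (t - k * T) / T * (y[k] - y[k - 1])

t = np.linspace(0.0, 2.4, 7)
print(zoh(t))
print(foh(t))
```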
2.10.2 Sample-and-Hold Systems
In the digital control structure of Fig. 2.2, discrete-time processing of continuous-time signals is accomplished by the system depicted in Fig. 2.27. The system is a cascade of an A/D converter, followed by a discrete-time system (computer program), followed by a D/A converter. Note that the overall system is equivalent to a continuous-time system, since it transforms the continuous-time input signal xa(t) into the continuous-time signal ya(t). However, the properties of the system depend on the choice of the discrete-time system and the sampling rate.
[Fig. 2.27 Discrete-time processing of continuous-time signals]
In the special case of discrete-time signal processing with a unit-gain algorithm and negligible time delay (i.e., y(k) = x(k)), the combined action of the A/D converter, the computer, and the D/A converter can be described as a system that samples the analog signal and produces another analog signal that is constant over the sampling periods. Such a system is called a sample-and-hold (S/H) system. The input-output behavior of an S/H system is described diagrammatically in Fig. 2.28. In the following, we develop an idealized model for S/H systems.
[Fig. 2.28 Input-output behavior of an S/H system]
S/H operations require modeling of the following two processes:
(i) extracting the samples, and
(ii) holding the result fixed for one period.
The impulse modulator effectively extracts the samples in the form of x(k)d (t – kT). The remaining
problem is to construct a linear time-invariant system which will convert this impulse into a pulse of
height x(k) and width T. The S/H may, therefore, be modeled by Fig. 2.29a, wherein the ZOH is a system
whose response to a unit impulse d (t) is a unit pulse gh0(t) of width T. The Laplace transform of the
impulse response gh0(t) is the transfer function of the hold operation, namely,
Gh0(s) = L[gh0(t)] = ∫_0^∞ gh0(t) e^{–st} dt = ∫_0^T e^{–st} dt = (1 – e^{–sT})/s    (2.79)
Figure 2.29b is a block diagram representation of the transfer function model of the S/H operation.
[Fig. 2.29 (a) Impulse modulation model of the S/H operation: the sampler produces x*(t), and the ZOH, with unit-impulse response gh0(t) equal to a unit pulse of width T, produces ya(t); (b) transfer function model: X*(s) = Σ_{k=0}^{∞} x(k) e^{–skT}, Gh0(s) = (1 – e^{–sT})/s]
In a majority of practical digital operations, S/H functions are performed by a single S/H device. It
consists of a capacitor, an electronic switch, and operational amplifiers (Fig. 2.30). Op amps are needed
for isolation; the capacitor and switch cannot be connected directly to analog circuitry because of the
capacitor’s effect on the driving waveform.
Since the voltage between the inverting and non-inverting inputs of an op amp is measured in microvolts,
we can approximate this voltage to zero. This implies that the voltage from the inverting input (– input)
to ground in Fig. 2.30 is approximately VIN; therefore, the output of the first op amp is approximately VIN. When the switch is closed, the capacitor rapidly charges to VIN, and VOUT is approximately equal to VIN. When the switch opens, the capacitor retains its charge; the output holds at the value VIN.
[Fig. 2.30 Sample-and-hold circuit: input op amp buffer, electronic switch driven by the sample/hold control logic, hold capacitor, and output op amp buffer producing VOUT]
If the input voltage changes rapidly while the switch is closed, the capacitor can follow this voltage
because the charging time-constant is very short. If the switch is suddenly opened, the capacitor voltage
represents a sample of the input voltage at the instant the switch was opened. The capacitor then holds
this sample until the switch is again closed and a new sample is taken.
As an illustration of the application of a sampler/ZOH circuit, consider the A/D conversion system of Fig. 2.31. The two subsystems in this figure correspond to systems that are available as physical devices. The A/D converter converts a voltage (or current) amplitude at its input into a binary code representing a quantized amplitude value closest to the amplitude of the input. However, the conversion is not instantaneous. Input signal variation during the conversion time of the A/D converter (typical conversion times of commercial A/D units range from 100 nsec to 200 μsec) can lead to erroneous results. For this reason, a high-performance A/D conversion system includes an S/H device, as shown in Fig. 2.31.
Although an S/H is available commercially as one unit, it is advantageous to treat the sampling and
holding operations separately for analytical purposes, as has been done in the S/H model of Fig. 2.29b.
This model gives the defining equation of the sampling process and the transfer function of the ZOH. It
may be emphasized here that X*(s) is not present in the physical system but appears in the mathematical
model; the sampler in Fig. 2.29 does not model a physical sampler and the block does not model a
physical data hold. However, the combination does accurately model a sampler/ZOH device.
2.11 SPECTRUM ANALYSIS OF THE SAMPLING PROCESS
We can get further insight into the process of sampling by relating the spectrum of the continuous-time
signal to that of the discrete-time sequence, which is obtained by sampling.
Let us define the continuous-time signal as xa(t). Its spectrum is then given by Xa(jω), where ω is the frequency in radians per second. The sequence x(k), with values x(k) = xa(kT), is derived from xa(t) by periodic sampling. The spectrum of x(k) is given by X(e^{jΩ}), where the frequency Ω has units of radians per sample interval.
The Laplace transform expresses an analog signal xa(t) as a continuous sum of exponentials e^{st}; s = σ + jω. The Fourier transform expresses xa(t) as a continuous sum of exponentials e^{jωt}. Similarly, the z-transform expresses a sequence x(k) as a discrete sum of phasors z^{–k}; z = re^{jΩ}. The Fourier transform expresses x(k) as a discrete sum of exponentials e^{jΩk} [31].
The Fourier transforms of xa(t) and x(k) are, respectively,
Xa(jω) = ∫_{–∞}^{∞} xa(t) e^{–jωt} dt    (2.80)
X(e^{jΩ}) = Σ_{k=–∞}^{∞} x(k) e^{–jΩk}    (2.81)
We use the intermediate function x*(t), the impulse-modulated xa(t), to establish a relation between Xa(jω) and X(e^{jΩ}).
The Fourier transform of x*(t), denoted by X*(jω), is (refer to Eqn. (2.30)) given by
X*(jω) = ∫_{–∞}^{∞} x*(t) e^{–jωt} dt = ∫_{–∞}^{∞} [Σ_{k=0}^{∞} x(k) δ(t – kT)] e^{–jωt} dt = Σ_{k=0}^{∞} x(k) e^{–jωkT}    (2.82)
(The summation over the interval –∞ to ∞ is allowed, since x(k) = 0 for k < 0.) We have arrived at our first intermediate result. By comparing Eqn. (2.82) with Eqn. (2.81), we observe that
X(e^{jΩ}) = X*(jω)|_{ω = Ω/T}    (2.83a)
X(e^{jΩ}) is thus a frequency-scaled version of X*(jω), with the frequency scaling specified by
Ω = ωT    (2.83b)
We now determine X*(jω) in terms of the continuous-time spectrum Xa(jω). From Eqn. (2.30), we have
x*(t) = xa(t) Σ_{k=–∞}^{∞} δ(t – kT)
The summation over the interval –∞ to ∞ is allowed since xa(t) = 0 for t < 0.
Since Σ_{k=–∞}^{∞} δ(t – kT) is a periodic function of period T, it can be expressed in terms of the following Fourier series expansion:
Σ_{k=–∞}^{∞} δ(t – kT) = Σ_{n=–∞}^{∞} c_n e^{j2πnt/T}
where
c_n = (1/T) ∫_{–T/2}^{T/2} [Σ_{k=–∞}^{∞} δ(t – kT)] e^{–j2πnt/T} dt
    = (1/T) ∫_{–T/2}^{T/2} δ(t) e^{–j2πnt/T} dt = (1/T) e^{–j0} = 1/T for all n
Substituting this Fourier series expansion into the impulse modulation process, we get
x*(t) = (1/T) Σ_{n=–∞}^{∞} xa(t) e^{j2πnt/T}
The continuous-time spectrum of x*(t) is then equal to
X*(jω) = ∫_{–∞}^{∞} x*(t) e^{–jωt} dt = (1/T) ∫_{–∞}^{∞} [Σ_{n=–∞}^{∞} xa(t) e^{j2πnt/T}] e^{–jωt} dt
Interchanging the order of summation and integration, we obtain
X*(jω) = (1/T) Σ_{n=–∞}^{∞} ∫_{–∞}^{∞} xa(t) e^{–j(ω – 2πn/T)t} dt = (1/T) Σ_{n=–∞}^{∞} Xa(jω – j2πn/T)    (2.84a)
where Xa(jω) is the Fourier transform of xa(t).
We see from this equation that X*(jω) consists of periodically repeated copies of Xa(jω), scaled by 1/T. The scaled copies of Xa(jω) are shifted by integer multiples of the sampling frequency
ωs = 2π/T    (2.84b)
and then superimposed to produce X*(jω).
Equation (2.84a) is our second intermediate result. Combining this result with that given by Eqn. (2.83a), we obtain the following relations:
X*(jω) = (1/T) Σ_{k=–∞}^{∞} Xa(jω – j2πk/T)    (2.85a)
X(e^{jΩ}) = X*(jΩ/T) = (1/T) Σ_{k=–∞}^{∞} Xa(jΩ/T – j2πk/T)    (2.85b)
2.11.1 Aliasing
While sampling a continuous-time signal xa(t) to produce the sequence x(k) with values x(k) = xa(kT),
we want to ensure that all the information in the original signal is retained in the samples. There will be
no information loss if we can exactly recover the continuous-time signal from the samples. To determine
the condition under which there is no information loss, let us consider xa(t) to be a band-limited signal
with maximum frequency ωm, i.e.,
Xa(jω) = 0 for |ω| > ωm    (2.86)
as shown in Fig. 2.32a. Figure 2.32b shows a plot of X*(jω) under the condition
ωs/2 = π/T > ωm    (2.87a)
Figure 2.32c shows the plot of X(e jW), which is derived from Fig. 2.32b by simply scaling the frequency
axis.
[Fig. 2.32 (a) Band-limited spectrum Xa(jω), zero for |ω| > ωm; (b) X*(jω) for ωs/2 > ωm: non-overlapping copies of Xa(jω) scaled by 1/T and repeated every 2π/T; (c) X(e^{jΩ}): the same spectrum with frequency axis scaled so copies repeat every 2π; (d) X*(jω) for ωs/2 < ωm: overlapping copies, with the component at 2π/T – ω1 folding onto ω1]
X*(jω) is seen to be a periodic function with period 2π/T (X(e^{jΩ}) is a periodic function with period 2π). The spectrum X*(jω) for |ω| ≤ π/T is identical to the continuous-time spectrum Xa(jω), except for linear scaling in amplitude (the spectrum X(e^{jΩ}) for |Ω| ≤ π is identical to the continuous-time spectrum Xa(jω), except for linear scaling in amplitude and frequency). The continuous-time signal xa(t) can be recovered from its samples x(k) without any distortion by employing an ideal low-pass filter (Fig. 2.33).
Figure 2.32d shows a plot of X*(jω) under the condition
ωs/2 = π/T < ωm    (2.87b)
The plot of X(e^{jΩ}) can easily be derived from Fig. 2.32d by scaling the frequency axis.
The frequency (2π/T – ω1), which shows up at ω1 after sampling, is called in the trade the 'alias' of ω1. The superimposition of the high-frequency behavior onto the low frequency is known as frequency folding or aliasing. Under the condition given by (2.87b), the form of X*(jω) in the frequency range |ω| ≤ π/T is no longer similar to Xa(jω); therefore, the true spectral shape Xa(jω) is no longer recoverable by low-pass filtering (refer to Fig. 2.33). In this case, the reconstructed signal xr(t) is related to the original signal xa(t) through a distortion introduced by aliasing and, therefore, there is loss of information due to sampling.
Example 2.13
We consider a simple example to illustrate the effects of aliasing.
Figure 2.34a shows a recording of the temperature in a thermal process. From this recording we observe
that there is an oscillation in temperature with a period of two minutes.
[Fig. 2.34 (a) Temperature recording showing an oscillation with a period of 2 min; (b) the same signal sampled every 1.8 min, showing an apparent oscillation with a period of 18 min]
The sampled recording of the temperature obtained by measurement of temperature after every
1.8 minutes is shown in Fig. 2.34b. From the sampled recording, one might believe that there is an
oscillation with a period of 18 minutes. There seems to be loss of information because of the process of
sampling.
The sampling frequency is ws = 2p/1.8 = p/0.9 rad/min, and the frequency of temperature oscillation is
w0 = 2p/2 = p rad/min. Since w0 is greater than ws/2, it does not lie in the passband of a low-pass filter
with a cut-off frequency ws/2. However, the frequency w0 is ‘folded in’ at ws – w0 = p/9 rad/min which
lies in the passband of the low-pass filter. The reconstructed signal has, therefore, a period of 18 minutes,
which is the period of the sampled recording.
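The folding arithmetic of this example can be confirmed in a few lines (our sketch, assuming numpy): cos(ω0 kTs) and cos((ωs – ω0)kTs) produce identical sample values.

```python
# Aliasing in Example 2.13: pi rad/min sampled every 1.8 min folds to pi/9.
import numpy as np

w0, Ts = np.pi, 1.8                 # signal frequency, sampling interval
k = np.arange(12)
ws = 2 * np.pi / Ts
alias = ws - w0                     # pi/9 rad/min -> period 18 min
print(np.allclose(np.cos(w0 * k * Ts), np.cos(alias * k * Ts)))  # True
```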
2.11.2 Sampling Theorem
A corollary to the aliasing problem is the sampling theorem stated below.
Let xa(t) be a band-limited signal with Xa(jω) = 0 for |ω| > ωm. Then xa(t) is uniquely determined from its samples x(k) = xa(kT) if the sampling frequency ωs (= 2π/T) > 2ωm, i.e., the sampling frequency must be at least twice the highest frequency present in the signal.
We will discuss the practical aspects of the choice of sampling frequency in Section 2.13.
2.12 SIGNAL RECONSTRUCTION
Digital control systems usually require the transformation of discrete-time sequences into analog signals. In such cases, we are faced with the converse of the problem of sampling xa(t) to obtain x(k). The relevant question now becomes: how can xa(t) be recovered from its samples?
We begin by considering the unaliased spectrum X*(jω) shown in Fig. 2.32b. Xa(jω) has the same form as X*(jω) over –π/T ≤ ω ≤ π/T. Xa(jω) can be recovered from X*(jω) by a low-pass filter.
Consider the ideal low-pass filter shown in Fig. 2.33. It is characterized by the G(jω) defined below:
G(jω) = T for –π/T ≤ ω ≤ π/T;  0 otherwise    (2.88)
Note that the ideal filter given by Eqn. (2.88) has a zero phase characteristic. This phase characteristic
stems from our requirement that any signal whose frequency components are totally within the passband
of the filter, be passed undistorted.
We will need the following basic mathematical background in this section [31].
The Fourier transform pair:
F[y(t)] = Y(jω) ≜ ∫_{–∞}^{∞} y(t) e^{–jωt} dt
F^{–1}[Y(jω)] = y(t) ≜ (1/2π) ∫_{–∞}^{∞} Y(jω) e^{jωt} dω
Shifting theorem:
F[y(t – T/2)] = e^{–jωT/2} Y(jω)
The impulse response of the ideal low-pass filter is given by inverse Fourier transformation:
g(t) = (1/2π) ∫_{–∞}^{∞} G(jω) e^{jωt} dω = (1/2π) ∫_{–π/T}^{π/T} T e^{jωt} dω
     = (T/j2πt)(e^{jπt/T} – e^{–jπt/T}) = sin(πt/T)/(πt/T)    (2.89)
Figure 2.35 shows a plot of g(t) versus t. Notice that the response extends from t = –∞ to t = ∞. This implies that there is a response for t < 0 to a unit impulse applied at t = 0 (i.e., the time response begins before the input is applied). This cannot be true in the physical world. Hence, such an ideal filter is physically unrealizable.
[Fig. 2.35 Impulse response g(t) = sin(πt/T)/(πt/T) of the ideal low-pass filter, with peak at t = 0 and zero crossings at t = ±T, ±2T, ...]
We consider polynomial holds as an approximation to the ideal low-pass filter. The ZOH was considered
in Section 2.10, and its transfer function was derived to be (Eqn. (2.79)),
Gh0(s) = (1 – e^{–sT})/s
Its frequency response is consequently given by
Gh0(jω) = (1 – e^{–jωT})/(jω) = e^{–jωT/2}(e^{jωT/2} – e^{–jωT/2})/(jω)
        = T [sin(ωT/2)/(ωT/2)] e^{–jωT/2}    (2.90)
A plot of sin(ωT/2)/(ωT/2) versus ω will be of the form shown in Fig. 2.35, with sign reversals at ω = 2π/T, 4π/T, …
(i.e., ω = ωs, 2ωs, …). The sign reversals amount to a phase shift of –180° (it can be taken as +180° as well) at ω = kωs; k = 1, 2, …
Equation (2.90) can, therefore, be expressed as
Gh0(jω) = |Gh0(jω)| ∠Gh0(jω)
where
|Gh0(jω)| = T |sin(ωT/2)/(ωT/2)|    (2.91a)
and
∠Gh0(jω) = –ωT/2, with an additional –180° shift introduced at each ω = 2πk/T; k = 1, 2, ...    (2.91b)
Plots of the magnitude and phase characteristics of the ZOH are shown in Fig. 2.36. The ideal low-pass filter is shown by dashed lines in Fig. 2.36a. The phase of the ideal filter, at all frequencies, is zero.
It is obvious that the hold device does not have the ideal filter characteristics.
(i) The ZOH begins to attenuate at frequencies considerably below ws/2.
(ii) The ZOH allows high frequencies to pass through, although they are attenuated.
[Fig. 2.36 Magnitude and phase characteristics of the ZOH, compared with the ideal low-pass filter]
(iii) The factor e– jwT/2 in Eqn. (2.90) corresponds to a delay of T/2 in the time domain. This follows
from the shifting theorem of Fourier transforms. Therefore, the linear phase characteristic
introduces a time delay of T/2. When ZOH is used in a feedback system, the lag characteristic of
the device degrades the degree of system stability.
The higher-order holds, which are more sophisticated and which better approximate the ideal filter,
are more complex and have more time delay than the ZOH. As the additional time delay in feedback
control systems decreases the stability margin or even causes instability, the higher-order holds are rarely
justified in terms of improved performance, and therefore, the zero-order hold is widely used in practice.
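The magnitude and phase expressions (2.90)-(2.91), including the T/2 delay, can be evaluated directly; a minimal sketch (ours, assuming numpy; T is an assumed value):

```python
# ZOH frequency response Gh0(jw) = (1 - e^{-jwT})/(jw).
import numpy as np

T = 1.0
w = np.linspace(1e-6, 4 * np.pi / T, 9)       # avoid the w = 0 singularity
Gh0 = (1 - np.exp(-1j * w * T)) / (1j * w)
print(np.abs(Gh0))                  # T*|sin(wT/2)/(wT/2)|
print(np.degrees(np.angle(Gh0)))    # -wT/2 in degrees, with 180-deg jumps
```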
In practice, signals in control systems have frequency spectra consisting of low-frequency components
as well as high-frequency noise components. Recall that all signals with frequency higher than ws/2
appear as signals of frequencies between 0 and ws/2 due to the aliasing effect. Therefore, high-frequency
noise will be folded in and will corrupt the low-frequency signal containing the desired information.
To avoid aliasing, we must either choose the sampling frequency high enough (ws > 2wm, where wm
is the highest-frequency component present in the signal) or use an analog filter ahead of sampler
(refer to Fig. 2.2) to reshape the frequency spectrum of the signal (so that the frequency spectrum for
w > (1/2)ws is negligible), before the signal is sampled. Sampling at very high frequencies introduces
numerical errors. Anti-aliasing filters are, therefore, useful for digital control applications.
The synthesis of analog filters is now a very mature subject area. Extensive sets of tables exist, which
give, not only the frequency and phase response of many analog prototypes, but also the element values
necessary to realize those prototypes. Many of the design procedures for digital filters, have been
developed in ways that allow this wide body of analog filter knowledge, to be utilized effectively.
2.13 SELECTION OF THE SAMPLING INTERVAL
Every time a digital control algorithm is designed, a suitable sampling interval must be chosen. Choosing
a long sampling interval reduces both the computational load and the need for rapid A/D conversion, and
hence the hardware cost of the project.
However, as the sampling interval is increased, a number of potentially degrading effects start to become
significant. For a particular application, one or more of these degrading effects set the upper limit for
the sampling interval. The process dynamics, the type of algorithm, the control requirement and the
characteristics of input and noise signals, all interact to set the maximum usable value for T.
There is also a lower limit for the sampling interval. Digital hardware dictates the minimum usable value
for T.
We will discuss some of the factors which limit the choice of sampling interval. Some empirical rules for
the selection of sampling interval are also reported.
2.13.1 Factors Limiting the Choice of Sampling Interval
The sampling theorem states that a continuous-time signal whose frequency spectrum is bounded by
upper limit wm, can be completely reconstructed from its samples when the sampling frequency is
ws > 2wm. There are two problems associated with the use of the sampling theorem in practical control
systems.
(i) The frequency spectra of real signals do not possess a strictly defined ωm. There are almost always frequency components outside the system bandwidth. Therefore, the selection of the sampling frequency ωs using the sampling theorem on the basis of system bandwidth (ωb = ωm) is risky, as frequency components outside ωb will appear as low-frequency signals of frequencies between 0 and ωs/2 due to the aliasing effect, and lead to loss of information.
(ii) The ideal low-pass filter needed for perfect reconstruction of a continuous-time signal from its
samples is not physically realizable. Practical filters, such as the ZOH, introduce reconstruction
errors because of the limitations of their operation.
Figure 2.28 clearly indicates that the accuracy of the zero-order hold as an extrapolating device depends greatly on the sampling frequency ωs. The accuracy improves with an increase in sampling frequency.
In practice, signals in control systems include low-frequency components carrying useful information,
as well as high-frequency noise components. The high-frequency components appear as low-frequency
signals (of frequencies between 0 and ws/2) due to the aliasing effect, causing a loss of information.
To avoid aliasing, we use the analog filter ahead of sampler (refer to Fig. 2.2) to reshape the frequency
spectrum of the signal, so that the frequency spectrum for w > (1/2)ws is negligible. The cut-off frequency
ws /2 of the anti-aliasing filter must be much higher than the system bandwidth, otherwise the anti-
aliasing filter becomes as significant as the system itself, in determining the sampled response.
Due to the conversion times and the computation times, a digital algorithm contains a dead-time that is
absent from its analog counterpart. Dead-time has a marked destabilizing effect on a closed-loop system
due to the phase shift caused.
A practical approach of selecting the sampling interval is to determine the stability limit of the closed-
loop control system, as sampling interval T is increased. For control system applications, this approach
is more useful than the use of the sampling theorem for the selection of sampling interval. In the later
chapters of this book, we will use stability tests, root-locus techniques, and frequency-response plots to
study the effect of the sampling interval on closed-loop stability.
A number of digital control algorithms are derived from analog algorithms by a process of discretization.
As we shall see in the next section, in the transformation of an algorithm, from continuous-time to
discrete-time form, errors arise and the character of the digital algorithm differs from that of its analog
counterpart. In general, these errors occurring during the discretization process, become larger as the
sampling interval increases.
This effect should rarely be allowed to dictate a shorter sampling interval, than would otherwise have
been needed. We will see in Chapter 4 that the direct digital design approach allows a longer sampling
interval without the introduction of unacceptable errors.
As the sampling interval T becomes very short, a digital system does not tend to the continuous-time
case, because of the finite word-length. To visualize this effect, we can imagine that as a signal is sampled
more frequently, adjacent samples have more similar magnitudes. In order to realize the beneficial effects
of shorter sampling, longer word-lengths are needed to resolve the differences between adjacent samples.
Excessively fast sampling (T Æ 0) may also result in numerical ill-conditioning in implementation of
recursive control algorithms (such as the PID control algorithm—discussed in the next section).
2.13.2 Empirical Rules for the Selection of Sampling Interval
Practical experience and simulation results have produced a number of useful approximate rules for the
specification of minimum sampling rates.
(i) The recommendations given in the following table for the most common process variables follow from the experience of process industries.

Type of variable    Sampling time (seconds)
Flow                1–3
Level               5–10
Pressure            1–5
Temperature         10–20

(ii) Fast-acting electromechanical systems require much shorter sampling intervals, perhaps down to a few milliseconds.
(iii) A rule of thumb says that a sampling period needs to be selected that is much shorter than any of the time constants in the continuous-time plant to be controlled digitally. A sampling interval equal to one tenth of the smallest time constant, or the inverse of the largest real pole (or real part of a complex pole), has been recommended.
(iv) For complex poles with the imaginary part wd, the frequency of transient oscillations, corresponding
to the poles, is wd. A convenient rule suggests sampling at the rate of 6 to 10 times per cycle.
Thus, if the largest imaginary part in the poles of the continuous-time plant is 1 rad/sec, which
corresponds to transient oscillations with a frequency of 1/6.28 cycles per second, T = 1 sec may
be satisfactory.
(v) Rules of thumb based on the open-loop plant model, are risky under conditions where the high
closed-loop performance is forced from a plant with a low open-loop performance. The rational
choice of the sampling rate, should be based on an understanding of its influence on the closed-
loop performance of the control system. It seems reasonable that the highest frequency of
interest should be closely related to the 3 dB-bandwidth of the closed-loop system. The selection of sampling rates can then be based on the bandwidth of the closed-loop system. Reasonable sampling rates are 10 to 30 times the bandwidth.
(vi) Another rule of thumb, based on the closed-loop performance, is to select a sampling interval T equal to, or less than, one tenth of the desired settling time. (A toy calculation combining rules (v) and (vi) is sketched after this list.)
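A toy helper combining rules (v) and (vi); the function, its name, and its parameter values are ours, purely illustrative:

```python
# Suggest a sampling interval from closed-loop bandwidth and settling time.
import numpy as np

def suggest_T(wb_rad_s, settling_time_s):
    T_bw = 2 * np.pi / (20.0 * wb_rad_s)   # rule (v): ws ~ 20x bandwidth
    T_ts = settling_time_s / 10.0          # rule (vi): T <= ts/10
    return min(T_bw, T_ts)

print(suggest_T(2.0, 4.0))   # -> 0.157 s (the bandwidth rule binds here)
```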
2.14 PRINCIPLES OF DISCRETIZATION
Most of the industrial processes that we are called upon to control are continuous-time processes.
Mathematical models of continuous-time processes are usually based around differential equations or,
equivalently, around transfer functions in the operator s. A very extensive range of well-tried methods for
control system analysis and design are in the continuous-time form.
To move from the continuous-time form to the discrete-time form requires some mechanism for time
discretization (we shall refer to this mechanism simply as discretization). In this section, principles and
various methods of discretization will be presented. An understanding of various possible approaches
helps the formation of a good theoretical foundation for the analysis and design of digital control systems.
The main point is to be aware of the significant features of discretization and to have a rough quantitative
understanding of the errors that are likely to be introduced by various methods. We will shortly see that
none of the discretization methods preserves the characteristics of the continuous-time system exactly.
The specific problem of this section is: given a transfer function G(s), what discrete-time transfer function
will have approximately the same characteristics?
We present four methods for solution of this problem.
(i) Impulse-invariant discretization
(ii) Step-invariant discretization
(iii) Discretization based on finite-difference approximation of derivatives
(iv) Discretization based on bilinear transformation
2.14.1 Impulse-Invariant Discretization
If we are given a continuous-time impulse response ga(t), we can consider transforming it to a discrete-
time system with impulse response g(k) consisting of equally spaced samples of ga(t) so that
g(k) = ga(t)| t = kT = ga(kT)
where T is a (positive) number to be chosen as part of the discretization procedure.
The transformation of ga(t) to g(k) can be viewed as impulse modulation (refer to Fig. 2.19), giving the impulse-train representation g*(t) of the samples g(k):
g*(t) = Σ_{k=0}^{∞} g(k) δ(t – kT)    (2.92)
Following the spectral relations of Section 2.11 (Eqns (2.85)), the frequency responses are related by
G*(jω) = (1/T) Σ_{k=–∞}^{∞} Ga(jω – j2πk/T)    (2.93a)
G(e^{jΩ}) = (1/T) Σ_{k=–∞}^{∞} Ga(jΩ/T – j2πk/T)    (2.93b)
w, in radians/second, is the physical frequency of the continuous-time function and W = wT, in radians,
is the observed frequency in its samples.
Thus, for a discrete-time system obtained from a continuous-time system through impulse invariance,
the discrete-time frequency response G(e jW ) is related to the continuous-time frequency response
Ga( jw) through replication of the continuous-time frequency response and linear scaling in amplitude
and frequency. If Ga( jw) is band-limited and T is chosen so that aliasing is avoided, the discrete-time
frequency response is then identical to continuous-time frequency response, except for linear scaling in
amplitude and frequency.
Let us explore further the properties of impulse invariance. Applying the Laplace transform to Eqn. (2.92), we obtain (refer to Eqn. (2.32b))
G(z)|_{z = e^{sT}} = G*(s)    (2.95a)
Rewriting Eqn. (2.93a) in terms of the general transform variable s gives a relationship between G*(s) and Ga(s):
G*(s) = (1/T) Σ_{k=–∞}^{∞} Ga(s – j2πk/T)    (2.95b)
Therefore,
G(z)|_{z = e^{sT}} = (1/T) Σ_{k=–∞}^{∞} Ga(s – j2πk/T)    (2.95c)
k=-
We note that impulse invariance corresponds to a transformation between G*(s) and G(z) represented by the mapping

z = e^{sT} = e^{(σ ± jω)T} = e^{σT} ∠ (±ωT)    (2.96)

between the s-plane and the z-plane.
In the following, we investigate in more detail the mapping z = e^{sT}. We begin by letting ga(t) = e^{−at}m(t); a > 0. The Laplace transform of this function is

Ga(s) = 1/(s + a)    (2.97a)

The starred transform is (refer to Eqn. (2.32b))

G*(s) = Σ_{k=0}^{∞} e^{−akT} e^{−kTs} = e^{sT}/(e^{sT} − e^{−aT})    (2.97b)
Fig. 2.37 Mapping of s-plane strips onto the z-plane under z = e^{sT}
The loci in Fig. 2.37 are divided into dashed and solid portions; the solid portions correspond to the mapping for 0 ≤ ω ≤ π/T, and the dashed portions correspond to the mapping for −π/T ≤ ω ≤ 0.
The following points are worth noting at this juncture.
(i) The left half of the primary strip in the s-plane maps onto the interior of the unit circle in the z-plane.
(ii) The imaginary axis between −jπ/T and jπ/T, associated with the primary strip in the s-plane, maps onto the unit circle in the z-plane.
(iii) The right half of the primary strip in the s-plane maps onto the region exterior to the unit circle in the z-plane.
(iv) The same pattern holds for each of the complementary strips.
The fourth point needs further discussion. Since every complementary strip maps onto the same region of the z-plane as the primary strip, poles of Ga(s) lying in different strips become indistinguishable after the mapping; this is the z-plane manifestation of aliasing.
Although useful for discretizing band-limited analog systems, the impulse-invariance method is unsuccessful for discretizing transfer functions Ga(s) for which |Ga(jω)| does not approach zero for large ω. In these cases, an appropriate sampling rate cannot be found to prevent aliasing.
To overcome the problem of aliasing, we need a method in which the entire jω-axis in the s-plane maps uniquely onto the unit circle in the z-plane. This is accomplished by the bilinear transformation method, described later in this section.
For a given analog system Ga(s), the impulse-invariant discrete-time system is obtained following the
procedure given below:
(i) Obtain the impulse response,
ga(t) = L – 1[Ga(s)]
(ii) Select a suitable sampling interval and derive samples g(k) from ga(t),
g(k) = ga (t)|t = kT
(iii) Obtain z-transform of the sequence g(k),
G(z) = Z [g(k)]
The three steps given above can be represented by the following relationship:
G(z) = Z [L –1[Ga(s)]|t = kT] (2.98a)
This z-transform operation is commonly indicated as
G(z) = Z [Ga(s)] (2.98b)
Single factor building blocks of the Laplace and z-transform pairs are given in Table 2.1. Expanding
any Ga(s) into partial fractions, G(z) can be found by use of this table.
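The three-step procedure lends itself to a direct numerical implementation. The following is a minimal NumPy sketch (the helper name impulse_invariant and its calling convention are ours, not from the text); it assumes Ga(s) has already been expanded into partial fractions with simple poles, maps each pole pi to e^{piT}, and recombines the terms into a rational G(z).

```python
import numpy as np

def impulse_invariant(residues, poles, T):
    # Ga(s) = sum_i r_i/(s - p_i)  ->  G(z) = sum_i r_i/(1 - e^{p_i T} z^-1),
    # since Z[e^{pt}|_{t=kT}] = 1/(1 - e^{pT} z^-1) (refer to Table 2.1).
    zp = np.exp(np.asarray(poles) * T)
    den = np.poly(zp)                        # 1 + a1 z^-1 + ... + an z^-n
    num = np.zeros(len(zp), dtype=complex)   # combine over the common denominator
    for i, ri in enumerate(residues):
        num += ri * np.poly(np.delete(zp, i))
    return num.real, den.real                # real for conjugate pole pairs

# Example: Ga(s) = 1/(s + 1), T = 0.5  ->  G(z) = 1/(1 - e^{-0.5} z^-1)
num, den = impulse_invariant([1.0], [-1.0], T=0.5)
print(num, den)    # [1.]  [ 1.  -0.6065...]
```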
Example 2.14
With the background on analog design methods, the reader will appreciate the value of being able to
correlate particular patterns in the s-plane with particular features of system behavior. Some of the useful
s-plane patterns, which have been used in analog design, are the loci of points in the s-plane with (i) constant damping ratio ζ, and (ii) constant undamped natural frequency ωn. In this example, we translate these patterns in the primary strip of the s-plane onto the z-plane using the basic relation z = e^{sT}, where
T is some chosen sampling period.
Consider a second-order system with transfer function
Ga(s) = K/(s² + 2ζωn s + ωn²)
Figure 2.38a shows a locus of the characteristic roots, with ζ held constant and ωn varying. Figure 2.38b shows a locus with ωn held constant and ζ varying. The loci in Figs 2.38a and 2.38b correspond to an underdamped second-order system.
Fig. 2.38 Constant-ζ and constant-ωn loci: (a), (b) in the s-plane; (c), (d) their images in the z-plane (constant-ζ locus shown for ζ = 0.6; constant-ωn locus shown for ωn = 6π/10T)
For points on the constant-ωn locus, the real part is σ = −√(ωn² − ωd²), so that

z = e^{(−√(ωn² − ωd²) + jωd)T}
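The translation of these loci is easy to compute numerically. The following small sketch (the values of ζ and T are our own illustrative choices) maps points of the upper half of a constant-ζ locus through z = e^{sT}; the image is a logarithmic spiral inside the unit circle.

```python
import numpy as np

# Map the upper half of a constant-zeta locus onto the z-plane via z = e^{sT}.
zeta, T = 0.6, 0.1
wn_max = np.pi / (T * np.sqrt(1 - zeta**2))   # keeps wd = wn*sqrt(1-zeta^2) <= pi/T
wn = np.linspace(0.0, wn_max, 6)
s = wn * (-zeta + 1j * np.sqrt(1 - zeta**2))  # s = -zeta*wn + j*wd
z = np.exp(s * T)
print(np.round(np.abs(z), 3))                 # |z| = e^{-zeta*wn*T}: shrinking radius
print(np.round(np.degrees(np.angle(z)), 1))   # angle = wd*T, growing to 180 degrees
```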
2.14.2 Step-Invariant Discretization
The basis for impulse invariance is to choose an impulse response for the discrete-time system that is
similar to the impulse response of the analog system. The use of this procedure is often motivated not so
much by a desire to preserve the impulse-response shape, as by the knowledge that if the analog system
is band-limited, then the discrete-time frequency response will closely approximate the continuous-time
frequency response.
In some design problems, a primary objective may be to control some aspect of the time response, such
as the step response. In such cases, a natural approach might be to discretize the continuous-time system
by waveform-invariance criteria. In this subsection, we consider the step-invariant discretization.
The step-invariant discrete-time system is obtained by placing a unit step on the input to the analog
system Ga(s), and a sampled unit step on the input to the discrete-time system. The transfer function G(z)
of the discrete-time system is adjusted, until the output of the discrete-time system represents samples of
the output of the analog system. The input to the analog system Ga(s) is m(t)—a unit-step function. Since
L [m(t)] = 1/s, the output y(t) of the analog system is given by
y(t) = L^{−1}{Ga(s)/s}
Output samples of the discrete-time system are defined to be
y(kT) = L^{−1}{Ga(s)/s}|_{t = kT}
The z-transform of this quantity yields the z-domain output of the discrete-time system. This gives
Y(z) = Z[L^{−1}{Ga(s)/s}|_{t = kT}]    (2.99a)
Since Z[m(k)] = z/(z − 1), where m(k) is the unit-step sequence, the output y(k) of the discrete-time system G(z) is given by

Y(z) = G(z)·z/(z − 1)    (2.99b)
Comparing Eqn. (2.99b) with Eqn. (2.99a), we obtain
G(z) = (1 − z^{−1}) Z[L^{−1}{Ga(s)/s}|_{t = kT}]    (2.100a)

or G(z) = (1 − z^{−1}) Z[Ga(s)/s]    (2.100b)
Notice that Eqn. (2.100b) can be rewritten as follows:
G(z) = Z[((1 − e^{−sT})/s)·Ga(s)]    (2.100c)
This can easily be established.
Let L^{−1}[Ga(s)/s] = g1(t), and Z[g1(kT)] = G1(z)

Then L^{−1}[e^{−sT}·Ga(s)/s] = g1(t − T), and Z[g1(kT − T)] = z^{−1}G1(z)

Therefore,

Z[Ga(s)/s − e^{−sT}·Ga(s)/s] = (1 − z^{−1}) Z[Ga(s)/s]
This establishes the equivalence of Eqns (2.100b) and (2.100c).
The right-hand side of Eqn (2.100c) can be viewed as the z-transform of the analog system Ga (s),
preceded by zero-order hold (ZOH). Introducing a fictitious sampler and ZOH for analytical purposes,
we can use the model of Fig. 2.39 to derive a step-invariant equivalent of analog systems. For obvious
reasons, step-invariant equivalence is also referred to as ZOH equivalence. In the next chapter, we will
use the ZOH equivalence to obtain discrete-time equivalents of the plants of feedback control systems.
Fig. 2.39 Model for deriving the step-invariant (ZOH) equivalent: a fictitious sampler (period T) and zero-order hold (1 − e^{−sT})/s in cascade with Ga(s)
Equivalent discrete-time systems obtained by the step-invariance method may exhibit the frequency folding phenomenon and may, therefore, present the same kind of aliasing errors as found in the impulse-invariance method. Notice, however, that the presence of the 1/s term in Ga(s)/s causes high-frequency attenuation. Consequently, the equivalent discrete-time system obtained by the step-invariance method will exhibit smaller aliasing errors than that obtained by the impulse-invariance method.
As for stability, the equivalent discrete-time system obtained by the step-invariance method is stable if
the original continuous-time system is a stable one (refer to Review Example 6.2).
Example 2.15
Figure 2.40 shows the model of a plant driven by a D/A converter. In the following, we derive the
transfer function model relating y(kT) to r(kT).
Fig. 2.40 Model of a plant driven by a D/A converter
The standard D/A converters are designed in such a way, that the old value of the input sample is held
constant until a new sample arrives. The system of Fig. 2.40 can, therefore, be viewed as an analog
system Ga(s), preceded by zero-order hold, and we can use ZOH equivalence to obtain the transfer
function model relating y(kT) to r(kT).
Zero-order hold equivalent (step-invariant equivalent) of Ga (s) can be determined as follows:
Since (1/s)Ga(s) = 0.5(s + 4)/[s(s + 1)(s + 2)] = 1/s − 1.5/(s + 1) + 0.5/(s + 2)

we have (refer to Table 2.1)

Z[(1/s)Ga(s)] = z/(z − 1) − 1.5z/(z − e^{−T}) + 0.5z/(z − e^{−2T})

From Eqn. (2.100b),

G(z) = [(z − 1)/z][z/(z − 1) − 1.5z/(z − e^{−T}) + 0.5z/(z − e^{−2T})] = 1 − 1.5(z − 1)/(z − e^{−T}) + 0.5(z − 1)/(z − e^{−2T})
Let the sampling frequency be 20 rad/sec, so that

T = 2π/20 = 0.31416 sec; e^{−T} = 0.7304; e^{−2T} = 0.5335
With these values, we get the following step-invariant equivalent of the given analog system:
G(z) = (0.17115z − 0.04535)/(z² − 1.2639z + 0.3897)
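This result can be cross-checked with a standard scientific-computing routine. Below is a sketch using SciPy's cont2discrete with the 'zoh' method, which implements exactly the step-invariance of Eqn. (2.100); the values in the comments are what the computation should reproduce, up to rounding.

```python
from scipy.signal import cont2discrete

# Ga(s) = 0.5(s + 4)/((s + 1)(s + 2)) = (0.5s + 2)/(s^2 + 3s + 2), T = 2*pi/20
num, den = [0.5, 2.0], [1.0, 3.0, 2.0]
numd, dend, _ = cont2discrete((num, den), dt=0.31416, method='zoh')
print(numd)   # ~ [[0, 0.17115, -0.04537]]  ->  0.17115 z - 0.0454
print(dend)   # ~ [1, -1.2639, 0.3897]      ->  z^2 - 1.2639 z + 0.3897
```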
2.14.3 Discretization Based on Finite-Difference Approximation of Derivatives
Another approach to transforming a continuous-time system into a discrete-time one is to approximate
derivatives in a differential equation representation of the continuous-time system by finite differences.
This is a common procedure in digital simulations of analog systems, and is motivated by the intuitive
notion that the derivative of a continuous-time function, can be approximated by the difference between
consecutive samples of the signal to be differentiated. To illustrate the procedure, consider the first-order
differential equation
dy(t)/dt + ay(t) = r(t)    (2.101)
The backward-difference method consists of replacing r(t) by r(k), y(t) by y(k); and the first derivative
dy(t)/dt by the first backward-difference
dy(t)/dt|_{t = kT} = [y(k) − y(k − 1)]/T    (2.102)
This yields the difference equation
[y(k) − y(k − 1)]/T + ay(k) = r(k)    (2.103)
If T is sufficiently small, we would expect the solution y(k) to yield a good approximation to the samples
of y(t).
To interpret the procedure in terms of a mapping of continuous-time function Ga(s) to a discrete-time
function G(z), we apply the Laplace transform to Eqn. (2.101) and z-transform to Eqn. (2.103), to obtain
sY(s) + aY(s) = R(s); so that Ga(s) = Y(s)/R(s) = 1/(s + a)

[(1 − z^{−1})/T]Y(z) + aY(z) = R(z); so that G(z) = Y(z)/R(z) = 1/[(1 − z^{−1})/T + a]
Comparing Ga(s) with G(z), we see that
G(z) = Ga(s)|_{s = (1 − z^{−1})/T}

Therefore, s = (1 − z^{−1})/T; z = 1/(1 − sT)    (2.104)
is a mapping from the s-plane to the z-plane when the backward-difference method is used to discretize
Eqn. (2.101).
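As a quick illustration, the backward-difference recursion (2.103) can be solved for y(k) and iterated directly; the values of a and T below are our own illustrative choices.

```python
import numpy as np

# Solve (y(k) - y(k-1))/T + a*y(k) = r(k) for y(k):
#   y(k) = (y(k-1) + T*r(k)) / (1 + a*T)
a, T, N = 1.0, 0.1, 80
r = np.ones(N)              # unit-step input
y = np.zeros(N)
for k in range(N):
    y_prev = y[k - 1] if k > 0 else 0.0
    y[k] = (y_prev + T * r[k]) / (1 + a * T)
print(round(y[-1], 3))      # ~1.0: approaches 1/a, the analog steady state
```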
The stability region in the s-plane can be mapped by Eqn. (2.104) into the z-plane as follows. Noting
that the stable region in the s-plane is given by Re(s) < 0, the stability region in the z-plane under the
mapping (2.104), becomes
Re[(1 − z^{−1})/T] = Re[(z − 1)/(Tz)] < 0

Writing the complex variable z as α + jβ, we may write the last inequality as

Re[(α + jβ − 1)/(α + jβ)] < 0

or Re[(α + jβ − 1)(α − jβ)/(α² + β²)] = Re[((α² − α + β²) + jβ)/(α² + β²)] = (α² − α + β²)/(α² + β²) < 0

which can be written as

(α − 1/2)² + β² < (1/2)²
The stable region in the s-plane is thus mapped into the interior of a circle with center at α = 1/2, β = 0 and radius equal to 1/2, as shown in Fig. 2.41a.
Fig. 2.41 (a) Image of the left half of the s-plane under the backward-difference mapping (b) warping of the equivalent s-plane poles
The backward-difference method is simple and will produce a stable discrete-time system for a stable
continuous-time system. Also, the entire s-plane imaginary axis is mapped once and only once onto the
small z-plane circle; the folding or aliasing problems do not occur. The penalty is a ‘warping’ of the
equivalent s-plane poles, as shown in Fig. 2.41b. This situation is reflected in the relationship between
the exact z-transformation, and the backward-difference approximation.
Consider a pole in z-plane at z = e jaT. Inverse mapping of this pole to the s-plane, using the transformation
s = ln z/T, gives s = ja (shown in Fig. 2.41b). Inverse mapping of the pole in the z-plane at z = e jaT to the
s-plane, using the backward-difference approximation s = (1– z –1)/T, gives s = jâ = (1 – e –jaT)/T (also
shown in Fig. 2.41b).
Thus a nonlinear relationship or ‘warping’:
jâ = (1 – e –jaT)/T (2.105)
exists between the two poles ja and jâ in the s-plane. Note that for small aT, using the first two terms of
the expansion of the exponential in Eqn. (2.105), yields
jâ ≅ (1/T)[1 − (1 − jaT)] = ja
The ‘warping’ effect is thus negligible for relatively small aT (about 17° or less).
Let us now investigate the behavior of the equivalent discrete-time system when the derivative dy(t)/dt in
Eqn. (2.101), is replaced by forward difference:
dy(t)/dt|_{t = kT} = [y(k + 1) − y(k)]/T
This yields the following difference equation approximation for Eqn. (2.101):
[y(k + 1) − y(k)]/T + ay(k) = r(k)    (2.106)
Applying Laplace transform to Eqn. (2.101) and z-transform to Eqn. (2.106), we obtain
Y(s)/R(s) = Ga(s) = 1/(s + a)    (2.107a)

and Y(z)/R(z) = G(z) = 1/[(z − 1)/T + a]    (2.107b)
The right-hand sides of Eqns (2.107a) and (2.107b) become identical if we let
s = (z − 1)/T    (2.108)
We may consider Eqn. (2.108) to be the mapping from the s-plane to the z-plane, when the forward-
difference method is used to discretize Eqn. (2.101).
One serious problem with the forward-difference approximation method concerns stability. The left half of the s-plane is mapped into the region Re[(z − 1)/T] < 0, or Re(z) < 1. This mapping shows that poles in the left half of the s-plane may be mapped outside the unit circle in the z-plane. Hence the discrete-time system obtained by this method may become unstable.
With the forward rule for integration, the continuous-time system (2.109) is converted to the following
recursive algorithm:
y(k) = y(k – 1) – aTy(k – 1) + Tr (k – 1)
The z-transformation of this equation gives
Y(z) = z–1 Y(z) – aT z–1 Y(z) + T z–1 R(z)
or Y(z)/R(z) = 1/[(z − 1)/T + a]
Fig. 2.42 Rectangular rules for integration: (a) forward; (b) backward
Laplace transformation of Eqn. (2.109a) gives the transfer function of the continuous-time system:
Y(s)/R(s) = 1/(s + a)
The forward rectangular rule for integration thus results in the s-plane to z-plane mapping:
s = (z − 1)/T
which is the same as the one obtained by forward-difference approximation of derivatives (Eqn. (2.108)). Similarly, it can easily be established that the backward rectangular rule for integration results in the s-plane to z-plane mapping which is the same as the one obtained by backward-difference approximation of derivatives (Eqn. (2.104)).
Example 2.16
The simplest formula for the PID or three-mode controller is the addition of the proportional, integral, and
derivative modes:
u(t) = Kc[e(t) + (1/TI)∫₀ᵗ e(τ)dτ + TD·de(t)/dt]    (2.111)
where
u = controller output signal; e = error (controller input) signal; Kc = controller gain;
TI = integral or reset time; TD = derivative or rate time.
For the digital realization of the PID controller, it is necessary to approximate each mode in Eqn. (2.111)
using the sampled values of e(t).
The proportional mode requires no approximation since it is a purely static part:
uP (k) = Kc e(k)
The integral mode may be approximated by the backward rectangular rule for integration. If S(k – 1)
approximates the area under the e(t) curve up to t = (k – 1)T, then the approximation to the area under the
e(t) curve up to t = kT is given by (refer to Eqn. (2.110b))
S(k) = S(k – 1) + Te(k)
A digital realization of the integral mode of control is as follows:
uI(k) = (Kc/TI)S(k)
where S(k) = sum of the areas under the error curve = S(k − 1) + Te(k). The derivative mode may be approximated by the first backward difference, giving
uD(k) = Kc(TD/T)[e(k) − e(k − 1)]
Bringing all the three modes together results in the following PID algorithm:

u(k) = uP(k) + uI(k) + uD(k)
= Kc[e(k) + (1/TI)S(k) + (TD/T)(e(k) − e(k − 1))]    (2.112a)

where S(k) = S(k − 1) + Te(k)    (2.112b)
We can directly use the s-plane to z-plane mapping given by Eqn. (2.104) to obtain the discrete equivalent
(2.112) of the PID controller (2.111).
The PID controller (2.111), expressed in terms of operator s, is given by the input-output relation
U(s) = Kc[1 + 1/(TIs) + TDs]E(s)    (2.113a)
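A direct coding of the position-form algorithm (2.112) is straightforward. The sketch below wraps the running sum S(k) and the previous error in a closure; the gain values passed in are our own illustrative choices, not from the text.

```python
def make_pid(Kc, TI, TD, T):
    """PID algorithm of Eqns (2.112a, b): backward-rectangular integration,
    backward-difference derivative."""
    state = {"S": 0.0, "e_prev": 0.0}
    def pid(e):
        state["S"] += T * e                              # Eqn (2.112b)
        u = Kc * (e + state["S"] / TI
                  + (TD / T) * (e - state["e_prev"]))    # Eqn (2.112a)
        state["e_prev"] = e
        return u
    return pid

pid = make_pid(Kc=2.0, TI=5.0, TD=0.5, T=0.1)
print([round(pid(e), 3) for e in (1.0, 0.8, 0.5)])   # [12.04, -0.328, -1.908]
```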
2.14.4 Discretization Based on Bilinear Transformation
The technique, based on finite-difference approximation to differential equations, for deriving a
discrete-time system from an analog system, has the advantage that z-transform of the discrete-time
system, is trivially derived from the Laplace transform of the analog system by an algebraic substitution.
The disadvantages of these mappings are that jw-axis in the s-plane, generally does not map into the
unit circle in the z-plane, and (for the case of forward-difference method) stable analog systems may not
always map into stable discrete-time systems.
A nonlinear one-to-one mapping from the s-plane to the z-plane, which eliminates the disadvantages mentioned above and preserves the desired algebraic form, is the bilinear transformation8 defined by

s = (2/T)·(z − 1)/(z + 1)    (2.114)
Consider again the first-order system (2.116a): dy(t)/dt + ay(t) = r(t),

or y(t) = y(0) − a∫₀ᵗ y(τ)dτ + ∫₀ᵗ r(τ)dτ    (2.116b)
Laplace transformation of Eqn. (2.116a) gives the transfer function of the continuous-time system.
Y(s)/R(s) = Ga(s) = 1/(s + a)
Applying the bilinear transformation (Eqn. (2.114)) to this transfer function, we obtain

G(z) = 1/[(2/T)·(z − 1)/(z + 1) + a]
In numerical analysis, the procedure known as the trapezoidal rule for integration proceeds by approximating the continuous-time function by trapezoids, as illustrated in Fig. 2.43, and then adding their areas to compute the total integral. We thus approximate the area

∫_{(k−1)T}^{kT} y(τ)dτ by (T/2)[y(k) + y(k − 1)]

Fig. 2.43 Trapezoidal rule for integration
8 The transformation is called bilinear from consideration of its mathematical form.
With this approximation, Eqn. (2.116b) can be converted to the following recursive algorithm:
y(k) = y(k − 1) − (aT/2)[y(k) + y(k − 1)] + (T/2)[r(k) + r(k − 1)]
The z-transformation of this equation gives
Y(z) = z^{−1}Y(z) − (aT/2)[Y(z) + z^{−1}Y(z)] + (T/2)[R(z) + z^{−1}R(z)]

or Y(z)/R(z) = (T/2)(1 + z^{−1})/[(1 − z^{−1}) + (aT/2)(1 + z^{−1})] = 1/[(2/T)·(z − 1)/(z + 1) + a]
This result is identical to the one obtained from the transfer function of the continuous-time system by
bilinear transformation.
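The recursion above is easily simulated; a short sketch (the values of a and T are our own illustrative choices) confirms that the trapezoidal-rule solution settles at the analog steady state.

```python
import numpy as np

# y(k)(1 + aT/2) = y(k-1)(1 - aT/2) + (T/2)(r(k) + r(k-1))
a, T, N = 1.0, 0.1, 80
r = np.ones(N)              # unit-step input
y = np.zeros(N)
for k in range(1, N):
    y[k] = ((1 - a*T/2) * y[k-1] + (T/2) * (r[k] + r[k-1])) / (1 + a*T/2)
print(round(y[-1], 3))      # ~1.0 = 1/a = Ga(0), the steady state for a unit step
```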
The nature of bilinear transformation is best understood from Fig. 2.44, which shows how the s-plane
is mapped onto the z-plane. As seen in the figure, the entire jw-axis in the s-plane, is mapped onto the
unit circle in the z-plane. The left half of the s-plane is mapped inside the unit circle in the z-plane,
and the right half of the s-plane is mapped outside the z-plane unit circle. These properties can easily
be established. Consider, for example, the left half of the s-plane defined by Re(s) < 0. By means of
Eqn. (2.114), this region of the s-plane is mapped onto the z-plane region defined by
Re[(2/T)·(z − 1)/(z + 1)] < 0 or Re[(z − 1)/(z + 1)] < 0
By taking the complex variable z = α + jβ, this inequality becomes

Re[(z − 1)/(z + 1)] = Re[(α + jβ − 1)/(α + jβ + 1)] = Re[(α − 1 + jβ)(α + 1 − jβ)/((α + 1)² + β²)]
= Re[((α² − 1 + β²) + j2β)/((α + 1)² + β²)] < 0
Fig. 2.44 Mapping of the s-plane onto the z-plane by the bilinear transformation: the jω-axis maps onto the unit circle, and the left half of the s-plane onto its interior
which is equivalent to
α² − 1 + β² < 0 or α² + β² < 1
which corresponds to the inside of the unit circle in z-plane. The bilinear transformation thus produces a
stable discrete-time system for a stable continuous-time system.
Since the entire jw-axis of the s-plane is mapped once and only once onto the unit circle in the z-plane,
the aliasing errors inherent with impulse-invariant transformations are eliminated. However, there is
again a warping penalty.
Consider a pole in the z-plane at z = e^{jaT}, shown in Fig. 2.45c. Its inverse mapping into the s-plane, using the transformation s = ln z/T, gives s = ja (shown in Fig. 2.45a). The inverse mapping of the z-plane pole (Fig. 2.45b) at z = e^{jaT} into the s-plane, obtained using the bilinear transformation s = (2/T)·(z − 1)/(z + 1), is also shown in Fig. 2.45a.
Fig. 2.45 The exact map s = ln z/T versus the approximate (bilinear) map s = (2/T)(z − 1)/(z + 1): the z-plane pole at z = e^{jaT} maps exactly to s = ja, but approximately to s = j(2/T)tan(aT/2)
Example 2.17
A method that has been frequently used by practicing engineers to approximate a sampled-data system
by a continuous-time system, relies on the approximation of the sample-and-hold operation by means of
a pure time delay. Consider the sampled-data system of Fig. 2.46a. The sinusoidal steady-state transfer
function of the zero-order hold is (refer to Eqn. (2.90))

Gh0(jω) = (1 − e^{−jωT})/(jω) = T·[sin(ωT/2)/(ωT/2)]·e^{−jωT/2}

For small ωT, the dominant effect of the sample-and-hold is the phase lag ωT/2, which corresponds to a pure time delay of T/2 seconds (Fig. 2.46b).
Fig. 2.46 (a) A sampled-data system with controller D(z), hold Gh0(s), and plant G(s) (b) an approximating continuous-time system, with the sample-and-hold replaced by the delay e^{−sT/2}
which leads to
u(k) = 0.9802 u(k – 1) + 0.3416 e(k) – 0.3218 e(k – 1), where e(k) = r(k) – y(k).
This is the proposed algorithm for the digital control system of Fig. 2.46a. To make sure that with the
proposed design, the system will behave as expected, we must analyze the system response. Methods for
analysis of digital control systems are covered in the next chapter.
In this section, we have presented several methods for obtaining discrete-time equivalents of continuous-time systems. The response between sampling points is different for each discretization method used. Furthermore, none of the equivalent discrete-time systems has complete fidelity: the actual (continuous-time) response between any two consecutive sampling points always differs from the response implied by the equivalent discrete-time system between the same two points, no matter what method of discretization is used.
It is not possible to say which equivalent discrete-time system is best for any given analog system, since
the degree of distortions in transient response and frequency response characteristics, depends on the
sampling frequency, the highest frequency component involved in the system, transportation lag present
in the system, etc. It may be advisable for the designer to try a few alternate forms of the equivalent
discrete-time systems, for the given analog system.
REVIEW EXAMPLES
Note that if the first-order discrete-time system is relaxed before switching on the input r(k) at k = 0, the
initial condition y(–1) = 0 for the model (2.118), and equivalently the initial condition x(0) = 0 for the
model (2.119).
Figure 2.47 shows a simulation diagram for the given discrete-time system.
Fig. 2.47 Simulation diagram for the given discrete-time system
Taking the z-transform of the model (2.118), term by term, we obtain

Y(z) + (1/4)[z^{−1}Y(z) + y(−1)] − (1/8)[z^{−2}Y(z) + z^{−1}y(−1) + y(−2)]
= 3[z^{−1}R(z) + r(−1)] − [z^{−2}R(z) + z^{−1}r(−1) + r(−2)]

Since r(−1) = r(−2) = 0, we have

(1 + (1/4)z^{−1} − (1/8)z^{−2})Y(z) = (3z^{−1} − z^{−2})R(z) + (5/8)z^{−1} − 2

or (z² + (1/4)z − 1/8)Y(z) = (3z − 1)R(z) + (5/8)z − 2z²

Therefore, Y(z) = [(3z − 1)/(z² + (1/4)z − 1/8)]R(z) + (−2z² + (5/8)z)/(z² + (1/4)z − 1/8)

For (refer to Example 2.10)

R(z) = Z[(−1)^k] = z/(z + 1),

Y(z) = z(3z − 1)/[(z² + (1/4)z − 1/8)(z + 1)] + (−2z² + (5/8)z)/(z² + (1/4)z − 1/8)
= (−2z³ + (13/8)z² − (3/8)z)/[(z + 1/2)(z − 1/4)(z + 1)]

Expanding Y(z)/z into partial fractions,

Y(z)/z = (−2z² + (13/8)z − 3/8)/[(z + 1/2)(z − 1/4)(z + 1)] = (9/2)/(z + 1/2) + (−1/10)/(z − 1/4) + (−32/5)/(z + 1)

Then (refer to Table 2.1)

y(k) = [(9/2)(−1/2)^k − (1/10)(1/4)^k − (32/5)(−1)^k]m(k)
Y(z) = [(z^{−1} + (1/2)z^{−2})/(1 − (3/2)z^{−1} + (1/2)z^{−2})]R(z) = [(z + 1/2)/((z − 1/2)(z − 1))]R(z)

The system modes are (1/2)^k and (1)^k; the mode (1/2)^k decays as k increases, and the mode (1)^k is constant.

A1 = (z − 1/2)Y(z)|_{z = 1/2} = 4
A2 = (z − 1)²Y(z)|_{z = 1} = 3
A3 = d/dz[(z − 1)²Y(z)]|_{z = 1} = −4

Therefore,

Y(z) = 4/(z − 1/2) + 3/(z − 1)² + (−4)/(z − 1)

y(k) = 4(1/2)^{k−1} + 3(k − 1) − 4; k ≥ 1
= 4(1/2)^{k−1} + 3k − 7; k ≥ 1
where r(k) = 1 for k even, r(k) = 0 for k odd; y(−1) = r(−1) = 0.

By long division,

z²/(z² + 2z + 1) = 1 − 2z^{−1} + 3z^{−2} − 4z^{−3} + ⋯
Fig. 2.48 (a) Frequencies along the jω-axis of the s-plane (b) their images in the z-plane under z = e^{sT}, T = 0.1 sec; each 1 Hz corresponds to ωT = 0.2π = 36° of arc, and the paths for positive and negative frequencies meet at z = −1 at f = ±5 Hz
We try to sample a 6 Hz sine wave (ω0 = 12π rad/sec). Note that the signal lies outside the primary strip. Consider the mapping of the imaginary axis of the s-plane to the z-plane, as the frequency increases from 0 to 6 Hz. The paths followed as the frequency increases are shown in Fig. 2.48b.
Note that at a frequency of 5 Hz, the two paths meet at z = −1. The 6 Hz (ω0 = 12π) sine wave will appear to be a (10 Hz − 6 Hz) = 4 Hz sine wave. The high frequency ω0 = 12π rad/sec is 'folded in' about the folding frequency π/T = 10π, and appears as the low frequency (ωs − ω0) = (2π/T − ω0) = 8π rad/sec.
The high frequency ω0, which shows up at (ωs − ω0) after sampling, is called the 'alias' of the primary-strip frequency (ωs − ω0). The superimposition of the high-frequency behavior onto the low frequency is known as frequency folding or aliasing.
Take a sine wave of 6 Hz frequency and extract the samples with T = 0.1 sec. Examine the sampled
recording carefully; it has a frequency of 4 Hz.
The phenomenon of aliasing has a clear meaning in time. Two continuous sinusoids of different
frequencies (6 Hz and 4 Hz in the example under consideration) appear at the same frequency when
sampled. We cannot, therefore, distinguish between them, based on their samples alone.
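A two-line numerical check (our own illustrative snippet) makes this concrete; cosines are used so that the aliased samples match exactly, avoiding the phase reversal that a sine at these frequencies would show.

```python
import numpy as np

T = 0.1                                   # sampling interval; ws = 20*pi rad/sec
k = np.arange(20)
x6 = np.cos(2 * np.pi * 6 * k * T)        # 6 Hz signal, outside the primary strip
x4 = np.cos(2 * np.pi * 4 * k * T)        # its 4 Hz alias
print(np.allclose(x6, x4))                # True: the two sample sets are identical
```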
To avoid aliasing, the requirement is that the sampling frequency ωs must be at least twice the highest frequency ωm present in the signal, i.e., ωs > 2ωm. This requirement is formally known as the sampling theorem.
Fig. 2.49 Sampled-data system: a sampler (T = 1 sec) and ZOH (1 − e^{−sT})/s in cascade with Ga(s) = 1/[s(s + 1)]
Solution The discrete-time transfer function of the given system is obtained as follows (refer to
Eqns (2.100)):
Gh0(s) = (1 − e^{−sT})/s, Ga(s) = 1/[s(s + 1)]

Y(z)/R(z) = Z[Gh0(s)Ga(s)] = (1 − z^{−1})Z[Ga(s)/s]
= (1 − z^{−1})Z[1/(s²(s + 1))] = (1 − z^{−1})Z[1/s² − 1/s + 1/(s + 1)]
Using Table 2.1, we obtain
Y(z)/R(z) = (1 − z^{−1})[Tz/(z − 1)² − z/(z − 1) + z/(z − e^{−T})]
= [z(T − 1 + e^{−T}) + (1 − e^{−T} − Te^{−T})]/[(z − 1)(z − e^{−T})]
For T = 1, we have
Y(z)/R(z) = (ze^{−1} + 1 − 2e^{−1})/[(z − 1)(z − e^{−1})]
= (0.3679z + 0.2642)/[(z − 1)(z − 0.3679)] = (0.3679z + 0.2642)/(z² − 1.3679z + 0.3679)
For unit-impulse input, R(z) = 1.
Therefore, Y(z) = (0.3679z + 0.2642)/(z² − 1.3679z + 0.3679)
We can expand Y(z) into a power series by dividing the numerator of Y(z) by its denominator:

Y(z) = 0.3679z^{−1} + 0.7675z^{−2} + 0.9145z^{−3} + ⋯
This calculation yields the response at the sampling instants, and can be carried on as far as needed. In this case, we have obtained y(kT) as follows:
y(0) = 0, y(T) = 0.3679, y(2T) = 0.7675, and y(3T) = 0.9145.
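The long division is equivalent to running the difference equation implied by Y(z), with the numerator coefficients as the only input terms. A sketch of that recursion follows (coefficient lists are in ascending powers of z^{-1}; this is our own illustrative snippet):

```python
import numpy as np

num = [0.0, 0.3679, 0.2642]      # Y(z) numerator: 0 + 0.3679 z^-1 + 0.2642 z^-2
den = [1.0, -1.3679, 0.3679]     # Y(z) denominator
y = np.zeros(4)
for k in range(len(y)):
    acc = num[k] if k < len(num) else 0.0
    for j in range(1, min(k, len(den) - 1) + 1):
        acc -= den[j] * y[k - j]             # one long-division step per sample
    y[k] = acc
print(np.round(y, 4))            # ~[0. 0.3679 0.7675 0.9145], as in the series above
```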
Using the trapezoidal rule for integration and backward-difference approximation for the derivatives,
obtain the difference-equation model of the PID algorithm. Also obtain the transfer function U(z)/E(z).
Solution By the trapezoidal rule for integration, we obtain
∫₀^{kT} e(t)dt ≅ T[(e(0) + e(T))/2 + (e(T) + e(2T))/2 + ⋯ + (e((k − 1)T) + e(kT))/2]
= T Σ_{i=1}^{k} [e((i − 1)T) + e(iT)]/2
By backward-difference approximation for the derivatives (refer to Eqn. (2.102)), we get
de(t)/dt|_{t = kT} = [e(kT) − e((k − 1)T)]/T
Let us now obtain the transfer function model of the PID control algorithm given by Eqn. (2.124).
Define (refer to Fig. 2.50)

f(i) = [e(i − 1) + e(i)]/2; f(0) = 0

Then Σ_{i=1}^{k} [e(i − 1) + e(i)]/2 = Σ_{i=1}^{k} f(i)

Notice that F(z) = Z{[e(i − 1) + e(i)]/2} = [(1 + z^{−1})/2]E(z)

Hence Z{Σ_{i=1}^{k} [e(i − 1) + e(i)]/2} = [(1 + z^{−1})/(2(1 − z^{−1}))]E(z) = [(z + 1)/(2(z − 1))]E(z)
The z-transform of Eqn. (2.124) becomes
U(z) = Kc[1 + (T/(2TI))·(1 + z^{−1})/(1 − z^{−1}) + (TD/T)(1 − z^{−1})]E(z)
= Kc[1 + (T/(2TI))·(z + 1)/(z − 1) + (TD/T)·(z − 1)/z]E(z)    (2.125)
This equation gives the transfer function model of the PID control algorithm. Note that we can obtain
the discrete-time transfer function model (2.125) by expressing the PID controller (2.123) in terms of
operator s and then using the mapping (2.104) for the derivative term, and the mapping (2.114) for the
integral term of the controller.
PROBLEMS
2.1 Consider the signal processing algorithm shown in Fig. P2.1.
(a) Assign the state variables and obtain a state variable model for the system.
(b) Represent the algorithm of Fig. P2.1 by a signal flow graph and from there obtain the transfer
function model of the system using Mason’s gain formula.
Fig. P2.1 Block diagram of the signal processing algorithm (gain blocks 0.368, 0.264, 1.368 and 0.368)
2.2 Consider the signal processing algorithm shown in Fig. P2.2. Represent the algorithm by (a)
difference equation model, (b) a state variable model, and (c) a transfer function model.
Fig. P2.2 Block diagram of the signal processing algorithm (two unit delays; gains −3, −5 and −3)
2.3 Consider the system shown in Fig. P2.3.

Fig. P2.3 Block diagram of the system of Problem 2.3
(a) Obtain the difference equation model and therefrom the transfer function model of the
system.
(b) Find the impulse response of the system.
(c) Find the response of the system to unit-step input m(k).
2.4 A filter often used as part of computer-controlled algorithms is shown in Fig. P2.4 (b and a are
real constants).
(a) Find the impulse response to an impulse of strength A.
(b) Find the step response to a step of strength A.
(c) Find the ramp response to a ramp function with a slope A.
(d) Find the response to sinusoidal input A cos(Ωk).
Fig. P2.4 First-order filter of Problem 2.4 (parameters a and b; one unit delay)
G(z) = (3z⁴ + 2z³ − z² + 4z + 5)/(z⁴ + 0.5z³ − 0.2z² + z + 0.4)
2.15 Figure P2.15 shows the input-output description of a D/A converter. The converter is designed in such a way that the old value of the input sample is held constant until a new sample arrives. Treating each sample of the sequence r(kT) as an impulse function of strength equal to the value of the sample, the system of Fig. P2.15 becomes a continuous-time system. Determine the transfer function model of the system.

Fig. P2.15 D/A converter with input r(kT) and output y(t)
2.16 (a) State and prove the sampling theorem.
(b) Given: E(s) = 10(s + 2)/[s²(s² + 2s + 2)]
Based upon the sampling theorem, determine the maximum value of the sampling interval T
that can be used to enable us to reconstruct e(t) from its samples.
(c) Consider a system with sampling frequency 50 rad/sec. A noise signal cos 50t enters into the
system. Show that it can cause a dc component in the system output.
2.17 Draw the magnitude and phase curves of the zero-order hold, and compare these curves with those
of the ideal low-pass filter.
2.18 Consider a signal f(t), which has discrete values f(kT) at the sampling rate 1/T. If the signal f(t) is imagined to be impulse sampled at the same rate, it becomes

f*(t) = Σ_{k=0}^{∞} f(kT)δ(t − kT)

(b) Determine F(z)|_{z = e^{sT}} in terms of F(s). Using this result, explain the relationship between the z-plane and the s-plane.
2.19 Figure P2.19 shows two root paths in the s-plane:

Fig. P2.19 Two root paths in the s-plane, each running between −jωs/2 and jωs/2
2.26 For a plant 1.57/[s(s + 1)], we are required to design a digital controller so that the closed-loop
system acquires a damping ratio of 0.45 without loss of steady-state accuracy. The sampling
period T = 1.57 sec. The following design procedure may be followed:
(i) First we design the analog controller D(s) defined in Fig. P2.26. The transfer function Gh(s) has been inserted in the analog control loop, to take into account the effect of the hold that must be included in the equivalent digital control system. Verify that D(s) = (25s + 1)/(62.5s + 1) meets the design requirements.
(ii) Discretize D(s) using bilinear transformation.
Fig. P2.26 Analog control loop with controller D(s), hold approximation Gh(s), and plant
2.27 A PID controller is described by the following relation between input e(t) and output u(t):
U(s) = Kc[1 + 1/(TIs) + TDs]E(s)
(a) Derive the PID algorithm using the s-plane to z-plane maps—bilinear transformation for
integration and backward-difference approximation for the derivatives.
(b) Convert the transfer function model of the PID controller obtained in step (a) into a difference
equation model.
2.28 Derive difference equation models for the numerical solution of the following differential equation
using (a) the backward rectangular rule for integration, and (b) the forward rectangular rule for
integration:
ẏ (t) + ay(t) = r(t); y(0) = y0
2.29 Consider the second-order system
ÿ + aẏ + by = 0; y(0) = α, ẏ(0) = β
Approximate this equation with a second-order difference equation for computer solution. Use backward-difference approximation for the derivatives.
Chapter 3
Models of Digital Control
Devices and Systems
3.1 INTRODUCTION
Now that we have developed the prerequisite signal processing techniques in Chapter 2, we can use them
to study closed-loop digital control systems. A typical topology of the type of systems to be considered
in this chapter, is shown in Fig. 3.1.
Fig. 3.1 A typical digital control system
Digital control systems with analog sensors include an analog prefilter between the sensor and the sampler
(A/D converter) as an anti-aliasing device. The prefilters are low-pass, and the simplest transfer function
is

Hpf(s) = a/(s + a)
so that the noise above the prefilter breakpoint a is attenuated. The design goal is to provide enough attenuation at half the sample rate (ωs/2) so that the noise above ωs/2, when aliased into lower frequencies by the sampler, will not be detrimental to the control-system performance.
Since the phase lag from the prefilter can significantly affect system stability, it is required that the control
design be carried out with the analog prefilter included in the loop transfer function. An alternative design
procedure is to select the breakpoint and ωs sufficiently higher than the system bandwidth, so that the
phase lag from the prefilter does not significantly alter the system stability, and thus the prefilter design
problem can be divorced from the control-law design problem. Our treatment of the subject is based on
this alternative design procedure. We, therefore, ignore the prefilter design and focus on the basic control-
system design. The basic configuration for this design problem is shown in Fig. 3.2, where
G(s) = transfer function of the controlled plant (continuous-time system);
H(s) = transfer function of the analog sensor; and
D(z) = transfer function of the digital control algorithm.
Fig. 3.2 Basic configuration of a digital control system
The analog and digital parts of the system are connected through D/A and A/D converters. The computer,
with its internal clock, drives the D/A and A/D converters. It compares the command signal r(k) with
the feedback signal b(k) and generates the control signal u(k), to be sent to the final control elements of
the controlled plant. These signals are computed from the digital control algorithm D(z), stored in the
memory of the computer.
There are two different approaches for the design of digital algorithms.
(i) Discretization of Analog Design The controller design is done in the s-domain using analog
design methods.1 The resulting analog control law is then converted to discrete-time form, using
one of the approximation techniques given in Section 2.14.
(ii) Direct Digital Design In this approach, we first develop the discrete-time model of the analog
part of the loop—from C to A in Fig. 3.2—that includes the controlled plant. The controller design
is then performed using discrete-time analysis.
An actual design process is often a combination of the two methods. First iteration to a digital design
can be obtained using discretization of an analog design. Then the result is tuned up using direct digital
analysis and design.
The intent of this chapter is to provide basic tools for the analysis and design of a control system that is to
be implemented using a computer. Mathematical models of commonly used digital control devices and
systems are developed. Different ways to implement digital controllers (obtained by the discretization of an analog design (Section 2.14), or by direct digital design (Chapter 4)) are also given in this chapter.
1 Chapters 7–10 of reference [155].
3.2 z-DOMAIN DESCRIPTION OF SAMPLED-DATA SYSTEMS
f*(t) = Σ_{k=0}^{∞} f(k)δ(t − kT)    (3.2)
The sampler of Fig. 3.3a can thus be viewed as an 'impulse modulator', with the unit-impulse train δT(t) = Σ_{k=0}^{∞} δ(t − kT) as the carrier signal.
A simple model of a D/A converter is shown in Fig. 3.4a. A sequence of numbers f (k), k = 0, 1, 2, ...,
is the input, and the continuous-time function f + (t), t ≥ 0 is the output. The following relation holds
between input and output:
f +(t) = f (k); kT £ t < (k + 1)T (3.4)
Each sample of the sequence f(k) may be treated as an impulse function of the form f(k) d(t – kT). The
Zero-Order Hold (ZOH) of Fig. 3.4a can thus be viewed as a linear time-invariant system that converts
the impulse f (k)d (t – kT) into a pulse of height f (k) and width T. The D/A converter may, therefore, be
modeled by Fig. 3.4b, where the ZOH is a system whose response to a unit impulse d (t), is a unit pulse
gh0(t). Therefore,

Gh0(s) = L[gh0(t)] = (1 − e^{−sT})/s    (3.5)

Fig. 3.4 (a) A simple model of the D/A converter (b) its ZOH model: the impulse train f*(t) drives Gh0(s) = (1 − e^{−sT})/s to produce f⁺(t)

Figure 3.5 illustrates a typical example of an interconnection of discrete-time and continuous-time systems. In order to analyze such a system, it is often convenient to represent the continuous-time system, together with the ZOH, by an equivalent discrete-time system.
We assume that the continuous-time system of Fig. 3.5 is a linear system with the transfer function
G(s). A block diagram model of the equivalent discrete-time system is shown in Fig. 3.6a. As seen
from this figure, the impulse modulated signal u*(t) is applied to two s-domain transfer functions in
tandem. Since the two blocks with transfer functions Gh0(s) and G(s) are not separated by an impulse
modulator, we can consider them as a single block with transfer function [Gh0(s)G(s)], as shown in
Fig. 3.6b. The continuous-time system with transfer function [Gh0(s)G(s)] has input u*(t) and output y(t).
The output signal y(t) is read off at discrete synchronous sampling instants kT; k = 0, 1, ..., by means of
a mathematical sampler T(M).
We assume that ĝ (t) is the impulse response of the continuous-time system Gh0(s)G(s):
ĝ (t) = L –1[Gh0(s)G(s)] (3.6)
The input signal to the system is given by (refer to Eqn. (3.2))

u*(t) = Σ_{k=0}^{∞} u(kT)δ(t − kT)    (3.7)

Fig. 3.5 A ZOH and continuous-time system driven by a discrete-time signal
This is a sequence of impulses with intensities given by u (kT). Since ĝ (t) is the impulse response of the
system (response to the input d (t)), by superposition from Eqn. (3.7),
Fig. 3.8 (a) Computer-controlled system: the control algorithm D(z) drives the plant G(s) through a D/A converter, and the plant output y(t) is fed back through an A/D converter (b) its equivalent sampled representation
Figure 3.9 gives the z-domain equivalent of Fig. 3.8. Having become familiar with the technique, from now onwards we may directly write z-domain relationships, without introducing impulse modulators in block diagrams of sampled-data systems.

Fig. 3.9 z-domain equivalent of Fig. 3.8: E(z) = R(z) − Y(z) drives D(z), whose output U(z) drives Gh0G(z)
Consider the sampled-data feedback system of Fig. 3.10 where the sensor dynamics is represented by
transfer function H(s). The following equations easily follow:
E(z) = R(z) – B(z) (3.16a)
U(z) = D(z) E(z) (3.16b)
Y(z) = Gh0G(z) U(z) = Z [Gh0(s)G(s)] U(z) (3.16c)
B(z) = Gh0GH(z)U(z) = Z [Gh0(s)G(s)H(s)]U(z) (3.16d)
Equations (3.16a), (3.16b) and (3.16d) give
E(z)/R(z) = 1/[1 + D(z)Gh0GH(z)]    (3.17)
Fig. 3.10 Sampled-data feedback system with sensor dynamics H(s)
Combining Eqns (3.16b), (3.16c) and (3.17), we get
Y(z)/R(z) = D(z)Gh0G(z)/[1 + D(z)Gh0GH(z)]    (3.18)
Figure 3.11 illustrates a phenomenon that we have not yet encountered. When an input signal is acted upon
by a dynamic element before being sampled, it is impossible to obtain a transfer function for the system.
The system in Fig. 3.11 differs from that in Fig. 3.10, in that the analog error e(t) is first amplified before
being converted to digital form for the control computer. The amplifier’s dynamics are given by G1(s).
Fig. 3.11 Feedback system in which the error is processed by G1(s) before sampling
Consider first the subsystem shown in Fig. 3.12a. We can equivalently represent it as a block [G1(s)E(s)]
with input d (t), as in Fig. 3.12b. Now the input, and therefore, the output, does not change by imagining
a fictitious impulse modulator through which d(t) is applied to [G1(s)E(s)] as in Fig. 3.12c.
On application of Eqn. (3.12), we can write
E1(z) = Z [G1(s) E(s)] Z [d (k)] = Z [G1(s)E(s)] (3.19)
Now, for the system of Fig. 3.11,
E(s) = R(s) – B(s) = R(s) – H(s)Y(s)
= R(s) – H(s) Gh0(s) G(s) U*(s) (3.20)
Therefore, from Eqns (3.19) and (3.20), we obtain
E1(z) = Z [G1(s)R(s)] – Z [G1(s)H(s)Gh0(s)G(s)]U(z)
= G1R(z) – G1Gh0GH(z) U(z) (3.21)
Since U(z) = D(z) E1(z),
Fig. 3.12 Equivalent representations of a dynamic element followed by a sampler
Example 3.1
Consider the sampled-data system shown in Fig. 3.13a. From the block diagram, we obtain (refer to
Eqn. (3.15))
Y(z)/R(z) = Gh0G(z)/[1 + Gh0G(z)]    (3.23)
Figure 3.13b gives the z-domain equivalent of Fig. 3.13a. The forward path transfer function:
Gh0G(z) = Z[Gh0(s)G(s)]
= (1 − z^{−1})Z[G(s)/s] = (1 − z^{−1})Z[1/(s²(s + 1))]
Fig. 3.13 (a) Sampled-data feedback system with hold Gh0(s) and plant G(s) = 1/[s(s + 1)], T = 1 sec (b) its z-domain equivalent with forward-path transfer function Gh0G(z)
= (1 − z^{−1})Z[1/s² − 1/s + 1/(s + 1)] = (1 − z^{−1})[Tz/(z − 1)² − z/(z − 1) + z/(z − e^{−T})]
= [z(T − 1 + e^{−T}) + (1 − e^{−T} − Te^{−T})]/[(z − 1)(z − e^{−T})]
When T = 1, we have

Gh0G(z) = (ze^{−1} + 1 − 2e^{−1})/[(z − 1)(z − e^{−1})] = (0.3679z + 0.2642)/(z² − 1.3679z + 0.3679)
Substituting in Eqn. (3.23), we obtain
Y(z)/R(z) = (0.3679z + 0.2642)/(z² − z + 0.6321)
For a unit-step input,
R(z) = z/(z − 1)
and therefore,

Y(z) = z(0.3679z + 0.2642)/[(z − 1)(z² − z + 0.6321)] = (0.3679z² + 0.2642z)/(z³ − 2z² + 1.6321z − 0.6321)
By long-division process, we get
Y(z) = 0.3679z^{−1} + z^{−2} + 1.3996z^{−3} + 1.3996z^{−4} + 1.1469z^{−5} + 0.8944z^{−6} + 0.8015z^{−7} + ⋯
Therefore, the sequence y(kT), k = 1, 2, ..., is
y(kT) = {0.3679, 1, 1.3996, 1.3996, 1.1469, 0.8944, 0.8015, ...}
Note that the final value of y(kT) is (refer to Eqn. (2.52))

lim_{k→∞} y(kT) = lim_{z→1} (z − 1)Y(z) = (0.3679 + 0.2642)/0.6321 = 1
The peak overshoot of the sampled-data system is 45%, in contrast to 17% for the continuous-time system. The performance of the digital system is, thus, dependent on the sampling period T. Larger sampling periods usually give rise to higher overshoots in the step response, and may eventually cause instability if the sampling period is too large.

Fig. 3.14 Unit-step response y(t) of the sampled-data system of Example 3.1
Example 3.2
Let us compare the stability properties of the system shown in Fig. 3.15, with and without a sample-and-
hold on the error signal.
Without sample-and-hold, the system in Fig. 3.15 has the transfer function
Y(s)/R(s) = K/(s² + 2s + K)
This system is stable for all values of K > 0.
Fig. 3.15 Feedback system of Example 3.2, with and without sample-and-hold on the error signal
For the system with sample-and-hold, the forward-path transfer function is given by

Gh0G(z) = (1 − z^{−1})Z[K/(s²(s + 2))]
= (1 − z^{−1})Z[(K/2)(1/s² − (1/2)/s + (1/2)/(s + 2))]
= (K/2)(1 − z^{−1})[Tz/(z − 1)² − (1/2)·z/(z − 1) + (1/2)·z/(z − e^{−2T})]
= (K/2)·[2T(z − e^{−2T}) − (z − 1)(1 − e^{−2T})]/[2(z − 1)(z − e^{−2T})]
3.3 SYSTEMS WITH DEAD-TIME
Figure 3.16 is the block diagram of a computer-controlled continuous-time system with dead-time. We
assume that the continuous-time system is described by transfer function of the form
Gp(s) = G(s) e -tD s (3.24)
where tD is the dead-time, and G(s) contains no dead-time.
The equivalent discrete-time system, shown by dotted lines in Fig. 3.16, is described by the model
Y(z)/U(z) = Z[Gh0(s)Gp(s)] = Gh0Gp(z)    (3.25a)
= (1 − z^{−1})Z[(1/s)e^{−tDs}G(s)]    (3.25b)
Fig. 3.16 Computer-controlled system with dead-time; the dotted portion is the equivalent discrete-time system
= (1 − z^{−1})z^{−N}·(1/a)·Z[e^{−ΔTs}/s − e^{−ΔTs}/(s + a)]    (3.28)

Now L^{−1}[e^{−ΔTs}/s] = g1(t) = m(t − ΔT); L^{−1}[e^{−ΔTs}/(s + a)] = g2(t) = e^{−a(t − ΔT)}m(t − ΔT)
where m(t) is a unit-step function.

Therefore, g1(kT) = m(kT − ΔT); g2(kT) = e^{−a(kT − ΔT)}m(kT − ΔT)

Z[g2(kT)] = Σ_{k=0}^{∞} g2(kT)z^{−k} = e^{−a(T − ΔT)}z^{−1} + e^{−a(2T − ΔT)}z^{−2} + e^{−a(3T − ΔT)}z^{−3} + ⋯
We introduce a parameter m, such that
m = 1 − Δ

Then Z[g2(kT)] = e^{−amT}z^{−1} + e^{−amT}e^{−aT}z^{−2} + e^{−amT}e^{−2aT}z^{−3} + ⋯
= e^{−amT}z^{−1}[1 + e^{−aT}z^{−1} + e^{−2aT}z^{−2} + ⋯]
= e^{−amT}z^{−1}·1/(1 − e^{−aT}z^{−1})
= e^{−amT}/(z − e^{−aT})    (3.30)
Substituting the z-transform results given by Eqns (3.29) and (3.30) in Eqn. (3.28), we get

Y(z)/U(z) = Gh0Gp(z) = (1 − z^{−1})z^{−N}·(1/a)[1/(z − 1) − e^{−amT}/(z − e^{−aT})]
= (1/a)·[(1 − e^{−amT})z + (e^{−amT} − e^{−aT})]/[z^{N+1}(z − e^{−aT})]    (3.31)
Table 3.1 has been generated by applying the procedure outlined above, to commonly occurring functions.
Table 3.1

    Ga(s)                    Z[Ga(s)]
    e^{−ΔTs}/s               1/(z − 1)
    e^{−ΔTs}/s²              mT/(z − 1) + T/(z − 1)²
    2e^{−ΔTs}/s³             T²[m²z² + (2m − 2m² + 1)z + (m − 1)²]/(z − 1)³
    e^{−ΔTs}/(s + a)         e^{−amT}/(z − e^{−aT})
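Equation (3.31) is simple enough to code directly. The sketch below (the function name is ours) returns numerator and denominator coefficients of Gh0Gp(z) for Gp(s) = e^{−tDs}/(s + a); with a = 1, T = 1, tD = 1.5 it reproduces the coefficients that will appear in Eqn. (3.36) of Example 3.3 below.

```python
import numpy as np

def zoh_deadtime(a, T, tD):
    """ZOH equivalent of Gp(s) = exp(-tD*s)/(s + a), per Eqn (3.31):
    tD = (N + Delta)T with 0 <= Delta < 1, and m = 1 - Delta."""
    N = int(tD // T)
    m = 1.0 - (tD / T - N)
    num = np.array([1 - np.exp(-a*m*T), np.exp(-a*m*T) - np.exp(-a*T)]) / a
    den = np.concatenate(([1.0, -np.exp(-a*T)], np.zeros(N + 1)))  # z^(N+1)(z - e^{-aT})
    return num, den

num, den = zoh_deadtime(a=1.0, T=1.0, tD=1.5)
print(np.round(num, 4))   # [0.3935 0.2387]  ->  0.3935(z + 0.6066)
print(np.round(den, 4))   # [ 1. -0.3679  0.  0.]  ->  z^3 - 0.3679 z^2
```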
Example 3.3
The scheme of Fig. 3.17 produces a steady-stream flow of fluid with controlled temperature θ. A stream of hot fluid is continuously mixed with a stream of cold fluid, in a mixing valve. The valve characteristic is such that the total flow rate Q (m³/sec) through it is maintained constant, but the inflow qi (m³/sec) of hot fluid may be linearly varied by controlling the valve stem position x. The valve stem position x thus controls the temperature θi (ºC) of the outflow from the mixing valve. Due to the distance between the valve and the point of discharge into the tank, there is a time delay between the change in θi and the discharge of the flow with the changed temperature, into the tank.
Fig. 3.17 Temperature control of a fluid stream through a mixing valve
The differential equation governing the tank temperature is (assuming an initial equilibrium and taking
all variables as perturbations)
Vρc·dθ/dt = Qρc(θid − θ)    (3.32)
where
θ = tank fluid temperature, ºC = temperature of the outflowing fluid from the tank;
c = specific heat of the fluid, Joules/(kg)(ºC);
V = volume of the fluid in the tank, m³;
ρ = fluid density, kg/m³;
Q = fluid flow rate, m³/sec; and
θid = temperature of the fluid entering the tank, ºC.
The temperature θid at the input to the tank at time t, however, is the mixing valve output temperature tD seconds in the past, which may be expressed as

θid(t) = θi(t − tD)    (3.33)
To form the discrete-time transfer function of Gp(s) preceded by a zero-order hold, we must compute
Gh0Gp(z) = Z[((1 − e^{−sT})/s)·(ae^{−tDs}/(s + a))]
= (1 − z^{−1})Z[(a/(s(s + a)))e^{−tDs}]    (3.35)
For the specific values of tD = 1.5, T = 1, a = 1, Eqn. (3.35) reduces to
Gh0Gp(z) = (1 − z^{−1})Z[(1/(s(s + 1)))e^{−s}e^{−0.5s}]
= (1 − z^{−1})z^{−1}Z[(1/(s(s + 1)))e^{−0.5s}]
Using transform pairs of Table 3.1, we obtain

Gh0Gp(z) = (1 − z^{−1})z^{−1}·[(1 − e^{−0.5})z + (e^{−0.5} − e^{−1})]/[(z − 1)(z − e^{−1})]
= 0.3935(z + 0.6066)/[z²(z − 0.3679)] = θ(z)/θi(z)    (3.36)
The relationship between x and θi is linear, as is seen below.

(Q̄i + qi)ρcθH + [Q − (Q̄i + qi)]ρcθC = Qρc(θ̄i + θi)

where θH and θC are the constant temperatures of the hot and cold streams, respectively;

qi = Kv x

where Kv is the valve gain.
The perturbation equation is obtained as (neglecting second-order terms in perturbation variables)

Kv(θH − θC)x(t) = Qθi(t)
or x(t) = Kθi(t); K = Q/[Kv(θH − θC)]

Therefore,

θ(z)/X(z) = (0.3935/K)(z + 0.6066)/[z²(z − 0.3679)]
3.4 IMPLEMENTATION OF DIGITAL CONTROLLERS
The application of conventional 8- and 16-bit microprocessors to control systems, is now well established.
Such processors have general-purpose architectures which make them applicable for a wide range of tasks,
though none are remarkably efficient. In control applications, such devices may pose problems such as
inadequate speed, difficulties with numerical manipulation, and relatively high cost for the complete system;
the latter being due to both the programming effort and the cost of the peripheral hardware (memories,
I/O ports, timers/counters, A/D converters, D/A converters, PWM circuit, etc.).
In applications requiring small amounts of program ROM, data RAM, and I/O ports, single-chip
microcontrollers are ideally suited. In these chips, the capabilities in terms of speed of computation, on-
chip resources, and software facilities are optimized for control applications. Should the on-chip features
be insufficient to meet control requirements, the microcontroller chips allow for easy expansion.
The Intel microcontroller family (MCS-48 group, MCS-51 group, MCS-96 group) includes 8- and 16-bit
processors with the following on-chip resources—ROM, RAM, I/O lines, timer/counter, A/D converter,
and PWM output. The Motorola microcontroller family (HC 05 group, HC 11 group, HC 16 group) also
provides microcontroller chips with similar features.
In many application areas, processing requirements for digital control systems, such as execution time
and algorithm complexity, have increased dramatically. For example, in motor control, short sampling
time constraints can place exacting requirements on algorithm execution time. New airframe designs
and extended aircraft performance envelopes, increase the complexity of flight control laws. Controller
complexity also increases with number of interacting loops (e.g., in robotics), or the number of sensors
(e.g., in vision systems). For a growing number of real-time control applications, conventional single-
processor systems are unable to satisfy the new demands for increased speed and greater complexity and
flexibility.
The dramatic advances in VLSI technology leading to high transistor packing densities have enabled
computer architects to develop parallel-processing architectures consisting of multiple processors; thus
realizing high-performance computing engines at relatively low cost. The control engineer can exploit a
range of architectures for a variety of functions.
Parallel-processing speeds up the execution time for a task. This is achieved by dividing the problem into
several subtasks, and allocating multiple processors to execute multiple subtasks simultaneously. Parallel
architectures differ from one another in respect of nature of interconnectivity between the processing
elements and the processing power of each individual processing element.
The transputer is a family of single-chip computers, which incorporates features to support parallel
processing. It is possible to use a network of transputers to reduce the execution time of a real-time
control law.
Digital signal processors (DSPs) offer an alternative strategy for implementation of digital controllers.
They use architectures and dedicated arithmetic circuits, that provide high resolution and high speed
arithmetic, making them ideally suited for use as controllers.
Many DSP chips, available commercially, can be applied to a wide range of control problems. The Texas
Instruments TMS 320 family provides several beneficial features through its architecture, speed, and
instruction set.
TMS 320 is designed to support both numeric-intensive operations, such as required in signal processing,
and also general-purpose computation, as would be required in high-speed control. It uses a modified Harvard architecture, which gives it speed and flexibility: the program and data memory are allotted separate sections on the chip, permitting a full overlap of the instruction fetch and execution cycle. The processor
also uses hardware to implement functions which had previously been achieved using software. As a
result, a multiplication takes only 200 nsec, i.e., one instruction cycle, to execute. Extra hardware has
also been included to implement shifting and some other functions. This gives the design engineer the
type of power previously unavailable on a single chip.
Implementation of a control algorithm on a computer consists of the following two steps:
(i) Block diagram realization of the transfer function (obtained by the discretization of analog controller
(Section 2.14), or by the direct digital design (Chapter 4) that represents the control algorithm.
(ii) Software design based on the block diagram realization.
In the following, we present several different structures of block diagram realizations of digital controllers
using delay elements, adders, and multipliers. Different realizations are equivalent from the input-output
point of view if we assume that the calculations are done with infinite precision. With finite precision in
the calculations, the choice of the realization is very important. A bad choice of the realization may give
a controller that is very sensitive to errors in the computations.
Assume that we want to realize the controller

D(z) = U(z)/E(z) = (β0z^n + β1z^{n−1} + ⋯ + β_{n−1}z + βn)/(z^n + α1z^{n−1} + ⋯ + α_{n−1}z + αn)    (3.37a)

where the αi's and βi's are real coefficients (some of them may be zero).
Transfer functions of all digital controllers can be rearranged in this form. For example, the transfer function of the PID controller, given by Eqn. (2.113b), can be rearranged as follows:

D(z) = U(z)/E(z) = Kc[1 + (T/TI)·1/(1 − z^{−1}) + (TD/T)(1 − z^{−1})]
= Kc + (KcT/TI)·1/(1 − z^{−1}) + (KcTD/T)(1 − z^{−1})
= (β0z² + β1z + β2)/(z² + α1z + α2)

where
α1 = −1; α2 = 0
β0 = Kc(1 + T/TI + TD/T); β1 = −Kc(1 + 2TD/T); β2 = KcTD/T
We shall now discuss different ways of realizing the transfer function (3.37a), or equivalently, the transfer function

D(z) = U(z)/E(z) = (β0 + β1z^{−1} + β2z^{−2} + ⋯ + β_{n−1}z^{−(n−1)} + βnz^{−n})/(1 + α1z^{−1} + α2z^{−2} + ⋯ + α_{n−1}z^{−(n−1)} + αnz^{−n})    (3.37b)
The methods for realizing digital systems of the form (3.37) can be divided into two classes—recursive
and nonrecursive. The functional relation between the input sequence e(k) and the output sequence u(k)
for a recursive realization has the form
u(k) = f (u(k – 1), u(k – 2),..., e(k), e(k – 1), ...) (3.38)
For the linear time-invariant system of Eqn. (3.37b), the recursive realization has the form
u(k) = −α1u(k − 1) − α2u(k − 2) − ⋯ − αnu(k − n) + β0e(k) + β1e(k − 1) + ⋯ + βne(k − n)    (3.39)
The current output sample u(k) is a function of past outputs and present and past input samples. Due to
the recursive nature, the errors in previous outputs may accumulate.
The impulse response of the digital system defined by Eqn. (3.39), where we assume that not all αi's are zero, has an infinite number of nonzero samples, although their magnitudes may become negligibly small as k increases. This type of digital system is called an Infinite Impulse Response (IIR) system.
The input-output relation for a nonrecursive realization is of the form
u(k) = f (e(k), e(k – 1), ...) (3.40a)
For a linear time-invariant system, this relation takes the form
u(k) = β0e(k) + β1e(k − 1) + β2e(k − 2) + ⋯ + βNe(k − N)    (3.40b)
The current output sample u(k) is a function only of the present and past values of the input.
The impulse response of the digital system defined by Eqn. (3.40b), is limited to a finite number of
samples defined over a finite range of time intervals, i.e., the impulse response sequence is finite. This
type of digital system is called a finite impulse response (FIR) system.
The digital controller given by Eqn. (3.37b) is obviously an FIR digital system when the coefficients αi are all zero. When not all αi's are zero, we can obtain an FIR approximation of the digital system by dividing its numerator by the denominator, and truncating the series at z^{−N}; N ≥ n:

U(z)/E(z) = D(z) ≅ a0 + a1z^{−1} + a2z^{−2} + ⋯ + aNz^{−N}; N ≥ n    (3.41)
Notice that we may require a large value of N to obtain a good level of accuracy.
In the following sections, we discuss the most common types of recursive and nonrecursive realizations
of digital controllers of the form (3.37).
3.4.1 Recursive Realization
The transfer function (3.37) represents an nth-order system. Recursive realization of this transfer function
will require at least n unit delayers. Each unit delayer will represent a first-order dynamic system. Each
of the three recursive realization structures given below, uses the minimum number (n) of delay elements
in realizing the transfer function (3.37).
Let us multiply the numerator and denominator of the right-hand side of Eqn. (3.37b) by a variable X(z).
This operation gives
Models of Digital Control Devices and Systems 143
\frac{U(z)}{E(z)} = \frac{(b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_{n-1} z^{-(n-1)} + b_n z^{-n})\,X(z)}{(1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_{n-1} z^{-(n-1)} + a_n z^{-n})\,X(z)}    (3.42)
Equating the numerators on both sides of this equation gives
U(z) = (b_0 + b_1 z^{-1} + \cdots + b_n z^{-n})\,X(z)    (3.43a)
Similarly, equating the denominators gives
E(z) = (1 + a_1 z^{-1} + \cdots + a_n z^{-n})\,X(z)    (3.43b)
In order to construct a block diagram for realization, Eqn. (3.43b) must first be written in a cause-and-
effect relation. Solving for X (z) in Eqn. (3.43b) gives
X(z) = E(z) - a_1 z^{-1} X(z) - \cdots - a_n z^{-n} X(z)    (3.43c)
A block diagram portraying Eqns (3.43a) and (3.43c) is now drawn in Fig. 3.18 for n = 3. Notice that we
use only three delay elements. The coefficients ai and bi (which are real quantities) appear as multipliers.
The block diagram schemes where the coefficients ai and bi appear directly as multipliers are called
direct structures.
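As a sketch (our own, with assumed names), the direct structure of Fig. 3.18 can be coded with a single delay line holding past values of the intermediate variable x(k) of Eqns (3.43a) and (3.43c):

# Direct structure of Fig. 3.18: n delay elements store x(k-1), ..., x(k-n).
def direct_structure(a, b):
    n = len(a)                    # a = [a1, ..., an]; b = [b0, ..., bn]
    x_past = [0.0] * n

    def step(e_k):
        # Eqn. (3.43c): x(k) = e(k) - a1 x(k-1) - ... - an x(k-n)
        x_k = e_k - sum(a[i] * x_past[i] for i in range(n))
        # Eqn. (3.43a): u(k) = b0 x(k) + b1 x(k-1) + ... + bn x(k-n)
        u_k = b[0] * x_k + sum(b[i + 1] * x_past[i] for i in range(n))
        x_past.insert(0, x_k); x_past.pop()
        return u_k

    return step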
Basically, there are three sources of error that affect the accuracy of a realization (Section 2.1):
(i) the error due to the quantization of the input signal into a finite number of discrete levels;
(ii) the error due to accumulation of round-off errors in the arithmetic operations in the digital system;
and
(iii) the error due to quantization of the coefficients a i and b i of the transfer function. This error may
become large for higher-order transfer functions. That is, in a higher-order digital controller in
direct structure, small errors in the coefficients a i and b i cause large errors in the locations of the
poles and zeros of the controller (refer to Review Example 3.3).
Fig. 3.18 Direct structure realization of D(z) for n = 3 (multipliers b_0, b_1, b_2, b_3 and a_1, a_2, a_3)
These three errors arise because of the practical limitations of the number of bits that represent various
signal samples and coefficients. The third type of error listed above may be reduced by mathematically
decomposing a higher-order transfer function into a combination of lower-order transfer functions. In
this way, the system may be made less sensitive to coefficient inaccuracies.
For decomposing higher-order transfer functions in order to reduce the coefficient sensitivity problem,
the following two approaches are commonly used. It is desirable to analyze each of these structures for
a given transfer function, to see which one is better with respect to the number of arithmetic operations
required, the range of coefficients, and so forth.
The sensitivity problem may be reduced by implementing the transfer function D(z) as a cascade
connection of first-order and/or second-order transfer functions. If D(z) can be written as a product of
transfer functions D1(z),... , Dm (z), i.e.,
D(z) = D_1(z)\,D_2(z) \cdots D_m(z),
then a digital realization for D(z) may be obtained by a cascade connection of m component realizations
for D1(z), D2 (z), ... , and Dm(z), as shown in Fig. 3.19.
Fig. 3.19 Cascade realization of D(z): E(z) → D_1(z) → D_2(z) → \cdots → D_m(z) → U(z)
In most cases, the Di(z); i = 1, 2, ..., m, are chosen to be either first-order or second-order functions. If the
poles and zeros of D(z) are known, then Di(z) can be obtained by grouping real poles and real zeros to
produce first-order functions, or by grouping a pair of complex-conjugate poles and a pair of complex-
conjugate zeros to produce a second-order function. It is, of course, possible to group two real poles with
a pair of complex-conjugate zeros and vice versa. The grouping is, in a sense, arbitrary. It is desirable to
group several different ways, to see which one is best with respect to the number of arithmetic operations
required, the range of coefficients, and so forth.
In general, D(z) may be decomposed as follows:
D(z) = \prod_{i=1}^{p} \frac{1 + b_i z^{-1}}{1 + a_i z^{-1}}\ \prod_{j=p+1}^{m} \frac{1 + e_j z^{-1} + f_j z^{-2}}{1 + c_j z^{-1} + d_j z^{-2}}

The block diagram for

D_i(z) = \frac{1 + b_i z^{-1}}{1 + a_i z^{-1}} = \frac{U_i(z)}{E_i(z)}

and that for

D_j(z) = \frac{1 + e_j z^{-1} + f_j z^{-2}}{1 + c_j z^{-1} + d_j z^{-2}} = \frac{U_j(z)}{E_j(z)}
are shown in Figs 3.20a and 3.20b, respectively. The realization for the digital controller D(z) is a cascade
connection of p first-order systems of the type shown in Fig. 3.20a, and (m – p) second-order systems of
the type shown in Fig. 3.20b.
Fig. 3.20 Realization of (a) a first-order section D_i(z); (b) a second-order section D_j(z)
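In code, the cascade connection amounts to function composition. The sketch below (our own, assuming the direct_structure() helper given earlier for the individual sections) passes each section's output to the next:

# Cascade realization of Fig. 3.19: U_i(z) of one section is E_{i+1}(z) of the next.
def cascade(sections):
    def step(e_k):
        for section in sections:
            e_k = section(e_k)
        return e_k
    return step

# e.g., two first-order sections of the form (1 + b_i z^-1)/(1 + a_i z^-1):
D = cascade([direct_structure([0.4], [1.0, 0.2]),
             direct_structure([-0.5], [1.0, 0.3])])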
Another approach to reduce the coefficient sensitivity problem is to expand the transfer function D(z)
into partial fractions. If D(z) is expanded so that
D(z) = A + D_1(z) + D_2(z) + \cdots + D_r(z),
where A is simply a constant, then a digital realization for D(z) may be obtained by a parallel connection
of (r + 1) component realizations for A, D1(z), ... , Dr (z), as shown in Fig. 3.21. Due to the presence of
the constant term A, the first-order and second-order functions can be chosen in simpler forms:
D(z) = A + \sum_{i=1}^{q} \frac{b_i}{1 + a_i z^{-1}} + \sum_{j=q+1}^{r} \frac{e_j + f_j z^{-1}}{1 + c_j z^{-1} + d_j z^{-2}}
The block diagram for

D_i(z) = \frac{b_i}{1 + a_i z^{-1}} = \frac{U_i(z)}{E(z)}

and that for

D_j(z) = \frac{e_j + f_j z^{-1}}{1 + c_j z^{-1} + d_j z^{-2}} = \frac{U_j(z)}{E(z)}
are shown in Figs 3.22a and 3.22b, respectively.
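A companion sketch (again our own illustration) of the parallel realization of Fig. 3.21: the constant A and all the sections act on the same input E(z), and their outputs are summed:

# Parallel realization: D(z) = A + D1(z) + ... + Dr(z).
def parallel(A, sections):
    def step(e_k):
        return A * e_k + sum(section(e_k) for section in sections)
    return step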
3.4.2 Nonrecursive Realization
Nonrecursive structures for D(z) are similar to the recursive structures presented earlier in this section.
In the nonrecursive form, the direct and cascade structures are commonly used; the parallel structure is
not used since it requires more elements.
Fig. 3.22 Parallel realization: (a) a first-order section; (b) a second-order section
Example 3.4
Consider the digital controller with transfer function model
D(z) = \frac{U(z)}{E(z)} = \frac{2 - 0.6 z^{-1}}{1 + 0.5 z^{-1}}
Recursive realization of D(z) yields the block diagram shown in Fig. 3.23a. By dividing the numerator
of D(z) by the denominator, we obtain
D(z) = 2 - 1.6 z^{-1} + 0.8 z^{-2} - 0.4 z^{-3} + 0.2 z^{-4} - 0.1 z^{-5} + 0.05 z^{-6} - 0.025 z^{-7} + \cdots
Truncating this series at z–5, we obtain the following FIR digital system:
\frac{U(z)}{E(z)} = 2 - 1.6 z^{-1} + 0.8 z^{-2} - 0.4 z^{-3} + 0.2 z^{-4} - 0.1 z^{-5}
Figure 3.23b gives a realization for this FIR system. Notice that we need a large number of delay
elements to obtain a good level of accuracy. An advantage of this realization is that, because of the lack
of feedback, the accumulation of errors in past outputs is avoided in the processing of the signal.
Fig. 3.23 (a) Recursive realization of D(z); (b) nonrecursive (FIR) realization
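The long division above is easy to mechanize. The sketch below (our illustration, not from the text) computes the first few impulse-response coefficients of D(z) from its numerator and denominator coefficients, reproducing the series of Example 3.4:

# Impulse-response coefficients of D(z) by dividing numerator by denominator.
# b and a hold the coefficients of z^0, z^-1, ... (a[0] is assumed to be 1).
def impulse_coeffs(b, a, N):
    h = []
    for k in range(N + 1):
        h_k = b[k] if k < len(b) else 0.0
        for i in range(1, min(k, len(a) - 1) + 1):
            h_k -= a[i] * h[k - i]
        h.append(h_k)
    return h

print(impulse_coeffs([2, -0.6], [1, 0.5], 7))
# [2, -1.6, 0.8, -0.4, 0.2, -0.1, 0.05, -0.025]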
3.5 Tuning of PID Controllers
The ultimate goal of control systems engineering is to build real physical systems to perform some
specified tasks. To accomplish this goal, design and physical implementation of a control strategy
are required. The standard approach to design is as follows. A mathematical model is built, making necessary assumptions about the various uncertain quantities in the dynamics of the system. If the objective
is well defined in precise mathematical terms, then control strategies can be derived mathematically (e.g.,
by optimizing some criterion of performance). This is the basis of all model-based control strategies.
This approach is feasible when it is possible to specify the objective and the model, mathematically.
Many sophisticated methods based on model-based control approach will appear in later chapters of the
book.
For motion control applications (position, and speed control systems), identification of mathematical
models of systems close enough to reality is usually possible. However, for process control applications
(pressure, flow, liquid-level, temperature, and composition control systems), identifying process dynamics precisely can be expensive, even if meaningful identification is possible. This is because industrial processes are relatively slow and complex. In the process-control field, therefore, it is not
uncommon to follow an ad hoc approach for controller development, when high demands on control-
system performance are not made. In the ad hoc approach, we select a certain type of controller based
on past experience with the process to be controlled, and then set controller parameters by experiment
once the controller is installed. The ‘experimental design’ of controller settings has come to be known
as controller tuning.
Many years of experience have shown that a PID controller is versatile enough to control a wide variety
of industrial processes. The common practice is to interface a PID controller (with adjustment features)
to the process and adjust the parameters of the controller online, by trial-and-error, to obtain acceptable
performance. A number of tuning methods have been introduced to obtain fast convergence to control
solution. These methods consist of the following two steps:
(i) experimental determination of the dynamic characteristics of the control loop; and
(ii) estimation of the controller tuning parameters that produce a desired response for the dynamic
characteristics determined in the first step.
It may be noted that for tuning purposes, simple experiments are performed to estimate important dynamic
attributes of the process. The approximate models have proven to be quite useful for process control applications. (For processes whose dynamics are precisely known, trial-and-error tuning is not justified, since many model-based methods for the design of PID controllers are available which predict the controller parameters fairly well at the design stage itself.) The predicted parameter values based on
approximate models simply provide initial trial values for the online trial-and-error approach. These trial
values may turn out to be a poor guess. Fine tuning the controller parameters online is usually necessary
to obtain acceptable control performance.
Some of the tuning methods which have been successfully used in process industry, will be described
here.
3.5.1 PID Control Modes
Approximately 75% of feedback controllers in the process industry are PI controllers; most of the balance
are PID controllers. Some applications require only P, or PD controllers, but these are few.
Some instrument manufacturers calibrate the controller gain as proportional band (PB). A 10% PB
means that a 10% change in the controller input causes a full-scale (100%) change in controller output.
The conversion relation is thus
K_c = \frac{100}{PB}    (3.45)
A proportional controller has only one adjustable or tuning parameter: Kc or PB.
A proportionally controlled process with no integration property will always exhibit error at steady state in
the presence of disturbances and changes in set-point. The error, of course, can be made negligibly small
by increasing the gain of the proportional controller. However, as the gain is increased, the performance
of the closed-loop system becomes more oscillatory and takes longer to settle down after being disturbed.
Further, most process plants have a considerable amount of dead-time, which severely restricts the value
of the gain that can be used. In processes where the control within a band from the set-point is acceptable,
proportional control is sufficient. However, in processes which require perfect control at the set-point,
proportional controllers will not provide satisfactory performance [155].
To remove the steady-state offset in the controlled variable of a process, an extra amount of intelligence
must be added to the proportional controller. This extra intelligence is the integral or reset action, and
consequently, the controller becomes a PI controller. The equation describing a PI controller is as follows:
u(t) = K_c\left[e(t) + \frac{1}{T_I}\int_0^t e(t)\,dt\right]    (3.46a)

or

U(s) = K_c\left[1 + \frac{1}{T_I s}\right]E(s)    (3.46b)
Sometimes a mode faster than the proportional mode is added to the PI controller. This new mode of
control is the derivative action, also called the rate action, which responds to the rate of change of error
with time. This speeds up the controller action. The equation describing the PID controller is as follows:
u(t) = K_c\left[e(t) + \frac{1}{T_I}\int_0^t e(t)\,dt + T_D\,\frac{de(t)}{dt}\right]    (3.47a)
or

U(s) = K_c\left[1 + \frac{1}{T_I s} + T_D s\right]E(s)    (3.47b)
where TD is the derivative or rate time.
A PID controller has thus three adjustable or tuning parameters: Kc (or PB), TI, and TD. The derivative
action anticipates the error, initiates an early corrective action, and tends to increase the stability of
the system. It does not affect the steady-state error directly. A derivative control mode, in isolation,
produces no corrective effort for any constant error, no matter how large, and would, therefore, allow
uncontrolled steady-state errors. Thus, we cannot consider derivative modes in isolation; they will always
be considered as augmenting some other mode [155].
The block diagram implementation of Eqn. (3.47b) is sketched in Fig. 3.24a. The alternative form, Fig. 3.24b, is more commonly used, because it avoids taking the rate of change of the set-point input to the controller, thus preventing the undesirable derivative 'kick' on set-point changes by the process operator.

Fig. 3.24 Block diagram implementations of the non-interacting PID controller
Due to the noise-accentuating characteristics of the derivative operation, the low-pass-filtered derivative T_D s/(\alpha T_D s + 1) is actually preferred in practice (Fig. 3.24c). The value of the filter parameter \alpha is not adjustable but is built into the design of the controller. It is usually of the order of 0.05 to 0.3.
The controller of Fig. 3.24 is considered to be non-interacting in that its derivative and integral modes
operate independently of each other (although proportional gain affects all the three modes). Non-
interaction is provided by the parallel functioning of integral and derivative modes. By contrast, many
controllers have derivative and integral action applied serially to the controlled variable, resulting in
interaction between them. Many of the analog industrial controllers realize the following interacting
PID control action.
U(s) = K_c'\left[\frac{T_D's + 1}{\alpha T_D's + 1}\right]\left[1 + \frac{1}{T_I's}\right]E(s)    (3.48)
The first term in brackets is a derivative unit attached to the standard PI controller serially, to create the
PID controller (Fig. 3.25a). The derivative unit installed on the controlled-variable input to the controller
avoids the derivative kick (Fig. 3.25b).
Fig. 3.25 Interacting PID controller: (a) derivative unit in series with the PI controller; (b) derivative unit on the controlled-variable input
Most commercially available tunable controllers use the non-interacting version of PID control. The
discussion that follows applies to non-interacting PID control.
3.5.2 The Industrial Process Controller
A great number of manufacturers now market process controllers (electronic, and computer-based) with features that permit adjusting the set-point, transferring between manual and automatic control modes, adjusting the output signal from the control-action unit (tuning the parameters Kc, TI, and TD), and displaying the controlled variable, set-point, and control signal. Figure 3.26 shows
the basic structure of an industrial controller. The controller has been broken down into the following
three main units:
(i) the set-point control unit;
(ii) the PID control unit; and
(iii) the manual/automatic control unit.
The set-point control unit receives the measurement y of controlled variable of the process, together with
the set-point r of the control. A switch gives an option of choosing between local and remote (external)
set-point operation. If the set-point to the controller is to be set by the operating personnel, then the local
option rL is chosen. If the set-point to the controller is to be set by another control module, then remote
(external) option re is chosen. This is the case, for example, in cascade control where the drive of the
controller in the major loop constitutes the set-point of the minor-loop controller.
Fig. 3.26 Basic structure of an industrial controller (L: local set-point r_L; R: remote set-point r_e; A: automatic control signal u_C from the PID unit; M: manual control signal u_M)
The PID control unit receives the error signal e developed by the set-point control unit, and generates an
appropriate control signal uC. Adjustment features provided in the control unit, for generating appropriate
control signals, include tuning of the three parameters Kc, TI, and TD.
The manual/automatic control unit has a switch which determines the mode of control action. When
the switch is in the auto (A) position, the control signal uC calculated by PID control unit is sent to the
process (in such a case, the process is controlled in closed loop). When the switch is in the manual (M)
position, the PID control unit ‘freezes’ its output. The control signal uM can then be changed manually
by the operating personnel (the process is then controlled in open loop).
The basic structure of a process controller shown in Fig. 3.26 is common for electronic, and computer-
based controllers. These controllers are different in terms of realization of adjustment features.
3.5.3 Ziegler–Nichols Tuning Based on the Ultimate Gain and Period
This pioneer method, also known as the closed-loop or on-line tuning method, was proposed by J G Ziegler
and N B Nichols around 1940. In this method, the parameters by which the dynamic characteristics of
the process are represented are the ultimate gain and period. These parameters are used in tuning the
controller for a specified response: the quarter-decay ratio (QDR) response.
When the process is under closed-loop proportional (P) control, the gain of the P controller at which the
loop oscillates with constant amplitude, has been defined as the ultimate gain Kcu. Ultimate period Tu is
the period of these sustained oscillations. The ultimate gain is, thus, a measure of difficulty in controlling
a process; the higher the ultimate gain, the easier it is to control the process loop. The ultimate period is,
in turn, a measure of speed of response of the loop; the larger the period, the slower the loop.
By its definition, it can be deduced that the ultimate gain is the gain at which the loop is at the threshold
of instability. At gains just below the ultimate, the loop signals will oscillate with decreasing amplitude,
and at gains above the ultimate, the amplitude of the oscillations will grow with time.
For experimental determination of Kcu and Tu, the controller is set in ‘auto’ mode and the following
procedure is followed (refer to Fig. 3.26).
(i) Remove the integral mode by setting the integral time to its highest value. Alternatively, if the PID
controller allows for switching off the integral mode, switch it off.
(ii) Switch off the derivative mode, or set the derivative time to its lowest value, usually zero.
(iii) Increase the proportional gain in steps. After each increase, disturb the loop by introducing a
small step change in set-point and observe the response of the controlled variable, preferably on
a trend recorder. The controlled variable should start oscillating as the gain is increased. When
the amplitude of the oscillations remains approximately constant, the ultimate controller gain has
been reached. Record it as Kcu.
(iv) Measure the period of the oscillations from the trend recording. This parameter is Tu.
The procedure just outlined is simple and requires a minimum upset to the process, just enough to be
able to observe the oscillations. Nevertheless, the prospect of taking a process control loop to the verge
of instability is not an attractive one from a process operation standpoint.
Ziegler and Nichols proposed that the parameters Kcu and Tu, characterizing a process, be used in tuning
the controller for QDR response. The QDR response is illustrated in Fig. 3.27 for a step change in
disturbance, and for a step change in set-point. Its characteristic is that each oscillation has an amplitude
that is one fourth of the previous oscillation.
Empirical relations [12] for calculating the QDR tuning parameters of P, PI and PID controllers, from the
ultimate gain Kcu and period Tu, are given in Table 3.2.
Fig. 3.27 Quarter-decay ratio (QDR) response of y(t): (a) disturbance change; (b) set-point change (each peak of amplitude A is followed by a peak of amplitude A/4)
PI and PID tuning parameters that produce quarter-decay response, are not unique. For each setting
of the integral and derivative times, there will usually be a setting of the controller gain that produces
quarter-decay response. The settings given in Table 3.2 are the figures based on experience; these settings
have produced fast response for most industrial loops.
Table 3.2 QDR tuning formulas based on the ultimate gain and period

Controller      Kc                 TI               TD
P               Kc = 0.5 Kcu       —                —
PI              Kc = 0.45 Kcu      TI = Tu/1.2      —
PID             Kc = 0.75 Kcu      TI = Tu/1.6      TD = Tu/10
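These formulas are trivially encoded; the sketch below (our own helper, not from the text) returns the Table 3.2 settings from the measured ultimate gain Kcu and ultimate period Tu:

# QDR settings of Table 3.2 from the ultimate gain Kcu and ultimate period Tu.
def zn_ultimate(Kcu, Tu, mode="PID"):
    if mode == "P":
        return {"Kc": 0.5 * Kcu}
    if mode == "PI":
        return {"Kc": 0.45 * Kcu, "TI": Tu / 1.2}
    return {"Kc": 0.75 * Kcu, "TI": Tu / 1.6, "TD": Tu / 10}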
3.5.4 Tuning Based on the Process Reaction Curve
Although the tuning method based on ultimate gain and period is simple and fast, other methods of
characterizing the dynamic response of feedback control loops have been developed over the years. The
need for these alternative methods is based on the fact that it is not always possible to determine the ultimate gain and period of a loop; some loops do not exhibit sustained oscillations with a proportional
controller. Also, the ultimate gain and period do not give insight into which process or control system
characteristics could be modified to improve the feedback controller performance. A more fundamental
method of characterizing process dynamics is needed to guide such modifications. In the following, we
present an open-loop method for characterizing the dynamic response of the process in the loop.
Process control is characterized by systems which are relatively slow and complex and which, in many
cases, include an element of pure time delay (dead-time). Even where a dead-time element is not present,
the complexity of the system which will typically contain several first-order subsystems, will often result
in a process reaction curve (dynamic response to a step change in input), which has the appearance of
pure time delay.
The process reaction curve may be obtained by carrying out the following step-test procedure.
With the controller on ‘manual’, i.e., the loop opened (refer to Fig. 3.26), a step change of magnitude
Du in the control signal u(t) is applied to the process. The magnitude Du should be large enough for
the consequent change Dy(t) in the process output variable to be measurable, but not so large that the
response will be distorted by process nonlinearities. The process output is recorded for a period from the
introduction of the step change in the input, until the process reaches a new steady state.
Fig. 3.28 A typical process reaction curve: a step change Δu in the control signal u(t) produces a response Δy(t) that settles to Δy_ss
A typical process reaction curve is sketched in Fig. 3.28. The most common model used to characterize
the process reaction curve is the following:
\frac{Y(s)}{U(s)} = G(s) = \frac{K e^{-t_D s}}{\tau s + 1}    (3.49)
where K = the process steady-state gain;
t_D = the effective process dead-time; and
\tau = the effective process time-constant.
This is a first-order plus dead-time model. The model response for a step change in the input signal of
magnitude Du, is given by
Y(s) = \frac{K e^{-t_D s}\,\Delta u}{s(\tau s + 1)} = K\Delta u\, e^{-t_D s}\left[\frac{1}{s} - \frac{\tau}{\tau s + 1}\right]    (3.50)
Inverting with the help of a transform table (Table 2.1), and applying the real translation theorem of Laplace transforms, \mathcal{L}[y(t - t_0)\mu(t - t_0)] = e^{-st_0}Y(s);\ t_0 > 0 [155], we get

\Delta y(t) = K\Delta u\,[1 - e^{-(t - t_D)/\tau}];  t > t_D
            = 0;                                      t \le t_D        (3.51)
The term \Delta y is the perturbation or change in the output from its initial value:

\Delta y(t) = y(t) - y(0)

Figure 3.29 shows the model response to a step change of magnitude \Delta u in the input signal. \Delta y_{ss} is the steady-state change in the process output (refer to Eqn. (3.51)):

\Delta y_{ss} = \lim_{t\to\infty} \Delta y(t) = K\Delta u
Fig. 3.29 Response of the first-order plus dead-time model to a step input of magnitude Δu
At the point t = tD on the time axis, the process output variable leaves the initial steady state with a
maximum rate of change (refer to Eqn. (3.51)):
\frac{d}{dt}\,\Delta y(t)\Big|_{t = t_D} = K\Delta u\left(\frac{1}{\tau}\right) = \frac{\Delta y_{ss}}{\tau}
The time-constant \tau is the distance on the time axis between the point t = t_D, and the point at which the tangent to the model response curve, drawn at t = t_D, crosses the new steady state.
Note that the model response at t = t_D + \tau is given by

\Delta y(t_D + \tau) = K\Delta u\,(1 - e^{-1}) = 0.632\,\Delta y_{ss}
The process reaction curve of Fig. 3.28 can be matched to the model response of Fig. 3.29 by the
following estimation procedure.
The model parameter K is given by
Change in process output at steady state Dyss
K= = (3.52)
Step change in proccess input Du
The estimation of the model parameters t_D and \tau can be done by at least three methods, each of which results in different values.
This method makes use of the line that is tangent to the process reaction curve at the point of maximum rate of change. The time-constant is then defined as the distance on the time axis between the point where the tangent crosses the initial steady-state value of the output, and the point where it crosses the new steady-state value.
Besides the formulas for QDR response tuning based on the ultimate gain and period of the loop (refer to
Table 3.2), Ziegler and Nichols also developed tuning formulas based on the parameters of a first-order
model fit to the process reaction curve. These formulas are given in Table 3.3 [12].
Table 3.3 QDR tuning formulas based on the process reaction curve fit G(s) = \frac{K e^{-t_D s}}{\tau s + 1}

Controller      Kc                       TI                TD
P               Kc = \tau/(K t_D)        —                 —
PI              Kc = 0.9\tau/(K t_D)     TI = 3.33 t_D     —
PID             Kc = 1.5\tau/(K t_D)     TI = 2.5 t_D      TD = 0.4 t_D
As was pointed out in the earlier discussion on QDR tuning based on ultimate gain and period, the
difficulty of the QDR performance specification for PI and PID controllers is that there is an infinite set
of values of the controller parameters that can produce it; i.e., for each setting of the integral time on a
PI controller, and for each reset-derivative time combination on a PID controller, there is a setting of the
gain that results in QDR response. The settings given in Table 3.3 are the figures based on experience;
these settings have produced fast response for most industrial loops.
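As with Table 3.2, these formulas can be packaged in a small helper (a sketch with assumed names), taking the fitted model parameters K, τ and t_D:

# QDR settings of Table 3.3 from a first-order plus dead-time fit.
def zn_reaction_curve(K, tau, tD, mode="PID"):
    if mode == "P":
        return {"Kc": tau / (K * tD)}
    if mode == "PI":
        return {"Kc": 0.9 * tau / (K * tD), "TI": 3.33 * tD}
    return {"Kc": 1.5 * tau / (K * tD), "TI": 2.5 * tD, "TD": 0.4 * tD}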
3.5.5 Digital PID Controllers
Most process industries today, use computers to carry out the basic feedback control calculations. The
formulas that are programmed to calculate the controller output are mostly the discrete versions of the
analog controllers presented earlier in this section. This practice allows the use of established experience with analog controllers; in principle, their well-known tuning rules can be applied.
As there is no extra cost in programming all three modes of control, most computer-based algorithms contain all three, and then use flags and logic to allow the process engineer to specify any single mode or a combination of two or three modes. Most tunable commercially available controllers use the non-
interacting version of PID control (refer to Eqn. (3.47b)). The discussion that follows applies to non-
interacting PID control.
The equation describing an idealized non-interacting PID controller is as follows (refer to Eqn. (3.47a)):
u(t) = K_c\left[e(t) + \frac{1}{T_I}\int_0^t e(t)\,dt + T_D\,\frac{de(t)}{dt}\right]    (3.55)
with parameters
Kc = controller gain; TI = integral time; and TD = derivative time.
For small sample times T, this equation can be turned into a difference equation by discretization. Various
methods of discretization were presented in Section 2.14.
Approximating the derivative mode by the backward-difference approximation and the integral mode by
backward integration rule, we obtain (refer to Eqns (2.112))
u(k) = K_c\left[e(k) + \frac{1}{T_I}S(k) + \frac{T_D}{T}\big(e(k) - e(k-1)\big)\right]    (3.56)

S(k) = S(k-1) + T e(k)

where u(k) = the controller output at sample k;
S(k) = the sum of the errors; and
T = the sampling interval.
This is a nonrecursive algorithm. For the formation of the sum, all past errors e(·) have to be stored.
Equation (3.56) is known as the 'absolute form' or 'position form' of the PID algorithm. It suffers from one particular disadvantage, which becomes manifest when the process it is controlling is switched from manual
to automatic control. The initial value of the control variable u will simply be (e(k-1) = S(k-1) = 0 in Eqn. (3.56)):

u(0) = K_c\left[1 + \frac{T}{T_I} + \frac{T_D}{T}\right]e(0)
Since the controller has no knowledge of the previous sample values, it is not likely that this output value
will coincide with that previously available under manual control. As a result, the transfer of control
will cause a ‘bump’, which may seriously disturb the plant’s operation. This can only be overcome by
laboriously aligning the manual and computer outputs or, by adding complexity to the controller so that
it will automatically ‘track’ the manual controller.
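A minimal sketch (ours; an idealized position algorithm without the practical features discussed next) of Eqn. (3.56):

# Position form of the PID algorithm, Eqn. (3.56).
def make_position_pid(Kc, TI, TD, T):
    state = {"S": 0.0, "e_prev": 0.0}

    def step(e_k):
        state["S"] += T * e_k                 # S(k) = S(k-1) + T e(k)
        u_k = Kc * (e_k + state["S"] / TI
                    + (TD / T) * (e_k - state["e_prev"]))
        state["e_prev"] = e_k
        return u_k

    return step

With the initial state S = 0, e_prev = 0, the first call returns Kc(1 + T/TI + TD/T)e(0), which is exactly the 'bump' discussed above.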
Practical implementation of the PID algorithm includes the following additional features:
(i) It is seldom desirable for the derivative mode of the controller to respond to set-point changes.
This is because the set-point changes cause large changes in the error that last for only one sample;
when the derivative mode acts on this error, undesirable pulses or ‘derivative kicks’ occur on the
controller output—right after the set-point is changed. These pulses, which last for one sampling
interval, can be avoided by having the derivative mode act on the controlled variable, rather than
on the error.
(ii) A pure derivative term should not be implemented, because it will give a very large amplification
of the measurement noise. The gain of the derivative must thus be limited. This can be done by
approximating the transfer function TD s as follows:
T_D s \cong \frac{T_D s}{\alpha T_D s + 1}
where \alpha is the filter parameter, whose value is not adjustable but is built into the design of the controller. It is usually of the order of 0.05 to 0.3.
The PID controller, therefore, takes the form (refer to Fig. 3.24c):
U(s) = K_c\left[E(s) + \frac{1}{T_I s}E(s) - \frac{T_D s}{\alpha T_D s + 1}Y(s)\right]    (3.57)
Discretization of this equation results in the following PID algorithm:
u(k) = K_c\left[e(k) + \frac{1}{T_I}S(k) + D(k)\right]    (3.58)

S(k) = S(k-1) + T e(k)

D(k) = \frac{\alpha T_D}{\alpha T_D + T}D(k-1) - \frac{T_D}{\alpha T_D + T}\big[y(k) - y(k-1)\big]
This is a recursive algorithm characterized by the calculation of the current control variable u(k) based on
the previous control variable u(k – 1) and correction terms. To derive the recursive algorithm, we subtract
from Eqn. (3.56)
u(k-1) = K_c\left[e(k-1) + \frac{1}{T_I}S(k-1) + \frac{T_D}{T}\big(e(k-1) - e(k-2)\big)\right]    (3.59)
This gives
u(k) - u(k-1) = K_c\left[e(k) - e(k-1) + \frac{T}{T_I}e(k) + \frac{T_D}{T}\big(e(k) - 2e(k-1) + e(k-2)\big)\right]    (3.60)
Now, only the current change in the control variable
Du(k) = u(k) – u(k – 1) (3.61)
is calculated. This algorithm is known as the ‘incremental form’ or ‘velocity form’ of the PID algorithm.
The distinction between the position and velocity algorithms is significant only for controllers with integral
effect.
The velocity algorithm provides a simple solution to the requirement of bumpless transfer. The problem
of bumps arises mainly from the need for an ‘initial condition’ on the integral; and the solution adopted
is to externalize the integration, as shown in Fig. 3.31. The external integration may take the form of an
electronic integrator but, frequently, the type of actuating element is changed, so that the recursive algorithm is used with actuators which, by their very nature, contain integral action. The stepper motor (refer to Section 3.8) is one such actuating element.
Fig. 3.31 Bumpless transfer with the incremental PID algorithm: Δu(k) drives an external integrator (1/s), with auto/manual switching at the integrator input
Practical implementation of this algorithm includes the features of avoiding derivative kicks and filtering measurement noise. Using Eqn. (3.58), we obtain

\Delta u(k) = K_c\left[e(k) - e(k-1) + \frac{T}{T_I}e(k) - \frac{T}{\alpha T_D + T}D(k-1) - \frac{T_D}{\alpha T_D + T}\big(y(k) - y(k-1)\big)\right]    (3.62)

D(k) = \frac{\alpha T_D}{\alpha T_D + T}D(k-1) - \frac{T_D}{\alpha T_D + T}\big[y(k) - y(k-1)\big]    (3.63)
where y(k) = controlled variable; \Delta u(k) = incremental control variable = u(k) - u(k-1);
e(k) = error variable; K_c = controller gain;
T_I = integral time; T_D = derivative time; and T = sampling interval.
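A sketch (our own) of this incremental algorithm; note that the two D-terms of Eqn. (3.62) together equal D(k) − D(k−1), which the code uses directly:

# Velocity (incremental) PID algorithm of Eqns (3.62)-(3.63).
def make_velocity_pid(Kc, TI, TD, T, alpha=0.1):
    state = {"e_prev": 0.0, "y_prev": 0.0, "D": 0.0}

    def step(e_k, y_k):
        D_prev = state["D"]
        # Eqn. (3.63): filtered derivative acting on the controlled variable
        D_k = (alpha * TD / (alpha * TD + T)) * D_prev \
              - (TD / (alpha * TD + T)) * (y_k - state["y_prev"])
        # Eqn. (3.62): Du(k); D_k - D_prev equals the two D-terms of (3.62)
        du_k = Kc * (e_k - state["e_prev"] + (T / TI) * e_k + D_k - D_prev)
        state.update(e_prev=e_k, y_prev=y_k, D=D_k)
        return du_k                           # to an integrating actuator

    return step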
Though tuning formulas that are specifically applicable to digital control algorithms have been developed
[12], the most popular and widely used tuning approach for digital PID controllers is to apply rules in
Tables 3.2–3.3 with a simple correction to account for the effect of sampling. When a continuous-time
signal is sampled at regular intervals of time, and is then reconstructed by holding the sampled values
constant for each sampling interval, the reconstructed signal is effectively delayed by approximately
one half of the sampling interval, as shown in Fig. 3.32a (also refer to Example 2.17). In the digital
control configuration of Fig. 3.32b, the D/A converter holds the output of the digital controller constant
between updates, thus adding one half the sampling time to the dead-time of the process components.
The correction for sampling is then, simply, to add one half the sampling time to the dead-time obtained
from the process reaction curve.
t_{CD} = t_D + \frac{T}{2}    (3.64)

where t_{CD} is the corrected dead-time, t_D is the dead-time of the process, and T is the sampling interval.
Fig. 3.32 (a) Reconstruction of a sampled signal by a hold: the reconstructed signal lags the continuous signal by approximately T/2; (b) digital control configuration with A/D and D/A conversion
The tuning formulas given in Table 3.3 can directly be used for digital PID controllers with tD replaced
by tCD.
Notice that the online tuning method, based on ultimate gain and period, inherently incorporates the effect of
sampling when the ultimate gain and period are determined with the digital controller included in the loop.
Tuning rules in Table 3.2 can, therefore, be applied to digital control algorithms without any correction.
3.6 Microprocessor-Based Temperature Control System
This section describes the hardware features of the design of a microprocessor-based controller for
temperature control in an air-flow system.
Figure 3.33 shows the air-flow system, provided with temperature measurement and having a heater grid
with controlled power input. Air, drawn through a variable orifice by a centrifugal blower, is driven past
the heater grid and through a length of tubing, to the atmosphere again. The temperature sensing element
consists of a bead thermistor fitted to the end of a probe, inserted into the air stream 30 cms from the
heater. The task is to implement a controller, in the position shown by dotted box, to provide temperature
control of the air stream. It is a practical process-control problem in miniature, simulating the conditions
found in furnaces, boilers, air-conditioning systems, etc.
Fig. 3.33 The air-flow temperature control system
The functions within the control loop can be broken down as follows:
(a) sampling of the temperature measurement signal at an appropriate rate;
(b) transfer of the measurement signal into the computer;
(c) comparison of the measured temperature with a stored desired temperature, to form an error
signal;
(d) operation on the error signal by an appropriate algorithm, to form an output signal; and
(e) transfer of the output signal, through the interface, to the power control unit.
3.6.1 Hardware Description
Figure 3.34 gives hardware description of the temperature control system. Let us examine briefly the
function of each block. The block labeled keyboard matrix, interfaced to the microcomputer through a
programmable keyboard/display interface chip, enables the user to feed reference input to the temperature
control system. The LED display unit provides display of the actual temperature of the heating chamber.
The temperature range for the system under consideration is 20 to 60ºC. When a thermistor is used as
temperature sensor, it is necessary to convert the change in its resistance to an equivalent analog voltage.
This is accomplished with a Wheatstone bridge; the thermistor, exposed to the process air, forms one arm of the bridge. The millivolt-range bridge error voltage is amplified to the range required by the A/D
converter. The output of the A/D converter is the digital measurement of the actual temperature of the
process air. This data is fed to the microcomputer through an input port. The microcomputer compares
the actual temperature with the desired temperature at each sampling instant, and generates an error
signal. The error signal is then processed as per the control algorithm (to be given later), resulting in a
control signal in digital form. The control signal is, in fact, the amount of power required to be applied
to the plant, in order to reduce the error between the desired temperature and the actual temperature. The
power input to the plant may be controlled with the help of triacs and firing circuit interface.
Fig. 3.34 Hardware schematic of the temperature control system
A basic circuit using a triac (bidirectional thyristor) which controls the flow of alternating current through
the heater is shown in Fig. 3.35a. If the triac closes the circuit for tp seconds out of T seconds, the average
power applied to the plant over the sampling period T is
u = \frac{1}{T}\int_0^{t_p}\frac{V^2}{R}\,dt = \frac{V^2}{R}\,\frac{t_p}{T}

where V = rms value of the voltage applied to the heater; and R = resistance of the heater.

This gives

t_p = \frac{u}{V^2/R}\,T    (3.65)
Depending on the control signal u (power required to be applied to the plant), tp is calculated in the
microcomputer. A number is latched in a down counter (in the programmable timer/counter chip
interfaced with the microcomputer) which is determined by the value of tp and the counter’s clock
frequency. A pulse of required width tp is thus available at each sampling instant from the programmable
timer/counter chip. This, in fact, is a pulse width modulated (PWM) wave whose time period is constant
and width is varied in accordance with the power required to be fed to the plant (Fig. 3.35b).
Fig. 3.35 (a) Triac control of power input to the heater (230 V, 50 Hz supply); (b) PWM wave of period T and width t_p
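As a small sketch of Eqn. (3.65) (the values of V, R, T and the clock frequency below are assumptions for illustration), the conversion from the control signal to a pulse width and counter latch value might look like this:

# Eqn. (3.65): pulse width t_p from the control signal u (watts).
V, R = 230.0, 50.0        # rms heater voltage (V) and heater resistance (ohm); assumed
T = 0.1                   # sampling interval in seconds; assumed
f_clk = 10000.0           # down-counter clock frequency in Hz; assumed

def pulse_width(u):
    tp = u * T / (V * V / R)              # Eqn. (3.65)
    tp = max(0.0, min(tp, T))             # width cannot exceed one period
    return tp, int(tp * f_clk)            # pulse width and counter latch value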
The function of the triacs and firing circuit interface is thus, to process the PWM output of the
microcomputer, such that the heater is ON when the PWM output is logic 1, and OFF when it is logic 0.
Since the heater is operated off 230 V ac at 50 Hz, the firing circuit should also provide adequate isolation
between the high voltage ac signals and the low voltage digital signals.
3.6.2 Control Algorithm Design
A model for the temperature control system under study is given by the block diagram of Fig. 3.36. A
gain of unity in the feedback path corresponds to the design of feedback circuit (temperature transducer
+ amplifier + A/D converter) which enables us to interpret the magnitude of the digital output of A/D
converter directly as temperature in ºC. The temperature command is given in terms of the digital number
with magnitude equal to the desired temperature in ºC. The error e (ºC) is processed by the control
algorithm with transfer function D(z). The computer generates a PWM wave whose time period is equal
to the sampling interval, and width is varied in accordance with the control signal u (watts). The PWM
wave controls the power input to the plant through the triacs and the firing circuit interface. Since the
width of PWM remains constant over a sampling interval, we can use S/H to model the input-output
relation of the triacs and the firing circuit interface.
Fig. 3.36 Block diagram model of the temperature control system: controller D(z), triacs and firing circuit modeled as a hold Gh0(s), process G(s), and unity-gain feedback circuit (digital number interpreted as ºC)
To develop the digital controller D(z) for the process, we will follow the approach of controller tuning
(refer to Section 3.5). A simple tuning procedure consists of the following steps:
(i) Obtain experimentally the dynamic characteristics of the process, either by open-loop or closed-loop
tests.
(ii) Based on dynamic characteristics of a process, tuning rules have been developed by Ziegler
and Nichols (refer to Tables 3.2–3.3). Use these rules to obtain initial settings of the controller
parameters Kc, TI, and TD of the PID controller
D(s) = K_c\left[1 + \frac{1}{T_I s} + T_D s\right]    (3.66)
(iii) Discretize the PID controller to obtain the digital control algorithm for the temperature control process. Rules of thumb given in Section 2.13 may be followed for the initial selection of the sampling interval T.
In digital mode, the PID controller takes the form (refer to Eqn. (2.125))
D(z) = K_c\left[1 + \frac{T}{2T_I}\left(\frac{z+1}{z-1}\right) + \frac{T_D}{T}\left(\frac{z-1}{z}\right)\right] = \frac{U(z)}{E(z)}    (3.67)
(iv) Implement the digital PID controller. Figure 3.37 shows a realization scheme for the controller;
the proportional, integral, and derivative terms are implemented separately and summed up at the
output.
(v) Fine tune Kc, TI, TD and T to obtain acceptable performance.
Fig. 3.37 Realization scheme for the digital PID controller of Eqn. (3.67): proportional, integral, and derivative terms implemented separately and summed at the output
An open-loop test was performed on the air-flow system (Fig. 3.33) to obtain its dynamic characteristics.
Input : heater power
Output : air temperature
The test was carried out with a dc input signal. A wattmeter, on the input side, measured the heater
power, and a voltmeter, on the output side, measured the output (in volts) of the bridge circuit, which is
proportional to the air temperature in ºC.
Figure 3.38 shows the response for a step input of 20 watts. This process reaction curve was obtained for
a specific orifice setting.
Fig. 3.38 Process reaction curve of the air-flow system for a step input of 20 watts
Approximation of the process reaction curve by a first-order plus dead-time model is obtained as follows
(refer to Fig. 3.29):
The change in the process output at steady state is found to be \Delta y_{ss} = 24.8 volts. Therefore, the process gain

K = \frac{24.8}{20} = 1.24 volts/watt
The line that is tangent to the process reaction curve at the point of maximum rate of change gives t_D = 0.3 sec. The time at which the response is 0.632\,\Delta y_{ss} is found to be 0.83 sec. Therefore, \tau + t_D = 0.83, which gives \tau = 0.53 sec. (It may be noted that the response is oscillatory in nature; therefore, a second-order model would give a better fit. However, for coarse tuning, we have approximated the response by a first-order plus dead-time model.)
The process reaction curve of the air-flow system is thus represented by the model:
G(s) = \frac{K e^{-t_D s}}{\tau s + 1} = \frac{1.24\,e^{-0.3s}}{0.53 s + 1}    (3.68)
Taking a sampling interval T = 0.1 sec, we have (refer to Eqn. (3.64)):

t_{CD} = t_D + \frac{1}{2}T = 0.35 sec
Using the tuning formulas of Table 3.3, we obtain the following parameters for the PID controller:

K_c = 1.5\tau/(K t_{CD}) = 1.832
T_I = 2.5\,t_{CD} = 0.875    (3.69)
T_D = 0.4\,t_{CD} = 0.14
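These numbers follow directly from Eqns (3.52), (3.64) and Table 3.3; as a usage check, using the zn_reaction_curve() sketch from Section 3.5.4:

# Coarse tuning of the air-flow system from the model of Eqn. (3.68).
K, tD, tau, T = 1.24, 0.3, 0.53, 0.1
tCD = tD + T / 2                          # Eqn. (3.64): 0.35 sec
print(zn_reaction_curve(K, tau, tCD))
# {'Kc': 1.83..., 'TI': 0.875, 'TD': 0.14}, i.e., Eqn. (3.69)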
3.7 Microprocessor-Based Position Control System
This section describes hardware features of the design of a microprocessor-based controller for a
position control system. The plant of our digital control system is an inertial load, driven by an armature-
controlled dc servo motor. The plant also includes a motor-drive circuit. The output of the drive circuit
is fed to the armature of the motor which controls the position of the motor shaft. In addition, it also
controls the direction of rotation of the motor shaft.
Figure 3.39 gives hardware description of the position control system. Let us examine briefly the function
of each block.
Fig. 3.39 Hardware schematic of the position control system
The block labeled digital signal generator, interfaced with the microcomputer through an input port,
enables the user to feed the desired position (set-point) of the motor shaft. A keyboard matrix can be used
for entering numerical commands into the digital system.
The microcomputer compares the actual position of the motor shaft with the desired position at each
sampling instant, and generates an error signal. The error signal is then processed as per the control
algorithm (to be given later), resulting in a control signal in digital form. The digital control signal
is converted to a bipolar (can be + ve or –ve) analog voltage in the D/A converter interfaced to the
microcomputer. This bipolar signal is processed in a preamplifier and servo amplifier (power amplifier),
enabling the motor to be driven in one direction for positive voltage at preamplifier input, and in opposite
direction for a negative voltage.
In addition to these units, the block diagram of Fig. 3.39 shows a shaft encoder for digital measurement of shaft position/speed. We now examine in detail the principle of operation of this digital device.
3.7.1 Shaft Encoders
The digital measurement of shaft position requires conversion from the analog quantity ‘shaft angle’ to a
binary number. One way of doing this would be to change shaft angle to a voltage using a potentiometer,
and then to convert it to a binary number through an electronic A/D converter. This is perfectly feasible,
but is not sensible because of the following reasons:
(i) high quality potentiometers of good accuracy are expensive and subject to wear; and
(ii) the double conversion is certain to introduce more errors than a single conversion would.
We can go straight from angle to number, using an optical angular absolute-position encoder. It consists
of a rotary disk made of a transparent material. The disk is divided into a number of equal angular
sectors—depending on the resolution required. Several tracks, which are transparent in certain sectors
but opaque in others, are laid out. Each track represents one digit of a binary number. Detectors on these
tracks sense whether the digit is a ‘1’ or a ‘0’. Figure 3.40 gives an example. Here, the disk is divided
into eight 45º sectors. To represent eight angles in binary code requires three digits (2^3 = 8); hence, there
are three tracks. Each track has a light source sending a beam on the disk and, on the opposite side, a
photoelectric sensor receiving this beam. Depending upon the angular sector momentarily facing the
sensors, they transmit a bit pattern representing the angular disk position. For example, if the bit pattern
is 010, then Sector IV is facing the sensors.
Figure 3.40 is an example of an ‘absolute encoder’. It is so called because for a given angle, the digital
output must always be the same. Note that a cyclic (Gray) binary code is normally used on absolute
encoders (in cyclic codes, only one bit changes between adjacent numbers). If a natural binary-code
pattern were used, a transition from, say, 001 to 010, would produce a race between the two right-hand
bits. Depending on which photosensor responded faster, the output would go briefly through 011 or 000.
In either case, a momentary false bit pattern would be sent. Cyclic codes avoid such races. A cyclic code
can be converted into a natural binary code by using either hardware or computer software.
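The software conversion is short in most languages. As a sketch (ours, not from the text): natural binary is obtained by XOR-ing the Gray word with all of its right shifts.

# Convert a cyclic (Gray) code word to natural binary.
def gray_to_binary(g):
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

print(bin(gray_to_binary(0b010)))   # 0b11, i.e., 3: Sector IV is the fourth sector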
Encoders similar to Fig. 3.40 have been widely used. However, they have certain disadvantages.
Fig. 3.40 A three-track absolute encoder disk; the Gray-coded sectors I-VIII read 000, 001, 011, 010, 110, 111, 101, 100, respectively
(i) The resolution obtainable with these encoders is limited by the number of tracks on the encoder
disk. The alignment of up to ten detectors and the laying out of ten tracks is still quite difficult and
thus expensive.
(ii) The resulting digital measurement is in a cyclic code and must usually be converted to natural
binary before use.
(iii) The large number of tracks and detectors, inevitably increases the chance of mechanical and/or
electrical failure.
For these reasons, another form of encoder is commonly used today and is known as the incremental
encoder. The basis of an incremental encoder is a single track served by a single detector, and laid out
in equal segments of ‘0’ and ‘1’, as in Fig. 3.41. As the track moves relative to the detector, a pulse train
is generated, and can be fed to a counter to record how much motion has occurred. With regard to this
scheme of measurement, the following questions may be raised:
(i) How do we know which direction the motion was?
(ii) If we can record only the distance moved, how do we know where we were?
Fig. 3.41 Principle of the incremental encoder: a single track moving past a detector feeds a counter
The answer to the first question involves the addition of a second detector. Figure 3.42a shows two
detectors, spaced one half of a segment apart. As the track moves relative to the detectors (we assume at
a constant rate), the detector outputs vary with time, as shown in the waveforms of Fig. 3.42b. We can
see that the relative ‘phasing’ of the A and B signals depends upon the direction of motion, and so gives
us a means of detecting the direction.
For example, if signal B goes from ‘0’ to ‘1’ while signal A is at ‘1’, the motion is positive. For the same
direction, we see that B goes from ‘1’ to ‘0’ whilst A is at ‘0’. For negative motion, a similar but different
pair of statements can be made. By application of some fairly simple logic, it is possible to control a reversible counter, as indicated in Fig. 3.43.
This method of direction-sensing is referred to as quadrature encoding. The detectors are one half of
a segment apart, but reference to the waveforms of Fig. 3.42 shows that there are two segments to one
cycle; so the detectors are one quarter of a cycle apart, and hence the name.
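The direction logic described above translates into a few lines of code. A sketch (our own names) that updates a software reversible counter on each edge of signal B:

# Quadrature decoding: count up/down from detector signals A and B.
def make_quadrature_counter():
    state = {"b": 0, "count": 0}

    def update(a, b):
        if b != state["b"]:                # an edge on signal B
            rising = (b == 1)
            # B: 0 -> 1 while A = 1, or B: 1 -> 0 while A = 0 => positive motion
            if (rising and a == 1) or (not rising and a == 0):
                state["count"] += 1
            else:
                state["count"] -= 1
            state["b"] = b
        return state["count"]

    return update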
The solution to the second problem also requires an additional detector working on a datum track, as
shown in Fig. 3.44. The datum resets the counter every time it goes by.
We have thus three detectors in an incremental encoder. But this is still a lot less than on an absolute
encoder.
Fig. 3.42 (a) Two detectors A and B spaced one half-segment apart; (b) detector output waveforms for positive and negative motion
Fig. 3.43 Quadrature outputs A and B drive direction-sensing logic, which feeds the up/down inputs of a reversible counter
In an analog system, speed is usually measured by a tachogenerator attached to the motor shaft. This is
because the time differentiation of analog position signal presents practical problems.
In a digital system, however, it is relatively easy to carry out step-by-step calculation of the ‘slope’ of
the position/time curve. We have the position data in digital form from the shaft encoder, so the rest is
fairly straightforward.
Fig. 3.44 Incremental encoder with a datum track; the datum detector resets the reversible counter each time the datum passes
3.7.2 Control Algorithm Design
The mathematical model of the position control system under study is given by the block diagram of Fig. 3.46.
The magnitude of the digital output of the shaft encoder can be interpreted directly as the position of the
motor shaft in degrees, by proper design of the encoder interface. Similarly, the magnitude of the digital
reference input can be interpreted directly as reference input in degrees, by proper design of the keyboard
matrix interface. The error e (degrees) in position is processed by the control algorithm with transfer
function D(z). The control signal u (in volts) is applied to the preamplifier through the D/A converter.
The plant (preamplifier + servo amplifier + dc motor + load) is described by the transfer function
\frac{\theta(s)}{V(s)} = G(s) = \frac{94}{s(0.3s + 1)}    (3.70)
Fig. 3.46 Block diagram model of the position control system: controller D(z), D/A converter Gh0(s), and plant G(s)
To design the digital controller D(z) for this plant, we will follow the approach of discretization of analog
design (refer to Section 2.14). The design requirements may be fixed as \zeta = 0.7 and \omega_n \cong 10. The first step is to find a proper analog controller D(s) that meets the specifications. The transfer function

D(s) = K_c\,\frac{s + 3.33}{s + \alpha}

cancels the plant pole at s = -3.33. The characteristic roots of

1 + D(s)G(s) = 0

give \zeta = 0.7 and \omega_n = 10 if we choose K_c = 0.32 and \alpha = 14.
The controller

D(s) = \frac{0.32(s + 3.33)}{s + 14}    (3.71)
gives the following steady-state behavior:

K_v = \lim_{s\to 0} sG(s)D(s) = 7.15
This may be considered satisfactory.
The discretized version of the controller D(s) is the proposed digital controller D(z) for the control loop
of Fig. 3.46. The D(z) will perform as per the specifications if the lagging effect of the zero-order hold is negligible. We take a small value for the sampling interval T to satisfy this requirement. For a system with \omega_n = 10 rad/sec, a very 'safe' sample rate would be a factor of 20 faster than \omega_n, yielding

\omega_s = 10 \times 20 = 200 rad/sec
and T = \frac{2\pi}{\omega_s} \cong 0.03 sec
The dominant time constant of the plant is 0.3 sec. The sampling interval T is one tenth of this value.
We use the bilinear transformation given by
s = \frac{2}{T}\left(\frac{z-1}{z+1}\right)
to digitize D(s). This results in
D(z) = \frac{22.4z - 20.27}{80.67z - 52.67} = \frac{0.278 - 0.25 z^{-1}}{1 - 0.653 z^{-1}} = \frac{U(z)}{E(z)}    (3.72a)
The control algorithm is, therefore, given by
u(k) = 0.653 u(k – 1) + 0.278 e(k) – 0.25 e(k – 1) (3.72b)
This completes the digital algorithm design.
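As a sketch (ours), the algorithm of Eqn. (3.72b) runs once per sampling instant (T ≅ 0.03 sec):

# Control algorithm of Eqn. (3.72b) for the position control loop.
def make_position_controller():
    state = {"u_prev": 0.0, "e_prev": 0.0}

    def step(r_k, theta_k):
        e_k = r_k - theta_k                 # error in degrees
        u_k = 0.653 * state["u_prev"] + 0.278 * e_k - 0.25 * state["e_prev"]
        state.update(u_prev=u_k, e_prev=e_k)
        return u_k                          # volts, to the D/A converter

    return step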
3.8 Stepping Motors
The explosive growth of the computer industry in recent years has also meant an enormous growth for
stepping motors, because these motors provide the driving force in many computer peripheral devices.
Stepping motors can be found, for example, driving the paper-feed mechanism in printers. These motors
are also used in floppy disk drives, where they provide precise positioning of the magnetic head on the disks. The X and Y coordinate pens in plotters are driven by stepping motors.
The stepping motor can be found performing countless tasks outside the computer industry as well. The
most common application is probably in analog quartz watches where tiny stepping motors drive the
hands. These motors are also popular in numerical-control applications (positioning of the workpiece
and/or the tool in a machine according to previously specified numerical data).
A stepping motor is especially suited for applications mentioned above because, essentially, it is a device
that serves to convert input information in digital form to an output that is mechanical. It thereby provides
a natural interface with the digital computer. A stepping motor, plus its associated drive electronics,
accepts a pulse-train input and produces an increment of rotary displacement for each pulse. We can
control average speed by manipulating pulse rate, and motor position by controlling the total pulse count.
Two types of stepping motors are in common use—the permanent magnet (PM), and the variable
reluctance (VR). We will discuss the PM motor first.
3.8.1 Permanent Magnet (PM) Stepping Motors
A PM stepping motor in its simplest form is shown in Fig. 3.47. The motor has a permanent magnet rotor
that, in this example, has two poles, though often many more poles are used. The stator is made of soft iron.
Figure 3.48 shows a simple power drive scheme; each time the power transistors are switched as per the
sequence given in the chart, the motor moves through a fixed angle, referred to as the step angle. The
chart is circular in the sense that, the next entry after Step 4 is Step 1. To rotate the motor in a clockwise
direction, the chart is traversed from top to bottom, and to rotate the motor in counterclockwise direction,
the chart is traversed from bottom to top. The number of step movements per second gives the stepping rate, a parameter that gives a measure of the speed of operation of the stepping motor. The stepping rate is controlled by changing the switching frequency of the transistors.
Step    Q1     Q2     Q3     Q4
1       ON     OFF    ON     OFF
2       ON     OFF    OFF    ON
3       OFF    ON     OFF    ON
4       OFF    ON     ON     OFF

(CW rotation: traverse the chart from top to bottom; CCW rotation: from bottom to top.)

Fig. 3.48 Power drive scheme and switching chart for the PM stepping motor
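The sequence logic can be sketched as a circular table (our illustration, not from the text), stepped forward for CW rotation and backward for CCW rotation:

# Switching sequence of Fig. 3.48: states of transistors (Q1, Q2, Q3, Q4).
SEQUENCE = [(1, 0, 1, 0),    # Step 1
            (1, 0, 0, 1),    # Step 2
            (0, 1, 0, 1),    # Step 3
            (0, 1, 1, 0)]    # Step 4

def make_sequencer():
    state = {"i": 0}

    def step(cw=True):
        state["i"] = (state["i"] + (1 if cw else -1)) % 4
        return SEQUENCE[state["i"]]      # energize the windings accordingly

    return step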
From the foregoing description of the method of operation of a stepping motor, we observe that the
stepping action of the motor is dependent on a specific switching sequence that serves to energize and
de-energize the stator windings. In addition to the sequence requirement, the windings must be provided
with sufficient current. These requirements are met by the stepping motor driver, whose block diagram is
shown in Fig. 3.49. The sequence-logic section of the motor driver accepts the pulse-train input, and also
receives a binary direction signal indicating the direction in which the motor is to step. It then produces
an appropriate switching sequence, so that each phase of the motor is energized at the proper time. The
drive-amplifier section consists of power transistors supplying sufficient current to drive the motor.
Fig. 3.49  Block diagram of a stepping motor driver (inputs: pulse train and CW/CCW direction logic signal)
3.8.2 Variable Reluctance Stepping Motors
Figure 3.50 illustrates a typical Variable Reluctance (VR) motor. The rotor is made of magnetic material,
but it is not a permanent magnet, and it has a series of teeth (eight in this case) machined into it. As
with the PM stepping motor, the stator consists of a number of pole pieces with windings connected in
phases; all windings belonging to the same phase are energized at the same time. The stator in Fig. 3.50
is designed for 12 pole pieces with 12 associated windings arranged in three phases (labeled 1, 2, and 3,
respectively). The figure shows a set of four windings for Phase 1; the windings for the other two phases
have been omitted for clarity.
Fig. 3.50  A variable reluctance stepping motor (rotor shown in the position assumed when Phase 1 is energized)
The operating principle of the VR motor is straightforward. Let any phase of the windings be energized with
a dc signal. The magnetomotive force set up will position the rotor such that the teeth of the rotor section
in the neighborhood of the excited phase of the stator are aligned opposite to the pole pieces associated
with the excited phase. This is the position of minimum reluctance, and the motor is in stable equilibrium.
Figure 3.50 illustrates the rotor in the position it would assume when Phase 1 is energized. If we now
de-energize Phase 1 and energize Phase 2, the rotor rotates counterclockwise so that the four rotor teeth
nearest to the four pole pieces belonging to Phase 2, align themselves with these. The step angle of
the motor equals the difference in angular pitch between adjacent rotor teeth and adjacent pole pieces;
in this case, 45° – 30° = 15°. Due to this difference relationship, VR motors can be designed to operate
with considerably smaller step angles than PM motors. Other advantages of VR motors include faster
dynamic response and the ability to accept higher pulse rates.
Among the drawbacks: their output torque is lower than that of a PM motor of similar size, and they
do not provide any detent torque when not energized.
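The step-angle relation just stated is simple enough to capture in a line of Python; the function below is an illustrative sketch, with the tooth and pole counts of Fig. 3.50 as inputs.

```python
# A sketch of the step-angle relation stated above: the step angle is the
# difference between the rotor-tooth pitch and the stator-pole pitch.

def vr_step_angle(rotor_teeth: int, stator_poles: int) -> float:
    """Step angle (degrees) of a VR motor from its tooth/pole counts."""
    return abs(360.0 / rotor_teeth - 360.0 / stator_poles)

# The motor of Fig. 3.50: 8 rotor teeth, 12 stator pole pieces.
print(vr_step_angle(8, 12))   # 45 - 30 = 15.0 degrees
```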
3.8.3 Torque-Speed Characteristics
Torque versus speed curves of a stepping motor give the dynamic torque produced by the stepping
motor at a given stepping rate, on excitation under rated conditions. The dynamic torque of a motor is
its most important characteristic, and it plays a major role in the selection of a motor for a specified application.
In a load-positioning application, for instance, the rotor would typically start from rest and accelerate
the load to the desired position. To provide this type of motion, the motor must develop sufficient torque
to overcome friction, and to accelerate the total inertia. In accelerating the inertia, the motor may be
required to develop a large amount of torque, particularly if the acceleration must be completed in a short
time—so as to position the load quickly. Inability of the motor to develop sufficient torque during motion
may cause the motor to stall, resulting in a loss of synchronization between the motor steps and phase
excitation, and consequently, resulting in incorrect positioning of the load.
A typical torque versus stepping rate characteristic graph is shown in Fig. 3.51, in which curve a gives
pull-in torque versus rotor steps/sec and curve b gives pull-out torque versus rotor steps/sec.
Fig. 3.51  Torque versus stepping rate characteristics: curve a, pull-in torque; curve b, pull-out torque; curve c, pull-in torque with external inertia
The pull-in range (the area between axes and curve a) of the motor is the range of switching speeds
at which the motor can start and stop, without losing steps. For a frictional load requiring torque T1 to
overcome friction, the maximum pull-in rate is S1 steps per sec. S2 is the maximum pull-in rate at which
the unloaded motor can start and stop, without losing steps.
When the motor is running, the stepping rate can be increased above the maximum pull-in rate, and
when this occurs, the motor is operating in the slew-range region (the area between the horizontal axis
and curves a and b). The slew range gives the range of switching speeds within which the motor can
run unidirectionally, but cannot be started or reversed (at shaft torque T1, the motor cannot be started
or reversed at step rate S3). When the motor is running in the slew range, it can follow changes in the
stepping rate without losing steps, but only with a certain acceleration limit.
For a frictional load requiring torque T1 to overcome friction, the maximum slewing rate at which the
motor can run is S4. S5 is the maximum slewing rate at which the unloaded motor can run without losing
steps.
Curve c in Fig. 3.51 gives the pull-in torque with external inertia. It is obvious that if the external load
results in a pull-in torque curve c, the torque developed by the motor at step rate S1 is T2 < T1. Stepping
motors are more sensitive to the inertia of the load than they are to its friction.
3.8.4 Stepping Motor Drive Circuits
In motion control technology, the rise of stepping motors, in fact, began with the availability of easy-to-use
integrated circuit chips to drive these motors. These chips require, as inputs, a pulse train at the stepping
frequency, a logic signal to specify CW or CCW rotation, and a logic signal for STOP/START operation.
An adjustable-frequency pulse train is readily obtained from another integrated circuit chip, a
voltage-controlled oscillator.
The application of stepping motors has grown rapidly with the availability of low-cost microprocessors. A
simplified form of microprocessor-based stepping motor drive is shown in Fig. 3.52. The system requires
an input port and an output port (this requirement is reduced to one port if a programmable I/O port is
used). The output port handles the binary pattern applied to the stepping motor (which is assumed to be a
four-phase motor). The excitation sequence is usually stored in a table of numbers; a pattern for a
four-phase motor is shown in the chart of Fig. 3.52. The chart is circular in the sense that the next entry
after Step 4 is Step 1. To rotate the motor in a clockwise direction, the chart is traversed from top to
bottom, and to rotate the motor in a counterclockwise direction, the chart is traversed from bottom to top.
Fig. 3.52  Microprocessor-based stepping motor drive
By controlling the number of bit-pattern changes, and the speed at which they change, it is possible to
control the angle through which the motor rotates and the speed of rotation. These controls can easily be
realized through software.
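The software side of this scheme can be sketched as follows; write_port() is a hypothetical stand-in for the actual output-port access, and the excitation table is an assumed four-step pattern (cf. Fig. 3.48), not necessarily the chart printed in Fig. 3.52.

```python
# A sketch of software control of angle and speed: the step count sets
# the angle, and the delay between bit-pattern changes sets the speed.

import time

PATTERNS = [0b1010, 0b1001, 0b0101, 0b0110]    # assumed four-phase table

def write_port(pattern: int) -> None:          # placeholder for real I/O
    print(f"{pattern:04b}")

def rotate(steps: int, clockwise: bool, step_delay_s: float) -> None:
    """Angle is set by the step count; speed by the delay between steps."""
    index = 0
    for _ in range(steps):
        index = (index + 1) % 4 if clockwise else (index - 1) % 4
        write_port(PATTERNS[index])
        time.sleep(step_delay_s)               # stepping rate = 1/delay

rotate(steps=8, clockwise=True, step_delay_s=0.01)
```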
The system operator has control over the direction of rotation of the motor by means of a DIRECTION
switch, which is interfaced to the CPU through the input port. The operator is also provided with a STOP
switch which is connected to an interrupt line of the CPU. The interrupt routine must stop the motor
by sending out logic ‘0’s on the data bus lines connected to the stepping motor windings through the
output port.
Figure 3.52 also shows a simple drive circuit for the stepping motor. Power transistors Q1–Q4 act as
switching elements.
When a power transistor is turned off, a high voltage (L di/dt) builds up across the winding inductance,
which may damage the transistor. This surge in voltage can be suppressed by connecting a diode in parallel with each winding
in the polarity shown in Fig. 3.52. Now, there will be a flow of circulating current after the transistor is
turned off, and the current will decay with time.
3.8.5 Advantages and Limitations of Stepping Motors
Stepping motors present a number of pronounced advantages, as compared to conventional electric
motors:
(i) Since the stepping-motor shaft angle bears an exact relation to the number of input pulses, the
motor provides an accurate open-loop positioning system without the need for closing the loop
with a position encoder, comparator, and servo amplifier, as is done in conventional closed-loop
systems.
(ii) If the stepping motor receives a continuous train of pulses at constant frequency, it rotates at a
constant speed, provided neither the load torque nor the pulse frequency is excessive for the
given motor. The stepping motor can thus take the place of a velocity servo, again, without the
need for a closed-loop system. By changing pulse frequency, the motor speed can be controlled.
Even low velocities can be maintained accurately, which is difficult to do with conventional dc
motors.
(iii) By driving several motors from the same frequency source, synchronized motions at different
points in a machine are easily obtained. Using standard frequency-divider chips, we can drive a
motor at a precise fraction of another motor’s speed, giving an electronic gear train.
(iv) If the motor stator is kept energized during standstill, the motor produces an appreciable holding
torque. Thus, the load position can be locked without the need for clutch-brake arrangements. The
motor can be stalled in this manner indefinitely, without adverse effects.
There are, of course, also certain drawbacks.
(i) If the input pulse rate is too fast, or if the load is excessive, the motor will ‘miss’ steps, making the
speed and position inaccurate.
(ii) If the motor is at rest, an external disturbing torque greater than the motor’s holding torque, can
twist the motor shaft away from its commanded position by any number of steps.
(iii) With high load inertias, overshooting and oscillations can occur unless proper damping is applied,
and under certain conditions, the stepping motor may become unstable.
(iv) Stepping motors are only available in low or medium hp ratings, up to a couple of hp (in theory,
larger stepping motors could be built, but the real problem lies with the controller—how to get
large currents into and out of motor windings at a sufficiently high rate, in spite of winding
inductance).
(v) Stepping motors are inherently low-speed devices, more suited for low-speed applications because
gearing is avoided. If high speeds are required, this of course becomes a drawback.
Since the cost and simplicity advantages of stepping-motor control systems erode when motion sensors
and feedback loops are added, much effort has gone into improving the performance of open-loop
systems:
(i) As explained earlier in connection with Fig. 3.51, the permissible pulse rate for starting an inertia
load (i.e., the pull-in rate) is much lower than the permissible pulse rate once the motor has
reached maximum speed (the pull-out rate). A good controller brings the motor up to its maximum
speed gradually, a process called ramping (refer to [96] for a detailed description of hardware
and software), in such a manner that no pulses are lost. Similarly, a good controller controls
deceleration when the motor is to be stopped. A sketch of such a ramp generator is given after this list.
(ii) Various schemes for improving damping to prevent overshooting and oscillations when the
motor is to be stopped, are available. Mechanical damping devices provide a simple solution, but
these devices reduce the available motor torque and also, mostly, require a motor with a double-
ended shaft. Therefore, electronic damping methods are usually preferred. A technique called
back-phase damping consists of switching the motor into the reverse direction using the last few
pulses of a move.
(iii) The more sophisticated controllers are able to provide so-called microstepping. This technique
permits the motor shaft to be positioned at places other than the natural stable points of the
motor. It is accomplished by proportioning the current in two adjacent motor windings. Instead
of operating the windings in a purely on-off mode, the current in one winding is decreased slightly
while the current in the adjacent winding is increased.
(iv) Complex drive circuits that offer good current build-up without loss at high stepping rates, are
used.
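Following up item (i), here is a minimal Python sketch of a ramp generator; the pull-in rate, pull-out rate, and acceleration values are assumed figures, and real controllers ramp in hardware or interrupt-driven firmware.

```python
# A sketch of ramping: the pulse rate rises from the pull-in rate toward
# the pull-out rate and falls back before the last step, so that no
# pulses are lost. All rate values are assumed for illustration.

def ramp_profile(total_steps, pull_in_rate, pull_out_rate, accel):
    """Return the pulse rate (steps/sec) used for each step of a move."""
    rates, rate = [], pull_in_rate
    for k in range(total_steps):
        steps_left = total_steps - k
        if (rate - pull_in_rate) / accel >= steps_left:
            rate = max(pull_in_rate, rate - accel)   # decelerate in time
        else:
            rate = min(pull_out_rate, rate + accel)  # accelerate / cruise
        rates.append(rate)
    return rates

profile = ramp_profile(total_steps=40, pull_in_rate=200,
                       pull_out_rate=1000, accel=50)
print(profile[0], max(profile), profile[-1])   # 250 1000 200
```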
Although the advantages of stepping-motor drives in open-loop systems are most obvious, closed-loop
applications also exist. A closed-loop stepping motor drive can be analyzed using classical techniques
employed for continuous-motion systems. For a detailed account of stepping motors, refer to [52].
3.9 PROGRAMMABLE LOGIC CONTROLLERS (PLCs)
A great deal of what has been said in this book so far about control systems seems exotic: algorithms
for radar tracking, drives for rolling mills, filters for extracting information from noisy data, methods for
numerical control of machine tools, fluid-temperature control in process plants, etc. Underlying most
of these are much more mundane tasks: turning equipment (pumps, conveyor belts, etc.) on and off;
opening and closing of valves (pneumatic, hydraulic); checking sensors to be certain they are working;
sending alarms when monitored signals go out of range; etc. Process control plants and manufacturing
floors share this need for simple, but important, tasks.
These so-called logic control functions can be implemented using one of the most ingenious devices
ever devised to advance the field of industrial automation. So versatile are these devices, that they are
employed in the automation of almost every type of industry. The device, of course, is the programmable
controller, and thousands of these devices go unrecognized in process plants and factory environments—
quietly monitoring security, manipulating valves, and controlling machines and automatic production
lines.
Industrial applications of logic control are mainly of two types: those in which the control system is
entirely based on logic principles, and those that are mainly of a continuous feedback nature and use a
‘relatively small’ amount of logic in auxiliary functions, such as start-up/shut-down, safety interlocks
and overrides, and mode switching. Programmable controllers, originally intended for ‘100%’ logic
systems, have, in recent years, added the capability of conventional feedback control, making them
very popular, since one controller can now handle, in an integrated way, all aspects of operation of a
practical system that includes both types of control problems. General-purpose digital computers could
also handle such situations, but they are not as popular as the programmable controllers, for the reasons
mentioned below.
In theory, general-purpose computers can be programmed to perform most of the functions of
programmable controllers. However, these machines are not built to operate reliably under industrial
conditions, where they can be exposed to heat, humidity, corrosive atmosphere, mechanical shock and
vibration, electromagnetic noise, unreliable ac power with dropping voltages, voltage spikes, etc. A
programmable controller is a special-purpose computer, especially designed for industrial environments.
A general-purpose computer is a complex machine, capable of executing several programs or tasks
simultaneously, and in any order. By contrast, a programmable controller typically executes its tiny
program continuously hundreds of millions of times before being interrupted to introduce a new program.
General-purpose computers can be interfaced with external equipment with special circuit cards. In
programmable controllers, by comparison, the hardware interfaces for connecting the field devices are
actually a part of the controller and are easily connected. The software of the controllers is designed for
easy use by plant technicians. A programmable controller is thus a special-purpose device for industrial
automation applications—requiring logic control functions and simple PID control functions; it cannot
compete with conventional computers when it comes to complex control algorithms and/or fast feedback
loops, requiring high program execution speeds.
Early devices were called ‘programmable logic controllers (PLCs)’, and were designed to accept on-off
(binary logic) voltage inputs from sensors, switches, relay contacts, etc., and produce on-off voltage
outputs to actuate motors, solenoids, control relays, lights, alarms, fans, heaters, and other electrical
equipment. As many of today’s ‘programmable controllers’ also accept analog data, perform simple
arithmetic operations, and even act as PID (proportional-integral-derivative) process controllers, the
word ‘logic’ and the letter ‘L’ were dropped from the name long ago. This frequently causes confusion,
since the letters ‘PC’ mean different things to different people; the most common usage of these letters
being for ‘Personal Computer’. To avoid this confusion, there has been a tendency lately to restore the
letter ‘L’ and revive the designation ‘PLC’. We have followed this practice in the book.
Before the era of PLCs, hardwired relay control panels were, in fact, the major type of logic systems, and
this historical development explains why the most modern, microprocessor-based PLCs are still usually
programmed according to relay ladder diagrams. This feature has been responsible for much of the
widespread and rapid acceptance of PLCs; the computer was forced to learn the already familiar human
language rather than making the humans learn a new computer language. Originally cost-effective for
only large-scale systems, small versions of PLCs are now available.
A sequenced but brief presentation of building blocks of a PLC, ladder diagrams, and examples of
industrial automation, follows [23–25]. It is not appropriate to discuss here the internal details,
performance specifications, and programming details of any particular manufacturer's PLC. These
aspects are described in every manufacturer's literature.
3.9.1 Logic Controls
A definition of logic controls that adequately describes most applications is that they are controls that
work with one-bit binary signals. That is, the system needs only to know that a signal is absent or present;
its exact size is not important. This definition excludes the field of digital computer control discussed so
far in the book. Conventional computer control also uses binary signals (though usually with many bits),
but the type of application and the analysis methods are quite different for logic controls and conventional
computer controls, which is why we make the distinction.
Logic control systems can involve both combinational and sequential aspects. Combinational aspects
are implemented by a proper interconnection of basic logic elements such as AND, OR, NOT, so as to
provide a desired output or outputs, when a certain combination of inputs exists. Sequential effects use
logic elements together with memory elements (counters, timers, etc.), to ensure that a chain of events
occurs in some desired sequence. The present status of outputs depends, both, on the past and present
status of inputs.
It is important to be able to distinguish between the nature of variables in a logic control system, and
those in a conventional feedback control system. To define the difference, we consider an example that
employs both the control schemes.
Figure 3.53 shows a tank with a valve that controls flow of liquid into the tank, and another valve that
controls flow out of the tank. A transducer is available to measure the level of the liquid in the tank. Also
shown is the block diagram of a feedback control system, whose objective is to maintain the level of the
liquid in the tank at some preset, or set-point value. We assume that the controller operates according to
PID mode of control, to regulate the level against variations induced from external influences. This is a
continuous variable control system because both the level and the control valve setting can vary over a
range to achieve the desired regulation.
The liquid-level control system is a part of the continuous bottle filling process. Periodically, a bottle
comes into position under the outlet valve. The level must be maintained at the set-point while the outlet
valve is opened and the bottle is filled. This requirement is necessary to assure a constant pressure head
during bottle-filling. Figure 3.53 shows a pictorial representation of process hardware for continuous
bottle-filling control. The objective is to fill bottles moving on a conveyor, from the constant-head tank.
This is a typical logic control problem. We are to implement a control program that will detect the
position of a bottle under the tank outlet via a mechanically actuated limit switch, stop the feed motor
Fig. 3.53  Continuous bottle-filling control: set-point controller and control valve on the input flow; solenoid-operated outlet valve V1; limit switch LS; photoelectric sensor; feed motor drive M1; outfeed motor drive M2 (always ON during the process); empty and filled bottles moving on fixed rollers
M1 to stop the feed conveyor, open the solenoid-operated outlet valve V1, and then fill the bottle until
the photosensor detects the filled position. After the bottle is filled, it will close the valve V1, and restart
the conveyor to continue to the next bottle. The start and stop pushbuttons (PB) will be included for the
outfeed motor, and for the start of the bottle-filling process. Once the start PB is pushed, the outfeed
motor M2 will be ON until the stop PB is pushed. The feed motor M1 is energized once the system starts
(M2 ON), and is stopped when the limit switch detects the correct position of the bottle.
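Before turning to the sensors, it may help to see the filling sequence written out; the sketch below is one possible rendering in Python, with hypothetical signal names (START_PB_LATCHED, LIMIT_SWITCH_LS, and so on) standing in for the field interface of Fig. 3.53.

```python
# A sketch (not the book's program) of the bottle-filling logic; the
# signal names are hypothetical stand-ins for the field devices.

def bottle_filling_cycle(read_input, set_output):
    """One pass through the filling logic, in the style of a PLC scan."""
    if not read_input("START_PB_LATCHED"):      # process not started
        return
    set_output("OUTFEED_MOTOR_M2", True)        # M2 ON while started
    if read_input("LIMIT_SWITCH_LS"):           # bottle in position
        set_output("FEED_MOTOR_M1", False)      # stop feed conveyor
        if read_input("PHOTO_SENSOR"):          # bottle filled
            set_output("VALVE_V1", False)       # close outlet valve
            set_output("FEED_MOTOR_M1", True)   # restart conveyor
        else:
            set_output("VALVE_V1", True)        # keep filling
    else:
        set_output("FEED_MOTOR_M1", True)       # bring next bottle in

# Demo: bottle in position, not yet filled.
inputs = {"START_PB_LATCHED": 1, "LIMIT_SWITCH_LS": 1, "PHOTO_SENSOR": 0}
outputs = {}
bottle_filling_cycle(inputs.get, lambda k, v: outputs.update({k: v}))
print(outputs)   # M2 ON, M1 stopped, V1 open
```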
The sensors used for the logic control problem have characteristics different from those used for the
regulator problem. For the regulator problem, the level sensor is an analog device producing analog
signal as its output. For the logic control problem, sensors used are binary sensors producing on-off
(binary logic) signals. For example, a limit-switch consists of mechanically actuated electrical contacts.
The contacts open or close when some object reaches a certain position (i.e., limit), and actuates the
switch. Hence, limit-switches are binary sensors. Photoelectric sensors consist, basically, of a source
emitting a light beam and a light-sensing detector receiving the beam. The object to be sensed interrupts
the beam, thereby making its presence known without physical contact between sensor and object. The
filled-bottle state of the product can thus be sensed by a binary photoelectric sensor.
The system of Fig. 3.53 involves solenoid and electric motors as motion actuators. Thus, when the logic
controller specifies that ‘output valve be opened’, it may mean moving a solenoid. This is not done by a
simple toggle switch. Instead, one would logically assume that a small switch may be used to energize a
relay with contact ratings that can handle the heavy load. Similarly, an on-off voltage signal from the logic
controller may actuate a thyristor circuit to run a motor.
Models of Digital Control Devices and Systems 185
3.9.2 Basic Building Blocks of a PLC
Programmable logic controllers are basically computer-based; their architecture is, therefore, very
similar to computer architecture. The memory contains the operating system stored in fixed memory
(ROM), and the application programs stored in alterable memory (RAM). The Central Processing Unit
(CPU) is a microprocessor that coordinates the activities of the PLC system. Figure 3.54 shows basic
building blocks of a PLC.
Fig. 3.54  Basic building blocks of a PLC (CPU, memory, input and output modules, and power supply)
Input devices such as pushbuttons, sensors, and limit switches are connected to the input interface
circuit, called the input module. This section gathers information from the outside environment, and
sends it to the CPU. Output devices such as solenoids, motor controls, indicator lights and alarms are
connected to the output interface circuit, called the output module. This section is where the calculation
results from the CPU are output to the outside environment. With the control application program (stored
within the PLC memory) in execution, the PLC constantly monitors the state of the system through the
field input devices; and based on the program logic, it determines the course of action to be carried out
at the field output devices. This process of sequentially reading the inputs, executing the program in
memory, and updating the outputs is known as scanning.
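The scanning process lends itself to a compact sketch; the three-step loop below is a simplified model of a scan (not any manufacturer's implementation), assuming dictionary-based input and output tables.

```python
# A simplified model of the scan cycle: read inputs into the input
# table, execute the user program, then update the outputs.

input_table = {}      # image of the field inputs, refreshed each scan
output_table = {}     # results to be written out to the output modules

def scan(read_field_inputs, user_program, write_field_outputs):
    """One scan; a real controller repeats this loop continuously."""
    input_table.update(read_field_inputs())          # 1. input scan
    output_table.update(user_program(input_table))   # 2. program execution
    write_field_outputs(output_table)                # 3. output update

# A one-rung program: output 10000 simply follows input 00000.
scan(lambda: {"00000": 1},
     lambda inp: {"10000": inp["00000"]},
     print)                                          # prints {'10000': 1}
```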
Intelligence of an automated system is greatly dependent on the ability of a PLC to read in the signals
from various types of input field devices. The most common class of input devices in an automated
system is the binary type. These devices provide input signals that are ON/OFF or OPEN/CLOSED.
To the input interface circuit, every binary input device is essentially a switch that is either open or
closed, signaling 1 (ON) or 0 (OFF). Some of the binary input field devices, along with their symbolic
representation are listed in Fig. 3.55.
Fig. 3.55  Symbolic representation of binary input field devices: (a) pushbutton (PB); (b)–(e) limit switches (LS)
As mentioned earlier, a switch is a symbolic representation of the field input device, interfaced to the
input module of the PLC. The device may be a manually operated pushbutton, mechanically actuated
limit switch (the contacts open/close when some object reaches a certain position and actuates the
switch), proximity switch (device based on inductive/capacitive/magnetic effect which, with appropriate
electronics, can sense the presence of an object without a physical contact with the object), photoelectric
sensor, level sensor, temperature sensor, shaft encoder, etc. The main purpose of the input module is to
condition the various signals, received from the field devices, to produce an output to be sensed by the
CPU. The signal conditioning involves converting power-level signals from field devices to logic-level
signals acceptable to the CPU, and providing electrical isolation so that there is no electrical connection
between the field device (power) and the controller (logic). The coupling between the power and the
logic sections is normally provided by an optical coupler.
During our discussion on PLC programming, it will be helpful if we keep in mind the relationship
between the interface signals (ON/OFF) and their mapping and addressing used in the program. When
in operation, if an input signal is energized (ON), the input interface circuit senses the field device’s
supplied voltage and converts it to a logic-level signal acceptable to the CPU, to indicate the status of the
device. The field status information provided to the standard input module is placed into the input table
in memory through PLC instructions. The I/O address assignment document of the PLC manufacturer
identifies each field device by an address. During scanning, the PLC reads the status of all field input
devices, and places this information at the corresponding address locations.
An automation system is incomplete without means for interface to the field output devices. The output
module provides connections between the CPU and output field devices. The output module receives
from the CPU logic-level signals (1 or 0).
The main purpose of the output interface circuit is to condition the signals received from the CPU, to
produce outputs to actuate the output field devices. The signal conditioning circuit consists, primarily, of
the logic and power sections, coupled by an isolation circuit. The output interface can be thought of as a
simple switch through which power can be provided to control the output device.
During normal operation, the CPU sends to the output table, at predefined address locations, the output
status according to the logic program. If the status is 1, ON signal will be passed through the isolation
circuit, which, in turn, will switch the voltage to the field device through the power section of the module.
The power section of the output module may be transistor based, triac based, or simply, relay ‘contact
based’ circuit. The relay circuit output interface allows the output devices to be switched by NO
(normally open) or NC (normally closed) relay contact. When the processor sends the status (1 or 0) to
the module (through output table) during the output update, the state of the contact will change. If a 1 is
sent to the module from the processor, a normally open contact will close, and a normally closed contact
will change to an open position. If a 0 is sent, no change occurs to the normal state of the contacts. The
contact output can be used to switch either ac or dc loads, switching small currents at low voltages.
High-power contact outputs are also available for switching of high currents.
Some of the output field devices, along with their symbolic representation, are given in Fig. 3.56.
Fig. 3.56  Symbolic representation of some output field devices, including solenoid (SOL), pilot lamp (PL), alarm horn (AH), and heater (H); (f) and (g) show the NO and NC relay contacts
Once we have the CPU programmed, we get information in and out of the PLC through the use of
input and output modules. The input module terminals receive signals from switches, and other input
information devices. The output module terminals provide output voltages to energize motors and valves,
operate indicating devices, and so on.
For small PLC systems, the input and output terminals may be included on the same frame as the CPU.
In large systems, the input and output modules are separate units; modules are placed in groups on racks,
and the racks are connected to the CPU via appropriate connector cables.
Generally speaking, there are three categories of rack enclosures—the master rack, the local rack, and
the remote rack. A master rack refers to the enclosure containing the CPU module. This rack may, or may
not, have slots available for the insertion of I/O modules. The larger the PLC system, the less likely that
the master rack will have I/O housing capability or space. A local rack is an enclosure which is placed
in the same location or area where the master rack is housed. If a master rack contains I/O, it can also be
considered a local rack. In general, a local rack contains a local I/O processor which receives and sends
data to and from the CPU.
As the name implies, remote racks are enclosures containing I/O modules located far away from the
CPU. A remote rack contains an I/O processor which communicates I/O information just like the local
rack.
Timers and counters play an important part in many industrial automation systems. The timers are used
to initiate events at defined intervals. The counters, on the other hand, are used to count the occurrences
of any defined event.
Basically, the operation of both the timer and the counter is the same, as a timer operates like a counter. The
counter shown in Fig. 3.57a counts down from the set value when its execution condition (count input) goes
from OFF to ON. When the value reaches zero, the counter contact point is turned ON. It is reset with a
reset input. The set value is decided by the programmer, and stored in the internal register of the counter
through control program instructions. The count input signal may refer to any event which may occur
randomly.
Fig. 3.57  (a) Counter, with count input, reset input, and set value; (b) timer, with start input and set value
When a count input signal occurs at fixed frequency, i.e., after every fixed interval of time, the counter
performs as a timer. Now 10 pulses, i.e., counts, will mean an elapsed time of 5 seconds, if the signal
is occurring after a regular interval of 0.5 seconds. The timer, shown in Fig. 3.57b, is activated when its
execution condition goes ON and starts decreasing from the set value. When the value reaches zero, the
timer contact point is turned ON. It is reset to the set value when the execution condition goes OFF.
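The counter/timer behavior described above can be modeled in a few lines; the DownCounter class below is an illustrative sketch, and the 10-count, 0.5-second example reproduces the arithmetic of the preceding paragraph.

```python
# An illustrative model of the down-counter described above; a timer is
# the same mechanism driven by a fixed-frequency count input.

class DownCounter:
    def __init__(self, set_value: int):
        self.set_value = set_value
        self.value = set_value
        self.contact = False              # turns ON when the value hits 0

    def count(self) -> None:
        """Called on each OFF-to-ON edge of the count input."""
        if self.value > 0:
            self.value -= 1
        self.contact = (self.value == 0)

    def reset(self) -> None:
        self.value = self.set_value
        self.contact = False

# Ten counts at one count every 0.5 s act as a 5-second timer.
timer = DownCounter(set_value=10)
for _ in range(10):
    timer.count()
print(timer.contact)                      # True after 10 x 0.5 s = 5 s
```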
It is unlikely that two different PLCs will have identical memory maps, but a generalization of memory
organization is still valid in the light of the fact that all PLCs have similar storage requirements. In
general, all PLCs must have memory allocated for the four items described below.
Executive  The executive software is a permanently stored collection of programs that are considered
a part of the system itself. These programs direct system activities such as execution of the control
program, communication with peripheral devices, and other housekeeping activities. The executive area
of memory is not accessible to the user.
Scratch Pad  It is a temporary storage used by the CPU to store a relatively small amount of data
for interim calculations or control.
Data Table  This area stores any data associated with the control program, such as timer/counter set
values, and any other stored constants or variables that are used by the control program. This section also
retains the status information of the system inputs once they have been read, and the system outputs once
they have been set by the control program.
User Program  This area provides storage for any programmed instructions entered by the user. The
control program is stored in this area.
The Data Table and the User Program areas are accessible and are required by the user for control
application. The Executive and Scratch Pad areas together are normally referred to as ‘system memory’,
and Data Table and User Program areas together are labeled as ‘application memory’.
The data table area of the PLC’s application memory is composed of several sections described below.
Input Table  The input table is an array of bits that stores the status of discrete inputs which are
connected to input interface circuit. The maximum number of bits in the input table is equal to the
maximum number of field inputs that can be connected to the PLC. For instance, a controller with 128
inputs would require an input table of 128 bits. If the PLC system has 8 input modules, each with 16
terminal points, then the input table in PLC memory (assuming 16 bit word length) will look like that
in Fig. 3.58.
Fig. 3.58  Mapping of the input table: an input module with 16 terminals placed in the master rack (rack 0); word addresses 000 to 007, bit addresses 00 to 15. The input on terminal 12 of the module at word address 000 maps to bit address 00012.
Each terminal point on each of the input modules will have an address by which it is referenced. This
address will be a pointer to a bit in the input table. Thus, each connected input has a bit in the input table
that corresponds exactly to the terminal to which the input is connected. The address of the input device
can be interpreted as word location in the input table corresponding to the input module, and bit location
in the word corresponding to the terminal of the input module, to which the device is connected.
Several factors determine the address of the word location of each module. The type of module, input or
output, determines the first number in the address from left to right (say, 0 for input, and 1 for output).
The next two address numbers are determined by the rack number and the slot location where the module
is placed. Figure 3.58 graphically illustrates a mapping of the input table, and the modules placed in rack
0 (master rack). Note that the numbers associated with address assignment depend on the PLC model
used. These addresses can be represented in octal, decimal, or hexadecimal. We have used decimal
numbers.
The limit switch connected to the input interface (refer to Fig. 3.58) has an address of 00012 for its
corresponding bit in the input table. This address comes from the word location 000 and the bit number
12; which are related to the rack position where the module is installed, and the module’s terminal
connected to the field device, respectively. If the limit switch is ON (closed), the corresponding bit 00012
will be 1; if the limit switch is OFF (open), its corresponding bit will be 0.
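The word/bit addressing just described can be mimicked in code; the sketch below assumes the decimal convention of Fig. 3.58 (module-type digit, rack, slot, then a two-digit terminal number) and is illustrative only.

```python
# A sketch of the addressing scheme: an address such as 00012 splits
# into a word location (000) in the input table and a bit number (12).

def bit_address(module_type: int, rack: int, slot: int, terminal: int) -> str:
    """Build the 5-digit device address: word location + terminal bit."""
    word = f"{module_type}{rack}{slot}"     # 0 = input, rack 0, slot 0
    return f"{word}{terminal:02d}"

input_table = {"000": [0] * 16}             # one 16-bit word, all OFF

addr = bit_address(module_type=0, rack=0, slot=0, terminal=12)
print(addr)                                  # 00012
input_table[addr[:3]][int(addr[3:])] = 1     # limit switch ON -> bit = 1
print(input_table["000"][12])                # 1
```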
During PLC operation, the processor will read the status of each input in the input modules, and then
place this value (1 or 0) in the corresponding address in the input table. The input table is constantly
changing—to reflect the changes in the field devices connected to the input modules. These changes in
the input table take place during the reading part of the processor scan.
Output Table  The output table is an array of bits that controls the status of output devices, which
are connected to the output interface circuit. The maximum number of bits available in the output table
is equal to the maximum number of output field devices that can be interfaced to the PLC. For instance,
a controller with a maximum number of 128 outputs would require an output table of 128 bits.
Each connected output has a bit in the output table that corresponds exactly to the terminal to which the
output is connected. The bits in the output table are controlled (ON/OFF) by the processor, as it interprets
the control program logic. If a bit in the table is tuned ON (logic 1), then the connected output is switched
ON. If a bit is cleared or turned OFF (logic 0), the output is switched OFF. Remember that the turning
ON or OFF, of the field devices occurs during the update of outputs after the end of the scan.
Storage Area  This section of the data table may be subdivided into two parts, consisting of a work
bit storage area and a word storage area. The purpose of this data table section is to store data that can
change, whether it is a bit or a word (16 bits). Work bits are internal outputs which are normally used
to provide interlocking logic. The internal outputs do not directly control output field devices. When the
processor evaluates the control program, and any of these outputs is energized (logic 1), then this internal
output, in conjunction with other internal and/or real signals from field devices, forms an interlocking
logic sequence that drives an output device or another internal output.
The outputs of timers and counters are used as internal outputs which are generated after a time interval
has expired, or a count has reached a set value.
Assume that the timer/counter table in storage area has 512 points. Address assignment for these points
depends on the PLC model used. We will use TIM/CNT000 to TIM/CNT512 as the addresses of these
points. The word storage area will store the set values of timers/counters.
In our application examples given in the next subsection, we shall use word addresses 000 to 007 for
input table, and addresses 100 to 107 for output table. The input devices will be labeled with numbers
such as 00000,..., 00015, and output devices with numbers such as 10000, ..., 10015.
We shall use word addresses 010 to 017 for internal outputs. Examples of typical work bits (internal
outputs) are 01000, ..., 01015. TIM/CNT000 to TIM/CNT512 are the typical addresses of timer/counter
points.
3.9.3 Ladder Diagrams
Although specialized functions are useful in certain situations, most logic control systems may be
implemented with the three basic logic functions AND, OR, and NOT. These functions are used either
singly or in combinations, to form instructions that will determine if an output field device is to be
switched ON or OFF. The most widely used language for implementing these instructions is the ladder
diagram. Ladder diagrams are also called contact symbology, since the instructions, as we shall see, are
relay-equivalent contact symbols, shown in Figs 3.56f and 3.56g.
An AND device may have any number of inputs and one output. To turn the output ON, all the inputs
must be ON. This function is most easily visualized in terms of switch arrangement of Fig. 3.59a, and
timing chart of Fig. 3.59b. The corresponding ladder diagram is given in Fig. 3.59c. Figure 3.59d gives
the Boolean algebra expression for the two-input AND, read as “A AND B equals C”.
Fig. 3.59  The AND function: (a) switch interpretation; (b) timing chart; (c) ladder diagram; (d) Boolean expression
The timing chart in Fig. 3.59b is simply a series of graphs, each representing a logic variable, in which
the horizontal axis is time and the vertical axis is logic state, that is, 0 or 1. The graphs are placed so that
their time axes are synchronized; in this way, a vertical line at any point on the graph describes a point in
time, and all input and output variables can be evaluated at that point. The graph of the output variable is
determined by the structure of the logic system and, of course, the pattern of the input.
The input contacts in Fig. 3.59c are normally open (NO) contacts (Do not confuse this symbol with
the familiar electrical symbol for capacitors). If the status of the input A is ‘1’, the contact A in ladder
diagram will close, and allow current to flow through the contact. If the status of the input A is ‘0’, the
contact will remain open, and not allow current to flow through the contact.
The ladder diagram of Fig. 3.59c can be thought of as a circuit having many inputs. Such a circuit is known
as a ‘rung’ of the ladder. A complete PLC ladder diagram consists of several rungs; a rung controls
an output field device either through an output module or an internal output. The input to a rung can be
logic commands from input modules, or from output modules connected to field devices, or from internal
outputs.
Figure 3.60 gives similar details for logical OR operation and should be self-explanatory. The Boolean
expression is read as “A OR B equals C”.
The contact in Fig. 3.61 is a normally closed (NC) contact. If the status of the input A is ‘0’, the contact
will remain closed, thus allowing current to flow through the contact. If the status of the input A is ‘1’,
the contact will open and not allow current to flow through the contact. This symbol permits the use of
the logic NOT operator. The Boolean expression is read as “NOT A equals B”; the overbar is used, in general,
for applying the NOT function.
Fig. 3.60  The OR function: (a) switch interpretation; (b) timing chart; with ladder diagram and Boolean expression A + B = C
Fig. 3.61  The NOT function: (a) switch interpretation; (b) timing chart; with ladder diagram and Boolean expression Ā = B
Fig. 3.62  (a) Ladder diagram (contacts A and B driving output C); (b) timing chart
Consider now the logic system
A + B̄ = C
read as “A OR NOT B equals C”.
The ladder diagram and timing chart for this system are given in Fig. 3.63.
Fig. 3.63  Ladder diagram and timing chart for A + B̄ = C
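A rung such as this one is easy to evaluate in software; the sketch below treats an NO contact as passing the input state and an NC contact as inverting it, and evaluates C = A + B̄ over a sequence of sampled inputs, in the manner of a timing chart.

```python
# A sketch evaluating the rung of Fig. 3.63: an NO contact passes the
# input state, an NC contact inverts it.

def rung_a_or_not_b(a: int, b: int) -> int:
    """C = A + B-bar: NO contact A in parallel with NC contact B."""
    return 1 if (a == 1) or (b == 0) else 0

A = [0, 1, 1, 0, 0]      # input states at successive instants
B = [0, 0, 1, 1, 0]
print([rung_a_or_not_b(a, b) for a, b in zip(A, B)])   # [1, 1, 1, 0, 1]
```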
Consider next a simple start/stop control problem for a machine. Assume that 000 is the word address
of the input module, and 100 is the word address of the output
module of a PLC. Each module is assumed to have 16 terminals: 00 to 15. The start pushbutton is
connected to terminal 00, and stop pushbutton is connected to terminal 01 of the input module 000; and
the signal from terminal 00 of the output module 100 controls the machine. The system variables may,
therefore, be designated as 00000, 00001, and 10000.
The bit 00000 of the input table in PLC memory is 1 when the start pushbutton is pressed, and is 0 when
start pushbutton is released. The bit 00001 of the input table is 1 when the stop pushbutton is pressed,
and is 0 when the stop pushbutton is released. The bit 10000 of the output table is 1 when the machine is
running, and 0 when the machine is not running.
The logic system has three input variables and one output variable. There appears to be a contradiction,
but the statement is true. The variable 10000, representing the running state of the machine, is both an input
variable and an output variable. This makes sense because the current state of the machine may affect
the future state.
Figure 3.64a illustrates a simple situation in which pushbutton 00000 turns ON machine 10000. This,
of course, would not be a satisfactory pushbutton switch, because as soon as the pushbutton is released,
the machine comes to the OFF state. Figure 3.64b adds an OR condition that keeps the machine ON if it is
already ON. This is an improvement, but now there is a new problem; once turned ON, the output will
Fig. 3.64  Machine start/stop control: (a), (b), (c) ladder diagrams; (d) timing chart
never be turned OFF by the logic system. We add another input switch in Fig. 3.64c. Note that 00001
contact is normally closed. Input 00000 turns ON output 10000; input 10000 keeps output 10000 ON
until input 00001 turns it OFF.
The timing chart of the logic system is shown in Fig. 3.64d.
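The seal-in behavior of Fig. 3.64c reduces to a single Boolean expression evaluated once per scan; the sketch below runs it over a sequence of start/stop states.

```python
# A sketch of the seal-in rung of Fig. 3.64c: the output holds itself ON
# through its own contact until the NC stop contact breaks the rung.

def seal_in(start: int, stop: int, output: int) -> int:
    """10000 = (00000 OR 10000) AND (NOT 00001)."""
    return 1 if (start == 1 or output == 1) and stop == 0 else 0

out = 0
for start, stop in [(0, 0), (1, 0), (0, 0), (0, 0), (0, 1), (0, 0)]:
    out = seal_in(start, stop, out)
    print(out, end=" ")    # 0 1 1 1 0 0: ON after start, OFF after stop
```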
Fig. 3.65  Automatic weigh station with a trap door for diverting overweight items
Consider next the system of Fig. 3.65. An overweight item detected at the automatic weigh station is to
be diverted to the overweight track by opening the trap door for a fixed interval of time.
The variables of the logic system are defined as follows (refer to Fig. 3.66a). 00000 represents the
pressure switch connected to terminal 00 of input module 000. It senses the overweight item on the
automatic weigh station. The bit 00000 in the input table latches 1 for the overweight item. 10000
represents a solenoid connected to terminal 00 of the output module 100, which pushes the trap door
open. When the bit 10000 in the output table latches 1, the trap door is open and when the bit is 0, the
trap door is closed.
For a 4 sec delay, the set value 0040 is stored in word storage area of memory. Countdown of this number
to 0000 will give an elapsed time of 4 seconds in our PLC system, wherein we assume that the timer
counts time-based intervals of 0.1 sec (40 counts will mean an elapsed time of 4 sec).
Fig. 3.66  (a) Ladder diagram; (b) timing chart, for the trap-door control system
The timer TIM000 is activated when its execution condition goes ON, and starts decreasing from the set
value. When the value reaches 0000, the timer contact point is turned ON. It is reset to the set value when the
execution condition goes OFF. The timer contact point works as an internal work bit (a work bit/internal
output is for use of the program only; it does not turn ON/OFF external field devices).
It is obvious from the ladder diagram of Fig. 3.66a that once an overweight item is detected, the trap
door opens; it remains open for 4 sec, and thereafter it closes. Figure 3.66b shows the timing chart for
the logic system.
Fig. 3.67  Parts-counting system: part sensor and box sensor on a box conveyor
Fig. 3.69  Timing chart for the parts-counting system: signals 00000, 00001, 01000, 00002, 00003, CNT010, 10000, and 10001
3.9.4 PLC Programming Devices
PLC programming methods vary from manufacturer to manufacturer, but the basic ladder diagram
approach appears to be the standard throughout the industry. A CRT terminal, connected to the CPU of the
PLC through a peripheral port, is perhaps the most common device used for programming the controller. A
CRT is a self-contained video display unit, with a keyboard and the necessary electronics to communicate
with the CPU. The graphic display on the CRT screen appears as a ladder diagram. This ladder diagram
takes form while the programmer builds it up using the keyboard. The keys themselves carry the contact
and coil symbols of ladder diagrams (such as the NO and NC contact symbols), which are interpreted exactly
as explained earlier in this section.
A limitation of the CRT is that the device is not interchangeable from one manufacturer’s PLC family to
another. However, with the increasing number of products in the manufacturers’ product lines and user
standardization of products, these programming devices may be a good choice, especially if the user has
standardized with one brand of PLCs.
At the other end of the spectrum of PLC programming devices is a Programming Console for programming
small PLCs (up to 128 I/O). Physically, these devices resemble handheld calculators but have a larger
display and a somewhat different keyboard. The Programming Console uses keys with two- or three-letter
abbreviations, to write programs that bear some semblance to computer coding. The display at the top
of the Console exhibits the PLC instruction located in the User Program memory area. As with CRTs,
Programming Consoles are designed so that they are compatible with controllers of the product family.
Common usage of a Personal Computer (PC) in our daily lives has led to a new breed of PLC programming
devices. Due to the PC’s general-purpose architecture and de facto standard operating system, PLC
manufacturers provide the necessary software to implement the ladder diagram entry, editing and real-
time monitoring of the PLC’s control program. PCs will soon be the programming device of choice, not
so much because of their PLC programming capabilities, but because these PCs may already be present at
the location where the user performs the programming.
The programming device is connected to the CPU through a peripheral port. After the CPU has been
programmed, the programming device is no longer required for CPU and process operation; it can be
disconnected and removed. Therefore, we may need only one programming device for a number of
operational PLCs. The programming device may be moved about in the plant as needed.
Programming details for any manufacturer’s PLC are not included here. These aspects are described in
every manufacturer’s literature.
3.9.5 PLC Sizes and Capabilities
Programmable logic controllers are available in many sizes, covering a wide spectrum of capability. On
the low end are ‘relay replacers’ with minimum I/O and memory capability. At the high end are large
supervisory controllers, which play an important role in distributed control systems—by performing
a variety of control and data acquisition functions. In between these two extremes are multifunctional
controllers with communication capability which allow integration with various peripherals, and
expansion capability which allows the product to grow, as the application requirements change.
PLCs with analog input modules and analog output modules, for driving analog valves and actuators
using the PID control algorithms, are being used in process industries.
Large PLCs are used for complicated control tasks that require analog control, data acquisition, data
manipulation, numerical computations and reporting. The enhanced capabilities of these controllers
allow them to be used effectively in applications where LAN (local area network) may be required.
Some PLCs offer the ability to program in other languages besides the conventional ladder language.
An example is the BASIC programming language. Other manufacturers use what is called ‘Boolean
Mnemonics’, to program a controller. The Boolean language is a method used to enter and explain the
control logic which follows Boolean algebra.
REVIEW EXAMPLES
Consider the unity-feedback sampled-data system shown in Fig. 3.71, in which a fictitious delay e^(–ΔTs),
0 ≤ Δ ≤ 1, has been inserted so that the response between sampling instants can be computed. With
T = 1 sec and Δ = 0.5,
Z [Gh0(s)G(s)e^(–ΔTs)] = (1 – z^(–1)) [ 1/(z – 1)^2 – 0.5/(z – 1) + 0.6065/(z – 0.3679) ]
Fig. 3.71  Unity-feedback sampled-data system: sampler (period T), zero-order hold Gh0(s), plant G(s) = 1/(s(s + 1)), and fictitious delay e^(–ΔTs)
Referring to Eqn. (3.73) and noting that R(z) = z/(z – 1), we have
Ŷ(z) = (0.1065 z^(–1) + 0.4709 z^(–2) + 0.0547 z^(–3)) / (1 – 2 z^(–1) + 1.6321 z^(–2) – 0.6321 z^(–3))
This equation can be expanded into an infinite series in z^(–1):
Ŷ(z) = 0.1065 z^(–1) + 0.6839 z^(–2) + 1.2487 z^(–3) + 1.4485 z^(–4)
+ 1.2913 z^(–5) + 1.0078 z^(–6) + 0.8236 z^(–7) + 0.8187 z^(–8) + ⋯
Therefore,
ŷ (T ) = y(0.5T) = 0.1065; ŷ(2T ) = y(1.5T) = 0.6839; ŷ (3T ) = y(2.5T) = 1.2487;
ŷ(4T ) = y(3.5T) = 1.4485; ŷ (5T ) = y(4.5T) = 1.2913; ŷ(6T ) = y(5.5T) = 1.0078;
ŷ(7T ) = y(6.5T) = 0.8236; ŷ(8T ) = y(7.5T) = 0.8187;
These values give the response at the midpoints between pairs of consecutive sampling points. Note that
by varying the value of Δ between 0 and 1, it is possible to find the response at any point between two
consecutive sampling points.
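The series expansion can be checked numerically: writing Ŷ(z) as a difference equation in the output samples amounts to polynomial long division. A short Python sketch:

```python
# A sketch verifying the series expansion above by long division of
# Y-hat(z), with numerator and denominator in powers of z^-1: each new
# output sample follows from the earlier ones.

num = [0.0, 0.1065, 0.4709, 0.0547]          # coefficients of z^0..z^-3
den = [1.0, -2.0, 1.6321, -0.6321]

y = []
for k in range(1, 9):
    b_k = num[k] if k < len(num) else 0.0
    # y(k) = b(k) - sum over j of den[j] * y(k - j)
    y_k = b_k - sum(den[j] * y[k - 1 - j] for j in range(1, len(den))
                    if k - 1 - j >= 0)
    y.append(y_k)

print([round(v, 4) for v in y])
# [0.1065, 0.6839, 1.2487, 1.4485, 1.2913, 1.0078, 0.8237, 0.8187]
# (matches the series above to the rounding of the given coefficients)
```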
Fig. 3.72
The next review example examines the sensitivity of the direct realization of a digital controller to
errors in its coefficients. With coefficient perturbations δbi and δai, the realized transfer function is
D̂(z) = [(b0 + δb0) z^n + (b1 + δb1) z^(n–1) + ⋯ + (bn + δbn)] / [z^n + (a1 + δa1) z^(n–1) + ⋯ + (an + δan)]
To study the effects of this realization on the dynamic response, we consider the characteristic equation
and determine how a particular root changes when a particular parameter undergoes change.
Δ(z, a1, a2, ..., an) = z^n + a1 z^(n–1) + ⋯ + an = 0    (3.76)
is the characteristic equation with roots λ1, λ2, …, λn:
Δ(z, a1, a2, ..., an) = (z – λ1)(z – λ2) ⋯ (z – λn)    (3.77)
We shall consider the effect of parameter aj on the root λk. By definition,
Δ(λk, aj) = 0
If aj is changed to aj + δaj, then λk also changes, and the perturbed polynomial satisfies
Δ(λk + δλk, aj + δaj) = Δ(λk, aj) + (∂Δ/∂z)|z=λk δλk + (∂Δ/∂aj)|z=λk δaj + ⋯ = 0    (3.78)
From Eqn. (3.76),
(∂Δ/∂aj)|z=λk = λk^(n–j)
and from Eqn. (3.77),
(∂Δ/∂z)|z=λk = ∏(i≠k) (λk – λi)
Solving Eqn. (3.78) to first order for the root shift gives
δλk = – [λk^(n–j) / ∏(i≠k) (λk – λi)] δaj    (3.79)
Two observations can be made.
(i) Since |λk| < 1 for a stable discrete-time system, the numerator λk^(n–j) in Eqn. (3.79) is largest
for j = n. Therefore, the most sensitive parameter in the characteristic equation (3.76) is an.
(ii) The denominator in Eqn. (3.79) is the product of the vectors from the remaining characteristic roots
to λk. Thus, if all the roots are in a cluster, the sensitivity is high.
In the cascade and parallel realizations, the coefficients, mechanized in the algorithm, are poles themselves;
these realizations are generally less sensitive than the direct realization.
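Both observations are easy to verify numerically; the sketch below compares the first-order estimate of Eqn. (3.79) with the actual root shift for a polynomial with clustered roots (the root values are assumed for illustration).

```python
# A numerical check of the root-sensitivity result: perturb a_n of a
# polynomial with clustered roots and compare the resulting root shift
# with the first-order estimate of Eqn. (3.79).

import numpy as np

roots = np.array([0.50, 0.55, 0.60])          # an assumed root cluster
coeffs = np.poly(roots)                       # z^3 + a1 z^2 + a2 z + a3
n, j, da = 3, 3, 1e-6                         # perturb a_n (most sensitive)

lam = roots[0]
denom = np.prod([lam - r for r in roots if r != lam])
predicted = -lam**(n - j) * da / denom        # first-order estimate

perturbed = coeffs.copy()
perturbed[-1] += da                           # a_n -> a_n + da
new_lam = min(np.roots(perturbed), key=lambda z: abs(z - lam))
print(predicted, (new_lam - lam).real)        # the two shifts agree closely
```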
PROBLEMS
3.1 Find Y(z)/R(z) for the sampled-data closed-loop system of Fig. P3.1.
Fig. P3.1  Sampled-data closed-loop system with feedback element H(s)
3.2 For the sampled-data feedback system with digital network in the feedback path as shown in
Fig. P3.2, find Y(z)/R(z).
Fig. P3.2  Sampled-data feedback system with digital network H(z) in the feedback path
3.3 Find Y(z) for the sampled-data closed-loop system of Fig. P3.3.
3.4 Obtain the z-transform of the system output for the block diagram of Fig. P3.4.
3.5 Obtain the transfer function Y(z)/R(z) of the closed-loop control system shown in Fig. P3.5. Also
obtain the transfer function between X(z) and R(z).
3.6 Consider the block diagram of a digital control system shown in Fig. P3.6; r(t) stands for reference
input and w(t) for disturbance. Obtain the z-transform of the system output when r(t) = 0.
Fig. P3.6  Digital control system: D(z), ZOH Gh0(s), and plant G(s), with disturbance w(t) entering at the plant input
3.7 Shown in Fig. P3.7 is the block diagram of the servo control system for one of the joints of a
robot. With D(z) = 1, find the transfer function model of the closed-loop system. Sampling period
T = 0.25 sec.
Fig. P3.7  Robot-joint servo: compensator D(z), D/A converter, preamplifier (gain 20), motor dynamics 1/((s + 1)s), gears (ratio 1/20), and shaft encoder feeding back θL(k)
3.8 The plant of the speed control system shown in Fig. P3.8 consists of load, armature-controlled dc
motor and a power amplifier. Its transfer function is given by
ω(s)/V(s) = 185/(0.025 s + 1)
Find the discrete-time transfer function ω(z)/ωr(z) for the closed-loop system. Sampling period
T = 0.05 sec.
3.9 For the system shown in Fig. P3.9, the computer solves the difference equation u(k) = u(k – 1) +
0.5 e(k), where e(k) is the filter input and u(k) is the filter output. If the sampling rate fs = 5 Hz,
find Y(z)/R(z).
Fig. P3.9  System with A/D converter and digital filter in the loop
3.10 Consider the sampled-data system shown in Fig. P3.10. Find Y(z)/R(z) when (i) tD = 0.4 sec, (ii)
tD = 1.4 sec.
Fig. P3.10  Unity-feedback sampled-data system: ZOH Gh0(s) and plant e^(–tDs)/(s + 1); T = 1 sec
3.11 Figure P3.11 shows an electrical oven provided with temperature measurement by a thermocouple
and having a remotely controlled, continuously variable power input. The task is to design a
microprocessor-based control system to provide temperature control of the oven.
Fig. P3.11  Oven with temperature-measuring device, microcontroller, and continuously variable electrical power input
Fig. P3.16  Unity-feedback sampled-data system: sampler (period T), ZOH Gh0(s), and plant K/(s(s + 1))
3.16 Consider the system shown in Fig. P3.16. Using Jury stability criterion, find the range of K > 0
for which the system is stable.
(b) A sampler and ZOH are now introduced in the forward path (Fig. P3.17). For a unit-step
input, determine the output y(k) for first five sampling instants when (i) T = 0.01 sec, and (ii)
T = 0.001 sec. Compare the result with that obtained earlier in part (a) above.
Fig. P3.17  Unity-feedback sampled-data system with ZOH Gh0(s) and plant G(s)
3.18 For the sampled-data system shown in Fig. P3.18, find the output y(k) for r(t) = unit step.
3.19 For the sampled-data system of Fig. P3.19, find the response y(kT); k = 0, 1, 2, ..., to a unit-step
input r(t). Also, obtain the output at the midpoints between pairs of consecutive sampling points.
Fig. P3.19  Unity-feedback sampled-data system: ZOH and plant 1/(s + 1); T = 1 sec
3.22 Consider the temperature control system shown in Fig. P3.22a. A typical experimental curve,
obtained by opening the steam valve at t = 0 from fully closed position to a position that allows a
flow Qm of 1 kg/min with initial sensor temperature θ of 0 °C is shown in Fig. P3.22b.
(a) Approximate the process reaction curve by a first-order plus dead-time model using two-
points method of approximation.
(b) Calculate the QDR tuning parameters for a PID controller. The PID control is to be carried
out with a sampling period of 1 min on a computer control installation.
Fig. P3.22  (a) Temperature control system: steam boiler, valve, radiator, and room with dial thermometer; (b) process reaction curve: θss = 30 °C, with θ = 0.283 θss at t = 25 min and θ = 0.632 θss at t = 65 min
3.23 Consider the liquid-level control system shown in Fig. 1.6. The digital computer was programmed to act as an adjustable-gain proportional controller with a sampling period of T = 10 sec. The proportional gain was increased in steps. After each increase, the loop was disturbed by introducing a small change in set-point, and the response of the controlled variable (level in the tank) was observed. A proportional gain of 4.75 resulted in oscillatory behavior, with the amplitude of oscillations approximately constant. The period of oscillations measured from the response is 800 sec.
The computer implements the digital PI control algorithm. Determine tuning parameters for the controller:
Δu(k) = u(k) – u(k – 1) = Kc[e(k) – e(k – 1) + (T/TI) e(k)]
where
u(k) = output of controller at kth sampling instant;
Du(k) = change in output of controller at kth sampling instant;
e(k) = error at kth sampling instant;
T = sampling time;
TI = integral time; and
Kc = proportional gain.
3.24 A traffic light controller is to be designed for a road, partly closed to traffic for urgent repair work
(Fig. P3.24). North traffic light will go GREEN for 30 sec with South traffic light giving RED
signal. For the next 15 sec, both the traffic lights will give RED signals. Thereafter, South traffic
light will go GREEN for 30 sec with North traffic light giving RED signal. Both the traffic lights
will give RED signal for the next 15 sec. Then this cycle will repeat.
Develop a PLC ladder diagram that accomplishes this objective.
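Before committing the logic to ladder rungs, the timing cycle can be prototyped in software. The following Python sketch is a behavioral model only, not the requested ladder diagram; all names in it are ours.

import itertools, time

# The four timed states of the cycle: (north signal, south signal, duration in sec)
CYCLE = [("GREEN", "RED",   30),
         ("RED",   "RED",   15),
         ("RED",   "GREEN", 30),
         ("RED",   "RED",   15)]

def run(cycles=1, tick=time.sleep):
    for north, south, duration in itertools.islice(itertools.cycle(CYCLE), 4*cycles):
        print(f"North: {north:5s}  South: {south:5s}  hold {duration} sec")
        tick(duration)      # each hold maps to an on-delay timer rung in the PLC

run(cycles=1, tick=lambda s: None)   # dry run, without real-time delays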
3.25 Consider the tank system of Fig. P3.25. Valve V1 opens on pressing a pushbutton PB1 and liquid
begins to fill the tank. At the same time, the stirring motor M starts operating. When the liquid
level passes LL2 and reaches LL1, the valve V1 closes and the stirring motor stops. When PB1 is
pressed again, the valve V2 opens and starts draining the liquid. When the liquid level drops below
LL2, valve V2 closes. This cycle is repeated five times. A buzzer will go high after 5 repetitions.
The buzzer will be silenced by pressing pushbutton PB2. The process will now be ready to take up
another filling-stirring-draining operation under manual control. Develop a PLC ladder diagram
that accomplishes this objective.
(Fig. P3.25 Tank with stirring motor M, inlet valve V1, level switches LL1 (upper) and LL2 (lower), and drain valve V2.)
3.26 A control circuit is to be developed to detect and count the number of products being carried
on an assembly line (Fig. P3.26). A sensor activates a counter as a product leaves the conveyor
and enters the packaging section. When the counter counts five products, the circuit energizes a
solenoid. The solenoid remains energized for a period of 2 seconds, the time being measured by a software timer. When the set time has elapsed, the solenoid is deenergized, causing it to retract; and the control circuit is ready for the next cycle.
Develop a suitable PLC ladder diagram.
(Fig. P3.26 Conveyor with product sensor and solenoid at the packaging section.)
3.27 In the system of Fig. P3.27, a PLC is used to start and stop the motors of a segmented conveyor
belt. This allows only belt sections carrying an object to move. Motor M3 is kept ON during
the operation. Position of a product is first detected by proximity switch S3, which switches on
the motor M2. Sensor S2 switches on the motor M1 upon detection of the product. When the
product moves beyond the range of sensor S2, a timer is activated, and when the set time of 20 sec has elapsed, motor M2 stops. Similarly, when the product moves beyond the range of sensor S1, another timer is activated, and when the set time of 20 sec (for unloading the product) has elapsed, motor M1 stops.
(Fig. P3.27 Segmented conveyor with sensors S3, S2, S1 and belt motors M3, M2, M1.)
3.28 The system of Fig. P3.28 has the objective of drilling a hole in a workpiece carried on a carriage. When the start button PB1 is pushed and LS1 is ON (workpiece loaded), the feed-carriage motor runs in the CW direction, moving the carriage from left to right. When the work comes exactly under the drill, which is sensed by limit switch LS2, the motor is cut off and the work is ready for the drilling operation. A timer with a set time of 7 sec is activated. When the timer set time has elapsed, the motor reverses, moving the carriage from right to left. When the workpiece reaches the LS1 position, the motor stops. The motor can also be stopped by a stop pushbutton while in operation.
Develop a suitable ladder diagram for PLC control.
(Fig. P3.28 Drilling station: clamped workpiece on a carriage between limit switches LS1 and LS2, under the drill.)
Chapter 4
Design of Digital Control Algorithms
4.1 INTRODUCTION
During recent decades, the design procedures for analog control systems have been well formulated and a
large body of knowledge has been accumulated. The analog-design methodology, based on conventional
techniques of root locus and Bode plots or the tuning methods of Ziegler and Nichols, may be applied
to designing digital control systems. The procedure would be to first design the analog form of the
controller, or compensator, to meet a particular set of performance specifications. Having done this, the
analog form can be transformed to a discrete-time formulation. This approach is based on the fact that
a digital system with a high sampling rate approximates an analog system. The justification for using digital control under these circumstances must be that the practical limitations of the analog controller are overcome, that the implementation is cheaper, or that supervisory control and communications are more easily implemented.
However, the use of high sampling rates wastes computer power, and can lead to problems of arithmetic
precision, etc. One is, therefore, driven to find methods of design which take account of the sampling
process.
The alternative approach is to design controllers directly in the discrete-time domain, based on the
specifications of closed-loop system response. The controlled plant is represented by a discrete-time
model which is a continuous-time system observed, analyzed, and controlled at discrete intervals of time.
This approach provides a direct path to the design of digital controllers. The features of direct digital
design are that the sample rates are generally lower than those for discretized analog design, and the
design is directly ‘performance based’.
Figure 4.1 shows the basic structure of a digital control system. The design problem generally revolves around the choice of the control function D(z), in order to impart a satisfactory form to the closed-loop transfer function. The choice is constrained by the function Gh0G(z), representing the fixed process elements.
A wide variety of digital-design procedures is available; these fall into the following two categories:
(i) direct synthesis procedures; and
(ii) iterative design procedures.
Fig. 4.1 Basic structure of a digital control system: control algorithm D(z) and ZOH Gh0(s) drive the process G(s) through samplers (period T); the sensor H(s) closes the loop
The direct synthesis procedures assume that the control function D(z) is not restricted in any way by hardware or software limitations, and can be allowed to take any form demanded by the nature of the fixed process elements and the specifications of the required system performance. This design approach has found wider application in digital control systems than has the equivalent technique used with analog systems. In a digital control system, realization of the required D(z) may involve no more than programming a special-purpose software procedure. With analog systems, the limitation was in terms of the complications involved in designing special-purpose analog controllers.
The design obtained by a direct synthesis procedure will give perfect nominal performance. However,
the performance may be inadequate in the field because of the sensitivity of the design to plant disturbances
and modeling errors. It is important that a control system is robust in its behavior with respect to the
discrepancies between the model and the real process, and uncertainties in disturbances acting on the
process. The robustness of some standard control structures, such as the three-term (PID) control algorithm, is very well established. The design of such algorithms calls for an iterative
design procedure where the choice of control function D(z) is restricted to using a standard algorithm
with variable parameters; the designer must then examine the effect of the choice of controller parameters
on the system performance, and make an appropriate final choice. The iterative design procedures for
digital control systems are similar to the techniques evolved for analog system design, using root locus
and frequency response plots.
Figure 4.2 summarizes the basic routes to the design of digital controllers for continuous-time processes.
The route: continuous-time modeling → continuous-time control design → discrete-time approximation of the controller, was considered in Chapter 2 (refer to Example 2.17). This chapter is devoted to the following route:
Continuous-time modeling → discrete-time approximation of the model → discrete-time control design.
Plant models can be obtained from the first principles of physics. The designer may, however, turn to
the other source of information about plant dynamics, which is the data taken from experiments directly
conducted to excite the plant, and to measure the response. The process of constructing models from
experimental data is called system identification. An introduction to system identification and adaptive
control is given in Chapter 10.
One obvious, but fundamental, point is that control design always begins with a sufficiently accurate
mathematical model of the process to be controlled. For a typical industrial problem, the effort required
for obtaining a mathematical model of the process to be controlled, is often an order of magnitude greater
than the effort required for control design proper. Any control-design method that requires only a simple
(Fig. 4.2 Basic routes to the design of digital controllers for a continuous-time process: differential-equation models from first principles lead to a continuous-time plant model, while experimentally obtained discrete data leads to identification of a discrete-time model; both routes end in a digital controller.)
(Fig. 4.3 Sampled-data feedback structures: (a) with sensor H(s) in the feedback path; (b) the unity-feedback equivalent used in this chapter.)
4.2 z-PLANE SPECIFICATIONS OF CONTROL SYSTEM DESIGN
The central concerns of controller design are for good relative stability and speed of response, good
steady-state accuracy, and sufficient robustness. Requirements on time response need to be expressed
as constraints on z-plane pole and zero locations, or on the shape of the frequency response in order to
permit design in the transform domain. In this section, we give an outline of specifications of controller
design in the z-plane.
Our attention will be focused on the unity-feedback systems1 of the form shown in Fig. 4.3b, with the
open-loop transfer function Gh0G(z) = Z [Gh0(s)G(s)], having no poles outside the unit circle in the
z-plane. Further, the feedback system of Fig. 4.3b is desired to be an underdamped system.
1. It is assumed that the reader is familiar with the design of unity and non-unity-feedback continuous-time systems. With this background, the results presented in this chapter for unity-feedback discrete-time systems can easily be extended to the non-unity-feedback case.
The nature of transient response of a linear control system is revealed by any of the standard test
signals—impulse, step, ramp, parabola—as this nature is dependent on system poles only and not on the
type of the input. It is, therefore, sufficient to specify the transient response to one of the standard test
signals; a step is generally used for this purpose. Steady-state response depends on both the system and
the type of input. From the steady-state viewpoint, the ‘easiest’ input is generally a step since it requires
only maintaining the output at a constant value, once the transient is over. A more difficult problem
is tracking a ramp input. Tracking a parabola is even more difficult since a parabolic function is one
degree faster than the ramp function. In practice, we seldom find it necessary to use a signal faster than a
parabolic function; characteristics of actual signals which the control systems encounter, are adequately
represented by step, ramp, and parabolic functions.
4.2.1 Steady-State Accuracy
Steady-state accuracy refers to the requirement that, after all transients become negligible, the error between the reference input r and the controlled output y must be acceptably small. The specification on steady-state accuracy is often based on polynomial inputs of degree k: r(t) = (t^k/k!) μ(t). If k = 0, the input is a step of unit amplitude; if k = 1, the input is a ramp with unit slope; and if k = 2, the input is a parabola with unit second derivative. From the common problems of mechanical motion control, these inputs are called, respectively, position, velocity, and acceleration inputs.
For quantitative analysis, we consider the unity-feedback discrete-time system shown in Fig. 4.3b. The
steady-state error is the difference between the reference input r(k) and the controlled output y(k), when
steady state is reached, i.e., steady-state error
e*ss = lim(k→∞) e(k) = lim(k→∞) [r(k) – y(k)]   (4.1a)
Using the final value theorem (Eqn. (2.52)),
e*ss = lim(z→1) [(z – 1)E(z)]   (4.1b)
provided that (z – 1)E(z) has no poles on the boundary and outside of the unit circle in the z-plane.
For the system shown in Fig. 4.3b, define
Gh0G(z) = (1 – z⁻¹) Z[G(s)/s]
Then, we have
Y(z)/R(z) = Gh0G(z)/(1 + Gh0G(z))
and
E(z) = R(z) – Y(z) = R(z)/(1 + Gh0G(z))   (4.2)
By substituting Eqn. (4.2) into Eqn. (4.1b), we obtain
e*ss = lim(z→1) [(z – 1)E(z)]   (4.3a)
     = lim(z→1) [(z – 1) R(z)/(1 + Gh0G(z))]   (4.3b)
Thus, the steady-state error of a discrete-time system with unity feedback depends on the reference input signal R(z) and the forward-path transfer function Gh0G(z). By the nature of the limit in Eqns (4.3), we see that the result of the limit can be zero, or can be a constant different from zero. Also, the limit may not exist, in which case the final-value theorem does not apply. However, it is easy to see from the basic definition (4.1a) that e*ss = ∞ in this case anyway, because E(z) will have a pole at z = 1 of order higher than one. Discrete-time systems having a finite nonzero steady-state error when the reference input is a zero-order polynomial input (a constant) are labeled 'Type-0'. Similarly, a system that has finite nonzero steady-state error to a first-order polynomial input (a ramp) is called a 'Type-1' system, and a system with finite nonzero steady-state error to a second-order polynomial input (a parabola) is called a 'Type-2' system.
Let the reference input to the system of Fig. 4.3b be a step function of magnitude unity. The z-transform of the discrete form of r(t) = μ(t) is (refer to Eqn. (2.40))
R(z) = z/(z – 1)   (4.4a)
Substituting R(z) into Eqn. (4.3b), we have
e*ss = lim(z→1) 1/(1 + Gh0G(z)) = 1/(1 + lim(z→1) Gh0G(z))
In terms of the position error constant Kp, defined as
Kp = lim(z→1) Gh0G(z)   (4.4b)
the steady-state error to unit-step input becomes
e*ss = 1/(1 + Kp)   (4.4c)
For a ramp input r(t) = tμ(t), the z-transform of its discrete form is (refer to Eqn. (2.42))
R(z) = Tz/(z – 1)²   (4.5a)
Substituting into Eqn. (4.3b), we get
e*ss = lim(z→1) T/((z – 1)[1 + Gh0G(z)]) = 1/lim(z→1) [((z – 1)/T) Gh0G(z)]
In terms of the velocity error constant Kv, defined as
Kv = (1/T) lim(z→1) [(z – 1)Gh0G(z)]   (4.5b)
the steady-state error to unit-ramp input becomes
e*ss = 1/Kv   (4.5c)
For a parabolic input r(t) = (t²/2)μ(t), the z-transform of its discrete form is (from Eqns (2.41)–(2.42))
R(z) = T²z(z + 1)/(2(z – 1)³)   (4.6a)
Substituting into Eqn. (4.3b),
e*ss = lim(z→1) T²/((z – 1)²[1 + Gh0G(z)]) = 1/lim(z→1) [((z – 1)/T)² Gh0G(z)]
In terms of the acceleration error constant Ka, defined as
Ka = (1/T²) lim(z→1) [(z – 1)² Gh0G(z)]   (4.6b)
the steady-state error to unit-parabolic input becomes
e*ss = 1/Ka   (4.6c)
As said earlier, discrete-time systems can be classified on the basis of their steady-state response to polynomial inputs. We can always express the forward-path transfer function Gh0G(z) as
Gh0G(z) = K Πᵢ(z – zᵢ) / ((z – 1)^N Πⱼ(z – pⱼ)); pⱼ ≠ 1, zᵢ ≠ 1   (4.7)
Gh0G(z) in Eqn. (4.7) involves the term (z – 1)^N in the denominator. As z → 1, this term dominates in determining the steady-state error. Digital control systems are, therefore, classified in accordance with the number of poles at z = 1 in the forward-path transfer function, as described below.
If N = 0, the steady-state errors to various standard inputs, obtained from Eqns (4.1)–(4.7), are
e*ss = 1/(1 + Kp) in response to unit-step input, with Kp = K Πᵢ(z – zᵢ)/Πⱼ(z – pⱼ)|(z=1);
e*ss = ∞ in response to unit-ramp input;
e*ss = ∞ in response to unit-parabolic input.   (4.8a)
Thus, a system with N = 0, or no pole at z = 1 in Gh0G(z), has a finite nonzero position error, and infinite
velocity and acceleration errors at steady state.
Thus, a system with N = 1, or one pole at z = 1 in Gh0G(z), has zero position error, a finite nonzero
velocity error, and infinite acceleration error at steady state.
Thus, a system with N = 2, or two poles at z = 1 in Gh0G(z), has zero position and velocity errors, and a
finite nonzero acceleration error at steady state.
Steady-state errors for various inputs and systems are summarized in Table 4.1.

Table 4.1 Steady-state errors to polynomial inputs

Input             Type-0         Type-1     Type-2
Unit step         1/(1 + Kp)     0          0
Unit ramp         ∞              1/Kv       0
Unit parabolic    ∞              ∞          1/Ka

Kp = lim(z→1) Gh0G(z); Kv = (1/T) lim(z→1) [(z – 1)Gh0G(z)]; Ka = (1/T²) lim(z→1) [(z – 1)² Gh0G(z)]
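The three limits at the foot of Table 4.1 are easily evaluated symbolically. The following minimal sympy sketch (our illustration, not part of the text) does so; the sampled plant used for the Type-1 example is the one that appears later in Example 4.2, Eqn. (4.35).

import sympy as sp

z = sp.symbols('z')

def error_constants(GhG, T):
    # Kp, Kv, Ka from the limits at the foot of Table 4.1
    Kp = sp.limit(GhG, z, 1)
    Kv = sp.limit((z - 1)*GhG, z, 1)/T
    Ka = sp.limit((z - 1)**2*GhG, z, 1)/T**2
    return Kp, Kv, Ka

# Type-1 example (one pole at z = 1): Kp = oo, finite Kv, Ka = 0
GhG = 0.215*(z + 0.85)/((z - 1)*(z - 0.61))
print(error_constants(GhG, 0.1))   # -> (oo, ~10.2, 0)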
The development above indicates that, in general, increased system gain K, and/or addition of poles at
z = 1 to the open-loop transfer function Gh0G(z), tend to decrease steady-state errors. However, as will be
seen later in this chapter, both large system gain and the poles at z = 1 in the loop transfer function, have
destabilizing effects on the system. Thus, a control system design is usually a trade-off between steady-state accuracy and acceptable relative stability.
Example 4.1
In the previous chapter, we have shown that sampling usually has a detrimental effect on the transient response and the relative stability of a control system. It is natural to ask what the effect of sampling on the steady-state error of a closed-loop system will be. In other words, if we start out with a continuous-time system and then add S/H to form a digital control system, how would the steady-state errors of the two systems compare, when subjected to the same type of input?
Let us first consider the system of Fig. 4.3b without S/H. Assume that the process G(s) is represented by the Type-1 transfer function
G(s) = K(1 + τa s)(1 + τb s) ⋯ (1 + τm s) / (s(1 + τ1 s)(1 + τ2 s) ⋯ (1 + τn s))
having more poles than zeros. The velocity error constant is
Kv = lim(s→0) sG(s) = K
The steady-state error of the system to unit-step input is zero, to unit-ramp input is 1/K, and to unit-parabolic input is ∞.
We now consider the system of Fig. 4.3b with S/H:
Gh0G(z) = (1 – z⁻¹) Z[K(1 + τa s)(1 + τb s) ⋯ (1 + τm s) / (s²(1 + τ1 s)(1 + τ2 s) ⋯ (1 + τn s))]
        = (1 – z⁻¹) Z[K/s² + K1/s + terms due to the nonzero poles]
        = (1 – z⁻¹) [KTz/(z – 1)² + K1 z/(z – 1) + terms due to the nonzero poles]
It is important to note that the terms due to the nonzero poles do not contain the term (z – 1) in the denominator. Thus, the velocity error constant is
Kv = (1/T) lim(z→1) [(z – 1)Gh0G(z)] = K
The steady-state error of the discrete-time system to unit-step input is zero, to unit-ramp input is 1/K, and to unit-parabolic input is ∞. Thus, for a Type-1 system, the system with S/H has exactly the same steady-state error as the continuous-time system with the same process transfer function (this, in fact, is true for Type-0 and Type-2 systems also).
Equations (4.5b) and (4.6b) appear to show that the velocity error constant and the acceleration error constant of a digital control system depend on the sampling period T. However, in the process of evaluation, T cancels out, and the error depends only on the parameters of the process and the type of input.
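The cancellation of T can be checked symbolically. The following small sympy sketch of the argument above keeps only the K/s² and K1/s terms, whose ZOH z-transforms carry the (z – 1) denominator factors:

import sympy as sp

z, T, K, K1 = sp.symbols('z T K K_1', positive=True)

# (1 - z^{-1}) [KTz/(z-1)^2 + K1 z/(z-1)]; the terms due to the nonzero
# poles contribute no (z - 1) denominator factor and drop out of the limit.
GhG = (1 - 1/z)*(K*T*z/(z - 1)**2 + K1*z/(z - 1))

Kv = sp.simplify(sp.limit((z - 1)*GhG, z, 1)/T)
print(Kv)   # -> K : the sampling period T cancels, as claimed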
4.2.2 Transient Accuracy
Transient performance in the time domain is defined in terms of parameters of the system response to a step in the command input. The most frequently used parameters are rise time, peak time, peak overshoot, and settling time. Figure 4.4 shows a typical unit-step response of a control system.
For underdamped systems, the rise time, tr, is normally defined as the time required for the step response to rise from 0% to 100% of its final value.
(Fig. 4.4 Typical unit-step response y(t), showing the peak overshoot Mp, the allowable tolerance band about the final value, rise time tr, peak time tp, and settling time ts.)
The peak overshoot, Mp, is the peak value of the response curve measured from unity. The time at which
peak occurs is referred to as the peak time, tp.
The time required for the response to damp out all transients, is called the settling time, ts. Theoretically,
the time taken to damp out all transients may be infinity. In practice, however, the transient is assumed
to be over when the error is reduced below some acceptable value. Typically, the acceptable level is set
at 2% or 5% of the final value.
The use of root locus plots for the design of digital control systems necessitates the translation of time-
domain performance specifications into desired locations of closed-loop poles in the z-plane. However,
the use of frequency response plots necessitates the translation of time-domain specifications in terms of
frequency response features such as bandwidth, phase margin, gain margin, resonance peak, resonance
frequency, etc.
Pole Locations in the z-Plane
Our approach is to first obtain the transient response specifications in terms of characteristic roots in the s-plane, and then use the relation
z = e^(sT)   (4.9)
to map the s-plane characteristic roots to the z-plane.
The transient response of Fig. 4.4 resembles the unit-step response of an underdamped second-order system
Y(s)/R(s) = ωn²/(s² + 2ζωn s + ωn²)   (4.10)
where ζ = damping ratio, and ωn = undamped natural frequency.
The transient response specifications in terms of rise time tr, peak time tp, peak overshoot Mp, and settling time ts can be approximated to the parameters ζ and ωn of the second-order system defined by Eqn. (4.10).2
2. Chapter 6 of reference [155].
The low-pass character of the plant, together with the ZOH device, attenuates the responses due to the poles in the complementary strips; only the poles in the primary strip, generally, need be considered.
Figure 4.5 illustrates the mapping of a constant-ζ locus in the primary strip of the s-plane to the z-plane. As the imaginary parts ±jωd = ±jωn√(1 – ζ²) of the s-plane poles move closer to the limit ±jωs/2 of the primary strip, the angles θ = ±ωdT = ±ωnT√(1 – ζ²) of the z-plane poles approach the direction of the negative real axis. The negative real axis in the z-plane thus corresponds to the boundaries of the primary strip in the s-plane. Figure 4.5 also shows the mapping of a constant-ωn locus, in the primary strip of the s-plane, to the z-plane.
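The mapping (4.9) is trivial to compute. A small helper (ours, for illustration) locates the z-plane image of the dominant s-plane pole for given ζ, ωn and T:

import cmath, math

def z_pole(zeta, wn, T):
    # dominant s-plane pole s = -zeta*wn + j*wn*sqrt(1 - zeta^2), mapped by Eqn (4.9)
    s = complex(-zeta*wn, wn*math.sqrt(1 - zeta**2))
    return cmath.exp(s*T)

p = z_pole(0.5, 8/25, 1.0)   # the boundary values chosen just below
print(abs(p), math.degrees(cmath.phase(p)))
# radius r = e^{-zeta*wn*T} ~ 0.85; angle theta = wd*T ~ 15.9 deg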
In the z-plane, the closed-loop poles must lie on the constant-ζ spiral to satisfy the peak overshoot requirement; the poles must also lie on the constant-ωn curve to satisfy the speed-of-response requirement. The intersection of the two curves (Fig. 4.5b) provides the preferred pole locations, and the design aim is to make the root locus pass through these locations.
If one chooses the following boundaries for the system response:
T = 1 sec
Peak overshoot ≤ 15% ⇒ ζ ≥ 0.5
Settling time ≤ 25 sec ⇒ ωn ≥ 8/25,
the acceptable boundaries for the closed-loop pole locations in the z-plane are shown in Fig. 4.6.
In the chart of Fig. 4.6, the patterns are traced for various natural frequencies and damping ratios. Such
a chart is a useful aid in root-locus design technique. We will be using this chart in our design examples.
Fig. 4.6 Loci of constant ζ (ζ = 0 to 1) and constant ωn (ωn = 2π/10T to π/T) inside the unit circle of the z-plane
Most control systems found in practice are of high order. The preferred locations of closed-loop poles given by Fig. 4.5b realize the specified transient performance only if the other closed-loop poles and zeros of the system have negligible effect on the dynamics of the system, i.e., only if the closed-loop poles corresponding to the specified ζ and ωn are dominant.
In the following, we examine the relationship between the pole-zero patterns and the corresponding step responses of discrete-time systems. Our attention will be restricted to the step responses of the discrete-time system with transfer function
Y(z)/R(z) = K(z – z1)(z – z2) / ((z – p)(z – re^(jθ))(z – re^(–jθ))) = K(z – z1)(z – z2) / ((z – p)(z² – 2r cos θ z + r²))   (4.16)
for a selected set of values of the parameters K, z1, z2, p, r and θ.
The translation of time-domain specifications into desired locations of pair of dominant closed-loop
poles in the z-plane, is useful if the design is to be carried out by using root locus plots. The use of
frequency response plots necessitates the translation of time-domain performance specifications in terms
of frequency response features.
(Fig. 4.8 Step-response sensitivity for systems of the form (4.16) with ζ = 0.5: (a) step responses y(k), including a case with zero z1 = 0.8; (b) rise time (in number of samples, log scale) versus zero location, for θ = 18° and 72°; (c) rise time versus pole location, for θ = 18°, 45° and 72°.)
All the frequency-domain methods of continuous-time systems can be extended for the analysis and
design of digital control systems. Consider the system shown in Fig. 4.3b. The closed-loop transfer
function of the sampled-data system is
Y(z)/R(z) = Gh0G(z)/(1 + Gh0G(z))   (4.18)
Just as in the case of continuous-time systems, the absolute and relative stability conditions of the closed-loop discrete-time system can be investigated by making the frequency response plots of Gh0G(z). The frequency response plots of Gh0G(z) are obtained by setting z = e^(jωT), and then letting ω vary from –ωs/2 to ωs/2. This is equivalent to mapping the unit circle in the z-plane onto the Gh0G(e^(jωT))-plane. Since the unit circle in the z-plane is symmetrical about the real axis, the frequency response plot of Gh0G(e^(jωT)) will also be symmetrical about the real axis, so that only the portion that corresponds to ω = 0 to ω = ωs/2 needs to be plotted.
A typical curve of the closed-loop frequency response (refer to Eqn. (4.18))
(Y/R)(e^(jωT)) = Gh0G(e^(jωT)) / (1 + Gh0G(e^(jωT)))   (4.19)
is shown in Fig. 4.9. The amplitude ratio and phase angle will approximate the ideal 1.0 ∠ 0º for some range of 'low' frequencies, but will deviate at high frequencies. The height Mr (resonance peak) of the peak is a relative stability criterion; the higher the peak, the poorer the relative stability. Many systems are designed to exhibit a resonance peak in the range 1.2 to 1.4. The frequency ωr (resonance frequency) at which this peak occurs is a speed-of-response criterion; the higher the ωr, the faster the system. For systems that exhibit no peak (sometimes the case), the bandwidth ωb is used for speed-of-response specifications. Bandwidth is the frequency at which the amplitude ratio has dropped to 1/√2 times its zero-frequency value. It can, of course, be specified even if there is a peak.
Alternative measures of relative stability and speed of response are stability margins and crossover frequencies. To define these measures, a discussion of the Nyquist stability criterion in the z-plane is required. Given the extensive foundation for the Nyquist criterion for continuous-time systems that we laid in Chapter 10 of the companion book [155], it will not take us long to present the criterion for the discrete-time case.
4.2.3 Nyquist Stability Criterion in the z-Plane
The concepts involved in the z-plane Nyquist stability criterion are identical to those for the s-plane criterion. In the s-plane, the region of stability is infinite in extent, namely, the entire left half of the s-plane. In the z-plane, this is not the case: the region of stability is the interior of the unit circle. This makes drawing the locus of Gh0G(e^(jωT)), the open-loop frequency response on the polar plane, easier, because the Nyquist contour Γz in the z-plane is finite in extent, being simply the unit circle. We treat poles at z = 1 in the way we treated poles at s = 0, by detouring around them on a contour of arbitrarily small radius.
Figure 4.10a shows a typical Nyquist contour along which we will evaluate Gh0G(z). Note that we detour around the pole at z = 1 on a portion of a circle of radius ε centered at z = 1. A typical Nyquist plot Gh0G(e^(jωT)) is shown in Fig. 4.10b. We see from this figure that the Nyquist plot is similar to those we obtain for continuous-time functions with a single pole at s = 0, with the following exception: the plot does not touch the origin in the z-plane. The reason is that we evaluate Gh0G(e^(jωT)) over a finite range of values of ω, namely, 0 ≤ ω ≤ π/T, where T is the sampling interval.
We have labeled the segments of Γz in the same fashion as we did in the s-plane Nyquist analysis [155]. Segment C1 is the upper half of the unit circle, and segment C2 is the lower half of the unit circle. Segment C3 is the portion of a circle with radius ε centered at z = 1. There is no segment corresponding to the s-plane portion of a circle with infinite radius centered at s = 0, because the Nyquist contour Γz in the z-plane, unlike its counterpart in the s-plane, is of finite extent.
Note that the locus Gh0G(C1) in Fig. 4.10b is directly obtained from Gh0G(e^(jωT)) for 0 ≤ ω ≤ π/T, whereas the locus Gh0G(C2) is the same information with the phase reflected about 180°; Gh0G(C3) is inferred from Fig. 4.10a based on the pole-zero configuration.
In the case of an open-loop transfer function Gh0G(z) with no poles outside the unit circle, the closed-loop system of Fig. 4.3b is stable if
N = number of clockwise encirclements of the critical point –1 + j0 made by the Gh0G(e^(jωT)) locus of the Nyquist plot = 0
Note that the necessary information to determine relative stability is contained in the portion Gh0G(C1) of the Nyquist plot, which corresponds to the frequency response of the open-loop system Gh0G(z). This portion of the Nyquist plot of Fig. 4.10b has been redrawn in Fig. 4.10c. Gain and phase margins are defined so as to provide a two-dimensional measure of how close the Nyquist plot is to encircling the –1 + j0 point, and they are identical to the definitions developed for continuous-time systems. The Gain Margin (GM) is the inverse of the amplitude of Gh0G(e^(jωT)) when its phase is 180°, and is a measure of how much the gain of the system can be increased before instability results. The Phase Margin (ΦM) is the difference between 180° and the phase of Gh0G(e^(jωT)) when its amplitude is 1. It is a measure of how much additional phase lag, or time delay, can be tolerated before instability results, because the phase of a system is closely related to these characteristics.
The Nyquist plot in Fig. 4.10c intersects the negative real axis at frequency ωφ. This frequency, at which the phase angle of Gh0G(e^(jωT)) is 180°, is referred to as the phase crossover frequency. The gain margin of the closed-loop system of Fig. 4.3b is defined as the number
GM = 1/|Gh0G(e^(jωφT))|
For stable systems, GM is always a number greater than one.
A unit circle, centered at the origin, has been drawn in Fig. 4.10c in order to identify the point at which the Nyquist plot has unity magnitude. The frequency at this point has been designated ωg, the gain crossover frequency. The phase margin of the closed-loop system of Fig. 4.3b is defined as
ΦM = 180° + ∠Gh0G(e^(jωgT))
For stable systems, ΦM is always positive.
The GM and ΦM are both measures of relative stability. General numerical design goals for these margins cannot be given, since systems that satisfy other specific performance criteria may exhibit a wide range of these margins. It is possible, however, to give useful lower bounds: the gain margin should usually exceed 2.5 and the phase margin should exceed 30°.
For continuous-time systems, it is often pointed out that the phase margin is related to the damping ratio ζ of a standard second-order system; the approximate relation being ζ ≅ ΦM/100. The ΦM obtained from a z-plane frequency response analysis carries the same implications about the damping ratio of the closed-loop system.
The gain crossover frequency ωg is related to the bandwidth of the system. The larger the ωg, the wider the bandwidth of the closed-loop system, and the faster is its response.
The translation of time-domain specifications into frequency response features is carried out by using the explicit correlations for the second-order system (4.10). The following correlations are valid approximations for higher-order systems dominated by a pair of complex conjugate poles3.
Mr = 1/(2ζ√(1 – ζ²)); ζ ≤ 0.707   (4.20)
ωr = ωn√(1 – 2ζ²)   (4.21)
ωb = ωn[1 – 2ζ² + √(2 – 4ζ² + 4ζ⁴)]^(1/2)   (4.22)
ΦM = tan⁻¹{2ζ/[√(1 + 4ζ⁴) – 2ζ²]^(1/2)} ≅ 100ζ   (4.23)
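These correlations are easy to tabulate. A minimal sketch (our helper, not from the text) evaluating Eqns (4.20)–(4.23) for given ζ and ωn:

import math

def freq_specs(zeta, wn):
    # Eqns (4.20)-(4.23) for the second-order system (4.10)
    assert 0 < zeta <= 0.707, "Mr, wr formulas hold for zeta <= 0.707"
    Mr = 1/(2*zeta*math.sqrt(1 - zeta**2))                       # Eqn (4.20)
    wr = wn*math.sqrt(1 - 2*zeta**2)                             # Eqn (4.21)
    wb = wn*math.sqrt(1 - 2*zeta**2
                      + math.sqrt(2 - 4*zeta**2 + 4*zeta**4))    # Eqn (4.22)
    PM = math.degrees(math.atan2(2*zeta,
                      math.sqrt(math.sqrt(1 + 4*zeta**4) - 2*zeta**2)))  # Eqn (4.23)
    return Mr, wr, wb, PM

print(freq_specs(0.4, 4.0))  # -> Mr ~ 1.36, wr ~ 3.3, wb ~ 5.5 rad/sec, PM ~ 43 deg

For ζ = 0.4 and ωn = 4 rad/sec (the values used in Example 4.3 below), it returns ωb ≅ 5.5 rad/sec and ΦM ≅ 43º, consistent with the approximation ΦM ≅ 100ζ.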
4.2.4 Disturbance Rejection
The effectiveness of a system in disturbance-signal rejection is readily studied with the topology of Fig. 4.11a. The response Y(z) to disturbance W(z) can be found from the closed-loop transfer function
Y(z)/W(z) = 1/(1 + D(z)Gh0G(z))   (4.24a)
We now introduce the function
S(z) = 1/(1 + D(z)Gh0G(z))   (4.24b)
which we call the sensitivity function of the control system, for reasons to be explained later. To reduce the effects of disturbances, it turns out that S(e^(jωT)) must be made small over the frequency band of the disturbances. If constant disturbances are to be suppressed, S(1) should be made small. If D(z)Gh0G(z) includes an integrator (which means that D(z) or Gh0G(z) has a pole at z = 1), then the steady-state error due to a constant disturbance is zero. This may be seen as follows. For a constant disturbance of amplitude A, we have
W(z) = Az/(z – 1)
and the steady-state value of the output is y*ss = lim(z→1) (z – 1)S(z)W(z) = A·S(1), which is zero when D(z)Gh0G(z) has a pole at z = 1.
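This rejection property is easily verified symbolically. A sympy sketch, using a hypothetical loop transfer function (ours, chosen with no pole at z = 1 so that both cases can be compared):

import sympy as sp

z, A, Kc = sp.symbols('z A K_c', positive=True)

GhG0 = sp.Rational(1, 2)/(z - sp.Rational(61, 100))  # hypothetical loop, no pole at z = 1

def y_ss(DGhG):
    # steady-state output for the constant disturbance W(z) = Az/(z - 1)
    S = 1/(1 + DGhG)
    return sp.simplify(sp.limit((z - 1)*S*A*z/(z - 1), z, 1))

print(y_ss(Kc*GhG0))             # finite and nonzero: disturbance only attenuated
print(y_ss(Kc*z/(z - 1)*GhG0))   # 0: an integrator in the loop rejects it completely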
3. Chapter 11 of reference [155].
(Fig. 4.11 Digital control loop configurations used in this section: (a) disturbance W(z) entering at the loop output; (b) an alternative disturbance entry point within the loop; (c) measurement noise Wn(z) entering at the feedback summing point.)
For the measurement noise Wn(z) entering as in Fig. 4.11c,
Y(z)/Wn(z) = D(z)Gh0G(z)/(1 + D(z)Gh0G(z))   (4.25)
Thus, the measurement noise is transferred to the output whenever | D(z)Gh0G(z)| > 1. Hence, large
gains of D(z)Gh0G(z) will lead to large output errors due to measurement noise. This is in conflict with
the disturbance-rejection property with respect to configurations of Figs 4.11a and 4.11b. To solve this
problem, we can generally examine the measuring instrument and modify the filtering, so that it satisfies
the requirements of a particular control problem.
4.2.5 Sensitivity to Parameter Variations
Finally, in our design, we must take into account both the small and, often, large differences between the derived process model and the real process behavior. The differences may appear due to modeling approximations, and because the process behavior changes with time during operation. If, for simplicity, it is assumed that the structure and order of the process model are chosen exactly, and that they do not change with time, then these differences are manifested as parameter errors.
Parameter changes with respect to a nominal parameter vector pn are assumed. The closed-loop behavior for the parameter vector
p = pn + Δp
is of interest. If the parameter changes are small, then sensitivity methods can be used. For controller design, both good control performance (steady-state accuracy, transient accuracy, and disturbance rejection) and small parameter sensitivity are required. The resulting controllers are then referred to as insensitive controllers. For large parameter changes, however, the sensitivity design is unsuitable. Instead, one has to assume several process models with different parameter vectors p1, p2, ..., pM, and try to design a robust controller which, for all the process models, will maintain stability and a certain range of control performance.
For the design of insensitive controllers, the situation is very much like the disturbance-signal rejection.
The larger the gain of the feedback loop around the offending parameter, the lower the sensitivity of the
closed-loop transfer function to changes in that parameter.
Consider the digital control system of Fig. 4.11. The closed-loop input-output behavior corresponding to the nominal parameter vector is described by
M(pn, z) = Y(z)/R(z) = D(z)Gh0G(pn, z)/(1 + D(z)Gh0G(pn, z))   (4.26)
The process parameter vector now changes by an infinitesimal value Δp. For the control loop, it follows that
∂M(p, z)/∂p |(p=pn) = [D(z)/(1 + D(z)Gh0G(pn, z))²] ∂Gh0G(p, z)/∂p |(p=pn)
For ΔGh0G(pn, z) = (∂Gh0G(p, z)/∂p |(p=pn))ᵀ Δp,
ΔM(pn, z) = [D(z)/(1 + D(z)Gh0G(pn, z))²] (∂Gh0G(p, z)/∂p |(p=pn))ᵀ Δp
 = {D(z)Gh0G(pn, z)/(1 + D(z)Gh0G(pn, z))} {1/(1 + D(z)Gh0G(pn, z))} {1/Gh0G(pn, z)} {ΔGh0G(pn, z)}   (4.27)
From Eqns (4.26)–(4.27), it follows that
ΔM(pn, z)/M(pn, z) = S(pn, z) ΔGh0G(pn, z)/Gh0G(pn, z)   (4.28a)
with the sensitivity function S(pn, z) of the feedback control given as
S^M(Gh0G) = S(pn, z) = 1/(1 + D(z)Gh0G(pn, z))   (4.28b)
This sensitivity function shows how relative changes of the input/output behavior of a closed loop depend on changes of the process transfer function. Small parameter-sensitivity of the closed-loop behavior can be obtained by making S(pn, e^(jωT)) small in the significant frequency range.
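As a numerical illustration (our sketch, taking D(z) = 1 and the loop of Eqn. (4.42) that appears in Example 4.3 below), the magnitude of the sensitivity function can be swept over 0 < ω < ωs/2:

import numpy as np

T = 0.1
w = np.linspace(0.01, np.pi/T, 500)              # 0 < omega < omega_s/2
zz = np.exp(1j*w*T)

GhG = 0.215*(zz + 0.85)/((zz - 1)*(zz - 0.61))   # loop of Eqn (4.42), with D(z) = 1
S = 1/(1 + GhG)

print(f"|S| at the low end : {abs(S[0]):.3f}")   # near 0: constant disturbances rejected
print(f"peak |S| over band : {abs(S).max():.2f}")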
4.2.6 Design Trade-offs
Control system design with high-gain feedback results in the following:
(i) good steady-state tracking accuracy;
(ii) good disturbance-signal rejection; and
(iii) low sensitivity to process-parameter variations.
There are, however, factors limiting the gain:
(i) High gain may result in instability problems.
(ii) Input amplitudes limit the gain; excessively large magnitudes of control signals will drive the
process to saturation region of its operation, and the control system design, based on linear model
of the plant, will no longer give satisfactory performance.
(iii) Measurement noise limits the gain; with high-gain feedback, measurement noise appears
unattenuated in the controlled output.
Therefore, in design, we are faced with trade-offs.
4.3 DIGITAL COMPENSATOR DESIGN USING FREQUENCY RESPONSE PLOTS
All the frequency response methods of continuous-time systems4 are directly applicable for the analysis and design of digital control systems. For a system with closed-loop transfer function
Y(z)/R(z) = Gh0G(z)/(1 + Gh0G(z))   (4.29)
4. Chapters 10–12 of reference [155].
the absolute and relative stability conditions can be investigated by making the frequency response plots of Gh0G(z). The frequency response plots of Gh0G(z) can be obtained by setting
z = e^(jωT); T = sampling interval   (4.30)
and then letting the frequency ω vary from –ωs/2 to ωs/2; ωs = 2π/T. Computer assistance is normally required to make the frequency response plots (refer to Problem A.8 in Appendix A).
Since the frequency appears in the form z = e^(jωT), the discrete-time transfer functions are typically not rational functions, and the simplicity of Bode's design technique is altogether lost in the z-plane. The simplicity can be regained by transforming the discrete-time transfer function in the z-plane to a different plane (called w) by the bilinear transformation (refer to Eqn. (2.115))
z = (1 + wT/2)/(1 – wT/2)   (4.31a)
By solving Eqn. (4.31a) for w, we obtain the inverse relationship
w = (2/T)(z – 1)/(z + 1)   (4.31b)
Through the z-transformation and the w-transformation, the primary strip of the left half of the s-plane is first mapped into the inside of the unit circle in the z-plane, and then mapped into the entire left half of the w-plane. The two mapping processes are depicted in Fig. 4.12. Notice that as s varies from 0 to jωs/2 along the jω-axis in the s-plane, z varies from 1 to –1 along the unit circle in the z-plane, and w varies from 0 to ∞ along the imaginary axis in the w-plane. The bilinear transformation (4.31) does not have any physical significance in itself, and therefore all w-plane quantities are fictitious quantities that correspond to the physical quantities of either the s-plane or the z-plane. The correspondence between the real frequency ω and the fictitious w-plane frequency, denoted as ν, is obtained as follows. From Eqn. (4.31b), with z = e^(jωT) and w = jν,
ν = (2/T) tan(ωT/2)   (4.32)
(Fig. 4.12 Mapping of the primary strip of the s-plane into the unit circle of the z-plane, and of the unit circle into the left half of the w-plane.)
Example 4.2
Consider a process with transfer function
G(s) = 10/(s(0.2s + 1))   (4.34)
which, when preceded by a ZOH (T = 0.1 sec), has the discrete-time transfer function (refer to Table 2.1)
Gh0G(z) = (1 – z⁻¹) Z[50/(s²(s + 5))] = 0.215(z + 0.85)/((z – 1)(z – 0.61))   (4.35)
Gh0G(w) = 10(1 – w/20)(1 + w/246.67)/(w(1 + w/4.84))   (4.36)
Notice that the gain of Gh0G(w) is precisely the same as that of G(s); it is 10 in both cases. This will always be true for a Gh0G(w) computed using the bilinear transformation given by Eqns (4.31). The gain of 10 in Eqn. (4.36) is the Kv of the uncompensated system (4.35).
We also note that the denominator in Eqn. (4.36) looks very similar to that of G(s), and that the denominators will be the same as T approaches zero. This would also have been true for any zeros of Gh0G(w) that corresponded to zeros of G(s), but our example does not have any. Our example also shows the creation of a right-half plane zero of Gh0G(w) at 2/T, and the creation of a fast left-half plane zero when compared to the original G(s). The transfer function Gh0G(w) is thus a nonminimum phase function.
To summarize, the w-transformation maps the inside of the unit circle in the z-plane into the left half of the w-plane. The magnitude and phase of Gh0G(jν) correspond to the magnitude and phase of Gh0G(z) as z takes on values around the unit circle. Since Gh0G(jν) is a rational function of ν, we can apply all the standard straight-line approximations to the log-magnitude and phase curves.
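The algebra of the w-transformation can be checked symbolically. A sympy sketch (ours) that substitutes Eqn. (4.31a) into the Gh0G(z) of Eqn. (4.35) and recovers the pole-zero pattern of Eqn. (4.36):

import sympy as sp

z, w = sp.symbols('z w')
T = 0.1                                                  # sampling interval, sec

Ghz = 0.215*(z + 0.85)/((z - 1)*(z - 0.61))              # Eqn (4.35)
Ghw = sp.cancel(Ghz.subs(z, (1 + w*T/2)/(1 - w*T/2)))    # Eqn (4.31a)

print(sp.solve(sp.numer(Ghw), w))   # zeros: w = 20 (right half plane) and w ~ -246.7
print(sp.solve(sp.denom(Ghw), w))   # poles: w = 0 and w ~ -4.84, as in Eqn (4.36)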
The design of analog control systems usually falls into one of the following categories: (1) lead
compensation, (2) lag compensation, (3) lag-lead compensation. Other more complex schemes, of
course, do exist, but knowing the effects of these three basic types of compensation, gives a designer
much insight into the design problem. With reference to the design of digital control systems by Bode
plots, the basic forms of compensating network D(w) have also been classified as lead, lag, and lag-
lead. In the following paragraphs, we briefly review the fundamental frequency-domain features of these
compensators.
A simple lead compensator model in the w-plane is described by the transfer function
D(w) = (1 + wτ)/(1 + αwτ); 0 < α < 1, τ > 0   (4.38)
The zero-frequency gain of the compensator is found by letting w = 0. Thus, in Eqn. (4.38), we are
assuming a unity zero-frequency gain for the compensator. Most of the designs require a compensator
with a non-unity zero-frequency gain to improve steady-state response, disturbance rejection, etc. A non-
unity zero-frequency gain is obtained by multiplying the right side of Eqn. (4.38) by a constant equal
to the value of the desired zero-frequency gain. For the purpose of simplifying the design procedure,
we normally add the required increase in gain to the plant transfer function, and design the unity zero-
frequency gain compensator given by Eqn. (4.38), based on the new plant transfer function. Then the
compensator is realized as the transfer function of (4.38) multiplied by the required gain factor.
The Bode plot of the unity zero-frequency gain lead compensator is shown in Fig. 4.14. The maximum phase lead φm of the compensator is given by the relation
α = (1 – sin φm)/(1 + sin φm)   (4.39)
and it occurs at the frequency
νm = √((1/τ)(1/ατ)) = 1/(τ√α)   (4.40)
A simple lag compensator model in the w-plane is described by the transfer function
D(w) = (1 + wτ)/(1 + βwτ); β > 1, τ > 0   (4.41)
The Bode plot of this unity zero-frequency gain compensator is shown in Fig. 4.15.
Fig. 4.14 Bode plot of lead compensator: the magnitude rises at 20 dB/decade between the corner frequencies 1/τ and 1/ατ, with maximum phase lead φm at νm = 1/(τ√α)
In the design method using Bode plots, the attenuation property of the lag compensator is utilized; the phase-lag characteristic is of no consequence. The attenuation provided by the lag compensator in the high-frequency range shifts the gain crossover frequency to a lower value, and gives the system sufficient phase margin. So that significant phase lag is not contributed near the new gain crossover, the upper corner frequency 1/τ of D(w) is placed far below the new gain crossover.
With the reduction in system gain at high frequencies, the system bandwidth gets reduced and thus the system has a slower speed of response. This may be an advantage if high-frequency noise is a problem.
Equations (4.38) and (4.41) describe simple first-order compensators. In many system design problems, however, the system specifications cannot be satisfied by a first-order compensator. In these cases, higher-order compensators must be used. To illustrate this point, suppose that smaller steady-state errors to ramp inputs are required; this requires an increase in the low-frequency gain of the system. If phase-lead compensation is employed, this increase in gain must be reflected at all frequencies. It is then unlikely that one first-order section of phase-lead compensation can be designed to give adequate phase margin. One solution to this problem would be to cascade two first-order lead compensators. However, if noise in the control system is a problem, this solution may not be acceptable. A different approach is to cascade a lag compensator with a lead compensator. This compensator is usually referred to as a lag-lead compensator.
Example 4.3
Consider the feedback control system shown in Fig. 4.16. The plant is described by the transfer
function
G(s) = K/(s(s + 5))
Design a digital control scheme for the system to meet the following specifications:
(i) the velocity error constant Kv ≥ 10;
(ii) peak overshoot Mp to step input £ 25%; and
(iii) settling time ts (2% tolerance band) £ 2.5 sec.
Solution The design parameters are the sampling interval T, the system gain K, and the parameters of
the unity zero-frequency gain compensator D(z).
Let us translate the transient accuracy requirements into frequency response measures. ζ = 0.4 corresponds to a peak overshoot of about 25% (Eqn. (4.13)) and a phase margin of about 40º (Eqn. (4.23)). The requirement ts ≅ 2.5 sec corresponds to ωn = 4 rad/sec (Eqn. (4.14)) and a closed-loop bandwidth ωb ≅ 5.5 rad/sec (Eqn. (4.22)). Taking the sampling frequency about 10 times the bandwidth, we choose the sampling interval
T = 2π/(10ωb) ≅ 0.1 sec
Fig. 4.16 Unity-feedback sampled-data system: compensator D(z), ZOH Gh0(s) and plant G(s), with samplers of period T
Our design approach is to first fix the system gain K to a value that results in the desired steady-state
accuracy. A unity zero-frequency gain compensator, that satisfies the transient accuracy requirements
without affecting the steady-state accuracy, is then introduced.
Since sampling does not affect the error constant of the system, we can relate K to Kv as follows, for the system of Fig. 4.16 with D(z) = 1 (i.e., for the uncompensated system):
Kv = lim(s→0) sG(s) = K/5
Thus, K = 50 meets the requirement on steady-state accuracy.
For T = 0.1 and K = 50, we have
Gh0G(z) = Z[((1 – e^(–Ts))/s)(50/(s(s + 5)))] = 0.215(z + 0.85)/((z – 1)(z – 0.61))   (4.42)
Gh0G(w) = Gh0G(z)|(z = (1 + wT/2)/(1 – wT/2)) = 10(1 – w/20)(1 + w/246.67)/(w(1 + w/4.84))   (4.43)
Gh0G(jν) = Gh0G(w)|(w = jν) = 10(1 – jν/20)(1 + jν/246.67)/(jν(1 + jν/4.84))   (4.44)
The Bode plot of Gh0G(jν) (i.e., of the uncompensated system) is shown in Fig. 4.17. We find from this plot that the uncompensated system has gain crossover frequency νc1 = 6.6 rad/sec and phase margin ΦM1 ≅ 20º. The magnitude versus phase angle curve of the uncompensated system is drawn in Fig. 4.18. The bandwidth5 of the system is read as νb1 = 11. In terms of the real frequency, the bandwidth (Eqn. (4.32)) is
ωb1 = (2/T) tan⁻¹(νb1T/2) = 10 rad/sec
5. The –3 dB closed-loop gain contour of the Nichols chart has been used to determine the bandwidth; the contour has been constructed from points read off the Nichols chart.
It is desired to raise the phase margin to 40º without altering Kv. Also, the bandwidth should not increase. Obviously, we should first try a lag compensator.
From the Bode plot of the uncompensated system, we observe that a phase margin of 40º is obtained if the gain crossover frequency is reduced to 4 rad/sec. The high-frequency gain –20 log β of the lag compensator (Fig. 4.15) is utilized to reduce the gain crossover frequency. The upper corner frequency 1/τ of the compensator is placed one octave to one decade below the new gain crossover, so that the phase-lag contribution of the compensator in the vicinity of the new gain crossover is sufficiently small.
To nullify the small phase-lag contribution which will still be present, the gain crossover frequency is reduced to a value νc2 where the phase angle of the uncompensated system is
φ = –180º + ΦMs + ε;
ΦMs is the specified phase margin and ε is allowed a value of 5º–15º.
The uncompensated system (Fig. 4.17) has a phase angle
φ = –180º + ΦMs + ε = –180º + 40º + 10º = –130º
at νc2 = 3 rad/sec. Placing the upper corner frequency of the compensator two octaves below νc2, we have
1/τ = νc2/(2)² = 3/4 = 0.75
To bring the magnitude curve down to 0 dB at νc2, the lag compensator must provide an attenuation of 9 dB (Fig. 4.17). Therefore,
20 log β = 9, or β = 2.82
The lower corner frequency of the compensator is then fixed at
1/(βτ) = 0.266
The transfer function of the lag compensator is then
D(w) = (1 + τw)/(1 + βτw) = (1 + 1.33w)/(1 + 3.76w)
Phase lag introduced by the compensator at νc2 = tan⁻¹(1.33νc2) – tan⁻¹(3.76νc2) = 75.93º – 84.93º = –9º. Therefore, the safety margin of ε = 10º is justified.
The open-loop transfer function of the compensated system becomes
D(w)Gh0G(w) = 10(1 – w/20)(1 + w/246.67)(1 + w/0.75) / (w(1 + w/4.84)(1 + w/0.266))
The Bode plot of D(w)Gh0G(w) is shown in Fig. 4.17, from where the phase margin of the compensated system is found to be 40º and the gain margin 15 dB. The magnitude versus phase angle curve of the compensated system is shown on the Nichols chart in Fig. 4.18. The bandwidth of the compensated system is
νb2 = 5.5 (ωb2 = (2/T) tan⁻¹(νb2T/2) = 5.36 rad/sec)
Therefore, the addition of the compensator has reduced the bandwidth from 10 rad/sec to 5.36 rad/sec. However, the reduced value lies in the acceptable range.
Substituting
w = (2/T)(z – 1)/(z + 1)
in D(w), we obtain
D(z) = 0.362(z – 0.928)/(z – 0.974) = (0.362z – 0.336)/(z – 0.974)
Zero-frequency gain of D(z) = lim(z→1) [0.362(z – 0.928)/(z – 0.974)] = 1
The digital controller D(z) has a pole-zero pair near z = 1. This creates a long tail of small amplitude
in the step response of the closed-loop system. This behavior of the lag-compensated system will be
explained shortly, with the help of root locus plots.
To evaluate the true effectiveness of the design, we write the closed-loop transfer function of the
compensated system (Fig. 4.19) and therefrom obtain the response to step input. Computer assistance is
usually needed for this analysis.
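A minimal sketch of that check, assuming the third-party python-control package:

import control as ct

T = 0.1
G = ct.tf([0.215, 0.215*0.85], [1, -1.61, 0.61], dt=T)   # Gh0G(z), Eqn (4.42)
D = ct.tf([0.362, -0.336],     [1, -0.974],     dt=T)    # lag compensator D(z)

cl = ct.feedback(D*G)                 # unity-feedback closed loop
t, y = ct.step_response(cl)
print(f"peak of step response = {y.max():.3f}")   # roughly 1.25 (Mp ~ 25%)

Plotting y against t also reveals the slow, small-amplitude tail contributed by the pole-zero pair of D(z) near z = 1, mentioned above.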
Comment We have obtained a digital control algorithm which meets the following objectives: Kv ≅ 10, Mp ≅ 25%, ts ≅ 2.5 sec. We may attempt to improve upon this design to obtain Kv > 10, Mp < 25% and ts < 2.5 sec. However, the scope of such an exercise is limited, because the improvement in steady-state accuracy will be at the cost of stability margins, and vice versa. Also, the conflicting requirements of limiting the magnitudes of control signals to avoid saturation problems, limiting the bandwidth to avoid high-frequency noise problems, etc., have to be taken into consideration.
Example 4.4
Reconsider the feedback control system of Example 4.3 (Fig. 4.16). We now set the following goal for our
design:
(i) Kv ≥ 10;
(Fig. 4.20 Compensator design (Example 4.4): Bode magnitude and phase plots of Gh0G(jν) and D(jν)Gh0G(jν), with gain crossover frequencies near 6.6 and 9.4 rad/sec, a phase margin of about 38°, and a gain margin of about 7.5 dB.)
A lead compensator is now introduced, placed so that its corner frequencies 1/τ and 1/ατ lie on either side of the gain crossover frequency νc1 = 6.6 rad/sec. The compensator so placed will increase the system gain in the vicinity of νc1; this will cause the gain crossover to shift to the right, to some unknown value νc2. The phase lead provided by the compensator at νc2 adds to the phase margin of the system.
The phase margin of the uncompensated system at νc1 is ΦM1. At νc2, which is expected to be close to νc1, let us assume the phase margin of the uncompensated system to be (ΦM1 – ε), where ε is allowed a value of 5º–15º. The phase lead required at νc2 to bring the phase margin to the specified value ΦMs is given by
φl = ΦMs – (ΦM1 – ε) = ΦMs – ΦM1 + ε
In our design, we will force the frequency νm of the compensator to coincide with νc2, so that the maximum phase lead φm of the compensator is added to the phase margin of the system. Thus, we set
νc2 = νm
Therefore, φm = φl
The α-parameter of the compensator can then be computed from (refer to Eqn. (4.39))
α = (1 – sin φm)/(1 + sin φm)
Since at νm the compensator provides a dB-gain of 20 log(1/√α), the new crossover frequency νc2 = νm can be determined as that frequency at which the uncompensated system has a dB-gain of –20 log(1/√α).
For the design problem under consideration,
φl = 40º – 20º + 15º = 35º
Therefore, α = (1 – sin 35°)/(1 + sin 35°) = 0.271
or 1/τ = √α νm = 4.893 and 1/ατ = 4.893/0.271 = 18.055
Since the compensator zero is very close to a pole of the plant, we may cancel the pole with the zero, i.e., we may choose
1/τ = 4.84; 1/ατ = 17.86
The transfer function of the lead compensator becomes
D(w) = (1 + τw)/(1 + ατw) = (1 + 0.21w)/(1 + 0.056w)
Substituting
w = (2/T)(z – 1)/(z + 1)
in D(w), we obtain
D(z) = 2.45(z – 0.616)/(z – 0.057)
The open-loop transfer function of the compensated system is
D(w)Gh0G(w) = 10(1 – w/20)(1 + w/246.67) / (w(1 + w/17.86))
The Bode plot of D(w)Gh0G(w) is shown in Fig. 4.20, from where the phase margin of the compensated system is found to be 38º, and the gain margin 7.5 dB. The magnitude versus phase angle curve of the compensated system is shown in Fig. 4.21. The bandwidth of the compensated system is
νb2 = 22.5; ωb2 = (2/T) tan⁻¹(νb2T/2) = 16.9 rad/sec
Thus, the addition of the lead compensator has increased the system bandwidth from 10 to 16.9 rad/sec. This may lead to noise problems if the control system is burdened with high-frequency noise.
A solution to noise problems involves the use of a lag compensator cascaded with lead compensator. The
lag compensation is employed to realize a part of the required phase margin, thus reducing the amount
of lead compensation required.
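The margins quoted above can be cross-checked directly in the z-domain. A sketch, again assuming the third-party python-control package:

import math
import control as ct

T = 0.1
G = ct.tf([0.215, 0.215*0.85], [1, -1.61, 0.61], dt=T)    # Gh0G(z), Eqn (4.42)
D = ct.tf([2.45, -2.45*0.616], [1, -0.057],     dt=T)     # lead compensator D(z)

gm, pm, wcg, wcp = ct.margin(D*G)
print(f"GM = {20*math.log10(gm):.1f} dB, PM = {pm:.1f} deg")  # expect ~7.5 dB, ~38 deg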
4.4 DIGITAL COMPENSATOR DESIGN USING ROOT LOCUS PLOTS
Design of compensation networks using the root locus plots is a well established procedure in analog
control systems. This is essentially a trial-and-error method where, by varying the controller parameters,
the roots of the characteristic equation are relocated to favorable locations. In the present section, we
shall consider the application of root locus method to the design of digital control systems.
4.4.1 Root Loci in the z-Plane
The characteristic equation of a discrete-time system can always be written in the form
1 + F(z) = 0 (4.45)
where F(z) is a rational function of z.
From Eqn. (4.45), it is seen that the roots of the characteristic equation (i.e., the closed-loop poles of the
discrete-time system), occur only for those values of z where
F(z) = – 1 (4.46)
Since z is a complex variable, Eqn. (4.46) is converted into two conditions given below.
(i) Magnitude condition: |F(z)| = 1 (4.47a)
(ii) Angle condition: ∠F(z) = ±180º(2q + 1); q = 0, 1, 2, ... (4.47b)
In essence, the construction of the z-plane root loci is to find the points that satisfy these conditions. If we write F(z) in the standard pole-zero form
F(z) = K Π_i (z − zi) / Π_j (z − pj);  K ≥ 0 (4.48a)
the two conditions become
Π_i |z − zi| / Π_j |z − pj| = 1/K (4.48b)
Σ_i ∠(z − zi) − Σ_j ∠(z − pj) = ±180º(2q + 1); q = 0, 1, 2, ... (4.48c)
Consequently, given the pole-zero configuration of F(z), the construction of the root loci in the z-plane involves the following steps:
(i) A search for the points on the z-plane that satisfy the angle condition given by Eqn. (4.48c).
(ii) The value of K at a given point on a root locus is determined from the magnitude condition given by Eqn. (4.48b).
The root locus method developed for continuous-time systems can be extended to discrete-time systems
without modifications, except that the stability boundary is changed from the jw axis in the s-plane, to
the unit circle in the z-plane. The reason the root locus method can be extended to discrete-time systems
is that the characteristic equation (4.45) for the discrete-time system, is of exactly the same form as the
equation for root locus analysis in the s-plane. However, the pole locations for closed-loop systems in the
z-plane must be interpreted differently from those in the s-plane.
We assume that the reader is already familiar with the s-plane root locus technique. We shall concentrate
on the interpretation of the root loci in the z-plane with reference to the system performance, rather
than the construction of root loci in the z-plane. Rules of construction of root loci are summarized in
Table 4.2 for ready reference.6
Table 4.2 Rules for the construction of root loci of 1 + F(z) = 0, where
F(z) = K Π_{i=1}^{m}(z − zi) / Π_{j=1}^{n}(z − pj);  K ≥ 0, n ≥ m; zi: m open-loop zeros; pj: n open-loop poles
(i) The root locus plot consists of n root loci as K varies from 0 to ∞. The loci are symmetric with respect to the real axis.
(ii) As K increases from zero to infinity, each root locus originates from an open-loop pole with K = 0, and terminates either on an open-loop zero or on infinity with K = ∞. The number of loci terminating on infinity equals the number of open-loop poles minus the number of open-loop zeros.
(iii) The (n − m) root loci which tend to infinity, do so along straight-line asymptotes radiating out from a single point z = −σA on the real axis (called the centroid), where
−σA = [Σ(real parts of open-loop poles) − Σ(real parts of open-loop zeros)] / (n − m)
These (n − m) asymptotes have angles
φA = (2q + 1)180º/(n − m);  q = 0, 1, 2, ..., (n − m − 1)
(iv) A point on the real axis lies on the locus if the number of open-loop poles plus zeros on the
real axis to the right of this point, is odd. By use of this fact, the real axis can be divided into
segments on-locus and not-on-locus; the dividing points being the real open-loop poles and
zeros.
(v) The intersections (if any) of root loci with the imaginary axis can be determined by use of the
Routh criterion.
(vi) The angle of departure, φp, of a root locus from a complex open-loop pole is given by
φp = 180º + φ
where φ is the net angle contribution, at this pole, of all other open-loop poles and zeros.
(vii) The angle of arrival, φz, of a locus at a complex zero is given by
φz = 180º − φ
where φ is the net angle contribution, at this zero, of all other open-loop poles and zeros.
6 Chapter 7 of reference [155].
(viii) Points at which multiple roots of the characteristic equation occur (breakaway points of root loci) are the solutions of dK/dz = 0, where
K = −Π_{j=1}^{n}(z − pj) / Π_{i=1}^{m}(z − zi)
(ix) The gain K at any point z0 on a root locus is given by
K = Π_{j=1}^{n}|z0 − pj| / Π_{i=1}^{m}|z0 − zi|
  = [Product of phasor lengths (read to scale) from z0 to poles of F(z)] / [Product of phasor lengths (read to scale) from z0 to zeros of F(z)]
Example 4.5
Consider a process with the transfer function
G(s) = K/[s(s + 2)] (4.49a)
which, when preceded by a zero-order hold (T = 0.2 sec), has the discrete-time transfer function (refer to Table 2.1)
Gh0G(z) = (1 − z⁻¹) Z[K/(s²(s + 2))] = K′(z − b)/[(z − a1)(z − a2)] (4.49b)
where K′ = 0.01758K, b = −0.876, a1 = 0.67, a2 = 1.
The root locus plot of
1 + Gh0G(z) = 0 (4.50)
can be constructed using the rules given in Table 4.2. Gh0G(z) has two poles at z = a1 and z = a2, and a zero at z = b. From rule (iv), the parts of the real axis between a1 and a2, and between −∞ and b, constitute sections of the loci. From rule (ii), the loci start from z = a1 and z = a2; one of the loci terminates at z = b, and the other locus terminates at −∞. From rule (viii), the breakaway points (there are two) may be obtained by solving for the roots of
dK′/dz = 0, where K′ = −(z − a1)(z − a2)/(z − b)
However, we can show that for this simple two-pole, one-zero configuration, the complex-conjugate section of the root locus plot is a circle. The breakaway points are easily obtained from this result, which is proved as follows:
Let z = x + jy. The angle condition requires
∠(z − b) − ∠(z − a1) − ∠(z − a2) = ±180º(2q + 1)
Taking the tangent of both sides,
[y/(x − b) − y(2x − a1 − a2)/((x − a1)(x − a2) − y²)] / [1 + (y/(x − b)) · (y(2x − a1 − a2))/((x − a1)(x − a2) − y²)] = 0
or
1/(x − b) − (2x − a1 − a2)/[(x − a1)(x − a2) − y²] = 0
Simplifying, we get
(x − b)² + y² = (b − a1)(b − a2) (4.51)
which is the equation of a circle with the center at the open-loop zero z = b, and the radius equal to [(b − a1)(b − a2)]^{1/2}.
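The circle (4.51) is easy to verify numerically: points on it satisfy the root locus angle condition. A minimal check (NumPy), using the pole-zero values of Eqn. (4.49b):

    import numpy as np

    a1, a2, b = 0.67, 1.0, -0.876
    r = np.sqrt((b - a1)*(b - a2))          # radius of the circle centered at z = b
    for th in np.linspace(0.4, 2.7, 4):
        z = b + r*np.exp(1j*th)             # point on the claimed circular section
        F = (z - b)/((z - a1)*(z - a2))     # K > 0 drops out of the angle condition
        print(np.degrees(np.angle(F)))      # +/-180 deg: z lies on the locus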
The root locus plot for the system given by Eqn. (4.49b), is constructed in Fig. 4.22. The limiting value
of K for stability may be found by graphical construction or by the Jury stability test. We illustrate the
use of graphical construction.
Fig. 4.22 Root locus plot for the system of Example 4.5 (the locus crosses the unit circle at P, where K = 22.18)
By rule (ix) of Table 4.2, the value of K′ at point P, where the root locus crosses the unit circle, is given by
K′ = [(Phasor length from P to pole at z = 1) × (Phasor length from P to pole at z = 0.67)] / (Phasor length from P to zero at z = −0.876)
   = (0.85 × 0.78)/1.7 = 0.39 = 0.01758K
Therefore, K = 0.39/0.01758 = 22.18
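The phasor lengths above are read to scale from the drawing, so the result is approximate. The crossing point can also be computed exactly by intersecting |z| = 1 with the circle (4.51); the sketch below (NumPy) gives K′ ≈ 0.377 and K ≈ 21.4, close to the graphically obtained 22.18:

    import numpy as np

    a1, a2, b = 0.67, 1.0, -0.876
    r2 = (b - a1)*(b - a2)               # radius squared of the locus circle
    x = (1 + b*b - r2)/(2*b)             # from |z|^2 = 1 and (x - b)^2 + y^2 = r2
    P = x + 1j*np.sqrt(1 - x*x)
    Kprime = abs(P - a1)*abs(P - a2)/abs(P - b)   # magnitude rule (ix)
    print(P, Kprime, Kprime/0.01758)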
The relative stability of the system can be investigated by superimposing the constant-ζ loci on the system root locus plot. This is shown in Fig. 4.23. Inspection of this figure shows that the root locus intersects the ζ = 0.3 locus at point Q. The value of K′ at point Q is determined to be 0.1; the gain
K = K′/0.01758 = 5.7
Fig. 4.23 Root locus plot of Example 4.5 with constant-ζ loci superimposed
The value of ωn for K′ = 0.1 may be obtained by superimposing constant-ωn loci on the root locus plot, and locating the constant-ωn locus which passes through the point Q. From Fig. 4.23, we observe that none of the constant-ωn loci on the standard chart passes through the point Q; we would have to guess the ωn value. We can, instead, construct a constant-ωd locus passing through the point Q, and from there obtain ωn more accurately. The s-plane poles
s1,2 = −ζωn ± jωn√(1 − ζ²) = −ζωn ± jωd
are mapped to
z1,2 = e^{−ζωnT} e^{±jωdT} = re^{±jθ}
in the z-plane.
A constant-ωd locus is thus a radial line passing through the origin at an angle θ = ωdT with the positive real axis of the z-plane, measured positive in the counterclockwise direction.
The radial line passing through the point Q makes an angle θ = 25º with the real axis (Fig. 4.23). This is a constant-ωd locus with ωd given by
ωdT = 25π/180 rad
Therefore, ωnT√(1 − ζ²) = 25π/180
This gives ωn = 2.29 rad/sec.
The value of K′ at the breakaway point R, located at z = 0.824, is determined to be 0.01594. Therefore, the gain K = 0.01594/0.01758 = 0.9067 results in critical damping (ζ = 1), with the two closed-loop poles at z = 0.824.
A pole in the s-plane at s = −a has a time constant of τ = 1/a, and an equivalent z-plane location of e^{−aT} = e^{−T/τ}. Thus, for the critically damped case,
e^{−0.2/τ} = 0.824
or τ = 1.033 sec = time constant of the closed-loop poles.
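Both conversions are one-line computations; a quick check (NumPy):

    import numpy as np

    T, zeta = 0.2, 0.3
    wn = (25*np.pi/180)/(T*np.sqrt(1 - zeta**2))   # from theta = 25 deg: ~2.29 rad/sec
    tau = -T/np.log(0.824)                         # z-plane pole 0.824 -> tau ~ 1.033 sec
    print(wn, tau)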
In the frequency-response design procedure described in the previous section, we attempted to reshape the
open-loop frequency response to achieve certain stability margins, steady-state response characteristics
and so on. A different design technique is presented in this section—the root-locus procedure. In this
procedure, we add poles and zeros through a digital controller, so as to shift the roots of the characteristic
equation to more appropriate locations in the z-plane. Therefore, it is useful to investigate the effects of
various pole-zero configurations of the digital controller on the root locus plots.
A simple lead compensator model in the w-plane is described by the transfer function (refer to Eqn. (4.38))
D(w) = (1 + wτ)/(1 + αwτ);  α < 1, τ > 0
Fig. 4.24 Typical pole-zero configurations of (a) lead compensator; and (b) lag compensator
The bilinear transformation w = (2/T)·(z − 1)/(z + 1) maps this compensator into a z-plane transfer function of the form D(z) = Kc1(z − a1)/(z − a2); since α < 1 and τ > 0, the zero and the pole lie on the real axis inside the unit circle, with the zero to the right of the pole (Fig. 4.24a). For the purpose of simplifying the design procedure, we normally associate the gain Kc1 with the plant transfer function, and design the lead compensator
D(z) = (z − a1)/(z − a2) (4.53a)
based on the new plant transfer function. It may be noted that D(z), given by Eqn. (4.53a), is not a unity-gain model; the dc gain of D(z) is given by
lim_{z→1} (z − a1)/(z − a2) = (1 − a1)/(1 − a2) (4.53b)
To study the effect of a lead compensator on the root loci, we consider a unity-feedback sampled-data system with open-loop transfer function
Gh0G(z) = K(z + 0.368)/[(z − 0.368)(z − 0.135)];  T = 1 sec (4.54)
The root locus plot of the uncompensated system is shown in Fig. 4.25a. The plot intersects the ζ = 0.5 locus7 at point P. The value of gain K at this point is determined to be 0.3823.
The constant-ωd locus passing through point P is a radial line at an angle of 82º with the real axis (Fig. 4.25a). Therefore,
ωdT = ωnT√(1 − ζ²) = 82π/180
This gives ωn = 1.65 rad/sec.
Since Gh0G(z), given by Eqn. (4.54), is a Type-0 system, we will consider the position error constant Kp to study steady-state accuracy. For K = 0.3823,
Kp = lim_{z→1} Gh0G(z) = 0.3823(1 + 0.368)/[(1 − 0.368)(1 − 0.135)] = 0.957
We now cancel the pole of Gh0G(z) at z = 0.135 by the zero of the lead compensator, and add a pole at z = −0.135, i.e., we select
D(z) = (z − 0.135)/(z + 0.135)
Figure 4.25b shows the root locus plot of the lead compensated system. The modified locus has moved to the left, towards the more stable part of the plane. The intersection of the locus with the ζ = 0.5 locus is at point Q. The value of ωn at this point is determined to be 2.2 rad/sec. The lead compensator has thus increased ωn, and hence the speed of response of the system. The gain K at point Q is determined to be 0.433. The position error constant of the lead compensated system is given by
Kp = lim_{z→1} D(z)Gh0G(z) = lim_{z→1} 0.433(z + 0.368)/[(z − 0.368)(z + 0.135)] = 0.82
The lead compensator has thus given satisfactory dynamic response, but the position error constant is too low. We will shortly see how Kp can be increased by lag compensation.
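Both position error constants follow by evaluating the loop gain at z = 1; a minimal check (plain Python):

    G = lambda z: (z + 0.368)/((z - 0.368)*(z - 0.135))   # plant of Eqn. (4.54), gain K excluded
    D = lambda z: (z - 0.135)/(z + 0.135)                 # lead section
    print(0.3823*G(1.0))          # uncompensated, K = 0.3823 -> Kp ~ 0.957
    print(0.433*D(1.0)*G(1.0))    # lead compensated, K = 0.433 -> Kp ~ 0.82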
7 For a given ζ, the constant-ζ curve may be constructed using Eqn. (4.15b). The following table gives the real and imaginary coordinates of points on some constant-ζ curves.
ζ = 0.3:  Re  0.932  0.735  0.360  0      −0.259  −0.380  −0.373
          Im  0.164  0.424  0.623  0.610   0.448   0.220   0
ζ = 0.4:  Re  0.913  0.689  0.317  0      −0.201  −0.276  −0.254
          Im  0.161  0.398  0.549  0.504   0.347   0.160   0
ζ = 0.5:  Re  0.891  0.640  0.273  0      −0.149  −0.191  −0.163
          Im  0.157  0.370  0.473  0.404   0.259   0.110   0
ζ = 0.6:  Re  0.864  0.585  0.228  0      −0.104  −0.122  −0.095
          Im  0.152  0.338  0.395  0.308   0.180   0.070   0
ζ = 0.7:  Re  0.830  0.519  0.179  0      −0.064  −0.067  −0.046
          Im  0.146  0.299  0.310  0.215   0.111   0.039   0
ζ = 0.8:  Re  0.780  0.431  0.124  0      −0.031  −0.026  −0.015
          Im  0.138  0.249  0.215  0.123   0.053   0.015   0
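These coordinates are samples of z = e^{−ζωnT}∠ωnT√(1 − ζ²). The script below regenerates them, on the assumption (consistent with the tabulated values) that the points correspond to ωdT = 10°, 30°, 60°, 90°, 120°, 150° and 180°:

    import numpy as np

    for zeta in (0.3, 0.4, 0.5, 0.6, 0.7, 0.8):
        th = np.radians([10, 30, 60, 90, 120, 150, 180])   # omega_d*T samples (assumed)
        r = np.exp(-zeta*th/np.sqrt(1 - zeta**2))          # |z| = exp(-zeta*omega_n*T)
        z = r*np.exp(1j*th)
        print(zeta, np.round(z.real, 3), np.round(z.imag, 3))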
Fig. 4.25 Root locus plots for (a) uncompensated; (b) lead compensated; and (c) lag compensated system (the ζ = 0.5 locus intersects the three root loci at P (θ = 82º), Q (θ = 109º) and R (θ = 60º), respectively)
The selection of the exact values of pole and zero of the lead compensator is done by experience and by
trial-and-error. In general, the zero is placed in the neighborhood of the desired dominant closed-loop
poles, and the pole is located at a reasonable distance to the left of the zero location.
A simple lag compensator model in the w-plane is described by the transfer function (refer to Eqn. (4.41))
D(w) = (1 + wτ)/(1 + βwτ);  β > 1, τ > 0
The bilinear transformation
w = (2/T)·(z − 1)/(z + 1)
converts this D(w) into a z-plane transfer function D(z) with a real pole and a real zero.
Since τ and β are both positive numbers, and since β > 1, the pole and zero of D(z) always lie on the real axis inside the unit circle; the pole is always to the right of the zero. A typical pole-zero configuration of the lag compensator
D(z) = Kc2 (z − b1)/(z − b2) (4.55)
is shown in Fig. 4.24b. Note that both the pole and the zero have been shown close to z = 1. This, as we shall see, gives better stability properties.
Again, we will associate the gain Kc2 with the plant transfer function, and design the lag compensator
D(z) = (z − b1)/(z − b2) (4.56)
based on the new plant transfer function. The dc gain of the lag compensator given by (4.56) is equal to
lim_{z→1} (z − b1)/(z − b2) = (1 − b1)/(1 − b2) (4.57)
To study the effect of the lag compensator on the root loci, we reconsider the system described by Eqn. (4.54):
Gh0G(z) = K(z + 0.368)/[(z − 0.368)(z − 0.135)];  T = 1 sec
The root locus plot of the uncompensated system is shown in Fig. 4.25a. At point P, ζ = 0.5, ωn = 1.65 and K = 0.3823 (Kp = 0.957).
We now cancel the pole of Gh0G(z) at z = 0.368 by the zero of the lag compensator, and add a pole at z = 0.9, i.e., we select
D(z) = (z − 0.368)/(z − 0.9)
Figure 4.25c shows the root locus plot of the lag compensated system. The intersection of the locus with the ζ = 0.5 locus is at point R. The value of ωn at this point is determined to be 1.2 rad/sec. The lag compensator has thus reduced ωn, and hence the speed of response. The value of the gain K at point R is determined to be 0.478. The position error constant of the lag compensated system is
Kp = lim_{z→1} D(z)Gh0G(z) = lim_{z→1} 0.478(z + 0.368)/[(z − 0.135)(z − 0.9)] = 7.56
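As before, the error constant follows by evaluating the lag compensated loop gain at z = 1 (plain Python):

    G = lambda z: (z + 0.368)/((z - 0.368)*(z - 0.135))   # plant of Eqn. (4.54), gain K excluded
    D = lambda z: (z - 0.368)/(z - 0.9)                   # lag section
    print(0.478*D(1.0)*G(1.0))    # K = 0.478 -> Kp ~ 7.56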
Thus, we have been able to increase the position error constant appreciably by lag compensation.
If both the pole and the zero of the lag compensator are moved close to z = 1, the root locus plot of the lag compensated system moves back towards its uncompensated shape. Consider the root locus plot of the uncompensated system shown in Fig. 4.25a. The angle contributed at point P by an additional pole-zero pair close to z = 1 (called a dipole) will be negligibly small; therefore, the point P will continue to lie on the lag compensated root locus plot. However, the lag compensator
D(z) = (z − b1)/(z − b2)
will raise the system Kp (refer to Eqn. (4.57)) by a factor of (1 − b1)/(1 − b2).
The following examples illustrate typical digital control system design problems carried out in the
z-plane, using the root locus technique. As we shall see, the design of digital compensation using root
locus plots is essentially a trial-and-error method. The designer may rely on a digital computer to plot
out a large number of root loci by scanning through a wide range of possible values of the compensator
parameters, and select the best solution. However, one can still make proper and intelligent initial
‘guesses’ so that the amount of trial-and-error effort is kept to a minimum.
Example 4.6
Consider the feedback control system shown in Fig. 4.26. The plant is described by the transfer
function
G(s) = K/[s(s + 2)]
Design a digital control scheme for the system to meet the following specifications:
(i) the velocity error constant Kv = 6;
(ii) peak overshoot Mp to step input ≤ 15%; and
(iii) settling time ts (2% tolerance band) ≤ 5 sec.
Fig. 4.26 Digital control system of Example 4.6: compensator D(z), ZOH Gh0(s), and plant G(s) in a unity-feedback loop
Solution The transient accuracy requirements correspond to ζ = 0.5 and ωn = 1.6 rad/sec. We select T = 0.2 sec. Note that the sampling frequency ωs = 2π/T is about 20 times the natural frequency; therefore, our choice of sampling period is satisfactory.
The transfer function Gh0G(z) of the plant, preceded by a ZOH, can be obtained as follows:
Gh0G(z) = (1 − z⁻¹) Z[K/(s²(s + 2))]
        = 0.01758K(z + 0.876)/[(z − 1)(z − 0.67)] = K′(z + 0.876)/[(z − 1)(z − 0.67)] (4.58)
The root locus plot of this system for 0 ≤ K′ < ∞ was earlier constructed in Fig. 4.22. Complex-conjugate sections of this plot are shown in Fig. 4.27. The plot intersects the ζ = 0.5 locus at point P. At this point, ωn = 1.7 rad/sec, K′ = 0.0546.
Fig. 4.27 Root locus plot for system (4.58), with the ζ = 0.5 locus superimposed (intersection at point P, θ = 17º)
Therefore, the transient accuracy requirements (ζ = 0.5, ωn = 1.6) are almost satisfied by gain adjustment only. Let us now examine the steady-state accuracy of the uncompensated system (D(z) = 1) with K′ = 0.0546.
The velocity error constant Kv of the system is given by
Kv = (1/T) lim_{z→1} (z − 1)Gh0G(z) = 5(0.0546)(1 + 0.876)/(1 − 0.67) = 1.55
The specified value of Kv is 6. Therefore, an increase in Kv by a factor of 3.87 (= 6/1.55) is required.
The objective before us now is to introduce a D(z) that raises the system Kv by a factor of 3.87, without
appreciably affecting the transient performance of the uncompensated system, i.e., without appreciably
affecting the root locus plot in the vicinity of point P. This objective can be realized by a properly
designed lag compensator, as is seen below.
We add the compensator pole and zero as shown in Fig. 4.28. Since both the pole and the zero are very
close to z = 1, the scale in the vicinity of these points has been greatly expanded. The angle contributed
by the compensator pole at point P, is almost equal to the angle contributed by the compensator zero.
Therefore, the addition of dipole near z = 1 does not appreciably disturb the root locus plot in the vicinity
of point P. It only slightly reduces wn. The lag compensator
The lag compensator
D(z) = (z − 0.96)/(z − 0.99)
raises the system Kv by a factor of (1 − 0.96)/(1 − 0.99) = 4.
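A numerical-limit check of the compensated Kv (plain Python, with z taken slightly off 1 to approximate the limit):

    zz = 1 + 1e-9
    G = lambda z: 0.0546*(z + 0.876)/((z - 1)*(z - 0.67))   # K' absorbed into the plant
    D = lambda z: (z - 0.96)/(z - 0.99)                     # lag dipole
    print((1/0.2)*(zz - 1)*D(zz)*G(zz))   # ~6.2 = 4 x 1.55, meeting Kv >= 6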
Note that because of lag compensator, a third closed-loop pole has been added. This pole, as seen from
Fig. 4.28, is a real pole lying close to z = 1. This pole, fortunately, does not disturb the dominance of the
complex conjugate closed-loop poles. The reason is simple.
Fig. 4.28 Root locus plot with the compensator pole and zero near z = 1 (scale expanded in their vicinity; plant zero at −0.876, pole at 0.67)
The closed-loop pole, close to z = 1, has a long time constant. However, there is a zero close to this
additional pole. The net effect is that the settling time will increase because of the third pole, but the
amplitude of the response term contributed by this pole will be very small. In system response, a long
tail of small amplitude will appear which may not appreciably degrade the performance of the system.
Example 4.7
Reconsider the feedback control system of Example 4.6 (Fig. 4.26). We now set the following goal for our
design:
(i) Kv ≥ 2.5;
(ii) ζ ≅ 0.5; and
(iii) ts (2% tolerance band) ≤ 2 sec.
The transient accuracy requirements correspond to ζ = 0.5 and ωn = 4. For sampling interval T = 0.2 sec, the sampling frequency is about eight times the natural frequency. A smaller value of T would be more appropriate for the present design problem, which requires a higher speed of response. We will, however, take T = 0.2 sec, so as to compare our results with those of Example 4.6.
Following the initial design steps of Example 4.6, we find that
Gh0G(z) = 0.01758K(z + 0.876)/[(z − 1)(z − 0.67)] = K′(z + 0.876)/[(z − 1)(z − 0.67)]
Complex-conjugate sections of the root locus plot, superimposed on the ζ = 0.5 locus, are shown in Fig. 4.27. The root locus plot intersects the constant-ζ locus at point P. At this point, ωn = 1.7 rad/sec. The specified value of ωn is 4. Therefore, the transient accuracy requirements cannot be satisfied by gain adjustment only.
The natural frequency wn can be increased by lead compensation. To design a lead compensator, we
translate the transient performance specifications into a pair of dominant closed-loop poles, add open-
loop poles and zeros through D(z) to reshape the root locus plot, and force it to pass through the desired
closed-loop poles.
Point Q in Fig. 4.29 corresponds to the desired closed-loop pole in the upper half of the z-plane. It is the point of intersection of the ζ = 0.5 locus and the constant-ωd locus, with ωd given by
ωd = ωn√(1 − ζ²) = 3.464 rad/sec
For this value of ωd, the constant-ωd locus is a radial line at an angle of ωdT(180/π) = 39.7º with the real axis.
If the point Q is to lie on the root locus plot of the compensated system, then the sum of the angles contributed at Q by the open-loop poles and zeros of the plant, and by the pole and zero of the compensator, must equal ±(2q + 1)180º; q = 0, 1, 2, …
The sum of the angle contributions due to the open-loop poles and zero of the plant at the point Q is
17.10º − 138.52º − 109.84º = −231.26º
Fig. 4.29 Lead compensator design of Example 4.7 (point Q at θ = 39.7º on the ζ = 0.5 locus; compensator pole at z = 0.254; the compensator contributes +51.26º)
Hence, the compensator D(z) must provide +51.26º. The transfer function of the compensator may be assumed to be
D(z) = (z − a1)/(z − a2)
If we decide to cancel the pole at z = 0.67 by the zero of the compensator at z = a1, then the pole of the compensator can be determined (from the condition that the compensator must provide +51.26º) as a point at z = 0.254 (a2 = 0.254). Thus, the transfer function of the compensator is obtained as
D(z) = (z − 0.67)/(z − 0.254)
The open-loop transfer function now becomes
D(z)Gh0G(z) = K′(z + 0.876)/[(z − 1)(z − 0.254)]
The value of K′ at point Q, obtained from Fig. 4.29 by graphical construction, is 0.2227. Therefore, K = 12.67. The velocity error constant of the compensated system is given by
Kv = (1/T) lim_{z→1} [(z − 1)D(z)Gh0G(z)] = 2.8
It meets the specification on steady-state accuracy.
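The whole construction can be checked without a drawing: locate Q from (ζ, ωn), measure the angle deficiency, and place the compensator pole accordingly. A sketch (NumPy; the variable names are ours):

    import numpy as np

    zeta, wn, T = 0.5, 4.0, 0.2
    wd = wn*np.sqrt(1 - zeta**2)                  # 3.464 rad/sec
    Q = np.exp(-zeta*wn*T)*np.exp(1j*wd*T)        # desired dominant pole ~ 0.516 + j0.428

    ang = lambda c: np.degrees(np.angle(c))
    plant_sum = ang(Q + 0.876) - ang(Q - 1) - ang(Q - 0.67)   # ~ -231.26 deg
    lead_required = -180 - plant_sum                          # ~ +51.26 deg
    # the zero cancels the plant pole at 0.67; the pole a2 must then satisfy
    # angle(Q - a2) = angle(Q + 0.876) - angle(Q - 1) + 180 ~ 58.6 deg
    theta = np.radians(ang(Q + 0.876) - ang(Q - 1) + 180)
    a2 = Q.real - Q.imag/np.tan(theta)
    print(lead_required, a2)                      # 51.26 deg, a2 ~ 0.254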
If it is required to have a larger Kv, then we may include a lag compensator. A lag-lead compensator can satisfy the requirements of high steady-state accuracy and high speed of response.
From the viewpoint of microprocessor implementation of the lag, lead, and lag-lead compensators, the lead compensators present the fewest coefficient quantization problems, because their poles and zeros are widely separated, and numerical inaccuracies in the realization of these compensators result in only small deviations in the expected system behavior. In the case of lag and lag-lead compensators, however, the lag section may give rise to considerable coefficient quantization problems, because its poles and zeros are usually close to each other (they are near the point z = 1). Numerical problems associated with the realization of compensator coefficients may then lead to significant deviations in the expected system behavior.
4.5 z-PLANE SYNTHESIS
Much of the style of the transform-domain techniques we have been discussing in this chapter grew out of the limitations of the technology that was available for realization of compensators with pneumatic components, or electric networks and amplifiers. In a digital computer, such limitations on realization are, of course, not relevant, and one can ignore these particular constraints. One design method which eliminates these constraints begins from the very direct point of view that we are given a process (plus hold) transfer function Gh0G(z), that we want to construct a desired transfer function M(z) between input r and output y, and that we have the computer transfer function D(z) to do the job, as per the feedback control structure of Fig. 4.30.
Fig. 4.30 Feedback control structure: computer D(z), ZOH Gh0(s), and process G(s)
8 Pole excess of M(z) = {Number of finite poles of M(z) − Number of finite zeros of M(z)}.
From Fig. 4.30, the closed-loop transfer function is M(z) = D(z)Gh0G(z)/[1 + D(z)Gh0G(z)]; solving for the controller gives the design formula
D(z) = [1/Gh0G(z)]·[M(z)/(1 − M(z))] (4.60)
For the digital controller D(z) to be physically realizable, the pole excess8 of the closed-loop transfer function M(z) has to be greater than or equal to the pole excess of the process transfer function Gh0G(z).
If the digital controller D(z) given by Eqn. (4.60) and the process Gh0G(z) are in a closed loop, the poles and zeros of the process are canceled by the zeros and poles of the controller. The cancellation is perfect if the process model Gh0G(z) matches the process exactly. Since the process models used for design practically never describe the process behavior exactly, the corresponding poles and zeros will not be canceled exactly; the cancellation will be approximate. For poles and zeros of Gh0G(z) which are located well inside the unit circle in the z-plane, the approximate cancellation leads, in general, to only small deviations from the assumed behavior M(z). However, one has to be careful if Gh0G(z) has poles or zeros on or outside the unit circle. Imperfect cancellation may then lead to weakly damped or unstable behavior. Therefore, the design of digital controllers according to Eqn. (4.60) has to be restricted to cancellation of poles and zeros of Gh0G(z) located inside the unit circle. This imposes certain restrictions on the desired transfer function M(z), as is seen below.
Assume that Gh0G(z) involves an unstable (or critically stable) pole at z = a. Let us define
Gh0G(z) = G1(z)/(z − a)
where G1(z) does not include a term that cancels with (z − a). Then the closed-loop transfer function becomes
M(z) = [D(z)G1(z)/(z − a)] / [1 + D(z)G1(z)/(z − a)] (4.63)
Since we require that no zero of D(z) cancels the pole of Gh0G(z) at z = a, we must have
1 − M(z) = 1/[1 + D(z)G1(z)/(z − a)] = (z − a)/[z − a + D(z)G1(z)]
that is, 1 − M(z) must have z = a as a zero. This argument applies equally if Gh0G(z) involves two or more unstable (or critically stable) poles.
Also note from Eqn. (4.63) that if poles of D(z) do not cancel zeros of Gh0G(z), then the zeros of Gh0G(z)
become zeros of M(z).
Let us summarize what we have stated concerning cancellation of poles and zeros of Gh0G(z).
(i) Since the digital controller D(z) should not cancel unstable (or critically stable) poles of Gh0G(z),
all such poles of Gh0G(z) must be included in 1 – M(z) as zeros.
(ii) Zeros of Gh0G(z) that lie on or outside the unit circle should not be canceled with poles of D(z);
all such zeros of Gh0G(z) must be included in M(z) as zeros.
The design procedure, thus, essentially involves the following three steps:
(I) The closed-loop transfer function M(z) of the final system is determined from the performance
specifications, and the fixed parts of the system, i.e., Gh0G(z).
(II) The transfer function D(z) of the digital controller is found using the design formula (4.60).
(III) The digital controller D(z) is synthesized.
Step (I) is certainly the most difficult one to satisfy. In order to pass step (I), a designer must fulfil the
following requirements:
(i) the digital controller D(z) must be physically realizable;
(ii) the poles and zeros of Gh0G(z) on or outside the unit circle should not be canceled by D(z); and
(iii) the system specifications on transient and steady-state accuracy should be satisfied.
Example 4.8
The plant of sampled-data system of Fig. 4.30 is described by the transfer function
G(s) = 1/[s(10s + 1)] (4.64a)
The sampling period is 1 sec.
The problem is to design a digital controller D(z) to realize the following specifications:
(i) Kv ≥ 1;
(ii) ζ = 0.5; and
(iii) ts (2% tolerance band) ≤ 8 sec.
The selection of a suitable M(z) is described by the following steps.
(i) The z-transfer function of the plant is given by (refer to Table 2.1)
Gh0G(z) = (1 − z⁻¹) Z[1/(s²(10s + 1))] = 0.04837(z + 0.9672)/[(z − 1)(z − 0.9048)] (4.64b)
Since Gh0G(z) has one more pole than zero, M(z) must have a pole excess of at least one.
(ii) Gh0G(z) has a pole at z = 1. This must be included in 1 − M(z) as a zero, i.e.,
1 − M(z) = (z − 1)F(z) (4.65)
where F(z) is a ratio of polynomials of appropriate dimensions.
(iii) The transient accuracy requirements are specified as ζ = 0.5, ωn = 1 (ts = 4/ζωn = 8 sec). With a sampling period T = 1 sec, this maps to a pair of dominant closed-loop poles in the z-plane at
z1,2 = e^{−ζωnT} e^{±jωnT√(1−ζ²)} = 0.3928 ± j0.4618
The closed-loop transfer function, M(z), should have dominant poles at the roots of the equation
Δ(z) = z² − 0.7856z + 0.3678 = 0 (4.66)
The steady-state accuracy requirements demand that the steady-state error to unit-step input be zero, and the steady-state error to unit-ramp input be less than 1/Kv. Now
E(z) = R(z) − Y(z) = R(z)[1 − M(z)] = R(z)(z − 1)F(z)
For the unit-step input, R(z) = z/(z − 1), and
e*ss|unit step = lim_{z→1} (z − 1)E(z) = lim_{z→1} z(z − 1)F(z) = 0
Thus, with the choice of M(z) given by Eqn. (4.65), the steady-state error to unit-step input is always zero. For the unit-ramp input, R(z) = Tz/(z − 1)², and
e*ss|unit ramp = lim_{z→1} (z − 1)·[Tz/(z − 1)²]·(z − 1)F(z) = T F(1) = 1/Kv
For T = 1 and Kv = 1,
F(1) = 1 (4.67)
From Eqns (4.65) and (4.66), we observe that
F(z) = (z − a)/(z² − 0.7856z + 0.3678)
meets the requirements on realizability of D(z), cancellation of poles and zeros of Gh0G(z), and transient accuracy. The requirement on steady-state accuracy is also met if we choose a such that (refer to Eqn. (4.67))
(1 − a)/(1 − 0.7856 + 0.3678) = 1
This gives a = 0.4178. Therefore,
F(z) = (z − 0.4178)/(z² − 0.7856z + 0.3678);  1 − M(z) = (z − 1)(z − 0.4178)/(z² − 0.7856z + 0.3678)
M(z) = (0.6322z − 0.05)/(z² − 0.7856z + 0.3678) (4.68)
Now, turning to the basic design formula (4.60), we compute
D(z) = [1/Gh0G(z)]·[M(z)/(1 − M(z))] = [(z − 1)(z − 0.9048)/(0.04837(z + 0.9672))]·[(0.6322z − 0.05)/((z − 1)(z − 0.4178))]
     = 13.07(z − 0.9048)(z − 0.079)/[(z + 0.9672)(z − 0.4178)] (4.69)
A plot of the step response of the resulting design is provided in Fig. 4.31, which also shows the control
effort. The underdamped response settles within a two percent band of the desired value of unity in less
than 8 sec. We can see the oscillation of u(k)—associated with the pole of D(z) at z = –0.9672, which
is quite near the unit circle. Strong oscillations of u(k) are often considered unsatisfactory, even though
the process is being controlled as was intended. In the literature, poles near z = –1 are often referred to
as ringing poles.
Fig. 4.31 Step response (Example 4.8)
To avoid the ringing effect, we could include the zero of Gh0G(z) at z = – 0.9672 in M(z) as zero, so that
this zero of Gh0G(z) is not canceled with pole of D(z). M(z) may have additional poles at z = 0, where the
transient is as short as possible. The result will be a simpler D(z) with a slightly more complicated M(z).
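The response of Fig. 4.31 can be reproduced by iterating the loop difference equations obtained from Eqns (4.64b) and (4.69). A minimal simulation sketch (NumPy); note that the plant has a one-step delay, so the loop is computable sample by sample:

    import numpy as np

    N = 20
    r = np.ones(N)                                 # unit step
    y = np.zeros(N); u = np.zeros(N); e = np.zeros(N)
    for k in range(N):
        # plant (4.64b): y(k) = 1.9048 y(k-1) - 0.9048 y(k-2)
        #                       + 0.04837 u(k-1) + 0.04679 u(k-2)
        if k >= 1: y[k] = 1.9048*y[k-1] + 0.04837*u[k-1]
        if k >= 2: y[k] += -0.9048*y[k-2] + 0.04679*u[k-2]
        e[k] = r[k] - y[k]
        # controller (4.69): u(k) = -0.5494 u(k-1) + 0.4041 u(k-2)
        #                           + 13.07 e(k) - 12.859 e(k-1) + 0.9343 e(k-2)
        u[k] = 13.07*e[k]
        if k >= 1: u[k] += -0.5494*u[k-1] - 12.859*e[k-1]
        if k >= 2: u[k] += 0.4041*u[k-2] + 0.9343*e[k-2]
    print(np.round(y, 3))   # settles within the 2% band around k = 8 (T = 1 sec)
    print(np.round(u, 2))   # alternating 'ringing' from the controller pole near z = -1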
REVIEW EXAMPLES
Fig. 4.32 Digital control system of Review Example 4.1: compensator D(z), ZOH Gh0(s), and plant G(s) in a unity-feedback loop
Fig. 4.33 Compensator design (Review Example 4.1): Bode magnitude and phase plots of Gh0G(jν) and D(jν)Gh0G(jν)
The phase lead of 31° is provided at the frequency (refer to Eqn. (4.40))
νm = √[(1/τ)(1/(ατ))] = √(0.997 × 3.27) = 1.8
The system gain K was determined to be 2. Therefore, for the plant of the system of Fig. 4.32, the digital
controller is given by
U(z)/E(z) = D(z) = 2.718K (z − 0.8187)/(z − 0.5071) = 5.436(z − 0.8187)/(z − 0.5071)
Fig. 4.34 Root locus plot with the ζ = 0.5 locus (point P at θ = 21º)
Fig. 4.35 Root locus plot of the compensated system with the ζ = 0.5 locus (point Q at θ = 41º)
PROBLEMS
4.1 For the system shown in Fig. P4.1, find
(i) position error constant, Kp;
(ii) velocity error constant, Kv; and
(iii) acceleration error constant, Ka.
Express the results in terms of K1, K2, J, and T.
4.2 Consider the analog control system shown in Fig. P4.2a. Show that the phase margin of the system is about 45º.
We wish to replace the analog controller by a digital controller, as shown in Fig. P4.2b. First, modify the analog controller to take into account the effect of the hold that must be included in the equivalent digital control system (the zero-order hold may be approximated by a pure time delay of one half of the sampling period T (Fig. P4.2c), and then a lag compensator D1(s) may be designed to realize the phase margin of 45º). Then, by using the bilinear transformation, determine the equivalent digital controller.
Compare the velocity error constants of the original analog system of Fig. P4.2a, and the equivalent digital system of Fig. P4.2b.
Fig. P4.2 (a) Analog system: controller D(s) = 1, plant 1.57/[s(s + 1)], unity feedback; (b) digital system: controller D1(z) with sampler (T = 1.57 sec) and ZOH (1 − e^{−sT})/s, plant 1.57/[s(s + 1)]; (c) hold approximated by a delay: controller D1(s), plant 1.57e^{−sT/2}/[s(s + 1)]
4.3 A unity-feedback system is characterized by the open-loop transfer function
Gh0G(z) = 0.2385(z + 0.8760)/[(z − 1)(z − 0.2644)]
The sampling period T = 0.2 sec.
Determine steady-state errors for unit-step, unit-ramp, and unit-acceleration inputs.
4.4 Predict the nature of the transient response of a discrete-time system whose characteristic equation
is given by
z2 – 1.9z + 0.9307 = 0
The sampling interval T = 0.02 sec.
4.5 The system of Fig. P4.5 contains a disturbance input W(s), in addition to the reference input R(s).
(a) Express Y(z) as a function of the two inputs.
(b) Suppose that D2(z) and D3(z) are chosen such that D3(z) = D2(z)Gh0G(z). Find Y(z) as a
function of the two inputs.
(c) What is the advantage of the choice in part (b) if it is desired to minimize the response Y(z)
to the disturbance W(s)?
Fig. P4.5 Sampled-data system with reference input R(s) and disturbance input W(s): compensators D1(z), D2(z), D3(z), ZOH Gh0(s), and plant G(s); output Y(s)
4.6 Consider the system of Fig. P4.6. The design specifications for the system require that
(i) the steady-state error to a unit-ramp reference input be less than 0.01; and
(ii) a constant disturbance w should not affect the steady-state value of the output.
Show that these objectives can be met if D(z) is a proportional-plus-integral compensator.
Fig. P4.6 Digital control system with a constant disturbance w entering at the plant input: D(z), ZOH Gh0(s), and plant 1/(s + 1) in a unity-feedback loop
Design of Digital Control Algorithms 275
4.7 Consider the feedback system shown in Fig. P4.7. The nominal values of the parameters K and τ of the plant G(s) are both equal to 1. Find an expression for the sensitivity S(z) of the closed-loop transfer function M(z), with respect to incremental changes in the open-loop transfer function Gh0G(z). Plot |S(e^{jωT})| for 0 ≤ ω ≤ ωs/2, where ωs is the sampling frequency. Determine the bandwidth of the system if it is designed to have |S(e^{jωT})| < 1.
Fig. P4.7 Unity-feedback sampled-data system: ZOH Gh0(s) and plant G(s) = K/(τs + 1); T = 0.5 sec
Fig. P4.9 Digital control system: compensator D(z), ZOH Gh0(s), and plant G(s) in a unity-feedback loop
4.10 Consider the control system of Fig. P4.9, where the plant transfer function is G(s) = 1/s², and
T = 0.1 sec. Design a lead compensator such that the phase margin is 50º and the gain margin is at
least 10 dB. Obtain the velocity error constant Kv of the compensated system.
Can the design be achieved using a lag compensator? Justify your answer.
4.11 Consider the control system of Fig. P4.9, where the plant transfer function is
G(s) = K/[s(s + 5)], and T = 0.1 sec
The performance specifications are given as
(i) velocity error constant Kv ≥ 10;
(ii) phase margin ΦM ≥ 60º; and
(iii) bandwidth ωb = 8 rad/sec.
(a) Find the value of K that gives Kv = 10. Determine the phase margin and the bandwidth of the
closed-loop system.
(b) Show that if lead compensation is employed, the system bandwidth will increase beyond
the specified value, and if lag compensation is attempted, the bandwidth will decrease
sufficiently so as to fall short of the specified value.
(c) Design a lag section of a lag-lead compensator to provide partial compensation for the phase
margin. Add a lead section to realize phase margin of 60º. Check the bandwidth of the lag-
lead compensated system.
(d) Find the transfer function D(z) of the lag-lead compensator and suggest a realization scheme.
4.12 Shown in Fig. P4.12a is a closed-loop temperature control system. Controlled electric heaters
maintain the desired temperature of the liquid in the tank. The computer output controls electronic
switches (triacs), to vary the effective voltage supplied to the heaters, from 0 V to 230 V. The
temperature is measured by a thermocouple whose output is amplified to give a voltage in the
range required by A/D converter. A simplified block diagram of the system, showing perturbation
dynamics, is given in Fig. P4.12b.
(a) Consider the analog control loop of Fig. P4.12c. Determine K that gives 2% steady-state
error to a step input.
(b) Let D(z) = K obtained in part (a). Is the sampled-data system of Fig. P4.12b stable for this
value of D(z)?
(c) Design a lag compensator for the system of part (b), such that 2% steady-state error is
realized, the phase margin is greater than 40º and the gain margin is greater than 6 dB. Give
the total transfer function D(z) of the compensator.
(d) Can the design of part (c) be achieved using a lead compensator? Justify your answer.
Fig. P4.12 (a) Closed-loop temperature control system: computer, triac circuit (0–230 V), heaters, tank, thermocouple, and A/D converter; (b) block diagram showing perturbation dynamics: input transducer 0.04, D(z) with sampler (T = 0.5 sec), ZOH Gh0(s), power gain 20, plant 1/(3s + 1), feedback transducer 0.04; (c) analog control loop: 0.04, gain K, power gain 20, plant 1/(3s + 1), feedback 0.04
4.13 (a) Consider a unity-feedback system with open-loop transfer function
Gh0G(z) = K(z − z1)/[(z − p1)(z − p2)];  0 ≤ K < ∞
The poles and zero of this second-order transfer function lie on the real axis; the poles are adjacent or coincident, with the zero to their left. Prove that the complex-conjugate section of the root locus plot is a circle with the center at z = z1, and the radius equal to √[(z1 − p1)(z1 − p2)].
(b) Given
Gh0G(z) = K(z − 0.9048)/(z − 1)²
Sketch the root locus plot for 0 ≤ K < ∞. Using the information in the root locus plot, determine the range of values of K for which the closed-loop system is stable. Also determine the value of K for which the system closed-loop poles are real and multiple.
4.14 A sampled-data feedback control system is shown in Fig. P4.14. The controlled process of the
system is described by the transfer function
G(s) = K/[s(s + 1)];  0 ≤ K < ∞
The sampling period T = 1 sec.
(a) Sketch the root locus plot for the system on the z-plane and from there obtain the value of K
that results in marginal stability.
278 Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
(b) Repeat part (a) for (i) T = 2 sec, (ii) T = 4 sec, and compare the stability properties of the
system with different values of sampling interval.
Fig. P4.14 Unity-feedback sampled-data system: sampler (period T), ZOH Gh0(s), and plant G(s)
4.15 The digital process of a unity-feedback system is described by the transfer function
Gh0G(z) = K(z + 0.717)/[(z − 1)(z − 0.368)];  T = 1 sec
Sketch the root locus plot for 0 ≤ K < ∞, and from there obtain the following information:
(a) The value of K that results in marginal stability. Also find the frequency of oscillations.
(b) The value of K that results in ζ = 1. What are the time constants of the closed-loop poles?
(c) The value of K that results in ζ = 0.5. Also find the natural frequency ωn for this value of K. You may use the following table to construct a constant-ζ locus on the z-plane, corresponding to ζ = 0.5.
Re 0.891 0.64 0.389 0.169 0 –0.113 –0.174 –0.188 –0.163
Im 0.157 0.37 0.463 0.464 0.404 0.310 0.207 0.068 0
4.16 The characteristic equation of a feedback control system is
z² + 0.2Az − 0.1A = 0
Sketch the root loci for 0 ≤ A < ∞, and therefrom obtain the range of parameter A for which the system is stable.
4.17 The block diagram of a sampled-data system using a dc motor for speed control is shown in
Fig. P4.17. The encoder senses the motor speed, and the output of the encoder is compared with
the speed command. Sketch the root locus plot for 0 ≤ K < ∞.
(a) For K = 1, find the time constant of the closed-loop pole.
(b) Find the value of K which results in a closed-loop pole whose time constant is less than or
equal to one fourth of the value found in part (a).
Use the parameter values:
Km = 1, τm = 1, T = 0.1 sec, P = 60 pulses/revolution.
4.18 Consider the system shown in Fig. P4.9 with G(s) = 1/[s(s + 1)] and T = 0.2 sec.
(a) Design a lead compensator so that the dominant closed-loop poles of the system will have ζ = 0.5 and ωn = 4.5.
(b) Obtain the velocity error constant Kv of the lead compensated system.
(c) Add a lag compensator in cascade so that Kv is increased by a factor of 3. What is the effect
of the lag compensator on the transient response of the system?
(d) Obtain the transfer function D(z) of the lag-lead compensator, and suggest a realization
scheme.
Use root locus method.
4.19 Consider the system shown in Fig. P4.9 with
G(s) = 1/[(s + 1)(s + 2)];  T = 1 sec
Design a compensator D(z) that meets the following specifications on system performance:
(a) ζ = 0.5;
(b) ωn = 1.5; and
(c) Kp ≥ 7.5.
Use root locus method.
4.20 The block diagram of a digital control system is shown in Fig. P4.9. The controlled process is
described by the transfer function
G(s) = K/s²;  T = 1 sec
which may represent a pure inertial load.
(a) The dominant closed-loop poles of the system are required to have ζ = 0.7, ωn = 0.3 rad/sec.
Mark the desired dominant closed-loop pole locations in the z-plane. The root loci must pass
through these points.
(b) Place the zero of the compensator D(z) below the dominant poles and find the location of
pole of D(z), so that the angle criterion at the dominant poles is satisfied. Find the value of
K, so that the magnitude criterion at the dominant poles is satisfied.
(c) Find the acceleration error constant, Ka.
(d) Your design will result in the specified values of ζ and ωn for the closed-loop system response, only if
the dominance condition is satisfied. Find the third pole of the closed-loop system and
comment on the effectiveness of your design.
4.21 The configuration of a commercial broadcast videotape positioning system is shown in Fig. P4.21.
The relationship between the armature voltage (applied to drive motor) and tape speed at the
recording and playback heads, is approximated by the transfer function G(s). The delay term
involved, accounts for the propagation of speed changes along the tape, over the distance of
physical separation of the tape drive mechanism and the recording and playback heads. The tape
position is sensed by a recorded signal on the tape itself.
Fig. P4.21 Videotape positioning system: controller D(z), D/A, and plant G(s) = [40/(s + 40)] e^{−s/120} (1/s); position sensor (gain 1) with A/D in the feedback path
Design the digital controller that should result in zero steady-state error to any step change in
the desired tape position. The closed-loop poles of the system are required to lie within a circle of
radius 0.56. Take the sampling interval T = 1/120 sec.
4.22 Consider the sampled-data system shown in Fig. P4.22; the plant is known to have the transfer
function
G(s) = 1/[s(s + 1)]
A sampling period of T = 0.1 sec is to be used.
(a) Design a digital controller to realize the following specifications:
(i) ζ = 0.8;
(ii) ωn = 2π/10T; and
(iii) Kv ≥ 5.
(b) Design a digital controller so that the response to unit-step input is
y(k) = 0, 0.5, 1, 1, …
Find the steady-state error to unit-ramp input.
Fig. P4.22 Sampled-data system: D(z), ZOH Gh0(s), and plant G(s) in a unity-feedback loop
4.23 In the control configuration of Fig. P4.22, find the control algorithm D(z) so that the response to
a unit-step function will be y(t) = 1– e–t. The plant transfer function is
G(s) = 1/(10s + 1)
Assume that the sampling interval T = 2 sec.
Part II
State Variable Methods in Automatic Control:
Continuous-Time and Sampled-Data Systems
In Part I of the book, we developed some general procedures for the design of controllers. Our discussion
was basically centered around the generalized operational block diagram of a feedback system, shown
in Fig. 1.8.
We have assumed in our presentation, that the dynamic behavior of the plant can be represented (or
approximated with ‘sufficient’ accuracy) by a linear time-invariant nth-order system, which is described
by a strictly proper, minimal (controllable and observable) rational transfer function GP(s). We have also
assumed that any external disturbances that affect the plant, can be represented by a single, additive
signal w(t), with known dynamic properties (refer to Fig. 1.8). The dynamics in the feedback path (often attributed to the sensor) were assumed to be characterized by the proper minimal transfer function H(s), which produces a continuous measure of the potentially noisy output y(t).
We placed a restriction on the design of controllers: the controller can be represented by a linear time-
invariant system, whose single output (for the single-input single-output (SISO) systems) u(t) is produced
by the input r(t) – b(t) = ê(t). Therefore, its dynamic behavior can be described by
U(s) = D(s)[R(s) – B(s)]
where D(s) is the proper minimal transfer function of the controller, whose degree defines the order of
the controller.
We have observed that in many cases involving the so-called classical control techniques, the transfer function A(s) (corresponding to the reference-input elements (Fig. 1.8)) is assumed to be equal to H(s). This implies the more restrictive unity-feedback configuration depicted in Fig. 1.12. However, the choice of A(s) ≠ H(s) would imply a non-unity-feedback structure; the design procedures for this structure have been developed in our companion book [155]. In the vast majority of applications, the unity-feedback configuration is preferred because the error (e(t) = r(t) − y(t) = yr(t) − y(t)) is explicitly present, both to drive the controller and to be zeroed via feedback.
In this part of the book, we intend to relax the restrictions we have so far imposed on the development
of general procedures for the design of controllers. We know that the output y(t) does not represent the
complete dynamic state of the plant at time t; it is the state vector x(t) = [x1(t), …, xn(t)]ᵀ which carries
complete knowledge on the dynamics at time t. In the output-feedback configurations of the form shown
in Fig. 1.8, only partial information on the dynamical state of the plant is fed back. We will relax this
restriction and allow the complete state x(t) to be fed back.
In the classical configuration of Fig. 1.8, the controller output u(t) is produced by one input:
[r(t) – b(t)]. We will relax this restriction also, and allow the controller u(t) to be a function of r(t), and
b(t) independently.
Chapter 5
Control System Analysis using
State Variable Methods
5.1 INTRODUCTION
In Part I of this book, we have seen that the root-locus method and the frequency-response method are
quite powerful for the analysis and design of feedback control systems. The analysis and design are
carried out using transfer functions, together with a variety of graphical tools such as root-locus plots,
Nyquist plots, Bode plots, Nichols chart, etc. These techniques of the so-called classical control theory
have been greatly enhanced by the availability, and low cost, of digital computers for system analysis and
simulation. The graphical tools can now be more easily used with computer graphics.
The classical design methods suffer from certain limitations, due to the fact that the transfer function model is applicable only to linear time-invariant systems and, even then, is generally restricted to Single-Input, Single-Output (SISO) systems; the classical design approach becomes highly cumbersome in Multi-Input, Multi-Output (MIMO) systems. Another limitation of the
transfer function technique is that it reveals only the system output for a given input and provides no
information about the internal behavior of the system. There may be situations where the output of a
system is stable and yet some of the system elements may have a tendency to exceed their specified
ratings. In addition to this, it may sometimes be necessary, and advantageous, to provide a feedback
proportional to the internal variables of a system, rather than the output alone, for the purpose of
stabilizing and improving the performance of a system.
The limitations of classical methods, based on transfer function models, have led to the development of
state variable approach of analysis and design. It is a direct time-domain approach which provides a basis
for modern control theory. It is a powerful technique for the analysis and design of linear and nonlinear,
time-invariant or time-varying MIMO systems. The organization of the state variable approach is such
that it is easily amenable to solution through digital computers.
It will be incorrect to conclude from the foregoing discussion, that the state variable design methods can
completely replace the classical design methods. In fact, the classical control theory, comprising a large
body of use-tested knowledge, is still going strong. State variable design methods prove their mettle in
applications that are intractable by classical methods.
The state variable formulation contributes to the application areas of classical control theory in a different way. Computing the response of G(s) to an input R(s) requires the expansion of {G(s)R(s)} into partial fractions, which, in turn, requires the computation of all the poles of {G(s)R(s)}, i.e., all the roots of a polynomial. The roots of a polynomial are very sensitive to their coefficients (refer to Review Example 3.3). Furthermore, a computer program to carry out partial fraction expansion is not simple to develop. On the other hand, the response of state variable equations is easy to program; its computation does not require the computation of roots or eigenvalues, and is, therefore, less sensitive to parameter variations. For these reasons, it is desirable to compute the response of G(s) through state variable equations. The state variable formulation is thus the most efficient form of system representation from the standpoint of computer simulation. For this reason, many Computer-Aided-Design (CAD) packages, handling both the classical and the modern tools of control system design, use this notation. It is, therefore, helpful for the control engineer to be familiar with state variable methods of system representation and analysis.
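As a small illustration of this point, the sketch below (NumPy/SciPy; the second-order plant chosen here is our own example) computes a step response directly from the state equations by exact zero-order-hold discretization, with no root finding anywhere:

    import numpy as np
    from scipy.linalg import expm

    # example plant G(s) = 1/(s^2 + 2s + 5) in state variable form x' = Ax + Bu, y = Cx
    A = np.array([[0.0, 1.0], [-5.0, -2.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])

    T = 0.02                                     # simulation step
    M = expm(np.block([[A, B], [np.zeros((1, 3))]])*T)
    Ad, Bd = M[:2, :2], M[:2, 2:]                # exact ZOH discretization

    x = np.zeros((2, 1)); y = []
    for _ in range(500):                         # unit-step input u = 1
        y.append(float(C @ x))
        x = Ad @ x + Bd
    print(y[-1])                                 # ~0.2, the dc gain 1/5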
Part-II of this text presents an introduction to a range of topics which fall within the domain of state
variable analysis and design. Our approach is to build on, and complement, the classical methods of
analysis and design. State variable analysis and design methods use vector and matrix algebra and are, to
some extent, different from those based on transfer functions. For this reason, we have not integrated the
state variable approach with the frequency-domain approach based on transfer functions.
We have been mostly concerned with SISO systems in the text so far. In the remaining chapters also, our
emphasis will be on the control of SISO systems. However, many of the analysis and design methods
based on state variable concepts are applicable to both SISO and MIMO systems with almost equal
convenience; the only difference being the additional computational effort for MIMO systems, which is
taken care of by CAD packages. A specific reference to such results will be made at appropriate places
in these chapters.
5.2.1 Matrices1
Basic definitions and algebraic operations associated with matrices are given below.
Matrix
The matrix
A = [a11 a12 … a1m
     a21 a22 … a2m
     ⋮             ⋮
     an1 an2 … anm] = [aij] (5.1)
1 We will use upper case bold letters to represent matrices, and lower case bold letters to represent vectors.
is a rectangular array of nm elements. It has n rows and m columns; aij denotes the (i, j)th element, i.e., the element located in the ith row and jth column. A is said to be a rectangular matrix of order n × m.
When m = n, i.e., when the number of columns equals the number of rows, the matrix is said to be a square matrix of order n.
An n × 1 matrix, i.e., a matrix having only one column, is called a column matrix. A 1 × n matrix, i.e., a matrix having only one row, is called a row matrix.
Diagonal Matrix
A diagonal matrix is a square matrix whose elements off the principal diagonal are all zeros (aij = 0 for i ≠ j). The following matrix is a diagonal matrix:
Λ = [a11 0 … 0
     0 a22 … 0
     ⋮           ⋮
     0 0 … ann] = diag [a11 a22 … ann] (5.2)
A unit matrix I is a diagonal matrix whose diagonal elements are all equal to unity (aii = 1, aij = 0 for i ≠ j):
I = [1 0 … 0
     0 1 … 0
     ⋮         ⋮
     0 0 … 1]
A null matrix 0 is a matrix all of whose elements are zero:
0 = [0 0 … 0
     0 0 … 0
     0 0 … 0]
Whenever necessary, the dimensions of the null matrix will be indicated by two subscripts: 0nm.
Lower-Triangular Matrix
A lower-triangular matrix L has all its elements above the principal diagonal equal to zero; lij = 0 if i < j, for 1 ≤ i ≤ n and 1 ≤ j ≤ m:
L = [l11 0 … 0
     l21 l22 … 0
     ⋮           ⋮
     ln1 ln2 … lnm]
Upper-Triangular Matrix
An upper-triangular matrix U has all its elements below the principal diagonal equal to zero; uij = 0 if i > j, for 1 ≤ i ≤ n and 1 ≤ j ≤ m:
U = [u11 u12 … u1m
     0 u22 … u2m
     ⋮             ⋮
     0 0 … unm]
Matrix Transpose
If the rows and columns of an n × m matrix A are interchanged, the resulting m × n matrix, denoted Aᵀ, is called the transpose of the matrix A. Namely, if A is given by Eqn. (5.1), then
Aᵀ = [a11 a21 … an1
      a12 a22 … an2
      ⋮             ⋮
      a1m a2m … anm]
Some properties of the matrix transpose are
(i) (Aᵀ)ᵀ = A
(ii) (kA)ᵀ = kAᵀ, where k is a scalar
(iii) (A + B)ᵀ = Aᵀ + Bᵀ
(iv) (AB)ᵀ = BᵀAᵀ
Conjugate Matrix
If the complex elements of a matrix A are replaced by their respective conjugates, then the resulting
matrix is called the conjugate of A.
Conjugate Transpose
The conjugate transpose is the conjugate of the transpose of a matrix. Given a matrix A, the conjugate transpose is denoted by A*, and is equal to the conjugate of Aᵀ.
Determinants are defined for square matrices only. The determinant of the n × n matrix A, written as |A| or det A, is a scalar-valued function of A. It is found through the use of minors and cofactors.
The minor mij of the element aij is the determinant of the matrix of order (n − 1) × (n − 1), obtained from A by removing the row and the column containing aij.
The cofactor cij of the element aij is defined by the equation
cij = (−1)^{i+j} mij
Determinants can be evaluated by the method of Laplace expansion. If A is an n × n matrix, any arbitrary row k can be selected, and |A| is then given by
|A| = Σ_{j=1}^{n} akj ckj
Similarly, Laplace expansion can be carried out with respect to any arbitrary column l, to obtain
|A| = Σ_{i=1}^{n} ail cil
Laplace expansion reduces the evaluation of an n × n determinant to the evaluation of a string of (n − 1) × (n − 1) determinants, namely, the cofactors.
Some properties of determinants are
(i) det AB = (det A)(det B)
(ii) det Aᵀ = det A
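A direct transcription of the Laplace expansion, expanding along the first row (plain Python, written for clarity rather than efficiency):

    def det(A):
        # |A| = sum_j a_0j * c_0j, with cofactor c_0j = (-1)**j * m_0j
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0.0
        for j in range(n):
            minor = [row[:j] + row[j+1:] for row in A[1:]]   # delete row 0, column j
            total += (-1)**j * A[0][j] * det(minor)
        return total

    print(det([[1, 2], [3, 4]]))                    # -2
    print(det([[2, 0, 1], [1, 3, 2], [0, 1, 1]]))   # 3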
Singular Matrix
A square matrix is called singular if the associated determinant is zero.
Nonsingular Matrix
A square matrix is called nonsingular if the associated determinant is nonzero.
Adjoint Matrix
The adjoint matrix of a square matrix A is found by replacing each element aij of the matrix A by its cofactor cij, and then transposing:
adj A = A⁺ = [c11 c21 … cn1
              c12 c22 … cn2
              ⋮             ⋮
              c1n c2n … cnn] = [cji]
Note that
A(adj A) = (adj A)A = |A| I (5.3)
Matrix Inverse
The inverse of a square matrix A is written as A⁻¹, and is defined by the relation
A⁻¹A = AA⁻¹ = I
From Eqn. (5.3) and the definition of the inverse matrix, we have
A⁻¹ = adj A / |A| (5.4)
For the diagonal matrix Λ of Eqn. (5.2),
Λ⁻¹ = diag [1/a11 1/a22 … 1/ann]
The rank r(A) of a matrix A is the dimension of the largest square array in A with a nonzero determinant. Some properties of rank are
(i) r(Aᵀ) = r(A)
(ii) The rank of a rectangular matrix cannot exceed the lesser of the number of rows and the number of columns. A matrix whose rank is equal to the lesser of the number of rows and the number of columns is said to be of full rank:
r(A) ≤ min (n, m); A is an n × m matrix
(iii) The rank of a product of two matrices cannot exceed the rank of either factor:
r(AB) ≤ min [r(A), r(B)]
The trace of a square matrix A is the sum of the elements on the principal diagonal:
$$\mathrm{tr}\,A = \sum_i a_{ii} \qquad (5.5)$$
Some properties of trace are
(i) $\mathrm{tr}\,A^T = \mathrm{tr}\,A$  (ii) $\mathrm{tr}(A + B) = \mathrm{tr}\,A + \mathrm{tr}\,B$
(iii) $\mathrm{tr}\,AB = \mathrm{tr}\,BA$; $\mathrm{tr}\,AB \neq (\mathrm{tr}\,A)(\mathrm{tr}\,B)$  (iv) $\mathrm{tr}\,P^{-1}AP = \mathrm{tr}\,A$
A matrix can be partitioned into submatrices or vectors. Broken lines are used to show the partitioning
when the elements of the submatrices are explicitly shown. For example,
$$A = \left[\begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \end{array}\right]$$
The broken lines indicating the partitioning are sometimes omitted when the context makes it clear that partitioned matrices are being considered. For example, the matrix A given above may be expressed as
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$
We will be frequently using the following forms of partitioning.
(i) Matrix A partitioned into its columns:
$$A = [\mathbf{a}_1 \;\; \mathbf{a}_2 \;\; \cdots \;\; \mathbf{a}_m]$$
where
$$\mathbf{a}_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ni} \end{bmatrix} = i\text{th column of } A$$
(ii) Matrix A in block diagonal form:
$$A = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & A_m \end{bmatrix} = \mathrm{diag}[A_1 \;\; A_2 \;\; \cdots \;\; A_m]$$
For this case,
(i) $|A| = |A_1||A_2|\cdots|A_m|$
(ii) $A^{-1} = \mathrm{diag}[A_1^{-1} \;\; A_2^{-1} \;\; \cdots \;\; A_m^{-1}]$, provided that $A^{-1}$ exists.
5.2.2 Vectors
We will be mostly concerned with vectors and matrices that have real elements. We, therefore, restrict
our discussion to these cases only. An extension of the results to the situations where the vectors/matrices
have complex elements is quite straightforward.
The concept of norm of a vector is a generalization of the idea of length. For the vector
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
the Euclidean vector norm ||x|| is defined by
$$\|\mathbf{x}\| = (x_1^2 + x_2^2 + \cdots + x_n^2)^{1/2} = (\mathbf{x}^T\mathbf{x})^{1/2} \qquad (5.6a)$$
In two or three dimensions, it is easy to see that this definition for the length of x satisfies the conditions
of Euclidean geometry. It is a generalization to n dimensions of the theorem of Pythagoras.
For any nonsingular matrix P, the vector
$$\mathbf{y} = P\mathbf{x}$$
has the Euclidean norm
$$\|\mathbf{y}\| = [(P\mathbf{x})^T(P\mathbf{x})]^{1/2} = (\mathbf{x}^TP^TP\mathbf{x})^{1/2}$$
Letting $Q = P^TP$, we write
$$\|\mathbf{y}\| = (\mathbf{x}^TQ\mathbf{x})^{1/2}$$
or
$$\|\mathbf{x}\|_Q = (\mathbf{x}^TQ\mathbf{x})^{1/2} \qquad (5.6b)$$
We call $\|\mathbf{x}\|_Q$ the norm of x with respect to Q. It is, in fact, a generalization of the norm defined in (5.6a), in that it is a measure of the size of x 'weighted' by the matrix Q.
The norm of a matrix is a measure of the 'size' of the matrix (not its dimension). For the matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
the Euclidean matrix norm ||A|| is defined by
$$\|A\| = \left[\sum_{i,j=1}^{n} a_{ij}^2\right]^{1/2} = [\mathrm{tr}(A^TA)]^{1/2} \qquad (5.6c)$$
We can also describe the size of A by
$$\|A\| = \max_{\mathbf{x}}\frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}; \quad \mathbf{x} \neq \mathbf{0}$$
i.e., the largest value of the ratio of the length ||Ax|| to the length ||x||.
$$\|A\| = \max_{\mathbf{x}}\frac{(\mathbf{x}^TA^TA\mathbf{x})^{1/2}}{(\mathbf{x}^T\mathbf{x})^{1/2}} = \max_{\mathbf{x}}\left(\frac{\mathbf{x}^TA^TA\mathbf{x}}{\mathbf{x}^T\mathbf{x}}\right)^{1/2}$$
The maximum value of the ratio in this expression can be determined in terms of the eigenvalues² of the matrix $A^TA$. The real symmetric matrix $A^TA$ has all real and nonnegative eigenvalues, and the maximum value of the ratio $(\mathbf{x}^TA^TA\mathbf{x})/(\mathbf{x}^T\mathbf{x})$ is equal to the maximum eigenvalue of $A^TA$ (for proof, refer to [107]). Therefore,
$$\|A\| = (\text{maximum eigenvalue of } A^TA)^{1/2} \qquad (5.6d)$$
This definition of the matrix norm is known as the spectral norm³ of A.
The square roots of the eigenvalues of $A^TA$ are called the singular values of A. The spectral norm of A is equal to its largest singular value.
Singular values of a matrix are useful in numerical analysis. The ratio of the largest to the smallest singular value of A, called the condition number of A, is a measure of how close the matrix A comes to being singular. The matrix A is, therefore, 'ill-conditioned' if its condition number is large.
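These quantities are easily computed with standard numerical software. A brief sketch (ours; the routines used are standard NumPy functions):

```python
import numpy as np

A = np.array([[1., 2], [0, 3]])
sing_vals = np.linalg.svd(A, compute_uv=False)   # singular values of A
spectral_norm = sing_vals.max()                  # equals ||A|| of (5.6d)
cond = sing_vals.max() / sing_vals.min()         # condition number of A

# cross-check against Eqn. (5.6d): largest eigenvalue of A^T A
print(np.isclose(spectral_norm**2, np.linalg.eigvalsh(A.T @ A).max()))
print(np.isclose(cond, np.linalg.cond(A)))       # both print True
```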
Orthogonal Vectors
Any two vectors which have a zero scalar product are said to be orthogonal vectors. Two n × 1 vectors x and y are orthogonal if
$$\mathbf{x}^T\mathbf{y} = 0$$
A set of vectors is said to be orthogonal if, and only if, every two vectors from the set are orthogonal: $\mathbf{x}^T\mathbf{y} = 0$ for all $\mathbf{x} \neq \mathbf{y}$ in the set.
Unit Vector
A unit vector x̂ is, by definition, a vector whose norm is unity; ||x̂|| = 1. Any nonzero vector x can be normalized to form a unit vector:
$$\hat{\mathbf{x}} = \frac{\mathbf{x}}{\|\mathbf{x}\|}$$
A set of vectors is said to be orthonormal if, and only if, the set is orthogonal and each vector in this
orthogonal set is a unit vector.
Orthogonal Matrix
Suppose that {x₁, x₂, …, xₙ} is an orthogonal set:
$$\mathbf{x}_i^T\mathbf{x}_i = 1 \text{ for all } i, \quad \text{and} \quad \mathbf{x}_i^T\mathbf{x}_j = 0 \text{ for all } i \text{ and } j \text{ with } i \neq j$$
If we form the n × n matrix
$$P = [\mathbf{x}_1 \;\; \mathbf{x}_2 \;\; \cdots \;\; \mathbf{x}_n],$$
it follows from partitioned multiplication that
$$P^TP = I$$
That is,
$$P^T = P^{-1}$$
Such a matrix P is called an orthogonal matrix.
² The roots of the equation |λI − A| = 0 are called the eigenvalues of matrix A. A detailed discussion is given in Section 5.6.
³ Refer to [105] for other valid vector and matrix norms.
Consider a set of m vectors {x₁, x₂, …, xₘ}, each of which has n components. If there exists a set of m scalars αᵢ, at least one of which is not zero, which satisfies
$$\alpha_1\mathbf{x}_1 + \alpha_2\mathbf{x}_2 + \cdots + \alpha_m\mathbf{x}_m = \mathbf{0},$$
then the set of vectors {xᵢ} is said to be linearly dependent.
Any set of vectors {xᵢ} which is not linearly dependent is said to be linearly independent. That is, if
$$\alpha_1\mathbf{x}_1 + \alpha_2\mathbf{x}_2 + \cdots + \alpha_m\mathbf{x}_m = \mathbf{0}$$
implies that each αᵢ = 0, then the {xᵢ} are linearly independent vectors.
Consider the set of m vectors {xᵢ}, each of which has n components, with m ≠ n. Assume that this set is linearly dependent, so that
$$\alpha_1\mathbf{x}_1 + \alpha_2\mathbf{x}_2 + \cdots + \alpha_m\mathbf{x}_m = \mathbf{0}$$
with at least one nonzero αᵢ.
Premultiplying both sides of this equation by $\mathbf{x}_i^T$ gives a set of m simultaneous equations:
$$\alpha_1\mathbf{x}_i^T\mathbf{x}_1 + \alpha_2\mathbf{x}_i^T\mathbf{x}_2 + \cdots + \alpha_m\mathbf{x}_i^T\mathbf{x}_m = 0; \quad i = 1, 2, \ldots, m$$
These equations can be written in the matrix form as
$$\begin{bmatrix} \mathbf{x}_1^T\mathbf{x}_1 & \mathbf{x}_1^T\mathbf{x}_2 & \cdots & \mathbf{x}_1^T\mathbf{x}_m \\ \mathbf{x}_2^T\mathbf{x}_1 & \mathbf{x}_2^T\mathbf{x}_2 & \cdots & \mathbf{x}_2^T\mathbf{x}_m \\ \vdots & & & \vdots \\ \mathbf{x}_m^T\mathbf{x}_1 & \mathbf{x}_m^T\mathbf{x}_2 & \cdots & \mathbf{x}_m^T\mathbf{x}_m \end{bmatrix}\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_m \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad (5.7a)$$
or $G\boldsymbol{\alpha} = \mathbf{0}$
In Section 5.8, we will require a test for the linear independence of the rows of a matrix whose elements
are functions of time.
Consider a matrix
$$F(t) = \begin{bmatrix} f_{11}(t) & f_{12}(t) & \cdots & f_{1m}(t) \\ \vdots & & & \vdots \\ f_{n1}(t) & f_{n2}(t) & \cdots & f_{nm}(t) \end{bmatrix} = \begin{bmatrix} \mathbf{f}_1(t) \\ \vdots \\ \mathbf{f}_n(t) \end{bmatrix}$$
fᵢ(t), i = 1, …, n, are the n row vectors of the matrix F; each vector has m components.
The scalar product of two 1 × m vector functions fᵢ(t) and fⱼ(t) on [t₀, t₁] is, by definition,
$$\langle \mathbf{f}_i, \mathbf{f}_j \rangle = \int_{t_0}^{t_1} \mathbf{f}_i(t)\,\mathbf{f}_j^T(t)\,dt$$
The set of n row-vector functions {f₁(t), …, fₙ(t)} is linearly dependent if there exists a set of n scalars αᵢ, at least one of which is not zero, which satisfies
$$\alpha_1\mathbf{f}_1(t) + \alpha_2\mathbf{f}_2(t) + \cdots + \alpha_n\mathbf{f}_n(t) = \mathbf{0}_{1\times m}$$
or
$$\alpha_1\begin{bmatrix} f_{11}(t) \\ \vdots \\ f_{1m}(t) \end{bmatrix} + \alpha_2\begin{bmatrix} f_{21}(t) \\ \vdots \\ f_{2m}(t) \end{bmatrix} + \cdots + \alpha_n\begin{bmatrix} f_{n1}(t) \\ \vdots \\ f_{nm}(t) \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}$$
Equivalently, the n rows fᵢ(t) are linearly dependent if
$$\boldsymbol{\alpha}^TF(t) = \mathbf{0} \qquad (5.8a)$$
for some
$$\boldsymbol{\alpha} = \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} \neq \mathbf{0}$$
The Grammian matrix of the functions fᵢ(t), i = 1, …, n, where fᵢ(t) is the ith row of the matrix F(t), is given by (refer to Eqns (5.7))
$$G(t_0, t_1) = \int_{t_0}^{t_1} F(t)\,F^T(t)\,dt$$
The rows of F(t) are linearly independent on [t₀, t₁] if, and only if, the Grammian matrix is nonsingular.
5.2.3 Quadratic Forms
An expression such as
$$V(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n}\sum_{j=1}^{n} q_{ij}\,x_ix_j$$
involving terms of second degree in $x_i$ and $x_j$, is known as a quadratic form in n variables. Such scalar-valued functions are extensively used in stability analysis and modern control design.
In practice, one is usually concerned with quadratic forms V(x1, x2, … , xn) that assume only real values.
When xi, xj, and qij are all real, the value of V is real, and the quadratic form can be expressed in the
vector-matrix notation as
$$V(\mathbf{x}) = [x_1 \;\; x_2 \;\; \cdots \;\; x_n]\begin{bmatrix} q_{11} & q_{12} & \cdots & q_{1n} \\ q_{21} & q_{22} & \cdots & q_{2n} \\ \vdots & & & \vdots \\ q_{n1} & q_{n2} & \cdots & q_{nn} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
or $V(\mathbf{x}) = \mathbf{x}^TQ\mathbf{x}$
Any real square matrix Q may be written as the sum of a symmetric matrix Qs and a skew-symmetric
matrix Qsk, as shown below.
Let
$$Q = Q_s + Q_{sk}$$
Taking the transpose of both sides,
$$Q^T = Q_s^T + Q_{sk}^T = Q_s - Q_{sk}$$
Solving for $Q_s$ and $Q_{sk}$, we obtain
$$Q_s = \frac{Q + Q^T}{2}; \quad Q_{sk} = \frac{Q - Q^T}{2}$$
For a real matrix Q, the quadratic function V(x) is, therefore, given by
$$V(\mathbf{x}) = \mathbf{x}^TQ\mathbf{x} = \mathbf{x}^T(Q_s + Q_{sk})\mathbf{x} = \mathbf{x}^TQ_s\mathbf{x} + \tfrac{1}{2}\mathbf{x}^TQ\mathbf{x} - \tfrac{1}{2}\mathbf{x}^TQ^T\mathbf{x}$$
Since $\mathbf{x}^TQ\mathbf{x} = (\mathbf{x}^TQ\mathbf{x})^T = \mathbf{x}^TQ^T\mathbf{x}$, we have
$$V(\mathbf{x}) = \mathbf{x}^TQ_s\mathbf{x}$$
Thus, in quadratic function V(x), only the symmetric portion of Q is of importance. We shall, therefore,
tacitly assume that Q is symmetric.
It may be noted that real vector x and real matrix Q do not constitute necessary requirements for V(x)
to be real. V(x) can be real when Q and x are possibly complex; it can easily be established that for a
Hermitian matrix Q,
V(x) = x*Qx
has real values.
Our discussion will be restricted to real symmetric matrices Q.
If, for all x ≠ 0,
(i) $V(\mathbf{x}) = \mathbf{x}^TQ\mathbf{x} \ge 0$, then V(x) is called a positive semidefinite function and Q is called a positive semidefinite matrix;
(ii) $V(\mathbf{x}) = \mathbf{x}^TQ\mathbf{x} > 0$, then V(x) is called a positive definite function and Q is called a positive definite matrix;
(iii) $V(\mathbf{x}) = \mathbf{x}^TQ\mathbf{x} \le 0$, then V(x) is called a negative semidefinite function and Q is called a negative semidefinite matrix; and
(iv) $V(\mathbf{x}) = \mathbf{x}^TQ\mathbf{x} < 0$, then V(x) is called a negative definite function and Q is called a negative definite matrix.
The necessary and sufficient conditions for V(x) to be positive definite (summarized in Table 5.1) are that all the successive principal minors of Q be positive, i.e.,
$$q_{11} > 0; \quad \begin{vmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{vmatrix} > 0; \quad \begin{vmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{vmatrix} > 0; \; \ldots; \; |Q| > 0 \qquad (5.9)$$
The necessary and sufficient conditions for V(x) to be positive semidefinite are that Q is singular and all
the other principal minors of Q are non-negative.
V(x) is negative definite if [–V(x)] is positive definite. Similarly, V(x) is negative semidefinite if [–V(x)]
is positive semidefinite.
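The test (5.9) is straightforward to automate. The sketch below (our illustration, assuming NumPy) applies the successive-principal-minor test to a symmetric Q, and cross-checks with the eigenvalues of Q, all of which must be positive for positive definiteness:

```python
import numpy as np

def is_positive_definite(Q):
    """Sylvester's test, Eqn. (5.9): all leading principal minors > 0."""
    n = Q.shape[0]
    return all(np.linalg.det(Q[:k, :k]) > 0 for k in range(1, n + 1))

Q = np.array([[2., -1, 0], [-1, 2, -1], [0, -1, 2]])
print(is_positive_definite(Q))              # True
print(np.all(np.linalg.eigvalsh(Q) > 0))    # True, consistent check
```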
Fig. 5.1 State feedback structure: the reference input r passes through gain k_R to form (together with the feedback signals) the plant input u; the state variables x₁, x₂, …, xₙ are fed back through gains k₁, k₂, …, kₙ; y is the plant output
Analysis of systems with the input-output model will not give any information about the behavior of the
internal variables for different operating conditions. For a better understanding of the system behavior,
its mathematical model should include the internal variables also. The state variable techniques of system representation and analysis make the internal variables an integral part of the system model, and thus
variables are included in the system representation, let us examine the modeling process by means of a
simple example.
Consider the network shown in Fig. 5.2a. The set of voltages and currents associated with all the branches
of the network at any time t, represents the status of the network at that time. Application of Kirchhoff’s
current law at nodes 1 and 2 of the network gives the following equations:
$$\frac{de_1}{dt} + 2\frac{de_2}{dt} = \frac{u - e_1}{2}$$
$$2\frac{de_3}{dt} = 2\frac{de_2}{dt}$$
Application of Kirchhoff’s voltage law to the loop consisting of the three capacitors yields
e1(t) – e2(t) = e3(t)
Fig. 5.2 (a) An electrical network: input u, a 2 Ω resistance, and capacitors of 1 F, 2 F and 2 F with voltages e₁, e₂ and e₃, respectively; (b) a mechanical system: mass M restrained by spring K and damper B (zero friction), driven by force F(t), with displacement y(t) and velocity v(t)
All other voltage and current variables associated with the network are related to e1, e2, e3 and input u,
through linear algebraic equations. This means that their values (at all instants of time) can be obtained
from the knowledge of the network variables e1, e2, e3 and the input variable u, merely by linear
combinations. In other words, the reduced set {e1(t), e2(t), e3(t)} of network variables with the input
variable u(t), completely represents the status of the network at time t.
For the purpose of finding a mathematical model to represent a system, we will naturally choose a
minimal set of variables that describes the status of the system. Such a set would be obtained when none
of the selected variables is related to other variables and the input, through linear algebraic equations.
A little consideration shows that there is redundancy in the set {e₁(t), e₂(t), e₃(t)} for the network of
Fig. 5.2a; a set of two variables, say, {e1(t), e2(t)}, with the input u(t) represents the network completely
at time t.
The input-output model of the system of Fig. 5.2b is
$$M\frac{d^2y(t)}{dt^2} + B\frac{dy(t)}{dt} + Ky(t) = F(t)$$
An alternative form of the input-output model is the transfer function model:
$$\frac{Y(s)}{F(s)} = \frac{1}{Ms^2 + Bs + K}$$
The set of forces, velocities, and displacements, associated with all the elements of the mechanical
network at any time t, represents the status of the network at that time. A little consideration shows that
values of all the system variables (at all instants of time) can be obtained from the knowledge of the
system variables y(t) and v(t), and the input variable F(t), merely by linear combinations. The dynamics
of y(t) and v(t) are given by the following first-order differential equations:
$$\frac{dy(t)}{dt} = v(t)$$
$$\frac{dv(t)}{dt} = -\frac{K}{M}y(t) - \frac{B}{M}v(t) + \frac{1}{M}F(t)$$
The variables {y(t), v(t)} are, therefore, the state variables of the system of Fig. 5.2b, and the two
first-order differential equations given above, are the state equations of the system. Using standard
symbols for state variables and input variable, we can write the state equations as
$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = -\frac{K}{M}x_1 - \frac{B}{M}x_2 + \frac{1}{M}u$$
where $x_1(t) \triangleq y(t)$; $x_2(t) \triangleq v(t)$; $u(t) \triangleq F(t)$.
Defining y(t) as the output variable, the output equation becomes
y = x1
We can now appreciate the following definitions:
State
The state of a dynamic system is the smallest set of variables (called state variables) such that
the knowledge of these variables at t = t0, together with the knowledge of the input for t ≥ t0, completely
determines the behavior of the system for any time t ≥ t0.
State Vector
If n state variables x1, x2, … , xn, are needed to completely describe the behavior of a given system, then
these n state variables can be considered the n components of a vector x. Such a vector is called a state
vector.
State Space
The n-dimensional space whose coordinate axes consist of the x1-axis, x2-axis, … , xn-axis, is called a
state space.
At any time t0, the state vector (and hence the state of the system) defines a point in the state space. As
time progresses and the system state changes, a set of points will be defined. This set of points, the locus
of the tip of the state vector as time progresses, is called the state trajectory of the system.
State space and state trajectory in two-dimensional cases are referred to as the phase plane and phase
trajectory, respectively.
⁴ Chapter 2 of reference [155]
may be linearized about a selected operating point using the multivariable form of the Taylor series:
$$f(x_1, x_2, x_3, \ldots) = f(x_{10}, x_{20}, \ldots) + \left[\frac{\partial f}{\partial x_1}\right]_{x_{10}, x_{20}, \ldots}(x_1 - x_{10}) + \left[\frac{\partial f}{\partial x_2}\right]_{x_{10}, x_{20}, \ldots}(x_2 - x_{20}) + \cdots \qquad (5.11c)$$
One of the advantages of state variable formulation is that an extremely compact vector-matrix notation
can be used for the mathematical model. Using the laws of matrix algebra, it becomes much less
cumbersome to manipulate the equations.
In the vector-matrix notation, we may write Eqns (5.10) as
$$\begin{bmatrix} \dot{x}_1(t) \\ \vdots \\ \dot{x}_n(t) \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}\begin{bmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{bmatrix} + \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}u(t); \quad \begin{bmatrix} x_1(t_0) \\ \vdots \\ x_n(t_0) \end{bmatrix} = \begin{bmatrix} x_1^0 \\ \vdots \\ x_n^0 \end{bmatrix} \qquad (5.12a)$$
$$y(t) = [c_1 \;\; c_2 \;\; \cdots \;\; c_n]\begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix} + d\,u(t) \qquad (5.12b)$$
In compact notation, Eqns (5.12) may be expressed as
$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + \mathbf{b}u(t); \quad \mathbf{x}(t_0) \triangleq \mathbf{x}^0 \quad \text{: State equation} \qquad (5.13a)$$
$$y(t) = \mathbf{c}\mathbf{x}(t) + du(t) \quad \text{: Output equation} \qquad (5.13b)$$
where
x(t) = n × 1 state vector of the nth-order dynamic system
u(t) = system input
y(t) = defined output
A = n × n matrix
b = n × 1 column matrix
c = 1 × n row matrix
d = scalar, representing direct coupling between input and output (direct coupling is rare in control systems, i.e., usually d = 0)
Example 5.1 Two common applications of dc motors are in speed and position control systems.
Figure 5.3 gives the basic block diagram of a speed control system. A separately excited dc motor drives
the load. A dc tachogenerator is attached to the motor shaft; speed signal is fed back and the error signal
is used to control the armature voltage of the motor.
Fig. 5.3 Speed control system: the reference signal e_r is compared with the tachogenerator feedback; the controller output u is the armature voltage of the dc motor driving the load, whose shaft speed is ω
In the following, we derive the plant model for the speed control system. A separately excited dc motor
with armature voltage control, is shown in Fig. 5.4.
The voltage loop equation is
$$u(t) = L_a\frac{di_a(t)}{dt} + R_ai_a(t) + e_b(t) \qquad (5.14a)$$
where
La = inductance of armature winding (henrys);
Ra = resistance of armature winding (ohms);
ia = armature current (amperes);
eb = back emf (volts); and
u = applied armature voltage (volts).
Fig. 5.4 Separately excited dc motor with armature voltage control
$$\frac{di_a(t)}{dt} = -\frac{R_a}{L_a}i_a(t) - \frac{K_b}{L_a}\omega(t) + \frac{1}{L_a}u(t) \qquad (5.15)$$
$$\frac{d\omega(t)}{dt} = \frac{K_T}{J}i_a(t) - \frac{B}{J}\omega(t)$$
The obvious choice for state variables is x₁(t) = ω(t) and x₂(t) = i_a(t). The output variable is y(t) = ω(t).
The plant model of the speed control system, organized into the vector-matrix notation, is given below:
$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} -\dfrac{B}{J} & \dfrac{K_T}{J} \\ -\dfrac{K_b}{L_a} & -\dfrac{R_a}{L_a} \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{L_a} \end{bmatrix}u(t)$$
$$y(t) = x_1(t)$$
Let us assign numerical values to the system parameters. With the parameter values (5.16) substituted, the plant model of the speed control system becomes
$$\dot{\mathbf{x}}(t) = \begin{bmatrix} -1 & 1 \\ -1 & -10 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ 10 \end{bmatrix}u(t); \quad y(t) = [1 \;\; 0]\,\mathbf{x}(t) \qquad (5.17)$$
⁵ In MKS units, K_b = K_T; Section 3.2 of reference [155].
Example 5.2
Figure 5.5 gives the basic block diagram of a position control system. The controlled variable is now the
angular position θ(t) of the motor shaft:
$$\frac{d\theta(t)}{dt} = \omega(t) \qquad (5.18)$$
Fig. 5.5 Position control system: the reference signal corresponding to the desired position is compared with the position sensor output; the error signal drives the controller, whose output u (armature voltage) controls the dc motor and load; the controlled output is the shaft position θ
With the state variables x₁(t) = θ(t), x₂(t) = ω(t) and x₃(t) = i_a(t), the plant model becomes
$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -\dfrac{B}{J} & \dfrac{K_T}{J} \\ 0 & -\dfrac{K_b}{L_a} & -\dfrac{R_a}{L_a} \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \dfrac{1}{L_a} \end{bmatrix}u(t)$$
$$y(t) = x_1(t)$$
⁶ These parameters have been chosen for computational convenience.
For the system parameters given by (5.16), the plant model for position control system becomes
x(t) = Ax(t) + bu(t)
(5.19)
y(t) = cx(t)
where
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & -1 & -10 \end{bmatrix}; \quad \mathbf{b} = \begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix}; \quad \mathbf{c} = [1 \;\; 0 \;\; 0]$$
In Examples 5.1 and 5.2 discussed above, the selected state variables are the physical quantities of the
systems which can be measured.
We will see in Chapter 7 that in a physical system, in addition to output, other state variables could be
utilized for the purpose of feedback. The implementation of design with state variable feedback becomes
straightforward if the state variables are available for feedback. The choice of physical variables of
a system as state variables, therefore, helps in the implementation of design. Another advantage of
selecting physical variables for state variable formulation is that the solution of state equation gives time
variation of variables which have direct relevance to the physical system.
5.3.3 Equivalence Transformations
It frequently happens that the state variables used in the original formulation of the dynamics of a system
are not as convenient as another set of state variables. Instead of having to reformulate the system
dynamics, it is possible to transform the set {A, b, c, d} of the original formulation (5.13), to a new set
$\{\bar{A}, \bar{\mathbf{b}}, \bar{\mathbf{c}}, \bar{d}\}$. The change of variables is represented by a linear transformation
$$\mathbf{x} = P\bar{\mathbf{x}} \qquad (5.20a)$$
where $\bar{\mathbf{x}}$ is the state vector in the new formulation, and $\mathbf{x}$ is the state vector in the original formulation. It is assumed that the transformation matrix P is a nonsingular n × n matrix, so that we can always write
$$\bar{\mathbf{x}} = P^{-1}\mathbf{x} \qquad (5.20b)$$
We assume, moreover, that P is a constant matrix.
The original dynamics are expressed by
$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + \mathbf{b}u(t); \quad \mathbf{x}(t_0) \triangleq \mathbf{x}^0 \qquad (5.21a)$$
and the output by
$$y(t) = \mathbf{c}\mathbf{x}(t) + du(t) \qquad (5.21b)$$
Substitution of x, as given by Eqn. (5.20a), into these equations gives
$$P\dot{\bar{\mathbf{x}}}(t) = AP\bar{\mathbf{x}}(t) + \mathbf{b}u(t)$$
$$y(t) = \mathbf{c}P\bar{\mathbf{x}}(t) + du(t)$$
or
$$\dot{\bar{\mathbf{x}}}(t) = \bar{A}\bar{\mathbf{x}}(t) + \bar{\mathbf{b}}u(t); \quad \bar{\mathbf{x}}(t_0) = P^{-1}\mathbf{x}(t_0) \qquad (5.22a)$$
$$y(t) = \bar{\mathbf{c}}\bar{\mathbf{x}}(t) + \bar{d}u(t) \qquad (5.22b)$$
with
$$\bar{A} = P^{-1}AP, \quad \bar{\mathbf{b}} = P^{-1}\mathbf{b}, \quad \bar{\mathbf{c}} = \mathbf{c}P, \quad \bar{d} = d$$
In the next section, we will prove that both the linear systems (5.21) and (5.22) have identical output
responses for the same input. The linear system (5.22) is said to be equivalent to the linear system (5.21),
and P is called an equivalence or similarity transformation.
It is obvious that there exists an infinite number of equivalent systems, since the transformation matrix P can be arbitrarily chosen. Some transformations have been extensively used for the purposes of analysis and design. Five such special (canonical) transformations will be used in the present and the next two chapters.
As an illustration, consider the speed control system of Example 5.1, with new state variables defined by
$$\bar{\mathbf{x}} = \begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} = \begin{bmatrix} x_1 \\ -x_1 + x_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$
We can express the velocity x₁(t) and the armature current x₂(t) in terms of the variables $\bar{x}_1(t)$ and $\bar{x}_2(t)$:
$$\mathbf{x} = P\bar{\mathbf{x}} \qquad (5.23)$$
with
$$P = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$$
Using Eqns (5.22) and (5.17), we obtain the following state variable model for the system of Fig. 5.4, in terms of the transformed state vector $\bar{\mathbf{x}}(t)$:
$$\dot{\bar{\mathbf{x}}}(t) = \bar{A}\bar{\mathbf{x}}(t) + \bar{\mathbf{b}}u(t)$$
$$y(t) = \bar{\mathbf{c}}\bar{\mathbf{x}}(t) \qquad (5.24)$$
where
$$\bar{A} = P^{-1}AP = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} -1 & 1 \\ -1 & -10 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -11 & -11 \end{bmatrix}$$
$$\bar{\mathbf{b}} = P^{-1}\mathbf{b} = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 10 \end{bmatrix} = \begin{bmatrix} 0 \\ 10 \end{bmatrix}$$
$$\bar{\mathbf{c}} = \mathbf{c}P = [1 \;\; 0]\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = [1 \;\; 0]$$
$$\bar{x}_1(t_0) = x_1(t_0); \quad \bar{x}_2(t_0) = -x_1(t_0) + x_2(t_0)$$
Equations (5.24) give an alternative state variable model of the system previously represented by Eqns (5.17). $\bar{\mathbf{x}}(t)$ and $\mathbf{x}(t)$ both qualify to be state vectors of the given system (the two vectors individually characterize the system completely at time t), and the output y(t), as we shall see shortly, is uniquely determined from either of the models (5.17) and (5.24). The state variable model (5.24) is thus equivalent to the model (5.17), and the matrix P given by Eqn. (5.23) is an equivalence or similarity transformation.
The state variable model given by Eqns (5.24) is in a canonical (special) form. In Chapter 7, we will use this form of model for pole-placement design by state feedback.
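The arithmetic of the transformation is easily verified numerically. A short check (ours, assuming NumPy) of Ā = P⁻¹AP, b̄ = P⁻¹b and c̄ = cP for this example:

```python
import numpy as np

A = np.array([[-1., 1], [-1, -10]])   # model (5.17)
b = np.array([[0.], [10]])
c = np.array([[1., 0]])
P = np.array([[1., 0], [1, 1]])       # transformation (5.23)

Pinv = np.linalg.inv(P)
print(Pinv @ A @ P)    # [[0, 1], [-11, -11]]  -> A-bar of (5.24)
print(Pinv @ b)        # [[0], [10]]           -> b-bar
print(c @ P)           # [[1, 0]]              -> c-bar
```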
5.3.4 Simulation Diagrams
An important advantage of state variable formulation is that it is a straightforward method to obtain a
simulation diagram for the state equations. This is extremely useful if we wish to use computer simulation
methods to study dynamic systems. In the following, we give an example of analog simulation diagram.
Examples of digital simulation will appear in Chapter 6.
For brevity, we consider a second-order system:
$$\dot{x}_1(t) = a_{11}x_1(t) + a_{12}x_2(t) + b_1u(t)$$
$$\dot{x}_2(t) = a_{21}x_1(t) + a_{22}x_2(t) + b_2u(t) \qquad (5.25)$$
$$y(t) = c_1x_1(t) + c_2x_2(t)$$
It is evident that if we knew $\dot{x}_1$ and $\dot{x}_2$, we could obtain $x_1$ and $x_2$ by simple integration. Hence $\dot{x}_1$ and $\dot{x}_2$ should be the inputs to two integrators, and the corresponding integrator outputs are $x_1$ and $x_2$. This leaves only the problem of forming $\dot{x}_1$ and $\dot{x}_2$ for use as inputs to the integrators; in fact, these signals are already specified by the state equations. The completed state diagram is shown in Fig. 5.6. This diagram is essentially an analog-computer program for the given system.
Fig. 5.6 Simulation diagram for the second-order system (5.25): two integrators (with initial conditions x₁⁰ and x₂⁰) produce x₁ and x₂; the integrator inputs are formed from u through gains b₁, b₂ and from the states through gains a₁₁, a₁₂, a₂₁, a₂₂; the output y is formed through gains c₁ and c₂
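The same diagram can be 'programmed' digitally: replacing each integrator with a numerical integration step turns Eqns (5.25) into a simulation loop. A minimal sketch (ours; forward-Euler integration, with illustrative parameter values not taken from the text):

```python
import numpy as np

# Forward-Euler simulation of the second-order system (5.25).
A = np.array([[0., 1], [-2, -3]])       # [a_ij]; illustrative values
b = np.array([0., 1])                   # [b_i]
c = np.array([1., 0])                   # [c_i]
h, T = 0.001, 5.0                       # step size and final time
x = np.array([1., 0])                   # initial state x0

for _ in range(int(T / h)):
    u = 1.0                             # unit-step input
    x = x + h * (A @ x + b * u)         # each integrator advanced one step
print(c @ x)                            # y(T); approaches 0.5 for these values
```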
Taking the Laplace transform of Eqns (5.26), we obtain
$$sX(s) - \mathbf{x}^0 = AX(s) + \mathbf{b}U(s)$$
$$Y(s) = \mathbf{c}X(s) + dU(s)$$
where
$$X(s) \triangleq \mathcal{L}[\mathbf{x}(t)]; \quad U(s) \triangleq \mathcal{L}[u(t)]; \quad Y(s) \triangleq \mathcal{L}[y(t)]$$
Manipulation of these equations gives
$$(sI - A)X(s) = \mathbf{x}^0 + \mathbf{b}U(s); \quad I \text{ is the } n \times n \text{ identity matrix}$$
or
$$X(s) = (sI - A)^{-1}\mathbf{x}^0 + (sI - A)^{-1}\mathbf{b}U(s) \qquad (5.27a)$$
$$Y(s) = \mathbf{c}(sI - A)^{-1}\mathbf{x}^0 + [\mathbf{c}(sI - A)^{-1}\mathbf{b} + d]U(s) \qquad (5.27b)$$
Equations (5.27) are algebraic equations. If $\mathbf{x}^0$ and U(s) are known, X(s) and Y(s) can be computed from these equations.
In the case of a zero initial state (i.e., x0 = 0), the input-output behavior of the system (5.26) is determined
entirely by the transfer function
$$\frac{Y(s)}{U(s)} = G(s) = \mathbf{c}(sI - A)^{-1}\mathbf{b} + d \qquad (5.28)$$
We can express the inverse of the matrix (sI − A) as
$$(sI - A)^{-1} = \frac{(sI - A)^+}{|sI - A|} \qquad (5.29)$$
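Equation (5.28) can be evaluated numerically at any complex frequency without forming the matrix inverse explicitly; solving the linear system (sI − A)w = b is the better-conditioned route. A sketch (ours, assuming NumPy):

```python
import numpy as np

def G(A, b, c, d, s):
    """Evaluate G(s) = c (sI - A)^{-1} b + d at one complex point s."""
    n = A.shape[0]
    w = np.linalg.solve(s * np.eye(n) - A, b)   # w = (sI - A)^{-1} b
    return (c @ w + d).item()

# frequency-response sample at s = j1.0 for an illustrative model
A = np.array([[0., 1], [-11, -11]])
b = np.array([[0.], [10]]); c = np.array([[1., 0]]); d = 0.0
print(G(A, b, c, d, 1j))
```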
where
|sI − A| = determinant of the matrix (sI − A)
(sI − A)⁺ = adjoint of the matrix (sI − A)
Using Eqn. (5.29), the transfer function G(s) given by Eqn. (5.28) can be written as
$$G(s) = \frac{\mathbf{c}(sI - A)^+\mathbf{b}}{|sI - A|} + d \qquad (5.30)$$
For a general nth-order matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},$$
the matrix (sI − A) has the following appearance:
$$(sI - A) = \begin{bmatrix} s - a_{11} & -a_{12} & \cdots & -a_{1n} \\ -a_{21} & s - a_{22} & \cdots & -a_{2n} \\ \vdots & & & \vdots \\ -a_{n1} & -a_{n2} & \cdots & s - a_{nn} \end{bmatrix}$$
If we imagine calculating det(sI − A), we see that one of the terms will be the product of the diagonal elements of (sI − A):
$$(s - a_{11})(s - a_{22})\cdots(s - a_{nn}) = s^n + a_1's^{n-1} + \cdots + a_n',$$
a polynomial of degree n with a leading coefficient of unity. There will be other terms coming from the off-diagonal elements of (sI − A), but none will have a degree as high as n. Thus |sI − A| will be of the following form:
$$|sI - A| = \Delta(s) = s^n + a_1s^{n-1} + \cdots + a_n \qquad (5.31)$$
where the $a_i$ are constant scalars.
This is known as the characteristic polynomial of the matrix A. It plays a vital role in the dynamic
behavior of the system. The roots of this polynomial are called the characteristic roots or eigenvalues
of matrix A. These roots, as we shall see in Section 5.7, determine the essential features of the unforced
dynamic behavior of the system (5.26).
The adjoint of an n × n matrix is itself an n × n matrix, whose elements are the cofactors of the original matrix. Each cofactor is obtained by computing the determinant of the matrix that remains when a row and a column of the original matrix are deleted. It thus follows that each element in (sI − A)⁺ is a polynomial in s of maximum degree (n − 1). The adjoint of (sI − A) can, therefore, be expressed as
$$(sI - A)^+ = Q_1s^{n-1} + Q_2s^{n-2} + \cdots + Q_{n-1}s + Q_n \qquad (5.32)$$
where the $Q_i$ are constant n × n matrices.
We can express the transfer function G(s) given by Eqn. (5.30) in the following form:
$$G(s) = \frac{\mathbf{c}[Q_1s^{n-1} + Q_2s^{n-2} + \cdots + Q_{n-1}s + Q_n]\mathbf{b}}{s^n + a_1s^{n-1} + \cdots + a_{n-1}s + a_n} + d \qquad (5.33)$$
G(s) is thus a rational function of s. When d = 0, the degree of the numerator polynomial of G(s) is strictly less than the degree of the denominator polynomial and, therefore, the resulting transfer function is a strictly proper transfer function. When d ≠ 0, the degree of the numerator polynomial of G(s) will be equal to the degree of the denominator polynomial, giving a proper transfer function. Further,
$$d = \lim_{s\to\infty} G(s) \qquad (5.34)$$
From Eqns (5.31) and (5.33), we observe that the characteristic polynomial of matrix A of the system (5.26) is the same as the denominator polynomial of the corresponding transfer function G(s). If there are no cancellations between the numerator and denominator polynomials of G(s) in Eqn. (5.33), the eigenvalues of matrix A are the same as the poles of G(s). We will take up this aspect of the correspondence between state variable models and transfer functions in Section 5.9. It will be proved that for a completely controllable and completely observable state variable model, the eigenvalues of matrix A are the same as the poles of the corresponding transfer function.
5.4.1 Invariance Properties
It is recalled that the state variable model for a system is not unique, but depends on the choice of a set
of state variables. A transformation
$$\mathbf{x}(t) = P\bar{\mathbf{x}}(t); \quad P \text{ a nonsingular matrix} \qquad (5.35)$$
results in the following alternative state variable model (refer to Eqns (5.22)) for the system (5.26):
$$\dot{\bar{\mathbf{x}}}(t) = \bar{A}\bar{\mathbf{x}}(t) + \bar{\mathbf{b}}u(t); \quad \bar{\mathbf{x}}(t_0) = P^{-1}\mathbf{x}(t_0) \qquad (5.36a)$$
$$y(t) = \bar{\mathbf{c}}\bar{\mathbf{x}}(t) + du(t) \qquad (5.36b)$$
where $\bar{A} = P^{-1}AP$, $\bar{\mathbf{b}} = P^{-1}\mathbf{b}$, $\bar{\mathbf{c}} = \mathbf{c}P$
The definition of new set of internal state variables should, evidently, not affect the eigenvalues or
input-output behavior. This may be verified by evaluating the characteristic polynomial and the transfer
function of the transformed system.
(i) $|sI - \bar{A}| = |sI - P^{-1}AP| = |sP^{-1}P - P^{-1}AP| = |P^{-1}(sI - A)P| = |P^{-1}|\,|sI - A|\,|P| = |sI - A|$ (5.37)
(ii) System output in response to input u(t) is given by the transfer function
$$\bar{G}(s) = \bar{\mathbf{c}}(sI - \bar{A})^{-1}\bar{\mathbf{b}} + \bar{d} = \mathbf{c}P(sI - P^{-1}AP)^{-1}P^{-1}\mathbf{b} + d$$
$$= \mathbf{c}P(sP^{-1}P - P^{-1}AP)^{-1}P^{-1}\mathbf{b} + d = \mathbf{c}P[P^{-1}(sI - A)P]^{-1}P^{-1}\mathbf{b} + d$$
$$= \mathbf{c}PP^{-1}(sI - A)^{-1}PP^{-1}\mathbf{b} + d = \mathbf{c}(sI - A)^{-1}\mathbf{b} + d = G(s) \qquad (5.38)$$
(iii) System output in response to initial state $\bar{\mathbf{x}}(t_0)$ is given by (refer to Eqn. (5.27b))
$$\bar{\mathbf{c}}(sI - \bar{A})^{-1}\bar{\mathbf{x}}(t_0) = \mathbf{c}P(sI - P^{-1}AP)^{-1}P^{-1}\mathbf{x}(t_0) = \mathbf{c}(sI - A)^{-1}\mathbf{x}(t_0) \qquad (5.39)$$
The input-output behavior of the system (5.26) is, thus, invariant under the transformation (5.35).
Example 5.4
Consider the position control system of Example 5.2. The plant model of the system is reproduced
below:
$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + \mathbf{b}u(t)$$
$$y(t) = \mathbf{c}\mathbf{x}(t) \qquad (5.40)$$
with
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & -1 & -10 \end{bmatrix}; \quad \mathbf{b} = \begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix}; \quad \mathbf{c} = [1 \;\; 0 \;\; 0]$$
The characteristic polynomial of matrix A is
$$|sI - A| = \begin{vmatrix} s & -1 & 0 \\ 0 & s+1 & -1 \\ 0 & 1 & s+10 \end{vmatrix} = s(s^2 + 11s + 11)$$
The transfer function
$$G(s) = \frac{Y(s)}{U(s)} = \frac{\mathbf{c}(sI - A)^+\mathbf{b}}{|sI - A|} = \frac{[1\;\;0\;\;0]\begin{bmatrix} s^2+11s+11 & s+10 & 1 \\ 0 & s(s+10) & s \\ 0 & -s & s(s+1) \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix}}{s(s^2 + 11s + 11)} = \frac{10}{s(s^2 + 11s + 11)} \qquad (5.41)$$
Alternatively, we can draw the state diagram of the plant model in signal-flow graph form and from there,
obtain the transfer function using Mason’s gain formula. For the plant model (5.40), the state diagram is
shown in Fig. 5.7. Application of Mason's gain formula⁷ yields
$$\frac{Y(s)}{U(s)} = G(s) = \frac{10s^{-3}}{1 - (-10s^{-1} - s^{-1} - s^{-2}) + 10s^{-2}} = \frac{10}{s^3 + 11s^2 + 11s} = \frac{10}{s(s^2 + 11s + 11)}$$
Fig. 5.7 State diagram (signal-flow graph form) of the plant model (5.40)
5.4.2 The Resolvent Algorithm
The matrix
$$\Phi(s) = (sI - A)^{-1} = \frac{(sI - A)^+}{|sI - A|} \qquad (5.42)$$
is known in the mathematical literature as the resolvent of A. The resolvent matrix Φ(s) can be expressed in the following form (refer to Eqns (5.31) and (5.32)):
$$\Phi(s) = (sI - A)^{-1} = \frac{Q_1s^{n-1} + Q_2s^{n-2} + \cdots + Q_{n-1}s + Q_n}{s^n + a_1s^{n-1} + \cdots + a_{n-1}s + a_n} \qquad (5.43)$$
⁷ Section 2.12 of reference [155]
where the $Q_i$ are constant (n × n) matrices and the $a_j$ are constant scalars.
An interesting and useful relationship for the coefficient matrices $Q_i$ of the adjoint matrix can be obtained by multiplying both sides of Eqn. (5.43) by |sI − A|(sI − A). The result is
$$|sI - A|\,I = (sI - A)(Q_1s^{n-1} + Q_2s^{n-2} + \cdots + Q_{n-1}s + Q_n)$$
or
$$s^nI + a_1s^{n-1}I + \cdots + a_nI = s^nQ_1 + s^{n-1}(Q_2 - AQ_1) + \cdots + s(Q_n - AQ_{n-1}) - AQ_n$$
Equating the coefficients of $s^i$ on both sides gives
$$Q_1 = I$$
$$Q_2 = AQ_1 + a_1I$$
$$Q_3 = AQ_2 + a_2I \qquad (5.44a)$$
$$\vdots$$
$$Q_n = AQ_{n-1} + a_{n-1}I$$
$$0 = AQ_n + a_nI$$
We have thus determined that the leading coefficient of (sI – A)+ is the identity matrix, and that the
subsequent coefficients can be obtained recursively. The last equation in (5.44a) is redundant, but can be
used as a check when these recursion equations are used as the basis of a numerical algorithm.
An algorithm based on Eqns (5.44a) requires the coefficients $a_i$ (i = 1, 2, ..., n) of the characteristic polynomial. Fortunately, the determination of these coefficients can be included in the algorithm, for it can be shown that⁸
$$a_i = -\frac{1}{i}\,\mathrm{tr}(AQ_i); \quad i = 1, 2, \ldots, n \qquad (5.44b)$$
where tr(M), the trace of M, is the sum of all the diagonal elements of the matrix M.
The algorithm given by Eqns (5.44), called the resolvent algorithm, is convenient for hand calculation, and is also easy to implement on a digital computer.
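As an illustration of its digital implementation, the recursion (5.44) can be coded in a few lines. The sketch below is ours (assuming NumPy); it returns the coefficient matrices Qᵢ and the scalars aᵢ, and uses the redundant last equation of (5.44a) as the numerical check suggested above:

```python
import numpy as np

def resolvent(A):
    """Resolvent algorithm, Eqns (5.44).
    Returns Q = [Q1..Qn] (matrix coefficients of (sI - A)^+)
    and a = [a1..an] (characteristic-polynomial coefficients)."""
    n = A.shape[0]
    Q = [np.eye(n)]                       # Q1 = I
    a = []
    for i in range(1, n + 1):
        a_i = -np.trace(A @ Q[-1]) / i    # a_i = -(1/i) tr(A Q_i)   (5.44b)
        a.append(a_i)
        if i < n:
            Q.append(A @ Q[-1] + a_i * np.eye(n))  # Q_{i+1} = A Q_i + a_i I
    # numerical check: A Q_n + a_n I must vanish (last line of (5.44a))
    assert np.allclose(A @ Q[-1] + a[-1] * np.eye(n), 0, atol=1e-9)
    return Q, a

A = np.array([[0., 1, 0], [0, -1, 1], [0, -1, -10]])
Q, a = resolvent(A)
print(a)   # [11.0, 11.0, 0.0], matching Example 5.5
```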
Example 5.5
Here we again compute (sI − A)⁻¹, which appeared in Example 5.4, but this time using the resolvent algorithm (5.44).
$$Q_1 = I, \quad a_1 = -\mathrm{tr}(A) = 11$$
$$Q_2 = A + a_1I = \begin{bmatrix} 11 & 1 & 0 \\ 0 & 10 & 1 \\ 0 & -1 & 1 \end{bmatrix}; \quad a_2 = -\tfrac{1}{2}\,\mathrm{tr}(AQ_2) = 11$$
$$Q_3 = AQ_2 + a_2I = \begin{bmatrix} 11 & 10 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}; \quad a_3 = -\tfrac{1}{3}\,\mathrm{tr}(AQ_3) = 0$$
As a numerical check, we see that the relation
$$0 = AQ_3 + a_3I$$
is satisfied. Therefore,
$$(sI - A)^{-1} = \Phi(s) = \frac{Q_1s^2 + Q_2s + Q_3}{s^3 + a_1s^2 + a_2s + a_3} = \frac{1}{s(s^2 + 11s + 11)}\begin{bmatrix} s^2+11s+11 & s+10 & 1 \\ 0 & s(s+10) & s \\ 0 & -s & s(s+1) \end{bmatrix}$$
⁸ The proof of relation (5.44b) is quite involved and will not be presented here. Refer to [108].
Using the resolvent algorithm, we develop here a fundamental property of the characteristic equation. To this end, we write, from Eqns (5.44a),
$$Q_2 = A + a_1I$$
$$Q_3 = AQ_2 + a_2I = A^2 + a_1A + a_2I$$
$$\vdots$$
$$Q_n = A^{n-1} + a_1A^{n-2} + \cdots + a_{n-1}I$$
$$AQ_n = A^n + a_1A^{n-1} + \cdots + a_{n-1}A = -a_nI$$
Therefore,
$$A^n + a_1A^{n-1} + \cdots + a_{n-1}A + a_nI = 0 \qquad (5.45)$$
This well-known result is the Cayley–Hamilton theorem. Note that this equation is the same as the characteristic equation
$$s^n + a_1s^{n-1} + \cdots + a_{n-1}s + a_n = 0 \qquad (5.46)$$
with the scalar $s^i$ in the latter replaced by the matrix $A^i$ (i = 1, 2, …, n).
Thus, another way of stating the Cayley–Hamilton theorem is as follows: every square matrix satisfies its own characteristic equation.
Later, we will use the resolvent algorithm and the Cayley–Hamilton theorem for evaluation of the state transition matrix required for the solution of the state equations.
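A quick numerical illustration of the theorem (ours; numpy.poly returns the characteristic-polynomial coefficients of a matrix, and the matrix of Examples 5.4 and 5.5 is reused here for concreteness):

```python
import numpy as np

# Numerical check of the Cayley-Hamilton theorem (5.45).
A = np.array([[0., 1., 0.],
              [0., -1., 1.],
              [0., -1., -10.]])
coeffs = np.poly(A)          # [1, a1, a2, ..., an] of |sI - A|
n = A.shape[0]
# Evaluate A^n + a1 A^{n-1} + ... + an I; this should be the zero matrix.
CH = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
print(np.allclose(CH, np.zeros((n, n))))   # True
```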
5.5.1 First Companion Form
Our development starts with a transfer function of the form
$$\frac{Z(s)}{U(s)} = \frac{1}{s^n + a_1s^{n-1} + \cdots + a_n} \qquad (5.50)$$
which can be written as
$$(s^n + a_1s^{n-1} + \cdots + a_n)Z(s) = U(s)$$
The corresponding differential equation is
$$p^nz(t) + a_1p^{n-1}z(t) + \cdots + a_nz(t) = u(t)$$
where
$$p^kz(t) \triangleq \frac{d^kz(t)}{dt^k}$$
Solving for the highest derivative of z(t), we obtain
$$p^nz(t) = -a_1p^{n-1}z(t) - a_2p^{n-2}z(t) - \cdots - a_nz(t) + u(t) \qquad (5.51)$$
Now consider a chain of n integrators, as shown in Fig. 5.8. Suppose that the output of the last integrator is z(t); then the output of the just-previous integrator is pz = dz/dt, and so forth. The output from the first integrator is $p^{n-1}z(t)$, and thus the input to this integrator is $p^nz(t)$. This leaves only the problem of obtaining $p^nz(t)$ for use as input to the first integrator. In fact, this is already specified by Eqn. (5.51). Realization of this equation is shown in Fig. 5.8.
Fig. 5.8 Realization of Eqn. (5.51): a chain of n integrators whose outputs (from left to right) are pⁿ⁻¹z, pⁿ⁻²z, …, pz, z; the input pⁿz to the first integrator is formed by subtracting the weighted outputs a₁pⁿ⁻¹z + a₂pⁿ⁻²z + ⋯ + aₙz from u
Having developed a realization of the simple transfer function (5.50), we are now in a position to consider
the more general transfer function (5.49). We decompose this transfer function into two parts, as shown
in Fig. 5.9. The output Y(s) can be written as
$$Y(s) = (b_0s^n + b_1s^{n-1} + \cdots + b_n)Z(s) \qquad (5.52a)$$
where Z(s) is given by
$$\frac{Z(s)}{U(s)} = \frac{1}{s^n + a_1s^{n-1} + \cdots + a_n} \qquad (5.52b)$$
Fig. 5.9 Decomposition of the transfer function (5.49) into a cascade of its denominator part (5.52b) and its numerator part (5.52a)
A realization of the transfer function (5.52b) has already been developed. Figure 5.8 shows this
realization. The output of the last integrator is z(t) and the inputs to the integrators in the chain—from
the last to the first— are the n successive derivatives of z(t).
Realization of the transfer function (5.52a) is now straightforward. The output
$$y(t) = b_0p^nz(t) + b_1p^{n-1}z(t) + \cdots + b_nz(t),$$
is the sum of the scaled versions of the inputs to the n integrators. Figure 5.10 shows complete realization
of the transfer function (5.49). All that remains to be done is to write the corresponding differential
equations.
To get one state variable model of the system, we identify the output of each integrator in Fig. 5.10 with
a state variable starting at the right and proceeding to the left. The corresponding differential equations,
using this identification of state variables, are
$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = x_3$$
$$\vdots \qquad (5.53a)$$
$$\dot{x}_{n-1} = x_n$$
$$\dot{x}_n = -a_nx_1 - a_{n-1}x_2 - \cdots - a_1x_n + u$$
The output equation is found by careful examination of the block diagram of Fig. 5.10. Note that there
are two paths from the output of each integrator to the system output—one path upward through the box
labeled bi, and a second path down through the box labeled ai and hence, through the box labeled b0. As
a consequence,
$$y = (b_n - a_nb_0)x_1 + (b_{n-1} - a_{n-1}b_0)x_2 + \cdots + (b_1 - a_1b_0)x_n + b_0u \qquad (5.53b)$$
Fig. 5.10 Complete realization (first companion form) of the transfer function (5.49)
The state and output equations (5.53), organized in vector-matrix form, are given below.
x(t) = Ax(t) + bu(t)
(5.54)
y(t) = cx(t) + du(t)
with
$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}; \quad \mathbf{b} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$
$$\mathbf{c} = [b_n - a_nb_0 \;\;\; b_{n-1} - a_{n-1}b_0 \;\;\; \cdots \;\;\; b_1 - a_1b_0]; \quad d = b_0$$
A matrix with this structure is said to be in companion form. For this reason, we identify the realization (5.54) as the companion-form realization of the transfer function (5.49). We call this the first companion form; another companion form, the second companion form, is discussed in the following section.
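Because of this fixed pattern, the first companion realization can be written down mechanically from the transfer-function coefficients. A sketch (ours, assuming NumPy; the argument conventions a = [a₁, …, aₙ] and b = [b₀, b₁, …, bₙ] are our own):

```python
import numpy as np

def first_companion(a, b):
    """First companion-form realization (5.54) of
    G(s) = (b0 s^n + ... + bn) / (s^n + a1 s^{n-1} + ... + an)."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)          # superdiagonal of 1s
    A[-1, :] = -np.asarray(a)[::-1]     # last row: -an ... -a1
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    b0, rest = b[0], np.asarray(b[1:])
    # c_i entries (b_k - a_k b0), ordered per Eqn. (5.53b)
    C = (rest - np.asarray(a) * b0)[::-1].reshape(1, n)
    D = np.array([[b0]])
    return A, B, C, D

# Example 5.6 data: G(s) = (s + 3)/(s^3 + 9s^2 + 24s + 20)
A, B, C, D = first_companion([9, 24, 20], [0, 0, 1, 3])
print(A[-1], C)   # [-20. -24. -9.]  [[3. 1. 0.]]
```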
5.5.2 Second Companion Form
In the first companion form, the coefficients of the denominator of the transfer function appear in one of
the rows of the A matrix. There is another companion form in which the coefficients appear in a column
of the A matrix. This can be obtained by writing Eqn. (5.49) as
$$(s^n + a_1s^{n-1} + \cdots + a_n)Y(s) = (b_0s^n + b_1s^{n-1} + \cdots + b_n)U(s)$$
or
$$s^n[Y(s) - b_0U(s)] + s^{n-1}[a_1Y(s) - b_1U(s)] + \cdots + [a_nY(s) - b_nU(s)] = 0$$
On dividing by $s^n$ and solving for Y(s), we obtain
$$Y(s) = b_0U(s) + \frac{1}{s}[b_1U(s) - a_1Y(s)] + \cdots + \frac{1}{s^n}[b_nU(s) - a_nY(s)] \qquad (5.55)$$
Note that $1/s^n$ is the transfer function of a chain of n integrators. Realization of $\frac{1}{s^n}[b_nU(s) - a_nY(s)]$ requires a chain of n integrators, with input $[b_nu - a_ny]$ to the first integrator in the chain from left to right. Realization of $\frac{1}{s^{n-1}}[b_{n-1}U(s) - a_{n-1}Y(s)]$ requires a chain of (n − 1) integrators, with input $[b_{n-1}u - a_{n-1}y]$ to the second integrator in the chain from left to right, and so forth. This immediately leads to the structure shown in Fig. 5.11. The signal y is fed back to each of the integrators in the chain, and the signal u is fed forward. Thus the signal $[b_nu - a_ny]$ passes through n integrators; the signal $[b_{n-1}u - a_{n-1}y]$ passes through (n − 1) integrators, and so forth, to complete the realization of Eqn. (5.55). The structure retains the ladder-like shape of the first companion form, but the feedback paths are in different directions.
We can now write differential equations for the realization given by Fig. 5.11. To get one state variable
model, we identify the output of each integrator in Fig. 5.11 with a state variable starting at the left and
proceeding to the right. The corresponding differential equations are
Fig. 5.11 Realization of the transfer function (5.49) in the second companion form: a chain of n integrators with outputs x₁, x₂, …, xₙ; the input u is fed forward through the gains bₙ, bₙ₋₁, …, b₁, b₀, and the output y is fed back to each integrator through the gains aₙ, aₙ₋₁, …, a₁
$$\dot{x}_1 = -a_n(x_n + b_0u) + b_nu$$
$$\dot{x}_2 = x_1 - a_{n-1}(x_n + b_0u) + b_{n-1}u$$
$$\vdots$$
$$\dot{x}_n = x_{n-1} - a_1(x_n + b_0u) + b_1u$$
and the output equation is
$$y = x_n + b_0u$$
The state and output equations, organized in vector-matrix form, are given below.
x(t) = Ax(t) + bu(t)
(5.56)
y(t) = cx(t) + du(t)
with
$$A = \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_n \\ 1 & 0 & \cdots & 0 & -a_{n-1} \\ 0 & 1 & \cdots & 0 & -a_{n-2} \\ \vdots & & & & \vdots \\ 0 & 0 & \cdots & 1 & -a_1 \end{bmatrix}; \quad \mathbf{b} = \begin{bmatrix} b_n - a_nb_0 \\ b_{n-1} - a_{n-1}b_0 \\ \vdots \\ b_1 - a_1b_0 \end{bmatrix}$$
$$\mathbf{c} = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]; \quad d = b_0$$
Compare the A, b, and c matrices of the second companion form with those of the first. We observe that the A, b, and c matrices of one companion form correspond to the transpose of the A, c, and b matrices, respectively, of the other.
There are many benefits derived from the companion forms of state variable models. One obvious benefit
is that both the companion forms lend themselves easily to simple analog computer models. Both the
companion forms also play an important role in pole-placement design through state feedback. This will
be discussed in Chapter 7.
5.5.3 Jordan Canonical Form
In the two canonical forms (5.54) and (5.56), the coefficients of the denominator of the transfer function
appear in one of the rows or columns of matrix A. In another of the canonical forms, the poles of
the transfer function form a string along the main diagonal of the matrix. This canonical form follows
directly from the partial fraction expansion of the transfer function.
The general transfer function under consideration is (refer to Eqn. (5.49))
$$G(s) = \frac{b_0s^n + b_1s^{n-1} + \cdots + b_n}{s^n + a_1s^{n-1} + \cdots + a_n}$$
By long division, G(s) can be written as
$$G(s) = b_0 + \frac{b_1's^{n-1} + b_2's^{n-2} + \cdots + b_n'}{s^n + a_1s^{n-1} + \cdots + a_n} = b_0 + G'(s)$$
The results are simplest when the poles of the transfer function are all distinct. The partial fraction
expansion of the transfer function, then has the form
$$G(s) = \frac{Y(s)}{U(s)} = b_0 + \frac{r_1}{s - \lambda_1} + \frac{r_2}{s - \lambda_2} + \cdots + \frac{r_n}{s - \lambda_n} \qquad (5.57)$$
The coefficients $r_i$ (i = 1, 2, …, n) are the residues of the transfer function G′(s) at the corresponding poles at s = λᵢ (i = 1, 2, …, n). In the form of Eqn. (5.57), the transfer function consists of a direct path with gain b₀, and n first-order transfer functions in parallel. A block diagram representation of Eqn. (5.57) is shown in Fig. 5.12. The gains corresponding to the residues have been placed at the outputs of the integrators. This is quite arbitrary; they could have been located on the input side or, indeed, split between the input and the output.
Fig. 5.12 Parallel realization of G(s), Eqn. (5.57): a direct path with gain b₀ and n first-order subsystems in parallel; the ith integrator has feedback gain λᵢ and its output xᵢ is weighted by the residue rᵢ; the weighted outputs and the direct path sum to y
Identifying the outputs of the integrators with the state variables results in the following state and output
equations:
x(t) = Lx(t) + bu(t)
(5.58)
y(t) = cx(t) + du(t)
with
$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}; \quad \mathbf{b} = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}; \quad \mathbf{c} = [r_1 \;\; r_2 \;\; \cdots \;\; r_n]; \quad d = b_0$$
It is observed that for this canonical state variable model, the matrix Λ is a diagonal matrix with the poles of G(s) as its diagonal elements. The unique decoupled nature of the canonical model is obvious from Eqns (5.58); the n first-order differential equations are independent of each other:
$$\dot{x}_i(t) = \lambda_ix_i(t) + u(t); \quad i = 1, 2, \ldots, n \qquad (5.59)$$
This decoupling feature, as we shall see later in this chapter, greatly helps in system analysis.
The block diagram representation of Fig. 5.12 can be turned into hardware only if all the poles at s = λ₁, λ₂, ..., λₙ are real. If they are complex, the feedback gains and the gains corresponding to the residues are complex. In this case, the representation must be considered as being purely conceptual; valid for theoretical studies, but not physically realizable. A realizable representation can be obtained by introducing an equivalence transformation.
Suppose that s = σ + jω, s = σ − jω and s = λ are the three poles of a transfer function. The residues at the pair of complex conjugate poles must themselves be complex conjugates. Partial fraction expansion of the transfer function, with a pair of complex conjugate poles and a real pole, has the form
$$G(s) = d + \frac{p + jq}{s - (\sigma + j\omega)} + \frac{p - jq}{s - (\sigma - j\omega)} + \frac{r}{s - \lambda}$$
A state variable model for this transfer function is given below (refer to Eqns (5.58)):
$$\dot{\mathbf{x}} = \Lambda\mathbf{x} + \mathbf{b}u$$
$$y = \mathbf{c}\mathbf{x} + du \qquad (5.60)$$
with
$$\Lambda = \begin{bmatrix} \sigma + j\omega & 0 & 0 \\ 0 & \sigma - j\omega & 0 \\ 0 & 0 & \lambda \end{bmatrix}; \quad \mathbf{b} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}; \quad \mathbf{c} = [p + jq \;\;\; p - jq \;\;\; r]$$
Introducing an equivalence transformation
$$\mathbf{x} = P\bar{\mathbf{x}}$$
with
$$P = \begin{bmatrix} 1/2 & -j1/2 & 0 \\ 1/2 & j1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
we obtain (refer to Eqns (5.22))
$$\dot{\bar{\mathbf{x}}}(t) = \bar{A}\bar{\mathbf{x}}(t) + \bar{\mathbf{b}}u(t)$$
$$y(t) = \bar{\mathbf{c}}\bar{\mathbf{x}}(t) + du(t) \qquad (5.61)$$
where
$$\bar{A} = P^{-1}\Lambda P = \begin{bmatrix} 1 & 1 & 0 \\ j & -j & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \sigma + j\omega & 0 & 0 \\ 0 & \sigma - j\omega & 0 \\ 0 & 0 & \lambda \end{bmatrix}\begin{bmatrix} 1/2 & -j1/2 & 0 \\ 1/2 & j1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \sigma & \omega & 0 \\ -\omega & \sigma & 0 \\ 0 & 0 & \lambda \end{bmatrix}$$
$$\bar{\mathbf{b}} = P^{-1}\mathbf{b} = \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}; \quad \bar{\mathbf{c}} = \mathbf{c}P = [p \;\; q \;\; r]$$
When the transfer function G(s) has repeated poles, the partial fraction expansion will not be as simple as Eqn. (5.57). Assume that G(s) has m distinct poles at s = λ₁, λ₂, …, λₘ of multiplicity n₁, n₂, …, nₘ, respectively; n = n₁ + n₂ + ⋯ + nₘ. That is, G(s) is of the form
$$G(s) = b_0 + \frac{b_1's^{n-1} + b_2's^{n-2} + \cdots + b_n'}{(s - \lambda_1)^{n_1}(s - \lambda_2)^{n_2}\cdots(s - \lambda_m)^{n_m}} \qquad (5.62)$$
The partial fraction expansion of G(s) is of the form
$$G(s) = b_0 + H_1(s) + \cdots + H_m(s) = \frac{Y(s)}{U(s)} \qquad (5.63)$$
where
$$H_i(s) = \frac{r_{i1}}{(s - \lambda_i)^{n_i}} + \frac{r_{i2}}{(s - \lambda_i)^{n_i-1}} + \cdots + \frac{r_{in_i}}{s - \lambda_i} = \frac{Y_i(s)}{U(s)}$$
The first term in Hᵢ(s) can be synthesized as a chain of nᵢ identical first-order systems, each having transfer function 1/(s − λᵢ). The second term can be synthesized by a chain of (nᵢ − 1) first-order systems, and so forth. The entire Hᵢ(s) can be synthesized by the system having the block diagram shown in Fig. 5.13.
Fig. 5.13 Realization of Hᵢ(s)
We can now write differential equations for the realization of Hi(s), given by Fig. 5.13. To get one state
variable formulation, we identify the output of each integrator with a state variable—starting at the right
and proceeding to the left. The corresponding differential equations are
x i1 = li xi1 + xi2
x i2 = li xi2 + xi3 (5.64a)
xini = li xini + u
and the output is given by
yi = ri1 xi1 + ri2 xi2 + + rini xini (5.64b)
In vector-matrix form, Eqns (5.64) become
$$\dot{\mathbf{x}}_i = \Lambda_i\mathbf{x}_i + \mathbf{b}_iu; \quad y_i = \mathbf{c}_i\mathbf{x}_i \qquad (5.65)$$
with
$$\Lambda_i = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_i \end{bmatrix}_{(n_i \times n_i)}; \quad \mathbf{b}_i = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}; \quad \mathbf{c}_i = [r_{i1} \;\; r_{i2} \;\; \cdots \;\; r_{in_i}]$$
Note that the matrix Λᵢ has two diagonals: the principal diagonal has the corresponding characteristic root (pole), and the superdiagonal has all 1s. In matrix theory, a matrix having this structure is said to be in Jordan form. For this reason, we identify the realization (5.65) as the Jordan canonical form.
According to Eqn. (5.63), the overall transfer function G(s) consists of a direct path with gain b₀ and m subsystems, each of which is in the Jordan canonical form, as shown in Fig. 5.14. The state vector of the overall system consists of the concatenation of the state vectors of each of the Jordan blocks:
$$\mathbf{x} = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \\ \vdots \\ \mathbf{x}_m \end{bmatrix} \qquad (5.66a)$$
Fig. 5.14 Overall Jordan-form realization of G(s), Eqn. (5.63): a direct path with gain b₀ in parallel with m subsystems ẋᵢ = Λᵢxᵢ + bᵢu, yᵢ = cᵢxᵢ (i = 1, …, m), whose outputs sum to y
Since there is no coupling between any of the subsystems, the Λ matrix of the overall system is 'block diagonal':
$$\Lambda = \begin{bmatrix} \Lambda_1 & 0 & \cdots & 0 \\ 0 & \Lambda_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & \Lambda_m \end{bmatrix} \qquad (5.66b)$$
where each of the submatrices Λᵢ is in the Jordan canonical form (5.65). The b and c matrices of the overall system are the concatenations of the bᵢ and cᵢ matrices, respectively, of each of the subsystems:
$$\mathbf{b} = \begin{bmatrix} \mathbf{b}_1 \\ \mathbf{b}_2 \\ \vdots \\ \mathbf{b}_m \end{bmatrix}; \quad \mathbf{c} = [\mathbf{c}_1 \;\; \mathbf{c}_2 \;\; \cdots \;\; \mathbf{c}_m]; \quad d = b_0 \qquad (5.66c)$$
The state variable model (5.58), derived for the case of distinct poles, is a special case of the Jordan canonical form (5.66), where each Jordan block is of 1 × 1 dimension.
Example 5.6
In the following, we obtain three different realizations for the transfer function
$$G(s) = \frac{s + 3}{s^3 + 9s^2 + 24s + 20} = \frac{Y(s)}{U(s)}$$
First Companion Form Note that the given G(s) is a strictly proper fraction; the realization will,
therefore, be of the form (5.48), i.e., the parameter d in the realization {A, b, c, d} is zero.
The state variable formulation in the first companion form, can be written just by inspection of the given
transfer function. Referring to Eqns (5.54), we obtain
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -20 & -24 & -9 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u$$
$$y = [3 \;\; 1 \;\; 0]\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
Figure 5.15a shows the state diagram in signal flow graph form.
Fig. 5.15 State diagrams for the realizations of G(s)
Jordan Canonical Form The given transfer function G(s) in factored form is
$$G(s) = \frac{s + 3}{(s + 2)^2(s + 5)}$$
Using partial fraction expansion, we obtain
$$G(s) = \frac{1/3}{(s + 2)^2} + \frac{2/9}{s + 2} + \frac{-2/9}{s + 5}$$
The Λ matrix of the state variable model in Jordan canonical form will be block-diagonal, consisting of two Jordan blocks (refer to Eqns (5.65)):
$$\Lambda_1 = \begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix}; \quad \Lambda_2 = [-5]$$
The corresponding bᵢ and cᵢ vectors are (refer to Eqns (5.65)):
$$\mathbf{b}_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad \mathbf{c}_1 = [\tfrac{1}{3} \;\; \tfrac{2}{9}]; \quad \mathbf{b}_2 = [1]; \quad \mathbf{c}_2 = [-\tfrac{2}{9}]$$
The state variable model of the given G(s) in Jordan canonical form is, therefore, given by (refer to Eqns (5.66))
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} -2 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}u$$
$$y = [\tfrac{1}{3} \;\; \tfrac{2}{9} \;\; -\tfrac{2}{9}]\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
Figure 5.15c shows the state diagram. We note that Jordan canonical state variables are not completely
decoupled. The decoupling is blockwise; state variables of one block are independent of state variables of
all other blocks. However, the state variables of one block, among themselves, are coupled; the coupling
is unique and simple.
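All the realizations of Example 5.6 must represent the same transfer function. A numerical spot-check (ours, assuming NumPy) evaluates c(sI − A)⁻¹b for the first companion and Jordan models at a few test values of s:

```python
import numpy as np

def G_of(A, b, c, s):
    """Evaluate c (sI - A)^{-1} b at a single complex point s."""
    n = A.shape[0]
    return (c @ np.linalg.solve(s * np.eye(n) - A, b)).item()

A1 = np.array([[0., 1, 0], [0, 0, 1], [-20, -24, -9]])   # first companion
b1 = np.array([[0.], [0], [1]]); c1 = np.array([[3., 1, 0]])

AJ = np.array([[-2., 1, 0], [0, -2, 0], [0, 0, -5]])     # Jordan form
bJ = np.array([[0.], [1], [1]]); cJ = np.array([[1/3, 2/9, -2/9]])

for s in [1.0, 2.5, 1j * 3.0]:
    g_direct = (s + 3) / (s**3 + 9 * s**2 + 24 * s + 20)
    assert np.isclose(G_of(A1, b1, c1, s), g_direct)
    assert np.isclose(G_of(AJ, bJ, cJ, s), g_direct)
print("both realizations match G(s)")
```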
5.6.1 Eigenvalues
For a general nth-order matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
the determinant
$$|\lambda I - A| = \begin{vmatrix} \lambda - a_{11} & -a_{12} & \cdots & -a_{1n} \\ -a_{21} & \lambda - a_{22} & \cdots & -a_{2n} \\ \vdots & & & \vdots \\ -a_{n1} & -a_{n2} & \cdots & \lambda - a_{nn} \end{vmatrix}$$
is called the characteristic polynomial of the matrix A. On expanding the determinant, we find that it is a polynomial of degree n:
$$|\lambda I - A| = \Delta(\lambda) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n$$
where the $a_i$ are constant scalars.
where ai are constant scalars.
The equation
$$\Delta(\lambda) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n = 0 \qquad (5.70)$$
is called the characteristic equation of the matrix A, and its n roots are called the characteristic roots, characteristic values, or eigenvalues of the matrix A. When A represents the dynamic matrix of a linear system, the eigenvalues determine the dynamic response of the system (the next section will establish this fact), and also turn out to be the poles of the corresponding transfer function (refer to Eqn. (5.31)).
Eigenvalues of a matrix A are invariant under an equivalence transformation (refer to Eqn. (5.37)), i.e.,
$$|\lambda I - A| = |\lambda I - P^{-1}AP|$$
for any nonsingular matrix P.
5.6.2 Eigenvectors
Consider an n × n matrix A with eigenvalues {λ₁, λ₂, …, λₙ}. We start with the assumption of distinct eigenvalues; later we will relax this assumption. State transformation to Jordan canonical form requires a transformation matrix P such that
$$P^{-1}AP = \Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} \qquad (5.71)$$
Example 5.7
The matrix
$$A = \begin{bmatrix} -4 & 1 & 0 \\ 0 & -3 & 1 \\ 0 & 0 & -2 \end{bmatrix}$$
has the characteristic equation
$$|\lambda I - A| = \begin{vmatrix} \lambda + 4 & -1 & 0 \\ 0 & \lambda + 3 & -1 \\ 0 & 0 & \lambda + 2 \end{vmatrix} = (\lambda + 4)(\lambda + 3)(\lambda + 2) = 0$$
Therefore, the eigenvalues of A are λ₁ = −2, λ₂ = −3 and λ₃ = −4.
Consider the set of homogeneous equations
$$(\lambda_1I - A)\mathbf{v}_1 = \mathbf{0}$$
or
$$\begin{bmatrix} 2 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} v_{11} \\ v_{21} \\ v_{31} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad (5.74)$$
It is easy to check that the rank of the matrix (λ₁I − A) is two, i.e.,
$$\rho(\lambda_1I - A) = 2$$
A highest-order array having a nonvanishing determinant is
$$\begin{bmatrix} 2 & -1 \\ 0 & 1 \end{bmatrix},$$
which is obtained from (λ₁I − A) by omitting the third row and the third column. Consequently, a set of linearly independent equations is
$$2v_{11} - v_{21} = 0$$
$$v_{21} = v_{31}$$
or
$$\begin{bmatrix} 2 & -1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} v_{11} \\ v_{21} \end{bmatrix} = \begin{bmatrix} 0 \\ v_{31} \end{bmatrix}$$
Therefore,
$$\begin{bmatrix} v_{11} \\ v_{21} \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 0 & 1 \end{bmatrix}^{-1}\begin{bmatrix} 0 \\ v_{31} \end{bmatrix} = \begin{bmatrix} v_{31}/2 \\ v_{31} \end{bmatrix}$$
There are three components in v₁ and two equations governing them; therefore, one of the three components can be arbitrarily chosen. For v₃₁ = 2, a solution to Eqn. (5.74) is
$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}$$
A different choice for v₃₁ leads to a different solution to Eqn. (5.74). In fact, this set of equations has infinite solutions, as demonstrated below. For v₃₁ = 2α (with α arbitrary), the solution is
$$\mathbf{v}_1 = \alpha\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}$$
Obviously, this solution is non-unique. However, all nontrivial solutions have a unique direction; they differ only in terms of a scalar multiplier. There is, thus, only one independent solution.
Corresponding to the eigenvalue λ₂ = −3, a linearly independent solution to the homogeneous equations
$$(\lambda_2I - A)\mathbf{v}_2 = \mathbf{0}$$
is given by
$$\mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$
Similarly, corresponding to the eigenvalue λ₃ = −4, we obtain
$$\mathbf{v}_3 = \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}$$
In general, the number of equations that the vector vᵢ in (5.73) has to obey is equal to ρ(λᵢI − A), where ρ(M) denotes the rank of matrix M. There are n components in vᵢ (n = number of columns of (λᵢI − A)); therefore, (n − ρ(λᵢI − A)) components of vᵢ can be arbitrarily chosen. Thus, the number of linearly independent solutions of the homogeneous equation (5.73) = [n − ρ(λᵢI − A)] = γ(λᵢI − A), where γ(M) denotes the nullity of matrix M.
We have the following answers to the two questions raised earlier with regard to Eqn. (5.73):
(i) For Eqn. (5.73) to have a nontrivial solution, the rank of (λᵢI − A) must be less than n or, equivalently, det(λᵢI − A) = 0. This condition is satisfied by virtue of the fact that λᵢ is an eigenvalue.
(ii) The number of linearly independent solutions to Eqn. (5.73) is equal to the nullity of (λᵢI − A).
The nullity of matrix (λᵢI − A) does not exceed the multiplicity of the eigenvalue λᵢ (refer to Lancaster and Tismenetsky [28] for proof of the result). Therefore, for a distinct eigenvalue λᵢ, there is one, and only one, linearly independent solution to Eqn. (5.73). This solution is called the eigenvector of A associated with the eigenvalue λᵢ.
Theorem 5.1 Let v₁, v₂, …, vₙ be the eigenvectors associated with the distinct eigenvalues λ₁, λ₂, …, λₙ, respectively, of matrix A. The vectors v₁, v₂, …, vₙ are linearly independent, and the nonsingular matrix
$$P = [\mathbf{v}_1 \;\; \mathbf{v}_2 \;\; \cdots \;\; \mathbf{v}_n]$$
transforms matrix A into the Jordan canonical form.
Example 5.8
Consider the matrix
$$A = \begin{bmatrix} -4 & 1 & 0 \\ 0 & -3 & 1 \\ 0 & 0 & -2 \end{bmatrix}$$
for which we found, in Example 5.7, the eigenvalues and eigenvectors to be
$$\lambda_1 = -2,\; \mathbf{v}_1 = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}; \quad \lambda_2 = -3,\; \mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}; \quad \lambda_3 = -4,\; \mathbf{v}_3 = \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}$$
The transformation matrix is
$$P = [\mathbf{v}_1 \;\; \mathbf{v}_2 \;\; \mathbf{v}_3] = \begin{bmatrix} 1 & 1 & 2 \\ 2 & 1 & 0 \\ 2 & 0 & 0 \end{bmatrix}$$
This gives
$$P^{-1}AP = \frac{1}{4}\begin{bmatrix} 0 & 0 & 2 \\ 0 & 4 & -4 \\ 2 & -2 & 1 \end{bmatrix}\begin{bmatrix} -4 & 1 & 0 \\ 0 & -3 & 1 \\ 0 & 0 & -2 \end{bmatrix}\begin{bmatrix} 1 & 1 & 2 \\ 2 & 1 & 0 \\ 2 & 0 & 0 \end{bmatrix} = \begin{bmatrix} -2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -4 \end{bmatrix} = \Lambda$$
which is a diagonal matrix (a special case of the Jordan canonical form) with the eigenvalues of A as its diagonal elements. In fact, Λ could be written down directly, without computing P⁻¹AP.
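With eigenvector routines available in standard software, the diagonalization of Example 5.8 is a two-line computation. A sketch (ours; note that numpy normalizes its eigenvectors, so its P differs from the one above by column scaling, which does not affect Λ):

```python
import numpy as np

A = np.array([[-4., 1, 0], [0, -3, 1], [0, 0, -2]])
lam, V = np.linalg.eig(A)          # eigenvalues and normalized eigenvectors
Lam = np.linalg.inv(V) @ A @ V     # P^{-1} A P
print(np.round(Lam, 10))           # diagonal matrix of the eigenvalues
# eigenvalue ordering may differ from the text's (-2, -3, -4)
```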
The eigenvectors can be computed by solving the set of linear algebraic equations. The method of Gauss elimination is a straightforward and powerful procedure for reducing systems of linear equations to a simple reduced form, easily solved by substitution (refer to Noble and Daniel [27]). High-quality software is available commercially; for example, the MATLAB system from The MathWorks [152].
In the following, we give an analytical procedure of computing the eigenvectors. This procedure is quite
useful for hand calculations.
Using the property (refer to Eqn. (5.3))
$$M\,\text{adj}\,M = |M|\,I$$
and letting M = (λᵢI − A) yields
$$(\lambda_iI - A)\,\text{adj}(\lambda_iI - A) = |\lambda_iI - A|\,I$$
Since |λᵢI − A| is the characteristic polynomial and λᵢ is an eigenvalue, this equation becomes
$$(\lambda_iI - A)\,\text{adj}(\lambda_iI - A) = 0 \qquad (5.79)$$
A comparison of Eqn. (5.78) with (5.79) shows that vᵢ is proportional to any nonzero column of adj(λᵢI − A).
Example 5.9
Consider the state variable model
$$\dot{\mathbf{x}} = A\mathbf{x} + \mathbf{b}u$$
$$y = \mathbf{c}\mathbf{x}$$
with
$$A = \begin{bmatrix} -9 & 1 & 0 \\ -26 & 0 & 1 \\ -24 & 0 & 0 \end{bmatrix}; \quad \mathbf{b} = \begin{bmatrix} 2 \\ 5 \\ 0 \end{bmatrix}; \quad \mathbf{c} = [1 \;\; 2 \;\; -1]$$
The characteristic equation
$$|\lambda I - A| = 0$$
yields the roots λ₁ = −2, λ₂ = −3, and λ₃ = −4.
$$\text{adj}(\lambda I - A) = \text{adj}\begin{bmatrix} \lambda + 9 & -1 & 0 \\ 26 & \lambda & -1 \\ 24 & 0 & \lambda \end{bmatrix} = \begin{bmatrix} \lambda^2 & \lambda & 1 \\ -26\lambda - 24 & \lambda^2 + 9\lambda & \lambda + 9 \\ -24\lambda & -24 & \lambda^2 + 9\lambda + 26 \end{bmatrix}$$
For λ₁ = −2, $\text{adj}(\lambda_1I - A) = \begin{bmatrix} 4 & -2 & 1 \\ 28 & -14 & 7 \\ 48 & -24 & 12 \end{bmatrix}$; $\mathbf{v}_1 = \begin{bmatrix} 1 \\ 7 \\ 12 \end{bmatrix}$
For λ₂ = −3, $\text{adj}(\lambda_2I - A) = \begin{bmatrix} 9 & -3 & 1 \\ 54 & -18 & 6 \\ 72 & -24 & 8 \end{bmatrix}$; $\mathbf{v}_2 = \begin{bmatrix} 1 \\ 6 \\ 8 \end{bmatrix}$
For λ₃ = −4, $\text{adj}(\lambda_3I - A) = \begin{bmatrix} 16 & -4 & 1 \\ 80 & -20 & 5 \\ 96 & -24 & 6 \end{bmatrix}$; $\mathbf{v}_3 = \begin{bmatrix} 1 \\ 5 \\ 6 \end{bmatrix}$
In each case, the columns of adj(λᵢI − A) are linearly related. In practice, it is necessary to calculate only one (nonzero) column of the adjoint matrix.
The transformation matrix is
$$P = [\mathbf{v}_1 \;\; \mathbf{v}_2 \;\; \mathbf{v}_3] = \begin{bmatrix} 1 & 1 & 1 \\ 7 & 6 & 5 \\ 12 & 8 & 6 \end{bmatrix}$$
State transformation
$$\mathbf{x} = P\bar{\mathbf{x}}$$
results in the following model (refer to Eqns (5.22)):
$$\dot{\bar{\mathbf{x}}} = \Lambda\bar{\mathbf{x}} + \bar{\mathbf{b}}u$$
$$y = \bar{\mathbf{c}}\bar{\mathbf{x}}$$
with
$$\Lambda = P^{-1}AP = -\frac{1}{2}\begin{bmatrix} -4 & 2 & -1 \\ 18 & -6 & 2 \\ -16 & 4 & -1 \end{bmatrix}\begin{bmatrix} -9 & 1 & 0 \\ -26 & 0 & 1 \\ -24 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ 7 & 6 & 5 \\ 12 & 8 & 6 \end{bmatrix} = \begin{bmatrix} -2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -4 \end{bmatrix}$$
$$\bar{\mathbf{b}} = P^{-1}\mathbf{b} = \begin{bmatrix} -1 \\ -3 \\ 6 \end{bmatrix}; \quad \bar{\mathbf{c}} = \mathbf{c}P = [3 \;\; 5 \;\; 5]$$
Case II: Some Eigenvalues are Multiple Roots of the Characteristic Equation For notational convenience, we assume that matrix A has an eigenvalue λ₁ of multiplicity n₁, and all other eigenvalues λ_{n₁+1}, …, λₙ are distinct, i.e.,
$$|\lambda I - A| = (\lambda - \lambda_1)^{n_1}(\lambda - \lambda_{n_1+1})\cdots(\lambda - \lambda_n)$$
Recall the result stated earlier: the nullity γ of matrix (λᵢI − A) does not exceed the multiplicity of λᵢ. Therefore,
$$1 \le \gamma(\lambda_1I - A) \le n_1$$
$$\gamma(\lambda_{n_1+1}I - A) = 1$$
$$\vdots$$
$$\gamma(\lambda_nI - A) = 1$$
We know that the number of linearly independent eigenvectors associated with an eigenvalue λᵢ is equal to the nullity γ of the matrix (λᵢI − A). Thus, when one or more eigenvalues is a repeated root of the characteristic equation, a full set of n linearly independent eigenvectors may, or may not, exist.
It is convenient to consider three subclassifications for Case II.
È l1 0 0 0 0˘
Í 0 l1 0 0 0 ˙˙
Í
Í ˙
–1 Í ˙
gives P AP = L = Í 0 0 l1 0 0˙
Í 0 0 0 ln1 + 1 0˙
Í ˙
Í ˙
Í 0 0 0 0 ln ˙˚
Î
Case II2: Nullity of $(\lambda_1 I - A) = 1$  For this case, there is only one eigenvector associated with $\lambda_1$, regardless of the multiplicity $n_1$. This eigenvector is given by the linearly independent solution of the vector equation
$(\lambda_1 I - A)v = 0$
The solution to this equation may be found as in Case I.
We have seen in Cases I and II1 that the transformation matrix P yields a diagonal matrix $\Lambda$ if, and only if, P has a set of n linearly independent eigenvectors. When the nullity of the matrix $(\lambda_1 I - A)$ is one, n linearly independent eigenvectors cannot be constructed and, therefore, the transformation to a diagonal matrix is not possible.
The simplest form to which a matrix A, having a multiple eigenvalue $\lambda_1$ of multiplicity $n_1$ with $\gamma(\lambda_1 I - A) = 1$ and all other eigenvalues distinct, can be reduced is the Jordan canonical form:
$\Lambda = \begin{bmatrix} \Lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_{n_1+1} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$
where the Jordan block $\Lambda_1$ is the $n_1 \times n_1$ matrix
$\Lambda_1 = \begin{bmatrix} \lambda_1 & 1 & 0 & \cdots & 0 \\ 0 & \lambda_1 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_1 \end{bmatrix}$
Case II3: $1 < \gamma(\lambda_1 I - A) < n_1$  For this case, there are $\gamma$ eigenvectors associated with $\lambda_1$. There will be one Jordan block for each eigenvector; that is, $\lambda_1$ will have $\gamma$ blocks associated with it. This case is just a combination of Cases II1 and II2; there is only one ambiguity—the knowledge of $n_1$ and $\gamma$ does not directly give information about the dimension of each of the Jordan blocks associated with $\lambda_1$.
Assume that $\lambda_1$ is a fourth-order root of the characteristic equation and $\gamma(\lambda_1 I - A) = 2$. The two eigenvectors associated with $\lambda_1$ satisfy
$(\lambda_1 I - A)v_a = 0,\quad (\lambda_1 I - A)v_b = 0$
To form the transformation matrix, we require two generalized eigenvectors—but it is still uncertain whether the generalized eigenvectors are both associated with $v_a$, or both with $v_b$, or one with each. That is, the two Jordan blocks could take one of the following forms:
$\Lambda_1 = \begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 1 \\ 0 & 0 & \lambda_1 \end{bmatrix},\quad \Lambda_2 = [\lambda_1]$
or
$\Lambda_1 = \begin{bmatrix} \lambda_1 & 1 \\ 0 & \lambda_1 \end{bmatrix},\quad \Lambda_2 = \begin{bmatrix} \lambda_1 & 1 \\ 0 & \lambda_1 \end{bmatrix}$
The first pair corresponds to the equations
$(\lambda_1 I - A)v_1 = 0$
$(\lambda_1 I - A)v_2 = -v_1$
$(\lambda_1 I - A)v_3 = -v_2$
$(\lambda_1 I - A)v_4 = 0$
The second pair corresponds to the equations
$(\lambda_1 I - A)v_1 = 0$
$(\lambda_1 I - A)v_2 = -v_1$
$(\lambda_1 I - A)v_3 = 0$
$(\lambda_1 I - A)v_4 = -v_3$
Ambiguities such as this can be resolved by a trial-and-error procedure.
An n-dimensional SISO system with m distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$, of multiplicity $n_1, n_2, \ldots, n_m$, respectively $\left(n = \sum_{i=1}^{m} n_i\right)$, has the following Jordan canonical representation:
$\dot{x} = \Lambda x + bu$
$y = cx + du$
where $\Lambda$ is a block diagonal matrix with Jordan blocks $\Lambda_1, \ldots, \Lambda_m$ corresponding to the eigenvalues $\lambda_1, \ldots, \lambda_m$, respectively, on its principal diagonal; each Jordan block $\Lambda_i$ corresponding to the eigenvalue $\lambda_i$ is again a block diagonal matrix with $\gamma(i)$ sub-blocks on its principal diagonal, $\gamma(i)$ being the number of linearly independent eigenvectors associated with the eigenvalue $\lambda_i$:
$\underset{(n \times n)}{\Lambda} = \begin{bmatrix} \Lambda_1 & 0 & \cdots & 0 \\ 0 & \Lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \Lambda_m \end{bmatrix}$
$\underset{(n_i \times n_i)}{\Lambda_i} = \begin{bmatrix} \Lambda_{1i} & 0 & \cdots & 0 \\ 0 & \Lambda_{2i} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \Lambda_{\gamma(i)i} \end{bmatrix};\quad i = 1, 2, \ldots, m$
$\Lambda_{ki} = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_i \end{bmatrix};\quad k = 1, 2, \ldots, \gamma(i)$
The topic of computation of eigenvectors and generalized eigenvectors for systems with multiple eigenvalues is much too detailed and specialized for this book to treat (refer to Gopal [105] and Brogan [106]). Over the years, experts have developed excellent general-purpose computer programs for the efficient and accurate determination of eigenvectors and generalized eigenvectors [152–154].
In this book, the usefulness of the transformation of state variable models to Jordan canonical form will
be illustrated through system examples having distinct eigenvalues.
5.7.1 Matrix Exponential
Functions of square matrices arise in connection with the solution of vector differential equations. Of immediate interest to us are matrix infinite series.
Consider the infinite series in a scalar variable x:
$e^x = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^k}{k!} + \cdots = \sum_{i=0}^{\infty} \frac{x^i}{i!}$  (5.82)
This series converges for all x; the corresponding matrix series $\sum_{i=0}^{\infty} \frac{1}{i!}A^i$ converges for all A. By analogy with the power series in Eqn. (5.82) for the ordinary exponential function, we adopt the following nomenclature:
If A is an $n \times n$ matrix, the matrix exponential of A is
$e^A \triangleq I + A + \frac{1}{2!}A^2 + \cdots + \frac{1}{k!}A^k + \cdots = \sum_{i=0}^{\infty} \frac{1}{i!}A^i$
The following matrix exponential will appear in the solution of state equations:
$e^{At} = I + At + \frac{1}{2!}A^2t^2 + \cdots + \frac{1}{k!}A^kt^k + \cdots = \sum_{i=0}^{\infty} \frac{1}{i!}A^it^i$  (5.83)
Differentiating term by term, we obtain
$\frac{d}{dt}e^{At} = A + A^2t + \frac{1}{2!}A^3t^2 + \cdots + \frac{1}{(k-1)!}A^kt^{k-1} + \cdots$
$= A\left[I + At + \frac{1}{2!}A^2t^2 + \cdots + \frac{1}{(k-1)!}A^{k-1}t^{k-1} + \cdots\right] = Ae^{At}$
$= \left[I + At + \frac{1}{2!}A^2t^2 + \cdots + \frac{1}{(k-1)!}A^{k-1}t^{k-1} + \cdots\right]A = e^{At}A$
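Truncation of the series (5.83) is the basis of one practical algorithm (see Section 6.3). A minimal Python sketch of the idea (NumPy/SciPy assumed; names and the truncation index are illustrative only), checked against SciPy's expm:

# Evaluate e^{At} by truncating the defining series at i = N.
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, N=30):
    term = np.eye(A.shape[0])        # i = 0 term
    total = term.copy()
    for i in range(1, N + 1):
        term = term @ (A * t) / i    # builds A^i t^i / i! recursively
        total += term
    return total

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
# difference from the library routine is negligibly small
print(np.max(np.abs(expm_series(A, 1.0) - expm(A * 1.0))))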
340 Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
5.7.2 Solution of the Homogeneous State Equation
The simplest form of the general differential equation (5.80) is the homogeneous, i.e., unforced equation
$\dot{x}(t) = Ax(t);\quad x(t_0) \triangleq x^0$  (5.88)
We assume a solution x(t) of the form
$x(t) = e^{At}k$  (5.89)
where $e^{At}$ is the matrix exponential function defined in Eqn. (5.83), and k is a suitably chosen constant vector.
The assumed solution is, in fact, the true solution, since it satisfies the differential equation (5.88), as is seen below:
$\dot{x}(t) = \frac{d}{dt}\left[e^{At}k\right] = \left[\frac{d}{dt}e^{At}\right]k$
Using property (5.87) of the matrix exponential, we obtain
$\dot{x}(t) = Ae^{At}k = Ax(t)$
To evaluate the constant vector k in terms of the known initial state $x(t_0)$, we substitute $t = t_0$ in Eqn. (5.89):
$x(t_0) = e^{At_0}k$
Using property (5.86) of the matrix exponential, we obtain
$k = (e^{At_0})^{-1}x(t_0) = e^{-At_0}x(t_0)$
Thus, the general solution to Eqn. (5.88) for the state x(t) at time t, given the state $x(t_0)$ at time $t_0$, is
$x(t) = e^{At}e^{-At_0}x(t_0) = e^{A(t-t_0)}x(t_0)$  (5.90a)
We have used the property (5.85) of the matrix exponential to express the solution in this form.
If the initial time $t_0 = 0$, i.e., the initial state $x^0$ is known at t = 0, we have from Eqn. (5.90a):
$x(t) = e^{At}x(0)$  (5.90b)
From Eqn. (5.90b), it is observed that the initial state $x(0) \triangleq x^0$ at t = 0 is driven to a state x(t) at time t. This transition in state is carried out by the matrix exponential $e^{At}$. Due to this property, $e^{At}$ is known as the state transition matrix, and is denoted by $\phi(t)$.
Properties of the matrix exponential, given earlier in Eqns (5.84)–(5.87), are restated below in terms of the state transition matrix $\phi(t)$.
(i) $\frac{d}{dt}\phi(t) = A\phi(t);\quad \phi(0) = I$
(ii) $\phi(t_2 - t_1)\phi(t_1 - t_0) = \phi(t_2 - t_0)$ for any $t_0$, $t_1$, $t_2$
This property of the state transition matrix is important, since it implies that a state transition process can be divided into a number of sequential transitions. The transition from $t_0$ to $t_2$,
$x(t_2) = \phi(t_2 - t_0)x(t_0)$
may thus be taken as the transition from $t_0$ to $t_1$, followed by the transition from $t_1$ to $t_2$:
$x(t_2) = \phi(t_2 - t_1)x(t_1) = \phi(t_2 - t_1)\phi(t_1 - t_0)x(t_0)$
The state transition matrix $\phi(t) = e^{At}$ of an $n \times n$ matrix A is given by the infinite series (5.83). The series converges for all A and all finite t. Hence, $e^{At}$ can be evaluated within prescribed accuracy by truncating the series at, say, i = N. An algorithm for evaluation of the matrix series is given in Section 6.3.
In the following, we discuss the commonly used methods for evaluating $e^{At}$ in closed form.
Example 5.10
Consider the system
$\dot{x} = \begin{bmatrix} 0 & 0 & -2 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \end{bmatrix}x;\quad x(0) = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$
By direct computation, we have
$(sI - A)^{-1} = \begin{bmatrix} s & 0 & 2 \\ 0 & s-1 & 0 \\ -1 & 0 & s-3 \end{bmatrix}^{-1} = \frac{(sI - A)^{+}}{|sI - A|}$
$|sI - A| = (s-1)^2(s-2);\quad (sI - A)^{+} = \begin{bmatrix} (s-1)(s-3) & 0 & -2(s-1) \\ 0 & (s-1)(s-2) & 0 \\ (s-1) & 0 & s(s-1) \end{bmatrix}$
$e^{At} = \mathcal{L}^{-1}\left[(sI - A)^{-1}\right] = \mathcal{L}^{-1}\begin{bmatrix} \frac{s-3}{(s-1)(s-2)} & 0 & \frac{-2}{(s-1)(s-2)} \\ 0 & \frac{1}{s-1} & 0 \\ \frac{1}{(s-1)(s-2)} & 0 & \frac{s}{(s-1)(s-2)} \end{bmatrix}$
$= \begin{bmatrix} 2e^t - e^{2t} & 0 & 2e^t - 2e^{2t} \\ 0 & e^t & 0 \\ -e^t + e^{2t} & 0 & 2e^{2t} - e^t \end{bmatrix}$
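A quick numerical spot-check of this closed-form result, under the assumption that NumPy/SciPy are available, may be run as follows:

# Check the closed-form e^{At} of Example 5.10 against scipy.linalg.expm.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 0.0, -2.0], [0.0, 1.0, 0.0], [1.0, 0.0, 3.0]])
t = 0.7
et, e2t = np.exp(t), np.exp(2 * t)
closed = np.array([[2*et - e2t, 0.0, 2*et - 2*e2t],
                   [0.0,        et,  0.0],
                   [-et + e2t,  0.0, 2*e2t - et]])
print(np.allclose(expm(A * t), closed))   # True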
A and $\Lambda$ are similar matrices; there exists a nonsingular transformation matrix P such that (refer to Eqns (5.22))
$\Lambda = P^{-1}AP$
Now
$P^{-1}e^{At}P = P^{-1}\left[I + At + \frac{1}{2!}A^2t^2 + \cdots\right]P = I + P^{-1}APt + \frac{1}{2!}P^{-1}A^2Pt^2 + \cdots$
$= I + P^{-1}APt + \frac{1}{2!}P^{-1}APP^{-1}APt^2 + \cdots = I + \Lambda t + \frac{1}{2!}\Lambda^2t^2 + \cdots = e^{\Lambda t}$
Thus, the matrices $e^{At}$ and $e^{\Lambda t}$ are similar. Since $\Lambda$ is diagonal, $e^{\Lambda t}$ is given by
$e^{\Lambda t} = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix}$
The matrix exponential $e^{At}$ of a matrix A with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ may, therefore, be evaluated using the following relation:
$e^{At} = Pe^{\Lambda t}P^{-1} = P\begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix}P^{-1}$  (5.93)
where P is a transformation matrix that transforms A into the diagonal form.
(For the general case wherein matrix A has multiple eigenvalues, refer to [105]. Also refer to Review
Example 5.3 given at the end of this chapter).
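A sketch of relation (5.93) in Python (NumPy/SciPy assumed; valid only when A has a full set of linearly independent eigenvectors, as is the case for distinct eigenvalues):

# e^{At} via eigendecomposition, Eqn. (5.93).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
lam, P = np.linalg.eig(A)                       # distinct eigenvalues assumed
t = 0.5
eAt = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)
print(np.allclose(eAt.real, expm(A * t)))       # True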
Example 5.11
Consider the system
$\dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}x;\quad x(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$
The characteristic equation for this system is
$|\lambda I - A| = \begin{vmatrix} \lambda & -1 \\ 2 & \lambda + 3 \end{vmatrix} = 0$
or $(\lambda + 1)(\lambda + 2) = 0$
Therefore, the eigenvalues of the system matrix A are
$\lambda_1 = -1,\quad \lambda_2 = -2$
Eigenvectors $v_1$ and $v_2$, corresponding to the eigenvalues $\lambda_1$ and $\lambda_2$ respectively, can be determined from the adjoint matrix $(\lambda I - A)^{+}$ (refer to Eqn. (5.79)):
$(\lambda I - A)^{+} = \begin{bmatrix} \lambda + 3 & 1 \\ -2 & \lambda \end{bmatrix}$
For $\lambda = \lambda_1 = -1$,
$(\lambda_1 I - A)^{+} = \begin{bmatrix} 2 & 1 \\ -2 & -1 \end{bmatrix};\quad v_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$
For $\lambda = \lambda_2 = -2$,
$(\lambda_2 I - A)^{+} = \begin{bmatrix} 1 & 1 \\ -2 & -2 \end{bmatrix};\quad v_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}$
The transformation matrix P that transforms A into diagonal form is
$P = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}$
The matrix exponential
$e^{At} = P\begin{bmatrix} e^{-t} & 0 \\ 0 & e^{-2t} \end{bmatrix}P^{-1} = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} e^{-t} & 0 \\ 0 & e^{-2t} \end{bmatrix}\begin{bmatrix} 2 & 1 \\ -1 & -1 \end{bmatrix} = \begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix}$
Consequently, the free response of the system is
$x(t) = e^{At}x(0) = \begin{bmatrix} e^{-t} - e^{-2t} \\ -e^{-t} + 2e^{-2t} \end{bmatrix}$
Consider now a matrix polynomial f(A) corresponding to a scalar infinite series. This matrix polynomial, which is of degree higher than the order of A, can be computed by consideration of the scalar polynomial
$f(\lambda) = \alpha_0 + \alpha_1\lambda + \alpha_2\lambda^2 + \cdots + \alpha_n\lambda^n + \alpha_{n+1}\lambda^{n+1} + \cdots$  (5.94b)
Dividing $f(\lambda)$ by the characteristic polynomial $\Delta(\lambda)$, we get
$\frac{f(\lambda)}{\Delta(\lambda)} = q(\lambda) + \frac{g(\lambda)}{\Delta(\lambda)}$  (5.95a)
where $g(\lambda)$ is the remainder polynomial of the following form:
$g(\lambda) = \beta_0 + \beta_1\lambda + \cdots + \beta_{n-1}\lambda^{n-1}$  (5.95b)
Equation (5.95a) may be written as
$f(\lambda) = q(\lambda)\Delta(\lambda) + g(\lambda)$  (5.96)
Assume that the $n \times n$ matrix A has n distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$:
$\Delta(\lambda_i) = 0;\quad i = 1, 2, \ldots, n$
If we evaluate $f(\lambda)$ in Eqn. (5.96) at the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, we have
$f(\lambda_i) = g(\lambda_i),\quad i = 1, 2, \ldots, n$  (5.97)
The coefficients $\beta_0, \beta_1, \ldots, \beta_{n-1}$ in Eqn. (5.95b) can be computed by solving the set of n simultaneous equations obtained by successively substituting $\lambda_1, \lambda_2, \ldots, \lambda_n$ in Eqn. (5.97).
Substituting A for $\lambda$ in Eqn. (5.96), we get
$f(A) = q(A)\Delta(A) + g(A)$
Since $\Delta(A)$ is identically zero by the Cayley–Hamilton theorem, it follows that
$f(A) = g(A) = \beta_0 I + \beta_1 A + \cdots + \beta_{n-1}A^{n-1}$
If A possesses an eigenvalue $\lambda_k$ of multiplicity $n_k$, then only one independent equation can be obtained by substituting $\lambda_k$ into Eqn. (5.97). The remaining $(n_k - 1)$ linear equations, which must be obtained in order to solve for the $\beta_i$'s, can be found by differentiating both sides of Eqn. (5.97).
Since $\left[\frac{d^j}{d\lambda^j}\Delta(\lambda)\right]_{\lambda=\lambda_k} = 0;\ j = 0, 1, \ldots, (n_k - 1)$,
it follows that
$\left[\frac{d^j}{d\lambda^j}f(\lambda)\right]_{\lambda=\lambda_k} = \left[\frac{d^j}{d\lambda^j}g(\lambda)\right]_{\lambda=\lambda_k};\quad j = 0, 1, \ldots, (n_k - 1)$
The formal procedure for evaluation of the matrix polynomial f(A) is given below.
(i) Compute $\Delta(\lambda) \triangleq |\lambda I - A|$.
(ii) Find the roots of $\Delta(\lambda) = 0$, say,
$\Delta(\lambda) = (\lambda - \lambda_1)^{n_1}(\lambda - \lambda_2)^{n_2} \cdots (\lambda - \lambda_m)^{n_m}$  (5.98a)
where $n_1 + n_2 + \cdots + n_m = n$. In other words, $\Delta(\lambda)$ has root $\lambda_i$ with multiplicity $n_i$. If $\lambda_i$ is a complex number, then its complex conjugate is also a root of $\Delta(\lambda)$.
(iii) Define the remainder polynomial of degree $(n - 1)$:
$g(\lambda) = \beta_0 + \beta_1\lambda + \cdots + \beta_{n-1}\lambda^{n-1}$  (5.98b)
(iv) Form the n equations
$\left[\frac{d^j}{d\lambda^j}f(\lambda)\right]_{\lambda=\lambda_i} = \left[\frac{d^j}{d\lambda^j}g(\lambda)\right]_{\lambda=\lambda_i};\quad j = 0, 1, \ldots, (n_i - 1);\ i = 1, 2, \ldots, m$  (5.98c)
(v) Solve for the n unknown parameters $\beta_0, \beta_1, \ldots, \beta_{n-1}$ from the n equations in Step (iv).
Then
$f(A) = g(A) = \beta_0 I + \beta_1 A + \cdots + \beta_{n-1}A^{n-1}$  (5.98d)
Example 5.12
Find $f(A) = A^{10}$ for
$A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}$
Solution  The characteristic polynomial is
$\Delta(\lambda) = |\lambda I - A| = \begin{vmatrix} \lambda & -1 \\ 2 & \lambda + 3 \end{vmatrix} = (\lambda + 1)(\lambda + 2)$
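A Python sketch (NumPy assumed; an illustrative aid, not the text's printed solution) that carries out Steps (iii)–(v) for this example: with $g(\lambda) = \beta_0 + \beta_1\lambda$, matching $f(\lambda) = \lambda^{10}$ at the two distinct eigenvalues gives $\beta_0 = -1022$, $\beta_1 = -1023$, and hence $A^{10} = \beta_0 I + \beta_1 A$.

# Cayley-Hamilton evaluation of A^10 for Example 5.12.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
lams = np.array([-1.0, -2.0])
V = np.vander(lams, 2, increasing=True)     # rows [1, lambda_i]
betas = np.linalg.solve(V, lams ** 10)      # beta0 = -1022, beta1 = -1023
A10 = betas[0] * np.eye(2) + betas[1] * A
print(A10)                                  # [[-1022, -1023], [2046, 2047]]
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))  # True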
The Cayley–Hamilton technique also allows us to solve the problem of evaluation of $e^{At}$, where A is a constant $n \times n$ matrix. Since the matrix power series
$e^{At} = I + At + \frac{A^2t^2}{2!} + \cdots + \frac{A^nt^n}{n!} + \cdots$
converges for all A and for all finite t, the matrix polynomial $f(A) = e^{At}$ can be expressed as a polynomial g(A) of degree $(n - 1)$. This is illustrated below with the help of an example.
Example 5.13
Consider the system
$\dot{x} = Ax$
with
$A = \begin{bmatrix} 0 & 0 & -2 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \end{bmatrix};\quad x(0) = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$
In the following, we evaluate the function
$f(A) = e^{At}$
using the Cayley–Hamilton technique.
The characteristic polynomial of matrix A is
$\Delta(\lambda) = |\lambda I - A| = \begin{vmatrix} \lambda & 0 & 2 \\ 0 & \lambda - 1 & 0 \\ -1 & 0 & \lambda - 3 \end{vmatrix} = (\lambda - 1)^2(\lambda - 2)$
The characteristic equation $\Delta(\lambda) = 0$ has a second-order root at $\lambda_1 = 1$ and a simple root at $\lambda_2 = 2$.
Since A is of third order, the polynomial $g(\lambda)$ will be of the form
$g(\lambda) = \beta_0 + \beta_1\lambda + \beta_2\lambda^2$
The coefficients $\beta_0$, $\beta_1$, and $\beta_2$ are evaluated using the following relations:
$f(\lambda_1) = g(\lambda_1)$
$\left[\frac{d}{d\lambda}f(\lambda)\right]_{\lambda=\lambda_1} = \left[\frac{d}{d\lambda}g(\lambda)\right]_{\lambda=\lambda_1}$
$f(\lambda_2) = g(\lambda_2)$
These relations yield the following set of simultaneous equations:
$e^t = \beta_0 + \beta_1 + \beta_2$
$te^t = \beta_1 + 2\beta_2$
$e^{2t} = \beta_0 + 2\beta_1 + 4\beta_2$
Solving these equations, we obtain
$\beta_0 = -2te^t + e^{2t}$
$\beta_1 = 3te^t + 2e^t - 2e^{2t}$
$\beta_2 = e^{2t} - e^t - te^t$
Hence, we have
$e^{At} = g(A) = \beta_0 I + \beta_1 A + \beta_2 A^2$
$= (-2te^t + e^{2t})I + (3te^t + 2e^t - 2e^{2t})A + (e^{2t} - e^t - te^t)A^2$
$= \begin{bmatrix} 2e^t - e^{2t} & 0 & 2e^t - 2e^{2t} \\ 0 & e^t & 0 \\ -e^t + e^{2t} & 0 & 2e^{2t} - e^t \end{bmatrix}$
which agrees with the result obtained in Example 5.10.
5.7.3 Solution of the Nonhomogeneous State Equation
When an input u(t) is present, the complete solution x(t) is obtained from the nonhomogeneous equation (5.80).
By writing Eqn. (5.80) as
$\dot{x}(t) - Ax(t) = bu(t)$
and premultiplying both sides of this equation by $e^{-At}$, we obtain
$e^{-At}\left[\dot{x}(t) - Ax(t)\right] = e^{-At}bu(t)$  (5.99)
By applying the rule for the derivative of the product of two matrices, we can write (refer to Eqn. (5.87))
$\frac{d}{dt}\left[e^{-At}x(t)\right] = e^{-At}\dot{x}(t) + \left[\frac{d}{dt}e^{-At}\right]x(t) = e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}\left[\dot{x}(t) - Ax(t)\right]$
Use of this equality in Eqn. (5.99) gives
$\frac{d}{dt}\left[e^{-At}x(t)\right] = e^{-At}bu(t)$
Integrating both sides with respect to t between the limits 0 and t, we get
$e^{-At}x(t)\Big|_0^t = \int_0^t e^{-A\tau}bu(\tau)\,d\tau$
or
$e^{-At}x(t) - x(0) = \int_0^t e^{-A\tau}bu(\tau)\,d\tau$
Premultiplying both sides by $e^{At}$, we obtain
$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau$  (5.100)
If the initial state is known at $t = t_0$, rather than at t = 0, Eqn. (5.100) becomes
$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}bu(\tau)\,d\tau$  (5.101)
Equation (5.101) can also be written in terms of the state transition matrix as
$x(t) = \phi(t - t_0)x(t_0) + \int_{t_0}^{t} \phi(t - \tau)bu(\tau)\,d\tau$
Example 5.14
For the speed control system of Fig. 5.3, the following plant model was derived in Example 5.1 (refer to Eqns (5.17)):
$\dot{x} = Ax + bu$
$y = cx$
with
$A = \begin{bmatrix} -1 & 1 \\ -1 & -10 \end{bmatrix};\quad b = \begin{bmatrix} 0 \\ 10 \end{bmatrix};\quad c = [1\ \ 0]$
State variables $x_1$ and $x_2$ are the physical variables of the system:
$x_1(t) = \omega(t)$, the angular velocity of the motor shaft
$x_2(t) = i_a(t)$, the armature current
The output
$y(t) = x_1(t) = \omega(t)$
In the following, we evaluate the response of this system to a unit-step input, under zero initial conditions.
$(sI - A)^{-1} = \begin{bmatrix} s+1 & -1 \\ 1 & s+10 \end{bmatrix}^{-1} = \frac{1}{s^2 + 11s + 11}\begin{bmatrix} s+10 & 1 \\ -1 & s+1 \end{bmatrix}$
$= \begin{bmatrix} \frac{s+10}{(s+a_1)(s+a_2)} & \frac{1}{(s+a_1)(s+a_2)} \\ \frac{-1}{(s+a_1)(s+a_2)} & \frac{s+1}{(s+a_1)(s+a_2)} \end{bmatrix};\quad a_1 = 1.1125,\ a_2 = 9.8875$
Therefore,
$x(t) = \int_0^t e^{A(t-\tau)}b\,d\tau = \int_0^t \begin{bmatrix} 1.14\left(e^{-a_1(t-\tau)} - e^{-a_2(t-\tau)}\right) \\ 1.14\left(-0.1123\,e^{-a_1(t-\tau)} + 8.8842\,e^{-a_2(t-\tau)}\right) \end{bmatrix}d\tau$
$= \begin{bmatrix} 0.9094 - 1.0247\,e^{-a_1 t} + 0.1153\,e^{-a_2 t} \\ 0.9092 + 0.1151\,e^{-a_1 t} - 1.0243\,e^{-a_2 t} \end{bmatrix}$
The output
$y(t) = \omega(t) = 0.9094 - 1.0247\,e^{-1.1125t} + 0.1153\,e^{-9.8875t};\quad t \ge 0$
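This step response is easily cross-checked numerically. A short Python sketch (NumPy/SciPy assumed; an illustrative aid) uses the closed form $x(t) = A^{-1}(e^{At} - I)b$, which follows from Eqn. (5.100) for a constant unit input and zero initial state:

# Unit-step response of the speed control plant of Example 5.14.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0], [-1.0, -10.0]])
b = np.array([0.0, 10.0])
for t in (0.5, 1.0, 3.0):
    # for constant u = 1 and x(0) = 0: x(t) = A^{-1}(e^{At} - I)b
    x = np.linalg.solve(A, (expm(A * t) - np.eye(2)) @ b)
    y_closed = 0.9094 - 1.0247*np.exp(-1.1125*t) + 0.1153*np.exp(-9.8875*t)
    print(t, x[0], y_closed)   # the two agree to about four decimal places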
5.8 Controllability and Observability
Controllability and observability are properties which describe structural features of a dynamic system. These properties play an important role in modern control system design theory; the conditions on controllability and observability often govern the control solution.
To illustrate the motivation for investigating controllability and observability properties, we consider the problem of stabilization of an inverted pendulum on a motor-driven cart.
Example 5.15
Figure 5.16 shows an inverted pendulum with its pivot mounted on a cart. The cart is driven by an electric motor. The motor drives a pair of wheels of the cart; the whole cart and the pendulum become the 'load' on the motor. The motor at time t exerts a torque T(t) on the wheels; the linear force applied to the cart is u(t).
Let J = the moment of inertia of the pendulum with respect to its center of gravity (CG). For a uniform pendulum rod of mass m and length 2l,
$J = \int_{-l}^{l} r^2\,dm = \int_{-l}^{l} r^2(\rho A\,dr) = \rho A\left[\frac{r^3}{3}\right]_{-l}^{l} = \rho A\left(\frac{2l^3}{3}\right) = \rho A(2l)\left(\frac{l^2}{3}\right) = \frac{ml^2}{3}$
where A = area of cross section of the rod, and $\rho$ = density.
The horizontal and vertical positions of the CG of the pendulum are given by $(z + l\sin\theta)$ and $(l\cos\theta)$, respectively.
The forces exerted on the pendulum are—the force mg on the center of gravity, a horizontal reaction force H, and a vertical reaction force V (Fig. 5.17a). H is the horizontal reaction force that the cart exerts on the pendulum, whereas –H is the force exerted by the pendulum on the cart. A similar convention applies to forces V and –V.
(Fig. 5.17 Free-body diagrams: (a) the pendulum, showing $\theta$, H, V, mg and the CG; (b) the cart, showing z, the pivot, u and –H.)
We next substitute $(u - M\ddot{z})$ from (5.104c) into (5.104d), and perform manipulations to get
$J\ddot{\theta} = mgl\sin\theta - ml^2\ddot{\theta} - ml\ddot{z}\cos\theta$  (5.104e)
Let
$\alpha = \frac{1}{m + M}$
Then, we can represent (5.104e) as
$\ddot{z} = -m\alpha l\ddot{\theta}\cos\theta + m\alpha l\dot{\theta}^2\sin\theta - \alpha F_c + \alpha u$  (5.104f)
We substitute (5.104f) into (5.104e), to obtain
$\ddot{\theta} = \frac{mgl\sin\theta - (m^2l^2\alpha\dot{\theta}^2\sin 2\theta)/2 + (m\alpha l\cos\theta)F_c - (m\alpha l\cos\theta)u}{J - m^2l^2\alpha\cos^2\theta + ml^2}$  (5.104g)
We next substitute $\ddot{\theta}$ from (5.104e) into (5.104f) to get
$\ddot{z} = \frac{-(m^2l^2\alpha g\sin 2\theta)/2 + \left(m\alpha l\dot{\theta}^2\sin\theta + \alpha(u - F_c)\right)(J + ml^2)}{J + ml^2 - m^2l^2\alpha\cos^2\theta}$  (5.104h)
Since $J = \frac{1}{3}ml^2$, Eqns (5.104g) and (5.104h) reduce to the following nonlinear set of equations:
$\ddot{\theta} = \frac{g\sin\theta - (ml\alpha\dot{\theta}^2\sin 2\theta)/2 + \alpha\cos\theta\,(F_c - u)}{4l/3 - ml\alpha\cos^2\theta}$  (5.105a)
$\ddot{z} = \frac{-(m\alpha g\sin 2\theta)/2 + (\alpha\dot{\theta}^2\sin\theta)\,4ml/3 + (u - F_c)\,4\alpha/3}{4/3 - m\alpha\cos^2\theta}$  (5.105b)
Suppose that the system parameters are as follows:
M = 1 kg, m = 0.15 kg, and l = 0.5 m.
Recall that g = 9.81 m/sec².
In our problem, since the objective is to keep the pendulum upright, it seems reasonable to assume that $\theta(t)$ and $\dot{\theta}(t)$ will remain close to zero. In view of this, we can set, with sufficient accuracy, $\sin\theta \cong \theta$ and $\cos\theta \cong 1$, and neglect the second-order terms in the deviations $\theta$ and $\dot{\theta}$. We further assume, for simplified analysis, that $F_c = 0$.
With these assumptions, we have from Eqns (5.105)
$\ddot{\theta}(t) = 16.3106\,\theta(t) - 1.4458\,u(t)$
$\ddot{z}(t) = -1.0637\,\theta(t) + 0.9639\,u(t)$
Choosing the states $x_1 = \theta$, $x_2 = \dot{\theta}$, $x_3 = z$, and $x_4 = \dot{z}$, we obtain the following state model for the inverted pendulum on the moving cart:
$\dot{x} = Ax + bu$  (5.106)
with
$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \end{bmatrix};\quad b = \begin{bmatrix} 0 \\ -1.4458 \\ 0 \\ 0.9639 \end{bmatrix}$
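The numerical coefficients in (5.106) follow from Eqns (5.105) evaluated about $\theta = 0$; a few lines of plain Python (illustrative only, no libraries needed) reproduce them:

# Linearized coefficients of the inverted pendulum, from Eqns (5.105)
# with M = 1 kg, m = 0.15 kg, l = 0.5 m, g = 9.81 m/s^2 and Fc = 0.
M, m, l, g = 1.0, 0.15, 0.5, 9.81
alpha = 1.0 / (m + M)
den_theta = 4.0 * l / 3.0 - m * l * alpha       # denominator of (5.105a)
print(g / den_theta, -alpha / den_theta)        # 16.3106, -1.4458
den_z = 4.0 / 3.0 - m * alpha                   # denominator of (5.105b)
print(-m * alpha * g / den_z, (4.0 * alpha / 3.0) / den_z)  # -1.0637, 0.9639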
The plant (5.106) is said to be completely controllable if every state x(t0) can be affected or controlled to
reach a desired state in finite time, by some unconstrained control u(t). Shortly, we will see that the plant
(5.106) satisfies this condition, and therefore, a solution exists to the following control problem:
Move the cart from one location to another without causing the pendulum to fall.
The solution to this control problem is not unique. We normally look for a feedback control scheme so
that the destabilizing effects of disturbance forces (due to wind, for example) are filtered out. Figure 5.18a
shows a state-feedback control scheme for stabilizing the inverted pendulum. The closed-loop system is
formed by feeding back the state variables through a real constant matrix k:
u(t) = – kx(t)
The closed-loop system is thus described by
$\dot{x}(t) = (A - bk)x(t)$
The design objective in this case is to find the feedback matrix k such that the closed-loop system is
stable. The existence of a solution to this design problem is directly based on the controllability property
of the plant (5.106). This will be established in Chapter 7.
Implementation of the state-feedback control solution requires access to all the state variables of the
plant model. In many control situations of interest, it is possible to install sensors to measure all the
state variables. This may not be possible or practical in some cases. For example, if the plant model
includes nonphysical state variables, measurement of these variables using physical sensors is not
possible. Accuracy requirements or cost considerations may prohibit the use of sensors for some physical
variables also.
The input and the output of a system are always physical quantities, and are normally easily accessible
to measurement. We, therefore, need a subsystem that performs the estimation of state variables based
on the information received from the input u(t) and the output y(t). This subsystem is called an observer
whose design is based on observability property of the controlled system.
The plant (5.106) is said to be completely observable if all the state variables in x(t) can be observed from
the measurements of the output y(t) = q (t) and the input u(t). Shortly, we will see that the plant (5.106)
does not satisfy this condition and therefore, a solution to the observer-design problem does not exist
when the inputs to the observer subsystem are u(t) and q(t).
Cart position z(t) is easily accessible to measurement and as we shall see, the observability condition is
satisfied with this choice of input information to the observer subsystem. Figure 5.18b shows the block
diagram of the closed-loop system with an observer that estimates the state vector from measurements of
u(t) and z(t). The observed or estimated state vector, designated as x̂, is then used to generate the control
u through the feedback matrix k.
A study of controllability and observability properties, presented in this section, provides a basis for the
state-feedback design problems discussed in Chapter 7. Further, these properties establish the conditions
for complete equivalence between the state variable and transfer function representations.
In this section, we study the controllability and observability of linear time-invariant systems described by a state variable model of the following form:
$\dot{x}(t) = Ax(t) + bu(t)$  (5.107a)
$y(t) = cx(t) + du(t)$  (5.107b)
where A, b, c and d are, respectively, $n \times n$, $n \times 1$, $1 \times n$ and $1 \times 1$ matrices; x(t) is the $n \times 1$ state vector; y(t) and u(t) are, respectively, the output and input variables.
For the linear system given by Eqns (5.107), if there exists an input $u_{[0,t_1]}$ which transfers the initial state $x(0) \triangleq x^0$ to the state $x^1$ in a finite time $t_1$, the state $x^0$ is said to be controllable. If all initial states are controllable, the system is said to be completely controllable, or simply controllable. Otherwise, the system is said to be uncontrollable.
From Eqn. (5.100), the solution of Eqn. (5.107a) is
$x(t) = e^{At}x^0 + \int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau$
To study the controllability property, we may assume, without loss of generality, that $x^1 \equiv 0$. Therefore, if the system (5.107) is controllable, there exists an input $u_{[0,t_1]}$ such that
$-x^0 = \int_0^{t_1} e^{-A\tau}bu(\tau)\,d\tau$  (5.108)
From this equation, we observe that complete controllability of a system depends on A and b, and is
independent of output matrix c. The controllability of the system (5.107) is frequently referred to as the
controllability of the pair {A, b}.
It may be noted that according to the definition of controllability, there is no constraint imposed on the
input or on the trajectory that the state should follow. Further, the system is said to be uncontrollable
although it may be ‘controllable in part’.
From the definition of controllability, we observe that by complete controllability of a plant we mean
that we can make the plant do whatever we please. Perhaps this definition is too restrictive in the sense
that we are asking too much of the plant. But if we are able to show that system equations satisfy this
definition, certainly there can be no intrinsic limitation on the design of the control system for the plant.
However, if the system turns out to be uncontrollable, it does not necessarily mean that the plant can
never be operated in a satisfactory manner. Provided that a control system will maintain the important
variables in an acceptable region, the fact that the plant is not completely controllable is immaterial.
Another important point which the reader must bear in mind, is that almost all physical systems are
nonlinear in nature to a certain extent, and a linear model is obtained after making certain approximations.
Small perturbations of the elements of A and b may cause an uncontrollable system to become
controllable. It may also be possible to increase the number of control variables and make the plant
completely controllable (controllability of multi-input systems is discussed in Section 5.10).
A common source of uncontrollable state variable models arises when redundant state variables are
defined. No one would intentionally use more state variables than the minimum number needed to
characterize the behavior of a dynamic system. In a complex system with unfamiliar physics, one may
be tempted to write down differential equations for everything in sight and, in doing so, may write down
more equations than are necessary. This will invariably result in an uncontrollable model for the system.
For the linear system given by Eqns (5.107), if the knowledge of the output y and the input u over a finite interval of time $[0, t_1]$ suffices to determine the state $x(0) \triangleq x^0$, the state $x^0$ is said to be observable. If all initial states are observable, the system is said to be completely observable, or simply observable. Otherwise, the system is said to be unobservable.
The output of the system (5.107) is given by
$y(t) = ce^{At}x^0 + \int_0^t ce^{A(t-\tau)}bu(\tau)\,d\tau + du(t)$
The output and the input can be measured and used, so that the following signal $\eta(t)$ can be obtained from u and y:
$\eta(t) \triangleq y(t) - \int_0^t ce^{A(t-\tau)}bu(\tau)\,d\tau - du(t) = ce^{At}x^0$  (5.109)
Premultiplying by $e^{A^Tt}c^T$ and integrating from 0 to $t_1$ gives
$\left\{\int_0^{t_1} e^{A^Tt}c^Tce^{At}\,dt\right\}x^0 = \int_0^{t_1} e^{A^Tt}c^T\eta(t)\,dt$  (5.110)
When the signal $\eta(t)$ is available over a time interval $[0, t_1]$, and the system (5.107) is observable, then the initial state $x^0$ can be uniquely determined from Eqn. (5.110).
From Eqn. (5.110), we see that complete observability of a system depends on A and c, and is independent of b. The observability of the system (5.107) is frequently referred to as the observability of the pair {A, c}.
Note that the system is said to be unobservable although it may be 'observable in part'. Plants that are not completely observable can often be made observable by making more measurements (observability of multi-output systems will be discussed in Section 5.10). Alternatively, one may examine feedback control schemes which do not require complete state feedback.
5.8.2 Controllability Tests
It is difficult to judge whether a system is controllable or not from the defining equation (5.108). Some simple mathematical tests, which answer the question of controllability, have been developed. The following theorem gives two controllability tests.
Theorem 5.2  The necessary and sufficient condition for the system (5.107) to be completely controllable is given by any one of the following:
I. The matrix
$W(0, t_1) = \int_0^{t_1} e^{-At}bb^Te^{-A^Tt}\,dt$  (5.111)
is nonsingular.
II. The $n \times n$ controllability matrix
$U \triangleq [b\ \ Ab\ \ A^2b\ \ \cdots\ \ A^{n-1}b]$  (5.112)
has rank equal to n, i.e., $\rho(U) = n$.
Since Test II can be applied without integration, it allows the controllability of a system to be easily checked.
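A sketch of Test II in Python (NumPy assumed; the function name ctrb is our own, chosen to mirror the common MATLAB routine), applied here to the inverted pendulum model (5.106):

# Build U = [b Ab ... A^{n-1}b] and check rank(U) = n.
import numpy as np

def ctrb(A, b):
    n = A.shape[0]
    cols, v = [], b
    for _ in range(n):
        cols.append(v)
        v = A @ v                 # next column A^k b
    return np.column_stack(cols)

A = np.array([[0, 1, 0, 0], [16.3106, 0, 0, 0],
              [0, 0, 0, 1], [-1.0637, 0, 0, 0]], dtype=float)
b = np.array([0.0, -1.4458, 0.0, 0.9639])
print(np.linalg.matrix_rank(ctrb(A, b)))   # 4 -> completely controllable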
Proof
Sufficiency of Test I: If $W(0, t_1)$ is nonsingular, the input
$u(t) = -b^Te^{-A^Tt}W^{-1}(0, t_1)x^0$  (5.113)
transfers $x^0$ to the origin at $t = t_1$, since
$\int_0^{t_1} e^{-At}bu(t)\,dt = -\int_0^{t_1} e^{-At}bb^Te^{-A^Tt}W^{-1}(0, t_1)x^0\,dt = -\left\{\int_0^{t_1} e^{-At}bb^Te^{-A^Tt}\,dt\right\}W^{-1}(0, t_1)x^0 = -x^0$
Necessity of Test I: Assume that the system is controllable, but $W(0, t_1)$ is singular for every $t_1$. Then, as per the results given in Eqns (5.8), the n rows of $e^{-At}b$ are linearly dependent, i.e., there exists a nonzero $n \times 1$ vector $\alpha$ such that
$\alpha^Te^{-At}b = 0$  (5.114)
From the assumption of controllability, there exists an input u satisfying Eqn. (5.108) for every $x^0$; therefore, premultiplying Eqn. (5.108) by $\alpha^T$ and using Eqn. (5.114), we get $\alpha^Tx^0 = 0$ for every $x^0$. The choice $x^0 = \alpha$ gives
$\alpha^T\alpha = \|\alpha\|^2 = 0$
This is true only for $\alpha = 0$, which contradicts the nonzero property of $\alpha$. Therefore, the nonsingularity of $W(0, t_1)$ is proved.
Sufficiency of Test II: It is first assumed that though $\rho(U) = n$, the system is not controllable; by showing that this leads to a contradiction, the controllability of the system is proved.
By the above assumption,
$\rho(U) = n$ and $W(0, t_1)$ is singular.
Therefore, Eqn. (5.114) holds, i.e.,
$\alpha^Te^{-At}b = 0;\quad t \ge 0,\ \alpha \ne 0$
Derivatives of this equation at t = 0 yield (refer to Eqn. (5.87))
$\alpha^TA^kb = 0;\quad k = 0, 1, \ldots, (n - 1)$
which is equivalent to
$\alpha^T[b\ \ Ab\ \ \cdots\ \ A^{n-1}b] = \alpha^TU = 0$
Therefore, the n rows of the controllability matrix U are linearly dependent (refer to Eqn. (5.8a)). This contradicts the assumption that $\rho(U) = n$; hence the system is completely controllable.
Necessity of Test II: It is assumed that the system is completely controllable, but $\rho(U) < n$. From this assumption, there exists a nonzero vector $\alpha$ satisfying
$\alpha^TU = 0$
or
$\alpha^TA^kb = 0;\quad k = 0, 1, \ldots, (n - 1)$  (5.116a)
Also, from the Cayley–Hamilton theorem, $e^{-At}$ can be expressed as a linear combination of I, A, ..., $A^{n-1}$:
$e^{-At} = \beta_0I + \beta_1A + \cdots + \beta_{n-1}A^{n-1}$  (5.116b)
From Eqns (5.116a) and (5.116b), we obtain
$\alpha^Te^{-At}b = 0,\quad t \ge 0,\ \alpha \ne 0$
and therefore (refer to Eqns (5.8)), $W(0, t_1)$ is singular for every $t_1$; by Test I, this contradicts the assumed complete controllability. Hence $\rho(U) = n$ is necessary.
Example 5.16
Recall the inverted pendulum of Example 5.15, shown in Fig. 5.16, in which the object is to apply a force u(t) so that the pendulum remains balanced in the vertical position. We found the linearized equations governing the system to be
$\dot{x} = Ax + bu$
where $x = [\theta\ \ \dot{\theta}\ \ z\ \ \dot{z}]^T$
$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \end{bmatrix};\quad b = \begin{bmatrix} 0 \\ -1.4458 \\ 0 \\ 0.9639 \end{bmatrix}$
z(t) = horizontal displacement of the pivot on the cart; and
$\theta(t)$ = rotational angle of the pendulum.
To check the controllability of this system, we compute the controllability matrix U:
$U = [b\ \ Ab\ \ A^2b\ \ A^3b] = \begin{bmatrix} 0 & -1.4458 & 0 & -23.5816 \\ -1.4458 & 0 & -23.5816 & 0 \\ 0 & 0.9639 & 0 & 1.5379 \\ 0.9639 & 0 & 1.5379 & 0 \end{bmatrix}$
Since $|U| = 420.4851 \ne 0$, U has full rank, and by Theorem 5.2, the system is completely controllable. Thus, if the angle $\theta$ departs from equilibrium by a small amount, a control always exists which will drive it back to zero.¹⁰ Moreover, a control also exists which will drive both $\theta$ and z, as well as their derivatives, to zero.
It may be noted that Eqn. (5.113) suggests a control law to prove the sufficiency of the controllability test. It does not necessarily give an acceptable solution to the control problem; the open-loop control given by Eqn. (5.113) is normally not acceptable. In Chapter 7, we will derive a state-feedback control law for the inverted pendulum. As we shall see, for such a control to exist, complete controllability of the plant is a necessary requirement.
Example 5.17
Consider the electrical network shown in Fig. 5.19. Differential equations governing the dynamics of this network can be obtained by various standard methods. By use of nodal analysis, for example, we get
$C_1\frac{de_1}{dt} + \frac{e_1 - e_2}{R_3} + \frac{e_1 - e_0}{R_1} = 0$
$C_2\frac{de_2}{dt} + \frac{e_2 - e_1}{R_3} + \frac{e_2 - e_0}{R_2} = 0$
¹⁰ This justifies the assumption that $\theta(t) \cong 0$, provided we choose an appropriate control strategy.
The appropriate state variables for the network are the capacitor voltages $e_1$ and $e_2$. Thus, the state equations of the network are
$\dot{x} = Ax + be_0$, where $x = [e_1\ \ e_2]^T$
$A = \begin{bmatrix} -\frac{1}{C_1}\left(\frac{1}{R_1} + \frac{1}{R_3}\right) & \frac{1}{R_3C_1} \\ \frac{1}{R_3C_2} & -\frac{1}{C_2}\left(\frac{1}{R_2} + \frac{1}{R_3}\right) \end{bmatrix};\quad b = \begin{bmatrix} \frac{1}{R_1C_1} \\ \frac{1}{R_2C_2} \end{bmatrix}$
$U = [b\ \ Ab] = \begin{bmatrix} \frac{1}{R_1C_1} & -\frac{1}{(R_1C_1)^2} + \frac{1}{R_3C_1}\left(\frac{1}{R_2C_2} - \frac{1}{R_1C_1}\right) \\ \frac{1}{R_2C_2} & -\frac{1}{(R_2C_2)^2} + \frac{1}{R_3C_2}\left(\frac{1}{R_1C_1} - \frac{1}{R_2C_2}\right) \end{bmatrix}$
We see that under the condition
$R_1C_1 = R_2C_2$
$\rho(U) = 1$, and the system becomes 'uncontrollable'. This condition is the one required to balance the bridge; in this case, the voltage across the terminals of $R_3$ cannot be influenced by the input $e_0$.
5.8.3 Observability Tests
The following theorem gives two observability tests.
Theorem 5.3  The necessary and sufficient condition for the system (5.107) to be completely observable is given by any one of the following:
I. The matrix
$M(0, t_1) = \int_0^{t_1} e^{A^Tt}c^Tce^{At}\,dt$  (5.117)
is nonsingular.
II. The $n \times n$ observability matrix
$V \triangleq \begin{bmatrix} c \\ cA \\ \vdots \\ cA^{n-1} \end{bmatrix}$  (5.118)
has rank equal to n, i.e., $\rho(V) = n$.
Proof Using the defining equation (5.110), this theorem can be proved in a manner similar to
Theorem 5.2.
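The dual test is equally easy to mechanize. A Python sketch (NumPy assumed; obsv is an illustrative name of our own), applied below to the two output choices considered in Example 5.18 which follows:

# Build V = [c; cA; ...; cA^{n-1}] and check rank(V) = n.
import numpy as np

def obsv(A, c):
    n = A.shape[0]
    rows, w = [], c
    for _ in range(n):
        rows.append(w)
        w = w @ A                 # next row c A^k
    return np.vstack(rows)

A = np.array([[0, 1, 0, 0], [16.3106, 0, 0, 0],
              [0, 0, 0, 1], [-1.0637, 0, 0, 0]], dtype=float)
print(np.linalg.matrix_rank(obsv(A, np.array([1.0, 0, 0, 0]))))  # 2 < 4: y = theta
print(np.linalg.matrix_rank(obsv(A, np.array([0, 0, 1.0, 0]))))  # 4: y = z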
Example 5.18
We now return to the inverted pendulum of Example 5.16. Assuming that the only output variable to be measured is $\theta(t)$, the position of the pendulum, the linearized equations governing the system are
$\dot{x} = Ax + bu$
$y = cx$
where
$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \end{bmatrix};\quad b = \begin{bmatrix} 0 \\ -1.4458 \\ 0 \\ 0.9639 \end{bmatrix};\quad c = [1\ \ 0\ \ 0\ \ 0]$
The observability matrix
$V = \begin{bmatrix} c \\ cA \\ cA^2 \\ cA^3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 16.3106 & 0 & 0 \end{bmatrix}$
$|V| = 0$, and therefore, by Theorem 5.3, the system is not completely observable.
Consider now the displacement z(t) of the cart as the output variable. Then
$c = [0\ \ 0\ \ 1\ \ 0]$
and the observability matrix
$V = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \\ 0 & -1.0637 & 0 & 0 \end{bmatrix}$
$|V| = 1.1315 \ne 0$; the system is, therefore, completely observable. The values of $\dot{z}(t)$, $\theta(t)$ and $\dot{\theta}(t)$ can all be determined by observing z(t) over an arbitrary time interval. Observer design for the inverted-pendulum system is given in Chapter 7.
5.8.4 Invariance of Controllability and Observability
I. The equivalence transformation $x = P\bar{x}$ results in the following alternative state variable model (refer to Eqns (5.22)) for the system (5.107):
$\dot{\bar{x}} = \bar{A}\bar{x} + \bar{b}u;\quad y = \bar{c}\bar{x} + du$
with $\bar{A} = P^{-1}AP$, $\bar{b} = P^{-1}b$, $\bar{c} = cP$. Since
$(\bar{A})^k\bar{b} = P^{-1}A^kb;\quad k = 0, 1, \ldots, (n - 1)$
we have
$\bar{U} = [\bar{b}\ \ \bar{A}\bar{b}\ \ \cdots\ \ (\bar{A})^{n-1}\bar{b}] = [P^{-1}b\ \ P^{-1}Ab\ \ \cdots\ \ P^{-1}A^{n-1}b] = P^{-1}U$  (5.119a)
where $U = [b\ \ Ab\ \ \cdots\ \ A^{n-1}b]$  (5.119b)
Since $P^{-1}$ is nonsingular,
$\rho(\bar{U}) = \rho(U)$  (5.119c)
II. A similar relationship can be shown for the observability matrices.
Controllability and observability are thus invariant under equivalence transformations.
5.8.5 Controllability and Observability of Jordan Canonical Models
If the system equations are known in Jordan canonical form, then one need not resort to the controllability and observability tests given by Theorems 5.2 and 5.3. These properties can be determined almost by inspection of the system equations, as will be shown below.
Consider a SISO system with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. The Jordan canonical state model of this system is of the form
$\dot{x} = \Lambda x + bu$
$y = cx + du$  (5.120)
with
$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix};\quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix};\quad c = [c_1\ \ c_2\ \ \cdots\ \ c_n]$
The system (5.120) is completely controllable if, and only if, none of the elements of the column matrix b is zero; and (5.120) is completely observable if, and only if, none of the elements of the row matrix c is zero.
Proof  The controllability matrix of the system (5.120) is
$U = [b\ \ \Lambda b\ \ \cdots\ \ \Lambda^{n-1}b] = \begin{bmatrix} b_1 & b_1\lambda_1 & \cdots & b_1\lambda_1^{n-1} \\ b_2 & b_2\lambda_2 & \cdots & b_2\lambda_2^{n-1} \\ \vdots & & & \vdots \\ b_n & b_n\lambda_n & \cdots & b_n\lambda_n^{n-1} \end{bmatrix}$
$|U| = b_1 \times b_2 \times \cdots \times b_n \times \begin{vmatrix} 1 & \lambda_1 & \cdots & \lambda_1^{n-1} \\ 1 & \lambda_2 & \cdots & \lambda_2^{n-1} \\ \vdots & & & \vdots \\ 1 & \lambda_n & \cdots & \lambda_n^{n-1} \end{vmatrix} \ne 0$ if $b_i \ne 0$, $i = 1, 2, \ldots, n$
since the Vandermonde determinant is nonzero for distinct $\lambda_i$. This proves the first part of the theorem. The second part can be proved in a similar manner.¹¹
In frequency-domain analysis, it is tacitly assumed that the dynamic properties of a system are completely
determined by the transfer function of the system. That this is not always the case is illustrated by the
following examples.
Example 5.19
Consider the system
x = Ax + bu
y = cx (5.121)
È -2 1˘ È1˘
with A= Í ;b= Í1˙ ; c = [0 1]
Î 1 -2 ˙˚ Î˚
The controllability matrix
È1 -1˘
U = [b Ab] = Í ˙
Î1 -1˚
¹¹ Refer to Gopal [105] for controllability and observability tests using the Jordan canonical representation of systems with multiple eigenvalues.
Since $\rho(U) = 1$, the second-order system (5.121) is not completely controllable. The eigenvalues of matrix A are the roots of the characteristic equation
$|sI - A| = \begin{vmatrix} s+2 & -1 \\ -1 & s+2 \end{vmatrix} = 0$
The eigenvalues are obtained as –1, –3. The modes of the transient response are, therefore, $e^{-t}$ and $e^{-3t}$.
The transfer function of the system (5.121) is calculated as
$G(s) = c(sI - A)^{-1}b = [0\ \ 1]\begin{bmatrix} s+2 & -1 \\ -1 & s+2 \end{bmatrix}^{-1}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$
$= [0\ \ 1]\begin{bmatrix} \frac{s+2}{(s+1)(s+3)} & \frac{1}{(s+1)(s+3)} \\ \frac{1}{(s+1)(s+3)} & \frac{s+2}{(s+1)(s+3)} \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \frac{1}{s+1}$
We find that, because of a pole-zero cancellation, both eigenvalues of matrix A do not appear as poles in G(s). The dynamic mode $e^{-3t}$ of the system (5.121) does not show up in the input-output characterization given by the transfer function G(s). Note that the system under consideration is not a completely controllable system.
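The cancellation can be exhibited symbolically. The following sketch uses the SymPy library (an assumption; any computer algebra system would do) for the system above and for the system of Example 5.20, which follows:

# Symbolic G(s) = c (sI - A)^{-1} b, exhibiting pole-zero cancellation.
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[-2, 1], [1, -2]])
G1 = (sp.Matrix([[0, 1]]) * (s*sp.eye(2) - A).inv() * sp.Matrix([1, 1]))[0]
G2 = (sp.Matrix([[1, -1]]) * (s*sp.eye(2) - A).inv() * sp.Matrix([1, 0]))[0]
print(sp.simplify(G1))   # 1/(s + 1): the mode e^{-3t} is cancelled
print(sp.simplify(G2))   # 1/(s + 3): the mode e^{-t} is cancelled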
Example 5.20
Consider the system
$\dot{x} = Ax + bu$
$y = cx$  (5.122)
with
$A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix};\quad b = \begin{bmatrix} 1 \\ 0 \end{bmatrix};\quad c = [1\ \ {-1}]$
The observability matrix
$V = \begin{bmatrix} c \\ cA \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -3 & 3 \end{bmatrix}$
Since $\rho(V) = 1$, the second-order system (5.122) is not completely observable.
The eigenvalues of matrix A are –1, –3. The transfer function of the system (5.122) is calculated as
$G(s) = c(sI - A)^{-1}b = [1\ \ {-1}]\begin{bmatrix} \frac{s+2}{(s+1)(s+3)} & \frac{1}{(s+1)(s+3)} \\ \frac{1}{(s+1)(s+3)} & \frac{s+2}{(s+1)(s+3)} \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \frac{1}{s+3}$
The dynamic mode $e^{-t}$ of the system (5.122) does not show up in the input-output characterization given by the transfer function G(s). Note that the system under consideration is not a completely observable system.
In the following, we give two specific state transformations to reveal the underlying structure imposed
upon a system by its controllability and observability properties (for proof, refer to [105]). These results
are then used to establish equivalence between transfer function and state variable representations.
Any system that is not completely controllable can be transformed, by a suitable equivalence transformation, to the controllability canonical form
$\begin{bmatrix} \dot{\bar{x}}_1 \\ \dot{\bar{x}}_2 \end{bmatrix} = \begin{bmatrix} \bar{A}_c & \bar{A}_{12} \\ 0 & \bar{A}_{22} \end{bmatrix}\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} + \begin{bmatrix} \bar{b}_c \\ 0 \end{bmatrix}u = \bar{A}\bar{x} + \bar{b}u$  (5.123c)
$y = [\bar{c}_1\ \ \bar{c}_2]\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} = \bar{c}\bar{x}$
where the m-dimensional subsystem
$\dot{\bar{x}}_1 = \bar{A}_c\bar{x}_1 + \bar{b}_cu + \bar{A}_{12}\bar{x}_2$
is controllable from u (the additional driving term $\bar{A}_{12}\bar{x}_2$ has no effect on controllability), and the $(n - m)$-dimensional subsystem
$\dot{\bar{x}}_2 = \bar{A}_{22}\bar{x}_2$
is not affected by the input u, and is therefore entirely uncontrollable. The state model (5.123c) is said to be in controllability canonical form.
¹² Writing
$\begin{bmatrix} A_1 & A_2 \\ 0 & A_3 \end{bmatrix}\begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$
gives
$\begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix} = \begin{bmatrix} A_1^{-1} & -A_1^{-1}A_2A_3^{-1} \\ 0 & A_3^{-1} \end{bmatrix}$
(Fig. 5.20 Decomposition into controllable and uncontrollable subsystems: the block $\dot{\bar{x}}_1 = \bar{A}_c\bar{x}_1 + \bar{b}_cu + \bar{A}_{12}\bar{x}_2$ feeds the output y through $\bar{c}_1$; the block $\dot{\bar{x}}_2 = \bar{A}_{22}\bar{x}_2$ feeds it through $\bar{c}_2$.)
Similarly, any system that is not completely observable can be transformed to the observability canonical form
$\begin{bmatrix} \dot{\bar{x}}_1 \\ \dot{\bar{x}}_2 \end{bmatrix} = \begin{bmatrix} \bar{A}_0 & 0 \\ \bar{A}_{21} & \bar{A}_{22} \end{bmatrix}\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} + \begin{bmatrix} \bar{b}_1 \\ \bar{b}_2 \end{bmatrix}u$  (5.124c)
$y = [\bar{c}_0\ \ 0]\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} = \bar{c}\bar{x}$
where the l-dimensional subsystem
$\dot{\bar{x}}_1 = \bar{A}_0\bar{x}_1 + \bar{b}_1u$
$y = \bar{c}_0\bar{x}_1$
is observable from y, and the $(n - l)$-dimensional subsystem
$\dot{\bar{x}}_2 = \bar{A}_{22}\bar{x}_2 + \bar{b}_2u + \bar{A}_{21}\bar{x}_1$
has no effect upon the output y, and is therefore entirely unobservable, i.e., nothing about $\bar{x}_2$ can be inferred from output measurement.
Any system which is not completely observable can thus be decomposed into the observable and unobservable subsystems shown in Fig. 5.21. The state model (5.124c) is said to be in observability canonical form.
Since systems (5.124a) and (5.124c) are equivalent, the set of eigenvalues of matrix A of system (5.124a) is the same as the set of eigenvalues of matrix $\bar{A}$ of system (5.124c), which is the union of the subsets of eigenvalues of matrices $\bar{A}_0$ and $\bar{A}_{22}$. The transfer function of the system (5.124a) may be calculated from (5.124c) as follows:
$G(s) = [\bar{c}_0\ \ 0]\begin{bmatrix} sI - \bar{A}_0 & 0 \\ -\bar{A}_{21} & sI - \bar{A}_{22} \end{bmatrix}^{-1}\begin{bmatrix} \bar{b}_1 \\ \bar{b}_2 \end{bmatrix}$
$= [\bar{c}_0\ \ 0]\begin{bmatrix} (sI - \bar{A}_0)^{-1} & 0 \\ (sI - \bar{A}_{22})^{-1}\bar{A}_{21}(sI - \bar{A}_0)^{-1} & (sI - \bar{A}_{22})^{-1} \end{bmatrix}\begin{bmatrix} \bar{b}_1 \\ \bar{b}_2 \end{bmatrix}$
$= \bar{c}_0(sI - \bar{A}_0)^{-1}\bar{b}_1$  (5.125)
which shows that the unobservable part of the system does not affect the input-output relationship. We will refer to the eigenvalues of $\bar{A}_0$ as observable poles, and the eigenvalues of $\bar{A}_{22}$ as unobservable poles.
We now examine the use of state variable and transfer function models of a system to study its dynamic
properties.
We know that a system is asymptotically stable if all the eigenvalues of the characteristic matrix A of its state variable model are in the left half of the complex plane. Also, we know that a system is BIBO (bounded-input, bounded-output) stable if all the poles of its transfer function model are in the left half of the complex plane. Since, in general, the poles of the transfer function model of a system are a subset of the eigenvalues of the characteristic matrix A of the system, asymptotic stability always implies BIBO stability.
The reverse, however, may not always be true, because the eigenvalues of the uncontrollable and/or unobservable part of the system are hidden from the BIBO stability analysis. These hidden modes may lead to instability of a BIBO stable system. When a state variable model is both controllable and observable, all the eigenvalues of the characteristic matrix A appear as poles in the corresponding transfer function. Therefore, BIBO stability implies asymptotic stability only for a completely controllable and completely observable system.
To conclude, we may say that the transfer function model of a system represents its complete dynamics
only if the system is both controllable and observable.
For the multivariable system (5.126), the solution follows from Eqn. (5.100):
$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$  (5.127a)
The output
$y(t) = C\left[e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau\right] + Du(t)$  (5.127b)
In the transform domain, the input-output behavior of the system (5.126) is determined entirely by the matrix (refer to Eqn. (5.28))
$G(s) = C(sI - A)^{-1}B + D$  (5.128a)
This matrix is called the transfer function matrix of system (5.126), and it has the property that the input U(s) and output Y(s) of Eqns (5.126) are related by
$\underset{(q \times 1)}{Y(s)} = \underset{(q \times p)}{G(s)}\ \underset{(p \times 1)}{U(s)}$  (5.128b)
whenever $x^0 = 0$.
In an expanded form, Eqn. (5.128b) can be written as
$\begin{bmatrix} Y_1(s) \\ Y_2(s) \\ \vdots \\ Y_q(s) \end{bmatrix} = \begin{bmatrix} G_{11}(s) & G_{12}(s) & \cdots & G_{1p}(s) \\ G_{21}(s) & G_{22}(s) & \cdots & G_{2p}(s) \\ \vdots & \vdots & & \vdots \\ G_{q1}(s) & G_{q2}(s) & \cdots & G_{qp}(s) \end{bmatrix}\begin{bmatrix} U_1(s) \\ U_2(s) \\ \vdots \\ U_p(s) \end{bmatrix}$
The (i, j)th element $G_{ij}(s)$ of G(s) is the transfer function relating the ith output to the jth input.
Example 5.21
The scheme of Fig. 5.23 describes a simple concentration control process. Two concentrated solutions of some chemical, with constant concentrations $C_1$ and $C_2$, are fed with flow rates $Q_1(t) = \bar{Q}_1 + q_1(t)$ and $Q_2(t) = \bar{Q}_2 + q_2(t)$, respectively, and are continuously mixed in the tank. The outflow from the mixing tank is at a rate $Q(t) = \bar{Q} + q(t)$, with concentration $C(t) = \bar{C} + c(t)$. Let it be assumed that stirring causes perfect mixing, so that the concentration of the solution in the tank is uniform throughout, and equals that of the outflow. We shall also assume that the density remains constant.
Let $V(t) = \bar{V} + v(t)$ be the volume of the fluid in the tank.
The mass balance equations are
$\frac{d}{dt}[\bar{V} + v(t)] = \bar{Q}_1 + q_1(t) + \bar{Q}_2 + q_2(t) - \bar{Q} - q(t)$  (5.129a)
$\frac{d}{dt}\left[\{\bar{C} + c(t)\}\{\bar{V} + v(t)\}\right] = C_1[\bar{Q}_1 + q_1(t)] + C_2[\bar{Q}_2 + q_2(t)] - [\bar{C} + c(t)][\bar{Q} + q(t)]$  (5.129b)
The outflow rate is governed by
$Q(t) = k\sqrt{H(t)} = k\sqrt{V(t)/A}$  (5.130)
where $H(t) = \bar{H} + h(t)$ is the head of the liquid in the tank, A is the cross-sectional area of the tank, and k is a constant.
The steady-state operation is described by the equations (obtained from Eqns (5.129) and (5.130)):
$0 = \bar{Q}_1 + \bar{Q}_2 - \bar{Q}$
$0 = C_1\bar{Q}_1 + C_2\bar{Q}_2 - \bar{C}\bar{Q}$
$\bar{Q} = k\sqrt{\bar{V}/A}$
For small perturbations about the steady state, Eqn. (5.130) can be linearized using Eqn. (5.11c):
$Q(t) - \bar{Q} = \frac{k}{\sqrt{A}}\left[\frac{\partial\sqrt{V(t)}}{\partial V(t)}\right]_{V=\bar{V}}(V(t) - \bar{V})$
or
$q(t) = \frac{k\sqrt{\bar{V}/A}}{2\bar{V}}v(t) = \frac{\bar{Q}}{2\bar{V}}v(t)$
From the foregoing equations, we obtain the following relations, describing perturbations about the steady state:
$\dot{v}(t) = q_1(t) + q_2(t) - \frac{1}{2}\frac{\bar{Q}}{\bar{V}}v(t)$  (5.131a)
$\bar{C}\dot{v}(t) + \bar{V}\dot{c}(t) = C_1q_1(t) + C_2q_2(t) - \frac{1}{2}\frac{\bar{C}\bar{Q}}{\bar{V}}v(t) - \bar{Q}c(t)$  (5.131b)
(Second-order terms in perturbation variables have been neglected.)
The hold-up time of the tank is
$\tau = \frac{\bar{V}}{\bar{Q}}$
Let us define
$x_1(t) = v(t),\ x_2(t) = c(t),\ u_1(t) = q_1(t),\ u_2(t) = q_2(t),\ y_1(t) = q(t),\ y_2(t) = c(t)$
In terms of these variables, we get the following state model from Eqns (5.131):
$\dot{x}(t) = \begin{bmatrix} -\frac{1}{2\tau} & 0 \\ 0 & -\frac{1}{\tau} \end{bmatrix}x(t) + \begin{bmatrix} 1 & 1 \\ \frac{C_1 - \bar{C}}{\bar{V}} & \frac{C_2 - \bar{C}}{\bar{V}} \end{bmatrix}u(t)$  (5.132a)
$y(t) = \begin{bmatrix} \frac{1}{2\tau} & 0 \\ 0 & 1 \end{bmatrix}x(t)$  (5.132b)
For the parameters
$\bar{Q}_1 = 10$ liters/sec, $\bar{Q}_2 = 20$ liters/sec, $C_1 = 9$ g-moles/liter, $C_2 = 18$ g-moles/liter, and $\bar{V} = 1500$ liters,
the state variable model becomes
$\dot{x}(t) = Ax(t) + Bu(t)$  (5.133a)
$y(t) = Cx(t)$  (5.133b)
with
$A = \begin{bmatrix} -0.01 & 0 \\ 0 & -0.02 \end{bmatrix};\quad B = \begin{bmatrix} 1 & 1 \\ -0.004 & 0.002 \end{bmatrix};\quad C = \begin{bmatrix} 0.01 & 0 \\ 0 & 1 \end{bmatrix}$
In the transform domain, the input-output behavior of the system is given by
$Y(s) = G(s)U(s)$
where
$G(s) = C(sI - A)^{-1}B$
For A, B, and C given by Eqns (5.133), we have
$(sI - A) = \begin{bmatrix} s + 0.01 & 0 \\ 0 & s + 0.02 \end{bmatrix}$
$G(s) = C(sI - A)^{-1}B = \begin{bmatrix} 0.01 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \frac{1}{s + 0.01} & 0 \\ 0 & \frac{1}{s + 0.02} \end{bmatrix}\begin{bmatrix} 1 & 1 \\ -0.004 & 0.002 \end{bmatrix}$
$= \begin{bmatrix} \frac{0.01}{s + 0.01} & \frac{0.01}{s + 0.01} \\ \frac{-0.004}{s + 0.02} & \frac{0.002}{s + 0.02} \end{bmatrix}$  (5.134)
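Equation (5.134) may be verified symbolically; a SymPy sketch (an assumption, with the parameters entered as exact rationals) follows:

# Transfer function matrix of the mixing tank, Eqn. (5.134).
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[-sp.Rational(1, 100), 0], [0, -sp.Rational(1, 50)]])
B = sp.Matrix([[1, 1], [-sp.Rational(4, 1000), sp.Rational(2, 1000)]])
C = sp.Matrix([[sp.Rational(1, 100), 0], [0, 1]])
G = sp.simplify(C * (s*sp.eye(2) - A).inv() * B)
print(G)   # entries 0.01/(s+0.01), 0.01/(s+0.01); -0.004/(s+0.02), 0.002/(s+0.02)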
The necessary and sufficient condition for the system (5.126) to be completely controllable is that the $n \times np$ matrix
$U \triangleq [B\ \ AB\ \ A^2B\ \ \cdots\ \ A^{n-1}B]$  (5.135)
has rank equal to n, i.e., $\rho(U) = n$.
The necessary and sufficient condition for the system (5.126) to be completely observable is that the $nq \times n$ matrix
$V \triangleq \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$  (5.136)
has rank equal to n, i.e., $\rho(V) = n$.
The controllability and observability properties can also be determined by inspection of the system equations in Jordan canonical form. A MIMO system with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ has the following Jordan canonical state model:
$\dot{x} = \Lambda x + Bu$  (5.137a)
$y = Cx + Du$  (5.137b)
with
$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix};\quad B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & & & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{np} \end{bmatrix};\quad C = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ \vdots & & & \vdots \\ c_{q1} & c_{q2} & \cdots & c_{qn} \end{bmatrix}$
The system (5.137) is completely controllable if, and only if, none of the rows of the B matrix is a zero row; and (5.137) is completely observable if, and only if, none of the columns of the C matrix is a zero column.
We have been using Jordan canonical structure only for systems with distinct eigenvalues. Refer to [105]
for controllability and observability tests using Jordan canonical representation of systems with multiple
eigenvalues.
Example 5.22
Consider the mixing-tank system discussed in Example 5.21. Suppose the feeds $Q_1$ and $Q_2$ have equal concentrations, i.e., $C_1 = C_2 = C_0$ (Fig. 5.23). Then the steady-state concentration in the tank is also $C_0$, and from Eqn. (5.132a) we have
$\dot{x}(t) = \begin{bmatrix} -\frac{1}{2\tau} & 0 \\ 0 & -\frac{1}{\tau} \end{bmatrix}x(t) + \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}u(t)$
This state variable model is in Jordan canonical form. Since one row of the B matrix is a zero row, the system is not completely controllable. As is obvious from the Jordan canonical model, the input u(t) affects only the state variable $x_1(t)$, the incremental volume. The variable $x_2(t)$, the incremental concentration, has no connection with the input u(t).
If $C_1 \ne C_2$, the system is completely controllable.
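The inspection rule agrees with the MIMO rank test, as a short Python sketch (NumPy assumed; $\tau$ = 50 sec taken from the parameters of Example 5.21) confirms:

# Example 5.22: zero row in B (Jordan form) vs. rank of U = [B AB].
import numpy as np

tau = 50.0
A = np.diag([-1/(2*tau), -1/tau])
B = np.array([[1.0, 1.0], [0.0, 0.0]])   # zero row -> uncontrollable
n = A.shape[0]
U = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(U))          # 1 < 2, as the inspection rule predicts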
REVIEW EXAMPLES
Therefore,
$Y(s) = \frac{40/3}{s}R(s) - \frac{15}{s+1}R(s) + \frac{5/3}{s+3}R(s)$
Let $X_1(s) = \frac{40/3}{s}R(s)$; this gives $\dot{x}_1 = \frac{40}{3}r$
$X_2(s) = \frac{-15}{s+1}R(s)$; this gives $\dot{x}_2 + x_2 = -15r$
$X_3(s) = \frac{5/3}{s+3}R(s)$; this gives $\dot{x}_3 + 3x_3 = \frac{5}{3}r$
In terms of $x_1$, $x_2$ and $x_3$, the output y(t) is given by
$y(t) = x_1(t) + x_2(t) + x_3(t)$
A state variable formulation, for the given transfer function, is thus defined by the following matrices:
$\Lambda = \begin{bmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -3 \end{bmatrix};\quad b = \begin{bmatrix} 40/3 \\ -15 \\ 5/3 \end{bmatrix};\quad c = [1\ \ 1\ \ 1];\quad d = 0$
Note that the coefficient matrix A is diagonal, and the state model is in Jordan canonical form.
We now construct two state models for the given transfer function in companion form. To do this, we express the transfer function as
$\frac{Y(s)}{R(s)} = \frac{10(s+4)}{s(s+1)(s+3)} = \frac{10s + 40}{s^3 + 4s^2 + 3s} = \frac{\beta_0 s^3 + \beta_1 s^2 + \beta_2 s + \beta_3}{s^3 + \alpha_1 s^2 + \alpha_2 s + \alpha_3};$
$\beta_0 = \beta_1 = 0,\ \beta_2 = 10,\ \beta_3 = 40;\quad \alpha_1 = 4,\ \alpha_2 = 3,\ \alpha_3 = 0$
(b) With reference to Eqns (5.54), we obtain the following state model in the first companion form:
$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -3 & -4 \end{bmatrix};\quad b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix};\quad c = [40\ \ 10\ \ 0];\quad d = 0$
(c) With reference to Eqns (5.56), the state model in the second companion form becomes
$A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -3 \\ 0 & 1 & -4 \end{bmatrix};\quad b = \begin{bmatrix} 40 \\ 10 \\ 0 \end{bmatrix};\quad c = [0\ \ 0\ \ 1];\quad d = 0$
(b) Consider now that the system has a forcing function, and is represented by the following nonhomogeneous state equation:
$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u$
where u is a unit-step input.
Compute the solution of this equation assuming initial conditions of part (a).
Solution
(a) Since
$(sI - A) = \begin{bmatrix} s & -1 \\ 0 & s+2 \end{bmatrix}$
we obtain
$(sI - A)^{-1} = \begin{bmatrix} \frac{1}{s} & \frac{1}{s(s+2)} \\ 0 & \frac{1}{s+2} \end{bmatrix}$
Hence
$e^{At} = \mathcal{L}^{-1}\left[(sI - A)^{-1}\right] = \begin{bmatrix} 1 & \frac{1}{2}(1 - e^{-2t}) \\ 0 & e^{-2t} \end{bmatrix}$
To obtain the state transition matrix $e^{At}$ by the canonical transformation method, we compute the eigenvalues and eigenvectors of matrix A. The roots of the characteristic equation
$|\lambda I - A| = 0$
are $\lambda_1 = 0$ and $\lambda_2 = -2$. These are the eigenvalues of matrix A. Eigenvectors corresponding to the distinct eigenvalues $\lambda_i$ may be obtained from the nonzero columns of $\operatorname{adj}(\lambda_i I - A)$.
For the given A matrix,
$\operatorname{adj}(\lambda_i I - A) = \begin{bmatrix} \lambda_i + 2 & 1 \\ 0 & \lambda_i \end{bmatrix}$
For $\lambda_1 = 0$, $\operatorname{adj}(\lambda_1 I - A) = \begin{bmatrix} 2 & 1 \\ 0 & 0 \end{bmatrix}$
The eigenvector $v_1$ corresponding to the eigenvalue $\lambda_1$ is, therefore, given by
$v_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$
For $\lambda_2 = -2$, $\operatorname{adj}(\lambda_2 I - A) = \begin{bmatrix} 0 & 1 \\ 0 & -2 \end{bmatrix}$
The eigenvector $v_2$ corresponding to the eigenvalue $\lambda_2$ is given by
$v_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}$
The transformation matrix is $P = [v_1\ \ v_2] = \begin{bmatrix} 1 & 1 \\ 0 & -2 \end{bmatrix}$, and
$e^{At} = Pe^{\Lambda t}P^{-1} = \begin{bmatrix} 1 & 1 \\ 0 & -2 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & e^{-2t} \end{bmatrix}\begin{bmatrix} 1 & \frac{1}{2} \\ 0 & -\frac{1}{2} \end{bmatrix} = \begin{bmatrix} 1 & \frac{1}{2}(1 - e^{-2t}) \\ 0 & e^{-2t} \end{bmatrix}$
which agrees with the result obtained by the Laplace transform method. With the initial state $x(0) = [1\ \ 0]^T$, the free response is
$x(t) = e^{At}x(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$
(b) The complete solution is
$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau$
Now
$\int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau = \begin{bmatrix} \int_0^t \frac{1}{2}\left[1 - e^{-2(t-\tau)}\right]d\tau \\ \int_0^t e^{-2(t-\tau)}\,d\tau \end{bmatrix} = \begin{bmatrix} -\frac{1}{4} + \frac{1}{2}t + \frac{1}{4}e^{-2t} \\ \frac{1}{2}(1 - e^{-2t}) \end{bmatrix}$
Therefore, adding the free response $e^{At}x(0) = [1\ \ 0]^T$ found in part (a),
$x_1(t) = \frac{3}{4} + \frac{1}{2}t + \frac{1}{4}e^{-2t}$
$x_2(t) = \frac{1}{2}(1 - e^{-2t})$
The matrix $\Lambda$ has n eigenvalues at $\lambda = \lambda_1$. To evaluate $f(\Lambda) = e^{\Lambda t}$, we define (refer to Eqn. (5.98b)) the polynomial $g(\lambda)$ as
$g(\lambda) = \beta_0 + \beta_1\lambda + \cdots + \beta_{n-1}\lambda^{n-1}$
This polynomial may be rearranged as
$g(\lambda) = \beta_0 + \beta_1(\lambda - \lambda_1) + \cdots + \beta_{n-1}(\lambda - \lambda_1)^{n-1}$
The coefficients $\beta_0, \beta_1, \ldots, \beta_{n-1}$ are given by the following equations (refer to Eqns (5.98c)):
$f(\lambda_1) = g(\lambda_1)$
$\left[\frac{d}{d\lambda}f(\lambda)\right]_{\lambda=\lambda_1} = \left[\frac{d}{d\lambda}g(\lambda)\right]_{\lambda=\lambda_1}$
$\vdots$
$\left[\frac{d^{n-1}}{d\lambda^{n-1}}f(\lambda)\right]_{\lambda=\lambda_1} = \left[\frac{d^{n-1}}{d\lambda^{n-1}}g(\lambda)\right]_{\lambda=\lambda_1}$
Solving, we get
$\beta_0 = e^{\lambda_1 t},\quad \beta_1 = \frac{t}{1!}e^{\lambda_1 t},\quad \beta_2 = \frac{t^2}{2!}e^{\lambda_1 t},\quad \ldots,\quad \beta_{n-1} = \frac{t^{n-1}}{(n-1)!}e^{\lambda_1 t}$
Therefore,
$e^{\Lambda t} = \beta_0 I + \beta_1(\Lambda - \lambda_1 I) + \cdots + \beta_{n-1}(\Lambda - \lambda_1 I)^{n-1}$
$(\Lambda - \lambda_1 I) = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}$
$(\Lambda - \lambda_1 I)(\Lambda - \lambda_1 I) = \begin{bmatrix} 0 & 0 & 1 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 1 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}$
and so on; hence
$e^{\Lambda t} = \begin{bmatrix} \beta_0 & \beta_1 & \beta_2 & \cdots & \beta_{n-1} \\ 0 & \beta_0 & \beta_1 & \cdots & \beta_{n-2} \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \beta_0 \end{bmatrix}$
$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 3\omega^2 & 0 & 0 & 2\omega \\ 0 & 0 & 0 & 1 \\ 0 & -2\omega & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = Ax + Bu$
where $\omega$ is the angular frequency of the satellite in the circular, equatorial orbit; $x_1(t)$ and $x_3(t)$ are, respectively, the deviations in the position variables r(t) and $\theta(t)$ of the satellite; and $x_2(t)$ and $x_4(t)$ are, respectively, the deviations in the velocity variables $\dot{r}(t)$ and $\dot{\theta}(t)$. The inputs $u_1(t)$ and $u_2(t)$ are the thrusts $u_r$ and $u_\theta$ in the radial and tangential directions, respectively, applied by small rocket engines or gas jets (u = 0 when x = 0).
(a) Prove that the system is completely controllable.
(b) Suppose that the tangential thruster becomes inoperable. Determine the controllability of the system with the radial thruster alone.
(c) Suppose that the radial thruster becomes inoperable. Determine the controllability of the system with the tangential thruster alone.
(d) Prove that the system is completely observable from radial ($x_1 = r$) and tangential ($x_3 = \theta$) position measurements.
(e) Suppose that the tangential measuring device becomes inoperable. Determine the observability of the system from radial position measurement alone.
(f) Suppose that the radial measurements are lost. Determine the observability of the system from tangential position measurement alone.
Solution
(a) The controllability matrix
$U = [B\ \ AB\ \ A^2B\ \ A^3B]$
is of dimension $4 \times 8$. The $4 \times 4$ submatrix formed from the columns generated by the tangential thrust input, $[b_2\ \ Ab_2\ \ A^2b_2\ \ A^3b_2]$, has determinant
$\begin{vmatrix} 0 & 0 & 2\omega & 0 \\ 0 & 2\omega & 0 & -2\omega^3 \\ 0 & 1 & 0 & -4\omega^2 \\ 1 & 0 & -4\omega^2 & 0 \end{vmatrix} = -12\omega^4 \ne 0$
Hence $\rho(U) = 4$, and the system is completely controllable.
PROBLEMS
5.1 Figure P5.1 shows a control scheme for controlling the azimuth angle of a rotating antenna. The plant consists of an armature-controlled dc motor, with a dc generator used as an amplifier. The parameters of the plant are given below.
Motor torque constant, $K_T$ = 1.2 newton-m/amp
Motor back emf constant, $K_b$ = 1.2 V/(rad/sec)
Generator gain constant, $K_g$ = 100 V/amp
Motor to load gear ratio, $n = \theta_L/\theta_M$ = 1/2
$R_f$ = 21 Ω, $L_f$ = 5 H, $R_g$ = 9 Ω, $L_g$ = 0.06 H, $R_a$ = 10 Ω, $L_a$ = 0.04 H,
J = 1.6 newton-m/(rad/sec²), B = 0.04 newton-m/(rad/sec); motor inertia and friction are negligible.
Taking physically meaningful and measurable variables as state variables, derive a state model for the system.
(Fig. P5.1 Schematic: field circuit $R_f$, $L_f$ with input u; generator $e_g$, $R_g$, $L_g$; motor armature $R_a$, $L_a$ with back emf $e_b$; motor shaft $\theta_M$ geared to load $\theta_L$ with inertia J and friction B.)
5.2 Figure P5.2 shows a position control system with state variable feedback. The plant consists of a field-controlled dc motor with a dc amplifier. The parameters of the plant are given below.
Amplifier gain, $K_A$ = 50 volt/volt
Motor field resistance, $R_f$ = 99 Ω
Motor field inductance, $L_f$ = 20 H
Motor torque constant, $K_T$ = 10 newton-m/amp
Moment of inertia of load, J = 0.5 newton-m/(rad/sec²)
Coefficient of viscous friction of load, B = 0.5 newton-m/(rad/sec)
Motor inertia and friction are negligible.
Taking $x_1 = \theta$, $x_2 = \dot{\theta}$, and $x_3 = i_f$ as the state variables, $u = e_f$ as the input, and $y = \theta$ as the output, derive a state variable model for the plant.
(Fig. P5.2 Schematic: reference r(t) compared with potentiometer feedback $x_1 = \theta$; amplifier $K_A$ drives the field circuit $R_f$, $L_f$ carrying $x_3 = i_f$; load J, B; tachogenerator feedback $x_2 = \dot{\theta}$; R = 1 Ω.)
5.3 Figure P5.3 shows the block diagram of a motor-driven, single-link robot manipulator with position and velocity feedback. The drive motor is an armature-controlled dc motor; $e_a$ is the armature voltage, $i_a$ is the armature current, $\theta_M$ is the motor shaft position, and $\dot{\theta}_M$ is the motor shaft velocity. $\theta_L$ is the position of the robot arm.
Taking $\theta_M$, $\dot{\theta}_M$ and $i_a$ as state variables, derive a state model for the feedback system.
(Fig. P5.3 Block diagram with blocks 38, 1/(2s + 21), 1/(2s + 1), 1/s, 1/20 and feedback gains $k_1$, $k_2$, 0.5; signals $e_a$, $i_a$, $\theta_M$, $\theta_L$ and reference $\theta_R$.)
5.4 Figure P5.4 shows the block diagram of a speed control system with state variable feedback (blocks $K_c$, $\frac{1}{sL_a + R_a}$, $K_T$, $\frac{1}{Js + B}$; feedback paths $K_b$, $K_t$, $k_1$, $k_2$; signals $e_r$, $e_c$, $e_a$, $i_a$). The drive motor is an armature-controlled dc motor with armature resistance $R_a$, armature inductance
La, motor torque constant KT, inertia referred to motor shaft J, viscous friction coefficient referred
to motor shaft B, back emf constant Kb, and tachogenerator constant Kt. The applied armature
voltage is controlled by a three-phase full-converter. We have assumed a linear relationship
between the control voltage ec and the armature voltage ea; er is the reference voltage corre-
sponding to the desired speed.
Taking $x_1 = \omega$ (speed) and $x_2 = i_a$ (armature current) as the state variables, $u = e_r$ as the input, and $y = \omega$ as the output, derive a state variable model for the feedback system.
5.5 Consider the system
$\dot{x} = \begin{bmatrix} -3 & 1 \\ -2 & 0 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u$
$y = [1\ \ 0]x$
A similarity transformation is defined by
$\bar{x} = Px = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}x$
(a) Express the state model in terms of the states $\bar{x}(t)$.
(b) Draw state diagrams in signal-flow graph form for the state models in x(t) and $\bar{x}(t)$.
(c) Show by Mason's gain formula that the transfer functions for the two state diagrams in (b) are equal.
5.6 Consider a double-integrator plant described by the differential equation
$\frac{d^2\theta(t)}{dt^2} = u(t)$
(a) Develop a state equation for this system with u as the input, and $\theta$ and $\dot{\theta}$ as the state variables $x_1$ and $x_2$, respectively.
(b) A similarity transformation is defined as
$x = P\bar{x} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\bar{x}$
Express the state equation in terms of the states $\bar{x}(t)$.
(c) Show that the eigenvalues of the system matrices of the two state equations in (a) and (b) are equal.
Using the Laplace transform technique, transform the state equation into a set of linear algebraic equations of the form
$X(s) = G(s)x^0 + H(s)U(s)$
5.8 Give a block diagram for the programming of the system of Problem 5.7 on an analog computer.
5.9 The state diagram of a linear system is shown in Fig. P5.9. Assign the state variables, and write
the dynamic equations of the system.
[Fig. P5.9 State diagram: inputs u1 and u2 each drive a chain of two integrators (initial conditions y1(0), y2(0)) with input gains 3, producing outputs y1 and y2; cross-coupling branches with gains −1, 2, 3 and 4 connect the two chains]
5.14 Construct state models for the systems of Fig. P5.14a and Fig. P5.14b, taking outputs of simple
lag blocks as state variables.
[Fig. P5.14 (a) and (b): feedback interconnections of simple lag blocks 1/(s + 1) and 1/(s + 2) and an integrator 1/s, with summing junctions as shown in the original figure]
5.15 Derive a state model for the two-input, two-output feedback control system shown in Fig. P5.15.
Take outputs of simple lags as state variables.
[Fig. P5.15 Two-input, two-output system: r1 drives gain K1 and lag 1/(s + 1) to produce y1; r2 drives gain K2 and lag 4/(s + 2) to produce y2; cross-coupling through lags 5/(s + 5) and 0.4/(s + 0.5) closes the feedback loops]
5.16 Construct state models for the following transfer functions. Obtain different canonical form for
each system.
$$\text{(i)}\ \frac{s+3}{s^2+3s+2} \qquad \text{(ii)}\ \frac{5}{(s+1)^2(s+2)} \qquad \text{(iii)}\ \frac{s^3+8s^2+17s+8}{(s+1)(s+2)(s+3)}$$
Give block diagrams for the analog computer simulation of these transfer functions.
5.17 Construct state models for the following differential equations. Obtain a different canonical form
for each system.
$$\text{(i)}\ \ddot{y} + 3\dot{y} + 2y = \dot{u} + u \qquad \text{(ii)}\ \dddot{y} + 6\ddot{y} + 11\dot{y} + 6y = u$$
$$\text{(iii)}\ \dddot{y} + 6\ddot{y} + 11\dot{y} + 6y = \dddot{u} + 8\ddot{u} + 17\dot{u} + 8u$$
5.18 Derive two state models for the system with transfer function
$$\frac{Y(s)}{U(s)} = \frac{50(1 + s/5)}{s(1 + s/2)(1 + s/50)}$$
(a) One for which the system matrix is a companion matrix.
(b) One for which the system matrix is diagonal.
5.19 (a) Obtain state variable model in Jordan canonical form for the system with transfer function
$$\frac{Y(s)}{U(s)} = \frac{2s^2 + 6s + 5}{(s+1)^2(s+2)}$$
(b) Find the response y(t) to a unit-step input using the state variable model in (a).
(c) Give a block diagram for analog computer simulation of the transfer function.
5.20 Find the eigenvalues and eigenvectors for the following matrices:
$$\text{(i)}\ \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix} \qquad \text{(ii)}\ \begin{bmatrix} -3 & 2 \\ -1 & 0 \end{bmatrix} \qquad \text{(iii)}\ \begin{bmatrix} 0 & 1 & 0 \\ 3 & 0 & 2 \\ -12 & -7 & -6 \end{bmatrix}$$
5.21 (a) If l1, l2, …, ln are distinct eigenvalues of
$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}$$
prove that the matrix
$$P = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \cdots & \lambda_n \\ \lambda_1^2 & \lambda_2^2 & \cdots & \lambda_n^2 \\ \vdots & & & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1} \end{bmatrix}$$
transforms A into Jordan canonical form.
(b) Using the result in (a), find the eigenvalues and eigenvectors of the following matrix:
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -24 & -26 & -9 \end{bmatrix}$$
5.22 Consider the matrix
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -4 & -3 \end{bmatrix}$$
(a) Suggest a transformation matrix P such that L = P–1AP is in Jordan canonical form.
(b) Matrix L in (a) has complex elements. Real arithmetic is often preferable, and can be
achieved by further transformation. Suggest a transformation matrix Q such that Q–1LQ has
all real elements.
5.23 Given the system
$$\dot{x} = \begin{bmatrix} -4 & 3 \\ -6 & 5 \end{bmatrix} x = Ax$$
Determine eigenvalues and eigenvectors of matrix A, and use these results to find the state
transition matrix.
5.24 Using Laplace transform method, find the matrix exponential eAt for
$$\text{(a)}\ A = \begin{bmatrix} 0 & -3 \\ 1 & -4 \end{bmatrix} \qquad \text{(b)}\ A = \begin{bmatrix} 0 & 1 \\ -3 & -4 \end{bmatrix}$$
5.25 Using the Cayley–Hamilton technique, find eAt for
$$\text{(a)}\ A = \begin{bmatrix} 0 & 1 \\ -6 & -5 \end{bmatrix} \qquad \text{(b)}\ A = \begin{bmatrix} 0 & 2 \\ -2 & -4 \end{bmatrix}$$
5.26 Given the system
$$\dot{x} = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix} x + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u$$
(a) Obtain a state diagram in signal-flow graph form.
(b) From the signal-flow graph, determine the state equation in the form
X(s) = G(s)x(0) + H(s)U(s)
(c) Using inverse Laplace transformation, obtain the
(i) zero-input response to initial condition
x(0) = [x10 x 20]T;
(ii) zero-state response to unit-step input.
5.27 A linear time-invariant system is described by the following state model:
$$\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix} u; \qquad y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x$$
Diagonalize the coefficient matrix of the state model using a similarity transformation, and from
there obtain the explicit solutions for the state vector and output when the control force u is a unit-
step function and the initial state vector is
x(0) = [0 0 2]T
5.28 Consider the system
$$\dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u; \quad x(0) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} x$$
(a) Determine the stability of the system.
(b) Find the output response of the system to unit-step input.
5.29 Find the response of the system
$$\dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix} x + \begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix} u; \quad x(0) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}; \qquad y = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} x$$
to the following input:
$$u(t) = \begin{bmatrix} u_1(t) \\ u_2(t) \end{bmatrix} = \begin{bmatrix} \mu(t) \\ e^{-3t}\mu(t) \end{bmatrix}; \quad \mu(t)\ \text{is the unit-step function.}$$
5.30 Figure P5.30 shows the block diagram of a control system with state variable feedback and
feedforward control. The plant model is
$$\dot{x} = \begin{bmatrix} -3 & 2 \\ 4 & -5 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u; \qquad y = \begin{bmatrix} 0 & 1 \end{bmatrix} x$$
(a) Derive a state model for the feedback system.
(b) Find the output y(t) of the feedback system to a unit-step input r(t); the initial state is assumed
to be zero.
[Fig. P5.30 Feedback system: reference r passes through feedforward gain 7; state-feedback terms 3·x1 and 1.5·x2 (y = x2) are summed and subtracted to form the plant input u]
If $x(0) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$, then $x(t) = \begin{bmatrix} e^{-t} \\ -e^{-t} \end{bmatrix}$.
Find e^{At} and hence A.
5.35 Show that the pair {A, c} is completely observable for all values of ai’s.
$$A = \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_n \\ 1 & 0 & \cdots & 0 & -a_{n-1} \\ 0 & 1 & \cdots & 0 & -a_{n-2} \\ \vdots & & & & \vdots \\ 0 & 0 & \cdots & 1 & -a_1 \end{bmatrix}; \qquad c = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}$$
5.36 Show that the pair {A, b} is completely controllable for all values of ai’s.
$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}; \qquad b = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$
What can we say about controllability and observability—without making any further calculations?
5.38 Determine the controllability and observability properties of the following systems:
$$\text{(i)}\ A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix};\ b = \begin{bmatrix} 1 \\ 0 \end{bmatrix};\ c = \begin{bmatrix} 1 & -1 \end{bmatrix}$$
$$\text{(ii)}\ A = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix};\ b = \begin{bmatrix} 2 \\ 5 \end{bmatrix};\ c = \begin{bmatrix} 0 & 1 \end{bmatrix}$$
$$\text{(iii)}\ A = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix};\ B = \begin{bmatrix} 1 & 0 \\ 1 & 2 \\ 2 & 1 \end{bmatrix};\ C = \begin{bmatrix} 1 & 1 & 2 \\ 3 & 1 & 5 \end{bmatrix}$$
$$\text{(iv)}\ A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -2 & -3 \end{bmatrix};\ b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix};\ c = \begin{bmatrix} 10 & 0 & 0 \end{bmatrix}$$
$$\text{(v)}\ A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -3 \\ 0 & 1 & -4 \end{bmatrix};\ b = \begin{bmatrix} 40 \\ 10 \\ 0 \end{bmatrix};\ c = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$$
5.39 The following models realize the transfer function G(s) = 1/(s + 1).
$$\text{(i)}\ A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix};\ b = \begin{bmatrix} 1 \\ 1 \end{bmatrix};\ c = \begin{bmatrix} 0 & 1 \end{bmatrix}$$
$$\text{(ii)}\ A = \begin{bmatrix} -1 & 0 \\ 0 & -3 \end{bmatrix};\ b = \begin{bmatrix} 1 \\ 1 \end{bmatrix};\ c = \begin{bmatrix} 1 & 0 \end{bmatrix}$$
$$\text{(iii)}\ A = \begin{bmatrix} -2 & 0 \\ 0 & -1 \end{bmatrix};\ b = \begin{bmatrix} 0 \\ 1 \end{bmatrix};\ c = \begin{bmatrix} 0 & 1 \end{bmatrix}$$
Investigate the controllability and observability properties of these models.
Find a state variable model, for the given transfer function, which is both controllable and
observable.
5.40 Consider the systems
$$\text{(i)}\ A = \begin{bmatrix} 0 & -2 \\ 1 & -3 \end{bmatrix};\ b = \begin{bmatrix} 1 \\ 1 \end{bmatrix};\ c = \begin{bmatrix} 0 & 1 \end{bmatrix}$$
$$\text{(ii)}\ A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix};\ b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix};\ c = \begin{bmatrix} 4 & 5 & 1 \end{bmatrix}$$
Determine the transfer function in each case. What can we say about controllability and
observability properties—without making any further calculations?
5.41 Consider the system
$$\dot{x} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & -2 & 1 \\ 0 & 0 & -1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \\ -2 \end{bmatrix} u; \qquad y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x$$
(a) Find the eigenvalues of A and from there determine the stability of the system.
(b) Find the transfer function model and from there determine the stability of the system.
(c) Are the two results the same? If not, why?
5.42 Given a transfer function
$$G(s) = \frac{10}{s(s+1)} = \frac{Y(s)}{U(s)}$$
Construct the following three different state models for this system:
(a) One which is both controllable and observable.
(b) One which is controllable but not observable.
(c) One which is observable but not controllable.
5.43 Prove that the transfer function
G(s) = Y(s)/U(s)
of the system
ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t) + du(t)
is invariant under the state transformation x(t) = Px̄(t); P is a constant nonsingular matrix.
Chapter 6
State Variable Analysis of
Digital Control Systems
6.1 INTRODUCTION
In the previous chapter of this book, we treated in considerable detail, the analysis of linear continuous-
time systems using state variable methods. In this chapter, we give a condensed review of the same
methods for linear discrete-time systems. Since the theory of linear discrete-time systems—very
closely—parallels the theory of linear continuous-time systems, many of the results are similar. For this
reason, the comments in this chapter are brief, except in those cases where the results for discrete-time
systems deviate markedly from the continuous-time situation. For the same reason, many proofs are
omitted.
We will be mostly concerned with Single-Input, Single-Output (SISO) system configurations of the
type shown in the block diagram of Fig. 6.1. The plant in the figure, is a physical process characterized
by continuous-time input and output variables. A digital computer is used to control the continuous-
time plant. The interface system that takes care of the communication between the digital computer
and the continuous-time plant consists of analog-to-digital (A/D) converter and digital-to-analog (D/A)
converter. In order to analyze such a system, it is often convenient to represent the continuous-time plant,
together with the D/A converter and the A/D converter, by an equivalent discrete-time system.
The discrete-time systems we will come across can, therefore, be classified into two types.
(i) Inherently discrete-time systems (digital processors), where it makes sense to consider the system
at discrete instants of time only, and what happens in between is irrelevant.
(ii) Discrete-time systems that result from considering continuous-time systems at discrete instants of
time only.
Equation (6.1a) below is called the state equation of the system, and Eqn. (6.1b) is called the output equation; the two equations together give the state variable model of the system:
x(k + 1) = Fx(k) + gu(k); x(0) ≜ x0 (6.1a)
y(k) = cx(k) + du(k) (6.1b)
6.2.1
In the study of linear time-invariant discrete-time equations, we may also apply the z-transform
techniques. Taking the z-transform of Eqns (6.1), we obtain:
zX(z) – zx0 = FX(z) + gU(z)
Y(z) = cX(z) + dU(z)
where X(z) ≜ Z[x(k)]; U(z) ≜ Z[u(k)]; Y(z) ≜ Z[y(k)].
Manipulation of these equations gives
(zI – F) X(z) = zx0 + gU(z); I is n × n identity matrix
or X(z) = (zI – F)–1 zx0 + (zI – F)–1 gU(z) (6.2a)
Y(z) = c(zI – F)–1 zx0 + [c(zI – F)–1 g + d] U(z) (6.2b)
Equations (6.2) are algebraic equations. If x0 and U(z) are known, X(z) can be computed from these
equations.
In the case of zero initial state (i.e., x0 = 0), the input-output behavior of the system (6.1) is determined
entirely by the transfer function
$$\frac{Y(z)}{U(z)} = G(z) = c(zI - F)^{-1}g + d \tag{6.3a}$$
$$= c\,\frac{(zI - F)^{+}g}{|zI - F|} + d \tag{6.3b}$$
where (zI – F)+ = adjoint of the matrix (zI – F)
|zI – F | = determinant of the matrix (zI – F)
|λI − F| is the characteristic polynomial of matrix F. The roots of this polynomial are the characteristic roots or eigenvalues of matrix F.
From Eqn. (6.3b), we observe that the characteristic polynomial of matrix F of the system (6.1) is the same as the denominator polynomial of the corresponding transfer function G(z). If there are no cancellations between the numerator and denominator polynomials of G(z) in Eqn. (6.3b), the eigenvalues of matrix F are the same as the poles of G(z).
In a later section, we shall see that for a completely controllable and observable state variable model, the
eigenvalues of matrix F are same as the poles of the corresponding transfer function.
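As a quick numerical illustration of Eqns (6.3), the following sketch computes G(z) from a state variable model and compares the eigenvalues of F with the poles of G(z). The particular F, g, c, d are illustrative values assumed for this example (Python with NumPy/SciPy), not data from the text.

```python
import numpy as np
from scipy import signal

# Illustrative state variable model (assumed values)
F = np.array([[0.0, 1.0],
              [-0.16, -1.0]])
g = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
d = np.array([[0.0]])

# G(z) = c(zI - F)^{-1} g + d; ss2tf returns numerator/denominator coefficients
num, den = signal.ss2tf(F, g, c, d)
print("denominator |zI - F|:", den)

# With no pole-zero cancellation, eigenvalues of F = poles of G(z)
print("eigenvalues of F:", np.sort(np.linalg.eigvals(F)))
print("poles of G(z)  :", np.sort(np.roots(den)))
```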
6.2.2
Canonical State Variable Models
In Chapters 2– 4, we have seen that transform-domain design techniques yield digital control algorithms
in the form of transfer functions of the form
$$D(z) = \frac{\beta_0 z^n + \beta_1 z^{n-1} + \cdots + \beta_{n-1}z + \beta_n}{z^n + \alpha_1 z^{n-1} + \cdots + \alpha_{n-1}z + \alpha_n} \tag{6.4}$$
where the coefficients αi and βi are real constant scalars. Equation (6.4) represents an nth-order digital
controller. Several different structures for realization of this controller—using delay elements, adders,
and multipliers—were presented in Section 3.4. Each of these realizations is a dynamic system with n
first-order dynamic elements—the unit delayers. We know that output of a first-order dynamic element
represents the state of that element. Therefore, each realization of Eqn. (6.4) is, in fact, a state diagram;
by labeling the unit-delayer outputs as state variables, we can obtain the state variable model.
In the following discussion, we shall use two of the structures presented in Section 3.4 for obtaining
canonical state variable models corresponding to the general transfer function
$$G(z) = \frac{Y(z)}{U(z)} = \frac{\beta_0 z^n + \beta_1 z^{n-1} + \cdots + \beta_{n-1}z + \beta_n}{z^n + \alpha_1 z^{n-1} + \cdots + \alpha_{n-1}z + \alpha_n} \tag{6.5}$$
Revisiting Section 3.4 at this stage will be helpful in our discussion.
[Fig. 6.2 First companion form realization of G(z): a chain of n unit delayers; feedback gains α1, ..., αn from the delayer outputs, and feedforward gains β0, β1, ..., βn summed to form y(k)]
Careful examination of Fig. 6.2 reveals that there are two paths from the output of each unit delayer to the system output: one path upward through the box labeled βi, and a second path down through the box labeled αi and thence through the box labeled β0. As a consequence,
y(k) = (βn − αnβ0)x1(k) + (βn−1 − αn−1β0)x2(k) + ⋯ + (β1 − α1β0)xn(k) + β0u(k) (6.6b)
The state and output equations (6.6), organized in vector-matrix form, are given below.
x(k + 1) = Fx(k) + gu(k) (6.7)
y(k) = cx(k) + du(k)
$$F = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\alpha_n & -\alpha_{n-1} & -\alpha_{n-2} & \cdots & -\alpha_1 \end{bmatrix}; \qquad g = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$
$$c = \begin{bmatrix} \beta_n - \alpha_n\beta_0 & \beta_{n-1} - \alpha_{n-1}\beta_0 & \cdots & \beta_1 - \alpha_1\beta_0 \end{bmatrix}; \qquad d = \beta_0$$
The matrix F in Eqns (6.7) has a very special structure—the coefficients of the denominator of the
transfer function preceded by minus signs form a string along the bottom row of the matrix. The rest of
the matrix is zero except for the ‘superdiagonal’ terms which are all unity. A matrix with this structure
is said to be in companion form. We call the state variable model (6.7) the first companion form1 state
model for the transfer function (6.5); another companion form follows.
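Before turning to the second form, here is a minimal sketch of how the first companion form (6.7) can be assembled mechanically from the coefficients of G(z) in Eqn. (6.5). The function name and argument conventions are ours, introduced only for illustration.

```python
import numpy as np

def first_companion(alpha, beta):
    """alpha = [a1, ..., an], beta = [b0, b1, ..., bn] of Eqn (6.5)."""
    n = len(alpha)
    F = np.zeros((n, n))
    F[:-1, 1:] = np.eye(n - 1)            # unit superdiagonal
    F[-1, :] = -np.asarray(alpha[::-1])   # bottom row: -an, ..., -a1
    g = np.zeros((n, 1)); g[-1, 0] = 1.0
    b0 = beta[0]
    # c_i = b_{n-i} - a_{n-i} b0, as in Eqn (6.6b)
    c = np.array([[beta[n - i] - alpha[n - i - 1] * b0 for i in range(n)]])
    return F, g, c, b0

# Example: G(z) = (z + 3)/(z^2 + 3z + 2)  ->  alpha = [3, 2], beta = [0, 1, 3]
F, g, c, d = first_companion([3.0, 2.0], [0.0, 1.0, 3.0])
print(F, g, c, d, sep="\n")   # c = [[3, 1]], d = 0
```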
[Fig. 6.3 Second companion form realization of G(z): feedforward gains βn, βn−1, ..., β1, β0 feed a chain of unit delayers with outputs x1(k), ..., xn−1(k), xn(k); feedback gains αn, αn−1, ..., α1 close the loops; y(k) is taken after the final summing junction]
¹ The pair (F, g) of Eqns (6.7) is completely controllable for all values of αi's (refer to Problem 5.36).
In the structure of Fig. 6.3, we identify the output of each unit delayer with a state variable, starting at the left and proceeding to the right. The corresponding difference equations are
xn(k + 1) = xn−1(k) − α1(xn(k) + β0u(k)) + β1u(k)
xn−1(k + 1) = xn−2(k) − α2(xn(k) + β0u(k)) + β2u(k)
and so on; collecting these equations in vector-matrix form gives the F and g of the second companion form, with
c = [0 0 ⋯ 0 1]; d = β0
Comparing the F, g and c matrices of the second companion form² with those of the first, we observe that the F, g, and c matrices of one companion form correspond to the transposes of the F, c, and g matrices, respectively, of the other.
Both the companion forms of state variable models play an important role in pole-placement design
through state feedback. This will be discussed in Chapter 7.
$$G(z) = \beta_0 + \frac{\beta_1' z^{n-1} + \beta_2' z^{n-2} + \cdots + \beta_n'}{(z-\lambda_1)(z-\lambda_2)\cdots(z-\lambda_n)} = \beta_0 + G'(z)$$
$$= \beta_0 + \frac{r_1}{z-\lambda_1} + \frac{r_2}{z-\lambda_2} + \cdots + \frac{r_n}{z-\lambda_n} \tag{6.9}$$
The coefficients ri (i = 1, 2, ..., n) are the residues of the transfer function G′(z) at the corresponding poles z = λi (i = 1, 2, ..., n). A parallel realization structure of the transfer function (6.9) is shown in Fig. 6.4.
[Fig. 6.4 Parallel realization of G(z): n first-order branches, each a unit delayer with feedback λi and output xi(k), weighted by the residue ri; the branch outputs and the direct path β0 are summed to form y(k)]
Identifying the outputs of the delayers with the state variables results in the following state and output
equations:
x(k + 1) = Lx(k) + gu(k) (6.10)
y(k) = cx(k) + du(k)
$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}; \qquad g = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$
$$c = \begin{bmatrix} r_1 & r_2 & \cdots & r_n \end{bmatrix}; \qquad d = \beta_0$$
It is observed that for this canonical state variable model, the matrix L is a diagonal matrix with the poles
of G(z) as its diagonal elements.
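Numerically, the diagonal form (6.10) can be produced directly from a transfer function by a partial-fraction routine. A sketch follows for the distinct-pole case; scipy.signal.residue computes residues of any rational function (here in z), and the example numerator and denominator are assumed values.

```python
import numpy as np
from scipy import signal

# Assumed example: G(z) = (z + 3)/(z^2 + 3z + 2), strictly proper (b0 = 0)
num = [1.0, 3.0]
den = [1.0, 3.0, 2.0]

r, poles, k = signal.residue(num, den)   # residues, poles, direct term

Lam = np.diag(poles)               # Lambda: the (distinct) poles on the diagonal
g = np.ones((len(poles), 1))       # g = [1 1 ... 1]^T
c = r.reshape(1, -1)               # c = [r1 r2 ... rn]
d = k[0] if len(k) else 0.0        # b0 (nonzero only if deg num = deg den)
print(Lam, g, c, d, sep="\n")
```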
Consider now the case in which G(z) has a pole of multiplicity m at z = λ1, and all other poles are distinct. Performing the partial fraction expansion for this case, we get
$$\frac{Y(z)}{U(z)} = G(z) = \frac{\beta_0 z^n + \beta_1 z^{n-1} + \cdots + \beta_{n-1}z + \beta_n}{z^n + \alpha_1 z^{n-1} + \cdots + \alpha_{n-1}z + \alpha_n} = \beta_0 + \frac{\beta_1' z^{n-1} + \beta_2' z^{n-2} + \cdots + \beta_n'}{z^n + \alpha_1 z^{n-1} + \cdots + \alpha_n}$$
$$= \beta_0 + \frac{\beta_1' z^{n-1} + \beta_2' z^{n-2} + \cdots + \beta_n'}{(z-\lambda_1)^m(z-\lambda_{m+1})\cdots(z-\lambda_n)}$$
$$= \beta_0 + H_1(z) + H_{m+1}(z) + \cdots + H_n(z) \tag{6.11a}$$
$$\text{where}\quad H_{m+1}(z) = \frac{r_{m+1}}{z-\lambda_{m+1}},\ \ldots,\ H_n(z) = \frac{r_n}{z-\lambda_n}, \tag{6.11b}$$
$$\text{and}\quad H_1(z) = \frac{r_{11}}{(z-\lambda_1)^m} + \frac{r_{12}}{(z-\lambda_1)^{m-1}} + \cdots + \frac{r_{1m}}{(z-\lambda_1)} \tag{6.11c}$$
A realization of H1(z) is shown in Fig. 6.5. Other terms of Eqn. (6.11a) may be realized as per Fig. 6.4.
Fig. 6.5 Realization of H1(z)
Identifying the outputs of the delayers with the state variables results in the following state and output
equations:
x(k + 1) = Lx(k) + gu(k)
(6.12)
y(k) = cx(k) + du(k)
with the m × m Jordan block:
$$\Lambda = \begin{bmatrix} \lambda_1 & 1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & \lambda_1 & 1 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & & & & & & & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 & \lambda_{m+1} & \cdots & 0 \\ \vdots & & & & & & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & \lambda_n \end{bmatrix}; \qquad g = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$
Systems that consist of an interconnection of a discrete-time system and a continuous-time system are
frequently encountered. An example of particular interest occurs when a digital computer is used to
control a continuous-time plant. Whenever such interconnections exist, there must be some type of
interface system that takes care of the communication between the discrete-time and continuous-time
systems. In the system of Fig. 6.1, the interface function is performed by D/A and A/D converters.
Simple models of the interface actions of D/A and A/D converters have been developed in Chapter 2.
A brief review is in order here.
A simple model of A/D converter is shown in Fig. 6.6. A continuous-time function f(t), t ≥ 0, is the input,
and the sequence of real numbers f (k), k = 0, 1, 2, ..., is the output; the following relation holds between
input and output:
f (k) = f(t = kT); T is the time interval between samples (6.13a)
A simple model of D/A converter is shown in Fig. 6.7. A sequence of numbers f (k), k = 0, 1, 2, ..., is the
input, and the continuous-time function f +(t), t ≥ 0, is the output; the following relation holds between
input and output:
f⁺(t) = f(k); kT ≤ t < (k + 1)T (6.13b)
[Fig. 6.6 Model of the A/D converter: a sampler with sampling interval T]
[Fig. 6.7 Model of the D/A converter: a zero-order hold converting the sequence f(k) into the piecewise-constant signal f⁺(t)]
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}\,b\,u^+(\tau)\,d\tau \tag{6.15}$$
Since we use a ZOH (refer to Eqn. (6.13b)),
u⁺(t) = u(kT); kT ≤ t < (k + 1)T; k = 0, 1, 2, ...
Fig. 6.8
$$g = \int_0^T e^{A(T-\sigma)}\,b\,d\sigma$$
With θ = T − σ, we get
$$g = \int_0^T e^{A\theta}\,b\,d\theta$$
If we are interested in the value of x(t) (or y(t)) between sampling instants, we first solve for x(kT) for any k using Eqn. (6.17), and then use Eqn. (6.16) to determine x(t) for kT ≤ t < (k + 1)T.
Since we have a sampler in the configuration of Fig. 6.8 (refer to Eqn. (6.13a)), we have, from Eqn. (6.14b),
y(kT) = cx(kT) + du(kT)
State description of the equivalent discrete-time system of Fig. 6.8 is, therefore, of the form
x(k + 1) = Fx(k) + gu(k) (6.18a)
y(k) = cx(k) + du(k) (6.18b)
where F = eAT (6.18c)
$$g = \int_0^T e^{A\theta}\,b\,d\theta \tag{6.18d}$$
There are several methods available for computing eAT. Some of these methods have been discussed in
the earlier chapter. Standard computer programs based on these methods are available.
In the following, we present an alternative technique of computing eAT. The virtues of this technique are
its simplicity and the ease of programming.
The infinite series expansion for F = eAT is
$$F = e^{AT} = I + AT + \frac{1}{2!}A^2T^2 + \frac{1}{3!}A^3T^3 + \cdots = \sum_{i=0}^{\infty}\frac{A^iT^i}{i!}; \quad A^0 = I \tag{6.19}$$
For a finite T, this series is uniformly convergent (Section 5.7). It is, therefore, possible to evaluate F within prescribed accuracy. If the series is truncated at i = N, then we may write the finite series sum as
$$F \cong \sum_{i=0}^{N}\frac{A^iT^i}{i!} \tag{6.20}$$
which approximates the infinite series. The larger the N, the better is the approximation. We evaluate F by a series in the form
$$F = I + AT\left(I + \frac{AT}{2}\left\{I + \frac{AT}{3}\left[I + \cdots + \frac{AT}{N-1}\left(I + \frac{AT}{N}\right)\right]\right\}\right) \tag{6.21}$$
which has better numerical properties than the direct series of powers. Starting with the innermost factor,
this nested product expansion lends itself easily to digital programming. The empirical relation giving
the number of terms, N, is
N = min {3 || AT || + 6, 100} (6.22)
where || AT || is a norm of the matrix AT. There are several different forms of matrix norms commonly
used. Any one of them may be used in Eqn. (6.22). Two forms of matrix norms are defined in Section 5.2.
The relation (6.22) assumes that no more than 100 terms are included. The series for e^{AT} will then be accurate to at least six significant figures.
The integral in Eqn. (6.18d) can be evaluated term by term, to give
$$g = \left[\int_0^T \left(I + A\theta + \frac{1}{2!}A^2\theta^2 + \cdots\right)d\theta\right]b = \sum_{i=0}^{\infty}\frac{A^iT^{i+1}}{(i+1)!}\,b \tag{6.23}$$
$$= \left(\sum_{i=0}^{\infty}\frac{A^iT^i}{(i+1)!}\right)Tb = \left(I + \frac{AT}{2!} + \frac{A^2T^2}{3!} + \cdots\right)Tb = (e^{AT} - I)A^{-1}b \tag{6.24}$$
The transition from Eqn. (6.23) to (6.24) is possible only for a nonsingular matrix A. For a singular A,
we may evaluate g from Eqn. (6.23) by the approximation technique described above. Since the series
expansion for g converges faster than that for F, it suffices to determine N for F from Eqn. (6.22) and
apply the same value for g.
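A sketch of the nested-series evaluation of F and g per Eqns (6.20)–(6.23), with the number of terms N picked by the empirical rule (6.22). This is our own direct coding of the recursions, shown for illustration; in practice scipy.linalg.expm serves the same purpose.

```python
import numpy as np

def discretize_series(A, b, T):
    """F = e^{AT} by the nested form (6.21); g by the series (6.23)."""
    n = A.shape[0]
    I = np.eye(n)
    N = int(min(3 * np.linalg.norm(A * T, 1) + 6, 100))   # Eqn (6.22)
    # F = I + AT(I + AT/2 (I + AT/3 (... (I + AT/N))))
    F = I.copy()
    for i in range(N, 1, -1):
        F = I + (A * T / i) @ F
    F = I + (A * T) @ F
    # Psi = I + AT/2! + (AT)^2/3! + ...; then g = Psi T b, per Eqn (6.23)
    Psi = I.copy()
    for i in range(N, 1, -1):
        Psi = I + (A * T / (i + 1)) @ Psi
    Psi = I + (A * T / 2) @ Psi
    g = Psi @ b * T   # for nonsingular A this equals (e^{AT} - I) A^{-1} b, Eqn (6.24)
    return F, g
```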
Example 6.1
Figure 6.9 shows the block diagram of a digital positioning system. Defining the state variables as
x1(t) = θ(t), x2(t) = θ̇(t),
the state variable model of the plant becomes
ẋ(t) = Ax(t) + bu⁺(t)
y(t) = cx(t) (6.25)
$$A = \begin{bmatrix} 0 & 1 \\ 0 & -5 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad c = \begin{bmatrix} 1 & 0 \end{bmatrix}$$
[Fig. 6.9 Block diagram of the digital positioning system]
Here we apply the Cayley–Hamilton technique to evaluate the state transition matrix eAt.
Eigenvalues of matrix A are given by
$$|\lambda I - A| = \begin{vmatrix} \lambda & -1 \\ 0 & \lambda + 5 \end{vmatrix} = 0$$
Therefore, λ1 = 0, λ2 = −5.
Since A is of second order, the polynomial g(λ) will be of the form (refer to Eqns (5.98))
g(λ) = β0 + β1λ
The coefficients β0 and β1 are evaluated from the following equations:
1 = β0
e^{−5t} = β0 − 5β1
The result is β0 = 1, β1 = (1 − e^{−5t})/5. Hence
$$e^{At} = \beta_0 I + \beta_1 A = \begin{bmatrix} 1 & \frac{1}{5}(1 - e^{-5t}) \\ 0 & e^{-5t} \end{bmatrix}$$
The equivalent discrete-time plant with input u(k) and output q (k) (refer to Fig. 6.9) is described by the
equations
x(k + 1) = Fx(k) + gu(k)
(6.26)
y(k) = cx(k)
$$F = e^{AT} = \begin{bmatrix} 1 & \frac{1}{5}(1 - e^{-5T}) \\ 0 & e^{-5T} \end{bmatrix}$$
$$g = \int_0^T e^{A\theta}\,b\,d\theta = \begin{bmatrix} \int_0^T \frac{1}{5}(1 - e^{-5\theta})\,d\theta \\[2pt] \int_0^T e^{-5\theta}\,d\theta \end{bmatrix} = \begin{bmatrix} \frac{1}{5}\left(T - \frac{1}{5} + \frac{1}{5}e^{-5T}\right) \\[2pt] \frac{1}{5}(1 - e^{-5T}) \end{bmatrix}$$
[Fig. 6.10 Signal flow graph of the digital processor: input e, gains k1, k2, k3, and two unit delayers (z⁻¹) with outputs x3 and x4 forming the control u]
The processor input is derived from the reference input and the position feedback (Fig. 6.9):
e(k) = r(k) – x1(k) (6.28)
From Eqns (6.26)–(6.28), we get the following state variable model for the feedback system of Fig. 6.9.
$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \\ x_3(k+1) \\ x_4(k+1) \end{bmatrix} = \begin{bmatrix} 1 - 0.0043k_1 & 0.0787 & 0 & 0.0043 \\ -0.0787k_1 & 0.6065 & 0 & 0.0787 \\ -k_3 & 0 & 0 & 0 \\ -(k_2 + k_1) & 0 & 1 & 1 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \\ x_3(k) \\ x_4(k) \end{bmatrix} + \begin{bmatrix} 0.0043k_1 \\ 0.0787k_1 \\ k_3 \\ k_2 + k_1 \end{bmatrix} r(k) \tag{6.29}$$
$$y(k) = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} x(k)$$
6.4
Consider a state equation of a single-input system which includes delay in control action:
ẋ(t) = Ax(t) + bu⁺(t − tD) (6.30)
where x is the n × 1 state vector, u⁺ is the scalar input, tD is the dead-time, and A and b are, respectively, n × n and n × 1 real constant matrices.
The solution of Eqn. (6.30) with t0 as initial time is
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}\,b\,u^+(\tau - t_D)\,d\tau$$
$$x(kT+T) = e^{AT}x(kT) + \int_0^T e^{A\sigma}\,b\,u^+(kT + T - t_D - \sigma)\,d\sigma \tag{6.31}$$
If N is the largest integer number of sampling periods in tD, we can write
tD = NT + ΔT; 0 ≤ Δ < 1; m ≜ 1 − Δ (6.32)
Substituting in Eqn. (6.31), we get
$$x(kT+T) = e^{AT}x(kT) + \int_0^T e^{A\sigma}\,b\,u^+(kT + T - NT - \Delta T - \sigma)\,d\sigma \tag{6.33}$$
Since we use a ZOH, u⁺ is piecewise constant. The nature of the integral in Eqn. (6.33), with respect to the variable σ, becomes clear from the sketch of the piecewise-constant input u⁺ over a segment of the time axis near t = kT − NT (Fig. 6.11). The integral runs, for σ from 0 to T, which corresponds to t running from kT − NT + mT backward to kT − NT − T + mT. Over this period, the control first takes on the value u(kT − NT) and then the value u(kT − NT − T). Therefore, we can break the integral in Eqn. (6.33) into two parts, as follows:
$$x(kT+T) = e^{AT}x(kT) + \left[\int_0^{mT} e^{A\sigma}\,b\,d\sigma\right]u(kT - NT) + \left[\int_{mT}^{T} e^{A\sigma}\,b\,d\sigma\right]u(kT - NT - T)$$
$$= Fx(kT) + g_1u(kT - NT - T) + g_2u(kT - NT) \tag{6.34a}$$
[Fig. 6.11 Piecewise-constant input u⁺(t) near t = kT − NT: u⁺ equals u(kT − NT − T) for t < kT − NT and u(kT − NT) for kT − NT ≤ t < kT − NT + T; the integration variable runs from σ = T (at t = kT − NT − T + mT) through σ = mT (at t = kT − NT) to σ = 0 (at t = kT − NT + mT)]
xn+N+1(k) = u(k − 1)
The augmented state equation now becomes
$$\begin{bmatrix} x(k+1) \\ x_{n+1}(k+1) \\ x_{n+2}(k+1) \\ \vdots \\ x_{n+N}(k+1) \\ x_{n+N+1}(k+1) \end{bmatrix} = \begin{bmatrix} F & g_1 & g_2 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} x(k) \\ x_{n+1}(k) \\ x_{n+2}(k) \\ \vdots \\ x_{n+N}(k) \\ x_{n+N+1}(k) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}u(k) \tag{6.36}$$
Example 6.2
In the following, we reconsider the tank fluid temperature control system discussed in Example 3.3
(refer to Fig. 3.17). The differential equation governing the tank fluid temperature was found to be
ẋ1(t) = −x1(t) + u(t − 1.5) (6.37)
where
x1(t) = θ(t) = tank fluid temperature;
u(t) = θi(t) = temperature of the incoming fluid (control temperature); and
tD = 1.5 sec.
Assume that the system is sampled with period T = 1 sec. From Eqn. (6.32), we have
N = 1, D = 0.5, m = 0.5
Equations (6.34b), (6.34d), and (6.34e) give
F = e^{−T} = e^{−1} = 0.3679
$$g_2 = \int_0^{0.5} e^{-\sigma}\,d\sigma = 1 - e^{-0.5} = 0.3935$$
$$g_1 = e^{-0.5}\int_0^{0.5} e^{-\theta}\,d\theta = e^{-0.5} - e^{-1} = 0.2387$$
The discrete-time model of the tank fluid temperature control system becomes (refer to Eqn. (6.34a))
x1(k + 1) = 0.3679 x1(k) + 0.2387 u(k – 2) + 0.3935 u(k – 1) (6.38)
Let us introduce two new states, defined below as
x2(k) = u(k – 2)
x3(k) = u(k – 1)
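A sketch reproducing the numbers of this example directly from the defining integrals in Eqn. (6.34): g2 integrates e^{As}b over [0, mT] and g1 over [mT, T].

```python
import numpy as np
from scipy.integrate import quad

T, m = 1.0, 0.5                 # T = 1 sec, Delta = 0.5, so m = 0.5
F = np.exp(-T)                  # e^{AT} with A = -1 (scalar plant)

g2, _ = quad(lambda s: np.exp(-s), 0.0, m * T)   # multiplies u(kT - NT)
g1, _ = quad(lambda s: np.exp(-s), m * T, T)     # multiplies u(kT - NT - T)
print(round(F, 4), round(g1, 4), round(g2, 4))   # 0.3679 0.2387 0.3935
```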
6.5.2
In the following, we obtain the closed-form solution of state equation (6.41).
From Eqns (6.42a)–(6.42b), we obtain
x(2) = F[Fx(0) + gu(0)] + gu(1)
= F2x(0) + Fgu(0) + gu(1) (6.43)
From Eqns (6.43) and (6.42c), we get
x(3) = F[F2x(0) + Fgu(0) + gu(1)] + gu(2)
= F3x(0) + F2 gu(0) + Fgu(1) + gu(2)
By repeating this procedure, we obtain
$$x(k) = F^kx(0) + F^{k-1}gu(0) + F^{k-2}gu(1) + \cdots + F^0gu(k-1) = F^kx(0) + \sum_{i=0}^{k-1}F^{k-1-i}gu(i); \quad F^0 = I \tag{6.44}$$
Clearly x(k) consists of two parts: one representing the contribution of the initial state x(0), and the other
the contribution of the input u(i); i = 0, 1, 2, ..., (k – 1).
Notice that it is possible to write the solution of the homogeneous state equation
x(k + 1) = Fx(k); x(0) ≜ x0 (6.45a)
as x(k) = F^k x(0) (6.45b)
From Eqn. (6.45b), it is observed that the initial state x(0) at k = 0 is driven to the state x(k) at the sampling instant k. This transition in state is carried out by the matrix F^k. Due to this property, F^k is known as the state transition matrix, and is denoted by φ(k):
φ(k) = F^k; φ(0) = I (identity matrix) (6.46)
In the following, we discuss commonly used methods for evaluating state transition matrix in closed form.
Example 6.3
Consider the matrix
$$F = \begin{bmatrix} 0 & 1 \\ -0.16 & -1 \end{bmatrix}$$
For this F,
$$(zI - F)^{-1} = \begin{bmatrix} z & -1 \\ 0.16 & z+1 \end{bmatrix}^{-1} = \begin{bmatrix} \dfrac{z+1}{(z+0.2)(z+0.8)} & \dfrac{1}{(z+0.2)(z+0.8)} \\[6pt] \dfrac{-0.16}{(z+0.2)(z+0.8)} & \dfrac{z}{(z+0.2)(z+0.8)} \end{bmatrix}$$
$$= \begin{bmatrix} \dfrac{4/3}{z+0.2} + \dfrac{-1/3}{z+0.8} & \dfrac{5/3}{z+0.2} + \dfrac{-5/3}{z+0.8} \\[6pt] \dfrac{-0.8/3}{z+0.2} + \dfrac{0.8/3}{z+0.8} & \dfrac{-1/3}{z+0.2} + \dfrac{4/3}{z+0.8} \end{bmatrix}$$
Therefore,
$$\phi(k) = F^k = \mathcal{Z}^{-1}[(zI - F)^{-1}z] = \begin{bmatrix} \tfrac{4}{3}(-0.2)^k - \tfrac{1}{3}(-0.8)^k & \tfrac{5}{3}(-0.2)^k - \tfrac{5}{3}(-0.8)^k \\[4pt] -\tfrac{0.8}{3}(-0.2)^k + \tfrac{0.8}{3}(-0.8)^k & -\tfrac{1}{3}(-0.2)^k + \tfrac{4}{3}(-0.8)^k \end{bmatrix}$$
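A quick numerical check of this closed form (a sketch): compare it against direct matrix powers of F for the first few values of k.

```python
import numpy as np

F = np.array([[0.0, 1.0], [-0.16, -1.0]])

def phi(k):   # closed-form F^k obtained by the inverse z-transform above
    a, b = (-0.2) ** k, (-0.8) ** k
    return np.array([[ (4/3)*a - (1/3)*b,     (5/3)*a - (5/3)*b],
                     [(-0.8/3)*a + (0.8/3)*b, (-1/3)*a + (4/3)*b]])

for k in range(6):
    assert np.allclose(np.linalg.matrix_power(F, k), phi(k))
print("closed form agrees with direct matrix powers")
```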
F and Λ are similar matrices; there exists a nonsingular matrix P such that (refer to Eqns (5.22))
Λ = P⁻¹FP
Now
P⁻¹F^kP = P⁻¹[FF ⋯ F]P = P⁻¹[(PΛP⁻¹)(PΛP⁻¹) ⋯ (PΛP⁻¹)]P = Λ^k
Thus the matrices F^k and Λ^k are similar. Since Λ is diagonal, Λ^k is given by
$$\Lambda^k = \begin{bmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{bmatrix}$$
The state transition matrix F^k of matrix F with distinct eigenvalues λ1, λ2, ..., λn may, therefore, be evaluated using the following relation:
$$F^k = P\Lambda^kP^{-1} \tag{6.48}$$
where P is a transformation matrix that transforms F into the diagonal form (for the general case where matrix F has multiple eigenvalues, refer to [105]; also refer to Review Example 6.5 given at the end of this chapter).
$$F^k = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} (-1)^k & 0 \\ 0 & (-2)^k \end{bmatrix}\begin{bmatrix} 2 & 1 \\ -1 & -1 \end{bmatrix} = \begin{bmatrix} 2(-1)^k - (-2)^k & (-1)^k - (-2)^k \\ -2(-1)^k + 2(-2)^k & -(-1)^k + 2(-2)^k \end{bmatrix}$$
The solution of the nonhomogeneous state difference equation (6.41) is given by Eqn. (6.44). In terms of the state transition matrix φ(k), Eqn. (6.44) can be written in the form
$$x(k) = \phi(k)x(0) + \sum_{i=0}^{k-1}\phi(k-1-i)\,gu(i) \tag{6.49}$$
This equation is called the state transition equation; it describes the change of state relative to the initial conditions x(0) and the input u(k).
With
$$g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad x(0) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad u(k) = (-1)^k,$$
we get
$$y(k) = x_1(k) = 3(-1)^k - 2(-2)^k + \sum_{i=0}^{k-1}\left[(-1)^{k-1-i} - (-2)^{k-1-i}\right](-1)^i$$
$$= 3(-1)^k - 2(-2)^k + k(-1)^{k-1} - (-2)^{k-1}\sum_{i=0}^{k-1}\left(\tfrac{1}{2}\right)^i$$
Since⁴
$$\sum_{i=0}^{k-1}\left(\tfrac{1}{2}\right)^i = \frac{1-\left(\tfrac{1}{2}\right)^k}{1-\tfrac{1}{2}} = -2\left[\left(\tfrac{1}{2}\right)^k - 1\right],$$
we have
⁴ $\sum_{j=0}^{k} a^j = \dfrac{1 - a^{k+1}}{1 - a}$; a ≠ 1.
$$y(k) = 3(-1)^k - 2(-2)^k - k(-1)^k + (-2)^k\left[1 - \left(\tfrac{1}{2}\right)^k\right]$$
$$= 3(-1)^k - 2(-2)^k - k(-1)^k + (-2)^k - (-1)^k = (2-k)(-1)^k - (-2)^k$$
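The closed-form response just derived can be verified by running the state difference equation directly. A sketch follows, with F = [[0, 1], [−2, −3]], the matrix whose modal decomposition (eigenvalues −1, −2) was used above.

```python
import numpy as np

F = np.array([[0.0, 1.0], [-2.0, -3.0]])
g = np.array([0.0, 1.0])
x = np.array([1.0, 1.0])                  # x(0)

for k in range(10):
    y_closed = (2 - k) * (-1) ** k - (-2) ** k
    assert np.isclose(x[0], y_closed)     # y(k) = x1(k)
    x = F @ x + g * (-1) ** k             # u(k) = (-1)^k
print("recursion matches y(k) = (2 - k)(-1)^k - (-2)^k")
```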
6.6.1
For the linear system given by Eqns (6.50), if there exists an input u(k); k ∈ [0, N − 1], with N a finite positive integer, which transfers the initial state x(0) ≜ x0 to the state x1 at k = N, the state x0 is said to be controllable. If all initial states are controllable, the system is said to be completely controllable, or simply controllable. Otherwise, the system is said to be uncontrollable.
The following theorem gives a simple controllability test.
The necessary and sufficient condition for the system (6.50) to be completely controllable is that the n × n controllability matrix
$$U \triangleq \begin{bmatrix} g & Fg & F^2g & \cdots & F^{n-1}g \end{bmatrix} \tag{6.51}$$
has rank equal to n, i.e., ρ(U) = n.
$$x^1 - F^nx^0 = \begin{bmatrix} g & Fg & \cdots & F^{n-1}g \end{bmatrix}\begin{bmatrix} u(n-1) \\ u(n-2) \\ \vdots \\ u(0) \end{bmatrix} \tag{6.52}$$
Since g is an n × 1 matrix, we find that each of the matrices g, Fg, ..., F^{n−1}g is an n × 1 matrix. Therefore,
U = [g Fg ⋯ F^{n−1}g]
is an n × n matrix. If the rank of U is n, then for arbitrary states x0 and x1, there exists a sequence of unconstrained control signals u(0), u(1), ..., u(n − 1) that satisfies Eqn. (6.52). Hence, the condition that the rank of the controllability matrix is n gives a sufficient condition for complete controllability.
To prove that the condition ρ(U) = n is also a necessary condition for complete controllability, we assume that
ρ([g Fg ⋯ F^{n−1}g]) < n
The matrix U is, therefore, singular, and for arbitrary x0 and x1, a solution {u(0), u(1), ..., u(n − 1)} satisfying Eqn. (6.52) does not exist.
Let us attempt a solution of the form {u(0), u(1), ..., u(N − 1)}; N > n. This will amount to adding columns F^ng, F^{n+1}g, ..., F^{N−1}g to the U matrix. But by the Cayley–Hamilton theorem, F^j, j ≥ n, is a linear combination of F^{n−1}, ..., F, I (refer to Eqn. (5.98d)); therefore, the columns F^ng, F^{n+1}g, ..., F^{N−1}g add no new rank. Thus, if a state cannot be transferred to some other state in n sampling intervals, it cannot be transferred no matter how long the input sequence {u(0), u(1), ..., u(N − 1)}; N > n, is. Consequently, we find that the rank condition given by Eqn. (6.51) is a necessary and sufficient condition for complete controllability.
6.6.2
For the linear system given by Eqns (6.50), if the knowledge of the input u(k); k ∈ [0, N − 1], and the output y(k); k ∈ [0, N − 1], with N a finite positive integer, suffices to determine the state x(0) ≜ x0, the state x0 is said to be observable. If all initial conditions are observable, the system is said to be completely observable, or simply observable. Otherwise, the system is said to be unobservable.
The following theorem gives a simple observability test.
The necessary and sufficient condition for the system (6.50) to be completely observable is that the n × n observability matrix
$$V = \begin{bmatrix} c \\ cF \\ cF^2 \\ \vdots \\ cF^{n-1} \end{bmatrix} \tag{6.53}$$
has rank equal to n, i.e., ρ(V) = n.
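Both rank tests are a few lines numerically. The sketch below builds U of Eqn. (6.51) and V of Eqn. (6.53); F is the matrix of Example 6.3, while g and c are assumed values for the demonstration.

```python
import numpy as np

def ctrb(F, g):
    n = F.shape[0]
    return np.hstack([np.linalg.matrix_power(F, i) @ g for i in range(n)])

def obsv(F, c):
    n = F.shape[0]
    return np.vstack([c @ np.linalg.matrix_power(F, i) for i in range(n)])

F = np.array([[0.0, 1.0], [-0.16, -1.0]])
g = np.array([[1.0], [1.0]])
c = np.array([[1.0, 0.0]])

print("rank U =", np.linalg.matrix_rank(ctrb(F, g)))  # 2 -> completely controllable
print("rank V =", np.linalg.matrix_rank(obsv(F, c)))  # 2 -> completely observable
```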
6.6.3
State Variable Model in Jordan Canonical Form
The following result for discrete-time systems easily follows from the corresponding result for
continuous-time systems, given in the earlier chapter.
Consider a SISO system with distinct eigenvalues⁵ λ1, λ2, ..., λn. The Jordan canonical state model of the system is of the form
x(k + 1) = Λx(k) + gu(k) (6.55)
y(k) = cx(k) + du(k)
with
$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}; \quad g = \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_n \end{bmatrix}; \quad c = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}$$
The system (6.55) is completely controllable if, and only if, none of the elements of the column matrix g is zero. The system (6.55) is completely observable if, and only if, none of the elements of the row matrix c is zero.
6.6.4
The following result for discrete-time systems easily follows from the corresponding result for
continuous-time systems, given in the earlier chapter.
The general state variable model of nth-order linear time-invariant discrete-time system is given by
Eqns (6.50):
x(k + 1) = Fx(k) + gu(k); x(0) ≜ x0
(6.56)
y(k) = cx(k) + du(k)
⁵ Refer to Gopal [105] for the case of multiple eigenvalues.
Conclusion The transfer function model of a system represents its complete dynamics only if the
system is both controllable and observable.
6.6.5
Sampling of a continuous-time system gives a discrete-time system with system matrices that depend
on the sampling period. How will that influence the controllability and observability of the sampled
system? To get a controllable sampled system, it is necessary that the continuous-time system also be
controllable, because the allowable control signals for the sampled system—piecewise constant signals—
are a subset of allowable control signals for the continuous-time system. However, it may happen that the
controllability is lost for some sampling periods.
The conditions for unobservability are more restricted in the continuous-time case because the output
has to be zero over a time interval, while the sampled system output has to be zero only at the sampling
instants. This means that the continuous output may oscillate between the sampling times and be zero at the
sampling instants. This condition is sometimes called hidden oscillations. The sampled system can thus be
unobservable—even if the corresponding continuous-time system is observable.
The harmonic oscillator can be used to illustrate the preceding discussion. The transfer function model
of the oscillator system is
$$\frac{Y(s)}{U(s)} = \frac{\omega^2}{s^2 + \omega^2} \tag{6.58}$$
From this model, we have
ÿ + ω²y = ω²u
Define x1 = y; x2 = ẏ/ω.
This gives the following state variable representation of the oscillator system:
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \omega \end{bmatrix}u; \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \tag{6.59}$$
The discrete-time state variable representation of the system is obtained as follows. Noting that
$$A = \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ \omega \end{bmatrix}$$
we have
$$F = e^{AT} = \mathcal{L}^{-1}\left\{\begin{bmatrix} s & -\omega \\ \omega & s \end{bmatrix}^{-1}\right\}_{t=T} = \mathcal{L}^{-1}\left\{\begin{bmatrix} \dfrac{s}{s^2+\omega^2} & \dfrac{\omega}{s^2+\omega^2} \\[6pt] \dfrac{-\omega}{s^2+\omega^2} & \dfrac{s}{s^2+\omega^2} \end{bmatrix}\right\}_{t=T} = \begin{bmatrix} \cos\omega T & \sin\omega T \\ -\sin\omega T & \cos\omega T \end{bmatrix}$$
and
$$g = \left[\int_0^T e^{A\theta}\,d\theta\right]b = \left[\int_0^T \begin{bmatrix} \cos\omega\theta & \sin\omega\theta \\ -\sin\omega\theta & \cos\omega\theta \end{bmatrix}d\theta\right]\begin{bmatrix} 0 \\ \omega \end{bmatrix} = \begin{bmatrix} 1 - \cos\omega T \\ \sin\omega T \end{bmatrix}$$
Hence, the discrete-time state variable representation of the oscillator system becomes
$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} \cos\omega T & \sin\omega T \\ -\sin\omega T & \cos\omega T \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} + \begin{bmatrix} 1 - \cos\omega T \\ \sin\omega T \end{bmatrix}u(k) \tag{6.60}$$
$$y(k) = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix}$$
The determinants of the controllability and observability matrices are
$$|U| = |[g \;\; Fg]| = -2\sin\omega T\,(1 - \cos\omega T); \qquad |V| = \left|\begin{bmatrix} c \\ cF \end{bmatrix}\right| = \sin\omega T$$
Both controllability and observability are lost for wT = np, n = 1, 2, ... (i.e., when the sampling interval
is half the period of oscillation of the harmonic oscillator, or an integer multiple of that period), although
the corresponding continuous-time system given by Eqns (6.59), is both controllable and observable.
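A sketch tabulating |U| and |V| of the sampled oscillator against ωT confirms the loss at ωT = nπ and agrees with the closed-form determinants above.

```python
import numpy as np

for frac in [0.5, 1.0, 1.5, 2.0]:            # wT as a multiple of pi
    wT = frac * np.pi
    F = np.array([[np.cos(wT),  np.sin(wT)],
                  [-np.sin(wT), np.cos(wT)]])
    g = np.array([[1 - np.cos(wT)], [np.sin(wT)]])
    c = np.array([[1.0, 0.0]])
    detU = np.linalg.det(np.hstack([g, F @ g]))
    detV = np.linalg.det(np.vstack([c, c @ F]))
    print(f"wT = {frac} pi: |U| = {detU:+.4f}, |V| = {detV:+.4f}")
# Both determinants vanish at wT = pi and 2 pi.
```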
Loss of controllability and/or observability due to sampling occurs only when the continuous-time
system has oscillatory modes and the sampling interval is half the period of oscillation of an oscillatory
mode, or an integer multiple of that period. This implies that controllability and observability properties
of a continuous-time system, are preserved after introduction of sampling if, and only if, for every
eigenvalue of the characteristic equation, the relation
Re λi = Re λj (6.61)
implies
Im(λi − λj) ≠ 2nπ/T
where T is the sampling period and n = ±1, ±2, ... .
We know that controllability and/or observability is lost when the transfer function corresponding to a
state model has common poles and zeros. The poles and zeros are functions of sampling interval. This
implies that if the choice of sampling interval does not satisfy the condition given by (6.61), pole-zero
cancellation will occur in passing from the continuous-time to the discrete-time case; the pole-zero
cancellation will not take place if the continuous-time system does not contain complex poles.
It is very unlikely that the sampling interval chosen for a plant control system, would be precisely the
one resulting in loss of controllability and/or observability. In fact the rules of thumb, for the choice of
sampling interval given in Section 2.13, imply a sampling interval of about one tenth of the period of
oscillation of an oscillatory mode, and not just half.
The output is
$$y(k) = C\left[F^kx(0) + \sum_{i=0}^{k-1}F^{k-1-i}Gu(i)\right] + Du(k) \tag{6.63b}$$
In the transform domain, the input-output behavior of the system (6.62) is determined entirely by the
transfer function matrix (refer to Eqns (6.3))
G(z) = C(zI − F)⁻¹G + D (6.64a)
The output is
$$\underset{q\times 1}{Y(z)} = \underset{q\times p}{G(z)}\;\underset{p\times 1}{U(z)} \tag{6.64b}$$
The necessary and sufficient condition for the system (6.62) to be completely controllable is that the n × np matrix
$$U \triangleq \begin{bmatrix} G & FG & F^2G & \cdots & F^{n-1}G \end{bmatrix} \tag{6.65}$$
has rank equal to n.
The necessary and sufficient condition for the system (6.62) to be completely observable is that the nq × n matrix
$$V = \begin{bmatrix} C \\ CF \\ \vdots \\ CF^{n-1} \end{bmatrix} \tag{6.66}$$
has rank equal to n.
A MIMO system with distinct eigenvalues⁶ λ1, λ2, ..., λn has the following Jordan canonical state model:
x(k + 1) = Λx(k) + Gu(k)
(6.67)
y(k) = Cx(k) + Du(k)
with
$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}; \quad G = \begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1p} \\ g_{21} & g_{22} & \cdots & g_{2p} \\ \vdots & & & \vdots \\ g_{n1} & g_{n2} & \cdots & g_{np} \end{bmatrix}; \quad C = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ \vdots & & & \vdots \\ c_{q1} & c_{q2} & \cdots & c_{qn} \end{bmatrix}$$
The system (6.67) is completely controllable if, and only if, none of the rows of the G matrix is a zero row, and (6.67) is completely observable if, and only if, none of the columns of the C matrix is a zero column.
Example 6.7
The scheme of Fig. 5.23 (refer to Example 5.21) describes a simple concentration control process.
Mathematical model of the plant, given by Eqns (5.133), is reproduced below.
x = Ax + Bu
(6.68)
y = Cx
$$A = \begin{bmatrix} -0.01 & 0 \\ 0 & -0.02 \end{bmatrix}; \quad B = \begin{bmatrix} 1 & 1 \\ -0.004 & -0.002 \end{bmatrix}; \quad C = \begin{bmatrix} 0.01 & 0 \\ 0 & 1 \end{bmatrix}$$
The state, input, and output variables are deviations from steady-state values:
x1 = incremental volume of fluid in the tank (liters)
x2 = incremental outgoing concentration (g-moles/liter)
u1 = incremental feed 1 (liters/sec)
u2 = incremental feed 2 (liters/sec)
y1 = incremental outflow (liters/sec)
y2 = incremental outgoing concentration (g-moles/liter)
Matrix A in Eqns (6.68) is in diagonal form; none of the rows of B matrix is a zero row, and none of the
columns of C matrix is a zero column. The state model (6.68) is, therefore, completely controllable and
observable.
⁶ Refer to Gopal [105] for the case of multiple eigenvalues.
With initial values of x1 and x2 equal to zero at t = 0, a step of 2 liters/sec in feed 1 results in
$$y(t) = C\left[\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau\right]$$
with
$$e^{At} = \begin{bmatrix} e^{-0.01t} & 0 \\ 0 & e^{-0.02t} \end{bmatrix} \quad \text{and} \quad u(\tau) = \begin{bmatrix} 2 \\ 0 \end{bmatrix}$$
Solving for y(t), we get
$$y(t) = C\begin{bmatrix} \int_0^t 2e^{-0.01(t-\tau)}\,d\tau \\[4pt] \int_0^t -0.008\,e^{-0.02(t-\tau)}\,d\tau \end{bmatrix} = C\begin{bmatrix} \frac{2}{0.01}(1 - e^{-0.01t}) \\[4pt] -0.4(1 - e^{-0.02t}) \end{bmatrix}$$
Therefore,
y1(t) = 2(1 − e^{−0.01t}) (6.69a)
y2(t) = −0.4(1 − e^{−0.02t}) (6.69b)
Suppose that the plant (6.68) forms part of a process commanded by a process control computer.
As a result, the valve settings change at discrete instants only and remain constant in between.
Assuming that these instants are separated by time period T = 5 sec, we derive the discrete-time
description of the plant.
x(k + 1) = Fx(k) + Gu(k) (6.70a)
y(k) = Cx(k) (6.70b)
$$F = e^{AT} = \begin{bmatrix} e^{-0.01T} & 0 \\ 0 & e^{-0.02T} \end{bmatrix} = \begin{bmatrix} 0.9512 & 0 \\ 0 & 0.9048 \end{bmatrix} \tag{6.70c}$$
$$G = \int_0^T e^{A\theta}B\,d\theta = \begin{bmatrix} \int_0^T e^{-0.01\theta}\,d\theta & \int_0^T e^{-0.01\theta}\,d\theta \\[4pt] -0.004\int_0^T e^{-0.02\theta}\,d\theta & -0.002\int_0^T e^{-0.02\theta}\,d\theta \end{bmatrix} = \begin{bmatrix} 4.88 & 4.88 \\ -0.019 & -0.0095 \end{bmatrix} \tag{6.70d}$$
Matrix F in Eqns (6.70) is in diagonal form; none of the rows of G matrix is a zero row, and none of the
columns of C matrix is a zero column. The state model (6.70) is, therefore, completely controllable and
observable.
With initial values of x1 and x2 equal to zero at k = 0, a step of 2 liters/sec in feed 1 results in
$$y(k) = C\left[\sum_{i=0}^{k-1}F^{k-1-i}Gu(i)\right]$$
with
$$F^k = \begin{bmatrix} (0.9512)^k & 0 \\ 0 & (0.9048)^k \end{bmatrix} \quad \text{and} \quad u(i) = \begin{bmatrix} 2 \\ 0 \end{bmatrix}$$
Solving for y(k), we get
$$y_1(k) = 0.01x_1(k) = 0.01\sum_{i=0}^{k-1}9.76\,(0.9512)^{k-1-i}$$
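A sketch reproducing the discrete model (6.70) and the discrete step response with SciPy's ZOH routine; the printed F and G should match Eqns (6.70c)–(6.70d) to the quoted precision.

```python
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[-0.01, 0.0], [0.0, -0.02]])
B = np.array([[1.0, 1.0], [-0.004, -0.002]])
C = np.array([[0.01, 0.0], [0.0, 1.0]])
D = np.zeros((2, 2))

F, G, _, _, _ = cont2discrete((A, B, C, D), 5.0, method='zoh')
print(F)                    # diag(0.9512, 0.9048)
print(G)                    # [[4.88, 4.88], [-0.019, -0.0095]]

x = np.zeros(2); u = np.array([2.0, 0.0])   # 2 liters/sec step in feed 1
for k in range(4):
    print(k, C @ x)         # y1(k) -> 2 and y2(k) -> -0.4 as k grows
    x = F @ x + G @ u
```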
REVIEW EXAMPLES
The observable canonical state model (second companion form) follows directly from Eqns (6.5) and (6.8):
$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \\ x_3(k+1) \end{bmatrix} = \begin{bmatrix} 0 & 0 & 2 \\ 1 & 0 & -5 \\ 0 & 1 & 4 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \\ x_3(k) \end{bmatrix} + \begin{bmatrix} 1 \\ -7 \\ 4 \end{bmatrix}u(k)$$
$$y(k) = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}x(k) + 4u(k)$$
The state variable model in Jordan canonical form follows from Eqns (6.11) and (6.12):
$$\frac{Y(z)}{U(z)} = G(z) = \frac{4z^3 - 12z^2 + 13z - 7}{z^3 - 4z^2 + 5z - 2} = 4 + \frac{4z^2 - 7z + 1}{(z-1)^2(z-2)} = 4 + \frac{2}{(z-1)^2} + \frac{1}{(z-1)} + \frac{3}{(z-2)}$$
$$\begin{bmatrix} \hat{x}_1(k+1) \\ \hat{x}_2(k+1) \\ \hat{x}_3(k+1) \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}\begin{bmatrix} \hat{x}_1(k) \\ \hat{x}_2(k) \\ \hat{x}_3(k) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}u(k)$$
$$y(k) = \begin{bmatrix} 2 & 1 & 3 \end{bmatrix}\hat{x}(k) + 4u(k)$$
$$P^{-1}AP = \Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$
[Fig. 6.12 (a) Continuous-time closed-loop system: r drives the plant ẋ = Ax + bu through feedback of y = x1 = cx. (b) Sampled-data version of the same loop, closed through a sampler (period T) and a ZOH]
The characteristic equation of the closed-loop system is
λ² + 2λ + K = 0
The closed-loop system is stable for all values of K > 0.
Figure 6.12b shows a block diagram of the closed-loop digital system. The discrete-time description of
the plant is obtained as follows:
x(k + 1) = Fx(k) + g u(k)
y(k) = cx(k)
where
$$F = e^{AT}; \qquad g = \int_0^T e^{A\theta}\,b\,d\theta$$
Now
$$e^{At} = \mathcal{L}^{-1}[(sI - A)^{-1}] = \mathcal{L}^{-1}\left\{\begin{bmatrix} s & -1 \\ 0 & s+2 \end{bmatrix}^{-1}\right\} = \mathcal{L}^{-1}\left\{\begin{bmatrix} \dfrac{1}{s} & \dfrac{1}{s(s+2)} \\[6pt] 0 & \dfrac{1}{s+2} \end{bmatrix}\right\} = \begin{bmatrix} 1 & \frac{1}{2}(1-e^{-2t}) \\ 0 & e^{-2t} \end{bmatrix}$$
Therefore,
$$F = e^{AT} = \begin{bmatrix} 1 & \frac{1}{2}(1-e^{-2T}) \\ 0 & e^{-2T} \end{bmatrix}; \qquad g = \int_0^T \begin{bmatrix} \frac{K}{2}(1-e^{-2\theta}) \\ Ke^{-2\theta} \end{bmatrix}d\theta = K\begin{bmatrix} \frac{T}{2} - \frac{1}{4} + \frac{1}{4}e^{-2T} \\[4pt] \frac{1}{2}(1-e^{-2T}) \end{bmatrix}$$
Fig. 6.13
Solution
$$A = \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}$$
$$e^{At} = \mathcal{L}^{-1}[(sI - A)^{-1}] = \mathcal{L}^{-1}\left\{\begin{bmatrix} \dfrac{1}{s} & \dfrac{1}{s(s+1)} \\[6pt] 0 & \dfrac{1}{s+1} \end{bmatrix}\right\} = \begin{bmatrix} 1 & 1-e^{-t} \\ 0 & e^{-t} \end{bmatrix}$$
The discretized state equation of the plant is
x(k + 1) = Fx(k) + gu(k) (6.73a)
where
$$F = e^{AT} = \begin{bmatrix} 1 & 1-e^{-T} \\ 0 & e^{-T} \end{bmatrix}; \quad g = \int_0^T e^{A\theta}\,b\,d\theta = \begin{bmatrix} \int_0^T (1-e^{-\theta})\,d\theta \\[4pt] \int_0^T e^{-\theta}\,d\theta \end{bmatrix} = \begin{bmatrix} T - 1 + e^{-T} \\ 1 - e^{-T} \end{bmatrix} \tag{6.73b}$$
For T = 1 sec, we have
$$F = \begin{bmatrix} 1 & 0.632 \\ 0 & 0.368 \end{bmatrix}; \qquad g = \begin{bmatrix} 0.368 \\ 0.632 \end{bmatrix} \tag{6.73c}$$
Consider now the feedback system of Fig. 6.13, with the plant described by the equation (refer to Eqns (6.73)):
$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} 1 & 0.632 \\ 0 & 0.368 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} + \begin{bmatrix} 0.368 \\ 0.632 \end{bmatrix}e_2(k) \tag{6.74}$$
e2(k) may be taken as the third state variable x3(k), whose dynamics are given by
x3(k + 1) = −αx3(k) + βe1(k) = −αx3(k) + β(r(k) − x1(k)) = −βx1(k) − αx3(k) + βr(k) (6.75)
From Eqns (6.74)–(6.75), we get the following state variable model for the closed-loop digital system of
Fig. 6.13:
$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \\ x_3(k+1) \end{bmatrix} = \begin{bmatrix} 1 & 0.632 & 0.368 \\ 0 & 0.368 & 0.632 \\ -\beta & 0 & -\alpha \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \\ x_3(k) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \beta \end{bmatrix}r(k)$$
$$y(k) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}x(k)$$
$$\frac{d^{n-1}}{d\lambda^{n-1}}f(\lambda)\bigg|_{\lambda=\lambda_1} = \frac{d^{n-1}}{d\lambda^{n-1}}g(\lambda)\bigg|_{\lambda=\lambda_1}$$
Solving, we get
$$\beta_0 = \lambda_1^k,\quad \beta_1 = \frac{k}{1!}\lambda_1^{k-1},\quad \beta_2 = \frac{k(k-1)}{2!}\lambda_1^{k-2},\ \ldots$$
$$\beta_{n-1} = \frac{k(k-1)(k-2)\cdots(k-n+2)}{(n-1)!}\lambda_1^{k-n+1} = \frac{k!}{(k-n+1)!\,(n-1)!}\lambda_1^{k-n+1}$$
Therefore (refer to Review Example 5.3),
$$\Lambda^k = \beta_0I + \beta_1(\Lambda - \lambda_1I) + \cdots + \beta_{n-1}(\Lambda - \lambda_1I)^{n-1} = \begin{bmatrix} \lambda_1^k & \frac{k}{1!}\lambda_1^{k-1} & \frac{k(k-1)}{2!}\lambda_1^{k-2} & \cdots & \frac{k!}{(k-n+1)!(n-1)!}\lambda_1^{k-n+1} \\ 0 & \lambda_1^k & \frac{k}{1!}\lambda_1^{k-1} & \cdots & \vdots \\ 0 & 0 & \lambda_1^k & \cdots & \\ \vdots & & & \ddots & \\ 0 & 0 & 0 & \cdots & \lambda_1^k \end{bmatrix}$$
PROBLEMS
6.1 A system is described by the state equation
$$x(k+1) = \begin{bmatrix} -3 & 1 & 0 \\ -4 & 0 & 1 \\ -1 & 0 & 0 \end{bmatrix}x(k) + \begin{bmatrix} -3 \\ -7 \\ 0 \end{bmatrix}u(k); \quad x(0) = x^0$$
Using the z-transform technique, transform the state equation into a set of linear algebraic
equations in the form
X(z) = G(z)x0 + H(z)U(z)
6.2 Give a block diagram for digital realization of the state equation of Problem 6.1.
6.3 Obtain the transfer function description for the following system:
$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} 2 & -5 \\ \tfrac{1}{2} & -1 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u(k); \qquad y(k) = 2x_1(k)$$
6.4 A second-order multivariable system is described by the following equations:
$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} 2 & -5 \\ \tfrac{1}{2} & -1 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} + \begin{bmatrix} 1 & -2 & 0 \\ 0 & 1 & 3 \end{bmatrix}\begin{bmatrix} u_1(k) \\ u_2(k) \\ u_3(k) \end{bmatrix}$$
$$\begin{bmatrix} y_1(k) \\ y_2(k) \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} + \begin{bmatrix} 0 & 4 & 0 \\ 0 & 0 & -2 \end{bmatrix}\begin{bmatrix} u_1(k) \\ u_2(k) \\ u_3(k) \end{bmatrix}$$
Convert the state variable model into a transfer function matrix.
6.5 The state diagram of a linear system is shown in Fig. P6.5. Assign the state variables and write the
dynamic equations of the system.
[Fig. P6.5 Signal flow graph: input U(z) and output Y(z); three unit-delay (z⁻¹) branches with loop gains 1/2, 1/4, −1/2 and 1/3, and path gains 1, 5, 6, −1, 3, 2 and −7]
6.6 Set up a state variable model for the system of Fig. P6.6.
[Fig. P6.6 Block diagram with inputs u1, u2 and outputs y1, y2: unit delayers interconnected through gains 3, 2, 12, 2 and 7 with several summing junctions]
6.7 Obtain the companion form realizations for the following transfer functions. Obtain different
companion form for each system.
$$\text{(i)}\ \frac{Y(z)}{R(z)} = \frac{3z^2 - z - 3}{z^2 + \frac{1}{3}z - \frac{2}{3}} \qquad \text{(ii)}\ \frac{Y(z)}{R(z)} = \frac{-2z^3 + 2z^2 - z + 2}{z^3 + z^2 - z - \frac{4}{3}}$$
6.8 Obtain the Jordan canonical form realizations for the following transfer functions.
$$\text{(i)}\ \frac{Y(z)}{R(z)} = \frac{z^3 + 8z^2 + 17z + 8}{(z+1)(z+2)(z+3)} \qquad \text{(ii)}\ \frac{Y(z)}{R(z)} = \frac{3z^3 - 4z + 6}{\left(z - \frac{1}{3}\right)^3}$$
6.9 Find state variable models for the following difference equations. Obtain different canonical form
for each system.
(i) y(k + 3) + 5 y(k + 2) + 7 y(k + 1) + 3 y(k) = 0
(ii) y(k + 2) + 3 y(k + 1) + 2 y(k) = 5 r(k + 1) + 3 r(k)
(iii) y(k + 3) + 5 y(k + 2) + 7 y(k + 1) + 3 y(k) = r(k + 1) + 2 r(k)
6.10 Given
$$F = \begin{bmatrix} 0 & 1 \\ -3 & 4 \end{bmatrix}$$
Determine f(k) = Fk using
(a) the z-transform technique;
(b) similarity transformation; and
(c) Cayley–Hamilton technique.
6.11 Consider the system
$$x(k+1) = Fx(k) + gu(k); \quad x(0) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}; \qquad y(k) = cx(k)$$
with
$$F = \begin{bmatrix} 0 & 1 \\ -0.16 & -1 \end{bmatrix}; \quad g = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; \quad c = \begin{bmatrix} 1 & 0 \end{bmatrix}$$
Find the closed-form solution for y(k) when u(k) is unit-step sequence.
6.12 Consider the system
x(k + 1) = Fx(k) + gu(k)
y(k) = Cx(k) + du(k)
$$F = \begin{bmatrix} \tfrac{3}{2} & -1 \\ 1 & -1 \end{bmatrix}; \quad g = \begin{bmatrix} 3 \\ 2 \end{bmatrix}; \quad x(0) = \begin{bmatrix} -5 \\ 1 \end{bmatrix}$$
$$C = \begin{bmatrix} -3 & 4 \\ -1 & 1 \end{bmatrix}; \quad d = \begin{bmatrix} -2 \\ 0 \end{bmatrix}; \quad u(k) = \left(\tfrac{1}{2}\right)^k,\ k \ge 0$$
Find the response y(k), k ≥ 0.
[Fig. P6.16 Block diagram: input u(t) and disturbance w(t) are summed and passed through 10/(10s + 1) and then 1/s to give y(t)]
6.17 The mathematical model of the plant of a two-input, two-output temperature control system is
given below.
x = Ax + Bu
y = Cx
$$A = \begin{bmatrix} -0.1 & 0 \\ 0.1 & -0.1 \end{bmatrix}; \quad B = \begin{bmatrix} 100 & 0 \\ 0 & 100 \end{bmatrix}; \quad C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
For the computer control of this system, obtain the discrete-time model of the plant. Sampling
period T = 3 seconds.
6.18 Consider the closed-loop control system shown in Fig. P6.18.
(a) Obtain the z-transform of the feedforward transfer function.
(b) Obtain the closed-loop transfer function, and convert it into a state variable model for digital
simulation.
Fig. P6.18
6.19 The mathematical model of the plant of a control system is given below.
$$\frac{Y(s)}{U(s)} = G_a(s) = \frac{e^{-0.4s}}{s+1}$$
For digital simulation of the plant, obtain a vector difference state model with T = 1 sec as the
sampling period. Use the following methods to obtain the plant model:
(a) Sample Ga(s) with a zero-order hold and convert the resulting discrete-time transfer function
into a state model.
(b) Convert the given Ga(s) into a state model and sample this model with a zero-order hold.
6.20 Determine zero-order hold sampling of the process
ẋ(t) = −x(t) + u(t − 2.5)
with sampling interval T = 1.
6.21 Convert the transfer function
$$\frac{Y(s)}{U(s)} = G_a(s) = \frac{e^{-st_D}}{s^2}; \quad 0 \le t_D < T$$
into a state model and sample this model with a zero-order hold; T is the sampling interval.
6.22 The plant of a unity-feedback continuous-time control system is described by the equations
$$\dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & -2 \end{bmatrix}x + \begin{bmatrix} 0 \\ 2 \end{bmatrix}u; \qquad y = x_1$$
(a) Show that the continuous-time closed-loop system is stable.
(b) A sampler and zero-order hold are now introduced in the forward loop. Show that the stable
linear continuous-time system becomes unstable upon the introduction of a sampler and a
zero-order hold with sampling period T = 3 sec.
6.23 The block diagram of a sampled-data system is shown in Fig. P6.23.
(a) Obtain a discrete-time state model for the system.
(b) Obtain the equation for intersample response of the system.
[Fig. P6.23 Sampled-data system: error r − y is sampled (period T) and held (ZOH); the plant 1/(s + 2) · 1/(s + 1) · 1/s produces y]
6.24 The block diagram of a sampled-data system is shown in Fig. P6.24. Obtain the discrete-time state
model of the system.
$$A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad c = \begin{bmatrix} 1 & 0 \end{bmatrix}$$
Fig. P6.24
6.25 A closed-loop computer control system is shown in Fig. P6.25. The digital compensator is
described by the difference equation
e2(k + 1) + 2e2(k) = e1(k)
The state model of the plant is, as given in Problem 6.24. Obtain the discrete-time state model for
the system.
Fig. P6.25
6.26 Consider the closed-loop analog control system shown in Fig. P6.26. For computer control of
the process, transform the controller transfer function into a difference equation using backward-
difference approximation of the derivative.
Sample the process model with a zero-order hold and obtain the state variable model of the closed-
loop computer-controlled system. Take T = 0.1 sec as sampling interval.
[Fig. P6.26 Unity-feedback analog system: controller Gc(s) = 9 + 4.1s acts on E(s) to give U(s); process 10/(s(10s + 1)) produces Y(s)]
[Fig. P6.29 Sampled-data system: r through a sampler (period T) and ZOH drives π/((s + 0.02)² + π²), followed by 1/(s + 1), to give y]
6.30 Consider the state variable model
x(k + 1) = Fx(k) + g r(k)
y(k) = cx(k)
$$F = \begin{bmatrix} 0 & 1 \\ -\tfrac{1}{8} & \tfrac{3}{4} \end{bmatrix}; \quad g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad c = \begin{bmatrix} -\tfrac{1}{2} & 1 \end{bmatrix}$$
(a) Find the eigenvalues of matrix F.
(b) Find the transfer function G(z) = Y(z)/R(z) and determine the poles of the transfer function.
(c) Comment upon the controllability and observability properties of the given system without
making any further calculations.
Chapter 7
Pole-Placement Design and
State Observers
7.1 INTRODUCTION
The design techniques presented in the preceding chapters are based on either frequency response or the
root locus. These transfer function-based methods have been referred to as classical control design. The
goal of this chapter is to solve the identical problems using different techniques based on state variable
formulation. The use of the state-space approach has often been referred to as modern control design.
However, since the state-space method of description for differential equations is over 100 years old,
and was introduced in control design in the late 1950s, it seems somewhat misleading to refer to it as
‘modern’. We prefer to refer to the two approaches to design as state variable methods and transform
methods.
The transform methods of design are powerful methods of practical design. Most control systems are
designed using variations of these methods. An important property of these methods is robustness. The
resultant closed-loop system characteristics tend to be insensitive to small inaccuracies in the system
model. This property is very important because of the difficulty in finding an accurate linear model of a
physical system and also, because many systems have significant nonlinear operations.
The state variable methods appear to be much more dependent on having an accurate system model for
the design process. An advantage of these methods is that the system representation provides a complete
(internal) description of the system, including possible internal oscillations or instabilities that might be
hidden by inappropriate cancellations in the transfer function (input/output) description. The power of
state variable techniques is especially apparent when we design controllers for systems with more than
one control input or sensed output. However, in this chapter, we will illustrate the state variable design
methods using Single-Input, Single-Output (SISO) systems. Methods for Multi-Input, Multi-Output
(MIMO) design are discussed in Chapter 8.
In this chapter, we present a design method known as pole placement or pole assignment. This method is
similar to the root-locus design in that, the closed-loop poles may be placed in desired locations. However,
pole-placement design allows all closed-loop poles to be placed in desirable locations, whereas the root-
locus design procedure allows only the two dominant poles to be placed. There is a cost associated with
placing all closed-loop poles, however, because placing all closed-loop poles requires measurement and
feedback of all the state variables of the system.
In many applications, all the state variables cannot be measured because of cost considerations, or
because of the lack of suitable transducers. In these cases, those state variables that cannot be measured
must be estimated from the ones that are measured. Fortunately, we can separate the design into two
phases. During the first phase, we design the system as though all states of the system will be measured.
The second phase is concerned with the design of the state estimator. In this chapter, we consider both
phases of the design process, and the effects that the state estimator has on closed-loop system operation.
Figure 7.1 shows how the state-feedback control law and the state estimator fit together, and how the combination takes the place of what we have previously been referring to as dynamic compensation. We will see in this chapter that the estimator-based dynamic compensators are very similar to the classical compensators of Chapter 4, in spite of the fact that they are arrived at by entirely different means.
Fig. 7.1 Closed-loop structure: the state-feedback control law (constant gain matrix k) and the state estimator together constitute the dynamic compensation
Consider the nth-order plant model
$$\dot{x}(t) = Ax(t) + bu(t) \qquad\qquad (7.1)$$
with the state-feedback control law
$$u(t) = -kx(t); \quad k = [k_1 \;\; k_2 \;\; \cdots \;\; k_n] \qquad\qquad (7.2)$$
where k is a constant state-feedback gain matrix. With this state-feedback control law, the closed-loop system is
described by the state differential equation
ẋ(t) = (A – bk) x(t) (7.3)
and the characteristic equation of the closed-loop system is
| sI – (A – bk) | = 0 (7.4)
When evaluated, this yields an nth-order polynomial in s containing the n gains k1, k2, ..., kn. The control-
law design then consists of picking the gains so that the roots of Eqn. (7.4) are in desirable locations.
In the next section, we find that under a mildly restrictive condition (namely, the system (7.1) must be
completely controllable), all the eigenvalues of (A – bk) can be arbitrarily located in the complex plane
by choosing k suitably (with the restriction that complex eigenvalues occur in complex-conjugate pairs).
If all the eigenvalues of (A – bk) are placed in the left-half plane, the closed-loop system is, of course, asymptotically stable; x(t) will decay to zero irrespective of the value of x(0)—the initial perturbation in the state. The system state is thus maintained at zero value, in spite of disturbances that act upon the system. Systems with this property are called regulator systems; the origin of state space is the equilibrium state of the system.
Example 7.1
Consider the problem of designing an attitude control system for a rigid satellite. Satellites usually require
attitude control so that antennas, sensors, and solar panels are properly oriented. For example, antennas
Fig. 7.4 State-variable block diagram of the satellite attitude control: double-integrator plant (1/s · 1/s) with feedback gains k1 and k2
Equating the coefficients of like powers of s in Eqns (7.7) and (7.8) yields
k1 = 32, k2 = 8
The calculation of the gains using the technique illustrated in this example becomes rather tedious when the order of the system is larger than three. There are, however, ‘canonical’ forms of the state variable equations where the algebra for finding the gains is especially simple. One such canonical form—useful in control-law design—is the controllable canonical form. Consider a system represented by the transfer function
$$\frac{Y(s)}{U(s)} = \frac{\beta_1 s^{n-1} + \beta_2 s^{n-2} + \cdots + \beta_n}{s^n + a_1 s^{n-1} + \cdots + a_n}$$
A companion-form realization of this transfer function is given below (refer to Eqns (5.54)):
ẋ = Ax + bu
y = cx (7.9)
where
$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$
$$c = [\beta_n \;\; \beta_{n-1} \;\; \cdots \;\; \beta_2 \;\; \beta_1]$$
The matrix A in Eqns (7.9) has a very special structure: the coefficients of the denominator of the transfer function, preceded by minus signs, form a string along the bottom row of the matrix. The rest of the matrix is zero, except for the superdiagonal terms, which are all unity. It can easily be proved that the pair (A, b) is completely controllable for all values of the aᵢ's. For this reason, the companion-form realization given by Eqns (7.9) is referred to as the controllable canonical form.
One of the advantages of the controllable canonical form is that the controller gains can be obtained from
it, just by inspection. The closed-loop system matrix
$$A - bk = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n - k_1 & -a_{n-1} - k_2 & -a_{n-2} - k_3 & \cdots & -a_1 - k_n \end{bmatrix}$$
has the characteristic equation
$$s^n + (a_1 + k_n)s^{n-1} + \cdots + (a_{n-2} + k_3)s^2 + (a_{n-1} + k_2)s + (a_n + k_1) = 0$$
and the controller gains can be found by comparing the coefficients of this characteristic equation with
Eqn. (7.5).
We now have the basis for a design procedure. Given an arbitrary state variable model and a desired
characteristic polynomial, we transform the model to controllable canonical form and solve for the
controller gains, by inspection. Since these gains are for the state in the controllable canonical form,
we must transform the gains back to the original state. We will develop this pole-placement design
procedure in the subsequent sections.
Consider the linear time-invariant system (7.1) with state-feedback control law (7.2); the resulting
closed-loop system is given by Eqn. (7.3). In the following, we shall prove that a necessary and sufficient
condition for arbitrary placement of closed-loop eigenvalues in the complex plane (with the restriction
that complex eigenvalues occur in complex-conjugate pairs), is that the system (7.1) is completely
controllable. We shall first prove the sufficient condition, i.e., if the system (7.1) is completely
controllable, all the eigenvalues of (A – bk) in Eqn. (7.3) can be arbitrarily placed.
In proving the sufficient condition on arbitrary pole-placement, it is convenient to transform the state
equation (7.1) into the controllable canonical form (7.9). Let us assume that such a transformation exists
and is given by
$$\bar{x} = Px \qquad\qquad (7.10)$$
$$= \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{bmatrix} x = \begin{bmatrix} \mathbf{p}_1 \\ \mathbf{p}_2 \\ \vdots \\ \mathbf{p}_n \end{bmatrix} x$$
$$\mathbf{p}_i = [p_{i1} \;\; p_{i2} \;\; \cdots \;\; p_{in}]; \quad i = 1, 2, \ldots, n$$
Under the transformation (7.10), system (7.1) is transformed to the following controllable canonical model:
$$\dot{\bar{x}} = \bar{A}\bar{x} + \bar{b}u \qquad\qquad (7.11)$$
where
$$\bar{A} = PAP^{-1} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}; \quad \bar{b} = Pb = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$
$$|sI - \bar{A}| = s^n + a_1 s^{n-1} + \cdots + a_{n-1}s + a_n = |sI - A|$$
(Characteristic polynomial is invariant under equivalence transformation)
The first equation in the set (7.10) is given by
$$\bar{x}_1 = p_{11}x_1 + p_{12}x_2 + \cdots + p_{1n}x_n = \mathbf{p}_1 x$$
Taking the derivative on both sides of this equation, we get
$$\dot{\bar{x}}_1 = \mathbf{p}_1\dot{x} = \mathbf{p}_1 Ax + \mathbf{p}_1 bu$$
But $\dot{\bar{x}}_1\,(= \bar{x}_2)$ is a function of x only, as per the canonical model (7.11). Therefore,
$$\mathbf{p}_1 b = 0 \quad \text{and} \quad \bar{x}_2 = \mathbf{p}_1 Ax$$
Taking the derivative on both sides once again, we get
$$\mathbf{p}_1 Ab = 0 \quad \text{and} \quad \bar{x}_3 = \mathbf{p}_1 A^2 x$$
Continuing the process, we obtain
$$\mathbf{p}_1 A^{n-2}b = 0 \quad \text{and} \quad \bar{x}_n = \mathbf{p}_1 A^{n-1}x$$
Taking the derivative once again, we obtain
$$\mathbf{p}_1 A^{n-1}b = 1$$
Thus
$$\bar{x} = Px = \begin{bmatrix} \mathbf{p}_1 \\ \mathbf{p}_1 A \\ \vdots \\ \mathbf{p}_1 A^{n-1} \end{bmatrix} x$$
where $\mathbf{p}_1$ must satisfy the conditions
$$\mathbf{p}_1 b = \mathbf{p}_1 Ab = \cdots = \mathbf{p}_1 A^{n-2}b = 0, \quad \mathbf{p}_1 A^{n-1}b = 1$$
From Eqn. (7.11), we have
$$Pb = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{p}_1 b \\ \mathbf{p}_1 Ab \\ \vdots \\ \mathbf{p}_1 A^{n-2}b \\ \mathbf{p}_1 A^{n-1}b \end{bmatrix}$$
or
$$\mathbf{p}_1 [b \;\; Ab \;\; \cdots \;\; A^{n-2}b \;\; A^{n-1}b] = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]$$
This gives
$$\mathbf{p}_1 = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]\,U^{-1}$$
where
$$U = [b \;\; Ab \;\; \cdots \;\; A^{n-1}b]$$
is the controllability matrix, which is nonsingular because of the assumption of controllability of the system (7.1).
Therefore, the controllable state model (7.1) can be transformed to the canonical form (7.11) by the transformation
$$\bar{x} = Px \qquad\qquad (7.12)$$
where
$$P = \begin{bmatrix} \mathbf{p}_1 \\ \mathbf{p}_1 A \\ \vdots \\ \mathbf{p}_1 A^{n-1} \end{bmatrix}; \quad \mathbf{p}_1 = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]\,U^{-1}$$
Under the equivalence transformation (7.12), the state-feedback control law (7.2) becomes
$$u = -kx = -\bar{k}\bar{x} \qquad\qquad (7.13)$$
where $\bar{k} = kP^{-1} = [\bar{k}_1 \;\; \bar{k}_2 \;\; \cdots \;\; \bar{k}_n]$. With this control law, system (7.11) becomes
$$\dot{\bar{x}} = (\bar{A} - \bar{b}\bar{k})\bar{x} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n - \bar{k}_1 & -a_{n-1} - \bar{k}_2 & -a_{n-2} - \bar{k}_3 & \cdots & -a_1 - \bar{k}_n \end{bmatrix} \bar{x} \qquad (7.14)$$
Step 2 Compute the transformation matrix P that converts the state model to controllable canonical form:
$$P = \begin{bmatrix} \mathbf{p}_1 \\ \mathbf{p}_1 A \\ \vdots \\ \mathbf{p}_1 A^{n-1} \end{bmatrix}; \quad \mathbf{p}_1 = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]\,U^{-1}, \quad U = [b \;\; Ab \;\; \cdots \;\; A^{n-1}b] \qquad (7.21)$$
Step 3 Using the desired eigenvalues (desired closed-loop poles) λ1, λ2, ..., λn, write the desired characteristic polynomial:
$$(s - \lambda_1)(s - \lambda_2)\cdots(s - \lambda_n) = s^n + \alpha_1 s^{n-1} + \cdots + \alpha_{n-1}s + \alpha_n \qquad (7.22)$$
and determine the values of α1, α2, ..., αn–1, αn.
Step 4 The required state-feedback gain matrix is determined from the following equation:
$$k = [\alpha_n - a_n \;\;\; \alpha_{n-1} - a_{n-1} \;\;\; \cdots \;\;\; \alpha_1 - a_1]\,P \qquad\qquad (7.23)$$
There are also other approaches for the determination of the state-feedback gain matrix k. In what follows, we shall present a well-known result, the Ackermann’s formula, which is convenient for computer solution.
From Eqns (7.23) and (7.21), we get
$$k = [\alpha_n - a_n \;\;\; \alpha_{n-1} - a_{n-1} \;\;\; \cdots \;\;\; \alpha_1 - a_1]\begin{bmatrix} [0 \; 0 \; \cdots \; 1]\,U^{-1} \\ [0 \; 0 \; \cdots \; 1]\,U^{-1}A \\ \vdots \\ [0 \; 0 \; \cdots \; 1]\,U^{-1}A^{n-1} \end{bmatrix}$$
$$= [0 \;\; 0 \;\; \cdots \;\; 1]\,U^{-1}\big[(\alpha_1 - a_1)A^{n-1} + (\alpha_2 - a_2)A^{n-2} + \cdots + (\alpha_n - a_n)I\big] \qquad (7.24)$$
The characteristic polynomial of matrix A is (Eqn. (7.20))
$$|sI - A| = s^n + a_1 s^{n-1} + a_2 s^{n-2} + \cdots + a_{n-1}s + a_n$$
Since the Cayley–Hamilton theorem states that a matrix satisfies its own characteristic equation, we have
$$A^n + a_1 A^{n-1} + a_2 A^{n-2} + \cdots + a_{n-1}A + a_n I = 0$$
Therefore,
$$A^n = -a_1 A^{n-1} - a_2 A^{n-2} - \cdots - a_{n-1}A - a_n I \qquad\qquad (7.25)$$
From Eqns (7.24) and (7.25), we get
$$k = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]\,U^{-1}e(A) \qquad\qquad (7.26a)$$
where
$$e(A) = A^n + \alpha_1 A^{n-1} + \alpha_2 A^{n-2} + \cdots + \alpha_{n-1}A + \alpha_n I \qquad\qquad (7.26b)$$
$$U = [b \;\; Ab \;\; \cdots \;\; A^{n-1}b] \qquad\qquad (7.26c)$$
Equations (7.26) describe the Ackermann’s formula for the determination of the state-feedback gain matrix k.
Example 7.2
Recall the inverted pendulum of Example 5.15, shown in Fig. 5.16, in which the object is to apply a force
u(t) so that the pendulum remains balanced in the vertical position. We found the linearized equations
governing the system to be
$$\dot{x} = Ax + bu$$
where $x = [\theta \;\; \dot{\theta} \;\; z \;\; \dot{z}]^T$ and
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ -1.4458 \\ 0 \\ 0.9639 \end{bmatrix}$$
z(t) = horizontal displacement of the pivot on the cart
θ(t) = rotational angle of the pendulum
It is easy to verify that the characteristic polynomial of matrix A is
$$|sI - A| = s^4 - 16.3106\,s^2$$
Since there are poles at 0, 0, 4.039, and – 4.039, the system is quite unstable, as one would expect from
physical reasoning.
Suppose we require a feedback control of the form
u(t) = – kx = – k1x1 – k2x2 – k3x3 – k4x4,
such that the closed-loop system has the stable pole configuration given by multiple poles at –1. We
verified in Example 5.16 that the system under consideration is a controllable system; therefore, such
a feedback gain matrix k does exist. We will determine the required k by using the design equations
(7.17)–(7.23).
The controllability matrix
$$U = [b \;\; Ab \;\; A^2b \;\; A^3b] = \begin{bmatrix} 0 & -1.4458 & 0 & -23.5816 \\ -1.4458 & 0 & -23.5816 & 0 \\ 0 & 0.9639 & 0 & 1.5379 \\ 0.9639 & 0 & 1.5379 & 0 \end{bmatrix}$$
$$U^{-1} = \begin{bmatrix} 0 & 0.0750 & 0 & 1.1500 \\ 0.0750 & 0 & 1.1500 & 0 \\ 0 & -0.0470 & 0 & -0.0705 \\ -0.0470 & 0 & -0.0705 & 0 \end{bmatrix}$$
Therefore,
$$\mathbf{p}_1 = [-0.0470 \;\;\; 0 \;\;\; -0.0705 \;\;\; 0]$$
$$P = \begin{bmatrix} \mathbf{p}_1 \\ \mathbf{p}_1 A \\ \mathbf{p}_1 A^2 \\ \mathbf{p}_1 A^3 \end{bmatrix} = \begin{bmatrix} -0.0470 & 0 & -0.0705 & 0 \\ 0 & -0.0470 & 0 & -0.0705 \\ -0.6917 & 0 & 0 & 0 \\ 0 & -0.6917 & 0 & 0 \end{bmatrix}$$
$$|sI - A| = s^4 + a_1 s^3 + a_2 s^2 + a_3 s + a_4 = s^4 + 0s^3 - 16.3106\,s^2 + 0s + 0$$
$$|sI - (A - bk)| = s^4 + \alpha_1 s^3 + \alpha_2 s^2 + \alpha_3 s + \alpha_4 = (s + 1)^4 = s^4 + 4s^3 + 6s^2 + 4s + 1$$
$$k = [\alpha_4 - a_4 \;\;\; \alpha_3 - a_3 \;\;\; \alpha_2 - a_2 \;\;\; \alpha_1 - a_1]\,P$$
$$= [1 \;\;\; 4 \;\;\; 22.3106 \;\;\; 4]\,P = [-15.4785 \;\;\; -2.9547 \;\;\; -0.0705 \;\;\; -0.2820]$$
This feedback control law yields a stable closed-loop system, so that the entire state vector, when disturbed from the zero state, returns asymptotically to this state. This means that not only is the pendulum balanced (θ → 0), but the cart also returns to its origin (z → 0).
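The numbers above can be cross-checked numerically by reusing the ackermann sketch given after Eqns (7.26) (our code, not the book's); the small discrepancies trace back to the rounded entries of U⁻¹ and P:

```python
import numpy as np
A = np.array([[0, 1, 0, 0],
              [16.3106, 0, 0, 0],
              [0, 0, 0, 1],
              [-1.0637, 0, 0, 0]], dtype=float)
b = np.array([[0.0], [-1.4458], [0.0], [0.9639]])
k = ackermann(A, b, [-1.0, -1.0, -1.0, -1.0])   # four poles at s = -1
print(k.round(4))                               # ~ [-15.48  -2.95  -0.07  -0.28]
print(np.linalg.eigvals(A - b @ k.reshape(1, -1)).round(3))   # all near -1
```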
Example 7.3
Let us apply Ackermann’s formula to the state-regulator design problem of Example 7.1 (satellite-attitude
control system). The plant model is given by (Eqn. (7.6))
$$\dot{x} = Ax + bu = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u$$
The desired characteristic polynomial is (Eqn. (7.8))
$$s^2 + \alpha_1 s + \alpha_2 = s^2 + 8s + 32$$
To use Ackermann’s formula (7.26) to calculate the gain matrix k, we first evaluate U⁻¹ and e(A):
$$U = [b \;\; Ab] = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}; \quad U^{-1} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$
$$e(A) = A^2 + \alpha_1 A + \alpha_2 I = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + 8\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + 32\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 32 & 8 \\ 0 & 32 \end{bmatrix}$$
Now using Eqn. (7.26a), we obtain
$$k = [0 \;\; 1]\,U^{-1}e(A) = [0 \;\; 1]\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 32 & 8 \\ 0 & 32 \end{bmatrix} = [32 \;\; 8]$$
The solution is seen to be the same as that obtained in Example 7.1.
Comments
1. Through the pole-placement design procedure described in the present section, it is always possible to
stabilize a completely controllable system by state feedback, or to improve its stability by assigning the
closed-loop poles to locations in the left-half complex plane. The design procedure, however, gives no
guidance as to where, in the left-half plane, the closed-loop poles should be located.
It appears that we can choose the magnitude of the real part of the closed-loop poles to be arbitrarily large, making the system response arbitrarily fast. However, to increase the rate at which the plant responds, the input signal to the plant must become larger, requiring large values of gains. As the magnitudes of the signals in a system increase, the likelihood of the system entering nonlinear regions of operation increases. For very large signals, this nonlinear operation will occur in almost every physical system. Hence, the linear model that is used in design no longer accurately models the physical system.
Thus, the selection of desired closed-loop poles requires a proper balance of bandwidth, overshoot,
sensitivity, control effort, and other design requirements. If the system is of second-order, then the
system dynamics (response characteristics) can be precisely correlated to the locations of the desired
closed-loop poles. For higher-order systems, the location of the closed-loop poles and the response
characteristics are not easily correlated. Hence, in determining the state-feedback gain matrix k for a
given system, it is desirable to examine, by computer simulations, the response characteristics for several
different matrices k (based on several different characteristic equations), and choose the one that gives
the best overall performance.
2. For the case of single-input systems, the gain matrix k, which places the closed-loop poles at the
desired locations, is unique.
If the dynamic system under consideration
x = Ax + Bu
has more than one input, that is, B has more than one column, then the gain matrix K in the control law
u = – Kx
has more than one row. Since each row of K furnishes n gains (n is the order of the system) that can be
adjusted, it is clear that in a controllable system there will be more gains available—than are needed—to
place all of the closed-loop poles. This is a benefit: the designer has more flexibility in the design;
it is possible to specify all the closed-loop poles and still be able to satisfy other requirements. How
should these other requirements be specified? The answer to this question may well depend on the
circumstances of the particular application. A number of results using the design freedom in multi-input
systems to improve robustness of the control system, have appeared in the literature. We will not be able
to accommodate these results in this book.
The non-uniqueness in the design of state-feedback control law for multi-input systems, is removed by
optimal control theory which is discussed in Chapter 8.
7.5 STATE OBSERVERS
Thus, either a new approach that directly accounts for the non-availability of the entire state vector (Chapter 8) has to be devised, or a suitable approximation of the state vector must be determined. The latter approach is much simpler in many situations.
The purpose of this section is to demonstrate the estimation of all the state variables of a system from the measurements that can be made on the system. If the estimate of the state vector is denoted by x̂, it would be nice if the true state in the control law given by Eqn. (7.27) could be replaced by its estimate:
u(t) = – k x̂(t) (7.28)
This indeed is possible, as we shall see in the next section.
A device (or a computer program) that estimates the state variables is called a state observer, or simply an
observer. If the state observer estimates all the state variables of the system, regardless of whether some
state variables are available for direct measurement, it is called a full-order state observer. However, if
accurate measurements of certain states are possible, we may estimate only the remaining states, and
the accurately measured signals are then used directly for feedback. The resulting observer is called a
reduced-order state observer.
In order to speed up the estimation process and provide a useful state estimate, we feed back the
difference between the measured and the estimated outputs—and correct the model continuously with
this error signal. This scheme, commonly known as ‘Luenberger state observer’, is shown in Fig. 7.6,
and the equation for it is
$$\dot{\hat{x}}(t) = A\hat{x}(t) + bu(t) + m\big(y(t) - \hat{y}(t)\big) \qquad\qquad (7.31)$$
where m is an n × 1 real constant gain matrix.
The state error vector is defined as
$$\tilde{x}(t) = x(t) - \hat{x}(t) \qquad\qquad (7.32)$$
Differentiating both sides, we get
$$\dot{\tilde{x}}(t) = \dot{x}(t) - \dot{\hat{x}}(t)$$
Substituting for $\dot{x}(t)$ and $\dot{\hat{x}}(t)$ from Eqns (7.29) and (7.31) respectively, we get
$$\dot{\tilde{x}}(t) = Ax(t) + bu(t) - A\hat{x}(t) - bu(t) - mc\big(x(t) - \hat{x}(t)\big) = (A - mc)\,\tilde{x}(t) \qquad (7.33)$$
The characteristic equation of the error is given by
$$|sI - (A - mc)| = 0 \qquad\qquad (7.34a)$$
If m can (we hope) be chosen so that (A – mc) has stable and reasonably fast roots, x̃(t) will decay to zero irrespective of x̃(0). This means that x̂(t) will converge to x(t) regardless of the value of x̂(0), and furthermore, the dynamics of the error can be chosen to be faster than the open-loop dynamics. Note that Eqn. (7.33) is independent of the applied control. This is a consequence of assuming A, b and c to be identical in the plant and the observer. Therefore, the estimation error x̃ converges to zero and remains there, independent of any known forcing function u(t) on the plant and its effect on the state x(t). If we
Fig. 7.6 The Luenberger state observer: a replica of the plant model (n parallel integrators with A, b, c) driven by u and corrected through m by the output error y – ŷ
do not have a very accurate model of the plant (A, b, c), the dynamics of the error are no longer governed
by Eqn. (7.33). However, m can typically be chosen so that the error system is stable and the error is
acceptably small, even with small modeling errors and disturbance inputs.
The selection of m can be approached in exactly the same fashion as the selection of k in the control law
design. If we specify the desired locations of the observer-error roots as
s = λ1, λ2, …, λn,
the desired observer characteristic equation is
$$(s - \lambda_1)(s - \lambda_2)\cdots(s - \lambda_n) = 0 \qquad\qquad (7.34b)$$
and one can solve for m by comparing coefficients in Eqns (7.34a) and (7.34b). However, as we shall see
shortly, this can be done only if the system (7.29) is completely observable.
The calculation of the gains using this simple technique becomes rather tedious when the order of the system is larger than three. As in the controller design, there is an observable canonical form for which the observer design equations are particularly simple. Consider a system represented by the transfer function
$$\frac{Y(s)}{U(s)} = \frac{\beta_1 s^{n-1} + \beta_2 s^{n-2} + \cdots + \beta_n}{s^n + a_1 s^{n-1} + \cdots + a_n}$$
A companion-form realization of this transfer function is given below (refer to Eqns (5.56)):
ẋ = Ax + bu
y = cx (7.35)
where
$$A = \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_n \\ 1 & 0 & \cdots & 0 & -a_{n-1} \\ 0 & 1 & \cdots & 0 & -a_{n-2} \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -a_1 \end{bmatrix}; \quad b = \begin{bmatrix} \beta_n \\ \beta_{n-1} \\ \beta_{n-2} \\ \vdots \\ \beta_1 \end{bmatrix}; \quad c = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]$$
It can easily be proved that the pair (A, c) is completely observable for all values of the aᵢ's. For this reason, the companion-form realization given by Eqn. (7.35) is referred to as the observable canonical form.
One of the advantages of the observable canonical form is that the observer gains m can be obtained from
it, just by inspection. The observer-error matrix is
$$A - mc = \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_n - m_1 \\ 1 & 0 & \cdots & 0 & -a_{n-1} - m_2 \\ 0 & 1 & \cdots & 0 & -a_{n-2} - m_3 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -a_1 - m_n \end{bmatrix}$$
which has the characteristic equation
$$s^n + (a_1 + m_n)s^{n-1} + \cdots + (a_{n-2} + m_3)s^2 + (a_{n-1} + m_2)s + (a_n + m_1) = 0$$
and the observer gains can be found by comparing the coefficients of this equation with Eqn. (7.34b).
A procedure for observer design, therefore, consists of transforming the given state variable model
to observable canonical form, solving for the observer gains, and transforming the gains back to the
original state.
We can, however, directly use the equations of the control-law design for computing the observer gain
matrix m, if we examine the resemblance between the estimation and control problems. In fact, the
two problems are mathematically equivalent. This property is called duality. The design of a full-order
observer requires the determination of the gain matrix m such that (A – mc) has desired eigenvalues
λᵢ; i = 1, 2, ..., n. This is mathematically equivalent to designing a full state-feedback controller for the ‘transposed auxiliary system’
$$\dot{\psi}(t) = A^T\psi(t) + c^T\eta(t) \qquad\qquad (7.36a)$$
with feedback
$$\eta(t) = -m^T\psi(t) \qquad\qquad (7.36b)$$
so that the closed-loop auxiliary system
$$\dot{\psi}(t) = (A^T - c^Tm^T)\psi(t) \qquad\qquad (7.37)$$
has eigenvalues λᵢ; i = 1, 2, ..., n.
Since det W = det Wᵀ, one obtains
$$\det[sI - (A^T - c^Tm^T)] = \det[sI - (A - mc)]$$
i.e., the eigenvalues of (Aᵀ – cᵀmᵀ) are the same as the eigenvalues of (A – mc).
By comparing the characteristic equation of the closed-loop system (7.19) with that of the auxiliary system (7.37), we obtain the duality relations between the control and estimation problems, given in Table 7.1. The Ackermann’s control-design formula given by Eqns (7.26) becomes the observer-design formula if the substitutions of Table 7.1 are made.
Table 7.1 Duality between control and estimation
Control    Estimation
A          Aᵀ
b          cᵀ
k          mᵀ
A necessary and sufficient condition for determination of the observer gain matrix m for the desired eigenvalues of (A – mc) is that the auxiliary system (7.36) be completely controllable. The controllability condition for this system is that the rank of
$$[c^T \;\; A^Tc^T \;\; \cdots \;\; (A^T)^{n-1}c^T]$$
is n. This is the condition for complete observability of the original system defined by Eqns (7.29). This means that a necessary and sufficient condition for estimation of the state of the system defined by Eqns (7.29) is that the system be completely observable.
Again by duality, we can say that for the case of single-output systems, the gain matrix m which places the observer poles at desired locations is unique. In the multi-output case, the same pole configuration can be achieved by various feedback gain matrices. This non-uniqueness is removed by optimal control theory, which is discussed in Chapter 8.
Example 7.4
We will consider the satellite-attitude control system of Example 7.3. The state equation of the plant is
ẋ = Ax + bu
with
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
x₁ = θ, the orientation of the satellite; x₂ = θ̇
We assume that the orientation θ can be accurately measured from the antenna signal. Therefore,
y = cx(t)
with c = [1 0]
Let us design a state observer for the system. We choose the observer to be critically damped, with a settling time of 0.4 sec (4/(ζωₙ) = 0.4, i.e., ζωₙ = 10). To satisfy these specifications, the observer poles will be placed at s = –10, –10.
The transposed auxiliary system is given by
$$\dot{\psi} = A^T\psi + c^T\eta; \quad \eta = -m^T\psi$$
The desired characteristic polynomial is (s + 10)² = s² + 20s + 100. Ackermann’s formula (7.26), with the substitutions of Table 7.1, gives
$$U = [c^T \;\; A^Tc^T] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = U^{-1}$$
$$e(A^T) = (A^T)^2 + 20A^T + 100I = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + 20\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + 100\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 100 & 0 \\ 20 & 100 \end{bmatrix}$$
The observer gain matrix is given by the equation
$$m^T = [0 \;\; 1]\,U^{-1}e(A^T) = [0 \;\; 1]\begin{bmatrix} 100 & 0 \\ 20 & 100 \end{bmatrix} = [20 \;\; 100]$$
Therefore,
$$m = \begin{bmatrix} 20 \\ 100 \end{bmatrix}$$
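The same computation can be scripted through duality: applying the ackermann sketch given after Eqns (7.26) to the pair (Aᵀ, cᵀ) yields mᵀ (our code, not the book's):

```python
import numpy as np
A = np.array([[0.0, 1.0], [0.0, 0.0]])
c = np.array([[1.0, 0.0]])
m = ackermann(A.T, c.T, [-10.0, -10.0]).reshape(-1, 1)   # duality: b <- c^T, k <- m^T
print(m.ravel())                       # [ 20. 100.]
print(np.linalg.eigvals(A - m @ c))    # both eigenvalues at -10
```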
Example 7.5
Consider once again the inverted-pendulum system of Example 7.2. Suppose that the only output
available for measurement is z(t), the position of the cart. The linearized equations governing this system
are
$$\dot{x} = Ax + bu; \quad y = cx$$
where
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ -1.4458 \\ 0 \\ 0.9639 \end{bmatrix}; \quad c = [0 \;\; 0 \;\; 1 \;\; 0]$$
We verified in Example 5.18 that this system is completely observable. In the following, we design a
full-order observer for this system. We choose the observer pole locations as – 2, – 2 ± j1, – 3. The
corresponding characteristic equation is
$$s^4 + 9s^3 + 31s^2 + 49s + 30 = 0$$
The transposed auxiliary system is given by
$$\dot{\psi}(t) = A^T\psi(t) + c^T\eta(t); \quad \eta(t) = -m^T\psi(t)$$
We will determine the gain matrix m using the design equations (7.17)–(7.23).
The controllability matrix of the pair (Aᵀ, cᵀ) is
$$U = [c^T \;\; A^Tc^T \;\; (A^T)^2c^T \;\; (A^T)^3c^T] = \begin{bmatrix} 0 & 0 & -1.0637 & 0 \\ 0 & 0 & 0 & -1.0637 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$
$$U^{-1} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -0.9401 & 0 & 0 & 0 \\ 0 & -0.9401 & 0 & 0 \end{bmatrix}$$
Therefore,
$$\mathbf{p}_1 = [0 \;\;\; -0.9401 \;\;\; 0 \;\;\; 0]$$
$$P = \begin{bmatrix} \mathbf{p}_1 \\ \mathbf{p}_1 A^T \\ \mathbf{p}_1 (A^T)^2 \\ \mathbf{p}_1 (A^T)^3 \end{bmatrix} = \begin{bmatrix} 0 & -0.9401 & 0 & 0 \\ -0.9401 & 0 & 0 & 0 \\ 0 & -15.3333 & 0 & 1 \\ -15.3333 & 0 & 1 & 0 \end{bmatrix}$$
$$|sI - A^T| = s^4 + a_1 s^3 + a_2 s^2 + a_3 s + a_4 = s^4 + 0s^3 - 16.3106\,s^2 + 0s + 0$$
$$|sI - (A^T - c^Tm^T)| = s^4 + \alpha_1 s^3 + \alpha_2 s^2 + \alpha_3 s + \alpha_4 = s^4 + 9s^3 + 31s^2 + 49s + 30$$
$$m^T = [\alpha_4 - a_4 \;\;\; \alpha_3 - a_3 \;\;\; \alpha_2 - a_2 \;\;\; \alpha_1 - a_1]\,P$$
$$= [30 \;\;\; 49 \;\;\; 47.3106 \;\;\; 9]\,P = [-184.0641 \;\;\; -753.6317 \;\;\; 9 \;\;\; 47.3106]$$
Therefore,
$$m = \begin{bmatrix} -184.0641 \\ -753.6317 \\ 9 \\ 47.3106 \end{bmatrix}$$
With this m, the observer
$$\dot{\hat{x}} = (A - mc)\hat{x} + bu + my$$
will process the cart position y(t) = z(t) and the input u(t) to continuously provide an estimate x̂(t) of the state vector x(t); any errors in the estimate will decay at least as fast as e⁻²ᵗ.
where the two rightmost terms are known, and can be considered as an input into the xₑ dynamics. Since x₁ = y, the measured dynamics are given by the scalar equation
$$\dot{x}_1 = \dot{y} = a_{11}y + \mathbf{a}_{1e}x_e + b_1 u \qquad\qquad (7.40)$$
If we collect the known terms of Eqn. (7.40) on one side, we get
$$\underbrace{\dot{y} - a_{11}y - b_1 u}_{\text{known measurement}} = \mathbf{a}_{1e}x_e \qquad\qquad (7.41)$$
Note that Eqns (7.39) and (7.41) have the same relationship to the state xe that the original equations (7.38) had to the entire state x. Following this line of reasoning, we can establish the following substitutions in the original observer-design equations, to obtain a (reduced-order) observer of xe:
x ← xe
A ← Aee
bu ← ae1 y + be u (7.42)
y ← ẏ – a11 y – b1 u
c ← a1e
Making these substitutions into the equation for the full-order observer (Eqn. (7.31)), we obtain the equation of the reduced-order observer:
$$\dot{\hat{x}}_e = A_{ee}\hat{x}_e + \underbrace{\mathbf{a}_{e1}y + \mathbf{b}_e u}_{\text{input}} + m\big(\underbrace{\dot{y} - a_{11}y - b_1 u}_{\text{measurement}} - \mathbf{a}_{1e}\hat{x}_e\big) \qquad (7.43)$$
If we define the estimation error as
$$\tilde{x}_e = x_e - \hat{x}_e \qquad\qquad (7.44)$$
the dynamics of the error are given by subtracting Eqn. (7.43) from Eqn. (7.39):
$$\dot{\tilde{x}}_e = (A_{ee} - m\mathbf{a}_{1e})\,\tilde{x}_e \qquad\qquad (7.45)$$
Its characteristic equation is given by
$$|sI - (A_{ee} - m\mathbf{a}_{1e})| = 0 \qquad\qquad (7.46)$$
We design the dynamics of this observer by selecting m so that Eqn. (7.46) matches a desired reduced-order characteristic equation. To carry out the design using state regulator results, we form a ‘transposed auxiliary system’
$$\dot{\psi}(t) = A_{ee}^T\psi(t) + \mathbf{a}_{1e}^T\eta(t); \quad \eta(t) = -m^T\psi(t) \qquad\qquad (7.47)$$
Use of Ackermann’s formula given by Eqns (7.26) for this auxiliary system gives the gains m of the
reduced-order observer. We should point out that the conditions for the existence of the reduced-order
observer are the same as for the full-order observer—namely observability of the pair (A, c).
Let us now look at the implementational aspects of the reduced-order observer given by Eqn. (7.43). This equation can be rewritten as
$$\dot{\hat{x}}_e = (A_{ee} - m\mathbf{a}_{1e})\hat{x}_e + (\mathbf{a}_{e1} - ma_{11})y + (\mathbf{b}_e - mb_1)u + m\dot{y} \qquad (7.48)$$
The fact that the reduced-order observer requires the derivative of y(t) as an input appears to present a practical difficulty. It is known that differentiation amplifies noise, so if y is noisy, the use of ẏ is unacceptable. To get around this difficulty, we define the new state
$$x'_e \triangleq \hat{x}_e - my \qquad\qquad (7.49a)$$
Then, in terms of this new state, the implementation of the reduced-order observer is given by
$$\dot{x}'_e = (A_{ee} - m\mathbf{a}_{1e})\hat{x}_e + (\mathbf{a}_{e1} - ma_{11})y + (\mathbf{b}_e - mb_1)u \qquad (7.49b)$$
and ẏ no longer appears directly. A block-diagram representation of the reduced-order observer is shown in Fig. 7.7.
Fig. 7.7 Block-diagram implementation of the reduced-order observer, Eqns (7.49): n – 1 parallel integrators with gains (be – mb1), (ae1 – ma11), (Aee – ma1e) and m
Example 7.6
In Example 7.4, a second-order observer for the satellite-attitude control system was designed with the
observer poles at s = – 10, – 10. We now design a reduced-order (first-order) observer for the system with
observer pole at s = – 10.
The plant equations are
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u; \quad y = [1 \;\; 0]\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$
The partitioned matrices are
$$\begin{bmatrix} a_{11} & \mathbf{a}_{1e} \\ \mathbf{a}_{e1} & A_{ee} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}; \quad \begin{bmatrix} b_1 \\ \mathbf{b}_e \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
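With these scalar partitions the design collapses to one line; a sketch (our code, not the book's) that yields the value m = 10 used later in Example 7.7:

```python
# Scalar reduced-order observer design for the satellite (our sketch):
# Aee = 0 and a1e = 1, so |s - (Aee - m*a1e)| = s + m, and the desired
# observer pole at s = -10 gives m = 10.
Aee, a1e, desired_pole = 0.0, 1.0, -10.0
m = (Aee - desired_pole) / a1e
print(m)   # 10.0
```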
7.6 THE SEPARATION PRINCIPLE
In Sections 7.2–7.4, we studied the design of control laws for systems in which the state variables are all
accessible for measurement. We promised to overcome the difficulty of not being able to measure all the
state variables by the use of an observer to estimate those state variables that cannot be measured. Then
in Section 7.5, we studied the design of observers for systems with known inputs, but not when the state
estimate is used for the purpose of control. We are now ready to combine the state-feedback control law
with the observer to obtain a compensator for linear systems in which not all the state variables can be
measured.
Consider the completely controllable and completely observable system defined by the equations
ẋ = Ax + bu
y = cx (7.50)
Suppose we have designed a state-feedback control law
u = – kx (7.51)
using the methods of Section 7.4. And also suppose we have designed a full-order observer
$$\dot{\hat{x}} = A\hat{x} + bu + m(y - c\hat{x}) \qquad\qquad (7.52)$$
using the methods of Section 7.5.
For the state-feedback control based on the observed state x̂,
u = – k x̂ (7.53)
The control system based on combining the state-feedback control law and state observer, has the
configuration shown in Fig. 7.8. Note that the number of state variables in the compensator is equal to
the order of the embedded observer and hence is equal to the order of the plant. Thus, the order of the
overall closed-loop system, when a full-order observer is used in the compensator, is 2n for a plant of
Fig. 7.8 The observer-based control system: the control law u = – k x̂ and the full-order observer together constitute the compensator
order n. We are interested in the dynamic behavior of the 2nth-order system comprising the plant and the
compensator. With the control law (7.53) used, the plant dynamics become
$$\dot{x} = Ax - bk\hat{x} = (A - bk)x + bk(x - \hat{x}) \qquad\qquad (7.54)$$
The difference between the actual state x and the observed state x̂ has been defined as the error x̃:
$$\tilde{x} = x - \hat{x}$$
Substitution of the error vector into Eqn. (7.54) gives
$$\dot{x} = (A - bk)x + bk\tilde{x} \qquad\qquad (7.55)$$
Note that the observer error was given by Eqn. (7.33), repeated here:
$$\dot{\tilde{x}} = (A - mc)\tilde{x} \qquad\qquad (7.56)$$
Combining Eqns (7.55) and (7.56), we obtain
$$\begin{bmatrix} \dot{x} \\ \dot{\tilde{x}} \end{bmatrix} = \begin{bmatrix} A - bk & bk \\ 0 & A - mc \end{bmatrix}\begin{bmatrix} x \\ \tilde{x} \end{bmatrix} \qquad\qquad (7.57)$$
Equation (7.57) describes the dynamics of the 2n-dimensional system of Fig. 7.8. The characteristic
equation for the system is
| sI – (A – bk)| |sI – (A – mc)| = 0
In other words, the poles of the combined system consist of the union of control and observer roots. This
means that the design of the control law and the observer can be carried out independently. Yet, when they
are used together, the roots are unchanged. This is a special case of the separation principle, which holds
in much more general contexts and allows for the separate design of control law and estimator in certain
stochastic cases.
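The union-of-roots property is easy to confirm numerically. A sketch using the satellite gains k = [32 8] and m = [20 100]ᵀ (our code, not the book's):

```python
import numpy as np
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]]); c = np.array([[1.0, 0.0]])
k = np.array([[32.0, 8.0]]); m = np.array([[20.0], [100.0]])
# Block-triangular system matrix of Eqn (7.57)
Acl = np.block([[A - b @ k, b @ k],
                [np.zeros((2, 2)), A - m @ c]])
print(np.linalg.eigvals(Acl).round(3))
# -> control roots -4 +/- j4 together with observer roots -10, -10
```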
To compare the state variable method of design with the transform methods discussed in earlier chapters, we obtain the transfer function model of the compensator used in the control system of Fig. 7.8. The state variable model for this compensator is obtained by including the feedback law u = – k x̂ (since it is part of the compensator) in the observer equation (7.52):
$$\dot{\hat{x}} = (A - bk - mc)\hat{x} + my$$
$$u = -k\hat{x} \qquad\qquad (7.58)$$
460 Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
The formula for conversion of a state variable model to the transfer function model is given by Eqn. (5.28). Applying this result to the model given by Eqns (7.58), we obtain
$$\frac{U(s)}{-Y(s)} = D(s) = k\,(sI - A + bk + mc)^{-1}m \qquad\qquad (7.59)$$
Figure 7.9 shows the block diagram representation of the system with the observer-based controller.
Fig. 7.9 The closed-loop system with the observer-based dynamic compensator D(s)
Note that the poles of D(s) in Eqn. (7.59) were neither specified nor used during the state-variable design
process. It may even happen that D(s) has one or more poles in the right-half plane; the compensator, in
other words, could turn out to be unstable. But the closed-loop system, if so designed, would be stable.
There is, however, one problem if the compensator is unstable. The open-loop poles of the system are
the poles of the plant and also the poles of the compensator. If the latter are in the right-half plane, then
the closed-loop poles may be in the right-half plane when the loop gain becomes too small. Robustness
considerations put certain restrictions on the use of unstable compensators to stabilize a system.
Example 7.7
In this example, we study the closed-loop system obtained by implementing the state-feedback control
law of Example 7.3 and state-observer design of Examples 7.4 and 7.6, for the attitude control of a
satellite. The plant model is given by
ẋ = Ax + bu; y = cx
with
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad c = [1 \;\; 0]$$
In Example 7.3, the gain matrix required to place the closed-loop poles at s = – 4 ± j4 was calculated to
be
k = [32 8]
If both the state variables are available for feedback, the control law becomes
u = – kx = – [32 8]x
resulting in the closed-loop system
$$\dot{x} = (A - bk)x = \begin{bmatrix} 0 & 1 \\ -32 & -8 \end{bmatrix} x$$
Figure 7.10a shows the response of the system to an initial condition x(0) = [1 0]T. Assume now that
the state-feedback control law is implemented using a full-order observer. In Example 7.4, the observer
gain matrix was calculated to be
$$m = \begin{bmatrix} 20 \\ 100 \end{bmatrix}$$
Fig. 7.10 Response y(t) to initial conditions: (a) state-feedback control, (b) compensation with full-order observer, (c) compensation with reduced-order observer
The state variable model of the compensator, obtained by cascading the state-feedback control law and the state observer, is (refer to Eqns (7.58))
$$\dot{\hat{x}} = (A - bk - mc)\hat{x} + my = \begin{bmatrix} -20 & 1 \\ -132 & -8 \end{bmatrix}\hat{x} + \begin{bmatrix} 20 \\ 100 \end{bmatrix} y$$
$$u = -k\hat{x} = -[32 \;\; 8]\,\hat{x}$$
The compensator transfer function is (refer to Eqn. (7.59))
$$D(s) = \frac{U(s)}{-Y(s)} = k\,(sI - A + bk + mc)^{-1}m = \frac{1440\,s + 3200}{s^2 + 28s + 292}$$
The state variable model of the closed-loop system can be constructed as follows:
$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = u = -32\hat{x}_1 - 8\hat{x}_2$$
$$\dot{\hat{x}}_1 = -20\hat{x}_1 + \hat{x}_2 + 20y = -20\hat{x}_1 + \hat{x}_2 + 20x_1$$
$$\dot{\hat{x}}_2 = -132\hat{x}_1 - 8\hat{x}_2 + 100x_1$$
Therefore,
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{\hat{x}}_1 \\ \dot{\hat{x}}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -32 & -8 \\ 20 & 0 & -20 & 1 \\ 100 & 0 & -132 & -8 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \hat{x}_1 \\ \hat{x}_2 \end{bmatrix}$$
Figure 7.10b shows the response to an initial condition
[1 0 0 0]T
Consider now the implementation of the state-feedback control law using the reduced-order observer. In Example 7.6, the following model was obtained to estimate the state x₂ (state x₁ is directly measured and fed back, and is not estimated using an observer):
x̂₂ = x₂′ + 10y
ẋ₂′ = – 10 x̂₂ + u
The control law is given by
u = – 32x1 – 8 x̂2
From these equations, the following transfer function model of the compensator is obtained:
$$\frac{U(s)}{-Y(s)} = \frac{112\,(s + 2.86)}{s + 18}$$
The reduced-order compensator is precisely the lead network; this is a pleasant discovery, as it shows
that the classical and state variable methods can result in exactly the same type of compensation.
The state variable model of the closed-loop system with the reduced-order compensator is derived below.
$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = u = -32x_1 - 8\hat{x}_2 = -32x_1 - 8(x'_2 + 10x_1) = -112x_1 - 8x'_2$$
$$\dot{x}'_2 = -10\hat{x}_2 + u = -10\hat{x}_2 - 32x_1 - 8\hat{x}_2 = -18(x'_2 + 10x_1) - 32x_1 = -18x'_2 - 212x_1$$
Therefore,
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}'_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ -112 & 0 & -8 \\ -212 & 0 & -18 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x'_2 \end{bmatrix}$$
Figure 7.10c shows the response to an initial condition
[1 0 0]T
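The three responses of Fig. 7.10 can be reproduced with a few lines of simulation. The sketch below (our code, not the book's) applies simple forward-Euler integration to the three closed-loop state models derived above:

```python
import numpy as np

def response(Acl, x0, dt=1e-3, t_end=1.0):
    """Forward-Euler integration; returns samples of y(t) = x1(t)."""
    x, ys = np.array(x0, dtype=float), []
    for _ in range(int(t_end / dt)):
        ys.append(x[0])
        x = x + dt * (Acl @ x)
    return np.array(ys)

A_sf = np.array([[0, 1], [-32, -8]], dtype=float)                # state feedback alone
A_fo = np.array([[0, 1, 0, 0], [0, 0, -32, -8],                  # with full-order observer
                 [20, 0, -20, 1], [100, 0, -132, -8]], dtype=float)
A_ro = np.array([[0, 1, 0], [-112, 0, -8], [-212, 0, -18]], dtype=float)  # reduced-order
for Acl, x0 in [(A_sf, [1, 0]), (A_fo, [1, 0, 0, 0]), (A_ro, [1, 0, 0])]:
    print(response(Acl, x0)[::200].round(3))   # y(t) sampled every 0.2 s
```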
Comments
1. Underlying the separation principle is a critical assumption, namely, that the observer includes an
exact dynamic model of the plant—the process under control. This assumption is almost never valid in
reality. In practical systems, the precise dynamic model is rarely known. Even that which is known about the real process dynamics is often too complicated to include in the observer.
practice, be configured to use only an approximate model of the plant. This encounter with the real world
does not vitiate the separation principle, but means that the effect of an inaccurate plant model must be
considered. If the design achieved through use of the separation principle is robust, it will be able to
tolerate uncertainty of the plant dynamics. Doyle and Stein [124] have proposed a ‘design adjustment
procedure’ to improve robustness with observers.
2. One of the considerations in the design of a gain matrix k in the state-feedback control law, is that the
resulting control signal u must not be too large; the use of large control effort increases the likelihood of
the system entering nonlinear regions of operation. Since the function of the observer is only to process
data, there is no limitation on the size of the gain matrix m for its realization. (Nowadays, it is all but
certain that the entire compensator would be realized by a digital computer. With floating-point numerics,
a digital computer would be capable of handling variables of any reasonable dynamic range.) Though the realization of the observer may impose no limitation on the observer dynamics, it may, nevertheless, be
desirable to limit the observer speed of response (bandwidth). Remember that real sensors are noisy,
and much of the noise occurs at relatively high frequencies. By limiting the bandwidth of the observer,
we can attenuate and smoothen the noise contribution to the compensator output—which is the control
signal.
3. The desired closed-loop poles, to be generated by state feedback, are chosen to satisfy the performance
requirements. The poles of the observer are usually chosen so that the observer response is much faster
than the system response. A rule of thumb is to choose an observer response at least two to five times
faster than the system response. This is to ensure a faster decay of estimation errors compared with the
desired dynamics, thus causing the closed-loop poles generated by state feedback to dominate the total
response. If the sensor noise is large enough to be a major concern, one may decide to choose the observer
poles to be slower than two times the system poles, which would yield a system with lower bandwidth
and more noise-smoothing. However, the total system response in this case will be strongly influenced
by the observer poles. Doyle and Stein [124] have shown that the commonly suggested approach of
‘speeding-up’ observer dynamics will not work in all cases. They have suggested that procedures which
drive some observer poles towards stable plant zeros and the rest towards infinity achieve the desired
objective.
4. A final comment concerns the reduced-order observer. Due to the presence of a direct transmission
term (refer to Fig. 7.7), the reduced-order observer has much higher bandwidth from sensor to control,
compared with the full-order observer. Therefore, if sensor noise is a significant factor, the reduced-order
observer is less attractive, since the potential savings in complexity is more than offset by the increased
sensitivity to noise.
In general, these considerations should be taken into account in the design of a control system. This can be done by proper introduction of the reference input into the system equations.
Consider the completely controllable SISO linear time-invariant system with nth-order state variable
model
ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t) (7.60)
We assume that all the n state variables can be accurately measured at all times. Implementation of an appropriately designed control law of the form
u(t) = – kx(t)
results in a state regulator system; any perturbation in the system state will asymptotically decay to the equilibrium state x = 0.
Let us now assume that, for the system given by Eqns (7.60), the desired steady-state value of the controlled variable y(t) is a constant reference input r. For this servo system, the desired equilibrium state x_s is a constant point in state space, and is governed by the equation
cx_s = r (7.61)
We can formulate this command-following problem as a ‘shifted regulator problem’, by shifting the origin
of the state space to the equilibrium point xs. Formulation of the shifted regulator problem is as follows.
Let u_s be the input needed to maintain x(t) at the equilibrium point x_s, i.e. (refer to Eqns (7.60)),
$$0 = Ax_s + bu_s \qquad\qquad (7.62)$$
Assuming for the present that a u_s exists that satisfies Eqns (7.61)–(7.62), we define the shifted input, shifted state, and shifted controlled variable as
$$\bar{u}(t) = u(t) - u_s$$
$$\bar{x}(t) = x(t) - x_s \qquad\qquad (7.63)$$
$$\bar{y}(t) = y(t) - r$$
The shifted variables satisfy the equations
$$\dot{\bar{x}} = A\bar{x} + b\bar{u} \qquad\qquad (7.64)$$
$$\bar{y} = c\bar{x}$$
This system possesses a time-invariant asymptotically stable control law
$$\bar{u} = -k\bar{x} \qquad\qquad (7.65)$$
The application of this control law ensures that
$$\bar{x} \to 0 \quad (x(t) \to x_s,\; y(t) \to r)$$
In terms of the original state variables, the total control effort is
$$u(t) = -kx(t) + u_s + kx_s \qquad\qquad (7.66)$$
Manipulation of Eqn. (7.62) gives
$$(A - bk)x_s + b(u_s + kx_s) = 0 \quad \text{or} \quad x_s = -(A - bk)^{-1}b(u_s + kx_s)$$
or
$$cx_s = r = -c(A - bk)^{-1}b(u_s + kx_s)$$
Fig. 7.11 Servo configuration: the reference r is scaled by a feedforward gain N, with state-feedback gain k closing the loop around the plant
Example 7.8
The system considered in this example is the attitude control system for a rigid satellite. The plant
equations are (refer to Example 7.1)
ẋ = Ax + bu; y = cx
where
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad c = [1 \;\; 0]$$
x₁(t) = position θ(t); x₂(t) = velocity θ̇(t)
The reference input r = θᵣ is a step function. The desired steady state is
$$x_s = [\theta_r \;\; 0]^T,$$
which is a non-null state.
As the plant has integrating property, the steady-state value u_s of the input must be zero (otherwise the output cannot stay constant). For this case, the shifted regulator problem may be formulated as follows:
$$\bar{x}_1 = x_1 - \theta_r; \quad \bar{x}_2 = x_2$$
The shifted state variables satisfy the equation
$$\dot{\bar{x}} = A\bar{x} + b\bar{u}$$
466 Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
Fig. 7.12 Servo implementation for the satellite: θr enters through gain k1, with rate feedback k2 around the double-integrator plant (y = x1 = θ)
The plant state x, augmented with the integral state z of the output error, satisfies (Eqn. (7.70))
$$\begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} A & 0 \\ c & 0 \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix} + \begin{bmatrix} b \\ 0 \end{bmatrix} u + \begin{bmatrix} 0 \\ -1 \end{bmatrix} r \qquad\qquad (7.70)$$
Since r is constant, in the steady state ẋ = 0 and ż = 0, provided that the system is stable. This means that the steady-state solutions x_s, z_s and u_s must satisfy the equation
$$\begin{bmatrix} 0 \\ -1 \end{bmatrix} r = -\begin{bmatrix} A & 0 \\ c & 0 \end{bmatrix}\begin{bmatrix} x_s \\ z_s \end{bmatrix} - \begin{bmatrix} b \\ 0 \end{bmatrix} u_s$$
Substituting this for the last term in Eqn. (7.70) gives
$$\begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} A & 0 \\ c & 0 \end{bmatrix}\begin{bmatrix} x - x_s \\ z - z_s \end{bmatrix} + \begin{bmatrix} b \\ 0 \end{bmatrix}(u - u_s) \qquad\qquad (7.71)$$
Now define new state variables as follows, representing the deviations from the steady state:
$$\bar{x} = \begin{bmatrix} x - x_s \\ z - z_s \end{bmatrix}; \quad \bar{u} = u - u_s \qquad\qquad (7.72a)$$
In terms of these variables, Eqn. (7.71) becomes
$$\dot{\bar{x}} = \bar{A}\bar{x} + \bar{b}\bar{u} \qquad\qquad (7.72b)$$
$$\bar{A} = \begin{bmatrix} A & 0 \\ c & 0 \end{bmatrix}; \quad \bar{b} = \begin{bmatrix} b \\ 0 \end{bmatrix}$$
The significance of this result is that, by defining the deviations from steady state as state and control
variables, the design problem has been reformulated to be the standard regulator problem, with x̄ = 0 as the desired state. We assume that an asymptotically stable solution to this problem exists, and is given by
$$\bar{u} = -\bar{k}\bar{x}$$
Partitioning k̄ appropriately and using Eqns (7.72a) yields
$$\bar{k} = [k_p \;\; k_i]$$
$$u - u_s = -[k_p \;\; k_i]\begin{bmatrix} x - x_s \\ z - z_s \end{bmatrix} = -k_p(x - x_s) - k_i(z - z_s)$$
The steady-state terms must balance; therefore,
$$u = -k_p x - k_i z = -k_p x - k_i\int_0^t \big(y(\tau) - r\big)\,d\tau \qquad\qquad (7.73)$$
The control thus consists of proportional state feedback and integral control of the output error. At steady state, x̄ = 0; therefore,
$$\lim_{t \to \infty} \dot{z}(t) = 0, \quad \text{i.e.,} \quad \lim_{t \to \infty} y(t) = r$$
Thus, by the integrating action, the output y is driven to the no-offset condition.
This will be true even in the presence of constant disturbances acting on the plant. The block diagram of Fig. 7.13 shows the configuration of the feedback control system with proportional state feedback and integral control of the output error.
468 Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
Fig. 7.13 Feedback control system with proportional state feedback kp and integral control ki∫ of the output error
Example 7.9
Suppose the system is given by
$$\frac{Y(s)}{U(s)} = \frac{1}{s + 3}$$
with a constant reference command signal. We wish to have integral control with closed-loop poles corresponding to ωₙ = 5 and ζ = 0.5, which is equivalent to asking for the desired characteristic equation
$$s^2 + 5s + 25 = 0$$
The plant model is
ẋ = – 3x + u; y = x
Augmenting the plant state x with the integral state z, defined by
$$z(t) = \int_0^t \big(y(\tau) - r\big)\,d\tau,$$
we obtain
$$\begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} -3 & 0 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u + \begin{bmatrix} 0 \\ -1 \end{bmatrix} r$$
In terms of state and control variables representing deviations from the steady state:
$$\bar{x} = \begin{bmatrix} x - x_s \\ z - z_s \end{bmatrix}; \quad \bar{u} = u - u_s$$
the state equation becomes
$$\dot{\bar{x}} = \begin{bmatrix} -3 & 0 \\ 1 & 0 \end{bmatrix}\bar{x} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}\bar{u}$$
We can find k̄ from
$$\det\left(sI - \begin{bmatrix} -3 & 0 \\ 1 & 0 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}\bar{k}\right) = s^2 + 5s + 25$$
or
$$s^2 + (3 + \bar{k}_1)s + \bar{k}_2 = s^2 + 5s + 25$$
Therefore,
$$\bar{k} = [2 \;\; 25] = [k_p \;\; k_i]$$
The control is
$$u = -k_p x - k_i z = -2x - 25\int_0^t \big(y(\tau) - r\big)\,d\tau$$
The control configuration is shown in Fig. 7.14, along with a disturbance input w. This system will
behave according to the desired closed-loop roots (wn = 5, z = 0.5) and will exhibit the characteristics of
integral control: zero steady-state error to a step r and zero steady-state error to a constant disturbance w.
Fig. 7.14 Integral control of the plant 1/(s + 3), with integral gain 25 and a disturbance input w entering at the plant input
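A short simulation sketch (our code, not the book's) illustrates both properties at once, with a step reference r and a constant disturbance w entering at the plant input as in Fig. 7.14:

```python
import numpy as np

def steady_output(r=1.0, w=0.5, dt=1e-3, t_end=4.0):
    x = z = 0.0                       # plant state (y = x) and integral state z
    for _ in range(int(t_end / dt)):
        u = -2.0 * x - 25.0 * z       # u = -kp*x - ki*z with k = [2 25]
        x += dt * (-3.0 * x + u + w)  # plant: x' = -3x + u + w
        z += dt * (x - r)             # z integrates y - r
    return x

print(round(steady_output(), 4))      # ~ 1.0: y reaches r despite the disturbance w
```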
7.9 STATE FEEDBACK
This section covers the key results on the pole-placement design, and state observers for discrete-time
systems. Our discussion will be brief because of the strong analogy between the discrete-time and
continuous-time cases. Consider the discretized model of the given plant:
x(k + 1) = Fx(k) + gu(k)
y(k) = cx(k) (7.74)
where x is the n × 1 state vector, u is the scalar input, and y is the scalar output; F, g, and c are, respectively, n × n, n × 1 and 1 × n real constant matrices; and k = 0, 1, 2, … .
We will carry out the design of digital control system for the plant (7.74) in two steps. One step assumes
that we have all the elements of the state vector at our disposal for feedback purposes. The next step is to
design a state observer which estimates the entire state vector, when provided with the measurements of
the system indicated by the output equation in (7.74).
The final step will consist of combining the control law and the observer, where the control law
calculations are based on the estimated state variables rather than the actual state.
470 Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
Step 4 The required state-feedback gain matrix is determined from the following equation:
$$k = [\alpha_n - a_n \;\;\; \alpha_{n-1} - a_{n-1} \;\;\; \cdots \;\;\; \alpha_1 - a_1]\,P \qquad\qquad (7.81)$$
The Ackermann’s formula given below is more convenient for computer solution (refer to Eqns (7.26)):
$$k = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]\,U^{-1}e(F) \qquad\qquad (7.82)$$
where
$$e(F) = F^n + \alpha_1 F^{n-1} + \cdots + \alpha_{n-1}F + \alpha_n I$$
$$U = [g \;\; Fg \;\; \cdots \;\; F^{n-1}g]$$
Example 7.10
Consider the problem of attitude control of a rigid satellite. A state variable model of the plant is (refer
to Eqn. (7.6))
$$\dot{x} = Ax + bu = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u$$
where x₁ = θ is the attitude angle and u is the system input.
The discrete-time description of the plant (assuming that the input u is applied through a zero-order hold
(ZOH)) is given below (refer to Section 6.3).
x(k + 1) = Fx(k) + gu(k) (7.83)
where
$$F = e^{AT} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}; \quad g = \int_0^T e^{At}b\,dt = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}$$
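These two matrices can be computed for any A, b, T with one matrix exponential, via the standard augmented-matrix identity; a SciPy sketch (our code, not the book's):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
T = 0.1
# exp([[A, b], [0, 0]] T) = [[e^{AT}, (integral_0^T e^{At} dt) b], [0, 1]]
M = np.zeros((3, 3)); M[:2, :2] = A; M[:2, 2:] = b
Phi = expm(M * T)
F, g = Phi[:2, :2], Phi[:2, 2:]
print(F)   # [[1.  0.1], [0.  1. ]]
print(g)   # [[0.005], [0.1 ]]  i.e., [T^2/2, T]^T
```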
An estimation scheme employing a full-order observer is shown in Fig. 7.15, and the equation for it is
x̂(k + 1) = F x̂(k) + gu(k) + m(y(k) – cx̂(k)) (7.85)
where m is an n × 1 real constant gain matrix. We will call this a prediction observer because the estimate
x̂ (k + 1) is one sampling period ahead of the measurement y(k).
Fig. 7.15 The prediction observer: a model of the plant (n parallel unit delayers with F, g, c) corrected through the gain m by the output error
A difference equation describing the behavior of the error is obtained by subtracting Eqn. (7.85) from
Eqn. (7.74):
$$\tilde{x}(k + 1) = (F - mc)\,\tilde{x}(k) \qquad\qquad (7.86)$$
where
$$\tilde{x} = x - \hat{x}$$
Pole-Placement Design and State Observers 473
Current Observer
The prediction observer given by Eqn. (7.85) arrives at the state estimate x̂(k) after receiving
measurements up through y(k – 1). Hence the control u(k) = – k x̂(k) does not utilize the information on
the current output y(k). For higher-order systems controlled with a slow computer, or any time the sample
rates are fast compared to the computation time, this delay between making a measurement and using it in the control law may be a blessing. In many systems, however, the computation time required to evaluate Eqn. (7.85) is quite short compared to the sample period, and the control based on the prediction observer may not be as accurate as it could be.
An alternative formulation of the state observer is to use y(k) to obtain the state estimate x̂(k). This can be done by separating the estimation process into two steps. In the first step, we determine x̄(k + 1), an approximation of x(k + 1), based on x̂(k) and u(k), using the model of the plant. In the second step, we use y(k + 1) to improve x̄(k + 1); the improved x̄(k + 1) is x̂(k + 1). The state observer based on this formulation is called the current observer. The current observer equations are given by
$$\bar{x}(k + 1) = F\hat{x}(k) + gu(k) \qquad\qquad (7.89a)$$
$$\hat{x}(k + 1) = \bar{x}(k + 1) + m\big[y(k + 1) - c\bar{x}(k + 1)\big] \qquad\qquad (7.89b)$$
In practice, the current observer cannot be implemented exactly because it is impossible to sample,
perform calculations, and output with absolutely no time elapse. However, the errors introduced due to
computational delays will be negligible if the computation time is quite short—compared to the sample
period.
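One update cycle of the current observer can be sketched as follows (our code and naming, not the book's; the gains shown in the usage line are merely illustrative):

```python
import numpy as np

def current_observer_step(F, g, m, c, x_hat, u_k, y_next):
    """One cycle of Eqns (7.89): model prediction, then measurement update."""
    x_bar = F @ x_hat + g * u_k                           # (7.89a): predict x(k+1)
    return x_bar + m * (y_next - c @ x_bar).item()        # (7.89b): correct with y(k+1)

# Illustrative use with the ZOH satellite model (this m is arbitrary, for demo only)
F = np.array([[1.0, 0.1], [0.0, 1.0]]); g = np.array([[0.005], [0.1]])
c = np.array([[1.0, 0.0]]); m = np.array([[0.5], [2.0]])
x_hat = current_observer_step(F, g, m, c, np.zeros((2, 1)), 1.0, 0.01)
print(x_hat.ravel())
```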
The error equation for the current observer is similar to the error equation for the prediction observer, given in (7.86). The current-estimate error equation is obtained by subtracting Eqns (7.89) from (7.74):
$$\tilde{x}(k + 1) = x(k + 1) - \hat{x}(k + 1)$$
$$= Fx(k) + gu(k) - F\hat{x}(k) - gu(k) - mc\big[x(k + 1) - \bar{x}(k + 1)\big]$$
$$= F\tilde{x}(k) - mcF\tilde{x}(k) = (F - mcF)\,\tilde{x}(k) \qquad\qquad (7.90)$$
Therefore, the gain matrix m is obtained exactly as before, except that c is replaced by cF.
Reduced-Order Observer
The observers discussed so far are designed to reconstruct the entire state vector, given measurements of some of the states. To pursue an observer for only the unmeasured states, we partition the state vector into two parts: one part is x₁, which is directly measured, and the other part is xₑ, representing the state variables that need to be estimated. If we partition the system matrices accordingly, the complete description of the system (7.74) is given by
$$\begin{bmatrix} x_1(k + 1) \\ x_e(k + 1) \end{bmatrix} = \begin{bmatrix} f_{11} & \mathbf{f}_{1e} \\ \mathbf{f}_{e1} & F_{ee} \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_e(k) \end{bmatrix} + \begin{bmatrix} g_1 \\ \mathbf{g}_e \end{bmatrix} u(k) \qquad\qquad (7.91a)$$
$$y(k) = [1 \;\; 0]\begin{bmatrix} x_1(k) \\ x_e(k) \end{bmatrix} \qquad\qquad (7.91b)$$
The portion describing the dynamics of the unmeasured states is
$$x_e(k + 1) = F_{ee}x_e(k) + \underbrace{\mathbf{f}_{e1}x_1(k) + \mathbf{g}_e u(k)}_{\text{known input}} \qquad\qquad (7.92)$$
The measured dynamics are given by the scalar equation
$$\underbrace{y(k + 1) - f_{11}y(k) - g_1 u(k)}_{\text{known measurement}} = \mathbf{f}_{1e}x_e(k) \qquad\qquad (7.93)$$
Equations (7.92) and (7.93) have the same relationship to the state xe that the original equation (7.74) had to the entire state x. Following this reasoning, we arrive at the desired observer by making the following substitutions into the observer equations:
x ← xe
F ← Fee
gu(k) ← fe1 y(k) + ge u(k) (7.94)
y(k) ← y(k + 1) – f11 y(k) – g1 u(k)
c ← f1e
Thus, the reduced-order observer equations are
$$\hat{x}_e(k + 1) = F_{ee}\hat{x}_e(k) + \underbrace{\mathbf{f}_{e1}y(k) + \mathbf{g}_e u(k)}_{\text{input}} + m\big[y(k + 1) - f_{11}y(k) - g_1u(k) - \mathbf{f}_{1e}\hat{x}_e(k)\big] \qquad (7.95)$$
Subtracting Eqn. (7.95) from (7.92) yields the error equation
$$\tilde{x}_e(k + 1) = (F_{ee} - m\mathbf{f}_{1e})\,\tilde{x}_e(k) \qquad\qquad (7.96)$$
where
$$\tilde{x}_e = x_e - \hat{x}_e$$
The characteristic equation is given by
$$|zI - (F_{ee} - m\mathbf{f}_{1e})| = 0 \qquad\qquad (7.97)$$
We design the dynamics of this observer by selecting m so that Eqn. (7.97) matches a desired reduced-order characteristic equation. The design may be carried out directly, or by using the duality principle.
7.9.3
If we implement the state-feedback control law using an estimated state vector, the control system can be
completed. A schematic of such a scheme, using a prediction observer1, is shown in Fig. 7.16. Note that
by the separation principle, the control law and the state observer can be designed separately, and yet
used together.
The portion within the dotted line in Fig. 7.16 corresponds to dynamic compensation. The state variable
model of the compensator is obtained by including the state-feedback control (since it is a part of the
compensator) in the observer equations, yielding
x̂(k + 1) = (F – gk – mc) x̂(k) + my(k) (7.98)
u(k) = – k x̂(k)
Fig. 7.16 Digital control system with state feedback implemented through a prediction observer; the dotted portion is the dynamic compensator
The formula for conversion of a discrete-time state variable model to the transfer function model is given
by Eqn. (6.3). Applying this result to the model (7.98), we obtain
$$\frac{U(z)}{-Y(z)} = D(z) = k\,(zI - F + gk + mc)^{-1}m \qquad\qquad (7.99)$$
¹ We will design the compensator only for the prediction observer case. The other observers give very similar results.
Example 7.11
As an example of complete design, we will add a state observer to the satellite-attitude control
considered in Example 7.10. The system equations of motion are (refer to Eqn. (7.83))
$$x(k + 1) = Fx(k) + gu(k) = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} x(k) + \begin{bmatrix} T^2/2 \\ T \end{bmatrix} u(k)$$
We assume that the position state x1 is measured and the velocity state x2 is to be estimated; the
measurement equation is, therefore,
y(k) = cx(k) = [1 0] x(k)
We will design a first-order observer for the state x2(k).
The partitioned matrices are
$$\begin{bmatrix} f_{11} & f_{1e} \\ f_{e1} & F_{ee} \end{bmatrix} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}; \quad \begin{bmatrix} g_1 \\ g_e \end{bmatrix} = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}$$
From Eqn. (7.97), we find the characteristic equation in terms of m:
$$z - (1 - mT) = 0$$
For the observer to be about four times faster than the control, we place the observer pole at z = 0.5 (≈ (0.835)⁴); therefore, 1 – mT = 0.5. For T = 0.1 sec, m = 5. The observer equation is (refer to Eqn. (7.95))
$$\hat{x}_2(k + 1) = \hat{x}_2(k) + Tu(k) + m\Big(y(k + 1) - y(k) - \frac{T^2}{2}u(k) - T\hat{x}_2(k)\Big)$$
$$= 0.5\,\hat{x}_2(k) + 5\big(y(k + 1) - y(k)\big) + 0.075\,u(k)$$
Substituting for u(k) from the control law (refer to Example 7.10)
u(k) = – 10y(k) – 3.5 x̂2 (k), (7.100a)
we obtain
x̂2 (k + 1) = 0.2375 x̂2 (k) + 5y(k + 1) – 5.75y(k) (7.100b)
The two difference equations (7.100a) and (7.100b) complete the design and can be used to control the
plant to the desired specifications.
To relate the observer-based state-feedback design to a classical design, one needs to compute the
z-transform of Eqns (7.100a) and (7.100b), obtaining
$$\frac{U(z)}{-Y(z)} = \frac{27.5\,(z - 0.818)}{z - 0.2375}$$
The compensation looks very much like the classical lead compensation that would be used for a 1/s² plant.
One way to introduce an integrator is to augment the plant state vector x with the ‘integral state’ v that
integrates the difference between the output y(k) and the constant reference input r. The ‘integral state’
v is defined by
v(k) = v(k – 1) + y(k) – r (7.107a)
This equation can be rewritten as follows:
v(k + 1) = v(k) + y(k + 1) – r = v(k) + c[Fx(k) + gu(k)] – r
= cFx(k) + v(k) + cgu(k) – r (7.107b)
From Eqns (7.74) and (7.107b), we obtain
$$\begin{bmatrix} x(k + 1) \\ v(k + 1) \end{bmatrix} = \begin{bmatrix} F & 0 \\ cF & 1 \end{bmatrix}\begin{bmatrix} x(k) \\ v(k) \end{bmatrix} + \begin{bmatrix} g \\ cg \end{bmatrix} u(k) + \begin{bmatrix} 0 \\ -1 \end{bmatrix} r \qquad\qquad (7.108)$$
Since r is constant, in the steady state x(k + 1) = x(k) and v(k + 1) = v(k), provided that the system is stable. This means that the steady-state solutions x_s, v_s and u_s must satisfy the equation
$$\begin{bmatrix} 0 \\ -1 \end{bmatrix} r = \begin{bmatrix} x_s \\ v_s \end{bmatrix} - \begin{bmatrix} F & 0 \\ cF & 1 \end{bmatrix}\begin{bmatrix} x_s \\ v_s \end{bmatrix} - \begin{bmatrix} g \\ cg \end{bmatrix} u_s$$
Substituting this for the last term in Eqn. (7.108) gives
$$\bar{x}(k + 1) = \bar{F}\bar{x}(k) + \bar{g}\bar{u}(k) \qquad\qquad (7.109)$$
where
$$\bar{x} = \begin{bmatrix} x - x_s \\ v - v_s \end{bmatrix}; \quad \bar{u} = u - u_s; \quad \bar{F} = \begin{bmatrix} F & 0 \\ cF & 1 \end{bmatrix}; \quad \bar{g} = \begin{bmatrix} g \\ cg \end{bmatrix}$$
The significance of this result is that, by defining the deviations from steady state as state and control variables, the design problem has been reformulated to be the standard regulator problem, with x̄ = 0 as the desired state. We assume that an asymptotically stable solution to this problem exists, and is given by
$$\bar{u}(k) = -\bar{k}\bar{x}(k)$$
Partitioning k̄ appropriately and using Eqn. (7.109) yields
$$\bar{k} = [k_p \;\; k_i]$$
$$u - u_s = -[k_p \;\; k_i]\begin{bmatrix} x - x_s \\ v - v_s \end{bmatrix} = -k_p(x - x_s) - k_i(v - v_s)$$
The steady-state terms must balance; therefore,
$$u(k) = -k_p x(k) - k_i v(k) \qquad\qquad (7.110)$$
At steady state, x(k + 1) – x(k) = 0; therefore,
$$v(k + 1) - v(k) = 0 = y(k) - r, \quad \text{i.e.,} \quad y(k) \to r$$
The block diagram of Fig. 7.17 shows the control configuration.
Fig. 7.17 Digital control configuration with proportional state feedback kp and integral control of the output error
Example 7.12
Consider the problem of digital control of a plant described by the transfer function
$$G(s) = \frac{1}{s + 3}$$
Discretization of the plant model gives
Gh0G(z) = Y(z)/U(z) = Z[((1 – e^{–sT})/s)(1/(s + 3))]
= (1 – z⁻¹) Z[1/(s(s + 3))] = (1/3)(1 – e^{–3T})/(z – e^{–3T})
For a sampling interval T = 0.1 sec,
Gh0G(z) = 0.0864/(z – 0.741)
The difference equation model of the plant is
y(k + 1) = 0.741y(k) + 0.0864u(k)
The plant has a constant reference command signal. We wish to design a PI control algorithm that results in system response characteristics: ζ = 0.5, ωn = 5 rad/sec. This is equivalent to asking for the closed-loop poles at
z1,2 = e^{–ζωnT} e^{±jωnT√(1–ζ²)} = 0.7788 ∠ ±24.82º = 0.7068 ± j0.3269
The desired characteristic equation is, therefore,
(z – 0.7068 – j0.3269)(z – 0.7068 + j0.3269) = z² – 1.4136z + 0.6065 = z² + a1z + a2 = 0
Augmenting the plant state y(k) with the ‘integral state’ v(k) defined by
v(k) = v(k – 1) + y(k) – r,
we obtain
[y(k + 1); v(k + 1)] = [0.741  0; 0.741  1][y(k); v(k)] + [0.0864; 0.0864]u(k) + [0; –1]r
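The gain k̄ = [kp ki] that places the poles of this augmented system at the desired locations can be computed numerically. A sketch using Ackermann's formula, assuming NumPy (the variable names are illustrative):

    import numpy as np

    Fbar = np.array([[0.741, 0.0], [0.741, 1.0]])
    gbar = np.array([[0.0864], [0.0864]])

    a1, a2 = -1.4136, 0.6065                 # desired z^2 + a1 z + a2 = 0

    U = np.hstack([gbar, Fbar @ gbar])       # controllability matrix [gbar  Fbar gbar]
    phi = Fbar @ Fbar + a1 * Fbar + a2 * np.eye(2)
    kbar = np.array([[0.0, 1.0]]) @ np.linalg.inv(U) @ phi   # Ackermann's formula
    print(kbar)                              # [kp  ki]
    print(np.linalg.eigvals(Fbar - gbar @ kbar))   # 0.7068 +/- j0.3269, as desired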
Comments
The concept of deadbeat performance is unique to discrete-time systems. By deadbeat control, any nonzero error vector is driven to zero in (at most) n sampling periods, provided the magnitude of the scalar control u(k) is unbounded. The settling time depends on the sampling period T. If T is chosen very small,
the settling time will also be very small, which implies that the control signal must have an extremely
large magnitude. The designer must choose the sampling period for which an extremely large control
magnitude is not required in normal operation of the system. Thus, in deadbeat control, the sampling
period is the only design parameter.
Example 7.13
The system considered in this example is the attitude control system for a rigid satellite.
The plant equations are (refer to Example 7.10)
x(k + 1) = Fx(k) + gu(k)
where
F = [1  T; 0  1];  g = [T²/2; T]
x1(k) = position state θ;  x2(k) = velocity state ω
The reference input r = θr, a step function. The desired steady state is
xs = [θr  0]^T
which is a non-null state.
As the plant has integrating property, the steady-state value us of the input must be zero (otherwise the
output cannot stay constant). For this case, the shifted regulator problem may be formulated as follows:
x̄1 = x1 – θr;  x̄2 = x2
Shifted state variables satisfy the equations
x̄(k + 1) = F x̄(k) + gu(k)
The state-feedback control
u(k) = – k x̄(k)
results in the dynamics of x̄ given by
x̄(k + 1) = (F – gk) x̄(k)
We now determine the gain matrix k such that the response to an arbitrary initial condition is deadbeat.
The desired characteristic equation is
z2 = 0
Using Ackermann’s formula (7.82), we obtain
k = [0  1]U⁻¹ φ(F)
where
φ(F) = F² = [1  2T; 0  1];  U⁻¹ = [g  Fg]⁻¹ = [–1/T²  3/(2T); 1/T²  –1/(2T)]
This gives
k = [1/T²  3/(2T)]
For T = 0.1 sec, k = [100 15]
The control law expressed in terms of the original state variables is given as
u(k) = – k1 x̄1(k) – k2 x̄2(k) = – 100(x1(k) – θr) – 15x2(k)
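The deadbeat gain is easily checked numerically. A sketch assuming NumPy:

    import numpy as np

    T = 0.1
    F = np.array([[1.0, T], [0.0, 1.0]])
    g = np.array([[T**2 / 2.0], [T]])

    U = np.hstack([g, F @ g])                # controllability matrix [g  Fg]
    phi = F @ F                              # desired z^2 = 0, so phi(F) = F^2
    k = np.array([[0.0, 1.0]]) @ np.linalg.inv(U) @ phi   # Ackermann's formula
    print(k)                                 # [[100. 15.]] = [1/T^2  3/(2T)] for T = 0.1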
Example 7.14
Reconsider the problem of attitude control of a satellite. For implementation of the design of the
previous example, we require the states x1(k) and x2(k) to be measurable. Assuming that the output
y(k) = x1(k) is the only state variable that can be measured, we design a state observer for the system. It
is desired that the error vector exhibits deadbeat response. The measurement equation is
y(k) = cx(k) = [1 0]x(k)
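Although the detailed design steps are not reproduced here, a deadbeat full-order observer gain for this system can be sketched by duality, applying Ackermann's formula to the pair (Fᵀ, cᵀ); this is an illustrative computation under that assumption, not the text's worked solution:

    import numpy as np

    T = 0.1
    F = np.array([[1.0, T], [0.0, 1.0]])
    c = np.array([[1.0, 0.0]])

    # Deadbeat observer: place both eigenvalues of (F - mc) at z = 0.
    # By duality, apply Ackermann's formula to the pair (F.T, c.T).
    V = np.hstack([c.T, F.T @ c.T])
    phi = F.T @ F.T                          # desired z^2 = 0
    m = (np.array([[0.0, 1.0]]) @ np.linalg.inv(V) @ phi).T
    print(m.ravel())                         # observer gain, [2. 10.] for T = 0.1
    print(np.linalg.eigvals(F - m @ c))      # both eigenvalues at zero: deadbeat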
REVIEW EXAMPLES
Fig. 7.18  (a) Schematic of the speed control system: a thyristor rectifier, fed from the ac supply, applies armature voltage ea to a dc motor (armature inductance La, resistance Ra, current ia) driving a load with torque TL, inertia J and friction B; the speed ω is sensed by a tachogenerator and the armature current through the sensing resistance Rs  (b) Functional block diagram of the plant (rectifier gain Kr, torque constant KT, back-emf constant Kb, tachogenerator constant Kt) with state variables x1 = ω and x2 = ia
The voltage u is fed to the driver of the thyristor rectifier. The driver produces time-gate pulses that
control the conduction of the thyristors in the rectifier module. The rectified output voltage ea depends
on the firing angle of the pulses relative to the ac supply waveform. A linear relationship between the
input voltage u and the output voltage ea can be obtained when a proper firing control scheme is used.
The time constants associated with the rectifier are negligibly small. Neglecting the dynamics of the
rectifier, we get
ea(t) = Kr u(t)
where Kr is the gain of the rectifier.
Figure 7.18b shows the functional block diagram of the plant with
B = viscous-friction coefficient of motor and load;
J = moment of inertia of motor and load;
Fig. 7.19  State feedback with integral control for the speed control system: speed reference ωr, speed feedback through Kt, current feedback through Rs; gains k1, k2, and integral gain k3
i.e., ẋ3 = ω – r = x1 – r
Augmenting this state variable with the plant equations, we obtain
ẋ = Āx + b̄u + Gw
where
x = [x1  x2  x3]^T;  w = [TL  r]^T
Ā = [–0.5  10  0; –0.1  –10  0; 1  0  0];  b̄ = [0; 100; 0];  G = [–10  0; 0  0; 0  –1]
The controllability matrix
U = [b̄  Āb̄  Ā²b̄] = [0  1000  –10500; 100  –1000  9900; 0  0  1000]
The determinant of U is nonzero. The pair (Ā, b̄) is, therefore, completely controllable, and the conditions for pole placement by state feedback with integral control are satisfied.
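The controllability test is easily verified numerically; a sketch assuming NumPy:

    import numpy as np

    Abar = np.array([[-0.5, 10.0, 0.0],
                     [-0.1, -10.0, 0.0],
                     [1.0, 0.0, 0.0]])
    bbar = np.array([[0.0], [100.0], [0.0]])

    U = np.hstack([bbar, Abar @ bbar, Abar @ Abar @ bbar])
    print(U)                    # columns (0,100,0), (1000,-1000,0), (-10500,9900,1000)
    print(np.linalg.det(U))     # -1e8, nonzero: (Abar, bbar) is controllable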
The characteristic polynomial of the closed-loop system is given by
|sI – (Ā – b̄k)| = det[s + 0.5  –10  0; 0.1 + 100k1  s + 10 + 100k2  100k3; –1  0  s]
At steady state, ẋ = 0; therefore, the motor velocity x1 = ω(t) approaches the constant reference set point r as t approaches infinity, independent of the disturbance torque TL.
In this model, x1(k) is the shaft position and x2(k) is the shaft velocity. We assume that x1(k) and x2(k) can
easily be measured using shaft encoders.
We choose the control configuration of Fig. 7.20 for digital positioning of the load; θr is a constant reference command. In terms of the error variables
x̄1(k) = x1(k) – θr;  x̄2(k) = x2(k)    (7.118)
the control signal is
u(k) = – k1 x̄1(k) – k2 x̄2(k) = – k x̄(k)    (7.119)
where the gain matrix
k = [k1  k2]
Fig. 7.20  Control configuration for digital positioning of the load: state feedback of x1(k) and x2(k) with ZOH; T = 0.1 sec
Fig. 7.21  Observer-based digital positioning system: x̂2(k) supplied by the observer; T = 0.1 sec
From the plant state equation (7.117), and Eqns (7.91), the partitioned matrices are seen to be
f11= 1, f1e = 0.0952, fe1 = 0, Fee = 0.905, g1 = 0.00484, ge = 0.0952
The observer equation is (refer to Eqn. (7.95))
x̂2(k + 1) = 0.905 x̂2(k) + 0.0952u(k) + m(θ(k + 1) – θ(k) – 0.00484u(k) – 0.0952 x̂2(k))
= (0.905 – 0.0952m) x̂2(k) + mθ(k + 1) – mθ(k) + (0.0952 – 0.00484m)u(k)
(7.123)
3 The pole at s = –1/τ is mapped to z = e^{–T/τ}; T = sampling interval.
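Equation (7.123) transcribes directly into a recursion. Below is a sketch of a hypothetical helper function (not from the text); the observer gain m is left as a parameter since its numerical value depends on where the observer pole is placed:

    import numpy as np

    def velocity_observer(theta, u, m, x2hat0=0.0):
        # Recursion of Eqn (7.123); theta has one more sample than u, since
        # the update at step k uses the new measurement theta(k+1).
        x2hat = x2hat0
        est = [x2hat]
        for k in range(len(u)):
            x2hat = ((0.905 - 0.0952 * m) * x2hat
                     + m * (theta[k + 1] - theta[k])
                     + (0.0952 - 0.00484 * m) * u[k])
            est.append(x2hat)
        return np.array(est)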
PROBLEMS
7.1 Consider an nth-order Single-Input, Single-Output system
ẋ = Ax + bu;  y = cx
and assume that we are using feedback of the form
u = – kx + r
where r is the reference input signal.
Show that the zeros of the system are invariant under state feedback.
7.2 A regulator system has the plant
ẋ = [0  1  0; 0  0  1; –6  –11  –6]x + [0; 0; 1]u;  y = [1  0  0]x
(a) Design a state-feedback controller which will place the closed-loop poles at – 2 ± j3.464,
– 5. Give a block diagram of the control configuration.
(b) Design a full-order state observer; the observer-error poles are required to be located at
– 2 ± j3.464, – 5. Give all the relevant observer equations and a block diagram description of
the observer structure.
(c) The state variable x1 (which is equal to y) is directly measurable and need not be observed.
Design a reduced-order state observer for the plant; the observer-error poles are required to
be located at – 2 ± j3.464. Give all the relevant observer equations.
7.3 A regulator system has the plant
ẋ = Ax + bu;  y = cx
with
A = [0  0  –6; 1  0  –11; 0  1  –6];  b = [1; 0; 0];  c = [0  0  1]
(a) Compute k so that the control law u = – kx, places the closed-loop poles at – 2 ± j3.464, – 5.
Give the state variable model of the closed-loop system.
(b) For the estimation of the state vector x, we use an observer defined by
x̂̇ = (A – mc) x̂ + bu + my
Compute m so that the eigenvalues of (A – mc) are located at – 2 ± j3.464, – 5.
(c) The state variable x3 (which is equal to y) is directly measurable and need not be observed.
Design a reduced-order observer for the plant; the observer-error poles are required to be
located at – 2 ± j3.464. Give all the relevant observer equations.
7.4 Consider the system
ẋ = Ax + Bu;  y = cx + du
where
A = [–2  –1; 1  0];  B = [1  0; 1  1];  c = [0  1];  d = [2  0]
Design a full-order state observer so that the estimation error will decay in less than 4 seconds.
7.5 Consider the system
ẋ = [1  0; 0  0]x + [1; 1]u;  y = [2  –1]x
Design a reduced-order state observer that makes the estimation error decay at least as fast as e^{–10t}.
7.6 Consider the system with the transfer function
Y(s)/U(s) = 9/(s² – 9)
(a) Find (A, b, c) for this system in observable canonical form.
(b) Compute k so that the control law u = – kx places the closed-loop poles at – 3 ± j3.
(c) Design a full-order observer such that the observer-error poles are located at – 6 ± j6. Give
all the relevant observer equations.
(d) Suppose the system has a zero such that
Y(s)/U(s) = 9(s + 1)/(s² – 9)
Prove that if u = – kx + r, there is a feedback matrix k such that the system is unobservable.
7.7 The equation of motion of an undamped oscillator with frequency ω0 is
ÿ + ω0²y = u
(a) Write the equations of motion in the state variable form with x1 = y and x2 = ẏ as the state variables.
(b) Find k1 and k2 such that u = – k1x1 – k2x2 gives closed-loop characteristic roots with ωn = 2ω0 and ζ = 1.
(c) Design a second-order observer that estimates x1 and x2, given measurements of x1. Pick the characteristic roots of the state-error equation with ωn = 10ω0 and ζ = 1. Give a block
diagram of the observer-based state-feedback control system.
(d) Design a first-order observer that estimates x2, given measurements of x1. The characteristic
root of the state-error equation is required to be located at –10ω0. Give a block diagram of
the observer-based state-feedback control system.
7.8 A regulator system has the plant
ẋ = [0  1; 20.6  0]x + [0; 1]u;  y = [1  0]x
(a) Design a control law u = – kx so that the closed-loop system has eigenvalues at –1.8 ± j2.4.
(b) Design a full-order state observer to estimate the state vector. The observer matrix is required
to have eigenvalues at – 8, – 8.
(c) Find the transfer function of the compensator obtained by combining (a) and (b).
(d) Find the state variable model of the complete observer-based state-feedback control system.
7.9 A regulator system has the double integrator plant
Y(s)/U(s) = 1/s²
(a) Taking x1 = y and x2 = ẏ as state variables, obtain the state variable model of the plant.
(b) Compute k such that u = – kx gives closed-loop characteristic roots with ωn = 1, ζ = √2/2.
(c) Design a full-order observer that estimates x1 and x2, given measurements of x1. Pick the
characteristic roots of the state-error equation with ωn = 5, ζ = 0.5.
(d) Find the transfer function of the compensator obtained by combining (b) and (c).
(e) Design a reduced-order observer that estimates x2 given measurements of x1; place the single
observer pole at s = – 5.
(f) Find the transfer function of the compensator obtained by combining (b) and (e).
7.10 A servo system has the Type-1 plant described by the equation
ẋ = Ax + bu;  y = cx
where
A = [0  1  0; 0  –1  1; 0  0  –2];  b = [0; 0; 1];  c = [1  0  0]
(a) If u = – kx + Nr, compute k and N so that the closed-loop poles are located at –1 ± j1, – 2; and y(∞) = r, a constant reference input.
(b) For the estimation of the state vector x, we use a full-order observer
x̂̇ = (A – mc) x̂ + bu + my
Compute m so that observer-error poles are located at – 2 ± j2, – 4.
(c) Replace the control law in (a) by u = – k x̂ + Nr, and give a block diagram of the observer-
based servo system.
7.11 A plant is described by the equation
ẋ = [–1  0; 0  –2]x + [1; 1]u;  y = [1  3]x
Add to the plant equations an integrator ż = y – r (r is a constant reference input) and select gains k, ki so that if u = – kx – ki z, the closed-loop poles are at – 2, –1 ± j√3. Give a block diagram of the control configuration.
7.12 Figure P7.12 shows the block diagram of a position control system employing a dc motor in armature control mode; θ (rad) is the motor shaft position, θ̇ (rad/sec) is the motor shaft velocity, ia (amps) is the armature current, and KP (volts/rad) is the sensitivity of the potentiometer. Find k1, k2 and k3 so that the dominant poles of the closed-loop system are characterized by ζ = 0.5, ωn = 2, the nondominant pole is at s = –10, and the steady-state error to constant reference input is zero.
Fig. P7.12  Position control system with state feedback: states x1 = θ, x2 = θ̇, x3 = ia; plant blocks 1/(0.1s + 1), 1/(s + 1), 1/s; feedback gains k2, k3 and current sensing through a 0.1 Ω resistance
7.13 A dc motor in armature control mode has been used in the speed control system of Fig. P7.13, employing state feedback with integral control; ω (rad/sec) is the motor shaft velocity, ia (amps) is the armature current and Kt (volts/(rad/sec)) is the tachogenerator constant. Find k1, k2 and k3 so that the closed-loop poles of the system are placed at –1 ± j√3, –10, and the steady-state error to constant reference input is zero.
Fig. P7.13  Speed control system employing state feedback with integral control: states x1 = ω, x2 = ia; plant blocks 1/(0.1s + 1), 1/(s + 1); gains k1, k2, and integral gain k3
7.14 The control law u = – kx + k1θr for the position control system of Fig. P7.12 is to be replaced by u = – k x̂ + k1θr, where x̂ is the estimate of the state vector x given by the observer system
x̂̇ = (A – mc) x̂ + bu + mθ
Find the gain matrix m which places the eigenvalues of (A – mc) at – 3 ± j√3, – 10. Give a block diagram of the observer-based position control system.
7.15 Consider the position control system of Fig. P7.15 employing a dc motor in armature control
mode with state variables defined on the diagram. Full state-feedback is employed, with position
feedback being obtained from a potentiometer, rate feedback from a tachogenerator and current
feedback from a voltage sample across a resistance in the armature circuit. KA is the amplifier
gain. Find the adjustable parameters KA, k2, and k3 so that the closed-loop poles of the system are
placed at – 3 ± j3, – 20.
Fig. P7.15  Position control system employing a dc motor in armature control mode: states x1 = KPθ (potentiometer), x2 = Ktθ̇ (tachogenerator), x3 = 0.1ia (voltage across the 0.1 Ω armature-sensing resistance); amplifier gain KA, feedback gains k1 = 1, k2, k3
Given:
Potentiometer sensitivity, KP = 1 volt/rad
Tachogenerator constant, Kt = 1 volt/(rad/sec)
Armature inductance, La = 0.005 H
Armature resistance, Ra = 0.9 W
Moment of inertia of motor and load, J = 0.02 newton-m/(rad/sec2)
Viscous-friction coefficient of motor and load, B = 0
Back emf constant, Kb = 1 volt/(rad/sec)
Motor torque constant, KT = 1 newton-m/amp
7.16 Consider the position control system of Fig. P7.16 employing a dc motor in the field control
mode, with state variables defined on the diagram. Full state-feedback is employed with position
feedback being obtained from a potentiometer, rate feedback from a tachogenerator and current
feedback from a voltage sample across a resistor connected in the field circuit. KA is the amplifier
gain.
Find the adjustable parameters KA, k2, and k3 so that the closed-loop system has dominant poles characterized by ζ = 0.5, ωn = 2, and the third pole at s = –10.
Given:
Potentiometer sensitivity, KP = 1 volt/rad
Tachogenerator constant, Kt = 1 volt/(rad/sec)
Field inductance, Lf = 20 H
Field resistance, Rf = 99 W
Moment of inertia of motor and load, J = 0.5 newton-m/(rad/sec2)
Viscous-friction coefficient of motor and load, B = 0.5 newton-m/(rad/sec)
Motor torque constant, KT = 10 newton-m/amp
Fig. P7.16  Position control system employing a dc motor in field control mode: states x1 = KPθ (potentiometer), x2 = Ktθ̇ (tachogenerator), x3 = if (voltage across the 1 Ω field-sensing resistance); amplifier gain KA, feedback gains k1 = 1, k2, k3
7.17 Figure P7.17 shows control configuration of a Type-1 servo system. Both the state variables x1
and x2, are assumed to be measurable. It is desired to regulate the output y to a constant value
r = 5. Find the values of k1, k2 and N so that
(i) y(∞) = r = 5; and
(ii) the closed-loop characteristic equation is
s² + a1s + a2 = 0.
Fig. P7.17  Type-1 servo system: plant b/(s(s + a)) with output y = x1 and rate state x2; state-feedback gains k1, k2, and feedforward gain N on the reference r
7.18 A speed control system, employing a dc motor in the armature control mode, is described by the
following state equations:
dω(t)/dt = –(B/J)ω(t) + (KT/J)ia(t) – (1/J)TL
dia(t)/dt = –(Kb/L)ω(t) – (R/L)ia(t) + (1/L)u(t)
where
ia(t) = armature current, amps;
u(t) = armature applied voltage, volts;
ω(t) = motor velocity, rad/sec;
B = viscous-friction coefficient of motor and load = 0;
J = moment of inertia of motor and load = 0.02 newton-m/(rad/sec2);
KT = motor torque constant = 1 newton-m/amp;
Kb = motor back emf constant = 1 volt/(rad/sec);
TL = constant disturbance torque (magnitude not known);
L = armature inductance = 0.005 H; and
R = armature resistance = 1 W.
The design problem is to find the control u(t) such that
(i) lim(t→∞) dia(t)/dt = 0 and lim(t→∞) dω(t)/dt = 0, and (ii) lim(t→∞) ω(t) = constant set-point r.
Show that the control law
u(t) = – k1ω(t) – k2ia(t) – k3 ∫₀ᵗ (ω(τ) – r)dτ
can meet these objectives. Find k1, k2, and k3 so that the closed-loop poles are placed at –10 ± j10, –300. Suggest a suitable scheme for implementation of the control law.
7.19 Figure P7.19 shows a process consisting of two interconnected tanks. h1 and h2 represent deviations in tank levels from their steady-state values H̄1 and H̄2, respectively; q represents the deviation in flow rate from its steady-state value Q̄. The flow rate q is controlled by signal u via valve and actuator. A disturbance flow rate w enters the first tank via a return line from elsewhere in the process. The differential equations for levels in the tanks are given by
ḣ1 = – 3h1 + 2h2 + u + w
ḣ2 = 4h1 – 5h2
(a) Compute the gains k1 and k2 so that the control law u = – k1h1(t) – k2h2(t) places the closed-
loop poles at – 4, – 7.
(b) Show that the steady-state error in the output y(t) = h2(t), in response to constant disturbance
input w, is nonzero.
(c) Add to the plant equations an integrator ż(t) = y(t), and select gains k1, k2 and k3 so that the
control law u = – k1h1(t) – k2h2(t) – k3z(t) places the closed-loop poles at –1, – 2, – 7. Find the
steady-state value of the output in response to constant disturbance w. Give a block diagram
depicting the control configuration.
Fig. P7.19  Two-tank level process: actuator-controlled flow Q̄ + q into the first tank (level H̄1 + h1), disturbance flow w, and second tank (level H̄2 + h2)
7.20 The plant of a servo system is described by the equations
ẋ = Ax + bu + bw;  y = cx
where
A = [–3  2; 4  –5];  b = [1; 0];  c = [0  1]
w is a disturbance input to the system.
A control law of the form u = – kx + Nr is proposed; r is a constant reference input.
(a) Compute k so that the eigenvalues of (A – bk) are – 4, – 7.
(b) Choose N so that the system has zero steady-state error to reference input, i.e., y(∞) = r.
(c) Show that the steady-state error to a constant disturbance input w, is nonzero for the above
choice of N.
(d) Add to the plant equation, an integrator equation (z(t) being the state of the integrator):
ż(t) = y(t) – r
and select gains k1, k2 and k3 so that the control law u = – k1x1(t) – k2 x2(t) – k3 z(t) places the
eigenvalues of closed-loop system matrix at – 1, – 2, – 7.
(e) Draw a block diagram of the control scheme employing integral control and show that the
steady-state error to constant disturbance input, is zero.
7.21 Consider a plant consisting of a dc motor, the shaft of which has the angular velocity ω(t) and which is driven by an input voltage u(t). The describing equation is
ω̇(t) = – 0.5ω(t) + 100u(t) = Aω(t) + bu(t)
It is desired to regulate the angular velocity at the desired value ω0 = r.
(a) Use a control law of the form u = – Kω(t) + Nr. Choose K that results in a closed-loop pole with time constant 0.1 sec. Choose N that guarantees zero steady-state error, i.e., ω(∞) = r.
(b) Show that, if A changes to A + δA, subject to (A + δA – bK) being stable, then the above choice of N will no longer make ω(∞) = r. Therefore, the system is not robust under changes in system parameters.
(c) The system can be made robust by augmenting it with an integrator
ż = ω – r
where z is the state of the integrator. To see this, first use a feedback of the form u = – K1ω(t) – K2z(t) and select K1 and K2 so that the characteristic polynomial of the closed-loop system becomes Δ(s) = s² + 11s + 50. Show that the resulting system will have ω(∞) = r no matter how the matrix A changes, so long as the closed-loop system remains asymptotically stable.
7.22 A discrete-time regulator system has the plant
x(k + 1) = [0  1  0; 0  0  1; –4  –2  –1]x(k) + [0; 0; 1]u(k)
Design a state-feedback controller which will place the closed-loop poles at –1/2 ± j(1/2), 0. Give a block diagram of the control configuration.
7.23 Consider a plant defined by the following state variable model:
x(k + 1) = Fx(k) + Gu(k); y(k) = cx(k) + du(k)
where
F = [1/2  1  0; –1  0  1; 0  0  0];  G = [1  4; 0  0; –3  2];  c = [1  0  0];  d = [0  4]
Design a prediction observer for the estimation of the state vector x; the observer-error poles are required to lie at –1/2 ± j(1/4), 0. Give all the relevant observer equations and a block diagram description of the observer structure.
7.24 Consider the system defined by
x(k + 1) = [0  1  0; 0  0  1; –0.5  –0.2  1.1]x(k) + [0; 0; 1]u(k)
Determine the state-feedback gain matrix k such that when the control signal is given by u(k) =
– kx(k), the closed-loop system will exhibit the deadbeat response to any initial state x(0). Give
the state variable model of the closed-loop system.
7.25 Consider the system
x(k + 1) = [0  1; –0.16  –1]x(k) + [0; 1]u(k);  y(k) = [1  1]x(k)
Design a current observer for the system; the response to the initial observer error is required to
be deadbeat. Give all the relevant observer equations.
7.26 Consider the plant defined in Problem 7.24. Assuming that only y(k) = x2(k) is measurable, design
a reduced-order observer such that the response to the observer error is deadbeat. Give all the
relevant observer equations.
7.27 A discrete-time regulator system has the plant
x(k + 1) = [2  –1; –1  1]x(k) + [4; 3]u(k);  y(k) = [1  1]x(k) + 7u(k)
(a) Design a state-feedback control algorithm u(k) = – kx(k) which places the closed-loop characteristic roots at ± j(1/2).
(b) Design a prediction observer for deadbeat response. Give the relevant observer equations.
(c) Combining (a) and (b), give a block diagram of the control configuration. Also obtain state
variable model of the observer-based state-feedback control system.
7.28 A regulator system has the plant with transfer function
Y(z)/U(z) = z⁻² / ((1 + 0.8z⁻¹)(1 + 0.2z⁻¹))
7.30 A double integrator plant is to be controlled by a digital computer employing state feedback.
Figure P7.30 shows a model of the control scheme. Both the state variables x1 and x2 are assumed
to be measurable.
(a) Obtain the discrete-time state variable model of the plant.
(b) Compute k1 and k2 so that the response y(t) of the closed-loop system has the parameters: ζ = 0.5, ωn = 4.
(c) Assume now that only x1 is measurable. Design a prediction observer to estimate the state
vector x; the estimation error is required to decay in a deadbeat manner.
(d) Find the transfer function of the compensator obtained by combining (b) and (c).
Fig. P7.30  Digital state-feedback control of a double integrator plant: ZOH (1 – e⁻ˢᵀ)/s followed by two integrators 1/s giving x2 and x1 = y; feedback gains k1, k2; T = 0.1 sec
7.31 Figure P7.31 shows the block diagram of a digital positioning system. The plant is a dc motor
driving an inertia load. Both the position θ and the velocity θ̇ are measurable.
(a) Obtain matrices (F, g, c) of the discrete-time state variable model of the plant.
(b) Compute k1 and k2 so that the closed-loop system positions the load in a deadbeat manner in
response to any change in step command qr.
(c) Assume now that the position θ is measured by a shaft encoder and a second-order state observer is used to estimate the state vector x from the plant input u and measurements of θ.
Fig. P7.31  Digital positioning system: dc motor plant (ZOH, 1/(s + 1), 1/s) with states x1 = θ, x2 = ω; feedback gains k1, k2; T = 0.1 sec
Design a deadbeat observer. Give a block diagram of the observer-based digital positioning
system.
(d) Design a first-order deadbeat observer to estimate the velocity ω from measurements of the position θ.
7.32 A continuous-time plant described by the equation
ẏ = – y + u + w
is to be controlled by a digital computer; y is the output, u is the input, and w is the disturbance
signal. Sampling interval T = 1 sec.
(a) Obtain a discrete-time state variable model of the plant.
(b) Compute K and N so that the control law
u(k) = – Ky(k) + Nr
results in a response y(t) with time constant 0.5 sec, and y(∞) = r (r is a constant reference
input).
(c) Show that the steady-state error to a constant disturbance input w is nonzero for the above
choice of the control scheme.
(d) Add to the plant equation, an integrator equation (v(k) being the integral state)
v(k) = v(k – 1) + y(k) – r
and select gains K1 and K2 so that the control law
u = – K1y(k) – K2v(k)
results in a response y(t) with parameters: ζ = 0.5, ωn = 4.
(e) Give a block diagram depicting the control configuration employing integral control and
show that the steady-state error to constant disturbance w, is zero.
Chapter 8
Linear Quadratic Optimal Control
through Lyapunov Synthesis
8.1 INTRODUCTION
It should be obvious by now that stability plays a major role in control systems design. We have earlier
introduced, in Chapters 2 and 5, the concept of stability based on the dynamic evolution of the system
state in response to arbitrary initial state, representing initial energy storage. State variable model
ẋ(t) = Ax(t);  x(t = 0) ≜ x⁰    (8.1)
is most appropriate to study the dynamic evolution of the state x(t) in response to the initial state x⁰ with zero
external input. At the origin of the state space, x(t) = 0 for all t; the origin is, thus, the equilibrium point
of the system and xe = 0 is the equilibrium state. This system is marginally stable if, for all possible initial
states, x0, x(t) remains thereafter within finite bounds for t > 0. This is true if none of the eigenvalues of
A are in the right half of the complex plane, and eigenvalues on the imaginary axis, if any, are simple
(A multiple eigenvalue on the imaginary axis would have a response that grows in time and could not be
stable). Furthermore, the system is asymptotically stable if for all possible initial states x0, x(t) eventually
decays to zero as t approaches infinity. This is true if all the eigenvalues of A are inside the left half of
the complex plane.
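For linear time-invariant systems, this condition reduces to a direct eigenvalue test. A minimal sketch (the system matrix here is a hypothetical example):

    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # hypothetical system matrix
    eigs = np.linalg.eigvals(A)
    print(eigs)                                # -1 and -2
    print(bool(np.all(eigs.real < 0)))         # True: asymptotically stable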
A.M. Lyapunov considered the stability of general nonlinear systems described by state equation of the
form
ẋ(t) = f(x(t));  x(0) ≜ x⁰    (8.2)
We assume that the equation has been written so that x = 0 is an equilibrium point, which is to say that
f (0) = 0, i.e., the system will continue to be in equilibrium state xe = 0 for all time. This equilibrium point
is said to be stable in the sense of Lyapunov, if we are able to select a bound on initial condition x0, that
will result in state trajectories x(t), that remain within a chosen finite limit. The system is asymptotically
stable at x = 0, if it is stable in the sense of Lyapunov and, in addition, the state x(t) approaches zero as
time t approaches infinity.
No new results are obtained by the use of Lyapunov’s method for the stability analysis of linear time-
invariant systems. Simple and powerful methods discussed in earlier chapters are adequate for such
systems. However, Lyapunov functions supply certain performance indices and synthesis data for linear
time-invariant systems. Chapter 10 will demonstrate the use of Lyapunov functions in variable structure
sliding mode control, and model reference adaptive control. In this chapter, we introduce the concept of
Lyapunov stability and the role it plays in optimal control design.
Lyapunov’s method of stability analysis is, in principle, the most general method for the determination of the stability of nonlinear systems. The major drawback, which seriously limits its use in practice, is the difficulty often associated with the construction of the Lyapunov function required by the method. Guidelines
for construction of Lyapunov functions for nonlinear systems are given in Chapter 9. For linear time-
invariant systems of main concern in this chapter, the quadratic function (refer to Section 5.2) is adequate
for demonstrating Lyapunov stability. The concept of Lyapunov stability, and working knowledge
required for synthesis of linear time-invariant systems, will be provided here, in this chapter, before
we start the discussion on optimal control. Detailed account of Lyapunov stability will be given later in
Chapter 9.
8.2.2
For the system (8.3), sufficient conditions of stability are as follows.
Theorem 8.1 Suppose that there exists a scalar function V (x) which satisfies the following
properties:
(i) V(x) > 0; x ≠ 0
(ii) V(0) = 0    (8.4)
Fig. 8.1  State trajectories from an initial state x(0) near the equilibrium at the origin: (a), (b)
(iii) V (x) is continuous and has continuous partial derivatives with respect to all components of x.
(iv) V̇(x) ≤ 0 along trajectories of Eqn. (8.3).
We call V (x) having these properties, a Lyapunov function for the system. Properties (i) and (ii) mean
that, like energy, V(x) > 0 if any state is different from zero, but V(x) = 0 when the state is zero. Property
(iii) ensures that V(x) is a smooth function and, generally, has the shape of a bowl near the equilibrium.
A visual analogy may be obtained by considering the surface
V(x1, x2) = (1/2)p1x1² + (1/2)p2x2²;  p1 > 0, p2 > 0
This theorem on asymptotic stability and stability in the sense of Lyapunov applies in a local sense if the region ||x(0)|| < δ is small (refer to Fig. 8.1); the theorem applies in the global sense when the region includes the entire state space. For global stability, the value of the Lyapunov function becomes infinite with infinite deviation (i.e., V(x) → ∞ as ||x|| → ∞).
The determination of stability through Lyapunov analysis centers around the choice of a Lyapunov
function V(x). Unfortunately, there is no universal method for selecting the Lyapunov function which is
unique for a given nonlinear system. Several techniques have been devised for the systematic construction
of Lyapunov functions; each is applicable to a particular class of systems. If a Lyapunov function cannot
be found, it in no way implies that the system is unstable. It only means that our attempt in trying to
establish the stability of an equilibrium state has failed. Therefore, faced with specific systems, one has
to use experience, intuition, and physical insights to search for an appropriate Lyapunov function. An
elegant and powerful Lyapunov analysis may be possible for complex systems if engineering insight and
physical properties are properly exploited. In spite of these limitations, Lyapunov’s method is the most
powerful technique available today for the stability analysis of nonlinear systems.
For linear time-invariant systems of main concern in this chapter, the quadratic function (refer to
Section 5.2) is adequate for demonstrating Lyapunov stability. Consider the function
V(x) = xT Px (8.5)
where P is a symmetric positive definite matrix. The quadratic function (8.5) satisfies properties (i), (ii)
and (iii) of a Lyapunov function. We need to examine property (iv), the derivative condition, to study the
stability properties of the system under consideration.
Discrete-Time Systems: In the following, we extend the Lyapunov stability theorem to discrete-time
systems:
x (k + 1) = f(x(k)); f(0) = 0 (8.6)
Our discussion will be brief because of the strong analogy between the discrete-time and continuous-
time cases.
Theorem 8.2 Suppose that there exists a Lyapunov function V(x(k)) which satisfies the following
properties:
(i) V(x) > 0; x ≠ 0
(ii) V(0) = 0    (8.7)
(iii) V(x) is a smooth function; it is continuous for all x.
(iv) ΔV(x(k)) = [V(x(k + 1)) – V(x(k))] ≤ 0 along trajectories of Eqn. (8.6).
The Lyapunov stability theorem states that, given the system of equations x(k + 1) = f(x(k)); f(0) = 0, if there exists a Lyapunov function for this equation, then the origin is stable in the sense of Lyapunov; in addition, if ΔV(x(k)) < 0 for x ≠ 0, then the stability is asymptotic.
For a linear system, a Lyapunov function can always be constructed, and both necessary and sufficient conditions for stability can be established.
8.3.1
Consider a linear system described by the state equation
ẋ = Ax    (8.8)
where A is an n × n real constant matrix.
Theorem 8.3 The linear system (8.8) is globally asymptotically stable at the origin if, and only if,
for any given symmetric positive definite matrix Q, there exists a symmetric positive definite matrix P,
that satisfies the matrix equation
ATP + PA = – Q (8.9)
Proof Let us first prove the sufficiency of the result. Assume that a symmetric positive definite matrix
P (refer to Section 5.2) exists, which is the unique solution of Eqn. (8.9). Consider the scalar function
V(x) = xTPx
Note that
V(x) > 0 for x π 0 and V(0) = 0
The time derivative of V(x) is
V̇(x) = ẋTPx + xTPẋ
Using Eqns (8.8) and (8.9), we get
V̇(x) = xTATPx + xTPAx = xT(ATP + PA)x = – xTQx
Since Q is positive definite, V̇(x) is negative definite. The norm of x may be defined as (Eqn. (5.6b))
||x|| = (xTPx)^{1/2}
Then V(x) = ||x||², and V(x) → ∞ as ||x|| → ∞.
Therefore, the system is globally asymptotically stable at the origin.
To prove the necessity of the result, the reader is advised to refer to [105] where the proof has been
developed in the following two parts:
(i) If (8.8) is asymptotically stable, then for any Q there exists a matrix P satisfying (8.9).
(ii) If Q is positive definite, then P is also positive definite.
Comments
(i) The implication of Theorem 8.3 is that if A is asymptotically stable and Q is positive definite,
then the solution P of Eqn. (8.9) must be positive definite. Note that it does not say that if A is
asymptotically stable and P is positive definite, then Q computed from Eqn. (8.9) is positive
definite. For an arbitrary P, Q may be positive definite (semidefinite) or negative definite
(semidefinite).
(ii) Since matrix P is known to be symmetric, there are only n(n + 1)/2 independent equations in (8.9)
rather than n2.
(iii) In very simple cases, Eqn. (8.9), called the Lyapunov equation, can be solved analytically,
but usually numerical solution is required. A number of computer programs for this purpose are
available [152–154].
(iv) Since Theorem 8.3 holds for any positive definite symmetric matrix Q, the matrix Q in Eqn. (8.9)
is often chosen to be a unit matrix.
(v) If V̇(x) = – xTQx does not vanish identically along any trajectory, then Q may be chosen to be positive semidefinite.
A necessary and sufficient condition that V̇(x) does not vanish identically along any trajectory (meaning that V̇(x) = 0 only at x = 0) is that
ρ[H; HA; ⋯; HAⁿ⁻¹] = n;  Q = HTH    (8.10)
where ρ(·) stands for the rank of a matrix.
This can be proved as follows. Since V̇(x) can be written as
V̇(x) = – xTQx = – xTHTHx,
V̇(x) = 0 means that Hx = 0
Differentiating with respect to t gives
Hẋ = HAx = 0
Differentiating once again, we get
HAẋ = HA²x = 0
Repeating the differentiation process and combining the equations, we obtain
[H; HA; ⋯; HAⁿ⁻¹]x = 0
A necessary and sufficient condition for x = 0 to be the only solution of this equation is given by (8.10).
Example 8.1
Let us determine the stability of the system described by the following equation:
ẋ = Ax
with
A = [–1  –2; 1  –4]
We will first solve Eqn. (8.9) for P for an arbitrary choice of real symmetric positive definite matrix Q.
We may choose Q = I, the identity matrix. Equation (8.9) then becomes
ATP + PA = – I
or
[–1  1; –2  –4][p11  p12; p12  p22] + [p11  p12; p12  p22][–1  –2; 1  –4] = [–1  0; 0  –1]    (8.11)
Note that we have taken p12 = p21. This is because the solution matrix P is known to be a positive definite
real symmetric matrix for a stable system.
From Eqn. (8.11), we get
– 2 p11 + 2 p12 = – 1
– 2 p11 – 5 p12 + p22 = 0
– 4 p12 – 8 p22 = – 1
Solving for pij’s, we obtain
P = [p11  p12; p12  p22] = [23/60  –7/60; –7/60  11/60]
Using Sylvester’s test (Section 5.2), we find that P is positive definite. Therefore, the system under
consideration is globally asymptotically stable at the origin.
In order to illustrate the arbitrariness in the choice of Q, consider
Q = [0  0; 0  1]    (8.12)
This is a positive semidefinite matrix. This choice of Q is permissible since it satisfies the condition
(8.10), as is seen below.
Q = [0  0; 0  1] = [0; 1][0  1] = HTH
ρ[H; HA] = ρ[0  1; 1  –4] = 2
It can easily be verified that with the choice of Q given by Eqn. (8.12), we derive the same conclusion
about the stability of the system as obtained earlier with Q = I.
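The same solution is obtained numerically. A sketch assuming SciPy (note the transpose convention of solve_continuous_lyapunov):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[-1.0, -2.0], [1.0, -4.0]])
    Q = np.eye(2)

    # solve_continuous_lyapunov(a, q) solves a@X + X@a.T = q;
    # passing a = A.T turns this into A.T@P + P@A = -Q, i.e., Eqn (8.9).
    P = solve_continuous_lyapunov(A.T, -Q)
    print(P)                                   # [[23/60, -7/60], [-7/60, 11/60]]
    print(np.linalg.eigvals(P))                # both positive: P > 0, system stable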
8.3.2
Consider a linear system described by the state equation
x(k + 1) = Fx(k)    (8.13)
where F is an n × n real constant matrix.
Theorem 8.4 The linear system (8.13) is globally asymptotically stable at the origin if, and only if,
for any given symmetric positive definite matrix Q, there exists a symmetric positive definite matrix P,
that satisfies the matrix equation
FT PF – P = – Q (8.14)
Proof Let us first prove the sufficiency of the result. Assume that a symmetric positive definite
matrix P exists, which is the unique solution of Eqn. (8.14). Consider the scalar function
V(x) = xTPx
Note that
V(x) > 0 for x π 0 and V(0) = 0
The difference
ΔV(x) = V(x(k + 1)) – V(x(k)) = xT(k + 1)Px(k + 1) – xT(k)Px(k)
Using Eqns (8.13) and (8.14), we get
ΔV(x) = xT(k)FTPFx(k) – xT(k)Px(k) = xT(k)[FTPF – P]x(k) = – xT(k)Qx(k)
Since Q is positive definite, ΔV(x) is negative definite. Further, V(x) → ∞ as ||x|| → ∞. Therefore, the system is globally asymptotically stable at the origin.
The proof of necessity is analogous to that of the continuous-time case (refer to [105]).
Comments
(i) In very simple cases, Eqn. (8.14), called the discrete Lyapunov equation, can be solved analytically,
but usually a numerical solution is required. A number of computer programs for this purpose are
available [152–154].
(ii) If ΔV(x(k)) = – xT(k)Qx(k) does not vanish identically along any trajectory, then Q may be chosen to be positive semidefinite.
A necessary and sufficient condition that ΔV(x(k)) does not vanish identically along any trajectory (meaning that ΔV(x(k)) = 0 only at x = 0) is that
ρ[H; HF; ⋯; HFⁿ⁻¹] = n;  Q = HTH    (8.15)
where ρ(·) stands for the rank of a matrix.
Example 8.2
Let us determine the stability of the system described by the following equation:
x(k + 1) = Fx(k)
with
F = [–1  –2; 1  –4]
We will first solve Eqn. (8.14) for P for an arbitrary choice of real symmetric positive definite matrix Q.
We may choose Q = I, the identity matrix. Equation (8.14) then becomes
FTPF – P = – I
or
[–1  1; –2  –4][p11  p12; p12  p22][–1  –2; 1  –4] – [p11  p12; p12  p22] = [–1  0; 0  –1]
or – 2p12 + p22 = – 1
2p11 + p12 – 4p22 = 0
4p11 + 16p12 + 15p22 = – 1
Solving for pij’s, we obtain
P = [p11  p12; p12  p22] = [–43/60  11/30; 11/30  –4/15]
Using Sylvester’s test (Section 5.2) we find that P is negative definite. Therefore, the system under
consideration is unstable.
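The discrete case can likewise be checked numerically. A sketch assuming SciPy (again, mind the transpose convention):

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    F = np.array([[-1.0, -2.0], [1.0, -4.0]])
    Q = np.eye(2)

    # solve_discrete_lyapunov(a, q) solves a@X@a.T - X + q = 0;
    # passing a = F.T turns this into F.T@P@F - P = -Q, i.e., Eqn (8.14).
    P = solve_discrete_lyapunov(F.T, Q)
    print(P)                                   # [[-43/60, 11/30], [11/30, -4/15]]
    print(np.linalg.eigvals(P))                # negative: P is not positive definite
    print(np.abs(np.linalg.eigvals(F)))        # 2 and 3, outside the unit circle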
IAE ≜ ∫₀^∞ |e(t)| dt
If the index is to be computed numerically, the infinite upper limit can be replaced by the limit tf , where
tf is large enough so that e(t) is negligible for t > tf. This index is not unreasonable, since both fast but highly oscillatory systems and sluggish systems give a large IAE value (refer to Fig. 8.3).
Minimization of IAE by adjusting system parameters will provide acceptable relative stability and speed
of response. Also, a finite value of IAE implies that the steady-state error is zero.
Another similar index is the integral of time multiplied by absolute error (ITAE), which exhibits the
additional useful features that the initial large error (unavoidable for a step input) is not heavily weighted,
whereas errors that persist are more heavily weighted.
ITAE ≜ ∫₀^∞ t|e(t)| dt
Fig. 8.3  Step responses y(t) for gain too high, gain optimum, and gain too low, with the corresponding IAE
The integral of square error (ISE) and integral of time multiplied by square error (ITSE) indices are
analogous to IAE and ITAE criteria, except that the square of the error is employed for three reasons:
(i) in some applications, the squared error represents the system’s power consumption, (ii) squaring the
error weighs large errors more heavily than small errors, and (iii) the squared error is much easier to
handle analytically.
ISE ≜ ∫₀^∞ e²(t) dt
ITSE ≜ ∫₀^∞ te²(t) dt
The system whose design minimizes (or maximizes) the selected performance index with no constraints
on controller configuration is, by definition, optimal.
The difference between parameter optimization and optimal control problems is that no constraint on
controllers is imposed on the latter. In optimal design, the designer is permitted to use controllers of
any degree and any configuration, whereas in parameter optimization the configuration and the type of
controllers are predetermined. Since there is no constraint imposed on controllers, optimal design results
in a better system, i.e., lower value of the performance index.
However, because of considerations other than minimization of the performance index, one may not
build an optimal control system. For example, optimal solutions to the problem of control of a linear
time-invariant plant may result in a nonlinear and/or time-varying system. Hardware realization of such
an optimal control law may be quite difficult and expensive. Also, in many control problems, the optimal
solution gives an open-loop control system which is successful only in the absence of meaningful
disturbances. In practical systems, then, it may be more sensible to seek suboptimal control laws: we
select a feedback control configuration and the type of controller, based on considerations of cost,
availability of components, etc., and then determine the best possible values of the free parameters of
the controller that minimize the given performance index. Modifications in control configuration and the
type of controller are made until a satisfactory system is obtained—which has performance character-
istics close to the optimal control system we have worked out in theory.
There exists an important class of optimal control problems for which quite general results have been
obtained. It involves control of linear systems with the objective of minimizing the integral of a quadratic
performance index. An important feature of this class of problems is that optimal control is possible by
feedback controllers. For linear time-invariant plants, the optimal control results in a linear time-invariant
closed-loop system. The implementation of optimal control is, therefore, simple and less expensive.
Many problems of industrial control belong to this class of problems—linear quadratic optimal control
problems.
As we shall see later in this chapter, the linear quadratic optimal control laws have some computational
advantage, and a number of useful properties. The task of the designer shifts to the one of specifying
various parameters in the performance index.
In the previous chapters, we have been mostly concerned with the design of single-variable systems.
Extensions of the root-locus method and the Bode/Nyquist-plot design to multivariable cases have
been reported in the literature. However, the design of multivariable systems using these techniques is
much more complicated than the single-variable cases. Design of multivariable systems through pole-
placement can also be carried out; the computations required are however highly complicated.
The optimal control theory provides a simple and powerful tool for designing multivariable systems.
Indeed, the equations and computations required in the design of optimal single-variable systems and
those in the design of optimal multivariable systems are almost identical. We will use, therefore, in
this chapter, the Multi-Input, Multi-Output (MIMO) state variable model in the formulation of optimal
control problem.
The objective set for the rest of this chapter is the presentation of simple, and analytically solvable, optimal
control and parameter optimization problems. This will provide insight into optimal and suboptimal
structures and algorithms that may be applied in practical cases. For detailed study, specialized books
on optimal control [109–124] should be consulted. A moderate treatment of the subject is also available
in reference [105].
Commercially available software [152–154] may be used for solving complex optimal/suboptimal control
problems.
J = ∫₀^∞ [y(t) – yr]² dt    (8.19a)
= ∫₀^∞ e²(t) dt    (8.19b)
0
where yr is the command or set-point value of the output, y(t) is the actual output, e(t) = y(t) – yr is the
error of the system.
This criterion, which has good mathematical tractability properties, is acceptable in practice as a measure of system performance. The criterion penalizes positive and negative errors equally. It penalizes
large errors heavily; hence, a small J usually results in a system with small overshoot. Since the integration is carried out over [0, ∞), a small J limits the effect of a small error lasting for a long time and, thus, results in a small settling time. Also, a finite J implies that the steady-state error is zero.
The optimal design obtained by minimizing the performance index given by Eqns (8.19) may be
unsatisfactory, because it may lead to excessively large magnitudes of control signals. A more realistic
solution to the problem is reached, if the performance index is modified to account for physical constraints
like saturation in physical devices. Therefore, a more realistic performance index is of the form
J = ∫₀^∞ e²(t) dt    (8.20a)
∫₀^∞ u²(t) dt    (8.21)
We would, very much, like to replace the performance criterion given by (8.20) by the following quadratic performance index:
J = ∫₀^∞ [e²(t) + u²(t)] dt
More generally, a weighting factor λ > 0 may be attached to the control term:
J = ∫₀^∞ [e²(t) + λu²(t)] dt    (8.22)
By adjusting the weighting factor λ, we can weigh the relative importance of the system error and the expenditure of energy. By increasing λ, i.e., by giving sufficient weight to the control effort, the amplitude of the control signal, which minimizes the overall performance index, may be kept within practical bounds, although at the expense of increased system error. Note that as λ → 0, the performance index reduces to the integral square error criterion. In this case, the magnitude of u(t) will be very large, and the constraint given by (8.20b) may be violated. If λ → ∞, the performance index reduces to the one given by Eqn. (8.21), and the optimal system that minimizes this J is one with u = 0. From these two extreme cases, we conclude that if λ is properly chosen, then the constraint of Eqn. (8.20b) will be satisfied.
Example 8.3
For the system of Fig. 8.4, let us compute the value of K that minimizes ISE for the unit-step input.
Obviously, the minimum value of ISE is obtained as K → ∞. This is an impractical solution, resulting in excessive strain on the physical components of the system.
Sound engineering judgment tells us that we must include the ‘cost’ of the control effort in our
performance index. The quadratic performance index
J = ∫₀^∞ [e²(t) + u²(t)] dt
may serve the objective. For the system of Fig. 8.4, this index evaluates to J = 1/(2K) + K/2.
∂J/∂K = – 1/(2K²) + 1/2 = 0, or K = 1
Note that
∂²J/∂K² = 1/K³ > 0
With the weighted index J = 1/(2K) + λK/2, ∂J/∂K = 0 gives K = 1/√λ; for λ = 0.5, K = √2 and Jmin = 0.707.
When λ is greater than unity, more importance is given to the constraint on the amplitude of u(t) than to the performance of the system. A suitable value of λ is chosen so that the relative importance of the system performance is contrasted with the importance of the limit on control effort. Figure 8.5 gives a plot of the performance index versus K for various values of λ.
Fig. 8.5  Performance index J versus K for λ = 0, 0.5, and 1
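The two minimizations are easily reproduced; a sketch assuming the index J(K) = 1/(2K) + λK/2 derived above:

    import numpy as np

    def J(K, lam):
        return 1.0 / (2.0 * K) + lam * K / 2.0   # error term + weighted control term

    for lam in (1.0, 0.5):
        K_opt = 1.0 / np.sqrt(lam)               # from dJ/dK = -1/(2K^2) + lam/2 = 0
        print(lam, K_opt, J(K_opt, lam))         # (1, 1.0, 1.0) and (0.5, 1.414, 0.707)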
Example 8.4
Consider the liquid-level system shown in Fig. 8.6. h represents the deviation of the liquid head from the steady-state value H̄.
The pump controls the liquid head h by supplying liquid at the rate (Q̄i + qi) m³/sec to the tank. We shall assume that the flow rate qi is proportional to the error in liquid level (desired level – actual level). Under these assumptions, the system equations are [155]:
(i) A dh/dt = qi – ρgh/R
where A = area of cross-section of the tank;
R = total resistance offered by the tank outlet and pipe (R ≜ incremental change in pressure across the restriction/incremental change in flow through the restriction);
ρ = density of the liquid; and
g = acceleration due to gravity.
Fig. 8.6  Liquid-level control system: the controller compares the desired level with the actual level (from the measurement system) and drives the pump, which supplies flow Q̄i + qi to the tank (head H̄ + h, outlet resistance R)
(ii) qi = Ke
where e = error in liquid level and K = gain constant
Let A = 1 and R/(ρg) = 1. Then
H(s)/Qi(s) = 1/(s + 1)
The block diagram representation is given in Fig. 8.7. The output y(t) = h(t) is the deviation in liquid
head from steady-state value. Therefore, the output y(t) is itself the error which is to be minimized. Let
us pose the problem of computing the value of K that minimizes the ISE for the initial condition y(0) = 1.
Fig. 8.7  Block diagram of the liquid-level control system: error e drives the gain K giving qi = u; the process includes an integrator with initial condition y(0) and output y(t) = h(t)
ISE = ∫₀^∞ y²(t) dt = 1/(2(1 + K))
Obviously, the minimum value of ISE is obtained as K → ∞.
This is an impractical solution, resulting in excessive strain on the physical components of the system.
Increasing the gain means, in effect, increasing the pump size.
Now, consider the problem of minimization of
J = ∫₀^∞ [y²(t) + u²(t)] dt
From Fig. 8.7, we have
u(t) = – Ky(t)
Therefore,
u(t) = – Ke^{–(1 + K)t}
J = 1/(2(1 + K)) + K²/(2(1 + K))
∂J/∂K = 0 gives K = √2 – 1
Note that
∂²J/∂K² = 2/(1 + K)³ > 0
The minimum value of J is (√2 – 1).
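A numerical check, assuming SciPy's scalar minimizer:

    from scipy.optimize import minimize_scalar

    def J(K):
        # J = (1 + K^2) / (2(1 + K)), as derived above
        return (1.0 + K**2) / (2.0 * (1.0 + K))

    res = minimize_scalar(J, bounds=(0.0, 10.0), method='bounded')
    print(res.x, res.fun)     # both ~0.4142 = sqrt(2) - 1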
8.5.1 State Regulator Problem
The performance index given by Eqn. (8.22), is a translation of the requirement of regulation of the
system output, with constraints on amplitude of the input applied to the plant. We now extend the
proposed performance index for the control problem where all the state variables of the system are to be
regulated. We use multivariable formulation of the plant model.
Consider the control problem where the objective is to maintain the system state given by the n ¥ 1 state
vector x(t), near the desired state xd (which, in many cases, is the equilibrium point of the system) for
all time.
Relative to the desired state xd, (x(t) – xd) can be viewed as the instantaneous system error. If we transform
the system coordinates such that the desired state becomes the origin of the state space, then the new state
x(t) is itself the error.
One measure of the magnitude of the state vector x(t) (or of its distance from the origin) is the norm
|| x(t)|| defined by
||x(t)||2 = xT(t)x(t)
Therefore,
J = ∫₀^∞ [xT(t)Qx(t)] dt    (8.23)
with Q as an n × n real, symmetric, positive definite (or positive semidefinite) constant matrix, can be used as a performance measure. The simplest form of Q one can use is the diagonal matrix:
Q = diag(q1, q2, …, qn)
The ith entry of Q represents the weight the designer places on the constraint on the state variable xi(t). The larger the value of qi relative to the other values of q, the more control effort is spent to regulate xi(t).
The design obtained by minimizing the performance index of the form (8.23) may be unsatisfactory in
practice. A more realistic solution is obtained if the performance index is modified by adding a penalty
term for physical constraints on the p ¥ 1 control vector u(t). One of the ways of accomplishing this is to
introduce the following quadratic control term in the performance index:
J = ∫₀^∞ [uT(t)Ru(t)] dt    (8.24)
where R is a p × p real, symmetric, positive definite¹ constant matrix.
By giving sufficient weight to control terms, the amplitudes of control signals which minimize overall
performance index may be kept within practical bounds, although at the expense of increased error in
x(t).
For the state regulator problem, a useful performance measure is, therefore²,
J = (1/2) ∫₀^∞ [xT(t)Qx(t) + uT(t)Ru(t)] dt    (8.25)
For the output regulator problem, a useful performance measure is
J = (1/2) ∫₀^∞ [yT(t)Qy(t) + uT(t)Ru(t)] dt    (8.26a)
where Q is a q × q positive definite (or positive semidefinite), real, symmetric constant matrix, and R is a p × p positive definite, real, symmetric, constant matrix.
Substituting y = Cx in Eqn. (8.26a), we get
J = (1/2) ∫₀^∞ (xTCTQCx + uTRu) dt    (8.26b)
Comparing Eqn. (8.26b) with Eqn. (8.25) we observe that the two indices are identical in form; Q in
Eqn. (8.25) is replaced by CTQC in Eqn. (8.26b). If we assume that the plant is completely observable,
then C cannot be zero; CTQC will be positive definite (or positive semidefinite) whenever Q is positive
definite (or positive semidefinite).
Thus, the solution to the output regulator problem directly follows from that of the state regulator problem.
¹ As we shall see in Section 8.7, positive definiteness of R is a necessary condition for the existence of the optimal solution to the control problem.
² Note that multiplication by 1/2 does not affect the minimization problem. The constant helps us in mathematical manipulations, as we shall see later in Section 8.7.
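Although the solution machinery for the index (8.25) is developed only in Section 8.7, the flavor of the linear quadratic problem can be previewed numerically. A sketch assuming SciPy and a hypothetical double-integrator plant; the weighting choices are illustrative:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # hypothetical double-integrator plant
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([1.0, 0.0])                  # weight on the position state only
    R = np.array([[1.0]])

    P = solve_continuous_are(A, B, Q, R)     # algebraic Riccati equation
    K = np.linalg.inv(R) @ B.T @ P           # optimal state-feedback gain, u = -Kx
    print(K)                                 # [[1.  1.414...]]
    print(np.linalg.eigvals(A - B @ K))      # stable closed-loop poles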
some gains are fixed by the physical constraints of the system and are, therefore, relatively inflexible.
Similarly, if all the states x(t) are not accessible for feedback, one has to go for a state observer whose
complexity is comparable to that of the system itself. It is natural to seek a procedure that relies on the
use of feedback from the accessible state variables only, constraining the gain elements of matrix K
corresponding to the inaccessible state variables, to have zero value (Section 8.9). Thus, whether one
chooses an optimal or suboptimal solution depends on many factors in addition to the performance
required out of the system.
8.6.2
Implementation of the optimal control law given by Eqns (8.29) requires the ability to directly measure
the entire state vector x(t). For many systems, full state measurements are not practical. In Section 7.5,
we found that the state vector of an observable linear system can be estimated using a state observer
which operates on input and output measurements. We assumed that all inputs can be specified exactly
and all outputs can be measured with unlimited precision. The dynamic behavior of the observer was
assumed to be specified in terms of its characteristic equation.
Here, we are concerned with the optimal design of the state observer for the multivariable system given
by Eqns (8.27).
We postulate the existence of an observer of the form
x̂̇(t) = A x̂(t) + Bu(t) + M[y(t) – C x̂(t)]    (8.30)
where x̂ is the estimate of the state x, and M is an n × q real constant gain matrix. The observer structure is shown in Fig. 8.9, which is of the same form as that considered in Section 7.5. The estimation error is given by
x̃(t) = x(t) – x̂(t)    (8.31a)
Plant
u x y
x = Ax + Bu C
+ +
+ x x y –
B Ún C
+
n-parallel
integrators
M
Observer
Fig. 8.9
$$J = \tfrac{1}{2}\int_0^\infty \left(y^TQ_0y + g^TR_0g\right)dt \qquad (8.33)$$

where Q0 is a positive definite (or positive semidefinite), real, symmetric, constant matrix, and R0 is a
positive definite, real, symmetric, constant matrix.
The solution to this problem exists if the auxiliary system (8.32) is completely controllable. This
condition is met if the original system (8.27) is completely observable.
The separation principle (refer to Section 7.6) allows for the separate designs of state-feedback control
law and state observer; the control law and the observer are then combined as per the configuration of
Fig. 8.10. The weighting matrices Q0 and R0, for the observer design, can be assumed to be equal to the
weighting matrices Q and R, respectively, of the control-law design. Generally, however, one would design a faster observer in comparison with the regulator, i.e., for Q0 = Q, the elements of R0 are chosen smaller than those of R.
The above solution to the state estimation problem, based on duality between control and estimation, can be formalized by using "Optimal Filtering Theory". The formal development of the result extends beyond the scope of this text [105]. We may, however, use the term "optimal filter" for the state observer designed by the procedure given here in the text.

Fig. 8.10 Control configuration combining the control law u = −Kx̂ with the observer x̂̇ = Ax̂ + Bu + M(y − Cx̂)
An optimal filter whose weighting matrices Q0 and R0 are determined by the “spectral properties” of the
exogenous noise signals is termed a Kalman filter [105].
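The duality described above can be exercised numerically. The sketch below is my illustration, not part of the text: the plant matrices and weights are assumed, and SciPy's continuous-time Riccati solver is applied to the dual pair (Aᵀ, Cᵀ) to produce the observer gain M.

```python
# Minimal sketch of observer-gain design via control/estimation duality.
# The plant and weighting matrices are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -2.0]])    # hypothetical plant matrix
C = np.array([[1.0, 0.0]])     # single measured output
Q0 = np.eye(2)                 # observer weighting matrices (assumed)
R0 = np.array([[0.1]])         # smaller R0 -> faster observer, as noted above

# Duality: solve the regulator Riccati equation for the pair (A^T, C^T);
# the solution P then gives the observer gain M = P C^T R0^{-1}.
P = solve_continuous_are(A.T, C.T, Q0, R0)
M = P @ C.T @ np.linalg.inv(R0)

# The estimation-error dynamics are governed by (A - MC); check stability.
print("observer poles:", np.linalg.eigvals(A - M @ C))
```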
8.6.3
The control configuration of Fig. 8.8 implicitly assumes that the null state x = 0 is the desired equilibrium
state of the system. It is a state regulator with zero command input.
In servo systems, where the output y(t) is required to track a constant command input, the equilibrium
state is a constant point (other than the origin) in state space. This servo problem can be formulated as a
‘shifted regulator problem’, by shifting the origin of the state space to the equilibrium point. Formulation
of the shifted regulator problem for single-input systems was given in Section 7.7. Extension of the
formulation to the multi-input case is straightforward.
8.6.4
In a state-feedback control system (which is a generalization of proportional plus derivative feedback),
it is usually required that the system have one or more integrators within the closed loop. This will lead
to zero steady-state error when the command input and disturbance have constant steady-state values.
Unless the plant to be controlled has integrating property, it is generally necessary to add one or more
integrators within the loop.
For the system (8.27), we can feedback the state x as well as the integral of the error in output by
augmenting the plant state x with the extra ‘integral state’. For single-input systems, the problem of
state feedback with integral control was formulated as a state regulator problem in Section 7.8. This was
done by augmenting the plant state with ‘integral state’, and shifting the origin of the state space to the
equilibrium point. Multivariable generalization of state feedback with integral control is straightforward.
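As a rough illustration of the augmentation just described (the construction follows the standard integral-state idea; the helper and matrices below are my own, not from the text), the plant state can be extended with the integral state before any regulator design:

```python
# Sketch: augment a plant with an 'integral state' z_dot = y = Cx
# (origin already shifted to the equilibrium point). Matrices are assumed.
import numpy as np

def augment_with_integral(A, B, C):
    """Return augmented (Aa, Ba) for the state [x; z], where z_dot = Cx."""
    n, p = B.shape
    q = C.shape[0]
    Aa = np.block([[A, np.zeros((n, q))],
                   [C, np.zeros((q, q))]])
    Ba = np.vstack([B, np.zeros((q, p))])
    return Aa, Ba

A = np.array([[0.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [20.0]])
C = np.array([[1.0, 0.0]])
Aa, Ba = augment_with_integral(A, B, C)
print(Aa, Ba, sep="\n")
```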
$$J = \tfrac{1}{2}\int_0^\infty \left(x^TQx + u^TRu\right)dt \qquad (8.35)$$

where Q is an n × n positive definite, real, symmetric, constant matrix, and R is a p × p positive definite, real,
symmetric, constant matrix.
Since the (A, B) pair is completely controllable³, there exists a state-feedback control law

$$u = -Kx \qquad (8.36)$$

where K is a p × n real, constant, unconstrained gain matrix, that results in an asymptotically stable closed-loop system (refer to Section 7.3):

$$\dot{x}(t) = Ax - BKx = (A - BK)x \qquad (8.37)$$

This implies that there is a Lyapunov function V = ½xᵀPx for the closed-loop system (8.37); that is, for
some positive definite matrix P, the time derivative dV/dt evaluated on the trajectories of the closed-loop
system is negative definite. We now state and prove a condition for u = −Kx(t) to be optimal [35].
Theorem 8.5 If the state-feedback controller u = −Kx(t) is such that it minimizes the function

$$f(u) = \frac{dV}{dt} + \tfrac{1}{2}\left(x^TQx + u^TRu\right) \qquad (8.38)$$

and the minimum value of f(u) = 0 for some V = ½xᵀPx, then the controller is optimal.
Proof The optimal controller u* satisfies

$$\left.\frac{dV}{dt}\right|_{u=u^*} + \tfrac{1}{2}x^TQx + \tfrac{1}{2}u^{*T}Ru^* = 0$$

Hence

$$\left.\frac{dV}{dt}\right|_{u=u^*} = -\tfrac{1}{2}x^TQx - \tfrac{1}{2}u^{*T}Ru^*$$

Integrating both sides with respect to time from 0 to ∞ yields

$$V(x(\infty)) - V(x(0)) = -\tfrac{1}{2}\int_0^\infty\left(x^TQx + u^{*T}Ru^*\right)dt$$

Since the closed-loop system is asymptotically stable, x(∞) = 0, and therefore

$$V(x(0)) = \tfrac{1}{2}x^T(0)Px(0) = \tfrac{1}{2}\int_0^\infty\left(x^TQx + u^{*T}Ru^*\right)dt$$

Thus, if a linear stabilizing controller satisfies the hypothesis of the theorem, then the value of the
performance index (8.35) for such a controller is

$$J(u^*) = \tfrac{1}{2}x^T(0)Px(0)$$
³ The controllability of the (A, B) pair is not a necessary condition for the existence of the optimal solution. If the (A, B) pair is not completely controllable, we can transform the plant model to the controllability canonical form given in Eqn. (5.123c). It decomposes the model into two parts: the controllable part and the uncontrollable part. If the uncontrollable part is stable, then the model is said to be stabilizable. Stabilizability of the (A, B) pair is a necessary condition for the existence of the optimal solution.
Since u* minimizes the function in (8.38) and the minimum value is zero, for any û different from u*, the
value of the function will be greater than or equal to zero:

$$\left.\frac{dV}{dt}\right|_{u=\hat{u}} + \tfrac{1}{2}\left(x^TQx + \hat{u}^TR\hat{u}\right) \ge 0$$

or

$$\left.\frac{dV}{dt}\right|_{u=\hat{u}} \ge -\tfrac{1}{2}\left(x^TQx + \hat{u}^TR\hat{u}\right)$$

Integrating both sides with respect to time from 0 to ∞ yields

$$V(x(0)) \le \tfrac{1}{2}\int_0^\infty\left(x^TQx + \hat{u}^TR\hat{u}\right)dt$$

implying that

$$J(u^*) \le J(\hat{u})$$

Therefore, the controller u* is optimal.
To construct u*, we first find the u that minimizes f(u); the necessary condition for a minimum is

$$\left.\frac{\partial}{\partial u}\left(\frac{dV}{dt} + \tfrac{1}{2}x^TQx + \tfrac{1}{2}u^TRu\right)\right|_{u=u^*} = 0$$

Differentiating the above, with V = ½xᵀPx, yields

$$\frac{\partial}{\partial u}\left(\frac{dV}{dt} + \tfrac{1}{2}x^TQx + \tfrac{1}{2}u^TRu\right) = \frac{\partial}{\partial u}\left(\tfrac{1}{2}\left(\dot{x}^TPx + x^TP\dot{x}\right) + \tfrac{1}{2}x^TQx + \tfrac{1}{2}u^TRu\right)$$
$$= \frac{\partial}{\partial u}\left(x^TP\dot{x} + \tfrac{1}{2}x^TQx + \tfrac{1}{2}u^TRu\right) = \frac{\partial}{\partial u}\left(x^TPAx + x^TPBu + \tfrac{1}{2}x^TQx + \tfrac{1}{2}u^TRu\right)$$
$$= B^TP^Tx + Ru = B^TPx + Ru = 0$$

Therefore, the optimal control law is

$$u^* = -R^{-1}B^TPx(t) \qquad (8.40)$$

and the resulting closed-loop system is

$$\dot{x} = \left(A - BR^{-1}B^TP\right)x;\qquad x(0) \triangleq x_0$$
Our optimal controller satisfies the equation

$$\left.\frac{dV}{dt}\right|_{u=u^*} + \tfrac{1}{2}x^TQx + \tfrac{1}{2}u^{*T}Ru^* = 0$$

that is,

$$x^TPAx + x^TPBu^* + \tfrac{1}{2}x^TQx + \tfrac{1}{2}u^{*T}Ru^* = 0$$

Substituting u* = −R⁻¹BᵀPx and using the symmetry of P, this reduces to

$$\tfrac{1}{2}x^T\left(A^TP + PA + Q - PBR^{-1}B^TP\right)x = 0$$

Since the equality must hold for arbitrary x, we require

$$A^TP + PA + Q - PBR^{-1}B^TP = 0 \qquad (8.41)$$
The above equation is referred to as the algebraic Riccati equation. In conclusion, the synthesis of the
optimal linear state-feedback controller, minimizing the performance index

$$J = \tfrac{1}{2}\int_0^\infty\left(x^TQx + u^TRu\right)dt$$

subject to

$$\dot{x} = Ax + Bu;\qquad x(0) \triangleq x_0$$

reduces to the solution of the Riccati equation (8.41) for the positive definite matrix P.
$$\rho\begin{bmatrix} H \\ HA \\ \vdots \\ HA^{n-1} \end{bmatrix} = n;\qquad Q = H^TH \qquad (8.43)$$

then V̇(x) < 0 for all x ≠ 0.
We prove this result by contradiction: suppose the rank condition is satisfied but V̇(x) = 0 for some x ≠ 0.
Substituting Q = HᵀH in Eqn. (8.42), we obtain (refer to Eqns (5.6))

$$\dot{V}(x) = -\tfrac{1}{2}\left(x^TH^THx + x^TK^TRKx\right) = -\tfrac{1}{2}\left[\|Hx\|^2 + \|Kx\|_R^2\right]$$
$$J = \tfrac{1}{2}\int_0^\infty\left(x^TQx + u^TRu\right)dt = \tfrac{1}{2}\int_0^\infty\left(x^TH^THx + u^TRu\right)dt = \tfrac{1}{2}\int_0^\infty\left(y^Ty + u^TRu\right)dt$$

where y = Hx; the observability of the pair (A, H) implies that all the modes of the state trajectories are reflected in
the performance index. A finite value of J, therefore, ensures that unstable modes (if any) have been
stabilized⁴ by the control u = −Kx.
The observability condition is always satisfied when the matrix Q is positive definite.
The design steps may now be stated as follows:
(i) Solve the matrix Riccati equation (8.41) for the positive definite matrix P.
(ii) Substitute this matrix P into Eqn. (8.40); the resulting equation gives the optimal control law.
This is a basic and well-known result in the theory of optimal control. Once the designer has
specified Q and R, representing his/her assessment of the relative importance of various terms in the
performance index, the solution of Eqn. (8.41) specifies the optimal control law (8.40). This yields the
optimal closed-loop system. If the resulting transient response is unsatisfactory, the designer may alter
the weighting matrices Q and R, and try again.
Comments
(i) The matrix R has been assumed to be positive definite. This is a necessary condition for the
existence of the optimal solution to the control problem, as seen from Eqn. (8.40).
(ii) We have assumed that the plant (8.34) is completely controllable, and the matrix Q in the performance
index J, given by Eqn. (8.35), is positive definite. These are sufficient conditions for the existence
of an asymptotically stable optimal solution to the control problem. The requirement on matrix Q
may be relaxed to a positive semidefinite matrix with the pair (A, H) completely observable, where
Q = HᵀH.
(iii) It is important to be able to find out whether the sought-after solution exists or not, before we
start working out the solution. This is possible only if necessary conditions for the existence of
asymptotically stable optimal solution are established. A discussion on this subject entails not only
controllability and observability, but also the concepts of stabilizability and detectability. Basic
ideas about these concepts have been given in footnotes of this chapter; a detailed discussion is
beyond the scope of this book.
(iv) Equation (8.41) is a set of n² nonlinear algebraic equations. Since P is a symmetric matrix, we
need to solve only n(n + 1)/2 equations.
(v) The solution of Eqn. (8.41) is not unique. Of the several possible solutions, the desired answer is
obtained by enforcing the requirement that P be positive definite. The positive definite solution of
Eqn. (8.41) is unique.
(vi) In very simple cases, the Riccati equation can be solved analytically, but usually a numerical
solution is required. A number of computer programs for the purpose are available [152–154].
Appendix A provides some MATLAB support.
⁴ The observability canonical form for a state model which is not completely observable is given in Eqn. (5.124c). It
decomposes the model into two parts: the observable part and the unobservable part. If the unobservable part is
stable, then the model is said to be detectable.
In the optimization problem under consideration, the observability of the pair (A, H) is not a necessary
condition for the existence of a stable solution. If the pair (A, H) is detectable, then the modes of state trajectories,
not reflected in J, are stable and a finite value of J will ensure asymptotic stability of the closed-loop system.
(vii) Note that the optimal state regulator requires that all the parameters of matrix K in the control
law (8.36) are free parameters. However, all the parameters of matrix K may not be available for
adjustment. The gain elements of matrix K corresponding to inaccessible state variables may
be constrained to zero value (otherwise, a state observer will be required). Also, some gains may
be fixed by physical constraints of the system. This leads to the parameter optimization problem:
optimization of the free parameters in the matrix K. The difference between the parameter optimization
and optimal control problems is that no constraint on the controller is imposed in the latter. A
solution to the parameter optimization (suboptimal control) problem will be given in Section 8.9.
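Before moving on, the basic design procedure (steps (i) and (ii) above) can be illustrated numerically. The following is a minimal sketch with an assumed second-order plant, using SciPy's Riccati solver in place of the MATLAB routines of Appendix A and the programs of [152–154]:

```python
# Sketch of the two LQR design steps: solve the algebraic Riccati equation
# (8.41) for P, then form the optimal gain from Eqn. (8.40).
# The plant and weights here are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)              # state weighting (positive definite)
R = np.array([[1.0]])      # control weighting (positive definite)

# Step (i): solve A^T P + P A + Q - P B R^{-1} B^T P = 0 for P > 0.
P = solve_continuous_are(A, B, Q, R)

# Step (ii): optimal control law u = -R^{-1} B^T P x = -K x.
K = np.linalg.inv(R) @ B.T @ P
print("P =", P)
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```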
Example 8.5
Consider the problem of attitude control of a rigid satellite, which was discussed in Example 7.1. An attitude
control system for the satellite that utilizes rate feedback is shown in Fig. 8.11; θ(t) is the actual attitude,
θr(t) is the reference attitude which is a step function, and u(t) is the torque developed by the thrusters.
Fig. 8.11 Attitude control system of the satellite: θr → k1 → (1/s) → x2 → (1/s) → y = x1 = θ, with rate feedback k2

The performance index to be minimized is

$$J = \int_0^\infty\left[(\theta_r - \theta)^2 + u^2\right]dt \qquad (8.45)$$

In terms of the shifted state variables

$$\bar{x}_1 = x_1 - \theta_r;\qquad \bar{x}_2 = x_2,$$
the performance index (8.45) becomes

$$J = \int_0^\infty\left(\bar{x}_1^2 + u^2\right)dt \qquad (8.47)$$

The Q and R matrices are

$$Q = \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix};\qquad R = 2$$

Note that Q is a positive semidefinite matrix:

$$Q = H^TH = \begin{bmatrix}\sqrt{2}\\ 0\end{bmatrix}\begin{bmatrix}\sqrt{2} & 0\end{bmatrix}$$

The pair (A, H) is completely observable. Also, the pair (A, B) is completely controllable. Therefore,
sufficient conditions for the existence of an asymptotically stable optimal solution are satisfied.
The matrix Riccati equation is

$$A^TP + PA - PBR^{-1}B^TP + Q = 0$$

or

$$\begin{bmatrix}0 & 0\\ 1 & 0\end{bmatrix}\begin{bmatrix}p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix} + \begin{bmatrix}p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix} - \tfrac{1}{2}\begin{bmatrix}p_{12}\\ p_{22}\end{bmatrix}\begin{bmatrix}p_{12} & p_{22}\end{bmatrix} + \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix} = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}$$

Solving these three simultaneous equations for p11, p12, and p22, requiring P to be positive definite, we
obtain

$$P = \begin{bmatrix}2\sqrt{2} & 2\\ 2 & 2\sqrt{2}\end{bmatrix}$$

The optimal control law is therefore

$$u = -R^{-1}B^TP\bar{x} = -\begin{bmatrix}1 & \sqrt{2}\end{bmatrix}\bar{x}$$

i.e., k1 = 1 and k2 = √2.
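As a numerical cross-check (my addition, not part of the original example), the Riccati solver reproduces the closed-form P and gains:

```python
# Verify Example 8.5 numerically against the analytical solution.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator satellite model
B = np.array([[0.0], [1.0]])
Q = np.array([[2.0, 0.0], [0.0, 0.0]])
R = np.array([[2.0]])

P = solve_continuous_are(A, B, Q, R)
print(P)                                  # ~ [[2*sqrt(2), 2], [2, 2*sqrt(2)]]
K = np.linalg.inv(R) @ B.T @ P
print(K)                                  # ~ [[1, sqrt(2)]] -> k1 = 1, k2 = sqrt(2)
```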
Example 8.6
Consider the liquid-level system of Fig. 8.6. In Example 8.4, we designed an optimal regulator for this
process by direct parameter optimization. In the following, we use the Riccati equation for designing the
optimal regulator. The state equation of the process is
$$\frac{dy}{dt} = -y + u \qquad (8.48)$$

where
y = deviation of the liquid head from the steady state; and
u = rate of liquid inflow.
The performance index

$$J = \int_0^\infty\left(y^2 + u^2\right)dt$$

gives Q = 2 and R = 2. The scalar Riccati equation (8.41), −2P − P²/2 + 2 = 0, has the positive solution

$$P = 2\left(\sqrt{2} - 1\right)$$
The optimal control law is

$$u = -R^{-1}B^TPy(t) = -\left(\sqrt{2} - 1\right)y(t)$$

Substituting in Eqn. (8.48), we get the following equation for the closed-loop system:

$$\frac{dy}{dt} = -\sqrt{2}\,y$$
$$u = -k_1y(t) - k_2z(t)$$

that minimizes

$$J = \int_0^\infty\left(y^2 + u^2\right)dt$$

The Q and R matrices are

$$Q = \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix};\qquad R = 2$$

The state equation (8.49) is completely controllable, satisfying one of the sufficient conditions for the
existence of the optimal solution.
The matrix Q is positive semidefinite:

$$Q = H^TH = \begin{bmatrix}\sqrt{2}\\ 0\end{bmatrix}\begin{bmatrix}\sqrt{2} & 0\end{bmatrix}$$

The pair $\left(\begin{bmatrix}-1 & 0\\ 1 & 0\end{bmatrix}, \begin{bmatrix}\sqrt{2} & 0\end{bmatrix}\right)$ is not completely observable. Therefore, the other sufficient condition for
the existence of an asymptotically stable optimal solution is not satisfied. It can easily be verified that a
positive definite solution to the matrix Riccati equation does not exist in this case; the chosen matrix Q
cannot give a closed-loop stable optimal system.
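The loss of observability is easy to confirm numerically; the following check is my verification, using the observability matrix [H; HA] for n = 2:

```python
# Confirm that the pair (A, H) above is unobservable: [H; HA] loses rank.
import numpy as np

A = np.array([[-1.0, 0.0],
              [ 1.0, 0.0]])
H = np.array([[np.sqrt(2.0), 0.0]])

obs = np.vstack([H, H @ A])          # observability matrix for n = 2
print(np.linalg.matrix_rank(obs))    # 1 < 2 -> (A, H) not observable
```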
We now modify the performance index to the following:

$$J = \int_0^\infty\left(y^2 + z^2 + u^2\right)dt$$
We will assume that a matrix K exists such that (F − GK) is a stable matrix. The controllability of the
model (8.50) is sufficient to ensure this. This implies that there exists a Lyapunov function V(x(k)) = ½xᵀ(k)Px(k) for the closed-loop system (8.53). Therefore, the first forward difference, ΔV(x(k)) =
V(x(k + 1)) − V(x(k)), evaluated on the trajectories of the closed-loop system, is negative definite. We
now state and prove a condition for u(k) = −Kx(k) to be optimal [35].

Theorem 8.6 If the state-feedback controller u(k) = −Kx(k) is such that it minimizes the function

$$f(u) = \Delta V(x(k)) + \tfrac{1}{2}x^T(k)Qx(k) + \tfrac{1}{2}u^T(k)Ru(k) \qquad (8.54)$$

and the minimum value of f(u) = 0 for some V = ½xᵀ(k)Px(k), then the controller is optimal.
Proof The optimal controller u* satisfies

$$\left.\Delta V(x(k))\right|_{u=u^*} + \tfrac{1}{2}x^T(k)Qx(k) + \tfrac{1}{2}u^{*T}(k)Ru^*(k) = 0$$

Hence

$$\left.V(x(k+1)) - V(x(k))\right|_{u=u^*} = -\tfrac{1}{2}x^T(k)Qx(k) - \tfrac{1}{2}u^{*T}(k)Ru^*(k)$$

Summing from k = 0 to ∞, with x(∞) = 0 by asymptotic stability,

$$V(x(0)) = \tfrac{1}{2}x^T(0)Px(0) = \tfrac{1}{2}\sum_{k=0}^{\infty}\left(x^T(k)Qx(k) + u^{*T}(k)Ru^*(k)\right)$$

Thus, if a linear stabilizing controller satisfies the hypothesis of the theorem, then the value of the
performance index (8.51) resulting from applying the controller is

$$J(u^*) = \tfrac{1}{2}x^T(0)Px(0)$$
Since u* minimizes the function in (8.54) and the minimum value is zero, for any û different from u*, the
value of the function will be greater than or equal to zero:

$$\left.\Delta V(x(k))\right|_{u=\hat{u}} + \tfrac{1}{2}x^T(k)Qx(k) + \tfrac{1}{2}\hat{u}^T(k)R\hat{u}(k) \ge 0$$

or

$$\left.\Delta V(x(k))\right|_{u=\hat{u}} \ge -\tfrac{1}{2}x^T(k)Qx(k) - \tfrac{1}{2}\hat{u}^T(k)R\hat{u}(k)$$

Summing the above from k = 0 to ∞ yields

$$V(x(0)) \le \tfrac{1}{2}\sum_{k=0}^{\infty}\left(x^T(k)Qx(k) + \hat{u}^T(k)R\hat{u}(k)\right);$$
that is,

$$J(u^*) \le J(\hat{u})$$

for any û ≠ u*. Therefore, the controller u* is optimal.
Finding an optimal controller involves finding an appropriate quadratic Lyapunov function V(x(k)) = ½xᵀ(k)Px(k), which is used to construct the optimal controller. We first find u* that minimizes the
function

$$f(u(k)) = \Delta V(x(k)) + \tfrac{1}{2}x^T(k)Qx(k) + \tfrac{1}{2}u^T(k)Ru(k) \qquad (8.55)$$
$$= \tfrac{1}{2}x^T(k+1)Px(k+1) - \tfrac{1}{2}x^T(k)Px(k) + \tfrac{1}{2}x^T(k)Qx(k) + \tfrac{1}{2}u^T(k)Ru(k)$$
$$= \tfrac{1}{2}\left(Fx(k) + Gu(k)\right)^TP\left(Fx(k) + Gu(k)\right) - \tfrac{1}{2}x^T(k)Px(k) + \tfrac{1}{2}x^T(k)Qx(k) + \tfrac{1}{2}u^T(k)Ru(k)$$
The necessary condition for unconstrained minimization is ∂f(u(k))/∂u(k) = 0. Using the gradient identity

$$\frac{\partial}{\partial x}\left(f^Tg\right) = \frac{\partial f}{\partial x}g + \frac{\partial g}{\partial x}f$$

we obtain

$$\frac{\partial f(u(k))}{\partial u(k)} = \tfrac{1}{2}\frac{\partial\left(Fx(k) + Gu(k)\right)}{\partial u(k)}\left[P\left(Fx(k) + Gu(k)\right)\right] + \tfrac{1}{2}\frac{\partial\left(P\left(Fx(k) + Gu(k)\right)\right)}{\partial u(k)}\left[Fx(k) + Gu(k)\right] + \tfrac{1}{2}\frac{\partial\left(u^T(k)Ru(k)\right)}{\partial u(k)}$$
$$= \tfrac{1}{2}G^TP\left(Fx(k) + Gu(k)\right) + \tfrac{1}{2}G^TP\left(Fx(k) + Gu(k)\right) + Ru(k)$$
$$= G^TP\left(Fx(k) + Gu(k)\right) + Ru(k) = G^TPFx(k) + \left(R + G^TPG\right)u(k) = 0$$
The matrix R + GᵀPG is positive definite and, therefore, invertible. Hence

$$u^*(k) = -\left(R + G^TPG\right)^{-1}G^TPFx(k) = -Kx(k) \qquad (8.56)$$

where

$$K = \left(R + G^TPG\right)^{-1}G^TPF$$

We next check whether u* satisfies the second-order sufficient condition for a relative minimizer of the function
(8.55):

$$\frac{\partial^2 f(u(k))}{\partial u^2(k)} = \frac{\partial}{\partial u(k)}\left[G^TPFx(k) + \left(R + G^TPG\right)u(k)\right] = R + G^TPG$$

a positive definite matrix; that is, u* satisfies the second-order sufficient condition for a relative minimizer of the function (8.55).
The optimal controller (8.56) can be constructed if we have found an appropriate positive definite matrix
P. Our next task is to devise a method that would allow us to compute the desired matrix P. For this, we
first find the equation describing the closed-loop system driven by the optimal controller (8.56):

$$x(k+1) = \left(F - GS^{-1}G^TPF\right)x(k) \qquad (8.57)$$

where S = R + GᵀPG.
$$F^TPF - P + Q - F^TPG\left(R + G^TPG\right)^{-1}G^TPF = 0 \qquad (8.58)$$

This equation is called the discrete algebraic Riccati equation.
The discrete matrix Riccati equation given in (8.58) is one of several equivalent forms used in optimal regulator design. The analytical solution of the discrete Riccati equation is possible only for
very simple cases. A number of computer programs for the solution of the discrete Riccati equation are
available [152–154]. Appendix A provides some MATLAB support.
Example 8.7
Consider the problem of digital control of a plant described by the transfer function
$$G(s) = \frac{1}{s+1}$$

Discretization of the plant model gives

$$G_{h0}G(z) = \frac{Y(z)}{U(z)} = \mathcal{Z}\left[\left(\frac{1 - e^{-sT}}{s}\right)\left(\frac{1}{s+1}\right)\right] = \left(1 - z^{-1}\right)\mathcal{Z}\left[\frac{1}{s(s+1)}\right] = \frac{1 - e^{-T}}{z - e^{-T}}$$

For a sampling interval T = 1 sec,

$$G_{h0}G(z) = \frac{0.632}{z - 0.368}$$
The design requirements are:
(i) The performance index

$$J = \tfrac{1}{2}\sum_{k=0}^{\infty}\left[\bar{y}^2(k) + \bar{u}^2(k)\right]$$

is minimized, where ȳ and ū are, respectively, the deviations of the output and the control signal from their
steady-state values.
(ii) For a constant yr, y(∞) = yr, i.e., there is zero steady-state error.
For this design problem, we select the feedback plus feedforward control scheme shown in Fig. 8.15. The
feedback gain K is obtained from the solution of the shifted regulator problem, as is seen below.
Fig. 8.15 Feedback plus feedforward control scheme: u(k) = −Ky(k) + Nyr, with plant 0.632/(z − 0.368)
Let ys and us be the steady-state values of the output and the control signal, respectively. Equation (8.59)
at steady state becomes

$$y_s = 0.368y_s + 0.632u_s$$

The state equation (8.59) may equivalently be expressed as

$$\bar{y}(k+1) = 0.368\bar{y}(k) + 0.632\bar{u}(k)$$

where

$$\bar{y} = y - y_s;\qquad \bar{u} = u - u_s$$

In terms of this equivalent formulation, the optimal control problem is to obtain

$$\bar{u}(k) = -K\bar{y}(k)$$

so that

$$J = \tfrac{1}{2}\sum_{k=0}^{\infty}\left[\bar{y}^2(k) + \bar{u}^2(k)\right]$$

is minimized.
For this shifted regulator problem,

F = 0.368, G = 0.632, Q = 1, R = 1
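The design can be completed numerically. The sketch below is my illustration: SciPy's discrete-time Riccati solver replaces the programs of [152–154], solving (8.58) for this shifted regulator and forming K from Eqn. (8.56).

```python
# Solve the discrete Riccati equation (8.58) for the shifted regulator above.
import numpy as np
from scipy.linalg import solve_discrete_are

F = np.array([[0.368]])
G = np.array([[0.632]])
Q = np.array([[1.0]])
R = np.array([[1.0]])

# F^T P F - P + Q - F^T P G (R + G^T P G)^{-1} G^T P F = 0
P = solve_discrete_are(F, G, Q, R)
K = np.linalg.inv(R + G.T @ P @ G) @ (G.T @ P @ F)   # Eqn (8.56)
print("P =", P, " K =", K)
print("closed-loop pole:", (F - G @ K).item())
```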
If we now progress this analysis for the closed-loop system, we substitute for the control law in the system
equations. This gives

$$\dot{x} = \left(A - BK_0C\right)x \qquad (8.62)$$

The design problem can be stated through the closed-loop characteristic polynomial Δ(s) that specifies
the poles of the closed-loop system (eigenvalues of the matrix (A − BK₀C)):

$$\Delta(s) = s^n + \alpha_1s^{n-1} + \cdots + \alpha_n$$

The problem is to choose the available controller gain matrix K0 so that the specified characteristic
polynomial Δ(s) equals the characteristic polynomial of the matrix (A − BK₀C):

$$s^n + \alpha_1s^{n-1} + \cdots + \alpha_n = \left|sI - \left(A - BK_0C\right)\right| \qquad (8.63)$$
The output feedback in a state variable framework does not necessarily have sufficient degrees of freedom
to satisfy this design requirement. The output feedback gain K0 has p × q parameters to tune the
n coefficients of the closed-loop characteristic polynomial. In most real systems, the order of the system
n will be much greater than the number of measurements q and/or controls p. The important issue
here is that the output vector gives only a partial view of the state vector.
With full state feedback, a closed-form solution to the pole-placement problem exists under a mildly
restrictive condition on controllability of the system. In the case of single-input systems, the solution is
unique (given earlier in Chapter 7). In the case of multi-input systems, the solution is not unique. In fact,
there is a lot of freedom available in the choice of the state-feedback gain matrix; this freedom is utilized to
serve other objectives on system performance [105].
With partial state feedback (output feedback), a closed-form solution to the pole-placement problem may
not exist. The designer often solves the problem numerically; tuning the output feedback gain matrix
by trial and error to obtain approximate pole-placement solution and checking the acceptability of the
approximation by simulation.
The output feedback law is restricted in its design achievements, while the state-feedback law is able to give
total control over system dynamics. In fact, as we have seen, the design flexibility of the state-feedback law
is supported by deep technical results which guarantee the design properties. However, this design flexibility of
state feedback is achieved because it has been assumed that we can access each state variable. In practice,
this means that we must have a more complicated system where we include an observer which provides
us with the state information. Since an observer incorporates a model of the process, accurate state information
obviously depends on a very accurate representation of the process being controlled. We
know that industrial process models are not usually so well known or accurate. Therefore, inclusion of
an observer is bound to deteriorate the robustness properties of the feedback system.
Therefore, in spite of the excellent design flexibility of state feedback, we are many times forced to look
at alternatives that are less attractive in terms of design flexibility, but are not dependent on the inclusion of an
observer. Constrained state feedback is an interesting alternative. Here, we set the gains of the state-feedback
gain matrix corresponding to unmeasurable states to zero, and try to exploit the remaining gains
to achieve the desired properties of the closed-loop system. This, in fact, may also be viewed as an output
feedback problem where the state x passes through an appropriately selected output matrix C to give the
output variables in the vector y.
The constrained state-feedback control law is not supported by deep technical results and does not
guarantee the design properties; nonetheless, it yields robust feedback control systems for many industrial
control problems. Existence of a control law that stabilizes the feedback system is a prerequisite of
the design algorithm. Unfortunately, general conclusions for the existence of a stabilizing control law with
constrained state feedback cannot be laid down; therefore, one resorts to numerical methods to establish
existence.
In the following, we outline a procedure for the design of a constrained state-feedback control law that
minimizes a quadratic performance index. By constrained state feedback, we mean that not all the parameters
of the matrix K are available for adjustment. Some of them may have zero values (output feedback).
The procedure described below is equally applicable to situations wherein some of the parameters of K
have fixed values.
Let us consider the system (8.60). It is desired to minimize the following performance index:

$$J = \tfrac{1}{2}\int_0^\infty\left(x^TQx + u^TRu\right)dt \qquad (8.64)$$

where Q is an n × n positive definite, real, symmetric, constant matrix, and R is a p × p positive definite, real,
symmetric, constant matrix.
We shall obtain a direct relationship between Lyapunov functions and quadratic performance measures,
and solve the constrained parameter-optimization problem using this relationship. We select the feedback
control configuration described by the control law
$$u = -K_0Cx = -Kx \qquad (8.65)$$

where K is a p × n matrix which involves adjustable parameters. With this control law, the closed-loop
system becomes

$$\dot{x} = Ax - BKx = (A - BK)x \qquad (8.66)$$
Not all the parameters of the matrix K are available for adjustment. Some of them have fixed (zero) values.
We will assume that a matrix K satisfying the imposed constraints on its parameters exists
such that (A − BK) is a stable matrix.
The optimization problem is to determine the values of free parameters of the matrix K so as to minimize
the performance index given by Eqn. (8.64).
Substituting the control vector u from Eqn. (8.65) in the performance index J of Eqn. (8.64), we have

$$J = \tfrac{1}{2}\int_0^\infty\left(x^TQx + x^TK^TRKx\right)dt = \tfrac{1}{2}\int_0^\infty x^T\left(Q + K^TRK\right)x\,dt \qquad (8.67)$$

Let us assume a Lyapunov function

$$V(x(t)) = \tfrac{1}{2}\int_t^\infty x^T\left(Q + K^TRK\right)x\,d\tau$$
Note that the value of the performance index for system trajectory starting at x(0) is V(x(0)).
The time derivative of the Lyapunov function is

$$\dot{V}(x) = \frac{d}{dt}\left[\tfrac{1}{2}\int_t^\infty x^T(\tau)\left[Q + K^TRK\right]x(\tau)\,d\tau\right] = \tfrac{1}{2}x^T(\infty)\left[Q + K^TRK\right]x(\infty) - \tfrac{1}{2}x^T(t)\left[Q + K^TRK\right]x(t)$$

Assuming that the matrix (A − BK) is stable, we have, from Eqn. (8.66),

$$x(\infty) \to 0$$

Therefore,

$$\dot{V}(x) = -\tfrac{1}{2}x^T\left(Q + K^TRK\right)x \qquad (8.68)$$
Since V̇(x) is quadratic in x and the plant equation is linear, let us assume that V(x) is also given by the
quadratic form

$$V(x) = \tfrac{1}{2}x^TPx \qquad (8.69)$$

where P is a positive definite, real, symmetric, constant matrix. Therefore,

$$\dot{V}(x) = \tfrac{1}{2}\left(\dot{x}^TPx + x^TP\dot{x}\right)$$

Substituting for ẋ from Eqn. (8.66), we get

$$\dot{V}(x) = \tfrac{1}{2}x^T\left[(A - BK)^TP + P(A - BK)\right]x$$
Comparison of this result with Eqn. (8.68) gives

$$\tfrac{1}{2}x^T\left[(A - BK)^TP + P(A - BK)\right]x = -\tfrac{1}{2}x^T\left(Q + K^TRK\right)x$$

Since the above equality holds for arbitrary x(t), we have

$$(A - BK)^TP + P(A - BK) + K^TRK + Q = 0 \qquad (8.70)$$
This equation is of the form of the Lyapunov equation defined in Section 8.3. In Eqn. (8.70), we have n²
nonlinear algebraic equations. However, since the n × n matrix P is symmetric, we need to solve only
n(n + 1)/2 equations for the elements pij of the matrix P. The solution will give pij as functions of the
feedback matrix K.
As pointed out earlier, V(x(0)) is the value of the performance index for the system trajectory starting at
x(0). From Eqn. (8.69), we get,
$$J = \tfrac{1}{2}x^T(0)Px(0) \qquad (8.71)$$

A suboptimal control law may be obtained by minimizing J with respect to the available elements kij of
K, i.e., by setting

$$\frac{\partial\left[x^T(0)Px(0)\right]}{\partial k_{ij}} = 0 \qquad (8.72)$$
If, for the suboptimal solution thus obtained, the matrix (A − BK) is stable, then the minimization of J
as per the procedure described above gives the correct result. From Eqn. (8.68), we observe that for a
positive definite Q, V̇(x) < 0 for all x ≠ 0 (note that KᵀRK is non-negative definite). Also, Eqn. (8.69)
shows that V(x) > 0 for all x ≠ 0 if P is positive definite. Therefore, minimization of J with respect to
kij (Eqn. (8.72)) will lead to a stable closed-loop system if the optimal kij result in a positive definite
matrix P.
One would like to examine the existence of a solution to the optimization problem before actually
starting the optimization procedure. For the problem under consideration, the existence of a K that minimizes
J is ensured if, and only if, there exists a K, satisfying the imposed constraints on the parameters, such
that (A − BK) is asymptotically stable. The question of the existence of such a K has, as yet, no straightforward
answer. We resort to numerical trial and error to find a stabilizing matrix K (such a matrix is required
by numerical algorithms for optimization of J [105]). Failure to find one does not mean that a suboptimal
solution does not exist.
Also note that the solution is dependent on the initial condition (Eqn. (8.72)). If a system is to operate satisfactorily
for a range of initial disturbances, it may not be clear which initial condition is the most suitable for optimization.
The dependence on initial conditions can be avoided by averaging the performance obtained for a linearly
independent set of initial conditions. This is equivalent to assuming the initial state x(0) to be a random
variable, uniformly distributed on the surface of an n-dimensional sphere, i.e.,

$$E\{x(0)x^T(0)\} = I$$

where E{·} denotes the expected value.
We define a new performance index
$$\hat{J} = E\{J\} = E\left\{\tfrac{1}{2}x^T(0)Px(0)\right\} \qquad (8.73)$$
$$= \tfrac{1}{2}E\left\{\mathrm{trace}\left(Px(0)x^T(0)\right)\right\} = \tfrac{1}{2}\,\mathrm{trace}\left(P\,E\left\{x(0)x^T(0)\right\}\right) = \tfrac{1}{2}\,\mathrm{trace}\,P$$
Reference [105] describes a numerical algorithm for the minimization of Ĵ. When the feedback matrix K
is unconstrained, the resulting value of J is optimal; J(optimal) ≤ J(suboptimal).
As we have seen earlier in this chapter, the optimal solution is independent of initial conditions, and it is
computationally convenient to use the Riccati equation (8.41) for obtaining the optimal control law (8.40).
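The suboptimal procedure above lends itself to a direct numerical sketch. In the following illustration (the plant, weights, and constrained gain structure are assumed for demonstration only), P is obtained from the Lyapunov equation (8.70) and Ĵ = ½ trace P is minimized over the single free gain:

```python
# Sketch of constrained (suboptimal) gain design: K = [k1, 0], with the gain
# on the inaccessible state fixed at zero; minimize J_hat = trace(P)/2.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize_scalar

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def J_hat(k1):
    K = np.array([[k1, 0.0]])                 # constrained gain structure
    Acl = A - B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf                         # K must stabilize (A - BK)
    # Solve (A - BK)^T P + P (A - BK) = -(Q + K^T R K)   ... Eqn (8.70)
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return 0.5 * np.trace(P)

res = minimize_scalar(J_hat, bounds=(0.0, 20.0), method="bounded")
print("suboptimal k1 =", res.x, " J_hat =", res.fun)
```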
Consider now the discretized model of the given plant:

$$x(k+1) = Fx(k) + Gu(k);\qquad x(0) \triangleq x_0 \qquad (8.74)$$
$$y(k) = Cx(k)$$

where x is the n × 1 state vector, u is the p × 1 input vector, y is the q × 1 output vector; F, G, and C are,
respectively, n × n, n × p, and q × n real constant matrices; and k = 0, 1, 2, ....
We shall be interested in selecting the controls u(k); k = 0, 1, ..., which minimize a performance index
of the form

$$J = \tfrac{1}{2}\sum_{k=0}^{\infty}\left[x^T(k)Qx(k) + u^T(k)Ru(k)\right] \qquad (8.75)$$
where Q is an n × n positive definite, real, symmetric, constant matrix, and R is a p × p positive definite,
real, symmetric, constant matrix.
The constrained state-feedback control law is

$$u(k) = -K_0Cx(k) = -Kx(k) \qquad (8.76)$$

where K is a p × n constant matrix.
With the linear feedback control law (8.76), the closed-loop system is described by

$$x(k+1) = (F - GK)x(k) \qquad (8.77)$$

We will assume that a matrix K exists such that (F − GK) is a stable matrix.
Substituting for the control vector u(k) from Eqn. (8.76) in the performance index J given by Eqn. (8.75),
we get

$$J = \tfrac{1}{2}\sum_{k=0}^{\infty}x^T(k)\left[Q + K^TRK\right]x(k) \qquad (8.78)$$

Let us assume a Lyapunov function

$$V(x(k)) = \tfrac{1}{2}\sum_{i=k}^{\infty}x^T(i)\left[Q + K^TRK\right]x(i) \qquad (8.79)$$
Note that the value of the performance index for the system trajectory starting at x(0) is V(x(0)). The first
forward difference is

$$\Delta V(x(k)) = V(x(k+1)) - V(x(k)) = -\tfrac{1}{2}x^T(k)\left[Q + K^TRK\right]x(k) \qquad (8.80)$$

(Note that x(∞) has been taken as zero, under the assumption of asymptotic stability of the closed-loop
system.)
Since ΔV(x(k)) is quadratic in x(k) and the plant equation is linear, let us assume that V(x(k)) is also given
by the quadratic form

$$V(x(k)) = \tfrac{1}{2}x^T(k)Px(k) \qquad (8.81)$$

where P is a positive definite, real, symmetric, constant matrix.
Therefore,

$$\Delta V(x(k)) = \tfrac{1}{2}x^T(k+1)Px(k+1) - \tfrac{1}{2}x^T(k)Px(k)$$
Assuming x(0) to be a random variable, uniformly distributed on the surface of an n-dimensional sphere, the
problem reduces to the minimization of

$$\hat{J} = \tfrac{1}{2}\,\mathrm{trace}\,P \qquad (8.84)$$
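A discrete counterpart can be sketched the same way (again with assumed matrices, not from the text): P follows from the discrete Lyapunov equation implied by Eqns (8.80)–(8.81), and Ĵ = ½ trace P evaluates the constrained gain.

```python
# Evaluate J_hat = trace(P)/2 for a constrained discrete-time gain.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

F = np.array([[1.0, 0.1], [0.0, 0.9]])
G = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K = np.array([[2.0, 0.0]])          # gain on the inaccessible state set to zero

Fcl = F - G @ K                     # must be a stable matrix
# Solve (F - GK)^T P (F - GK) - P = -(Q + K^T R K)
P = solve_discrete_lyapunov(Fcl.T, Q + K.T @ R @ K)
print("J_hat =", 0.5 * np.trace(P))
```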
REVIEW EXAMPLES
Fig. 8.16 Feedback system: r → K/(s + 1) → x3 → 1/(s + 2) → x2 → 1/s → x1, with unity negative feedback
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix} = \begin{bmatrix}0 & 1 & 0\\ 0 & -2 & 1\\ -K & 0 & -1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} + \begin{bmatrix}0\\ 0\\ K\end{bmatrix}r$$
The matrix Q could be chosen as the identity matrix. However, we make the following choice for Q:

$$Q = \begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\end{bmatrix}$$

This is a positive semidefinite matrix which satisfies the condition (8.10), as seen below:

$$Q = \begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\end{bmatrix} = \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}\begin{bmatrix}0 & 0 & 1\end{bmatrix} = H^TH$$

$$\rho\begin{bmatrix}H\\ HA\\ HA^2\end{bmatrix} = \rho\begin{bmatrix}0 & 0 & 1\\ -K & 0 & -1\\ K & -K & 1\end{bmatrix} = 3$$

With this choice of Q, as we shall see, manipulation of the Lyapunov equation for its analytical solution
becomes easier.
Now let us solve the Lyapunov equation

$$A^TP + PA = -Q$$

or

$$\begin{bmatrix}0 & 0 & -K\\ 1 & -2 & 0\\ 0 & 1 & -1\end{bmatrix}\begin{bmatrix}p_{11} & p_{12} & p_{13}\\ p_{12} & p_{22} & p_{23}\\ p_{13} & p_{23} & p_{33}\end{bmatrix} + \begin{bmatrix}p_{11} & p_{12} & p_{13}\\ p_{12} & p_{22} & p_{23}\\ p_{13} & p_{23} & p_{33}\end{bmatrix}\begin{bmatrix}0 & 1 & 0\\ 0 & -2 & 1\\ -K & 0 & -1\end{bmatrix} = \begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -1\end{bmatrix}$$

Solving, we obtain

$$P = \begin{bmatrix}\dfrac{K^2 + 12K}{12 - 2K} & \dfrac{6K}{12 - 2K} & 0\\[8pt] \dfrac{6K}{12 - 2K} & \dfrac{3K}{12 - 2K} & \dfrac{K}{12 - 2K}\\[8pt] 0 & \dfrac{K}{12 - 2K} & \dfrac{6}{12 - 2K}\end{bmatrix}$$
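The positive definiteness of P can be checked numerically for sample values of K; the following is my verification of the analytical solution above, using Sylvester's test on the Lyapunov solution:

```python
# Solve A^T P + P A = -Q for sample gains and test positive definiteness
# of P via its leading principal minors (Sylvester's test).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def P_of_K(K):
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, -2.0, 1.0],
                  [-K, 0.0, -1.0]])
    Q = np.diag([0.0, 0.0, 1.0])
    return solve_continuous_lyapunov(A.T, -Q)

for K in (1.0, 5.0, 7.0):
    P = P_of_K(K)
    minors = [np.linalg.det(P[:i, :i]) for i in (1, 2, 3)]
    print(K, all(m > 0 for m in minors))   # P > 0 iff all minors positive
```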
Solution The equilibrium state $x^e = \begin{bmatrix}x_1^e\\ x_2^e\end{bmatrix}$ can be determined from the equations

$$x_1^e = 2x_1^e + 0.5x_2^e - 5$$
$$x_2^e = 0.8x_2^e + 2$$

Solving, we get

$$\begin{bmatrix}x_1^e\\ x_2^e\end{bmatrix} = \begin{bmatrix}0\\ 10\end{bmatrix}$$
Define

$$\bar{x}_1(k) = x_1(k) - x_1^e;\qquad \bar{x}_2(k) = x_2(k) - x_2^e$$

In terms of the shifted variables, the system equations become

$$\begin{bmatrix}\bar{x}_1(k+1)\\ \bar{x}_2(k+1)\end{bmatrix} = \begin{bmatrix}2 & 0.5\\ 0 & 0.8\end{bmatrix}\begin{bmatrix}\bar{x}_1(k)\\ \bar{x}_2(k)\end{bmatrix}$$

or

$$\bar{x}(k+1) = F\bar{x}(k) \qquad (8.85)$$

Clearly, x̄ = 0 is the equilibrium state of this autonomous system.
Let us choose a Lyapunov function

$$V(\bar{x}) = \bar{x}^TP\bar{x}$$

where P is to be determined from the Lyapunov equation

$$F^TPF - P = -I$$

or

$$\begin{bmatrix}2 & 0\\ 0.5 & 0.8\end{bmatrix}\begin{bmatrix}p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}\begin{bmatrix}2 & 0.5\\ 0 & 0.8\end{bmatrix} - \begin{bmatrix}p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix} = \begin{bmatrix}-1 & 0\\ 0 & -1\end{bmatrix}$$

Solving for the pij's, we get

$$P = \begin{bmatrix}-\tfrac{1}{3} & \tfrac{5}{9}\\[4pt] \tfrac{5}{9} & \tfrac{1225}{324}\end{bmatrix}$$
By applying Sylvester's test for positive definiteness, we find that the matrix P is not positive definite.
Therefore, the origin of the system (8.85) is not asymptotically stable.
In terms of the original state variables, we can say that the equilibrium state
xe = [0 10]ᵀ of the given system is not asymptotically stable.
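A numerical check (my addition) confirms this conclusion: the discrete Lyapunov solution for F is indefinite, and F indeed has an eigenvalue outside the unit circle.

```python
# Verify: solve F^T P F - P = -I and inspect P and the eigenvalues of F.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

F = np.array([[2.0, 0.5],
              [0.0, 0.8]])
P = solve_discrete_lyapunov(F.T, np.eye(2))    # P = F^T P F + I
print("P =", P)
print("eigenvalues of P:", np.linalg.eigvals(P))   # mixed signs -> not p.d.
print("eigenvalues of F:", np.linalg.eigvals(F))   # 2.0 lies outside |z| = 1
```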
Fig. 8.17 Control configuration with rate feedback sk2
$$J = \int_0^\infty\left(x_1^2 + 0.25u^2\right)dt$$

is minimized.
Note that

$$A = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix};\quad B = \begin{bmatrix}0\\ 100\end{bmatrix};\quad Q = \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix};\quad R = 0.5$$
The matrix Riccati equation is

$$A^TP + PA - PBR^{-1}B^TP + Q = 0$$

or

$$\begin{bmatrix}0 & 0\\ 1 & 0\end{bmatrix}\begin{bmatrix}p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix} + \begin{bmatrix}p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix} - 2\times10^4\begin{bmatrix}p_{12}\\ p_{22}\end{bmatrix}\begin{bmatrix}p_{12} & p_{22}\end{bmatrix} + \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix} = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}$$

Solving for p11, p12, and p22, requiring P to be positive definite, we obtain

$$P = \begin{bmatrix}2\times10^{-1} & 10^{-2}\\ 10^{-2} & 10^{-3}\end{bmatrix}$$
The feedback gain matrix is

$$K = R^{-1}B^TP = \frac{1}{0.5}\begin{bmatrix}0 & 100\end{bmatrix}\begin{bmatrix}2\times10^{-1} & 10^{-2}\\ 10^{-2} & 10^{-3}\end{bmatrix} = \begin{bmatrix}2 & 0.2\end{bmatrix}$$
Fig. 8.18 Position servo: θr → k1 → 20/(s + 2) → x2 = θ̇ → 1/s → x1 = θ, with rate feedback k2
It is desired to regulate the angular position to a unit-step function θr. Find the optimum values of the
gains k1 and k2 that minimize

$$J = \int_0^\infty\left[(x_1 - \theta_r)^2 + u^2\right]dt$$
Solution The state variable description of the system, obtained from Fig. 8.18, is given by

$$\dot{x} = \begin{bmatrix}0 & 1\\ 0 & -2\end{bmatrix}x + \begin{bmatrix}0\\ 20\end{bmatrix}u;\qquad y = x_1$$

In terms of the shifted state variables

$$\bar{x}_1 = x_1 - \theta_r;\qquad \bar{x}_2 = x_2,$$
the performance index becomes

$$J = \int_0^\infty\left(\bar{x}_1^2 + u^2\right)dt$$

For this J,

$$Q = \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix};\qquad R = 2$$

The matrix Q is positive semidefinite:

$$Q = H^TH = \begin{bmatrix}\sqrt{2}\\ 0\end{bmatrix}\begin{bmatrix}\sqrt{2} & 0\end{bmatrix}$$

The pair (A, H) is completely observable. Also, the pair (A, B) is completely controllable. Therefore, the
sufficient conditions for the existence of an asymptotically stable optimal closed-loop system are satisfied.
The matrix Riccati equation is

$$A^TP + PA - PBR^{-1}B^TP + Q = 0$$

or

$$\begin{bmatrix}0 & 0\\ 1 & -2\end{bmatrix}\begin{bmatrix}p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix} + \begin{bmatrix}p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}\begin{bmatrix}0 & 1\\ 0 & -2\end{bmatrix} - 200\begin{bmatrix}p_{12}\\ p_{22}\end{bmatrix}\begin{bmatrix}p_{12} & p_{22}\end{bmatrix} + \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix} = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}$$
$$J = \sum_{k=0}^{\infty}\left[e^2(k) + 0.75u^2(k)\right]$$

is minimized.
Solution From Fig. 8.19a, we have

$$G_{h0}(s)G(s) = \frac{1 - e^{-sT}}{s^2}$$

Therefore,

$$G_{h0}G(z) = \left(1 - z^{-1}\right)\mathcal{Z}\left[\frac{1}{s^2}\right] = \frac{T}{z-1} = \frac{1}{z-1}\quad\text{for } T = 1\text{ sec}$$

Figure 8.19b shows an equivalent block diagram of the sampled-data system.

Fig. 8.19 (a) Sampled-data system (b) Equivalent block diagram
$$J = \sum_{k=0}^{\infty}\left[x^2(k) + 0.75u^2(k)\right]$$

For this problem,

F = 1, G = 1, Q = 2, R = 1.5

The Riccati equation is (refer to Eqn. (8.58))
PROBLEMS
8.1 Consider the linear system

$$\dot{x} = \begin{bmatrix}0 & 1\\ -1 & -2\end{bmatrix}x$$

Using Lyapunov analysis, determine the stability of the equilibrium state.
8.2 Using Lyapunov analysis, determine the stability of the equilibrium state of the system ẋ = Ax with

$$A = \begin{bmatrix}0 & 1\\ -1 & 1\end{bmatrix}$$
8.3 Consider the system described by the equations

$$\dot{x}_1 = x_2;\qquad \dot{x}_2 = -x_1 - x_2 + 2$$

Investigate the stability of the equilibrium state. Use Lyapunov analysis.
8.4 A linear system is described by the state equation ẋ = Ax, where

$$A = \begin{bmatrix}-4K & 4K\\ 2K & -6K\end{bmatrix}$$

Using Lyapunov analysis, find restrictions on the parameter K to guarantee the stability of the
system.
8.5 Consider the system of Fig. P8.5. Find the restrictions on the parameter K to guarantee system
stability. Use Lyapunov’s analysis.
Fig. P8.5 Negative-feedback loop of three cascaded blocks: K/(s + 1) → x1 → 1/(s + 1) → x2 → 1/(s + 1) → x3
8.6 Consider the linear system

$$x(k+1) = \begin{bmatrix}0.5 & 1\\ -1 & -1\end{bmatrix}x(k)$$

Using Lyapunov analysis, determine the stability of the equilibrium state.
8.7 Using Lyapunov analysis, determine the stability of the equilibrium state of the system x(k + 1) = Fx(k) with

$$F = \begin{bmatrix}0 & 0.5\\ -0.5 & -1\end{bmatrix}$$
8.8 Consider the system shown in Fig. P8.8. Determine the optimal feedback gain matrix K, such that
the following performance index is minimized:

$$J = \tfrac{1}{2}\int_0^\infty\left(x^TQx + 2u^2\right)dt;\qquad Q = \begin{bmatrix}2 & 0\\ 0 & 2\end{bmatrix}$$
Fig. P8.8 Double integrator: u → 1/s → x2 → 1/s → x1, with state feedback −K
8.9 The matrix Q in Problem 8.8 is replaced by the following positive semidefinite matrix:

$$Q = \begin{bmatrix}2 & 0\\ 0 & 0\end{bmatrix}$$
Show that sufficient conditions for the existence of the asymptotically stable optimal control
solution are satisfied. Find the optimal feedback matrix K.
8.10 Test whether sufficient conditions for the existence of the asymptotically stable optimal control
solution for the plant

$$\dot{x} = \begin{bmatrix}0 & 0\\ 0 & 1\end{bmatrix}x + \begin{bmatrix}1\\ 1\end{bmatrix}u$$

with the performance index

$$J = \int_0^\infty\left(x_1^2 + u^2\right)dt$$
are satisfied. Find the optimal closed-loop system, if it exists, and determine its stability.
8.11 Consider the plant

$$\dot{x} = \begin{bmatrix}-1 & 0\\ 1 & 0\end{bmatrix}x + \begin{bmatrix}1\\ 0\end{bmatrix}u$$

with the performance index

$$J = \int_0^\infty\left(x_1^2 + u^2\right)dt$$
Test whether an asymptotically stable optimal solution exists for this control problem.
8.12 Consider the system described by the state model

$$\dot{x} = \begin{bmatrix}0 & 1\\ 0 & -2\end{bmatrix}x + \begin{bmatrix}0\\ 20\end{bmatrix}u;\qquad y = \begin{bmatrix}1 & 0\end{bmatrix}x$$

Find the optimal control law that minimizes

$$J = \tfrac{1}{2}\int_0^\infty\left[(y(t) - 1)^2 + u^2\right]dt$$
$$J = \int_0^\infty\left(y_1^2 + y_2^2 + u^2\right)dt$$

$$J = \int_0^\infty\left(x_1^2 + x_2^2 + u^2\right)dt$$
Choose a control law that minimizes J. Design a state observer for implementation of the control
law; both the poles of the state observer are required to lie at s = – 3.
8.15 Figure P8.15 shows the optimal control configuration of a position servo system. Both the state
variables—angular position θ and angular velocity θ̇—are assumed to be measurable.
It is desired to regulate the angular position to a constant value θr = 5. Find the optimum values of the
gains k1 and k2 that minimize

$$J = \int_0^\infty\left[(x_1 - \theta_r)^2 + \tfrac{1}{2}u^2\right]dt$$

What is the minimum value of J?

Fig. P8.15 Position servo: θr → k1 → 1/(s + 5) → x2 = θ̇ → 1/s → x1 = θ, with rate feedback k2
8.16 Consider now that, for the position servo system of Problem 8.15, the performance index is
modified so that the control term is weighted by a factor r. For r = 1/10, 1/100, and 1/1000, find the
optimal control law that minimizes the given J. Determine the closed-loop poles for the various
values of r and comment on your result.
8.17 In the control scheme of Fig. P8.17, the control law of the form u = – Ky + Nyr has been used; yr
is the constant command input.
Fig. P8.17 Control scheme u = −Ky + Nyr for the plant 1/(s + 1), with disturbance input w
(a) Find K such that

$$J = \int_0^\infty\left(\tilde{y}^2 + \tilde{u}^2\right)dt$$

is minimized; ỹ and ũ are, respectively, the deviations of the output and the control signal
from their steady-state values.
(b) Choose N so that the system has zero steady-state error, i.e., y(∞) = yr.
(c) Show that the steady-state error to a constant disturbance input w is nonzero for the above
choice of N.
(d) Add to the plant equation an integrator equation (z(t) being the state of the integrator),

$$\dot{z}(t) = y(t) - y_r$$

and select gains K and K1 so that, if u = −Ky − K1z, the performance index

$$J = \int_0^\infty\left(\tilde{y}^2 + \tilde{z}^2 + \tilde{u}^2\right)dt$$

is minimized.
(e) Draw a block diagram of the control scheme employing integral control, and show that the
steady-state error to a constant disturbance w is zero.
8.18 Consider a plant consisting of a dc motor, the shaft of which has the angular velocity ω(t), and
which is driven by the input voltage u(t). The describing equation is

$$\dot{\omega}(t) = -0.5\omega(t) + 100u(t) = A\omega(t) + Bu(t)$$

It is desired to regulate the angular velocity at the desired value ω0 = r.
(a) Use the control law of the form u(t) = −Kω(t) + Nr. Choose K that minimizes

$$J = \int_0^\infty\left(\tilde{\omega}^2 + 100\tilde{u}^2\right)dt$$

where ω̃ and ũ are, respectively, the deviations of the output and the control signal from their steady-state values.
$$J = \int_0^\infty\left(\tilde{\omega}^2 + \tilde{z}^2 + 100\tilde{u}^2\right)dt$$

is minimized. Show that the resulting system will have ω(∞) = r, no matter how the matrix A
changes, so long as the closed-loop system remains asymptotically stable.
8.19 Consider the system

$$x(k+1) = 0.368x(k) + 0.632u(k)$$

Using the discrete matrix Riccati equation, find the control sequence

$$u(k) = -Kx(k)$$

that minimizes the performance index

$$J = \sum_{k=0}^{\infty}\left[x^2(k) + u^2(k)\right]$$
$$J = \tfrac{1}{2}\sum_{k=0}^{\infty}\left[\bar{y}^2(k) + \bar{u}^2(k)\right]$$

is minimized; ȳ and ū are, respectively, the deviations of the output and the control signal
from their steady-state values.
(b) Find the steady-state value of the output.
(c) To eliminate the steady-state error, introduce a feedforward controller. The control scheme now
becomes u(k) = −Ky(k) + Nr. Find the value of N so that y(∞) = r.
Fig. P8.20
$$J = \tfrac{1}{2}\sum_{k=0}^{\infty}\left[\bar{x}^2(k) + \bar{u}^2(k)\right]$$

is minimized; r is a constant reference input, and x̄ and ū are, respectively, the deviations in the
state and control signal from their steady-state values.
(b) Find N so that x(∞) = r, i.e., there is no steady-state error.
(c) Show that the property of zero steady-state error is not robust with respect to changes in F.
(d) In order to obtain robust steady-state accuracy with respect to changes in F, we may use
integral control in addition to state feedback. Describe through block diagram, the structure
of such a control scheme.
Part III
Nonlinear Control Systems: Conventional and
Intelligent
In this part of the book, we will explore tools and techniques for attacking control problems that contain
significant nonlinearities.
Nonlinear control system design has been dominated by linear control techniques, which rely on the key
assumption of a small range of operation for the linear model to be valid. This tradition has produced
many reliable and effective control systems. However, the demand for nonlinear control methodologies
has recently been increasing for several reasons.
First, modern technology, such as applied in high-performance aircraft and high-speed high-accuracy
robots, demands control systems with much more stringent design specifications, which are able to
handle nonlinearities of the controlled systems more accurately. When the required operation range is
large, a linear controller is likely to perform very poorly or to be unstable, because the nonlinearities in
the system cannot be properly compensated for. Nonlinear controllers, on the other hand, may directly
handle the nonlinearities in large range operation. Also, in control systems there are many nonlinearities
whose discontinuous nature does not allow linear approximation.
Second, controlled systems must be able to reject disturbances and uncertainties confronted in real-world
applications. In designing linear controllers, it is usually necessary to assume that the parameters of the
system model are reasonably well known. However, many control problems involve uncertainties in
the model parameters. This may be due to a slow time variation of the parameters (e.g., of ambient air
pressure during an aircraft flight), or to an abrupt change in parameters (e.g., in the inertial parameters
of a robot when a new object is grasped). A linear controller, based on inaccurate values of the model
parameters, may exhibit significant performance degradation or even instability. Nonlinearities can be
intentionally introduced into the controller part of a control system, so that model uncertainties can be
tolerated.
Third, advances in computer technology have made the implementation of nonlinear controllers a
relatively simple task. The challenge for control design is to fully utilize this technology to achieve the
best control system performance possible.
Thus, the subject of nonlinear control is an important area of automatic control. Learning basic techniques
of nonlinear control analysis and design can significantly enhance the ability of control engineers to deal
with practical control problems, effectively. It also provides a sharper understanding of the real world,
which is inherently nonlinear.
560 Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
Phase-Plane Analysis
Phase-plane analysis, discussed in Chapter 9, is a method of studying second-order nonlinear systems.
Its basic idea is to solve a second-order differential equation and graphically display the result as a
family of system motion trajectories on a two-dimensional plane, called the phase plane, which allows
us to visually observe the motion patterns of the system. While phase-plane analysis has a number of
important advantages, it has the fundamental disadvantage of being applicable only to systems which
can be well approximated by second-order dynamics. Because of its graphical nature, it is frequently
used to provide intuitive insights about nonlinear effects.
Lyapunov Theory
In using the Lyapunov theory to analyze the stability of a nonlinear system, the idea is to construct
a scalar energy-like function (a Lyapunov function) for the system, and to see whether it decreases.
The power of this method comes from its generality; it is applicable to all kinds of control systems.
Conversely, the limitation of the method lies in the fact that it is often difficult to find a Lyapunov
function for a given system.
Although Lyapunov’s method is originally a method of stability analysis, it can be used for synthesis
problems. One important application is the design of nonlinear controllers. The idea is to somehow
formulate a scalar positive definite function of the system states, and then choose a control law to make
this function decrease. A nonlinear control system thus designed, will be guaranteed to be stable. Such a
design approach has been used to solve many complex design problems, e.g., in adaptive control and in
sliding mode control (discussed in Chapter 10). The basic concepts of Lyapunov theory have earlier been
presented in Chapter 8. Lyapunov theory is elaborated in Chapter 9, wherein guidelines for the construction
of Lyapunov functions for nonlinear systems are given.
Describing Function Method
The describing function method, discussed in Chapter 9, is an approximate technique for studying
nonlinear systems. The basic idea of the method is to approximate the nonlinear components in nonlinear
control systems by linear "equivalents", and then use frequency-domain techniques to analyze the
resulting systems. Unlike the phase-plane method, it is not restricted to second-order systems. Rather,
the accuracy of describing function analysis improves with an increase in the order of the system.
Unlike the Lyapunov method, whose applicability to a specific system hinges on the success of a trial-and-error
search for a Lyapunov function, its application is straightforward for a specific class of nonlinear
systems.
Trial-and-Error
Based on the analysis methods, one can use trial-and-error to synthesize controllers. The idea is to
use the analysis tools to guide the search for a controller, which can then be justified by analysis and
simulations. The phase-plane method, the describing function method, and Lyapunov analysis can all
be used for this purpose. Experience and intuition are critical in this process. However, for complex
systems, trial-and-error often fails.
Feedback Linearization
Feedback linearization, discussed in Chapter 10, can be used as a nonlinear design methodology. The
basic idea is to first transform a nonlinear system into a linear system using feedback, and then use the
well-known and powerful linear design techniques to complete the control design. The approach has
been used to solve a number of practical nonlinear control problems. It applies to important classes of
nonlinear systems.
Adaptive Control
Adaptive control is an approach to dealing with uncertain systems or time-varying systems. Although the
term "adaptive" can have broad meanings, current adaptive control designs apply mainly to systems with
known dynamic structure, but unknown constant or slowly-varying parameters. Adaptive controllers,
whether developed for linear systems or for nonlinear systems, are inherently nonlinear.
Systematic theories exist for the adaptive control of linear systems. Frequently used adaptive control
structures are discussed in Chapter 10.
Gain-Scheduling
Gain-scheduling is an attempt to apply the well-developed linear control methodology to the control of
nonlinear systems. The idea of gain-scheduling is to select a number of operating points which cover
the range of system operation. Then, at each of these points, the designer makes a linear time-invariant
approximation to the plant dynamics, and designs a linear controller for each linearized plant. Between
operating points, the parameters of the compensators are then interpolated, or scheduled; thus resulting
in a global compensator.
Intelligent Control
In order to handle the complexities of nonlinearities and accommodate the demand for high-performance
control systems, intelligent control takes advantage of the computational structures—fuzzy systems
and neural networks—which are inherently nonlinear; a very important property, particularly if the
underlying physical mechanisms for the systems are highly nonlinear. Whereas classical control is rooted
in the theory of differential equations, intelligent control is largely rule-based because the dependencies
involved in its deployment are much too complex to permit an analytical representation. To deal with
such dependencies, the mathematics of fuzzy systems and neural networks integrates the experience and
knowledge gained in the operation of a similar plant into the control algorithm. The power of fuzzy systems
lies in their ability (i) to quantify linguistic inputs, and (ii) to quickly give a working approximation of
complex, and often unknown, system input-output rules. The power of neural networks is in their ability
to learn from data. There is a natural synergy between neural networks and fuzzy systems that makes
their hybridization a powerful tool for intelligent control and other applications. Intelligent control is one
of the most serious candidates for the future control of the large class of nonlinear, partially known, and
time-varying systems.
With no agreed-upon scientific definition of intelligence, and due to space limitations, we will not venture
into a discussion of what intelligence is. Rather, we will confine our brief exposition in Chapters 11–14
to intelligent machines—neural networks, support vector machines, fuzzy inference systems, genetic
algorithms, reinforcement learning—in the context of applications in control.
Chapter 9
Nonlinear Systems Analysis
9.1 INTRODUCTION
Because nonlinear systems can have much richer and more complex behaviors than linear systems, their analysis
is much more difficult. Mathematically, this is reflected in two aspects. Firstly, nonlinear equations,
unlike linear ones, cannot, in general, be solved analytically, and therefore, a complete understanding of
the behavior of a nonlinear system is very difficult. Secondly, powerful mathematical tools like Laplace
and Fourier transforms do not apply to nonlinear systems. As a result, there are no systematic tools for
predicting the behavior of nonlinear systems. Instead, there is a rich inventory of powerful analysis tools,
each best applicable to a particular class of nonlinear control problems [125–129].
Perhaps the single most valuable asset to the field of engineering is the simulation tool—constructing a
model of the proposed or actual system and using a numerical solution of the model to reveal the behavior
of the system. Simulation is the only general method of analysis applicable to finding solutions of linear
and nonlinear differential and difference equations. Of course, simulation finds specific solutions; that is,
solutions to the equations with specific inputs, initial conditions, and parametric conditions. It is for this
reason that simulation is not a substitute for other forms of analysis. Important properties such as stability
and conditional stability are not proven with simulations. When the complexity of a system precludes the
use of any analytical approach to establish proof of stability, simulations will be the only way to obtain
necessary information for design purposes. A partial list of the simulation programs available today is
contained in references [151–154].
This chapter also does not provide a magic solution to the analysis problem. In fact, no universal analytical
technique exists that can meet our demands for analyzing the effects of nonlinearities. Our focus in this
chapter is only on some important categories of nonlinear systems for which significant analysis (and
design) can be done.
For the so-called separable systems, which comprise a linear part defined by its transfer function, and
a nonlinear part defined by a time-independent relationship between its input and output variables, the
describing function method is most practically useful for analysis. It is an approximate method but
experience with real systems and computer simulation results shows adequate accuracy in many cases.
Basically, the method is an approximate extension of frequency response methods (including Nyquist
stability criterion) to nonlinear systems.
In terms of mathematical properties, nonlinearities may be categorized as continuous and discontinuous.
Because discontinuous nonlinearities cannot be locally approximated by linear functions, they are also
called “hard” nonlinearities. Hard nonlinearities (such as saturation, backlash, or coulomb friction) are
commonly found in control systems, both in small range and large range operations. Whether a system
in small range operation should be regarded as nonlinear or linear depends on the magnitude of the hard
nonlinearities and on the extent of their effects on the system performance.
The continuous or so-called “soft” nonlinearities are present in every control system, though not visible
because these are not separable. Throughout the book, we have neglected these nonlinearities in our
derivations of transfer function and state variable models. For example, we have assumed linear restoring
force of a spring, a constant damping coefficient independent of the position of the mass, etc. In practice,
none of these assumptions is true for large-range operation. Also, there are situations, not covered
in this book, wherein the linearity assumption gives too small a range of operation to be useful; linear
design methods cannot be applied to such systems. If the controlled systems are not too complex and
the performance requirements are not too stringent, the linearity assumptions made in this book give
satisfactory results in practice.
Describing function analysis is applicable to separable hard nonlinearities. For this category of nonlinear
systems, as we shall see later in this chapter, the predictions of describing function analysis usually are a
good approximation to actual behavior when the linear part of the system provides a sufficiently strong
filtering effect. Filtering characteristics of the linear part of a system improve as the order of the system
goes up. The ‘low pass filtering’ requirement is never completely satisfied; for this reason, the describing
function method is mainly used for stability analysis and is not directly applied to the optimization of
system design.
Phase-Plane Analysis
Another practically useful method for nonlinear system analysis is the phase-plane method. While phase-
plane analysis does not suffer from any approximations and hence can be used for stability analysis as
well as optimization of system design, its main limitation is that it is applicable to systems which can be
well approximated by second-order dynamics. Its basic idea is to solve the second-order differential equation
and graphically display the result as a family of system motion trajectories on a two-dimensional plane,
called the phase plane, which allows us to visually observe the motion patterns of the system. The
method is equally applicable to both hard and soft nonlinearities.
The aim of this chapter is to introduce the two classical, yet practically important tools—the describing
function method and the phase-plane method—for a class of nonlinear systems. The two methods are
complementary to a large extent, each being available for the study of the systems which are most likely
to be beyond the scope of the other. The phase-plane analysis applies primarily to systems described by
second-order differential equations. Systems of order higher than the second are likely to be well filtered and
tractable by the describing function method.
The use of Lyapunov functions for stability analysis of nonlinear systems is also given in this chapter.
discontinuous jump to point B. The response follows the curve to point A for further decrease in
frequency. Observe from this description that the response never actually follows the segment CF.
This portion of the curve represents a condition of unstable equilibrium.
[Figure: a feedback control system, with controller, actuator and plant in the forward path from r(t) to y(t), and a sensor in the feedback path]

Saturation
Saturation is probably the most commonly encountered nonlinearity in control systems. It is often
associated with amplifiers and actuators. In transistor amplifiers, the output varies linearly with the
input only for small input amplitudes. When the input amplitude moves out of the linear range of the
amplifier, the output changes very little and stays close to its maximum value. Figure 9.4a shows a linear-
segmented approximation of saturation nonlinearity.
Most actuators display saturation characteristics. For example, the output torque of a servo motor
cannot increase infinitely, and tends to saturate due to the properties of the magnetic material. Similarly,
valve-controlled hydraulic actuators are saturated by the maximum flow rate.
Deadzone
A deadzone nonlinearity may occur in sensors, amplifiers and actuators. In a dc motor, we assume that
any voltage applied to the armature windings will cause the armature to rotate if the field current is
maintained constant. In reality, due to static friction at the motor shaft, rotation will occur only if the
torque provided by the motor is sufficiently large. This corresponds to a so-called deadzone for small
voltage signals. Similar deadzone phenomena occur in valve-controlled pneumatic and hydraulic
actuators. Figure 9.4b shows linear-segmented approximation of deadzone nonlinearity.
Fig. 9.4 [Common nonlinearities: (a) saturation; (b) deadzone; (c) backlash; (d) Coulomb friction; (e) on–off controllers]
Backlash
A backlash nonlinearity commonly occurs in mechanical components of control systems. In gear trains,
small gaps exist between a pair of mating gears (refer to Fig. 9.4c). As a result, when the driving gear
rotates a smaller angle than the gap H, the driven gear does not move at all, which corresponds to the
deadzone (OA segment in Fig. 9.4c); after contact has been established between the two gears, the driven
gear follows the rotation of the driving gear in a linear fashion (AB segment). When the driving gear
rotates in the reverse direction, by a distance of 2H, the driven gear again does not move, corresponding
to the segment BC in Fig. 9.4c. After the contact between the two gears is re-established, the driven gear
linearly follows the rotation of the driving gear in the reverse direction (CD segment). Therefore, if the
driving gear is in periodic motion, the driven gear will move in the fashion represented by the closed
path EBCD.
A critical feature of backlash, a form of hysteresis, is its multivalued nature. Corresponding to each
input, two output values are possible; which one of the two occurs depends on the history of the input.
Coulomb Friction
In any system where there is a relative motion between contacting surfaces, there are several types of
friction: all of them nonlinear—except the viscous components. Coulomb friction is, in essence, a drag
(reaction) force which opposes motion, but is essentially constant in magnitude, regardless of velocity
(Fig. 9.4d). The common example is an electric motor, in which we find Coulomb friction drag due to
the rubbing contact between the brushes and the commutator.
In this book we have primarily covered the following three modes of control:
(i) proportional control;
(ii) integral control; and
(iii) derivative control.
Another important mode of feedback control is the on–off control. This class of controllers has only two
fixed states rather than a continuous output. In its wider application, the states of an on–off controller may
not, however, be simply on and off, but could represent any two values of a control variable. Oscillatory
behavior is a typical response characteristic of a system under two-position control, also called bang-
bang control. The oscillatory behavior may be avoided using a three-position control (on–off controller
with a deadzone). Figure 9.4e shows the characteristics of on–off controllers.
The on–off mode of control results in a variable structure system whose structure changes in accordance
with the current value of its state. A variable structure system can be viewed as a system composed of
independent structures, together with a switching logic between each of the structures. With appropriate
switching logic, a variable structure system can exploit the desirable properties of each of the structures
the system is composed of. Even more, a variable structure system may have a property that is not a
property of any of its structures. The variable structure sliding mode control law is usually implemented
on a computer. The reader will be exposed to simple variable structure systems in this chapter; details to
follow in Chapter 10.
We may classify the nonlinearities as inherent and intentional. Inherent nonlinearities naturally come with
the system’s hardware (saturation, deadzone, backlash, Coulomb friction). Usually such nonlinearities
have undesirable effects, and control systems have to properly compensate for them. Intentional
nonlinearities, on the other hand, are artificially introduced by the designer. Nonlinear control laws, such
as bang-bang optimal control laws and adaptive control laws (refer to Chapter 10), are typical examples
of intentional nonlinearities.
The describing function method is an approximate method, but experience with real systems and computer
simulation results shows adequate accuracy in many cases. The method predicts whether limit cycle oscillations will exist or not, and gives numerical
estimates of oscillation frequency and amplitude when limit cycles are predicted. Basically, the method
is an approximate extension of frequency-response methods (including Nyquist stability criterion) to
nonlinear systems.
To discuss the basic concept underlying the describing function analysis, let us consider the block
diagram of a nonlinear system shown in Fig. 9.5, where the blocks G1(s) and G2(s) represent the linear
elements, while the block N represents the nonlinear element.
Fig. 9.5 [Nonlinear system: r = 0; linear element G1(s), nonlinear element N (input x, output y), and linear element G2(s) in the loop, with unity negative feedback]
The describing function method provides a “linear approximation” to the nonlinear element based on
the assumption that the input to the nonlinear element is a sinusoid of known, constant amplitude. The
fundamental harmonic of the element’s output is compared with the input sinusoid, to determine the
steady-state amplitude and phase relation. This relation is the describing function for the nonlinear
element. The method can, thus, be viewed as ‘harmonic linearization’ of a nonlinear element.
The describing function method is based on the Fourier series. A review of the Fourier series will be in
order here.
$$y(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos n\omega_0 t + b_n\sin n\omega_0 t\right] \qquad (9.3a)$$
$$= \frac{a_0}{2} + \sum_{n=1}^{\infty} Y_n\sin(n\omega_0 t + \phi_n) \qquad (9.3b)$$
where \(\omega_0 = 2\pi/T_0\) is the fundamental frequency (T0 = period of y(t)), \(Y_n = \sqrt{a_n^2 + b_n^2}\), and \(\phi_n = \tan^{-1}(a_n/b_n)\).
Certain simplifications are possible when y(t) has a symmetry of one type or another.
(i) Even symmetry: y(t) = y(–t) results in
bn = 0; n = 1, 2, ... (9.4c)
(ii) Odd symmetry: y(t) = –y(–t) results in
an = 0; n = 0, 1, 2, ... (9.4d)
(iii) Odd half-wave symmetry: y(t ± T0/2) = – y(t) results in
an = bn = 0; n = 0, 2, 4, ... (9.4e)
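These symmetry results are easy to check numerically. A minimal sketch (assuming, for illustration, a unit square wave, which is odd and odd half-wave symmetric):

```python
import numpy as np

# Fourier coefficients of a unit square wave y = sgn(sin(w0*t)): only odd-n
# sine terms should survive, as Eqns (9.4c)-(9.4e) predict.
w0 = 1.0
T0 = 2 * np.pi / w0
t = np.linspace(0.0, T0, 200001)
y = np.sign(np.sin(w0 * t))

for n in range(6):
    an = (2.0 / T0) * np.trapz(y * np.cos(n * w0 * t), t)
    bn = (2.0 / T0) * np.trapz(y * np.sin(n * w0 * t), t)
    print(n, round(an, 4), round(bn, 4))   # bn = 4/(n*pi) for odd n, else ~0
```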
9.4.2
Let us assume that input x to the nonlinearity in Fig. 9.5 is sinusoidal, i.e.,
x = X sin ωt
With such an input, the output y of the nonlinear element will, in general, be a nonsinusoidal periodic
function which may be expressed in terms of Fourier series as follows (refer to Eqns (9.3)–(9.4)):
y = Y0 + A1 cos ωt + B1 sin ωt + A2 cos 2ωt + B2 sin 2ωt + ⋯
The nonlinear characteristics listed in the previous section, are all odd-symmetrical/odd half-wave
symmetrical; the mean value Y0 for all such cases is zero and therefore, the output
y = A1 cos ωt + B1 sin ωt + A2 cos 2ωt + B2 sin 2ωt + ⋯
In the absence of an external input (i.e., r = 0 in Fig. 9.5), the output y of the nonlinear element N is
fed back to its input, through the linear elements G2(s) and G1(s) in tandem. If G2(s)G1(s) has low-pass
characteristics (this is usually the case in control systems), it can be assumed, to a good degree of
approximation, that all the higher harmonics of y are filtered out in the process, and the input x to the
nonlinear element N is mainly contributed by the fundamental component (first harmonic) of y, i.e., x
remains sinusoidal. Under such conditions, the second and higher harmonics of y can be thrown away for the purpose of analysis.
The coefficients A1 and B1 of the Fourier series are given by (refer to Eqns 9.3)
$$A_1 = \frac{1}{\pi}\int_0^{2\pi} y\,\cos\omega t \; d(\omega t) \qquad (9.5c)$$
$$B_1 = \frac{1}{\pi}\int_0^{2\pi} y\,\sin\omega t \; d(\omega t) \qquad (9.5d)$$
As we shall see shortly, the amplitude Y1 and the phase shift φ1 are both functions of X, but independent
of ω. We may combine the amplitude ratio and the phase shift in a complex equivalent gain N(X), such
that
$$N(X) = \frac{Y_1(X)}{X}\,\angle\phi_1(X) = \frac{B_1 + jA_1}{X} \qquad (9.6)$$
Under first-harmonic approximation, the nonlinear element is completely characterized by the function
N(X); this function is usually referred to as the describing function of the nonlinearity.
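The first-harmonic computation embodied in Eqns (9.5)–(9.6) can be carried out numerically for any static nonlinearity. A minimal sketch in Python (the function name and saturation parameters are assumptions for illustration), checked against the closed-form saturation entry of Table 9.2:

```python
import numpy as np

def describing_function(nl, X, npts=100001):
    """First-harmonic (describing function) estimate N(X) = (B1 + j*A1)/X
    for a static nonlinearity nl, driven by x = X*sin(wt)."""
    wt = np.linspace(0.0, 2 * np.pi, npts)
    y = nl(X * np.sin(wt))
    A1 = np.trapz(y * np.cos(wt), wt) / np.pi   # Eqn (9.5c)
    B1 = np.trapz(y * np.sin(wt), wt) / np.pi   # Eqn (9.5d)
    return (B1 + 1j * A1) / X

# Saturation of slope K, saturating at input +/-S (assumed values)
K, S = 1.0, 1.0
saturation = lambda x: np.clip(K * x, -K * S, K * S)

X = 2.0
print(describing_function(saturation, X))            # numerical estimate
print((2 * K / np.pi) * (np.arcsin(S / X)
      + (S / X) * np.sqrt(1 - (S / X) ** 2)))        # closed form (Table 9.2)
```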
The describing function differs from a linear system transfer function in that its numerical value varies
with the input amplitude X. Also, it does not depend on the frequency ω (there are, however, a few situations
in which the describing function for the nonlinearity is a function of both the input amplitude X and
the frequency ω; refer to [128–129]). When embedded in an otherwise linear system (Fig. 9.6), the
describing function can be combined with the ‘ordinary’ sinusoidal transfer function of the rest of the
system, to obtain the complete open-loop function. However, we will get a different open-loop function
for every different amplitude X. We can check all of these open-loop functions for closed-loop stability,
using Nyquist stability criterion.
Fig. 9.6 [The system of Fig. 9.5 with the nonlinearity replaced by its describing function: G1(jω), N(X) and G2(jω) in the loop]
It is important to remind ourselves here that the simplicity in analysis of nonlinear systems using
describing functions, has been achieved at the cost of certain limitations; the foremost being the
assumption that in traversing the path through the linear parts of the system from nonlinearity output
back to nonlinearity input, the higher harmonics will have been effectively low-pass filtered, relative to
the first harmonic. When the linear part of the system does indeed provide a sufficiently strong filtering
effect, then the predictions of describing function analysis, usually, are a good approximation to actual
behavior. Filtering characteristics of the linear part of the system improve as the order of the system goes
up.
The ‘low-pass filtering’ requirement is never completely satisfied; for this reason, the describing function
method is mainly used for stability analysis and is not directly applied to the optimization of system
design. Usually, the describing function analysis will correctly predict the existence and characteristics
of limit cycles. However, false indications cannot be ruled out; therefore, the results must be verified
by simulation. Simulation, in fact, is an almost indispensable tool for analysis and design of nonlinear
systems; describing function and other analytical methods, provide the background for intelligent
planning of the simulations.
We will limit our discussion to separable nonlinear systems with reference input r = 0, and with
symmetrical nonlinearities (listed in Section 9.3) in the loop. Refer to [128–129] for situations wherein
dissymmetrical nonlinearities are present, and/or the reference input is nonzero.
Fig. 9.7 [On–off controller with deadzone: output +M for x > D, −M for x < −D, and zero inside the deadzone |x| < D. With a sinusoidal input x = X sin ωt (X > D), the output y is a train of rectangular pulses: +M for α < ωt < π − α and −M for π + α < ωt < 2π − α, where α = sin⁻¹(D/X)]
where
$$B_1 = \frac{1}{\pi}\int_0^{2\pi} y\,\sin\omega t \; d(\omega t)$$
Due to the symmetry of y (refer to Fig. 9.7), the coefficient B1 can be calculated as follows:
$$B_1 = \frac{4}{\pi}\int_0^{\pi/2} y\,\sin\omega t \; d(\omega t) = \frac{4M}{\pi}\int_\alpha^{\pi/2} \sin\omega t \; d(\omega t) = \frac{4M}{\pi}\cos\alpha \qquad (9.8)$$
Since A1 (the Fourier series cosine coefficient) is zero, the first harmonic component of y is exactly in
phase with X sin ωt, and the describing function N(X) is given by (refer to Eqns (9.6)–(9.8))
$$N(X) = \begin{cases} 0\,; & X < D \\[4pt] \dfrac{4M}{\pi X}\sqrt{1 - \left(\dfrac{D}{X}\right)^2}\,; & X \ge D \end{cases} \qquad (9.9)$$
For a given controller, M and D are fixed and the describing function is a function of input amplitude
X, which is graphed in Fig. 9.8a, together with the peak location and value, found by the standard calculus
maximization procedure. Note that for a given X, N(X) is just a pure real positive number, and thus, plays
the role of a steady-state gain in a block diagram of the form shown in Fig. 9.6. However, this gain term
is unusual in that it changes when X changes.
Fig. 9.8 [(a) N(X) versus X for the on–off controller with deadzone: N = 0 for X < D, rising to a peak of 2M/(πD) near X = √2·D and decaying toward zero for large X; (b) the −1/N(X) locus on the polar plane: it runs along the negative real axis from −∞ at X = D up to −πD/(2M) and back toward −∞ as X increases; points A and B coincide on the locus but correspond to different values of X/D]
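The peak location and value quoted in Fig. 9.8a can be checked numerically from Eqn. (9.9); a minimal sketch, with M and D assumed:

```python
import numpy as np

M, D = 1.0, 0.5          # assumed controller parameters
X = np.linspace(1.0001 * D, 10 * D, 200000)
N = (4 * M / (np.pi * X)) * np.sqrt(1 - (D / X) ** 2)   # Eqn (9.9), X >= D

i = np.argmax(N)
print(X[i] / D)                     # ~ sqrt(2): peak at X = sqrt(2)*D
print(N[i], 2 * M / (np.pi * D))    # peak value ~ 2M/(pi*D)
```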
It is convenient to represent −1/N(X), rather than N(X) itself, as a function of X on the polar plane. We will use this form of representation in the next section for
stability analysis.
Rearrangement of Eqn. (9.9) gives
$$-\frac{1}{N(X)} = -\frac{\pi D}{4M}\cdot\frac{(X/D)^2}{\sqrt{(X/D)^2 - 1}} \qquad (9.11)$$
Figure 9.8b gives the representation on the polar plane, of the describing function for an on–off controller
with deadzone. It may be noted that though the points A and B lie at the same place on the negative real
axis, they belong to different values of X/D.
We choose as another example the backlash, since its behavior brings out certain features not encountered
in our earlier example. The characteristics of backlash nonlinearity, and its response to sinusoidal input,
are shown in Fig. 9.9. The output y is again a periodic function of period 2p ; one cycle of this function
is described as follows:
$$y = \begin{cases} x - H\,; & 0 \le \omega t < \pi/2 \\ X - H\,; & \pi/2 \le \omega t < \pi - \beta \\ x + H\,; & \pi - \beta \le \omega t < 3\pi/2 \\ -X + H\,; & 3\pi/2 \le \omega t < 2\pi - \beta \\ x - H\,; & 2\pi - \beta \le \omega t \le 2\pi \end{cases} \qquad (9.12)$$
where \(X\sin\beta = X - 2H\), or \(\beta = \sin^{-1}\!\left(1 - \dfrac{2H}{X}\right)\).
$$B_1 = \frac{1}{\pi}\int_0^{2\pi} y\,\sin\omega t \; d(\omega t)$$
Due to the symmetry of y, only the positive half-wave need be considered (Fig. 9.9):
$$A_1 = \frac{2}{\pi}\left[\,\int_0^{\pi/2} (X\sin\omega t - H)\cos\omega t \; d(\omega t) + \int_{\pi/2}^{\pi-\beta} (X - H)\cos\omega t \; d(\omega t) + \int_{\pi-\beta}^{\pi} (X\sin\omega t + H)\cos\omega t \; d(\omega t)\right]$$
$$= -\frac{3X}{2\pi} + \frac{2(X - 2H)}{\pi}\sin\beta + \frac{X}{2\pi}\cos 2\beta = -\frac{3X}{2\pi} + \frac{2X}{\pi}\sin^2\beta + \frac{X}{2\pi}\cos 2\beta = -\frac{X}{\pi}\cos^2\beta \qquad (9.13a)$$
$$B_1 = \frac{2}{\pi}\left[\,\int_0^{\pi/2} (X\sin\omega t - H)\sin\omega t \; d(\omega t) + \int_{\pi/2}^{\pi-\beta} (X - H)\sin\omega t \; d(\omega t) + \int_{\pi-\beta}^{\pi} (X\sin\omega t + H)\sin\omega t \; d(\omega t)\right]$$
$$= \frac{X}{\pi}\left[\frac{\pi}{2} + \beta\right] + \frac{2(X - 2H)}{\pi}\cos\beta - \frac{X}{2\pi}\sin 2\beta = \frac{X}{\pi}\left[\frac{\pi}{2} + \beta + \frac{1}{2}\sin 2\beta\right] \qquad (9.13b)$$
It is clear that the fundamental component of y will have a phase shift with respect to X sin ωt (a feature
not present in our earlier example). The describing function N(X) is given by (refer to Eqns (9.6), (9.12),
(9.13))
$$N(X) = \frac{1}{X}(B_1 + jA_1) = \frac{1}{\pi}\left[\frac{\pi}{2} + \beta + \frac{1}{2}\sin 2\beta - j\cos^2\beta\right] \qquad (9.14)$$
$$\beta = \sin^{-1}\!\left(1 - \frac{2H}{X}\right)$$
Note that N(X) is a function of the nondimensional ratio H/X; we can thus tabulate or plot a single graph
of N(X) that will be usable for any numerical value of H (refer to Table 9.1, and Fig. 9.10).
Table 9.1
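The entries of Table 9.1 can be regenerated directly from Eqn. (9.14); a minimal sketch (the sampled H/X values are an assumption):

```python
import numpy as np

# Backlash describing function, Eqn (9.14), tabulated against H/X
for HX in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    beta = np.arcsin(1 - 2 * HX)
    N = (np.pi / 2 + beta + 0.5 * np.sin(2 * beta)
         - 1j * np.cos(beta) ** 2) / np.pi
    print(f"H/X = {HX:.1f}: |N| = {abs(N):.3f}, "
          f"angle = {np.degrees(np.angle(N)):.1f} deg")
```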
We have so far given illustrative derivations of describing functions for on–off controller with deadzone,
and backlash. By similar procedures, the describing functions of other common nonlinearities can be
derived; some of these are tabulated in Table 9.2.
Table 9.2 Describing functions of common nonlinearities

1. On–off controller (ideal relay; output ±M):
$$N(X) = \frac{4M}{\pi X}$$

2. On–off controller with deadzone (output ±M, deadzone ±D):
$$N(X) = \begin{cases} 0\,; & X < D \\[4pt] \dfrac{4M}{\pi X}\sqrt{1 - \left(\dfrac{D}{X}\right)^2}\,; & X \ge D \end{cases}$$

3. Saturation (slope K, saturation at input ±S):
$$N(X) = \begin{cases} K\,; & X < S \\[4pt] \dfrac{2K}{\pi}\left[\sin^{-1}\dfrac{S}{X} + \dfrac{S}{X}\sqrt{1 - \left(\dfrac{S}{X}\right)^2}\,\right]; & X \ge S \end{cases}$$

4. Deadzone (slope K, deadzone ±D):
$$N(X) = \begin{cases} 0\,; & X < D \\[4pt] \dfrac{2K}{\pi}\left[\dfrac{\pi}{2} - \sin^{-1}\dfrac{D}{X} - \dfrac{D}{X}\sqrt{1 - \left(\dfrac{D}{X}\right)^2}\,\right]; & X \ge D \end{cases}$$

5. Backlash (slope 1, total backlash 2H):
$$N(X) = \frac{1}{\pi}\left[\frac{\pi}{2} + \beta + \frac{1}{2}\sin 2\beta - j\cos^2\beta\right]; \quad \beta = \sin^{-1}\!\left(1 - \frac{2H}{X}\right)$$
Fig. 9.11 [(a) Feedback system with forward path KG(s) and feedback H(s); (b) the Nyquist contour in the s-plane; (c) the plot of KG(jω)H(jω) relative to the critical point −1 + j0; (d), (e) plots of G(jω)H(jω) relative to the critical point −1/K + j0]

¹ Chapter 10 of reference [155].
If the input signal to the nonlinearity contains components in addition to X sin ωt (such as when r is not zero), the method of dual-input describing functions may be useful (refer to [128–129]).
For a given X, N(X) in Fig. 9.12 is just a real/complex number; the condition (9.16), therefore, becomes
$$G(j\omega) = -\frac{1}{N(X)} \qquad (9.17)$$
This modified condition differs from the condition (9.16) in that the critical point (−1/K + j0) now becomes the critical locus −1/N(X), a function of X. The stability analysis can be carried out by examining the relative positions of the following plots on the polar plane:
(i) the plot of G(jω) with ω varying from 0 to ∞, called the polar plot of G(jω) (note that the Nyquist plot is the plot of G(jω) with ω varying from −∞ to +∞); and
(ii) the plot of −1/N(X) with X varying from 0 to ∞.
When the critical points of −1/N(X) lie to the left of the polar plot of G(jω) (or are not encircled by the
Nyquist plot of G(jω)), the closed-loop system is stable; any disturbances which appear in the system
will tend to die out. Conversely, if any part of the −1/N(X) locus lies to the right of the polar plot of G(jω)
(or is enclosed by the Nyquist plot of G(jω)), it implies that any disturbances which are characterized
by the values of X corresponding to the enclosed critical points, will result in unstable operation. The
intersection of the G(jω) and −1/N(X) loci corresponds to the possibility of a periodic oscillation (limit
cycle), characterized by the value of X on the −1/N(X) locus and the value of ω on the G(jω) locus.
Figure 9.13a shows a G(jω) plot superimposed on a −1/N(X) locus. The values of X, for which the
−1/N(X) locus lies in the region to the right of an observer traversing the polar plot of G(jω) in the direction
of increasing ω, correspond to unstable conditions. Similarly, the values of X, for which the −1/N(X) locus
lies in the region to the left of an observer traversing the polar plot of G(jω) in the direction of increasing
ω, correspond to the stable conditions. The locus of −1/N(X) and the polar plot of G(jω) intersect at the
point A(ω = ω2, X = X2), which corresponds to the condition of limit cycle. The system is unstable for
X < X2 and is stable for X > X2. The stability of the limit cycle can be judged by the perturbation
technique described below.
Fig. 9.13 [(a) A G(jω) plot superimposed on a −1/N(X) locus; they intersect at A(X2, ω2), with points B(X3) and C(X1) on the −1/N(X) locus on either side of A: a stable limit cycle at A; (b) a configuration giving an unstable limit cycle]
Suppose that the system is originally operating at A under the state of a limit cycle. Assume that a slight
perturbation is given to the system, so that the input to the nonlinear element increases to X3, i.e., the
operating point is shifted to B. Since B is in the range of stable operation, the amplitude of the input to
the nonlinear element progressively decreases, and hence the operating point moves back towards A.
Similarly, a perturbation which decreases the amplitude of input to the nonlinearity, shifts the operating
point to C which lies in the range of unstable operation. The input amplitude now progressively increases
and the operating point again returns to A. Therefore, the system has a stable limit cycle at A.
Figure 9.13b shows the case of an unstable limit cycle. For systems having G( jw) plots and – 1/N(X) loci
as shown in Figs 9.14a and 9.14b, there are two limit cycles; one stable and the other unstable.
Describing function method usually gives sufficiently accurate information about stability and limit
cycles. This analysis is invariably followed by a simulation study.
The transfer function G(s) in Fig. 9.12, when converted to state variable formulation, takes the form
$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{b}u(t); \quad \mathbf{x}(0) \triangleq \mathbf{x}^0$$
$$y(t) = \mathbf{c}\mathbf{x}(t)$$
where
x(t) = n × 1 state vector for the nth-order G(s);
u(t) = input to G(s);
y(t) = output of G(s);
A = n × n matrix;
b = n × 1 column matrix; and
c = 1 × n row matrix.
In Appendix A, we use the MATLAB software SIMULINK to obtain the response of nonlinear systems of the
form given in Fig. 9.12, with zero reference input and initial state x0.
Fig. 9.14 [G(jω) plots and −1/N(X) loci that intersect twice, with the stable and unstable ranges of X marked: one intersection corresponds to a stable limit cycle and the other to an unstable one; (a) and (b) show the two possible configurations]
Example 9.1
Let us investigate the stability of a system with an on–off controller, shown in Fig. 9.15. Using the describing
function of an on–off nonlinearity given in Table 9.2, we have
$$-\frac{1}{N(E)} = -\frac{\pi E}{4} \qquad (9.18)$$
where E is the maximum amplitude of the sinusoidal signal e. Figure 9.16 shows the locus of – 1/N(E)
as a function of E, and the plot of G(jω) for K = 5. Equation (9.17) is satisfied at A since the two graphs
intersect at this point.
Fig. 9.15 [On–off control system: r = 0; the error e drives an ideal relay with output u = ±1, followed by the plant G(s) = K/((s + 1)(0.1s + 1)²) with unity negative feedback]
The point of intersection on the G(jω) plot gives a numerical value ω1 for the frequency of the limit cycle;
whereas, the same point on the −1/N(E) locus gives us the predicted amplitude E1 of the oscillation. As
an observer traverses the G(jω) plot in the direction of increasing ω, the portion O-A of the −1/N(E)
locus lies to its right and the portion A-C lies to its left. Using the arguments presented previously, we
can conclude that the limit cycle is a stable one.
Since −1/N(E) is a negative real number, it is clear that the intersection occurs at −180° phase angle. The
frequency ω1 that gives ∠G(jω1) = −180° is 10.95 rad/sec. Furthermore, at point A,
Fig. 9.16 [The −1/N(E) locus along the negative real axis (from O toward C as E increases), and polar plots of G(jω) for K = 5 and K > 5; intersections at A(E1, ω1) and B(E2, ω2)]
$$|G(j\omega_1)| = \frac{1}{N(E_1)} = \frac{\pi E_1}{4}$$
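The intersection point A can also be located numerically. A minimal sketch under the stated conditions (K = 5; relay describing function of Eqn. (9.18)):

```python
import numpy as np
from scipy.optimize import brentq

K = 5.0
phase = lambda w: -(np.arctan(w) + 2 * np.arctan(0.1 * w))       # phase of G(jw)
mag = lambda w: K / (np.sqrt(1 + w ** 2) * (1 + (0.1 * w) ** 2)) # |G(jw)|

w1 = brentq(lambda w: phase(w) + np.pi, 1.0, 100.0)  # -180 deg phase crossover
E1 = 4 * mag(w1) / np.pi          # from |G(jw1)| = pi*E1/4  (Eqn 9.18)
print(w1, E1)                     # ~10.95 rad/s, E1 ~ 0.26
```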
Example 9.2
Figure 9.20a shows a block diagram for a servo system consisting of an amplifier, a motor, a gear train,
and a load (gear 2 shown in the diagram includes the load element). It is assumed that the inertia of the
gears and load element is negligible compared with that of the motor, and backlash exists between gear
1 and gear 2. The gear ratio between gear 1 and gear 2 is unity.
Fig. 9.20 A servo system with backlash in gears [(a) amplifier and motor G(s) = 5/(s(s + 1)) driving gear 1; backlash between gear 1 and gear 2 (which includes the load); (b) the backlash characteristic, of unit slope]
9.7
The free motion of any second-order nonlinear system can always be described by an equation of the
form
$$\ddot{y} + g(y, \dot{y})\,\dot{y} + h(y, \dot{y})\,y = 0 \qquad (9.19)$$
The state of the system, at any moment, can be represented by a point of coordinates (y, ẏ) in a system
of rectangular coordinates. Such a coordinate plane is called a ‘phase plane’.
In terms of the state variables
x1 = y, x2 = ẏ, (9.20a)
second-order system (9.19) is equivalent to the following canonical set of state equations:
$$\dot{x}_1 = \frac{dx_1}{dt} = x_2$$
$$\dot{x}_2 = \frac{dx_2}{dt} = -g(x_1, x_2)\,x_2 - h(x_1, x_2)\,x_1 \qquad (9.20b)$$
By division, we obtain a first-order differential equation relating the variables x1 and x2:
$$\frac{dx_2}{dx_1} = -\frac{g(x_1, x_2)\,x_2 + h(x_1, x_2)\,x_1}{x_2} \qquad (9.21)$$
Thus, we have eliminated the independent variable t from the set of first-order differential equations given
by (9.20b). In Eqn. (9.21), we consider x1 and x2 as independent and dependent variables, respectively.
For a given set of initial conditions {x1(0),x2(0)}, the solution to Eqn. (9.21) may be represented by a
single curve in the phase plane, for which the coordinates are x1 and x2. The curve traced out by the state
point {x1(t), x2(t)}, as time t is varied from 0 to ∞, is called the phase trajectory, and the family of all
possible curves for different initial conditions is called the phase portrait. Normally, a finite number of
trajectories, defined in a finite region, is considered a portrait.
One may obviously raise the question: when the time solutions x1(t) and x2(t), as time t is varied from
0 to ∞, may be obtained by direct integration of Eqns (9.20b), analytically or numerically, where is the
necessity of drawing phase portraits? In fact, as we shall see, the phase portraits provide a powerful
qualitative aid for investigating system behavior and the design of system parameters, to achieve a desired
response. Furthermore, the existence of limit cycles is sharply brought into focus by the phase portrait.
Figure 9.22a shows the output response, and the corresponding phase trajectory, for a linear second-
order servo system described by the differential equation
$$\ddot{y} + 2\zeta\dot{y} + y = 0; \quad y(0) = y_0,\; \dot{y}(0) = 0,\; 0 < \zeta < 1$$
In terms of the state variables x1 = y and x2 = ẏ, the system model is given by the equations
$$\dot{x}_1 = x_2; \quad \dot{x}_2 = -2\zeta x_2 - x_1; \quad x_1(0) = y_0,\; x_2(0) = 0$$
The origin of the phase plane (x1 = 0, x2 = 0) is the equilibrium point of the system since, at this point,
the derivatives ẋ1 and ẋ2 are zero (the system continues to lie at the equilibrium point unless otherwise
disturbed). The nature of the transient can be readily inferred from the phase trajectory of Fig. 9.22;
starting from the point P, i.e., with initial deviation but no initial velocity, the system returns to rest, i.e.,
to the origin, with damped oscillatory behavior.

Fig. 9.22 [(a) Damped oscillatory output response y(t) from y(0) = y0; (b) the corresponding phase trajectory, spiraling from the starting point P(y0, 0) into the origin]
Consider now the well-known Van der Pol’s differential equation (refer to Eqn. (9.1))
$$\ddot{y} - \mu(1 - y^2)\dot{y} + y = 0$$
which describes physical situations in many nonlinear systems. In terms of the state variables x1 = y and
x2 = ẏ, we obtain
$$\dot{x}_1 = x_2; \quad \dot{x}_2 = \mu(1 - x_1^2)x_2 - x_1$$
Origin of the phase plane is the equilibrium point of the system. Figure 9.23 shows phase portraits for
(i) μ > 0; and (ii) μ < 0. In the case of μ > 0, we observe that for large values of x1(0), the system response
is damped and the amplitude of x1(t) = y(t) decreases till the system state enters the limit cycle, as shown
by the outer trajectory. On the other hand, if initially x1(0) is small, the damping is negative, hence
the amplitude of x1(t) = y(t) increases till the system state enters the limit cycle, as shown by the inner
trajectory. The limit cycle is a stable one, since the paths in its neighborhood converge towards the limit
cycle. Figure 9.23 shows an unstable limit cycle for μ < 0.
The phase plane for second-order systems is indeed a special case of phase space or state space defined
for nth-order systems. Much work has been done to extend this approach of analysis to third-order
systems. Though a phase trajectory for a third-order system can be graphically visualized through its
projections on two planes, say (x1, x2) and (x2, x3) planes, this complexity causes the technique to lose
its major power of quick graphical visualization of the total system response. The phase trajectories are,
therefore, generally restricted to second-order systems only.
Fig. 9.23 A second-order nonlinear system on the phase plane [(i) μ > 0: trajectories converge onto a stable limit cycle; (ii) μ < 0: an unstable limit cycle]
For time-invariant systems, the entire phase plane is covered with trajectories with one, and only one,
curve passing through each point of the plane, except for certain critical points through which, either
infinite number or none of the trajectories pass. Such points (called singular points) are discussed later
in Section 9.9.
If the parameters of a system vary with time, or if a time-varying driving function is imposed, two or more
trajectories may pass through a single point in a phase plane. In such cases, the phase portrait becomes
complex and more difficult to work with and interpret. Therefore, the use of phase-plane analysis is
restricted to second-order systems with constant parameters and constant or zero input. However, it
may be mentioned that investigators have made fruitful use of the phase-plane method in investigating
second-order time-invariant systems under simple time-varying inputs, such as ramp. Some simple time-
varying systems have also been analyzed by this method. Our discussion will be limited to second-order
time-invariant systems with constant or zero input.
From the above discussion, we observe that the phase-plane analysis applies primarily to systems
described by second-order differential equations. In the case of feedback control systems, systems of
order higher than the second, are likely to be well filtered and tractable by the describing-function method
discussed earlier in this chapter. The two methods of the phase plane and of the describing function are,
therefore, complementary to a large extent; each being available for the study of the systems which are
most likely to be beyond the scope of the other.
9.8.1
Most nonlinear systems cannot be easily solved by analytical techniques. However, for piecewise linear
systems, an important class of nonlinear systems, phase trajectories can be constructed analytically, as
shown in the following examples.
Example 9.3
In this example, we consider a model of a satellite shown in Fig. 7.3. We assume that the satellite is
rigid and is in a frictionless environment. It can rotate about the reference axis as a result of torque T
applied to the satellite about its mass center by firing the thrusters (T = Fd ). The system input is the
applied torque T and the system output is the attitude angle q. The satellite’s moment of inertia is J. The
input-output model of the system is
$$J\,\frac{d^2\theta}{dt^2} = T \qquad (9.22a)$$
We assume that when the thrusters fire, the thrust is constant; that is, T = A, a constant greater than or less
than zero. In terms of output variable y (= q), we obtain the equation
$$J\ddot{y} = A \qquad (9.22b)$$
In terms of the state variables x1 = y and x2 = ẏ, the state equations become
$$\dot{x}_1 = x_2; \quad \dot{x}_2 = \frac{A}{J} \qquad (9.22c)$$
Elimination of t by division yields the equation of the trajectories:
$$\frac{dx_2}{dx_1} = \frac{A}{Jx_2} \qquad (9.23)$$
or J x2 dx2 = A dx1, which integrates to \((J/2)x_2^2 = Ax_1 + \text{constant}\); the trajectories are, therefore, parabolas.
The phase portrait for A < 0 is shown in Fig. 9.24b. In the special case of A = 0 (no driving torque), the
integration of the trajectory equation (9.23) gives x2(t) = x2(0). The trajectories are, therefore, straight
lines parallel to x1-axis.
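A quick numerical check of this parabolic trajectory family; a minimal sketch (J and A are assumed values, with A < 0 as in Fig. 9.24b):

```python
import numpy as np
from scipy.integrate import solve_ivp

J, A = 1.0, -1.0    # assumed values
sol = solve_ivp(lambda t, x: [x[1], A / J], (0.0, 4.0), [0.0, 2.0],
                max_step=0.01)

# Along each trajectory of (9.23), (J/2)*x2^2 - A*x1 stays constant (a parabola)
c = 0.5 * J * sol.y[1] ** 2 - A * sol.y[0]
print(c.min(), c.max())   # both ~ 2.0
```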
Example 9.4
Consider now the equation
$$J\ddot{\theta} + B\dot{\theta} = T \qquad (9.25a)$$
corresponding to a torque T driving a load comprising inertia J and viscous friction B. For a constant
torque, the equation may be expressed as
$$\tau\ddot{y} + \dot{y} = A \qquad (9.25b)$$
where y = q is the system output and A represents normalized torque; a constant greater than or less than
zero. The equivalent system is
$$\dot{x}_1 = x_2; \quad \tau\dot{x}_2 = A - x_2 \qquad (9.25c)$$
or
$$\frac{1}{\tau}x_1(t) = -x_2(t) - A\ln(A - x_2(t)) + C \qquad (9.26a)$$
where the constant of integration C is determined by the initial conditions, i.e.,
$$C = \frac{1}{\tau}x_1(0) + x_2(0) + A\ln(A - x_2(0)) \qquad (9.26b)$$
Therefore, the trajectory equation becomes
$$\frac{1}{\tau}(x_1 - x_1(0)) = -(x_2 - x_2(0)) - A\ln\!\left(\frac{A - x_2}{A - x_2(0)}\right) \qquad (9.26c)$$
For the case of the initial state point at the origin (x1(0) = x2(0) = 0), Eqn. (9.26c) reads
$$\frac{1}{\tau}x_1 = -x_2 - A\ln\!\left(\frac{A - x_2}{A}\right) \qquad (9.26d)$$
The phase trajectory described by this equation is shown in Fig. 9.25a as the curve Γ0. It is seen that the
trajectory is asymptotic to the line x2 = A, which is the final velocity.
Fig. 9.25 [(a) The trajectory Γ0 through the origin, asymptotic to the line x2 = A; (b) phase portrait for A < 0]
For an initial state point (x1(0), x2(0)), the trajectory will have the same shape as the curve Γ0, except
that it is shifted horizontally—so that it passes through the point (x1(0), x2(0)). This is obvious from
Eqn. (9.26c) which can be written as
$$\frac{1}{\tau}(x_1 - K) = -x_2 - A\ln\!\left(\frac{A - x_2}{A}\right)$$
where \(K = x_1(0) + \tau x_2(0) + \tau A\ln\!\left(\dfrac{A - x_2(0)}{A}\right)\)
For an initial state point (x1(0), x2(0)), the trajectory is Γ0, shifted horizontally by K units. The phase
portrait for A < 0 is shown in Fig. 9.25b. In the special case of A = 0, the phase portrait consists of a
family of straight lines of slope −1/τ.
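Equation (9.26d) can be verified against a numerical integration of (9.25c); a minimal sketch, with assumed normalized values:

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, A = 1.0, 1.0   # assumed normalized values
sol = solve_ivp(lambda t, x: [x[1], (A - x[1]) / tau], (0.0, 8.0), [0.0, 0.0],
                max_step=0.01, rtol=1e-9, atol=1e-12)
x1, x2 = sol.y

# Check the computed trajectory against Eqn (9.26d): x1/tau = -x2 - A*ln((A-x2)/A)
residual = x1 / tau + x2 + A * np.log((A - x2) / A)
print(np.max(np.abs(residual)))   # small: the computed curve is G0
```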
9.8.2
Consider a time-invariant second-order system described by equations of the form (refer to Eqns (9.20b))
$$\dot{x}_1 = x_2; \quad \dot{x}_2 = f(x_1, x_2) \qquad (9.27)$$
At a point (x1*, x2*) in the phase plane, the slope m* of the tangent to the trajectory can be determined
from
$$\frac{f(x_1^*, x_2^*)}{x_2^*} = m^* \qquad (9.29)$$
An isocline is defined to be the locus of the points corresponding to a given constant slope m of the
trajectories, on the phase plane. All trajectories passing through the points on the curve
f (x1, x2 ) = mx2 (9.30)
will have the same tangent slope m at the points on the curve; the curve, thus, represents an isocline
corresponding to trajectories of slope m. All trajectories crossing this isocline will have tangent slope m
at the points on the isocline.
The idea of the method of isoclines is to construct several isoclines and a field of local tangents m. Then,
the trajectory passing through any given point in the phase plane, is obtained by drawing a continuous
curve following the directions of the field.
Consider the Van der Pol equation (refer to Eqn. (9.1))
$$\ddot{y} + \mu(y^2 - 1)\dot{y} + y = 0 \qquad (9.31)$$
$$\frac{dx_2}{dx_1} = \frac{-\mu(x_1^2 - 1)x_2 - x_1}{x_2}$$
Therefore, the points on the curve
$$\frac{-\mu(x_1^2 - 1)x_2 - x_1}{x_2} = m$$
all have the same slope m. The isocline equation becomes
$$x_2 = \frac{x_1}{(\mu - \mu x_1^2) - m}$$
By taking m of different values, different isoclines can be obtained. Short line segments are drawn on
the isoclines to generate a field of tangent directions. A trajectory starting at any point can be constructed
by drawing short lines from one isocline to another at average slope corresponding to the two adjoining
isoclines, as shown in Fig. 9.26.
Of course, the construction is much simpler if isoclines are straight lines.
Fig. 9.26 [Isoclines of the Van der Pol equation for m = 0, ±1, with a field of tangent directions (α = tan⁻¹m) drawn as short line segments on each isocline, and a trajectory constructed from an initial point by following the field]
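Today, the trajectory sketched by the method of isoclines is more easily produced by direct numerical integration. A minimal sketch for the Van der Pol equation (μ = 1 assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0   # assumed Van der Pol parameter, mu > 0

def vdp(t, x):
    # Eqn (9.31) in state form: x1' = x2, x2' = -mu*(x1^2 - 1)*x2 - x1
    return [x[1], -mu * (x[0] ** 2 - 1.0) * x[1] - x[0]]

# A large and a small initial state both converge onto the same limit cycle
for x0 in ([3.0, 0.0], [0.1, 0.0]):
    sol = solve_ivp(vdp, (0.0, 40.0), x0, max_step=0.01)
    print(x0, "-> steady amplitude ~", np.abs(sol.y[0][-2000:]).max())  # ~2.0
```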
Example 9.5
The satellite in Example 9.3, is now placed in a feedback configuration in order to maintain the attitude
q at 0°. This feedback control system, called an attitude control system, is shown in Fig. 9.27. When q
is other than 0°, the appropriate thruster will fire to force q towards 0°. When q = x1 is greater than 0°,
u (torque T) = –U, and the trajectories of Fig. 9.24b (corresponding to A < 0) apply. When q = x1 is less
than 0°, u (torque T ) = U, and the trajectories of Fig. 9.24a (corresponding to A > 0) apply. Note that
the switching of u occurs at x1 = 0. Thus the line x1 = 0 (the x2-axis) is called the switching line. From
this discussion we see that Fig. 9.28a illustrates a typical trajectory for the system corresponding to the
initial condition (x1(0), x2(0)). The system response is thus a periodic motion. Figure 9.28b shows many closed
curves for different initial conditions.
Fig. 9.27 [Satellite attitude control system: an on–off controller with output ±U drives the double-integrator satellite; one integrator 1/s gives x2 = θ̇ and a second 1/s gives x1 = θ = y, with unity negative feedback]
Fig. 9.28 Typical trajectories for the system of Fig. 9.27 [(a) a closed trajectory from the initial condition (x1(0), x2(0)), with the switching line x1 = 0; (b) closed curves, each parabolic in segments, for different initial conditions]
By controlling the switching line in the phase plane, we can control the performance of the attitude
control system. This simple control strategy leads to a robust nonlinear control structure: the variable
structure sliding mode control. The details will follow later in this chapter.
In the following, we obtain the phase portrait of the closed-loop system of Fig. 9.27 using the method of
isoclines. The purpose here is to illustrate the method of isoclines.
The state equations are
$$\dot{x}_1 = x_2; \quad \dot{x}_2 = -U\,\mathrm{sgn}\,x_1$$
where
$$\mathrm{sgn}\,x_1 = \begin{cases} 1\,; & x_1 > 0 \\ -1\,; & x_1 < 0 \end{cases}$$
Then
$$\frac{dx_2}{dx_1} = m = \frac{-U\,\mathrm{sgn}\,x_1}{x_2}$$
Suppose that U is normalized to a value of unity for convenience. Then
$$x_2 = -\frac{1}{m}\,\mathrm{sgn}\,x_1$$
For x1 > 0, sgn x1 = 1, and the isocline equation is
$$x_2 = -\frac{1}{m}\,; \quad x_1 > 0$$
For x1 < 0, sgn x1 = −1, and the isocline equation is
$$x_2 = \frac{1}{m}\,; \quad x_1 < 0$$
Given in Fig. 9.29, is the phase plane showing the isoclines and a typical phase trajectory. Note the
parabolic shape, as was determined analytically earlier in this example.
Fig. 9.29 The isoclines and a typical trajectory for the system of Fig. 9.27 [horizontal isoclines x2 = ∓1/m for x1 ≷ 0 (m = ±1/2, ±1 shown), with a parabolic trajectory starting at (x1(0), x2(0))]
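The closed (periodic) trajectories of Fig. 9.28 can also be confirmed by simulation. A minimal sketch (U normalized to unity, as in the example):

```python
import numpy as np
from scipy.integrate import solve_ivp

U = 1.0   # normalized thruster torque

def attitude(t, x):
    # x1 = theta, x2 = theta-dot; on-off control u = -U*sgn(x1)
    return [x[1], -U * np.sign(x[0])]

sol = solve_ivp(attitude, (0.0, 20.0), [1.0, 0.0], max_step=0.001)

# |x1| + x2^2/(2U) is conserved across both structures, so the motion is a
# closed curve, as the parabolic-segment analysis predicts.
E = np.abs(sol.y[0]) + sol.y[1] ** 2 / (2 * U)
print(E.min(), E.max())   # both stay near 1.0
```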
With the function f(x1, x2) assumed to be single valued, there is usually a definite value for this slope
at any given point in the phase plane. This implies that the phase trajectories will not intersect. The only
exceptions are the singular points, at which the trajectory slope is indeterminate:
$$\frac{dx_2}{dx_1} = \frac{f(x_1, x_2)}{x_2} = \frac{0}{0} \qquad (9.34a)$$
Many trajectories may intersect at such points. This indeterminacy of the slope accounts for the adjective
‘singular’.
Singular points are very important features on the phase plane. Examination of the singular points can
reveal a great deal of information about the properties of a system. In fact, the stability of linear systems
is uniquely characterized by the nature of their singular points. For nonlinear systems, besides singular
points, there may be more complex features such as limit cycles.
We need to know the following:
(i) Where will the singular points be and how many will be there?
(ii) What is the behavior of trajectories (i.e., the system) in the vicinity of a singular point?
The first question is answered by our definition of the singular point. There will be singular points at all
the points of the phase plane for which the slope of the trajectory is undefined. These points are given by
the solution of the equations
x2 = 0; f (x1, x2) = 0 (9.34b)
Singular points of the nonlinear system (9.32), thus, lie on the x1-axis of the phase plane.
Since at singular points on the phase plane, x�1 = x�2 = 0, these points, in fact, correspond to the equilibrium
states of the nonlinear system. We know a nonlinear system often has multiple equilibrium states.
To determine the behavior of the trajectories in the vicinity of a singular point (equilibrium state of the
nonlinear system), we first linearize the nonlinear equations at the singular point, and then determine the
nature of phase trajectories around the singular point by linear system analysis. If the singular point of
interest is not at the origin, by defining the difference between the original state and the singular point
as a new set of state variables, one can always shift the singular point to the origin. Therefore, without
loss of generality, we can simply consider Eqns (9.32) with a singular point at 0. Using Taylor series
expansion, Eqns (9.32) can be rewritten as
$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = ax_1 + bx_2 + g_2(x_1, x_2)$$
where g2 contains higher-order terms.
In the vicinity of the origin, the higher-order terms can be neglected and, therefore, the nonlinear system
trajectories essentially satisfy the linearized equations
$$\dot{x}_1 = x_2; \quad \dot{x}_2 = ax_1 + bx_2$$
Transforming these equations into a scalar second-order equation, we get
$$\ddot{x}_1 = b\dot{x}_1 + ax_1$$
Therefore, we will simply consider the second-order linear system described by
$$\ddot{y} + 2\zeta\omega_n\dot{y} + \omega_n^2 y = 0 \qquad (9.35a)$$
Fig. 9.30 [Phase portraits in the vicinity of the various types of singular points on the (x1 = y, x2 = ẏ)-plane, for the various root locations of Eqn. (9.35a); panel (a) shows a stable focus, (e) an unstable node, and (f) a saddle]
Using Eqn. (9.36), we can construct a phase portrait on the (x1, x2)-plane with x1 = y and x2 = ẏ. A typical
phase trajectory is shown in Fig. 9.30a which is a logarithmic spiral into the singular point. This type of
singular point is called a stable focus.
Assume that λ1 and λ2 are two real distinct roots in the right half of the s-plane; λ1 is the smaller root.
The phase portrait in the vicinity of the singular point on the (x1 = y, x2 = ẏ)-plane is shown in Fig. 9.30e.
Such a singular point is called an unstable node.
All trajectories emerge from the singular point and go to infinity. The trajectories are tangential at the origin
to the straightline trajectory, x2(t) = λ1x1(t).
The phase portrait in the vicinity of the singular point on the (x1 = y, x2 = ẏ)-plane is shown in Fig. 9.30f.
Such a singular point is called a saddle.
There are two straightline trajectories with slopes defined by the root values. The straightline due to the
negative root, provides a trajectory that enters the singular point, while the straightline trajectory due
to the positive root, leaves the singular point. All other trajectories approach the singular point adjacent
to the incoming straightline, then curve away and leave the vicinity of the singular point, eventually
approaching the second straightline asymptotically.
Example 9.6
Consider the nonlinear system shown in Fig. 9.31. The nonlinear element is an on–off controller with
deadzone whose characteristics are shown in Fig. 9.31.
Fig. 9.31 [Nonlinear system: r = const; the error e drives an on–off controller with deadzone (output ±1, deadzone ±1), followed by the plant 1/(s(s + 1)) with unity negative feedback]
(i) Region I (defined by x1 > 1): The trajectories in this region are given by the equation (refer to
Eqn. (9.26c): τ = 1, A = −1)
$$x_1 - x_1(0) = -(x_2 - x_2(0)) + \ln\!\left(\frac{1 + x_2}{1 + x_2(0)}\right) \qquad (9.41a)$$
The trajectories are asymptotic to the line x2 = −1.
(ii) Region II (defined by −1 < x1 < 1): The trajectories in this region are given by the equation (refer to
Eqn. (9.26c): τ = 1, A = 0)
x1 – x1(0) = – (x2 – x2(0)) (9.41b)
The trajectories are straightlines of slope –1.
(iii) Region III (defined by x1 < −1): The trajectories in this region are given by the equation (refer to
Eqn. (9.26c): τ = 1, A = 1)
$$x_1 - x_1(0) = -(x_2 - x_2(0)) - \ln\!\left(\frac{1 - x_2}{1 - x_2(0)}\right) \qquad (9.41c)$$
Fig. 9.32 A typical trajectory for the system in Fig. 9.31
Fig. 9.33 Phase trajectory for the system in Fig. 9.31 [both figures show the deadzone band −1 < x1 < 1; trajectories terminate at a point P inside the band]
Example 9.7
Let us investigate the performance of a second-order position control system with Coulomb friction.
Figure 9.34 is a model for a motor position servo with Coulomb friction on the motor shaft. The dynamics
of the system is described by the following differential equation:
K e – Tc sgn ( y� ) = J �y� + By�
where Tc is the Coulomb frictional torque.
Fig. 9.34 [Position servo with Coulomb friction: r = const; gain K drives the motor 1/(Js + B), whose output ẏ is integrated by 1/s to give y; the Coulomb friction torque Tc opposes the motion ẏ]
The singular point given by x1 = +Tc/K is related to the lower-half phase plane (x2 negative), and the
singular point given by x1 = −Tc/K is related to the upper-half phase plane (x2 positive).
Let us now investigate the stability of the singular points. For ė > 0, Eqn. (9.43) may be expressed as
$$\tau\,\frac{d^2}{dt^2}\!\left(e + \frac{T_c}{K}\right) + \frac{d}{dt}\!\left(e + \frac{T_c}{K}\right) + \frac{K}{B}\!\left(e + \frac{T_c}{K}\right) = 0 \qquad (9.45)$$
This is a linear second-order system with the singular point at (−Tc/K, 0) on the (e, ė)-plane. The
characteristic equation of this system is given by
$$\lambda^2 + \frac{1}{\tau}\lambda + \frac{K}{\tau B} = 0$$
Let us assume the following parameter values for the system under consideration:
(K/B) = 5, τ = 4 (9.46)
With these parameters, the roots of the characteristic equation are complex-conjugate with negative real
parts; the singular point is, therefore, a stable focus (refer to Fig. 9.30a).
Let us now investigate the system behavior when large inputs are applied. Phase trajectories may be
obtained by solving the following second-order differential equations for given initial state points (refer
to Eqns (9.35b)–(9.36)).
Region I (defined by x2 > 0):
$$4\ddot{z} + \dot{z} + 5z = 0; \quad z = x_1 + \frac{T_c}{K}; \quad x_2 = \dot{x}_1$$
Region II (defined by x2 < 0):
$$4\ddot{z} + \dot{z} + 5z = 0; \quad z = x_1 - \frac{T_c}{K}; \quad x_2 = \dot{x}_1$$
Figure 9.35 shows a few phase trajectories. It is observed that for small as well as large inputs, the
resulting trajectories terminate on a line along the x1-axis from – Tc /K to +Tc /K, i.e., the line joining the
singular points. Therefore, the system with Coulomb friction is stable; however, there is a possibility of
large steady-state error.
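A simulation of the piecewise-linear model illustrates the steady-state error band. A minimal sketch (K/B and τ as in Eqn. (9.46); the value of Tc/K is an assumption; the stick phase appears as numerical chatter on the x1-axis):

```python
import numpy as np
from scipy.integrate import solve_ivp

KoB, tau, TcK = 5.0, 4.0, 0.2   # K/B = 5, tau = 4; Tc/K assumed to be 0.2

def servo(t, x):
    # Piecewise-linear model: z = x1 + (Tc/K)*sgn(x2), 4*z'' + z' + 5*z = 0
    x1, x2 = x
    z = x1 + TcK * np.sign(x2)
    return [x2, -(x2 + KoB * z) / tau]

sol = solve_ivp(servo, (0.0, 80.0), [2.0, 0.0], max_step=0.005)
x1f = sol.y[0][-1]
print(x1f, abs(x1f) <= TcK)   # trajectory stops inside the band |x1| <= Tc/K
```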
Example 9.8
We consider a double integrator model
$$\ddot{e} = -u \qquad (9.47)$$
having two structures corresponding to u = −1 and u = +1 (refer to Fig. 9.36).

Fig. 9.36 [Variable structure system: r = const; a switching function s(e, ė) selects u = +1 or −1 through a relay, driving the double-integrator plant 1/s²]

In the state variable form (x1 = e, x2 = ė),
$$\dot{x}_1 = x_2; \quad \dot{x}_2 = -u \qquad (9.48)$$
The trajectories corresponding to the structure u = −1 are given by
$$x_1(t) = \tfrac{1}{2}x_2^2(t) + x_1(0) - \tfrac{1}{2}x_2^2(0) \qquad (9.49a)$$
and the trajectories corresponding to the structure u = +1 are given by
$$x_1(t) = -\tfrac{1}{2}x_2^2(t) + x_1(0) + \tfrac{1}{2}x_2^2(0) \qquad (9.49b)$$
The phase-plane portraits of the two structures are shown in Figs 9.37a and 9.37b; the individual
structures are families of parabolas. Neither of the structures is asymptotically stable; each structure is
unstable. However, by choosing a suitable switching logic between the two structures, we can make the
resulting variable structure system, asymptotically stable.
Suppose the structure of the system is changed at any time the system’s trajectory crosses the vertical
axis of the state plane, that is,
$$u = \begin{cases} +1 & \text{if } x_1 > 0 \\ -1 & \text{if } x_1 < 0 \end{cases} \qquad (9.50)$$
A phase portrait of the system (9.48), with the switching logic specified by (9.50), is shown in Fig. 9.37c;
the system always enters into a limit cycle. (In fact, we are familiar with the switching function given by
(9.50); it is on–off switching.)
To achieve asymptotic stability, we redesign the switching logic. We note that one trajectory of each
family in Figs 9.37a and 9.37b goes through the origin. Segments A-O and B-O of these two trajectories
terminating at the origin form the curve shown by the thick line in Fig. 9.38.
Fig. 9.37 Phase trajectories for the system of Fig. 9.36 [(a) u = −1: a family of parabolas labeled by a = x1(0) − x2²(0)/2 (a > 0, a = 0, a < 0); (b) u = +1: a family of parabolas labeled by b = x1(0) + x2²(0)/2 (b < 0, b = 0, b > 0); (c) a trajectory under the switching logic (9.50), which settles into a limit cycle; the segments A-O and B-O through the origin O are marked]
$$u = \begin{cases} +1\,; & s(x_1, x_2) > 0 \\ -1\,; & s(x_1, x_2) < 0 \end{cases} \qquad (9.51b)$$
It is also clear from Fig. 9.38 that, for all initial conditions, the state point is driven to the origin along
the shortest-time path with no oscillations (the output reaches the final value in minimum time with no
ripples, and stays there; this type of response is commonly called a deadbeat response). Such bang-bang
control systems provide optimal control (minimum-time control) [105].
The equation of the optimal switching curve A-O-B can be obtained from Eqns (9.49) by setting (refer to
Fig. 9.37)
$$x_1(0) - \frac{x_2^2(0)}{2} = 0 \quad \text{and} \quad x_1(0) + \frac{x_2^2(0)}{2} = 0$$
for the two segments; together they define the optimal switching function
$$s(x_1, x_2) = x_1 + \tfrac{1}{2}\,x_2\,|x_2| \qquad (9.52b)$$
with s(x1, x2) = 0 on the curve A-O-B.
s (x1, x2) > 0 implies that the state point (x1, x2) lies above the curve A-O-B. s (x1, x2) = 0 and x2 > 0
implies that the state point (x1, x2) lies on the segment A-O. s (x1, x2) = 0 and x2 < 0 implies that the state
point (x1, x2) lies on the segment B-O. s (x1, x2) < 0 implies that the state point (x1, x2) lies below the
segment A-O-B.
In terms of the optimal switching function s (x1, x2), the control law becomes
$$u(t) = \begin{cases} +1 & \text{when } s(x_1, x_2) > 0 \\ +1 & \text{when } s(x_1, x_2) = 0 \text{ and } x_2(t) > 0 \\ -1 & \text{when } s(x_1, x_2) < 0 \\ -1 & \text{when } s(x_1, x_2) = 0 \text{ and } x_2(t) < 0 \end{cases} \qquad (9.53)$$
The optimal switching may be realized by a computer. It accepts the state point (x1, x2) and computes
the switching function given by Eqn. (9.52b). It then manipulates the on–off controller to produce the
optimal control components according to Eqn. (9.53).
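The control law (9.53) is readily simulated. A minimal sketch of the computer realization described above (the initial state is assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

def s(x1, x2):
    # Optimal switching function of Eqn (9.52b): s = x1 + 0.5*x2*|x2|
    return x1 + 0.5 * x2 * abs(x2)

def control(x1, x2):
    sv = s(x1, x2)
    if sv > 0:
        return +1.0
    if sv < 0:
        return -1.0
    return +1.0 if x2 > 0 else -1.0    # on the switching curve, Eqn (9.53)

def plant(t, x):
    u = control(x[0], x[1])
    return [x[1], -u]                  # double integrator, Eqn (9.47)

sol = solve_ivp(plant, (0.0, 8.0), [2.0, 0.0], max_step=0.001)
print(sol.y[0][-1], sol.y[1][-1])      # both ~ 0: deadbeat-like response
```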
Example 9.9
In this example, we discuss a suboptimal method of switching the on–off controller in Fig. 9.36. The
advantage of the method described below, is that implementation of the switching function is simple. The
cost paid is in terms of increase in settling time compared to the optimal solution.
Consider the following suboptimal switching function:
$$s(e, \dot{e}) = e + K_D\,\dot{e}$$
The system equations now become
$$\ddot{e} = -u; \quad u = \mathrm{sgn}(e + K_D\,\dot{e})$$
In the state variable form (x1 = e, x2 = ė),
$$\dot{x}_1 = x_2; \quad \dot{x}_2 = -\,\mathrm{sgn}(x_1 + K_D\,x_2)$$
The phase plane is divided into two regions by the switching line
x1 + KD x2 = 0 (9.54)
The trajectory equation for the region defined by x1 + KD x2 < 0 is (refer to Eqn. (9.49a))
$$x_1(t) = \tfrac{1}{2}x_2^2(t) + x_1(0) - \tfrac{1}{2}x_2^2(0)$$
and the trajectory equation for the region defined by x1 + KD x2 > 0 is (refer to Eqn. (9.49b))
$$x_1(t) = -\tfrac{1}{2}x_2^2(t) + x_1(0) + \tfrac{1}{2}x_2^2(0)$$
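A minimal simulation sketch of this suboptimal scheme (the value of KD is an assumption): the state reaches the switching line, then chatters along it into the origin, more slowly than with the optimal curve:

```python
import numpy as np
from scipy.integrate import solve_ivp

KD = 0.5   # assumed derivative gain of the switching line

def sys(t, x):
    u = np.sign(x[0] + KD * x[1])      # u = sgn(e + KD*e-dot)
    return [x[1], -u]

sol = solve_ivp(sys, (0.0, 25.0), [2.0, 0.0], max_step=0.001)
print(sol.y[0][-1], sol.y[1][-1])   # near (0, 0); settling is slower than
                                    # with the optimal curve (9.52b)
```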
$$\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t), \mathbf{u}(t), t) \qquad (9.55)$$
where
$$\mathbf{f}(\cdot) = \begin{bmatrix} f_1(\cdot) \\ f_2(\cdot) \\ \vdots \\ f_n(\cdot) \end{bmatrix}$$
is the n × 1 function vector.
Suppose that all the states of the system (9.55) settle to constant values (not necessarily zero values) for
a constant input vector u(t) = uc. The system is then said to be in an equilibrium state corresponding to
the input uc. The state trajectories converge to a point in state space, called the equilibrium point. At this
point, no states vary with time. Thus, we have the following definition of equilibrium point (equilibrium
state).
If for any constant input vector u(t) = uc, there exists a point x(t) = xe = constant in state space, such that
at this point x� (t) = 0 for all t, then this point is called an equilibrium point of the system corresponding
to the input uc. Applying this definition to the system (9.55), any equilibrium point must satisfy
f(xe, uc, t) = 0 for all t (9.56)
The number of solutions depends entirely upon the nature of f (·) and no general statement is possible.
Example 9.10
Consider the nonlinear system described by the state equations:
$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = -x_1 - x_1^2 - x_2$$
The equilibrium states of this system are given by the solutions of the following set of equations (refer
to Eqn. (9.56)):
$$\dot{x}_1^e = x_2^e = 0$$
$$\dot{x}_2^e = -x_1^e - (x_1^e)^2 - x_2^e = 0$$
From the first equation, x2e is equal to zero. From the second equation,
(x1e)2 + x1e = x1e(x1e + 1) = 0
which has the solutions x1e = 0 and x1e = – 1. Thus, there are two equilibrium states, given by
$$\mathbf{x}^{e1} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \mathbf{x}^{e2} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$$
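For systems where Eqn. (9.56) cannot be solved by inspection, the equilibrium points can also be found numerically. A minimal sketch for this example (the initial guesses are assumptions):

```python
import numpy as np
from scipy.optimize import fsolve

def f(x):
    # Right-hand side of the state equations of Example 9.10
    return [x[1], -x[0] - x[0] ** 2 - x[1]]

for guess in ([0.5, 0.5], [-1.5, 0.5]):
    print(np.round(fsolve(f, guess), 6))   # -> [0, 0] and [-1, 0]
```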
In the stability analysis of a system, we are usually concerned with the following two notions of stability:
(i) when a relaxed system (x(t0) = 0) is excited by a bounded input, the output must be bounded; and
(ii) in an unforced system (u = 0) with arbitrary initial conditions, the system state must tend towards
the equilibrium point in state space.
We have seen earlier in Chapter 5 that the two notions of stability defined above are essentially equivalent
for linear time-invariant systems.
Unfortunately in nonlinear systems, there is no definite correspondence between the two notions. Most
of the important results obtained, thus far, concern the stability of nonlinear autonomous² systems:
$$\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t)); \quad \mathbf{x}(t_0) \triangleq \mathbf{x}^0 \qquad (9.57)$$
in the sense of second notion above.
² An unforced (i.e., u = 0) and time-invariant system is called an autonomous system.
It may be noted that even for this class of systems, the concept of stability is not clear cut. The linear
autonomous systems have only one equilibrium state (the origin of the state space), and their behavior
about the equilibrium state completely determines the qualitative behavior in the entire state space. In
nonlinear systems, on the other hand, system behavior for small deviations about the equilibrium point
may be different from that for large deviations. Therefore, local stability does not imply stability in the
overall state space, and the two concepts should be considered separately.
Secondly, the set of nonlinear equations (refer to Eqns (9.56)–(9.57)),
f(xe) = 0 (9.58)
may result in a number of solutions (equilibrium points). Due to the possible existence of multiple
equilibrium states, the system trajectories may move away from one equilibrium state to the other as time
progresses. Thus, it appears that in the case of nonlinear systems, it is simpler to speak of system stability
relative to the equilibrium state rather than using the general term ‘stability of a system’.
We shall confine our attention to nonlinear autonomous systems described by state equation of the form
$$\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t)); \quad \mathbf{f}(\mathbf{0}) = \mathbf{0}; \quad \mathbf{x}(0) \triangleq \mathbf{x}^0 \qquad (9.59)$$
Note that the origin of the state space has been taken as the equilibrium state of the system. There is no
loss in generality in this assumption, since any nonzero equilibrium state can be shifted to the origin by
appropriate transformation. Further, we have taken t0 = 0 in Eqn. (9.59), which is a convenient choice
for time-invariant systems.
For nonlinear autonomous systems, local stability may be investigated through linearization in the
neighborhood of the equilibrium point. The validity of determining the stability of the unperturbed
solution near the equilibrium points from the linearized equations was developed independently by
Poincaré and Lyapunov in 1892. Lyapunov designated this as the first method. This stability determination
is applicable only in a small region near the equilibrium point.
The region of validity of local stability is generally not known. In some cases, the region may be too small
to be of any use practically; while in others the region may be much larger than the one assumed by the
designer—giving rise to systems that are too conservatively designed. We, therefore, need information
about the domain of stability. The ‘second method of Lyapunov’ (also called the ‘direct method of
Lyapunov’) is used to determine stability in-the-large. We first present direct method of Lyapunov; the
linearization method is described in Section 9.14.
The concept of stability formulated by Russian mathematician A.M. Lyapunov is concerned with the
following question:
If a system with zero input is perturbed from the equilibrium point xe at t = 0, will the state x(t) return to
xe, remain ‘close’ to xe, or diverge from xe?
Lyapunov stability analysis is, thus, concerned with the boundedness of the free (unforced) response of
a system. The free response of a system is said to be stable in the sense of Lyapunov at the equilibrium
point xe if, for every initial state x(t0) which is sufficiently close to xe, x(t) remains near xe for all t. It is
asymptotically stable at xe if x(t), in fact, approaches xe as t → ∞.
In the following, we give mathematically precise definitions of different types of stability with respect to
the system described by Eqn. (9.59).
The system described by Eqn. (9.59) is stable in the sense of Lyapunov at the origin if, for every real
number ε > 0, there exists a real number δ(ε) > 0 such that ‖x(0)‖ < δ results in ‖x(t)‖ < ε for all t ≥ 0.
This definition uses the concept of the vector norm. The Euclidean norm for a vector with n components
x1, x2, . . ., xn is (refer to Eqn. (5.6a))
\(\|\mathbf{x}\| = (x_1^2 + x_2^2 + \cdots + x_n^2)^{1/2}\)
‖x‖ ≤ R defines a hyper-spherical region S(R) of radius R, surrounding the equilibrium point xe = 0.
In terms of the Euclidean norm, the above definition of stability implies that for any S(ε) that we may
designate, the designer must produce S(δ) so that the system state, initially in S(δ), will never leave S(ε).
This is illustrated in Fig. 9.40a.
Fig. 9.40 [(a) Stability in the sense of Lyapunov: a trajectory x(t) starting at x(0) inside S(δ) never leaves S(ε); (b) the trajectory starting at x(0) inside S(δ) converges to the origin (asymptotic stability)]
Note that this definition of stability permits the existence of continuous oscillation about the equilibrium
point. The state-space trajectory for such an oscillation is a closed path. The amplitude and frequency of
the oscillation may influence whether it represents acceptable performance.
Example 9.11
Consider a linear oscillator described by the differential equation
$$\ddot{y}(t) + \omega^2 y(t) = 0$$
where ω is the frequency of oscillations.
Define the state variables as
x1(t) = y(t), x2(t) = ẏ(t)
This gives the state equations
$$\dot{x}_1(t) = x_2(t); \quad \dot{x}_2(t) = -\omega^2 x_1(t)$$
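A simulation makes the distinction concrete: the trajectories of this oscillator are closed curves, so the origin is stable in the sense of Lyapunov but not asymptotically stable. A minimal sketch (ω assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

w = 2.0   # assumed oscillation frequency

sol = solve_ivp(lambda t, x: [x[1], -w ** 2 * x[0]], (0.0, 20.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12)

# The "energy" w^2*x1^2 + x2^2 stays constant: closed elliptical orbits
E = w ** 2 * sol.y[0] ** 2 + sol.y[1] ** 2
print(E.min(), E.max())   # both ~ 4.0
```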
$$V(x_1, x_2) = \tfrac{1}{2}x_2^2 + \tfrac{1}{2}Kx_1^2 \qquad (9.61)$$
As per Eqn. (9.62), the rate of change of energy is negative and, therefore, the system energy V(x1, x2)
continually decreases along the trajectory x(t), t > 0. There is only one exception: when the representative
point x(t) of the trajectory reaches the x2 = 0 points in the state plane, the rate of change of energy becomes
zero. However, as seen from Eqns (9.60), ẋ2 = −Kx1 at the points where x2 = 0. The representative
point x(t), therefore, cannot stay at the points in the state plane where x2 = 0 (except at the origin). It
immediately moves to the points at which the rate of change of energy is negative and the system energy,
therefore, continually decreases from its initial value V(x1(0), x2(0)) along the trajectory x(t), t > 0, till it
reaches a value V = 0 at the equilibrium point xe = 0.
A visual analogy may be obtained by considering the surface
V(x₁, x₂) = ½x₂² + ½Kx₁²   (9.63)
This is a paraboloid surface (a solid generated by the rotation of a parabola about its axis of symmetry), as shown in Fig. 9.43. The value V(x₁, x₂) = kᵢ (a constant) is represented by the intersection of the surface
V(x₁, x₂) and the plane z = kᵢ. The projection of this intersection on the (x₁, x₂)-plane is a closed curve, an oval, around the origin. There is a family of such closed curves in the (x₁, x₂)-plane for different values of kᵢ. The closed curve corresponding to V(x₁, x₂) = k₁ lies entirely inside the closed curve corresponding to V(x₁, x₂) = k₂ if k₁ < k₂. The value V(x₁, x₂) = 0 is the point at the origin; it is the innermost member of the family of closed curves representing different levels on the paraboloid.
If one plots a state-plane trajectory starting from the point (x₁⁰, x₂⁰), the representative point x(t) crosses the ovals for successively smaller values of V(x₁, x₂), and moves towards the point corresponding to V(x₁, x₂) = 0, which is the equilibrium point. Figure 9.43 shows a typical trajectory.
Fig. 9.43 (a) The surface z = V(x) with level curves V = k₁, V = k₂, V = k₃ (k₁ < k₂ < k₃); (b) a typical trajectory starting at (x₁⁰, x₂⁰), crossing the level ovals at successive instants t₁ < t₂ < t₃
Note also that V(x) given by Eqn. (9.63) is radially unbounded³, i.e., V(x) → ∞ as ||x|| → ∞. The ovals on the (x₁, x₂)-plane extend over the entire state plane and, therefore, for any initial state x⁰ in the entire state plane, the system energy continually decreases from the value V(x⁰) to zero.
³ Use of the norm definition given by Eqn. (5.6b) immediately proves this result.
In Section 5.2, we introduced the concept of sign definiteness of scalar functions. Let us examine here the scalar function V(x₁, x₂, ..., xₙ) ≜ V(x) for which V(0) = 0 and the function is continuous in a certain region surrounding the origin in state space. Due to the manner in which these V-functions are used later, we define the sign definiteness with respect to a region around the origin represented as ||x|| ≤ K (a positive constant), where ||x|| is the norm of x.
P = [10  1  −2;  1  4  −1;  −2  −1  1]
As per Sylvester’s test, the necessary and sufficient condition for P to be positive definite is that all the
successive principal minors of P be positive.
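The test is easily mechanized. A short sketch (assuming Python with NumPy; not part of the text) evaluates the three leading principal minors of the matrix P above:

import numpy as np

P = np.array([[10.0, 1.0, -2.0],
              [ 1.0, 4.0, -1.0],
              [-2.0, -1.0, 1.0]])

# leading principal minors of orders 1, 2, 3
minors = [np.linalg.det(P[:k, :k]) for k in (1, 2, 3)]
print("minors:", [round(m, 2) for m in minors])     # [10.0, 39.0, 17.0]
print("P positive definite:", all(m > 0 for m in minors))

All three minors are positive, so P is positive definite.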
Theorem 9.1 For the autonomous system (9.59), sufficient conditions of stability are as follows:
Suppose that there exists a scalar function V(x) which, for some real number ε > 0, satisfies the following properties for all x in the region ||x|| ≤ ε:
(i) V(x) > 0; x ≠ 0
(ii) V(0) = 0
(i.e., V(x) is a positive definite function)
(iii) V(x) has continuous partial derivatives with respect to all components of x.
Then the equilibrium state xe = 0 of the system (9.59) is
(iva) asymptotically stable if V̇(x) < 0, x ≠ 0, i.e., V̇(x) is a negative definite function; and
(ivb) asymptotically stable in-the-large if V̇(x) < 0, x ≠ 0, and, in addition, V(x) → ∞ as ||x|| → ∞.
Example 9.12
Consider a nonlinear system described by the equations
ẋ₁ = x₂ − x₁(x₁² + x₂²)   (9.64)
ẋ₂ = −x₁ − x₂(x₁² + x₂²)
Clearly, the origin is the only equilibrium state.
Let us choose the following positive definite scalar function as a possible Lyapunov function:
V(x) = x₁² + x₂²   (9.65)
The time derivative of V(x) along any trajectory is given by
V̇(x) = dV(x₁, x₂)/dt = (∂V/∂x₁)(dx₁/dt) + (∂V/∂x₂)(dx₂/dt) = 2x₁ẋ₁ + 2x₂ẋ₂ = −2(x₁² + x₂²)²   (9.66)
which is negative definite. This shows that V(x) is continually decreasing along any trajectory; hence
V(x) is a Lyapunov function. By Theorem 9.1, the equilibrium state (at the origin) of the system (9.64)
is asymptotically stable.
Further, V(x) → ∞ as ||x|| → ∞, i.e., V(x) becomes infinite with infinite deviation from the equilibrium state. Therefore, as per condition (ivb) of Theorem 9.1, the equilibrium state of the system (9.64) is asymptotically stable in-the-large.
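This conclusion can be checked numerically. The fragment below (a sketch assuming SciPy; the initial state and time horizon are arbitrary choices) integrates Eqns (9.64) and confirms that V(x) = x₁² + x₂² decreases monotonically toward zero:

import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    r2 = x[0]**2 + x[1]**2
    return [x[1] - x[0]*r2, -x[0] - x[1]*r2]   # Eqns (9.64)

sol = solve_ivp(f, (0.0, 50.0), [3.0, -2.0], rtol=1e-9, atol=1e-12)
V = sol.y[0]**2 + sol.y[1]**2                  # Eqn. (9.65)
print("V decreasing along trajectory:", bool(np.all(np.diff(V) <= 1e-9)))
print(f"V at t = 50: {V[-1]:.3e}")             # V(t) = V(0)/(1 + 2V(0)t) -> 0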
Although Theorem 9.1 is a basic theorem of Lyapunov stability analysis, it is somewhat restrictive because V̇(x) must be negative definite. This requirement can be relaxed to V̇(x) ≤ 0 (a negative semidefinite V̇(x)) under proper conditions. This relaxed requirement is sufficient if it can be shown that no trajectory can stay forever at the points, or on the line other than the origin, at which V̇(x) = 0. This is the case for the system of Fig. 9.42, as described at the beginning of this section.
If, however, there exists a positive definite function V(x) such that V̇(x) is identically zero along a trajectory, the system will remain in that trajectory and will not approach the origin. The equilibrium state at the origin, in this case, is said to be stable in the sense of Lyapunov.
Theorem 9.2 For the autonomous system (9.59), sufficient conditions of stability are as follows.
Suppose that there exists a scalar function V(x) which, for some real number ε > 0, satisfies the following properties for all x in the region ||x|| ≤ ε:
(i) V(x) > 0; x ≠ 0
(ii) V(0) = 0
(i.e., V(x) is a positive definite function)
(iii) V(x) has continuous partial derivatives with respect to all components of x.
Then the equilibrium state xe = 0 of the system (9.59) is
(iva) asymptotically stable if V̇(x) < 0, x ≠ 0, i.e., V̇(x) is a negative definite function; or if V̇(x) ≤ 0 (i.e., V̇(x) is negative semidefinite) and no trajectory can stay forever at the points, or on the line other than the origin, at which V̇(x) = 0;
(ivb) asymptotically stable in-the-large if conditions (iva) are satisfied, and in addition, V(x) → ∞ as ||x|| → ∞; and
(ivc) stable in the sense of Lyapunov if V̇(x) is identically zero along a trajectory.
Example 9.13
Consider the linear feedback system shown in Fig. 9.44, with r(t) = 0 and G(s) = K/(s² + a²). We know that the closed-loop system will exhibit sustained oscillations.
Fig. 9.44 Linear feedback system
The differential equation for the error signal is
ë + a²e = Ky = −Ke
Taking e and ė as state variables x₁ and x₂, respectively, we obtain the following state equations:
ẋ₁ = x₂
ẋ₂ = −(K + a²)x₁   (9.67)
Let us choose the following scalar positive definite function as a possible Lyapunov function:
V(x) = x₁² + x₂²   (9.68)
Then V̇(x) becomes
V̇(x) = 2x₁ẋ₁ + 2x₂ẋ₂ = 2[1 − (K + a²)]x₁x₂
V̇(x) is indefinite. This implies that V(x), given by Eqn. (9.68), is not a Lyapunov function, and stability cannot be determined by its use (the system is known to be stable in the sense of Lyapunov as per the stability definition given in Section 9.11).
We now test
V(x) = p₁x₁² + p₂x₂²;  p₁ > 0, p₂ > 0
for Lyapunov properties. Conditions (i)–(iii) of Theorem 9.2 are obviously satisfied.
V̇(x) = 2p₁x₁x₂ − 2p₂(K + a²)x₁x₂
If we set p₁ = p₂(K + a²), V̇(x) ≡ 0 and, as per Theorem 9.2, the equilibrium state of the system (9.67) is stable in the sense of Lyapunov.
Example 9.14
Reconsider the system of Fig. 9.44 with
G(s) = K/[s(s + a)]
If the reference variable r(t) = 0, then the differential equation for the actuating error will be
ë + aė + Ke = 0
Taking e and ė as state variables x₁ and x₂ respectively, we obtain the following state equations:
ẋ₁ = x₂
ẋ₂ = −Kx₁ − ax₂   (9.69)
V̇(x) can vanish identically only if x₂ = 0 and ẋ₂ = 0. Since on the x₁-axis, ẋ₂ = −Kx₁ ≠ 0, the x₁-axis is not a trajectory, and the equilibrium state
at the origin of the system (9.69) is asymptotically stable.
Further, since V(x) → ∞ as ||x|| → ∞, the equilibrium state is asymptotically stable in-the-large.
This result, obtained by Lyapunov’s direct method, is readily recognized as being correct either from the
Routh stability criterion or from the root locus.
Example 9.15
Consider the system described by the state equations
ẋ₁ = x₂
ẋ₂ = −x₁ − x₂
Let us choose,
V(x) = x₁² + x₂²
which is a positive definite function; V(x) → ∞ as ||x|| → ∞.
This gives
V̇(x) = 2x₁ẋ₁ + 2x₂ẋ₂ = −2x₂²
which is negative semidefinite. As per the procedure described in the earlier example, it can be established that V̇(x) vanishes identically only at the origin. Hence, by Theorem 9.2, the equilibrium state at the origin is asymptotically stable in-the-large.
To show that a different choice of a Lyapunov function yields the same stability information, let us
choose the following positive definite function as another possible Lyapunov function:
V(x) = ½[(x₁ + x₂)² + 2x₁² + x₂²]
Instability
It may be noted that instability in a nonlinear system can be established by direct recourse to the instability
theorem of the direct method. The basic instability theorem is presented below.
For the autonomous system (9.59), sufficient conditions for instability are as follows.
Suppose that there exists a scalar function W(x) which, for some real number ε > 0, satisfies the following properties for all x in the region ||x|| ≤ ε:
(i) W(x) > 0; x ≠ 0;
(ii) W(0) = 0; and
(iii) W(x) has continuous partial derivatives with respect to all components of x.
Then the equilibrium state xe = 0 of the system (9.59) is unstable if Ẇ(x) > 0, x ≠ 0, i.e., Ẇ(x) is a positive definite function.
Note that it requires as much ingenuity to devise a suitable W function, as to devise a Lyapunov function
V. In the stability analysis of nonlinear systems, it is valuable to establish conditions for which the
system is unstable. Then the regions of asymptotic stability need not be sought for such conditions, and
the analyst is saved from this fruitless effort.
Example 9.16
Consider a nonlinear system governed by the equations:
ẋ₁ = −x₁ + 2x₁²x₂
ẋ₂ = −x₂
Fig. 9.45 Stability regions for the nonlinear system of Example 9.16
where ḟ(x) = (∂f(x)/∂x)(dx/dt) = J(x)f(x);
J(x) = [ ∂f₁/∂x₁  ∂f₁/∂x₂  ⋯  ∂f₁/∂xₙ
         ∂f₂/∂x₁  ∂f₂/∂x₂  ⋯  ∂f₂/∂xₙ
            ⋮         ⋮              ⋮
         ∂fₙ/∂x₁  ∂fₙ/∂x₂  ⋯  ∂fₙ/∂xₙ ]   (9.73b)
Example 9.17
As an illustration of the Krasovskii method, consider the nonlinear system shown in Fig. 9.46, where the nonlinear element is described as
u = g(e) = e³
Fig. 9.46 A nonlinear system: nonlinear element g(·) in cascade with the plant K/[s(s + 1)], unity feedback, r = 0
x₁² < 1/(3K)  or  −1/√(3K) < x₁ < 1/√(3K)
This region of asymptotic stability is illustrated in Fig. 9.47.
is always symmetric (∂²V/∂xᵢ∂xⱼ = ∂²V/∂xⱼ∂xᵢ).
Under the constraints (9.79), we start by choosing g(x) such that (g(x))ᵀf(x) is negative definite. The
function V(x) is then computed from the integral
V(x) = ∫₀ˣ gᵀ(y) dy = ∫₀ˣ Σᵢ₌₁ⁿ gᵢ(y₁, y₂, ..., yₙ) dyᵢ   (9.80a)
The integration is taken over any path joining the origin to x (The line integral of a gradient vector is
independent of the path [125]). Usually, this is done along the axes; that is
V(x) = ∫₀^x₁ g₁(y₁, 0, 0, ..., 0) dy₁ + ∫₀^x₂ g₂(x₁, y₂, 0, ..., 0) dy₂ + ⋯ + ∫₀^xₙ gₙ(x₁, x₂, ..., x_{n−1}, yₙ) dyₙ   (9.80b)
By leaving some parameters of g(x) undetermined, one would try to choose them to ensure that V(x) is
positive definite.
Example 9.18
Let us use the variable gradient method to find a Lyapunov function for the nonlinear system
ẋ₁ = −x₁   (9.81)
ẋ₂ = −x₂ + x₁x₂²
We assume the following form for the gradient of the undetermined Lyapunov function.
g(x) = [g₁(x); g₂(x)] = [a₁₁x₁ + a₁₂x₂; a₂₁x₁ + a₂₂x₂];  aᵢⱼ may be functions of x   (9.82)
The function has to satisfy the constraints (9.79):
∂g₁/∂x₂ = ∂g₂/∂x₁,  i.e.,  a₁₂ + x₂(∂a₁₂/∂x₂) = a₂₁ + x₁(∂a₂₁/∂x₁)
If the coefficients are chosen to be
a₁₁ = a₂₂ = 1;  a₁₂ = a₂₁ = 0
then
g₁(x) = x₁ and g₂(x) = x₂
and
V̇(x) = (g(x))ᵀf(x) = −x₁² − x₂²(1 − x₁x₂)   (9.83)
Thus, if (1 − x₁x₂) > 0, then V̇ is negative definite. The function V(x) can be computed as
V(x) = ∫₀^x₁ y₁ dy₁ + ∫₀^x₂ y₂ dy₂ = (x₁² + x₂²)/2   (9.84)
This is a positive definite function and, therefore, the asymptotic stability of the origin in the region 1 > x₁x₂ is guaranteed.
Note that (9.84) is not the only Lyapunov function obtainable by the variable gradient method. A different choice of the aᵢⱼ's may lead to another Lyapunov function for the system.
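The algebra of the variable gradient method is easy to verify symbolically. The small sketch below (assuming the SymPy library; not part of the example) reproduces V̇ of Eqn. (9.83) and the line integral of Eqn. (9.80b):

import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
f = sp.Matrix([-x1, -x2 + x1*x2**2])   # system (9.81)
g = sp.Matrix([x1, x2])                # gradient with a11 = a22 = 1, a12 = a21 = 0

Vdot = sp.expand((g.T * f)[0])
print("Vdot =", Vdot)   # -x1**2 - x2**2 + x1*x2**3, i.e. -x1**2 - x2**2*(1 - x1*x2)

# line integral along the axes, Eqn. (9.80b)
V = sp.integrate(y1, (y1, 0, x1)) + sp.integrate(y2, (y2, 0, x2))
print("V =", V)                        # x1**2/2 + x2**2/2, i.e. Eqn. (9.84)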
One simply cannot infer any stability property of a nonlinear system from its linear approximation when the linearized model has roots with zero real parts.
Today, Lyapunov’s linearization method has come to represent the theoretical justification of linear
control, while Lyapunov’s direct method has become the most important tool for nonlinear system
analysis and design. Lyapunov’s linearization method shows that linear control design is a matter of consistency: one must design a controller such that the system remains in its ‘linear range’. It also stresses a major limitation of linear design: how large is the linear range from stability considerations?
Such questions motivate a deeper approach to nonlinear system analysis.
REVIEW EXAMPLES
y = Kx;    0 ≤ ωt < α
  = KS;    α ≤ ωt < (π − α)
  = Kx;    (π − α) ≤ ωt < (π + α)
  = −KS;   (π + α) ≤ ωt < (2π − α)
  = Kx;    (2π − α) ≤ ωt ≤ 2π
where α = sin⁻¹(S/X)
This periodic function has odd symmetry:
y(ωt) = −y(−ωt)
Due to the symmetry of y (refer to Fig. 9.48), the coefficient B₁ can be calculated as follows:
B₁ = (4/π) ∫₀^{π/2} y sin θ dθ = (4K/π) [ ∫₀^α X sin²θ dθ + ∫_α^{π/2} S sin θ dθ ]
Fig. 9.48 Saturating-amplifier characteristic (slope K, output limits ±KS at |x| = S) and the output waveform y(ωt) for the sinusoidal input x = X sin ωt
= (4K/π) [ (X/2)(α − sin α cos α) + S cos α ]
B₁/X = (2K/π) [ sin⁻¹(S/X) − (S/X)√(1 − (S/X)²) + (2S/X)√(1 − (S/X)²) ]
Therefore,
N(X) = (2K/π) [ sin⁻¹(S/X) + (S/X)√(1 − (S/X)²) ];  X ≥ S
     = K;  X < S     (9.85)
The describing function given by Eqn. (9.85), and that of nonlinearity 4 in Table 9.2, have a common term of the form
Nc(z) = (2/π) [ sin⁻¹(1/z) + (1/z)√(1 − (1/z)²) ]   (9.86)
N(X) = K Nc(X/S)   (9.87)
Fig. 9.49 (a) Nonlinear system: saturating amplifier (S = 1) and plant 1/[s(1 + 2s)(1 + s)] in a unity-feedback loop; (b) KG(jω) plot and −1/N(E) locus
Solution It is convenient to regard the amplifier as having unit gain, and the gain K as attached to the linear part. From Eqn. (9.87), we obtain, for S = 1 and K = 1, N(E) = Nc(E); the function Nc(E) is listed
in Table 9.3.
The locus of −1/N(E) thus starts from (−1 + j0) and travels along the negative real axis for increasing E, as shown in Fig. 9.49b. Now, for the equation
KG(jω) = −1/N(X)
to be satisfied, the phase of KG(jω) must be −180°. This gives
(2ω + ω)/(1 − 2ω²) = tan 90° = ∞,  or  ω = 1/√2 rad/sec
The largest value of K for stability is obtained when KG(jω) passes through (−1 + j0), i.e.,
|KG(jω)|_{ω = 1/√2} = K / [(1/√2)√3√(3/2)] = 1,  or  K = 3/2
For K = 3, the KG(jω) plot intersects the −1/N(X) locus, resulting in a limit cycle at (ω₁, E₁), where ω₁ = 1/√2, while E₁ is obtained from the relation
|−1/N(E₁)| = |3G(jω₁)| = 2  or  N(E₁) = 0.5
Applying the stability test for the limit cycle reveals that point A in Fig. 9.49b corresponds to a stable
limit cycle.
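The numbers in this example are easily reproduced. The sketch below (assuming Python with SciPy; the root-search brackets are arbitrary choices) computes the phase-crossover frequency of G(jω), the critical gain, and the limit-cycle amplitude E₁ for K = 3 from |N(E₁)| = 0.5:

import numpy as np
from scipy.optimize import brentq

def Nc(z):       # Eqn. (9.86), z = X/S >= 1
    return (2/np.pi)*(np.arcsin(1/z) + (1/z)*np.sqrt(1 - 1/z**2))

def gain(w):     # |G(jw)| for G(s) = 1/[s(1 + 2s)(1 + s)]
    return 1/(w*np.sqrt(1 + 4*w**2)*np.sqrt(1 + w**2))

def phase(w):    # unwrapped phase of G(jw)
    return -(np.pi/2 + np.arctan(2*w) + np.arctan(w))

w1 = brentq(lambda w: phase(w) + np.pi, 0.1, 5.0)     # -180 deg crossing
print(f"w1 = {w1:.4f} rad/s;  critical K = {1/gain(w1):.3f}")   # 0.7071, 1.5
E1 = brentq(lambda z: Nc(z) - 0.5, 1.0 + 1e-12, 50.0) # K = 3: N(E1) = 0.5
print(f"limit-cycle amplitude E1 = {E1:.3f} at w1 = {w1:.4f} rad/s")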
Fig. 9.50 System with deadzone element (slope = 2 outside the zone |e| ≤ 1) and plant 1/[s(s + 1)]
Since m is the slope of phase trajectories, all trajectories in Region I have a slope of –1. Typical trajectories
are shown in Fig. 9.51.
Region II (defined by x₁ > 1) The isocline equation is
m = [−x₂ − 2(x₁ − 1)]/x₂
or
x₂ = −2(x₁ − 1)/(m + 1)
The isoclines are straight lines intersecting the x₁-axis at x₁ = 1, with slope equal to −2/(m + 1). Some of these isoclines are shown in Fig. 9.51.
Fig. 9.51 Isoclines and typical trajectories for the system of Fig. 9.50
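A quick numeric check (a sketch, not part of the text) confirms the isocline property in Region II: at every point of the isocline x₂ = −2(x₁ − 1)/(m + 1), the trajectory slope dx₂/dx₁ = ẋ₂/ẋ₁ equals m:

import numpy as np

x1 = np.linspace(1.1, 3.0, 4)
for m in (-3.0, -2.0, 0.0, 1.0):
    x2 = -2*(x1 - 1)/(m + 1)            # isocline for slope m
    slope = (-x2 - 2*(x1 - 1))/x2       # actual trajectory slope dx2/dx1
    print(f"m = {m:+.0f}: max deviation from m = {np.abs(slope - m).max():.1e}")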
In the following, we apply the Krasovskii method to determine sufficient conditions for asymptotic stability in the vicinity of the equilibrium point.
A candidate for a Lyapunov function is
V(x) = fᵀ(x)Pf(x)
Selecting P = I may lead to a successful determination of the conditions for asymptotic stability in the
vicinity of the equilibrium point.
With this choice of Lyapunov function, we have (refer to Eqns (9.74))
V̇(x) = fᵀ(x)[Jᵀ(x) + J(x)]f(x)
where
J(x) = [∂f₁/∂x₁  ∂f₁/∂x₂;  ∂f₂/∂x₁  ∂f₂/∂x₂] = [−1  −2x₂;  0  −1]
The matrix
Q = −[Jᵀ(x) + J(x)] = [2  2x₂;  2x₂  2]
Using Sylvester’s criterion (Section 5.2), we find that the matrix Q is positive definite if
4 − 4x₂² > 0  or  |x₂| < 1
The shaded region in Fig. 9.52 is the region of asymptotic stability. It is, however, not the largest region. Another choice of Lyapunov function for the system under consideration may lead to a larger region of asymptotic stability in the vicinity of the equilibrium point.
Fig. 9.52 Region of asymptotic stability: |x₂| < 1
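The definiteness test is easily confirmed numerically. The following sketch (assuming NumPy; not part of the text) samples the smallest eigenvalue of Q for a few values of x₂ and reproduces the boundary |x₂| = 1:

import numpy as np

def Q(x2):
    return np.array([[2.0, 2.0*x2],
                     [2.0*x2, 2.0]])

for x2 in (0.0, 0.5, 0.99, 1.01):
    eigmin = np.linalg.eigvalsh(Q(x2)).min()    # eigenvalues are 2 +/- 2*x2
    print(f"x2 = {x2:5.2f}: min eigenvalue = {eigmin:+.3f}",
          "(positive definite)" if eigmin > 0 else "(not positive definite)")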
PROBLEMS
9.1 For a sinusoidal input x = X sin ωt, find the output waveforms for each of the nonlinearities listed
in Table 9.2. By Fourier-series analysis of the output waveforms, derive the describing function
for each entry of the table.
9.2 Consider the system shown in Fig. P9.2. Using the describing-function analysis, show that a stable limit cycle exists for all values of K > 0. Find the amplitude and frequency of the limit cycle when K = 4, and plot y(t) versus t.
Fig. P9.2 System with on–off element (output levels ±M) and plant K/[s(1 + s)²]
9.3 Consider the system shown in Fig. P9.3. Use the describing-function technique
to investigate the possibility of limit cycles in this system. If a stable limit cycle is predicted,
determine its amplitude and frequency.
Fig. P9.3 System with nonlinear element (parameters 1 and 0.1) and plant 5/[s(0.1s + 1)²]
9.4 Using the describing-function analysis, prove that no limit cycle exists in the system shown in Fig.
P9.4. Find the range of values of the deadzone of the on–off controller for which limit cycling will
be predicted.
Fig. P9.4 System with on–off controller (output levels ±1, deadzone ±0.2) and plant 5/[(s + 1)(0.1s + 1)²]
9.5 Consider the system shown in Fig. P9.5. Using the describing-function technique, show that a
stable limit cycle cannot exist in this system for any K > 0.
Fig. P9.5 System with saturating element (slope = 1) and plant K/[s(1 + s)²]
9.6 Consider the system shown in Fig. P9.6. Using the describing-function analysis, investigate the
possibility of a limit cycle in the system. If a limit cycle is predicted, determine its amplitude and
frequency, and investigate its stability.
Fig. P9.6
9.7 An instrument servo system used for positioning a load, may be adequately represented by the
block diagram in Fig. P9.7a. The backlash characteristic is shown in Fig. P9.7b. Show that the
system is stable for K = 1. If the value of K is now raised to 2, show that limit cycles exist.
Investigate the stability of these limit cycles. Determine the amplitude and frequency of the stable
limit cycle.
Given:
Fig. P9.7 (a) System with backlash N(X) and plant K/[s(s + 1)(0.5s + 1)], H = 1; (b) backlash characteristic (slope = 1)
9.8 Determine the kind of singularity for each of the following differential equations. Also locate the singular points on the phase plane.
(a) ÿ + 5ẏ + 6y = 6
(b) ÿ − 8ẏ + 17y = 34
(c) ÿ + 3ẏ + 2y = 0
9.9 An undamped pendulum is described by the differential equation
ml²θ̈ + mgl sin θ = 0
where mg is the weight of the pendulum, and l is its length. Show that this nonlinear system has two equilibrium points: θ = 0, and θ = π. Develop linear state models using Taylor series approximation around the two equilibrium points, and therefrom identify the kind of singularity at each point.
9.10 A linear second-order servo is described by the equation
ÿ + 2ζωₙẏ + ωₙ²y = ωₙ²
Fig. P9.11 System with a nonlinear element (parameter 0.1) and plant 7/[s(s + 1)]
9.12 Consider the block diagram of a system, shown in Fig. P9.12.
(a) Derive the state variable model with x1 = e and x2 = ė.
(b) Write equations of the isoclines on the x1 versus x2 plane.
(c) Given: x1(0) = 1, x2(0) = 0, obtain a trajectory on the x1 versus x2 plane. Show singular points
(if any) and some isoclines.
Fig. P9.12 System with saturating element (slope = 1, limits ±0.1) and plant 7/[s(s + 1)]
9.13 Consider the block diagram of a system, shown in Fig. P9.13.
(a) Derive the state variable model with x1 = e and x2 = ė.
(b) Write equations of the isoclines on the x1 versus x2 plane.
(c) Given: x1(0) = 1, x2(0) = 0, obtain a trajectory on the x1 versus x2 plane. Show singular points
(if any) and some isoclines.
Fig. P9.13 System with a nonlinear element (parameters ±0.1) and plant 7/[s(s + 1)]
9.14 Consider the block diagram of a system, shown in Fig. P9.14.
(a) Derive the state variable model with x1 = e and x2 = ė.
(b) Write equations of the isoclines on the x1 versus x2 plane.
(c) Given: x1(0) = 1, x2(0) = 0, obtain a trajectory on the x1 versus x2 plane. Show singular points
(if any) and some isoclines.
Fig. P9.14 System with a nonlinear element (parameter 0.1) and plant 7/[s(s + 1)]
9.15 Consider the block diagram of a system, shown in Fig. P9.15.
(a) Derive the state variable model with x1 = e and x2 = ė.
(b) Write equations of the isoclines on the x1 versus x2 plane.
(c) Given: x1(0) = 1, x2(0) = 0, obtain a trajectory on the x1 versus x2 plane. Show singular points
(if any) and some isoclines.
Fig. P9.15 System with two cascaded integrators 1/s and inner feedback 0.1 sgn ẏ
9.16 The position control system shown in Fig. P9.16 has Coulomb friction Tc sgn(θ̇) at the output shaft. Prove that the phase trajectories on the (e, ė/ωₙ)-plane are semicircles, with the center on the horizontal axis at +Tc/K for ė < 0, and −Tc/K for ė > 0.
Fig. P9.16 Position control system: amplifier KA, motor torque T = K1v, inertia J, Coulomb friction Tc
Examine the phase trajectory corresponding to θR = unit step, θ(0) = θ̇(0) = 0; and find the value of the steady-state error. What is the largest possible steady-state error which the system in Fig. P9.16 can possess?
Given:
ωₙ = √(K/J) = 1.2 rad/sec;  Tc/K = 0.3 rad
where K = KAK1.
9.17 Consider the nonlinear system with deadzone shown in Fig. P9.17. Using the method of isoclines,
sketch some typical phase trajectories with and without deadzone, and comment upon the effect
of deadzone on transient and steady-state behavior of the system.
Fig. P9.17 System with deadzone element (zone ±1, slope = 2 outside) and plant 1/[s(s + 1)]
9.18 Consider the system shown in Fig. P9.18 in which the nonlinear element is a power amplifier
with gain equal to 1.0, which saturates for error magnitudes greater than 0.4. Given the initial condition: e(0) = 1.6, ė(0) = 0, plot phase trajectories with and without saturation, and comment
upon the effect of saturation on the transient behavior of the system. Use the method of isoclines
for construction of phase trajectories.
Fig. P9.18 System with saturating amplifier (unity gain, saturation at ±0.4) and plant 1/[s(s + 1)]
9.19 (a) A plant with model G(s) = 1/s² is placed in a feedback configuration as in Fig. P9.19a. Construct a trajectory on the (e, ė) plane with r = 2 and y(0) = ẏ(0) = 0. Show that the system response is a limit cycle. What is the switching line of this variable structure system?
(b) To the feedback control system of Fig. P9.19a, we add a derivative feedback with gain KD, as in Fig. P9.19b. Show that the limit cycle gets eliminated by the introduction of the derivative-control term. What is the switching line of this variable structure system?
(c) Show that if KD is large, the trajectory may slide along the switching line towards the origin.
Fig. P9.19 (a) On–off control of the plant 1/s²; (b) the same system with derivative feedback sKD added
9.20 A position control system comprises a dc servo motor, potentiometer error detector, an on–off controller, and a tachogenerator coupled to the motor shaft.
The following equations describe the system:
Reaction torque = θ̈ + 0.5θ̇
Drive torque = 2 sgn(e + 0.5ė);  e = θR − θ
(a) Make a sketch of the system showing how the hardware is connected.
(b) Construct a phase trajectory on the (e, ė)-plane with e(0) = 2 and ė(0) = 0, and comment upon the transient and steady-state behavior of the system.
(c) What is the switching line of the variable structure system?
9.21 (a) Describe the system of Fig. P9.21a on the (e, ė)-plane, and show that with the on–off controller switching on the vertical axis of the phase plane, the system oscillates with increasing frequency and decreasing amplitude. Obtain a phase trajectory with (e(0) = 1.4, ė(0) = 0) as the initial state point.
(b) Introduce now a deadzone of ±0.2 in the on–off controller characteristic. Obtain a phase trajectory for the modified system with (e(0) = 1.4, ė(0) = 0) as the initial state point, and comment upon the effect of deadzone.
(c) The on–off controller with deadzone is now controlled by the signal (e + ⅓ė), combining proportional and derivative control (Fig. P9.21b). Draw the switching line on the phase plane and construct a phase trajectory with (e(0) = 1.4, ė(0) = 0) as the initial state point. What is the effect of the derivative-control action?
Fig. P9.21
9.22 Consider the second-order system
ẋ₁ = x₂;  ẋ₂ = −u
It is desired to transfer the system to the origin in minimum time from an arbitrary initial state.
Use the bang-bang control strategy with |u| = 1. Derive an expression for the optimum switching
curve. Construct a phase portrait showing a few typical minimum-time trajectories.
9.23 A plant with model G(s) = 1/[s(s + 1)] is placed in a feedback configuration as shown in Fig. P9.23. It is desired to transfer the system from any initial state to the origin in minimum time. Derive an expression for the optimum switching curve, and construct a phase portrait on the (e, ė)-plane showing a few typical minimum-time trajectories.
Fig. P9.23
9.24 Consider the nonlinear system described by the equations
ẋ₁ = x₂
ẋ₂ = −(1 − |x₁|)x₂ − x₁
Find the region in the state plane for which the equilibrium state of the system is asymptotically
stable.
9.25 Check the stability of the equilibrium state of the system described by
ẋ₁ = x₂
ẋ₂ = −x₁ − x₁²x₂
Show that Lyapunov’s linearization method fails, while Lyapunov’s direct method can easily solve this problem.
9.26 Consider a nonlinear system described by the equations
ẋ₁ = −3x₁ + x₂
ẋ₂ = x₁ − x₂ − x₂³
Using the Krasovskii method for constructing the Lyapunov function, with P as the identity matrix, investigate the stability of the equilibrium state. Find a region of asymptotic stability using the Krasovskii method.
9.27 Check the stability of the system described by
ẋ₁ = −x₁ + 2x₁²x₂
ẋ₂ = −x₂
by use of the variable gradient method.
9.28 Develop a linearized state model for the Van der Pol differential equation
ÿ + μ(y² − 1)ẏ + y = 0;  μ = 1
and therefrom determine the local stability of the nonlinear system using Lyapunov’s first method.
9.29 Consider the nonlinear system described by the equations
ẋ₁ = x₁(x₁² + x₂² − 1) − x₂
ẋ₂ = x₁ + x₂(x₁² + x₂² − 1)
Investigate the stability of this nonlinear system around its equilibrium point at the origin.
9.30 Consider a nonlinear system described by the differential equation
ẍ + [K₁ + K₂(ẋ)²]ẋ + x = 0
Check the stability of the equilibrium state of this system when (i) K1 > 0 and K2 > 0;
(ii) K1 < 0 and K2 < 0; and (iii) K1 > 0 and K2 < 0.
Chapter 10
Nonlinear Control Structures
10.1 INTRODUCTION
In the previous chapter, we covered tools and techniques for the analysis of control systems containing
nonlinearities. In the present chapter, we will discuss the deliberate introduction of nonlinearities into
the controller for performance improvement over that of a simpler linear controller. Although there are
numerous techniques, several of the most common are illustrated with examples, to give the reader an
idea of the general approach to designing nonlinear controllers.
If the system is only mildly nonlinear, the simplest approach might be to ignore the nonlinearity in designing the controller (i.e., to omit the nonlinearity in the design model), but to include its effect in evaluating the system performance. The inherent robustness of the feedback control law designed for the approximating system is relied upon to carry the design over to the nonlinear system.
When a system is significantly nonlinear, it is traditionally dealt with by linearization (refer to Eqns
(5.11)) about a selected operating point using Taylor series. We design a linear controller based on
first-order approximation. If the controller works effectively, the perturbations in actual state about the
equilibrium state will be small; if the perturbations are small, the neglected higher-order terms will be
small and can be regarded as a disturbance. Since the controller is designed to counteract the effects of
disturbances, the presence of higher-order terms should cause no problems. This reasoning cannot be
justified rigorously, but, nevertheless, it usually works. Needless to say, it may not always work; so it is
necessary to test the design that emerges, for stability and performance—analytically, by Lyapunov’s
stability analysis for example, and/or by simulation.
In many systems, the nonlinearity inherent in the plant is so dominant that the linearization approach
described above can hardly meet the stringent requirements on systems’ performance. This reality,
inevitably, promotes the endeavor to develop control approaches that will more or less incorporate
the nonlinear dynamics into the design process. One such approach is feedback linearization. Unlike
the first-order approximation approach, wherein the higher-order terms of the plant are ignored, this
approach utilizes feedback to render the input-output dynamics of the given system linear. On the basis
of the linear system thus obtained, linear control techniques can be applied to address design issues.
The roughness of the linearization approach based on first-order approximation, can be viewed from two
perspectives. First, it neglects all higher-order terms. Second, the linear terms depend on the equilibrium
point. These two uncertainties may explain why this linearization approach is incapable of dealing with
the situation where the system operates over wide dynamic regions. Although the feedback linearization
may overcome the first drawback, its applicability is limited, mainly because it rarely leads to a design
that guarantees the system performance over the whole dynamic regime. This is because feedback
linearization is often performed locally—around a specific equilibrium point. The resulting controller is
of local nature. Another remedy to linearization based on first-order approximation, is to design several
control laws corresponding to several operating points that cover the whole dynamics of the system.
Then these linear controllers are pieced together to obtain a nonlinear control law. This approach is
often called gain scheduling. Though this approach does not account for the higher-order terms, it does
accommodate the variation of the first-order terms with respect to the equilibrium points.
Adaptive control theory provides an effective tool for the design of uncertain systems. Unlike fixed-
parameter controllers (e.g., an H∞-theory-based robust controller), adaptive controllers adapt (adjust) their
behavior on-line to the changing properties of the controlled processes.
If a fixed-parameter automatic control system is used, the plant-parameter variations directly affect the
capability of the design to meet the performance specifications under all operating conditions. If an adaptive
controller is used, the plant-parameter variations are accounted for at the price of increased complexity of
the controller. Adaptive control is certainly more complex than fixed-parameter control, and carries with
it more complex failure mechanisms. In addition, adaptive control is both time-varying and nonlinear,
increasing the difficulty of stability and performance analysis. It is this tradeoff, of complexity versus
performance, that must be examined carefully in choosing the control structure.
The main distinctive feature of variable structure systems (VSS), setting them apart as an independent
class of control systems, is that changes can occur in the structure of the system during the transient
process. The structure of a VSS is changed intentionally in accordance with some law of structural
change; the times at which these changes occur (and the type of structure formed) are determined not by
a fixed program, but in accordance with the current value of the states of the system.
The basic idea of feedback linearization in Section 10.2 is the algebraic transformation of the dynamics
of a nonlinear system to that of a linear system, on which linear control designs can in turn be applied.
Sections 10.3–10.5 show how to reduce, or practically eliminate, the effects of model uncertainties on the
stability and performance of feedback controllers using so-called adaptive and variable structure systems.
The chapter concentrates on nonlinear systems represented in continuous-time form. Even though most
control systems are implemented digitally, nonlinear physical systems are continuous in nature and are
hard to discretize meaningfully, while digital control systems may be treated as continuous-time systems
in analysis and design if high sampling rates are used. Thus, we perform nonlinear system analysis and
controller design in continuous-time form. However, of course, the control law is generally implemented
digitally.
The objective set for this chapter is to make the reader aware of the nonlinear control structures commonly
used for dealing with practical control problems in industry. The chapter is not intended to train the
reader on designing nonlinear control systems. For a comprehensive treatment of the subject, refer to
Slotine and Li[126].
X₂ = l₁ cos θ₁ + l₂ cos(θ₁ + θ₂)
Y₂ = l₁ sin θ₁ + l₂ sin(θ₁ + θ₂)
K₂ = ½m₂v₂² = ½m₂l₁²θ̇₁² + ½m₂l₂²(θ̇₁ + θ̇₂)² + m₂l₁l₂(θ̇₁² + θ̇₁θ̇₂) cos θ₂
P₂ = m₂gY₂ = m₂g(l₁ sin θ₁ + l₂ sin(θ₁ + θ₂))
The Lagrangian for the entire arm is
L = K − P = K₁ + K₂ − P₁ − P₂
  = ½(m₁ + m₂)l₁²θ̇₁² + ½m₂l₂²(θ̇₁ + θ̇₂)² + m₂l₁l₂(θ̇₁² + θ̇₁θ̇₂) cos θ₂ − (m₁ + m₂)gl₁ sin θ₁ − m₂gl₂ sin(θ₁ + θ₂)
Equation (10.1) is a vector equation comprised of two scalar equations. The individual terms needed to write down these two equations are
∂L/∂θ̇₁ = (m₁ + m₂)l₁²θ̇₁ + m₂l₂²(θ̇₁ + θ̇₂) + m₂l₁l₂(2θ̇₁ + θ̇₂) cos θ₂
(d/dt)(∂L/∂θ̇₁) = (m₁ + m₂)l₁²θ̈₁ + m₂l₂²(θ̈₁ + θ̈₂) + m₂l₁l₂(2θ̈₁ + θ̈₂) cos θ₂ − m₂l₁l₂(2θ̇₁θ̇₂ + θ̇₂²) sin θ₂
∂L/∂θ₁ = −(m₁ + m₂)gl₁ cos θ₁ − m₂gl₂ cos(θ₁ + θ₂)
∂L/∂θ̇₂ = m₂l₂²(θ̇₁ + θ̇₂) + m₂l₁l₂θ̇₁ cos θ₂
(d/dt)(∂L/∂θ̇₂) = m₂l₂²(θ̈₁ + θ̈₂) + m₂l₁l₂θ̈₁ cos θ₂ − m₂l₁l₂θ̇₁θ̇₂ sin θ₂
∂L/∂θ₂ = −m₂l₁l₂(θ̇₁² + θ̇₁θ̇₂) sin θ₂ − m₂gl₂ cos(θ₁ + θ₂)
According to Lagrange’s equation (10.1), the arm dynamics are given by the two coupled nonlinear differential equations
τ₁ = [(m₁ + m₂)l₁² + m₂l₂² + 2m₂l₁l₂ cos θ₂]θ̈₁ + [m₂l₂² + m₂l₁l₂ cos θ₂]θ̈₂ − m₂l₁l₂(2θ̇₁θ̇₂ + θ̇₂²) sin θ₂ + (m₁ + m₂)gl₁ cos θ₁ + m₂gl₂ cos(θ₁ + θ₂)
τ₂ = [m₂l₂² + m₂l₁l₂ cos θ₂]θ̈₁ + m₂l₂²θ̈₂ + m₂l₁l₂θ̇₁² sin θ₂ + m₂gl₂ cos(θ₁ + θ₂)
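The two torque equations translate directly into code. The following function (a sketch; the default numerical values of the masses, link lengths, and g are illustrative assumptions, not values from the text) evaluates τ₁ and τ₂ for given joint positions, velocities, and accelerations:

import numpy as np

def arm_torques(q, qd, qdd, m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.81):
    # q = (theta1, theta2); qd, qdd = joint velocities and accelerations
    q1, q2 = q; q1d, q2d = qd; q1dd, q2dd = qdd
    c2, s2 = np.cos(q2), np.sin(q2)
    tau1 = (((m1 + m2)*l1**2 + m2*l2**2 + 2*m2*l1*l2*c2)*q1dd
            + (m2*l2**2 + m2*l1*l2*c2)*q2dd
            - m2*l1*l2*(2*q1d*q2d + q2d**2)*s2
            + (m1 + m2)*g*l1*np.cos(q1) + m2*g*l2*np.cos(q1 + q2))
    tau2 = ((m2*l2**2 + m2*l1*l2*c2)*q1dd + m2*l2**2*q2dd
            + m2*l1*l2*q1d**2*s2 + m2*g*l2*np.cos(q1 + q2))
    return tau1, tau2

print(arm_torques(q=(0.5, 0.3), qd=(0.1, -0.2), qdd=(0.0, 0.0)))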
[ẋ₁₁; ẋ₁₂; ẋ₂₁; ẋ₂₂] = [0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0][x₁₁; x₁₂; x₂₁; x₂₂] + [0 0; 0 0; 1 0; 0 1][u₁; u₂]   (10.9)
which may be compactly expressed as
[ẋ₁; ẋ₂] = [0 I; 0 0][x₁; x₂] + [0; I]u   (10.10)
Use linear quadratic regulator design to select a feedback control u(t) that stabilizes the tracking error system (10.17). The performance measure is
J = ∫₀^∞ (x̃ᵀQx̃ + uᵀRu) dt   (10.18)
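The minimizing gain follows from the associated algebraic Riccati equation. A minimal sketch (assuming SciPy; the weights Q = R = I are illustrative choices, and the regulated state here is simply the state of (10.10)):

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.zeros((2, 2)), np.zeros((2, 2))]])   # Eqn. (10.10)
B = np.vstack([np.zeros((2, 2)), np.eye(2)])
Q = np.eye(4)            # assumed state weighting
R = np.eye(2)            # assumed control weighting

P = solve_continuous_are(A, B, Q, R)    # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)         # u = -K x stabilizes the system
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K).round(3))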
Unlike the linearization approach which ignores all higher-order terms of the plant, the feedback
linearization approach utilizes the feedback, to render the given system a linear input-output dynamics.
Then, on the basis of the linear system thus obtained, linear control techniques can be applied to
address design issues. Finally, the resulting nonlinear control law is implemented to achieve the desired
dynamical behavior.
The main drawback of this technique is that it relies on exact cancellation of nonlinear terms to get
linear input-output behavior (This is equivalent to cancellation of the nonlinearity with its inverse).
Consequently, if there are errors or uncertainty in the model of the nonlinear terms, the cancellation is no
longer exact. Therefore, the applicability of such model-based approaches to feedback control of actual
systems is quite limited, because they rely on the exact mathematical models of system nonlinearities.
If the functions f(x) and g⁻¹(x) are not exactly known in the control scheme of Fig. 10.2, we may explore
the option of constructing their estimates by two neural networks. We shall study this option of intelligent
control in Chapter 11.
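The idea, and its reliance on exact cancellation, can be seen in a toy example. The sketch below (assuming SciPy; the scalar plant f(x) = x³, g(x) = 1 + x², and the linear design v = −2x are assumptions for illustration, not the system of Fig. 10.2) applies the cancellation u = g⁻¹(x)(v − f(x)):

import numpy as np
from scipy.integrate import solve_ivp

f = lambda x: x**3          # assumed plant nonlinearity
g = lambda x: 1.0 + x**2    # assumed input gain (nonzero everywhere)

def closed_loop(t, x):
    v = -2.0*x[0]                     # linear design on the linearized loop
    u = (v - f(x[0]))/g(x[0])         # cancellation of the nonlinear terms
    return [f(x[0]) + g(x[0])*u]      # equals v exactly, if the model is exact

sol = solve_ivp(closed_loop, (0.0, 5.0), [2.0])
print(f"x(5) = {sol.y[0, -1]:.2e}")   # decays like exp(-2t)

If f or g is mismodeled, the cancellation in u is imperfect and the loop is no longer exactly linear — the limitation discussed above.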
[Figure: desired and actual joint trajectories, (a) q1 versus t, (b) q2 versus t]
of normal wear, aging, breakdown, and changes in the environment in which the process operates. The
feedback mechanism provides some degree of immunity to discrepancies between the physical plant and
the model that is used for the design of the control system. But sometimes, this is not enough. A control
system designed on the basis of a nominal design model may not behave as well as expected, because the
design model does not adequately represent the process in its operating environment.
How can we deal with processes that are prone to large changes, or for which adequate design models
are not available? One approach is brute force, i.e., high loop-gain: as the loop-gain becomes infinite, the
output of the process tracks the input with vanishing error. Brute force rarely works, however, for well-
known reasons—dynamic instability, control saturation, and susceptibility to noise and other extraneous
inputs.
In robust control design methods, model uncertainties are captured in a family of perturbed plant models,
where each member of the family may represent the nominal plant, but which member does so, remains
unknown. A robust controller satisfies the design requirements in connection with all the members of the
family. Robust control design techniques are sophisticated and make it possible, for the control system
design, to tolerate substantial variations in the model. But the price of achieving immunity to model
uncertainties may be a sacrifice in performance. Moreover, robust control design techniques are not
applicable to processes for which no (uncertainty) design model is available.
The adaptive control theory provides another approach to the design of uncertain systems. Unlike the
fixed-parameter controller, adaptive controllers adjust their behavior on-line, in real-time, to the changing
properties of the controlled processes. For example, in some control tasks, such as those in robot
manipulations, the systems to be controlled have parameter uncertainty at the beginning of the control
operation. Unless this uncertainty is gradually reduced on-line by an adaptation or estimation mechanism,
it may cause inaccuracy or instability for the control systems. In many other tasks, such as those in
power systems, the system may have well-known dynamics at the beginning, but experience
unpredictable parameter variations as the control operation goes on. Without continuous ‘redesign’ of the
controller, the initially appropriate controller design may not be able to control the changing plant well.
Generally, the basic objective of adaptive control is to maintain consistent performance of a system in
the presence of uncertainty or unknown variation in plant parameters. Since such parameter uncertainty,
or variation occurs in many practical problems, adaptive control is useful in many industrial contexts.
These include the following:
Robot Manipulation
Robots have to manipulate loads of various sizes, weights, and mass distributions. It is very restrictive to
assume that the inertial parameters of the loads are well known before a robot picks them up and moves
them away. If controllers with constant gains are used, and the load parameters are not accurately known,
motion of the robot can be either inaccurate or unstable. Adaptive control, on the other hand, allows
robots to move loads of unknown parameters with high speed and high accuracy.
Ship Steering
On long courses, ships are usually put under automatic steering. However, the dynamic characteristics
of a ship strongly depend on many uncertain parameters, such as water depth, ship loading, and wind
and wave conditions. Adaptive control can be used to achieve good control performance under varying
operating conditions.
Aircraft Control
The dynamic behavior of an aircraft depends on its altitude, speed, and configuration. The ratio of variations of some parameters can lie between 10 and 50 in a given flight. Adaptive control can be used to achieve consistent aircraft performance over a large flight envelope.
Process Control
Models for metallurgical and chemical processes are usually complex and also hard to obtain. The
parameters characterizing the processes vary from batch to batch. Furthermore, the working conditions
are usually time-varying (e.g., reactor characteristics vary during the reactor’s life, the raw materials
entering the process are never exactly the same, atmospheric and climatic conditions also tend to change).
In fact, process control is one of the most important application areas of adaptive control.
Adaptive control has also been applied to other areas, such as power systems.
The concept of controlling a process that is not well understood, or one in which the parameters are
subject to wide variations, has a history that predates the beginning of modern control theory. The early
theory was empirical, and was developed before digital computer techniques could be used for extensive
performance simulations. Prototype testing was one of the few techniques available for testing adaptive
control. At least one early experiment had disastrous consequences. As the more mathematically rigorous
areas of control theory were developed starting in the 1960s, interest in adaptive control faded for a time,
only to be reawakened in the late 1970s with the discovery of mathematically rigorous proofs of the
convergence of some popular adaptive control algorithms. This interest continues unabated [130–136].
Many, apparently different, approaches to adaptive control have been proposed in the literature. Two
schemes in particular have attracted much interest: ‘Self-Tuning Regulator’ (STR), and ‘Model Reference
Adaptive Control’ (MRAC). These two approaches actually turn out to be special cases of a more general
design philosophy.
In the following, we describe model reference adaptive control (MRAC); the discussion on self-tuning
regulator is given in the next section.
Generally, a model reference adaptive control system can be schematically represented by Fig. 10.4.
It is composed of four parts: a plant/process containing unknown parameters, a reference model for
compactly specifying the desired output of the control system, a feedback control law containing
adjustable parameters, and an adaptation mechanism for updating the adjustable parameters.
Fig. 10.4 Model reference adaptive control system: the adjustment mechanism compares the model output ym with the process output y and updates the controller parameters; the controller generates the control signal from the command signal
The plant is assumed to have a known structure, although the parameters are unknown. For linear plants,
this means that the number of poles and the number of zeros are assumed to be known, but that the
locations of these poles and zeros are not.
A reference model is used to specify the ideal response of the adaptive control system to the external
command. Intuitively, it provides the ideal plant response, which the adaptation mechanism should seek
in adjusting the parameters. The choice of the reference model is part of the adaptive control system
design. This choice should reflect the performance specifications in the control tasks, such as rise time,
settling time, overshoot, or frequency-domain characteristics.
The controller is usually parameterized by a number of adjustable parameters (implying that one may
obtain a family of controllers by assigning various values to the adjustable parameters). The controller
should have perfect tracking capacity in order to allow the possibility of tracking convergence. That is,
when the plant parameters are exactly known, the corresponding controller parameters should make
the plant output identical to that of the reference model. When the plant parameters are not known, the
adaptation mechanism will adjust the controller parameters, so that perfect tracking is asymptotically
achieved. If the control law is linear in terms of the adjustable parameters, it is said to be linearly
parameterized. Existing adaptive control designs normally require linear parameterization of the
controller—in order to obtain adaptation mechanisms with guaranteed stability and tracking convergence.
The adaptation mechanism is used to adjust the parameters in the control law. In MRAC systems, the
adaptation law searches for parameters such that the response of the plant under adaptive control, becomes
the same as that of the reference model, i.e., the objective of the adaptation is to make the tracking error
converge to zero. Clearly, the main difference from conventional control, lies in the existence of this
mechanism. The main issue in adaptation design is to synthesize an adaptation mechanism which will
guarantee that the control system remains stable and the tracking error converges to zero—even if the
parameters are varied. Many formalisms in nonlinear control can be used to this end, such as Lyapunov
theory, hyperstability theory, and passivity theory[126]. In this section, we shall use Lyapunov theory.
Thus, the desired performance in an MRAC (Fig. 10.4) is given in terms of a reference model, which,
in turn, gives the desired response to the command signal. The inner control loop is an ordinary feedback
loop composed of the plant and the controller; the parameters of the controller are adjusted by the outer
loop in such a way that the error between the plant and model outputs becomes small. The key problem
is to determine the adjustment mechanism so that a stable system, which brings the error to zero, is
obtained.
From the block diagram of Fig. 10.4, one may jump to the false conclusion that MRAC has an answer to
all control problems with uncertain plants. Before using such a scheme, important theoretical problems
such as stability and convergence have to be considered. Since adaptive control schemes are both time-
varying and nonlinear, stability and performance analysis becomes very difficult. Many advances have
been made in proving stability under certain (sometimes limited) conditions. However, not much ground
has been gained in proving performance bounds.
conditions required in the treatment of non-autonomous systems are more restrictive. Scalar functions
with explicit time-dependence, V(x, t), are required while in autonomous system analysis, time-invariant
functions V(x) suffice. Asymptotic stability analysis of non-autonomous systems is generally harder than
that of autonomous systems since it is usually very difficult to find Lyapunov functions with a negative
definite derivative. When V̇(x, t) is only negative semidefinite, Lyapunov theorems on asymptotic stability are not applicable.
Barbalat’s Lemma
Before describing Barbalat’s lemma itself, let us clarify a few points concerning the asymptotic properties
of functions and their derivatives. Given a differentiable function f (t), the following facts are important
to keep in mind [126]:
If f is lower bounded and decreasing (ḟ ≤ 0), then it converges to a limit. This is a standard result in calculus.
The fact that f(t) converges as t → ∞, does not imply that ḟ(t) → 0. For example, while the function f(t) = e⁻ᵗ sin(e²ᵗ) → 0, its derivative ḟ(t) is unbounded.
Given that a function tends towards a finite limit, what additional requirement can guarantee that its
derivative actually converges to zero? Barbalat’s lemma indicates that the derivative itself should have
some smoothness properties; it should be uniformly continuous.
A function ḟ(t) is uniformly continuous if one can always find a δR for a given R > 0, such that for any ti and τ ∈ [0, δR], we have
|ḟ(ti + τ) − ḟ(ti)| < R
Uniform continuity of a function is often difficult to assert from the above definition. A more convenient approach is to examine the function’s derivative. A very simple sufficient condition for a differentiable function to be uniformly continuous is that its derivative be bounded. Thus, if the function f(t) is twice differentiable, then its derivative ḟ(t) is uniformly continuous if its second derivative f̈(t) is bounded. This can easily be seen from the finite difference theorem:
For all ti and all ti + τ, there exists t′ (between ti and ti + τ), such that
ḟ(ti + τ) − ḟ(ti) = f̈(t′)[(ti + τ) − ti]
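The cautionary example quoted above is easy to probe numerically (a quick sketch, not from the text): f(t) = e⁻ᵗ sin(e²ᵗ) tends to zero, while its derivative ḟ(t) = −f(t) + 2eᵗ cos(e²ᵗ) keeps growing:

import numpy as np

t = np.linspace(0.0, 8.0, 200001)
f = np.exp(-t)*np.sin(np.exp(2*t))
fdot = -f + 2*np.exp(t)*np.cos(np.exp(2*t))    # derivative of f
print(f"|f| near t = 8:  {np.abs(f[-1000:]).max():.1e}")   # tends to zero
print(f"max |fdot|:      {np.abs(fdot).max():.1e}")        # grows like 2e^t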
10.3.2
We illustrate the design methodology of MRAC through a simple example. Consider a system with the plant model of the form
ẏp = −ap yp + bp u;  yp(0) ≜ yp⁰   (10.21)
where u is the control variable, and yp is the measured state (output); ap and bp are unknown coefficients. Assume that it is desired to obtain a closed-loop system described by
ẏm = −am ym + bm r;  ym(0) ≜ ym⁰   (10.22)
Fig. 10.5
Notice that the error goes to zero if the parameters a(t) and b(t) are equal to the ones given by (10.24).
We will now attempt to construct a parameter adjustment mechanism that will drive the parameters a(t)
and b(t) to appropriate values, such that the resulting control law (10.23) forces the plant output yp(t) to
follow the model output ym(t). For this purpose, we introduce the Lyapunov function,
V(e, a, b) = ½[e²(t) + (1/(bp γ))(bp a(t) + ap − am)² + (1/(bp γ))(bp b(t) − bm)²]
where γ > 0.
This function is zero when e(t) is zero and the controller parameters a(t) and b(t) are equal to the optimal
values given by (10.24). The derivative of V is
dV/dt = e(t)(de(t)/dt) + (1/γ)[bp a(t) + ap − am](da(t)/dt) + (1/γ)[bp b(t) − bm](db(t)/dt)
      = −am e²(t) + (1/γ)[bp a(t) + ap − am][da(t)/dt − γ yp(t)e(t)] + (1/γ)[bp b(t) − bm][db(t)/dt + γ r(t)e(t)]
If the parameters are updated as
da(t)/dt = γ yp(t)e(t)
db(t)/dt = −γ r(t)e(t)   (10.26)
we get
dV/dt = −am e²(t)
Thus, the adaptive control system is stable in the sense of Lyapunov, i.e., the signals e, a and b are bounded.
To ensure that the tracking error goes to zero, we compute the second time derivative of the Lyapunov function:
d²V/dt² = −2am e(t)(de(t)/dt)
From (10.25b), it follows that
d²V/dt² = −2am e(t)[−am e(t) + (am − ap − bp a(t))yp(t) + (bp b(t) − bm)r(t)] = f(e, a, b, yp, r)
Since all the parameters are bounded, and yp(t) = e(t) + ym(t) is bounded, V̈ is also bounded, which, in turn, implies that V̇ is uniformly continuous. Therefore, the asymptotic convergence of the tracking error e(t) is guaranteed by Barbalat’s lemma.
Figure 10.6 shows the simulation (refer to Problem A.19 in Appendix A) of the MRAC system with ap = 1, bp = 0.5, am = 2 and bm = 2. The input signal is a square wave with amplitude 1. The adaptation gain γ = 2. The closed-loop system is close to the desired behavior after only a few transients. The convergence rate depends critically on the choice of the parameter γ.
Fig. 10.6 Reference model output and plant output of the MRAC system
Plots in Fig. 10.6 were generated by simulating the following sets of equations:
(i) ẏm(t) = −2ym(t) + 2r(t);  ym(0) = 0
This gives ym(t).
(ii) ẏp(t) = −yp(t) + 0.5u(t);  yp(0) = 0
u(t) = b(t)r(t) − a(t)yp(t)
db(t)/dt = −2r(t)e(t);  b(0) = 0.5
da(t)/dt = 2yp(t)e(t);  a(0) = 1
e(t) = yp(t) − ym(t)
From this set of equations, we obtain u(t) and yp(t).
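These equations translate directly into a simulation. The sketch below (assuming SciPy; the square-wave period is an assumed value, since only the amplitude is specified above) integrates the plant, the reference model, and the adaptation laws (10.26) together:

import numpy as np
from scipy.integrate import solve_ivp

gamma, ap, bp, am, bm = 2.0, 1.0, 0.5, 2.0, 2.0
r = lambda t: 1.0 if np.sin(2*np.pi*t/20.0) >= 0 else -1.0  # assumed period 20 s

def mrac(t, z):
    yp, ym, a, b = z
    e = yp - ym                     # tracking error
    u = b*r(t) - a*yp               # control law (10.23)
    return [-ap*yp + bp*u,          # plant, Eqn. (10.21)
            -am*ym + bm*r(t),       # reference model, Eqn. (10.22)
            gamma*yp*e,             # da/dt, Eqn. (10.26)
            -gamma*r(t)*e]          # db/dt, Eqn. (10.26)

sol = solve_ivp(mrac, (0.0, 100.0), [0.0, 0.0, 1.0, 0.5], max_step=0.05)
print(f"final tracking error: {abs(sol.y[0, -1] - sol.y[1, -1]):.2e}")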
10.4.1
The types of models that are needed for the design methods presented in this book, can be grouped
into two categories: transfer function model and state variable model. If we have a transfer function
description, we can obtain an equivalent state variable description and vice versa. These equivalent
models are described by certain parameters—the elements of F, g, c matrices of the state model
x(k + 1) = Fx(k) + gu(k)
y(k) = cx(k)
x(k) : n × 1 state vector
u(k) : scalar input
y(k) : scalar output
or the parameters ai and bj of the transfer function
G(z) = Y(z)/U(z) = (b₁zᵐ + b₂zᵐ⁻¹ + ⋯ + b_{m+1}) / (zⁿ + a₁zⁿ⁻¹ + ⋯ + aₙ)
The category of such models gives us the parametric description of the plant. The other category of models, such as frequency-response curves (Bode plots, polar plots, etc.) and time-response curves, gives a nonparametric description of the plant.
Plant models can be obtained from the first principles of physics. In many cases, however, it is not possible
to make a complete model only from physical knowledge. In these circumstances, the designer may turn
to the other source of information about plant dynamics—the data taken from experiments directly
conducted to excite the plant, and to measure the response. The process of constructing models from
experimental data is called system identification. In this section, we restrict our attention to identification of
discrete parametric models, which includes the following steps:
(i) Experimental planning
(ii) Selection of model structure
(iii) Parameter estimation
Experimental Planning
The choice of experimental conditions for parameter estimation is of considerable importance. It is clear
that the best experimental conditions are those that account for the final application of the model. This
may occur naturally in some cases; e.g., in adaptive control (discussed later in this section) the model is
adjusted under normal operating conditions. Many ‘classic’ methods depend strongly on having specific
input, e.g., sinusoid or impulse. There could be advantages in contriving such an artificial experiment
if it subjects the system to a rich and informative set of conditions, in the shortest possible time. A
requirement on the input signal is that it should sufficiently excite all the modes of the process.
One broad distinction in identification methods is between on-line and off-line experimentation. The
on-line methods give estimates recursively, as the measurements are obtained, and are the only alternative
if the identification is going to be used in an adaptive controller.
Selection of Model Structure
The model structures are derived from prior knowledge of the plant. In some cases the only a priori
knowledge is that the plant can be described as a linear system in a particular range. It is, then, natural to
use general representations of linear systems.
Consider a SISO dynamic system with input {u(t)} and output {y(t)}. The sampled values of these
signals can be related through the linear difference equation
y(k + n) + a₁y(k + n − 1) + ⋯ + aₙy(k) = b₁u(k + m) + b₂u(k + m − 1) + ⋯ + b_{m+1}u(k);  n ≥ m   (10.27)
B(z⁻¹) = z⁻ᵈ(b₁ + b₂z⁻¹ + ⋯ + b_{n−d+1}z^{−(n−d)})
We shall present the parameter-estimation algorithms for the case of d = 1 without any loss of generality;
the results for any value of d easily follow.
For d = 1, we get the input-output model structure
A(z⁻¹)y(k) = B(z⁻¹)u(k)   (10.30)
where
A(z⁻¹) = 1 + a₁z⁻¹ + ⋯ + aₙz⁻ⁿ
B(z⁻¹) = b₁z⁻¹ + b₂z⁻² + ⋯ + bₙz⁻ⁿ
In the presence of disturbance, model (10.30) takes the form
$$A(z^{-1})\, y(k) = B(z^{-1})\, u(k) + \varepsilon(k) \qquad (10.31)$$
where ε(k) is some disturbance of unspecified character.
The model (10.31) describes the dynamic relationship between the input and output signals, expressed in terms of the parameter vector
$$\mathbf{p} = [\alpha_1 \ \cdots \ \alpha_n \ \ \beta_1 \ \cdots \ \beta_n]^T \qquad (10.32)$$
Introducing the row vector of lagged input-output data
$$\boldsymbol{\varphi}(k) = [-y(k-1) \ \cdots \ -y(k-n) \ \ u(k-1) \ \cdots \ u(k-n)] \qquad (10.33)$$
Eqn (10.31) can be rewritten as
$$y(k) = \boldsymbol{\varphi}(k)\,\mathbf{p} + \varepsilon(k) \qquad (10.34)$$
A model structure should be selected (i) that has a minimal set of parameters and is yet equivalent to
the assumed plant description; (ii) whose parameters are uniquely determined by the observed data; and
(iii) which will make subsequent control design simple.
The dynamic relationship between the input and output of a scalar system is given by the model (10.34). Ignoring the random effects ε(k) on data collection, we have
$$y(k) = \boldsymbol{\varphi}(k)\,\mathbf{p} \qquad (10.35)$$
where φ(k) is given by Eqn (10.33) and p is given by Eqn (10.32).
Using the observations
{u(0), u(1), ..., u(N), y(0), y(1), ..., y(N)}
we wish to compute the values of α_i and β_j in the parameter vector p which best fit the observed data.
Thus, solving the parameter-estimation problem requires techniques for selecting a parameter estimate that best represents the given data. For this, we require some idea of the goodness of fit of a
proposed value of p to the true p°. Because, by the very nature of the problem, p° is unknown, it is
unrealistic to define a direct parameter error between p and p°. We must define the error in a way that can
be computed from {u(k)} and {y(k)}.
Let e(k, p) denote the equation error, i.e., the extent to which the equation of motion (10.35) fails to hold for a specific value of p when used with the actual data:
$$e(k, \mathbf{p}) = y(k) - \boldsymbol{\varphi}(k)\,\mathbf{p} \qquad (10.36)$$
A simple criterion representing the goodness of fit of a proposed value of p is
$$J(\mathbf{p}) = \sum_{k=1}^{N} e^2(k, \mathbf{p}) \qquad (10.37)$$
The method called the Least Squares Method, based on minimizing the sum of the squares of the error,
is a very simple and effective method of parameter estimation.
Since y(k) depends on past data up to n earlier periods, the first error we can form is e(n, p); the subsequent errors are e(n+1, p), ..., e(N, p):
$$e(n, \mathbf{p}) = y(n) - \boldsymbol{\varphi}(n)\,\mathbf{p}$$
$$e(n+1, \mathbf{p}) = y(n+1) - \boldsymbol{\varphi}(n+1)\,\mathbf{p}$$
$$\vdots$$
$$e(N, \mathbf{p}) = y(N) - \boldsymbol{\varphi}(N)\,\mathbf{p}$$
In vector-matrix notation,
$$\mathbf{e}(N, \mathbf{p}) = \mathbf{y}(N) - \Phi(N)\,\mathbf{p} \qquad (10.38)$$
where the rows of Φ(N) are the data vectors φ(n), ..., φ(N), and y(N) = [y(n) ... y(N)]ᵀ. The least-squares estimate is the value of p for which J(p) = eᵀ(N, p) e(N, p) is minimized.
The performance measure J(p) can be written as
$$J(\mathbf{p}) = [\mathbf{y}(N) - \Phi(N)\mathbf{p}]^T[\mathbf{y}(N) - \Phi(N)\mathbf{p}]$$
$$= \mathbf{y}^T(N)\mathbf{y}(N) - \mathbf{p}^T\Phi^T(N)\mathbf{y}(N) - \mathbf{y}^T(N)\Phi(N)\mathbf{p} + \mathbf{p}^T\Phi^T(N)\Phi(N)\mathbf{p}$$
$$\quad + \mathbf{y}^T(N)\Phi(N)\left[\Phi^T(N)\Phi(N)\right]^{-1}\Phi^T(N)\mathbf{y}(N) - \mathbf{y}^T(N)\Phi(N)\left[\Phi^T(N)\Phi(N)\right]^{-1}\Phi^T(N)\mathbf{y}(N)$$
(Note that we have simply added and subtracted the same term, under the assumption that [Φᵀ(N)Φ(N)] is invertible.)
Hence
$$J(\mathbf{p}) = \mathbf{y}^T(N)\left[\mathbf{I} - \Phi(N)\left[\Phi^T(N)\Phi(N)\right]^{-1}\Phi^T(N)\right]\mathbf{y}(N) + \left(\mathbf{p} - \left[\Phi^T(N)\Phi(N)\right]^{-1}\Phi^T(N)\mathbf{y}(N)\right)^T \Phi^T(N)\Phi(N)\left(\mathbf{p} - \left[\Phi^T(N)\Phi(N)\right]^{-1}\Phi^T(N)\mathbf{y}(N)\right)$$
The first term in this equation is independent of p, so we cannot reduce J via this term. Hence, to get the smallest value of J, we choose p so that the second term is zero. Denoting the value of p that achieves the minimization of J by p̂, we obtain
$$\hat{\mathbf{p}} = \left[\Phi^T(N)\Phi(N)\right]^{-1}\Phi^T(N)\,\mathbf{y}(N) \qquad (10.40a)$$
$$\quad = P(N)\,\Phi^T(N)\,\mathbf{y}(N) \qquad (10.40b)$$
where
$$P(N) = \left[\Phi^T(N)\Phi(N)\right]^{-1}$$
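As a concrete illustration, the batch estimate (10.40) is a few lines of linear algebra. The sketch below is a minimal example, not from the text: it generates data from an assumed first-order plant, stacks the regression rows φ(k) into Φ(N), and solves the normal equations; all numerical values are illustrative.

```python
import numpy as np

# Hypothetical first-order ARX plant: y(k) = -a1*y(k-1) + b1*u(k-1)
a1_true, b1_true = -0.8, 0.5          # assumed "true" parameters
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 50)            # richly exciting input sequence
y = np.zeros(51)
for k in range(1, 51):
    y[k] = -a1_true * y[k-1] + b1_true * u[k-1]

# Stack regression rows phi(k) = [-y(k-1)  u(k-1)] into Phi; targets into Y
Phi = np.column_stack([-y[:-1], u])
Y = y[1:]

# Batch least-squares estimate (10.40): p_hat = (Phi^T Phi)^-1 Phi^T Y
p_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(p_hat)   # approaches [a1_true, b1_true] for noise-free data
```

For larger problems, `np.linalg.lstsq` would be preferred over forming ΦᵀΦ explicitly, since it avoids the ill-conditioning of the normal equations.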
The least squares calculation for p̂ given by (10.40) is a 'batch' calculation, since one has a batch of data from which the matrix Φ(N) and vector y(N) are composed according to (10.38). In many cases, the
observations are obtained sequentially. If the least squares problem has been solved for N observations,
it seems to be a waste of computational resources to start from scratch when a new observation is
obtained. Hence, it is desirable to arrange the computations in such a way that the results obtained for
N observations can be used in order to get the estimates for (N + 1) observations. The algorithm for
calculating the least-squares estimate recursively is discussed below.
Let p̂(N) denote the least-squares estimate based on N measurements. Then from (10.40)
$$\hat{\mathbf{p}}(N) = \left[\Phi^T(N)\Phi(N)\right]^{-1}\Phi^T(N)\,\mathbf{y}(N)$$
It is assumed that the matrix [Φᵀ(N)Φ(N)] is nonsingular for all N. When an additional measurement is obtained, a row is added to the matrix Φ and an element is added to the vector y. Hence,
$$\Phi(N+1) = \begin{bmatrix} \Phi(N) \\ \boldsymbol{\varphi}(N+1) \end{bmatrix}; \qquad \mathbf{y}(N+1) = \begin{bmatrix} \mathbf{y}(N) \\ y(N+1) \end{bmatrix}$$
The estimate p̂(N+1), based on N + 1 measurements, can then be written as
$$\hat{\mathbf{p}}(N+1) = \left[\Phi^T(N+1)\Phi(N+1)\right]^{-1}\Phi^T(N+1)\,\mathbf{y}(N+1)$$
$$= \left[\Phi^T(N)\Phi(N) + \boldsymbol{\varphi}^T(N+1)\boldsymbol{\varphi}(N+1)\right]^{-1}\left[\Phi^T(N)\mathbf{y}(N) + \boldsymbol{\varphi}^T(N+1)\,y(N+1)\right] \qquad (10.41)$$
Define
$$P(N+1) = \left[\Phi^T(N+1)\Phi(N+1)\right]^{-1}$$
Then from (10.41), we obtain
$$P(N+1) = \left[P^{-1}(N) + \boldsymbol{\varphi}^T(N+1)\boldsymbol{\varphi}(N+1)\right]^{-1} \qquad (10.42)$$
We now need the inverse of a sum of two matrices; for this purpose we use the well-known matrix inversion lemma¹
$$[A + BCD]^{-1} = A^{-1} - A^{-1}B\left[C^{-1} + DA^{-1}B\right]^{-1}DA^{-1}$$
Applying the lemma to (10.42) with A = P⁻¹(N), B = φᵀ(N+1), C = 1, and D = φ(N+1) gives
$$P(N+1) = P(N) - P(N)\boldsymbol{\varphi}^T(N+1)\left[1 + \boldsymbol{\varphi}(N+1)P(N)\boldsymbol{\varphi}^T(N+1)\right]^{-1}\boldsymbol{\varphi}(N+1)P(N) \qquad (10.43)$$
Substituting (10.43) into (10.41) and simplifying yields the recursive least-squares update
$$\hat{\mathbf{p}}(N+1) = \hat{\mathbf{p}}(N) + P(N+1)\boldsymbol{\varphi}^T(N+1)\left[y(N+1) - \boldsymbol{\varphi}(N+1)\hat{\mathbf{p}}(N)\right] \qquad (10.44)$$
Written at the running sampling index k, the update is
$$\hat{\mathbf{p}}(k+1) = \hat{\mathbf{p}}(k) + P(k+1)\boldsymbol{\varphi}^T(k+1)\left[y(k+1) - \boldsymbol{\varphi}(k+1)\hat{\mathbf{p}}(k)\right] \qquad (10.45)$$
with the corresponding update for P(k+1).
¹ The matrix inversion lemma is proved below:
$$[A + BCD]\left\{A^{-1} - A^{-1}B\left[C^{-1} + DA^{-1}B\right]^{-1}DA^{-1}\right\}$$
$$= I + BCDA^{-1} - B\left[C^{-1} + DA^{-1}B\right]^{-1}DA^{-1} - BCDA^{-1}B\left[C^{-1} + DA^{-1}B\right]^{-1}DA^{-1}$$
$$= I + BCDA^{-1} - BC\left[C^{-1} + DA^{-1}B\right]\left[C^{-1} + DA^{-1}B\right]^{-1}DA^{-1}$$
$$= I + BCDA^{-1} - BCDA^{-1}$$
$$= I$$
Any recursive algorithm requires some initial value to be started up. In (10.44), we require p̂(N) and P(N)
(equivalently, in (10.45) we require p̂(k) and P(k)). We may collect a batch of N > 2n data values, and
solve the batch formula once for P(N) and p̂(N).
However, it is more common to start the recursion at k = 0 with P(0) = αI and p̂(0) = 0, where α is some large constant. One may pick P(0) = αI but choose p̂(0) to be the best available guess of the parameter values.
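The recursion is straightforward to implement. The following sketch is an illustrative implementation, assuming the scalar-output model y(k) = φ(k)p + ε(k) with φ(k) a row vector; it carries out the updates (10.43)-(10.45) with the initialization P(0) = αI, p̂(0) = 0 discussed above.

```python
import numpy as np

def rls_update(p_hat, P, phi, y):
    """One recursive least-squares step; phi is the regression row vector."""
    phi = phi.reshape(-1, 1)                      # column vector for the algebra
    denom = 1.0 + float(phi.T @ P @ phi)          # scalar 1 + phi P phi^T
    P_new = P - (P @ phi @ phi.T @ P) / denom     # covariance update (10.43)
    e = y - float(phi.T @ p_hat)                  # prediction error
    p_new = p_hat + P_new @ phi * e               # parameter update (10.45)
    return p_new, P_new

# Initialization: P(0) = alpha*I with alpha large, p_hat(0) = 0
n_par, alpha = 2, 1e4
P = alpha * np.eye(n_par)
p_hat = np.zeros((n_par, 1))

# Hypothetical data stream from y(k) = 0.8 y(k-1) + 0.5 u(k-1)
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 200)
y = np.zeros(201)
for k in range(1, 201):
    y[k] = 0.8 * y[k-1] + 0.5 * u[k-1]
    phi = np.array([-y[k-1], u[k-1]])             # phi(k) = [-y(k-1)  u(k-1)]
    p_hat, P = rls_update(p_hat, P, phi, y[k])

print(p_hat.ravel())   # converges towards [-0.8, 0.5], i.e., [alpha_1, beta_1]
```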
We have presented the least squares method ignoring random effects on data collection, i.e., ε(k) in Eqn (10.34) has been neglected. If ε(k) is white noise, the least squares estimate given by (10.45) converges to the desired value. However, if ε(k) is colored noise, least squares estimation usually gives a biased (wrong mean value) estimate. This can be overcome by using various extensions of least squares estimation.
We have seen that parameter estimation can be done either on-line or off-line. Off-line estimation may
be preferable if the parameters are constant, and there is sufficient time for estimation before control.
However, for parameters which vary (even though slowly) during operation, on-line parameter estimation
is necessary to keep track of the parameter values. Since problems in the adaptive control context usually involve slowly time-varying parameters, on-line estimation methods are more relevant there.
The main purpose of the on-line estimators is to provide parameter estimates for self-tuning control; such a scheme can easily be implemented using a digital computer. Figure 10.7a shows a block diagram representation
of the adaptive control scheme. The system obtained is called a Self-Tuning Regulator (STR) because it
has facilities for tuning its own parameters. The regulator can be thought of as being composed of the
following two loops:
(i) The inner loop is the conventional feedback control loop consisting of the plant and the regulator.
(ii) The parameters of the regulator are adjusted on-line by the outer loop, which is composed of the
recursive-parameter estimator and design calculations.
Fig. 10.7 (a) Block diagram of a self-tuning regulator
A self-tuning regulator, therefore, consists of a recursive parameter estimator (plant identifier) coupled
with a control-design procedure, such that the currently estimated parameter values are used to provide
feedback-controller coefficients. At each sampling, an updated parameter estimate is generated and
a controller is designed, assuming that the current parameter estimate is actually the true value. The
approach of using the estimates as if they were the true parameters for the purpose of design, is called
certainty equivalence adaptive control.
From the block diagram of Fig. 10.7a, one may jump to the false conclusion that such regulators can be
switched on and used blindly without any a priori considerations; the only requirement being a recursive
parameter estimation scheme and a design procedure. We have, no doubt, an array of parameter-estimation
schemes and an array of controller-design methods for plants with known parameters. However, all the
possible combinations may not have a self-tuning property, which requires that the performance of the
regulator coincides with the performance that would be obtained, if the system parameters were known
exactly. Before using a combination, important theoretical problems, such as stability and convergence,
have to be tackled. There are cases wherein self-tuning regulators have been used profitably, though
some of their properties have not been fully understood theoretically; on the other hand, bad choices have
been disastrous in some other cases.
So far, only a small number of available combinations have been explored from the stability, convergence,
and performance points of view. It is a striking fact, uncovered by Astrom and Wittenmark [132], that
in some circumstances a combination of simple least-squares estimation and minimum-variance control
has a self-tuning property. The same is true for some classes of pole-shifting regulators. Computer-based
controllers incorporating these concepts are now commercially available.
10.4.3 Generalized Predictive Control
Clarke, Mohtadi and Tuffs [120, 121] proposed Generalized Predictive Control (GPC) as an alternative to the pole-placement and minimum-variance designs used in self-tuning regulators. The argument for
introducing GPC in a self-tuning context was that it is based on a more flexible criterion than minimum-variance controllers, without requiring an excessive amount of computation. Although it originated in an adaptive control context, GPC has many attractive features which make it worth considering even for non-adaptive control structures. We first consider the GPC approach for a non-adaptive control structure.
Generalized predictive control differs in at least three ways from the control design methods considered so far in this book.
(i) In linear quadratic control (Chapter 8), the cost function is defined over the time interval [0, ∞):
$$J = \frac{1}{2}\sum_{k=0}^{\infty}\left[e^2(k) + \rho u^2(k)\right]; \qquad e(k) = y(k) - y_r$$
where y(k) is the actual output, y_r is the reference/command value, u(k) is the control signal, and ρ is a weighting factor.
We call this control problem an infinite-horizon problem. Note that 'infinite horizon' does not necessarily mean that an infinite amount of time is required for the control u(k) to achieve the desired performance; it just means that there is no fixed deadline by which the desired performance must be attained. In fact, as we know, a good design yields stability and steady-state accuracy within seconds.
The other design methods considered in this book so far (e.g., PID, pole placement) are also based on the infinite-horizon assumption, though this is not as explicitly visible as in linear quadratic control.
(ii) The control design methods considered so far are all off-line design methods; the design
calculations are carried out in one shot before implementation (unless the design is a part of the
MRAC/Self-Tuning loops).
(iii) The design in these methods is based on a fixed model of the plant. A model, however, is always an
approximation of the system under consideration. With time, the parameters of the model become
more and more inaccurate because of internal/external disturbances. Therefore, the control law
u(k) calculated at k = 0 would become more and more inaccurate when considered further into the future, if adequate provision is not built in to account for the changes in the model. This, in fact, is
the essence of feedback control; the error signal is a measure of the internal/external disturbances,
and hence changes in the model; the control law u(k) is forced to be a function of the error signal.
In GPC, we use the concepts of a finite horizon and sequential (on-line) design, and the control strategy has an open-loop structure. The GPC approach can be described as follows:
(1) Assume the measured (actual) value of the current system output is y(k). With the data known
up to time k, the value of the output y(k + j) is predicted over a certain time horizon, called the
prediction horizon N; j = 1, 2,…, N. This ‘output prediction’ is based on the explicit use of the
fixed plant model, and depends on the future values of the control variable u(k + j) within a control horizon N_u; j = 1, 2, …, N_u; N_u ≤ N. If N_u < N, then u(k + j) = u(k + N_u); j = N_u + 1, …, N.
(2) A reference trajectory r(k + j); j = 1,…, N, is defined which describes the desired system trajectory
over the prediction horizon.
(3) The vector of future controls u(k + j) is computed such that a cost function of the following form
is minimized:
$$J = \sum_{j=1}^{N}\left[\hat{y}(k+j) - r(k+j)\right]^2 + \rho \sum_{j=1}^{N_u}\left[\Delta u(k+j-1)\right]^2 \qquad (10.46)$$
where ŷ(k + j) is the predicted output sequence obtained with the data known up to time k, Δu(k + j − 1) is a future control increment (Δu(k) = u(k) − u(k − 1)) obtained from minimization of the cost function J, and ρ is a weighting factor. The horizons (N, N_u) and the weighting factor ρ are design parameters. The reference trajectory r(k + j) can be a known constant, or may incorporate known future variations.
(4) Once the minimization is achieved, the first optimized control action u(k) is applied to the plant, and the resulting plant output is measured. This measurement of the plant output is used as the initial state of the model to perform the next iteration.
Steps 1 to 4 are repeated at each sampling instant. The block diagram of the GPC scheme is shown in Fig. 10.7b.
The following prime characteristics distinguish the GPC approach from other design methods:
At each sample, the control signal is determined to achieve a desired behavior in the following N steps. This is called a receding-horizon strategy.
As per the principle of optimality (Chapter 14), the first element u(k) of the sequence of controls is optimal only if the sequence, at every sampling instant, has been determined by optimizing the cost function (10.46) with N = ∞; this is because our underlying control problem is an infinite-horizon problem. Therefore, using a finite-horizon structure in GPC is an approximation, necessitated by the requirement of reducing the computational time for calculating u(k + j) on-line.
The GPC scheme of Fig. 10.7b is apparently an open-loop structure; therefore, one may doubt the robustness features of the scheme. The robustness is, in fact, built into the receding-horizon and on-line properties of the scheme: at every decision step, the generalized predictive controller observes the state of the true system, synchronizes its estimate with this observation, and tries to find the best sequence of actions given the updated state.
For the problem formulation with cost function (10.46), predictions ŷ(k + j) are based on the measured values of y(k), y(k − 1), …, and not on the predicted values ŷ(k), ŷ(k − 1), …. This virtually amounts to a feedback loop, providing a measure of the internal/external disturbances.
Most real-world dynamical systems are inherently nonlinear. This provides motivation for the application of GPC strategies with a nonlinear model of the plant. However, in many situations, the resulting on-line nonlinear optimization problem makes implementation of the GPC scheme impractical. For many nonlinear systems, a
linearized model is acceptable when the system is working around the operating point. The GPC scheme
with a linear predictive model is a powerful design method in the toolkit of control practitioners.
As the control variables in a GPC scheme are calculated based on the predicted output, the model needs
to be able to reflect the dynamic behavior of the system as accurately as possible. The non-adaptive
control scheme of Fig. 10.7b, when inserted in an adaptive loop such as self-tuning mode of Fig. 10.7a,
will yield an improved performance.
The derivation that follows, employs a linear predictive model.
When considering regulation about a particular operating point, even a nonlinear plant generally admits a locally-linearized model (refer to Eqns (10.30)-(10.32)):
$$A(z^{-1})\, y(k) = B(z^{-1})\, u(k-1) + \varepsilon(k) \qquad (10.47a)$$
Fig. 10.7 (b) Block diagram of the GPC scheme
The leading elements β₁, β₂, …, of the polynomial B are set to zero to account for the dead time of the plant; and the trailing elements β_n, β_{n−1}, …, are set to zero if the degree of polynomial B is less than n.
Principal disturbances encountered in industrial applications are accommodated by modeling ε(k) as a white-noise sequence independent of past control inputs. To obtain a controller with integral action, it is further assumed that ε(k) is modeled as integrated white noise:
$$\varepsilon(k) = \frac{\xi(k)}{1 - z^{-1}} \qquad (10.48)$$
where ξ(k) is an uncorrelated random sequence. Combining with (10.47a), we obtain
$$A(z^{-1})\, y(k) = B(z^{-1})\, u(k-1) + \xi(k)/\Delta \qquad (10.49)$$
where Δ = 1 − z⁻¹. To construct a j-step-ahead predictor from (10.49), introduce the Diophantine identity
$$1 = E_j(z^{-1})\,\Delta A(z^{-1}) + z^{-j} F_j(z^{-1}) \qquad (10.50a)$$
in which E_j and F_j are polynomials uniquely determined by A(z⁻¹) and the prediction interval j, with deg E_j = j − 1. Multiplying (10.49) by E_j Δ z^j and using (10.50a), we get
$$\left(1 - z^{-j} F_j\right) y(k+j) = E_j B\, \Delta u(k+j-1) + E_j\, \xi(k+j)$$
or
$$y(k+j) = E_j B\, \Delta u(k+j-1) + F_j\, y(k) + E_j\, \xi(k+j) \qquad (10.51)$$
Given that the sequence of future control inputs (i.e., u(k + i) for i ≥ 0) is known, and measured output data up to time k are available, the optimal predictor for y(k + j) is the expectation conditioned on the information gathered up to time k (since E_j is of degree j − 1, the noise components E_j ξ(k + j) are all in the future):
$$\hat{y}(k+j\,|\,k) = G_j\, \Delta u(k+j-1) + F_j\, y(k), \qquad G_j = E_j B \qquad (10.52a)$$
The polynomial
$$\frac{B}{A\Delta} = \frac{B(z^{-1})}{A(z^{-1})(1 - z^{-1})}$$
represents the z-transform of the response y(k) of the process to a unit-step input:
$$\text{Step response} = g_0 + g_1 z^{-1} + \cdots + g_{j-1} z^{-(j-1)} + g_j z^{-j} + \cdots \qquad (10.52d)$$
It is evident that the first j terms in G_j are the same as the first j coefficients of the step response of the process. This gives us one way of computing G_j for the prediction equation (10.52a). Both G_j and F_j may be computed by recursion of the Diophantine equation (10.50a), as explained below.
With Ã defined as ΔA, we have from (10.50a)
$$1 = E_j \tilde{A} + z^{-j} F_j \qquad (10.53a)$$
Since Ã is monic, the solution to
$$1 = E_1 \tilde{A} + z^{-1} F_1$$
is obviously
$$E_1 = 1; \qquad F_1 = z\left[1 - \tilde{A}\right]$$
Assume now that the solution to (10.53a) for some j exists, and consider the equation for j + 1:
$$1 = E_{j+1} \tilde{A} + z^{-(j+1)} F_{j+1} \qquad (10.53b)$$
Subtracting the two gives
$$0 = \tilde{A}\left[E_{j+1} - E_j\right] + z^{-j}\left[z^{-1} F_{j+1} - F_j\right] \qquad (10.54a)$$
Since deg(E_{j+1} − E_j) = deg(E_{j+1}) = j, it turns out to be a good idea to define
$$E_{j+1} = E_j + e_j^{(j+1)} z^{-j}$$
Consequently,
$$z^{-1} F_{j+1} - F_j + \tilde{A}\, e_j^{(j+1)} = 0$$
or
$$F_{j+1} = z\left[F_j - \tilde{A}\, e_j^{(j+1)}\right]$$
Comparing the coefficients, we obtain
$$e_j^{(j+1)} = f_0^{(j)} \qquad (10.55a)$$
$$f_i^{(j+1)} = f_{i+1}^{(j)} - \tilde{a}_{i+1} f_0^{(j)}, \quad i = 0, 1, \ldots, n \qquad (10.55b)$$
with $f_{n+1}^{(j)} = 0$ and $\tilde{A} = \Delta A = 1 + \tilde{a}_1 z^{-1} + \cdots + \tilde{a}_{n+1} z^{-(n+1)}$, and
$$E_1 = 1, \qquad F_1 = z(1 - \tilde{A}) \qquad (10.55c)$$
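The recursion (10.55) is easy to mechanize. The sketch below is illustrative; the function name and array conventions are ours, not the text's. It computes the rows of F_j for j = 1, …, N from Ã = ΔA, together with the step-response coefficients g_i of the process that are needed for G_j via (10.52d).

```python
import numpy as np

def gpc_polynomials(a, b, N):
    """Compute F_j coefficient rows and step-response samples for j = 1..N.

    a : [a1, ..., an] from A(z^-1) = 1 + a1 z^-1 + ... + an z^-n
    b : [b1, ..., bn] from B(z^-1) = b1 z^-1 + ... + bn z^-n
    """
    a_tilde = np.convolve([1.0, -1.0], np.r_[1.0, a])    # A~ = (1 - z^-1) A
    n1 = len(a_tilde) - 1                                # degree of A~ (= n+1)
    # F1 = z [1 - A~]  ->  f_i^(1) = -a~_{i+1}, i = 0..n1-1  (Eqn 10.55c)
    F = np.zeros((N, n1))
    F[0] = -a_tilde[1:]
    for j in range(1, N):                                # recursion (10.55b)
        f0 = F[j-1, 0]
        F[j, :-1] = F[j-1, 1:] - a_tilde[1:n1] * f0
        F[j, -1] = -a_tilde[n1] * f0                     # uses f_{n+1}^(j) = 0
    # Step-response samples g_0, g_1, ... : plant B/A driven by a unit step
    g = np.zeros(N)
    y = np.zeros(N + 1)
    for k in range(1, N + 1):
        past_y = sum(a[i] * y[k-1-i] for i in range(min(len(a), k)))
        past_u = sum(b[i] for i in range(min(len(b), k)))   # u = 1 throughout
        y[k] = -past_y + past_u
        g[k-1] = y[k]
    return F, g

# Example: first-order plant A = 1 - 0.9 z^-1, B = 0.5 z^-1, horizon N = 4
F, g = gpc_polynomials([-0.9], [0.5], N=4)
```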
The only unknown quantities in the prediction equation (10.52a) are now the future control inputs. In
order to derive the control law, it is necessary to separate these from the part of the expression containing
known (past) data.
$$\hat{y}(k+1) = (G_1 - g_0)\,\Delta u(k) + F_1\, y(k) + g_0\, \Delta u(k) \qquad (10.56a)$$
$$\hat{y}(k+j) = z^{j-1}\left[G_j - \bar{G}_j\right]\Delta u(k) + F_j\, y(k) + \bar{G}_j\, \Delta u(k+j-1); \quad 1 < j \le N \qquad (10.56b)$$
where $\bar{G}_j = g_0 + g_1 z^{-1} + \cdots + g_{j-1} z^{-(j-1)}$ collects the first j step-response coefficients, so that the last term contains only future control increments while the first two terms contain only known (past) data.
Collecting the predictions (10.56) over the horizons in vector form, ŷ = Gu + f, where u = [Δu(k) ⋯ Δu(k+N_u−1)]ᵀ, r = [r(k+1) ⋯ r(k+N)]ᵀ, f is the free response computed from the data known at time k, and G is the matrix of step-response coefficients, the sequence of future controls is determined by setting the derivative of the cost function to zero:
$$\frac{\partial J}{\partial \mathbf{u}} = 2G^T(G\mathbf{u} + \mathbf{f} - \mathbf{r}) + 2\rho\,\mathbf{u} = 2G^TG\,\mathbf{u} + 2G^T(\mathbf{f} - \mathbf{r}) + 2\rho\,\mathbf{u} = \mathbf{0}$$
or
$$\mathbf{u} = \left[G^TG + \rho I\right]^{-1} G^T(\mathbf{r} - \mathbf{f}) \qquad (10.60)$$
The matrix involved in the inversion has the much-reduced dimension N_u × N_u. In particular, if N_u = 1 (a useful choice for a 'simple' plant), this reduces to a scalar computation.
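Given the step-response coefficients g_i and the free response f over the horizon, the control law (10.60) is a small matrix computation. The following sketch uses illustrative names and would pair with the hypothetical gpc_polynomials helper above; it forms the N × N_u dynamic matrix G and evaluates u = [GᵀG + ρI]⁻¹Gᵀ(r − f).

```python
import numpy as np

def gpc_control(g, f, r, Nu, rho):
    """Receding-horizon GPC move computation, Eqn (10.60).

    g   : step-response coefficients g_0 .. g_{N-1}
    f   : free response over the prediction horizon (length N)
    r   : reference trajectory over the horizon (length N)
    Nu  : control horizon
    rho : control weighting factor
    """
    N = len(g)
    G = np.zeros((N, Nu))
    for i in range(N):                     # lower-triangular dynamic matrix:
        for j in range(min(i + 1, Nu)):    # G[i, j] = g_{i-j}
            G[i, j] = g[i - j]
    du = np.linalg.solve(G.T @ G + rho * np.eye(Nu), G.T @ (r - f))
    return du[0]                           # apply only the first increment
```

Only the first element of the computed sequence is applied at each sampling instant, in line with the receding-horizon strategy; the computation is repeated at the next instant with updated measurements.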
10.4.4
Consider a first-order system with a plant model of the form
$$y(k+1) = f\, y(k) + g\, u(k); \qquad y(0) \triangleq y_0 \qquad (10.61)$$
where u is the control variable and y is the measured state (output); f and g are unknown coefficients.
An especially simple adaptive controller results by combining the least squares method of parameter
estimation with the generalized predictive controller. The least squares parameter-estimation algorithm
requires relatively small computational effort and has reliable convergence, but is applicable only for small noise-to-signal ratios. Several applications have shown that the combination of least squares
parameter estimation with generalized predictive control gives good results.
Let us assume that for the system given by Eqn. (10.61), the desired steady-state value for the controlled
variable y(k) is a constant reference input r. We select the generalized predictive control parameters as ρ = 0.1, N = 4, N_u = 1.
If the system parameters were known, the feedback controller would take the form (10.60). Since the parameters are assumed to be unknown, the least squares estimates are used in place of the true values f° and g° of the parameters f and g. The parameter estimates f̂ and ĝ are derived from the input-output measurements.
To simulate the system (refer to Problem A.20 in Appendix A), the data values were obtained from
Eqn. (10.61) assuming the true parameters
f ° = 1.1052; g° = 0.0526
and sampling interval T = 0.1 sec.
With the initial estimate p̂(0) = [0 0]ᵀ and P(0) = αI with a large value of α, we use Eqns (10.45) to
generate the new parameter estimate, and implement the generalized predictive control law. The plot of
Fig. 10.8 was generated using this procedure. The input signal is a square wave with amplitude 10. The
closed-loop system is close to the desired behavior after a few transients.
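A compact simulation conveying the structure of this self-tuning loop is sketched below. It is illustrative only: for brevity it optimizes the control value u(k) rather than the increment Δu(k), inlines one recursive least-squares step per sample, and adds a small start-up probing signal (an assumption, needed because zero initial estimates give no excitation). It conveys the certainty-equivalence loop rather than reproducing Fig. 10.8 exactly.

```python
import numpy as np

# Plant: y(k+1) = f*y(k) + g*u(k); true parameters unknown to the controller
f_true, g_true = 1.1052, 0.0526
rho, N = 0.1, 4                              # GPC design parameters (Nu = 1)

p_hat = np.zeros(2)                          # [f_hat, g_hat]: no prior knowledge
P = 1e4 * np.eye(2)                          # P(0) = alpha*I, alpha large
y = y_prev = u = 0.0

for k in range(250):
    r = 10.0 if (k // 50) % 2 == 0 else -10.0        # square-wave command
    if k > 0:                                        # one RLS step, Eqns (10.45)
        phi = np.array([y_prev, u])                  # phi(k) = [y(k-1)  u(k-1)]
        P = P - np.outer(P @ phi, phi @ P) / (1.0 + phi @ P @ phi)
        p_hat = p_hat + P @ phi * (y - phi @ p_hat)
    fh, gh = p_hat
    # Certainty-equivalence predictive control with Nu = 1:
    #   y_hat(k+j) = fh^j * y + s_j * u,   s_j = gh * (1 + fh + ... + fh^(j-1))
    s = np.array([gh * sum(fh**i for i in range(j)) for j in range(1, N + 1)])
    free = np.array([fh**j * y for j in range(1, N + 1)])
    u = float(s @ (r - free) / (s @ s + rho + 1e-9)) # minimize squared errors
    if k < 5:
        u += 0.5        # small probing signal to excite the plant initially
    y_prev, y = y, f_true * y + g_true * u           # apply control to plant
```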
10.4.5
As described above, MRAC and STR systems arise from different perspectives: the parameters in MRAC systems are updated so as to minimize the tracking error between the plant output and the
Fig. 10.8 Plant output and command signal versus sampling instants (0 to 250)
reference-model output, while the parameters in STR systems are updated so as to minimize the data-fitting error in input-output measurements. However, there are strong relations between the two design
methodologies. Comparing Figs 10.4 and 10.7a, we note that the two kinds of systems each have an inner loop for control and an outer loop for parameter estimation. From a theoretical point of view, it can actually be shown that MRAC and STR systems can be put under a unified framework.
The two methods can be quite different in terms of analysis and implementation. Compared with MRAC
systems, STR systems are more flexible because of the possibility of coupling various controllers with
various estimators (i.e., the separation of control and estimation). However, the stability and convergence
of self-tuning regulators are generally quite difficult to guarantee, often requiring the signals in the system
to be sufficiently rich so that the estimated parameters converge to the true parameters. If the signals are
not very rich (for example, if the reference signal is zero or a constant), the estimated parameters may not
be close to the true parameters, and the stability and convergence of the resulting control system may not
be guaranteed. In this situation, one must either introduce perturbation signals in the input, or somehow
modify the control law. In MRAC systems, however, the stability and tracking error convergence are
usually guaranteed—regardless of the richness of the signals.
where g is the acceleration due to gravity, and J = Ml² is the moment of inertia. By appropriate scaling, the essential dynamics of the system are captured by
$$\ddot{y}(t) = -a \sin y(t) + u(t) \qquad (10.62)$$
where a is a positive scalar.
Ignoring the nonlinear sine term, we get the following linear approximation of the pendulum equation:
$$\ddot{y}(t) = u(t) \qquad (10.63a)$$
Choosing x₁ = y and x₂ = ẏ as the state variables, x = [x₁ x₂]ᵀ, we have
$$\dot{x}_1(t) = x_2(t); \qquad \dot{x}_2(t) = u(t) \qquad (10.63b)$$
The linear control methodologies (pole placement, optimal control) attempt to minimize, in some sense,
the transfer functions relating the disturbances to the outputs of interest. Here we explore discontinuous
control (variable structure control, refer to Section 9.10) methodology for disturbance rejection. The
system with two control structures, corresponding to u = +1 and u = −1, is considered. The variable structure law governing the dynamics is given by
$$u(t) = \begin{cases} -1 & \text{if } \sigma(x_1, x_2) > 0 \\ +1 & \text{if } \sigma(x_1, x_2) < 0 \end{cases} \qquad (10.64a)$$
where the switching function is defined by
$$\sigma(x_1, x_2) = \lambda x_1 + x_2 \qquad (10.64b)$$
λ being a positive design scalar. The reason for the use of the term 'switching function' is clear, since the function in Eqns (10.64) is used to decide which control structure is in use at any point (x₁, x₂) in the phase plane. The expression in Eqn (10.64a) is usually written more concisely as
$$u(t) = -\operatorname{sgn}(\sigma(t)) \qquad (10.64c)$$
where sgn(·) is the sign function, which exhibits the property
$$\sigma\,\operatorname{sgn}(\sigma) = |\sigma|$$
Figure 10.10a shows typical trajectories (parabolas) for u = ± k, and a typical switching line. Close to
the origin, on either side of the switching line, the trajectories point towards the line; an instant after the
control structure changes, the system trajectory will recross the switching line and the control structure
must switch back. Intuitively, high-frequency switching between the two control structures will take
place as the system trajectories repeatedly cross the line. This high frequency motion is described as
chattering. If infinite-frequency switching were possible, the motion would be trapped or constrained
to remain on the line. The motion, when confined to the switching line, satisfies the differential equation obtained by rearranging σ(x₁, x₂) = 0, namely
$$x_2 = -\lambda x_1 \qquad (10.65a)$$
or
$$\dot{y}(t) = -\lambda y(t) \qquad (10.65b)$$
This represents a first-order decay and the trajectories will ‘slide’ along the switching line to the origin.
Such a dynamical behavior is described as sliding mode and the switching line is termed the sliding
surface. During sliding motion, the system behaves as a reduced-order system which is apparently
independent of the control. The choice of the sliding surface, represented in our example by the parameter λ, governs the performance response, whilst the control law itself is designed to guarantee that trajectories are driven to the 'region' of the sliding surface where the sliding motion takes place.
To achieve this objective, the control action is required to satisfy certain conditions, called reachability
conditions.
To develop the reachability conditions and the region of sliding motion, we consider the system (10.63b)
with u given by (10.64), i.e., u = ± 1 corresponding to the two control structures. Figure 10.10b shows
typical trajectories of the control system with these control structures, and a typical switching line.
Fig. 10.10 (a) Typical trajectories (parabolas) for u = ±k and a switching line σ(x₁, x₂) = 0 in the (x₁, x₂) phase plane; on either side of the line (σ > 0, σ < 0), trajectories point towards it
Assume that the system under consideration starts with initial conditions corresponding to point A in Fig. 10.10b. The control switches when the representative point reaches B. By the geometry of the situation, we see that the trajectory resulting from the reversal of the control at point B will bring the representative point onto a parabola much closer to the origin. This will continue until the trajectory intersects the switching line at a point closer to the origin than the points A₁ and A₂, which are the points of intersection of the switching line σ(x₁, x₂) = 0 with the parabolas passing through the origin: x₁ = −½ x₂|x₂|. The coordinates of the points A₁ and A₂ work out to (−2/λ², 2/λ) and (2/λ², −2/λ), respectively. The region where the sliding motion takes place is a part of the switching line between the points A₁ and A₂, as is seen below.
$$\sigma(t)\dot{\sigma}(t) = \sigma(\lambda \dot{x}_1 + \dot{x}_2) = \sigma(\lambda x_2 + u) = \sigma(\lambda x_2 - \operatorname{sgn}(\sigma)) = \lambda\,\sigma\, x_2 - |\sigma|$$
Since
$$\lambda\,\sigma\, x_2 \le \lambda\,|\sigma|\,|x_2|$$
we have
$$\sigma(t)\dot{\sigma}(t) \le \lambda\,|\sigma|\,|x_2| - |\sigma| = |\sigma|\left(\lambda |x_2| - 1\right)$$
so the reachability condition σσ̇ < 0 is guaranteed on the segment of the switching line where |x₂| < 1/λ.
Fig. 10.10 (b) Typical trajectories of the system with the control structures u = ±1, showing the switching line σ(x₁, x₂) = 0, the points A₁ and A₂, and a representative trajectory from A to B
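The reaching, chattering and sliding behavior described above is easy to reproduce numerically. The sketch below is illustrative (step size, horizon and initial state are arbitrary choices): it integrates the double integrator (10.63b) under the relay law (10.64) with λ = 1 by the Euler method.

```python
import numpy as np

lam, dt = 1.0, 1e-3                     # switching-line slope and Euler step
x1, x2 = 2.0, 0.0                       # initial state (a point like 'A')
traj = []
for _ in range(20000):
    s = lam * x1 + x2                   # switching function (10.64b)
    u = -np.sign(s)                     # relay control law (10.64c)
    x1 += dt * x2                       # double-integrator dynamics (10.63b)
    x2 += dt * u
    traj.append((x1, x2))

# Once the trajectory reaches the line with |x2| < 1/lam, it chatters about
# s = 0 and the averaged motion decays like x1' = -lam*x1 towards the origin.
print("final state:", traj[-1])
```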
10.5.1
The problem is to regulate a dynamic system subject to parameter uncertainties and nonlinearities. A
controller is sought to force the system to reach, and subsequently remain on, a predefined surface (called
the sliding surface) within the state space. The dynamical behavior of the system, when confined to the
surface, is called the sliding motion. The advantages of obtaining such a motion are twofold: firstly, there is a reduction in order; secondly, the sliding motion is insensitive to parameter variations. The latter property of invariance towards uncertainty makes the methodology attractive for designing robust control for uncertain systems.
The design approach comprises the following two components:
The design of a sliding surface in the state space, so that the reduced-order sliding motion satisfies
the specifications imposed on the design.
The synthesis of a control law, discontinuous about the sliding surface, such that the trajectories
of the closed-loop motion are directed towards the surface.
The closed-loop dynamical behavior obtained using a variable structure control law comprises two distinct types of motion. The initial phase, often referred to as the reaching phase, occurs whilst the states
are being driven towards the sliding surface. This motion is, in general, affected by the disturbances
present. Only when the states reach the surface, and the sliding motion takes place, does the system
become insensitive to uncertainty.
A sliding mode will exist if, in the vicinity of the sliding surface, the state velocity vectors are directed
towards the surface. In such a case, the sliding surface attracts trajectories when they are in its vicinity;
and, once a trajectory intersects the sliding surface, it will stay on it thereafter.
A hypersurface
$$S : \sigma(x_1, x_2, \ldots, x_n) = \sigma(\mathbf{x}) = 0 \qquad (10.68a)$$
is attractive if
(i) any trajectory starting on the surface remains there; and
(ii) any trajectory starting outside the surface tends to it at least asymptotically.
The following conditions (called reachability conditions) ensure that the motion of the state trajectory
x(t) of the single-input dynamical system
$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, u, t) \qquad (10.68b)$$
on either side of the sliding surface σ(x) = 0, is towards the surface.
In general, if the reachability conditions are satisfied globally, i.e., the region Ω of attraction is the entire state space, then, since
$$\frac{d}{dt}\left(\tfrac{1}{2}\sigma^2\right) = \sigma\dot{\sigma} < 0$$
it follows that
$$V(\sigma) = \tfrac{1}{2}\sigma^2 \qquad (10.70)$$
serves as a Lyapunov function for the motion towards the sliding surface.
10.5.2
The plant model, derived earlier in Section 10.2, is given by Eqns (10.6):
$$\dot{\mathbf{x}}_1 = \dot{\mathbf{p}} = \mathbf{x}_2$$
$$\dot{\mathbf{x}}_2 = \ddot{\mathbf{p}} = \mathbf{f}(\mathbf{x}) + \mathbf{g}(\mathbf{x})\,\boldsymbol{\tau} \qquad (10.71)$$
where
$$\mathbf{x} = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{p} \\ \dot{\mathbf{p}} \end{bmatrix} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \dot{\theta}_1 \\ \dot{\theta}_2 \end{bmatrix}; \qquad \boldsymbol{\tau} = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix}$$
We now combine the equations of the plant and of the sliding surface (Eqns (10.73) and (10.74)):
$$\mathbf{x}_2 = -\Lambda\, \mathbf{x}_1$$
Therefore,
$$\dot{\mathbf{x}}_1 = -\Lambda\, \mathbf{x}_1 \qquad (10.76)$$
where Λ = diag(λ₁, λ₂).
The above equation describes the system dynamics in sliding (observe the order reduction of the system dynamics in sliding). The response of the system in sliding is completely specified by an appropriate choice of the parameters λ₁ and λ₂ of the switching surface. While in sliding, the system is not affected by model uncertainties.
After designing a sliding surface, we construct a feedback controller. The controller objective is to drive the plant state to the sliding surface, and maintain it on the surface for all subsequent time. We use a generalized Lyapunov approach in constructing the controller. Specifically, we use a distance measure from the sliding surface, V = ½σᵀσ = ½(σ₁² + σ₂²), as a Lyapunov function candidate. Then, we select the controller so that the time derivative of the chosen Lyapunov function candidate, evaluated on the solution of the controlled system, is negative definite with respect to the switching surface, thus ensuring the motion of the state trajectory towards the surface, as illustrated in Fig. 10.10a. Our goal is to find τ so that
$$\frac{d}{dt}\left(\tfrac{1}{2}\boldsymbol{\sigma}^T\boldsymbol{\sigma}\right) = \boldsymbol{\sigma}^T\dot{\boldsymbol{\sigma}} < 0 \qquad (10.77)$$
$$\boldsymbol{\sigma}^T\dot{\boldsymbol{\sigma}} = \boldsymbol{\sigma}^T\left[\Lambda\dot{\tilde{\mathbf{x}}}_1 + \dot{\tilde{\mathbf{x}}}_2\right] = \boldsymbol{\sigma}^T\left[\Lambda(\dot{\mathbf{x}}_{1d} - \dot{\mathbf{x}}_1) + \left(\dot{\mathbf{x}}_{2d} - \mathbf{f}(\mathbf{x}) - \mathbf{g}(\mathbf{x})\boldsymbol{\tau}\right)\right]$$
Choosing
$$\boldsymbol{\tau} = \mathbf{g}^{-1}(\mathbf{x})\left[-\mathbf{f}(\mathbf{x}) + \dot{\mathbf{x}}_{2d} + \Lambda(\dot{\mathbf{x}}_{1d} - \dot{\mathbf{x}}_1) + \begin{bmatrix} k_1\,\operatorname{sgn}(\sigma_1) \\ k_2\,\operatorname{sgn}(\sigma_2) \end{bmatrix}\right] \qquad (10.78)$$
where k₁ > 0 and k₂ > 0 are gains to be determined so that the condition σᵀσ̇ < 0 is satisfied. To determine these gains, we substitute τ, given by (10.78), into the expression for σᵀσ̇:
$$\boldsymbol{\sigma}^T\dot{\boldsymbol{\sigma}} = -[\sigma_1 \ \ \sigma_2]\begin{bmatrix} k_1\,\operatorname{sgn}(\sigma_1) \\ k_2\,\operatorname{sgn}(\sigma_2) \end{bmatrix} = -\sigma_1 k_1\operatorname{sgn}(\sigma_1) - \sigma_2 k_2\operatorname{sgn}(\sigma_2) = -k_1|\sigma_1| - k_2|\sigma_2| < 0$$
Thus, the sliding surface σ(x) = 0 is asymptotically attractive. The larger the values of the gains, the faster the trajectory converges to the sliding surface. Note that the tolerance of sliding mode control to model imprecision and disturbances is high; satisfying the asymptotic stability requirement despite the presence of uncertainties ensures asymptotic tracking.
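In code, the control law (10.78) is a direct transcription once models of f(x) and g(x) are available. The sketch below is a generic helper (the function name and signature are ours); the two-link arm expressions for f and g from Section 10.2 would be substituted in practice.

```python
import numpy as np

def sliding_control(x1, x2, x1d, x2d, x1d_dot, x2d_dot, f, g, Lam, k_gain):
    """Sliding-mode tracking control in the form of Eqn (10.78).

    x1, x2           : joint positions and velocities (arrays)
    x1d, x2d         : desired positions and velocities
    x1d_dot, x2d_dot : derivatives of the desired trajectories
    f, g             : model drift vector f(x) and input matrix g(x)
    Lam              : diag(lambda_1, lambda_2), sliding-surface slopes
    k_gain           : switching gains [k1, k2], both > 0
    """
    sigma = Lam @ (x1d - x1) + (x2d - x2)      # distance from sliding surface
    switching = k_gain * np.sign(sigma)        # discontinuous term
    # tau = g^-1 [ -f + x2d_dot + Lam (x1d_dot - x1_dot) + switching ]
    return np.linalg.solve(g, -f + x2d_dot + Lam @ (x1d_dot - x2) + switching)
```

In practice, chattering is often reduced by replacing the sign function with a saturation (boundary-layer) function.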
Simulation of this controller for the two-link robot arm (m₁ = 1, m₂ = 1, l₁ = 1, l₂ = 1, g = 9.8; θ_d1(t) = sin(πt), θ_d2(t) = cos(πt)) was done using MATLAB (refer to Problem A.21 in Appendix A). Figures 10.11 show the tracking performance.
Fig. 10.11 Desired and actual trajectories: (a) joint angle θ₁ versus t; (b) joint angle θ₂ versus t
PROBLEMS
10.1 The system
$$\dot{x}_1 = x_1 x_2 + x_3$$
$$\dot{x}_2 = -2x_2 + x_1 u$$
$$\dot{x}_3 = \sin x_1 + 2x_1 x_2 + u$$
$$y = x_1$$
can be transformed to Brunovsky form by differentiating y(t) repeatedly, and substituting state derivatives from the given system equations, until the control input u(t) appears:
$$\dot{y} = \dot{x}_1 = x_1 x_2 + x_3$$
$$\ddot{y} = \dot{x}_1 x_2 + x_1 \dot{x}_2 + \dot{x}_3 = \left[\sin x_1 + x_2 x_3 + x_1 x_2^2\right] + \left[1 + x_1^2\right]u \equiv f(\mathbf{x}) + g(\mathbf{x})\,u; \quad \mathbf{x} = [x_1\ x_2\ x_3]^T$$
Defining the variables z₁ ≡ y, z₂ ≡ ẏ, we obtain
$$\dot{z}_1 = z_2$$
$$\dot{z}_2 = f(\mathbf{x}) + g(\mathbf{x})\,u$$
This may be converted to a linear system by redefinition of the input as
$$v(t) \equiv f(\mathbf{x}) + g(\mathbf{x})\,u(t)$$
so that
$$u(t) \equiv \frac{1}{g(\mathbf{x})}\left(-f(\mathbf{x}) + v(t)\right)$$
for then one obtains
$$\dot{z}_1 = z_2; \qquad \dot{z}_2 = v$$
which is equivalent to
$$\ddot{y} = v$$
With a PD tracking control
$$v = \ddot{y}_d + K_D \dot{e} + K_P e$$
where the tracking error is defined as e(t) ≡ y_d(t) − y(t), and y_d(t) is the desired trajectory, the closed-loop system becomes
$$\ddot{e} + K_D \dot{e} + K_P e = 0$$
The complete controller implied by this feedback linearization technique is given by
$$u(t) = \frac{1}{g(\mathbf{x})}\left(-f(\mathbf{x}) + \ddot{y}_d + K_D \dot{e} + K_P e\right)$$
(a) Draw the structure of the feedback linearization controller showing PD outer loop and
nonlinear inner loop.
(b) Select gains K_P and K_D so that the closed-loop system has a natural frequency of ω_n = 10 rad/sec, and a damping ratio of ζ = 0.707.
(c) Suppose that it is desired for the plant output y(t) to follow the trajectory y_d = sin(2πt). Simulate the system and plot the actual output y(t), desired output y_d(t), and the tracking error e(t); given x(0) = [1 1 1]ᵀ.
10.2 One useful method for specifying system performance is by means of a model that will produce the desired output for a given input. The model need not be actual hardware; it can simply be a mathematical model simulated on a computer. In a model reference control system, the outputs of the model and of the plant are compared, and the difference is used to generate the control signals.
Consider a nonlinear, time-varying plant described by
$$\dot{\mathbf{x}} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -b & -a(t)x_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u = \mathbf{f}(\mathbf{x}, u, t)$$
where a(t) is time-varying and b is a positive constant.
Assume the reference model equation to be
$$\dot{\mathbf{x}}_m = \begin{bmatrix} \dot{x}_{m1} \\ \dot{x}_{m2} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix}\begin{bmatrix} x_{m1} \\ x_{m2} \end{bmatrix} + \begin{bmatrix} 0 \\ \omega_n^2 \end{bmatrix}v = A_m\,\mathbf{x}_m + \mathbf{b}_m\,v$$
where
$$M = [e_1 \ \ e_2]\begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}\left\{\begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} - \begin{bmatrix} 0 & 1 \\ -b & -a(t)x_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} - \begin{bmatrix} 0 \\ u \end{bmatrix} + \begin{bmatrix} 0 \\ \omega_n^2 v \end{bmatrix}\right\}$$
If we choose u so that
$$u(t) = -(\omega_n^2 - b)x_1 - 2\zeta\omega_n x_2 + \omega_n^2 v + a_m x_2^2\,\operatorname{sign}(e_1 p_{12} + e_2 p_{22}); \qquad a_m = \max_t |a(t)|$$
then
$$M = (e_1 p_{12} + e_2 p_{22})\left[a(t) - a_m\,\operatorname{sign}(e_1 p_{12} + e_2 p_{22})\right]x_2^2 \le 0$$
(a) Draw a block diagram representing the structure of the model reference adaptive control
system.
(b) For the parameters a(t) = 0.2 sin t, b = 8, ζ = 0.7, ω_n = 4, simulate the closed-loop system and plot x(t), x_m(t), and e(t).
10.3 Consider the adaptive control design problem for a plant, approximately represented by a first-order differential equation
$$\dot{y} = -a_p y + b_p u$$
where y(t) is the plant output, u(t) is its input, and a_p and b_p are constant plant parameters (unknown to the adaptive controller). The desired performance of the control system is specified by a first-order reference model
$$\dot{y}_m = -a_m y_m + b_m r$$
where a_m and b_m are known constant parameters, and r(t) is a bounded external reference signal.
Using the Lyapunov synthesis approach, formulate a control law, and an adaptation law, such that the resulting model-following error y(t) − y_m(t) asymptotically converges to zero.
Simulate the MRAC system with a_p = 1, b_p = 2, a_m = 3 and b_m = 3, adaptation gain γ = 1.5; the initial values of both controller parameters are chosen to be 0, indicating no a priori knowledge, and the initial conditions of the plant and the model are both zero. Use two different reference signals in the simulation: r(t) = 2, and r(t) = 2 sin(3t).
10.4 The following data were collected from a cell concentration sensor, measuring absorbance in a
biochemical stream. The input u is the flow rate deviation (in dimensionless units) and the sensor
output y is given in volts. The flow rate (input) is piecewise constant between sampling instants.
The process is not at steady state initially, so y can change even though u = 0. Fit a first-order model
$$y(k) = a_1 y(k-1) + b_1 u(k-1)$$
to the data using the least-squares approach. Plot the model response and the actual data.
Time(sec) u y
0 0 3.000
1 3 2.456
2 2 5.274
3 1 6.493
4 0 6.404
5 0 5.243
6 0 4.293
7 0 3.514
8 0 2.877
9 0 2.356
10 0 1.929
10.5 Step test data have been obtained for the off-gas CO2 concentration response—obtained by
changing the feed rate to a bioreactor. At k = 0, a unit-step change in input u occurs, but the output
change at the first sample (k = 1) is not observed until the next sampling instant. The data is given
in the table below.
Estimate the model parameters in the second-order difference equation
$$y(k) = a_1 y(k-1) + a_2 y(k-2) + b_1 u(k-1) + b_2 u(k-2)$$
from the input-output data using the least-squares approach. Plot the model response and the
actual data.
k 0 1 2 3 4 5 6 7 8 9 10
y(k) 0 0.058 0.217 0.360 0.488 0.6 0.692 0.772 0.833 0.888 0.925
10.6 The following data were collected for a process (columns: k, u, y):
0 1 4.0000
1 1 –2.0000
2 0 –1.0000
3 1 8.5000
4 1 –9.7500
5 1 1.6250
6 1 21.0625
7 0 –30.8438
8 1 7.1406
9 0 51.9766
10 1 –89.2461
Chapter 11
Intelligent Control with Neural
Networks/Support Vector Machines
Biological neuronal processes are enormously complex, and the progress made in the understanding of
the field through experimental observations is limited and crude. Nevertheless, it is true that this limited
understanding of the biological processes has provided a tremendous impetus to the emulation of certain
human learning behaviors, through the fields of mathematics and systems science. In neuronal informa-
tion processing, there are a variety of complex mathematical operations and mapping functions involved,
that, in synergism, act as a parallel-cascade computing structure. As system scientists, our objective is
that, based upon this limited understanding of the brain, we create an intelligent cognitive system that
can aid humans in various decision-making tasks. New computing theories, under the category of neural networks, have been evolving. Hopefully, these new computing methods, with the neural network architecture as the basis, will be able to provide a thinking machine—a low-level cognitive machine for which scientists have been striving for so long.
The cognitive functions of the brain, unlike the computation functions of the computer, are based upon
relative grades of information acquired by the neural sensory systems. The conventional mathematical
tools, whether deterministic or probabilistic, are based upon some absolute measure of information. Our
natural sensors acquire information in the form of relative grades rather than in absolute numbers. The
‘perceptions’ and ‘actions’ of the cognitive process also appear in the form of relative grades. The theory
of fuzzy logic, which is based upon the notion of graded membership, provides mathematical power for
the emulation of the higher-order cognitive functions—the thought and perception process. A marriage
between the two evolving disciplines—neural networks and fuzzy logic—may provide a tremendous
impetus to the theory for the important field of cognitive information.
The subject of intelligent systems is in an exciting state of research and we believe that we are slowly
progressing towards the development of truly intelligent systems. The present-day versions of intelligent
systems are not truly intelligent; however, the loose usage of the term ‘intelligent’ acts as a reminder that
we have a long way to go.
The conventional methods of control design use mathematical models derived by the application of
physical laws. The goal of mathematical modeling is to provide a set of equations that purports to
describe interrelations between the system quantities as well as the relations between these quantities
and external inputs. We can use different types of equations and their combinations, like algebraic,
differential (ordinary/partial), difference, integral, or functional equations. A mathematical model can
be viewed as a mathematical representation of the significant relevant aspects of a physical system
(significance and relevance being in relation to an application where the model is to be used).
Whenever devising algebraic, differential, difference equations (or any other model from application
of physical laws) is feasible, using a reasonable number of equations that can solve the given problem
in a reasonable time, at a reasonable cost, and with reasonable accuracy, there is no need to look for an
alternative. Today, however, there are a large number of instances in diverse fields, including control
systems, wherein at least one of these criteria is not satisfied; one therefore seeks other avenues to solve the given problem.
Since the inception of the notion of fuzzy logic in 1965, we have been thinking about the quantitative and qualitative aspects of control mechanisms, and have introduced the notion of intelligent control systems. This logic is capable of emulating certain functional elements of human intelligence. In partnership with other
mathematical tools such as neural networks, the field of fuzzy logic is responsible for creation of a new
field—the field of soft computing. In this decade, the field of soft computing has become a new emerging
discipline in providing solutions to complex industrial and management problems—problems that are
deeply surrounded by both qualitative and quantitative uncertainties. The elements of this emerging field
provide some mathematical strength in the emulation of human-like intelligence and in the creation of
systems that we call intelligent systems.
The conventional field of control is based on the traditional mathematical concepts. The mathematics
through which we develop scientific and engineering techniques, is based upon some precise, quantitative
aspects and rigorous concepts. Such quantitative aspects and rigorous concepts are beautiful, but they fail
to formulate the imprecise and qualitative nature of our cognitive behavior—the intelligence.
What is the character of human intelligence? Is it precise, quantitative, rigorous, and computational? The
answer is negative. We are very bad at calculations or any kind of computing. A negligible percentage
of human beings can multiply two three-digit numbers in their heads. The basic function of human
intelligence is to ensure survival in nature, not to perform precise calculations. The human brain can
process millions of visual, acoustic, olfactory (concerned with smelling), tactile (the sense of touch),
and motor data, and it shows astonishing abilities to learn from experience, generalize from learned
rules, recognize patterns and make decisions. We want to transfer some of the human mental faculties
of learning, generalizing, memorizing, and predicting into our models, algorithms, smart machines and
intelligent artificial systems, in order to enable them to survive in highly technological environment, that
is, to solve given tasks based on previous experience with reasonable accuracy, at reasonable cost, and
in a reasonable amount of time.
The basic premises of soft computing are as follows:
The real world is pervasively imprecise and uncertain.
Precision and certainty carry a cost.
The guiding principle of soft computing, which follows from these premises is as follows:
Exploit tolerance for imprecision, uncertainty, and partial truth, to achieve tractability, robustness,
and low solution costs.
The guiding principle of soft computing differs strongly from that of classical hard computing which
requires precision, certainty, and rigor. Many contemporary problems do not lend themselves to
precise solutions within the framework of classical hard computing; for instance, recognition problems
(handwriting, speech, objects, and images), computer graphics, mobile robot coordination, and data
compression. To be able to deal with such problems, there is often no choice but to accept solutions
that are suboptimal and inexact. In addition, even when precise solutions can be obtained, their cost is
generally much higher than that of solutions which are imprecise and yet yield results within the range
of acceptability.
Soft computing is not a single methodology; it is an evolving collection of methodologies for the representation of ambiguity in human thinking. The core methodologies of soft computing are: fuzzy
logic, neural networks, and evolutionary computation. These methodologies have their strengths and
weaknesses. For example, fuzzy logic is most effective when human solution is available. In this context,
fuzzy logic is employed as a programming language that serves to translate a human solution into the
language of fuzzy IF-THEN rules. Neural networks do not require the availability of a human solution,
but can be trained by exemplification. The primary contribution of evolutionary computation, which
is inspired by genetic evolution in humans and animals, is algorithms for systematized random search
for obtaining the best possible solution in a huge solution space. Evolutionary algorithms are a class of
global optimization techniques.
As real-life problems become more varied and more complex, we find that no single soft-computing
methodology suffices to deal with them. To conceive, design, analyze, and use intelligent systems,
we frequently have to employ the totality of soft computing tools that are available. The constituent
methodologies in soft computing are, for the most part, complementary and synergistic rather than
competitive. What this means is that in many applications, it is advantageous to employ the constituent
methodologies in combination rather than in a stand-alone mode. In Chapters 11–14, we will employ soft
computing methodologies—in stand-alone and hybrid modes—to obtain solutions to control problems,
called intelligent control systems.
Some other general terms used in the literature with reference to intelligent systems are as follows.
Soft computing is serving as the foundation for the emerging field of computational intelligence (the
field is sometimes referred to as machine intelligence). When a machine (which almost always means
a computer system) improves its performance at a given task over time without reprogramming, it can
be said to have learned something. Machine learning is the key to machine intelligence, just as human
learning is the key to human intelligence.
There is a significant overlap in the fields of soft computing, computational intelligence, machine
learning, and machine intelligence. The meaning of various terms can change quickly and unpredictably
depending on the context in which they are used. However, the loose definitions given here will serve
our purpose in this book.
seen during its training phase. Despite our need to make this assumption in order to obtain theoretical
results, it is important to keep in mind that the assumption is often violated in practice.
Supervised Learning
Function approximation is a supervised learning problem where there is an input x, an output y, and the
task is to learn the mapping from the input to the output. The approach in machine learning is that we
assume a parametric model of the form: ŷ = g(x|p), where g(◊) is the model and p are its parameters.
The machine learning program optimizes the parameters, p, such that the error is minimized, that is,
our estimates, ŷ, are as close as possible to the correct values, y, given in the training set. The name 'supervised learning' refers to the dependence of the 'learner' on the 'supervisor' to select informative states, and to provide the actual/observed output for each state.
Note that the supervised-learning task is to learn the function g(x|p), called the target function; the only information available is a training data set {x⁽ᵖ⁾, y⁽ᵖ⁾; p = 1, 2, ..., P}. A learning algorithm that merely guarantees that the learned target function g(·) fits the training data well is not our design objective. Our
aim is to use the machine for predicting output values for the data beyond the training data; for the data
that the machine has not seen during its training phase. The actual/observed output for the unseen data is
not known, and we aim to use the prediction of the machine for decision making.
Traditional mathematical models (differential/difference equations) are based on the application of physical laws, and employ hard computing. In machine learning, on the other hand, analytical models (target functions g(·)) are based on direct empirical experience, and employ soft computing. Naturally, if the physics of the problem is well understood and a traditional mathematical model is feasible, one need not resort to machine learning methods.
Lacking information on the physics of the problem, our assumption is that the best 'model' for prediction is the model that is induced by the observed training data. Inductive learning methods formulate a model based on soft-computing methodologies by finding empirical regularities over the training examples, in the expectation that these regularities carry over to unseen examples. The inductive learning hypothesis is as follows:
Any model found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples. Learning generalizes from the specific training examples, hypothesizing a general function that covers these examples and other cases beyond the training examples.
Soft-computing methodologies provide many alternative structures for realizing the target function g(·). The two most commonly used structures are neural networks and fuzzy logic.
Unsupervised Learning
Another machine learning application is concerned with unsupervised learning. In supervised learning,
the aim is to learn a mapping from the input to an output whose correct values are provided by a supervisor.
In unsupervised learning, there is no such supervisor and we only have input data. The goal is to unravel
the underlying similarities, and cluster ‘similar’ input vectors together. A major issue in unsupervised
learning is that of defining ‘similarity’ between two input vectors and choosing an appropriate measure
for it.
Reinforcement Learning
In some applications, the output of the system is a sequence of actions. In such cases, a single action is not important; what is important is the policy—the sequence of correct actions to reach the goal. There is no such thing as the best action in any intermediate state; an action is good if it is part of a good policy.
In such a case, the machine learning program should be able to assess the goodness of policies and
learn from past good action sequences to be able to generate a policy. Such learning methods are called
reinforcement learning algorithms.
Reinforcement learning is an on-line learning procedure that rewards an action for its good output result
and punishes it for a bad output result. The evaluation of an output as good or bad depends on the specific
problem and the environment. For a control system, if the system continues to be in the desired region in
state space after an action, the output is judged as good, otherwise it is considered as bad. The reward/
penalty of an action is the reinforcement signal.
The adaptation of creatures to their environments results from the interaction of two processes, namely,
evolution and learning. Evolution is a slow stochastic process at the population level that determines
the basic structures of a species. Evolution operates on biological entities, rather than on individuals
themselves. At the other end, learning is a process of gradually improving an individual’s adaptation
capability to its environment by tuning the structure of the individual.
Evolution is based on the Darwinian model, also called the principle of natural selection, or survival
of the fittest, while learning is based on the human cognitive faculties. Evolutionary algorithms are
stochastic search methods that employ a search technique based on the Darwinian model, whereas neural
networks and fuzzy systems are learning methods based on human learning model.
Combinations of learning and evolution, embodied by evolving neural networks and evolving fuzzy
systems, have better adaptability to a dynamic environment.
Accuracy
The accuracy of a learning machine is dependent on its generalization capability. The aim of machine learning is rarely to replicate the training data; rather, it is to predict the right output for new cases. That is, we would like to be able to generate the right output for an input outside the training set; one for which the correct output is not known, but is to be predicted for decision making. How well a model trained on the training set predicts the right output for unseen examples in an operational situation is called generalization.
We assume that all the data (training data + new data in operational situation) are generated independently
from some unknown (but fixed) probability distribution W(x, y). This is a standard assumption in learning
theory; data generated this way is commonly referred to as iid (independent and identically distributed).
Our goal is to find a function g(◊) that will generalize well to unseen examples, that is, g(x) = y for
examples (x, y) other than the training examples, generated from W(x, y).
Generalization is a very important aspect of machine learning. Since it is a measure of how well the machine interpolates to points not used during training, the ultimate objective of machine learning is to produce a learner with low generalization error, that is, to minimize the true risk function
$$E_G(W, \mathbf{p}) = \int \left(g(\mathbf{x}\,|\,\mathbf{p}) - y\right)^2 dW(\mathbf{x}, y) \qquad (11.1)$$
where p are the adjustable parameters of the learning machine model.
Since W(x, y) is generally not known, p are found through minimization of the empirical risk function

E_T(D, p) = \frac{1}{P} \sum_{p=1}^{P} \big( g(x^{(p)} \,|\, p) - y^{(p)} \big)^2 \qquad (11.2)
over a finite training data set
D: \{x^{(p)}, y^{(p)}; \; p = 1, 2, \ldots, P\} \sim W(x, y) \qquad (11.3)
As P → ∞, the empirical error E_T approaches the generalization error E_G.
The aim of machine learning is, therefore, to learn the examples in the training set well, while providing
good generalization to examples not included in the training set. It is hoped that a small empirical
(training) error will also give a small true (generalization) error.
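To make this concrete, a minimal Python (NumPy) sketch of the empirical risk E_T of Eqn (11.2) follows; the model g and the toy data are illustrative, not from the text:

```python
import numpy as np

def empirical_risk(g, p, X, y):
    """Empirical risk E_T(D, p) of Eqn (11.2): the mean squared error of
    the model g(x | p) over the P training pairs in D."""
    predictions = np.array([g(x, p) for x in X])
    return np.mean((predictions - y) ** 2)

# Illustrative linear model g(x | p) = p^T x on a toy training set (P = 3)
g = lambda x, p: np.dot(p, x)
X = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.0]])
y = np.array([3.0, -0.5, 2.0])
print(empirical_risk(g, np.array([1.0, 1.0]), X, y))
```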
We can measure the generalization ability of a machine if we have access to data outside
the training set. We simulate this by dividing, often randomly, the training set we have into two parts.
We use one part for training (i.e., for building a learning machine); the other part, called the test set (validation set), is used for testing the generalization ability. In the validation set, for each input x,
the output y is known, but these pairs of data are unknown to the machine, since they have not been
used during training. The inputs from the validation set are given to the trained machine. The machine
outputs predictions ŷ, which are then compared with the actual values y; the empirical error is then
calculated as a measure of generalization capability of the machine. Assuming large enough training and
test sets, the machine that is the most accurate on the test set is the best. After training and testing, the
machine is ready for use with the learned parameters ‘frozen’. The machine with low empirical error is
expected to give reasonable outputs for the data it has not seen before. Research shows a dependence
of generalization error on the size of the training set, the machine architecture, and the number of free
parameters in the machine model.
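A minimal sketch of such a split, with illustrative names and NumPy assumed:

```python
import numpy as np

def split_data(X, y, train_fraction=0.8, seed=0):
    """Randomly divide the available data into a training part (for
    building the learning machine) and a validation/test part (for
    measuring generalization), as described above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(train_fraction * len(X))
    return (X[idx[:n_train]], y[idx[:n_train]],   # training set
            X[idx[n_train:]], y[idx[n_train:]])   # validation set
```

The empirical error measured on the held-out part then serves as the estimate of the machine's generalization error.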
One learning machine design strategy aims at 100% accuracy in predicting the training examples.
While this is sometimes a reasonable design strategy, in fact it can lead to difficulties when there is noise
in the training data, or the number of training examples is too small to produce a representative sample
of W(x, y). In either of these cases, this design approach can produce a machine that overfits the training
examples. We will say that a machine overfits the training examples if some other machine that fits the
training examples less well actually performs better on the test data.
Overfitting of a training set means that the machine memorizes the training examples, and consequently
loses the ability to generalize. That is, machines that overfit cannot predict correct output for data patterns
not seen during training. Overfitting occurs when the machine architecture is too complex (a neural network with a large number of weights, a fuzzy logic model with a large number of rules, etc.) compared to the complexity of the function underlying the data. If the machine model is too complex, the data is insufficient to constrain it, and we may end up with a bad prediction function. Likewise, if there is noise in the data, an overly complex model may learn not only the underlying function but also the noise, resulting in a bad fit. In such a case, having more training data helps, but only up to a certain point.
Figure 11.1 illustrates the impact of overfitting in a typical application of machine learning. The horizontal
axis of the plot indicates the complexity of the machine. The vertical axis indicates the accuracy of
predictions made by the machine. The solid line shows the accuracy of the machine over the training
examples, while the broken line indicates the accuracy measured over the test examples. Predictably,
the accuracy of the machine over the training examples increases monotonically as the machine grows in
complexity. However, the accuracy over the test examples first increases then decreases.
If the machine is trained for too long, the excess free parameters start to memorize all the training
patterns, including the noise contained in the training set. Figure 11.2 presents an illustration of training
and generalization errors as a function of training time. From the start of the training, both the training
and generalization errors decrease— usually exponentially. In the case of oversized machines, there is a
Fig. 11.2 Training and generalization errors as a function of training time
point at which the training error continues to decrease while the generalization error starts to increase. This is the point of overfitting. The training should stop as soon as an increase in generalization error is observed.
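This stopping rule can be sketched as follows; train_one_epoch and validation_error are hypothetical callables standing in for any concrete learning machine:

```python
def train_with_early_stopping(train_one_epoch, validation_error,
                              max_epochs=1000):
    """Stop training as soon as the generalization (validation) error
    starts to increase, per the discussion of Fig. 11.2."""
    best = float("inf")
    for epoch in range(max_epochs):
        train_one_epoch()           # one sweep through the training set
        err = validation_error()    # error on data unseen during training
        if err > best:              # generalization error rising:
            break                   # the point of overfitting
        best = err
    return epoch, best              # stopping epoch and best error seen
```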
Machine learning tasks based on real-world data are unlikely to find the noise-free data assumption
tenable. Also W(x, y) is generally unknown; empirical evidence shows that the available finite amount
of data is insufficient to represent the distribution of total data in operational situations. Therefore,
whenever the prediction comes from inductive learning, it will not, in general, be provably correct. The
question is how to improve the generalization performance. A great deal of research has gone into clever
engineering tricks and heuristics to aid in the design of learning machines which will not overfit on a
given data set, consequently giving a better generalization performance.
The computational complexity of a learning machine is directly influenced by the following design
choices:
1. The machine architecture
The learning machine architecture (soft computing methodologies) is an evolving collection of
representations of the target function in the learning task. Some of the important architectures we will
be exploring in the book are
neural networks;
fuzzy logic models; and
kernel functions (support vector machines).
These architectures are all competitive for a given learning task. The computational complexity of these models varies, but complexity must be balanced against accuracy. More complex models of these architectures usually yield better accuracy, but only up to a point; a trade-off is thus required in the selection of architecture.
2. The number of free parameters
The larger the size of the parameter vector p of a model, the more calculations are needed to predict outputs after training, and the more learning calculations are needed per presentation of the training patterns.
3. The training set size
The larger the training set size, the more patterns are presented for training. Therefore, the total number
of learning calculations increases.
4. Complexity of optimization method
As will be discussed later in this book, sophisticated optimization algorithms have been developed
to obtain optimum values of machine model parameters. Optimization improves accuracy. This
sophistication comes, however, at the cost of increased computational complexity.
An acceptable trade-off between computational complexity and accuracy is a very important issue in the
design of learning systems.
Convergence
The convergence characteristics of a learning machine are described by its ability to converge to specified error levels (usually in terms of the generalization error). Convergence analysis is largely an empirical exercise, though rigorous theoretical analysis has been done for some learning machine architectures.
Neural networks made their first significant appearance in 1943, when Warren McCulloch and Walter Pitts published their study in this field. They suggested a simple neuron model (known today as the MP
artificial neural model) and implemented it as an electrical circuit. In 1949, Donald Hebb highlighted
the connection between psychology and physiology, pointing out that a neural pathway is reinforced
each time it is used. Hebb’s learning rule, as it is sometimes known, is still used and quoted today.
Improvements in hardware and software in the 1950s ushered in the age of computer simulation. It
became possible to test theories about nervous system functions. Research expanded; neural network
terminology came into its own.
The perceptron is the earliest of the neural network paradigms. Frank Rosenblatt built this learning
machine device in hardware in 1958 and caused quite a stir.
The perceptron has been a fundamental building block for more powerful models, such as the ADALINE
(ADAptive LINear Elements) and MADALINE (Multiple ADALINEs in parallel), developed by Bernard
Widrow and Marcian Hoff in 1959. Their learning rule, sometimes known as Widrow–Hoff rule, was
simple yet elegant.
Affected by the predominately rosy outlook of the time, some people exaggerated the potential of neural
networks. Biological comparisons were blown out of proportion. In 1969, in the midst of many outrageous
claims, Marvin Minsky and Seymour Papert published ‘Perceptrons’, an influential book condemning
Rosenblatt’s perceptron. The limitations of the perceptron were significant; the charge was that it could
not solve any ‘interesting’ problems. It brought much of the activity in neural network research to a halt.
Nevertheless, a few dedicated scientists, such as Teuvo Kohonen and Stephen Grossberg, continued
their efforts. In 1982, John Hopfield introduced a recurrent-type neural network that was based on the
interaction of neurons through a feedback mechanism. His approach was based on Hebb’s learning rule.
The back-propagation learning rule arrived on the neural-network scene at approximately the same time
from several independent sources (Werbos; Parker; Rumelhart, Hinton and Williams). Essentially, a
refinement of the Widrow–Hoff learning rule, the backpropagation learning rule provided a systematic
means for training multilayer networks, thereby overcoming the limitations presented by Minsky.
Minsky’s appraisal has proven excessively pessimistic; networks now routinely solve many of the
problems that he posed in his book.
Research in the 1980s triggered the present boom in the scientific community. New and better models are
being proposed, and the limitations of some of the ‘old’ models are being chipped away. A number of
today’s technological problems are in areas where neural-network technology has demonstrated potential:
speech processing, image processing and pattern recognition, time-series prediction, real-time control and
others.
As research on neural networks evolves, more and more types of networks are being introduced, while less and less emphasis is placed on the connection to biological neural networks. In fact, the neural networks that are most popular today bear very little resemblance to the brain, and one might argue that it would be fairer to regard them simply as a discipline within statistics.
The application of artificial neural networks in closed-loop control has recently been rigorously studied.
One property of these networks, central to most control applications, is that of function approximation.
Such networks can generate input/output maps which can approximate any continuous function with the
required degree of accuracy. This emerging technology has given us control design techniques that do
not depend on parametrized mathematical models. Neural networks are used to estimate the unknown
nonlinear functions; the controller formulation uses these estimated results.
When neural networks are used for control of systems, it is important that results and claims are based
on firm analytical foundations. This is especially important when these control systems are to be used in areas where the cost of failure is very high, for example, where human life is threatened, as in aircraft and nuclear plants. It is also true that without a good theoretical framework, it is unlikely that the research
in the discipline will progress very far, as intuitive invention and tricks cannot be counted on to provide
good solutions to controlling complex systems under a high degree of uncertainty. Strong theoretical
results guaranteeing control system properties such as stability are still to come, although promising
results have been reported recently of progress in special cases. The potential of neural networks in
control systems clearly needs to be further explored and both, theory and applications, need to be further
developed.
The rest of the chapter gives a gentle introduction to the application of neural networks in control
systems. A single chapter can in no way do justice to the multitude of interesting neural network results that have appeared in the literature. Not only would far more space be required; in the time taken to detail current results, new results would certainly arise. Instead of trying to cover a large spectrum of such a
vast field, we will focus on what is generally regarded as the core of the subject. This chapter is meant to
be a stepping-stone that could lead interested readers on to other books for additional information on the
current status, and future trends of the subject.
[Figure: a biological neuron, with the nucleus, dendrites, synaptic terminals and a synapse labeled]
Neurons are connected to each other via their axons and dendrites. Signals are sent through the axon of one
neuron to the dendrites of other neurons. Hence dendrites may be represented as the inputs to the neuron,
and the axon as its output. Note that each neuron has many inputs through its multiple dendrites, whereas it
has only one output through its single axon. The axon of each neuron forms connections with the dendrites
of many other neurons, with each branch of the axon meeting exactly one dendrite of another cell at what
is called a synapse. Actually, the axon terminals do not quite touch the dendrites of the other neurons,
but are separated by a very small distance of between 50 and 200 angstroms. This separation is called
the synaptic gap.
A conventional computer is typically a single processor acting on explicitly programmed instructions.
Programmers break tasks into tiny components, to be performed in sequence rapidly. On the other hand,
the brain is composed of ten billion or so neurons. Each nerve cell can interact directly with up to
200,000 other neurons (though 1,000 to 10,000 is typical). In place of the explicit rules used by a conventional computer, it is the pattern of connections between the neurons in the human brain that seems to embody the ‘knowledge’ required for carrying out various information-processing tasks. In the human brain, there is no equivalent of a CPU that is in overall control of the actions of all the neurons.
The brain is organized into different regions, each responsible for different functions. The largest parts of
the brain are the cerebral hemispheres, which occupy most of the interior of the skull. They are layered
structures; the most complex being the outer layer, known as the cerebral cortex, where the nerve cells
are extremely densely packed to allow greater interconnectivity. Interaction with the environment is
through the visual, auditory and motion control (muscles and glands) parts of the cortex.
In essence, neurons are tiny electrophysiological information-processing units which communicate with
each other through electrical signals. The synaptic activity produces a voltage pulse on the dendrite which
is then conducted into the soma. Each dendrite may have many synapses acting on it, allowing massive
interconnectivity to be achieved. In the soma, the dendrite potentials are added. Note that neurons are
able to perform more complex functions than simple addition on the inputs they receive, but considering
a simple summation is a reasonable approximation.
When the soma potential rises above a critical threshold, the axon will fire an electrical signal. This sudden burst of electrical energy along the axon is called the action potential, and has the form of an electrical impulse or spike that lasts about 1 msec. The magnitude of the action potential is constant and is not related to the electrical stimulus (soma potential). However, neurons typically respond to a stimulus by firing not just one, but a barrage of successive action potentials. What varies is the frequency of axonal activity. Neurons can fire between 0 and 1500 times per second. Thus, information is encoded in the nerve signals as the instantaneous frequency of action potentials and the mean frequency of the signal.
A synapse couples the axon with the dendrite of another cell. The synapse releases chemicals called neurotransmitters when its potential is raised sufficiently by the action potential. It may take the arrival of more than one spike before the synapse is triggered. The neurotransmitters that are released by the
synapse diffuse across the gap and chemically activate gates on the dendrites, which, when open, allow
charged ions to flow. It is this flow of ions that alters the dendritic potential and provides voltage pulse
on the dendrite, which is then conducted into the neighboring neuron body. At the synaptic junction,
the number of gates that open on the dendrite depends upon the number of neurotransmitters released.
It also appears that some synapses excite the dendrites they affect, whilst others serve to inhibit it. This
corresponds to altering the local potential of the dendrite in a positive or negative direction.
Synaptic junctions alter the effectiveness with which the signal is transmitted; some synapses are good
junctions and pass a large signal across, whilst others are very poor, and allow very little through.
Essentially, each neuron receives signals from a large number of other neurons. These are the inputs to
the neuron which are ‘weighted’. That is, some signals are stronger than others. Some signals excite (are
positive), and others inhibit (are negative). The effects of all weighted inputs are summed. If the sum is
equal to or greater than the threshold for the neuron, the neuron fires (gives output). This is an ‘all-or-
nothing’ situation. Because the neuron either fires or doesn’t fire, the rate of firing, not the amplitude,
conveys the magnitude of information.
The ease of transmission of signals is altered by activity in the nervous system. The neural pathway
between two neurons is susceptible to fatigue, oxygen deficiency, and agents like anesthetics. These
events create a resistance to the passage of impulses. Other events may increase the rate of firing. This
ability to adjust signals is a mechanism for learning.
After carrying a pulse, an axon fiber is in a state of complete non-excitability for a certain time called
the refractory period. For this time interval, the nerve does not conduct any signals, regardless of the
intensity of excitation. Thus, we may divide the time scale into consecutive intervals, each equal to the
length of the refractory period. This will enable a discrete-time description of the neurons’ performance
in terms of their states at discrete-time instances.
11.5.2 The Artificial Neuron
Artificial neurons bear only a modest resemblance to real things. They model approximately three of the
processes that biological neurons perform (there are at least 150 processes performed by neurons in the
human brain).
An artificial neuron
(i) evaluates the input signals, determining the strength of each one;
(ii) calculates a total for the combined input signals and compares that total to some threshold level;
and
(iii) determines what the output should be.
Each input will be given a relative weighting, which will affect the impact of that input (Fig. 11.5). This
is something like varying synaptic strengths of the biological neurons—some inputs are more important
than others in the way they combine to produce an impulse. Weights are adaptive coefficients within the network that determine the intensity of the input signal. In fact, this adaptability of connection strength
is precisely what provides neural networks their ability to learn and store information, and, consequently,
is an essential element of all neuron models.
Fig. 11.5 A neuron with weighted inputs, forming the total input Σ wi xi
Excitatory and inhibitory inputs are represented simply by positive or negative connection weights,
respectively. Positive inputs promote the firing of the neuron, while negative inputs tend to keep the
neuron from firing.
Mathematically, we could look at the inputs and the weights on the inputs as vectors:

\sum_{i=1}^{n} w_i x_i = w^T x \qquad (11.4c)
Although most neuron models sum their input signals in basically the same manner, as described above,
they are not all identical in terms of how they produce an output response from this input. Artificial
neurons use an activation function, often called a transfer function, to compute their activation as a
function of total input stimulus. Several different functions may be used as activation functions, and, in
fact, the most distinguishing feature between existing neuron models is precisely which transfer function
they employ.
We will, shortly, take a closer look at the activation functions. We first build a neuron model, assuming
that the transfer function has a threshold behavior, which is, in fact, the type of response exhibited
by biological neurons: when the total stimulus exceeds a certain threshold value θ, a constant output
is produced, while no output is generated for input levels below the threshold. Figure 11.6a shows this
neuron model. In this diagram, the neuron has been represented in such a way that the correspondence
of each element with its biological counterpart may be easily seen.
Equivalently, the threshold value can be subtracted from the weighted sum and the resulting value
compared to zero; if the result is positive, then output a 1, else output a 0. This is shown in Fig. 11.6b;
note that the shape of the function is the same but now the jump occurs at zero. The threshold effectively
adds an offset to the weighted sum.
An alternative way of achieving the same effect is to take the threshold out of the body of the model
neuron, and connect it to an extra input value that is fixed to be ‘on’ all the time. In this case, rather than
subtracting the threshold value from the weighted sum, the extra input of +1 is multiplied by a weight
and added in a manner similar to other inputs—this is known as biasing the neuron. Figure 11.6c shows
a neuron model with a bias term. Note that we have taken constant input ‘1’ with an adaptive weight ‘b’
in our model.
The first formal definition of a synthetic neuron model, based on the highly simplified considerations
of the biological neuron, was formulated by McCulloch and Pitts (1943). The two-port model (inputs-
activation value-output mapping) of Fig. 11.6 is essentially the MP neuron model. It is important to look
at the features of this unit—which is an important and popular neural network building block.
It is a simple unit, thresholding a weighted sum of its inputs to get an output. It specifically does not
take any account of the complex patterns and timings of the actual nervous activity in real neural
systems, nor does it have any of the complicated features found in the body of biological neurons. This
ensures its status as a model, and not a copy of a real neuron.
The MP artificial neuron model involves two important processes:
(i) Forming net activation by combining inputs. The input values are amalgamated by a weighted
additive process to achieve the neuron activation value a (refer to Fig. 11.6c).
(ii) Mapping this activation value a into the neuron output ŷ. This mapping from activation to output
may be characterized by an ‘activation’ or ‘squashing’ function.
For the activation functions that implement input-to-output compression or squashing, the range of the
function is less than that of the domain. There is some physical basis for this desirable characteristic.
Recall that in a biological neuron, there is a limited range of output (spiking frequencies). In the MP
model, where DC levels replace frequencies, the squashing function serves to limit the output range.
The squashing function shown in Fig. 11.7a limits the output values to {0, 1}, while that in Fig. 11.7b
limits the output value to {–1, 1}. The activation function of Fig. 11.7a is called unipolar, while that in
Fig. 11.7b is called bipolar (both positive and negative responses of neurons are produced).
Fig. 11.7 Squashing functions: (a) unipolar; (b) bipolar
11.5.3 Mathematical Model of the Neuron
From the above discussion, it is evident that the artificial neuron is really nothing more than a simple
mathematical equation, for calculating an output value from a set of input values. From now onwards, we
will be more on a mathematical footing; the reference to biological similarities will be reduced. Therefore,
names like a processing element, a unit, a node, a cell, etc., may be used for the neuron. A neuron model
(a processing element/a unit/a node/a cell of our neural network), will be represented as follows:
The input vector
x = [x1 x2 ... xn]T;
the connection weight vector
wT = [w1 w2 ... wn];
the unity-input weight b (bias term), and the output ŷ of the neuron are related by the following equation:
\hat{y} = \sigma(w^T x + b) = \sigma\Big( \sum_{i=1}^{n} w_i x_i + b \Big) \qquad (11.5)
Fig. 11.8 Neuron model: (a) with the bias term b as a separate input; (b) with the bias absorbed into the input vector (x0 = 1, w0 = b)
The bias term may be absorbed in the input vector itself as shown in Fig. 11.8b.
\hat{y} = \sigma(a) = \sigma\Big( \sum_{i=0}^{n} w_i x_i \Big); \quad w_0 = b, \; x_0 = 1 \qquad (11.6a)

= \sigma\Big( \sum_{i=1}^{n} w_i x_i + w_0 \Big) = \sigma(w^T x + w_0) \qquad (11.6b)
In the literature, this model of an artificial neuron is also referred to as a perceptron (the name was given
by Rosenblatt in 1958).
The expressions for the neuron output ŷ are referred to as the cell recall mechanism. They describe how
the output is reconstructed from the input signals and the values of the cell parameters.
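As an illustration of this recall mechanism, a small Python sketch of Eqns (11.5) and (11.6a), with illustrative numbers:

```python
import numpy as np

def perceptron_recall(w, b, x):
    """Cell recall of Eqn (11.5): hard-limit the weighted sum w^T x + b."""
    a = np.dot(w, x) + b
    return 1 if a >= 0 else 0

# Eqn (11.6a): the bias absorbed as weight w0 = b on a constant input x0 = 1
w_aug = np.array([0.5, 1.0, -2.0])        # [w0, w1, w2] with w0 = b
x_aug = np.array([1.0, 0.8, 0.3])         # [x0, x1, x2] with x0 = 1
print(perceptron_recall(w_aug, 0.0, x_aug))   # a = 0.7, so the neuron fires: 1
```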
The artificial neural systems under investigation and experimentation today, employ a variety of activation
functions that have more diversified features than the one presented in Fig. 11.7. Below, we introduce the
main activation functions that will be used later in this chapter.
The MP neuron model shown in Fig. 11.6 used the hard-limiting activation function. When artificial
neurons are cascaded together in layers (discussed in the next section), it is more common to use a soft-
limiting activation function. Figure 11.9a shows a possible bipolar soft-limiting semilinear activation
function.

Fig. 11.9 Soft-limiting semilinear activation functions: (a) bipolar; (b) unipolar

This function is, more or less, of the ON-OFF type, as before, but has a sloping region in
the middle. With this smooth thresholding function, the value of the output will be practically 1 if the
weighted sum exceeds the threshold by a huge margin and, conversely, it will be practically –1 if the
weighted sum is much less than the threshold value. However, if the threshold and the weighted sum are
almost the same, the output from the neuron will have a value somewhere between the two extremes. This
means that the output from the neuron can be related to its inputs in a more useful and informative way.
Figure 11.9b shows a unipolar soft-limiting function.
For many training algorithms (discussed in later sections), the derivative of the activation function
is needed; therefore, the activation function selected must be differentiable. The logistic or sigmoid
function, which satisfies this requirement, is the most commonly used soft-limiting activation function.
The sigmoid function (Fig. 11.10a):

\sigma(a) = \frac{1}{1 + e^{-\lambda a}} \qquad (11.7)

is continuous and varies monotonically from 0 to 1 as a varies from -∞ to ∞. The gain of the sigmoid, λ, determines the steepness of the transition region. Note that as the gain approaches infinity, the sigmoid
approaches a hard-limiting nonlinearity. One of the advantages of the sigmoid is that it is differentiable.
This property had a significant impact historically, because it made it possible to derive a gradient search
learning algorithm for networks with multiple layers (discussed in later sections).
Fig. 11.10 (a) Sigmoid function; (b) hyperbolic tangent function
The sigmoid function is unipolar. A bipolar function with similar characteristics is a hyperbolic tangent
(Fig. 11.10b):

\sigma(a) = \frac{1 - e^{-\lambda a}}{1 + e^{-\lambda a}} = \tanh\big(\tfrac{1}{2}\lambda a\big) \qquad (11.8)
The biological basis of these activation functions can easily be established. It is known that neurons
located in different parts of the nervous system have different characteristics. The neurons of the ocular
motor system have a sigmoid characteristic, while those located in the visual area have a Gaussian
characteristic. As we said earlier, anthropomorphism can lead to misunderstanding when the metaphor is
carried too far. It is now a well-known result in neural network theory that a two-layer neural network is
capable of solving any classification problem. It has also been shown that a two-layer network is capable of
solving any nonlinear function approximation problem [138, 141]. This result does not require the use of
sigmoid nonlinearity. The proof assumes only that nonlinearity is a continuous, smooth, monotonically
increasing function that is bounded above and below. Thus, numerous alternatives to sigmoid could be
used, without a biological justification. In addition, the above result does not require that the nonlinearity
be present in the second (output) layer. It is quite common to use linear output nodes since this tends to
make learning easier. In other words,
\sigma(a) = \lambda a; \quad \lambda > 0 \qquad (11.9)
is used as an activation function in the output layer. Note that this function does not ‘squash’ (compress)
the range of output.
Our focus in this chapter will be on two-layer perceptron networks with the first (hidden) layer having the log-sigmoid

\sigma(a) = \frac{1}{1 + e^{-a}} \qquad (11.10a)

or tan-sigmoid

\sigma(a) = \frac{1 - e^{-a}}{1 + e^{-a}} \qquad (11.10b)

activation function, and the second (output) layer having the linear activation function

\sigma(a) = a \qquad (11.11)
The log-sigmoid function has historically been a very popular choice; since it is related to the tan-sigmoid by the simple transformation

\sigma_{\text{log-sigmoid}} = (\sigma_{\text{tan-sigmoid}} + 1)/2 \qquad (11.12)
both of these functions are in use in neural network models.
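Both functions, and the transformation of Eqn (11.12), are easy to verify numerically; a short Python sketch:

```python
import numpy as np

def log_sigmoid(a):
    """Unipolar activation of Eqn (11.10a)."""
    return 1.0 / (1.0 + np.exp(-a))

def tan_sigmoid(a):
    """Bipolar activation of Eqn (11.10b); equals tanh(a/2), per Eqn (11.8)."""
    return (1.0 - np.exp(-a)) / (1.0 + np.exp(-a))

a = np.linspace(-5.0, 5.0, 11)
assert np.allclose(tan_sigmoid(a), np.tanh(a / 2.0))
assert np.allclose(log_sigmoid(a), (tan_sigmoid(a) + 1.0) / 2.0)  # Eqn (11.12)
```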
We have so far described two classical neuron models:
perceptron—a neuron with sigmoidal activation function (sigmoidal function is a softer version of
the original perceptron’s hard limiting or threshold activation function); and
linear neuron—a neuron with linear activation function.
from the network. This last layer of the network is the output layer. The layers that are placed between
the input terminals and the output layer are called hidden layers.
Some authors refer to the input terminals as the input layer of the network. We do not use that convention
since we wish to avoid ambiguity. Note that each neuron in a network makes its computation based
on the weighted sum of its inputs. There is one exception to this rule: the role of the ‘input layer’ is
somewhat different as units in this layer are used only to hold input data, and to distribute the data to
units in the next layer. Thus, the ‘input layer’ units perform no function—other than serving as a buffer,
fanning out the inputs to the next layer. These units do not perform any computation on the input data,
and their weights, strictly speaking, do not exist.
The network outputs are generated from the output layer units. The output layer makes the network
information available to the outside world. The hidden layers are internal to the network and have no
direct contact with the external environment. There may be from zero to several hidden layers. The
network is said to be fully connected if every output from a single node is channeled to every node in
the next layer.
The number of input and output nodes needed for a network will depend on the nature of the data
presented to the network, and the type of the output desired from it, respectively. The number of neurons
to use in a hidden layer, and the number of hidden layers required for processing a task, is less obvious.
Further comments on this question will appear in a later section.
A Layer of Neurons
A one-layer network with n inputs and q neurons is shown in Fig. 11.11. In the network, each input xi;
i = 1, 2, ..., n is connected to the jth neuron input through the weight wji; j = 1, 2, ..., q. The jth neuron
has a summer that gathers its weighted inputs to form its own scalar output

a_j = \sum_{i=1}^{n} w_{ji} x_i

Fig. 11.11 A single layer of q neurons
q × n weight matrix

W = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1n} \\ w_{21} & w_{22} & \cdots & w_{2n} \\ \vdots & \vdots & & \vdots \\ w_{q1} & w_{q2} & \cdots & w_{qn} \end{bmatrix} = \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_q^T \end{bmatrix} \qquad (11.14b)
any continuous function. In real life, we are faced with nonlinear problems, and multilayer neural
network structures have the capability of providing solutions to these problems.
If the relationship between the input and output signals is linear, or can be treated as such, a single layer
neural network having linear neurons is the best solution. “Adaptive Linear Element” (Adaline) is the
name given to a neuron with linear activation function and a learning rule for adapting the weights.
Single-layer adaline networks have a capacity for a wide range of applications, whenever the problem at
hand can be treated as linear.
A two-layer NN, depicted in Fig. 11.13, has n inputs and two layers of neurons, with the first layer
having m neurons that feed into the second layer having q neurons. The first layer is known as the hidden
layer, with m the number of hidden-layer neurons; the second layer is known as the output layer, with
q the number of output-layer neurons. It is common for different layers to have different numbers of
neurons. Note that the outputs of the hidden layer are inputs to the following layer (output layer); and
the network is fully connected. Neural networks with multiple layers are called Multi-layer Perceptrons
(MLP); their computing power is significantly enhanced over the one-layer NN.
All continuous functions (exhibiting certain smoothness) can be approximated to any desired accuracy
with a network of one hidden layer of sigmoidal hidden units, and a layer of linear output units [141]. Does
it mean that there is no need to use more than one hidden layer and/or mix different types of activation
functions? This is not quite true. It may be that the accuracy can be improved using a more sophisticated
network architecture. In particular, when the complexity of the mapping to be learned is high, it is
likely that the performance can be improved. However, since implementation and training of the network
become more complicated, it is customary to apply only a single hidden layer of similar activation
functions, and an output layer of linear units. Our focus is on two-layer feedforward neural networks with
Fig. 11.13 A two-layer neural network: n inputs, m sigmoidal hidden units, and q output units
sigmoidal/hyperbolic tangent hidden units and linear output units. This is probably the most commonly
used network architecture, as it works quite well in many practical applications.
Defining the hidden-layer outputs z_l allows one to write

z_l = \sigma\Big( \sum_{i=1}^{n} w_{li} x_i + w_{l0} \Big) = \sigma(w_l^T x + w_{l0}); \quad l = 1, 2, \ldots, m \qquad (11.16)

where

w_l^T \triangleq [w_{l1} \; w_{l2} \; \cdots \; w_{ln}]
In vector-matrix notation, the hidden layer in Fig. 11.13 has the m × 1 output vector

z = [z_1 \; z_2 \; \cdots \; z_m]^T, \qquad (11.17a)

and the m × n weight matrix

W = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1n} \\ w_{21} & w_{22} & \cdots & w_{2n} \\ \vdots & \vdots & & \vdots \\ w_{m1} & w_{m2} & \cdots & w_{mn} \end{bmatrix} \qquad (11.17b)
The jth output-layer unit forms

\hat{y}_j = \sum_{l=1}^{m} v_{jl} z_l + v_{j0} = v_j^T z + v_{j0}; \quad j = 1, 2, \ldots, q \qquad (11.20)

where

v_j^T \triangleq [v_{j1} \; v_{j2} \; \cdots \; v_{jm}]
The output vector

\hat{y} = [\hat{y}_1 \; \hat{y}_2 \; \cdots \; \hat{y}_q]^T \qquad (11.21a)

may therefore be written as

\hat{y} = V z + v_0 = V\, G(W x + w_0) + v_0 \qquad (11.21b)

where G(·) applies the activation function σ(·) elementwise. Figure 11.14 shows the input–output map.

Fig. 11.14 Input–output map of the two-layer network
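The map of Eqn (11.21b) translates directly into code; a Python sketch with illustrative layer sizes (n = 3, m = 4, q = 2):

```python
import numpy as np

def two_layer_forward(W, w0, V, v0, x):
    """Eqn (11.21b): y_hat = V G(W x + w0) + v0, with G(.) applying the
    log-sigmoid elementwise to the hidden activations (Eqn (11.16))."""
    z = 1.0 / (1.0 + np.exp(-(W @ x + w0)))   # hidden-layer outputs z
    return V @ z + v0                          # linear output layer, Eqn (11.20)

rng = np.random.default_rng(0)
W, w0 = rng.standard_normal((4, 3)), rng.standard_normal(4)
V, v0 = rng.standard_normal((2, 4)), rng.standard_normal(2)
print(two_layer_forward(W, w0, V, v0, np.array([0.1, -0.5, 2.0])))
```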
It should be noted that many real-world problems, which one might think would require recurrent
architectures for their solution, turn out to be solvable with feedforward architectures as well. A
multilayer feedforward network, which realizes a static map can represent the input/output behavior of
a dynamic system. For this to be possible, one must provide the neural network with information about
the history of the system—typically, delayed inputs and outputs. How much history is needed, depends
on the desired accuracy. There is a trade-off between accuracy and computational complexity of training,
since the number of inputs used, affects the number of weights in the neural network—and subsequently,
the training time (Section 11.11 will give more details). One sometimes starts with as many delayed
signals as the order of the system, and then modifies the network accordingly. It also appears that using
a two hidden-layer network—instead of one hidden layer—has certain computational advantages. The
number of neurons in the hidden layer(s) is typically chosen based on empirical criteria, and one may
iterate over a number of networks to determine a neural network that has a reasonable number of neurons
and accomplishes the desired degree of approximation.
From numerous practical applications published over the past decade, there seems to be substantial
evidence that multilayer feedforward networks possess an impressive ability to perform reasonably well
in most cases of practical interest. Lately, there have also been some theoretical results that attempt to
explain the reasons for the success [138].
Note further that, though the result says ‘there exists an NN that approximates f(x)’, it does not show how
to determine the required number of units in the hidden layer. The issue of finding the required number of
units in the hidden layer such that an NN does indeed approximate a given function f(x) closely enough, is
not an easy one (If the function approximation is to be carried out in the context of a dynamic closed-loop
feedback control scheme, the issue is thornier and is discussed in subsequent sections). This issue has been
addressed in the literature [138, 141], and a significant result has been derived about the approximation
capabilities of two-layer networks when the function to be approximated exhibits a certain smoothness.
Unfortunately, the result is difficult to apply for selecting the number of hidden units. The guidelines for selecting the appropriate number of hidden neurons are rather empirical at the moment. To avoid a large number of neurons and the correspondingly prohibitive training times, a smaller number of hidden-layer neurons is often used in the first trial; one then increases accuracy by adding more hidden neurons. An excessively large number of hidden units may lead to poor generalization, a key feature of the performance of an NN.
Because of the above-mentioned results, one might think that there is no need for using more than
one hidden layer, and/or different types of activation functions. This is not quite true: it may be that
accuracy can be improved using a more sophisticated network architecture. In particular, when the
complexity of the mapping to be learned is high (e.g., functions with discontinuities), it is likely that
the performance can be improved. Experimental evidence tends to show that using a two hidden-layer
network for continuous functions sometimes has advantages over a one hidden-layer network, as the former requires shorter training times.
input, the network automatically adjusts or adapts the strengths of the connections between processing
elements. The method used for the adjusting process is called the learning rule.
Neural networks deal only with numeric input data. Therefore, we must convert or encode information
from the external environment to numeric data form. Additionally, it is often necessary to scale data.
Inhibitory inputs are just as important as excitatory inputs. The input scheme should adequately allow for both types (i.e., allow positive and negative weights). A provision is also usually made for a constant-source input to serve as an offset or bias term for the transfer or activation function.
The numeric output data of a neural network will, likewise, require decoding and scaling to make it
compatible with the external environment.
Important characteristics of the network depend on:
(i) the transfer or activation functions of the processing elements;
(ii) the structure of the network (number of neurons, layers and interconnections); and
(iii) the learning rules of the network.
sections. In this section, we present algorithms for training linear machines. These algorithms will be
relevant to the study of multilayer neural networks and support vector machines in later sections.
For learning problems with a scalar output, only one neuron (perceptron) constitutes the linear learning
machine. For multiple outputs (represented by vector y), single layer of perceptrons gives us the required
structure. We present the algorithms for the scalar-output case; extension to vector-output case is
straightforward.
and the 1 × (n + 1) vector

w^T = [w_0 \; w_1 \; w_2 \; \cdots \; w_n],

we can express Eqn (11.22a) as

\hat{y} = w^T x \qquad (11.22b)
The learning task is to find the weights of the neuron (estimate the parameters of the proposed linear model
(11.22a)) using a finite number of measurements, observations, or patterns. The learning environment,
thus, comprises a training set of measured data (patterns):
{x( p), y( p); p = 1, 2, ..., P}
consisting of an input vector x and output or system response y, and the corresponding learning rule for the adaptation of the weights. (In the following discussion, the learning algorithm is given for the case of one neuron only, with a scalar desired output; the extension to a vector y is straightforward.) The choice of a performance criterion, or the measure of goodness of the estimation,
depends primarily on the data, and on the desired simplicity of the learning algorithm. In the neural
network field, the most widely used performance criterion (cost function) is the sum of error squares:
E = \frac{1}{2} \sum_{p=1}^{P} (e^{(p)})^2 = \frac{1}{2} \sum_{p=1}^{P} \big( y^{(p)} - \hat{y}^{(p)} \big)^2 \qquad (11.23)
(The constant ½ is used for computational convenience only. It gets cancelled out by the differentiation
required in the error minimization process).
It is obvious that network equation (11.22b) is exactly a linear model with (n + 1) linear parameters. So
we can employ the least-squares methods, discussed in Chapter 10, to minimize the error in the sense of
least squares.
A matrix of input vectors x^{(p)}; p = 1, 2, ..., P (called the data matrix X), and a vector y of the desired outputs y^{(p)}; p = 1, 2, ..., P, are introduced as follows:

X = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1^{(1)} & x_1^{(2)} & \cdots & x_1^{(P)} \\ x_2^{(1)} & x_2^{(2)} & \cdots & x_2^{(P)} \\ \vdots & \vdots & & \vdots \\ x_n^{(1)} & x_n^{(2)} & \cdots & x_n^{(P)} \end{bmatrix} = \big[ x^{(1)} \; x^{(2)} \; \cdots \; x^{(P)} \big] \qquad (11.24a)

y = \big[ y^{(1)} \; y^{(2)} \; \cdots \; y^{(P)} \big]^T
The weights w are required to satisfy the following equations (refer to Eqn (11.22b)):

y^{(1)} = w^T x^{(1)}, \quad y^{(2)} = w^T x^{(2)}, \quad \ldots, \quad y^{(P)} = w^T x^{(P)}

Therefore,

[y^{(1)} \; y^{(2)} \; \cdots \; y^{(P)}] = w^T [x^{(1)} \; x^{(2)} \; \cdots \; x^{(P)}]

that is, y^T = w^T X, or

y = X^T w, \quad \text{with dimensions } (P \times 1) = (P \times (n+1))\,((n+1) \times 1)
In the least-squares sense, the best or optimal w that minimizes E results from the equation (refer to Eqns (10.38)–(10.40)):

w = (X X^T)^{-1} X y = [w_0 \; w_1 \; w_2 \; \cdots \; w_n]^T \qquad (11.24b)

From a computational point of view, the calculation of the optimal weights requires the pseudo-inverse of the P × (n + 1) matrix X^T.
An alternative solution to this type of problem is the ‘Recursive Least Squares’ (RLS) algorithm (refer to
Eqns (10.45)). For the learning problem in hand, the steps given in Table 11.1 implement the algorithm.
Table 11.1 The RLS algorithm for training a linear neuron

Given is a set of P measured data points that are used for training:
{x^{(p)}, y^{(p)}; p = 1, 2, …, P}
consisting of the input pattern vector x = [1 \; x_1 \; x_2 \; \cdots \; x_n]^T and the desired response y. The weight vector w^T = [w_0 \; w_1 \; w_2 \; \cdots \; w_n] is to be constructed. Perform the following training steps for p = 1, 2, …, P.
Step 1 Set the iteration index k = 0. Initialize the weight vector w(0) = 0 and the matrix P(0) = αI_{(n+1)}, where α should be a very large number, say, of the order of 10^8 to 10^{15}.
Step 2 Apply the next training pair \{x^{(p)}, y^{(p)}\} (p = 1 for the first one) to the linear neuron.
Step 3 Calculate the error for the applied data pair: e(k + 1) = y(k + 1) - x^T(k + 1)\, w(k).
Step 4 Calculate the gain vector K(k) = P(k)\, x(k + 1)\, [1 + x^T(k + 1)\, P(k)\, x(k + 1)]^{-1}.
Step 5 Calculate the updated weight vector w(k + 1) = w(k) + K(k)\, e(k + 1).
Step 6 Find the matrix P(k + 1) = P(k) - K(k)\, x^T(k + 1)\, P(k).
Step 7 Stop the adaptation of the weights if the error is smaller than the predefined value. Otherwise set k ← k + 1 and go back to Step 2.
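A compact Python sketch of Table 11.1, using the same data-matrix convention as above (columns of X are the augmented patterns); for brevity, the error test of Step 7 is replaced by a single pass through the data:

```python
import numpy as np

def rls_train(X, y, alpha=1e10):
    """One pass of the RLS algorithm of Table 11.1 over the P pairs."""
    n1 = X.shape[0]
    w = np.zeros(n1)                 # Step 1: w(0) = 0
    P = alpha * np.eye(n1)           #         P(0) = alpha * I
    for p in range(X.shape[1]):
        x = X[:, p]                  # Step 2: next training pattern
        e = y[p] - x @ w             # Step 3: a priori error
        K = P @ x / (1.0 + x @ P @ x)       # Step 4: gain vector
        w = w + K * e                # Step 5: weight update
        P = P - np.outer(K, x @ P)   # Step 6: covariance update
    return w

X = np.array([[1.0, 1.0, 1.0, 1.0], [0.0, 1.0, 2.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(rls_train(X, y))               # converges toward [1. 2.]
```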
the weight parameters can easily be evaluated. These derivatives can then be used in a variety of gradient-
based optimization algorithms for finding the minimum of the error function. Here we consider one of
the simplest of such algorithms, the steepest descent algorithm, for a single linear neuron. We will later
extend the algorithm to multilayer neural networks with sigmoidal/linear units.
For a linear neuron of Fig. 11.17, the training set comprises the pairs
{x(p), y(p); p = 1, 2, ..., P}
The performance criterion (refer to Eqn (11.23)) is

E = \frac{1}{2} \sum_{p=1}^{P} \big( y^{(p)} - \hat{y}^{(p)} \big)^2 = \frac{1}{2} \sum_{p=1}^{P} (e^{(p)})^2 \qquad (11.25)
To understand the gradient descent algorithm, it is helpful to visualize the error space of possible weight
vectors and the associated values of the performance criterion (cost function). For a linear neuron, the error surface is parabolic, with a single global minimum.
Gradient descent search determines a weight vector that minimizes the cost function by starting with
an arbitrary initial weight vector, then repeatedly modifying it in small steps. At each step, the weight
vector is altered in the direction that produces the steepest descent along the error surface. This process
continues until the global minimum error is reached.
Let w_i(k) be the weights at iteration index k, with associated cost function E(k). The search direction -\partial E(k)/\partial w_i(k) takes us iteratively towards the minimum point, according to the rule

w_i(k + 1) = w_i(k) - \eta\, \frac{\partial E(k)}{\partial w_i(k)} \qquad (11.27)
where η, the positive step-size parameter, is taken as less than 1, and is called the learning rate.
The two most useful training protocols are batch and incremental. In batch training, all patterns are
presented to the network before learning takes place. The cost function E(k) for the batch training is
given by Eqn.(11.25). A variation of this approach is to update weights for each of the training pairs. This
is known as incremental mode of training. We first consider the incremental mode.
Incremental Training
For incremental training, the cost function at iteration k is

E(k) = \frac{1}{2}\big( y^{(p)} - \hat{y}^{(p)}(k) \big)^2 = \frac{1}{2}[e^{(p)}(k)]^2 \qquad (11.28a)

for the training pair (x^{(p)}, y^{(p)}), with

\hat{y}^{(p)}(k) = \sum_{i=1}^{n} w_i(k)\, x_i^{(p)} + w_0(k) \qquad (11.28b)
Note that the components x_i^{(p)} of the input vector x^{(p)}, and the desired output y^{(p)}, are not functions of the iteration index k.
The gradients with respect to the weights and bias are computed as follows:

\frac{\partial E(k)}{\partial w_i(k)} = e^{(p)}(k)\, \frac{\partial e^{(p)}(k)}{\partial w_i(k)} = -e^{(p)}(k)\, \frac{\partial \hat{y}^{(p)}(k)}{\partial w_i(k)} = -e^{(p)}(k)\, \frac{\partial}{\partial w_i(k)} \Big[ \sum_{i=1}^{n} w_i(k)\, x_i^{(p)} + w_0(k) \Big] = -e^{(p)}(k)\, x_i^{(p)}

\frac{\partial E(k)}{\partial w_0(k)} = -e^{(p)}(k)
The gradient descent algorithm becomes

w_i(k + 1) = w_i(k) + \eta\, e^{(p)}(k)\, x_i^{(p)}; \quad p = 1, 2, \ldots, P; \; i = 1, 2, \ldots, n \qquad (11.29a)

w_0(k + 1) = w_0(k) + \eta\, e^{(p)}(k) \qquad (11.29b)

In terms of vectors, this algorithm may be expressed as

w(k + 1) = w(k) + \eta\, e^{(p)}(k)\, x^{(p)}; \quad p = 1, 2, \ldots, P \qquad (11.30a)

w_0(k + 1) = w_0(k) + \eta\, e^{(p)}(k) \qquad (11.30b)
The incremental training algorithm iterates over the training examples p = 1, 2, …, P; at each iteration, the weights are altered as per Eqns (11.30). The sequence of these weight updates, iterated over all the P training examples, provides a reasonable approximation to gradient descent with respect to the batch of data. By making the value of η reasonably small, incremental gradient descent can be made to approximate true gradient descent arbitrarily closely.
At each presentation of the data (x(p), y(p)), one step of training algorithm is performed which updates
both the weights and the bias. An epoch is defined as one complete run through all the P associated pairs.
When an epoch has been completed, the pair (x(1), y(1)) is presented again and another run through all the
P pairs is performed. It is hoped that after many epochs, the output error will be small enough.
Note that the approach of teaching the network one fact at a time, from one data pair, does not work. All the weights, set so meticulously for one fact, could be drastically altered in learning the next fact. The network has to learn everything together, finding the best weight settings for the total set of facts.
Therefore, with incremental training, the training should stop only after an epoch has been completed.
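One epoch of incremental training, Eqns (11.30), as a Python sketch (rows of X hold the input patterns; the names are illustrative). Successive calls to the function constitute successive epochs:

```python
import numpy as np

def incremental_epoch(w, w0, X, y, eta=0.1):
    """One complete epoch: the weights and bias are updated after every
    presented pattern, per Eqns (11.30a) and (11.30b)."""
    for x, d in zip(X, y):
        e = d - (w @ x + w0)         # error for this pattern
        w = w + eta * e * x          # Eqn (11.30a)
        w0 = w0 + eta * e            # Eqn (11.30b)
    return w, w0
```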
Batch Training
Our true interest lies in learning to minimize the total error over the entire batch of training examples.
All P pairs are presented to the linear unit (one at a time) and a cumulative error is computed, after all
pairs have been presented. At the end of this procedure, the neuron weights and bias are updated once.
The result is as follows:

w(k + 1) = w(k) + \eta \sum_{p=1}^{P} e^{(p)}(k)\, x^{(p)} \qquad (11.31a)

w_0(k + 1) = w_0(k) + \eta \sum_{p=1}^{P} e^{(p)}(k) \qquad (11.31b)
In batch training, the iteration index corresponds to the number of times the set of P pairs has been presented and the cumulative error compounded; that is, k corresponds to the epoch number.
Compared with the incremental mode, the batch mode is an inherent averaging process. This leads to a better estimate of the gradients, and thus to better-behaved convergence. Both the incremental and batch training modes are commonly used in practice; the error surface in the MLP network case, as we will see shortly, may contain multiple local minima, and incremental training can sometimes avoid falling into these local minima.
The sum of error squares over all the training pairs is accumulated in the incremental mode of learning.
After the learning epoch (the sweep through all the training patterns) is completed (p = P), the total error
EP is compared with the acceptable (desired) value Edes; learning is terminated if EP < Edes. Otherwise a
new learning epoch is started. In the batch mode, weight updating is performed after the presentation of
all the training examples that constitute an epoch. The error EP is compared with Edes after each iteration
of a learning epoch.
The sum of error squares is not good as a stopping criterion, because E_P increases with the number of data pairs: the more data, the larger E_P. Scaling the error function gives a better stopping criterion. The root mean square error (RMSE) is a widely used scaled error function:

E_{RMS} = \sqrt{ \frac{1}{P} \sum_{p=1}^{P} (e^{(p)})^2 }
There will be no need to change the learning algorithm derived earlier: training is performed using the sum of error squares as the cost function (performance criterion), and RMSE is used as the stopping criterion during training. However, if desired, the batch-mode learning algorithm may use the average square error

E_{av} = \frac{1}{2P} \sum_{p=1}^{P} (e^{(p)})^2

as the cost function.
In the previous section, we dealt with the training of a linear neuron using least squares algorithm and
gradient descent algorithm. Both of these algorithms can easily be extended to a network with a layer
of linear neurons.
For real-world problems, one has no prior knowledge of what kind of dependency function between input x and output y is most suitable; a linear function may not lead to satisfactory performance. Trial-and-error design of a nonlinear function is a difficult task, but an inescapable necessity. What we seek is a clever choice of the nonlinearity. This is the approach of Multi-Layer Perceptron (MLP) networks. MLP networks can, at least in principle, provide the optimal solution to an arbitrary function approximation problem.
Consider the two-layer network shown in Fig. 11.13. There is nothing magical about this network; it
implements linear functions in a space where the inputs have been mapped nonlinearly using sigmoidal
transformation. It is natural to ask whether every nonlinear function can be implemented by a network
of this form. The answer is ‘yes’—any continuous function from input to output can be implemented
by a network of the form of Fig. 11.13, given sufficient number of hidden units. If x is fed to the input
terminals (including the bias), the ‘activation’ propagates in the feedforward direction, and the output
values of the hidden units are calculated. Each hidden unit is a perceptron by itself and applies the
nonlinear sigmoid to its weighted sum. If the hidden units’ outputs were linear, the hidden layer would
be of no use for function approximation; linear combination of linear combinations is another linear
combination.
One is not limited to MLP networks of the form of Fig. 11.13. More hidden layers with their own weights
can be placed after the first layer with sigmoid hidden units, thus calculating nonlinear transformations
of the first layer of hidden units and implementing more complex functions of the inputs. In practice,
we rarely go beyond one hidden layer since analyzing a network with many hidden layers is quite
complicated; but sometimes when the hidden layer contains too many hidden units, it may be sensible to
go to multiple hidden layers, preferring ‘long and narrow’ networks to ‘short and flat’ networks.
The key power provided by MLP networks is that they admit simple gradient-based training algorithms.
This is made possible because the sigmoid is a continuous and differentiable function, with the useful property that its derivative is easily expressed in terms of its output.
Consider a sigmoidal neuron of Fig. 11.8. The activation

a = \sum_{i=1}^{n} w_i x_i + w_0 \qquad (11.32a)

and the output

\hat{y} = \sigma(a) = \frac{1}{1 + e^{-a}} \qquad (11.32b)

The derivative

\frac{d}{da}\sigma(a) = \frac{d}{da}\left[ \frac{1}{1 + e^{-a}} \right] = -\frac{1}{(1 + e^{-a})^2}\,(-e^{-a}) = \frac{1}{1 + e^{-a}}\left[ \frac{e^{-a}}{1 + e^{-a}} \right] = \frac{1}{1 + e^{-a}}\left[ 1 - \frac{1}{1 + e^{-a}} \right]

= \sigma(a)[1 - \sigma(a)] = \hat{y}(1 - \hat{y}) \qquad (11.32c)
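The property of Eqn (11.32c) is easily verified numerically; a quick sketch:

```python
import numpy as np

sigma = lambda a: 1.0 / (1.0 + np.exp(-a))   # Eqn (11.32b)

# Central-difference check of Eqn (11.32c) at an arbitrary point
a, h = 0.7, 1e-6
numerical = (sigma(a + h) - sigma(a - h)) / (2.0 * h)
analytic = sigma(a) * (1.0 - sigma(a))       # y_hat (1 - y_hat)
print(abs(numerical - analytic))             # agreement to roundoff
```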
11.9.1 The Backpropagation Algorithm
We aim to derive the backpropagation algorithm for setting the weights based on training patterns, for the
two-layer perceptron network of Fig. 11.13, which is frequently used in practice. Extension of the results
derived in this section to more general perceptron networks is straightforward.
Backpropagation is one of the simplest and most general methods for supervised training of MLP networks. It is a natural extension of the gradient descent algorithm derived in the previous section for a
linear neuron. The gradient descent algorithm worked for the linear unit because the error, proportional
to the square of the difference between the actual output and the desired output, could be evaluated in
terms of input terminals-to-output layer weights. Similarly, in a two-layer network, it is a straightforward
matter to find out how the error depends on hidden-to-output layer weights. In fact, this dependency is
analogous to the linear unit case.
But how should the input terminals-to-hidden layer weights be learned; the ones governing the nonlinear
transformation of the input vectors? If the ‘proper’ outputs for hidden units were known for any input,
the input terminals-to-hidden layer weights could be adjusted to approximate it. However, there is no
explicit ‘supervisor’ to state what the hidden units’ output should be. The power of backpropagation is that it allows us to calculate an ‘effective’ error for each hidden unit, and thus derive a learning rule for input terminals-to-hidden layer weights.
We begin by defining the cost function for incremental training (iterating through the training examples
one at a time):
E(k) = \frac{1}{2}\sum_{j=1}^{q} [e_j^{(p)}(k)]^2;\quad e_j^{(p)}(k) = y_j^{(p)} - \hat{y}_j^{(p)}(k)   (11.33a)
These equations directly follow from Eqns (11.28), (11.20) and (11.16), with the difference that now we
have a layer of q linear units, rather than a single linear unit.
For each training example p, every weight v_{jℓ}; j = 1, ..., q; ℓ = 1, ..., m, is updated by adding to it Δv_{jℓ}:

\Delta v_{j\ell} = -\eta \frac{\partial E(k)}{\partial v_{j\ell}(k)}   (11.34a)

v_{j\ell}(k+1) = v_{j\ell}(k) - \eta \frac{\partial E(k)}{\partial v_{j\ell}(k)}   (11.34b)

With a linear activation function in the output layer, the update rule becomes

v_{j\ell}(k+1) = v_{j\ell}(k) + \eta\, e_j^{(p)}(k)\, z_\ell^{(p)}(k)   (11.34c)

v_{j0}(k+1) = v_{j0}(k) + \eta\, e_j^{(p)}(k)   (11.34d)
We now consider the hidden layer of the network. Unlike the output nodes, the desired outputs of the
hidden nodes are unknown. The backpropagation algorithm for a given input-output pair (x^{(p)}, y^{(p)})
performs two phases of data flow. First, the input pattern x^{(p)} is propagated from the input terminals to
the output layer and, as a result of this forward flow of data, produces the output ŷ^{(p)}. Then the error
signals e^{(p)}, resulting from the difference between y^{(p)} and ŷ^{(p)}, are backpropagated from the output
layer to the hidden layer, to update the weights w_{ℓi}. The error backpropagation may be computed by
expanding the error derivative using the chain rule, as follows:
w_{\ell i}(k+1) = w_{\ell i}(k) - \eta \frac{\partial E(k)}{\partial w_{\ell i}(k)}   (11.35a)

\frac{\partial E(k)}{\partial w_{\ell i}(k)} = \sum_{j=1}^{q} \frac{\partial E(k)}{\partial \hat{y}_j^{(p)}(k)} \frac{\partial \hat{y}_j^{(p)}(k)}{\partial z_\ell^{(p)}(k)} \frac{\partial z_\ell^{(p)}(k)}{\partial a_\ell(k)} \frac{\partial a_\ell(k)}{\partial w_{\ell i}(k)}   (11.35b)

Therefore, with

\frac{\partial E(k)}{\partial \hat{y}_j^{(p)}(k)} = -e_j^{(p)}(k)   (11.35c)

\frac{\partial \hat{y}_j^{(p)}(k)}{\partial z_\ell^{(p)}(k)} = v_{j\ell}(k)

\frac{\partial z_\ell^{(p)}(k)}{\partial a_\ell(k)} = \frac{\partial \sigma(a_\ell(k))}{\partial a_\ell(k)} = \sigma(a_\ell(k))[1 - \sigma(a_\ell(k))] = z_\ell^{(p)}(k)[1 - z_\ell^{(p)}(k)]

\frac{\partial a_\ell(k)}{\partial w_{\ell i}(k)} = x_i^{(p)}

we obtain

\frac{\partial E(k)}{\partial w_{\ell i}(k)} = -x_i^{(p)}\, z_\ell^{(p)}(k)[1 - z_\ell^{(p)}(k)] \sum_{j=1}^{q} v_{j\ell}(k)\, e_j^{(p)}(k)   (11.35d)
The training algorithm for one iteration thus proceeds as follows. Present input vector x^{(p)} to the MLP, and compute the MLP output using

z_\ell^{(p)}(k) = \sigma\Big(\sum_{i=1}^{n} w_{\ell i}(k)\, x_i^{(p)} + w_{\ell 0}(k)\Big);\quad \ell = 1, ..., m   (11.36a)

\hat{y}_j^{(p)}(k) = \sum_{\ell=1}^{m} v_{j\ell}(k)\, z_\ell^{(p)}(k) + v_{j0}(k);\quad j = 1, ..., q   (11.36b)

with the initial weights w_{\ell 0}^{(0)}, w_{\ell i}^{(0)}, v_{j0}^{(0)}, v_{j\ell}^{(0)} randomly chosen. Compute the output errors

e_j^{(p)}(k) = y_j^{(p)} - \hat{y}_j^{(p)}(k);\quad j = 1, ..., q   (11.36c)

and the effective hidden-layer errors

\delta_\ell^{(p)}(k) = z_\ell^{(p)}(k)[1 - z_\ell^{(p)}(k)] \sum_{j=1}^{q} v_{j\ell}(k)\, e_j^{(p)}(k);\quad \ell = 1, ..., m   (11.36d)
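The incremental procedure of Eqns (11.32)-(11.36) can be collected into a short program. The following Python sketch is our own illustration (the function name train_mlp, the uniform weight initialization, and the fixed number of epochs are our choices, not prescriptions from the text):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_mlp(X, Y, m, eta=0.1, epochs=1000, seed=0):
    """Incremental backpropagation for a two-layer MLP with sigmoidal
    hidden units and linear output units (Eqns (11.34)-(11.36)).
    X: P x n input patterns; Y: P x q targets; m: number of hidden units."""
    rng = np.random.default_rng(seed)
    P, n = X.shape
    q = Y.shape[1]
    W  = rng.uniform(-0.5, 0.5, (m, n))   # hidden weights  w_li
    w0 = rng.uniform(-0.5, 0.5, m)        # hidden biases   w_l0
    V  = rng.uniform(-0.5, 0.5, (q, m))   # output weights  v_jl
    v0 = rng.uniform(-0.5, 0.5, q)        # output biases   v_j0
    for _ in range(epochs):
        for p in range(P):                # one pattern at a time
            x, y = X[p], Y[p]
            z = sigmoid(W @ x + w0)           # Eqn (11.36a)
            y_hat = V @ z + v0                # Eqn (11.36b)
            e = y - y_hat                     # Eqn (11.36c)
            delta = z * (1 - z) * (V.T @ e)   # Eqn (11.36d)
            V  += eta * np.outer(e, z)        # Eqn (11.34c)
            v0 += eta * e                     # Eqn (11.34d)
            W  += eta * np.outer(delta, x)    # from Eqn (11.35d)
            w0 += eta * delta
    return W, w0, V, v0
```

For instance, with x = np.linspace(-3, 3, 50), the call train_mlp(x.reshape(-1, 1), np.sin(x).reshape(-1, 1), m=10) fits a scalar nonlinear function.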
A Gaussian basis function is typically parameterized by two parameters: the center, which defines its
position, and a spread parameter, which determines its shape. The spread parameter is equal to the standard
deviation σ in the case of a one-dimensional Gaussian function (do not confuse the standard-deviation
parameter σ with the sigmoidal activation function σ(·)). In the case of a multivariate input vector x, the
parameters that define the shape of the hyper-Gaussian function are the elements of a covariance matrix Σ.
With the selection of the same spread parameter σ for all components of the input vector, the covariance
matrix Σ = diag(σ²).
The input vector
x = [x1 x2 ... xn]T
and the output f (◊) of an RBF (Gaussian) neuron are related by the following equation.
\phi(x, c, \sigma) = \exp\left(-\frac{\|x - c\|^2}{2\sigma^2}\right)   (11.38)
where c is the center and s is the spread parameter of the Gaussian function.
Unlike the sigmoidal neuron, there are no connection weights between the input terminals and the RBF unit
(refer to Fig. 11.19); the center c and the spread parameter σ represent the weights.
RBF networks are structurally equivalent to the two-layer perceptron network shown in Fig. 11.13. Both
have one hidden layer with a nonlinear activation function, and an output layer containing one or more
neurons with linear activation functions. In an RBF network, however, one does not augment the
n-dimensional input vector x and the hidden-layer output vector with the bias term +1.

Fig. 11.19
The architecture of an RBF network is presented in Fig. 11.20. The network consists of n inputs x1, x2, ...,
xn; and a hidden layer of m nonlinear processing units (refer to Eqn. (11.38)):
\phi_\ell(x, c_\ell, \sigma_\ell) = \exp\left(-\frac{\|x - c_\ell\|^2}{2\sigma_\ell^2}\right);\quad \ell = 1, 2, ..., m   (11.39a)
The output of the network is computed as a weighted sum of the outputs of the RBF units:
\hat{y}_j = \sum_{\ell=1}^{m} w_{j\ell}\, \phi_\ell(x, c_\ell, \sigma_\ell);\quad j = 1, 2, ..., q   (11.39b)

where w_{jℓ} is the connection weight between the ℓth RBF unit and the jth component of the output vector.
It follows from Eqns (11.39) that the parameters (c_ℓ, σ_ℓ, w_{jℓ}) govern the mapping properties of the
RBF neural network. It has been shown [141] that the RBF network can fit any arbitrary function with
just one hidden layer.
In the RBF network, the output of each RBF node is the same for all input points x having the same
Euclidean distance from the respective center c_ℓ, and decreases exponentially with the distance. In
contrast, the output of each sigmoidal node in a multilayer perceptron network saturates to the same
value with increasing \sum_i w_i x_i. In other words, the activation responses of the nodes are of a local nature
in the RBF network and of a global nature in the multilayer perceptron network.

Fig. 11.20
This intrinsic difference has important repercussions for both the convergence speed and the generalization
performance. In general, multilayer perceptrons learn more slowly than their RBF counterparts. On the
other hand, multilayer perceptrons exhibit better generalization properties, especially for regions that are
not sufficiently represented in the training set.
Training of the RBF network of Fig. 11.20 may again be based on a squared-error cost function:

E(k) = \frac{1}{2}\sum_{j=1}^{q} \big[y_j^{(p)} - \hat{y}_j^{(p)}(k)\big]^2   (11.40)
The estimation of the weights w_{jℓ}, the centers c_ℓ, and the variances σ_ℓ² becomes a typical task of
nonlinear optimization:

w_{j\ell}(k+1) = w_{j\ell}(k) - \eta_1 \frac{\partial E(k)}{\partial w_{j\ell}(k)}   (11.41a)

c_\ell(k+1) = c_\ell(k) - \eta_2 \frac{\partial E(k)}{\partial c_\ell(k)}   (11.41b)

\sigma_\ell(k+1) = \sigma_\ell(k) - \eta_3 \frac{\partial E(k)}{\partial \sigma_\ell(k)}   (11.41c)

k is the iteration index.
The computational complexity of such a scheme is prohibitive in a number of practical situations. When
obtaining gradient information is difficult or expensive, we may use a genetic algorithm for the nonlinear
optimization problem (discussed later in Chapter 13).
Several alternative schemes have been proposed for training RBF networks. Some of these schemes learn
only the centers c_ℓ of the RBF units, and therefrom determine the spread parameters σ_ℓ. The basic idea is
to ensure suitable overlapping of the basis functions. A rule of thumb is to take σ_ℓ equal to, or a multiple
of, the average distance to the several nearest neighbors of c_ℓ (||c_{ℓ+1} - c_ℓ||).
Once the centers and the spread parameters are chosen, the weights w_{jℓ} in the output layer of the network
in Fig. 11.20 can be determined as follows. Output neuron j is driven by the signals φ_ℓ(x, c_ℓ, σ_ℓ) produced
by the layer of RBF neurons, which are themselves driven by the input vector (stimulus) x applied to
the input terminals. Supervised learning may be visualized as learning with the help of a 'supervisor'
having knowledge in the form of input-output examples {x^{(p)}, y^{(p)}; p = 1, 2, ..., P}. For known RBF
centers and spread parameters, this knowledge may be translated (refer to Eqn (11.39a)) into the form
{φ^{(p)}, y^{(p)}; p = 1, ..., P}, where φ^{(p)} is the vector of RBF-unit outputs. Neuron j in the output layer is
driven by the vector φ^{(p)}. By virtue of the built-in knowledge, the supervisor is able to provide the neural
network with a desired response y_j^{(p)} for φ^{(p)}. The network parameters w_{jℓ} are adjusted under the
combined influence of the training vector φ^{(p)} and the error e_j^{(p)} = y_j^{(p)} - ŷ_j^{(p)}, which is the difference
between the desired response y_j^{(p)} and the actual response ŷ_j^{(p)} (refer to Eqn (11.39b)) of the network.
Least squares estimation or the gradient descent algorithm (refer to Section 11.7) may be used for
learning the weights w_{jℓ}.
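A sketch of this fixed-centers scheme in Python (our illustration: centers drawn at random from the training set, a common spread σ from the nearest-neighbor rule of thumb, and output weights by linear least squares):

```python
import numpy as np

def train_rbf(X, Y, m, seed=0):
    """Fixed-centers RBF training: random centers, spread from the
    average nearest-neighbor distance, least-squares output weights."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(X.shape[0], size=m, replace=False)]   # centers c_l
    d = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    sigma = d.min(axis=1).mean()          # rule-of-thumb common spread
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)**2
                 / (2 * sigma**2))        # hidden outputs, Eqn (11.39a)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # weights w_jl, Eqn (11.39b)
    return C, sigma, W

def rbf_predict(X, C, sigma, W):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)**2
                 / (2 * sigma**2))
    return Phi @ W
```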
Although there exist some cases in which the nature of the problem suggests a specific choice for the
centers, in the general case, the centers may be selected randomly from the training set. Provided that the
training set is distributed in a representative manner over the space of all the patterns (input vectors), this
is a reasonable way to choose the m centers.
If the centers are not preselected, they have to be estimated during the training phase along with the
weights w_{jℓ}. This can be achieved by unraveling the clustering (unsupervised learning) properties of the
data, and choosing a representative of each cluster as the corresponding center. The Self-Organizing Map
(SOM), developed by Kohonen, is an unsupervised clustering network. Proper clusters are formed by
discovering the similarities and dissimilarities among the input data [141].
These techniques of RBF network training are used in MATLAB.
We give here a generic working procedure for system identification with neural networks. Time-invariant
nonlinear dynamic systems with scalar input and scalar output are considered here. Extension to the case
of vector input and vector output is straightforward.
The multilayer feedforward network is straightforward to employ for the discrete-time modeling of
dynamic systems for which there is a nonlinear relationship between the system's input and output. Let k
count the multiples of the sampling period, so that y(k) specifies the present output while y(k - 1) signifies
the output observed at the previous sampling instant, etc. It is assumed that the output of the dynamic
system at discrete-time instants can be described as a function of a number of past inputs and outputs:

y(k) = f(y(k-1), ..., y(k-n), u(k-1), ..., u(k-m));\quad n \ge m   (11.42)

A multilayer network can be used for approximating f(·) if the inputs to the network are chosen as the
n past outputs and m past inputs of the dynamic system.
When attempting to identify a model of a dynamic system, it is common practice to follow the procedure
depicted in Fig. 11.21.

Fig. 11.21 The identification procedure (Experiment → Select model structure → Train model → Validate model → Accepted/Not accepted)
11.11.1 Experiment
The primary purpose of an experiment is to produce a set of examples of how the dynamic system to be
identified responds to various control inputs (these examples can later be used to train a neural network to
model the system). The experiment is particularly important in relation to nonlinear modeling; one must
be extremely careful to collect a set of data that describes how the system behaves over its entire range
of operation. The following issues must be considered in relation to the acquisition of data (for detailed
information, refer to [131]).
Sampling Frequency
The sampling frequency should be chosen in accordance with the desired dynamics of the closed-loop
system consisting of the controller and the system. A high sampling frequency permits rapid reference
tracking and a smoother control signal, but problems with numerical ill-conditioning become more
pronounced. Consequently, the sampling frequency should be selected as a sensible compromise.
Input Signals
While for identification of linear systems it is sufficient to apply a signal containing a finite number of
frequencies, a nonlinear system demands, roughly speaking, that all combinations of frequencies and
amplitudes in the system's operating range be represented in the signal. As a consequence, the necessary
size of the data set increases dramatically with the number of inputs and outputs. Unfortunately, there is
no obvious remedy to this curse of dimensionality.
Before an input signal is selected, it is important to identify the operating range of the system. Special
care must be taken not to excite dynamics that one does not intend to incorporate in the model (e.g.,
mechanical resonances).
11.11.2 Model Structure Selection
The model structure selection is basically concerned with the following two issues:
Selecting an internal network architecture
Selecting the inputs to the network
An often-used approach is to let the internal architecture be a feedforward multilayer network. Probably
the most commonly used network architecture is a two-layer feedforward network with sigmoidal/
hyperbolic-tangent hidden units and linear output units. This architecture works quite well in many
practical applications; in our presentation, we use this architecture. However, the reader is referred to
more fundamental textbooks/research papers for a treatment of other types of neural networks in the
control loop.
The input structure we use here consists of a number of past inputs and outputs (refer to Fig. 11.22):
\hat{y}(k\,|\,p) = \sum_{\ell=1}^{M} v_\ell\, \sigma\Big(\sum_{i=1}^{N} w_{\ell i}\,\varphi_i(k) + w_{\ell 0}\Big) + v_0   (11.43a)

where ŷ is the predicted value of the output y at sampling instant t = kT (T = sampling interval),
p = {v_ℓ, w_{ℓi}, ...} is the vector containing the adjustable parameters in the neural network (weights), and
φ is the regression vector, which contains past outputs and past inputs (the regressors' dependency on the
weights is ignored):

\varphi(k) = [y(k-1)\ \cdots\ y(k-n)\ \ u(k-1)\ \cdots\ u(k-m)]^T = [\varphi_1(k)\ \varphi_2(k)\ \cdots\ \varphi_N(k)]^T   (11.43b)

Fig. 11.22 Neural network model structure: regressors φ1(k) = y(k-1), ..., φN(k) = u(k-m); hidden units z1, ..., zM with activations σ(·) and weights w_{ℓi}, w_{ℓ0}; output ŷ(k) formed with weights v_ℓ, v_0
Often, it is of less importance if the selected network architecture makes the parameter vector p somewhat
too small or too large. However, a wrong choice of lag space, i.e., the number of delayed signals used as
regressors, may have a disastrous impact on some control applications. Too small a lag space obviously
implies that essential dynamics will not be modeled, but too large a lag space can also be a problem. From
the theory of linear systems, it is known that too large a lag space may manifest itself as common factors
in the identified transfer function. An equivalent behavior must be expected in the nonlinear case. Although
it is not always a problem, common factors (corresponding to hidden modes) may lead to difficulties in
some of the controller designs.
It is necessary to determine both a sufficiently large lag space and an adequate number of hidden units.
While it is difficult to apply physical insight to the determination of the number of hidden units, such
insight can often guide the choice of lag space. If the lag space is properly determined, the model structure
selection problem is substantially reduced. If one has no idea regarding the lag space, it is sometimes
possible to determine it empirically.
11.11.3 Training
Assume now that a data set has been acquired and that some model structure has been selected. According
to the identification procedure in Fig. 11.21, the next step is to apply the data set to pick 'the best' model
among the candidates contained in the model structure. This is the training stage. The training can be
computationally intensive, but it is generally one of the easiest stages in the identification. It is not very
difficult to implement a training algorithm in a computer, but one might as well resort to one of the many
available software packages, e.g., MATLAB.
The training procedure can be rephrased in more formal terms. An experiment is performed on the time-
invariant nonlinear dynamic system to collect a set of data that describes how the system behaves over
its entire range of operation:

Experimental data: {[u(t), y(t)];\ t = 1, 2, 3, ...}   (11.44a)
From the experimental data, we generate the training data. Since the system is assumed to be time-
invariant, the experimental data could be equivalently represented as
{[u(t), y(t)]; t = – n + 1, – n + 2,...,0,1,2,...}
The following P pairs {φ(k), y(k); k = 1, 2, ..., P} are used for training the neural network (refer to Eqns
(11.43)):

\varphi(1) = [y(0)\ y(-1)\ \cdots\ y(1-n)\ \ u(0)\ \cdots\ u(1-m)]^T;\quad y(1)
\varphi(2) = [y(1)\ y(0)\ \cdots\ y(2-n)\ \ u(1)\ \cdots\ u(2-m)]^T;\quad y(2)   (11.44b)
\vdots
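Assembling these pairs from an experimental record is mechanical. A Python sketch (our illustration; the arrays u and y are assumed to hold the recorded samples, indexed from 0):

```python
import numpy as np

def make_regressors(u, y, n, m):
    """Build the training pairs {phi(k), y(k)} of Eqn (11.44b) from a
    record of inputs u and outputs y (requires n >= m, len(u) == len(y))."""
    phi, target = [], []
    for k in range(n, len(y)):
        past_y = y[k-1::-1][:n]           # y(k-1), ..., y(k-n)
        past_u = u[k-1::-1][:m]           # u(k-1), ..., u(k-m)
        phi.append(np.concatenate([past_y, past_u]))
        target.append(y[k])
    return np.array(phi), np.array(target)
```

The resulting pairs can be passed directly to an MLP trainer such as the train_mlp sketch of Section 11.9.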
It is sometimes useful to identify a system online, simultaneously with the acquisition of measurements.
Adaptive control is an example of such an application. In this case, a model must be identified and a
control system designed online, because the dynamics of the system to be controlled vary with time.
Obviously, batch methods are unsuitable in such applications, as the amount of computation required in
each iteration might exceed the time available within one sampling interval. Moreover, old data become
obsolete when the system to be identified is time-dependent.
In a recursive algorithm, one input-output pair from the training set, [φ(k), y(k)], is evaluated at a time
and used for updating the weights. In the neural network community, this is frequently referred to as
incremental or online backpropagation (refer to Section 11.9).
11.11.4 Validation
In the validation stage, the trained model is evaluated to determine whether it represents the underlying
system adequately. Ideally, the validation should be performed in accordance with the intended use of the
model. As it turns out, this is often rather difficult. For instance, if the intention is to use the model for
designing a control system, the validation ought to imply that a controller is designed and its performance
tested in practice. For most applications, this level of ambition is somewhat high, and it is common to
apply a series of simple 'standard' tests instead of concentrating on investigating particular properties of
the model. Although this is less than ideal, it is useful as a preliminary validation to quickly exclude really
poor models.
Most of the tests require a set of data that was not used during training. Such a data set is commonly
known as a test or validation set. It is desirable that the test set satisfy the same demands as the training
set regarding representation of the entire operating range.
A very important part of the validation is to simply inspect a plot comparing observed outputs to
predictions. Unless the signal-to-noise ratio is very poor, it can show the extent of overfitting as well as
possible systematic errors.
If the sampling frequency is high compared to the dynamics of the system, a visual inspection of the
predictions may not reveal possible problems. Some scalar quantities (correlation functions) to measure
the accuracy of the predictions have been suggested. Reliable estimates of the average generalization
error are also useful for validation purposes, but their primary application is for model structure selection:
the estimates are good for rapidly comparing different model structures to decide which one is likely
to be the best.
Although all systems, to some extent, exhibit nonlinear behavior, it turns out that they can often be
controlled satisfactorily with simple linear controllers. When neural networks are introduced as a tool for
improving the performance of control systems for a general class of unknown nonlinear systems, it should
be done in the same spirit. A consequence of this philosophy is that our focus is on simple control
structures that yield good performance in practice.
J = \frac{1}{2P}\sum_{k=1}^{P} [u(k) - \hat{u}(k\,|\,p)]^2   (11.53)
We will call this procedure, the general training procedure for an inverse model.
Fig. 11.23
This is basically an off-line procedure. After this training phase, the structure for on-line operation looks
like the one shown in Fig. 11.23b; that is, the neural network representing the inverse of the plant
precedes the plant. The trained neural network should be able to take the desired output value yd = r as
its input, and produce an appropriate û as input to the plant.
The practical relevance of using an inverse model of the system as a controller is limited, due to a
number of serious inconveniences. The control scheme will typically result in poor robustness, with
high sensitivity to noise and high-frequency disturbances (corresponding to a unity forward-path transfer
function in the linear case). In addition, one will often encounter a very active control signal, which
may adversely affect the system/actuators. If the system is linear, this occurs when its zeros are situated
close to the unit circle. In the nonlinear case, there is no unique set of zeros but, of course, a similar
phenomenon exists.
If the inverse model is unstable (corresponding to zeros of the system outside the unit circle in the linear
case), one must anticipate that the closed-loop system becomes unstable. Unfortunately, this situation
occurs quite frequently in practice. Discretization of linear continuous-time models can, under quite
common circumstances, result in zeros outside the unit circle, regardless of whether the continuous-time
model has no zeros or has all its zeros in the left half-plane. In fact, for a model with a pole excess of at
least two, one or more zeros of the discretized model will converge to the unit circle, or even outside it, as
the sampling frequency is increased. It must be expected that a similar behavior can also be found in
discrete models of nonlinear systems.
Another problem with the design arises when the system to be controlled is not one-to-one, since then a
unique inverse model does not exist. If this non-uniqueness is not reflected in the training set, one can,
in principle, obtain a particular inverse which might be adequate for controlling the system. Most often,
however, one will end up with a useless, incorrect inverse model.
Despite these drawbacks, in a number of domains (stable systems, and one-to-one mapping plants), this
general training architecture is a viable alternative.
11.12.2 Combined Feedforward and Feedback Control
Many of the problems mentioned in the previous subsection can be taken care of by employing a control
structure of the form shown in Fig. 11.24. The feedforward control is used for improving the reference
tracking, while feedback is used for stabilizing the system and for suppressing disturbances.
Fig. 11.24
11.12.3 Specialized Training of the Inverse Model
In the context of training inverse models which are to be used as controllers, the trained inverse model
somehow ought to be validated in terms of the performance of the final closed-loop system. This points out
a serious weakness associated with the general training procedure for an inverse model: the criterion
(11.53) expresses the objective of minimizing the discrepancy between the network output and a sequence
of 'true' control inputs. This is not really a relevant objective. In practice, it is not possible to achieve
zero generalization error and, consequently, the trained network will have certain inaccuracies. Although
these may be reasonably small in the sense that the network output is close to the ideal control signal, there
may be large deviations between the reference and the output of the system when the network is applied
as a controller for the system. The weakness lies in the fact that the training procedure is not goal-directed.
The goal is that, in some sense, the system output should follow the reference signal closely. It would be
more desirable to minimize a criterion of the following type:
J = \frac{1}{2P}\sum_{k=1}^{P} [r(k) - y(k)]^2   (11.56a)
2P k = 1
which clearly is goal directed. Unfortunately, the minimization of this criterion is not easily carried out
off-line, considering that the system output, y(k), depends on the output of the inverse model, u(k – 1).
Inspired by the recursive training algorithms, the network might alternatively be trained to minimize
J_k = J_{k-1} + [r(k) - y(k)]^2   (11.56b)
This is an on-line approach and, therefore, the scheme constitutes an adaptive controller.
Assuming that J_{k-1} has already been minimized, the weights at time k are adjusted according to

\hat{p}(k) = \hat{p}(k-1) - \eta \frac{d e^2(k)}{dp}   (11.57a)

where

e(k) = r(k) - y(k)   (11.57b)

and

\frac{d e^2(k)}{dp} = -2\, e(k) \frac{d y(k)}{dp}   (11.57c)

By application of the chain rule, the gradient dy(k)/dp can be calculated:

\frac{d y(k)}{dp} = \frac{\partial y(k)}{\partial u(k-1)} \frac{d u(k-1)}{dp}   (11.58a)
Jacobians of the system, \partial y(k)/\partial u(k-1), are required. These are generally unknown, since the system is
unknown. To overcome this problem, a forward model of the system is identified to provide estimates
of the Jacobians:

\frac{\partial y(k)}{\partial u(k-1)} \approx \frac{\partial \hat{y}(k)}{\partial u(k-1)}   (11.58b)
The forward model is obtained by the system identification procedure described in the earlier section.
Fortunately, inaccuracies in the forward model need not have a harmful impact on the training. The
Jacobian is a scalar factor and, in the simplified algorithm (11.57), will only change the step-size of the
algorithm. Thus, as long as the Jacobians have the correct sign, the algorithm will converge if the step-
size parameter is sufficiently small. We will call this procedure the specialized training procedure for the
inverse model.
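One weight update of this simplified specialized training may be sketched as follows (our illustration in Python; the caller is assumed to supply the Jacobian estimate from the forward model NN1 and the controller's output gradient, both obtained by backpropagation through the respective networks):

```python
import numpy as np

def specialized_update(p, r_k, y_k, dy_du, du_dp, eta=0.01):
    """One iteration of Eqns (11.57)-(11.58).
    p      : controller weight vector
    dy_du  : estimate of dy(k)/du(k-1) from the forward model, Eqn (11.58b)
    du_dp  : gradient of the controller output u(k-1) w.r.t. p"""
    e_k = r_k - y_k                        # Eqn (11.57b)
    grad = -2.0 * e_k * dy_du * du_dp      # Eqns (11.57c), (11.58a)
    return p - eta * grad                  # Eqn (11.57a)
```

As noted above, an error in the magnitude of dy_du merely rescales the step size; only its sign is critical for convergence.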
The deadbeat character appearing when inverse models are used directly as controllers will often result
in an unnecessarily fast response to reference changes. An overly active control signal may even harm the
system or the actuators. Consequently, it might be desirable to train the network so that the closed-loop
system achieves some prescribed low-pass behavior, say, follows the model

y_m(k) = \frac{B_m(z^{-1})}{A_m(z^{-1})}\, r(k)   (11.59)
The polynomials A_m and B_m are selected by the designer.
The control design is, in this case, related to ‘Model Reference Adaptive System’ (MRAS); a popular
type of adaptive controller (discussed earlier in Section 10.3).
Since this specialized training is an on-line approach, the combination of having many weights to adjust
and having only the slow convergence of a gradient method will often be disastrous: before the weights
are properly adjusted, the system may have been driven outside the operating range, with possibly serious
consequences. Often, general training can be used to provide a decent initialization of the network, so that
the specialized training is only used for 'fine tuning' of the controller.
The simplified specialized training is quite easily implemented with the backpropagation algorithm
(refer to Fig. 11.25). This algorithm is applied on the inverse model NN2:
u(k – 1) = f [y(k + 1), y(k), ..., y(k – n + 1), u(k – 2),..., u(k – m)]
Fig. 11.25 Simplified specialized training: a reference model produces ym; the forward model NN1 provides ŷ and the Jacobian estimate ∂y/∂u; the inverse model NN2 (the controller) drives the system with u and is updated using the virtual error eu
by assuming a 'virtual' error e_u(k) at the output of the controller, obtained by backpropagating the
tracking error through the forward model NN1, whose regression vector is

\varphi(k) = [y(k-1)\ \cdots\ y(k-n)\ \ u(k-1)\ \cdots\ u(k-m)]^T   (11.61b)
The derivative of the forward-model output with respect to the regressor φ_i(k) is given by

\frac{\partial \hat{y}(k)}{\partial \varphi_i(k)} = \sum_{\ell=1}^{M} v_\ell\, w_{\ell i}\, \sigma'(a_\ell) = \sum_{\ell=1}^{M} v_\ell\, w_{\ell i}\, \sigma(a_\ell)[1 - \sigma(a_\ell)]   (11.62a)

where

a_\ell = \sum_{i=1}^{N} w_{\ell i}\, \varphi_i(k) + w_{\ell 0}   (11.62b)
We will first describe the construction of an optimal hyperplane for linearly separable patterns, one for
which the margin of separation between Class 1 and Class 2 examples is maximized. We will then take up
the more difficult case of linearly nonseparable patterns. With the material on how to find the optimal
hypersurface for linearly nonseparable patterns at hand, we will formally describe the construction of a
support vector machine for real-life (nonlinear) pattern recognition tasks. As we shall see shortly, the idea
of a support vector machine basically hinges on the following two mathematical operations:
(i) Nonlinear mapping of input patterns into a high-dimensional feature space.
(ii) Construction of an optimal hyperplane for linearly separating the features discovered in Step (i).
The final stage of our presentation will be to extend these results for application to nonlinear regression
problems.
11.13.1 Linearly Separable Patterns
Our presentation on SVM begins with the easiest classification problem: binary classification of linearly
separable data (separating functions will be hyperplanes). The presentation will gradually increase in
complexity.
Let the set of training (data) examples D be

D = {(x_1, y_1), (x_2, y_2), ..., (x_P, y_P)}   (11.63)

where x_i = [x_{i1}\ x_{i2}\ \cdots\ x_{in}]^T is an n-dimensional input vector (pattern with n features) for the ith
example in a real-valued space X ⊆ ℝⁿ; y_i is its class label (output value), and y_i ∈ {+1, -1}; +1 denotes
Class 1 and -1 denotes Class 2.
To build a classifier, SVM finds a linear function of the form

f(x) = w^T x + b   (11.64)

so that the input vector x_i is assigned to Class 1 if f(x_i) ≥ 0, and to Class 2 if f(x_i) < 0, i.e.,

y_i = \begin{cases} +1 & \text{if } w^T x_i + b \ge 0 \\ -1 & \text{if } w^T x_i + b < 0 \end{cases}   (11.65)
Among all the functions of the form (11.64) that separate the training data, SVM seeks the one with the
largest margin, i.e., the widest gap between the data points of the two classes. This is an intuitively
acceptable approach: select the decision boundary that is far away from both the classes (Fig. 11.27).
Large-margin separation is expected to yield good classification on previously unseen data, i.e., good
generalization.
From linear algebra, we know that in w^T x + b = 0, w defines a direction perpendicular to the hyperplane;
w is called the normal vector (or simply the normal) of the hyperplane. Without changing the normal
vector w, varying b moves the hyperplane parallel to itself. Note also that w^T x + b = 0 has an inherent
degree of freedom: we can rescale the hyperplane to kw^T x + kb = 0 for k ∈ ℝ⁺ (a positive real number)
without changing the hyperplane.

Fig. 11.26 Possible decision boundaries
Fig. 11.27 (a) Large margin separation (b) Small margin separation
Since SVM maximizes the margin between Class 1 and Class 2 data points, let us find the margin.
The linear function f (x) = wTx + b gives an algebraic measure of the distance from x to the hyperplane
wTx + b = 0. We can express x as
x = x_N + r\,\frac{w}{\|w\|}   (11.67)

where x_N is the normal projection of x onto the hyperplane, and r is the desired algebraic distance
(Fig. 11.28). Since, by definition, f(x_N) = w^T x_N + b = 0, it follows that

f(x) = w^T x + b = w^T\Big(x_N + r\frac{w}{\|w\|}\Big) + b = r\,\frac{w^T w}{\|w\|} = r\,\frac{\|w\|^2}{\|w\|} = r\,\|w\|

or

r = \frac{f(x)}{\|w\|}   (11.68)
Now consider a Class 1 data point (x^{(1)}, +1) that is closest to the hyperplane w^T x + b = 0 (Fig. 11.28).
The distance d^{(1)} of this data point from the hyperplane is

d^{(1)} = \frac{f(x^{(1)})}{\|w\|} = \frac{w^T x^{(1)} + b}{\|w\|}   (11.69a)

Similarly,

d^{(2)} = \frac{f(x^{(2)})}{\|w\|} = \frac{w^T x^{(2)} + b}{\|w\|}   (11.69b)

where (x^{(2)}, -1) is a Class 2 data point closest to the hyperplane w^T x + b = 0.
Fig. 11.28
We define two parallel hyperplanes H(1) and H(2) that pass through x(1) and x(2), respectively. H(1) and
H(2) are also parallel to the hyperplane wTx + b = 0. We can rescale w and b to obtain (this rescaling, as
we shall see later, simplifies the quest for significant patterns, called support vectors)
H^{(1)}:\quad w^T x^{(1)} + b = +1
H^{(2)}:\quad w^T x^{(2)} + b = -1   (11.70)

such that

w^T x_i + b \ge +1\quad \text{if } y_i = +1
w^T x_i + b \le -1\quad \text{if } y_i = -1   (11.71a)

or, equivalently,

y_i(w^T x_i + b) \ge 1   (11.71b)

which indicates that no training data fall between the hyperplanes H^{(1)} and H^{(2)}. The distance between
the two hyperplanes is the margin M. In the light of the rescaling given by (11.70),

d^{(1)} = \frac{1}{\|w\|};\quad d^{(2)} = \frac{-1}{\|w\|}   (11.72)
where the ‘–’ sign indicates that x(2) lies on the side of the hyperplane wTx + b = 0 opposite to that where
x(1) lies. From Fig. 11.28, it follows that
M = \frac{2}{\|w\|}   (11.73)
Equation (11.73) states that maximizing the margin of separation between classes is equivalent to
minimizing the Euclidean norm of the weight vector w.
Since SVM looks for the separating hyperplane that minimizes the Euclidean norm of the weight vector,
this gives us an optimization problem. A full description of the solution method requires a significant
amount of optimization theory, which is beyond the scope of this book. We will only use relevant results
from optimization theory, without giving formal definitions, theorems or proofs (refer to [29] for details).
Our interest here is in the following nonlinear optimization problem with inequality constraints:
minimize f(x)
subject to g_i(x) \ge 0;\quad i = 1, ..., m   (11.74)

We attach to (11.74) the Lagrangian

L(x, \boldsymbol{\lambda}) = f(x) - \sum_{i=1}^{m} \lambda_i g_i(x)   (11.75)

where the λ_i ≥ 0 are the Lagrange multipliers (dual variables), collected in the vector λ. Define

L^*(x) = \max_{\boldsymbol{\lambda} \ge 0} L(x, \boldsymbol{\lambda});\qquad L_*(\boldsymbol{\lambda}) = \min_{x} L(x, \boldsymbol{\lambda})

Then, for any x and any λ ≥ 0,

L_*(\boldsymbol{\lambda}) \le L(x, \boldsymbol{\lambda}) \le L^*(x)
The two problems, min-max (minimize L^*(x) over x) and max-min (maximize L_*(λ) over λ ≥ 0), are said
to be dual to each other. We refer to the min-max problem as the primal problem; the objective to be
minimized, L^*(x), is referred to as the primal function. The max-min problem is referred to as the dual
problem, and L_*(λ) as the dual function. The optimal primal and dual function values are equal when f is
a convex function and the g_i are linear functions. The concept of duality is widely used in the optimization
literature. The aim is to provide an alternative formulation of the problem which is more convenient to
solve computationally and/or has some theoretical significance. In the context of SVM, the dual problem
is not only easy to solve computationally, but also crucial for using kernel functions to deal with nonlinear
decision boundaries. This will become clear later in this section.
The nonlinear optimization problem defined in (11.74) can be represented as a min-max problem, as is
seen below.
For the Lagrangian (11.75), we have

L^*(x) = \max_{\boldsymbol{\lambda} \in \Re^m,\ \boldsymbol{\lambda} \ge 0}\Big[f(x) - \sum_{i=1}^{m} \lambda_i g_i(x)\Big]

Since g_i(x) ≥ 0 for all i, λ_i = 0 (i = 1, ..., m) would maximize the Lagrangian. Thus

L^*(x) = f(x)

Therefore, our original constrained problem (11.74) becomes the min-max primal problem:

\min_{x \in \Re^n} L^*(x)\quad \text{subject to } g_i(x) \ge 0;\ i = 1, ..., m
The concept of duality gives the following formulation for the max-min dual problem:

\max_{\boldsymbol{\lambda} \in \Re^m,\ \boldsymbol{\lambda} \ge 0} L_*(\boldsymbol{\lambda})

More explicitly, this nonlinear optimization problem in the dual variables λ can be written in the form

\max_{\boldsymbol{\lambda} \ge 0}\ \min_{x \in \Re^n}\Big[f(x) - \sum_{i=1}^{m} \lambda_i g_i(x)\Big]   (11.77)
Let us now state the learning problem in SVM.
Given a set of linearly separable training examples,

D = {(x_1, y_1), (x_2, y_2), ..., (x_P, y_P)}

the learning problem is to solve the constrained minimization problem:

minimize \tfrac{1}{2}\, w^T w
subject to y_i(w^T x_i + b) \ge 1;\quad i = 1, ..., P   (11.78)
This formulation is called the hard-margin SVM. Solving this problem will produce the solutions for w
and b, which in turn, give us the maximal margin hyperplane wTx + b = 0 with the margin 2/|| w||.
The objective function is quadratic and convex in parameters w, and the constraints are linear in parameters
w and b. The dual formulation of this constrained optimization problem is obtained as follows.
First we construct the Lagrangian:
L(w, b, \boldsymbol{\lambda}) = \tfrac{1}{2}\, w^T w - \sum_{i=1}^{P} \lambda_i [y_i(w^T x_i + b) - 1]   (11.79)

The KKT conditions for this problem are:

(i) \frac{\partial L}{\partial w} = w - \sum_{i=1}^{P} \lambda_i y_i x_i = 0;\qquad \frac{\partial L}{\partial b} = -\sum_{i=1}^{P} \lambda_i y_i = 0

(ii) y_i(w^T x_i + b) - 1 \ge 0;\quad i = 1, ..., P   (11.80)

(iii) \lambda_i \ge 0;\quad i = 1, ..., P

(iv) \lambda_i[y_i(w^T x_i + b) - 1] = 0;\quad i = 1, ..., P

Expanding (11.79),

L(w, b, \boldsymbol{\lambda}) = \tfrac{1}{2}\, w^T w - \sum_{i=1}^{P} \lambda_i y_i w^T x_i - b\sum_{i=1}^{P} \lambda_i y_i + \sum_{i=1}^{P} \lambda_i   (11.81)
Transforming from the primal to its corresponding dual can be done by setting to zero the partial
derivatives of the Lagrangian (11.81) with respect to the primal variables (i.e., w and b), and substituting
the resulting relations back into the Lagrangian. This is simply to substitute condition (i) of KKT
conditions (11.80) into the Lagrangian (11.81) to eliminate the primal variables; which gives us the dual
objective function.
The third term on the right-hand side of Eqn (11.81) is zero by virtue of condition (i) of the KKT conditions
(11.80). Furthermore, from this condition we have

w^T w = \sum_{i=1}^{P} \lambda_i y_i w^T x_i = \sum_{i=1}^{P}\sum_{j=1}^{P} \lambda_i \lambda_j y_i y_j\, x_i^T x_j
Accordingly, minimization of function L in Eqn. (11.81) with respect to primal variables w and b, gives
us the following dual objective function:
L_*(\boldsymbol{\lambda}) = \sum_{i=1}^{P} \lambda_i - \tfrac{1}{2}\sum_{i=1}^{P}\sum_{j=1}^{P} \lambda_i \lambda_j y_i y_j\, x_i^T x_j   (11.82)

subject to the constraints

\sum_{i=1}^{P} \lambda_i y_i = 0   (11.83)
\lambda_i \ge 0;\quad i = 1, ..., P

This is the dual formulation of the hard-margin SVM.
Having solved the dual problem numerically (using MATLAB's quadprog function, for example), the
resulting optimum λ_i values are then used to compute w and b. w is computed using condition (i) of the
KKT conditions (11.80):

w = \sum_{i=1}^{P} \lambda_i y_i x_i   (11.84a)

and b is computed using condition (iv) of the KKT conditions (11.80). For a support vector {x_s, y_s}, we
have λ_s > 0, and this condition becomes

y_s(w^T x_s + b) = 1

Instead of depending on one support vector to compute b, in practice b is computed from all support
vectors, and the average is taken as the final value. This is because the values of λ_i are computed
numerically and can have numerical errors.
b = \frac{1}{N_{SV}} \sum_{s=1}^{N_{SV}} \Big[\frac{1}{y_s} - w^T x_s\Big];\quad N_{SV} = \text{total number of support vectors}   (11.84b)
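The text suggests MATLAB's quadprog for this computation; an equivalent sketch in Python, using scipy's general-purpose SLSQP solver in its place (our illustration; a dedicated QP solver would be faster, but the steps follow Eqns (11.82)-(11.84) directly and assume linearly separable data):

```python
import numpy as np
from scipy.optimize import minimize

def svm_hard_margin(X, y):
    """Solve the dual (11.82)-(11.83) and recover w, b via Eqns (11.84).
    X: P x n patterns; y: labels in {+1, -1}."""
    P = X.shape[0]
    H = (y[:, None] * y[None, :]) * (X @ X.T)     # H_ij = y_i y_j x_i^T x_j
    neg_dual = lambda lam: 0.5 * lam @ H @ lam - lam.sum()
    cons = ({'type': 'eq', 'fun': lambda lam: lam @ y},)   # Eqn (11.83)
    res = minimize(neg_dual, np.zeros(P), method='SLSQP',
                   bounds=[(0.0, None)] * P, constraints=cons)
    lam = res.x
    w = (lam * y) @ X                             # Eqn (11.84a)
    sv = lam > 1e-6                               # support vectors
    b = np.mean(1.0 / y[sv] - X[sv] @ w)          # Eqn (11.84b)
    return w, b, lam
```

For the soft-margin case discussed in the next subsection, only the bounds change: (0.0, None) is replaced by (0.0, C).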
11.13.2 Linearly Nonseparable Patterns: Soft-Margin SVM
The linearly separable case is the ideal situation. In practice, however, the training data are almost always
noisy, i.e., containing errors due to various reasons. For example, some examples may be labeled
incorrectly. Furthermore, practical problems may have some degree of randomness: even for two
identical input vectors, the labels may be different.
For SVM to be useful, it must allow noise in the training data. However, with noisy data, the linear SVM
algorithm presented earlier will not find a solution, because the constraints cannot be satisfied. For example,
in Fig. 11.29 there is a Class 2 point (circle) in the Class 1 region, and a Class 1 point (square) in the
Class 2 region. In spite of this couple of mistakes, the decision boundary seems to be good; but the
hard-margin classifier presented previously cannot be used, because the constraints

y_i(w^T x_i + b) \ge 1;\quad i = 1, ..., P

cannot all be satisfied.
Fig. 11.29 x_j and x_l are error data points (at distances ζ_j/||w|| and ζ_l/||w|| from their margin hyperplanes)
So the constraints have to be modified to permit mistakes. To allow errors in data, we can relax the
margin constraints by introducing slack variables ζ_i (≥ 0), as follows:

w^T x_i + b \ge 1 - \zeta_i\quad \text{for } y_i = +1
w^T x_i + b \le -1 + \zeta_i\quad \text{for } y_i = -1

Thus, we have the new constraints

y_i(w^T x_i + b) \ge 1 - \zeta_i;\quad i = 1, ..., P   (11.85)
\zeta_i \ge 0
The geometric interpretation is shown in Fig. 11.29.
We also need to penalize the errors in the objective function. A natural way is to assign an extra cost for
errors, changing the objective function to

\tfrac{1}{2}\, w^T w + C\Big(\sum_{i=1}^{P} \zeta_i\Big);\quad C \ge 0

where C is a user-specified penalty parameter; it serves as a trade-off parameter between margin and
mistakes.
The dual problem (11.89) of this soft-margin formulation turns out to be identical to that of the separable
case, except that the multipliers are now bounded:

0 \le \lambda_i \le C;\quad i = 1, ..., P

Interestingly, ζ_i and μ_i (the multipliers associated with the constraints ζ_i ≥ 0) do not appear in the dual
objective function; the objective function is identical to that for the separable case. The only difference is
the constraint λ_i ≤ C (inferred from C - λ_i - μ_i = 0 and μ_i ≥ 0).
The dual problem (11.89) can also be solved numerically, and the resulting λ_i values are then used to
compute w and b. The weight vector w is computed using Eqn (11.84a).
The bias parameter b is computed using condition (iv) of the KKT conditions (11.88):

\lambda_i\big(y_i(w^T x_i + b) - 1 + \zeta_i\big) = 0   (11.90a)
\mu_i \zeta_i = 0   (11.90b)

Since we do not have values for ζ_i, we have to get around this. λ_i can take values in the interval
0 ≤ λ_i ≤ C; we separate the analysis into the following three cases:
λ_i = 0: We know that C - λ_i - μ_i = 0. With λ_i = 0, we get μ_i = C. Since μ_iζ_i = 0 (Eqn (11.90b)), this
implies that ζ_i = 0, which means that the corresponding ith pattern is correctly classified without any error
(as it would have been with the hard-margin SVM). Such patterns may lie on the margin hyperplanes or
outside the margin. However, they do not contribute to the optimum value of w, as is seen from
Eqn (11.84a).
0 < λ_i < C: We know that C - λ_i - μ_i = 0. Therefore, μ_i = C - λ_i, which means μ_i > 0. Since μ_iζ_i = 0
(Eqn (11.90b)), this implies that ζ_i = 0; again the corresponding ith pattern is correctly classified. Also,
from Eqn (11.90a) we see that for ζ_i = 0 and 0 < λ_i < C, y_i(w^T x_i + b) = 1; so the corresponding patterns
are support vectors.
λ_i = C: It can easily be seen that ζ_i ≠ 0 in this case. But ζ_i ≥ 0 is a constraint of the problem, so ζ_i > 0,
which means that the corresponding pattern is misclassified or lies inside the margin.
We can use support vectors, as in Eqn. (11.84b), to compute the value of b.
The following points need the attention of the reader:
One of the most important properties of SVM is that the solution is sparse in λ_i. Most training data
points are outside the margin area, and their λ_i's in the solution are 0. The data points on the margin
having λ_i = 0 also do not contribute to the solution. Only those data points that are on the margin
hyperplanes with 0 < λ_i < C (support vectors) and those inside the margin (errors; λ_i = C) contribute to
the solution. Without this sparsity property, SVM would not be practical for large data sets.
The final decision boundary is
wTx + b = 0
Substituting for w and b from Eqns (11.84), we obtain
\Big(\sum_{i=1}^{P} \lambda_i y_i x_i\Big)^T x + b = \sum_{i=1}^{P} \lambda_i y_i\, x_i^T x + \frac{1}{N_{SV}} \sum_{s=1}^{N_{SV}} \Big[\frac{1}{y_s} - \sum_{i=1}^{P} \lambda_i y_i\, x_i^T x_s\Big] = 0   (11.91)
We notice that w and b do not need to be explicitly computed. As we will shortly see, this is crucial
for using kernel functions to handle nonlinear decision boundaries.
Finally, we still have the problem of determining the parameter C. The value of C is usually chosen by
trying a range of values on the training set to build multiple classifiers, testing them on a validation set,
and selecting the one that gives the best classification result on the validation set.
Fig. 11.30
0 \le \lambda_i \le C;\quad i = 1, ..., P
The potential problem with this approach is that it may suffer from the curse of dimensionality: the
number of dimensions in the feature space can be huge for some useful transformations, even with a
reasonable number of attributes in the input space. Fortunately, explicit transformations can be avoided
if we notice that, for the dual problem (11.94), the construction of the decision boundary only requires the
evaluation of inner products φ^T(x_i)φ(x_j) in the feature space. With reference to (11.91), we have the
following decision boundary in the feature space:

\sum_{i=1}^{P} \lambda_i y_i\, \phi^T(x_i)\, \phi(x) + b = 0   (11.95)

Thus, if we can compute the inner products φ^T(x_i)φ(x_j) using the input vectors x_i and x_j directly, then
we would not need to know the feature vector φ(x), or even the mapping φ itself. In SVM, this is done
through the use of kernel functions, denoted by K:

K(x_i, x_j) = \phi^T(x_i)\, \phi(x_j)   (11.96)
Commonly used kernels include the following:

Polynomial of degree d:\quad K(x_i, x_j) = (x_i^T x_j + 1)^d   (11.97)

Gaussian RBF:\quad K(x_i, x_j) = \exp\Big(-\frac{1}{2\sigma^2}\,\|x_i - x_j\|^2\Big)

We replace φ^T(x_i)φ(x_j) in (11.94) and (11.95) with the kernel; we never need to explicitly know what
φ is.
However, how do we know that a kernel function is indeed a dot product in some feature space? This
question is answered by Mercer's theorem, which we will not discuss here. The kernels in (11.97) satisfy
this theorem; refer to [138, 141] for details.
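A sketch of the kernels (11.97), together with a kernelized decision function, in Python (our illustration; lam and b are assumed to have been obtained by solving the dual with the same kernel in place of x_i^T x_j):

```python
import numpy as np

def poly_kernel(Xi, Xj, d=3):
    # Polynomial kernel of degree d, Eqn (11.97)
    return (Xi @ Xj.T + 1.0) ** d

def rbf_kernel(Xi, Xj, sigma=1.0):
    # Gaussian RBF kernel, Eqn (11.97)
    sq = np.linalg.norm(Xi[:, None, :] - Xj[None, :, :], axis=2) ** 2
    return np.exp(-sq / (2 * sigma ** 2))

def classify(x_new, X, y, lam, b, kernel=rbf_kernel):
    # Kernelized decision function: sign( sum_i lam_i y_i K(x_i, x) + b ),
    # cf. Eqns (11.95)-(11.96); w is never formed explicitly
    K = kernel(X, np.atleast_2d(x_new))
    return np.sign((lam * y) @ K + b)
```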
11.13.4 Support Vector Regression
Suppose we are given the training data

{(x_1, y_1), ..., (x_P, y_P)};\quad x \in \Re^n,\ y \in \Re

where x_i ∈ ℝⁿ are the input patterns, as in classification problems, and y_i ∈ ℝ now takes continuous
values. Our goal is to find a function f(x) that has at most ε deviation (where ε is a prescribed parameter)
from the actually obtained targets y_i for all the training data and, at the same time, is as flat as possible. In
other words, we do not care about errors as long as they are less than ε, but will not accept any deviation
larger than this.
For pedagogical reasons, we begin by describing the case of linear functions f, taking the form

f(x) = w^T x + b;\quad w \in \Re^n,\ b \in \Re   (11.98)

Flatness in the case of (11.98) means that one seeks a small w. One way to ensure this is to minimize the
Euclidean norm ‖w‖². This additional requirement on performance (in addition to the constraint on the
maximum allowable error in the estimate of y_i) improves generalization.
Formally we can write this problem as a constrained optimization problem:
minimize 1
2
wT w
(11.99)
subject to yi - wT xi - b £ e; i = 1,… , P
wT xi + b - yi £ e; i = 1,… , P
The tacit assumption in (11.99) is that a function f given by (11.98) actually exists that approximates all
pairs (x_i, y_i) with ε precision, or in other words, that the constrained optimization problem is feasible.
Note that this optimization problem cannot accommodate data points with errors larger than ε; the
constraints cannot be satisfied for such data points. For SVM to be useful, it must allow noise in the
training data. Analogously to the 'soft margin' classifier described earlier, one can introduce slack
variables ζ_i, ζ_i* to cope with the otherwise infeasible constraints of the optimization problem (11.99).
Hence we arrive at the following formulation:
minimize \tfrac{1}{2}\, w^T w + C \sum_{i=1}^{P} (\zeta_i + \zeta_i^*)

subject to y_i - w^T x_i - b \le \varepsilon + \zeta_i;\quad i = 1, ..., P
w^T x_i + b - y_i \le \varepsilon + \zeta_i^*;\quad i = 1, ..., P
\zeta_i, \zeta_i^* \ge 0;\quad i = 1, ..., P

The slack variable ζ_i (ζ_i*) measures how far a data point lies above (below) the ε-insensitive tube around f,
as illustrated in Fig. 11.31.

Fig. 11.31 The ε-insensitive band and the slack variables ζ_i, ζ_i*
The Lagrangian is

L(w, b, \boldsymbol{\zeta}, \boldsymbol{\zeta}^*, \boldsymbol{\lambda}, \boldsymbol{\lambda}^*, \boldsymbol{\mu}, \boldsymbol{\mu}^*) = \tfrac{1}{2}\, w^T w + C\sum_{i=1}^{P}(\zeta_i + \zeta_i^*) - \sum_{i=1}^{P} \lambda_i(\varepsilon + \zeta_i - y_i + w^T x_i + b)
- \sum_{i=1}^{P} \lambda_i^*(\varepsilon + \zeta_i^* + y_i - w^T x_i - b) - \sum_{i=1}^{P} (\mu_i \zeta_i + \mu_i^* \zeta_i^*)   (11.102)

where w, b, ζ_i and ζ_i* are the primal variables, and λ_i, λ_i*, μ_i, μ_i* ≥ 0 are the dual variables.
The KKT conditions are:

(i) \frac{\partial L}{\partial w} = w - \sum_{i=1}^{P} (\lambda_i - \lambda_i^*)\, x_i = 0
\frac{\partial L}{\partial b} = \sum_{i=1}^{P} (\lambda_i^* - \lambda_i) = 0
\frac{\partial L}{\partial \zeta_i} = C - \lambda_i - \mu_i = 0;\quad i = 1, ..., P
\frac{\partial L}{\partial \zeta_i^*} = C - \lambda_i^* - \mu_i^* = 0;\quad i = 1, ..., P

(ii) \varepsilon + \zeta_i - y_i + w^T x_i + b \ge 0;\quad i = 1, ..., P   (11.103)
\varepsilon + \zeta_i^* + y_i - w^T x_i - b \ge 0;\quad i = 1, ..., P
\zeta_i, \zeta_i^* \ge 0;\quad i = 1, ..., P

(iii) \lambda_i, \lambda_i^*, \mu_i, \mu_i^* \ge 0;\quad i = 1, ..., P

(iv) \lambda_i(\varepsilon + \zeta_i - y_i + w^T x_i + b) = 0;\quad i = 1, ..., P
\lambda_i^*(\varepsilon + \zeta_i^* + y_i - w^T x_i - b) = 0;\quad i = 1, ..., P
\mu_i \zeta_i = 0;\quad i = 1, ..., P
\mu_i^* \zeta_i^* = 0;\quad i = 1, ..., P
Substituting the relations in condition (i) of KKT conditions (11.103) yields the dual objective function.
The procedure is parallel to what has been followed earlier. The resulting dual optimization problem is
maximize L_*(\boldsymbol{\lambda}, \boldsymbol{\lambda}^*) = -\varepsilon \sum_{i=1}^{P} (\lambda_i + \lambda_i^*) + \sum_{i=1}^{P} (\lambda_i - \lambda_i^*)\, y_i - \tfrac{1}{2} \sum_{i=1}^{P}\sum_{j=1}^{P} (\lambda_i - \lambda_i^*)(\lambda_j - \lambda_j^*)\, x_i^T x_j

subject to \sum_{i=1}^{P} (\lambda_i - \lambda_i^*) = 0   (11.104)
\lambda_i, \lambda_i^* \in [0, C]
From condition (i) of KKT conditions (11.103), we have
w = \sum_{i=1}^{P} (\lambda_i - \lambda_i^*)\, x_i   (11.105)
Thus the weight vector w is completely described as a linear combination of the training patterns xi.
One of the most important properties of SVM is that the solution is sparse in λ_i, λ_i*. Consider the
following KKT conditions (from condition (iv) of (11.103)):

\lambda_i(\varepsilon + \zeta_i - y_i + w^T x_i + b) = 0
\lambda_i^*(\varepsilon + \zeta_i^* + y_i - w^T x_i - b) = 0   (11.106)

For all data points strictly inside the ε-insensitive tube, the second factors in (11.106) are nonzero; hence
λ_i, λ_i* have to be zero. This equivalently means that all the data points inside the ε-insensitive tube
(a large number of training examples belong to this category) have corresponding λ_i, λ_i* equal to zero.
Further, from (11.106) it follows that only for |y_i - w^T x_i - b| ≥ ε may the dual variables λ_i, λ_i* be
nonzero. Since there can never be a pair of dual variables λ_i, λ_i* which are both simultaneously nonzero,
as this would require slacks in both directions ('above' the tube and 'below' the tube), we have
λ_i × λ_i* = 0.
From KKT conditions (11.103), it follows that
(C - li )z i = 0 (11.107)
(C - li* )z i* =0
Thus the only samples (xi, yi) with corresponding li , li* = C lie outside the e-insensitive tube around f.
For li , li* Œ (0, C ), we have z i, zi*= 0 and moreover the second factor in (11.106) has to vanish. Hence b
can be computed as follows:
b = yi - wT xi - e for li Œ (0, C )
(11.108)
T
b = yi - w xi + e for li* Œ (0, C )
Intelligent Control with Neural Networks/Support Vector Machines 757
All data points with li , li* Œ (0, C ) are used to compute b, and then their average is taken as the final
value for b.
The examples that come with nonvanishing dual variables λ_i, λ_i* are called support vectors.
The next step is to make the SVM algorithm nonlinear. This is achieved by simply preprocessing the
training patterns x_i by a map φ into some feature space, and then applying the standard SVM algorithm.
All pattern-recognition/function-approximation (classification/regression) problems, when solved using
the SVM algorithms presented in this section, are basically quadratic optimization problems. Implementing
MATLAB functions for the SVM algorithms discussed in this section will be a rich learning experience
for the reader.
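In Python, the same ε-insensitive quadratic program is solved by scikit-learn's SVR; a brief sketch (our illustration of the formulation at work, with C, ε and the Gaussian kernel playing exactly the roles described above):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 80).reshape(-1, 1)                 # input patterns
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(80)   # noisy targets

model = SVR(kernel='rbf', C=10.0, epsilon=0.1, gamma=0.5).fit(X, y)
y_fit = model.predict(X)

# Sparsity: only examples on or outside the epsilon-tube are support vectors
print(len(model.support_), 'support vectors out of', len(X))
```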
REVIEW EXAMPLES
DC Motor Model
Although it is not mandatory to obtain a motor model if a neural network (NN) is used in the motor-control
system, it may be worth doing so from the analytical perspective, in order to establish the foundation of the
NN structure. We will use input/output patterns generated by simulation of this model for training the NN
(in a real-life situation, experimentally generated input/output patterns would be used for training).
The dc motor dynamics are given by the following equations (refer to Fig. 11.32):

v_a(t) = R_a i_a(t) + L_a \frac{di_a}{dt} + e_b(t)   (11.109)

e_b(t) = K_b\, \omega(t)   (11.110)

T_M(t) = K_T\, i_a(t)   (11.111)

T_M(t) = J\frac{d\omega(t)}{dt} + B\,\omega(t) + T_L(t) + T_F   (11.112)
Fig. 11.32 The dc motor system (armature circuit: v_a, R_a, L_a, e_b; mechanical side: T_M, J, B, T_F, T_L, ω)
where
va(t) = applied armature voltage (volts);
eb(t) = back emf (volts);
ia(t) = armature current (amps);
Ra = armature winding resistance (ohms);
La = armature winding inductance (henrys);
w(t) = angular velocity of the motor rotor (rad/sec);
TM (t) = torque developed by the motor (newton-m);
KT = torque constant (newton-m/amp);
Kb = back emf constant (volts/(rad/sec));
J = moment of inertia of the motor rotor with attached mechanical load (kg-m2);
B = viscous-friction coefficient of the motor rotor with attached mechanical load ((newton-m)/
(rad/sec));
TL(t) = disturbance load torque (newton-m); and
TF = frictional torque (newton-m).
The load torque T_L(t) can be expressed as

T_L(t) = \psi(\omega)   (11.113)

where the function ψ(·) depends on the nature of the load. For most propeller-driven or fan-type loads, the
function ψ(·) takes the following form:

T_L(t) = \mu\,\omega^2(t)\,[\mathrm{sgn}\,\omega(t)]   (11.114)

where μ is a constant.
The dc motor drive system can be expressed as a single-input, single-output system by combining Eqns
(11.109)-(11.112):

L_a J \frac{d^2\omega(t)}{dt^2} + (R_a J + L_a B)\frac{d\omega(t)}{dt} + (R_a B + K_b K_T)\,\omega(t) + L_a \frac{dT_L(t)}{dt} + R_a[T_L(t) + T_F] - K_T v_a(t) = 0   (11.115)
The discrete-time model is derived by replacing all continuous derivatives with finite differences:

L_a J\left[\frac{\omega(k+1) - 2\omega(k) + \omega(k-1)}{T^2}\right] + (R_a J + L_a B)\left[\frac{\omega(k+1) - \omega(k)}{T}\right] + (R_a B + K_b K_T)\,\omega(k)
+ L_a\left[\frac{T_L(k) - T_L(k-1)}{T}\right] + R_a T_L(k) + R_a T_F - K_T v_a(k) = 0   (11.116)

T_L(k) = \mu\,\omega^2(k)\,[\mathrm{sgn}\,\omega(k)]   (11.117)

T_L(k-1) = \mu\,\omega^2(k-1)\,[\mathrm{sgn}\,\omega(k)]   (11.118)

where T = sampling period, and ω(k) ≜ ω(t = kT); k = 0, 1, 2, ...
Manipulation of Eqns (11.116)-(11.118) yields

\omega(k+1) = K_1\omega(k) + K_2\omega(k-1) + K_3[\mathrm{sgn}\,\omega(k)]\,\omega^2(k) + K_4[\mathrm{sgn}\,\omega(k)]\,\omega^2(k-1) + K_5 v_a(k) + K_6   (11.119)

where

K_1 = \frac{2L_a J + T(R_a J + L_a B) - T^2(R_a B + K_b K_T)}{L_a J + T(R_a J + L_a B)}

K_2 = -\frac{L_a J}{L_a J + T(R_a J + L_a B)}

K_3 = -\frac{T(\mu L_a + \mu R_a T)}{L_a J + T(R_a J + L_a B)}   (11.120)

K_4 = \frac{T\mu L_a}{L_a J + T(R_a J + L_a B)}

K_5 = \frac{K_T T^2}{L_a J + T(R_a J + L_a B)}

K_6 = -\frac{T_F R_a T^2}{L_a J + T(R_a J + L_a B)}
The following parameter values are associated with the dc motor:

J = 0.068 kg-m²
B = 0.03475 newton-m/(rad/sec)
R_a = 7.56 Ω
L_a = 0.055 H   (11.121)
K_T = 3.475 newton-m/amp
K_b = 3.475 volts/(rad/sec)
μ = 0.0039 newton-m/(rad/sec)²
T_F = 0.212 newton-m
T = 40 msec = 0.04 sec
With these motor parameters, the constants K1, K2, K3, K4, K5 and K6 become
K1 = 0.34366
K2 = –0.1534069
K3 = – 2.286928 × 10–3
(11.122)
K4 = 3.5193358 × 10–4
K5 = 0.2280595
K6 = –0.105184
Equation (11.119) can be manipulated to obtain the inverse dynamic model of the drive system as

v_a(k) = f[\omega(k+1), \omega(k), \omega(k-1)]   (11.123)

The right-hand side of Eqn (11.123) is a nonlinear function of the speed ω, given by

f(\omega(k+1), \omega(k), \omega(k-1)) = \frac{1}{K_5}\big[\omega(k+1) - K_1\omega(k) - K_2\omega(k-1) - K_3\{\mathrm{sgn}\,\omega(k)\}\omega^2(k) - K_4\{\mathrm{sgn}\,\omega(k)\}\omega^2(k-1) - K_6\big]   (11.124)
which is assumed to be unknown (It is assumed that the only available qualitative a priori knowledge
about the plant is a rough estimate of the order of the plant). A neural network is trained to emulate the
unknown function f (◊). The values w (k + 1), w(k) and w(k – 1), which are the independent variables of
f (◊), are selected as the inputs to the NN. The corresponding target f(w (k + 1), w (k), w(k – 1)) is given
by Eqn. (11.124). This quantity is also equal to the armature voltage va(k), as seen from Eqn. (11.123).
Randomly generated input patterns [ω(k+1), ω(k), ω(k-1)] and the corresponding targets v_a(k) are used
for off-line training. The training data are generated within a constrained operating space. In conformity
with the mechanical and electrical hardware limitations of the motor, and with a hypothetical operating
scenario in mind, the following constrained operating space is defined:

-30 < \omega(k) < 30\ \text{rad/sec}
|\omega(k-1) - \omega(k)| < 1.0\ \text{rad/sec}   (11.125)
|v_a(k)| < 100\ \text{volts}
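A Python sketch of this data generation (our illustration; we additionally keep the speed change from ω(k) to ω(k+1) within the same 1 rad/sec bound, an assumption made here so that the voltage limit is rarely violated):

```python
import numpy as np

# Motor constants from Eqn (11.122)
K1, K2, K3 = 0.34366, -0.1534069, -2.286928e-3
K4, K5, K6 = 3.5193358e-4, 0.2280595, -0.105184

def inverse_target(w_next, w_now, w_prev):
    # Eqn (11.124): armature voltage producing the speed transition
    return (w_next - K1*w_now - K2*w_prev
            - K3*np.sign(w_now)*w_now**2
            - K4*np.sign(w_now)*w_prev**2 - K6) / K5

def make_training_set(P, seed=0):
    """Random patterns [w(k+1), w(k), w(k-1)] -> va(k) inside the
    operating space of Eqn (11.125)."""
    rng = np.random.default_rng(seed)
    pats, targets = [], []
    while len(pats) < P:
        w_now  = rng.uniform(-30.0, 30.0)          # -30 < w(k) < 30
        w_prev = w_now + rng.uniform(-1.0, 1.0)    # |w(k-1) - w(k)| < 1
        w_next = w_now + rng.uniform(-1.0, 1.0)    # assumption (see above)
        va = inverse_target(w_next, w_now, w_prev)
        if abs(va) < 100.0:                        # |va(k)| < 100 V
            pats.append([w_next, w_now, w_prev])
            targets.append(va)
    return np.array(pats), np.array(targets)
```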
The estimated motor armature voltage given by the NN identifier is
v̂a (k – 1) = N(w (k), w(k – 1), w(k – 2)) (11.126)
Fig. 11.33 Trajectory control of the shaft speed using the NN inverse model as controller (reference ω_r(k+1); delayed speeds fed back through z⁻¹ and z⁻² to the NN inverse model, which generates v_a(k) for the dc motor plant)

The control scheme of Fig. 11.33 enables accurate trajectory control of the shaft speed ω(k). Refer to
Appendix B for realization of the controller.
Fig. 11.34 Water bath control system
We modify this model to include a saturating nonlinearity, so that the water temperature cannot exceed
some limit. The nonlinear plant model then becomes (obtained from the real plant by experimentation)

y(k+1) = F\,y(k) + \frac{g}{1 + \exp[0.5\,y(k) - 40]}\,u(k) + (1 - F)\,Y_0   (11.130)

a = 1.00151 × 10⁻⁴
b = 8.67973 × 10⁻³   (11.131)
Y_0 = 25°C
T = 30 sec
u = input to the PWM, limited between 0 and 5 volts.
With these parameters, the simulated system is equivalent to a SISO temperature control system of a water
bath, which exhibits linear behavior up to about 70°C and then becomes nonlinear and saturates at about
80°C.
The task is to learn how to control the plant described by Eqn (11.130) in order to follow a specified
reference y_r(k), minimizing some norm of the error e(k) = y_r(k) - y(k) through time. It is assumed that the
model in Eqn (11.130) is unknown; the only available qualitative a priori knowledge about the plant is
a rough estimate of the order of the plant.
A neural network is trained to emulate the inverse dynamics of the plant. Assume that at instant k + 1,
the current output y(k + 1), the P - 1 previous values of y, and the P previous values of u are all stored in
memory. Then the P pairs (x^T(k - i), u(k - i)); i = 0, 1, ..., P - 1, with x^T(k) = [y(k + 1), y(k)], can be used
as patterns for training the NN at time k + 1. A train of pulses is applied to the plant and the corresponding
input/output pairs are recorded. The NN is then trained with reasonably large sets of data, chosen from
the experimentally obtained data bank, in order to span a considerable region of the control space (we
will use input/output patterns generated by simulation of the plant model for training the NN).
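A Python sketch of this data-bank generation (our illustration). The gains F and g are not restated in the excerpt above; we assume the usual discretization F = e^{-aT} and g = (b/a)(1 - e^{-aT}), which is consistent with the stated saturation near 80°C:

```python
import numpy as np

# Parameters from Eqn (11.131)
a, b = 1.00151e-4, 8.67973e-3
Y0, T = 25.0, 30.0
F = np.exp(-a * T)                    # assumed discretization (see lead-in)
g = (b / a) * (1.0 - np.exp(-a * T))

def plant(y, u):
    # Nonlinear water-bath model, Eqn (11.130)
    u = np.clip(u, 0.0, 5.0)          # PWM input limited to 0-5 volts
    return F * y + g * u / (1.0 + np.exp(0.5 * y - 40.0)) + (1 - F) * Y0

# Excite the plant with a train of random pulses and record the data bank
rng = np.random.default_rng(0)
y, bank = Y0, []
for k in range(2000):
    u = rng.uniform(0.0, 5.0)         # pulse amplitude
    y_next = plant(y, u)
    bank.append(((y_next, y), u))     # pattern x(k) = [y(k+1), y(k)], target u(k)
    y = y_next
```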
A controller topology is presented in Fig. 11.35. It is assumed that the complete reference trajectory yr(k)
is known in advance. The feedforward component of the control input is then composed by substituting all
system outputs by corresponding reference values. Refer to Appendix B for realization of the controller.
Fig. 11.35 Control structure for the water bath: the NN inverse model provides the dynamic feedforward component u_ff(k) from the reference y_r(k+1); a PID controller provides the feedback component u_fb(k); u(k) = u_ff(k) + u_fb(k) drives the plant
PROBLEMS
11.1 It is believed that the output y of a plant is linearly related to the input u; that is,

ŷ = w_1 u + w_2

(a) What are the values of w_1 and w_2 if the following measurements are obtained: u = 2, y = 5;
u = -2, y = 1?
(b) One more measurement is taken: u = 5, y = 7. Find a least-squares estimate of w_1 and w_2
using all the three measurements.
(c) Find the unique minimal sum of error squares in this linear fit to the three points.

11.2 Consider the network in Fig. P11.2. An input signal x, comprising features and augmented by a
constant input component (bias), is applied to the network with linear activation function. The
network gives the output ŷ.

Fig. P11.2 Single-layer linear network (inputs x_1, ..., x_n and bias 1; weights w_{j1}, ..., w_{jn}, w_{j0}; outputs ŷ_1, ..., ŷ_q)

(a) Organize the weights as row vectors:

w_j^T = [w_{j1}\ w_{j2}\ \cdots\ w_{jn}\ w_{j0}];\quad j = 1, 2, ..., q

and write the equations (model) that this network represents.
(b) The learning environment comprises a training set of P data pairs {x( p), y(p); p = 1, 2, ..., P}
consisting of the input vector x and output vector y.
Prove that the gradient descent learning rule for the network is
wj(k + 1) = wj(k) + η ej(k) x
where k is the iteration index, η is the learning rate parameter, and ej = yj – ŷj.
11.3 Consider the RBF network shown in Fig. 11.20. There are two sets of parameters governing the mapping properties of this network: the weights wji; i = 1, 2, ..., m; j = 1, 2, ..., q, in the output layer, and the centers ci of the radial basis functions. The simplest form of RBF network training is with fixed centers. In particular, the centers are commonly chosen, in a random manner, as a subset of the input data set. A sufficient number of centers, randomly selected from the input data set, would distribute according to the probability density function of the training data, thus providing an adequate sampling of the input space. Because the centers are fixed, the mapping performed by the hidden layer is fixed as well. Derive a gradient descent training algorithm to determine the appropriate settings of the weights in the network output layer, so that the performance of the network mapping is optimized.
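As a concrete illustration of this fixed-centers setup (not the requested derivation), the sketch below trains only the linear output weights by gradient descent; the toy data, basis width sigma, and sizes are illustrative assumptions.

import numpy as np
rng = np.random.default_rng(1)

# RBF network with fixed, randomly selected centers; only the linear
# output weights are trained (single output, q = 1).
X = rng.uniform(-1, 1, (200, 2))                   # input data set
y = np.sin(np.pi * X[:, 0]) * X[:, 1]              # toy target function
m = 20
centers = X[rng.choice(len(X), m, replace=False)]  # random subset of inputs
sigma = 0.5

def phi(x):
    d2 = ((centers - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))          # Gaussian basis functions

w, eta = np.zeros(m), 0.05
for epoch in range(100):
    for xp, t in zip(X, y):
        h = phi(xp)
        w += eta * (t - w @ h) * h                 # gradient descent step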
11.4 It is desired to design a one-layer NN with one input x and one output ŷ that associates input
x(1) = – 3 with the target output y(1) = 0.4, and input x(2) = 2 with the target output y(2) = 0.8.
Determine the parameters w and w0 of the network
ŷ = σ(wx + w0)
with unipolar sigmoidal (log-sigmoid) activation function, that minimize the error
E = [y(1) – ŷ(1)]² + [y(2) – ŷ(2)]²
11.5 Streamline the notation in Chapter 11 for a three-layer NN. For instance, define Wh1 as weights of
Hidden layer 1 with m nodes; Wh2 as weights of Hidden layer 2 with p nodes; and V as weights of
output layer with q nodes.
Input variables : xi; i = 1, ..., n
Outputs of hidden layer 1 : zl; l = 1, ..., m
Outputs of hidden layer 2 : tr; r = 1, ..., p
Outputs of output layer : ŷj; j = 1, ..., q
Desired outputs : yj
Learning constant : η
Derive the backpropagation algorithm for the three-layer network, assuming the output layer has
linear activation and the two hidden layers have unipolar sigmoidal activations.
11.6 Consider a four-input single-node perceptron with a bipolar sigmoidal function (tan-sigmoid)
σ(a) = 2/(1 + e^(–a)) – 1
where 'a' is the activation value for the node.
(a) Derive the weight update rule for {wi} for all i. The learning rate η = 0.1. Input variables: xi; i = 1, 2, 3, 4. Desired output is y.
(b) Use the rule in part (a) to update the perceptron weights incrementally for one epoch. The set
of input and desired output patterns is as follows:
Fig. P11.7 Two-layer network: inputs x1, x2 (plus constant bias inputs) feed two sigmoidal hidden nodes with weights wji and biases wj0, producing z1, z2; the hidden outputs feed a sigmoidal output node with weights n1, n2 and bias n0
(b) Use the equations derived in part (a) to update the weights in the network for one step with
input vector x = [1 0]T, desired output y = 1, and the initial weights:
w10 = 1, w11 = 3, w12 = 4, w20 = –6, w21 = 6, w22 = 5
n0 = –3.92, n1 = 2, and n2 = 4
(c) As a check, compute the error with the same input for initial weights and updated weights
and verify that the error has decreased.
11.8 We are given the two-layer backpropagation network in Fig. P11.8.
Derive the weight update rules for {nl} and {wl} for all l. Assume that the activation function for all the nodes is the bipolar sigmoid function
σ(a) = 2/(1 + e^(–a)) – 1
where 'a' is the activation value for the node. The learning constant is η = 0.4. The desired output is y.
Fig. P11.8 Two-layer network: scalar input x (plus constant bias inputs) feeds m sigmoidal hidden nodes with weights wl and biases wl0, producing z1, ..., zm; the hidden outputs feed a sigmoidal output node with weights n1, ..., nm and bias n0, producing y
11.9 We are given the two-layer backpropagation network shown in Fig. P11.9.
Fig. P11.9
(a) Derive the weight update rules in incremental mode for {vl} and {wli} for all i and l; the iteration index is k. Assume that the activation function for all nodes in the hidden layer is
σ(a) = 1/(1 + e^(–a))
and the activation function for the node in the output layer is
σ(a) = (e^a – e^(–a))/(e^a + e^(–a))
The learning constant η = 0.2. The desired output is y.
(b) Use the equations derived in part (a) to update the weights in the network for one step with
input vector x = [0.5 – 0.4]T, desired output y = 0.15, and the initial weights:
w11 = 0.2, w12 = 0.1, w21 = 0.4, w22 = 0.6, w31 = 0.3, w32 = 0.5; v1 = 0.1, v2 = 0.2 and v3 = 0.1.
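For checking a hand derivation against parts (b) and (c), the following sketch performs one incremental update for this network. It assumes no bias terms (none are listed among the initial weights) and the squared-error criterion E = e²/2; both are assumptions, not the book's statement.

import numpy as np

W = np.array([[0.2, 0.1], [0.4, 0.6], [0.3, 0.5]])   # w_li
v = np.array([0.1, 0.2, 0.1])                        # v_l
x = np.array([0.5, -0.4]); y = 0.15; eta = 0.2

z = 1 / (1 + np.exp(-(W @ x)))        # logistic hidden layer
y_hat = np.tanh(v @ z)                # tanh output node
e = y - y_hat
delta_o = e * (1 - y_hat ** 2)        # output-node error term
delta_h = delta_o * v * z * (1 - z)   # hidden-node error terms
v += eta * delta_o * z                # gradient descent on E = e**2 / 2
W += eta * np.outer(delta_h, x)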
Chapter 12
Fuzzy Logic and
Neuro-Fuzzy Systems
12.1 INTRODUCTION
In the previous chapter, we were mostly concerned with learning from experimental data (examples,
samples, measurements, patterns, or observations). Our emphasis was on the following machine learning
problem setting:
There is some unknown dependency (mapping, function) y = f(x) between some high-dimensional input
vector x and a scalar output y (or vector output y). The only information available about the underlying
dependency is a training data set {x(p), y(p); p = 1,2,…,P}. We employed neural networks to learn this
dependency. The number of neurons, their link structure, and the corresponding weights were the subjects of the learning procedure.
It may be noted that, depending upon the problem, the neural-network weights have different physical meanings, and sometimes it is hard to find any physical meaning at all. Neural network learning is, thus, a 'black box' design situation (Fig. 12.1a), in which the process is entirely unknown but there are known examples {x(p), y(p); p = 1, 2, …, P}. The knowledge (information) is available only in the form of data pairs; the neural network is required to be trained using this knowledge before the machine can be used for prediction.
A large amount of data can constitute a proportionally large amount of information. But this comes with
a level of uncertainty. As we come to know more, we also know how much we do not know, and our
awareness of the concept of complexity seems to increase. We tend to forego some precise data and allow
uncertainty to creep into our perception. This is when we start describing things in a slightly vague and
fuzzy manner.
Consider, for example, a real-life situation in process industry. Control of large and complex processes
is facilitated by using distributed computer control systems (DCCS). Acquisition of process data, i.e.,
collection of instantaneous values of process variables, and status messages of plant control facilities
(valves, pumps, motors, etc.) needed for efficient direct digital control; processing of collected data;
plant hardware monitoring, system check and diagnosis; closed-loop control and logic control func-
tions, etc., are the routine tasks of a DCCS. An enormous amount of data (constituting a proportionally large amount of information) is thus generated. Closed-loop control design using the machine learning paradigm
Fig. 12.1 Neural networks and fuzzy logic models as examples of 'black box', 'white box', and 'grey box' modeling approaches [138]
based on the knowledge (information) embedded in the data, is feasible. This approach is, however, seldom used in the process industry.
In a man–machine control system, an experienced process operator employs, consciously or
subconsciously, a set of IF-THEN rules to control a process. The operator estimates the important
process variables (such as error, rate of change of error) at discrete time instants, and based on this
information, s/he manipulates the control signal. The estimation of the process variables is not done in
numerical form; it is rather done in linguistic form. For example, s/he may categorize the variable ‘error’
into the following labels:
‘error is negative’
‘error is near zero’
‘error is positive’
Analogously, s/he defines the categories of ‘rate of change of error’ and ‘change of control’.
The categories (linguistic labels) of the process variables are, in general, vague and qualitative.
Their purpose is to describe in a qualitative way control strategies based on human experience and
understanding. A commonly used way of expressing the knowledge (information) based on human
experience and understanding is through IF-THEN rules. A typical rule in this kind of knowledge base
will be of the form:
IF error is near zero AND rate of change of error is near zero THEN change of control is zero
Process operators have no difficulty understanding and interpreting this kind of rule, because they are used to hearing problems and solutions described like this. However, providing a computer with the same level of understanding is a difficult task. How can we represent 'expert knowledge' that uses vague and ambiguous terms, in a computer? Can it be done at all?
In the label 'near zero', the word near seems to be comprehended effortlessly by the human brain, but what of computing systems? What does near mean in the context of process control? The range –0.1 to +0.1, or the range –1 to +1, or …? Is there a way we can make number-crunching systems understand this? If it is ascertained in a machine that any error of magnitude less than or equal to 1 means near zero, and anything outside this range is negative/positive, then does it mean that 1.001 is not 'near zero' while 1 is 'near zero'? Such a sharp distinction is absurd in the real world.
The rule-base representing the expert knowledge can be significantly improved if we consider more
categories for process variables. For example, linguistic label ‘positive’ may be subdivided into positive
small, positive medium, and positive large. The increased granularity of the categories results in finer
formulated rules. There is, however, a trade-off between accuracy and complexity.
Fuzzy logic deals with how we can capture the essence of human comprehension and embed it in the system, allowing for a gradual transition from one category to another. This comprehension, as per Lotfi Zadeh, the founder of the fuzzy logic concept, confers a higher machine intelligence level on computer systems.
In the previous chapter, our focus was on machine learning problem setting based on the knowledge
(information) available in the form of numerical data. Our focus in this chapter is on another machine
learning problem setting where language serves as a way of expressing imprecise knowledge, and
the tolerance for imprecision about the vague environment we live in. Most human knowledge is
imprecise, uncertain and usually expressed in linguistic terms. In addition, human ways of reasoning
are approximate, nonquantitative and linguistic in nature. Fuzzy logic is a tool for transforming such linguistically expressed knowledge into a workable algorithm called a fuzzy logic model. In its newest incarnation, fuzzy logic is called 'computing with words'.
The point of departure in fuzzy logic is the existence of a human solution. If there is no human solution, there will be no knowledge to model and, consequently, no sense in applying fuzzy logic. However, the existence of a human solution is not sufficient. One must be able to articulate, and structure, the human solution in the language of fuzzy IF-THEN rules. Almost all structured human knowledge can be expressed in the form of IF-THEN rules. Fuzzy logic modeling is thus a 'white box' design situation, in which the solution to the problem is known; that is, structured human knowledge (experience, expertise, heuristics) about the process exists (Fig. 12.1b). Interpretability of the fuzzy logic model for decision making is an important characteristic of this setting of machine learning problems.
Neural networks and fuzzy logic models are modeling tools. They perform in the same way after the learning stage of the neural network, or the embedding of human knowledge about some specific task in the fuzzy logic structure, is finished. Whether the more appropriate tool for solving a given problem is a
neural network or a fuzzy logic model, depends upon the availability of previous expert knowledge (in
linguistic form) about the system to be modeled and the amount of measured data. The less previous
expert knowledge exists, the more likely it is that a neural network approach will be used to attempt a
solution. The more knowledge available, the more suitable the problem will be for fuzzy logic modeling.
Through integration of the techniques of fuzzy logic models and neural networks, we can reap the
benefits of both the fuzzy logic models and the neural networks. One such integrated system, a neuro-
fuzzy system, transforms the burden of designing fuzzy logic systems to the training and learning of
neural networks. That is, the neural networks provide learning abilities to the fuzzy logic systems.
Neuro-fuzzy systems are functionally fuzzy logic models; they only utilize learning ability of neural
networks to realize the key components of the fuzzy logic model. Integrated systems may also be formed
by incorporating fuzzy logic into the neural network models. In such an integration, called a fuzzy-
neural network, the numerical parameters (such as input-output data, weights, etc.) of a neural network
are fuzzified. Fuzzy-neural networks are fuzzified neural networks, and thus are functionally neural
networks.
Instances involving some knowledge and some data correspond to a 'grey box' design situation (Fig. 12.1c), covered by the paradigm of neuro-fuzzy and fuzzy-neural models.
Embedding existing structured human knowledge into fuzzy logic models and neuro-fuzzy models will be the subject of discussion in this chapter.
12.2
In a man–machine system, there arises the problem of processing information with the ‘vagueness’ that
is characteristic of man. We consider here a real-life situation in process control.
The basic structure of a feedback control system is shown in Fig. 12.2a. G represents the system to
be controlled (plant or process). The purpose of the controller D is to guarantee a desired response
of the output y. The process of keeping the output y close to the set-point (reference input) yr, despite
the presence of disturbances, fluctuations of the system parameters, and noisy measurements, is called
regulation. The law governing corrective action of the controller is called the control algorithm. The
output of the controller is the control action u.
The general form of the control law (implemented using a digital computer) is
u(k) = f(e(k), e(k – 1), ..., e(k – m), u(k – 1), ..., u(k – m)) (12.1)
providing a control action that describes the relationship between the input and the output of the controller. In Eqn. (12.1), e = yr – y represents the error between the desired set-point yr and the output of the controlled system; the parameter m defines the order of the controller; and f(·) is, in general, a nonlinear function. k is an index representing the sampling instant; T is the sampling interval used for digital implementation (Fig. 12.2b). To distinguish the control law (12.1) from control schemes based on fuzzy logic/neural networks, we shall call it the conventional control law.
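As a concrete instance of (12.1), the sketch below implements a digital PI controller in incremental form; the gains Kc, TI and the sampling interval T are illustrative assumptions.

# A conventional control law of the form (12.1): digital PI controller
# written as a difference equation in e(k), e(k-1) and u(k-1).
Kc, TI, T = 2.0, 50.0, 1.0   # illustrative gains and sampling interval

def pi_control(e_k, e_km1, u_km1):
    """u(k) = u(k-1) + Kc*(e(k) - e(k-1)) + (Kc*T/TI)*e(k)."""
    return u_km1 + Kc * (e_k - e_km1) + (Kc * T / TI) * e_k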
A common feature of conventional control is that the control algorithm is analytically described by
equations—algebraic, difference, differential, and so on. In general, the synthesis of such control
algorithms requires a formalized analytical description of the controlled system by a mathematical
model. The concept of analyticity is one of the main paradigms of conventional control theory. We will
also refer to conventional control as model-based control.
When the underlying assumptions are satisfied, many of the model-based control techniques provide good
stability, robustness to model uncertainties and disturbances, and speed of response. However, there are
Fig. 12.2 (a) Basic structure of a feedback control system: controller D, plant G, set-point yr, error e, control u, output y
many practical deficiencies of these control algorithms. It is, generally, difficult to accurately represent a
complex process by a mathematical model. If the process model has parameters whose values are partially
known, ambiguous or vague, the control algorithms that are based on such incomplete information will not,
usually, give satisfactory results. The environment with which the process interacts may not be completely
predictable, and it is normally not possible for a model-based algorithm to accurately respond to a
condition that it did not anticipate. Skilled human operators are, however, controlling complex plants
quite successfully on the basis of their experience, without having quantitative models.
Regulatory control objectives, typical of many industrial applications, are
(1) to remove any significant errors in process output y(t) by appropriate adjustment of the controller
output u(k);
(2) to prevent process output from exceeding some user-specified constraint yc, i.e., for all t, y(t)
should be less than or equal to yc; and
(3) to produce smooth control action near the set-point, i.e., minor fluctuations in the process output
are not passed further to the controller.
A conventional PI controller uses an analytical expression of the following form to compute the control
action:
u(t) = K′c [e(t) + (1/TI) ∫ e(t) dt]    (12.2)
Fig. 12.3 Set-point region, normal region, and constraint region of process operation (smooth u near the set-point yr; u → umax as the output approaches the constraint yc)
A simple PI controller is inherently incapable of achieving all of the above control objectives; it has to be augmented with additional (nonlinear) control laws for the set-point and constraint regions, making the control scheme a complex adaptive scheme that provides the desired gain modification when required.
On the other hand, an experienced process operator can easily meet all the three control objectives. An
expert operator employs, consciously or subconsciously, a set of IF-THEN rules to control a process
1. A PD controller in position form is u(k) = Kc e(k) + KD v(k). We see that the PD controller in position form is structurally related to the PI controller in incremental form.
(Fig. 12.4). S/he estimates the error e(k) and the time rate of change of error v(k) at a specific time instant, and based on this information, changes the control by Δu(k).
A typical production rule of the rule-base in Fig. 12.4 is of the form:
IF (process state) THEN (control action) (12.4)
instead of an analytical expression defining the control variable as a function of the process state. The 'process state' part of the rule is called the rule premise (or antecedent), and contains a description of the process state at the kth sampling instant. This description is done in terms of particular values of the error e(k), the velocity (time rate of change of error) v(k), and the constraint. The 'control action' part of the rule is called the conclusion (or consequent), and contains a description of the control variable which should be produced, given the particular process state in the rule antecedent. This description is in terms of the value of the change-in-control, Δu(k).
Fig. 12.4 Man–machine control system
Negative values of e(k) mean that the current process output y(k) has a value above the set-point yr, since
e(k) = yr – y(k) < 0. The magnitude of a negative value describes the magnitude of the difference yr – y.
On the other hand, positive values of e(k) express the knowledge that the current value of the process
output y(k) is below the set-point yr. The magnitude of such a positive value is the magnitude of the
difference yr – y.
Negative values of v(k) mean that the current process output y(k) has increased compared with its previous
value y(k – 1), since v(k) = – (y(k) – y(k – 1))/T < 0. The magnitude of such a negative value describes
the magnitude of this increase. Positive values of v(k) express the knowledge that y(k) has decreased its
value when compared to y(k – 1). The magnitude of such a value is the magnitude of the decrease.
Positive values of Δu(k) mean that the value of the control, u(k – 1), has to be increased to obtain the value of the control for the current sampling time k. A value with a negative sign means a decrease in the value of u(k – 1). The magnitude of such a value is the magnitude of the increase/decrease in the value u(k – 1).
The possible combinations of positive/negative values of e(k) and v(k) are as follows:
(1) positive e, negative v
(2) negative e, positive v
(3) negative e, negative v
(4) positive e, positive v
The combination (positive e(k), negative v(k)) implies that y < yr, since e(k) = yr – y(k) > 0; and ẏ > 0, since v(k) = – (y(k) – y(k – 1))/T < 0. This means that the current process output y(k) is below the set-point and the controller is driving the system upward, as shown by point D in Fig. 12.5. Thus, the current process output is approaching the set-point from below. The combination (negative e(k), positive v(k)) implies that y > yr, and ẏ < 0. This means that the current process output is above the set-point and the controller is driving the system downward, as shown by point B in Fig. 12.5. Thus, the current process output is approaching the set-point from above. The combination (negative e(k), negative v(k)) implies that y > yr and ẏ > 0. This means that the current process output y(k) is above the set-point and the controller is driving the system upward, as shown by point C in Fig. 12.5. Thus, the process output is moving further away from the set-point and approaching overshoot. The combination (positive e(k), positive v(k)) implies that y < yr and ẏ < 0. This means that the current process output is below the set-point and the controller is driving the system downward, as shown by point A in Fig. 12.5. Thus, the process output is moving further away from the set-point and approaching undershoot.
Fig. 12.5 Process output y(k) responding to a step set-point yr: points A, B, C and D mark the four sign combinations of e(k) and v(k)
In a man–machine control system of the type shown in Fig. 12.4, the experience-based knowledge of the process operator and/or control engineer is instrumental in changing the control by Δu(k), for a given estimate of the error e(k) and the time rate of change of error v(k).
(i) If both e(k) and v(k) (positive or negative) are small (or zero), it means that the current value of the process output variable y(k) has deviated from the set-point but is still close to it. The amount of change Δu(k) in the previous control u(k – 1) should also be small (or zero) in magnitude, which is intended to correct small deviations from the set-point.
(ii) Consider the situation when e(k) has a large negative value (which implies that y(k) is significantly above the set-point). If v(k) is positive at the same time, this means that y is moving towards the set-point. The amount of change Δu to be introduced is intended to either speed up or slow down the approach to the set-point. For example, if y(k) is much above the set-point (e(k) has a large negative value) and it is moving towards the set-point with a small step (v(k) has a small positive value), then the magnitude of this step has to be significantly increased (Δu(k) → large negative value).
(iii) e(k) has either a small value (positive, negative, zero) or a large positive value, which implies that y(k) is either close to the set-point or significantly below it. If v(k) is positive at the same time, this means that y is moving away from the set-point. Then, a positive change Δu(k) in the previous control u(k – 1) is required to reverse this trend and make y start moving towards the set-point, instead of moving away from it.
(iv) Consider a situation when e(k) has a large positive value (which implies that y(k) is significantly below the set-point) and v(k) is negative (which implies that y is moving towards the set-point). The amount of change Δu to be introduced is intended to either speed up, or slow down, the approach to the set-point. For example, if y(k) is much below the set-point (e(k) has a large positive value), and it is moving towards the set-point with a somewhat large step (v(k) has a large negative value), then the magnitude of this step need not be changed (Δu(k) → 0), or only slightly enlarged (Δu(k) → small positive value).
(v) e(k) has either a small value (positive, negative, zero) or a large negative value, and this implies that y(k) is either close to the set-point or significantly above it. If v(k) is negative at the same time, y is moving away from the set-point. Thus, a negative change Δu(k) in the previous control u(k – 1) is required to reverse this trend and make y start moving towards the set-point instead of moving away from it.
The variables e, v and Δu are described as consisting of a finite number of verbally expressed values
which these variables can take. Values are expressed as tuples of the form {value sign, value magnitude}.
The ‘value sign’ component of such a tuple takes on either one of the two values: positive or negative.
The ‘value magnitude’ component can take on any number of magnitudes, e.g., {zero, small, medium,
big}, or {zero, small, big}, or {zero, very small, small, medium, big, very big}, etc.
The tuples of values may, therefore, look like: Negative Big (NB), Negative Medium (NM), Negative
Small (NS), Zero (ZO), Positive Small (PS), Positive Medium (PM), Positive Big (PB) or an enhanced
set/subset of these values.
We consider here a simple rule-based controller which employs only three values of the variables e, v, and Δu: Negative (N), Near Zero (NZ), Positive (P) for e and v; and Negative (N), Zero (Z), Positive (P) for Δu. A typical production rule of the rule-base in Fig. 12.4 is
IF e(k) is Positive and v(k) is Positive THEN Δu(k) is Positive    (12.5)
Let us see now what such a rule actually means. A positive e(k) implies that y(k) is below the set-point. If v(k) is positive at the same time, it means that y(k) is moving away from the set-point. Thus, a positive change Δu(k) in the previous control u(k – 1) is required to reverse this trend.
Consider another rule:
IF e(k) is Positive and v(k) is Negative THEN Δu(k) is Zero    (12.6)
This rule says that if y(k) is below the set-point, and is moving towards the set-point, then no change in control is required.
We will present the rule-base in the table format shown in Fig. 12.6. The cell defined by the intersection of the third row and third column represents the rule given in (12.5), and the cell defined by the intersection of the third row and first column represents the rule given in (12.6).
Fig. 12.6 Rule-base in table format: rows correspond to e(k) ∈ {N, NZ, P}, columns to v(k) ∈ {N, NZ, P}, and entries give the value of Δu(k)
The rule-base shown in Fig. 12.6 is designed to remove any significant errors in process output by
appropriate adjustment of the controller output. Note that the rule
IF e(k) is Near Zero and v(k) is Near Zero THEN Δu(k) is Zero    (12.7)
ensures smooth action near the set-point, i.e., minor fluctuations in the process output are not passed
further to the controller.
The rule-base of Fig. 12.6 is thus effective for control action in the set-point region and the normal region
in Fig. 12.3. However, we require additional rules for the constraint region. The following three rules prescribe a control action when the error is in the constraint region, approaching it, or leaving it.
(i) IF e(k) is in constraint region THEN value of Δu(k) is drastic change.
This rule specifies the magnitude of the additional Δu(k) to be added to the one already determined by the rules of Fig. 12.6, when e(k) is in the constraint region.
(ii) IF e(k) enters constraint region THEN start summing up the values of Δu(k) determined by constraint Rule 1.
(iii) IF e(k) leaves constraint region THEN subtract the total value of Δu(k) determined by constraint Rule 2.
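The rule-base can be exercised with a crisp (not yet fuzzy) sketch like the one below. The entries of the table in Fig. 12.6 are not reproduced in this excerpt, so the table used here is a plausible completion consistent with rules (12.5)-(12.7), and the classification band and step sizes are illustrative assumptions.

# Crisp rule-based controller: categories N, NZ, P for e and v;
# N, Z, P for delta-u. Table entries other than (12.5)-(12.7) are
# a hypothetical completion, not the book's Fig. 12.6.
RULES = {('N', 'N'): 'N', ('N', 'NZ'): 'N', ('N', 'P'): 'Z',
         ('NZ', 'N'): 'N', ('NZ', 'NZ'): 'Z', ('NZ', 'P'): 'P',
         ('P', 'N'): 'Z', ('P', 'NZ'): 'P', ('P', 'P'): 'P'}
STEP = {'N': -0.1, 'Z': 0.0, 'P': 0.1}   # illustrative crisp actions

def category(x, band=0.05):
    """Crisp three-way classification; 'band' is an assumed threshold."""
    return 'NZ' if abs(x) <= band else ('P' if x > 0 else 'N')

def rule_based_delta_u(e, v):
    return STEP[RULES[(category(e), category(v))]]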
The man–machine control system of Fig. 12.4 has the capability of representing and manipulating data that is not precise, but rather fuzzy. 'The error variable is near zero', 'the change in control is drastic', etc., are the type of linguistic information which the expert controller is required to handle. But what is a 'drastic change' in control? The property 'drastic' is inherently vague, meaning that the set of signals it applies to has no sharp boundary between 'drastic' and 'not drastic'. The fuzziness of a property lies in the lack of well-defined boundaries of the set of objects to which the property applies.
Problems featuring uncertainty and ambiguity have been successfully addressed subconsciously by
humans. Humans can adapt to unfamiliar situations and they are able to gather information in an efficient
manner, and discard irrelevant details. The information gathered need not be complete and precise and
could be general, qualitative and vague because humans can reason, infer and deduce new information
and knowledge. They can learn, perceive and improve their skills through experience.
How can humans reason about complex systems, when the complete description of such a system often
requires more detailed data than a human could ever hope to recognize simultaneously, and assimilate
with understanding? The answer is that humans have the capacity to reason approximately. In reasoning
about a complex system, humans reason approximately about its behavior, thereby maintaining only a
generic understanding about the problem.
The seminal work by Dr. Lotfi Zadeh (1965) on system analysis based on the theory of fuzzy sets,
has provided a mathematical strength to capture the uncertainties associated with human cognitive
processes, such as thinking and reasoning. The conventional approaches to knowledge representation,
lack the means for representing the meaning of fuzzy concepts. As a consequence, the approaches based
on classical logic and probability theory, do not provide an appropriate conceptual framework for dealing
with the representation of commonsense knowledge, since such knowledge is by its nature, both, lexically
imprecise and non-categorical. The development of fuzzy logic was motivated, in large measure, by the
need for a conceptual framework which can address the issue of uncertainty and lexical imprecision.
Fuzzy logic provides an inference morphology, that enables approximate human reasoning capabilities
to be applied to knowledge-based systems.
Since the publication of Zadeh’s seminal work Fuzzy Sets in 1965, the subject has been the focus of
many independent research investigations by mathematicians, scientists and engineers from around the
world. Fuzzy logic has rapidly become one of the most successful of technologies today, for developing
sophisticated control systems. With its aid, complex requirements may be implemented in amazingly
simple, easily maintained and inexpensive controllers. Of course, fuzzy logic is not the best approach
for every control problem. As designers look at its power and expressiveness, they must decide where
to apply it.
The criteria, in order of relevance, as to when and where to apply fuzzy logic are as follows:
Human (structured) knowledge is available.
A mathematical model is unknown or impossible.
The process is substantially nonlinear.
There is lack of precise sensor information.
It is applied at the higher levels of hierarchical control systems.
It is applied in generic decision-making problems.
Possible difficulties in applying fuzzy logic, arise from the following:
Knowledge is subjective.
For high-dimensional inputs, the increase in the required number of rules is exponential.
Knowledge must be structured, but experts bounce between a few extreme poles: they have trouble
structuring the knowledge; they are too aware of their ‘expertise’; they tend to hide knowledge;
and there may be some other subjective factors working against the whole process of human
knowledge transfer.
Note that the basic premise of fuzzy logic is that a human solution is good. When applied, for example, in control systems, this premise means that a human being is a good controller. Today, after several thousand successful applications, there is more or less a convergence of opinion on the trustworthiness of this premise.
The word 'fuzzy' may seem to imply something intrinsically imprecise, but there is nothing 'fuzzy' about fuzzy logic. It is firmly based on multivalued logic theory and does not violate any well-proven laws of logic. Also, fuzzy logic systems can produce answers to any required degree of accuracy. This means that these models can be very precise if needed (there is a trade-off between precision and cost). However, they are aimed at handling imprecise and approximate concepts that cannot be processed by any other known modeling tool. In this sense, fuzzy logic models are invaluable supplements to classical hard computing techniques. For example, in a hierarchical control system, classical control at the lowest level, supplemented by fuzzy logic control at the higher levels, provides a good hybrid solution in many situations.
Our focus in this chapter is on the essential ideas and tools necessary for the construction of the fuzzy
knowledge-based models, that have been successful in the development of intelligent controllers.
Fuzzy control and modeling use only a small portion of the fuzzy mathematics that is available; this
portion is also mathematically quite simple and conceptually, easy to understand. This chapter begins
with an introduction to some essential concepts, terminology, notations and arithmetic of fuzzy sets and
fuzzy logic. We include only a minimum, though adequate, amount of fuzzy mathematics necessary
for understanding fuzzy control and modeling. To facilitate easy reading, this background material is
presented in a rather informal manner, with simple and clear notation as well as explanation. Whenever
possible, excessively rigorous mathematics is avoided. This material is intended to serve as an introductory
foundation for the reader to understand not only the fuzzy controllers presented later in this chapter but
also others in the literature. We recommend references [137, 138, 142–145] for further reading on fuzzy
set theory.
12.3
Up to this point, we have only quantified, in an abstract way, the knowledge that the human expert has about how to control the plant. Next, we will show how to use fuzzy logic to fully quantify the meaning of linguistic descriptions, so that we may automate, in the fuzzy controller, the control rules specified by the expert.
12.3.1
Knowledge is structured information and knowledge acquisition is done through learning and experience,
which are forms of high-level processing of information. Knowledge representation and processing are
the keys to any intelligent system. In logic, knowledge is represented by propositions and is processed
through reasoning, by the application of various laws of logic, including an appropriate rule of inference.
Fuzzy logic focuses on linguistic variables in natural language, and aims to provide foundations for
approximate reasoning with imprecise propositions.
In classical logic, a proposition is either TRUE, denoted by 1, or FALSE, denoted by 0. Consider the
following proposition p:
‘Team member is female’
Let X be a collection of 10 people: x1, x2, ..., x10, who form a project team. The entire object of discussion
is
X = {x1, x2, ..., x10}
In general, the entire object of discussion is called a 'universe of discourse', and each constituent member x is called an 'element' of the universe (the fact that x is an element of X is written as x ∈ X).
If x1, x2, x3 and x4 are female members in the project team, then the proposition p on the universe of
discourse X is equally well represented by the crisp (nonfuzzy) set A defined below.
A = {x1, x2, x3, x4}
The fact that A is a subset of X is denoted as A ⊂ X.
The proposition can also be expressed by a mapping μA from X into the binary space {0, 1},
μA : X → {0, 1}
such that
μA(x) = 1 ; x = x1, x2, x3, x4
μA(x) = 0 ; x = x5, x6, x7, x8, x9, x10
That is to say, the value μA(x) = 1 when the element x satisfies the attributes of set A, and 0 when it does not. μA is called the characteristic function of A.
Next, suppose that, within X, only x1 and x2 are below age 20; we may call them 'minors'. Then
B = {x1, x2}
Example 12.1
One of the most commonly used examples of a fuzzy set is the set of tall people. In this case, the universe of discourse is potential heights (the real line), say, from 3 feet to 9 feet. If the set of tall people is given the well-defined boundary of a crisp set, we might say all people taller than 6 feet are officially considered tall. The characteristic function of the set A = {tall men} is then
μA(x) = 1 for 6 ≤ x
μA(x) = 0 for 3 ≤ x < 6
Such a condition is expressed by a Venn diagram shown in Fig. 12.7a, and a characteristic function
shown in Fig. 12.8a.
Fig. 12.7 (a) The crisp set A and the universe of discourse (crisp boundary: μA(x) = 1 inside A, μA(x) = 0 outside) (b) The fuzzy set ~A and the universe of discourse (fuzzy boundary)
Fig. 12.8 (a) Characteristic function of crisp set A (b) Membership function of fuzzy set ~A
For our example of the universe X of heights of people, the crisp set A of all people with x ≥ 6 has a sharp boundary: individual 'a', corresponding to x = 6, is a member of the crisp set A, and individual 'b', corresponding to x = 5.9, is unambiguously not a member of set A. Is this not an absurd statement for the situation under consideration? A 0.1′ reduction in the height of a person has changed μA from 1 to 0, and the person is no longer tall.
It may make sense to consider the crisp set of all real numbers greater than 6 because the numbers belong
to an abstract plane, but when we want to talk about real people, it is unreasonable to call one person
short and another one tall, when they differ in height by the width of a hair. But if this kind of distinction is unworkable, then what is the right way to define the set of tall people? Much as with our example of the 'set of young females', the word 'tall' would correspond to a curve that defines the degree to which any person is tall. Figure 12.8b shows a possible membership function of this fuzzy set ~A; the curve defines the transition from not tall to tall. Two people with membership values 0.9 and 0.3 are both tall to some degree, but one significantly less than the other.
Note that there is inherent subjectivity in a fuzzy set description. Figure 12.9 shows a smoothly varying curve (S-shaped) for the transition from not tall to tall. Compared to Fig. 12.8b, the membership values are lower for heights close to 3′ and are higher for heights close to 6′. This looks more reasonable; however, the price paid is in terms of a more complex function, which is more difficult to handle.
Fig. 12.9 S-shaped membership function of the fuzzy set ~A
Figure 12.7b shows the representation of a fuzzy set by a Venn diagram. In the central (unshaded) region of the fuzzy set, μ~A(x) = 1. Outside the boundary region of the fuzzy set, μ~A(x) = 0. On the boundary region, μ~A(x) assumes an intermediate value in the interval (0, 1). Presumably, the membership value of an x in the fuzzy set ~A approaches a value of 1 as it moves closer to the central (unshaded) region; it approaches a value of 0 as it moves closer to leaving the boundary region of ~A.
Thus, so far we have discussed the representation of knowledge in logic. We have seen that the concept
of fuzzy sets makes it possible to describe vague information (knowledge). But description alone will not
lead to the development of any useful products. Indeed, a good deal of time passed after fuzzy sets were
first proposed, until they were applied at the industrial level. However, eventually it became possible to
apply them in the form of ‘fuzzy inference’, and fuzzy logic theory has now become legitimized as one
component of applied high technology.
In fuzzy logic theory, nothing is done at random or haphazardly. Information containing a certain amount
of vagueness is expressed as faithfully as possible, without the distortion produced by forcing it into a
‘crisp’ mould, and it is then processed by applying an appropriate rule of inference.
‘Approximate reasoning’ is the best known form of fuzzy logic processing and covers a variety of
inference rules.
Fuzziness is often confused with probability. The fundamental difference between them is that fuzziness
deals with deterministic plausibility, while probability concerns the likelihood of nondeterministic
(stochastic) events. Fuzziness is one aspect of uncertainty. It is the ambiguity (vagueness) found in the
definition of a concept, or the meaning of a term. However, the uncertainty of probability generally
relates to the occurrence of phenomena, not the vagueness of phenomena. For example, ‘There is a
50–50 chance that he will be there’ has the uncertainty of randomness. ‘He is a young man’, has the
uncertainty in definition of ‘young man’. Thus, fuzziness describes the ambiguity of an event, whereas
randomness describes the uncertainty in the occurrence of an event.
We can now give a formal definition to fuzzy sets.
12.3.2
A universe of discourse, X, is a collection of objects all having the same characteristics. The individual
elements in the universe X will be denoted as x.
A universe of discourse and a membership function that spans the universe completely define a fuzzy set. Consider a universe of discourse X with x representing its generic element. A fuzzy set ~A in X has the membership function μ~A(x), which maps the elements of the universe onto numerical values in the interval [0, 1]:
μ~A(x) : X → [0, 1]    (12.8a)
Every element x in X has a membership value μ~A(x) ∈ [0, 1]. ~A is then defined by the set of ordered pairs:
~A = {(x, μ~A(x)) | x ∈ X, μ~A(x) ∈ [0, 1]}    (12.8b)
A membership value of zero implies that the corresponding element is definitely not an element of the fuzzy set ~A. A membership value of unity means that the corresponding element is definitely an element of the fuzzy set ~A. A grade of membership greater than zero, and less than unity, corresponds to a noncrisp (or fuzzy) membership of the fuzzy set ~A. Classical sets can be considered a special case of fuzzy sets, with all membership grades equal to unity.
A fuzzy set ~A is formally given by its membership function μ~A(x). We will identify any fuzzy set with its membership function, and use these two terms interchangeably.
Membership functions characterize the fuzziness in a fuzzy set. However, the shape of the membership
functions, used to describe the fuzziness, has very few restrictions indeed. It might be claimed that the
rules used to describe fuzziness are also fuzzy. Just as there are an infinite number of ways to characterize
fuzziness, there are an infinite number of ways to graphically depict the membership functions that
describe fuzziness. Although the selection of membership functions is subjective, it cannot be arbitrary;
it should be plausible.
To avoid unjustified complications, μ~A(x) is usually constructed without a high degree of precision. It is advantageous to deal with membership functions involving a small number of parameters. Indeed, one of the key issues in the theory and practice of fuzzy sets is how to define the proper membership functions. Primary approaches include (1) asking the control expert to define them; (2) using data from the system to be controlled to generate them; and (3) making them in a trial-and-error manner. In more than 25 years of practice, it has been found that the third approach, though ad hoc, works effectively and efficiently in many real-world applications.
Numerous applications in control have shown that only four types of membership functions are needed
in most circumstances: trapezoidal, triangular (a special case of trapezoidal), Gaussian, and bell-shaped.
Figure 12.10 shows an example of each type. Among the four, the first two are more widely used. All
these fuzzy sets are continuous, normal and convex.
A fuzzy set is said to be continuous if its membership function is continuous.
A fuzzy set is said to be normal if its height is one (The largest membership value of a fuzzy set is called
the height of the fuzzy set).
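The four shapes can be coded directly. In the sketch below, the parameter names and conventions are illustrative; the bell function follows the common generalized-bell form, which may differ from the book's Fig. 12.10.

import numpy as np

def trapezoid(x, a, b, c, d):
    """1 on [b, c], with linear flanks falling to 0 at a and d."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

def triangle(x, a, b, c):
    """Triangular: special case of the trapezoid with a single peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

def gaussian(x, c, sigma):
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def bell(x, a, b, c):
    """Generalized bell with width a, slope b, center c."""
    return 1 / (1 + np.abs((x - c) / a) ** (2 * b))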
The convexity property of fuzzy sets is viewed as a generalization of the classical concept for crisp sets. Consider the universe X to be the set of real numbers ℝ. A subset A of ℝ is said to be convex if, and only if, for all x1, x2 ∈ A and for every real number λ satisfying 0 ≤ λ ≤ 1, we have
λx1 + (1 – λ)x2 ∈ A    (12.9)
Fig. 12.10 Four commonly used membership functions: (a) trapezoidal, (b) triangular, (c) Gaussian, (d) bell-shaped
It can easily be established that any set defined by a single interval of real numbers is convex; any set defined by more than one interval, that does not contain some points between the intervals, is not convex.
An α-cut of a fuzzy set ~A is a crisp set Aα that contains all the elements of the universal set X that have a membership grade in ~A greater than or equal to α (refer to Fig. 12.11). In order to make the generalized convexity consistent with the classical definition of convexity, it is required that the α-cuts of a convex fuzzy set be convex for all α ∈ (0, 1] in the classical sense (the 0-cut is excluded here, since it is always equal to ℝ and thus extends from –∞ to +∞). Figure 12.11a shows a fuzzy set that is convex. Two of the α-cuts shown in this figure are clearly convex in the classical sense, and it is easy to see that any other α-cuts for α > 0 are convex as well. Figure 12.11b illustrates a fuzzy set that is not convex. The lack of convexity of this fuzzy set can be demonstrated by identifying some of its α-cuts (α > 0) that are not convex.
Fig. 12.11 (a) A convex fuzzy set, with two of its α-cuts (b) A nonconvex fuzzy set
The support of a fuzzy set ~A is the crisp set of all x ∈ X such that μ~A(x) > 0. That is,
supp(~A) = {x ∈ X | μ~A(x) > 0}    (12.10)
The element x ∈ X at which μ~A(x) = 0.5 is called the crosspoint.
A fuzzy set ~A whose support is a single point in X, with μ~A(x) = 1, is referred to as a fuzzy singleton.
Example 12.2
Consider the fuzzy set described by the membership function depicted in Fig. 12.12, where the universe of discourse is
X = [32ºF, 104ºF]
This fuzzy set ~A is the linguistic 'warm', with membership function
μ~A(x) = 0 ; x < 64º
μ~A(x) = (x – 64º)/6 ; 64º ≤ x < 70º
μ~A(x) = 1 ; 70º ≤ x ≤ 74º
μ~A(x) = (78º – x)/4 ; 74º < x ≤ 78º
μ~A(x) = 0 ; x > 78º
Fig. 12.12 Membership function of the fuzzy set 'warm' on X = [32ºF, 104ºF]
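Using the trapezoid helper sketched earlier, this 'warm' set can be evaluated directly:

import numpy as np
x = np.linspace(32, 104, 200)
mu_warm = trapezoid(x, 64, 70, 74, 78)   # the 'warm' set of Example 12.2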
Example 12.3
Consider a natural language form expression:
‘Speed sensor output is very large’
The formal, symbolic translation of this natural language expression, in terms of linguistic variables,
proceeds as follows:
(i) An abbreviation ‘Speed’ may be chosen to denote the physical variable ‘Speed sensor output’.
(ii) An abbreviation ‘XFast’ (i.e., extra fast) may be chosen to denote the particular value ‘very large’ of
speed.
(iii) The above natural language expression is rewritten as ‘Speed is XFast’.
Such an expression is an atomic fuzzy proposition. The 'meaning' of the atomic proposition is then defined by a fuzzy set ~XFast, or a membership function μ~XFast(x), defined on the physical domain X = [0 mph, 100 mph] of the physical variable 'Speed'.
Many atomic propositions may be associated with a linguistic variable, e.g.,
‘Speed is Fast’
‘Speed is Moderate’
‘Speed is Slow’
‘Speed is XSlow’
Thus, the set of linguistic values that the linguistic variable ‘Speed’ may take is
{XFast, Fast, Moderate, Slow, XSlow}
These linguistic values are called terms of the linguistic variable. Each term is defined by an appropriate
membership function.
It is usual in approximate reasoning to have the following frame associated with the notion of a linguistic variable:
⟨~A, L~A, X, μL~A⟩
where ~A denotes the symbolic name of a linguistic variable, e.g., speed, temperature, level, error, change-of-error, etc.; L~A is the set of linguistic values that ~A can take, i.e., L~A is the term set of ~A; X is the actual physical domain over which the linguistic variable ~A takes its quantitative (crisp) values; and μL~A is a membership function which gives a meaning to the linguistic value in terms of the quantitative elements of X.
Example 12.4
Consider speed, interpreted as a linguistic variable with X = [0 mph, 100 mph]; i.e., x = 'speed'. Its term set could be
{Slow, Moderate, Fast}
where
~Slow = the fuzzy set for 'a speed below about 40 miles per hour (mph)', with membership function μ~Slow
~Moderate = the fuzzy set for 'a speed close to 55 mph', with membership function μ~Moderate
~Fast = the fuzzy set for 'a speed above about 70 mph', with membership function μ~Fast
The frame for this linguistic variable is
⟨~Speed, L~Speed, X, μL~Speed⟩
where
L~Speed = {~Slow, ~Moderate, ~Fast}
X = [0, 100] mph
For Speed = 50 mph (Fig. 12.13),
μ~Slow(50) = 1/3
μ~Moderate(50) = 2/3
μ~Fast(50) = 0
Therefore, the proposition 'Speed is Slow' is satisfied to a degree of 1/3, the proposition 'Speed is Moderate' is satisfied to a degree of 2/3, and the proposition 'Speed is Fast' is not satisfied.
Fig. 12.13 Membership functions of the terms of the linguistic variable 'Speed'
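One plausible choice of membership functions consistent with these numbers (the exact shapes in Fig. 12.13 are not reproduced in this excerpt, so these are assumptions) is:

import numpy as np

def mu_slow(x):      # 1 up to 40 mph, falling to 0 at 55 mph
    return np.clip((55 - x) / 15, 0, 1)

def mu_moderate(x):  # triangular, peak at 55 mph, support [40, 70]
    return np.clip(1 - abs(x - 55) / 15, 0, 1)

def mu_fast(x):      # 0 up to 55 mph, rising to 1 at 70 mph
    return np.clip((x - 55) / 15, 0, 1)

print(mu_slow(50), mu_moderate(50), mu_fast(50))   # 1/3, 2/3, 0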
An extension of ordinary fuzzy sets is to allow the membership values to be a fuzzy set—instead of a
crisply defined degree. A fuzzy set whose membership function is itself a fuzzy set, is called a Type-2
fuzzy set [143]. A Type-1 fuzzy set is an ordinary fuzzy set. We will limit our discussion to Type-1 fuzzy
sets. The reference to a fuzzy set in this chapter, implies a Type-1 fuzzy set.
12.3.3
There are a variety of fuzzy set theories which differ from one another by the set operations (complement,
intersection, union) they employ. The fuzzy complement, intersection and union are not unique
operations, contrary to their crisp counterparts; different functions may be appropriate to represent these
operations in different contexts. That is, not only membership functions of fuzzy sets, but also operations
on fuzzy sets, are context-dependent. The capability to determine appropriate membership functions,
and meaningful fuzzy operations in the context of each particular application, is crucial for making fuzzy
set theory practically useful.
The intersection and union operations on fuzzy sets are often referred to as triangular norms (t-norms),
and triangular conorms (t-conorms; also called s-norms), respectively. The reader is advised to refer to
[143] for the axioms which t-norms, t-conorms, and the complements of fuzzy sets are required to satisfy.
In the following, we define standard fuzzy operations, which are generalizations of the corresponding
crisp set operations.
Consider the fuzzy sets ~A and ~B in the universe X:
~A = {(x, μ~A(x)) | x ∈ X; μ~A(x) ∈ [0, 1]}    (12.12)
~B = {(x, μ~B(x)) | x ∈ X; μ~B(x) ∈ [0, 1]}    (12.13)
The operations with ~A and ~B are introduced via operations on their membership functions μ~A(x) and μ~B(x), correspondingly.
The standard complement, ~Ā, of fuzzy set ~A with respect to the universal set X, is defined for all x ∈ X by the equation
μ~Ā(x) ≜ 1 – μ~A(x)  ∀x ∈ X    (12.14)
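On sampled universes these operations reduce to elementwise arithmetic. The sketch below assumes the standard intersection and union operators (min and max; the corresponding equations (12.15)-(12.16) are not shown in this excerpt) and reuses the gaussian helper from the earlier sketch; the shapes are illustrative.

import numpy as np

x = np.linspace(0, 10, 101)
mu_A = gaussian(x, 4.0, 1.0)           # illustrative membership functions
mu_B = gaussian(x, 6.0, 1.5)

mu_not_A = 1.0 - mu_A                  # standard complement (12.14)
mu_A_and_B = np.minimum(mu_A, mu_B)    # standard intersection (min t-norm)
mu_A_or_B = np.maximum(mu_A, mu_B)     # standard union (max s-norm)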
12.3.4
Consider two universes (crisp sets) X and Y. The Cartesian product (or cross product) of two sets X and Y (in this order) is the set of all ordered pairs such that the first element in each pair is a member of X, and the second element is a member of Y. Formally,
X × Y = {(x, y); x ∈ X, y ∈ Y}    (12.17)
where X × Y denotes the Cartesian product.
A fuzzy relation on X × Y, denoted by ~R, or ~R(X, Y), is defined as the set
~R = {((x, y), μ~R(x, y)) | (x, y) ∈ X × Y, μ~R(x, y) ∈ [0, 1]}    (12.18)
where μ~R(x, y) is a function of two variables, called the membership function of the fuzzy relation. It gives the degree of membership of the ordered pair (x, y) in ~R, associating with each pair (x, y) in X × Y a real number in the interval [0, 1]. The degree of membership indicates the degree to which x is in relation with y. It is clear that a fuzzy relation is basically a fuzzy set.
Example 12.5
Consider an example of fuzzy sets: the set of people with normal weight. In this case, the universe of
discourse appears to be all potential weights (the real line). However, the knowledge representation in
terms of this universe, is not useful. The normal weight of a person is a function of his/her height.
Body Mass Index (BMI) = Weight (kg) / [Height (m)]²
Normal BMI for males is 20–25, and for females, 19–24. Values between 25 and 27 in men, and 24 and 27 in women, indicate overweight; those over 27 indicate obesity. Of course, values below 20 for men and below 19 for women indicate underweight.
The universe of discourse for this fuzzy set is more appropriately the Cartesian product of two universal
sets: X, the set of all potential heights, and Y, the set of all potential weights. The Cartesian product space
X ¥ Y is a universal set which is a set of ordered pairs (x, y), for each x Œ X and each y Œ Y.
A subset of the Cartesian product X × Y satisfying the knowledge attribute 'normal weight' is a set of (height, weight) pairs. This is called a relation ~R. The membership value for each element of ~R depends on BMI. For men, a BMI of 27 or more could be given a membership value of 0, and a BMI of less than 18 could also be given a membership value of 0, with membership values between 0 and 1 for BMI between 18 and 27.
Example 12.6
Because fuzzy relations, in general, are fuzzy sets, we can define the Cartesian product to be a relation between two or more fuzzy sets. Let ~A be a fuzzy set on universe X, and ~B be a fuzzy set on universe Y; then the Cartesian product between fuzzy sets ~A and ~B will result in a fuzzy relation ~R, which is contained within the full Cartesian product space, or
~A × ~B = ~R ⊂ X × Y    (12.19a)
where the fuzzy relation ~R has membership function
μ~R(x, y) = μ~A×~B(x, y) = min[μ~A(x), μ~B(y)]  ∀x ∈ X, ∀y ∈ Y    (12.19b)
Note that the min combination applies here because each element (x, y) in the Cartesian product is formed by taking both elements x, y together, not just the one or the other.
As an example of the Cartesian product of fuzzy sets, we consider premise quantification. Atomic fuzzy
propositions do not, usually, make a knowledge base in real-life situations. Many propositions connected
by logical connectives may be needed. A set of such compound propositions, connected by IF-THEN
rules, makes a knowledge base.
Consider two propositions defined by
p ≜ x is ~A
q ≜ y is ~B
where ~A and ~B are the fuzzy sets
~A = {(x, μ~A(x)) | x ∈ X}
~B = {(y, μ~B(y)) | y ∈ Y}
Example 12.7
We consider here quantification of the 'implication' operator via fuzzy logic. Consider the implication statement
IF pressure is big THEN volume is small
The membership function of the fuzzy set ~A = 'big pressure',
μ~A(x) = 1 ; x ≥ 5
μ~A(x) = 1 – (5 – x)/4 ; 1 ≤ x ≤ 5
μ~A(x) = 0 ; otherwise
is shown in Fig. 12.15a. The membership function of the fuzzy set ~B = 'small volume',
μ~B(y) = 1 ; y ≤ 1
μ~B(y) = 1 – (y – 1)/4 ; 1 ≤ y ≤ 5
μ~B(y) = 0 ; otherwise
is shown in Fig. 12.15b.
Fig. 12.15 Membership functions of (a) 'big pressure' and (b) 'small volume'
If p is a proposition of the form 'x is ~A', where ~A is a fuzzy set on the universe X, e.g., 'big pressure', and q is a proposition of the form 'y is ~B', where ~B is a fuzzy set on the universe Y, e.g., 'small volume', then one encounters the following problem:
How does one define the membership function of the fuzzy implication ~A → ~B?
There are different important classes of fuzzy implication operators based on t-norms and t-conorms. In many practical applications, one uses Mamdani's implication operator to model the causal relationship between fuzzy variables:
μ~A→~B(x, y) = min[μ~A(x), μ~B(y)]  ∀x ∈ X, ∀y ∈ Y    (12.21)
The fuzzy implication ~A → ~B is a fuzzy relation in the Cartesian product space X × Y.
Note that Mamdani's implication operator gives a relation which is symmetric with respect to ~A and ~B. This is not intuitively satisfying, because 'implication' is not a commutative operation. In practice, however, the method provides good, robust results. The justification for the use of the min operator to represent the implication is that we can be no more certain about our consequent than about our premise.
Fuzzy set theory provides a mathematical framework to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning. Fuzzy logic provides an inference morphology that enables approximate human reasoning capabilities to be applied to knowledge-based systems.
Fuzzy conditional, or fuzzy IF-THEN, production rules are symbolically expressed as 'IF (a set of conditions is satisfied) THEN (a set of consequences can be inferred)'.
12.4.1 Mamdani Fuzzy Rules
In Mamdani fuzzy rules, both the premises and the consequents are fuzzy propositions (atomic/compound). Consider first the case of a rule with atomic propositions. For example:

'IF x is Ã THEN y is B̃'   (12.22a)

If we let X be the premise universe of discourse, and Y the consequent universe of discourse, then the relation between the premise Ã and the consequent B̃ can be described using fuzzy sets on the Cartesian product space X × Y. Using Mamdani's implication rule,

$$\tilde{R} = \tilde{A} \to \tilde{B}$$
$$\mu_{\tilde{R}}(x, y) = \mu_{\tilde{A} \to \tilde{B}}(x, y) = \min[\mu_{\tilde{A}}(x), \mu_{\tilde{B}}(y)] \quad \forall x \in X,\ \forall y \in Y \tag{12.22b}$$
When the rule premise or rule consequent is a compound fuzzy proposition, the membership function corresponding to each such compound proposition is first determined, and the above operation is then applied to represent the IF-THEN relation. Quite often, in control applications, we come across the logical connective and (conjunction operation on atomic propositions), which, as we have seen in Example 12.6, may be implemented by the Cartesian product.
The rules of inference in fuzzy logic govern the deduction of the final conclusion from IF-THEN rules for known inputs (Fig. 12.16). Consider the statements:

rule : IF x is Ã THEN y is B̃
input : x is Ã′   (12.23)
inference : y is B̃′

Here the propositions 'x is Ã', 'x is Ã′', 'y is B̃' and 'y is B̃′' are characterized by the fuzzy sets Ã, Ã′, B̃, and B̃′, respectively.
Ã = {(x, μ_Ã(x)) | x ∈ X; μ_Ã ∈ [0, 1]}
Ã′ = {(x, μ_Ã′(x)) | x ∈ X; μ_Ã′ ∈ [0, 1]}   (12.24)
B̃ = {(y, μ_B̃(y)) | y ∈ Y; μ_B̃ ∈ [0, 1]}
The inference mechanism is based on matching the two fuzzy sets Ã′ and R̃, and determining the membership function of B̃′ according to the result. Note that X denotes the space in which the input Ã′ is defined, and it is a subspace of the space X × Y in which the rule-base relation R̃ is defined. It is, therefore, not possible to take the intersection of Ã′ and R̃, an operation required for matching the two sets so as to incorporate the knowledge of the membership functions of both the input and the rule base. But when Ã′ is extended to X × Y, this becomes possible.
The cylindrical extension of Ã′ (a fuzzy set defined on X) onto X × Y is the set of all tuples (x, y) ∈ X × Y with membership degree equal to μ_Ã′(x), i.e.,

$$\mu_{ce(\tilde{A}')}(x, y) = \mu_{\tilde{A}'}(x) \quad \text{for every } y \in Y \tag{12.25}$$
Now, the intersection operation, which incorporates the knowledge of the membership functions of the input and the rule base, is possible. It is given by

ce(Ã′) ∩ R̃

In terms of membership functions, this operation may be expressed as

$$\mu_{ce(\tilde{A}')}(x, y) \wedge \mu_{\tilde{R}}(x, y) = \min[\mu_{ce(\tilde{A}')}(x, y),\ \mu_{\tilde{R}}(x, y)] \quad \forall x \in X,\ \forall y \in Y$$

with

$$\mu_{\tilde{R}}(x, y) = \mu_{\tilde{A} \to \tilde{B}}(x, y) = \min[\mu_{\tilde{A}}(x), \mu_{\tilde{B}}(y)];\qquad \mu_{ce(\tilde{A}')}(x, y) = \mu_{\tilde{A}'}(x)$$

Therefore,

$$\mu_{\tilde{S}}(x, y) = \mu_{ce(\tilde{A}')}(x, y) \wedge \mu_{\tilde{R}}(x, y) = \min\big(\mu_{\tilde{A}'}(x),\ \min(\mu_{\tilde{A}}(x), \mu_{\tilde{B}}(y))\big) \tag{12.26}$$
By projecting this matched fuzzy set S̃ (defined on X × Y) onto the inference subspace Y, we can determine the membership function μ_B̃′(y) of the fuzzy set B̃′ (defined on Y).

The projection of μ_S̃(x, y) (a fuzzy set defined on X × Y) onto Y is the set of all y ∈ Y with membership grades equal to the maximum of μ_S̃(x, y) with respect to x, while y is held fixed, i.e.,

$$\mu_{proj(\tilde{S})}(y) = \max_{x}\{\mu_{\tilde{S}}(x, y)\} \tag{12.27}$$

Projection onto Y means that y_i is assigned the highest membership degree from the tuples (x1, y_i), (x2, y_i), (x3, y_i), ..., where x1, x2, x3, ... ∈ X and y_i ∈ Y. The rationale for using the max operation on the membership function of S̃ should be clear in view of the fact that we have a many-to-one mapping.
The combination of fuzzy sets with the aid of cylindrical extension and projection is called composition. It is denoted by ∘.

If Ã′ is a fuzzy set defined on X, and R̃ is a fuzzy relation defined on X × Y, then the composition of Ã′ and R̃, resulting in a fuzzy set B̃′ defined on Y, is given by

$$\tilde{B}' = \tilde{A}' \circ \tilde{R} = proj\,(ce(\tilde{A}') \cap \tilde{R})\ \text{on}\ Y \tag{12.28}$$

Note that, in general, the intersection is given by a t-norm and the projection by a t-conorm, resulting in many definitions of the composition operator. In our applications, we will mostly use the min operator for the t-norm and the max operator for the t-conorm. Therefore, we have the following compositional rule of inference:

$$\mu_{\tilde{B}'}(y) = \max_{x}\big\{\min\big(\mu_{\tilde{A}'}(x),\ \min(\mu_{\tilde{A}}(x), \mu_{\tilde{B}}(y))\big)\big\} \tag{12.29}$$

This inference rule, based on max-min composition, uses Mamdani's rule for the implication operator.
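On discretized universes, the compositional rule of inference reduces to an elementwise min followed by a max over the input axis. A minimal sketch follows; the membership grades are illustrative, not from the text.

```python
import numpy as np

def max_min_composition(mu_a_prime, relation):
    """B' = A' o R by max-min composition, Eqn (12.29)."""
    # Cylindrical extension of A' and min-intersection with R ...
    matched = np.minimum(mu_a_prime[:, None], relation)
    # ... then projection onto Y by taking the max over x
    return matched.max(axis=0)

# Mamdani relation for 'IF x is A THEN y is B' on discretized universes
mu_a = np.array([0.0, 0.5, 1.0, 0.5])
mu_b = np.array([1.0, 0.5, 0.0])
R = np.minimum.outer(mu_a, mu_b)

mu_a_prime = np.array([0.0, 1.0, 0.0, 0.0])   # singleton input at the second grid point
print(max_min_composition(mu_a_prime, R))     # mu_b clipped at mu_a(x0) = 0.5
```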
In control applications, as we shall see later, the fuzzy set Ã′ is a fuzzy singleton, i.e.,

$$\mu_{\tilde{A}'}(x) = \begin{cases} 1 & \text{for } x = x_0 \in X \\ 0 & \text{for all other } x \in X \end{cases} \tag{12.30}$$

With this input, the compositional rule of inference (12.29) reduces to

$$\mu_{\tilde{B}'}(y) = \min[\mu_{\tilde{A}}(x_0), \mu_{\tilde{B}}(y)] \tag{12.31}$$

This results in a simple inference procedure, as is seen below.
Fig. 12.17 Inference procedure for a singleton fuzzy system: μ_B̃′(y) is μ_B̃(y) clipped at the level μ_Ã(x0)
When the rule (12.20a) has a compound proposition in the premise part, connected by logical connectives, then μ_Ã in (12.31) is replaced by μ_premise. For rules with AND'ed premises, one might use the min or product t-norm for calculating μ_premise (we have used min in Eqn (12.20b)); for rules with OR'ed premises, we may use the max t-conorm for the calculation of μ_premise. Of course, other t-norms and t-conorms are also permissible.
The singleton fuzzy system is the most widely used because of its simplicity and lower computational requirements. However, this kind of fuzzy system may not always be adequate, especially when noise is present in the data. A nonsingleton fuzzy system then becomes necessary to account for the uncertainty in the data.
12.4.2 Sugeno Fuzzy Rules
Unlike Mamdani fuzzy rules, Sugeno rules have functions of the input variables as the rule consequent. A typical rule, with two input variables and one output variable, is of the form:

IF x1 is Ã and x2 is B̃ THEN y = f(x1, x2)   (12.32a)

where f(·) is a real function.
In theory, f(·) can be any real function, linear or nonlinear. It seems appealing to use nonlinear functions; the rules are then more general and can potentially be more powerful. Unfortunately, the idea is impractical: properly choosing or determining the mathematical form of a nonlinear function for every fuzzy rule in the rule base is extremely difficult, if not impossible. For this reason, linear functions
have been employed exclusively in theoretical research, and practical development, of Sugeno fuzzy models. For a system with two input variables and one output variable, the ith rule in the rule base is of the form:

IF x1 is Ã^(i) and x2 is B̃^(i) THEN y^(i) = a_{i,0} + a_{i,1} x1 + a_{i,2} x2   (12.32b)
where the ai, j are real numbers.
We can view the Sugeno fuzzy system as a nonlinear interpolator between the linear mappings defined by the functions in the consequents of the rules. It is important to note that a Sugeno fuzzy system may have any linear mapping as its output function, which contributes to its generality. One mapping that has proven to be particularly useful is to have a linear dynamic system as the output function, so that the ith rule (12.32b) takes the form:

IF x1 is Ã^(i) and x2 is B̃^(i) THEN ẋ(t) = A_i x(t) + b_i u(t);  i = 1, 2, ..., R   (12.32c)

where Ã^(i) and B̃^(i) are the fuzzy sets of the ith rule, and A_i and b_i are the state and input matrices (of
appropriate dimensions) of the local description of the linear dynamic system. Such a fuzzy system can be thought of as a nonlinear interpolator between R linear systems. The premise membership functions for each rule quantify whether the linear system in the consequent is valid for a specific region of the state space. As the state evolves, different rules turn on, indicating that other combinations of linear models should be used. Overall, we find that the Sugeno fuzzy system provides a very intuitive representation of a nonlinear system as a nonlinear interpolation between R linear models [145].
We will limit our discussion to the more widely used controllers: the Mamdani-type singleton fuzzy logic systems. The Sugeno architecture will be employed for data-based fuzzy modeling.
12.5 Fuzzy Logic Controller
Figure 12.18 shows the basic configuration of a fuzzy logic controller (FLC), which comprises four principal components: a rule base, a decision-making logic, an input fuzzification interface, and an output defuzzification interface. The rule base holds a set of IF-THEN rules that quantify the knowledge human experts have amassed about solving particular problems. It acts as a resource to the decision-making logic, which makes successive decisions about which rules are most relevant to the current situation, and applies the actions indicated by these rules. The input fuzzifier takes the crisp numeric inputs and, as its name implies, converts them into the fuzzy form needed by the decision-making logic. At the output, the defuzzification interface combines the conclusions reached by the decision-making logic and converts them into crisp numeric values as control actions.
We will illustrate the FLC methodology, step by step, on a water-heating system.
Consider a simple water-heating system shown in Fig. 12.19. The water heater has a knob (HeatKnob)
to control the steam for circulation through the radiator. The higher the setting of the HeatKnob, the
hotter the water gets, with the value of ‘0’ indicating the steam supply is turned off, and the value of ‘10’
indicating the maximum possible steam supply. There is a sensor (TempSense) in the outlet pipe to tell us
the temperature of the outflowing water, which varies from 0ºC to 125ºC. Another sensor (LevelSense)
tells us the level of the water in the tank, which varies from 0 (= empty) to 10 (= full). We assume that
Fig. 12.19 Water-heating system: steam boiler with HeatKnob (0–10), radiator, LevelSense (0–10), TempSense (0°C–125°C), water inlet and outlet, and steam exhaust
there is an automatic flow control that determines how much cold water flows into the tank from the main
water supply; whenever the level of the tank gets below 4, the flow control turns ON, and turns OFF
when the level of the water gets above 9.5.
Figure 12.20 shows an FLC diagram for the water-heating system.
Fig. 12.20 FLC for the water-heating system, with input scaling gains Gi1 (TempSense) and Gi2 (LevelSense), and output scaling gain Go (HeatKnob)
Here, LTempSense is the set of linguistic values that TempSense can take. We may use the following fuzzy subsets to describe the linguistic values:

XSmall (XS); Small (S); Medium (M); Large (L); XLarge (XL)

i.e.,

LTempSense = {XSmall, Small, Medium, Large, XLarge}

The frame of LevelSense is defined similarly, by a table that maps crisp input ranges to the corresponding fuzzy variables.
(ii) Use an odd number of fuzzy sets for each variable so that some set is assured to be in the middle.
The use of 5 to 7 sets is fairly typical.
(iii) Overlap adjacent sets (by 15% to 25%, typically).
Now that we have the inputs and the outputs in terms of fuzzy
variables, we need only specify what actions to take, under what conditions; i.e., we need to construct a
set of rules that describe the operation of the FLC. These rules usually take the form of IF-THEN rules,
and can be obtained from a human expert (heuristics).
The rule-base matrix for our example is given in Table 12.4. Our heuristic guidelines, in determining this
matrix, are the following:
(i) When the temperature is low, the HeatKnob should be set higher than when the temperature is
high.
(ii) When the volume of water is low, the HeatKnob does not need to be as high as when the volume
of water is high.
Table 12.4 Decision table

LevelSense \ TempSense | XS          | S           | M           | L           | XL
XS                     | AGoodAmount | ALittle     | VeryLittle  |             |
S                      | ALot        | AGoodAmount | VeryLittle  | VeryLittle  |
M                      | AWholeLot   | ALot        | AGoodAmount | VeryLittle  |
L                      | AWholeLot   | ALot        | ALot        | ALittle     |
XL                     | AWholeLot   | ALot        | ALot        | AGoodAmount |
In FLCs we do not need to specify all the cells in the matrix. No entry signifies that no action is taken. For
example, in the column for TempSense = XLarge, no action is required since the temperature is already
at or above the target temperature.
Fig. 12.21 Membership functions XS, S, M, L, XL for TempSense
Fig. 12.22 Membership functions XS, S, M, L, XL for LevelSense on the universe 0–10
Fig. 12.23 Membership functions for the output variable HeatKnob on the universe 0–10
Let us examine a couple of typical entries in the table: For LevelSense = Small, and TempSense = XSmall,
the output is HeatKnob = ALot. Now for the same temperature, as the water level rises, the setting
on HeatKnob should also rise—to compensate for the added volume of water. We can see that for
LevelSense = Large and TempSense = XSmall, the output HeatKnob = AWholeLot.
We can translate the table entries into IF-THEN rules. We give here a couple of rules:

IF TempSense is XSmall and LevelSense is Small THEN set HeatKnob to ALot.
IF TempSense is XSmall and LevelSense is Large THEN set HeatKnob to AWholeLot.
Fig. 12.24 Membership functions for TempSense; adjacent sets cross over at membership value 0.5
Fig. 12.25 Membership functions for LevelSense; adjacent sets cross over at membership value 0.5
Fig. 12.26 Membership functions VeryLittle, ALittle, AGoodAmount, ALot, AWholeLot for HeatKnob
It is important to realize that the scaling gains are not the only parameters that can be tuned to improve the performance of the fuzzy control system. Membership function shapes and positions, and the number and type of rules, are other common tuning parameters. We set Gi1 = Gi2 = Go = 1 for our design problem.
For the crisp inputs TempSense = 65°C and LevelSense = 6.5, four rules fire. μ_premise for the four rules (refer to Table 12.5), which amounts to the firing strength in each case, can be calculated as follows:

(i) μ_TempSense×LevelSense = min(0.45, 0.25) = 0.25
(ii) μ_TempSense×LevelSense = min(0.28, 0.25) = 0.25
(iii) μ_TempSense×LevelSense = min(0.45, 0.38) = 0.38
(iv) μ_TempSense×LevelSense = min(0.28, 0.38) = 0.28
From the induced decision table (Table 12.5), we observe that only four cells contain nonzero terms. Let us call these cells active. The active cells correspond to the following rules:

(i) TempSense is Medium and LevelSense is Medium : p1
    Set HeatKnob to AGoodAmount : q1
    IF p1 THEN q1
    μ_premise(1) = 0.25
    μ_inference(1) is obtained by 'chopping off' the top of the μ_AGoodAmount function of the output variable HeatKnob, as shown in Fig. 12.27a.
(ii) TempSense is Large and LevelSense is Medium : p2
    Set HeatKnob to VeryLittle : q2
    IF p2 THEN q2
    μ_premise(2) = 0.25
    μ_inference(2) is shown in Fig. 12.27b.
(iii) TempSense is Medium and LevelSense is Large : p3
    Set HeatKnob to ALot : q3
    IF p3 THEN q3
    μ_premise(3) = 0.38
    μ_inference(3) is shown in Fig. 12.27c.
(iv) TempSense is Large and LevelSense is Large : p4
    Set HeatKnob to ALittle : q4
    IF p4 THEN q4
    μ_premise(4) = 0.28
    μ_inference(4) is shown in Fig. 12.27d.
The reader should note that for different crisp measurements of TempSense and LevelSense, there will be different values of μ_premise and, hence, different μ_inference functions will be obtained.
In the previous step, we noticed that the input to the inference process is the set of rules that fire; its output is the set of fuzzy sets that represent the inferences reached by all the rules that fire. We now combine the recommendations of all the rules to determine the control action. This is done by aggregating (union operation) the inferred fuzzy sets. The aggregated fuzzy set, obtained by drawing all the inferred fuzzy sets on one axis, is shown in Fig. 12.28. This fuzzy set represents the desired control action.
Fig. 12.27 Inference for each rule: (a) AGoodAmount clipped at 0.25, (b) VeryLittle clipped at 0.25, (c) ALot clipped at 0.38, (d) ALittle clipped at 0.28
Fig. 12.28 Aggregated fuzzy set μ_agg(z) on the output universe 0–10; the defuzzified value is z* = 4.66
Defuzzification is a mapping from a space of fuzzy control actions, defined by fuzzy sets on an output universe of discourse, into nonfuzzy (crisp) control actions. This process is necessary because a crisp control action is required to actuate the control. There are many approaches to defuzzification. We will consider here the 'Center of Area' (COA) method, which is known to yield superior results.
We may discretize the universe Z into q equal (or almost equal) subintervals by the points z_1, z_2, ..., z_{q-1}. The crisp value z*, according to this method, is

$$z^* = \frac{\sum_{k=1}^{q-1} z_k\, \mu_{agg}(z_k)}{\sum_{k=1}^{q-1} \mu_{agg}(z_k)} \tag{12.33}$$

For a continuous output universe, the corresponding expression is

$$z^* = \frac{\int_{z} \mu_{agg}(z)\, z\, dz}{\int_{z} \mu_{agg}(z)\, dz} \tag{12.34}$$
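A minimal sketch of COA defuzzification on a discretized universe follows; the aggregated membership values are illustrative, not the ones of Fig. 12.28.

```python
import numpy as np

def coa_defuzzify(z, mu_agg):
    """Center of Area defuzzification, Eqn (12.33)."""
    z = np.asarray(z, dtype=float)
    mu_agg = np.asarray(mu_agg, dtype=float)
    return (z * mu_agg).sum() / mu_agg.sum()

# Aggregated control action on a discretized output universe (illustrative values)
z = np.linspace(0.0, 10.0, 11)
mu_agg = np.array([0.0, 0.2, 0.25, 0.25, 0.38, 0.38, 0.3, 0.28, 0.28, 0.1, 0.0])
print(coa_defuzzify(z, mu_agg))   # crisp control action z*
```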
12.6 Data-Based Fuzzy Modeling
The Mamdani architecture is widely used for capturing expert knowledge; it allows us to describe the expertise in a more intuitive, more human-like manner. The Sugeno architecture, on the other hand, is by far the most popular candidate for data-based fuzzy modeling.
Basically, a fuzzy model is a ‘Fuzzy Inference System (FIS)’ composed of four principal components:
a fuzzification interface, a knowledge base, a decision-making logic, and a defuzzification interface.
Figure 12.29 shows the basic configuration of an FIS for data-based modeling.
Fig. 12.29
We consider here a single-output FIS in the n-dimensional input space. Let us assume that the following P input-output pairs are given as training data for constructing the FIS model:

$$\{(\mathbf{x}^{(p)},\ y^{(p)})\ ;\ p = 1, 2, \ldots, P\} \tag{12.35}$$

where $\mathbf{x}^{(p)} = [x_1^{(p)}\ x_2^{(p)}\ \cdots\ x_n^{(p)}]^T$ is the input vector of the pth input-output pair, and $y^{(p)}$ is the corresponding output.
The fuzzification interface performs a mapping that converts crisp values of input variables into fuzzy
singletons. A singleton is a fuzzy set with a membership function that is unity at a single particular
point on the universe of discourse (the numerical-data value), and zero everywhere else. Basically, a
fuzzy singleton is a precise value and hence no fuzziness is introduced by fuzzification in this case. This
strategy, however, has been widely used in fuzzy modeling applications because it is easily implemented.
There are two factors that determine a database: (i) a fuzzy partition of the input space, and (ii) the membership functions of the antecedent fuzzy sets. Assume that the domain interval of the ith input variable xi is equally divided into Ki fuzzy sets labeled $\tilde{A}_{i1}, \tilde{A}_{i2}, \ldots, \tilde{A}_{iK_i}$, for i = 1, 2, ..., n. Then the n-dimensional input space is divided into K1 × K2 × ··· × Kn fuzzy partition spaces:

$$(\tilde{A}_{1j_1}, \tilde{A}_{2j_2}, \ldots, \tilde{A}_{nj_n});\quad j_1 = 1, 2, \ldots, K_1;\ \ldots;\ j_n = 1, 2, \ldots, K_n \tag{12.36}$$
Though any type of membership function (e.g., triangle-shaped, trapezoid-shaped, bell-shaped, etc.) can be used for the fuzzy sets, we employ the symmetric triangle-shaped fuzzy sets $\tilde{A}_{ij_i}$ with the following membership functions:

$$\mu_{\tilde{A}_{ij_i}}(x_i) \triangleq \mu_{ij_i}(x_i) = 1 - \frac{2\,|x_i - c_{(i,j_i)}|}{w_{(i,j_i)}};\quad j_i = 1, 2, \ldots, K_i \tag{12.37}$$

where $c_{(i,j_i)}$ is the center of the membership function, at which the membership grade is equal to 1, and $w_{(i,j_i)}$ denotes the width of the membership function (Fig. 12.30).

Fig. 12.30 Parameters of a membership function

By means of the input-output data, the range $[x_i^{\min},\ x_i^{\max}]$ of the ith input variable is determined, where

$$x_i^{\min} = \min_{p \in \{1,\ldots,P\}} x_i^{(p)},\qquad x_i^{\max} = \max_{p \in \{1,\ldots,P\}} x_i^{(p)} \tag{12.38a}$$

The center position of each membership function with respect to the ith variable is determined by

$$c_{(i,j_i)} = x_i^{\min} + (j_i - 1)\big[(x_i^{\max} - x_i^{\min})/(K_i - 1)\big];\quad c_{(i,1)} = x_i^{\min};\ c_{(i,K_i)} = x_i^{\max} \tag{12.38b}$$

To achieve sufficient overlap from one linguistic label to another, we take

$$w_{(i,j_i)} = 2\,(c_{(i,j_i+1)} - c_{(i,j_i)}) \tag{12.38c}$$
Figure 12.31 shows an example where the domain interval of x1 is divided into K1 = 5 fuzzy sets.
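A minimal sketch of this partitioning follows: it builds the centers and widths from data according to (12.38a)-(12.38c) and evaluates the triangular membership functions (12.37). Clipping negative values to zero outside the support is an implementation choice.

```python
import numpy as np

def partition(x_data, K):
    """Equally spaced centers and widths for K triangles, Eqns (12.38a)-(12.38c)."""
    x_min, x_max = float(np.min(x_data)), float(np.max(x_data))
    centers = x_min + np.arange(K) * (x_max - x_min) / (K - 1)
    width = 2.0 * (x_max - x_min) / (K - 1)    # twice the spacing between centers
    return centers, np.full(K, width)

def tri_mf(x, c, w):
    """Symmetric triangular membership function, Eqn (12.37), clipped at zero."""
    return np.maximum(0.0, 1.0 - 2.0 * np.abs(x - c) / w)

centers, widths = partition([1.0, 2.5, 4.0, 6.0], K=5)
print([round(tri_mf(2.5, c, w), 3) for c, w in zip(centers, widths)])
```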
The rule base consists of a set of fuzzy IF-THEN rules in the form 'IF a set of conditions is satisfied THEN a set of consequences can be inferred'. Different types of consequent parts have been used in
Fig. 12.31
fuzzy rules; the two commonly used fuzzy models are based on Mamdani's approach and Sugeno's approach. We restrict our discussion to the Sugeno architecture: the domain interval of y is represented by R linear functions, giving rise to R fuzzy rules. All the rules corresponding to the possible combinations of the inputs are implemented. The total number of rules R for an n-input system is K1 × K2 × ··· × Kn.
The format of the fuzzy rules is

Rule r: IF x1 is $\tilde{A}_{1j_1}$ and ... and xn is $\tilde{A}_{nj_n}$ THEN

$$\hat{y}^{(r)} = a_0^{(r)} + a_1^{(r)} x_1 + \cdots + a_n^{(r)} x_n;\quad r = 1, 2, \ldots, R \tag{12.39}$$
The consequent part is a linear function of the input variables xi; a0, a1, ..., an are the (n + 1) parameters that determine the real consequent value. The aim of the linear function is to describe the local linear behavior of the system; each rule r gives rise to a local linear model. The selected R rules are required to approximate, as consistently as possible, the function that theoretically underlies the system behavior, given the sample of input-output data (12.35). (When ŷ is a constant in (12.39), we get a Sugeno model in which the consequent of a rule is specified by a singleton.)
The decision-making logic employs the fuzzy IF-THEN rules from the rule base to infer the output by a fuzzy reasoning method. The contribution of each local linear model (i.e., each rule) to the estimated output of the FIS is dictated by the firing strength of the rule. We use the product strategy to assign a firing strength μ^(r) to each rule r = 1, 2, ..., R.

Given an input vector $\mathbf{x}^{(p)} = [x_1^{(p)}\ x_2^{(p)}\ \cdots\ x_n^{(p)}]^T$, the degree of compatibility of x^(p) with the rth fuzzy IF-THEN rule is the firing strength μ^(r) of the rule, given by (note that we have used the product t-norm operator on the premise part of the rule)

$$\mu^{(r)}(\mathbf{x}^{(p)}) = \prod_{i=1}^{n} \mu_{ij_i}(x_i^{(p)}) \tag{12.40}$$

The estimated output of the FIS is then the weighted average of the rule consequents:

$$\hat{y} = \frac{\sum_{r=1}^{R}\big(a_0^{(r)} + a_1^{(r)} x_1 + \cdots + a_n^{(r)} x_n\big)\,\mu^{(r)}}{\sum_{r=1}^{R} \mu^{(r)}} \tag{12.41a}$$

$$\hat{y} = \sum_{r=1}^{R}\big(a_0^{(r)} + a_1^{(r)} x_1 + \cdots + a_n^{(r)} x_n\big)\,\bar{\mu}^{(r)} \tag{12.41b}$$

where

$$\bar{\mu}^{(r)} = \frac{\mu^{(r)}}{\sum_{r=1}^{R} \mu^{(r)}} \tag{12.41c}$$

is the normalized firing strength of rule r: the ratio of the firing strength of rule r to the sum of the firing strengths of all the rules.
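A minimal sketch of this inference, Eqns (12.40)-(12.41), is given below; the membership functions and consequent parameters in the usage example are illustrative assumptions.

```python
import numpy as np

def sugeno_output(x, rules):
    """Weighted-average Sugeno inference, Eqns (12.40)-(12.41).

    Each rule is a pair (mfs, a): mfs holds one membership function per
    input, and a = [a0, a1, ..., an] are the consequent parameters.
    """
    firing = np.array([np.prod([mf(xi) for mf, xi in zip(mfs, x)])
                       for mfs, _ in rules])                     # Eqn (12.40)
    consequents = np.array([a[0] + np.dot(a[1:], x) for _, a in rules])
    return np.dot(firing, consequents) / firing.sum()            # Eqn (12.41a)

tri = lambda c, w: (lambda x: max(0.0, 1.0 - 2.0 * abs(x - c) / w))
rules = [([tri(0.0, 8.0), tri(0.0, 8.0)], np.array([1.0, 0.5, -0.2])),
         ([tri(4.0, 8.0), tri(4.0, 8.0)], np.array([0.0, 1.0, 1.0]))]
print(sugeno_output(np.array([1.0, 2.0]), rules))
```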
Note that the output of the fuzzy model can be determined only if the parameters in the rule consequents are known. However, it is often difficult, or even impossible, to specify a rule consequent in polynomial form. Fortunately, no prior knowledge of the rule consequent parameters is needed for the Sugeno fuzzy modeling approach to deal with a problem; these parameters can be determined by the least squares estimation method, as follows.
Given the values of the membership parameters and a training set of P input-output patterns {(x^(p), y^(p)); p = 1, 2, ..., P}, we can form P linear equations in terms of the consequent parameters:

$$y^{(p)} = \bar{\mu}^{(1)}(\mathbf{x}^{(p)})\big[a_0^{(1)} + a_1^{(1)} x_1^{(p)} + \cdots + a_n^{(1)} x_n^{(p)}\big] + \bar{\mu}^{(2)}(\mathbf{x}^{(p)})\big[a_0^{(2)} + a_1^{(2)} x_1^{(p)} + \cdots + a_n^{(2)} x_n^{(p)}\big] + \cdots + \bar{\mu}^{(R)}(\mathbf{x}^{(p)})\big[a_0^{(R)} + a_1^{(R)} x_1^{(p)} + \cdots + a_n^{(R)} x_n^{(p)}\big];\quad p = 1, 2, \ldots, P \tag{12.42}$$

where $\bar{\mu}^{(r)}(\mathbf{x}^{(p)})$ is the normalized firing strength of rule r, fired by the input pattern x^(p).
In terms of the vectors

$$\bar{\mathbf{x}}^{(p)} = \big[1\ x_1^{(p)}\ x_2^{(p)}\ \cdots\ x_n^{(p)}\big]^T$$
$$\mathbf{p}^{(r)} = \big[a_0^{(r)}\ a_1^{(r)}\ \cdots\ a_n^{(r)}\big] \tag{12.43}$$
$$\Theta = \big[a_0^{(1)}\ a_1^{(1)}\ \cdots\ a_n^{(1)}\ \ a_0^{(2)}\ \cdots\ a_n^{(2)}\ \cdots\ a_0^{(R)}\ \cdots\ a_n^{(R)}\big]$$

we can write the P linear equations as follows:

$$y^{(1)} = \bar{\mu}^{(1)}(\mathbf{x}^{(1)})\big[\mathbf{p}^{(1)} \bar{\mathbf{x}}^{(1)}\big] + \bar{\mu}^{(2)}(\mathbf{x}^{(1)})\big[\mathbf{p}^{(2)} \bar{\mathbf{x}}^{(1)}\big] + \cdots + \bar{\mu}^{(R)}(\mathbf{x}^{(1)})\big[\mathbf{p}^{(R)} \bar{\mathbf{x}}^{(1)}\big]$$
$$y^{(2)} = \bar{\mu}^{(1)}(\mathbf{x}^{(2)})\big[\mathbf{p}^{(1)} \bar{\mathbf{x}}^{(2)}\big] + \bar{\mu}^{(2)}(\mathbf{x}^{(2)})\big[\mathbf{p}^{(2)} \bar{\mathbf{x}}^{(2)}\big] + \cdots + \bar{\mu}^{(R)}(\mathbf{x}^{(2)})\big[\mathbf{p}^{(R)} \bar{\mathbf{x}}^{(2)}\big]$$
$$\vdots \tag{12.44}$$

or, compactly,

$$\mathbf{y} = \mathbf{X}^T \Theta^T \tag{12.45b}$$
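A minimal NumPy sketch of this least squares estimation follows; the regressor matrix is built exactly as in (12.42), with one block of n + 1 columns per rule.

```python
import numpy as np

def consequent_lse(X, y, norm_firing):
    """Least squares estimate of Sugeno consequent parameters, Eqns (12.42)-(12.45).

    X: (P, n) input patterns; y: (P,) target outputs;
    norm_firing: (P, R) normalized firing strengths of the R rules.
    Returns a (R, n + 1) array; row r holds [a0, a1, ..., an] of rule r.
    """
    P, n = X.shape
    R = norm_firing.shape[1]
    Xbar = np.hstack([np.ones((P, 1)), X])          # [1, x1, ..., xn] per pattern
    # Row p of the regressor: [mu_bar(1) * Xbar_p, ..., mu_bar(R) * Xbar_p]
    A = (norm_firing[:, :, None] * Xbar[:, None, :]).reshape(P, R * (n + 1))
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta.reshape(R, n + 1)
```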
In the Sugeno fuzzy model given above, we have used the most intuitive approach of implementing all possible combinations of the given fuzzy sets as rules. In fact, if the data are not uniformly distributed, some rules may never be fired. This and other drawbacks are handled by many variants of the basic ANFIS model, described in the next section.
12.7 Neuro-Fuzzy Systems
Fuzzy logic and neural networks are natural complementary tools in building intelligent systems. While
neural networks are computational structures that perform well when dealing with raw data, fuzzy logic
deals with reasoning, using linguistic information acquired from domain experts. However, fuzzy systems
lack the ability to learn and cannot adjust themselves to a new environment. On the other hand, although
neural networks can learn, they are opaque to the user. The merger of a neural network with a fuzzy
system into one integrated system, therefore, offers a promising approach to building intelligent systems.
Integrated systems can combine the parallel computation and learning abilities of neural networks, with
the human-like knowledge representation and explanation abilities of fuzzy systems. As a result, neural
networks become more transparent, while fuzzy systems become capable of learning.
The structure of a neuro-fuzzy system is similar to a multilayer neural network. In general, a neuro-fuzzy
system has input terminals, output layer, and hidden layers that represent membership functions and
fuzzy rules.
Roger Jang [142] proposed an integrated system that is functionally equivalent to a Sugeno fuzzy inference model. He called it an Adaptive Neuro-Fuzzy Inference System, or ANFIS. Similar network structures have also been proposed for the Mamdani fuzzy inference model [137]. However, the Sugeno model is by far the most popular candidate for data-based fuzzy modeling. Our brief presentation of the subject is, therefore, focused on ANFIS based on the Sugeno fuzzy model.
12.7.1 ANFIS Architecture
Figure 12.32 shows the ANFIS architecture. For simplicity, we assume that the ANFIS has two inputs, x1 and x2, and one output y. Each input is represented by two fuzzy sets, and the output by a first-order polynomial. The ANFIS implements four rules of the form

Rule r: IF x1 is $\tilde{A}_{1j_1}$ and x2 is $\tilde{A}_{2j_2}$ THEN $\hat{y}^{(r)} = a_0^{(r)} + a_1^{(r)} x_1 + a_2^{(r)} x_2$;  j1, j2 ∈ {1, 2};  r = 1, 2, 3, 4   (12.46)
Fig. 12.32 ANFIS architecture
Layer 1 The inputs to the nodes in the first layer are the input fuzzy sets of the ANFIS. Since these fuzzy sets are fuzzy singletons, the numerical inputs are directly transmitted to the first-layer nodes. Nodes in this layer represent the membership functions associated with each linguistic term of the input variables. Every node here is an adaptive node. Links in this layer are fully connected between the input terminals and their corresponding membership function nodes. The membership functions can be any appropriate parameterized functions; we use the Gaussian function:
$$\mu_{\tilde{A}_{ij_i}}(x_i) \triangleq \mu_{ij_i}(x_i) = \exp\left[-\left(\frac{x_i - c_{(i,j_i)}}{w_{(i,j_i)}}\right)^2\right] \tag{12.47}$$
The nodes are labeled $\tilde{A}_{ij_i}$; i = 1, 2; j_i = 1, 2. The total number of nodes in this layer is, therefore, four. $c_{(i,j_i)}$ is the center (mean) and $w_{(i,j_i)}$ is the width (variance), respectively, of the membership function corresponding to the node $\tilde{A}_{ij_i}$; xi is the input and $\mu_{ij_i}$ is the output of the node. The adjustable weights in Layer 1 are the $c_{(i,j_i)}$'s and $w_{(i,j_i)}$'s. As the values of these parameters change, the Gaussian function varies accordingly, exhibiting various forms of membership functions of the fuzzy set $\tilde{A}_{ij_i}$. Parameters in this layer are referred to as premise parameters.
Layer 2 Every node in this layer is a fixed node labeled Π, whose output is the product of all the incoming signals. Each node output represents the firing strength of a rule. (In fact, other t-norm operators could also be used as node functions.)
Each node, representing a single Sugeno fuzzy rule, has the output

$$\mu^{(r)}(\mathbf{x}) \triangleq \prod_{(i,\, j_i) \in I_r} \mu_{ij_i}(x_i) \tag{12.48}$$

where $I_r$ is the set of all $\tilde{A}_{ij_i}$ associated with the premise part of rule r.
Layer 3 Every node in this layer is a fixed node labeled N. The rth node calculates the ratio of the rth rule's firing strength to the sum of all the rules' firing strengths:

$$\bar{\mu}^{(r)} = \frac{\mu^{(r)}}{\sum_{r=1}^{R} \mu^{(r)}} = \text{normalized firing strength of rule } r \tag{12.49}$$
Layer 4 Every node in this layer is an adaptive node; it is connected to the respective normalization node in the previous layer, and also receives the inputs x1 and x2. It calculates the weighted consequent value of a given rule as

$$\hat{y}^{(r)} = \bar{\mu}^{(r)}\big[a_0^{(r)} + a_1^{(r)} x_1 + a_2^{(r)} x_2\big] \tag{12.50}$$

where $\bar{\mu}^{(r)}$ is the normalized firing strength from Layer 3, and $a_0^{(r)}$, $a_1^{(r)}$ and $a_2^{(r)}$ are the parameters of this node. Parameters in this layer are referred to as consequent parameters.

Each node in Layer 4 is a local linear model of the Sugeno fuzzy system; integration of the outputs of all the local linear models yields the global output.
Layer 5 The single node in this layer is a fixed node labeled Σ, which computes the overall output as the summation of all incoming signals:

$$\hat{y} = \sum_{r=1}^{R}\big(a_0^{(r)} + a_1^{(r)} x_1 + a_2^{(r)} x_2\big)\,\bar{\mu}^{(r)};\quad R = 4 \tag{12.51}$$
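A minimal sketch of the forward pass through these five layers follows, for the two-input, four-rule network of Fig. 12.32; the helper name and any parameter values supplied by a caller are illustrative assumptions.

```python
import numpy as np

def anfis_forward(x, centers, widths, a):
    """Forward pass of the 2-input, 4-rule ANFIS of Fig. 12.32.

    x: array [x1, x2]; centers, widths: (2, 2) premise parameters
    c(i, ji), w(i, ji); a: (4, 3) consequent parameters [a0, a1, a2].
    """
    # Layer 1: Gaussian membership grades, Eqn (12.47)
    mu = np.exp(-(((x[:, None] - centers) / widths) ** 2))      # shape (2, 2)
    # Layer 2: product of one grade per input -> firing strengths, Eqn (12.48)
    firing = np.array([mu[0, j1] * mu[1, j2]
                       for j1 in range(2) for j2 in range(2)])
    # Layer 3: normalization, Eqn (12.49)
    nf = firing / firing.sum()
    # Layers 4 and 5: weighted linear consequents and summation, Eqns (12.50)-(12.51)
    consequents = a[:, 0] + a[:, 1] * x[0] + a[:, 2] * x[1]
    return float(np.dot(nf, consequents))
```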
12.7.2 Hybrid Learning Algorithm
An ANFIS uses a hybrid learning algorithm that combines the least squares estimator and the gradient descent method. First, initial activation functions are assigned to each membership neuron. The function centers of the neurons connected to input xi are set so that the domain of xi is divided equally, and the widths are set to allow sufficient overlapping of the respective functions.

In the ANFIS training algorithm, each epoch is composed of a forward pass and a backward pass. In the forward pass, a training set of input patterns (input vectors x) is presented to the ANFIS, the neuron outputs are calculated on a layer-by-layer basis, and the rule consequent parameters are identified by the least squares estimator. In Sugeno fuzzy inference, the output ŷ is linear in the consequent parameters. Thus, given the values of the membership parameters and a training set of P input-output patterns, we can form P linear equations in terms of the consequent parameters (refer to Eqns (12.45)). The least-squares solution of these equations yields the consequent parameters.
As soon as the rule consequent parameters are established, we can compute the actual network output ŷ and determine the error

e = y − ŷ   (12.52)
In the backward pass, the backpropagation algorithm is applied: the error signals are propagated back, and the premise parameters are updated according to the chain rule.
The goal is to minimize the error function

$$E = \tfrac{1}{2}\,(y - \hat{y})^2 \tag{12.53}$$

The error at Layer 5:

$$\frac{\partial E}{\partial \hat{y}} = (\hat{y} - y) \tag{12.54}$$
Back-propagating to Layer 3 via Layer 4 (refer to Eqn (12.51)),

$$\frac{\partial E}{\partial \bar{\mu}^{(r)}} = \frac{\partial E}{\partial \hat{y}}\,\frac{\partial \hat{y}}{\partial \bar{\mu}^{(r)}} = \frac{\partial E}{\partial \hat{y}}\,\big[a_0^{(r)} + a_1^{(r)} x_1 + a_2^{(r)} x_2\big] \tag{12.55}$$

Back-propagating to Layer 2 (refer to Eqn (12.49)),

$$\frac{\partial E}{\partial \mu^{(r)}} = \frac{\partial E}{\partial \bar{\mu}^{(r)}}\,\frac{\partial \bar{\mu}^{(r)}}{\partial \mu^{(r)}} = \frac{\partial E}{\partial \bar{\mu}^{(r)}}\left[\frac{\bar{\mu}^{(r)}(1 - \bar{\mu}^{(r)})}{\mu^{(r)}}\right] \tag{12.56}$$
The error at Layer 1: $I_r$ is the set of all $\tilde{A}_{ij_i}$ associated with the premise part of rule r. For the reverse pass, let $I_{(i,j_i)}$ be the set of all rule nodes in Layer 2 connected to the $(i, j_i)$th node (corresponding to $\tilde{A}_{ij_i}$) of Layer 1. Back-propagating the error to Layer 1 (refer to Eqn (12.48)),

$$\frac{\partial E}{\partial \mu_{ij_i}} = \sum_{r \in I_{(i,j_i)}} \frac{\partial E}{\partial \mu^{(r)}}\,\frac{\partial \mu^{(r)}}{\partial \mu_{ij_i}} = \sum_{r \in I_{(i,j_i)}} \frac{\partial E}{\partial \mu^{(r)}}\left[\frac{\mu^{(r)}}{\mu_{ij_i}}\right] \tag{12.57a}$$
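A minimal sketch of one backward-pass update follows, chaining (12.54)-(12.57a) and completing the last step with the derivatives of the Gaussian membership function (12.47) with respect to its center and width; the layout matches the forward-pass sketch above, and the learning rate is an assumed value.

```python
import numpy as np

def premise_grad_step(x, y, centers, widths, a, lr=0.01):
    """One gradient-descent update of the premise parameters, Eqns (12.53)-(12.57a)."""
    mu = np.exp(-(((x[:, None] - centers) / widths) ** 2))          # Layer 1
    pairs = [(j1, j2) for j1 in range(2) for j2 in range(2)]
    firing = np.array([mu[0, j1] * mu[1, j2] for j1, j2 in pairs])  # Eqn (12.48)
    nf = firing / firing.sum()
    consequents = a[:, 0] + a[:, 1] * x[0] + a[:, 2] * x[1]
    y_hat = np.dot(nf, consequents)

    dE_dyhat = y_hat - y                                            # Eqn (12.54)
    dE_dnf = dE_dyhat * consequents                                 # Eqn (12.55)
    dE_df = dE_dnf * nf * (1.0 - nf) / firing                       # Eqn (12.56)
    for r, (j1, j2) in enumerate(pairs):
        for i, ji in ((0, j1), (1, j2)):
            dE_dmu = dE_df[r] * firing[r] / mu[i, ji]               # Eqn (12.57a)
            z = (x[i] - centers[i, ji]) / widths[i, ji]
            dmu_dc = mu[i, ji] * 2.0 * z / widths[i, ji]            # from Eqn (12.47)
            dmu_dw = mu[i, ji] * 2.0 * z ** 2 / widths[i, ji]
            centers[i, ji] -= lr * dE_dmu * dmu_dc
            widths[i, ji] -= lr * dE_dmu * dmu_dw
    return y_hat
```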
In this section, we have described a method that can be used to construct identifiers of dynamical systems
that, in turn, could be employed to construct neuro-fuzzy control systems. The idea behind the method is
to apply the backpropagation algorithm to a fuzzy logic system.
Neuro-fuzzy control refers to the design methods for fuzzy logic controllers that employ neural network
techniques. The design methods for neuro-fuzzy control are derived directly from methods for neural
control. Thus, if we replace the NN blocks in Figs 11.23–11.25 with ANFIS blocks, then we end up with
neuro-fuzzy control systems.
REVIEW EXAMPLES
The scaling factors GE, GV and GU′ of the fuzzy controller may be tuned by trial and error. Refer to Appendix B for the realization of the controller.
Fig. 12.34 Membership functions N, P for e* and v*, and N, Z, P for Δu*, on the normalized universe −1 to 1
Fig. 12.35
All the possible four rules, given in (12.46), will be fired. The firing strengths of the rules are (Layer 2 in Fig. 12.32; Eqn (12.48)):

μ(1)(x) = 0.900990 × 0.8 = 0.720792
μ(2)(x) = 0.099010 × 0.2 = 0.019802
μ(3)(x) = 0.099010 × 0.8 = 0.079208
μ(4)(x) = 0.900990 × 0.2 = 0.180198

The normalized firing strengths of the rules are (Layer 3 in Fig. 12.32; Eqn (12.49)); since the firing strengths already sum to one,

μ̄(1) = μ(1) / Σ_{r=1}^{4} μ(r) = μ(1) = 0.720792
μ̄(2) = μ(2),  μ̄(3) = μ(3),  μ̄(4) = μ(4)

The weighted consequent values of the rules are (Layer 4 in Fig. 12.32; Eqn (12.50)):

ŷ(1) = 0.720792(0.10 + 0.2 × 1.1 + 0.3 × 6.0) = 1.528079;
ŷ(2) = 0.054059;  ŷ(3) = 0.179010;  ŷ(4) = 0.517168

The predicted output of the ANFIS is (Layer 5 in Fig. 12.32; Eqn (12.51)):

ŷ = 2.278316
PROBLEMS
12.1 (a) In the following, we suggest a membership function for the fuzzy description of the set 'real numbers close to 2':

Ã = {(x, μ_Ã(x))}

where

$$\mu_{\tilde{A}}(x) = \begin{cases} 0 & ; x < 1 \\ -x^2 + 4x - 3 & ; 1 \le x \le 3 \\ 0 & ; x > 3 \end{cases}$$

Sketch the membership function (an arc of a parabola) and determine its supporting interval, and the α-cut interval for α = 0.5.
(b) Sketch the piecewise quadratic membership function

$$\mu_{\tilde{B}}(x) = \begin{cases} 2(x - 1)^2 & ; 1 \le x < 3/2 \\ 1 - 2(x - 2)^2 & ; 3/2 \le x < 5/2 \\ 2(x - 3)^2 & ; 5/2 \le x \le 3 \\ 0 & ; \text{otherwise} \end{cases}$$

and show that it also represents 'real numbers close to 2'. Determine its support, and the α-cut for α = 0.5.
12.2 (a) The well-known Gaussian distribution in probability is defined by

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x - m}{\sigma}\right)^2};\quad -\infty < x < \infty$$

where m is the mean and σ is the standard deviation of the distribution. Construct a normal, convex membership function from this distribution (select the parameters m and σ) that represents 'real numbers close to 2'. Find its support, and the α-cut for α = 0.5. Show that the membership function

$$\mu_{\tilde{A}}(x) = \frac{1}{1 + (x - 2)^2}$$

also represents 'real numbers close to 2'. Find its support, and the α-cut for α = 0.5.
12.3 Consider the piecewise quadratic function

$$f(x) = \begin{cases} 0 & ; x < a \\ 2\left(\dfrac{x - a}{b - a}\right)^2 & ; a \le x < \dfrac{a + b}{2} \\ 1 - 2\left(\dfrac{x - b}{b - a}\right)^2 & ; \dfrac{a + b}{2} \le x < b \\ 1 & ; b \le x < c \end{cases}$$

Construct a normal, convex membership function from f(x) (select the parameters a, b and c) that represents the set 'tall men' on the universe [3, 9]. Determine the crosspoints and the support of the membership function.
12.4 (a) Write an analytical expression for a membership function μ_Ã(x) with supporting interval [−1, 9] and α-cut interval for α = 1 given as [4, 5].

(b) Define what we mean by a normal membership function and a convex membership function. Is the function described in (a) above (i) normal, (ii) convex?
12.5 (a) Let the fuzzy set Ã be the linguistic 'warm' with membership function

$$\mu_{\tilde{A}}(x) = \begin{cases} 0 & ; x < a_1 \\ \dfrac{x - a_1}{b_1 - a_1} & ; a_1 \le x \le b_1 \\ 1 & ; b_1 \le x \le b_2 \\ \dfrac{x - a_2}{b_2 - a_2} & ; b_2 \le x \le a_2 \\ 0 & ; x \ge a_2 \end{cases}$$

a1 = 64°F, b1 = 70°F, b2 = 74°F, a2 = 78°F

(i) Is Ã a normal fuzzy set?
(ii) Is Ã a convex fuzzy set?
(iii) Is Ã a singleton fuzzy set?

If the answer to one or more of these is 'no', then give an example of such a set.

(b) For the fuzzy set Ã described in part (a), assume that b1 = b2 = 72°F. Sketch the resulting membership function and determine its support, crosspoints and α-cuts for α = 0.2 and 0.4.
12.6 Consider two fuzzy sets Ã and B̃, whose membership functions μ_Ã(x) and μ_B̃(x) are shown in Fig. P12.6. The fuzzy variable x is temperature.

Sketch the graphs of the complement of Ã, of μ_{Ã∩B̃}(x), and of μ_{Ã∪B̃}(x). Which t-norm and t-conorm have you used?

Fig. P12.6 Membership functions of Ã (triangle peaking at 15°C) and B̃ (triangle on the interval 10°C–20°C)
12.7 Consider the fuzzy relation R̃ on the universe X × Y, given by the membership function

$$\mu_{\tilde{R}}(x, y) = \frac{1}{1 + 100\,(x - 3y)^4}$$

vaguely representing the crisp relation x = 3y. All elements satisfying x = 3y have unity grade of membership; elements satisfying, for example, x = 3.1y have membership grades less than 1. The farther away the elements are from the straight line, the lower are the membership grades.

Give a graphical representation of the fuzzy relation R̃.
12.8 Assume the membership function of the fuzzy set Ã, big pressure, is

$$\mu_{\tilde{A}}(x) = \begin{cases} 1 & ; x \ge 5 \\ 1 - \dfrac{5 - x}{4} & ; 1 \le x \le 5 \\ 0 & ; \text{otherwise} \end{cases}$$

and the membership function of the fuzzy set B̃, small volume, is

$$\mu_{\tilde{B}}(y) = \begin{cases} 1 & ; y \le 1 \\ 1 - \dfrac{y - 1}{4} & ; 1 \le y \le 5 \\ 0 & ; \text{otherwise} \end{cases}$$

Find the truth values of the following propositions:
(i) 4 is big pressure.
(ii) 3 is small volume.
(iii) 4 is big pressure and 3 is small volume.
(iv) 4 is big pressure → 3 is small volume.

Explain the conjunction and implication operations you have used for this purpose.
12.9 Consider the following statements:

Input : Ã′ is very small
Rule : IF Ã is small THEN B̃ is large
Inference : B̃′ is very large

If R̃ is a fuzzy relation from X to Y representing the implication rule, and Ã′ is a fuzzy subset of X, then the fuzzy subset B̃′ of Y, which is induced by Ã′, is given by

B̃′ = Ã′ ∘ R̃

where the ∘ operation (composition) is carried out by taking the cylindrical extension of Ã′, taking the intersection with R̃, and projecting the result onto Y.

Define the cylindrical extension, intersection and projection operations that lead to the max-min compositional rule of inference.
12.10 Input : x is Ã′ and y is B̃′
Rule 1 : IF x is Ã1 and y is B̃1 THEN z is C̃1
Rule 2 : IF x is Ã2 and y is B̃2 THEN z is C̃2
Inference : z is C̃′
Fig. P12.11 Membership function on the universe z = 1, 2, ..., 10, with grades taking the values 1/4, 1/2, 3/4 and 1
12.12 Consider the fuzzy system concerning the terminal voltage and speed of an electric motor, described by the membership functions

x       | 100 | 150 | 200 | 250 | 300
μ_Ã(x)  | 1   | 0.8 | 0.5 | 0.2 | 0.1

$$\mu_{\tilde{B}_1}(y) = \begin{cases} \dfrac{y - 5}{3} & ; 5 \le y \le 8 \\ \dfrac{11 - y}{3} & ; 8 \le y \le 11 \end{cases} \qquad \mu_{\tilde{B}_2}(y) = \begin{cases} \dfrac{y - 4}{3} & ; 4 \le y \le 7 \\ \dfrac{10 - y}{3} & ; 7 \le y \le 10 \end{cases}$$

$$\mu_{\tilde{C}_1}(z) = \begin{cases} \dfrac{z - 1}{3} & ; 1 \le z \le 4 \\ \dfrac{7 - z}{3} & ; 4 \le z \le 7 \end{cases} \qquad \mu_{\tilde{C}_2}(z) = \begin{cases} \dfrac{z - 3}{3} & ; 3 \le z \le 6 \\ \dfrac{9 - z}{3} & ; 6 \le z \le 9 \end{cases}$$

Assume the fuzzy sets Ã′ and B̃′ are singletons at x0 = 4 and y0 = 8. Determine the inference fuzzy set C̃′ of the fuzzy system. Defuzzify C̃′.
12.14 The control objective is to design an automatic braking system for motor cars. We need two analog signals: the vehicle speed (V), and a measure of the distance (D) from the vehicle in front. A fuzzy logic control system will process these, giving a single output, the braking force (B), which controls the brakes.

Fig. P12.14 Membership functions PS, PM, PL for V (0–60 km/hr), D (0–60 m) and B
Term set for each of the variables (V, D, and B) is of the form:
{PS (positive small), PM (positive medium), PL (positive large)}
Membership functions for each term-set are given in Fig. P12.14.
Suppose that for the control problem, two rules have to be fired:
Rule 1: IF D = PS and V = PM THEN B = PL
Rule 2: IF D = PM and V = PL THEN B = PM
For the sensor readings of V = 55 km/hr, and D = 27 m from the car in front, find graphically
(i) the firing strengths of the two rules;
(ii) the aggregated output; and
(iii) defuzzified control action.
12.15 The control objective is to automate the wash time when using a washing machine. Experts select, as inputs, the dirt and grease of the clothes to be washed, and, as the output parameter, the wash time, as follows:

Dirt ≜ {SD (small dirt), MD (medium dirt), LD (large dirt)}
Grease ≜ {NG (no grease), MG (medium grease), LG (large grease)}
Washtime ≜ {VS (very short), S (short), M (medium), L (long), VL (very long)}

The degrees of dirt and grease are measured on a scale from 0 to 100; washtime is measured in minutes, from 0 to 60.

$$\mu_{SD}(x) = \frac{50 - x}{50};\ 0 \le x \le 50 \qquad\quad \mu_{VS}(z) = \frac{10 - z}{10};\ 0 \le z \le 10$$

$$\mu_{MD}(x) = \begin{cases} \dfrac{x}{50} & ; 0 \le x \le 50 \\ \dfrac{100 - x}{50} & ; 50 \le x \le 100 \end{cases} \qquad \mu_{S}(z) = \begin{cases} \dfrac{z}{10} & ; 0 \le z \le 10 \\ \dfrac{25 - z}{15} & ; 10 \le z \le 25 \end{cases}$$

$$\mu_{LD}(x) = \frac{x - 50}{50};\ 50 \le x \le 100 \qquad\quad \mu_{M}(z) = \begin{cases} \dfrac{z - 10}{15} & ; 10 \le z \le 25 \\ \dfrac{40 - z}{15} & ; 25 \le z \le 40 \end{cases}$$

$$\mu_{NG}(y) = \frac{50 - y}{50};\ 0 \le y \le 50 \qquad\quad \mu_{L}(z) = \begin{cases} \dfrac{z - 25}{15} & ; 25 \le z \le 40 \\ \dfrac{60 - z}{20} & ; 40 \le z \le 60 \end{cases}$$

$$\mu_{MG}(y) = \begin{cases} \dfrac{y}{50} & ; 0 \le y \le 50 \\ \dfrac{100 - y}{50} & ; 50 \le y \le 100 \end{cases} \qquad \mu_{VL}(z) = \frac{z - 40}{20};\ 40 \le z \le 60$$

$$\mu_{LG}(y) = \frac{y - 50}{50};\ 50 \le y \le 100$$
The rule base is given below. Find a crisp control output for the following sensor readings: Dirt = 60; Grease = 70.

Dirt \ Grease | NG | MG | LG
SD            | VS | M  | L
MD            | S  | M  | L
LD            | M  | L  | VL
12.16 A fuzzy controller acts according to the following rule base (N = negative, M = medium, P = positive):

R1 : IF x1 is N AND x2 is N, THEN u is N
R2 : IF x1 is N OR x2 is P, THEN u is M
R3 : IF x1 is P OR x2 is N, THEN u is M
R4 : IF x1 is P AND x2 is P, THEN u is P

The membership functions of the input and output variables are given in Fig. P12.16. The actual inputs are x1 = 2.5 and x2 = 4. Which rules are active, and what will be the controller action u? Find u by applying the standard fuzzy operations: min for AND, and max for OR.

Fig. P12.16 Membership functions N, P for x1 and x2, and N, M, P for u, on the universe 0–4
12.17 Consider the following fuzzy model of a system with inputs x and y, and output z:

Rule 1 : IF x is A3 OR y is B1, THEN z is C1
Rule 2 : IF x is A2 AND y is B2, THEN z is C2
Rule 3 : IF x is A1, THEN z is C3

The membership functions of the input and output variables are given in Fig. P12.17. The actual inputs are x1 and y1. Find the output z by applying the standard fuzzy operations: min for AND, and max for OR.

Fig. P12.17 Membership functions A1, A2, A3 of x; B1, B2 of y; and C1, C2, C3 of z; the inputs x1 and y1 cut the input membership functions at the grades 0.7, 0.5, 0.2 and 0.1
12.18 A fuzzy controller acts according to the following rule base (N = negative, P = positive):

R1 : IF x1 is N AND x2 is N, THEN u is k1
R2 : IF x1 is N OR x2 is P, THEN u is k2
R3 : IF x1 is P OR x2 is N, THEN u is k2
R4 : IF x1 is P AND x2 is P, THEN u is k3

The membership functions of the input variables are given in Fig. P12.16, and the membership functions of the output variable (the controller action) u are singletons placed at k1 = 1, k2 = 2, k3 = 3. The actual inputs are x1 = 2.5 and x2 = 4. Find u by applying the standard fuzzy operations: min for AND, and max for OR.
12.19 Consider the two-dimensional sinc function defined by

$$y = \mathrm{sinc}(x_1, x_2) = \frac{\sin(x_1)\,\sin(x_2)}{x_1\, x_2}$$

Training data are sampled uniformly from the input range [−10, 10] × [−10, 10]. With two symmetric triangular membership functions assigned to each input variable, construct a Sugeno fuzzy model architecture for the sinc function. Give the defining equations for determination of the premise and consequent parameters of the model.
12.20 To identify the nonlinear system

$$y = \left(1 + x_1^{0.5} + x_2^{-1} + x_3^{-1.5}\right)^2$$

we assign two membership functions to each input variable. Training and testing data are sampled uniformly from the input ranges [1, 6] × [1, 6] × [1, 6] and [1.5, 5.5] × [1.5, 5.5] × [1.5, 5.5], respectively. Extract Sugeno fuzzy rules from the numerical input-output training data that could be employed in an ANFIS model.
12.21 Assume that a fuzzy inference system has two inputs x1 and x2, and one output y. The rule base contains two Sugeno fuzzy rules, as follows:

Rule 1: IF x1 is Ã11 and x2 is Ã21 THEN $y^{(1)} = a_0^{(1)} + a_1^{(1)} x_1 + a_2^{(1)} x_2$
Rule 2: IF x1 is Ã12 and x2 is Ã22 THEN $y^{(2)} = a_0^{(2)} + a_1^{(2)} x_1 + a_2^{(2)} x_2$

The Ãij are Gaussian functions. For given input values x1 and x2, the inferred output is calculated by

$$\hat{y} = \frac{\mu^{(1)} y^{(1)} + \mu^{(2)} y^{(2)}}{\mu^{(1)} + \mu^{(2)}}$$

where μ(r), r = 1, 2, are the firing strengths of the two rules. Product inference is used to calculate the firing strengths of the rules.

Develop an ANFIS architecture for this modeling problem, and derive learning algorithms based on the least squares estimation and gradient-descent methods.
12.22 Consider a fuzzy model (Mamdani architecture) for a manufacturing process. The process is characterized by two input variables, x1 and x2, and one output variable, y. The membership function distributions (isosceles triangles of base widths q1, q2, q3) of x1, x2 and y are shown in Fig. P12.22, and a rule base is given in Table P12.22. Determine the output of the model for x1 = 10, x2 = 28.
Fig. P12.22
12.23 Consider a fuzzy model (Sugeno architecture) for a manufacturing process. The process is characterized by two input variables, x1 and x2, and one output variable, y. The membership function distributions of x1 and x2 are shown in Fig. P12.23. The domain intervals of xi are divided into Ki = 3 fuzzy sets. Therefore, there is a maximum of K1 × K2 = 9 feasible rules. The output of the rth rule is expressed as

$$\hat{y}^{(r)} = a_j^{(r)} x_1 + b_k^{(r)} x_2$$

where j, k = 1, 2, 3; $a_1^{(r)} = 1$, $a_2^{(r)} = 2$ and $a_3^{(r)} = 3$ if x1 is found to be A11, A12 and A13, respectively; $b_1^{(r)} = 1$, $b_2^{(r)} = 2$, $b_3^{(r)} = 3$ if x2 is found to be A21, A22 and A23, respectively. Determine the output of the model if x1 = 6.0 and x2 = 2.2.
Fig. P12.23
Chapter 13
Optimization with Genetic Algorithms
Our focus in this chapter is on the genetic algorithm, an evolutionary algorithm which is both the simplest and the most general for optimization. The application of the genetic algorithm to the learning of neural networks, as well as to the structural and parametric adaptation of fuzzy systems, will also be described.
to offspring; mutation may cause the chromosomes of children to be different from those of their biological parents. The fitness of an organism is typically defined as the probability that the organism will live to reproduce (viability), or as a function of the number of offspring the organism has (fertility).
The basic idea of a genetic algorithm is very simple. The term chromosome typically refers to a candidate solution to a problem, often stored as a string of binary digits (1s and 0s) in the computer's memory. The 'genes' are short blocks of adjacent bits that encode a particular element of the candidate solution (e.g., in the context of multiparameter function optimization, the bits encoding a particular parameter might be considered to be a gene). An 'allele' in a bit string is either 0 or 1. Crossover typically consists of exchanging genetic material between two single-chromosome haploid parents. Mutation consists of flipping the bit at a randomly chosen locus.
Most applications of genetic algorithms employ haploid individuals, particularly, single-chromosome
individuals. The genotype of an individual, in a genetic algorithm using bit strings, is simply the
configuration of bits in that individual’s chromosome.
13.2.2
The current literature identifies three main types of search methods: calculus-based, enumerative and
random. Calculus-based methods have been studied extensively. These subdivide into two main classes:
indirect and direct. Indirect methods seek local extrema by solving the usually nonlinear set of equations,
resulting from setting the gradient of the objective function equal to zero. Given a smooth, unconstrained
function, finding a possible peak starts by restricting search to those points with slopes of zero in all
directions. On the other hand, direct (search) methods seek local optima by hopping on the function and
moving in a direction related to the local gradient. This is simply the notion of hill climbing: to find the
local best, climb the function in the steepest permissible direction.
Both classes of calculus-based methods are local in scope: the optima they seek are the best in a neighborhood of the current point. Clearly, for a multimodal function, starting the search procedure in the neighborhood of a lower peak will cause us to miss the main event (the higher peak). Furthermore, once the lower peak is reached, further improvement must be sought through random restarts or other trickery. Another problem with calculus-based methods is that they depend upon the existence of derivatives (well-defined slope values). Even if we allow numerical approximation of derivatives, this is a severe shortcoming. The real world of search is fraught with discontinuities and vast multimodal (i.e., consisting of many 'hills') noisy search spaces; methods depending upon the restrictive requirements of continuity and derivative existence are unsuitable for all but a very limited problem domain.
Enumerative schemes have been considered in many shapes and sizes. The idea is fairly straightforward: within a finite search space, the search algorithm looks at the objective function value at every point in the space, one at a time. Although the simplicity of this type of algorithm is attractive, and enumeration is a very human kind of search, such schemes are practical only when the number of possibilities is small. Even the highly touted enumerative scheme, dynamic programming, breaks down on problems of moderate size and complexity.
Random walks, and random schemes that search and save the best, can in the long run be expected to do no better than enumerative schemes. We must be careful to separate strictly random search methods from randomized techniques. The genetic algorithm is an example of a search procedure that uses random choice as a tool to guide a highly exploitative search through a coding of the parameter space. Using random choice as a tool in a directed search process seems strange at first, but nature contains many examples.
The traditional schemes have been used successfully in many applications; however, as more complex
problems are attacked, other methods will be necessary. We shall soon see how genetic algorithms help
attack complex problems [146].
The GA literature describes a large number of successful applications, but there are also many cases in which GAs perform poorly. Given a potential application, how do we know if a GA is a good method to use? There is no rigorous answer, though many researchers share the intuition that if the space to be searched is large, is known not to be perfectly smooth and unimodal, or is not well understood; or if the fitness function is noisy; and if the task does not require a global optimum to be found, i.e., if quickly finding a sufficiently good solution is enough, then a GA will have a good chance of being competitive with, or surpassing, other methods. If the space is not large, it can be searched exhaustively by enumerative search methods, and one can be sure that the best possible solution has been found, whereas a GA might give only a 'good' solution. If the space is smooth and unimodal, a gradient ascent algorithm will be much more efficient than a GA. If the space is well understood, search methods using domain-specific heuristics can often be designed to outperform any general-purpose method such as a GA. If the fitness function is noisy, a one-candidate-solution-at-a-time search method such as simple hill climbing might be irrecoverably led astray by the noise; but GAs, since they work by accumulating fitness statistics over many generations, are thought to perform robustly in the presence of small amounts of noise.

These intuitions, of course, do not rigorously predict when a GA will be an effective search procedure, competitive with other procedures. It would be useful to have a mathematical characterization of how the genetic algorithm works that is predictive. Research on this aspect of genetic algorithms has not yet produced definite answers.
13.2.3
Simple genetic algorithms require the natural parameter set of the problem to be coded as a finite-
length string of binary bits 0 and 1. For example, given a set of two-dimensional data ((x, y) data
points), we want to fit a linear curve (straight line) through the data. To get a linear fit, we encode the
parameter set for a line y = q1x + q2, by creating independent bit strings for the two unknown constants q1
and q2 (parameter set describing the line) and then joining them (concatenating the strings). A bit string
is a combination of 0s and 1s, which represents the value of a number in binary form. An n-bit string can
accommodate all integers up to the value 2^n − 1.
For problems that are solved by the genetic algorithm, it is usually known that the parameters manipulated by the algorithm will lie in a certain fixed range, say [qmin, qmax]. A bit string may then be mapped to the value of a parameter, say qi, by the mapping

$$q_i = q_{\min_i} + \frac{b}{2^L - 1}\,(q_{\max_i} - q_{\min_i}) \tag{13.1}$$

where b is the number in decimal form that is being represented in binary form (e.g., 152 may be represented in binary form as 10011000), L is the length of the bit string (i.e., the number of bits in each string), and qmax and qmin are user-specified constants, which depend on the problem in hand.
The length of the bit strings is based on the handling capacity of the computer being used, i.e., how
long a string (strings of each parameter are concatenated to make one long string representing the whole
parameter set) the computer can manipulate at an optimum speed.
Let us consider the data set in Table 13.1. For performing a line (y = q1x + q2) fit, as mentioned earlier,
we encode the parameter set (q1, q2) in the form of binary strings. We take the string length to be 12 bits.
The first six bits encode the parameter q1, and the next six bits encode the parameter q2.
Table 13.1

Data number | x   | y
1           | 1.0 | 1.0
2           | 2.0 | 2.0
3           | 4.0 | 4.0
4           | 6.0 | 6.0
The strings (000000, 000000) and (111111, 111111), represent the points (qmin1, qmin2) and (qmax1, qmax2),
respectively, in the parameter space for the parameter set (q1, q2). Decoding of (000000) and (111111) to
decimal form gives 0 and 63, respectively. However, problem specification may impose different values
of minimum and maximum for qi. We assume that the minimum value to which we would expect q1 or
q2 to go would be –2, and the maximum would be 5.
Therefore,
qmini = –2, and qmaxi = 5
Consider a string (a concatenation of two substrings)
000111 010100 (13.2)
representing a point in the parameter space for the set (q1, q2). The decimal value of the substring (000111) is 7, and that of (010100) is 20. This, however, does not give the value of the parameter set (q1, q2) corresponding to the string in (13.2). The mapping (13.1) gives the values:

$$q_1 = q_{\min_1} + \frac{b}{2^L - 1}\,(q_{\max_1} - q_{\min_1}) = -2 + \frac{7}{2^6 - 1}\,(5 - (-2)) = -1.22$$

$$q_2 = q_{\min_2} + \frac{b}{2^L - 1}\,(q_{\max_2} - q_{\min_2}) = -2 + \frac{20}{2^6 - 1}\,(5 - (-2)) = 0.22$$
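A minimal sketch of this decoding follows; it reproduces the values computed above for the string (13.2).

```python
def decode(bits, q_min, q_max):
    """Map a binary substring to a parameter value, Eqn (13.1)."""
    b = int(bits, 2)                    # decimal value of the binary substring
    L = len(bits)
    return q_min + b * (q_max - q_min) / (2 ** L - 1)

# The 12-bit chromosome (13.2): the first 6 bits encode q1, the next 6 encode q2
chromosome = "000111010100"
q1 = decode(chromosome[:6], -2.0, 5.0)
q2 = decode(chromosome[6:], -2.0, 5.0)
print(round(q1, 2), round(q2, 2))       # -1.22  0.22
```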
A fitness function takes a chromosome (binary string) as an input, and returns a number that is a measure of the chromosome's performance on the problem to be solved. The fitness function plays the same role in GAs as the environment plays in natural evolution. The interaction of an individual with its environment provides a measure of its fitness to reproduce. Similarly, the interaction of a chromosome with a fitness function provides a measure of fitness that the GA uses when carrying out reproduction. The genetic algorithm is a maximization routine; the fitness function must be a non-negative figure of merit.
It is often necessary to map the underlying natural objective function to a fitness function form through one or more mappings. If the optimization problem is to minimize a cost function J̄(p), where p denotes the parameter set, then the following cost-to-fitness transformation may be used:

$$J(\mathbf{p}) = \frac{1}{\bar{J}(\mathbf{p}) + \varepsilon} \tag{13.3}$$

where ε is a small positive number. Maximization of J is achieved by minimization of J̄; so the desired effect is obtained.
Another way to define the fitness function is to let

$$J(\mathbf{p}(k)) = -\bar{J}(\mathbf{p}(k)) + \max_{\mathbf{p}(k)}\big\{\bar{J}(\mathbf{p}(k))\big\} \tag{13.4}$$

The minus sign in front of the J̄(p(k)) term turns the minimization problem into a maximization problem, and the max term is needed to shift the function up, so that J(p(k)) is always positive; k is the iteration index.
A fitness function can be any nonlinear, nondifferentiable, discontinuous, positive function because the
algorithm only needs a fitness value assigned to each string.
For the problem in hand (fit a line through a given data set), let us choose a fitness function. Using the
decoded values of q1 and q2 of a chromosome, and the four data values of x given in Table 13.1, calculate

ŷ(p) = q1 x(p) + q2;   p = 1, 2, 3, 4

These computed values of ŷ(p) are compared with the correct values y(p), given in Table 13.1, and
square of errors in estimating the y’s is calculated for each string. The summation of the square of errors
is subtracted from a large number (400 in this problem) to convert the problem into a maximization
problem:
J(p) = 400 – Σ_{p=1}^{4} ( ŷ(p) – y(p) )² ;   p = [q1 q2]    (13.5)
The fitness value of the string (13.2) is calculated as follows:
q1 = –1.22, q2 = 0.22
For x = 1.0, ŷ(1) = q1x + q2 = –1.00
For x = 2.0, ŷ(2) = –2.22
For x = 4.0, ŷ(3) = –4.66
For x = 6.0, ŷ(4) = –7.10

J(p) = 400 – Σ_{p=1}^{4} ( ŷ(p) – y(p) )² = 400 – 268.414 = 131.586
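This computation is easy to mechanize. A minimal sketch follows (illustrative; decode() is the helper sketched above):

DATA = [(1.0, 1.0), (2.0, 2.0), (4.0, 4.0), (6.0, 6.0)]   # Table 13.1

def fitness(chromosome):
    q1, q2 = decode(chromosome, 6, -2.0, 5.0)
    sse = sum((q1 * x + q2 - y) ** 2 for x, y in DATA)    # sum of squared errors
    return 400.0 - sse                                    # Eqn. (13.5)

print(fitness("000111010100"))   # -> about 131.16; the text's 131.586 uses
                                 #    the rounded values q1 = -1.22, q2 = 0.22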
The basic element processed by a GA is the string formed by concatenating substrings, each of which is
a binary coding of a parameter of the search space. If there are N decision variables in an optimization
problem, and each decision variable is encoded as an n-digit binary number, then a chromosome is a
string of n × N binary digits. We start with a randomly selected initial population of such chromosomes;
each chromosome in the population represents a point in the search space, and hence, a possible solution
to the problem. Each string is then decoded to obtain its fitness value, which determines the probability
of the chromosome being acted on by genetic operators. The population then evolves, and a new
generation is created through the application of genetic operators (the total number of strings included
in a population is kept unchanged throughout generations, for computational economy and efficiency).
The new generation is expected to perform better than the previous generation (better fitness values).
The new set of strings is again decoded and evaluated, and another generation is created using the basic
genetic operators. This process is continued until convergence is achieved within a population.
Let q^j(k) be a single parameter in chromosome j of generation k. Chromosome j is composed of N of
these parameters:

p^j(k) = [q_1^j(k), q_2^j(k), …, q_N^j(k)]    (13.6)

The population of chromosomes in generation k is

P(k) = {p^j(k) | j = 1, 2, ..., S}    (13.7)
where S represents the number of chromosomes in the population. We want to pick S to be big enough, so
that the population elements can cover the search space. However, we do not want S to be too big, since
this increases the number of computations we have to perform.
For the problem in hand, Table 13.2 gives an initial population of 4 strings, the corresponding decoded
values of q 1 and q2, and the fitness value for each string.
Evolution occurs as we go from generation k to the next generation k + 1. Genetic operations of selection,
crossover and mutation are used to produce a new generation.
Basically, according to Darwin, the most qualified (fittest) creatures survive to mate. Fitness is
determined by a creature’s ability to survive predators, pestilence, and other obstacles to adulthood
and subsequent reproduction. In our unabashedly artificial setting, we quantify ‘most qualified’ via a
chromosome’s fitness J(p j(k)). The fitness function is the final arbiter of the string-creature’s life or
death. Selecting strings according to their fitness values means that the strings with a higher value have
a higher probability of contributing one or more offspring in the next generation.
Selection is a process in which good-fit strings in the population are selected to form a mating pool,
which we denote by

M(k) = {m^j(k) | j = 1, 2, ..., S}    (13.8)

The mating pool is the set of chromosomes that are selected for mating. A chromosome is selected for
the mating pool with probability proportional to its fitness value. The probability of selecting
the ith string is

pi = J(p^i(k)) / Σ_{j=1}^{S} J(p^j(k))    (13.9)
For the initial population of four strings in Table 13.2, the probability for selecting each string is calculated
as follows:
p1 = 131.586/(131.586 + 323.784 + 392.41 + 365.00) = 131.586/1212.78 = 0.108
p2 = 323.784/1212.78 = 0.267;   p3 = 392.41/1212.78 = 0.324;   p4 = 365.00/1212.78 = 0.301
To clarify the meaning of the formula and, hence, the selection strategy, Goldberg [146] uses the analogy
of spinning a unit-circumference roulette wheel; the wheel is cut like a pie into S regions, where the ith
region is associated with the ith element of P(k). Each pie-shaped region has a portion of the circumference
that is given by pi in Eqn. (13.9).
The roulette wheel for the problem in hand is shown in Fig. 13.1. String 1 has a selection probability of
0.108; as a result, String 1 is given a 10.8% slice of the roulette wheel. Similarly, String 2 is given a
26.7% slice, String 3 a 32.4% slice, and String 4 a 30.1% slice.

[Fig. 13.1  Roulette wheel for the problem in hand, with slices of 10.8%, 26.7%, 32.4% and 30.1% for the four strings]

You spin the wheel, and if the pointer points at region i when the wheel stops, then you place p^i(k) into
the mating pool M(k). You spin the wheel S times, so that S strings end up in the mating pool. Clearly, the strings which are more fit
will end up with more copies in the mating pool; hence, chromosomes with larger-than-average fitness,
will embody a greater portion of the next generation. At the same time, due to the probabilistic nature
of the selection process, it is possible that some relatively unfit strings may end up in the mating pool.
For the problem in hand, the four spins might choose strings 3, 3, 4 and 2 as parents (String 1 also may
be selected in the process of roulette wheel spin; it is just the luck of the draw. If the roulette wheel were
spun many times, the average results would be closer to the expected values).
We think of crossover as mating in biological terms, which, at the fundamental biological level, involves
the process of combining chromosomes. The crossover operation operates on the mating pool M(k).
First, specify the ‘crossover probability’ pc (usually chosen to be near one, since, when mating occurs in
biological systems, genetic material is swapped between the parents).
The procedure for crossover consists of the following steps:
(i) Randomly pair off the strings in the mating pool M(k). If there are an odd number of strings in
M(k), then, for instance, simply take the last string and pair it off with another string which has
already been paired off.
(ii) Consider a chromosome pair (p^j, p^i) that was formed in Step 1. Generate a random number r ∈ [0, 1].
(a) If r < pc, then cross over p^j and p^i. To cross over these chromosomes, select at random a 'cross
site' and exchange all bits to the right of the cross site of one string, with those of the other.
This process is pictured in Fig. 13.2. In this example, the cross site is position four on the
string, and hence we swap the last eight bits between the two strings. Clearly, the cross site
is a random number between one and the number of bits in the string, minus one.
(b) If r > pc, then the crossover will not take place; hence, we do not modify the strings.
(iii) Repeat Step 2 for each pair of strings that is in M(k).
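A minimal sketch of Step 2 in code follows (illustrative; the function name and the default pc, chosen near one, are ours):

import random

def crossover(parent1, parent2, pc=0.95):
    # With probability pc, pick a random cross site and swap the tails.
    if random.random() < pc:
        site = random.randint(1, len(parent1) - 1)   # between 1 and (length - 1)
        return (parent1[:site] + parent2[site:],
                parent2[:site] + parent1[site:])
    return parent1, parent2                          # no crossover takes place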
For the problem in hand, Table 13.3 shows the power of crossover. The first column shows the four strings
selected for mating pool. We randomly pair off the strings. Suppose that random choice of mates has
selected the first string in the mating pool, to be mated with the fourth. With a cross site 4, the two strings
cross and yield two new strings as shown in Table 13.3. The remaining two strings in the mating pool are
crossed at site 9; the resulting strings are given in the table.
In nature, an offspring inherits genes from both the parents. The crossover process creates children
strings from the parent strings. The children strings thus produced may, or may not, have combinations
of good substrings from the parent strings; but we do not worry about this too much because, if good
strings are not created by crossover, they will not survive long, owing to the selection operator; and if
good strings are created by crossover, there will be more copies of them in the next mating pool generated
by the selection operator.
Besides the fact that crossover helps to model the mating part of the evolution process, why should the
genetic algorithm perform crossover? Basically, the crossover operation perturbs the parameters near
good positions to try to find better solutions to the optimization problem. It tends to help perform a
localized search around the more fit strings (since, on average, the strings in the generation k mating pool
are more fit than the ones in the generation k population).
After mutation, we get a modified mating pool M(k). To form the next generation for the population, we
let
P(k + 1) = M(k) (13.10)
where this M(k) is the one that was formed by selection and modified by crossover and mutation. Then
the above steps repeat, successive generations are produced, and we thereby model evolution (of course,
it is a very crude model).
While the biological evolutionary process continues, perhaps indefinitely, we would like to terminate our
artificial one and find the following:
(1) The population string—say, p*(k)—that best maximizes the fitness function. Notice that, to
determine this, we also need to know the generation number k where the most fit string existed (it
is not necessarily in the last generation). A computer code, implementing the genetic algorithm,
keeps track of the highest J value, and the generation number and string that achieved this value
of J.
(2) The value of the fitness function J(p*(k)).
There is then the question of how to terminate the genetic algorithm. There are many ways to terminate
a genetic algorithm, many of them similar to termination conditions used for conventional optimization
algorithms. To introduce a few of these, let e > 0 be a small number and n1 > 0 and n2 > 0 be integers.
Consider the following options for terminating the GA:
(1) Stop the algorithm after generating the generation P(n1)—that is, after n1 generations.
(2) Stop the algorithm after at least n2 generations have occurred and, at least n1 steps have occurred
when the maximum (or average) value of J for all population members has increased by no more
than e.
(3) Stop the algorithm once J takes on a value above some fixed value.
The above possibilities are easy to implement on a computer but, sometimes, you may want to watch the
parameters evolve and decide yourself when to stop the algorithm.
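A sketch of a GA main loop with such stopping tests is given below (illustrative and simplified: option (2) is checked against the previous generation only, and evolve, supplied by the caller, stands for the selection, crossover and mutation cycle):

def run_ga(population, fitness, evolve, n1=200, n2=20, eps=1e-4):
    best_J, best_string, best_gen = float("-inf"), None, 0
    prev_max = float("-inf")
    for k in range(n1):                     # option (1): at most n1 generations
        J_values = [fitness(s) for s in population]
        J_max = max(J_values)
        if J_max > best_J:                  # remember the fittest string ever seen,
            best_J, best_gen = J_max, k     # and the generation where it occurred
            best_string = population[J_values.index(J_max)]
        if k >= n2 and J_max - prev_max <= eps:
            break                           # option (2), simplified: no improvement
        prev_max = J_max
        population = evolve(population, J_values)
    return best_string, best_J, best_gen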
Example 13.1
Consider the problem of maximizing the function
J(q) = q²    (13.11)
where q is permitted to vary between 0 and 31.
To use a GA, we must first code the decision variables of our problem as some finite length string. For
this problem, we will code the variable q simply as a binary unsigned integer of length 5. With a five-
bit unsigned integer, we can obtain numbers between 0 (00000) and 31 (11111). The fitness function is
simply defined as the function J(q).
To start off, we select an initial population at random. We select a population of size 4. Table 13.4 gives
the selected initial population, decoded q values, and the fitness function values J(q). As an illustration of
the calculations done, let’s take a look at the third string of the initial population, string 01000. Decoding
this string gives q = 8, and the fitness J(q) = 64. Other q and J(q) values are obtained similarly.
The mating pool of the next generation may be selected by spinning a roulette wheel. Alternatively, the
roulette-wheel technique may be implemented using a computer algorithm:
(i) Sum the fitness of all the population members, and call this result the total fitness ΣJ.
(ii) Generate r, a random number between 0 and total fitness.
(iii) Return the first population member whose fitness, added to the fitness of the preceding population
members (running total), is greater than or equal to r.
We generate numbers randomly from the interval [0, 1170] (refer to Table 13.4). For each number, we
choose the first chromosome for which the running total of fitness is greater than, or equal to, the random
number. Four randomly generated numbers are 233, 9, 508, 967; String 1 and String 4 give one copy to
the mating pool, String 2 gives two copies, and String 3 gives no copies.
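The three-step procedure above translates directly into code (a minimal sketch; names are ours):

import random

def select_one(population, J_values):
    total = sum(J_values)                  # step (i): total fitness
    r = random.uniform(0.0, total)         # step (ii): random number in [0, total]
    running = 0.0
    for member, J in zip(population, J_values):
        running += J                       # step (iii): running total of fitness
        if running >= r:
            return member
    return population[-1]                  # guard against floating-point round-off

# The mating pool is formed by S such spins:
# mating_pool = [select_one(population, J_values) for _ in range(len(population))]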
With the above active pool of strings looking for mates, simple crossover proceeds in two steps:
(1) strings are mated randomly, and (2) mated string couples cross over. We take the crossover proba-
bility pc = 1. Looking at Table 13.5, we find that random choice of mates has selected the second string
in the mating pool to be mated with the first. With a crossing site of 4, the two strings 01101, and 11000
cross and yield two new strings, 01100 and 11001. The remaining two strings in the mating pool are
crossed at site 2; the resulting strings are given in Table 13.5.
The last operator, mutation, is performed on a bit-by-bit basis. We assume that the probability of muta-
tion in this test is 0.001. With 20 transferred bit positions, we should expect 20 × 0.001 = 0.02 bits to
undergo mutation during a given generation. Simulation of this process indicates that no bits undergo
mutation for this probability value. As a result, no bit positions are changed from 0 to 1, or vice versa,
during this generation.
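A minimal sketch of bit-by-bit mutation (illustrative; the function name is ours):

import random

def mutate(chromosome, pm=0.001):
    # Flip each bit independently with probability pm. With pm = 0.001 and
    # 20 bits, on average only 0.02 bits mutate per generation, so most
    # generations see no mutation at all, as in the example above.
    return "".join(("1" if bit == "0" else "0") if random.random() < pm else bit
                   for bit in chromosome)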
Following selection, crossover and mutation, the new population is ready to be tested. To do this, we
simply decode the new strings created by the simple GA, and calculate the fitness function values from
the q values thus decoded. The results are shown in Table 13.5. While drawing concrete conclusions from
a single trial of a stochastic process is, at best, a risky business, we start to see how GAs combine high-
performance notions to achieve better performance. Both the maximal and average performance have
improved in the new population. The population average fitness has improved from 293 to 439 in one
generation. The maximum fitness has increased from 576 to 729 during that same period.
13.3 GENETIC-FUZZY SYSTEMS
Fuzzy inference systems (discussed in Chapter 12) are highly nonlinear systems with many input and
output variables. The knowledge base for the design of these systems (refer to Fig.12.29) consists of data
base (membership functions for input and output variables) and rule base. Crucial issues in the design are
the tasks of selecting appropriate membership functions, and the generation of fuzzy rules. These tasks
require experience and expertise. Genetic algorithms may be employed for
- tuning of membership functions, while the rule base remains unchanged;
- generating a rule base, when a set of membership functions for input/output variables remains unchanged; or
- both of these tasks simultaneously.
We will limit our presentation to the first task, i.e., tuning of membership functions while the rule base
remains unchanged.
13.3.1 Tuning of Membership Functions
The effectiveness of the fuzzy system operation could be increased by appropriate tuning of the fuzzy
sets. The GA modifies membership functions by changing the location of characteristic points of their
shapes. The information on characteristic points of the membership functions is coded in chromosomes.
After the appropriate representation of fuzzy sets in the chromosome has been selected, the GA operates
on the population of individuals, i.e., on the population of chromosomes containing coded shapes of
fuzzy membership functions, according to the genetic cycle comprising the following steps:
1. Decoding each of the individuals (chromosomes) of the population, recreating the set of
membership functions, and constructing an appropriate fuzzy system. The rule base is predefined.
2. Evaluating the performance of the fuzzy system on the basis of the difference (error) between
the system’s responses and the desired values. This error defines the individual’s (chromosome’s)
fitness.
3. Selection and application of genetic operators, such as crossover and mutation, and obtaining a
new generation.
Example 13.2
Let us consider the application of GA to the fuzzy model of a manufacturing process, described in
Problem 12.22. The process is characterized by two input variables, x1 and x2, and one output variable,
y. The membership function distributions of the inputs and the output are shown in Fig. P12.22, and the
predefined rule base is given in Table P12.22.
The membership functions have the shape of isosceles triangles, which may be described by means of
characteristic points in the following manner: the vertices of the triangles are fixed, and the base-widths
q1, q2, and q3 (refer to Fig. P12.22) are tunable. The ranges of the tunable parameters are assumed to be
2 ≤ q1 ≤ 4;   5 ≤ q2 ≤ 15;   0.5 ≤ q3 ≤ 1.5    (13.12)
Let us code these fuzzy sets in chromosomes by placing the characteristic parameters one by one, next
to each other (Fig. 13.4).

[Fig. 13.4  Structure of the GA-string: | q1 | q2 | q3 |]

Starting from the leftmost position, L bits are assigned for parameter q1. Each of the parameters q1, q2,
q3 may be assigned a different number of bits, depending on their ranges. However, for simplicity of
presentation, we assign L = 5 in each of the three cases. Thus, the GA-string is 15 bits long.
An initial population for the GA is created at random. We assume that the first chromosome of this
randomly selected population is
10110 01101 11011 (13.13)
The mapping rule (13.1) is used to determine the real values of the parameters qi; i = 1, 2, 3, represented
by this string. The decoded value b of the binary substring 10110 is equal to 22. Therefore, the real value
of q1 is given by (refer to Fig.P12.22, and parameter values (13.12))
q1 = q1min + [b/(2^L – 1)] (q1max – q1min) = 2 + [22/(2^5 – 1)] (4 – 2) = 3.419355
The real values of q2 and q3, corresponding to their respective substrings in (13.13), are 9.193548 and
1.370968, respectively. Figure 13.5 shows the modified membership distributions of input and output
variables.
The GA optimizes the database (tunes the membership functions) with the help of a set of training
examples. Assume that we are given P training examples {x(p), y(p); p = 1,2,…,P}. Further, we take first
training example (p = 1) as {x1 = 10, x2 = 28, y = 3.5}.
For the inputs x1 = 10, x2 = 28, we calculate the predicted value of the output, ŷ , of the fuzzy model when
the model parameters are given by the first chromosome in the initial population. This is done using the
procedure given in Section 12.4. This will give us the absolute value of error in prediction: e1 = | 3.5 – ŷ |.
From this procedure, repeated on all the training examples, we can obtain the average value of absolute
errors in prediction,
ē = (1/P) Σ_{p=1}^{P} e_p
13.4 GENETIC-NEURAL SYSTEMS

Neural-network learning is a search process for the minimization of a performance criterion (error
function). In order to make use of existing learning algorithms, one needs to select many parameters,
such as the number of layers, the number of units in each layer, the manner of their connection, the
activation functions, as well as the learning parameters. The learning process is usually carried out with
the use of error backpropagation for connection weights, and a trial-and-error approach for the other
parameters. These design steps sometimes need quite a lot of time and experience; genetic algorithms
can be helpful here.
Genetic algorithms can be introduced into neural networks at many different levels:
- learning of connection weights, including biases;
- determination of optimal architecture; or
- simultaneous determination of architecture and weights.
We will limit our presentation to the first task, i.e., the use of genetic algorithms for optimization of
neural-network weights.
The gradient-based algorithms for learning the weights of neural networks usually run multiple times
to avoid local minima; also, gradient information must be available. Two of the most important
arguments for the use of genetic algorithms for optimization of neural-network weights are that
- they carry out a global search of the space of weights, avoiding local minima; and
- they are useful for problems where obtaining gradient information is difficult or expensive.
It is important to mention that, when gradient information is readily available, the gradient-based methods
could be more effective, in terms of computation speed, than the GA for weight optimization of neural
networks. In fact, there is no clear winner in terms of the best training algorithm, since the best method
is always problem dependent. A hybrid of a genetic algorithm and a gradient algorithm is an effective
alternative.
With a fixed topology, the weights of a neural network are coded in a chromosome. Each individual of
the population is determined by a total set of neural network weights. The order of placing the weights in
the chromosome is arbitrary, but cannot be changed after the process of learning begins.
The fitness of individuals is evaluated on the basis of a fitness function, defined in terms of the sum of
squared errors, i.e., the differences between the desired signal and the network output signal over the
input data.
The genetic algorithm operates on the population of individuals (chromosomes representing neural
networks with the same architecture but with different weights values) according to the typical genetic
cycle comprising the following steps:
1. Decoding each individual of the current population to the set of weights and constructing the
corresponding neural network with this set of weights; while the network architecture and the
learning rule are predefined.
2. Calculating the total mean squared error of the difference between the desired signals and output
signals for all the input data. This error determines the fitness of the individual (constructed
network).
3. Selection and application of genetic operators, such as crossover and mutation, and obtaining a
new generation.
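A minimal sketch of the fitness evaluation in this scheme follows (illustrative; decode() is the helper sketched earlier in this chapter, and net_forward is a stand-in for the forward pass of the fixed-architecture network, supplied by the caller):

def weight_fitness(chromosome, n_bits, w_min, w_max, net_forward, data):
    weights = decode(chromosome, n_bits, w_min, w_max)   # one weight per substring
    sse = sum((y - net_forward(weights, x)) ** 2 for x, y in data)
    return -sse   # the GA maximizes fitness; minimizing the error = maximizing -sse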
REVIEW EXAMPLES

Review Example 13.1
The proposed fuzzy controller has two input variables (refer to Fig. 12.33):
e(k) – error between set-point and actual temperature of the tank,
v(k) – rate of change of error;
and one output variable:
Du(k) – incremental heat input to the tank.
The universe of discourse for all the three variables may be taken as [–1, 1]. Proposed membership
functions are shown in Fig. 13.6.
The selected rules are as follows:

              Rate of change of error →
Error ↓          N      NZ     P
N                N      N      Z
NZ               N      Z      P
P                Z      P      P
The initial value of the system output is Y0. The initial velocity, and the initial output of the Fuzzy PI
controller are set to zero.
The scaling factors GE, GV and GU′ of the fuzzy controller may be tuned using a genetic algorithm.
Refer to Appendix B for realization of the controller.
[Fig. 13.6  Membership functions: N, NZ, P for e* and v* on [–1, 1] (break points at –0.1 and 0.1); N, Z, P for Δu* on [–1, 1] (break point at 0)]
Similarly, the real values of all the parameters represented by the GA-string (13.14) can be calculated.
The real values are:
{w11, w12, w21, w22, w31, w32, v1, v2, v3} =
{0.709677, 0.354839, 0.419355, 0.870968, 0.548387, 0.096774, 0.806452,
0.967742, 0.935484} (13.15)
The first training pattern of the data {x(p), y(p); p = 1, 2, …, P} is assumed to be {x1 = 0.6, x2 = 0.7,
y = 0.9}. The outputs of the hidden units for an input {x1 = 0.6, x2 = 0.7}, and the connection weights
given by (13.15), are found as follows:
given by (13.15), are found as follows:
a1 = 0.674194; a2 = 0.861291; a3 = 0.396774; z1 = 0.662442; z2 = 0.702930; z3 = 0.597912
The activation value a of the neuron in the output layer is obtained as follows:
a = v1z1 + v2z2 + v3z3 = 1.773820
and the predicted output of the network is
ŷ = (e^a – e^–a)/(e^a + e^–a) = 0.9440
Since the target output for this training pattern is equal to 0.9, the error in prediction is found to be equal
to 0.0440.
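These numbers are easy to verify in code. The sketch below (illustrative) assumes, consistently with the values quoted above, that the hidden units use the logistic function and the output unit uses tanh:

import math

w11, w12 = 0.709677, 0.354839        # hidden-unit weights, from (13.15)
w21, w22 = 0.419355, 0.870968
w31, w32 = 0.548387, 0.096774
v = [0.806452, 0.967742, 0.935484]   # output-unit weights
x1, x2 = 0.6, 0.7                    # first training pattern input

a = [w11*x1 + w12*x2, w21*x1 + w22*x2, w31*x1 + w32*x2]  # 0.674194, 0.861291, 0.396774
z = [1.0 / (1.0 + math.exp(-ai)) for ai in a]            # 0.662442, 0.702930, 0.597912
a_out = sum(vi * zi for vi, zi in zip(v, z))             # 1.773820
y_hat = math.tanh(a_out)                                 # 0.9440
print(abs(0.9 - y_hat))                                  # error in prediction: 0.0440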
A population of GA-strings represents a number of candidate neural networks. When the batch mode of
training is adopted, the whole training data is passed through the neural network represented by a GA
string. This gives Mean Square Error (MSE):
MSE = (1/P) Σ_{p=1}^{P} ( y(p) – ŷ(p) )²    (13.16)
PROBLEMS
13.1 The objective is to use GA to find the value of x that maximizes the function
f(x) = sin(πx/256)
over the range 0 ≤ x ≤ 255, where values of x are restricted to integers. The true solution to the
problem is x = 128, having function value equal to one.
Explain the steps involved in GA. Use a random population of size 8, represent each individual in
the population with an 8-bit binary string (the population thus consists of 8 strings), choose the fitness function

F(x) = f(x) / Σ f(x)

with summation over the 8 strings; take crossover probability pc = 0.75 and zero mutation probability.
Show only one iteration by hand calculation.
13.2 The objective is to minimize the function:
f(x1, x2) = (x1² + x2 – 11)² + (x2² + x1 – 7)²

in the interval 0 ≤ x1, x2 ≤ 6. The true solution to the problem is [3, 2]^T, having a function value
equal to zero.
Take up this problem to explain the steps involved in GA: maximizing the function

F(x1, x2) = 1.0/(1.0 + f(x1, x2));   0 ≤ x1, x2 ≤ 6.
Step 1: Take 10 bits to code each variable. With 10 bits, what is the solution accuracy in the
interval (0, 6)?
Step 2: Take population size equal to total string length, i.e., 20. Create a random population of
strings.
Step 3: Consider the first string of the initial random population. Decode the two substrings and
determine the corresponding parameter values. What is the fitness function value corresponding
to each string? Similarly for other strings, calculate the fitness values.
Step 4: Select good strings in the population to form the mating pool.
Step 5: Perform crossover on random pairs of strings (the crossover probability is 0.8).
Step 6: Perform bitwise mutation with probability 0.05 for every bit.
The resulting population is the new population. This completes one iteration of GA and the
generation count is incremented by 1.
13.3 A fuzzy logic-based expert system is to be developed that will work based on Sugeno's architecture
to predict the output of a process. The Data Base of the fuzzy system is shown in Fig.P13.3; x1
and x2 are two inputs with specified minimum values x1min and x2min respectively. The base-widths
q1 and q2 are assumed to vary in the ranges:
0.8 ≤ q1 ≤ 1.5;   4.0 ≤ q2 ≤ 6.0
There is a maximum of R = 4 feasible rules; the output of the rth rule (r = 1, 2, …, R) is expressed as
follows:

ŷ(r) = a0(r) + a1(r) x1 + a2(r) x2

The parameters a0(r), a1(r), a2(r) are assumed to vary in the range

0.001 ≤ a0(r), a1(r), a2(r) ≤ 1.0
To optimize the performance of the fuzzy system using GA, a set of training examples {x(p), y(p);
p = 1, …, P} is used. A typical GA-string in the population of solutions is of the form:
{q1 q2 a0(1) a1(1)a2(1) a0(2) a1(2) a2(2) a0(3) a1(3) a2(3)a0(4) a1(4) a2(4)}
with 4 binary bits assigned to represent each of the parameters.
Randomly select an initial population of solutions, and determine the deviation in prediction for
the training example {x(1), y(1)} = {x1(1) = 1.1, x2(1) = 6.0, y (1) = 5.0} using the first GA-string.
13.4 A fuzzy logic-based expert system is to be developed that will work based on Mamdani’s
architecture to predict the output of a process. The Data Base of the fuzzy system is shown in
Figs P13.3 and P13.4; x1 and x2 are two inputs with specified minimum values x1min and x2min,
respectively, and y is the output with specified minimum value ymin. The basewidths q1, q2 and q3
of these isosceles triangles are tunable. The ranges of the tunable parameters are assumed to be
0.8 ≤ q1 ≤ 1.5;   4.0 ≤ q2 ≤ 6.0;   0.5 ≤ q3 ≤ 3
[Rule base (partially recovered): A11 → S, M;  A12 → M, L]
To optimize the performance of the fuzzy system using GA, a set of training examples {x(p), y(p);
p = 1,…, P} is used. A typical GA-string in the population of solutions is of the form
{q1 q2 q3}
with 4 binary bits assigned to represent each of the parameters.
Randomly select an initial population of solutions, and determine the deviation in prediction for
the training example {x(1), y(1)} = {x1(1) = 1.1, x2(1) = 6.0, y(1) = 5.0} using the first GA-string.
13.5 Reconsider the neural network shown in Fig. P11.9, modified to include the bias weights: w10, w20
and w30, for the hidden units and bias weight, v0, for the output unit. All the bias weights vary in
the range 0.0 to 1.0.
A binary-coded GA is used to update connection weights including biases. Extend the procedure
given in Review Example 13.2 to this modified network.
Chapter 14
Intelligent Control with Reinforcement Learning
14.1 INTRODUCTION
Reinforcement learning is a machine intelligence approach that emphasizes learning by the individual
from direct interaction with its environment. This contrasts with classical approaches (discussed earlier
in Chapters 11 and 12) to machine learning, which have focused on learning from exemplary supervision
or from expert knowledge of the environment. In this chapter, the coverage of reinforcement learning is
to be regarded as an introduction to the subject; a springboard to advanced studies. The inclusion of the
topic has been motivated by the observation that reinforcement learning control has the potential of solving
many nonlinear control problems.
Reinforcement learning is based on the common sense idea that if an action is followed by a satisfactory
state of affairs, or by an improvement in the state of affairs (as determined in some clearly defined
way), then the tendency to produce that action is strengthened, i.e., reinforced. Extending this idea to
allow action selections to depend on state information introduces aspects of feedback. A reinforcement
learning system is, thus, any system that, through interaction with its environment, improves its
performance by receiving feedback in the form of a scalar reward (or penalty)—a reinforcement signal
that is commensurate with the appropriateness of the response. The learning system is not told which
action to take, as in forms of machine learning discussed earlier in Chapters 11 and 12, but instead must
discover which actions yield the most reward by trying them. In the most interesting and challenging
cases, actions may affect not only the immediate reward but also the next situation, and through that
all subsequent rewards. These two characteristics—trial-and-error search and cumulative reward—
are the two important distinguishing features of reinforcement learning. Although the system’s initial
performance may be poor, with enough interaction with the environment, it will eventually learn an
effective strategy for maximizing cumulative reward.
Reinforcement learning is emerging as an important alternative to classical problem-solving approaches
to intelligent control (Chapters 11 and 12), because it possesses many of the properties for intelligent
control that classical approaches lack. Much of the classical intelligent control is an empirical science—
the asymptotic effectiveness of the learning systems has been validated only empirically. Recent advances
relating reinforcement learning to dynamic programming are providing a solid mathematical foundation;
mathematical results that guarantee optimality in the limit, for an important class of reinforcement
learning systems, are now available [147].
Reinforcement learning systems do not depend upon models of the environment, because they learn
through trial-and-error experience with the environment. However, when a model is available, they can
exploit this knowledge to determine a good initial control policy; this results in faster convergence to the
optimal policy.
The use of neural networks (or other associative memory structures such as fuzzy systems) makes
reinforcement learning tractable on the realistic control problems with large state spaces. A neural
network has the key feature of generalization; experience with a limited subset of state space is usefully
generalized to produce a good approximation over a much larger subset. Intelligent control architectures
incorporating aspects of both the reinforcement learning and the supervised learning, generalize from
previously experienced states to ones that have never been experimented with. Empirical results based
on such architectures, have shown robust, efficient learning on a variety of nonlinear control problems.
Beyond the agent and the environment, one can identify four main sub-elements of a reinforcement
learning system—a policy, a reward function, a value function, and a horizon of decisions. A policy defines
the learning agent’s way of behaving at a given time. Roughly speaking, a policy is a mapping from
perceived states of the environment to actions to be taken when in those states. A reward function defines
immediate reward for an action responsible for the current state of the environment. Roughly speaking,
it maps states of the environment to a scalar, a reward, indicating the intrinsic desirability of the state.
Whereas a reward function indicates what is good in the immediate sense, a value function specifies
what is good in the long run. Roughly speaking, the value of a state is the cumulative reward an agent
can expect to accumulate over the future as a result of sequence of its actions, starting from that state.
Whereas rewards determine the immediate, intrinsic desirability of environmental states, values indicate
the long-term desirability of states after taking into account the states that are likely to follow, and the
rewards available in those states. An agent’s sole objective is to maximize the cumulative reward (value)
it receives in the long run.
The value function depends on whether there is a finite horizon or an infinite horizon for decision making.
A finite horizon means that there is a fixed time after which nothing matters—the game is over, so to
speak. With a finite horizon, the optimal action for a given state could change over time. We say that the
optimal policy for a finite horizon is nonstationary.
With no fixed time limit, on the other hand, there is no reason to behave differently in the same state at
different times. Hence, the optimal action depends only on the current state, and the optimal policy is
stationary. Policies for the infinite-horizon case are, therefore, simpler than those for the finite-horizon case.
Note that ‘infinite horizon’ does not necessarily mean that all state sequences are infinite; it just means
that there is no fixed deadline. If the environment contains terminal states and if the agent is guaranteed
to get to one eventually, then we will never come across infinite sequences.
Our focus in this chapter is on reinforcement learning solutions to control problems. The controller
(agent) has a set of sensors to observe the state of the controlled process (environment); the learning task
is to learn a control strategy (policy) for choosing control signals (actions) that achieve minimization of
a performance measure (maximization of cumulative reward).
In control problems, we minimize a performance measure; frequently referred to as cost function. The
reinforcement learning control solution seeks to minimize the long-term accumulated cost the controller
incurs over the task time. The general reinforcement learning solution seeks to maximize the long-
term accumulated reward the agent receives over the task time. Since in control problems, reference of
optimality is a cost function, we assign cost to the reward structure of the reinforcement learning process;
the reinforcement learning solution then seeks to minimize the long-term accumulated cost the agent
incurs over the task time. The value function of the reinforcement learning process is accordingly defined
with respect to cost structure.
The stabilizing control problems we have been discussing in this book, are all infinite-horizon problems.
Here also, we will limit our discussion to this class of control problems.
Some reinforcement learning systems have one more element—a model of the environment. This is
something that mimics the behavior of the environment. For example, given a state and action, the model
might predict the resultant next state and next cost.
Early reinforcement learning systems were explicitly model-free, trial-and-error learners. Nevertheless, it
gradually became clear that reinforcement learning methods are closely related to dynamic programming
methods, which do use models. Adaptive dynamic programming has emerged as a solution method for
reinforcement learning problems wherein the agent learns the models through trial-and-error interaction
with the environment, and then uses these models in dynamic programming methods.
We have used the vector x to represent the state of a physical system: x = [x1 x2 ... xn]T, where xi:
i = 1, ..., n, are state variables of the system. State x, a vector of real numbers, is a point in the state
space. In reinforcement learning (RL) framework, we will represent the state by ‘s’; thus s is a point in
the n-dimensional state space. Similarly, the vector u has been used for control. We will represent this by
the action ‘a’ in our RL framework.
If the environment is deterministic, then an agent's action a will transit the state of the environment from
s to s′ deterministically; there is no probability involved. In fact, the transfer function models or state
variable models, used in the book so far for plants/controlled processes, are approximate models based
on the assumption of deterministic behavior.
If the environment is stochastic, then the transition of s to s′ under action a will be different each time
action a is applied in state s. This is captured by a probabilistic model. If the environment is deterministic,
but uncertain, then also the transition of s to s′ under action a will not be unique each time action a is
applied in state s. Since uncertainty in environments is the major issue leading to complexity of the
control problem, we will be concerned with probabilistic models.
(1) A specification of the outcome probabilities for each admissible action in each possible state is
called the transition model.
P(s, a, s′): probability of reaching state s′ if action a is applied in state s.
(2) In control problems, the transitions are Markovian—the probability of reaching state s′ from s
depends only on s, and not on the history of earlier states.
(3) In each state s, the agent receives a reinforcement r(s), which measures the immediate cost of the
action.
(4) The specification of a sequential decision problem for a fully observable environment, with a
Markovian transition model and a cost for each state, is called a Markov Decision Process (MDP).
(5) The basis of our reinforcement learning framework is Markov decision processes.
14.3.1 Temporal Difference Learning
If one had to identify an idea as central and novel to reinforcement learning, it would undoubtedly
be Temporal Difference (TD) learning. Temporal difference learning can be thought of as a version
of dynamic programming, with the difference that TD methods can learn on-line in real-time, from
raw experience without a model of the environment’s dynamics. TD methods do not assume complete
knowledge of the environment; they require only experience—sample sequences of states, actions and
costs from actual interaction with the environment. Learning from actual experience is striking because
it requires no prior knowledge of the environment’s dynamics, yet can obtain optimal behavior.
The principal advantage of dynamic programming is that, if a problem can be specified in terms of a
Markov decision process, then it can be analyzed and an optimal policy obtained a priori. The two
principal disadvantages of dynamic programming are as follows: (1) for many tasks, it is difficult to
specify the dynamic model; and (2) because dynamic programming determines a fixed control policy a
priori, it does not provide a mechanism for adapting the policy to compensate for disturbances and/or
modeling errors (nonstationary dynamics).
Reinforcement learning has complementary advantages, as follows: (1) it does not require a prior
dynamical model of any kind, but learns from experience gained directly from the environment; and (2)
to some degree, it can track the dynamics of nonstationary systems. The principal disadvantage of
reinforcement learning is that, in general, many trials (repeated experiences) are required to learn an
optimal control strategy, especially if the system starts with a poor initial policy.
This suggests that the respective weaknesses of these two approaches may be overcome by integrating
them. That is, if a complete, possibly inaccurate, model of the task is available a priori, model-based
methods (including dynamic programming) can be used to develop initial policy for a reinforcement
learning system. A reasonable initial policy can substantially improve the system’s initial performance
and reduce the time required to reach an acceptable level of performance. Conversely, adding an adaptive
reinforcement learning component to an otherwise model-based fixed controller, can compensate for an
inaccurate model.
In this chapter, we limit our discussion to naive reinforcement learning systems.
14.3.2 Adaptive Dynamic Programming
An adaptive dynamic programming agent works by learning the transition model of the environment
through interaction with the environment. It then plugs the transition model and the observed costs in the
dynamic programming algorithm. Adaptive dynamic programming is, thus, an on-line learning system.
The process of learning the model itself is easy when the environment is fully observable. In the simplest
case, we can represent the transition model as a table of probabilities. We keep track of how often each
action outcome occurs, and estimate the transition probability P(s, a, s′) from the frequency with which
state s′ is reached when executing action a in state s.
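A minimal sketch of this frequency-counting estimate follows (illustrative; names are ours):

from collections import defaultdict

counts = defaultdict(int)   # (s, a, s') -> number of observed transitions
totals = defaultdict(int)   # (s, a)     -> number of times a was taken in s

def record(s, a, s_next):
    counts[(s, a, s_next)] += 1
    totals[(s, a)] += 1

def P_hat(s, a, s_next):
    # Relative frequency estimate of P(s, a, s'); 0 if (s, a) was never tried.
    return counts[(s, a, s_next)] / totals[(s, a)] if totals[(s, a)] else 0.0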
Our focus in this chapter is on temporal difference learning. We begin with an introduction to dynamic
programming, and then using this platform, develop temporal difference methods of learning.
State →                  1       2          3        4       5       6
Pend. angle q (deg)      < –6    –6 to –1   –1 to 0  0 to 1  1 to 6  > 6

Action →                 1     2    3    4    5    6    7
Apply force u (newtons)  –10   –6   –2   0    2    6    10
Define x1 = q, x2 = q̇, x3 = z, x4 = ż. Vector x = [x1 x2 x3 x4]^T defines a point in the state space; the distinct
point corresponding to x is the distinct state s of the environment (pendulum on a cart). Therefore, there
are 6 × 3 × 3 × 3 = 162 distinct states: s(1), s(2), ..., s(162), of our environment. The finite set of states, in our
learning problem, is thus given as
S : {s(1), s(2),..., s(162)}
The action set size is seven: a(1), a(2),..., a(7). The finite set of available actions in our learning problem,
is thus given as
A : {a(1), a(2),..., a(7)}
trajectory is not fixed. A typical state sequence in a trajectory may be expressed as {s0, s1, s2,...} where,
each st; t = 0,1,2,3,..., could be any of the possible environment states s(1),..., s(N).
Given the initial state st and the agent's policy π, the agent selects an action π(st), and the result of this
action is the next state st+1. The state transition model, P(s, a, s′), gives the probability that the next state
st+1 will be s′ ∈ S, given that the current state st = s and the action at = a. Since each state s(1), s(2), ..., s(N)
is a candidate to be the next state s′, the environment simulator gives a set of probabilities: P(st, at, s(1)), ...,
P(st, at, s(N)); their sum equals one. Thus, a given policy π generates not one state sequence (environment
trajectory), but a whole range of possible state sequences, each with a specific probability determined by
the transition model of the environment.
The quality of a policy is, therefore, measured by the expected value (cumulative cost) of a state, where
the expectation is taken over all possible state sequences that could occur. For MDPs, we can define the
'value of a state under policy π' formally as

V^π(s) = E_π { Σ_{t=0}^{∞} γ^t r(s_t) }    (14.1)

where E_π{·} denotes the expected value given that the agent follows policy π. This is a discounted cost
value function; the discount factor γ is a number between 0 and 1 (0 ≤ γ < 1).
Note that

Σ_{t=0}^{∞} γ^t r(s_t) ≤ Σ_{t=0}^{∞} γ^t r_max = r_max/(1 – γ)
Thus, the infinite sequence converges to a finite limit when costs are bounded and γ < 1.
The discount factor γ determines the relative value of delayed versus immediate costs. In particular,
costs incurred t steps into the future are discounted exponentially by a factor of γ^t. Note that if we set
γ = 0, only the immediate cost is considered. If we set γ closer to 1, future costs are given greater emphasis
relative to the immediate cost. The meaning of γ substantially less than 1 is that future costs matter to
us less than the costs paid at the present time. The discount factor is an important design parameter in a
reinforcement learning scheme.
The final step is to show how to choose between policies. An optimal policy is a policy that yields the
minimum expected value (cumulative cost). We use π* to denote an optimal policy:

π* = arg min_π E_π [ Σ_{t=0}^{∞} γ^t r(s_t) ]    (14.2)

The 'arg min' notation denotes the value of π at which E_π[·] is minimized. π*(s) is, thus, a solution
(obtained off-line) to the sequential decision problem. Given π*, the agent decides what to do in real
time by observing the current state s and executing the action π*(s). This is the simplest kind of agent,
selecting fixed actions on the basis of the current state. A reinforcement learning agent, as we shall
see shortly, is adaptive; it improves its policy on the basis of on-line, real-time interactions with the
environment.
In the following we describe algorithms for finding optimal policies of the dynamic programming agent.
14.4.1 The Principle of Optimality
The dynamic programming technique rests on a very simple idea known as the principle of optimality
[105].
An optimal policy has the property that whatever the initial state and initial decisions are, the remaining
decisions must constitute an optimal policy with regard to the state resulting from the previous decisions.
Consider a state sequence (environment trajectory) resulting from the execution of optimal policy π*:
{s0, s1, s2, ...}, where each st; t = 0, 1, 2, ..., could be any of the possible environment states s(1), s(2), ..., s(N).
The index t represents stages of decisions in the sequential decision problem.
The dynamic programming algorithm expresses a generalization of the principle of optimality. It states
that the optimal value of a state is the immediate cost for that state plus the expected discounted optimal
value of the next state, assuming that the agent chooses the optimal action. That is, the optimal value of
a state is given by
V*(s) = r(s) + γ min_a Σ_{s′} P(s, a, s′) V*(s′)    (14.3)
This is one form of the Bellman optimality equation for V*. For finite MDPs, this equation has a unique
solution.
The Bellman optimality equation is actually a system of N simultaneous nonlinear equations in N
unknowns, where N is the number of possible environment states. If the dynamics of the environment
(P(s, a, s′)) and the immediate costs underlying the decision process (r(s)) are known, then, in principle,
one can solve this system of equations for V*, using any one of the variety of methods for solving systems
of nonlinear equations. Once one has V*, it is relatively easy to determine an optimal policy:

π*(s) = arg min_a Σ_{s′} P(s, a, s′) V*(s′)    (14.4)
Note that V*(s) = V^{π*}(s):

V*(s) = min_π V^π(s),  for all s ∈ S    (14.5)
The solution of the Bellman optimality equation (14.3) directly gives the values V* of states with respect
to the optimal policy π*. From this solution, one can obtain the optimal policy using Eqn. (14.4).
Equation (14.5) suggests an alternative route to finding the optimal policy π*. It uses the Bellman
equation for V^π, given below:

V^π(s) = r(s) + γ Σ_{s′} P(s, π(s), s′) V^π(s′)    (14.6)
Note that this equation is a system of N simultaneous linear equations in N unknowns, where N is
the number of possible environment states (Eqns (14.6) are the same as Eqns (14.3) with the 'min'
operator removed). We can solve these equations for V^π(s) by standard linear algebra methods.
Given an initial policy π0, one can solve (14.6) for V^{π0}(s). Once we have V^{π0}, we can obtain an
improved policy π1, using the strategy given by Eqn. (14.4):

π1(s) = arg min_a Σ_{s′} P(s, a, s′) V^{π0}(s′)    (14.7)
Each policy is guaranteed to be a strict improvement over the previous one (unless it is already optimal).
Because a finite MDP has only a finite number of policies, this process must converge to an optimal
policy π* and the optimal value function V* in a finite number of iterations.
Thus, given a complete and accurate model of the MDP, in the form of knowledge of the state transition
probabilities P(s, a, s′) and immediate costs r(s) for all states s ∈ S and all actions a ∈ A, it is possible—at
least in principle—to solve the decision problem off-line. There is one problem: the Bellman equations
(14.3) are nonlinear because of the 'min' operator; solution of nonlinear equations is problematic. The
Bellman equations (14.6) are linear and, therefore, can be solved relatively quickly. For large state spaces,
time might be prohibitive even in this relatively simpler case.
In the following, we describe basic forms of two dynamic programming algorithms: value iteration and
policy iteration—a step towards answering the computational complexity problems of solving Bellman
equations.
14.4.2 Value Iteration
As used for solving Markov decision problems, value iteration is a successive approximation procedure
for solving the Bellman optimality equation (14.3), whose basic operation is ‘backing up’ estimates of
optimal state values. We can solve Eqn. (14.3) using a simple iterative algorithm:
V_{l+1}(s) ← r(s) + γ min_a Σ_{s′} P(s, a, s′) V_l(s′)    (14.8)
The algorithm begins with arbitrary guess V0(s) for each s Œ S. The sequence of V1(s),V2(s),…, is then
obtained. The algorithm converges to the optimal values V*(s) as the number of iterations l approaches
infinity (We use the index l for the stages of iteration algorithm, whereas we have used earlier the index t
to denote the stages of decisions in the sequential decision problem). In practice, we stop once the value
function changes by a small amount. Then a greedy policy (choosing the action with the lowest estimated
cost) with respect to the optimal set of values is obtained as an optimal policy.
The computation (14.8) is done off-line, i.e., before the real system starts operating. An optimal policy,
that is, an optimal choice of a Œ A for each s Œ S, is computed either simultaneously with V*, or in real
time, using Eqn.(14.4).
A sequential implementation of iteration algorithm (14.8) requires temporary storage locations so that all
the iteration-(l + 1) values are computed based on the iteration-l values. The optimal values V* are then
stored in a lookup table. In addition to a problem of the memory needed for large tables, there is another
problem of the time needed to accurately fill them. If there are N states, and M is the largest number of
admissible actions for any state, then each iteration, which consists of backing up the value of each state
exactly once, requires about M × N² operations. For the large state sets, typical in many control problems,
it is difficult to try to complete even one iteration, let alone repeat the process until it converges to V*
(curse of dimensionality).
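A minimal sketch of synchronous value iteration, Eqn. (14.8), for a small finite MDP follows (illustrative; the transition model is assumed to be a nested table P[s][a][s′]):

def value_iteration(states, actions, P, r, gamma=0.95, tol=1e-6):
    V = {s: 0.0 for s in states}            # arbitrary initial guess V0
    while True:
        V_new = {}                          # temporary storage for iteration l+1
        for s in states:                    # full backup: sweep every state
            V_new[s] = r[s] + gamma * min(
                sum(P[s][a][s2] * V[s2] for s2 in states) for a in actions)
        change = max(abs(V_new[s] - V[s]) for s in states)
        V = V_new
        if change < tol:                    # stop when values change by a small amount
            return V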
The iteration of the synchronous DP algorithm defined in (14.8) backs up the value of every state once to
produce the new approximate value function. We call this kind of operation a full backup; it is based
on all possible next states, rather than on a sample next state. We think of the backups as being done in a
sweep through the state space.
Asynchronous DP algorithms are not organized in terms of systematic sweeps of the entire set of states in
each iteration. These algorithms back up the values of the states in any order whatsoever, using whatever
values of other states happen to be available. The values of some states may be backed up several
times before the values of others are backed up once. To converge correctly, however, an asynchronous
algorithm must continue to back up the values of all the states.
Of course, avoiding sweeps does not necessarily mean that we can get away with less computation.
It just means that our algorithm does not need to get locked into any hopelessly long sweep before it
can make progress. We can try to take advantage of this flexibility by selecting the states to which we
apply backups so as to improve the algorithm’s rate of progress. We can try to order the backups to
let value information propagate from state to state in an efficient way. Some states may not need their
values backed up as often as other states. Some state orderings produce faster convergence than others,
depending on the problem.
14.4.3 Policy Iteration
A policy iteration algorithm operates by alternating between two steps (the algorithm begins with an
arbitrary initial policy π0).
(i) Policy evaluation step
Given the current policy πk, we perform a policy evaluation step that computes V^{πk}(s) for all s ∈ S, as
the solution of the linear system of equations (Bellman equation)

V^{πk}(s) = r(s) + γ Σ_{s′} P(s, πk(s), s′) V^{πk}(s′)    (14.9)
The policy evaluation step may be viewed as the work of a critic, who appraises the current policy
πk, i.e., generates an estimate of the value function V^{πk} from states and reinforcement supplied by the
environment as inputs. The policy improvement step is viewed as the work of an actor, who takes into
account the latest evaluation of the critic, i.e., the estimate of the value function, and acts out the improved
policy πk+1.
The algorithm we have described so far requires updating the values/policy for all states at once. It turns
out that this is not strictly necessary. In fact, on each iteration, we can pick any subset of states and apply
updating to that subset. This algorithm is called asynchronous policy iteration. Given certain conditions
on the initial policy and value function, asynchronous policy iteration is guaranteed to converge to an
optimal policy. The freedom to choose any states to work on means that we can design much more
efficient heuristic algorithms—for example, algorithms that concentrate on updating the values of states
that are likely to be reached by a good policy.
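A minimal sketch of the basic (synchronous) policy iteration cycle follows (illustrative; policy evaluation is done iteratively here rather than by direct solution of the linear equations, a common and equivalent simplification):

def policy_iteration(states, actions, P, r, gamma=0.95, tol=1e-6):
    pi = {s: actions[0] for s in states}    # arbitrary initial policy pi0
    V = {s: 0.0 for s in states}
    while True:
        # Policy evaluation (the critic): V for the current policy, Eqn. (14.9)
        while True:
            delta = 0.0
            for s in states:
                v = r[s] + gamma * sum(P[s][pi[s]][s2] * V[s2] for s2 in states)
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # Policy improvement (the actor): greedy policy w.r.t. V, as in Eqn. (14.4)
        stable = True
        for s in states:
            best = min(actions,
                       key=lambda a: sum(P[s][a][s2] * V[s2] for s2 in states))
            if best != pi[s]:
                pi[s], stable = best, False
        if stable:                          # no state changed its action: optimal
            return pi, V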
[Fig. 14.2]
The environment is stochastic in nature—each time the action at is applied in the state st, the succeeding
state st+1 could be any of the possible states in S: s(1), s(2), …, s(N). For the stochastic environment, the
agent, however, explores in the space of deterministic policies (a deterministic optimal policy is known
to exist for a Markov decision process). Therefore, for each observed environment state st, the agent's
policy suggests a deterministic action at = π(st).
The task of the agent is to learn a policy π : S → A that produces the lowest possible cumulative cost over
time (greedy policy). To state this requirement more precisely, the agent's task is to learn a policy π that
minimizes the value V^π given by (14.1).
Reinforcement learning methods specify how the agent updates its policy as a result of its experience.
The agent could use alternative methods for gaining experience and using it for improvement of its
policy. In the so-called Monte Carlo method, the agent executes a set of trials in the environment using
its current policy π. In each trial, the agent starts in state s(i) (any point s(1), …, s(N) of the state space) and
experiences a sequence of state transitions until it reaches a terminal state. In the infinite-horizon discounted
cost problems under consideration, the terminal state corresponds to the equilibrium state. A learning episode
(trial) is infinitely long, because the learning is continual. For the purpose of viewing the infinite-horizon
problem in terms of episodic learning, we may define a stability region around the equilibrium point,
and say that the environment has terminated at a success state if the state continues to be in the stability
region for a prespecified time period (in real-time control, any uncertainty (internal or external) will
pull the system out of the stability region, and a new learning episode begins). Failure states (situations
corresponding to 'the game is over and it is lost'), if any, are also terminal states of the learning process.
In a learning episode, the agent's percepts supply both the current state and the cost incurred in that state.
Typical state sequences (environment trajectories) resulting from trials might look like this:

(1) s(1)_{r(1)} → s(5)_{r(5)} → s(9)_{r(9)} → s(5)_{r(5)} → s(9)_{r(9)} → s(10)_{r(10)} → s(11)_{r(11)} → SUCCESS
(2) …
(3) …

Note that each state percept is subscripted with the cost incurred. The objective is to use the information about costs to learn the expected value V^π(s) associated with each state. The value is defined to be the expected sum of (discounted) costs incurred if policy π is followed (refer to Eqn. (14.2)).
When a nonterminal state is visited, its value is estimated based on what happens after that visit. Thus,
the value of a state is the expected total cost from that state onward, and each trial (episode) provides
samples of the value for each state visited. For example, the first trial in the set of three given above provides one sample of value for state s(1):

(i) r(1) + γr(5) + γ²r(9) + γ³r(5) + γ⁴r(9) + γ⁵r(10) + γ⁶r(11) + γ⁷r(SUCCESS);

two samples of values for state s(5):

(i) r(5) + γr(9) + γ²r(5) + γ³r(9) + γ⁴r(10) + γ⁵r(11) + γ⁶r(SUCCESS);
(ii) r(5) + γr(9) + γ²r(10) + γ³r(11) + γ⁴r(SUCCESS);

two samples of values for state s(9):

(i) r(9) + γr(5) + γ²r(9) + γ³r(10) + γ⁴r(11) + γ⁵r(SUCCESS);
(ii) r(9) + γr(10) + γ²r(11) + γ³r(SUCCESS);

and so on.
Thus, at the end of each episode, the algorithm calculates the observed total cost for each state visited,
and updates the estimated value for that state accordingly just by keeping a running average for each state
in a table. In the limit of infinitely many trials, the sample average will converge to the true expectation
of Eqn. (14.2).
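The running-average bookkeeping just described fits in a few lines. The sketch below assumes each completed episode is available as a list of (state, cost) pairs ending at a terminal state; the every-visit variant and the table names V and counts are illustrative choices, not fixed by the text.

```python
def mc_update(V, counts, episode, gamma):
    """Every-visit Monte Carlo update of the value table V from one episode,
    where `episode` is a list of (state, cost) pairs ending at a terminal state."""
    G = 0.0
    for state, cost in reversed(episode):   # backward pass accumulates cost-to-go
        G = cost + gamma * G
        counts[state] += 1
        V[state] += (G - V[state]) / counts[state]   # running average of returns
    return V, counts
```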
The Monte Carlo method differs from dynamic programming in the following two ways:
(i) First, it operates on sample experience, and thus can be used for direct learning without a model.
(ii) Second, it does not build its value estimates for a state on the basis of estimates of the possible
successor states (refer to Eqn. (14.6)); it must wait until the end of the trial to determine the update
in value estimates of states. In dynamic programming methods, the value of each state equals its
own cost plus the discounted expected value of its successor states.
14.5 Temporal Difference Learning
The Temporal Difference (TD) learning methods combine the sampling of Monte Carlo with the value-estimation scheme of dynamic programming. TD methods update value estimates based on the cost of a one-step real-time transition and the learned estimate of the successor state, without waiting for the final outcome. Typically, when a transition occurs from state s to state s′, we apply the following update to V^π(s):

V^π(s) ← V^π(s) + η (r(s) + γ V^π(s′) − V^π(s))   (14.12)

where η is the learning parameter.
Because the update uses the difference in values between successive states, it is called the temporal-difference or TD equation. TD methods have an advantage over dynamic programming methods in that they do not require a model of the environment. An advantage of TD methods over Monte Carlo methods is that they are naturally implemented in an online, fully incremental fashion. With Monte Carlo methods, one must wait until the end of a sequence, because only then is the value known, whereas with TD methods, one need only wait one time step.
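Equation (14.12) translates directly into a one-line table update applied after each observed transition; in this sketch V is a table of state-value estimates and eta the learning parameter.

```python
def td0_update(V, s, cost, s_next, gamma, eta):
    """TD(0) update (Eqn. (14.12)) for the observed transition s -> s_next."""
    V[s] += eta * (cost + gamma * V[s_next] - V[s])
    return V
```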
Note that the update (14.12) is based on one state transition that just happens with a certain probability,
whereas in (14.6), the value function is updated for all states simultaneously using all possible next
states, weighted by their probabilities. This difference disappears when the effects of TD adjustments
are averaged over a large number of transitions. The interaction with the environment can be repeated
several times by restarting the experiment after success/failure state is reached. For one particular state,
the next state and received reinforcement can be different each time the state is visited. Because the
frequency of each successor in the set of transitions is approximately proportional to its probability, TD
can be viewed as a crude but efficient approximation to dynamic programming.
The TD equation (14.12) is, in fact, approximation of policy-evaluation step of policy iteration algorithm
of dynamic programming (refer to previous section for a recall), where the agent’s policy is fixed and the
task is to learn the values of states. This, as we have seen, can be done without a model of the system.
However, improving the policy using (14.11) still requires the model.
One of the most important breakthroughs in reinforcement learning was the development of a model-free TD control algorithm known as Q-learning.
14.6 Q-Learning
In addition to recognizing the intrinsic relationship between reinforcement learning and dynamic programming, Watkins [148,150] made an important contribution to reinforcement learning by suggesting a new algorithm called Q-learning. The significance of Q-learning is that, when applied to a Markov decision process, it can be shown to converge to the optimal policy under appropriate conditions. Q-learning was the first reinforcement learning algorithm shown to converge to the optimal policy for decision problems involving cumulative cost.
The Q-learning method learns an action-value representation instead of learning the value function. We will use the notation Q(s, a) to denote the value of doing action a in state s. The Q-function is directly related to the value function as follows:

V(s) = min_a Q(s, a)   (14.13)
Q-functions may seem like just another way of storing value information, but they have a very important
property: a TD agent that learns a Q-function does not need a model for either learning or action
selection. For this reason, Q-learning is called a model-free method.
The connections between Q-learning and dynamic programming are strong: Q-learning is motivated
directly by value-iteration, and its convergence proof is based on a generalization of the convergence
proof for value-iteration.
We can use the value-iteration algorithm (14.8) directly as an update equation for an iteration process
that calculates exact Q-values, given an estimated model:
Q_{l+1}(s, a) ← r(s) + γ Σ_{s′} P(s, a, s′) [min_{a′} Q_l(s′, a′)]   (14.14)
Q(s, a) ← Q(s, a) + η (r(s) + γ [min_{a′} Q(s′, a′)] − Q(s, a))   (14.16)

Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + η_t [r_t + γ min_a Q_t(s_{t+1}, a) − Q_t(s_t, a_t)]   (14.17a)
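A tabular realization of the update (14.17a) is sketched below. Costs are being minimized, so both the update target and the greedy action use min/argmin; the ε-greedy exploration rule is an illustrative choice, since the text leaves the exploration strategy open.

```python
import numpy as np

def q_learning_step(Q, s, a, cost, s_next, gamma, eta):
    """One Q-learning update (Eqn. (14.17a)); only the visited pair changes."""
    target = cost + gamma * Q[s_next].min()   # bracketed term of (14.17a)
    Q[s, a] += eta * (target - Q[s, a])
    return Q

def select_action(Q, s, epsilon, rng):
    """Epsilon-greedy action choice over a Q-table of shape (states, actions)."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))  # explore
    return int(Q[s].argmin())                 # exploit: lowest expected cost
```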
14.6.1 Q-Learning with Function Approximation
We have so far assumed that the Q-values learned by the agent are represented in a tabular form with one
entry for each state-action pair. This is a particularly clear and instructive case, but of course, it is limited
to tasks with small numbers of states and actions. The problem is not just the memory needed for large
tables, but the computational time needed to experience all the state-action pairs for generation of data
to accurately fill the tables.
Very few decision and control problems in the real world fit the lookup-table representation strategy; the number of possible states and actions in the real world is often much too large to accommodate the computational and storage requirements. The problem is more severe when state/action spaces include continuous variables: to use a table, they must be discretized to a finite size, which may cause errors. The only way to learn anything at all on these tasks is to generalize from previously experienced states to ones that have never been seen. In other words, experience with a limited subset of the state space must be usefully generalized to produce a good approximation over a much larger subset.
Fortunately, generalization from examples has already been extensively studied, and we do not need to
invent totally new methods for use in Q-learning. To a large extent, we need only combine Q-learning
with off-the-shelf architectures for inductive generalization—often called function approximation because it takes examples from the desired Q-function and attempts to generalize from them to construct an approximation of the entire function. For a parametric approximator Q(s, a; w) with adjustable parameters w_i, the TD update takes the gradient-descent form

w_i ← w_i + η (r(s) + γ [min_{a′} Q(s′, a′; w)] − Q(s, a; w)) ∂Q(s, a; w)/∂w_i   (14.18)
This update rule can be shown to converge to the closest possible approximation to the true function
when the function approximator is linear in the parameters.
Unfortunately, all bets are off when nonlinear function approximators, such as neural networks, are used. For many tasks, Q-learning fails to converge once a nonlinear function approximator is introduced. Fortunately, however, the algorithm does converge in a large number of applications. The theory of Q-learning with nonlinear function approximators still contains many open questions; at present it remains an empirical science.
For Q-learning, it makes more sense to use an incremental learning algorithm that updates the parameters of the function approximator after each trial. Alternatively, examples may be collected to form a training set and learned in batch mode, but this slows down learning, as no learning happens while a sufficiently large sample is being collected.
We give an example of neural Q-learning. Let Q̂_t(s, a; w) denote the approximation to Q_t(s, a) for all admissible state-action pairs, computed by means of a neural network at time step t. The state s is input to the neural network with parameter vector w, producing the output Q̂_t(s, a; w) ∀ a ∈ A. We assume that the agent uses the training rule of (14.17) after initialization of Q̂(s, a; w) with arbitrary finite values of w.
Treating the expression inside the square bracket in (14.17a) as the error signal involved in updating the current value of the parameter vector w, we may identify the target (desired) value of Q̂_t at time step t as

Q̂_t^target(s_t, a_t; w) = r_t + γ min_a Q̂_t(s_{t+1}, a; w)   (14.19)

At each iteration of the algorithm, the weight vector w of the neural network is changed slightly in a way that brings the output Q̂_t(s_t, a_t; w) closer to the target Q̂_t^target(s_t, a_t; w) for the current (s_t, a_t) pair. For other state-action pairs, Q-values remain unchanged (Eqn. (14.17b)).
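For concreteness, the sketch below applies the gradient rule (14.18) with the target (14.19) to an approximator that is linear in the parameters, Q(s, a; w) = wᵀφ(s, a), the case for which the convergence remark above holds; the feature map phi and the action set are assumed supplied by the user.

```python
import numpy as np

def q_approx_update(w, phi, s, a, cost, s_next, actions, gamma, eta):
    """Gradient-descent Q-update (Eqn. (14.18)) for a linear approximator."""
    q_next = [w @ phi(s_next, b) for b in actions]
    target = cost + gamma * min(q_next)            # Eqn. (14.19)
    features = phi(s, a)                           # equals dQ/dw for a linear model
    w += eta * (target - w @ features) * features  # Eqn. (14.18)
    return w
```

A neural-network approximator would replace the feature dot product by a forward pass and the `features` vector by the gradient of the network output with respect to w.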
14.7 SARSA: On-Policy TD Control
The Q-learning algorithm, described in the previous section, is an off-policy TD method: the learned action-value function Q directly approximates Q*, the optimal action-value function, independent of the policy being followed; the optimal action for state s is then obtained from Q*. Q-learning is motivated by the value-iteration algorithm of dynamic programming.
The alternative approach, motivated by the policy-iteration algorithm of dynamic programming, is an on-policy TD method. The distinguishing feature of this method is that it attempts to evaluate and improve the same policy that it uses to make decisions.
In Section 14.5 on TD learning, we considered transitions from state to state and learned the values of states (Eqn. (14.12)) when following a policy π. The relationship between states and state-action pairs is symmetrical. Now we consider transitions from state-action pair to state-action pair and learn the values of state-action pairs, following a policy π. In particular, for an on-policy TD method, we must estimate Q^π(s, a) for the current policy π and for all states s ∈ S and actions a ∈ A. We can learn Q^π using essentially the same TD method used in Eqn. (14.12) for learning V^π:

Q^π(s, a) ← Q^π(s, a) + η (r(s) + γ Q^π(s′, a′) − Q^π(s, a))   (14.20)

where a′ is the action executed in state s′.
This rule uses every element of the quintuple of events, (s, a, r, s′, a′), that makes up a transition from one state-action pair to the next. This quintuple (State-Action-Reinforcement-State-Action) gives rise to the name SARSA for this algorithm. Unlike Q-learning, here the agent's policy does matter. Once we have Q^π(s, a), an improved policy can be obtained as follows:

π_{k+1}(s) = arg min_a Q^π(s, a)   (14.21)
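One SARSA transition is sketched below; in contrast to Q-learning, the successor action a_next is the one actually chosen by the current (e.g., ε-greedy) policy, which is what makes the method on-policy.

```python
def sarsa_step(Q, s, a, cost, s_next, a_next, gamma, eta):
    """On-policy SARSA update (Eqn. (14.20)) over the quintuple (s, a, r, s', a')."""
    Q[s, a] += eta * (cost + gamma * Q[s_next, a_next] - Q[s, a])
    return Q
```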
Since tabular (exact) representation is impractical for large state and action spaces, function approximation
methods are used. Approximations in the policy-iteration framework can be introduced at the following
two places:
(i) The representation of the Q-function: The tabular representation of the real-valued function Q^π(s, a) is replaced by a generic parametric function approximator Q̂^π(s, a; w), where w are the adjustable parameters of the approximator.
(ii) The representation of the policy: The tabular representation of the policy π(s) is replaced by a parametric representation π̂(s; p), where p are the adjustable parameters of the representation.
The difficulty involved in the use of these approximate methods within policy iteration is that the off-the-shelf architectures and parameter-adjustment methods cannot be applied blindly; they have to be fully integrated into the policy-iteration framework.
References
APPLICATIONS
1. Siouris, G.M.; Missile Guidance and Control Systems; New York: Springer-Verlag, 2004.
2. Lawrence, A.; Modern Inertial Technology: Navigation, Guidance, and Control; 2nd Edition, New
York: Springer-Verlag, 1998.
3. Leonhard, W.; Control of Electric Drives; 3rd Edition, New York: Springer-Verlag, 2001.
4. Bose, B.K.; Modern Power Electronics and AC Drives; Englewood Cliffs, NJ : Prentice-Hall,
2001.
5. Sen, P.C.; Thyristor DC Drives; 2nd Edition, New York: Wiley-Interscience, 1991.
6. Seborg, D.E., T.F. Edgar, and D.A. Mellichamp; Process Dynamics and Control; 2nd Edition, New
York: John Wiley, 2004.
7. Lee, J., W. Cho, and T.F. Edgar; "An Improved Technique for PID Controller Tuning from Closed-Loop Tests"; AIChE J., Vol. 36, No. 12, pp. 1891–1895, 1990.
8. Ziegler, J.G., and N.B. Nichols; "Optimum Settings for Automatic Controllers"; Trans. ASME, Vol. 64, pp. 759–768, 1942.
9. Perry, R.H., and D. Green (eds); Perry's Chemical Engineers' Handbook; 6th Edition, New York: McGraw-Hill, 1997.
10. Shinskey, F.G.; Process Control Systems; 4th Edition, New York: McGraw-Hill, 1996.
11. Astrom, K.J., and T. Hagglund; PID Controllers: Theory, Design, and Tuning; 2nd Edition, Seattle,
WA: International Society for Measurement and Control, 1995.
12. Corripio, A.B.; Tuning of Industrial Control Systems; Research Triangle Park, North Carolina: Instrument Society of America, 1990.
13. Stephanopoulos, G.; Chemical Process Control—An Introduction to Theory and Practice;
Englewood Cliffs, NJ: Prentice-Hall, 1984.
14. Stevens, B.L., and F.L. Lewis; Aircraft Control and Simulation; New York: Wiley-Interscience, 2003.
15. Nelson, R.C.; Flight Stability and Automatic Control; 2nd Edition, New York: McGraw-Hill, 1997.
16. Etkin, B., and L.D. Reid; Dynamics of Flight: Stability and Control; 3rd Edition, New York: John
Wiley, 1996.
17. Craig, J.J.; Introduction to Robotics: Mechanics and Control; 3rd Edition, Englewood Cliffs, NJ:
Prentice-Hall, 2003.
18. Koivo, A.J.; Fundamental for Control of Robotics Manipulators; New York: John Wiley, 1989.
19. Asada, H., and K. Youcef-Toumi; Direct Drive Robots: Theory and Practice; Cambridge,
Massachusetts: The MIT Press, 1987.
20. Valentino, J.V., and J. Goldenberg; Introduction to Computer Numerical Control (CNC) 3rd
Edition, Englewood Cliffs, NJ: Prentice-Hall, 2002.
21. Groover, M.P.; Automation, Production Systems, and Computer-Integrated Manufacturing; 2nd Edition, Englewood Cliffs, NJ: Prentice-Hall, 2000.
22. Olsson, G., and G. Piani; Computer Systems for Automation and Control; London: Prentice-Hall
International, 1992.
23. Hughes, T.A.; Programmable Controllers; 4th Edition, North Carolina: The Instrumentation, Systems, and Automation Society, 2004.
24. Petruzella, F.D.; Programmable Logic Controllers; 3rd Edition, New York: McGraw-Hill, 2004.
25. Bolton, W.; Programmable Logic Controllers; 3rd Edition, Burlington, MA: Newnes Publication, 2003.
26. Beards, C.F.; Vibration Analysis and Control System Dynamics; 2nd Edition, Englewood Cliffs,
NJ: Prentice-Hall, 1995.
MATHEMATICAL BACKGROUND
27. Noble, B., and J.W. Daniel; Applied Linear Algebra; 3rd Edition, Englewood Cliffs, NJ: Prentice-Hall, 1988.
28. Lancaster, P., and M. Tismenetsky; The Theory of Matrices; 2nd Edition, Orlando, Florida: Academic Press, 1985.
29. Lathi, B.P.; Linear Systems and Signals; 2nd Edition, New York: Oxford University Press, 2005.
30. Oppenheim, A.V., R.W. Schafer, and J.R. Buck; Discrete-Time Signal Processing; 2nd Edition, Englewood Cliffs, NJ: Prentice-Hall, 1999.
31. Oppenheim, A.V., A.S. Willsky, and S. Hamid Nawab; Signals and Systems; 2nd Edition, Upper Saddle River, NJ: Prentice-Hall, 1997.
32. Brown, J.W., and R.V. Churchill; Complex Variables and Applications; 7th Edition, New York:
McGraw-Hill, 2003.
33. Lefschetz, S.; Differential Equations: Geometric Theory; 2nd Edition, New York : Dover
Publications, 1977.
68. Phillips, C.L., and R.D. Harbor; Feedback Control Systems; 4th Edition, Englewood Cliffs, NJ:
Prentice-Hall, 1999.
69. Shinners, S.M.; Modern Control System Theory and Design; 2nd Edition, New York: John Wiley,
1998.
70. Raven, F.H.; Automatic Control Engineering; 5th Edition, New York: McGraw-Hill, 1995.
71. Wolovich, W.A.; Automatic Control Systems: Basic Analysis and Design; Orlando, Florida:
Saunders College Publishing, 1994.
72. Doebelin, E.O.; Control System Principles and Design; New York; John Wiley, 1986.
73. Truxal, J.G.; Automatic Feedback Control System Synthesis; New York: McGraw-Hill, 1955.
ROBUST CONTROL
74. Dullerud, G.E., and F. Paganini; A Course in Robust Control Theory; New York: Springer-Verlag,
2000.
75. Zhou, K., and J.C. Doyle; Essentials of Robust Control; Englewood Cliffs, NJ: Prentice-Hall,
1997.
76. Saberi, A., B.M. Chen, and P. Sannuti; Loop Transfer Recovery–Analysis and Design; London:
Springer-Verlag, 1993.
77. Doyle, J.C., B.A. Francis, and A.R. Tannenbaum; Feedback Control Theory; New York: Macmillan Publishing Company, 1992.
78. Francis, B.A.; A Course in H∞ Control Theory; Lecture Notes in Control and Information Sciences, No. 88, Berlin: Springer-Verlag, 1987.
79. Rosenwasser, E., and R. Yusupov; Sensitivity of Automatic Control Systems; Boca Raton, FL: CRC Press, 2004.
DIGITAL CONTROL
80. Franklin, G.F., J.D. Powell, and M.L. Workman; Digital Control of Dynamic Systems; 3rd Edition, San Diego, CA: Addison-Wesley, 1997.
81. Astrom, K.J., and B. Wittenmark; Computer-Controlled Systems; 3rd Edition, Englewood Cliffs,
NJ: Prentice-Hall, 1996.
82. Santina, M.S., and A.R. Stubberud; Sample-Rate Selection; Boca Raton, FL: CRC Press, 1996.
83. Santina, M.S., and A.R. Stubberud; Quantization Effects; Boca Raton, FL: CRC Press, 1996.
84. Ogata, K.; Discrete-Time Control Systems; 2nd Edition, Upper Saddle River, NJ: Pearson
Education, 1995.
85. Santina, M.S., A.R. Stubberud, and G.H. Hostetter; Digital Control System Design; 2nd Edition,
Stamford, CT: International Thomson Publishing, 1994.
86. Phillips, C.L., and H.T. Nagle, Jr.; Digital Control System Analysis and Design; 3rd Edition, Upper Saddle River, NJ: Pearson Education, 1994.
87. Kuo, B.C.; Digital Control Systems; 2nd Edition, Orlando, Florida: Saunders College Publishing,
1992.
88. Houpis, C.H., and G.B. Lamont; Digital Control Systems: Theory, Hardware, and Software; 2nd Edition, New York: McGraw-Hill, 1992.
89. Hristu-Varsakelis, D., and W.S. Levine (eds); Handbook of Networked and Embedded Control Systems; Boston: Birkhauser Publishers, 2005.
90. Ibrahim, D.; Microcontroller Based Temperature Monitoring and Control; Woburn, MA: Newnes,
2002.
91. Chidambaram, M.; Computer Control of Processes; New Delhi: Narosa Publishers, 2002.
92. Ozkul, T.; Data Acquisition and Process Control using Personal Computers; New York: Marcel Dekker, 1996.
93. Rigby, W.H., and T. Dalby; Computer Interfacing: A Practical Approach to Data Acquisition and
Control; Englewood Cliffs, NJ: Prentice-Hall, 1995.
94. Tooley, M.; PC-Based Instrumentation and Control; 2nd Edition, Oxford: Newnes Publications, 1995.
95. Gupta, S., and J.P. Gupta; PC Interfacing for Data Acquisition and Process Control; 2nd Edition,
Research Triangle Park, North Carolina : Instrument Society of America, 1994.
96. Gopal, M.; Digital Control Engineering; New Delhi: New Age International, 1988.
97. Rao, M.V.C., and A.K. Subramanium; "Elimination of Singular Cases in Jury's Test"; IEEE Trans. Automatic Control, Vol. AC-21, pp. 114–115, 1976.
98. Jury, E.I., and J. Blanchard; “A Stability Test for Linear Discrete-time Systems in Table Form”,
Proc. IRE., Vol.49, pp.1947–1948, 1961.
112. Bryson, A.E.; Applied Linear Optimal Control: Examples and Algorithms; Cambridge, UK :
Cambridge University Press, 2002.
113. Locatelli, A.; Optimal Control: An Introduction; Basel, Switzerland: Birkhauser Verlag, 2001.
114. Chen, T., and B. Francis; Optimal Sampled-Data Control Systems; 2nd Edition, London: Springer,
1996.
115. Lewis, F.L., and V.L. Syrmos; Optimal Control; 2nd Edition, New York: John Wiley, 1995.
116. Zhou, K., J.C. Doyle, and K. Glover; Robust and Optimal Control; Upper Saddle River, NJ: Prentice-
Hall, 1996.
117. Anderson, B.D.O., and J.B. Moore; Optimal Control: Linear Quadratic Methods; Englewood
Cliffs, NJ: Prentice-Hall, 1990.
118. Grimble, M.J., and M.A. Johnson; Optimal Control and Stochastic Estimation: Theory and Applications; Vol. I, Chichester: John Wiley, 1988.
119. Albertos, P., and A. Sala; Multivariable Control Systems: An Engineering Approach; Springer,
2004.
120. Clarke, D.W., C. Mohtadi, and P.S. Tuffs; "Generalized Predictive Control–Part I. The Basic Algorithm"; Automatica, Vol. 23, No. 2, pp. 137–148, 1987.
121. Clarke, D.W., C. Mohtadi, and P.S. Tuffs; "Generalized Predictive Control–Part II. Extensions and Interpretations"; Automatica, Vol. 23, No. 2, pp. 149–160, 1987.
122. Fortmann, T.E., and K.L. Hitz; An Introduction to Linear Control Systems; New York: Marcel Dekker, 1977.
123. Kautsky, J., N.K. Nichols, and P. Van Dooren; "Robust Pole Assignment by Linear State Feedback"; Int. J. Control, Vol. 41, No. 5, pp. 1129–1155, 1985.
124. Doyle, J.C., and G. Stein; “Robustness with Observers”, IEEE Trans. Automatic Control, Vol.AC-
24, No.4, pp.607-611, 1979.
134. Sastry, S., and M. Bodson; Adaptive Control: Stability, Convergence and Robustness; Englewood
Cliffs, NJ: Prentice-Hall, 1989.
135. Grimble, M.J., and M.A. Johnson; Optimal Control and Stochastic Estimation: Theory and Applications; Vol. II, Chichester: John Wiley, 1988.
136. Harris, C.J., and S.A. Billings (eds); Self-Tuning and Adaptive Control: Theory and Applications;
2nd Edition, London: Peter Peregrinus, 1985.
INTELLIGENT CONTROL
137. Negnevitsky, M.; Artificial Intelligence: A Guide to Intelligent Systems; 2nd Edition, Essex, UK:
Pearson Education, 2005.
138. Kecman, V.; Learning and Soft Computing; Cambridge, MA: The MIT Press, 2001.
139. Norgaard, M., O. Ravn, N.K. Poulsen, and L.K. Hansen; Neural Networks for Modelling and Control of Dynamic Systems; London: Springer-Verlag, 2000.
140. Lewis, F.L., S. Jagannathan, and A. Yesildirek; Neural Network Control of Robot Manipulators and
Nonlinear Systems; London, Taylor & Francis, 1999.
141. Haykin, S.; Neural Networks: A Comprehensive Foundation, 2nd Edition, Prentice-Hall, 1998.
142. Jang, J-S.R., C-T. Sun, and E. Mizutani; Neuro-Fuzzy and Soft Computing: A Computational
Approach to Learning and Machine Intelligence; Upper Saddle River, NJ: Pearson Education,
1997.
143. Lin C.T., and C.S. George Lee; Neural Fuzzy Systems; Upper Saddle River, NJ: Prentice-Hall,
1996.
144. Ying, Hao; Fuzzy Control and Modelling: Analytical Foundations and Applications; New York:
IEEE Press, 2000.
145. Passino, K.M., and S. Yurkovich; Fuzzy Control; California: Addison-Wesley, 1990.
146. Goldberg, D.E.; Genetic Algorithms in Search, Optimization, and Machine Learning; Reading, Massachusetts: Addison-Wesley, 1989.
147. Werbos, P.A.; Handbook of Learning and Approximate Dynamic Programming; New York: Wiley-IEEE Press, 2004.
148. Sutton, R.S., and A.G. Barto; Reinforcement Learning: An Introduction; Cambridge, Mass.: MIT Press, 1998.
149. Mitchell, T.M., Machine Learning, Singapore: McGraw-Hill, 1997.
150. Bertsekas, D.P., and J.N. Tsitsiklis; Neuro-Dynamic Programming; Belmont, Mass.: Athena Scientific, 1996.
TECHNICAL COMPUTING
151. National Programme on Technology Enhanced Learning (NPTEL)
Course: Electrical Engineering (Control Engineering)
Faculty Coordinator: Prof. M. Gopal
Web Content: MATLAB Modules for Control System Principles and Design
www.nptel.iitm.ac.in
COMPANION BOOK
155. Gopal, M.; Control Systems: Principles and Design; 4th Edition, New Delhi: Tata McGraw-Hill,
2012.
Answers to Problems
Caution
For some problems (especially the design problems) of the book, many alternative solutions are possible.
The answers given in the present section correspond to only one possible solution for such problems.
2.1 (a) x₁, x₂: Outputs of unit delayers, starting at the right and proceeding to the left.
F = \begin{bmatrix} 0 & 1 \\ -0.368 & 1.368 \end{bmatrix}; g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [0.264  0.368]
(b) (realization diagram with branch gains 0.368, 1.368 and −0.368)
\frac{Y(z)}{R(z)} = \frac{0.368z + 0.264}{z^2 - 1.368z + 0.368}
2.2 (a) y(k + 2) + 5y(k + 1) + 3y(k) = r(k + 1) + 2r(k)
(b) x₁, x₂: Outputs of unit delayers, starting at the right and proceeding to the left.
F = \begin{bmatrix} 0 & 1 \\ -3 & -5 \end{bmatrix}; g = \begin{bmatrix} 1 \\ -3 \end{bmatrix}; c = [1  0]
(c) \frac{Y(z)}{R(z)} = \frac{z + 2}{z^2 + 5z + 3}
2.3 (a) y(k + 1) + \frac{1}{2}y(k) = −r(k + 1) + 2r(k); \frac{Y(z)}{R(z)} = \frac{-z + 2}{z + \frac{1}{2}}
(b) y(k) = −1 for k = 0; y(k) = \frac{5}{2}\left(-\frac{1}{2}\right)^{k-1} for k ≥ 1
(c) y(k) = \frac{2}{3} - \frac{5}{3}\left(-\frac{1}{2}\right)^{k}; k ≥ 0
2.4 (a) y(k) = Ab(a)^{k-1}; k ≥ 1 (b) y(k) = \frac{Ab}{1-a}[1 - (a)^k]; k ≥ 0
(c) y(k) = \frac{Ab}{(1-a)^2}[a^k + (1-a)k - 1]; k ≥ 0
(d) y(k) = Re\left[\frac{Ab}{a - e^{jΩ}}(a^k - e^{jΩk})\right]
2.5 (a) y(k) = (−1)^k − (−2)^k; k ≥ 0
(b) y(k) = 1 + \frac{1}{2}\left(\frac{1}{\sqrt{2}}\right)^{k}\left[\sin\frac{kπ}{4} - \cos\frac{kπ}{4}\right]; k ≥ 0
2.6 y(0) = 0, y(1) = 0.3679, y(2) = 0.8463, y(k) = 1; k = 3, 4, 5, ...
2.7 y(k) = 3(2)k – 2; k ≥ 0
2.8 (a) y(k) = −40δ(k) + 20δ(k − 1) − 10(0.5)^k + 50(−0.3)^k; k ≥ 0
(b) y(k) = −16 + (0.56)^k[7.94 sin(0.468k) + 16 cos(0.468k)]; k ≥ 0
2.9 (a) y(k) = −0.833(0.5)^k − 0.41(−0.3)^k + 0.476(−1)^k + 0.769; k ≥ 0
(b) y(k) = −10k(0.5)^k + 2.5(0.5)^k − 6.94(0.1)^k + 4.44; k ≥ 0
2.10 y(∞) = K
2.13 (a) No (b) Yes
2.14 No
2.16 (b) T = π/2
2.19 (i), (ii) (z-plane sketches: poles at angle θ = ω₀T on a circle of radius e^{−aT}, shown relative to the unit circle)
2.20 G(z) = \frac{1 - e^{-T}}{z - e^{-T}}; y(k) = 1 − e^{−kT}; k ≥ 0
2.21 \frac{Y(z)}{R(z)} = \frac{10}{16}\left[\frac{z + 0.76}{(z-1)(z-0.46)}\right]
2.25 u(k) − u(k − 1) = K_c\left\{\left(1 + \frac{T}{T_I} + \frac{T_D}{T}\right)e(k) - \left(1 + \frac{2T_D}{T}\right)e(k-1) + \frac{T_D}{T}e(k-2)\right\}
U(z) = K_c\left[1 + \frac{T}{T_I}\left(\frac{1}{1-z^{-1}}\right) + \frac{T_D}{T}(1 - z^{-1})\right]E(z)
2.26 (ii) D(z) = 0.4074\left(\frac{z - 0.9391}{z - 0.9752}\right)
2.27 (a) U(z) = K_c\left[1 + \frac{T}{2T_I}\left(\frac{z+1}{z-1}\right) + \frac{T_D}{T}\left(\frac{z-1}{z}\right)\right]E(z)
(b) u(k) = K_c\left\{e(k) + \frac{T}{T_I}\sum_{i=1}^{k}\frac{e(i-1) + e(i)}{2} + \frac{T_D}{T}[e(k) - e(k-1)]\right\}
2.28 (a) y(k) = \frac{1}{1+aT}y(k-1) + \frac{T}{1+aT}r(k) (b) y(k) = (1 − aT)y(k − 1) + Tr(k − 1)
2.29 \left(b + \frac{a}{T} + \frac{1}{T^2}\right)y(k) - \left(\frac{a}{T} + \frac{2}{T^2}\right)y(k-1) + \frac{1}{T^2}y(k-2) = 0; y(0) = α, y(−1) = α − Tβ
3.1 \frac{Y(z)}{R(z)} = \frac{G_{h0}G_1(z)\,G_{h0}G_2(z)}{1 + G_{h0}G_1(z)\,G_{h0}G_2H(z)}
3.2 \frac{Y(z)}{R(z)} = \frac{G_{h0}G(z)}{1 + G_{h0}G(z)H(z)}
3.3 Y(z) = \frac{G_{h0}G_2(z)\,G_1R(z)}{1 + G_{h0}G_2HG_1(z)}
3.4 Y(z) = G_pH_2R(z) + [H_1R(z) - G_pH_2R(z)]\frac{D(z)G_{h0}G_p(z)}{1 + D(z)G_{h0}G_p(z)}
3.5 \frac{Y(z)}{R(z)} = \frac{D(z)G_{h0}G_1G_2(z)}{1 + D(z)[G_{h0}G_1(z) + G_{h0}G_1G_2(z)]}; \frac{X(z)}{R(z)} = \frac{D(z)G_{h0}G_1(z)}{1 + D(z)[G_{h0}G_1(z) + G_{h0}G_1G_2(z)]}
3.6 Y(z) = \frac{GW(z)}{1 + D(z)G_{h0}G(z)}
3.7 G_{h0}G(z) = 0.0288\left[\frac{z + 0.92}{(z-1)(z-0.7788)}\right]; \frac{θ_L(z)}{θ_R(z)} = \frac{G_{h0}G(z)}{1 + G_{h0}G(z)}
3.8 \frac{ω(z)}{ω_r(z)} = \frac{159.97(K_F + K_P)}{z + 159.97K_P - 0.1353}
3.9 \frac{Y(z)}{R(z)} = 0.0095\left[\frac{z(z + 0.92)}{z^3 - 2.81z^2 + 2.65z - 0.819}\right]
(plot: boundary of the unstable region in the (T, K) plane; K = 200.34 at T = 0.01 and K = 20.34 at T = 0.1)
3.16 0 < K < 0.785.
3.17 For T = 0.001 sec, the response y(k) is very close to y(t). (plot of y(k) for T = 0.01 sec compared with y(t))
4.18 (a) D₁(z) = 13.934\left(\frac{z - 0.8187}{z - 0.1595}\right) (b) K_v = 3
(c) D₂(z) = \frac{z - 0.94}{z - 0.98}
4.19 D(z) = 1.91\left(\frac{z - 0.84}{z - 0.98}\right)
4.20 (a) z_{1,2} = 0.78 ± j0.18
(b) Pole of D(z) at z = 0; (K/2) = 0.18
(c) K_a = 0.072
(d) z₃ = 0.2; the third pole causes the response to slow down.
4.21 D(z) = 150\left(\frac{z - 0.72}{z + 0.4}\right)
4.22 (a) D(z) = 135.22\left(\frac{(z - 0.9048)(z - 0.6135)}{(z + 0.9833)(z - 0.7491)}\right)
(b) D(z) = 104.17\left(\frac{(z - 0.9048)(z + 1)}{(z + 0.9833)(z + 0.5)}\right); 0.15
4.23 D(z) = \frac{4.8 - 3.9z^{-1}}{1 - z^{-1}}
5.1 x₁ = θ_M, x₂ = θ̇_M, x₃ = motor armature current i_a, x₄ = generator field current i_f; y = θ_L
A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & -0.025 & 3 & 0 \\ 0 & -12 & -190 & 1000 \\ 0 & 0 & 0 & -4.2 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0.2 \end{bmatrix}; c = [0.5  0  0  0]
5.2 A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1 & 20 \\ 0 & 0 & -5 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ 2.5 \end{bmatrix}; c = [1  0  0]
5.3 x₁ = θ_M, x₂ = θ̇_M, x₃ = i_a
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -0.5 & 19 \\ -\frac{k_1}{40} & -\frac{k_2 + 0.5}{2} & -\frac{21}{2} \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ \frac{k_1}{2} \end{bmatrix}; c = [\frac{1}{20}  0  0]
5.4 A = \begin{bmatrix} -\frac{B}{J} & \frac{K_T}{J} \\ -\frac{k_1K_tK_c + K_b}{L_a} & -\frac{R_a + k_2K_c}{L_a} \end{bmatrix}; b = \begin{bmatrix} 0 \\ \frac{k_1K_c}{L_a} \end{bmatrix}; c = [1  0]
5.5 (a) A = \begin{bmatrix} -11 & 6 \\ -15 & 8 \end{bmatrix}; b = \begin{bmatrix} 1 \\ 2 \end{bmatrix}; c = [2  −1]
(b) (state diagrams for the two realizations, with integrator branches s^{−1}; the given model uses gains −11, −15, 6, 8, 2, −1, and the cascade form uses gains −3 and −2)
(c) \frac{Y(s)}{U(s)} = \frac{1}{s^2 + 3s + 2}
5.6 (a) A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 1 \end{bmatrix} (b) A = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
(c) |λI − A| = λ² for both models
5.7 G(s) = \frac{1}{Δ}\begin{bmatrix} s(s+3) & s+3 & 1 \\ -1 & s(s+3) & s \\ -s & -1 & s^2 \end{bmatrix}
H(s) = \frac{1}{Δ}\begin{bmatrix} 1 \\ s \\ s^2 \end{bmatrix}; Δ = s³ + 3s² + 1
5.9 x₁, x₂, x₃: outputs of integrators, starting at the right and proceeding to the left.
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -2 & 1 \\ -2 & 1 & -2 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; c = [2  −2  1]
5.10 x₁, x₂, x₃, x₄: outputs of integrators
Top row: x₁, x₂ = y₁; Bottom row: x₃, x₄ = y₂
A = \begin{bmatrix} 0 & 0 & 0 & -4 \\ 1 & -3 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & -4 \end{bmatrix}; B = \begin{bmatrix} 3 & 0 \\ 1 & 2 \\ 0 & 3 \\ 0 & 0 \end{bmatrix}; x(0) = \begin{bmatrix} 0 \\ y_1(0) \\ 0 \\ y_2(0) \end{bmatrix}; C = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
5.11 (a) G(s) = \frac{s+3}{(s+1)(s+2)} (b) G(s) = \frac{1}{(s+1)(s+2)}
5.12 G(s) = \frac{1}{Δ}\begin{bmatrix} -3s+5 & 4(s-3) \\ -s^2+2s & 2(s^2-3s+1) \end{bmatrix}; Δ = s³ − 4s² + 6s − 5
5.13 (a) A = \begin{bmatrix} -5 & 0.5 & -3.5 \\ 4 & -5 & 0 \\ 0 & 1 & 0 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}; c = [0  1  0]
(b) G(s) = \frac{14}{(s+1)(s+2)(s+7)}
5.14 (a) x₁ = output of lag 1/(s + 2); x₂ = output of lag 1/(s + 1)
A = \begin{bmatrix} -2 & 1 \\ -1 & -1 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [−1  1]; d = 1
(b) x₁ = output of lag 1/(s + 2); x₂ = output of lag 1/s; x₃ = output of lag 1/(s + 1).
A = \begin{bmatrix} -2 & 1 & 1 \\ -1 & 0 & 0 \\ -1 & 0 & -1 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}; c = [0  1  1]
5.15 x₁ = output of lag 1/(s + 1); x₂ = output of lag 5/(s + 5); x₃ = output of lag 0.4/(s + 0.5); x₄ = output of lag 4/(s + 2).
A = \begin{bmatrix} -1-K_1 & -K_1 & 0 & 0 \\ 0 & -5 & -5K_2 & -5K_2 \\ -0.4K_1 & -0.4K_1 & -0.5 & 0 \\ 0 & 0 & -4K_2 & -2-4K_2 \end{bmatrix}; B = \begin{bmatrix} K_1 & 0 \\ 0 & 5K_2 \\ 0.4K_1 & 0 \\ 0 & 4K_2 \end{bmatrix}; C = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}
5.16 (i) A = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}; b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; c = [2  −1]
(ii) A = \begin{bmatrix} 0 & 0 & -2 \\ 1 & 0 & -5 \\ 0 & 1 & -4 \end{bmatrix}; b = \begin{bmatrix} 5 \\ 0 \\ 0 \end{bmatrix}; c = [0  0  1]
(iii) A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; c = [2  6  2]; d = 1
5.17 (i) A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -2 \\ 0 & 1 & -3 \end{bmatrix}; b = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}; c = [0  0  1]
(ii) A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; c = [1  0  0]
(iii) Λ = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix}; b = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}; c = [−1  2  1]; d = 1
5.18 (a) A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -100 & -52 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; c = [5000  1000  0]
(b) Λ = \begin{bmatrix} 0 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -50 \end{bmatrix}; b = \begin{bmatrix} 50 \\ -31.25 \\ -18.75 \end{bmatrix}; c = [1  1  1]
5.19 (a) Λ = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -2 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}; c = [1  1  1]
(b) y(t) = 2.5 − 2e^{−t} − te^{−t} − 0.5e^{−2t}
5.20 (i) λ₁ = 1, λ₂ = 2; v₁ = \begin{bmatrix} 1 \\ 0 \end{bmatrix}; v₂ = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
(ii) λ₁ = −1, λ₂ = −2; v₁ = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; v₂ = \begin{bmatrix} 2 \\ 1 \end{bmatrix}
(iii) λ₁ = −1, λ₂ = −2, λ₃ = −3; v₁ = \begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}; v₂ = \begin{bmatrix} 1 \\ -2 \\ \frac{1}{2} \end{bmatrix}; v₃ = \begin{bmatrix} 1 \\ -3 \\ 3 \end{bmatrix}
5.21 (b) λ₁ = −2, λ₂ = −3, λ₃ = −4; v₁ = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}; v₂ = \begin{bmatrix} 1 \\ -3 \\ 9 \end{bmatrix}; v₃ = \begin{bmatrix} 1 \\ -4 \\ 16 \end{bmatrix}
5.22 (a) P = \begin{bmatrix} 1 & 1 & 1 \\ -1+j1 & -1-j1 & -1 \\ -j2 & j2 & 1 \end{bmatrix}
(b) Q = \begin{bmatrix} \frac{1}{2} & -j\frac{1}{2} & 0 \\ \frac{1}{2} & j\frac{1}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}
5.23 λ₁ = −1, λ₂ = 2; v₁ = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; v₂ = \begin{bmatrix} 1 \\ 2 \end{bmatrix}; e^{At} = \begin{bmatrix} 2e^{-t} - e^{2t} & -e^{-t} + e^{2t} \\ 2e^{-t} - 2e^{2t} & -e^{-t} + 2e^{2t} \end{bmatrix}
5.24 (a) e^{At} = \begin{bmatrix} \frac{3}{2}e^{-t} - \frac{1}{2}e^{-3t} & -\frac{3}{2}e^{-t} + \frac{3}{2}e^{-3t} \\ \frac{1}{2}e^{-t} - \frac{1}{2}e^{-3t} & -\frac{1}{2}e^{-t} + \frac{3}{2}e^{-3t} \end{bmatrix}
(b) e^{At} = \begin{bmatrix} \frac{3}{2}e^{-t} - \frac{1}{2}e^{-3t} & \frac{1}{2}e^{-t} - \frac{1}{2}e^{-3t} \\ -\frac{3}{2}e^{-t} + \frac{3}{2}e^{-3t} & -\frac{1}{2}e^{-t} + \frac{3}{2}e^{-3t} \end{bmatrix}
5.26 (a) (state diagram: two integrators with outputs x₁, x₂, initial conditions x₁⁰, x₂⁰, self-loop gains −2, unit cross-couplings, and input u)
(b) \frac{X_1(s)}{x_1^0} = G_{11}(s) = \frac{1/2}{s+1} + \frac{1/2}{s+3}; \frac{X_1(s)}{x_2^0} = G_{12}(s) = \frac{1/2}{s+1} - \frac{1/2}{s+3}
\frac{X_2(s)}{x_1^0} = G_{21}(s) = \frac{1/2}{s+1} - \frac{1/2}{s+3}; \frac{X_2(s)}{x_2^0} = G_{22}(s) = \frac{1/2}{s+1} + \frac{1/2}{s+3}
\frac{X_1(s)}{U(s)} = H_1(s) = \frac{1}{s+1}; \frac{X_2(s)}{U(s)} = H_2(s) = \frac{1}{s+1}
(c) (i) x(t) = \frac{1}{2}\begin{bmatrix} e^{-t}(x_1^0 + x_2^0) + e^{-3t}(x_1^0 - x_2^0) \\ e^{-t}(x_1^0 + x_2^0) + e^{-3t}(-x_1^0 + x_2^0) \end{bmatrix} (ii) x(t) = \begin{bmatrix} 1 - e^{-t} \\ 1 - e^{-t} \end{bmatrix}
5.27 x₁(t) = −\frac{1}{3}e^{−2t} + \frac{2}{3}e^{−3t}; x₂(t) = 2(e^{−2t} − e^{−3t}); x₃(t) = −2(2e^{−2t} − 3e^{−3t}); y(t) = x₁(t)
5.28 (a) Asymptotically stable (b) y(t) = \frac{1}{2} + 2e^{−t} − \frac{3}{2}e^{−2t}
5.29 y₁(t) = 3 − \frac{5}{2}e^{−t} − e^{−2t} + \frac{1}{2}e^{−3t}; y₂(t) = 1 + e^{−2t} − 2e^{−3t}
5.30 (a) A = \begin{bmatrix} -6 & 0.5 \\ 4 & -5 \end{bmatrix}; b = \begin{bmatrix} 7 \\ 0 \end{bmatrix}; c = [0  1]
(b) y(t) = \frac{28}{3}\left[\frac{1}{4}(1 - e^{-4t}) - \frac{1}{7}(1 - e^{-7t})\right]
5.31 \begin{bmatrix} x_1(1) \\ x_2(1) \end{bmatrix} = \begin{bmatrix} 2.7183 - k \\ 2k \end{bmatrix} for any k ≠ 0
5.33 (b) x(0) = \begin{bmatrix} k \\ -k \end{bmatrix}; k ≠ 0
5.34 e^{At} = \begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ 2e^{-2t} - 2e^{-t} & 2e^{-2t} - e^{-t} \end{bmatrix}
A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}
5.37 Controllable but not observable.
5.38 (i) Controllable but not observable (ii) Controllable but not observable
(iii) Both controllable and observable (iv) Both controllable and observable
(v) Both controllable and observable
5.39 (i) Observable but not controllable (ii) Controllable but not observable
(iii) Neither controllable nor observable
5.40 (i) G(s) = \frac{1}{s+2}; state model is not controllable
(ii) G(s) = \frac{s+4}{(s+2)(s+3)}; state model is not observable
5.42 (a) A = \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [10  0]
(b) A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -2 & -3 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; c = [20  10  0]
(c) A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -2 \\ 0 & 1 & -3 \end{bmatrix}; b = \begin{bmatrix} 20 \\ 10 \\ 0 \end{bmatrix}; c = [0  0  1]
6.1 G(z) = \frac{z}{Δ}\begin{bmatrix} z^2 & z & 1 \\ -(4z+1) & z(z+3) & z+3 \\ -z & -1 & z(z+3)+4 \end{bmatrix}; Δ = z³ + 3z² + 4z + 1
H(z) = \frac{1}{Δ}\begin{bmatrix} -3z^2 - 7z \\ -7z^2 - 9z + 3 \\ 3z + 7 \end{bmatrix}
6.3 G(z) = \frac{2z + 2}{z^2 - z + \frac{1}{2}}
6.4 G(z) = \begin{bmatrix} \frac{2z+2}{Δ} & \frac{-4z-\frac{1}{4}}{Δ} & 4 + \frac{-30}{Δ} \\ -2 + \frac{z+\frac{1}{2}}{Δ} & \frac{-3z-4}{Δ} & \frac{-3z-9}{Δ} \end{bmatrix}; Δ = z² − z + \frac{1}{2}
6.5 x₁, x₂, x₃: Outputs of unit delayers, starting at the top of the column of unit delayers and proceeding to the bottom.
F = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & 2 \\ 0 & -2 & -1 \\ 0 & 3 & \frac{1}{3} \end{bmatrix}; g = \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}; c = [5  6  −7]; d = 8
6.6 x₁, x₂, x₃: Outputs of unit delayers. x₁ and x₂ in first row, starting at the left and proceeding to the right.
F = \begin{bmatrix} 0 & 1 & 0 \\ 3 & 0 & 2 \\ -12 & -7 & -6 \end{bmatrix}; G = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}; C = \begin{bmatrix} 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}; D = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}
6.7 (i) F = \begin{bmatrix} 0 & 1 \\ \frac{2}{3} & -\frac{1}{3} \end{bmatrix}; g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [−1  −2]; d = 3
(ii) F = \begin{bmatrix} 0 & 0 & \frac{3}{4} \\ 1 & 0 & 1 \\ 0 & 1 & -1 \end{bmatrix}; g = \begin{bmatrix} 0.5 \\ -3 \\ 4 \end{bmatrix}; c = [0  0  1]; d = −2
6.8 (i) F = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix}; g = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}; c = [−1  2  1]; d = 1
(ii) F = \begin{bmatrix} \frac{1}{3} & 1 & 0 \\ 0 & \frac{1}{3} & 1 \\ 0 & 0 & \frac{1}{3} \end{bmatrix}; g = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; c = [5  −2  3]; d = 0
6.9 (i) F = \begin{bmatrix} 0 & 0 & -3 \\ 1 & 0 & -7 \\ 0 & 1 & -5 \end{bmatrix}; g = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}; c = [0  0  1]; d = 0
(ii) F = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}; g = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; c = [−2  7]; d = 0
(iii) F = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -3 & -7 & -5 \end{bmatrix}; g = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; c = [2  1  0]; d = 0
6.11 y(k) = −\frac{17}{6}(−0.2)^k + \frac{22}{9}(−0.8)^k + \frac{25}{18}; k ≥ 0
6.12 y₁(k) = 5\left(\frac{1}{2}\right)^k + 10\left(-\frac{1}{2}\right)^k + 2; k ≥ 0; y₂(k) = 3\left(\frac{1}{2}\right)^k + 2\left(-\frac{1}{2}\right)^k + 1; k ≥ 0
6.13 (a) λ₁ = −1, λ₂ = −1, λ₃ = −2; Modes: (−1)^k, k(−1)^{k−1}, (−2)^k
(b) x(k) = \begin{bmatrix} k(-1)^{k-1} \\ (-1)^k \\ (-2)^k \end{bmatrix}
6.14 (a) G(z) = Z[G_{h0}(s)G_a(s)] = \frac{0.2838z + 0.1485}{z^2 - 1.1353z + 0.1353}
F = \begin{bmatrix} 0 & 1 \\ -0.1353 & 1.1353 \end{bmatrix}; g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [0.1485  0.2838]
(b) From the controllable companion form continuous-time model:
F = \begin{bmatrix} 1 & 0.4323 \\ 0 & 0.1353 \end{bmatrix}; g = \begin{bmatrix} 0.2838 \\ 0.4323 \end{bmatrix}; c = [1  0]
6.17 F = \begin{bmatrix} 0.741 & 0 \\ 0.222 & 0.741 \end{bmatrix}; G = \begin{bmatrix} 259.182 & 0 \\ 36.936 & 259.182 \end{bmatrix}; C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
6.18 (a) 0.3679\left[\frac{z + 0.7181}{(z-1)(z-0.3679)}\right]
(b) \frac{Y(z)}{R(z)} = \frac{0.3679z + 0.2642}{z^2 - z + 0.6321}; F = \begin{bmatrix} 0 & 1 \\ -0.6321 & 1 \end{bmatrix}; g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [0.2642  0.3679]
6.19 (a) G(z) = Z[G_{h0}(s)G_a(s)] = \frac{0.4512z + 0.1809}{z^2 - 0.3679z};
F = \begin{bmatrix} 0 & 1 \\ 0 & 0.3679 \end{bmatrix}; g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [0.1809  0.4512]
(b) ẏ(t) = −y(t) + u(t − 0.4); x₁(k) = y(k); x₂(k) = u(k − 1);
F = \begin{bmatrix} 0.3679 & 0.1809 \\ 0 & 0 \end{bmatrix}; g = \begin{bmatrix} 0.4512 \\ 1 \end{bmatrix}; c = [1  0]
6.20 x₁(k) = x(k); x₂(k) = u(k − 3); x₃(k) = u(k − 2); x₄(k) = u(k − 1)
(c) \begin{bmatrix} ẋ_3 \\ ẋ_1 \\ ẋ_2 \end{bmatrix} = \begin{bmatrix} -6 & 0 & 1 \\ -6 & 0 & 0 \\ -11 & 1 & 0 \end{bmatrix}\begin{bmatrix} x_3 \\ x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}u = \begin{bmatrix} a_{11} & a_{1e} \\ a_{e1} & A_{ee} \end{bmatrix}\begin{bmatrix} x_3 \\ x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} b_1 \\ b_e \end{bmatrix}u
\frac{U(s)}{-Y(s)} = D(s) = \frac{778.16s + 3690.72}{s^2 + 19.6s + 151.2}
(d) \begin{bmatrix} ẋ_1 \\ ẋ_2 \\ \dot{x̂}_1 \\ \dot{x̂}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 20.6 & 0 & -29.6 & -3.6 \\ 16 & 0 & -16 & 1 \\ 84.6 & 0 & -93.6 & -3.6 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x̂_1 \\ x̂_2 \end{bmatrix}
7.9 (a) A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}; b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [1  0]
(b) k = [1  2]
(c) \dot{x̂} = (A − mc)x̂ + bu + my; mᵀ = [5  25]
x̂_e = x̂₂; \begin{bmatrix} a_{11} & a_{1e} \\ a_{e1} & A_{ee} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}; \begin{bmatrix} b_1 \\ b_e \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; m = 5
(f) \frac{U(s)}{-Y(s)} = D(s) = \frac{8.07(s + 0.62)}{s + 6.41}
7.10 (a) k1 = 4; k2 = 3; k3 = 1; N = k1
(b) mT = [5 7 8]
7.11 k = [–1.4 2.4]; ki = 1.6
7.12 k1 = 4; k2 = 1.2; k3 = 0.1
7.13 k1 = 1.2; k2 = 0.1; k3 = 4
7.14 mT = [5 6 5]
7.15 KA = 3.6; k2 = 0.11; k3 = 0.33
7.16 KA = 40; k2 = 0.325; k3 = 3
7.17 k1 = a2 /b; k2 = (a1 – a)/b; N = k1
7.18 k1 = – 0.38; k2 = 0.6; k3 = 6
7.19 (a) k1 = 3; k2 = 1.5
(b) For a unit-step disturbance, the steady-state error in the output is 1/7.
(c) k1 = 2; k2 = 1.5; k3 = 3.5, Steady-state value of the output = 0
7.20 (a) k = [3 1.5] (b) N = 7
(c) For a unit-step disturbance, the steady-state error in the output is 1/7.
(d) k1 = 2; k2 = 1.5; k3 = 3.5
7.21 (a) K = 0.095; N = 0.1 (b) For A + δA = −0.6, ω(∞) = \frac{10}{10.1}r
(c) K₁ = 0.105; K₂ = 0.5
7.22 k₁ = −4; k₂ = −3/2; k₃ = 0
7.23 x̂(k + 1) = (F − mc)x̂(k) + Gu(k) + m[y(k) − du(k)]; mᵀ = [\frac{3}{2}  −\frac{11}{16}  0]
7.24 k = [−0.5  −0.2  1.1]; x(k + 1) = (F − gk)x(k)
7.25 x̄(k + 1) = Fx̂(k) + gu(k)
x̂(k + 1) = x̄(k + 1) + m[y(k + 1) − cx̄(k + 1)]; mᵀ = [6.25  −5.25]
7.26 \begin{bmatrix} x_2(k+1) \\ x_1(k+1) \\ x_3(k+1) \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ -0.2 & -0.5 & 1.1 \end{bmatrix}\begin{bmatrix} x_2(k) \\ x_1(k) \\ x_3(k) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u(k) = \begin{bmatrix} f_{11} & f_{1e} \\ f_{e1} & F_{ee} \end{bmatrix}\begin{bmatrix} x_2(k) \\ x_1(k) \\ x_3(k) \end{bmatrix} + \begin{bmatrix} g_1 \\ g_e \end{bmatrix}u(k)
7.27 (a) k = [\frac{111}{76}  −\frac{18}{19}]
(b) x̂(k + 1) = (F − mc)x̂(k) + bu(k) + m[y(k) − du(k)]; mᵀ = [8  −5]
\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \\ x̂_1(k+1) \\ x̂_2(k+1) \end{bmatrix} = \begin{bmatrix} 2 & -1 & -5.84 & 3.79 \\ -1 & 1 & -4.38 & 2.84 \\ 8 & 8 & -11.84 & -5.21 \\ -5 & -5 & -0.38 & 8.84 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \\ x̂_1(k) \\ x̂_2(k) \end{bmatrix}
7.28 (a) F = \begin{bmatrix} 0 & 1 \\ -0.16 & -1 \end{bmatrix}; g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; c = [1  0]
(b) k₁ = 0.36; k₂ = −2.2
(c) x̂₂(k + 1) = (−1 − m)x̂₂(k) + u(k) − 0.16y(k) + my(k + 1); m = −1
(d) (block diagram: the plant \frac{z^{-2}}{(1 + 0.8z^{-1})(1 + 0.2z^{-1})} in a feedback loop with the compensator \frac{2.56(1 + 0.1375z^{-1})}{1 - 2.2z^{-1}})
7.29 (a) x(k) = Qz(k); Q = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}; u = −0.36z₁(k) + 2.2z₂(k)
(e) (block diagram: unity-feedback loop with blocks 1/s and 1/(s + 1) in cascade, disturbance w entering between them, and an inner feedback gain of 10)
8.18 (a) K = 0.095; N = 0.1 (b) For A + dA = – 0.6, w( ) = r
10.1
(c) K = 0.105; K1 = 0.1
8.19 u(k) = – 0.178x(k)
8.20 (a) K = 0.277 (b) y( ) = 0.217r
(c) N = 1.277
8.21 (a) K = 0.2 (b) N = 0.45
0.9
(c) For F + dF = 0.3, x( ) = r
1.1
9.2 \frac{8M}{π}, -\frac{8M}{π}; 1 rad/sec; y(t) = sin t
9.3 0.3; 10 rad/sec
9.4 D < 0.131
9.6 4.25; 2 rad/sec, stable limit cycle
9.7 3.75; 1 rad/sec
9.8 (a) Stable node; (0, 0) point in (y, ẏ)-plane
(b) Stable node; (1, 0) point in (y, ẏ)-plane
(c) Unstable focus; (2, 0) point in (y, ẏ)-plane
9.9 For θ = 0, the singular point is a center; for θ = π, it is a saddle.
9.10 (i) Singularity (1, 0) in (y, ẏ)-plane is a center
(ii) Singularity (1, 0) in (y, ẏ)-plane is a stable focus.
9.11 (a) For −0.1 < x₁ < 0.1: ẍ₁ + ẋ₁ + 7x₁ = 0
For x₁ > 0.1: ẍ₁ + ẋ₁ + 0.7 = 0
For x₁ < −0.1: ẍ₁ + ẋ₁ − 0.7 = 0
(b) Isocline equations:
x₂ = \frac{-7x_1}{m+1}; −0.1 < x₁ < 0.1
x₂ = \frac{-0.7}{m+1}; x₁ > 0.1
x₂ = \frac{0.7}{m+1}; x₁ < −0.1
(c) A singular point at the origin
9.12 (a) For −0.1 < x₁ < 0.1: ẍ₁ + ẋ₁ = 0
For x₁ < −0.1: ẍ₁ + ẋ₁ + 7(x₁ + 0.1) = 0
For x₁ > 0.1: ẍ₁ + ẋ₁ + 7(x₁ − 0.1) = 0
(b) Isocline equations:
m = −1; −0.1 < x₁ < 0.1
x₂ = \frac{-7x_1 - 0.7}{m+1}; x₁ < −0.1
x₂ = \frac{-7x_1 + 0.7}{m+1}; x₁ > 0.1
(c) Singular points at (x₁ = ±0.1, x₂ = 0)
9.13 (a) ẍ₁ + ẋ₁ + 0.7 sgn x₁ = 0
(b) Isocline equations:
x₂ = \frac{-0.7}{m+1}; x₁ > 0
x₂ = \frac{0.7}{m+1}; x₁ < 0
(c) No singular points
9.14 (a) For −0.1 < x₁ < 0.1: ẍ₁ + ẋ₁ = 0
For x₁ > 0.1: ẍ₁ + ẋ₁ + 0.7 = 0
For x₁ < −0.1: ẍ₁ + ẋ₁ − 0.7 = 0
(b) Isocline equations:
m = −1; −0.1 < x₁ < 0.1
x₂ = \frac{-0.7}{m+1}; x₁ > 0.1
x₂ = \frac{0.7}{m+1}; x₁ < −0.1
(c) No singular points
9.15 (a) ẍ₁ + 0.1 sgn ẋ₁ + x₁ = 0
(b) Isocline equation:
x₂ = \frac{-x_1 - 0.1\,sgn\,x_2}{m}
(c) Singular points at (x₁ = ∓0.1, x₂ = 0)
9.16 Steady-state error to unit-step input = −0.2 rad; Maximum steady-state error = ±0.3 rad.
9.17 Deadzone helps to reduce system oscillations, and introduces steady-state error.
9.18 Saturation has a slowing-down effect on the transient.
9.20 The system has good damping and no oscillations, but exhibits chattering behavior. Steady-state error is zero.
9.21 (b) (i) Deadzone provides damping; oscillations get reduced.
(ii) Deadzone introduces steady-state error; maximum error = ±0.2.
(c) By derivative-control action, (i) settling time is reduced, but (ii) chattering effect appears.
9.23 x₁ = e, x₂ = ė; Switching curve: x₁ = −x₂ + \frac{x_2}{|x_2|}\ln(1 + |x_2|)
z*_{COA} = 4.7
12.14 (i) 0.25, 0.62
(ii) μ_agg(z) = max{min(0.25, μ_P̃L(z)), min(0.62, μ_P̃M(z))}
(iii) z* = 53.18
12.15 34.40
12.16 Rules 2, 3, and 4 are active; u* = 2.663
12.17 67.4
12.18 2.3
12.22 2.974
12.23 12.04
Index
A
Acceleration error constant 220
Ackermann's formula 445, 471
Accuracy (NN) 693
Activation functions; 702
  bipolar 704
  Gaussian 728
  hyperbolic tangent 706
  linear 707
  log-sigmoid 707
  sigmoidal 706
  tan-sigmoid 707
  unipolar 704
Adaptive control system;
  model-reference 649–657, 671
  self-tuning 663–671
A/D converter; 22, 27
  circuits 29–31
  model 127
Adjoint of a matrix 288
Aliasing 81–83
Alpha-cut; fuzzy set 783–784
Analytic function 55
ANFIS 809–813
Antecedent; IF-THEN rule 773
Anti-aliasing filter 24, 87
Artificial neural network (see Neural network)
Artificial neuron (see Neuron model)
Asymptotic stability 72, 367, 503, 613
Autonomous systems 610
B
Backlash nonlinearity; 568
  describing function 575–578
Backpropagation training;
  batch-mode 721–722
  gradient descent method 719–722
  incremental-mode 720–721
  learning rate 720
  momentum term 726
  multilayer network 722–727
  single-layer network 716–722
  weight initialization 726
Backward difference approximation of derivatives 99–103
Bandwidth; 229, 232
  on Nichols chart 244
Batch-mode training 721–722
Bell-shaped fuzzy set 782–783
Barbalat's lemma 653–654
Bias (NN) 702
BIBO stability 66–72, 367
Bilinear transformation; 105–108
  with frequency prewarping 237
Bode plots:
  lag compensation 239–241
  lag-lead compensation 241
  lead compensation 239–240
C
Cancellation compensation 256, 263, 265–266
Canonical state models;
  controllability form 364
  controllable companion form 316–319, 394
  first companion form 316–319, 394
  Jordan form 320–325, 396–399
  observability form 366
  observable companion form 319–320, 395
  second companion form 319–320, 395
Singleton fuzzy system 784, 793
Singular matrix 288
Singular points 597–599
  center 600
  focus 600
  node 600
  saddle 601
  vortex 600
Singular values of a matrix 292
Sinusoidal sequence 32–33, 49
SISO systems; definition 14, 38–39
Skew-symmetric matrix 287
Sliding-mode control 672–677
s-norm; fuzzy sets 786
Soft-computing 687–689
Solution of
  homogeneous state equations 340, 409
  nonhomogeneous state equations 348–349, 408, 409
Specifications (see Performance specifications)
Spectral norm of a matrix 292
s-plane to z-plane mapping 46
Stability
  asymptotic 72, 367, 503, 613
  BIBO 66–72, 367
  global 505, 613
  in-the-large 613
  in the sense of Lyapunov 503, 612
  in-the-small 613
  local 505, 613
  marginal 72
  Nyquist 230–231
  sampling effects 134–135
  zero-input 72–73
Stability tests for linear systems
  Jury 73–75
  Lyapunov 506–509
Stability tests for nonlinear systems
  describing function 580–583
  Lyapunov 621–626
Stabilizability 524
State diagram 308, 394
State feedback 297, 437–438
State model 37–39, 302, 367, 392, 419
  conversion to transfer function 308–311, 368, 393, 419
  equivalence with transfer function 362–367, 416–417
  sampled plant 399–402
  system with dead-time 405–407
State models; canonical (see canonical state models)
State observers 448, 472
  current 473–474
  deadbeat 480–481
  full-order 449–452, 472–474
  prediction 472–473
  reduced-order 455–457, 474
  robust 463
State observer design through matrix Riccati equation 521–522
State regulator design through
  matrix Riccati equation 518, 523–529, 534–537
  pole-placement 437, 444–445, 470–471
State transition equation 413
State transition matrix; 340
  properties 340–341
State transition matrix evaluation by
  Cayley-Hamilton technique 344–346, 412
  inverse Laplace transform 341
  inverse z-transform 409–410
  numerical algorithm 401–402
  similarity transformation 342–343, 410–411
Steady-state error 218–221
  sampling effects 222
Steady-state error constants (see Error constants)
Step-invariance method for discretization 96–98
Step motors (see Stepping motors)
Stepper motors (see Stepping motors)
Stepping motors
  in feedback loop 8
  interfacing to microprocessors 178–180
  permanent magnet 174–176
  torque-speed curves 178, 179
  variable-reluctance 177–178
Step sequence 32, 47
Strictly proper transfer function 311
Suboptimal state regulator 539–545
Sugeno architecture, data-based modeling 793–794
Supervised learning (NN) 691, 716
Support; fuzzy set 784
Support vector machines 741
  approximation 753–757
  hard-margin linear 742–748
  nonlinear 752–753
  soft-margin linear 748–752
Sylvester's test 296–297
Symmetric matrix 286