Control of Nonlinear Systems
Hassan K. Khalil, Michigan State University, East Lansing, USA
Contents
1. Introduction
2. Stability
2.1. Lyapunov Stability
2.2. Input-Output Stability
2.3. Passivity
2.4. Feedback Systems
3. Sensitivity Analysis and Asymptotic Methods
4. Linearization and Gain Scheduling
5. Nonlinear Geometric Methods
6. Feedback Linearization
7. Robust Control
8. Nonlinear Design
Glossary
Bibliography
Biographical Sketch
Summary
The last quarter of the twentieth century has seen rapid progress towards the
development of a nonlinear control theory. This chapter introduces the main tools of
analysis and design of nonlinear control systems, which are detailed in the subsequent
chapters.
1. Introduction
There are many control tasks that require the use of feedback. Depending on the design
goals, there are several formulations of the control problem. The tasks of stabilization,
tracking, and disturbance rejection or attenuation (and various combinations of them)
lead to a number of control problems. In each one, we may have a state feedback
version, where all state variables can be measured, or an output feedback version, where
only an output
vector, whose dimension is often less than the dimension of the state, can be measured.
In a typical control problem, there are additional goals for the design, like meeting certain
requirements on the transient response or satisfying constraints on the control signal. Here, we
limit our discussion to the basic tasks of stabilization, tracking, and disturbance
rejection.
2. Stability
Stability analysis plays a central role in control. There are different concepts of stability.
The most dominant one is the concept formulated by Lyapunov at the end of the
nineteenth century and further developed by many researchers throughout the twentieth
century. The concept is concerned with the stability of steady-state solutions, such as
equilibrium points and periodic orbits. In the 1960s and 70s, the concepts of input-output
stability and passivity were formulated by Popov, Sandberg, Willems, and Zames,
among others. These concepts are particularly useful when we analyze the feedback
connection of Figure 1.
Figure 1. Feedback connection of two subsystems $H_1$ and $H_2$.

2.1. Lyapunov Stability
In its simplest form, Lyapunov stability is concerned with the stability of an equilibrium
point of the nonlinear system $\dot{x} = f(x)$. Lyapunov formulated the notions of stability
and asymptotic stability of an equilibrium point. For an asymptotically stable
equilibrium point, all trajectories starting in a region that contains the point, called the
region of attraction, converge to it as time tends to infinity. The key idea of Lyapunov
stability is that if we take a scalar function $V(x)$ that vanishes at the equilibrium point
and is positive in its neighborhood, and if we calculate the time derivative of this
function using the chain rule,

$$\dot{V} = \frac{\partial V}{\partial x}\dot{x} = \frac{\partial V}{\partial x} f(x),$$
then the sign of $\dot{V}$ reveals whether $V$ is increasing or decreasing along the trajectory
passing through $x$. If we can show that $\dot{V}$ is negative in the neighborhood of the
equilibrium point, we can conclude that it is asymptotically stable. The ingenious idea
here is that we do not have to solve the state equation in order to determine that the
trajectories move toward the equilibrium point. We only need to examine the sign of $\dot{V}$.
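As a simple illustration (added here; this standard example is not part of the original chapter), consider the scalar system $\dot{x} = -x^{3}$ with the candidate $V(x) = \tfrac{1}{2}x^{2}$. Then

$$\dot{V} = \frac{\partial V}{\partial x} f(x) = x\,(-x^{3}) = -x^{4} < 0 \quad \text{for all } x \neq 0,$$

so the origin is asymptotically stable, a conclusion reached without solving the state equation.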
The chapter on Lyapunov stability reviews the basic elements of the theory. It also
describes an important extension of the basic theory, known as the invariance principle,
which allows us to relax the requirement that $\dot{V}$ be negative everywhere around the
equilibrium point. It allows $\dot{V}$ to be zero on certain sets, as long as the trajectories cannot
stay identically in those sets. Lyapunov theory is very versatile and applies to
a wide range of mathematical models. The main challenge is finding the Lyapunov
function. Although there is no general systematic method to find a Lyapunov function,
research over the years has shown how to choose Lyapunov function candidates for various classes of systems.

2.2. Input-Output Stability

In the input-output approach, a system is viewed as a mapping from input signals to output signals. For a linear time-invariant system, such a mapping is given by its equivalent transfer function model. Developing similar models for nonlinear systems
was challenging, but starting in the 1960s progress was made towards developing
functional models for nonlinear systems. This is reviewed in the chapter on Volterra
and Fliess Series Expansion.
In input-output stability, the input $u$ belongs to a space of signals $\mathcal{L}$; e.g., the space of
bounded signals or the space of square-integrable signals. Keeping aside some
technicalities, we can say that the system is $\mathcal{L}$ stable if the output satisfies

$$\|y\| \le \alpha(\|u\|) + \beta$$

for some function $\alpha$, which is strictly increasing and vanishes at zero, and some nonnegative
bias constant $\beta$. When the preceding inequality takes the special form

$$\|y\| \le \gamma \|u\| + \beta$$

where $\gamma$ is a positive constant, the system is finite-gain $\mathcal{L}$ stable and the smallest such
$\gamma$ is called the gain of the system. This notion of input-output stability is introduced in
the chapter on Input-Output Stability. It applies, of course, to the case when the input-
output relationship is determined by a state model, but its real strength comes from the
fact that it applies to systems that cannot be represented by a finite-dimensional state
model, such as time-delay and infinite-dimensional systems.
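For example (an added note, using standard facts about linear systems), a stable single-input single-output linear time-invariant system with transfer function $G(s)$ is finite-gain $\mathcal{L}_2$ stable with

$$\|y\|_{\mathcal{L}_2} \le \Big(\sup_{\omega} |G(j\omega)|\Big)\, \|u\|_{\mathcal{L}_2},$$

and no smaller constant will do, so its $\mathcal{L}_2$ gain is the peak of its frequency response (the $\mathcal{H}_\infty$ norm).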
2.3. Passivity
In the study of physical systems, such as electrical networks or mechanical systems, the
concept of stored energy is often useful in understanding the behavior of the system. For
example, in an RLC electrical network with passive components, the energy absorbed
by the network over any period of time is greater than or equal to the increase in the
stored energy over the same period. In the 1960s, Popov, Zames, and others, and later
on in the 1970s, Willems, Hill, Moylan, and others, were able to extend this notion to a
dynamical system for which a physical energy might not be well defined. The extension
is based on a storage function (playing the role of energy) and a supply rate (playing the
role of power flow into a network) such that the integral of the supply rate over any
period of time is greater than or equal to the increase in the storage function over the
same period. Such a system is called dissipative. When the supply rate is the inner
product of the input and output vectors, that is, $u^{T} y$, the system is said to be passive.
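Stated as a formula (added here for concreteness), with storage function $V(x)$ along a trajectory $x(t)$, passivity requires

$$\int_{0}^{T} u^{T}(t)\, y(t)\, dt \;\ge\; V(x(T)) - V(x(0)) \qquad \text{for all } T \ge 0,$$

so the energy supplied through the input-output port over any interval is at least the increase in stored energy.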
2.4. Feedback Systems
Input-output stability and passivity concepts have been very effective in analyzing the
stability of the feedback connection of Figure 1. Two celebrated results for this system
are the small-gain theorem and the passivity theorem (see Analysis of Nonlinear Control
Systems, Input-Output Stability, and Passivity Based Control). The (classical) small-
gain theorem says that if the feedback components $H_1$ and $H_2$ are finite-gain $\mathcal{L}$ stable
with gains $\gamma_1$ and $\gamma_2$, then the feedback connection is finite-gain $\mathcal{L}$ stable if $\gamma_1 \gamma_2 < 1$.
The passivity theorem says that the feedback connection of two passive systems is
passive. These two theorems can be viewed as nonlinear generalizations of the linear
gain and phase results in the Nyquist-Bode theory. When $H_1$ and $H_2$ are stable linear
time-invariant systems represented by their transfer functions, the Nyquist-Bode theory
tells us that the feedback connection will be stable if the loop gain is less than one or the
loop phase shift is less than $180^{\circ}$. The connection to the small-gain theorem is obvious.
The connection to the passivity theorem can be seen by noting that for a linear system to
be passive, its phase shift cannot exceed $90^{\circ}$.
The passivity theorem plays an important role in passivity based control (see Passivity
Based Control). The small-gain theorem provides a conceptual framework for
understanding many of the robustness results that arise in the study of dynamical
systems, especially when feedback is used. Quite often, dynamical systems subject to
model uncertainties can be represented in the form of a feedback connection with $H_1$,
say, as a stable nominal system and $H_2$ as a stable perturbation. Then, the requirement
$\gamma_1 \gamma_2 < 1$ is satisfied whenever $\gamma_2$ is small enough. The classical small-gain theorem
applies to finite-gain stability. In the 1990s, Hill, Jiang, Mareels, Praly, and Teel
extended the small-gain theorem to the more general case when a gain $\gamma$ is replaced by
a gain function $\gamma(\cdot)$, leading to a small-gain condition of the form

$$\gamma_1(\gamma_2(r)) < r \qquad \text{for all } r > 0.$$
3. Sensitivity Analysis and Asymptotic Methods
The chapter on Analysis of Nonlinear Control Systems describes some useful analysis
tools, namely, sensitivity analysis, the averaging method, and the singular perturbation
method. Broadly speaking, there are two classes of
approximation methods that engineers and scientists should have at their disposal as
they analyze nonlinear systems: (1) numerical solution methods and (2) asymptotic
methods. Asymptotic methods reveal multiple-time-scale structures inherent in many
practical problems. Quite often, the solution of the state equations exhibits the
phenomenon that some variables move in time faster than other variables, leading to the
classification of variables as slow and fast. The averaging and singular perturbation
methods deal with the interaction of slow and fast variables. In the case of averaging,
the fast variables take the form of fast oscillations, while in singular perturbations they
appear as rapidly decaying signals.
4. Linearization and Gain Scheduling

Faced with the difficult task of designing feedback control for nonlinear systems, it is
only natural that control engineers appealed to the neat results available for linear
systems. By linearizing a nonlinear system about an operating (equilibrium) point, or a
desired trajectory, we obtain a linear model that approximates the nonlinear system in
the vicinity of the operating point. We can then use the linear control theory to design a
feedback controller, which we apply to the nonlinear system and expect it to work as
long as the trajectories of the nonlinear system remain in the vicinity of the operating
point. We illustrate the design-via-linearization approach by considering the stabilization
of the nonlinear system

$$\dot{x} = f(x,u), \qquad y = h(x) \tag{1}$$

about an equilibrium point at the origin, where $f(0,0) = 0$ and $h(0) = 0$. Linearization
about the origin yields the linear model

$$\dot{x} = Ax + Bu, \qquad y = Cx \tag{2}$$

where

$$A = \frac{\partial f}{\partial x}(x,u)\bigg|_{x=0,\,u=0}, \qquad
B = \frac{\partial f}{\partial u}(x,u)\bigg|_{x=0,\,u=0}, \qquad
C = \frac{\partial h}{\partial x}(x)\bigg|_{x=0}.$$

Using linear control theory, we design a dynamic output feedback controller

$$\dot{z} = Fz + Gy, \qquad u = Lz + My \tag{3}$$

such that the closed-loop matrix

$$\begin{bmatrix} A + BMC & BL \\ GC & F \end{bmatrix} \tag{4}$$

is Hurwitz; that is, all its eigenvalues have negative real parts. An example of such
design is the observer-based controller
$$\dot{z} = (A - BK - HC)z + Hy, \qquad u = -Kz \tag{5}$$

where $K$ and $H$ are designed such that $A - BK$ and $A - HC$ are Hurwitz. The details
of such a linear design are given in the chapters on Design of State Space controllers
(Pole Placement) for SISO systems and Control of Linear Multivariable Systems. When
the controller (3) is applied to the nonlinear system (1), it results in the closed-loop
system

$$\dot{x} = f(x, Lz + Mh(x)), \qquad \dot{z} = Fz + Gh(x),$$

whose linearization at the origin is the matrix in (4); since that matrix is Hurwitz, the
origin of the closed-loop system is asymptotically stable.
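To make the procedure concrete, the following sketch (added here; the pendulum model, parameter values, and pole locations are illustrative assumptions, not taken from the chapter) linearizes a pendulum at its equilibrium and implements the observer-based controller (5) with Python/SciPy:

# A minimal sketch of design via linearization and the observer-based controller (5).
# The pendulum model, parameter values, and pole locations are illustrative assumptions.
import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

# Nonlinear plant: pendulum-like system  x1' = x2,  x2' = -a sin(x1) - b x2 + c u,  y = x1
a, b, c = 10.0, 1.0, 10.0

def f(x, u):
    return np.array([x[1], -a * np.sin(x[0]) - b * x[1] + c * u])

# Linearization at the equilibrium x = 0, u = 0: the matrices A, B, C of Eq. (2)
A = np.array([[0.0, 1.0], [-a, -b]])
B = np.array([[0.0], [c]])
C = np.array([[1.0, 0.0]])

# Observer-based controller (5): choose K and H so that A - BK and A - HC are Hurwitz
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix        # state-feedback gain
H = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T  # observer gain

def closed_loop(t, w):
    x, z = w[:2], w[2:]
    u = -(K @ z).item()                                # u = -K z
    y = (C @ x).item()                                 # measured output
    dz = (A - B @ K - H @ C) @ z + (H * y).ravel()     # z' = (A - BK - HC) z + H y
    return np.concatenate([f(x, u), dz])

sol = solve_ivp(closed_loop, (0.0, 5.0), [0.5, 0.0, 0.0, 0.0], max_step=0.01)
print("final plant state:", sol.y[:2, -1])             # should approach the origin here

Here place_poles is used only as a convenient way to obtain $K$ and $H$; any method that renders $A - BK$ and $A - HC$ Hurwitz would serve.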
The linearization approach is clearly local; that is, it can only guarantee asymptotic
stability but it cannot, in general, prescribe the region of attraction nor achieve global
asymptotic stability. Gain scheduling is a technique that can extend the validity of the
linearization approach to a range of operating points. In many situations, it is known
how the dynamics of a system change with its operating point. It might even be possible
to model the system in such a way that the operating point is parameterized by one or
more variables, which are called scheduling variables. In such situations, we may
linearize the system at several equilibrium points (corresponding to different values of
the scheduling variables), design a linear feedback controller at each point, and
implement the resulting family of linear controllers as a single controller whose
parameters are changed by monitoring the scheduling variables. Such a controller is
called a gain-scheduled controller.
The concept of gain scheduling originated in connection with flight control systems.
The nonlinear equations of motion of an airplane or a missile are linearized about
selected operating points that capture the key modes of operation throughout the flight
envelope. Linear controllers are designed to achieve the desired stability and
performance requirements for the linearizations about the selected operating points. The
parameters of the controllers are then interpolated as functions of gain scheduling
variables; typical variables are dynamic pressure, Mach number, altitude, and angle of
attack. Finally, the gain-scheduled controller is implemented on the nonlinear system.
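As a schematic illustration of the interpolation step (added here; the scheduling variable, operating points, and gain values below are hypothetical, not from any flight control design), the gains produced by the point designs can be stored in a table and interpolated at run time:

# Minimal sketch of gain scheduling by interpolation (hypothetical numbers).
import numpy as np

# Controller gains designed off-line at selected operating points,
# indexed by a scheduling variable (e.g., dynamic pressure in kPa).
sched_points = np.array([5.0, 20.0, 50.0])   # values of the scheduling variable
kp_table     = np.array([2.0, 1.2, 0.6])     # proportional gains from the point designs
ki_table     = np.array([1.0, 0.5, 0.2])     # integral gains from the point designs

def scheduled_gains(q_bar):
    """Interpolate the stored gains at the measured scheduling variable."""
    kp = np.interp(q_bar, sched_points, kp_table)
    ki = np.interp(q_bar, sched_points, ki_table)
    return kp, ki

print(scheduled_gains(12.0))   # gains blended between the 5 kPa and 20 kPa designs

A point worth noting is that stability of the individual point designs does not by itself guarantee stability under rapid variation of the scheduling variable, which is why gain-scheduled designs are validated by extensive simulation.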
5. Nonlinear Geometric Methods
A turning point in nonlinear control came in the 1980s with the development of the
nonlinear geometric approach. Differential geometry proved to be an effective method
for the analysis and design of nonlinear control systems. Synthesis problems like
disturbance decoupling, non-interacting control, and output regulation have been dealt
with using the tools of differential geometry. The most important achievements of the
geometric approach concern structural concepts such as controllability and observability,
and much research has been directed towards studying these concepts for nonlinear systems.
chapter on Controllability and Observability of Nonlinear Systems surveys the main
approaches for studying controllability and observability of nonlinear systems, with
emphasis on the differential geometric approach. The idea of feedback linearization
appeared toward the end of the 1970s, motivated by physical examples like the
computed torque method of robotic manipulators. The basic question of feedback
linearization is: Can we transform a nonlinear system into an equivalent linear system
by state feedback and/or a change of variables? The answer to this question takes the
form of a set of simultaneous linear partial differential equations. The differential
geometric approach made it possible to characterize the existence of a solution for these
equations. More importantly, it led to the development of the concepts of relative
degree, zero dynamics, and normal form, which permeate our current thinking about
nonlinear systems. We will come back to these concepts in the next section. The main
elements of the theory of feedback linearization are reviewed in the chapter on Feedback Linearization.
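As a simple illustration of the idea (added here, not part of the original chapter), consider the system

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = f(x) + g(x)\,u, \qquad g(x) \neq 0,$$

which covers, for example, a single-link manipulator. The state feedback

$$u = \frac{1}{g(x)}\bigl(-f(x) + v\bigr)$$

cancels the nonlinearity and yields the linear double integrator $\dot{x}_1 = x_2$, $\dot{x}_2 = v$, for which $v$ can be designed by linear methods; this is the mechanism behind the computed torque method mentioned above.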
-
-
-
Bibliography
Basar T., Bernhard P. (1995). H∞-Optimal Control and Related Minimax Design Problems. Boston:
Birkhäuser, second edn.
Byrnes C.I., Priscoli F.D., Isidori A. (1997). Output Regulation of Uncertain Nonlinear Systems. Boston:
Birkhäuser.
Freeman R., Kokotovic P. (1996). Robust Nonlinear Control Design: State-Space and Lyapunov
Techniques. Boston: Birkhäuser.
Isidori A. (1995). Nonlinear Control Systems. New York: Springer-Verlag, 3rd edn.
Kokotovic P.V. (1992). The joy of feedback: Nonlinear and adaptive. IEEE Control Systems Magazine
12, 7-17.
Krstic M., Kanellakopoulos I., Kokotovic P. (1995). Nonlinear and Adaptive Control Design. New York:
Wiley-Interscience.
Lozano R., Brogliato B., Egeland O., Maschke B. (2000). Dissipative Systems Analysis and Control:
Theory and Applications. London: Springer.
Marino R., Tomei P. (1995). Nonlinear Control Design: Geometric, Adaptive & Robust. London:
Prentice-Hall.
Nijmeijer H., van der Schaft A.J. (1990). Nonlinear Dynamical Control Systems. Berlin: Springer-Verlag.
Ortega R., Loria A., Nicklasson P.J., Sira-Ramirez H. (1998). Passivity-based Control of Euler-Lagrange
Systems. London: Springer.
Qu Z. (1998). Robust Control of Nonlinear Uncertain Systems. New York: Wiley-Interscience.
Rugh W.J., Shamma J.S. (2000). Research on gain scheduling. Automatica 36, 1401-1425.
Sastry S. (1999). Nonlinear Systems: Analysis, Stability, and Control. New York: Springer.
Sepulchre R., Jankovic M., Kokotovic P. (1997). Constructive Nonlinear Control. London: Springer.
Slotine J.J., Li W. (1991). Applied Nonlinear Control. Englewood Cliffs, New Jersey: Prentice-Hall.
Utkin V., Guldner J., Shi J. (1999). Sliding Mode Control in Electromechanical Systems. London: Taylor
& Francis.
Utkin V.I. (1992). Sliding Modes in Control and Optimization. New York: Springer-Verlag.
van der Schaft A. (2000). L2-Gain and Passivity Techniques in Nonlinear Control. London: Springer.
Vidyasagar M. (1993). Nonlinear Systems Analysis. Englewood Cliffs, NJ: Prentice Hall, second edn.
Biographical Sketch
Hassan K. Khalil received the B.S. and M.S. degrees from Cairo University, Cairo, Egypt, and the Ph.D.
degree from the University of Illinois, Urbana-Champaign, in 1973, 1975, and 1978, respectively, all in
Electrical Engineering.
Since 1978, he has been with Michigan State University, East Lansing, where he is currently University
Distinguished Professor of Electrical and Computer Engineering. He has consulted for General Motors
and Delco Products.
He has published several papers on singular perturbation methods, decentralized control, robustness,
nonlinear control, and adaptive control. He is author of the book Nonlinear Systems (Macmillan, 1992;
Prentice Hall, 1996 and 2002), coauthor, with P. Kokotovic and J. O'Reilly, of the book Singular
Perturbation Methods in Control: Analysis and Design (Academic Press, 1986; SIAM 1999), and
coeditor, with P. Kokotovic, of the book Singular Perturbation in Systems and Control (IEEE Press,
1986). He was the recipient of the 1983 Michigan State University Teacher Scholar Award, the 1989
George S. Axelby Outstanding Paper Award of the IEEE Transactions on Automatic Control, the 1994
Michigan State University Withrow Distinguished Scholar Award, the 1995 Michigan State University
Distinguished Faculty Award, the 2000 American Automatic Control Council Ragazzini Education
Award, and the 2002 IFAC Control Engineering Textbook Prize. He has been an IEEE Fellow since 1989.
Dr. Khalil served as Associate Editor of IEEE Transactions on Automatic Control, 1984 - 1985;
Registration Chairman of the IEEE-CDC Conference, 1984; Finance Chairman of the 1987 American
Control Conference (ACC); Program Chairman of the 1988 ACC; General Chair of the 1994 ACC;
Associate Editor of Automatica, 1992-1999; Action Editor of Neural Networks, 1998-1999; and Member
of the IEEE-CSS Board of Governors, 1999-2002. Since 1999, he has been serving as Editor of
Automatica for nonlinear systems and control.