ECE311S Dynamic Systems & Control: Course Notes by Bruce Francis January 2010
C. As we move from a warm room to the cold outdoors, our body temperature is maintained. Nature has many other interesting examples of control systems: large numbers of fish move in schools, their formations controlled by local interactions; the population of a species, which varies over time, is a result of many dynamic interactions with predators and food supplies. And human organizations are controlled by regulations and regulatory agencies.
Other familiar examples of control systems:
autofocus mechanism in cameras
cruise control system in cars
thermostat temperature control systems.
1.2 DC to DC Voltage Converters
In this section we go into a little detail for a specific, interesting example of a control system in wide use. You may already have studied this in ECE315 Switched-mode Power Supplies.

Every laptop has a voltage regulator, and so do many other consumer electronic appliances. The electronic components (hard drive, motherboard, video card, etc.) need electric power, and the voltage has to be regulated. Typically, an electronic component may need 2 V, while the battery is rated at 12 V, but is not constant. This raises the problem of stepping down a source voltage V_s, such as a battery, to a regulated value at a load.
CHAPTER 1. INTRODUCTION 2
Efficiency

Let's first model the load as a fixed resistor, R_l (subscript l for load). Then of course a voltage-dividing resistive circuit suggests itself¹:

[Figure: source V_s feeding a resistive voltage divider with the load R_l, output voltage v_o — "The simplest voltage converter"]
This solution is unsatisfactory for several reasons. First, it is inefficient. Suppose the step-down ratio is 1/2 and the load resistance R_l is fixed. Then the second resistor is R_l too. The current equals V_s/(2R_l) and therefore the power supplied by the input equals (voltage × current)

    P_in = V_s · V_s/(2R_l)

while the output power consumed equals

    P_o = v_o · V_s/(2R_l) = V_s · V_s/(4R_l).

The efficiency is therefore only

    η = P_o/P_in = 50%,

which is the ratio of the voltages. A second reason is that the battery will drain and therefore v_o will not be regulated.
Switched-mode power supply

As a second attempt at a solution, let us try a switching circuit:

[Figure: source V_s, switch S_1, load R_l, output voltage v_o — "Voltage regulation via switching"]

¹The output voltage, v_o, is written lower case because eventually it's not going to be perfectly constant.
The switch S_1, typically a MOSFET, closes periodically and thereafter opens according to a duty cycle, as follows. Let T be the period of opening in seconds. The time axis is divided into intervals of width T:

    ..., [0, T), [T, 2T), [2T, 3T), ....

Over the interval [0, T), the switch is closed for the subinterval [0, DT) and then open for [DT, T), where D, the duty cycle, is a number between 0 and 1. Likewise for every other interval. The duty cycle will have to be adjusted for each interval, but for now let's suppose it is constant. The switch S_2 is complementary: it is open when S_1 is closed, and closed when S_1 is open. This second switch is needed so that there is always a closed circuit around the load. The idea in the circuit is to choose D to get the desired regulated value of v_o. For example, if we want v_o to be half the value of V_s, we choose D = 1/2. In this case v_o(t) would look like this:
[Figure: graph of v_o(t), a square wave equal to V_s on [0, DT) and 0 on [DT, T), repeating with period T — "Square-wave voltage"]
Clearly, the average value of v_o(t) is correct, but v_o(t) is far from being constant. How about efficiency? Over the interval [0, DT), S_1 is closed and the current flowing is V_s/R_l. The input and output powers are equal. Over the interval [DT, T), S_1 is open and the current flowing is 0. The input and output powers are again equal. Therefore the efficiency is 100%. However, we have not accounted for the power required to activate the switches.
Inclusion of a filter

Having such large variations in v_o is of course unacceptable. This suggests we need a circuit to filter out the variations. Let us try adding an inductor:

[Figure: source V_s, switch S, series inductor L, load R_l, output voltage v_o — "Switched regulator with a filter"]
This circuit is equivalent to this one:

[Figure: square-wave source v_1 driving the series inductor L and load R_l, output voltage v_o — "Square-wave input voltage"]

where the input is

[Figure: graph of v_1(t), a square wave equal to V_s on [0, DT) and 0 on [DT, T), period T]
A square wave into a circuit can be studied using Fourier series, which is a fun exercise. Suffice it to say that the average output voltage v_o(t) equals DV_s, duty cycle times input voltage, and there's a ripple whose first harmonic can be made arbitrarily small by suitable design of the switching frequency or the circuit time constant.

In practice, the circuit also has a capacitor:

[Figure: source V_s, switch S, series inductor L, capacitor C in parallel with the load R_l, output voltage v_o — "Buck step-down converter"]

This is called a DC-DC buck converter. The LC filter is lossless and this contributes to the efficiency of the converter.
Left out of the discussion so far is that in reality V_s is not constant (the battery drains) and R_l is not a fully realistic model for a load. In practice a controller is designed in a feedback loop from v_o to switch S_1. A battery drains fairly slowly, so it is reasonable to assume V_s is constant and let the controller make adjustments accordingly. As for the load, a more realistic model includes a current source, reflecting the fact that the load draws current.

In practice, the controller can be either analog or digital. We'll return to converters in Chapter 5.
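The averaging claim above, average output ≈ D·V_s, is easy to check numerically. The sketch below (not from the notes; all component values are arbitrary illustrative choices) integrates the switched L-R_l circuit with a forward-Euler loop and averages the output after the transient dies out:

```python
# Sanity check: simulate L di/dt = v1 - Rl*i, vo = Rl*i, with a square-wave
# input v1 of duty cycle D, and confirm the average vo approaches D*Vs.
# Vs, Rl, L, D, T below are assumed values, not taken from the notes.

def simulate_buck(Vs=12.0, Rl=2.0, L=1e-3, D=0.5, T=1e-4, periods=200, steps=1000):
    dt = T / steps
    i = 0.0
    vo_samples = []
    for p in range(periods):
        for k in range(steps):
            v1 = Vs if k / steps < D else 0.0   # square-wave input
            i += dt * (v1 - Rl * i) / L          # forward-Euler step
            if p >= periods // 2:                # average only after the transient
                vo_samples.append(Rl * i)
    return sum(vo_samples) / len(vo_samples)

avg = simulate_buck()
print(avg)   # close to D*Vs = 6 V
```

Forward Euler is crude but adequate here because the step is tiny compared with the circuit time constant L/R_l; a real study would use a proper ODE solver.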
1.3 Linear Functions
In this course we deal only with linear systems. Or rather, whenever we get a nonlinear system, we linearize it as soon as possible. So before we proceed we had all better be very certain about what a linear function is. Let us recall even what a function is. If X and Y are two sets, a function from X to Y is a rule that assigns to every element of X an unambiguous element of Y; there cannot be two possible different values for some x in X. The terms function, mapping, and transformation are synonymous. The notation

    f : X → Y

means that f is a function from X to Y. We typically write y = f(x) for a function. To repeat, for each x there must be one and only one y such that y = f(x); y_1 = f(x) and y_2 = f(x) with y_1 ≠ y_2 is not allowed.

Let f be a function R → R. This means that f takes a real variable x and assigns a real variable y, written y = f(x). So f has a graph in the (x, y) plane. To say that f is linear means the graph is a straight line through the origin; there's only one straight line that is not allowed, the y-axis. Thus y = ax defines a linear function for any real constant a; the equation defining the y-axis is x = 0. The function y = 2x + 1 is not linear: its graph is a straight line, but not through the origin.
In your linear algebra course you were taught that a linear function is a function f from a vector space X to another (or the same) vector space Y having the property

    f(a_1 x_1 + a_2 x_2) = a_1 f(x_1) + a_2 f(x_2)

for all vectors x_1, x_2 in X and all real numbers² a_1, a_2. If the vector spaces are X = R^n, Y = R^m, and if f is linear, then it has the form f(x) = Ax, where A is an m × n matrix. Conversely, every function of this form is linear.
This concept extends beyond vectors to signals. For example, consider a capacitor, whose constitutive law is

    i = C dv/dt.

Here, i and v are not constants or vectors; they are functions of time. If we think of the current i as a function of the voltage v, then the function is linear. This is because

    C d(a_1 v_1 + a_2 v_2)/dt = a_1 C dv_1/dt + a_2 C dv_2/dt.

On the other hand, if we try to view v as a function of i, then we have a problem, because we need in addition an initial condition v(0) (or some other initial time) to uniquely define v, not just i. Let us set v(0) = 0. Then v is a linear function of i. You can see this from the integral form of the capacitor equation:

    v(t) = (1/C) ∫₀ᵗ i(τ) dτ.
²This definition assumes that the vector spaces are over the field of real numbers.
1.4 Notation
Generally, signals are written lower case: e.g., x(t). Their transforms are capitalized: X(s) or X(jω). Resistance values, masses, etc. are capital: R, M, etc. The impulse is δ(t). In signals and systems the unit step is denoted u(t), but in control u(t) denotes a plant input. Following Zemanian³, we denote the unit step by 1₊(t).

³Distribution Theory and Transform Analysis, A. H. Zemanian, Dover
Chapter 2
Mathematical Models of Systems
The best (only?) way to learn this subject is bottom up, from specific examples to general theory. So we begin with mathematical models of physical systems. We mostly use mechanical examples since their behaviour is easier to understand than, say, electromagnetic systems, because of our experience in the natural world.
2.1 Block Diagrams
The importance of block diagrams in control engineering can't be overemphasized. One could easily argue that you don't understand your system until you have a block diagram of it.

We shall take the point of view that a block diagram is a picture of a function. We can draw a picture of the function y = f(x) like this:

[Block diagram: arrow x into a box labeled f, arrow y out]

Thus a box represents a function and the arrows represent variables; the input is the independent variable, the output the dependent variable.
Example
The simplest vehicle to control is a cart on wheels:
[Figure: cart on wheels; applied force u, position y]
CHAPTER 2. MATHEMATICAL MODELS OF SYSTEMS 8
This is a schematic diagram, not a block diagram, because it doesn't say which of u, y causes the other. Assume the cart can move only in a straight line on a flat surface. (There may be air resistance to the motion and other friction effects.) Assume a force u is applied to the cart and let y denote the position of the cart measured from a stationary reference position. Then u and y are functions of time t and we could indicate this by u(t) and y(t). We regard the functions u and y as signals.

Newton's second law tells us that there's a mathematical relationship between u and y, namely, u = Mÿ. We take the viewpoint that the force can be applied independently of anything else, that is, it's an input. Then y is an output. We represent this graphically by a block diagram:

[Block diagram: input u into a box, output y]

Suppose the cart starts at rest at the origin at time 0, i.e., y(0) = ẏ(0) = 0. Then the position depends only on the force applied. However y at time t depends on u not just at time t, but on past times as well. So we can write y = F(u), i.e., y is a function of u, but we can't write y(t) = F(u(t)) because the position at time t doesn't depend only on the force at that same time t. □
Block diagrams also may have summing junctions:

[Block diagram: a summing junction with inputs u and v and output y stands for y = u + v; the same junction with a minus sign at the v arrow stands for y = u − v]

Also, we may need to allow a block to have more than one input:

[Block diagram: a single box with two inputs u, v and one output y]

This means that y is a function of u and v, y = F(u, v).
Example

[Figure: a flat board on a fulcrum; a can of soup is free to roll on the board; tilt angle θ, roll distance d]

Suppose a torque τ is applied to the board. Let θ denote the angle of tilt and d the distance of roll. Then both θ and d are functions of τ. The block diagram could be

[Block diagram: input τ into a single box with two outputs, θ and d]

or

[Block diagram: input τ into a box with output θ, which feeds a second box with output d]

or initial conditions, if they are not fixed, could be modeled as inputs too. □

Getting a block diagram is sometimes harder than you think it will be. For example, you're riding a skateboard: the system components obviously include you and the skateboard. Anything else? You need the road to push on, and there must also be the earth to create a gravitational force on you (you can't skateboard in space). Would that be it? You, the skateboard, the road/earth. Does this system have an input? It must, if you're to move along the road following, say, the bike lane. Try drawing a block diagram.
2.2 State Models
In control engineering, the system to be controlled is termed the plant. For example, in helicopter flight control, the plant is the helicopter itself plus its sensors and actuators. The control system is implemented in an onboard computer. The design of a flight control system for a helicopter requires first the development of a mathematical model of the helicopter dynamics. This is a very advanced subject, well beyond the scope of this course. We must content ourselves with much simpler plants.
Example
Consider a cart on wheels, driven by a force F and subject to air resistance:

[Figure: cart of mass M, applied force F, position y]

Typically air resistance creates a force depending on the velocity ẏ; let's say this force is a possibly nonlinear function D(ẏ). Assuming M is constant, Newton's second law gives

    Mÿ = F − D(ẏ).

We are going to put this in a standard form by defining two so-called state variables, in this example position and velocity:

    x_1 := y,  x_2 := ẏ.

Then

    ẋ_1 = x_2
    ẋ_2 = (1/M)F − (1/M)D(x_2)
    y = x_1.

These equations have the form

    ẋ = f(x, u),  y = h(x)     (2.1)

where

    x := [x_1; x_2],  u := F

    f : R² × R → R²,  f(x_1, x_2, u) = [x_2; (1/M)u − (1/M)D(x_2)]

    h : R² → R,  h(x_1, x_2) = x_1.

The function f is nonlinear if D is; h is linear. Equation (2.1) constitutes a state model of the system, and x is called the state or state vector. The block diagram is

[Block diagram: input u into a box P with internal state x, output y]
Here P is a possibly nonlinear system, u (applied force) is the input, y (cart position) is the output, and

    x = [cart position; cart velocity]

is the state of P. (We'll define state later.)

As a special case, suppose the air resistance is a linear function of velocity:

    D(x_2) = D_0 x_2,  D_0 a constant.

Then f is linear:

    f(x, u) = Ax + Bu,  A := [0  1; 0  −D_0/M],  B := [0; 1/M].

Defining C = [1  0], we get the state model

    ẋ = Ax + Bu,  y = Cx.     (2.2)

This model is of a linear, time-invariant (LTI) system. □

It is convenient to write vectors sometimes as column vectors and sometimes as n-tuples, i.e., ordered lists. For example

    x := [x_1; x_2],  x = (x_1, x_2).

We shall use both.
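As a quick check of the LTI cart model, one can integrate ẋ = Ax + Bu for a constant force and watch the velocity approach the terminal value u/D_0, where the drag exactly cancels the applied force. The numbers below are assumed for illustration, not taken from the notes:

```python
# Integrate the cart state model x1' = x2, x2' = -(D0/M)*x2 + u/M with a
# constant force, using forward Euler. M, D0, u are assumed values.

M, D0 = 1.0, 0.5          # assumed mass and linear drag coefficient
u = 2.0                   # constant applied force

def f(x):
    # x = (position, velocity); this is A x + B u for the cart
    return (x[1], -D0 / M * x[1] + u / M)

x = (0.0, 0.0)
dt = 1e-3
for _ in range(int(20.0 / dt)):          # simulate 20 s = 10 time constants
    k = f(x)
    x = (x[0] + dt * k[0], x[1] + dt * k[1])

print(x[1])   # terminal velocity ~ u/D0 = 4
```

The time constant is M/D_0 = 2 s, so after 20 s the transient is entirely gone.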
Generalizing the example, we can say that an important class of models is

    ẋ = f(x, u),  f : R^n × R^m → R^n
    y = h(x, u),  h : R^n × R^m → R^p.

This model is nonlinear, time-invariant. The input u has dimension m, the output y dimension p, and the state x dimension n. An example where m = 2, p = 2, n = 4 is

[Figure: two carts of masses M_1 and M_2 coupled by a spring K; applied forces u_1, u_2, positions y_1, y_2]

    u = (u_1, u_2),  y = (y_1, y_2),  x = (y_1, ẏ_1, y_2, ẏ_2).

The LTI special case is

    ẋ = Ax + Bu,  A ∈ R^{n×n},  B ∈ R^{n×m}
    y = Cx + Du,  C ∈ R^{p×n},  D ∈ R^{p×m}.
Now we turn to the concept of the state of a system. Roughly speaking, x(t_0) encapsulates all the system dynamics up to time t_0, that is, no additional prior information is required. More precisely, the concept is this: for any t_0 and t_1, with t_0 < t_1, knowing x(t_0) and knowing u(t) for t_0 ≤ t ≤ t_1, we can compute x(t_1), and hence y(t_1).

Example

[Figure: cart of mass M, position y; no force, no air resistance]

If we were to try simply x = y, then knowing x(t_0) without ẏ(t_0), we could not solve the initial value problem for the future cart position. Similarly x = ẏ won't work. Since the equation of motion, Mÿ = 0, is second order, we need two initial conditions at t = t_0, implying we need a 2-dimensional state vector. In general for mechanical systems it is customary to take x to consist of positions and velocities of all masses. □
Example

A mass-spring-damper:

[Figure: mass M suspended by a spring K and damper D_0, applied force u, position y; the free-body diagram shows u, gravity Mg, spring force K(y − y_0), and damper force D_0 ẏ]

The dynamic equation is

    Mÿ = u + Mg − K(y − y_0) − D_0 ẏ.

It's appropriate to take

    x = (x_1, x_2),  x_1 = y,  x_2 = ẏ

and then we get the equations

    ẋ_1 = x_2
    ẋ_2 = (1/M)u + g − (K/M)x_1 + (K/M)y_0 − (D_0/M)x_2
    y = x_1.

This has the form

    ẋ = Ax + Bu + c
    y = Cx,

where

    A = [0  1; −K/M  −D_0/M],  B = [0; 1/M],  c = [0; g + (K/M)y_0],  C = [1  0].

The constant vector c is known, and hence is taken as part of the system rather than as a signal. □
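At equilibrium with u = 0, the model above predicts the familiar static deflection: setting ẋ_2 = 0 with x_2 = 0 gives rest position y = y_0 + Mg/K, i.e., the spring stretches by Mg/K. A quick numerical check (all parameter values assumed, not from the notes):

```python
# Verify that x1 = y0 + M*g/K, x2 = 0 is an equilibrium of the
# mass-spring-damper state equations with u = 0. Parameters are assumed.

M, K, D0, g, y0 = 2.0, 50.0, 3.0, 9.8, 0.1

def xdot(x1, x2, u):
    # the state equations derived above
    return (x2, u / M + g - (K / M) * (x1 - y0) - (D0 / M) * x2)

x1_eq = y0 + M * g / K          # predicted rest position
d1, d2 = xdot(x1_eq, 0.0, 0.0)
print(d1, d2)                   # both zero at equilibrium
```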
Example
This example concerns active suspension of a vehicle for passenger comfort.

[Figure: quarter-car suspension; M_1 = mass of chassis & passengers, M_2 = mass of wheel carriage; suspension spring K_1, damper D_0, and actuator force u act between M_1 and M_2; tire spring K_2 acts between M_2 and the road surface, whose height is r; x_1 and x_2 are the heights of M_1 and M_2; the vehicle moves along the road]
To derive the equations of motion, bring in free-body diagrams:

[Free-body diagrams: M_1 is acted on by u, gravity M_1 g, the spring force K_1(x_1 − x_2 − x_10), and the damper force D_0(ẋ_1 − ẋ_2); M_2 is acted on by the reactions of those forces, by gravity M_2 g, and by the tire force K_2(x_2 − r − x_20)]

    M_1 ẍ_1 = u − K_1(x_1 − x_2 − x_10) − M_1 g − D_0(ẋ_1 − ẋ_2)
    M_2 ẍ_2 = K_1(x_1 − x_2 − x_10) + D_0(ẋ_1 − ẋ_2) − u − M_2 g − K_2(x_2 − r − x_20)
Define x_3 = ẋ_1, x_4 = ẋ_2. Then the equations can be assembled as

    ẋ = Ax + B_1 u + B_2 r + c_1     (2.3)

where

    x = [x_1; x_2; x_3; x_4],   c_1 = [0; 0; (K_1/M_1)x_10 − g; −(K_1/M_2)x_10 − g + (K_2/M_2)x_20]

    A = [ 0          0                1          0;
          0          0                0          1;
          −K_1/M_1   K_1/M_1          −D_0/M_1   D_0/M_1;
          K_1/M_2    −(K_1+K_2)/M_2   D_0/M_2    −D_0/M_2 ]

    B_1 = [0; 0; 1/M_1; −1/M_2],   B_2 = [0; 0; 0; K_2/M_2].

We can regard (2.3) as corresponding to the block diagram

[Block diagram: box P with inputs u and r and output x]
Since c_1 is a known constant vector, it's not taken to be a signal. Here u is the controlled input and r the uncontrolled input or disturbance.

The output to be controlled might be acceleration or jerk of the chassis. Taking y = ẍ_1 = ẋ_3 gives

    y = Cx + Du + c_2     (2.4)

where

    C = [−K_1/M_1   K_1/M_1   −D_0/M_1   D_0/M_1],  D = 1/M_1,  c_2 = (K_1/M_1)x_10 − g.

Equations (2.3) and (2.4) have the form

    ẋ = f(x, u, r)
    y = h(x, u).

Notice that f and h are not linear, because of the constants c_1, c_2. □
Example

A favourite toy control problem is to get a cart to automatically balance a pendulum.

[Figure: cart of mass M_1 with applied force u and position x_1; a pendulum of length L with ball of mass M_2 at the tip is pivoted on the cart, at angle θ from the vertical]

The natural state is

    x = (x_1, x_2, x_3, x_4) = (x_1, θ, ẋ_1, θ̇).

Again, we bring in free-body diagrams:
[Free-body diagram of the ball: weight M_2 g and rod force F_1; the ball sits at position (x_1 + L sin θ, L − L cos θ)]

Newton's law for the ball in the horizontal direction is

    M_2 (d²/dt²)(x_1 + L sin θ) = F_1 sin θ

and in the vertical direction is

    M_2 (d²/dt²)(L − L cos θ) = M_2 g − F_1 cos θ

and for the cart is

    M_1 ẍ_1 = u − F_1 sin θ.

These are three equations in the four signals x_1, θ, u, F_1. Use

    (d²/dt²) sin θ = θ̈ cos θ − θ̇² sin θ,   (d²/dt²) cos θ = −θ̈ sin θ − θ̇² cos θ

to get

    M_2 ẍ_1 + M_2 L θ̈ cos θ − M_2 L θ̇² sin θ = F_1 sin θ
    M_2 L θ̈ sin θ + M_2 L θ̇² cos θ = M_2 g − F_1 cos θ
    M_1 ẍ_1 = u − F_1 sin θ.

We can eliminate F_1: add the first and the third to get

    (M_1 + M_2) ẍ_1 + M_2 L θ̈ cos θ − M_2 L θ̇² sin θ = u;

multiply the first by cos θ, the second by sin θ, add, and cancel M_2 to get

    ẍ_1 cos θ + L θ̈ − g sin θ = 0.

Solve the latter two equations for ẍ_1 and θ̈:

    [ M_1 + M_2   M_2 L cos θ ] [ ẍ_1 ]   [ u + M_2 L θ̇² sin θ ]
    [ cos θ       L           ] [ θ̈   ] = [ g sin θ             ].

Thus

    ẍ_1 = (u + M_2 L θ̇² sin θ − M_2 g sin θ cos θ) / (M_1 + M_2 sin² θ)
    θ̈ = (−u cos θ − M_2 L θ̇² sin θ cos θ + (M_1 + M_2) g sin θ) / (L(M_1 + M_2 sin² θ)).

In terms of state variables we have

    ẋ_1 = x_3
    ẋ_2 = x_4
    ẋ_3 = (u + M_2 L x_4² sin x_2 − M_2 g sin x_2 cos x_2) / (M_1 + M_2 sin² x_2)
    ẋ_4 = (−u cos x_2 − M_2 L x_4² sin x_2 cos x_2 + (M_1 + M_2) g sin x_2) / (L(M_1 + M_2 sin² x_2)).

Again, these have the form

    ẋ = f(x, u).

We might take the output to be

    y = (x_1, θ) = (x_1, x_2) = h(x).

The system is highly nonlinear; as you would expect, it can be approximated by a linear system for |θ| small enough, say less than 5°. □
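The derived state equations can be coded directly, which also gives a useful sanity check: both the upright configuration (θ = 0) and the hanging one (θ = π), with zero velocities and zero force, should be equilibria, i.e., f should vanish there. The parameter values below are assumed for illustration:

```python
# Code the cart-pendulum state equations and check that f = 0 at both the
# upright and hanging rest configurations. M1, M2, L, g are assumed values.

import math

M1, M2, L, g = 1.0, 0.2, 0.5, 9.8

def f(x, u):
    x1, th, v, w = x            # (x1, theta, x1_dot, theta_dot)
    den = M1 + M2 * math.sin(th) ** 2
    a = (u + M2 * L * w**2 * math.sin(th)
         - M2 * g * math.sin(th) * math.cos(th)) / den
    al = (-u * math.cos(th) - M2 * L * w**2 * math.sin(th) * math.cos(th)
          + (M1 + M2) * g * math.sin(th)) / (L * den)
    return (v, w, a, al)

up = f((0.0, 0.0, 0.0, 0.0), 0.0)       # pendulum up
down = f((0.0, math.pi, 0.0, 0.0), 0.0) # pendulum down
print(up, down)                          # all entries numerically zero
```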
Example

This example involves a tank of water, with water flowing in at an uncontrolled rate, and water flowing out at a rate controlled by a valve:

[Figure: tank with water height x, inflow rate d, outlet valve with opening u]

The signals are x, the height of the water, u, the area of opening of the valve, and d, the flowrate in. Let A denote the cross-sectional area of the tank, assumed constant. Then conservation of mass gives

    A ẋ = d − (flow rate out).

Also

    (flow rate out) = (const) √p × (area of valve opening)

where p denotes the pressure drop across the valve, this being proportional to x. Thus

    (flow rate out) = c √x u

and hence

    A ẋ = d − c √x u.

The state equation is therefore

    ẋ = f(x, u, d) = (1/A)d − (c/A)√x u.

□
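For constant inflow d and valve opening u, the tank settles at the level where outflow matches inflow: setting ẋ = 0 gives x̄ = (d/(cu))². A quick numerical check (all constants assumed):

```python
# Check the tank equilibrium level x = (d/(c*u))**2 against the state
# equation xdot = d/A - (c/A)*sqrt(x)*u. A, c, d, u are assumed values.

import math

A, c = 2.0, 0.8          # tank cross-section and valve constant
d, u = 1.5, 0.5          # constant inflow rate and valve opening

def xdot(x):
    return d / A - (c / A) * math.sqrt(x) * u

x_eq = (d / (c * u)) ** 2
print(x_eq, xdot(x_eq))   # equilibrium level; derivative is zero there
```

Note the square root: doubling the inflow quadruples the equilibrium level, a first hint of the nonlinearity that linearization (next section) deals with.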
It is worth noting that not all systems have state models of the form

    ẋ = f(x, u),  y = h(x, u).

One example is the differentiator: y = u̇. A second is a time delay: y(t) = u(t − 1). Then of course there are time-varying systems, meaning systems whose properties change with time. For example, a mass/force system where the mass is a function of time (the mass of a plane varies as the fuel is burned):

    (d/dt)(M(t) ẋ) = u.

Finally, there are PDE models, e.g., the vibrating violin string with input the bow force.
Let us now see how to get a state model for an electric circuit.

Example

An RLC circuit:

[Figure: voltage source u in series with resistor R, inductor L, and capacitor C]

There are two energy storage elements, the inductor and the capacitor. It is natural to take the state variables to be the voltage drop across C and the current through L:

[Figure: the same circuit with x_1 the capacitor voltage and x_2 the inductor current]

Then KVL gives

    −u + Rx_2 + x_1 + Lẋ_2 = 0

and the capacitor equation is

    x_2 = Cẋ_1.

Thus

    ẋ = Ax + Bu,  A = [0  1/C; −1/L  −R/L],  B = [0; 1/L].

□
2.3 Linearization
So far we have seen that many systems can be modeled by nonlinear state equations of the form

    ẋ = f(x, u),  y = h(x, u).

(There might be disturbance inputs present, but for now we suppose they are lumped into u.) There are techniques for controlling nonlinear systems, but that's an advanced subject. However, many systems can be linearized about an equilibrium point. In this section we see how to do this. The idea is to use Taylor series.
Example

To review Taylor series, let's linearize the function y = f(x) = x³ about the point x_0 = 1. The Taylor series expansion is

    f(x) = Σ_{n=0}^∞ c_n (x − x_0)^n,  c_n = f^{(n)}(x_0)/n!
         = f(x_0) + f′(x_0)(x − x_0) + (f″(x_0)/2)(x − x_0)² + ....

Taking only the terms n = 0, 1 gives

    f(x) ≈ f(x_0) + f′(x_0)(x − x_0),

that is,

    y − y_0 ≈ f′(x_0)(x − x_0).

Defining Δy = y − y_0, Δx = x − x_0, we have the linearized function Δy = f′(x_0)Δx, or Δy = 3Δx in this case.

[Figure: graph of y = x³ with the tangent line at (1, 1) of slope 3; local coordinates Δx, Δy]

Obviously, this approximation gets better and better as |Δx| gets smaller and smaller. □
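"Better and better" can be made quantitative: since the first neglected term is quadratic, the linearization error shrinks like (Δx)², so halving Δx roughly quarters the error. A two-line numerical illustration:

```python
# Error of the linearization 1 + 3*(x - 1) of f(x) = x**3 near x0 = 1.
# The error is 3*dx**2 + dx**3, so the ratio e(0.1)/e(0.05) is close to 4.

def lin_error(dx):
    x = 1.0 + dx
    return abs(x**3 - (1.0 + 3.0 * dx))

e1 = lin_error(0.1)
e2 = lin_error(0.05)
print(e1, e2, e1 / e2)   # ratio close to 4
```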
Taylor series extends to functions f : R^n → R^m.

Example

    f : R³ → R²,  f(x_1, x_2, x_3) = (x_1 x_2 − 1, x_3² − 2x_1 x_3)

Suppose we want to linearize f at the point x_0 = (1, 1, 2). The terms n = 0, 1 in the expansion are

    f(x) ≈ f(x_0) + (∂f/∂x)(x_0)(x − x_0),

where

    (∂f/∂x)(x_0) = Jacobian of f at x_0 = [∂f_i/∂x_j (x_0)]
                 = [x_2  x_1  0; −2x_3  0  2x_3 − 2x_1] evaluated at x_0
                 = [1  1  0; −4  0  2].

Thus the linearization of y = f(x) at x_0 is Δy = AΔx, where

    A = (∂f/∂x)(x_0) = [1  1  0; −4  0  2]
    Δy = y − y_0 = f(x) − f(x_0)
    Δx = x − x_0.

□
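A handy way to catch algebra mistakes in a Jacobian is to compare it against central finite differences. The sketch below (not from the notes) does this for the example just computed:

```python
# Approximate the Jacobian of f(x1,x2,x3) = (x1*x2 - 1, x3**2 - 2*x1*x3)
# at x0 = (1,1,2) by central differences; it should reproduce the
# hand-computed matrix [[1, 1, 0], [-4, 0, 2]].

def f(x):
    x1, x2, x3 = x
    return (x1 * x2 - 1.0, x3**2 - 2.0 * x1 * x3)

def jacobian(func, x0, h=1e-6):
    J = []
    for i in range(len(func(x0))):
        row = []
        for j in range(len(x0)):
            xp = list(x0); xp[j] += h
            xm = list(x0); xm[j] -= h
            row.append((func(xp)[i] - func(xm)[i]) / (2 * h))
        J.append(row)
    return J

J = jacobian(f, [1.0, 1.0, 2.0])
print(J)   # approximately [[1, 1, 0], [-4, 0, 2]]
```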
By direct extension, if f : R^n × R^m → R^n, then

    f(x, u) ≈ f(x_0, u_0) + (∂f/∂x)(x_0, u_0)Δx + (∂f/∂u)(x_0, u_0)Δu.
Now we turn to linearizing the differential equation

    ẋ = f(x, u).

First, assume there is an equilibrium point, that is, a constant solution x(t) ≡ x_0, u(t) ≡ u_0. This is equivalent to saying that 0 = f(x_0, u_0). Now consider a nearby solution:

    x(t) = x_0 + Δx(t),  u(t) = u_0 + Δu(t),  Δx(t), Δu(t) small.

We have

    ẋ(t) = f[x(t), u(t)]
         = f(x_0, u_0) + AΔx(t) + BΔu(t) + higher-order terms

where

    A := (∂f/∂x)(x_0, u_0),  B := (∂f/∂u)(x_0, u_0).

Since ẋ = Δẋ and f(x_0, u_0) = 0, we have the linearized equation

    Δẋ = AΔx + BΔu.

Similarly, the output equation y = h(x, u) linearizes to

    Δy = CΔx + DΔu,

where

    C = (∂h/∂x)(x_0, u_0),  D = (∂h/∂u)(x_0, u_0).

Summary

Linearizing ẋ = f(x, u), y = h(x, u): select, if one exists, an equilibrium point. Compute the four Jacobians, A, B, C, D, of f and h at the equilibrium point. Then the linearized system is

    Δẋ = AΔx + BΔu,  Δy = CΔx + DΔu.

Under mild conditions (sufficient smoothness of f and h), this linearized system is a valid approximation of the nonlinear one in a sufficiently small neighbourhood of the equilibrium point.
Example

    ẋ = f(x, u) = x + u + 1
    y = h(x, u) = x

An equilibrium point is composed of constants x_0, u_0 such that

    x_0 + u_0 + 1 = 0.

So either x_0 or u_0 must be specified, that is, the analyst must select where the linearization is to be done. Let's say x_0 = 0. Then u_0 = −1 and

    A = 1,  B = 1,  C = 1,  D = 0.

Actually, here A, B, C, D are independent of x_0, u_0, that is, we get the same linear system at every equilibrium point. □
Example

We continue the cart-pendulum; see f(x, u) on page 15. An equilibrium point

    x_0 = (x_10, x_20, x_30, x_40),  u_0

satisfies f(x_0, u_0) = 0, i.e.,

    x_30 = 0
    x_40 = 0
    u_0 + M_2 L x_40² sin x_20 − M_2 g sin x_20 cos x_20 = 0
    −u_0 cos x_20 − M_2 L x_40² sin x_20 cos x_20 + (M_1 + M_2) g sin x_20 = 0.

Multiply the third equation by cos x_20 and add to the fourth: we get in sequence

    −M_2 g sin x_20 cos² x_20 + (M_1 + M_2) g sin x_20 = 0
    (sin x_20)(M_1 + M_2 sin² x_20) = 0
    sin x_20 = 0
    x_20 = 0 or π.

Thus the equilibrium points are described by

    x_0 = (arbitrary, 0 or π, 0, 0),  u_0 = 0.

We have to choose x_20 = 0 (pendulum up) or x_20 = π (pendulum down). Let's take x_20 = 0. Then the Jacobians compute to

    A = [ 0   0                      1   0;
          0   0                      0   1;
          0   −(M_2/M_1)g           0   0;
          0   ((M_1+M_2)/(M_1 L))g  0   0 ],
    B = [0; 0; 1/M_1; −1/(M_1 L)].

The above provides a general method of linearizing. In this particular example, there's a faster way, which is to approximate sin θ ≈ θ, cos θ ≈ 1 in the original equations:

    M_2 (d²/dt²)(x_1 + Lθ) = F_1 θ
    0 = M_2 g − F_1
    M_1 ẍ_1 = u − F_1 θ.

These equations are already linear and lead to the above A and B. □
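One can also check the hand-computed A and B numerically, by taking central-difference Jacobians of the nonlinear cart-pendulum f at the upright equilibrium. The parameter values are assumed for illustration:

```python
# Finite-difference Jacobians of the cart-pendulum f at (x, u) = (0, 0)
# should reproduce the linearization computed above, in particular
# A[2][1] = -M2*g/M1, A[3][1] = (M1+M2)*g/(M1*L), B = [0,0,1/M1,-1/(M1*L)].
# M1, M2, L, g are assumed values.

import math

M1, M2, L, g = 1.0, 0.2, 0.5, 9.8

def f(x, u):
    _, th, v, w = x
    den = M1 + M2 * math.sin(th) ** 2
    a = (u + M2 * L * w**2 * math.sin(th)
         - M2 * g * math.sin(th) * math.cos(th)) / den
    al = (-u * math.cos(th) - M2 * L * w**2 * math.sin(th) * math.cos(th)
          + (M1 + M2) * g * math.sin(th)) / (L * den)
    return [v, w, a, al]

h = 1e-6
x0, u0 = [0.0, 0.0, 0.0, 0.0], 0.0
A = [[0.0] * 4 for _ in range(4)]
for j in range(4):
    xp = list(x0); xp[j] += h
    xm = list(x0); xm[j] -= h
    fp, fm = f(xp, u0), f(xm, u0)
    for i in range(4):
        A[i][j] = (fp[i] - fm[i]) / (2 * h)
B = [(f(x0, h)[i] - f(x0, -h)[i]) / (2 * h) for i in range(4)]

print(A[2][1], -M2 * g / M1)
print(A[3][1], (M1 + M2) * g / (M1 * L))
print(B[2], B[3])
```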
2.4 Simulation
Concerning the model

    ẋ = f(x, u),  y = h(x, u),

simulation involves numerically computing x(t) and y(t) given an initial state x(0) and an input u(t). If the model is nonlinear, simulation requires an ODE solver based on, for example, the Runge-Kutta method. Scilab and MATLAB have ODE solvers and also very nice simulation GUIs, Scicos and SIMULINK, respectively.
2.5 The Laplace Transform
You have already met Laplace transforms in your circuit theory course. So we only have to give a brief review here.

In signal processing, the two-sided Laplace transform (LT) is used, but in control only the one-sided LT is used. Let f(t) be a real-valued function defined for all t ≥ 0; it could of course be defined for all −∞ < t < ∞. Its Laplace transform is the integral

    F(s) = ∫₀^∞ f(t) e^{−st} dt.

Here s is a complex variable. Normally, the integral converges for some values of s and not for others. That is, there is a region of convergence (ROC). It turns out that the ROC is always an open right half-plane, of the form {s : Re s > a}. Then F(s) is a complex-valued function of s.
Example

The unit step:

    f(t) = 1₊(t) = { 1, t ≥ 0
                   { 0, t < 0

Actually, the precise value at t = 0 doesn't matter. The LT is

    F(s) = ∫₀^∞ e^{−st} dt = [−e^{−st}/s]₀^∞ = 1/s

and the ROC is

    ROC: Re s > 0.

The same F(s) is obtained if f(t) = 1 for all t, even t < 0. This is because the LT is oblivious to negative time. Notice that F(s) has a pole at s = 0 on the western boundary of the ROC. □
The LT exists provided f(t) satisfies two conditions. The first is that it is piecewise continuous on t ≥ 0. This means that, on any time interval (t_1, t_2), f(t) has at most a finite number of jumps, and between these jumps f(t) is continuous. A square wave has this property, for example. The second condition is that it is of exponential order, meaning there exist constants M, c such that |f(t)| ≤ Me^{ct} for all t ≥ 0. This means that if f(t) blows up, at least there is some exponential that blows up faster. For example, exp(t²) blows up too fast.
Examples

Some other examples. An exponential:

    f(t) = e^{at},  F(s) = 1/(s − a),  ROC: Re s > a.

A sinusoid:

    f(t) = cos ωt = (1/2)(e^{jωt} + e^{−jωt})
    F(s) = s/(s² + ω²),  ROC: Re s > 0.

□

The LT thus maps a class of time-domain functions f(t) into a class of complex-valued functions F(s). The mapping f(t) → F(s) is linear.
Example

We shall use linearity to find the LT of this signal:

[Figure: f(t) rises linearly from 0 at t = 0 to 1 at t = 1, then stays at 1]

Thus f = f_1 + f_2, where f_1 is the unit ramp starting at time 0 and f_2 the ramp of slope −1 starting at time 1. By linearity, F(s) = F_1(s) + F_2(s). We compute that

    F_1(s) = 1/s²,  Re s > 0
    F_2(s) = −e^{−s}/s²,  Re s > 0.

Thus

    F(s) = (1 − e^{−s})/s²,  Re s > 0.

□
There are tables of LTs. So in practice, if you have F(s), you can get f(t) using a table.
Example

Given F(s) = (3s + 17)/(s² − 4), let us find f(t). We don't need the ROC to find f(t), but actually we know what it is. Because we're using the one-sided LT, the ROC must be a right half-plane, and because F(s) must be analytic within its ROC, the ROC of F(s) must be Re s > 2. We have

    F(s) = c_1/(s − 2) + c_2/(s + 2),  c_1 = 23/4,  c_2 = −11/4

and therefore

    f(t) = c_1 e^{2t} + c_2 e^{−2t}.

We do not know if f(t) equals zero for t < 0. □
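Partial-fraction coefficients are easy to get wrong by a sign; a cheap habit is to verify the expansion by evaluating both sides at a few sample points away from the poles:

```python
# Verify (3s + 17)/(s**2 - 4) = (23/4)/(s - 2) - (11/4)/(s + 2)
# numerically at several sample values of s.

c1, c2 = 23.0 / 4.0, -11.0 / 4.0

for s in [0.5, 3.0, -1.0, 10.0]:
    lhs = (3 * s + 17) / (s**2 - 4)
    rhs = c1 / (s - 2) + c2 / (s + 2)
    assert abs(lhs - rhs) < 1e-12, (s, lhs, rhs)
print("partial fractions check out")
```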
One use of the LT is in solving initial-value problems involving linear, constant-coefficient differential equations. In control engineering we do this by simulation. But let us look briefly at the LT method.

It is useful to note that if we have the LT pair f(t) ↔ F(s), and f is continuously differentiable at t = 0, then

    ḟ(t) ↔ sF(s) − f(0).     (2.5)

To see this, integrate by parts:

    ∫₀^∞ e^{−st} ḟ(t) dt = [e^{−st} f(t)]₀^∞ + s ∫₀^∞ e^{−st} f(t) dt.

Now s is such that e^{−st} f(t) converges to 0 as t goes to ∞. Thus the right-hand side of the preceding equation becomes

    −f(0) + sF(s).

This proves (2.5).
Example

Consider the initial-value problem

    ẏ − 2y = t,  y(0) = 1.

The range of t for the differential equation isn't stated. Let us first assume that y(t) is continuously differentiable at t = 0. This implies the differential equation holds at least for −ε < t < ∞ for some positive ε, that is, it holds for a little time before t = 0. Then we can apply (2.5) to get

    sY(s) − 1 − 2Y(s) = 1/s².

Solving for Y(s) we get

    Y(s) = (s² + 1)/(s²(s − 2)) = −1/(4s) − 1/(2s²) + 5/(4(s − 2)).

Therefore

    y(t) = −1/4 − (1/2)t + (5/4)e^{2t}.     (2.6)

Since we used the one-sided LT, which is oblivious to negative time, we can assert that (2.6) holds at least for t ≥ 0. Note that it satisfies y(0) = 1.
On the other hand, suppose instead that the problem is posed as follows:

    ẏ − 2y = t,  t > 0;  y(0) = 1.

That is, the differential equation holds for positive time and the initial value of y is 1. We aren't told explicitly that y(t) is continuously differentiable at t = 0, but we are justified in making that assumption, since any other solution, for example the one satisfying y(t) = 0 for t < 0, satisfies (2.6) for t ≥ 0.

Let us now modify the forcing term:

    ẏ − 2y = 1₊(t),  y(0) = 1.

This is not well defined, because the differential equation

    ẏ − 2y = 1₊(t)

is ambiguous at t = 0. The correct way to state the problem is

    ẏ − 2y = 1,  t > 0;  y(0) = 1.

Then, as in the previous example, since we're looking for the solution only for t > 0, we may rephrase the question as

    ẏ − 2y = 1,  t > −ε;  y(0) = 1

and look for the solution where y(t) is continuously differentiable at t = 0. It goes like this:

    sY(s) − 1 − 2Y(s) = 1/s
    Y(s) = (s + 1)/(s(s − 2)) = −1/(2s) + 3/(2(s − 2))
    y(t) = −1/2 + (3/2)e^{2t}.

□
If f(t) is twice continuously differentiable at t = 0, then

    f̈(t) ↔ s²F(s) − sf(0) − ḟ(0).

Example

The equation

    ÿ + 4ẏ + 3y = e^{t},  y(0) = 0,  ẏ(0) = 2

can be solved as follows. We assume y(t) is twice continuously differentiable at t = 0. Then

    s²Y(s) − 2 + 4sY(s) + 3Y(s) = 1/(s − 1).

So

    Y(s) = (2s − 1)/((s − 1)(s + 1)(s + 3))
         = (1/8)·1/(s − 1) + (3/4)·1/(s + 1) − (7/8)·1/(s + 3)

    y(t) = (1/8)e^{t} + (3/4)e^{−t} − (7/8)e^{−3t}.

The same solution would be arrived at for t > 0 if, instead of assuming y(t) is twice continuously differentiable at t = 0, we were to allow jumps at t = 0. □
The LT of the product f(t)g(t) of two functions is not equal to F(s)G(s), the product of the
two transforms. Then what operation in the time domain does correspond to multiplication of the
transforms? The answer is convolution. Let f(t), g(t) be defined on t ≥ 0. Define a new function
h(t) = ∫_0^t f(t − τ)g(τ) dτ,  t ≥ 0.
We say h is the convolution of f and g. Note that another equivalent way of writing h is
h(t) = ∫_0^t f(τ)g(t − τ) dτ,  t ≥ 0.
We also frequently use the star notation h = f ∗ g or h(t) = f(t) ∗ g(t).
Theorem 2.5.1 The LT of f ∗ g is F(s)G(s).
Proof Let h := f ∗ g. Then the LT of h is
H(s) = ∫_0^∞ h(t)e^{−st} dt.
Substituting for h we have
H(s) = ∫_0^∞ ∫_0^t f(t − τ)g(τ)e^{−st} dτ dt.
Now change the order of integration:
H(s) = ∫_0^∞ ∫_τ^∞ f(t − τ)e^{−st} dt g(τ) dτ.
In the inner integral change variables, r = t − τ:
H(s) = ∫_0^∞ ( ∫_0^∞ f(r)e^{−sr} dr ) e^{−sτ} g(τ) dτ.
Thus
H(s) = ∫_0^∞ F(s)e^{−sτ} g(τ) dτ.
Pull F(s) out and you get H(s) = F(s)G(s). □
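A quick numerical illustration of Theorem 2.5.1 (ours, not in the notes): take f(t) = e^{−t} and g(t) = e^{−2t}, whose convolution works out to e^{−t} − e^{−2t}, and compare truncated Laplace integrals.

```python
import numpy as np
from scipy.integrate import quad

# f(t) = e^{-t}, g(t) = e^{-2t}; their convolution is h(t) = e^{-t} - e^{-2t}
f = lambda t: np.exp(-t)
g = lambda t: np.exp(-2 * t)
h = lambda t: np.exp(-t) - np.exp(-2 * t)

def laplace(func, s, T=50.0):
    # Truncated numerical Laplace integral over [0, T]
    val, _ = quad(lambda t: func(t) * np.exp(-s * t), 0.0, T)
    return val

for s in [0.5, 1.0, 3.0]:
    H = laplace(h, s)
    FG = laplace(f, s) * laplace(g, s)
    assert abs(H - FG) < 1e-6   # LT of f*g equals F(s)G(s)
print("convolution theorem verified")
```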
2.6 Transfer Functions
Linear time-invariant (LTI) systems, and only LTI systems, have transfer functions. The transfer
function (TF) of an LTI system is defined to be the ratio Y(s)/U(s), where the LTs are taken with
zero initial conditions.
Example
Consider the differential equation
ẏ + (sin t)y = u.
This does not have a transfer function: the system is not time invariant. □
Example
An RC filter:
[Figure: series RC circuit; input voltage u, resistor R, capacitor C, output voltage y across the capacitor, loop current i.]
The circuit equations are
−u + Ri + y = 0,  i = C dy/dt.
Therefore
RCẏ + y = u.
Apply Laplace transforms with zero initial conditions:
RCsY(s) + Y(s) = U(s).
Therefore the TF is
Y(s)/U(s) = 1/(RCs + 1).
Or, by the voltage-divider rule using impedances:
Y(s)/U(s) = (1/Cs)/(R + 1/Cs) = 1/(RCs + 1).
This transfer function is rational, a ratio of polynomials. □
Example
A mass-spring-damper:
Mÿ = u − Ky − Dẏ.
We get
Y(s)/U(s) = 1/(Ms² + Ds + K).
This transfer function also is rational. □
Let's look at some other transfer functions: G(s) = 1 represents a pure gain; G(s) = 1/s is
the ideal integrator; G(s) = 1/s² is the double integrator; G(s) = s is the differentiator;
G(s) = e^{−τs} with τ > 0 is a time delay system (note that the TF is not rational);
G(s) = ωn²/(s² + 2ζωn s + ωn²)
is the standard second-order TF, where ωn > 0 and ζ ≥ 0; and
G(s) = K1 + K2/s + K3 s
is the TF of the proportional-integral-derivative (PID) controller.
We say a transfer function G(s) is proper if the degree of the denominator is at least that of
the numerator. The transfer functions G(s) = 1 and G(s) = 1/(s + 1) are proper; G(s) = s is not. We say G(s)
is strictly proper if the degree of the denominator is greater than that of the numerator. Note
that if G(s) is proper then lim_{|s|→∞} G(s) exists; if strictly proper then lim_{|s|→∞} G(s) = 0. These
concepts extend to multi-input, multi-output systems, where the transfer function is a matrix.
Let's see what the transfer function is of an LTI state model:
ẋ = Ax + Bu,  y = Cx + Du
sX(s) = AX(s) + BU(s),  Y(s) = CX(s) + DU(s)
X(s) = (sI − A)^{−1}BU(s)
Y(s) = [C(sI − A)^{−1}B + D]U(s).
We conclude that the transfer function from u to x is (sI − A)^{−1}B and from u to y is
C(sI − A)^{−1}B + D.
Example
A = [ 0  0  1  0
      0  0  0  1
     −1  1  0  0
      1 −1  0  0 ],  B = [ 0 0
                           0 0
                           1 0
                           0 1 ]
C = [ 1 0 0 0
      0 1 0 0 ],  D = [ 0 0
                        0 0 ]
C(sI − A)^{−1}B + D = [ (s² + 1)/(s²(s² + 2))    1/(s²(s² + 2))
                        1/(s²(s² + 2))    (s² + 1)/(s²(s² + 2)) ]
□
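As a check (ours, not the notes'), the rational matrix above can be compared against C(sI − A)^{−1}B evaluated numerically at a test point, using the signs reconstructed for A:

```python
import numpy as np

A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [-1., 1., 0., 0.],
              [1., -1., 0., 0.]])
B = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1.]])
C = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])

def G(s):
    # Transfer function matrix C (sI - A)^{-1} B
    return C @ np.linalg.solve(s * np.eye(4) - A, B)

s = 2.0 + 0.5j
expected_11 = (s**2 + 1) / (s**2 * (s**2 + 2))
expected_12 = 1 / (s**2 * (s**2 + 2))
Gs = G(s)
assert abs(Gs[0, 0] - expected_11) < 1e-12
assert abs(Gs[0, 1] - expected_12) < 1e-12
assert abs(Gs[1, 0] - expected_12) < 1e-12
assert abs(Gs[1, 1] - expected_11) < 1e-12
print("transfer function matrix verified")
```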
Summary
Let us recap our procedure for getting the transfer function of a system:
1. Apply the laws of physics etc. to get differential equations governing the behaviour of the
system. Put these equations in state form. In general these are nonlinear.
2. Find an equilibrium, if there is one. If there is more than one equilibrium, you have to select
one. If there isn't even one, this method doesn't apply.
3. Linearize about the equilibrium point.
4. If the linearized system is time invariant, take Laplace transforms with zero initial state.
5. Solve for the output Y(s) in terms of the input U(s).
The transfer function from input to output satisfies
Y(s) = G(s)U(s).
In general G(s) is a matrix: If dim u = m and dim y = p (m inputs, p outputs), then G(s) is p × m.
In the SISO case, G(s) is a scalar-valued transfer function.
There is a converse problem: Given a transfer function, find a corresponding state model. That
is, given G(s), find A, B, C, D such that
G(s) = C(sI − A)^{−1}B + D.
The state matrices are never unique: Each G(s) has an infinite number of state models. But it is a
fact that every proper, rational G(s) has a state realization. Let's see how to do this in the SISO
case, where G(s) is 1 × 1.
Example
First, a G(s) with a constant numerator:
G(s) = 1/(2s² − s + 3).
The corresponding differential equation model is
2ÿ − ẏ + 3y = u.
Taking x1 = y, x2 = ẏ, we get
ẋ1 = x2
ẋ2 = (1/2)x2 − (3/2)x1 + (1/2)u
y = x1
and thus
A = [ 0     1
     −3/2  1/2 ],  B = [ 0
                         1/2 ]
C = [ 1 0 ],  D = 0.
This technique extends to
G(s) = 1/(polynomial of degree n).
□
Example
Next, a nonconstant numerator:
G(s) = (s − 2)/(2s² − s + 3).
Introduce an auxiliary signal V(s):
Y(s) = (s − 2)V(s),  V(s) = 1/(2s² − s + 3) · U(s).
Then the TF from U to V has a constant numerator. We have
2v̈ − v̇ + 3v = u
y = v̇ − 2v.
Defining
x1 = v,  x2 = v̇,
we get
ẋ1 = x2
ẋ2 = (1/2)x2 − (3/2)x1 + (1/2)u
y = x2 − 2x1
and so
A = [ 0     1
     −3/2  1/2 ],  B = [ 0
                         1/2 ]
C = [ −2 1 ],  D = 0.
This extends to any strictly proper rational function. □
Finally, if G(s) is proper but not strictly proper (deg num = deg denom), then we can write
G(s) = c + G1(s),
c = constant, G1(s) strictly proper. In this case we get A, B, C to realize G1(s), and D = c.
2.7 The Impulse
Now we take a little time to discuss that problematical object, the impulse δ(t). The impulse, also
called the Dirac delta function, is not really a legitimate function, because its value at t = 0 is
not a real number. And you can't rigorously get δ as the limit of a sequence of ever-narrowing
rectangles, because that sequence does not converge in any ordinary sense. Yet the impulse is a
useful concept in system theory and so we have to make it legitimate. The French mathematician
Laurent Schwartz worked out a very nice, consistent way of dealing with the impulse, and more
general functions. The impulse is an example of a distribution.
The main idea is that δ(t) is not a function, but rather it is a way of defining the linear
transformation φ ↦ φ(0) that maps a signal φ(t) to its value at t = 0. This linear transformation
should properly be written as δ(φ) = φ(0) (i.e., δ transforms φ to φ(0)) but historically it has been
written as
∫ δ(t)φ(t) dt = φ(0).
You know this as the sifting formula. Let us emphasize that the expression
∫ δ(t)φ(t) dt  (2.7)
is not intended to mean integration of the product δ(t)φ(t) of functions, for δ isn't a function; rather,
the expression is that of an operation on φ whose value is defined to be φ(0). The expression is
defined to be valid for all functions φ(t) that are smooth at t = 0; smooth means continuously
differentiable up to every order.
Needless to say, we have to be careful with δ; for example, there's no way to make sense of δ²,
because the expression (2.7) is not valid for φ = δ, again because δ isn't a function, let alone a
smooth one. Because the unit step is not smooth at t = 0, 1₊(t)δ(t) is undefined too. However,
(2.7) does apply for φ(t) = e^{−st}, because it is smooth:
∫ e^{−st}δ(t) dt = 1.
Thus the LT of δ equals 1.
Initial-value problems involving δ, such as
ẏ + 2y = δ,
or worse,
ẏ + 2y = δ̇,
require more advanced theory, because y(t) cannot be continuously differentiable at t = 0. Instead of
pursuing this direction, since initial-value problems are of minor concern in control, we turn instead
to convolution equations.
As you learned in signals and systems, linear time-invariant systems are modeled in the time
domain by a convolution equation:
y(t) = ∫ g(t − τ)u(τ) dτ.
If g(t) is a smooth function for all t, then expression (2.7) applies and we get
g(t) = ∫ g(t − τ)δ(τ) dτ.
Thus g(t) equals the output when the input is the unit impulse. We call g(t) the impulse-response
function, or the impulse response. The case where g itself equals δ isn't covered by what has been
said so far, but distribution theory can be used to justify δ ∗ δ = δ.
Example
Consider a lowpass RC filter:
G(s) = 1/(RCs + 1).
The inverse LT of G(s) is
(1/RC)e^{−t/RC}.
Since the filter is causal, to get the impulse response we should take that time-function that is zero
for t < 0:
g(t) = (1/RC)e^{−t/RC} 1₊(t).
For the highpass filter,
G(s) = RCs/(RCs + 1)
g(t) = δ(t) − (1/RC)e^{−t/RC} 1₊(t).
□
In general, if G(s) is strictly proper, then g(t) is a regular function. If G(s) is proper but not
strictly proper, then g(t) contains an impulse.
As a final comment, some texts (Dorf and Bishop) give the LT pair for ḟ(t) in terms of f(0−), indicating that f(0) and f(0−) [...]
[Material is missing here in the source: the series connection of two state models, which yields composite feedthrough D = D2D1. □]
Parallel connection
[Figure: two state models (A1, B1, C1, D1) and (A2, B2, C2, D2) driven by the same input u; their outputs y1 and y2 are summed to give y.]
The derivation is very similar and is left for you.
Example
[Figure: feedback connection: reference r minus C2x2 gives e; e drives system 1 (A1, B1, C1, D1), whose output u drives system 2 (A2, B2, C2, D2), whose output is y.]
ẋ1 = A1x1 + B1e = A1x1 + B1(r − C2x2)
ẋ2 = A2x2 + B2u = A2x2 + B2(C1x1 + D1(r − C2x2))
y = C2x2
Taking
x = [ x1
      x2 ]
we get
ẋ = Ax + Br,  y = Cx
where
A = [ A1        −B1C2
      B2C1   A2 − B2D1C2 ],  B = [ B1
                                   B2D1 ],  C = [ 0  C2 ].
□
2.9 Nothing is as Abstract as Reality
The title of this section is a quote from the artist Giorgio Morandi.
Models of physical systems are approximations of reality. For example, suppose we want to
model a swinging pendulum. The model involves the length L of the pendulum. Suppose we
measure L. It is a common viewpoint that there is an error between our measurement and the real
length. But let us think further. Let us imagine what the real length of the pendulum is: We start
at one end and proceed in a straight line toward the other end; as we approach the other end, we
have to find the last molecule of the pendulum, the one farthest from where we started; suppose
we do find the last molecule; it is composed of atoms, so we have to find the last atom; but atoms
are composed of smaller particles, such as electrons; so we have to find the last electron; but the
electrons are whizzing around the nucleus, so the last one isn't standing still.¹ So, what is the real
length of the pendulum at this subatomic scale? The question may not make sense because the
concept of length may not be real; real things may not possess the attribute we call length.
¹All this assumes physics is correct, which is another philosophical question.
Returning to our model of a swinging pendulum and going past the issue of what L is, our model
assumes perfect rigidity of the pendulum. From what we know about reality, nothing is perfectly
rigid; that is, rigidity is a concept, an approximation to reality. So if we wanted to make our model
closer to reality, we could allow some elasticity by adopting a partial differential equation model,
and we may thereby have a better approximation. But no model is real. There could not be a
sequence M_k of models that are better and better approximations of reality and such that M_k
converges to reality. If M_k does indeed converge, the limit is a model, and no model is real.
For more along these lines, see the article "What's bad about this habit," N. D. Mermin, Physics
Today, May 2009, pages 8, 9.
Chapter 3
Linear System Theory
In the preceding chapter we saw nonlinear state models and how to linearize them about an
equilibrium point. The linearized systems have the form (dropping the Δ notation)
ẋ = Ax + Bu,  y = Cx + Du.
In this chapter we study such models. The main point is that x is a linear function of (x(0), u). So
is y.
3.1 Initial-State-Response
Let us begin with the state equation forced only by the initial state; the input is set to zero:
ẋ = Ax,  x(0) = x0,  A ∈ R^{n×n}.
Recall two facts:
1. If n = 1, i.e., A is a scalar, the unique solution of the initial-value problem is x(t) = e^{At}x0.
2. The Taylor series of the function e^t at t = 0 is
e^t = 1 + t + t²/2! + ⋯
and this converges for every t. Thus
e^{At} = 1 + At + A²t²/2! + ⋯ .
This second fact suggests that in the matrix case we define the matrix exponential e^A to be
e^A := I + A + (1/2!)A² + (1/3!)A³ + ⋯ .
It can be proved that the right-hand series converges for every matrix A. If A is n × n, so is e^A; e^A
is not defined if A is not square.
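The series definition can be checked in Python against scipy's expm (our sketch, with an arbitrarily chosen A):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# Truncated power series I + A + A^2/2! + ... + A^N/N!
N = 30
S = np.eye(2)
term = np.eye(2)
for k in range(1, N + 1):
    term = term @ A / k
    S = S + term

assert np.allclose(S, expm(A), atol=1e-10)
print("series matches expm")
```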
CHAPTER 3. LINEAR SYSTEM THEORY 39
Example
A = [ 0 0
      0 0 ],  e^A = [ 1 0
                      0 1 ]
Notice that e^A is not obtained by exponentiating A componentwise. □
Example
A = [ 1 0
      0 1 ],  e^A = [ e 0
                      0 e ] = eI
□
Example
A = [ 0 1 0
      0 0 1
      0 0 0 ]
This matrix has the property that A³ = 0. Thus the power series has only finitely many nonzero
terms:
e^A = I + A + (1/2)A² = [ 1 1 1/2
                          0 1 1
                          0 0 1 ]
This is an example of a nilpotent matrix. That is, A^k = 0 for some power k. □
Replacing A by At (the product of A with the scalar t) gives the matrix-valued function e^{At},
t ↦ e^{At} : R → R^{n×n},
defined by
e^{At} = I + At + A²t²/2! + ⋯ .
This function is called the transition matrix.
Some properties of e^{At}:
1. e^{At}|_{t=0} = I
2. e^{At1} e^{At2} = e^{A(t1+t2)}
   Note that e^{A1} e^{A2} = e^{A1+A2} if and only if A1 and A2 commute, i.e., A1A2 = A2A1.
3. (e^A)^{−1} = e^{−A}, so (e^{At})^{−1} = e^{−At}
4. A and e^{At} commute
5. (d/dt) e^{At} = A e^{At}
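The five properties are easy to verify numerically (our sketch, not in the notes):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t1, t2 = 0.7, 1.3

E = lambda t: expm(A * t)

assert np.allclose(E(0.0), np.eye(2))                    # property 1
assert np.allclose(E(t1) @ E(t2), E(t1 + t2))            # property 2
assert np.allclose(np.linalg.inv(E(t1)), E(-t1))         # property 3
assert np.allclose(A @ E(t1), E(t1) @ A)                 # property 4
h = 1e-6                                                  # property 5, finite difference
dE = (E(t1 + h) - E(t1 - h)) / (2 * h)
assert np.allclose(dE, A @ E(t1), atol=1e-6)
print("all five properties verified")
```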
Now the main result:
Theorem 3.1.1 The unique solution of the initial-value problem ẋ = Ax, x(0) = x0, is x(t) = e^{At}x0.
This leaves us with the question of how to compute e^{At}. For hand calculations on small problems
(n = 2 or 3), it's convenient to use Laplace transforms.
Example
A = [ 0 1 0
      0 0 1
      0 0 0 ],  e^{At} = [ 1 t t²/2
                           0 1 t
                           0 0 1 ]
The Laplace transform of e^{At} is therefore
[ 1/s  1/s²  1/s³
  0    1/s   1/s²
  0    0     1/s ].
On the other hand,
(sI − A)^{−1} = [ s −1  0
                  0  s −1
                  0  0  s ]^{−1} = (1/s³) [ s² s  1
                                            0  s² s
                                            0  0  s² ].
We conclude that in this example e^{At}, (sI − A)^{−1} are Laplace transform pairs. This is true in
general. □
If A is n × n, e^{At} is an n × n matrix function of t and (sI − A)^{−1} is an n × n matrix of rational
functions of s.
Example
A = [ 0  1
     −1  0 ]
(sI − A)^{−1} = [ s −1
                  1  s ]^{−1} = 1/(s² + 1) [ s  1
                                            −1  s ]
⟹ e^{At} = [ cos t   sin t
             −sin t  cos t ]
□
Another way to compute e^{At} is via eigenvalues and eigenvectors. Instead of a general treatment,
let's do two examples.
Example
A = [ 0   1
     −2  −3 ]
The MATLAB command [V, D] = eig(A) produces
V = [ 1   1
     −1  −2 ],  D = [ −1  0
                       0 −2 ].
The eigenvalues of A appear on the diagonal of the (always diagonal) matrix D, and the columns
of V are corresponding eigenvectors. So for example
A [ 1
   −1 ] = −1 [ 1
              −1 ].
It follows that AV = VD (check this) and then that e^{At}V = Ve^{Dt} (prove this). The nice thing is
that e^{Dt} is trivial to determine because D is diagonal. In this case
e^{Dt} = [ e^{−t}  0
           0       e^{−2t} ].
Then
e^{At} = Ve^{Dt}V^{−1}.
□
Example
A = [ 0  1
     −1  0 ],  D = [ j  0
                     0 −j ],  V = (1/√2) [ 1  1
                                           j −j ]
e^{Dt} = [ e^{jt}  0
           0       e^{−jt} ]
e^{At} = Ve^{Dt}V^{−1} = [ cos t   sin t
                           −sin t  cos t ]
□
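The eigenvector method can be reproduced in Python with numpy's eig, which plays the role of the MATLAB command above; the complex arithmetic cancels to give the real rotation matrix (our sketch):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 0.9

# Diagonalize: columns of V are eigenvectors, w holds the eigenvalues +-j
w, V = np.linalg.eig(A)
eAt = V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)

rotation = np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])
assert np.allclose(eAt.real, rotation)
assert np.allclose(eAt.imag, 0.0, atol=1e-12)
assert np.allclose(expm(A * t), rotation)
print("e^{At} is a rotation matrix")
```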
The above method works when A has n linearly independent eigenvectors, so V is invertible.
Otherwise the theory is more complicated and requires the so-called Jordan form of A.
3.2 Input-Response
Now we set the initial state to zero and consider the response from the input:
ẋ = Ax + Bu,  x(0) = 0.
Here's a derivation of the solution: Multiply by e^{−At}:
e^{−At}ẋ = e^{−At}Ax + e^{−At}Bu.
Noting that
(d/dt)[e^{−At}x(t)] = −Ae^{−At}x(t) + e^{−At}ẋ(t),
we get
(d/dt)[e^{−At}x(t)] = e^{−At}Bu(t).
Integrate from 0 to t:
e^{−At}x(t) − x(0) = ∫_0^t e^{−Aτ}Bu(τ) dτ,  with x(0) = 0.
Multiply by e^{At}:
x(t) = ∫_0^t e^{A(t−τ)}Bu(τ) dτ.  (3.1)
Equation (3.1) gives the state at time t in terms of u(τ), 0 ≤ τ ≤ t, when the initial state equals
zero.
Similarly, the output equation y = Cx + Du leads to
y(t) = ∫_0^t Ce^{A(t−τ)}Bu(τ) dτ + Du(t).
Special case Suppose dim u = dim y = 1, i.e., the system is single-input, single-output (SISO).
Then D is a scalar, and we have
y(t) = ∫_0^t Ce^{A(t−τ)}Bu(τ) dτ + Du(t).  (3.2)
If u = δ, the unit impulse, then
y(t) = Ce^{At}B 1₊(t) + Dδ(t),
where 1₊(t) denotes the unit step,
1₊(t) = { 1, t ≥ 0
          0, t < 0.
We conclude that the impulse response of the system is
g(t) := Ce^{At}B 1₊(t) + Dδ(t)  (3.3)
and equation (3.2) is a convolution equation:
y(t) = (g ∗ u)(t).
Example
ÿ + ẏ = u
Take the state to be
x = [ x1
      x2 ] := [ y
                ẏ ].
Then
A = [ 0  1
      0 −1 ],  B = [ 0
                     1 ]
C = [ 1 0 ],  D = 0.
The transition matrix:
(sI − A)^{−1} = [ 1/s  1/(s(s+1))
                  0    1/(s+1) ]
e^{At} = [ 1  1 − e^{−t}
           0  e^{−t} ],  t ≥ 0.
Impulse response:
g(t) = Ce^{At}B = 1 − e^{−t},  t ≥ 0.
□
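The formula g(t) = Ce^{At}B for t ≥ 0 can be checked numerically for this example (our sketch):

```python
import numpy as np
from scipy.linalg import expm

# State model of y'' + y' = u derived above
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

for t in [0.5, 1.0, 2.0]:
    g = (C @ expm(A * t) @ B)[0, 0]     # g(t) = C e^{At} B for t >= 0
    assert abs(g - (1 - np.exp(-t))) < 1e-10
print("impulse response verified")
```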
3.3 Total Response
Consider the state equation forced by both an initial state and an input:
ẋ = Ax + Bu,  x(0) = x0.
The system is linear in the sense that the state at time t equals the initial-state-response at time
t plus the input-response at time t:
x(t) = e^{At}x0 + ∫_0^t e^{A(t−τ)}Bu(τ) dτ.
Similarly, the output y = Cx + Du is given by
y(t) = Ce^{At}x0 + ∫_0^t Ce^{A(t−τ)}Bu(τ) dτ + Du(t).
These two equations constitute a solution in the time domain.
Summary
We began with an LTI system modeled by a differential equation in state form:
ẋ = Ax + Bu,  x(0) = x0
y = Cx + Du.
We solved the equations to get
x(t) = e^{At}x0 + ∫_0^t e^{A(t−τ)}Bu(τ) dτ
y(t) = Ce^{At}x0 + ∫_0^t Ce^{A(t−τ)}Bu(τ) dτ + Du(t).
These are integral (convolution) equations giving x(t) and y(t) explicitly in terms of x0 and u(τ),
0 ≤ τ ≤ t. In the SISO case, if x0 = 0 then
y = g ∗ u, i.e., y(t) = ∫_0^t g(t − τ)u(τ) dτ
where
g(t) = Ce^{At}B 1₊(t) + Dδ(t),
1₊(t) = unit step.
3.4 Stability of State Models
Stability theory of dynamical systems is an old subject, dating back several hundred years. The goal
in stability theory is to draw conclusions about the qualitative behaviour of trajectories without
having to solve analytically or simulate exhaustively for all possible initial conditions. There are
two different notions of stability. The one we study in this section concerns state models.
To get an idea of the stability question, imagine a helicopter hovering under autopilot control.
Suppose the helicopter is subject to a wind gust. Will it return to its original hovering state? If so,
we say the hovering state is a stable state.
Let's look at a much simpler example.
Example
[Figure: mass M with displacement y, attached to a wall by a spring K and a damper D.]
The model with no external forces:
Mÿ = −Ky − Dẏ
or
ẋ = Ax,  A = [ 0      1
               −K/M  −D/M ].
The point x = 0 is an equilibrium point.
The point x = 0 is an equilibrium point.
Now suppose a wind gust of nite duration is applied:
Brief Article
The Author
December 18, 2007
d
d(t)
T
t
1
The model is
M y = d Ky D y,
or
x = Ax +Ed, E =
_
0
1
M
_
.
If x(0) = 0, then at time t = T
x(T) =
T
_
0
e
A(T)
Ed()d ,= 0 in general.
For t > T, the model is x = Ax. Thus the eect of a nite-duration disturbance is to alter the
initial state. In this way, the stability question concerns the qualitative behaviour of x = Ax for an
arbitrary initial state; the initial time may be shifted to t = 0. 2
So that's the stability question: For the system ẋ = Ax, will x(t) converge to 0 as t goes to
infinity, for every x(0)? As we saw before, the trajectory is specified by x(t) = e^{At}x(0). So stability
is equivalent to the condition that e^{At} → 0 as t → ∞. When that will happen depends only on the
eigenvalues of A.
Proposition 3.4.1 e^{At} → 0 as t → ∞ iff every eigenvalue of A has negative real part.
Proof If A has n linearly independent eigenvectors, then it can be diagonalized by a similarity
transformation, so that
e^{At} = Ve^{Dt}V^{−1}.
Then
e^{At} → 0 ⟺ e^{Dt} → 0
         ⟺ e^{λi t} → 0 for every i
         ⟺ Re λi < 0 for every i.
The proof in the general case is more complicated but not more enlightening. □
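Proposition 3.4.1 can be illustrated numerically (our example matrices, not the notes'):

```python
import numpy as np
from scipy.linalg import expm

def decays(A, t_big=60.0, tol=1e-6):
    # Does e^{At} -> 0?  Check the entries at a large time.
    return np.abs(expm(A * t_big)).max() < tol

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])    # eigenvalues 2, -1

assert decays(A_stable)
assert not decays(A_unstable)
# Consistent with the eigenvalue test of Proposition 3.4.1
assert max(np.linalg.eigvals(A_stable).real) < 0
assert max(np.linalg.eigvals(A_unstable).real) > 0
print("eigenvalue test agrees")
```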
Examples
Here are some quick examples: ẋ = −x is stable; ẋ = 0 is unstable; ẋ = x is unstable.
Here are some examples by pictures:
[Figures: phase portraits of several second-order systems; in some the origin is asymptotically stable, in others stable but not asymptotically stable, in others unstable.]
3.5 BIBO Stability
There's another stability concept, one that concerns the response of a system to inputs instead of
initial conditions. We'll study this concept for a restricted class of systems.
Consider an LTI system with a single input, a single output, and a strictly proper rational
transfer function. The model is therefore y = g ∗ u in the time domain, or Y(s) = G(s)U(s) in
the s-domain. We ask the question: Does a bounded input (BI) always produce a bounded output
(BO)? Note that u(t) bounded means that |u(t)| ≤ B for some B and all t. The least upper bound
B is actually a norm, denoted ‖u‖∞.
Some examples: For the unit step u(t) = 1₊(t), of course ‖u‖∞ = 1. Note that the peak value
is attained. For the rising exponential u(t) = (1 − e^{−t})1₊(t), ‖u‖∞ = 1. Notice that the peak
value is never attained in finite time. The signal u(t) = t·1₊(t) is not bounded.
We define the system to be BIBO stable if every bounded u produces a bounded y.
Theorem 3.5.1 Assume G(s) is strictly proper, rational. Then the following three statements are
equivalent:
1. The system is BIBO stable.
2. The impulse-response function g(t) is absolutely integrable, i.e., ∫_0^∞ |g(t)| dt < ∞.
3. Every pole of the transfer function G(s) has negative real part.
Example
Consider the RC filter with
G(s) = 1/(RCs + 1),  g(t) = (1/RC)e^{−t/RC} 1₊(t).
According to the theorem, every bounded u produces a bounded y. What's the relationship between
‖u‖∞ and ‖y‖∞? Let's see:
|y(t)| = | ∫_0^t g(t − τ)u(τ) dτ |
       ≤ ∫_0^t |g(t − τ)||u(τ)| dτ
       ≤ ‖u‖∞ ∫_0^t |g(t − τ)| dτ
       ≤ ‖u‖∞ ∫_0^∞ |g(t)| dt
       = ‖u‖∞ ∫_0^∞ (1/RC)e^{−t/RC} dt
       = ‖u‖∞.
Thus ‖y‖∞ ≤ ‖u‖∞.
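The bound ‖y‖∞ ≤ ‖u‖∞ can be tested with a discretized convolution (our sketch; the square-wave input is an arbitrary choice):

```python
import numpy as np

RC = 1.0
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
g = (1 / RC) * np.exp(-t / RC)          # impulse response of the RC filter
u = np.sign(np.sin(3 * t))              # a bounded input, ||u||_inf = 1

# y = g * u by discretized convolution
y = np.convolve(g, u)[: len(t)] * dt

assert np.max(np.abs(u)) <= 1.0
assert np.max(np.abs(y)) <= 1.0 + 1e-2   # ||y||_inf <= ||u||_inf, up to discretization error
print("BIBO bound holds")
```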
The signals are these: u is the voltage applied to the electromagnet, i is the current, y is the
position down of the ball, and d is a possible disturbance force. The equations are
L di/dt + Ri = u,  M d²y/dt² = Mg + d − c i²/y²,
where c is a constant (the force of the magnet on the ball is proportional to i²/y²). The nonlinear
state equations are
ẋ = f(x, u, d)
x = [ y
      ẏ
      i ],  f = [ x2
                  g + d/M − (c/M)(x3²/x1²)
                  −(R/L)x3 + (1/L)u ].
Let's linearize about y0 = 1, d0 = 0:
x20 = 0,  x10 = 1,  x30 = √(Mg/c),  u0 = R√(Mg/c).
The linearized system is
ẋ = Ax + Bu + Ed
y = Cx
A = [ 0   1   0
      2g  0  −2√(cg/M)
      0   0  −R/L ],  B = [ 0
                            0
                            1/L ],  E = [ 0
                                          1/M
                                          0 ]
C = [ 1 0 0 ].
To simplify notation, let's suppose R = L = M = 1, c = g. Then
A = [ 0   1   0
      2g  0  −2g
      0   0  −1 ],  B = [ 0
                          0
                          1 ],  E = [ 0
                                      1
                                      0 ]
C = [ 1 0 0 ].
Thus
Y(s) = C(sI − A)^{−1}BU(s) + C(sI − A)^{−1}ED(s)
     = −2g/((s + 1)(s² − 2g)) U(s) + 1/(s² − 2g) D(s).
The block diagram is
[Block diagram: U passes through −2g/(s + 1); the result is added to D; the sum passes through 1/(s² − 2g) to give Y.]
Note that the systems from U to Y and D to Y are BIBO unstable, each having a pole
at s = √(2g).
y(t) = ∫ g(t − τ)u(τ) dτ
y(t) = ∫ g(t − τ)e^{jωτ} dτ = ∫ g(σ)e^{jωt}e^{−jωσ} dσ = G(jω)e^{jωt}
□
Thus, if the input is the complex sinusoid e^{jωt}, then the output is the sinusoid
G(jω)e^{jωt} = |G(jω)|e^{j(ωt + ∠G(jω))},
which has frequency ω = frequency of input, magnitude |G(jω)| = magnitude of the transfer
function at s = jω, and phase ∠G(jω) = phase of the transfer function at s = jω.
Notice that the convolution equation for this result is
y(t) = ∫_{−∞}^{∞} g(t − τ)u(τ) dτ,
that is, the sinusoidal input was applied starting at t = −∞. If the time of application of the
sinusoid is t = 0, there is a transient component in y(t) too.
Here's the important point: Under the stated assumptions about the system (G(s) is rational,
proper, and all poles in Re s < 0), the region of convergence of the Laplace integral includes the
imaginary axis. Therefore it is legitimate to set s = jω in G(s) to get G(jω), which, by the way,
equals the Fourier transform of g(t). To emphasize this point, let us look at this example:
Example
Consider the differential equation model
ẏ = y + u.
Suppose we know from physical considerations that the system is causal. Let us look for sinusoidal
solutions: Set u(t) = e^{jωt} and then set y(t) = Ae^{jωt} and substitute these into the equation:
Ajωe^{jωt} = Ae^{jωt} + e^{jωt}.
Cancel e^{jωt} and solve for A:
A = 1/(jω − 1).
So is
y(t) = (1/(jω − 1))e^{jωt}
the sinusoidal steady-state solution? Absolutely not! There is no sinusoidal steady state, because
the system is not stable! In mathematical terms, the impulse response function is
g(t) = e^t 1₊(t),
whose Laplace transform is
G(s) = 1/(s − 1),  ROC: Re s > 1.
Thus evaluating G(s) on the imaginary axis is illegitimate. Furthermore, the function 1/(jω − 1) is
not the Fourier transform of g(t). □
Next, we want to look at the special frequency response when ω = 0. For this we need the
final-value theorem.
Example
Let y(t), Y(s) be Laplace transform pairs with
Y(s) = (s + 2)/(s(s² + s + 4)).
This can be factored as
Y(s) = A/s + (Bs + C)/(s² + s + 4).
Note that A equals the residue of Y(s) at the pole s = 0:
A = Res(Y, 0) = lim_{s→0} sY(s) = 1/2.
The inverse LT then has the form
y(t) = A·1₊(t) + y1(t),
where y1(t) → 0 as t → ∞. Thus
lim_{t→∞} y(t) = A = Res(Y, 0).
□
The general result is the final-value theorem:
Theorem 3.6.1 Suppose Y(s) is rational.
1. If Y(s) has no poles in Re s ≥ 0, then y(t) converges to 0 as t → ∞.
2. If Y(s) has no poles in Re s ≥ 0 except a simple pole at s = 0, then y(t) converges as t → ∞,
and lim_{t→∞} y(t) equals the residue of Y(s) at the pole s = 0.
3. If Y(s) has a repeated pole at s = 0, then y(t) doesn't converge as t → ∞.
4. If Y(s) has a pole in Re s ≥ 0, s ≠ 0, then y(t) doesn't converge as t → ∞.
Some examples: Y(s) = 1/(s + 1): final value equals 0; Y(s) = 2/(s(s + 1)): final value equals 2;
Y(s) = 1/(s² + 1): no final value. Remember that you have to know that y(t) has a final value, by
examining the poles of Y(s), before you calculate the residue of Y(s) at the pole s = 0 and claim
that that residue equals the final value.
Return now to the setup
Y(s) = G(s)U(s),  G(s) proper, no poles in Re s ≥ 0.
Let the input be the unit step, u(t) = 1₊(t), i.e., U(s) = 1/s. Then Y(s) = G(s)·(1/s). The final-value
theorem applies to this Y(s), and we get lim_{t→∞} y(t) = G(0). For this reason, G(0) is called the DC
gain of the system.
Example
Using MATLAB, plot the step responses of
G1(s) = 20/(s² + 0.9s + 50),  G2(s) = (20s + 20)/(s² + 0.9s + 50).
They have the same DC gains and the same poles, but notice the big difference in transient response.
□
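The notes call for MATLAB; an equivalent sketch in Python using scipy.signal (ours, not part of the notes) shows the common DC gain of 0.4 and the much larger transient of G2:

```python
import numpy as np
from scipy.signal import lti, step

G1 = lti([20.0], [1.0, 0.9, 50.0])
G2 = lti([20.0, 20.0], [1.0, 0.9, 50.0])

t = np.linspace(0.0, 15.0, 4000)
_, y1 = step(G1, T=t)
_, y2 = step(G2, T=t)

# Same DC gain, so both step responses settle at 20/50 = 0.4
assert abs(y1[-1] - 0.4) < 1e-2
assert abs(y2[-1] - 0.4) < 1e-2
# but the zero in G2 makes its transient much larger
assert np.max(np.abs(y2)) > np.max(np.abs(y1))
print("same DC gain, different transients")
```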
Chapter 4
Feedback Control Theory
Control systems are most often based on the principle of feedback, whereby the signal to be
controlled is compared to a desired reference signal and the discrepancy used to compute corrective
control action. When you go from your home to the university you use this principle continuously
without being aware of it. To emphasize how effective feedback is, imagine you have to program
a mobile robot, with no vision capability and therefore no feedback, to go open loop from your
home to your ECE311 classroom; the program has to include all motion instructions to the motors
that drive the robot. The program would be unthinkably long, and in the end the robot would
undoubtedly be way off target:
[Figure: the robot's open-loop path wanders from home and misses the classroom.]
In this chapter and the next we develop the basic theory and tools for feedback control analysis
and design in the frequency domain. "Analysis" means you already have a controller and you
want to study how good it is; "design" of course means you want to design a controller to meet
certain specifications. The most fundamental specification is stability. Typically, good performance
requires high-gain controllers, yet typically the feedback loop will become unstable if the gain is too
high. The stability criterion we will study is the Nyquist criterion, dating from 1932. The Nyquist
criterion is a graphical technique involving the open-loop frequency response function, magnitude
and phase.
There are two main approaches to control analysis and design. The first, the one we're doing
in this course, is the older, so-called classical approach in the frequency domain. Specifications
are based on closed-loop gain, bandwidth, and stability margin. Design is done using Bode plots.¹
¹One classical technique is called root locus. A root locus is a graph of the closed-loop poles as a function of a
single real parameter, for example a controller gain. In root locus you try to get the closed-loop poles into a desired
region.
CHAPTER 4. FEEDBACK CONTROL THEORY 55
The second approach, which is the subject of ECE410 and ECE557, is in the time domain and
uses state-space models instead of transfer functions. Specifications may be based on closed-loop
eigenvalues, that is, closed-loop poles. This second approach is known as the state-space approach
or modern control, although it dates from the 1960s and 1970s.
These two approaches are complementary. Classical control is appropriate for a single-input,
single-output plant, especially if it is open-loop stable. The state-space approach is appropriate for
multi-input, multi-output plants; it is especially powerful in providing a methodical procedure to
stabilize an unstable plant. Stability margin is very transparent in classical control and less so in the
state-space approach. Of course, simulation must accompany any design approach. For example,
in classical control you typically design for a desired bandwidth and stability margin; you test your
design by simulation; you evaluate, and then perhaps modify the stability margin, redesign, and
test again.
Beyond these two approaches is optimal control, where the controller is designed by minimizing a
mathematical function. In this context classical control extends to H∞ optimization.
[Material is missing here in the source: the setup of the cart-pendulum example, whose linearized state model is the following.]
A = [ 0  0     1  0
      0  0     0  1
      0 −19.6  0  0
      0  29.4  0  0 ],  B = [ 0
                              0
                              1
                             −1 ].
Let's suppose we designate the cart position as the only output: y = x1. Then
C = [ 1 0 0 0 ].
The transfer function from u to y is
P(s) = (s² − 9.8)/(s²(s² − 29.4)).
The poles and zeros of P(s) are
[Pole-zero plot: poles at s = ±5.4 and a double pole at s = 0; zeros at s = ±3.13.]
Having three poles in Re s ≥ 0, the plant is quite unstable. The right half-plane zero doesn't
contribute to the degree of instability, but, as we shall see, it does make the plant quite difficult to
control. The block diagram of the plant by itself is
[Block diagram: u, the force on the cart (N), enters P(s); the output y is the position of the cart (m).]
Let us try to stabilize the plant by feeding back the cart position, y, comparing it to a reference
r, and setting the error r − y as the controller input:
[Block diagram: r minus y enters C(s); the controller output u enters P(s), whose output is y.]
Here C(s) is the transfer function of the controller to be designed. One controller that does in fact
stabilize is
C(s) = (10395s³ + 54126s² − 13375s − 6687)/(s⁴ + 32s³ + 477s² − 5870s − 22170).
The controller itself, C(s), is unstable, as is P(s). But when the controller and plant are connected
in feedback, the system is stable. If the pendulum starts to fall, the controller causes the cart to
move, in the right direction, to make the pendulum tend to come vertical again. You're invited to
simulate the closed-loop system; for example, let r be the input
simulate the closed-loop system; for example, let r be the input
Brief Article
The Author
January 29, 2008
r
0.1
5 t
1
which corresponds to a command that the cart move right 0.1 m for 5 seconds, then return to its
original position. Below is a plot of the cart position x
1
versus t:
0 2 4 6 8 10 12
1.5
1.0
0.5
0.0
0.5
1.0
1.5
The cart moves rather wildly as it tries to balance the pendulum (it's not a good controller
design) but it does stabilize.
We mentioned that our controller C(s) is open-loop unstable. It can be proved (it's beyond the
scope of this course) that every controller that stabilizes this P(s) is itself unstable. □
Our objective in this section is to give two definitions for what it means for the following feedback
system to be stable:
[Block diagram: r minus y gives e; e enters C(s), whose output plus the disturbance d gives u; u enters P(s), whose output is y.]
The notation is this:
systems:  P(s), plant transfer function
          C(s), controller transfer function
signals:  r(t), reference (or command) input
          e(t), tracking error
          d(t), disturbance
          u(t), plant input
          y(t), plant output.
The two definitions look quite distinct, but it turns out that they are exactly equivalent under mild
conditions. We shall assume throughout that P(s), C(s) are rational, C(s) is proper, and P(s)
is strictly proper.
Internal Stability
For this concept, set r = d = 0 and bring in state models for P and C:
[Figure: feedback connection of the controller state model (A_c, B_c, C_c, D_c), with state x_c and
input e, and the plant state model (A_p, B_p, C_p, 0), with state x_p and input u.]
The closed-loop equations are
ẋ_p = A_p x_p + B_p u
u = C_c x_c + D_c e
ẋ_c = A_c x_c + B_c e
e = −C_p x_p.
Defining the closed-loop state x_cl = (x_p, x_c), we get simply
ẋ_cl = A_cl x_cl,  A_cl = [ A_p − B_p D_c C_p   B_p C_c ; −B_c C_p   A_c ].
Internal stability is defined to mean that the state model ẋ_cl = A_cl x_cl is stable, that is, that all
the eigenvalues of A_cl have Re λ < 0. Thus the concept means this: With no inputs applied (i.e.,
r = d = 0), the internal states x_p(t), x_c(t) will converge to zero for every initial state x_p(0), x_c(0).
Example
Take C(s) = 2, P(s) = 1/(s − 1). Then
A_p = 1, B_p = 1, C_p = 1; D_c = 2
and
A_cl = A_p − B_p D_c C_p = −1.
Thus the unstable plant 1/(s − 1) is internally stabilized by unity feedback with the pure-gain
controller 2. □
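The scalar computation above is easy to check numerically. Here is a quick sketch (plain Python, no control toolbox assumed) that forms A_cl = A_p − B_p D_c C_p for this example and confirms the closed-loop eigenvalue lies in Re s < 0.

```python
# Scalar state models: plant P(s) = 1/(s - 1), controller C(s) = 2 (pure gain).
Ap, Bp, Cp = 1.0, 1.0, 1.0   # plant:      x_p' = Ap*x_p + Bp*u,  y = Cp*x_p
Dc = 2.0                     # controller: u = Dc*e (no controller dynamics)

# With r = d = 0 we have e = -Cp*x_p, so the closed-loop "A matrix" is scalar:
Acl = Ap - Bp * Dc * Cp

print(Acl)  # -1.0: the single closed-loop eigenvalue, in Re s < 0
```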
Example
Let's see that 1/(s − 1) cannot be internally stabilized by cancelling the unstable pole:
[Figure: r → (−) → (s − 1)/(s + 1) (controller state x_c) → 1/(s − 1) (plant state x_p) → y.]
The transfer function from r to y equals 1/(s + 1). Hence the system from r to y is BIBO stable. But
with r = 0, the state model is
[Figure: state diagram of the loop with states x_c and x_p.]
Thus
ẋ_p = x_p − 2x_c
ẋ_c = −x_c
ẋ_cl = A_cl x_cl,  A_cl = [ 1  −2 ; 0  −1 ] = unstable.
Clearly x_p(t) doesn't converge to 0 if x_p(0) ≠ 0. □
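A short simulation illustrates the point. The sketch below (plain Python, forward-Euler integration with a small step) starts the plant state at x_p(0) = 1 with x_c(0) = 0 and watches x_p grow, even though the r-to-y transfer function 1/(s + 1) is BIBO stable.

```python
# Closed-loop state equations with r = 0:
#   x_p' = x_p - 2*x_c,   x_c' = -x_c
xp, xc = 1.0, 0.0               # nonzero initial plant state
dt = 0.001
for _ in range(int(5.0 / dt)):  # simulate 5 seconds
    xp, xc = xp + dt * (xp - 2 * xc), xc + dt * (-xc)

print(xp)  # roughly e^5, about 148: the hidden unstable mode grows
```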
Input-Output Stability
Now we turn to the second way of thinking about stability of the feedback loop. First, write the
equations for the outputs of the summing junctions:
E = R − PU
U = D + CE.
Assemble into a vector equation:
[ 1  P ; −C  1 ] [E; U] = [R; D].
In view of our standing assumptions (P strictly proper, C proper), the determinant of
[ 1  P ; −C  1 ]
is not identically zero (why?). Thus we can solve for E, U:
[E; U] = [ 1  P ; −C  1 ]⁻¹ [R; D] = [ 1/(1 + PC)  −P/(1 + PC) ; C/(1 + PC)  1/(1 + PC) ] [R; D].
The output is given by
Y = PU = [PC/(1 + PC)] R + [P/(1 + PC)] D.
We just derived the following closed-loop transfer functions:
R to E: 1/(1 + PC),   R to U: C/(1 + PC),   R to Y: PC/(1 + PC)
D to E: −P/(1 + PC),  D to U: 1/(1 + PC),   D to Y: P/(1 + PC).
The above method of finding closed-loop transfer functions works in general.
Example
Here's a somewhat complex block diagram:
[Figure: block diagram with blocks P_1, ..., P_5, summing-junction outputs labelled x_1 and x_2,
input r, and output y.]
Let us find the transfer function from r to y. Label the outputs of the summing junctions, as shown.
Write the equations at the summing junctions:
X_1 = R − P_2 P_5 X_2
X_2 = P_1 X_1 − P_2 P_4 X_2
Y = P_3 X_1 + P_2 X_2.
Assemble as
[ 1  P_2 P_5  0 ; −P_1  1 + P_2 P_4  0 ; −P_3  −P_2  1 ] [X_1; X_2; Y] = [R; 0; 0].
Solve for Y by Cramer's rule:
Y = det[ 1  P_2 P_5  R ; −P_1  1 + P_2 P_4  0 ; −P_3  −P_2  0 ]
  / det[ 1  P_2 P_5  0 ; −P_1  1 + P_2 P_4  0 ; −P_3  −P_2  1 ].
Simplify:
Y/R = [P_1 P_2 + P_3(1 + P_2 P_4)] / [1 + P_2 P_4 + P_1 P_2 P_5]. □
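The reduced transfer function can be sanity-checked numerically: pick arbitrary numeric values standing in for P_1(s), ..., P_5(s) at one fixed s, solve the summing-junction equations directly, and compare with the closed-form answer. A sketch in plain Python (the particular numbers are mine, chosen only for illustration):

```python
# Hypothetical numeric values standing in for P_1(s), ..., P_5(s) at one fixed s.
P1, P2, P3, P4, P5 = 2.0, 0.5, 1.5, 3.0, 0.25

# Solve the summing-junction equations directly for R = 1:
#   X1 = R - P2*P5*X2,   X2 = P1*X1 - P2*P4*X2
R = 1.0
X2 = P1 * R / (1 + P2 * P4 + P1 * P2 * P5)   # eliminate X1 by substitution
X1 = R - P2 * P5 * X2
Y = P3 * X1 + P2 * X2

# Closed-form answer from the block-diagram reduction:
Y_formula = (P1 * P2 + P3 * (1 + P2 * P4)) / (1 + P2 * P4 + P1 * P2 * P5)

print(abs(Y - Y_formula) < 1e-12)  # True
```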
Now back to the basic feedback loop:
[Figure: r → (−) → e → C, with the disturbance d added at the plant input u, then P → y.]
We say this feedback system is input-output stable provided e, u, and y are bounded signals
whenever r and d are bounded signals; briefly, the system from (r, d) to (e, u, y) is BIBO stable.
This is equivalent to saying that the 6 transfer functions from (r, d) to (e, u, y) are stable, in the
sense that all poles are in Re s < 0. It suffices to look at the 4 transfer functions from (r, d) to
(e, u), namely,
[ 1/(1 + PC)  −P/(1 + PC) ; C/(1 + PC)  1/(1 + PC) ].
(Proof: If r and e are bounded, so is y = r − e.)
Example
P(s) = 1/(s² − 1),  C(s) = (s − 1)/(s + 1)
The 4 transfer functions are
[E; U] = [ (s + 1)²/(s² + 2s + 2)   −(s + 1)/[(s − 1)(s² + 2s + 2)] ;
           (s + 1)(s − 1)/(s² + 2s + 2)   (s + 1)²/(s² + 2s + 2) ] [R; D].
Three of these are stable; the one from D to E is not. Consequently, the feedback system is not
input-output stable. This is in spite of the fact that a bounded r produces a bounded y. Notice
that the problem here is that C cancels an unstable pole of P. As we'll see, that isn't allowed. □
Example
P(s) = 1/(s − 1),  C(s) = k
The feedback system is input-output stable iff k > 1 (check). □
We now look at two ways to test feedback IO stability. The first is in terms of numerator and
denominator polynomials:
P = N_p/D_p,  C = N_c/D_c.
We assume (N_p, D_p) are coprime, i.e., have no common factors, and (N_c, D_c) are coprime too.
The characteristic polynomial of the feedback system is defined to be N_p N_c + D_p D_c.
Example
P(s) = 1/(s² − 1),  C(s) = (s − 1)/(s + 1)
Notice the unstable pole-zero cancellation. The characteristic polynomial is
(s − 1) + (s² − 1)(s + 1) = (s − 1)(s² + 2s + 2).
This has a right half-plane root. □
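The characteristic polynomial N_pN_c + D_pD_c is easy to form numerically by polynomial multiplication (convolution of coefficient lists). The sketch below (plain Python, coefficients listed from highest power down; the helper names are my own) rebuilds the polynomial for this example and confirms that s = 1 is a root, i.e., that there is a right half-plane root.

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (highest power first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyadd(a, b):
    """Add two coefficient lists, padding the shorter one on the left."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

def polyval(p, s):
    """Evaluate a coefficient list at s by Horner's rule."""
    v = 0.0
    for c in p:
        v = v * s + c
    return v

# P = N_p/D_p = 1/(s^2 - 1),  C = N_c/D_c = (s - 1)/(s + 1)
Np, Dp = [1.0], [1.0, 0.0, -1.0]
Nc, Dc = [1.0, -1.0], [1.0, 1.0]

charpoly = polyadd(polymul(Np, Nc), polymul(Dp, Dc))
print(charpoly)                 # [1.0, 1.0, 0.0, -2.0] = s^3 + s^2 - 2
print(polyval(charpoly, 1.0))   # 0.0: s = 1 is an unstable root
```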
Theorem 4.1.1 The feedback system is input-output stable iff the characteristic polynomial has no
roots in Re s ≥ 0.
Proof We have
[ 1/(1 + PC)  −P/(1 + PC) ; C/(1 + PC)  1/(1 + PC) ]
  = [1/(N_p N_c + D_p D_c)] [ D_p D_c  −N_p D_c ; N_c D_p  D_p D_c ].   (4.1)
(Sufficiency) If N_p N_c + D_p D_c has no roots in Re s ≥ 0, then the four transfer functions on the
left-hand side of (4.1) have no poles in Re s ≥ 0, and hence they are stable.
(Necessity) Conversely, assume the feedback system is stable, that is, the four transfer functions
on the left-hand side of (4.1) are stable. To conclude that N_p N_c + D_p D_c has no roots in Re s ≥ 0,
we must show that the polynomial N_p N_c + D_p D_c does not have a common factor with all four
numerators in (4.1), namely, D_p D_c, N_p D_c, N_c D_p. That is, we must show that the four polynomials
N_p N_c + D_p D_c,  D_p D_c,  N_p D_c,  N_c D_p
do not have a common root. This part is left for you. □
The second way to test feedback IO stability is as follows.
Theorem 4.1.2 The feedback system is input-output stable iff 1) the transfer function 1 + PC
has no zeros in Re s ≥ 0, and 2) the product PC has no pole-zero cancellations in Re s ≥ 0.
(Proof left to you.)
Example
P(s) = 1/(s² − 1),  C(s) = (s − 1)/(s + 1)
Check that 1) holds but 2) does not. □
The Routh-Hurwitz Criterion
In practice, one checks feedback stability using MATLAB to calculate the eigenvalues of A_cl or
the roots of the characteristic polynomial. However, it is sometimes useful, and also of historical
interest, to have an easy test for simple cases.
Consider a general characteristic polynomial
p(s) = s^n + a_{n−1}s^{n−1} + ··· + a_1 s + a_0,  a_i real.
Let's say p(s) is stable if all its roots have Re s < 0. The Routh-Hurwitz criterion is an algebraic
test for p(s) to be stable, without having to calculate the roots. Instead of studying the complete
criterion, here are the results for n = 1, 2, 3:
1. p(s) = s + a_0: p(s) is stable (obviously) iff a_0 > 0.
2. p(s) = s² + a_1 s + a_0: p(s) is stable iff a_0, a_1 > 0.
3. p(s) = s³ + a_2 s² + a_1 s + a_0: p(s) is stable iff a_0, a_1, a_2 > 0 and a_1 a_2 > a_0.
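These low-order tests are mechanical enough to code directly. The sketch below (plain Python; the function name `routh_stable_3` is my own) implements the n = 3 conditions and checks them on two polynomials whose stability is known by factoring: (s + 1)(s² + s + 1) = s³ + 2s² + 2s + 1 (stable) and (s − 1)(s² + 2s + 3) = s³ + s² + s − 3 (unstable).

```python
def routh_stable_3(a2, a1, a0):
    """Routh-Hurwitz test for p(s) = s^3 + a2*s^2 + a1*s + a0."""
    return a0 > 0 and a1 > 0 and a2 > 0 and a1 * a2 > a0

# (s + 1)(s^2 + s + 1) = s^3 + 2s^2 + 2s + 1: all roots in Re s < 0
print(routh_stable_3(2, 2, 1))    # True  (2*2 = 4 > 1)

# (s - 1)(s^2 + 2s + 3) = s^3 + s^2 + s - 3: root at s = 1
print(routh_stable_3(1, 1, -3))   # False (a0 < 0)
```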
4.2 The Internal Model Principle
Cruise control in a car regulates the speed to a prescribed setpoint. What is the principle underlying
its operation? The answer lies in the final value theorem (FVT).
Example
[Figure: r(t) → (−) → e(t) → C(s) → P(s) → output.]
Take the controller and plant
C(s) = 1/s,  P(s) = 1/(s + 1).
Let r be a constant, r(t) = r_0. Then we have
E(s) = [1/(1 + P(s)C(s))] R(s)
     = [s(s + 1)/(s² + s + 1)] (r_0/s)
     = [(s + 1)/(s² + s + 1)] r_0.
The FVT applies to E(s), and e(t) → 0 as t → ∞. Thus the feedback system provides perfect
asymptotic tracking of a step input! How it works: C(s) contains an internal model of R(s) (i.e., an
integrator); closing the loop creates a zero from R(s) to E(s) exactly to cancel the unstable pole
of R(s). (This isn't illegal pole-zero cancellation.) □
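The integrator's effect shows up clearly in simulation. Here is a sketch (plain Python, forward Euler) of the loop with C(s) = 1/s and P(s) = 1/(s + 1) driven by a unit step; the tracking error decays to zero, as the FVT predicts.

```python
# States: xc = integrator state of C (u = xc), xp = plant state (y = xp).
#   xc' = e = r - y,   xp' = -xp + u
r = 1.0                          # unit step reference
xc = xp = 0.0
dt = 0.001
for _ in range(int(30.0 / dt)):  # 30 seconds is plenty for settling
    e = r - xp
    xc, xp = xc + dt * e, xp + dt * (-xp + xc)

print(abs(r - xp))  # tracking error, essentially 0
```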
Example
This time take
C(s) = 1/s,  P(s) = (2s + 1)/(s(s + 1))
and take r to be a ramp, r(t) = r_0 t. Then R(s) = r_0/s² and so
E(s) = [(s + 1)/(s³ + s² + 2s + 1)] r_0.
Again e(t) → 0; perfect tracking of a ramp. Here C(s) and P(s) together provide the internal
model, a double integrator. □
Let's generalize:
Theorem 4.2.1 Assume P(s) is strictly proper, C(s) is proper, and the feedback system is stable.
If P(s)C(s) contains an internal model of the unstable part of R(s), then perfect asymptotic tracking
occurs, i.e., e(t) → 0.
Example
R(s) = r_0/(s² + 1),  P(s) = 1/(s + 1)
Design C(s) to achieve perfect asymptotic tracking of the sinusoid r(t), as follows. From the
theorem, we should try something of the form
C(s) = [1/(s² + 1)] C_1(s),
that is, we embed an internal model in C(s), and allow an additional factor to achieve feedback
stability. You can check that C_1(s) = s works. Notice that we have effectively created a notch filter
from R to E, a notch filter with zeros at s = ±j. □
Example
An inverted pendulum balanced on your hand.
[Figure: mass M on a stick of length L; u is the position of the hand, and u + Lθ is approximately
the position of the mass.]
The equation is
M(d²u/dt² + L d²θ/dt²) = Mgθ.
Thus
Ms²U + MLs²Θ = MgΘ.
So the transfer function from u to θ equals
−Ms²/(MLs² − Mg) = −s²/(Ls² − g).
The plant has a zero at s = 0, so an internal model of a step (an integrator in C(s)) would force a
pole-zero cancellation at s = 0 in PC: step tracking and internal stability are not simultaneously
achievable. □
4.3 Principle of the Argument
The Nyquist criterion is a test for a feedback loop to be stable. The criterion is based on the
principle of the argument from complex function theory.²
The principle of the argument involves two things: a curve in the complex plane and a transfer
function.
Consider a closed path (or curve or contour) in the s-plane, with no self-intersections and with
negative, i.e., clockwise (CW) orientation. We name the path 𝒟:
[Figure: a closed clockwise contour 𝒟 in the s-plane.]
Now let G(s) be a rational function, a ratio of polynomials such as
(s + 1)/(s + 2),  s + 1,  1/s
²Harry Nyquist was born in 1889 in Sweden. He attended the University of North Dakota and received the B.S.
and M.S. degrees in electrical engineering. He attended Yale University and was awarded the Ph.D. degree in 1917.
From 1917 to 1934 Nyquist was employed by the American Telephone and Telegraph Company in the Department
of Development and Research Transmission, where he was concerned with studies on telegraph picture and voice
transmission. From 1934 to 1954 he was with the Bell Telephone Laboratories, where he continued in the work
of communications engineering, especially in transmission engineering and systems engineering. At the time of his
retirement from Bell Labs in 1954, Nyquist was Assistant Director of Systems Studies.
During his 37 years of service with the Bell System, he received 138 U.S. patents and published twelve technical
articles. His many important contributions to the radio art include the first quantitative explanation of thermal noise,
signal transmission studies which laid the foundation for modern information theory and data transmission, and the
invention of the vestigial sideband transmission system now widely used in television broadcasting.
but not √s or sin s. (The polynomial s + 1 is regarded as rational because it can be written (s + 1)/1.)
For every point s in the complex plane, G(s) is a point in the complex plane. We draw two copies of
the complex plane to avoid clutter, s in one called the s-plane, G(s) in the other called the G-plane.
As s goes once around 𝒟 from any starting point, the point G(s) traces out a closed curve denoted
𝒢, the image of 𝒟 under G(s).
Example
For G(s) = s − 1 we could have
[Figure: a contour 𝒟 enclosing the point s = 1 in the s-plane, and its image 𝒢 in the G-plane.]
Notice that 𝒢 is just 𝒟 shifted to the left one unit. Since 𝒟 encircles one zero of G(s), 𝒢 encircles
the origin once CW. □
Example
Let's keep G(s) = s − 1 but change 𝒟:
[Figure: a contour 𝒟 not enclosing s = 1, and its image 𝒢 away from the origin.]
Now 𝒟 encircles no zero of G(s) and 𝒢 has no encirclements of the origin. □
Example
Now consider
G(s) = 1/(s − 1).
The angle of G(s) equals the negative of the angle of s − 1:
∠G(s) = ∠1 − ∠(s − 1) = −∠(s − 1).
[Figure: a contour 𝒟 enclosing the pole at s = 1, and its image 𝒢.]
From this we get that if 𝒟 encircles the pole CW, then 𝒢 encircles the origin once counterclockwise
(CCW). □
Based on these examples, we now see the relationship between the number of poles and zeros of
G(s) in 𝒟 and the number of times 𝒢 encircles the origin.
Theorem 4.3.1 (Principle of the Argument) Suppose G(s) has no poles or zeros on 𝒟, but
𝒟 encloses n poles and m zeros of G(s). Then 𝒢 encircles the origin exactly n − m times CCW.
Proof Write G(s) in this way:
G(s) = K ∏_i (s − z_i) / ∏_i (s − p_i)
with K a real gain, z_i the zeros, and p_i the poles. Then for every s on 𝒟
∠G(s) = ∠K + Σ_i ∠(s − z_i) − Σ_i ∠(s − p_i).
If z_i is enclosed by 𝒟, the net change in ∠(s − z_i) is −2π; otherwise the net change is 0. Hence the
net change in ∠G(s) equals m(−2π) − n(−2π), which equals (n − m)2π. □
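The theorem is easy to check numerically: walk the contour, accumulate the change in ∠G(s), and divide by 2π. The sketch below (plain Python with `cmath`) traces a clockwise circle of radius 2 about the origin for G(s) = (s − 1)/(s + 1)², which has one enclosed zero and one enclosed double pole, and recovers n − m = 2 − 1 = 1 CCW encirclement of the origin.

```python
import cmath
import math

def G(s):
    return (s - 1) / (s + 1) ** 2   # zero at 1, double pole at -1, all inside |s| = 2

# Clockwise circle of radius 2 (negative orientation, as in the text).
N = 100000
total = 0.0
prev = cmath.phase(G(2.0))
for k in range(1, N + 1):
    s = 2.0 * cmath.exp(-2j * math.pi * k / N)   # minus sign: CW traversal
    ph = cmath.phase(G(s))
    d = ph - prev
    # unwrap the principal-value jump so we accumulate the continuous angle
    if d > math.pi:
        d -= 2 * math.pi
    elif d < -math.pi:
        d += 2 * math.pi
    total += d
    prev = ph

encirclements_ccw = round(total / (2 * math.pi))
print(encirclements_ccw)   # 1, i.e., n - m = 2 - 1
```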
The special 𝒟 we use for the Nyquist contour is
[Figure: the Nyquist contour, the imaginary axis together with a semicircle of infinite radius
closing it in the right half-plane.]
Then 𝒢 is called the Nyquist plot of G(s). If G(s) has no poles or zeros on 𝒟, then the Nyquist
plot encircles the origin exactly n − m times CCW, where n equals the number of poles of G(s) in
Re s > 0 and m equals the number of zeros of G(s) in Re s > 0. From this follows
Theorem 4.3.2 Suppose G(s) has no poles on 𝒟. Then G(s) has no zeros in Re s ≥ 0 iff 𝒢
doesn't pass through the origin and encircles it exactly n times CCW, where n equals the number of
poles in Re s > 0.
Note that G(s) has no poles on 𝒟 iff G(s) is proper and G(s) has no poles on the imaginary axis;
and G(s) has no zeros on 𝒟 iff G(s) is not strictly proper and G(s) has no zeros on the imaginary
axis.
In our application, if G(s) actually does have poles on the imaginary axis, we have to indent
around them. You can indent either to the left or to the right; we shall always indent to the right:
[Figure: the Nyquist contour with small semicircular indentations to the right of poles on the
imaginary axis.]
Note that we're indenting around poles of G(s) on the imaginary axis. Zeros of G(s) are irrelevant
at this point.
4.4 Nyquist Stability Criterion (1932)
Now we apply the principle of the argument. The setup is
[Figure: feedback loop with gain K and controller C(s) in the forward path, followed by the plant P(s).]
where K is a real gain and C(s) and P(s) are rational transfer functions. We're after a graphical
test for stability involving the Nyquist plot of P(s)C(s). We could also have a transfer function
F(s) in the feedback path, but we'll take F(s) = 1 for simplicity.
The assumptions are these:
1. P(s), C(s) are proper, with at least one of them strictly proper.
2. The product P(s)C(s) has no pole-zero cancellations in Re s ≥ 0. We have to assume
this because the Nyquist criterion doesn't test for it, and such cancellations would make the
feedback system not stable.
3. The gain K is nonzero. This is made only because we're going to divide by K at some point.
Theorem 4.4.1 Let n denote the number of poles of P(s)C(s) in Re s > 0. Construct the
Nyquist plot of P(s)C(s), indenting to the right around poles on the imaginary axis. Then the
feedback system is stable iff the Nyquist plot doesn't pass through −1/K and encircles it exactly n
times CCW.
Proof The closed-loop transfer function from reference input to plant output equals
KP(s)C(s)/(1 + KP(s)C(s)).
Define G(s) = 1 + KP(s)C(s). Because we have assumed no unstable pole-zero cancellations,
feedback stability is equivalent to the condition that G(s) has no zeros in Re s ≥ 0. So we're going
to apply the principle of the argument to get a test for G(s) to have no RHP zeros.
Please note the logic so far: The closed-loop transfer function is
KP(s)C(s)/G(s).
This should have no RHP poles. So G(s) should have no RHP zeros. So we need a test for G(s) to
have this property.
Note that G(s) and P(s)C(s) have the same poles in Re s ≥ 0, so G(s) has precisely n there.
Since 𝒟 indents around poles of G(s) on the imaginary axis and since G(s) is proper, G(s) has
no poles on 𝒟. Thus by Theorem 4.3.2, the feedback system is stable iff the Nyquist plot of G(s)
doesn't pass through 0 and encircles it exactly n times CCW. Since P(s)C(s) = (1/K)G(s) − 1/K,
this latter condition is equivalent to: the Nyquist plot of P(s)C(s) doesn't pass through −1/K and
encircles it exactly n times CCW. □
If there had been a nonunity transfer function F(s) in the feedback path, we would have taken
G = 1 + KPCF in the proof.
There is a subtle point that may have occurred to you concerning the Nyquist criterion, and
it relates to the discussion about frequency response in Section 3.6. Consider the plant transfer
function P(s). If it has an unstable pole, then the ROC of the Laplace transform does not include
the imaginary axis. And yet the Nyquist plot involves the function P(jω). This may seem to be
a contradiction, but it is not. In mathematical terms we have employed analytic continuation to
extend the function P(s) to be defined outside the region of convergence of the Laplace integral.
4.5 Examples
In this section you'll learn how to draw Nyquist plots and how to apply the Nyquist criterion.
Example
The first example is this:
PC(s) = 1/(s + 1)².
Below is shown the curve 𝒟 and the Nyquist diagram. The curve 𝒟 is divided into segments whose
ends are the points A, B, C. We map 𝒟 one segment at a time. The points in the right-hand plot are
also labelled A, B, C, so don't be confused: the left-hand A is mapped by PC into the right-hand A.
The first segment is from A to B, that is, the positive imaginary axis. On this segment, s = jω
and you can derive from
PC(jω) = 1/(jω + 1)²
that
Re PC(jω) = (1 − ω²)/[(1 − ω²)² + (2ω)²],  Im PC(jω) = −2ω/[(1 − ω²)² + (2ω)²].
[Figure: the contour 𝒟 with points A, B, C and, on the right, the Nyquist plot of PC.]
As s goes from A to B, PC(s) traces out the curve in the lower half-plane. You can see this by
noting that the imaginary part of PC(jω) remains negative, while the real part changes sign once,
from positive to negative. This segment of the Nyquist plot starts at the point 1, since PC(0) = 1.
As s approaches B, that is, as ω goes to +∞, PC(jω) becomes approximately
1/(jω)² = −1/ω²,
which is a negative real number going to 0. This is also consistent with the fact that PC is strictly
proper.
Next, the semicircle from B to C has infinite radius and hence is mapped by PC to the origin.
Finally, the line segment from C to A in the left-hand graph is the complex conjugate of the
already-drawn segment from A to B. So the same is true in the right-hand graph. (Why? Because
PC has real coefficients, and therefore PC(s̄) is the complex conjugate of PC(s).)
In conclusion, we've arrived at the closed path in the right-hand graph, the Nyquist plot. In
this example, the curve has no self-intersections.
Now we're ready to apply the Nyquist criterion and determine for what range of K the feedback
system is stable. The transfer function PC has no poles inside 𝒟 and therefore n = 0. So the
feedback system is stable iff the Nyquist plot encircles −1/K exactly 0 times CCW. This means,
does not encircle it. Thus the conditions for stability are −1/K < 0 or −1/K > 1; that is, K > 0 or
−1 < K < 0; that is, K > −1, K ≠ 0. The condition K ≠ 0 is ruled out by our initial assumption
(which we made only because we were going to divide by K). But now, at the end of the analysis,
we can check directly that the feedback system actually is stable for K = 0. So finally the condition
for stability is K > −1. You can readily confirm this by applying Routh-Hurwitz to the closed-loop
characteristic polynomial, (s + 1)² + K. □
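The real and imaginary parts derived above can be cross-checked numerically. The sketch below (plain Python) compares the two expressions against a direct evaluation of 1/(jω + 1)² at a handful of frequencies.

```python
def pc(w):
    return 1 / (1j * w + 1) ** 2

for w in (0.0, 0.5, 1.0, 2.0, 10.0):
    den = (1 - w * w) ** 2 + (2 * w) ** 2
    re = (1 - w * w) / den          # derived real part
    im = -2 * w / den               # derived imaginary part
    assert abs(pc(w).real - re) < 1e-12
    assert abs(pc(w).imag - im) < 1e-12

print("formulas check out")  # e.g. pc(1) = 1/(1+j)^2 = -0.5j
```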
Example
Next, we look at
PC(s) = (s + 1)/(s(s − 1))
for which
Re PC(jω) = −2/(ω² + 1),  Im PC(jω) = (1 − ω²)/[ω(ω² + 1)].
The plots are these:
[Figure: the contour 𝒟, indented to the right around the pole at s = 0, with points A, B, C, D,
and the Nyquist plot, which starts near Re = −2 and crosses the real axis at −1.]
Since there's a pole at s = 0 we have to indent around it. As stated before, we will always indent
to the right.
Let's look at A, the point jε, ε small and positive. We have
Re PC(jε) = −2/(ε² + 1) ≈ −2,  Im PC(jε) = (1 − ε²)/[ε(ε² + 1)] ≈ +∞.
This point is shown on the right-hand graph.
The mapping of the curve segment ABCD follows routinely.
Finally, the segment DA. On this semicircle, s = εe^{jθ} and θ goes from −π/2 to +π/2; that is,
the direction is CCW. We have
PC(εe^{jθ}) ≈ −(1/ε)e^{−jθ} = (1/ε)e^{j(π−θ)}.
Since θ is increasing from D to A, the angle of PC, namely π − θ, decreases, and in fact undergoes
a net change of −π. Thus on the right-hand graph, the curve from D to A is a semicircle of infinite
radius and the direction is CW. There's another, quite nifty, way to see this. Imagine a particle
making one round trip on the 𝒟 contour in the left-hand graph. As the particle goes from C to D,
at D it makes a right turn. Exactly the same thing happens in the right-hand graph: Moving from
C to D, the particle makes a right turn at D. Again, on the left-hand plot, the segment DA is a
half-revolution turn about a pole of multiplicity one, and consequently, the right-hand segment DA
is a half-revolution too, though in the opposite direction.
Now to apply the Nyquist criterion: n = 1. Therefore we need exactly 1 CCW encirclement of
the point −1/K. Thus feedback stability holds iff −1 < −1/K < 0; equivalently, K > 1. □
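The Nyquist conclusion can be cross-checked algebraically: with loop gain K, the characteristic polynomial is s(s − 1) + K(s + 1) = s² + (K − 1)s + K, and the second-order Routh test (both coefficients positive) again gives K > 1. A sketch in plain Python:

```python
def closed_loop_stable(K):
    # characteristic polynomial: s^2 + (K - 1)s + K; stable iff all coefficients > 0
    return (K - 1) > 0 and K > 0

print(closed_loop_stable(2.0))   # True:  K = 2   > 1
print(closed_loop_stable(0.5))   # False: K = 0.5 < 1
```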
Example
The third example is
PC(s) = 1/[(s + 1)(s² + 1)]
for which
Re PC(jω) = 1/(1 − ω⁴),  Im PC(jω) = −ω/(1 − ω⁴).
You should be able to draw the graphs based on the preceding two examples. They are
[Figure: the contour 𝒟, indented to the right around the poles at s = ±j, with points A through G,
and the corresponding Nyquist plot, which starts at +1.]
To apply the criterion, we have n = 0. Feedback stability holds iff −1/K > 1; equivalently,
−1 < K < 0. □
4.6 Bode Plots
Control design is typically done in the frequency domain using Bode plots. For this reason it's
useful to translate the Nyquist criterion into a condition on Bode plots.
First, let's review the drawing of Bode plots. (You're likely to be unfamiliar with these when
the plant has right half-plane poles or zeros.) We consider only rational G(s) with real coefficients.
Then G(s) has a numerator and denominator, each of which can be factored into terms of the
following forms:
1. gain: K
2. pole or zero at s = 0: s^n
3. real nonzero pole or zero: τs ± 1
4. complex conjugate pole or zero: (1/ω_n²)(s² ± 2ζω_n s + ω_n²),  ω_n > 0, 0 ≤ ζ < 1
For example
G(s) = 40s²(s − 2)/[(s − 5)(s² + 4s + 100)]
     = [(40 · 2)/(5 · 100)] · s² ((1/2)s − 1) / [((1/5)s − 1) · (1/100)(s² + 4s + 100)].
As an intermediate step, we introduce the polar plot: Im G(jω) vs Re G(jω) as ω goes from 0 to ∞.
We now look at polar and Bode plots for each of the four terms. We draw the magnitude Bode plots
in absolute units instead of dB.
1. G(s) = K. Below, from left to right, we see, first, in a box the stipulation that the plots are
for positive gain; next, the polar plot, which is a stationary point; then the magnitude Bode plot
above the phase Bode plot.
[Figure: for K > 0, the polar plot is the single point K; |G(jω)| = K on a log-log scale; arg G(jω) = 0°
on a semilog scale.]
If K is negative, the plots are these:
[Figure: for K < 0, the polar plot is the point K on the negative real axis; |G| = |K|; arg G = 180°.]
2. G(s) = s^n, n ≥ 1. The polar plot is a radial line from the origin along one of the axes,
depending on the value of n. The magnitude Bode plot is a straight line of slope equal to n; the
phase is constant at 90n degrees.
[Figure: magnitude plots of slope n for n = 1, 2, 3, and phase plots at 90°, 180°, 270°, ...; the polar
plot lies along the positive imaginary axis for n = 1, 5, ..., the negative real axis for n = 2, 6, ...,
the negative imaginary axis for n = 3, 7, ..., and the positive real axis for n = 4, 8, ....]
3. G(s) = τs + 1, τ > 0
[Figure: polar plot of G(jω) = 1 + jτω, a vertical ray from the point 1 (LHP zero); magnitude
Bode plot with corner frequency 1/τ, low-frequency asymptote 1, high-frequency asymptote of
slope +1, and maximum approximation error at the corner, where |1 + j| = √2 = 3 dB; phase
rising from 0° to 90°, with the approximation drawn over one decade on either side of the corner.]
G(s) = τs − 1, τ > 0
[Figure: polar plot of G(jω) = −1 + jτω (RHP zero); |G| is the same as for τs + 1; arg G falls
from 180° at ω = 0 toward 90° as ω → ∞.]
4. G(s) = (1/ω_n²)(s² + 2ζω_n s + ω_n²),  ω_n > 0, 0 < ζ < 1
G(jω) = (1/ω_n²)[(ω_n² − ω²) + j2ζω_n ω]
[Figure: polar plot passing through j2ζ at ω = ω_n; magnitude Bode plot with corner at ω_n and
high-frequency slope +2, the exact curve lying below the approximation when ζ < 1/√2; phase
rising from 0° to 180°, equal to 90° at ω = ω_n, with the approximation drawn from 0.1ω_n to 10ω_n.]
G(s) = (1/ω_n²)(s² + ω_n²),  ω_n > 0 (i.e., ζ = 0)
[Figure: the polar plot lies along the real axis; |G| equals its approximation except that it is 0 at
ω = ω_n; arg G jumps from 0° to 180° at ω_n.]
G(s) = (1/ω_n²)(s² − 2ζω_n s + ω_n²),  ω_n > 0, 0 < ζ < 1
[Figure: polar plot in the lower half-plane; |G| is the same as for G(s) = (1/ω_n²)(s² + 2ζω_n s + ω_n²);
arg G falls from 0° to −180°, passing through −90° at ω_n.]
Example
A minimum phase TF:
G_1(s) = 1/(s + 10) = 0.1 · 1/(0.1s + 1)
[Figure: the polar plot is a semicircle, |G(jω) − 0.05| = 0.05; magnitude Bode plot falling from 0.1
with corner at ω = 10; phase falling from 0° to −90°.] □
Example
An allpass TF:
G_2(s) = (1 − s)/(1 + s)
[Figure: the polar plot is the unit circle; |G_2| = 1 at all frequencies; arg G_2 falls from 0° to −180°.] □
Example
A nonminimum phase TF: G_3 = G_1 G_2,
G_3(s) = [1/(s + 10)] · [(1 − s)/(1 + s)]
|G_3(jω)| = |G_1(jω)|
arg G_3 = arg G_1 + arg G_2 ≤ arg G_1
[Figure: phase plots of G_1, G_2, and G_3 over 0.1 ≤ ω ≤ 100; G_3 carries the extra phase lag of the
allpass factor, e.g., at ω = 10.]
Of all TFs having the magnitude plot |G_1|, G_1 has the minimum phase lag at every ω. □
Using the principles developed so far, you should be able to sketch the approximation of the
Bode plot of
G(s) = 40s²(s − 2)/[(s − 5)(s² + 4s + 100)]
     = 0.16 · s² ((1/2)s − 1) / [((1/5)s − 1) · (1/100)(s² + 4s + 100)].
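The asymptotes of that sketch can be sanity-checked by evaluating |G(jω)| at the frequency extremes: at low frequency every first- and second-order factor is near 1, so |G| ≈ 0.16ω², and at high frequency the relative degree is 0, so |G| → 40. A sketch in plain Python:

```python
def G(s):
    return 40 * s**2 * (s - 2) / ((s - 5) * (s**2 + 4 * s + 100))

w_low, w_high = 1e-4, 1e6
g_low = abs(G(1j * w_low))
g_high = abs(G(1j * w_high))

print(g_low / (0.16 * w_low**2))   # close to 1: low-frequency asymptote 0.16*w^2
print(g_high)                      # close to 40: high-frequency asymptote
```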
Postscript
This and the previous few sections have dealt with graphical methods done by hand on paper. Why
did we bother? Why didn't we just ask MATLAB to draw the Bode or Nyquist plots? The answer
is that by learning to draw the graphs, you acquire understanding. In addition, there will come
times when MATLAB will fail you, for example, poles on the imaginary axis. MATLAB isn't
smart enough to indent the 𝒟-contour around a pole in drawing the Nyquist plot.
4.7 Stability Margins
If a feedback system is stable, how stable is it? In other words, how far is it from being unstable?
This depends entirely on our plant model, how we got it, and what uncertainty there is about the
model. In the frequency-domain context, uncertainty is naturally measured in terms of magnitude
and phase as functions of frequency. In this section we look at this issue. The Nyquist plot is the
most revealing, but we will also use the Bode plot.
Let's first see how to deduce feedback stability from Bode plots.
Example
Consider the usual feedback loop with
C(s) = 2,  P(s) = 1/(s + 1)².
The Nyquist plot of PC is
[Figure: Nyquist plot of PC starting at 2; the angle at the critical point −1, measured to where the
plot crosses the unit circle, is called the phase margin.]
There are no encirclements of the critical point, so the feedback system is stable. The phase margin
is related to the distance from the critical point −1 to the point where the Nyquist plot crosses the
unit circle. The Bode plot of PC is
[Figure: Bode plot of PC; |PC| starts at 2 and crosses 1 at the gain crossover frequency ω_gc; on
the phase plot, the amount by which arg PC(jω_gc) lies above −180° is the phase margin.] □
Example
C(s) = 2,  P(s) = 1/[(s + 1)²(0.1s + 1)]
[Figure: Nyquist plot (half shown), starting at 2 and crossing the negative real axis at −1/12.1;
the critical point is −1.]
The feedback loop is stable for C(s) = 2K if
−1/K < −1/12.1.
Thus K can be increased from K = 1 up to K = 12.1 = 21.67 dB before instability occurs. This
number, 21.67 dB, is called the gain margin. The gain margin is related to the distance from the
critical point −1 to the point where the Nyquist plot crosses the negative real axis. See how this
appears on the Bode plot on the next page.
[Figure 4.1 (Example 4.6.5): MATLAB Bode diagram for this example; Gm = 21.656 dB (at 4.5826
rad/sec), Pm = 84.6 deg (at 0.99507 rad/sec).]
Example
C(s) = 2,  P(s) = (s + 1)/(s(s − 1))
The Nyquist plot of PC has the form
[Figure: Nyquist plot crossing the negative real axis at −2.]
The critical point is −1 and we need 1 CCW encirclement, so the feedback system is stable. The
phase margin on the Nyquist plot:
[Figure: the phase margin, 36.9°, measured at the unit-circle crossing near −1.]
If C(s) = 2K, K can be reduced until −1/K = −2, i.e., min K = 1/2 = −6 dB. The Bode plot is shown
on the next page. MATLAB says the gain margin is negative! So MATLAB is wrong. Conclusion:
We need the Nyquist plot for the correct interpretation of the stability margins. □
[Figure 4.2 (Example 4.6.6): MATLAB Bode diagram for this example; Gm = −6.0206 dB (at 1
rad/sec), Pm = 36.87 deg (at 2 rad/sec).]
Let's recap: The phase margin is related to the distance from the critical point to the Nyquist
plot along the unit circle; the gain margin is related to the distance from the critical point to the
Nyquist plot along the real axis. More generally, it makes sense to define the stability margin to
be the distance from the critical point to the closest point on the Nyquist plot:
[Figure: feedback loop r → (−) → e → C(s) → P(s); S(s) denotes the transfer function from r to e.]
Define S to be the TF from the reference input r to the tracking error e:
S = 1/(1 + PC).
Assume the feedback system is stable. Then the distance from −1 to the Nyquist plot can be
expressed as follows:
min_ω |−1 − P(jω)C(jω)| = min_ω |1 + P(jω)C(jω)|
                        = (max_ω |S(jω)|)⁻¹
                        = reciprocal of the peak magnitude on the Bode plot of S.
An example might be
[Figure: a plot of |S(jω)| with peak value 1.7832; the stability margin is 1/1.7832 = 0.56.]
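This reciprocal relationship is easy to confirm numerically. The sketch below (plain Python) uses the earlier loop C(s) = 2, P(s) = 1/(s + 1)², grids the frequency axis, and checks that the minimum distance from −1 to the Nyquist plot equals the reciprocal of the peak of |S(jω)|.

```python
def L(w):                       # loop transfer function P(jw)C(jw)
    return 2 / (1j * w + 1) ** 2

ws = [k * 0.001 for k in range(1, 200000)]      # frequency grid, 0.001 to ~200 rad/s
dist = min(abs(1 + L(w)) for w in ws)           # distance from -1 to Nyquist plot
peak_S = max(abs(1 / (1 + L(w))) for w in ws)   # peak of |S(jw)|

print(abs(dist - 1 / peak_S) < 1e-9)  # True: stability margin = 1/max|S|
```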
This section concludes with a general remark. Phase margin is widely used to predict ringing,
meaning, oscillatory response. But it must be used with caution. It does not measure how close
the feedback loop is to being unstable. More precisely, if the phase margin is small (say 5 or 10
degrees), then you are close to instability and there may be ringing, but if the phase margin is large
(say 60 degrees), then you can't conclude anything: the Nyquist plot could still be dangerously
close to the critical point.
Chapter 5
Introduction to Control Design
In this chapter we develop the basic technique of controller design in the frequency domain.
Specifications are in the form of bandwidth and stability margin. Of course our real interest is how
the system will behave in physical time, and in using the frequency domain we are employing the
duality between the two domains. The design steps are typically these:
1. Select frequency domain specs.
2. Design a controller for the specs.
3. Simulate for an appropriate number of test cases.
4. Evaluate.
5. If necessary modify the specs and return to step 2.
5.1 Loopshaping
We start with the unity feedback loop:
[Figure: r → (−) → e → C → P → y.]
The design problem is this: Given P, the nominal plant transfer function, maybe some uncertainty
bounds, and some performance specs, design an implementable C. The performance specs would
include, as a bare minimum, stability of the feedback system. The simplest situation is where the
performance can be specified in terms of the transfer function
S := 1/(1 + PC),
which is called the sensitivity function.
Aside: Here's the reason for this name. Denote by T the transfer function from r to y, namely,
T = PC/(1 + PC).
Of relevance is the relative perturbation in T due to a relative perturbation in P:
lim_{ΔP→0} (ΔT/T)/(ΔP/P) = lim_{ΔP→0} (ΔT/ΔP)(P/T) = (dT/dP)(P/T)
  = d/dP [PC/(1 + PC)] · [P(1 + PC)/(PC)] = S.
So S is a measure of the sensitivity of the closed-loop transfer function to variations in the plant
transfer function.
So S is a measure of the sensitivity of the closed-loop transfer function to variations in the plant
transfer function.
For us, S is important for two reasons: 1) S is the transfer function from r to the tracking error
e. Thus we want [S(j)[ to be small over the range of frequencies of r. 2) The peak magnitude of
S is the reciprocal of the stability margin. Thus a typical desired magnitude plot of S is
|S(j)|
1
M
1
Here
1
is the maximum frequency of r, is the maximum permitted relative tracking error, < 1,
and M is the maximum peak magnitude of [S[, M > 1. If [S[ has this shape and the feedback
system is stable, then for the input r(t) = cos t,
1
, we have [e(t)[ in steady state, and
the stability margin is at least 1/M. A typical value for M is 2 or 3. In these terms, the design
problem can be stated as follows: Given P, M, ,
1
; Design C so that the feedback system is
stable and [S[ satises [S(j)[ for
1
and [S(j)[ M for all .
Example
P(s) = 10 / (0.2s + 1)

This is a typical transfer function of a DC motor. Let's take a PI controller:

C(s) = K₁ + K₂/s.
Then any M, ε, ω₁ are achievable by suitable K₁, K₂. To see this, start with

S(s) = 1 / (1 + 10(K₁s + K₂)/(s(0.2s + 1))) = s(0.2s + 1) / (0.2s² + (1 + 10K₁)s + 10K₂).

We can simplify by reducing to two equal real poles:

S(s) = 5s(0.2s + 1) / (s + K₃)².

Clearly, if we find a suitable K₃, then we can back-solve for K₁, K₂. Sketch the Bode plot of S and
confirm that any M > 1, ε < 1, and ω₁ can be achieved. □
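As a numerical sanity check, here is a short script (a sketch; the choice K₃ = 50 and the spec numbers ε = 0.01, ω₁ = 1, M = 2 are illustrative values, not from the notes) that evaluates |S(jω)| on a grid:

```python
import cmath

# K3 = 50 and the spec numbers below are illustrative choices
K3, eps, w1, M = 50.0, 0.01, 1.0, 2.0

def S(w):
    s = 1j * w
    return 5 * s * (0.2 * s + 1) / (s + K3) ** 2

# tracking spec: |S(jw)| <= eps for all w <= w1
low = max(abs(S(w1 * k / 100)) for k in range(1, 101))

# peak spec: |S(jw)| <= M everywhere (|S| tends to 1 as w -> infinity)
grid = [10 ** (-2 + 5 * k / 1000) for k in range(1001)]
peak = max(abs(S(w)) for w in grid)

print(low, peak)
```

With these numbers the low-frequency magnitude is about 0.002 and the peak stays below 1, so both specs hold with room to spare.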
In practice it is common to combine interactively the shaping of S with a time-domain simulation.
Now, S is a nonlinear function of C. So in fact it is easier to design the loop transfer function
L := PC instead of S = 1/(1 + L). Notice that if the loop gain is high, i.e., |L| ≫ 1, then

|S| ≈ 1/|L|.
A typical desired plot is

[Figure: magnitude plot of |PC| = |L|, large at low frequencies and rolling off to below 1 at high frequencies]
In shaping |L|, we don't have a direct handle on the stability margin, unfortunately. However, we
do have control over the gain and phase margins, as we'll see.
In the next two sections we present two simple loopshaping controllers.
5.2 Lag Compensation
We separate the controller into two parts, K and C₁, the latter having unity DC gain:

C(s) = K C₁(s),  C₁(s) = (αTs + 1) / (Ts + 1).

[Block diagram: r → e → C(s) = K C₁(s) → P(s) → y]

The parameters in C are α (0 < α < 1), T > 0, K > 0. The approximate Bode plot of C₁ is

[Figure: |C₁| falls from 1 (for ω < 1/T) to α (for ω > 1/(αT)); arg(C₁) dips toward −90° between the corner frequencies and returns to 0. The benefit is gain reduction at high frequencies, without phase lag well above the corners (roughly ω > 10/(αT)).]
An example design using this type of compensator follows next.
Example
The plant transfer function is P(s) = 1/(s(s + 2)). Let there be two specs:

1. When r(t) is the unit ramp, the steady-state tracking error is ≤ 5%.

2. A phase margin of approximately 45°.

Spec 1 requires the velocity constant Kv = lim_{s→0} sP(s)C(s) = K/2 to be at least 20, so we
choose K = 40. To increase the PM (while preserving spec 1), we'll use a lag compensator C₁(s).
The design is shown on the Bode plots. We want PM ≥ 45°. We proceed as follows. We try for a
PM of 50°, to allow for the small residual phase lag of C₁ at crossover. The controller is therefore

C(s) = K (αTs + 1)/(Ts + 1),  K = 40, α = 0.111, T = 52.7.
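As a check on this design, the following script (a sketch; the bisection bracket is an assumption read off the Bode plot) finds the gain crossover of L = PC numerically and evaluates the phase margin:

```python
import cmath
import math

def L(w):
    s = 1j * w
    C = 40 * (0.111 * 52.7 * s + 1) / (52.7 * s + 1)  # lag compensator, K = 40
    P = 1 / (s * (s + 2))
    return C * P

# bisection for the gain crossover |L(jw)| = 1; the bracket [0.5, 5]
# is an assumption (|L| is decreasing there)
lo, hi = 0.5, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if abs(L(mid)) > 1:
        lo = mid
    else:
        hi = mid
wc = (lo + hi) / 2
pm = 180 + math.degrees(cmath.phase(L(wc)))  # phase margin in degrees

print(round(wc, 2), round(pm, 1))
```

The computed margin comes out in the mid-40s of degrees, consistent with the target of approximately 45°.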
[Figure 5.1 (Example 5.2.1): Bode plots of |KP|, |PC| (dB) and arg(KP), arg(PC) (degrees) versus frequency (rad/s), 10⁻⁴ to 10¹]
Step responses are shown on the next plot. The step response of KP/(1 +KP), that is, the plant
compensated only by the gain for spec 1, is fast but oscillatory. The step response of PC/(1 +PC)
is slower but less oscillatory, which was the goal of spec 2.
[Figure 5.2 (Example 5.2.1): step responses y(t) of PC/(1 + PC) and KP/(1 + KP), t from 0 to 15]
5.3 Lead Compensation
Again the controller has two parts, K and C₁:

C(s) = K C₁(s),  C₁(s) = (αTs + 1) / (Ts + 1).

[Block diagram: r → e → C(s) = K C₁(s) → P(s) → y]
The parameters in C are α (α > 1), T > 0, K > 0. The approximate Bode plot of C₁ is

[Figure: |C₁| rises from 1 (for ω < 1/(αT)) to α (for ω > 1/T); arg(C₁) is positive in between, peaking at φ_max at the frequency ω_max. The benefit is phase lead.]

The angle φ_max is defined as the maximum of arg(C₁(jω)), and ω_max is defined as the frequency at
which it occurs.
We'll need three formulas:

1. ω_max: This is the midpoint between 1/(αT) and 1/T on the logarithmically scaled frequency axis.
Thus

log ω_max = (1/2)(log 1/(αT) + log 1/T) = (1/2) log 1/(αT²),

so

ω_max = 1/(√α T).
2. The magnitude of C₁ at ω_max: This is the midpoint between 1 and α on the logarithmically
scaled |C₁| axis. Thus

log |C₁(jω_max)| = (1/2)(log 1 + log α) = log √α,

so

|C₁(jω_max)| = √α.
3. φ_max: This is the angle of C₁(jω_max). Since αTω_max = √α and Tω_max = 1/√α,

φ_max = arg C₁(jω_max) = arg (1 + j√α)/(1 + j/√α).

By the sine law applied to the triangle formed by the vectors 1 + j/√α and 1 + j√α,

sin φ_max / (√α − 1/√α) = sin ψ / √(1 + 1/α),

[Figure: the triangle used in the sine-law argument, with ψ the angle opposite the side of length √(1 + 1/α)]

But sin ψ = 1/√(1 + α). Thus

sin φ_max = (√α − 1/√α) · (1/√(1 + α)) / √(1 + 1/α) = (α − 1)/(α + 1),

and hence

φ_max = sin⁻¹ (α − 1)/(α + 1).
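The three formulas can be verified numerically. The script below (a sketch, with α = 3 and T = 0.1 as arbitrary test values) sweeps arg C₁(jω) on a log grid and compares against the closed forms:

```python
import cmath
import math

alpha, T = 3.0, 0.1  # arbitrary test values, alpha > 1 for lead

def C1(w):
    return (alpha * T * 1j * w + 1) / (T * 1j * w + 1)

# sweep the phase on a log-spaced grid and locate its maximum
grid = [10 ** (-1 + 4 * k / 4000) / T for k in range(4001)]
w_star = max(grid, key=lambda w: cmath.phase(C1(w)))
phi_star = math.degrees(cmath.phase(C1(w_star)))

# formulas 1-3
w_max = 1 / (math.sqrt(alpha) * T)
phi_max = math.degrees(math.asin((alpha - 1) / (alpha + 1)))

print(w_star, w_max, phi_star, phi_max, abs(C1(w_max)))
```

For α = 3 the formulas give φ_max = sin⁻¹(1/2) = 30° and |C₁(jω_max)| = √3, and the grid search agrees.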
Example
Let's do the same example as we did for lag compensation. Again, we choose K = 40 to achieve
spec 1. The phase margin is then only 18°.

Step 1: We need roughly 45° − 18° = 27° of extra phase lead; allowing a small safety factor, we
design for φ_max = 30°. We have

sin 30° = (α − 1)/(α + 1)  ⇒  α = 3.

Step 2: We want to make ω_max the new gain crossover frequency. Thus at this frequency we will
be increasing the gain by √α = 1.732 = 4.77 dB. Now |KP| = 1/1.732 = −4.77 dB at ω = 8.4 rad/s.
Thus we set

ω_max = 8.4  ⇒  1/(√α T) = 8.4  ⇒  T = 0.0687.

The PM achieved is 44°.
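The same kind of numerical check as for the lag design (a sketch; the bisection brackets are assumptions read off the Bode plot) confirms the achieved margin, and also recovers the 18° margin of the gain K alone:

```python
import cmath
import math

def P(w):
    s = 1j * w
    return 1 / (s * (s + 2))

def C1(w):
    s = 1j * w
    return (3 * 0.0687 * s + 1) / (0.0687 * s + 1)  # lead: alpha = 3, T = 0.0687

def margin(L, lo, hi):
    # bisection for |L(jw)| = 1; the bracket assumes |L| is decreasing there
    for _ in range(60):
        mid = (lo + hi) / 2
        if abs(L(mid)) > 1:
            lo = mid
        else:
            hi = mid
    wc = (lo + hi) / 2
    return wc, 180 + math.degrees(cmath.phase(L(wc)))

wc0, pm0 = margin(lambda w: 40 * P(w), 1.0, 20.0)          # gain alone
wc1, pm1 = margin(lambda w: 40 * C1(w) * P(w), 1.0, 20.0)  # with lead
print(round(pm0, 1), round(pm1, 1))
```

Note that, unlike the lag design, the lead pushes the crossover frequency up (from about 6 rad/s to about 8 rad/s), so the closed loop is also faster.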
[Figure: analog controller for voltage regulation of a buck converter, with reference V_ref]
The controller's job is to regulate the voltage v_o, ideally making v_o = V_ref, by adjusting the duty
cycle; the load current i_l is a disturbance.

[Figure: analog control block diagram]
As in Section 1.2 let v₁ denote the square-wave input voltage to the circuit. Also, let i_L denote the
inductor current. Then Kirchhoff's current law at the upper node gives

i_L = i_l + v_o/R_l + C dv_o/dt

and Kirchhoff's voltage law gives

L di_L/dt + v_o = v₁.
Define the state x = (v_o, i_L). Then the preceding two equations can be written in state-space form
as

dx/dt = Ax + B₁ i_l + B₂ v₁   (5.1)
v_o = Cx,   (5.2)
where

A = [ −1/(R_l C)   1/C ]
    [ −1/L         0   ]

B₁ = [ −1/C ],   B₂ = [ 0   ],   C = [ 1  0 ].
     [ 0    ]         [ 1/L ]

Thus the transfer function from i_l to v_o is

C(sI − A)⁻¹B₁ = −(1/C) s / (s² + (1/(R_l C)) s + 1/(LC))

and from v₁ to v_o is

C(sI − A)⁻¹B₂ = (1/(LC)) / (s² + (1/(R_l C)) s + 1/(LC)).
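As a quick consistency check, the script below (a sketch; the component values are arbitrary test numbers, and Lv, Cv are used to avoid clashing with the output matrix C) builds the 2×2 system and compares C(sI − A)⁻¹B₂ against the rational expression above at a test frequency; it also confirms the DC gain from v₁ to v_o is 1, a fact used later:

```python
import math

Rl, Lv, Cv = 3.3, 2e-6, 10e-6  # arbitrary test values for R_l, L, C

def G2(s):
    # C (sI - A)^{-1} B2 computed directly from the 2x2 matrix sI - A
    a11, a12 = s + 1 / (Rl * Cv), -1 / Cv
    a21, a22 = 1 / Lv, s
    det = a11 * a22 - a12 * a21
    # first row of (sI - A)^{-1} is [a22, -a12] / det; B2 = (0, 1/Lv)
    return (-a12 / det) * (1 / Lv)

def G2_formula(s):
    # the rational expression derived above
    return (1 / (Lv * Cv)) / (s ** 2 + s / (Rl * Cv) + 1 / (Lv * Cv))

s = 1j * 2 * math.pi * 10e3  # test frequency: 10 kHz
print(abs(G2(s) - G2_formula(s)), G2(0))
```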
Let us emphasize that the system with inputs (v₁, i_l) and output v_o is linear time-invariant. But
v₁ itself depends on the duty cycle. We now allow the duty cycle to vary from period to period.
The controller is made of two parts: a linear time-invariant (LTI) circuit and a nonlinear pulse-width modulator (PWM):

[Block diagram: v_o and V_ref enter the LTI controller, whose output d drives the PWM, producing v₁]

The PWM works like this: The continuous-time signal d(t), which is the output of the LTI controller,
is compared to a sawtooth waveform of period T, producing v₁(t) as shown here:
[Figure: analog form of PWM; d(t) is compared with the sawtooth to produce the square wave v₁(t)]
It is assumed that d(t) stays within the range [0, 1], and the slope of the sawtooth is 1/T.
We have in this way arrived at the block diagram of the control system:
[Block diagram of analog control system: V_ref and v_o enter the LTI controller, whose output d drives the PWM, producing v₁; the plant block, with state-space data (A, B₁, B₂, C), has inputs i_l and v₁ and output v_o]
The plant block and notation refer to the state-space equations (5.1), (5.2). Every block here is
linear time-invariant except for the PWM.
Linear approximation for analog control
The task now is to linearize the system in the preceding block diagram. But the system does not
have an equilibrium, that is, a state where all signals are constant. This is because, by its very
nature, v₁(t) cannot be constant: it's generated by a switch. What we should do is linearize
about a periodic solution.
Current practice is not to linearize exactly, but to approximate and linearize in an ad hoc
manner. Consider the figure titled "Block diagram of analog control system". To develop a small-signal model, assume the controller signal d(t) equals a constant, d(t) = D, the nominal duty cycle
about which we shall linearize, assume the load current i_l(t) equals zero, and assume the feedback
system is in steady state; of course, for this to be legitimate the feedback system has to be stable.
Then v₁(t) is periodic, and its average value equals DV_s. The DC gain from v₁ to v_o equals 1, and
so the average value of v_o(t) equals DV_s too. Assume D is chosen so that the average regulation
error equals 0, i.e., DV_s = V_ref.
Now let d(t) be perturbed away from the value D:

d(t) = D + d̃(t).

Assume the switching frequency is sufficiently high that d̃(t) can be approximated by a constant
over any switching period [kT, (k + 1)T). Looking at the figure titled "Analog form of PWM", we
see that, over the period [kT, (k + 1)T), the average of v₁(t) equals V_s d(t). Thus, without a very
strong justification, we approximate v₁(t) via

v₁(t) = V_s d(t) = DV_s + V_s d̃(t).
Define v̄₁ = DV_s and ṽ₁(t) = V_s d̃(t). Likewise we write x, i_l, and v_o as constants plus perturbations:

x = x̄ + x̃
i_l = 0 + ĩ_l
v_o = v̄_o + ṽ_o.
With these approximations, the steady-state equations of the plant become

0 = A x̄ + B₂ v̄₁ = A x̄ + B₂ V_s D
v̄_o = DV_s
and the small-signal model is

dx̃/dt = A x̃ + B₁ ĩ_l + B₂ V_s d̃
ṽ_o = C x̃.
The block diagram of this small-signal linear system is as follows:

[Block diagram: inputs ĩ_l and d̃ into the plant block with state-space data (A, B₁, B₂V_s, C), output ṽ_o, closed through the LTI controller]
Analog controller design example
The design of a buck converter is quite involved, but engineering firms have worked out recipes
in equation form for the sizing of circuit components. We are concerned only with the feedback
controller design, so we start with given parameter values. The following are typical:

V_s    8 V
L      2 μH
C      10 μF
R_l    3.3 Ω
D      0.45
1/T    1 MHz
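These values can be checked against the resonance and damping quoted below. The short script here (a sketch) computes the natural frequency and damping ratio of the second-order denominator s² + s/(R_l C) + 1/(LC):

```python
import math

Rl, Lv, Cv = 3.3, 2e-6, 10e-6  # R_l, L, C from the table

wn = 1 / math.sqrt(Lv * Cv)        # natural frequency, rad/s
fn = wn / (2 * math.pi)            # in Hz
zeta = (1 / (Rl * Cv)) / (2 * wn)  # from 2*zeta*wn = 1/(Rl*C)

print(fn, zeta)
```

This gives a resonance near 35.6 kHz and a damping ratio near 0.07, matching the Bode plot and the discussion below.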
The transfer function from d̃ to ṽ_o is second order with the following Bode plot:
[Bode plot of the transfer function from d̃ to ṽ_o: magnitude (dB) and phase (degrees) versus frequency (Hz), 10⁴ to 10⁵]
The frequency range shown is from 10 kHz to 100 kHz and there's a resonant peak at about 35 kHz.
The gain crossover frequency is 50 kHz, at which the phase is −169°.
Typical closed-loop specifications involve the recovery from a step demand in load current. For a
step disturbance in ĩ_l, the output voltage v_o will droop. The droop should be limited in amplitude
and duration, while the duty cycle remains within the range from 0 to 1. The nominal value of load
voltage is V_s D = 3.6 V. Therefore the nominal load current is V_s D/R_l = 1.09 A. For our tests, we
used a step of magnitude 20% of this nominal value. Next shown is the test response of v_o(t) for
the unity controller:
[Figure: response of v_o to step in ĩ_l, unity controller; volts versus seconds, 0 to 6×10⁻⁵ s]
The response is very oscillatory, with a long settling time. The problem with this plant is that it
is very lightly damped: the damping ratio is only 0.07. The sensible and simple solution is to
increase the damping by using a derivative controller. The controller K(s) = 7×10⁻⁶ s increases
the damping to 0.8 without changing the natural frequency. Such a controller causes difficulty in
computer-aided design because it doesn't have a state-space model. The obvious fix is to include a
fast pole:

K(s) = 7×10⁻⁶ s / (10⁻¹⁰ s + 1).
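As a rough order-of-magnitude check (a sketch only; it assumes the derivative gain acts on the loop from d̃, whose gain carries the factor V_s, which the notes do not spell out), one can solve for the gain k of a derivative controller ks needed to reach ζ = 0.8. Derivative feedback adds k V_s/(LC) to the s-coefficient of the closed-loop characteristic polynomial, leaving ω_n unchanged:

```python
import math

# table values; placing the V_s factor inside the loop is an assumption
Vs, Rl, Lv, Cv = 8.0, 3.3, 2e-6, 10e-6
wn = 1 / math.sqrt(Lv * Cv)  # natural frequency, unchanged by derivative feedback

# 2*zeta*wn = 1/(Rl*Cv) + k*Vs/(Lv*Cv), solved for k
zeta_target = 0.8
k = (2 * zeta_target * wn - 1 / (Rl * Cv)) * (Lv * Cv) / Vs
print(k)
```

Under this assumption the required gain is about 8×10⁻⁷, within an order of magnitude of the controller quoted above; the exact value depends on where the V_s factor is absorbed in the loop.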
The resulting test response in output voltage is this:
[Figure: response of v_o to step in ĩ_l, designed controller; volts versus seconds, 0 to 6×10⁻⁵ s, with a span of 10 switching cycles marked]
Moreover, the duty cycle stayed within the required range:
[Figure: response of duty cycle d(t), remaining between 0 and 1]
Finally, we emphasize that the design must be tested by a full-scale Simulink model.
5.5 Loopshaping Theory
In this section we look at some theoretical facts that we have to keep in mind while designing
controllers via loopshaping.
5.5.1 Bode's phase formula
It is a fundamental fact that if L = PC is stable and minimum phase and normalized so that
L(0) > 0 (positive DC gain), then the magnitude Bode plot uniquely determines the phase Bode
plot. The exact formula is rather complicated, and is derived using Cauchy's integral theorem. Let

ω₀ = any frequency
u = normalized frequency = ln(ω/ω₀), i.e., e^u = ω/ω₀
M(u) = normalized log magnitude = ln |L(jω₀ e^u)|
W(u) = weighting function = ln coth(|u|/2).
Recall that

coth x = cosh x / sinh x = (e^x + e^(−x)) / (e^x − e^(−x)).
The phase formula is

arg L(jω₀) = (1/π) ∫_{−∞}^{∞} (dM(u)/du) W(u) du   in radians. (5.3)

This shows that arg L(jω₀) can be expressed as an integral involving |L(jω)|.
It turns out we may approximate the weighting function as W(u) ≈ (π²/2) δ(u). Then the phase
formula (5.3) gives

arg L(jω₀) ≈ (π/2) (dM(u)/du)|_{u=0}. (5.4)
As an example, consider the situation where

|L(jω)| = c/ω^n near ω = ω₀.

Thus −n is the slope of the magnitude Bode plot near ω = ω₀ (on log-log axes). Then

|L(jω₀ e^u)| = c/(ω₀^n e^{nu})
M(u) = ln |L(jω₀ e^u)| = ln(c/ω₀^n) − nu
dM(u)/du = −n
arg L(jω₀) ≈ −nπ/2 from (5.4).

Thus we arrive at the observation: If the slope of |L(jω)| near crossover is −n, then arg L(jω) at
crossover is approximately −nπ/2. Warning: This derivation required L(s) to be stable, minimum
phase, with positive DC gain.
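Bode's formula can also be checked numerically. The script below (a sketch) evaluates the integral (5.3) by the midpoint rule for L(s) = 1/(s + 1) at ω₀ = 1, where the exact phase is −π/4:

```python
import math

# L(s) = 1/(s+1) at omega0 = 1:
# M(u) = -0.5*ln(1 + e^{2u}), so dM/du = -e^{2u}/(1 + e^{2u})
def dM_du(u):
    return -1 / (1 + math.exp(-2 * u))

def W(u):
    x = abs(u) / 2
    return math.log(math.cosh(x) / math.sinh(x))  # ln coth(|u|/2)

# midpoint rule on [-15, 15]; midpoints avoid the (integrable) singularity at u = 0
N = 200_000
h = 30 / N
integral = sum(dM_du(-15 + (k + 0.5) * h) * W(-15 + (k + 0.5) * h)
               for k in range(N)) * h
phase = integral / math.pi

print(phase, -math.pi / 4)
```

The quadrature reproduces −π/4 ≈ −0.7854 to within the grid accuracy.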
What we learn from this observation is that in transforming |P| to |PC| via, say, lag or lead
compensation, we should not attempt to roll off |PC| too sharply near gain crossover. If we do,
the phase lag of PC will be too large near crossover, resulting in a negative phase margin and hence instability.
5.5.2 The waterbed effect
This concerns the ability to achieve the following spec on the sensitivity function S:

[Figure: |S(jω)| ≤ ε for ω ≤ ω₁ and |S(jω)| ≤ M for all ω]
Let us suppose M > 1 and ω₁ > 0 are fixed. Can we make ε arbitrarily small? That is, can we get
arbitrarily good tracking over a finite frequency range, while maintaining a given stability margin
(≥ 1/M)? Or is there a positive lower bound for ε? The answer is that arbitrarily good performance
in this sense is achievable if and only if P(s) is minimum phase. Thus, non-minimum phase plants
have bounds on achievable performance: As |S(jω)| is pushed down on one frequency range, it pops
up somewhere else, like a waterbed. Here's the result:
Theorem 5.5.1 Suppose P(s) has a zero at s = z with Re z > 0. Let A(s) denote the allpass
factor of S(s). Then there are positive constants c₁, c₂, depending only on ω₁ and z, such that

c₁ log ε + c₂ log M ≥ log |A(z)⁻¹| ≥ 0.
Example

P(s) = (1 − s) / ((s + 1)(s − p)),  p > 0, p ≠ 1

Let C(s) be a stabilizing controller. Then

S = 1/(1 + PC),  S(p) = 0.

Thus (s − p)/(s + p) is an allpass factor of S. There may be other allpass factors, so what we can say is that
A(s) has the form

A(s) = [(s − p)/(s + p)] A₁(s),

where A₁(s) is some allpass TF (may be 1). In this example, the RHP zero of P(s) is s = 1. Thus

|A(1)| = (|1 − p|/(1 + p)) |A₁(1)|.

Now |A₁(1)| ≤ 1 (why?), so

|A(1)| ≤ |1 − p|/(1 + p)

and hence

|A(1)⁻¹| ≥ (1 + p)/|1 − p|.

The theorem gives

c₁ log ε + c₂ log M ≥ log (1 + p)/|1 − p|.

Thus, if M > 1 is fixed, log ε cannot be arbitrarily negative, and hence ε cannot be arbitrarily small.
In fact the situation is much worse if p ≈ 1, that is, if the RHP plant pole and zero are close. □
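The lower bound in this example blows up as the RHP pole p approaches the RHP zero at s = 1, which the following short script (a sketch) illustrates:

```python
import math

def bound(p):
    # lower bound on log|A(1)^{-1}| from the example: log((1+p)/|1-p|)
    return math.log((1 + p) / abs(1 - p))

values = [bound(p) for p in (0.5, 0.9, 0.99, 0.999)]
print(values)
```

The bound grows without limit as p → 1, so any fixed M forces ε away from zero more and more severely.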
5.6 Remark on Pole Assignment
It may have occurred to you that closed-loop pole locations would be a good performance measure.
Designing a controller to put the closed-loop poles in a desired region is indeed a widely used method,
often in conjunction with the root locus tool. But it isn't without limitations. For example, consider
the following two sets of closed-loop poles:

1. −2 ± 10j, −10

2. −2, −2, −2

The distances from the closest pole to the imaginary axis are equal, 2. Which set of poles is better?
Obviously the second, right? The first set suggests severe oscillations in the closed-loop response.
But suppose the closed-loop impulse response function is actually

50e^{−10t} + 0.001e^{−2t} cos(10t).

Then the severe oscillations will never be observed, and the first set is better in terms of speed of
response, which is five times that of the second.
Epilogue
If this is your one and only control course, it is hoped that you take away the following:
The value of block diagrams.
What it means for a system to be linear.
The value of graphical simulation tools like Simulink.
Why we use transfer functions and the frequency domain.
What stability means.
What feedback is and why it's important.
What makes a system easy or hard to control.
If you're continuing with another control course, it is hoped you take away additionally these:
How to model a system using differential equations.
How to put the model in the form of a state equation.
How to find equilibria and linearize the model at an equilibrium (if there is one).
How to determine if a feedback system is stable by checking poles or eigenvalues.
The beautiful Nyquist stability criterion and the meaning of stability margin.
How to design a simple feedback loop using frequency-domain methods.
The subject of control science and engineering is vast, and from this course you can take off in
many directions.