Chapter 7 - State Space Control
State-Space Design
Example
Consider the mechanical spring-damper system:
u(t) : external force
y(t) : displacement of mass (m)
k : spring constant
b : damper constant
m ÿ + b ẏ + k y = u
Example
Since this is a second order system (it has 2 integrators), it
has 2 states 𝑥1 (𝑡) and 𝑥2 (𝑡), which can be defined as
x1(t) = y(t)
x2(t) = ẏ(t)
This means that
ẋ1 = x2
ÿ = ẋ2 = (1/m)(−k y − b ẏ) + (1/m) u
Or
ẋ1 = x2
ÿ = ẋ2 = −(k/m) x1 − (b/m) x2 + (1/m) u
With the output equation defined as
𝑦 = 𝑥1
Example
In vector format the expressions are therefore
[ẋ1; ẋ2] = [0, 1; −k/m, −b/m] [x1; x2] + [0; 1/m] u
y = [1 0] [x1; x2] + 0·u
(Circuit example: the states are chosen as the capacitor voltages, x1 = Vc1 and x2 = Vc2.)
Ex. 7.3
Ex 7.4
Ex 7.5
Ex 7.6
Ex 7.7
Relationship between TF and SS
Consider a system with a transfer function of
Y(s)/U(s) = G(s) = C (sI − A)⁻¹ B + D
where D is a constant.
We can therefore see that the poles of G(s) are given by
the roots of det(sI − A) = 0.
You should be able to identify this equation as being the
eigenvalue equation, and as a result, the poles of a
state-space system are simply given by the eigenvalues
of the 𝐴 matrix!
Example
Use the expressions derived on the previous slides to
compute the transfer function and the eigenvalues of the
following system:
A = [−7, −12; 1, 0],  B = [1; 0],  C = [1 2],  D = 0
First compute the eigenvalues:
det(sI − A) = 0
∴ det([s + 7, 12; −1, s]) = 0
∴ s(s + 7) + 12 = 0
∴ s² + 7s + 12 = 0
∴ (s + 4)(s + 3) = 0
This last expression says that we expect the poles to be
located at −4 and −3. Matlab's eig function can be used to confirm this, as in the sketch below.
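A minimal Matlab sketch to confirm this, using the A matrix above:
% Poles of a state-space system = eigenvalues of A
A = [-7 -12; 1 0];
eig(A)            % expected: -3 and -4
roots([1 7 12])   % the same result from the characteristic polynomial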
Example
We should be able to verify the pole locations by
computing the transfer function using our previously
derived expression:
Y(s) = [C (sI − A)⁻¹ B + D] U(s)
We can use the following matrix inversion expression for
a 2x2 matrix:
If P = [a, b; c, d] and ad − bc ≠ 0, then
P⁻¹ = (1/(ad − bc)) [d, −b; −c, a]
For this system,
sI − A = [s + 7, 12; −1, s]
∴ (sI − A)⁻¹ = (1/(s² + 7s + 12)) [s, −12; 1, s + 7]
Example
Substituting the last expression into the following one
Y(s) = [C (sI − A)⁻¹ B + D] U(s)
and using the values of the B and C matrices, we get
G(s) = ( [1 2] [s, −12; 1, s + 7] [1; 0] ) / (s² + 7s + 12)
∴ G(s) = ( [1 2] [s; 1] ) / (s² + 7s + 12)
∴ G(s) = (s + 2)/((s + 4)(s + 3)) = (s + 2)/(s² + 7s + 12) = 2/(s + 4) − 1/(s + 3)
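The same transfer function can be recovered numerically with Matlab's ss2tf; a minimal sketch:
% State-space to transfer-function conversion for this example
A = [-7 -12; 1 0];  B = [1; 0];  C = [1 2];  D = 0;
[num, den] = ss2tf(A, B, C, D);   % num ~ [0 1 2], den = [1 7 12]
G = tf(num, den)                  % (s + 2)/(s^2 + 7s + 12)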
The zeros of the system are the values of s for which
det([sI − A, −B; C, D]) = 0
We can therefore state that
G(s) = det([sI − A, −B; C, D]) / det(sI − A)
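A symbolic check of this determinant ratio for the example above (a sketch; it assumes the Symbolic Math Toolbox is available):
% G(s) as a ratio of determinants, checked symbolically
syms s
A = [-7 -12; 1 0];  B = [1; 0];  C = [1 2];  D = 0;
num = det([s*eye(2) - A, -B; C, D])   % gives s + 2
den = det(s*eye(2) - A)               % gives s^2 + 7*s + 12
G = simplify(num/den)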
Example 7.13
Canonical Forms
As we mentioned before, a particular state-space
formulation of a system is not unique.
The canonical forms are standard forms of the state-space equations, each of which assists us with a particular goal.
There are 3 primary canonical forms:
Control Canonical Form (CCF)
Modal Canonical Form (MCF)
Observer Canonical Form (OCF)
The notes and the textbook address all 3 forms, but
for now we will only focus on the CCF and the OCF
Control Canonical Form
If we go back to the previous example, we had an
expression for the system defined as
G(s) = (s + 2)/((s + 4)(s + 3)) = (s + 2)/(s² + 7s + 12) = 2/(s + 4) − 1/(s + 3)
In this canonical form, the state matrices have the structure
A = [−a1, 1, 0, …, 0;  −a2, 0, 1, …, 0;  ⋮ ;  −a(n−1), 0, 0, …, 1;  −an, 0, 0, …, 0],
B = [b1; b2; b3; ⋮ ; bn],
C = [1 0 0 … 0],  D = 0
The Matlab code to do this is:
Num = [b1 b2 ... bn]             % numerator coefficients of G(s)
Den = [1 a1 a2 ... an]           % denominator coefficients of G(s)
Sys = tf(Num, Den)               % build the transfer function
Sys2 = canon(Sys, 'companion')   % convert to the companion canonical form
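As a concrete sketch, using the example transfer function G(s) = (s + 2)/(s² + 7s + 12) from above:
% Companion (canonical) realization of the example transfer function
num = [1 2];                      % s + 2
den = [1 7 12];                   % s^2 + 7s + 12
sys  = tf(num, den);
sys2 = canon(sys, 'companion')    % state-space model in companion form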
Observer Canonical Form
The block diagram of a 3rd-order system in OCF is shown below.
Observer Canonical Form
We see that every state appears in the output after a
finite period of time.
This property of a system is called observability.
A system is said to be completely state observable if
every state 𝑥(𝑡) can be determined from the observation
of the output 𝑦(𝑡) over a finite period of time.
If we can express a system in OCF, then the system is
observable.
Observability
Pages 22 to 231 in the notes and page 471 in the
textbook present the derivation of the concept of an
observability matrix.
We will not repeat the derivation here, but simply state
that, for any state-space system, one can formulate an
observability matrix as follows
OO = [C; C A; C A²; … ; C A^(n−1)]
We will state that a system is completely state
observable (that we can design a state-space observer
for it) if the observability matrix is non-singular (det OO ≠ 0).
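A quick Matlab sketch of this test, reusing the earlier example's A and C matrices:
% Observability check
A = [-7 -12; 1 0];  C = [1 2];
OO = obsv(A, C)                 % observability matrix [C; C*A]
if rank(OO) == size(A, 1)       % full rank, i.e. det(OO) ~= 0
    disp('System is completely state observable')
end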
Pole-zero cancellation
With the state-feedback law u = −K x, the closed-loop dynamics are ẋ = (A − B K) x = Acl x, with solution
x(t) = e^(Acl t) x(0) = e^((A − B K) t) x(0)
This means that, with a proper selection of 𝐾, the
closed-loop system will be asymptotically stable.
State-feedback design
It also means that the non-zero initial conditions of the
system can be regulated to zero.
We can derive from this last set of equations that the
eigenvalues of (A − B K) (the closed-loop poles of the
system) must be located in the left-half plane for the
system to be stable. So we determine:
det(s I − (A − B K)) = 0
Note that:
In classical control only the positions of the dominant poles
were defined.
In modern control (state-space) we define (specify) the
position of all the (modelled) closed-loop poles.
The second note implies that we must be able to
influence all of the closed-loop poles of the system,
which implies that the system must be controllable.
State-feedback design
The placement of the closed-loop poles is called the
pole-placement problem.
The poles can only be placed if the system is completely
state controllable.
(Block diagrams: open-loop and closed-loop configurations of the system.)
State-feedback design
In principle we can place the poles wherever we want,
but
The positions of the poles are directly related to the
system's bandwidth, rise time and settling time.
If we place the poles so as to demand an excessive bandwidth
or a very fast settling time, the system will require an
excessive control effort, which may not be physically realizable.
To place the poles, we use the closed-loop system characteristic
equation det(s I − (A − B K)) = 0
This expression leads to an nth-order polynomial in s
We can then select 𝐾 so that the roots of this expression
equal the specified root locations.
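A sketch of this coefficient-matching step for the earlier example system (A = [−7 −12; 1 0], B = [1; 0]), with closed-loop poles arbitrarily chosen at −5 and −6 for illustration; it assumes the Symbolic Math Toolbox:
% Pole placement by matching characteristic-polynomial coefficients
syms s k1 k2
A = [-7 -12; 1 0];  B = [1; 0];
cl_poly  = det(s*eye(2) - (A - B*[k1 k2]));   % s^2 + (7+k1)s + (12+k2)
des_poly = expand((s + 5)*(s + 6));           % desired: s^2 + 11s + 30
sol = solve(coeffs(cl_poly - des_poly, s) == 0, [k1 k2]);
K = double([sol.k1, sol.k2])                  % expected: [4 18]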
Ex 7.15
In principle we should have first computed the
controllability matrix to see whether we can actually
design a state-feedback controller.
For this problem
CC = [B  A B]
∴ CC = [0, 1; 1, 0]
∴ det CC = −1 ≠ 0 → the system is controllable
State-feedback design
Pages 447 to 448 in FPE present a formal derivation of
the technique that is used to do pole placement on a
computer.
I will not go through the derivation in class.
The method is called Ackermann's formula and is
implemented in Matlab as
place or acker
acker is only reliable for small numbers of poles (n < 5) and
is no longer recommended in newer versions of Matlab.
place does not check for controllability – you must check it
before you use it, otherwise it will give wrong answers.
You cannot give place repeated roots; if you want them, make small changes,
e.g. s1 = 0.01, s2 = 0.011 (a usage sketch follows below).
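A usage sketch with the earlier example system and the same illustrative pole locations (−5 and −6):
% Check controllability first, then place the closed-loop poles
A = [-7 -12; 1 0];  B = [1; 0];
CC = ctrb(A, B);                 % controllability matrix [B, A*B]
assert(rank(CC) == size(A, 1), 'System is not completely state controllable')
K = place(A, B, [-5 -6])         % expected: [4 18]
eig(A - B*K)                     % confirm the closed-loop pole locations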
Ackermann's formula
Ex 7.16
Ex 7.17
Reference input
Let's add a reference input to an SS system. Define
u = −K x + r
Unless the system has free integrators, this will result
in a non-zero steady-state error if a step input is
applied.
The way to correct this problem is to compute the
steady-state values of the state and the control input
that will result in zero output error and then force them
to take on these values.
Start by defining the final values of the state and the
control inputs to be 𝑥𝑠𝑠 and 𝑢𝑠𝑠 respectively.
Reference input
We can draw the following block-diagram.
If the system is type 1 or higher, there will be no steady-
state error to a step input and the final state will be:
x(∞) = xss = xr
This is not true for a type 0 system, as some control
input is required to keep the system at the desired xr
Reference input
At steady-state we can therefore write the control law
from the block diagram to be
𝑢 = 𝑢𝑠𝑠 − 𝐾(𝑥 − 𝑥𝑠𝑠 )
To pick the correct values, we must solve the equations
so that the system will have zero steady-state error to
any input.
Define
ẋ = A x + B u
y = C x + D u
Which reduce to the following expressions in steady state:
0 = A xss + B uss
yss = C xss + D uss
Reference input
We want
𝑦𝑠𝑠 = 𝑟𝑠𝑠 for all values of 𝑟𝑠𝑠
Now define
𝑥𝑠𝑠 = 𝑁𝑥 𝑟𝑠𝑠
𝑢𝑠𝑠 = 𝑁𝑢 𝑟𝑠𝑠
From which we can rewrite the steady-state SS
equations as
0 = 𝐴𝑁𝑥 𝑟𝑠𝑠 + 𝐵𝑁𝑢 𝑟𝑠𝑠
𝑟𝑠𝑠 = 𝐶 𝑁𝑥 𝑟𝑠𝑠 + 𝐷𝑁𝑢 𝑟𝑠𝑠
Which can be written as
[A, B; C, D] [Nx; Nu] = [0; 1]
Reference input
The gain matrices can therefore be computed as
[Nx; Nu] = [A, B; C, D]⁻¹ [0; 1]
With these results, we finally have the basis for
introducing the reference input so as to get zero steady-
state error to a step input:
u = Nu r − K (x − Nx r)
∴ u = −K x + (Nu + K Nx) r
The coefficient of r in the parentheses is a constant that
can be computed beforehand. We will give it the
symbol N̄ so that
u = −K x + N̄ r
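A minimal Matlab sketch of this computation; the earlier example's matrices and an illustrative K are reused here, since any A, B, C, D and previously designed K can be substituted:
% Reference-input gains for zero steady-state error to a step input
A = [-7 -12; 1 0];  B = [1; 0];  C = [1 2];  D = 0;
K = place(A, B, [-5 -6]);                 % previously designed state-feedback gain
N  = [A B; C D] \ [zeros(size(A,1),1); 1];
Nx = N(1:end-1);                          % steady-state state per unit reference
Nu = N(end);                              % steady-state control per unit reference
Nbar = Nu + K*Nx                          % so that u = -K*x + Nbar*r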
Ex 7.18
Estimator/Observer Design
Up to now we assumed that all the state variables of the
system were available for state feedback.
In practice not all the states of a system are measured
Sensors for all the states may be too expensive, or
Measurements of all the states may be physically impossible
(as in a nuclear power plant or inside an electric motor or
battery)
We therefore want to reconstruct the states of a system
from a few measurements.
The algorithm used is called a state observer or state
estimator
Full order estimator: all the states of the system are
estimated
Reduced order estimator: only the unmeasurable states are
estimated
Full Order Observers
Define the estimated state vector as x̂.
Using this estimated state with the plant dynamics, we
can write the estimated dynamics as
x̂˙ = A x̂ + B u
ŷ = C x̂ + D u
These expressions describe the dynamics of the model
in the picture below.
Full Order Observers
The actual states (in the process) and the model states
(the estimated ones) differ because:
The A, B and C matrices (F, G and H in the textbook's notation) are not exact representations of the process –
there are still plant uncertainties
The initial conditions of the process are unknown
There are disturbance signals
We therefore need a way to get the model to
approximate the process and let the estimated states
approximate the true states.
Full Order Observers
If we define the estimated state error ε to be
ε = x − x̂   (the textbook uses the notation x̃ for this error)
Then the dynamics of the error between the process and
the model are
ε̇ = ẋ − x̂˙ = A x + B u − A x̂ − B u
∴ ε̇ = A (x − x̂) = A ε
The solution to this differential equation is
ε(t) = e^(At) ε(0)
Which says that the error between the process and the
model will only decay to zero at the rate of the open-loop plant
dynamics. This is too slow, as the plant will change before the
model "catches" it, and if the plant is unstable the error will not decay at all.
We therefore need a different approach to solve this
problem.
Full Order Observers
We implement a feedback controller to eliminate the
errors between the process and the model
Develop a system that will have y and u as the input
signals and x̂ as the output signal.
We call this system an observer. It is defined by the
observer equation as follows:
x̂˙ = A x̂ + B u + L (y − C x̂)
The observer takes the difference between the real output
and the estimated (predicted) output and feeds that back to
eliminate the prediction error.
The correction term L(y − C x̂) is called the innovation
and will correct the errors between y and ŷ (and
therefore between x and x̂) due to
Incorrect A, B and C matrices, and
Differences in initial conditions between the plant and observer
Full Order Observers
The observer gain matrix 𝐿 is also a proportional gain
matrix (like 𝐾) with the form
L = [l1; l2; l3; ⋮ ; ln]
If the plant is defined by the expressions
ẋ = A x + B u
y = C x + D u
Then the subtraction of the observer equation from the
state equation gives
ẋ − x̂˙ = A x − A x̂ − L (C x − C x̂)
∴ ε̇ = (A − L C) ε
Full Order Observers
The solution to the error dynamics is
ε(t) = e^((A − L C) t) ε(0)
so the error now decays at a rate set by the eigenvalues of
(A − L C), which we can choose by selecting L. As with state
feedback, Ackermann's formula gives the required gain directly:
L = αe(A) OO⁻¹ [0; 0; ⋮ ; 0; 1]
Which says that we can use the computer to compute the
observer gains rather than computing them by inspection.
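By duality (next slide), the same Matlab pole-placement functions can compute the observer gain; a sketch reusing the earlier example's A and C, with observer poles arbitrarily chosen at −10 and −12:
% Observer gain by duality: place the eigenvalues of (A - L*C)
A = [-7 -12; 1 0];  C = [1 2];
L = place(A', C', [-10 -12])'   % transpose back to obtain a column vector
eig(A - L*C)                    % confirm the observer pole locations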
Duality Principle
Referring to the two formulations of Ackermann's formula
on the previous slide, we can see that
The two forms are similar, but there are differences.
The two systems are mathematically equivalent.
This property is called duality.
For the controller, you compute K to place the poles of (A − B K); for the observer, you compute L to place the poles of (A − L C).
With the control law u = −K x̂ = −K(x − ε), the plant and observer-error dynamics combine into
[ẋ; ε̇] = [A − B K, B K; 0, A − L C] [x; ε]
This expression describes the dynamics of the observed-state feedback control system.
Closed-loop observer
The characteristic equation of the complete system is
det([s I − (A − B K), −B K; 0, s I − (A − L C)]) = 0
∴ det(s I − (A − B K)) · det(s I − (A − L C)) = 0
This result says that the pole-placement design and the
observer design are independent of each other. This is
called the separation principle. The controller and the
observer can be designed separately and then combined
to form the observed-state feedback control system.
Controller-observer TF
From the last block diagram we can define the observer
equation as
x̂˙ = (A − L C) x̂ + B u + L y
Using the state equations
ẋ = A x + B u
y = C x + D u
and the observed-state control signal u = −K x̂, we can
start by expressing the Laplace transform of the control
signal as
U(s) = −K X̂(s)
The Laplace transform of the observer equation is
s X̂(s) = (A − L C) X̂(s) + B U(s) + L Y(s)
Controller-observer TF
If we assume that the initial conditions are zero, we can
combine the above expressions into the following
expression:
X̂(s) = (s I − A + L C + B K)⁻¹ L Y(s)
And after a further substitution,
U(s) = −K (s I − A + L C + B K)⁻¹ L Y(s)
This gives the transfer function between U(s) and −Y(s), and
is called the controller-observer transfer function as it acts
as a controller for the system:
U(s) / (−Y(s)) = K (s I − A + L C + B K)⁻¹ L
Example
Consider the system defined by the following set of state
matrices:
A = [0, 1; 20.6, 0],  B = [0; 1],  C = [1 0],  D = 0
If we require the poles to be placed at p1 = −1.8 + j2.4 and
p2 = −1.8 − j2.4, the feedback gains will be designed to be
K = [29.6  3.6]
If we use observed-state feedback instead of actual state
feedback, the control signal will be
u = −K x̂ = −[29.6  3.6] [x̂1; x̂2]
Choose the observer poles to be p1 = −8, p2 = −8.
Example
Now obtain the observer gain matrix L from these specs and
draw the block diagram for the observed-state feedback
control system.
Also obtain the transfer function of the controller-observer
and draw a block-diagram of the system
Solution:
The desired characteristic polynomial of the observer is
αe(s) = (s − p1)(s − p2) = (s + 8)(s + 8) = s² + 16 s + 64
The characteristic equation of the observer is computed from
det(s I − A + L C) = 0
∴ det([s, 0; 0, s] − [0, 1; 20.6, 0] + [L1, 0; L2, 0]) = 0
∴ det([s + L1, −1; −20.6 + L2, s]) = 0
∴ s² + L1 s − 20.6 + L2 = 0
Example
Comparing the two polynomials gives:
L = [L1; L2] = [16; 84.6]
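A Matlab sketch that reproduces both gain matrices for this example (acker is used here because the observer poles are repeated, which place does not accept):
% Controller and observer gains for the example
A = [0 1; 20.6 0];  B = [0; 1];  C = [1 0];
K = acker(A, B, [-1.8+2.4j, -1.8-2.4j])   % expected: [29.6 3.6]
L = acker(A', C', [-8 -8])'               % expected: [16; 84.6]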
Example
The transfer function of the controller-observer is
U(s) / (−Y(s)) = K (s I − A + L C + B K)⁻¹ L
= [29.6  3.6] [s + 16, −1; 93.6, s + 3.6]⁻¹ [16; 84.6]
= (778.16 s + 3690.72) / (s² + 19.6 s + 151.2)
The transfer function of the original plant is G(s) = 1 / (s² − 20.6).
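A closing Matlab sketch that builds the controller-observer as a system from y to u, confirms the transfer function above, and checks the separation principle for the combined closed-loop system:
% Controller-observer transfer function and separation-principle check
A = [0 1; 20.6 0];  B = [0; 1];  C = [1 0];
K = [29.6 3.6];  L = [16; 84.6];
% Controller-observer: xhat_dot = (A - L*C - B*K)*xhat + L*y,  u = -K*xhat
Gc = tf(ss(A - L*C - B*K, L, K, 0))   % U(s)/(-Y(s)) = (778.16s + 3690.72)/(s^2 + 19.6s + 151.2)
% Combined closed-loop poles = controller poles plus observer poles
Acl = [A - B*K, B*K; zeros(2), A - L*C];
eig(Acl)                              % expected: -1.8 +/- j2.4, -8, -8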