Chapter 7 - State Space Control

Controls Systems 4A


Chapter 7

State-Space Design

Example
Consider the mechanical spring-damper system:
 u(t) : external force
 y(t) : displacement of mass (m)
 k : spring constant
 b : damper constant

This is a single-input single-output (SISO) system with differential equation

$m\ddot{y} + b\dot{y} + ky = u$
Example
Since this is a second-order system (it has 2 integrators), it
has 2 states $x_1(t)$ and $x_2(t)$, which can be defined as
$x_1(t) = y(t)$
$x_2(t) = \dot{y}(t)$
This means that
$\dot{x}_1 = x_2$
$\dot{x}_2 = \ddot{y} = \frac{1}{m}\left(-ky - b\dot{y}\right) + \frac{1}{m}u$
Or
$\dot{x}_1 = x_2$
$\dot{x}_2 = -\frac{k}{m}x_1 - \frac{b}{m}x_2 + \frac{1}{m}u$
With the output equation defined as
$y = x_1$
Example
In vector format the expressions are therefore
$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -k/m & -b/m\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1/m\end{bmatrix}u$
$y = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + 0\cdot u$

These expressions are in the standard format of
$\dot{x} = Ax + Bu$
$y = Cx + Du$
where
$A = \begin{bmatrix}0 & 1\\ -k/m & -b/m\end{bmatrix},\quad B = \begin{bmatrix}0\\ 1/m\end{bmatrix},\quad C = \begin{bmatrix}1 & 0\end{bmatrix},\quad D = 0$
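As a sketch (not part of the slides), these matrices can be entered in Python with numpy; the slides use Matlab, but the structure is identical. The parameter values for m, b and k below are arbitrary illustrative choices:

```python
import numpy as np

# Hypothetical parameter values chosen only for illustration
m, b, k = 1.0, 0.5, 2.0

# State-space matrices of the spring-mass-damper, with x1 = y, x2 = y_dot
A = np.array([[0.0,   1.0],
              [-k/m, -b/m]])
B = np.array([[0.0],
              [1.0/m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# The system poles are the eigenvalues of A; for positive m, b, k
# they lie in the left-half plane, so the system is stable.
poles = np.linalg.eigvals(A)
print(poles)
```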
State-space concepts
 Describe a system directly from the differential
equations without using transformations
 Can be used for
 Large systems
 Multi Input Multi Output (MIMO) systems
 Non-linear systems
 Well-suited for computer implementation and computer-
based design due to its use of matrices
State-space concepts
 Definition of a state:
 The state of a dynamic system is the smallest set of
variables (called “state variables”) such that the knowledge
of these variables at 𝑡 = 𝑡𝑜 , together with the knowledge of
the input for 𝑡 ≥ 𝑡𝑜 , completely determines the behaviour of
the system for any time 𝑡 ≥ 𝑡𝑜 .
 State-space representation of a system:
 Representation of a dynamic model of a system in terms of
a set of related ordinary differential equations (ODE’s) as
functions of the system’s state variables (usually in matrix
format).
State-space concepts
 Practical definition of a state:
 The dynamic system must involve elements that memorize
the input for 𝑡 ≥ 𝑡𝑜 . Since integrators in a continuous-time
control system serve as memory devices, the outputs of
such integrators can be considered as the variables that
define the internal state of the dynamic system. The output
of the integrators therefore serve as state variables. The
number of state variables needed to completely define the
dynamics of the system is equal to the number of
integrators involved in the system.
State-space concepts
 The aim is therefore to represent the dynamic system in
the state-space in terms of its state equation
 $\dot{x}(t) = A\,x(t) + B\,u(t)$
 And in terms of its output equation
 $y(t) = C\,x(t) + D\,u(t)$
 Definitions:
 𝑥 𝑡 State vector
 𝑢(𝑡) Input vector
 𝑦 𝑡 Output vector
 𝐴 State/system matrix
 𝐵 Input matrix
 𝐶 Output matrix
 𝐷 Direct transmission matrix (usually = 0)
State-space concepts
 The state-variables of a system are usually physically
measurable aspects of the system such as position,
velocity, voltage, current, etc.
 Take note that FPE uses the notation F, G, H and J where
 F=A
 G=B
 H=C
 J=D
 I will generally make use of the A, B, C, D notation as it
is the more general form, but the two are equivalent
Ex 7.1

 Can we draw a block diagram for this system?


Ex 7.2
Ex. 7.3
(circuit example; the states are chosen as the capacitor voltages: $x_1 = V_{C1}$, $x_2 = V_{C2}$)
Ex 7.4
Ex 7.5
Ex 7.6
Ex 7.7
Relationship between TF and SS
 Consider a system with TF of
 $\dfrac{Y(s)}{U(s)} = G(s)$
 which may be expressed in the SS format as
 $\dot{x} = Ax + Bu$
 $y = Cx + Du$
 The Laplace transforms of these equations are
 $sX(s) - x(0) = AX(s) + BU(s)$
 $Y(s) = CX(s) + DU(s)$
Relationship between TF and SS
 Since the transfer function is defined as the ratio of the
Laplace transforms of the output to the input where the
initial conditions are zero, we can rewrite the previous
expression as
 $sX(s) - AX(s) = BU(s)$
 $\therefore (sI - A)X(s) = BU(s)$
 By pre-multiplying by $(sI - A)^{-1}$ on both sides, we get
$X(s) = (sI - A)^{-1}BU(s)$
 By substituting this expression into the previous
expression for $Y(s)$, we get the following
$Y(s) = \left[C(sI - A)^{-1}B + D\right]U(s)$
 This expression provides a way to compute the TF from
the SS matrices.
Relationship between TF and SS
 From linear algebra, we know that:
 $A^{-1} = \dfrac{\mathrm{adj}(A)}{\det(A)}$
 The system transfer function can therefore be written
as:
 $G(s) = \dfrac{C\,\mathrm{adj}(sI - A)\,B}{\det(sI - A)} + D$
 where $D$ is a constant.
 We can therefore see that the poles of $G(s)$ are given by
the roots of $\det(sI - A) = 0$.
 You should be able to identify this equation as being the
eigenvalue equation, and as a result, the poles of a
state-space system are simply given by the eigenvalues
of the $A$ matrix!
Example
 Use the expressions derived on the previous slides to
compute the transfer function and the eigenvalues of the
following system:
 $A = \begin{bmatrix}-7 & -12\\ 1 & 0\end{bmatrix},\quad B = \begin{bmatrix}1\\ 0\end{bmatrix},\quad C = \begin{bmatrix}1 & 2\end{bmatrix},\quad D = 0$
 First compute the eigenvalues:
 $\det(sI - A) = 0$
 $\therefore \begin{vmatrix}s+7 & 12\\ -1 & s\end{vmatrix} = 0$
 $\therefore s(s+7) + 12 = 0$
 $\therefore s^2 + 7s + 12 = 0$
 $\therefore (s+4)(s+3) = 0$
 This last expression says that we expect the poles to be
located at -4 and -3. Use Matlab's eig to compute it.
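A minimal numeric check of this claim, with numpy's eigvals playing the role of Matlab's eig:

```python
import numpy as np

A = np.array([[-7.0, -12.0],
              [1.0,   0.0]])

# Eigenvalues of A = poles of the system; sort for a stable ordering
poles = np.sort(np.linalg.eigvals(A).real)
print(poles)  # [-4. -3.]
```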
Example
 We should be able to verify the pole locations by
computing the transfer function using our previously
derived expression:
$Y(s) = \left[C(sI - A)^{-1}B + D\right]U(s)$
 We can use the following matrix inversion expression for
a 2x2 matrix:
 If $P = \begin{bmatrix}a & b\\ c & d\end{bmatrix}$ and $ad - bc \neq 0$, then
 $P^{-1} = \dfrac{1}{ad - bc}\begin{bmatrix}d & -b\\ -c & a\end{bmatrix}$
$\therefore (sI - A) = \begin{bmatrix}s+7 & 12\\ -1 & s\end{bmatrix}$
$\therefore (sI - A)^{-1} = \dfrac{1}{s^2 + 7s + 12}\begin{bmatrix}s & -12\\ 1 & s+7\end{bmatrix}$
Example
 Substituting the last expression into the following one
$Y(s) = \left[C(sI - A)^{-1}B + D\right]U(s)$
 and using the values of the B and C matrices, we get
$\therefore G(s) = \dfrac{1}{s^2+7s+12}\begin{bmatrix}1 & 2\end{bmatrix}\begin{bmatrix}s & -12\\ 1 & s+7\end{bmatrix}\begin{bmatrix}1\\ 0\end{bmatrix}$
$\therefore G(s) = \dfrac{1}{s^2+7s+12}\begin{bmatrix}1 & 2\end{bmatrix}\begin{bmatrix}s\\ 1\end{bmatrix}$
$\therefore G(s) = \dfrac{s+2}{s^2+7s+12} = \dfrac{s+2}{(s+4)(s+3)} = \dfrac{2}{s+4} + \dfrac{-1}{s+3}$
 This proves by example that the poles of the
system are the same as the eigenvalues of the
state matrix.
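The algebra above can also be spot-checked numerically by evaluating $C(sI-A)^{-1}B + D$ at a few test values of s and comparing against the closed-form $(s+2)/((s+4)(s+3))$ — a sketch in Python rather than the slides' Matlab:

```python
import numpy as np

A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])
D = 0.0

def G(s):
    # Transfer function from the state-space matrices: C (sI - A)^(-1) B + D
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0] + D

# Compare against the hand-derived closed form at a few test points
for s in [1.0, 2.5, 3.0j]:
    closed_form = (s + 2) / ((s + 4) * (s + 3))
    assert np.isclose(G(s), closed_form)
print("SS and TF forms agree")
```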
Example
 If we use the Matlab function zero or tzero,
we can compute the zeros of the system.
 This can also be done by solving the following
expression
 $\det\begin{bmatrix}sI-A & -B\\ C & D\end{bmatrix} = 0$
 We can therefore state that
$G(s) = \dfrac{\det\begin{bmatrix}sI-A & -B\\ C & D\end{bmatrix}}{\det(sI-A)}$
Example 7.13
Canonical Forms
 As we mentioned before, a particular state-space
formulation of a system is not unique.
 The canonical forms are standard forms of state-space
expression that assist us in a particular goal.
 There are 3 primary canonical forms:
 Control Canonical Form (CCF)
 Modal Canonical Form (MCF)
 Observer Canonical Form (OCF)
 The notes and the textbook address all 3 forms, but
for now we will only focus on the CCF and OCF
Control Canonical Form
 If we go back to the previous example, we had an
expression for the system defined as
$G(s) = \dfrac{s+2}{(s+4)(s+3)} = \dfrac{s+2}{s^2+7s+12} = \dfrac{2}{s+4} + \dfrac{-1}{s+3}$
 In the same example it was shown that this transfer
function is an equivalent expression for the system
defined by the following set of state-space equations:
 $A = \begin{bmatrix}-7 & -12\\ 1 & 0\end{bmatrix},\quad B = \begin{bmatrix}1\\ 0\end{bmatrix},\quad C = \begin{bmatrix}1 & 2\end{bmatrix},\quad D = 0$
 If we use this state-space formulation, we see that the
following block-diagram can be used to represent the
system:
Control Canonical Form
 This block-diagram is written in the Controller Canonical
Form.
Control Canonical Form
 By comparing the transfer function and state-space
expressions,
$G(s) = \dfrac{s+2}{(s+4)(s+3)} = \dfrac{s+2}{s^2+7s+12} = \dfrac{2}{s+4} + \dfrac{-1}{s+3}$
 $A = \begin{bmatrix}-7 & -12\\ 1 & 0\end{bmatrix},\quad B = \begin{bmatrix}1\\ 0\end{bmatrix},\quad C = \begin{bmatrix}1 & 2\end{bmatrix},\quad D = 0$
 we can see that:
 Coefficients 1 and 2 of the numerator appear in the C
matrix
 Coefficients 7 and 12 of the denominator appear with
opposite sign as the first row of the A matrix
Control Canonical Form
 We can therefore develop the state matrices in CCF by
inspection for any system whose transfer function is
written as a rational function (ratio of 2 polynomials). If
$G(s) = \dfrac{b_1 s^{n-1} + b_2 s^{n-2} + \dots + b_n}{s^n + a_1 s^{n-1} + a_2 s^{n-2} + \dots + a_n}$, then
 $A = \begin{bmatrix}-a_1 & -a_2 & \cdots & \cdots & -a_n\\ 1 & 0 & \cdots & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 & 0\end{bmatrix},\quad B = \begin{bmatrix}1\\ 0\\ 0\\ \vdots\\ 0\end{bmatrix},$
 $C = \begin{bmatrix}b_1 & b_2 & \cdots & \cdots & b_n\end{bmatrix},\quad D = 0$
 The Matlab code to do this is:
 The Matlab code to do this is:
Num = [b1 b2 ... bn]
Den = [1 a1 a2 ... an]
Sys = tf(Num,Den)
[A B C D] = ssdata(Sys)
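An equivalent sketch in Python builds the CCF matrices directly from the coefficient lists; the ccf helper below is a hypothetical name, not a standard library function:

```python
import numpy as np

def ccf(num, den):
    """Control canonical form for G(s) = num/den.
    den must be monic: [1, a1, ..., an]; num = [b1, ..., bn] with degree < n."""
    a = np.asarray(den, float)[1:]                 # a1 ... an
    b = np.asarray(num, float)
    n = len(a)
    b = np.concatenate([np.zeros(n - len(b)), b])  # pad numerator to length n
    A = np.zeros((n, n))
    A[0, :] = -a                                   # first row: -a1 ... -an
    A[1:, :-1] = np.eye(n - 1)                     # ones on the sub-diagonal
    B = np.zeros((n, 1)); B[0, 0] = 1.0
    C = b.reshape(1, n)
    D = 0.0
    return A, B, C, D

# G(s) = (s + 2) / (s^2 + 7s + 12) reproduces the matrices of the earlier example
A, B, C, D = ccf([1, 2], [1, 7, 12])
print(A)  # [[-7. -12.], [1. 0.]]
```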
Controllability
 We see every input appears at the states after a finite
period of time – this is called controllability.
 If we are able to write a system in CCF, then it means
that we can design a controller for the system.
 Pages 17 to 21 in the notes and pages 428 to 431 in the
textbook presents the derivation of the concept of a
controllability matrix.
 We will not repeat the derivation here, but simply state
that, if a state-space system can be written in CCF, then
one can formulate a controllability matrix as follows
 $CC = \begin{bmatrix}B & AB & A^2B & \cdots & A^{n-1}B\end{bmatrix}$
 We will state that a system is completely state
controllable (that we can design a state-space controller
for it) if the controllability matrix is non-singular
(det 𝐶𝐶 ≠ 0)
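A minimal sketch of this test in Python; ctrb is a hand-rolled helper mirroring Matlab's function of the same name:

```python
import numpy as np

def ctrb(A, B):
    # Controllability matrix CC = [B, AB, A^2 B, ..., A^(n-1) B]
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

# System from the earlier TF/SS example
A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
CC = ctrb(A, B)
print(np.linalg.det(CC))  # nonzero, so the system is completely state controllable
```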
Controllability - definition
 We define a system to be completely state controllable if
we can influence every state in our system with a
bounded control signal.
 If the system is state controllable, then we can design a
state controller for the system.
Observer Canonical Form
 We can develop the state matrices in OCF by inspection
for any system whose transfer function is written as a
rational function (ratio of 2 polynomials). If
$G(s) = \dfrac{b_1 s^{n-1} + b_2 s^{n-2} + \dots + b_n}{s^n + a_1 s^{n-1} + a_2 s^{n-2} + \dots + a_n}$, then
 $A = \begin{bmatrix}-a_1 & 1 & 0 & \cdots & 0\\ -a_2 & 0 & 1 & \cdots & 0\\ -a_3 & 0 & 0 & \ddots & \vdots\\ \vdots & \vdots & \vdots & \cdots & 1\\ -a_n & 0 & 0 & \cdots & 0\end{bmatrix},\quad B = \begin{bmatrix}b_1\\ b_2\\ b_3\\ \vdots\\ b_n\end{bmatrix},$
 $C = \begin{bmatrix}1 & 0 & 0 & \cdots & 0\end{bmatrix},\quad D = 0$
 The Matlab code to do this is:
 The Matlab code to do this is:
Num = [b1 b2 ... bn]
Den = [1 a1 a2 ... an]
Sys = tf(Num,Den)
Sys2 = canon(Sys,'companion')
Observer Canonical Form
 The block-diagram of a 3rd order system presented in
OCF is presented below.
Observer Canonical Form
 We see that every state appears in the output after a
finite period of time.
 This property of a system is called observability.
 A system is said to be completely state observable if
every state 𝑥(𝑡) can be determined from the observation
of the output 𝑦(𝑡) over a finite period of time.
 If we can express a system in OCF, then the system is
observable.
Observability
 Pages 22 to 23 in the notes and page 471 in the
textbook present the derivation of the concept of an
observability matrix.
 We will not repeat the derivation here, but simply state
that, for any state-space system, one can formulate an
observability matrix as follows
 $OO = \begin{bmatrix}C & CA & CA^2 & \cdots & CA^{n-1}\end{bmatrix}^T$
 We will state that a system is completely state
observable (that we can design a state-space observer
for it) if the observability matrix is non-singular
($\det(OO) \neq 0$)
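As with controllability, the observability test is a few lines of Python; obsv below is a hand-rolled helper mirroring Matlab's function of the same name:

```python
import numpy as np

def obsv(A, C):
    # Observability matrix OO = [C; CA; CA^2; ...; C A^(n-1)] (rows stacked)
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# System from the earlier TF/SS example
A = np.array([[-7.0, -12.0], [1.0, 0.0]])
C = np.array([[1.0, 2.0]])
OO = obsv(A, C)
print(np.linalg.det(OO))  # nonzero, so the system is completely state observable
```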
Pole-zero cancellation

 If pole-zero cancellation occurs in a system, then the
mode (pole) that was cancelled cannot be controlled or
observed.
State-feedback design
 Overall control strategy:
 Feedback controller
 Observer
State-feedback design
 Definition of asymptotic stability:
 Described by the expression $y = e^{-at}$
 $Y(s) = \dfrac{1}{s+a}$ → stable poles are asymptotically stable
 A disturbance will be cancelled (die out) with a time
constant of $1/a$
 Stable systems are asymptotically stable
 Definition of a regulator system:
 A system that brings nonzero states (caused by external
disturbances) to the origin (to zero) with sufficient speed
State-feedback design
 We want to make our system asymptotically stable. We can
do this by defining where we want to place the closed-
loop poles of our system – similar to root-locus design.
 We can define the closed-loop pole locations through:
 Arbitrary selection of the locations based on our insight, or
 Using optimal control techniques such as the symmetric
root locus or linear quadratic regulator design.
State-feedback design
 We can state that the solution to the homogeneous
differential equation $\dot{x} - ax = 0$ is $x(t) = x(0)e^{at}$
 In a similar way we can state that the state equation $\dot{x} = Ax$
has a solution $x(t) = e^{At}x(0)$ that will be stable if the
poles (eigenvalues) of A are in the left-hand complex
plane.
 Now, define $\dot{x} = Ax + Bu$ and choose the control
signal (u) to be
 $u = -Kx + r$, where
 $K$ is the state feedback gain matrix
 $r$ is the reference input signal
State-feedback design
 If we substitute the control signal equation into the state
equation, we get
 $\dot{x} = Ax + B(-Kx + r)$
 $\therefore \dot{x} = Ax - BKx + Br$
 $\therefore \dot{x} = (A - BK)x + Br$
 $\therefore \dot{x} = A_{cl}\,x + Br$
 The solution to this expression with r = 0 is
 $x(t) = x(0)e^{A_{cl}t}$
 $x(t) = x(0)e^{(A-BK)t}$
 This means that, with a proper selection of $K$, the
closed-loop system will be asymptotically stable.
State-feedback design
 It also means that the non-zero initial conditions of the
system can be regulated to zero.
 We can derive from this last set of equations that the
eigenvalues of (𝐴 − 𝐵 𝑘) (the closed-loop poles of the
system) must be located in the left-half plane for the
system to be stable. So determine:
 det(𝑠 𝐼 − (𝐴 − 𝐵𝑘)) = 0
 Note that:
 In classical control only the positions of the dominant poles
were defined.
 In modern control (state-space) we define (specify) the
position of all the (modelled) closed-loop poles.
 The second note implies that we must be able to
influence all of the closed-loop poles of the system,
which implies that the system must be controllable.
State-feedback design
 The placement of the closed-loop poles is called the
pole-placement problem.
 The poles can only be placed if the system is completely
state controllable.

(figure: open-loop vs. closed-loop pole locations)
State-feedback design
 In principle we can place the poles wherever we want,
but
 The positions of the poles are directly related to the
system's bandwidth, rise time and settling time.
 If we place the poles to require an excessive bandwidth or
settling time, it means that the system would require an
excessive control effort, which could possibly not be
realizable for the system.
 To place the poles, we use the closed-loop system characteristic
equation $\det(sI - (A - BK)) = 0$
 This expression leads to an $n$th order polynomial in s
 We can then select $K$ so that the roots of this expression
equal the specified root locations.
Ex 7.15
 In principle we should have first computed the
controllability matrix to see whether we can actually
design a state-feedback controller.
 For this problem
 $CC = \begin{bmatrix}B & AB\end{bmatrix}$
 $\therefore CC = \begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix}$
 $\therefore \det(CC) = -1 \neq 0$ → the system is controllable
State-feedback design
 Pages 447 to 448 in FPE present a formal derivation of
the technique that is used to do pole placement on a
computer.
 I will not go through the derivation in class.
 The method is called Ackermann's formula and is
implemented in Matlab as
 place or acker
 acker is only stable for small numbers of poles (n < 5) and
has been removed in newer versions of Matlab.
 place does not check for controllability – you must check it
before you use it, otherwise it will give wrong answers
 You cannot give place repeated roots; if you want
repeated roots, make small changes, e.g. s1 = 0.01, s2 = 0.011
Ackermann's formula
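A sketch of Ackermann's formula in plain numpy, applied to the plant used in the observer example later in this chapter (the A matrix with the 20.6 entry); acker here is a hand-rolled helper, not a library call:

```python
import numpy as np

def acker(A, B, poles):
    # Ackermann's formula: K = [0 ... 0 1] CC^(-1) alpha_c(A)
    n = A.shape[0]
    CC = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    alpha = np.real(np.poly(poles))   # desired characteristic polynomial coefficients
    alpha_A = sum(alpha[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(CC) @ alpha_A

# Plant and desired poles from the worked example later in the chapter
A = np.array([[0.0, 1.0], [20.6, 0.0]])
B = np.array([[0.0], [1.0]])
K = acker(A, B, [-1.8 + 2.4j, -1.8 - 2.4j])
print(K)  # approximately [[29.6, 3.6]]
```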
Ex 7.16
Ex 7.17
Reference input
 Let’s add a reference input to a SS system. Define
 𝑢 = −𝐾 𝑥 + 𝑟
 Unless the system has free integrators, this will result
in a non-zero steady-state error if a step input is
applied.
 The way to correct this problem is to compute the
steady-state values of the state and the control input
that will result in zero output error and then force them
to take on these values.
 Start by defining the final values of the state and the
control inputs to be 𝑥𝑠𝑠 and 𝑢𝑠𝑠 respectively.
Reference input
 We can draw the following block-diagram.
 If the system is type 1 or higher, there will be no steady-
state error to a step input and the final state will be:
 𝑥 ∞ = 𝑥𝑠𝑠 = 𝑥𝑟
 This is not true for a type 0 system, as some control
input is required to keep the system at the desired 𝑥𝑟
Reference input
 At steady-state we can therefore write the control law
from the block diagram to be
 𝑢 = 𝑢𝑠𝑠 − 𝐾(𝑥 − 𝑥𝑠𝑠 )
 To pick the correct values, we must solve the equations
so that the system will have zero steady-state error to
any input.
 Define
 𝑥 = 𝐴 𝑥 + 𝐵𝑢
 𝑦 = 𝐶 𝑥 + 𝐷𝑢
 Which reduce to the following expressions in the steady-
state
 $0 = A\,x_{ss} + B\,u_{ss}$
 $y_{ss} = C\,x_{ss} + D\,u_{ss}$
Reference input
 We want
 𝑦𝑠𝑠 = 𝑟𝑠𝑠 for all values of 𝑟𝑠𝑠

 Now define
 𝑥𝑠𝑠 = 𝑁𝑥 𝑟𝑠𝑠
 𝑢𝑠𝑠 = 𝑁𝑢 𝑟𝑠𝑠
 From which we can rewrite the steady-state SS
equations as
 0 = 𝐴𝑁𝑥 𝑟𝑠𝑠 + 𝐵𝑁𝑢 𝑟𝑠𝑠
 𝑟𝑠𝑠 = 𝐶 𝑁𝑥 𝑟𝑠𝑠 + 𝐷𝑁𝑢 𝑟𝑠𝑠
 Which can be written as
 $\begin{bmatrix}A & B\\ C & D\end{bmatrix}\begin{bmatrix}N_x\\ N_u\end{bmatrix} = \begin{bmatrix}0\\ 1\end{bmatrix}$
Reference input
 The gain matrices can therefore be computed as
 $\begin{bmatrix}N_x\\ N_u\end{bmatrix} = \begin{bmatrix}A & B\\ C & D\end{bmatrix}^{-1}\begin{bmatrix}0\\ 1\end{bmatrix}$
 With these results, we finally have the basis for
introducing the reference input so as to get zero steady-
state error to a step input:
 $u = N_u r - K(x - N_x r)$
 $\therefore u = -Kx + (N_u + K N_x)r$
 The coefficient of r in the parentheses is a constant that
can be computed beforehand. We will give it the
symbol $\bar{N}$ so that
 $u = -Kx + \bar{N}r$
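A sketch of this computation in Python for the earlier example system G(s) = (s+2)/(s²+7s+12); the feedback gain K below is an arbitrary illustrative value, not a designed gain:

```python
import numpy as np

# System from the earlier TF/SS example
A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])
D = np.array([[0.0]])

# Solve [A B; C D] [Nx; Nu] = [0; 0; 1]
M = np.block([[A, B], [C, D]])
rhs = np.array([[0.0], [0.0], [1.0]])
sol = np.linalg.solve(M, rhs)
Nx, Nu = sol[:2], sol[2:]

# With a feedback gain K (illustrative value only), Nbar = Nu + K Nx
K = np.array([[2.0, 3.0]])
Nbar = Nu + K @ Nx
print(Nx.ravel(), Nu.ravel(), Nbar.ravel())
```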
Ex 7.18
Estimator/Observer Design
 Up to now we assumed that all the state variables of the
system were available for state feedback.
 In practice not all the states of a system are measured
 Sensors for all the states may be too expensive, or
 Measurements of all the states may be physically impossible
(as in a nuclear power plant or inside an electric motor or
battery)
 We therefore want to reconstruct the states of a system
from a few measurements.
 The algorithm used is called a state observer or state
estimator
 Full order estimator: all the states of the system are
estimated
 Reduced order estimator: only the unmeasurable states are
estimated
Full Order Observers
 Define the estimated state vector as $\hat{x}$
 Using this estimated state with the plant dynamics, we
can write the estimated dynamics as
 $\dot{\hat{x}} = A\hat{x} + Bu$
 $\hat{y} = C\hat{x} + Du$
 These expressions describe the dynamics of the model
in the picture below.
Full Order Observers
 The actual states (in the process) and the model states
(the estimated ones) differ because:
 F, G and H (A, B and C in our notation) are not exact
representations of the process – there are still plant uncertainties
 The initial conditions of the process are unknown
 There are disturbance signals
 We therefore need a way to get the model to
approximate the process and let the estimated states
approximate the true states.
Full Order Observers
 If we define the estimated state error ($\epsilon$) to be
$\epsilon = x - \hat{x}$ (the textbook uses the notation $\tilde{x}$ for the error)
 Then the dynamics of the error between the process and
the model are
 $\dot{\epsilon} = \dot{x} - \dot{\hat{x}} = Ax + Bu - A\hat{x} - Bu$
$\therefore \dot{\epsilon} = A(x - \hat{x}) = A\epsilon$
 The solution to this differential equation is
 $\epsilon(t) = e^{At}\epsilon(0)$
 Which says that the error between the process and the
model will decay to zero at the same rate as the plant
dynamics, which is too slow as the plant will change before
the model "catches" it.
 We therefore need a different approach to solve this
problem.
Full Order Observers
 We implement a feedback controller to eliminate the
errors between the process and the model
 Develop a system that will have y and u as the input
signals and 𝑥 as the output signal.
 We call this system an observer. It is defined by the
observer equation as follows:
 $\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})$
 The observer takes the difference between the real output
and the estimated (predicted) output and feeds that back to
eliminate the prediction error.
 The correction term $L(y - C\hat{x})$ is called the innovation
and will correct the errors between $y$ and $\hat{y}$ (and
therefore between $x$ and $\hat{x}$) due to
 Incorrect A, B and C matrices and
 Differences in initial conditions between the plant and observer
Full Order Observers
 The observer gain matrix $L$ is also a proportional gain
matrix (like $K$) with the form
 $L = \begin{bmatrix}l_1\\ l_2\\ l_3\\ \vdots\\ l_n\end{bmatrix}$
 If the plant is defined by the expressions
 $\dot{x} = Ax + Bu$
 $y = Cx + Du$
 Then the subtraction of the observer equation from the
state equation gives
 $\dot{x} - \dot{\hat{x}} = Ax - A\hat{x} - L(Cx - C\hat{x})$
 $\therefore \dot{\epsilon} = (A - LC)\epsilon$
Full Order Observers
 The solution to the error dynamics is
 $\epsilon(t) = e^{(A-LC)t}\epsilon(0)$
 Which says that the error dynamics will decay according
to the roots (eigenvalues) of the matrix $(A - LC)$
 If $(A - LC)$ is stable then
 The errors between the plant and the model will converge
to zero for all initial error conditions and
 Regardless of the initial conditions of the states and the
estimated states
 We can compute the eigenvalues of the $(A - LC)$ matrix
using the expression
 $\det(sI - (A - LC)) = 0$
 Which says that we can compute the observer gains in the
same way as the gains of the feedback controller.
Full Order Observers
 It can happen that the A, B and C matrices do not
accurately model the plant. Under such conditions even a
well-chosen L will not drive the estimation error to zero, but
it will make the error dynamics stable and the error
acceptably small.
 Note that:
 The plant is a physical system consisting of servomechanisms,
motors or a chemical process.
 The observer is a digital implementation of the observer equation on
a computer

 The design of a state observer therefore becomes that of


designing the observer gain matrix (𝐿) such that the
observer error dynamics are asymptotically stable with
sufficient speed of response.
Full Order Observers
 Observer design therefore means that we want to define
the eigenvalues of the matrix (𝐴 − 𝐿𝐶)
 We therefore define the desired estimator poles and the
desired estimator characteristic equation as
 $\alpha_e(s) = (s - \beta_1)(s - \beta_2)(s - \beta_3)\cdots(s - \beta_n)$
 We define the estimator characteristic equation as
 det(𝑠𝐼 − (𝐴 − 𝐿𝐶)) = 0
 The estimator gain matrix is then computed from the
inspection of the above 2 expressions.
Observability
 Observability refers to our ability to deduce information
about all the internal states of the system by monitoring
only the output of the system.
 It’s like finding out what is going on inside your engine by just
monitoring the exhaust gas of your car.

 If a system is observable, we can observe its states with


a state observer.
 We need to first check the observability of a system
before we try to design an observer as an observer for a
non-observable system will not converge and will give
the wrong state information.
Duality Principle
 The mathematically correct formulation of Ackermann's
formula for controller design states that
 $K = \begin{bmatrix}0 & 0 & \cdots & 0 & 1\end{bmatrix} CC^{-1}\,\alpha_c(A)$
 In a similar way, we can define Ackermann's formula for the
state observer as follows:
 $L = \alpha_e(A)\, OO^{-1} \begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ 1\end{bmatrix}$
 Which says that we can use the computer to compute the
observer gains rather than computing them through inspection.
Duality Principle
 Referring to the two formulations of Ackermann's formula
on the previous slide, we can see that
 The two forms are similar, but there are differences.
 The two systems are mathematically equivalent.
 This property is called duality.
 For the controller, you compute $K$ to place the poles of $(A - BK)$
 For the observer, you compute $L$ to place the poles of $(A - LC)$
 But the poles of $(A - LC)$ are the same as the poles of $(A - LC)^T =
(A^T - C^T L^T)$, which means that mathematically the design of $L^T$ is
the same as the design of $K$.
Duality Principle
 The duality principle therefore states that
Control Observation
A ↔ 𝐴𝑇
B ↔ 𝐶𝑇
C ↔ 𝐵𝑇
K ↔ 𝐿𝑇

 Which means that we can implement it in Matlab as


follows:
 Controller: K = place (A,B,Pc)
 Observer: Lt = place(A',C',Pe), L = Lt'
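The duality recipe can be sketched in plain numpy by running a hand-rolled Ackermann controller design on (Aᵀ, Cᵀ); the plant and observer poles are the ones from the worked example later in the chapter:

```python
import numpy as np

def acker(A, B, poles):
    # Ackermann's formula for state feedback (controller side)
    n = A.shape[0]
    CC = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    alpha = np.real(np.poly(poles))
    alpha_A = sum(alpha[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(CC) @ alpha_A

# Plant from the worked observer example
A = np.array([[0.0, 1.0], [20.6, 0.0]])
C = np.array([[1.0, 0.0]])

# Duality: design L by running the controller design on (A', C')
Lt = acker(A.T, C.T, [-8.0, -8.0])
L = Lt.T
print(L)  # approximately [[16.], [84.6]]
```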
Ex 7.25
Comments on best L
 The L matrix serves as a correction signal to the plant model
 If significant unknowns exist, then L should be large – a lot
of corrective action
 Aggressive observer correction
 If the plant output is contaminated with noise, it is
unreliable and L should be small
 More relaxed observer correction
 In general one would choose the observer poles to be 2
to 5 times faster than the controller poles.
 Since the observer lives inside a computer, its poles can
all be chosen as real values without having to worry
about the amount of control effort that will be required
to fulfil the requirement of placing the pole at the
suggested location (to an extent – numerical precision!).
Comments on best L
 The poles of the observer must be faster (further to the
left in the LHP) than that of the controller to ensure that
the observer can eliminate the differences between the
observer and the plant.
 Since the observer is much faster than the controller, it
is the response of the controller that will be visible in the
system response and not the observer.
Closed-loop observer
 As we do not have the actual state 𝑥 available for
feedback, we will design an observer and use the
observer state for feedback:
Closed-loop observer
 The design therefore becomes a 2-step process:
 Determine feedback gain matrix 𝐾 to yield desired controller
characteristic equation
 Determine observer gain matrix 𝐿 to yield desired observer
characteristic equation
 Let us now investigate the effects of the use of $\hat{x}$ rather
than $x$ on the characteristic equation of the closed-loop
system.
Closed-loop observer
 If the control law is defined to be $u = -K\hat{x}$ instead of
$u = -Kx$ (i.e. use the estimated states for control), the system
equations
 $\dot{x} = Ax + Bu$
 $y = Cx + Du$
 become
 $\dot{x} = Ax - BK\hat{x}$
 $y = Cx + Du$
 Which can be manipulated into the expression
 $\dot{x} = (A - BK)x + BK(x - \hat{x})$
 Using the expression
 $e = \tilde{x} = x - \hat{x}$
Closed-loop observer
 We get
 $\dot{x} = (A - BK)x + BKe$
 Using the observer error equation that we defined
before, we get
 $\dot{e} = (A - LC)e$
 We can combine these last two expressions into a single
matrix expression as
 $\begin{bmatrix}\dot{x}\\ \dot{e}\end{bmatrix} = \begin{bmatrix}A - BK & BK\\ 0 & A - LC\end{bmatrix}\begin{bmatrix}x\\ e\end{bmatrix}$
 This expression describes the dynamics of the observed
state-feedback control system.
Closed-loop observer
 The characteristic equation of the complete system is
 $\begin{vmatrix}sI - (A - BK) & -BK\\ 0 & sI - (A - LC)\end{vmatrix} = 0$
 $\therefore \left|sI - (A - BK)\right|\left|sI - (A - LC)\right| = 0$
 This result says that the pole placement design and the
observer design are independent of each other. This is
called the separation principle. The controller and the
observer can be designed separately and then combined
to form the observed-state feedback control system.
Controller-observer TF
 From the last block diagram we can define the observer
equation as
 $\dot{\hat{x}} = (A - LC)\hat{x} + Bu + Ly$
 Using the state equations
 $\dot{x} = Ax + Bu$
 $y = Cx + Du$
 And the observed-state control signal $u = -K\hat{x}$, we can
start by expressing the Laplace transform of the control
signal as
 $U(s) = -K\hat{X}(s)$
 The Laplace transform of the observer equation is
 $s\hat{X}(s) = (A - LC)\hat{X}(s) + BU(s) + LY(s)$
Controller-observer TF
 If we assume that the initial conditions are zero, we can
combine the above expressions into the following
expression:
 $\hat{X}(s) = (sI - A + LC + BK)^{-1}L\,Y(s)$
 And after a further substitution,
 $U(s) = -K(sI - A + LC + BK)^{-1}L\,Y(s)$
 This gives the transfer function between $U(s)$ and $-Y(s)$ and
is called the controller-observer transfer function as it acts
as a controller for the system
 $\dfrac{U(s)}{-Y(s)} = K(sI - A + LC + BK)^{-1}L$
Example
 Consider the system defined by the following set of state
matrices:
 $A = \begin{bmatrix}0 & 1\\ 20.6 & 0\end{bmatrix},\quad B = \begin{bmatrix}0\\ 1\end{bmatrix},\quad C = \begin{bmatrix}1 & 0\end{bmatrix},\quad D = 0$
 If we require the poles to be placed at $p_1 = -1.8 + j2.4$ and
$p_2 = -1.8 - j2.4$, the feedback gains will be designed to be
 $K = \begin{bmatrix}29.6 & 3.6\end{bmatrix}$
 If we use observed-state feedback instead of actual state
feedback, the control signal will be
 $u = -K\hat{x} = -\begin{bmatrix}29.6 & 3.6\end{bmatrix}\begin{bmatrix}\hat{x}_1\\ \hat{x}_2\end{bmatrix}$
 Choose the observer poles to be $p_1 = -8$, $p_2 = -8$
Example
 Now obtain the observer gain matrix L from these specs and
draw the block diagram for the observed-state feedback
control system.
 Also obtain the transfer function of the controller-observer
and draw a block-diagram of the system
 Solution:
 The desired characteristic polynomial of the observer is
 $\alpha_e(s) = (s - p_1)(s - p_2) = (s + 8)(s + 8) = s^2 + 16s + 64$
 The characteristic equation of the observer is computed from
 $\left|sI - A + LC\right| = 0$
 $\therefore \left|\begin{bmatrix}s & 0\\ 0 & s\end{bmatrix} - \begin{bmatrix}0 & 1\\ 20.6 & 0\end{bmatrix} + \begin{bmatrix}L_1\\ L_2\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}\right| = 0$
 $\therefore \begin{vmatrix}s + L_1 & -1\\ -20.6 + L_2 & s\end{vmatrix} = 0$
 $\therefore s^2 + L_1 s - 20.6 + L_2 = 0$
Example
 Comparing the two polynomials gives:
 $L = \begin{bmatrix}L_1\\ L_2\end{bmatrix} = \begin{bmatrix}16\\ 84.6\end{bmatrix}$
Example
 The transfer function of the system is
 $\dfrac{U(s)}{-Y(s)} = K(sI - A + LC + BK)^{-1}L$
 $= \begin{bmatrix}29.6 & 3.6\end{bmatrix}\begin{bmatrix}s+16 & -1\\ 93.6 & s+3.6\end{bmatrix}^{-1}\begin{bmatrix}16\\ 84.6\end{bmatrix}$
 $= \dfrac{778.16s + 3690.72}{s^2 + 19.6s + 151.2}$
 The transfer function of the original plant is $\dfrac{1}{s^2 - 20.6}$
 The compensated system is therefore 4th order
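The numbers in this example can be checked in a few lines of Python: the controller-observer's poles are the eigenvalues of A − LC − BK, whose characteristic polynomial should match the denominator above:

```python
import numpy as np

A = np.array([[0.0, 1.0], [20.6, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[29.6, 3.6]])
L = np.array([[16.0], [84.6]])

# The controller-observer U(s)/(-Y(s)) = K (sI - A + LC + BK)^(-1) L
# has poles at the eigenvalues of A - LC - BK.
M = A - L @ C - B @ K
den = np.real(np.poly(np.linalg.eigvals(M)))  # characteristic polynomial of M
print(den)  # approximately [1, 19.6, 151.2]
```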
Ex 7.28
Ex 7.30
Reference input with observer
 A regulator system is good as it results in good
disturbance rejection, but we want to investigate the
response of our system when a reference input is
applied.
 We want our system to have good command following
capabilities along with good disturbance rejection
capabilities.
 Command following refers to the capability of a control
system to follow changes in the input command with a good
transient response and steady-state error
 A well-designed control system has good disturbance
rejection and good command following capabilities.
Reference input with observer
 Good command following is obtained by the proper
introduction of the reference input into the system.
 Previously we introduced the reference input through the
expression
 𝑢 = −𝐾 𝑥 + 𝑁𝑟
 As we are now using the estimated state $\hat{x}$ in our
controller design, we can rewrite our expression for the
reference input introduction to be
 $u = -K\hat{x} + \bar{N}r$
 The preferred block diagram for the introduction of the
reference signal into the controller-observer is presented
on the next page.
Reference input with observer
 In this implementation it can be seen that a step
input will excite the observer in the same way as
the plant.
 This will result in the estimation error remaining
zero even after a step input since the step input
will not have an effect on the estimation error.
 The estimation error dynamics is never excited.
Reference input with observer
 The block-diagram below presents an alternative
way of introducing the reference signal.
 Both the estimator and the plant dynamics are
excited.
 Estimation error will decay with estimator
dynamics in addition to response from the
control/plant dynamics.
Integral Control
 Recall the following:
 A type 0 system has no free integrators in the forward path
 A type 1 system has 1 free integrator in the forward path
 A type 2 system has 2 free integrators in the forward path
 A type 0 system can be designed (by the correct choice
of 𝑁) to have zero steady-state error for a step input,
but it will not be the case if there are external
disturbances.
 Even for state-feedback design we can insert an
integrator into the system in the forward path between
the summing point and the plant.
Integral Control
Integral Control with Observer
Ex 7.35
Pole Selection Methods:
Dominant Second Order Poles
 Let’s revisit the pole selection process.
 We can choose the poles of a higher order system as
follows:
 Choose a pair of dominant second order complex conjugate
poles; and
 Choose the rest of the poles as real poles being sufficiently
damped
 Such a system will show a response that is
predominantly that of a second order system
 We can use the concepts of damping ratio and natural
frequency to define where we want to place the
dominant poles, but we do not have to be too concerned
about the precise placement of the other poles.
Ex 7.20
Pole Selection Methods: Linear
Quadratic Regulator
 Linear Quadratic Regulator (LQR)
 Very effective and widely used technique for linear control
system design.
 Optimal method, meaning that it gives the “best possible”
design solution as a trade-off between the system
performance and the required control effort.
 The LQR problem is solved by minimizing one of the
following 2 cost functions:
 $J = \int_0^\infty \left[\rho z^2(t) + u^2(t)\right]dt$ --- (A)
 $J = \int_0^\infty \left[x^T Q\,x + u^T R\,u\right]dt$ --- (B)
 As we will shortly show, these two expressions are
equivalent.
Pole Selection Methods: Linear
Quadratic Regulator
 Referring to the equations on the previous slide, we can
note that the control law that minimizes the expression
for 𝐽 is given by the expression
 𝑢 = −𝐾 𝑥
 Which means that it is a solution to our state-feedback
design problem.
 Equation A on the previous slide is the simpler
expression, but Matlab makes use of equation B with the
statement
 K=lqr(A,B,Q,R)
Pole Selection Methods: Linear
Quadratic Regulator
 We can select Q and R in one of the following two ways:
 Make Q and R diagonal matrices with
 $Q_{ii}$ = 1/maximum acceptable value of $x_i^2$
 $R_{ii}$ = 1/maximum acceptable value of $u_i^2$
 Or, set $R = 1$ and $Q = \rho H^T H$
 The weighting matrices are then modified during
subsequent design iterations to achieve an acceptable
trade-off between performance and control effort.
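A sketch of what Matlab's lqr computes, using the Hamiltonian-eigenvector solution of the Riccati equation in plain numpy; the double-integrator plant and the weight values below are illustrative assumptions, not from the slides:

```python
import numpy as np

def lqr(A, B, Q, R):
    # Solve the continuous-time algebraic Riccati equation via the
    # Hamiltonian eigenvector method, then K = R^(-1) B' P.
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # eigenvectors of the n stable modes
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))  # stabilizing Riccati solution
    return Rinv @ B.T @ P

# Illustrative plant: a double integrator with position-only weighting
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0])
R = np.array([[1.0]])
K = lqr(A, B, Q, R)
print(np.linalg.eigvals(A - B @ K))  # closed-loop poles in the LHP
```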
Ex 7.24
Problem 7.17
 Using the indicated state variables, write the state
equations for each of the systems shown in Fig. 7.85.
Find the transfer function for each system using both
block-diagram manipulation and matrix algebra.
Problem 7.17 - solution
Problem 7.20
Problem 7.20 - solution
Problem 7.21
Problem 7.21 - solution
Problem 7.30
Problem 7.30 - solution
Problem 7.44
Problem 7.44 - solution
