
CONTROL SYSTEM (EEE1008) FOR B. TECH EE & EEE (CORE) /
CONTROL SYSTEM (EEE1408) FOR B. TECH ETC (CORE)
SEQUENCE – 5
(MODULE-V): 8 HOURS

VOL-5: K1
STATE SPACE ANALYSIS OF CONTINUOUS TIME SYSTEMS:

1. INTRODUCTION: (1). STATE VARIABLES; (2). DEFINITION; (3). STATE VARIABLE REPRESENTATION;
(4). NON-UNIQUENESS OF STATE VARIABLES
2. STATE MODEL PRESENTATIONS EMPLOYED: (1). PHYSICAL VARIABLE FORM;
(2). PHASE VARIABLE FORM; (3). DERIVATION OF TRANSFER FUNCTION FROM STATE MODEL
3. SOLUTION OF STATE EQUATION: (1). SIGNIFICANCE OF THE STM;
(2). PROPERTIES OF φ(t) FOR TIME-INVARIANT SYSTEMS, I.E. Ẋ = AX AND φ(t) = e^(At)
4. EVALUATION OF STM: (1). USE OF POWER SERIES METHOD; (2). USE OF LAPLACE TRANSFORM;
(3). USE OF THE CAYLEY-HAMILTON METHOD; (4). USE OF SYLVESTER'S EXPANSION THEOREM
VOL-5: K2
POLE PLACEMENT BY STATE FEEDBACK:

1. INTRODUCTION: (1). CONTROLLABILITY; (2). OBSERVABILITY
2. POLE PLACEMENT BY STATE FEEDBACK

MODULE-5 (8 HOURS)
STATE SPACE ANALYSIS OF CONTINUOUS TIME SYSTEMS

1. INTRODUCTION:

(1). State Variables


A mathematical abstraction used to represent or model the dynamics of a system employs three types of
variables: the input, the output and the state variables.
(2). Definition:
The state of a dynamic system is a minimal set of variables (known as state variables) such that
knowledge of these variables at t = t0, together with knowledge of the inputs for t ≥ t0,
completely determines the behaviour of the system for t > t0.
(3). State Variable Representation

Fig.: Structure of a General System (the controlled system with input vector U of m inputs u1 ... um, output vector Y of p outputs y1 ... yp, and state vector X of n state variables x1 ... xn)

State Model:

Ẋ = AX + BU : State Equation
Y = CX + DU : Output Equation

X = [x1(t) x2(t) x3(t) ... xn(t)]ᵀ = State Vector (n)
Y = [y1(t) y2(t) y3(t) ... yp(t)]ᵀ = Output Vector (p)
U = [u1(t) u2(t) u3(t) ... um(t)]ᵀ = Input Vector (m)

A = State Matrix (n x n), B = Input Matrix (n x m)
C = Output Matrix (p x n)
D = Transmission Matrix (p x m)

(4). Non-Uniqueness of State Variables:

Consider an n-dimensional vector Z such that X = PZ, where P is any n x n non-singular constant matrix.
Substituting in Ẋ = AX + BU ⋯(a) gives Ẋ = PŻ = APZ + BU.
Pre-multiplying by P⁻¹, we get
Ż = P⁻¹APZ + P⁻¹BU = ÃZ + B̃U ⋯(c), where Ã = P⁻¹AP and B̃ = P⁻¹B.
Similarly, Y = CX + DU ⋯(b) becomes Y = CPZ + DU = C̃Z + DU ⋯(d), where C̃ = CP.
Eqn. (c) and Eqn. (d) give another state model for the same system. Since P is any non-singular matrix (and hence non-unique), the state model is also non-unique.
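The argument above can be spot-checked numerically. The sketch below (illustrative 2x2 matrices, not from the text; numpy assumed available) forms the transformed model (Ã, B̃, C̃) for an arbitrary non-singular P and verifies that the transfer function C(sI − A)⁻¹B is unchanged at a test frequency:

```python
import numpy as np

# Illustrative state model (assumed, not from the text)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

P = np.array([[1.0, 1.0], [0.0, 2.0]])   # any non-singular constant matrix
Pinv = np.linalg.inv(P)

At = Pinv @ A @ P                        # A~ = P^-1 A P
Bt = Pinv @ B                            # B~ = P^-1 B
Ct = C @ P                               # C~ = C P

s = 1.5 + 0.7j                           # arbitrary test point
I = np.eye(2)
T1 = C @ np.linalg.inv(s * I - A) @ B    # transfer function of the original model
T2 = Ct @ np.linalg.inv(s * I - At) @ Bt # transfer function of the transformed model
print(np.allclose(T1, T2))               # both state models give the same T(s)
```

The identity holds because C P (sI − P⁻¹AP)⁻¹ P⁻¹ B = C (sI − A)⁻¹ B, which is exactly the non-uniqueness statement above.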

2. STATE MODEL PRESENTATIONS EMPLOYED

(1) Physical Variables; (2) Phase Variables or Dual Phase Variables; (3) Canonical Variables.

(1). Physical Variable Form

E.g. 1: A mass M driven by a force F(t):

dv/dt = (1/M) F(t) ⋯(1)    Let x1 = x, x2 = v and u(t) = F(t)
dx/dt = v(t) ⋯(2)

Hence ẋ1 = x2 = 0·x1 + x2 + 0·u
ẋ2 = (1/M) F(t) = 0·x1 + 0·x2 + (1/M) u(t)

Hence [ẋ1; ẋ2] = [0 1; 0 0][x1; x2] + [0; 1/M] u

y = [1 0][x1; x2] = x = x1

E.g. 2:

The network has three energy storage elements, i.e. C, L1 & L2.

If we have knowledge of the initial conditions v(0), i1(0) & i2(0) and the input signal e(t) for t ≥ 0, then the behaviour of the network is completely specified for t ≥ 0.

The initial conditions v(0), i1(0), i2(0) together with the input signal e(t) for t ≥ 0 constitute the minimal information needed. It then follows that a natural selection of the state variables is:

x1(t) = v(t) = voltage across the capacitor ⋯(1)
x2(t) = i1(t) = current through L1 ⋯(2)
x3(t) = i2(t) = current through L2 ⋯(3)

The differential equations governing the behaviour of the RLC circuit are:

i1 + i2 + C dv/dt = 0 ⋯(4)
L1 di1/dt + R1 i1 + e − v = 0 ⋯(5)
L2 di2/dt + R2 i2 − v = 0 ⋯(6)

and y1 = v_R2 = i2 R2 = R2 x3 ⋯(7)
y2 = i2 = 1·x3 = x3 ⋯(8)

From (4): ẋ1 = dv/dt = −(1/C) i1 − (1/C) i2 = 0·x1 − (1/C) x2 − (1/C) x3 + 0·e
From (5): ẋ2 = di1/dt = (1/L1) v − (R1/L1) i1 − (1/L1) e = (1/L1) x1 − (R1/L1) x2 + 0·x3 − (1/L1) e
From (6): ẋ3 = di2/dt = (1/L2) v − (R2/L2) i2 = (1/L2) x1 + 0·x2 − (R2/L2) x3 + 0·e

In vector-matrix form:

[ẋ1; ẋ2; ẋ3] = [0 −1/C −1/C; 1/L1 −R1/L1 0; 1/L2 0 −R2/L2][x1; x2; x3] + [0; −1/L1; 0] e ⋯(9): State Equation, and

[y1; y2] = [0 0 R2; 0 0 1][x1; x2; x3] ⋯(10): Output Equation

State Model: Ẋ = AX + Be
Y = CX + De, where D = [0]
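As a quick numerical illustration, the matrices of Eqn. (9)-(10) can be assembled for sample component values (C = 1 F, L1 = L2 = 1 H, R1 = R2 = 1 Ω; these values are assumed, not from the text) and checked for stability, as expected of a passive RLC network:

```python
import numpy as np

# Illustrative component values (assumed)
Cc, L1, L2, R1, R2 = 1.0, 1.0, 1.0, 1.0, 1.0

# State matrix A and input matrix B from Eqn. (9)
A = np.array([
    [0.0,    -1 / Cc, -1 / Cc],
    [1 / L1, -R1 / L1,  0.0],
    [1 / L2,  0.0,    -R2 / L2],
])
B = np.array([[0.0], [-1 / L1], [0.0]])

# Output matrix C from Eqn. (10): y1 = R2*x3, y2 = x3
Cmat = np.array([[0.0, 0.0, R2],
                 [0.0, 0.0, 1.0]])

eigs = np.linalg.eigvals(A)
print(np.all(eigs.real < 0))   # all eigenvalues in the left half plane -> stable
```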

E.g. 3: An armature-controlled DC motor.

Let x1 = θ, x2 = θ̇ = ω and x3 = ia (all physical variables).

J dω/dt + fω = T (torque) = KT ia = KT x3
Or J ẋ2 + f x2 = KT x3 ⋯(1)

And va − kb ω = Ra ia + La dia/dt
Or va − kb x2 = Ra x3 + La ẋ3 ⋯(2)

From (1): ẋ2 = 0·x1 − (f/J) x2 + (KT/J) x3 + 0·va ⋯(3)
From (2): ẋ3 = 0·x1 − (kb/La) x2 − (Ra/La) x3 + (1/La) va ⋯(4)
And ẋ1 = 0·x1 + 1·x2 + 0·x3 + 0·va ⋯(5)
y = x1 + 0 + 0 ⋯(6)

Hence in vector-matrix notation:

[ẋ1; ẋ2; ẋ3] = [0 1 0; 0 −f/J KT/J; 0 −kb/La −Ra/La][x1; x2; x3] + [0; 0; 1/La] va

y = x1 = [1 0 0][x1; x2; x3]

(2). Phase Variable Form:

Phase variables (or dual phase variables) can be used to create a linkage, called a simulation diagram, between the classical and the modern approaches. Each integrator output is defined as a state variable. Alternatively, the phase variables are defined as those state variables which are obtained from one of the system variables (normally the output) and its successive derivatives.
Specific transfer functions are synthesized through the interconnection of basic components. The basic component for synthesis is the integrator, a block or branch having transmittance 1/s. A block diagram or signal flow graph composed only of constant transmittances and integrators is termed a Simulation Diagram (also called a State Variable Diagram, SVD).
The order of the system is simply the number of integrators present.
One very useful realization, known as the phase-variable form, is described below:
Example 1 (a): In phase-variable form, a given transfer function:

T(s) = (−5s² + 4s − 12) / (s³ + 6s² + s + 3)
     = (−5/s + 4/s² − 12/s³) / (1 + 6/s + 1/s² + 3/s³)   (dividing numerator and denominator by s³)
     = (P1 + P2 + P3) / (1 − L1 − L2 − L3) = Y(s)/U(s)

(In this form the transfer function may be interpreted as a Mason's gain rule expression.)
(That is, dividing the numerator and denominator by the highest power of s in the denominator places a 1 in the denominator and results in other numerator and denominator terms that are inverse powers of s, representing multiple integrations.)

The numerator terms −5/s, 4/s² and −12/s³ are each taken to be paths through integrators, and the paths are intermingled as in Fig. (a) to require a minimum number of integrators (three, in this case).
The denominator terms 6/s, 1/s² and 3/s³ are taken to be the negatives of the loop gains. By routing each of these loops through the node to which U(s) couples, all loops touch one another, so no product of loop gains is involved. All the loops touch each of the paths, so each cofactor is unity.
In Fig. (b) each integrator output signal has been labelled. These signals are termed the state variables of the system.
The given transfer function is then described by the following equations:

Fig.: (a) (Paths in simulation diagram)

From Fig. (b):

X1(s) = (1/s) X2(s) ⋯(1)
X2(s) = (1/s) X3(s) ⋯(2)
X3(s) = (1/s)[−3X1(s) − X2(s) − 6X3(s) + U(s)] ⋯(3)
Y(s) = −12X1(s) + 4X2(s) − 5X3(s) ⋯(4)

From (1), sX1(s) = X2(s) ⋯(5)
From (2), sX2(s) = X3(s) ⋯(6)
From (3), sX3(s) = −3X1(s) − X2(s) − 6X3(s) + U(s) ⋯(7)
And from (4), Y(s) = −12X1(s) + 4X2(s) − 5X3(s) ⋯(8)
Taking the inverse Laplace transform:

From (5), dx1/dt = x2(t) ⋯(9)
From (6), dx2/dt = x3(t) ⋯(10)
From (7), dx3/dt = −3x1(t) − x2(t) − 6x3(t) + u(t) ⋯(11)
From (8), y(t) = −12x1(t) + 4x2(t) − 5x3(t) ⋯(12)

In vector-matrix form (from 9, 10, 11 & 12):

[ẋ1; ẋ2; ẋ3] = [0 1 0; 0 0 1; −3 −1 −6][x1; x2; x3] + [0; 0; 1] u

Or Ẋ = AX + Bu
Y = CX + Du
where [A, B, C, D] is called the quadruple.

And Y = [−12 4 −5][x1; x2; x3] + [0] u

If A, B, C & D are constant, the system is time-invariant. If A, B, C & D are functions of time, the system is called a time-varying parameter (TVP) system.
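The phase-variable quadruple above can be checked against the original transfer function by evaluating C(sI − A)⁻¹B + D at an arbitrary test point (numpy assumed available):

```python
import numpy as np

# Phase-variable realization of T(s) = (-5s^2 + 4s - 12)/(s^3 + 6s^2 + s + 3)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-3.0, -1.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[-12.0, 4.0, -5.0]])
D = np.array([[0.0]])

s = 2.0 + 1.0j                                   # arbitrary test frequency
T_ss = (C @ np.linalg.inv(s * np.eye(3) - A) @ B + D)[0, 0]
T_poly = (-5 * s**2 + 4 * s - 12) / (s**3 + 6 * s**2 + s + 3)
print(np.isclose(T_ss, T_poly))                  # realization matches T(s)
```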
(b) Dual Phase Variable Form:
Another especially convenient way to realize a transfer function with integrators is to arrange the signal flow graph so that all the paths and all the loops touch an output node.

Example 1 revisited: T(s) = Y(s)/U(s) = (−5s² + 4s − 12) / (s³ + 6s² + s + 3)
     = (−5/s + 4/s² − 12/s³) / (1 + 6/s + 1/s² + 3/s³) = (P1 + P2 + P3) / (1 − (L1 + L2 + L3))

The SVD for this example is shown in Fig. (c) on the next page.

Fig.: (c) Realizing a transfer function in the dual phase variable form.

(The output signal is derived from a single node, while the input signal is coupled to each of the state nodes.)

The Laplace transform relations describing the system, in terms of the indicated state variables, from the SVD are:

sX1(s) = −6X1(s) + X2(s) − 5U(s); in the time domain, ẋ1 = −6x1(t) + x2(t) − 5u(t)
sX2(s) = −X1(s) + X3(s) + 4U(s); in the time domain, ẋ2 = −x1(t) + x3(t) + 4u(t)
sX3(s) = −3X1(s) − 12U(s); in the time domain, ẋ3 = −3x1(t) − 12u(t)
Y(s) = X1(s); in the time domain, y(t) = x1(t)

In vector-matrix notation:

[ẋ1; ẋ2; ẋ3] = [−6 1 0; −1 0 1; −3 0 0][x1; x2; x3] + [−5; 4; −12] u    (Ẋ = AX + Bu)

y = [1 0 0][x1; x2; x3] + [0] u    (Y = CX + Du)
It is observed that if the transfer function is strictly proper, D is always [0]. Comparing the phase-variable form and the dual phase variable form: in the dual phase variable form,
C = Bᵀ of the phase-variable form
B = Cᵀ of the phase-variable form
Thus B and C are duals in these two forms.

The matrix A has a special form:
A = [0 1 0; 0 0 1; −3 −1 −6] in the phase-variable form, and
A = [−6 1 0; −1 0 1; −3 0 0] in the dual phase variable form.
This is the companion form, or Bush form.

E.g.: The given differential equation for a system is:

d³y/dt³ + 6 d²y/dt² + 11 dy/dt + 6y = u

Let x1 = y → ẋ1 = ẏ = 0 + x2 + 0
x2 = ẋ1 = ẏ → ẋ2 = ẍ1 = ÿ = 0 + 0 + x3
x3 = ẋ2 = ẍ1 = ÿ → ẋ3 = d³y/dt³ = −6x1 − 11x2 − 6x3 + u

In vector-matrix form:

[ẋ1; ẋ2; ẋ3] = [0 1 0; 0 0 1; −6 −11 −6][x1; x2; x3] + [0; 0; 1] u

y = x1 = [1 0 0][x1; x2; x3] + [0] u
(3). Derivation of Transfer Function from State Model:
Ẋ = AX + Bu ⋯(1a)
Y = CX + Du ⋯(1b)
Taking the Laplace transform of both sides of Eqn. (1):
sX(s) − X(0) = AX(s) + BU(s) ⋯(2a)
Y(s) = CX(s) + DU(s) ⋯(2b)
From (2a), (sI − A)X(s) = X(0) + BU(s)
Or X(s) = (sI − A)⁻¹X(0) + (sI − A)⁻¹BU(s) ⋯(3)
Substituting (3) in (2b):
Y(s) = C(sI − A)⁻¹X(0) + C(sI − A)⁻¹BU(s) + DU(s) ⋯(4)
Assuming initial conditions to be zero, Eqn. (4) yields:
T(s) = Y(s)/U(s) = C(sI − A)⁻¹B + D = C adj(sI − A) B / det(sI − A) + D ⋯(5)
While the state model is non-unique, the transfer function of the system is unique, i.e. the transfer function must work out to be the same irrespective of which particular state model is used to describe the system.
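One consequence of Eqn. (5) worth checking numerically: the denominator of T(s) is det(sI − A), i.e. the characteristic polynomial of A. The sketch below applies numpy's `np.poly` (which returns exactly these coefficients) to the companion matrix of the earlier example:

```python
import numpy as np

# Companion matrix from the phase-variable example above
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-3.0, -1.0, -6.0]])

# Coefficients of det(sI - A), highest power first
char_poly = np.poly(A)
print(np.allclose(char_poly, [1.0, 6.0, 1.0, 3.0]))   # s^3 + 6s^2 + s + 3
```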
3. Solution of State Equation:
(a) Scalar differential equation: di(t)/dt + P i(t) = Q(t) ⋯(1): a non-homogeneous differential equation.
Let the integrating factor be I.F. = e^(Pt).
Then e^(Pt) di(t)/dt + e^(Pt) P i(t) = e^(Pt) Q(t) ⋯(2)
Or d/dt (i(t) e^(Pt)) = e^(Pt) Q(t) ⋯(3)
Integrating both sides, we get
i(t) e^(Pt) = ∫₀ᵗ e^(Pτ) Q(τ) dτ + K ⋯(4)
Or i(t) = e^(−Pt) ∫₀ᵗ e^(Pτ) Q(τ) dτ + e^(−Pt) K ⋯(5)
At t = 0, i(t) = i(0); on substitution in the above equation, K = i(0).
Hence i(t) = e^(−Pt) ∫₀ᵗ e^(Pτ) Q(τ) dτ + e^(−Pt) i(0) ⋯(6)

(b) Matrix differential equation: Ẋ = AX + Bu ⋯(7)

This is also a non-homogeneous equation.
(i) In this case, comparing with (1), P corresponds to −A and Q(t) to Bu(t), so the integrating factor is e^(−At).
Hence X(t) = e^(At) ∫₀ᵗ e^(−Aτ) Bu(τ) dτ + e^(At) X(0) ⋯(8)
Or X(t) = e^(At) X(0) + ∫₀ᵗ e^(A(t−τ)) Bu(τ) dτ ⋯(9)

(ii) Using the Laplace transform method, from (7):
sX(s) − X(0) = AX(s) + BU(s)
Or X(s) = (sI − A)⁻¹X(0) + (sI − A)⁻¹BU(s)
Or X(t) = e^(At) X(0) + L⁻¹{(sI − A)⁻¹BU(s)}
Or X(t) = e^(At) X(0) + ∫₀ᵗ e^(A(t−τ)) Bu(τ) dτ ⋯(10)
Or X(t) = φ(t) X(0) + ∫₀ᵗ φ(t−τ) Bu(τ) dτ ⋯(11)
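Eqn. (10) can be evaluated directly on a computer. The sketch below (illustrative 2x2 system with a unit-step input; numpy assumed available) computes e^(At) by the truncated power series and the convolution integral by the trapezoidal rule:

```python
import numpy as np

def expm_series(M, terms=40):
    # Truncated power series e^M = I + M + M^2/2! + ... (the series used in this module)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Illustrative system and initial state (assumed for this sketch)
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[-1.0], [0.0]])
u = lambda tau: 1.0                      # unit step input
t = 0.8

# Trapezoidal approximation of the convolution integral in Eqn. (10)
taus = np.linspace(0.0, t, 2001)
vals = np.stack([(expm_series(A * (t - tau)) @ B * u(tau)).ravel() for tau in taus])
h = taus[1] - taus[0]
w = np.full(len(taus), h)
w[0] = w[-1] = h / 2.0                   # trapezoidal weights
forced = (w[:, None] * vals).sum(axis=0).reshape(2, 1)

x_t = expm_series(A * t) @ x0 + forced   # X(t) = e^{At} X(0) + integral term
print(np.round(x_t.ravel(), 4))
```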

where φ(t) = e^(At) = L⁻¹[(sI − A)⁻¹] = State Transition Matrix, whereas Φ(s) = (sI − A)⁻¹ is called the Resolvent Matrix.
It is observed that knowledge of the matrix e^(At) and the initial state X(0) of the system allows us to determine the state X(t) at any later time. Because of the key role this matrix plays in determining the transition of the system from one state to another, the matrix e^(At) is called the State Transition Matrix, i.e. STM, and is usually denoted by φ(t). For a homogeneous matrix differential equation of the form Ẋ = AX, X(t) = φ(t) X(0) ⋯(12)
(1). Significance of the STM:
Since the state transition matrix satisfies the homogeneous state equation, it represents the free (unforced) response of the system. In other words, it governs the response that is excited by the initial conditions only. In view of the expansion
φ(t) = L⁻¹[(sI − A)⁻¹] = e^(At) = I + At + A²t²/2! + A³t³/3! + ⋯,
the STM depends only on the matrix A and is therefore referred to as the state transition matrix of A. As the name implies, φ(t) completely defines the transition of the states from the initial time t = 0 to any other time t when the inputs are zero (i.e. for the homogeneous matrix differential equation).
(2). Properties of φ(t) for a time-invariant system, i.e. Ẋ = AX and φ(t) = e^(At):
(i) φ(0) = e^(A×0) = I
(ii) φ(t) = (e^(−At))⁻¹ = [φ(−t)]⁻¹, or φ⁻¹(t) = φ(−t)
(iii) φ(t1 + t2) = e^(A(t1+t2)) = φ(t1) φ(t2) = φ(t2) φ(t1)
(iv) [φ(t)]ⁿ = φ(nt)
(v) φ(t2 − t1) φ(t1 − t0) = φ(t2 − t0) = φ(t1 − t0) φ(t2 − t1)
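Properties (i)-(iii) are easy to spot-check numerically with a truncated power series for e^(At) (the matrix A below is illustrative, not from the text):

```python
import numpy as np

def expm_series(M, terms=40):
    # Truncated power series e^M = I + M + M^2/2! + ...
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative system matrix
phi = lambda t: expm_series(A * t)

t1, t2 = 0.4, 0.9
print(np.allclose(phi(0.0), np.eye(2)))                # (i)   phi(0) = I
print(np.allclose(np.linalg.inv(phi(t1)), phi(-t1)))   # (ii)  phi(t)^-1 = phi(-t)
print(np.allclose(phi(t1 + t2), phi(t1) @ phi(t2)))    # (iii) phi(t1+t2) = phi(t1)phi(t2)
```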

Example: Obtain the time response of the system described by the state equation

Ẋ = [−1 1; 0 −2] X + [0; 1] r(t), for r(t) = 1 for t ≥ 0 and X(0) = [−1 0]ᵀ

Solution: The state transition matrix, STM, is first determined as:

φ(t) = e^(At) = I + At + A²t²/2! + A³t³/3! + ⋯
     = [1 0; 0 1] + [−1 1; 0 −2] t + (t²/2!)[−1 1; 0 −2][−1 1; 0 −2] + ⋯
     = [1 0; 0 1] + [−t t; 0 −2t] + (t²/2!)[1 −3; 0 4] + ⋯
     = [1 − t + t²/2! − ⋯ , t − 3t²/2! + ⋯ ; 0 , 1 − 2t + 4t²/2! − ⋯]
     = [e^(−t)  e^(−t) − e^(−2t); 0  e^(−2t)] = φ(t)

As t0 = 0 here, the solution is given by X(t) = φ(t) X(0) + ∫₀ᵗ φ(t−τ) B r(τ) dτ

= [e^(−t)  e^(−t) − e^(−2t); 0  e^(−2t)][−1; 0] + ∫₀ᵗ [e^(−(t−τ))  e^(−(t−τ)) − e^(−2(t−τ)); 0  e^(−2(t−τ))][0; 1] dτ

= [−e^(−t); 0] + ∫₀ᵗ [e^(−(t−τ)) − e^(−2(t−τ)); e^(−2(t−τ))] dτ

= [−e^(−t); 0] + [−e^(−t) + (1/2)e^(−2t) + 1/2; −(1/2)e^(−2t) + 1/2]

Therefore [x1(t); x2(t)] = [(1/2) − 2e^(−t) + (1/2)e^(−2t); (1/2) − (1/2)e^(−2t)]

Hence x1(t) = 0.5 − 2e^(−t) + 0.5e^(−2t)
x2(t) = 0.5 − 0.5e^(−2t) (Ans)
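The closed-form answer can be cross-checked by integrating ẋ = Ax + Br numerically with a simple forward-Euler scheme:

```python
import numpy as np

# System from the worked example: x' = Ax + B*r, r(t) = 1, x(0) = [-1, 0]^T
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
x = np.array([[-1.0], [0.0]])

dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):
    x = x + dt * (A @ x + B * 1.0)       # Euler step with unit step input

# Closed-form answer evaluated at t = T
analytic = np.array([[0.5 - 2 * np.exp(-T) + 0.5 * np.exp(-2 * T)],
                     [0.5 - 0.5 * np.exp(-2 * T)]])
print(np.allclose(x, analytic, atol=1e-3))   # Euler result agrees to ~O(dt)
```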

4. Evaluation of STM
(1). Use of the Power Series Method:
E.g. 1: Given A = [0 1; 0 −1], find the STM φ(t) = e^(At).

φ(t) = e^(At) = I + At + A²t²/2! + ⋯
     = [1 0; 0 1] + [0 t; 0 −t] + (t²/2!)[0 1; 0 −1][0 1; 0 −1] + ⋯
     = [1  t; 0  1 − t] + (t²/2!)[0 −1; 0 1] + ⋯
     = [1 , t − t²/2! + ⋯ ; 0 , 1 − t + t²/2! − ⋯]
     = [1  1 − e^(−t); 0  e^(−t)]

(2). Use of Laplace Transform:

E.g.: Given A = [0 1; −20 −9], find the STM φ(t) = e^(At).

Solution: (sI − A) = [s −1; 20 s+9]; adj(sI − A) = [s+9 1; −20 s]
|sI − A| = s² + 9s + 20 = (s + 4)(s + 5)

Hence (sI − A)⁻¹ = [ (s+9)/((s+4)(s+5))  1/((s+4)(s+5)) ; −20/((s+4)(s+5))  s/((s+4)(s+5)) ]

By partial fractions:
(sI − A)⁻¹ = [ 5/(s+4) − 4/(s+5) , 1/(s+4) − 1/(s+5) ; −20/(s+4) + 20/(s+5) , −4/(s+4) + 5/(s+5) ] = Φ(s) = Resolvent

Hence φ(t) = L⁻¹[(sI − A)⁻¹] = [ 5e^(−4t) − 4e^(−5t) , e^(−4t) − e^(−5t) ; −20e^(−4t) + 20e^(−5t) , −4e^(−4t) + 5e^(−5t) ]
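The inverse-Laplace result can be checked against the power-series form of e^(At) at a sample time:

```python
import numpy as np

def expm_series(M, terms=60):
    # Truncated power series e^M = I + M + M^2/2! + ...
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-20.0, -9.0]])
t = 0.3
e4, e5 = np.exp(-4 * t), np.exp(-5 * t)

# phi(t) as obtained above by the Laplace transform method
phi_laplace = np.array([[5 * e4 - 4 * e5,        e4 - e5],
                        [-20 * e4 + 20 * e5, -4 * e4 + 5 * e5]])
print(np.allclose(expm_series(A * t), phi_laplace))   # both methods agree
```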
(3). Use of the Cayley-Hamilton Method:

The theorem states that "every matrix satisfies its own characteristic equation".
The matrix characteristic equation is |λI − A| = 0. Using the theorem, φ(t) = e^(At) can be written as a finite polynomial in A:
e^(At) = h0 I + h1 A + ⋯ + h(n−1) A^(n−1)
If A possesses distinct eigenvalues λ1, λ2, ..., λn, we obtain n algebraic equations for the coefficients:
e^(λi t) = h0 + h1 λi + ⋯ + h(n−1) λi^(n−1), i = 1, 2, ..., n
For repeated eigenvalues, the additional equations are obtained by differentiating with respect to λ:
(d^k/dλ^k) e^(λt) evaluated at λ = λi equals (d^k/dλ^k)[h0 + h1 λ + ⋯ + h(n−1) λ^(n−1)] at λ = λi.
This is illustrated through the examples below.

E.g.: If A = [1 4 10; 0 2 0; 0 0 3], determine φ(t) = e^(At).

Solution: e^(At) = h0 I + h1 A + h2 A²
The eigenvalues of A are given by
|λI − A| = det[λ−1 −4 −10; 0 λ−2 0; 0 0 λ−3] = 0
Or (λ − 1)(λ − 2)(λ − 3) = 0
Hence the eigenvalues are λ1 = 1, λ2 = 2, λ3 = 3.
Since there are three distinct eigenvalues:
e^t = h0 + h1 + h2
e^(2t) = h0 + 2h1 + 4h2
e^(3t) = h0 + 3h1 + 9h2
Solving the above, we get
h2 = (1/2)e^(3t) − e^(2t) + (1/2)e^t
h1 = −(3/2)e^(3t) + 4e^(2t) − (5/2)e^t
h0 = e^(3t) − 3e^(2t) + 3e^t

From A, A² = [1 12 40; 0 4 0; 0 0 9]

Hence φ(t) = e^(At) = h0 I + h1 A + h2 A²
= [h0 + h1 + h2 , 4h1 + 12h2 , 10h1 + 40h2 ; 0 , h0 + 2h1 + 4h2 , 0 ; 0 , 0 , h0 + 3h1 + 9h2]
= [e^t , 4e^(2t) − 4e^t , 5e^(3t) − 5e^t ; 0 , e^(2t) , 0 ; 0 , 0 , e^(3t)] (Ans)
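This Cayley-Hamilton result can be checked against the power series for e^(At) at a sample time:

```python
import numpy as np

def expm_series(M, terms=60):
    # Truncated power series e^M = I + M + M^2/2! + ...
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[1.0, 4.0, 10.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
t = 0.5
e1, e2, e3 = np.exp(t), np.exp(2 * t), np.exp(3 * t)

# phi(t) as obtained above by the Cayley-Hamilton method
phi_ch = np.array([[e1, 4 * e2 - 4 * e1, 5 * e3 - 5 * e1],
                   [0.0, e2, 0.0],
                   [0.0, 0.0, e3]])
print(np.allclose(expm_series(A * t), phi_ch))   # both methods agree
```

The check also confirms the (1,3) entry is 5e^(3t) − 5e^t, since for this triangular A it equals 10(e^(3t) − e^t)/(3 − 1).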

E.g.: Given A = [0 2; −2 −4], find φ(t).

Eigenvalues of A: |λI − A| = det[λ −2; 2 λ+4] = 0
Or λ(λ + 4) + 4 = 0, or λ² + 4λ + 4 = 0
Hence λ1, λ2 = −2, −2 (repeated eigenvalues).

Hence e^(λ1 t) = h0 + h1 λ1
and, taking the derivative with respect to λ,
t e^(λ1 t) = h1
On substitution of λ1 = −2 we get
e^(−2t) = h0 − 2h1
t e^(−2t) = h1
Therefore h0 = e^(−2t) + 2t e^(−2t)

Hence φ(t) = e^(At) = h0 I + h1 A = (1 + 2t)e^(−2t)[1 0; 0 1] + t e^(−2t)[0 2; −2 −4]

= [ (1 + 2t)e^(−2t) , 2t e^(−2t) ; −2t e^(−2t) , (1 − 2t)e^(−2t) ] (Answer)
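The repeated-eigenvalue result can likewise be checked against the series form of e^(At):

```python
import numpy as np

def expm_series(M, terms=50):
    # Truncated power series e^M = I + M + M^2/2! + ...
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 2.0], [-2.0, -4.0]])
t = 0.7
e = np.exp(-2 * t)

# phi(t) as obtained above for the repeated eigenvalue -2
phi_ch = np.array([[(1 + 2 * t) * e, 2 * t * e],
                   [-2 * t * e, (1 - 2 * t) * e]])
print(np.allclose(expm_series(A * t), phi_ch))
```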

(4). Use of Sylvester's Expansion Theorem:

The theorem states that if f(A) is a polynomial function of the matrix A, such as
f(A) = Σ_k c_k A^k,
then f(A) may be expressed as
f(A) = Σ(i=1..n) f(λi) Fi, where the λi's are the (distinct) eigenvalues of A and
Fi = ∏(j≠i) (A − λj I)/(λi − λj).
In the case f(A) = φ(t) = e^(At), hence φ(t) = Σ(i=1..n) e^(λi t) Fi.

E.g.: Given A = [−1 1; 0 −2], find φ(t) = e^(At).

The eigenvalues are given by |λI − A| = det[λ+1 −1; 0 λ+2] = 0
Or (λ + 1)(λ + 2) = 0
Hence λ1 = −1 and λ2 = −2.

Therefore F1 = (A − λ2 I)/(λ1 − λ2) = (A + 2I)/1 = A + 2I = [1 1; 0 0]
and F2 = (A − λ1 I)/(λ2 − λ1) = (A + I)/(−1) = [0 −1; 0 1]

Hence φ(t) = F1 e^(λ1 t) + F2 e^(λ2 t) = e^(−t)[1 1; 0 0] + e^(−2t)[0 −1; 0 1] = [e^(−t)  e^(−t) − e^(−2t); 0  e^(−2t)] (Ans)
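Sylvester's expansion translates directly into a few lines of code for the distinct-eigenvalue case. The sketch below implements the formula and reproduces the result of this example:

```python
import numpy as np

def stm_sylvester(A, t):
    # Sylvester's expansion: phi(t) = sum_i e^{lambda_i t} prod_{j != i} (A - lambda_j I)/(lambda_i - lambda_j)
    # Valid only for distinct eigenvalues.
    lams = np.linalg.eigvals(A)
    n = len(lams)
    phi = np.zeros_like(A, dtype=complex)
    for i in range(n):
        Fi = np.eye(n, dtype=complex)
        for j in range(n):
            if j != i:
                Fi = Fi @ (A - lams[j] * np.eye(n)) / (lams[i] - lams[j])
        phi = phi + np.exp(lams[i] * t) * Fi
    return phi.real   # real for a real A with real eigenvalues

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
t = 0.5
e1, e2 = np.exp(-t), np.exp(-2 * t)
phi_expected = np.array([[e1, e1 - e2], [0.0, e2]])   # result of this example
print(np.allclose(stm_sylvester(A, t), phi_expected))
```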

VOL-5: K2

POLE PLACEMENT BY STATE FEEDBACK

1. Introduction:
(1). Controllability: The state-variable formulation of an nth order linear time-invariant multivariable system Ẋ = AX + BU is completely state controllable if it is possible to transfer the system from any initial state to any other desired state in a finite time by a suitable control input. The necessary and sufficient condition is that the controllability matrix [B : AB : A²B : ⋯ : A^(n−1)B] have rank n.

(2). Observability: The system is completely observable if any initial state X(0) can be determined from knowledge of the output y(t) and the input u(t) over a finite time interval. The necessary and sufficient condition is that the observability matrix [Cᵀ : AᵀCᵀ : ⋯ : (Aᵀ)^(n−1)Cᵀ] have rank n.
2. POLE PLACEMENT BY STATE FEEDBACK

Fig.: K1: A system with state feedback (the control signal u = −Kx is fed back to the plant Ẋ = AX + Bu, y = CX + Du).


Ẋ = AX + Bu    where X = state vector (n)
y = CX + Du    y = output signal (scalar)
u = control signal (scalar)
A = n x n constant matrix = system matrix
B = n x 1 constant matrix = input matrix
C = 1 x n constant matrix = output matrix
D = constant (scalar) = d
With state feedback, u = −KX
∴ Ẋ = AX + B(−KX) = (A − BK)X
The eigenvalues of (A − BK) are called the regulator poles, meaning the closed-loop poles.
The problem of placing the regulator poles (i.e. closed-loop poles) at the desired locations is called the pole placement problem, and doing so using state feedback (i.e. with feedback matrix K) is called pole placement by state feedback.
Necessary and sufficient condition for arbitrary pole placement:
A necessary and sufficient condition for arbitrary pole placement is that the system be completely state controllable.
Example: A system is represented by Ẋ = AX + Bu, Y = CX,

where A = [0 1 0; 0 0 1; −5 −7 −3], B = [0; 0; 1], C = [−2 4 3]

Find the feedback matrix K = [k1 k2 k3] so that the closed-loop poles are located at s = −4, −4 and −5.

Step 1: The state controllability test is done in order to ensure that arbitrary pole placement is possible:
[B : AB : A²B] should have rank n = 3 (here).

AB = [0 1 0; 0 0 1; −5 −7 −3][0; 0; 1] = [0; 1; −3]

A²B = [0 1 0; 0 0 1; −5 −7 −3][0; 1; −3] = [1; −3; 2]

∴ [B : AB : A²B] = [0 0 1; 0 1 −3; 1 −3 2]
det = 0·|1 −3; −3 2| − 0·|0 −3; 1 2| + 1·|0 1; 1 −3| = −1 ≠ 0; hence the rank is 3 = n.
Hence arbitrary pole placement is possible, since the system is completely state controllable.
The closed-loop poles with state feedback are the eigenvalues of (A − BK), i.e. the roots of
|sI − (A − BK)| = 0

BK = [0; 0; 1][k1 k2 k3] = [0 0 0; 0 0 0; k1 k2 k3]

A − BK = [0 1 0; 0 0 1; −5 −7 −3] − [0 0 0; 0 0 0; k1 k2 k3] = [0 1 0; 0 0 1; −(5+k1) −(7+k2) −(3+k3)]

Hence |sI − (A − BK)| = det[s −1 0; 0 s −1; (5+k1) (7+k2) (s+3+k3)] = 0
= s[s(s + 3 + k3) + (7 + k2)] + (5 + k1) = 0
= s³ + (3 + k3)s² + (7 + k2)s + (5 + k1) = 0
The desired characteristic polynomial is (s + 4)(s + 4)(s + 5) = s³ + 13s² + 56s + 80
Equating coefficients: (3 + k3) = 13; (7 + k2) = 56; (5 + k1) = 80
∴ k1 = 80 − 5 = 75
k2 = 56 − 7 = 49
k3 = 13 − 3 = 10
Hence K = [75 49 10].
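Both steps of this example, the controllability test and the computed gain, can be verified numerically:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-5.0, -7.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
K = np.array([[75.0, 49.0, 10.0]])       # gain computed above

# Step 1: controllability matrix [B : AB : A^2B] must have rank n = 3
ctrb = np.hstack([B, A @ B, A @ A @ B])
print(np.linalg.matrix_rank(ctrb))       # full rank -> completely state controllable

# Step 2: eigenvalues of A - BK should be the desired poles -4, -4, -5
poles = np.sort(np.linalg.eigvals(A - B @ K).real)
print(np.allclose(poles, [-5.0, -4.0, -4.0], atol=1e-4))
```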
