Discrete Time Control Systems – Unit 5 (MODULE-V)
VOL-5: K1
MODULE-5 (8 HOURS)
STATE SPACE ANALYSIS OF CONTINUOUS TIME SYSTEM
1. INTRODUCTION:
Fig.: Structure of a general system with m inputs (u1 … um), p outputs (y1 … yp) and n state variables (x1 … xn).
State Model:
Ẋ(t) = A X(t) + B U(t)  (state equation)
Y(t) = C X(t) + D U(t)  (output equation)
where
X(t) = [x1(t) x2(t) … xn(t)]ᵀ = State Vector (n)
Y(t) = [y1(t) y2(t) … yp(t)]ᵀ = Output Vector (p)
U(t) = [u1(t) u2(t) … um(t)]ᵀ = Input Vector (m)
A = State Matrix (n x n), B = Input Matrix (n x m)
C = Output Matrix (p x n)
D = Transmission Matrix (p x m)
Non-uniqueness of the state model: Consider an n-dimensional vector Z such that X = PZ, where P is any n x n non-singular constant matrix. Since
Ẋ = AX + BU ⋯ (a) and Y = CX + DU ⋯ (b),
substituting X = PZ gives Ẋ = PŻ = APZ + BU. Pre-multiplying by P⁻¹ we get
Ż = P⁻¹APZ + P⁻¹BU = ÃZ + B̃U ⋯ (c), where Ã = P⁻¹AP and B̃ = P⁻¹B,
and Y = CPZ + DU = C̃Z + DU ⋯ (d), where C̃ = CP.
Eqn. (c) and Eqn. (d) give another state model of the same system. Since P can be any non-singular matrix (it is not unique), the state model is also non-unique.
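The invariance of the input–output behaviour under such a transformation can be checked numerically. The sketch below (the A, B, C, D and P values are arbitrary illustrative choices, not taken from these notes) uses scipy to confirm that (A, B, C, D) and (Ã, B̃, C̃, D) give the same transfer function:

```python
import numpy as np
from numpy.linalg import inv
from scipy.signal import ss2tf

# Original state model (illustrative values only)
A = np.array([[0.0, 1.0], [-3.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Any non-singular transformation matrix P
P = np.array([[1.0, 1.0], [0.0, 2.0]])

# Transformed model: A~ = P^-1 A P, B~ = P^-1 B, C~ = C P
At, Bt, Ct = inv(P) @ A @ P, inv(P) @ B, C @ P

num1, den1 = ss2tf(A, B, C, D)
num2, den2 = ss2tf(At, Bt, Ct, D)
print(np.allclose(num1, num2) and np.allclose(den1, den2))  # True
```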
E.g. 1: A mass M driven by a force F(t):
dv/dt = (1/M) F(t) ⋯ (1)
dx/dt = v(t) ⋯ (2)
Let x1 = x, x2 = v and u(t) = F(t). Hence
ẋ1 = x2 = 0·x1 + 1·x2 + 0·u(t)
ẋ2 = (1/M) F(t) = 0·x1 + 0·x2 + (1/M) u(t)
Hence
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix}=\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}+\begin{bmatrix}0\\ 1/M\end{bmatrix}u$$
$$y=\begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=x=x_1$$
E.g. 2: (RLC network) If we have knowledge of the initial conditions v(0), i1(0) and i2(0) and of the input signal e(t) for t ≥ 0, then the behaviour of the network is completely specified for t ≥ 0. The initial conditions v(0), i1(0), i2(0) together with the input e(t) for t ≥ 0 therefore constitute the minimal information needed, and it follows that a natural selection of state variables is x1 = v, x2 = i1, x3 = i2. The network equations are:
i1 + i2 + C dv/dt = 0 ⋯ (4)
L1 di1/dt + R1 i1 + e − v = 0 ⋯ (5)
L2 di2/dt + R2 i2 − v = 0 ⋯ (6)
From (4) we can write ẋ1 = dv/dt = −(1/C) i1 − (1/C) i2 = 0·x1 − (1/C) x2 − (1/C) x3 + 0·e
From (5) we can write ẋ2 = di1/dt = (1/L1) v − (R1/L1) i1 − (1/L1) e = (1/L1) x1 − (R1/L1) x2 + 0·x3 − (1/L1) e
From (6) we can write ẋ3 = di2/dt = (1/L2) v − (R2/L2) i2 = (1/L2) x1 + 0·x2 − (R2/L2) x3 + 0·e
In the vector-matrix form:
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix}=\begin{bmatrix}0 & -1/C & -1/C\\ 1/L_1 & -R_1/L_1 & 0\\ 1/L_2 & 0 & -R_2/L_2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}+\begin{bmatrix}0\\ -1/L_1\\ 0\end{bmatrix}e \;\;\cdots (9)\ \text{State Equation}$$
$$\begin{bmatrix}y_1\\ y_2\end{bmatrix}=\begin{bmatrix}0 & 0 & R_2\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} \;\;\cdots (10)\ \text{Output Equation}$$
E.g. 3: (Armature-controlled DC motor) Let x1 = θ (shaft position), x2 = ω (speed) and x3 = ia (armature current).
J dω/dt + fω = T (torque) = KT ia = KT x3
Or J ẋ2 + f x2 = KT x3 ⋯ (1)
Or νa − kb x2 = Ra x3 + La ẋ3 ⋯ (2)
From (1) we get ẋ2 = 0 − (f/J) x2 + (KT/J) x3 + 0 ⋯ (3)
From (2), ẋ3 = 0 − (kb/La) x2 − (Ra/La) x3 + (1/La) νa ⋯ (4)
Also ẋ1 = x2 ⋯ (5)
Y = x1 + 0 + 0 ⋯ (6)
In the vector-matrix form:
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix}=\begin{bmatrix}0 & 1 & 0\\ 0 & -f/J & K_T/J\\ 0 & -k_b/L_a & -R_a/L_a\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}+\begin{bmatrix}0\\ 0\\ 1/L_a\end{bmatrix}\nu_a$$
$$y=x_1=\begin{bmatrix}1 & 0 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}$$
(a) Phase Variable Form (Example 1): For T(s) = Y(s)/U(s) = (−5s² + 4s − 12)/(s³ + 6s² + s + 3), the state variable diagram gives
X3(s) = (1/s)[−3 X1(s) − X2(s) − 6 X3(s) + U(s)] ⋯ (3)
Y(s) = −12 X1(s) + 4 X2(s) − 5 X3(s) ⋯ (4)
From (5), dx1/dt = x2(t) ⋯ (9)
From (6), dx2/dt = x3(t) ⋯ (10)
From (7), dx3/dt = −3 x1(t) − x2(t) − 6 x3(t) + u(t) ⋯ (11)
and y(t) = −12 x1(t) + 4 x2(t) − 5 x3(t) ⋯ (12)
In the vector-matrix form (from 9, 10, 11 & 12):
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix}=\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -3 & -1 & -6\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}+\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}u$$
Or Ẋ = AX + Bu, Y = CX + Du, where (A, B, C, D) is called the quadruple.
$$\text{And}\;\; Y=\begin{bmatrix}-12 & 4 & -5\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}+\begin{bmatrix}0\end{bmatrix}u$$
If A, B, C and D are constant, the system is a time-invariant system. If A, B, C and D are functions of time, the system is called a time-varying (variable-parameter) system.
(b) Dual Phase Variable Form:
Another especially convenient way to realize a transfer function with integrators is to arrange the
signal flow graph so that all the paths and all the loops touch an output node.
Example 1 is revisited:
$$T(s)=\frac{Y(s)}{U(s)}=\frac{-5s^2+4s-12}{s^3+6s^2+s+3}=\frac{\dfrac{-5}{s}+\dfrac{4}{s^2}-\dfrac{12}{s^3}}{1+\dfrac{6}{s}+\dfrac{1}{s^2}+\dfrac{3}{s^3}}=\frac{P_1+P_2+P_3}{1-(L_1+L_2+L_3)}$$
The SVD (state variable diagram) for this example is shown in Fig. (c) below.
Fig.: (c) Realizing a transfer function in the dual phase variable form. (The output signal is derived from a single node, while the input signal is coupled to each state node.)
The Laplace Transform relations describing the system, in terms of the indicated state variables, follow from the SVD as:
s X1(s) = −6 X1(s) + X2(s) − 5 U(s); in the time domain ẋ1 = −6 x1(t) + x2(t) − 5 u(t)
s X2(s) = −X1(s) + X3(s) + 4 U(s); in the time domain ẋ2 = −x1(t) + x3(t) + 4 u(t)
s X3(s) = −3 X1(s) − 12 U(s); in the time domain ẋ3 = −3 x1(t) − 12 u(t)
Y(s) = X1(s); in the time domain y(t) = x1(t)
In vector-matrix notation:
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix}=\begin{bmatrix}-6 & 1 & 0\\ -1 & 0 & 1\\ -3 & 0 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}+\begin{bmatrix}-5\\ 4\\ -12\end{bmatrix}u \qquad (\dot{X}=AX+Bu,\; Y=CX+Du)$$
$$y=\begin{bmatrix}1 & 0 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}+\begin{bmatrix}0\end{bmatrix}u$$
It is observed that if the transfer function is strictly proper, D is always [0]. Comparing the phase variable form and the dual phase variable form: in the dual phase variable form,
C = Bᵀ of the phase variable form,
B = Cᵀ of the phase variable form (with the ordering of the states reversed).
Thus B and C interchange their roles; the two forms are duals of each other.
The matrix A has a special form:
$$A=\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -3 & -1 & -6\end{bmatrix}\ \text{in the phase variable form},\qquad A=\begin{bmatrix}-6 & 1 & 0\\ -1 & 0 & 1\\ -3 & 0 & 0\end{bmatrix}\ \text{in the dual phase variable form}.$$
This is the Companion (or Bush) form.
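A quick numerical cross-check of this duality (a sketch using scipy, with the matrices of the example above): both realizations must give the same transfer function.

```python
import numpy as np
from scipy.signal import ss2tf

# Phase-variable form
A1 = np.array([[0, 1, 0], [0, 0, 1], [-3, -1, -6]], dtype=float)
B1 = np.array([[0], [0], [1]], dtype=float)
C1 = np.array([[-12, 4, -5]], dtype=float)

# Dual phase-variable form
A2 = np.array([[-6, 1, 0], [-1, 0, 1], [-3, 0, 0]], dtype=float)
B2 = np.array([[-5], [4], [-12]], dtype=float)
C2 = np.array([[1, 0, 0]], dtype=float)

D = np.array([[0.0]])
num1, den1 = ss2tf(A1, B1, C1, D)
num2, den2 = ss2tf(A2, B2, C2, D)
print(np.allclose(num1, num2) and np.allclose(den1, den2))  # True
```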
Another example in companion form:
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix}=\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -6 & -11 & -6\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}+\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}u,\qquad Y=x_1,\; y=\begin{bmatrix}1 & 0 & 0\end{bmatrix}X+[0]u$$
(3). Derivation of Transfer Function from State Model:
Ẋ =AX + Bu ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ (1 a)
Y =CX + Du ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ (1b)
Taking Laplace Transform of both sides of Eqn. (1)
sX(s) − X(0) = AX(s) + BU(s) ⋯ (2a)
Y(s) = CX(s) + DU(s) ⋯ (2b)
From (2a), (sI − A) X(s) = X(0) + BU(s)
Or X(s) = (sI − A)⁻¹ X(0) + (sI − A)⁻¹ BU(s) ⋯ (3)
Substituting (3) in (2b):
Y(s) = C(sI − A)⁻¹ X(0) + C(sI − A)⁻¹ BU(s) + DU(s) ⋯ (4)
Assuming the initial conditions to be zero, Eqn. (4) yields:
$$T(s)=\frac{Y(s)}{U(s)}=C(sI-A)^{-1}B+D=\frac{C\,\mathrm{adj}(sI-A)\,B}{\det(sI-A)}+D \;\cdots (5)$$
While the state model is non-unique, the transfer function of the system is unique, i.e. the transfer function works out to be the same irrespective of which particular state model is used to describe the system.
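Equation (5) can be evaluated symbolically. A minimal sympy sketch for the phase-variable model of the example above:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [-3, -1, -6]])
B = sp.Matrix([[0], [0], [1]])
C = sp.Matrix([[-12, 4, -5]])
D = sp.Matrix([[0]])

# T(s) = C (sI - A)^-1 B + D
T = sp.simplify((C * (s * sp.eye(3) - A).inv() * B + D)[0, 0])
print(T)   # expect (-5*s**2 + 4*s - 12)/(s**3 + 6*s**2 + s + 3)
```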
3. Solution of State Equation:
(a) Scalar differential equation: di(t)/dt + P i(t) = Q(t) ⋯ (1). This is a non-homogeneous differential equation.
Let the integrating factor be I.F. = e^{Pt}. Then
e^{Pt} di(t)/dt + e^{Pt} P i(t) = e^{Pt} Q(t) ⋯ (2)
Or d/dt [i(t) e^{Pt}] = e^{Pt} Q(t) ⋯ (3)
Integrating both sides from 0 to t we get
i(t) e^{Pt} = ∫₀ᵗ e^{Pτ} Q(τ) dτ + K ⋯ (4)
Or i(t) = e^{−Pt} ∫₀ᵗ e^{Pτ} Q(τ) dτ + e^{−Pt} K ⋯ (5)
At t = 0, K = i(0). Hence i(t) = e^{−Pt} ∫₀ᵗ e^{Pτ} Q(τ) dτ + e^{−Pt} i(0) ⋯ (6)
(b) Vector state equation: by analogy, for Ẋ = AX + Bu ⋯ (7), the solution is
X(t) = e^{At} ∫₀ᵗ e^{−Aτ} B u(τ) dτ + e^{At} X(0) ⋯ (8)
Or X(t) = e^{At} X(0) + ∫₀ᵗ e^{At} e^{−Aτ} B u(τ) dτ ⋯ (9)
Or X(t) = e^{At} X(0) + ∫₀ᵗ e^{A(t−τ)} B u(τ) dτ ⋯ (10)
where φ(t) = e^{At} = L⁻¹[(sI − A)⁻¹] = State Transition Matrix, whereas Φ(s) = (sI − A)⁻¹ is called the Resolvent Matrix.
It is observed that knowledge of the matrix e^{At} and of the initial state X(0) of the system allows us to determine the state X(t) at any later time. Because of the key role this matrix plays in determining the transition of the system from one state to another, the matrix e^{At} is called the State Transition Matrix (STM) and is usually denoted by φ(t). For the homogeneous matrix differential equation Ẋ = AX, X(t) = φ(t) X(0) ⋯ (12)
(1). Significance of the STM:
Since the state transition matrix satisfies the homogeneous state equation, it represents
the Free Response (unforced) of the system. In other words, it governs the response that is
excited by the initial condition only. In view of the following equations:
φ(t) = L⁻¹[(sI − A)⁻¹] = e^{At} = I + At + A²t²/2! + A³t³/3! + ⋯, the STM is dependent only on the
matrix A and is therefore referred to as the State Transition Matrix of A. As the name implies,
φ(t) completely defines the transition of the states from the initial time t = 0 to any other time t
when the inputs are zero (i.e. for homogenous matrix differential equation).
(2). Properties of φ(t) for a time-invariant system, i.e. Ẋ = AX and φ(t) = e^{At}:
(i) φ(0) = e^{A×0} = I
(ii) φ(t) = e^{At} = (e^{−At})⁻¹ = [φ(−t)]⁻¹, or φ⁻¹(t) = φ(−t)
(iii) φ(t1 + t2) = e^{A(t1+t2)} = φ(t1) φ(t2) = φ(t2) φ(t1)
(iv) [φ(t)]ⁿ = φ(nt)
(v) φ(t2 − t1) φ(t1 − t0) = φ(t2 − t0) = φ(t1 − t0) φ(t2 − t1)
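These properties are easy to verify numerically. A minimal sketch using scipy.linalg.expm and an arbitrary illustrative matrix A:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # illustrative matrix
phi = lambda t: expm(A * t)                 # state transition matrix e^(At)

t1, t2 = 0.7, 1.3
print(np.allclose(phi(0), np.eye(2)))                                 # (i)  phi(0) = I
print(np.allclose(np.linalg.inv(phi(t1)), phi(-t1)))                  # (ii) phi(t)^-1 = phi(-t)
print(np.allclose(phi(t1 + t2), phi(t1) @ phi(t2)))                   # (iii)
print(np.allclose(np.linalg.matrix_power(phi(t1), 3), phi(3 * t1)))   # (iv)
```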
Example: Obtain the time response of the system described by the state Eqn.
$$\dot{X}=\begin{bmatrix}-1 & 1\\ 0 & -2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}+\begin{bmatrix}0\\ 1\end{bmatrix}r(t)\quad\text{for } r(t)=1,\ t\ge 0,\ \text{and } X(0)=\begin{bmatrix}-1 & 0\end{bmatrix}^T$$
$$\phi(t)=e^{At}=I+At+\frac{A^2t^2}{2!}+\frac{A^3t^3}{3!}+\cdots$$
$$=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}+\begin{bmatrix}-1 & 1\\ 0 & -2\end{bmatrix}t+\frac{t^2}{2!}\begin{bmatrix}-1 & 1\\ 0 & -2\end{bmatrix}\begin{bmatrix}-1 & 1\\ 0 & -2\end{bmatrix}+\cdots
=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}+\begin{bmatrix}-t & t\\ 0 & -2t\end{bmatrix}+\frac{t^2}{2!}\begin{bmatrix}1 & -3\\ 0 & 4\end{bmatrix}+\cdots$$
$$=\begin{bmatrix}1-t+\dfrac{t^2}{2!}-\cdots & t-\dfrac{3t^2}{2!}+\cdots\\[4pt] 0 & 1-2t+\dfrac{4t^2}{2!}-\cdots\end{bmatrix}
=\begin{bmatrix}e^{-t} & e^{-t}-e^{-2t}\\ 0 & e^{-2t}\end{bmatrix}=\phi(t)$$
The complete response is X(t) = φ(t) X(0) + ∫₀ᵗ φ(t−τ) B r(τ) dτ:
$$X(t)=\begin{bmatrix}e^{-t} & e^{-t}-e^{-2t}\\ 0 & e^{-2t}\end{bmatrix}\begin{bmatrix}-1\\ 0\end{bmatrix}+\int_0^t\begin{bmatrix}e^{-(t-\tau)} & e^{-(t-\tau)}-e^{-2(t-\tau)}\\ 0 & e^{-2(t-\tau)}\end{bmatrix}\begin{bmatrix}0\\ 1\end{bmatrix}d\tau
=\begin{bmatrix}-e^{-t}\\ 0\end{bmatrix}+\int_0^t\begin{bmatrix}e^{-(t-\tau)}-e^{-2(t-\tau)}\\ e^{-2(t-\tau)}\end{bmatrix}d\tau$$
$$=\begin{bmatrix}-e^{-t}\\ 0\end{bmatrix}+\begin{bmatrix}-e^{-t}+\dfrac{1}{2}e^{-2t}+\dfrac{1}{2}\\[4pt] -\dfrac{1}{2}e^{-2t}+\dfrac{1}{2}\end{bmatrix}$$
$$\text{Therefore}\quad\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix}=\begin{bmatrix}\dfrac{1}{2}-2e^{-t}+\dfrac{1}{2}e^{-2t}\\[4pt] \dfrac{1}{2}-\dfrac{1}{2}e^{-2t}\end{bmatrix}$$
Hence x1(t) = 0.5 − 2e^{−t} + 0.5 e^{−2t}
x2(t) = 0.5 − 0.5 e^{−2t} (Ans)
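This closed-form answer can be cross-checked by numerically integrating the same state equation, e.g. with scipy's lsim (a sketch; the time grid and tolerances are illustrative):

```python
import numpy as np
from scipy.signal import StateSpace, lsim

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)                      # output both states
D = np.zeros((2, 1))
sys = StateSpace(A, B, C, D)

t = np.linspace(0, 5, 501)
_, y, _ = lsim(sys, U=np.ones_like(t), T=t, X0=[-1.0, 0.0])

x1 = 0.5 - 2 * np.exp(-t) + 0.5 * np.exp(-2 * t)
x2 = 0.5 - 0.5 * np.exp(-2 * t)
print(np.allclose(y[:, 0], x1, atol=1e-3), np.allclose(y[:, 1], x2, atol=1e-3))  # True True
```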
4. Evaluation of STM
(1). Use of Power Series method
E.g. 1: Given A = [0 1; 0 −1], find the STM φ(t) = e^{At}.
$$\phi(t)=e^{At}=I+At+\frac{A^2t^2}{2!}+\cdots$$
$$=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}+\begin{bmatrix}0 & t\\ 0 & -t\end{bmatrix}+\frac{t^2}{2!}\begin{bmatrix}0 & 1\\ 0 & -1\end{bmatrix}\begin{bmatrix}0 & 1\\ 0 & -1\end{bmatrix}+\cdots
=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}+\begin{bmatrix}0 & t\\ 0 & -t\end{bmatrix}+\frac{t^2}{2!}\begin{bmatrix}0 & -1\\ 0 & 1\end{bmatrix}+\cdots$$
$$=\begin{bmatrix}1 & t-\dfrac{t^2}{2!}+\cdots\\[4pt] 0 & 1-t+\dfrac{t^2}{2!}-\cdots\end{bmatrix}
=\begin{bmatrix}1 & 1-e^{-t}\\ 0 & e^{-t}\end{bmatrix}$$
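A truncated power series can be compared against the exact matrix exponential (a numpy/scipy sketch; 20 series terms is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

A = np.array([[0.0, 1.0], [0.0, -1.0]])
t = 0.8

# Partial sum of I + At + A^2 t^2/2! + ...
series = sum(np.linalg.matrix_power(A * t, k) / factorial(k) for k in range(20))
exact = np.array([[1.0, 1.0 - np.exp(-t)], [0.0, np.exp(-t)]])

print(np.allclose(series, expm(A * t)), np.allclose(series, exact))  # True True
```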
(2). Use of the Laplace Transform (Resolvent Matrix) method:
E.g.: Given A = [0 1; −20 −9], find the STM φ(t) = e^{At}.
Solution:
$$(sI-A)=\begin{bmatrix}s & -1\\ 20 & s+9\end{bmatrix};\qquad \mathrm{adj}(sI-A)=\begin{bmatrix}s+9 & 1\\ -20 & s\end{bmatrix};\qquad |sI-A|=s^2+9s+20=(s+4)(s+5)$$
$$\text{Hence}\;(sI-A)^{-1}=\begin{bmatrix}\dfrac{s+9}{(s+4)(s+5)} & \dfrac{1}{(s+4)(s+5)}\\[6pt] \dfrac{-20}{(s+4)(s+5)} & \dfrac{s}{(s+4)(s+5)}\end{bmatrix}
=\begin{bmatrix}\dfrac{5}{s+4}-\dfrac{4}{s+5} & \dfrac{1}{s+4}-\dfrac{1}{s+5}\\[6pt] \dfrac{-20}{s+4}+\dfrac{20}{s+5} & \dfrac{-4}{s+4}+\dfrac{5}{s+5}\end{bmatrix}=\Phi(s)=\text{Resolvent Matrix}$$
$$\text{Hence}\;\phi(t)=L^{-1}[(sI-A)^{-1}]=\begin{bmatrix}5e^{-4t}-4e^{-5t} & e^{-4t}-e^{-5t}\\ -20e^{-4t}+20e^{-5t} & -4e^{-4t}+5e^{-5t}\end{bmatrix}$$
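The same φ(t) can be generated symbolically by building the resolvent and taking the inverse Laplace transform entry by entry (a sympy sketch):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, 1], [-20, -9]])

Phi_s = (s * sp.eye(2) - A).inv()          # resolvent matrix (sI - A)^-1
phi_t = Phi_s.applyfunc(
    lambda e: sp.inverse_laplace_transform(sp.apart(e, s), s, t))
# Heaviside(t) factors (equal to 1 for t > 0) may appear in some sympy versions
print(sp.simplify(phi_t.subs(sp.Heaviside(t), 1)))
# expect Matrix([[5*exp(-4*t) - 4*exp(-5*t),  exp(-4*t) - exp(-5*t)],
#                [-20*exp(-4*t) + 20*exp(-5*t), -4*exp(-4*t) + 5*exp(-5*t)]])
```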
(3). Use of the Cayley–Hamilton Method:
The theorem states that "every matrix satisfies its own characteristic equation", i.e. if the characteristic equation is q(λ) = |λI − A| = 0, then q(A) = 0. As a consequence, φ(t) = e^{At} can be expressed as a polynomial of degree (n − 1) in A:
e^{At} = h0 I + h1 A + … + h(n−1) A^(n−1)
If A possesses distinct eigenvalues λ1, λ2, …, λn, we obtain n algebraic equations
e^{λi t} = h0 + h1 λi + … + h(n−1) λi^(n−1), i = 1, 2, …, n
which are solved for h0, h1, …, h(n−1).
For repeated eigenvalues, the additional equations are obtained by differentiating with respect to λ:
d^k/dλ^k (e^{λt}) evaluated at λ = λi equals d^k/dλ^k [h0 + h1 λ + … + h(n−1) λ^(n−1)] evaluated at λ = λi.
This is illustrated through the following examples.
E.g.: If A = [1 4 10; 0 2 0; 0 0 3], determine φ(t) = e^{At}.
Solution: e^{At} = h0 I + h1 A + h2 A²
Now the eigenvalues of A are given by
$$|\lambda I-A|=\begin{vmatrix}\lambda-1 & -4 & -10\\ 0 & \lambda-2 & 0\\ 0 & 0 & \lambda-3\end{vmatrix}=(\lambda-1)(\lambda-2)(\lambda-3)=0$$
Hence, the eigenvalues are λ1 = 1, λ2 = 2, λ3 = 3.
Since there are three distinct eigenvalues,
e^t = h0 + h1 + h2
e^{2t} = h0 + 2h1 + 4h2
e^{3t} = h0 + 3h1 + 9h2
Solving the above, we get
h2 = (1/2)e^{3t} − e^{2t} + (1/2)e^t
h1 = −(3/2)e^{3t} + 4e^{2t} − (5/2)e^t
h0 = e^{3t} − 3e^{2t} + 3e^t
From A, A² = [1 12 40; 0 4 0; 0 0 9]
Hence
$$\phi(t)=e^{At}=h_0 I+h_1 A+h_2 A^2=\begin{bmatrix}h_0+h_1+h_2 & 4h_1+12h_2 & 10h_1+40h_2\\ 0 & h_0+2h_1+4h_2 & 0\\ 0 & 0 & h_0+3h_1+9h_2\end{bmatrix}$$
$$\phi(t)=\begin{bmatrix}e^{t} & 4e^{2t}-4e^{t} & 5e^{3t}-5e^{t}\\ 0 & e^{2t} & 0\\ 0 & 0 & e^{3t}\end{bmatrix}\;\text{(Ans)}$$
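A numerical spot check of this φ(t) against scipy's matrix exponential at one time instant (a sketch):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 4.0, 10.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]])
t = 0.5

e1, e2, e3 = np.exp(t), np.exp(2 * t), np.exp(3 * t)
phi = np.array([[e1, 4 * e2 - 4 * e1, 5 * e3 - 5 * e1],
                [0.0, e2, 0.0],
                [0.0, 0.0, e3]])

print(np.allclose(phi, expm(A * t)))  # True
```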
E.g. (repeated eigenvalues): For A = [0 2; −2 −4], the characteristic equation is
|λI − A| = λ(λ + 4) + 4 = 0, or λ² + 4λ + 4 = 0,
giving the repeated eigenvalue λ1 = λ2 = −2. Here e^{At} = h0 I + h1 A, with
e^{λt} = h0 + h1 λ and, differentiating with respect to λ, t e^{λt} = h1.
At λ = −2: h1 = t e^{−2t} and h0 = e^{−2t} − λ h1 = (1 + 2t) e^{−2t}.
$$\text{Hence}\;\phi(t)=e^{At}=(1+2t)e^{-2t}\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}+te^{-2t}\begin{bmatrix}0 & 2\\ -2 & -4\end{bmatrix}
=\begin{bmatrix}(1+2t)e^{-2t} & 2te^{-2t}\\ -2te^{-2t} & (1-2t)e^{-2t}\end{bmatrix}\;\text{(Answer)}$$
(4). Use of Sylvester's Interpolation Formula (constituent matrices):
$$f(A)=\sum_{i=1}^{n} f(\lambda_i)\,F_i,\qquad\text{where the (distinct) }\lambda_i\text{ are the eigenvalues of }A\text{ and }F_i=\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{A-\lambda_j I}{\lambda_i-\lambda_j}.$$
$$\text{In the case }f(A)=\phi(t)=e^{At}\text{, hence }\phi(t)=\sum_{i=1}^{n}e^{\lambda_i t}F_i.$$
E.g.: For A = [−1 1; 0 −2], |λI − A| = (λ + 1)(λ + 2) = 0, so λ1 = −1, λ2 = −2.
$$\text{Therefore }F_1=\frac{A-\lambda_2 I}{\lambda_1-\lambda_2}=\frac{A+2I}{1}=A+2I=\begin{bmatrix}1 & 1\\ 0 & 0\end{bmatrix}
\quad\text{and}\quad F_2=\frac{A-\lambda_1 I}{\lambda_2-\lambda_1}=\frac{A+I}{-1}=\begin{bmatrix}0 & -1\\ 0 & 1\end{bmatrix}$$
$$\text{Hence }\phi(t)=F_1 e^{\lambda_1 t}+F_2 e^{\lambda_2 t}=e^{-t}\begin{bmatrix}1 & 1\\ 0 & 0\end{bmatrix}+e^{-2t}\begin{bmatrix}0 & -1\\ 0 & 1\end{bmatrix}=\begin{bmatrix}e^{-t} & e^{-t}-e^{-2t}\\ 0 & e^{-2t}\end{bmatrix}\;\text{(Ans)}$$
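The constituent-matrix formula is straightforward to code for distinct eigenvalues. A small numpy sketch, applied to the A of this example:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
lam = np.linalg.eigvals(A)          # distinct eigenvalues -1, -2
t = 1.0
n = len(lam)

phi = np.zeros_like(A)
for i in range(n):
    Fi = np.eye(n)
    for j in range(n):
        if j != i:
            Fi = Fi @ (A - lam[j] * np.eye(n)) / (lam[i] - lam[j])
    phi = phi + np.exp(lam[i] * t) * Fi    # phi(t) = sum of e^(lambda_i t) F_i

print(np.allclose(phi, expm(A * t)))  # True
```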
VOL-5: K2
1. Introduction:
(1). Controllability: The state-variable formulation of an nth order linear time-invariant multivariable system is Ẋ = AX + BU, Y = CX + DU. The system is said to be completely state controllable if any initial state X(0) can be transferred to any desired final state in a finite time by some unconstrained input; the test is that the controllability matrix
Qc = [B : AB : A²B : … : A^(n−1)B]
must have rank n.
(2). Observability: The system is said to be completely observable if the initial state X(0) can be determined from knowledge of the output y(t) (and the input) over a finite time interval; the test is that the observability matrix
Qo = [Cᵀ : AᵀCᵀ : … : (Aᵀ)^(n−1)Cᵀ]
must have rank n.
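Both rank tests are one-liners in numpy. A minimal sketch, using the (A, B, C) of the pole-placement example in the next section:

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [-5, -7, -3]], dtype=float)
B = np.array([[0], [0], [1]], dtype=float)
C = np.array([[-2, 4, 3]], dtype=float)
n = A.shape[0]

# Controllability matrix Qc = [B, AB, A^2 B]
Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
# Observability matrix Qo = [C; CA; CA^2]
Qo = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(Qc), np.linalg.matrix_rank(Qo))  # 3 3 -> controllable and observable
```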
2. POLE PLACEMENT BY STATE – FEEDBACK
Fig.: Block diagram of state feedback: the control input u is formed from the state vector x through the gain matrix −K and applied to the plant (B, integrators, C).
E.g.: Consider the system Ẋ = AX + Bu, y = CX, where
$$A=\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -5 & -7 & -3\end{bmatrix},\quad B=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix},\quad C=\begin{bmatrix}-2 & 4 & 3\end{bmatrix}$$
Find the state feedback gain matrix K = [k1 k2 k3] so that the closed-loop poles are located at s = −4, −4 and −5.
Step 1: The state controllability test is to be done in order to ensure that arbitrary pole placement is possible.
So [B : AB : A²B] should have rank n = 3 (here).
$$AB=\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -5 & -7 & -3\end{bmatrix}\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}=\begin{bmatrix}0\\ 1\\ -3\end{bmatrix};\qquad
A^2B=\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -5 & -7 & -3\end{bmatrix}\begin{bmatrix}0\\ 1\\ -3\end{bmatrix}=\begin{bmatrix}1\\ -3\\ 2\end{bmatrix}$$
$$\therefore [B:AB:A^2B]=\begin{bmatrix}0 & 0 & 1\\ 0 & 1 & -3\\ 1 & -3 & 2\end{bmatrix};\quad
\det=0\times\begin{vmatrix}1 & -3\\ -3 & 2\end{vmatrix}-0\times\begin{vmatrix}0 & -3\\ 1 & 2\end{vmatrix}+1\times\begin{vmatrix}0 & 1\\ 1 & -3\end{vmatrix}=-1\neq 0;\ \text{hence the rank is } 3=n$$
Hence arbitrary pole placement is possible since the system is completely state controllable.
The closed-loop poles with state feedback are the eigenvalues of (A − BK), i.e. the roots of |sI − (A − BK)| = 0.
$$BK=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}\begin{bmatrix}k_1 & k_2 & k_3\end{bmatrix}=\begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ k_1 & k_2 & k_3\end{bmatrix}$$
$$A-BK=\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -5 & -7 & -3\end{bmatrix}-\begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ k_1 & k_2 & k_3\end{bmatrix}=\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -(5+k_1) & -(7+k_2) & -(3+k_3)\end{bmatrix}$$
$$\text{Hence }|sI-(A-BK)|=\begin{vmatrix}s & -1 & 0\\ 0 & s & -1\\ (5+k_1) & (7+k_2) & (s+3+k_3)\end{vmatrix}=0$$
$$=s[s(s+3+k_3)+(7+k_2)]+(5+k_1)+0=0$$
$$=s^2(s+3+k_3)+s(7+k_2)+(5+k_1)=0,\quad\text{i.e. }s^3+(3+k_3)s^2+(7+k_2)s+(5+k_1)=0$$
$$\text{Desired: }(s+4)(s+4)(s+5)=s^3+13s^2+56s+80$$
Equating coefficients: (3 + k3) = 13; (7 + k2) = 56; (5 + k1) = 80
∴ k 1=80−5=75
k 2=56−7=49
k 3=13−3=10
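The result can be verified by checking the eigenvalues of (A − BK) with the gains found above (a numpy sketch):

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [-5, -7, -3]], dtype=float)
B = np.array([[0], [0], [1]], dtype=float)
K = np.array([[75, 49, 10]], dtype=float)   # k1, k2, k3 found above

Acl = A - B @ K                              # closed-loop system matrix
print(np.allclose(np.poly(Acl), [1, 13, 56, 80]))   # characteristic polynomial (s+4)^2 (s+5) -> True
print(np.round(np.linalg.eigvals(Acl), 3))          # approximately -5, -4, -4
```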