Chapter 1: Linear Systems

1 Peano-Baker Series
The Peano-Baker series for the system ẋ = A(t)x is
\[
\Phi(t, t_0) = I + \int_{t_0}^{t} A(\sigma)\,d\sigma + \int_{t_0}^{t} A(\sigma)\int_{t_0}^{\sigma} A(\sigma_1)\,d\sigma_1\,d\sigma + \cdots.
\]
We show that this series converges. For σ ∈ [t_0, t], let a = max_{i,j,σ} |A_{ij}(σ)|. Then, using |(BC)_{ij}| ≤ n b c, where b = max_{ij}|B_{ij}| and c = max_{ij}|C_{ij}|, we get
\[
|\Phi(t, t_0)_{ij}| \le 1 + \frac{1}{n}\Big( na(t - t_0) + \frac{(na)^2(t - t_0)^2}{2!} + \frac{(na)^3(t - t_0)^3}{3!} + \cdots \Big) \tag{1}
\]
\[
\le 1 + \frac{1}{n}\big( \exp(na(t - t_0)) - 1 \big). \tag{2}
\]
Furthermore,
\[
\Phi(t + \Delta t, t_0) - \Phi(t, t_0) = \int_{t}^{t + \Delta t} B(\sigma)\,d\sigma, \tag{3}
\]
where
\[
B(\sigma) = A(\sigma) + A(\sigma)\int_{t_0}^{\sigma} A(\sigma_1)\,d\sigma_1 + \cdots. \tag{4}
\]
Hence
\[
\lim_{\Delta t \to 0} \frac{\Phi(t + \Delta t, t_0) - \Phi(t, t_0)}{\Delta t} = B(t) = A(t)\Phi(t, t_0), \tag{5}
\]
that is,
\[
\frac{d}{dt}\Phi(t, t_0) = A(t)\Phi(t, t_0), \qquad \Phi(t_0, t_0) = I. \tag{6}
\]
Since the series is bounded term by term by an absolutely convergent series, it converges absolutely and hence converges. Φ(t, t_0) is called the state transition matrix.
The solution is unique. Otherwise there is another solution y(t), with y(t_0) = x_0, satisfying ẏ(t) = A(t)y(t); then for z(t) = x(t) − y(t) we have ż(t) = A(t)z(t) with z(t_0) = 0, and
\[
\frac{d\|z\|^2}{dt} = z^T(A^T + A)z \le \lambda\|z\|^2,
\]
for some λ > 0. Then for η > λ we have
\[
\frac{d}{dt}\Big(\exp(-\eta t)\|z\|^2\Big) \le 0.
\]
Hence z(t) = 0 and y(t) = x(t). Therefore we have a solution that exists for all times and is unique.
Observe that
\[
\dot{x} = 1 + x^2, \qquad x(0) = 0
\]
has solution x(t) = tan t, which does not exist beyond t = π/2. Similarly,
\[
\dot{x} = \sqrt{x}, \qquad x(0) = 0
\]
has solutions x(t) = (t − c)^2/4 for t ≥ c and x(t) = 0 for t < c: a family of solutions, one for each c > 0, so the solution is not unique. For linear systems we have both existence and uniqueness.
By uniqueness, when A is constant,
\[
\Phi(t, t_0) = I + A(t - t_0) + A^2\frac{(t - t_0)^2}{2!} + \cdots + A^k\frac{(t - t_0)^k}{k!} + \cdots = \exp(A(t - t_0)).
\]
When A(t) = a(t) is a scalar, we have
\[
\int_{t_0}^{t}\int_{t_0}^{\sigma_1}\cdots\int_{t_0}^{\sigma_{k-1}} a(\sigma_1)\cdots a(\sigma_k)\,d\sigma_k\cdots d\sigma_1 = \frac{\big(\int_{t_0}^{t} a(\sigma)\,d\sigma\big)^k}{k!},
\]
so that
\[
\Phi(t, t_0) = \exp\Big(\int_{t_0}^{t} a(\sigma)\,d\sigma\Big).
\]
As some examples:
1.
\[
A = \begin{pmatrix} 0 & -a \\ a & 0 \end{pmatrix}; \qquad \exp(At) = \begin{pmatrix} \cos(at) & -\sin(at) \\ \sin(at) & \cos(at) \end{pmatrix}
\]
2.
\[
A = \begin{pmatrix} 0 & a \\ a & 0 \end{pmatrix}; \qquad \exp(At) = \begin{pmatrix} \cosh(at) & \sinh(at) \\ \sinh(at) & \cosh(at) \end{pmatrix}
\]
3.
\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}; \qquad \exp(At) = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}
\]
4.
\[
A = \begin{pmatrix} 0 & -a(t) \\ a(t) & 0 \end{pmatrix}; \qquad \Phi(t, 0) = \begin{pmatrix} \cos b(t) & -\sin b(t) \\ \sin b(t) & \cos b(t) \end{pmatrix}; \qquad b(t) = \int_0^t a(\sigma)\,d\sigma
\]
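As a quick numerical check of example 1 (a sketch using numpy and scipy; the values of a and t are arbitrary test choices):

```python
import numpy as np
from scipy.linalg import expm

a, t = 2.0, 0.7                       # arbitrary test values
A = np.array([[0.0, -a], [a, 0.0]])

# exp(At) computed numerically vs. the closed-form rotation matrix
numeric = expm(A * t)
closed = np.array([[np.cos(a * t), -np.sin(a * t)],
                   [np.sin(a * t),  np.cos(a * t)]])
assert np.allclose(numeric, closed)
print(numeric)
```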
When A and B commute, exp((A + B)t) = exp(At)exp(Bt): both sides satisfy Ẋ = (A + B)X, X(0) = I. Therefore
\[
A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}; \qquad \exp(At) = e^{t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.
\]
If A(t) and B(t) commute for all t, then Φ_{A+B}(t, t_0) = Φ_A(t, t_0)Φ_B(t, t_0).
\[
\frac{d}{dt}\det\Phi(t, t_0) = \sum_{ij} \frac{\partial \det\Phi(t, t_0)}{\partial \Phi_{ij}}\,\dot{\Phi}_{ij}.
\]
Expanding the determinant in cofactors, det Φ(t, t_0) = Σ_j c_{ij}Φ_{ij} (where C is the cofactor matrix of Φ, so ΦC^T = det Φ · I), hence
\[
\frac{d}{dt}\det\Phi(t, t_0) = \sum_{ij} c_{ij}\dot{\Phi}_{ij} = \operatorname{tr}(A\Phi C^T) = \operatorname{tr}(A)\det\Phi,
\]
which gives
\[
\det\Phi(t, t_0) = \exp\Big(\int_{t_0}^{t}\operatorname{tr}(A(\sigma))\,d\sigma\Big).
\]
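This determinant formula is easy to check numerically. The sketch below (with an arbitrarily chosen time-varying A(t)) integrates Φ̇ = A(t)Φ and compares det Φ(T, 0) with exp(∫_0^T tr A(σ)dσ):

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

def A(t):
    # arbitrary 2x2 time-varying matrix used only for the check
    return np.array([[np.sin(t), 1.0], [-1.0, 0.5 * t]])

def phi_dot(t, phi_flat):
    Phi = phi_flat.reshape(2, 2)
    return (A(t) @ Phi).ravel()

T = 1.5
sol = solve_ivp(phi_dot, (0.0, T), np.eye(2).ravel(), rtol=1e-9, atol=1e-9)
Phi_T = sol.y[:, -1].reshape(2, 2)

trace_integral, _ = quad(lambda s: np.trace(A(s)), 0.0, T)
print(np.linalg.det(Phi_T), np.exp(trace_integral))   # the two agree closely
```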
For the system ẋ = A(t)x + B(t)u(t), the solution is
\[
x(t) = \Phi(t, 0)x_0 + \int_0^t \Phi(t, \tau)B(\tau)u(\tau)\,d\tau. \tag{8}
\]
Observe that
\[
\frac{d}{dt}\int_0^t \Phi(t, \tau)B(\tau)u(\tau)\,d\tau = A(t)\int_0^t \Phi(t, \tau)B(\tau)u(\tau)\,d\tau + B(t)u(t).
\]
For example, consider the harmonic oscillator driven at frequency ω,
\[
\dot{x} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}x + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\sin\omega t.
\]
Then
\[
\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= \underbrace{\begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}}_{\exp(At)}\begin{pmatrix} x_1(0) \\ x_2(0) \end{pmatrix}
+ \int_0^t \begin{pmatrix} \cos(t-\tau) & \sin(t-\tau) \\ -\sin(t-\tau) & \cos(t-\tau) \end{pmatrix}\begin{pmatrix} 0 \\ \sin\omega\tau \end{pmatrix} d\tau.
\]
Taking x(0) = 0,
\[
\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= \frac{1}{2}\int_0^t \begin{pmatrix} \cos(-\omega\tau + t - \tau) - \cos(\omega\tau + t - \tau) \\ \sin(\omega\tau + t - \tau) - \sin(-\omega\tau + t - \tau) \end{pmatrix} d\tau.
\]
When ω = 1,
\[
\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \frac{1}{2}\begin{pmatrix} \sin t - t\cos t \\ t\sin t \end{pmatrix}.
\]
The solution grows with t; this is excitation at resonance. Linear systems are ubiquitous: electrical and mechanical systems are standard examples.
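A minimal simulation sketch of the resonant case (assuming, as above, the system ẋ_1 = x_2, ẋ_2 = −x_1 + sin t with zero initial conditions); it checks the closed-form answer:

```python
import numpy as np
from scipy.integrate import solve_ivp

def forced_oscillator(t, x):
    # x1' = x2,  x2' = -x1 + sin(t): forcing at the natural frequency
    return [x[1], -x[0] + np.sin(t)]

t_eval = np.linspace(0.0, 20.0, 400)
sol = solve_ivp(forced_oscillator, (0.0, 20.0), [0.0, 0.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)

x1_closed = 0.5 * (np.sin(t_eval) - t_eval * np.cos(t_eval))
x2_closed = 0.5 * t_eval * np.sin(t_eval)
print(np.max(np.abs(sol.y[0] - x1_closed)),
      np.max(np.abs(sol.y[1] - x2_closed)))   # both errors are tiny
```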
3 Controllability
Consider
\[
\dot{x} = B(t)u(t), \qquad x(t) = x_0 + \int_0^t B(\tau)u(\tau)\,d\tau.
\]
To where can we drive the system starting from x_0? Let
\[
W = \int_0^t B(\tau)B^T(\tau)\,d\tau.
\]
The state x(t) is reachable if and only if y = x(t) − x(0) lies in the range space of W, denoted R(W).
Proof: If y ∈ R(W), then y = Wξ = (∫_0^t B(τ)B^T(τ)dτ)ξ for some ξ. Let u(τ) = B^T(τ)ξ; then ∫_0^t B(τ)u(τ)dτ = Wξ = y, so y is reached.
If y ∉ R(W), write y = z + a, where z ∈ R(W)⊥ with z ≠ 0 and a ∈ R(W). Then ⟨z, y⟩ = ‖z‖² ≠ 0, so if y were of the form ∫_0^t B(τ)u(τ)dτ we would have ∫_0^t z^T B(τ)u(τ)dτ ≠ 0. But z^T W z = ∫_0^t z^T B(τ)B^T(τ)z dτ = 0 forces z^T B(τ) = 0 for all τ, so ∫_0^t z^T B(τ)u(τ)dτ = 0, a contradiction. Hence every reachable y lies in R(W).
At its essence the theorem says that, for a given matrix P, R(P) is the same as R(PP^T). Given y = ∫_0^T B(τ)u(τ)dτ, we can write this as y = PU: with the discrete-time approximation ∆t = T/n,
\[
P = \sqrt{\Delta t}\,\begin{pmatrix} B(n\Delta t) & B((n-1)\Delta t) & \cdots & B(\Delta t) \end{pmatrix}, \qquad
U = \sqrt{\Delta t}\,\begin{pmatrix} u(n\Delta t) \\ u((n-1)\Delta t) \\ \vdots \\ u(\Delta t) \end{pmatrix},
\]
and PP^T = ∆t Σ_k B(k∆t)B^T(k∆t) ≈ W.
Then, from the above results, x_f is reachable at time T if and only if x_f − Φ(T, 0)x_0 lies in the range space of W, where
\[
W = \int_0^T \Phi(T, \tau)B(\tau)B^T(\tau)\Phi(T, \tau)^T\,d\tau.
\]
W is called the controllability Gramian. Every point can be reached at time T if the controllability Gramian is full rank. When A and B are constant,
\[
W = \int_0^T \exp(A(T - \tau))BB^T\exp(A(T - \tau))^T\,d\tau = \int_0^T \exp(At)BB^T\exp(At)^T\,dt.
\]
Let
\[
M = \begin{pmatrix} B & AB & \cdots & A^{n-1}B \end{pmatrix}.
\]
We claim
\[
R(W) = R(MM^T).
\]
Recall that by the Cayley-Hamilton theorem A satisfies its nth-order characteristic polynomial, which lets us express A^n, and hence all higher powers, in terms of lower powers of A. Then we have
\[
\exp(At) = \sum_{i=0}^{n-1}\alpha_i(t)A^i,
\]
and
\[
Wx = \int_0^T \exp(At)BB^T\exp(At)^T x\,dt = \int_0^T \sum_{i=0}^{n-1}\alpha_i(t)A^iB\,y(t)\,dt = Mz,
\]
where y(t) = B^T exp(At)^T x and z stacks the vectors ∫_0^T α_i(t)y(t)dt. Hence R(W) ⊆ R(M).
Conversely, suppose
\[
\Big(\int_0^T \exp(At)BB^T\exp(At)^T\,dt\Big)x = 0. \quad\text{Then}\quad x^T\Big(\int_0^T \exp(At)BB^T\exp(At)^T\,dt\Big)x = 0,
\]
that is,
\[
\int_0^T \|B^T\exp(At)^T x\|^2\,dt = 0.
\]
This means ‖B^T exp(At)^T x‖ = 0 for all t. Expanding exp(At)^T in powers of t (take t small), every coefficient must vanish, so B^T x = 0, B^T A^T x = 0, …, B^T (A^T)^{n-1} x = 0; that is, M^T x = 0 and hence MM^T x = 0.
To relate range and null spaces, recall that for any matrix P, R(P)⊥ = N(P^T): if y ∈ R(P)⊥, we have y^T P x = 0 for all x, so (P^T y)^T x = 0 for all x, hence P^T y = 0 and y ∈ N(P^T). The converse follows along the same lines.
Thus we have
\[
R(W) = R(MM^T); \qquad N(W) = N(MM^T).
\]
Now if M = (B  AB  ⋯  A^{n-1}B) is full rank, the system is controllable.
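As a numerical illustration (a sketch: the pair (A, B) is randomly generated, and the Gramian is approximated by a crude Riemann sum), the rank of the controllability matrix M agrees with the rank of W:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, m, T = 4, 1, 2.0
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# Controllability matrix M = [B, AB, ..., A^(n-1) B]
M = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Gramian W = int_0^T exp(At) B B^T exp(At)^T dt, approximated by a Riemann sum
ts = np.linspace(0.0, T, 400)
dt = ts[1] - ts[0]
W = sum(expm(A * t) @ B @ B.T @ expm(A * t).T for t in ts) * dt

print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(W))   # the ranks agree
```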
Consider the system
\[
\dot{x} = \begin{pmatrix}
0 & 1 & 0 & \cdots & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & \cdots & & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & \cdots & 0 & 1 \\
-\alpha_n & -\alpha_{n-1} & \cdots & -\alpha_k & \cdots & -\alpha_1
\end{pmatrix} x +
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ \vdots \\ 0 \\ 1 \end{pmatrix} u.
\]
Then one can check that M is full rank. The above form is called the controllable canonical form.
In general, given a controllable control system
\[
\dot{x} = \underbrace{A}_{n \times n}\,x + \underbrace{b}_{n \times 1}\,u,
\]
let y = P^{-1}x (one may take P = (A^{n-1}b ⋯ Ab b), which is invertible by controllability). Then
\[
\dot{y} = P^{-1}AP\,y + P^{-1}b\,u.
\]
But note
\[
P^{-1}b = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}.
\]
\[
P^{-1}AP = \begin{pmatrix}
-\alpha_n & 1 & 0 & \cdots & \cdots & 0 \\
-\alpha_{n-1} & 0 & 1 & 0 & \cdots & 0 \\
\vdots & & & \ddots & & \vdots \\
-\alpha_k & 0 & \cdots & & \ddots & 0 \\
\vdots & & & & & 1 \\
-\alpha_1 & 0 & \cdots & 0 & \cdots & 0
\end{pmatrix},
\qquad A^n + \sum_{i=1}^{n}\alpha_i A^{i-1} = 0.
\]
This is a canonical form.
Now let q be the first row of the inverse of P. Form the matrix
\[
Q = \begin{pmatrix} q \\ qA \\ \vdots \\ qA^{n-1} \end{pmatrix}. \tag{10}
\]
Let z = Qx. Then
\[
\dot{z} = QAQ^{-1}z + Qb\,u.
\]
But note
\[
Qb = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix},
\]
and
\[
QAQ^{-1} = \begin{pmatrix}
0 & 1 & 0 & \cdots & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & \cdots & & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & \cdots & 0 & 1 \\
-\alpha_1 & -\alpha_2 & \cdots & -\alpha_k & \cdots & -\alpha_n
\end{pmatrix},
\qquad A^n + \sum_{i=1}^{n}\alpha_i A^{i-1} = 0.
\]
This is the standard controllable canonical form.
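The construction above can be carried out numerically. The following sketch, for a randomly generated (hence almost surely controllable) pair, builds P, q, and Q as defined above and prints QAQ^{-1} and Qb:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))

# P = [A^(n-1) b, ..., A b, b]  (invertible iff (A, b) is controllable)
P = np.hstack([np.linalg.matrix_power(A, n - 1 - k) @ b for k in range(n)])
q = np.linalg.inv(P)[0]                       # first row of P^{-1}

# Q = [q; qA; ...; qA^(n-1)]
Q = np.vstack([q @ np.linalg.matrix_power(A, k) for k in range(n)])

Abar = Q @ A @ np.linalg.inv(Q)               # standard controllable canonical form
bbar = Q @ b                                  # equals e_n = (0, ..., 0, 1)^T
print(np.round(Abar, 6))
print(np.round(bbar.ravel(), 6))
```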
4 Least squares theory
For m ≤ n, consider an n × m matrix A of rank m, and the problem of minimizing ‖Au − b‖ over the choice of u. If we decompose b as b̂ + b⊥, where b̂ is the projection of b on the subspace spanned by the columns of A, then the minimum is attained when Au = b̂, i.e. when A^T(Au − b) = 0, giving u = (A^T A)^{-1}A^T b. In the same way, for a full row rank matrix C the minimum-norm solution of z = CU is U = C^T(CC^T)^{-1}z.
Now return to the reachability problem. With z = x_f − Φ(T, 0)x_0 and the discrete-time approximation ∆t = T/n as before, let
\[
C = \sqrt{\Delta t}\,\begin{pmatrix} c(n\Delta t) & c((n-1)\Delta t) & \cdots & c(\Delta t) \end{pmatrix}, \qquad
U = \sqrt{\Delta t}\,\begin{pmatrix} u(n\Delta t) \\ u((n-1)\Delta t) \\ \vdots \\ u(\Delta t) \end{pmatrix},
\]
where c(τ) = Φ(T, τ)B(τ).
We want to minimize U^T U subject to z = CU. This is what we just solved: the answer is U = C^T(CC^T)^{-1}z, which in continuous time is expressed as
\[
u(\tau) = B^T(\tau)\Phi^T(T, \tau)W^{-1}z, \qquad W = \int_0^T \Phi(T, \tau)B(\tau)B^T(\tau)\Phi^T(T, \tau)\,d\tau.
\]
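A numerical sketch of this minimum-energy control for a constant pair (A, B) (a double integrator, chosen only for illustration); the Gramian is a Riemann sum and the driven state is integrated by forward Euler, so the final state is only approximately x_f:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator (test case)
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
xf = np.array([0.0, 0.0])
T, N = 1.0, 4000
ts = np.linspace(0.0, T, N)
dt = ts[1] - ts[0]

# W = int_0^T Phi(T,t) B B^T Phi(T,t)^T dt, with Phi(T,t) = exp(A(T-t))
Phis = [expm(A * (T - t)) for t in ts[:-1]]
W = sum(Ph @ B @ B.T @ Ph.T for Ph in Phis) * dt
z = xf - expm(A * T) @ x0
xi = np.linalg.solve(W, z)                 # xi = W^{-1} z

# u(t) = B^T Phi(T,t)^T W^{-1} z; integrate x' = Ax + Bu by forward Euler
x = x0.copy()
for Ph in Phis:
    u = (B.T @ Ph.T @ xi).item()
    x = x + dt * (A @ x + B.ravel() * u)
print(x)                                   # close to xf, up to discretization error
```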
5 Miscellaneous
Let X be an n × n matrix that satisfies
\[
\dot{X} = AX + XB^T.
\]
Then X(t) = Φ_A(t, 0)X(0)Φ_B(t, 0)^T. Just verify by differentiation.
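This can also be checked numerically; the sketch below (random A, B, X(0)) integrates Ẋ = AX + XB^T and compares with Φ_A(t, 0)X(0)Φ_B(t, 0)^T:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
Bmat = rng.standard_normal((n, n))
X0 = rng.standard_normal((n, n))

def xdot(t, xflat):
    X = xflat.reshape(n, n)
    return (A @ X + X @ Bmat.T).ravel()

t_final = 0.8
sol = solve_ivp(xdot, (0.0, t_final), X0.ravel(), rtol=1e-10, atol=1e-10)
X_numeric = sol.y[:, -1].reshape(n, n)
X_closed = expm(A * t_final) @ X0 @ expm(Bmat * t_final).T
print(np.max(np.abs(X_numeric - X_closed)))   # essentially zero
```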
Note that
\[
\Phi_A(T, t)\Phi_A(t, 0) = \Phi_A(T, 0).
\]
Differentiating both sides with respect to t,
\[
\frac{d}{dt}\Phi_A(T, t) = -\Phi_A(T, t)A(t),
\]
or
\[
\frac{d}{dt}\Phi_A^T(T, t) = -A^T(t)\Phi_A^T(T, t).
\]
Consider the control system
\[
\ddot{x} + \alpha\dot{x} = u.
\]
We want to stabilize the system to x = 1. Let y = x − 1; then
\[
\ddot{y} + \alpha\dot{y} = u, \qquad \alpha > 0.
\]
Let y_1 = y and y_2 = ẏ; then
\[
\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & -\alpha \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}u. \tag{11}
\]
With the proportional feedback u = −K y_1,
\[
\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & -\alpha \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\begin{pmatrix} -K & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \tag{12}
\]
\[
\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -K & -\alpha \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \tag{13}
\]
When K > 0, both eigenvalues have negative real part and we stabilize the system.
7 PI controller and Speed Control
Consider the control system
\[
\dot{x} + \alpha x = u.
\]
We want to stabilize the system to x = 1. Let y = x − 1; then
\[
\dot{y} + \alpha y + \alpha = u.
\]
Let u(t) = −Ky(t) − K_i ∫_0^t y(σ)dσ. Differentiating,
\[
\ddot{y} + \alpha\dot{y} = -K\dot{y} - K_i y.
\]
Let y_1 = y and y_2 = ẏ; then
\[
\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -K_i & -(\alpha + K) \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \tag{14}
\]
When Ki > 0 and K + α > 0 both eigenvalues have negative real part and we stabilize
the system.
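A short simulation sketch of this PI loop (gains chosen arbitrarily, subject to K_i > 0 and K + α > 0) showing x approaching 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, K, Ki = 0.5, 2.0, 1.0     # arbitrary gains with Ki > 0 and K + alpha > 0

def closed_loop(t, s):
    # s = (x, I) where I is the running integral of (x - 1)
    x, I = s
    u = -K * (x - 1.0) - Ki * I
    return [u - alpha * x, x - 1.0]      # xdot = u - alpha x,  Idot = x - 1

sol = solve_ivp(closed_loop, (0.0, 30.0), [0.0, 0.0],
                t_eval=np.linspace(0.0, 30.0, 300))
print(sol.y[0, -1])                      # approaches the set point x = 1
```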
Now consider
\[
\dot{x} + \alpha x = u + T,
\]
where T is an unknown torque. We want to stabilize the system to x = 1. Let y = x − 1; then
\[
\dot{y} + \alpha y + \alpha = u + T.
\]
Let u(t) = −Ky(t) − K_i ∫_0^t y(σ)dσ. Differentiating,
\[
\ddot{y} + \alpha\dot{y} = -K\dot{y} - K_i y,
\]
and the unknown torque disappears. Now everything is the same as above. Suppose the torque changes with time, say in discrete steps; then when we differentiate we get impulses at those discrete times. These are called disturbances. We do not worry about them now: we have a stable system.
8 PD controller
Consider the control system
\[
\ddot{x} = u.
\]
We want to stabilize the system to x = 1. Let y = x − 1; then
\[
\ddot{y} = u.
\]
Let y_1 = y and y_2 = ẏ; then
\[
\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}u. \tag{15}
\]
With u = −K y_1 − K_d y_2,
\[
\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\begin{pmatrix} -K & -K_d \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \tag{16}
\]
\[
\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -K & -K_d \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \tag{17}
\]
When K > 0 and Kd > 0 both eigenvalues have negative real part and we stabilize the
system.
Now consider
\[
\ddot{x} + \alpha\dot{x} + \omega^2 x = u.
\]
We want to stabilize the system to x = 1. Let y = x − 1; then
\[
\ddot{y} + \alpha\dot{y} + \omega^2 y + \omega^2 = u.
\]
Let u(t) = −Ky(t) − K_d ẏ(t) − K_i ∫_0^t y(σ)dσ. Differentiating,
\[
\dddot{y} + \alpha\ddot{y} + \omega^2\dot{y} = -K\dot{y}(t) - K_d\ddot{y}(t) - K_i y(t).
\]
Let y_1 = y, y_2 = ẏ, and y_3 = ÿ; then
\[
\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} =
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
-K_i & -(K + \omega^2) & -(\alpha + K_d)
\end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}. \tag{18}
\]
We choose K, K_i, K_d so that all eigenvalues have negative real part and we stabilize the system. By the Routh-Hurwitz criterion for s^3 + (α + K_d)s^2 + (K + ω^2)s + K_i, this requires K_i > 0, α + K_d > 0, K + ω^2 > 0, and (α + K_d)(K + ω^2) > K_i.
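For one arbitrary choice of gains satisfying these conditions, the closed-loop eigenvalues can be checked directly (a sketch):

```python
import numpy as np

alpha, omega = 0.4, 2.0
K, Ki, Kd = 3.0, 1.0, 2.0        # one arbitrary gain choice to test

A_cl = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [-Ki, -(K + omega**2), -(alpha + Kd)]])
eigs = np.linalg.eigvals(A_cl)
print(eigs, np.all(eigs.real < 0))   # all eigenvalues in the left half plane
```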
9.1 Servo
Consider the servo control system
\[
\ddot{x} + \alpha\dot{x} = u + T,
\]
where T is an unknown torque. We want to stabilize the system to x = 1. Let y = x − 1; then
\[
\ddot{y} + \alpha\dot{y} = u + T.
\]
Let u(t) = −Ky(t) − K_d ẏ(t) − K_i ∫_0^t y(σ)dσ. Differentiating,
\[
\dddot{y} + \alpha\ddot{y} = -K\dot{y}(t) - K_d\ddot{y}(t) - K_i y(t),
\]
and everything is the same as above; the unknown torque disappears.
10 Pole Placement
Now consider a controllable single input system
ẋ = Ax + bu
Then we can always stabilize the system and place the poles of the closed loop system
as desired. We can write the system in the controllable canonical form using y = P^{-1}x:
\[
\dot{y} = \begin{pmatrix}
0 & 1 & 0 & \cdots & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & \cdots & & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & \cdots & 0 & 1 \\
-\alpha_1 & -\alpha_2 & \cdots & -\alpha_k & \cdots & -\alpha_n
\end{pmatrix} y +
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ \vdots \\ 0 \\ 1 \end{pmatrix} u.
\]
Let
\[
u = \begin{pmatrix} \beta_1 & \beta_2 & \cdots & \beta_k & \cdots & \beta_n \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_k \\ \vdots \\ y_n \end{pmatrix}.
\]
Then the closed-loop evolution is
\[
\dot{y} = \begin{pmatrix}
0 & 1 & 0 & \cdots & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & \cdots & & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & \cdots & 0 & 1 \\
\underbrace{\beta_1 - \alpha_1}_{-\gamma_1} & \underbrace{\beta_2 - \alpha_2}_{-\gamma_2} & \cdots & \underbrace{\beta_k - \alpha_k}_{-\gamma_k} & \cdots & \underbrace{\beta_n - \alpha_n}_{-\gamma_n}
\end{pmatrix} y,
\]
with closed-loop characteristic polynomial
\[
p(s) = s^n + \gamma_n s^{n-1} + \cdots + \gamma_1.
\]
We can choose the γ_k, and hence the β_k, to place the poles as we like. Going back to the original coordinates,
\[
u = \begin{pmatrix} \beta_1 & \beta_2 & \cdots & \beta_k & \cdots & \beta_n \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_k \\ \vdots \\ y_n \end{pmatrix}
= \begin{pmatrix} \beta_1 & \beta_2 & \cdots & \beta_k & \cdots & \beta_n \end{pmatrix} P^{-1}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \\ \vdots \\ x_n \end{pmatrix}
= \begin{pmatrix} \tilde{\beta}_1 & \tilde{\beta}_2 & \cdots & \tilde{\beta}_k & \cdots & \tilde{\beta}_n \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \\ \vdots \\ x_n \end{pmatrix}.
\]
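The following sketch carries out this canonical-form pole placement numerically, using the matrix Q of Eq. (10) as the change of coordinates (the small unstable test pair and the helper name are chosen only for illustration):

```python
import numpy as np

def canonical_pole_placement(A, b, desired_poles):
    n = A.shape[0]
    # P = [A^(n-1) b, ..., A b, b]; q = first row of P^{-1}; Q = [q; qA; ...; qA^(n-1)]
    P = np.hstack([np.linalg.matrix_power(A, n - 1 - k) @ b for k in range(n)])
    q = np.linalg.inv(P)[0]
    Q = np.vstack([q @ np.linalg.matrix_power(A, k) for k in range(n)])

    # characteristic polynomial s^n + c1 s^(n-1) + ... + cn; here alpha_i = c_{n+1-i}
    c = np.poly(A)[1:]
    alpha = c[::-1]
    gamma = np.poly(desired_poles)[1:][::-1]
    beta = alpha - gamma                  # feedback gains in canonical coordinates
    return beta @ Q                       # u = (beta Q) x in the original coordinates

A = np.array([[0.0, 1.0], [2.0, 1.0]])    # unstable test pair (arbitrary)
b = np.array([[0.0], [1.0]])
btilde = canonical_pole_placement(A, b, [-2.0, -3.0])
print(np.linalg.eigvals(A + b @ btilde.reshape(1, -1)))   # approximately -2, -3
```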
10.1 Observers
Our feedback in the previous section depended on all the state variables. In practice, not all state variables may be measured. However, we can be in a situation where, although we do not see all the state variables, our system is still observable: for the controllable system ẋ = Ax + bu we measure only an output, and the transformation z = Q^{-1}x puts the system in the form ż = Rz + b̃u with observed output c̃z. We build an observer o for z,
\[
\dot{o} = Ro + \tilde{b}u + \beta\,\tilde{c}(o - z),
\]
where we are free to choose the column vector β; we call o the observer. It reconstructs the state z, with inputs given by what can be observed about z.
Then
\[
\frac{d}{dt}(z - o) = R(z - o) + \underbrace{\begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_n \end{pmatrix}}_{\beta}\tilde{c}\,(z - o). \tag{23}
\]
Now choose β so that the poles of R̄ = R + βc̃ are placed to make the error dynamics stable; then o approaches z.
Now note that the z equation is controllable, so there exists a feedback u = dz such that
\[
\dot{z} = Rz + \tilde{b}dz = \tilde{R}z \tag{24}
\]
is stable. We instead use the feedback u = d o, feeding back the observer state, so that
\[
\dot{x} = Ax + b\,d\,o. \tag{28}
\]
As we have shown, o approaches z; hence z goes to zero and x goes to zero.
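The observer-based feedback can be sketched numerically. The notes construct the gains through canonical forms; the sketch below instead uses scipy.signal.place_poles for both the state-feedback gain and the observer gain (an implementation choice, not the construction above) and simulates the coupled system:

```python
import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [2.0, 1.0]])    # arbitrary unstable test system
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])                # only x1 is measured

K = place_poles(A, b, [-2.0, -3.0]).gain_matrix           # feedback u = -K o
L = place_poles(A.T, c.T, [-6.0, -7.0]).gain_matrix.T     # observer gain

def dynamics(t, s):
    x, o = s[:2], s[2:]                   # true state and observer state
    u = -(K @ o).item()
    y = (c @ x).item()
    dx = A @ x + b.ravel() * u
    do = A @ o + b.ravel() * u + L.ravel() * (y - (c @ o).item())
    return np.concatenate([dx, do])

sol = solve_ivp(dynamics, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0])
print(sol.y[:2, -1])    # the true state is driven near zero from the output alone
```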
11 Exercises
1. Let A be an n × n matrix,
\[
A = \begin{pmatrix}
0 & 1 & 0 & \cdots & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & \cdots & & \ddots & \ddots & 0 \\
\vdots & & & & 0 & 1 \\
0 & 0 & \cdots & 0 & \cdots & 0
\end{pmatrix}.
\]
Find exp(At).
2. Let A be an n × n matrix,
\[
A = \begin{pmatrix}
1 & 1 & 0 & \cdots & \cdots & 0 \\
0 & 1 & 1 & 0 & \cdots & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & \cdots & & \ddots & \ddots & 0 \\
\vdots & & & & 1 & 1 \\
0 & 0 & \cdots & 0 & \cdots & 1
\end{pmatrix}.
\]
Find exp(At).
3. Let A be a constant matrix. Find the state transition matrix Φ(T, 0) for the system ẋ = f(t)A x, where f is a continuous function.
4. Let Ω(t) = −Ω^T(t). Show that Φ_Ω(t, 0)Φ_Ω^T(t, 0) = I.
5. Find
\[
\frac{d}{dt}\exp(A + tB)\Big|_{t=0}.
\]
6. Given n × n matrices A, B, define
\[
A \otimes B = \begin{pmatrix}
A_{11}B & \cdots & A_{1n}B \\
\vdots & \ddots & \vdots \\
A_{n1}B & \cdots & A_{nn}B
\end{pmatrix}.
\]
Show that
\[
\exp(A \otimes I + I \otimes B) = \exp(A) \otimes \exp(B).
\]
7. Let A be an n by n matrix. We say that a linear subspace of R^n is invariant under A if every vector x in that subspace has the property that Ax also belongs to that subspace. Show that
\[
\dot{x} = Ax + \underbrace{b}_{n \times 1}\,u
\]
is controllable if and only if b does not belong to a proper invariant subspace of A.
8. Consider the scalar system ẋ = x + u. Given the constraint u(t + T) = u(t), find the control that drives x(0) = 1 to x(2T) = 0 and minimizes ∫_0^{2T} u^2(t)dt.
9. Show that Q in Eq. 10 is invertible.
10. Let A(t + T) = A(t). Express Φ_A(t, 0) for t > T in terms of Φ_A(σ, 0) for σ ≤ T.