Linear Systems
Pangun Park
pgpark@cnu.ac.kr
LINEAR SYSTEMS!!
[Block diagram: force u → point mass p̈ = u → position p]
Pick states x1 = p, x2 = ṗ. Then

ẋ1 = x2
ẋ2 = u

with state vector x = [x1; x2], and the output is the position:

y = p = x1 = [1 0] x

In matrix form,

A = [0 1; 0 0],  B = [0; 1],  C = [1 0]

ẋ = Ax + Bu
y = Cx
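A quick way to get a feel for such a model is to simulate it. In MATLAB, a minimal sketch (the initial condition, input, and time span are arbitrary illustrative choices, not from the lecture):

% Double integrator in state-space form
A = [0 1; 0 0]; B = [0; 1]; C = [1 0];
x0 = [1; 0];                          % start at p = 1, at rest
u = @(t) -0.5;                        % constant force (illustrative)
[t, x] = ode45(@(t, x) A*x + B*u(t), [0 3], x0);
y = x*C';                             % output y = p
plot(t, y), xlabel('t'), ylabel('p')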
The same thing works in two dimensions:

p̈x = ux,  p̈y = uy,  u = (ux, uy),  p = (px, py)

Pick states, inputs, and outputs:

x1 = px, x2 = ṗx, x3 = py, x4 = ṗy
u1 = ux, u2 = uy
y1 = px, y2 = py

A = [0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 0 0]

B = [0 0; 1 0; 0 0; 0 1]

C = [1 0 0 0; 0 0 1 0]

ẋ = Ax + Bu
y = Cx

[Block diagram: u → (ẋ = Ax + Bu, y = Cx) → y]
But First: How can such systems be understood? And where do they come from?
ẋ = Ax + Bu
y = Cx
where A = −γ, B = c/m, C = 1 (everything scalar here).
If we care about (and can measure) the position, we get the same general equations with different matrices:

A = [0 1; 0 −γ],  B = [0; c/m],  C = [1 0]

ẋ = Ax + Bu
y = Cx
! " ! "
0 1 0 # $
A= ,B = ,C = 1 0 ,
−g /l 0 c
Not Linear!

The unicycle model, with position (x, y) and heading φ:

ẋ = v cos φ
ẏ = v sin φ
φ̇ = ω

Even with the small-angle approximations cos φ ≈ 1 and sin φ ≈ φ, we get

ẋ = v
ẏ = v φ
φ̇ = ω

which is still nonlinear (the product v φ). We need to be more systematic/clever when it comes to generating LTI models from nonlinear systems!
In general, consider a nonlinear system

ẋ = f(x, u)
y = h(x)

and linearize around an operating point: (xo, uo) → (x = xo + δx, u = uo + δu).

Taylor expansion!!

δ̇x = f(xo + δx, uo + δu)
    = f(xo, uo) + ∂f/∂x(xo, uo) δx + ∂f/∂u(xo, uo) δu + H.O.T.

where the two Jacobians define A = ∂f/∂x(xo, uo) and B = ∂f/∂u(xo, uo). Similarly,

y = h(xo + δx) = h(xo) + ∂h/∂x(xo) δx + H.O.T.

with C = ∂h/∂x(xo).

Assumptions:
f(xo, uo) = 0
h(xo) = 0
Putting it all together: given

ẋ = f(x, u)
y = h(x)
f(xo, uo) = 0
h(xo) = 0

the change of variables x = xo + δx, u = uo + δu yields the linearization

δ̇x = A δx + B δu
y = C δx

where

A = ∂f/∂x (xo, uo)
B = ∂f/∂u (xo, uo)
C = ∂h/∂x (xo)
Here x ∈ Rⁿ, u ∈ Rᵐ, y ∈ Rᵖ, with

f = [f1; f2; …; fn],  h = [h1; h2; …; hp]

so the Jacobians are formed entry-by-entry: [∂f/∂x]ij = ∂fi/∂xj, [∂f/∂u]ij = ∂fi/∂uj, and [∂h/∂x]ij = ∂hi/∂xj.
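When differentiating by hand is inconvenient, the Jacobians can be approximated numerically. A sketch under our own conventions (the helper name linearize, the central-difference scheme, and the step size h are choices made here, not part of the lecture); save it as linearize.m:

function [A, B] = linearize(f, xo, uo, h)
% Numerically linearize xdot = f(x, u) around (xo, uo)
% using central differences with step h.
n = numel(xo); m = numel(uo);
A = zeros(n, n); B = zeros(n, m);
for j = 1:n
    e = zeros(n, 1); e(j) = h;
    A(:, j) = (f(xo + e, uo) - f(xo - e, uo)) / (2*h);
end
for j = 1:m
    e = zeros(m, 1); e(j) = h;
    B(:, j) = (f(xo, uo + e) - f(xo, uo - e)) / (2*h);
end
end

For the pendulum example that follows, f = @(x, u) [x(2); 9.81*sin(x(1)) + u*cos(x(1))] (taking g/l = 9.81 as an illustrative value) and [A, B] = linearize(f, [0; 0], 0, 1e-6) reproduces A = [0 1; g/l 0] and B = [0; 1].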
Example: a pendulum with length ℓ, angle θ, and input u:

θ̈ = (g/l) sin θ + u cos θ
x1 = θ, x2 = θ̇, y = x1

f(x, u) = [x2; (g/l) sin(x1) + u cos(x1)]
h(x) = x1
(xo, uo) = (0, 0)

A = [∂f1/∂x1 ∂f1/∂x2; ∂f2/∂x1 ∂f2/∂x2] |(0,0) = [0 1; (g/l) cos(x1) 0] |(0,0) = [0 1; g/l 0]
Similarly for B:

B = [∂f1/∂u; ∂f2/∂u] |(0,0) = [0; cos(x1)] |(0,0) = [0; 1]

and for C:

C = [∂h/∂x1 ∂h/∂x2] |(0,0) = [1 0]
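These hand computations are easy to double-check in MATLAB with the Symbolic Math Toolbox, if it is available (a sketch, not part of the lecture):

syms x1 x2 u g l real
f = [x2; g/l*sin(x1) + u*cos(x1)];
h = x1;
A = subs(jacobian(f, [x1 x2]), [x1 x2 u], [0 0 0])  % -> [0 1; g/l 0]
B = subs(jacobian(f, u),       [x1 x2 u], [0 0 0])  % -> [0; 1]
C = subs(jacobian(h, [x1 x2]), [x1 x2 u], [0 0 0])  % -> [1 0]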
Example: the unicycle again:

ẋ = v cos φ
ẏ = v sin φ
φ̇ = ω

x1 = x, x2 = y, x3 = φ
y1 = x1, y2 = x2, y3 = x3
u1 = v, u2 = ω
(xo, uo) = (0, 0)

A = 0,  B = [1 0; 0 0; 0 1],  C = [1 0 0; 0 1 0; 0 0 1]

But now ẋ2 = 0 no matter what the input is: the linearized model says the robot can never move in the y-direction?!
Punchlines
Sometimes the linearizations give reasonable models, and sometimes they do not...

Despite the fact that they are only local approximations, they are remarkably useful (when they work...)
[Block diagram: u → (ẋ = Ax + Bu, y = Cx) → y]
ẋ = Ax
x(t0) = x0

If everything is scalar: x(t) = e^{a(t−t0)} x0

How do we know? Two things to check:

Initial conditions: x(t0) = e^{a(t0−t0)} x0 = x0

Dynamics: d/dt x(t) = a e^{a(t−t0)} x0 = a x
e^{at} = Σ_{k=0}^{∞} aᵏ tᵏ / k!        e^{At} = Σ_{k=0}^{∞} Aᵏ tᵏ / k!
Derivative:

d/dt Σ_{k=0}^{∞} Aᵏ tᵏ / k! = 0 + Σ_{k=1}^{∞} k Aᵏ t^{k−1} / k! = A Σ_{k=1}^{∞} A^{k−1} t^{k−1} / (k−1)! = A Σ_{k=0}^{∞} Aᵏ tᵏ / k!

d/dt e^{At} = A e^{At}
The matrix exponential plays such an important role that it has its own name:
The State Transition Matrix
e^{A(t−t0)} = Φ(t, t0)
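In MATLAB the matrix exponential is built in as expm (note: not the element-wise exp). A small sketch, using the double integrator as the example matrix:

A = [0 1; 0 0]; t = 0.5;
Phi = expm(A*t)        % -> [1 0.5; 0 1], i.e. I + A*t (this A is nilpotent)
% sanity check of d/dt e^{At} = A e^{At} via a finite difference:
dt = 1e-6;
norm((expm(A*(t + dt)) - expm(A*t))/dt - A*expm(A*t))  % -> ~0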
ẋ = Ax + Bu

Claim:

x(t) = Φ(t, t0) x(t0) + ∫_{t0}^{t} Φ(t, τ) B u(τ) dτ

Initial conditions: the integral from t0 to t0 vanishes and Φ(t0, t0) = I, so

x(t0) = Φ(t0, t0) x(t0) + ∫_{t0}^{t0} Φ(t0, τ) B u(τ) dτ = x(t0)
Dynamics:

d/dt x(t) = A Φ(t, t0) x(t0) + d/dt ∫_{t0}^{t} Φ(t, τ) B u(τ) dτ

By the Leibniz rule,

d/dt ∫_{t0}^{t} f(t, τ) dτ = f(t, t) + ∫_{t0}^{t} ∂/∂t f(t, τ) dτ

so the integral term contributes

Φ(t, t) B u(t) + ∫_{t0}^{t} A Φ(t, τ) B u(τ) dτ

and, since Φ(t, t) = I,

d/dt x(t) = A [ Φ(t, t0) x(t0) + ∫_{t0}^{t} Φ(t, τ) B u(τ) dτ ] + B u(t)

d/dt x(t) = A x + B u
For ẋ = Ax + Bu, y = Cx, the output is therefore

y(t) = C Φ(t, t0) x(t0) + C ∫_{t0}^{t} Φ(t, τ) B u(τ) dτ

where

Φ(t, τ) = e^{A(t−τ)}
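The formula is straightforward to check numerically. A sketch that approximates the convolution integral with trapz and compares against ode45 (the input u(t) = sin t, the horizon, and the initial condition are arbitrary choices):

A = [0 1; 0 0]; B = [0; 1]; x0 = [1; 0];
u = @(t) sin(t); t = 2;
tau = linspace(0, t, 2000);
integrand = zeros(2, numel(tau));
for k = 1:numel(tau)
    integrand(:, k) = expm(A*(t - tau(k))) * B * u(tau(k));
end
x_formula = expm(A*t)*x0 + trapz(tau, integrand, 2);
[~, xs] = ode45(@(s, x) A*x + B*u(s), [0 t], x0);
[x_formula xs(end, :)']   % the two columns should (approximately) match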
The first order of business is always to figure out whether the system "blows up" or not.

ẋ = ax −→ x(t) = e^{at} x(0)

For a > 0: [Plot: exponential growth of x(t) on t ∈ [0, 5]]
ẋ = ax −→ x(t) = e^{at} x(0)

For a < 0: [Plot: exponential decay of x(t) from x(0) = 1 to 0 on t ∈ [0, 5]]
ẋ = ax −→ x(t) = e^{at} x(0)

For a = 0: [Plot: x(t) constant at x(0) on t ∈ [0, 5]]
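The three plots are easy to regenerate in MATLAB (a sketch; x(0) = 1 and a ∈ {1, −1, 0} are illustrative values consistent with the original figures):

t = linspace(0, 5, 200);
for a = [1 -1 0]
    figure, plot(t, exp(a*t)*1)        % x(t) = e^{at} x(0) with x(0) = 1
    xlabel('t'), ylabel('x(t)'), title(sprintf('a = %g', a))
end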
Asymptotically Stable: x(t) → 0, ∀x(0)

Unstable: ∃x(0) : ‖x(t)‖ → ∞

For ẋ = ax −→ x(t) = e^{at} x(0):

a > 0: Unstable
a < 0: Asymptotically stable
a = 0: Critically stable
ẋ = Ax −→ x(t) = e^{At} x(0)

We cannot say that A > 0, but we can do the next best thing: eigenvalues!!

Av = λv
λ: eigenvalue ∈ C
v: eigenvector ∈ Cⁿ

The eigenvalues tell us how the matrix A "acts" in different directions (the eigenvectors).
In MATLAB:
>> eig(A)
! "
1 0
A=
0 −1
λ1 = 1, λ2 = −1
! " ! "
1 0
v1 = , v2 =
0 1
ẋ = Ax −→ x(t) = e^{At} x(0)

Unstable (if): ∃λ ∈ eig(A) : Re(λ) > 0

Asymptotically Stable (if): Re(λ) < 0, ∀λ ∈ eig(A)

Critically Stable (if): one eigenvalue is 0 and the rest have negative real part, OR two purely imaginary eigenvalues and the rest have negative real part.
A Tale of Two Pendula
! "
0 1
A=
−1 0
λ1 = j, λ2 = −j
Critically stable!
Oscillates!
! "
0 1
A=
1 0
λ1 = −1, λ2 = 1
Unstable!
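In MATLAB, checking both pendula takes two lines:

eig([0 1; -1 0])   % -> +/- 1i : critically stable, oscillates
eig([0 1;  1 0])   % -> -1, 1  : unstable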
Let’s use the stability results to solve the so-called Rendezvous Problem in swarm
robotics
The setup:
Given a collection of mobile agents who can only measure the relative displacement
of their neighbors (no global coordinates)
Problem: Have all the agents meet at the same (unspecified) position
We have already seen the two-robot case (scalar positions here, but the dimension does not matter...):

ẋ1 = u1,  ẋ2 = u2   [two robots at positions x1 and x2]

With the relative-displacement control u1 = x2 − x1 and u2 = x1 − x2, the closed loop is ẋ = [−1 1; 1 −1] x, which has eigenvalues 0 and −2.
Fact: If one eigenvalue is 0 and all others have negative real part, then the state will end
up in the so-called null-space of A
null(A) = {x : Ax = 0}
We have that
x1 → α, x2 → α ⇔ (x1 − x2 ) → 0
Rendezvous is achieved!
If there are more than two agents, they should probably aim towards the centroid of their neighbors (or something similar):

ẋi = Σ_{j∈Ni} (xj − xi)

Stacking all the agent states, this becomes

ẋ = −Lx

where L is the graph Laplacian of the network.
This system is critically stable: for a connected network, −L has a single eigenvalue at 0 and the rest have negative real parts, with

null(L) = {x : x = [α; α; …; α]}

So, by the fact above,

xi → α, ∀i ⇔ (xi − xj) → 0, ∀i, j

Rendezvous is achieved!
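A sketch of the whole story in MATLAB for N agents (the line graph, N = 5, and the initial positions are our own illustrative choices, not from the lecture):

N = 5;
Adj = diag(ones(N-1, 1), 1) + diag(ones(N-1, 1), -1);  % line-graph adjacency
L = diag(sum(Adj, 2)) - Adj;                           % graph Laplacian
eig(-L)                      % one eigenvalue at 0, the rest negative
x0 = (1:N)';                 % spread-out initial positions
[t, x] = ode45(@(t, x) -L*x, [0 10], x0);
plot(t, x)                   % all agents converge to a common position

Since this L is symmetric, the common value α is the average of the initial positions (here 3).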
So now we know that the first order of business is to stabilize the system such that all
the eigenvalues have negative real part
[Block diagram: u → (ẋ = Ax + Bu, y = Cx) → y]
Back to the point mass p̈ = u with output y = p.

A first idea: push back towards the origin, i.e. pick u = −y:

u > 0 if y < 0
u < 0 if y > 0
This is output feedback: u = −Ky = −KCx, which gives

ẋ = Ax + Bu = Ax − BKCx = (A − BKC) x

For the point mass with K = 1:

ẋ = ( [0 1; 0 0] − [0; 1] · 1 · [1 0] ) x

ẋ = [0 1; −1 0] x

eig(A − BKC) = ±j

Critically stable!
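The computation is quickly verified in MATLAB:

A = [0 1; 0 0]; B = [0; 1]; C = [1 0]; K = 1;
eig(A - B*K*C)   % -> +/- 1i, so only critically stable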
We need to use the full state information in order to stabilize this system! That is, we need to stabilize the system such that all the eigenvalues have negative real part, using state feedback!
[Block diagram: u → (ẋ = Ax + Bu, y = Cx) → y]
ẋ = Ax + Bu
u = −Kx

ẋ = Ax + Bu = Ax − BKx = (A − BK) x, where (A − BK) gives the closed-loop dynamics.

Goal: Re(eig(A − BK)) < 0

(How to pick such a K systematically: next lecture!)
For the point mass, u ∈ R and x ∈ R², so K is a 1 × 2 matrix:

K = [k1 k2]

ẋ = ( [0 1; 0 0] − [0; 1] [k1 k2] ) x

ẋ = [0 1; −k1 −k2] x
In the next lecture, we will pick gains in a systematic manner, but for now, let’s try
k1 = k2 = 1
! "
0 1
A − BK =
−1 −1
eig(A − BK ) = −0.5 ± 0.866j
Asymptotically stable!
Damped oscillations
k1 = 0.1, k2 = 1
! "
0 1
A − BK =
−0.1 −1
eig(A − BK ) = −0.1127, −0.8873
Asymptotically stable!
No oscillations
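Both gain choices are easy to verify in MATLAB:

A = [0 1; 0 0]; B = [0; 1];
eig(A - B*[1 1])      % -> -0.5 +/- 0.866i  (damped oscillations)
eig(A - B*[0.1 1])    % -> -0.1127, -0.8873 (no oscillations)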
It is clear that some eigenvalues are better than others. Some cause oscillations, some
make the system respond too slowly, and so forth...
In the next module we will see how to select eigenvalues and how to pick control laws
based on the output rather than the state.