
Nonlinear Systems and Control
Lecture # 38
Observers: Exact Observers


Observer with Linear Error Dynamics

Observer Form:

$$\dot{x} = Ax + \phi(y, u), \qquad y = Cx$$

where $(A, C)$ is observable, $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, $y \in \mathbb{R}^p$.

From Lecture # 24: An $n$-dimensional SO system

$$\dot{x} = f(x) + g(x)u, \qquad y = h(x)$$

is transformable into the observer form if and only if

$$\tau = \begin{bmatrix} h & L_f h & \cdots & L_f^{n-1} h \end{bmatrix}^T, \qquad \operatorname{rank}\, \frac{\partial \tau}{\partial x}(x) = n$$

and, with $b$ defined by

$$\frac{\partial \tau}{\partial x}\, b = \bar{b}, \qquad \bar{b} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}^T,$$


$$[\,\mathrm{ad}_f^i b,\ \mathrm{ad}_f^j b\,] = 0, \quad 0 \le i, j \le n - 1$$

$$[\,g,\ \mathrm{ad}_f^j b\,] = 0, \quad 0 \le j \le n - 2$$

Change of variables:

$$\theta_i = (-1)^{i-1}\,\mathrm{ad}_f^{i-1} b, \quad 1 \le i \le n, \qquad \frac{\partial T}{\partial x}\begin{bmatrix} \theta_1 & \theta_2 & \cdots & \theta_n \end{bmatrix} = I, \qquad z = T(x)$$
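To make these conditions concrete, here is a small symbolic check (an added sketch, not from the slides). It uses a hypothetical two-state SO system $\dot{x}_1 = x_2$, $\dot{x}_2 = -x_1^3 + u$, $y = x_1$, and assumes sympy is available.

```python
import sympy as sp

# Hypothetical two-state SO system (illustration only, not the lecture's example):
#   x1' = x2,  x2' = -x1^3 + u,  y = x1
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1**3])
g = sp.Matrix([0, 1])
h = x1
n = 2

def lie_bracket(v, w):
    # [v, w] = (dw/dx) v - (dv/dx) w
    return sp.simplify(w.jacobian(x) * v - v.jacobian(x) * w)

# tau = [h, L_f h, ..., L_f^{n-1} h]^T and its Jacobian
lfh = [h]
for _ in range(n - 1):
    lfh.append((sp.Matrix([lfh[-1]]).jacobian(x) * f)[0, 0])
tau = sp.Matrix(lfh)
J = tau.jacobian(x)
assert J.rank() == n                                  # rank condition

# b solves (d tau / dx) b = [0, ..., 0, 1]^T
b = J.solve(sp.Matrix([0] * (n - 1) + [1]))

# ad_f^k b for k = 0, ..., n-1
ad = [b]
for _ in range(n - 1):
    ad.append(lie_bracket(f, ad[-1]))

# bracket conditions for transformability into the observer form
assert all(lie_bracket(ad[i], ad[j]) == sp.zeros(n, 1)
           for i in range(n) for j in range(n))
assert all(lie_bracket(g, ad[j]) == sp.zeros(n, 1) for j in range(n - 1))
print("observer-form conditions hold for this example")
```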


$$\dot{x} = Ax + \phi(y, u), \qquad y = Cx$$

Observer:

$$\dot{\hat{x}} = A\hat{x} + \phi(y, u) + H(y - C\hat{x})$$

$$\tilde{x} = x - \hat{x} \;\;\Rightarrow\;\; \dot{\tilde{x}} = (A - HC)\,\tilde{x}$$

Design $H$ such that $(A - HC)$ is Hurwitz.
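A minimal simulation sketch of this observer (an added illustration, not from the slides): it reuses the hypothetical system from the sketch above, which is already in observer form with $\phi(y, u) = [0,\ -y^3 + u]^T$, and takes $H = [2,\ 1]^T$ so that $A - HC$ has both eigenvalues at $-1$. It assumes numpy is available.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
H = np.array([[2.0], [1.0]])              # A - HC has a double eigenvalue at -1

def phi(y, u):
    # phi(y, u) for the hypothetical system x1' = x2, x2' = -y^3 + u, y = x1
    return np.array([[0.0], [-y**3 + u]])

dt, T = 1e-3, 10.0
x = np.array([[1.0], [-1.0]])             # true state
xhat = np.zeros((2, 1))                   # observer state
for k in range(int(T / dt)):
    u = np.sin(0.5 * k * dt)              # any bounded input
    y = (C @ x).item()
    x = x + dt * (A @ x + phi(y, u))                                         # plant (forward Euler)
    xhat = xhat + dt * (A @ xhat + phi(y, u) + H * (y - (C @ xhat).item()))  # observer

# the error obeys the linear dynamics (A - HC) x~, so it should be near zero by t = 10
print("estimation error:", (x - xhat).ravel())
```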

What about feedback control? Let $u = \gamma(x)$ be a globally stabilizing state feedback control. Under output feedback, take $u = \gamma(\hat{x})$, where

$$\dot{\hat{x}} = A\hat{x} + \phi(y, u) + H(y - C\hat{x})$$


How would you analyze the closed-loop system?


$$\dot{x} = Ax + \phi\big(Cx, \gamma(x - \tilde{x})\big), \qquad \dot{\tilde{x}} = (A - HC)\,\tilde{x}$$

We know that the origin of $\dot{x} = Ax + \phi(Cx, \gamma(x))$ is globally asymptotically stable and that the origin of $\dot{\tilde{x}} = (A - HC)\,\tilde{x}$ is globally exponentially stable. What additional assumptions do we need to show that the origin of the closed-loop system is globally asymptotically stable?


Circle Criterion Design

$$\dot{x} = Ax + \phi(y, u) - L\,\psi(Mx), \qquad y = Cx$$

where $(A, C)$ is observable, $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, $y \in \mathbb{R}^p$, $Mx \in \mathbb{R}^{\ell}$, and $\psi(v) = [\psi_1(v_1), \ldots, \psi_\ell(v_\ell)]^T$.

Observer:

$$\dot{\hat{x}} = A\hat{x} + \phi(y, u) - L\,\psi\big(M\hat{x} - N(y - C\hat{x})\big) + H(y - C\hat{x})$$

$$\tilde{x} = x - \hat{x} \;\Rightarrow\; \dot{\tilde{x}} = (A - HC)\tilde{x} - L\big[\psi(Mx) - \psi\big(M\hat{x} - N(y - C\hat{x})\big)\big] = (A - HC)\tilde{x} - L\big[\psi(Mx) - \psi\big(Mx - (M + NC)\tilde{x}\big)\big]$$

Define

$$z = (M + NC)\,\tilde{x}, \qquad \tilde{\psi}(t, z) = \psi(Mx(t)) - \psi(Mx(t) - z)$$


$$\dot{\tilde{x}} = (A - HC)\tilde{x} - L\,\tilde{\psi}(t, z), \qquad z = (M + NC)\,\tilde{x}$$

$$G(s) \stackrel{\mathrm{def}}{=} (M + NC)\,[sI - (A - HC)]^{-1} L$$

[Block diagram: the transfer function G(s) in feedback with the time-varying nonlinearity ψ̃(t, ·)]

$$\tilde{\psi}(t, z) = \big[\tilde{\psi}_1(t, z_1), \ \ldots, \ \tilde{\psi}_\ell(t, z_\ell)\big]^T$$


Main Assumption: $\psi_i(\cdot)$ is a nondecreasing function:

$$(a - b)\,[\psi_i(a) - \psi_i(b)] \ge 0, \quad \forall\, a, b \in \mathbb{R}$$

If $\psi_i(v_i)$ is continuously differentiable, the condition is

$$\frac{d\psi_i}{dv_i} \ge 0, \quad \forall\, v_i \in \mathbb{R}$$

Hence

$$z_i\,\tilde{\psi}_i(t, z_i) = z_i\big[\psi_i((Mx)_i) - \psi_i((Mx)_i - z_i)\big] \ge 0 \;\;\Rightarrow\;\; z^T \tilde{\psi}(t, z) \ge 0$$
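A quick numerical spot check of this monotonicity/sector property (an added illustration, assuming numpy), using $\psi_i(v) = v^3$ as in the example later in the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
psi = lambda v: v**3                      # nondecreasing on R

# (a - b)(psi(a) - psi(b)) >= 0 for all a, b
a, b = rng.normal(size=100_000), rng.normal(size=100_000)
assert np.all((a - b) * (psi(a) - psi(b)) >= 0)

# hence z * [psi(v) - psi(v - z)] >= 0, i.e. z_i * psi_tilde_i(t, z_i) >= 0
v, z = rng.normal(size=100_000), rng.normal(size=100_000)
assert np.all(z * (psi(v) - psi(v - z)) >= 0)
print("sector property holds at all sampled points")
```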


By the circle criterion (Theorem 7.1), the origin of

$$\dot{\tilde{x}} = (A - HC)\tilde{x} - L\,\tilde{\psi}(t, z), \qquad z = (M + NC)\,\tilde{x}$$

is globally exponentially stable if

$$G(s) \stackrel{\mathrm{def}}{=} (M + NC)\,[sI - (A - HC)]^{-1} L$$

is strictly positive real.

Design Problem: Design $H$ and $N$ such that $G(s)$ is strictly positive real. Feasibility can be investigated using LMIs (Arcak & Kokotovic, Automatica, 2001).
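The feasibility question can be checked numerically through a KYP-lemma certificate: with $Y = PH$, the conditions $P \succ 0$, $P(A - HC) + (A - HC)^T P \prec 0$, $PL = (M + NC)^T$ are linear in $(P, Y, N)$. Below is a minimal feasibility sketch in that spirit (an added illustration, not code from the cited paper); it assumes cvxpy with an SDP-capable solver (e.g. SCS) is installed and uses the matrices of the example that follows.

```python
import cvxpy as cp
import numpy as np

# Data of the example that follows
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.0], [1.0]])
M = np.array([[0.0, 1.0]])
n, p, m = 2, 1, 1                          # states, outputs, nonlinearity channels

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, p))                    # Y = P H
N = cp.Variable((m, p))
eps = 1e-3

constraints = [
    P >> eps * np.eye(n),
    P @ A + A.T @ P - Y @ C - C.T @ Y.T << -eps * np.eye(n),   # Lyapunov part of KYP
    P @ L == M.T + C.T @ N.T,                                  # coupling: P L = (M + N C)^T
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
    H = np.linalg.solve(P.value, Y.value)                      # recover H = P^{-1} Y
    print("feasible:  H =", H.ravel(), "  N =", N.value.ravel())
```

The observer would then use the injection $H(y - C\hat{x})$ and the nonlinearity argument $M\hat{x} - N(y - C\hat{x})$, as in the error-dynamics derivation above.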


Example:

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1^3 - x_2^3 + u, \qquad y = x_1$$

$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad \phi = \begin{bmatrix} 0 \\ -y^3 + u \end{bmatrix}, \quad L = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad M = \begin{bmatrix} 0 & 1 \end{bmatrix}$$

$$\psi(v) = v^3, \qquad \frac{d\psi}{dv} = 3v^2 \ge 0$$

$$H = \begin{bmatrix} h_1 \\ h_2 \end{bmatrix}, \qquad M + NC = \begin{bmatrix} N & 1 \end{bmatrix}$$

$$G(s) = (M + NC)\,[sI - (A - HC)]^{-1} L = \frac{s + N + h_1}{s^2 + h_1 s + h_2}$$
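A small symbolic cross-check of this transfer function (an added illustration; it assumes sympy is available):

```python
import sympy as sp

s, h1, h2, N = sp.symbols('s h1 h2 N')
A = sp.Matrix([[0, 1], [0, 0]])
C = sp.Matrix([[1, 0]])
L = sp.Matrix([0, 1])
M = sp.Matrix([[0, 1]])
H = sp.Matrix([h1, h2])

# G(s) = (M + N C) [sI - (A - HC)]^{-1} L
G = ((M + sp.Matrix([[N]]) * C) * (s * sp.eye(2) - (A - H * C)).inv() * L)[0, 0]
print(sp.simplify(G))    # simplifies to (s + N + h1)/(s**2 + h1*s + h2)
```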


From Exercise 6.7, G(s) is SPR if and only if


$$h_1 > 0, \qquad h_2 > 0, \qquad 0 < N + h_1 < h_1$$

Take

$$h_1 = 2, \qquad h_2 = 1, \qquad N = -\tfrac{3}{2} \;\;\Rightarrow\;\; G(s) = \frac{s + \tfrac{1}{2}}{(s + 1)^2}$$

Observer:

$$\dot{\hat{x}}_1 = \hat{x}_2 + 2\,(y - \hat{x}_1)$$

$$\dot{\hat{x}}_2 = -y^3 + u - \big[\hat{x}_2 + \tfrac{3}{2}\,(y - \hat{x}_1)\big]^3 + (y - \hat{x}_1)$$
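A quick numerical check of this particular design (an added illustration, assuming numpy): it compares the state-space expression for $G(s)$ with the closed form above and verifies $\operatorname{Re} G(j\omega) > 0$ along the imaginary axis.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.0], [1.0]])
M = np.array([[0.0, 1.0]])
H = np.array([[2.0], [1.0]])               # h1 = 2, h2 = 1
N = np.array([[-1.5]])                     # N = -3/2

Acl = A - H @ C
MNC = M + N @ C

def G(s):
    return (MNC @ np.linalg.inv(s * np.eye(2) - Acl) @ L)[0, 0]

w = np.logspace(-3, 3, 400)
vals = np.array([G(1j * wk) for wk in w])

assert np.allclose(vals, (1j * w + 0.5) / (1j * w + 1.0) ** 2)   # G(s) = (s + 1/2)/(s + 1)^2
assert np.all(vals.real > 0)                                     # frequency-domain part of SPR
print("G(jw) has positive real part at all sampled frequencies")
```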


What about feedback control? Let $u = \gamma(x)$ be a globally stabilizing state feedback control. Closed-loop system under output feedback $u = \gamma(\hat{x})$:

$$\dot{x} = Ax + \phi\big(y, \gamma(x - \tilde{x})\big) - L\,\psi(Mx)$$

$$\dot{\tilde{x}} = (A - HC)\tilde{x} - L\,\tilde{\psi}(t, z), \qquad z = (M + NC)\,\tilde{x}$$

How would you analyze the closed-loop system?


$\tilde{\psi}(t, z)$ depends on $x(t)$. How would you show that $\tilde{\psi}$ is well defined?

What about the effect of uncertainty?
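As a closing illustration (added, not part of the slides), here is a minimal simulation sketch of output feedback with this observer. The state feedback $\gamma(x) = -x_2$ is a hypothetical choice; one can check with $V = x_1^4/4 + x_2^2/2$ and the invariance principle that it globally stabilizes the example system. The observer gains are those designed above, and numpy is assumed.

```python
import numpy as np

# Plant (example system):  x1' = x2,  x2' = -x1^3 - x2^3 + u,  y = x1
def f_plant(x, u):
    return np.array([x[1], -x[0]**3 - x[1]**3 + u])

# Circle-criterion observer with h1 = 2, h2 = 1, N = -3/2:
#   xh1' = xh2 + 2 (y - xh1)
#   xh2' = -y^3 + u - (xh2 + 1.5 (y - xh1))^3 + (y - xh1)
def f_obs(xh, y, u):
    e = y - xh[0]
    return np.array([xh[1] + 2.0 * e,
                     -y**3 + u - (xh[1] + 1.5 * e)**3 + e])

dt, T = 1e-3, 15.0
x = np.array([1.0, -0.5])                 # plant state
xh = np.zeros(2)                          # observer state
for _ in range(int(T / dt)):
    u = -xh[1]                            # hypothetical output feedback u = gamma(xhat) = -xhat_2
    y = x[0]
    x = x + dt * f_plant(x, u)
    xh = xh + dt * f_obs(xh, y, u)

# the estimation error converges quickly; the plant state decays toward the origin more slowly
print("state:", x, "   estimation error:", x - xh)
```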

