Exercises in Nonlinear Control Systems
Latest update
Introduction
The exercises are divided into problem areas that roughly match the lecture
schedule. Exercises marked “PhD” are harder than the rest. Some exercises
require a computer with software such as Matlab and Simulink.
Many people have contributed to the material in this compendium.
Apart from the authors, exercises have been suggested by Lennart Andersson, Anders Robertsson and Magnus Gäfvert. Exercises have also shamelessly been borrowed (=stolen) from other sources, mainly from Karl Johan Åström's compendium in Nonlinear Control.
1. Nonlinear Models and Simulation
EXERCISE 1.1[KHALIL, 1996]
The nonlinear dynamic equation for a pendulum is given by

ml θ̈(t) = −mg sin θ(t) − kl θ̇(t),
where l is the length of the pendulum, m is the mass and θ is the angle
subtended by the rod and the vertical axis through the pivot point, see
Figure 1.1.
(a) Choose appropriate state variables and write down the state equa-
tions.
(b) Find all equilibria of the system.
(c) Linearize the system around the equilibrium points, and determine
if the system equilibria are locally asymptotically stable.
EXERCISE 1.2

I q̈1 + Mg sin q1 + k(q1 − q2) = 0
J q̈2 − k(q1 − q2) = u,
EXERCISE 1.3

M δ̈ = P − D δ̇ − η1 Eq sin δ
τ Ėq = −η2 Eq + η3 cos δ + EFD,
EXERCISE 1.4
(Figure: feedback connection of the linear system C(sI − A)⁻¹B, driven by the reference r with output y, and the static nonlinearity ψ(t, y).)
EXERCISE 1.5

(Figure: phase-locked loop with input θi, the nonlinearity sin(⋅), the linear part G(s), and an integrator 1/s producing θ0.)

ż = Az + B sin e
ė = −Cz
EXERCISE 1.6
(Figure: mass 1/(ms) in feedback with a PID controller, reference xr, position x, velocity v, and friction force F.)

F(v) = F0 sign(v)
Let xr = 0 and rewrite the system equations into feedback connection form.
EXERCISE 1.7
(Figure: control loop with feedforward Gff, process Gp, feedback Gfb and anti-windup filter Gaw; r is the reference, u the controller output, v the saturated input, and y the output.)
EXERCISE 1.8
Consider the model of a motor with a nonlinear valve in Figure 1.7. Assume
EXERCISE 1.9
Is the following system (a controlled nonlinear spring) nonlinear locally
controllable around x = ẋ = u = 0?
ẍ = − k1 x − k2 x3 + u.
EXERCISE 1.10 (PhD)
The equations for the unicycle in Figure 1.8 are given by
Figure 1.8 The “unicycle” used in Exercise 1.10.
ẋ = u1 cos θ
ẏ = u1 sin θ
θ˙ = u2 ,
where ( x, y) is the position and θ the angle of the wheel. Is the system
nonlinear locally controllable at (0, 0, 0)? (Hint: Linearization gives no in-
formation; use the definition directly).
EXERCISE 1.11 (PhD)
The system in Figure 1.9 is known as the “rolling penny”. The equations
Figure 1.9 The “rolling penny” used in Exercise 1.11.
are given by
ẋ = u1 cos θ
ẏ = u1 sin θ
θ˙ = u2
Ψ̇ = u1 .
EXERCISE 1.12
Determine if the following system is nonlinear locally controllable at ( x0 , u0 ) =
(0, 0)
EXERCISE 1.13
Simulate the system G(s) = 1/(s + 1) with a sinusoidal input u = sin ωt. Find the amplitude of the stationary output for ω = 0.5, 1, 2. Compare with the theoretical value |G(iω)| = 1/√(1 + ω²).
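A minimal Matlab sketch of this experiment (not part of the original compendium; it only assumes the standard ode45 solver):

  % Simulate dy/dt = -y + sin(w*t) and compare the stationary amplitude
  % with the theoretical value 1/sqrt(1 + w^2).
  for w = [0.5 1 2]
    f = @(t, y) -y + sin(w*t);
    [t, y] = ode45(f, [0 50], 0);        % long horizon to pass the transient
    ystat = y(t > 40);                   % keep only the stationary part
    fprintf('w = %4.1f  amplitude = %5.3f  theory = %5.3f\n', ...
            w, max(abs(ystat)), 1/sqrt(1 + w^2));
  end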
EXERCISE 1.14
Consider the pendulum model given in Exercise 1.1.
(a) Make a simulation model of the system in Simulink, using for in-
stance m = 1, g = 10, l = 1, k = 0.1. Simulate the system from
various initial states. Is the system stable? Is the equilibrium point
unique? Explain the physical intuition behind your findings.
(b) Use the function linmod in Matlab to find the linearized models for
the equilibrium points. Compare with the linearizations that you de-
rived in Exercise 1.1.
(c) Use a phase plane tool (such as pptool) to construct the phase plane
of the system. Compare with the results from (a).
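A hedged Matlab sketch for part (a), using a plain ODE solver instead of Simulink (the initial state below is only an example):

  m = 1; g = 10; l = 1; k = 0.1;
  f = @(t, x) [x(2); -g/l*sin(x(1)) - k/m*x(2)];
  [t, x] = ode45(f, [0 20], [2; 0]);     % start at 2 rad, zero velocity
  plot(t, x(:,1)), xlabel('t'), ylabel('\theta')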
EXERCISE 1.15
Simulate the example from the lecture with two tanks, using the models
ḣ = (u − q)/A
q = a√(2gh),
where h is the liquid level, u is the inflow to the tank, q the outflow, A
the cross section area of the tank, a the area of the outflow and g the
acceleration due to gravity, see Figure 1.10. Use a step input flow. Make a
step change in u from u = 0 to u = c, where c is chosen in order to give a
stationary value of the height, h = 0.1. Make a step change from u = c to
u = 0. Is the process linear? Linearize the system around h1 = h2 = 0.1.
Use A1 = A2 = 3 ⋅ 10−3, a1 = a2 = 7 ⋅ 10−6. What are the time constants
of the linearized system?
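A simulation sketch for one possible two-tank configuration (two identical tanks in series; the inflow c is computed from the desired stationary level, all other choices are assumptions):

  A = 3e-3; a = 7e-6; g = 9.81;
  q = @(h) a*sqrt(2*g*max(h, 0));        % outflow, guarded against h < 0
  c = a*sqrt(2*g*0.1);                   % inflow giving h = 0.1 in stationarity
  f = @(t, h) [(c - q(h(1)))/A; (q(h(1)) - q(h(2)))/A];
  [t, h] = ode45(f, [0 2000], [0; 0]);
  plot(t, h)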
(Figure 1.10: Simulink diagram of the tank system, built from Sum, Gain 1/A, Integrator and Fcn blocks, with two tank subsystems connected in series.)
EXERCISE 1.16
Simulate the system with the the oscillating pivot point (the “electric hand-
saw”), see Figure 1.11. Use the equation
θ̈(t) = (1/l) (g + aω² sin ωt) θ(t).
Assume a = 0.02m and ω = 2π ⋅ 50 for a hand-saw. Use simulation to find
for what length l the system is locally stable around θ = θ˙ = 0 (Note:
asymptotic stability is not required).
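One way to carry out the simulation study in Matlab (a sketch; the horizon and initial state are arbitrary choices):

  g = 9.81; a = 0.02; w = 2*pi*50;
  l = 0.1;                                % vary l to find the stable range
  f = @(t, x) [x(2); (g + a*w^2*sin(w*t))/l * x(1)];
  [t, x] = ode45(f, [0 1], [1e-3; 0]);    % small initial angle
  plot(t, x(:,1))                         % bounded response indicates local stability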
EXERCISE 1.17
The Lorenz equations

d/dt x1 = σ(x2 − x1)
d/dt x2 = r x1 − x2 − x1 x3
d/dt x3 = x1 x2 − b x3,    σ, r, b > 0,

where σ, r, b are constants, are often used as an example of chaotic motion.
(a) Determine all equilibrium points.
(b) Linearize the equations around x = 0 and determine for what σ , r, b
this equilibrium is locally asymptotically stable.
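A quick numerical look at the chaotic regime (the values σ = 10, r = 28, b = 8/3 are the classical choices, used here only as an illustration):

  s = 10; r = 28; b = 8/3;
  f = @(t, x) [s*(x(2) - x(1)); r*x(1) - x(2) - x(1)*x(3); x(1)*x(2) - b*x(3)];
  [t, x] = ode45(f, [0 50], [1; 1; 1]);
  plot3(x(:,1), x(:,2), x(:,3))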
2. Linearization and Phase-Plane Analysis
EXERCISE 2.1[KHALIL, 1996]
For each of the following systems, find and classify all equilibrium points.
(a) ẋ1 = x2
ẋ2 = − x1 + x13 /6 − x2
(b) ẋ1 = − x1 + x2
ẋ2 = 0.1x1 − 2x2 − x12 − 0.1x13
(d) ẋ1 = x2
ẋ2 = − x1 + x2 (1 − 3x12 − 2x22 )
(e) ẋ1 = − x1 + x2 (1 + x1 )
ẋ2 = − x1 (1 + x1 )
EXERCISE 2.2

ẋ1 = a x1 − x1 x2
ẋ2 = b x1² − c x2
EXERCISE 2.3

(a) ẋ1 = x2
ẋ2 = x1 − 2 tan−1 ( x1 + x2 )
(b) ẋ1 = x2
ẋ2 = − x1 + x2 (1 − 3x12 − 2x22 )
EXERCISE 2.4
Saturations constitute a severe restriction for stabilization of systems. Figure 2.1 shows three phase portraits, each corresponding to one of the following linear systems under saturated feedback control.
(a) ẋ1 = x2
ẋ2 = x1 + x2 − sat(2x1 + 2x2 )
(b) ẋ1 = x2
ẋ2 = − x1 + 2x2 − sat(3x2 )
(c) ẋ1 = x2
ẋ2 = −2x1 − 2x2 − sat(− x1 − x2 )
Figure 2.1 Phase portraits for saturated linear systems in Exercise 2.4
EXERCISE 2.5

(a) ẋ1 = −x2
ẋ2 = x1 − x2 (1 − x12 + 0.1x14 )
(b) ẋ1 = x2
ẋ2 = x1 + x2 − 3 tan−1 ( x1 + x2 )
Figure 2.2 Phase portraits for Exercise 2.5(a) to the left, and Exercise 2.5(b) to
the right.
EXERCISE 2.6
The following system
u = −K y
(a) For all values of the gain K, determine the equilibrium points of the
closed loop system.
(b) Determine the equilibrium character of the origin for all values of
the parameter K. Determine in particular for what values the closed
loop system is (locally) asymptotically stable.
EXERCISE 2.7

ẋ1 = x2
ẋ2 = P/M − (D/M) x2 − (η1/M) Eq sin x1.
EXERCISE 2.8
Linearize the system

ẋ1 = x2 + x1(1 − x1² − x2²)
ẋ2 = −x1 + x2(1 − x1² − x2²)
around the trajectory ( x1 , x2 ) = (sin t, cos t). Also determine whether this
limit cycle is stable or not.
EXERCISE 2.9
Linearize the ball-on-beam equation
(7/5) ẍ − x φ̇² = g sin φ + (2r/5) φ̈,
EXERCISE 2.10
Use a simple trigonometric identity to help find a nominal solution corresponding to u(t) = sin(3t), y(0) = 0, ẏ(0) = 1 for the equation

ÿ(t) + (4/3) y³(t) = −(1/3) u(t).
EXERCISE 2.11
The equations for motion of a child on a swing are given by
d/dt ( m l² dφ/dt ) + m g l sin φ = 0
Here φ (t) is the angle of the swing, m the mass, and l (t) the distance of the
child to the pivot of the swing. The child can excite the swing by changing
l (t) by moving its center of mass.
(a) Draw phase diagrams for two different constant lengths l1 and l2.
(b) Assume that it is possible to quickly change between the lengths l1 and l2. Show how to jump between the two different systems to increase the amplitude of the swing.
Hint: During constant l the energy in the system is constant. When l(t) changes quickly, φ will be continuous but dφ(t)/dt will change in such a way that the angular momentum m l² dφ/dt is continuous.
3. Lyapunov Stability
EXERCISE 3.1
Consider the scalar system
ẋ = ax3
V ( x) = x4
EXERCISE 3.2
Consider the pendulum equation
ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2.
(a) Assume zero friction, i.e. let k = 0, and show that the origin is stable.
(Hint. Show that the energy of the pendulum is constant along all
system trajectories.)
(b) Can the pendulum energy be used to show asymptotic stability for the
pendulum with non-zero friction, k > 0? If not, modify the Lyapunov
function to show asymptotic stability of the origin.
EXERCISE 3.3
Consider the system
ẍ + d ẋ3 + kx = 0,
V(x) = (k x² + ẋ²)/2
EXERCISE 3.4
Consider the linear system
ẋ = Ax = [ 0 −1; 1 −1 ] x
(a) Compute the eigenvalues of A and verify that the system is asymp-
totically stable
(b) From the lectures, we know that an equivalent characterization of
stability can be obtained by considering the Lyapunov equation
AT P + PA = − Q
ẋ (t) = Ax(t), t ≥ 0,
(b) The system (3.1) is however stable if the eigenvalues of A(t) + AT (t)
have negative real parts for all t ≥ 0. Prove this by showing that
V = x T x is a Lyapunov function.
EXERCISE 3.6
A student is handed the system

ẍ + 2x/(1 + x²)² = 0
and is asked to determine whether or not the equation is stable. The student thinks “this is an undamped mass–spring system – the spring is nonlinear with a spring constant of 2/(1 + x²)²”. The student re-writes the system as
ẋ1 = x2
ẋ2 = −2x1/(1 + x1²)²
where the continuous functions f 1 and f 2 have the same sign as their
arguments, i.e. xi f i ( xi ) ≥ 0 (note that f i(0) = 0). Show that almost all
trajectories of the system tend towards the invariant set x12 + 2x22 = 4
independently of the explicit expressions of f 1 and f 2. However, this set
will NOT be a limit cycle as the system will have singular points belonging
to this set. What equilibrium points does the system have? Show that
x12 + 2x22 = 4 is an attractive invariant set. How will the trajectories of the
system behave?
Extra: Simulate the system.
(Remark. Compare with Example 3.13 in the book by Slotine and Li, where the invariant set is a limit cycle.)
EXERCISE 3.8
Consider the system
ẋ1 = x2
ẋ2 = −2x1 − 2x2 − 4x13 .
Use the Lyapunov function candidate V(x) = 4x1² + 2x2² + 4x1⁴ to show that
(a) the system is globally stable around the origin.
(b) the origin is globally asymptotically stable.
EXERCISE 3.9
Consider the system
ÿ = sat(−3 ẏ − 2 y).
ẍ = u
ẍ = sat(u).
(d) For PhD students. Do the results in (c) hold for the triple integrator

d³x/dt³ = sat(u)? (3.2)
EXERCISE 3.10

ẋ1 = x2
ẋ2 = − x1 − max(0, x1 ) ⋅ max(0, x2 )
EXERCISE 3.11
Consider the nonlinear system
ẋ1 = − x1 + x2
ẋ2 = − x1 − x2 + g( x)
(a) Show that V ( x) = 0.5x T x is a Lyapunov function for the system when
g( x) = 0.
(b) Use this Lyapunov function to show that the system is globally asymp-
totically stable for all g( x) that satisfy
g( x) = g( x2 )
Ω = { x : V ( x) ≤ γ }
ẋ1 = − x2
ẋ2 = x1 + ( x12 − 1) x2 .
ẋ1 = x2
ẋ2 = x1 − sat(2x1 + x2 ).
x1 x2 = c
ẋ = f ( x, u), x ∈ IRn, u ∈ IR
ẋ = f ( x, u) = φ ( x) + ψ ( x)u,
AP + PAT − bb T < 0.
(Hint. Some LQR theory may come in handy when proving necessity. In particular, if the system is stabilizable, what can you say about the feedback law u = −kx that you obtain from the LQR cost ∫₀^∞ (xᵀx + uᵀu) dt?)
EXERCISE 3.15
It can sometimes be convenient to re-write nonlinearities in a way that
is more easy to manipulate. Consider the single input, open loop stable,
linear system under saturated feedback
ẋ = Ax + Bsat(u)
u = − K x.
ẋ = Ax + µ ( x) B K x,
where 0 < µ ( x) ≤ 1.
(b) Assume P > 0 is such that
x T ( AT P + PA) x ≤ 0, ∀ x
x T (( A − B K )T P + P( A − B K )) x ≤ 0, ∀ x
guarantees the closed loop system in (a) to be stable. (The nice thing
about this formulation is that it is possible to construct efficient nu-
merical methods for simultaneously finding both feedback gains K
and Lyapunov matrix P).
ẋ = Ax + f ( x) + Bsat(u)
u = − K x.
kf < λmin(Q)/(2 λmax(P)),
EXERCISE 3.16
In general, it is non-trivial to find a Lyapunov function for a given nonlinear
system. Several different methods have been derived for specific classes of
systems. In this exercise, we will investigate the following method, known
as Krasovskii’s method.
Consider systems of the form

ẋ = f(x)

with f(0) = 0. Assume that

P (∂f/∂x)(x) + (∂f/∂x)ᵀ(x) P ≤ −I

for all x ∈ IRn, and some matrix P = Pᵀ > 0. Then, the origin is globally asymptotically stable with V(x) = fᵀ(x) P f(x) as Lyapunov function.
Prove the validity of the method in the following steps.
(a) Verify that f ( x) can be written as
f(x) = ∫₀¹ (∂f/∂x)(σx) ⋅ x dσ.
x T P f ( x) + f T ( x) Px ≤ − x T x, ∀ x ∈ IRn
EXERCISE 3.17
Use Krasovskii’s method to justify Lyapunov’s linearization method.
EXERCISE 3.18

(Figure: feedback loop with gain K, static nonlinearity g(e), and the linear blocks 1/(s + 1) and 1/s with states x2 and x1, under unit negative feedback.)
x2 as indicated in the figure. Assume that the reference value is zero. The
system equations can then be written as
ẋ1 = x2
ẋ2 = − x2 + K g( e) = − x2 + K g(− x1 ).
4. Input-Output Stability
EXERCISE 4.1
The norms used in the definitions of stability need not be the usual Euclidean norm. If the state-space is of finite dimension n (i.e., the state vector
has n components), stability and its type are independent of the choice of
norm (all norms are “equivalent”), although a particular choice of norm
may make analysis easier. For n = 2, draw the unit balls corresponding to
the following norms.
(a) ‖x‖² = x1² + x2² (Euclidean norm)
(b) ‖x‖² = x1² + 5x2²
(c) ‖x‖ = |x1| + |x2|
(d) ‖x‖ = sup(|x1|, |x2|)
Recall that a “ball” B(x0, R), of center x0 and radius R, is the set of x such that ‖x − x0‖ ≤ R, and that the unit ball is B(0, 1).
EXERCISE 4.2
Consider a linear time invariant system G (s) interconnected with a static
nonlinearity ψ ( y) in the standard form. Compare the Nyquist, Circle, Small
Gain, and Passivity Criterion with respect to the following issues.
(a) What are the restrictions that must be imposed on ψ ( y) in order to
apply the different stability criteria?
(b) What restrictions must be imposed on the Nyquist curve of the linear
system in order to apply the stability criteria above?
EXERCISE 4.3
Consider the static nonlinearities shown in Figure 4.1. For each nonlinear-
ity,
(a) determine the minimal sector [α , β ],
(b) determine the gain of the nonlinearity,
(c) determine if the nonlinearity is passive.
is shown in Figure 4.2 together with a circle with center in 1.5 and with
radius 2.85.
G(s) = 4/((s − 1)(s/3 + 1)(s/5 + 1))
Figure 4.4 The Bode and Nyquist curves for the system in Exercise 4.6ab
(b) G(s) = 1/((s + 1)(s + 2))
EXERCISE 4.7
Consider the linear time-varying system
ẋ(t) = ( A + B f (t) C) x,
Figure 4.5 Nyquist curves for transfer function G (s) in Exercise 4.7.
(d) For PhD students. Let G(s) be a transfer function matrix with m inputs and n outputs. Show that if A is Hurwitz, ‖f(t)‖ ≤ 1 ∀t, and sup_{ω∈IR} σmax[C(jωI − A)⁻¹B] < 1, then the system is BIBO stable.
EXERCISE 4.8
The singular values of a matrix A are denoted σ i( A).
(a) Use Matlab to compute σ ( A) for
A = [ 1 10; 0 1 ].
σ1(A) = sup_x ‖Ax‖/‖x‖.
EXERCISE 4.9
In the previous chapter, we have seen how we can use Lyapunov functions
to prove stability of systems. In this exercise, we shall see how another type of auxiliary functions, called storage functions, can be used to assess passivity of a system.
Consider the nonlinear system
ẋ = f ( x, u)
y = g( x, u) (4.1)
with zero initial conditions, x(0) = 0. Show that if we can find a storage
function V ( x, u) with the following properties
• V(x, u) is continuously differentiable.
• V(0) = 0 and V(x, u) ≥ 0 for x ≠ 0.
• uᵀ y ≥ V̇(x, u).
then, the system (4.1) is passive.
EXERCISE 4.10
Let P be the solution to
AT P + PA = − I ,
where A is an asymptotically stable matrix. Show that G (s) = B T P(sI −
A)−1 B is passive. (Hint. Use the function V ( x) = x T Px.)
θ˙ = ω
ω̇ = −ω + η ,
where θ is the shaft angle and η is the input voltage. The dynamic
controller
ż = 2(θ − z) − sat(θ − z)
η = z − 2θ
is used to control the shaft position. Use any method you like to prove
that θ (t) and ω (t) converge to zero as t → ∞.
EXERCISE 4.12
(a) Let uc (t) be an arbitrary function of time and let H (⋅) be a passive
system. Show that
is passive from u to y.
(b) Show that the following adaptive system is stable
e(t) = G(s) { (θ(t) − θ0) uc(t) }
θ̇(t) = −γ uc(t) e(t),
EXERCISE 4.13 (PhD)
Let f be a static nonlinearity in the sector (0, ∞).
(a) Show that the system ẋ = γ x + e, y = f ( x) is passive from e to y if
γ ≥ 0.
(b) Show that if the Popov criterion
(Figure: loop transformation with the blocks 1/(1 + γ s), f(⋅) and (1 + γ s) G(s).)
(c) How does the Popov criterion change if f is in the sector (α , β ) in-
stead?
(d) Figure 4.7 shows the Nyquist curve and the Popov curve (Re G (iω ), ω Im G (iω ))
for the system
G(s) = (s + 1)/(s(s + 0.1)(s² + 0.5s + 9)).
Figure 4.7 Nyquist (dash-dot) and Popov curve (solid) for the system in Exer-
cise 4.13d. The Popov curve is to the right of the dashed line for all ω .
5. Describing Function Analysis, Limit Cycles
EXERCISE 5.1
Match each of the odd, static nonlinearities in Figure 5.1 with one of the
describing functions in Figure 5.2. (Hint. Use the interpretation of the
describing function N ( A) as “equivalent gain” for sinusoidal inputs with
amplitude A.)
(Figure 5.1: the static nonlinearities a–d. Figure 5.2: the describing functions nr 1–4.)
EXERCISE 5.2
Compute the describing functions for
(a) the saturation,
(b) the deadzone, and
(c) the piece-wise linear function
in Figure 5.3. (Hint: Use (a) in (b) and (c).)
(Figure 5.3: the saturation, the deadzone, and the piece-wise linear function, with parameters H, D, α and β.)
EXERCISE 5.3
Show that the describing function for a relay with hysteresis in Figure 5.4
satisfies
−1/N(A) = −(π A/(4H)) [ (1 − (D/A)²)^{1/2} + i D/A ].
(Figure 5.4: relay with hysteresis, output levels ±H and switching levels ±D, together with the locus of −1/N(A) in the complex plane.)
EXERCISE 5.4
If the describing function for the static nonlinearity f ( x) is YN ( C), then
show that the describing function for D f ( x/ D ) equals YN ( C/ D ), where D
is a constant.
EXERCISE 5.5
Show that all odd, static nonlinearities f such that

df(x)/dx > 0,  d²f(x)/dx² > 0,

for x > 0, have a real describing function Ψ(⋅) that satisfies the inequalities

Ψ(a) < f(a)/a,  a > 0.
EXERCISE 5.6
Compute the describing function for the static nonlinearity

f(x) = k1 x + k2 x² + k3 x³.
EXERCISE 5.7

G(s) = −5s/(s² + s + 25)
(a) Assess intuitively the possibility of a limit cycle, by assuming that the
system is started at some small initial state, and notice that the sys-
tem can neither stay small (because of instability) nor at saturation
values (by applying the final value theorem of linear control).
(b) Use the describing function method to predict whether the system
exhibits a limit cycle, depending on the saturation level H. In such
cases, determine the frequency and amplitude of the limit cycle. The
describing function of a saturation is plotted in Figure 5.6.
(c) Use the extended Nyquist criterion to assess whether the limit cycle
is stable or unstable.
EXERCISE 5.8
(Figure 5.6: the describing function N(A) of the saturation, plotted against the amplitude A.)
(Figure: feedback loop with a relay with dead-zone, output levels ±1 and dead-zone ±a, around the linear system G0(s).)

G0(s) = 4/(s(s + 1)(s + 2))
(b) How should the parameter a be chosen so that the describing function method predicts that sustained oscillations are avoided in the closed loop system?
EXERCISE 5.9
The Ziegler–Nichols frequency response method suggests PID parameters based on a system's ultimate gain Ku and ultimate period Tu according to the following table. The method provides a convenient way of tuning PID controllers.
Parameter Value
K 0.6Ku
Ti 0.5Tu
Td 0.125Tu
(Figure: relay feedback of the process G(s), and the resulting sustained oscillation in the output y.)
(a) Show that the parameters Ku and Tu can be determined from the
sustained oscillations that may occur in the process under relay feed-
back. Use the describing function method to give a formula for com-
puting Ku and Tu based on oscillation data. (amplitude A and angular
frequency ω of the oscillation). Let the relay amplitude be D.
Recall that the ultimate gain and ultimate period are defined in the
following way. Let G (s) be the systems transfer function, and ω u be
the frequency where the system transfer function has a phase lag of
−180 degrees. Then we have

Tu = 2π/ωu
Ku = 1/|G(iωu)|
(b) What parameters would the relay method give for the process
G(s) = 50/(s(s + 1)(s + 10))
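For a quick numerical answer (a sketch that assumes the Control System Toolbox commands tf and margin are available):

  s = tf('s');
  G = 50/(s*(s + 1)*(s + 10));
  [Gm, Pm, Wcg, Wcp] = margin(G);   % Gm = Ku, Wcg = frequency of -180 deg phase
  Ku = Gm, Tu = 2*pi/Wcg
  K = 0.6*Ku, Ti = 0.5*Tu, Td = 0.125*Tu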
(Figure: loop with gain k, the filters (τ1 s + 1)/(τ2 s + 1) and (τ2 s + 1)/(τ1 s + 1), and a saturation at level α, where τ1 > τ2.)
EXERCISE 5.11
Figure 5.10 shows an alternative way of limiting the high frequency content of a signal. The system is composed of a high pass filter, a saturation, and a lowpass filter. Show that the system can be viewed as a nonlinear lowpass filter that attenuates high-frequency inputs without introducing a phase lag.
EXERCISE 5.12 (PhD)
Show that the system
G(s) = 1/(s(s + 1)²)
EXERCISE 5.13 (PhD)
Consider a linear system with relay feedback:
ẋ = Ax + Bu,
y = Cx,
u = −sgn y,
δh = ( C e^{Ah} / (C(Az + B)) ) δz.

g(z + δz) = −z + ( I − (Az + B) C / (C(Az + B)) ) e^{Ah} δz.
We have now shown that the Jacobian of the Poincaré map for a
linear system with relay feedback is equal to the matrix
( I − (Az + B) C / (C(Az + B)) ) e^{Ah}.
The limit cycle is locally stable if and only if this matrix has all
eigenvalues in the unit disc.
C(sI − A)⁻¹B = 1/(s + 1)³.
( I − (Az + B) C / (C(Az + B)) ) e^{Ah}.
6. Anti-windup, Friction, Backlash, Quantization
EXERCISE 6.1
Consider the antiwindup scheme in polynomial form described in the figure below, together with a process A(s) y = B(s) u. Put uc = 0.

(Figure: the antiwindup scheme in polynomial form, versions (a) and (b), with blocks T, 1/R, −S, 1/Aaw and Aaw − R.)

Make a block transformation to write the system in standard feedback form with lower block

P = (AR + BS)/(A Aaw) − 1.

Use the circle criterion (or passivity) to conclude that the
system is globally asymptotically stable if A is stable and the following
condition holds:
Re [ ((AR + BS)/(A Aaw))(iω) ] ≥ ε > 0, ∀ω.
EXERCISE 6.2
The following model for friction is described in a recent PhD thesis
dz/dt = v − (|v|/g(v)) z
F = σ0 z + σ1(v) dz/dt + Fv v,
where σ 0 , Fv are positive constants and g(v) and σ 1(v) are positive func-
tions of velocity.
(a) What friction force does the model give for constant velocity?
(b) Prove that if 0 < g(v) ≤ a and |z(0)| ≤ a then

|z(t)| ≤ a, t ≥ 0
EXERCISE 6.3
Derive the describing function (v input, F output) for
(a) Coulomb friction, F = F0 sign (v)
(b) Coulomb + linear viscous friction F = F0 sign (v) + Fvv
(c) as in b) but with stiction for v = 0.
EXERCISE 6.4
If v is not directly measurable the adaptive friction compensation scheme
in the lectures must be changed. Consider the following double observer
scheme:
F̂ = (zF + KF |v̂|) sign(v̂)
żF = −KF (u − F̂) sign(v̂)
v̂ = zv + Kv x
żv = −F̂ + u − Kv v̂.
EXERCISE 6.5
Show that the describing function for quantization is given by
N(A) = 0,  A < D/2
N(A) = (4D/(π A)) Σ_{i=1}^{n} √( 1 − ((2i − 1) D/(2A))² ),  (2n − 1) D/2 < A < (2n + 1) D/2
EXERCISE 6.6
Show that a saturation is a passive element.
EXERCISE 6.7
Consider the system

ÿ + c ẏ + k y + η(y, ẏ) = 0

where η is defined as

η(y, ẏ) = µk m g sign(ẏ)  for |ẏ| > 0
η(y, ẏ) = −k y  for ẏ = 0 and |y| ≤ µs m g/k
η(y, ẏ) = −µs m g sign(y)  for ẏ = 0 and |y| > µs m g/k
Construct the phase portrait and discuss its qualitative behavior. (Hint: Use piecewise linear analysis.)
EXERCISE 6.8
The accuracy of a crude A/D converter can be improved by adding a high-
frequency dither signal before quantization and lowpass filtering the dis-
cretized signal, see Figure 6.1. Compute the stationary value y0 of the output if the input is a constant u0. The dither signal is a triangle wave with zero mean and amplitude D/2, where D is the quantization level in the A/D converter.

(Figure 6.1: the dither signal is added to the input before the A/D converter, which is followed by a lowpass filter and decimation.)
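A small numerical illustration of the dither effect (the values of D and u0 below are arbitrary examples):

  D = 1; u0 = 0.3;                                  % 0 < u0 < D/2
  t = linspace(0, 10, 1e5);
  d = D/2*(2*abs(2*mod(100*t, 1) - 1) - 1);         % triangle wave, amplitude D/2
  yq = D*round((u0 + d)/D);                         % quantizer with level D
  y0 = mean(yq)                                     % close to u0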
EXERCISE 6.9
For PhD students. Show that the antiwindup scheme in observer form is
equivalent to the antiwindup scheme in polynomial form with A e equal to
the observer polynomial (see CCS for definitions).
EXERCISE 6.10
For PhD students. Show that the equilibrium point of an unstable linear
system preceded with a saturation can not be made globally asymptotically
stable with any control law.
7. Nonlinear Controller Design
EXERCISE 7.1
In some cases, the main nonlinearity of a system can be isolated to a static
nonlinearity on the input. This is, for example, the case when a linear process is controlled using an actuator with a nonlinear characteristic. A simple
design methodology is then to design a controller C(s) for the linear process
and cancel the effect of the actuator nonlinearity by feeding the computed
control through the inverse of the actuator nonlinearity, see Figure 7.1.
Compute the inverse of the following common actuator characteristics
(Figure 7.1: the controller C(s) followed by the inverse nonlinearity f⁻¹(⋅), the actuator nonlinearity f(⋅), and the process G(s), in a unity feedback loop.)
Use your result to derive the inverse of the important special case of
a dead zone.
(c) A backlash nonlinearity.
EXERCISE 7.2
An important class of nonlinear systems can be written on the form
ẋ1 = x2
ẋ2 = x3
..
.
ẋn = f ( x) + g( x)u
(a) Find a feedback

u = h(x, v)
that renders the closed loop system from the new input v to the state
linear. What conditions do you have to impose on f ( x) and g( x) in
order to make the procedure well posed?
(b) Apply this procedure to design a feedback for the inverted pendulum
ẋ1 = x2
ẋ2 = a sin( x1 ) + b cos( x2 )u
that makes the closed loop system behave as a linear system with a
double pole in s = −1. Is the control well defined for all x? Can you
explain this intuitively?
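A simulation sketch of the linearizing feedback in (b), with the hypothetical parameter values a = b = 1 and the linear part chosen to give a double pole in s = −1:

  a = 1; b = 1;
  u = @(x) (-a*sin(x(1)) - x(1) - 2*x(2)) / (b*cos(x(1)));
  f = @(t, x) [x(2); a*sin(x(1)) + b*cos(x(1))*u(x)];
  [t, x] = ode45(f, [0 10], [1; 0]);
  plot(t, x)    % the closed loop behaves like x1'' = -x1 - 2*x1'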
(c) One drawback with the above procedure is that it is very sensitive to
modelling errors. Show that this is the case by designing a linearizing
feedback for the system
ẋ = x2 + u
that makes the closed loop system linear with a pole in −1. Apply
the suggested control to the system
ẋ = (1 + ε ) x2 + u
EXERCISE 7.3
Consider a linear system
ẋ1 = ax2 + bu
ẋ2 = x1
(c) How large variations in the parameters a and b can the controller
designed in (b) tolerate in order to still guarantee a stable closed
loop system?
EXERCISE 7.4
Consider concentration control for a fluid that flows through a pipe, with
no mixing, and through a tank, with perfect mixing. A schematic diagram
of the process is shown in Figure 7.2(left). The concentration at the inlet
of the pipe is cin. Let the pipe volume be Vd and let the tank volume be
Vm. Furthermore, let the flow be q and let the concentration in the tank
at the outlet be c. A mass balance gives
Vm dc(t)/dt = q(t) ( cin(t − L) − c(t) )
(Figure 7.2: left, the pipe with volume Vd and the tank with volume Vm, inlet concentration cin and outlet concentration c; right, a step response indicating the levels k and 0.63k and the parameters a, L and T.)
(a) Show that for fixed q(t), the system from input cin to output c can be represented by a linear transfer function

G(s) = K/(sT + 1) ⋅ e^{−sL}
Controller Kp Ti Td
P 1/ a
PI 0.9/a 3L
PID 1.2/a 2L L /2
EXERCISE 7.5
We have seen how it in many cases can be of interest to control the system
into a set, rather than to the origin. One example of this is sliding mode
control, where the system state is forced into an invariant set, chosen
in such a way that if the state is forced onto this set, the closed loop
dynamics are exponentially stable. In this example, we will use similar
ideas to design a controller that “swings” up an inverted pendulum from
its stable equilibrium (hanging downwards) to its upright position.
Let the pendulum dynamics be given by
ẋ1 = x2
ẋ2 = (m g l/Jp) sin(x1) − (m l/Jp) cos(x1) u
(a) Denote the total energy of the pendulum by E and determine the
value E0 corresponding to the pendulum standing in the upright po-
sition.
(b) Investigate whether the control strategy
u = k( E − E0 )sign( x2 cos( x1 ))
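One way to investigate (b) numerically (a sketch; the values m = l = Jp = 1, g = 9.81, k = 1 and the energy expression are assumptions made only for this illustration):

  m = 1; l = 1; Jp = 1; g = 9.81; k = 1;
  E  = @(x) Jp*x(2)^2/2 + m*g*l*cos(x(1));   % total energy, x1 = 0 upright
  E0 = m*g*l;                                % energy in the upright position
  u  = @(x) k*(E(x) - E0)*sign(x(2)*cos(x(1)));
  f  = @(t, x) [x(2); m*g*l/Jp*sin(x(1)) - m*l/Jp*cos(x(1))*u(x)];
  [t, x] = ode45(f, [0 20], [pi - 0.1; 0]);  % start near the hanging position
  plot(t, x(:,1))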
EXERCISE 7.6
Consider the system
ẋ1 = x1 + u
ẋ2 = x1
y = x2
u = −2x1 − sign( x1 + x2 )
EXERCISE 7.7
Minimize ∫₀¹ ( x²(t) + u²(t) ) dt when
ẋ (t) = u(t)
x(0) = 1
x(1) = 0
EXERCISE 7.8
Neglecting air resistance and the curvature of the earth the launching of
a satellite is described with the following equations
ẋ1 = x3
ẋ2 = x4
ẋ3 = (F/m) cos u
ẋ4 = α (F/m) sin u − g
EXERCISE 7.9
ẋ5 = −γ u2
Show that

tan u1 = λ1/λ4,   u2 = umax for σ < 0,   u2 = 0 for σ > 0,   u2 = ⋆ for σ = 0,

where ⋆ means that the solution is unknown. Determine equations for λ and σ. (You do not have to solve these equations.)
EXERCISE 7.10
Consider the system
ẋ1 = x2
ẋ2 = −x1 − x2³ + (1 + x1) u
ẋ1 = f 1 ( x1 , x2 , λ 1 , λ 2 )
ẋ2 = f 2 ( x1 , x2 , λ 1 , λ 2 )
λ̇ 1 = f 3 ( x1 , x2 , λ 1 , λ 2 )
λ̇ 2 = f 4 ( x1 , x2 , λ 1 , λ 2 )
EXERCISE 7.11
Consider the double integrator
ẋ1 = x2
ẋ2 = u, huh ≤ 1
with initial value x(0) = x0. We are interested in finding the control that brings the system to rest (x(tf) = 0) in minimum time. (You may think of this as a way of designing a controller that reacts quickly on set-point changes.)
(a) Show that the optimal control is of “bang-bang type” with at most one
switch. (In other words, show the optimal control is first maximal in
one direction for some time, and then possibly changes sign to become
maximal in the other direction for the remaining time)
(b) Show that the control can be expressed as the feedback law
u = umax for σ(x) > 0,   u = −umax for σ(x) < 0
EXERCISE 7.12
Consider the problem of controlling the double integrator
ẋ1 = x2
ẋ2 = u,  |u| ≤ 1
from an arbitrary initial condition x(0) to the origin so that the criterion
∫₀^{tf} (1 + |u|) dt
is minimized (t f is the first time so that x(t f ) = 0). Show that all extremals
are of the form
u(t) = −1 for 0 ≤ t ≤ t1,   u(t) = 0 for t1 ≤ t ≤ t2,   u(t) = 1 for t2 ≤ t ≤ tf

or

u(t) = 1 for 0 ≤ t ≤ t1,   u(t) = 0 for t1 ≤ t ≤ t2,   u(t) = −1 for t2 ≤ t ≤ tf
for some t1, t2 with 0 ≤ t1 ≤ t2 ≤ t f . Some time interval can have the
length 0. Assume that the problem is normal.
EXERCISE 7.13
Consider the system
ẋ = [ −5 2; −6 2 ] x + [ 0; 1 ] u
from x0 = 0 to x(tf) = [ 1; 1 ] in minimum time with |u(t)| ≤ 3. Show that
the optimal controller is either

u(t) = −3 for 0 ≤ t ≤ t1,   u(t) = +3 for t1 ≤ t ≤ tf

or

u(t) = +3 for 0 ≤ t ≤ t1,   u(t) = −3 for t1 ≤ t ≤ tf

for some t1.
EXERCISE 7.14
Show that minimum time control of a linear system
EXERCISE 7.15
What is the conclusion from the maximum principle for the problem
min ∫₀¹ u dt,
ẋ1 = u
x1 (0) = 0
x1 (1) = 1
Explain.
Solutions to Chapter 1
SOLUTION 1.1
(a) Choose the angular position and velocity as state variables, i.e., let
x1 = θ
x2 = θ˙
We obtain
ẋ1 = x2
ẋ2 = −(g/l) sin(x1) − (k/m) x2
The equilibria are given by

0 = x2
0 = −(g/l) sin(x1) − (k/m) x2,

i.e. x2 = 0 and x1 = nπ, n = 0, ±1, ±2, . . .
The linearized system is stable for even n, and unstable for odd n.
We can use Lyapunov’s linearization method to conclude that the
pendulum is LAS around the lower equilibrium point, and unstable
around the upper equilibrium point.
SOLUTION 1.2
We choose angular positions and velocities as state variables. Letting x1 =
q1 , x2 = q̇1 , x3 = q2 , x4 = q̇2, we obtain
ẋ1 = x2
ẋ2 = −(MgL/I) sin x1 − (k/I)(x1 − x3)
ẋ3 = x4
ẋ4 = (k/J)(x1 − x3) + (1/J) u
SOLUTION 1.3
(a) Let x1 = δ , x2 = δ˙, x3 = Eq and u = EF D . We obtain
ẋ1 = x2
ẋ2 = P/M − (D/M) x2 − (η1/M) x3 sin x1
ẋ3 = −(η2/τ) x3 + (η3/τ) cos x1 + (1/τ) u
ẋ1 = x2
ẋ2 = P/M − (D/M) x2 − (η1/M) Eq sin x1
SOLUTION 1.4
(a) Let
ẋ = Ax + Bu, y = Cx
and hence
(b) To separate the linear dynamics from the nonlinearities, write the
pendulum state equations as
ẋ1 = x2
k g
ẋ2 = − x2 − sin( x1 )
m l
SOLUTION 1.5
(a) Hint: ė = − y = − Cx.
(b) The equilibrium points are given by
0 = G (0) sin e
e = ±nπ , n = 0, 1, 2, . . .
(c) For G (s) = 1/(τ s + 1), we take A = −1/τ , B = 1/τ and C = 1. Then
ż = −(1/τ) z + (1/τ) sin e
ė = − z
ẋ1 = x2
ẋ2 = −(1/τ) x2 − (1/τ) sin x1,
which is the pendulum model with g/l = k/m = 1/τ .
SOLUTION 1.6
Let G P I D (s) be the transfer function for the PID controller. The requested
form
V ( s) = − G l ( s) F ( s)
F (v) = F0 sign(v)
is obtained with
Gl(s) = s/(m s² − GPID(s))
SOLUTION 1.7
The requested form
U ( s) = − G l ( s) V ( s)
v(t) = sat(u)
is obtained with
Gl(s) = (Gfb Gp − Gaw)/(1 + Gaw).
SOLUTION 1.8
ẋ1 = x2
ẋ2 = −2x2 − x1 + f ( x3 )
ẋ3 = r − x1
(b) For a constant input r the equilibrium point is given by x = (r, 0, √r). The linearization has

A = [ 0 1 0; −1 −2 2√r; −1 0 0 ].
SOLUTION 1.9
The linearization is given by
ẍ = − k1 x + u,
SOLUTION 1.10
The linearized system is not controllable. The system is however nonlinear
locally controllable. This can be seen directly from the definition as follows:
We must show that we can drive the system from (0, 0, 0) to a near by state
( xT , yT , θ T ) using small control signals u1 and u2 . By the sequence u =
(u1 , u2 ) = (0, ε 1 ),u = (ε 1 , 0),u = (0, −ε 1 ), u = (−ε 2 , 0) (or in words: "turn
left, forward, turn right, backwards") one can move to the state (0, yT , 0).
Then apply (ε 3 , 0) and then (0, ε 4 ) to end up in ( xT , yT , θ T ). For any time
T > 0 this movement can be done with small ε i if xT , yT and θ T are small.
SOLUTION 1.11
Same solution as in 1.10, except that you have to find a movement af-
terwards that changes Ψ without changing the other states. This can be
done by the sequence: L-F-R-B-R-F-L-B where F=forward, B=backwards,
L=turn left, R=turn right.
SOLUTION 1.12
The linearized system at ( x0 , u0 ) is
ẋ1 = u
ẋ2 = x1
has full rank. Since the linearized system is controllable the nonlinear
system is also locally controllable at ( x0 , u0 ).
SOLUTION 1.13
See lecture slides. Use max(y(:)) to compute the amplitude.
SOLUTION 1.14
See lecture slides.
SOLUTION 1.15
See lecture slides.
SOLUTION 1.16
With a = 0.02 and w = 100π we get local stability for l ∈ [0.044, 1.9].
SOLUTION 1.17
(a) x = 0, and if r > 1 also x = (√(b(r − 1)), √(b(r − 1)), r − 1) and x = (−√(b(r − 1)), −√(b(r − 1)), r − 1).
(b) The linearization around x = 0 is
ẋ = [ −σ σ 0; r −1 0; 0 0 −b ] x
Solutions to Chapter 2
SOLUTION 2.1
(a) The equilibrium points are
(x1, x2) = (0, 0), (√6, 0), (−√6, 0),
which are stable node, saddle point, and stable focus, respectively.
(c) The equilibrium points are
( x1 , x2 ) = (0, 0),
( x1 , x2 ) = (0, 0),
x12 + x22 = 1
( x1 , x2 ) = (0, 0),
SOLUTION 2.2
The three equilibrium points are
(x1, x2) = (0, 0), (√(ac/b), a), (−√(ac/b), a).
The first equilibrium point is a saddle. The other equilibria are stable nodes
if 8a < c and stable focuses if 8a > c.
SOLUTION 2.3
SOLUTION 2.4
Close to the origin, the saturation element operates in its linear region, and all systems are assigned the same closed loop dynamics. Far away from the origin, the influence of the saturated control can be neglected, and the open loop dynamics governs the behaviour.
(a) System (a) has one stable and one unstable eigenvalue. For initial
values close to the stable eigenvector, the state will move towards
the origin. For initial values close to the unstable eigenvector, the
system diverges towards infinity. This corresponds to the rightmost
phase portrait.
(b) All eigenvalues of system (b) are unstable. Thus, for initial values
sufficiently far from the origin, the system state will diverge. This
corresponds to the leftmost phase portrait. Note how the region of
attraction (the set of initial states, for which the state converges to
the origin) is severely limited.
(c) System (c) is stable also in open loop. This corresponds to the phase
portrait in the middle.
SOLUTION 2.5
(a) The system has the origin as a unique equilibrium point, being a
stable focus. The direction of the arrow heads can be determined by
inspection of the vector fields. In particular, notice that since f1(x) = −x2, the function f1 is negative in the upper half of the plane, and positive in the lower half.
(b) The system has three equilibrium points
a − tan(a/3) = 0
SOLUTION 2.6
(a) The equilibrium points are obtained by setting ẋ = 0. For K ≠ −2, the origin is the unique equilibrium point. When K = −2, the line x1 = 2x2 is an equilibrium set.
(b) The Jacobian is given by
∂f/∂x (0) = [ −1 −K; 1 −2 ]
with eigenvalues
λ = −3/2 ± √(1/4 − K).
Thus, the closed loop system is asymptotically stable about the origin for K > −2. Depending on the value of K, the origin has the following character:

1/4 < K: stable focus
−2 < K < 1/4: stable node
K < −2: saddle.
SOLUTION 2.7
The equilibria are given by sin x10 = P/(η1 Eq), x20 = 0. The characteristic equa-
tion for the linearization becomes
λ 2 + α λ + β = 0,
where α = D/M > 0 and β = (η1 Eq/M) cos x10. Depending on α, β the equilibria are stable foci, stable nodes or saddle points.
SOLUTION 2.8
Introduce ∆ x as the deviation from the nominal trajectory:
x(t) = ( sin t + ∆x1(t),  cos t + ∆x2(t) )ᵀ
This gives for the state derivative

dx(t)/dt = ( cos t + ∆ẋ1(t),  −sin t + ∆ẋ2(t) )ᵀ
         = ( cos t + ∆x2(t) + (sin t + ∆x1(t)) (−2 sin t ∆x1(t) − 2 cos t ∆x2(t) − ∆x1(t)² − ∆x2(t)²),
             −sin t − ∆x1(t) + (cos t + ∆x2(t)) (−2 sin t ∆x1(t) − 2 cos t ∆x2(t) − ∆x1(t)² − ∆x2(t)²) )ᵀ
This gives
dr(t)/dt = r(1 − r²)
dθ(t)/dt = −1
which implies that the limit cycle is stable, but not asymptotically
stable.
SOLUTION 2.9
(7/5) d²x̃/dt² = g cos(φ0) φ̃ + (2r/5) d²φ̃/dt²
SOLUTION 2.10
Using the identity
(sin t)³ = (3/4) sin t − (1/4) sin 3t
we see that u0 (t) = sin (3t), y0(t) = sin t is a nominal solution. The
linearization is given by
d²ỹ/dt² + 4 sin²t ⋅ ỹ = −(1/3) ũ.
SOLUTION 2.11
No solution yet.
Solutions to Chapter 3
SOLUTION 3.1
(a) Linearization about the system around the origin yields
A = ∂f/∂x = 3a x²
Thus, at the origin we have A = 0. Since the linearization has one
eigenvalue on the imaginary axis, linearization fails to determine
stability of the origin.
(b) V(0) = 0, V(x) > 0 for x ≠ 0, and V(x) → ∞ as |x| → ∞.
Thus, V ( x) satisfies the conditions for being a Lyapunov function
candidate. Its time derivative is
V̇(x) = (∂V/∂x) f(x) = 4a x⁶ (7.5)
which is negative definite for a < 0, and positive definite for
a > 0. The desired results now follow from Lyapunov’s stability
and instability theorems.
(c) For a = 0, the system is linear and given by
ẋ = 0
The system has solutions x(t) = x0 for all t. Thus, the system
is stable. A similar conclusion can be drawn from the Lyapunov
function used in (b).
SOLUTION 3.2
(a) We use the pendulum’s total energy
V(x) = g l (1 − cos(x1)) + (l²/2) x2²
as Lyapunov function. We see that V is positive, and compute the
time derivative
dV(x)/dt = Σ_i (∂V/∂xi) ẋi = g l sin(x1) x2 + x2 (−g l sin(x1)) = 0
dV(x)/dt = −(k l²/m) x2²
V(x) = (g/l)(1 − cos(x1)) + (1/2) xᵀ P x
We now want to choose p11 , p12 and p22 such that V̇ ( x) is negative
definite. Since the cross terms x2 sin x1 and x1 x2 are sign indefinite,
we cancel them out by letting p22 = 1, p11 = p12 k/m. With these
choices, p12 must satisfy 0 < p12 < k/m. We let p12 = 0.5k/m, and
obtain
V̇(x) = −(1/2)(g k/(l m)) x1 sin x1 − (1/2)(k/m) x2².
The term x1 sin x1 > 0 for all 0 < h x1 h < π . Within this strip, V ( x) is
positive definite, and V̇ ( x) is negative definite. We conclude that the
origin is asymptotically stable.
SOLUTION 3.3
With V = kx2 /2 + ẋ2 /2 we get V̇ = −d ẋ4 ≤ 0. Since V̇ = 0 only when
ẋ = 0 and the system equation then gives ẍ = − kx = 0 unless also x = 0,
we conclude that x = ẋ = 0 is the only invariant set. The origin is globally
asymptotically stable since the Lyapunov function is radially unbounded.
SOLUTION 3.4
(a) The eigenvalues of A are λ = −1/2 ± i √3/2.
(b) (i) We have
If p11 > 0 and p11 p22 − p212 > 0, both terms are non-negative.
Moreover, V ( x) → ∞ as x → ∞, and V ( x) = 0 ; x1 = x2 = 0
(This proves the "if"-part). If the conditions on pi j do not hold, it
is easy to find x such that V ( x) < 0 (proving the "only if"-part).
which has the solution p11 = 1.5, p12 = −0.5 and p22 = 1. P is a
positive definite matrix.
(c) Use the Matlab command lyap(A’,eye(2)).
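A hedged numerical check of (b)–(c):

  A = [0 -1; 1 -1];
  P = lyap(A', eye(2))    % solves A'*P + P*A = -I; gives P = [1.5 -0.5; -0.5 1]
  eig(P)                  % both eigenvalues positive, so P > 0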
SOLUTION 3.6
(a) The mistake is that V is not radially unbounded. The student has
forgotten to check that lim x→∞ V ( x) = ∞. In fact,
V(x1, x2) = x1²/(1 + x1²) + (1/2) x2²
V(x) = (1/2) x2² + x1²/(1 + x1²) = V(x0) = c
x2² = c − x1²/(1 + x1²) ≥ c − 1

In this case, we have |x2| ≥ √(c − 1). Since ẋ1 = x2, it follows that
h x1 h → ∞ as t → ∞. Roughly speaking, if the system starts with
more initial stored energy than can possibly be stored as potential
energy in the spring, the trajectories will diverge.
SOLUTION 3.7
Find the equilibrium points for the system.
d/dt ( x1² + 2x2² − 4 ) = 2x1³ ⋅ 4x2 + 4x2 ⋅ (−2x1³) = 0.
The motion on this invariant set is given by
The set is however NOT a limit cycle, as the two equilibrium points (x1, x2) = (0, ±√2) belong to this set. Any trajectory moving along the invariant set (the state vector moves clockwise) will eventually end up in either (x1, x2) = (0, +√2) or (x1, x2) = (0, −√2).
Is the invariant set asymptotically stable?
Use the squared distance as Lyapunov candidate
SOLUTION 3.8
Verify that V(0) = 0, V(x) > 0 for x ≠ 0 and V(x) → ∞ for ‖x‖ → ∞.
Now,
(a) We have
d/dt V(x1, x2) = 8x1 ẋ1 + 4x2 ẋ2 + 16x1³ ẋ1 =
= 8x1 x2 + 4x2 (−2x1 − 2x2 − 4x1³) + 16x1³ x2 =
= −8x2²
SOLUTION 3.9
(a) Introduce the state vector x = ( x1 , x2 )T = ( y, ẏ)T . The system dynam-
ics can now be written as
ẋ1 = x2
ẋ2 = −sat(2x1 + 3x2 )
d/dt V(x1, x2) = x2 ẋ2 + (1/2) sat(2x1 + 3x2) d/dt(2x1 + 3x2)
= −(3/2) (sat(2x1 + 3x2))²
≤ 0
ÿ = ±1.
SOLUTION 3.10
The derivative of the suggested Lyapunov function is
ẋ1 = x2
ẋ2 = − x1
The trajectory of (x1, x2) is a circle; hence, if A and B are both nonzero, x1(t) > 0 and x2(t) > 0 for some t. This implies that the only solution of V̇(x) = 0 is x(t) = 0 for all t. By LaSalle's theorem, this system is globally asymptotically stable.
SOLUTION 3.11
(a)
P = 0.5 [ 1 0; 0 1 ]
solves the Lyapunov equation with Q as the identity matrix.
(b) We have
V̇(x) = xᵀ(AᵀP + PA) x + 2xᵀ P g(x2) = −x1² − x2² + x2 g(x2),

which is negative for x2² ≤ 1. Since the level sets 0.5(x1² + x2²) = γ
are circles, we conclude that the unit circle is a guaranteed region of
attraction. A couple of simulations are shown in Figure 7.4. Notice, in
particular, that the entire strip h x2 h ≤ 1 is not a region of attraction.
SOLUTION 3.12
(a) The origin is locally asymptotically stable, since the linearization
d/dt x̃ = [ 0 −1; 1 −1 ] x̃
AT P + PA = − I ,
x1 = r cos θ
x2 = r sin θ
We get
which is negative for r² < 1/0.861. Using this, together with λmin(P) ≥ 0.69, we choose

c = 0.8 < 0.69/0.861 = 0.801
SOLUTION 3.13
(a) For |2x1 + x2| ≤ 1, we have

ẋ = [ 0 1; −1 −1 ] x. (7.7)
The system matrix is asymptotically stable. Hence, the origin is locally asymptotically stable.
(b) We have V ( x) > 0 in the first and third quadrant.
V̇(x) = x1² − x1 + c²/x1².
SOLUTION 3.14
(a) Use V as a Lyapunov function candidate and let u be generated by
the nonlinear state feedback
u = −(∂V/∂x) ψ(x)
SOLUTION 3.15
Use convexity with respect to K.
SOLUTION 3.16
(a) Integration of the equality d/dσ f(σx) = (∂f/∂x)(σx) ⋅ x gives the equation

f(x) = ∫₀¹ (∂f/∂x)(σx) ⋅ x dσ.
We get

xᵀ P f(x) + fᵀ(x) P x = xᵀ P ∫₀¹ (∂f/∂x)(σx) x dσ + ∫₀¹ xᵀ (∂f/∂x)ᵀ(σx) dσ P x
= xᵀ ∫₀¹ { P (∂f/∂x)(σx) + (∂f/∂x)ᵀ(σx) P } dσ x ≤ −xᵀ x
(c) Suppose that f is bounded, i.e. that ‖f(x)‖ ≤ c for all x. Then
SOLUTION 3.17
Assume the linearization A = ∂f/∂x of f is asymptotically stable. Then the
equation
PA + AT P = − I ,
has a solution P > 0. (To prove that P = ∫₀^∞ e^{Aᵀs} e^{As} ds > 0 is such a solution, integrate both sides of
d/ds ( e^{Aᵀs} e^{As} ) = Aᵀ e^{Aᵀs} e^{As} + e^{Aᵀs} e^{As} A
from 0 to ∞.) All conditions of Krasovskii’s method are then satisfied and
we conclude that the nonlinear system is asymptotically stable. The instability result is harder.
SOLUTION 3.18
The system is given by
ẋ1 = x2 =: f 1
ẋ2 = − x2 + K g( e) = − x2 + K g(− x1 ) =: f 2.
With
P = [ 1 1; 1 2 ]

we get V̇ ≤ 0 if 3K x1² < 1. Hence the system is locally stable. Actually
one gets V̇ < 0 if 3K x1² < 1 unless x1 = 0. The invariant set is x1 = x2 =
0. From LaSalle’s theorem the origin is hence also locally asymptotically
stable.
Solutions to Chapter 4
SOLUTION 4.1
See Figure 7.5.
SOLUTION 4.2
Note that the Small Gain Theorem and the Passivity Theorem can also
cope with dynamic nonlinearities. For simplicity, we will only consider the
case when the linear system is asymptotically stable.
(a) What are the restrictions that we must impose on the nonlinearities
so that we can apply the various stability theorems?
The Nyquist Criterion ψ ( y) must be a linear function of y, i.e.,
ψ ( y) = k1 y for some constant k1 .
The Circle Criterion ψ ( y) must be contained in some sector [ k1 , k2 ].
Small Gain Theorem ψ ( y) should be contained in a symmetric sec-
tor [− k2 , k2 ]. The gain of the nonlinearity is then k2 .
The Passivity Theorem ψ ( y) is strictly passive for all nonlineari-
ties in the sector (ε , 1/ε ) for some small ε .
These conditions are illustrated in Figure 7.6.
(b) If the above restrictions hold, we get the following conditions on the
Nyquist curve
The Nyquist Criterion The Nyquist curve should not encircle the
point −1/ k1
The Circle Criterion The Nyquist curve should not intersect the
disc D ( k1 , k2 ).
Small Gain Theorem The Nyquist curve has to be contained in a
disc centered at the origin, with radius 1/ k2 .
The Passivity Theorem The Nyquist curve has to stay in the right
half-plane, Re( G (iω )) ≥ 0.
These conditions are illustrated in Figure 7.7.
SOLUTION 4.3
(a) The systems belong to the sectors [0, 1], [0, ∞] and [−1, ∞] respec-
tively.
(b) Only the saturation nonlinearity (the leftmost nonlinearity) has finite
gain, which is equal to one. The other two nonlinearities have infinite
gain.
(c) The saturation and the sign nonlinearity are passive. The rightmost
nonlinearity is not passive.
SOLUTION 4.4
Note that the circle theorem in Slotine and Li is stated erroneously. The
second and third case must require that ρ = 0, i.e., that the open loop
system is Hurwitz. A correct version of the theorem in Slotine and Li would
read
ẋ = Ax − Bψ ( y)
y = Cx
Now, since the linear part of the system is Hurwitz, we are free to use all
versions of the circle criterion.
(a) In order to guarantee stability of a nonlinearity belonging to a sym-
metric sector [−α , α ], the Nyquist curve has to stay strictly inside a
disk centered at the origin with radius 1/α . We may, for instance,
take α = 0.25 − ε for some small ε > 0.
(b) The Nyquist curve lies inside the disk D(−1.35, 4.35). Thus, stability can be guaranteed for all nonlinearities in the sector [−0.23, 0.74].
Figure 7.8 The Nyquist curves for the system in Exercise 4.6b, and the circle
corresponding to k1 = −2, k2 = 7.
(c) We must find β such that the Nyquist plot lies outside of a half-plane
Re( G (iω )) < −1/β . A rough estimate from the plot is β = 1.1.
SOLUTION 4.5
The open loop system has one unstable pole, and we are restricted to apply
the first or fourth version of the circle criterion. In this example, we can
place a disk with center in −3 and with radius 0.75, and apply the first
version of the Nyquist criterion to conclude stability for all nonlinearities
in the sector [0.27, 0.44].
SOLUTION 4.6
(a) The Nyquist diagram is a circle with midpoint in −0.5 and radius
0.5, see Figure 4.4. Since the open system is unstable the Nyquist
curve should encircle the disc twice. Choosing the circle that passes through −1/k1 = −1 + ε and −1/k2 = −ε we see that the loop is stable for the sector ( 1/(1 − ε), 1/ε ).
(b) The circle with k1 = −2, k2 = 7 does not intersect the Nyquist curve.
Hence the sector (−2, 7) suffices. As always there are many other
circles that can be used (The lower limit can be traded against the
upper limit).
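A plotting sketch for checking this graphically (assumes the Control System Toolbox; the sector bounds are the ones used above):

  s = tf('s');  G = 1/((s + 1)*(s + 2));
  nyquist(G); hold on
  k1 = -2; k2 = 7;
  c = (-1/k1 - 1/k2)/2;  r = (-1/k1 + 1/k2)/2;   % circle through -1/k1 and -1/k2
  th = linspace(0, 2*pi, 200);
  plot(c + r*cos(th), r*sin(th), 'r')            % the Nyquist curve should stay inside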
SOLUTION 4.7
The system is written on the standard feedback connection form that we know from the lectures. Since the gain of the time-varying element is bounded, ‖f(t)‖ ≤ 1, it is appropriate to apply the Small Gain Theorem.
(a) Introduce y = Cx and u = f (t) y, then
ẋ = Ax + Bu
y = Cx
(b) Let the gain of the linear system be γ . Then, the feedback connection
is BIBO-stable if γ < 1. Since the gain of a linear system is given by
SOLUTION 4.8
(a) >> A=[1 10; 0 1];svd(A)
ans =
10.0990
0.0990
(b)
σ1(AB) = sup_x ‖ABx‖/‖x‖ = sup_x ( ‖ABx‖/‖Bx‖ ) ⋅ ( ‖Bx‖/‖x‖ )
 ≤ sup_y ‖Ay‖/‖y‖ ⋅ sup_x ‖Bx‖/‖x‖ = σ1(A) σ1(B)
SOLUTION 4.9
The proof follows directly from the definition of passivity, since, according
to the definition of a storage function
⟨u, y⟩_T = ∫₀^T uᵀ y dt ≥ ∫₀^T V̇(x) dt = V(x(T)) − V(x(0)) = V(x(T))
SOLUTION 4.10
The linear system G (s) corresponds to
V̇ = ẋ T Px + x T P ẋ
= x T ( AT P + PA) x + 2x T P Bu = − x T x + 2 y T u ≤ 2 y T u
SOLUTION 4.11
(a) We will use the storage function V ( y, u) = y2 /2. First note that
d
V ( y, u) = y(−2 y + sat( y) + u)
dt
and
y2 ≥ ysat( y) ≥ 0.
which gives
d
V ( y, u) ≤ yu − y2 ≤ yu
dt
Hence, F is passive.
(b) Pick u such that h yh ≤ 1 for some amount of time T. (After the time
T, you can set the input u(t) = 0) Then, for t ∈ [0, T ], the system
behaves like the linear time invariant system
1
G ( s) = ,
s+1
which is (strictly) passive. Hence the nonlinear system is (strictly)
passive.
(c) You can solve this problem in many ways. Here we will base our proof on the circle criterion. With the obvious state vector x = (θ, ω, z)ᵀ, we rewrite the system in the feedback connection form

ẋ = Ax − Bψ(y) = [ 0 1 0; −2 −1 1; 2 0 −2 ] x − [ 0; 0; 1 ] sat([ 1 0 −1 ] x)
The Nyquist curve of the linear system is illustrated in Figure 7.9.
Since the Nyquist curve does not intersect the half plane Re( G (iω )) <
−1/2, we conclude stability for all ψ in the sector [0, 2]. Since the
saturation element lies in the sector [0, 1], we conclude stability of
the closed loop.
SOLUTION 4.12
(a) We have
⟨y, u⟩ = ∫₀^T y(t) u(t) dt =
= ∫₀^T {u(t) uc(t)} {H(u(t) uc(t))} dt =
= ∫₀^T w(t) H(w(t)) dt = ⟨w, H(w)⟩
SOLUTION 4.13
No solution yet.
Solutions to Chapter 5
SOLUTION 5.1
Use the interpretation of describing function as “equivalent gain”. We have
1-b, 2-c, 3-a, 4-d.
SOLUTION 5.2
Denote the nonlinearity by f . For memoryless, static nonlinearities, the
describing function does not depend on ω , and the describing function in
Slotine and Li reduces to
N(A) = ( b1(A) + i a1(A) ) / A

where a1 and b1 can be computed as

a1 = (1/π) ∫₀^{2π} f(A sin(φ)) cos(φ) dφ (7.9)
b1 = (1/π) ∫₀^{2π} f(A sin(φ)) sin(φ) dφ. (7.10)
(a) First, we notice that the saturation is an odd function, which implies that a1 = 0. In order to simplify the computations of b1, we set H = 1
and note that the saturation can be described as
f(A sin(φ)) = (A/D) sin(φ) for 0 ≤ φ ≤ φl,
f(A sin(φ)) = 1 for φl < φ < π/2,

where φl = arcsin(D/A). This gives

N(A) = (2/(D π)) ( φl + (D/A) cos(φl) )
(c) Noting that this nonlinearity can be written as the sum of the two
nonlinearities in (a) and (b), we arrive at the describing function
N(A) = ( 2(α − β)/π ) ( φl + (D/A) cos(φl) ) + β.
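A numerical sanity check of the saturation describing function derived in (a) (H = 1, with D = 1 and A = 3 chosen arbitrarily):

  D = 1; A = 3; phi = linspace(0, 2*pi, 1e5);
  f  = min(max(A*sin(phi)/D, -1), 1);         % saturation output over one period
  b1 = trapz(phi, f.*sin(phi))/pi;            % first Fourier sine coefficient
  phil = asin(D/A);
  [b1/A, 2/(D*pi)*(phil + D/A*cos(phil))]     % the two numbers should agree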
SOLUTION 5.3
Let the input to the relay be
We obtain

N(A) = (4H/(π A)) ( cos(φ0) − i sin(φ0) )

The identity cos(z) = √(1 − sin²(z)) gives the desired result.
SOLUTION 5.4
Follows from the integration rule

∫ f(ax) dx = (1/a) F(ax)

where F(x) = ∫ f(x) dx.
SOLUTION 5.5
We have
φ(x)/x < φ(a)/a,  x < a.
and thus
Φ(a) = (2/(aπ)) ∫₀^π φ(a sin(θ)) sin(θ) dθ
 < (2/(aπ)) ∫₀^π (φ(a)/a) a sin(θ) sin(θ) dθ
 = (2/(aπ)) φ(a) ∫₀^π sin²(θ) dθ = φ(a)/a
SOLUTION 5.6
The describing function is
N ( A) = k1 + 3A2 k3 /4
Note, however, that the output y(T ) of the nonlinearity for the input e(t) =
A sin(φ ) is
We conclude that the term k2 x22 does not influence N ( A). Still, we can not
just apply the describing function method, since there is a bias term. If the
linear system has integral action, the presence of a constant offset on the
input will have a very big influence after some time.
SOLUTION 5.7
(a) When the saturation works in the linear range, we have the closed
loop dynamics
G(s) = −5ks/(s² + (1 − 5k)s + 25)
which is unstable for k > 0.2. Thus, the state can not remain small.
In saturation, on the other hand, the nonlinearity generates a con-
stant(“step”) input to the system. The final value theorem then gives
lim_{t→∞} y(t) = lim_{s→0} ( −5s/(s² + s + 25) ) = 0
G(iω) = −i5ω/(25 − ω² + iω)
which intersects the negative real axis for ω P = 5 rad/s. The value
of G (iω P ) = −5. Thus, there will be an intersection if k > 0.2. The
frequency of the oscillation is estimated to 5 rad/s, and for fixed k,
the amplitude can be determined directly from Figure 7.10.
(c) The Nyquist curve of the system is shown in Figure 7.10. The func-
tion −1/ N ( A) is also displayed, with an arrow in the direction of in-
creasing A. The Nyquist curve encircles the points Re( G (iω )) > −5,
indicating increased oscillation amplitude. The points to the left of
the intersection are not encircled, indicating stability and a decaying
oscillation amplitude. We can thus expect a stable limit cycle.
SOLUTION 5.8
(a) Introduce θ 0 = arcsin(a/ A) and proceed similarly to the saturation
nonlinearity.
(b) The describing function has maximum for
A* = √2 a

which gives

N(A*) = 2/(π a)
The Nyquist curve crosses the negative real axis for ω = √2, for which the gain is G(i√2) = −2/3. Thus, we should expect no oscillations if

a > 4/(3π).
SOLUTION 5.9
(a) The describing function for a relay with amplitude D is given by
N(A) = 4D/(π A)
−1/ N ( A) lies on the negative real axis. If the Nyquist curve intersects
the negative real axis, the describing function methods will predict a
sustained oscillation
−(4D/(π A)) |G(iωu)| = −1
Thus, given the amplitude A of the oscillation, we estimate the ulti-
mate gain as
Ku = 1/|G(iωu)| = 4D/(π A)
The ultimate period is the period time of the oscillations
Tu = 2π /ω
(b) From the simulation, we estimate the amplitude A = 0.6, which gives Ku ≈ 2.12. The ultimate period can be estimated directly from the plot to be Tu ≈ 2. Note that the estimates have good correspondence with the analytical results (which require a full process model).
SOLUTION 5.10
No solution yet.
SOLUTION 5.11
No solution yet.
SOLUTION 5.12
No solution yet.
SOLUTION 5.13
No solution yet.
Solutions to Chapter 6
SOLUTION 6.1
We would like to write the system equations as
v = G (s)(−u)
u = φ (v)
where φ (⋅) denotes the saturation. Block diagram manipulations give
v = u − ( AR/(Aw A) + BS/(Aw A) ) u
 = ( (AR + BS)/(A Aw) − 1 ) (−u) = G(s)(−u)
Since the saturation element belongs to the sector [0, 1], we invoke the
circle criterion and conclude stability if the Nyquist curve of G (iω ) does
not enter the half plane Re( G (iω )) < −1. This gives the desired condition.
SOLUTION 6.2
The model is given by
    dz/dt = v − (|v|/g(v)) z                        (7.11)
    F = σ0 z + σ1 dz/dt + Fv v                      (7.12)
(a) For any constant velocity v, (7.11) converges to the value
    z = (g(v)/|v|) v = g(v) sign(v)
and F therefore converges to
    F = σ0 g(v) sign(v) + Fv v
|z(t)| ≥ a
    zv = z dz/dt + (|v|/g(v)) z² ≥ z dz/dt = V̇(t)

    Fv = Fv v² + (σ1 ż + σ0 z)(ż + (|v|/g(v)) z)                          (7.13)
       ≥ σ1 ż² + σ0 (|v|/g(v)) z² + (σ1 |v|/g(v) + σ0) z ż                (7.14)
Next, we separate out the storage function derivative and make a comple-
tion of squares to estimate the additional terms
    Fv ≥ σ0 z ż + σ1 ż² + σ0 (|v|/g(v)) z² + σ1 (|v|/g(v)) z ż                      (7.15)
       = V̇ + σ1 (ż + (|v|/(2g(v))) z)² + (σ0 (|v|/g(v)) − σ1 (|v|/(2g(v)))²) z²     (7.16)
Hence Fv ≥ V̇, provided that
    σ0 − σ1 |v|/(4g(v)) > 0
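Equations (7.11)-(7.12) are easy to simulate. In the Matlab sketch below the parameter values, the function g(v), and the velocity profile are all assumed purely for illustration.

% Simulation of the friction model (7.11)-(7.12) for a sinusoidal velocity.
% All parameter values and g(v) are assumed for illustration only.
sigma0 = 1e2; sigma1 = 1; Fvcoef = 0.5;        % assumed parameters
g  = @(v) 1e-2*(1 + 0.5*exp(-(v/0.1).^2));     % assumed Stribeck-type g(v)
v  = @(t) 0.1*sin(t);                          % assumed velocity profile
zdot = @(t,z) v(t) - abs(v(t))./g(v(t)).*z;    % equation (7.11)
[t, z] = ode45(zdot, [0 30], 0);
F = sigma0*z + sigma1*zdot(t, z) + Fvcoef*v(t);    % equation (7.12)
plot(v(t), F), xlabel('v'), ylabel('F')        % friction-velocity hysteresis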
SOLUTION 6.3
(a) The describing function for a relay has been derived in Lecture 6 to be
    N(A) = 4F0/(πA)
Including the viscous term Fv v, the describing function of the friction force becomes
    N(A) = Fv + 4F0/(πA)
SOLUTION 6.4
Recall from the lecture slides that the process is given by
ẋ = v
v̇ = − F + u
The observer is given by
    v̂ = zv + Kv x
    żv = −F̂ + u − Kv v̂
    F̂ = (zF + KF |v̂|) sign(v̂)
    żF = −KF (u − F̂) sign(v̂)
Introduce the estimation errors
    ev = v − v̂,   eF = F − F̂
Then
    ėv = v̇ − (d/dt)v̂ = −F + u − (−F̂ + u − Kv v̂) − Kv v = −eF − Kv ev
    ėF = Ḟ − (d/dt)F̂ = Ḟ − (−KF(u − F̂) + KF(−F̂ + u − Kv v̂ + Kv v)) = Ḟ − KF Kv ev
The term Ḟ is zero (except at zero velocity where it is not well defined).
Putting Ḟ = 0, we obtain
    d/dt [ev; eF] = [−Kv  −1; −Kv KF  0] [ev; eF]                  (7.17)
The characteristic polynomial is
    λ(s) = s² + Kv s − Kv KF
so the error dynamics are asymptotically stable if
    Kv > 0,   −Kv KF > 0,
i.e., if KF < 0.
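A quick numerical check of (7.17) in Matlab; the gain values below are assumed, chosen to satisfy Kv > 0 and −Kv KF > 0.

% Eigenvalues of the observer error dynamics (7.17) for sample gains.
Kv = 2; KF = -1;                  % assumed gains (note that KF < 0 is required)
M  = [-Kv, -1; -Kv*KF, 0];        % system matrix in (7.17)
eig(M)                            % should lie in the left half plane
roots([1, Kv, -Kv*KF])            % same roots, from s^2 + Kv*s - Kv*KF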
Figure 7.11 The nonlinearity used in Solution 6.5: output ±D outside the dead-zone, zero for inputs of magnitude less than D/2.
SOLUTION 6.5
We have already seen in the lecture that the describing function for the function in Figure 7.11 is given by
    N_D(A) = 0                                  for A < D/2
    N_D(A) = (4D/(πA)) √(1 − (D/(2A))²)         for A > D/2
Superposition (N_{f1+f2} = N_{f1} + N_{f2}) gives
SOLUTION 6.6
We have
    ⟨u, y⟩_T = ∫₀^T u y dt = ∫₀^T u sat(u) dt ≥ 0
SOLUTION 6.7
No solution yet.
SOLUTION 6.8
Assume without loss of generality that 0 < u0 < D /2. The input to the
quantizer is u0 + d(t) where d(t) is the dither signal. The output y from
the quantizer is
    y(t) = Q(u0 + d(t)) = 0   if u0 + d(t) < D/2
    y(t) = Q(u0 + d(t)) = D   if u0 + d(t) ≥ D/2
The average of y over one period T of the dither signal is therefore
    y0 = (1/T) · (u0/D) T · D = u0.
Hence the dither signal gives increased accuracy, at least if the signal y
can be treated as constant compared to the frequency of the dither signal.
The method does not work for high-frequency signals y.
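The averaging effect is easy to reproduce in Matlab. The sawtooth dither below, with period T and peak-to-peak amplitude D, is one possible choice of dither signal; the numerical values are assumed.

% Average output of the quantizer with dither, cf. Solution 6.8.
D  = 1;  T = 1e-3;  u0 = 0.3*D;            % assumed values, 0 < u0 < D/2
t  = linspace(0, 100*T, 1e5);
d  = D*(mod(t, T)/T - 0.5);                % sawtooth dither in [-D/2, D/2)
Q  = @(u) D*(u >= D/2);                    % the quantizer
y  = Q(u0 + d);
mean(y)                                    % approximately u0 = 0.3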
SOLUTION 6.9
No solution yet.
SOLUTION 6.10
No solution yet.
Solutions to Chapter 7
SOLUTION 7.1
Let the output of the nonlinearity be u, so that u = f (v).
(a) We have
    u = f(v) = v²,   v ≥ 0
so the inverse is v = f⁻¹(u) = √u for u ≥ 0.
(b) The dead-zone nonlinearity and its inverse are shown in Figure 7.12.
Figure 7.12 The dead-zone nonlinearity u = f(v) (left) and its inverse v = f⁻¹(u) (right).
SOLUTION 7.2
(a) We notice that all state equations but the last one are linear. The last state equation reads
    ẋn = f(x) + g(x)u
The control
    u = h(x, v) = (1/g(x)) (−f(x) + Lx + v)
gives
    ẋn = Lx + v
(You may recognize this as the controller form from the basic control course.) For the control to be well defined, we must require that g(x) ≠ 0 for all x.
(b) The above procedure suggests the control
    u = (1/(b cos(x1))) (−a sin(x1) + l1 x1 + l2 x2 + v)
with l1 = −1, l2 = −2. The control law is well defined for x1 ≠ π/2. This corresponds to the pendulum being horizontal. For x1 = π/2, u has no influence on the system. Notice how the control “blows up” near this singularity.
Extra. You may want to verify by simulations the behaviour of the
modified control
u = sat(h( x, v))
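A Matlab sketch of such a simulation is given below. The parameter values, the saturation level, and the pendulum model ẋ1 = x2, ẋ2 = a sin(x1) + b cos(x1)u (implied by the control law in (b)) are assumptions for illustration only.

% Simulation of the saturated feedback-linearizing control u = sat(h(x,v)).
% Model and parameter values are assumed for illustration.
a = 10; b = 1; l1 = -1; l2 = -2; v = 0; umax = 20;
h    = @(x) (-a*sin(x(1)) + l1*x(1) + l2*x(2) + v)/(b*cos(x(1)));
usat = @(x) max(-umax, min(umax, h(x)));
f    = @(t,x) [x(2); a*sin(x(1)) + b*cos(x(1))*usat(x)];
[t, x] = ode45(f, [0 10], [1; 0]);
plot(t, x(:,1)), xlabel('t'), ylabel('x_1')
% Try initial states closer to x_1 = pi/2, or a smaller umax, to see the
% effect of the singularity and of the saturation.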
Applying the control
    u = −x² − x + v
to the system ẋ = (1 + ε)x² + u gives, for v = 0,
    ẋ = (1 + ε)x² − x² − x = εx² − x,
and note that for x > 1/ε we have ẋ > 0, which implies that the trajectories tend to infinity. Thus, global cancellation is non-robust in the sense that it may require a very precise mathematical model.
SOLUTION 7.3
(a) The sliding surface in a sliding mode design is invariant, i.e., if x(ts) belongs to the sliding surface σ(x) = 0 at time ts, then it belongs to the set σ(x) = 0 for all future times t ≥ ts. Thus, it must hold that
    σ(x) = σ̇(x) = 0
    σ̇(x) = ẋ1 = 0
    u = −(pᵀAx)/(pᵀB) − (µ/(pᵀB)) sign(σ(x))
    σ(x) = pᵀx = 0
u = −( x1 + x2 ) − µ sign( x1 + x2 )
b>0 (7.18)
SOLUTION 7.4
(a) Straightforward manipulations give
    G(s) = K/(sT + 1) · e^(−sL) = 1/(sVm/q(t) + 1) · e^(−sVd/q(t))
(b) The step response gives parameters a = 0.9, L = 1. Using the results
from (a) and a = K L/T we obtain
a = Vd / Vm
L = Vd/ q(t)
K p = 0.9/a = 1
Ti = 3L = 3/ q(t)
SOLUTION 7.5
(a) The pendulum energy is given by
    E(x) = mgl cos(x1) + (Jp/2) x2²
With V(x) = (E(x) − E0)² we get
    V̇(x) = 2(E(x) − E0) (d/dt)E(x)
         = 2(E(x) − E0)(−mgl sin(x1) ẋ1 + Jp x2 ẋ2)
         = 2(E(x) − E0)(−ml x2 cos(x1) u)
[Figure: phase plane (x1, x2) for the pendulum in Solution 7.5.]
SOLUTION 7.6
We get
SOLUTION 7.7
    H = n0 (x1² + u²) + λ1 u
Let us first assume n0 = 1. Then the optimal control signal is
    0 = Hu = 2n0 u + λ1   ⟹   u = −λ1/(2n0)
The adjoint equation is
    λ̇1 = −Hx1 = −2n0 x1
Hence
    ẍ1 = u̇ = −λ̇1/(2n0) = x1
Therefore
    x(t) = c1 e^t + c2 e^(−t)
The boundary conditions give
    c1 + c2 = 1
    c1 e + c2 e^(−1) = 0
This gives c1 = −e^(−2)/(1 − e^(−2)), c2 = 1/(1 − e^(−2)), and the control signal is
    u = ẋ = c1 e^t − c2 e^(−t)
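The boundary values x(0) = 1 and x(1) = 0, implied by the equations above, can be checked directly in Matlab:

% Check of the two-point boundary value solution in Solution 7.7.
c2 = 1/(1 - exp(-2));  c1 = -exp(-2)/(1 - exp(-2));
x  = @(t) c1*exp(t) + c2*exp(-t);
u  = @(t) c1*exp(t) - c2*exp(-t);
[x(0), x(1)]                      % boundary values, should be 1 and 0
t = linspace(0, 1, 200);
plot(t, x(t), t, u(t)), legend('x(t)', 'u(t)')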
SOLUTION 7.8
Suppose φ(x(tf)) is the criterion to be minimized and that Ψ(x(tf)) = 0 are the end constraints to be satisfied at t = tf. Note that L = 0. We have (with α = F/m)
    H = λ1 x3 + λ2 x4 + λ3 α cos u + λ4 (α sin u − g)
λ̇ 1 = 0
λ̇ 2 = 0
λ̇ 3 = −λ 1
λ̇ 4 = −λ 2
SOLUTION 7.9
We get
    H = λ1 x3 + λ2 x4 + λ3 (u2/x5) cos u1 + λ4 ((u2/x5) sin u1 − g) − λ5 γ u2
      = σ(t, u1) u2 + terms independent of u
where σ(t, u1) = (λ3/x5) cos u1 + (λ4/x5) sin u1 − λ5 γ. We get the minimum with
    u2 = umax          if σ < 0
    u2 undetermined    if σ = 0
    u2 = 0             if σ > 0
and
    tan u1 = λ4/λ3     if u2 > 0
    u1 undetermined    if u2 = 0
SOLUTION 7.10
The problem is normal, so we can use n0 = 1. We have
    H = e^(x1²) + x2² + u² + λ1 x2 + λ2 (−x1 − x2³ + (1 + x1)u)
    λ̇1 = −Hx1 = −2x1 e^(x1²) − λ2 (−1 + u)
    λ̇2 = −Hx2 = −2x2 − λ1 + 3x2² λ2
    λ(1) = 0
    ∂H/∂u = 0   ⟹   2u + λ2(1 + x1) = 0   ⟹   u = −(λ2/2)(1 + x1)
(∂²H/∂u² = 2 > 0, hence a minimum). This gives
    ẋ1 = f1 = x2
    ẋ2 = f2 = −x1 − x2³ − (λ2/2)(1 + x1)²
    λ̇1 = f3 = −2x1 e^(x1²) − λ2(−1 + u)
    λ̇2 = f4 = −2x2 − λ1 + 3x2² λ2
    λ1(1) = λ2(1) = 0
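The two-point boundary value problem can be solved numerically, for instance with bvp4c in Matlab. The initial state used below is a placeholder, since the initial condition of the exercise is not restated here.

% Numerical solution of the two-point boundary value problem with bvp4c.
x10 = 0.5; x20 = 0;                              % placeholder initial state
odefun = @(t,y) [ y(2);
                 -y(1) - y(2)^3 - y(4)/2*(1 + y(1))^2;
                 -2*y(1)*exp(y(1)^2) - y(4)*(-1 - y(4)/2*(1 + y(1)));
                 -2*y(2) - y(3) + 3*y(2)^2*y(4) ];
bcfun = @(ya,yb) [ya(1) - x10; ya(2) - x20; yb(3); yb(4)];  % x(0) given, lambda(1) = 0
sol   = bvp4c(odefun, bcfun, bvpinit(linspace(0, 1, 10), [x10; x20; 0; 0]));
u     = -sol.y(4,:)/2 .* (1 + sol.y(1,:));        % u = -(lambda2/2)(1 + x1)
plot(sol.x, u), xlabel('t'), ylabel('u(t)')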
SOLUTION 7.11
(a) We have L = 1, φ = 0, Ψ(tf) = (x1(tf), x2(tf))ᵀ = 0, and tf free. We get
H = n0 + λ 1 x2 + λ 2 u
Hence
    u(t) = 1     if λ2(t) < 0
    u(t) = ?     if λ2(t) = 0
    u(t) = −1    if λ2(t) > 0
The adjoint equations are
    λ̇1 = 0
    λ̇2 = −λ1
    x1 + C1 = x2²/2      (trajectories for u = 1)
    x1 + C2 = −x2²/2     (trajectories for u = −1)
This gives the phase plane in the figure below.
[Figure: trajectories for u = ±1 and the switch curve in the (x1, x2)-plane.]
Now we know that we get to the origin, since then u(t) = −1 above the switch curve and u(t) = 1 below it.
[Figure: the switch curve and the resulting closed-loop phase plane (x1, x2).]
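A Matlab sketch of the resulting bang-bang strategy, assuming the double integrator dynamics ẋ1 = x2, ẋ2 = u implied by the Hamiltonian above; the initial state is chosen arbitrarily.

% Time-optimal bang-bang control of the double integrator: switch curve
% x1 = -x2*|x2|/2, with u = -1 above the curve and u = +1 below it.
N = 5000;  dt = 1e-3;  x = [3; 0];  X = zeros(2, N);
for k = 1:N
    X(:,k) = x;
    sigma = x(1) + x(2)*abs(x(2))/2;      % positive above the switch curve
    u = -sign(sigma);
    if u == 0, u = -sign(x(2)); end       % slide along the switch curve
    x = x + dt*[x(2); u];                 % forward Euler step
end
plot(X(1,:), X(2,:)), xlabel('x_1'), ylabel('x_2')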
SOLUTION 7.12
Since we assume the problem is normal (t f is free so this is not obvious)
we have
    H = 1 + |u| + λ1 x2 + λ2 u.
Minimization with respect to u, subject to |u| ≤ 1, gives
    λ2 > 1     ⟹   u = −1
    |λ2| < 1   ⟹   u = 0
    λ2 < −1    ⟹   u = 1
We also have
    λ̇1 = −Hx1 = 0     ⟹   λ1 = B
    λ̇2 = −Hx2 = −λ1   ⟹   λ2 = A − Bt
SOLUTION 7.13
Alternative 1 Use the Bang-bang theorem (p. 472). Note that (A, B) is controllable and Ψx = [1 0; 0 1] has full rank, hence u(t) is bang-bang. From Theorem 18.6 we know that there are at most n − 1 = 1 switches in u (the eigenvalues of A are −1 and −2, hence real).
Alternative 2 Direct calculation shows
Minimization with respect to u shows that |u| = 3, where the sign is given by the sign of σ(t). From λ̇ = −Aᵀλ and λ(tf) = Ψxᵀµ = µ we get
SOLUTION 7.14
See the solution of Problem 7.13. We have that
SOLUTION 7.15
Minimization of
H = (1 + λ 1 )u
gives
    1 + λ1 ≠ 0 :  no minimum in u
    1 + λ1 = 0 :  all u give minima
This does not prove that all u in fact give minima. It only says that all u(t)
are so far possible minima and we need more information.
But in fact since
    ∫₀¹ u dt = ∫₀¹ ẋ1 dt = x(1) − x(0) = 1
8. Bibliography
Åström, K. J. (1968): Reglerteknik – Olinjära System. TLTH/VBV.
Boyd, S. P. (1997): “Homework assignments in ee375 – advanced analysis of
feedback.” Available from http://www-leland.stanford.edu/class/ee375/.
Khalil, H. K. (1996): Nonlinear Systems, 2nd edition. Prentice Hall, Upper
Saddle River, N.J.
Slotine, J.-J. E. and W. Li (1991): Applied Nonlinear Control. Prentice Hall,
Englewood Cliffs, N.J.