Model Based Code Generation For Nonlinear Model Predictive Control
▶ Control theory
▶ Modeling
▶ Model predictive control
▶ Model based code generation
Control Theory
Control Theory is Rocket Science
Feedback Control
Modeling
Causal Modeling
Lorenz Attractor
d/dt x(t) = −a(y(t) − x(t))
d/dt y(t) = (b − z(t))x(t) − y(t)
d/dt z(t) = x(t)y(t) − c z(t)
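As a concrete illustration, the three ODEs above map directly to a forward-Euler simulation loop. The parameter values a, b, c, the initial state, and the step size below are assumptions for the sketch, not taken from the slides.

```python
import numpy as np

# Right-hand side of the Lorenz system, with the sign convention used above.
def lorenz_rhs(s, a=10.0, b=28.0, c=8.0 / 3.0):
    x, y, z = s
    return np.array([
        -a * (y - x),      # dx/dt
        (b - z) * x - y,   # dy/dt
        x * y - c * z,     # dz/dt
    ])

# Forward-Euler integration (assumed step size and horizon).
def simulate(s0, dt=1e-3, steps=500):
    traj = [np.asarray(s0, dtype=float)]
    for _ in range(steps):
        traj.append(traj[-1] + dt * lorenz_rhs(traj[-1]))
    return np.array(traj)

traj = simulate([1.0, 1.0, 1.0])
```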
Acausal Modeling
Simple DC Motor
Why MPC?
▶ What is MPC?
▶ MPC is a closed-loop implementation of optimal control.
▶ Why not use a PID?
▶ PID controllers do not perform well for highly nonlinear systems.
▶ To control a multi-input multi-output (MIMO) system, several PID controllers must be connected in a non-trivial configuration.
▶ Tuning PID controllers is not an easy task, especially given state and input constraints.
Optimal Control Problem

minimize_u   J(x_0, t_0) = φ(x(t_f)) + ∫_{t_0}^{t_f} L(x(τ), u(τ)) dτ
subject to   ẋ(t) = f(x(t), u(t))
             x(t_0) = x∘
             g_i(x(t), u(t)) = 0,   for i = 1, …, n_g
             h_i(x(t), u(t)) ≤ 0,   for i = 1, …, n_h

where:
▶ x(t) is the state vector
▶ u(t) is the input vector
Barrier Method
▶ Using a particular interior-point algorithm, the barrier method, the inequality constraints are converted to equality constraints with slack variables α:

minimize_{u,α}   φ(x(t_f)) + ∫_{t_0}^{t_f} [L(x(τ), u(τ)) − r^T α(τ)] dτ
subject to   ẋ(t) = f(x(t), u(t))
             x(t_0) = x∘
             g_i(x(t), u(t)) = 0,   for i = 1, …, n_g
             h_i(x(t), u(t)) + α_i(t)² = 0,   for i = 1, …, n_h
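The slack-variable conversion can be shown on a single scalar constraint. The box bound h(x, u) = u − u_max below is a hypothetical example; only the reformulation rule h + α² = 0 comes from the slides.

```python
import numpy as np

# Hypothetical inequality constraint: u <= u_max, written as h(x, u) <= 0.
def h(x, u, u_max=2.0):
    return u - u_max

# Slack alpha such that h(x, u) + alpha**2 = 0. A real alpha exists iff
# h(x, u) <= 0, so the equality form is equivalent to the inequality.
def slack_for(x, u):
    val = h(x, u)
    if val > 0:
        raise ValueError("infeasible: h(x, u) > 0 has no real slack")
    return np.sqrt(-val)

alpha = slack_for(0.0, 1.5)   # u = 1.5 is feasible under the assumed bound
```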
minimize_{u,α}   φ_d(x_N) + Σ_{k=0}^{N−1} [L(x_k, u_k) − r^T α_k]
subject to   x_{k+1} = f_d(x_k, u_k)
             x_0 = x∘
             g_i(x_k, u_k) = 0,   for i = 1, …, n_g
             h_i(x_k, u_k) + α_{ik}² = 0,   for i = 1, …, n_h

where Δτ = (t_f − t_0)/N and:

φ_d(x_N) = φ(x(t_f), t_f)/Δτ
f_d(x_k, u_k) = x_k + f(x_k, u_k)Δτ
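The Euler transcription f_d(x_k, u_k) = x_k + f(x_k, u_k)Δτ is a one-liner in code. The double-integrator model f below is a hypothetical stand-in for the plant; only the discretization rule comes from the slides.

```python
import numpy as np

# Build f_d from a continuous-time model f and step size dtau (forward Euler).
def discretize(f, dtau):
    def f_d(x, u):
        return x + f(x, u) * dtau
    return f_d

# Hypothetical plant: double integrator, xdot = [x2, u].
def f(x, u):
    return np.array([x[1], u])

N = 50
dtau = (1.0 - 0.0) / N      # dtau = (t_f - t_0) / N
f_d = discretize(f, dtau)

# Roll out the discretized dynamics under a constant unit input.
x = np.zeros(2)
for k in range(N):
    x = f_d(x, 1.0)
```

For this linear example the rollout can be checked by hand: the velocity reaches exactly 1.0 after N steps, and the position accumulates to (N − 1)/(2N) = 0.49.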
Optimization Problem
minimize_{u,α}   φ_d(x_N) + Σ_{k=0}^{N−1} [L(x_k, u_k) − r^T α_k]
subject to   x_{k+1} = f_d(x_k, u_k)
             x_0 = x∘
             G(x_k, u_k, α_k) = 0

where:

G(x_k, u_k, α_k) = [g_1(x_k, u_k), …, g_{n_g}(x_k, u_k), h_1(x_k, u_k) + α_{1k}², …, h_{n_h}(x_k, u_k) + α_{n_h k}²]^T
Lagrange Multipliers
▶ Lagrangian:
▶ Optimality conditions:
▶ Hamiltonian:
Optimality Conditions
L_{λ_{k+1}} = 0:   x*_{k+1} = f_d(x*_k, u*_k)
L_{λ_0} = 0:   x*_0 = x∘
L_{x_k} = 0:   λ*_k = H_x(x*_k, u*_k, α*_k, λ*_{k+1}, ν*_k)
for k = 0, …, N − 1
Model Predictive Control
x_{k+1} = f_d(x_k, u_k)
x_0 = x_n
λ_k = H_x(x_k, u_k, α_k, λ_{k+1}, ν_k)
λ_N = ∂φ_d(x_N)/∂x_N
(Ohtsuka 2004)
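The backward costate recursion above can be sketched for a concrete case. The linear dynamics f_d(x, u) = A x + B u and the quadratic costs φ_d = ½‖x_N‖², L = ½‖x_k‖² below are assumptions chosen so that H_x reduces to the closed form x_k + Aᵀλ_{k+1}; they are not the slides' model.

```python
import numpy as np

# Hypothetical linear model and horizon (assumptions for illustration).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([0.0, 0.1])
N = 20

# Forward rollout x_{k+1} = f_d(x_k, u_k), here with zero input.
xs = [np.array([1.0, 0.0])]
for k in range(N):
    xs.append(A @ xs[-1] + B * 0.0)

# Backward sweep: lambda_N = d(phi_d)/dx_N, then lambda_k = H_x(...).
# With the assumed quadratic costs, H_x = x_k + A.T @ lambda_{k+1}.
lam = [None] * (N + 1)
lam[N] = xs[N]
for k in range(N - 1, -1, -1):
    lam[k] = xs[k] + A.T @ lam[k + 1]
```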
Continuation/GMRES Method
F(x_n, U) = [H_u(x_0, u_0, α_0, λ_1, ν_0), H_α(x_0, u_0, α_0, λ_1, ν_0), G(x_0, u_0, α_0), …, H_u(x_{N−1}, u_{N−1}, α_{N−1}, λ_N, ν_{N−1}), H_α(x_{N−1}, u_{N−1}, α_{N−1}, λ_N, ν_{N−1}), G(x_{N−1}, u_{N−1}, α_{N−1})]^T
(Ohtsuka 2004)
Continuation/GMRES Method
▶ Continuation method: instead of solving F(x, U) = 0 directly, find U such that:

Ḟ(x, U) = A_s F(x, U)

where A_s is a matrix with negative eigenvalues.
▶ Now, we have:

F_x ẋ + F_U U̇ = A_s F(x, U)
(Ohtsuka 2004)
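The last relation suggests the numerical scheme: solve F_U U̇ = A_s F − F_x ẋ for U̇ with GMRES, using matrix-free Jacobian-vector products. The toy scalar residual F, the choice A_s = −ζI, and holding x fixed (ẋ = 0) are all simplifying assumptions for this sketch; a real C/GMRES solver stacks H_u, H_α, and G over the horizon as shown above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy residual standing in for the stacked optimality conditions (assumption).
def F(x, U):
    return U ** 3 + U + x   # elementwise; has a unique real root for any x

def continuation_step(x, U, zeta=10.0, dt=0.05, h=1e-7):
    """One continuation step: solve F_U @ dU = -zeta * F(x, U) by GMRES,
    with forward-difference Jacobian-vector products (no explicit Jacobian)."""
    r = F(x, U)
    jvp = lambda v: (F(x, U + h * v) - r) / h
    A = LinearOperator((U.size, U.size), matvec=jvp)
    dU, info = gmres(A, -zeta * r)
    return U + dt * dU

# Drive F toward zero by integrating U forward.
x = np.array([0.5, -1.0, 2.0])
U = np.zeros(3)
for _ in range(100):
    U = continuation_step(x, U)
```

With A_s = −ζI, each step contracts the residual by roughly (1 − ζ·dt), so after enough steps U approaches a root of F(x, ·) without ever forming or factorizing the Jacobian.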
Example