
EE364a, Winter 2007-08

Prof. S. Boyd

EE364a Homework 4 solutions


4.11 Problems involving ℓ1- and ℓ∞-norms. Formulate the following problems as LPs. Explain in detail the relation between the optimal solution of each problem and the solution of its equivalent LP.

(a) Minimize ‖Ax − b‖∞ (ℓ∞-norm approximation).

(b) Minimize ‖Ax − b‖1 (ℓ1-norm approximation).

(c) Minimize ‖Ax − b‖1 subject to ‖x‖∞ ≤ 1.

(d) Minimize ‖x‖1 subject to ‖Ax − b‖∞ ≤ 1.

(e) Minimize ‖Ax − b‖1 + ‖x‖∞.

In each problem, A ∈ R^{m×n} and b ∈ R^m are given. (See §6.1 for more problems involving approximation and constrained approximation.)
Solution.
(a) Equivalent to the LP

    minimize    t
    subject to  Ax − b ⪯ t·1
                −(Ax − b) ⪯ t·1

in the variables x ∈ R^n, t ∈ R. To see the equivalence, assume x is fixed in this problem, and we optimize only over t. The constraints say that

    −t ≤ a_k^T x − b_k ≤ t

for each k, i.e., t ≥ |a_k^T x − b_k|, i.e.,

    t ≥ max_k |a_k^T x − b_k| = ‖Ax − b‖∞.

Clearly, if x is fixed, the optimal value of the LP is p*(x) = ‖Ax − b‖∞. Therefore optimizing over t and x simultaneously is equivalent to the original problem.
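As a quick sanity check, the epigraph LP and the original ℓ∞-norm problem can be solved side by side in CVX (the modeling tool used later in these solutions). This is only an illustrative sketch: the data A and b below are random placeholders, not part of the exercise.

% sketch: compare the epigraph LP of part (a) with direct inf-norm minimization
m = 20; n = 5;
A = randn(m,n); b = randn(m,1);
cvx_begin
variable xlp(n)
variable t
minimize( t )
subject to
A*xlp - b <= t*ones(m,1);
-(A*xlp - b) <= t*ones(m,1);
cvx_end
lp_optval = cvx_optval;
cvx_begin
variable xdirect(n)
minimize( norm(A*xdirect - b, inf) )
cvx_end
% lp_optval and cvx_optval should agree to within solver tolerance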
(b) Equivalent to the LP

    minimize    1^T s
    subject to  Ax − b ⪯ s
                −(Ax − b) ⪯ s

with variables x ∈ R^n and s ∈ R^m. Assume x is fixed in this problem, and we optimize only over s. The constraints say that

    −s_k ≤ a_k^T x − b_k ≤ s_k

for each k, i.e., s_k ≥ |a_k^T x − b_k|. The objective function of the LP is separable, so we achieve the optimum over s by choosing

    s_k = |a_k^T x − b_k|,

and obtain the optimal value p*(x) = ‖Ax − b‖1. Therefore optimizing over x and s simultaneously is equivalent to the original problem.
(c) Equivalent to the LP

    minimize    1^T y
    subject to  −y ⪯ Ax − b ⪯ y
                −1 ⪯ x ⪯ 1,

with variables x ∈ R^n and y ∈ R^m.

(d) Equivalent to the LP

    minimize    1^T y
    subject to  −y ⪯ x ⪯ y
                −1 ⪯ Ax − b ⪯ 1

with variables x ∈ R^n and y ∈ R^n.
Another reformulation is to write x as the difference of two nonnegative vectors x = x⁺ − x⁻, and to express the problem as

    minimize    1^T x⁺ + 1^T x⁻
    subject to  −1 ⪯ Ax⁺ − Ax⁻ − b ⪯ 1
                x⁺ ⪰ 0,  x⁻ ⪰ 0,

with variables x⁺ ∈ R^n and x⁻ ∈ R^n.

(e) Equivalent to the LP

    minimize    1^T y + t
    subject to  −y ⪯ Ax − b ⪯ y
                −t·1 ⪯ x ⪯ t·1,

with variables x ∈ R^n, y ∈ R^m, and t ∈ R.
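For concreteness, the LP in part (e) might be entered in CVX as follows; again this is only a sketch, with random placeholder data A and b.

% sketch: the LP of part (e), with auxiliary variables y and t
m = 20; n = 5;
A = randn(m,n); b = randn(m,1);
cvx_begin
variable x(n)
variable y(m)
variable t
minimize( sum(y) + t )
subject to
A*x - b <= y;
-(A*x - b) <= y;
x <= t*ones(n,1);
-x <= t*ones(n,1);
cvx_end
% the optimal value equals the minimum of norm(A*x-b,1) + norm(x,inf)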

4.16 Minimum fuel optimal control. We consider a linear dynamical system with state x(t) ∈ R^n, t = 0, . . . , N, and actuator or input signal u(t) ∈ R, for t = 0, . . . , N − 1. The dynamics of the system is given by the linear recurrence

    x(t + 1) = Ax(t) + bu(t),    t = 0, . . . , N − 1,

where A ∈ R^{n×n} and b ∈ R^n are given. We assume that the initial state is zero, i.e., x(0) = 0.
The minimum fuel optimal control problem is to choose the inputs u(0), . . . , u(N − 1) so as to minimize the total fuel consumed, which is given by

    F = Σ_{t=0}^{N−1} f(u(t)),

subject to the constraint that x(N) = xdes, where N is the (given) time horizon, and xdes ∈ R^n is the (given) desired final or target state. The function f : R → R is the fuel use map for the actuator, and gives the amount of fuel used as a function of the actuator signal amplitude. In this problem we use

    f(a) = |a|,        |a| ≤ 1,
           2|a| − 1,   |a| > 1.

This means that fuel use is proportional to the absolute value of the actuator signal, for actuator signals between −1 and 1; for larger actuator signals the marginal fuel efficiency is half.
Formulate the minimum fuel optimal control problem as an LP.
Solution. The minimum fuel optimal control problem is equivalent to the LP

    minimize    1^T t
    subject to  Hu = xdes
                −y ⪯ u ⪯ y
                t ⪰ y
                t ⪰ 2y − 1,

with variables u ∈ R^N, y ∈ R^N, and t ∈ R^N, where

    H = [ A^{N−1}b   A^{N−2}b   · · ·   Ab   b ].

There are several other possible LP formulations. For example, we can keep the state trajectory x(0), . . . , x(N) as optimization variables, and replace the equality constraint above, Hu = xdes, with the equality constraints

    x(t + 1) = Ax(t) + bu(t),    t = 0, . . . , N − 1,
    x(0) = 0,    x(N) = xdes.

In this formulation, the variables are u ∈ R^N, x(0), . . . , x(N) ∈ R^n, as well as y ∈ R^N and t ∈ R^N.
Yet another variation is to not use the intermediate variable y introduced above, and to express the problem just in terms of the variables t and u: we minimize 1^T t subject to Hu = xdes and

    −t ⪯ u ⪯ t,    2u − 1 ⪯ t,    −2u − 1 ⪯ t,

with variables u ∈ R^N and t ∈ R^N.
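A CVX sketch of the first formulation is given below; H is built column by column. The data A, b, xdes here are random placeholders (exercise 5 in the additional exercises solves a specific instance), and A is rescaled only so that its powers stay well scaled in this toy example.

% sketch: build H = [A^(N-1)*b ... A*b b] and solve the minimum fuel LP
n = 3; N = 30;
A = randn(n); A = A/max(abs(eig(A)));   % illustrative data, spectral radius 1
b = randn(n,1); xdes = randn(n,1);
H = zeros(n,N);
for k = 1:N
H(:,k) = A^(N-k)*b;                     % column k of H is A^(N-k) b
end
cvx_begin
variable u(N)
variable y(N)
variable t(N)
minimize( sum(t) )
subject to
H*u == xdes;
-y <= u;
u <= y;
t >= y;
t >= 2*y - 1;
cvx_end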



4.29 Maximizing probability of satisfying a linear inequality. Let c be a random variable in R^n, normally distributed with mean c̄ and covariance matrix R. Consider the problem

    maximize    prob(c^T x ≥ α)
    subject to  Fx ⪯ g,  Ax = b.

Find the conditions under which this is equivalent to a convex or quasiconvex optimization problem. When these conditions hold, formulate the problem as a QP, QCQP, or SOCP (if the problem is convex), or explain how you can solve it by solving a sequence of QP, QCQP, or SOCP feasibility problems (if the problem is quasiconvex).
Solution. Define u = c^T x, a scalar random variable, normally distributed with mean E u = c̄^T x and variance E(u − E u)^2 = x^T Rx. The random variable

    (u − c̄^T x)/√(x^T Rx)

has a normal distribution with mean zero and unit variance, so

    prob(u ≥ α) = prob( (u − c̄^T x)/√(x^T Rx) ≥ (α − c̄^T x)/√(x^T Rx) ) = 1 − Φ( (α − c̄^T x)/√(x^T Rx) ),

where Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^{−t^2/2} dt is the standard normal CDF.
To maximize prob(u ≥ α), we can minimize (α − c̄^T x)/√(x^T Rx) (since Φ is increasing), i.e., solve the problem

    maximize    (c̄^T x − α)/√(x^T Rx)                    (1)
    subject to  Fx ⪯ g
                Ax = b.

This is not a convex optimization problem, since the objective is not concave.
The problem can, however, be solved by quasiconvex optimization, provided a condition holds. (We'll derive the condition below.) The objective exceeds a value t ≥ 0 if and only if

    c̄^T x − α ≥ t √(x^T Rx)

holds. This last inequality is convex, in fact a second-order cone constraint, provided t ≥ 0. So now we can state the condition: there exists a feasible x for which c̄^T x ≥ α. (This condition is easily checked as an LP feasibility problem.) This condition, by the way, can also be stated as: there exists a feasible x for which prob(u ≥ α) ≥ 1/2.
Assume that this condition holds. This means that the optimal value of our original problem is at least 0.5, and the optimal value of the problem (1) is at least 0. This means that we can state our problem as

    maximize    t
    subject to  Fx ⪯ g,  Ax = b
                c̄^T x − α ≥ t √(x^T Rx),

where we can assume that t ≥ 0. This can be solved by bisection on t, by solving an SOCP feasibility problem at each step. In other words: the function (c̄^T x − α)/√(x^T Rx) is quasiconcave, provided it is nonnegative.
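One bisection step might look as follows in CVX. This is only a sketch: the data R, cbar, F, g, A, b, alpha and the current bisection point t are assumed to be defined already, and Rhalf denotes any matrix square root of R.

% sketch: SOCP feasibility check for a fixed t >= 0 in the bisection
Rhalf = sqrtm(R);                          % R = Rhalf*Rhalf for symmetric PSD R
cvx_begin quiet
variable x(n)
F*x <= g;
A*x == b;
cbar'*x - alpha >= t*norm(Rhalf*x);
cvx_end
feasible = strcmp(cvx_status, 'Solved');
% if feasible, increase t (the optimal value of (1) is at least t); otherwise decrease t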
In fact, provided the condition above holds (i.e., there exists a feasible x with c̄^T x ≥ α), we can solve the problem (1) via convex optimization. We make the change of variables

    y = x/(c̄^T x − α),    s = 1/(c̄^T x − α),

so x = y/s. This yields the problem

    minimize    y^T Ry
    subject to  Fy ⪯ gs
                Ay = bs
                c̄^T y − αs = 1
                s ≥ 0.
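A CVX sketch of this transformed problem is given below. The data are synthetic placeholders, constructed so that the condition above (a feasible x with c̄^T x > α) holds by design.

% sketch: the convex problem in (y, s) obtained by the change of variables
n = 10; p = 5; q = 3;
cbar = randn(n,1); R = eye(n);
F = randn(p,n); A = randn(q,n);
x0 = randn(n,1);                  % construct the data so that x0 is strictly feasible
g = F*x0 + 1; b = A*x0;
alpha = cbar'*x0 - 1;             % ensures cbar'*x0 > alpha
cvx_begin
variable y(n)
variable s
minimize( quad_form(y, R) )
subject to
F*y <= g*s;
A*y == b*s;
cbar'*y - alpha*s == 1;
s >= 0;
cvx_end
x_opt = y/s;                      % recover x (assuming s > 0 at the optimum)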
4.30 A heated fluid at temperature T (degrees above ambient temperature) flows in a pipe with fixed length and circular cross section with radius r. A layer of insulation, with thickness w ≪ r, surrounds the pipe to reduce heat loss through the pipe walls. The design variables in this problem are T, r, and w.
The heat loss is (approximately) proportional to Tr/w, so over a fixed lifetime, the energy cost due to heat loss is given by α1 Tr/w. The cost of the pipe, which has a fixed wall thickness, is approximately proportional to the total material, i.e., it is given by α2 r. The cost of the insulation is also approximately proportional to the total insulation material, i.e., α3 rw (using w ≪ r). The total cost is the sum of these three costs.
The heat flow down the pipe is entirely due to the flow of the fluid, which has a fixed velocity, i.e., it is given by α4 T r^2. The constants αi are all positive, as are the variables T, r, and w.
Now the problem: maximize the total heat flow down the pipe, subject to an upper limit Cmax on total cost, and the constraints

    Tmin ≤ T ≤ Tmax,    rmin ≤ r ≤ rmax,    wmin ≤ w ≤ wmax,    w ≤ 0.1r.

Express this problem as a geometric program.


Solution. The problem is

    maximize    α4 T r^2
    subject to  α1 T r w^{-1} + α2 r + α3 rw ≤ Cmax
                Tmin ≤ T ≤ Tmax
                rmin ≤ r ≤ rmax
                wmin ≤ w ≤ wmax
                w ≤ 0.1r.

This is equivalent to the GP

    minimize    (1/α4) T^{-1} r^{-2}
    subject to  (α1/Cmax) T r w^{-1} + (α2/Cmax) r + (α3/Cmax) rw ≤ 1
                (1/Tmax) T ≤ 1,    Tmin T^{-1} ≤ 1
                (1/rmax) r ≤ 1,    rmin r^{-1} ≤ 1
                (1/wmax) w ≤ 1,    wmin w^{-1} ≤ 1
                10 w r^{-1} ≤ 1

(with variables T, r, w).
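In CVX's geometric programming mode this GP can be entered essentially verbatim. The sketch below uses made-up constants αi, Cmax, and bounds; none of these numbers come from the exercise.

% sketch: the pipe insulation GP in CVX gp mode, with placeholder data
a1 = 1; a2 = 1; a3 = 1; a4 = 1; Cmax = 500;
Tmin = 10; Tmax = 100; rmin = 0.05; rmax = 0.5; wmin = 0.005; wmax = 0.1;
cvx_begin gp
variables T r w
maximize( a4*T*r^2 )
subject to
a1*T*r/w + a2*r + a3*r*w <= Cmax;
Tmin <= T; T <= Tmax;
rmin <= r; r <= rmax;
wmin <= w; w <= wmax;
w <= 0.1*r;
cvx_end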
5.1 A simple example. Consider the optimization problem

    minimize    x^2 + 1
    subject to  (x − 2)(x − 4) ≤ 0,

with variable x ∈ R.
(a) Analysis of primal problem. Give the feasible set, the optimal value, and the optimal solution.
(b) Lagrangian and dual function. Plot the objective x^2 + 1 versus x. On the same plot, show the feasible set, optimal point and value, and plot the Lagrangian L(x, λ) versus x for a few positive values of λ. Verify the lower bound property (p* ≥ inf_x L(x, λ) for λ ≥ 0). Derive and sketch the Lagrange dual function g.
(c) Lagrange dual problem. State the dual problem, and verify that it is a concave maximization problem. Find the dual optimal value and dual optimal solution λ*. Does strong duality hold?
(d) Sensitivity analysis. Let p*(u) denote the optimal value of the problem

    minimize    x^2 + 1
    subject to  (x − 2)(x − 4) ≤ u,

as a function of the parameter u. Plot p*(u). Verify that dp*(0)/du = −λ*.
Solution.
(a) The feasible set is the interval [2, 4]. The (unique) optimal point is x* = 2, and the optimal value is p* = 5.
The plot shows f0 and f1.

(b) The Lagrangian is

    L(x, λ) = x^2 + 1 + λ(x − 2)(x − 4) = (1 + λ)x^2 − 6λx + (1 + 8λ).

The plot shows the Lagrangian L(x, λ) = f0 + λf1 as a function of x for different values of λ ≥ 0. Note that the minimum value of L(x, λ) over x (i.e., g(λ)) is always less than or equal to p*. It increases as λ varies from 0 toward 2, reaches its maximum at λ = 2, and then decreases again as λ increases above 2. We have equality p* = g(λ) for λ = 2.
(The curves shown in the plot are f0 and f0 + λf1 for λ = 1.0, 2.0, 3.0.)

For λ > −1, the Lagrangian reaches its minimum at x = 3λ/(1 + λ). For λ ≤ −1 it is unbounded below. Thus

    g(λ) = −9λ^2/(1 + λ) + 1 + 8λ,    λ > −1,
           −∞,                        λ ≤ −1,

which is plotted below.


We can verify that the dual function is concave, that its value is equal to p* = 5 for λ = 2, and less than p* for other values of λ.
(c) The Lagrange dual problem is

    maximize    −9λ^2/(1 + λ) + 1 + 8λ
    subject to  λ ≥ 0.

The dual optimum occurs at λ* = 2, with d* = 5. So for this example we can directly observe that strong duality holds (as it must, since Slater's constraint qualification is satisfied).
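A quick numerical check of parts (b) and (c), as a sketch in plain Matlab:

% sketch: evaluate g(lambda) = -9*lambda^2/(1+lambda) + 1 + 8*lambda on a grid
% and confirm it is maximized at lambda = 2 with value 5 = p_star
lambda = linspace(0, 6, 601);
g = -9*lambda.^2./(1 + lambda) + 1 + 8*lambda;
[d_star, idx] = max(g);              % expect d_star = 5 at lambda(idx) = 2
plot(lambda, g), xlabel('lambda'), ylabel('g(lambda)')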
(d) The perturbed problem is infeasible for u < −1, since inf_x (x^2 − 6x + 8) = −1.
For u ≥ −1, the feasible set is the interval

    [3 − √(1 + u), 3 + √(1 + u)],

given by the two roots of x^2 − 6x + 8 = u. For −1 ≤ u ≤ 8 the optimum is x*(u) = 3 − √(1 + u). For u ≥ 8, the optimum is the unconstrained minimum of f0, i.e., x*(u) = 0. In summary,

    p*(u) = ∞,                     u < −1,
            11 + u − 6√(1 + u),    −1 ≤ u ≤ 8,
            1,                     u ≥ 8.
The figure shows the optimal value function p*(u) and its epigraph.

Finally, we note that p*(u) is a differentiable function of u, and that

    dp*(0)/du = −2 = −λ*.
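A one-line numerical check of this sensitivity result (a sketch):

% sketch: finite-difference check that dp*(0)/du is approximately -lambda* = -2,
% using p*(u) = 11 + u - 6*sqrt(1+u) for -1 <= u <= 8
h = 1e-4;
pstar = @(u) 11 + u - 6*sqrt(1 + u);
deriv = (pstar(h) - pstar(-h))/(2*h)   % approximately -2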

Solutions to additional exercises


1. Minimizing a function over the probability simplex. Find simple necessary and sufficient conditions for x ∈ R^n to minimize a differentiable convex function f over the probability simplex, {x | 1^T x = 1, x ⪰ 0}.
Solution. The simple basic optimality condition is that x is feasible, i.e., x ⪰ 0, 1^T x = 1, and that ∇f(x)^T (y − x) ≥ 0 for all feasible y. We'll first show this is equivalent to

    min_{i=1,...,n} ∇f(x)_i ≥ ∇f(x)^T x.

To see this, suppose that ∇f(x)^T (y − x) ≥ 0 for all feasible y. Then in particular, for y = e_i, we have ∇f(x)_i ≥ ∇f(x)^T x, which is what we have above. To show the other way, suppose that ∇f(x)_i ≥ ∇f(x)^T x holds for i = 1, . . . , n. Let y be feasible, i.e., y ⪰ 0, 1^T y = 1. Then multiplying ∇f(x)_i ≥ ∇f(x)^T x by y_i and summing, we get

    Σ_{i=1}^n y_i ∇f(x)_i ≥ Σ_{i=1}^n y_i ∇f(x)^T x = ∇f(x)^T x.

The lefthand side is y^T ∇f(x), so we have ∇f(x)^T (y − x) ≥ 0.
Now we can simplify even further. The condition above can be written as

    min_{i=1,...,n} ∂f/∂x_i ≥ Σ_{i=1}^n x_i ∂f/∂x_i.

But since 1^T x = 1, x ⪰ 0, we have

    min_{i=1,...,n} ∂f/∂x_i ≤ Σ_{i=1}^n x_i ∂f/∂x_i,

and it follows that

    min_{i=1,...,n} ∂f/∂x_i = Σ_{i=1}^n x_i ∂f/∂x_i.

The righthand side is a mixture of the ∂f/∂x_i terms and equals the minimum of all of the terms. This is possible only if x_k = 0 whenever ∂f/∂x_k > min_i ∂f/∂x_i.
Thus we can write the (necessary and sufficient) optimality condition as 1^T x = 1, x ⪰ 0, and, for each k,

    x_k > 0  ⟹  ∂f/∂x_k = min_{i=1,...,n} ∂f/∂x_i.

In particular, for k's with x_k > 0, the ∂f/∂x_k are all equal.
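A small numerical illustration of this condition, as a sketch assuming CVX; the function f(x) = ‖x − a‖^2 with an arbitrary vector a is used only as an example.

% sketch: minimize ||x - a||^2 over the probability simplex and check the
% condition: grad_k equals min_i grad_i whenever x_k > 0
n = 5; a = randn(n,1);
cvx_begin quiet
variable x(n)
minimize( sum_square(x - a) )
subject to
sum(x) == 1;
x >= 0;
cvx_end
grad = 2*(x - a);                % gradient of f at the computed solution
gmin = min(grad);
ok = all(abs(grad(x > 1e-5) - gmin) < 1e-4)   % should be true, up to tolerance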
2. Complex least-norm problem. We consider the complex least ℓp-norm problem

    minimize    ‖x‖_p
    subject to  Ax = b,

where A ∈ C^{m×n}, b ∈ C^m, and the variable is x ∈ C^n. Here ‖·‖_p denotes the ℓp-norm on C^n, defined as

    ‖x‖_p = ( Σ_{i=1}^n |x_i|^p )^{1/p}

for p ≥ 1, and ‖x‖∞ = max_{i=1,...,n} |x_i|. We assume A is full rank, and m < n.
(a) Formulate the complex least ℓ2-norm problem as a least ℓ2-norm problem with real problem data and variable. Hint. Use z = (Re x, Im x) ∈ R^{2n} as the variable.
(b) Formulate the complex least ℓ∞-norm problem as an SOCP.
(c) Solve a random instance of both problems with m = 30 and n = 100. To generate the matrix A, you can use the Matlab command A = randn(m,n) + i*randn(m,n). Similarly, use b = randn(m,1) + i*randn(m,1) to generate the vector b. Use the Matlab command scatter to plot the optimal solutions of the two problems on the complex plane, and comment (briefly) on what you observe. You can solve the problems using the cvx functions norm(x,2) and norm(x,inf), which are overloaded to handle complex arguments. To utilize this feature, you will need to declare variables to be complex in the variable statement. (In particular, you do not have to manually form or solve the SOCP from part (b).)
Solution.
(a) Define z = (Re x, Im x) ∈ R^{2n}, so ‖x‖_2^2 = ‖z‖_2^2. The complex linear equation Ax = b is the same as Re(Ax) = Re b, Im(Ax) = Im b, which in turn can be expressed as the set of linear equations

    [ Re A   −Im A ]       [ Re b ]
    [ Im A    Re A ]  z  = [ Im b ].

Thus, the complex least ℓ2-norm problem can be expressed as

    minimize    ‖z‖_2
    subject to  [ Re A   −Im A ]       [ Re b ]
                [ Im A    Re A ]  z  = [ Im b ].

(This is readily solved analytically.)


(b) Using the epigraph formulation, with new variable t, we write the problem as

    minimize    t
    subject to  ‖ (z_i, z_{n+i}) ‖_2 ≤ t,    i = 1, . . . , n,
                [ Re A   −Im A ]       [ Re b ]
                [ Im A    Re A ]  z  = [ Im b ].

This is an SOCP with n second-order cone constraints (in R^3).
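Although part (c) below relies on CVX's support for complex variables, the SOCP can also be formed explicitly in the real variable z; here is a sketch with small random placeholder data.

% sketch: the SOCP of part (b) written explicitly in z = (Re x, Im x)
m = 4; n = 10;
Are = randn(m,n); Aim = randn(m,n); bre = randn(m,1); bim = randn(m,1);
cvx_begin
variable z(2*n)
variable t
minimize( t )
subject to
[Are -Aim; Aim Are]*z == [bre; bim];
for i = 1:n
norm([z(i); z(n+i)]) <= t;           % |x_i| <= t
end
cvx_end
x = z(1:n) + 1i*z(n+1:2*n);              % recover the complex solution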



(c) The following Matlab code solves the two problems.

% complex minimum norm problem
randn('state',0);
m = 30; n = 100;
% generate matrix A and vector b
Are = randn(m,n);  Aim = randn(m,n);
bre = randn(m,1);  bim = randn(m,1);
A = Are + i*Aim;
b = bre + i*bim;
% 2-norm problem (analytical solution)
Atot = [Are -Aim; Aim Are];
btot = [bre; bim];
z_2 = Atot'*inv(Atot*Atot')*btot;
x_2 = z_2(1:100) + i*z_2(101:200);
% 2-norm problem solution with cvx
cvx_begin
variable x(n) complex
minimize( norm(x) )
subject to
A*x == b;
cvx_end
% inf-norm problem solution with cvx
cvx_begin
variable xinf(n) complex
minimize( norm(xinf,Inf) )
subject to
A*xinf == b;
cvx_end
% scatter plot
figure(1)
scatter(real(x),imag(x)), hold on,
scatter(real(xinf),imag(xinf),[],'filled'), hold off,
axis([-0.2 0.2 -0.2 0.2]), axis square,
xlabel('Re x'); ylabel('Im x');
The plot of the components of the optimal p = 2 (empty circles) and p = ∞ (filled circles) solutions is presented below. The optimal p = ∞ solution minimizes the objective max_{i=1,...,n} |x_i| subject to Ax = b, and the scatter plot of x_i shows that almost all of them are concentrated around a circle in the complex plane. This should be expected since we are minimizing the maximum magnitude of the x_i, and thus almost all of the x_i's should have about equal magnitude |x_i|.

3. Numerical perturbation analysis example. Consider the quadratic program

    minimize    x_1^2 + 2x_2^2 − x_1 x_2 − x_1
    subject to  x_1 + 2x_2 ≤ u_1
                x_1 − 4x_2 ≤ u_2
                5x_1 + 76x_2 ≤ 1,

with variables x_1, x_2, and parameters u_1, u_2.
(a) Solve this QP, for parameter values u_1 = −2, u_2 = −3, to find optimal primal variable values x_1* and x_2*, and optimal dual variable values λ_1*, λ_2* and λ_3*. Let p* denote the optimal objective value. Verify that the KKT conditions hold for the optimal primal and dual variables you found (within reasonable numerical accuracy).
Hint: See §3.6 of the CVX users' guide to find out how to retrieve optimal dual variables. To specify the quadratic objective, use quad_form().
(b) We will now solve some perturbed versions of the QP, with

    u_1 = −2 + δ_1,    u_2 = −3 + δ_2,

where δ_1 and δ_2 each take values from {−0.1, 0, 0.1}. (There are a total of nine such combinations, including the original problem with δ_1 = δ_2 = 0.) For each combination of δ_1 and δ_2, make a prediction p_pred of the optimal value of the perturbed QP, and compare it to p_exact, the exact optimal value of the perturbed QP (obtained by solving the perturbed QP). Put your results in the two righthand columns in a table with the form shown below. Check that the inequality p_pred ≤ p_exact holds.

    δ_1     δ_2     p_pred    p_exact
     0       0
     0      −0.1
     0       0.1
    −0.1     0
    −0.1    −0.1
    −0.1     0.1
     0.1     0
     0.1    −0.1
     0.1     0.1

Solution.
(a) The following Matlab code sets up the simple QP and solves it using CVX:

Q = [1 -1/2; -1/2 2];
f = [-1 0]';
A = [1 2; 1 -4; 5 76];
b = [-2 -3 1]';

cvx_begin
variable x(2)
dual variable lambda
minimize(quad_form(x,Q)+f'*x)
subject to
lambda: A*x <= b
cvx_end
p_star = cvx_optval

When we run this, we find the optimal objective value is p* = 8.22 and the optimal point is x_1* = −2.33, x_2* = 0.17. (This optimal point is unique since the objective is strictly convex.) A set of optimal dual variables is λ_1* = 1.46, λ_2* = 3.77 and λ_3* = 0.12. (The dual optimal point is unique too, but it's harder to show this, and it doesn't matter anyway.)


The KKT conditions are

    x_1 + 2x_2 ≤ u_1,    x_1 − 4x_2 ≤ u_2,    5x_1 + 76x_2 ≤ 1,
    λ_1 ≥ 0,    λ_2 ≥ 0,    λ_3 ≥ 0,
    λ_1(x_1 + 2x_2 − u_1) = 0,    λ_2(x_1 − 4x_2 − u_2) = 0,    λ_3(5x_1 + 76x_2 − 1) = 0,
    2x_1 − x_2 − 1 + λ_1 + λ_2 + 5λ_3 = 0,
    4x_2 − x_1 + 2λ_1 − 4λ_2 + 76λ_3 = 0.

We check these numerically. The dual variables λ_1, λ_2 and λ_3 are all greater than zero, and the quantities

    A*x - b
    2*Q*x + f + A'*lambda

are found to be very small. Thus the KKT conditions are verified.
(b) The predicted optimal value is given by

    p_pred = p* − λ_1* δ_1 − λ_2* δ_2.

The following Matlab code fills in the table:

arr_i = [0 -1 1];
delta = 0.1;
pa_table = [];
for i = arr_i
for j = arr_i
p_pred = p_star - [lambda(1) lambda(2)]*[i; j]*delta;
cvx_begin
variable x(2)
minimize(quad_form(x,Q)+f'*x)
subject to
A*x <= b+[i;j;0]*delta
cvx_end
p_exact = cvx_optval;
pa_table = [pa_table; i*delta j*delta p_pred p_exact];
end
end

The values obtained are

    δ_1     δ_2     p_pred    p_exact
     0       0       8.22      8.22
     0      −0.1     8.60      8.70
     0       0.1     7.85      7.98
    −0.1     0       8.34      8.57
    −0.1    −0.1     8.75      8.82
    −0.1     0.1     7.99      8.32
     0.1     0       8.08      8.22
     0.1    −0.1     8.45      8.71
     0.1     0.1     7.70      7.75

The inequality p_pred ≤ p_exact is verified to be true in all cases.


4. FIR filter design. Consider the (symmetric, linear phase) FIR filter described by

    H(ω) = a_0 + Σ_{k=1}^N a_k cos(kω).

The design variables are the real coefficients a = (a_0, . . . , a_N) ∈ R^{N+1}. In this problem we will explore the design of a low-pass filter, with specifications:
For 0 ≤ ω ≤ π/3, 0.89 ≤ H(ω) ≤ 1.12, i.e., the filter has about 1dB ripple in the passband [0, π/3].
For ωc ≤ ω ≤ π, |H(ω)| ≤ α. In other words, the filter achieves an attenuation given by α in the stopband [ωc, π]. ωc is called the filter cutoff frequency.
These specifications are depicted graphically in the figure below.

(The figure shows H(ω) with the passband limits 1.12, 1.00, 0.89 on [0, π/3] and the stopband bound α on [ωc, π].)

(a) Suppose we fix ωc and N, and wish to maximize the stop-band attenuation, i.e., minimize α, such that the specifications above can be met. Explain how to pose this as a convex optimization problem.
(b) Suppose we fix N and α, and want to minimize ωc, i.e., we set the stopband attenuation and filter length, and wish to minimize the transition band (between π/3 and ωc). Explain how to pose this problem as a quasiconvex optimization problem.
(c) Now suppose we fix ωc and α, and wish to find the smallest N that can meet the specifications, i.e., we seek the shortest length FIR filter that can meet the specifications. Can this problem be posed as a convex or quasiconvex problem? If so, explain how. If you think it cannot be, briefly and informally explain why.
(d) Plot the optimal tradeoff curve of attenuation (α) versus cutoff frequency (ωc) for N = 7. Is the set of achievable specifications convex? Briefly explain any interesting features, e.g., flat portions, of the optimal tradeoff curve.
For this subproblem, you may sample the constraints in frequency, which means the following. Choose K ≫ N (perhaps K ≈ 10N), and set ω_k = kπ/K, k = 0, . . . , K. Then replace the specifications with
For ω_k with 0 ≤ ω_k ≤ π/3, 0.89 ≤ H(ω_k) ≤ 1.12.
For ω_k with ωc ≤ ω_k ≤ π, |H(ω_k)| ≤ α.
With this approximation, the problem in part (a) becomes an LP, which allows you to solve part (d) numerically.
Solution.
(a) The first problem can be expressed as

    minimize    α
    subject to  f_1(a) ≤ 1.12
                f_2(a) ≥ 0.89                              (2)
                f_3(a) ≤ α
                f_4(a) ≥ −α,

where

    f_1(a) = sup_{0 ≤ ω ≤ π/3} H(ω),    f_2(a) = inf_{0 ≤ ω ≤ π/3} H(ω),
    f_3(a) = sup_{ωc ≤ ω ≤ π} H(ω),     f_4(a) = inf_{ωc ≤ ω ≤ π} H(ω).

Problem (2) is convex in the variables a and α, because f_1 and f_3 are convex functions (pointwise suprema of affine functions), and f_2 and f_4 are concave functions (pointwise infima of affine functions).

(b) This problem can be expressed as

    minimize    f_5(a)
    subject to  f_1(a) ≤ 1.12
                f_2(a) ≥ 0.89,

where f_1 and f_2 are the same functions as above, and

    f_5(a) = inf{ Ω | |H(ω)| ≤ α for Ω ≤ ω ≤ π }.

This is a quasiconvex optimization problem in the variables a because f_1 is convex, f_2 is concave, and f_5 is quasiconvex: its sublevel sets are

    {a | f_5(a) ≤ Ω} = {a | |H(ω)| ≤ α for Ω ≤ ω ≤ π},

i.e., the intersection of an infinite number of halfspaces.
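The quasiconvex problem can be solved by bisection on the cutoff frequency. The sketch below uses the same frequency sampling as part (d) and assumes CVX; the values of N, α, and the number of bisection steps are illustrative choices only.

% sketch: bisection on the cutoff frequency wc, with N and alpha fixed
N = 7; K = 10*N; alpha = 0.1;
k = 0:N; w = (0:K)/K*pi;
pass = (w <= pi/3);
lo = pi/3; hi = pi;                      % the optimal wc lies in (pi/3, pi]
for iter = 1:20
wc = (lo + hi)/2;
stop = (w >= wc);
cvx_begin quiet
variable a(N+1)
cos(w(pass)'*k)*a >= 0.89;               % passband specifications
cos(w(pass)'*k)*a <= 1.12;
abs(cos(w(stop)'*k)*a) <= alpha;         % stopband specification
cvx_end
if strcmp(cvx_status, 'Solved')
hi = wc;                                 % feasible: the optimal wc is at most wc
else
lo = wc;                                 % infeasible: the optimal wc exceeds wc
end
end
% after the loop, hi is an upper bound on the optimal cutoff frequency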
(c) This problem can be expressed as

    minimize    f_6(a)
    subject to  f_1(a) ≤ 1.12
                f_2(a) ≥ 0.89
                f_3(a) ≤ α
                f_4(a) ≥ −α,

where f_1, f_2, f_3, and f_4 are defined above and

    f_6(a) = min{ k | a_{k+1} = · · · = a_N = 0 }.

The sublevel sets of f_6 are affine sets:

    {a | f_6(a) ≤ k} = {a | a_{k+1} = · · · = a_N = 0}.

This means f_6 is a quasiconvex function, and again we have a quasiconvex optimization problem.
(d) After discretizing we can express the problem in part (a) as the LP

    minimize    α
    subject to  0.89 ≤ H(ω_i) ≤ 1.12   for 0 ≤ ω_i ≤ π/3            (3)
                −α ≤ H(ω_i) ≤ α        for ωc ≤ ω_i ≤ π,

with variables α and a. (For fixed ω_i, H(ω_i) is an affine function of a, hence all constraints in this problem are linear inequalities in a and α.) We obtain the tradeoff curve of α vs. ωc by solving this LP for a sequence of values of ωc in the interval (π/3, π].
Figure 1, the resulting tradeoff curve of α versus ωc, was generated by the following Matlab code.

clear all
N = 7;
K = 10*N;
k = [0:N];
w = [0:K]/K*pi;
idx = max(find(w<=pi/3));
alphas = [];
for i = idx:length(w)
cvx_begin
variable a(N+1,1)
minimize( norm(cos(w(i:end)'*k)*a,inf) )
subject to
cos(w(1:idx)'*k)*a >= 0.89
cos(w(1:idx)'*k)*a <= 1.12
cvx_end
alphas = [alphas; cvx_optval];
end;
plot(w(idx:end),alphas,'-');
xlabel('wc');
ylabel('alpha');
5. Minimum fuel optimal control. Solve the minimum fuel optimal control problem described in exercise 4.16 of Convex Optimization, for the instance with problem data

    A = [ −1   0.4   0.8
           1    0     0
           0    1     0 ],      b = (1, 0, 0.3),      xdes = (7, 2, −6),      N = 30.

You can do this by forming the LP you found in your solution of exercise 4.16, or more directly using CVX. Plot the actuator signal u(t) as a function of time t.
Solution. The following Matlab code finds the solution.

close all
clear all
n=3; % state dimension
N=30; % time horizon
A=[ -1 0.4 0.8; 1 0 0 ; 0 1 0];
b=[ 1 0 0.3]';
x0 = zeros(n,1);
xdes = [ 7 2 -6]';
cvx_begin
variable X(n,N+1);
variable u(1,N);
minimize (sum(max(abs(u),2*abs(u)-1)))
subject to
X(:,2:N+1) == A*X(:,1:N)+b*u; % dynamics
X(:,1) == x0;
X(:,N+1) == xdes;
cvx_end
stairs(0:N-1,u,'linewidth',2)
axis tight
xlabel('t')
ylabel('u')

The optimal actuator signal is shown in Figure 2.

Figure 2  Minimum fuel actuator signal.