EE236A Linear Programming

Exercises
Prof. Vandenberghe
Electrical Engineering Department
University of California, Los Angeles
Fall Quarter 2007-2008
1 Linear inequalities, halfspaces, polyhedra
Exercise 1. When does one halfspace contain another? Give conditions under which

    {x | a^T x ≤ b} ⊆ {x | ã^T x ≤ b̃}

(with a ≠ 0, ã ≠ 0). Also find the conditions under which the two halfspaces are equal.
Exercise 2. What is the distance between the two parallel hyperplanes {x ∈ R^n | a^T x = b_1} and {x ∈ R^n | a^T x = b_2}?
Exercise 3. Consider a waveform

    s(x, t) = f(t − a^T x)

where t denotes time, x denotes position in R^3, f : R → R is a given function, and a ∈ R^3 is a given nonzero vector. The surfaces defined by

    t − a^T x = constant

are called wavefronts. What is the velocity (expressed as a function of a) with which wavefronts propagate? As an example, consider a sinusoidal plane wave s(x, t) = sin(t − k^T x).
Exercise 4. Which of the following sets S are polyhedra? If possible, express S in inequality form, i.e., give matrices A and b such that S = {x | Ax ≤ b}.

(a) S = {y_1 a_1 + y_2 a_2 | −1 ≤ y_1 ≤ 1, −1 ≤ y_2 ≤ 1} for given a_1, a_2 ∈ R^n.

(b) S = {x ∈ R^n | x ≥ 0, 1^T x = 1, ∑_{i=1}^n x_i a_i = b_1, ∑_{i=1}^n x_i a_i^2 = b_2}, where a_i ∈ R (i = 1, . . . , n), b_1 ∈ R, and b_2 ∈ R are given.

(c) S = {x ∈ R^n | x ≥ 0, x^T y ≤ 1 for all y with ‖y‖_2 = 1}.

(d) S = {x ∈ R^n | x ≥ 0, x^T y ≤ 1 for all y with ∑_i |y_i| = 1}.

(e) S = {x ∈ R^n | ‖x − x_0‖_2 ≤ ‖x − x_1‖_2} where x_0, x_1 ∈ R^n are given. S is the set of points that are closer to x_0 than to x_1.

(f) S = {x ∈ R^n | ‖x − x_0‖_2 ≤ ‖x − x_i‖_2, i = 1, . . . , K} where x_0, . . . , x_K ∈ R^n are given. S is the set of points that are closer to x_0 than to the other x_i.
Exercise 5. Linear and piecewise-linear classification. The figure shows a block diagram of a linear classification algorithm.

[Figure: block diagram of a classifier with inputs x_1, x_2, . . . , x_n, coefficients a_1, a_2, . . . , a_n, threshold b, and output y.]

The classifier has n inputs x_i. These inputs are first multiplied with coefficients a_i and added. The result a^T x = ∑_{i=1}^n a_i x_i is then compared with a threshold b. If a^T x ≥ b, the output of the classifier is y = 1; if a^T x < b, the output is y = −1.

The algorithm can be interpreted geometrically as follows. The set defined by a^T x = b is a hyperplane with normal vector a. This hyperplane divides R^n in two open halfspaces: one halfspace where a^T x > b, and another halfspace where a^T x < b. The output of the classifier is y = 1 or y = −1 depending on the halfspace in which x lies. If a^T x = b, we arbitrarily assign +1 to the output. This is illustrated below.

[Figure: the hyperplane a^T x = b with normal vector a, separating the open halfspace where a^T x > b from the open halfspace where a^T x < b.]

By combining linear classifiers, we can build classifiers that divide R^n in more complicated regions than halfspaces. In the block diagram below we combine four linear classifiers. The first three take the same input x ∈ R^2. Their outputs y_1, y_2, and y_3 are the inputs to the fourth classifier.

[Figure: block diagram of four linear classifiers; the first three map x = (x_1, x_2) to outputs y_1, y_2, y_3, which form the input to the fourth classifier. The coefficients and thresholds are given in the figure.]

Make a sketch of the region of input vectors in R^2 for which the output y is equal to 1.
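To make the mechanics concrete, here is a minimal Matlab sketch (ours, not part of the exercise) that evaluates one linear classifier and a two-stage combination of classifiers. The coefficient values below are illustrative placeholders, not the values from the figure.

    % Minimal sketch of a linear classifier: returns +1 if a'*x >= b, else -1.
    classify = @(a, b, x) 2*(a'*x >= b) - 1;

    a1 = [1; 1];  b1 = 0;      % three first-stage classifiers on x in R^2
    a2 = [1; -1]; b2 = 0;      % (made-up coefficients, for illustration only)
    a3 = [0; 1];  b3 = -1;

    x = [0.5; 0.3];
    y123 = [classify(a1, b1, x); classify(a2, b2, x); classify(a3, b3, x)];

    a4 = [1; 1; 1]; b4 = 2;    % second-stage classifier acting on (y1, y2, y3)
    y = classify(a4, b4, y123)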
Exercise 6. Measurement with bounded errors. A series of K measurements y_1, . . . , y_K ∈ R are taken in order to estimate an unknown vector x ∈ R^q. The measurements are related to the unknown vector x by y_i = a_i^T x + v_i, where v_i is a measurement noise that satisfies |v_i| ≤ α but is otherwise unknown. The vectors a_i and the measurement noise bound α are known. Let X denote the set of vectors x that are consistent with the observations y_1, . . . , y_K, i.e., the set of x that could have resulted in the measured values of y_i. Show that X is a polyhedron.

Now we examine what happens when the measurements are occasionally in error, i.e., for a few i we have no relation between x and y_i. More precisely, suppose that I_fault is a subset of {1, . . . , K}, and that y_i = a_i^T x + v_i with |v_i| ≤ α (as above) for i ∉ I_fault, but for i ∈ I_fault there is no relation between x and y_i. The set I_fault is the set of times of the faulty measurements.

Suppose you know that I_fault has at most J elements, i.e., out of K measurements, at most J are faulty. You do not know I_fault; you know only a bound on its cardinality (size). Is X (the set of x consistent with the measurements) a polyhedron for J > 0?
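For the fault-free case, the consistency constraints are the 2K linear inequalities |y_i − a_i^T x| ≤ α. A small Matlab sketch (ours, with made-up data) that stacks them in the form Cx ≤ d:

    % Sketch: build X = {x : |y_i - a_i'*x| <= alpha} as C*x <= d.
    % A (K x q), y, alpha are assumed given; random data for illustration.
    K = 20; q = 3;
    A = randn(K, q); xtrue = randn(q, 1);
    alpha = 0.1;
    y = A*xtrue + alpha*(2*rand(K, 1) - 1);   % noise satisfies |v_i| <= alpha

    % a_i'*x - y_i <= alpha  and  y_i - a_i'*x <= alpha
    C = [A; -A];
    d = [alpha + y; alpha - y];
    max(C*xtrue - d)    % <= 0, so the true x lies in the polyhedron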
2 Some simple linear programs
Exercise 7. Consider the LP

    minimize    c_1 x_1 + c_2 x_2 + c_3 x_3
    subject to  x_1 + x_2 ≥ 1
                x_1 + 2x_2 ≤ 3
                x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0.

Give the optimal value and the optimal set for the following values of c: c = (−1, 0, 1), c = (0, 1, 0), c = (0, 0, −1).
Exercise 8. For each of the following LPs, express the optimal value and the optimal solution in terms of the problem parameters (c, k, d, α, d_1, d_2). If the optimal solution is not unique, it is sufficient to give one optimal solution.

(a) minimize    c^T x
    subject to  0 ≤ x ≤ 1

    with variable x ∈ R^n.

(b) minimize    c^T x
    subject to  −1 ≤ 1^T x ≤ 1

    with variable x ∈ R^n.

(c) minimize    c^T x
    subject to  0 ≤ x_1 ≤ x_2 ≤ · · · ≤ x_n ≤ 1

    with variable x ∈ R^n.

(d) maximize    c^T x
    subject to  1^T x = k
                0 ≤ x ≤ 1

    with variable x ∈ R^n. k is an integer with 1 ≤ k ≤ n.

(e) maximize    c^T x
    subject to  1^T x ≤ k
                0 ≤ x ≤ 1

    with variable x ∈ R^n. k is an integer with 1 ≤ k ≤ n.

(f) maximize    c^T x
    subject to  −y ≤ x ≤ y
                1^T y = k
                y ≤ 1

    with variables x ∈ R^n and y ∈ R^n. k is an integer with 1 ≤ k ≤ n.

(g) maximize    c^T x
    subject to  d^T x = α
                0 ≤ x ≤ 1

    with variable x ∈ R^n. α ∈ R is given and the components of d are positive.

(h) minimize    1^T u + 1^T v
    subject to  u − v = c
                u ≥ 0, v ≥ 0

    with variables u ∈ R^n and v ∈ R^n.

(i) minimize    d_1^T u − d_2^T v
    subject to  u − v = c
                u ≥ 0, v ≥ 0

    with variables u ∈ R^n and v ∈ R^n. We assume that d_1 ≥ d_2.
Exercise 9. An optimal control problem with an analytical solution. We consider the problem of maximizing a linear function of the final state of a linear system, subject to bounds on the inputs:

    maximize    d^T x(N)
    subject to  |u(t)| ≤ U, t = 0, . . . , N − 1                          (1)
                ∑_{t=0}^{N−1} |u(t)| ≤ α,

where x and u are related via the recursion

    x(t + 1) = Ax(t) + Bu(t),   x(0) = 0,

and the problem data are d ∈ R^n, U, α ∈ R, A ∈ R^{n×n} and B ∈ R^n. The variables are the input sequence u(0), . . . , u(N − 1).

(a) Express (1) as an LP.

(b) Formulate a simple algorithm for solving this LP. (It can be solved very easily, without using a general LP code.) Hint. The problem is a variation on exercise 8, parts (a), (b), (c).

(c) Apply your method to the matrices

        A = [ 9.9007 · 10^−1   9.9340 · 10^−3   9.4523 · 10^−3   9.4523 · 10^−3
              9.9340 · 10^−2   9.0066 · 10^−1   9.4523 · 10^−2   9.4523 · 10^−2
              9.9502 · 10^−2   4.9793 · 10^−4   9.9952 · 10^−1   4.8172 · 10^−4      (2)
              4.9793 · 10^−3   9.5021 · 10^−2   4.8172 · 10^−3   9.9518 · 10^−1 ],

        B = [ 9.9502 · 10^−2
              4.9793 · 10^−3
              4.9834 · 10^−3                                                         (3)
              1.6617 · 10^−4 ].

    (You can download these matrices by executing the Matlab file ex9data.m which can be found on the class webpage. The calling sequence is [A,b] = ex9data.) Use

        d = (0, 0, −1, 1),   N = 100,   U = 2,   α = 161.

    Plot the optimal input and the resulting sequences x_3(t) and x_4(t).

    Remark. This model was derived as follows. We consider a system described by two second-order equations

        m_1 v̈_1(t) = −K(v_1(t) − v_2(t)) − D(v̇_1(t) − v̇_2(t)) + u(t)
        m_2 v̈_2(t) = K(v_1(t) − v_2(t)) + D(v̇_1(t) − v̇_2(t)).

    These equations describe the motion of two masses m_1 and m_2 with positions v_1 ∈ R and v_2 ∈ R, respectively, and connected by a spring with spring constant K and a damper with constant D. An external force u is applied to the first mass. We use the values

        m_1 = m_2 = 1,   K = 1,   D = 0.1,

    so the state equations are

        d/dt [ v̇_1(t) ]   [ −0.1   0.1  −1.0   1.0 ] [ v̇_1(t) ]   [ 1 ]
             [ v̇_2(t) ] = [  0.1  −0.1   1.0  −1.0 ] [ v̇_2(t) ] + [ 0 ] u(t).
             [ v_1(t) ]   [  1.0   0.0   0.0   0.0 ] [ v_1(t) ]   [ 0 ]
             [ v_2(t) ]   [  0.0   1.0   0.0   0.0 ] [ v_2(t) ]   [ 0 ]

    We discretize the system by considering inputs u that are piecewise constant with sampling interval T = 0.1, i.e., we assume u is constant in the intervals [0.1k, 0.1(k + 1)), for k = 0, 1, 2, . . .. It can be shown that the discretized state equations are

        z((k + 1)T) = Az(kT) + Bu(kT),   k ∈ Z,                           (4)

    where z(t) = (v̇_1(t), v̇_2(t), v_1(t), v_2(t)), and A and B are given by (2) and (3).

    Using the cost function d^T x(N) with d = (0, 0, −1, 1) means that we try to maximize the distance between the two masses after N time steps.
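The discretization step in the remark can be reproduced numerically. The following Matlab sketch (ours, not part of the exercise) computes the zero-order-hold discretization of the continuous-time model, which should reproduce (2) and (3):

    % Zero-order-hold discretization of dz/dt = Ac*z + Bc*u with step T = 0.1.
    % State ordering z = (v1dot, v2dot, v1, v2), as in the remark.
    Ac = [-0.1  0.1 -1.0  1.0;
           0.1 -0.1  1.0 -1.0;
           1.0  0.0  0.0  0.0;
           0.0  1.0  0.0  0.0];
    Bc = [1; 0; 0; 0];
    T = 0.1;

    % Standard trick: the exponential of the augmented matrix gives
    % A = expm(Ac*T) and B = (integral of expm(Ac*s) ds, 0..T) * Bc at once.
    M = expm([Ac Bc; zeros(1, 5)] * T);
    A = M(1:4, 1:4);
    B = M(1:4, 5);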
Exercise 10. Power allocation problem with analytical solution. Consider a system of n transmitters and n receivers. The ith transmitter transmits with power x_i, i = 1, . . . , n. The vector x is the variable in this problem. The path gain from each transmitter j to each receiver i is denoted A_ij and is assumed to be known. (Obviously, A_ij ≥ 0, so the matrix A is elementwise nonnegative. We also assume that A_ii > 0.) The signal received by each receiver i consists of three parts: the desired signal, arriving from transmitter i with power A_ii x_i, the interfering signal, arriving from the other transmitters with power ∑_{j≠i} A_ij x_j, and noise v_i (v_i is positive and known). We are interested in allocating the powers x_i in such a way that the signal to noise plus interference ratio (SNIR) at each of the receivers exceeds a level γ. (Thus γ is the minimum acceptable SNIR for the receivers; a typical value might be around γ = 3.) In other words, we want to find x ≥ 0 such that for i = 1, . . . , n

    A_ii x_i ≥ γ ( ∑_{j≠i} A_ij x_j + v_i ).

Equivalently, the vector x has to satisfy the set of linear inequalities

    x ≥ 0,   Bx ≥ γv                                                      (5)

where B ∈ R^{n×n} is defined as

    B_ii = A_ii,   B_ij = −γA_ij, j ≠ i.

(a) Suppose you are given a desired level of γ, so the right-hand side γv in (5) is a known positive vector. Show that (5) is feasible if and only if B is invertible and z = B^{−1}1 ≥ 0 (1 is the vector with all components 1). Show how to construct a feasible power allocation x from z.

(b) Show how to find the largest possible SNIR, i.e., how to maximize γ subject to the existence of a feasible power allocation.

Hint. You can refer to the following result from linear algebra. Let T ∈ R^{n×n} be a matrix with nonnegative elements, and s ∈ R. Then the following statements are equivalent:

(a) There exists an x ≥ 0 with (sI − T)x > 0.

(b) sI − T is nonsingular and the matrix (sI − T)^{−1} has nonnegative elements.

(c) s > max_i |λ_i(T)|, where λ_i(T) (i = 1, . . . , n) are the eigenvalues of T. The quantity ρ(T) = max_i |λ_i(T)| is called the spectral radius of T. It is a complicated but readily computed function of T.

(For such s, the matrix sI − T is called a nonsingular M-matrix.)

Remark. This problem gives an analytic solution to a very special form of transmitter power allocation problem. Specifically, there are exactly as many transmitters as receivers, and no power limits on the transmitters. One consequence is that the receiver noises v_i play no role at all in the solution: just crank up all the transmitters to overpower the noises!
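As an illustration of the feasibility test in part (a), a small Matlab sketch with made-up data (ours, not part of the exercise):

    % Sketch: check feasibility of the SNIR constraints for a given gamma
    % using the condition z = B\ones(n,1) >= 0, with random illustrative data.
    n = 4; gamma = 3;
    A = 0.1*rand(n) + 9*eye(n);        % strong direct gains A_ii > 0
    v = rand(n, 1);                    % positive receiver noises

    B = diag(diag(A)) - gamma*(A - diag(diag(A)));  % B_ii = A_ii, B_ij = -gamma*A_ij
    z = B \ ones(n, 1);
    if all(z >= 0)
        % B*z = 1, so B*(t*z) = t*1 >= gamma*v for t large enough
        t = max(gamma*v ./ (B*z));
        x = t*z;                       % a feasible power allocation
    end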
3 Geometry of linear programming
Exercise 11. (a) Is x = (1, 1, 1, 1) an extreme point of the polyhedron P defined by the linear inequalities

        [ 1   6   1   3 ]         [ 3 ]
        [ 1   2   7   1 ]         [ 5 ]
        [ 0   3  10   1 ] x   ≤   [ 8 ] ?
        [ 6  11   2  12 ]         [ 7 ]
        [ 1   6   1   3 ]         [ 4 ]

    If it is, find a vector c such that x is the unique minimizer of c^T x over P.

(b) Same question for the polyhedron defined by the inequalities

        [ 0   5   2   5 ]         [ 12 ]
        [ 7   7   2   2 ]         [ 17 ]
        [ 4   4   7   7 ] x   ≤   [ 22 ]
        [ 8   3   3   4 ]         [ 18 ]
        [ 4   4   2   2 ]         [  8 ]

    and the equality 8x_1 − 7x_2 − 10x_3 − 11x_4 = −20.

Feel free to use Matlab (in particular the rank command).
Exercise 12. We define a polyhedron

    P = {x ∈ R^5 | Ax = b, −1 ≤ x ≤ 1},

with

    A = [ 0  1  1  1  2 ]        [ 1 ]
        [ 0  1  1  1  0 ],  b =  [ 1 ].
        [ 2  0  1  0  1 ]        [ 1 ]

The following three vectors x are in P:

(a) x = (1, 1/2, 0, 1/2, 1)

(b) x = (0, 0, 1, 0, 0)

(c) x = (0, 1, 1, 1, 0).

Are these vectors extreme points of P? For each x, if it is an extreme point, give a vector c for which x is the unique solution of the optimization problem

    minimize    c^T x
    subject to  Ax = b
                −1 ≤ x ≤ 1.
Exercise 13. An n × n matrix X is called doubly stochastic if

    X_ij ≥ 0, i, j = 1, . . . , n,    ∑_{i=1}^n X_ij = 1, j = 1, . . . , n,    ∑_{j=1}^n X_ij = 1, i = 1, . . . , n.

In words, the entries of X are nonnegative, and its row and column sums are equal to one.

(a) What are the extreme points of the set of doubly stochastic matrices? How many extreme points are there? Explain your answer.

(b) What is the optimal value of the LP

        maximize    a^T X b
        subject to  X doubly stochastic,

    with X as variable, for a general vector b ∈ R^n and each of the following choices of a?

    • a = (1, 0, 0, . . . , 0).
    • a = (1, 1, 0, . . . , 0).
    • a = (1, −1, 0, . . . , 0).
4 Formulating problems as LPs
Exercise 14. Formulate the following problems as LPs:

(a) minimize ‖Ax − b‖_1 subject to ‖x‖_∞ ≤ 1.

(b) minimize ‖x‖_1 subject to ‖Ax − b‖_∞ ≤ 1.

(c) minimize ‖Ax − b‖_1 + ‖x‖_∞.

In each problem, A ∈ R^{m×n} and b ∈ R^m are given, and x ∈ R^n is the optimization variable.
Exercise 15. An illumination problem. We consider an illumination system of m lamps, at positions l_1, . . . , l_m ∈ R^2, illuminating n flat patches.

[Figure: lamp j at position l_j above patch i, the segment [v_i, v_{i+1}], with distance r_ij and angle θ_ij between the patch normal and the direction to the lamp.]

The patches are line segments; the ith patch is given by [v_i, v_{i+1}] where v_1, . . . , v_{n+1} ∈ R^2. The variables in the problem are the lamp powers p_1, . . . , p_m, which can vary between 0 and 1.

The illumination at (the midpoint of) patch i is denoted I_i. We will use a simple model for the illumination:

    I_i = ∑_{j=1}^m a_ij p_j,    a_ij = r_ij^{−2} max{cos θ_ij, 0},       (6)

where r_ij denotes the distance between lamp j and the midpoint of patch i, and θ_ij denotes the angle between the upward normal of patch i and the vector from the midpoint of patch i to lamp j, as shown in the figure. This model takes into account self-shading (i.e., the fact that a patch is illuminated only by lamps in the halfspace it faces) but not shading of one patch caused by another. Of course we could use a more complex illumination model, including shading and even reflections. This just changes the matrix relating the lamp powers to the patch illumination levels.

The problem is to determine lamp powers that make the illumination levels close to a given desired illumination level I_des, subject to the power limits 0 ≤ p_i ≤ 1.

(a) Suppose we use the maximum deviation

        φ(p) = max_{k=1,...,n} |I_k − I_des|

    as a measure for the deviation from the desired illumination level. Formulate the illumination problem using this criterion as a linear programming problem.

(b) There are several suboptimal approaches based on weighted least-squares. We consider two examples.

    i. Saturated least-squares. We can solve the least-squares problem

           minimize  ∑_{k=1}^n (I_k − I_des)^2

       ignoring the constraints. If the solution is not feasible, we saturate it, i.e., set p_j := 0 if p_j < 0 and p_j := 1 if p_j > 1. (A small sketch of this method appears after this exercise.)

       Download the Matlab file ex15data.m from the class webpage and generate problem data by [A,Ides] = ex15data. (The elements of A are the coefficients a_ij in (6).) Compute a feasible p using this first method, and calculate φ(p).

       Note. Use the backslash operator \ to solve least-squares problems in Matlab: x=A\b computes the solution of

           minimize ‖Ax − b‖_2.

       Try help slash for details.

    ii. Weighted least-squares. We consider another least-squares problem:

            minimize  ∑_{k=1}^n (I_k − I_des)^2 + ρ ∑_{i=1}^m (p_i − 0.5)^2,

        where ρ ≥ 0 is used to attach a cost to a deviation of the powers from the value 0.5, which lies in the middle of the power limits. For large enough ρ, the solution of this problem will satisfy 0 ≤ p_i ≤ 1, i.e., be feasible for the original problem. Explain how you solve this problem in Matlab. For the problem data generated by ex15data.m, find the smallest ρ such that p becomes feasible, and evaluate φ(p).

(c) Using the same data as in part (b), solve the LP you derived in part (a). Compare the solution with the solutions you obtained using the (weighted) least-squares methods of part (b).
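The saturated least-squares heuristic in part (b)i takes only a few lines. This is our sketch of that step, assuming [A,Ides] = ex15data returns A of size n × m and a scalar desired level Ides:

    % Sketch of saturated least-squares for the illumination problem (part (b)i).
    n = size(A, 1);
    p = A \ (Ides*ones(n, 1));      % unconstrained least-squares solution
    p = min(max(p, 0), 1);          % saturate to the power limits 0 <= p <= 1
    phi = max(abs(A*p - Ides))      % maximum deviation phi(p)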
Exercise 16. In exercise 15, we encountered the problem

    minimize    max_{k=1,...,n} |a_k^T p − I_des|
    subject to  0 ≤ p ≤ 1                                                 (7)

(with variables p). We have seen that this is readily cast as an LP.

In (7) we use the maximum of the absolute deviations |I_k − I_des| to measure the difference from the desired intensity. Suppose we prefer to use the relative deviations instead, where the relative deviation is defined as

    max{I_k/I_des, I_des/I_k} − 1 = (I_k − I_des)/I_des   if I_k ≥ I_des
                                    (I_des − I_k)/I_k     if I_k ≤ I_des.

This leads us to the following formulation:

    minimize    max_{k=1,...,n} max{ a_k^T p/I_des, I_des/(a_k^T p) }
    subject to  0 ≤ p ≤ 1                                                 (8)
                a_k^T p > 0, k = 1, . . . , n.

Explain how you can solve this using linear programming (i.e., by solving one or more LPs).
Exercise 17. Download the file ex17data.m from the class website and execute it in Matlab using the command [t,y] = ex17data. This will generate two vectors t, y ∈ R^42. We are interested in fitting a linear function f(t) = α + βt through the points (t_i, y_i), i.e., we want to select α and β such that f(t_i) ≈ y_i, i = 1, . . . , 42.

We can calculate α and β by optimizing the following three criteria.

(a) Least-squares: select α and β by minimizing

        ∑_{i=1}^{42} (y_i − α − βt_i)^2.

    (Note that the recommended method for solving a least-squares problem

        minimize ‖Ax − b‖_2

    in Matlab is x = A\b.)

(b) ℓ_1-norm approximation: select α and β by minimizing

        ∑_{i=1}^{42} |y_i − α − βt_i|.

(c) ℓ_∞-norm approximation: select α and β by minimizing

        max_{i=1,...,42} |y_i − α − βt_i|.

Find the optimal values of α and β for each of the three optimization criteria. This yields three linear functions f_ls(t), f_ℓ1(t), f_ℓ∞(t). Plot the 42 data points, and the three functions f. What do you observe?
Exercise 18. This exercise is concerned with a problem that has applications in VLSI. The problem is to find the optimal positions of n cells or modules on an integrated circuit. More specifically, the variables are the coordinates x_i, y_i, i = 1, . . . , n, of the n cells. The cells must be placed in a square C = {(x, y) | −1 ≤ x ≤ 1, −1 ≤ y ≤ 1}.

Each cell has several terminals, which are connected to terminals on other cells, or to input/output (I/O) terminals on the perimeter of C. The positions of the I/O terminals are known and fixed.

The connections between the cells are specified as follows. We are given a matrix A ∈ R^{N×n} and two vectors b_x ∈ R^N, b_y ∈ R^N. Each row of A and each component of b_x and b_y describe a connection between two terminals. For each i = 1, . . . , N, we can distinguish two possibilities, depending on whether row i of A describes a connection between two cells, or between a cell and an I/O terminal.

• If row i describes a connection between two cells j and k (with j < k), then

      a_il = 1 if l = j,  a_il = −1 if l = k,  a_il = 0 otherwise,    b_x,i = 0,  b_y,i = 0.

  In other words, we have

      a_i^T x − b_x,i = x_j − x_k,    a_i^T y − b_y,i = y_j − y_k.

• If row i describes a connection between a cell j and an I/O terminal with coordinates (x̄, ȳ), then

      a_il = 1 if l = j,  a_il = 0 otherwise,    b_x,i = x̄,  b_y,i = ȳ,

  so we have

      a_i^T x − b_x,i = x_j − x̄,    a_i^T y − b_y,i = y_j − ȳ.
The figure illustrates this notation for an example with n = 3, N = 6.

[Figure: the square C with corners (±1, ±1), three cells at positions (x_1, y_1), (x_2, y_2), (x_3, y_3), and four I/O terminals on the perimeter at (1, 0), (0.5, 1), (1, 0.5), and (0, 1), connected as described by A, b_x, b_y below.]

For this example, A, b_x and b_y are given by

    A = [ 1  −1   0 ]         [ 0.0 ]         [ 0.0 ]
        [ 1   0  −1 ]         [ 0.0 ]         [ 0.0 ]
        [ 1   0   0 ]         [ 1.0 ]         [ 0.0 ]
        [ 0   1   0 ],  b_x = [ 0.5 ],  b_y = [ 1.0 ].
        [ 0   0   1 ]         [ 0.0 ]         [ 1.0 ]
        [ 0   0   1 ]         [ 1.0 ]         [ 0.5 ]
The problem we consider is to determine the coordinates (x_i, y_i) that minimize some measure of the total wirelength of the connections. We can formulate different variations.

(a) Suppose we use the Euclidean distance between terminals to measure the length of a connection, and that we minimize the sum of the squares of the connection lengths. In other words we determine x and y by solving

        minimize  ∑_{i=1}^N ( (a_i^T x − b_x,i)^2 + (a_i^T y − b_y,i)^2 )

    or, in matrix notation,

        minimize  ‖Ax − b_x‖_2^2 + ‖Ay − b_y‖_2^2.                        (9)

    The variables are x ∈ R^n and y ∈ R^n. (Note that we don't have to add the constraints −1 ≤ x_i ≤ 1 and −1 ≤ y_i ≤ 1 explicitly, since a solution with a cell outside C can never be optimal.) Since the two terms in (9) are independent, the solution can be obtained by solving two least-squares problems, one to determine x, and one to determine y. Equivalently, we can solve two sets of linear equations

        (A^T A)x = A^T b_x,    (A^T A)y = A^T b_y.

(b) A second and more realistic choice is to use the Manhattan distance between two connected terminals as a measure for the length of the connection, i.e., to consider the optimization problem

        minimize  ∑_{i=1}^N ( |a_i^T x − b_x,i| + |a_i^T y − b_y,i| ).

    In matrix notation, this can be written as

        minimize  ‖Ax − b_x‖_1 + ‖Ay − b_y‖_1.

(c) As a third variation, suppose we measure the length of a connection between two terminals by the Manhattan distance between the two points, as in (b), but instead of minimizing the sum of the lengths, we minimize the maximum length, i.e., we solve

        minimize  max_{i=1,...,N} ( |a_i^T x − b_x,i| + |a_i^T y − b_y,i| ).

(d) Finally, we can consider the problem

        minimize  ∑_{i=1}^N ( h(a_i^T x − b_x,i) + h(a_i^T y − b_y,i) )

    where h is a piecewise-linear function defined as h(z) = max{z, −z, γ} and γ is a given positive constant. The function h is plotted below.

    [Figure: graph of h(z), equal to the constant γ for −γ ≤ z ≤ γ and to |z| outside that interval.]
Give LP formulations for problems (b), (c) and (d). You may introduce new variables, but you must explain clearly why your formulation and the original problem are equivalent.

Numerical example. We compare the solutions obtained from the four variations for a small example. For simplicity, we consider a one-dimensional version of the problem, i.e., the variables are x ∈ R^n, and the goal is to place the cells on the interval [−1, 1]. We also drop the subscript in b_x. The four formulations of the one-dimensional placement problem are the following.

(a) ℓ_2-placement: minimize ‖Ax − b‖_2^2 = ∑_i (a_i^T x − b_i)^2.

(b) ℓ_1-placement: minimize ‖Ax − b‖_1 = ∑_i |a_i^T x − b_i|.

(c) ℓ_∞-placement: minimize ‖Ax − b‖_∞ = max_i |a_i^T x − b_i|.

(d) ℓ_1-placement with dead zone: minimize ∑_i h(a_i^T x − b_i). We use a value γ = 0.02.

To generate the data, download the file ex18data.m from the class webpage. The command [A,b] = ex18data('large') generates a problem with 100 cells and 300 connections; [A,b] = ex18data('small') generates a problem with 50 cells and 150 connections. You can choose either problem.

A few remarks and suggestions:

• Use the backslash operator, x = A\b, to solve a least-squares problem

      minimize ‖Ax − b‖_2.

  An alternative is x = (A'*A)\(A'*b), which is somewhat faster but less accurate.

• The other three problems require an LP solver.

• Compare the solutions obtained by the four methods.

• Plot a histogram of the n positions x_i for each solution (using the hist command). Also plot a histogram of the connection lengths |a_i^T x − b_i|.

• Compute the total wirelength ∑_i |a_i^T x − b_i| for each of the four solutions.

• Compute the length of the longest connection max_i |a_i^T x − b_i| for each of the four solutions.

• So far we have assumed that the cells have zero width. In practice we have to take overlap between cells into account. Assume that two cells i and j overlap when |x_i − x_j| ≤ 0.01. For each of the four solutions, calculate how many pairs of cells overlap. You can express the overlap as a percentage of the total number n(n − 1)/2 of pairs of cells. (A sketch of these bookkeeping computations appears after this exercise.)

Are the results what you expect? Which of the four solutions would you prefer if the most important criteria are total wirelength ∑_i |a_i^T x − b_i| and overlap?
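The following Matlab sketch (ours) shows the kind of bookkeeping the suggestions above call for, given a placement x already computed by one of the four methods and the data A, b from ex18data:

    % Sketch: evaluation metrics for a one-dimensional placement x.
    r = A*x - b;                         % connection length residuals
    total_len = sum(abs(r));             % total wirelength
    max_len   = max(abs(r));             % longest connection

    hist(x, 30);                         % histogram of cell positions

    % count overlapping pairs: cells i < j with |x_i - x_j| <= 0.01
    n = length(x);
    D = abs(x*ones(1,n) - ones(n,1)*x');
    overlaps = (sum(D(:) <= 0.01) - n) / 2;    % exclude diagonal, halve symmetry
    overlap_pct = 100 * overlaps / (n*(n-1)/2);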
Exercise 19. Suppose you are given two sets of points {v_1, v_2, . . . , v_K} and {w_1, w_2, . . . , w_L} in R^n. Can you formulate the following two problems as LP feasibility problems?

(a) Determine a hyperplane that separates the two sets, i.e., find a ∈ R^n and b ∈ R with a ≠ 0 such that

        a^T v_i ≤ b, i = 1, . . . , K,    a^T w_i ≥ b, i = 1, . . . , L.

    Note that we require a ≠ 0, so you have to make sure your method does not return the trivial solution a = 0, b = 0. You can assume that the matrices

        [ v_1  v_2  · · ·  v_K ]      [ w_1  w_2  · · ·  w_L ]
        [  1    1   · · ·   1  ],     [  1    1   · · ·   1  ]

    have rank n + 1.

(b) Determine a sphere separating the two sets of points, i.e., find x_c ∈ R^n and R ≥ 0 such that

        (v_i − x_c)^T (v_i − x_c) ≤ R^2, i = 1, . . . , K,    (w_i − x_c)^T (w_i − x_c) ≥ R^2, i = 1, . . . , L.

    (x_c is the center of the sphere; R is its radius.)
Exercise 20. Download the file ex20data.m from the class website and run it in Matlab using the command [X,Y] = ex20data(id), where id is your student ID number (a nine-digit integer). This will create two matrices X ∈ R^{4×100} and Y ∈ R^{4×100}. Let x_i and y_i be the ith columns of X and Y, respectively.

(a) Verify (prove) that it is impossible to strictly separate the points x_i from the points y_i by a hyperplane. In other words, show that there exist no a ∈ R^4 and b ∈ R such that

        a^T x_i + b ≤ −1, i = 1, . . . , 100,    a^T y_i + b ≥ 1, i = 1, . . . , 100.

(b) Find a quadratic function that strictly separates the two sets, i.e., find A = A^T ∈ R^{4×4}, b ∈ R^4, c ∈ R, such that

        x_i^T Ax_i + b^T x_i + c ≤ −1, i = 1, . . . , 100,    y_i^T Ay_i + b^T y_i + c ≥ 1, i = 1, . . . , 100.

(c) It may be impossible to find a hyperplane that strictly separates the two sets, but we can try to find a hyperplane that separates as many of the points as possible. Formulate a heuristic (i.e., suboptimal method), based on solving a single LP, for finding a ∈ R^4 and b ∈ R that minimize the number of misclassified points. We consider x_i as misclassified if a^T x_i + b > −1, and y_i as misclassified if a^T y_i + b < 1.

Describe and justify your method, and test it on the problem data.
Exercise 21. Robot grasp problem with static friction. We consider a rigid object held by N robot fingers. For simplicity we assume that the object and all forces acting on it lie in a plane.

[Figure: a planar object with center of gravity at the origin (0, 0), external forces F^ext_x, F^ext_y and torque T^ext, and fingers i = 1, . . . , N touching the object at points (x_i, y_i), each applying a normal force F_i and a friction force G_i at angle θ_i.]

The fingers make contact with the object at points (x_i, y_i), i = 1, . . . , N. (Although it does not matter, you can assume that the origin (0, 0) is at the center of gravity of the object.) Each finger applies a force with magnitude F_i on the object, in a direction normal to the surface at that contact point, and pointing towards the object. The horizontal component of the ith contact force is equal to F_i cos θ_i, and the vertical component is F_i sin θ_i, where θ_i is the angle between the inward pointing normal to the surface and a horizontal line.

At each contact point there is a friction force G_i which is tangential to the surface. The horizontal component is G_i sin θ_i and the vertical component is −G_i cos θ_i. The orientation of the friction force is arbitrary (i.e., G_i can be positive or negative), but its magnitude |G_i| cannot exceed μF_i, where μ ≥ 0 is a given constant (the friction coefficient).

Finally, there are several external forces and torques that act on the object. We can replace those external forces by equivalent horizontal and vertical forces F^ext_x and F^ext_y at the origin, and an equivalent torque T^ext. These two external forces and the external torque are given.

The static equilibrium of the object is characterized by the following three equations:

    ∑_{i=1}^N (F_i cos θ_i + G_i sin θ_i) + F^ext_x = 0                   (10)

(the horizontal forces add up to zero),

    ∑_{i=1}^N (F_i sin θ_i − G_i cos θ_i) + F^ext_y = 0                   (11)

(the vertical forces add up to zero),

    ∑_{i=1}^N ((F_i cos θ_i + G_i sin θ_i) y_i − (F_i sin θ_i − G_i cos θ_i) x_i) + T^ext = 0     (12)

(the total torque is zero). As mentioned above, we assume the friction model can be expressed as a set of inequalities

    |G_i| ≤ μF_i, i = 1, . . . , N.                                       (13)

If we had no friction, then N = 3 fingers would in general be sufficient to hold the object, and we could find the forces F_i by solving the three linear equations (10)-(12) for the variables F_i. If there is friction, or N > 3, we have more unknown forces than equilibrium equations, so the system of equations is underdetermined. We can then take advantage of the additional degrees of freedom to find a set of forces F_i that are small. Express the following two problems as LPs.

(a) Find the set of forces F_i that minimizes ∑_{i=1}^N F_i subject to the constraint that the object is in equilibrium. More precisely, the constraint is that there exist friction forces G_i that, together with F_i, satisfy (10)-(13).

(b) Find a set of forces F_i that minimizes max_{i=1,...,N} F_i subject to the constraint that the object is in equilibrium.

Which of these two problems do you expect will have a solution with a larger number of F_i's equal to zero?
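To see the linear structure in (F, G), here is our Matlab sketch (with made-up contact data) that assembles the equilibrium equations (10)-(12) as a 3 × 2N linear system in the stacked variable (F, G):

    % Sketch: equilibrium constraints (10)-(12) as Aeq*[F; G] = beq.
    % Contact data below are illustrative placeholders.
    N = 4;
    theta = [0; pi/2; pi; 3*pi/2];            % contact normal angles
    x = [1; 0; -1; 0];  y = [0; 1; 0; -1];    % contact points
    Fext = [0.5; -1];  Text = 0.2;            % given external loads

    c = cos(theta); s = sin(theta);
    Aeq = [ c'                 s';                    % (10): horizontal balance
            s'                -c';                    % (11): vertical balance
            (c.*y - s.*x)'    (s.*y + c.*x)' ];       % (12): torque balance
    beq = -[Fext; Text];
    % Equilibrium requires Aeq*[F; G] = beq together with |G_i| <= mu*F_i.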
Exercise 22. Linear programming in decision theory. Suppose we have a choice of p available actions a ∈ {1, . . . , p}, and each action has a certain cost (which can be positive, negative or zero). The costs depend on the value of an unknown parameter θ ∈ {1, . . . , m}, and are specified in the form of a loss matrix L ∈ R^{m×p}, with L_ij equal to the cost of action a = j when θ = i.

We do not know θ, but we can observe a random variable x with a distribution that depends on θ. We will assume that x is a discrete random variable with values in {1, 2, . . . , n}, so we can represent its distribution, for the m possible values of θ, by a matrix P ∈ R^{n×m} with

    P_ki = prob(x = k | θ = i).

A strategy is a rule for selecting an action a based on the observed value of x. A pure or deterministic strategy assigns to each of the possible observations a unique action a. A pure strategy can be represented by a matrix T ∈ R^{p×n}, with T_jk = 1 if action j is selected when x = k is observed, and T_jk = 0 otherwise.

Note that each column of a pure strategy matrix T contains exactly one entry equal to one, and the other entries are zero. We can therefore enumerate all possible pure strategies by enumerating the 0-1 matrices with this property.

As a generalization, we can consider mixed or randomized strategies. In a mixed strategy we select an action randomly, using a distribution that depends on the observed x. A mixed strategy is represented by a matrix T ∈ R^{p×n}, with

    T_jk = prob(a = j | x = k).

The entries of a mixed strategy matrix T are nonnegative and have column sums equal to one:

    T_jk ≥ 0, j = 1, . . . , p, k = 1, . . . , n,    1^T T = 1^T.

A pure strategy is a special case of a mixed strategy with all the entries T_jk equal to zero or one.

Now suppose the value of θ is i and we apply the strategy T. Then the expected loss is given by

    ∑_{k=1}^n ∑_{j=1}^p L_ij T_jk P_ki = (LTP)_ii.

The diagonal elements of the matrix LTP are the expected losses for the different values of θ = 1, . . . , m. We consider two popular definitions of an optimal mixed strategy, based on minimizing a function of the expected losses.
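For concreteness, here is our short Matlab sketch of the expected-loss computation and of enumerating pure strategies (this anticipates part (c)iii below; the data are random placeholders):

    % Sketch: expected losses diag(L*T*P) and enumeration of pure strategies.
    % L is m x p, P is n x m, T is p x n; small illustrative sizes.
    m = 2; p = 3; n = 4;
    L = randn(m, p);
    P = rand(n, m); P = P ./ (ones(n,1)*sum(P));     % columns sum to one

    losses = @(T) diag(L*T*P);       % expected loss for each value of theta

    % enumerate all p^n pure strategies: column k of T has a single 1
    for idx = 0:p^n-1
        choice = mod(floor(idx ./ p.^(0:n-1)), p) + 1;   % action for each x = k
        T = zeros(p, n);
        T(sub2ind([p n], choice, 1:n)) = 1;
        el = losses(T);              % ((LTP)_11, ..., (LTP)_mm)
    end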
(a) Minimax strategies. A minimax strategy minimizes the maximum of the expected losses: the matrix T is computed by solving

        minimize    max_{i=1,...,m} (LTP)_ii
        subject to  T_jk ≥ 0, j = 1, . . . , p, k = 1, . . . , n
                    1^T T = 1^T.

    The variables are the pn entries of T. Express this problem as a linear program.
(b) Bayes strategies. Assume that the parameter θ itself is random with a known distribution q_i = prob(θ = i). The Bayes strategy minimizes the average expected loss, where the average is taken over θ. The matrix T of a Bayes strategy is the optimal solution of the problem

        minimize    ∑_{i=1}^m q_i (LTP)_ii
        subject to  T_jk ≥ 0, j = 1, . . . , p, k = 1, . . . , n
                    1^T T = 1^T.

    This is a linear program in the pn variables T_jk. Formulate a simple algorithm for solving this LP. Show that it is always possible to find an optimal Bayes strategy that is a pure strategy.

    Hint. First note that each column of the optimal T can be determined independently of the other columns. Then reduce the optimization problem over column k of T to one of the simple LPs in exercise 8.

(c) As a simple numerical example, we consider a quality control system in a factory. The products that are examined can be in one of two conditions (m = 2): θ = 1 means the product is defective; θ = 2 means the product works properly. To examine the quality of a product we use an automated measurement system that rates the product on a scale of 1 to 4. This rating is the observed variable x: n = 4 and x ∈ {1, 2, 3, 4}. We have calibrated the system to find the probabilities P_ij = prob(x = i | θ = j) of producing a rating x = i when the state of the product is θ = j. The matrix P is

        P = [ 0.7   0.0
              0.2   0.1
              0.05  0.1
              0.05  0.8 ].

    We have a choice of three possible actions (p = 3): a = 1 means we accept the product and forward it to be sold; a = 2 means we subject it to a manual inspection to determine whether it is defective or not; a = 3 means we discard the product. The loss matrix is

        L = [ 10  3  1
               0  2  6 ].

    Thus, for example, selling a defective product costs us $10; discarding a good product costs $6, et cetera.

    i. Compute the minimax strategy for this L and P (using an LP solver). Is the minimax strategy a pure strategy?

    ii. Compute the Bayes strategy for q = (0.2, 0.8) (using an LP solver or the simple algorithm formulated in part (b)).

    iii. Enumerate all (3^4 = 81) possible pure strategies T (in Matlab), and plot the expected losses ((LTP)_11, (LTP)_22) of each of these strategies in a plane.

    iv. On the same graph, show the losses for the minimax strategy and the Bayes strategy computed in parts (a) and (b).

    v. Suppose we let q vary over all possible prior distributions (all vectors with q_1 + q_2 = 1, q_1 ≥ 0, q_2 ≥ 0). Indicate on the graph the expected losses ((LTP)_11, (LTP)_22) of the corresponding Bayes strategies.
Exercise 23. Robust linear programming.

(a) Let x ∈ R^n be a given vector. Prove that x^T y ≤ ‖x‖_1 for all y with ‖y‖_∞ ≤ 1. Is the inequality tight, i.e., does there exist a y that satisfies ‖y‖_∞ ≤ 1 and x^T y = ‖x‖_1?

(b) Consider the set of linear inequalities

        a_i^T x ≤ b_i, i = 1, . . . , m.                                  (14)

    Suppose you don't know the coefficients a_i exactly. Instead you are given nominal values ā_i, and you know that the actual coefficient vectors satisfy

        ‖a_i − ā_i‖_∞ ≤ γ

    for a given γ > 0. In other words the actual coefficients a_ij can be anywhere in the intervals [ā_ij − γ, ā_ij + γ], or equivalently, each vector a_i can lie anywhere in a rectangle with corners ā_i + v where v ∈ {−γ, γ}^n (i.e., v has components γ or −γ).

    The set of inequalities (14) must be satisfied for all possible values of a_i, i.e., we replace (14) with the constraints

        a_i^T x ≤ b_i for all a_i ∈ {ā_i + v | ‖v‖_∞ ≤ γ} and for i = 1, . . . , m.       (15)

    A straightforward but very inefficient way to express this constraint is to enumerate the 2^n corners of the rectangle of possible values a_i and to require that

        ā_i^T x + v^T x ≤ b_i for all v ∈ {−γ, γ}^n and for i = 1, . . . , m.

    This is a system of m2^n inequalities.

    Use the result in (a) to show that (15) is in fact equivalent to the much more compact set of nonlinear inequalities

        ā_i^T x + γ‖x‖_1 ≤ b_i, i = 1, . . . , m.                         (16)

(c) Consider the LP

        minimize    c^T x
        subject to  a_i^T x ≤ b_i, i = 1, . . . , m.

    Again we are interested in situations where the coefficient vectors a_i are uncertain, but satisfy bounds ‖a_i − ā_i‖_∞ ≤ γ for given ā_i and γ. We want to minimize c^T x subject to the constraint that the inequalities a_i^T x ≤ b_i are satisfied for all possible values of a_i. We call this a robust LP:

        minimize    c^T x
        subject to  a_i^T x ≤ b_i for all a_i ∈ {ā_i + v | ‖v‖_∞ ≤ γ} and for i = 1, . . . , m.       (17)

    It follows from (b) that we can express this problem as a nonlinear optimization problem

        minimize    c^T x
        subject to  ā_i^T x + γ‖x‖_1 ≤ b_i, i = 1, . . . , m.             (18)

    Express (18) as an LP.

    Solving (18) is a worst-case approach to dealing with uncertainty in the data. If x* is the optimal solution of (18), then for any specific value of a_i, it may be possible to find a feasible x with a lower objective value than x*. However such an x would be infeasible for some other value of a_i.
Exercise 24. Robust Chebyshev approximation. In a similar way as in the previous problem, we can consider Chebyshev approximation problems

    minimize ‖Ax − b‖_∞

in which A ∈ R^{m×n} is uncertain. Suppose we can characterize the uncertainty as follows. The values of A depend on parameters u ∈ R^p, which are unknown but satisfy ‖u‖_∞ ≤ ρ. Each row vector a_i can be written as a_i = ā_i + B_i u where ā_i ∈ R^n and B_i ∈ R^{n×p} are given.

In the robust Chebyshev approximation we minimize the worst-case value of ‖Ax − b‖_∞. This problem can be written as

    minimize  max_{‖u‖_∞ ≤ ρ} max_{i=1,...,m} |(ā_i + B_i u)^T x − b_i|.      (19)

Show that (19) is equivalent to

    minimize  max_{i=1,...,m} ( |ā_i^T x − b_i| + ρ‖B_i^T x‖_1 ).             (20)

To prove this you can use the results from exercise 23. There is also a fairly straightforward direct proof. Express (20) as an LP.
Exercise 25. Describe how you would use linear programming to solve the following problem. You are given an LP

    minimize    c^T x
    subject to  Ax ≤ b                                                    (21)

in which the coefficients of A ∈ R^{m×n} are uncertain. Each coefficient A_ij can take arbitrary values in the interval

    [Ā_ij − ΔA_ij, Ā_ij + ΔA_ij],

where Ā_ij and ΔA_ij are given with ΔA_ij ≥ 0. The optimization variable x in (21) must be feasible for all possible values of A. In other words, we want to solve

    minimize    c^T x
    subject to  Ax ≤ b for all A ∈ 𝒜,

where 𝒜 ⊆ R^{m×n} is the set

    𝒜 = {A ∈ R^{m×n} | Ā_ij − ΔA_ij ≤ A_ij ≤ Ā_ij + ΔA_ij, i = 1, . . . , m, j = 1, . . . , n}.

If you know more than one solution method, you should give the most efficient one.
Exercise 26. Optimization problems with uncertain data sometimes involve two sets of variables that can be selected in two stages. When the first set of variables is chosen, the problem data are uncertain. The second set of variables, however, can be selected after the actual values of the parameters have become known.

As an example, we consider two-stage robust formulations of the Chebyshev approximation problem

    minimize ‖Ax + By + b‖_∞,

with variables x ∈ R^n and y ∈ R^p. The problem parameters A, B, b are uncertain, and we model the uncertainty by assuming that there are m possible scenarios (or instances of the problem). In scenario k, the values of A, B, b are A_k, B_k, b_k.

In the two-stage setting we first select x before the scenario is known; then we choose y after learning the actual value of k. The optimal choice of y in the second stage is the value that minimizes ‖A_k x + B_k y + b_k‖_∞, for given x, A_k, B_k, b_k. We denote by f_k(x) the optimal value of this second-stage optimization problem for scenario k:

    f_k(x) = min_y ‖A_k x + B_k y + b_k‖_∞,   k = 1, . . . , m.
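Evaluating f_k(x) for fixed x is itself a small LP (the standard reformulation of an ℓ∞ objective). Our illustrative Matlab sketch, using linprog from the Optimization Toolbox:

    % Sketch: evaluate f_k(x) = min over y of ||Ak*x + Bk*y + bk||_inf.
    % Variables (y, t); minimize t subject to -t <= Ak*x + Bk*y + bk <= t.
    function fk = second_stage(Ak, Bk, bk, x)
        [m, p] = size(Bk);
        r = Ak*x + bk;
        f = [zeros(p, 1); 1];                    % objective: minimize t
        Aineq = [ Bk, -ones(m, 1);               %  Bk*y + r <= t
                 -Bk, -ones(m, 1)];              % -Bk*y - r <= t
        bineq = [-r; r];
        sol = linprog(f, Aineq, bineq);
        fk = sol(end);
    end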
(a) We can minimize the worst-case objective by solving the optimization problem

        minimize  max_{k=1,...,m} f_k(x)

    with x as variable. Formulate this problem as an LP.

(b) If we know the probability distribution of the scenarios we can also minimize the expected cost, by solving

        minimize  ∑_{k=1}^m π_k f_k(x)

    with x as variable. The coefficient π_k ≥ 0 is the probability that (A, B, b) is equal to (A_k, B_k, b_k). Formulate this problem as an LP.
Exercise 27. Feedback design for a static linear system. In this problem we use linear programming to design a linear feedback controller for a static linear system. (The method extends to dynamical systems but we will not consider the extension here.) The figure shows the system and the controller.

[Figure: plant P with exogenous input w and actuator input u, producing critical output z and sensed output y; controller K feeds y back to u.]

The elements of the vector w ∈ R^{n_w} are called the exogenous inputs, z ∈ R^{n_z} are the critical outputs, y ∈ R^{n_y} are the sensed outputs, and u ∈ R^{n_u} are the actuator inputs. These vectors are related as

    z = P_zw w + P_zu u
    y = P_yw w + P_yu u,                                                  (22)

where the matrices P_zu, P_zw, P_yu, P_yw are given.

The controller feeds back the sensed outputs y to the actuator inputs u. The relation is

    u = Ky                                                                (23)

where K ∈ R^{n_u×n_y}. The matrix K will be the design variable.

Assuming I − P_yu K is invertible, we can eliminate y from the second equation in (22). We have

    y = (I − P_yu K)^{−1} P_yw w

and substituting in the first equation we can write z = Hw with

    H = P_zw + P_zu K(I − P_yu K)^{−1} P_yw.                              (24)

The matrix H is a complicated nonlinear function of K.

Suppose that the signals w are disturbances or noises acting on the system, and that they can take any values with ‖w‖_∞ ≤ β for some given β. We would like to choose K so that the effect of the disturbances w on the output z is minimized, i.e., we would like z to be as close as possible to zero, regardless of the values of w. Specifically, if we use the infinity norm ‖z‖_∞ to measure the size of z, we are interested in determining K by solving the optimization problem

    minimize  max_{‖w‖_∞ ≤ β} ‖Hw‖_∞,                                     (25)

where H depends on the variable K through the formula (24).

(a) We first derive an explicit expression for the objective function in (25). Show that

        max_{‖w‖_∞ ≤ β} ‖Hw‖_∞ = β max_{i=1,...,n_z} ∑_{j=1,...,n_w} |h_ij|

    where h_ij are the elements of H. Up to the constant β, this is the maximum row sum of H: for each row of H we calculate the sum of the absolute values of its elements; we then select the largest of these row sums.

(b) Using this expression, we can reformulate problem (25) as

        minimize  max_{i=1,...,n_z} ∑_{j=1,...,n_w} |h_ij|,               (26)

    where h_ij depends on the variable K through the formula (24). Formulate (26) as an LP. Hint. Use a change of variables

        Q = K(I − P_yu K)^{−1},

    and optimize over Q ∈ R^{n_u×n_y} instead of K. You may assume that I + QP_yu is invertible, so the transformation is invertible: we can find K from Q as K = (I + QP_yu)^{−1} Q.
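The change of variables in the hint is easy to sanity-check numerically; our sketch with random illustrative data:

    % Sketch: check that Q = K*inv(I - Pyu*K) is inverted by K = inv(I + Q*Pyu)*Q,
    % and that H = Pzw + Pzu*Q*Pyw is linear in Q.
    nu = 2; ny = 3; nz = 4; nw = 5;
    Pzw = randn(nz, nw); Pzu = randn(nz, nu);
    Pyw = randn(ny, nw); Pyu = randn(ny, nu);
    K = randn(nu, ny);

    Q = K / (eye(ny) - Pyu*K);            % K*(I - Pyu*K)^{-1}
    Krec = (eye(nu) + Q*Pyu) \ Q;         % recover K from Q
    norm(K - Krec)                        % ~ 0

    H1 = Pzw + Pzu*(K/(eye(ny) - Pyu*K))*Pyw;
    H2 = Pzw + Pzu*Q*Pyw;                 % same H, now linear in Q
    norm(H1 - H2)                         % ~ 0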
Exercise 28. Formulate the following problem as an LP. Find the largest ball

    B(x_c, R) = {x | ‖x − x_c‖_2 ≤ R}

enclosed in a given polyhedron

    P = {x | a_i^T x ≤ b_i, i = 1, . . . , m}.

In other words, express the problem

    maximize    R
    subject to  B(x_c, R) ⊆ P

as an LP. The problem variables are the center x_c ∈ R^n and the radius R of the ball.
Exercise 29. Let P_1 and P_2 be two polyhedra described as

    P_1 = {x | Ax ≤ b},    P_2 = {x | −1 ≤ Cx ≤ 1},

where A ∈ R^{m×n}, C ∈ R^{p×n}, and b ∈ R^m. The polyhedron P_2 is symmetric about the origin, i.e., if x ∈ P_2, then −x ∈ P_2. We say the origin is the center of P_2.

For t > 0 and x_c ∈ R^n, we use the notation tP_2 + x_c to denote the polyhedron

    tP_2 + x_c = {tx + x_c | x ∈ P_2},

which is obtained by first scaling P_2 by a factor t about the origin, and then translating its center to x_c.

Explain how you would solve the following two problems using linear programming. If you know different formulations, you should choose the most efficient method.

(a) Find the largest polyhedron tP_2 + x_c enclosed in P_1, i.e.,

        maximize    t
        subject to  tP_2 + x_c ⊆ P_1.

(b) Find the smallest polyhedron tP_2 + x_c containing P_1, i.e.,

        minimize    t
        subject to  P_1 ⊆ tP_2 + x_c.

In both problems the variables are t ∈ R and x_c ∈ R^n.
Exercise 30. Consider the linear system of exercise 9, equation (4). We study two optimal control problems. In both problems we assume the system is initially at rest at the origin, i.e., z(0) = 0.

(a) In the first problem we want to determine the most efficient input sequence u(kT), k = 0, . . . , 79, that brings the system to state (0, 0, 10, 10) in 80 time periods (i.e., at t = 8 the two masses should be at rest at position v_1 = v_2 = 10). We assume the cost (e.g., fuel consumption) of the input signal u is proportional to ∑_k |u(kT)|. We also impose the constraint that the amplitude of the input must not exceed 2. This leads us to the following problem:

        minimize    ∑_{k=0}^{79} |u(kT)|
        subject to  z(80T) = (0, 0, 10, 10)                               (27)
                    |u(kT)| ≤ 2, k = 0, . . . , 79.

    The state z and the input u are related by (4) with z(0) = 0. The variables in (27) are u(0), u(T), . . . , u(79T).

(b) In the second problem we want to bring the system to the state (0, 0, 10, 10) as quickly as possible, subject to the limit on the magnitude of u:

        minimize    N
        subject to  z(NT) = (0, 0, 10, 10)
                    |u(kT)| ≤ 2, k = 0, . . . , N − 1.

    The variables are N ∈ Z, and u(0), u(T), . . . , u((N − 1)T).

Solve these two problems numerically. Plot the input u and the positions v_1, v_2 as functions of time.
Exercise 31. We consider a linear dynamical system with state x(t) ∈ R^n, t = 0, . . . , N, and actuator or input signal u(t) ∈ R, for t = 0, . . . , N − 1. The dynamics of the system is given by the linear recurrence

    x(t + 1) = Ax(t) + bu(t), t = 0, . . . , N − 1,

where A ∈ R^{n×n} and b ∈ R^n are given. We assume that the initial state is zero, i.e., x(0) = 0.

The minimum fuel optimal control problem is to choose the inputs u(0), . . . , u(N − 1) so as to minimize the total fuel consumed, which is given by

    F = ∑_{t=0}^{N−1} f(u(t)),

subject to the constraint that x(N) = x_des, where N is the (given) time horizon, and x_des ∈ R^n is the (given) final or target state. The function f : R → R is the fuel use map for the actuator, which gives the amount of fuel used as a function of the actuator signal amplitude. In this problem we use

    f(a) = |a|        if |a| ≤ 1,
           2|a| − 1   if |a| > 1.

This means that fuel use is proportional to the absolute value of the actuator signal, for actuator signals between −1 and 1; for larger actuator signals the marginal fuel efficiency is half.
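As a quick illustration (ours, not part of the problem statement), the fuel map can be evaluated in vectorized Matlab directly from its piecewise definition:

    % Sketch: evaluate the fuel use map f on a vector of actuator values.
    f = @(a) abs(a).*(abs(a) <= 1) + (2*abs(a) - 1).*(abs(a) > 1);
    u = [-2 -1 -0.5 0 0.5 1 2];
    f(u)     % returns [3 1 0.5 0 0.5 1 3]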
(a) Formulate the minimum fuel optimal control problem as an LP.

(b) Solve the following instance of the problem:

        A = [ 1   1
              0   0.95 ],    b = [ 0
                                   0.1 ],    x(0) = (0, 0),    x_des = (10, 0),    N = 20.

    We can interpret the system as a simple model of a vehicle moving in one dimension. The state dimension is n = 2, with x_1(t) denoting the position of the vehicle at time t, and x_2(t) giving its velocity. The initial state is (0, 0), which corresponds to the vehicle at rest at position 0; the final state is x_des = (10, 0), which corresponds to the vehicle being at rest at position 10. Roughly speaking, this means that the actuator input affects the velocity, which in turn affects the position. The coefficient A_22 = 0.95 means that the velocity decays by 5% in one sample period, if no actuator signal is applied.

    Plot the input signal u(t) for t = 0, . . . , 19, and the position and velocity (i.e., x_1(t) and x_2(t)) for t = 0, . . . , 20.
Exercise 32. Approximating a matrix in infinity norm. The infinity (induced) norm of a matrix A ∈ R^{m×n}, denoted ‖A‖_{∞,i}, is defined as

    ‖A‖_{∞,i} = max_{i=1,...,m} ∑_{j=1}^n |a_ij|.

The infinity norm gives the maximum ratio of the infinity norm of Ax to the infinity norm of x:

    ‖A‖_{∞,i} = max_{x≠0} ‖Ax‖_∞ / ‖x‖_∞.

This norm is sometimes called the max-row-sum norm, for obvious reasons.
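A one-line check of the max-row-sum formula against Matlab's built-in matrix norm (our illustration):

    % Sketch: the induced infinity norm equals the maximum absolute row sum.
    A = randn(5, 3);
    norm_formula = max(sum(abs(A), 2));   % max over rows of sum_j |a_ij|
    norm_builtin = norm(A, Inf);          % Matlab's induced infinity norm
    norm_formula - norm_builtin           % ~ 0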
Consider the problem of approximating a matrix, in the max-row-sum norm, by a linear combination of other matrices. That is, we are given k + 1 matrices A_0, . . . , A_k ∈ R^{m×n}, and need to find x ∈ R^k that minimizes

    ‖A_0 + x_1 A_1 + · · · + x_k A_k‖_{∞,i}.

Express this problem as a linear program. Explain the significance of any extra variables in your LP. Carefully explain why your LP formulation solves this problem, e.g., what is the relation between the feasible set for your LP and this problem?
Exercise 33. We are given p matrices A_i ∈ R^{n×n}, and we would like to find a single matrix X ∈ R^{n×n} that we can use as an approximate right-inverse for each matrix A_i, i.e., we would like to have

    A_i X ≈ I, i = 1, . . . , p.

We can do this by solving the following optimization problem with X as variable:

    minimize  max_{i=1,...,p} ‖I − A_i X‖_∞.                              (28)

Here ‖H‖_∞ is the infinity-norm or max-row-sum norm of a matrix H, defined as

    ‖H‖_∞ = max_{i=1,...,m} ∑_{j=1}^n |H_ij|,

if H ∈ R^{m×n}.

Express the problem (28) as an LP. You don't have to reduce the LP to a canonical form, as long as you are clear about what the variables are, what the meaning is of any auxiliary variables that you introduce, and why the LP is equivalent to the problem (28).
Exercise 34. Explain how you would use linear programming to solve the following optimization problems.

(a) Given A ∈ R^{m×n}, b ∈ R^m,

        minimize  ∑_{i=1}^m max{0, a_i^T x + b_i}.

    The variable is x ∈ R^n.

(b) Given A ∈ R^{m×n}, b ∈ R^m,

        minimize    max_{i=1,...,m} max{a_i^T x + b_i, 1/(a_i^T x + b_i)}
        subject to  Ax + b > 0.

    The variable is x ∈ R^n.

(c) Given m numbers a_1, a_2, . . . , a_m ∈ R, and two vectors l, u ∈ R^m, find the polynomial

        f(t) = c_0 + c_1 t + · · · + c_n t^n

    of lowest degree that satisfies the bounds

        l_i ≤ f(a_i) ≤ u_i, i = 1, . . . , m.

    The variables in the problem are the coefficients c_i of the polynomial.

(d) Given p + 1 matrices A_0, A_1, . . . , A_p ∈ R^{m×n}, find the vector x ∈ R^p that minimizes

        max_{‖y‖_1 = 1} ‖(A_0 + x_1 A_1 + · · · + x_p A_p)y‖_1.
Exercise 35. Suppose you are given an infeasible set of linear inequalities

    a_i^T x ≤ b_i, i = 1, . . . , m,

and you are asked to find an x that satisfies many of the inequalities (ideally, as many as possible). Of course, the exact solution of this problem is difficult and requires combinatorial or integer optimization techniques, so you should concentrate on heuristic or sub-optimal methods. More specifically, you are asked to formulate a heuristic method based on solving a single LP.

Test the method on the example problem in the file ex35data.m available on the class webpage. (The Matlab command [A,b] = ex35data generates a sparse matrix A ∈ R^{100×50} and a vector b ∈ R^100, that define an infeasible set of linear inequalities.) To count the number of inequalities satisfied by x, you can use the Matlab command

    length(find(b-A*x > -1e-5)).
Exercise 36. Consider the linear-fractional program

    minimize    (c^T x + α)/(d^T x + β)
    subject to  Ax ≤ b,                                                   (29)

where A ∈ R^{m×n}, b ∈ R^m, c, d ∈ R^n, and α, β ∈ R. We assume that the polyhedron

    P = {x ∈ R^n | Ax ≤ b}

is bounded and that d^T x + β > 0 for all x ∈ P.

Show that you can solve (29) by solving the LP

    minimize    c^T y + αz
    subject to  Ay − zb ≤ 0
                d^T y + βz = 1                                            (30)
                z ≥ 0

in the variables y ∈ R^n and z ∈ R. More precisely, suppose ŷ and ẑ are a solution of (30). Show that ẑ > 0 and that ŷ/ẑ solves (29).
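Our illustrative Matlab sketch of the transformation (30), using linprog and random data constructed to satisfy the assumptions:

    % Sketch: solve the linear-fractional program (29) through the LP (30).
    % Box constraints keep P bounded; beta is chosen so d'*x + beta >= 1 on P.
    n = 4; m = 10;
    A = [randn(m, n); eye(n); -eye(n)]; b = [rand(m, 1); ones(2*n, 1)];
    c = randn(n, 1); d = randn(n, 1); alpha = 1; beta = 1 + norm(d, 1);

    % variables (y, z): minimize c'*y + alpha*z
    f     = [c; alpha];
    Aineq = [A, -b];   bineq = zeros(size(A, 1), 1);   % A*y - z*b <= 0
    Aeq   = [d', beta]; beq = 1;                       % d'*y + beta*z = 1
    lb    = [-inf(n, 1); 0];                           % z >= 0
    sol   = linprog(f, Aineq, bineq, Aeq, beq, lb);
    x     = sol(1:n) / sol(end);                       % recover x = y/z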
Exercise 37. Consider the problem

    minimize    ‖Ax − b‖_1 / (c^T x + d)
    subject to  ‖x‖_∞ ≤ 1,

where A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, and d ∈ R. We assume that d > ‖c‖_1.

(a) Formulate this problem as a linear-fractional program.

(b) Show that d > ‖c‖_1 implies that c^T x + d > 0 for all feasible x.

(c) Show that the problem is equivalent to the convex optimization problem

        minimize    ‖Ay − bt‖_1
        subject to  ‖y‖_∞ ≤ t                                             (31)
                    c^T y + dt = 1,

    with variables y ∈ R^n, t ∈ R.

(d) Formulate problem (31) as an LP.
Exercise 38. Explain how you would solve the following problem using linear programming. You are given two sets of points in R^n:

    S_1 = {x_1, . . . , x_N},    S_2 = {y_1, . . . , y_M}.

You are asked to find a polyhedron

    P = {x | a_i^T x ≤ b_i, i = 1, . . . , m}

that contains the points in S_1 in its interior, and does not contain any of the points in S_2:

    S_1 ⊆ {x | a_i^T x < b_i, i = 1, . . . , m},    S_2 ⊆ {x | a_i^T x > b_i for at least one i} = R^n \ P.

An example is shown in the figure, with the points in S_1 shown as open circles and the points in S_2 as filled circles.

You can assume that the two sets are separable in the way described. Your solution method should return a_i and b_i, i = 1, . . . , m, given the sets S_1 and S_2. The number of inequalities m is not specified, but it should not exceed M + N. You are allowed to solve one or more LPs or LP feasibility problems. The method should be efficient, i.e., the dimensions of the LPs you solve should not be exponential as a function of N and M.
Exercise 39. Explain how you would solve the following problem using linear programming. Given two polyhedra

    P_1 = {x | Ax ≤ b},    P_2 = {x | Cx ≤ d},

prove that P_1 ⊆ P_2, or find a point in P_1 that is not in P_2. The matrices A ∈ R^{m×n} and C ∈ R^{p×n}, and the vectors b ∈ R^m and d ∈ R^p are given.

If you know several solution methods, give the most efficient one.
Exercise 40. Formulate the following problem as an LP:

    maximize    ∑_{j=1}^n r_j(x_j)
    subject to  ∑_{j=1}^n A_ij x_j ≤ c_i^max, i = 1, . . . , m            (32)
                x_j ≥ 0, j = 1, . . . , n.

The functions r_j are defined as

    r_j(u) = p_j u                          if 0 ≤ u ≤ q_j,
             p_j q_j + p_j^disc (u − q_j)   if u ≥ q_j,                   (33)

where p_j > 0, q_j > 0 and 0 < p_j^disc < p_j. The variables in the problem are x_j, j = 1, . . . , n. The parameters A_ij, c_i^max, p_j, q_j and p_j^disc are given.

The variables x_j in the problem represent activity levels (for example, production levels for different products manufactured by a company). These activities consume m resources, which are limited. Activity j consumes A_ij x_j of resource i. (Ordinarily we have A_ij ≥ 0, i.e., activity j consumes resource i. But we allow the possibility that A_ij < 0, which means that activity j actually generates resource i as a by-product.) The total resource consumption is additive, so the total amount of resource i consumed is c_i = ∑_{j=1}^n A_ij x_j. Each resource consumption is limited: we must have c_i ≤ c_i^max, where c_i^max are given.

Activity j generates revenue r_j(x_j), given by the expression (33). In this definition p_j > 0 is the basic price, q_j > 0 is the quantity discount level, and p_j^disc is the quantity discount price, for (the product of) activity j. We have 0 < p_j^disc < p_j. The total revenue is the sum of the revenues associated with each activity, i.e., ∑_{j=1}^n r_j(x_j). The goal in (32) is to choose activity levels that maximize the total revenue while respecting the resource limits.
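A tiny sketch (ours) that evaluates the revenue function (33) directly from its two-branch definition:

    % Sketch: evaluate r_j(u) for given basic price p, discount level q,
    % and discount price pdisc (0 < pdisc < p).
    r = @(u, p, q, pdisc) (u <= q).*(p*u) + (u > q).*(p*q + pdisc*(u - q));
    r(0:5, 2, 3, 0.5)      % price 2 per unit up to u = 3, then marginal price 0.5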
5 Duality

Exercise 41. The main result of linear programming duality is that the optimal value of the LP

    minimize    c^T x
    subject to  Ax ≤ b

is equal to the optimal value of the LP

    maximize    −b^T z
    subject to  A^T z + c = 0
                z ≥ 0,

except when they are both infeasible. Give an example in which both problems are infeasible.
Exercise 42. Consider the LP

    minimize    47x_1 + 93x_2 + 17x_3 − 93x_4

    subject to  [ 1   6   1   3 ]         [ 3 ]
                [ 1   2   7   1 ]         [ 5 ]
                [ 0   3  10   1 ] x   ≤   [ 8 ]
                [ 6  11   2  12 ]         [ 7 ]
                [ 1   6   1   3 ]         [ 4 ]

Prove, without using any LP code, that x = (1, 1, 1, 1) is optimal.
Exercise 43. Consider the polyhedron

    P = {x ∈ R^4 | Ax ≤ b, Cx = d}

where

    A = [ 1   1   3   4 ]        [  8 ]
        [ 4   2   2   9 ]        [ 17 ]
        [ 8   2   0   5 ],  b =  [ 15 ]
        [ 0   6   7   4 ]        [ 17 ]

and

    C = [ 13  11  12  22 ],    d = 58.

(a) Prove that x = (1, 1, 1, 1) is an extreme point of P.

(b) Prove that x is optimal for the LP

        minimize    c^T x
        subject to  Ax ≤ b
                    Cx = d

    with c = (59, 39, 38, 85).

(c) Is x the only optimal point? If not, describe the entire optimal set.

You can use any software, but you have to justify your answers analytically.
Exercise 44. Consider the LP

    minimize    47x_1 + 93x_2 + 17x_3 − 93x_4

    subject to  [ 1   6   1   3 ]         [ 3 ]       [  1 ]
                [ 1   2   7   1 ]         [ 5 ]       [  3 ]
                [ 0   3  10   1 ] x   ≤   [ 8 ]  + ε  [ 13 ]              (34)
                [ 6  11   2  12 ]         [ 7 ]       [ 46 ]
                [ 1   6   1   3 ]         [ 4 ]       [  2 ]
                [ 11  1   1   8 ]         [ 5 ]       [ 75 ]

where ε ∈ R is a parameter. For ε = 0, this is the LP of exercise 42, with one extra inequality (the sixth inequality). This inequality is inactive at x = (1, 1, 1, 1), so x is also the optimal solution for (34) when ε = 0.

(a) Determine the range of values of ε for which the first four constraints are active at the optimum.

(b) Give an explicit expression for the optimal primal solution, the optimal dual solution, and the optimal value, within the range of ε you determined in part (a). (If for some value of ε the optimal points are not unique, it is sufficient to give one optimal point.)
Exercise 45. Consider the parametrized primal and dual LPs

    minimize    (c + εd)^T x
    subject to  Ax ≤ b

and

    maximize    −b^T z
    subject to  A^T z + c + εd = 0
                z ≥ 0,

where

    A = [ 2   3   5   4 ]        [ 6 ]
        [ 2   1   3   4 ]        [ 2 ]
        [ 2   1   3   1 ],  b =  [ 1 ],
        [ 4   2   4   2 ]        [ 0 ]
        [ 2   3   9   1 ]        [ 8 ]

    c = (8, 32, 66, 14),    d = (16, 6, 2, 3).

(a) Prove that x* = (1, 1, 1, 1) and z* = (9, 9, 4, 9, 0) are optimal when ε = 0.

(b) How does p*(ε) vary as a function of ε around ε = 0? Give an explicit expression for p*(ε), and specify the interval in which it is valid.

(c) Also give an explicit expression for the primal and dual optimal solutions for values of ε around ε = 0.

Remark: The problem is similar to the sensitivity problem discussed in the lecture notes. Here we consider the case where c is subject to a perturbation, while b is fixed, so you have to develop the dual of the derivation in the lecture notes.
Exercise 46. Consider the pair of primal and dual LPs

    minimize    (c + εd)^T x
    subject to  Ax ≤ b + εf

and

    maximize    −(b + εf)^T z
    subject to  A^T z + c + εd = 0
                z ≥ 0,

where

    A = [ 4   12   2    1 ]
        [ 17  12   7   11 ]
        [ 1    0   6    1 ],
        [ 3    3  22    1 ]
        [ 11   2   1    8 ]

    b = (8, 13, 4, 27, 18),    c = (49, 34, 50, 5),    d = (3, 8, 21, 25),    f = (6, 15, 13, 48, 8),

and ε is a parameter.

(a) Prove that x* = (1, 1, 1, 1) is optimal when ε = 0, by constructing a dual optimal point z* that has the same objective value as x*. Are there any other primal or dual optimal solutions?

(b) Express the optimal value p*(ε) as a continuous function of ε on an interval that contains ε = 0. Specify the interval in which your expression is valid. Also give explicit expressions for the primal and dual solutions as a function of ε over the same interval.
Exercise 47. In some applications we are interested in minimizing two cost functions, c^T x and d^T x, over a polyhedron P = {x | Ax ≤ b}. For general c and d, the two objectives are competing, i.e., it is not possible to minimize them simultaneously, and there exists a trade-off between them. The problem can be visualized as in the figure below.

[Figure: the set of pairs (c^T x, d^T x) for x ∈ P, with the values at the extreme points of P marked as circles and the trade-off curve drawn as a heavy line along the lower part of the boundary.]

The shaded region is the set of pairs (c^T x, d^T x) for all possible x ∈ P. The circles are the values (c^T x, d^T x) at the extreme points of P. The lower part of the boundary, shown as a heavy line, is called the trade-off curve. Points (c^T x, d^T x) on this curve are efficient in the following sense: it is not possible to improve both objectives by choosing a different feasible x.

Suppose (c^T x̂, d^T x̂) is a breakpoint of the trade-off curve, where x̂ is a nondegenerate extreme point of P. Explain how the left and right derivatives of the trade-off curve at this breakpoint can be computed.

Hint. Compute the largest and smallest values of γ such that x̂ is optimal for the LP

    minimize    d^T x + γc^T x
    subject to  Ax ≤ b.
Exercise 48. Consider the ℓ1-norm minimization problem

        minimize    ||Ax + b + εd||_1

with

        A = [ 2  7  1 ]
            [ 5  1  3 ]
            [ 7  3  5 ],    b = (4, 3, 9, 0, 11, 5),    d = (10, 13, 27, 10, 7, 14).
            [ 1  4  4 ]
            [ 1  5  5 ]
            [ 2  5  1 ]

(a) Suppose ε = 0. Prove, without using any LP code, that x* = 1 is optimal. Are there
    any other optimal points?
(b) Give an explicit formula for the optimal value as a function of ε for small positive and
    negative values of ε. What are the values of ε for which your expression is valid?
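Remark. The standard LP formulation of the ℓ1 problem is useful for checking the answers numerically. A Matlab sketch (linprog assumed available): minimize 1^T t over (x, t) subject to -t ≤ Ax + b + εd ≤ t.

    function [x, val] = l1min(A, bb)
    % Solve minimize ||A*x + bb||_1 as an LP. Call with bb = b for
    % part (a), or bb = b + eps*d for part (b).
    [m, n] = size(A);
    f = [zeros(n,1); ones(m,1)];                  % objective: sum of t
    Ain = [A, -eye(m); -A, -eye(m)];              % A*x - t <= -bb, -A*x - t <= bb
    bin = [-bb; bb];
    sol = linprog(f, Ain, bin);
    x = sol(1:n);  val = sum(sol(n+1:end));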
Exercise 49. Consider the following optimization problem in x:

        minimize    c^T x
        subject to  ||Ax + b||_1 ≤ 1                                    (35)

where A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n.

(a) Formulate this problem as an LP in inequality form and explain why your LP formu-
    lation is equivalent to problem (35).
(b) Derive the dual LP, and show that it is equivalent to the problem

        maximize    b^T z - ||z||_∞
        subject to  A^T z + c = 0.

    What is the relation between the optimal z and the optimal variables in the dual LP?
(c) Give a direct argument (i.e., not quoting any results from LP duality) that whenever
    x is primal feasible (i.e., ||Ax + b||_1 ≤ 1) and z is dual feasible (i.e., A^T z + c = 0), we
    have

        c^T x ≥ b^T z - ||z||_∞.
Exercise 50. Lower bounds in Chebyshev approximation from least-squares. Consider the Cheby-
shev approximation problem

        minimize    ||Ax - b||_∞                                        (36)

where A ∈ R^{m×n} (m ≥ n) and rank A = n. Let x_cheb denote an optimal point for the
Chebyshev approximation problem (there may be multiple optimal points; x_cheb denotes one
of them).

The Chebyshev problem has no closed-form solution, but the corresponding least-squares
problem does. We denote the least-squares solution x_ls as

        x_ls = argmin ||Ax - b|| = (A^T A)^{-1} A^T b.

The question we address is the following. Suppose that for a particular A and b you have
computed the least-squares solution x_ls (but not x_cheb). How suboptimal is x_ls for the Cheby-
shev problem? In other words, how much larger is ||A x_ls - b||_∞ than ||A x_cheb - b||_∞? To
answer this question, we need a lower bound on ||A x_cheb - b||_∞.

(a) Prove the lower bound

        ||A x_cheb - b||_∞  ≥  (1/√m) ||A x_ls - b||,

    using the fact that for all y ∈ R^m,

        (1/√m) ||y||  ≤  ||y||_∞  ≤  ||y||.

(b) In the duality lecture we derived the following dual for (36):

        maximize    b^T z
        subject to  A^T z = 0                                           (37)
                    ||z||_1 ≤ 1.

    We can use this dual problem to improve the lower bound obtained in (a).
    Denote the least-squares residual as r_ls = b - A x_ls. Assuming r_ls ≠ 0, show that

        z = r_ls / ||r_ls||_1,        z̄ = -r_ls / ||r_ls||_1

    are both feasible in (37). By duality, b^T z and b^T z̄ are lower bounds for ||A x_cheb - b||_∞.
    Which is the better bound? How does it compare with the bound obtained in part (a)
    above?

One application is as follows. You need to solve the Chebyshev approximation problem, but
only within, say, 10%. You first solve the least-squares problem (which can be done faster),
and then use the bound from part (b) to see if it can guarantee a maximum 10% error. If it
can, great; otherwise solve the Chebyshev problem (by slower methods).
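Remark. The sketch below compares the two bounds on a random instance. Since A^T r_ls = 0 implies b^T r_ls = ||r_ls||^2, the bound of part (b) evaluates to ||r_ls||^2/||r_ls||_1, which is never worse than the bound of part (a) because ||r_ls||_1 ≤ √m ||r_ls||.

    % Matlab sketch: least-squares-based lower bounds on ||A*x_cheb - b||_inf.
    m = 200; n = 10;
    A = randn(m, n);  b = randn(m, 1);            % random instance
    xls = A \ b;                                  % least-squares solution
    rls = b - A*xls;                              % residual, satisfies A'*rls = 0
    bound_a = norm(rls) / sqrt(m);                % part (a)
    bound_b = norm(rls)^2 / norm(rls, 1);         % part (b), = b'*(rls/||rls||_1)
    fprintf('bound (a): %.4f, bound (b): %.4f\n', bound_a, bound_b);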
Exercise 51. A matrix A ∈ R^{(mp)×n} and a vector b ∈ R^{mp} are partitioned in m blocks of p rows:

        A = [ A_1 ]        b = [ b_1 ]
            [ A_2 ]            [ b_2 ]
            [  :  ]            [  :  ]
            [ A_m ],           [ b_m ],

with A_k ∈ R^{p×n}, b_k ∈ R^p.

(a) Express the optimization problem

        minimize    Σ_{k=1}^m ||A_k x - b_k||_∞                         (38)

    as an LP.
(b) Suppose rank(A) = n and A x_ls - b ≠ 0, where x_ls is the solution of the least-squares
    problem

        minimize    ||Ax - b||_2.

    Show that the optimal value of (38) is bounded below by

        Σ_{k=1}^m ||r_k||^2  /  max_{k=1,...,m} ||r_k||_1,

    where r_k = A_k x_ls - b_k for k = 1, ..., m.
Exercise 52. Let x be a real-valued random variable which takes values in {a_1, a_2, ..., a_n} where
0 < a_1 < a_2 < ··· < a_n, and prob(x = a_i) = p_i. Obviously p satisfies Σ_{i=1}^n p_i = 1 and p_i ≥ 0
for i = 1, ..., n.

(a) Consider the problem of determining the probability distribution that maximizes
    prob(x ≥ α) subject to the constraint E x = b, i.e.,

        maximize    prob(x ≥ α)
        subject to  E x = b,                                            (39)

    where α and b are given (a_1 < α < a_n, and a_1 ≤ b ≤ a_n). The variable in problem (39)
    is the probability distribution, i.e., the vector p ∈ R^n. Write (39) as an LP.
(b) Take the dual of the LP in (a), and show that it can be reformulated as

        minimize    bλ + μ
        subject to  a_i λ + μ ≥ 0   for all a_i < α
                    a_i λ + μ ≥ 1   for all a_i ≥ α.

    The variables are λ and μ. Give a graphical interpretation of this problem, by interpret-
    ing λ and μ as coefficients of an affine function f(x) = λx + μ. Show that the optimal
    value is equal to

        (b - a_1)/(ā - a_1)   if b ≤ ā,        1   if b ≥ ā,

    where ā = min{a_i | a_i ≥ α}. Also give the optimal values of λ and μ.
(c) From the dual solution, determine the distribution p that solves the problem in (a).
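Remark. The LP in part (a) is small enough to solve directly, which gives a check on the closed-form answer. A Matlab sketch (Optimization Toolbox assumed):

    function p = maxprob(a, alpha, b)
    % Sketch for (39): maximize prob(x >= alpha) subject to E x = b.
    % linprog minimizes, so the objective is negated.
    n = numel(a);
    f = -double(a(:) >= alpha);                   % -(sum of p_i over a_i >= alpha)
    Aeq = [a(:)'; ones(1, n)];                    % E x = b and 1'*p = 1
    beq = [b; 1];
    p = linprog(f, [], [], Aeq, beq, zeros(n, 1), []);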
Exercise 53. The max-flow min-cut theorem. Consider the maximum flow problem of lecture 8,
page 7, with nonnegative arc flows:

        maximize    t
        subject to  Ax = te                                             (40)
                    0 ≤ x ≤ c.

Here e = (1, 0, ..., 0, -1) ∈ R^m, A ∈ R^{m×n} is the node-arc incidence matrix of a directed
graph with m nodes and n arcs, and c ∈ R^n is a vector of positive arc capacities. The
variables are t ∈ R and x ∈ R^n. In this problem we have an external supply of t at node 1
(the source node) and -t at node m (the target node), and we maximize t subject to the
balance equations and the arc capacity constraints.

A cut separating nodes 1 and m is a set of nodes that contains node 1 and does not contain
node m, i.e., S ⊂ {1, ..., m} with 1 ∈ S and m ∉ S. The capacity of the cut is defined as

        C(S) = Σ_{k ∈ A(S)} c_k,

where A(S) is the set of arcs that start at a node in S and end at a node outside S. The
problem of finding the cut with the minimum capacity is called the minimum cut problem.
In this exercise we show that the solution of the minimum cut problem (with positive weights
c) is provided by the dual of the maximum flow problem (40).

(a) Let p* be the optimal value of the maximum flow problem (40). Show that

        p* ≤ C(S)                                                       (41)

    for all cuts S that separate nodes 1 and m.
(b) Derive the dual problem of (40), and show that it can be expressed as

        minimize    c^T v
        subject to  A^T y ≤ v
                    y_1 - y_m = 1                                       (42)
                    v ≥ 0.

    The variables are v ∈ R^n and y ∈ R^m.
    Suppose x and t are optimal in (40), and y and v are optimal in (42). Define the cut

        S̃ = {i | y_i ≥ y_1}.

    Use the complementary slackness conditions for (40) and (42) to show that

        x_k = c_k

    if arc k starts at a node in S̃ and ends at a node outside S̃, and that

        x_k = 0

    if arc k starts at a node outside S̃ and ends at a node in S̃. Conclude that

        p* = C(S̃).

    Combined with the result of part (a), this proves that S̃ is a minimum-capacity cut.
Exercise 54. A project consisting of n different tasks can be represented as a directed graph with
n arcs and m nodes. The arcs represent the tasks. The nodes represent precedence relations:
if arc k starts at node i and arc j ends at node i, then task k cannot start before task
j is completed. Node 1 only has outgoing arcs. These arcs represent tasks that can start
immediately and in parallel. Node m only has incoming arcs. When the tasks represented
by these arcs are completed, the entire project is completed.

We are interested in computing an optimal schedule, i.e., in assigning an optimal start time
and a duration to each task. The variables in the problem are defined as follows.

y_k is the duration of task k, for k = 1, ..., n. The variables y_k must satisfy the con-
straints β_k ≤ y_k ≤ γ_k. We also assume that the cost of completing task k in time y_k is
given by c_k(γ_k - y_k). This means there is no cost if we use the maximum allowable
time γ_k to complete the task, but we have to pay if we want the task finished more
quickly.

v_j is an upper bound on the completion times of all tasks associated with arcs that end
at node j. These variables must satisfy the relations

        v_j ≥ v_i + y_k   if arc k starts at node i and ends at node j.

Our goal is to minimize the sum of the completion time of the entire project, which is given
by v_m - v_1, and the total cost Σ_k c_k(γ_k - y_k). The problem can be formulated as an LP

        minimize    e^T v + c^T (γ - y)
        subject to  A^T v + y ≤ 0
                    β ≤ y ≤ γ,

where e = (-1, 0, ..., 0, 1) and A is the node-arc incidence matrix of the graph (as defined
on page 8-2 of the lecture notes). The variables are v ∈ R^m, y ∈ R^n.

(a) Derive the dual of this LP.
(b) Interpret the dual problem as a minimum cost network flow problem with nonlinear
    cost, i.e., a problem of the form

        minimize    Σ_{k=1}^n f_k(x_k)
        subject to  Ax = -e
                    x ≥ 0,

    where f_k is a nonlinear function.
Exercise 55. This problem is a variation on the illumination problem of exercise 15. In part (a)
of exercise 15 we formulated the problem

        minimize    max_{k=1,...,n} |a_k^T p - I_des|
        subject to  0 ≤ p ≤ 1

as the following LP in p ∈ R^m and an auxiliary variable w:

        minimize    w
        subject to  -w ≤ a_k^T p - I_des ≤ w,   k = 1, ..., n           (43)
                    0 ≤ p ≤ 1.

Now suppose we add the following constraint on the lamp powers p: no more than half the
total power Σ_{i=1}^m p_i is in any subset of r lamps (where r is a given integer with 0 < r <
m). The idea is to avoid solutions where all the power is concentrated in very few lamps.
Mathematically, the constraint can be expressed as

        Σ_{i=1}^r p_[i]  ≤  0.5 Σ_{i=1}^m p_i                           (44)

where p_[i] is the ith largest component of p. We would like to add this constraint to the
LP (43). However the left-hand side of (44) is a complicated nonlinear function of p.

We can write the constraint (44) as a set of linear inequalities by enumerating all subsets
{i_1, ..., i_r} ⊆ {1, ..., m} with r different elements, and adding an inequality

        Σ_{k=1}^r p_{i_k}  ≤  0.5 Σ_{i=1}^m p_i

for each subset. Equivalently, we express (44) as

        s^T p ≤ 0.5 Σ_{i=1}^m p_i   for all s ∈ {0,1}^m with Σ_{i=1}^m s_i = r.

This yields a set of (m choose r) linear inequalities in p.

We can use LP duality to derive a much more compact representation. We will prove that (44)
can be expressed as the set of 1 + 2m linear inequalities

        rt + Σ_{i=1}^m x_i ≤ 0.5 Σ_{i=1}^m p_i,    p_i ≤ t + x_i, i = 1, ..., m,    x ≥ 0    (45)

in p ∈ R^m, and auxiliary variables x ∈ R^m and t ∈ R.

(a) Given a vector p ∈ R^m, show that the sum of its r largest elements (i.e., p_[1] + ··· + p_[r])
    is equal to the optimal value of the LP (in the variables y ∈ R^m)

        maximize    p^T y
        subject to  0 ≤ y ≤ 1                                           (46)
                    1^T y = r.

(b) Derive the dual of the LP (46). Show that it can be written as

        minimize    rt + 1^T x
        subject to  t1 + x ≥ p                                          (47)
                    x ≥ 0,

    where the variables are t ∈ R and x ∈ R^m. By duality the LP (47) has the same
    optimal value as (46), i.e., p_[1] + ··· + p_[r].

It is now clear that the optimal value of (47) is less than or equal to 0.5 Σ_i p_i if and only if there is
a feasible solution t, x in (47) with rt + 1^T x ≤ 0.5 Σ_i p_i. In other words, p satisfies the
constraint (44) if and only if the set of linear inequalities (45) in x and t is feasible. To
include the nonlinear constraint (44) in (43), we can add the inequalities (45), which yields

        minimize    w
        subject to  -w ≤ a_k^T p - I_des ≤ w,   k = 1, ..., n
                    0 ≤ p ≤ 1
                    rt + 1^T x ≤ 0.5 · 1^T p
                    p ≤ t1 + x
                    x ≥ 0.

This is an LP with 2m + 2 variables p, x, w, t, and 2n + 4m + 1 constraints.
Exercise 56. In this problem we derive a linear programming formulation for the following vari-
ation on ℓ∞- and ℓ1-approximation: given A ∈ R^{m×n}, b ∈ R^m, and an integer k with
1 ≤ k ≤ m,

        minimize    Σ_{i=1}^k |Ax - b|_[i].                             (48)

The notation z_[i] denotes the ith largest component of z ∈ R^m, and |z|_[i] denotes the ith
largest component of the vector |z| = (|z_1|, |z_2|, ..., |z_m|) ∈ R^m. In other words in (48) we
minimize the sum of the k largest residuals |a_i^T x - b_i|. For k = 1, this is the ℓ∞-problem; for
k = m, it is the ℓ1-problem.

Problem (48) can be written as

        minimize    max_{1 ≤ i_1 < i_2 < ··· < i_k ≤ m}  Σ_{j=1}^k |a_{i_j}^T x - b_{i_j}|,

or as the following LP in x and t:

        minimize    t
        subject to  s^T (Ax - b) ≤ t   for all s ∈ {-1, 0, 1}^m with ||s||_1 = k.

Here we enumerate all vectors s with components -1, 0 or +1, and with exactly k nonzero
elements. This yields an LP with 2^k (m choose k) linear inequalities.

We now use LP duality to derive a more compact formulation.

(a) We have seen that for c ∈ R^m and 1 ≤ k ≤ m, the optimal value of the LP

        maximize    c^T v
        subject to  -y ≤ v ≤ y                                          (49)
                    1^T y = k
                    y ≤ 1

    is equal to |c|_[1] + ··· + |c|_[k]. Take the dual of the LP (49) and show that it can be
    simplified as

        minimize    kt + 1^T z
        subject to  -t1 - z ≤ c ≤ t1 + z                                (50)
                    z ≥ 0

    with variables t ∈ R and z ∈ R^m. By duality the optimal values of (50) and (49) are
    equal.
(b) Now apply this result to c = Ax - b. From part (a), we know that the optimal value of
    the LP

        minimize    kt + 1^T z
        subject to  -t1 - z ≤ Ax - b ≤ t1 + z                           (51)
                    z ≥ 0,

    with variables t ∈ R, z ∈ R^m, is equal to Σ_{i=1}^k |Ax - b|_[i]. Note that the constraints in (51)
    are linear in x, so we can simultaneously optimize over x, i.e., solve it as an LP with
    variables x, t and z. This way we can solve problem (48) by solving an LP with m + n + 1
    variables and 3m inequalities.
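Remark. A Matlab sketch of the compact formulation in part (b); setting k = 1 reproduces ℓ∞-approximation and k = m reproduces ℓ1-approximation, which is a convenient consistency check.

    function [x, val] = sum_k_largest(A, b, k)
    % Solve (51): minimize k*t + 1'*z over (x, t, z) subject to
    % -t*1 - z <= A*x - b <= t*1 + z and z >= 0.
    [m, n] = size(A);
    f = [zeros(n,1); k; ones(m,1)];
    Ain = [ A, -ones(m,1), -eye(m);               %  A*x - t*1 - z <=  b
           -A, -ones(m,1), -eye(m)];              % -A*x - t*1 - z <= -b
    bin = [b; -b];
    lb = [-inf(n+1,1); zeros(m,1)];               % only z is sign-constrained
    sol = linprog(f, Ain, bin, [], [], lb, []);
    x = sol(1:n);  val = f'*sol;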
Exercise 57. A portfolio optimization problem. We consider a portfolio optimization problem
with n assets or stocks held over one period. The variable x_i will denote the amount of asset
i held at the beginning of (and throughout) the period, and p_i will denote the price change
of asset i over the period, so the return is r = p^T x. The optimization variable is the portfolio
vector x ∈ R^n, which has to satisfy x_i ≥ 0 and Σ_{i=1}^n x_i ≤ 1 (unit total budget).

If p is exactly known, the optimal allocation is to invest the entire budget in the asset with
the highest return, i.e., if p_j = max_i p_i, we choose x_j = 1, and x_i = 0 for i ≠ j. However,
this choice is obviously very sensitive to uncertainty in p. We can add various constraints to
make the investment more robust against variations in p.

We can impose a diversity constraint that prevents us from allocating the entire budget in
a very small number of assets. For example, we can require that no more than, say, 90% of
the total budget is invested in any 5% of the assets. We can express this constraint as

        Σ_{i=1}^{⌊n/20⌋} x_[i] ≤ 0.9

where x_[i], i = 1, ..., n, are the values x_i sorted in decreasing order, and ⌊n/20⌋ is the largest
integer smaller than or equal to n/20.

In addition, we can model the uncertainty in p by specifying a set P of possible values, and
require that the investment maximizes the return in the worst-case scenario. The resulting
problem is:

        maximize    min_{p ∈ P} p^T x
        subject to  1^T x ≤ 1,   x ≥ 0,   Σ_{i=1}^{⌊n/20⌋} x_[i] ≤ 0.9.          (52)

For each of the following sets P, can you express problem (52) as an LP?

(a) P = {p^(1), ..., p^(K)}, where the p^(i) ∈ R^n are given. This means we consider a finite number
    of possible scenarios.
(b) P = {p̄ + By | ||y||_∞ ≤ 1} where p̄ ∈ R^n and B ∈ R^{n×m} are given. We can interpret
    p̄ as the expected value of p, and y ∈ R^m as uncertain parameters that determine the
    actual values of p.
(c) P = {p̄ + y | By ≤ d} where p̄ ∈ R^n, B ∈ R^{r×n}, and d ∈ R^r are given. Here we
    consider a polyhedron of possible values of p. (We assume that P is nonempty.)

You may introduce new variables and constraints, but you must clearly explain why your
formulation is equivalent to (52). If you know more than one solution, you should choose the
most compact formulation, i.e., involving the smallest number of variables and constraints.
Exercise 58. Let v be a discrete random variable with possible values c_1, ..., c_n, and distribution
p_k = prob(v = c_k), k = 1, ..., n. The β-quantile of v, where 0 < β < 1, is defined as

        q_β = min{γ | prob(v ≤ γ) ≥ β}.

For example, the 0.9-quantile of the distribution shown in the figure is q_0.9 = 6.0.

(Figure: a discrete distribution with values c_k = 1.0, 1.5, 2.5, 4.0, 4.5, 5.0, 6.0, 7.0, 9.0, 10.0
and probabilities p_k = 0.08, 0.14, 0.18, 0.14, 0.16, 0.14, 0.08, 0.04, 0.02, 0.02.)

A related quantity is

        f_β = (1/(1-β)) Σ_{c_k > q_β} p_k c_k + (1 - (1/(1-β)) Σ_{c_i > q_β} p_i) q_β.

If Σ_{c_i > q_β} p_i = 1 - β (and the second term vanishes), this is the conditional expected value
of v, given that v is greater than q_β. Roughly speaking, f_β is the mean of the tail of the
distribution above the β-quantile. In the example of the figure,

        f_0.9 = (0.02·6.0 + 0.04·7.0 + 0.02·9.0 + 0.02·10.0)/0.1 = 7.8.

We consider optimization problems in which the values of c_k depend linearly on some op-
timization variable x. We will formulate the problem of minimizing f_β, subject to linear
constraints on x, as a linear program.

(a) Show that the optimal value of the LP

        maximize    c^T y
        subject to  0 ≤ y ≤ (1-β)^{-1} p                                (53)
                    1^T y = 1,

    with variable y ∈ R^n, is equal to f_β. The parameters c, p and β are given, with p > 0,
    1^T p = 1, and 0 < β < 1.
(b) Write the LP (53) in inequality form, derive its dual, and show that the dual is equivalent
    to the piecewise-linear minimization problem

        minimize    t + (1/(1-β)) Σ_{k=1}^n p_k max{0, c_k - t},        (54)

    with a single scalar variable t. It follows from duality theory and the result in part (a)
    that the optimal value of (54) is equal to f_β.
(c) Now suppose c_k = a_k^T x, where x ∈ R^m is an optimization variable and a_k is given, so
    q_β(x) and f_β(x) both depend on x. Use the result in part (b) to express the problem

        minimize    f_β(x)
        subject to  Fx ≤ g,

    with variable x, as an LP.

As an application, we consider a portfolio optimization problem with m assets or stocks held
over a period of time. We represent the portfolio by a vector x = (x_1, x_2, ..., x_m), with x_k
the amount invested in asset k during the investment period. We denote by r the vector of
returns for the m assets over the period, so the total return on the portfolio is r^T x. The loss
(negative return) is denoted v = -r^T x.

We model r as a discrete random variable, with possible values a_1, ..., a_n, and distribution

        p_k = prob(r = a_k),   k = 1, ..., n.

The loss of the portfolio v = -r^T x is therefore a random variable with possible values
c_k = -a_k^T x, k = 1, ..., n, and distribution p.

In this context, the β-quantile q_β(x) is called the value-at-risk of the portfolio, and f_β(x) is
called the conditional value-at-risk. If we take β close to one, both functions are meaningful
measures of the risk of the portfolio x. The result of part (c) implies that we can minimize
f_β(x), subject to linear constraints in x, via linear programming. For example, we can
minimize the risk (expressed as f_β(x)), subject to an upper bound on the expected loss (i.e.,
a lower bound on the expected return), by solving

        minimize    f_β(x)
        subject to  Σ_k p_k a_k^T x ≥ R
                    1^T x = 1
                    x ≥ 0.
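Remark. Using part (b), the last problem becomes an LP after introducing u_k = max{0, c_k - t} as auxiliary variables. The Matlab sketch below assumes the scenario returns a_k are stored as the rows of a matrix Aret and p holds the scenario probabilities; these names are placeholders, not notation from the lecture notes.

    function x = cvar_portfolio(Aret, p, beta, R)
    % Minimize f_beta(x) s.t. expected return >= R, 1'*x = 1, x >= 0.
    % Variables (x, t, u); at the optimum f_beta = t + (1/(1-beta))*p'*u.
    [N, m] = size(Aret);                          % N scenarios, m assets
    f = [zeros(m,1); 1; p(:)/(1-beta)];
    Ain = [-Aret, -ones(N,1), -eye(N);            % u_k >= -a_k'*x - t
           -p(:)'*Aret, 0, zeros(1,N)];           % expected return >= R
    bin = [zeros(N,1); -R];
    Aeq = [ones(1,m), 0, zeros(1,N)];  beq = 1;   % budget constraint
    lb = [zeros(m,1); -inf; zeros(N,1)];          % x >= 0, u >= 0, t free
    sol = linprog(f, Ain, bin, Aeq, beq, lb, []);
    x = sol(1:m);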
Exercise 59. A generalized linear-fractional problem. Consider the problem

        minimize    ||Ax - b||_1 / (c^T x + d)
        subject to  ||x||_∞ ≤ 1                                         (55)

where A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n and d ∈ R are given. We assume that d > ||c||_1. As a
consequence, c^T x + d > 0 for all feasible x.

Remark: There are several correct answers to part (a), including a method based on solving
one single LP.

(a) Explain how you would solve this problem using linear programming. If you know more
    than one method, you should give the simplest one.
(b) Prove that the following problem provides lower bounds on the optimal value of (55):

        maximize    α
        subject to  ||A^T z + αc||_1 ≤ b^T z - αd                       (56)
                    ||z||_∞ ≤ 1.

    The variables are z ∈ R^m and α ∈ R.
(c) Use linear programming duality to show that the optimal values of (56) and (55) are in
    fact equal.
Exercise 60. Consider the problem

        minimize    Σ_{i=1}^m h(a_i^T x - b_i)                          (57)

where h is the function

        h(z) = { 0,          |z| ≤ 1
               { |z| - 1,    |z| > 1

and (as usual) x ∈ R^n is the variable, and a_1, ..., a_m ∈ R^n and b ∈ R^m are given. Note
that this problem can be thought of as a sort of hybrid between ℓ1- and ℓ∞-approximation,
since there is no cost for residuals smaller than one, and a linearly growing cost for residuals
larger than one.

Express (57) as an LP, derive its dual, and simplify it as much as you can.

Let x_ls denote the solution of the least-squares problem

        minimize    Σ_{i=1}^m (a_i^T x - b_i)^2,

and let r_ls denote the residual r_ls = A x_ls - b. We assume A has rank n, so the least-squares
solution is unique and given by

        x_ls = (A^T A)^{-1} A^T b.

The least-squares residual r_ls satisfies

        A^T r_ls = 0.

Show how to construct from x_ls and r_ls a feasible solution for the dual of (57), and hence
a lower bound for its optimal value p*. Compare your lower bound with the trivial lower
bound p* ≥ 0. Is it always better, or only in certain cases?
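Remark. One standard way to express (57) as an LP is to introduce u_i as an upper bound on h(a_i^T x - b_i): u_i is the smallest number satisfying u_i ≥ 0, u_i ≥ a_i^T x - b_i - 1 and u_i ≥ -(a_i^T x - b_i) - 1. A Matlab sketch of this formulation (the dual derivation is left to the exercise):

    function [x, val] = deadzone_fit(A, b)
    % Minimize sum_i h(a_i'*x - b_i) as the LP: minimize 1'*u subject to
    % u >= A*x - b - 1, u >= -(A*x - b) - 1, u >= 0.
    [m, n] = size(A);
    f = [zeros(n,1); ones(m,1)];
    Ain = [ A, -eye(m);                           %  A*x - u <=  b + 1
           -A, -eye(m)];                          % -A*x - u <= -b + 1
    bin = [b + 1; -b + 1];
    lb = [-inf(n,1); zeros(m,1)];
    sol = linprog(f, Ain, bin, [], [], lb, []);
    x = sol(1:n);  val = sum(sol(n+1:end));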
Exercise 61. Self-dual homogeneous LP formulation.

(a) Consider the LP

        minimize    f_1^T u + f_2^T v
        subject to  M_11 u + M_12 v ≤ f_1                               (58)
                    -M_12^T u + M_22 v = f_2
                    u ≥ 0

    in the variables u ∈ R^p and v ∈ R^q. The problem data are the vectors f_1 ∈ R^p,
    f_2 ∈ R^q, and the matrices M_11 ∈ R^{p×p}, M_12 ∈ R^{p×q}, and M_22 ∈ R^{q×q}.
    Show that if M_11 and M_22 are skew-symmetric, i.e.,

        M_11^T = -M_11,    M_22^T = -M_22,

    then the dual of the LP (58) can be expressed as

        maximize    -f_1^T w - f_2^T y
        subject to  M_11 w + M_12 y ≤ f_1                               (59)
                    -M_12^T w + M_22 y = f_2
                    w ≥ 0,

    with variables w ∈ R^p and y ∈ R^q.
    Note that the dual problem is essentially the same as the primal problem. Therefore if
    u, v are primal optimal, then w = u, y = v are optimal in the dual problem. We say
    that the LP (58) with skew-symmetric M_11 and M_22 is self-dual.
(b) Write down the optimality conditions for problem (58). Use the observation we made
    in part (a) to show that the optimality conditions can be simplified as follows: u, v are
    optimal for (58) if and only if

        M_11 u + M_12 v ≤ f_1
        -M_12^T u + M_22 v = f_2
        u ≥ 0
        u^T (f_1 - M_11 u - M_12 v) = 0.

    In other words, u, v must be feasible in (58), and the nonnegative vectors u and

        s = f_1 - M_11 u - M_12 v

    must satisfy the complementarity condition u^T s = 0.
    It can be shown that if (58) is feasible, then it has an optimal solution that is strictly
    complementary, i.e.,

        u + s > 0.

    (In other words, for each k either s_k = 0 or u_k = 0, but not both.)
(c) Consider the LP

        minimize    0
        subject to  b^T z̃ + c^T x̃ ≤ 0
                    -b t̃ + A x̃ ≤ 0                                      (60)
                    A^T z̃ + c t̃ = 0
                    z̃ ≥ 0,   t̃ ≥ 0

    with variables x̃ ∈ R^n, z̃ ∈ R^m, and t̃ ∈ R. Show that this problem is self-dual.
    Use the result in part (b) to prove that (60) has an optimal solution that satisfies

        t̃ (c^T x̃ + b^T z̃) = 0    and    t̃ - (c^T x̃ + b^T z̃) > 0.

    Suppose we have computed an optimal solution with these properties. We can distin-
    guish the following cases.

    - t̃ > 0. Show that x = x̃/t̃, z = z̃/t̃ are optimal for the pair of primal and dual LPs

        minimize    c^T x                                               (61)
        subject to  Ax ≤ b

      and

        maximize    -b^T z
        subject to  A^T z + c = 0                                       (62)
                    z ≥ 0.

    - t̃ = 0 and c^T x̃ < 0. Show that the dual LP (62) is infeasible.
    - t̃ = 0 and b^T z̃ < 0. Show that the primal LP (61) is infeasible.

This result has an important practical ramification. It implies that we do not have to use
a two-phase approach to solve the LP (61) (i.e., a phase-I to find a feasible point, followed
by a phase-II to minimize c^T x starting at the feasible point). We can solve the LP (61) and
its dual, or detect primal or dual infeasibility, by solving one single, feasible LP (60). The
LP (60) is much larger than (61), but it can be shown that the cost of solving it is not much
higher if one takes advantage of the symmetry in the constraints.
Exercise 62. We consider a network flow problem on the simple network shown below.

(Figure: a network with five nodes 1, ..., 5, seven directed links with flows u_1, ..., u_7, and
external inputs V_1, ..., V_5.)

Here u_1, ..., u_7 ∈ R denote the flows or traffic along links 1, ..., 7 in the direction indicated
by the arrow. (Thus, u_1 = 1 means a traffic flow of one unit in the direction of the arrow
on link 1, i.e., from node 1 to node 2.) V_1, ..., V_5 ∈ R denote the external inputs (or
outputs if V_i < 0) to the network. We assume that the net flow into the network is zero, i.e.,
Σ_{i=1}^5 V_i = 0.

Conservation of traffic flow states that at each node, the total flow entering the node is zero.
For example, for node 1, this means that V_1 - u_1 + u_4 - u_5 = 0. This gives one equation
per node, so we have 5 traffic conservation equations, for the nodes 1, ..., 5, respectively. (In
fact, the equations are redundant since they sum to zero, so you could leave one, e.g., for
node 5, out. However, to answer the questions below, it is easier to keep all five equations.)

The cost of a flow pattern u is given by Σ_i c_i |u_i|, where c_i > 0 is the tariff on link i. In
addition to the tariff, each link also has a maximum possible traffic level or link capacity:
|u_i| ≤ U_i.

(a) Express the problem of finding the minimum cost flow as an LP in inequality form, for
    the network shown above.
(b) Solve the LP from part (a) for the specific costs, capacities, and inputs

        c = (2, 2, 2, 1, 1, 1, 1),   V = (1, 1, 0.5, 0.5, -3),   U = (0.5, 0.5, 0.1, 0.5, 1, 1, 1).

    Find the optimal dual variables as well.
(c) Suppose we can increase the capacity of one link by a small fixed amount, say, 0.1.
    Which one should we choose, and why? (You're not allowed to solve new LPs to answer
    this!) For the link you pick, increase its capacity by 0.1, and then solve the resulting
    LP exactly. Compare the resulting cost with the cost predicted from the optimal dual
    variables of the original problem. Can you explain the answer?
(d) Now suppose we have the possibility to increase or reduce two of the five external inputs
    by a small amount, say, 0.1. To keep Σ_i V_i = 0, the changes in the two inputs must be
    equal in absolute value and opposite in sign. For example, we can increase V_1 by 0.1,
    and decrease V_4 by 0.1. Which two inputs should we modify, and why? (Again, you're
    not allowed to solve new LPs!) For the inputs you pick, change the value (increase or
    decrease, depending on which will result in a smaller cost) by 0.1, and then solve the
    resulting LP exactly. Compare the result with the one predicted from the optimal dual
    variables of the original problem.
Exercise 63. Let P ∈ R^{n×n} be a matrix with the following two properties:

- all elements of P are nonnegative: p_ij ≥ 0 for i = 1, ..., n and j = 1, ..., n;
- the columns of P sum to one: Σ_{i=1}^n p_ij = 1 for j = 1, ..., n.

Show that there exists a y ∈ R^n such that

        Py = y,    y ≥ 0,    Σ_{i=1}^n y_i = 1.

Remark. This result has the following application. We can interpret P as the transition
probability matrix of a Markov chain with n states: if s(t) is the state at time t (i.e., s(t) is
a random variable taking values in {1, ..., n}), then p_ij is defined as

        p_ij = prob(s(t+1) = i | s(t) = j).

Let y(t) ∈ R^n be the probability distribution of the state at time t, i.e.,

        y_i(t) = prob(s(t) = i).

Then the distribution at time t + 1 is given by y(t+1) = P y(t).

The result in this problem states that a finite state Markov chain always has an equilibrium
distribution y.
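Remark. An equilibrium distribution can be computed as an LP feasibility problem (zero objective). Matlab sketch:

    function y = markov_equilibrium(P)
    % Find y with P*y = y, y >= 0, sum(y) = 1, as a pure feasibility LP.
    n = size(P, 1);
    Aeq = [P - eye(n); ones(1, n)];
    beq = [zeros(n, 1); 1];
    y = linprog(zeros(n, 1), [], [], Aeq, beq, zeros(n, 1), []);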
Exercise 64. Arbitrage and theorems of alternatives.

Consider an event (for example, a sports game, political elections, the evolution of the stock
market over a certain period) with m possible outcomes. Suppose that n wagers on the
outcome are possible. If we bet an amount x_j on wager j, and the outcome of the event is
i, then our return is equal to r_ij x_j (this amount does not include the stake, i.e., we pay x_j
initially, and receive (1 + r_ij) x_j if the outcome of the event is i, so r_ij x_j is the net gain). We
allow the bets x_j to be positive, negative, or zero. The interpretation of a negative bet is as
follows. If x_j < 0, then initially we receive an amount of money |x_j|, with an obligation to
pay (1 + r_ij)|x_j| if outcome i occurs. In that case, we lose r_ij |x_j|, i.e., our net gain is r_ij x_j
(a negative number).

We call the matrix R ∈ R^{m×n} with elements r_ij the return matrix. A betting strategy is
a vector x ∈ R^n, with as components x_j the amounts we bet on each wager. If we use a
betting strategy x, our total return in the event of outcome i is equal to Σ_{j=1}^n r_ij x_j, i.e., the
ith component of the vector Rx.

(a) The arbitrage theorem. Suppose you are given a return matrix R. Prove the following
    theorem: there is a betting strategy x ∈ R^n for which

        Rx > 0                                                          (63)

    if and only if there exists no vector p ∈ R^m that satisfies

        R^T p = 0,    p ≥ 0,    p ≠ 0.                                  (64)

    We can interpret this theorem as follows. If Rx > 0, then the betting strategy x
    guarantees a positive return for all possible outcomes, i.e., it is a sure-win betting
    scheme. In economics, we say there is an arbitrage opportunity.
    If we normalize the vector p in (64) so that 1^T p = 1, we can interpret it as a probability
    vector on the outcomes. The condition R^T p = 0 means that the expected return

        E Rx = p^T Rx = 0

    for all betting strategies. We can therefore rephrase the arbitrage theorem as follows.
    There is no sure-win betting strategy (or arbitrage opportunity) if and only if there is
    a probability vector on the outcomes that makes all bets fair (i.e., the expected gain is
    zero).
(b) Options pricing. The arbitrage theorem is used in mathematical finance to determine
    prices of contracts. As a simple example, suppose we can invest in two assets: a stock
    and an option. The current unit price of the stock is S. The price S̃ of the stock at
    the end of the investment period is unknown, but it will be either S̃ = Su or S̃ = Sd,
    where u > 1 and d < 1 are given numbers. In other words the price either goes up by a
    factor u, or down by a factor d. If the current interest rate over the investment period
    is r, then the present value of the stock price S̃ at the end of the period is equal to
    S̃/(1 + r), and our unit return is

        Su/(1 + r) - S = S (u - 1 - r)/(1 + r)

    if the stock goes up, and

        Sd/(1 + r) - S = S (d - 1 - r)/(1 + r)

    if the stock goes down.
    We can also buy options, at a unit price of C. An option gives us the right to purchase
    one stock at a fixed price K at the end of the period. Whether we exercise the option
    or not depends on the price of the stock at the end of the period. If the stock price S̃
    at the end of the period is greater than K, we exercise the option, buy the stock and
    sell it immediately, so we receive an amount S̃ - K. If the stock price S̃ is less than K,
    we do not exercise the option and receive nothing. Combining both cases, we can say
    that the value of the option at the end of the period is max{0, S̃ - K}, and the present
    value is max{0, S̃ - K}/(1 + r). If we pay a price C per option, then our return is

        (1/(1 + r)) max{0, S̃ - K} - C

    per option.
    We can summarize the situation with the return matrix

        R = [ (u - 1 - r)/(1 + r)    (max{0, Su - K})/((1 + r)C) - 1 ]
            [ (d - 1 - r)/(1 + r)    (max{0, Sd - K})/((1 + r)C) - 1 ].

    The elements of the first row are the (present values of the) returns in the event that
    the stock price goes up. The second row gives the returns in the event that the stock
    price goes down. The first column gives the returns per unit investment in the stock.
    The second column gives the returns per unit investment in the option.
    In this simple example the arbitrage theorem allows us to determine the price of the
    option, given the other information S, K, u, d, and r. Show that if there is no arbitrage,
    then the price of the option C must be equal to

        C = (1/(1 + r)) (p̄ max{0, Su - K} + (1 - p̄) max{0, Sd - K})

    where

        p̄ = (1 + r - d)/(u - d).
Exercise 65. We consider a network with m nodes and n directed arcs. Suppose we can apply
labels y_r ∈ R, r = 1, ..., m, to the nodes in such a way that

        y_r ≥ y_s   if there is an arc from node r to node s.           (65)

It is clear that this implies that if y_i < y_j, then there exists no directed path from node i to
node j. (If we follow a directed path from node i to j, we encounter only nodes with labels
less than or equal to y_i. Therefore y_j ≤ y_i.)

Prove the converse: if there is no directed path from node i to j, then there exists a labeling
of the nodes that satisfies (65) and y_i < y_j.
Exercise 66. The projection of a point x_0 ∈ R^n on a polyhedron P = {x | Ax ≤ b}, in the
ℓ∞-norm, is defined as the solution of the optimization problem

        minimize    ||x - x_0||_∞
        subject to  Ax ≤ b.

The variable is x ∈ R^n. We assume that P is nonempty.

(a) Formulate this problem as an LP.
(b) Derive the dual problem, and simplify it as much as you can.
(c) Show that if x_0 ∉ P, then a hyperplane that separates x_0 from P can be constructed
    from the optimal solution of the dual problem.
Exercise 67. Describe a method for constructing a hyperplane that separates two given polyhedra

        P_1 = {x ∈ R^n | Ax ≤ b},    P_2 = {x ∈ R^n | Cx ≤ d}.

Your method must return a vector a ∈ R^n and a scalar γ such that

        a^T x > γ for all x ∈ P_1,    a^T x < γ for all x ∈ P_2.

(Figure: two disjoint polyhedra P_1 and P_2 separated by the hyperplane a^T x = γ.)

You can assume that P_1 and P_2 do not intersect. If you know several methods, you should
give the most efficient one.
Exercise 68. Suppose the feasible set of the LP

        maximize    b^T z
        subject to  A^T z ≤ c                                           (66)

is nonempty and bounded, with ||z||_∞ < γ for all feasible z. Show that any optimal solution
of the problem

        minimize    c^T x + γ ||Ax - b||_1
        subject to  x ≥ 0

is also an optimal solution of the LP

        minimize    c^T x
        subject to  Ax = b                                              (67)
                    x ≥ 0,

which is the dual of problem (66).
Exercise 69. An alternative to the phase-I/phase-II method for solving the LP

        minimize    c^T x
        subject to  Ax ≤ b,                                             (68)

is the big-M method, in which we solve the auxiliary problem

        minimize    c^T x + Mt
        subject to  Ax ≤ b + t1                                         (69)
                    t ≥ 0.

M > 0 is a parameter and t is an auxiliary variable. Note that this auxiliary problem has
obvious feasible points, for example, x = 0, t ≥ max{0, -min_i b_i}.

(a) Derive the dual LP of (69).
(b) Prove the following property. If M > 1^T z*, where z* is an optimal solution of the dual
    of (68), then the optimal t in (69) is zero, and therefore the optimal x in (69) is also an
    optimal solution of (68).
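Remark. A Matlab sketch of the auxiliary problem (69); if the computed t is positive, either M was chosen too small or (68) is infeasible.

    function [x, t] = big_M(A, b, c, M)
    % Solve (69): minimize c'*x + M*t over (x, t) s.t. A*x - t*1 <= b, t >= 0.
    [m, n] = size(A);
    f = [c; M];
    Ain = [A, -ones(m, 1)];
    lb = [-inf(n, 1); 0];                         % only t is sign-constrained
    sol = linprog(f, Ain, b, [], [], lb, []);
    x = sol(1:n);  t = sol(end);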
Exercise 70. Robust linear programming with polyhedral uncertainty. Consider the robust LP

        minimize    c^T x
        subject to  max_{a ∈ P_i} a^T x ≤ b_i,   i = 1, ..., m,

with variable x ∈ R^n, where P_i = {a | C_i a ≤ d_i}. The problem data are c ∈ R^n, C_i ∈ R^{m_i×n},
d_i ∈ R^{m_i}, and b ∈ R^m. We assume the polyhedra P_i are nonempty.

Show that this problem is equivalent to the LP

        minimize    c^T x
        subject to  d_i^T z_i ≤ b_i,   i = 1, ..., m
                    C_i^T z_i = x,     i = 1, ..., m
                    z_i ≥ 0,           i = 1, ..., m

with variables x ∈ R^n and z_i ∈ R^{m_i}, i = 1, ..., m. Hint. Find the dual of the problem of
maximizing a_i^T x over a_i ∈ P_i (with variable a_i).
Exercise 71. Strict complementarity. We consider an LP

        minimize    c^T x
        subject to  Ax ≤ b,

with A ∈ R^{m×n}, and its dual

        maximize    -b^T z
        subject to  A^T z + c = 0,  z ≥ 0.

We assume the optimal value is finite. From duality theory we know that any primal optimal
x* and any dual optimal z* satisfy the complementary slackness conditions

        z_i* (b_i - a_i^T x*) = 0,   i = 1, ..., m.

In other words, for each i, we have z_i* = 0, or a_i^T x* = b_i, or both.

In this problem you are asked to show that there exists at least one primal-dual optimal pair
x*, z* that satisfies

        z_i* (b_i - a_i^T x*) = 0,    z_i* + (b_i - a_i^T x*) > 0,

for all i. This is called a strictly complementary pair. In a strictly complementary pair, we
have for each i, either z_i* = 0, or a_i^T x* = b_i, but not both.

To prove the result, suppose x*, z* are optimal but not strictly complementary, and

        a_i^T x* = b_i,  z_i* = 0,   i = 1, ..., M
        a_i^T x* = b_i,  z_i* > 0,   i = M+1, ..., N
        a_i^T x* < b_i,  z_i* = 0,   i = N+1, ..., m

with M ≥ 1. In other words, m - M entries of b - Ax* and z* are strictly complementary;
for the other entries we have zero in both vectors.

(a) Use Farkas' lemma to show that the following two sets of inequalities/equalities are
    strong alternatives:
    - There exists a v ∈ R^n such that

        a_1^T v < 0
        a_i^T v ≤ 0,   i = 2, ..., M                                    (70)
        a_i^T v = 0,   i = M+1, ..., N.

    - There exists a w ∈ R^{N-1} such that

        a_1 + Σ_{i=1}^{N-1} w_i a_{i+1} = 0,    w_i ≥ 0,  i = 1, ..., M-1.      (71)

(b) Assume the first alternative holds, and v satisfies (70). Show that there exists a primal
    optimal solution x̂ with

        a_1^T x̂ < b_1
        a_i^T x̂ ≤ b_i,   i = 2, ..., M
        a_i^T x̂ = b_i,   i = M+1, ..., N
        a_i^T x̂ < b_i,   i = N+1, ..., m.

(c) Assume the second alternative holds, and w satisfies (71). Show that there exists a dual
    optimal ẑ with

        ẑ_1 > 0
        ẑ_i ≥ 0,   i = 2, ..., M
        ẑ_i > 0,   i = M+1, ..., N
        ẑ_i = 0,   i = N+1, ..., m.

(d) Combine (b) and (c) to show that there exists a primal-dual optimal pair x̂, ẑ, for which
    b - Ax̂ and ẑ have at most M̃ common zeros, where M̃ < M. If M̃ = 0, then x̂, ẑ are strictly
    complementary and optimal, and we are done. Otherwise, we apply the argument given
    above, with x*, z* replaced by x̂, ẑ, to show the existence of a strictly complementary
    pair of optimal solutions with less than M̃ common zeros in b - Ax̂ and ẑ. Repeating
    the argument eventually gives a strictly complementary pair.
6 The simplex method

Exercise 72. Solve the following linear program using the simplex algorithm with Bland's pivoting
rule. Start the algorithm at the extreme point x = (2, 2, 0), with active set I = {3, 4, 5}.

        minimize    x_1 + x_2 - x_3

        subject to  [ -1   0   0 ]        [ 0 ]
                    [  0  -1   0 ]        [ 0 ]
                    [  0   0  -1 ]        [ 0 ]
                    [  1   0   0 ] x  ≤   [ 2 ]
                    [  0   1   0 ]        [ 2 ]
                    [  0   0   1 ]        [ 2 ]
                    [  1   1   1 ]        [ 4 ].
Exercise 73. Use the simplex method to solve the following LP:

        minimize    24x_1 + 396x_2 - 8x_3 - 28x_4 - 10x_5

        subject to  [ 12   4   1  19   7 ]        [ 12 ]
                    [  6   7  18   1  13 ] x  =   [  6 ]
                    [  1  17   3  18   2 ]        [  1 ]

                    x ≥ 0.

Start with the initial basis {1, 2, 3}, and use Bland's rule to make pivot selections. Also
compute the dual optimal point from the results of the algorithm.
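Remark. The hand computations in exercises 72 and 73 can be verified with an LP solver. For exercise 73, a Matlab sketch (the data are copied from the statement above; restore any signs lost in your copy of the problem before trusting the output):

    % Sketch: verify exercise 73 with linprog and recover the dual point.
    c = [24; 396; -8; -28; -10];
    Aeq = [12 4 1 19 7; 6 7 18 1 13; 1 17 3 18 2];
    beq = [12; 6; 1];
    [x, val, ~, ~, lam] = linprog(c, [], [], Aeq, beq, zeros(5, 1), []);
    z = lam.eqlin;      % dual variables of the equality constraints
                        % (sign convention depends on the solver)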
7 Interior-point methods

Exercise 74. The figure shows the feasible set of an LP

        minimize    c^T x
        subject to  a_i^T x ≤ b_i,   i = 1, ..., 6

with two variables and six constraints. Also shown are the cost vector c, the analytic center,
and a few contour lines of the logarithmic barrier function

        φ(x) = -Σ_{i=1}^m log(b_i - a_i^T x).

(Figure: the feasible polygon, the cost vector c, the analytic center, and contour lines of φ.)

Sketch the central path as accurately as possible. Explain your answer.
Exercise 75. Let x*(t_0) be a point on the central path of the LP

        minimize    c^T x
        subject to  Ax ≤ b,

with t_0 > 0. We assume that A is m×n with rank(A) = n. Define Δx_nt as the Newton step
at x*(t_0) for the function

        t_1 c^T x - Σ_{i=1}^m log(b_i - a_i^T x),

where a_i^T denotes the ith row of A, and t_1 > t_0. Show that Δx_nt is tangent to the central
path at x*(t_0).

(Figure: the central path through x*(t_0) and x*(t_1), with the Newton step Δx_nt tangent
to the path at x*(t_0).)

Hint. Find an expression for the tangent direction Δx_tg = dx*(t_0)/dt, and show that Δx_nt
is a positive multiple of Δx_tg.
Exercise 76. In the lecture on barrier methods, we noted that a point x*(t) on the central path
yields a dual feasible point

        z_i*(t) = 1/(t (b_i - a_i^T x*(t))),   i = 1, ..., m.           (72)

In this problem we examine what happens when x*(t) is calculated only approximately.
Suppose x is strictly feasible and v is the Newton step at x for the function

        t c^T x + φ(x) = t c^T x - Σ_{i=1}^m log(b_i - a_i^T x).

Let d ∈ R^m be defined as d_i = 1/(b_i - a_i^T x), i = 1, ..., m. Show that if

        λ(x) = ||diag(d) A v|| ≤ 1,

then the vector

        z = (d + diag(d)^2 A v)/t

is dual feasible. Note that z reduces to (72) if x = x*(t) (and hence v = 0).

This observation is useful in a practical implementation of the barrier method. In practice,
Newton's method provides an approximation of the central point x*(t), which means that
the point (72) is not quite dual feasible, and a stopping criterion based on the corresponding
dual bound is not quite accurate. The results derived above imply that even though x*(t) is
not exactly centered, we can still obtain a dual feasible point, and use a completely rigorous
stopping criterion.
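Remark. The construction is easy to implement, which is how it is used inside a barrier code. Matlab sketch (plain dense linear algebra, no toolboxes needed):

    function [z, gap] = dual_from_newton(A, b, c, x, t)
    % Given a strictly feasible x, compute the Newton step v of
    % t*c'*x - sum(log(b - A*x)) and the candidate dual point z above.
    d = 1 ./ (b - A*x);                           % d_i = 1/(b_i - a_i'*x)
    g = t*c + A'*d;                               % gradient at x
    H = A' * diag(d.^2) * A;                      % Hessian at x
    v = -H \ g;                                   % Newton step
    z = (d + d.^2 .* (A*v)) / t;                  % dual candidate
    % z is dual feasible when norm(d .* (A*v)) <= 1; in that case
    % c'*x - (-b'*z) bounds the suboptimality of x.
    gap = c'*x + b'*z;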
Exercise 77. Let P be a polyhedron described by a set of linear inequalities:

        P = {x ∈ R^n | Ax ≤ b},

where A ∈ R^{m×n} and b ∈ R^m. Let φ denote the logarithmic barrier function

        φ(x) = -Σ_{i=1}^m log(b_i - a_i^T x).

(a) Suppose x̂ is strictly feasible. Show that

        (x - x̂)^T ∇²φ(x̂) (x - x̂) ≤ 1   implies   Ax ≤ b,

    where ∇²φ(x̂) is the Hessian of φ at x̂. Geometrically, this means that the set

        E_inner = {x | (x - x̂)^T ∇²φ(x̂) (x - x̂) ≤ 1},

    which is an ellipsoid centered at x̂, is enclosed in the polyhedron P.
(b) Suppose x̂ is the analytic center of the inequalities Ax < b. Show that

        Ax ≤ b   implies   (x - x̂)^T ∇²φ(x̂) (x - x̂) ≤ m(m-1).

    In other words, the ellipsoid

        E_outer = {x | (x - x̂)^T ∇²φ(x̂) (x - x̂) ≤ m(m-1)}

    contains the polyhedron P.
Exercise 78. Let x̂ be the analytic center of a set of linear inequalities

        a_k^T x ≤ b_k,   k = 1, ..., m.

Show that the kth inequality is redundant (i.e., it can be deleted without changing the
feasible set) if

        b_k - a_k^T x̂  ≥  m √(a_k^T H^{-1} a_k),

where H is defined as

        H = Σ_{k=1}^m (1/(b_k - a_k^T x̂)^2) a_k a_k^T.
Exercise 79. The analytic center of a set of linear inequalities Ax ≤ b depends not only on the
geometry of the feasible set, but also on the representation (i.e., A and b). For example,
adding redundant inequalities does not change the polyhedron, but it moves the analytic
center. In fact, by adding redundant inequalities you can make any strictly feasible point
the analytic center, as you will show in this problem.

Suppose that A ∈ R^{m×n} and b ∈ R^m define a bounded polyhedron

        P = {x | Ax ≤ b}

and that x* satisfies Ax* < b. Show that there exist c ∈ R^n, γ ∈ R, and a positive integer
q, such that

(a) P is the solution set of the m + q inequalities

        Ax ≤ b
        c^T x ≤ γ    (q copies);                                        (73)

(b) x* is the analytic center of the set of linear inequalities given in (73).
Exercise 80. Maximum-likelihood estimation with parabolic noise density. We consider the linear
measurement model of lecture 4, page 18,

        y_i = a_i^T x + v_i,   i = 1, ..., m.

The vector x ∈ R^n is a vector of parameters to be estimated, the y_i ∈ R are the measured or
observed quantities, and the v_i are the measurement errors or noise. The vectors a_i ∈ R^n are
given. We assume that the measurement errors v_i are independent and identically distributed
with a parabolic density function

        p(v) = { (3/4)(1 - v^2),   |v| ≤ 1
               { 0,                otherwise.

(Figure: the parabolic density p(v), supported on [-1, 1] with peak value 3/4 at v = 0.)

Let x̂ be the maximum-likelihood (ML) estimate based on the observed values y, i.e.,

        x̂ = argmax_x [ Σ_{i=1}^m log(1 - (y_i - a_i^T x)^2) + m log(3/4) ].

Show that the true value of x satisfies

        (x - x̂)^T H (x - x̂) ≤ 4m^2

where

        H = 2 Σ_{i=1}^m ((1 + (y_i - a_i^T x̂)^2) / (1 - (y_i - a_i^T x̂)^2)^2) a_i a_i^T.
Exercise 81. Potential reduction algorithm. Consider the LP

        minimize    c^T x
        subject to  Ax ≤ b

with A ∈ R^{m×n}. We assume that rank A = n, that the problem is strictly feasible, and that
the optimal value p* is finite.

For l < p* and q > m, we define the potential function

        φ_pot(x) = q log(c^T x - l) - Σ_{i=1}^m log(b_i - a_i^T x).

The function φ_pot is defined for all strictly feasible x, and although it is not a convex function,
it can be shown that it has a unique minimizer. We denote the minimizer as x_pot*(l):

        x_pot*(l) = argmin_{Ax<b} [ q log(c^T x - l) - Σ_{i=1}^m log(b_i - a_i^T x) ].

(a) Show that x_pot*(l) lies on the central path, i.e., it is the minimizer of the function

        t c^T x - Σ_{i=1}^m log(b_i - a_i^T x)

    for some value of t.
(b) Prove that the following algorithm converges and that it returns a suboptimal x with
    c^T x - p* < ε.

        given l < p*, tolerance ε > 0, q > m
        repeat {
            1. x := x_pot*(l)
            2. if m(c^T x - l)/q < ε, return(x)
            3. l := ((q - m)/q) c^T x + (m/q) l
        }
Exercise 82. Consider the following variation on the barrier method for solving the LP

        minimize    c^T x
        subject to  a_i^T x ≤ b_i,   i = 1, ..., m.

We assume we are given a strictly feasible x̂ (i.e., a_i^T x̂ < b_i for i = 1, ..., m), a strictly dual
feasible ẑ (A^T ẑ + c = 0, ẑ > 0), and a positive scalar θ with 0 < θ < 1.

        initialize: x = x̂,  w_i = (b_i - a_i^T x̂) ẑ_i,  i = 1, ..., m
        repeat:
            1. x := argmin_y ( c^T y - Σ_{i=1}^m w_i log(b_i - a_i^T y) )
            2. w := θ w

Give an estimate or a bound on the number of (outer) iterations required to reach an accuracy

        c^T x - p* ≤ ε.
Exercise 83. The inverse barrier. The inverse barrier of a set of linear inequalities

        a_i^T x ≤ b_i,   i = 1, ..., m,

is the function ψ, defined as

        ψ(x) = Σ_{i=1}^m 1/(b_i - a_i^T x)

for strictly feasible x. It can be shown that ψ is convex and differentiable on the set of strictly
feasible points, and that ψ(x) tends to infinity as x approaches the boundary of the feasible
set.

Suppose x̂ is strictly feasible and minimizes

        c^T x + ψ(x).

Show that you can construct from x̂ a dual feasible point for the LP

        minimize    c^T x
        subject to  a_i^T x ≤ b_i,   i = 1, ..., m.
Exercise 84. Assume the primal and dual LPs

        (P)  minimize    c^T x        (D)  maximize    -b^T z
             subject to  Ax ≤ b            subject to  A^T z + c = 0
                                                       z ≥ 0

are strictly feasible. Let {x(t) | t > 0} be the central path and define

        s(t) = b - A x(t),    z(t) = (1/t) (1/s_1(t), 1/s_2(t), ..., 1/s_m(t)).

(a) Suppose x*, z* are optimal for the primal and dual LPs, and define s* = b - Ax*. (If
    there are multiple optimal points, x*, z* denote an arbitrary pair of optimal points.)
    Show that

        z(t)^T s* + s(t)^T z* = m/t

    for all t > 0. From the definition of z(t), this implies that

        Σ_{k=1}^m s_k*/s_k(t) + Σ_{k=1}^m z_k*/z_k(t) = m.              (74)

(b) As t goes to infinity, the central path converges to the optimal points

        x_c* = lim_{t→∞} x(t),    s_c* = b - A x_c* = lim_{t→∞} s(t),    z_c* = lim_{t→∞} z(t).

    Define I = {k | s_{c,k}* = 0}, the set of active constraints at x_c*. Apply (74) to s* = s_c*,
    z* = z_c* to get

        Σ_{k∉I} s_{c,k}*/s_k(t) + Σ_{k∈I} z_{c,k}*/z_k(t) = m.

    Use this to show that z_{c,k}* > 0 for k ∈ I. This proves that the central path converges to
    a strictly complementary solution, i.e., s_c* + z_c* > 0.
(c) The primal optimal set is the set of all x that are feasible and satisfy complementary
    slackness with z_c*:

        X_opt = {x | a_k^T x = b_k, k ∈ I,  a_k^T x ≤ b_k, k ∉ I}.

    Let x* be an arbitrary primal optimal point. Show that

        Π_{k∉I} (b_k - a_k^T x*)  ≤  Π_{k∉I} (b_k - a_k^T x_c*).

    Hint. Use the arithmetic-geometric mean inequality

        ( Π_{k=1}^m y_k )^{1/m}  ≤  (1/m) Σ_{k=1}^m y_k

    for nonnegative vectors y ∈ R^m.
Exercise 85. The most expensive step in one iteration of an interior-point method for an LP

        minimize    c^T x
        subject to  Ax ≤ b

is the solution of a set of linear equations of the form

        A^T D A Δx = y,                                                 (75)

where D is a positive diagonal matrix, the right-hand side y is a given vector, and Δx is the
unknown. The values of D and y depend on the method used and on the current iterate, and
are not important for our purposes here. For example, the Newton equation in the barrier
method,

        ∇²φ(x) v = -t c - ∇φ(x),

is of the form (75). In the primal-dual method of lecture 14, we have to solve two sets of
linear equations of the form (75) with D = X^{-1} Z (see page 14-15).

It is often possible to speed up the algorithm significantly by taking advantage of special
structure of the matrix A when solving the equations (75).

Consider the following three optimization problems that we encountered before in this course.

- ℓ1-minimization:

        minimize    ||Pu + q||_1

  (P ∈ R^{r×s} and q ∈ R^r are given; u ∈ R^s is the variable).

- Constrained ℓ1-minimization:

        minimize    ||Pu + q||_1
        subject to  -1 ≤ u ≤ 1

  (P ∈ R^{r×s} and q ∈ R^r are given; u ∈ R^s is the variable).

- Robust linear programming (see exercise 23):

        minimize    w^T u
        subject to  Pu + ||u||_1 1 ≤ q

  (P ∈ R^{r×s}, q ∈ R^r, and w ∈ R^s are given; u ∈ R^s is the variable).

For each of these three problems, answer the following questions.

(a) Express the problem as an LP in inequality form. Give the matrix A, and the number
    of variables and constraints.
(b) What is the cost of solving (75) for the matrix A you obtained in part (a), if you do
    not use any special structure in A (knowing that the cost of solving a dense symmetric
    positive definite set of n linear equations in n variables is (1/3)n^3 operations, and the
    cost of a matrix-matrix multiplication A^T A, with A ∈ R^{m×n}, is mn^2 operations)?
(c) Work out the product A^T D A (assuming D is a given positive diagonal matrix). Can
    you give an efficient method for solving (75) that uses the structure in the equations?
    What is the cost of your method (i.e., the approximate number of operations when r
    and s are large) as a function of the dimensions r and s?
    Hint. Try to reduce the problem to solving a set of s linear equations in s variables,
    followed by a number of simple operations. For the third problem, you can use the
    following formula for the inverse of a matrix H + yy^T, where y is a vector:

        (H + yy^T)^{-1} = H^{-1} - (1/(1 + y^T H^{-1} y)) H^{-1} y y^T H^{-1}.
Exercise 86. In this problem you are asked to write a Matlab code for the ℓ1-approximation
problem

        minimize    ||Pu + q||_1,                                       (76)

where P ∈ R^{r×s} and q ∈ R^r. The calling sequence for the code is u = l1(P,q). On exit,
it must guarantee a relative accuracy of 10^{-6} or an absolute accuracy of 10^{-8}, i.e., the code
can terminate if

        ||Pu + q||_1 - p* ≤ 10^{-6} p*    or    ||Pu + q||_1 - p* ≤ 10^{-8},

where p* is the optimal value of (76). You may assume that P has full rank (rank P = s).

We will solve the problem using Mehrotra's method as described in lecture 14, applied to the
LP

        minimize    1^T v
        subject to  [  P  -I ] [ u ]  ≤  [ -q ]                         (77)
                    [ -P  -I ] [ v ]     [  q ].

We will take advantage of the structure in the problem to improve the efficiency.

(a) Initialization. Mehrotra's method can be started at infeasible primal and dual points.
    However good feasible starting points for the LP (77) are readily available from the
    solution u_ls of the least-squares problem

        minimize    ||Pu + q||

    (in Matlab: u = -P\q). As primal starting point we can use u = u_ls, and choose v so
    that we have strict feasibility in (77). To find a strictly feasible point for the dual of (77),
    we note that P^T P u_ls = -P^T q and therefore the least-squares residual r_ls = P u_ls + q
    satisfies

        P^T r_ls = 0.

    This property can be used to construct a strictly feasible point for the dual of (77). You
    should try to find a dual starting point that provides a positive lower bound on p*, i.e.,
    a lower bound that is better than the trivial lower bound p* ≥ 0.
    Since the starting points are strictly feasible, all iterates in the algorithm will remain
    strictly feasible, and we don't have to worry about testing the deviation from feasibility
    in the convergence criteria.
(b) As we have seen, the most expensive part of an iteration in Mehrotra's method is the
    solution of two sets of equations of the form

        A^T X^{-1} Z A Δx = r_1                                         (78)

    where X and Z are positive diagonal matrices that change at each iteration. One of
    the two equations is needed to determine the affine-scaling direction; the other equation
    (with a different right-hand side) is used to compute the combined centering-corrector
    step. In our application, (78) has r + s equations in r + s variables, since

        A = [  P  -I ],    Δx = [ Δu ]
            [ -P  -I ]          [ Δv ].

    By exploiting the special structure of A, show that you can solve systems of the form (78)
    by solving a smaller system of the form

        P^T D P Δu = r_2,                                               (79)

    followed by a number of inexpensive operations. In (79) D is an appropriately chosen
    positive diagonal matrix. This observation is important, since it means that the cost
    of one iteration reduces to the cost of solving two systems of size s×s (as opposed
    to (r+s)×(r+s)). In other words, although we have introduced r new variables to
    express (76) as an LP, the extra cost of introducing these variables is marginal.
(c) Test your code on randomly generated P and q. Plot the duality gap (on a logarithmic
    scale) versus the iteration number for a few examples and include a typical plot with
    your solutions.
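Remark. A sketch of the initialization in part (a). Since P^T r_ls = 0, any dual point of the form z = ((1 + θ r_ls)/2, (1 - θ r_ls)/2) with |θ| ||r_ls||_∞ < 1 is strictly feasible for the dual of (77), and its objective value θ ||r_ls||^2 is a positive lower bound on p*. Here θ is our name for the scaling, not notation from the lecture notes.

    % Sketch: strictly feasible starting points for the LP (77).
    uls = -P \ q;                                 % least-squares solution
    rls = P*uls + q;                              % residual, P'*rls = 0
    v0  = abs(rls) + 0.1*max(abs(rls)) + 0.1;     % strict primal feasibility
    th  = 0.99 / norm(rls, inf);
    z1  = (1 + th*rls)/2;  z2 = (1 - th*rls)/2;   % dual: P'*(z1-z2) = 0, z1+z2 = 1
    lower_bound = th * norm(rls)^2;               % = q'*(z1 - z2) > 0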
Exercise 87. This problem is similar to the previous problem, but instead we consider the con-
strained ℓ1-approximation problem

        minimize    ||Pu + q||_1
        subject to  -1 ≤ u ≤ 1                                          (80)

where P ∈ R^{r×s} and q ∈ R^r. The calling sequence for the code is u = cl1(P,q). On exit,
it must guarantee a relative accuracy of 10^{-5} or an absolute accuracy of 10^{-8}, i.e., the code
can terminate if

        ||Pu + q||_1 - p* ≤ 10^{-5} p*    or    ||Pu + q||_1 - p* ≤ 10^{-8},

where p* is the optimal value of (80). You may assume that P has full rank (rank P = s).

We will solve the problem using Mehrotra's method as described in lecture 14, applied to the
LP

        minimize    1^T v
        subject to  [  P  -I ]            [ -q ]
                    [ -P  -I ] [ u ]  ≤   [  q ]                        (81)
                    [  I   0 ] [ v ]      [  1 ]
                    [ -I   0 ]            [  1 ].

We will take advantage of the structure in the problem to improve the efficiency.

(a) Initialization. For this problem it is easy to determine strictly feasible primal and dual
    points at which the algorithm can be started. This has the advantage that all iterates
    in the algorithm will remain strictly feasible, and we don't have to worry about testing
    the deviation from feasibility in the convergence criteria.
    As primal starting point, we can simply take u = 0, and a vector v that satisfies
    v_i > |(Pu + q)_i|, i = 1, ..., r. What would you choose as dual starting point?
(b) As we have seen, the most expensive part of an iteration in Mehrotra's method is the
    solution of two sets of equations of the form

        A^T X^{-1} Z A Δx = r_1                                         (82)

    where X and Z are positive diagonal matrices that change at each iteration. One of
    the two equations is needed to determine the affine-scaling direction; the other equation
    (with a different right-hand side) is used to compute the combined centering-corrector
    step. In our application, (82) has r + s equations in r + s variables, since

        A = [  P  -I ]
            [ -P  -I ],    Δx = [ Δu ]
            [  I   0 ]          [ Δv ].
            [ -I   0 ]

    By exploiting the special structure of A, show that you can solve systems of the form (82)
    by solving a smaller system of the form

        (P^T D̃ P + D̂) Δu = r_2,                                        (83)

    followed by a number of inexpensive operations. The matrices D̃ and D̂ in (83) are
    appropriately chosen positive diagonal matrices.
    This observation is important, since the cost of solving (83) is roughly equal to the cost
    of solving the least-squares problem

        minimize    ||Pu + q||.

    Since the interior-point method converges in very few iterations (typically less than 10),
    this allows us to conclude that the cost of solving (80) is roughly equal to the cost of 10
    least-squares problems of the same dimension, in spite of the fact that we introduced r
    new variables to cast the problem as an LP.
(c) Test your code on randomly generated P and q. Plot the duality gap (on a logarithmic
    scale) versus the iteration number for a few examples and include a typical plot with
    your solutions.
Exercise 88. Consider the optimization problem

        minimize    Σ_{i=1}^m f(a_i^T x - b_i)

where

        f(u) = { 0,           |u| ≤ 1
               { |u| - 1,     1 ≤ |u| ≤ 2
               { 2|u| - 3,    |u| ≥ 2.

(Figure: the piecewise-linear penalty f(u); slope 0 on [-1, 1], slope 1 for 1 ≤ |u| ≤ 2, and
slope 2 for |u| ≥ 2.)

The problem data are a_i ∈ R^n and b_i ∈ R.

(a) Formulate this problem as an LP in inequality form

        minimize    c̃^T x̃
        subject to  Ã x̃ ≤ b̃.                                           (84)

    Carefully explain why the two problems are equivalent, and what the meaning is of any
    auxiliary variables you introduce.
(b) Describe an efficient method for solving the equations

        Ã^T D Ã Δx̃ = r

    that arise in each iteration of Mehrotra's method applied to the LP (84). Here D is a
    given diagonal matrix with positive diagonal elements, and r is a given vector.
    Compare the cost of your method with the cost of solving the least-squares problem

        minimize    Σ_{i=1}^m (a_i^T x - b_i)^2.
Exercise 89. The most time-consuming step in a primal-dual interior-point method for solving
an LP

        minimize    c^T x
        subject to  Ax ≤ b

is the solution of linear equations of the form

        [ 0    A    I ] [ Δz ]     [ r_1 ]
        [ A^T  0    0 ] [ Δx ]  =  [ r_2 ]
        [ X    0    Z ] [ Δs ]     [ r_3 ],

where X and Z are positive diagonal matrices. After eliminating Δs from the last equation
we obtain

        [ -D   A ] [ Δz ]     [ d ]
        [ A^T  0 ] [ Δx ]  =  [ f ]

where D = X Z^{-1}, d = r_1 - Z^{-1} r_3, f = r_2.

Describe an efficient method for solving this equation for an LP of the form

        minimize    c^T x
        subject to  Px ≤ q
                    -1 ≤ x ≤ 1,

where P ∈ R^{m×n} is a dense matrix. Distinguish two cases: m ≥ n and m ≤ n.