An Introduction to Optimization
SOLUTIONS MANUAL
Fourth Edition
1.2
1.3
1.4
1.5
The cards that you should turn over are 3 and A. The remaining cards are irrelevant to ascertaining the
truth or falsity of the rule. The card with S is irrelevant because S is not a vowel. The card with 8 is
irrelevant because the rule does not say that if a card has an even number on one side, then it has a vowel
on the other side.
Turning over the A card directly verifies the rule, while turning over the 3 card verifies the contrapositive.
Hence,
det M = ± det M_{k,k}.
b. We can see this in the following examples. We assume, without loss of generality, that M_{m−k,k} = O and
let M_{k,k} = 2. Thus k = 1. First consider the case when m = 2. Then we have
M = [ O  I_{m−k} ; M_{k,k}  O ] = [ 0  1 ; 2  0 ].
Thus,
det M = −2 = det(−M_{k,k}).
Next consider the case when m = 3. Then
det [ O  I_{m−k} ; M_{k,k}  O ] = det [ 0  1  0 ; 0  0  1 ; 2  0  0 ] = 2 ≠ det(−M_{k,k}).
Therefore, in general,
det M ≠ det(−M_{k,k}).
However, when k = m/2, that is, when all sub-matrices are square and of the same dimension, then it is
true that
det M = det (−M k,k ) .
See [121].
2.5
Let
M = [ A  B ; C  D ],
and suppose that each block is k × k. John R. Silvester [121] showed that if at least one of the blocks is
equal to O (zero matrix), then the desired formula holds. Indeed, if a row or column block is zero, then the
determinant is equal to zero, as follows from the determinant's properties discussed in Section 2.2. That is, if
A = B = O, or A = C = O, and so on, then obviously det M = 0. This includes the case when any three
or all four block matrices are zero matrices.
If B = O or C = O, then M is block triangular, and
det M = det(A) det(D) = det(AD).
The only case left to analyze is when A = O or D = O. We will show that in either case, det M = det(−I_k) det(B) det(C) = (−1)^k det(BC).
Without loss of generality suppose that D = O. Following arguments of John R. Silvester [121], we premul-
tiply M by the product of three matrices whose determinants are unity:
[ I_k  −I_k ; O  I_k ] [ I_k  O ; I_k  I_k ] [ I_k  −I_k ; O  I_k ] [ A  B ; C  O ] = [ −C  O ; A  B ].
Hence,
det M = det [ A  B ; C  O ] = det [ −C  O ; A  B ] = det(−C) det(B) = det(−I_k) det(C) det(B).
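The identity can be spot-checked numerically. The following sketch (using NumPy, with arbitrary random blocks) verifies that det [A B; C O] = det(−I_k) det(C) det(B) holds regardless of A:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3
A = rng.standard_normal((k, k))
B = rng.standard_normal((k, k))
C = rng.standard_normal((k, k))

# M = [A B; C O], with the lower-right block zero.
M = np.block([[A, B], [C, np.zeros((k, k))]])

lhs = np.linalg.det(M)
rhs = np.linalg.det(-np.eye(k)) * np.linalg.det(C) * np.linalg.det(B)
assert np.isclose(lhs, rhs)
```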
2.6
We represent the given system of equations in the form Ax = b, where
A = [ 1  1  2  1 ; 1  −2  0  −1 ],   x = [ x1 ; x2 ; x3 ; x4 ],   and   b = [ 1 ; −2 ].
2. If a ≥ 0, then |a| = a. If a < 0, then |a| = −a > 0 > a. Hence |a| ≥ a. On the other hand, | − a| ≥ −a
(by the above). Hence, a ≥ −| − a| = −|a| (by property 1).
3. We have four cases to consider. First, if a, b ≥ 0, then a + b ≥ 0. Hence, |a + b| = a + b = |a| + |b|.
Second, if a, b ≤ 0, then a + b ≤ 0. Hence |a + b| = −(a + b) = −a − b = |a| + |b|.
Third, if a ≥ 0 and b ≤ 0, then we have two further subcases:
The fourth case, a ≤ 0 and b ≥ 0, is identical to the third case, with a and b interchanged.
4. We first show |a − b| ≤ |a| + |b|. We have
|a − b| = |a + (−b)|
≤ |a| + | − b| by property 3
= |a| + |b| by property 1.
To show ||a| − |b|| ≤ |a − b|, we note that |a| = |a − b + b| ≤ |a − b| + |b|, which implies |a| − |b| ≤ |a − b|. On the
other hand, from the above we have |b| − |a| ≤ |b − a| = |a − b| by property 1. Therefore, ||a| − |b|| ≤ |a − b|.
5. We have four cases. First, if a, b ≥ 0, we have ab ≥ 0 and hence |ab| = ab = |a||b|. Second, if a, b ≤ 0,
we have ab ≥ 0 and hence |ab| = ab = (−a)(−b) = |a||b|. Third, if a ≥ 0 and b ≤ 0, we have ab ≤ 0 and hence
|ab| = −ab = a(−b) = |a||b|. The fourth case, a ≤ 0 and b ≥ 0, is identical to the third case, with a and b
interchanged.
6. We have
7. ⇒: By property 2, −a ≤ |a| and a ≤ |a|. Therefore, |a| < b implies −a ≤ |a| < b and a ≤ |a| < b; that is, −b < a < b.
⇐: If a ≥ 0, then |a| = a < b. If a < 0, then |a| = −a < b.
For the case when “<” is replaced by “≤”, we simply repeat the above proof with “<” replaced by “≤”.
8. This is simply the negation of property 7 (apply DeMorgan’s Law).
2.8
Observe that we can represent ⟨x, y⟩₂ as
⟨x, y⟩₂ = xᵀ [ 2  3 ; 3  5 ] y = (Qx)ᵀ(Qy) = xᵀQ²y,
where
Q = [ 1  1 ; 1  2 ].
1. ⟨x, x⟩₂ = (Qx)ᵀ(Qx) = ‖Qx‖² ≥ 0, with equality if and only if x = 0, since Q is nonsingular.
2. ⟨x, y⟩₂ = (Qx)ᵀ(Qy) = (Qy)ᵀ(Qx) = ⟨y, x⟩₂.
3. We have
⟨x + y, z⟩₂ = (x + y)ᵀQ²z = xᵀQ²z + yᵀQ²z = ⟨x, z⟩₂ + ⟨y, z⟩₂.
4. ⟨rx, y⟩₂ = (rx)ᵀQ²y = r xᵀQ²y = r⟨x, y⟩₂.
2.9
We have ‖x‖ = ‖(x − y) + y‖ ≤ ‖x − y‖ + ‖y‖ by the triangle inequality. Hence, ‖x‖ − ‖y‖ ≤ ‖x − y‖. On
the other hand, from the above we have ‖y‖ − ‖x‖ ≤ ‖y − x‖ = ‖x − y‖. Combining the two inequalities,
we obtain |‖x‖ − ‖y‖| ≤ ‖x − y‖.
2.10
Let ε > 0 be given. Set δ = ε. Hence, if ‖x − y‖ < δ, then by Exercise 2.9, |‖x‖ − ‖y‖| ≤ ‖x − y‖ < δ = ε.
3. Transformations
3.1
Let v be the vector such that x are the coordinates of v with respect to {e1, e2, . . . , en}, and x' are the
coordinates of v with respect to {e1', e2', . . . , en'}. Then,
v = x1 e1 + ⋯ + xn en = [e1, . . . , en]x
and
v = x1' e1' + ⋯ + xn' en' = [e1', . . . , en']x'.
Hence,
[e1, . . . , en]x = [e1', . . . , en']x',
which implies
x' = [e1', . . . , en']⁻¹[e1, . . . , en]x = Tx.
3.2
a. We have
[e1', e2', e3'] = [e1, e2, e3] [ 1  2  4 ; 3  −1  5 ; −4  5  3 ].
Therefore,
T = [e1', e2', e3']⁻¹[e1, e2, e3] = [ 1  2  4 ; 3  −1  5 ; −4  5  3 ]⁻¹ = (1/42) [ 28  −14  −14 ; 29  −19  −7 ; −11  13  7 ].
b. We have
[e1, e2, e3] = [e1', e2', e3'] [ 1  2  3 ; 1  −1  0 ; 3  4  5 ].
Therefore,
T = [ 1  2  3 ; 1  −1  0 ; 3  4  5 ].
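As a quick numerical cross-check of the inverse computed in part a (a NumPy sketch):

```python
import numpy as np

# Basis-change matrix from part a: [e1', e2', e3'] = [e1, e2, e3] @ A.
A = np.array([[1.0, 2.0, 4.0],
              [3.0, -1.0, 5.0],
              [-4.0, 5.0, 3.0]])

# T is the coordinate-transformation matrix, T = A^{-1}.
T = np.linalg.inv(A)

expected = np.array([[28.0, -14.0, -14.0],
                     [29.0, -19.0, -7.0],
                     [-11.0, 13.0, 7.0]]) / 42.0
assert np.allclose(T, expected)
```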
3.3
We have
[e1, e2, e3] = [e1', e2', e3'] [ 2  2  3 ; 1  −1  0 ; −1  2  1 ].
Therefore, the transformation matrix from {e1', e2', e3'} to {e1, e2, e3} is
T = [ 2  2  3 ; 1  −1  0 ; −1  2  1 ].
Now, consider a linear transformation L : R³ → R³, and let A be its representation with respect to
{e1, e2, e3}, and B its representation with respect to {e1', e2', e3'}. Let y = Ax and y' = Bx'. Then,
y' = Ty = T(Ax) = TA(T⁻¹x') = (TAT⁻¹)x'.
Hence, the representation of the linear transformation with respect to {e1', e2', e3'} is
B = TAT⁻¹ = [ 3  −10  −8 ; −1  8  4 ; 2  −13  −7 ].
3.4
We have
[e1', e2', e3', e4'] = [e1, e2, e3, e4] [ 1  1  1  1 ; 0  1  1  1 ; 0  0  1  1 ; 0  0  0  1 ].
Therefore, the transformation matrix from {e1, e2, e3, e4} to {e1', e2', e3', e4'} is
T = [ 1  1  1  1 ; 0  1  1  1 ; 0  0  1  1 ; 0  0  0  1 ]⁻¹ = [ 1  −1  0  0 ; 0  1  −1  0 ; 0  0  1  −1 ; 0  0  0  1 ].
Now, consider a linear transformation L : R⁴ → R⁴, and let A be its representation with respect to
{e1, e2, e3, e4}, and B its representation with respect to {e1', e2', e3', e4'}. Let y = Ax and y' = Bx'.
Then,
y' = Ty = T(Ax) = TA(T⁻¹x') = (TAT⁻¹)x'.
Therefore,
B = TAT⁻¹ = [ 5  3  4  3 ; −3  −2  −1  −2 ; −1  0  −1  −2 ; 1  1  1  4 ].
3.5
Let {v1, v2, v3, v4} be a set of linearly independent eigenvectors of A corresponding to the eigenvalues λ1,
λ2, λ3, and λ4. Let T = [v1, v2, v3, v4]. Then,
AT = A[v1, v2, v3, v4] = [Av1, Av2, Av3, Av4] = [λ1v1, λ2v2, λ3v3, λ4v4] = [v1, v2, v3, v4] diag(λ1, λ2, λ3, λ4).
Hence,
AT = T diag(λ1, λ2, λ3, λ4),
or
T⁻¹AT = diag(λ1, λ2, λ3, λ4).
Therefore, the linear transformation has a diagonal matrix form with respect to the basis formed by a linearly
independent set of eigenvectors.
Because
det(λI − A) = (λ − 2)(λ − 3)(λ − 1)(λ + 1),
the eigenvalues are λ1 = 2, λ2 = 3, λ3 = 1, and λ4 = −1.
From Avi = λi vi, where vi ≠ 0 (i = 1, 2, 3, 4), the corresponding eigenvectors are
v1 = [ 0 ; 0 ; 1 ; 0 ],   v2 = [ 0 ; 0 ; 1 ; 1 ],   v3 = [ 0 ; 2 ; −9 ; 1 ],   and   v4 = [ 24 ; −12 ; 1 ; 9 ].
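The matrix A itself is not reproduced in this excerpt, but one can rebuild a matrix consistent with the quoted eigenpairs and confirm the diagonalization numerically (a NumPy sketch):

```python
import numpy as np

# Eigenvalues and eigenvectors quoted in the solution to Exercise 3.5.
lams = np.array([2.0, 3.0, 1.0, -1.0])
T = np.array([[0.0, 0.0, 0.0, 24.0],
              [0.0, 0.0, 2.0, -12.0],
              [1.0, 1.0, -9.0, 1.0],
              [0.0, 1.0, 1.0, 9.0]])  # columns are v1, v2, v3, v4

# Reconstruct a matrix A with these eigenpairs; then T^{-1} A T is diagonal.
A = T @ np.diag(lams) @ np.linalg.inv(T)
D = np.linalg.inv(T) @ A @ T
assert np.allclose(D, np.diag(lams))

# Each column of T is indeed an eigenvector: A v_i = lambda_i v_i.
for i in range(4):
    assert np.allclose(A @ T[:, i], lams[i] * T[:, i])
```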
3.6
Suppose v1, . . . , vn are eigenvectors of A corresponding to λ1, . . . , λn, respectively. Then, for each
i = 1, . . . , n, we have
(I_n − A)vi = vi − Avi = vi − λi vi = (1 − λi)vi,
which shows that 1 − λ1, . . . , 1 − λn are the eigenvalues of I_n − A.
Alternatively, we may write the characteristic polynomial of I_n − A as
det(λI_n − (I_n − A)) = det((λ − 1)I_n + A) = (−1)ⁿ det((1 − λ)I_n − A),
whose roots are λ = 1 − λi.
Therefore,
N(A) = { [ 1 ; 2 ; 4 ]c : c ∈ R }.
3.9
Let x, y ∈ R(A), and α, β ∈ R. Then, there exist vectors v, u such that x = Av and y = Au. Thus,
αx + βy = αAv + βAu = A(αv + βu).
Hence, αx + βy ∈ R(A), which shows that R(A) is a subspace.
Let x, y ∈ N (A), and α, β ∈ R. Then, Ax = 0 and Ay = 0. Thus,
A(αx + βy) = αAx + βAy = 0.
Hence, αx + βy ∈ N (A), which shows that N (A) is a subspace.
3.10
Let v ∈ R(B), i.e., v = Bx for some x. Consider the matrix [A v]. Then, N(Aᵀ) = N([A v]ᵀ), since if
u ∈ N(Aᵀ), then u ∈ N(Bᵀ) by assumption, and hence uᵀv = uᵀBx = xᵀBᵀu = 0. Now,
dim R(A) + dim N(Aᵀ) = m
and
dim R([A v]) + dim N([A v]ᵀ) = m.
Since dim N(Aᵀ) = dim N([A v]ᵀ), we have dim R(A) = dim R([A v]). Hence, v is a linear combination
of the columns of A, i.e., v ∈ R(A), which completes the proof.
3.11
We first show V ⊂ (V⊥)⊥. Let v ∈ V, and u any element of V⊥. Then uᵀv = vᵀu = 0. Therefore,
v ∈ (V⊥)⊥.
We now show (V⊥)⊥ ⊂ V. Let {a1, . . . , ak} be a basis for V, and {b1, . . . , bl} a basis for (V⊥)⊥. Define
A = [a1 ⋯ ak] and B = [b1 ⋯ bl], so that V = R(A) and (V⊥)⊥ = R(B). Hence, it remains to show
that R(B) ⊂ R(A). Using the result of Exercise 3.10, it suffices to show that N(Aᵀ) ⊂ N(Bᵀ). So let
x ∈ N(Aᵀ), which implies that x ∈ R(A)⊥ = V⊥, since R(A)⊥ = N(Aᵀ). Hence, for all y, we have
(By)ᵀx = 0 = yᵀBᵀx, which implies that Bᵀx = 0. Therefore, x ∈ N(Bᵀ), which completes the proof.
3.12
Let w ∈ W⊥, and y be any element of V. Since V ⊂ W, we have y ∈ W. Therefore, by definition of w,
wᵀy = 0. Therefore, w ∈ V⊥.
3.13
Let r = dim V. Let v1, . . . , vr be a basis for V, and V the matrix whose ith column is vi. Then, clearly
V = R(V).
Let u1, . . . , u_{n−r} be a basis for V⊥, and U the matrix whose ith row is uiᵀ. Then, V⊥ = R(Uᵀ), and
V = (V⊥)⊥ = R(Uᵀ)⊥ = N(U)
(by Exercise 3.11 and Theorem 3.4).
3.14
a. Let x ∈ V. Then, x = P x + (I − P )x. Note that P x ∈ V, and (I − P )x ∈ V ⊥ . Therefore,
x = P x + (I − P )x is an orthogonal decomposition of x with respect to V. However, x = x + 0 is also an
orthogonal decomposition of x with respect to V. Since the orthogonal decomposition is unique, we must
have x = P x.
b. Suppose P is an orthogonal projector onto V. Clearly, R(P ) ⊂ V by definition. However, from part a,
x = P x for all x ∈ V, and hence V ⊂ R(P ). Therefore, R(P ) = V.
3.15
To answer the question, we have to represent the quadratic form with a symmetric matrix as
xᵀ ( (1/2) ( [ 1  −8 ; 1  1 ] + [ 1  1 ; −8  1 ] ) ) x = xᵀ [ 1  −7/2 ; −7/2  1 ] x.
The leading principal minors are ∆1 = 1 and ∆2 = −45/4. Therefore, the quadratic form is indefinite.
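The symmetrization and the minor test can be checked numerically (a NumPy sketch):

```python
import numpy as np

# Non-symmetric matrix of the quadratic form in Exercise 3.15.
Q = np.array([[1.0, -8.0],
              [1.0, 1.0]])

# Symmetrize: x^T Q x = x^T ((Q + Q^T)/2) x.
S = (Q + Q.T) / 2.0
assert np.allclose(S, np.array([[1.0, -3.5], [-3.5, 1.0]]))

# Leading principal minors: Delta1 = 1, Delta2 = -45/4.
d1 = S[0, 0]
d2 = np.linalg.det(S)

# Indefinite: one negative and one positive eigenvalue.
evals = np.linalg.eigvalsh(S)
assert evals[0] < 0 < evals[1]
```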
3.16
The leading principal minors are ∆1 = 2, ∆2 = 0, ∆3 = 0, which are all nonnegative. However, the
eigenvalues of A are 0, −1.4641, and 5.4641 (this is quick to check in MATLAB, for example). This implies that
the matrix A is indefinite (by Theorem 3.7). An alternative way to show that A is not positive semidefinite
is to find a vector x such that xᵀAx < 0. So, let x be an eigenvector of A corresponding to its negative
eigenvalue λ = −1.4641. Then, xᵀAx = xᵀ(λx) = λxᵀx = λ‖x‖² < 0. For this example, we can take
x = [0.3251, 0.3251, −0.8881]ᵀ, for which we can verify that xᵀAx = −1.4643.
3.17
a. The matrix Q is indefinite, since ∆2 = −1 and ∆3 = 2.
b. Let x ∈ M. Then, x2 + x3 = −x1 , x1 + x3 = −x2 , and x1 + x2 = −x3 . Therefore,
Then,
Q = [ 1  1  1 ; 1  0  1 ; 1  1  1 ],
and the eigenvalues of Q are λ1 = 0, λ2 = 1 − √3, and λ3 = 1 + √3. Therefore, the quadratic form is
indefinite.
3.19
Let
Q = [ 4  −2  6 ; −2  1  −3 ; 6  −3  9 ],   x = [ x1 ; x2 ; x3 ] = x1 e1 + x2 e2 + x3 e3,
where e1, e2, and e3 form the natural basis for R³.
Let v1, v2, and v3 be another basis for R³. Then, the vector x is represented in the new basis as x̃, where
x = [v1, v2, v3]x̃ = V x̃.
Now, f(x) = xᵀQx = (V x̃)ᵀQ(V x̃) = x̃ᵀ(VᵀQV)x̃ = x̃ᵀQ̃x̃, where
Q̃ = [ q̃11  q̃12  q̃13 ; q̃21  q̃22  q̃23 ; q̃31  q̃32  q̃33 ].
We seek v1, v2, v3 of the form
v1 = α11 e1,
v2 = α21 e1 + α22 e2,
v3 = α31 e1 + α32 e2 + α33 e3.
Because
q̃ij = viᵀQvj = viᵀQ(αj1 e1 + ⋯ + αjj ej) = αj1(viᵀQe1) + ⋯ + αjj(viᵀQej),
we deduce that if viᵀQej = 0 for j < i, then viᵀQvj = 0. In this case, we require
viᵀQej = 0 for j < i,   and   viᵀQei = 1.
Case of i = 1. From v1ᵀQe1 = 1, we have
(α11 e1)ᵀQe1 = α11(e1ᵀQe1) = α11 q11 = 1.
Therefore,
α11 = 1/q11 = 1/∆1 = 1/4   ⇒   v1 = α11 e1 = [ 1/4 ; 0 ; 0 ].
Case of i = 2. The conditions v2ᵀQe1 = 0 and v2ᵀQe2 = 1 give a linear system in α21 and α22.
But, since ∆2 = 0, this system of equations is inconsistent. Hence, in this problem v2ᵀQe2 = 0 should
be satisfied instead of v2ᵀQe2 = 1 so that the system can have a solution. In this case, the diagonal
matrix becomes
Q̃ = [ α11  0  0 ; 0  0  0 ; 0  0  α33 ],
and the system of equations becomes
[ q11  q21 ; q12  q22 ] [ α21 ; α22 ] = [ 0 ; 0 ]   ⇒   [ α21 ; α22 ] = [ (1/2)α22 ; α22 ],
where α22 is an arbitrary real number; writing α22 = a gives v2 = [ a/2 ; a ; 0 ].
Case of i = 3. Similarly, the conditions v3ᵀQe1 = 0 and v3ᵀQe2 = 0 give
[ α31 ; α32 ; α33 ] = [ α31 ; 2α31 + 3α33 ; α33 ],
where α31 and α33 are arbitrary real numbers. Thus,
v3 = α31 e1 + α32 e2 + α33 e3 = [ b ; 2b + 3c ; c ].
Finally,
V = [v1, v2, v3] = [ 1/4  a/2  b ; 0  a  2b+3c ; 0  0  c ],
where a, b, and c are arbitrary real numbers.
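A numerical spot-check that VᵀQV is indeed diagonal, for one arbitrary choice of a, b, c (a NumPy sketch):

```python
import numpy as np

Q = np.array([[4.0, -2.0, 6.0],
              [-2.0, 1.0, -3.0],
              [6.0, -3.0, 9.0]])

# Basis from the solution, for one arbitrary choice of a, b, c.
a, b, c = 2.0, 1.0, 3.0
V = np.array([[0.25, a / 2.0, b],
              [0.0,  a,       2.0 * b + 3.0 * c],
              [0.0,  0.0,     c]])

Qt = V.T @ Q @ V
# The transformed quadratic form is diagonal in the new basis.
assert np.allclose(Qt, np.diag(np.diag(Qt)))
```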
3.20
We represent this quadratic form as f(x) = xᵀQx, where
Q = [ 1  ξ  −1 ; ξ  1  2 ; −1  2  5 ].
The leading principal minors of Q are ∆1 = 1, ∆2 = 1 − ξ², and ∆3 = −5ξ² − 4ξ. For the quadratic form to
be positive definite, all the leading principal minors of Q must be positive. This is the case if and only if
ξ ∈ (−4/5, 0).
3.21
The matrix Q = Qᵀ > 0 can be represented as Q = Q^{1/2}Q^{1/2}, where Q^{1/2} = (Q^{1/2})ᵀ > 0.
1. Now, ⟨x, x⟩_Q = (Q^{1/2}x)ᵀ(Q^{1/2}x) = ‖Q^{1/2}x‖² ≥ 0, and
⟨x + y, z⟩_Q = (x + y)ᵀQz = xᵀQz + yᵀQz = ⟨x, z⟩_Q + ⟨y, z⟩_Q.
3.22
We have
‖A‖∞ = max{‖Ax‖∞ : ‖x‖∞ = 1}.
For any x with ‖x‖∞ = 1 and any row i,
|Σ_{k=1}^n a_ik x_k| ≤ Σ_{k=1}^n |a_ik||x_k| ≤ Σ_{k=1}^n |a_ik|,
since |x_k| ≤ max_k |x_k| = ‖x‖∞ = 1. Therefore,
‖A‖∞ ≤ max_i Σ_{k=1}^n |a_ik|.
To show that ‖A‖∞ = max_i Σ_{k=1}^n |a_ik|, it remains to find an x̃ ∈ Rⁿ, ‖x̃‖∞ = 1, such that
‖Ax̃‖∞ = max_i Σ_{k=1}^n |a_ik|. So, let j be such that
Σ_{k=1}^n |a_jk| = max_i Σ_{k=1}^n |a_ik|.
Define x̃ by
x̃_k = |a_jk|/a_jk if a_jk ≠ 0, and x̃_k = 1 otherwise.
Clearly ‖x̃‖∞ = 1. Furthermore, for i ≠ j,
|Σ_{k=1}^n a_ik x̃_k| ≤ Σ_{k=1}^n |a_ik| ≤ max_i Σ_{k=1}^n |a_ik| = Σ_{k=1}^n |a_jk|,
and
Σ_{k=1}^n a_jk x̃_k = Σ_{k=1}^n |a_jk|.
Therefore,
‖Ax̃‖∞ = max_i |Σ_{k=1}^n a_ik x̃_k| = Σ_{k=1}^n |a_jk| = max_i Σ_{k=1}^n |a_ik|.
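The construction of x̃ can be verified numerically against NumPy's built-in induced ∞-norm (a sketch with a random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))

# Row with the largest absolute row sum.
row_sums = np.abs(A).sum(axis=1)
j = int(np.argmax(row_sums))

# x with entries sign(a_jk) (1 where a_jk = 0) achieves the maximum.
x = np.where(A[j] != 0, np.abs(A[j]) / A[j], 1.0)

assert np.isclose(np.max(np.abs(x)), 1.0)
assert np.isclose(np.max(np.abs(A @ x)), row_sums[j])
assert np.isclose(np.linalg.norm(A, np.inf), row_sums[j])
```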
3.23
We have
‖A‖₁ = max{‖Ax‖₁ : ‖x‖₁ = 1}.
We first show that ‖A‖₁ ≤ max_k Σ_{i=1}^m |a_ik|. For this, note that for each x such that ‖x‖₁ = 1, we have
‖Ax‖₁ = Σ_{i=1}^m |Σ_{k=1}^n a_ik x_k|
 ≤ Σ_{i=1}^m Σ_{k=1}^n |a_ik||x_k|
 ≤ Σ_{k=1}^n |x_k| Σ_{i=1}^m |a_ik|
 ≤ (max_k Σ_{i=1}^m |a_ik|) Σ_{k=1}^n |x_k|
 ≤ max_k Σ_{i=1}^m |a_ik|,
since Σ_{k=1}^n |x_k| = ‖x‖₁ = 1. Therefore,
‖A‖₁ ≤ max_k Σ_{i=1}^m |a_ik|.
To show that ‖A‖₁ = max_k Σ_{i=1}^m |a_ik|, it remains to find an x̃ ∈ Rⁿ, ‖x̃‖₁ = 1, such that
‖Ax̃‖₁ = max_k Σ_{i=1}^m |a_ik|. So, let j be such that
Σ_{i=1}^m |a_ij| = max_k Σ_{i=1}^m |a_ik|.
Define x̃ by
x̃_k = 1 if k = j, and x̃_k = 0 otherwise.
Clearly ‖x̃‖₁ = 1. Furthermore,
‖Ax̃‖₁ = Σ_{i=1}^m |Σ_{k=1}^n a_ik x̃_k| = Σ_{i=1}^m |a_ij| = max_k Σ_{i=1}^m |a_ik|.
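Similarly for the 1-norm, the standard basis vector e_j attains the maximum column sum (a NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 5))

# Column with the largest absolute column sum.
col_sums = np.abs(A).sum(axis=0)
j = int(np.argmax(col_sums))

# The standard basis vector e_j achieves the maximum.
x = np.zeros(A.shape[1])
x[j] = 1.0

assert np.isclose(np.abs(A @ x).sum(), col_sums[j])
assert np.isclose(np.linalg.norm(A, 1), col_sums[j])
```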
Therefore, αx + (1 − α)y ∈ S.
⇐: If S is empty, we are done. So, suppose x0 ∈ S. Consider the set S0 = S − x0 = {x − x0 : x ∈ S}.
Clearly, for all x, y ∈ S0 and α ∈ R, we have αx + (1 − α)y ∈ S0 . Note that 0 ∈ S0 . We claim that S0
is a subspace. To see this, let x, y ∈ S0 , and α ∈ R. Then, αx = αx + (1 − α)0 ∈ S0 . Furthermore,
(1/2)x + (1/2)y ∈ S0 , and therefore x + y ∈ S0 by the previous argument. Hence, S0 is a subspace. Therefore, by
Exercise 3.13, there exists A such that S0 = N (A) = {x : Ax = 0}. Define b = Ax0 . Then,
S = S0 + x0 = {y + x0 : y ∈ N (A)}
= {y + x0 : Ay = 0}
= {y + x0 : A(y + x0 ) = b}
= {x : Ax = b}.
4.2
Let u, v ∈ Θ = {x ∈ Rn : kxk ≤ r}, and α ∈ [0, 1]. Suppose z = αu + (1 − α)v. To show that Θ is convex,
we need to show that z ∈ Θ, i.e., ‖z‖ ≤ r. To this end,
‖z‖² = ‖αu + (1 − α)v‖² = α²‖u‖² + 2α(1 − α)uᵀv + (1 − α)²‖v‖².
Since u, v ∈ Θ, we have ‖u‖² ≤ r² and ‖v‖² ≤ r². Furthermore, by the Cauchy-Schwarz inequality,
uᵀv ≤ ‖u‖‖v‖ ≤ r². Therefore,
‖z‖² ≤ (α² + 2α(1 − α) + (1 − α)²)r² = (α + (1 − α))²r² = r².
Hence, z ∈ Θ, which implies that Θ is a convex set, i.e., any point on the line segment joining u and v
is also in Θ.
4.3
Let u, v ∈ Θ = {x ∈ Rn : Ax = b}, and α ∈ [0, 1]. Suppose z = αu + (1 − α)v. To show that Θ is convex,
we need to show that z ∈ Θ, i.e., Az = b. To this end,
Az = A(αu + (1 − α)v)
= αAu + (1 − α)Av.
15
Since u, v ∈ Θ, then Au = b and Av = b. Therefore,
Az = αb + (1 − α)b = b,
and hence z ∈ Θ.
4.4
Let u, v ∈ Θ = {x ∈ Rn : x ≥ 0}, and α ∈ [0, 1]. Suppose z = αu + (1 − α)v. To show that Θ is convex,
we need to show that z ∈ Θ, i.e., z ≥ 0. To this end, write u = [u1, . . . , un]ᵀ, v = [v1, . . . , vn]ᵀ, and
z = [z1, . . . , zn]ᵀ. Then, zi = αui + (1 − α)vi, i = 1, . . . , n. Since ui, vi ≥ 0, and α, 1 − α ≥ 0, we have zi ≥ 0.
Therefore, z ≥ 0, and hence z ∈ Θ.
5. Elements of Calculus
5.1
Observe that
‖A^k‖ ≤ ‖A^{k−1}‖‖A‖ ≤ ‖A^{k−2}‖‖A‖² ≤ ⋯ ≤ ‖A‖^k.
Therefore, if ‖A‖ < 1, then lim_{k→∞} ‖A^k‖ = 0, which implies that lim_{k→∞} A^k = O.
5.2
For the case when A has all real eigenvalues, the proof is simple. Let λ be the eigenvalue of A with largest
absolute value, and x the corresponding (normalized) eigenvector, i.e., Ax = λx and ‖x‖ = 1. Then,
|λ| = |λ|‖x‖ = ‖λx‖ = ‖Ax‖ ≤ ‖A‖‖x‖ = ‖A‖.
In general, fix ε > 0 and let
B = A/(‖A‖ + ε),
so that
‖B‖ = ‖A‖/(‖A‖ + ε) < 1.
By Exercise 5.1, B^k → O as k → ∞, and thus by Lemma 5.1, |λi(B)| < 1, i = 1, . . . , n. On the other hand,
for each i = 1, . . . , n,
λi(B) = λi(A)/(‖A‖ + ε),
and thus
|λi(B)| = |λi(A)|/(‖A‖ + ε) < 1,
which gives
|λi(A)| < ‖A‖ + ε.
Since the above arguments hold for any ε > 0, we have |λi(A)| ≤ ‖A‖.
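A quick numerical illustration that the spectral radius is bounded by induced norms (a NumPy sketch; the matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))

# Spectral radius (largest |eigenvalue|) never exceeds an induced norm.
rho = np.max(np.abs(np.linalg.eigvals(A)))
for p in (1, 2, np.inf):
    assert rho <= np.linalg.norm(A, p) + 1e-12
```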
5.3
5.4
We have
Df(x) = [x1/3, x2/2]
and
dg/dt(t) = [ 3 ; 2 ].
By the chain rule,
d/dt F(t) = Df(g(t)) dg/dt(t) = [(3t + 5)/3, (2t − 6)/2] [ 3 ; 2 ] = (3t + 5) + (2t − 6) = 5t − 1.
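The result F′(t) = 5t − 1 can be sanity-checked by finite differences. Since only Df is quoted, we take f(x) = x1²/6 + x2²/4, one function with that derivative (an assumption, determined only up to an additive constant):

```python
# Numerical check of the chain-rule computation in Exercise 5.4.
# Any f with Df(x) = [x1/3, x2/2] works; f below is one such choice
# (recovered from the gradient up to an additive constant).

def f(x1, x2):
    return x1**2 / 6.0 + x2**2 / 4.0

def g(t):
    return (3.0 * t + 5.0, 2.0 * t - 6.0)  # so that dg/dt = [3, 2]

def F(t):
    return f(*g(t))

t, h = 0.7, 1e-6
numeric = (F(t + h) - F(t - h)) / (2.0 * h)  # central difference
assert abs(numeric - (5.0 * t - 1.0)) < 1e-6
```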
5.5
We have
Df(x) = [x2/2, x1/2]
and
∂g/∂s(s, t) = [ 4 ; 2 ],   ∂g/∂t(s, t) = [ 3 ; 1 ].
By the chain rule,
∂/∂s f(g(s, t)) = Df(g(s, t)) ∂g/∂s(s, t) = (1/2)[2s + t, 4s + 3t] [ 4 ; 2 ] = 8s + 5t,
and
∂/∂t f(g(s, t)) = Df(g(s, t)) ∂g/∂t(s, t) = (1/2)[2s + t, 4s + 3t] [ 3 ; 1 ] = 5s + 3t.
5.6
We have
Df(x) = [3x1²x2x3² + x2, x1³x3² + x1, 2x1³x2x3 + 1]
and
dx/dt(t) = [ e^t + 3t² ; 2t ; 1 ].
By the chain rule,
d/dt f(x(t)) = Df(x(t)) dx/dt(t)
 = [3x1(t)²x2(t)x3(t)² + x2(t), x1(t)³x3(t)² + x1(t), 2x1(t)³x2(t)x3(t) + 1] [ e^t + 3t² ; 2t ; 1 ]
 = 12t(e^t + 3t²)³ + 2te^t + 6t² + 2t + 1.
5.7
Let ε > 0 be given. Since f(x) = o(g(x)), we have
lim_{x→0} ‖f(x)‖/g(x) = 0.
Hence, there exists δ > 0 such that if ‖x‖ < δ and x ≠ 0, then
‖f(x)‖/g(x) < ε,
i.e., ‖f(x)‖ < εg(x).
5.8
By Exercise 5.7, there exists δ > 0 such that if ‖x‖ < δ, then |o(g(x))| < g(x)/2. Hence, if ‖x‖ < δ, x ≠ 0,
then
f(x) ≤ −g(x) + |o(g(x))| < −g(x) + g(x)/2 = −(1/2)g(x) < 0.
5.9
We have that
{x : f1(x) = 12} = {x : x1² − x2² = 12}
and
{x : f2(x) = 16} = {x : x2 = 8/x1}.
To find the intersection points, we substitute x2 = 8/x1 into x1² − x2² = 12 to get x1⁴ − 12x1² − 64 = 0.
Solving gives x1² = 16 or x1² = −4. Clearly, only x1² = 16 yields real solutions, so x1 = 4 or x1 = −4, from
which we obtain x2 = 2 or x2 = −2, respectively. Hence, the intersection points are located at [4, 2]ᵀ and
[−4, −2]ᵀ.
The level sets associated with f1 (x1 , x2 ) = 12 and f2 (x1 , x2 ) = 16 are shown as follows.
[Figure: the level sets f1(x1, x2) = 12 (a hyperbola) and f2(x1, x2) = 16, intersecting at (4, 2) and (−4, −2).]
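A numerical check of the intersection computation (a NumPy sketch):

```python
import numpy as np

# Solve x1**4 - 12*x1**2 - 64 = 0 via the quadratic in u = x1**2.
u = np.real(np.roots([1.0, -12.0, -64.0]))  # roots u = 16 and u = -4
u_real = u[u > 0]
assert np.isclose(u_real[0], 16.0)

# Both intersection points lie on the first level set.
for x1 in (4.0, -4.0):
    x2 = 8.0 / x1
    assert np.isclose(x1**2 - x2**2, 12.0)
```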
5.10
a. We have
f(x) = f(x0) + Df(x0)(x − x0) + (1/2)(x − x0)ᵀD²f(x0)(x − x0) + ⋯.
We compute f(x0) = 2, Df(x0) = [1, 0], and D²f(x0) = [ 0  −1 ; −1  1 ] at x0 = [1, 0]ᵀ. Hence,
f(x) = 2 + [1, 0] [ x1 − 1 ; x2 ] + (1/2)[x1 − 1, x2] [ 0  −1 ; −1  1 ] [ x1 − 1 ; x2 ] + ⋯
 = 1 + x1 + x2 − x1x2 + (1/2)x2² + ⋯.
b. We compute
Df(x) = [e^{x1−x2} + e^{x1+x2} + 1, −e^{x1−x2} + e^{x1+x2} + 1],
D²f(x) = [ e^{x1−x2} + e^{x1+x2}   −e^{x1−x2} + e^{x1+x2} ; −e^{x1−x2} + e^{x1+x2}   e^{x1−x2} + e^{x1+x2} ].
6. Basics of Unconstrained Optimization
6.1
a. In this case, x∗ is definitely not a local minimizer. To see this, note that d = [1, −2]> is a feasible
direction at x∗ . However, d> ∇f (x∗ ) = −1, which violates the FONC.
b. In this case, x∗ satisfies the FONC, and thus is possibly a local minimizer, but we cannot conclude
definitively based on the given information.
c. In this case, x∗ satisfies the SOSC, and thus is definitely a (strict) local minimizer.
d. In this case, x∗ is definitely not a local minimizer. To see this, note that d = [0, 1]> is a feasible direction
at x∗ , and d> ∇f (x∗ ) = 0. However, d> F (x∗ )d = −1, which violates the SONC.
6.2
Because there are no constraints on x1 or x2 , we can utilize conditions for unconstrained optimization. To
proceed, we first compute the function gradient and find the critical points, that is, the points that satisfy
the FONC,
∇f (x1 , x2 ) = 0.
The components of the gradient ∇f (x1 , x2 ) are
∂f ∂f
= x21 − 4 and = x22 − 16.
∂x1 ∂x2
Thus there are four critical points:
x(1) = [ 2 ; 4 ],   x(2) = [ 2 ; −4 ],   x(3) = [ −2 ; 4 ],   and   x(4) = [ −2 ; −4 ].
Note that F (x(1) ) > 0 and therefore, x(1) is a strict local minimizer. Next, F (x(4) ) < 0 and therefore,
x(4) is a strict local maximizer. The Hessian is indefinite at x(2) and x(3) and so these points are neither
maximizers nor minimizers.
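The classification can be confirmed from the quoted partial derivatives; the Hessian is diag(2x1, 2x2) (a Python sketch):

```python
import numpy as np

# Gradient components from Exercise 6.2: df/dx1 = x1**2 - 4, df/dx2 = x2**2 - 16.
# The Hessian is then diag(2*x1, 2*x2).
points = {(2, 4): "min", (2, -4): "saddle", (-2, 4): "saddle", (-2, -4): "max"}

for (x1, x2), expected in points.items():
    # Gradient vanishes at each critical point.
    assert x1**2 - 4 == 0 and x2**2 - 16 == 0
    evals = np.array([2.0 * x1, 2.0 * x2])  # eigenvalues of the diagonal Hessian
    if np.all(evals > 0):
        kind = "min"
    elif np.all(evals < 0):
        kind = "max"
    else:
        kind = "saddle"
    assert kind == expected
```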
6.3
Suppose x∗ is a global minimizer of f over Ω, and x∗ ∈ Ω0 ⊂ Ω. Let x ∈ Ω0 . Then, x ∈ Ω and therefore
f (x∗ ) ≤ f (x). Hence, x∗ is a global minimizer of f over Ω0 .
6.4
Suppose x∗ is an interior point of Ω. Therefore, there exists ε > 0 such that {y : ky − x∗ k < ε} ⊂ Ω. Since
x∗ is a local minimizer of f over Ω, there exists ε0 > 0 such that f (x∗ ) ≤ f (x) for all x ∈ {y : ky −x∗ k < ε0 }.
Take ε00 = min(ε, ε0 ). Then, {y : ky − x∗ k < ε00 } ⊂ Ω0 , and f (x∗ ) ≤ f (x) for all x ∈ {y : ky − x∗ k < ε00 }.
Thus, x∗ is a local minimizer of f over Ω0 .
To show that we cannot make the same conclusion if x∗ is not an interior point, let Ω = {0}, Ω0 = [−1, 1],
and f (x) = x. Clearly, 0 ∈ Ω is a local minimizer of f over Ω. However, 0 ∈ Ω0 is not a local minimizer of f
over Ω0 .
6.5
a. The TONC is: if f 00 (0) = 0, then f 000 (0) = 0. To prove this, suppose f 00 (0) = 0. Now, by the FONC, we
also have f 0 (0) = 0. Hence, by Taylor’s theorem,
f(x) = f(0) + (f‴(0)/3!)x³ + o(x³).
Since 0 is a local minimizer, f(x) ≥ f(0) for all x sufficiently close to 0. Hence, for all such x,
(f‴(0)/3!)x³ ≥ o(x³).
Now, if x > 0, then
f‴(0) ≥ 3! o(x³)/x³,
which implies that f‴(0) ≥ 0. On the other hand, if x < 0, then
f‴(0) ≤ 3! o(x³)/x³,
which implies that f‴(0) ≤ 0. This implies that f‴(0) = 0, as required.
b. Let f (x) = −x4 . Then, f 0 (0) = 0, f 00 (0) = 0, and f 000 (0) = 0, which means that the FONC, SONC, and
TONC are all satisfied. However, 0 is not a local minimizer: f (x) < 0 for all x 6= 0.
c. The answer is yes. To see this, we first write
f(x) = f(0) + f′(0)x + (f″(0)/2)x² + (f‴(0)/3!)x³.
Now, if the FONC is satisfied, then
f(x) = f(0) + (f″(0)/2)x² + (f‴(0)/3!)x³.
Moreover, if the SONC is satisfied, then either (i) f 00 (0) > 0 or (ii) f 00 (0) = 0. In the case (i), it is clear from
the above equation that f (x) ≥ f (0) for all x sufficiently close to 0 (because the third term on the right-hand
side is o(x2 )). In the case (ii), the TONC implies that f (x) = f (0) for all x. In either case, f (x) ≥ f (0) for
all x sufficiently close to 0. This shows that 0 is a local minimizer.
6.6
a. The TONC is: if f′(0) = 0 and f″(0) = 0, then f‴(0) ≥ 0. To prove this, suppose f′(0) = 0 and
f″(0) = 0. By Taylor's theorem, for x ≥ 0,
f(x) = f(0) + (f‴(0)/3!)x³ + o(x³).
Since 0 is a local minimizer, f(x) ≥ f(0) for sufficiently small x ≥ 0. Hence, for all x ≥ 0 sufficiently small,
f‴(0) ≥ 3! o(x³)/x³.
This implies that f‴(0) ≥ 0, as required.
b. Let f (x) = −x4 . Then, f 0 (0) = 0, f 00 (0) = 0, and f 000 (0) = 0, which means that the FONC, SONC, and
TONC are all satisfied. However, 0 is not a local minimizer: f (x) < 0 for all x > 0.
6.7
For convenience, let z′ = x0 + arg min_{x∈Ω} f(x). Thus we want to show that z′ = arg min_{y∈Ω′} f(y − x0);
i.e., for all y ∈ Ω′, f(y − x0) ≥ f(z′ − x0). So fix y ∈ Ω′. Then, y − x0 ∈ Ω. Hence,
f(y − x0) ≥ min_{x∈Ω} f(x) = f(arg min_{x∈Ω} f(x)) = f(z′ − x0),
which completes the proof.
6.8
a. The gradient and Hessian of f are
∇f(x) = 2 [ 1  3 ; 3  7 ] x + [ 3 ; 5 ],
F(x) = 2 [ 1  3 ; 3  7 ].
Hence, ∇f([1, 1]ᵀ) = [11, 25]ᵀ, and F([1, 1]ᵀ) is as shown above.
b. The direction of maximal rate of increase is the direction of the gradient. Hence, the directional derivative
with respect to a unit vector in this direction is
(∇f(x)/‖∇f(x)‖)ᵀ∇f(x) = (∇f(x))ᵀ∇f(x)/‖∇f(x)‖ = ‖∇f(x)‖.
At x = [1, 1]ᵀ, we have ‖∇f([1, 1]ᵀ)‖ = √(11² + 25²) = √746 ≈ 27.31.
c. The FONC in this case is ∇f(x) = 0. Solving, we get
x = [ 3/2 ; −1 ].
The point above does not satisfy the SONC because the Hessian is not positive semidefinite (its determinant
is negative).
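Parts a–c can be verified numerically (a NumPy sketch):

```python
import numpy as np

Q = np.array([[1.0, 3.0],
              [3.0, 7.0]])
b = np.array([3.0, 5.0])

def grad(x):
    return 2.0 * Q @ x + b

# Part a: gradient at [1, 1].
g = grad(np.array([1.0, 1.0]))
assert np.allclose(g, [11.0, 25.0])

# Part b: maximal rate of increase = ||grad f|| = sqrt(746).
assert np.isclose(np.linalg.norm(g), np.sqrt(746.0))

# Part c: the unique FONC point, and the indefinite Hessian.
x_star = np.linalg.solve(2.0 * Q, -b)
assert np.allclose(x_star, [1.5, -1.0])
assert np.linalg.det(2.0 * Q) < 0  # Hessian not positive semidefinite
```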
6.9
a. A differentiable function f decreases most rapidly in the direction of the negative gradient. In our problem,
∇f(x) = [∂f/∂x1, ∂f/∂x2]ᵀ = [2x1x2 + x2³, x1² + 3x1x2²]ᵀ.
b. The rate of increase of f at x(0) in the direction −∇f(x(0)) is
(∇f(x(0)))ᵀ(−∇f(x(0)))/‖∇f(x(0))‖ = −‖∇f(x(0))‖ = −√125 = −5√5.
6.10
a. We can rewrite f as
f(x) = (1/2)xᵀ [ 4  4 ; 4  2 ] x + xᵀ [ 3 ; 4 ] + 7.
The gradient and Hessian of f are
∇f(x) = [ 4  4 ; 4  2 ] x + [ 3 ; 4 ],
F(x) = [ 4  4 ; 4  2 ].
b. The FONC in this case is ∇f(x) = 0. The only point satisfying the FONC is
x∗ = (1/4) [ −5 ; 2 ].
The point above does not satisfy the SONC because the Hessian is not positive semidefinite (its determinant
is negative). Therefore, f does not have a minimizer.
6.11
a. Write the objective function as f (x) = −x22 . In this problem the only feasible directions at 0 are of the
form d = [d1 , 0]> . Hence, d> ∇f (0) = 0 for all feasible directions d at 0.
b. The point 0 is a local maximizer, because f (0) = 0, while any feasible point x satisfies f (x) ≤ 0.
The point 0 is not a strict local maximizer because for any x of the form x = [x1 , 0]> , we have f (x) =
0 = f (0), and there are such points in any neighborhood of 0.
The point 0 is not a local minimizer because for any point x of the form x = [x1, x1²]ᵀ with x1 > 0, we
have f(x) = −x1⁴ < 0, and there are such points in any neighborhood of 0. Since 0 is not a local minimizer,
it is also not a strict local minimizer.
6.12
a. We have ∇f (x∗ ) = [0, 5]> . The only feasible directions at x∗ are of the form d = [d1 , d2 ]> with d2 ≥ 0.
Therefore, for such feasible directions, d> ∇f (x∗ ) = 5d2 ≥ 0. Hence, x∗ = [0, 1]> satisfies the first order
necessary condition.
b. We have F (x∗ ) = O. Therefore, for any d, d> F (x∗ )d ≥ 0. Hence, x∗ = [0, 1]> satisfies the second order
necessary condition.
c. Consider points of the form x = [x1 , −x21 + 1]> , x1 ∈ R. Such points are in Ω, and are arbitrarily close to
x∗ . However, for such points x 6= x∗ ,
Any feasible direction d at x∗ has the form d = [d1 , d2 ]> where d2 ≤ 2d1 , d1 , d2 ≥ 0. Therefore, for any
feasible direction d at x∗ , we have
Moreover, there are such points in any neighborhood of x∗ . Therefore, the point x∗ is not a local minimizer.
6.17
a. We have ∇f (x∗ ) = [1/x∗1 , 1/x∗2 ]> . If x∗ were an interior point, then ∇f (x∗ ) = 0. But this is clearly
impossible. Therefore, x∗ cannot possibly be an interior point.
b. We have F (x) = − diag[1/x21 , 1/x22 ], which is negative definite everywhere. Therefore, the second-order
necessary condition is satisfied everywhere. (Note that because we have a maximization problem, negative
definiteness is the relevant condition.)
6.18
Given x ∈ R, let
f(x) = Σ_{i=1}^n (x − xi)².
If x̄ minimizes f, then by the FONC,
f′(x̄) = 0,
and hence
Σ_{i=1}^n 2(x̄ − xi) = 0,
which gives x̄ = (1/n) Σ_{i=1}^n xi.
6.19
We have
f(x) = ((a + b)/x − b/x) / (1 + b(a + b)/x²) = a / (x + b(a + b)/x),
and
f′(x) = −(a / (x + b(a + b)/x)²)(1 − b(a + b)/x²).
Let x∗ be the optimal distance. Then, by the FONC, we have f′(x∗) = 0, which gives
1 − b(a + b)/(x∗)² = 0   ⇒   x∗ = √(b(a + b)).
6.20
The squared distance from the sensor to the baby’s heart is 1 + x2 , while the squared distance from the
sensor to the mother’s heart is 1 + (2 − x)2 . Therefore, the signal to noise ratio is
1 + (2 − x)2
f (x) = .
1 + x2
We have
f′(x) = (−2(2 − x)(1 + x²) − 2x(1 + (2 − x)²))/(1 + x²)² = 4(x² − 2x − 1)/(1 + x²)².
By the FONC, at the optimal position x∗, we have f′(x∗) = 0. Hence, either x∗ = 1 − √2 or x∗ = 1 + √2.
From the figure, it is easy to see that x∗ = 1 − √2 is the optimal position.
6.21
a. Let x be the decision variable. Write the total travel time as f (x), which is given by
f(x) = √(1 + x²)/v1 + √(1 + (d − x)²)/v2.
Differentiating the above expression, we get
f′(x) = x/(v1√(1 + x²)) − (d − x)/(v2√(1 + (d − x)²)).
By the first order necessary condition, the optimal path satisfies f′(x∗) = 0, which corresponds to
x∗/(v1√(1 + (x∗)²)) = (d − x∗)/(v2√(1 + (d − x∗)²)),
or sin θ1 /v1 = sin θ2 /v2 . Upon rearranging, we obtain the desired equation.
b. The second derivative of f is given by
f″(x) = 1/(v1(1 + x²)^{3/2}) + 1/(v2(1 + (d − x)²)^{3/2}).
Hence, f″(x∗) > 0, which shows that the second order sufficient condition holds.
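Snell's law can be checked numerically by locating the zero of f′ with bisection (valid since f″ > 0 makes f′ strictly increasing); v1, v2, and d below are arbitrary sample values:

```python
import math

# Snell's-law check for Exercise 6.21 (v1, v2, d are arbitrary sample values).
v1, v2, d = 1.0, 2.0, 3.0

def fprime(x):
    return x / (v1 * math.sqrt(1 + x * x)) - (d - x) / (v2 * math.sqrt(1 + (d - x) ** 2))

# Bisection on [0, d]: fprime(0) < 0 < fprime(d), and fprime is increasing.
lo, hi = 0.0, d
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if fprime(mid) < 0:
        lo = mid
    else:
        hi = mid
x_star = 0.5 * (lo + hi)

sin1 = x_star / math.sqrt(1 + x_star**2)
sin2 = (d - x_star) / math.sqrt(1 + (d - x_star) ** 2)
assert abs(sin1 / v1 - sin2 / v2) < 1e-9  # sin(theta1)/v1 = sin(theta2)/v2
```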
6.22
a. We have f (x) = U1 (x1 ) + U2 (x2 ) and Ω = {x : x1 , x2 ≥ 0, x1 + x2 ≤ 1}. A picture of Ω looks like:
[Figure: Ω is the triangle with vertices [0, 0]ᵀ, [1, 0]ᵀ, and [0, 1]ᵀ.]
b. We have ∇f (x) = [a1 , a2 ]> . Because ∇f (x) 6= 0, for all x, we conclude that no interior point satisfies
the FONC. Next, consider any feasible point x for which x2 > 0. At such a point, the vector d = [1, −1]>
is a feasible direction. But then d> ∇f (x) = a1 − a2 > 0 which means that FONC is violated (recall that
the problem is to maximize f ). So clearly the remaining candidates are those x for which x2 = 0. Among
these, if x1 < 1, then d = [0, 1]> is a feasible direction, in which case we have d> ∇f (x) = a2 > 0. This
leaves the point x = [1, 0]> . At this point, any feasible direction d satisfies d1 ≤ 0 and d2 ≤ −d1 . Hence,
for any feasible direction d, we have
d> ∇f (x) = d1 a1 + d2 a2 ≤ d1 a1 + (−d1 )a2 = d1 (a1 − a2 ) ≤ 0.
So, the only feasible point that satisfies the FONC is [1, 0]> .
c. We have F (x) = O ≤ 0. Hence, any point satisfies the SONC (again, recall that the problem is to
maximize f ).
6.23
We have
∇f(x) = [ 4(x1 − x2)³ + 2x1 − 2 ; −4(x1 − x2)³ − 2x2 + 2 ].
Setting ∇f (x) = 0 we get
4(x1 − x2 )3 + 2x1 − 2 = 0
3
−4(x1 − x2 ) − 2x2 + 2 = 0.
Adding the two equations, we obtain x1 = x2 , and substituting back yields
x1 = x2 = 1.
Hence, the only point satisfying the FONC is [1, 1]> .
We have
F(x) = [ 12(x1 − x2)² + 2   −12(x1 − x2)² ; −12(x1 − x2)²   12(x1 − x2)² − 2 ].
Hence,
F([1, 1]ᵀ) = [ 2  0 ; 0  −2 ].
Since F([1, 1]ᵀ) is not positive semidefinite, the point [1, 1]ᵀ does not satisfy the SONC.
6.24
Suppose d is a feasible direction at x. Then, there exists α0 > 0 such that x + αd ∈ Ω for all α ∈ [0, α0 ].
Let β > 0 be given. Then, x + α(βd) ∈ Ω for all α ∈ [0, α0 /β]. Since α0 /β > 0, by definition βd is also a
feasible direction at x.
6.25
⇒: Suppose d is feasible at x ∈ Ω. Then, there exists α > 0 such that x + αd ∈ Ω, that is, A(x + αd) = b.
Since Ax = b and α 6= 0, we conclude that Ad = 0.
⇐: Suppose Ad = 0. Then, for any α ∈ [0, 1], we have αAd = 0. Adding this equation to Ax = b, we
obtain A(x + αd) = b, that is, x + αd ∈ Ω for all α ∈ [0, 1]. Therefore, d is a feasible direction at x.
6.26
The vector d = [1, 1]ᵀ is a feasible direction at 0. Now,
dᵀ∇f(0) = ∂f/∂x1(0) + ∂f/∂x2(0).
Since ∇f(0) ≤ 0 and ∇f(0) ≠ 0, we have
dᵀ∇f(0) < 0.
Hence, by the FONC, 0 is not a local minimizer.
6.27
We have ∇f(x) = c ≠ 0. Therefore, for any x in the interior of Ω, we have ∇f(x) ≠ 0. Hence, by
Corollary 6.1, no interior point of Ω can be a local minimizer (and therefore no interior point can be a
solution).
6.28
The objective function is f(x) = −c1x1 − c2x2. Therefore, ∇f(x) = [−c1, −c2]ᵀ ≠ 0 for all x. Thus, by the
FONC, the optimal solution x∗ cannot lie in the interior of the feasible set. Next, for all x ∈ L1 ∪ L2,
d = [1, 1]ᵀ is a feasible direction. Therefore, dᵀ∇f(x) = −c1 − c2 < 0. Hence, by the FONC, the optimal
solution x∗ cannot lie in L1 ∪ L2. Lastly, for all x ∈ L3, d = [1, −1]ᵀ is a feasible direction. Therefore,
dᵀ∇f(x) = c2 − c1 < 0. Hence, by the FONC, the optimal solution x∗ cannot lie in L3. Therefore, by
elimination, the unique optimal feasible solution must be [1, 0]ᵀ.
6.29
a. We write
f(a, b) = (1/n) Σ_{i=1}^n (a²xi² + b² + yi² + 2xi ab − 2xi yi a − 2yi b)
 = a²((1/n) Σ_{i=1}^n xi²) + b² + 2((1/n) Σ_{i=1}^n xi)ab − 2((1/n) Σ_{i=1}^n xi yi)a − 2((1/n) Σ_{i=1}^n yi)b + (1/n) Σ_{i=1}^n yi²
 = [a  b] [ (1/n)Σxi²  (1/n)Σxi ; (1/n)Σxi  1 ] [ a ; b ] − 2[(1/n)Σxiyi, (1/n)Σyi] [ a ; b ] + (1/n)Σyi²
 = zᵀQz − 2cᵀz + d,
where z = [a; b], and Q, c, and d are defined in the obvious way.
b. If the point z∗ = [a∗, b∗]ᵀ is a solution, then by the FONC, we have ∇f(z∗) = 2Qz∗ − 2c = 0,
which means Qz∗ = c. Write X̄ = (1/n)Σ_{i=1}^n xi, Ȳ = (1/n)Σ_{i=1}^n yi, S_xx = (1/n)Σ_{i=1}^n xi², and
S_xy = (1/n)Σ_{i=1}^n xiyi. Since S_xx − (X̄)² = (1/n)Σ_{i=1}^n (xi − X̄)², and the xi are not all equal,
det Q = S_xx − (X̄)² ≠ 0. Hence, Q is nonsingular, and
z∗ = Q⁻¹c = (1/(S_xx − (X̄)²)) [ 1  −X̄ ; −X̄  S_xx ] [ S_xy ; Ȳ ] = [ (S_xy − X̄Ȳ)/(S_xx − (X̄)²) ; (S_xx Ȳ − X̄ S_xy)/(S_xx − (X̄)²) ].
Since Q > 0, then by the SOSC, the point z∗ is a strict local minimizer. Since z∗ is the only point satisfying
the FONC, z∗ is the only local minimizer.
c. We have

a*\overline{X} + b* = ((\overline{XY} − \overline{X}·\overline{Y})/(\overline{X^2} − (\overline{X})²)) \overline{X} + (\overline{X^2}·\overline{Y} − \overline{X}·\overline{XY})/(\overline{X^2} − (\overline{X})²) = \overline{Y}.
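The closed-form solution in parts b and c is easy to sanity-check numerically. Below is a small Python sketch (the helper name fit_line is ours, not from the text); for points lying exactly on a line it recovers the slope and intercept:

```python
# Least-squares line fit using the closed-form solution above, written with
# the sample averages Xbar = (1/n) sum x_i, etc.
def fit_line(xs, ys):
    n = len(xs)
    Xbar = sum(xs) / n
    Ybar = sum(ys) / n
    X2bar = sum(x * x for x in xs) / n
    XYbar = sum(x * y for x, y in zip(xs, ys)) / n
    denom = X2bar - Xbar ** 2  # nonzero when the x_i are not all equal
    a = (XYbar - Xbar * Ybar) / denom
    b = (X2bar * Ybar - Xbar * XYbar) / denom
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # points on y = 2x + 1
```

As part c states, the fitted line passes through the point of averages (X̄, Ȳ).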
6.30
Given x ∈ Rn, let

f(x) = (1/p) Σ_{i=1}^p ‖x − x(i)‖²

be the average squared error between x and x(1), . . . , x(p). We can rewrite f as

f(x) = (1/p) Σ_{i=1}^p (x − x(i))>(x − x(i)) = x>x − 2 ((1/p) Σ_{i=1}^p x(i))> x + (1/p) Σ_{i=1}^p ‖x(i)‖².

So f is a quadratic function. Since x̄ is the minimizer of f, then by the FONC, ∇f(x̄) = 0, i.e.,

2x̄ − (2/p) Σ_{i=1}^p x(i) = 0.

Hence, we get

x̄ = (1/p) Σ_{i=1}^p x(i).
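The conclusion that the sample mean minimizes the average squared distance can be checked numerically; a pure-Python sketch (the point set is an arbitrary example of ours):

```python
# Average squared distance f(x) = (1/p) * sum ||x - x_i||^2, checked at the
# centroid against nearby perturbed points.
def avg_sq_dist(x, points):
    return sum(sum((xj - pj) ** 2 for xj, pj in zip(x, p)) for p in points) / len(points)

points = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
xbar = tuple(sum(c) / len(points) for c in zip(*points))  # centroid (1.0, 1.0)
```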
xn = a x_{n−1} + b un
= a(a x_{n−2} + b u_{n−1}) + b un
= a² x_{n−2} + ab u_{n−1} + b un
⋮
= a^n x0 + a^{n−1} b u1 + · · · + ab u_{n−1} + b un
= c>u,
(Figure: plot of f(x) over the interval [1, 2].)
b. The number of steps needed for the Golden Section method is computed from the inequality:

(0.61803)^N ≤ 0.2/(2 − 1) ⇒ N ≥ 3.34.
Therefore, the fewest possible number of steps is 4. Applying 4 steps of the Golden Section method, we end
up with an uncertainty interval of [a4 , b0 ] = [1.8541, 2.000]. The table with the results of the intermediate
steps is displayed below:
c. The number of steps needed for the Fibonacci method is computed from the inequality:

(1 + 2ε)/F_{N+1} ≤ 0.2/(2 − 1) ⇒ N ≥ 4.
Therefore, the fewest possible number of steps is 4. Applying 4 steps of the Fibonacci method, we end up
with an uncertainty interval of [a4 , b0 ] = [1.8750, 2.000]. The table with the results of the intermediate steps
is displayed below:
d. We have f′(x) = 2x − 4 sin x, f″(x) = 2 − 4 cos x. Hence, Newton's algorithm takes the form:

x(k+1) = x(k) − (2x(k) − 4 sin x(k))/(2 − 4 cos x(k)).
Applying 4 iterations with x(0) = 1, we get x(1) = −7.4727, x(2) = 14.4785, x(3) = 6.9351, x(4) = 16.6354.
Apparently, Newton’s method is not effective in this case.
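The erratic iterates can be reproduced with a few lines of Python (a sketch; only f′ and f″ from part d are used):

```python
import math

# Newton's method from part d: x_{k+1} = x_k - f'(x_k)/f''(x_k),
# with f'(x) = 2x - 4 sin x and f''(x) = 2 - 4 cos x.
def newton_step(x):
    return x - (2 * x - 4 * math.sin(x)) / (2 - 4 * math.cos(x))

x, iterates = 1.0, []
for _ in range(4):
    x = newton_step(x)
    iterates.append(x)
# the first step already jumps far outside [1, 2]: x(1) ≈ -7.4727
```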
7.3
(Figure: plot of f(x) over the interval [1, 2].)
%Golden section search on [left,right]; assumes the objective f
%is defined in an M-file f.m.
left=1;
right=2;
uncert=0.23;
rho=(3-sqrt(5))/2;
N=ceil(log(uncert/(right-left))/log(1-rho)) %print N
lower=’a’;
a=left+(1-rho)*(right-left);
f_a=f(a);
for i=1:N,
if lower==’a’
b=a
f_b=f_a
a=left+rho*(right-left)
f_a=f(a)
else
a=b
f_a=f_b
b=left+(1-rho)*(right-left)
f_b=f(b)
end %if
if f_a<f_b
right=b;
lower=’a’
else
left=a;
lower=’b’
end %if
New_Interval = [left,right]
end %for i
%------------------------------------------------------------------
Using the above routine, we obtain N = 4 and a final interval of [1.528, 1.674]. The table with the results
of the intermediate steps is displayed below:
Iteration k ak bk f (ak ) f (bk ) New uncertainty interval
1 1.382 1.618 7.7247 7.6805 [1.382,2]
2 1.618 1.764 7.6805 7.6995 [1.382,1.764]
3 1.528 1.618 7.6860 7.6805 [1.528,1.764]
4 1.618 1.674 7.6805 7.6838 [1.528,1.674]
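For readers working outside MATLAB, the same golden-section loop can be sketched in Python (our own port; the function and variable names are not from the text):

```python
import math

# Golden-section search: shrink [left, right] by the factor 1 - rho = 0.61803
# per iteration, reusing one interior function value each time.
def golden_section(f, left, right, uncert):
    rho = (3 - math.sqrt(5)) / 2
    N = math.ceil(math.log(uncert / (right - left)) / math.log(1 - rho))
    a = left + rho * (right - left)
    b = left + (1 - rho) * (right - left)
    fa, fb = f(a), f(b)
    for _ in range(N):
        if fa < fb:
            right, b, fb = b, a, fa          # keep [left, b]
            a = left + rho * (right - left)
            fa = f(a)
        else:
            left, a, fa = a, b, fb           # keep [a, right]
            b = left + (1 - rho) * (right - left)
            fb = f(b)
    return left, right
```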
%Fibonacci search on [left,right]; assumes the objective f
%is defined in an M-file f.m.
left=1;
right=2;
uncert=0.23;
epsilon=0.05;
F(1)=1;
F(2)=1;
N=0;
while F(N+2) < (1+2*epsilon)*(right-left)/uncert
F(N+3)=F(N+2)+F(N+1);
N=N+1;
end %while
N %print N
lower=’a’;
a=left+(F(N+1)/F(N+2))*(right-left);
f_a=f(a);
for i=1:N,
if i~=N
rho=1-F(N+2-i)/F(N+3-i)
else
rho=0.5-epsilon
end %if
if lower==’a’
b=a
f_b=f_a
a=left+rho*(right-left)
f_a=f(a)
else
a=b
f_a=f_b
b=left+(1-rho)*(right-left)
f_b=f(b)
end %if
if f_a<f_b
right=b;
lower=’a’
else
left=a;
lower=’b’
end %if
New_Interval = [left,right]
end %for i
%------------------------------------------------------------------
Using the above routine, we obtain N = 3 and a final interval of [1.58, 1.8]. The table with the results of
the intermediate steps is displayed below:
Iteration k ρk ak bk f (ak ) f (bk ) New uncertainty interval
1 0.4 1.4 1.6 7.7179 7.6805 [1.4,2]
2 0.333 1.6 1.8 7.6805 7.7091 [1.4,1.8]
3 0.45 1.58 1.6 7.6812 7.6805 [1.58,1.8]
7.4
Now, ρk = 1 − F_{N−k+1}/F_{N−k+2}.
To show that 0 ≤ ρk ≤ 1/2, we proceed by induction. Clearly ρ1 = 1/2 satisfies 0 ≤ ρ1 ≤ 1/2. Suppose
0 ≤ ρk ≤ 1/2, where k ∈ {1, . . . , N − 1}. Then,
1
≤ 1 − ρk ≤ 1
2
and hence
1
1≤ ≤ 2.
1 − ρk
Therefore,
1 ρk
≤ ≤ 1.
2 1 − ρk
ρk
Since ρk+1 = 1 − 1−ρk , then
1
0 ≤ ρk+1 ≤
2
as required.
7.5
We proceed by induction. For k = 2, we have F0 F3 − F1 F2 = (1)(3) − (1)(2) = 1 = (−1)2 . Suppose
Fk−2 Fk+1 − Fk−1 Fk = (−1)k . Then,
7.6
Define yk = Fk and zk = Fk−1. Then, we have

[y_{k+1}; z_{k+1}] = A [y_k; z_k],

where

A = [1, 1; 1, 0],

with initial condition

[y_0; z_0] = [1; 0].

We can write

F_n = y_n = [1, 0] [y_n; z_n] = [1, 0] A^n [1; 0].

Since A is symmetric, it can be diagonalized as

A = [u, v] [λ1, 0; 0, λ2] [u, v]>,

where

λ1 = (1 + √5)/2,  λ2 = (1 − √5)/2,

and u, v are the corresponding unit eigenvectors,

u = −(1/5^{1/4}) [√(2/(√5 − 1)); √((√5 − 1)/2)],  v = (1/5^{1/4}) [√((√5 − 1)/2); −√(2/(√5 − 1))].

Therefore, we have

F_n = [1, 0] (λ1^n uu> + λ2^n vv>) [1; 0] = u1² λ1^n + v1² λ2^n = (1/√5) [((1 + √5)/2)^{n+1} − ((1 − √5)/2)^{n+1}].
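The closed form can be verified directly against the recurrence (Python sketch; note the F0 = F1 = 1 indexing used here):

```python
import math

# Binet-style closed form for the shifted Fibonacci numbers F_0 = F_1 = 1:
# F_n = (lambda1^{n+1} - lambda2^{n+1}) / sqrt(5).
def binet(n):
    lam1 = (1 + math.sqrt(5)) / 2
    lam2 = (1 - math.sqrt(5)) / 2
    return (lam1 ** (n + 1) - lam2 ** (n + 1)) / math.sqrt(5)

F = [1, 1]
while len(F) < 25:
    F.append(F[-1] + F[-2])
```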
7.7
The number log(2) is the root of the equation g(x) = 0, where g(x) = exp(x) − 2. The derivative of g is g′(x) = exp(x). Newton's method applied to this root-finding problem is

x(k+1) = x(k) − (exp(x(k)) − 2)/exp(x(k)) = x(k) − 1 + 2 exp(−x(k)).
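A quick Python check of this iteration (a sketch):

```python
import math

# Newton iteration for the root of exp(x) - 2: x_{k+1} = x_k - 1 + 2 exp(-x_k).
x = 1.0
for _ in range(10):
    x = x - 1 + 2 * math.exp(-x)
# converges rapidly to log(2)
```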
b. By symmetry, we need x(1) = −x(0) for cycling. Therefore, x(0) must satisfy
c. The algorithm converges to 0 if and only if |x(0) | < c, where c is from part b.
7.9
The quadratic function that matches the given data x(k) , x(k−1) , x(k−2) , f (x(k) ), f (x(k−1) ), and f (x(k−2) )
can be computed by solving the following three linear equations for the parameters a, b, and c:
Then, the algorithm is given by x(k+1) = −b/(2a) (so, in fact, we only need to find the ratio of a and b).
With some elementary algebra (e.g., using Cramer’s rule without needing to calculate the determinant in
the denominator), the algorithm can be written as:
if nargin < 4
uncert=10^(-5);
if nargin < 3
if nargin == 1
xcurr=0;
xnew=1;
elseif nargin == 0
g=’g’;
else
disp(’Cannot have 2 arguments.’);
return;
end
end
end
g_curr=feval(g,xcurr);
while abs(xnew-xcurr)>xcurr*uncert,
xold=xcurr;
xcurr=xnew;
g_old=g_curr;
g_curr=feval(g,xcurr);
xnew=(g_curr*xold-g_old*xcurr)/(g_curr-g_old);
end %while
function alpha=linesearch_secant(grad,x,d)
%Line search using secant method
epsilon=10^(-4); %line search tolerance
max=100; %maximum number of iterations
alpha_curr=0;
alpha=0.001;
dphi_zero=feval(grad,x)'*d;
dphi_curr=dphi_zero;
i=0;
while abs(dphi_curr)>epsilon*abs(dphi_zero),
alpha_old=alpha_curr;
alpha_curr=alpha;
dphi_old=dphi_curr;
dphi_curr=feval(grad,x+alpha_curr*d)’*d;
alpha=(dphi_curr*alpha_old-dphi_old*alpha_curr)/(dphi_curr-dphi_old);
i=i+1;
if (i >= max) & (abs(dphi_curr)>epsilon*abs(dphi_zero)),
disp(’Line search terminating with number of iterations:’);
disp(i);
break;
end
end %while
%------------------------------------------------------------------
7.12
a. We could carry out the bracketing using the one-dimensional function φ0 (α) = f (x(0) + αd(0) ), where
d(0) is the negative gradient at x(0) , as described in Section 7.8. The decision variable would be α. However,
here we will directly represent the points in R2 (which is equivalent, though unnecessary in general).
The uncertainty interval is calculated by the following procedure. We have

f(x) = (1/2) x> [2, 1; 1, 2] x,  ∇f(x) = [2, 1; 1, 2] x.
We can see f(a1) > f(b1), hence the uncertainty interval is reduced to:

[a1, b0] = [[0.3655, −0.3466]>, [−0.0100, −0.4300]>].

So, calculating the norm of (b0 − a1), we see that the uncertainty region width is now 0.38461.
Iteration k  ak  bk  f(ak)  f(bk)  New uncertainty interval
1  [0.3655, −0.3466]>  [0.2220, −0.3784]>  0.1270  0.1085  [[0.3655, −0.3466]>, [−0.0100, −0.4300]>]
2  [0.2220, −0.3784]>  [0.1334, −0.3981]>  0.1085  0.1232  [[0.3655, −0.3466]>, [0.1334, −0.3981]>]
3  [0.2768, −0.3663]>  [0.2220, −0.3784]>  0.1094  0.1085  [[0.2768, −0.3663]>, [0.1334, −0.3981]>]
4  [0.2220, −0.3784]>  [0.1882, −0.3860]>  0.1085  0.1117  [[0.2768, −0.3663]>, [0.1882, −0.3860]>]
5  [0.2430, −0.3738]>  [0.2220, −0.3784]>  0.1079  0.1085  [[0.2768, −0.3663]>, [0.2220, −0.3784]>]
6  [0.2559, −0.3709]>  [0.2430, −0.3738]>  0.1081  0.1079  [[0.2559, −0.3709]>, [0.2220, −0.3784]>]
7  [0.2430, −0.3738]>  [0.2350, −0.3756]>  0.1079  0.1080  [[0.2559, −0.3709]>, [0.2350, −0.3756]>]
8  [0.2479, −0.3727]>  [0.2430, −0.3738]>  0.1080  0.1079  [[0.2479, −0.3727]>, [0.2350, −0.3756]>]
9  [0.2430, −0.3738]>  [0.2399, −0.3745]>  0.1079  0.1079  [[0.2479, −0.3727]>, [0.2399, −0.3745]>]
We can now see that the minimizer is located within [[0.2479, −0.3727]>, [0.2399, −0.3745]>], and its uncertainty interval width is 0.00819.
We show the first iteration of the algorithm; the rest are analogous and shown in the following table.
From part a, we know that [a0 , b0 ] = [x(2) , x(4) ], then:
[a0, b0] = [[0.5975, −0.2950]>, [−0.0100, −0.4300]>], with f(a0) = 0.2678, f(b0) = 0.1893.

Recall that in the Fibonacci method, ρ1 = 1 − F_N/F_{N+1} = 1 − 55/89 = 0.3820. Then,

a1 = a0 + ρ1(b0 − a0) = [0.3654, −0.3466]>
b1 = a0 + (1 − ρ1)(b0 − a0) = [0.2221, −0.3784]>
f(a1) = 0.1270
f(b1) = 0.1085
We can see f(a1) > f(b1), hence the uncertainty interval is reduced to:

[a1, b0] = [[0.3654, −0.3466]>, [−0.0100, −0.4300]>].

So, calculating the norm of (b0 − a1), we see that the uncertainty region width is now 0.38458.
k  ρk  ak  bk  f(ak)  f(bk)  New uncertainty interval
4  0.3810  [0.2221, −0.3784]>  [0.1879, −0.3860]>  0.1085  0.1118  [[0.2767, −0.3663]>, [0.1879, −0.3860]>]
5  0.3846  [0.2426, −0.3739]>  [0.2221, −0.3784]>  0.1079  0.1085  [[0.2767, −0.3663]>, [0.2221, −0.3784]>]
6  0.3750  [0.2562, −0.3708]>  [0.2426, −0.3739]>  0.1082  0.1079  [[0.2562, −0.3708]>, [0.2221, −0.3784]>]
7  0.4000  [0.2426, −0.3739]>  [0.2357, −0.3754]>  0.1079  0.1080  [[0.2562, −0.3708]>, [0.2357, −0.3754]>]
8  0.3333  [0.2494, −0.3724]>  [0.2426, −0.3739]>  0.1080  0.1079  [[0.2494, −0.3724]>, [0.2357, −0.3754]>]
9  0.4500  [0.2426, −0.3739]>  [0.2419, −0.3740]>  0.1079  0.1079  [[0.2494, −0.3724]>, [0.2419, −0.3740]>]
We can now see that the minimizer is located within [[0.2494, −0.3724]>, [0.2419, −0.3740]>], and its uncertainty interval width is 0.00769.
8. Gradient Methods
8.1
The function f is a quadratic and so we can represent it in standard form as

f(x) = (1/2) x> [1, 0; 0, 2] x − x> [−1; −1/2] + 3 = (1/2) x>Qx − x>b + c.
With x(0) = 0, we have g(0) = Qx(0) − b = [1; 1/2], and

α0 = g(0)>g(0)/(g(0)>Qg(0)) = 5/6.
Hence,

x(1) = −α0 g(0) = −(5/6) [1; 1/2] = [−5/6; −5/12].

The second iteration is

x(2) = x(1) − α1 ∇f(x(1)),

where

∇f(x(1)) = g(1) = Qx(1) − b = [1/6; −1/3],

and

α1 = g(1)>g(1)/(g(1)>Qg(1)) = 5/9.

Hence,

x(2) = x(1) − α1 g(1) = [−5/6; −5/12] − (5/9) [1/6; −1/3] = [−25/27; −25/108].
The optimal solution is x* = [−1, −1/4]>, obtained by solving the equation Qx = b.
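The two iterations can be checked in exact arithmetic; a Python sketch using fractions (taking x(0) = 0, consistent with x(1) = −α0 g(0) above):

```python
from fractions import Fraction as Fr

# Steepest descent on f = (1/2) x'Qx - x'b + c with Q = diag(1, 2), b = [-1, -1/2],
# using the exact step size alpha_k = g'g / g'Qg.
Q = [[Fr(1), Fr(0)], [Fr(0), Fr(2)]]
b = [Fr(-1), Fr(-1, 2)]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x = [Fr(0), Fr(0)]
steps = []
for _ in range(2):
    g = [Qx - bi for Qx, bi in zip(matvec(Q, x), b)]
    alpha = dot(g, g) / dot(g, matvec(Q, g))
    x = [xi - alpha * gi for xi, gi in zip(x, g)]
    steps.append((alpha, list(x)))
# reproduces alpha_0 = 5/6, x(1) = [-5/6, -5/12], alpha_1 = 5/9, x(2) = [-25/27, -25/108]
```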
8.2
Let s be the order of convergence of {x(k) }. Suppose there exists c > 0 such that for all k sufficiently large,
‖x(k+1) − x*‖ ≥ c‖x(k) − x*‖^p.

Then,

‖x(k+1) − x*‖/‖x(k) − x*‖^s = (‖x(k+1) − x*‖/‖x(k) − x*‖^p) · (1/‖x(k) − x*‖^{s−p}) ≥ c/‖x(k) − x*‖^{s−p}.

Taking limits yields

lim_{k→∞} ‖x(k+1) − x*‖/‖x(k) − x*‖^s ≥ c / lim_{k→∞} ‖x(k) − x*‖^{s−p}.
Since by definition s is the order of convergence,

lim_{k→∞} ‖x(k+1) − x*‖/‖x(k) − x*‖^s < ∞.
Therefore, since limk→∞ kx(k) − x∗ k = 0, we conclude that s ≤ p, i.e., the order of convergence is at most p.
8.3
We use contradiction. Suppose x(k) → x* and

lim_{k→∞} ‖x(k+1) − x*‖/‖x(k) − x*‖^p > 0

for some p < 1. We may assume that x(k) ≠ x* for an infinite number of k (for otherwise, by convention, the ratio above is eventually 0). Fix ε > 0. Then, there exists K1 such that for all k ≥ K1,

‖x(k+1) − x*‖/‖x(k) − x*‖^p > ε.

Equivalently,

‖x(k+1) − x*‖/‖x(k) − x*‖ > ε/‖x(k) − x*‖^{1−p}.

Because x(k) → x* and p < 1, we have ‖x(k) − x*‖^{1−p} → 0. Hence, there exists K2 such that for all k ≥ K2, ‖x(k) − x*‖^{1−p} < ε. Combining this inequality with the previous one yields

‖x(k+1) − x*‖/‖x(k) − x*‖ > 1,

which contradicts the assumption that x(k) → x*.
8.5
a. We have
x(k) = ax(k−1)
= a · ax(k−2)
= a2 x(k−2)
..
.
= ak x(0) .
Similarly,

y(k) = (y(k−1))^b = ((y(k−2))^b)^b = (y(k−2))^{b²} = · · · = (y(0))^{b^k}.

Hence,

lim_{k→∞} |x(k+1)|/|x(k)| = lim_{k→∞} a = a,

lim_{k→∞} |y(k+1)|/|y(k)|^b = lim_{k→∞} 1 = 1,

lim_{k→∞} |u_{k+1}|/|u_k| = 1 − ρ > 0.
c. Let x̃ = limk→∞ x(k) . Taking limits of both sides of x(k+1) = x(k) − α(ax(k) − b) (from part b), we get
x̃ = x̃ − α(ax̃ − b).
Let z (k) = |1 − αa||x(k) − b/a|1−p . Note that z (k) converges to a finite nonzero number if and only if p = 1
(if p < 1, then z (k) → 0, and if p > 1, then z (k) → ∞). Therefore, the order of convergence of {x(k) } is 1,
e. Let y (k) = |x(k) − b/a|. From part d, after some manipulation we obtain
The sequence {x(k) } converges (to b/a) if and only if y (k) → 0. This holds if and only if |1 − αa| < 1, which
is equivalent to 0 < α < 2/a.
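This convergence condition is easy to confirm numerically; a Python sketch (the values a = 2, b = 4 are our own illustrative choice, so 2/a = 1):

```python
# Iterate x_{k+1} = x_k - alpha (a x_k - b); the error scales as |1 - alpha*a|^k,
# so the sequence converges to b/a exactly when 0 < alpha < 2/a.
def run(alpha, a=2.0, b=4.0, x=10.0, iters=200):
    for _ in range(iters):
        x = x - alpha * (a * x - b)
    return x
```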
8.8
We rewrite f as f(x) = (1/2) x>Qx − b>x, where

Q = [6, 4; 4, 6].
The characteristic polynomial of Q is λ² − 12λ + 20. Hence, the eigenvalues of Q are 2 and 10. Therefore,
the largest range of values of α for which the algorithm is globally convergent is 0 < α < 2/10.
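The bound 2/λmax = 1/5 can be verified by iterating the fixed-step gradient algorithm on f; a Python sketch (b = [1, 1]> is an arbitrary choice of ours for illustration, since only Q governs convergence):

```python
# Fixed-step gradient on f = (1/2) x'Qx - b'x, Q = [[6, 4], [4, 6]] (eigenvalues 2, 10).
# Globally convergent iff 0 < alpha < 2/10; here x* = Q^{-1} b = [0.1, 0.1].
def iterate(alpha, iters=400):
    x, b = [1.0, -1.0], [1.0, 1.0]
    for _ in range(iters):
        g = [6 * x[0] + 4 * x[1] - b[0], 4 * x[0] + 6 * x[1] - b[1]]
        x = [x[0] - alpha * g[0], x[1] - alpha * g[1]]
    return x
```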
8.9
a. We can write h(x) = Qx − b, where b = [−4, −1]> and

Q = [3, 2; 2, 3].
b. By part a, the algorithm is a fixed-step-size gradient algorithm for a problem with gradient h. The
eigenvalues of Q are 1 and 5. Hence, the largest range of values of α such that the algorithm is globally
convergent to the solution is 0 < α < 2/5.
c. The eigenvectors of Q corresponding to the eigenvalue 5 have the form c[1, 1]>, where c ∈ R. Hence, to violate the descent property, we pick

x(0) = Q−1 b + c[1, 1]> = [−3; 0],
where we choose c = −1 so that x(0) has the specified form.
8.10
a. We have

f(x) = (1/2) x> [3, 1 + a; 1 + a, 3] x − x> [1; 1] + b.
b. The unique global minimizer exists if and only if the Hessian is positive definite, which holds if and only
if (1 + a)2 < 9 (by Sylvester’s criterion). Hence, the largest set of values of a and b such that the global
minimizer of f exists is given by −4 < a < 2 and b ∈ R (unrestricted).
The minimizer is given by

x* = (1/(9 − (1 + a)²)) [3, −(1 + a); −(1 + a), 3] [1; 1] = ((3 − (1 + a))/(9 − (1 + a)²)) [1; 1] = (1/(4 + a)) [1; 1].
c. The algorithm is a gradient algorithm with fixed step size α = 2/5. The eigenvalues of the Hessian are
(after some calculations) 4 + a and 2 − a. For global convergence, we need 2/5 < 2/λmax , or λmax < 5,
where λmax = max(4 + a, 2 − a). From this we deduce that −3 < a < 1. Hence, the largest set of values of
a and b such that the algorithm is globally convergent is given by −3 < a < 1 and b ∈ R (unrestricted).
8.11
a. We have
b. We have x(k) → c if and only if f(x(k)) → 0. Hence, the algorithm is globally convergent if and only if f(x(k)) → 0 for any x(0). From part a, we deduce that f(x(k)) → 0 for any x(0) if and only if ∏_{k=0}^∞ (1 − αk)² = 0. Because 0 < αk < 1, this condition is equivalent to ∏_{k=0}^∞ (1 − αk) = 0, which holds if and only if

Σ_{k=0}^∞ αk = ∞.
8.12
The only local minimizer of f is x* = 1/√3. Indeed, we have f′(x*) = 0 and f″(x*) = 2√3. To find the
largest range of values of α such that the algorithm is locally convergent, we use a “linearization” argument:
The algorithm is locally convergent if and only if the “linearized” algorithm x(k+1) = x(k) −αf 00 (x∗ )(x(k) −x∗ )
is globally convergent. But the linearized algorithm is just a fixed step size algorithm applied to a quadratic
with second derivative f″(x*). Therefore, the largest range of values of α such that the algorithm is locally convergent is 0 < α < 2/f″(x*) = 1/√3.
8.13
We use the formula from Lemma 8.1:
f(x(k+1)) = (1 − γk) f(x(k))

(we have V = f in this case). Using the expression for γk, we get, assuming x(k) ≠ 1,

γk = 4α·2^{−k}(1 − α·2^{−k}).

Hence, γk > 0, which means that f(x(k+1)) < f(x(k)) if x(k) ≠ 1 for k ≥ 0. This implies that the algorithm has the descent property (for k ≥ 0).
We also note that

Σ_{k=0}^∞ γk = 4α Σ_{k=0}^∞ 2^{−k} − 4α² Σ_{k=0}^∞ 4^{−k} = 4α(2 − 4α/3) < ∞.
Since γk > 0 for all k ≥ 0, we can apply the theorem given in class to deduce that the algorithm is not
globally convergent.
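The finiteness of the sum, which is what blocks global convergence here, can be checked numerically; a Python sketch with α = 0.5:

```python
# gamma_k = 4 alpha 2^{-k} (1 - alpha 2^{-k}); summing the two geometric series
# gives sum_k gamma_k = 4 alpha (2 - 4 alpha / 3), a finite number.
alpha = 0.5
closed_form = 4 * alpha * (2 - 4 * alpha / 3)
partial = sum(4 * alpha * 2 ** (-k) * (1 - alpha * 2 ** (-k)) for k in range(60))
```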
8.14
We have

|x(k+1) − x*| = |x(k) − x* − f′(x(k))/f″(x*)|.

By Taylor's Theorem,

|x(k+1) − x*| = O(|x(k) − x*|²),
which is a quadratic function. The gradient is given by ∇f (x) = 2(A> A)x − 2(A> b) and the Hessian is
given by F (x) = 2(A> A).
b. The fixed step size gradient algorithm for solving the above optimization problem is given by
c. The largest range of values for α such that the algorithm in part b converges to the solution of the problem
is given by
0 < α < 2/λmax(2A>A) = 1/4.
8.17
a. We use contraposition. Suppose an eigenvalue of A is negative: Av = λv, where λ < 0 and v is a
corresponding eigenvector. Choose x(0) = v + x∗ . Then,
8.19
a. Possible. Pick f such that λmax ≥ 2λmin and x(0) such that g (0) is an eigenvector of Q with eigenvalue
λmin . Then,
α0 = g(0)>g(0)/(g(0)>Qg(0)) = 1/λmin ≥ 2/λmax.

α0 = g(0)>g(0)/(g(0)>Qg(0)) ≤ 1/λmin.
8.20
a. We rewrite f as
1 >
f (x) = x Qx − b> x − 22,
2
where

Q = [3, 2; 2, 3],  b = [−3; 1].
The eigenvalues of Q are 1 and 5. Therefore, the range of values of the step size for which the algorithm
converges to the minimizer is 0 < α < 2/5.
b. An eigenvector of Q corresponding to the eigenvalue 5 is v = [1, 1]> /5. We have x∗ = Q−1 b = [−11, 9]> /5.
Hence, an initial condition that results in the algorithm diverging is

x(0) = x* + v = [−2; 2].
8.21
In both cases, we compute the Hessian Q of f , and find its largest eigenvalue λmax . Then the range we seek
is 0 < α < 2/λmax .
a. In this case,

Q = [6, 4; 4, 6],

with eigenvalues 2 and 10. Hence, the answer is 0 < α < 1/5.
b. In this case, again we have

Q = [6, 4; 4, 6],

with eigenvalues 2 and 10. Hence, the answer is 0 < α < 1/5.
8.22
For the given algorithm we have
γk = β(2 − β) (g(k)>g(k))² / ((g(k)>Qg(k))(g(k)>Q−1g(k))).
By Lemma 8.1, V(x(k)) ≥ V(x(0)). Hence, if x(0) ≠ x*, then {V(x(k))} does not converge to 0, and consequently x(k) does not converge to x*.
8.23
By Lemma 8.1, V (x(k+1) ) = (1 − γk )V (x(k) ) for all k. Note that the algorithm has a descent property if
and only if V (x(k+1) ) < V (x(k) ) whenever g (k) 6= 0. Clearly, whenever g (k) 6= 0, V (x(k+1) ) < V (x(k) ) if and
only if 1 − γk < 1. The desired result follows immediately.
8.24
We have

x(k+1) − x(k) = αk d(k),

and hence

⟨x(k+1) − x(k), ∇f(x(k+1))⟩ = αk ⟨d(k), ∇f(x(k+1))⟩.

Now, let φk(α) = f(x(k) + αd(k)). Since αk minimizes φk, then by the FONC, φ′k(αk) = 0. By the chain rule, φ′k(α) = d(k)>∇f(x(k) + αd(k)). Hence,

0 = φ′k(αk) = d(k)>∇f(x(k+1)),

and so

⟨x(k+1) − x(k), ∇f(x(k+1))⟩ = 0.
8.25
A simple MATLAB routine for implementing the steepest descent method is as follows.
function [x,N]=steep_desc(grad,xnew,options);
% STEEP_DESC(’grad’,x0);
% STEEP_DESC(’grad’,x0,OPTIONS);
%
% x = STEEP_DESC(’grad’,x0);
% x = STEEP_DESC(’grad’,x0,OPTIONS);
%
% [x,N] = STEEP_DESC(’grad’,x0);
% [x,N] = STEEP_DESC(’grad’,x0,OPTIONS);
%
%The first variant finds the minimizer of a function whose gradient
%is described in grad (usually an M-file: grad.m), using a gradient
%descent algorithm with initial point x0. The line search used is the
%secant method.
%The second variant allows a vector of optional parameters to be
%defined. OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required of the gradient.
%OPTIONS(14) is the maximum number of iterations.
%For more information type HELP FOPTIONS.
%
%The next two variants return the value of the final point.
%The last two variants return a vector of the final point and the
%number of iterations.
if nargin ~= 3
options = [];
if nargin ~= 2
disp(’Wrong number of arguments.’);
return;
end
end
if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end
clc;
format compact;
format short e;
options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);
for k = 1:max_iter,
xcurr=xnew;
g_curr=feval(grad,xcurr);
if norm(g_curr) <= epsilon_g
disp(’Terminating: Norm of gradient less than’);
disp(epsilon_g);
k=k-1;
break;
end %if
alpha=linesearch_secant(grad,xcurr,-g_curr);
xnew = xcurr-alpha*g_curr;
if print,
disp(’Iteration number k =’)
disp(k); %print iteration index k
disp(’alpha =’);
disp(alpha); %print alpha
disp(’Gradient = ’);
disp(g_curr’); %print gradient
disp(’New point =’);
disp(xnew’); %print new point
end %if
if k == max_iter
disp(’Terminating with maximum number of iterations’);
end %if
end %for
if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp(’Final point =’);
disp(xnew’);
disp(’Number of iterations =’);
disp(k);
end %if
%------------------------------------------------------------------
To apply the above MATLAB routine to the function in Example 8.1, we need the following M-file to
specify the gradient.
function y=g(x)
%gradient of f(x)=(x(1)-4)^4+(x(2)-3)^2+4*(x(3)+5)^4 (Example 8.1)
y=[4*(x(1)-4)^3; 2*(x(2)-3); 16*(x(3)+5)^3];
>> steep_desc(’g’,[-4;5;1],options)
As we can see above, we obtained the final point [4.002, 3.000, −4.996]> after 25 iterations. The value of
the objective function at the final point is 7.2 × 10−10 .
8.26
The algorithm terminated after 9127 iterations. The final point was [0.99992, 0.99983]> .
9. Newton’s Method
9.1
a. We have f′(x) = 4(x − x0)³ and f″(x) = 12(x − x0)². Hence, Newton's method is represented as

x(k+1) = x(k) − (x(k) − x0)/3,

which upon rewriting becomes

x(k+1) − x0 = (2/3)(x(k) − x0).
Because {αk } converges, it is bounded, and so (1 + αk )o(|x(k) − x∗ |) = o(|x(k) − x∗ |). Combining the above
with the first equation, we get
|x(k+1) − x∗ | = o(|x(k) − x∗ |),
which implies that the order of convergence is superlinear.
b. In the secant algorithm, if x(k) → x*, then (f′(x(k)) − f′(x(k−1)))/(x(k) − x(k−1)) → f″(x*). Since the secant algorithm has the form x(k+1) = x(k) − αk f′(x(k)) with αk = (x(k) − x(k−1))/(f′(x(k)) − f′(x(k−1))), we deduce that αk → 1/f″(x*). Hence, if we apply the secant algorithm to a function f ∈ C², and it converges to a local minimizer x* such that f″(x*) ≠ 0, then the order of convergence is superlinear.
9.3
a. We compute f′(x) = 4x^{1/3}/3 and f″(x) = 4x^{−2/3}/9. Therefore, Newton's algorithm for this problem takes the form

x(k+1) = x(k) − (4(x(k))^{1/3}/3)/(4(x(k))^{−2/3}/9) = x(k) − 3x(k) = −2x(k).

b. From part a, we have x(k) = (−2)^k x(0). Therefore, as long as x(0) ≠ 0, the sequence {x(k)} does not converge to 0.
9.4
a. Clearly f(x) ≥ 0 for all x, with f([1, 1]>) = 0. Hence, f(x) > f([1, 1]>) for all x ≠ [1, 1]>, and therefore [1, 1]> is the unique global minimizer.
b. We compute

∇f(x) = [400x1³ − 400x1x2 + 2x1 − 2; 200(x2 − x1²)],
F(x) = [1200x1² − 400x2 + 2, −400x1; −400x1, 200].
Applying two iterations of Newton’s method, we have x(1) = [1, 0]> , x(2) = [1, 1]> . Therefore, in this
particular case, the method converges in two steps! We emphasize, however, that this fortuitous situation is
by no means typical, and is highly dependent on the initial condition.
c. Applying the gradient algorithm x(k+1) = x(k) − αk ∇f (x(k) ) with a fixed step size of αk = 0.05, we
obtain x(1) = [0.1, 0]> , x(2) = [0.17, 0.1]> .
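The two-step convergence in part b can be verified directly; a Python sketch, assuming the initial point is x(0) = [0, 0]> (consistent with x(1) = [1, 0]> above):

```python
# One Newton step x <- x - F(x)^{-1} grad f(x) for f = 100 (x2 - x1^2)^2 + (1 - x1)^2,
# solving the 2x2 system by Cramer's rule.
def newton_step(x1, x2):
    g1 = 400 * x1 ** 3 - 400 * x1 * x2 + 2 * x1 - 2
    g2 = 200 * (x2 - x1 ** 2)
    a, b, c = 1200 * x1 ** 2 - 400 * x2 + 2, -400 * x1, 200.0
    det = a * c - b * b
    d1 = (c * g1 - b * g2) / det
    d2 = (a * g2 - b * g1) / det
    return x1 - d1, x2 - d2

x = (0.0, 0.0)
x = newton_step(*x)   # -> (1.0, 0.0)
x = newton_step(*x)   # -> (1.0, 1.0)
```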
9.5
If x(0) = x∗ , we are done. So, assume x(0) 6= x∗ . Since the standard Newton’s method reaches the point x∗
in one step, we have
Hence, in this case, the modified Newton’s algorithm is equivalent to the standard Newton’s algorithm, and
thus x(1) = x∗ .
So, clearly d(k+1) is a nonzero linear combination of p(0), . . . , p(k+1).
10.2
Let k ∈ {0, . . . , n − 1} and φk(α) = f(x(k) + αd(k)). By the chain rule, we have

φ′k(αk) = ∇f(x(k) + αk d(k))>d(k) = g(k+1)>d(k).

Since g(k+1)>d(k) = 0, we have φ′k(αk) = 0. Note that

φk(α) = (1/2) d(k)>Qd(k) α² + g(k)>d(k) α + constant.

As φk is a quadratic function of α (with positive coefficient in the quadratic term), we conclude that αk = arg min_α f(x(k) + αd(k)).

Note that since g(k)>d(k) ≠ 0 is the coefficient of the linear term in φk, we have αk ≠ 0. For i ∈ {0, . . . , k − 1}, we have
d(k)>Qd(i) = (1/αk)(x(k+1) − x(k))>Qd(i)
= (1/αk)(g(k+1) − g(k))>d(i)
= (1/αk)(g(k+1)>d(i) − g(k)>d(i))
= 0
by assumption, which completes the proof.
10.3
From the conjugate gradient algorithm we have

d(k) = −g(k) + (g(k)>Qd(k−1)/(d(k−1)>Qd(k−1))) d(k−1).

Premultiplying the above by d(k)>Q and using the fact that d(k) and d(k−1) are Q-conjugate yields

d(k)>Qd(k) = −d(k)>Qg(k) + (g(k)>Qd(k−1)/(d(k−1)>Qd(k−1))) d(k)>Qd(k−1) = −d(k)>Qg(k).
10.4
a. Since Q is symmetric, there exists a set of vectors {d(1), . . . , d(n)} such that Qd(i) = λi d(i), i = 1, . . . , n, and d(i)>d(j) = 0 for j ≠ i, where the λi are (real) eigenvalues of Q. Therefore, if i ≠ j, we have d(i)>Qd(j) = d(i)>(λj d(j)) = λj(d(i)>d(j)) = 0. Hence, the set {d(1), . . . , d(n)} is Q-conjugate.
b. Define λi = (d(i)>Qd(i))/(d(i)>d(i)). Let D be the matrix whose rows are d(1)>, . . . , d(n)>:

D = [d(1)>; . . . ; d(n)>].
Since Q is positive definite and the set {d(1) , . . . , d(n) } is Q-conjugate, then by Lemma 10.1, the set is also
linearly independent. Hence, D is nonsingular. By Q-conjugacy, we have that for all i 6= j, d(i)> Qd(j) = 0.
By assumption, we have d(i)> λj d(j) = λj d(i)> d(j) = 0. Hence, d(i)> Qd(j) = λj d(i)> d(j) . Moreover, for
each i = 1, . . . , n, we have d(i)> Qd(i) = d(i)> λi d(i) = λi d(i)> d(i) . We can write the above conditions in
matrix form:
DQd(i) = D(λi d(i) ).
Since D is nonsingular, then we have
Qd(i) = λi d(i) ,
which completes the proof.
10.5
We have

d(k)>Qd(k+1) = γk d(k)>Qg(k+1) + d(k)>Qd(k).

Hence, in order to have d(k)>Qd(k+1) = 0, we need

γk = −(d(k)>Qd(k))/(d(k)>Qg(k+1)).
10.6
We use induction. For k = 0, we have
Moreover, x(0) = 0 ∈ V0 . Hence, the proposition is true at k = 0. Assume it is true at k. To show that it is
also true at k + 1, note first that
x(k+1) = x(k) + αk d(k) .
Because x(k) ∈ Vk ⊂ Vk+1 and d(k) ∈ Vk+1 by the induction hypothesis, we deduce that x(k+1) ∈ Vk+1 .
Moreover,
But because x(k+1) ∈ Vk+1 , Qx(k+1) − b ∈ Vk+2 . Moreover, d(k) ∈ Vk+1 ⊂ Vk+2 . Hence, d(k+1) ∈ Vk+2 .
This completes the induction proof.
b. The conjugate gradient algorithm is an instance of the algorithm given in the question. By the “expanding
subspace” theorem, we can say that in the conjugate gradient algorithm (with x(0) = 0), at each k, x(k) is
the global minimizer of f on the Krylov subspace Vk . Note that for all k ≥ n, Vk+1 = Vk , because of the
Cayley-Hamilton theorem, which allows us to express Qn as a linear combination of I, Q, . . . , Qn−1 .
10.7
Expanding φ(a) yields

φ(a) = (1/2)(x0 + Da)>Q(x0 + Da) − (x0 + Da)>b
= (1/2) a>(D>QD)a + a>(D>Qx0 − D>b) + ((1/2) x0>Qx0 − x0>b).
Clearly φ is a quadratic function on Rr . It remains to show that the matrix in the quadratic term, D > QD,
is positive definite. Since Q > 0, for any a ∈ Rr , we have
a> D > QD a = (Da)> Q(Da) ≥ 0
and
a> D > QD a = (Da)> Q(Da) = 0
if and only if Da = 0. Since rank D = r, Da = 0 if and only if a = 0. Hence, the matrix D > QD is positive
definite.
10.8
a. Let 0 ≤ k ≤ n − 1 and 0 ≤ i ≤ k. Then,
by Lemma 10.2.
b. Let 0 ≤ k ≤ n − 1 and 0 ≤ i ≤ k − 1. Then,
g (k+1)T Qg (i)
= (βk d(k) − d(k+1) )T Q(βi−1 d(i−1) − d(i) )
= βk βi−1 d(k)T Qd(i−1) − βk d(k)T Qd(i) − βi−1 d(k+1)T Qd(i−1) + d(k+1)T Qd(i)
= 0
by Q-conjugacy of d(k+1) , d(k) , d(i) and d(i−1) (note that the iteration indices here are all distinct).
10.9
We represent f as

f(x) = (1/2) x> [5, −3; −3, 2] x − x> [0; 1] − 7.
The conjugate gradient algorithm is based on the following formulas:

x(k+1) = x(k) + αk d(k),  αk = −g(k)>d(k)/(d(k)>Qd(k)),
d(k+1) = −g(k+1) + βk d(k),  βk = g(k+1)>Qd(k)/(d(k)>Qd(k)).

With x(0) = 0, we have

d(0) = −g(0) = −(Qx(0) − b) = b = [0; 1].

We then proceed to compute

α0 = −g(0)>d(0)/(d(0)>Qd(0)) = −(−1)/2 = 1/2,

so that x(1) = x(0) + α0 d(0) = [0; 1/2] and g(1) = Qx(1) − b = [−3/2; 0]. Because the gradient is nonzero, we can proceed with the next step, where we compute

β0 = g(1)>Qd(0)/(d(0)>Qd(0)) = (9/2)/2 = 9/4.

Hence,

d(1) = −g(1) + β0 d(0) = [3/2; 0] + (9/4)[0; 1] = [3/2; 9/4].

It is easy to verify that the directions d(0) and d(1) are Q-conjugate. Indeed,

d(0)>Qd(1) = [0, 1] [5, −3; −3, 2] [3/2; 9/4] = [0, 1] [3/4; 0] = 0.
10.10
a. We have f(x) = (1/2) x>Qx − b>x, where

Q = [5, 2; 2, 1],  b = [3; 1].
b. Since f is a quadratic function on R², we need to perform only two iterations. For the first iteration we compute (taking x(0) = 0)

d(0) = −g(0) = b = [3, 1]>,  α0 = 10/58 = 0.17241,  x(1) = [0.51724, 0.17241]>.

For the second iteration,

β0 = 0.0047534
d(1) = [0.08324, −0.20214]>
α1 = 5.7952
x(2) = [1.000, −1.000]>.
c. The minimizer is given by x∗ = Q−1 b = [1, −1]> , which agrees with part b.
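Running the algorithm in code confirms part c; a Python sketch (assuming x(0) = [0, 0]>, which is consistent with the numbers in part b):

```python
# Conjugate gradient for f = (1/2) x'Qx - b'x with the Q-based update
# beta_k = g_{k+1}' Q d_k / (d_k' Q d_k); exact in 2 steps for a 2x2 quadratic.
def cg(Q, b, x, iters):
    mv = lambda v: [Q[0][0]*v[0] + Q[0][1]*v[1], Q[1][0]*v[0] + Q[1][1]*v[1]]
    dot = lambda u, v: u[0]*v[0] + u[1]*v[1]
    g = [mv(x)[0] - b[0], mv(x)[1] - b[1]]
    d = [-g[0], -g[1]]
    for _ in range(iters):
        Qd = mv(d)
        alpha = -dot(g, d) / dot(d, Qd)
        x = [x[0] + alpha*d[0], x[1] + alpha*d[1]]
        g = [mv(x)[0] - b[0], mv(x)[1] - b[1]]
        beta = dot(g, Qd) / dot(d, Qd)
        d = [-g[0] + beta*d[0], -g[1] + beta*d[1]]
    return x

x = cg([[5, 2], [2, 1]], [3, 1], [0.0, 0.0], 2)   # approximately [1, -1]
```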
10.11
A MATLAB routine for the conjugate gradient algorithm with options for different formulas of βk is:
function [x,N]=conj_grad(grad,xnew,options);
% CONJ_GRAD(’grad’,x0);
% CONJ_GRAD(’grad’,x0,OPTIONS);
%
% x = CONJ_GRAD(’grad’,x0);
% x = CONJ_GRAD(’grad’,x0,OPTIONS);
%
% [x,N] = CONJ_GRAD(’grad’,x0);
% [x,N] = CONJ_GRAD(’grad’,x0,OPTIONS);
%
%The first variant finds the minimizer of a function whose gradient
%is described in grad (usually an M-file: grad.m), using initial point
%x0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results, (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required of the gradient.
%OPTIONS(5) specifies the formula for beta:
% 0=Powell;
% 1=Fletcher-Reeves;
% 2=Polak-Ribiere;
% 3=Hestenes-Stiefel.
%OPTIONS(14) is the maximum number of iterations.
%For more information type HELP FOPTIONS.
%
%The next two variants return the value of the final point.
%The last two variants return a vector of the final point and the
%number of iterations.
if nargin ~= 3
options = [];
if nargin ~= 2
disp(’Wrong number of arguments.’);
return;
end
end
numvars = length(xnew);
if length(options) >= 14
if options(14)==0
options(14)=1000*numvars;
end
else
options(14)=1000*numvars;
end
clc;
format compact;
format short e;
options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);
g_curr=feval(grad,xnew);
if norm(g_curr) <= epsilon_g
disp(’Terminating: Norm of initial gradient less than’);
disp(epsilon_g);
return;
end %if
d=-g_curr;
reset_cnt = 0;
for k = 1:max_iter,
xcurr=xnew;
alpha=linesearch_secant(grad,xcurr,d);
%alpha=-(d’*g_curr)/(d’*Q*d);
xnew = xcurr+alpha*d;
if print,
disp(’Iteration number k =’)
disp(k); %print iteration index k
disp(’alpha =’);
disp(alpha); %print alpha
disp(’Gradient = ’);
disp(g_curr’); %print gradient
disp(’New point =’);
disp(xnew’); %print new point
end %if
g_old=g_curr;
g_curr=feval(grad,xnew);
if norm(g_curr) <= epsilon_g
disp('Terminating: Norm of gradient less than');
disp(epsilon_g);
break;
end %if
reset_cnt = reset_cnt+1;
if reset_cnt == 3*numvars
d=-g_curr;
reset_cnt = 0;
else
if options(5)==0 %Powell
beta = max(0,(g_curr’*(g_curr-g_old))/(g_old’*g_old));
elseif options(5)==1 %Fletcher-Reeves
beta = (g_curr’*g_curr)/(g_old’*g_old);
elseif options(5)==2 %Polak-Ribiere
beta = (g_curr’*(g_curr-g_old))/(g_old’*g_old);
else %Hestenes-Stiefel
beta = (g_curr’*(g_curr-g_old))/(d’*(g_curr-g_old));
end %if
d=-g_curr+beta*d;
end
if print,
disp(’New beta =’);
disp(beta);
disp(’New d =’);
disp(d);
end
if k == max_iter
disp(’Terminating with maximum number of iterations’);
end %if
end %for
if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp(’Final point =’);
disp(xnew’);
disp(’Number of iterations =’);
disp(k);
end %if
%------------------------------------------------------------------
We created the following M-file, g.m, for the gradient of Rosenbrock’s function:
function y=g(x)
y=[-400*(x(2)-x(1).^2)*x(1)-2*(1-x(1)), 200*(x(2)-x(1).^2)]’;
>> options(2)=10^(-7);
>> options(3)=10^(-7);
>> options(14)=100;
>> options(5)=0;
>> conj_grad(’g’,[-2;2],options);
1.0000e-07
Final_Point =
1.0000e+00 1.0000e+00
Number_of_iteration =
8
>> options(5)=1;
>> conj_grad(’g’,[-2;2],options);
The reader is cautioned not to draw any conclusions about the superiority or inferiority of any of the
formulas for βk based only on the above single numerical experiment.
Hence,

φ′(0) = d(k)>g(k).

Since φ′ is continuous, if d(k)>g(k) < 0, then there exists ᾱ > 0 such that for all α ∈ (0, ᾱ], φ(α) < φ(0), i.e., f(x(k) + αd(k)) < f(x(k)).
b. By part a, φ(α) < φ(0) for all α ∈ (0, ᾱ]. Hence,
c. Now,

d(k)>g(k+1) = d(k)>∇f(x(k) + αk d(k)) = φ′k(αk).

Since αk = arg min_{α≥0} f(x(k) + αd(k)) > 0, we have φ′k(αk) = 0. Hence, g(k+1)>d(k) = 0.
d.
i. We have d(k) = −g(k). Hence, d(k)> g(k) = −‖g(k)‖2. If g(k) ≠ 0, then ‖g(k)‖2 > 0, and hence d(k)> g(k) < 0.
ii. We have d(k) = −F(x(k))−1 g(k). Since F(x(k)) > 0, we also have F(x(k))−1 > 0. Therefore, d(k)> g(k) = −g(k)> F(x(k))−1 g(k) < 0 if g(k) ≠ 0.
iii. We have
d(k) = −g (k) + βk−1 d(k−1) .
Hence,
d(k)> g(k) = −‖g(k)‖2 + βk−1 d(k−1)> g(k).
By part c, d(k−1)> g(k) = 0. Hence, if g(k) ≠ 0, then ‖g(k)‖2 > 0, and d(k)> g(k) = −‖g(k)‖2 < 0.
iv. We have d(k) = −H k g(k). Therefore, if H k > 0 and g(k) ≠ 0, then d(k)> g(k) = −g(k)> H k g(k) < 0.
αk = −(d(k)> g(k))/(d(k)> Qd(k)).
11.2
Yes, because:
1. The search direction is of the form d(k) = −H k ∇f(x(k)) for the matrix H k = F(x(k))−1;
2. The matrix H k = F (x(k) )−1 is symmetric for f ∈ C 2 ;
3. If f is quadratic, then the quasi-Newton condition is satisfied: H k+1 ∆g (i) = ∆x(i) , 0 ≤ i ≤ k. To see
this, note that if the Hessian is Q, then Q∆x(i) = ∆g (i) . Multiplying both sides by H k = Q−1 , we
obtain the desired result.
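Item 3 can be spot-checked numerically: for a quadratic with Hessian Q, the Newton matrix H k = Q−1 maps ∆g to ∆x. The following Python sketch uses an arbitrary random positive definite Q and arbitrary points (assumptions for illustration, not data from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random symmetric positive definite Hessian Q (an arbitrary test matrix).
M = rng.standard_normal((3, 3))
Q = M @ M.T + 3 * np.eye(3)
H = np.linalg.inv(Q)          # Newton's H_k = F(x)^{-1} = Q^{-1} for a quadratic

x0, x1 = rng.standard_normal(3), rng.standard_normal(3)
dx = x1 - x0                  # Delta x
dg = Q @ x1 - Q @ x0          # Delta g = Q Delta x (gradient of a quadratic is Qx - b)
print(np.allclose(H @ dg, dx))  # True: the quasi-Newton condition holds
```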
11.3
a. We have
f(x(k) + αd(k)) = (1/2)(x(k) + αd(k))> Q(x(k) + αd(k)) − (x(k) + αd(k))> b + c.
Using the chain rule, we obtain
(d/dα) f(x(k) + αd(k)) = (x(k) + αd(k))> Qd(k) − d(k)> b.
Equating the above to zero and solving for α gives
(x(k)> Q − b>) d(k) = −αd(k)> Qd(k).
Taking into account that g(k)> = x(k)> Q − b> and that d(k)> Qd(k) > 0 for g(k) ≠ 0, we obtain
αk = −(g(k)> d(k))/(d(k)> Qd(k)).
To find H 1 we need to compute
∆x(0) = x(1) − x(0) = [2/3, −2/3]> and ∆g(0) = g(1) − g(0) = [2/3, −4/3]>.
Then, we obtain
H 1 = H 0 + ((∆x(0) − H 0 ∆g(0))(∆x(0) − H 0 ∆g(0))>)/(∆g(0)>(∆x(0) − H 0 ∆g(0)))
= [1 0; 0 1] + (1/(−8/9)) [0 0; 0 4/9]
= [1 0; 0 1/2]
and
d(1) = −H 1 g(1) = [1/3, 1/6]>.
We next compute
α1 = −(g(1)> d(1))/(d(1)> Qd(1)) = 1.
Therefore,
x(2) = x∗ = x(1) + α1 d(1) = [1, −1/2]>.
Note that g(2) = Qx(2) − b = 0 as expected.
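The numbers in this example are consistent with Q = diag(1, 2), b = [1, −1]>, and x(0) = 0 (an inference from the values above, not stated in this excerpt). Under that assumption the whole rank one iteration can be replayed in Python:

```python
import numpy as np

# Problem data inferred from the worked numbers above (an assumption, not given here):
Q = np.diag([1.0, 2.0]); b = np.array([1.0, -1.0])
x0 = np.zeros(2); H0 = np.eye(2)

g = lambda x: Q @ x - b
d0 = -H0 @ g(x0)
a0 = -(d0 @ g(x0)) / (d0 @ Q @ d0)
x1 = x0 + a0 * d0

dx, dg = x1 - x0, g(x1) - g(x0)
u = dx - H0 @ dg
H1 = H0 + np.outer(u, u) / (dg @ u)          # rank one correction
d1 = -H1 @ g(x1)
a1 = -(d1 @ g(x1)) / (d1 @ Q @ d1)
x2 = x1 + a1 * d1
print(H1, x2, g(x2))  # H1 = diag(1, 1/2); x2 = [1, -1/2]; gradient is zero
```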
11.6
We are guaranteed that the step size satisfies αk > 0 if the search direction d(k) = −M k ∇f(x(k)) is a descent direction, i.e., it has strictly positive inner product with −∇f(x(k)) (see Exercise 11.1). Thus, the condition on M k that guarantees αk > 0 is ∇f(x(k))> M k ∇f(x(k)) > 0, which corresponds to 1 + a > 0, or a > −1. (Note that if a ≤ −1, the search direction is not a descent direction, and thus we cannot guarantee that αk > 0.)
11.7
Let x ∈ Rn, x ≠ 0. Then
x> H k+1 x = x> H k x + x> ((∆x(k) − H k ∆g(k))(∆x(k) − H k ∆g(k))>)/(∆g(k)>(∆x(k) − H k ∆g(k))) x
= x> H k x + (x>(∆x(k) − H k ∆g(k)))2/(∆g(k)>(∆x(k) − H k ∆g(k))).
Note that since H k > 0, we have x> H k x > 0. Hence, if ∆g(k)>(∆x(k) − H k ∆g(k)) > 0, then x> H k+1 x > 0.
11.8
The complement of the rank one update equation is
B k+1 = B k + ((∆g(k) − B k ∆x(k))(∆g(k) − B k ∆x(k))>)/(∆x(k)>(∆g(k) − B k ∆x(k))).
Using the matrix inverse formula, we get
(B k+1)−1 = (B k)−1 − ((B k)−1(∆g(k) − B k ∆x(k))(∆g(k) − B k ∆x(k))>(B k)−1)/(∆x(k)>(∆g(k) − B k ∆x(k)) + (∆g(k) − B k ∆x(k))>(B k)−1(∆g(k) − B k ∆x(k)))
= (B k)−1 + ((∆x(k) − (B k)−1 ∆g(k))(∆x(k) − (B k)−1 ∆g(k))>)/(∆g(k)>(∆x(k) − (B k)−1 ∆g(k))).
Substituting H k for (B k)−1, we get a formula identical to the rank one update equation. This should not be surprising, since there is only one update equation involving a rank one correction that satisfies the quasi-Newton condition.
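The equivalence can be spot-checked numerically; the following Python sketch (with an arbitrary symmetric positive definite B k and arbitrary ∆x, ∆g, chosen for illustration) confirms that the rank one update of B k and the rank one update of H k = (B k)−1 produce inverse matrices:

```python
import numpy as np

B = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # arbitrary symmetric positive definite B_k
H = np.linalg.inv(B)              # H_k = B_k^{-1}
dx = np.array([1.0, 0.0, 1.0])    # arbitrary Delta x
dg = np.array([2.0, 1.0, 0.0])    # arbitrary Delta g

# Rank one update of B_k (the complement form above).
u = dg - B @ dx
B_next = B + np.outer(u, u) / (dx @ u)

# Rank one update of H_k.
v = dx - H @ dg
H_next = H + np.outer(v, v) / (dg @ v)

print(np.allclose(np.linalg.inv(B_next), H_next))  # True: the updates are inverses
```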
11.9
We first compute the gradient ∇f and evaluate it at x(0):
∇f(x(0)) = g(0) = Qx(0) − b = [−1, 1]>.
To find H 1 we need
∆x(0) = x(1) − x(0) = [2/3, −2/3]> and ∆g(0) = g(1) − g(0) = [2/3, −4/3]>,
and
d(1) = −H 1 g(1) = [1/3, 1/6]>.
Note that d(0)> Qd(1) = 0, that is, d(0) and d(1) are Q-conjugate.
11.10
The calculations are similar until we get to the second step, where we obtain
H 1 = [1/2 −1/2; −1/2 1/2]
and
d(1) = −H 1 g(1) = 0.
So the algorithm gets stuck at this point, which illustrates that it doesn't work.
11.11
a. Since f is quadratic and αk = arg minα≥0 f(x(k) + αd(k)), we have
αk = −(g(k)> d(k))/(d(k)> Qd(k)).
Substituting d(k) = −H k g(k) and using the assumption that H k > 0, we obtain
αk = (g(k)> H k g(k))/(d(k)> Qd(k)) > 0.
11.12
(Our solution to this problem is based on a solution that was furnished to us by Michael Mera, a student in
ECE 580 at Purdue in Spring 2005.) To proceed, we recall the formula of Lemma 11.1,
A 0 = B k, u0 = ∆g(k)/(∆g(k)> ∆x(k)), v0> = ∆g(k)>,
and
A 1 = B k + (∆g(k) ∆g(k)>)/(∆g(k)> ∆x(k)) = A 0 + u0 v0>, u1 = −(B k ∆x(k))/(∆x(k)> B k ∆x(k)),
and
v1> = ∆x(k)> B k.
Using the above notation, we can write B k+1 = A 1 + u1 v1>. Applying the Sherman-Morrison formula of Lemma 11.1 twice,
H BFGS k+1 = (B k+1)−1 = (A 1 + u1 v1>)−1 = A 1−1 − (A 1−1 u1 v1> A 1−1)/(1 + v1> A 1−1 u1),
where
A 1−1 = (A 0 + u0 v0>)−1 = A 0−1 − (A 0−1 u0 v0> A 0−1)/(1 + v0> A 0−1 u0).
Substituting A 0−1 = (B k)−1 = H k and the expressions for u0 and v0 gives
A 1−1 = H k − (H k ∆g(k) ∆g(k)> H k)/D, where D = ∆g(k)> ∆x(k) + ∆g(k)> H k ∆g(k).
Using the fact that H k B k = B k H k = I n, we have
B k A 1−1 = I n − (∆g(k) ∆g(k)> H k)/D, A 1−1 B k = I n − (H k ∆g(k) ∆g(k)>)/D, B k A 1−1 B k = B k − (∆g(k) ∆g(k)>)/D.
Substituting u1 = −(B k ∆x(k))/(∆x(k)> B k ∆x(k)) and v1> = ∆x(k)> B k then yields
1 + v1> A 1−1 u1 = 1 − (∆x(k)> B k A 1−1 B k ∆x(k))/(∆x(k)> B k ∆x(k)) = ((∆g(k)> ∆x(k))2/D)/(∆x(k)> B k ∆x(k)).
Hence, after cancelling the common factors of ∆x(k)> B k ∆x(k),
H BFGS k+1 = A 1−1 + (D/(∆g(k)> ∆x(k))2) (A 1−1 B k ∆x(k))(∆x(k)> B k A 1−1).
Now, A 1−1 B k ∆x(k) = ∆x(k) − (H k ∆g(k))(∆g(k)> ∆x(k))/D, so expanding the last product gives
H BFGS k+1 = H k − (H k ∆g(k) ∆g(k)> H k)/D + (D/(∆g(k)> ∆x(k))2) ∆x(k) ∆x(k)> − (H k ∆g(k) ∆x(k)> + ∆x(k) ∆g(k)> H k)/(∆g(k)> ∆x(k)) + (H k ∆g(k) ∆g(k)> H k)/D.
The second and fifth terms on the right-hand side cancel each other. Noting that D/(∆g(k)> ∆x(k)) = 1 + (∆g(k)> H k ∆g(k))/(∆g(k)> ∆x(k)), we finally obtain
H BFGS k+1 = H k + (1 + (∆g(k)> H k ∆g(k))/(∆g(k)> ∆x(k))) (∆x(k) ∆x(k)>)/(∆x(k)> ∆g(k)) − (H k ∆g(k) ∆x(k)> + ∆x(k) ∆g(k)> H k)/(∆x(k)> ∆g(k)),
which is the desired BFGS update formula.
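As a sanity check on the derivation, the following Python sketch verifies numerically that the formula above is the inverse of the direct BFGS update of B k. The test matrices and increments are arbitrary (chosen only so that ∆g> ∆x > 0):

```python
import numpy as np

B = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # arbitrary symmetric positive definite B_k
H = np.linalg.inv(B)              # H_k = B_k^{-1}
dx = np.array([1.0, 0.0, 1.0])    # arbitrary Delta x
dg = np.array([2.0, 1.0, 0.0])    # arbitrary Delta g (note dg'dx = 2 > 0)
rho = dg @ dx

# Direct BFGS update of B_k.
Bdx = B @ dx
B_next = B + np.outer(dg, dg) / rho - np.outer(Bdx, Bdx) / (dx @ Bdx)

# Inverse BFGS update of H_k: the formula derived above.
H_next = (H + (1 + (dg @ H @ dg) / rho) * np.outer(dx, dx) / rho
            - (np.outer(H @ dg, dx) + np.outer(dx, H @ dg)) / rho)

print(np.allclose(np.linalg.inv(B_next), H_next))  # True: they are inverses
```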
11.13
The first step for both algorithms is clearly the same, since in either case we have
d(1) = −H 1 g(1)
= −(I n + (1 + (∆g(0)> ∆g(0))/(∆g(0)> ∆x(0))) (∆x(0) ∆x(0)>)/(∆x(0)> ∆g(0)) − (∆g(0) ∆x(0)> + (∆g(0) ∆x(0)>)>)/(∆g(0)> ∆x(0))) g(1)
= −g(1) − (1 + (∆g(0)> ∆g(0))/(∆g(0)> ∆x(0))) (∆x(0) ∆x(0)> g(1))/(∆x(0)> ∆g(0)) + (∆g(0) ∆x(0)> g(1) + ∆x(0) ∆g(0)> g(1))/(∆g(0)> ∆x(0)).
Since the line search is exact, we have ∆x(0)> g(1) = α0 d(0)> g(1) = 0. Hence,
d(1) = −g(1) + (∆x(0) ∆g(0)> g(1))/(∆g(0)> ∆x(0)) = −g(1) + β0 d(0),
where
β0 = (g(1)> ∆g(0))/(d(0)> ∆g(0)) = (g(1)>(g(1) − g(0)))/(d(0)>(g(1) − g(0)))
is the Hestenes-Stiefel update formula for β0. Since d(0) = −g(0) and g(1)> g(0) = 0, we have
β0 = (g(1)> g(1))/(g(0)> g(0)),
which is the Fletcher-Reeves formula.
11.14
a. Suppose the three conditions hold whenever applied to a quadratic. We need to show that when applied
to a quadratic, for k = 0, . . . , n − 1 and i = 0, . . . , k, H k+1 ∆g (i) = ∆x(i) . For i = k, we have
So it suffices to show that the second and third terms are both 0. For the second term,
because of the induction hypothesis, which implies Q-conjugacy (where Q is the Hessian of the given
quadratic). Similarly, for the third term,
again because of the induction hypothesis, which implies Q-conjugacy. This completes the proof.
b. All three algorithms satisfy the conditions in part a. Condition 1 holds, as described in class. Condition
2 is straightforward to check for all three algorithms. For the rank-one and DFP algorithms, this is shown in
the book. For BFGS, some simple matrix algebra establishes that it holds. Condition 3 holds by appropriate
definition of the vectors a(k) and b(k). In particular, for the rank-one algorithm,
a(k) = −b(k) = (∆x(k) − H k ∆g(k))/(∆g(k)>(∆x(k) − H k ∆g(k))).
For the DFP algorithm,
a(k) = ∆x(k)/(∆x(k)> ∆g(k)), b(k) = −(H k ∆g(k))/(∆g(k)> H k ∆g(k)).
Finally, for the BFGS algorithm,
a(k) = (1 + (∆g(k)> H k ∆g(k))/(∆g(k)> ∆x(k))) ∆x(k)/(∆x(k)> ∆g(k)) − (H k ∆g(k))/(∆g(k)> ∆x(k)), b(k) = ∆x(k)/(∆g(k)> ∆x(k)).
11.15
a. Suppose we apply the algorithm to a quadratic. Then, by the quasi-Newton property of DFP, we have H DFP k+1 ∆g(i) = ∆x(i), 0 ≤ i ≤ k. The same holds for BFGS. Thus, for the given H k+1, we have for 0 ≤ i ≤ k,
H k+1 ∆g(i) = φ H DFP k+1 ∆g(i) + (1 − φ) H BFGS k+1 ∆g(i) = φ ∆x(i) + (1 − φ) ∆x(i) = ∆x(i),
which shows that the above algorithm is a quasi-Newton algorithm (and hence also a conjugate direction algorithm).
b. By Theorem 11.4 and the discussion on BFGS, we have H DFP k > 0 and H BFGS k > 0. Hence, for any x ≠ 0,
x> H k x = φ x> H DFP k x + (1 − φ) x> H BFGS k x > 0
since φ and 1 − φ are nonnegative. Hence, H k > 0, from which we conclude that the algorithm has the
descent property if αk is computed by line search (by Proposition 11.1).
11.16
To show the result, we will prove the following precise statement: In the quadratic case (with Hessian Q),
suppose that H k+1 ∆g(i) = ρi ∆x(i), 0 ≤ i ≤ k, k ≤ n − 1. If αi ≠ 0, 0 ≤ i ≤ k, then d(0), . . . , d(k+1) are
Q-conjugate.
We proceed by induction. We begin with the k = 0 case: that d(0) and d(1) are Q-conjugate. Because
α0 ≠ 0, we can write d(0) = ∆x(0)/α0. Hence, using Q∆x(0) = ∆g(0) and H 1 ∆g(0) = ρ0 ∆x(0),
d(1)> Qd(0) = −g(1)> H 1 Q∆x(0)/α0 = −g(1)> H 1 ∆g(0)/α0 = −(ρ0/α0) g(1)> ∆x(0) = −ρ0 g(1)> d(0).
But g(1)> d(0) = 0 as a consequence of α0 > 0 being the minimizer of φ(α) = f(x(0) + αd(0)). Hence, d(1)> Qd(0) = 0.
Assume that the result is true for k − 1 (where k < n − 1). We now prove the result for k, that is, that d(0), . . . , d(k+1) are Q-conjugate. It suffices to show that d(k+1)> Qd(i) = 0, 0 ≤ i ≤ k. Given i, 0 ≤ i ≤ k, using the same algebraic steps as in the k = 0 case, and using the assumption that αi ≠ 0, we obtain
d(k+1)> Qd(i) = −ρi g(k+1)> d(i).
Because d(0) , . . . , d(k) are Q-conjugate by assumption, we conclude from the expanding subspace lemma
(Lemma 10.2) that g (k+1)> d(i) = 0. Hence, d(k+1)> Qd(i) = 0, which completes the proof.
11.17
A MATLAB routine for the quasi-Newton algorithm with options for different formulas of H k is:
function [x,N]=quasi_newton(grad,xnew,H,options);
% QUASI_NEWTON(’grad’,x0,H0);
% QUASI_NEWTON(’grad’,x0,H0,OPTIONS);
%
% x = QUASI_NEWTON(’grad’,x0,H0);
% x = QUASI_NEWTON(’grad’,x0,H0,OPTIONS);
%
% [x,N] = QUASI_NEWTON(’grad’,x0,H0);
% [x,N] = QUASI_NEWTON(’grad’,x0,H0,OPTIONS);
%
%The first variant finds the minimizer of a function whose gradient
%is described in grad (usually an M-file: grad.m), using initial point
%x0 and initial inverse Hessian approximation H0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results, (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required of the gradient.
%OPTIONS(5) specifies the formula for the inverse Hessian update:
% 0=Rank One;
% 1=DFP;
% 2=BFGS;
%OPTIONS(14) is the maximum number of iterations.
%For more information type HELP FOPTIONS.
%
%The next two variants return the value of the final point.
%The last two variants return a vector of the final point and the
%number of iterations.
if nargin ~= 4
options = [];
if nargin ~= 3
disp(’Wrong number of arguments.’);
return;
end
end
numvars = length(xnew);
if length(options) >= 14
if options(14)==0
options(14)=1000*numvars;
end
else
options(14)=1000*numvars;
end
clc;
format compact;
format short e;
options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);
reset_cnt = 0;
g_curr=feval(grad,xnew);
if norm(g_curr) <= epsilon_g
disp(’Terminating: Norm of initial gradient less than’);
disp(epsilon_g);
return;
end %if
d=-H*g_curr;
for k = 1:max_iter,
xcurr=xnew;
alpha=linesearch_secant(grad,xcurr,d);
xnew = xcurr+alpha*d;
if print,
disp(’Iteration number k =’)
disp(k); %print iteration index k
disp(’alpha =’);
disp(alpha); %print alpha
disp(’Gradient = ’);
disp(g_curr’); %print gradient
disp(’New point =’);
disp(xnew’); %print new point
end %if
g_old=g_curr;
g_curr=feval(grad,xnew);
p=alpha*d;
q=g_curr-g_old;
reset_cnt = reset_cnt+1;
if reset_cnt == 3*numvars
d=-g_curr;
reset_cnt = 0;
else
if options(5)==0 %Rank One
H = H+(p-H*q)*(p-H*q)'/(q'*(p-H*q));
elseif options(5)==1 %DFP
H = H+p*p’/(p’*q)-(H*q)*(H*q)’/(q’*H*q);
else %BFGS
H = H+(1+q’*H*q/(q’*p))*p*p’/(p’*q)-(H*q*p’+(H*q*p’)’)/(q’*p);
end %if
d=-H*g_curr;
end
if print,
disp(’New H =’);
disp(H);
disp(’New d =’);
disp(d);
end
if k == max_iter
disp(’Terminating with maximum number of iterations’);
end %if
end %for
if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp(’Final point =’);
disp(xnew’);
disp(’Number of iterations =’);
disp(k);
end %if
%------------------------------------------------------------------
We created the following M-file, g.m, for the gradient of Rosenbrock’s function:
function y=g(x)
y=[-400*(x(2)-x(1).^2)*x(1)-2*(1-x(1)), 200*(x(2)-x(1).^2)]’;
>> options(2)=10^(-7);
>> options(3)=10^(-7);
>> options(14)=100;
>> x0=[-2;2];
>> H0=eye(2);
>> options(5)=0;
>> quasi_newton(’g’,x0,H0,options);
>> options(5)=1;
>> quasi_newton(’g’,x0,H0,options);
The reader is again cautioned not to draw any conclusions about the superiority or inferiority of any of
the formulas for H k based only on the above single numerical experiment.
11.18
a. The plot of the level sets of f was obtained using the following MATLAB commands:
>> [X,Y]=meshgrid(-2:0.1:2, -1:0.1:3);
>> Z=X.^4/4+Y.^2/2 -X.*Y +X-Y;
>> V=[-0.72, -0.6, -0.2, 0.5, 2];
>> contour(X,Y,Z,V)
(Figure: level sets of f, with x_1 on the horizontal axis and x_2 on the vertical axis.)
b. With the initial condition [0, 0]> , the algorithm converges to [−1, 0]> , while with the initial condition
[1.5, 1]> , the algorithm converges to [1, 2]> . These two points are the two strict local minimizers of f (as can
be checked using the SOSC). The algorithm apparently converges to the minimizer “closer” to the initial
point.
12. Solving Ax = b
12.1
Write the least squares cost in the usual notation ‖Ax − b‖2 where
A = [3, 5, 6]>, x = [m], b = [1, 2, 3]>.
12.3
a. We form
A = [1²/2, 2²/2, 3²/2]>, b = [5.00, 19.5, 44.0]>.
The least squares estimate of g is then given by
g = (A> A)−1 A> b = 9.776.
b. We start with P 0 = 0.040816, and x(0) = 9.776. We have a1 = 4²/2 = 8, and b(1) = 78.5. Using the RLS
formula, we get x(1) = 9.802, which is our updated estimate of g.
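Both parts can be reproduced numerically; the following Python sketch recomputes the batch estimate and then applies the scalar RLS update:

```python
import numpy as np

# Batch estimate from the first three measurements.
a = np.array([1.0, 4.0, 9.0]) / 2          # t^2/2 for t = 1, 2, 3
b = np.array([5.00, 19.5, 44.0])
P0 = 1.0 / (a @ a)                          # (A'A)^{-1}, a scalar here
x0 = P0 * (a @ b)                           # least squares estimate of g

# RLS update with the fourth measurement a1 = 4^2/2, b1 = 78.5.
a1, b1 = 8.0, 78.5
P1 = P0 - (P0 * a1) * (a1 * P0) / (1 + a1 * P0 * a1)
x1 = x0 + P1 * a1 * (b1 - a1 * x0)

print(round(x0, 3), round(x1, 3))  # 9.776 and 9.802
```

The updated estimate agrees with the batch least squares solution over all four measurements, as the RLS theory guarantees.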
12.4
Let x = [x1 , x2 , . . . , xn ]> and y = [y1 , y2 , . . . , yn ]> . This least-squares estimation problem can be expressed
as
minimize ‖αx − y‖2,
with α as the decision variable. Assuming that x ≠ 0, the solution is unique and is given by
α∗ = (x> x)−1 x> y = (x> y)/(x> x) = (Σ_{i=1}^{n} xi yi)/(Σ_{i=1}^{n} xi2).
12.5
The least squares estimate of R is the least squares solution to the system
1 · R = V1, . . . , 1 · R = Vn,
namely R∗ = (V1 + · · · + Vn)/n, the average of the measurements.
12.6
We represent the data in the table and the decision variables a and b using the usual least squares matrix notation:
A = [1 2; 1 1; 3 2], b = [6, 4, 5]>, x = [a, b]>.
The least squares estimate is given by
x∗ = [a∗, b∗]> = (A> A)−1 A> b = [11 9; 9 9]−1 [25; 26] = (1/18) [9 −9; −9 11] [25; 26] = [−1/2, 61/18]>.
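The arithmetic can be confirmed with a few lines of Python (a cross-check, not part of the original solution):

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.0, 1.0], [3.0, 2.0]])
b = np.array([6.0, 4.0, 5.0])

# Normal-equations solution, as in the text.
x_star = np.linalg.solve(A.T @ A, A.T @ b)
print(x_star)  # [-0.5, 61/18] = [-0.5, 3.3888...]

# Cross-check with the library least squares routine.
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x_star, x_lstsq))  # True
```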
12.7
The problem can be formulated as a least-squares problem with
A = [0.3 0.1; 0.4 0.2; 0.3 0.7], b = [5, 3, 4]>,
where the decision variable is x = [x1, x2]>, and x1 and x2 are the amounts of A and B, respectively. After some algebra, we obtain the solution:
x∗ = (A> A)−1 A> b = (1/((0.34)(0.54) − (0.32)2)) [0.54 −0.32; −0.32 0.34] [3.9; 3.9].
Since we are only interested in the ratio of the first component of x∗ to the second component, we need only explicitly compute
Ratio = (0.54 − 0.32)/(−0.32 + 0.34) = 0.22/0.02 = 11.
12.8
For each k, we can write
yk = ayk−1 + buk + vk
= a2 yk−2 + abuk−1 + avk−1 + buk + vk
..
.
= ak−1 bu1 + ak−2 bu2 + · · · + buk + ak−1 v1 + ak−2 v2 + · · · + vk
Write u = [u1, . . . , un]>, v = [v1, . . . , vn]>, and y = [y1, . . . , yn]>. Then, y = Cu + Dv, where C and D are the lower triangular matrices
C = [b 0 · · · 0; ab b · · · 0; . . . ; a^{n−1}b · · · ab b], D = [1 0 · · · 0; a 1 · · · 0; . . . ; a^{n−1} a^{n−2} · · · 1].
Write b = D −1 y and A = D −1 C so that b = Au + v. Therefore, the linear least-squares estimate of u
given y is
u∗ = (A> A)−1 A> b = (C > D −> D −1 C)−1 C > D −> D −1 y.
But C = bD. Hence,
u∗ = (1/b) D−1 y = (1/b) [1 0 · · · 0; −a 1 · · · 0; . . . ; 0 · · · −a 1] y.
(Notice that D−1 has the simple form shown above: lower bidiagonal, with 1 on the diagonal and −a on the subdiagonal.)
An alternative solution is first to define z = [z1 , . . . , zn ]> by zk = yk − ayk−1 . Then, we have z = bu + v.
Therefore, the linear least-squares estimate of u given y (or, equivalently, z) is
u∗ = (1/b) z = (1/b) [1 0 · · · 0; −a 1 · · · 0; . . . ; 0 · · · −a 1] y.
12.9
Define
X = [x1 1; . . . ; xp 1], y = [y1, . . . , yp]>.
Since the xi are not all equal, we have rank X = 2. The objective function can be written as
f(a, b) = ‖X [a, b]> − y‖2.
Therefore, by Theorem 12.1 there exists a unique minimizer [a∗, b∗]> given by
[a∗, b∗]> = (X> X)−1 X> y = [Σ_{i=1}^{p} xi2  Σ_{i=1}^{p} xi; Σ_{i=1}^{p} xi  p]−1 [Σ_{i=1}^{p} xi yi; Σ_{i=1}^{p} yi].
Dividing through by p and writing avg(X) = (1/p)Σ xi, avg(Y) = (1/p)Σ yi, avg(XY) = (1/p)Σ xi yi, and avg(X2) = (1/p)Σ xi2, we get
[a∗, b∗]> = [avg(X2) avg(X); avg(X) 1]−1 [avg(XY); avg(Y)]
= (1/(avg(X2) − avg(X)2)) [1 −avg(X); −avg(X) avg(X2)] [avg(XY); avg(Y)]
= [(avg(XY) − avg(X) avg(Y))/(avg(X2) − avg(X)2), (avg(X2) avg(Y) − avg(X) avg(XY))/(avg(X2) − avg(X)2)]>.
As we can see, the solution does not depend on avg(Y2), the average of the yi2.
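The closed-form averages solution can be cross-checked against a library least squares solver; in the Python sketch below the data are an arbitrary noisy line, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(50)  # arbitrary noisy line

# Closed-form solution in terms of sample averages, as derived above.
Xb, Yb = x.mean(), y.mean()
X2b, XYb = (x**2).mean(), (x*y).mean()
a = (XYb - Xb * Yb) / (X2b - Xb**2)
c = (X2b * Yb - Xb * XYb) / (X2b - Xb**2)

# Cross-check against the matrix least squares solution.
X = np.column_stack([x, np.ones_like(x)])
ab, cb = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose([a, c], [ab, cb]))  # True
```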
12.10
a. We wish to find ω and θ such that
sin(ωt1 + θ) = y1
..
.
sin(ωtp + θ) = yp .
This is equivalent to the system
ωt1 + θ = arcsin y1, . . . , ωtp + θ = arcsin yp,
which we write as A [ω, θ]> = b, where A = [t1 1; . . . ; tp 1] and b = [arcsin y1, . . . , arcsin yp]>. Since the ti are not all equal, the first column of A is not a scalar multiple of the second column. Therefore, rank A = 2. Hence, the least squares solution is
[ω∗, θ∗]> = (A> A)−1 A> b.
12.11
The given line can be expressed as the range of the matrix A = [1, m]>. Let b = [x0, y0]> be the given point. Therefore, the problem is a linear least squares problem of minimizing ‖Ax − b‖2. The solution is given by
x∗ = (A> A)−1 A> b = (x0 + my0)/(1 + m2).
Therefore, the point on the straight line that is closest to the given point [x0, y0]> is given by [x∗, mx∗]>.
12.12
a. Write
A = [x1> 1; . . . ; xp> 1] ∈ R^{p×(n+1)}, z = [a; c] ∈ R^{n+1}, b = [y1, . . . , yp]> ∈ R^p.
The objective function can then be written as ‖Az − b‖2.
b. Let X = [x1 , . . . , xp ]> ∈ Rp×n , and e = [1, . . . , 1]> ∈ Rp . Then we may write A = [X e]. The solution
to the problem is (A> A)−1 A> b. But
" # " #
> X >X X >e X >X 0
A A= =
e> X p 0> p
The affine function of best fit is the constant function f(x) = c, where
c = (1/p) Σ_{i=1}^{p} yi.
12.13
a. Using the least squares formula, we have
θ̂n = ([u1, . . . , un][u1, . . . , un]>)−1 [u1, . . . , un][y1, . . . , yn]> = (Σ_{k=1}^{n} uk yk)/(Σ_{k=1}^{n} uk2).
Hence, θ̂n → θ if and only if lim_{n→∞} (1/n) Σ_{k=1}^{n} ek = 0.
12.14
We pose the problem as a least squares problem: minimize ‖Ax − b‖2 where x = [a, b]>, and
A = [x0 1; x1 1; x2 1], b = [x1, x2, x3]>.
12.15
We pose the problem as a least squares problem: minimize ‖Ax − b‖2 where x = [a, b]>, and
A = [0 1; h1 0; h2 0; . . . ; h_{n−1} 0], b = [h1, h2, . . . , hn]>.
The matrix A> A is nonsingular because we assume that at least one hk is nonzero. Therefore, the least squares solution is
[a∗, b∗]> = [Σ_{i=1}^{n−1} hi2  0; 0  1]−1 [Σ_{i=1}^{n−1} hi hi+1; h1] = [(Σ_{i=1}^{n−1} hi hi+1)/(Σ_{i=1}^{n−1} hi2), h1]>.
12.16
We pose the problem as a least squares problem: minimize ‖Ax − b‖2 where x = [a, b]>, and
A = [0 1; s1 1; s2 1; . . . ; s_{n−1} 1], b = [s1, s2, . . . , sn]>.
The matrix A> A is nonsingular because we assume that at least one sk is nonzero. Therefore, the least squares solution is
[a∗, b∗]> = [Σ_{i=1}^{n−1} si2  Σ_{i=1}^{n−1} si; Σ_{i=1}^{n−1} si  n]−1 [Σ_{i=1}^{n−1} si si+1; Σ_{i=1}^{n} si]
= (1/(n Σ_{i=1}^{n−1} si2 − (Σ_{i=1}^{n−1} si)2)) [n Σ_{i=1}^{n−1} si si+1 − Σ_{i=1}^{n−1} si Σ_{i=1}^{n} si; −Σ_{i=1}^{n−1} si Σ_{i=1}^{n−1} si si+1 + Σ_{i=1}^{n−1} si2 Σ_{i=1}^{n} si].
12.17
This least-squares estimation problem can be expressed as
minimize ‖xa − y‖2
with a ∈ R as the decision variable. If x = 0, then the problem has an infinite number of solutions: any a solves the problem. Assuming that x ≠ 0, the solution is unique and is given by
a∗ = (x> x)−1 x> y = (x> y)/(x> x).
12.18
The solution to this problem is the same as the solution to:
minimize (1/2) ‖x − b‖2
subject to x ∈ R(A).
Substituting x = Ay, we see that this is simply a linear least squares problem with decision variable y. The
solution to the least squares problem is y ∗ = (A> A)−1 A> b, which implies that the solution to the given
problem is x∗ = A(A> A)−1 A> b.
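The solution x∗ = A(A> A)−1 A> b is the orthogonal projection of b onto R(A); a quick Python check with an arbitrary random A and b (illustrative stand-ins) confirms the two defining properties:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 2))        # arbitrary full-rank 5x2 matrix
b = rng.standard_normal(5)

P = A @ np.linalg.inv(A.T @ A) @ A.T   # orthogonal projector onto R(A)
x_star = P @ b

# x* lies in R(A) and the residual b - x* is orthogonal to R(A).
print(np.allclose(P @ x_star, x_star))     # True (applying P again changes nothing)
print(np.allclose(A.T @ (b - x_star), 0))  # True (residual orthogonal to R(A))
```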
12.19
We solve the problem using two different methods. The first method would be to use the Lagrange multiplier
technique to solve the equivalent problem,
minimize ‖x − x0‖2
subject to h(x) = [1, 1, 1] x − 1 = 0.
We substitute the given numerical data into the above formula to obtain
x∗ = [1/3, 1/3, 1/3]> + [2/3 −1/3 −1/3; −1/3 2/3 −1/3; −1/3 −1/3 2/3] [0, −3, 0]> = [4/3, −5/3, 4/3]>.
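The computation can be verified numerically. The sketch below assumes, from the numbers above, that the constraint is x1 + x2 + x3 = 1 and the given point is x0 = [0, −3, 0]>; these are inferences from this excerpt, not explicitly stated data:

```python
import numpy as np

A = np.ones((1, 3))                    # constraint: x1 + x2 + x3 = 1
b = np.array([1.0])
x0 = np.array([0.0, -3.0, 0.0])        # the given point (inferred from the numbers)

# Closest point on {x : Ax = b} to x0 (see also Exercise 12.20):
x_star = x0 + A.T @ np.linalg.solve(A @ A.T, b - A @ x0)
print(x_star)  # [4/3, -5/3, 4/3]
```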
12.20
For each x ∈ Rn , let y = x − x0 . Then, the original problem is equivalent to
minimize kyk
subject to Ay = b − Ax0 ,
in the sense that y ∗ is a solution to the above problem if and only if x∗ = y ∗ + x0 is a solution to the original
problem. By Theorem 12.2, the above problem has a unique solution given by
y ∗ = A> (AA> )−1 (b − Ax0 ) = A> (AA> )−1 b − A> (AA> )−1 Ax0 .
Hence, the solution to the original problem is
x∗ = A>(AA>)−1 b − A>(AA>)−1 Ax0 + x0 = A>(AA>)−1 b + (I n − A>(AA>)−1 A)x0.
Note that
‖x∗ − x0‖ = ‖y∗‖ = ‖A>(AA>)−1(b − Ax0)‖ = ‖A>(AA>)−1 b − A>(AA>)−1 Ax0‖.
12.21
The objective function of the given problem can be written as ‖Bx − c‖2, where
B = [A; . . . ; A] (p copies of A stacked), c = [b1; . . . ; bp].
The solution is therefore
x∗ = (B> B)−1 B> c = (pA> A)−1 A>(b1 + · · · + bp) = (1/p) Σ_{i=1}^{p} (A> A)−1 A> bi = (1/p) Σ_{i=1}^{p} xi∗.
Alternatively: Write
‖Ax − bi‖2 = x> A> Ax − 2x> A> bi + ‖bi‖2.
Therefore, the given objective function can be written as
Σ_{i=1}^{p} ‖Ax − bi‖2 = p ‖Ax − b‖2 + (a term that does not depend on x), where
b = (b1 + · · · + bp)/p.
12.22
Write
‖Ax − bi‖2 = x> A> Ax − 2x> A> bi + ‖bi‖2.
Therefore, the given objective function can be written as
Σ_{i=1}^{p} αi ‖Ax − bi‖2 = (α1 + · · · + αp)(x> A> Ax − 2x> A>(β1 b1 + · · · + βp bp)) + Σ_{i=1}^{p} αi ‖bi‖2,
where βi = αi/(α1 + · · · + αp).
Note that the original problem can therefore be written as the least squares problem of minimizing ‖Ax − b‖2, where
b = (α1 b1 + · · · + αp bp)/(α1 + · · · + αp).
12.23
Let x∗ = A> (AA> )−1 b. Suppose y ∗ is a point in R(A> ) that satisfies Ay ∗ = b. Then, there exists
z ∗ ∈ Rm such that y ∗ = A> z ∗ . Then, subtracting the equation A(A> (AA> )−1 b) = b from the equation
A(A> z ∗ ) = b, we get
(AA>)(z∗ − (AA>)−1 b) = 0.
Since rank A = m, AA> is nonsingular. Therefore, z∗ − (AA>)−1 b = 0, which implies that
y∗ = A> z∗ = A>(AA>)−1 b = x∗.
Hence, x∗ = A>(AA>)−1 b is the only vector in R(A>) that satisfies Ax∗ = b.
12.24
a. We have
x(0) = (A0> A0)−1 A0> b(0) = G0−1 A0> b(0).
Similarly,
x(1) = (A1> A1)−1 A1> b(1) = G1−1 A1> b(1).
b. Now,
G0 = [A1> a1] [A1; a1>] = A1> A1 + a1 a1> = G1 + a1 a1>.
Hence,
G1 = G0 − a1 a1>.
c. Using the Sherman-Morrison formula,
P1 = G1−1 = (G0 − a1 a1>)−1 = G0−1 − (G0−1(−a1) a1> G0−1)/(1 + a1> G0−1(−a1)) = P0 + (P0 a1 a1> P0)/(1 − a1> P0 a1).
d. We have
A0> b(0) = G0 G0−1 A0> b(0) = G0 x(0) = (G1 + a1 a1>) x(0) = G1 x(0) + a1 a1> x(0).
e. Finally, writing b(0) = [b(1); b1] so that A0> b(0) = A1> b(1) + a1 b1, and using parts a-d,
x(1) = P1 A1> b(1) = P1 (A0> b(0) − a1 b1) = P1 (G1 x(0) + a1 a1> x(0) − a1 b1) = x(0) − P1 a1 (b1 − a1> x(0)).
In general,
P k+1 = P k + (P k ak+1 ak+1> P k)/(1 − ak+1> P k ak+1)
x(k+1) = x(k) − P k+1 ak+1 (bk+1 − ak+1> x(k)).
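The downdating recursion can be sanity-checked in Python: remove the last row of a random least squares problem and compare with the batch solution that never included it (the data below are arbitrary illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(5)
A0 = rng.standard_normal((6, 3))       # arbitrary full-rank data matrix
b0 = rng.standard_normal(6)
A1, a1 = A0[:-1], A0[-1]               # A0 stacks A1 and the extra row a1'
b1, beta1 = b0[:-1], b0[-1]

P0 = np.linalg.inv(A0.T @ A0)
x0 = P0 @ A0.T @ b0

# Downdating formulas derived above: remove the measurement (a1, beta1).
P1 = P0 + np.outer(P0 @ a1, a1 @ P0) / (1 - a1 @ P0 @ a1)
x1 = x0 - P1 @ a1 * (beta1 - a1 @ x0)

# Compare with the batch solution that never saw the last measurement.
print(np.allclose(x1, np.linalg.inv(A1.T @ A1) @ A1.T @ b1))  # True
```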
12.25
Using the notation of the proof of Theorem 12.3, we can write
x(k+1) = x(k) + µ (bR(k)+1 − aR(k)+1> x(k)) aR(k)+1/‖aR(k)+1‖2.
Hence, since x(0) = 0,
x(k) = Σ_{i=0}^{k−1} µ (bR(i)+1 − aR(i)+1> x(i)) aR(i)+1/‖aR(i)+1‖2,
which is a linear combination of the rows of A; that is, x(k) ∈ R(A>) for all k.
12.26
a. Suppose y1 satisfies A(y1 + x(0)) = b, i.e., Ay1 = b − Ax(0).
Hence, by definition of x∗,
b. Note that y(0) = 0. By Theorem 12.3, the sequence {y(k)} converges to the unique point y∗ that minimizes ‖y‖ subject to {Ay = b − Ax(0)}. Hence {x(k)} converges to y∗ + x(0). From the proof of part a, x∗ = y∗ + x(0) minimizes ‖x − x(0)‖ subject to {x : Ax = b}. This completes the proof.
12.27
Following the proof of Theorem 12.3, assuming ‖a‖ = 1 without loss of generality, we arrive at the same recursion as before. Since x(k), x∗ ∈ R(A>) = span{a} by Exercise 12.25, we have x(k) − x∗ ∈ span{a}. Hence, the Cauchy-Schwarz inequality holds with equality:
(a>(x(k) − x∗))2 = ‖a‖2 ‖x(k) − x∗‖2 = ‖x(k) − x∗‖2,
since ‖a‖ = 1 by assumption. Thus, we obtain
lim_{k→∞} ‖x(k) − x∗‖ = 0.
Substituting aR(k)+1> x∗ = bR(k)+1 yields the desired result.
12.29
We will prove this by contradiction. Suppose Cx∗ is not the minimizer of ‖By − b‖2 over Rr. Let ŷ be the minimizer of ‖By − b‖2 over Rr. Then, ‖B ŷ − b‖2 < ‖BCx∗ − b‖2 = ‖Ax∗ − b‖2. Since C is of full rank, there exists x̂ ∈ Rn such that ŷ = C x̂. Therefore,
‖Ax̂ − b‖2 = ‖BC x̂ − b‖2 = ‖B ŷ − b‖2 < ‖Ax∗ − b‖2,
which contradicts the definition of x∗. Hence, Cx∗ is the minimizer of ‖By − b‖2 over Rr.
12.30
a. Note that A> = C> B> is a full rank factorization of A>, so that (A>)† = (B>)† (C>)†. Since (A†)> = (C† B†)> = (B†)> (C†)>, to show that (A>)† = (A†)>, it is enough to show that
(B>)† = (B†)>
(C>)† = (C†)>.
To this end, note that (B > )† = B(B > B)−1 , and (C > )† = (CC > )−1 C. On the other hand, (B † )> =
((B > B)−1 B > )> = B(B > B)−1 , and (C † )> = (C > (CC > )−1 )> = (CC > )−1 C, which completes the proof.
b. Note that A† = C † B † , which is a full rank factorization of A† . Therefore, (A† )† = (B † )† (C † )† . Hence,
to show that (A† )† = A, it is enough to show that
(B † )† = B
(C † )† = C.
To this end, note that (B † )† = ((B > B)−1 B > )† = B since B † is a full rank matrix. Similarly, (C † )† =
(C > (CC > )−1 )† = C since C † is a full rank matrix. This completes the proof.
12.31
⇒: We prove properties 1–4 in turn.
1. This is immediate.
2. Let A = BC be a full rank factorization of A. We have A† = C† B†, where B† = (B> B)−1 B> and C† = C>(CC>)−1. Note that B† B = I and CC† = I. Now,
A† A A† = C† B† B C C† B† = C† B† = A†.
3. We have
4. We have
⇐: By property 1, we immediately have AA† A = A. Therefore, it remains to show that there exist
matrices U and V such that A† = U A> and A† = A> V .
For this, we note from property 2 that A† = A† AA† . But from property 3, AA† = (AA† )> = (A† )> A> .
Hence, A† = A† (A† )> A> . Setting U = A† (A† )> , we get that A† = U A> .
Similarly, we note from property 4 that A† A = (A† A)> = A>(A†)>. Substituting this back into property
2 yields A† = A† AA† = A> (A† )> A† . Setting V = (A† )> A† yields A† = A> V . This completes the proof.
12.32
(Taken from [23, p. 24]) Let
A1 = [0 0 0; 0 1 1; 0 1 0], A2 = [1 0 0; 0 1 0; 0 0 0].
We compute
A1† = [0 0 0; 0 0 1; 0 1 −1], A2† = [1 0 0; 0 1 0; 0 0 0] = A2.
We have
A1 A2 = [0 0 0; 0 1 0; 0 1 0] = [0, 1, 1]> [0 1 0],
which is a full rank factorization. Therefore,
(A1 A2)† = [0 0 0; 0 1/2 1/2; 0 0 0].
But
A2† A1† = [0 0 0; 0 0 1; 0 0 0],
and hence (A1 A2)† ≠ A2† A1†.
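This counterexample is easy to confirm with NumPy's pseudoinverse routine (a quick check, not part of the original solution):

```python
import numpy as np

A1 = np.array([[0.0, 0, 0], [0, 1, 1], [0, 1, 0]])
A2 = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 0]])

lhs = np.linalg.pinv(A1 @ A2)
rhs = np.linalg.pinv(A2) @ np.linalg.pinv(A1)
print(np.allclose(lhs, rhs))  # False: (A1 A2)^+ != A2^+ A1^+ in general
```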
b. The Conjugate Gradient algorithm applied to our training problem is:
1. Set k := 0; select the initial point w(0).
2. g(0) = −X d (y d − X d> w(0)). If g(0) = 0, stop; else set d(0) = −g(0).
3. αk = −(d(k)> g(k))/(d(k)> X d X d> d(k)).
4. w(k+1) = w(k) + αk d(k).
5. g(k+1) = −X d (y d − X d> w(k+1)). If g(k+1) = 0, stop.
6. βk = (g(k+1)> X d X d> d(k))/(d(k)> X d X d> d(k)).
7. d(k+1) = −g(k+1) + βk d(k).
8. Set k := k + 1; go to step 3.
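The steps above can be sketched in Python for a generic least squares training problem; the data below are arbitrary random stand-ins for X d and y d, not the text's training set:

```python
import numpy as np

rng = np.random.default_rng(6)
Xd = rng.standard_normal((3, 20))      # arbitrary training inputs (columns)
yd = rng.standard_normal(20)           # arbitrary training targets

# CG for minimizing 0.5 ||yd - Xd' w||^2; the Hessian is Xd Xd'.
w = np.zeros(3)
g = -Xd @ (yd - Xd.T @ w)
d = -g
for k in range(3):                     # at most n steps for a quadratic
    Qd = Xd @ (Xd.T @ d)
    alpha = -(d @ g) / (d @ Qd)
    w = w + alpha * d
    g = -Xd @ (yd - Xd.T @ w)
    beta = (g @ Qd) / (d @ Qd)
    d = -g + beta * d

print(np.allclose(Xd @ (Xd.T @ w), Xd @ yd))  # True: normal equations hold
```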
(Figure: trajectory of the algorithm in the (w_1, w_2) plane.)
(Figure: error surface of the trained network plotted against x_1 and x_2.)
13.2
a. The expression we seek is
ek+1 = (1 − µ)ek.
To derive the above, we write
ek+1 − ek = (y d − x d> w(k+1)) − (y d − x d> w(k)) = −x d>(w(k+1) − w(k)).
function y=grad(w,xd,yd);
xd1=xd(1); xd2=xd(2);
wh11=w(1);
wh21=w(2);
wh12=w(3);
wh22=w(4);
wo11=w(5);
wo12=w(6);
v1=wh11*xd1+wh12*xd2;
v2=wh21*xd1+wh22*xd2;
z1=sigmoid(v1);
z2=sigmoid(v2);
y1=sigmoid(wo11*z1+wo12*z2);
d1=(yd-y1)*y1*(1-y1);
y(1)=-d1*wo11*z1*(1-z1)*xd1;
y(2)=-d1*wo12*z2*(1-z2)*xd1;
y(3)=-d1*wo11*z1*(1-z1)*xd2;
y(4)=-d1*wo12*z2*(1-z2)*xd2;
y(5)=-d1*z1;
y(6)=-d1*z2;
y=y’;
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required of the gradient.
%OPTIONS(14) is the maximum number of iterations.
%For more information type HELP FOPTIONS.
%
%The next two variants return the value of the final point.
%The last two variants return a vector of the final point and the
%number of iterations.
if nargin ~= 3
options = [];
if nargin ~= 2
disp(’Wrong number of arguments.’);
return;
end
end
if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end
clc;
format compact;
format short e;
options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);
for k = 1:max_iter,
xcurr=xnew;
g_curr=feval(grad,xcurr);
alpha=10.0;
xnew = xcurr-alpha*g_curr;
if print,
disp(’Iteration number k =’)
disp(k); %print iteration index k
disp(’alpha =’);
disp(alpha); %print alpha
disp(’Gradient = ’);
disp(g_curr’); %print gradient
disp(’New point =’);
disp(xnew’); %print new point
end %if
if k == max_iter
disp(’Terminating with maximum number of iterations’);
end %if
end %for
if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp(’Final point =’);
disp(xnew’);
disp(’Number of iterations =’);
disp(k);
end %if
%------------------------------------------------------------------
To apply the above routine, we need the following M-file for the gradient.
function y=grad(w,xd,yd);
wh11=w(1);
wh21=w(2);
wh12=w(3);
wh22=w(4);
wo11=w(5);
wo12=w(6);
t1=w(7);
t2=w(8);
t3=w(9);
xd1=xd(1); xd2=xd(2);
v1=wh11*xd1+wh12*xd2-t1;
v2=wh21*xd1+wh22*xd2-t2;
z1=sigmoid(v1);
z2=sigmoid(v2);
y1=sigmoid(wo11*z1+wo12*z2-t3);
d1=(yd-y1)*y1*(1-y1);
y(1)=-d1*wo11*z1*(1-z1)*xd1;
y(2)=-d1*wo12*z2*(1-z2)*xd1;
y(3)=-d1*wo11*z1*(1-z1)*xd2;
y(4)=-d1*wo12*z2*(1-z2)*xd2;
y(5)=-d1*z1;
y(6)=-d1*z2;
y(7)=d1*wo11*z1*(1-z1);
y(8)=d1*wo12*z2*(1-z2);
y(9)=d1;
y=y’;
>> options(2)=10^(-7);
>> options(3)=10^(-7);
>> options(14)=10000;
>> w0=[0.1,0.3,0.3,0.4,0.4,0.6,0.1,0.1,-0.1]’;
>> [wstar,N]=backprop(’grad’,w0,options)
As we can see from the above, the results coincide with Example 13.3. The table of the outputs of the
trained network corresponding to the training input data is shown in Table 13.2.
14.1
hold on
axis square
%Plot the contours of the objective function
[X1,X2]=meshgrid(-1:0.01:1);
Y=(X2-X1).^4+12.*X1.*X2-X1+X2-3;
[C,h] = contour(X1,X2,Y,20);
clabel(C,h);
while 1
% Check the size of simplex for stopping criterion
simpsize=norm(x(:,1)-x(:,2))+norm(x(:,2)-x(:,3))+norm(x(:,3)-x(:,1));
if(simpsize<1e-6)
break;
end
lastpt=x(:,3);
% Sort the simplex
x=sort_points(x,3);
% Reflection
centro=1/2*(x(:,1)+x(:,2));
xr=centro+rho*(centro-x(:,3));
% Accept condition
if(obj_fun(xr)>=obj_fun(x(:,1)) && obj_fun(xr)<obj_fun(x(:,2)))
x(:,3)=xr;
% Expand condition
elseif(obj_fun(xr)<obj_fun(x(:,1)))
xe=centro+rho*chi*(centro-x(:,3));
if(obj_fun(xe)<obj_fun(xr))
x(:,3)=xe;
else
x(:,3)=xr;
end
else
x=shrink(x,sigma);
end
% Plot the new point and connect
plot([lastpt(1),x(1,3)],[lastpt(2),x(2,3)],'--*');
end
% Output the final simplex (minimizer)
x(:,1)
% obj_fun
function y = obj_fun(x)
y=(x(1)-x(2))^4+12*x(1)*x(2)-x(1)+x(2)-3;
% sort_points
function y = sort_points(x,N)
for i=1:(N-1)
for j=1:(N-i)
if(obj_fun(x(:,j))>obj_fun(x(:,j+1)))
tmp=x(:,j);
x(:,j)=x(:,j+1);
x(:,j+1)=tmp;
end
end
end
y=x;
% shrink
function y = shrink(x,sigma)
x(:,2)=x(:,1)+sigma*(x(:,2)-x(:,1));
x(:,3)=x(:,1)+sigma*(x(:,3)-x(:,1));
y=x;
When we run the MATLAB code above with initial condition [0.55, 0.7]> , we obtain the following plot:
(Figure: contour plot of f with the path of the simplex starting from [0.55, 0.7]>.)
When we run the MATLAB code above with initial condition [−0.9, −0.5]> , we obtain the following plot:
(Figure: contour plot of f with the path of the simplex starting from [−0.9, −0.5]>.)
Note that this function has two local minimizers. The algorithm terminates at these two minimizers with
the two different initial conditions. This behavior depends on the value of lambda, which determines the
initial simplex. It is possible to reach both minimizers starting from the same initial point by using different
values of lambda. In the solution above, the initial simplex is “small”—lambda is just 0.1.
14.2
A MATLAB routine for a naive random search algorithm is given by the M-file rs_demo shown below:
function [x,N]=random_search(funcname,xnew,options);
%Naive random search demo
% [x,N]=random_search(funcname,xnew,options);
% print = options(1);
% alpha = options(18);
if nargin ~= 3
options = [];
if nargin ~= 2
disp(’Wrong number of arguments.’);
return;
end
end
if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end
if length(options) < 18
options(18)=1.0; %optional step size
end
format compact;
format short e;
options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);
alpha0 = options(18);
if funcname == ’f_r’,
ros_cnt
elseif funcname == ’f_p’,
pks_cnt;
end %if
if length(xnew) == 2
plot(xnew(1),xnew(2),’o’)
text(xnew(1),xnew(2),’Start Point’)
xlower = [-2;-1];
xupper = [2;3];
end
f_0=feval(funcname,xnew);
xbestcurr = xnew;
xbestold = xnew;
f_best=feval(funcname,xnew);
f_best=10^(sign(f_best))*f_best;
for k = 1:max_iter,
xcurr=xbestcurr;
f_curr=feval(funcname,xcurr);
alpha = alpha0;
xnew = xcurr + alpha*(2*rand(length(xcurr),1)-1);
for i=1:length(xnew),
xnew(i) = max(xnew(i),xlower(i));
xnew(i) = min(xnew(i),xupper(i));
end %for
f_new=feval(funcname,xnew);
if print,
disp(’Iteration number k =’)
disp(k); %print iteration index k
disp(’alpha =’);
disp(alpha); %print alpha
disp(’New point =’);
disp(xnew’); %print new point
disp(’Function value =’);
disp(f_new); %print func value at new point
end %if
break;
end %if
pltpts(xbestcurr,xbestold);
if k == max_iter
disp(’Terminating with maximum number of iterations’);
end %if
end %for
if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp(’Final point =’);
disp(xbestcurr’);
disp(’Number of iterations =’);
disp(k);
end %if
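The update-acceptance test of rs_demo falls on a page boundary in the listing above. The naive random search it implements can be sketched compactly in Python (the box limits, step size α, and the improvement-only acceptance rule follow the listing; the objective here is the negated peaks function from f_p.m below):

```python
import math, random

def peaks_neg(x1, x2):
    # Negative of MATLAB's "peaks" function, as in f_p.m.
    return -(3*(1 - x1)**2*math.exp(-x1**2 - (x2 + 1)**2)
             - 10*(x1/5 - x1**3 - x2**5)*math.exp(-x1**2 - x2**2)
             - math.exp(-(x1 + 1)**2 - x2**2)/3)

def random_search(f, x0, alpha=0.5, lower=(-2, -1), upper=(2, 3),
                  max_iter=1000, seed=0):
    rng = random.Random(seed)
    xbest, fbest = list(x0), f(*x0)
    for _ in range(max_iter):
        # Uniform step of size alpha around the current best, clipped to the box.
        xnew = [min(max(xbest[i] + alpha*(2*rng.random() - 1), lower[i]), upper[i])
                for i in range(2)]
        fnew = f(*xnew)
        if fnew < fbest:  # naive rule: accept only strict improvements
            xbest, fbest = xnew, fnew
    return xbest, fbest

print(random_search(peaks_neg, (0, -2)))
```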
A MATLAB routine for a simulated annealing algorithm is given by the M-file sa_demo shown below:
function [x,N]=simulated_annealing(funcname,xnew,options);
%Simulated annealing demo
% random_search(funcname,xnew,options);
% print = options(1);
% gamma = options(15);
% alpha = options(18);
if nargin ~= 3
options = [];
if nargin ~= 2
disp(’Wrong number of arguments.’);
return;
end
end
if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end
if length(options) < 15
options(15)=5.0; %
end
if options(15)==0
options(15)=5.0; %
end
if length(options) < 18
options(18)=0.5; %optional step size
end
format compact;
format short e;
options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);
alpha = options(18);
gamma = options(15);
k0=2;
if funcname == ’f_r’,
ros_cnt
elseif funcname == ’f_p’,
pks_cnt;
end %if
if length(xnew) == 2
plot(xnew(1),xnew(2),’o’)
text(xnew(1),xnew(2),’Start Point’)
xlower = [-2;-1];
xupper = [2;3];
end
f_0=feval(funcname,xnew);
xbestcurr = xnew;
xbestold = xnew;
xcurr = xnew;
f_best=feval(funcname,xnew);
f_best=10^(sign(f_best))*f_best;
for k = 1:max_iter,
f_curr=feval(funcname,xcurr);
for i=1:length(xnew),
xnew(i) = max(xnew(i),xlower(i));
xnew(i) = min(xnew(i),xupper(i));
end %for
f_new=feval(funcname,xnew);
xbestold = xbestcurr;
xbestcurr = xnew;
f_best = f_new;
end
if print,
disp(’Iteration number k =’)
disp(k); %print iteration index k
disp(’alpha =’);
disp(alpha); %print alpha
disp(’New point =’);
disp(xnew’); %print new point
disp(’Function value =’);
disp(f_new); %print func value at new point
end %if
pltpts(xbestcurr,xbestold);
if k == max_iter
disp(’Terminating with maximum number of iterations’);
end %if
end %for
if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp(’Final point =’);
disp(xbestcurr’);
disp(’Objective function value =’);
disp(f_best);
disp(’Number of iterations =’);
disp(k);
end %if
pltpts.m:
function out = pltpts(xnew,xcurr);
plot([xcurr(1),xnew(1)],[xcurr(2),xnew(2)],'r-',xnew(1),xnew(2),'o','Erasemode',...
'none');
drawnow; % Draws current graph now
%pause(1)
out = [];
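The acceptance step of sa_demo also falls on a page boundary above. A minimal Python sketch of simulated annealing with the Metropolis acceptance rule follows; the logarithmic cooling schedule T = gamma/log(k + k0) is an assumption consistent with the gamma and k0 settings in the listing, and the step rule and box limits mirror the demo:

```python
import math, random

def simulated_annealing(f, x0, alpha=0.5, gamma=5.0, k0=2,
                        lower=(-2, -1), upper=(2, 3), max_iter=2000, seed=0):
    rng = random.Random(seed)
    xcurr, fcurr = list(x0), f(*x0)
    xbest, fbest = list(xcurr), fcurr
    for k in range(1, max_iter + 1):
        T = gamma/math.log(k + k0)  # assumed logarithmic cooling schedule
        xnew = [min(max(xcurr[i] + alpha*(2*rng.random() - 1), lower[i]), upper[i])
                for i in range(2)]
        fnew = f(*xnew)
        # Metropolis rule: always accept a better point; accept a worse
        # point with probability exp(-(fnew - fcurr)/T).
        if fnew < fcurr or rng.random() < math.exp(-(fnew - fcurr)/T):
            xcurr, fcurr = xnew, fnew
        if fcurr < fbest:
            xbest, fbest = list(xcurr), fcurr
    return xbest, fbest
```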
f_p.m:
function y=f_p(x);
y=3*(1-x(1)).^2.*exp(-(x(1).^2)-(x(2)+1).^2) ...
    - 10.*(x(1)/5-x(1).^3-x(2).^5).*exp(-x(1).^2-x(2).^2) ...
    - exp(-(x(1)+1).^2-x(2).^2)/3;
y=-y;
pks_cnt.m:
echo off
X = [-3:0.2:3]’;
Y = [-3:0.2:3]’;
[x,y]=meshgrid(X’,Y’) ;
func = 3*(1-x).^2.*exp(-x.^2-(y+1).^2) ...
    - 10.*(x/5-x.^3-y.^5).*exp(-x.^2-y.^2) ...
    - exp(-(x+1).^2-y.^2)/3;
func = -func;
clf
levels = exp(-5:10);
levels = [-5:0.9:10];
contour(X,Y,func,levels,’k--’)
xlabel(’x_1’)
ylabel(’x_2’)
title(’Minimization of Peaks function’)
drawnow;
hold on
plot(-0.0303,1.5455,’o’)
text(-0.0303,1.5455,’Solution’)
To run the naive random search algorithms, we first pick a value of α = 0.5, which involves setting
options(18)=0.5. We then use the command rs_demo(’f_p’,[0;-2],options). The resulting plot of the
algorithm trajectory is given below. As we can see, the algorithm is stuck at a local minimizer. (By running
the algorithm several times, the reader can verify that this non-convergent behavior is typical.)
[Figure: trajectory of the naive random search with α = 0.5 from the start point [0, −2]⊤; the search stalls at a local minimizer away from the labeled solution]
Next, we try α = 1.5, which involves setting options(18)=1.5. We then use the command
rs_demo(’f_p’,[0;-2],options) again, to obtain the plot shown below. This time, the algorithm reaches
the global minimizer.
[Figure: "Minimization of Peaks function" — trajectory of the naive random search with α = 1.5, from the start point to the global minimizer near the labeled solution]
Finally, we again set α = 0.5, using options(18)=0.5. We then run the simulated annealing code using
sa_demo(’f_p’,[0;-2],options). The algorithm can be seen to converge to the global minimizer, as
plotted below.
[Figure: trajectory of simulated annealing with α = 0.5, from the start point to the global minimizer near the labeled solution]
14.3
A MATLAB routine for a particle swarm algorithm is:
% A particle swarm optimizer
% to find the minimum/maximum of MATLAB's peaks function
% D---# of inputs to the function (dimension of problem)
clear
%Parameters
ps=10;
D=2;
ps_lb=-3;
ps_ub=3;
vel_lb=-1;
vel_ub=1;
iteration_n = 50;
range = [-3, 3; -3, 3]; % Range of the input variables
% Plot contours of peaks function
[x, y, z] = peaks;
%pcolor(x,y,z); shading interp; hold on;
%contour(x, y, z, 20, ’r’);
mesh(x,y,z)
% hold off;
%colormap(gray);
set(gca,’Fontsize’,14)
axis([-3 3 -3 3 -9 9])
%axis square;
xlabel(’x_1’,’Fontsize’,14);
ylabel(’x_2’,’Fontsize’,14);
zlabel(’f(x_1,x_2)’,’Fontsize’,14);
hold on
ps_pos=ps_lb + (ps_ub-ps_lb).*rand(ps,D);
ps_vel=vel_lb + (vel_ub-vel_lb).*rand(ps,D);
p_best = ps_pos;
f1=’3*(1-ps_pos(i,1))^2*exp(-ps_pos(i,1)^2-(ps_pos(i,2)+1)^2)’;
f2=’-10*(ps_pos(i,1)/5-ps_pos(i,1)^3-ps_pos(i,2)^5)*exp(-ps_pos(i,1)^2-ps_pos(i,2)^2)’;
f3=’-(1/3)*exp(-(ps_pos(i,1)+1)^2-ps_pos(i,2)^2)’;
p_best_fit=zeros(ps,1);
for i=1:ps
g1(i)=3*(1-ps_pos(i,1))^2*exp(-ps_pos(i,1)^2-(ps_pos(i,2)+1)^2);
g2(i)=-10*(ps_pos(i,1)/5-ps_pos(i,1)^3-ps_pos(i,2)^5)*exp(-ps_pos(i,1)^2-ps_pos(i,2)^2);
g3(i)=-(1/3)*exp(-(ps_pos(i,1)+1)^2-ps_pos(i,2)^2);
p_best_fit(i)=g1(i)+g2(i)+g3(i);
end
p_best_fit;
hand_p3=plot3(ps_pos(:,1),ps_pos(:,2),p_best_fit’,’*k’,’markersize’,15,’erase’,’xor’);
% initial g_best
[g_best_val,g_best_idx] = max(p_best_fit);
%[g_best_val,g_best_idx] = min(p_best_fit); this is to minimize
g_best=ps_pos(g_best_idx,:);
+1.494*rand*(p_best(count,:)-ps_pos(count,:))... % independent
+1.494*rand*(g_best-ps_pos(count,:)); % social
end
ps_vel;
% update new position
ps_pos = ps_pos + ps_vel;
%update p_best
for i=1:ps
g1(i)=3*(1-ps_pos(i,1))^2*exp(-ps_pos(i,1)^2-(ps_pos(i,2)+1)^2);
g2(i)=-10*(ps_pos(i,1)/5-ps_pos(i,1)^3-ps_pos(i,2)^5)*exp(-ps_pos(i,1)^2-ps_pos(i,2)^2);
g3(i)=-(1/3)*exp(-(ps_pos(i,1)+1)^2-ps_pos(i,2)^2);
ps_current_fit(i)=g1(i)+g2(i)+g3(i);
if ps_current_fit(i)>p_best_fit(i)
p_best_fit(i)=ps_current_fit(i);
p_best(i,:)=ps_pos(i,:);
end
end
p_best_fit;
%update g_best
[g_best_val,g_best_idx] = max(p_best_fit);
g_best=ps_pos(g_best_idx,:);
set(hand_p3,’xdata’,ps_pos(:,1),’ydata’,ps_pos(:,2),’zdata’,ps_current_fit’);
drawnow
pause
end
g_best
g_best_val
figure;
x = 1:iteration_n;
plot(x, upper, ’o’, x, average, ’x’, x, lower, ’*’);
hold on;
plot(x, [upper average lower]);
hold off;
legend(’Best’, ’Average’, ’Poorest’);
xlabel(’Iterations’); ylabel(’Objective function value’);
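The start of the velocity-update loop falls on a page boundary in the listing above; only the cognitive and social terms (coefficients 1.494) survive. The full update can be sketched in Python as follows; the inertia weight w = 0.729 is an assumption (a common companion to the 1.494 coefficients), not shown in the listing:

```python
import math, random

def peaks(x1, x2):
    # MATLAB's "peaks" function, the objective maximized by the swarm above.
    return (3*(1 - x1)**2*math.exp(-x1**2 - (x2 + 1)**2)
            - 10*(x1/5 - x1**3 - x2**5)*math.exp(-x1**2 - x2**2)
            - math.exp(-(x1 + 1)**2 - x2**2)/3)

def pso_maximize(f, ps=10, w=0.729, c1=1.494, c2=1.494,
                 lb=-3.0, ub=3.0, vmax=1.0, iters=50, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lb, ub), rng.uniform(lb, ub)] for _ in range(ps)]
    vel = [[rng.uniform(-vmax, vmax), rng.uniform(-vmax, vmax)] for _ in range(ps)]
    pbest = [p[:] for p in pos]         # each particle's best position so far
    pfit = [f(*p) for p in pos]         # ... and its value
    g = max(range(ps), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]  # swarm (global) best
    for _ in range(iters):
        for i in range(ps):
            for d in range(2):
                # inertia + cognitive (own best) + social (swarm best) terms
                vel[i][d] = (w*vel[i][d]
                             + c1*rng.random()*(pbest[i][d] - pos[i][d])
                             + c2*rng.random()*(gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(*pos[i])
            if fi > pfit[i]:
                pfit[i], pbest[i] = fi, pos[i][:]
                if fi > gfit:
                    gfit, gbest = fi, pos[i][:]
    return gbest, gfit

print(pso_maximize(peaks))
```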
When we run the MATLAB code above, we obtain a plot of the initial set of particles, as shown below.
[Figure: three snapshots of the particle swarm plotted on the mesh of f(x1, x2): the initial particles, an intermediate iteration, and the final clustering near the maximizer]
A plot of the objective function values (best, average, and poorest) is shown below.
[Figure: best, average, and poorest objective function values over 50 iterations]
14.4
a. Expanding the right hand side of the second expression gives the desired result.
b. Applying the algorithm, we get a binary representation of 11111001011, i.e.,
1995 = 210 + 29 + 28 + 27 + 26 + 23 + 21 + 20 .
d. We have 19 = 24 + 21 + 20 , i.e., the binary representation for 19 is 10011. For the fractional part, we
need at least 7 bits to keep at least the same accuracy. We have 0.95 = 2−1 + 2−2 + 2−3 + 2−4 + 2−7 + · · · ,
i.e., the binary representation is 0.1111001 · · · . Therefore, the binary representation of 19.95 with at least
the same degree of accuracy is 10011.1111001.
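These representations can be verified directly with a quick Python check:

```python
# Part b: binary representation of 1995
assert 1995 == 2**10 + 2**9 + 2**8 + 2**7 + 2**6 + 2**3 + 2**1 + 2**0
assert format(1995, 'b') == '11111001011'

# Part d: integer part 19 = 10011 (binary) ...
assert format(19, 'b') == '10011'

# ... and 0.1111001 (binary) approximates 0.95 to within 2^-7
frac = 2**-1 + 2**-2 + 2**-3 + 2**-4 + 2**-7
assert abs(frac - 0.95) < 2**-7
print(frac)  # 0.9453125
```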
14.5
It suffices to prove the result for the case where only one symbol is swapped, since the general case is
obtained by repeating the argument. We have two scenarios. First, suppose the symbol swapped is at a
position corresponding to a don’t care symbol in H. Clearly, after the swap, both chromosomes will still be
in H. Second, suppose the symbol swapped is at a position corresponding to a fixed symbol in H. Since
both chromosomes are in H, their symbols at that position must be identical. Hence, the swap does not
change the chromosomes. This completes the proof.
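The argument can be checked exhaustively for a small schema. In the following Python sketch the schema 1*0* over {0,1}^4 is an illustrative choice, not from the text:

```python
# Schema H = 1*0*: positions 0 and 2 fixed, * = "don't care".
def in_H(c):
    return c[0] == '1' and c[2] == '0'

members = [format(i, '04b') for i in range(16) if in_H(format(i, '04b'))]
assert len(members) == 4  # two free positions => 4 chromosomes in H

for a in members:
    for b in members:
        for pos in range(4):
            # swap the symbols of a and b at position pos
            a2 = a[:pos] + b[pos] + a[pos + 1:]
            b2 = b[:pos] + a[pos] + b[pos + 1:]
            assert in_H(a2) and in_H(b2)
print('all swaps keep both chromosomes in H')
```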
14.6
Consider a given chromosome in M(k) ∩ H. The probability that it is chosen for crossover is qc . If neither
of its offsprings is in H, then at least one of the crossover points must be between the corresponding first
and last fixed symbols of H. The probability of this is 1 − (1 − δ(H)/(L − 1))2 . To see this, note that
the probability that each crossover point is not between the corresponding first and last fixed symbols is
1 − δ(H)/(L − 1), and thus the probability that both crossover points are not between the corresponding
first and last fixed symbols of H is (1 − δ(H)/(L − 1))2 . Hence, the probability that the given chromosome
is chosen for crossover and neither of its offsprings is in H is bounded above by
qc (1 − (1 − δ(H)/(L − 1))²).
14.7
As for two-point crossover, the n-point crossover operation is a composition of n one-point crossover opera-
tions (i.e., n one-point crossover operations in succession). The required result for this case is as follows.
Lemma: Given a chromosome in M(k) ∩ H, the probability that it is chosen for crossover and neither of its
offsprings is in H is bounded above by
qc (1 − (1 − δ(H)/(L − 1))ⁿ).
For the proof, proceed as in the solution of Exercise 14.6, replacing 2 by n.
14.8
function M=roulette_wheel(fitness);
%function M=roulette_wheel(fitness)
%fitness = vector of fitness values of chromosomes in population
%M = vector of indices indicating which chromosome in the
% given population should appear in the mating pool
cum_fitness = cumsum(fitness)/sum(fitness); % assumes nonnegative fitness
for i = 1:length(fitness),
  tmp = find(cum_fitness-rand>0);
  M(i) = tmp(1);
end
14.9
L = length(parent1);
crossover_pt = ceil(rand*(L-1));
offspring1 = [parent1(1:crossover_pt) parent2(crossover_pt+1:L)];
offspring2 = [parent2(1:crossover_pt) parent1(crossover_pt+1:L)];
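A Python equivalent of this crossover fragment, convenient for testing outside MATLAB (chromosomes are represented as strings; the crossover point is drawn uniformly from {1, ..., L−1}, matching ceil(rand*(L-1))):

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    # Exchange tails of the two parents after a random crossover point.
    L = len(parent1)
    cp = rng.randrange(1, L)
    return parent1[:cp] + parent2[cp:], parent2[:cp] + parent1[cp:]

o1, o2 = one_point_crossover('00000', '11111', random.Random(1))
print(o1, o2)
```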
14.10
14.11
A MATLAB routine for a genetic algorithm with binary encoding is:
function [winner,bestfitness] = ga(L,N,fit_func,options)
% function winner = GA(L,N,fit_func)
% Function call: GA(L,N,’f’)
% L = length of chromosomes
% N = population size (must be an even number)
% f = name of fitness value function
%
%Options:
%print = options(1);
%selection = options(5);
%max_iter=options(14);
%p_c = options(18);
%p_m = p_c/100;
%
%Selection:
% options(5) = 0 for roulette wheel, 1 for tournament
clf;
if nargin ~= 4
options = [];
if nargin ~= 3
disp(’Wrong number of arguments.’);
return;
end
end
if length(options) >= 14
if options(14)==0
options(14)=3*N;
end
else
options(14)=3*N;
end
if length(options) < 18
options(18)=0.75; %optional crossover rate
end
%format compact;
%format short e;
options = foptions(options);
print = options(1);
selection = options(5);
max_iter=options(14);
p_c = options(18);
p_m = p_c/100;
P = rand(N,L)>0.5;
bestvaluesofar = 0;
%Initial evaluation
for i = 1:N,
fitness(i) = feval(fit_func,P(i,:));
end
[bestvalue,best] = max(fitness);
if bestvalue > bestvaluesofar,
bestsofar = P(best,:);
bestvaluesofar = bestvalue;
end
for k = 1:max_iter,
%Selection
fitness = fitness - min(fitness); % to keep the fitness positive
if sum(fitness) == 0,
disp(’Population has identical chromosomes -- STOP’);
disp(’Number of iterations:’);
disp(k);
for i = k:max_iter,
upper(i)=upper(i-1);
average(i)=average(i-1);
lower(i)=lower(i-1);
end
break;
else
fitness = fitness/sum(fitness);
end
if selection == 0,
%roulette-wheel
cum_fitness = cumsum(fitness);
for i = 1:N,
tmp = find(cum_fitness-rand>0);
m(i) = tmp(1);
end
else
%tournament
for i = 1:N,
fighter1=ceil(rand*N);
fighter2=ceil(rand*N);
if fitness(fighter1)>fitness(fighter2),
m(i) = fighter1;
else
m(i) = fighter2;
end
end
end
M = zeros(N,L);
for i = 1:N,
M(i,:) = P(m(i),:);
end
%Crossover
Mnew = M;
for i = 1:N/2
ind1 = ceil(rand*N);
ind2 = ceil(rand*N);
parent1 = M(ind1,:);
parent2 = M(ind2,:);
if rand < p_c
crossover_pt = ceil(rand*(L-1));
offspring1 = [parent1(1:crossover_pt) parent2(crossover_pt+1:L)];
offspring2 = [parent2(1:crossover_pt) parent1(crossover_pt+1:L)];
Mnew(ind1,:) = offspring1;
Mnew(ind2,:) = offspring2;
end
end
%Mutation
mutation_points = rand(N,L) < p_m;
P = xor(Mnew,mutation_points);
%Evaluation
for i = 1:N,
fitness(i) = feval(fit_func,P(i,:));
end
[bestvalue,best] = max(fitness);
if bestvalue > bestvaluesofar,
bestsofar = P(best,:);
bestvaluesofar = bestvalue;
end
upper(k) = bestvalue;
average(k) = mean(fitness);
lower(k) = min(fitness);
end %for
if k == max_iter,
disp(’Algorithm terminated after maximum number of iterations:’);
disp(max_iter);
end
winner = bestsofar;
bestfitness = bestvaluesofar;
if print,
iter = [1:max_iter]’;
plot(iter,upper,’o:’,iter,average,’x-’,iter,lower,’*--’);
legend(’Best’, ’Average’, ’Worst’);
xlabel(’Generations’,’Fontsize’,14);
ylabel(’Objective Function Value’,’Fontsize’,14);
set(gca,’Fontsize’,14);
hold off;
end
function dec=bin2dec(bin,range);
index = polyval(bin,2);
dec = index*((range(2)-range(1))/(2^length(bin)-1)) + range(1);
function y=f_manymax(x);
y=-15*(sin(2*x))^2-(x-2)^2+160;
function y=fit_func1(binchrom);
%1-D fitness function
f=’f_manymax’;
range=[-10,10];
x=bin2dec(binchrom,range);
y=feval(f,x);
Running the above algorithm, we obtain a solution of x∗ = 1.6078, and an objective function value of
159.7640. The figure below shows a plot of the best, average, and worst solution from each generation of the
population.
[Figure: best, average, and worst objective function values over 30 generations]
b. To run the routine, we create the following M-files (we also use the routine bin2dec from part a).
function y=f_peaks(x);
y=3*(1-x(1)).^2.*exp(-(x(1).^2)-(x(2)+1).^2) ...
    - 10.*(x(1)/5-x(1).^3-x(2).^5).*exp(-x(1).^2-x(2).^2) ...
    - exp(-(x(1)+1).^2-x(2).^2)/3;
function y=fit_func2(binchrom);
%2-D fitness function
f=’f_peaks’;
xrange=[-3,3];
yrange=[-3,3];
L=length(binchrom);
x1=bin2dec(binchrom(1:L/2),xrange);
x2=bin2dec(binchrom(L/2+1:L),yrange);
y=feval(f,[x1,x2]);
clear;
options(1)=1;
[x,y]=ga(16,20,’fit_func2’,options);
f=’f_peaks’;
xrange=[-3,3];
yrange=[-3,3];
L=length(x);
x1=bin2dec(x(1:L/2),xrange);
x2=bin2dec(x(L/2+1:L),yrange);
disp(’GA Solution:’);
disp([x1,x2]);
disp(’Objective function value:’);
disp(y);
[Figure: plot of f(x) = −15 sin²(2x) − (x − 2)² + 160 over −10 ≤ x ≤ 10]
Running the above algorithm, we obtain a solution of [−0.0353, 1.4941]⊤ (another run gave x∗ = [−0.0588, 1.5412]⊤ ),
and an objective function value of 7.9815. (Compare this solution with that of Example 14.3.) The figure
below shows a plot of the best, average, and worst solution from each generation of the population.
[Figure: best, average, and worst objective function values over 60 generations]
14.12
A MATLAB routine for a real-number genetic algorithm:
function [winner,bestfitness] = gar(Domain,N,fit_func,options)
% function winner = GAR(Domain,N,fit_func)
% Function call: GAR(Domain,N,’f’)
% Domain = search space; e.g., [-2,2;-3,3] for the space [-2,2]x[-3,3]
% (number of rows of Domain = dimension of search space)
% N = population size (must be an even number)
% f = name of fitness value function
%
%Options:
%print = options(1);
%selection = options(5);
%max_iter=options(14);
%p_c = options(18);
%p_m = p_c/100;
%
%Selection:
% options(5) = 0 for roulette wheel, 1 for tournament
clf;
if nargin ~= 4
options = [];
if nargin ~= 3
disp(’Wrong number of arguments.’);
return;
end
end
if length(options) >= 14
if options(14)==0
options(14)=3*N;
end
else
options(14)=3*N;
end
if length(options) < 18
options(18)=0.75; %optional crossover rate
end
%format compact;
%format short e;
options = foptions(options);
print = options(1);
selection = options(5);
max_iter=options(14);
p_c = options(18);
p_m = p_c/100;
n = size(Domain,1);
lowb = Domain(:,1)’;
upb = Domain(:,2)’;
bestvaluesofar = 0;
for i = 1:N,
P(i,:) = lowb + rand(1,n).*(upb-lowb);
%Initial evaluation
fitness(i) = feval(fit_func,P(i,:));
end
[bestvalue,best] = max(fitness);
if bestvalue > bestvaluesofar,
bestsofar = P(best,:);
bestvaluesofar = bestvalue;
end
for k = 1:max_iter,
%Selection
fitness = fitness - min(fitness); % to keep the fitness positive
if sum(fitness) == 0,
disp(’Population has identical chromosomes -- STOP’);
disp(’Number of iterations:’);
disp(k);
for i = k:max_iter,
upper(i)=upper(i-1);
average(i)=average(i-1);
lower(i)=lower(i-1);
end
break;
else
fitness = fitness/sum(fitness);
end
if selection == 0,
%roulette-wheel
cum_fitness = cumsum(fitness);
for i = 1:N,
tmp = find(cum_fitness-rand>0);
m(i) = tmp(1);
end
else
%tournament
for i = 1:N,
fighter1=ceil(rand*N);
fighter2=ceil(rand*N);
if fitness(fighter1)>fitness(fighter2),
m(i) = fighter1;
else
m(i) = fighter2;
end
end
end
M = zeros(N,n);
for i = 1:N,
M(i,:) = P(m(i),:);
end
%Crossover
Mnew = M;
for i = 1:N/2
ind1 = ceil(rand*N);
ind2 = ceil(rand*N);
parent1 = M(ind1,:);
parent2 = M(ind2,:);
if rand < p_c
a = rand;
offspring1 = a*parent1+(1-a)*parent2+(rand(1,n)-0.5).*(upb-lowb)/10;
offspring2 = a*parent2+(1-a)*parent1+(rand(1,n)-0.5).*(upb-lowb)/10;
%do projection
for j = 1:n,
if offspring1(j)<lowb(j),
offspring1(j)=lowb(j);
elseif offspring1(j)>upb(j),
offspring1(j)=upb(j);
end
if offspring2(j)<lowb(j),
offspring2(j)=lowb(j);
elseif offspring2(j)>upb(j),
offspring2(j)=upb(j);
end
end
Mnew(ind1,:) = offspring1;
Mnew(ind2,:) = offspring2;
end
end
%Mutation
for i = 1:N,
if rand < p_m,
a = rand;
Mnew(i,:) = a*Mnew(i,:) + (1-a)*(lowb + rand(1,n).*(upb-lowb));
end
end
P = Mnew;
%Evaluation
for i = 1:N,
fitness(i) = feval(fit_func,P(i,:));
end
[bestvalue,best] = max(fitness);
if bestvalue > bestvaluesofar,
bestsofar = P(best,:);
bestvaluesofar = bestvalue;
end
upper(k) = bestvalue;
average(k) = mean(fitness);
lower(k) = min(fitness);
end %for
if k == max_iter,
disp(’Algorithm terminated after maximum number of iterations:’);
disp(max_iter);
end
winner = bestsofar;
bestfitness = bestvaluesofar;
if print,
iter = [1:max_iter]’;
plot(iter,upper,’o:’,iter,average,’x-’,iter,lower,’*--’);
legend(’Best’, ’Average’, ’Worst’);
xlabel(’Generations’,’Fontsize’,14);
ylabel(’Objective Function Value’,’Fontsize’,14);
set(gca,’Fontsize’,14);
hold off;
end
To run the routine, we create the following M-file for the given function.
function y=f_wave(x);
y=x(1)*sin(x(1)) + x(2)*sin(5*x(2));
We use the following script to run the algorithm:
options(1)=1;
options(14)=50;
[x,y]=gar([0,10;4,6],20,’f_wave’,options);
disp(’GA Solution:’);
disp(x);
disp(’Objective function value:’);
disp(y);
Running the above algorithm, we obtain a solution of x∗ = [7.9711, 5.3462]> , and an objective function
value of 13.2607. The figure below shows a plot of the best, average, and worst solution from each generation
of the population.
[Figure: best, average, and worst objective function values over 50 generations]
Using the MATLAB function fminunc (from the Optimization Toolbox), we found the optimal point to
be [7.9787, 5.3482]> , with objective function value 13.2612. We can see that this solution agrees with the
solution obtained using our real-number genetic algorithm.
15. Introduction to Linear Programming
15.1
minimize −2x1 − x2
subject to x1 + x3 = 2
x1 + x2 + x4 = 3
x1 + 2x2 + x5 = 5
x1 , . . . , x5 ≥ 0
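As a check on this standard form, one can enumerate all basic solutions of Ax = b (using exact arithmetic) and pick the feasible one with least cost. The following Python sketch does that for the data above:

```python
from itertools import combinations
from fractions import Fraction as F

# Standard-form data from 15.1: minimize c'x subject to Ax = b, x >= 0.
A = [[1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [1, 2, 0, 0, 1]]
b = [2, 3, 5]
c = [-2, -1, 0, 0, 0]

def solve3(B, rhs):
    # Solve a 3x3 system exactly; return None if B is singular.
    M = [[F(B[i][j]) for j in range(3)] + [F(rhs[i])] for i in range(3)]
    for col in range(3):
        piv = next((r for r in range(col, 3) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v/M[col][col] for v in M[col]]
        for r in range(3):
            if r != col:
                M[r] = [M[r][j] - M[r][col]*M[col][j] for j in range(4)]
    return [M[i][3] for i in range(3)]

best, bestcost = None, None
for basis in combinations(range(5), 3):
    B = [[A[i][j] for j in basis] for i in range(3)]
    xb = solve3(B, b)
    if xb is None or any(v < 0 for v in xb):
        continue  # singular basis, or basic solution not feasible
    x = [F(0)]*5
    for j, v in zip(basis, xb):
        x[j] = v
    cost = sum(ci*xi for ci, xi in zip(c, x))
    if bestcost is None or cost < bestcost:
        best, bestcost = x, cost
print(best, bestcost)  # optimal BFS and its cost
```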
15.2
We have
x2 = ax1 + bu1 = a²x0 + abu0 + bu1 = a² + [ab, b]u,
where u = [u0 , u1 ]> is the decision variable. We can write the constraint as ui ≤ 1 and ui ≥ −1. Hence, the
problem is:
Since a2 is a constant, we can remove it from the objective function without changing the solution. Intro-
ducing slack variables v1 , v2 , v3 , v4 , we obtain the standard form problem
15.3
Let xi⁺ , xi⁻ ≥ 0 be such that |xi | = xi⁺ + xi⁻ and xi = xi⁺ − xi⁻ . Substituting into the original problem, we have
minimize c1 (x1⁺ + x1⁻ ) + c2 (x2⁺ + x2⁻ ) + · · · + cn (xn⁺ + xn⁻ )
subject to A(x⁺ − x⁻ ) = b
x⁺ , x⁻ ≥ 0,
where x⁺ = [x1⁺ , . . . , xn⁺ ]⊤ and x⁻ = [x1⁻ , . . . , xn⁻ ]⊤ . Rewriting, we get
15.4
Not every linear programming problem in standard form has a nonempty feasible set. Example:
minimize x1
subject to −x1 = 1
x1 ≥ 0.
Not every linear programming problem in standard form (even assuming a nonempty feasible set) has an
optimal solution. Example:
minimize −x1
subject to x2 = 1
x1 , x2 ≥ 0.
15.5
Let x1 and x2 represent the number of units to be shipped from A to C and to D, respectively, and x3 and
x4 represent the number of units to be shipped from B to C and to D, respectively. Then, the given problem
can be formulated as the following linear program:
15.6
We can see that there are two paths from A to E (ACDE and ACBF DE), and two paths from B to F
(BCDF and BF ). Let x1 and x2 be the data rates for the two paths from A to E, respectively, and x3 and x4
the data rates for the two paths from B to F , respectively. The total revenue is then 2(x1 + x2 ) + 3(x3 + x4 ).
For each link, we have a data rate constraint on the sum of all xi s passing through that link. For example,
for link BC, there are two paths passing through it, with total data rate x2 + x3 . Hence, the constraint for
link BC is x2 + x3 ≤ 7. Hence, the optimization problem is the following linear programming problem:
maximize 2(x1 + x2 ) + 3(x3 + x4 )
subject to x1 + x2 ≤ 10
x1 + x2 ≤ 12
x1 + x3 ≤ 8
x2 + x3 ≤ 7
x2 + x3 ≤ 4
x2 + x4 ≤ 3
x1 , . . . , x4 ≥ 0.
Converting this to standard form:
minimize −2(x1 + x2 ) − 3(x3 + x4 )
subject to x1 + x2 + x5 = 10
x1 + x2 + x6 = 12
x1 + x3 + x7 = 8
x2 + x3 + x8 = 7
x2 + x3 + x9 = 4
x2 + x4 + x10 = 3
x1 , . . . , x10 ≥ 0.
15.7
Let xi ≥ 0, i = 1, . . . , 4, be the weight in pounds of item i to be used. Then, the total weight is x1 +x2 +x3 +x4 .
To satisfy the percentage content of fiber, fat, and sugar, and the total weight of 1000, we need
3x1 + 8x2 + 16x3 + 4x4 = 10(x1 + x2 + x3 + x4 )
6x1 + 46x2 + 9x3 + 9x4 = 2(x1 + x2 + x3 + x4 )
20x1 + 5x2 + 4x3 + 0x4 = 5(x1 + x2 + x3 + x4 )
x1 + x2 + x3 + x4 = 1000
The total cost is 2x1 + 4x2 + x3 + 2x4 . Therefore, the problem is:
minimize 2x1 + 4x2 + x3 + 2x4
subject to − 7x1 − 2x2 + 6x3 − 6x4 = 0
4x1 + 44x2 + 7x3 + 7x4 = 0
15x1 − x3 − 5x4 = 0
x1 + x2 + x3 + x4 = 1000
x1 , x2 , x3 , x4 ≥ 0
Alternatively, we could have simply replaced x1 + x2 + x3 + x4 in the first three equality constraints above
by 1000, to obtain:
3x1 + 8x2 + 16x3 + 4x4 = 10000
6x1 + 46x2 + 9x3 + 9x4 = 2000
20x1 + 5x2 + 4x3 + 0x4 = 5000
x1 + x2 + x3 + x4 = 1000.
Note that the only vector satisfying the above linear equations is [179, −175, 573, 422]⊤ , which is not
feasible (its second component is negative). Therefore, the constraint set contains no feasible points, which
means that the problem does not have a solution.
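The claimed unique solution can be reproduced by solving the 4×4 system with exact rational arithmetic; a Python sketch:

```python
from fractions import Fraction as F

# Exact Gaussian elimination on the 4x4 system above.
A = [[3, 8, 16, 4], [6, 46, 9, 9], [20, 5, 4, 0], [1, 1, 1, 1]]
b = [10000, 2000, 5000, 1000]
M = [[F(v) for v in row] + [F(rhs)] for row, rhs in zip(A, b)]
for col in range(4):
    piv = next(r for r in range(col, 4) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [v/M[col][col] for v in M[col]]
    for r in range(4):
        if r != col:
            M[r] = [M[r][j] - M[r][col]*M[col][j] for j in range(5)]
x = [M[i][4] for i in range(4)]
print([float(v) for v in x])  # x2 is negative, so the system has no feasible point
```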
15.8
The objective function is p1 + · · · + pn . The constraint for the ith location is: gi,1 p1 + · · · + gi,n pn ≥ P .
Hence, the optimization problem is:
minimize p1 + · · · + pn
subject to gi,1 p1 + · · · + gi,n pn ≥ P, i = 1, . . . , m
p1 , . . . , pn ≥ 0.
By defining the notation G = [gi,j ] (m × n), en = [1, . . . , 1]> (with n components), and p = [p1 , . . . , pn ]> ,
we can rewrite the problem as
minimize en⊤ p
subject to Gp ≥ P em
p ≥ 0.
15.9
It is easy to check (using MATLAB, for example) that the matrix
2 −1 2 −1 3
A = 1 2 3 1 0
1 0 −2 0 −5
is of full rank (i.e., rank A = 3). Therefore, the system has basic solutions. To find the basic solutions, we
first select bases. Each basis consists of three linearly independent columns of A. These columns correspond
to basic variables of the basic solution. The remaining variables are nonbasic and are set to 0. The matrix
A has 5 columns; therefore, we have (5 choose 3) = 10 possible candidate basic solutions (corresponding to the 10
combinations of 3 columns out of 5). It turns out that all 10 combinations of 3 columns of A are linearly
independent. Therefore, we have 10 basic solutions. These are tabulated as follows:
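That every choice of 3 columns is linearly independent can be verified by checking the determinant of each 3×3 submatrix:

```python
from itertools import combinations

A = [[2, -1, 2, -1, 3],
     [1, 2, 3, 1, 0],
     [1, 0, -2, 0, -5]]

def det3(M):
    # 3x3 determinant by cofactor expansion along the first row
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

nonsingular = sum(1 for cols in combinations(range(5), 3)
                  if det3([[A[i][j] for j in cols] for i in range(3)]) != 0)
print(nonsingular)  # 10: every choice of 3 columns is linearly independent
```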
15.10
In the figure below, the shaded region corresponds to the feasible set. We then translate the line 2x1 +5x2 = 0
across the shaded region until the line just touches the region at one point, and the line is as far as possible
from the origin. The point of contact is the solution to the problem. In this case, the solution is [2, 6]> , and
the corresponding cost is 34.
[Figure: feasible region (shaded) with the constraint line x1 + x2 = 8 and the objective lines 2x1 + 5x2 = 0 and 2x1 + 5x2 = 34; the optimum is at the corner [2, 6]⊤]
15.11
We use the following MATLAB commands:
>> f=[0,-10,0,-6,-20];
>> A=[1,-1,-1,0,0; 0,0,1,-1,-1];
>> b=[0;0];
>> vlb=zeros(5,1);
>> vub=[4;3;3;2;2];
>> x0=zeros(5,1);
>> neqcstr=2;
>> x=linprog(f,A,b,vlb,vub,x0,neqcstr)
x =
4.0000
2.0000
2.0000
0.0000
2.0000
16.1
a. Performing a sequence of elementary row operations, we obtain
1 2 −1 3 2 1 2 −1 3 2 1 2 −1 3 2
2 −1 3 0 1 0 −5 5 −6 −3 0 −5 5 −6 −3
A= → → = B.
3 1 2 3 3 0 −5 5 −6 −3 0 0 4 −2 −1
1 2 3 1 1 0 0 4 −2 −1 0 0 0 0 0
Because elementary row operations do not change the rank of a matrix, rank A = rank B. Therefore
rank A = 3.
b. Performing a sequence of elementary row operations, we obtain
1 γ −1 2 1 10 −6 1 1 10 −6 1
A = 2 −1 γ 5 → 1 γ −1 2 → 0 γ − 10 5 1
1 10 −6 1 2 −1 γ 5 0 −21 γ + 12 3
1 1 10 −6 1 1 10 −6
→ 0 1 γ − 10 5 → 0 1 γ − 10 5 =B
0 3 −21 γ + 12 0 0 −3(γ − 3) γ − 3
Because elementary row operations do not change the rank of a matrix, rank A = rank B. Therefore
rank A = 3 if γ 6= 3 and rank A = 2 if γ = 3.
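Both ranks can be confirmed numerically with an exact row reduction (a sketch, using rational arithmetic to avoid round-off):

```python
from fractions import Fraction as F

def rank(A):
    # Row-reduce with exact arithmetic and count pivot rows.
    M = [[F(v) for v in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v/M[r][col] for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                M[i] = [M[i][j] - M[i][col]*M[r][j] for j in range(len(M[0]))]
        r += 1
    return r

# Part a
A1 = [[1, 2, -1, 3, 2], [2, -1, 3, 0, 1], [3, 1, 2, 3, 3], [1, 2, 3, 1, 1]]
print(rank(A1))  # 3

# Part b: rank drops to 2 exactly when gamma = 3
A2 = lambda g: [[1, g, -1, 2], [2, -1, g, 5], [1, 10, -6, 1]]
print(rank(A2(3)), rank(A2(4)))  # 2 3
```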
16.2
a. " # " #
3 1 0 1 4
A= , b= , c = [2, −1, −1, 0] .
6 2 1 1 5
b. Pivoting the problem tableau about the elements (1, 4) and (2, 3), we obtain
3 1 0 1 4
3 1 1 0 1
5 0 0 0 1
16.3
The problem in standard form is:
1 0 1 1
0 1 1 2
0 0 −1 3
1 0 1 1
−1 1 0 1
1 0 0 4
The reduced cost coefficients are all nonnegative. Hence, the current basic feasible solution is optimal:
[0, 1, 1]> . The optimal cost is 4.
16.4
The problem in standard form is:
minimize −2x1 − x2
subject to x1 + x3 = 5
x2 + x4 = 7
x1 + x2 + x5 = 9
x1 , . . . , x5 ≥ 0.
1 0 1 0 0 5
0 1 0 1 0 7
0 1 −1 0 1 4
0 −1 2 0 0 10
1 0 1 0 0 5
0 0 1 1 −1 3
0 1 −1 0 1 4
0 0 1 0 1 14
The reduced cost coefficients are all nonnegative. Hence, the optimal solution to the problem in standard
form is [5, 4, 0, 3, 0]> . The corresponding optimal cost is −14.
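The optimal cost of −14 can be cross-checked by enumerating the corner points of the original feasible region (x1 ≤ 5, x2 ≤ 7, x1 + x2 ≤ 9, x ≥ 0, read off the standard form above):

```python
from itertools import combinations
from fractions import Fraction as F

# Constraints in the form a*x1 + b*x2 <= r (last two encode x >= 0)
cons = [(1, 0, 5), (0, 1, 7), (1, 1, 9), (-1, 0, 0), (0, -1, 0)]

def feasible(p):
    return all(a*p[0] + b*p[1] <= r for a, b, r in cons)

# Candidate vertices: intersections of pairs of constraint boundaries
verts = []
for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
    det = a1*b2 - a2*b1
    if det == 0:
        continue  # parallel boundaries
    p = (F(r1*b2 - r2*b1, det), F(a1*r2 - a2*r1, det))
    if feasible(p):
        verts.append(p)

best = min(verts, key=lambda p: -2*p[0] - p[1])
print(best, -2*best[0] - best[1])  # optimum (5, 4), cost -14
```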
16.5
a. Let B = [a2 , a1 ] represent the first two columns of A ordered according to the basis corresponding to the
given canonical tableau, and D the second two columns. Then,
" #
1 2
B −1 D = ,
3 4
Hence,
" #−1 " #
1 2 3/2 −1/2
B=D = .
3 4 −2 1
Hence, " #
−1/2 3/2 0 1
A=
1 −2 1 0
An alternative approach is to realize that the canonical tableau is obtained from the problem tableau via
elementary row operations. Therefore, we can obtain the entries of A from the 2 × 4 upper-left submatrix
of the canonical tableau via elementary row operations also. Specifically, start with
" #
0 1 1 2
1 0 3 4
and then do two pivoting operations, one about (1, 4) and the other about (2, 3).
b. The right half of c is given by
cD⊤ = rD⊤ + cB⊤ B⁻¹ D = [−1, 1] + [7, 8] [1 2; 3 4] = [−1, 1] + [31, 46] = [30, 47].
Explanations:
The given vector x indicates that A is 3 × 5.
In the above tableau, we assume that the basis is [a1 , a3 , a4 ], in this order. Other permutations of
orders will result in interchanging rows among the first three rows of the tableau.
The fifth column represents the coordinates of a5 with respect to the basis [a1 , a3 , a4 ]. Because
[−2, 0, 0, 0, 4]> lies in the nullspace of A, we deduce that −2a1 + 4a5 = 0, which can be rewritten
as a5 = (1/2)a1 + 0a3 + 0a4 , and hence the coordinate vector is [1/2, 0, 0]> .
b. Let d0 = [−2, 0, 0, 0, 4]> . Then, Ad0 = 0. Therefore, the vector x0 = x + εd0 also satisfies Ax = b. Now,
x0 = x + εd0 = [1 − 2ε, 0, 2, 3, 4ε]> . For x0 to be feasible, we must have ε ≤ 1/2. Moreover, the objective
function value of x0 is c> x0 = z0 + r5 x05 = 6 − 4ε, where z0 is the objective function value of x. So, if we
pick any ε ∈ (0, 1/2], then x0 will be a feasible solution with objective function value strictly less than 6.
For example, with ε = 1/2, x0 = [0, 0, 2, 3, 2]> is such a point. (We could also have obtained this solution by
pivoting about the element (1, 5) in the tableau of part a.)
16.8
a. The BFS is [6, 0, 7, 5, 0]> , with objective function value −8.
b. r = [0, 4, 0, 0, −4]> .
c. Yes, because the 5th column has all negative entries.
d. We pivot about the element (3, 2). The new canonical tableau is:
0 0 −1/3 1 0 8/3
1 0 −2/3 0 0 4/3
0 1 1/3 0 −1 7/3
0 0 −4/3 0 0 −4/3
e. First note that based on the 5th column, the following point is feasible for any ε ≥ 0:
x = [6, 0, 7, 5, 0]⊤ + ε[2, 0, 3, 1, 1]⊤ .
Note that x5 = ε. Now, any solution of the form x = [∗, 0, ∗, ∗, ε]⊤ has an objective function value given by
z = z0 + r5 ε,
where z0 = −8 and r5 = −4 (from parts a and b). If z = −100, then ε = 23. Hence, the following point has
objective function value z = −100:
x = [6, 0, 7, 5, 0]⊤ + 23 · [2, 0, 3, 1, 1]⊤ = [52, 0, 76, 28, 23]⊤ .
f. The entries of the 2nd column of the given canonical tableau are the coordinates of a2 with respect to the
basis {a4 , a1 , a3 }. Therefore,
a2 = a4 + 2a1 + 3a3 .
Therefore, the vector [2, −1, 3, 1, 0]> lies in the nullspace of A. Similarly, using the entries of the 5th
column, we deduce that [−2, 0, −3, −1, −1]> also lies in the nullspace of A. These two vectors are linearly
independent. Because A has rank 3, the dimension of the nullspace of A is 2. Hence, these two vectors form
a basis for the nullspace of A.
16.9
a. We can convert the problem to standard form by multiplying the objective function by −1 and introducing
a surplus variable x3 . We obtain:
minimize x1 + 2x2
subject to x2 − x3 = 1
x1 , x2 , x3 ≥ 0.
Note that we do not need to deal with the absence of the constraint x2 ≥ 0 in the original problem, since
x2 ≥ 1 implies that x2 ≥ 0 also. Had we used the rule of writing x2 = u − v with u, v ≥ 0, we obtain the
standard form problem:
minimize x1 + 2u − 2v
subject to u − v − x3 = 1
x1 , u, v, x3 ≥ 0.
For phase I, we introduce an artificial variable x4 and form the tableau:
0 1 −1 1 1
0 0 0 1 0
Updating the last row to zero out the reduced cost coefficient of the artificial variable, we obtain:
0 1 −1 1 1
0 −1 1 0 −1
Pivoting now about element (1, 2), we obtain the next canonical tableau:
0 1 −1 1 1
0 0 0 1 0
Hence, phase I terminates, and we use x2 as our initial basic variable for phase II.
For phase II, we set up the problem tableau as:
0 1 −1 1
1 2 0 0
The reduced cost coefficients are all nonnegative. Hence, the solution to the standard form problem is
[1, 1, 0, 0]> . Therefore, the solution to the original problem is [1, 1]> , and the corresponding cost is 2.
16.12
a. The problem in standard form is:
minimize 4x1 + 3x2
subject to 5x1 + x2 − x3 = 11
2x1 + x2 − x4 = 8
x1 + 2x2 − x5 = 7
x1 , . . . , x5 ≥ 0.
We do not have an apparent basic feasible solution. Therefore, we will need to use the two phase method.
Phase I: We introduce artificial variables x6 , x7 , x8 and form the following tableau.
a1 a2 a3 a4 a5 a6 a7 a8 b
5 1 −1 0 0 1 0 0 11
2 1 0 −1 0 0 1 0 8
1 2 0 0 −1 0 0 1 7
c> 0 0 0 0 0 1 1 1 0
We then form the following revised tableau:
Variable B −1 y0
x6 1 0 0 11
x7 0 1 0 8
x8 0 0 1 7
We compute:
λ> = [1, 1, 1]
rD> = [r1 , r2 , r3 , r4 , r5 ] = [−8, −4, 1, 1, 1].
We form the augmented revised tableau by introducing y 1 = B −1 a1 = a1 :
Variable B −1 y0 y1
x6 1 0 0 11 5
x7 0 1 0 8 2
x8 0 0 1 7 1
We now pivot about the first component of y 1 to get
Variable B −1 y0
x1 1/5 0 0 11/5
x7 −2/5 1 0 18/5
x8 −1/5 0 1 24/5
We compute
λ> = [−3/5, 1, 1]
rD> = [r2 , r3 , r4 , r5 , r6 ] = [−12/5, −3/5, 1, 1, 8/5].
We bring y 2 = B −1 a2 into the basis to get
Variable B −1 y0 y2
x1 1/5 0 0 11/5 1/5
x7 −2/5 1 0 18/5 3/5
x8 −1/5 0 1 24/5 9/5
We pivot about the third component of y 2 to get
Variable B −1 y0
x1 2/9 0 −1/9 5/3
x7 −1/3 1 −1/3 2
x2 −1/9 0 5/9 8/3
We compute
λ> = [−1/3, 1, −1/3]
rD> = [r3 , r4 , r5 , r6 , r8 ] = [−1/3, 1, −1/3, 4/3, 4/3].
We bring y 3 = B −1 a3 into the basis to get
Variable B −1 y0 y3
x1 2/9 0 −1/9 5/3 −2/9
x7 −1/3 1 −1/3 2 1/3
x2 −1/9 0 5/9 8/3 1/9
We pivot about the second component of y 3 to obtain
Variable B −1 y0
x1 0 2/3 −1/3 3
x3 −1 3 −1 6
x2 0 −1/3 2/3 2
We compute
λ> = [0, 0, 0]
rD> = [r4 , r5 , r6 , r7 , r8 ] = [0, 0, 1, 1, 1] ≥ 0> .
Thus, Phase I is complete, and the initial basic feasible solution is [3, 2, 6, 0, 0]> .
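As a quick numerical check, using only the data already displayed above, the Phase I point [3, 2, 6, 0, 0]> indeed satisfies the standard-form equality constraints:

```python
# Standard-form data of part a: minimize 4x1 + 3x2 subject to Ax = b, x >= 0.
A = [[5, 1, -1, 0, 0],
     [2, 1, 0, -1, 0],
     [1, 2, 0, 0, -1]]
b = [11, 8, 7]
x = [3, 2, 6, 0, 0]   # basic feasible solution found by Phase I

residual = [sum(A[i][j] * x[j] for j in range(5)) - b[i] for i in range(3)]
print(residual)   # [0, 0, 0]
```

Every component of x is nonnegative and the residual vanishes, so the point is feasible.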
Phase II
We form the tableau for the original problem:
a1 a2 a3 a4 a5 b
5 1 −1 0 0 11
2 1 0 −1 0 8
1 2 0 0 −1 7
c> 4 3 0 0 0 0
The initial revised tableau for Phase II is the final revised tableau for Phase I. We compute
λ> = [0, 5/3, 2/3]
rD> = [r4 , r5 ] = [5/3, 2/3] > 0> .
Hence, the optimal solution to the original problem is [3, 2]> .
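The multipliers and reduced costs above can be reproduced from the final B−1 of Phase I. The following Python sketch redoes the computation λ> = cB>B−1 and rj = cj − λ>aj in exact rational arithmetic:

```python
from fractions import Fraction as F

# Final basis rows correspond to x1, x3, x2; Binv is taken from the last tableau.
Binv = [[F(0), F(2, 3), F(-1, 3)],
        [F(-1), F(3), F(-1)],
        [F(0), F(-1, 3), F(2, 3)]]
cB = [F(4), F(0), F(3)]            # original costs of x1, x3, x2
A = [[5, 1, -1, 0, 0],
     [2, 1, 0, -1, 0],
     [1, 2, 0, 0, -1]]
c = [F(4), F(3), F(0), F(0), F(0)]

lam = [sum(cB[i] * Binv[i][j] for i in range(3)) for j in range(3)]
r = {j: c[j] - sum(lam[i] * A[i][j] for i in range(3)) for j in (3, 4)}
print(lam, r[3], r[4])   # lambda = [0, 5/3, 2/3]; r4 = 5/3, r5 = 2/3
```

Both reduced costs are positive, confirming optimality of [3, 2]>.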
b. The problem in standard form is:
minimize −6x1 − 4x2 − 7x3 − 5x4
subject to x1 + 2x2 + x3 + 2x4 + x5 = 20
6x1 + 5x2 + 3x3 + 2x4 + x6 = 100
3x1 + 4x2 + 9x3 + 12x4 + x7 = 75
x1 , . . . , x7 ≥ 0.
We have an apparent basic feasible solution: [0, 0, 0, 20, 100, 75]> , corresponding to B = I 3 . We form the
revised tableau corresponding to this basic feasible solution:
Variable B −1 y0
x5 1 0 0 20
x6 0 1 0 100
x7 0 0 1 75
We compute
λ> = [0, 0, 0]
rD> = [r1 , r2 , r3 , r4 ] = [−6, −4, −7, −5].
We bring y 3 = B −1 a3 = a3 into the basis to obtain
Variable B −1 y0 y3
x5 1 0 0 20 1
x6 0 1 0 100 3
x7 0 0 1 75 9
We pivot about the third component of y 3 to get
Variable B −1 y0
x5 1 0 −1/9 35/3
x6 0 1 −1/3 75
x3 0 0 1/9 25/3
We compute
λ> = [0, 0, −7/9]
rD> = [r1 , r2 , r4 , r7 ] = [−11/3, −8/9, 13/3, 7/9].
We bring y 1 = B −1 a1 into the basis to obtain
Variable B −1 y0 y1
x5 1 0 −1/9 35/3 2/3
x6 0 1 −1/3 75 5
x3 0 0 1/9 25/3 1/3
We pivot about the second component of y 1 to get
Variable B −1 y0
x5 1 −2/15 −1/15 5/3
x1 0 1/5 −1/15 15
x3 0 −1/15 2/15 10/3
We compute
λ> = [0, −11/15, −8/15]
rD> = [r2 , r4 , r6 , r7 ] = [9/5, 43/15, 11/15, 8/15] ≥ 0> .
The optimal solution to the original problem is therefore [15, 0, 10/3, 0]> .
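The final point can be substituted back into the original inequality constraints of part b to confirm feasibility and the optimal cost, again in exact arithmetic:

```python
from fractions import Fraction as F

# Original constraints of part b: Ax <= b (slacks x5, x6, x7 absorb the gap).
A = [[1, 2, 1, 2],
     [6, 5, 3, 2],
     [3, 4, 9, 12]]
b = [20, 100, 75]
c = [-6, -4, -7, -5]
x = [F(15), F(0), F(10, 3), F(0)]

lhs = [sum(A[i][j] * x[j] for j in range(4)) for i in range(3)]
cost = sum(c[j] * x[j] for j in range(4))
print(lhs, cost)   # lhs = [55/3, 100, 75] <= b, cost = -340/3
```

The first constraint has slack 5/3 (matching the basic value of x5 in the final tableau), while the other two are tight.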
16.13
a. By inspection of r > , we conclude that the basic variables are x1 , x3 , x4 , and the basis matrix is
0 0 1
B = 1 0 0 .
0 1 0
Since r > ≥ 0> , the basic feasible solution corresponding to the basis B is optimal. This optimal basic
feasible solution is [8, 0, 9, 7]> .
b. An optimal solution to the dual is given by
λ> = cB>B−1 ,
where cB> = [6, 4, 5], and
0 1 0
B −1 = 0 0 1 .
1 0 0
where α ∈ R.
c. Since all basic feasible solutions are optimal, the relative cost coefficients are all zero.
16.15
a. 2 − α < 0, β ≤ 0, γ ≤ 0, and δ anything.
b. 2 − α ≥ 0, δ = −7, β and γ anything.
c. 2 − α < 0, γ > 0, either β ≤ 0 or 5/γ ≤ 4/β, and δ anything.
16.16
a. The value of α must be 0, because the objective function value is 0 (lower right corner), and α is the
value of an artificial variable.
The value of β must be 0, because it is the RCC value corresponding to a basic column.
The value of γ must be 2, because it must be a positive value. Otherwise, there is a feasible solution to
the artificial problem with objective function value smaller than 0, which is impossible.
The value of δ must be 0, because we must be able to bring the fourth column into the basis without
changing the objective function value.
b. The given linear programming problem does indeed have a feasible solution: [0, 5, 6, 0]> . We obtain
this by noticing that the right-most column is a linear combination of the second and third columns, with
coefficients 5 and 6.
16.17
First, we convert the inequality constraint Ax ≥ b into standard form. To do this, we introduce a vector
w ∈ Rm of surplus variables to convert the inequality constraint into the following equivalent constraint:
[A, −I][x; w] = b, w ≥ 0.
Next, we introduce variables u, v ∈ Rn to replace the free variable x by u − v. We then obtain the following
equivalent constraint:
[A, −A, −I][u; v; w] = b, u, v, w ≥ 0.
This form of the constraint is now in standard form. So we can now use Phase I of the simplex method to
implement an algorithm to find vectors u, v, and w satisfying the above constraint, if such exist, or to
declare that none exists. If such exist, we output x = u − v; otherwise, we declare that no x exists such
that Ax ≥ b. By construction, this algorithm is guaranteed to behave in the way specified by the question.
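The construction can be sketched in code. The helper below (the name `stack_constraint`, the data, and the particular triple (u, v, w) are all hypothetical; Phase I would produce such a triple automatically) assembles the matrix [A, −A, −I] and recovers x = u − v:

```python
def stack_constraint(A, b):
    """Return (M, b) with M = [A, -A, -I], so that M @ [u; v; w] = b."""
    m = len(A)
    M = []
    for i in range(m):
        row = list(A[i]) + [-a for a in A[i]]
        row += [-1 if j == i else 0 for j in range(m)]
        M.append(row)
    return M, b

# Small hypothetical instance: we seek a free x with Ax >= b.
A = [[1, -1], [2, 1]]
b = [0, 3]
M, _ = stack_constraint(A, b)
u, v, w = [2, 1], [0, 0], [1, 2]    # a candidate triple, found by hand here
z = u + v + w                       # stacked variable [u; v; w]
lhs = [sum(M[i][j] * z[j] for j in range(len(z))) for i in range(len(M))]
x = [u[j] - v[j] for j in range(2)]
print(lhs, x)   # lhs equals b, and x = u - v satisfies Ax >= b
```

Since the stacked equality holds with u, v, w ≥ 0, the recovered x = [2, 1]> satisfies Ax ≥ b, as the argument above guarantees.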
16.18
a. We form the tableau for the problem:
1 0 0 1/4 −8 −1 9 0
0 1 0 1/2 −12 −1/2 3 0
0 0 1 0 0 1 0 1
0 0 0 −3/4 20 −1/2 6 0
The above tableau is already in canonical form, and therefore we can proceed with the simplex procedure.
We first pivot about the (1, 4)th element, to get
4 0 0 1 −32 −4 36 0
−2 1 0 0 4 3/2 −15 0
0 0 1 0 0 1 0 1
3 0 0 0 −4 −7/2 33 0
Pivoting about the (2, 5)th element, we get
−12 8 0 1 0 8 −84 0
−1/2 1/4 0 0 1 3/8 −15/4 0
0 0 1 0 0 1 0 1
1 1 0 0 0 −2 18 0
Pivoting about the (1, 6)th element, we get
−3/2 1 0 1/8 0 1 −21/2 0
1/16 −1/8 0 −3/64 1 0 3/16 0
3/2 −1 1 −1/8 0 0 21/2 1
−2 3 0 1/4 0 0 −3 0
Pivoting about the (2, 7)th element, we get
2 −6 0 −5/2 56 1 0 0
1/3 −2/3 0 −1/4 16/3 0 1 0
−2 6 1 5/2 −56 0 0 1
−1 1 0 −1/2 16 0 0 0
Pivoting about the (1, 1)th element, we get
1 −3 0 −5/4 28 1/2 0 0
0 1/3 0 1/6 −4 −1/6 1 0
0 0 1 0 0 1 0 1
0 −2 0 −7/4 44 1/2 0 0
Pivoting about the (2, 2)th element, we get
1 0 0 1/4 −8 −1 9 0
0 1 0 1/2 −12 −1/2 3 0
0 0 1 0 0 1 0 1
0 0 0 −3/4 20 −1/2 6 0
which is identical to the initial tableau. Therefore, cycling occurs.
b. We start with the initial tableau of part a, and pivot about the (1, 4)th element to obtain
4 0 0 1 −32 −4 36 0
−2 1 0 0 4 3/2 −15 0
0 0 1 0 0 1 0 1
3 0 0 0 −4 −7/2 33 0
Pivoting about the (2, 5)th element, we get
−12 8 0 1 0 8 −84 0
−1/2 1/4 0 0 1 3/8 −15/4 0
0 0 1 0 0 1 0 1
1 1 0 0 0 −2 18 0
Pivoting about the (1, 6)th element, we get
−3/2 1 0 1/8 0 1 −21/2 0
1/16 −1/8 0 −3/64 1 0 3/16 0
3/2 −1 1 −1/8 0 0 21/2 1
−2 3 0 1/4 0 0 −3 0
Pivoting about the (2, 1)th element, we get
0 −2 0 −1 24 1 −6 0
1 −2 0 −3/4 16 0 3 0
0 2 1 1 −24 0 6 1
0 −1 0 −5/4 32 0 3 0
Pivoting about the (3, 2)th element, we get
0 0 1 0 0 1 0 1
1 0 1 1/4 −8 0 9 1
0 1 1/2 1/2 −12 0 3 1/2
0 0 1/2 −3/4 20 0 6 1/2
Pivoting about the (3, 4)th element, we get
0 0 1 0 0 1 0 1
1 −1/2 3/4 0 −2 0 15/2 3/4
0 2 1 1 −24 0 6 1
0 3/2 5/4 0 2 0 21/2 5/4
The reduced cost coefficients are all nonnegative. Hence, the optimal solution to the problem is
[3/4, 0, 0, 1, 0, 1, 0]> . The corresponding optimal cost is −5/4.
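As a check on the Bland's-rule run, the final point can be substituted into the constraint and cost data read off the initial tableau of part a (a sketch in exact arithmetic):

```python
from fractions import Fraction as F

# Constraint rows and cost row of the initial tableau in part a.
A = [[1, 0, 0, F(1, 4), -8, -1, 9],
     [0, 1, 0, F(1, 2), -12, F(-1, 2), 3],
     [0, 0, 1, 0, 0, 1, 0]]
b = [0, 0, 1]
c = [0, 0, 0, F(-3, 4), 20, F(-1, 2), 6]
x = [F(3, 4), 0, 0, 1, 0, 1, 0]

lhs = [sum(A[i][j] * x[j] for j in range(7)) for i in range(3)]
cost = sum(c[j] * x[j] for j in range(7))
print(lhs, cost)   # lhs = [0, 0, 1] = b, cost = -5/4
```

The point is feasible and achieves the stated cost of −5/4.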
16.19
a. We have
Ad(0) = A(x(1) − x(0) )/α0 = (b − b)/α0 = 0.
Hence, d(0) ∈ N (A).
b. From our discussion of moving from one BFS to an adjacent BFS, we deduce that
d(0) = [−y q ; eq−m ].
In other words, the first m components of d(0) are −y1q , . . . , −ymq , and all the other components are 0 except
the qth component, which is 1.
16.20
The following is a MATLAB function that implements the simplex algorithm.
function [x,v]=simplex(c,A,b,v,options)
% SIMPLEX(c,A,b,v);
% SIMPLEX(c,A,b,v,options);
%
% x = SIMPLEX(c,A,b,v);
% x = SIMPLEX(c,A,b,v,options);
%
% [x,v] = SIMPLEX(c,A,b,v);
% [x,v] = SIMPLEX(c,A,b,v,options);
%
%SIMPLEX(c,A,b,v) solves the following linear program using the
%Simplex Method:
% min c’x subject to Ax=b, x>=0,
%where [A b] is in canonical form, and v is the vector of indices of
%basic columns. Specifically, the v(i)-th column of A is the i-th
%standard basis vector.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(5) specifies how the pivot element is selected;
% 0=choose the most negative relative cost coefficient;
% 1=use Bland’s rule.
if nargin ~= 5
options = [];
if nargin ~= 4
disp(’Wrong number of arguments.’);
return;
end
end
format compact;
%format short e;
options = foptions(options);
print = options(1);
n=length(c);
m=length(b);
cB=c(v(:));
cost = -cB’*b;
if print,
disp(’ ’);
disp(’Initial tableau:’);
disp(tabl);
end %if
min_ratio = inf;
p=0;
for i=1:m,
if tabl(i,q)>0
if tabl(i,n+1)/tabl(i,q) < min_ratio
min_ratio = tabl(i,n+1)/tabl(i,q);
p = i;
end %if
end %if
end %for
if p == 0
disp(’Problem unbounded’);
break;
end %if
tabl=pivot(tabl,p,q);
if print,
disp(’Pivot point:’);
disp([p,q]);
disp(’New tableau:’);
disp(tabl);
end %if
v(p) = q;
r = tabl(m+1,1:n);
end %while
x=zeros(n,1);
x(v(:))=tabl(1:m,n+1);
The above function makes use of the following function that implements pivoting:
function Mnew=pivot(M,p,q)
%Mnew=pivot(M,p,q)
%Returns the matrix Mnew resulting from pivoting about the
%(p,q)th element of the given matrix M.
for i=1:size(M,1),
if i==p
Mnew(p,:)=M(p,:)/M(p,q);
else
Mnew(i,:)=M(i,:)-M(p,:)*(M(i,q)/M(p,q));
end %if
end %for
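For readers not working in MATLAB, the same pivot operation can be sketched in Python; exact rational arithmetic avoids round-off in the tableaus:

```python
from fractions import Fraction as F

def pivot(M, p, q):
    """Return the tableau obtained by pivoting about element (p, q) of M.
    Row p is scaled so that M[p][q] becomes 1, and a multiple of it is
    subtracted from each other row to zero out column q (0-based indices)."""
    piv = M[p][q]
    new = []
    for i, row in enumerate(M):
        if i == p:
            new.append([F(a) / piv for a in row])
        else:
            factor = F(row[q]) / piv
            new.append([F(a) - factor * m for a, m in zip(row, M[p])])
    return new

M = [[2, 1, 4],
     [1, 3, 5]]
print(pivot(M, 0, 0))   # [[1, 1/2, 2], [0, 5/2, 3]]
```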
%----------------------------------------------------------------
We now apply the simplex algorithm to the problem in Example 16.2, as follows:
>> A=[1 0 1 0 0; 0 1 0 1 0; 1 1 0 0 1];
>> b=[4;6;8];
>> c=[-2;-5;0;0;0];
>> v=[3;4;5];
>> options(1)=1;
>> [x,v]=simplex(c,A,b,v,options);
Initial Tableau:
1 0 1 0 0 4
0 1 0 1 0 6
1 1 0 0 1 8
-2 -5 0 0 0 0
Pivot point:
2 2
New tableau:
1 0 1 0 0 4
0 1 0 1 0 6
1 0 0 -1 1 2
-2 0 0 5 0 30
Pivot point:
3 1
New tableau:
0 0 1 1 -1 2
0 1 0 1 0 6
1 0 0 -1 1 2
0 0 0 3 2 34
>> disp(x’);
2 6 2 0 0
>> disp(v’);
3 2 1
As indicated above, the solution to the problem in standard form is [2, 6, 2, 0, 0]> , and the objective
function value is −34. The optimal cost for the original maximization problem is 34.
16.21
The following is a MATLAB routine that implements the two-phase simplex method, using the MATLAB
function from Exercise 16.20.
function [x,v]=tpsimplex(c,A,b,options)
% TPSIMPLEX(c,A,b);
% TPSIMPLEX(c,A,b,options);
%
% x = TPSIMPLEX(c,A,b);
% x = TPSIMPLEX(c,A,b,options);
%
% [x,v] = TPSIMPLEX(c,A,b);
% [x,v] = TPSIMPLEX(c,A,b,options);
%
%TPSIMPLEX(c,A,b) solves the following linear program using the
%two-phase simplex method:
% min c’x subject to Ax=b, x>=0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(5) specifies how the pivot element is selected;
% 0=choose the most negative relative cost coefficient;
% 1=use Bland’s rule.
if nargin ~= 4
options = [];
if nargin ~= 3
disp(’Wrong number of arguments.’);
return;
end
end
clc;
format compact;
%format short e;
options = foptions(options);
print = options(1);
n=length(c);
m=length(b);
%Phase I
if print,
disp(’ ’);
disp(’Phase I’);
disp(’-------’);
end
v=n*ones(m,1);
for i=1:m
v(i)=v(i)+i;
end
[x,v]=simplex([zeros(n,1);ones(m,1)],[A eye(m)],b,v,options);
if all(v<=n),
%Phase II
if print
disp(’ ’);
disp(’Phase II’);
disp(’--------’);
disp(’Basic columns:’)
disp(v’)
end
[x,v]=simplex(c,A,b,v,options);
if print
disp(’ ’);
disp(’Final solution:’);
disp(x’);
end
else
%assumes nondegeneracy
disp(’Terminating: problem has no feasible solution.’);
end
%----------------------------------------------------------------
We now apply the above MATLAB routine to the problem in Example 16.5, as follows:
>> A=[1 1 1 0; 5 3 0 -1];
>> b=[4;8];
>> c=[-3;-5;0;0];
>> options(1)=1;
>> format rat;
>> tpsimplex(c,A,b,options);
Phase I
-------
Initial Tableau:
1 1 1 0 1 0 4
5 3 0 -1 0 1 8
-6 -4 -1 1 0 0 -12
Pivot point:
2 1
New tableau:
0 2/5 1 1/5 1 -1/5 12/5
1 3/5 0 -1/5 0 1/5 8/5
0 -2/5 -1 -1/5 0 6/5 -12/5
Pivot point:
1 3
New tableau:
0 2/5 1 1/5 1 -1/5 12/5
1 3/5 0 -1/5 0 1/5 8/5
0 * 0 * 1 1 *
Pivot point:
2 2
New tableau:
-2/3 0 1 1/3 1 -1/3 4/3
5/3 1 0 -1/3 0 1/3 8/3
* 0 0 * 1 1 *
Pivot point:
1 4
New tableau:
-2 0 3 1 3 -1 4
1 1 1 0 1 0 4
* 0 * 0 1 1 *
Basic columns:
4 2
Phase II
--------
Initial Tableau:
-2 0 3 1 4
1 1 1 0 4
2 0 5 0 20
Final solution:
0 4 0 4
16.22
The following is a MATLAB function that implements the revised simplex algorithm.
function [x,v,Binv]=revsimp(c,A,b,v,Binv,options)
% REVSIMP(c,A,b,v,Binv);
% REVSIMP(c,A,b,v,Binv,options);
%
% x = REVSIMP(c,A,b,v,Binv);
% x = REVSIMP(c,A,b,v,Binv,options);
%
% [x,v,Binv] = REVSIMP(c,A,b,v,Binv);
% [x,v,Binv] = REVSIMP(c,A,b,v,Binv,options);
%
%REVSIMP(c,A,b,v,Binv) solves the following linear program using the
%revised simplex method:
% min c’x subject to Ax=b, x>=0,
%where v is the vector of indices of basic columns, and Binv is the
%inverse of the basis matrix. Specifically, the v(i)-th column of
%A is the i-th column of the basis matrix.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(5) specifies how the pivot element is selected;
% 0=choose the most negative relative cost coefficient;
% 1=use Bland’s rule.
if nargin ~= 6
options = [];
if nargin ~= 5
disp(’Wrong number of arguments.’);
return;
end
end
format compact;
%format short e;
options = foptions(options);
print = options(1);
n=length(c);
m=length(b);
cB=c(v(:));
y0 = Binv*b;
lambdaT=cB’*Binv;
r = c’-lambdaT*A; %row vector of relative cost coefficients
if print,
disp(’ ’);
disp(’Initial revised tableau [v B^(-1) y0]:’);
disp([v Binv y0]);
disp(’Relative cost coefficients:’);
disp(r);
end %if
yq = Binv*A(:,q);
min_ratio = inf;
p=0;
for i=1:m,
if yq(i)>0
if y0(i)/yq(i) < min_ratio
min_ratio = y0(i)/yq(i);
p = i;
end %if
end %if
end %for
if p == 0
disp(’Problem unbounded’);
break;
end %if
if print,
disp(’Augmented revised tableau [v B^(-1) y0 yq]:’)
disp([v Binv y0 yq]);
disp(’(p,q):’);
disp([p,q]);
end
augrevtabl=pivot([Binv y0 yq],p,m+2);
Binv=augrevtabl(:,1:m);
y0=augrevtabl(:,m+1);
v(p) = q;
cB=c(v(:));
lambdaT=cB’*Binv;
r = c’-lambdaT*A; %row vector of relative cost coefficients
if print,
disp(’New revised tableau [v B^(-1) y0]:’);
disp([v Binv y0]);
disp(’Relative cost coefficients:’);
disp(r);
end %if
end %while
x=zeros(n,1);
x(v(:))=y0;
Relative cost coefficients:
0 0 0 3 2
>> disp(x’);
2 6 2 0 0
>> disp(v’);
3 2 1
>> disp(Binv);
1 1 -1
0 1 0
0 -1 1
16.23
The following is a MATLAB routine that implements the two-phase revised simplex method, using the
MATLAB function from Exercise 16.22.
function [x,v]=tprevsimp(c,A,b,options)
% TPREVSIMP(c,A,b);
% TPREVSIMP(c,A,b,options);
%
% x = TPREVSIMP(c,A,b);
% x = TPREVSIMP(c,A,b,options);
%
% [x,v] = TPREVSIMP(c,A,b);
% [x,v] = TPREVSIMP(c,A,b,options);
%
%TPREVSIMP(c,A,b) solves the following linear program using the
%two-phase revised simplex method:
% min c’x subject to Ax=b, x>=0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(5) specifies how the pivot element is selected;
% 0=choose the most negative relative cost coefficient;
% 1=use Bland’s rule.
if nargin ~= 4
options = [];
if nargin ~= 3
disp(’Wrong number of arguments.’);
return;
end
end
clc;
format compact;
%format short e;
options = foptions(options);
print = options(1);
n=length(c);
m=length(b);
%Phase I
if print,
disp(’ ’);
disp(’Phase I’);
disp(’-------’);
end
v=n*ones(m,1);
for i=1:m
v(i)=v(i)+i;
end
[x,v,Binv]=revsimp([zeros(n,1);ones(m,1)],[A eye(m)],b,v,eye(m),options);
%Phase II
if print
disp(’ ’);
disp(’Phase II’);
disp(’--------’);
end
[x,v,Binv]=revsimp(c,A,b,v,Binv,options);
if print
disp(’ ’);
disp(’Final solution:’);
disp(x’);
end
%----------------------------------------------------------------
We now apply the above MATLAB routine to the problem in Example 16.5, as follows:
>> A=[4 2 -1 0; 1 4 0 -1];
>> b=[12;6];
>> c=[2;3;0;0];
>> options(1)=1;
>> format rat;
>> tprevsimp(c,A,b,options);
Phase I
-------
Relative cost coefficients:
0 0 0 0 1 1
Phase II
--------
Final solution:
18/7 6/7 0 0
17. Duality
17.1
Since x and λ are feasible, we have Ax ≥ b, x ≥ 0, and λ> A ≤ c> , λ ≥ 0. Postmultiplying both sides of
λ> A ≤ c> by x ≥ 0 yields
λ> Ax ≤ c> x.
Since Ax ≥ b and λ> ≥ 0> , we have λ> Ax ≥ λ> b. Hence, λ> b ≤ c> x.
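The inequality chain can be illustrated numerically. The instance below is hypothetical (any feasible primal-dual pair works); the point is only that the dual value never exceeds the primal value:

```python
# Primal: min c'x s.t. Ax >= b, x >= 0.  Dual: max lam'b s.t. lam'A <= c', lam >= 0.
A = [[2, 1],
     [1, 3]]
b = [4, 6]
c = [3, 5]
x = [2, 2]      # primal feasible: Ax = [6, 8] >= b
lam = [1, 1]    # dual feasible:   lam'A = [3, 4] <= c

primal_val = sum(c[j] * x[j] for j in range(2))   # c'x = 16
dual_val = sum(lam[i] * b[i] for i in range(2))   # lam'b = 10
print(dual_val, "<=", primal_val)   # 10 <= 16
```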
17.2
The primal problem is:
minimize en> p
subject to Gp ≥ P em
p ≥ 0,
where G = [gi,j ], en = [1, . . . , 1]> (with n components), and p = [p1 , . . . , pn ]> . The dual of the problem is
(using symmetric duality):
maximize P λ> em
subject to λ> G ≤ en>
λ ≥ 0.
17.3
a. We first transform the problem into standard form:
1/2 1 1/2 0 2
3/2 0 −1/2 1 3
−1/2 0 3/2 0 6
Pivoting now about the (2, 1)th element gives:
0 1 2/3 −1/3 1
1 0 −1/3 2/3 2
0 0 4/3 1/3 7
Thus, the solution to the standard form problem is x1 = 2, x2 = 1, x3 = 0, x4 = 0. The solution to the
original problem is x1 = 2, x2 = 1.
b. The dual to the standard form problem is
From the discussion before Example 17.6, it follows that the solution to the dual is λ> = cI> − rI> =
[−4/3, −1/3].
17.4
The dual problem is
Note that we may arrive at the above in one of two ways: by applying the asymmetric form of duality, or
by applying the symmetric form of duality to the original problem in standard form. From the solution of
Exercise 16.11a, we have that the solution to the dual is λ> = cB>B−1 = [0, 5/3, 2/3] (using the proof of the
duality theorem).
17.5
We represent the primal in the form
minimize c> x
subject to Ax = b
x ≥ 0.
The dual (asymmetric form) is
maximize λ> b
subject to λ> A ≤ c> ,
The solution to the dual can be obtained using the formula λ∗> = cB>B−1 , where
cB> = [−1, −2, 0] and B = [−2 1 1; −1 2 0; 1 0 0].
Note that because the last element in cB> is zero, we do not need to calculate the last row of B−1 when
computing λ∗> , that is, these elements are “don’t care” elements that we denote using the asterisk. Hence,
λ∗> = cB>B−1 = [−1, −2, 0][0 0 1; 0 1/2 1/2; ∗ ∗ ∗] = [0, −1, −2].
Note that
c> x∗ = λ∗> b = −13,
as expected.
17.6
a. Multiplying the objective function by −1, we see that the problem is of the form of the dual in the
asymmetric form of duality. Therefore, the dual to the problem is of the form of the primal in the asymmetric
form:
minimize λ> b
subject to λ> A = −c>
λ≥0
b. The given vector y is feasible in the dual. Since b = 0, any feasible point in the dual is optimal. Thus, y
is optimal in the dual, and the objective function value for y is 0. Therefore, by the Duality Theorem, the
primal also has an optimal feasible solution, and the corresponding objective function value is 0. Since the
vector 0 is feasible in the primal and has objective function value 0, the vector 0 is a solution to the primal.
17.7
We introduce two sets of nonnegative variables: xi+ ≥ 0 and xi− ≥ 0, i = 1, 2, 3. We can then represent the
optimization problem in the form
minimize (x1+ + x1−) + (x2+ + x2−) + (x3+ + x3−)
subject to [1 1 −1 −1 −1 1; 0 −1 0 0 1 0][x1+, x2+, x3+, x1−, x2−, x3−]> = [2, 1]>
xi+ ≥ 0, xi− ≥ 0, i = 1, 2, 3.
The dual (asymmetric form) is
maximize 2λ1 + λ2
subject to [λ1 , λ2 ][1 1 −1 −1 −1 1; 0 −1 0 0 1 0] ≤ [1, 1, 1, 1, 1, 1].
The solution to the dual is
λ∗> = cB>B−1 = [1, 1][1 −1; 0 1]−1 = [1, 1][1 1; 0 1] = [1, 2].
17.8
a. The dual (asymmetric form) is
maximize λ
subject to λai ≤ 1, i = 1, . . . , n.
Hence, the solution is λ = 1/an .
b. Duality Theorem: If the primal problem has an optimal solution, then so does the dual, and the optimal
values of their respective objective functions are equal.
By the duality theorem, the primal has an optimal solution, and the optimal value of the objective function
is 1/an . The only feasible point in the primal with this objective function value is the basic feasible solution
[0, . . . , 0, 1/an ]> .
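For a concrete illustration with hypothetical data satisfying 0 < a1 < · · · < an, the basic feasible solutions of the primal each put all weight on a single coordinate, and enumerating them confirms that the minimum value 1/an is attained only at the last coordinate:

```python
from fractions import Fraction as F

a = [1, 2, 3, 4]   # hypothetical data with 0 < a1 < ... < an
n = len(a)

# Each basic feasible solution of {a'x = 1, x >= 0} is x = e_i / a_i,
# with objective value (sum of components) equal to 1/a_i.
values = [F(1, a[i]) for i in range(n)]
best = min(values)
print(best, values.index(best))   # 1/4, attained only at i = n-1
```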
c. Suppose we start at a nonoptimal initial basic feasible solution, [0, . . . , 1/ai , . . . , 0]> , where 1 ≤ i ≤ n − 1.
The relative cost coefficient for the qth column, q 6= i, is
aq
rq = 1 − .
ai
Since an > aj for any j 6= n, rq is the most negative relative cost coefficient if and only if q = n.
17.9
a. By asymmetric duality, the dual is given by
minimize λ
subject to λ ≥ ci , i = 1, . . . , n.
b. The constraint in part a implies that λ is feasible if and only if λ ≥ c4 . Hence, the solution is λ∗ = c4 .
c. By the duality theorem, the optimal objective function value for the given problem is c4 . The only solution
that achieves this value is x∗4 = 1 and x∗i = 0 for all i 6= 4.
17.10
a. The dual is
minimize λ> 0
subject to λ> A ≥ c>
λ ≥ 0.
b. By the duality theorem, we conclude that the optimal value of the objective function is 0. The only
vector satisfying x ≥ 0 that has an objective function value of 0 is x = 0. Therefore, the solution is x = 0.
c. The constraint set contains only the vector 0. Any other vector x satisfying x ≥ 0 has at least one
positive component, and consequently has a positive objective function value. But this contradicts the fact
that the optimal solution has an objective function value of 0.
17.11
a. The artificial problem is:
maximize λ> b
subject to λ> A ≤ 0>
λ> ≤ e> .
c. Suppose the given original linear programming problem has a feasible solution. By the FTLP, the original
LP problem has a BFS. Then, by a theorem given in class, the artificial problem has an optimal feasible
solution with y = 0. Hence, by the Duality Theorem, the dual of the artificial problem also has an optimal
feasible solution.
17.12
a. Possible. This situation arises if the primal is unbounded, which by the Weak Duality Lemma implies
that the dual has no feasible solution.
b. Impossible, because the Duality Theorem requires that if the primal has an optimal feasible solution,
then so does the dual.
c. Impossible, because the Duality Theorem requires that if the dual has an optimal feasible solution, then
so does the primal. Also, the Weak Duality Lemma requires that if the primal is unbounded (i.e., has a feasible
solution but no optimal feasible solution), then the dual must have no feasible solution.
17.13
To prove the result, we use Theorem 17.3 (Complementary Slackness). Since µ ≥ 0, we have A> λ = c − µ ≤
c. Hence, λ is a feasible solution to the dual. Now, (c − A> λ)> x = µ> x = 0. Therefore, by Theorem 17.3,
x and λ are optimal for their respective problems.
17.14
To use the symmetric form of duality, we need to rewrite the problem as
which we represent in the form
minimize [−c> , c> ][u; v]
subject to [−A, A][u; v] ≥ −b
[u; v] ≥ 0.
17.15
The corresponding dual can be written as:
maximize 3λ1 + 3λ2
subject to λ1 + 2λ2 ≤ 1
2λ1 + λ2 ≤ 1
λ1 , λ2 ≥ 0.
To solve the dual, we refer back to the solution of Exercise 16.11. Using the idea of the proof of the duality
theorem (Theorem 17.2), we obtain the solution to the dual as λ> = cB>B−1 = [1/3, 1/3]. The cost of the
dual problem is 2, which verifies the duality theorem.
17.16
The dual to the above linear program (asymmetric form) is
maximize λ> 0
subject to λ> O ≤ c> .
The above dual problem has a feasible solution if and only if c ≥ 0. Since any feasible solution to the dual
is also optimal, the dual has an optimal solution if and only if c ≥ 0. Therefore, by the duality theorem, the
primal problem has a solution if and only if c ≥ 0.
If the solution to the dual exists, then the optimal value of the objective function in the primal is equal
to that of the dual, which is clearly 0. In this case, 0 is optimal, since c> 0 = 0.
17.17
Consider the primal problem
minimize 0> x
subject to Ax ≥ b
x ≥ 0,
and its corresponding dual
maximize y> b
subject to y> A ≤ 0
y ≥ 0.
⇒: By assumption, there exists a feasible solution to the primal problem. Note that any feasible solution
is also optimal, and has objective function value 0. Suppose y satisfies A> y ≤ 0 and y ≥ 0. Then, y is a
feasible solution to the dual. Therefore, by the Weak Duality Lemma, b> y ≤ 0.
⇐: Note that the feasible region for the dual is nonempty, since 0 is a feasible point. Also, by assumption,
0 is an optimal solution, since any other feasible point y satisfies b> y ≤ b> 0 = 0. Hence, by the duality
theorem, the primal problem has an (optimal) feasible solution.
17.18
a. The dual is
maximize y> b
subject to y > A ≤ 0> .
b. The feasible set of the dual problem is always nonempty, because 0 is clearly guaranteed to be feasible.
c. Suppose y is feasible in the dual. Then, by assumption, b> y ≤ 0. But the point 0 is feasible and has
objective function value 0. Hence, 0 is optimal in the dual.
d. By parts b and c, the dual has an optimal feasible solution. Hence, by the duality theorem, the primal
problem also has an optimal feasible solution.
e. By assumption, there exists a feasible solution to the primal problem. Note that any feasible solution
in the primal has objective function value 0 (and hence so does the given solution). Suppose y satisfies
A> y ≤ 0. Then, y is a feasible solution to the dual. Therefore, by weak duality, b> y ≤ 0.
17.19
Consider the primal problem
minimize y> b
subject to y> A = 0
y≥0
and its corresponding dual
maximize 0> x
subject to Ax ≤ b.
⇒: By assumption, there exists a feasible solution to the dual problem. Note that any feasible solution
is also optimal, and has objective function value 0. Suppose y satisfies A> y = 0 and y ≥ 0. Then, y is a
feasible solution to the primal. Therefore, by the Weak Duality Lemma, b> y ≥ 0.
⇐: Note that the feasible region for the primal is nonempty, since 0 is a feasible point. Also, by as-
sumption, 0 is an optimal solution, since any other feasible point y satisfies b> y ≥ b> 0 = 0. Hence, by the
duality theorem, the dual problem has an (optimal) feasible solution.
17.20
Let e = [1, . . . , 1]> . Consider the primal problem
minimize 0> x
subject to Ax ≤ −e
and its corresponding dual
maximize e> y
subject to y> A = 0
y ≥ 0.
⇒: Suppose there exists x such that Ax < 0. Then, the vector x0 = x/ min{|(Ax)i |} is a feasible solution to the
primal problem. Note that any feasible solution is also optimal, and has objective function value 0. Suppose
y satisfies A> y = 0, y ≥ 0. Then, y is a feasible solution to the dual. Therefore, by the Weak Duality
Lemma, e> y ≤ 0. Since y ≥ 0, we conclude that y = 0.
⇐: Suppose 0 is the only feasible solution to the dual. Then, 0 is clearly also optimal. Hence, by the
duality theorem, the primal problem has an (optimal) feasible solution x. Since Ax ≤ −e and −e < 0, we
get Ax < 0.
17.21
a. Rewrite the primal as
minimize (−e)> x
subject to (P − I)> x = 0
x ≥ 0.
By asymmetric duality, the dual is
maximize λ> 0
subject to λ> (P − I)> ≤ −e> .
b. Suppose f1 (x0 ) = f2 (y 0 ) for feasible points x0 and y 0 . Let x be any feasible point in the primal. Then,
by part a, f1 (x) ≥ f2 (y 0 ) = f1 (x0 ). Hence x0 is optimal in the primal.
Similarly, let y be any feasible point in the dual. Then, by part a, f2 (y) ≤ f1 (x0 ) = f2 (y 0 ). Hence y 0 is
optimal in the dual.
18.1
The following is a MATLAB function that implements the affine scaling algorithm.
function [x,N]=affscale(c,A,b,u,options)
if nargin ~= 5
options = [];
if nargin ~= 4
disp(’Wrong number of arguments.’);
return;
end
end
xnew=u;
if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end
%clc;
format compact;
format short e;
options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_f = options(3);
max_iter=options(14);
alpha=options(18);
n=length(c);
m=length(b);
for k = 1:max_iter,
xcurr=xnew;
D = diag(xcurr);
Abar = A*D;
Pbar = eye(n) - Abar’*inv(Abar*Abar’)*Abar;
d = -D*Pbar*D*c;
if any(d), %note: "d ~= zeros(n,1)" would be true only if every component were nonzero
nonzd = find(d<0);
r = min(-xcurr(nonzd)./d(nonzd));
else
disp(’Terminating: d = 0’);
break;
end
xnew = xcurr+alpha*r*d;
if print,
disp(’Iteration number k =’)
disp(k); %print iteration index k
disp(’alpha_k =’);
disp(alpha*r); %print alpha_k
disp(’New point =’);
disp(xnew’); %print new point
end %if
if k == max_iter
disp(’Terminating with maximum number of iterations’);
end %if
end %for
if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp(’Final point =’);
disp(xnew’);
disp(’Number of iterations =’);
disp(k);
end %if
%----------------------------------------------------------------
We now apply the affine scaling algorithm to the problem in Example 16.2, as follows:
>> A=[1 0 1 0 0; 0 1 0 1 0; 1 1 0 0 1];
>> b=[4;6;8];
>> c=[-2;-5;0;0;0];
>> u=[2;3;2;3;3];
>> options(1)=0;
>> options(2)=10^(-7);
>> options(3)=10^(-7);
>> affscale(c,A,b,u,options);
Terminating: Relative difference between iterates <
1.0000e-07
Final point =
2.0000e+00 6.0000e+00 2.0000e+00 1.0837e-09 1.7257e-08
Number of iterations =
8
The result obtained after 8 iterations as indicated above agrees with the solution in Example 16.2:
[2, 6, 2, 0, 0]> .
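The key property of the affine-scaling direction d = −D P̄ D c (with D = diag(x) and P̄ the orthogonal projector onto the nullspace of AD) can be verified at the starting point: Ad = 0, so feasibility is preserved, and c>d < 0, so the cost decreases. A pure-Python sketch for the data above, with a small Gauss-Jordan solver standing in for MATLAB's inv:

```python
from fractions import Fraction as F

# Data of Example 16.2 in standard form, with the starting point used above.
A = [[1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [1, 1, 0, 0, 1]]
c = [F(-2), F(-5), F(0), F(0), F(0)]
x = [F(2), F(3), F(2), F(3), F(3)]   # strictly feasible interior point
m, n = 3, 5

Ab = [[A[i][j] * x[j] for j in range(n)] for i in range(m)]   # Abar = A*D

def solve(M, rhs):
    """Solve M y = rhs by Gauss-Jordan elimination (square M, exact fractions)."""
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    k = len(M)
    for i in range(k):
        p = next(r for r in range(i, k) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        M[i] = [a / M[i][i] for a in M[i]]
        for r in range(k):
            if r != i and M[r][i] != 0:
                M[r] = [a - M[r][i] * t for a, t in zip(M[r], M[i])]
    return [row[-1] for row in M]

# d = -D*Pbar*D*c, computed as -D*(Dc - Abar'*(Abar*Abar')^{-1}*Abar*Dc).
Dc = [x[j] * c[j] for j in range(n)]
AbDc = [sum(Ab[i][j] * Dc[j] for j in range(n)) for i in range(m)]
G = [[sum(Ab[i][j] * Ab[k][j] for j in range(n)) for k in range(m)] for i in range(m)]
w = solve(G, AbDc)
proj = [Dc[j] - sum(Ab[i][j] * w[i] for i in range(m)) for j in range(n)]
d = [-x[j] * proj[j] for j in range(n)]

Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(m)]
cd = sum(c[j] * d[j] for j in range(n))
print(Ad == [0, 0, 0], cd < 0)   # True True
```

Because the arithmetic is exact, Ad vanishes identically, and c>d = −‖P̄ D c‖² is strictly negative at any nonoptimal interior point.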
18.2
The following is a MATLAB routine that implements the two-phase affine scaling method, using the MAT-
LAB function from Exercise 18.1.
function [x,N]=tpaffscale(c,A,b,options)
% March 28, 2000
%
% TPAFFSCALE(c,A,b);
% TPAFFSCALE(c,A,b,options);
%
% x = TPAFFSCALE(c,A,b);
% x = TPAFFSCALE(c,A,b,options);
%
% [x,N] = TPAFFSCALE(c,A,b);
% [x,N] = TPAFFSCALE(c,A,b,options);
%
%TPAFFSCALE(c,A,b) solves the following linear program using the
%Two-Phase Affine Scaling Method:
% min c’x subject to Ax=b, x>=0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required cost value.
%OPTIONS(14) = max number of iterations.
%OPTIONS(18) = alpha.
if nargin ~= 4
options = [];
if nargin ~= 3
disp(’Wrong number of arguments.’);
return;
end
end
%clc;
format compact;
format short e;
options = foptions(options);
print = options(1);
n=length(c);
m=length(b);
%Phase I
if print,
disp(’ ’);
disp(’Phase I’);
disp(’-------’);
end
u = rand(n,1);
v = b-A*u;
if any(v), %note: "v ~= zeros(m,1)" would be true only if every component were nonzero
u = affscale([zeros(1,n),1]’,[A v],b,[u’ 1]’,options);
u(n+1) = [];
end
if print
disp(’ ’)
disp(’Initial condition for Phase II:’)
disp(u)
end
The result obtained above agrees with the solution in Example 16.5: [0, 4, 0, 4]> .
18.3
The following is a MATLAB routine that implements the affine scaling method applied to LP problems of
the form given in the question by converting the given problem in Karmarkar’s artificial form and then using
the MATLAB function from Exercise 18.1.
function [x,N]=karaffscale(c,A,b,options)
%
% KARAFFSCALE(c,A,b);
% KARAFFSCALE(c,A,b,options);
%
% x = KARAFFSCALE(c,A,b);
% x = KARAFFSCALE(c,A,b,options);
%
% [x,N] = KARAFFSCALE(c,A,b);
% [x,N] = KARAFFSCALE(c,A,b,options);
%
%KARAFFSCALE(c,A,b) solves the following linear program using the
%Affine Scaling Method:
% min c’x subject to Ax>=b, x>=0.
%We use Karmarkar’s artificial problem to convert the above problem into
%a form usable by the affine scaling method.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required cost value.
%OPTIONS(14) = max number of iterations.
%OPTIONS(18) = alpha.
if nargin ~= 4
options = [];
if nargin ~= 3
disp(’Wrong number of arguments.’);
return;
end
end
%clc;
format compact;
format short e;
options = foptions(options);
print = options(1);
n=length(c);
m=length(b);
AA = [
c' -b' zeros(1,n) zeros(1,m) (-c'*x0+b'*l0);
A zeros(m,m) zeros(m,n) -eye(m) (b-A*x0+v0);
zeros(n,n) A' eye(n) zeros(n,m) (c-A'*l0)
];
bb = [0; b; c];
cc = [zeros(2*m+2*n,1); 1];
y0 = [x0; l0; u0; v0; 1];
[y,N]=affscale(cc,AA,bb,y0,options);
if cc'*y <= options(3),
x = y(1:n);
if nargout == 0
disp('Final point =');
disp(x');
disp('Final cost =');
disp(c'*x);
disp('Number of iterations =');
disp(N);
end %if
else
disp('Terminating: problem has no optimal feasible solution.');
end
We now apply the above MATLAB routine to the problem in Example 15.15, as follows:
>> c=[-3;-5];
>> A=[-1 -5; -2 -1; -1 -1];
>> b=[-40;-20;-12];
>> options(2)=10^(-4);
>> karaffscale(c,A,b,options);
Terminating: Relative difference between iterates <
1.0000e-04
Final point =
5.1992e+00 6.5959e+00
Final cost =
-4.8577e+01
Number of iterations =
3
The solution from Example 15.15 is [5, 7]> . The accuracy of the result obtained above is disappointing.
We believe that the inaccuracy here may be caused by our particularly simple numerical implementation of
the affine scaling method. This illustrates the numerical issues that must be dealt with in any practically
useful implementation of the affine scaling method.
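The behavior discussed here can also be reproduced outside MATLAB. Below is a minimal Python/NumPy sketch of one affine-scaling iteration for a standard-form LP, min c'x subject to Ax = b, x ≥ 0. This is an illustrative reimplementation, not the book's affscale routine; the toy problem and the step fraction alpha = 0.9 are our own choices.

```python
import numpy as np

def affine_scaling_step(c, A, x, alpha=0.9):
    """One affine-scaling iteration for: min c'x s.t. Ax = b, x > 0.

    x must be a strictly feasible interior point; alpha in (0, 1) is the
    fraction of the step to the boundary that we actually take.
    """
    D = np.diag(x)                                    # scale by the current iterate
    B = A @ D
    # project the scaled gradient Dc onto the null space of AD
    P = np.eye(len(x)) - B.T @ np.linalg.solve(B @ B.T, B)
    d = -D @ P @ (D @ c)                              # feasible descent direction: A d = 0
    if np.all(d >= 0) and np.any(d > 0):
        raise ValueError("problem is unbounded")
    neg = d < 0
    t = alpha * np.min(-x[neg] / d[neg]) if neg.any() else alpha
    return x + t * d

# toy problem: min x1 + x2  s.t.  x1 + x2 + x3 = 1, x >= 0 (optimal value 0)
c = np.array([1.0, 1.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
x = np.array([0.3, 0.3, 0.4])                         # strictly interior start
for _ in range(50):
    x = affine_scaling_step(c, A, x)
```

With the damping alpha = 0.9, the iterates on this toy problem approach the optimal vertex [0, 0, 1]> while staying strictly interior.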
18.4
a. Suppose T (x) = T (y). Then, Ti (x) = Ti (y) for i = 1, . . . , n + 1. Note that for i = 1, . . . , n, Ti (x) =
(xi /ai )Tn+1 (x) and Ti (y) = (yi /ai )Tn+1 (y). Therefore,
Ti (x) = (xi /ai )Tn+1 (x) = Ti (y) = (yi /ai )Tn+1 (y) = (yi /ai )Tn+1 (x),
Also, for i = 1, . . . , n,
Ti (x) = (yi /yn+1 )Tn+1 (x) = yi .
c. An immediate consequence of the solution to part b.
d. We have
Tn+1 (a) = 1/(a1 /a1 + · · · + an /an + 1) = 1/(n + 1),
and, for i = 1, . . . , n,
Ti (a) = (ai /ai )Tn+1 (a) = 1/(n + 1).
e. Since y = T (x), we have that for i = 1, . . . , n, yi = (xi /ai )yn+1 . Therefore, x0i = yi ai = xi yn+1 , which
implies that x0 = yn+1 x. Hence, Ax0 = yn+1 Ax = byn+1 .
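The properties established in this exercise can be checked numerically. The following Python sketch implements the map T from the exercise (Ti(x) = (xi/ai)Tn+1(x) with Tn+1(x) = 1/(x1/a1 + · · · + xn/an + 1)) and its inverse; the test values for a and x are arbitrary positive vectors of our own choosing.

```python
import numpy as np

def T(x, a):
    """Projective transformation from the exercise:
    T_i(x) = (x_i/a_i) * T_{n+1}(x),
    T_{n+1}(x) = 1/(x_1/a_1 + ... + x_n/a_n + 1)."""
    z = x / a
    t_last = 1.0 / (z.sum() + 1.0)
    return np.append(z * t_last, t_last)

def T_inv(y, a):
    """Inverse map: x_i = a_i * y_i / y_{n+1}."""
    return a * y[:-1] / y[-1]

a = np.array([2.0, 0.5, 1.0])      # arbitrary positive center
x = np.array([3.0, 1.0, 4.0])      # arbitrary positive test point
y = T(x, a)
```

The assertions below confirm that T maps into the unit simplex, that T_inv inverts T, and that T(a) = e/(n + 1) as in part d.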
18.5
Let x ∈ Rn , and y = T (x). Let ai be the ith column of A, i = 1, . . . , n. As in the hint, let A0 be given by
A0 = [a1 a1 , . . . , an an , −b] .
Then,
Ax = b ⇔ Ax − b = 0
⇔ [a1 , . . . , an , −b] [x1 , . . . , xn , 1]> = 0
⇔ [a1 a1 , . . . , an an , −b] [x1 /a1 , . . . , xn /an , 1]> = 0
⇔ A0 [(x1 /a1 )yn+1 , . . . , (xn /an )yn+1 , yn+1 ]> = 0
⇔ A0 y = 0,
where the second-to-last equivalence holds because yn+1 > 0.
18.6
The result follows from Exercise 18.5 by setting A := c> and b := 0.
18.7
Consider the set {x ∈ Rn : e> x = 1, x ≥ 0, x1 = 0}, which can be written as {x ∈ Rn : Ax = b, x ≥ 0},
where A = [e> ; e1> ] and b = [1, 0]> ,
with e = [1, . . . , 1]> , e1 = [1, 0, . . . , 0]> . Let a0 = e/n. By Exercise 12.20, the closest point on the set
{x : Ax = b} to the point a0 is
x∗ = A> (AA> )−1 (b − Aa0 ) + a0 = [0, 1/(n − 1), . . . , 1/(n − 1)]> .
Since x∗ ∈ {x : Ax = b, x ≥ 0} ⊂ {x : Ax = b}, the point x∗ is also the closest point on the set
{x : Ax = b, x ≥ 0} to the point a0 .
Let r = ka0 − x∗ k. Then, the sphere of radius r is inscribed in ∆. Note that
r = ka0 − x∗ k = 1/√(n(n − 1)).
Hence, the radius of the largest sphere inscribed in ∆ is larger than or equal to 1/√(n(n − 1)). It remains to show that this largest radius is less than or equal to 1/√(n(n − 1)). To this end, we show that this largest radius is less than or equal to 1/√(n(n − 1)) + ε for any ε > 0. For this, it suffices to show that the sphere of radius 1/√(n(n − 1)) + ε is not inscribed in ∆. To show this, consider the point
x = x∗ + ε (x∗ − a0 )/kx∗ − a0 k
= x∗ + ε √(n(n − 1)) (x∗ − a0 )
= x∗ + ε [−√((n − 1)/n), 1/√(n(n − 1)), . . . , 1/√(n(n − 1))]> .
It is easy to verify that the point x above is on the sphere of radius 1/√(n(n − 1)) + ε. However, clearly the first component of x is negative. Therefore, the sphere of radius 1/√(n(n − 1)) + ε is not inscribed in ∆. Our proof is thus completed.
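The closest-point computation and the radius r = 1/√(n(n − 1)) can be verified numerically, e.g. for n = 5. The Python sketch below applies the projection formula from Exercise 12.20 cited in the solution.

```python
import numpy as np

n = 5
e = np.ones(n)
e1 = np.zeros(n); e1[0] = 1.0
A = np.vstack([e, e1])             # encodes e'x = 1 and x1 = 0
b = np.array([1.0, 0.0])
a0 = e / n                         # center of the simplex

# closest point on {x : Ax = b} to a0 (projection formula from Exercise 12.20)
x_star = A.T @ np.linalg.solve(A @ A.T, b - A @ a0) + a0
r = np.linalg.norm(a0 - x_star)
```

For n = 5 the projected point is [0, 1/4, 1/4, 1/4, 1/4]> and r = 1/√20, matching the closed forms above.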
18.8
We first consider the constraints. We claim that x ∈ ∆ ⇔ x̄ ∈ ∆. To see this, note that if x ∈ ∆, then
e> x = 1 and hence
e> x̄ = e> D −1 x/e> D −1 x = 1,
which means that x̄ ∈ ∆. The same argument can be used for the converse. Next, we claim Ax = 0 ⇔
AD x̄ = 0. To see this, write
Ax = ADD −1 x = AD x̄(e> D −1 x).
Since e> D −1 x > 0, we have Ax = 0 ⇔ AD x̄ = 0. Finally, we claim that if x∗ is an optimal solution to the
original problem, then x̄∗ = U (x∗ ) is an optimal solution to the transformed problem. To see this, recall
that the problem is in Karmarkar's restricted form, and hence by Assumption (B) we have c> x∗ = 0. We
now note that the minimum value of the objective function c> D x̄ in the transformed problem is zero. This
is because c> D x̄ = c> x/e> D −1 x, and e> D −1 x > 0. Finally, we observe that at the point, x̄∗ = U (x∗ )
the objective function value for the transformed problem is zero. Indeed,
Postmultiplying the above by e, and using the facts that De = x0 , Ax0 = 0, and e> e = n, we get
which implies that vm+1 = 0. Hence, u> AD = 0> , which after postmultiplying by D −1 gives u> A = 0> .
Hence,
v> [A ; e> ] = 0> ,
where [A ; e> ] denotes the matrix A stacked above the row e> , which implies that v = 0. Hence, rank B = m + 1.
18.10
We proceed by induction. For k = 0, the result is true because x(0) = a0 . Now suppose that x(k) is a strictly
interior point of ∆. We first show that x̄(k+1) is a strictly interior point. Now,
x̄(k+1) = a0 − αrĉ(k) .
Since r is the radius of the largest sphere inscribed in ∆, x̄(k+1) is a strictly interior point of ∆. To complete
the proof, we write
x(k+1) = Uk−1 (x̄(k+1) ) = D k x̄(k+1) /(e> D k x̄(k+1) ).
We already know that x(k+1) ∈ ∆. It therefore remains to show that it is strictly interior, i.e., x(k+1) > 0.
To see this, note that e> D k x̄(k+1) > 0. Furthermore, we can write
D k x̄(k+1) = [x̄1(k+1) x1(k) , . . . , x̄n(k+1) xn(k) ]> .
Since x(k) = [x1(k) , . . . , xn(k) ]> > 0 by the induction hypothesis, and x̄(k+1) = [x̄1(k+1) , . . . , x̄n(k+1) ]> > 0 by the above, x(k+1) > 0 and hence it is a strictly interior point of ∆.
19.7
It suffices to show the following claim: If we introduce the equation
xi + Σ_{j=m+1}^{n} ⌊yij ⌋ xj + xn+1 = ⌊yi0 ⌋
into the original constraint, then the result holds. The reason this suffices is that the Gomory cut is obtained
by subtracting this equation from an equation obtained by elementary row operations on [A, b] (hence is
equivalent to premultiplication by an invertible matrix).
To show the above claim, let xn+1 satisfy this constraint with an integer vector x. Then,
xi + Σ_{j=m+1}^{n} ⌊yij ⌋ xj + xn+1 = ⌊yi0 ⌋,
minimize 3λ1 + λ2
subject to (2/5)λ1 + (2/5)λ2 ≥ 3
λ1 − (2/5)λ2 ≥ 4
λ1 , λ2 ≥ 0
λ1 , λ2 ∈ Z.
The problem is solved graphically using the same approach as in Example 19.5. We proceed by calculating
the extreme points of the feasible set. We first assume that λ1 , λ2 ∈ R. The extreme points are calculated
intersecting the given constraints, and they are:
λ(1) = [5, 5/2]> , λ(2) = [15/2, 0]> .
In Figure 24.10, we show the feasible set Ω for the case when λ1 , λ2 ∈ R.
Next we sketch the feasible set for the case when λ1 , λ2 ∈ Z and solve the problem graphically. The
graphical solution is depicted in Figure 24.11. We can see in Figure 24.11 that the optimal integer solution
is
x∗ = [6, 2]> .
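Reading the constraints as (2/5)λ1 + (2/5)λ2 ≥ 3 and λ1 − (2/5)λ2 ≥ 4 with λ1, λ2 ≥ 0, the optimality of [6, 2]> can be confirmed by brute-force enumeration over a box that clearly contains the optimum. The search range 0–19 in this Python sketch is our own choice.

```python
import itertools

# constraints: (2/5)(l1 + l2) >= 3, l1 - (2/5)l2 >= 4, l1, l2 >= 0 integer
best = None
for l1, l2 in itertools.product(range(20), repeat=2):
    if 0.4 * (l1 + l2) >= 3 and l1 - 0.4 * l2 >= 4:
        cost = 3 * l1 + l2
        if best is None or cost < best[0]:
            best = (cost, l1, l2)
```

The enumeration returns cost 20 attained at (λ1, λ2) = (6, 2), in agreement with the graphical solution.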
The following MATLAB code generates the figures.
%--------------------------------------------------------
% The vertices of the feasible set are:
Figure 24.10 Feasible set Ω for the case when λ1 , λ2 ∈ R in Example 19.5.
Y = [0 x(2) 0 40];
fs=16; % Fontsize
% Now we draw the set Omega, supposing x1 x2 \in R.
vi = convhull(X,Y);
plot(X(1:2),Y(1:2), 'rX', 'LineWidth',4);
axis on; axis equal;
axis([-.2 18 -0.2 18]);
set(gca,'Fontsize',fs)
%title('Feasible set supposing x1, x2 in R.','Fontsize',14,'Fontname','Avant-garde');
hold on
fill (X(vi), Y(vi), 'b','facealpha', 0.2);
text(.1,.5,['\fontsize{48}\Omega'],'position', [12 5])
hold off
X = [];
Y = [];
for i=1:18
j=0;
while ((j<=((i-4)*2.5)) && (j<18))
if((j>=(7.5-i)))
X = [X i];
Y = [Y j];
end
j=j+1;
end
end
plot(X,Y,'bls','LineWidth',1,...
'MarkerEdgeColor','k',...
'MarkerFaceColor','g',...
'MarkerSize',10)
x = [5:0.1:18];
y1 = (x-4)*2.5;
y2 = (7.5-x);
plot(x,y1,'--b',x,y2,'--b','LineWidth',2);
set(gca,'Fontsize',fs)
%text(.1,.5,['\fontsize{48}\Omega'],'position', [12 5])
plot(xc, yc, 'r', 'LineWidth',2);
20. Problems with Equality Constraints
20.1
The feasible set consists of the points
x = [2, a]> , a ≤ −1.
We next find the gradients:
∇h(x) = [2(x1 − 2), 0]> and ∇g(x) = [0, 3(x2 + 1)^2 ]> .
No feasible point is regular, because at the feasible points the gradients of h and g are not linearly independent. Hence, there are no regular points of the constraints.
20.2
a. As usual, let f be the objective function, and h the constraint function. We form the Lagrangian
l(x, λ) = f (x)+λ> h(x), and then find critical points by solving the following equations (Lagrange condition):
We obtain
[2 2 0 1 4; 2 6 0 2 0; 0 0 0 0 5; 1 2 0 0 0; 4 0 5 0 0] [x1 , x2 , x3 , λ1 , λ2 ]> = [−4, −5, −6, 3, 6]> .
The unique solution to the above system is
4 + 2λx1 = 0
2x2 + 2λx2 = 0
x1^2 + x2^2 − 9 = 0.
We have four points satisfying the Lagrange condition:
Note that all four points x(1) , . . . , x(4) are regular. We now apply the SOSC. We have
L(x, λ) = [0 0; 0 2] + λ [2 0; 0 2],
T (x) = {y : [2x1 , 2x2 ]y = 0}.
x2 + 2λx1 = 0
x1 + 8λx2 = 0
x1^2 + 4x2^2 − 1 = 0.
Note that all four points x(1) , . . . , x(4) are regular. We now apply the SOSC. We have
L(x, λ) = [0 1; 1 0] + λ [2 0; 0 8],
T (x) = {y : [2x1 , 8x2 ]y = 0}.
Note that
L(x, −1/4) = [−1/2 1; 1 −2] and L(x, 1/4) = [1/2 1; 1 2].
After standard manipulations, we conclude that the first two points are strict local maximizers, while the
last two points are strict local minimizers.
20.3
We form the Lagrangian
20.5
a. The Lagrange condition for this problem is:
2(x∗ − x0 ) + 2λ∗ x∗ = 0
kx∗ k2 = 9,
where λ∗ ∈ R. Rewriting the first equation we get (1 + λ∗ )x∗ = x0 , which when combined with the second equation gives two values for 1 + λ∗ : 1 + λ∗1 = 2/3 and 1 + λ∗2 = −2/3. Hence there are two solutions to the Lagrange condition: x∗(1) = (3/2)[1, √3]> , and x∗(2) = −(3/2)[1, √3]> .
b. We have L(x∗(i) , λ∗i ) = (1 + λ∗i )I. To apply the SONC Theorem, we need to check regularity. This is
easy, since the gradient of the constraint function at any point x is 2x, which is nonzero at both the points
in part a.
For the second point, 1 + λ∗2 = −2/3, which implies that the point is not a local minimizer because the
SONC does not hold.
On the other hand, the first point satisfies the SOSC (since 1 + λ∗1 = 2/3), which implies that it is a strict
local minimizer.
20.6
a. Let x1 , x2 , and x3 be the dimensions of the closed box. The problem is
minimize 2(x1 x2 + x2 x3 + x3 x1 )
subject to x1 x2 x3 = V.
We denote f (x) = 2(x1 x2 + x2 x3 + x3 x1 ), and h(x) = x1 x2 x3 − V . We have ∇f (x) = 2[x2 + x3 , x1 +
x3 , x1 + x2 ]> and ∇h(x) = [x2 x3 , x1 x3 , x1 x2 ]> . By the Lagrange condition, the dimensions of the box with
minimum surface area satisfy
2(b + c) + λbc = 0
2(a + c) + λac = 0
2(a + b) + λab = 0
abc = V,
where λ ∈ R.
b. Regularity of x∗ means ∇h(x∗ ) 6= 0 (since there is only one scalar equality constraint in this case). Since
x∗ = [a, b, c]> is a feasible point, we must have a, b, c 6= 0 (for otherwise the volume will be 0). Hence,
∇h(x∗ ) 6= 0, which implies that x∗ is regular.
c. Multiplying the first equation by a and the second equation by b, and then subtracting the first from the
second, we obtain:
c(a − b) = 0.
Since c 6= 0 (see part b), we conclude that a = b. By a similar procedure on the second and third equations,
we conclude that b = c. Hence, substituting into the fourth (constraint) equation, we obtain
a = b = c = V^{1/3} .
The matrix L(x∗ , λ) is not positive definite (there are several ways to check this: we could use Sylvester’s
criterion, or we could compute the eigenvalues of L(x∗ , λ), which are 2, 2, −4). Therefore, we need to compute
the tangent space T (x∗ ). Note that
Hence,
T (x∗ ) = {y : Dh(x∗ )y = 0} = {y : [1, 1, 1]y = 0} = {y : y3 = −(y1 + y2 )}.
Let y ∈ T (x∗ ), y 6= 0. Note that either y1 6= 0 or y2 6= 0. We have,
y> L(x∗ , λ)y = −2y> [0 1 1; 1 0 1; 1 1 0] y = −4(y1 y2 + y1 y3 + y2 y3 ).
Therefore, y > L(x∗ , λ)y > 0, which shows that the SOSC is satisfied.
An alternative (simpler) calculation:
y> L(x∗ , λ)y = −2y> [0 1 1; 1 0 1; 1 1 0] y = −2(y1 (y2 + y3 ) + y2 (y1 + y3 ) + y3 (y1 + y2 )).
Substituting y1 = −(y2 + y3 ), y2 = −(y1 + y3 ), and y3 = −(y1 + y2 ) in the first, second, and third terms,
respectively, we obtain
y> L(x∗ , λ)y = 2(y1^2 + y2^2 + y3^2 ) > 0.
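As a numerical sanity check on the conclusion that the cube a = b = c = V^(1/3) minimizes the surface area, the following Python sketch compares the cube against randomly perturbed boxes of the same volume. The value V = 8 and the perturbation size are arbitrary choices of ours.

```python
import numpy as np

V = 8.0
surf = lambda a, b, c: 2 * (a * b + b * c + c * a)   # surface area of an a x b x c box
s = V ** (1 / 3)                                      # candidate: a = b = c = V^(1/3)

rng = np.random.default_rng(0)
vals = []
for _ in range(1000):
    # random feasible box: perturb two sides, fix the third so that a*b*c = V
    a, b = s * (1 + 0.5 * rng.uniform(-1.0, 1.0, 2))
    vals.append(surf(a, b, V / (a * b)))
```

Every perturbed box of volume 8 has surface area at least that of the 2 x 2 x 2 cube (24), as the SOSC analysis predicts.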
20.7
a. We first compute critical points by applying the Lagrange conditions. These are:
2x1 + 2λx1 = 0
6x2 + 2λx2 = 0
1 + 2λx3 = 0
x1^2 + x2^2 + x3^2 − 16 = 0.
All the above points are regular. We now apply second order conditions to establish their nature. For this,
we compute
F (x) = [2 0 0; 0 6 0; 0 0 0] , H(x) = [2 0 0; 0 2 0; 0 0 2] ,
and
T (x∗ ) = {y ∈ R3 : [2x1 , 2x2 , 2x3 ]y = 0}.
For the first point, we have
L(x(1) , λ(1) ) = [0 0 0; 0 4 0; 0 0 −2]
and
T (x(1) ) = {[−a/√63, b, a]> : a, b ∈ R}.
Let y = [−a/√63, b, a]> ∈ T (x(1) ), where a and b are not both zero. Then
y> L(x(1) , λ(1) )y = 4b^2 − 2a^2 , which is > 0 if |a| < |b|√2, = 0 if |a| = |b|√2, and < 0 if |a| > |b|√2.
From the above, we see that x(1) does not satisfy the SONC. Therefore, x(1) cannot be an extremizer.
Performing similar calculations for x(2) , we conclude that x(2) cannot be an extremizer either.
For the third point, we have
L(x(3) , λ(3) ) = [7/4 0 0; 0 23/4 0; 0 0 −1/4]
and
T (x(3) ) = {[a, b, 0]> : a, b ∈ R}.
Let y = [a, b, 0]> ∈ T (x(3) ), where a and b are not both zero. Then
y> L(x(3) , λ(3) )y = (7/4)a^2 + (23/4)b^2 > 0.
Hence, x(3) is a strict local minimizer. Performing similar calculations for the remaining points, we conclude
that x(4) is a strict local minimizer, and x(5) and x(6) are both strict local maximizers.
b. The Lagrange condition for the problem is:
From the constraint equation, we note that x = [0, 0]> cannot satisfy the Lagrange condition. Therefore,
the determinant of the above matrix must be zero. Solving for λ yields two possible values: −1/7 and −1/2.
We then have four points satisfying the Lagrange condition:
Applying the SOSC, we conclude that x(1) and x(2) are strict local minimizers, and x(3) and x(4) are strict
local maximizers.
20.8
a. We can represent the problem as
minimize f (x)
subject to h(x) = 0,
where f (x) = 2x1 + 3x2 − 4, and h(x) = x1 x2 − 6. We have Df (x) = [2, 3], and Dh(x) = [x2 , x1 ]. Note
that 0 is not a feasible point. Therefore, any feasible point is regular. If x∗ is a local extremizer, then by
the Lagrange multiplier theorem, there exists λ∗ ∈ R such that Df (x∗ ) + λ∗ Dh(x∗ ) = 0> , or
2 + λ∗ x∗2 = 0
3+ λ∗ x∗1 = 0.
Solving, we get two possible extremizers: x(1) = [3, 2]> , with corresponding Lagrange multiplier λ(1) = −1,
and x(2) = −[3, 2]> , with corresponding Lagrange multiplier λ(2) = 1.
b. We have F (x) = O, and H(x) = [0 1; 1 0].
First, consider the point x(1) = [3, 2]> , with corresponding Lagrange multiplier λ(1) = −1. We have
L(x(1) , λ(1) ) = −[0 1; 1 0],
and
T (x(1) ) = {y : [2, 3]y = 0} = {α[−3, 2]> : α ∈ R}.
Let y = α[−3, 2]> ∈ T (x(1) ), α 6= 0. We have
and
T (x(2) ) = {y : −[2, 3]y = 0} = {α[−3, 2]> : α ∈ R} = T (x(1) ).
Let y = α[−3, 2]> ∈ T (x(2) ), α 6= 0. We have
f (x1 , x2 ) = (x> Qx)/(x> P x).
Therefore, if a point x is a maximizer of f (x1 , x2 ) then so is any nonzero multiple of this point because
Thus, we wish to maximize f (x1 , x2 ) = 18x21 − 8x1 x2 + 12x22 subject to the equality constraint, h(x1 , x2 ) =
1 − 2x21 − 2x22 = 0. We apply the Lagrange’s method to solve the problem. We form the Lagrangian function,
l(x, λ) = f + λh,
That is, solving the problem reduces to solving an eigenvalue-eigenvector problem,
(λI2 − [9 −2; −2 6]) x = [λ−9 2; 2 λ−6] x = 0.
The characteristic polynomial is
det [λ−9 2; 2 λ−6] = λ^2 − 15λ + 50 = (λ − 5)(λ − 10).
The eigenvalues are 5 and 10. Because we are interested in finding a maximizer, we conclude that the maximal value of the objective function is 10, while the maximizer is an eigenvector corresponding to this eigenvalue, scaled to satisfy the constraint. An eigenvector can easily be found by taking any nonzero column of the adjoint matrix of
10I2 − [9 −2; −2 6] .
Thus, √0.1 [−2, 1]>
is a maximizer for the equivalent problem. Any multiple of the above vector is a solution of the original
maximization problem.
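The eigenvalue computation above can be checked numerically. In the Python sketch below, Q and P encode the objective 18x1^2 − 8x1x2 + 12x2^2 and the constraint 2x1^2 + 2x2^2 = 1 from the solution, and the claimed maximizer √0.1 [−2, 1]> is verified to be feasible and to attain the value 10.

```python
import numpy as np

Q = np.array([[18.0, -4.0], [-4.0, 12.0]])       # f(x) = x'Qx = 18x1^2 - 8x1x2 + 12x2^2
P = 2.0 * np.eye(2)                              # constraint: x'Px = 2x1^2 + 2x2^2 = 1
lams = np.linalg.eigvals(np.linalg.solve(P, Q))  # eigenvalues of P^{-1}Q
x = np.sqrt(0.1) * np.array([-2.0, 1.0])         # claimed maximizer
```

The eigenvalues of P^{-1}Q come out as 5 and 10, matching the characteristic polynomial above.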
20.10
We use the technique of Example 20.8. First, we write the objective function in the form x> Qx, where
" #
> 3 2
Q=Q = .
2 3
The characteristic polynomial of Q is λ^2 − 6λ + 5, and the eigenvalues of Q are 1 and 5. The solutions to the problem are the unit-length eigenvectors of Q corresponding to the eigenvalue 5, which are ±[1, 1]>/√2.
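A quick numerical confirmation of this eigenvector computation (a Python/NumPy sketch):

```python
import numpy as np

Q = np.array([[3.0, 2.0], [2.0, 3.0]])
w, V = np.linalg.eigh(Q)        # eigenvalues in ascending order: 1, 5
v = V[:, -1]                    # unit eigenvector for the largest eigenvalue
```

The returned eigenvector is ±[1, 1]>/√2 (the sign is not determined), and it attains the maximal value 5.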
20.11
Consider the problem
minimize kAxk2
subject to kxk2 = 1.
The optimal objective function value of this problem is the smallest value that kyk2 can take. The above
can be solved easily using Lagrange multipliers. The Lagrange conditions are
The first equation can be rewritten as A> Ax = λx, which implies that λ is an eigenvalue of A> A. Moreover, premultiplying by x> yields x> A> Ax = λx> x = λ, which indicates that the Lagrange multiplier is equal to the optimal objective function value. Hence, the range of values that kyk = kAxk can take is 1 to √20.
20.12
Consider the following optimization problem (we need to use squared norms to make the functions differen-
tiable):
minimize −kAxk2
subject to kxk2 = 1.
As usual, write f (x) = −kAxk2 and h(x) = kxk2 − 1. We have ∇f (x) = −2A> Ax and ∇h(x) = 2x. Note
that all feasible solutions are regular. Let x∗ be an optimal solution. Note that the optimal value of the
objective function is f (x∗ ) = −kAk22 . The Lagrange condition for the above problem is:
Therefore, because x∗ minimizes f , we deduce that λ∗ must be the largest eigenvalue of A> A; i.e., λ∗ = λ1 . Therefore,
kAk2 = √λ1 .
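The identity ‖A‖2 = √λ1 is easy to confirm numerically for a random matrix (a Python sketch; the matrix dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
lam1 = np.linalg.eigvalsh(A.T @ A).max()   # largest eigenvalue of A'A
```

Here np.linalg.norm(A, 2) computes the induced 2-norm (largest singular value), which agrees with √λ1.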
20.13
Let h(x) = 1 − x> P x = 0. Let x0 be such that h(x0 ) = 0. Then, x0 6= 0. For x0 to be a regular point, we
need to show that {∇h(x0 )} is a linearly independent set, i.e., ∇h(x0 ) 6= 0. Now, ∇h(x) = −2P x. Since
P is nonsingular, and x0 6= 0, then ∇h(x0 ) = −2P x0 6= 0.
20.14
Note that the point [1, 1]> is a regular point. Applying the Lagrange multiplier theorem gives
a + 2λ∗ = 0
∗
b + 2λ = 0.
Hence, a = b.
20.15
a. Denote the solution by [x∗1 , x∗2 ]> . The Lagrange condition for this problem has the form
From the first and third equations it follows that x∗1 , x∗2 6= 0. Then, combining the first and second equations,
we obtain
λ∗ = (2 − x∗2 )/(2x∗1 ) = x∗1 /(2x∗2 ),
which implies that 2x∗2 − (x∗2 )2 = (x∗1 )2 . Hence, x∗2 = 1, and by the third Lagrange equation, (x∗1 )2 = 1.
Thus, the only two points satisfying the Lagrange condition are [1, 1]> and [−1, 1]> . Note that both points
are regular.
b. Consider the point x∗ = [−1, 1]> . The corresponding Lagrange multiplier is λ∗ = −1/2. The Hessian of
the Lagrangian is " # " # " #
∗ ∗ 0 1 1 2 0 −1 1
L(x , λ ) = − = .
1 0 2 0 −2 1 1
The tangent plane is given by
Let y ∈ T (x∗ ), y 6= 0. Then, y = [a, −a]> for some a 6= 0. We have y > L(x∗ , λ∗ )y = −2a2 < 0. Hence,
SONC does not hold in this case, and therefore x∗ = [−1, 1]> cannot be local minimizer. In fact, the point
is a strict local maximizer.
c. Consider the point x∗ = [1, 1]> . The corresponding Lagrange multiplier is λ∗ = 1/2. The Hessian of the
Lagrangian is " # " # " #
∗ ∗ 0 1 1 2 0 1 1
L(x , λ ) = + = .
1 0 2 0 −2 1 −1
The tangent plane is given by
Let y ∈ T (x∗ ), y 6= 0. Then, y = [a, a] for some a 6= 0. We have y > L(x∗ , λ∗ )y = 2a2 > 0. Hence, by the
SOSC, the point x∗ = [1, 1]> is a strict local minimizer.
20.16
a. The point x∗ is the solution to the optimization problem
minimize (1/2)kx − x0 k^2
subject to Ax = 0.
Since rank A = m, any feasible point is regular. By the Lagrange multiplier theorem, there exists λ∗ ∈ Rm
such that
(x∗ − x0 )> − λ∗> A = 0> .
Postmultiplying both sides by x∗ and using the fact that Ax∗ = 0, we get
(x∗ − x0 )> x∗ = 0.
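The orthogonality relation (x∗ − x0)> x∗ = 0 can be checked with the standard projection formula x∗ = x0 − A>(AA>)−1 Ax0, which is consistent with the Lagrange condition derived above (a Python sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 5))    # full-rank A with m < n (almost surely)
x0 = rng.standard_normal(5)
# closest point to x0 in {x : Ax = 0}: subtract the row-space component of x0
x_star = x0 - A.T @ np.linalg.solve(A @ A.T, A @ x0)
```

The computed point is feasible and the displacement x∗ − x0 is orthogonal to x∗, as claimed.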
20.17
a. The Lagrange condition is (omitting all superscript-∗ for convenience):
For simplicity, write Q = A> A, which is positive definite. From the first equation, we have
x = Q−1 A> b − Q−1 C > (CQ−1 C > )−1 (CQ−1 A> b − d).
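The closed-form solution can be validated numerically: the Python sketch below evaluates the formula on random data and checks both feasibility (Cx = d) and stationarity (the gradient Qx − A>b lies in the row space of C, as the Lagrange condition requires).

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4)); b = rng.standard_normal(6)
C = rng.standard_normal((2, 4)); d = rng.standard_normal(2)

Q = A.T @ A                     # positive definite: A has full column rank (almost surely)
Qi = np.linalg.inv(Q)
# closed-form constrained least-squares solution from the Lagrange condition
x = Qi @ A.T @ b - Qi @ C.T @ np.linalg.solve(C @ Qi @ C.T, C @ Qi @ A.T @ b - d)
```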
b. Rewrite the objective function as
(1/2) x> A> Ax − b> Ax + (1/2) kbk^2 .
As before, write Q = A> A. Completing the squares and setting y = x − Q−1 A> b, the objective function
can be written as
(1/2) y> Qy + const.
Hence, the problem can be converted to the equivalent QP:
minimize (1/2) y> Qy
subject to Cy = d − CQ−1 A> b.
Hence,
x∗ = Q−1 c + Q−1 A> (AQ−1 A> )−1 (b − AQ−1 c).
Alternatively, we could have rewritten the given problem in our usual quadratic programming form with
variable y = x − Q−1 c.
20.19
Clearly, we have M = R(B), i.e., y ∈ M if and only if there exists x ∈ Rm such that y = Bx. Hence
L is positive semidefinite on M
⇔ for all y ∈ M, y> Ly ≥ 0
⇔ for all x ∈ Rm , (Bx)> L(Bx) ≥ 0
⇔ for all x ∈ Rm , x> (B> LB)x ≥ 0
⇔ for all x ∈ Rm , x> LM x ≥ 0
⇔ LM ≥ 0.
For positive definiteness, the same argument applies, with ≥ replaced by >.
20.20
a. By simple manipulations, we can write
x2 = a2 x0 + abu0 + bu1 .
minimize f (u)
subject to h(u) = 0,
where f (u) = 21 kuk2 , and h(u) = a2 x0 + [ab, b]u. Since the vector ∇h(u) = [ab, b]> is nonzero for any u,
then any feasible point is regular. Therefore, by the Lagrange multiplier theorem, there exists λ∗ ∈ R such
that
u∗0 + λ∗ ab = 0
u∗1 + λ∗ b = 0
a2 x0 + abu∗0 + bu∗1 = 0.
We have three linear equations in three unknowns, that upon solving yields
u∗0 = −a^3 x0 /(b(1 + a^2 )) , u∗1 = −a^2 x0 /(b(1 + a^2 )) .
b. The Hessians of f and h are F (u) = I 2 (2 × 2 identity matrix) and H(u) = O, respectively. Hence, the
Hessian of the Lagrangian is L(u∗ , λ∗ ) = I 2 , which is positive definite. Therefore, u∗ satisfies the SOSC,
and is therefore a strict local minimizer.
20.21
Letting z = [x2 , u1 , u2 ]> , the objective function is z > Qz, where
Q = [1 0 0; 0 1/2 0; 0 0 1/3] .
x2 = 2x1 + u2 = 2(2 + u1 ) + u2 ,
Thus, the optimal controls are u∗1 = −4/3 and u∗2 = −1.
20.22
The composite input vector is
h i>
u = u0 u1 u2 .
The performance index J is J = 21 u> u. To obtain the constraint Au = b, where A ∈ R1×3 , we proceed as
follows. First, we write
x2 = x1 + 2u1
= x0 + 2u0 + 2u1 .
x3 = 9
= x2 + 2u2
= x0 + 2u0 + 2u1 + 2u2 .
Thus we formulated the problem of finding the optimal control sequence as a constrained optimization
problem
1 >
minimize u u
2
subject to Au = b.
u + A> λ = 0 and Au = b.
From the first of the above conditions, we calculate, u = −A> λ. Substituting the above into the second of
the Lagrange conditions gives
λ = −(AA> )−1 b.
Combining the last two equations, we obtain a closed-form formula for the optimal input sequence
u = A> (AA> )−1 b.
In our problem, A = [2, 2, 2] and b = 9 − x0 , so that
u = [u0 , u1 , u2 ]> = A> (AA> )−1 b = ((9 − x0 )/6) [1, 1, 1]> .
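Numerically, with an assumed initial state x0 = 1 (x0 is left symbolic in the solution), the minimum-norm control works out to u = ((9 − x0)/6)[1, 1, 1]> (a Python sketch):

```python
import numpy as np

x0 = 1.0                               # assumed initial state, our own choice
A = np.array([[2.0, 2.0, 2.0]])        # from x3 = x0 + 2u0 + 2u1 + 2u2
b = np.array([9.0 - x0])               # target: x3 = 9
u = A.T @ np.linalg.solve(A @ A.T, b)  # u = A'(AA')^{-1} b, the minimum-norm solution
```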
21.1
a. We form the Lagrangian function,
(1 − µ)x1 = 0, (2 − µ)x2 = 0.
We first consider the case when µ = 0. Then, we obtain the point, x(1) = 0, which does not satisfy the
constraints.
The next case is when µ = 1. Then we must have x2 = 0, and using µ(4 − x1^2 − 2x2^2 ) = 0 gives
x(2) = [2, 0]> and x(3) = [−2, 0]> .
Hence, x(2) and x(3) satisfy the SOSC to be strict local minimizers.
When µ = 2,
L = [−2 0; 0 0] ,
and
T = {y = a[1, 0]> : a ∈ R}.
We have
y > Ly = −2a2 < 0.
Thus, x(4) and x(5) do not satisfy the SONC to be minimizers.
In summary, only x(2) and x(3) are strict local minimizers.
21.2
a. We first find critical points by applying the Karush-Kuhn-Tucker conditions, which are
2x1 − 2 − 2µ1 x1 + 5µ2 = 0
2x2 − 10 + (1/5)µ1 + (1/2)µ2 = 0
µ1 ((1/5)x2 − x1^2 ) + µ2 (5x1 + (1/2)x2 − 5) = 0
µ ≥ 0.
We have to check four possible combinations.
Case 1: (µ1 = 0, µ2 = 0) Solving the first and second Karush-Kuhn-Tucker equations yields x(1) = [1, 5]> .
However, this point is not feasible and is therefore not a candidate minimizer.
Case 2: (µ1 > 0, µ2 = 0) We have two possible solutions:
x(2) = [−0.98, 4.8]> with µ1(2) = 2.02,
x(3) = [−0.02, 0]> with µ1(3) = 50.
Both x(2) and x(3) satisfy the constraints, and are therefore candidate minimizers.
Case 3: (µ1 = 0, µ2 > 0) Solving the corresponding equation yields:
x(4) = [0.5050, 4.9505]> with µ2(4) = 0.198.
The point x(4) is not feasible, and hence is not a candidate minimizer.
Case 4: (µ1 > 0, µ2 > 0) We have two solutions:
x(5) = [0.732, 2.679]> µ(5) = [13.246, 3.986]>
x(6) = [−2.73, 37.32]> µ(6) = [188.8, −204]> .
The point x(5) is feasible, but x(6) is not.
We are left with three candidate minimizers: x(2) , x(3) , and x(5) . It is easy to check that they are regular.
We now check if each satisfies the second order conditions. For this, we compute
L(x, µ) = [2 − 2µ1 , 0; 0, 2] .
Thus, x(3) does not satisfy the SOSC. In fact, in this case, we have T (x(3) ) = T̃ (x(3) ), and hence x(3) does
not satisfy the SONC either. We conclude that x(3) is not a local minimizer. We can easily check that x(3)
is not a local maximizer either.
For x(5) , we have
L(x(5) , µ(5) ) = [−24.4919 0; 0 2]
and
T̃ (x(5) ) = {0}.
The SOSC is trivially satisfied, and therefore x(5) is a strict local minimizer.
b. The Karush-Kuhn-Tucker conditions are:
2x1 − µ1 − µ3 = 0
2x2 − µ2 − µ3 = 0
−x1 ≤ 0
−x2 ≤ 0
−x1 − x2 + 5 ≤ 0
−µ1 x1 − µ2 x2 + µ3 (−x1 − x2 + 5) = 0
µ1 , µ2 , µ3 ≥ 0.
It is easy to verify that the only combination of Karush-Kuhn-Tucker multipliers resulting in a feasible point
is µ1 = µ2 = 0, µ3 > 0. For this case, we obtain x∗ = [2.5, 2.5]> , µ∗ = [0, 0, 5]> . We have
L(x∗ , µ∗ ) = [2 0; 0 2] > 0.
Hence, x∗ is a strict local minimizer (in fact, the only one for this problem).
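Because the objective x1^2 + x2^2 is the squared distance to the origin, the minimizer over the feasible set must lie on the active constraint x1 + x2 = 5; a simple grid search over that line confirms x∗ = [2.5, 2.5]> (a Python sketch):

```python
import numpy as np

# objective x1^2 + x2^2 over {x : x1 + x2 >= 5, x >= 0}; by convexity the
# minimizer lies on the boundary line x1 + x2 = 5
xs = np.linspace(0.0, 5.0, 501)
pts = np.stack([xs, 5.0 - xs], axis=1)
vals = (pts ** 2).sum(axis=1)
i = int(vals.argmin())
```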
c. The Karush-Kuhn-Tucker conditions are:
It is easy to verify that the only combination of Karush-Kuhn-Tucker multipliers resulting in a feasible point
is µ1 = 0, µ2 > 0. For this case, we obtain x∗ = [9/14, 2/14]> , µ∗ = [0, 13/14]> . We have
L(x∗ , µ∗ ) = [2 6; 6 0]
and
T̃ (x∗ ) = {a[1, 1]> : a ∈ R}.
Hence, x∗ is a strict local minimizer (in fact, the only one for this problem).
21.3
The Karush-Kuhn-Tucker conditions are:
Note that the resulting µ values violate the condition µ > 0. Hence, neither of the points are minimizers
(although they are candidates for maximizers).
Case 2: (µ = 0) Subtracting the second equation from the first yields x1 = x2 , which upon substituting
into the third equation gives two possible points:
Note that x(4) is not a feasible point, and is therefore not a candidate minimizer.
Therefore, the only remaining candidate is x(3) , with corresponding λ(3) = −1/2 (and µ = 0). We now
check second order conditions. We have
L(x(3) , 0, λ(3) ) = [1 −1; −1 1]
and
T̃ (x(3) ) = {a[1, −1]> : a ∈ R}.
minimize en> p
subject to Gp ≥ P em
p ≥ 0,
where G = [gi,j ], en = [1, . . . , 1]> (with n components), and p = [p1 , . . . , pn ]> . The KKT condition for this
problem is:
21.5
a. We have f (x) = x2 − (x1 − 2)3 + 3 and g(x) = 1 − x2 . Hence, ∇f (x) = [−3(x1 − 2)2 , 1]> and
∇g(x) = [0, −1]> . The KKT condition is
µ ≥ 0
−3(x1 − 2)2 = 0
1−µ = 0
µ(1 − x2 ) = 0
1 − x2 ≤ 0.
µ ≥ 0
−2µ(x1 − 1) = 0
1−µ = 0
2
µ(x2 + (x1 − 1)^2 − 3) = 0
x2 + (x1 − 1)^2 − 3 ≥ 0.
From the third equation we get µ = 1. The second equation then gives x1 = 1, and the fourth equation gives
x2 = 3. Therefore, the only point that satisfies the KKT condition is x∗ = [1, 3]> , with a KKT multiplier of
µ∗ = 1.
b. Note that the constraint x2 + (x1 − 1)^2 − 3 ≥ 0 is active at x∗ . We have T (x∗ ) = {y : [0, −1]y = 0} = {y : y2 = 0}, and N (x∗ ) = {y : y = [0, −1]z, z ∈ R} = {y : y1 = 0}. Because µ∗ > 0, we have T̃ (x∗ ) = T (x∗ ) = {y : y2 = 0}.
c. We have
L(x∗ , µ∗ ) = O + 1 · [−2 0; 0 0] = [−2 0; 0 0] .
From part b, T (x∗ ) = {y : y2 = 0}. Therefore, for any y ∈ T (x∗ ), y > L(x∗ , µ∗ )y = −2y12 ≤ 0, which means
that x∗ does not satisfy the SONC.
21.7
a. We need to consider two optimization problems. We first consider the minimization problem
Then, we form the Lagrangian function
The point x∗ = 0 satisfies the above conditions for µ1 = −2, µ2 = 0, and µ3 = −4. Thus the point x∗ does
not satisfy the KKT conditions for minimum.
We next consider the maximization problem
The point x∗ = 0 satisfies the above conditions for µ1 = 2, µ2 = 0, and µ3 = 4. Hence, the point x∗ satisfies
the KKT conditions for maximum.
b. We next compute the Hessian, with respect to x, of the Lagrangian to obtain
L = F + µ∗1 G1 = [−2 0; 0 −2] + [4 0; 0 0] = [2 0; 0 −2] .
That is, T̃ is a trivial subspace that consists only of the zero vector. Thus the SOSC for x∗ to be a strict
local maximizer is trivially satisfied.
21.8
a. Write h(x) = x1 − x2 , g(x) = −x1 . We have Df (x) = [x22 , 2x1 x2 ], Dh(x) = [1, −1], Dg(x) = [−1, 0].
Note that all feasible points are regular. The KKT condition is:
x22 + λ − µ = 0
2x1 x2 − λ = 0
µx1 = 0
µ ≥ 0
x1 − x2 = 0
x1 ≥ 0.
We first try x1 = x∗1 = 0 (active inequality constraint). Substituting and manipulating, we have the solution
x∗1 = x∗2 = 0 with µ∗ = 0, which is a legitimate solution. If we then try x1 = x∗1 > 0 (inactive inequality
constraint), we find that there is no consistent solution to the KKT condition. Thus, there is only one point
satisfying the KKT condition: x∗ = 0.
c. The tangent space at x∗ = 0 is given by
which is nontrivial. Hence, for any nonzero vector y ∈ T̃ (0, 0), we have y > L(0, 0, 0)y = 0 6> 0. Thus, the
SOSC does not hold for the solution in part a.
21.9
a. The KKT condition for the problem is:
(x∗ − x0 ) + µ∗ ∇g(x∗ ) = 0
µ∗ g(x∗ ) = 0.
Hence
L(x∗ , µ∗ ) = [2 0; 0 2] ,
which is positive definite on R2 . Hence, SOSC is satisfied, and x∗ is a strict local minimizer.
21.12
The KKT condition is
x> Q + µ> A = 0>
µ> (Ax − b) = 0
µ ≥ 0
Ax − b ≤ 0.
Postmultiplying the first equation by x gives
x> Qx + µ> Ax = 0.
We note from the second equation that µ> Ax = µ> b. Hence,
x> Qx + µ> b = 0.
Since Q > 0, the first term is nonnegative. Also, the second term is nonnegative because µ ≥ 0 and b ≥ 0.
Hence, we conclude that both terms must be zero. Because Q > 0, we must have x = 0.
Aside: Actually, we can deduce that the only solution to the KKT condition must be 0, as follows. The
problem is convex; thus, the only points satisfying the KKT condition are global minimizers. However, we
see that 0 is a feasible point, and is the only point for which the objective function value is 0. Further, the
objective function is bounded below by 0. Hence, 0 is the only global minimizer.
21.13
a. We have one scalar equality constraint with h(x) = [c, d]> x − e and two scalar inequality constraints with
g(x) = −x. Hence, there exists µ∗ ∈ R2 and λ∗ ∈ R such that
µ∗ ≥ 0
a + cλ∗ − µ∗1 = 0
b + dλ∗ − µ∗2 = 0
µ∗> x∗ = 0
cx∗1 + dx∗2 = e
x∗ ≥ 0.
b. Because x∗ is a basic feasible solution, and the equality constraint precludes the point 0, exactly one of the
inequality constraints is active. The vectors ∇h(x∗ ) = [c, d]> and ∇g1 = [1, 0]> are linearly independent.
Similarly, the vectors ∇h(x∗ ) = [c, d]> and ∇g2 = [0, 1]> are linearly independent. Hence, x∗ must be
regular.
c. The tangent space is given by
where M is a matrix with the first row equal to Dh(x∗ ) = [c, d], and the second row equal to either Dg1 = [1, 0] or Dg2 = [0, 1]. But, as we have seen in part b, rank M = 2. Hence, T (x∗ ) = {0}.
d. Recall that we can take µ∗ to be the relative cost coefficient vector (i.e., the KKT conditions are satisfied
with µ∗ being the relative cost coefficient vector). If the relative cost coefficients of all nonbasic variables
are strictly positive, then µ∗j > 0 for all j ∈ J(x∗ ). Hence, T̃ (x∗ , µ∗ ) = T (x∗ ) = {0}, which implies that
L(x∗ , λ∗ , µ∗ ) > 0 on T̃ (x∗ , µ∗ ). Hence, the SOSC is satisfied.
21.14
Let x∗ be a solution. Since A is of full rank, x∗ is regular. The KKT Theorem states that x∗ satisfies:
µ∗ ≥ 0
c> + µ∗> A = 0>
µ∗> Ax∗ = 0.
If we postmultiply the second equation by x∗ and subtract the third from the result, we get
c> x∗ = 0.
21.15
a. We can write the LP as
minimize f (x)
subject to h(x) = 0,
g(x) ≤ 0,
where f (x) = c> x, h(x) = Ax − b, and g(x) = −x. Thus, we have Df (x) = c> , Dh(x) = A, and
Dg(x) = −I. The Karush-Kuhn-Tucker conditions for the above problem have the form: if x∗ is a local
minimizer, then there exists λ∗ and µ∗ such that
µ∗ ≥ 0
c> + λ∗> A − µ∗> = 0>
µ∗> x∗ = 0.
b. Let x∗ be an optimal feasible solution. Then, x∗ satisfies the Karush-Kuhn-Tucker conditions listed in
part a. Since µ∗ ≥ 0, then from the second condition in part a, we obtain (−λ∗ )> A ≤ c> . Hence, λ̄ = −λ∗
is a feasible solution to the dual (see Chapter 17). Postmultiplying the second condition in part a by x∗ , we
have
0 = cᵀx∗ + λ∗ᵀAx∗ − µ∗ᵀx∗ = cᵀx∗ + λ∗ᵀb,
which gives
cᵀx∗ = λ̄ᵀb.
Hence, λ̄ achieves the same objective function value for the dual as x∗ for the primal.
c. From part a, we have µ∗ᵀ = cᵀ − λ̄ᵀA. Substituting this into µ∗ᵀx∗ = 0 yields the desired result.
21.16
By definition of J(x∗ ), we have gi (x∗ ) < 0 for all i 6∈ J(x∗ ). Since by assumption gi is continuous for all i,
there exists ε > 0 such that gi (x) < 0 for all i 6∈ J(x∗ ) and all x in the set B = {x : kx − x∗ k < ε}. Let
S1 = {x : h(x) = 0, gj (x) ≤ 0, j ∈ J(x∗ )}. We claim that S ∩ B = S1 ∩ B. To see this, note that clearly
S ∩ B ⊂ S1 ∩ B. To show that S1 ∩ B ⊂ S ∩ B, suppose x ∈ S1 ∩ B. Then, by definition of S1 and B, we
have h(x) = 0, gj (x) ≤ 0 for all j ∈ J(x∗ ), and gi (x) < 0 for all i 6∈ J(x∗ ). Hence, x ∈ S ∩ B.
Since x∗ is a local minimizer of f over S, and S ∩ B ⊂ S, x∗ is also a local minimizer of f over
S ∩ B = S1 ∩ B. Hence, we conclude that x∗ is a regular local minimizer of f on S1 . Note that S 0 ⊂ S1 ,
and x∗ ∈ S 0 . Therefore, x∗ is a regular local minimizer of f on S 0 .
21.17
Write f (x) = x21 +x22 , g1 (x) = x21 −x2 −4, g2 (x) = x2 −x1 −2, and g = [g1 , g2 ]> . We have ∇f (x) = [2x1 , 2x2 ]> ,
∇g1(x) = [2x1, −1]ᵀ, ∇g2(x) = [−1, 1]ᵀ, and D²f(x) = diag[2, 2]. We compute
∇f(x) + µᵀ∇g(x) = [2x1 + 2µ1x1 − µ2, 2x2 − µ1 + µ2]ᵀ.
We use the FONC to find critical points. Rewriting ∇f(x) + µᵀ∇g(x) = 0, we obtain
x1 = µ2/(2 + 2µ1), x2 = (µ1 − µ2)/2.
We also use µ> g(x) = 0 and µ ≥ 0, giving
µ1 (x21 − x2 − 4) = 0, µ2 (x2 − x1 − 2) = 0.
The vector µ has two components; therefore, we try four different cases.
Case 1: (µ1 > 0, µ2 > 0) We have
x21 − x2 − 4 = 0, x2 − x1 − 2 = 0.
We obtain two solutions: x(1) = [−2, 0]> and x(2) = [3, 5]> . For x(1) , the two FONC equations give µ1 = µ2
and −2(2 + 2µ1 ) = µ1 , which yield µ1 = µ2 = −4/5. This is not a legitimate solution since we require µ ≥ 0.
For x(2) , the two FONC equations give µ1 − µ2 = 10 and 3(2 + 2µ1 ) = µ2 , which yield µ = [−16/5, −66/5].
Again, this is not a legitimate solution.
Case 2: (µ1 = 0, µ2 > 0) We have
x2 − x1 − 2 = 0, x1 = µ2/2, x2 = −µ2/2.
Hence, x1 = −x2 , and thus x = [−1, 1], µ2 = −2. This is not a legitimate solution since we require µ ≥ 0.
Case 3: (µ1 > 0, µ2 = 0) We have
x1² − x2 − 4 = 0, x1 = 0, x2 = µ1/2.
Therefore, x2 = −4, µ1 = −8, and again we don’t have a legitimate solution.
Case 4: (µ1 = 0, µ2 = 0) We have x1 = x2 = 0, and all constraints are inactive. This is a legitimate
candidate for the minimizer. We now apply the SOSC. Note that since the candidate is an interior point
of the constraint set, the SOSC for the problem is equivalent to the SOSC for unconstrained optimization.
The Hessian matrix D²f(x) = diag[2, 2] is symmetric and positive definite. Hence, by the SOSC, the point x∗ = [0, 0]ᵀ is a strict local minimizer (in fact, it is easy to see that it is a global minimizer).
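As an informal numerical cross-check (my addition, not part of the original solution), one can sample the constraint set of Exercise 21.17 and confirm that no feasible point beats the candidate x∗ = [0, 0]ᵀ; a minimal Python sketch:

```python
import random

# Exercise 21.17 data: minimize f(x) = x1^2 + x2^2 subject to
# g1(x) = x1^2 - x2 - 4 <= 0 and g2(x) = x2 - x1 - 2 <= 0.
def f(x1, x2):
    return x1**2 + x2**2

def feasible(x1, x2):
    return x1**2 - x2 - 4 <= 0 and x2 - x1 - 2 <= 0

random.seed(0)
best = float("inf")
for _ in range(100000):
    x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
    if feasible(x1, x2):
        best = min(best, f(x1, x2))

# x* = (0, 0) is feasible with f(x*) = 0; since f is a sum of squares,
# no feasible point can do better.
assert feasible(0, 0)
assert best >= 0.0
```

The check is trivially consistent with the SOSC argument above, since f ≥ 0 everywhere with equality only at the origin.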
21.18
Write f (x) = x21 +x22 , g1 (x) = −x1 +x22 +4, g2 (x) = x1 −10, and g = [g1 , g2 ]> . We have ∇f (x) = [2x1 , 2x2 ]> ,
∇g1 (x) = [−1, 2x2 ]> , ∇g2 (x) = [1, 0]> , D2 f (x) = diag[2, 2], D2 g1 (x) = diag[0, 2], and D2 g2 (x) = O. We
compute
∇f (x) + µ> ∇g(x) = [2x1 − µ1 + µ2 , 2x2 + 2µ1 x2 ]> .
We use the FONC to find critical points. Rewriting ∇f (x) + µ> ∇g(x) = 0, we obtain
x1 = (µ1 − µ2)/2, x2(1 + µ1) = 0.
Since we require µ ≥ 0, we deduce that x2 = 0. Using µᵀg(x) = 0 then gives
µ1(−x1 + 4) = 0, µ2 = 0.
Because x = 0 is not feasible (g1(0) = 4 > 0), we must have µ1 > 0; hence x1 = 4 and, from x1 = (µ1 − µ2)/2 with µ2 = 0, µ1 = 8. The Hessian of the Lagrangian is then
L(x∗, µ∗) = D²f(x∗) + µ1 D²g1(x∗) = diag[2, 2] + 8 diag[0, 2] = diag[2, 18],
which is positive definite on all of R². The point [4, 0]ᵀ is clearly regular. Hence, by the SOSC, x∗ = [4, 0]ᵀ is a strict local minimizer.
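As a quick sanity check (my addition): on the feasible set of Exercise 21.18, x1 ≥ x2² + 4 forces x1² ≥ 16, so f(x) = x1² + x2² ≥ 16 = f(4, 0) everywhere, consistent with [4, 0]ᵀ being the minimizer. A Python sketch:

```python
import random

def f(x1, x2):
    return x1**2 + x2**2

random.seed(1)
count = 0
for _ in range(100000):
    x1 = random.uniform(3.5, 10.0)
    x2 = random.uniform(-2.5, 2.5)
    # constraints of Exercise 21.18: g1 = -x1 + x2^2 + 4 <= 0, g2 = x1 - 10 <= 0
    if -x1 + x2**2 + 4 <= 0 and x1 - 10 <= 0:
        count += 1
        assert f(x1, x2) >= 16 - 1e-9   # f(4, 0) = 16
assert count > 0
```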
21.19
Write f(x) = x1² + x2², g1(x) = −x1 − x2² + 4, g2(x) = 3x2 − x1, g3(x) = −3x2 − x1, and g = [g1, g2, g3]ᵀ. We have ∇f(x) = [2x1, 2x2]ᵀ, ∇g1(x) = [−1, −2x2]ᵀ, ∇g2(x) = [−1, 3]ᵀ, ∇g3(x) = [−1, −3]ᵀ, D²f(x) = diag[2, 2], D²g1(x) = diag[0, −2], and D²g2(x) = D²g3(x) = O.
From the figure, we see that the two candidates are x(1) = [3, 1] and x(2) = [3, −1]. Both points are easily
verified to be regular.
For x⁽¹⁾ = [3, 1]ᵀ, we have µ3 = 0. Now,
∇f(x⁽¹⁾) + µ1∇g1(x⁽¹⁾) + µ2∇g2(x⁽¹⁾) = [6 − µ1 − µ2, 2 − 2µ1 + 3µ2]ᵀ = 0,
which yields µ1 = 4, µ2 = 2. Now, T̃(x⁽¹⁾) = {0}. Therefore, any matrix is positive definite on T̃(x⁽¹⁾). Hence, by the SOSC, x⁽¹⁾ is a strict local minimizer.
For x⁽²⁾ = [3, −1]ᵀ, we have µ2 = 0. Now,
∇f(x⁽²⁾) + µ1∇g1(x⁽²⁾) + µ3∇g3(x⁽²⁾) = [6 − µ1 − µ3, −2 + 2µ1 − 3µ3]ᵀ = 0,
which yields µ1 = 4, µ3 = 2. Now, again we have T̃(x⁽²⁾) = {0}. Therefore, any matrix is positive definite on T̃(x⁽²⁾). Hence, by the SOSC, x⁽²⁾ is a strict local minimizer.
21.20
a. Write f (x) = 3x1 and g(x) = 2 − x1 − x22 . We have ∇f (x∗ ) = [3, 0]> and ∇g(x∗ ) = [−1, 0]> . Hence,
letting µ∗ = 3, we have ∇f (x∗ )+µ∗ ∇g(x∗ ) = 0. Note also that µ∗ ≥ 0 and µ∗ g(x∗ ) = 0. Hence, x∗ = [2, 0]>
satisfies the KKT (first order necessary) condition.
b. We have F (x∗ ) = O and G(x∗ ) = diag[0, −2]. Hence, L(x∗ , µ∗ ) = O + 3 diag[0, −2] = diag[0, −6]. Also,
T (x∗ ) = {y : [−1, 0]y = 0} = {y : y1 = 0}. Hence, x∗ = [2, 0]> does not satisfy the second order necessary
condition.
c. No. Consider points of the form x = [−x2² + 2, x2]ᵀ, x2 ∈ R. Such points are feasible, and can be arbitrarily close to x∗. However, for such points x ≠ x∗, we have f(x) = 3(2 − x2²) < 6 = f(x∗). Hence, x∗ is not a local minimizer.
21.21
The KKT conditions are:
µ∗ ≥ 0
x∗ + λ∗a − µ∗ = 0
µ∗ᵀx∗ = 0.
Premultiplying the second KKT condition above by µ∗> and using the third condition, we get
λ∗ µ∗> a = kµ∗ k2 .
Also, premultiplying the second KKT condition above by x∗> and using the feasibility condition a> x∗ = b,
we get
λ∗ = −‖x∗‖²/b < 0.
We conclude that µ∗ = 0. For if not, the equation λ∗ µ∗> a = kµ∗ k2 implies that µ∗> a < 0, which contradicts
µ∗ ≥ 0 and a ≥ 0.
Rewriting the second KKT condition with µ∗ = 0 yields
x∗ = −λ∗ a.
21.22
a. Suppose (x∗1 )2 + (x∗2 )2 < 1. Then, the point x∗ = [x∗1 , x∗2 ]> lies in the interior of the constraint set
Ω = {x : kxk2 ≤ 1}. Hence, by the FONC for unconstrained optimization, we have that ∇f (x∗ ) = 0, where
f (x) = kx − [a, b]> k2 is the objective function. Now, ∇f (x∗ ) = 2(x∗ − [a, b]> ) = 0, which implies that
x∗ = [a, b]> which violates the assumption (x∗1 )2 + (x∗2 )2 < 1.
b. First, we need to show that x∗ is a regular point. For this, note that if we write the constraint as
g(x) = ‖x‖² − 1, then ∇g(x∗) = 2x∗ ≠ 0. Therefore, x∗ is a regular point. Hence, by the Karush-Kuhn-Tucker theorem, there exists µ ∈ R, µ ≥ 0, such that
∇f (x∗ ) + µ∇g(x∗ ) = 0,
21.23
a. The KKT conditions are:
2x∗1 + µ∗ exp(x∗1) = 0
2(x∗2 + 1) − µ∗ = 0
µ∗(exp(x∗1) − x∗2) = 0
exp(x∗1) ≤ x∗2
µ∗ ≥ 0.
b. From the second equation in part a, we obtain µ∗ = 2(x∗2 + 1). Since x∗2 ≥ exp(x∗1 ) > 0, then µ∗ > 0.
Hence, by the third equation in part a, we obtain x∗2 = exp(x∗1 ).
c. Since µ∗ = 2(x∗2 + 1) = 2(exp(x∗1) + 1), then by the first equation in part a, we have
2x∗1 + 2(exp(x∗1) + 1) exp(x∗1) = 0,
which implies
x∗1 = −(exp(2x∗1) + exp(x∗1)).
Since exp(x∗1 ), exp(2x∗1 ) > 0, then x∗1 < 0, and hence exp(x∗1 ), exp(2x∗1 ) < 1. Therefore, x∗1 > −2.
21.24
a. We rewrite the problem as
minimize f (x)
subject to g(x) ≤ 0,
where f(x) = cᵀx and g(x) = ½‖x‖² − 1. Hence, ∇f(x) = c and ∇g(x) = x. Note that x∗ ≠ 0 (for otherwise it would not be feasible), and therefore it is a regular point. By the KKT theorem, there exists µ∗ ≤ 0 such that c = µ∗x∗ and µ∗g(x∗) = 0. Since c ≠ 0, we must have µ∗ ≠ 0. Therefore, g(x∗) = 0,
which implies that kx∗ k2 = 2.
b. From part a, we have α²‖e‖² = 2. Since ‖e‖² = n, we have α = √(2/n).
To find c, we use
4 = cᵀx∗ + 8 = µ∗‖x∗‖² + 8 = 2µ∗ + 8,
and thus µ∗ = −2. Hence, c = −2αe = −2√(2/n) e.
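The numbers obtained in part b are easy to verify mechanically (my addition, with the arbitrary choice n = 5): α = √(2/n) gives ‖x∗‖² = 2, and c = −2αe gives cᵀx∗ = −4, so cᵀx∗ + 8 = 4 as required.

```python
import math

n = 5                       # illustrative dimension
alpha = math.sqrt(2.0 / n)
xstar = [alpha] * n         # x* = alpha * e
c = [-2.0 * alpha] * n      # c = -2*alpha*e

assert abs(sum(v * v for v in xstar) - 2.0) < 1e-12              # ||x*||^2 = 2
assert abs(sum(ci * xi for ci, xi in zip(c, xstar)) + 4.0) < 1e-12  # c^T x* = -4
```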
21.25
We can represent the equivalent problem as
minimize f(x)
subject to g(x) ≤ 0,
where g(x) = ½‖h(x)‖². The KKT condition for this problem is
µ∗ ≥ 0
∇f(x∗) + µ∗Dh(x∗)ᵀh(x∗) = 0
µ∗‖h(x∗)‖² = 0.
Note that for a feasible point x∗ , we have h(x∗ ) = 0. Therefore, the KKT condition becomes
µ∗ ≥ 0
∗
∇f (x ) = 0.
Note that ∇g(x∗ ) = 0. Therefore, any feasible point x∗ is not regular. Hence, the KKT theorem cannot
be applied in this case. This should be clear, since obviously ∇f (x∗ ) = 0 is not necessary for optimality in
general.
22.1
A quadratic form is concave if and only if its matrix is negative semidefinite, equivalently, if and only if its negative is positive semidefinite. On the other hand, a symmetric matrix is positive semidefinite if and only if all its principal minors, not just the leading principal minors, are nonnegative. Thus we will determine
the range of the parameter α for which
−f = xᵀ [1, α, −1; α, 1, 2; −1, 2, 5] x
is positive semidefinite. It is easy to see that the three first-order principal minors (the diagonal elements of F) are all positive. There are three second-order principal minors; the two not involving α equal 4 and 1, and they are positive. The remaining one, the leading second-order principal minor, is a function of the parameter α:
det F(1:2, 1:2) = det [1, α; α, 1] = 1 − α²,
which is nonnegative if and only if
α ∈ [−1, 1].
There is only one third-order principal minor, det F, where
" # " # " #
1 2 α −1 α −1
det F = det − α det − det
2 5 2 5 1 2
= 1 − α(5α + 2) − (2α + 1)
= 1 − 5α2 − 2α − 2α − 1
= −5α2 − 4α.
The third-order principal minor is nonnegative if and only if −α(5α + 4) ≥ 0, that is, if and only if α ∈ [−4/5, 0].
Combining this with α ∈ [−1, 1] from above, we conclude that the function f is negative-semidefinite,
equivalently, the quadratic function f is concave, if and only if
α ∈ [−4/5, 0].
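The principal-minor computation above is easy to verify numerically. The sketch below (my addition) checks all seven principal minors of F for sample values of α and confirms that −f is positive semidefinite exactly for α in [−4/5, 0]:

```python
from itertools import combinations

def minor_det(M, idx):
    # determinant of the principal submatrix of M indexed by idx (size 1..3)
    sub = [[M[i][j] for j in idx] for i in idx]
    n = len(idx)
    if n == 1:
        return sub[0][0]
    if n == 2:
        return sub[0][0]*sub[1][1] - sub[0][1]*sub[1][0]
    return (sub[0][0]*(sub[1][1]*sub[2][2] - sub[1][2]*sub[2][1])
            - sub[0][1]*(sub[1][0]*sub[2][2] - sub[1][2]*sub[2][0])
            + sub[0][2]*(sub[1][0]*sub[2][1] - sub[1][1]*sub[2][0]))

def is_psd(alpha):
    # the matrix F of -f from Exercise 22.1
    F = [[1, alpha, -1], [alpha, 1, 2], [-1, 2, 5]]
    return all(minor_det(F, idx) >= -1e-12
               for r in (1, 2, 3) for idx in combinations(range(3), r))

assert is_psd(-0.8) and is_psd(-0.4) and is_psd(0.0)   # inside [-4/5, 0]
assert not is_psd(0.1) and not is_psd(-0.85)           # outside [-4/5, 0]
```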
22.2
We have
φ(α) = ½(x + αd)ᵀQ(x + αd) − (x + αd)ᵀb
     = ½(dᵀQd)α² + dᵀ(Qx − b)α + (½xᵀQx − xᵀb).
This is a quadratic function of α. Since Q > 0, then
d²φ/dα²(α) = dᵀQd > 0,
and hence by Theorem 22.5, φ is strictly convex.
22.3
Write f(x) = xᵀQx, where
Q = ½ [0, 1; 1, 0].
Let x, y ∈ Ω. Then, x = [a1 , ma1 ]> and y = [a2 , ma2 ]> for some a1 , a2 ∈ R. By Proposition 22.1, it is
enough to show that (y − x)> Q(y − x) ≥ 0. By substitution,
|y| ≥ 1 + ξ(y − 1) = y.
22.6
Let x, y ∈ Ω and α ∈ (0, 1). For convenience, write f¯ = max{f1 , . . . , f` } = maxi fi . We have
Now,
x1 = αy1 + (1 − α)z1 = αy2 + (1 − α)z2 = x2 ,
and since α, 1 − α ≥ 0,
x1 ≥ 0.
Hence, x ∈ Ω and therefore Ω is convex.
To show that f is convex on Ω, we need to show that for any y, z ∈ Ω and α ∈ [0, 1], f (αy + (1 − α)z) ≤
αf (y) + (1 − α)f (z). Let y, z ∈ Ω and α ∈ [0, 1]. Thus, y1 = y2 ≥ 0 and z1 = z2 ≥ 0, so that f (y) = y13 and
f (z) = z13 . Also, α3 ≤ α and (1 − α)3 ≤ (1 − α). We have
Hence, f is convex.
22.10
Since the problem is a convex optimization problem, we know for sure that any point of the form αy + (1 − α)z, α ∈ (0, 1), is a global minimizer. However, any other point may or may not be a minimizer. Hence, the largest set of points G ⊂ Ω for which we can be sure that every point in G is a global minimizer is given by G = {αy + (1 − α)z : α ∈ [0, 1]}.
22.11
a. Let f be the objective function and Ω the constraint set. Consider the set Γ = {x ∈ Ω : f (x) ≤ 1}.
This set contains all three of the given points. Moreover, by Lemma 22.1, Γ is convex. Now, if we take the
average of the first two points (which is a convex combination of them), the resulting point (1/2)[1, 0, 0]> +
(1/2)[0, 1, 0]> = (1/2)[1, 1, 0]> is in Γ, because Γ is convex. Similarly, the point (2/3)(1/2)[1, 1, 0]> +
(1/3)[0, 0, 1]> = (1/3)[1, 1, 1]> is also in Γ, because Γ is convex. Hence, the objective function value of
(1/3)[1, 1, 1]> must be ≤ 1.
b. If the three points are all global minimizers, then the point (1/3)[1, 1, 1]ᵀ, which cannot have a higher objective function value than the given three points (by part a), must also be a global minimizer.
22.12
a. The Lagrange condition for the problem is given by:
xᵀQ + λᵀA = 0ᵀ
Ax = b.
From the first equation, x = −Q⁻¹Aᵀλ; substituting into the second gives
(AQ⁻¹Aᵀ)λ = −b.
Since rank A = m, the matrix AQ−1 A> is invertible. Therefore, the only solution to the Lagrange condition
is
x = Q−1 A> (AQ−1 A> )−1 b.
b. The point in part a above is a global minimizer because the problem is a convex optimization problem (by
problem 1, the constraint set is convex; the objective function is convex because its Hessian, Q, is positive
definite).
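As a concrete spot check (my addition; Q, A, and b below are arbitrary illustrative values, and the objective is taken to be ½xᵀQx, matching the Lagrange condition xᵀQ + λᵀA = 0ᵀ):

```python
# Verify x = Q^{-1} A^T (A Q^{-1} A^T)^{-1} b for Q = diag(2, 4), A = [1 1], b = 3.
Qinv = [0.5, 0.25]                       # diagonal of Q^{-1}
AQinvAT = Qinv[0] + Qinv[1]              # A Q^{-1} A^T (a scalar here)
b = 3.0
x = [Qinv[0] * b / AQinvAT, Qinv[1] * b / AQinvAT]

assert abs(x[0] + x[1] - b) < 1e-12      # feasibility: Ax = b

# compare against feasible perturbations x + t*(1, -1), which keep Ax = b
def obj(x1, x2):
    return 0.5 * (2 * x1 * x1 + 4 * x2 * x2)

for t in (-0.5, -0.1, 0.1, 0.5):
    assert obj(x[0], x[1]) <= obj(x[0] + t, x[1] - t) + 1e-12
```

For this data the formula gives x = [2, 1]ᵀ, which indeed minimizes the objective over the constraint line.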
22.13
By Theorem 22.4, for all x ∈ Ω, we have
22.14
We write the problem as
minimize f(x)
subject to g(x) ≤ 0,
where f(x) = ‖x‖² and g(x) = b − aᵀx. Now, ∇g(x) = −a ≠ 0. Therefore, any feasible point is regular.
By the Karush-Kuhn-Tucker theorem, there exists µ∗ ≥ 0 such that
2x∗ − µ∗a = 0
µ∗(b − aᵀx∗) = 0.
Since x∗ is a feasible point, then x∗ ≠ 0. Therefore, by the first equation, we see that µ∗ ≠ 0. The second equation then implies that b − aᵀx∗ = 0.
c. By the first Karush-Kuhn-Tucker equation, we have x∗ = µ∗ a/2. Since a> x∗ = b, then µ∗ a> a/2 =
a> x∗ = b, and therefore µ∗ = 2b/kak2 . Since x∗ = µ∗ a/2 then x∗ is uniquely given by x∗ = ba/kak2 .
22.15
a. Let f (x) = c> x and Ω = {x : x ≥ 0}. Suppose x, y ∈ Ω, and α ∈ (0, 1). Then, x, y ≥ 0. Hence,
αx + (1 − α)y ≥ 0, which means αx + (1 − α)y ∈ Ω. Furthermore,
minimize c> x
subject to Ax = b
x ≥ 0,
minimize f (x)
subject to h(x) = 0,
g(x) ≤ 0,
where f (x) = c> x, h(x) = Ax − b, and g(x) = −x. Thus, we have Df (x) = c> , Dh(x) = A, and
Dg(x) = −I. The Karush-Kuhn-Tucker conditions for the above problem have the form:
µ∗ ≥ 0
cᵀ + λ∗ᵀA − µ∗ᵀ = 0ᵀ
µ∗ᵀx∗ = 0.
b. The Karush-Kuhn-Tucker conditions are sufficient for optimality in this case because the problem is a
convex optimization problem, i.e., the objective function is a convex function, and the feasible set is a convex
set.
c. The dual problem is
maximize λ> b
subject to λ> A ≤ c> .
d. Let
µ∗ᵀ = cᵀ − λ∗ᵀA.
Since λ∗ is feasible for the dual, we have µ∗ ≥ 0. Rewriting the above equation, we get
cᵀ − λ∗ᵀA − µ∗ᵀ = 0ᵀ.
The Complementary Slackness condition (c> − λ∗> A)x∗ = 0 can be written as µ∗> x∗ = 0. Therefore, the
Karush-Kuhn-Tucker conditions hold. By part b, x∗ is optimal.
22.17
a. We can treat s(1) and s(2) as vectors in Rn . We have
x1 − µᵀs⁽¹⁾ = 0
x2 − µᵀs⁽²⁾ = 0
µᵀ(x1 s⁽¹⁾ + x2 s⁽²⁾ − a) = 0
µ ≥ 0
x1 s⁽¹⁾ + x2 s⁽²⁾ ≥ a.
c. Yes, because the Hessian of the Lagrangian is I (identity), which is positive definite.
d. Yes, this is a convex optimization problem. The objective function is quadratic, with identity Hessian (hence positive definite). The constraint set is of the form {x : x1 s⁽¹⁾ + x2 s⁽²⁾ ≥ a}, which is defined by linear inequalities and is therefore a convex set.
22.18
a. We first show that the set of probability vectors
Ω = {q ∈ Rn : q1 + · · · + qn = 1, qi > 0, i = 1, . . . , n}
Also, because yi > 0, zi > 0, α > 0, and 1 − α > 0, we conclude that xi > 0. Thus, x ∈ Ω, which shows that
Ω is convex.
b. We next show that the function f is a convex function on Ω. For this, we compute the Hessian
F(q) = diag[p1/q1², . . . , pn/qn²],
which shows that F (q) > 0 for all q in the open set {q : qi > 0, i = 1, . . . , n}, which contains Ω. Therefore,
f is convex on Ω.
c. Fix a probability vector p. Consider the optimization problem
minimize p1 log(p1/x1) + · · · + pn log(pn/xn)
subject to x1 + · · · + xn = 1
xi > 0, i = 1, . . . , n.
By parts a and b, the problem is a convex optimization problem. We ignore the constraint xi > 0 and write down the Lagrange conditions for the equality-constrained problem:
−pi/x∗i + λ∗ = 0, i = 1, . . . , n
x∗1 + · · · + x∗n = 1.
Rewrite the first set of equations as x∗i = pi /λ∗ . Combining this with the constraint and the fact that
p1 + · · · + pn = 1, we obtain λ∗ = 1, which means that x∗i = pi . Therefore, the unique global minimizer is
x∗ = p.
Note that f(x∗) = 0. Hence, we conclude that f(x) ≥ 0 for all x ∈ Ω. Moreover, f(x) = 0 if and only if x = x∗ = p. This proves the required result.
d. Given two probability vectors p and q, the number
D(p, q) = p1 log(p1/q1) + · · · + pn log(pn/qn)
is called the relative entropy (or Kullback-Leibler divergence) between p and q. It is used in information
theory to measure the “distance” between two probability vectors. The result of part c justifies the use of
D as a measure of “distance” (although D is not a metric because it is not symmetric).
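The properties established in part c can be illustrated directly (my addition): D(p, p) = 0, D(p, q) > 0 for q ≠ p, and D is not symmetric.

```python
import math

def kl(p, q):
    # relative entropy D(p, q) for probability vectors with positive entries
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
assert abs(kl(p, p)) < 1e-12             # D(p, p) = 0
assert kl(p, q) > 0                      # D(p, q) > 0 for q != p
assert abs(kl(p, q) - kl(q, p)) > 1e-12  # D is not symmetric
```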
22.19
We claim that a solution exists, and that it is unique. To prove the first claim, choose ε > 0 such that there
exists x ∈ Ω satisfying kx − zk < ε. Consider the modified problem
minimize ‖x − z‖
subject to x ∈ Ω ∩ {y : ‖y − z‖ ≤ ε}.
If this modified problem has a solution, then clearly so does the original problem. The objective function here
is continuous, and the constraint set is closed and bounded. Hence, by Weierstrass’s Theorem, a solution to
the problem exists.
Let f be the objective function. Next, we show that f is convex (and hence the problem is a convex
optimization problem). Let x, y ∈ Ω and α ∈ (0, 1). Then,
f (αx + (1 − α)y) = kαx + (1 − α)y − zk
= kα(x − z) + (1 − α)(y − z)k
≤ αkx − zk + (1 − α)ky − zk
= αf (x) + (1 − α)f (y),
which shows that f is convex.
To prove uniqueness, let x1 and x2 be solutions to the problem. Then, by convexity, x3 = (x1 + x2 )/2 is
also a solution. But
‖x3 − z‖ = ‖(x1 + x2)/2 − z‖
         = ‖(x1 − z)/2 + (x2 − z)/2‖
         ≤ ½(‖x1 − z‖ + ‖x2 − z‖)
         = ‖x3 − z‖,
from which we conclude that the triangle inequality above holds with equality, implying that x1 − z = α(x2 − z) for some α ≥ 0. Because ‖x1 − z‖ = ‖x2 − z‖ and ‖x1 − z‖ = α‖x2 − z‖, we have α = 1. From this, we obtain x1 = x2, which proves uniqueness.
22.20
a. Let A ∈ Rⁿˣⁿ and B ∈ Rⁿˣⁿ be symmetric with A ≥ 0 and B ≥ 0. Fix α ∈ (0, 1) and x ∈ Rⁿ, and let
C = αA + (1 − α)B. Then,
Since x> Ax ≥ 0, x> Bx ≥ 0, and α, (1 − α) > 0 by assumption, then x> Cx ≥ 0, which proves the required
result.
b. We first show that the constraint set Ω = {x : F0 + x1F1 + · · · + xnFn ≥ 0} is convex. So, let x, y ∈ Ω and α ∈ (0, 1). Let z = αx + (1 − α)y. Then,
F0 + z1F1 + · · · + znFn = F0 + [αx1 + (1 − α)y1]F1 + · · · + [αxn + (1 − α)yn]Fn
                        = α[F0 + x1F1 + · · · + xnFn] + (1 − α)[F0 + y1F1 + · · · + ynFn].
By assumption, we have
F0 + x1F1 + · · · + xnFn ≥ 0 and F0 + y1F1 + · · · + ynFn ≥ 0.
Hence, by part a, F0 + z1F1 + · · · + znFn ≥ 0, and so z ∈ Ω.
c. Take
F0 = diag{−b1, . . . , −bm}
Fj = diag{a1,j, . . . , am,j}, j = 1, . . . , n.
Note that a diagonal matrix is positive semidefinite if and only if every diagonal element is nonnegative. Hence, the constraint Ax ≥ b can be written as F0 + x1F1 + · · · + xnFn ≥ 0. The left-hand side is a diagonal matrix, and the ith diagonal element is simply −bi + ai,1x1 + ai,2x2 + · · · + ai,nxn.
22.21
a. We have
Ω = {x : x1 + . . . + xn = 1; x1 , . . . , xn > 0; x1 ≥ 2xi , i = 2, . . . , n}.
So let x, y ∈ Ω and α ∈ (0, 1). Consider z = αx + (1 − α)y. We have
Moreover, for each i, because xi > 0, yi > 0, α > 0 and 1 − α > 0, we have zi > 0. Finally, for each i,
µ∗ ≥ 0
−Ui′(x∗i) + µ∗ = 0, i = 1, . . . , n
µ∗(x∗1 + · · · + x∗n − C) = 0
x∗1 + · · · + x∗n ≤ C.
Note that because −Ui (xi ) + µ∗ xi is a convex function of xi , the second line above can be written as
x∗i = arg maxx (Ui (x) − µ∗ x).
c. Because each Ui is concave and increasing, we conclude that x∗1 + · · · + x∗n = C; for otherwise we could increase some x∗i, and hence Ui(x∗i) and also U1(x∗1) + · · · + Un(x∗n), contradicting the optimality of x∗.
22.23
First note that the optimization problem that you construct cannot be a convex problem (for otherwise, the FONC would imply that x∗ is a global minimizer, which then implies that the SONC holds). Let f(x) = x2 (the second coordinate of x), g(x) = −(x2 + x1²), and x∗ = 0. Then, ∇f(x∗) = [0, 1]ᵀ. Any feasible direction d at x∗ is of the form d = [d1, d2]ᵀ with d2 ≥ 0. Hence, dᵀ∇f(x∗) ≥ 0, which shows that the FONC holds.
Because ∇g(x∗ ) = −[0, 1] (so x∗ is regular), we see that if µ∗ = 1, then ∇f (x∗ ) + µ∗ ∇g(x∗ ) = 0, and so
the KKT condition holds.
Because F(x∗) = O, the SONC for the set constraint Ω holds. However,
L(x∗, µ∗) = O + µ∗ [−2, 0; 0, 0] = [−2, 0; 0, 0],
and T(x∗) = {y : y2 = 0}, so yᵀL(x∗, µ∗)y = −2y1² < 0 for nonzero y ∈ T(x∗), which shows that the SONC for the inequality constraint g(x) ≤ 0 does not hold.
22.24
a. Let x0 and µ0 be feasible points in the primal and dual, respectively. Then, g(x0) ≤ 0 and µ0 ≥ 0, and so µ0ᵀg(x0) ≤ 0. Hence,
b. Suppose f (x0 ) = q(µ0 ) for feasible points x0 and µ0 . Let x be any feasible point in the primal. Then,
by part a, f (x) ≥ q(µ0 ) = f (x0 ). Hence x0 is optimal in the primal.
Similarly, let µ be any feasible point in the dual. Then, by part a, q(µ) ≤ f (x0 ) = q(µ0 ). Hence µ0 is
optimal in the dual.
c. Let x∗ be optimal in the primal. Then, by the KKT Theorem, there exists µ∗ ∈ Rm such that
∇x l(x∗, µ∗) = (Df(x∗) + µ∗ᵀDg(x∗))ᵀ = 0
µ∗ᵀg(x∗) = 0
µ∗ ≥ 0.
Therefore, µ∗ is feasible in the dual. Further, we note that l(·, µ∗) is a convex function (because f is convex, µ∗ᵀg is convex as a nonnegative combination of convex functions, and l is the sum of these two convex functions). Hence, we have l(x∗, µ∗) = min_{x∈Rⁿ} l(x, µ∗). Therefore,
q(µ∗) = min_{x∈Rⁿ} l(x, µ∗) = l(x∗, µ∗) = f(x∗) + µ∗ᵀg(x∗) = f(x∗).
22.26
Let
P = [x1, x2; x2, x3]
and let
P1 = [1, 0; 0, 0], P2 = [0, 1; 1, 0], and P3 = [0, 0; 0, 1].
Then P = x1P1 + x2P2 + x3P3, and −(AᵀP + PA) = x1F1 + x2F2 + x3F3 = F(x), where
Fi = −AᵀPi − PiA, i = 1, 2, 3.
Equivalently,
P = P > and A> P + P A < 0
if and only if
F (x) = x1 F 1 + x2 F 2 + x3 F 3 > 0.
22.27
The quadratic inequality
A> P + P A + P BR−1 B > P < 0
can be equivalently represented by the following LMI:
" #
−R B>P
< 0,
P B A> P + P A
206
22.28
The MATLAB code is as follows:
23.1
Hence,
y^(k+1) = ((1 + α)/(1 + 2α)) y^(k).
d. Assuming x2^(0) ≠ 0, y^(0) is well defined. Hence, by part c, we can write
y^(k) = ((1 + α)/(1 + 2α))^k y^(0).
Because α > 0,
(1 + α)/(1 + 2α) < 1,
which implies that y (k) → 0. But
1 = ‖x^(k)‖ = √((x1^(k))² + (x2^(k))²) = |x2^(k)| √((x1^(k))²/(x2^(k))² + 1) = |x2^(k)| √((y^(k))² + 1),
which implies that
|x2^(k)| = 1/√((y^(k))² + 1).
Because y^(k) → 0, we have |x2^(k)| → 1. By the expression for x2^(k+1) in part c, we see that the sign of x2^(k) does not change with k. Hence, we deduce that either x2^(k) → 1 or x2^(k) → −1. This also implies that x1^(k) → 0. Hence, x^(k) converges to a solution to the problem.
e. If x2^(0) = 0, then x2^(k) = 0 for all k, which means that x1^(k) = 1 or −1 for all k. In this case, the algorithm is stuck at the initial condition [1, 0]ᵀ or [−1, 0]ᵀ (which are in fact the minimizers).
23.2
a. Yes. To show: Suppose that x^(k) is a global minimizer of the given problem. Then, for all x ∈ Ω, x ≠ x^(k), we have cᵀx ≥ cᵀx^(k). Rewriting, we obtain cᵀ(x − x^(k)) ≥ 0. Recall that
Π[x^(k) − ∇f(x^(k))] = arg min_{x∈Ω} ‖x − (x^(k) − ∇f(x^(k)))‖².
23.3
a. Suppose x^(k) satisfies the FONC. Then, ∇f(x^(k)) = 0. Hence, x^(k+1) = x^(k). Conversely, suppose x^(k) does not satisfy the FONC. Then, ∇f(x^(k)) ≠ 0. Hence, αk > 0, and so x^(k+1) ≠ x^(k).
b. Case (i): Suppose x(k) is a corner point. Without loss of generality, take x(k) = [1, 1]> . (We can do this
because any other corner point can be mapped to this point by changing variables xi to −xi as appropriate.)
Note that any feasible direction d at x(k) = [1, 1]> satisfies d ≤ 0. Therefore,
x(k+1) = x(k) ⇔ −∇f (x(k) ) ≥ 0
⇔ d> ∇f (x(k) ) ≥ 0 for all feasible d at x(k)
⇔ x(k) satisfies FONC.
Case (ii): Suppose x(k) is not a corner point (i.e., is an edge point). Without loss of generality, take
(k)
x ∈ {x : x1 = 1, −1 < x2 < 1}. (We can do this because any other edge point can be mapped to this
point by changing variables xi to −xi as appropriate.) Note that any feasible direction d at x(k) ∈ {x : x1 =
1, −1 < x2 < 1} satisfies d1 ≤ 0. Therefore,
x(k+1) = x(k) ⇔ −∇f (x(k) ) = [a, 0]> , a > 0
⇔ d> ∇f (x(k) ) ≥ 0 for all feasible d at x(k)
⇔ x(k) satisfies FONC.
23.4
By definition of Π, we have
The term arg minz∈N (A) kz − yk is simply the orthogonal projection of y onto N (A). By Exercise 6.7, we
have
arg min_{z∈N(A)} ‖z − y‖ = P y,
where P = I − Aᵀ(AAᵀ)⁻¹A. Hence,
Π[x0 + y] = x0 + P y.
23.5
Since αk ≥ 0 is a minimizer of φk (α) = f (x(k) − αP g (k) ), we apply the FONC to φk (α) to obtain
φ0k (α) = (x(k) − αP g (k) )> Q(−P g (k) ) − b> (−P g (k) ).
Hence
g (k)> P g (k)
αk = .
g (k)> P QP g (k)
23.6
By Exercise 23.5, the projected steepest descent algorithm applied to this problem takes the form
23.9
We use the penalty method. First, we construct the unconstrained objective function with penalty parameter
γ:
f (x) = x21 + 2x22 + γ(x1 + x2 − 3)2 .
Because f is a quadratic with positive definite quadratic term, it is easy to find its minimizer:
" #
1 2
xγ = .
1 + 2/(3γ) 1
23.10
To find the solution to the above problem, we use the FONC. It is easy to see that the solution x∗ satisfies x∗ < a. The derivative of the objective function in the region x < a is 1 + 2γ(x − a). Thus, by the FONC, we have x∗ = a − 1/(2γ). Since the true solution is at a, the difference is 1/(2γ). Therefore, for 1/(2γ) ≤ ε, we need γ ≥ 1/(2ε). The smallest such γ is 1/(2ε).
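For the quadratic-penalty example in Exercise 23.9, the closed-form minimizer can be checked against the FONC of the penalized objective and against the true constrained solution [2, 1]ᵀ of minimize x1² + 2x2² subject to x1 + x2 = 3 (a sketch, my addition):

```python
# penalty minimizer x_gamma = (1/(1 + 2/(3*gamma))) * [2, 1]^T
def x_gamma(g):
    s = 1.0 / (1.0 + 2.0 / (3.0 * g))
    return (2.0 * s, 1.0 * s)

def grad(x1, x2, g):
    # gradient of x1^2 + 2*x2^2 + g*(x1 + x2 - 3)^2
    p = 2.0 * g * (x1 + x2 - 3.0)
    return (2.0 * x1 + p, 4.0 * x2 + p)

for g in (1.0, 10.0, 1000.0):
    x1, x2 = x_gamma(g)
    g1, g2 = grad(x1, x2, g)
    assert abs(g1) < 1e-8 and abs(g2) < 1e-8   # FONC holds for each gamma

x1, x2 = x_gamma(1e8)                          # large gamma ~ constrained solution
assert abs(x1 - 2.0) < 1e-6 and abs(x2 - 1.0) < 1e-6
```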
23.11
a. We have
½‖x‖² + γ‖Ax − b‖² = ½ xᵀ [1 + 2γ, 2γ; 2γ, 1 + 2γ] x − xᵀ[2γ, 2γ]ᵀ + γ.
The above is a quadratic with positive definite Hessian. Therefore, the minimizer is
x∗γ = [1 + 2γ, 2γ; 2γ, 1 + 2γ]⁻¹ [2γ, 2γ]ᵀ = (1/(2 + 1/(2γ))) [1, 1]ᵀ.
Hence,
lim_{γ→∞} x∗γ = [1/2, 1/2]ᵀ.
b. Let A = U[S O]Vᵀ be the singular value decomposition of A. For simplicity, denote ε = 1/(2γ). We have
x∗γ = (εI_n + AᵀA)⁻¹ Aᵀb
    = (εI_n + V [S; O] Uᵀ U [S O] Vᵀ)⁻¹ Aᵀb
    = (εI_n + V [S², O; O, O] Vᵀ)⁻¹ Aᵀb
    = V [εI_m + S², O; O, εI_{n−m}]⁻¹ Vᵀ Aᵀb,
where (εI_m + S²)⁻¹ is diagonal. Hence,
x∗γ = V [(εI_m + S²)⁻¹, O; O, (1/ε)I_{n−m}] Vᵀ V [S; O] Uᵀb
    = V [S; O] (εI_m + S²)⁻¹ Uᵀb
    = V [S; O] Uᵀ U (εI_m + S²)⁻¹ Uᵀb
    = Aᵀ U(εI_m + S²)⁻¹Uᵀ b.
But, as γ → ∞ we have ε → 0, and
U(S²)⁻¹Uᵀ = (US²Uᵀ)⁻¹ = (AAᵀ)⁻¹.
Therefore,
x∗γ → Aᵀ(AAᵀ)⁻¹b = x∗.
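The limit x∗γ → Aᵀ(AAᵀ)⁻¹b can be illustrated with the smallest possible example, A = [1 1] and b = 1 (my addition; the closed form below is specific to this A):

```python
# For A = [1 1], b = 1: the minimum-norm solution is
# x* = A^T (A A^T)^{-1} b = [0.5, 0.5]^T, and
# x_gamma = (eps*I + A^T A)^{-1} A^T b = (1/(eps + 2)) [1, 1]^T, eps = 1/(2*gamma).
def x_eps(eps):
    return (1.0 / (eps + 2.0), 1.0 / (eps + 2.0))

for eps in (1.0, 0.1, 1e-8):
    x1, x2 = x_eps(eps)
    # verify the linear system (eps*I + A^T A) x = A^T b componentwise
    assert abs((eps + 1) * x1 + x2 - 1.0) < 1e-12
    assert abs(x1 + (eps + 1) * x2 - 1.0) < 1e-12

x1, x2 = x_eps(1e-8)                     # eps -> 0, i.e., gamma -> infinity
assert abs(x1 - 0.5) < 1e-6 and abs(x2 - 0.5) < 1e-6
```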
clear
clc
disp(' ')
disp('This is a demo illustrating multi-objective optimization.')
disp('The numerical example is a modification of the example')
disp('from the 2002 book by A. Osyczka,')
disp('Example 5.1 on pages 101--105')
disp('------------------------------------------------------------')
disp('Select the population size denoted POPSIZE, for example, 50.')
disp(' ')
POPSIZE=input('Population size POPSIZE = ');
disp('------------------------------------------------------------')
disp('Select the number of iterations denoted NUMITER; e.g., 10.')
disp(' ')
NUMITER=input('Number of iterations NUMITER = ');
disp(' ')
disp('------------------------------------------------------------')
% Main
for i = 1:NUMITER
fprintf('Working on Iteration %.0f...\n',i)
xmat = genxmat(POPSIZE);
if i~=1
for j = 1:length(xR)
xmat = [xmat;xR{j}];
end
end
[xR,fR] = Select_P(xmat);
fprintf('Number of Pareto solutions: %.0f\n',length(fR))
end
disp(' ')
disp('------------------------------------------------------------')
clabel(cs2)
xlabel('x_1','Fontsize',16)
ylabel('x_2','Fontsize',16)
title('Level sets of f_1 and f_2, and Pareto optimal points','Fontsize',16)
set(gca,'Fontsize',16)
grid
hold off
% Declaration
J = size(xmat,1);
% Init
Rset = [1];
j = 1;
isstep7 = 0;
% Step 1
x{1} = xmat(1,:);
f{1} = evalfcn(x{1});
% Step 2
while j < J
j = j+1;
% Step 3
r = 1;
rdel = [];
q = 0;
R = length(Rset);
for k = 1:size(xmat,1)
x{k} = xmat(k,:);
f{k} = evalfcn(x{k});
end
% Step 4
while 1
%for r=1:R
if all(f{j}<f{Rset(r)})
q = q+1;
rdel = [rdel r];
else
% Step 5
if all(f{j}>=f{Rset(r)})
break
end
end
% Step 6
r=r+1;
if r > R
isstep7 = 1;
break
end
end
% Step 7
if isstep7 == 1
isstep7 = 0;
if (q~=0)
Rset(rdel) =[];
Rset = [Rset j];
else
%Step 8
Rset = [Rset j];
end
end
for k = 1:size(xmat,1)
x{k} = xmat(k,:);
f{k} = evalfcn(x{k});
end
R = length(Rset);
end
x1 = [];
y1 = [];
x2 = [];
y2 = [];
for k = 1:size(xmat,1)
if ismember(k,Rset)
x1 = [x1 f{k}(1)];
y1 = [y1 f{k}(2)];
else
x2 = [x2 f{k}(1)];
y2 = [y2 f{k}(2)];
end
end
%newplot
plot(x1,y1,'xr',x2,y2,'.b')
drawnow
function y = f1(x)
% y = x(1)^2+x(2);
% The above function is the original function in Osyczka's 2002 book
% (Example 5.1, page 101).
% Its negative makes a much more interesting example.
y = -(x(1)^2+x(2));
function y = f2(x)
y = x(1)+x(2)^2;
function y = evalfcn(x)
y(1) = f1(x);
y(2) = f2(x);
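For readers who prefer Python, here is a compact counterpart of the Pareto-selection step implemented by the Select_P routine above (a hypothetical port, not from the text; `dominates` and `pareto_front` are illustrative names):

```python
def dominates(fa, fb):
    # fa dominates fb if fa <= fb componentwise with at least one strict <
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_front(fvals):
    # keep objective vectors not dominated by any other vector in the list
    return [f for f in fvals
            if not any(dominates(g, f) for g in fvals if g is not f)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (3.0, 4.0)]
front = pareto_front(pts)
assert front == [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```

This quadratic-time filter mirrors the pairwise comparisons of the MATLAB code, without its step-by-step bookkeeping.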
24.2
a. We proceed by contraposition. Assume that x∗ is not Pareto optimal. Then there exists a point x̂ ∈ Ω such that
fi(x̂) ≤ fi(x∗) for all i = 1, 2, . . . , ℓ
and for some j, fj(x̂) < fj(x∗). Since c > 0, we have cᵀf(x̂) < cᵀf(x∗), which implies that x∗ is not a global minimizer for the weighted-sum problem.
For the converse, consider the following counterexample: Ω = {x ∈ R² : ‖x‖ ≥ 1, x ≥ 0} and f(x) = [x1, x2]ᵀ. It is easy to see that the Pareto front is {x : ‖x‖ = 1, x ≥ 0} (i.e., the part of the unit circle in the nonnegative quadrant). So x∗ = (1/√2)[1, 1]ᵀ is a Pareto minimizer. However, there is no c > 0 such that x∗ is a global minimizer of the weighted-sum problem. To see this, fix c > 0 (assuming c1 ≤ c2 without loss of generality) and consider the objective function value f(x∗) = (c1 + c2)/√2 for the weighted-sum problem. Now, the point x0 = [1, 0]ᵀ is also a feasible point. Moreover f(x0) = c1 ≤ (c1 + c2)/2 < (c1 + c2)/√2 = f(x∗). So x∗ is not a global minimizer of the weighted-sum problem.
b. We proceed by contraposition. Assume that x∗ is not Pareto optimal. Then there exists a point x̂ ∈ Ω such that
fi(x̂) ≤ fi(x∗) for all i = 1, 2, . . . , ℓ
and for some j, fj(x̂) < fj(x∗). By assumption, fi(x∗) ≥ 0 for all i = 1, . . . , ℓ, which implies that
(f1(x∗))^p + · · · + (fℓ(x∗))^p > (f1(x̂))^p + · · · + (fℓ(x̂))^p
(because p > 0). Hence, x∗ is not a global minimizer for the minimum-norm problem.
For the converse, consider the following counterexample: Ω = {x ∈ R2 : x1 + 2x2 ≥ 2, x ≥ 0} and
f (x) = [x1 , x2 ]> . It is easy to see that the Pareto front is {x : x1 + 2x2 = 2, x ≥ 0}. So x∗ = [1, 1/2]> is
a Pareto minimizer. However, there is no p > 0 such that x∗ is a global minimizer of the minimum-norm
problem. To see this, fix p > 0 and consider the objective function value f(x∗) = 1 + (1/2)^p for the minimum-norm problem. Now, the point x0 = [0, 1]ᵀ is also a feasible point. Moreover f(x0) = 1 < 1 + (1/2)^p = f(x∗). So x∗ is not a global minimizer of the minimum-norm problem.
c. For the first part, consider the following counterexample: Ω = {x ∈ R² : x1 + x2 ≥ 2, x ≥ 0} and f(x) = [x1, x2]ᵀ. The Pareto front is {x : x1 + x2 = 2, x ≥ 0}, and x∗ = [1/2, 3/2]ᵀ is a Pareto minimizer. But max{f1(x∗), f2(x∗)} = 3/2. However, x0 = [1, 1]ᵀ is also a feasible point, and max{f1(x0), f2(x0)} = 1 < 3/2. Hence, x∗ is not a global minimizer of the minimax problem.
For the second part, suppose Ω = {x ∈ R² : x1 ≤ 2} and f(x) = [x1, 2]ᵀ. Then, for any x ∈ Ω, max{f1(x), f2(x)} = 2. So any x∗ ∈ Ω is a global minimizer of the minimax (single-objective) problem.
However, consider another point x̂ such that x̂1 < x∗1 . Then, f1 (x̂) < f1 (x∗ ) and f2 (x̂) = f2 (x∗ ). Hence,
x∗ is not a Pareto minimizer.
In fact, in the above example, no Pareto minimizer exists. However, if we set Ω = {x ∈ R2 : 1 ≤ x1 ≤ 2},
then the counterexample is still valid, but in this case any point of the form [1, x2 ]> is a Pareto minimizer.
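The minimax counterexample in part c can be checked directly (my addition; note that x∗ = [1/2, 3/2]ᵀ lies on the Pareto front x1 + x2 = 2):

```python
# Omega = {x : x1 + x2 >= 2, x >= 0}, f(x) = [x1, x2]^T
def minimax(x):
    return max(x)

def feasible(x):
    return x[0] + x[1] >= 2 and x[0] >= 0 and x[1] >= 0

xstar = (0.5, 1.5)   # a Pareto minimizer
x0 = (1.0, 1.0)      # another feasible point
assert feasible(xstar) and feasible(x0)
assert minimax(x0) < minimax(xstar)   # 1 < 3/2: x* is not a minimax solution
```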
24.3
Let
f (x) = c∗> f (x)
where x ∈ Ω = {x : h(x) = 0}. The function f is convex because all the functions fi are convex and
c∗i > 0, i = 1, 2, . . . , `. We can represent the given first-order condition in the following form: for any feasible
direction d at x∗ , we have
d> ∇f (x∗ ) ≥ 0.
By Theorem 22.7, the point x∗ is a global minimizer of f over Ω. Therefore,
That is,
c∗1 f1(x∗) + · · · + c∗ℓ fℓ(x∗) ≤ c∗1 f1(x) + · · · + c∗ℓ fℓ(x) for all x ∈ Ω.
To finish the proof, suppose, to arrive at a contradiction, that the above condition holds but x∗ is not Pareto optimal. Then there exists a point x̂ ∈ Ω such that
fi(x̂) ≤ fi(x∗) for all i = 1, 2, . . . , ℓ
and for some j, fj(x̂) < fj(x∗). Since c∗i > 0 for all i = 1, 2, . . . , ℓ, we must have
c∗1 f1(x∗) + · · · + c∗ℓ fℓ(x∗) > c∗1 f1(x̂) + · · · + c∗ℓ fℓ(x̂),
which contradicts the condition c∗1 f1(x∗) + · · · + c∗ℓ fℓ(x∗) ≤ c∗1 f1(x) + · · · + c∗ℓ fℓ(x) for all x ∈ Ω. This completes the proof. (See also Exercise 24.2, part a.)
24.4
Let
f (x) = c∗> f (x)
where x ∈ Ω = {x : h(x) = 0}. The function f is convex because all the functions fi are convex and c∗i > 0,
i = 1, 2, . . . , ℓ. We can represent the given Lagrange condition in the form
Df(x∗) + λ∗ᵀDh(x∗) = 0ᵀ.
Because f is convex, this condition implies that x∗ is a global minimizer of f over Ω. That is,
c∗1 f1(x∗) + · · · + c∗ℓ fℓ(x∗) ≤ c∗1 f1(x) + · · · + c∗ℓ fℓ(x) for all x ∈ Ω.
∗
To finish the proof, we now assume that x is not Pareto optimal and the above condition holds. We then
proceed using the proof by contradiction. Because, by assumption, x∗ is not Pareto optimal, there exists a
point x̂ ∈ Ω such that
fi(x̂) ≤ fi(x∗) for all i = 1, 2, . . . , ℓ
and for some j, fj(x̂) < fj(x∗). Since c∗i > 0 for all i = 1, 2, . . . , ℓ, we must have
∑_{i=1}^ℓ c∗i fi(x∗) > ∑_{i=1}^ℓ c∗i fi(x̂),
which contradicts the above condition, ∑_{i=1}^ℓ c∗i fi(x∗) ≤ ∑_{i=1}^ℓ c∗i fi(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)
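A small numerical check of the Lagrange condition used here may help. The instance below (f1(x) = x1², f2(x) = x2², h(x) = x1 + x2 − 1) is an assumed example of mine, not the text's; it solves Df(x∗) + λ∗>Dh(x∗) = 0> together with h(x∗) = 0 in closed form and verifies the residuals.

```python
# Assumed toy instance for 24.4: minimize f = c1*x1^2 + c2*x2^2
# subject to h(x) = x1 + x2 - 1 = 0, and verify the Lagrange condition.
c1, c2 = 0.5, 0.5  # positive weights

# Stationarity gives 2*c1*x1 + lam = 0 and 2*c2*x2 + lam = 0;
# substituting into x1 + x2 = 1 yields lam = -2 / (1/c1 + 1/c2).
lam = -2.0 / (1.0 / c1 + 1.0 / c2)
x1, x2 = -lam / (2.0 * c1), -lam / (2.0 * c2)

# Residuals of Df(x*) + lam*Dh(x*) = 0 (Dh = [1, 1]) and of h(x*) = 0.
residual = (2.0 * c1 * x1 + lam, 2.0 * c2 * x2 + lam)
print(x1 + x2 - 1.0, residual)  # both residuals vanish
```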
24.5
Let
f (x) = c∗> f (x)
where x ∈ Ω = {x : g(x) ≤ 0}. The function f is convex because all the functions fi are convex and c∗i > 0,
i = 1, 2, . . . , ℓ. We can represent the given KKT condition in the form
µ∗ ≥ 0
Df (x∗ ) + µ∗> Dg(x∗ ) = 0>
µ∗> g(x∗ ) = 0
g(x∗ ) ≤ 0.
Since f is convex, this condition implies that x∗ is a global minimizer of f over Ω. That is,
∑_{i=1}^ℓ c∗i fi(x∗) ≤ ∑_{i=1}^ℓ c∗i fi(x) for all x ∈ Ω.
To finish the proof, we now assume that x∗ is not Pareto optimal and that the above condition holds; we
then proceed by contradiction. Because, by assumption, x∗ is not Pareto optimal, there exists a
point x̂ ∈ Ω such that
point x̂ ∈ Ω such that
fi(x̂) ≤ fi(x∗) for all i = 1, 2, . . . , ℓ
and for some j, fj(x̂) < fj(x∗). Since c∗i > 0 for all i = 1, 2, . . . , ℓ, we must have
∑_{i=1}^ℓ c∗i fi(x∗) > ∑_{i=1}^ℓ c∗i fi(x̂),
which contradicts the above condition, ∑_{i=1}^ℓ c∗i fi(x∗) ≤ ∑_{i=1}^ℓ c∗i fi(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)
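The KKT conditions invoked here can be checked on a one-dimensional example. The instance below (f1(x) = (x−1)², f2(x) = (x−2)², g(x) = x − 1) is an assumed example of mine: the unconstrained minimizer of the weighted sum is infeasible, so the constraint is active at x∗ = 1, and dual feasibility, stationarity, and complementary slackness all hold.

```python
# Assumed 1-D instance for 24.5: f(x) = c1*(x-1)^2 + c2*(x-2)^2,
# constraint g(x) = x - 1 <= 0; verify the KKT conditions at x* = 1.
c1, c2 = 0.5, 0.5
df = lambda x: 2.0 * c1 * (x - 1.0) + 2.0 * c2 * (x - 2.0)  # Df(x)
g = lambda x: x - 1.0                                        # g(x) <= 0
dg = lambda x: 1.0                                           # Dg(x)

# Unconstrained minimizer x = 1.5 is infeasible, so the constraint is
# active and the constrained minimizer is x* = 1.
x_star = 1.0
mu_star = -df(x_star) / dg(x_star)  # solve Df(x*) + mu*Dg(x*) = 0 for mu

print(mu_star,                              # dual feasibility: mu* >= 0
      df(x_star) + mu_star * dg(x_star),    # stationarity residual
      mu_star * g(x_star))                  # complementary slackness
```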
24.6
Let
f (x) = c∗> f (x)
where x ∈ Ω = {x : h(x) = 0, g(x) ≤ 0}. The function f is convex because all the functions fi are convex
and c∗i > 0, i = 1, 2, . . . , ℓ. We can represent the given KKT-type condition in the form
µ∗ ≥ 0
Df (x∗ ) + λ∗> Dh(x∗ ) + µ∗> Dg(x∗ ) = 0>
µ∗> g(x∗ ) = 0
h(x∗ ) = 0
g(x∗ ) ≤ 0.
Since f is convex, this condition implies that x∗ is a global minimizer of f over Ω. That is,
∑_{i=1}^ℓ c∗i fi(x∗) ≤ ∑_{i=1}^ℓ c∗i fi(x) for all x ∈ Ω.
To finish the proof, we now assume that x∗ is not Pareto optimal and that the above condition holds; we
then proceed by contradiction. Because, by assumption, x∗ is not Pareto optimal, there exists a
point x̂ ∈ Ω such that
fi(x̂) ≤ fi(x∗) for all i = 1, 2, . . . , ℓ
and for some j, fj(x̂) < fj(x∗). Since c∗i > 0 for all i = 1, 2, . . . , ℓ, we must have
∑_{i=1}^ℓ c∗i fi(x∗) > ∑_{i=1}^ℓ c∗i fi(x̂),
which contradicts the above condition, ∑_{i=1}^ℓ c∗i fi(x∗) ≤ ∑_{i=1}^ℓ c∗i fi(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)
24.7
The given minimax problem is equivalent to the problem given in the hint:
minimize z
subject to fi (x) − z ≤ 0, i = 1, 2.
Suppose [x∗>, z∗]> is a local minimizer for the above problem (which is equivalent to x∗ being a local
minimizer for the original problem). Then, by the KKT Theorem, there exists µ∗ ≥ 0, where µ∗ ∈ R2,
such that
" #
∗ >
> ∗> [∇f1 (x ) , −1]
[0 , 1] + µ = 0>
[∇f2 (x∗ )> , −1]
" #
∗ ∗
∗> f1 (x ) − z
µ = 0.
f2 (x∗ ) − z ∗
Suppose fi (x∗ ) < max{f1 (x∗ ), f2 (x∗ )}, where i ∈ {1, 2}. Then, z ∗ > fi (x∗ ). Hence, by the above equation
we conclude that µ∗i = 0.
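The epigraph equivalence underlying this solution can be sketched numerically. In the example below the functions f1, f2 are assumed, not from the text: for each x, the smallest z satisfying fi(x) − z ≤ 0 is z = max{f1(x), f2(x)}, so minimizing z over the constraints recovers the minimax problem; at the minimizer both constraints are active, consistent with µ∗i = 0 only for inactive constraints.

```python
# Assumed example for 24.7: min over x of max{f1(x), f2(x)} via the
# epigraph form (minimize z subject to fi(x) - z <= 0, i = 1, 2).
import numpy as np

f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

xs = np.linspace(-2.0, 2.0, 401)
vals = [max(f1(x), f2(x)) for x in xs]  # smallest feasible z for each x
i = int(np.argmin(vals))
x_star, z_star = xs[i], vals[i]
print(x_star, z_star)  # near x* = 0 both constraints are active: f1 = f2 = z*
```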