Strong Duality Results: September 22, 2008
Outline
• Representation Issue
Convex Optimization 1
Lecture 8
Proof
Consider the set V ⊂ Rᵐ × R given by

V = {(u, w) | g(x) ≤ u, f(x) ≤ w, x ∈ X}
The Slater condition is illustrated in the figures below.
Convex Optimization 4
Lecture 8
Proof continues
The vector (0, f*) is not in the interior of the set V. Suppose it is, i.e.,
(0, f*) ∈ int V. Then, there exists an ε > 0 such that (0, f* − ε) ∈ V,
contradicting the optimality of f*.
Thus, either (0, f*) ∈ bd V or (0, f*) ∉ V. By the Supporting Hyperplane
Theorem, there exists a hyperplane passing through (0, f*) and supporting
the set V: there exists (µ, µ₀) ∈ Rᵐ × R with (µ, µ₀) ≠ 0 such that

µᵀu + µ₀w ≥ µ₀f*   for all (u, w) ∈ V   (1)

This relation implies that µ ≥ 0 and µ₀ ≥ 0 (otherwise, since V contains
points with arbitrarily large u and w, the left-hand side could be driven
to −∞).
Suppose that µ₀ = 0. Then, µ ≠ 0 and relation (1) reduces to

inf_{(u,w)∈V} µᵀu ≥ 0

However, for a Slater vector x̄ we have (g(x̄), f(x̄)) ∈ V with g(x̄) < 0,
so that µᵀg(x̄) < 0 (since µ ≥ 0 and µ ≠ 0), a contradiction. Hence µ₀ > 0.
Convex Optimization 5
Lecture 8
Dividing relation (1) by µ₀ > 0 and setting µ̃ = µ/µ₀ ≥ 0, we obtain

inf_{(u,w)∈V} {µ̃ᵀu + w} ≥ f*

Therefore,

q(µ̃) = inf_{x∈X} {f(x) + µ̃ᵀg(x)} ≥ f*   with µ̃ ≥ 0

implying that q* ≥ f*. By the weak duality [q* ≤ f*], it follows that
q* = f*, i.e., there is no duality gap and µ̃ is a dual optimal solution.
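As a sanity check, strong duality under the Slater condition can be verified numerically on a toy problem of our own (not from the lecture): minimize x² subject to x + 1 ≤ 0 over X = R, for which x̄ = −2 is a Slater vector.

```python
# Toy problem (our own illustration): minimize x^2 subject to x + 1 <= 0,
# with X = R. The primal optimum is x* = -1, so f* = 1.
# Dual function: q(mu) = inf_x {x^2 + mu*(x + 1)}; the unconstrained
# minimizer is x = -mu/2, giving the closed form q(mu) = -mu^2/4 + mu.
def q(mu):
    return -mu * mu / 4 + mu

f_star = 1.0
# Maximize q over a fine grid of mu >= 0; the maximum is attained at mu = 2.
q_star = max(q(k / 100) for k in range(0, 501))
assert abs(q_star - f_star) < 1e-12  # q* = f*: no duality gap
```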
We now show that the set of dual optimal solutions is bounded. For any
dual optimal µ̃ ≥ 0, we have

q* = q(µ̃) = inf_{x∈X} {f(x) + µ̃ᵀg(x)}
           ≤ f(x̄) + µ̃ᵀg(x̄)
           ≤ f(x̄) + max_{1≤j≤m} {gⱼ(x̄)} Σ_{j=1}^m µ̃ⱼ

Therefore, min_{1≤j≤m} {−gⱼ(x̄)} Σ_{j=1}^m µ̃ⱼ ≤ f(x̄) − q*, implying that

‖µ̃‖ ≤ Σ_{j=1}^m µ̃ⱼ ≤ (f(x̄) − q*) / min_{1≤j≤m} {−gⱼ(x̄)}
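The bound can be illustrated on a toy problem of our own (not from the lecture): minimize x subject to −x ≤ 0 over X = R, where f* = q* = 0, the unique dual optimal solution is µ̃ = 1, and x̄ = 1 is a Slater vector.

```python
# Toy problem (our own illustration): minimize x subject to -x <= 0, X = R.
# Here f* = 0 (at x = 0). The dual function is
# q(mu) = inf_x {x - mu*x} = 0 if mu == 1, and -infinity otherwise,
# so the unique dual optimal solution is mu = 1 with q* = 0.
f = lambda x: x
g = lambda x: -x        # single inequality constraint g(x) <= 0
mu_opt, q_star = 1.0, 0.0
xbar = 1.0              # Slater vector: g(xbar) = -1 < 0
# With m = 1, the bound reads mu <= (f(xbar) - q*) / (-g(xbar)).
bound = (f(xbar) - q_star) / (-g(xbar))
assert mu_opt <= bound  # bound = 1.0; attained exactly in this example
```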
Example
l∞-Norm Minimization
minimize t
subject to aⱼᵀx − bⱼ − t ≤ 0, j = 1, …, m
           bⱼ − aⱼᵀx − t ≤ 0, j = 1, …, m
           (x, t) ∈ Rⁿ × R
The vector (x̄, t̄) given by

x̄ = 0 and t̄ = ε + max_{1≤j≤m} |bⱼ| for some ε > 0

satisfies the Slater condition: with x̄ = 0, each constraint value is
±bⱼ − t̄ ≤ |bⱼ| − t̄ ≤ −ε < 0.
We refer to a vector satisfying the Slater condition as a Slater vector
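A quick numerical check of this Slater vector (our own sketch; the data a, b below are random placeholders):

```python
import random

random.seed(0)
n, m, eps = 3, 5, 0.1
a = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
b = [random.uniform(-1, 1) for _ in range(m)]

# Slater vector: xbar = 0 and tbar = eps + max_j |b_j|
xbar = [0.0] * n
tbar = eps + max(abs(bj) for bj in b)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# All 2m constraint values g_j(xbar, tbar) must be strictly negative
vals = [dot(aj, xbar) - bj - tbar for aj, bj in zip(a, b)] \
     + [bj - dot(aj, xbar) - tbar for aj, bj in zip(a, b)]
assert all(v < 0 for v in vals)  # strict feasibility: Slater holds
```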
Implications
• When dom f = Rⁿ, the relative interior condition involves only the
constraint set.
Examples
Representation Issue
minimize −x₂
subject to ‖x‖ ≤ x₁
x ∈ X, X = {(x₁, x₂) | x₂ ≥ 0}
• The relaxation of the inequality constraint results in a dual problem, for
which the dual value q(µ) is −∞ for any µ ≥ 0 (verify yourself).
Thus, q ∗ = −∞, while f ∗ = 0.
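The "verify yourself" step can be sketched numerically (our own illustration): along the path x = (k², k) ∈ X, the relaxed term ‖x‖ − x₁ stays bounded (it tends to 1/2) while −x₂ → −∞, so the Lagrangian is unbounded below for every µ ≥ 0.

```python
import math

# Relaxing ||x|| <= x1 gives the Lagrangian
# L(x, mu) = -x2 + mu * (||x|| - x1), minimized over X = {x : x2 >= 0}.
def lagrangian(x1, x2, mu):
    return -x2 + mu * (math.hypot(x1, x2) - x1)

# Along x = (k^2, k): ||x|| - x1 = sqrt(k^4 + k^2) - k^2 -> 1/2,
# while -x2 = -k -> -inf, hence q(mu) = -inf for every mu >= 0.
for mu in [0.0, 1.0, 10.0]:
    values = [lagrangian(k * k, k, mu) for k in range(1, 2001)]
    assert min(values) < -1000.0  # unbounded below along this path
```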
However, a closer look at the constraint set reveals that

C = {(x₁, x₂) | x₁ ≥ 0, x₂ = 0}

since ‖x‖ ≤ x₁ forces x₂ = 0 and x₁ ≥ 0.
Thus the problem is equivalent to
minimize −x₂
subject to x₁ ≥ 0, x₂ = 0
x ∈ R²
• There is no gap for this problem!!! Why?
The duality gap issue is closely related to the “representation” of the
constraints [the model for the constraints]
minimize f (x)
subject to gj (x) ≤ 0, j = 1, . . . , m
Ax = b, Dx ≤ d
x∈X
• There are multiple choices for the set of “inequalities” to be relaxed
Relax-All Rule
minimize f(x)
subject to gⱼ(x) ≤ 0, j = 1, …, m
x ∈ Rⁿ

A linear equality is represented by two linear inequalities: Ax = b is
equivalent to Ax ≤ b and −Ax ≤ −b.
Theorem Let f ∗ be finite. Consider a dual corresponding to the relaxation
of all the constraints, and assume that there is no duality gap. Then, there
is no duality gap when partially relaxing the constraints.
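A hedged sketch of the reasoning (our own, not given on the slide): keeping some constraints in the inner minimization can only increase the dual value, and any dual value is bounded above by f*, so the partial dual optimal value is sandwiched.

```latex
% q^*_{\mathrm{all}}: dual optimal value when relaxing all constraints;
% q^*_{\mathrm{part}}: dual optimal value when relaxing only a subset.
% Restricting the inner infimum to the kept constraints raises it, and
% weak duality bounds any dual value by f^*, hence
q^*_{\mathrm{all}} \;\le\; q^*_{\mathrm{part}} \;\le\; f^* .
% If q^*_{\mathrm{all}} = f^*, the sandwich forces q^*_{\mathrm{part}} = f^*.
```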