LP Notes 310
Michel X. Goemans
May 4, 2010
1 Basics
Linear Programming deals with the problem of optimizing a linear objective function subject to linear equality and inequality constraints on the decision variables. Linear programming has many practical applications (in transportation, production planning, ...). It is also the building block for combinatorial optimization. One aspect of linear programming which is often forgotten is the fact that it is also a useful proof technique. In this first chapter, we describe some linear programming formulations for some classical problems. We also show that linear programs can be expressed in a variety of equivalent ways.
1.1 Formulations
1.1.1 The Diet Problem
In the diet model, a list of available foods is given together with the nutrient content and
the cost per unit weight of each food. A certain amount of each nutrient is required per
day. For example, here is the data corresponding to a civilization with just two types of
grains (G1 and G2) and three types of nutrients (starch, proteins, vitamins):

        Starch   Proteins   Vitamins   Cost ($/unit)
  G1      5         4          2          0.60
  G2      7         2          1          0.35

The requirement per day of starch, proteins and vitamins is 8, 15 and 3 respectively. The
problem is to find how much of each food to consume per day so as to get the required
amount per day of each nutrient at minimal cost.
When trying to formulate a problem as a linear program, the first step is to decide
which decision variables to use. These variables represent the unknowns in the problem.
In the diet problem, a very natural choice of decision variables is:

x1: number of units of grain G1 to be consumed per day,
x2: number of units of grain G2 to be consumed per day.
The next step is to write down the objective function. The objective function is the
function to be minimized or maximized. In this case, the objective is to minimize the
total cost per day which is given by z = 0.6x1 + 0.35x2 (the value of the objective function
is often denoted by z).
Finally, we need to describe the different constraints that need to be satisfied by x1
and x2. First of all, x1 and x2 must certainly satisfy x1 ≥ 0 and x2 ≥ 0. Only nonnegative amounts of food can be eaten! These constraints are referred to as nonnegativity
constraints. Nonnegativity constraints appear in most linear programs. Moreover, not all
possible values for x1 and x2 give rise to a diet with the required amounts of nutrients
per day. The amount of starch in x1 units of G1 and x2 units of G2 is 5x1 + 7x2 and this
amount must be at least 8, the daily requirement of starch. Therefore, x1 and x2 must
satisfy 5x1 + 7x2 ≥ 8. Similarly, the requirements on the amount of proteins and vitamins
imply the constraints 4x1 + 2x2 ≥ 15 and 2x1 + x2 ≥ 3.
This diet problem can therefore be formulated by the following linear program:

Minimize z = 0.6x1 + 0.35x2
subject to:
5x1 + 7x2 ≥ 8
4x1 + 2x2 ≥ 15
2x1 + x2 ≥ 3
x1 ≥ 0, x2 ≥ 0.
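To make this concrete, the diet LP can also be solved numerically. Here is a minimal Python sketch (assuming SciPy is available; linprog expects ≤ constraints, so the ≥ rows are negated):

```python
# Minimal sketch: solving the diet LP with SciPy's linprog.
from scipy.optimize import linprog

c = [0.6, 0.35]        # cost per unit of G1 and G2
A_ub = [[-5, -7],      # starch:   5x1 + 7x2 >= 8, negated for <=
        [-4, -2],      # proteins: 4x1 + 2x2 >= 15
        [-2, -1]]      # vitamins: 2x1 +  x2 >= 3
b_ub = [-8, -15, -3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # bounds default to x >= 0
print(res.x, res.fun)                   # optimal amounts and daily cost
```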
1.1.2 The Transportation Problem

Suppose a company has two factories, F1 and F2, and three retail centers, C1, C2 and C3. The monthly demands at the retail centers are (in thousands of widgets) 8, 5 and 2 respectively, while the monthly supplies at the factories are 6 and 9 respectively. Notice that the total supply equals the total demand. We are also given the cost of transportation of 1 widget between any factory and any retail center.
        C1   C2   C3
  F1     5    5    3
  F2     6    4    1

Cost of transportation (in 0.01$/widget).
As decision variables, we use xij, the number of widgets (in thousands) transported from factory Fi to retail center Cj; the objective is then to minimize the total transportation cost z = 5x11 + 5x12 + 3x13 + 6x21 + 4x22 + x23. We now need to write down the constraints. First, we have the nonnegativity constraints saying that xij ≥ 0 for i = 1, 2 and j = 1, 2, 3. Moreover, we have that the demand at each retail center must be met. This gives rise to the following constraints:
x11 + x21 = 8,
x12 + x22 = 5,
x13 + x23 = 2.
Finally, each factory cannot ship more than its supply, resulting in the following constraints:

x11 + x12 + x13 ≤ 6,
x21 + x22 + x23 ≤ 9.
These inequalities can be replaced by equalities since the total supply is equal to the total
demand. A linear programming formulation of this transportation problem is therefore
given by:

Min z = 5x11 + 5x12 + 3x13 + 6x21 + 4x22 + x23
subject to:
x11 + x21 = 8
x12 + x22 = 5
x13 + x23 = 2
x11 + x12 + x13 = 6
x21 + x22 + x23 = 9
xij ≥ 0 for i = 1, 2 and j = 1, 2, 3.
Among these 5 equality constraints, one is redundant, i.e. it is implied by the other constraints or, equivalently, it can be removed without modifying the feasible space. For example, by adding the first 3 equalities and subtracting the fourth equality we obtain the last equality. Similarly, by adding the last 2 equalities and subtracting the first two equalities we obtain the third one.
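For readers who want to verify the formulation, here is a minimal sketch solving this transportation LP with SciPy's linprog (the solver handles the redundant equality without special treatment):

```python
# Sketch: the transportation LP, with variables ordered
# x11, x12, x13, x21, x22, x23 (in thousands of widgets).
from scipy.optimize import linprog

c = [5, 5, 3, 6, 4, 1]                  # transportation costs
A_eq = [[1, 0, 0, 1, 0, 0],             # demand at C1: x11 + x21 = 8
        [0, 1, 0, 0, 1, 0],             # demand at C2: x12 + x22 = 5
        [0, 0, 1, 0, 0, 1],             # demand at C3: x13 + x23 = 2
        [1, 1, 1, 0, 0, 0],             # supply at F1: 6
        [0, 0, 0, 1, 1, 1]]             # supply at F2: 9
b_eq = [8, 5, 2, 6, 9]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # x >= 0 by default
print(res.x, res.fun)
```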
In general, a linear program has the form:

Maximize or Minimize z = c0 + c1 x1 + . . . + cn xn
subject to:
ai1 x1 + ai2 x2 + . . . + ain xn (≤ or = or ≥) bi,   i = 1, . . . , m
xj ≥ 0 or xj ≷ 0,   j = 1, . . . , n,

where each constraint may independently be an inequality or an equality, and each variable may be either restricted to be nonnegative (xj ≥ 0) or unrestricted in sign (xj ≷ 0).
The problem data in this linear program consists of cj (j = 0, . . . , n), bi (i = 1, . . . , m)
and aij (i = 1, . . . , m, j = 1, . . . , n). cj is referred to as the objective function coefficient
of xj or, more simply, the cost coefficient of xj . bi is known as the right-hand-side (RHS)
of equation i. Notice that the constant term c0 can be omitted without affecting the set
of optimal solutions.
A linear program is said to be in standard form if
• it is a maximization program,
• its constraints are equalities (apart from the nonnegativity constraints), and
• all its variables are restricted to be nonnegative.
In matrix notation, a linear program in standard form is written as:

Max z = cᵀx
subject to:
Ax = b
x ≥ 0,
where c = (c1, . . . , cn)ᵀ, b = (b1, . . . , bm)ᵀ and x = (x1, . . . , xn)ᵀ are column vectors, cᵀ denotes the transpose of the vector c, and A = [aij] is the m × n matrix whose (i, j) element is aij.
Any linear program can in fact be transformed into an equivalent linear program in standard form. Indeed:
• a minimization of z can be replaced by the maximization of z′ = −z;
• a variable xj unrestricted in sign can be replaced by the difference xj⁺ − xj⁻ of two nonnegative variables;
• an inequality constraint can be turned into an equality by adding a nonnegative slack variable (for ≤) or subtracting a nonnegative surplus variable (for ≥).
For example, the linear program

Minimize z = 2x1 − x2
subject to:
x1 + x2 ≥ 2
3x1 + 2x2 ≤ 4
x1 + 2x2 = 3
x1 ≷ 0, x2 ≥ 0

is equivalent to the standard form linear program

Maximize z′ = −2x1⁺ + 2x1⁻ + x2
subject to:
x1⁺ − x1⁻ + x2 − x3 = 2
3x1⁺ − 3x1⁻ + 2x2 + x4 = 4
x1⁺ − x1⁻ + 2x2 = 3
x1⁺ ≥ 0, x1⁻ ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.
Similarly, a linear program is said to be in canonical form if it can be written as:

Max z = cᵀx
subject to:
Ax ≤ b
x ≥ 0.
A linear program in canonical form can be replaced by a linear program in standard form by just replacing Ax ≤ b by Ax + Is = b, s ≥ 0, where s is a vector of slack variables and I is the m × m identity matrix. Similarly, a linear program in standard form can be replaced by a linear program in canonical form by replacing Ax = b by A′x ≤ b′ where

A′ = ( A )            ( b )
     (−A )  and  b′ = (−b ).
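In matrix form both conversions are one-liners; here is a sketch (assuming NumPy):

```python
# Sketch of the two conversions between canonical and standard form.
import numpy as np

def canonical_to_standard(A, b):
    # Turn  Ax <= b, x >= 0  into  [A I][x; s] = b  with slacks s >= 0.
    m = A.shape[0]
    return np.hstack([A, np.eye(m)]), b

def standard_to_canonical(A, b):
    # Turn  Ax = b, x >= 0  into  A'x <= b'  with A' = [A; -A], b' = [b; -b].
    return np.vstack([A, -A]), np.concatenate([b, -b])
```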
Problems
Problem 1.1. A company has to decide its production levels for the coming 4 months. The demands for those months are 900, 1100, 1700 and 1300 units respectively. The
maximum production per month is 1200 units. Material produced one month can be
delivered either that same month or stored in inventory and delivered at some other
month. It costs the company $3 to carry one unit in inventory from one month to the
next. Through additional man-hours, up to 400 additional units can be produced per
month but, in this case, the company incurs a cost of $7/unit. Formulate as a linear
program the problem of determining the production levels so as to minimize the total
costs.
Problem 1.2. A contractor is working on a project that is expected to last for a period of T weeks. It is estimated that during the jth week, the contractor will
need uj man-hours of labor, j = 1 to T , for this project. The contractor can fulfill these
requirements either by hiring laborers over the entire T week horizon (called steady
labor) or by hiring laborers on a weekly basis each week (called casual labor) or by
employing a combination of both. One man-hour of steady labor costs c1 dollars; the
cost is the same each week. However, the cost of casual labor may vary from week
to week, and it is expected to be c2j dollars/man-hour, during week j, j = 1, . . . , T .
Formulate the problem of fulfilling his labor requirements at minimum cost as a linear
program.
Problem 1.3. Transform the following linear program into an equivalent linear program
in standard form (Max{cT x : Ax = b, x ≥ 0}):
Min x1 − x2
subject to:
2x1 + x2 ≥ 3
3x1 − x2 ≤ 7
x1 ≥ 0, x2 ≷ 0.
Problem 1.4. Consider the problem of minimizing a weighted sum of absolute values of affine functions of the decision variables, subject to linear constraints, where A, b, c and d are given (the ci's being the weights). Assume that ci ≥ 0 for all i. As such, this is not a linear program since the objective function involves absolute values. Show how this problem can be formulated equivalently as a linear program. Explain why the linear program is equivalent to the original optimization problem. Would the transformation work if we were maximizing?
Problem 1.5. Given a set (or arrangement) of n lines (see Figure 1.1) in the plane
(described as ai x + bi y = ci for i = 1, . . . , n), show how the problem of finding a point
x in the plane which minimizes the sum of the distances between x and each line can
be formulated as a linear program.
Hint: use Problem 1.1.4.
Problem 1.6. Given two linear functions over x, say cT x and dT x, show how to formulate
the problem of minimizing max(cT x, dT x) over Ax = b, x ≥ 0 as a linear program.
Would the transformation work if you were to maximize max(cT x, dT x)? How about
minimizing the maximum of several linear functions?
Problem 1.7. A function f : R → R is said to be convex if f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y) for all x, y ∈ R and all 0 ≤ α ≤ 1. It is piecewise linear if R can be partitioned into intervals over which the function is linear. See Figure 1.2 for an example. Show how to formulate the problem of minimizing Σi fi(xi), where the fi's are convex piecewise linear functions, as a linear program.
Problem 1.8. What is the optimum solution of the following linear program:

2 The Simplex Method

In 1947, George B. Dantzig developed a technique to solve linear programs; this technique is referred to as the simplex method.
A pivot on a nonzero element ars consists of the following elementary row operations:
• replacing Er by Ēr = (1/ars) Er, and
• for i = 1, . . . , m, i ≠ r, replacing Ei by Ēi = Ei − ais Ēr = Ei − (ais/ars) Er.
After pivoting on ars , all coefficients in column s are equal to 0 except the one in row r
which is now equal to 1. Since a pivot consists of elementary row operations, the resulting
system Āx = b̄ is equivalent to the original system.
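In code, a pivot is exactly these two elementary row operations; a sketch (assuming NumPy):

```python
# Sketch: pivoting on entry (r, s) of a tableau T (2-D NumPy array).
import numpy as np

def pivot(T, r, s):
    T = np.array(T, dtype=float)
    T[r] /= T[r, s]                 # divide row r by the pivot element
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, s] * T[r]  # E_i <- E_i - a_is * (new row r)
    return T                        # column s is now a unit vector
```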
Elementary row operations and pivots can also be defined in terms of matrices. Let P
be an m × m invertible matrix (i.e. P⁻¹ exists or, equivalently, det P ≠ 0, or the system Px = 0 has x = 0 as its unique solution). Then {x : Ax = b} = {x : PAx = Pb}.
The two types of elementary row operations correspond to the following two types of matrices P (all entries not shown are equal to 0): multiplying row i by α ≠ 0 corresponds to the identity matrix with its (i, i) entry replaced by α, while adding β times row k to row i corresponds to the identity matrix with an additional entry β in position (i, k).
Suppose our linear program has the following special form:
• it is a maximization problem in standard form Max{cᵀx : Ax = b, x ≥ 0},
• b ≥ 0,
• among the columns of the constraint matrix we can find the m columns of the m × m identity matrix; the corresponding collection B of m variables satisfies AB = I.
For example, the following linear program has this required form:
Max z = 10 + 20x1 + 16x2 + 12x3
subject to:
x1 + x4 = 4
2x1 + x2 + x3 + x5 = 10
2x1 + 2x2 + x3 + x6 = 16
x1, x2, x3, x4, x5, x6 ≥ 0.
In this example, B = {x4 , x5 , x6 }. The variables in B are called basic variables while
the other variables are called nonbasic. The set of nonbasic variables is denoted by N . In
the example, N = {x1 , x2 , x3 }.
The advantage of having AB = I is that we can quickly infer the values of the basic
variables given the values of the nonbasic variables. For example, if we let x1 = 1, x2 =
2, x3 = 3, we obtain
x4 = 4 − x1 = 3,
x5 = 10 − 2x1 − x2 − x3 = 3,
x6 = 16 − 2x1 − 2x2 − x3 = 7.
Also, we don’t need to know the values of the basic variables to evaluate the cost of
the solution. In this case, we have z = 10 + 20x1 + 16x2 + 12x3 = 98. Notice that
there is no guarantee that the so-constructed solution be feasible. For example, if we set
x1 = 5, x2 = 2, x3 = 1, we have that x4 = 4 − x1 = −1 does not satisfy the nonnegativity
constraint x4 ≥ 0.
There is an assignment of values to the nonbasic variables that needs special consideration. By just letting all nonbasic variables be equal to 0, we see that the values
of the basic variables are just given by the right-hand-sides of the constraints and the
cost of the resulting solution is just the constant term in the objective function. In our
example, letting x1 = x2 = x3 = 0, we obtain x4 = 4, x5 = 10, x6 = 16 and z = 10. Such
a solution is called a basic feasible solution or bfs. The feasibility of this solution comes
from the fact that b ≥ 0. Later, we shall see that, when solving a linear program, we
can restrict our attention to basic feasible solutions. The simplex method is an iterative
method that generates a sequence of basic feasible solutions (corresponding to different
bases) and eventually stops when it has found an optimal basic feasible solution.
Instead of always writing these linear programs explicitly, we adopt what is known
as the tableau format. First, in order to have the objective function play a similar role
as the other constraints, we consider z to be a variable and the objective function as a
constraint. Putting all variables on the same side of the equality sign, we obtain:
−z + 20x1 + 16x2 + 12x3 = −10.
We also get rid of the variable names in the constraints to obtain the tableau format:
−z   x1   x2   x3   x4   x5   x6
 1   20   16   12    0    0    0   −10
      1    0    0    1    0    0     4
      2    1    1    0    1    0    10
      2    2    1    0    0    1    16
Pivoting on ā11 (x1, which has the largest cost coefficient c̄1 = 20, enters the basis, and the min ratio test min{4/1, 10/2, 16/2} = 4 dictates that x4 leaves), we obtain the tableau:

−z   x1   x2   x3   x4   x5   x6
 1    0   16   12  −20    0    0   −90
      1    0    0    1    0    0     4
      0    1    1   −2    1    0     2
      0    2    1   −2    0    1     8

Notice that while pivoting we also modified the objective function row as if it were just another constraint. We now have a linear program which is equivalent to the original one and from which we can easily extract a (basic) feasible solution of value 90. Still z can be improved by increasing xs for s = 2 or 3 since these variables have a positive cost coefficient c̄s (for simplicity, we always denote the data corresponding to the current tableau by c̄, Ā and b̄). Let us choose the one with the greatest c̄s; in our case x2 will enter the basis. The maximum value that x2 can take while x3 and x4 remain at the value 0 is dictated by the constraints x1 = 4 ≥ 0, x5 = 2 − x2 ≥ 0 and x6 = 8 − 2x2 ≥ 0. The tightest of these inequalities being x5 = 2 − x2 ≥ 0, we have that x5 will leave the basis.
Therefore, pivoting on ā22 , we obtain the tableau:
−z   x1   x2   x3   x4   x5   x6
 1    0    0   −4   12  −16    0  −122
      1    0    0    1    0    0     4
      0    1    1   −2    1    0     2
      0    0   −1    2   −2    1     4
The current basis is B = {x1 , x2 , x6 } and its value is 122. Since 12 > 0, we can
improve the current basic feasible solution by having x4 enter the basis. Instead of writing the constraints on x4 explicitly to compute the level at which x4 can enter the basis, we perform the min ratio test: if xs is the variable entering the basis, we compute

min over {i : āis > 0} of b̄i/āis.

The row attaining the minimum gives the variable that is exiting the basis. In our example, we obtain 2 = min{4/1, 4/2} and therefore variable x6, which is the basic variable corresponding to row 3, leaves the basis. Moreover, in order to get the updated tableau, we need to pivot on ā34. Doing so, we obtain:
−z   x1   x2   x3   x4   x5   x6
 1    0    0    2    0   −4   −6  −146
      1    0   1/2   0    1  −1/2    2
      0    1    0    0   −1    1     6
      0    0  −1/2   1   −1   1/2    2
Why does the min ratio test select a correct leaving variable? The reason is that the basic solution must remain feasible after the pivot, i.e. the new right-hand-sides must remain nonnegative. By the choice of the pivot row r, we have:
• ārs > 0,
• b̄r/ārs ≤ b̄i/āis if āis > 0.
Hence, after pivoting on ārs, the new right-hand-sides b̄′i satisfy:
• b̄′r = b̄r/ārs ≥ 0,
• b̄′i = b̄i − (āis/ārs) b̄r = āis (b̄i/āis − b̄r/ārs) ≥ 0 if āis > 0,
• b̄′i = b̄i − (āis/ārs) b̄r ≥ b̄i ≥ 0 if āis ≤ 0.
We can also justify why the solution keeps improving. Indeed, when we pivot on
ārs > 0, the constant term c̄0 in the objective function becomes c̄0 + b̄r c̄s/ārs. If b̄r > 0,
we have a strict improvement in the objective function value since by our choice of entering
variable c̄s > 0. We shall deal with the case b̄r = 0 later on.
The bfs corresponding to B = {1, 2, 4} is not optimal since there is still a positive cost coefficient. We see that x3 can enter the basis and, since there is just one positive element in the column of x3, we have that x1 leaves the basis. We thus pivot on ā13 and obtain:
−z   x1   x2   x3   x4   x5   x6
 1   −4    0    0    0   −8   −4  −154
      2    0    1    0    2   −1     4
      0    1    0    0   −1    1     6
      1    0    0    1    0    0     4

All cost coefficients being now nonpositive, this tableau is optimal: the basic feasible solution x = (0, 6, 4, 4, 0, 0), of value z = 154, is an optimal solution of the linear program.
Consider now a tableau in which x2 is entering the basis and the min ratio test gives 2 = min{2/1, 4/2}, so that either x5 or x6 can leave the basis. If we decide to have x5 leave the basis, we pivot on ā22; otherwise, we pivot on ā32. Notice that, in either case, the pivot operation creates a zero coefficient among the RHS. For example, pivoting on ā22, we obtain:
−z   x1   x2   x3   x4   x5   x6
 1    0    0   −4   12  −16    0  −122
      1    0    0    1    0    0     4
      0    1    1   −2    1    0     2
      0    0   −1    2   −2    1     0
A bfs with b̄i = 0 for some i is called degenerate. A linear program is nondegenerate if
no bfs is degenerate. Pivoting now on ā34 we obtain:
−z   x1   x2   x3   x4   x5   x6
 1    0    0    2    0   −4   −6  −122
      1    0   1/2   0    1  −1/2    4
      0    1    0    0   −1    1     2
      0    0  −1/2   1   −1   1/2    0
This pivot is degenerate. A pivot on ārs is called degenerate if b̄r = 0. Notice that a
degenerate pivot alters neither the b̄i ’s nor c̄0 . In the example, the bfs is (4, 2, 0, 0, 0, 0)
in both tableaus. We thus observe that several bases can correspond to the same basic
feasible solution.
Another situation that may occur is when xs is entering the basis, but āis ≤ 0 for
i = 1, . . . , m. In this case, there is no term in the min ratio test. This means that, while
keeping the other nonbasic variables at their zero level, xs can take an arbitrarily large
value without violating feasibility. Since c̄s > 0, this implies that z can be made arbitrarily
large. In this case, the linear program is said to be unbounded or unbounded from above
if we want to emphasize the fact that we are dealing with a maximization problem. For
example, consider the following tableau:
−z   x1   x2   x3   x4   x5   x6
 1    0   16   12   20    0    0   −90
      1    0    0   −1    0    0     4
      0    1    1    0    1    0     2
      0    2    1   −2    0    1     8

Here c̄4 = 20 > 0 while every entry of the column of x4 is nonpositive, so x4 can be increased arbitrarily: this linear program is unbounded.
In summary, an iteration of the simplex method (Phase II) proceeds as follows:

1. Suppose the current tableau satisfies b̄i ≥ 0 for i = 1, . . . , m, and the variables can be partitioned into B = {xj1, . . . , xjm} and N with
• c̄ji = 0 for i = 1, . . . , m, and
• ākji = 0 for k ≠ i and ākji = 1 for k = i.
The current basic feasible solution is given by xji = b̄i for i = 1, . . . , m and xj = 0 otherwise. The objective function value of this solution is c̄0.

2. If c̄j ≤ 0 for all j = 1, . . . , n then the current basic feasible solution is optimal. STOP.

3. Find a column s for which c̄s > 0. xs is the variable entering the basis.

4. Check for unboundedness. If āis ≤ 0 for i = 1, . . . , m then the linear program is unbounded. STOP.

5. Min ratio test. Find a row r such that

b̄r/ārs = min over {i : āis > 0} of b̄i/āis.

6. Pivot on ārs. Replace xjr by xs in B.

7. Go to step 2.
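These steps translate almost line by line into code. Below is a compact sketch of Phase II (assuming NumPy). The −z column is dropped, so row 0 of the array holds the reduced costs c̄ with −c̄0 in the last column, and rows 1, . . . , m hold Ā and b̄; basis[i] is the index of the basic variable of row i + 1. It uses the largest coefficient entering rule and is therefore, like the method as stated, not protected against cycling:

```python
# Sketch of Phase II of the simplex method on a tableau.
import numpy as np

def simplex_phase2(T, basis, eps=1e-9):
    T = np.array(T, dtype=float)
    while True:
        cbar = T[0, :-1]
        if np.all(cbar <= eps):               # step 2: optimality test
            return T, basis
        s = int(np.argmax(cbar))              # step 3: entering variable
        col = T[1:, s]
        if np.all(col <= eps):                # step 4: unboundedness test
            raise ValueError("linear program is unbounded")
        safe = np.where(col > eps, col, 1.0)  # avoid division by zero
        ratios = np.where(col > eps, T[1:, -1] / safe, np.inf)
        r = int(np.argmin(ratios)) + 1        # step 5: min ratio test
        T[r] /= T[r, s]                       # step 6: pivot on a_rs
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, s] * T[r]
        basis[r - 1] = s                      # replace x_jr by x_s in B

# Example: the running example of this chapter; the optimal value read
# off the final tableau is -T[0, -1] = 154.
T0 = [[20, 16, 12, 0, 0, 0, -10],
      [ 1,  0,  0, 1, 0, 0,   4],
      [ 2,  1,  1, 0, 1, 0,  10],
      [ 2,  2,  1, 0, 0, 1,  16]]
Tf, B = simplex_phase2(T0, [3, 4, 5])
```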
Theorem 2.1 The simplex method solves a nondegenerate linear program in finitely many
iterations.
Proof:
For nondegenerate linear programs, we have a strict improvement (namely of value b̄r c̄s/ārs > 0) in the objective function value at each iteration. This means that, in the sequence of bfs produced by the simplex method, each bfs can appear at most once. Therefore, for nondegenerate linear programs, the number of iterations is certainly upper bounded by the number of bfs. This latter number is finite; for example, it is upper bounded by the binomial coefficient (n choose m), since any bfs corresponds to a choice of m basic variables (although not all such choices give rise to feasible solutions). QED.
However, when the linear program is degenerate, we might have degenerate pivots
which give no strict improvement in the objective function. As a result, a subsequence
of bases might repeat implying the nontermination of the method. This phenomenon is
called cycling.
2.4.1 An example of cycling

The following sequence of tableaus is obtained by using the largest coefficient entering and leaving variable rules defined below. All right-hand-sides are 0, so every pivot is degenerate and the objective function value never changes; after the sixth pivot the initial tableau reappears, so the simplex method cycles.

−z  x1  x2  x3  x4  x5  x6
 1      0.96   −8      0    −4    0
 1   −12.5    −2       1   12.5   0
 1      0.24  −2   −0.24    1     0

−z  x1  x2  x3  x4  x5  x6
 1      4      1.92  −0.96  −16   0
 1   −12.5    −2       1   12.5   0
 1      1      0.24  −0.24  −2    0

−z  x1  x2  x3  x4  x5  x6
 1     −4      0.96    0    −8    0
 12.5   1      1      −2  −12.5   0
 1      1      0.24  −0.24  −2    0

−z  x1  x2  x3  x4  x5  x6
 1    −16     −0.96   1.92   4    0
 12.5   1      1      −2  −12.5   0
−2     −0.24   1      0.24   1    0

−z  x1  x2  x3  x4  x5  x6
 1     −8      0      −4    0.96  0
−12.5  −2     12.5     1     1    0
−2     −0.24   1      0.24   1    0

−z  x1  x2  x3  x4  x5  x6
 1      4      1.92  −16   −0.96  0
−12.5  −2     12.5     1     1    0
 1      0.24  −2     −0.24   1    0
Largest coefficient entering variable rule: Select the variable xs with the largest
c̄s > 0. In case of ties, select the one with the smallest subscript s.
Largest coefficient leaving variable rule: Among all rows attaining the minimum in
the minimum ratio test, select the one with the largest pivot ārs . In case of ties,
select the one with the smallest subscript r.
The example of subsection 2.4.1 shows that the use of the largest coefficient entering
and leaving variable rules does not prevent cycling. There are two rules that avoid cycling:
the lexicographic rule and Bland’s rule (after R. Bland who discovered it in 1976). We’ll
just describe the latter one, which is conceptually the simplest.
Bland’s anticycling pivoting rule: Among all variables xs with positive c̄s , select the
one with the smallest subscript s. Among the eligible (according to the minimum
ratio test) leaving variables xl , select the one with the smallest subscript l.
Theorem 2.2 The simplex method with Bland’s anticycling pivoting rule terminates after
a finite number of iterations.
Proof:
The proof is by contradiction. If the method does not stop after a finite number of
iterations then there is a cycle of tableaus that repeats. If we delete from the tableau
that initiates this cycle the rows and columns not containing pivots during the cycle, the
resulting tableau has a cycle with the same pivots. For this tableau, all right-hand-sides
are zero throughout the cycle since all pivots are degenerate.
Let t be the largest subscript of the variables remaining. Consider the tableau T1 in the cycle with xt leaving. Let B = {xj1, . . . , xjm} be the corresponding basis (say jr = t), let xs be the associated entering variable, and let a¹ij and c¹j denote the constraint and cost coefficients in T1. On the other hand, consider the tableau T2 with xt entering and denote by a²ij and c²j the corresponding constraint and cost coefficients.

Let x be the (infeasible) solution obtained by letting the nonbasic variables in T1 be zero except for xs = −1. Since all RHS are zero, we deduce that xji = a¹is for i = 1, . . . , m. Since T2 is obtained from T1 by elementary row operations, x must have the same objective function value in T1 and T2. This means that

c¹0 − c¹s = c²0 − c²s + Σ from i=1 to m of a¹is c²ji.

Since we have no improvement in objective function in the cycle, we have c¹0 = c²0. Moreover, c¹s > 0 and, by Bland's rule, c²s ≤ 0 since otherwise xt would not be the entering variable in T2. Hence,

Σ from i=1 to m of a¹is c²ji < 0,

implying that there exists k with a¹ks c²jk < 0. Notice that k ≠ r, i.e. jk < t, since the pivot element in T1, a¹rs, must be positive and c²t > 0. However, in T2, all cost coefficients c²j except c²t are nonpositive; otherwise xj, rather than xt, would have been selected as entering variable. Thus c²jk < 0 and a¹ks > 0. This is a contradiction because Bland's rule should have selected xjk rather than xt in T1 as leaving variable. QED.
To start the simplex method we need an initial basic feasible solution; finding one is the task of Phase I. Instead of solving

Max z = c0 + cᵀx
subject to:
(P) Ax = b
x ≥ 0,

we add some artificial variables {xai : i = 1, . . . , m} and consider the linear program:

Min w = Σ from i=1 to m of xai
subject to:
Ax + Ixa = b
x ≥ 0, xa ≥ 0.

We may assume that b ≥ 0 (multiplying some constraints by −1 if necessary), so that x = 0, xa = b is a basic feasible solution of this auxiliary program.
This program is not in the form required by the simplex method but can easily be transformed into it. Changing min w into max w′ = −w and expressing the objective function in terms of the initial variables (substituting xai = bi − Σj aij xj), we obtain a tableau to which Phase II applies. Solving this auxiliary program, one of the following three cases occurs:
1. w′ is reduced to zero and no artificial variables remain in the basis, i.e. we are left
with a basis consisting only of original variables. In this case, we simply delete
the columns corresponding to the artificial variables, replace the objective function
by the objective function of (P ) after having expressed it in terms of the nonbasic
variables and use Phase II of the simplex method as described in Section 2.3.
2. The optimal value of w′ is negative, i.e. w > 0 at optimality. In this case the original linear program (P) is infeasible: any feasible solution x of (P) would give the solution (x, xa = 0) of the auxiliary program with w = 0.

3. w′ is reduced to zero but some artificial variables remain in the basis. These artificial variables must be at zero level since, for this solution, −w′ = Σ from i=1 to m of xai = 0. Suppose that the ith variable of the basis is artificial. We may pivot on any nonzero (not necessarily positive) element āij of row i corresponding to a non-artificial variable xj. Since b̄i = 0, no change in the solution or in w′ will result. We say that we are driving the artificial variables out of the basis. By repeating this for all artificial variables in the basis, we obtain a basis consisting only of original variables. We have thus reduced this case to case 1.
There is still one detail that needs consideration. We might be unsuccessful in driving one artificial variable out of the basis if āij = 0 for j = 1, . . . , n. However, this means that we have arrived at a zero row in the original matrix by performing elementary row operations, implying that the constraint is redundant. We can delete this constraint and continue in Phase II with a basis of lower dimension.
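The construction of the Phase I program is mechanical; here is a sketch (assuming NumPy and b ≥ 0) that builds the initial Phase I tableau in the same layout as the Phase II sketch of the previous section:

```python
# Sketch: building the Phase I tableau for  Ax = b, x >= 0  with b >= 0.
# One artificial variable per row (for simplicity, even when an original
# column could already serve as a basic variable).
import numpy as np

def phase1_tableau(A, b):
    m, n = A.shape
    # Objective row: sum of the rows containing artificial variables,
    # with zeros in the artificial columns (as in the example below).
    obj = np.concatenate([A.sum(axis=0), np.zeros(m), [b.sum()]])
    rows = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
    T = np.vstack([obj, rows])
    basis = list(range(n, n + m))  # the artificial variables
    return T, basis                # feasible iff Phase II ends with T[0, -1] == 0
```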
Example
Consider the following example already expressed in tableau form.
−z   x1   x2   x3   x4
 1   20   16   12    5    0
      1    0    1    2    4
      0    1    2    3    2
      0    1    0    2    2
We observe that we don’t need to add three artificial variables since we can use x1 as
first basic variable. In phase I, we solve the linear program:
−w′   x1   x2   x3   x4   xa1   xa2
  1    0    2    2    5    0     0    4
       1    0    1    2    0     0    4
       0    1    2    3    1     0    2
       0    1    0    2    0     1    2
The objective function is to minimize xa1 + xa2 or, equivalently, to maximize w′ = −xa1 − xa2. Expressing w′ in terms of the nonbasic variables (using xa1 = 2 − x2 − 2x3 − 3x4 and xa2 = 2 − x2 − 2x4), the objective row of the tableau is obtained by summing the rows corresponding to the artificial variables and zeroing out the entries in the artificial columns. Pivoting on ā22, we obtain:
−w′   x1   x2   x3   x4   xa1   xa2
  1    0    0   −2   −1   −2     0    0
       1    0    1    2    0     0    4
       0    1    2    3    1     0    2
       0    0   −2   −1   −1     1    0
This tableau is optimal and, since w = 0, the original linear program is feasible. To obtain a bfs, we need to drive the remaining artificial variable xa2 out of the basis. This can be done by pivoting on, say, ā34. Doing so, we get:
−w′   x1   x2   x3   x4   xa1   xa2
  1    0    0    0    0   −1    −1    0
       1    0   −3    0   −2     2    4
       0    1   −4    0   −2     3    2
       0    0    2    1    1    −1    0
Problems
Problem 2.1. Solve by the simplex method:
Problem 2.2. Solve by the simplex method using only one pivot:
Max z = 3x1 + x2
subject to:
x1 − x2 ≤ −1
−x1 − x2 ≤ −3
2x1 + x2 ≤ 4
x1 ≥ 0, x2 ≥ 0.
Were you expecting the optimum solution to have all components either 0 or 1?
Problem 2.5. Find a feasible solution to the following system:
x1 + x2 + x3 + x4 + x5 = 2
−x1 + 2x2 + x3 − 3x4 + x5 = 1
x1 − 3x2 − 2x3 + 2x4 − 2x5 = −4
x1 , x2 , x3 , x4 , x5 ≥ 0
Problem 2.6. Use the simplex method to show that the following constraints imply x1 +
2x2 ≤ 8:
4x1 + x2 ≤ 4
2x1 − 3x2 ≤ 6
x1 , x2 ≥ 0
Problem 2.7. How are the various rules of the simplex method affected when solving a
minimization problem instead of a maximization problem as described in these notes?
3 Linear Programming in Matrix Form

In this chapter, we show that the entries of the current tableau are uniquely determined by the collection of decision variables that form the basis, and we give matrix expressions for these entries.
Consider a feasible linear program in standard form:
Max z = cT x
subject to:
Ax = b
x ≥ 0,
where A has full row rank. Consider now any intermediate tableau of phase II of the
simplex method and let B denote the corresponding collection of basic variables. If D
(resp. d) is an m × n matrix (resp. an n-vector), let DB (resp. dB ) denote the restriction
of D (resp. d) to the columns (resp. rows) corresponding to B. We define analogously DN
and dN for the collection N of nonbasic variables. For example, Ax = b can be rewritten
as AB xB +AN xN = b. After possible regrouping of the basic variables, the current tableau
looks as follows:
         xB        xN
 −z       0        c̄ᵀN       −c̄0
          ĀB = I   ĀN          b̄
Since the current tableau has been obtained from the original tableau by a sequence of elementary row operations, we conclude that there exists an invertible matrix P such that

P AB = ĀB = I,
P AN = ĀN
and
P b = b̄.
The first of these relations gives P = AB⁻¹, and therefore

ĀN = AB⁻¹ AN
and
b̄ = AB⁻¹ b.
Moreover, since the objective functions of the original and current tableaus are equivalent (i.e. cᵀB xB + cᵀN xN = c̄0 + c̄ᵀB xB + c̄ᵀN xN = c̄0 + c̄ᵀN xN, using c̄B = 0) and xB = b̄ − ĀN xN, we derive that:

c̄ᵀN = cᵀN − cᵀB AB⁻¹ AN
and
c̄0 = cᵀB AB⁻¹ b.

As we'll see in the next chapter, it is convenient to define an m-vector y by yᵀ = cᵀB AB⁻¹.
In summary, the current tableau can be expressed in terms of the original data as:

         xB    xN
 −z       0    cᵀN − yᵀAN    −yᵀb
          I    AB⁻¹ AN        AB⁻¹ b
The simplex method could be described using this matrix form. For example, the optimality criterion becomes cᵀN − yᵀAN ≤ 0 or, equivalently, cᵀ − yᵀA ≤ 0, i.e. Aᵀy ≥ c, where yᵀ = cᵀB AB⁻¹.
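These formulas are easy to evaluate directly; here is a sketch (assuming NumPy) that rebuilds the current tableau from the original data and a basis B:

```python
# Sketch: the current tableau in terms of the original data (A, b, c).
import numpy as np

def tableau_from_basis(A, b, c, B):
    AB_inv = np.linalg.inv(A[:, B])
    y = AB_inv.T @ c[B]          # y^T = c_B^T A_B^{-1}
    Abar = AB_inv @ A            # equals the identity on the columns in B
    bbar = AB_inv @ b            # current right-hand-sides
    cbar = c - A.T @ y           # reduced costs  c^T - y^T A
    return Abar, bbar, cbar, y   # B is optimal iff cbar <= 0 and bbar >= 0
```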
Problems
Problem 3.1. Consider an LP for which we can directly apply Phase II of the simplex method (i.e. the initial tableau has an identity matrix somewhere). Consider now any intermediate tableau obtained by applying the simplex method and let B be its associated basis.
1. Show how to find AB⁻¹ from this intermediate tableau without doing any computation. Prove your claim and check its correctness on the final tableau in the solution of Problem 2.2.1.
2. Show also how to find yᵀ = cᵀB AB⁻¹. Again check your claim on the same final tableau.
Problem 3.2. Consider the following two tableaus:
Tableau 1
x1 x2 x3 x4 x5 x6 x7
−z 2 f 3 4 0
1 a 1 1 1 5
0 1 1 1 1 2
1 1 0 1 1 4
Tableau 2
x1 x2 x3 x4 x5 x6 x7
−z e -1 -2 -1 -13
b 1 1 0 -1 1
1 c 1 -1 0 3
d 1 -1 1 g 1
where a, b, c, d, e, f, g are unknowns.
1. What is the basis B of tableau 2? What is AB? What is AB⁻¹? What is the value of g?
2. What is the relation between a, e and f (expressed as a single equation)?
3. If f = 3, for what values of a will tableau 2 be optimal?
4. If a = 7, what are the values of b, c and d?
Problem 3.3. Consider the following initial tableau:
x1 x2 x3 x4 x5 x6 x7
−z a 2 10 1 0
2 1 1 3 1 10
2 1 2 2 1 15
1 -1 1 -4 1 1
4 Duality
Duality is the most important and useful structural property of linear programs. We start by illustrating the notion on an example.

4.1 Duality for Linear Programs in canonical form

Consider the linear program:

Max z = 5x1 + 4x2
subject to:
x1 ≤ 4           (4.1)
x1 + 2x2 ≤ 10    (4.2)
3x1 + 2x2 ≤ 16   (4.3)
x1 ≥ 0, x2 ≥ 0.
We shall refer to this linear program as the primal. By exhibiting any feasible solution, say
x1 = 4 and x2 = 2, one derives a lower bound (since we are maximizing) on the optimum
value z ∗ of the linear program; in this case, we have z ∗ ≥ 28. How could we derive upper
bounds on z ∗ ? Multiplying inequality (4.3) by 2, we derive that 6x1 + 4x2 ≤ 32 for any
feasible (x1 , x2 ). Since x1 ≥ 0, this in turn implies that z = 5x1 + 4x2 ≤ 6x1 + 4x2 ≤ 32
for any feasible solution and, thus, z ∗ ≤ 32. One can even combine several inequalities to
get upper bounds. Adding up all three inequalities, we get 5x1 + 4x2 ≤ 30, implying that
z ∗ ≤ 30. In general, one would multiply inequality (4.1) by some nonnegative scalar y1 ,
inequality (4.2) by some nonnegative y2 and inequality (4.3) by some nonnegative y3 , and
add them together, deriving that any feasible (x1, x2) satisfies

(y1 + y2 + 3y3) x1 + (2y2 + 2y3) x2 ≤ 4y1 + 10y2 + 16y3.

To derive an upper bound on z∗, one would then impose that the coefficients of the xi's in this implied inequality dominate the corresponding cost coefficients: y1 + y2 + 3y3 ≥ 5
and 2y2 + 2y3 ≥ 4. To derive the best upper bound (i.e. smallest) this way, one is thus led
to solve the following so-called dual linear program:

Min w = 4y1 + 10y2 + 16y3
subject to:
y1 + y2 + 3y3 ≥ 5
2y2 + 2y3 ≥ 4
y1, y2, y3 ≥ 0.
Observe how the dual linear program is constructed from the primal: one is a maximization
problem, the other a minimization; the cost coefficients of one are the RHS of the other
and vice versa; the constraint matrix is just transposed (see below for more precise and
formal rules). The optimum solution to this linear program is y1 = 0, y2 = 0.5 and
y3 = 1.5, giving an upper bound of 29 on z ∗ . What we shall show in this chapter is that
this upper bound is in fact equal to the optimum value of the primal. Here, x1 = 3 and
x2 = 3.5 is a feasible solution to the primal of value 29 as well. Because of our upper
bound of 29, this solution must be optimal, and thus duality is a way to prove optimality.
With any primal linear program in canonical form,

Max z = cᵀx
subject to:
(P ) Ax ≤ b
x ≥ 0,

we associate the following dual linear program:
Min w = bT y
subject to:
(D) AT y ≥ c
y ≥ 0.
(P ) is called the primal linear program. Notice there is a dual variable associated with
each primal constraint, and a dual constraint associated with each primal variable. In
fact, the primal and dual are indistinguishable in the following sense:

Theorem 4.1 The dual of the dual is the primal.

Proof:
To construct the dual of the dual, we first need to put (D) in canonical form:
Max w′ = −w = −bT y
subject to:
(D ′ ) −AT y ≤ −c
y ≥ 0.

Taking the dual of (D′), we obtain:

Min z′ = −cᵀx
subject to:
(DD′) −Ax ≥ −b
x ≥ 0,

which is precisely (P). QED.
Theorem 4.2 (Weak Duality) Let x be any feasible solution to (P) and y be any feasible solution to (D). Then z = cᵀx ≤ bᵀy = w.

Proof:
Using Aᵀy ≥ c together with x ≥ 0 for the first inequality, and Ax ≤ b together with y ≥ 0 for the second, we get

z = cᵀx ≤ (Aᵀy)ᵀx = yᵀAx ≤ yᵀb = bᵀy = w.
QED.
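Weak duality is easy to illustrate numerically; here is a small sketch (assuming NumPy) using the example that opened this chapter:

```python
# Sketch: checking weak duality on the chapter's opening example.
import numpy as np

A = np.array([[1, 0], [1, 2], [3, 2]])
b = np.array([4, 10, 16])
c = np.array([5, 4])

x = np.array([4, 2.0])       # primal feasible:  A @ x <= b, x >= 0
y = np.array([0, 0.5, 1.5])  # dual feasible:    A.T @ y >= c, y >= 0
assert np.all(A @ x <= b) and np.all(A.T @ y >= c)
print(c @ x, b @ y)          # 28.0 <= 29.0, as Theorem 4.2 guarantees
```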
Any dual feasible solution (i.e. feasible in (D)) gives an upper bound on the optimal value z∗ of the primal (P) and vice versa (i.e. any primal feasible solution gives a lower bound on the optimal value w∗ of the dual (D)). In order to take care of infeasible linear programs, we adopt the convention that the maximum value of any function over an empty set is defined to be −∞ while the minimum value of any function over an empty set is +∞. Therefore, we have the following corollary:

Corollary 4.3 z∗ ≤ w∗.
What is more surprising is the fact that this inequality is in most cases an equality.

Theorem 4.4 (Strong Duality) If z∗ is finite then w∗ is finite as well and z∗ = w∗.

Proof:
The proof uses the simplex method. In order to solve (P ) with the simplex method, we
reformulate it in standard form:
Max z = cT x
subject to:
(P ) Ax + Is = b
x ≥ 0, s ≥ 0.
Let Ã = (A I), x̃ = (x; s) and c̃ = (c; 0). Let B be the optimal basis obtained by the simplex method. The optimality conditions imply that

Ãᵀy ≥ c̃,

where yᵀ = (c̃B)ᵀ ÃB⁻¹. Replacing Ã by (A I) and c̃ by (c; 0), we obtain:

Aᵀy ≥ c
and
y ≥ 0.

This implies that y is a dual feasible solution. Moreover, the value of y is precisely w = yᵀb = (c̃B)ᵀ ÃB⁻¹ b = (c̃B)ᵀ x̃B = z∗. Therefore, by weak duality, we have z∗ = w∗. QED.
Since the dual of the dual is the primal, we have that if either the primal or the dual
is feasible and bounded then so are both of them and their values are equal. From weak
duality, we know that if (P ) is unbounded (i.e. z ∗ = +∞) then (D) is infeasible (w∗ =
+∞). Similarly, if (D) is unbounded (i.e. w∗ = −∞) then (P ) is infeasible (z ∗ = −∞).
However, the converses of these statements are not true: there exist dual pairs of linear
programs for which both the primal and the dual are infeasible. Here is a summary of the
possible alternatives:
                                          Primal
                             z∗ finite   unbounded (z∗ = +∞)   infeasible (z∗ = −∞)
Dual  w∗ finite              z∗ = w∗     impossible             impossible
      unbounded (w∗ = −∞)    impossible  impossible             possible
      infeasible (w∗ = +∞)   impossible  possible               possible
Consider now a primal linear program with all three types of constraints:

Max z = cᵀx
subject to:
(P)   Σj aij xj ≤ bi,  i ∈ I1
      Σj aij xj ≥ bi,  i ∈ I2
      Σj aij xj = bi,  i ∈ I3
      xj ≥ 0,  j = 1, . . . , n.

It can be rewritten in canonical form as:

Max z = cᵀx
subject to:
(P′)   Σj aij xj ≤ bi,    i ∈ I1
      −Σj aij xj ≤ −bi,   i ∈ I2
       Σj aij xj ≤ bi,    i ∈ I3
      −Σj aij xj ≤ −bi,   i ∈ I3
       xj ≥ 0,  j = 1, . . . , n.
Assigning the vectors y¹, y², y³ and y⁴ of dual variables to the first, second, third and fourth groups of constraints of (P′) respectively, we obtain a dual in canonical form. Letting yi = y¹i for i ∈ I1, yi = −y²i for i ∈ I2 and yi = y³i − y⁴i for i ∈ I3, we obtain (verify it!) the following equivalent dual linear program:
Min w = Σ over i ∈ I of bi yi
subject to:
(D)  Σ over i ∈ I of aij yi ≥ cj,  j = 1, . . . , n
     yi ≥ 0,  i ∈ I1
     yi ≤ 0,  i ∈ I2
     yi ≷ 0,  i ∈ I3,

where I = I1 ∪ I2 ∪ I3.
We could have avoided all these steps by just noticing that, if the primal program
is a maximization program, then inequalities with a ≤ sign in the primal correspond
to nonnegative dual variables, inequalities with a ≥ sign correspond to nonpositive dual
variables, and equalities correspond to unrestricted in sign dual variables.
By performing similar transformations for the restrictions on the primal variables, we
obtain the following set of rules for constructing the dual linear program of any linear
program:
Primal                  ←→  Dual
Max                     ←→  Min
Σj aij xj ≤ bi          ←→  yi ≥ 0
Σj aij xj ≥ bi          ←→  yi ≤ 0
Σj aij xj = bi          ←→  yi ≷ 0
xj ≥ 0                  ←→  Σi aij yi ≥ cj
xj ≤ 0                  ←→  Σi aij yi ≤ cj
xj ≷ 0                  ←→  Σi aij yi = cj
If the primal linear program is in fact a minimization program then we simply use the
above rules from right to left. This follows from the fact that the dual of the dual is the
primal.
Consider again the pair of dual linear programs in canonical form:

Max z = cᵀx
subject to:
(P ) Ax ≤ b
x≥0
and
Min w = bT y
subject to:
(D) AT y ≥ c
y ≥ 0.
Theorem 4.5 (Complementary Slackness) Let x be feasible in (P) and y be feasible in (D). Then x and y are both optimal if and only if yᵀ(b − Ax) = 0 and (Aᵀy − c)ᵀx = 0.

Proof:
By strong duality we know that x is optimal in (P) and y is optimal in (D) iff cᵀx = bᵀy. Moreover (cf. Theorem 4.2), we always have that:

cᵀx ≤ yᵀAx ≤ yᵀb = bᵀy.

Hence x and y are both optimal iff both inequalities hold with equality, i.e. iff (yᵀA − cᵀ)x = 0 and yᵀ(b − Ax) = 0. Since all the vectors involved are nonnegative, each product vanishes iff it vanishes term by term, which gives the conditions of the following corollary. QED.
Corollary 4.6 Let x be feasible in (P). Then x is optimal iff there exists y such that

(Aᵀy)j ≥ cj if xj = 0,   (Aᵀy)j = cj if xj > 0,
yi ≥ 0 if (Ax)i = bi,    yi = 0 if (Ax)i < bi.
As a result, the optimality of a given primal feasible solution can be tested by checking
the feasibility of a system of linear inequalities and equalities.
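This test is immediate to implement; here is a sketch (assuming NumPy) following Corollary 4.6:

```python
# Sketch: testing optimality of x for (P) via complementary slackness.
import numpy as np

def is_optimal(A, b, c, x, y, tol=1e-9):
    primal_ok = np.all(A @ x <= b + tol) and np.all(x >= -tol)
    dual_ok = np.all(A.T @ y >= c - tol) and np.all(y >= -tol)
    cs1 = np.all(np.abs(x * (A.T @ y - c)) <= tol)  # x_j > 0 => (A^T y)_j = c_j
    cs2 = np.all(np.abs(y * (b - A @ x)) <= tol)    # y_i > 0 => (Ax)_i = b_i
    return primal_ok and dual_ok and cs1 and cs2
```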
As should be by now familiar, we can write similar conditions for linear programs in other forms. For example, consider x feasible in

Max z = cᵀx
subject to:
(P) Ax = b
x ≥ 0

and y feasible in

Min w = bᵀy
subject to:
(D) Aᵀy ≥ c.

Then x and y are both optimal iff xj = 0 whenever (Aᵀy)j > cj (no condition on y is needed since the primal constraints are equalities and y is unrestricted in sign).
Duality also yields theorems of the alternative, such as the following separating hyperplane theorem (a form of Farkas' lemma):

Theorem 4.7 The system Ax = b, x ≥ 0 has no solution if and only if there exists y such that Aᵀy ≥ 0 and bᵀy < 0.

The geometric interpretation behind the separating hyperplane theorem is as follows: let a1, . . . , an ∈ Rᵐ be the columns of A. Then b does not belong to the cone K = {Σ from i=1 to n of ai xi : xi ≥ 0 for all i} iff there is a hyperplane through the origin separating b from K, i.e. a vector y with yᵀai ≥ 0 for all i and yᵀb < 0.

Proof:
Consider the pair of dual linear programs:
Max z = 0T x
subject to:
(P ) Ax = b
x≥0
and
Min w = bT y
subject to:
(D) AT y ≥ 0.
Notice that (D) is certainly feasible since y = 0 is a feasible solution. As a result, duality
implies that (P ) is infeasible iff (D) is unbounded. However, since λy is dual feasible for
any λ ≥ 0 and any dual feasible solution y, the unboundedness of (D) is equivalent to the
existence of y such that AT y ≥ 0, y ≥ 0 and bT y < 0. QED.
Other forms of the separating hyperplane theorem can be derived similarly for linear systems in other forms.
Problems
Problem 4.1. Write the dual to:
1. if c=5?
2. if c=8?
Justify.
Problem 4.3. Consider the linear program
where b > 0.
1. Write its dual.
2. Develop a simple test for checking the feasibility of this problem.
3. Develop a simple test for checking unboundedness.
4. Develop a simple method for obtaining a primal optimum solution and a dual
optimum solution directly.
5. In terms of the optimum dual solution, how much does the optimum value of the
primal (or the dual) change when b is replaced by b + ǫ?
Problem 4.7. Suppose that you are given a “black box” procedure that, when given a
system of linear inequalities, either produces a feasible solution or declares that there
is no feasible solution. Show how a single call to this black box can be used to obtain
an optimal solution to the linear program
Min cT x
subject to:
Ax = b
x ≥ 0.
Max z = cT x
subject to:
Ax = b
x ≥ 0,
Hint: give a system of linear inequalities (≤) which has a solution iff the system A1 x <
b1 , A2 x ≤ b2 and x ≥ 0 has a solution.
Problem 4.11. Given a pair of feasible dual linear programs min{cT x : Ax ≥ b, x ≥ 0}
and max{bT y : AT y ≤ c, y ≥ 0}, prove that there exists an optimal solution x to the
primal and an optimal solution y to the dual such that xj > 0 whenever (AT y)j = cj and
yi > 0 whenever (Ax)i = bi . (This is sometimes referred to as strong complementary
slackness or Tucker’s complementary slackness.)
Hint: use Problem 4.4.10.
7 Zero-Sum Matrix Games

In a matrix game, there are two players, say player I and player II. Player I has m different
pure strategies to choose from while player II has n different pure strategies. If player I
selects strategy i and player II selects strategy j then this results in player I gaining aij
units and player II losing aij units. So, if aij is positive, player II pays aij units to player
I while if aij is negative then player I pays −aij units to player II. Since the amounts
gained by one player equal the amounts paid by the other, this game is called a zero-sum
game. The matrix A = [aij ] is known to both players and is called the payoff matrix. In
a sequence of games, player I (resp. player II) may decide to randomize his choice of pure
strategies by selecting strategy i (resp. j) with some probability yi (resp. xj ). The vector
y (resp. x) satisfies

yi ≥ 0 for all i and Σ from i=1 to m of yi = 1   (resp. xj ≥ 0 for all j and Σ from j=1 to n of xj = 1);

such randomized choices are called mixed strategies. If player I adopts the mixed strategy y, his expected gain when player II selects pure strategy j is (Aᵀy)j = Σi aij yi, so the gain he can guarantee is g = minj (Aᵀy)j. Similarly, if player II adopts the mixed strategy x then his expected loss li if player I selects strategy i is given by

li = Σj aij xj = (Ax)i = eᵢᵀ Ax,

and the loss he can guarantee (guaranteed meaning that he will lose at most l) is

l = maxi li = maxi (Ax)i.
If player I uses the mixed strategy y and player II uses the mixed strategy x then the expected gain of player I is h = Σ over i,j of yi aij xj = yᵀAx.
Theorem 7.1 If y and x are mixed strategies respectively for players I and II then g ≤ l.
Proof:
We have that

h = yᵀAx = Σi yi (Ax)i ≤ l Σi yi = l

and

h = yᵀAx = Σj (yᵀA)j xj ≥ g Σj xj = g,

so that g ≤ h ≤ l. QED.
Theorem 7.2 (The Minimax Theorem) There exist mixed strategies x∗ and y ∗ such
that g∗ = l∗ .
Proof:
In order to prove this result, we formulate the objectives of both players as linear programs.
Player II’s objective is to minimize l. This can be expressed by:
Min l
subject to:
(P ) Ax ≤ le
eT x = 1
x ≥ 0, l ≷ 0
where e is a vector of all 1’s. Indeed, for any optimal solution x∗ , l∗ to (P ), we know that
l∗ = maxi (Ax∗ )i since otherwise l∗ could be decreased without violating feasibility.
Similarly, player I’s objective can be expressed by:
Max g
subject to:
(D) AT y ≥ ge
eT y = 1
y ≥ 0, g ≷ 0
Again, any optimal solution to the above program will satisfy g∗ = minj (AT y ∗ )j .
The result follows by noticing that (P ) and (D) constitute a pair of dual linear pro-
grams (verify it!) and, therefore, by strong duality we know that g∗ = l∗ . QED.
The above theorem can be rewritten as follows (this explains why it is called the minimax theorem):

max{min{yᵀAx : eᵀx = 1, x ≥ 0} : eᵀy = 1, y ≥ 0} = min{max{yᵀAx : eᵀy = 1, y ≥ 0} : eᵀx = 1, x ≥ 0}.

Indeed, for any fixed y,

min{yᵀAx : eᵀx = 1, x ≥ 0} = minj (Aᵀy)j = g,

and, for any fixed x,

max{yᵀAx : eᵀy = 1, y ≥ 0} = maxi (Ax)i = l.
Example

Consider the game with payoff matrix

A = ( 1  −3 )
    (−2   4 ).

Solving the linear program (P), we obtain the following optimal mixed strategies for both players (do it by yourself!):

x∗ = (7/10, 3/10)ᵀ  and  y∗ = (6/10, 4/10)ᵀ,

for which g∗ = l∗ = −2/10.
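As a check, player II's linear program (P) for this example can be solved with SciPy's linprog; the variable l is free, so its bounds must be stated explicitly (linprog defaults to nonnegative variables):

```python
# Sketch: player II's optimal mixed strategy via linprog.
from scipy.optimize import linprog
import numpy as np

A = np.array([[1, -3], [-2, 4]])
# Minimize l  subject to  Ax <= l e,  x1 + x2 = 1,  x >= 0, l free.
c = [0, 0, 1]                            # variables (x1, x2, l)
A_ub = np.hstack([A, -np.ones((2, 1))])  # Ax - l e <= 0
b_ub = [0, 0]
A_eq = [[1, 1, 0]]
b_eq = [1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None), (None, None)])
print(res.x[:2], res.fun)                # about (0.7, 0.3) and -0.2
```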
A matrix game is said to be symmetric if A = −AT . Any symmetric game is fair, i.e.
g∗ = l∗ = 0.
Problems
Problem 7.1.
Consider the matrix game based on the following payoff matrix:

A = ( 0  −2   1 )
    ( 2   0   3 )
    (−1  −3   0 ).

Notice that A is antisymmetric, i.e. A = −Aᵀ.
1. Write the linear programs associated with both players. Show that these linear
programs are equivalent in the sense that if (x, l) is feasible for player II’s linear
program then (y, g) = (x, −l) is feasible for player I’s linear program and vice
versa. Prove that g∗ = l∗ = 0.
2. Using part 1 and using complementary slackness, find the optimal strategies for
both players.