Lecture Notes 20150101
Chapter 1
Linear Programming

1.1 Introduction
Mathematical models in business and economics often involve optimization, i.e. finding
the optimal (best) value that a function can take. This objective function might represent
profit (to be maximized) or expenditure (to be minimized). There may also be constraints
such as limits on the amount of money, land and labour that are available.
Operations Research (OR), otherwise known as Operational Research or Management Science, can be traced back to the early twentieth century. The most notable developments in
this field took place during World War II when scientists were recruited by the military to
help allocate scarce resources. After the war there was a great deal of interest in applying
OR methods in business and industry, and the subject expanded greatly during this period.
Many of the techniques that we shall study in the first part of the course were developed at
that time. The Simplex algorithm for Linear Programming, which is the principal technique
for solving many OR problems, was formulated by George Dantzig in 1947. This method
has been applied to problems in a wide variety of disciplines including finance, production
planning, timetabling and aircraft scheduling. Nowadays, computers can solve large scale
OR problems of enormous complexity.
Later in the course we consider optimization of non-linear functions. The methods here
are based on calculus. Non-linear problems with equality constraints will be solved using
the method of Lagrange multipliers, named after Joseph Louis Lagrange (1736 - 1812). A
similar method for inequality constraints was developed by Karush in 1939 and by Kuhn
and Tucker in 1951 but is beyond the scope of this course.
The following books may be useful, but they contain a lot more material than will be needed
for this module:
Introduction to Operations Research, by F Hillier and G Lieberman (McGraw-Hill)
Operations Research: An Introduction, by H A Taha (Prentice Hall)
1.2

The constraints of the problem are
2x1 + 8x2 ≤ 60,
4x1 + 4x2 ≤ 60,
x1 ≥ 0,
x2 ≥ 0.
The following diagram shows the feasible region for the problem.
There are infinitely many feasible solutions. An optimal one can be found by considering
the slope, and direction of increase, of the objective function z = 30x1 + 45x2 .
Different values of the objective function correspond to straight lines with equations of the form 30x1 + 45x2 = c for varying values of c. These lines are all parallel, with gradient -2/3.
Draw such a line, e.g. 2x1 + 3x2 = 6, which crosses the axes at (3, 0) and (0, 2). To find
the maximum profit we translate this profit line in the direction of increasing z, keeping
its slope the same, until moving it any further would take it completely outside the feasible
region.
We see that the optimum occurs at a corner point (or extreme point) of the feasible
region, at the intersection of the lines that correspond to the two machine constraints.
Hence the optimal values of x1 and x2 are the solutions of the simultaneous equations
2x1 + 8x2 = 60 and 4x1 + 4x2 = 60,
i.e. (x1 , x2 ) = (10, 5). The optimal value of the objective function is then z = 525.
Thus the optimal production plan is to produce 10 type A containers and 5 type B containers per hour. This plan yields the maximum profit, which is £525 per hour.
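As a quick numerical check, the pair of simultaneous equations above can be solved by Cramer's rule. This is an illustrative sketch (the helper name solve_2x2 is mine, not from the notes):

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f by Cramer's rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("no unique solution")
    return Fraction(e * d - b * f, det), Fraction(a * f - e * c, det)

# The two machine constraints, treated as equations
x1, x2 = solve_2x2(2, 8, 4, 4, 60, 60)
z = 30 * x1 + 45 * x2
print(x1, x2, z)  # 10 5 525
```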
We shall show later that if a LP problem has an optimal solution then there is an extreme
point of the feasible region where this solution occurs.
There can sometimes be optimal solutions which do not occur at extreme points. If two
vertices P and Q of the feasible region give equal optimal values for the objective function
then the same optimal value occurs at all points on the line segment P Q.
Suppose the profit on container A is changed to p. If 45/4 < p < 45 then the optimal solution still occurs at (10, 5). If p = 45/4 or p = 45 there are multiple optimal points, all yielding the same profit. If p > 45 then (15, 0) is optimal. If p < 45/4 then (0, 7.5) is optimal, but 7.5 containers cannot be made. The best integer solution can be found by inspection.
and the total available finance is £8000. The labour available is at most 5000 person-hours
per year. How many bushes of each type should be planted to maximize the profit?
Step 1

                         Red   White
Area                      5      4
Finance (£)               8      2
Labour (person-hours)     1      5
Profit (£)                2      3

Step 2

Let x1 be the number of red rose bushes and x2 the number of white rose bushes planted.
We must have 5x1 + 4x2 ≤ 6100, 8x1 + 2x2 ≤ 8000, x1 + 5x2 ≤ 5000, x1 ≥ 0, x2 ≥ 0.
Step 5
Step 6 Method 1 Sketch a line of the form z = constant and translate this line, in the
direction in which z increases, until it no longer intersects the feasible region.
The dotted lines in the diagram are z = 2000 and z = 5000. The points on z = 2000 are
feasible but not optimal, whereas those on z = 5000 are all infeasible. The optimal point
lies between these lines at (500, 900), where z = 3700.
Method 2 Evaluate z at each of the extreme points and see where it is greatest.
The extreme points of the feasible region (going clockwise) are (x1 , x2 ) = (0, 0), (0, 1000),
(500, 900), (900, 400), and (1000, 0). These can be calculated by solving pairs of simultaneous equations. If you read them off from the graph, it is advisable to check that they satisfy
the equations of the lines on which they lie.
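This check can be automated; a minimal sketch (the variable names are illustrative) that verifies each vertex against the three resource constraints and picks out the best one:

```python
# Extreme points of the feasible region, read from the sketch
vertices = [(0, 0), (0, 1000), (500, 900), (900, 400), (1000, 0)]

def z(x1, x2):
    return 2 * x1 + 3 * x2

# Each vertex should satisfy the area, finance and labour constraints
for x1, x2 in vertices:
    assert 5 * x1 + 4 * x2 <= 6100
    assert 8 * x1 + 2 * x2 <= 8000
    assert x1 + 5 * x2 <= 5000

best = max(vertices, key=lambda v: z(*v))
print(best, z(*best))  # (500, 900) 3700
```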
The optimum occurs at (x1 , x2 ) = (500, 900), and the objective function is then
z = 2 × 500 + 3 × 900 = 3700.
Final answer: Growing 500 red and 900 white rose bushes is optimal, with profit £3700.
Note that the original problem was stated in words: it did not mention the variables x1
and x2 or the objective function z. Your final answer should not use these symbols, as they
have been introduced to formulate the mathematical model.
Units of protein
Units of fat
Units of calcium
Units of phosphorus
Cost per unit (£)
Find the optimal feeding plan and identify any redundant constraint(s).
Decision variables
Suppose the farmer uses x1 units of hay and x2 units of oats.
Objective function
The aim is to make the cost of feeding one cow as small as possible, so we must
Minimize
z = 0.66x1 + 2.08x2 .
Constraints

 + 34.0x2 ≥ 65.0   (protein)
 + 5.9x2  ≥ 14.0   (fat)
 + 0.09x2 ≥ 0.12   (calcium)
 + 0.09x2 ≥ 0.15   (phosphorus)
 x1 ≥ 0            (non-negativity)
 x2 ≥ 0            (non-negativity)

Evaluating z = 0.66x1 + 2.08x2 at the extreme points of the feasible region:
A  (x1, x2) = (6, 0):        z = 3.96
C  (x1, x2) = (1.35, 1.39):  z = 3.78
and the remaining extreme points give z = 3.46 and z = 4.96, so the minimum cost is z = 3.46.
1.3 Definitions

The feasible region for a LP problem is a subset of the Euclidean vector space Rn, whose elements we call points. The norm of x = (x1, ..., xn) is |x| = √(x1² + ··· + xn²).
x1 + x2 ≤ 6,
-x2 + 2x3 ≤ 4,
0 ≤ x1 ≤ 4,
0 ≤ x2 ≤ 4,
x3 ≥ 0.
The feasible set is a convex polyhedron or simplex with seven plane faces, as shown. As it is closed and bounded, and any linear function is continuous, Weierstrass's theorem tells us that z attains a greatest and least value over this region. Clearly the minimum occurs at (0, 0, 0). It turns out that z is maximum at (4, 2, 3).
Considering other objective functions with the same feasible region, x1 + x2 + x3 is maximum at (2, 4, 4), while x1 - 3x2 - x3 is maximum at (4, 0, 0).
Proposition 1.2 Let S be the intersection of a finite number of half-planes in Rn. Let v1, ..., vm be the extreme points of S.
Then x ∈ S if and only if x = r1v1 + ··· + rmvm + u, where r1 + ··· + rm = 1, each ri ≥ 0, and u = 0 if S is bounded, while u is a direction of unboundedness otherwise.
The proof of the 'if' part of the above result is in the Exercises. The 'only if' part is more difficult to prove, but by assuming it we can show the following:
Proposition 1.3 (The Extreme Point Theorem) Let S be the feasible region for a linear programming problem.
1. If S is non-empty and bounded then an optimal solution of the problem exists, and
there is an extreme point of S at which it occurs.
2. If S is non-empty and not bounded then if an optimal solution to the problem exists,
there is an extreme point of S at which it occurs.
Proof
Let the objective function be z = c1x1 + ··· + cnxn = cᵗx.
S is closed so in case 1 it is compact, hence by Proposition 1.1 z attains its maximum and
minimum values at some points in S. In case 2 we are assuming that the required optimum
is attained.
Assume z is to be maximized, and takes its maximum value over S at x* ∈ S. (If z is to be minimized, apply the following reasoning to -z.)
Let v1, ..., vm be the extreme points of S.
By Proposition 1.2, x* = r1v1 + ··· + rmvm + u where r1 + ··· + rm = 1, each ri ≥ 0, and u is 0 in case 1 or a direction of unboundedness in case 2.
Suppose, for a contradiction, that there is no extreme point of S at which z is maximum, so cᵗvi < cᵗx* for i = 1, ..., m. Then

cᵗx* = cᵗ(r1v1 + ··· + rmvm + u)
     = r1 cᵗv1 + ··· + rm cᵗvm + cᵗu
     < r1 cᵗx* + ··· + rm cᵗx* + cᵗu
     = (r1 + ··· + rm) cᵗx* + cᵗu    (since r1 + ··· + rm = 1)
     = cᵗx* + cᵗu
     = cᵗ(x* + u).

In case 1, u = 0, so this says cᵗx* < cᵗx*, a contradiction. In case 2, x* + u lies in S and gives a strictly larger value of z than x*, again a contradiction. Hence z attains its maximum at an extreme point of S.
Exercises 1
1. Solve the following Linear Programming problems graphically.
(a) Maximize z = 2x1 + 3x2 subject to
2x1 + 5x2 ≤ 10, x1 - 4x2 ≤ 1, x1 ≥ 0, x2 ≥ 0.
(b) Minimize z = 4x2 - 5x1 subject to
x1 + x2 ≤ 10, 2x1 + 3x2 ≥ 6, 6x1 - 4x2 ≤ 13.
2. The objective function z = px + qy, where p > 0, q > 0, is to be maximized subject to the constraints 3x + 2y ≤ 6, x ≥ 0, y ≥ 0.
Find the maximum value of z in terms of p and q. (There are different cases, depending
on the relative sizes of p and q.)
3. A company makes two products, A and B, using two components X and Y .
To produce 1 unit of A requires 5 units of X and 2 units of Y .
To produce 1 unit of B requires 6 units of X and 3 units of Y .
At most 85 units of X and 40 units of Y are available per day.
The company makes a profit of £12 on each unit of A and £15 on each unit of B.
Assuming that all the units produced can be sold, find the number of units of each
product that should be made per day to optimize the profit.
If the profit on B is fixed, how low or high would the profit on A have to become
before the optimal production schedule changed?
4. A brick company manufactures three types of brick in each of its two kilns. Kiln A
can produce 2000 standard, 1000 oversize and 500 glazed bricks per day, whereas kiln
B can produce 1000 standard, 1500 oversize and 2500 glazed bricks per day. The daily
operating cost for kiln A is £400 and for kiln B is £320.
The brickyard receives an order for 10000 standard, 9000 oversize and 7500 glazed
bricks. Determine the production schedule (i.e. the number of days for which each
kiln should be operated) which will meet the demand at minimum cost (assuming both
kilns can be operated immediately) in each of the following separate cases. (Note: the
kilns may be used for fractions of a day.)
(a) there is no time limit,
(b) kiln A must be used for at least 5 days,
(c) there are at most 2 days available on kiln A and 9 days on kiln B.
5. A factory can assemble mobile phones and laptop computers. The maximum amount of the workforce's time that can be spent on this work is 10 hours per day. Before the phones and laptops can be assembled, the component parts must be purchased. The maximum value of the stock that can be held for a day's assembly work is £2200.
In the factory, a mobile phone takes 10 minutes to assemble using £10 worth of components, whereas a laptop takes 1 hour 40 minutes to assemble using £500 worth of components.
The profit made on a mobile phone is £5 and the profit on a laptop is £100.
(a) Summarise the above information in a table.
(b) Assuming that the factory can sell all the phones and laptops that it assembles,
formulate the above information into a Linear Programming problem.
(c) Determine the number of mobile phones and the number of laptops that should
be made in a day to maximise profit.
(d) The market for mobile phones is becoming saturated. In response, retailers are dropping prices, which means reduced profits. How low can the profit on a mobile phone go before the factory should switch to assembling only laptops?
6. In the Containers problem (Example 1), suppose we write the machine constraints as
2x1 + 8x2 + x3 = 60 and 4x1 + 4x2 + x4 = 60, where x3 and x4 are the number of minutes in an hour for which M1 and M2 are not used, so x3 ≥ 0 and x4 ≥ 0.
Show that the objective function can be written as
z = 30(10 + (1/6)x3 - (1/3)x4) + 45(5 - (1/6)x3 + (1/12)x4).
By simplifying this, deduce that the maximum value of z occurs when x3 = x4 = 0 and state this maximum value.
7. By sketching graphs and using the fact that the straight line joining any two points of a convex set lies in the set, decide which of the following subsets of R2 are convex.
(a) {(x, y) : xy ≥ 1, x ≥ 0, y ≥ 0},
(c) {(x, y) : y ≥ x² - 1}.
8. Let a = (a1, ..., an) be a fixed element of Rn and let b be a real constant. Let x1 and x2 lie in the half-space aᵗx ≤ b, so that aᵗx1 ≤ b and aᵗx2 ≤ b. Show that for all r ∈ [0, 1], aᵗ((1 - r)x1 + rx2) ≤ b. Deduce that the half-space is a convex set.
9. Let S and T be convex sets. Show that their intersection S ∩ T is a convex set.
(Hint: let x, y ∈ S ∩ T, so x, y ∈ S and x, y ∈ T. Why must (1 - r)x + ry be in S ∩ T for 0 ≤ r ≤ 1?)
Generalise this to show that if S1, ..., Sn are convex sets then S = S1 ∩ ··· ∩ Sn is convex. Deduce that the feasible region for a linear programming problem is convex.
10. Let S be the feasible region for a Linear Programming problem. If an optimal solution
to the problem does not exist, what can be deduced about S from Proposition 1.3?
11. Prove that every extreme point of a convex set S is a boundary point of S. (Method: suppose x is in S and is not a boundary point. Then some neighbourhood of x must be contained in S (why?). Deduce that x is a convex linear combination of two points in this neighbourhood and so x is not an extreme point.)
12. Prove the 'if' part of Proposition 1.2 as follows. Let the half-planes defining S be a1ᵗx ≤ b1, ..., akᵗx ≤ bk. As vi ∈ S, all these inequalities hold when x = vi for i = 1, ..., m.
Show that x = r1v1 + ··· + rmvm also satisfies all the inequalities, where r1 + ··· + rm = 1 and each ri ≥ 0.
Deduce further that if S is unbounded and u is a direction of unboundedness then r1v1 + ··· + rmvm + u ∈ S.
Chapter 2
If a linear programming problem has more than two decision variables then a graphical solution is not possible. We therefore develop an algebraic, rather than geometric, approach.
Definitions
x is a non-negative vector, written x ≥ 0, if every entry of x is positive or zero.
The set of non-negative vectors in Rn is denoted by Rn+.
u ≥ v means that every entry of the vector u is greater than or equal to the corresponding entry of v. Thus u ≥ v, or v ≤ u, is equivalent to u - v ≥ 0.
Let x and y be non-negative vectors. Then it is easy to show that:
(i) x + y = 0 ⟹ x = y = 0,   (ii) xᵗy ≥ 0,   (iii) u ≥ v ⟹ xᵗu ≥ xᵗv.
Recall that partitioned matrices multiply blockwise:

[A11 A12] [B11 B12]   [A11B11 + A12B21   A11B12 + A12B22]
[A21 A22] [B21 B22] = [A21B11 + A22B21   A21B12 + A22B22].
Now consider a typical Linear Programming problem. We seek values of the decision variables x1, x2, ..., xn which optimize the objective function

z = c1x1 + c2x2 + ··· + cnxn

subject to the constraints

a11x1 + a12x2 + ··· + a1nxn ≤ b1
a21x1 + a22x2 + ··· + a2nxn ≤ b2
...
am1x1 + am2x2 + ··· + amnxn ≤ bm

and x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0.

In matrix notation this problem is

(LP1)   Maximize z = cᵗx subject to Ax ≤ b and x ≥ 0,

where A is the m × n matrix with entries aij and

x = (x1, ..., xn)ᵗ,   b = (b1, ..., bm)ᵗ,   c = (c1, ..., cn)ᵗ.
The problem (LP1) is said to be feasible if the constraints are consistent, i.e. if there exists x ∈ Rn+ such that Ax ≤ b. Any such vector x is a feasible solution of (LP1). If a feasible point x maximizes z subject to the constraints, x is an optimal solution.
The problem is unbounded if there is no finite maximum over the feasible region, i.e. there is a sequence of vectors {xk} satisfying the constraints, such that cᵗxk → ∞ as k → ∞.
The standard form of a LP problem is defined as follows:
The objective function is to be maximized.
All constraints are equations with non-negative right-hand sides.
All the variables are non-negative.
Any LP problem can be converted into standard form by the following methods:
Minimizing f(x) is equivalent to maximizing -f(x). Thus the problem
Minimize c1x1 + c2x2 + ··· + cnxn
is equivalent to
Maximize -c1x1 - c2x2 - ··· - cnxn
subject to the same constraints, and the optimum occurs at the same values of x1, ..., xn.
Any equation with a negative right-hand side can be multiplied through by -1 so that the right-hand side becomes positive. Remember that if an inequality is multiplied through by a negative number, the inequality sign must be reversed.
A constraint of the form ≤ can be converted to an equation by adding a non-negative slack variable on the left-hand side of the constraint.
A constraint of the form ≥ can be converted to an equation by subtracting a non-negative surplus variable on the left-hand side of the constraint.
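These two conversions amount to appending one extra column to the constraint row; a minimal sketch (the helper to_equation is illustrative only, not part of the notes):

```python
def to_equation(coeffs, sense):
    """Append a slack column (+1) for '<=' or a surplus column (-1) for '>='."""
    if sense not in ('<=', '>='):
        raise ValueError("sense must be '<=' or '>='")
    return coeffs + [1 if sense == '<=' else -1]

print(to_equation([2, 3], '<='))  # [2, 3, 1]
print(to_equation([1, 1], '>='))  # [1, 1, -1]
```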
2.1.1 Example
Minimize
subject to
and
Rewrite the second constraint with a non-negative right-hand side: 2x1 + 3x2 - x3 ≥ 5.
Then add a slack variable x4 in the first constraint and subtract a surplus variable x5 in the second one. In standard form the problem is

Maximize z = 3x1 - 4x2 + 5x3

subject to the two constraints written as equations in x1, ..., x5, and xj ≥ 0 for j = 1, ..., 5.
The slack and surplus variables represent the differences between the left and right hand
sides of the inequalities. If a slack or surplus variable is zero at some point, the constraint
in which it occurs is said to be active or binding at that point.
A variable is unrestricted if it is allowed to take both positive and negative values. An unrestricted variable xj can be expressed in terms of two non-negative variables by substituting xj = xj′ - xj″, where both xj′ and xj″ are non-negative. The substitution must be used throughout, i.e. in all the constraints and in the objective function.
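The substitution works because every real number is the difference of two non-negative numbers; a quick check (the function split is illustrative):

```python
def split(x):
    """Write x as x' - x'' with both parts non-negative."""
    return (x, 0) if x >= 0 else (0, -x)

for value in (7, -3, 0):
    pos, neg = split(value)
    assert pos >= 0 and neg >= 0 and pos - neg == value

print(split(-3))  # (0, 3)
```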
2.1.2 Example

In general, after slack and surplus variables xn+1, ..., xn+m have been introduced, the problem takes the form

(LP2)   Maximize z = c̃ᵗx̃ subject to Ax̃ = b and x̃ ≥ 0,

where A is an m × (n + m) matrix,

x̃ = (x1, ..., xn, xn+1, ..., xn+m)ᵗ   and   c̃ = (c1, ..., cn, 0, ..., 0)ᵗ.
2.1.3 Example

Maximize z = (1 2 3 0 0) x̃ subject to

(2 4 3 1 0)        (10)
(3 6 5 0 1) x̃  =  (15),

where x̃ = (x1, x2, x3, x4, x5)ᵗ and x1, ..., x5 ≥ 0.
Setting any 3 variables to zero, if the resulting equations are consistent we can solve for the
other two; e.g. (1, 2, 0, 0, 0) and (0, 0, 0, 10, 15) are feasible solutions for (x1 , x2 , x3 , x4 , x5 ).
Note that setting x3 = x4 = x5 = 0 gives 2x1 + 4x2 = 10, 3x1 + 6x2 = 15 which are the
same equation and thus do not have a unique solution for x1 and x2 .
A unique solution of (LP2) obtained by setting n variables to zero is called a basic solution.
If it is also feasible, i.e. non-negative, it is called a basic feasible solution (bfs).
The n variables set to zero are called non-basic variables. The remaining m (some of
which may be zero) are basic variables. The set of basic variables is called a basis.
In the above example, (0, 0, 10/3, 0, -5/3) is a basic infeasible solution. (1, 2, 0, 0, 0) is a feasible solution but not a bfs. (0, 0, 3, 1, 0) is a basic feasible solution, with basis {x3, x4}.
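The basic solutions of this example can be enumerated mechanically by setting each choice of three variables to zero and solving the remaining 2 × 2 system. This is an illustrative sketch (function and variable names are my own) using exact fractions:

```python
from itertools import combinations
from fractions import Fraction

# Constraint data of the example: the two equations A x = b in five variables
A = [[2, 4, 3, 1, 0],
     [3, 6, 5, 0, 1]]
b = [10, 15]

def basic_solutions():
    """Keep 2 of the 5 variables, set the other 3 to zero, and solve the
    resulting 2 x 2 system by Cramer's rule whenever it has a unique solution."""
    for i, j in combinations(range(5), 2):
        det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
        if det == 0:
            continue            # e.g. keeping x1, x2: the equations are dependent
        x = [Fraction(0)] * 5
        x[i] = Fraction(b[0] * A[1][j] - A[0][j] * b[1], det)
        x[j] = Fraction(A[0][i] * b[1] - b[0] * A[1][i], det)
        yield (i, j), x

for kept, x in basic_solutions():
    tag = "feasible" if all(v >= 0 for v in x) else "infeasible"
    print(kept, [str(v) for v in x], tag)
```

Running it lists, among others, the basic feasible solution (0, 0, 3, 1, 0) and the basic infeasible solution (0, 0, 10/3, 0, -5/3) mentioned above.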
It can be proved that x̃ is a basic feasible solution of (LP2) if and only if it is an extreme point of the feasible region for (LP2) in Rn+m. The vector x consisting of the first n components of x̃ is then an extreme point of the original feasible region for (LP1) in Rn.
In any LP problem that has an optimal solution, we know by Proposition 1.3 that an
optimal solution exists at an extreme point. Hence we are looking for the basic feasible
solution which optimizes the objective function.
Suppose we have a problem with 25 original variables and 15 constraints, giving rise to 15 slack or surplus variables. In each basic feasible solution, 25 variables are set equal to zero. There are C(40, 25) possible sets of 25 variables which could be equated to zero, which is more than 4 × 10^10 combinations. Clearly an efficient method for choosing the sets of variables to set to zero is required! Suppose that

we are able to find an initial basic feasible solution, say x̃1,
we have a way of checking whether a given bfs is optimal, and
we have a way of moving from a non-optimal bfs x̃i to another, x̃i+1, that gives a better value of the objective function.
Combining these three steps will yield an algorithm for solving the LP problem.
If x̃1 is optimal, then we are done. If it is not, then we move from x̃1 to x̃2, which by definition is better than x̃1. If x̃2 is not optimal then we move to x̃3 and so on. Since the number of extreme points is finite and we always move towards a better one, we must ultimately find the optimal one. The Simplex method is based on this principle.
2.2

Maximize z = 12x1 + 15x2
subject to
5x1 + 6x2 + x3      = 85    (1)
2x1 + 3x2      + x4 = 40    (2)
and xj ≥ 0 for j = 1, 2, 3, 4.
The two equations in four unknowns have infinitely many solutions. Setting any two of
x1 , x2 , x3 , x4 to zero gives a unique solution for the other two, yielding a basic solution of
the problem.
If we take x1 = 0, x2 = 0, x3 = 85, x4 = 40 this is certainly a basic feasible solution; since it
gives z = 0 it is clearly not optimal. z can be made larger by increasing x1 or x2 .
Equation (2) gives x2 = (1/3)(40 - 2x1 - x4).
Now express z in terms of x1 and x4 only:
z = 12x1 + 15x2 = 12x1 + 5(40 - 2x1 - x4) = 200 + 2x1 - 5x4.
Taking x1 = x4 = 0 gives z = 200. This is a great improvement on 0, and corresponds to increasing x2 to 40/3 so that the second constraint holds as an equation. We have moved from the origin to another vertex of the feasible region. Now x3 = 5, so there are still 5 units of slack in the first constraint. All the xj are non-negative so we still have a bfs.
z can be improved further by increasing x1 .
Eliminate x2 between the constraint equations: (1) - 2 × (2) gives x1 + x3 - 2x4 = 5, i.e. x1 = 5 - x3 + 2x4.
We then have z = 200 + 2x1 - 5x4 = 200 + 2(5 - x3 + 2x4) - 5x4 = 210 - 2x3 - x4.
As all the variables are non-negative, increasing x3 or x4 above zero will make z smaller
than 210. Hence the maximum value of z is 210 and this occurs when x3 = x4 = 0, i.e. when
there is no slack in either constraint. Then x1 = 5, x2 = 10. We have moved round the
feasible region to the vertex where the two constraint lines intersect.
The above working can be set out in an abbreviated way. The objective function is written as z - 12x1 - 15x2 = 0. This and the constraints are three equations in five unknowns x1, x2, x3, x4 and z. This system of linear equations holds at every feasible point.
We obtain an equivalent system by forming linear combinations of these equations, so long
as the resulting set of equations remains linearly independent. This can be carried out most
easily by writing the equations in the form of an augmented matrix and carrying out row
operations, as in Gaussian elimination. This matrix is written in a way which helps us to
identify the basic variables at each stage, called a simplex tableau.
Eventually we should get an expression for the objective function in the form z = k - (a non-negative combination of the xj), such as z = 210 - 2x3 - x4 in the example above. Then increasing any of the xj would not increase z, so we have arrived at a maximum value of z.
A simplex tableau consists of a grid with headings for each of the variables and a row for
each equation, including the objective function. Under the heading Basic are the variables
which yield a basic feasible solution when they take the values in the Solution column and
the others are all zero. The = sign comes immediately before the Solution column.
You will find various forms of the tableau in different books. Some omit the z column
and/or the Basic column. The objective function is often placed at the top. Some writers
define the standard form to be a minimization rather than a maximization problem.
For the problem on the previous page, the initial simplex tableau is:

Basic    z    x1    x2    x3    x4    Solution
 x3      0     5     6     1     0       85
 x4      0     2     3     0     1       40
 z       1   -12   -15     0     0        0
Basic    z    x1    x2    x3    x4    Solution
 x3      0     1     0     1    -2        5        R1 := R1 - 2R2
 x2      0    2/3    1     0    1/3     40/3       R2 := R2/3
 z       1    -2     0     0     5       200       R3 := R3 + 5R2
This represents the bfs z = 200 when x1 = 0, x2 = 40/3, x3 = 5, x4 = 0. x4 has left the basis (it is the departing variable) and x2 has entered the basis (it is the entering variable).

Now the bottom row says z - 2x1 + 5x4 = 200, so we can still increase z by increasing x1 from 0. Thus x1 must enter the basis. The entry 1 in the top left of the tableau is the new pivot, and all other entries in its column must become 0. x3 leaves the basis.
Basic    z    x1    x2    x3     x4    Solution
 x1      0     1     0     1     -2        5
 x2      0     0     1   -2/3    5/3      10       R2 := R2 - (2/3)R1
 z       1     0     0     2      1      210       R3 := R3 + 2R1
Now there are no negative numbers in the z row, which says z + 2x3 + x4 = 210.
Increasing either of the non-basic variables x3 , x4 from 0 cannot increase z, so we have the
optimal value z = 210 when x3 = x4 = 0.
The tableau shows that this constrained maximum occurs when x1 = 5 and x2 = 10.
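The whole loop of Steps 3 to 6 described above can be sketched as a small program. This is an illustrative implementation with exact fractions (the function name simplex and its interface are my own, not from the notes), restricted to problems whose constraints are all of the ≤ type with non-negative right-hand sides:

```python
from fractions import Fraction

def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming every b[i] >= 0.

    A naive sketch of the tableau method: the most negative entry in the
    z-row chooses the entering variable, and the minimum-ratio test chooses
    the departing one. Exact fractions avoid rounding error."""
    m, n = len(A), len(c)
    # Rows 0..m-1 are the constraints (with slack columns), row m is the z-row.
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if k == i else 0) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    T.append([Fraction(-cj) for cj in c] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))               # the slack variables start basic
    while True:
        zrow = T[m][:-1]
        j = min(range(n + m), key=lambda k: zrow[k])    # entering column
        if zrow[j] >= 0:
            break                               # no negative entries: optimal
        rows = [(T[i][-1] / T[i][j], i) for i in range(m) if T[i][j] > 0]
        if not rows:
            raise ValueError("problem is unbounded")
        _, r = min(rows)                        # departing row (minimum ratio)
        piv = T[r][j]
        T[r] = [v / piv for v in T[r]]          # make the pivot entry 1
        for i in range(m + 1):
            if i != r and T[i][j] != 0:
                f = T[i][j]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        basis[r] = j
    x = [Fraction(0)] * (n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i][-1]
    return x[:n], T[m][-1]

# The worked example: maximize 12x1 + 15x2 s.t. 5x1 + 6x2 <= 85, 2x1 + 3x2 <= 40
x, z = simplex([12, 15], [[5, 6], [2, 3]], [85, 40])
print([int(v) for v in x], int(z))  # [5, 10] 210
```

Running it on the rose-growing problem, simplex([2, 3], [[5, 4], [8, 2], [1, 5]], [6100, 8000, 5000]), reproduces (500, 900) with z = 3700.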
In standard form the problem is to maximize z = 2x1 + 3x2
subject to
5x1 + 4x2 + x3           = 6100
8x1 + 2x2      + x4      = 8000
 x1 + 5x2           + x5 = 5000
and xj ≥ 0 for j = 1, ..., 5.
The slack variables x3 , x4 , x5 represent the amount of spare area, finance and labour that
are available. The extreme points of the feasible region are:
Extreme point    x1     x2     x3     x4     x5   Objective function z
     O            0      0   6100   8000   5000         0
     A            0   1000   2100   6000      0      3000
     B          500    900      0   2200      0      3700
     C          900    400      0      0   2100      3000
     D         1000      0   1100      0   4000      2000
The boundary lines each have one of x1 , . . . , x5 equal to zero. Hence the vertices of the
feasible region, which occur where two boundary lines intersect, correspond to solutions in
which two of the five variables in (LP2) are zero.
The simplex method systematically moves around the boundary of this feasible region,
starting at the origin and improving the objective function at every stage.
Step 2 Form the initial tableau.

Basic    z    x1    x2    x3    x4    x5    Solution
 x3      0     5     4     1     0     0      6100
 x4      0     8     2     0     1     0      8000
 x5      0     1     5     0     0     1      5000
 z       1    -2    -3     0     0     0         0
Note that the bottom row comes from writing the objective function as z - 2x1 - 3x2 = 0.
The tableau represents a set of equations which hold simultaneously at every feasible solution. The = sign in the equations occurs immediately before the Solution column. There are 5 - 3 = 2 non-basic variables. Each column headed by a basic variable has an entry 1 in exactly one row and 0 in all the other rows. In the above tableau, x1 and x2 are non-basic. Setting these to zero, an initial basic feasible solution can be read directly from the tableau: x3 = 6100, x4 = 8000 and x5 = 5000, giving z = 0.
Step 3 Test for optimality: are the entries in the z-row all non-negative?
If so then stop, as the optimal solution has been reached. Otherwise go to Step 4.
Here the bottom row says z - 2x1 - 3x2 = 0, so z can be increased by increasing x1 or x2.
Step 4 Choose the variable to enter the basis: the entering variable.
This will be increased from its current value of 0.
z = 2x1 + 3x2, so we should be able to increase z the most if we increase x2 from 0. (The rate of change of z is greater with respect to x2; this is called a steepest ascent method.) Thus we choose the column with the most negative entry in the z-row. Suppose this is in the xj column. Then xj is the entering variable and its column is the pivot column.
Here the entering variable is x2, as this has the most negative entry (-3) in the z-row.
Step 5 Choose the variable to leave the basis: the departing variable.
This will be decreased to 0 from its current value.
The xj column now needs to become basic, so we must divide through some row i by aij to make the entry in the pivot column 1. To keep the right-hand side non-negative we need aij > 0. If there are no positive entries in the pivot column then stop: the problem is unbounded and has no solution. Otherwise a multiple of row i must be added to every other row k to make the entries in the pivot column zero. The operations on rows i and k must be:

Ri := (1/aij)Ri, so that the right-hand side bi becomes bi/aij,
Rk := Rk - (akj/aij)Ri for all k ≠ i, so that bk becomes bk - (akj/aij)bi.

To keep all the right-hand sides non-negative requires bk - (akj/aij)bi ≥ 0 for all k, i.e. bk/akj ≥ bi/aij whenever akj > 0. Hence we choose row i so that aij > 0 and the row quotient θi = bi/aij is minimum. This row is the pivot row.
The element aij in the selected row and column is called the pivot element. The basic variable corresponding to this row is the departing variable. It becomes 0 at this iteration.
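The ratio test of Step 5 can be isolated as a small helper; an illustrative sketch (the function name is mine):

```python
from fractions import Fraction

def departing_row(pivot_col, solution_col):
    """Minimum-ratio test: among rows with a positive entry in the pivot
    column, return the index minimising b_i / a_ij; None if the column
    has no positive entry (the problem is then unbounded)."""
    ratios = [(Fraction(b, a), i)
              for i, (a, b) in enumerate(zip(pivot_col, solution_col)) if a > 0]
    return min(ratios)[1] if ratios else None

# Pivot column (x2) and Solution column of the rose-growing initial tableau
print(departing_row([4, 2, 5], [6100, 8000, 5000]))  # 2  (the x5 row)
```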
Step 6 Form a new tableau.
As described above, divide the entire pivot row by the pivot element to obtain a 1 in the
pivot position. Make every other entry in the pivot column zero, including the entry in the
z-row, by carrying out row operations in which multiples of the pivot row only are added
to or subtracted from the other rows. Then go to Step 3.
Applying this procedure to the rose-growing problem, we have:
Initial tableau

Basic    z    x1    x2    x3    x4    x5    Solution          θi
 x3      0     5     4     1     0     0      6100    6100 ÷ 4 = 1525
 x4      0     8     2     0     1     0      8000    8000 ÷ 2 = 4000
 x5      0     1     5     0     0     1      5000    5000 ÷ 5 = 1000  (smallest)
 z       1    -2    -3     0     0     0         0
The entering variable is x2, as this has the most negative coefficient in the z-row. Calculating the associated row quotients θi shows that x5 is the departing variable, i.e. the x5 row is the pivot row for the row operations. The pivot element is 5.
Fill in the next two tableaux:

Second tableau

Basic    z    x1    x2    x3    x4    x5    Solution
         0
         0
         0
 z       1

Final tableau

Basic    z    x1    x2    x3    x4    x5    Solution
         0
         0
         0
 z       1
The coefficients of the non-basic variables in the z-row are both positive. This shows that we have reached the optimum (make sure you can explain why!). The algorithm now stops and the optimal values can be read off from the final tableau: zmax = 3700 when (x1, x2, x3, x4, x5) = (500, 900, 0, 2200, 0).
Summary
We move from one basic feasible solution to the next by taking one variable out of
the basis and bringing one variable into the basis.
The entering variable is chosen by looking at the numbers in the z-row of the current
tableau. If they are all non-negative then we already have the optimum value. Otherwise we choose the non-basic column with the most negative number in the z-row
of the current tableau.
The departing variable is determined (usually uniquely) by finding the basic variable which will be the first to reach zero as the entering variable is increased. This is identified by finding a row with a positive entry in the pivot column which gives the smallest non-negative value of the row quotient θi = bi/aij.
The version of the algorithm described here can be used only on problems which are in standard form. It relies on having an initial basic feasible solution. This is easy to find when all the constraints are of the ≤ type. However, in some cases, such as the cattle feed problem, this is not the case and a modification of the method is needed. We shall return to this in a later section when we consider the dual simplex algorithm.
2.2.1 Further examples

1.
Basic    z    x1    x2    x3    x4    x5    x6    Solution
         0
         0
         0
 z       1

Basic    z    x1    x2    x3    x4    x5    x6    Solution
         0
         0
         0
 z       1

Basic    z    x1    x2    x3    x4    x5    x6    Solution
         0
         0
         0
 z       1
From the last tableau z = 10 - 7x3 - 2x4 - 0x6, so increasing a non-basic variable (x3, x4 or x6) cannot increase z. Hence this tableau is optimal. z has a maximum value of 10, so the minimum of the given function is -10 when x1 = 0.5, x2 = 2.25, x3 = 0.
x5 = 2.75 shows that strict inequality holds in the second constraint: 2x1 + x2 - 4x3 falls short of 6 by 2.75. The other constraints are active (binding) at the optimum.
2. Maximize z = 4x1 + x2 + 3x3 subject to
x1 + x2 ≤ 6, -x2 + 2x3 ≤ 4, x1 ≤ 4, x2 ≤ 4, xj ≥ 0 for j = 1, 2, 3.
Adding slack variables gives the equations
x1 + x2 + x4 = 6, -x2 + 2x3 + x5 = 4, x1 + x6 = 4, x2 + x7 = 4.
Basic    z    x1    x2    x3    x4    x5    x6    x7    Solution
         0
         0
         0
         0
 z       1

Basic    z    x1    x2    x3    x4    x5    x6    x7    Solution
         0
         0
         0
         0
 z       1

Basic    z    x1    x2    x3    x4    x5    x6    x7    Solution
         0
         0
         0
         0
 z       1

Basic    z    x1    x2    x3    x4    x5    x6    x7    Solution
         0
         0
         0
         0
 z       1

The entries in the objective row are all positive; this row reads z + (5/2)x4 + (3/2)x5 + (3/2)x6 = 27.
(Note that z is always expressed in terms of the non-basic variables in each tableau.) Thus
z has a maximum value of 27 when x4 = x5 = x6 = 0 and x1 = 4, x2 = 2, x3 = 3, x7 = 2.
Only one slack variable is basic in the optimal solution, so at this optimal point the first three constraints are active (they hold as equalities) while the fourth inequality x2 ≤ 4 is inactive. Indeed, x2 is less than 4 by precisely 2, which is the amount of slack in this constraint.
From the final tableau, each basic variable can be expressed in terms of the non-basic variables, e.g. the x3 row says x3 + (1/2)x4 + (1/2)x5 - (1/2)x6 = 3, so x3 = 3 - (1/2)x4 - (1/2)x5 + (1/2)x6.
2.3 Degeneracy
When one (or more) of the basic variables in a basic feasible solution is zero, both the
problem and that bfs are said to be degenerate.
In a degenerate problem, the same bfs may correspond to more than one basis. This will occur when two rows have the same minimum quotient θi. Selecting either of the associated variables to become non-basic results in the one not chosen, which therefore remains basic, becoming equal to zero at the next iteration.
Degeneracy reveals that the LP problem has at least one redundant constraint. The problem of degeneracy is easily dealt with in practice: just keep going! If two or more of the
row quotients i are equal, pick any one of them and proceed to the next tableau. There
will then be a 0 in the Solution column. Follow the usual rules: the minimum i may be
0, so long as aij > 0. Even if the value of z does not increase, a new basis has been found
and we should eventually get to the optimum.
Example: Consider the following problem:
Maximize z = 3x1 - x2
subject to 2x1 - x2 ≤ 4
x1 - 2x2 ≤ 2
x1 + x2 ≤ 5
and xj ≥ 0, j = 1, 2.
Adding slack variables x3, x4, x5, the constraints become
2x1 - x2 + x3 = 4, x1 - 2x2 + x4 = 2, x1 + x2 + x5 = 5
and xj ≥ 0 for j = 1, . . . , 5.
Basic   z   x1   x2   x3   x4   x5   Solution   θi
x3      0    2   -1    1    0    0       4      4/2 = 2
x4      0    1   -2    0    1    0       2      2/1 = 2
x5      0    1    1    0    0    1       5      5/1 = 5
z       1   -3    1    0    0    0       0

x1 enters, and rows 1 and 2 tie for the minimum quotient θi = 2. Choosing the x4 row as the pivot row gives:

Basic   z   x1   x2   x3   x4   x5   Solution   θi
x3      0    0    3    1   -2    0       0      0/3 = 0
x1      0    1   -2    0    1    0       2        -
x5      0    0    3    0   -1    1       3      3/3 = 1
z       1    0   -5    0    3    0       6
Now x3 is basic but takes the value x3 = 0, so we are at a degenerate bfs. The algorithm
has not finished as there is a negative entry in the z-row. The next iteration gives:
Basic   z   x1   x2    x3     x4    x5   Solution
x2      0    0    1   1/3   -2/3    0       0
x1      0    1    0   2/3   -1/3    0       2
x5      0    0    0   -1      1     1       3
z       1    0    0   5/3   -1/3    0       6
This solution is degenerate again, and the objective function has not increased. There is
still a negative entry in the z-row. Pivoting on the x5 row gives:
Basic   z   x1   x2    x3    x4    x5   Solution
x2      0    0    1  -1/3     0   2/3      2
x1      0    1    0   1/3     0   1/3      3
x4      0    0    0   -1      1    1       3
z       1    0    0   4/3     0   1/3      7

All z-row entries are now non-negative, so this tableau is optimal: zmax = 7 when x1 = 3, x2 = 2.
In a degenerate problem, it is possible (though unlikely) that the Simplex algorithm could
return at some iteration to a previous tableau. Once caught in this cycle, it will go round
and round without improving z.
Several methods have been proposed for preventing cycling. One of the simplest is the
smallest subscript rule or Bland's rule, which states that the simplex method will not
cycle provided that whenever there is more than one candidate for the entering or leaving
variable, the variable with the smallest subscript is chosen (e.g. x3 in preference to x4).
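Bland's rule is easy to mechanise. The following Python sketch (not part of the original notes; it uses exact rational arithmetic from the standard library) applies the smallest-subscript rule to both the entering and the leaving choices. It may pass through a different sequence of tableaux from the worked example above, but reaches the same optimum.

```python
from fractions import Fraction as F

def simplex_bland(A, b, c):
    """Maximize c.x subject to A x <= b, x >= 0 (all b_i >= 0),
    breaking every tie by Bland's smallest-subscript rule."""
    m, n = len(A), len(A[0])
    # Tableau rows: [A | I | b]; the final row is the z-row [-c | 0 | 0].
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if k == i else F(0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    T.append([F(-cj) for cj in c] + [F(0)] * m + [F(0)])
    basis = list(range(n, n + m))          # the slack variables start basic
    while True:
        # Entering variable: smallest index with a negative z-row entry.
        enter = next((j for j in range(n + m) if T[m][j] < 0), None)
        if enter is None:
            break                          # z-row non-negative: optimal
        # Leaving variable: minimum-ratio test, ties broken by basis index.
        cands = [(T[i][-1] / T[i][enter], basis[i], i)
                 for i in range(m) if T[i][enter] > 0]
        if not cands:
            raise ValueError("problem is unbounded")
        _, _, r = min(cands)
        piv = T[r][enter]
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][enter] != 0:
                f = T[i][enter]
                T[i] = [T[i][j] - f * T[r][j] for j in range(n + m + 1)]
        basis[r] = enter
    x = [F(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[m][-1]
```

For the degenerate example above, simplex_bland([[2, -1], [1, -2], [1, 1]], [4, 2, 5], [3, -1]) returns ([3, 2], 7).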
2.4 The Simplex method in matrix form
Initial tableau

Basic   z   x1   x2   x3   x4   x5   Solution
x3      0    5    4    1    0    0     6100
x4      0    8    2    0    1    0     8000
x5      0    1    5    0    0    1     5000
z       1   -2   -3    0    0    0        0

Final tableau

Basic   z   x1   x2     x3     x4     x5    Solution
x1      0    1    0    5/21     0   -4/21      500
x4      0    0    0  -38/21     1   22/21     2200
x2      0    0    1   -1/21     0    5/21      900
z       1    0    0    1/3      0    1/3      3700

In the initial tableau, the columns of the 4 × 4 identity matrix occur in the columns headed
x3, x4, x5, z respectively. The corresponding columns in the final tableau form the matrix

P = [  5/21    0   -4/21   0 ]
    [ -38/21   1   22/21   0 ]
    [ -1/21    0    5/21   0 ]
    [  1/3     0    1/3    1 ]

The entire initial tableau, regarded as a matrix, has been pre-multiplied by P to give the
entire final tableau.
P tells us the row operations that would convert the initial tableau directly to the final
tableau, e.g. R2 := -(38/21)R1 + 1R2 + (22/21)R3 + 0R4.
In the final tableau, the columns of the 4 × 4 identity matrix occur under x1, x4, x2 and z
respectively. The corresponding columns of the initial tableau form the matrix

P^{-1} = [  5   0    4   0 ]
         [  8   1    2   0 ]
         [  1   0    5   0 ]
         [ -2   0   -3   1 ]

This would pre-multiply the entire final tableau to give the initial tableau.
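This claim can be verified mechanically. The Python sketch below (exact rational arithmetic; not part of the notes) multiplies the whole initial tableau by P and recovers the final tableau:

```python
from fractions import Fraction as F

def matmul(P, T):
    """Exact matrix product over rationals."""
    return [[sum(P[i][k] * T[k][j] for k in range(len(T)))
             for j in range(len(T[0]))] for i in range(len(P))]

# Initial tableau of the rose-growing problem; columns (z, x1..x5, Solution).
T1 = [[F(v) for v in row] for row in [
    [0, 5, 4, 1, 0, 0, 6100],
    [0, 8, 2, 0, 1, 0, 8000],
    [0, 1, 5, 0, 0, 1, 5000],
    [1, -2, -3, 0, 0, 0, 0]]]
P = [[F(5, 21), F(0), F(-4, 21), F(0)],
     [F(-38, 21), F(1), F(22, 21), F(0)],
     [F(-1, 21), F(0), F(5, 21), F(0)],
     [F(1, 3), F(0), F(1, 3), F(1)]]
Tf = matmul(P, T1)   # the final tableau, column by column
```

The Solution column of Tf comes out as (500, 2200, 900, 3700), as in the final tableau above.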
The initial tableau T1 can be written in partitioned form as

T1 = [  A      I     0   b ]
     [ -c^t    0^t   1   0 ]

The final tableau is obtained from the initial one by a succession of row operations, each of
which could be achieved by pre-multiplying T1 by some matrix. Hence Tf = P T1 for some matrix

P = [ M     u ]
    [ y^t   α ]

say. As the z-column is unchanged, u = 0 and α = 1.
The process of transforming the initial tableau to the optimal one can be expressed as:

[ M     0 ] [  A      I     0   b ]   [ MA            M     0   Mb    ]
[ y^t   1 ] [ -c^t    0^t   1   0 ] = [ y^t A - c^t   y^t   1   y^t b ]

Thus M and y^t are found in the columns headed by the slack variables in the final tableau.
As Tf is optimal we must have y ≥ 0 and y^t A - c^t ≥ 0, i.e. A^t y ≥ c.
Then zmax = y^t b, or equivalently zmax = b^t y.
We shall meet these conditions again when we study the dual of a LP problem.
Now suppose that in the optimal tableau the basic variables, reading down the list in the
Basic column, are xB1, . . . , xBm and the non-basic variables are xN1, . . . , xNn. The initial
tableau could be rearranged so that the basic columns B come first, then the non-basic
columns N, with cB and cN the corresponding parts of c. Then the transformation becomes:

[ M     0 ] [  B       N      0   b ]   [ MB             MN             0   Mb    ]
[ y^t   1 ] [ -cB^t   -cN^t   1   0 ] = [ y^t B - cB^t   y^t N - cN^t   1   y^t b ]

The basic columns of the final tableau contain an identity matrix and a row of zeros, so
MB = I, i.e. M = B^{-1}. Thus xB = B^{-1} b.
Also y^t B - cB^t = 0^t, which can be written as y^t = cB^t B^{-1}.
Every entry in the z-row is non-negative at the optimum, so y^t N - cN^t ≥ 0,
i.e. cB^t B^{-1} N - cN^t ≥ 0.
Then zmax = y^t b = cB^t B^{-1} b.
Thus we have:
Proposition 2.1 Let xB1, . . . , xBm be the basic variables in the optimal tableau for problem
(LP1) with b ≥ 0. Let B be the matrix formed by the columns headed xB1, . . . , xBm in the
initial tableau (excluding the z-row). Then (xB1 · · · xBm)^t = B^{-1} b and zmax = cB^t B^{-1} b.
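Proposition 2.1 can be checked numerically for the rose-growing problem, whose optimal basis is (x1, x4, x2). The Python sketch below (not from the notes) solves B xB = b by Gauss-Jordan elimination over exact rationals, using only the standard library:

```python
from fractions import Fraction as F

def solve(M, rhs):
    """Solve M v = rhs exactly by Gauss-Jordan elimination."""
    n = len(M)
    A = [[F(M[i][j]) for j in range(n)] + [F(rhs[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]   # normalise pivot row
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [A[r][j] - f * A[col][j] for j in range(n + 1)]
    return [A[r][n] for r in range(n)]

# Columns of (A | I) headed x1, x4, x2 in the rose-growing initial tableau.
B = [[5, 0, 4], [8, 1, 2], [1, 0, 5]]
b = [6100, 8000, 5000]
cB = [2, 0, 3]
xB = solve(B, b)                                    # (x1, x4, x2)
zmax = sum(ci * xi for ci, xi in zip(cB, xB))
```

This reproduces xB = (500, 2200, 900) and zmax = 3700, as in the final tableau.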
2.4.1
Example
Maximize z = 2x1 + 4x2 - 5x3 subject to x1 + 2x2 + x3 ≤ 5, 2x1 + x2 - 4x3 ≤ 6, -3x1 + 2x2 ≤ 3 and xj ≥ 0.
Initial tableau

Basic   z   x1   x2   x3   x4   x5   x6   Solution
x4      0    1    2    1    1    0    0      5
x5      0    2    1   -4    0    1    0      6
x6      0   -3    2    0    0    0    1      3
z       1   -2   -4    5    0    0    0      0

Final tableau

Basic   z   x1   x2     x3     x4    x5    x6    Solution
x1      0    1    0    1/4    1/4    0   -1/4      1/2
x5      0    0    0  -39/8   -7/8    1    3/8     11/4
x2      0    0    1    3/8    3/8    0    1/8      9/4
z       1    0    0     7      2     0     0       10

Pre-multiplying the initial tableau by

P = [  1/4    0   -1/4   0 ]
    [ -7/8    1    3/8   0 ]
    [  3/8    0    1/8   0 ]
    [   2     0     0    1 ]

gives the final tableau.
0
1
The basic variables reading down the Basic column are x1 , x5 , x2 , so thematrix B defined
1 0 2
above is formed from columns 1, 5, 2 respectively of (A | I3 ). Thus B = 2 1 1 .
3 0 2
1/4 0 1/4
x1
1/2
3/8 . At the optimal point x5 = B1 b = 11/4 .
B1 = 7/8 1
3/8 0
1/8
x2
9/4
2
1/2
t
1
0 , so zmax = cB B b = 2 0 4
11/4 = 10.
cB =
4
9/4
1 1 0
We can take N = 4 0 0 , from columns 3, 4, 6 of (A | I3 ), and cN = (5 0 0).
0 0 1
Then cB t B1 N cN t = (7 2 0), the z-row entries in the non-basic columns.
9
1
11
The maximum value of z is 10, when x1 = , x2 = , x3 = 0, x4 = 0, x5 = .
2
4
4
We have been assuming that only extreme points of the feasible region need to be considered
as possible solutions. The proof of this is now given; you are not expected to learn it.
Proposition 2.2 v is a basic feasible solution of (LP2) if and only if v is an extreme point
of the set of feasible solutions S = {x̃ : Ãx̃ = b, x̃ ≥ 0} in R^{n+m}.
Proof (⇒) Suppose v is a bfs of (LP2) and is not an extreme point of S, so there are
distinct feasible solutions u, w in S such that, for some r in (0, 1), v = (1 - r)u + rw.
We can permute the columns of Ã, as described earlier, to get (B | N) and permute the
entries of v, u, w correspondingly so that

(vB | 0)^t = (1 - r)(uB | uN)^t + r(wB | wN)^t.

As 0 < r < 1 and u, w are non-negative, it follows that uN = wN = 0.
As v, u, w are feasible, Ãv = Ãu = Ãw = b,
so (B | N)(vB | 0)^t = (B | N)(uB | 0)^t = (B | N)(wB | 0)^t = b.
Thus B vB = B uB = B wB = b.
As B is non-singular, vB = uB = wB = B^{-1} b, so u = v = w, contradicting the assumption
that u and w are distinct.
Exercises 2
1. Use the Simplex method to solve
Maximize z = 4x1 + 8x2
subject to 5x1 + x2 ≤ 8
3x1 + 2x2 ≤ 4
and x1 ≥ 0, x2 ≥ 0.
Express the objective function in terms of the non-basic variables in the final tableau,
and hence explain how you know that the solution is optimal.
2. Use the Simplex method to solve the problem
Maximize z = ⋯
subject to 2x1 + 3x2 + x3 ≤ 5
4x1 + x2 + 2x3 ≤ 11
and xj ≥ 0 for j = 1, 2, 3.
Write down a matrix P which would pre-multiply the whole initial tableau to give the
final tableau.
After you have studied Section 2.4 in the notes: write down, from the optimal tableau
of this problem, the matrices B, B^{-1} and N and the vectors y, cB, cN as defined on
pages 24 - 25. Verify that the optimal solution is B^{-1} b and that zmax = cB^t B^{-1} b.
3. Express the following LP problem in standard form. (Do not solve it.)
Minimize z = 2x1 + 3x2
subject to x1 + x2 ≥ 1
2x1 + 3x2 ≥ 5
3x1 - 2x2 ≥ 3
4. [A partially-worked simplex tableau is given at this point; its entries were not captured in this copy apart from the Solution column, which reads 4, 10, 4 with z = 12.]
(a) Which variables are basic? What basic feasible solution does the tableau represent? Is it optimal? Explain your answer by writing the z-row as an equation.
(b) Starting from the given tableau, proceed to find an optimal solution.
(c) From the optimal tableau, express the objective function and each of the basic
variables in terms of the non-basic variables.
5. Suppose one of the constraints in a LP problem is an equation rather than an inequality, as in
Minimize 3x1 + 2x2 + 4x3
subject to ⋯
x1 + x2 + x3 = 1
and x1 ≥ 0, x2 ≥ 0.
Use the third constraint to express x3 in terms of x1 and x2. Hence eliminate x3 from
the objective function and the other two constraints. Introduce two slack variables x4
and x5, and solve the problem by the Simplex algorithm. Check that your solution
satisfies all the constraints. How must this approach be modified if also x3 ≥ 0?
8. Solve the following problem by the simplex algorithm, showing that every possible
basic feasible solution occurs in the iterations.
Maximize z = 100x1 + 10x2 + x3
subject to x1 ≤ 1
20x1 + x2 ≤ 100
⋯
and xj ≥ 0.

9. Maximize z = x1 + x2 + x3
subject to x1 + 3x2 - x3 ≤ 10
x1 - x2 + 3x3 ≤ 14
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
Find a 3 × 3 matrix which would pre-multiply the initial tableau to give a tableau
in which the basic variables are x1 and x3. Verify that this tableau is optimal, and
hence state the solution of the problem.
If you had not been told which variables to take as basic, how many basic feasible
solutions might you have to consider in order to solve the problem this way?
10. Prove that v is an extreme point of the feasible region R for (LP1) if and only if
(v | b - Av)^t is an extreme point of the feasible region S for (LP2).
Chapter 3
Sensitivity analysis
Suppose some feature of a LP problem changes after we have found the optimal solution.
Do we have to solve the problem again from scratch or can the optimum of the original
problem be used as an aid to solving the new problem? These questions are addressed by
sensitivity analysis, also called post-optimal analysis.
If a resource is used up completely then the slack variable in the constraint representing this
resource is zero, i.e. the constraint is active at the optimal solution. This type of resource is
called scarce. In contrast, if the slack variable is non-zero (i.e. the constraint is not active)
then this resource is not used up totally in the optimal solution. Resources of this kind are
called abundant. Increasing the availability of an abundant resource will not in itself yield
an improvement in the optimal solution. However, increasing a scarce resource will improve
the optimal solution.
With the notation of Chapter 2, zmax = y^t b. Suppose bi is increased by a small amount
δbi. We say that the ith constraint has been relaxed by δbi. Then as long as the same
variables remain basic at the optimal solution, zmax = y1 b1 + · · · + yi (bi + δbi) + · · · + ym bm,
i.e. zmax has increased by yi δbi.
yi is thus the approximate increase in the optimum value of z that results from allowing an
extra 1 unit of the ith resource. yi is called the shadow price of the ith resource. It does
not tell us by how much the resource can be increased while maintaining the same rate of
improvement. If the set of constraints which are active at the optimal solution changes,
then the shadow price is no longer applicable.
The shadow prices yi can be read directly from the optimal tableau: they are the numbers
in the z-row at the bottom of the slack variable columns.
3.1.1
To illustrate these concepts we again consider the rose-growing problem, with the initial
and final tableaux as obtained in Chapter 2.
The optimal solution is (500, 900, 0, 2200, 0). The slack variables x3 , x4 and x5 were introduced in the land, finance and labour resource constraints respectively. Thus land and
labour are scarce resources, and finance is an abundant resource.
We now consider various modifications to the problem and its solution. Most of the methods
use the multiplying matrix P which was defined in Section 2.4.
1. Shadow prices
x3, x4 and x5 have coefficients 1/3, 0 and 1/3 in the z-row of the final tableau, so these
are the shadow prices of land, finance and labour respectively. The shadow prices give
the rate of improvement of z with each resource, within certain bounds.
The maximum profit z increases at the rate of 1/3 per extra unit of land or labour.
The shadow price of finance is zero: it is an abundant resource at the optimum.
Labour is a scarce resource, so making extra labour available will increase profit. The
shadow price of labour is 1/3, which means that 1 extra person-hour of labour would
result in the profit increasing by about 0.33. Hence if more than 0.33 per hour is paid in
wages, the grower will be worse off. They would certainly have to pay more than this,
so it is not economical to hire extra labour.
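Since y^t = cB^t B^{-1}, the shadow prices can also be found by solving B^t y = cB. A Python sketch for the rose-growing problem, with basis (x1, x4, x2) as in the final tableau (not part of the notes):

```python
from fractions import Fraction as F

def solve(M, rhs):
    """Solve M v = rhs exactly by Gauss-Jordan elimination."""
    n = len(M)
    A = [[F(M[i][j]) for j in range(n)] + [F(rhs[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [A[r][j] - f * A[col][j] for j in range(n + 1)]
    return [A[r][n] for r in range(n)]

# B holds the initial-tableau columns of the basic variables x1, x4, x2.
B = [[5, 0, 4], [8, 1, 2], [1, 0, 5]]
cB = [2, 0, 3]
Bt = [list(col) for col in zip(*B)]
y = solve(Bt, cB)        # shadow prices of land, finance, labour
```

This gives y = (1/3, 0, 1/3), matching the z-row entries under the slack columns.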
2. Changing the resources
The rose grower has the option to buy some more land. What is the maximum area
that should be purchased if the other constraints remain the same?
Suppose extra land of area k dm² was purchased. Then the initial tableau would be
the same as before except that b1 = 6100 + k in the solution column.
Using the matrix multiplier P, the solution column in the final tableau becomes

[  5/21    0   -4/21   0 ] [ 6100 + k ]   [  500 + (5/21)k  ]
[ -38/21   1   22/21   0 ] [   8000   ] = [ 2200 - (38/21)k ]
[ -1/21    0    5/21   0 ] [   5000   ]   [  900 - (1/21)k  ]
[  1/3     0    1/3    1 ] [     0    ]   [  3700 + (1/3)k  ]

The same basis remains optimal as long as every entry in this column is non-negative,
i.e. 2200 - (38/21)k ≥ 0 and 900 - (1/21)k ≥ 0. The binding condition is k ≤ 46200/38 ≈ 1215.8,
so this is the maximum extra area worth purchasing; within this range each extra unit of
land adds 1/3 to the profit, in line with its shadow price.
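The bound on k can also be computed mechanically: the solution column depends linearly on k through the first column of M = B^{-1} (the x3 column of the final tableau). A Python sketch using the standard library only:

```python
from fractions import Fraction as F

# Slack-variable columns of the final rose-problem tableau, i.e. M = B^{-1}.
M = [[F(5, 21), F(0), F(-4, 21)],
     [F(-38, 21), F(1), F(22, 21)],
     [F(-1, 21), F(0), F(5, 21)]]
b = [6100, 8000, 5000]

x0 = [sum(row[j] * b[j] for j in range(3)) for row in M]   # current basic solution
dk = [row[0] for row in M]                                 # d x_B / d k (k is added to b1)
# The basis stays feasible while x0[i] + dk[i]*k >= 0 for every row.
kmax = min(-x0[i] / dk[i] for i in range(3) if dk[i] < 0)
```

Here x0 comes out as (500, 2200, 900) and kmax as 46200/38 = 23100/19 ≈ 1215.8.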
3. Changing the coefficients of the objective function
Suppose the profit per unit on the first type of rose changes from 2 to 2 + t, so that
z = (2 + t)x1 + 3x2. With the same basis, the z-row entries under the non-basic columns
x3 and x5 become 1/3 + (5/21)t and 1/3 - (4/21)t, and the value of the objective function
becomes 3700 + 500t.
The solution remains optimal at x3 = x5 = 0 if none of these z-row entries is negative.
Therefore the original solution is still optimal, with z'max = 3700 + 500t, provided
1/3 + (5/21)t ≥ 0 and 1/3 - (4/21)t ≥ 0, i.e. -7/5 ≤ t ≤ 7/4.
Thus the profit 2 + t can vary between 3/5 and 15/4.
Original problem
Maximize z = 2x1 + 3x2
subject to
5x1 + 4x2 + x3 = 6100
8x1 + 2x2 + x4 = 8000
x1 + 5x2 + x5 = 5000
and xj ≥ 0, j = 1, . . . , 5.

4. Adding a new activity
Suppose the grower considers introducing a new variety, pink roses, earning a profit of t
per unit and requiring 7 units of land, 6 of finance and 7/2 of labour per unit. Let x6 be
the number of units of pink roses grown; writing x3', x4', x5' for the new slack variables,
the modified problem is:
Maximize z' = 2x1 + 3x2 + t x6
subject to
5x1 + 4x2 + 7x6 + x3' = 6100
8x1 + 2x2 + 6x6 + x4' = 8000
x1 + 5x2 + (7/2)x6 + x5' = 5000
and xj, xk' ≥ 0, j = 1, 2, 6; k = 3, 4, 5.
The initial tableau is the same as before, with an extra column for x6:

Basic   z'   x1   x2   x3'  x4'  x5'   x6    Solution
x3'     0     5    4    1    0    0     7      6100
x4'     0     8    2    0    1    0     6      8000
x5'     0     1    5    0    0    1    7/2     5000
z'      1    -2   -3    0    0    0    -t         0

Pre-multiplying by the matrix P found before, the x6 column of the final tableau is

[  5/21    0   -4/21   0 ] [  7  ]   [    1    ]
[ -38/21   1   22/21   0 ] [  6  ] = [   -3    ]
[ -1/21    0    5/21   0 ] [ 7/2 ]   [   1/2   ]
[  1/3     0    1/3    1 ] [ -t  ]   [ 7/2 - t ]

Hence the current solution, which involves growing no pink roses at all, is optimal
provided 7/2 - t ≥ 0, i.e. t ≤ 7/2.
However, if t > 7/2 then this solution is no longer optimal. For example if t = 9/2 then
the original final tableau, with the x6 column added, is:
Basic   z'   x1   x2     x3'    x4'    x5'    x6    Solution
x1      0     1    0    5/21     0   -4/21     1       500
x4'     0     0    0  -38/21     1   22/21    -3      2200
x2      0     0    1   -1/21     0    5/21    1/2      900
z'      1     0    0    1/3      0    1/3     -1      3700

This is not optimal, but the optimum is obtained after one more Simplex iteration.
5. Adding an extra constraint
Suppose the maximum number of roses that can be sold is 1340, so the constraint
x1 + x2 ≤ 1340 is added to the system. The current optimum is not feasible as
x1 + x2 = 500 + 900 > 1340.
Adding a slack variable x6 gives x1 + x2 + x6 = 1340.
Re-write the new constraint in terms of the current non-basic variables x3 and x5:

(500 - (5/21)x3 + (4/21)x5) + (900 + (1/21)x3 - (5/21)x5) + x6 = 1340

so -(4/21)x3 - (1/21)x5 + x6 = -60. In fact, this calculation can be done within the tableau:
if we add a row for x1 + x2 + x6 = 1340 into the previously optimal tableau we get:
Basic   z   x1   x2     x3     x4     x5    x6    Solution
x1      0    1    0    5/21     0   -4/21    0       500
x4      0    0    0  -38/21     1   22/21    0      2200
x2      0    0    1   -1/21     0    5/21    0       900
x6      0    1    1     0       0     0      1      1340
z       1    0    0    1/3      0    1/3     0      3700
This is not a good simplex tableau as it does not have four basic variable columns.
However, subtracting rows 1 and 3 from row 4 gives:

Basic   z   x1   x2     x3     x4     x5    x6    Solution
x1      0    1    0    5/21     0   -4/21    0       500
x4      0    0    0  -38/21     1   22/21    0      2200
x2      0    0    1   -1/21     0    5/21    0       900
x6      0    0    0   -4/21     0   -1/21    1       -60
z       1    0    0    1/3      0    1/3     0      3700
Notice that the x6 row now contains the equation that we derived previously.
This tableau would be optimal if the negative entry in the solution column were not
there. We perform row operations, keeping the z-row entries non-negative, until all the
numbers in the Solution column are non-negative. In the above, the basic variable
x6 = -60, so we need x6 to leave the basis. How do we choose the entering variable?
Our choice is restricted to the nonbasic variables that have negative coefficients in
the pivot row, since we want to add a positive multiple of the pivot row to the z-row
without creating any negative entries. To ensure this, choose the variable that yields
the minimum absolute value (modulus) of the z-row coefficient divided by the pivot
row entry aij.
In this case |(1/3)/(-4/21)| < |(1/3)/(-1/21)|, so x3 is the entering variable. Row operations
give the tableau:
Basic   z   x1   x2   x3   x4    x5      x6     Solution
x1      0    1    0    0    0  -1/4     5/4        425
x4      0    0    0    0    1   3/2   -19/2       2770
x2      0    0    1    0    0   1/4    -1/4        915
x3      0    0    0    1    0   1/4   -21/4        315
z       1    0    0    0    0   1/4     7/4       3595
This solution is now optimal, as the z-row entries are non-negative and each bj ≥ 0.
Introducing the new constraint reduces the optimum value of the objective function
to 3595. This occurs when x1 = 425 and x2 = 915.
The above procedure, keeping the z-row coefficients non-negative and carrying out row
operations until the basic variables are non-negative, is called the dual simplex method. It
can be used, on its own or in combination with the normal simplex method, on problems
which are not quite in standard form because some of the right-hand sides are negative.
The dual simplex method provides one way of solving problems for which no initial basic
feasible solution is apparent. There are other variations on the simplex method which can
be used for such purposes, such as the Two-Phase method (see a textbook).
3.1.2
A company makes three types of garment. The constraints are given by:

Type                       A    B    C    Amount available
No. of units               x1   x2   x3
Labour (hours) per unit    1    2    3    55 hours
Material (m²) per unit     3    1    4    80 m²
Profit per unit            7    6    9
The company wants to make the largest possible profit subject to the constraints on
labour and materials, so they must ................................. z = ..........
subject to ..........
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
Adding slack variables, the constraints become: ..........

[Four blank simplex tableaux for working follow here, each with columns z, x1, . . . , x5, Solution.]
so the maximum profit is .......... when .......... of type A, .......... of type B and
.......... of type C are made.
From the initial to the final tableau, the middle two rows are multiplied by B^{-1} = .......... .
This is the inverse of B = .......... (using the columns for .......... and .......... respectively).
Taking cB = .......... and b = .......... , cB^t B^{-1} b = .......... = zmax.
[A further fill-in passage of sensitivity working, involving a parameter q, was not captured
cleanly in this copy.]
The original solution (21, 17, 0, 0, 0) is still optimal if no z-row entry is negative.
Therefore the original solution is optimal, with z'max = .......... , provided
.......... ≥ 0, .......... ≥ 0, .......... ≥ 0, i.e. .......... ≤ t ≤ .......... ; the profit can then
vary between .......... and .......... without affecting the optimal solution.
Suppose the company considers making a new garment, type D, with profit k per unit.
Let x0 be the number of units of type D made; the constraints become .......... = 55
and .......... = 80, where xj ≥ 0 for j = 0, . . . , 5.
The initial tableau is as before with the addition of a column for x0. Pre-multiplying by
the matrix

P = [  3/5   -1/5   0 ]
    [ -1/5    2/5   0 ]
    [ 11/5    8/5   1 ]

gives the corresponding column of the final tableau. Thus if k ≤ .........., the z-row
remains non-negative and the current solution is optimal; D should not be made.
Suppose we take k = 13. Then a column for x0 is added to the previously optimal tableau
(basic variables x2, x1; solution column 17, 21; z = 249), and the problem is re-optimised.
[The entries of the x0 column, and the working for this step, were not captured in this copy.]
Next, suppose an extra constraint, with slack variable x6, is added to the system. Putting
the new row directly into the previously optimal tableau does not represent a valid Simplex
iteration because there is only one basic column; subtracting Row 1 and 3 times Row 2 from
Row 3 restores the basic columns. The result represents a basic but infeasible solution: it
would be optimal if the negative entry in the solution column were not there. In this
situation we use the dual simplex method, as described on page 31.
To keep the z-row entries non-negative, choose the entering variable as follows: where
there is a negative number in the pivot row, divide the z-row entry by this and choose the
variable for which the modulus of this quotient is smallest.
[The concluding tableaux, giving the number of units of each type in the revised optimum,
were not captured in this copy.]

3.2 The dual simplex method
The method that we used above when negative numbers occur in the solution column can
be used on any such problems, not just in the context of sensitivity analysis.
The differences between the original simplex method and the dual simplex method can be
summarised as follows:

Original (primal) simplex:
Starts from a basic feasible solution.
All bj ≥ 0.
At least one z-row entry is negative.
Seek to make all z-row entries ≥ 0, keeping all bj ≥ 0.

Dual simplex:
Starts from a basic infeasible solution.
At least one bj < 0.
All z-row entries ≥ 0.
Seek to make all bj ≥ 0, keeping the z-row ≥ 0.
3.2.1
Example
Minimize z = 5x1 + 4x2
subject to 3x1 + 2x2 ≥ 6
x1 + 2x2 ≥ 4
and x1 ≥ 0, x2 ≥ 0.
To use the dual simplex method we must express the problem as a maximization and write
all the constraints in ≤ form: maximize z' = -z = -5x1 - 4x2 subject to
-3x1 - 2x2 ≤ -6, -x1 - 2x2 ≤ -4.
Introducing non-negative slack variables x3, x4 the problem is:
Maximize z' = -5x1 - 4x2
subject to -3x1 - 2x2 + x3 = -6
-x1 - 2x2 + x4 = -4
and xj ≥ 0, j = 1, . . . , 4.
This is not in standard form, but it ensures that the initial simplex tableau will have valid
basic columns. As the right-hand sides are negative we are starting from a basic infeasible
solution:
Basic   z'   x1   x2   x3   x4   Solution
x3      0    -3   -2    1    0      -6
x4      0    -1   -2    0    1      -4
z'      1     5    4    0    0       0

-6 is the most negative right-hand side so x3 leaves the basis. |5/(-3)| < |4/(-2)|, so x1 enters.
First Iteration

Basic   z'   x1    x2     x3    x4   Solution
x1      0     1   2/3   -1/3    0       2
x4      0     0  -4/3   -1/3    1      -2
z'      1     0   2/3    5/3    0     -10

-2 is the remaining negative right-hand side so x4 leaves. |(2/3)/(-4/3)| < |(5/3)/(-1/3)|,
so x2 enters. The second iteration gives:
Basic   z'   x1   x2    x3     x4    Solution
x1      0     1    0  -1/2    1/2       1
x2      0     0    1   1/4   -3/4      3/2
z'      1     0    0   3/2    1/2     -11

Every right-hand side is now non-negative and all z-row entries are non-negative, so this
tableau is optimal: the minimum value of z is 11, attained at x1 = 1, x2 = 3/2.
3.3 Duality
Recall the Containers problem from Chapter 1. Suppose now that the company delegates its
production to a contractor who pays them y1 and y2 per minute for the use of machines
M1 and M2 respectively. Let w be the total hourly charge for using the two machines.
The contractor wants to make this hourly charge as small as possible, but must ensure
that the company is paid at least as much as it originally made in profit for each container
produced: 30 per Type A and 45 per Type B.
Thus the contractor's problem is to minimize w = 60y1 + 60y2 subject to the constraints
2y1 + 4y2 ≥ 30, 8y1 + 4y2 ≥ 45, where y1 ≥ 0, y2 ≥ 0.
The feasible region lies in the first quadrant above the boundary lines, as illustrated.
[Figure not captured in this copy.]
w is minimized where the lines cross, at (2.5, 6.25). Here w = 60 × 2.5 + 60 × 6.25 = 525.
Thus the contractor should pay 2.50 per minute for M1 and 6.25 per minute for M2, so
that the company gets 525 per hour, the same as the profit when it made the containers
itself! We have solved the dual of the original problem.
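The crossing point of the two boundary lines can be checked with Cramer's rule; a small Python sketch (not from the notes):

```python
from fractions import Fraction as F

def intersect(a1, b1, c1, a2, b2, c2):
    """Point where a1*y1 + b1*y2 = c1 meets a2*y1 + b2*y2 = c2 (Cramer's rule)."""
    d = a1 * b2 - a2 * b1
    return (F(c1 * b2 - c2 * b1, d), F(a1 * c2 - a2 * c1, d))

# Boundary lines of the two dual constraints in the Containers problem.
y1, y2 = intersect(2, 4, 30, 8, 4, 45)   # (5/2, 25/4) = (2.5, 6.25)
w = 60 * y1 + 60 * y2                    # 525
```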
Every linear programming problem has an associated problem called its dual. For now we
will restrict attention to pairs of problems of the following form:
Primal: maximize z = c^t x subject to Ax ≤ b and x ≥ 0    (P)
Dual: minimize w = b^t y subject to A^t y ≥ c and y ≥ 0    (D)
To obtain the dual problem from the primal problem we swap c and b, replace A by its
transpose A^t, replace ≤ with ≥ in the constraints, and replace maximize with minimize.
The non-negativity restrictions remain.
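Mechanically, forming the dual is a transpose and a swap of b and c. A minimal Python sketch of this bookkeeping:

```python
def dual(c, A, b):
    """(P): max c.x  subject to  A x <= b, x >= 0.
    (D): min b.y  subject to  A^t y >= c, y >= 0; returned as its (c, A, b) data."""
    At = [list(col) for col in zip(*A)]   # transpose of A
    return b, At, c

# Data of Example 3.3.1 below: c = (6, 4), A = [[3, 1], [2, 2]], b = (5, 4).
w_obj, At, rhs = dual([6, 4], [[3, 1], [2, 2]], [5, 4])
```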
3.3.1
Example
Primal
Maximize z = 6x1 + 4x2
subject to 3x1 + x2 ≤ 5
2x1 + 2x2 ≤ 4
and x1, x2 ≥ 0.

Dual
Minimize w = 5y1 + 4y2
subject to 3y1 + 2y2 ≥ 6
y1 + 2y2 ≥ 4
and y1, y2 ≥ 0.

The primal problem can be solved easily using the standard simplex algorithm. The optimal
solution is zmax = 11 when (x1, x2, x3, x4) = (3/2, 1/2, 0, 0).
We solved the dual problem by the dual simplex algorithm in Section 3.2. The optimal
solution is wmin = 11 when (y1, y2, y3, y4) = (1, 3/2, 0, 0).
Notice that the objective functions of the primal and dual problems have the same optimum
value. Furthermore, in the optimal dual tableau, the objective row coefficients of the slack
variables are equal to the optimal primal decision variables.
Consider the cattle-feed problem in Chapter 1. Suppose a chemical company offers the
farmer synthetic nutrients at a cost of y1 per unit of protein, y2 per unit of fat, y3 per
unit of calcium and y4 per unit of phosphorus.
The cost per unit of hay substitute is thus 13.2y1 + 4.3y2 + 0.02y3 + 0.04y4. To be economic
to the farmer, this must not be more than 0.66. Similarly, considering the oats substitute,
34.0y1 + 5.9y2 + 0.09y3 + 0.09y4 ≤ 2.08.
For feeding one cow, the company will receive 65.0y1 + 14.0y2 + 0.12y3 + 0.15y4, which
it will wish to maximize.
Thus the company's linear programming problem is:
Maximize 65.0y1 + 14.0y2 + 0.12y3 + 0.15y4
subject to 13.2y1 + 4.3y2 + 0.02y3 + 0.04y4 ≤ 0.66
34.0y1 + 5.9y2 + 0.09y3 + 0.09y4 ≤ 2.08
and y1, y2, y3, y4 ≥ 0.
We see that the dual of this is the farmer's original problem. As we shall show next, in fact
the two problems are the duals of each other.
Proposition 3.1 The dual of the dual problem (D) is the primal problem (P).
Proof The dual (D) is: minimize b^t y subject to A^t y ≥ c, y ≥ 0. This can be rewritten
as: maximize -b^t y subject to -A^t y ≤ -c, y ≥ 0, which has the form of a problem (P).
Its dual is: minimize -c^t x subject to (-A^t)^t x ≥ -b, x ≥ 0, i.e. maximize c^t x subject
to Ax ≤ b, x ≥ 0, which is the primal problem (P).
We now investigate how the solutions of the primal and dual problems are related, so that
by solving one we automatically solve the other.
Proposition 3.2 (The weak duality theorem) Let x be any feasible solution to the primal
problem (P) and let y be any feasible solution to the dual problem (D). Then:
(i) c^t x ≤ b^t y.
(ii) If c^t x = b^t y then x and y are optimal solutions to the primal and dual problems.
Proof
(i) As x and y are feasible, we have Ax ≤ b, A^t y ≥ c, x ≥ 0, y ≥ 0.
Thus c^t x ≤ (A^t y)^t x = y^t Ax ≤ y^t b = b^t y.
(ii) Suppose c^t x = b^t y. If c^t x' > c^t x for any primal feasible x', then c^t x' > b^t y, which
contradicts (i). Hence c^t x' ≤ c^t x for all primal feasible x', so x is optimal for (P). Similarly,
y is optimal for (D).
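Both parts of the theorem can be illustrated numerically with the primal and dual of Example 3.3.1; a Python sketch, in which the first feasible pair (x, y) is chosen arbitrarily:

```python
from fractions import Fraction as F

A, b, c = [[3, 1], [2, 2]], [5, 4], [6, 4]

def primal_feasible(x):           # A x <= b, x >= 0
    return all(v >= 0 for v in x) and all(
        sum(row[j] * x[j] for j in range(2)) <= bi for row, bi in zip(A, b))

def dual_feasible(y):             # A^t y >= c, y >= 0
    return all(v >= 0 for v in y) and all(
        sum(A[i][j] * y[i] for i in range(2)) >= c[j] for j in range(2))

x, y = [F(1), F(1)], [F(2), F(2)]                   # an arbitrary feasible pair
zx = sum(ci * xi for ci, xi in zip(c, x))           # 10
wy = sum(bi * yi for bi, yi in zip(b, y))           # 18, and indeed 10 <= 18

x_opt, y_opt = [F(3, 2), F(1, 2)], [F(1), F(3, 2)]  # the optimal pair
z_opt = sum(ci * xi for ci, xi in zip(c, x_opt))    # 11
w_opt = sum(bi * yi for bi, yi in zip(b, y_opt))    # 11 = z_opt, so both optimal
```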
Proposition 3.3 (The strong duality theorem) If either the primal or dual problem
has a finite optimal solution, then so does the other, and the optimum values of the primal
and dual objective functions are equal, i.e. zmax = wmin.
Proof
Let x be a finite optimal solution to the primal, so that zmax = c^t x = z* say.
We have seen that the initial tableau is pre-multiplied as follows to give the final tableau:

[ B^{-1}   0 ] [  A      I     0   b ]   [ B^{-1} A      B^{-1}   0   B^{-1} b ]
[ y^t      1 ] [ -c^t    0^t   1   0 ] = [ y^t A - c^t   y^t      1   y^t b    ]

As this is optimal, y^t A - c^t ≥ 0, so A^t y ≥ c, and y ≥ 0. Hence y is feasible for the
dual problem.
Now z* = y^t b = b^t y. But also z* = c^t x, so x, y are feasible solutions which give equal
values of the primal and dual objective functions respectively.
Thus by Proposition 3.2 (ii), x, y are optimal solutions and z* is the optimal value of both
z and w.
As each problem is the dual of the other, the same reasoning applies if we start with a finite
optimal solution y to the dual.
The above shows that the entries of y are in fact the optimal values of the main variables
in the dual problem. (They are also the shadow prices for the primal constraints.)
Furthermore the entries of y^t A - c^t are the values of the dual surplus variables at the
optimum, so we shall denote them by ym+1, . . . , ym+n.
Thus the optimal primal tableau contains the following information: in the z-row, the
entries under the primal main variables x1, . . . , xn are the values ym+1, . . . , ym+n of the
dual surplus variables; the entries under the primal slack variables xn+1, . . . , xn+m are the
values y1, . . . , ym of the dual main variables; and the z-row entry in the Solution column
is the common optimum of the primal and dual objective functions. The rest of the
Solution column holds the values of the primal basic variables.
The optimal dual solutions may therefore be read from the optimal primal tableau without
further calculations. Furthermore, since the dual of the dual is the primal, it does not matter
which problem we solve: the optimal solution of one will give us the optimal solution of
the other. This is important, as if we are presented with a difficult primal problem, it may
be easier to solve it by tackling its dual:
if the primal constraints are all of the form ≥ then (P) cannot be solved by the
normal simplex algorithm, but (D) can;
if the primal problem has many more constraints than variables then the dual has
many fewer constraints than variables, and will in general be quicker to solve.
Proposition 3.4 If either the primal (P) or the dual (D) has an unbounded optimal solution
then the other has no feasible solution.
Proof: Suppose the dual has a feasible solution y. Then for any primal feasible solution
x, c^t x ≤ b^t y, so b^t y is an upper bound on solutions of the primal. Similarly, if the primal
has a feasible solution this places a lower bound on solutions of the dual. It follows that if
either problem is unbounded then the other does not have a feasible solution.
Proposition 3.4 identifies some cases where the duality results do not hold, i.e. we cannot
say that the primal and dual LP problems have the same optimal values of their objective
functions:
1. Primal problem unbounded and dual problem infeasible.
2. Primal problem infeasible and dual problem unbounded.
3. Primal and dual problems both infeasible.
3.4
Complementary slackness
At the optimal solution, if xi is a main variable in the primal problem then the corresponding
dual variable is the surplus variable ym+i, and for i = 1, . . . , n, either xi = 0 or ym+i = 0.
Also, if xn+j is a slack variable in the primal problem then the corresponding dual variable
is the main variable yj. For j = 1, . . . , m, either yj = 0 or xn+j = 0.
Thus in every case xi ym+i = 0 and xn+j yj = 0.
So at the optimal solution,
(ith primal main variable) × (ith dual surplus variable) = 0,
(jth primal slack variable) × (jth dual main variable) = 0.
These relationships are called the complementary slackness equations.
Thus (x1 · · · xn | xn+1 · · · xn+m) · (ym+1 · · · ym+n | y1 · · · ym)^t = 0, since the scalar product
of two non-negative vectors is 0 iff the product of each corresponding pair of entries is 0.
Now the entries of b - Ax are the primal slack variables xn+1, . . . , xn+m and the entries of
A^t y - c are the dual surplus variables ym+1, . . . , ym+n, so complementary slackness asserts
that at the optimal solution,
y^t (b - Ax) = 0 and x^t (A^t y - c) = 0.
An interpretation of complementary slackness is that if the shadow price of a resource is
non-zero then the associated constraint is active at the optimum, i.e. the resource is scarce,
but if the constraint is not active (the resource is abundant) then its shadow price is zero.
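For instance, at the optimum of the rose-growing problem of Section 3.1.1, x = (500, 900) and y = (1/3, 0, 1/3); a Python sketch checking both conditions (the abundant finance constraint contributes slack 2200, paired with y2 = 0):

```python
from fractions import Fraction as F

A = [[5, 4], [8, 2], [1, 5]]
b, c = [6100, 8000, 5000], [2, 3]
x, y = [500, 900], [F(1, 3), F(0), F(1, 3)]

slack = [bi - sum(r[j] * x[j] for j in range(2)) for r, bi in zip(A, b)]     # b - Ax
surplus = [sum(A[i][j] * y[i] for i in range(3)) - c[j] for j in range(2)]   # A^t y - c
cs1 = sum(yi * si for yi, si in zip(y, slack))       # y.(b - Ax), should be 0
cs2 = sum(xi * si for xi, si in zip(x, surplus))     # x.(A^t y - c), should be 0
```

Here slack = (0, 2200, 0), and both complementary slackness products are zero.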
Proposition 3.5 (The complementary slackness theorem) A necessary and sufficient
condition for x and y to be optimal for the primal and dual problems (P) and (D) is that
x is primal feasible, y is dual feasible, and x and y satisfy the complementary slackness
conditions y^t (b - Ax) = 0 and x^t (A^t y - c) = 0.
Proof:
By the duality theorems, x and y are optimal iff they are feasible and c^t x = b^t y.
Now c^t x = b^t y
⇔ y^t b - x^t c = 0
⇔ y^t b - y^t Ax + x^t A^t y - x^t c = 0 (since y^t Ax = x^t A^t y)
⇔ y^t (b - Ax) + x^t (A^t y - c) = 0.
When x is primal feasible and y is dual feasible, both y^t (b - Ax) ≥ 0 and x^t (A^t y - c) ≥ 0,
so their sum is zero iff y^t (b - Ax) = 0 and x^t (A^t y - c) = 0.

3.4.1
Examples
3.5
Asymmetric duality
The dual problems in Equations (P) and (D) are said to represent symmetric duality.
We now examine the situation where the variables in the primal problem are unrestricted
in sign. Thus the primal problem is:
Maximize z = c^t x subject to Ax ≤ b, x unrestricted.    (AP)
We can convert this into a LP problem in standard form by letting x = x' - x'', where
x' ≥ 0 and x'' ≥ 0.
The constraints then become A(x' - x'') ≤ b.
This can be written as (A | -A) x̄ ≤ b, where x̄ = (x'1 · · · x'n x''1 · · · x''n)^t.
The objective function is z = c^t x' - c^t x'' = (c^t | -c^t) x̄, and clearly x̄ ≥ 0.
Thus the problem becomes
Maximize z = (c^t | -c^t) x̄ subject to (A | -A) x̄ ≤ b, x̄ ≥ 0.
The dual of this problem can be written as:
Minimize w = b^t y subject to A^t y ≥ c and -A^t y ≥ -c, y ≥ 0, i.e. A^t y = c, y ≥ 0.    (AD)
Conversely, the dual of (AD), a problem with equality constraints, is (AP), a problem in
which the variables are unrestricted. (Of course, every LP problem can be expressed as one
with equality constraints by including slack and surplus variables.)
Proposition 3.6 The following two problems are duals of each other:
Maximize z = c^t x subject to Ax ≤ b, x unrestricted;
Minimize w = b^t y subject to A^t y = c, y ≥ 0.
3.5.1
Example
Primal
Maximize z = x1 + 2x2
subject to x1 - x2 ≤ 4
-2x1 + 3x2 ≤ -5
and x1, x2 unrestricted.

Dual
Minimize w = 4y1 - 5y2
subject to y1 - 2y2 = 1
-y1 + 3y2 = 2
and y1, y2 ≥ 0.

Clearly a solution of the dual can occur only where the two equations hold, i.e. when
y1 = 7, y2 = 3. Thus w has a minimum value of 4(7) - 5(3) = 13, and this must also be the
maximum value of z in the primal problem. By complementary slackness both primal
constraints are active (since y1 > 0 and y2 > 0), and solving x1 - x2 = 4, -2x1 + 3x2 = -5
we find that x1 = 7, x2 = 3.
Exercises 3
1. A company which manufactures three products A, B and C, needs to solve the following LP problem in order to maximize their profit.
Maximize
z = 3x1 + x2 + 5x3
subject to
and
1
1
2
20
30
Product
2
3
4
3
8
4
2
1
3
25 40 55
45 80 85
Availability
Up to 90 machine hours per day
Up to 80 components per day
(a) Formulate this as a linear programming problem, where xj is the daily production
of product j and the objective is to maximize the daily profit (income minus
production costs). Find the optimal solution using the simplex tableau method,
and state the optimal profit.
(b) Write down the shadow prices of machine hours and components, briefly explaining their significance.
(c) The firm can increase the available machine hours by up to 10 hours per day by
hiring extra machinery. The cost of this would be 40 per day. Use sensitivity
analysis to decide whether they should hire it, and if so, find the new production
schedule.
44
(d) The production costs of products 1 and 4 are changed by t per unit. Within
what range of values can t lie if the original production schedule is to remain
optimal? Find the corresponding range of values of the maximum profit.
(e) Due to a problem at the distributors, the total daily amount produced has to
be limited to 25 units. Implement the dual simplex algorithm to find a new
production schedule which meets this restriction.
(f) After production has returned to normal (i.e. the original solution is optimal
again) the firm considers manufacturing a new product that would require 3 machine hours and 4 components per unit. The production costs would be £45 and
sales income £75 per unit. Use sensitivity analysis to decide whether they should
go ahead, and if so what the optimum production schedule would be.
3. In each case formulate the dual problem and verify that the given solution is optimal
by showing that primal feasibility, dual feasibility and complementary slackness all
hold.
(a) Maximize 19x1 + 16x2 subject to the constraints x1 + 4x2 ≤ 20, 3x1 + 2x2 ≤ 15,
x1 ≥ 0, x2 ≥ 0.
(b) Minimize 8x1 + 11x2 subject to the constraints 2x1 − 2x2 ≥ 2, x1 + 4x2 ≥ 5,
x1 ≥ 0, x2 ≥ 0.
4. The maximum of the objective function of an LP problem subject to

   x1 + 2x2 + x3 ≤ 20
   3x1 + 2x3 ≤ 24
   2x1 + 2x2 ≤ 22
   and x1, x2, x3 ≥ 0

occurs at (x1, x2, x3) = (0, 4, 12). Deduce the solution of the dual problem.
5. Formulate the dual of each of the two problems in Section 2.2.1 and solve them from
the optimal primal tableaux using the theory of duality and complementary slackness.
6. By finding and solving the dual problem (without using the simplex algorithm), find
the maximum value of
z = 5x1 + 7x2 + 8x3 + 4x4
subject to x1 + x3 ≤ 6, x1 + 2x4 ≤ 5, x2 + x3 ≤ 9, x2 + x4 ≤ 3, where x1, x2, x3, x4 are
unrestricted in sign.
7. Use asymmetric duality to find the solutions (if any) of the following LP problems:
(a)
Maximize
z = x1 + 2x2
subject to
2x1 − 3x2 ≤ 1
x1 + 4x2 ≤ 5
and
x1 , x2 unrestricted.
(b)
Maximize z = … subject to … and x1, x2, x3 unrestricted.

8. The LP problem

   Maximize z = 12x1 + 6x2 + 4x3
   subject to 4x1 + 2x2 + x3 ≤ 60
              2x1 + 3x2 + 3x3 ≤ 50
              x1 + 3x2 + x3 ≤ 45
   and x1, x2, x3 ≥ 0

has the following optimal tableau, in which x4, x5, x6 are the slack variables:

   z | x1 |  x2   | x3 |  x4   |  x5   | x6 | Solution
   0 |  1 |  3/10 |  0 |  3/10 | -1/10 |  0 |   13
   0 |  0 |  4/5  |  1 | -1/5  |  2/5  |  0 |    8
   0 |  0 | 19/10 |  0 | -1/10 | -3/10 |  1 |   24
   1 |  0 |  4/5  |  0 |  14/5 |  2/5  |  0 |  188
(a) State the optimal solution and the values of x1 , . . . , x6 at the optimum.
(b) Write down the dual problem, using y1, y2, y3 for the dual main variables and
y4 , y5 , y6 for the dual surplus variables.
(c) Using the above tableau, write down the optimal solution of the dual problem
and give the values of y1 , . . . , y6 at the optimum.
(d) Show how complementary slackness occurs in these solutions.
(e) Convert the dual problem to a maximization problem and solve it by the Dual
Simplex method.
(f) Comment on the relationships between the two optimal tableaux.
9. By solving the dual problem graphically, solve the LP problem:
Minimize 4x1 + 3x2 + x3
subject to …
and x1, x2, x3 ≥ 0.
Give the values of all the dual and primal variables (main, slack and surplus) at the
optimum.
10. Find the dual of the problem
Maximize c^t x subject to Ax = b, x ≥ 0.
Chapter 4
The Transportation Problem
4.1
Introduction
The transportation model is concerned with finding the minimum cost of transporting a
single commodity from a given number of sources (e.g. factories) to a given number of
destinations (e.g. warehouses). Any destination can receive its demand from more than
one source. The objective is to find how much should be shipped from each source to each
destination so as to minimize the total transportation cost.
[Figure: a network with sources S1, …, Sm on the left (supplies a1, …, am) and destinations D1, …, Dn on the right (demands b1, …, bn); each route from Si to Dj is a line labelled with its unit cost cij.]
The figure represents a transportation model with m sources and n destinations. Each
source or destination is represented by a point. The route between a source and destination
is represented by a line joining the two points. The supply available at source i is ai , and the
demand required at destination j is bj . The cost of transporting one unit between source i
and destination j is cij .
When the total supply is equal to the total demand (i.e. Σ_{i=1}^m ai = Σ_{j=1}^n bj), the transportation model is said to be balanced. In a balanced transportation problem each supply must be entirely used and each demand must be exactly satisfied, so

Σ_{j=1}^n xij = ai for i = 1, …, m   and   Σ_{i=1}^m xij = bj for j = 1, …, n.
For example, consider a model with two factories and three warehouses:

                Warehouse 1   Warehouse 2   Warehouse 3   Supply
   Factory 1        c11           c12           c13         20
   Factory 2        c21           c22           c23         10
   Demand            7            10            13

Here total supply = total demand = 30, so the model is balanced.
A transportation model in which the total supply and total demand are not equal is called
unbalanced. It is always possible to balance an unbalanced transportation problem.
Suppose the demand at warehouse 1 above is 9 units. Then the total supply and total
demand are unequal, and the problem is unbalanced. In this case it is not possible to
satisfy all the demand at each destination simultaneously.
We modify the model as follows: since demand exceeds supply by 2 units we introduce a
dummy source, Factory 3, which has a capacity of 2. The amount sent from this dummy
source to a destination represents the shortfall at that destination.
If supply exceeds demand then a dummy destination, Warehouse 4, is introduced to
absorb the surplus units. Any units shipped from a source to a dummy destination represent
a surplus at that source.
Transportation costs for dummy sources or destinations are allocated as follows:
If a penalty cost is incurred for each unit of unsatisfied demand or unused supply,
then the transportation cost is set equal to the penalty cost.
If there is no penalty cost, the transportation cost is set equal to zero.
If no units may be assigned to a dummy or a particular route, allocate a cost M. This
represents a number larger than any other in the problem (think of it as a million!).
From now on we shall consider balanced transportation problems only, as any unbalanced
problem can be balanced by introducing a dummy.
Let xij denote the amount transported from source i to destination j. Then the problem is
Minimize z = Σ_{i=1}^m Σ_{j=1}^n cij xij

subject to Σ_{j=1}^n xij = ai for i = 1, …, m
and Σ_{i=1}^m xij = bj for j = 1, …, n,

where Σ_{i=1}^m ai = Σ_{j=1}^n bj and all xij ≥ 0.

4.2
The transportation algorithm

Method 1: The North-West Corner Method

Consider the following example, with unit costs, supplies and demands as shown:

             D1   D2   D3   D4   Supply
   S1        10    0   20   11     15
   S2        12    7    9   20     25
   S3         0   14   16   18      5
   Demand     5   15   15   10

Starting from the top left (north-west) cell, allocate as much as possible to the current cell; move to the cell on the right when the current column's demand is satisfied, or to the cell below when the current row's supply is exhausted.
This gives the basic feasible solution x11 = 5, x12 = 10, x22 = 5, x23 = 15, x24 = 5, x34 = 5.
All the other variables are non-basic and therefore equal to zero in this solution. There are
m + n − 1 = 6 basic variables, as required.
The values of the basic variables xij are entered in the top left of each cell. There should
always be m + n − 1 of these; in certain (degenerate) cases some of them may be zero. They
must always add up to the total supply and demand in each row and column.
Note that some books position the data differently in the cells of the tableau.
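The North-West Corner construction can be sketched as a short function. This is a minimal illustration (not the notes' own code), and it reproduces the basic feasible solution quoted above; the supply and demand figures are those of the example.

```python
def north_west_corner(supply, demand):
    """Greedy North-West Corner initial basic feasible solution.

    Starts at the top-left cell, allocates as much as possible, then
    moves down when a supply is exhausted, otherwise right.
    Returns a dict {(i, j): allocation}.
    """
    s, d = list(supply), list(demand)
    i = j = 0
    alloc = {}
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])
        alloc[(i, j)] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0 and i < len(s) - 1:
            i += 1
        else:
            j += 1
    return alloc

# The example from the notes: supplies (15, 25, 5), demands (5, 15, 15, 10)
print(north_west_corner([15, 25, 5], [5, 15, 15, 10]))
```

Running it gives the six basic cells x11 = 5, x12 = 10, x22 = 5, x23 = 15, x24 = 5, x34 = 5.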
Method 2: The Least-Cost Method
This method usually provides a better initial basic feasible solution than the North-West
Corner method. Despite its name, it does not give the actual minimum cost. It uses least
available costs to obtain a starting tableau.
Assign as much as possible to the cell with the smallest unit cost in the entire tableau.
If there is a tie then choose arbitrarily. It may be necessary to assign 0.
Cross out the row or column whose supply or demand is satisfied. If a row and column
are both satisfied then cross out only one of them.
Adjust the supply and demand for those rows and columns which are not crossed out.
Repeat the above steps on the remaining tableau until only one row or column remains.
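The steps above can be sketched as follows. This is a minimal implementation for balanced problems; the example data (including the two zero unit costs) are reconstructed from the worked example in this section, so treat them as assumptions.

```python
def least_cost(costs, supply, demand):
    """Least-Cost initial allocation: repeatedly fill the cheapest
    remaining cell, then cross out one satisfied row or column."""
    s, d = list(supply), list(demand)
    rows = set(range(len(s)))
    cols = set(range(len(d)))
    alloc = {}
    while rows and cols:
        # cheapest remaining cell (ties broken by position)
        i, j = min(((i, j) for i in rows for j in cols),
                   key=lambda ij: costs[ij[0]][ij[1]])
        q = min(s[i], d[j])
        alloc[(i, j)] = q          # may be a degenerate zero allocation
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            rows.remove(i)         # cross out only one line if both satisfied
        else:
            cols.remove(j)
    return alloc

costs = [[10, 0, 20, 11], [12, 7, 9, 20], [0, 14, 16, 18]]
alloc = least_cost(costs, [15, 25, 5], [5, 15, 15, 10])
cost = sum(costs[i][j] * q for (i, j), q in alloc.items())
print(alloc, cost)
```

For this data the resulting allocation costs 335: better than the North-West Corner start, but (as the notes warn) not yet the minimum cost of the problem.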
For the above example, the data tableau is:

             D1   D2   D3   D4   Supply
   S1        10    0   20   11     15
   S2        12    7    9   20     25
   S3         0   14   16   18      5
   Demand     5   15   15   10
The dual of the transportation problem is:

Maximize w = Σ_{i=1}^m ai λi + Σ_{j=1}^n bj μj
subject to λi + μj ≤ cij for all i and j,
with λi, μj unrestricted in sign.

As before, primal feasibility, dual feasibility and complementary slackness are necessary and
sufficient for optimality. This is the underlying strategy for solving the problem.
For illustrative purposes, we shall start the algorithm for the above example using the bfs
that was provided by the North-West Corner method. The Least-Cost method will usually
give a better initial allocation.
We assign λ's and μ's which satisfy λi + μj = cij to the rows and columns containing the
current basic variables. (This comes from the complementary slackness condition xij sij = 0,
where sij is the slack in the dual constraint.) As one constraint in the original problem was
redundant, we can choose one of the λ or μ values arbitrarily. For simplicity, it is
conventional to set λ1 = 0.
The values of sij = cij − λi − μj are entered in the top right of the cells.
If all the sij values are non-negative, we have an optimal solution.
Carrying out this procedure, the initial transportation tableau becomes (allocations shown for basic cells, sij values for non-basic cells, unit costs in brackets):

              μ1 = 10     μ2 = 0      μ3 = 2      μ4 = 13    Supply
   λ1 = 0      5 [10]    10  [0]    s=18 [20]   s=-2 [11]      15
   λ2 = 7    s=-5 [12]    5  [7]     15  [9]      5  [20]      25
   λ3 = 5   s=-15  [0]   s=9 [14]    s=9 [16]     5  [18]       5
   Demand       5           15          15          10
We test for optimality by checking whether sij = cij − λi − μj ≥ 0 for all i and j, i.e. in
all cells. (This is the dual feasibility condition.) If this holds for every cell of the tableau
then the optimum has been reached.
Otherwise, choose the cell with the most negative value of sij.
This identifies the variable to enter the basis. In this case the entering variable is x31.
The loop for the entering variable x31 passes through the basic cells (1,1), (1,2), (2,2), (2,4) and (3,4).
We now see how large the entering variable can be made without violating the feasibility
conditions. Suppose x31 increases from zero to some level θ > 0. Then x11 must change to
5 − θ to preserve the demand constraint in column 1. This has a knock-on effect for x12,
which therefore changes to 10 + θ. This process continues for all the corners of the loop.
The departing variable is chosen from among the corners of the loop which decrease when
the entering variable increases above zero level. It is the one with the smallest current value,
as this will be the first to reach zero as the entering variable increases. Any further increase
in the entering variable past this value leads to infeasibility.
We may choose any one of x11 , x22 or x34 as the departing variable here. Arbitrarily, we
choose x34 . The entering variable x31 can increase to 5 and feasibility will be preserved.
Second tableau (allocations for basic cells, sij for non-basic cells, unit costs in brackets):

              μ1 = 10     μ2 = 0      μ3 = 2      μ4 = 13    Supply
   λ1 = 0      0 [10]    15  [0]    s=18 [20]   s=-2 [11]      15
   λ2 = 7    s=-5 [12]    0  [7]     15  [9]     10  [20]      25
   λ3 = -10    5  [0]   s=24 [14]   s=24 [16]   s=15 [18]       5
   Demand       5           15          15          10
Notice that some of the basic variables are zero valued: this solution is degenerate. However,
this causes no problem to the general method of solving the problem.
As before, we construct λ's and μ's which satisfy λi + μj = cij for the basic variables. Then
we check for optimality as before. This tableau is not optimal because sij ≥ 0 does not
hold for all the cells. The most negative value of sij occurs for x21, so this is the entering
variable.
Next we construct a loop: (2,1) → (1,1) → (1,2) → (2,2). The decreasing corners x11 and
x22 both currently have value 0, so θ can only be as large as zero. (This is bound to happen
because of the degeneracy of the current solution.) We let x11 be the departing variable.
Third tableau (same layout: allocations, sij values, unit costs in brackets):

              μ1 = 5      μ2 = 0      μ3 = 2      μ4 = 13    Supply
   λ1 = 0     s=5 [10]   15  [0]    s=18 [20]   s=-2 [11]      15
   λ2 = 7      0 [12]     0  [7]     15  [9]     10  [20]      25
   λ3 = -5     5  [0]   s=19 [14]   s=19 [16]   s=10 [18]       5
   Demand       5           15          15          10
Again, this is a degenerate solution, as some of the basic variables are equal to zero. We
construct λ's and μ's as before, and then check for optimality. The tableau is not optimal,
and x14 is the entering variable. The loop construction shows that θ can be as large as 10,
and that x24 is the departing variable.
Fourth tableau (same layout: allocations, sij values, unit costs in brackets):

              μ1 = 5      μ2 = 0      μ3 = 2      μ4 = 11    Supply
   λ1 = 0     s=5 [10]    5  [0]    s=18 [20]    10  [11]      15
   λ2 = 7      0 [12]    10  [7]     15  [9]     s=2 [20]      25
   λ3 = -5     5  [0]   s=19 [14]   s=19 [16]   s=12 [18]       5
   Demand       5           15          15          10

This is now optimal because λi + μj ≤ cij, i.e. sij ≥ 0, in every cell. The minimum cost is
therefore given by 5 × 0 + 10 × 11 + 0 × 12 + 10 × 7 + 15 × 9 + 5 × 0 = 315, which occurs
when x12 = 5, x14 = 10, x22 = 10, x23 = 15, x31 = 5, and all the other decision variables
are equal to zero.
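The optimality test can be automated. The sketch below recomputes the λ's and μ's from the basic cells (taking λ1 = 0), checks dual feasibility sij ≥ 0 in every cell, and confirms the feasibility and cost of the final allocation. The cost matrix is the one recovered from this worked example, so treat it as an assumption.

```python
from itertools import product

costs = [[10, 0, 20, 11], [12, 7, 9, 20], [0, 14, 16, 18]]
supply = [15, 25, 5]
demand = [5, 15, 15, 10]
# basic cells of the final tableau (0-based indices); x21 is a basic zero
basic = {(0, 1): 5, (0, 3): 10, (1, 0): 0, (1, 1): 10, (1, 2): 15, (2, 0): 5}

# solve lam_i + mu_j = c_ij over the basic cells, with lam_0 = 0
lam, mu = {0: 0}, {}
changed = True
while changed:
    changed = False
    for (i, j) in basic:
        if i in lam and j not in mu:
            mu[j] = costs[i][j] - lam[i]; changed = True
        elif j in mu and i not in lam:
            lam[i] = costs[i][j] - mu[j]; changed = True

# dual feasibility: s_ij = c_ij - lam_i - mu_j >= 0 everywhere
assert all(costs[i][j] - lam[i] - mu[j] >= 0
           for i, j in product(range(3), range(4)))
# primal feasibility of the allocation, and its total cost
assert [sum(basic.get((i, j), 0) for j in range(4)) for i in range(3)] == supply
assert [sum(basic.get((i, j), 0) for i in range(3)) for j in range(4)] == demand
cost = sum(costs[i][j] * q for (i, j), q in basic.items())
assert cost == 315
print("optimal, minimum cost =", cost)
```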
4.2.1
Example
This example emphasizes the connection between the transportation algorithm and the
primal-dual linear programming problems which underlie the method.
Three factories F1 , F2 , F3 produce 15000, 25000 and 15000 units respectively of a commodity. Three warehouses W1 , W2 , W3 require 20000, 19000 and 16000 units respectively.
The cost of transporting from Fi to Wj is cij per unit, where c11 = 12, c12 = 7, c13 = 10,
c21 = 10, c22 = 8, c23 = 6, c31 = 9, c32 = 15, c33 = 8.
If xij thousand units are transported from Fi to Wj, the total cost £1000z is given by

z = 12x11 + 7x12 + 10x13 + 10x21 + 8x22 + 6x23 + 9x31 + 15x32 + 8x33,

which must be minimized subject to the constraints

x11 + x12 + x13 = 15
x21 + x22 + x23 = 25
x31 + x32 + x33 = 15
x11 + x21 + x31 = 20
x12 + x22 + x32 = 19
x13 + x23 + x33 = 16

and all xij ≥ 0.

The dual constraints λi + μj ≤ cij, written with slack variables sij ≥ 0, are:

λ1 + μ1 + s11 = 12    λ1 + μ2 + s12 = 7     λ1 + μ3 + s13 = 10
λ2 + μ1 + s21 = 10    λ2 + μ2 + s22 = 8     λ2 + μ3 + s23 = 6
λ3 + μ1 + s31 = 9     λ3 + μ2 + s32 = 15    λ3 + μ3 + s33 = 8
We choose to find an initial bfs by the north-west corner method. After three iterations, all
the sij are non-negative, so we have primal feasibility, dual feasibility and complementary
slackness. Hence the optimal solution has been found. The minimum cost is £418,000.
The primal solution is (x11, x12, …, x33) = (0, 15, 0, 5, 4, 16, 15, 0, 0 | 0, 0, 0, 0, 0, 0). The last
six 0s represent unnecessary slack variables in the primal problem; they are included only
to show that complementary slackness does indeed hold when we look at the dual solution
(λ1, λ2, λ3, μ1, μ2, μ3, s11, …, s33) = (0, 1, 0, 9, 7, 5 | 3, 0, 5, 0, 0, 0, 0, 8, 3).
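Primal feasibility, dual feasibility and complementary slackness for this example can be confirmed directly; the quoted primal and dual values are taken as given.

```python
costs = [[12, 7, 10], [10, 8, 6], [9, 15, 8]]
x = [[0, 15, 0], [5, 4, 16], [15, 0, 0]]   # thousands of units
lam, mu = [0, 1, 0], [9, 7, 5]             # dual solution

for i in range(3):
    for j in range(3):
        s = costs[i][j] - lam[i] - mu[j]
        assert s >= 0             # dual feasibility
        assert x[i][j] * s == 0   # complementary slackness
cost = sum(costs[i][j] * x[i][j] for i in range(3) for j in range(3))
print("minimum cost (in thousands):", cost)
```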
The problem is now modified as follows: The demand at W2 is increased to 28. There is
no link between F2 and W2. All the demand at W3 must be satisfied.
We add a dummy source F4 with capacity 9.
The costs c22 and c43 are set equal to a large number M. This is the standard method
for ensuring that the allocation to a particular cell is always zero. M is to be thought of as
larger than any other number in the problem. The tableau becomes:

             W1    W2    W3   Supply
   F1        12     7    10     15
   F2        10     M     6     25
   F3         9    15     8     15
   F4         0     0     M      9
   Demand    20    28    16
We can find an initial allocation by the least-cost method, or by adapting the existing
optimal tableau.
The minimum cost is £450,000; this is uniquely determined, though the allocation which
produces it may not be.
Exercises 4
1. For the transportation problem given by the following tableau, find an initial basic
feasible solution by the least-cost method and proceed to find an optimal solution.

                       Supply
      2    1    3   |    7
      4    5    6   |    8
   Demand: 10  5  15

2. Find the optimal solution of the transportation problem with the following tableau.

                       Supply
     10    8   12   |    8
     12   15   12   |    7
     20   10   10   |   10
   Demand: 15  10  10

3. The supply at Source 3 is now reduced from 10 to 6. There is a penalty of 5 for each
unit required but not supplied. Find the new optimal solution.
4. Three refineries with maximum daily capacities of 6, 5, and 8 million gallons of oil
supply three distribution areas with daily demands of 4, 8 and 7 million gallons.
Oil is transported to the three distribution areas through a network of pipes. The
transportation cost is 1p per 100 gallons per mile. The mileage table below shows
that refinery 1 is not connected to distribution area 3. Formulate the problem as a
transportation model and solve it. [Hint: Let the transportation cost for the non-connected route be equal to some large value M, say, and then proceed as normal.]

                  Distribution Area
                   1      2      3
   Refinery 1     120    180     -
   Refinery 2     300    100    80
   Refinery 3     200    250   120
Chapter 5
Non-linear optimization
5.1
The gradient and the Hessian

For a function f(x) = f(x1, …, xn) of n variables, the gradient vector and the Hessian matrix are defined by

∇f(x) = ( ∂f(x)/∂x1, …, ∂f(x)/∂xn )^t

and

H(f(x)) = [ ∂²f(x)/∂x1²      ∂²f(x)/∂x1∂x2   …   ∂²f(x)/∂x1∂xn ]
          [ ∂²f(x)/∂x2∂x1    ∂²f(x)/∂x2²     …   ∂²f(x)/∂x2∂xn ]
          [      ⋮                 ⋮                   ⋮        ]
          [ ∂²f(x)/∂xn∂x1    ∂²f(x)/∂xn∂x2   …   ∂²f(x)/∂xn²   ]
5.1.1
Examples
(i) If f(x, y, z) = 4x − 2y + z then ∇f(x, y, z) = (4, −2, 1)^t and H(f(x, y, z)) is the 3 × 3 zero matrix.

(ii) If f(x, y) = 2x² + 6xy + y² then ∇f(x, y) = (4x + 6y, 6x + 2y)^t and

H(f(x, y)) = [ 4  6 ]
             [ 6  2 ]

(iii) If f(x, y, z) = e^{2x} y + (ln 2z)/y then

∇f(x, y, z) = ( 2e^{2x} y,  e^{2x} − (ln 2z)/y²,  1/(yz) )^t

and

H(f(x, y, z)) = [ 4e^{2x} y    2e^{2x}          0        ]
                [ 2e^{2x}     2(ln 2z)/y³    −1/(y²z)    ]
                [ 0           −1/(y²z)       −1/(yz²)    ]
Now let g(r) = f(a + ru) for fixed a, u ∈ Rⁿ. By the chain rule,

g′(r) = Σ_{i=1}^n ui ∂f(a + ru)/∂xi,   so   g′(0) = u^t ∇f(a),

and differentiating again,

g″(0) = Σ_{i,j=1}^n ui uj ∂²f(a)/∂xi∂xj = u^t H(f(a)) u.

Thus we have:
Proposition 5.1 If g(r) = f(a + ru) then g′(0) = u^t ∇f(a) and g″(0) = u^t H(f(a)) u.
Proposition 5.2 (First order necessary condition for a local optimum.)
Let f(x) be a differentiable function on S ⊆ Rⁿ. If f(x) attains a local maximum or minimum
value over S at an interior point x* ∈ S then ∇f(x*) = 0.
Proof
As x* is an interior point of S, we can move from x* in any direction u and still be in S.
If x* is a local maximizer, f(x) must be non-increasing in every direction moving away from
x*, so u^t ∇f(x*) ≤ 0 and (−u)^t ∇f(x*) ≤ 0. Hence u^t ∇f(x*) = 0 for all u ∈ Rⁿ.
Taking u = ei (the vector with ith entry 1 and all other entries 0) shows that the ith
component of ∇f(x*) is 0 for i = 1, …, n. Hence ∇f(x*) = 0.
If x* is a local minimizer, the same reasoning applies with non-increasing replaced by
non-decreasing and ≤ by ≥.
This condition is not sufficient for a maximum or minimum, as ∇f(x) = 0 also holds at a
saddle point, i.e. a critical point where f(x) is locally neither minimum nor maximum.
5.2
Quadratic forms
Recall that if A is a square matrix, the function x^t Ax is called a quadratic form. Any
expression consisting entirely of second-order terms can be written in this form. We can
always choose A to be symmetric.

For example, x1² − 8x1x2 + 5x2² = x^t Ax, where x = (x1, x2)^t and A = [ 1 −4 ; −4 5 ].

Similarly 2x² + y² − z² + 4xy − 6yz = (x y z) [ 2 2 0 ; 2 1 −3 ; 0 −3 −1 ] (x y z)^t.

A real square matrix A, and equivalently the quadratic form x^t Ax, is defined to be

positive definite if x^t Ax > 0 ∀ x ≠ 0,
positive semi-definite if x^t Ax ≥ 0 ∀ x,
negative definite if x^t Ax < 0 ∀ x ≠ 0,
negative semi-definite if x^t Ax ≤ 0 ∀ x,
and indefinite if none of these hold.

A principal minor of order j of A is the determinant of the submatrix formed by any j rows and the corresponding j columns; the leading principal minor of order j uses the first j rows and columns.

For example,

[ 4   2  −3 ]
[ 2  −1   0 ]
[−3   0   3 ]

has principal minors 4, −1, 3 (first order), −8, 3, −3 (second order) and −15 (third order). The leading ones are 4, −8, −15.
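Principal minors are easy to enumerate programmatically. The sketch below computes all principal minors of the example matrix by Laplace expansion (adequate for small matrices); the helper names are illustrative, not standard library functions.

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def principal_minors(A, k):
    # determinants of all k x k submatrices on matching rows/columns
    n = len(A)
    return [det([[A[i][j] for j in idx] for i in idx])
            for idx in combinations(range(n), k)]

A = [[4, 2, -3], [2, -1, 0], [-3, 0, 3]]
print([principal_minors(A, k) for k in (1, 2, 3)])
leading = [det([row[:k] for row in A[:k]]) for k in (1, 2, 3)]
print("leading principal minors:", leading)
```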
Proposition 5.3 A real symmetric matrix A is positive definite if and only if all its leading
principal minors are strictly positive.
Proof
(⇐) Let Mj be the submatrix of A consisting of the first j rows and the first j columns.
We can write

A = [ Mj  Qj ]
    [ Rj  Sj ]

where Qj, Rj, Sj are matrices of appropriate sizes (0 × 0 when j = n).
Suppose all leading principal minors of A are positive, so det(M1), …, det(Mn) are all > 0.
For j = 1, …, n, let uj be the vector in Rʲ with jth entry 1 and all other entries (if any) 0,
and let pj = (p1j, …, pjj)^t satisfy the linear system of equations Mj pj = uj.
By Cramer's rule, pjj = det(Mj−1)/det(Mj) > 0 (taking det(M0) = 1).
Let P be the upper triangular matrix with entries pij for i ≤ j, and 0 for i > j.
The jth column of P is (pj | 0)^t. Thus column j of AP is

[ Mj  Qj ] [ pj ]   [ Mj pj ]   [  uj   ]
[ Rj  Sj ] [ 0  ] = [ Rj pj ] = [ Rj pj ].

Hence AP is lower-triangular, with all the entries on its main diagonal equal to 1.
Let C = P^t AP. C is a product of two lower-triangular matrices, so is itself lower-triangular.
But C^t = P^t A^t P = P^t AP as A is symmetric, so C is symmetric. Hence C is a diagonal
matrix, and its diagonal entries are p11, …, pnn, which we have seen are all positive.
Let x = Py. Then x^t Ax = y^t Cy = Σ_{j=1}^n pjj yj² > 0 for all y ≠ 0. As P is invertible,
every x ≠ 0 is of the form Py with y ≠ 0, so A is positive definite.

(⇒) Suppose A is positive definite. Fix j, let y ∈ Rʲ be non-zero and let x = (y | 0)^t ∈ Rⁿ. Then

0 < x^t Ax = (y^t | 0) [ Mj  Qj ] [ y ] = (y^t | 0) [ Mj y ] = y^t Mj y,
                       [ Rj  Sj ] [ 0 ]            [ Rj y ]

so Mj is positive definite. Its eigenvalues are therefore all positive, and det(Mj), being their product, is positive.
Symmetric matrix A       Definition               Eigenvalues       Principal minors
Positive definite        x^t Ax > 0 ∀ x ≠ 0       all > 0           leading ones all positive
Positive semi-definite   x^t Ax ≥ 0 ∀ x           all ≥ 0           all non-negative
Negative definite        x^t Ax < 0 ∀ x ≠ 0       all < 0           jth leading p.m. has sign of (−1)ʲ
Negative semi-definite   x^t Ax ≤ 0 ∀ x           all ≤ 0           those of −A non-negative
Indefinite               none of the above        some +, some −    none of the above
5.2.1
Examples
(a) Let A = [ −3 2 ; 2 −4 ]. The leading principal minors are −3 and 8, alternating in sign
starting with −, so A is negative definite.
Or: −A has l.p.m.s 3 and 8, so −A is positive definite and hence A is negative definite.

(b) Let B = [ 3 −6 ; −6 12 ]. The l.p.m.s are 3 and 0. The other first order principal minor
is 12, which is greater than 0, so B is positive semi-definite (and singular).

(c) Let C = [ 1 0 −2 ; 0 3 −1 ; −2 −1 5 ]. The l.p.m.s are 1, 3, 2, so C is positive definite.

(d) Let D = [ 4 2 −3 ; 2 −1 0 ; −3 0 3 ]. We found its leading principal minors to be 4, −8, −15.
D is indefinite, hence so is the quadratic form 4x² − y² + 3z² + 4xy − 6xz.
Proposition 5.4 (Second order sufficient conditions for local optima.)
Let f(x) be a twice-differentiable function on S ⊆ Rⁿ. Suppose ∇f(x*) = 0.
(i) If H(f(x*)) is positive definite then x* is a strong local minimizer of f(x).
(ii) If H(f(x*)) is negative definite then x* is a strong local maximizer of f(x).
(iii) If H(f(x*)) is indefinite and non-singular then f(x) has a saddle-point at x*.
Proof of (i)
Let u be any feasible direction at x*. Let g(r) = f(x* + ru), so g(0) = f(x*).
As ∇f(x*) = 0, g′(0) = u^t ∇f(x*) = 0.
If also H(f(x*)) is positive definite, i.e. u^t H(f(x*))u > 0 for all u ≠ 0, then g″(0) > 0.
Hence r = 0 is a strong local minimizer of g(r), so g(0) < g(r) for all r close to 0.
Equivalently, f(x*) < f(x* + ru) for small enough r and all feasible directions u. Thus x* is
a strong local minimizer of f(x) over S.
5.2.2
Examples
(i) Let f(x, y, z) = xyz − x² − y² − z². Then

∇f(x, y, z) = ( yz − 2x, xz − 2y, xy − 2z )^t = 0 at a critical point,

so 2x = yz, 2y = xz, 2z = xy, giving x = y = z = 0 or x² = y² = z² = 4.
Critical points occur when (x, y, z) = (0, 0, 0), (2, 2, 2), (2, −2, −2), (−2, 2, −2), (−2, −2, 2).

H(f(x, y, z)) = [ −2  z  y ; z  −2  x ; y  x  −2 ], with l.p.m.s −2, 4 − z², 2(x² + y² + z² + xyz − 4).

At (0, 0, 0) these are −2, 4, −8, so H is negative definite and (0, 0, 0) is a strong local maximizer of
f. At the other four points they are −2, 0, 32, so H is indefinite and, as det(H) ≠ 0, f(x, y, z)
has saddle points there.
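The classification of the critical points can be checked numerically. The helper names here (lpms, hessian) are illustrative only; the Hessian is that of f = xyz − x² − y² − z².

```python
def lpms(H):
    # leading principal minors of a 3 x 3 matrix
    m1 = H[0][0]
    m2 = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    m3 = (H[0][0] * (H[1][1] * H[2][2] - H[1][2] * H[2][1])
          - H[0][1] * (H[1][0] * H[2][2] - H[1][2] * H[2][0])
          + H[0][2] * (H[1][0] * H[2][1] - H[1][1] * H[2][0]))
    return (m1, m2, m3)

def hessian(x, y, z):
    # Hessian of f(x, y, z) = xyz - x^2 - y^2 - z^2
    return [[-2, z, y], [z, -2, x], [y, x, -2]]

print(lpms(hessian(0, 0, 0)))   # (-2, 4, -8): negative definite
for p in [(2, 2, 2), (2, -2, -2), (-2, 2, -2), (-2, -2, 2)]:
    # indefinite with non-zero determinant: saddle points
    assert lpms(hessian(*p)) == (-2, 0, 32)
```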
(ii) Let f(x, y, z) = 5x + 4y + 2z − 2x² − 3y² − 5z² − 2xy − 6xz − 4yz. Then

∇f(x) = ( 5 − 4x − 2y − 6z, 4 − 6y − 2x − 4z, 2 − 10z − 6x − 4y )^t = 0 at a critical point,

so x = 11.25, y = 1.75, z = −7.25.

H(f(x)) = [ −4 −2 −6 ; −2 −6 −4 ; −6 −4 −10 ]. This is negative definite as its l.p.m.s are −4, 20, −16, so
f(x) has a local maximum value of 24.375 at (x, y, z) = (11.25, 1.75, −7.25).
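Assuming the first-order conditions reconstruct to 4x + 2y + 6z = 5, 2x + 6y + 4z = 4 and 6x + 4y + 10z = 2 (signs recovered from the Hessian), the linear system can be solved exactly with rational arithmetic; this Gauss-Jordan sketch confirms the critical point.

```python
from fractions import Fraction as F

# augmented matrix of the first-order conditions
A = [[F(4), F(2), F(6), F(5)],
     [F(2), F(6), F(4), F(4)],
     [F(6), F(4), F(10), F(2)]]
for col in range(3):
    pivot = next(r for r in range(col, 3) if A[r][col] != 0)
    A[col], A[pivot] = A[pivot], A[col]
    A[col] = [v / A[col][col] for v in A[col]]      # scale pivot row
    for r in range(3):
        if r != col:                                # eliminate elsewhere
            A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]

x, y, z = (row[3] for row in A)
print(x, y, z)  # 45/4, 7/4, -29/4, i.e. (11.25, 1.75, -7.25)
```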
5.3
Convex and concave functions
If f is a convex function of one or two variables, its graph lies below (or on) the straight line
joining any two points on it, so the region above it is a convex set.
If f is a concave function of one or two variables, its graph lies above (or on) the straight
line joining any two points on it, so the region below it is a convex set.
Any linear function is both convex and concave, but not strictly so.
A positive multiple of a convex / concave function is itself convex / concave. For
example, x2 is convex on R, so ax2 is convex on R for any a > 0.
A sum of convex / concave functions is itself convex / concave. For example, x2 , y 4
and z 6 are each convex, so x2 + y 4 + z 6 is convex on R3 .
It may not be clear from the definitions whether a function of several variables is convex,
concave or neither. For functions of more than one variable it is not enough just to consider
the second derivatives. We therefore derive criteria which are easier to use.
Proposition 5.5 A twice-differentiable function f defined on a convex set S ⊆ Rⁿ is
(i) convex on S if and only if, at each x ∈ S, H(f(x)) is positive semi-definite;
(ii) concave on S if and only if, at each x ∈ S, H(f(x)) is negative semi-definite;
(iii) strictly convex on S if, at each x ∈ S, H(f(x)) is positive definite;
(iv) strictly concave on S if, at each x ∈ S, H(f(x)) is negative definite.
(Note that semi-definite includes definite.)
Proof: We prove the "if" part of (i).
Let x and y be in S and let u = y − x.
For 0 ≤ r ≤ 1, let z = (1 − r)x + ry = x + ru. As S is a convex set, z ∈ S.
Let g(r) = f(z), so g(0) = f(x) and g(1) = f(y).
By Proposition 5.1, g″(r) = u^t H(f(z))u ≥ 0 for all r ∈ [0, 1], so g is a convex function on [0, 1]. Hence

f((1 − r)x + ry) = g((1 − r)·0 + r·1) ≤ (1 − r)g(0) + rg(1) = (1 − r)f(x) + rf(y).

Thus f is convex on S.

Proposition 5.6 Let f be a convex (respectively concave) function on a convex set S ⊆ Rⁿ. If there is a
point x* ∈ S at which ∇f(x*) = 0, then x* minimizes (respectively maximizes) f(x) globally over S.

Proof (convex case): Let y ∈ S and let g(r) = f(x* + r(y − x*)) for 0 ≤ r ≤ 1. Then g is convex and g′(0) = (y − x*)^t ∇f(x*) = 0.
Hence 0 ≤ g(1) − g(0), so g(1) ≥ g(0), i.e. f(y) ≥ f(x*), for all y ∈ S.
Thus x* minimizes f(x) globally over S.

It can also be shown that every local minimizer (maximizer) of a convex (concave) function is global, even
without the assumption of differentiability.
5.3.1
Examples
(iii) Let f(x, y, z) = ln x + ln y − y/z, where x, y, z > 0. Then

∇f(x, y, z) = ( 1/x, 1/y − 1/z, y/z² )^t

and

H(f(x, y, z)) = [ −1/x²    0        0      ]
                [  0      −1/y²    1/z²    ]
                [  0       1/z²   −2y/z³   ]

with l.p.m.s −1/x², 1/(x²y²), (y − 2z)/(x²yz⁴). H(f) is negative
definite provided y − 2z < 0. Thus f is strictly concave in the region where y < 2z.

(iv) Let f(x, y) = x⁴ + y⁴. Then ∇f(x, y) = (4x³, 4y³)^t and

H(f(x, y)) = [ 12x²    0   ]
             [  0    12y²  ]

with l.p.m.s 12x², 144x²y². This is positive definite unless x or y is zero, when it is positive
semi-definite. Thus f is convex on R².
Exercises 5
1. Express each of the following in the form x^t Ax, where A is a symmetric matrix.
(a) 4x² + 3y²,
(b) x² + y² + z²,
(c) x² + 4xy − 6y².
2. Find and classify the critical points of f(x, y, z) = x³ + y³ + z³ − 9xy − 9xz + 27x.
Chapter 6
Constrained optimization
6.1
Lagrange multipliers
To optimize f(x) subject to m equality constraints gj(x) = bj (j = 1, …, m), we form the Lagrangean function

L(x, λ) = f(x) + Σ_{j=1}^m λj (bj − gj(x1, …, xn)),

where λ1, …, λm are called Lagrange multipliers.
6.1.1
Example
For a problem with constraints x1 + x2 = 5 and x1x2x3 = 7, the Lagrangean is

L(x, λ) = f(x) + λ1(5 − x1 − x2) + λ2(7 − x1x2x3);

or, for the problem of optimizing x1² + 2x2x3 subject to x1² + x2² = 4,

L(x, λ) = x1² + 2x2x3 + λ(4 − x1² − x2²).
Proposition 6.1 (Lagrange sufficiency theorem)
(i) If there exist λ* ∈ Rᵐ and x* ∈ S such that L(x*, λ*) ≥ L(x, λ*) for all x ∈ S, and
g(x*) = b, then x* maximizes f(x) over S subject to g(x) = b.
(ii) If there exist λ* ∈ Rᵐ and x* ∈ S such that L(x*, λ*) ≤ L(x, λ*) for all x ∈ S, and
g(x*) = b, then x* minimizes f(x) over S subject to g(x) = b.
Proof of (i)
If x ∈ S satisfies g(x) = b then L(x, λ*) = f(x) + (λ*)^t (b − g(x)) = f(x); in particular
L(x*, λ*) = f(x*). Hence, for every such x,

f(x*) = L(x*, λ*) ≥ L(x, λ*) = f(x),

so x* maximizes f(x) over S subject to g(x) = b. Part (ii) is proved in the same way with
the inequalities reversed.

The multipliers also arise from a necessity argument (the Lagrange necessity theorem).
Let v(b) denote the optimal value of f(x) subject to g(x) = b, and let h(x) = f(x) − v(g(x)).
By the chain rule,

∂h(x)/∂xi = ∂f(x)/∂xi − Σ_{j=1}^m λj(x) ∂gj(x)/∂xi,   where λj(x) = ∂v(g(x))/∂gj(x).

Since f(x) ≤ v(g(x)) for every x, h attains its maximum value 0 at the optimizer x*, so
∂h(x*)/∂xi = 0. Let λj* = λj(x*). Then

∂f(x*)/∂xi = Σ_{j=1}^m λj* ∂gj(x*)/∂xi,   so   ∇f(x*) = Σ_{j=1}^m λj* ∇gj(x*).
6.1.2
Examples
(ii) Maximize f(x, y, z) over R³+ subject to the constraints x + y = 3 and y + z = 3.

Computer algebra gives the solutions as x = 2, y = 1, z = 2, λ1 = 1/√2, λ2 = 1/√2.
We can show that L is a concave function on the convex set R³+. Hence
(x, y, z) = (2, 1, 2)
is a global maximizer of f(x, y, z) subject to the constraints. fmax = 4√2.
Clearly the minimum value of f(x, y, z) over R³+ is 0. The method does not find this, as it
occurs on the boundary of the feasible region and not at an interior point.
(iii) Minimize f(x, y, z) = x² + y² + z² over R³, subject to x + 2y + z = 1 and 2x − y − 3z = 4.

Let L(x, y, z, λ1, λ2) = x² + y² + z² + λ1(1 − x − 2y − z) + λ2(4 − 2x + y + 3z).

The first-order necessary conditions are 2x − λ1 − 2λ2 = 0, 2y − 2λ1 + λ2 = 0,
2z − λ1 + 3λ2 = 0, and the constraints are x + 2y + z = 1, 2x − y − 3z = 4.

The first equations give λ1 = 2x/5 + 4y/5 and λ2 = 4x/5 − 2y/5.

From the remaining equations, x = 16/15, y = 1/3, z = −11/15, with λ1 = 52/75 and λ2 = 54/75.

L is a sum of convex and linear functions so it is a convex function on the convex set R³.
Hence (x, y, z) = (16/15, 1/3, −11/15) minimizes f(x, y, z) globally over R³. fmin = 134/75.
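The quoted solution can be verified exactly with rational arithmetic; the first-order conditions and constraints are those of example (iii) as reconstructed above.

```python
from fractions import Fraction as F

x, y, z = F(16, 15), F(1, 3), F(-11, 15)
l1, l2 = F(52, 75), F(54, 75)

# first-order conditions of L = x^2+y^2+z^2 + l1(1-x-2y-z) + l2(4-2x+y+3z)
assert 2 * x - l1 - 2 * l2 == 0
assert 2 * y - 2 * l1 + l2 == 0
assert 2 * z - l1 + 3 * l2 == 0
# the two constraints
assert x + 2 * y + z == 1 and 2 * x - y - 3 * z == 4
# minimum value of the objective
fmin = x**2 + y**2 + z**2
assert fmin == F(134, 75)
print("verified: fmin =", fmin)
```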
Sensitivity analysis
In the proof of the Lagrange necessity theorem, we defined λj(x) to be ∂v(g(x))/∂gj(x). Thus

λj* = ∂v(g(x*))/∂gj(x*) = ∂v(b)/∂bj,

i.e. the optimal value of the Lagrange multiplier for the jth constraint is equal to the rate
of change in the maximal value of the objective function as the jth constraint is relaxed.
If an increase Δbj in the right-hand side of a constraint yields an increase ΔV in the optimal
value of f(x) then ΔV ≈ λj* Δbj.
If the constraints arise because of limits on some resources, then λj* is called the shadow
price of the jth resource.
66
m
X
V
j=1
bj
bj =
m
X
j bj . Thus we have:
j=1
1
In Example 6.1.2 (ii), f(x ) = 4 2, b1 = b2 = 3, 1 = 2 = .
2
If b1 and b2 are slightly increased, say to 3.1 and 3.2 respectively, we would expect the
1
constrained maximum to be increased by about 0.3 , to approximately 5.87.
2
6.2
Optimizing a quadratic form on the unit sphere

Let A be a symmetric real matrix. In this section we seek to optimize the quadratic form
q(x) = x^t Ax subject to the non-linear constraint |x| = 1, which can also be written as
x^t x = 1 or x1² + … + xn² = 1.

First we note that ∇(x^t x) = ∇(x1² + … + xn²) = (2x1, …, 2xn)^t = 2x,
and if A is symmetric, x^t Ax = Σ_{i,j=1}^n aij xi xj. The partial derivative of this with
respect to xi is

2aii xi + 2 Σ_{j≠i} aij xj = 2 Σ_{j=1}^n aij xj,

so ∇(x^t Ax) = 2Ax.
Proposition 6.4 Let A be a symmetric real matrix and let q(x) = x^t Ax.
The minimum value of q(x) subject to | x | = 1 is equal to the smallest eigenvalue of A,
and occurs when x is a corresponding unit eigenvector.
The maximum value of q(x) subject to | x | = 1 is equal to the largest eigenvalue of A, and
occurs when x is a corresponding unit eigenvector.
Proof
The constraint is x^t x = 1, i.e. 1 − x^t x = 0.
The Lagrangean function is L(x, λ) = x^t Ax + λ(1 − x^t x).
For q(x) to be optimal subject to the constraint, ∇x L(x, λ) = 0.
Hence 2Ax − 2λx = 0, giving Ax = λx. Also x^t x = 1.
Thus λ is an eigenvalue of A, with eigenvector x such that x^t x = 1, i.e. x is a unit vector.
So the necessary condition for an optimum of q(x) is satisfied when x is such an eigenvector
of A. Then q(x) = x^t Ax = x^t (λx) = λ(x^t x) = λ.
The feasible set is compact, so by Weierstrass's Theorem the minimum and maximum exist.
Thus they must be among the values we have found, which are in R since the eigenvalues
of a symmetric real matrix are real. The result follows.
6.2.1
Example
Let q(x, y, z) = x² + z² − 4xz + 6yz, so that q = x^t Ax where

A = [  1  0 −2 ]
    [  0  0  3 ]
    [ −2  3  1 ]

We solve the characteristic equation det(A − λI) = 0 to find the eigenvalues of A:

| 1−λ   0    −2  |
|  0   −λ     3  | = 0.
| −2    3   1−λ  |

Expanding by the first row, (1 − λ)(λ² − λ − 9) + 4λ = 0, so λ³ − 2λ² − 12λ + 9 = 0.
Notice that λ = −3 is a solution and divide to get (λ + 3)(λ² − 5λ + 3) = 0.
Hence λ = −3 or λ = (5 ± √13)/2.

The smallest eigenvalue is −3 and the largest is (5 + √13)/2, so these are the constrained
minimum and maximum values of q. They occur when x is a unit eigenvector of A.

An eigenvector for the eigenvalue −3 is (1, −2, 2)^t, so a unit eigenvector is (1/3)(1, −2, 2)^t.
Thus the minimum occurs when x = 1/3, y = −2/3, z = 2/3 and also when x = −1/3, y = 2/3, z = −2/3.
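A quick numerical check of this computation; the matrix A and the quadratic form q are the ones reconstructed above, so treat them as assumptions.

```python
from math import sqrt, isclose

# verify that lambda = -3 is an eigenvalue with eigenvector (1, -2, 2)
A = [[1, 0, -2], [0, 0, 3], [-2, 3, 1]]
v = [1, -2, 2]
Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
assert Av == [-3 * vi for vi in v]

# the characteristic polynomial vanishes at all three claimed eigenvalues
p = lambda t: t**3 - 2 * t**2 - 12 * t + 9
for t in (-3, (5 + sqrt(13)) / 2, (5 - sqrt(13)) / 2):
    assert isclose(p(t), 0, abs_tol=1e-9)

# the constrained minimum of q on |x| = 1 equals the smallest eigenvalue
q = lambda x, y, z: x * x + z * z - 4 * x * z + 6 * y * z
u = [vi / 3 for vi in v]          # unit eigenvector
assert isclose(q(*u), -3)
print("eigenvalue check passed")
```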
6.2.2
Example
A council plans to repair x hundred miles of roads and improve y hundred acres of parks.
Budget restrictions lead to the constraint 4x² + 9y² = 36. The benefit obtained from the
possible work schedules is U = xy. Find the schedule that maximizes U.

First express the constraint in the form v^t v = 1. Dividing by 36 gives

(x/3)² + (y/2)² = 1.

Let v = x/3 and w = y/2; the problem becomes: maximize U = 6vw subject to v² + w² = 1.

If v = (v, w)^t and A = [ 0 3 ; 3 0 ], then we must maximize v^t Av subject to v^t v = 1.

The eigenvalues of A are 3 and −3, with corresponding unit eigenvectors
(1/√2)(1, 1)^t and (1/√2)(1, −1)^t respectively.

Thus Umax = 3 when v = w = 1/√2, so x = 3/√2 and y = √2.

U is maximized when work is done on approximately 210 miles of roads and 140 acres of
parks. Graphically, this is represented by the point where the curve xy = 3 touches the
constraint curve 4x² + 9y² = 36.
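As a brute-force sanity check, the ellipse can be parametrized as x = 3 cos t, y = 2 sin t and U = xy sampled over a full turn; the maximum should agree with the eigenvalue result Umax = 3.

```python
from math import cos, sin, pi

# sample U = xy = 6 sin(t) cos(t) = 3 sin(2t) around the ellipse
best = max(3 * cos(t) * 2 * sin(t)
           for t in (k * pi / 100000 for k in range(200000)))
print(best)  # close to 3, attained at t = pi/4
```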
Exercises 6
1. A firm has to minimize its cost function C(x, y) = rx + wy subject to the constraint
x^{1/2} y^{1/4} = 8. Find the minimum cost in terms of the constants r and w.
2. Find (i) the minimum value of x² + y² subject to the constraint 2x − 3y = 4,
(ii) the maximum value of x²y² subject to the constraint 2x² + y² = 3.
Interpret your answers graphically.
3. f(x, y, z) = x1/3 y 1/3 z 1/3 where x > 0, y > 0, z > 0.
Maximize f(x, y, z) subject to the constraints x + y = 3 and y + z = 3.
4. The total profit £z thousand which a company makes from producing and selling x
thousand units of one commodity X and y thousand units of another commodity Y
is given by z = 10 + 50x − 5x² + 16y − y².
(a) Find the maximum value of z if there are no constraints on x and y, explaining
why the value you find is a maximum.
(b) Find the maximum value of z if the total cost of production is to be £12,000,
given that each unit of X costs £4 to produce and each unit of Y costs £3.20.
(c) The company now increases the money available for production to £12,500. Use
sensitivity analysis to estimate the new maximum profit.
5. A firm's production function, using quantities x, y, z of three inputs, is defined by
P(x, y, z) = x^{1/2} ln y − z², where x > 0, y > 0, z > 0.
Find the largest region of R³+ on which P is a concave function.
Maximize P(x, y, z) over this region subject to the constraints x + y − z = e⁴ and
x + 2y + z = 2e⁴.
6. A consumer's satisfaction with three foods is measured by U(x1, x2, x3) = x1 + ln x2 +
2x3, where x1 > 0, x2 > 0, x3 > 0 and xi is the number of units of food i consumed.
Foods 1, 2 and 3 cost £2, £1 and £0 per unit and contain 0, 100 and 200 calories per unit
respectively.
The consumer wants to spend exactly £10 and consume 1000 calories.
Find the maximum value of U(x1, x2, x3). Justify that the value you have found is a
maximum.
Using sensitivity analysis, estimate the increase in this maximum value if an extra £1
may be spent and an extra 50 calories consumed.
7. Use Proposition 6.4 to find the maximum and minimum values of the following
quadratic forms subject to the constraint Σ xi² = 1. Also find the values of the
variables xi at which the optimal values are attained.
(a) q(x1, x2) = 5x1² + 5x2² − 4x1x2,