Linear Programming
(For B.A. and B.Sc. IIIrd year students of All Colleges affiliated to Universities in Uttar Pradesh)
By
A. R. Vasishtha
Retd. Head, Dep’t. of Mathematics
Meerut College, Meerut (U.P.)
UNIFIED
Dedicated
to
Lord
Krishna
Authors & Publishers
Preface
This book on LINEAR PROGRAMMING has been specially written
according to the latest Unified Syllabus to meet the requirements of the B.A.
and B.Sc. Part-III Students of all Universities in Uttar Pradesh.
The subject matter has been discussed in such a simple way that the students
will find no difficulty to understand it. The proofs of various theorems and
examples have been given with minute details. Each chapter of this book
contains complete theory and a fairly large number of solved examples.
Sufficient problems have also been selected from various university examination
papers. At the end of each chapter an exercise containing objective questions has
been given.
We have tried our best to keep the book free from misprints. The authors
shall be grateful to the readers who point out errors and omissions which, in spite
of all care, might have been there.
The authors, in general, hope that the present book will be warmly received
by the students and teachers. We shall indeed be very thankful to our colleagues
for their recommending this book to their students.
The authors wish to express their thanks to Mr. S.K. Rastogi, M.D.,
Mr. Sugam Rastogi, Executive Director, Mrs. Kanupriya Rastogi, Director and
entire team of KRISHNA Prakashan Media (P) Ltd., Meerut for bringing
out this book in the present nice form.
The authors will feel amply rewarded if the book serves the purpose for
which it is meant. Suggestions for the improvement of the book are always
welcome.
Unit-I
Linear programming problems, Statement and formation of general linear programming
problems, Graphical method, Slack and surplus variables, Standard and matrix forms of
linear programming problem, Basic feasible solution.
Unit-II
Convex sets, Fundamental theorem of linear programming, Simplex method, Artificial
variables, Big-M method, Two phase method.
Unit-III
Resolution of degeneracy, Revised simplex method, Sensitivity Analysis.
Unit-IV
Duality in linear programming problems, Dual simplex method, Primal-dual method
Integer programming.
Unit-V
Transportation problems, Assignment problems.
Brief Contents
Dedication (v)
Preface (vi)
Syllabus (vii)
Brief Contents (viii)
Unit-I
1. Mathematical Preliminaries (Some Basic Concepts on Matrices and Linear Algebra) 01—16
Unit-II
3. Convex Sets and Their Properties 93—122
Unit-III
5. Resolution of Degeneracy 199—214
7. Sensitivity Analysis 247—288
Unit-IV
8. Duality in Linear Programming 289—348
9. Integer Programming 349—382
Unit-V
10. The Transportation Problem 383—432
Consider the following system of linear equations :
2 x + 9 y + 7 z = …,
3 x + 4 y − 3 z = 5,
6 x + 8 y − 3 z = 1,
4 x − 2 y + z = 2.
Here x, y and z are unknowns and their coefficients are all numbers. Arranging the
coefficients in the order in which they occur in the equations and enclosing them in
square brackets, we obtain a rectangular array of the form
[2   9   7]
[3   4  −3]
[6   8  −3]
[4  −2   1].
This rectangular array is an example of a matrix. The horizontal lines (→) are called rows
or row vectors, and the vertical lines (↓) are called columns or column vectors of the matrix.
There are 4 rows and 3 columns in this matrix. Therefore it is a matrix of the type 4 × 3.
The numbers 3, 4, –3, 2 etc., constituting this matrix are called its elements. The
difference between a matrix and a number should be clearly understood. A matrix is not a
number. It has got no numerical value. It is a new thing formed with the help of numbers.
4
We shall use capital letters (in bold type or in italic type) to denote matrices.
Thus
A = [5  0  1]
    [6  1  7]
is a matrix of the type 2 × 3, and
B = [0  0  0]
    [0  0  0]
    [0  0  0]
is a matrix of the type 3 × 3.
Sometimes we also use the brackets ( ) or the double bars, |||| in place of the square
brackets [ ] to denote matrices.
Thus
A = ( 2  2 )
    ( 2  2 ),
B = [ 3 + 5i      9    ]
    [  −4      3 − 5i ],
C = || 7  7 ||
    || 7  7 ||
are all matrices.
1.2 Matrix
A set of mn numbers (real or complex) arranged in the form of a rectangular array having
m rows and n columns is called an m × n matrix [to be read as ‘m by n’ matrix].
The numbers a11, a12 etc. of this rectangular array are called the elements of the matrix.
The element a ij belongs to the ith row and the j th column and is sometimes called the
(i, j)th element of the matrix. Thus in the element a ij the first suffix i will always denote
the number of the row and the second suffix j, the number of the column in which the
element occurs. In a matrix, the number of rows and the columns need not be equal.
A = [0  1  2  3]
    [2  3  1  0]
    [5  0  1  1]
    [0  0  1  2]
is a square matrix of the type 4 × 4, and
Y = [ 2 ]
    [−9 ]
    [11 ]
is a column matrix of the type 3 × 1.
For Example: The matrix
[1  2  3]
[0  2  1]
is a submatrix of the matrix
A = [1   2   3   9]
    [7  11   6   5]
    [0   2   1   8],
as it can be obtained from A by omitting the second row and the fourth column.
If two matrices A and B are equal, we write A =B. If two matrices A and B are not equal,
we write A ≠ B. If two matrices are not of the same size, they cannot be equal.
For example, if
A = [3   2  −1]
    [4  −3   1]
and
B = [1  −2   7]
    [3   2  −1],
both of the type 2 × 3, then
A + B = [3 + 1    2 − 2   −1 + 7]
        [4 + 3   −3 + 2    1 − 1]
      = [4   0   6]
        [7  −1   0].
Important Note : It should be noted that addition is defined only for matrices which are
of the same size. If two matrices A and B are of the same size, they are said to be
conformable for addition. If the matrices A and B are not of the same size, we cannot
find their sum.
−A + A = O = A + (−A).
Here O is the null matrix of the type m × n. It is the identity element for matrix addition.
Subtraction of Two Matrices : If A and B are two m × n matrices, then we define
A − B = A +(−B).
Thus the difference A − B is obtained by subtracting from each element of A the
corresponding element of B.
5. Cancellation laws hold good in the case of addition of matrices i.e., if A, B, C are
three m × n matrices, then
A + B = A +C ⇒ B =C (left cancellation law)
and B + A =C+ A ⇒ B =C (right cancellation law)
6. The equation A + X =O has a unique solution X = − A in the set of all m × n
matrices.
For example, if k = 2 and
A = [3   2  −1]
    [4  −3   1],
then
2A = [2 × 3    2 × 2     2 × (−1)]
     [2 × 4    2 × (−3)   2 × 1 ]
   = [6   4  −2]
     [8  −6   2].
Theorem 3: If p and q are two scalars and A is any m × n matrix, then p (qA) = ( pq) A .
Theorem 4: If A be any m × n matrix and k be any scalar, then (− k)A = −(kA ) = k(− A ).
If A = [aij] is an m × n matrix and B = [bjk] is an n × p matrix, then the m × p matrix C = [cik], where cik = ai1 b1k + ai2 b2k + … + ain bnk, is called the product of the matrices A and B in that order and we write C = AB.
The product AB of two matrices A and B exists if and only if the number of columns in A
is equal to the number of rows in B. Two such matrices are said to be conformable for
multiplication. The rule of multiplication is row-by-column multiplication.
For example, if
A = [a11  a12]
    [a21  a22]
    [a31  a32]
is a 3 × 2 matrix and
B = [b11  b12]
    [b21  b22]
is a 2 × 2 matrix, then the product AB is defined and is a matrix of the type 3 × 2.
Important Note : If the product AB exists, then it is not necessary that the product BA
will also exist. For example if A is a 4 × 5 matrix and B is a 5 × 3 matrix, then the product
AB exists while the product BA does not exist.
A(B + C) = AB + AC,
where A, B, C are any three m × n, n × p, n × p matrices respectively.
5. The equation AB = O does not necessarily imply that at least one of the matrices A
and B must be a zero matrix. Or
The product of two matrices can be a zero matrix while neither of them is a zero
matrix.
6. In the case of matrix multiplication if AB =O, then it does not necessarily imply
that BA =O.
7. If A be an m × n matrix and In denotes the n-rowed unit matrix, then it can be easily seen that
AIn = A = Im A.
Hence, if
A = [12   0  13]
    [15  12  11]
    [13  11  14],
then
det A = | A | = | 12   0  13 |
                | 15  12  11 |
                | 13  11  14 |.
The first row of A is the first column of A ′. The second row of A is the second column of
A′. The third row of A is the third column of A′.
Theorems : If A′ and B′ be the transposes of A and B respectively, then
1. (A ′ )′ = A;
2. (A +B)′ = A ′ +B′, A and B being of the same size.
3. (kA )′ = kA ′, k being any complex number.
4. (AB)′ = B′ A ′ , A and B being conformable to multiplication.
The above law (4) is called the reversal law for transposes i.e., the transpose of the
product is the product of the transposes taken in the reverse order.
Thus the adjoint of a matrix A is the transpose of the matrix formed by the cofactors of A i.e., if A = [aij] is an n × n matrix and Aij denotes the cofactor of aij in | A |, then Adj A is the transpose of the matrix [Aij] of cofactors.
Note : Sometimes the adjoint of a matrix is also called the adjugate of that matrix.
Inverse of a Matrix : If A and B are two n-rowed square matrices such that
AB = BA = In,
then B is called the inverse of A.
Note : For the products AB, BA to be both defined, it is necessary that A and B are both
square matrices of the same order. Thus non-square matrices cannot possess inverse.
Existence of the Inverse, Theorem : The necessary and sufficient condition for a
square matrix A to possess the inverse is that | A | ≠ 0.
Important :
1. If A be an invertible matrix, then the inverse of A is (1/| A |) Adj A. It is usual to denote the inverse of A by A⁻¹.
Thus, A⁻¹ = (1/| A |) Adj A, provided | A | ≠ 0.
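The formula A⁻¹ = (1/| A |) Adj A can be checked numerically. The following short Python sketch (NumPy is assumed and is not part of this text) builds the adjugate from cofactors and compares the result with a library inverse; the matrix chosen is the one used in the determinant illustration above.

import numpy as np

def adjugate(A):
    # Transpose of the matrix of cofactors of A.
    n = A.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[12.0, 0.0, 13.0],
              [15.0, 12.0, 11.0],
              [13.0, 11.0, 14.0]])        # matrix from the determinant example above

A_inv = adjugate(A) / np.linalg.det(A)    # A^(-1) = (1/|A|) Adj A, valid since |A| != 0 here
print(np.allclose(A_inv, np.linalg.inv(A)))   # expected: True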
Consider a square matrix M partitioned as
M = [I  Q]
    [O  R],
where I (unit matrix) and R are square submatrices and Q, O (null matrix) are two matrices.
If R⁻¹ exists and is known, then the inverse of the matrix M is given by
M⁻¹ = [I  −QR⁻¹]
      [O    R⁻¹].
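The partitioned form of M⁻¹ quoted above can likewise be verified on a small example. In the Python sketch below (NumPy assumed) the blocks Q and R are arbitrary choices, not taken from the text, with R non-singular.

import numpy as np

I2 = np.eye(2)
O  = np.zeros((2, 2))
Q  = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary block
R  = np.array([[2.0, 1.0], [0.0, 3.0]])   # arbitrary non-singular block

M     = np.block([[I2, Q], [O, R]])
R_inv = np.linalg.inv(R)
M_inv = np.block([[I2, -Q @ R_inv], [O, R_inv]])   # the formula stated above

print(np.allclose(M @ M_inv, np.eye(4)))   # expected: True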
Reversal Law for Inverses : If A and B are two non-singular matrices of the same order, then (AB)⁻¹ = B⁻¹A⁻¹ i.e., the inverse of a product is the product of the inverses taken in the reverse order.
1.16 Vectors
Any ordered n-tuple of numbers is called an n-vector. By an ordered n-tuple we mean a set
consisting of n numbers in which the place of each number is fixed. If x1, x2 ,..., x n be any
n numbers, then the ordered n-tuple X = ( x1, x2 ,...., x n) is called an n-vector. The ordered
triad ( x1, x2 , x3 ) is called a 3-vector. Similarly (1, 0, 1, –1) and (1, 8, –5, 7) are 4-vectors.
The n numbers x1, x2 ,...., x n are called components of the n-vector X = ( x1, x2 ,...., x n). A
vector may be written either as a row vector or as a column vector. If A be a matrix of the
type m × n, then each row of A will be an n-vector and each column of A will be an
m-vector. A vector whose components are all zero is called a zero vector and will be denoted
by O.
If k be any number and X be any vector, then relative to the vector X, k is called a scalar.
Algebra of Vectors : Since an n-vector is nothing but a row matrix or a column matrix,
therefore we can develop an algebra of vectors in the same manner as the algebra of
matrices.
Equality of Two Vectors : Two n-vectors X and Y where X = ( x1, x2 ,..., x n) and
Y = ( y1, y2 ,..., yn) are said to be equal if and only if their corresponding components are
equal i.e., if x i = yi, for all i = 1, 2,...., n
For example, if X = (1, 0, 1, −1) and Y = (1, 0, 1, −1), then X = Y.
The vector k X is called the scalar multiple of the vector X by the scalar k.
1. X + Y = Y + X
2. (X + Y) + Z = X + (Y + Z)
3. k (X + Y) = kX + kY
4. (p + q) X = pX + qX
5. p (qX) = (pq) X
Linear Dependence : The vectors X1, X2, ..., Xr are said to be linearly dependent if there exist scalars k1, k2, ..., kr, not all zero, such that
k1 X1 + k2 X2 + ... + kr Xr = O.
Linear Independence : The vectors X1, X2, ..., Xr are said to be linearly independent if every relation of the form
k1 X1 + k2 X2 + ... + kr Xr = O
implies k1 = k2 = ... = kr = 0.
Linear Combination : A vector X is said to be a linear combination of the vectors X1, X2, ..., Xr if there exist scalars k1, k2, ..., kr such that
X = k1 X1 + k2 X2 + ... + kr Xr.
1. If a set of vectors is linearly dependent, then at least one member of the set can be
expressed as a linear combination of the remaining members.
2. If a set of vectors is linearly independent then no member of the set can be
expressed as a linear combination of the remaining members.
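In computation, linear dependence or independence of a set of n-vectors is usually tested through the rank of the matrix having the vectors as rows. A minimal Python sketch of this test follows (NumPy assumed; the sample vectors are illustrative only).

import numpy as np

def linearly_independent(vectors):
    # The vectors are linearly independent iff the rank of the matrix
    # whose rows are these vectors equals the number of vectors.
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))   # True
print(linearly_independent([(1, 2, 3), (2, 4, 6)]))              # False, since (2, 4, 6) = 2 x (1, 2, 3)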
Consider a system of two linear equations in two unknowns x and y :
a1 x + b1 y = c1
a2 x + b2 y = c2.
For example, consider the equations
3 x + 4 y = 5
6 x + 8 y = 13.
There is no set of values of x and y which satisfies both these equations. Such equations
are said to be inconsistent.
Again, consider the equations
3 x + 4 y = 5
6 x + 8 y = 10.
These equations are consistent since there exist values of x and y which satisfy both of these equations. We see that x = −(4/3) c + 5/3, y = c constitute a solution of these equations, where c is arbitrary. Thus these equations possess an infinite number of solutions.
Consider now a system of m linear equations in n unknowns x1, x2, ..., xn. If we write A = [aij] for the m × n matrix of coefficients, X = [x1, x2, ..., xn]T and B = [b1, b2, ..., bm]T, the system can be written as the single matrix equation
AX = B.    ...(1)
Any set of values of x1, x2, ..., xn which simultaneously satisfy all these equations is called
a solution of the system (1). When the system of equations has one or more solutions,
the equations are said to be consistent, otherwise they are said to be inconsistent.
To show that the solution is unique when A is non-singular, let us suppose that X1 and X2 be two solutions of AX = B. Then AX1 = B and AX2 = B, so that AX1 = AX2. Pre-multiplying both sides by A⁻¹, we get
A⁻¹(AX1) = A⁻¹(AX2) ⇒ IX1 = IX2 ⇒ X1 = X2.
If rank A ≠ rank [A B], the equations AX = B are inconsistent i.e., they have no solution.
If rank A = rank [A B] = r (say), the equations AX = B are consistent i.e., they possess a solution. If r < m, then
in the process of reducing the matrix [A B] to Echelon form, (m − r ) equations will be
eliminated. The given system of m equations will then be replaced by an equivalent
system of r equations. From these r equations we shall be able to express the values of
some r unknowns in terms of the remaining n − r unknowns which can be given any
arbitrarily chosen values.
If r < n, the n − r variables can be assigned arbitrary values. So in this case there will be an
infinite number of solutions. Only n − r + 1 solutions will be linearly independent and the
rest of the solutions will be linear combinations of them.
If m < n, then r ≤ m < n. Thus in this case n − r > 0. Therefore when the number of
equations is less than the number of unknowns, the equations will always have an infinite
number of solutions, provided they are consistent.
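The rank test described above can be mechanised. The Python sketch below (NumPy assumed; not part of the text) classifies a system AX = B as inconsistent, uniquely solvable, or possessing an infinite number of solutions, and is applied to the two small systems discussed earlier in this section.

import numpy as np

def classify_system(A, b):
    # Compare rank A with rank [A B] and with the number of unknowns n.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    r_A  = np.linalg.matrix_rank(A)
    r_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    n = A.shape[1]
    if r_A != r_Ab:
        return "inconsistent (no solution)"
    return "unique solution" if r_A == n else "infinite number of solutions"

print(classify_system([[3, 4], [6, 8]], [5, 13]))   # inconsistent
print(classify_system([[3, 4], [6, 8]], [5, 10]))   # infinite number of solutions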
2.1 Introduction
Linear Programming (L.P.) is a branch of Operations Research. In order to understand
the importance of its study, we must know something of its history and evolution. Its
main origin was during the second world war (1939-1945). At that time, the military
management in England called upon a team of scientists to study the strategic and
tactical problems related to air and land defence of the country. Since they were having
very limited military resources, it was necessary to decide upon the most effective
utilization of them e.g., the efficient transportation, effective bombing, adequate food
supply, etc.
Following the end of war, the success of military teams attracted the attention of
Industrial managers who were seeking solutions to their complex executive-type
problems. The most common problem was to find: what methods should be adopted so
that the total cost is minimum or the total profit is maximum? The first mathematical
technique in this field, called the Simplex Method of Linear Programming, was
developed in 1947 by the American mathematician George B. Dantzig, and since then
applications have been developed through the efforts and co-operation of
interested individuals in academic institutions and industry both.
Thus this technique is concerned with optimization problems. A decision which, taking
into account all the circumstances, can be considered the best one is called an
optimal decision.
Thus, Linear Programming is the mathematical tool used in solving problems in business,
industry, commerce, military operations, etc. Here the problem before us is usually to
maximize the profit or minimize the cost keeping in mind the restrictions imposed upon
us by the limitations of various resources.
Linear Programming Problem (L.P.P.) : The linear programming problem in general calls for
optimizing (Maximizing/minimizing) a linear function of variables called the objective function
subject to a set of linear equations and/or linear inequations called the constraints or restrictions.
The term linear means that all the relations governing the problem are linear and the term
programming refers to the process of determining a particular programme or plan of
action.
Constraints : The system of linear inequations (or equations) under which the objective
function is to be optimized are called the constraints.
The general linear programming problem is to optimize (maximize or minimize)
Z = c1 x1 + c2 x2 + ... + cn xn    ...(1)
subject to the constraints
a11 x1 + a12 x2 + ... + a1n xn (≤ = ≥) b1
a21 x1 + a22 x2 + ... + a2n xn (≤ = ≥) b2
... ... ... ... ...
am1 x1 + am2 x2 + ... + amn xn (≤ = ≥) bm    ...(2)
and the non-negative restrictions
x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0,    ...(3)
where all a11, a12, ..., amn; b1, b2, ..., bm; c1, c2, ..., cn are constants and x1, x2, ..., xn are
variables.
The function Z given in (1) is called the objective function, the conditions given in (2)
are called the linear constraints and the conditions given in (3) are called the
non-negative restrictions of the linear programming problem.
Matrix Form : The above linear programming problem can be written in matrix form as
Optimize Z = cx
subject to Ax (≤ = ≥) b
and x ≥ 0,
where A = [aij] is the m × n matrix of the coefficients of the constraints, c = (c1, c2, ..., cn) is a row vector, and
b = [b1, b2, ..., bm]T,  x = [x1, x2, ..., xn]T
are column vectors.
Note : For the matrix form all constraints in L.P.P. should have the same inequality (i.e.,
≥ or ≤ ) or equality sign.
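Once a problem is in the matrix form above, it can be handed to any linear programming solver. A minimal Python sketch using scipy.optimize.linprog is shown below; SciPy (version 1.6 or later for method="highs") is an assumption, the numerical data are arbitrary placeholders, and since linprog minimizes, a maximization problem is handled by negating c.

import numpy as np
from scipy.optimize import linprog

# Illustrative data only: maximize Z = c x subject to A x <= b and x >= 0.
c = np.array([3.0, 5.0])                  # coefficients of the objective function
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])                # coefficient matrix of the constraints
b = np.array([14.0, 9.0])                 # right-hand side vector

res = linprog(c=-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c), method="highs")
print("optimal x =", res.x, ", maximum Z =", -res.fun)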
Working Rule to form a Linear Programming Problem : The following steps are
helpful in the formulation of a linear programming problem :
1. Identify the unknown decision variables of the problem and denote them by x1, x2, x3 etc.
2. Identify the objective function and express it as a linear function of the variables
x1, x2 , x3 etc.
3. Find the type of the objective function. It may be in the form of maximizing profits
or minimizing costs.
4. Identify all the constraints and express them as linear inequations or equations.
Example 1: A goldsmith manufactures necklaces and bracelets. The total number of necklaces and bracelets that he can manufacture per day is at most 24. It takes half an hour to make a necklace and one hour to make a bracelet, and the total time available per day is 16 hours. The profit on a necklace is ` 100 and on a bracelet is ` 300. Formulate a linear programming problem to maximize the profit.
Solution: Suppose the goldsmith manufactures x1 necklaces and x2 bracelets per day.
Since the profit on a necklace is ` 100 and profit on a bracelet is ` 300, therefore the total
profit Z in ` is given by
Z = 100 x1 + 300 x2.    ...(1)
Since it takes half an hour to make one necklace, so the time required to make x1
necklaces = (1 2) x1 hours.
Again it takes one hour to make one bracelet, so the time required to make x2 bracelets
= 1. x2 hours = x2 hours.
Therefore, the total time required per day to make x1 necklaces and x2 bracelets
= (x1/2 + x2) hours.    ...(2)
Since the total time available per day is 16 hours, we have
x1/2 + x2 ≤ 16 or x1 + 2 x2 ≤ 32.    ...(3)
The total number of necklaces and bracelets that the goldsmith can manufacture in a day
is atmost 24, so we have
x1 + x2 ≤ 24. ...(4)
Also the number of necklaces and bracelets manufactured can never be negative,
therefore
x1 ≥ 0, x2 ≥ 0. ...(5)
Hence, the linear programming problem formulated from the given problem is as follows :
Maximize Z = 100 x1 + 300 x2
subject to the constraints
x1 + 2 x2 ≤ 32
x1 + x2 ≤ 24
and x1 ≥ 0, x2 ≥ 0.
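As a check on this formulation, the problem can be solved numerically. The sketch below uses scipy.optimize.linprog (an assumed library, not part of the text), negating the profit because linprog minimizes; for this formulation the solver should report x1 = 0, x2 = 16 with a maximum profit of ` 4,800.

from scipy.optimize import linprog

# Example 1: maximize Z = 100 x1 + 300 x2
# subject to x1 + 2 x2 <= 32, x1 + x2 <= 24, x1 >= 0, x2 >= 0.
c    = [-100, -300]                 # negated, since linprog minimizes
A_ub = [[1, 2],
        [1, 1]]
b_ub = [32, 24]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("necklaces =", res.x[0], ", bracelets =", res.x[1], ", maximum profit =", -res.fun)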
Example 2: A tyre factory produces three types of tyres T1, T2 , T3 . Three different types
of chemicals say C1, C2 , C3 are required for production. One T1 tyre needs 2 units of C1, 3
units of C3 ; one T2 tyre needs 3 units of C1, 2 units of C2 and 2 units of C3 ; and one T3
tyre needs 5 units of C2 and 4 units of C3 . The factory has only a stock of 20 units of
C1, 25 units of C2 and 30 units of C3 . Further the profit from the sale of one tyre T1 is ` 6,
one tyre T2 is ` 10 and of one tyre T3 is ` 8. Assuming that the factory can sell all that it
produces, formulate a linear programming problem to maximize its profit.
Solution: Let the factory produce x1 tyres of type T1, x2 tyres of type T2 and x3 tyres of
type T3 .
The given data can be tabulated as follows :

                          Tyre T1    Tyre T2    Tyre T3    Stock available
Chemical C1                  2          3          0             20
Chemical C2                  0          2          5             25
Chemical C3                  3          2          4             30
Profit in ` (per tyre)       6         10          8
The total profit Z in ` is given by
Z = 6 x1 + 10 x2 + 8 x3.    ...(1)
Since the factory has a stock of 20 units of chemical C1, therefore we have
2 x1 + 3 x2 ≤ 20. ...(2)
Similarly, considering the total quantity of the chemicals C2 and C3 required, we have
2 x2 + 5 x3 ≤ 25 ...(3)
and 3 x1 + 2 x2 + 4 x3 ≤ 30 ...(4)
Also, we have
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, ...(5)
Hence, the linear programming problem formulated from the given problem is as follows:
Maximize Z = 6 x1 + 10 x2 + 8 x3
subject to the constraints
2 x1 + 3 x2 ≤ 20
2 x2 + 5 x3 ≤ 25
3 x1 + 2 x2 + 4 x3 ≤ 30
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
Example 3: The objective of a diet problem is to ascertain the quantities of certain foods
that should be eaten to meet certain nutritional requirement at a minimum cost. The
consideration is limited to milk, green vegetables and eggs, and to vitamins A, B, C. The
number of milligrams of each of these vitamins contained within a unit of each food and
their daily minimum requirements along with the cost of each food is given in the table
below :
Vitamin     Milk (per litre)    Green vegetables (per kg)    Eggs (per dozen)    Daily minimum requirement
A                  1                        1                       10                    1 mg.
B                100                       10                       10                   50 mg.
C                 10                      100                       10                   10 mg.
Cost in `         20                       10                        8
Formulate a linear programming problem for this diet problem. [Meerut 2004]
Solution: Let the daily diet consist of x1 litres of milk, x2 kgs. of vegetables and x3 dozens
of eggs.
The total cost Z in ` of the daily diet is given by
Z = 20 x1 + 10 x2 + 8 x3.    ...(1)
The total amount of vitamin A in the daily diet is
(x1 + x2 + 10 x3) mg.,
and the daily minimum requirement of vitamin A is 1 mg., therefore
x1 + x2 + 10 x3 ≥ 1.    ...(2)
Similarly, considering the total amounts of vitamins B and C in the daily diet, we have
100 x1 + 10 x2 + 10 x3 ≥ 50    ...(3)
and 10 x1 + 100 x2 + 10 x3 ≥ 10.    ...(4)
Since the quantities of different food items to be consumed cannot be negative, therefore
we have
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
Hence, the linear programming problem formulated from the given diet problem is :
Minimize Z = 20 x1 + 10 x2 + 8 x3
subject to the constraints
x1 + x2 + 10 x3 ≥ 1
100 x1 + 10 x2 + 10 x3 ≥ 50
10 x1 + 100 x2 + 10 x3 ≥ 10
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
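The diet problem can be checked in the same way; the only new point is that each ≥ constraint must be multiplied through by −1 to fit the ≤ form accepted by scipy.optimize.linprog (again an assumed tool, not part of the text).

from scipy.optimize import linprog

# Example 3: minimize Z = 20 x1 + 10 x2 + 8 x3 subject to
#     x1 +   x2 + 10 x3 >= 1
# 100 x1 + 10 x2 + 10 x3 >= 50
#  10 x1 + 100 x2 + 10 x3 >= 10,  with x1, x2, x3 >= 0.
c    = [20, 10, 8]
A_ge = [[1, 1, 10], [100, 10, 10], [10, 100, 10]]
b_ge = [1, 50, 10]

A_ub = [[-a for a in row] for row in A_ge]   # each >= row becomes a <= row
b_ub = [-v for v in b_ge]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("daily diet (litres, kg, dozens) =", res.x, ", minimum cost =", res.fun)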
Solution: Let the daily diet consist of x1 kg. of food A, x2 kg. of food B, x3 kg. of food C
and x4 kg. of food D.
The total cost Z in ` of the daily diet is given by
Z = 3 x1 + 4 x2 + 2 x3 + 1.5 x4.    ...(1)
Considering the minimum daily requirement of the first nutrient, the total amount of it in the diet is
(180 x1 + 160 x2 + 40 x3 + 50 x4)
units, so we have
180 x1 + 160 x2 + 40 x3 + 50 x4 ≥ 75.    ...(2)
Similarly, considering the total amounts of fats and carbohydrates in the diet, we have
150 x1 + 40 x2 + 200 x3 + 80 x4 ≥ 85    ...(3)
and 70 x2 + 25 x3 + 400 x4 ≥ 300.    ...(4)
Since, the daily diet cannot contain quantities with negative values of any food item,
therefore
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0. ...(5)
Hence, the linear programming problem formulated for the given diet problem is :
Minimize Z = 3 x1 + 4 x2 + 2 x3 + 1.5 x4
subject to the constraints
180 x1 + 160 x2 + 40 x3 + 50 x4 ≥ 75
150 x1 + 40 x2 + 200 x3 + 80 x4 ≥ 85
70 x2 + 25 x3 + 400 x4 ≥ 300
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0 and x4 ≥ 0.
Since there are only 45,000 bottles available for filling the medicines A and B, therefore
we have x1 + x2 ≤ 45.
It takes three hours to prepare enough material to fill 1,000 bottles of medicine A.
Therefore, the time required to fill x1 thousand bottles of medicine A is 3 x1 hours.
Thus, the total time required to fill x1 thousand bottles of medicine A and x2 thousand
bottles of medicine B is (3 x1 + x2 ) hours.
The total time available for this operation is 66 hours. So, we have 3 x1 + x2 ≤ 66.
There are sufficient ingredients available to make 20 thousand bottles of medicine A and
40 thousand bottles of medicine B, therefore we have x1 ≤ 20, x2 ≤ 40.
The profit per bottle for medicine A is ` 8 and for medicine B is ` 7. So, the total profit in
` on x1 thousand bottles of medicine A and x2 thousand bottles of medicine B is given by
Z = 8000 x1 + 7000 x2.
Hence, the linear programming problem formulated for the given problem is
Maximize Z = 8000 x1 + 7000 x2
subject to the constraints
x1 + x2 ≤ 45
3 x1 + x2 ≤ 66
x1 ≤ 20
x2 ≤ 40
and x1 ≥ 0, x2 ≥ 0.
Example 6: A firm manufactures two types of products A and B and sells them at a profit
of ` 2 on type A and ` 3 on type B. Each product is processed on two machines E and F.
Type A requires one minute of processing time on E and two minutes on F, type B required
one minute on E and one minute on F. The machine E is available for not more than 6
hours 40 minutes while machine F is available for 10 hours during any working day.
Formulate the problem as a linear programming problem.
Solution: Let the manufacturer produce x1 units of the product of type A and x2 units of
the product of type B.
The given information can be systematically arranged in the form of the following table :

Machine    Time per unit of type A (minutes)    Time per unit of type B (minutes)    Available time (minutes)
E                          1                                    1                               400
F                          2                                    1                               600
Since the machine E takes 1 minute time for processing a unit of type A and 1 minute
time for processing a unit of type B, therefore the total time required on machine E is
( x1 + x2 ) minutes.
But the machine E is available for not more than 6 hours and 40 minutes = 400 minutes,
therefore we have
x1 + x2 ≤ 400.
Since the machine F is available for not more than 10 hours = 600 minutes, therefore we
have
2 x1 + x2 ≤ 600.
The profit on type A is ` 2 per unit so the profit on selling x1 units of type A will be ` 2 x1.
Similarly, the profit on selling x2 units of type B will be ` 3 x2 .
Therefore, the total profit (in `) on selling x1 units of type A and x2 units of type B is
Z = 2 x1 + 3 x2 .
Hence, the required linear programming problem formulated for the given problem is
Maximize Z = 2 x1 + 3 x2
subject to the constraints
x1 + x2 ≤ 400
2 x1 + x2 ≤ 600
x1 ≥ 0, x2 ≥ 0.
Example 7: A manufacturer produces three models (I, II, and III) of a certain product. He
uses two types of raw materials (A and B) of which 4,000 and 6,000 units respectively are
available. The raw material requirements per unit of the three models are given below :
Raw material    Model I    Model II    Model III
A                  2          3           5
B                  4          2           7
The labour time for each unit of model I is twice that of model II and three times that of
model III. The entire labour force of the factory can produce the equivalent of 2,500 units
of model I. A market survey indicates that the minimum demand of the three models are
500, 500 and 375 units respectively. However, the ratio of the number of units produced
must be equal to 3:2:5. Assume that the profits per unit of models I, II and III are ` 60,
40 and 100 respectively. Formulate the problem as a L.P.P. in order to determine the
number of units of each product which will maximize profit.
Solution: Let the manufacturer produce x1, x2 , x3 units of models I, II and III
respectively. Since the profit per unit on model I, II and III is ` 60, ` 40 and ` 100
respectively, the objective function is to maximize the profit :
Z = 60 x1 + 40 x2 + 100 x3 . ...(1)
The raw materials used and available give rise to the following constraints respectively.
2 x1 + 3 x2 + 5 x3 ≤ 4, 000 ...(2)
4 x1 + 2 x2 + 7 x3 ≤ 6, 000 ...(3)
Now if t be the labour time required for one unit of model I, then the time required for
one unit of model II will be (1/2) t and that for the model III will be (1/3) t. As the factory can
produce the equivalent of 2,500 units of model I, the restriction on the production time will be
t x1 + (1/2) t x2 + (1/3) t x3 ≤ 2,500 t  i.e.,  6 x1 + 3 x2 + 2 x3 ≤ 15,000.    ...(4)
Further the ratio of the number of units of different types of models is 3:2:5 i.e.,
x1 = 3 k, x2 = 2 k, x3 = 5 k.
Eliminating k, these give 2 x1 = 3 x2 and 5 x2 = 2 x3, while the market survey requires x1 ≥ 500, x2 ≥ 500, x3 ≥ 375.
Hence, the linear programming problem formulated from the given problem is :
Maximize Z = 60 x1 + 40 x2 + 100 x3
subject to the constraints
2 x1 + 3 x2 + 5 x3 ≤ 4,000
4 x1 + 2 x2 + 7 x3 ≤ 6,000
6 x1 + 3 x2 + 2 x3 ≤ 15,000
2 x1 − 3 x2 = 0,  5 x2 − 2 x3 = 0
x1 ≥ 500, x2 ≥ 500, x3 ≥ 375
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
Example 8: A complete unit of a certain product consists of four units of component A and
three units of component B. The two components (A and B) are manufactured from two
different raw materials of which 100 units and 200 units respectively are available.
Three departments are engaged in the production process with each department using a
different method for manufacturing the components. The following table gives the raw
material requirements per production run and the resulting units of each component. The
objective is to determine the number of production runs for each department which will
maximize the total number of component units of the final product:
Department    Raw material 1 (units per run)    Raw material 2 (units per run)    Component A (units per run)    Component B (units per run)
1                         7                                 5                                6                             4
2                         4                                 8                                5                             8
3                         2                                 7                                7                             3
Solution: Let x1, x2 , x3 be the number of production runs for the departments 1, 2, 3
respectively.
7 x1 + 4 x2 + 2 x3 ≤ 100    ...(1)
5 x1 + 8 x2 + 7 x3 ≤ 200 ...(2)
Now the final product requires 4 units of component A and 3 units of component B. So the
maximum number of units of the final product cannot exceed the smaller of
(1/4)(6 x1 + 5 x2 + 7 x3) and (1/3)(4 x1 + 8 x2 + 3 x3).
Thus the objective is :
Maximize Z = min { (1/4)(6 x1 + 5 x2 + 7 x3), (1/3)(4 x1 + 8 x2 + 3 x3) }.
Since the objective function is not linear, a suitable transformation can be used to
convert this into a L.P.P.
Let min { (1/4)(6 x1 + 5 x2 + 7 x3), (1/3)(4 x1 + 8 x2 + 3 x3) } = v.
Then (1/4)(6 x1 + 5 x2 + 7 x3) ≥ v and (1/3)(4 x1 + 8 x2 + 3 x3) ≥ v.
Hence, the problem can be written as the following L.P.P. :
Maximize Z = v
subject to the constraints
6 x1 + 5 x2 + 7 x3 − 4 v ≥ 0
4 x1 + 8 x2 + 3 x3 − 3 v ≥ 0
7 x1 + 4 x2 + 2 x3 ≤ 100
5 x1 + 8 x2 + 7 x3 ≤ 200
and x1, x2, x3, v ≥ 0.
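The transformed problem is an ordinary L.P.P. and can be solved numerically. In the Python sketch below (scipy.optimize.linprog assumed, not part of the text) the variables are ordered (x1, x2, x3, v), the ≥ rows are negated, and maximizing v is done by minimizing −v.

from scipy.optimize import linprog

# Example 8 after the transformation: maximize Z = v subject to
#   6 x1 + 5 x2 + 7 x3 - 4 v >= 0
#   4 x1 + 8 x2 + 3 x3 - 3 v >= 0
#   7 x1 + 4 x2 + 2 x3       <= 100
#   5 x1 + 8 x2 + 7 x3       <= 200,  with x1, x2, x3, v >= 0.
c    = [0, 0, 0, -1]                 # minimize -v, i.e., maximize v
A_ub = [[-6, -5, -7, 4],             # >= rows written as <= rows
        [-4, -8, -3, 3],
        [ 7,  4,  2, 0],
        [ 5,  8,  7, 0]]
b_ub = [0, 0, 100, 200]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
x1, x2, x3, v = res.x
print("production runs =", (x1, x2, x3), ", complete units of final product =", v)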
Example 9: The owner of the Metro Sports wishes to determine how many advertisements
to place in the selected three monthly magazines A, B and C. His objective is to advertise in
such a way that total exposure to principal buyers of expensive sports good is maximized.
Percentages of readers for each magazine are known. Exposure in any particular magazine
is the number of advertisements placed multiplied by the number of principal buyers. The
following data may be used :
Exposure Category            Magazine A        Magazine B        Magazine C
The budgeted amount is at most ` 1 lakh for the advertisement. The owner has already
decided that magazine A should have no more than 6 advertisement and that B and C
each have at least two advertisements. Formulate a LP model for the problem.
Solution: Let x1, x2 and x3 denote the number of advertisements placed in magazines A, B and C respectively. Then the total exposure to principal buyers is to be maximized, i.e.,
Maximize Z = (10% of 1,00,000) x1 + (15% of 60,000) x2 + (7% of 40,000) x3
           = 10,000 x1 + 9,000 x2 + 2,800 x3
subject to the constraints
x1 ≤ 6, x2 ≥ 2, x3 ≥ 2
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
Example 10: A manufacturer of biscuits is considering four types of gift packs containing
three types of biscuits : orange cream (OC), chocolate cream (CC), and wafers (W).
Market research conducted recently to assess the preferences of the consumers shows the
following types of assortments to be in good demand :

Assortment    Contents                                                               Selling price per kg (`)
A             Not less than 40% of OC, not more than 20% of CC, any quantity of W            20
B             Not less than 20% of OC, not more than 40% of CC, any quantity of W            25
C             Not less than 50% of OC, not more than 10% of CC, any quantity of W            22
D             No restrictions                                                                12

For the biscuits, the manufacturing capacity and costs are given below :

Biscuit variety                  OC      CC      W
Manufacturing cost (` per kg)     8       9       7
Formulate a linear programming model to find the production schedule which maximizes
the profit assuming that there are no market restrictions.
Solution: Let xij kg. be the quantity of the j-th type of biscuit used in the i-th assortment.
Then in assortment A, x A1, x A 2 , x A 3 denote the quantity in kg. of OC, CC and W type of
biscuits respectively. Similarly we can interpret for other assortments.
Now the given data can be put in the form of L.P.P. as follows :
Maximize Z = 20 ( x A1 + x A 2 + x A 3 ) + 25 ( x B1 + x B2 + x B3 )
+ 22 ( x C1 + x C2 + x C3 ) + 12 ( x D1 + x D2 + x D3 )
− 8 ( x A1 + x B1 + x C1 + x D1) − 9 ( x A 2 + x B2 + x C2 + x D2 )
− 7( x A 3 + x B3 + x C3 + x D3 )
or Maximize Z = 12 x A1 + 11x A 2 + 13 x A 3 + 17 x B1 + 16 x B2 + 18 x B3
+14 x C1 + 13 x C2 + 15 x C3 + 4 x D1 + 3 x D2 + 5 x D3
subject to the constraints
xA1 ≥ 0.40 (xA1 + xA2 + xA3),   xA2 ≤ 0.20 (xA1 + xA2 + xA3)        (gift pack A)
xB1 ≥ 0.20 (xB1 + xB2 + xB3),   xB2 ≤ 0.40 (xB1 + xB2 + xB3)        (gift pack B)
xC1 ≥ 0.50 (xC1 + xC2 + xC3),   xC2 ≤ 0.10 (xC1 + xC2 + xC3)        (gift pack C)
xA2 + xB2 + xC2 + xD2 ≤ 200
xA3 + xB3 + xC3 + xD3 ≤ 150
and all xij ≥ 0.
Example 11: A company has two grades of inspectors, I and II, who are to be assigned for
a quality control inspection. It is required that at least 2,000 pieces be inspected per 8
hour day. Grade I inspectors can check pieces at the rate of 50/hour with an accuracy of
97%; grade II inspectors can check pieces at the rate of 40/hour with an accuracy of 95%. The
wage rate of a Grade I inspector is ` 4.50/hour and that of a Grade II inspector is ` 2.50/hour. Each time
an error is made by an inspector, the cost to the company is one rupee. The company has
available for the inspection job, 10 grade I and 5 grade II inspectors. Formulate the
problem to minimize the total cost of inspection.
Solution: Let the no. of inspectors of grade I and II be x1 and x2 respectively. They will
inspect (8 × 50) x1 + (8 × 40) x2 pieces daily. As the company requires at least 2,000 items
to be inspected daily, so the constraint is
400 x1 + 320 x2 ≥ 2,000.    ...(1)
Also, since only 10 grade I and 5 grade II inspectors are available for the inspection job, we have
x1 ≤ 10, x2 ≤ 5.    ...(2)
Further the company is bearing two types of costs, the wages of inspectors and the costs
of inspection errors.
Taking the cost of inspection errors into account, the cost of each grade of inspector per hour is :
Grade I : ` 4.50 + 1 × (3/100) × 50 = ` 6.00 per hour
Grade II : ` 2.50 + 1 × (5/100) × 40 = ` 4.50 per hour
Now the objective of the company is to minimize the total cost of inspection at the above
rates for each day.
Minimize Z = 8 × 6 . 00 × x1 + 8 × 4 . 50 × x2 = 48 x1 + 36 x2 ...(3)
Hence, the linear programming problem formulated for the given problem is :
Minimize Z = 48 x1 + 36 x2
subject to the constraints
400 x1 + 320 x2 ≥ 2,000
x1 ≤ 10, x2 ≤ 5
and x1 ≥ 0, x2 ≥ 0.
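For this problem the availability of inspectors is most naturally handled through variable bounds. A short Python sketch with scipy.optimize.linprog (assumed, not part of the text) follows; in practice the numbers of inspectors must of course be whole numbers, and only the linear programme as formulated is solved here.

from scipy.optimize import linprog

# Example 11: minimize Z = 48 x1 + 36 x2 subject to
# 400 x1 + 320 x2 >= 2000,  0 <= x1 <= 10,  0 <= x2 <= 5.
c      = [48, 36]
A_ub   = [[-400, -320]]      # the >= constraint written as a <= constraint
b_ub   = [-2000]
bounds = [(0, 10), (0, 5)]   # availability of grade I and grade II inspectors

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("grade I =", res.x[0], ", grade II =", res.x[1], ", daily inspection cost =", res.fun)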
1. A furniture dealer deals in two items, viz., tables and chairs. He has ` 10,000 to
invest and a space to store at most 60 pieces (including both tables and chairs). A
table costs him ` 500 and a chair ` 100. He can sell all the items that he buys, earning
a profit of ` 50 for each table and ` 15 for each chair. Formulate this problem as a
LPP so that he maximizes the profit.
2. A factory produces two products A and B. Each of the product A requires 2 hrs of
moulding, 3 hrs of grinding and 4 hrs for polishing and each of the product B
requires 4 hrs for moulding, 2 hrs for grinding and 2 hrs for polishing. The moulding
machine can work for 20 hrs, grinding machine for 24 hrs and polishing machine
available for 13 hrs. The profit is ` 5 per unit of A and ` 3 per unit of B. Assuming
that the factory can sell all that it produces, formulate the problem as a LPP to
maximize the profit.
4. A firm can produce two products A and B during a given period of time. Each of
these products requires four different operations, viz., Grinding, Turning,
Assembling and Testing. The requirement in hours per unit of manufacturing of
these products is as given :
A 1 3 4 5
B 2 1 3 4
The available capacities of these operations in hours for the given time period are :
30 for grinding, 60 for turning, 200 for assembling and 200 for testing.
Profit on each unit of A is ` 3 and that for each unit of B is ` 2. Formulate the
problem as a linear programming model to maximize the profit assuming that the
firm can sell all the items that it produces at the prevailing market price.
5. A toy company manufactures two types of dolls; an ordinary doll A and a deluxe
doll B. Each type of doll B takes twice as long to produce as one of type A. It is given
that the company would have time to make a maximum of 2000 dolls per day if it
produces only the ordinary version. The supply of plastic is sufficient to produce
1500 dolls per day (both A and B combined). The deluxe version requires a fancy
dress of which there are only 600 pieces per day available. If the company makes
profit of ` 3 and ` 5 per doll respectively on doll A and doll B, formulate the problem
as a linear programming problem to maximize the profit. [Meerut 2007 (BP)]
6. A furniture firm manufactures chairs and tables, each requiring the use of three
machines A, B and C. Production of one chair requires 2 hours on machine A, 1
hour on machine B and 1 hour on machine C. Each table requires 1 hour each on
machines A and B and 3 hours on machine C. The profit realized by selling one
chair is ` 30 while for a table the figure is ` 60. The total time available per week on
machine A is 70 hours, on machine B is 40 hours, and on machine C is 90 hours.
Formulate the linear programming problem to maximize the profit. [Meerut 2008]
7. A diet is to contain at least 4000 units of carbohydrates, 500 units of fat and 300
unit of protein. Two foods A and B are available. Food A costs ` 2 per unit and food
B costs ` 4 per unit. A unit of food A contains 10 units of carbohydrates, 20 units of
fat and 15 units of protein. A unit of food B contains 25 units of carbohydrates, 10
units of fat and 20 units of protein. Formulate the problem as a LPP so as to find the
minimized cost for a diet that consists of a mixture of these two foods and also
meets the minimum nutrition requirements. [Gorakhpur 2009]
8. A resourceful home decorator manufactures two types of lamps, say A and B. Both
lamps go through two technicians, first a cutter, second a finisher. Lamp A requires
2 hours of the cutter's time and 1 hour of the finisher's time. Lamp B requires 1 hour
of cutter's and 2 hours of finisher's time. The cutter has 104 hours and finisher has
76 hours of time available each month. Profit on the lamp A is ` 6 and on the lamp B
is ` 11. Assuming that he can sell all that he produces, how many of each type of
lamps should he manufacture per month to obtain the best returns?
Formulate a LPP for this problem.
9. The manager of an oil refinery must decide on the optimal mix of 2 possible
blending processes of which the inputs and outputs per production run are as
follows :
Input Output
Process
Crude A Crude B Gasoline X Gasoline Y
1 6 4 6 9
2 5 6 5 5
The maximum amounts available of crudes A and B are 500 units and 400 units
respectively. Market demand shows that at least 300 units of gasoline X and 260
units of gasoline Y must be produced. The profits per production run from process 1
and 2 are ` 40 and ` 50 respectively. Formulate the LPP for maximizing the profit.
10. A factory produces two products A and B. To manufacture one unit of product A, a
machine has to work for 1½ hours and a craftsman has to work for 2 hours. To
manufacture one unit of product B, the machine has to work for 3 hours and the
craftsman for one hour. In a week the factory can avail of 80 hours of machine time
and 70 hours of craftsman's time. The profit on the sale of each unit of A and B is of
` 10 and ` 8 respectively. If the manufacturer can sell all the items produced, how
many of each should be produced to get the maximum profit per week ?
Formulate the problem as a LPP.
11. A company manufactures two kinds of leather purses, A and B. A is a high quality
purse and B is lower quality. The sales of each of these purses A and B earn profit of
` 4 and ` 3 respectively. Each purse of type A requires twice as much time as a purse
of type B, and if all purses are of type B, the company could make 1000 purses per
day. The supply of leather is sufficient for only 800 purses per day (both A and B
combined). Purse A requires a fancy buckle, and only 400 buckles per day are
available. There are only 700 buckles available for purse B. What should be the
daily production of each type of purse to get the maximum profit ? Formulate the
problem as a LPP.
12. A firm can produce three types of cloth, say : A, B and C. Three kinds of wool are
required for it, say red wool, green wool and blue wool. One unit length of type A
cloth needs 2 yards of red wool and 3 yards of blue wool; one unit length of type B
cloth needs 3 yards of red wool, 2 yards of green wool and 2 yards of blue wool and
one unit length of type C cloth needs 5 yards of green wool and 4 yards of blue
wool. The firm has only a stock of 8 yards of red wool, 10 yards of green wool and 15
yards of blue wool. It is assumed that the income obtained from one unit length of
type A cloth is ` 3.00, of type B cloth is ` 5.00 and of type C cloth is ` 4.00.
Formulate this problem as a linear programming model to maximize the income
from the finished cloth.
13. A firm manufactures three products A, B and C. The time required (in minutes per unit) on each of the two machines G and H is given below :

Machine    Product A    Product B    Product C
G              4            3            5
H              2            2            4

The profit per unit of A, B and C is ` 3, ` 2 and ` 4 respectively.
Machines G and H have 2,000 and 2,500 machine-minutes available, respectively. The firm
must manufacture 100 A's, 200 B's and 50 C's but not more than 150 A's. Set up a
linear programming problem to maximize profit.
14. A farmer has 100 acre farm. He can sell all tomatoes, lettuce or radishes he can raise.
The price he can obtain is ` 1.00 per kg for tomatoes, ` 0.75 a head for lettuce and
` 2.00 per kg for radishes. The average yield per-acre is 2,000 kg of tomatoes, 3,000
heads of lettuce and 1,000 kg of radishes. Fertilizer is available at ` 0.50 per kg and
the amount required per acre is 100 kg each for tomatoes and lettuce and 50 kg for
radishes. Labour required for sowing, cultivating and harvesting per acre is 5
man-days for tomatoes and radishes, and 6 man-days for lettuce. A total of 400
man-days of labour are available at ` 20.00 per man-day. Formulate this problem as
a linear programming model to maximize the farmer's total profit.
15. A city hospital has the following minimal daily requirements for nurses :
Period    Clock time (24 hr. day)    Minimum number of nurses required
1         6 A.M. — 10 A.M.                        2
2         10 A.M. — 2 P.M.                        7
3         2 P.M. — 6 P.M.                        15
4         6 P.M. — 10 P.M.                        8
5         10 P.M. — 2 A.M.                       20
6         2 A.M. — 6 A.M.                         6
Nurses report to the hospital at the beginning of each period and work for 8
consecutive hours. The hospital wants to determine the minimum number of nurses to be
employed so that there will be a sufficient number of nurses available for each period.
Formulate this as a L.P.P. by setting up appropriate
constraints and objective function. [Meerut 2005, 08 (BP)]
1. Maximize Z = 50 x + 15 y,
subject to the constraints
5 x + y ≤ 100
x + y ≤ 60
and the non-negative restrictions x ≥ 0, y ≥ 0.
2. Maximize Z = 5 x + 3 y,
subject to the constraints
2 x + 4 y ≤ 20
3 x + 2 y ≤ 24
4 x + 2 y ≤ 13
and x ≥ 0, y ≥ 0.
3. Minimize Z = 2 x + y,
subject to the constraints
7 x + 2 y ≥ 30
5 x + 4 y ≥ 20
2 x + 8 y ≥ 16
and x ≥ 0, y ≥ 0.
4. Maximize Z = 3 x + 2 y,
subject to the constraints
x + 2 y ≤ 30
3 x + y ≤ 60
4 x + 3 y ≤ 200
5 x + 4 y ≤ 200
and x ≥ 0, y ≥ 0.
5. Maximize Z = 3 x + 5 y,
subject to the constraints
x + 2 y ≤ 2000
x + y ≤ 1500
y ≤ 600
and x ≥ 0, y ≥ 0.
6. Maximize Z = 30 x + 60 y,
subject to the constraints
2 x + y ≤ 70
x + y ≤ 40
x + 3 y ≤ 90
and x ≥ 0, y ≥ 0.
7. Minimize Z = 2 x + 4 y,
subject to the constraints
10 x + 25 y ≥ 4000
20 x + 10 y ≥ 500
15 x + 20 y ≥ 300
and x, y ≥ 0.
8. Maximize Z = 6 x + 11 y,
subject to the constraints
2 x + y ≤ 104
x + 2 y ≤ 76
and x ≥ 0, y ≥ 0.
9. Maximize Z = 40 x + 50 y,
subject to the constraints
6 x + 5 y ≤ 500
4 x + 6 y ≤ 400
6 x + 5 y ≥ 300
9 x + 5 y ≥ 260
and x, y ≥ 0.
10. Maximize Z = 10 x + 8 y,
subject to the constraints
1.5 x + 3 y ≤ 80
2 x + y ≤ 70
and x ≥ 0, y ≥ 0.
11. Maximize Z = 4 x + 3 y,
subject to the constraints
2 x + y ≤ 1000
x + y ≤ 800
x ≤ 400
y ≤ 700
and x ≥ 0, y ≥ 0.
12. Maximize Z = 3 x1 + 5 x2 + 4 x3,
subject to the constraints
2 x1 + 3 x2 ≤ 8
2 x2 + 5 x3 ≤ 10
3 x1 + 2 x2 + 4 x3 ≤ 15
and x1, x2, x3 ≥ 0.
13. Maximize Z = 3 x1 + 2 x2 + 4 x3,
subject to the constraints
4 x1 + 3 x2 + 5 x3 ≤ 2000
2 x1 + 2 x2 + 4 x3 ≤ 2500
100 ≤ x1 ≤ 150, x2 ≥ 200, x3 ≥ 50
and x1, x2, x3 ≥ 0.
14. Maximize Z = 1850 x1 + 2080 x2 + 1875 x3,
subject to the constraints
x1 + x2 + x3 ≤ 100
5 x1 + 6 x2 + 5 x3 ≤ 400
and x1, x2, x3 ≥ 0.
15. Maximize Z = x1 + x2 + x3 + x4 + x5 + x6 ,
subject to the constraints
x1 + x2 ≥ 7, x2 + x3 ≥ 15,
x3 + x4 ≥ 8, x4 + x5 ≥ 20,
x5 + x6 ≥ 6, x6 + x1 ≥ 2
and : x1, x2 , x3 , x4 , x5 , x6 ≥ 0.
To draw the graph of the simultaneous linear inequations i.e., to find the solution set of
the simultaneous linear inequations, we find the region of the xy-plane, common to all
the portions comprising the solution sets of the given inequations. If there is no region
common to all the solutions of the given inequations, we say that the solution set of the
system of inequations is empty. It should be noted that the solution set of simultaneous
linear inequations may be an empty set or it may be the region bounded by the straight
lines corresponding to given linear inequations or it may be an unbounded region with
straight line boundaries.
Example 1: Draw the diagram of the solution set of the system of linear inequations :
x + y ≤ 5, 4 x + y ≥ 4, x + 5 y ≥ 5, x ≤ 4, y ≤ 3.
Solution: Converting the given inequations into equations, we have the lines
x + y = 5, 4 x + y = 4, x + 5 y = 5, x = 4, y = 3.
Region Represented by x + 5 y ≥ 5 : The straight line x + 5 y = 5 meets the x-axis at (5, 0) and the y-axis at (0, 1). Since the given inequality is not strict, we join these points by a thick line. Clearly, the point (0, 0) not lying on the line x + 5 y = 5 does not satisfy the inequation x + 5 y ≥ 5. Therefore, out of the two portions of the xy-plane divided by the line x + 5 y = 5, the portion not containing the origin along with the line represents the solution set of the inequation x + 5 y ≥ 5.
The solution sets of the remaining inequations are obtained in the same way. For example, the point (0, 0) satisfies the inequation x ≤ 4, so out of the two portions of the xy-plane divided by the line x = 4, the portion containing the origin along with the line represents the solution set of the inequation x ≤ 4.
Fig. 2.1 : The lines x + y = 5, 4 x + y = 4, x + 5 y = 5, x = 4 and y = 3, with the region common to all five inequations shaded.
Hence, the shaded region i.e., the region common to the given five inequations,
represents the solution set of the given system of inequations.
Example 2: Draw the diagram of the solution set of the system of linear inequations :
3 x + 2 y ≥ 6, x ≥ 1, y ≥ 1.
To find the solution set of the given system of inequations, we first find the solution sets
of the given distinct inequations separately.
Region Represented by 3 x + 2 y ≥ 6 : The straight line 3 x + 2 y = 6 meets the x-axis at (2, 0) and the y-axis at (0, 3). The point (0, 0) does not satisfy the inequation 3 x + 2 y ≥ 6. Therefore, out of the two portions of the xy-plane divided by the line 3 x + 2 y = 6, the portion not containing the origin along with the line represents the solution set of the inequation 3 x + 2 y ≥ 6.
Region Represented by x ≥ 1 : Clearly, the line x = 1 is parallel to the y-axis at a distance of 1 unit from the origin, lying to the right hand side of the y-axis. Since the given inequality is not strict, we draw this line as a thick line. The point (0, 0) not lying on the line x = 1 does not satisfy the inequation x ≥ 1, therefore out of the two portions of the xy-plane divided by this line, the portion not containing the origin along with the line represents the solution set of the inequation x ≥ 1.
Fig. 2.2 : The lines 3 x + 2 y = 6, x = 1 and y = 1, with the region common to all three inequations shaded.
Hence, the shaded region common to the given three inequations represents the solution
set of the given system of inequations. We observe that the solution set of the given
system of linear inequations is an unbounded region.
Example 3: Draw the diagram of the solution set of the system of linear inequations :
2 x + 3 y ≤ 6, x + 4 y ≤ 4, x ≥ 0 , y ≥ 0 .
2 x + 3 y = 6, x + 4 y = 4, x = 0, y = 0.
Proceeding as in the previous examples, the portion of the xy-plane containing the origin together with the corresponding line represents the solution set of each of the inequations 2 x + 3 y ≤ 6 and x + 4 y ≤ 4, while x ≥ 0 and y ≥ 0 restrict the solution to the first quadrant.
The common region (shaded region in the figure) of the above four regions represents the
solution set of the given system of inequations.
Example 4: Exhibit graphically the solution set of the following system of linear
inequations :
x + y ≥ 1, − 3 x − 4 y ≥ −12, − x + 2 y ≥ −2, x ≥ 0 , y ≥ 0 .
Hence, the shaded region i.e., the region common to the given five inequations,
represents the solution set of the given system of inequations.
In general we use the following three methods for the solution of a L.P.P.
1. Graphical (or geometrical) Method : If the objective function Z is a function of
two variables only then the problem can be solved by graphical method. A problem
of three variables can also be solved by this method but it is complicated enough.
2. Analytic Method : (Trial and Error Method) : The L.P.P. having more than two
variables cannot be solved by graphical method as even the problem of three
variables becomes complicated enough. In such cases, the analytic method (trial
and error method) can be useful.
3. Simplex Method : This is the most powerful method to solve a L.P.P. as any
problem can be solved by this method. This method is an algebraic procedure which
progressively approaches the optimal solution. This method is discussed in chapter
four.
While solving a L.P.P. graphically by this method, we first obtain the region in the
xy-plane containing all points that simultaneously satisfy all constraints including
non-negative restrictions. This polygonal region so obtained is called the convex
polygon of the set of all feasible solutions of the L.P.P. It is also called the permissible
region for the values of the variables.
Now we determine the vertices (or corner points) of this convex polygon. These vertices
are called the extreme points of the set of all feasible solutions of the L.P.P.
After obtaining the extreme points we find the values of the objective function Z at all
these points. The point where the objective function attains its optimum value
(maximum or minimum value of Z as the case may be) gives the optimal (or optimum)
value of the given L.P.P.
If two vertices of the convex polygon give the same optimal value of the objective
function, then all points on the line segment joining these two vertices give the optimal
value of the objective function and L.P.P. is said to have infinite number of optimal
solutions.
Step 2 : Draw lines in the plane corresponding to each equation (obtained in step1) and
non-negative restrictions.
Method to draw lines : Putting x2 = 0 in the equation of the line find x1 and then putting
x1 = 0 find x2 . Thus we get the points of intersection i.e., ( x1, 0), (0, x2 ), of the line with the
axes. The line is drawn by joining these two points on the axes.
Step 3 : Now we find the permissible region for the values of the variables which is the
region bounded by these lines such that every point of this region satisfies all the
constraints and the non-negative restrictions.
Working Rule for Finding Permissible Region (Feasible Region) : Consider the
constraint ax + by ≤ c or ax + by ≥ c, where c > 0. The line ax + by = c (drawn in step 2) divides the
xy-plane into two regions, one containing and the other not containing the origin. Since (0, 0)
satisfies the inequality ax + by ≤ c, the feasible region for the inequality ax + by ≤ c is
the region which contains the origin. Also (0, 0) does not satisfy the inequality ax + by ≥ c,
so for the inequality ax + by ≥ c, the feasible region is the region which does not contain
the origin. See fig. 2.5(a).
The line y − mx = 0 (drawn in step 2) divides the xy-plane into two regions, one containing
the positive x-axis and the other containing the positive y-axis. For the inequality y − mx < 0, the
feasible region is the region which contains the positive x-axis; for the inequality y − mx > 0,
the feasible region is the region which contains the positive y-axis. See fig. 2.5(b).
Fig. 2.5 : (a) The line ax + by = c divides the xy-plane into the region ax + by < c containing the origin and the region ax + by > c not containing it. (b) The line y − mx = 0 divides the plane into the region y − mx < 0 containing the positive x-axis and the region y − mx > 0 containing the positive y-axis.
Thus, we find the feasible region corresponding to each inequality. Then the region which is
common to all these regions is the permissible region (i.e., feasible region) for the
values of the variables. This permissible region is shaded.
Step 4 : Here we find the point in the permissible region (obtained in step 3) which
gives the optimum value of the objective function Z. The point will be one of the extreme
points (vertices) of the convex polygon enclosing the permissible region.
Method 1 : Corner Point Method : Determine the vertices of the convex polygon,
which are the points of intersection of the straight lines passing through them. These
vertices are called the extreme points of the set of all feasible solution of the L.P.P. Then
find the values of the objective function Z at all these points. The point where the
objective function Z attains its optimum value (maximum or minimum value as the case
may be) gives the optimum (or optimal) value of the L.P.P.
If two vertices of the convex polygon give the same optimum value of the objective
function Z, then all points on the line segment joining these two vertices will give the
optimum value of the objective function Z. In this case the L.P.P. is said to have infinite
number of optimum solutions.
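For a two-variable problem the corner point method can be carried out mechanically: intersect every pair of boundary lines, discard the intersection points that violate some constraint, and evaluate Z at the remaining points. The Python sketch below (NumPy assumed, not part of the text) does this for constraints of the form A x ≤ b together with x ≥ 0, and is applied to the goldsmith problem formulated earlier in this chapter.

import itertools
import numpy as np

def corner_points(A, b):
    # Vertices of the region { x >= 0 : A x <= b } for a two-variable problem.
    A = np.vstack([np.asarray(A, dtype=float), [[-1.0, 0.0], [0.0, -1.0]]])   # add x1 >= 0, x2 >= 0
    b = np.concatenate([np.asarray(b, dtype=float), [0.0, 0.0]])
    points = []
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-9:          # parallel boundary lines
            continue
        p = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ p <= b + 1e-9):             # keep only feasible intersections
            points.append(p)
    return points

# Goldsmith problem: maximize Z = 100 x1 + 300 x2, x1 + 2 x2 <= 32, x1 + x2 <= 24.
c = np.array([100.0, 300.0])
for p in corner_points([[1, 2], [1, 1]], [32, 24]):
    print("vertex", p, " Z =", float(c @ p))

The vertex with the largest value of Z (for maximization) or the smallest value of Z (for minimization) is then read off from the printed list.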
Method 2 : Iso-Profit or Iso-Cost Method : Here, to find the vertex of the convex
polygon, which gives the optimum value of the objective function Z, draw a straight line in
the feasible region corresponding to the equation obtained by giving some convenient
value k to the objective function. This line is called an iso-profit or iso-cost line, since
every point on the line within the permissible region will yield the same value of Z. We
can also take k = 0, which gives a line passing through the origin and parallel to the iso-profit
line. Thus, we draw the line (dotted line) through the origin corresponding to Z = 0.
Then for the maximization problem the extreme point of the permissible region which is
farthest away (i.e., at greatest distance) from this line and for the minimization problem
the extreme point of the permissible region which is nearest to this line gives the
optimum values of Z. To obtain this extreme point of the permissible region, giving the
optimum value of the objective function Z, we go on drawing lines parallel to the line
Z = 0. The farthest extreme point is the vertex of the permissible region through which
one of the parallel lines passes and after which it leaves this region and the nearest
extreme point is the vertex of the permissible region through which the parallel line
enters this region.
Note : If there is no permissible region then we say that the problem has no solution.
This method can be applied to problems involving only two variables while most of the
practical situations do involve more than two variables. Therefore it is not a powerful
tool for the solution of L.P.P.
Example 1: Solve graphically the following linear programming problem :
Minimize Z = 20 x1 + 10 x2
subject to the constraints
x1 + 2 x2 ≤ 40
3 x1 + x2 ≥ 30
4 x1 + 3 x2 ≥ 60
and the non-negative restrictions x1, x2 ≥ 0. [Meerut 2005, 07 (BP), 11, 12(BP); Kanpur 2007]
Step 3 : The shaded region PQRSP is the permissible region of the problem.
Fig. 2.6 : The lines x1 + 2 x2 = 40, 3 x1 + x2 = 30 and 4 x1 + 3 x2 = 60, the shaded permissible region PQRSP with vertices P(40, 0), Q(4, 18), R(6, 12) and S(15, 0), and the dotted line Z = 20 x1 + 10 x2 = 0 through the origin.
Now the values of the objective function Z at these vertices (corner points) are as given
in the table below :
P(40, 0)    Z = 20 × 40 + 10 × 0 = 800
Q(4, 18)    Z = 20 × 4 + 10 × 18 = 260
R(6, 12)    Z = 20 × 6 + 10 × 12 = 240 (Min.)
S(15, 0)    Z = 20 × 15 + 10 × 0 = 300
Clearly, Z is minimum at the vertex R(6, 12). Hence x1 = 6, x2 = 12 is the optimal solution of the given problem and the minimum value of Z is 240.
By Iso-profit Method : Here, we draw the line through the origin corresponding to
Z = 0, which is parallel to iso-profit line.
Z = 0 ⇒ 20 x1 + 10 x2 = 0 ⇒ x1/x2 = −1/2.
The dotted line through the origin is shown in the figure. Drawing parallel lines away
from the origin (note) O, we see that the nearest line (since it is minimization problem)
in the permissible region passes through the vertex R (6,12).
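The answer obtained graphically can be confirmed with a solver. The sketch below (scipy.optimize.linprog assumed, ≥ rows negated, and the constraint x1 + 2 x2 ≤ 40 read from the figure) should agree with the vertex R(6, 12) and the minimum value Z = 240 found above.

from scipy.optimize import linprog

# Minimize Z = 20 x1 + 10 x2 subject to
# x1 + 2 x2 <= 40,  3 x1 + x2 >= 30,  4 x1 + 3 x2 >= 60,  x1, x2 >= 0.
c    = [20, 10]
A_ub = [[1, 2],
        [-3, -1],      # 3 x1 + x2 >= 30 written as -3 x1 - x2 <= -30
        [-4, -3]]      # 4 x1 + 3 x2 >= 60 written as -4 x1 - 3 x2 <= -60
b_ub = [40, -30, -60]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print("x1 =", res.x[0], ", x2 =", res.x[1], ", minimum Z =", res.fun)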
Example 2: Using Iso-profit line method, compute the maximum value of the expression
2 x1 + 3 x2 subject to the conditions :
2 x1 + x2 ≤ 5
x1 − x2 ≤ 1
x2 ≤ 2
and x1 ≥ 0, x2 ≥ 0.
Verify the maximum value by computing the values at the boundary points of the polygon.
[Meerut 2008]
Solution: Converting the given conditions into equation, and drawing these lines, the
permissible region is OPQRSO (shaded region) which is the set of all feasible solution of
the problem.
Z = 2 x1 + 3 x2 .
Taking Z = 0, we get x1/x2 = −3/2. The line corresponding to Z = 0 is shown by a dotted line through the origin O, which is parallel to the iso-profit line. Now drawing parallel lines away from the origin (since it is a maximization problem), shown by dotted lines, we see that the farthest line from the origin O passes through the vertex R(3/2, 2).
Fig. 2.7 : The lines 2 x1 + x2 = 5, x1 − x2 = 1 and x2 = 2, the shaded permissible region OPQRSO with vertices O(0, 0), P(1, 0), Q(2, 1), R(3/2, 2) and S(0, 2), and the dotted line Z = 2 x1 + 3 x2 = 0.
Hence, x1 = 3/2, x2 = 2
and Max. Z = 2 × 3/2 + 3 × 2 = 9.
The values of the objective function Z at these corner points are as follows :
O(0, 0)      Z = 0 + 0 = 0
P(1, 0)      Z = 2 × 1 + 0 = 2
Q(2, 1)      Z = 2 × 2 + 3 × 1 = 7
R(3/2, 2)    Z = 2 × 3/2 + 3 × 2 = 9 (Max.)
S(0, 2)      Z = 0 + 3 × 2 = 6.
Obviously, the maximum value of Z is 9 at R(3/2, 2) i.e., when x1 = 3/2, x2 = 2.
Example 3: Solve graphically the linear programming problem formulated in Example 1 of this chapter :
Maximize Z = 100 x1 + 300 x2
subject to the constraints
x1 + 2 x2 ≤ 32
x1 + x2 ≤ 24
and x1 ≥ 0, x2 ≥ 0.
Solution: Proceeding stepwise, the permissible region is the region bounded by the lines x1 + 2 x2 = 32, x1 + x2 = 24 and the coordinate axes, with corner points O(0, 0), (24, 0), (16, 8) and (0, 16).
Now the values of the objective function at these corner points are as given in the table below :
O(0, 0)     Z = 100 × 0 + 300 × 0 = 0
(24, 0)     Z = 100 × 24 + 300 × 0 = 2400
(16, 8)     Z = 100 × 16 + 300 × 8 = 4000
(0, 16)     Z = 100 × 0 + 300 × 16 = 4800 (Max.)
Thus Z is maximum for x1 = 0, x2 = 16 and the maximum profit is ` 4800.
Example 4: A toy company manufactures two types of dolls; a basic version-doll A and a
deluxe version-doll B. Each doll of type B takes twice as long to produce as one of type A,
and the company would have time to make a maximum 2000 per day if it produces only
the basic version. The supply of plastic is sufficient to produce 1500 dolls per day (both A
and B combined). The deluxe version requires a fancy dress of which there are only 600
pieces per day available. If the company makes a profit of ` 3 and ` 5. per doll respectively,
on dolls A and B, how many of each should be produced per day in order to maximize
profit? [Meerut 2009]
Formulation of the problem as L.P.P. : Let the company manufacture x1 dolls of type A and x2 dolls of type B per day. Then the problem is :
Maximize Z = 3 x1 + 5 x2
subject to the constraints
x1 + 2 x2 ≤ 2000
x1 + x2 ≤ 1500
x2 ≤ 600
and x1 ≥ 0, x2 ≥ 0.
Solution of the Problem : Proceeding stepwise, the permissible region is the shaded
region OPQRSO which is the set of all feasible solutions of the problem.
Fig. 2.9 : The lines x1 + 2 x2 = 2000, x1 + x2 = 1500 and x2 = 600, and the shaded region OPQRSO with vertices O(0, 0), P(1500, 0), Q(1000, 500), R(800, 600) and S(0, 600).
The values of the objective function Z at these corner points are as given below :
O(0, 0) Z = 3 ×0 + 5 ×0 = 0
P (1500, 0) Z = 3 × 1500 + 5 × 0 = 4500
Q(1000, 500) Z = 3 × 1000 + 5 × 500 = 5500 (Max.)
R (800, 600) Z = 3 × 800 + 5 × 600 = 5400
S (0, 600) Z = 3 × 0 + 5 × 600 = 3000
Thus, Z is maximum at the corner point Q i.e., for x1 = 1000 and x2 = 500 and the
maximum value of Z is 5500. The optimal solution of the problem is x1 = 1000, x2 = 500.
Hence, 1000 dolls of type A and 500 dolls of type B should be produced per day to
maximize the profit and the maximum profit per day is ` 5500.
The optimal solution may also be found by the following alternative method.
By Iso-profit Line Method : To find the maximum value of the objective function Z, we
draw the dotted line Z = 3 x1 + 5 x2 = 0 passing through the origin, which is parallel to
iso-profit line. We now move this line parallel to itself away from the origin so that its
distance from the origin becomes maximum yet it has at least one point in the feasible
region. We see that in this position this line passes through only one point Q(1000, 500)
of the feasible region. This line is an iso-profit line having only one point viz., Q in the
feasible region which will yield the maximum value of Z.
Hence, Z is maximum for x1 = 1,000 and x2 = 500 and the maximum value of Z is ` 5,500.
Example 5: A dietician mixes two types of food in such a way that the vitamin contents of
the mixture contain at least 8 units of vitamin A and 10 units of vitamin C. Food X
contains 2 units/kg of vitamin A and 1 unit/kg of vitamin C while food Y contains 1
unit/kg of vitamin A and 2 units/kg of vitamin C. One kg of food X costs ` 5 whereas one kg
of food Y costs ` 7. Determine the minimum cost of such a mixture.
Solution: Formulation of the problem as L.P.P. : Let the dietician mix x1 kg of food X
and x2 kg of food Y. Then
x1 ≥ 0, x2 ≥ 0
Since the mixture must contain at least 8 units of vitamin A and 10 units of vitamin C, we have
2 x1 + x2 ≥ 8
and x1 + 2 x2 ≥ 10.
The total cost Z in ` of the mixture is given by
Z = 5 x1 + 7 x2.
Hence, the linear programming problem formulated from the given problem is :
Minimize Z = 5 x1 + 7 x2
subject to the constraints
2 x1 + x2 ≥ 8
x1 + 2 x2 ≥ 10
and x1 ≥ 0, x2 ≥ 0.
Solution of the Problem : Proceeding stepwise, the permissible region is the shaded
region, which is unbounded and is the set of all feasible solutions of the problem.
Fig. 2.10 : The lines 2 x1 + x2 = 8 and x1 + 2 x2 = 10, the unbounded shaded permissible region with corner points P(0, 8), T(2, 4) and S(10, 0), and the dotted line Z = 5 x1 + 7 x2 = 0 through the origin.
Now the values of the objective function at these corner points are as given in the table
below :
P (0, 8) Z = 5 × 0 + 7 × 8 = 56
T (2, 4) Z = 5 × 2 + 7 × 4 = 38 (Min.)
S (10, 0) Z = 5 × 10 + 7 × 0 = 50
Clearly, Z is minimum at T (2, 4). Hence x1 = 2 and x2 = 4 is the optimal solution of the
given problem and the optimal value is Z = 38.
Hence, the dietician should mix 2 kg of food X and 4 kg of food Y to make the required
mixture at minimum cost. The minimum cost in this case is ` 38.
By Iso-cost Method : To find the minimum value of Z, we draw the dotted line
corresponding to Z = 5 x1 + 7 x2 = 0 passing through the origin, which is parallel to the
iso-cost lines. We now move this line parallel to itself towards the feasible region until it
passes through only one point [here the corner point T (2, 4)] of the feasible region. This
iso-cost line, having only the point T in the feasible region, gives the minimum value of Z.
Hence Z is minimum for x1 = 2, x2 = 4 and the minimum cost is 5 × 2 + 7 × 4 = ` 38.
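Since only the corner points of the feasible region need to be inspected, the check can also be done with a few lines of plain Python (a sketch, not from the book; the corner points are read off the graph above).

    # Corner-point check for Example 5.
    def Z(x1, x2):
        return 5 * x1 + 7 * x2            # cost of the mixture

    corners = {"P": (0, 8), "T": (2, 4), "S": (10, 0)}
    for name, (x1, x2) in corners.items():
        print(name, (x1, x2), "Z =", Z(x1, x2))
    # The smallest value, Z = 38, occurs at T(2, 4).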
Example 6: A farm is engaged in breeding hens. In view of the need to ensure certain
nutrients (say x1, x2 , x3 ), it is necessary to buy two types of food, say A and B. One unit of
food A contains 36 units of x1, 3 units of x2 and 20 units of x3 . One unit of food B
contains 6 units of x1, 12 units of x2 and 10 units of x3 . The minimum daily requirement
of x1, x2 and x3 is 108, 36 and 100 units respectively. The cost of food A is ` 20 per unit
whereas food B costs ` 40 per unit. Find the minimum food cost so as to meet the minimum
daily requirement of nutrients.
Solution: Formulation of the Problem as L.P.P. : Let x units of food A and y units of
food B be bought to fulfill the minimum requirement of the nutrients x1, x2 , x3 and to
minimize the cost. The nutrient requirements give
36 x + 6 y ≥ 108, 3 x + 12 y ≥ 36, 20 x + 10 y ≥ 100.
Also, since the quantity of each type of food bought cannot be negative, therefore
x ≥ 0, y ≥ 0.
Thus the problem is :
Minimize Z = 20 x + 40 y
subject to 36 x + 6 y ≥ 108
3 x + 12 y ≥ 36
20 x + 10 y ≥ 100
and x ≥ 0, y ≥ 0.
Solution of the Problem : Proceeding stepwise, the permissible region (the set of all
points satisfying all the constraints and the non-negative restrictions) consists of the
shaded region YLSTPX which is unbounded.
[Fig. 2.11 : The unbounded feasible region YLSTPX bounded by the lines 36x + 6y = 108, 3x + 12y = 36 and 20x + 10y = 100, with corner points L(0, 18), S(2, 6), T(4, 2) and P(12, 0), and the dotted iso-cost line Z = 20x + 40y = 0 through the origin.]
To find the minimum value of Z, we draw the dotted line through the origin
corresponding to Z = 20 x + 40 y = 0, which is parallel to the iso-cost lines. We now move this
line parallel to itself until it enters the permissible region through a single point [here the
corner point T (4, 2)]. This iso-cost line, having only the point T in the feasible region, gives
the minimum value of Z, namely Z = 20 × 4 + 40 × 2 = 160.
Hence, 4 units of food A and 2 units of food B should be bought to fulfill the minimum
requirements of x1, x2 , x3 at a minimum cost of ` 160.
Example 7: Old hens can be bought at ` 2 each and young ones at ` 5 each. The old hens
lay 3 eggs per week and the young ones lay 5 eggs per week, each egg being worth 30 paise.
A hen (young or old) costs ` 1 per week to feed. I have only ` 80 to spend for hens. How
many of each kind should I buy to give me a maximum profit, assuming that I cannot house
more than 20 hens ?
Solution: Formulation of the Problem as L.P.P. : Suppose I buy x1 old hens and x2
young hens.
Since old hens lay 3 eggs per week and the young ones lay 5 eggs per week, the total
number of eggs I have per week is 3 x1 + 5 x2 . Consequently, each egg being worth 30
paise, my total income per week is ` 0.3 (3 x1 + 5 x2 ).
Also, the expenses for feeding ( x1 + x2 ) hens, at the rate of ` 1 per hen per week
= ` ( x1 + x2 )
∴ Total profit per week, Z = 0.3 (3 x1 + 5 x2 ) − ( x1 + x2 ) = − 0.1 x1 + 0.5 x2
Since the cost of one old hen is ` 2 and that of one young hen is ` 5 and I have only ` 80 to
spend for hens, therefore 2 x1 + 5 x2 ≤ 80.
Also, since I cannot house more than 20 hens, x1 + x2 ≤ 20.
Again, the number of hens purchased, whether old ones or young ones, cannot be
negative. Therefore, x1 ≥ 0 and x2 ≥ 0.
Thus the problem is :
Maximize Z = − 0.1 x1 + 0.5 x2
subject to 2 x1 + 5 x2 ≤ 80
x1 + x2 ≤ 20
and x1 ≥ 0, x2 ≥ 0.
Solution of the Problem : To find the maximum value of the objective function Z, we draw
the dotted line through the origin corresponding to Z = − 0.1 x1 + 0.5 x2 = 0, which is parallel
to the iso-profit lines.
We now move this line parallel to itself away from the origin so that it leaves the feasible
region through only one point [here the corner point B (0, 16)]. This iso-profit line, having
only the point B in the feasible region, yields the maximum value of Z.
[Fig. 2.12 : The feasible region OBEC bounded by the lines x1 + x2 = 20 and 2x1 + 5x2 = 80, with corner points O(0, 0), B(0, 16), E(20/3, 40/3) and C(20, 0), and the dotted iso-profit line Z = −0.1x1 + 0.5x2 = 0 through the origin.]
Hence, Z is maximum for x1 = 0 and x2 = 16 and the maximum value of Z is 0.5 × 16 − 0.1 × 0 = ` 8.
Hence, he should buy only 16 young hens and no old hen in order to get the maximum
profit of ` 8 per week.
Note : If the dotted line through the origin is moved in the opposite direction then it will
pass through the vertex C (20, 0) for which
Z = − 0 .1 × 20 + 0 . 5 × 0 = −2 < 8
∴ Z is not max. at C (20, 0).
By Corner Point Method : The point giving the maximum value of the objective
function may also be located by finding the values of the objective function at all the
different corner points of the feasible region.
Solving simultaneously the equations of the corresponding intersecting lines of the
feasible region, we get the co-ordinates of the vertices of the feasible region as
O (0, 0), B(0,16), E(20 3 , 40 3) and C(20, 0). The values of the objective function at
these corner points are as given below :
O(0, 0) Z = − 0.1 × 0 + 0. 5 × 0 = 0
B (0,16) Z = − 0.1 × 0 + 0. 5 × 16 = 8 (Max.)
E (20/3, 40/3) Z = − 0.1 × 20/3 + 0.5 × 40/3 = 6
C (20, 0) Z = − 0.1 × 20 + 0. 5 × 0 = −2
Thus, the maximum value of Z is 8 and is attained when x1 = 0 and x2 = 16.
Example 8: Suresh has two factories of toys, one located in city X and the other in city Y.
Both these factories manufacture the same type of toys. From these locations a certain
number of toys are delivered to each of the three depots situated at places A, B and C. The
weekly requirements of the depots are respectively 5, 5 and 4 units, while the production
capacity of the factories at X and Y are respectively 8 and 6 units. The transportation cost
in ` per unit from a factory to a depot is as given in the table.
To Depot
A B C
From Factory
X 16 10 15
Y 10 12 10
How many units should be transported from each factory to each depot in order that the
transportation cost is minimum ?
Solution: Formulation of the Problem as L.P.P. : Let x1 and x2 units be transported from
the factory at X to the depots at A and B respectively. Since the factory at X produces 8 units,
the remaining 8 − x1 − x2 units are transported from X to the depot at C.
Now the number of units supplied to the depots cannot be negative, therefore
x1 ≥ 0, x2 ≥ 0 and 8 − x1 − x2 ≥ 0 ⇒ x1 + x2 ≤ 8.
The weekly requirement of the depot at A is 5 units. Since x1 units are supplied from the
factory at X, the remaining 5 − x1 units are to be supplied from the factory at Y.
∴ 5 − x1 ≥ 0 ⇒ x1 ≤ 5.
Similarly, the depot at B requires 5 units of which x2 units are supplied from X, so 5 − x2
units are supplied from Y.
∴ 5 − x2 ≥ 0 ⇒ x2 ≤ 5.
The factory at Y produces 6 units, so the number of units supplied from Y to the depot at C is
6 − (5 − x1 + 5 − x2 ), i.e., x1 + x2 − 4, and hence
x1 + x2 − 4 ≥ 0 ⇒ x1 + x2 ≥ 4.
The total transportation cost is
Z = 16 x1 + 10 x2 + 15 (8 − x1 − x2 ) + 10 (5 − x1) + 12 (5 − x2 ) + 10 ( x1 + x2 − 4)
or Z = x1 − 7 x2 + 190.
Thus the problem is :
Minimize Z = x1 − 7 x2 + 190
subject to x1 + x2 ≤ 8
x1 ≤ 5
x2 ≤ 5
x1 + x2 ≥ 4
and x1 ≥ 0, x2 ≥ 0.
Solution of the Problem : Proceeding stepwise, the permissible region (the set of all
points satisfying all the constraints and the non-negative restrictions) consists of the
shaded region EFCGHDE.
[Fig. 2.13 : The feasible region EFCGHD bounded by the lines x1 + x2 = 4, x1 + x2 = 8, x1 = 5 and x2 = 5, with corner points E(0, 4), F(4, 0), C(5, 0), G(5, 3), H(3, 5) and D(0, 5), together with the dotted line x1 − 7x2 = 0 through the origin and the line for minimum Z.]
To find the minimum value of the objective function Z, we draw a dotted line through
the origin corresponding to Z = 0, i.e., the line x1 − 7 x2 = 0 (ignoring the constant 190),
which is an iso-cost line. We first move this line parallel to itself in the direction of decreasing
x2 until it passes through only one point [here the corner point C (5, 0)] of the feasible region.
At C (5, 0), Z = 195 > 190, so this direction does not give the minimum. We therefore move
the line OL parallel to itself away from the origin towards the positive direction of the
x2-axis so that it passes through only one point [here the corner point D (0, 5)] of the feasible
region. This iso-cost line, having only the point D in the feasible region, gives the minimum
value of Z.
Thus, Z is minimum for x1 = 0 and x2 = 5 and the minimum value of Z in this case is
Z = 0 − 7 × 5 + 190 = 155.
Hence, the optimal transportation strategy is to supply 0, 5 and 3 units from the factory
at X and 5, 0 and 1 units from the factory at Y to the depots at A, B and C respectively. In
this case the transportation cost is minimum and is ` 155.
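The reduction of the total transportation cost to Z = x1 − 7x2 + 190 can be verified with a few lines of Python (an illustrative sketch, not part of the book); the two functions below should agree at every feasible point.

    # Check of the reduced cost function of Example 8.
    def total_cost(x1, x2):
        return (16*x1 + 10*x2 + 15*(8 - x1 - x2)   # from factory X to A, B, C
                + 10*(5 - x1) + 12*(5 - x2)        # from factory Y to A, B
                + 10*(x1 + x2 - 4))                # from factory Y to C

    def reduced(x1, x2):
        return x1 - 7*x2 + 190

    print(total_cost(0, 5), reduced(0, 5))   # both print 155 (the optimum)
    print(total_cost(5, 0), reduced(5, 0))   # both print 195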
Example 9: The post master of a local post office wishes to hire extra helpers during the
deepawali season, because of a large increase in the volume of mail handling and delivery.
Keeping in view the limited office space and the budgetary condition, the number of
temporary helpers must not exceed 10. According to past experience, a man can handle
300 letters and 80 packages per day, and a woman can handle 400 letters and 50
packages per day. It is believed that the daily volume of extra mail and packages will be no
less than 3400 and 680 respectively. A man receives ` 25 a day and a woman receives
` 22 a day. How many men and women helpers should be hired to keep the pay-roll at a
minimum ? [Kanpur 2008]
Solution: Formulation of the Problem as L.P.P. : Let x1 men and x2 women be hired
by the post master to keep the pay-roll at a minimum.
The pay-roll per day is then Z = 25 x1 + 22 x2 , which is to be minimized.
Since the maximum number of helpers can be only 10, therefore x1 + x2 ≤ 10.
Given that a man can handle 300 letters daily and a woman can handle 400 letters daily
and that the number of extra letters expected daily is not less than 3400, we must have
300 x1 + 400 x2 ≥ 3400.
Similarly, for the packages, 80 x1 + 50 x2 ≥ 680.
Since the number of men and women hired cannot be negative, we have
x1 ≥ 0, x2 ≥ 0.
Thus the problem is :
Minimize Z = 25 x1 + 22 x2
subject to x1 + x2 ≤ 10
300 x1 + 400 x2 ≥ 3400
80 x1 + 50 x2 ≥ 680
and x1, x2 ≥ 0.
Solution of the Problem : Proceeding stepwise, the permissible region (the set of all
points satisfying all the constraints and the non-negative restrictions) consists of the
point G (6, 4) only.
[Fig. 2.14 : The lines x1 + x2 = 10, 300x1 + 400x2 = 3400 and 80x1 + 50x2 = 680 all pass through the single feasible point G(6, 4).]
At G (6, 4), Z = 25 × 6 + 22 × 4 = 238.
Hence, 6 men and 4 women helpers should be hired by the post master to meet the
seasonal requirements and keep the pay-roll at a minimum of ` 238.
Example 10: Show graphically that the following L.P.P. has no feasible solution :
Minimize Z = 5 x1 + 6 x2
subject to x1 + x2 ≥ 50
x1 + 2 x2 ≤ 40
and x1, x2 ≥ 0.
Solution: Solving simultaneously the inequations of the constraints and the non-negative
restrictions by graphical method, we see that there exist no values of x1 and x2 that
simultaneously satisfy all the constraints and the non-negative restrictions.
[Fig. 2.15 : The half plane x1 + x2 ≥ 50 and the line x1 + 2x2 = 40 have no common point in the first quadrant.]
Hence, the given problem does not have any feasible solution.
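A solver detects the same situation automatically. The following sketch (not from the book, and using the constraints as reconstructed above) assumes SciPy; its linprog routine reports failure with status code 2, which indicates an infeasible problem.

    # Infeasibility check for the example above.
    from scipy.optimize import linprog

    c = [5, 6]                          # minimize 5x1 + 6x2
    A_ub = [[-1, -1],                   # x1 + x2 >= 50  ->  -x1 - x2 <= -50
            [1, 2]]                     # x1 + 2x2 <= 40
    b_ub = [-50, 40]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.success, res.status)      # False, 2  (no feasible solution)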
subject to x1 − x2 ≤ 1,
x1 + x2 ≥ 3
Maximize Z = −3 x1 + 2 x2
subject to x1 ≤ 3
x1 − x2 ≤ 0
1. Max. Z = x1 + x2 , subject to x1 + x2 ≤ 2000, x1 + x2 ≤ 1500, x2 ≤ 600, and x1, x2 ≥ 0.
2. Max. Z = 8 x1 + 7 x2 , subject to 3 x1 + x2 ≤ 66,000, x1 + x2 ≤ 45,000, x1 ≤ 20,000, x2 ≤ 40,000, and x1, x2 ≥ 0.
5. Max. Z = 5 x1 + 7 x2 , subject to x1 + x2 ≤ 4, 3 x1 + 8 x2 ≤ 24, 10 x1 + 7 x2 ≤ 35, and x1, x2 ≥ 0.
6. Min. Z = 3 x1 + 5 x2 , subject to −3 x1 + 4 x2 ≤ 12, 2 x1 − x2 ≥ −2, 2 x1 + 3 x2 ≥ 12, x1 ≤ 4, x2 ≥ 2, and x1, x2 ≥ 0.
7. Max. Z = 3 x1 + 2 x2 , subject to x1 + x2 ≤ 4, x1 − x2 ≤ 2, x1, x2 ≥ 0. [Meerut 2009]
8. Max. Z = 3 x1 + 4 x2 , subject to x1 − x2 ≤ −1, − x1 + x2 ≤ 0, x1, x2 ≥ 0. [Meerut 2011 (BP)]
9. Min. Z = x1 + x2 , subject to 5 x1 + 10 x2 ≤ 50, x1 + x2 ≥ 1, x2 ≤ 4, x1, x2 ≥ 0.
10. Max. Z = 0.75 x1 + x2 , subject to x1 − x2 ≥ 0, −0.5 x1 + x2 ≤ 1, and x1, x2 ≥ 0. [Meerut 2006 (BP), 08 (BP), 12]
11. Max. Z = 3 x1 + 4 x2 , subject to 4 x1 + 2 x2 ≤ 80, 2 x1 + 5 x2 ≤ 180, and x1, x2 ≥ 0. [Gorakhpur 2008, 10]
12. Max. Z = 3 x1 + 5 x2 , subject to x1 + 2 x2 ≤ 20, x1 + x2 ≤ 15, x2 ≤ 6, and x1, x2 ≥ 0. [Gorakhpur 2009]
13. Max. Z = 6 x1 − 2 x2 , subject to 2 x1 − x2 ≤ 2, x1 ≤ 3, and x1, x2 ≥ 0.
14. Max. Z = 3 x1 + 4 x2 , subject to 5 x1 + 4 x2 ≤ 200, 3 x1 + 5 x2 ≤ 150, 5 x1 + 4 x2 ≥ 100, 8 x1 + 4 x2 ≥ 80, x1, x2 ≥ 0. [Meerut 2004, 07]
15. Max. Z = x1 + (1/2) x2 , subject to 3 x1 + 2 x2 ≤ 12, 5 x1 ≤ 10, x1 + x2 ≥ 8, − x1 + x2 ≥ 4, x1, x2 ≥ 0. [Meerut 2006]
16. Min. Z = 2 x1 − 10 x2 , subject to x1 − x2 ≥ 0, x1 − 5 x2 ≥ −5, and x1, x2 ≥ 0.
17. Max. Z = 3 x1 − 2 x2 , subject to x1 + x2 ≤ 1, 2 x1 + 2 x2 ≥ 4, and x1, x2 ≥ 0.
18. Max. Z = x1 + x2 , subject to x1 − x2 ≥ 0, −3 x1 + x2 ≥ 3, and x1, x2 ≥ 0.
28. A soft drink plant has two bottling machines A and B. It produces and sells 8 ounce and
16 ounce bottles. The following data is available :
Machine        8 ounce bottles        16 ounce bottles
A              100/minute             40/minute
B              60/minute              75/minute
The machines can be run 8 hrs. per day, 5 days per week. Weekly production of the
drinks cannot exceed 3,00,000 ounces and the market can absorb 25,000 eight
ounce bottles and 7,000 sixteen ounce bottles per week. Profit on these bottles is 15
paise and 25 paise per bottle respectively. The planner wishes to maximize his
profit subject to all the production and marketing restrictions. Formulate it as a
linear programming problem and solve graphically.
29. The ABC Electric Appliance Company produces two products; refrigerators and
coolers. Production takes place in two separate departments. Refrigerators are
produced in Department I and coolers are produced in Department II. The
company's two products are produced and sold on a weekly basis. The weekly
production cannot exceed 25 refrigerators in Department I and 35 coolers in
Department II, because of the limited available facilities in these two departments.
30. A firm manufactures two types of products A and B and sells each of them at a profit
of ` 2 per product. Each product is processed on two machines P and Q. Type A
product requires one minute of processing time on machine P and two minutes on
machine Q. Type B product requires one minute on machine P and one minute on
machine Q. The machine P is available for not more than 6 hours and 40 minutes
while machine Q is available for 10 hours during any working day.
Formulate the given problem as a LPP and find how many products of each type
should the firm produce each day in order to get maximum profit.
32. A pineapple firm produces two products-canned pineapple and canned juice. The
specific amounts of material, labour and equipment required to produce each
product and the availability of each of these resources are shown in the table given
below :
Assuming that one unit of canned juice and one unit of canned pineapple have profit
margins of ` 2 and ` 1 respectively, determine the product mix that will maximize the profit.
33. Diet Problem : Consider two different types of foodstuffs, say F1 and F2 . Assume
that these foodstuffs contain vitamins V1, V2 and V3 . The minimum daily
requirements of the three vitamins are 1 mg of V1, 50 mg of V2 and 10 mg of V3 .
Suppose that the foodstuff F1 contains 1 mg of V1, 100 mg of V2 and 10 mg of V3 ,
whereas the foodstuff F2 contains 1 mg of V1, 10 mg of V2 and 100 mg of V3 . The cost of
one unit of foodstuff F1 is ` 1 and that of F2 is ` 1.5.
Find the minimum cost diet that would supply the body at least the minimum
requirements of each vitamin by graphical method.
34. A company sells two different products A and B. The company makes a profit of
` 40 and ` 30 per unit on products A and B respectively. The products are produced
in a common production process and are sold in two different markets. The
production process has a capability of 30,000 man-hours. It takes 3 hours to
produce one unit of A and one hour to produce one unit of B. The market has been
surveyed, and company officials feel that the maximum number of units of A that
can be sold is 8,000 and the maximum of B is 12,000 units. Subject to these
limitations, the products can be sold in any convex combination. Formulate the
above problem as a L.P.P. and solve it by graphical method.
35. A farm is engaged in breeding pigs. The pigs are fed on various products grown on
the farm. In view of the need to ensure certain nutrient constituents, it is necessary
to buy products (say A and B) in addition. The contents (per unit) of the various products
in the nutrient constituents (vitamins, proteins, etc.) are given in the following table :
Nutrient constituent     Product A     Product B     Minimum requirement
M1                       36            6             108
M2                       3             12            36
M3                       20            10            100
The last column of the above table gives the minimum amount of nutrient
constituents M1, M2 , M3 which must be given to the pigs. If the products A and B
cost ` 20 and ` 40 per unit respectively, how much each of these two products
should be bought so that the total cost is minimized ?
36. A company produces two types of leather belts, say type A and B. Belt A is of
superior quality and belt B is of a lower quality. Profits on the two types of belts are
40 and 30 paise per belt, respectively. Each belt of type A requires twice as much
time as required by a belt of type B. If all belts were of type B, the company would
produce 1000 belts per day. But the supply of leather is sufficient only for 800 per
day. Belt A requires a fancy buckle and 400 fancy buckles are available for this, per
day. For belt of type B, only 700 buckles are available per day. How should the
company manufacture the two types of belts in order to have maximum overall
profit?
37. A person requires 10, 12 and 12 units of chemicals A, B and C respectively for his
garden. A liquid product contains 5, 2 and 1 units of A, B and C respectively per jar. A
dry product contains 1, 2 and 4 units of A, B and C per carton. If the liquid product
sells for ` 3 per jar and the dry product sells for ` 2 per carton, how many of each should
be purchased to minimize the cost and meet the requirements?
38. A publisher sells a hard cover edition of a text book for ` 72 and a paperback edition
of the same text for ` 40. Costs to the publisher are ` 56 and ` 28 per book
respectively in addition to weekly costs of ` 9600. Both types of books require 5
minutes of printing time, although hard cover requires 10 minutes binding time
and the paperback requires only 2 minutes. Both the binding and printing
operations have 4800 minutes available each week. How many of each type of book
should be produced in order to maximize profit ?
39. A manufacturer has employed 5 skilled men and 10 semi-skilled men and makes an
article in two qualities : deluxe model and an ordinary model. The making of a
deluxe model requires 2 hrs work by a skilled man and 2 hrs work by a semi-skilled
man. The ordinary model requires 1 hr by a skilled man and 3 hrs by a semi-skilled
man. By union rules no man may work more than 8 hrs per day. The manufacturer's
clear profit on deluxe model is ` 15 and on an ordinary model is ` 10. How many of
each type should be made in order to maximize his total daily profit ?
Answers
3. x1 = 3/2, x2 = 1/2 ; Z = 3.5.        4. x1 = 44, x2 = 16 ; Z = 440.
5. x1 = 1. 6, x2 = 2 . 4 ; Z = 24 . 8. 6. x1 = 3, x2 = 2 ; Z = 19.
38. Maximum profit is ` 3360 when 360 books of hard cover edition and 600 books
of paper back edition are sold.
39. Maximum profit is ` 350 when 20 ordinary and 10 deluxe models are made.
The non-negative variables which are added to L.H. sides of the constraints to
convert them into equalities are called the slack variables.
Max. Z = 2 x1 + 3 x2 + 4 x3
x1 + x2 + x3 ≤ b1
s.t. …(1)
2 x1 + 4 x2 − x3 ≤ b2
x1, x2 , x3 ≥ 0.
In order to convert constraints (1) into equalities (equations) we add two non-negative
variables x4 and x5 on L.H.S. of (1), then we have
x1 + x2 + x3 + x4 = b1
and 2 x1 + 4 x2 − x3 + x5 = b2 .
Max. Z = 2 x1 + 4 x2 + 4 x3
x1 + x2 + 2 x3 ≤ b1
...(2)
s.t. 2 x1 + 4 x2 + 6 x3 ≥ b2
x1 − 2 x2 + 4 x3 ≥ b3
The constraints (2) are inequalities. In order to convert them into equalities we have to
add something in the L.H.S. of first and subtract something from the L.H. sides of the
second and the third inequalities of (2), i.e., we have following equalities
x1 + x2 + 2 x3 + x4 = b1
2 x1 + 4 x2 + 6 x3 − x5 = b2
x1 − 2 x2 + 4 x3 − x6 = b3
Here, x4 is the slack variable while x5 and x6 are the surplus variables.
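The mechanical part of this conversion is easy to automate. The following sketch (not part of the book; plain Python, with the right-hand sides b1 and b2 taken as 4 and 2 merely for illustration) appends one extra column per inequality: +1 for a slack variable of a ≤ constraint, −1 for a surplus variable of a ≥ constraint.

    # Converting inequality constraints to equations with slack/surplus columns.
    def to_standard(constraints):
        rows = [list(coeffs) for coeffs, _, _ in constraints]
        senses = [sense for _, sense, _ in constraints]
        for i, sense in enumerate(senses):          # one new variable per constraint
            for j, row in enumerate(rows):
                row.append((1 if sense == "<=" else -1) if i == j else 0)
        rhs = [b for _, _, b in constraints]
        return rows, rhs

    A, b = to_standard([([1, 1, 1], "<=", 4),       # x1 + x2 + x3 <= b1  -> + x4
                        ([2, 4, -1], ">=", 2)])     # 2x1 + 4x2 - x3 >= b2 -> - x5
    for row in A:
        print(row)                                  # [1,1,1,1,0] and [2,4,-1,0,-1]
    print("rhs:", b)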
Working Rule to find the Standard form of a L.P.P. : To convert a L.P.P. to the
standard form proceed stepwise as follows :
Step 1 : If the problem is of minimization type, i.e.,
Min. Z = f ( x),
then multiply both sides of the objective function by –1 and put − Z = Z ′ to get the
corresponding maximization form of the objective function
Max. Z ′ = − Z = − f ( x).
Step 2 : Make the right hand side element of each constraint non-negative.
If the right hand side of a constraint is negative, it is made positive by multiplying both
sides of the constraint by –1 (which reverses the direction of the inequality).
Step 3 : All the constraints containing an inequality sign are converted to equations by the
introduction of non-negative variables called slack or surplus variables.
1. A constraint with ≤ sign is reduced to an equation by adding a non-negative (slack)
variable on the left hand side.
2. A constraint with ≥ sign is reduced to an equation by subtracting a non-negative (surplus)
variable from the left hand side.
Step 4 : If any variable x is unrestricted in sign, it is replaced by the difference of two
non-negative variables, i.e., write x = x′ − x′′ where x′ ≥ 0 and x′′ ≥ 0 in the objective function
and in all equations obtained in step 3.
Example: Convert the following L.P.P. to the standard form :
Max. Z = 2 x1 + 3 x2 + 5 x3
subject to 5 x1 − 4 x2 + 3 x3 ≤ 7
2 x1 + 5 x2 − 4 x3 ≥ 2
4 x1 + 3 x2 + 7 x3 ≤ 8
and x1, x2 , x3 ≥ 0.
Solution: For clear understanding we shall convert the given L.P.P. into standard form
stepwise as discussed in article 2.12.
Step 1 : The objective function is already in the maximization form, so this step is not needed.
Step 2 : In the given problem, the right hand side element of each constraint is positive
(non-negative).
Step 3 : Adding the slack variables x4 and x6 (both non-negative) in the first and third
constraints respectively and subtracting the surplus variable x5 (non-negative) from the
second constraint, the constraints of the given L.P.P. as equations are
5 x1 − 4 x2 + 3 x3 + x4 = 7
2 x1 + 5 x2 − 4 x3 − x5 = 2
4 x1 + 3 x2 + 7 x3 + x6 = 8
Taking coefficients 0 for slack and surplus variables in the objective function Z, the
standard form of given L.P.P. is
Max. Z = 2 x1 + 3 x2 + 5 x3 + 0. x4 + 0. x5 + 0. x6
subject to,
5 x1 − 4 x2 + 3 x3 + x4 = 7
2 x1 + 5 x2 − 4 x3 − x5 = 2
4 x1 + 3 x2 + 7 x3 + x6 = 8
and x1, x2 , x3 , x4 , x5 , x6 ≥ 0
The matrix form of the L.P.P. is
Max. Z = c x
subject to A x = b, x ≥ 0,
where c = [2  3  5  0  0  0],
x = [ x1  x2  x3  x4  x5  x6 ]T,  b = [7  2  8]T
and A = [ 5  −4  3  1  0  0 ; 2  5  −4  0  −1  0 ; 4  3  7  0  0  1 ].
Example: Convert the following L.P.P. to the standard form and also write it in matrix form :
Min. Z = 2 x1 + x2 + 4 x3
subject to, −2 x1 + 4 x2 ≤ 4
x1 + 2 x2 + x3 ≥ 5
2 x1 + 3 x3 ≤ 2,
x1, x2 ≥ 0 and x3 unrestricted in sign.
Solution: Step 1 : The problem is of minimization type, so converting the objective
function to the maximization form, we have
Max. Z ′ = − Z = −2 x1 − x2 − 4 x3 ...(1)
Step 2 : In the given L.P.P. right hand side of each constraint is positive.
Step 3 : Adding the slack variables x4 and x6 (both non-negative) in first and third
constraint respectively and subtracting the surplus variable (non-negative) from the
second constraint, the constraints of the L.P.P. as equations are
−2 x1 + 4 x2 + x4 = 4
x1 + 2 x2 + x3 − x5 = 5 ...(2)
2 x1 + 3 x3 + x6 = 2
Step 4 : Here x3 is unrestricted in sign, so putting x3 = x3′ − x3′′ , where x3′ , x3′′ ≥ 0, in (1)
and (2), the standard form of the given L.P.P. is
Max. Z ′ = −2 x1 − x2 − 4 x3′ + 4 x3′′ + 0.x4 + 0.x5 + 0.x6
subject to, −2 x1 + 4 x2 + x4 = 4
x1 + 2 x2 + x3′ − x3′′ − x5 = 5
2 x1 + 3 x3′ − 3 x3′′ + x6 = 2
and x1, x2 , x3′ , x3′′ , x4 , x5 , x6 ≥ 0.
The matrix form is
Max. Z ′ = c x
subject to A x = b, x ≥ 0,
where c = [−2  −1  −4  4  0  0  0],
x = [ x1  x2  x3′  x3′′  x4  x5  x6 ]T,  b = [4  5  2]T
and A = [ −2  4  0  0  1  0  0 ; 1  2  1  −1  0  −1  0 ; 2  0  3  −3  0  0  1 ].
Reduce the following L.P.P.'s to the standard form. Also find their matrix forms :
1. Max. Z = x1 + 2 x2 , subject to 2 x1 + 3 x2 ≤ 6, x1 + 7 x2 ≥ 4, and x1, x2 ≥ 0.
2. Min. Z = 12 x1 + 5 x2 , subject to 5 x1 + 3 x2 ≥ 15, 7 x1 − 2 x2 ≤ 14, and x1, x2 ≥ 0.
3. Max. Z = 2 x1 + 5 x2 + 9 x3 , subject to 5 x1 − 4 x2 + 3 x3 ≤ 7, 3 x1 + 5 x2 + 6 x3 ≥ 16, 4 x1 + 3 x2 + 5 x3 ≤ 9, and x1, x2 , x3 ≥ 0. [Gorakhpur 2010]
4. Max. Z = 2 x1 + 6 x2 + 5 x3 , subject to 5 x1 − 2 x2 + 3 x3 ≤ 9, x1 + x2 + x3 ≥ 5, x1 + 2 x2 = 7, and x1, x2 , x3 ≥ 0. [Kanpur 2008]
5. Max. Z = 3 x1 + 11 x2 , subject to 2 x1 + 3 x2 ≤ 6, x1 + 7 x2 ≥ 4, x1 + x2 = 3, where x1, x2 ≥ 0. [Kanpur 2007]
6. Max. Z = 3 x1 + 5 x2 + 7 x3 , subject to 6 x1 − 4 x2 ≤ 5, 3 x1 + 2 x2 + 5 x3 ≥ 11, 4 x1 + 3 x3 ≤ 2, x1, x2 ≥ 0 and x3 unrestricted in sign. [Gorakhpur 2008]
7. Max. Z = 3 x1 + 2 x2 + 5 x3 , subject to 2 x1 + 3 x2 ≤ 3, x1 + 2 x2 + 3 x3 ≥ 5, 3 x1 + 2 x3 ≤ 2, and x1, x2 ≥ 0, x3 is unrestricted in sign. [Gorakhpur 2009]
8. Min. Z = x1 − 2 x2 + x3 , subject to 2 x1 + 3 x2 + 4 x3 ≥ −4, 3 x1 + 5 x2 + 2 x3 ≥ 7, x1 ≥ 0, x2 ≥ 0, x3 is unrestricted in sign.
9. Max. Z = x1 + x2 , subject to x1 ≤ 3, x2 ≥ 1, and x1, x2 ≥ 0.
10. Max. Z = 2 x1 + 3 x2 + 4 x3 , subject to x1 + x2 + x3 ≥ 5, x1 + 2 x2 = 7, 5 x1 − 2 x2 + 3 x3 ≤ 9, and x1, x2 , x3 ≥ 0.
Answers
1. ... A = [ 2  3  1  0 ; 1  7  0  −1 ], x ≥ 0.        2. ... A = [ 5  3  −1  0 ; 7  −2  0  1 ], x ≥ 0.
3. Max. Z = 2 x1 + 5 x2 + 9 x3 + 0.x4 + 0.x5 + 0.x6
subject to 5 x1 − 4 x2 + 3 x3 + x4 = 7, 3 x1 + 5 x2 + 6 x3 − x5 = 16, 4 x1 + 3 x2 + 5 x3 + x6 = 9,
and x1, x2 , x3 , x4 , x5 , x6 ≥ 0.
Matrix form : Max. Z = c x subject to A x = b, x ≥ 0, where c = [2, 5, 9, 0, 0, 0],
x = [ x1, x2 , x3 , x4 , x5 , x6 ]T, b = [7, 16, 9]T, A = [ 5  −4  3  1  0  0 ; 3  5  6  0  −1  0 ; 4  3  5  0  0  1 ].
4. Max. Z = 2 x1 + 6 x2 + 5 x3 + 0.x4 + 0.x5
subject to 5 x1 − 2 x2 + 3 x3 + x4 = 9, x1 + x2 + x3 − x5 = 5, x1 + 2 x2 = 7,
and x1, x2 , x3 , x4 , x5 ≥ 0.
Matrix form : Max. Z = c x subject to A x = b, x ≥ 0, where c = [2, 6, 5, 0, 0],
x = [ x1, x2 , x3 , x4 , x5 ]T, b = [9, 5, 7]T, A = [ 5  −2  3  1  0 ; 1  1  1  0  −1 ; 1  2  0  0  0 ].
x1 + 7 x2 − x4 = 4 +0. x5 + 0. x6
x1 + x2 = 3 subject to 6 x1 − 4 x2 + x4 = 5
Here we note that the matrix formed by the coefficients of the m basic variables, or say
formed by the vectors associated to the basic variables is non-singular as its determinant
does not vanish. Hence the vectors associated to the basic variables are L.I.
Thus a solution in which the vectors associated to m variables are L.I. and remaining n − m variables
are zero is called a basic solution.
Hence a basic solution can be constructed by selecting the m L.I. vectors out of n and
setting the variables associated to the remaining (n − m) columns to zero. If B is the matrix
of m L.I. vectors of A and xB is the column vector of the corresponding variables (basic
variables), then the basic solution is given by
B xB = b or x B = B −1b.
The number of basic solutions thus obtained will be at the most nCm = n! / [m! (n − m)!],
since m vectors out of n can be selected in nCm ways. Note that a B.S. corresponds to some basis.
Basic solutions are of two types : Non-degenerate basic solutions and degenerate basic solutions.
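The relation xB = B⁻¹ b is easy to evaluate on a computer. The sketch below (not from the book) assumes NumPy and uses, for illustration, the system x1 + 2x2 + x3 = 4, 2x1 + x2 + 5x3 = 5 that is solved in full in an example a little further on.

    # Basic solution for one choice of basic columns.
    import numpy as np

    A = np.array([[1.0, 2.0, 1.0],        # x1 + 2x2 + x3 = 4
                  [2.0, 1.0, 5.0]])       # 2x1 + x2 + 5x3 = 5
    b = np.array([4.0, 5.0])

    B = A[:, [0, 1]]                      # basis formed by the columns of x1, x2
    if abs(np.linalg.det(B)) > 1e-12:     # non-singular, so the columns are L.I.
        x_B = np.linalg.solve(B, b)       # same as B^(-1) b
        print(x_B)                        # [2. 1.] -> basic solution (2, 1, 0)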
2.15 Theorem
A necessary and sufficient condition for the existence and non-degeneracy of all the basic solutions of
A x = b is that every set of m columns of the augmented matrix A b = [ A, b] is L.I.
Proof : The condition is necessary : Suppose all the basic solutions of the system
Ax = b exist and are non-degenerate. Therefore, every set of m-column vectors of A are L.I.
Let α1, α 2 ,...., α m be one set of m-column vectors of A, then the given system gives
α1 x1 + α 2 x2 + ...+ α m x m = b.
Now consider any m-column vectors α1, α 2 ,..., α m of A which are L.I. and let x1, x2 ,..., x m
be the corresponding basic solution. Then
x1 α1 + x2 α 2 + ... + x m α m = b
or −1 . b + x1 α1 + x2 α 2 + ... + x m α m = 0 .
If possible, let x1 = 0. Then
−1 . b + x2 α 2 + ... + x m α m = 0 ,
i.e., the set of m vectors b, α 2 , ..., α m of [ A, b] is linearly dependent.
But this is a contradiction as we have assumed that every set of m vectors of [ A, b] is L.I.
Hence, x1 ≠ 0.
Similarly, we can show that x2 ,..., x m are also different from zero i.e., the solution is
non-degenerate.
In this way, it can be shown that all the basic solutions are non-degenerate.
Corollary 1: The necessary and sufficient condition for any given basic solution
x B = B −1b to be non-degenerate is the linear independence of b and every (m −1) columns
from B.
The condition is sufficient : Let b and any (m −1) columns of B be L.I. Any column of B
can be replaced by b and still a basis is maintained. Hence x1,..., x m are all non-zero.
2. Basic Variables : The variables associated to B.F.S. are called basic variables.
Example: Show that the feasible solution x1 = 1, x2 = 0, x3 = 1 of the system
x1 + x2 + x3 = 2
x1 − x2 + x3 = 2
xi ≥ 0, i = 1, 2, 3,
is not a basic solution.
Solution: In the given F.S. there are only two non-zero variables, namely x1 and x3 . The
vectors associated to these variables are α1 = [1, 1] and α 3 = [1, 1], and the determinant of
the column vectors corresponding to the variables x1, x3 is
| 1  1 ; 1  1 | = 1 − 1 = 0.
Hence the vectors α1 and α 3 are linearly dependent, and so the given feasible solution is not
a basic solution.
Example: Find all the basic solutions of the following system of equations :
x1 + 2 x2 + x3 = 4
2 x1 + x2 + 5 x3 = 5
and prove that they are non-degenerate. [Meerut 2007 (BP), 11 (BP); Kanpur 2007, 11]
Solution: In matrix form the given system of equations can be written as A x = b, where
A = (α1, α 2 , α 3 ) = [ 1  2  1 ; 2  1  5 ],
α1 = [1, 2], α 2 = [2, 1], α 3 = [1, 5], x = [ x1, x2 , x3 ], b = [4, 5].
Here m = 2, n = 3, so the possible bases are
B1 = [α1, α 2 ] = [ 1  2 ; 2  1 ],  B2 = [α1, α 3 ] = [ 1  1 ; 2  5 ],  B3 = [α 2 , α 3 ] = [ 2  1 ; 1  5 ],
with det B1 = −3, det B2 = 3 and det B3 = 9, all non-zero.
It follows that every set of two vectors of A is L.I. Hence all the three basic solutions exist.
If x B1, x B2 , x B3 are the vectors of the corresponding basic variables, then
x B1 = B1⁻¹ b = −(1/3) [ 1  −2 ; −2  1 ] [4, 5]T = [2, 1]T,
x B2 = B2⁻¹ b = (1/3) [ 5  −1 ; −2  1 ] [4, 5]T = [5, −1]T,
x B3 = B3⁻¹ b = (1/9) [ 5  −1 ; −1  2 ] [4, 5]T = [5/3, 2/3]T.
To find all the basic solutions : In the basis B1 the basic vectors are α1, α 2 . So
x B1 = [ x1, x2 ] = [2, 1]  ⇒  x1 = 2, x2 = 1.
These two variables are the basic variables and the remaining variable x3 is non-basic. It
will have the value zero. Thus the basic solution associated to the basis B1 is given by
(2, 1, 0). Similarly, the other basic solutions are (5, 0, − 1) and (0, 5/3, 2/3).
In all the three basic solutions none of the basic variables is zero, hence they are all
non-degenerate.
2 x1 + x2 + 3 x4 + x6 = 9
− x1 + x2 + x3 + x7 = 0,
x1, x2 , . . . , x7 ≥ 0
Now | α1  α 2  α 4 | = | 1  2  3 ; 2  1  3 ; −1  1  0 | = 0.
Now | α 3  α 6  α 7 | = | 1  0  0 ; 0  1  0 ; 1  0  1 | = 1 ≠ 0.
(iii) This solution contains 5 zero variables, so it may be a B.F.S. The vectors
associated to the non-zero variables are α1 = [1, 2, −1] and α 2 = [2, 1, 1].
Now these vectors together with the vector α 5 = [1, 0, 0] are L.I., as
| α1  α 2  α 5 | = | 1  2  1 ; 2  1  0 ; −1  1  0 | = 3 ≠ 0.
Since one of the basic variables namely x5 vanishes, so this is a degenerate B.F.S.
(iv) This solution contains 5 zero variables, so it may be a B.F.S. The vectors
associated to the non-zero variables are α 5 = [1, 0, 0], α 6 = [0, 1, 0]. Obviously
these vectors together with α 7 = [0, 0, 1] are L.I. Hence this is a B.F.S. with
x5 , x6 , x7 as basic variables. Since one of the basic variables, namely x7 , vanishes,
this is a degenerate B.F.S.
(v) This solution does not contain at least 7 − 3 = 4 zero variables, i.e., it contains more
than 3 (the number of equations) non-zero variables, so it is not a B.F.S.
(vi) This solution contains one non-zero variable. The vector associated to this variable
is α 4 = [3, 3, 0]. We have
| α 4  α 6  α 7 | = | 3  0  0 ; 3  1  0 ; 0  0  1 | = 3 ≠ 0,
so α 4 , α 6 , α 7 are L.I. Hence this is a B.F.S. with x4 , x6 , x7 as basic variables; since two of
the basic variables vanish, it is a degenerate B.F.S.
Here we give analytic method (trial and error method) to solve a L.P.P. We know that for a
system of m equations in n unknowns (n > m), a solution in which at least n − m variables
have their values equal to zero is called a basic solution. These (n − m) variables are called
non-basic variables. The remaining m variables whose values may or may not be equal to
zero are called basic variables, the vectors in the coefficient matrix corresponding to these
basic variables should be L.I.
In analytical method giving zero values to the non-basic variables in the given equations,
we get equations in basic variables. Solving these equations we can get the basic solutions
of the problem. Then the basic solutions (B.S.) which also satisfy the condition x i ≥ 0, ∀ i,
called the basic feasible solutions (B.F.S.), are obtained. For all these B.F. solutions the
values of the objective function Z are computed, and the B.F.S. giving the optimal value
of Z is the required optimal solution.
Example 1: Find an optimal solution of the following L.P.P. without using the simplex
algorithm.
Max. Z = 2 x1 + 3 x2 + 4 x3 + 7 x4
subject to 2 x1 + 3 x2 − x3 + 4 x4 = 8
x1 − 2 x2 + 6 x3 − 7 x4 = −3
and x1, x2 , x3 , x4 ≥ 0.
Solution: In matrix form the system of constraint equations can be written as
A x = b,
where A = [ 2  3  −1  4 ; 1  −2  6  −7 ] = (α1  α 2  α 3  α 4),
α1 = [2, 1], α 2 = [3, −2], α 3 = [−1, 6], α 4 = [4, −7], x = [ x1, x2 , x3 , x4 ], b = [8, −3].
Here m = 2, n = 4, so there are at the most 4C2 = 6 basic solutions. Setting two (non-basic)
variables equal to zero at a time and solving the resulting equations for the remaining two
(basic) variables, we obtain the following table :
S.No.   Basic       Non-basic       Equations                            Solution                      Feasible?   Z
1.      x1, x2      x3 = x4 = 0     2 x1 + 3 x2 = 8,  x1 − 2 x2 = −3     x1 = 1, x2 = 2                Yes         8
2.      x1, x3      x2 = x4 = 0     2 x1 − x3 = 8,  x1 + 6 x3 = −3       x1 = 45/13, x3 = −14/13       No          —
3.      x1, x4      x2 = x3 = 0     2 x1 + 4 x4 = 8,  x1 − 7 x4 = −3     x1 = 22/9, x4 = 7/9           Yes         31/3
4.      x2 , x3     x1 = x4 = 0     3 x2 − x3 = 8,  −2 x2 + 6 x3 = −3    x2 = 45/16, x3 = 7/16         Yes         163/16
5.      x2 , x4     x1 = x3 = 0     3 x2 + 4 x4 = 8,  −2 x2 − 7 x4 = −3  x2 = 44/13, x4 = −7/13        No          —
6.      x3 , x4     x1 = x2 = 0     − x3 + 4 x4 = 8,  6 x3 − 7 x4 = −3   x3 = 44/17, x4 = 45/17        Yes         491/17 (Max.)
Thus the optimal solution is x1 = x2 = 0, x3 = 44/17, x4 = 45/17, giving the maximum value Z = 491/17.
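The trial-and-error procedure of this article can be automated. The sketch below (not part of the book) assumes NumPy and applies the method to the data of Example 1: every pair of columns is tried as a basis, singular pairs are skipped, the feasible basic solutions are printed and the largest value of Z is kept.

    # Analytic (enumeration) method applied to Example 1.
    from itertools import combinations
    import numpy as np

    A = np.array([[2.0, 3.0, -1.0, 4.0],
                  [1.0, -2.0, 6.0, -7.0]])
    b = np.array([8.0, -3.0])
    c = np.array([2.0, 3.0, 4.0, 7.0])

    best = None
    for cols in combinations(range(4), 2):            # 4C2 = 6 candidate bases
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                                  # columns L.D.; no basic solution
        xB = np.linalg.solve(B, b)
        if np.all(xB >= 0):                           # basic feasible solution
            x = np.zeros(4)
            x[list(cols)] = xB
            Z = c @ x
            print(cols, np.round(x, 4), "Z =", round(Z, 4))
            if best is None or Z > best:
                best = Z
    print("optimal Z =", best)                        # approximately 491/17 = 28.88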
Example 2: Find all the optimal B.F. solutions of the following L.P.P. without using the
simplex method.
Max. Z = 6 x1 + 2 x2 + 9
subject to 3 x1 + x2 ≤ 6
x1 + 3 x2 ≤ 6
and x1, x2 ≥ 0.
Solution: Introducing the slack variables x3 and x4 , the problem becomes
Max. Z = 6 x1 + 2 x2 + 9
subject to 3 x1 + x2 + x3 = 6
x1 + 3 x2 + x4 = 6
and x i ≥ 0, ∀ i = 1, 2, 3, 4.
In matrix form the above system of equations can be written as
A x = b,
where A = [ 3  1  1  0 ; 1  3  0  1 ] = (α1  α 2  α 3  α 4 ),
α1 = [3, 1], α 2 = [1, 3], α 3 = [1, 0], α 4 = [0, 1], x = [ x1, x2 , x3 , x4 ], b = [6, 6].
Hence the problem can have at the most 6 basic solutions which may be B.F.S. also.
These solutions are computed as follows :
S.No.   Basic       Non-basic       Equations                        Solution               Feasible?   Z
1.      x1, x2      x3 = x4 = 0     3 x1 + x2 = 6,  x1 + 3 x2 = 6    x1 = 3/2, x2 = 3/2     Yes         21 (Max.)
2.      x1, x3      x2 = x4 = 0     3 x1 + x3 = 6,  x1 = 6           x1 = 6, x3 = −12       No          —
3.      x1, x4      x2 = x3 = 0     3 x1 = 6,  x1 + x4 = 6           x1 = 2, x4 = 4         Yes         21 (Max.)
4.      x2 , x3     x1 = x4 = 0     x2 + x3 = 6,  3 x2 = 6           x2 = 2, x3 = 4         Yes         13
5.      x2 , x4     x1 = x3 = 0     x2 = 6,  3 x2 + x4 = 6           x2 = 6, x4 = −12       No          —
6.      x3 , x4     x1 = x2 = 0     x3 = 6,  x4 = 6                  x3 = 6, x4 = 6         Yes         9
Thus the optimal value Max. Z = 21 is attained at the two B.F. solutions (3/2, 3/2, 0, 0) and (2, 0, 0, 4).
Note : If we solve this L.P.P. by the graphical method, we see that every point on the
line segment joining the points (2, 0) and (3/2, 3/2) gives an optimal solution with Z = 21;
only the two end points of this segment are B.F.S.
2 x1 + 3 x2 + 4 x3 = 5
2 x1 + x2 + 4 x3 = 11
3. Is x1 = 1, x2 = 1/2, x3 = x4 = x5 = 0 a basic solution to the following equations :
x1 + 2 x2 + x3 + x4 = 2,   x1 + 2 x2 + (1/2) x3 + x5 = 2 ?
[a1 a2 a3 ] x = b.
x1 + 2 x3 = 1, x2 + x3 = 4, x1, x2 , x3 ≥ 0.
8 x1 + 6 x2 + 13 x3 + x4 + x5 = 6
9 x1 + x2 + 2 x3 + 6 x4 + 10 x5 = 10.
Max. Z = 2 x1 + 3 x2 + x3 + 0. x4 + 0. x5 − M. x6
subject to 2 x1 + 3 x2 + 4 x3 + x4 = 6
x1 + 2 x2 + 2 x3 − x5 + x6 = 3
and x1, x2 , x3 , x4 , x5 , x6 ≥ 0.
2. (i) x1 = 3, x2 = 5, x3 = 0        (ii) x1 = 1/2, x2 = 0, x3 = 5/2.
3. No
4. Yes
5. x1 = 1, x2 = 4, x3 = 0 and x1 = 0, x2 = 7/2, x3 = 1/2
6. No
7. (2/3, 0, 0, 2/3, 0), (50/71, 0, 0, 0, 26/71), (0, 26/35, 0, 54/35, 0),
(0, 50/59, 0, 0, 54/59), (0, 0, 13/38, 59/38, 0), (0, 0, 25/64, 0, 59/64).
8. Feed Mix : The problem of finding the best diet i.e., the combination of food that
can be supplied at minimum cost, satisfying the daily requirement of the nutrients
is also a L.P.P.
1. The L.P. technique is applicable to that class of problems in which the objective
function and the constraints both are linear. But in many practical problems, it is
not possible to express both the objective function and the constraints in linear
form.
3. The effect of time and uncertainty is not considered in linear programming model.
Thus the model should be defined in such a way that any change due to internal as
well as external factors can be incorporated.
4. Parameters appearing in the model are assumed to be constant while in real life
situations they are neither constant nor deterministic.
5. Linear programming deals with only single objective whereas in real life situations
problems occur with multi objectives.
6. All practical L.P.P. involve extensive calculations and hence computer facility is
required. In many situations the computations become tedious even on large digital
computers.
(c) (0, 11/18, 0)        (d) None of these
True / False
1. A basic feasible solution is said to be optimum, if it optimizes the objective
function.
2. A basic feasible solution is called degenerate if one or more basic variables are
zero-valued.
3. The L.P.P. optimize Z = x1 + x2 , s.t. x1 + x2 ≤ 1, 3 x1 + x2 ≥ 3, x1, x2 ≥ 0 has single
point as its feasible region.
4. The L.P.P. Max. Z = x1 + x2 , s.t. x1 − x2 ≥ 0, 3 x1 − x2 ≤ −3, x1, x2 ≥ 0 has a feasible
solution.
5. By graphical method any L.P.P. can be solved easily. [Meerut 2004, 2005]
3. (a) 4. (a)
5. (c)
3. feasible 4. unbounded
7. surplus 8. zero
True / False
1. True 2. True
3. True 4. False
5. False
mmm
Unit-2
3.1 Definitions
1. Point Sets : Point sets are sets whose elements are points or vectors in E n or R n
(n-dimensional Euclidean space).
For example
(i) A linear equation in two variables x1, x2 , i.e., a1 x1 + a2 x2 = b represents a line in
two dimensions. This line may be considered as a set of those points ( x1, x2 )
which satisfy a1 x1 + a2 x2 = b. This set of points can be written as
S1 = {( x1, x2 ) : a1 x1 + a2 x2 = b}.
(ii) Consider the set of points lying inside a circle of unit radius with centre at the
origin, in two dimensional space ( E2 ). Obviously the points ( x1, x2 ) of this set
satisfy the inequality x1² + x2² < 1.
These sets may contain either a finite or infinite number of elements. Usually,
however, they will contain an infinite number of elements. Further, we shall
always assume that there is at least one element in a set unless otherwise
stated.
2. Hypersphere : A hypersphere in E n with centre at a and radius ε > 0 is defined to be
the set of points
X = {x : | x − a | = ε},
i.e., a hypersphere in E n consists of all the points x whose distance from the centre a is ε.
The set of points lying inside this hypersphere,
X = {x : | x − a | < ε},
is called an ε-neighbourhood of the point a.
6. An Open Set : A set S is said to be an open set if it contains only the interior points.
7. A Closed Set : A set S is said to be a closed set if it contains all its boundary points.
8. Lines : In E n, the line through two points x1 and x 2 , x1 ≠ x 2 is defined to be the set
of points
X = {x : x = λ x1 + (1 − λ ) x 2 , for all real λ }.
9. Line Segments : In E n, the line segment joining two points x1 and x 2 is defined to
be the set of points
X = {x : x = λ x1 + (1 − λ ) x 2 , 0 ≤ λ ≤ 1}.
Note that the restriction 0 ≤ λ ≤ 1 restricts the point x to lie within the line joining
the points x1 and x 2 . Line segment of x1, x 2 is also denoted by [x1 : x 2 ].
10. Hyperplane : A hyperplane in E n is defined as the set of points
X = {x : c x = z},
where c is a non-zero row vector and z is a scalar, i.e., the set of points satisfying
c1 x1 + c2 x2 + ... + cn xn = z or c x = z.
A hyperplane divides E n into the three mutually disjoint sets
X1 = {x : c x > z}
X2 = {x : c x = z}
X3 = {x : c x < z}.
The sets X1 and X3 are called open half spaces, while {x : c x ≥ z} and {x : c x ≤ z} are called
closed half spaces.
Note : The objective function of a L.P.P. represents a hyperplane and each constraint
(sign ≤, ≥) is a closed half space produced by the hyperplane given by the constraint by
taking (=) sign in place of ≤ and ≥.
Parallel Hyperplanes : Two hyperplanes c1x = z1 and c2 x=z2 are said to be parallel if
they have the same unit normals i.e., if c1 = λ c2 for some λ, λ being non-zero.
Convex Combination : A convex combination of a finite number of points x1, x 2 , ..., x n is
defined as a point
x = λ 1 x1 + λ 2 x 2 + ...+ λ n x n,
where λ i ≥ 0 for each i and Σ(i = 1 to n) λ i = 1.
In particular, a convex combination of two points x1 and x 2 is a point
x = λ x1 + (1 − λ) x 2 , 0 ≤ λ ≤ 1.
This shows that the line segment of the two points x1 and x 2 is nothing but the set
of all possible convex combinations of the two points x1 and x 2 .
A set of points is said to be convex if for any two points in the set, the line segment
joining these two points is also in the set. In other words a set is convex if the convex
combination of any two points in the set, is also in the set.
[Figures : a convex set and a non-convex set, with two points P of each joined by a line segment.]
It can be easily seen that the convex combination of any number of points in the convex
set also belongs to the set.
In other words, a point x in a convex set C is an extreme point of C if it does not lie on the
line segment of any two points, different from x in the set.
Mathematically, a point x is an extreme point of a convex set if there do not exist other
points x1, x 2 (x1 ≠ x 2 ) in the set such that
x = λ x1 + (1 − λ) x 2 , 0 < λ < 1.
For example: The set C = {( x1, x2 ) : x1² + x2² ≤ 1} is convex. Every point on the
circumference is an extreme point. Thus a convex set may also have infinite number of
extreme points.
The polygons which are convex sets have the extreme points as their vertices.
It is important to note that all boundary points of a convex set are not necessarily
extreme points.
The convex hull C( X ) of any given set of points X is the set of all convex combinations of
sets of points from X.
In other words, the intersection of all convex sets, containing X ⊂ E n, is called the convex
hull of X and is denoted by < X >. Thus, the convex hull of a set X ⊂ E n, is the smallest
convex set containing X.
For example: If X is the set of eight vertices of a cube, then the convex hull C( X ) is the
whole cube.
A function f (x ) is said to be strictly convex at x if for any two other distinct points x1
and x 2 ,
f [λ x1 + (1 − λ) x 2 ] < λ f (x1) + (1 − λ) f (x 2 ), where 0 < λ < 1.
Alternatively, if the set X consists of a finite number of points, the convex hull of X is
called a convex polyhedron with vertices at these points.
For example: The region of a triangle (its interior together with its boundary) is the convex
polyhedron of the set of its vertices.
Theorem 1: A hyperplane is a convex set.
Proof: Consider the hyperplane X = {x : c x = z}.
Let x1 and x 2 be any two points of X.
∴ c x1 = z and c x 2 = z.
If x 3 = λ x1 + (1 − λ ) x 2 , 0 ≤ λ ≤ 1,
then c x 3 = λ c x1 + (1 − λ ) c x 2 = λ z + (1 − λ )z = z,
so that x 3 ∈ X. Hence X is a convex set.
Theorem 2: A closed half space H1 = {x : c x ≥ z} is a convex set.
Proof: Let x1 and x 2 be any two points of H1, so that
c x1 ≥ z, c x 2 ≥ z.
If 0 ≤ λ ≤ 1, then c [λ x1 + (1 − λ) x 2 ] = λ c x1 + (1 − λ ) cx 2
≥ λ z + (1 − λ ) z = z.
So H1 is a convex set.
Corollary : The open half spaces : {x : cx > z} and {x : cx < z} are convex sets.
Theorem 3(a): The intersection of two convex sets is also a convex set.
Proof: Consider two convex sets X1 and X2 . Let X3 be the intersection of the sets X1 and X2 ,
i.e., X3 = X1 ∩ X2 , and let x1, x 2 ∈ X3 .
Now x1 ∈ X1 ∩ X2 ⇒ x1 ∈ X1 and x1 ∈ X2
x 2 ∈ X1 ∩ X2 ⇒ x 2 ∈ X1 and x 2 ∈ X2 .
∴ x1, x 2 ∈ X1 ⇒ λ x1 + (1 − λ ) x 2 ∈ X1 0 ≤ λ ≤1
and x1, x 2 ∈ X2 ⇒ λ x1 + (1 − λ ) x 2 ∈ X2 0 ≤ λ ≤1
Thus λ x1 + (1 − λ ) x 2 ∈ X1 and λ x1 + (1 − λ ) x 2 ∈ X2
⇒ λ x1 + (1 − λ ) x 2 ∈ X1 ∩ X2 0 ≤ λ ≤ 1.
Theorem 3(b): Intersection of any finite number of convex sets is also a convex set.
Proof: Let X = X1 ∩ X2 ∩ ... ∩ X n, where each X i is a convex set, and let x1, x 2 ∈ X.
Then x1, x 2 ∈ X i for each i, so that λ x1 + (1 − λ ) x 2 ∈ X i for each i and 0 ≤ λ ≤ 1
⇒ λ x1 + (1 − λ ) x 2 ∈ X1 ∩ X2 ∩ ... ∩ X n, 0 ≤ λ ≤ 1.
Hence X is a convex set.
Theorem 4: The set of all convex combinations of a finite number of points x1, x 2 , . . . . , x n
is a convex set.
Proof: Let X be the set of all convex combinations of a finite number of points.
n n
i.e., X = x : x = Σ λ i x i , Σ λ i = 1, λ i ≥ 0
i =1 i =1
Let u, v ∈ X
n n
∴ u = Σ ai x i , Σ ai = 1, ai ≥ 0.
i =1 i =1
n n
and v = Σ bi x i , Σ bi = 1, bi ≥ 0.
i =1 i =1
Consider w = λ u + (1 − λ ) v, 0 ≤ λ ≤ 1
n n
∴ w = λ Σ ai x i + (1 − λ ) Σ bi x i
i =1 i =1
n
= Σ {λ ai + (1 − λ ) bi} x i
i =1
n
= Σ ci x i where ci = λ ai + (1 − λ ) bi
i =1
n n
Now Σ ci = Σ {λ ai + (1 − λ ) bi}
i =1 i =1
n n
= λ Σ ai + (1 − λ ) Σ bi = λ.1 + (1 − λ).1 = 1.
i =1 i =1
Also ci = λ ai + (1 − λ )bi ≥ 0, V i.
n
Hence w = Σ ci x i is a convex combination of x1, x 2 ,...., x n
i =1
i.e., w = λ u + (1 − λ ) v ∈ X , 0 ≤ λ ≤ 1.
Theorem 5: Let S and T be two convex sets in En , then for any scalars α , β, αS + βT is
also convex.
λ x + (1 − λ ) y = λ (α u1 + βv1) + (1 − λ ) (α u2 + βv2 )
= α [λ u1 + (1 − λ ) u2 ] + β [λ v1 + (1 − λ ) v2 ] ...(2)
λ x + (1 − λ ) y ∈αS + βT , 0 ≤ λ ≤ 1.
Thus x, y ∈ α S + βT ⇒ [x : y] ⊂ α S + βT .
Corollary : If S and T be two convex sets in E n, then S + T and S − T are also convex sets.
Theorem 6: A set C is convex iff every convex linear combination of points in C, also
belongs to C.
Then, in particular convex linear combination of every two points in C also belongs to C.
Conversely let C be a convex set. Then to prove that convex linear combination of any
number of points in C also belongs to C.
Since C is convex, the convex linear combination of two points in C belongs to C. Thus
the result is true for n = 2.
Let the result be true for any n points of C. Now consider
x = µ1 x1 + µ2 x 2 + ... + µ n x n + µ n+1 x n+1,
i.e., x is a convex linear combination of (n + 1) points of C, where µ i ≥ 0 and Σ(i = 1 to n + 1) µ i = 1.
We may write
x = (µ1 + µ2 + ...+ µ n) [(µ1 x1 + µ2 x 2 + ...+ µ n x n) / (µ1 + µ2 + ...+ µ n)] + µ n+1 x n+1        ...(2)
n n
= Σ µ i Σ ai x i + µ n+1 x n+1,
i = 1 i = 1
µi
where ai = , i = 1, 2,..., n.
µ1 + µ2 + ...+ µ n
As each µ i ≥ 0, we have ai ≥ 0
and also Σ(i = 1 to n) ai = Σ(i = 1 to n) µ i / (µ1 + µ2 + ...+ µ n) = (∑ µ i) / (∑ µ i) = 1.
n
Thus Σ ai x i = y (say) is also a convex linear combination of x1, x 2 ,..., x n. So y
i =1
belongs to C, by assumption.
Theorem 7: The set of all feasible solutions (if not empty) of a L.P.P. is a convex set.
[Kanpur 2008; Gorakhpur 2007]
Ax = b, x ≥ 0. ...(1)
Case I : If the set X has only one element, then X is convex set. Hence the theorem is true
in this case.
∴ A x1 = b, x1 ≥ 0.
and A x 2 = b, x 2 ≥ 0.
If x 3 = λ x1 + (1 − λ ) x 2 , 0 ≤ λ ≤1
then A x 3 = Aλ x1 + (1 − λ ) Ax 2
= λ b + (1 − λ ) b=b.
Also since x1 ≥ 0, x 2 ≥ 0, λ ≥ 0, 1 − λ ≥ 0, as 0 ≤ λ ≤ 1
∴ x 3 = λ x1 + (1 − λ ) x 2 ≥ 0
Note : Since the convex combinations of two points are infinite in number so from the
above theorem we conclude that :
If a given L.P.P. has two feasible solutions, then it has infinite number of feasible
solutions.
Theorem 8: Every basic feasible solution of the system A x = b, x ≥ 0 is an extreme point of the
convex set of feasible solutions, and conversely.
Proof: To prove that every B.F.S. is an extreme point of the convex set of all feasible solutions.
Let x be a B.F.S. in which the components are so arranged that the basic variables (forming the
vector x B corresponding to the basis B) come first, so that
x = [x B , 0 ], ...(1)
and A x = b ⇒ B. x B = b. ...(2)
Suppose that x is not an extreme point. If X is the convex set of all feasible solutions of
A x = b, then x ∈ X.
If x is not an extreme point, then there exist two distinct points x1 and x 2 in X such that
x = λ x1 + (1 − λ ) x 2 , 0 < λ < 1.        ...(3)
Let x1 = [u1, v1] and x 2 = [u2 , v2 ],        ...(4)
where u1 and u2 are the vectors of the first m components of x1 and x 2 respectively and v1, v2 are
their remaining (n − m)-component vectors.
Substituting the values of x and x1, x 2 from (1) and (4) in (3), we have
[x B , 0 ] = λ [u1, v1] + (1 − λ ) [u2 , v2 ] = [λ u1 + (1 − λ ) u2 , λ v1 + (1 − λ ) v2 ].
Since λ > 0, 1 − λ > 0 and v1 ≥ 0, v2 ≥ 0, the relation λ v1 + (1 − λ ) v2 = 0 gives v1 = v2 = 0.
∴ x1 = [u1, 0 ], x 2 = [u2 , 0 ].
A x1 = B u1 = b and A x 2 = B u2 = b
i.e., b = B x B = B u1 = B u2
which gives x B = u1 = u2
∴ x = x1 = x 2
i.e., x cannot be expressed as a convex combination of any two distinct points in the set of
all feasible solutions. Hence x must be an extreme point.
Converse : To prove that every extreme point of the convex set of feasible solutions is a B.F.S.
Let x = [ x1, x2 ,...., x n] be an extreme point. Now in order to prove that x is a B.F.S., we
shall prove that the vectors associated with the positive elements of x are L.I.
Suppose that k-components (variables) in x are non-zero and (n − k) components are zero.
k
∴ Σ x i α i = b, x i > 0, i = 1, 2,....., k ...(6)
i =1
If possible, let the column vectors α 1, α 2 ,..., α k of matrix A be L.D. Then there exist some
scalars λ i = (i = 1, 2,...., k) with at least one of them non-zero, s.t.,
k
Σ λi αi = 0 ...(7)
i =1
Multiplying (7) by a scalar δ > 0 and adding it to and subtracting it from (6), we get
Σ(i = 1 to k) ( x i ± δ λ i) α i = b,
which shows that the two points
x1* = [ x1 + δ λ 1, x2 + δ λ 2 , ..., x k + δ λ k , 0, 0, ..., 0]
and x *2 = [ x1 − δ λ 1, x2 − δ λ 2 , ..., x k − δ λ k , 0, 0, ..., 0],
each having its last (n − k) components zero, satisfy A x = b.
Choosing δ such that
0 < δ < min { x i / | λ i | : λ i ≠ 0, i = 1, 2, ..., k },
we conclude that first k components of x1* and x *2 are always positive. But the remaining
components of x1* and x *2 are zero, which follows that x1* and x *2 are feasible solutions
different from x.
Now (1/2) x1* + (1/2) x *2 = [ x1, x2 ,...., x k , 0, 0,..., 0] = x,
i.e., x = λ x1* + (1 − λ ) x *2 , where λ = 1/2,
i.e., x can be expressed as a convex combination of the two distinct feasible solutions x1* and
x *2 — a contradiction, since x is an extreme point. Hence the vectors α 1, α 2 , ..., α k associated
with the positive components of x are L.I.
Thus x is a B.F.S. Hence, every extreme point of the convex set of feasible solutions is a
B.F.S.
Corollary 1: The extreme points of the convex set of feasible solutions are finite in
number.
From the above theorem, we conclude that there is only one extreme point for a given
B.F.S. and vice-versa. That is there is one-to-one correspondence between the extreme
points and the B.F. solutions in the absence of degeneracy. Also in case of degeneracy
corresponding to an extreme point with the number of non-zero variables less than m, we
can form more than one degenerate B.F.S. Hence the number of extreme points of the
feasible region is finite and it cannot exceed the number of its B.F. solutions.
Corollary 2: An extreme point can have atmost m-positive x i' s where m is the number of
constraints.
Corollary 3: In an extreme point, vectors associated to the positive x i' s are L.I.
Theorem 9: Fundamental extreme point theorem : If the convex set of the feasible solutions
of A x = b, x ≥ 0 is a convex polyhedron, then atleast one of the extreme points gives an
optimal solution. [Meerut 2007 (BP), 08, 09, 10, 12]
Proof: In the Corollary 1 of last theorem we have proved that the extreme points of the
convex set of feasible solutions of A x = b, x ≥ 0 are finite in number.
Let x1, x 2 ,...., x k be the extreme points of the set X of all the feasible solutions of
A x = b, x ≥ 0. Let Z be the objective function, which is to be maximized, given by
Z = c x.
Suppose Z attains its maximum value Z * at the point x * ∈ X, so that
Max. Z = Z * = c x *.
Since X is a convex polyhedron, x * can be expressed as a convex combination of its extreme
points, say x * = λ 1x1 + λ 2 x 2 + ....+ λ k x k , λ i ≥ 0, Σ λ i = 1.
∴ Z * = c x * = c (λ 1x1 + λ 2 x 2 + ....+ λ k x k )
= (λ1 c x1 + λ 2 c x 2 + ...+ λ k c x k ).
If the maximum of c x1, c x 2 , ..., c x k is c x p, then
Z * ≤ (λ1 + λ 2 + ...+ λ k ). c x p or Z * ≤ c x p.
But Z * is the maximum value of Z and x p ∈ X, so Z * ≥ c x p. Hence
Z * = c x p or c x * = c x p ,
i.e., the maximum value of Z is also attained at the extreme point x p. Hence the theorem.
Theorem 10: If the objective function of a L.P.P. assumes its optimal value at more than
one extreme point, then every convex combination of these extreme points gives the optimal
value of the objective function.
Max. Z = c x
subject to A x = b, x ≥ 0.
Let x1, x 2 ,...., x k be the extreme points of the feasible region. If the objective function Z
assumes its optimal value Z * at the extreme points x1, x 2 ,..., x p, ( p ≤ k) then
Z * = c x1 = c x 2 = .... = c x p.
Let x 0 be any convex combination of these extreme points, i.e.,
x 0 = λ 1 x1 + λ 2 x 2 + ....+ λ p x p, λ i ≥ 0, Σ(i = 1 to p) λ i = 1.
∴ c x 0 = c [λ 1 x1 + λ 2 x 2 + ... + λ p x p]
= λ 1 c x1 + λ 2 c x 2 + .... + λ p c x p
= λ 1 Z * + λ 2 Z * + .... + λ pZ *
= (λ 1 + λ 2 + ... + λ p) Z * = Z *.        [∵ Σ(i = 1 to p) λ i = 1]
Hence the optimal value Z * is also attained at x0 which is the convex combination of the
extreme points at which optimal value occurs. Hence the theorem.
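Theorem 10 is easy to see numerically. The sketch below (not from the book, and assuming NumPy) takes the two optimal extreme points of the example solved at the end of the previous chapter and evaluates the objective at several convex combinations; the value is 21 every time.

    # Numerical illustration of Theorem 10.
    import numpy as np

    c = np.array([6.0, 2.0])                 # objective 6x1 + 2x2 (plus the constant 9)
    x1 = np.array([2.0, 0.0])                # the two optimal extreme points
    x2 = np.array([1.5, 1.5])
    for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
        x0 = lam * x1 + (1 - lam) * x2       # a convex combination
        print(lam, c @ x0 + 9)               # always 21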
Example 1: Find in which half spaces generated by the hyperplane
3 x1 + 2 x2 + 4 x3 + 7 x4 = 8
do the points (−6, 1, 7, 2 ) and (1, 2, − 4, 1) lie.
Solution: For the point (−6, 1, 7, 2) we have 3 (−6) + 2 (1) + 4 (7) + 7 (2) = 26 > 8, so this
point lies in the open half space
3 x1 + 2 x2 + 4 x3 + 7 x4 > 8.
For the point (1, 2, −4, 1) we have 3 (1) + 2 (2) + 4 (−4) + 7 (1) = −2 < 8, so this point lies in
the open half space
3 x1 + 2 x2 + 4 x3 + 7 x4 < 8.
Example 2: Sketch the convex polygon spanned by the following points in a two
dimensional Euclidean space. Which of these points are vertices? Express the other as the
convex linear combination of the vertices
1 1
(0, 0), (0, 1), (1, 0), , .
2 4
Solution: The convex combinations of the points (0, 0), (1, 0); (0, 0), (0, 1) and (1, 0),
(0, 1) give the line segments OA, OB and AB respectively. Thus the convex combination
of points (0, 0), (1, 0) and (0, 1) is the interior of the triangle OAB.
The points O(0, 0), A(1, 0) and B(0, 1) are the vertices of the polygon, and the point
C (1/2, 1/4) is an interior point of the triangle OAB. [Figure : triangle OAB with the interior point C.]
Let (1/2, 1/4) = λ 1 (0, 0) + λ 2 (0, 1) + λ 3 (1, 0), where λ i ≥ 0 and λ 1 + λ 2 + λ 3 = 1.
∴ (1/2, 1/4) = (λ 3 , λ 2 ), which gives λ 2 = 1/4, λ 3 = 1/2.
∴ λ1 = 1 − λ 2 − λ 3 = 1 − 1/4 − 1/2 = 1/4.
Thus (1/2, 1/4) = (1/4) (0, 0) + (1/4) (0, 1) + (1/2) (1, 0).
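The weights in such a convex combination can also be obtained by solving a small linear system. The following sketch (not part of the book, assuming NumPy) solves l1 O + l2 A + l3 B = C together with l1 + l2 + l3 = 1 and recovers the same coefficients as above.

    # Barycentric coordinates of C with respect to O, A, B.
    import numpy as np

    O, A, B = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
    C = np.array([0.5, 0.25])

    M = np.column_stack([np.append(O, 1.0), np.append(A, 1.0), np.append(B, 1.0)])
    lam = np.linalg.solve(M, np.append(C, 1.0))
    print(lam)          # [0.25 0.5 0.25] -> C = 1/4 O + 1/2 A + 1/4 B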
Example 3: Show that the set C = {( x1, x2 ) : 2 x1 + 3 x2 = 7} is a convex set.
Solution: Let u = (u1, u2 ) and v = (v1, v2 ) be any two points of C, so that
2 u1 + 3 u2 = 7 and 2 v1 + 3 v2 = 7.        ...(1)
If w = (w1, w 2 ) is a point on the line segment joining the points u and v, then
w = λ u + (1 − λ ) v, 0 ≤ λ ≤ 1
= (λ u1 + (1 − λ ) v1, λ u2 + (1 − λ ) v2 )
⇒ w1 = λ u1 + (1 − λ ) v1 and w2 = λ u2 + (1 − λ ) v 2 .
Now 2 w1 + 3 w2 = λ [2 u1 + 3 u2 ] + (1 − λ ) [2 v1 + 3 v 2 ]
= λ . 7 + (1 − λ ) . 7 = 7, using (1)
∴ w = (w1, w2 ) ∈ C. Hence C is a convex set.
Example 4: Show that the set S = {( x1, x2 , x3 ) : 2 x1 − x2 + x3 ≤ 4} is a convex set.
Solution: Let x = ( x1, x2 , x3 ) and y = ( y1, y2 , y3 ) be any two points of S. Then we have
2 x1 − x2 + x3 ≤ 4 and 2 y1 − y2 + y3 ≤ 4. ...(1)
If w = (w1, w2 , w3 ) is a point on the line segment joining the points x and y, then
w = λ x + (1 − λ ) y, 0 ≤ λ ≤ 1
⇒ w1 = λ x1 + (1 − λ ) y1, w2 = λ x2 + (1 − λ ) y2 , w3 = λ x3 + (1 − λ ) y3 .
Now 2 w1 − w2 + w3 = λ (2 x1 − x2 + x3 ) + (1 − λ ) (2 y1 − y2 + y3 )
≤ 4 λ + 4 (1 − λ),
[using (1)]
=4
∴ w = (w1, w2 , w3 ) ∈ S. Hence S is a convex set.
Example 5: Show that the set A = {( x1, x2 ) : 4 x1 + 3 x2 ≤ 6, x1 + x2 ≥ 1} is a convex set.
Solution: Let x = ( x1, x2 ) and y = ( y1, y2 ) be any two points of A. If w = (z1, z2 ) is any point
of the line segment joining x and y, then
w = λ x + (1 − λ ) y, 0 ≤ λ ≤ 1
= (λ x1 + (1 − λ ) y1, λ x2 + (1 − λ ) y2 )
⇒ z1 = λ x1 + (1 − λ ) y1 and z2 = λ x2 + (1 − λ ) y2 .
Now 4 z1 + 3 z2 = λ (4 x1 + 3 x2 ) + (1 − λ ) (4 y1 + 3 y2 ) ≤ λ . 6 + (1 − λ ) . 6
[Q0 ≤ λ ≤ 1, 0 ≤ 1 − λ ≤ 1, 4 x1 + 3 x2 ≤ 6 and 4 y1 + 3 y2 ≤ 6]
i.e., 4 z1 + 3 z2 ≤ 6
and z1 + z2 = λ ( x1 + x2 ) + (1 − λ ) ( y1 + y2 ) ≥ λ .1 + (1 − λ).1
i.e., z1 + z2 ≥ 1 [Q 0 ≤ λ ≤ 1, 0 ≤ 1 − λ ≤ 1, x1 + x2 ≥ 1, y1 + y2 ≥ 1]
∴ w = (z1, z2 ) ∈ A.
B is a convex set.
H1 = {( x1, x2 ) : 4 x1 + 3 x2 ≤ 6}
Example 6: For any points x, y ∈ R n , show that the line segment [x : y ] is a convex set.
u = λ 1x + (1 − λ1) y, 0 ≤ λ 1 ≤ 1
and v = λ 2 x + (1 − λ 2 ) y, 0 ≤ λ 2 ≤ 1.
Now let w be a point on the line segment joining the points u and v.
Then w = λ u + (1 − λ ) v, 0 ≤ λ ≤ 1.
= λ [λ 1 x + (1 − λ1) y] + (1 − λ ) [λ 2 x + (1 − λ 2 ) y]
= [λλ 1 + (1 − λ) λ 2 ] x + [λ (1 − λ1) + (1 − λ) (1 − λ 2 )] y.
Put µ = λλ 1 + (1 − λ) λ 2 . Then
1 − µ = 1 − λλ 1 − (1 − λ) λ 2 = λ + (1 − λ) − λλ 1 − (1 − λ) λ 2
= λ (1 − λ 1) + (1 − λ) (1 − λ 2 ).
Since 0 ≤ λ1 ≤ 1, 0 ≤ λ 2 ≤ 1 and 0 ≤ λ ≤ 1, we have 0 ≤ λλ1 + (1 − λ) λ 2 ≤ 1, so that
w = µ x + (1 − µ) y, 0 ≤ µ ≤ 1
∴ w ∈ [x : y].
Example 7: Show that the set S = {x : A x ≤ b} of solutions of a system of linear inequalities is a convex set.
Solution: Let A = [aij ]m× n, x = ( x1, x2 ,..., x n) and b = (b1, b2 ,...., bm). Then S is the
intersection of the m closed half spaces Hi = {x : ai1 x1 + ai2 x2 + ... + ain x n ≤ bi}, i = 1, 2, ..., m.
It follows that S = ∩(i = 1 to m) Hi is convex, as each closed half space is convex and the
intersection of convex sets is convex.
Example 8: Show that the set S = { x : x = ( x1, x2 , x3 ), x1² + x2² + x3² ≤ 1} is a convex
set. [Meerut 2006 (BP)]
Solution: Let x = ( x1, x2 , x3 ) and y = ( y1, y2 , y3 ) be any two points of S, so that
x1² + x2² + x3² ≤ 1        ...(1)
and y1² + y2² + y3² ≤ 1.        ...(2)
If z = (z1, z2 , z3 ) is a point on the line segment joining the points x and y, then
z = λ x + (1 − λ ) y, 0 ≤ λ ≤ 1
⇒ z1 = λ x1 + (1 − λ ) y1, z2 = λ x2 + (1 − λ ) y2 , z3 = λ x3 + (1 − λ ) y3 .
Now z1² + z2² + z3² = λ² ( x1² + x2² + x3²) + (1 − λ)² ( y1² + y2² + y3²)
+ 2 λ (1 − λ)( x1 y1 + x2 y2 + x3 y3 ).
By the Cauchy–Schwarz inequality,
x1 y1 + x2 y2 + x3 y3 ≤ √ ( x1² + x2² + x3² ) √ ( y1² + y2² + y3² ) ≤ 1, [using (1) and (2)]
so that z1² + z2² + z3² ≤ λ² + (1 − λ)² + 2 λ (1 − λ) = [λ + (1 − λ)]² = 1.
∴ z = (z1, z2 , z3 ) ∈ S. Hence S is a convex set.
Example 9: Express any point w inside a triangle as a convex combination of the vertices
(extreme points) x1, x 2 , x 3 of the triangle.
Solution: Let u be the point in which the line joining x1 and w (produced) meets the side
x 2 x 3 of the triangle. Then
u = λ 1 x 2 + (1 − λ 1) x 3 , 0 ≤ λ 1 ≤ 1,
and, since w lies on the line segment joining x1 and u,
w = λ 2 x1 + (1 − λ 2 ) u, 0 ≤ λ 2 ≤ 1
= λ 2 x1 + λ1(1 − λ 2 ) x 2 + (1 − λ1) (1 − λ 2 ) x 3
or w = µ1x1 + µ2 x 2 + µ3 x 3 ,        ...(2)
where µ1 = λ 2 , µ2 = λ 1(1 − λ 2 ), µ3 = (1 − λ 1) (1 − λ 2 ). Clearly each µ i ≥ 0 and
µ1 + µ2 + µ3 = λ 2 + λ1 − λ1λ 2 + 1 − λ1 − λ 2 + λ1λ 2 = 1.
Hence w is a convex combination of the vertices x1, x 2 , x 3 .
Example 10: Find the extreme points, if any, of the following sets :
(i) S = {( x, y ) : x² + y² ≤ 25}        (ii) S = {( x, y ) : | x | ≤ 1, | y | ≤ 1}.
Solution: (i) Draw the region of the set S. It represents the boundary and interior of the
circle with centre at (0, 0) and radius 5. Every point on the circumference x² + y² = 25 is an
extreme point, so this convex set has an infinite number of extreme points.
(ii) The set S represents the region of the square bounded by the lines x = 1, x = −1, y = 1,
y = −1. The extreme points of this convex set are the vertices (1, 1), (−1, 1), (−1, − 1) and
(1, − 1).
Example 11: Is the union of two convex sets necessarily a convex set? [Meerut 2006 (BP)]
Solution: No. The union of two convex sets may or may not be a convex set.
For example, let S1 = {( x, y) : x ≥ 2} and T1 = {( x, y) : x ≥ 3}. Both are convex and
S1 ∪ T1 = {( x, y) : x ≥ 2},
which is again a convex set.
On the other hand, let S2 = {( x, y) : x ≥ 2} and T2 = {( x, y) : y ≥ 1}.
Then S2 ∪ T2 = {( x, y) : x ≥ 2 or y ≥ 1}.
Now the points A (9 / 4, 1 / 2) and B(1 / 2, 5 / 4) are the points of S2 ∪ T2 but their
mid-point P(11 / 8, 7 / 8) is not a point of the set S2 ∪ T2 as 11 / 8 < 2 and 7 / 8 < 1.
Hence the union of two convex sets is not necessarily a convex set.
Example 12: Find all the basic feasible solutions for the equations
2 x1 + 6 x2 + 2 x3 + x4 = 3
6 x1 + 4 x2 + 4 x3 + 6 x4 = 2
xi ≥ 0
and determine the associated general convex combination of the extreme point solutions.
[Meerut 2007; Kanpur 2010]
Solution: In matrix form the given system of equations can be written as
A x = b,
where A = (α1, α 2 , α 3 , α 4 ) = [ 2  6  2  1 ; 6  4  4  6 ],
α1 = [2, 6], α 2 = [6, 4], α 3 = [2, 4], α 4 = [1, 6], x = [ x1, x2 , x3 , x4 ], b = [3, 2].
This problem can have at most 4 C2 = 6 basic solutions. Now the six sets of two vectors
are
B1 = [α1, α 2 ] = [ 2  6 ; 6  4 ],  B2 = [α1, α 3 ] = [ 2  2 ; 6  4 ],  B3 = [α1, α 4 ] = [ 2  1 ; 6  6 ],
B4 = [α 2 , α 3 ] = [ 6  2 ; 4  4 ],  B5 = [α 2 , α 4 ] = [ 6  1 ; 4  6 ],  B6 = [α 3 , α 4 ] = [ 2  1 ; 4  6 ].
The determinants of these matrices are −28, −4, 6, 16, 32 and 8 respectively. Since none of
these is zero, all these sets of vectors are L.I.
If x Bi , i = 1, 2, ..., 6, are the vectors of the basic variables associated to the bases
Bi , i = 1, 2, ..., 6, respectively, then
x B1 = B1⁻¹ b = (1/(−28)) [ 4  −6 ; −6  2 ] [3, 2]T = [0, 1/2]T, i.e., x1 = 0, x2 = 1/2,
x B2 = B2⁻¹ b = (1/(−4)) [ 4  −2 ; −6  2 ] [3, 2]T = [−2, 7/2]T, i.e., x1 = −2, x3 = 7/2,
x B3 = B3⁻¹ b = (1/6) [ 6  −1 ; −6  2 ] [3, 2]T = [8/3, −7/3]T, i.e., x1 = 8/3, x4 = −7/3,
x B4 = B4⁻¹ b = (1/16) [ 4  −2 ; −4  6 ] [3, 2]T = [1/2, 0]T, i.e., x2 = 1/2, x3 = 0,
x B5 = B5⁻¹ b = (1/32) [ 6  −1 ; −4  6 ] [3, 2]T = [1/2, 0]T, i.e., x2 = 1/2, x4 = 0,
x B6 = B6⁻¹ b = (1/8) [ 6  −1 ; −4  2 ] [3, 2]T = [2, −1]T, i.e., x3 = 2, x4 = −1.
Thus it is obvious that out of these only three basic solutions are B.F.S. (those in which all
the variables are non-negative), namely the ones obtained from the bases B1, B4 and B5 . But
the B.F.S.'s correspond to the extreme points. Hence the extreme point solutions are given by
x1 = (0, 1/2, 0, 0),  x 2 = (0, 1/2, 0, 0),  x 3 = (0, 1/2, 0, 0),
i.e., the three degenerate B.F.S. all coincide in the single extreme point (0, 1/2, 0, 0).
Note : To find the basic solution x B1 we can also proceed as follows. Putting the non-basic
variables x3 = x4 = 0 in the given equations, we get 2 x1 + 6 x2 = 3, 6 x1 + 4 x2 = 2, which on
solving give x1 = 0, x2 = 1/2.
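The degeneracy noted above is easy to confirm by machine. The sketch below (not from the book, assuming NumPy) enumerates the six bases of Example 12 and prints the full solution vector for every feasible one; each print-out is the same point (0, 1/2, 0, 0).

    # Checking that every feasible basis of Example 12 gives the same point.
    from itertools import combinations
    import numpy as np

    A = np.array([[2.0, 6.0, 2.0, 1.0],
                  [6.0, 4.0, 4.0, 6.0]])
    b = np.array([3.0, 2.0])

    for cols in combinations(range(4), 2):
        xB = np.linalg.solve(A[:, cols], b)      # all six bases are non-singular here
        if np.all(xB >= -1e-12):                 # basic feasible solution
            x = np.zeros(4)
            x[list(cols)] = xB
            print(cols, np.round(x, 4))          # always (0, 0.5, 0, 0)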
2. Prove that the set {( x1, x2 ) : x1² + x2² ≤ 4} is a convex set. [Meerut 2011 (BP)]
3. Define a convex set. Show that the set S = {( x1, x2 ) : 3 x1² + 2 x2² ≤ 6} is convex.
[Meerut 2006]
4. Show that the set S = {( x1, x2 ) : 2 x1 + 3 x2 = 11} ⊂ R 2 is a convex set. [Meerut 2010]
8. If S1 and S2 be two non-empty disjoint convex sets and S be a set such that if x1 ∈ S1
and x 2 ∈ S2 then x1 − x 2 ∈ S. Show that S is also a convex set and does not contain
the origin. [Meerut 2008 (BP); 09 (BP)]
9. Show that the set of all the internal points of a convex set S is a convex set.
[Meerut 2011, 12 (BP)]
10. Determine whether the vector [7, 0] is a convex combination of the vectors [6, 3],
[9, –6], [1, 2], [1, –1].
11. Give an example of a convex set whose every boundary point is an extreme point.
(ii) A = {x1, x 2 }.
13. If x1, x 2 ∈ S ⇒ (1/2) (x1 + x 2 ) ∈ S, then is the set S convex or not ?
14. Can there be any convex set without any extreme point? Prove that an extreme
point of a convex set is a boundary point of the set.
16. Find the extreme points of the polygonal convex set X determined by the system
2 x1 + x2 + 9 ≥ 0, − x1 + 3 x2 + 6 ≥ 0, x1 + x2 ≤ 0, x1 + 2 x2 − 3 ≤ 0.
17. A and B are two convex sets in R n, C is a set in R n defined as
C = {z ∈ R n : z = x + y, x ∈ A, y ∈ B}
Examine convexity of C.
18. Express (2, 1) and (0, 3/2), if possible, as convex combinations of (1, 1) and (–1, 2).
(c) m (d) n
3. A rectangle with sides a1 and a2 (a1 ≠ a2 ) is placed with one corner at the origin and
two of its sides along the axes. The interior of the rectangle plus its edges form a :
(a) Convex set (b) Non-convex set
(c) Polyhedron convex set (d) None of these
5. The convex hull of the set of all the points on the boundary of the circle is the :
(a) Interior of the circle (b) Whole circle
(c) Boundary of the circle (d) None of these
6. Consider the triangle with vertices (0, 0), (2, 0), (1, 1). The point (.3, .2) is a convex
combination of these vertices is :
(a) (1, 1), (1, –1) (b) (1, 1), (1, –1), (–1, 1)
(c) (1, 1), (1, –1), (–1, 1), (–1, –1) (d) (1, 1), (–1, –1)
4. Any point on the line segment joining two points in R n can be expressed as a convex
combination of .......... points.
5. The polygons which are convex sets have the extreme points as their .......... .
6. The optimal solution of L.P. problem occurs at an .......... point. [Meerut 2004]
True or False
1. A vertex is a boundary point but all boundary points are not vertices.
4. A line passing through two distinct points x1 and x 2 is the set of all the points x
such that x = λ x1 + (1 − λ ) x 2 , λ ∈ [0, 1] .
5. The number of edges that can emanate from any given extreme point of the convex
set of F.S. is two.
6. The set of all feasible solutions (if not empty) of a L.P.P. is a convex set.
7. If a L.P.P. has two feasible solutions, then it has an infinite number of feasible
solutions.
9. The extreme points of the convex set of feasible solutions to a L.P.P. are finite in
number.
10. Every B.F.S. to a L.P.P. is not an extreme point of the convex set of feasible
solutions.
11. If the convex set of feasible solutions to a L.P.P. is a convex polyhedron, then at
least one B.F.S. is optimal.
Exercise
1. (i) No (ii) No
7. (i) Convex (ii) Convex
10. Yes
11. The set {x : |x|≤ 1} in R 2 .
13. Convex
14. Yes
15. No
16. (3/2, −3/2), (−3, −3), (−7, 5), (−3, 3).
17. Convex
18. (2, 1) cannot be written as a convex combination of (1, 1) and (−1, 2);
(0, 3/2) = (1/2) (1, 1) + (1/2) (−1, 2).
True or False
1. True 2. False 3. False
4. False 5. True 6. True
7. True 8. False 9. True
10. False 11. True
4.1 Introduction
In chapter 2 we have discussed the formulation of linear programming problems and the graphic method of solving them. It was observed in the graphical approach to the
solution of L.P.P. that in a given situation the set of constraints determines the feasible region and the objective function gives the optimal point, the one that minimizes or maximizes, as the case may be. But the graphical method suffers from the great limitation that it can handle only problems involving two decision variables, whereas in real world situations we frequently encounter problems in which more than two variables are involved.
It is sometimes impossible or requires great labour to search for optimal solution from
amongst all the feasible solutions which may be infinite in number. Fundamental
theorem of linear programming makes it easy to find an optimal solution because it deals
with basic feasible solutions only. But it is also not an easy job to enumerate all the B.F.S.
even for small values of m (number of constraints) and n (number of variables). To
overcome this difficulty Simplex Method or Simplex Algorithm was developed by
George Dantzig in 1947. This method provides an efficient technique which can be
applied for solving linear programming problems of any magnitude (involving two or
more decision variables).
The simplex method is an iterative procedure for finding, in a systematic manner, the optimal solution to a linear programming problem. In its iterative search the simplex algorithm selects the optimal solution from among the set of basic feasible solutions to the
In this chapter, we shall deal with only those problems for which initial basic feasible
solution is non-degenerate.
Consider a linear programming problem which after introducing slack and surplus variables is as follows :
Max. Z = c1 x1 + c2 x2 + .... + c N x N
subject to ai1 x1 + ai2 x2 + .... + aiN x N = bi , i = 1, 2, ...., m, and x j ≥ 0, j = 1, 2, ...., N.
The above linear programming problem can be easily converted to matrix form
Max. Z = c x
subject to A x = b, x ≥ 0 ,
where c is a row vector of order 1× N, x and b are column vectors of order N ×1 and m ×1
respectively.
Let A = (α 1, α 2 ,...., α N ) and let B = (β1, β2 ,..., β m) be the basis matrix formed by m linearly independent columns of A.
Since the vectors β1, β2 ,..., β m are linearly independent, they form a basis for E m. Hence every column α j of A can be expressed as a linear combination of β1, β2 ,..., β m :
α j = β1 y1 j + β2 y2 j + ...+ β m ymj
= (β1, β2 ,...., β m) [ y1 j , y2 j ,...., ymj ]
or α j = B Y j , where Y j = [ y1 j , y2 j ,...., ymj ] is a column vector of order m × 1.
Also α j = B Y j ⇒ Y j = B −1 α j .
The variables corresponding to β1, β2 ,..., β m are denoted by x B1, x B2 ,..., x Bm respectively and are called basic variables. Writing x B = [ x B1, x B2 ,..., x Bm], we have B x B = b
and x B = B −1 b.
Since x B1, x B2 ,...., x Bm are the basic variables, the remaining N − m variables belonging
to x are called non-basic variables.
We shall denote the coefficients of basic variables x B1, x B2 ,...., x Bm in the objective
function Z by cB1, cB2 ,..., cBm.
Corresponding to any x B , c B will represent the row vector containing the constants
cB1, cB2 ,..., cBm.
Since for any basic feasible solution all non-basic variables are zero, the objective function becomes Z = c B x B .
Further, for every column α j of A we define Z j = c B Y j .
There exists Z j for each α j and it changes as the columns of A forming B change.
Note : For convenience, we shall represent column vectors by [ ] without using transpose
symbol and row vector by ( ). There should be no confusion in understanding scalar
multiplication of two vectors c and x which is written as c x in place of c. x.
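The quantities Y j = B −1 α j , x B = B −1 b, Z j = c B Y j and ∆ j = c j − Z j defined above can be computed mechanically once a basis is chosen. The following short sketch (Python with NumPy; it is not part of the original text, and the numbers are an assumed illustration taken from a worked example later in this chapter) computes them for the columns α1 = [2, 3], α 2 = [1, 1], α 3 = [4, 5], b = [11, 14] with B = (α1, α 3 ) :

    import numpy as np

    A = np.array([[2.0, 1.0, 4.0],      # columns alpha_1, alpha_2, alpha_3
                  [3.0, 1.0, 5.0]])
    b = np.array([11.0, 14.0])
    c = np.array([1.0, 2.0, 4.0])       # costs c_1, c_2, c_3

    basis = [0, 2]                      # B = (alpha_1, alpha_3)
    B = A[:, basis]
    c_B = c[basis]

    x_B = np.linalg.solve(B, b)         # x_B = B^-1 b  ->  [1/2, 5/2]
    Y = np.linalg.solve(B, A)           # Y_j = B^-1 alpha_j for every column j
    Z = c_B @ Y                         # Z_j = c_B Y_j
    delta = c - Z                       # Delta_j = c_j - Z_j
    print(x_B, Z, delta)

Here x_B comes out as [1/2, 5/2], agreeing with the basic feasible solution found by hand in the example of article 4.3.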
Consider the L.P.P. Max. Z = c x subject to A x = b, x ≥ 0, and let x * = [ x1, x2 ,...., x N ] be an optimal solution of it, so that the optimal value of the objective function is
Z * = Σ (i = 1 to N) ci x i .
Also suppose that k (k ≤ N ) variables in x * are non-zero and the remaining N − k variables are zero. For the sake of convenience, we can assume that the first k variables of x * are non-zero.
Thus x * = [ x1, x2 ,...., x k , 0, 0,...., 0], the last N − k components being zero.
∴ Σ (i = 1 to k) x i α i = b ...(1)
and Z * = Σ (i = 1 to k) ci x i . ...(2)
Case I : If the vectors α i , i = 1, 2, ..., k associated with the non-zero variables x i , i = 1, 2, ..., k are L.I., then x * is itself a basic feasible solution, which is also optimal.
Case II : Suppose the vectors α1, α 2 ,...., α k are linearly dependent. In this case, we shall reduce the number of non-zero variables step by step until we obtain a B.F.S. from the given feasible solution.
Since α1, α 2 ,...., α k are linearly dependent therefore there exist scalars λ 1, λ 2 ,...., λ k
such that
λ1 α1 + λ 2 α 2 + .... + λ k α k = 0
or Σ (i = 1 to k) λ i α i = 0 ...(3)
We can assume that at least one λ i is positive because if none is positive, then we can
multiply the equation (3) by –1 and get positive λ i.
Now, suppose
v = max (1 ≤ i ≤ k) λ i / x i . ...(4)
Multiplying both sides of (3) by 1/ v and subtracting from (1), we get
Σ (i = 1 to k) ( x i − λ i / v) α i = b. ...(5)
(5) gives
x′ = [ x1 − λ1 / v, x2 − λ 2 / v,...., x k − λ k / v, 0, 0,...., 0],
a new solution of the matrix equation A x = b.
Since v ≥ λ i / x i for every i, we have λ i / v ≤ x i ,
or x i − λ i / v ≥ 0.
Since all the components of x ′ are non-negative therefore x′ is a feasible solution to the
given L.P.P.
Also v = λ i / x i for at least one i, 1 ≤ i ≤ k,
i.e., for this value of i, x i − λ i / v = 0 ; therefore the new feasible solution x′ cannot have more than k − 1 non-zero variables.
In this way, we have derived a new feasible solution from the given optimal feasible
solution which contains less number of non-zero variables. This solution is B.F.S., if the
column vectors associated to non-zero variables in this new solution are L.I. If these
associated vectors are not L.I., we shall repeat the whole reduction procedure as
explained above. Continuing in this way for finite number of times, we will derive a
solution in which columns corresponding to positive variables are L.I. i.e., we will obtain
a B.F.S. of the system.
Let Z′ be the new value of the objective function corresponding to the new solution x′.
Then
Z ′ = Σ (i = 1 to k) ci ( x i − λ i / v) = Σ (i = 1 to k) ci x i − (1/ v) Σ (i = 1 to k) ci λ i
or Z ′ = Z * − (1/ v) Σ (i = 1 to k) ci λ i , from (2). ...(6)
We now show that Σ (i = 1 to k) ci λ i = 0, so that Z ′ = Z *. ...(7)
Suppose, if possible, Σ (i = 1 to k) ci λ i ≠ 0.
Then either Σ (i = 1 to k) ci λ i < 0 ...(8)
or Σ (i = 1 to k) ci λ i > 0. ...(9)
In either of the two cases (8) and (9), we can find a real number r such that
r Σ (i = 1 to k) ci λ i > 0
[in case (8) r will be negative and in case (9) r will be positive]
or Σ (i = 1 to k) ci (r λ i) > 0
or Σ (i = 1 to k) ci (r λ i) + Σ (i = 1 to k) ci x i > Σ (i = 1 to k) ci x i
or Σ (i = 1 to k) ci ( x i + r λ i) > Z *, from (2). ...(10)
Also, multiplying (3) by r and adding to (1), we get
Σ (i = 1 to k) ( x i + r λ i) α i = b.
Therefore, x″ = [ x1 + r λ1, x2 + r λ 2 ,...., x k + r λ k , 0, 0,..., 0] is also a solution of A x = b.
Furthermore, we can choose r in infinitely many ways for which this solution also satisfies the non-negativity restrictions
x i + r λ i ≥ 0, i = 1, 2,..., k
or r λ i ≥ − x i
∴ r ≥ − x i / λ i if λ i > 0,
r ≤ − x i / λ i if λ i < 0
and r is unrestricted if λ i = 0.
Thus every r satisfying
max (λ i > 0) (− x i / λ i) ≤ r ≤ min (λ i < 0) (− x i / λ i) ...(11)
gives such a feasible solution.
Furthermore, by (10) we then have a feasible solution for which the value of the objective function is strictly greater than the greatest value (optimal value) Z * of the objective function, which is impossible.
∴ We must have Σ (i = 1 to k) ci λ i = 0
or Z ′ = Z *.
Hence x′ = [ x1 − λ1 / v, x2 − λ 2 / v,...., x k − λ k / v, 0, 0,..., 0] is also an optimal feasible solution, containing fewer non-zero variables than x *.
Consider the L.P.P. Max. Z = c x subject to A x = b, x ≥ 0.
Proof: Consider an arbitrary feasible solution x * of the given linear programming problem.
Let us assume that k, (k ≤ N ) variables in x * have positive values and the remaining N − k variables are zero. We can also assume that the variables have been numbered in such a way that the first k variables are non-zero.
Thus x * = [ x1, x2 ,...., x k , 0, 0,..., 0], the last N − k components being zero.
Also, we have Σ (i = 1 to k) x i α i = b. ...(2)
Case I : If the vectors α i , i = 1, 2,..., k associated with the non-zero variables x i are L.I., then x * is itself a basic feasible solution.
Case II : Suppose the vectors α1, α 2 ,...., α k are linearly dependent. In this case we shall reduce the number of non-zero variables step by step until we obtain a B.F.S. from the given feasible solution.
Since α1, α 2 ,...., α k are linearly dependent, there exist scalars λ 1, λ 2 ,...., λ k , not all zero, such that
λ1 α 1 + λ 2 α 2 + .... + λ k α k = 0
or Σ (i = 1 to k) λ i α i = 0 ...(3)
We can assume that at least one λ i is positive because if none is positive, then we can
multiply the equation (3) by –1 and get positive λ i.
Now, suppose
v = max (1 ≤ i ≤ k) λ i / x i ...(4)
Multiplying both sides of (3) by 1/ v and subtracting from (2), we get
Σ (i = 1 to k) ( x i − λ i / v) α i = b. ...(5)
(5) gives x′ = [ x1 − λ1 / v, x2 − λ 2 / v,..., x k − λ k / v, 0, 0,..., 0]
and, since v ≥ λ i / x i for every i, x i − λ i / v ≥ 0.
v
Since all the components of x ′ are non-negative therefore x ′ is a feasible solution to the
given L.P.P.
Also v = λ i / x i for at least one i, 1 ≤ i ≤ k, and for this value of i, x i − λ i / v = 0 ; therefore the new feasible solution x′ cannot have more than k − 1 non-zero variables.
In this way, we have derived a new feasible solution from the given optimal feasible
solution which contains less number of non-zero variables. This solution is B.F.S., if the
column vectors associated to non-zero variables in this new solution are L.I. If these
associated vectors are not L.I., we shall repeat the whole reduction procedure as
explained above. Continuing in this way for finite number of times we will derive a
solution in which columns corresponding to positive variables are L.I. i.e., we will obtain
a B.F.S. of the system.
Note : 1. By applying the above mentioned procedure we can obtain a basic feasible
solution from any feasible solution.
2. If the given feasible solution is optimal, then the basic feasible solution so obtained is also optimal.
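The reduction procedure described above can be expressed as a small algorithm. The sketch below (Python with NumPy; an illustration only, not the book's own presentation) assumes the columns of A and a feasible solution x are supplied, obtains scalars λ i with Σ λ i α i = 0 from a null-space direction whenever the columns of the positive variables are dependent, and applies the rule v = max λ i / x i until a B.F.S. is reached:

    import numpy as np

    def reduce_to_bfs(A, x, tol=1e-9):
        """Reduce a feasible solution x of A x = b to a basic feasible solution."""
        x = x.astype(float).copy()
        while True:
            pos = np.where(x > tol)[0]                  # indices of the non-zero variables
            cols = A[:, pos]
            if np.linalg.matrix_rank(cols) == len(pos): # associated columns already L.I.
                return x                                # x is a B.F.S.
            # find lambda (not all zero) with  sum lambda_i * alpha_i = 0
            _, _, Vt = np.linalg.svd(cols)
            lam = Vt[-1]                                # a null-space direction of cols
            if lam.max() <= 0:                          # ensure at least one lambda_i > 0
                lam = -lam
            v = np.max(lam / x[pos])                    # v = max lambda_i / x_i
            x[pos] = x[pos] - lam / v                   # new feasible solution, one more zero
            x[np.abs(x) < tol] = 0.0

For instance, applied to the columns [2, 3], [1, 1], [4, 5] and the feasible solution (2, 3, 1) of a later example, it returns one of the two basic feasible solutions obtained there by hand.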
Min. Z = 2 x1 + 3 x2 + 4 x3
subject to x1 + x2 + x3 = 2
x1 − x2 + x3 = 0 , x1, x2 , x3 ≥ 0 ,
Solution: The given system of constraint equations can be written in matrix form A x = b.
or [[1, 1, 1], [1, −1, 1]] [ x1, x2 , x3 ] = [2, 0]
∴ α1 = [1, 1], α 2 = [1, −1], α 3 = [1, 1], b = [2, 0].
The given feasible solution x1 = 1, x2 = 0, x3 = 1 will not be basic if the vectors associated
to non-zero variables are not linearly independent.
Here α1 = α 3 , so the vectors α1 and α 3 associated with the non-zero variables x1 and x3 are linearly dependent i.e., are not L.I. Hence the given feasible solution is not a basic solution.
Aliter : |(α1, α 3 )| = det [[1, 1], [1, 1]] = 0 ⇒ the vectors α1 and α 3 are not L.I.
Max. Z = x1 + 2 x2 + 4 x3
subject to 2 x1 + x2 + 4 x3 = 11
3 x1 + x2 + 5 x3 = 14
x1, x2 , x3 ≥ 0 . If x1 = 2, x2 = 3, x3 = 1 is a feasible solution of this problem,
then find a basic feasible solution from the given feasible solution. [Gorakhpur 2007]
Solution: The constraint equations of the given L.P.P. Max. Z = x1 + 2 x2 + 4 x3 , subject to A x = b, x ≥ 0, can be written as
[[2, 1, 4], [3, 1, 5]] [ x1, x2 , x3 ] = [11, 14]
or α1 x1 + α 2 x2 + α 3 x3 = b, ...(1)
where α1 = [2, 3], α 2 = [1, 1], α 3 = [4, 5], b = [11, 14].
Since x1 = 2, x2 = 3, x3 = 1 is the given feasible solution, (1) gives 2α1 + 3α 2 + α 3 = b. ...(2)
Now the vectors α1, α 2 , α 3 associated with non-zero variables x1, x2 , x3 will be linearly
dependent if one of these vectors can be expressed as the linear combination of the
remaining two vectors.
Let α1 = a α 2 + bα 3 ...(3)
or [2, 3] = a [1, 1] + b [4, 5] = [a + 4 b, a + 5 b]
⇒ a + 4 b = 2 and a + 5 b = 3 ⇒ a = −2, b = 1
∴ α1 = −2α 2 + α 3
or α1 + 2α 2 − α 3 = 0 ...(4)
i.e., Σ (i = 1 to 3) λ i α i = 0 with λ1 = 1, λ 2 = 2, λ 3 = −1.
Now we shall determine which of the three variables x1, x2 , x3 should be zero.
Let v = max (1 ≤ i ≤ 3) (λ i / x i) = max (λ1 / x1 , λ 2 / x2 , λ 3 / x3 ) = max (1/2, 2/3, −1/1) = 2/3 = λ 2 / x2 .
∴ x2 should be zero, for which we should eliminate α 2 between (2) and (4). From (4), α 2 = (α 3 − α1) / 2, so (2) gives
2α1 + 3 (α 3 − α1) / 2 + α 3 = b
or (1/2) α1 + (5/2) α 3 = b.
Now the column vectors α1 and α 3 corresponding to the basic (non-zero) variables x1 and x3 are L.I., as
|(α1, α 3 )| = det [[2, 4], [3, 5]] = −2 ≠ 0.
Hence the required basic feasible solution is x1 = 1/2 , x2 = 0, x3 = 5/2 .
Note :
1. We can also find the new B.F.S as follows.
We have v = 2/3. Since x i − λ i / v ≥ 0 for all i = 1, 2, 3, the new solution is
( x1 − λ1 / v , x2 − λ 2 / v , x3 − λ 3 / v).
We have x1 − λ1 / v = 2 − 1/(2/3) = 1/2 , x2 − λ 2 / v = 0 , x3 − λ 3 / v = 5/2 .
2. If we write the relation (4) as
−α 1 − 2α 2 + α 3 = 0 ...(5)
then λ 1 = −1, λ 2 = −2, λ 3 = 1
and v = max (1 ≤ i ≤ 3) (λ i / x i) = 1/1 = λ 3 / x3 .
∴ x3 should be zero. Eliminating α 3 between (2) and (5), we get 3α1 + 5α 2 = b,
which gives another basic feasible solution x1 = 3, x2 = 5, x3 = 0.
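As a quick numerical check (again a Python sketch, not part of the text), both basic feasible solutions obtained above satisfy the constraint equations 2 x1 + x2 + 4 x3 = 11 and 3 x1 + x2 + 5 x3 = 14 :

    import numpy as np

    A = np.array([[2.0, 1.0, 4.0],
                  [3.0, 1.0, 5.0]])
    b = np.array([11.0, 14.0])

    for x in (np.array([0.5, 0.0, 2.5]),    # B.F.S. obtained by eliminating alpha_2
              np.array([3.0, 5.0, 0.0])):   # B.F.S. obtained by eliminating alpha_3
        assert np.allclose(A @ x, b)        # both satisfy the constraint equations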
5 x1 − 4 x2 + 3 x3 + x4 = 3
2 x1 + x2 + 5 x3 − 3 x4 = 0
x1 + 6 x2 − 4 x3 + 2 x4 = 15,
x1, x2 , x3 , x4 ≥ 0 .
Solution: The given set of equations can be expressed in the matrix form
A x =b
[[5, −4, 3, 1], [2, 1, 5, −3], [1, 6, −4, 2]] [ x1, x2 , x3 , x4 ] = [3, 0, 15]
or α1 x1 + α 2 x2 + α 3 x3 + α 4 x4 = b ...(1)
where α1 = [5, 2, 1], α 2 = [−4, 1, 6], α 3 = [3, 5, −4], α 4 = [1, −3, 2], b = [3, 0, 15].
Since x1 = 1, x2 = 2, x3 = 1, x4 = 3 is the given feasible solution, (1) gives α1 + 2α 2 + α 3 + 3α 4 = b. ...(2)
Now the vectors α1, α 2 , α 3 , α 4 associated with non-zero variables x1, x2 , x3 , x4 will be
linearly dependent if one of these vectors can be expressed as the linear combination of
the remaining three vectors.
Let α1 = aα 2 + bα 3 + cα 4 ...(3)
or [5, 2, 1] = a [−4, 1, 6] + b [3, 5, −4] + c [1, −3, 2]
⇒ −4 a + 3 b + c = 5, a + 5 b − 3 c = 2, 6 a − 4 b + 2 c = 1.
Solving these equations, we get a = 44/86, b = 139/86, c = 189/86, so that (3) gives
86 α1 − 44 α 2 − 139 α 3 − 189 α 4 = 0,
i.e., λ1 = 86, λ 2 = −44, λ 3 = −139, λ 4 = −189.
Using v = max (1 ≤ i ≤ 4) λ i / x i , we have
v = max (λ1 / x1 , λ 2 / x2 , λ 3 / x3 , λ 4 / x4 ) = max (86/1, −44/2, −139/1, −189/3) = 86/1 = λ1 / x1 .
∴ x1 should become zero in the new solution, whose components are x i − λ i / v :
x1 = 0, x2 = 2 + 44/86 = 108/43 , x3 = 1 + 139/86 = 225/86 , x4 = 3 + 189/86 = 447/86 .
This is a B.F.S., since the column vectors α 2 , α 3 , α 4 corresponding to the non-zero variables are L.I. :
|(α 2 α 3 α 4 )| = det [[−4, 3, 1], [1, 5, −3], [6, −4, 2]] ≠ 0.
2 x1 − x2 + 2 x3 = 2, x1 + 4 x2 = 18,
then find two basic feasible solutions.
5. (2, 1, 3) is a feasible solution of the set of equations
4 x1 + 2 x2 − 3 x3 = 1, 6 x1 + 4 x2 − 5 x3 = 1.
x1 + 2 x2 + 4 x3 + x4 = 7, 2 x1 − x2 + 3 x3 − 2 x4 = 4.
Reduce the feasible solution to two different basic feasible solutions.
2 x1 − x2 + 2 x3 = 10
x1 + 4 x2 = 18
x1, x2 , x3 ≥ 0
x1 + x2 + 2 x3 = 4
2 x1 − x2 + x3 = 2
2 x1 − 3 x2 + 4 x3 + 6 x4 = 25
x1 + 2 x2 + 3 x3 − 3 x4 + 5 x5 = 12
If x1 = 2, x2 = 1, x3 = 3, x4 = 2, x5 = 1 is a F.S., then reduce it to a B.F.S. of the
system.
10. Consider the set of equations
2 x1 − 3 x2 + 4 x3 + 6 x4 = 21
x1 + 2 x2 + 3 x3 − 3 x4 + 5 x5 = 9
If x1 = 2, x2 = 1, x3 = 2, x4 = 2, x5 = 1 is a F.S., then reduce it to two different basic
feasible solutions.
4. x1 = 26/9 , x2 = 34/9 , x3 = 0 ; x1 = 0, x2 = 9/2 , x3 = 13/4
5. x1 = 1, x2 = 0, x3 = 1
6. x1 = 0, x2 = 1/2 , x3 = 3/2 , x4 = 0 ; x1 = 3, x2 = 2, x3 = 0, x4 = 0
7. x1 = 58/9 , x2 = 26/9 , x3 = 0
8. x1 = 0, x2 = 0, x3 = 2 ; x1 = 2, x2 = 2, x3 = 0
9. x1 = 147/2 , x2 = 0, x3 = 0, x4 = 1/12 , x5 = 0
10. x1 = 0, x2 = 0, x3 = 39/10 , x4 = 9/10 , x5 = 0
and x1 = 39/4 , x2 = 0, x3 = 0, x4 = 1/4 , x5 = 0
possible to obtain a new B.F.S. by replacing one of the columns in B by α j ; and if Z ′ is the new value of the objective function, then Z ′ ≥ Z whenever c j − Z j ≥ 0.
Proof : Let the given L.P.P. be Max. Z = c x, subject to A x = b, x ≥ 0,
where A = (α 1, α 2 , ...., α N ), N = m + n,
and let B = (β1, β2 ,...., β m) be the basis matrix corresponding to the B.F.S. x B .
Since the vectors β1, β2 ,...., β m form a basis, we can express α j as a linear combination of the β's.
∴ α j = Σ (i = 1 to m) yij β i = y1 j β1 + y2 j β2 + .... + ymj β m ...(1)
or β r = (1/ yrj ) α j − Σ (i = 1 to m, i ≠ r) ( yij / yrj ) β i , provided yrj ≠ 0. ...(2)
Now, we have
B x B = b
or β1 x B1 + β2 x B2 + ... + β r x Br + .... + β m x Bm = b
or Σ (i = 1 to m, i ≠ r) β i x Bi + β r x Br = b ...(3)
Substituting the value of β r from (2) in (3), we get
Σ (i = 1 to m, i ≠ r) ( x Bi − ( yij / yrj ) x Br ) β i + ( x Br / yrj ) α j = b ...(4)
or Σ (i = 1 to m, i ≠ r) x′ Bi β i + x′ Br α j = b, ...(5)
where x′ Bi = x Bi − ( yij / yrj ) x Br , i = 1, 2,..., m, i ≠ r
and x′ Br = x Br / yrj , i = r . ...(6)
Comparing (3) and (5), we observe that a new basic solution of A x = b is given by (6). This new solution will be a feasible solution if
x Bi − ( yij / yrj ) x Br ≥ 0, i = 1, 2,..., m, i ≠ r
and x Br / yrj ≥ 0. ...(7)
Since x Br ≥ 0, r = 1, 2, ...., m, the second condition holds if yrj > 0 ; and for those i with yij ≤ 0 the first condition holds automatically. For yij > 0, dividing by yij , the first condition becomes
x Bi / yij − x Br / yrj ≥ 0
or x Br / yrj ≤ x Bi / yij ,
which is guaranteed by choosing r so that
x Br / yrj = min (i) { x Bi / yij , yij > 0 } = v . ...(8)
Thus a new B.F.S. can be obtained from the initial B.F.S. by replacing the column vector β r of the basis matrix B with α j , where r is selected such that
v = x Br / yrj = min (i) { x Bi / yij , yij > 0 }.
If v = 0, which is possible only when x Br = 0, then the initial B.F.S. is degenerate.
The value of the objective function corresponding to the initial B.F.S. x B is
Z = c B x B = Σ (i = 1 to m) cBi x Bi .
Corresponding to the new B.F.S. x′ B , let the value of the objective function be Z ′. Then
Z ′ = Σ (i = 1 to m) c′ Bi x′ Bi , ...(9)
where c′ Bi are the coefficients of the basic variables x′ Bi (i = 1, 2,...., m) in the objective function. Clearly c′ Bi = cBi for i ≠ r and c′ Br = c j .
∴ We can write
Z ′ = Σ (i = 1 to m, i ≠ r) cBi x′ Bi + c j x′ Br . ...(10)
Substituting the values of x′ Bi and x′ Br from (6), we get
Z ′ = Σ (i = 1 to m, i ≠ r) cBi ( x Bi − ( yij / yrj ) x Br ) + c j ( x Br / yrj ) .
Since the term cBi ( x Bi − ( yij / yrj ) x Br ) vanishes when i = r, it can be included in the summation.
∴ Z ′ = Σ (i = 1 to m) cBi x Bi − ( x Br / yrj ) Σ (i = 1 to m) cBi yij + ( x Br / yrj ) c j
= Z + ( x Br / yrj ) (c j − Z j ), where Z j = Σ (i = 1 to m) cBi yi j . ...(11)
From (11), we observe that the new value of the objective function is equal to the original
value of the objective function plus the quantity v (c j − Z j ).
We have Z ′ ≥ Z if v (c j − Z j ) ≥ 0.
Hence by choosing the vector α j for which c j − Z j > 0 and at least one yij > 0, we obtain a new
improved value of the objective function.
If the initial B.F.S. is non-degenerate, then v > 0 and in that case Z ′ > Z.
Note :
1. If B′ = (β′1, β′2 ,...., β′ m) is the new non-singular matrix obtained from B by replacing β r with α j , then we have β′ i = β i , i ≠ r and β′ r = α j .
2. If v = 0, then the initial B.F.S. is degenerate. From (6), we then have x′ Br = x Br / yrj = 0 for i = r .
Since the values of the variables common to the initial and new solutions do not change
therefore the new B.F.S. is also degenerate.
In 4.5, we have proved that if for any column α j in A but not in B the condition
c j − Z j > 0 holds and if at least one yi j > 0, i = 1, 2,...., m, then it is possible to find an
improved B.F.S.
Now the question is : what will be the result if for at least one α j (not in B) we have yij ≤ 0 for all i = 1, 2, ..., m ?
To get the answer to this question we shall prove the following theorem :
Theorem : Let x B be a B.F.S. of the L.P.P. Max. Z = c x subject to A x = b, x ≥ 0. If for some column α j in A but not in B we have yij ≤ 0 for all i = 1, 2, ..., m and c j − Z j > 0, then the problem has an unbounded solution.
Proof : Let x B = [ x B1, x B2 , ...., x Bm] be a B.F.S. of the given L.P.P. Then, we have
B x B = b or Σ (i = 1 to m) x Bi β i = b ...(1)
and Z = c B x B = Σ (i = 1 to m) cBi x Bi . ...(2)
Let α j ∈ A s.t. α j ∉ B. Since the vectors β1, β2 , ...., β m form a basis, we can express α j as a linear combination of the β's :
α j = Σ (i = 1 to m) yi j β i , so that λ α j = λ Σ (i = 1 to m) yij β i for any λ.
Subtracting and adding λ α j in (1), we get
Σ (i = 1 to m) x Bi β i − λ α j + λ α j = b ...(3)
or Σ (i = 1 to m) x Bi β i − λ Σ (i = 1 to m) yij β i + λ α j = b
or Σ (i = 1 to m) ( x Bi − λ yij ) β i + λ α j = b. ...(4)
When λ > 0, we have x Bi − λ yij ≥ 0 since yij ≤ 0 for i = 1, 2,...., m ; therefore (4) gives a feasible solution in which the number of positive variables is less than or equal to (m + 1). It may be less than (m + 1) because x Bi − λ yij may be zero for some i. In case the number of positive variables in this solution is equal to (m + 1), this solution will be a non-basic feasible solution.
If Z ′ is the new value of the objective function corresponding to this new solution, then
we have
Z ′ = Σ (i = 1 to m) cBi ( x Bi − λ yij ) + c j λ
= Σ (i = 1 to m) cBi x Bi + λ (c j − Σ (i = 1 to m) cBi yij ) = Z + λ (c j − Z j ).
Since yij ≤ 0 for all i, the solution remains feasible however large λ may be, and since c j − Z j > 0, Z ′ → ∞ as λ → ∞ . Hence the L.P.P. has an unbounded solution.
Note : If for some α j , c j − Z j > 0 and yij ≤ 0, i = 1, 2,...., m, then the L.P.P. has an unbounded solution.
Theorem (condition of optimality) : If x B is a B.F.S. of the given L.P.P. with value Z * of the objective function and c j − Z j ≤ 0 for every column α j of A not in B, then Z * is the optimum value of the objective function Z and x B is an optimal basic feasible solution.
Proof: Suppose x B = [ x B1, x B2 , ...., x Bm] is the B.F.S. of the given L.P.P.
We have
A = (α1, α 2 ,...., α N ), N = m + n
∴ B xB = b or x B = B −1 b. ...(1)
Z * = c B x B = Σ (i = 1 to m) cBi x Bi . ...(2)
Let x′ = [ x1′ , x2′ ,...., x′N ] be any feasible solution and Z ′ be the value of the objective
function for this solution.
Then, we have
A x′ = b ...(3)
and Z ′ = c x ′ = Σ (j = 1 to N) c j x′ j . ...(4)
From (1) and (3), x B = B −1 b = B −1 ( A x′ ) = (B −1 A) x′ = (Y1, Y2 ,...., Y N ) x′ ,
since the j-th column of B −1 A is B −1 α j = Y j . Comparing the i-th components of the two sides, we get
x Bi = Σ (j = 1 to N) yij x′ j . ...(5)
j =1
By hypothesis c j − Z j ≤ 0 for all those j for which α j is in A but not in B. We now show that c j − Z j = 0 for all j for which α j is in B.
If α j = β i , then Y j = B −1 α j = B −1 β i = e i , a vector whose i-th component is unity and all other components are zero.
∴ c j − Z j = c j − c BY j = c j − c B ei = c j − cBi = 0
∴ c j − Z j ≤ 0 for all α j in A
or cj ≤ Z j
or c j x′ j ≤ Z j x′ j [ ∵ x′ j ≥ 0]
or Σ (j = 1 to N) c j x′ j ≤ Σ (j = 1 to N) Z j x′ j
or Z ′ ≤ Σ (j = 1 to N) x′ j (c B Y j ), from (4)
= Σ (j = 1 to N) x′ j Σ (i = 1 to m) cBi yij
= Σ (i = 1 to m) cBi Σ (j = 1 to N) x′ j yij
= Σ (i = 1 to m) cBi x Bi = Z *, from (5) and (2).
∴ Z ′ ≤ Z *.
Proof: (i) We have discussed in 4.6 that if we introduce the column vector α j (in A but not in B) into the basis, where yij ≤ 0 for all i = 1, 2,...., m, then a non-basic feasible solution x′ with (m + 1) positive variables is given by (4) of 4.6, and the value of the objective function for this new F.S. is given by
Z ′ = Z + λ (c j − Z j ) .
If c j − Z j = 0, then Z ′ = Z,
i.e., the value of the objective function for this new non-basic F.S. is also equal to Z (the optimal value). Hence, this new non-basic F.S. is an alternative optimal solution of the given L.P.P.
(ii) We have shown in theorem of 4.5 that if yij > 0 for at least one i = 1, 2,...., m, then by
replacing one column β r in B by the column α j which is in A but not in B, we obtain
a new B.F.S. x′ B given by
x′ Bi = x Bi − ( yij / yrj ) x Br , i = 1, 2,...., m, i ≠ r ,
x′ Br = x Br / yrj , i = r ,
where x Br / yrj = min (i) { x Bi / yij , yij > 0 }.
The corresponding value of the objective function is
Z ′ = Z + ( x Br / yrj ) (c j − Z j ).
If c j − Z j = 0, then Z ′ = Z, i.e., the value of the objective function for this new B.F.S. is also equal to Z (the optimal value). Hence, this new B.F.S. is an alternative optimal B.F.S.
r ( A) = r ( Ab) = k ≤ n < m.
4.9.2 Inconsistency
As already defined, the set of constraints (linear equations) is said to be inconsistent if
r ( A) ≠ r ( Ab).
Before solving a L.P. problem by simplex method, we should have r ( A) = r ( A b) i.e., the
constraint equations (after introducing the slack and artificial variables) should be
consistent. Since in simplex method we always have r ( A) = r ( Ab) = m.
If the system A x = b involves artificial variables, then we cannot say whether this system
is consistent or there is any redundancy. Below we give the cases (without proof) to
decide about the consistency and redundancy in such systems.
Case 1 : If the basis B contains no artificial vector and the optimality condition is
satisfied (at any iteration), then the current solution is a B.F.S. of the problem.
Case 2 : If one or more artificial vector appears in the basis B at a zero level i.e., the value
of the artificial variables corresponding to artificial vectors in B are zero and the
optimality condition is satisfied (at any iteration), then the system is consistent.
Furthermore, if yrj = 0 for all j and x Br = 0, where r corresponds to the row containing an artificial vector, then the r-th constraint equation is redundant.
Case 3 : If at least one artificial vector appears in the basis B at a positive level i.e., the
value of at least one artificial variable corresponding to artificial vector in B is non-zero
and the optimality condition is satisfied (at any iteration), then there exists no feasible
solution of the problem.
In the constraints of a general linear programming problem, there may be any of the three
signs ≤, =, ≥ . Let us assume that the requirement vector b ≥ 0 (if any of the bi's is negative,
multiply the corresponding constraint by –1).
Case 1 : To find initial B.F.S. when all the original constraints have ≤ sign.
To convert all the constraints into equations we insert slack variables only. The
equations obtained are as follows :
( A, I m) [ x1, x2 ,...., x n , x n+1,...., x n+ m] = [b1, b2 ,...., bm],
i.e., ai1 x1 + ai2 x2 + .... + ain x n + x n+ i = bi , i = 1, 2, ...., m.
The coefficients of the slack variables x n+1, x n+ 2 , ...., x n+ m form the unit matrix I m , so taking B = I m the initial B.F.S. is
x B = B −1 b = I m b = b ≥ 0 ,
which can be obtained by putting all the non-basic variables (i.e., the given variables) x1, x2 ,...., x n equal to zero and solving the equations for the remaining variables (i.e., the slack variables) x n+1, x n+ 2 , ...., x n+ m .
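In code, the Case 1 set-up amounts to appending an identity matrix for the slack variables and reading the initial B.F.S. straight from b. The sketch below (Python with NumPy, illustrative only) uses, as assumed data, the constraints 2 x1 + 3 x2 ≤ 60 and 4 x1 + 3 x2 ≤ 96 of the first worked example given later in this chapter:

    import numpy as np

    A = np.array([[2.0, 3.0],
                  [4.0, 3.0]])
    b = np.array([60.0, 96.0])

    m, n = A.shape
    A_std = np.hstack([A, np.eye(m)])   # coefficients of (x1, x2, x3, x4) after adding slacks
    basis = list(range(n, n + m))       # initial basis: the slack columns, so B = I_m
    x_B = b.copy()                      # x_B = B^-1 b = b >= 0 is the initial B.F.S.
    print(A_std, basis, x_B)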
Case 2 : To find initial B.F.S. when all the original constraints have ≥ sign.
First we convert all the constraints into equations by inserting surplus variables. The
equations obtained are as follows :
( A, − I m) [ x1, x2 ,...., x n , x n+1,...., x n+ m] = [b1, b2 ,...., bm].
Here the coefficients of the surplus variables x n+1,...., x n+ m form the matrix − I m , which cannot serve as a basis matrix giving a non-negative solution (putting the given variables equal to zero gives x n+ i = − bi ≤ 0).
To avoid this difficulty we add one more variable to each constraint. These variables are
called “artificial variables”.
Adding surplus and artificial variables, the constraints of the given L.P.P. change to the
following equations :
( A, − I m , I m) [ x1,...., x n , x n+1,...., x n+ m , x n+ m+1,...., x n+ m+ m] = [b1, b2 ,...., bm],
where x n+ m+1,...., x n+ m+ m are the artificial variables.
The coefficients of the artificial variables form the unit matrix I m , so taking B = I m we again get the initial B.F.S.
∴ x B = B −1 b = I m b = b ≥ 0 ,
given by the artificial variables.
Case 3 : To find initial B.F.S. when the constraints have ‘≤’, ‘≥’ and ‘=’ signs.
In this case, we convert the constraints into equations by inserting slack, surplus and
artificial variables. Here the basis matrix B = I m is obtained by introducing unit column
vectors corresponding to the slack and artificial variables.
To obtain the initial B.F.S. we put all the non-basic variables equal to zero and solve for
remaining basic variables. Here also the initial B.F.S. x B = b ≥ 0 .
Note :
1. It is important to note that in all the three cases the initial B.F.S. consists of the
constants b1, b2 ,...., bm, (note that b1, b2 ,...., bm are all non-negative).
2. Sometimes an identity matrix is already present in A without introducing artificial variables. In such cases there is no need to introduce the artificial variables.
The R.H.S. of each of the constraints should be non-negative. If the L.P.P. has a
constraint for which a negative bi is given, it should be converted into positive value by
multiplying both sides of the constraints by –1.
It should be remembered that the values of non-basic variables are always zero at each
iteration.
Note that in the starting simplex table ∆ j 's are same as c j 's. Also ∆ j 's corresponding to
basic variables are zero.
Optimality Test :
(i) If ∆ j ≤ 0 for all j, the solution under consideration is optimal.
(a) Alternative optimal solution will exist if any ∆ j (for non-basic variable) is also zero.
[Meerut 2006 (BP)]
Initial Simplex Table

B         cB         xB          Y1          Y2          ...   Yk          ...   Yn          Yn+1 ... Yn+m
Y n+1     cB1 = 0    x B1 = b1   y11 = a11   y12 = a12   ...   y1k = a1k   ...   y1n = a1n   1  0 ... 0
...       ...        ...         ...         ...         ...   ...         ...   ...         ...
Y n+ r    cBr = 0    x Br = br   yr1 = ar1   yr2 = ar2   ...   yrk = ark   ...   yrn = arn   0 ... 1 ... 0   →
...       ...        ...         ...         ...         ...   ...         ...   ...         ...
Y n+ m    cBm = 0    x Bm = bm   ym1 = am1   ym2 = am2   ...   ymk = amk   ...   ymn = amn   0  0 ... 1
Z = c B x B = 0      ∆j          ∆1          ∆2          ...   ∆ k ↑       ...   ∆ n         ∆ n+1 ... ∆ n+ m
(b) If ∆ j < 0 for all j, corresponding to non-basic variables, then the solution is
unique optimal solution.
(ii) If ∆ j > 0 for any j i.e., if at least one ∆ j is positive the solution under test is not
optimal. In this case, we must proceed to improve the solution (step 7).
(iii) If corresponding to maximum positive ∆ j , all elements of the column Y j are negative
or zero, the solution under test will be unbounded.
(iv) If optimality condition is satisfied but the value of at least one artificial variable
present in the basis is non-zero, the problem will have no feasible solution.
Step 7 : Select the entering (incoming) vector and departing (outgoing) vector.
To improve the initial B.F.S. we select the vector entering the basis matrix and the vector to be removed from the basis matrix by the following rules : the vector Y k corresponding to the maximum positive ∆ k is taken as the entering vector, and the vector β r for which x Br / yrk = min { x Bi / yik , yik > 0 } is taken as the outgoing vector.
Note : If the above mentioned minimum value is not unique, then more than one
variable will vanish in the next solution. As a result the next solution will be a degenerate
B.F.S. for which the outgoing vector is selected in a different way.
Step 8 : If α k is the entering vector and β r is the outgoing vector, then the element yrk which lies at the intersection of the minimum-ratio arrow (→) and the incoming-vector arrow (↑) is called the pivot element (key element).
In order to bring the incoming vector Y k into the basis in place of β r , unity must occupy the position of the key element. If it is not so, then divide all the elements of this row by the key element yrk . Then subtract suitable multiples of the row containing the key element from all the other rows to obtain zeros at all the other positions of the column Y k . Now bring α k in place of β r and construct the next simplex table.
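Steps 1 to 8 above can be collected into a compact program. The following sketch (Python with NumPy) is only an illustration, not the book's procedure verbatim: it assumes the problem is already in the Case 1 form (all constraints of ≤ type with b ≥ 0) and that the problem is non-degenerate, as is assumed throughout this chapter.

    import numpy as np

    def simplex(c, A, b, tol=1e-9):
        """Maximize c.x subject to A x <= b, x >= 0, all b_i >= 0 (non-degenerate case)."""
        m, n = A.shape
        T = np.hstack([A.astype(float), np.eye(m), b.reshape(-1, 1).astype(float)])
        cost = np.concatenate([c.astype(float), np.zeros(m)])
        basis = list(range(n, n + m))              # start from the slack basis, B = I_m
        while True:
            c_B = cost[basis]
            delta = cost - c_B @ T[:, :-1]         # Delta_j = c_j - Z_j for every column
            k = int(np.argmax(delta))
            if delta[k] <= tol:                    # optimality test: all Delta_j <= 0
                break
            col = T[:, k]
            if np.all(col <= tol):                 # no positive y_ik: unbounded problem
                raise ValueError("unbounded solution")
            ratios = np.where(col > tol, T[:, -1] / np.where(col > tol, col, 1.0), np.inf)
            r = int(np.argmin(ratios))             # minimum-ratio rule picks the leaving row
            T[r] /= T[r, k]                        # make the key element unity
            for i in range(m):                     # clear the rest of column k
                if i != r:
                    T[i] -= T[i, k] * T[r]
            basis[r] = k
        x = np.zeros(n + m)
        x[basis] = T[:, -1]
        return x[:n], cost @ x

Called with c = [40, 35], A = [[2, 3], [4, 3]], b = [60, 96], it reproduces the result of Example 1 below: x1 = 18, x2 = 8, Max. Z = 1000.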
Example 1: Maximize Z = 40 x1 + 35 x2
subject to 2 x1 + 3 x2 ≤ 60
4 x1 + 3 x2 ≤ 96, x1, x2 ≥ 0 .
For the clear understanding of the simplex method the solution to this problem is
illustrated below in a step-wise manner.
Solution: Step 1: The given problem is of maximization and all the bi's are already
positive.
Step 2 : Introducing the slack variables x3 and x4 , the constraints become 2 x1 + 3 x2 + x3 = 60 and 4 x1 + 3 x2 + x4 = 96
The coefficients of slack variables are zero in the objective function. Therefore, the given
problem becomes
Maximize Z = 40 x1 + 35 x2 + 0 x3 + 0 x4
subject to 2 x1 + 3 x2 + x3 + 0 x4 = 60
4 x1 + 3 x2 + 0 x3 + x4 = 96
x1, x2 , x3 , x4 ≥ 0.
Step 3 : The simplex method begins with an initial basic feasible solution in which the values of the non-basic variables (here the decision variables x1 and x2 ) are zero, i.e., x3 = 60, x4 = 96.
                 cj     40    35    0     0
B      cB        xB     Y1    Y2    Y3    Y4      Min. ratio x B / Y1
Y3     0         60     2     3     1     0       60/2 = 30
Y4     0         96     4     3     0     1       96/4 = 24 (min) →
Z = c B x B = 0  ∆j     40    35    0     0
                        ↑
∆ j = c j − c BY j
∆1 = c1 − c B Y1 = 40 − (0, 0) (2, 4) = 40
Since all ∆ j 's are not less than or equal to zero, the solution is not optimal, so we proceed to the next step to improve it. As ∆1 = 40 is the maximum positive ∆ j , Y1 is the incoming (entering) vector.
Now x B / Y1 = ( x B1 / y11 , x B2 / y21) = (60/2, 96/4) = (30, 24)
and x Br / yr1 = min { x Bi / yi1 , yi1 > 0 } = min ( x B1 / y11 , x B2 / y21) = 24 = x B2 / y21 .
Hence β2 (= Y4 ) is the outgoing vector and y21 = 4 is the key element.
Dividing the second row, which contains the key element y21 = 4, by 4 to make it unity.
Now we shall subtract appropriate multiples of this new row from the other rows so as to obtain zeros in the remaining positions of Y1 : subtract 2 times the new second row from the first row, and 40 times it from the row containing the ∆ j 's, to obtain the new values of the ∆ j 's.
Constructing the second simplex table in which β2 (Y4 ) is replaced by α1(= Y 1).
Second Simplex Table
                   cj     40    35     0      0
B      cB          xB     Y1    Y2     Y3     Y4       Min. ratio x B / Y2
Y3     0           12     0     3/2    1      −1/2     12/(3/2) = 8 (min) →
Y1     40          24     1     3/4    0      1/4      24/(3/4) = 32
Z = c B x B = 960  ∆j     0     5      0      −10
                                ↑
Computation of ∆ j :
∆2 = c2 − c B Y2 = 35 − (0, 40) (3/2 , 3/4) = 5
∆4 = c4 − c B Y4 = 0 − (0, 40) (−1/2 , 1/4) = −10
Since all ∆ j 's are not less than or equal to zero, the solution obtained is not optimal. As ∆2 = 5 is the only positive ∆ j , Y2 is the incoming vector.
Now x Br / yr2 = min { x Bi / yi2 , yi2 > 0 } = min ( x B1 / y12 , x B2 / y22 ) = min (8, 32) = 8 = x B1 / y12 ,
so β1 (= Y3 ) is the outgoing vector and y12 = 3/2 is the key element.
Dividing the first row by 3/2 to make the key element unity.
Subtracting 3/4 times of the first row thus obtained from the second row and subtracting
5 times of it from the row containing ∆ j 's to obtain new values of ∆ j 's.
Third Simplex Table
                    cj     40    35     0       0
B      cB           xB     Y1    Y2     Y3      Y4
Y2     35           8      0     1      2/3     −1/3
Y1     40           18     1     0      −1/2    1/2
Z = c B x B = 1000  ∆j     0     0      −10/3   −25/3
Since all ∆ j ≤ 0, this solution is optimal. Hence the optimal solution is x1 = 18, x2 = 8 and Max. Z = 1000.
The above solution with its different computational steps can be more conveniently
represented by a single table as shown below :
                    cj     40    35     0       0
B      cB           xB     Y1    Y2     Y3      Y4        Min. ratio
Y3     0            60     2     3      1       0         60/2 = 30
Y4     0            96     4     3      0       1         96/4 = 24 (min) →
Z = c B x B = 0     ∆j     40    35     0       0
                           ↑
Y3     0            12     0     3/2    1       −1/2      12/(3/2) = 8 (min) →
Y1     40           24     1     3/4    0       1/4       24/(3/4) = 32
Z = c B x B = 960   ∆j     0     5      0       −10
                                 ↑
Y2     35           8      0     1      2/3     −1/3
Y1     40           18     1     0      −1/2    1/2
Z = c B x B = 1000  ∆j     0     0      −10/3   −25/3
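The result can also be cross-checked with a standard solver. The snippet below assumes SciPy is available; since scipy.optimize.linprog minimizes, the objective is negated:

    from scipy.optimize import linprog

    res = linprog(c=[-40, -35],                 # linprog minimizes, so negate to maximize
                  A_ub=[[2, 3], [4, 3]],
                  b_ub=[60, 96],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    print(res.x, -res.fun)                      # expected: x1 = 18, x2 = 8, Max Z = 1000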
Example 2: Maximize Z = 3 x1 + 5 x2 + 4 x3
subject to 2 x1 + 3 x2 ≤ 8, 2 x2 + 5 x3 ≤ 10, 3 x1 + 2 x2 + 4 x3 ≤ 15, x1, x2 , x3 ≥ 0 .
Solution: The given problem is of maximization and all the bi's are positive.
Max Z = 3 x1 + 5 x2 + 4 x3 + 0 x4 + 0 x5 + 0 x6
subject to 2 x1 + 3 x2 + 0 x3 + x4 =8
0 x1 + 2 x2 + 5 x3 + x5 = 10
3 x1 + 2 x2 + 4 x3 + x6 = 15
x1, x2 ,..., x6 ≥ 0.
The solution to the problem using simplex method is given in the following table :
cj 3 5 4 0 0 0 Min. ratio
B cB x B Y2 ,
xB Y1 Y2 Y3 Y4 Y5 Y6
y i2 > 0
Y4 0 8 2 3 0 1 0 0 8 3 ( min)
→
Y5 0 10 0 2 5 0 1 0
10 2 = 5
Y6 0 15 3 2 4 0 0 1
15 2
Z = c Bx B = 0 ∆j 3 5 4 0 0 0 x B Y3 ,
↑ ↓ y i3 > 0
Y2 5 83 23 1 0 13 0 0
Y5 0 14 3 −4 3 0 5 −2 3 1 0 14 15 ( min)
→
Y6 0 29 3 53 0 4 −2 3 0 1
29 12
Z = c Bx B ∆j −1 3 0 4 −5 3 0 0 x B / Y 1,
= 40 3 ↑ ↓ yi1 > 0
Y2 5 83 23 1 0 13 0 0 4
Y3 4 14 15 −4 15 0 1 −2 15 15 0
Y6 0 89 15 41 15 0 0 −2 15 −4 5 1 89 41(min)
→
Z = c Bx B ∆j 11 15 0 0 −17 15 −4 5 0
= 256 15 ↑ ↓
Y2 5 50 41 0 1 0 15 41 8 41 −10 41
Y3 4 62 41 0 0 1 −6 41 5 41 4 41
Y1 3 89 41 1 0 0 −2 41 −12 41 15 41
∴ Optimal solution is x1 = 89/41 , x2 = 50/41 , x3 = 62/41 and Max. Z = 765/41 .
Example 3: Maximize Z = 3 x1 + 2 x2 + 5 x3
subject to x1 + 2 x2 + x3 ≤ 430
3 x1 + 2 x3 ≤ 460
x1 + 4 x2 ≤ 420 , x1, x2 , x3 ≥ 0 .
Solution: The given problem is of maximization and all the bi's are positive.
Max. Z = 3 x1 + 2 x2 + 5 x3 + 0 x4 + 0 x5 + 0 x6
subject to x1 + 2 x2 + x3 + x4 = 430
3 x1 + 0 x2 + 2 x3 + x5 = 460
x1 + 4 x2 + 0 x3 + x6 = 420
x1, x2 ,..., x6 ≥ 0.
The solution to the problem using simplex algorithm is given in the following table :
B cB cj 3 2 5 0 0 0 Min. ratio
x B Y3 ,
xB Y1 Y2 Y3 Y4 Y5 Y6
y i3 > 0
Y4 0 430 1 2 1 1 0 0 430
Y5 0 460 3 0 2 0 1 0 230 (min)
→
Y6 0 420 1 4 0 0 0 1
Z = c Bx B = 0 ∆j 3 2 5 0 0 0 x B Y2 ,
↑ ↓ y i2 > 0
Z = 1150 ∆j −9 2 2 0 0 −5 2 0
↑ ↓
Y2 2 100 −1 4 1 0 12 −1 4 0
Y3 5 230 32 0 1 0 12 0
Y6 0 20 2 0 0 −2 1 1
Z = 1350     ∆j :   −4   0   0   −1   −2   0
Since all ∆ j ≤ 0, the optimal solution is x1 = 0, x2 = 100, x3 = 230 and Max. Z = 1350.
Min. Z = x2 − 3 x3 + 2 x5
subject to 3 x2 − x3 + 2 x5 ≤ 7
−2 x2 + 4 x3 ≤ 12
−4 x2 + 3 x3 + 8 x5 ≤ 10 , x2 , x3 , x5 ≥ 0 .
Solution: Converting the given minimization problem into a maximization problem, we have
Max. Z ′ = − Z = − x2 + 3 x3 − 2 x5 .
Introducing the slack variables x1, x4 , x6 , the problem becomes
Max. Z ′ = − Z = 0 . x1 − x2 + 3 x3 + 0 . x4 − 2 x5 + 0 . x6
subject to x1 + 3 x2 − x3 + 0 x4 + 2 x5 + 0 x6 = 7
0 x1 − 2 x2 + 4 x3 + x4 + 0 x5 + 0 x6 = 12
0 x1 − 4 x2 + 3 x3 + 0 x4 + 8 x5 + x6 = 10,
x1, x2 ,..., x6 ≥ 0.
B cB cj 0 –1 3 0 −2 0 Min. ratio
xB Y1 Y2 Y3 Y4 Y5 Y6 x B Y3 , yi3 > 0
Y1 0 7 1 3 −1 0 2 0
Y4 0 12 0 −2 4 1 0 0 3 (min)
→
Y6 0 10 0 −4 3 0 8 1
10 3
Z′ = c Bx B = 0 ∆j 0 −1 3 0 −2 0 x B Y2 , yi2 > 0
↑ ↓
Y1 0 10 1 52 0 14 2 0 4 (min)
→
Y3 3 3 0 −1 2 1 14 0 0
Y6 0 1 0 −5 2 0 −3 4 8 1
Z′ = 9 ∆j 0 12 0 −3 4 −2 0
↓ ↑
Y2 −1 4 25 1 0 1 10 45 0
Y3 3 5 15 0 1 3 10 25 0
Y6 0 11 1 0 0 −1 2 10 1
Z ′ = 11 ∆j −1 5 0 0 −4 5 −12 5 0
Since all ∆ j 's ≤ 0, the solution given by the last table is optimal : x2 = 4, x3 = 5, x5 = 0, Max. Z ′ = 11, i.e., Min. Z = −11.
Min. Z = x1 − 3 x2 + 2 x3
subject to 3 x1 − x2 + 2 x3 ≤ 7
−2 x1 + 4 x2 ≤ 12
−4 x1 + 3 x2 + 8 x3 ≤ 10, x1, x2 , x3 ≥ 0
Maximize Z = 2 x1 + 5 x2 + 7 x3
subject to 3 x1 + 2 x2 + 4 x3 ≤ 100
x1 + 4 x2 + 2 x3 ≤ 100
x1 + x2 + 3 x3 ≤ 100 ,
Solution: The given problem is of maximization and all bi's are positive.
Z = 2 x1 + 5 x2 + 7 x3 + 0 x4 + 0 x5 + 0 x6
subject to 3 x1 + 2 x2 + 4 x3 + x4 = 100
x1 + 4 x2 + 2 x3 + x5 = 100
x1 + x2 + 3 x3 + x6 = 100.
B cB cj 2 5 7 0 0 0 Min. ratio
xB Y1 Y2 Y3 Y4 Y5 Y6 x B Y3 ,
yi3 > 0
Y4 0 100 3 2 4 1 0 0 25 (min) →
Y5 0 100 1 4 2 0 1 0 50
Y6 0 100 1 1 3 0 0 1 100 3
Z = c Bx B = 0 ∆j 2 5 7 0 0 0 x B Y2 ,
↑ ↓ yi2 > 0
Y3 7 25 34 12 1 14 0 0 50
Y5 0 50 −1 2 3 0 −1 2 1 0 50 3 (min) →
Y6 0 25 −5 4 −1 2 0 −3 4 0 1 Neg.
Z = c B x B = 175 ∆j −13 4 32 0 −7 4 0 0
↑ ↓
Y3 7 50 3 56 0 1 13 −1 6 0
Y2 5 50 3 −1 6 1 0 −1 6 13 0
Y6 0 100 3 −4 3 0 0 −5 6 16 1
Z = c B x B = 200     ∆j :   −3   0   0   −3/2   −1/2   0
Since all ∆ j ≤ 0, the optimal solution is x1 = 0, x2 = 50/3 , x3 = 50/3 and Max. Z = 200.
The artificial variables are introduced for the limited purpose of obtaining an initial
solution. It is not relevant whether the objective function is of the maximization or
minimization type. Since artificial variables do not represent any quantity relating to the
decision problem they must be driven out of the system and must not be present in the
final solution (if at all they do, it represents a situation of infeasibility which is discussed
later in this chapter).
Due to this large negative price − M in the objective function, it cannot be improved in the presence of the artificial variables. So first we shall have to remove these artificial vectors from the basis matrix. Once the artificial column vectors corresponding to the artificial variables leave the basis, we forget about them forever and never consider them as candidates to re-enter the basis at any iteration.
Note : A solution to the problem which does not contain an artificial variable in the
basis, represents a feasible solution to the problem.
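For machine computation the Big-M device is simply a matter of building the enlarged cost vector and constraint matrix. The sketch below (Python with NumPy, illustrative only; a concrete numerical value is assumed for M, whereas in hand computation M is kept as an arbitrarily large symbol) shows the set-up for the first Big-M example that follows:

    import numpy as np

    M = 1e6                       # "big M": a cost large enough to drive the artificials out
    # Variables: x1, x2, slack x3, surplus x4, artificials a1, a2
    c = np.array([2.0, 4.0, 0.0, 0.0, -M, -M])          # objective of the enlarged problem
    A = np.array([[2.0, 1.0, 1.0,  0.0, 0.0, 0.0],      # 2x1 +  x2 + x3            = 18
                  [3.0, 2.0, 0.0, -1.0, 1.0, 0.0],      # 3x1 + 2x2      - x4 + a1  = 30
                  [1.0, 2.0, 0.0,  0.0, 0.0, 1.0]])     #  x1 + 2x2           + a2  = 26
    b = np.array([18.0, 30.0, 26.0])
    basis = [2, 4, 5]             # initial basis: the slack x3 and the two artificials

The simplex iterations are then carried out on these data exactly as described earlier in this chapter.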
Example 1: Max. Z = 2 x1 + 4 x2
subject to 2 x1 + x2 ≤ 18
3 x1 + 2 x2 ≥ 30
x1 + 2 x2 = 26
Introducing the necessary slack variable x3 , surplus variable x4 and artificial variables x a1 , x a2 , and assigning the large negative cost − M to the artificial variables, the problem reduces to the form
Max. Z = 2 x1 + 4 x2 + 0 . x3 + 0 . x4 − M x a1 − M x a2
subject to 2 x1 + x2 + x3 = 18
3 x1 + 2 x2 − x4 + x a1 = 30
x1 + 2 x2 + x a2 = 26.
The solution to the problem using simplex method is given in the following table :
B cB cj 2 4 0 0 −M −M Min. ratio
x B Y2 ,
xB Y1 Y2 Y3 Y4 A1 A2
yi2 > 0
Y3 0 18 2 1 1 0 0 0 18
A1 −M 30 3 2 0 −1 1 0 15
A2 −M 26 1 2 0 0 0 1 13 (min)
→
Z = c B x B = −56 M ∆j 2 + 4M 4 + 4M 0 −M 0 0 x B Y1,
↑ ↓
y i1 > 0
Y3 0 5 32 0 1 0 0 10 3
A1 −M 4 2 0 0 −1 1 2 (min)
→
Y2 4 13 12 1 0 0 0
26
Z = 52 − 4 M ∆j 2M 0 0 −M 0
↑ ↓
Y3 0 2 0 0 1 34
Y1 2 2 1 0 0 −1 2
Y2 4 12 0 1 0 14
Z = 52 ∆j 0 0 0 0
Computation of ∆ j
For the first table
∆1 = c1 − c BY1 = 2 − (0, − M, − M) (2, 3, 1) = 2 + 4 M
∆2 = 0 = ∆3 = ∆5 , ∆4 = c4 − c B Y4 = 0 − (0, − M, 4) (0, − 1, 0) = − M
∆1 = 0 = ∆2 = ∆3 , ∆4 = c4 − c B Y4 = 0 − (0, 2, 4) (3 / 4, − 1 / 2, 1 / 4) = 0.
In the last table all ∆ j ≤ 0 and no artificial variable appears in the basis, therefore this solution is optimal. Hence the optimal solution is x1 = 2, x2 = 12 and Max. Z = 52.
Example 2: Maximize Z = x1 + 2 x2 + 3 x3 − x4
subject to x1 + 2 x2 + 3 x3 = 15
2 x1 + x2 + 5 x3 = 20
x1 + 2 x2 + x3 + x4 = 10 ,
Solution: The given problem is of maximization and all bi's are positive. Also the
constraints are equations. Examining the constraints we observe that in order to obtain a
unit matrix of order 3 we need two more unit vectors as one unit vector is formed by the
coefficients of x4 . Therefore, introducing two artificial variables x a1 and x a2 in the first two constraints and assigning the large negative cost − M to the artificial variables, the given problem becomes
Max. Z = x1 + 2 x2 + 3 x3 − x4 − M x a1 − M x a2
subject to x1 + 2 x2 + 3 x3 + x a1 = 15
2 x1 + x2 + 5 x3 + x a2 = 20
x1 + 2 x2 + x3 + x4 = 10.
The solution to the problem using simplex method is given in the following table :
cj 1 2 3 −1 −M −M Min. ratio
B cB xB Y1 Y2 Y3 Y4 A1 A2 x B Y3 ,
yi3 > 0
A1 −M 15 1 2 3 0 1 0 5
A2 −M 20 2 1 5 0 0 1 4 (min)
→
Y4 −1 10 1 2 1 1 0 0
10
Z = c Bx B ∆j 3M + 2 3M + 4 8M + 4 0 0 0 x B Y2 ,
= −35 M − 10 ↑ ↓ yi2 > 0
A1 −M 3 −1 5 75 0 0 1 15 7 (min)
→
Y3 3 4 25 15 1 0 0
20
Y4 −1 6 35 95 0 1 0
10 3
Z = c Bx B ∆j (− M + 2) (7 M + 16) 0 0 0 x B Y1,
= −3 M + 6 5 5 ↓ yi1 > 0
↑
Y2 2 15 7 −1 7 1 0 0 Neg.
Y3 3 25 7 37 0 1 0 25 3
Y4 −1 15 7 67 0 0 1 5 2 (min)
→
Z = c Bx B ∆j 67 0 0 0
= 90 7 ↑ ↓
Y2 2 52 0 1 0 16
Y3 3 52 0 0 1 −1 2
Y1 1 52 1 0 0 76
Z = c Bx B ∆j 0 0 0 −1
= 15
Computation of ∆ j . By ∆ j = c j − c B Y j
∆1 = c1 − c B Y1 = 1 − (− M, − M, − 1) (1, 2, 1) = 3 M + 2,
∆2 = c2 − c B Y2 = 2 − (− M, − M, − 1) (2, 1, 2) = 3 M + 4
∆3 = c3 − c B Y3 = 3 − (− M, − M, − 1) (3, 5, 1) = 8 M + 4
∆4 = 0 = ∆5 = ∆6
∆3 = 0 = ∆4 = ∆5
∆1 = 1 − (2, 3, − 1) (−1/7 , 3/7 , 6/7) = 6/7 , ∆2 = 0 = ∆3 = ∆4
Hence the optimal solution is x1 = x2 = x3 = 5/2 , x4 = 0 and Max. Z = 15.
Infeasible Solution
Example 3: Apply Big-M method to solve the L.P.P.
Max. Z = − x1 − x2
subject to 3 x1 + 2 x2 ≥ 30
−2 x1 + 3 x2 ≤ −30
x1 + x2 ≤ 5,
Since bi in second constraint is negative therefore multiplying both sides by −1, we get
2 x1 − 3 x2 ≥ 30.
Max. Z = − x1 − x2 + 0 x3 + 0 x4 + 0 x5 − M x a1 − M x a2
subject to 3 x1 + 2 x2 − x3 + x a1 = 30
2 x1 − 3 x2 − x4 + x a2 = 30
x1 + x2 + x5 = 5
where x1, x2 , x3 , x4 , x5 , x a1 , x a2 ≥ 0.
B cB cj −1 −1 0 0 0 −M −M Min. ratio
,x B Y1
xB Y1 Y2 Y3 Y4 Y5 A1 A2
yi1 > 0
A1 −M 30 3 2 −1 0 0 1 0 10
A2 −M 30 2 −3 0 −1 0 0 1 15
Y5 0 5 1 1 0 0 1 0 0 5 (min)
→
Z = c Bx B ∆j 5 M −1 −M −M −M 0 0 0
= −60 M ↑ ↓
A1 −M 15 0 −1 −1 0 −3 1 0
A2 −M 20 0 −5 0 −1 −2 0 1
Y1 −1 5 1 1 0 0 1 0 0
Z = −35 M − 5 ∆j 0 −6 M −M −M −5 M 0 0
Since all ∆ j ≤ 0, the solution obtained is optimal. But the artificial column vectors A1, A2 (corresponding to the artificial variables x a1 , x a2 ) appear in the basis at positive levels, which implies that the given L.P.P. has no feasible solution.
As an alternative to the Big-M method, there is another method for dealing with linear
programming problems involving artificial variables. This is called the two phase
method and as its name suggests it separates the solution procedure into two phases. In
phase I, all the artificial variables are eliminated from the basis.
In phase II, we use the solution from phase I as the initial basic feasible solution and use
the simplex method to determine the optimal solution.
Step 2 : Assign zero coefficient to each of the primary, slack and surplus variables and the coefficient (–1) to each of the artificial variables in the objective function. As a result the new (auxiliary) objective function is Max. Z ′ = − (sum of the artificial variables).
Step 3 : Solve the problem formed in step 2 by applying the simplex method. If the
original problem has a feasible solution, then this problem shall have an optimal solution
with optimal value of the objective function Z ′ equal to zero as each of the artificial
variables will be equal to zero.
If max Z ′ < 0 and at least one artificial variable appears in the optimum basis at a positive
level, then the given problem does not possess any feasible solution.
If max Z ′ = 0 and at least one artificial variable appears in the optimum basis at zero level,
we proceed to phase II.
If max Z ′ = 0 and no artificial variable appears in the optimum basis then also we proceed
to phase II.
Phase II : In phase II start with the optimal solution contained in the final simplex table
of the phase I. Assign the actual costs to the variables in the objective function and a zero
cost to every slack and surplus variable. Eliminate the artificial variables which are
non-basic at the end of the phase-I. Remove c j row values of the optimum table and
replace them by c j values of the original problem. Now apply simplex algorithm to the
problem contained in the new table to obtain the optimal solution.
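The two phases differ only in the cost vector used. The following sketch (Python with NumPy, an illustration of the idea rather than a full implementation) shows the two objective functions for Example 1 below; the simplex iterations themselves would be carried out exactly as in the earlier sketch of the simplex algorithm, but starting from the artificial basis:

    import numpy as np

    # Constraints (with surplus x3, x4 and artificials a1, a2), as in Example 1 below:
    A = np.array([[20.0, 50.0, -1.0,  0.0, 1.0, 0.0],
                  [80.0, 50.0,  0.0, -1.0, 0.0, 1.0]])
    b = np.array([4800.0, 7200.0])

    c_phase1 = np.array([0, 0, 0, 0, -1, -1], dtype=float)   # maximize Z' = -a1 - a2
    c_phase2 = np.array([-40, -24, 0, 0], dtype=float)       # actual costs (Min Z as Max -Z)
    # Phase I: run the simplex on (c_phase1, A, b) starting from the artificial basis.
    # If Max Z' = 0, drop the artificial columns and rerun the simplex with c_phase2,
    # starting from the basis found in phase I; otherwise the problem is infeasible.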
Example 1: Solve the following problem by using the two phase method :
Min. Z = 40 x1 + 24 x2
subject to 20 x1 + 50 x2 ≥ 4800
80 x1 + 50 x2 ≥ 7200 , x1, x2 ≥ 0 .
Solution: Introducing the surplus variables x3 , x4 and the artificial variables x a1 , x a2 , the constraints become
20 x1 + 50 x2 − x3 + x a1 = 4800
80 x1 + 50 x2 − x4 + x a2 = 7200,
x1, x2 , x3 , x4 , x a1 , x a2 ≥ 0.
Phase I : Assigning cost −1 to artificial variables, cost 0 to all other variables, the new
objective function of auxiliary problem becomes
Max. Z ′′ = 0 x1 + 0 x2 + 0 x3 + 0 x4 − x a1 − x a2 .
Now applying the simplex method in the usual manner, we have the following table :
B cB cj 0 0 0 0 −1 −1 Min. ratio
A1 −1 3000 0 75 2 −1 14 1 80 (min)
→
Y1 0 90 1 58 0 −1 80 0 144
Z ′′ = −2910 ∆j 0 75 2 0 14 0
↑ ↓
Y2 0 80 0 1 −2 75 1 50
Y1 0 40 1 0 1 60 −1 60
Z ′′ = 0 ∆j 0 0 0 0
Since all ∆ j ≤ 0 and no artificial variable appears in the basis therefore an optimum
solution to the auxiliary problem has been attained.
Phase II : Now assigning the actual costs to the original variables and cost zero to the
surplus variables, the objective function becomes
Max. Z ′ = −40 x1 − 24 x2 + 0 x3 + 0 x4 .
Replace the c j row values in the final simplex table of phase I by the c j values of the
original objective function.
Also delete the artificial variables from the final simplex table of phase I, we write the first
simplex table to phase II. Now solution of the problem applying simplex method in the
usual manner is given in the following table :
Y2 −24 80 0 1 −2 75 1 50 Neg.
Y1 −40 40 1 0 1 60 −1 60 2400 (min)
→
Z ′ = c B x B = −3520 ∆j 0 0 2 75 −38 75
↓ ↑
Y2 −24 144 85 1 0 −1 50
Y3 0 2400 60 0 1 −1
Z ′ = −3456     ∆j :   −8/5   0   0   −12/25
Since all ∆ j ≤ 0, the optimal solution of the original problem is x1 = 0, x2 = 144 with Min. Z = − Z ′ = 3456.
Infeasible Solution
Example 2: Solve the following L.P.P. by using the two phase method :
Min. Z = x1 − 2 x2 − 3 x3
subject to −2 x1 + x2 + 3 x3 = 2
2 x1 + 3 x2 + 4 x3 = 1
Max. Z ′ = − x1 + 2 x2 + 3 x3 .
Introducing the artificial variables x a1 , x a2 , the constraints become
−2 x1 + x2 + 3 x3 + x a1 = 2
2 x1 + 3 x2 + 4 x3 + x a2 = 1
where x1, x2 , x3 , x a1 , x a2 ≥ 0.
Phase I : Assigning costs −1 to the artificial variables and costs 0 to all other variables, the new objective function of the auxiliary problem is
Max. Z ′ = 0 x1 + 0 x2 + 0 x3 − x a1 − x a2 ,
Now apply simplex method in the usual manner to remove artificial variables.
B cB cj 0 0 0 −1 −1 Min. ratio
xB Y1 Y2 Y3 A1 A2 x B Y3 , yi3 > 0
A1 −1 2 −2 1 3 1 0 23
A2 −1 1 2 3 4 0 1 1 4 (min)
→
Z ′ = −3 ∆j 0 4 7 0 0
↑ ↓
A1 −1 54 −7 2 −5 4 0 1 −3 4
Y3 0 14 12 34 1 0 14
Z ′ = −5 4 ∆j −7 4 −5 4 0 0 −3 4
Since all ∆ j 's are negative or zero, an optimum basic feasible solution to the auxiliary problem has been attained. But the artificial variable x a1 (column vector A1) appears in the basic solution at a positive level. Hence the original L.P.P. does not possess any feasible solution.
Example 1: Max. Z = 6 x1 − 2 x2
subject to 2 x1 − x2 ≤ 2
x1 ≤ 4
x1, x2 ≥ 0 .
Solution: The given problem is of maximization and all bi's are positive.
Max. Z = 6 x1 − 2 x2 + 0 x3 + 0 x4
subject to 2 x1 − x2 + x3 =2
x1 + x4 = 4.
cj 6 −2 0 0 Min. ratio
B cB
xB Y1 Y2 Y3 Y4 x B Y1, yi1 > 0
Y3 0 2 2 −1 1 0 1(min)
→
Y4 0 4 1 0 0 1
4
Z = c Bx B = 0 ∆j 6 −2 0 0 x B Y2 , yi2 > 0
↑ ↓
Y1 6 1 1 −1 2 12 0
Y4 0 3 0 12 −1 2 1 6 (min)
→
Z =6 ∆j 0 1 −3 0
↑ ↓
Y1 6 4 1 0 0 1
Y2 −2 6 0 1 −1 2
Z = 12     ∆j :   0   0   −2   −2
Since all ∆ j ≤ 0, the optimal solution is x1 = 4, x2 = 6 and Max. Z = 12.
Hence, a linear programming problem having an unbounded feasible region may still have a bounded optimal solution.
Example 2: Max. Z = 10 x1 + 20 x2
subject to 2 x1 + 4 x2 ≥ 16
x1 + 5 x2 ≥ 15, x1, x2 ≥ 0 .
Solution: The given problem is of maximization and all bi's are positive.
Max. Z = 10 x1 + 20 x2 + 0 x3 + 0 x4 − M x a1 − M x a2
subject to 2 x1 + 4 x2 − x3 + x a1 = 16
x1 + 5 x2 − x4 + x a2 = 15, x1, x2 , x3 , x4 , x a1 , x a2 ≥ 0.
The solution to the problem using simplex method is given in the following table :
cj 10 20 0 0 − M − M Min. ratio
B cB x B Y2 ,
xB Y1 Y2 Y3 Y4 A1 A2
yi2 > 0
A1 −M 16 2 4 −1 0 1 0 16 4
A2 −M 15 1 5 0 −1 0 1 15 5 (min)
→
Z = − 31M ∆j 3 M + 10 9 M + 20 −M −M 0 0 x B Y1,
↑ ↓ yi1 > 0
A1 −M 4 65 0 −1 45 1 10 3 (min)
Y2 20 3 15 1 0 −1 5 0 →
15
Z = − 4 M + 60 ∆j 6 0 −M 4 0 x B Y3
( M + 5) ( M + 5)
5 5 ↓
↑
Y1 10 10 3 1 0 −5 6 23
Y2 20 73 0 1 16 −1 3 14 (min)
→
Z = 80 ∆j 0 0 5 0
↓ ↑
B cB cj 10 20 0 0 −M −M Min. ratio
x B Y1,
xB Y1 Y2 Y3 Y4 A1 A2 yi1 > 0
Y1 10 15 1 5 0 −1
Y3 0 14 0 6 1 −2
Z = 150 ∆j 0 −30 0 10
↑
In the last table we observe that Y4 is the incoming vector but we cannot find outgoing
vector because all the elements in this column are negative. Therefore, the solution is
unbounded.
Note : If in a situation there is at least one ∆ j greater than zero but there is no positive element yij in the corresponding column Y j , then no outgoing vector can be selected and the solution is unbounded.
Example 3: Max. Z = 6 x1 + 4 x2
subject to 2 x1 + 3 x2 ≤ 30
3 x1 + 2 x2 ≤ 24
x1 + x2 ≥ 3, x1, x2 ≥ 0 .
Solution: To convert inequalities into equations, introducing slack, surplus and artificial
variables, the problem becomes
Max. Z = 6 x1 + 4 x2 + 0 x3 + 0 x4 + 0 x5 − Mx a
subject to 2 x1 + 3 x2 + x3 = 30
3 x1 + 2 x2 + x4 = 24
x1 + x2 − x5 + x a =3
x1, x2 , x3 , x4 , x5 , x a ≥ 0.
x1 = 0, x2 = 0, x3 = 30, x4 = 24, x5 = 0, x a = 3.
cj 6 4 0 0 0 −M Min. ratio
B cB
x B Y1, yi1 > 0
xB Y1 Y2 Y3 Y4 Y5 A1
Y3 0 30 2 3 1 0 0 0 15
Y4 0 24 3 2 0 1 0 0 8
A1 −M 3 1 1 0 0 −1 1 3 (min)
→
Z = c Bx B ∆j 6+ M 4+ M 0 0 −M 0 x B Y5 , yi5 > 0
= −3 M ↑ ↓
Y3 0 24 0 1 1 0 2 12
Y4 0 15 0 −1 0 1 3 5 (min)
→
Y1 6 3 1 1 0 0 −1
∆j 0 −2 0 0 0
Z = 18 x B Y2 , yi2 > 0
↓ ↑
Y3 0 14 0 53 1 −2 3 0 42 5 →
Y5 0 5 0 −1 3 0 13 1
Y1 6 8 1 23 0 13 0 12
Z = 48 ∆j 0 0 0 −2 0
↑ ↓
Here corresponding to non-basic variable x2 in the last table ∆2 = 0 and Y2 is not in the
basis B, therefore an alternative solution also exists. Thus, the problem does not have unique
solution.
Taking Y2 as incoming and Y3 as outgoing vector we obtain the following simplex table :
B cB cj 6 4 0 0 0 Min. ratio
xB Y1 Y2 Y3 Y4 Y5
Y2 4 42 5 0 1 35 −2 5 0
Y5 0 39 5 0 0 15 15 1
Y1 6 12 5 1 0 −2 5 35 0
Z = c Bx B ∆j 0 0 0 −2 0
= 48
Since all ∆ j ≤ 0, therefore this solution is also optimal having the same maximum value of
Z.
x1 = 12/5 , x2 = 42/5 and Max. Z = 48.
Thus the two optimal basic feasible solutions are
1. x1 = 8, x2 = 0 and 2. x1 = 12/5 , x2 = 42/5 , each giving Max. Z = 48.
Thus if we obtain two alternative optimum solutions, then we can obtain any number of optimum solutions. For any arbitrary value of λ such that 0 ≤ λ ≤ 1 the following gives different optimum solutions, which are infinite in number :
x1 = 8 λ + (12/5) (1 − λ ), x2 = 0 . λ + (42/5) (1 − λ ).
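That every convex combination of two optimal solutions is again optimal is easy to verify numerically. A small sketch (Python with NumPy, illustrative only):

    import numpy as np

    x_a = np.array([8.0, 0.0])            # first optimal B.F.S.
    x_b = np.array([12.0 / 5, 42.0 / 5])  # second optimal B.F.S.
    for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
        x = lam * x_a + (1 - lam) * x_b   # every convex combination is also feasible
        print(lam, x, 6 * x[0] + 4 * x[1])  # the objective value stays 48 throughout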
Example 4: Max. Z = 2 x1 + 3 x2 , where x1 and x2 are unrestricted in sign,
subject to − x1 + 2 x2 ≤ 4
x1 + x2 ≤ 6
x1 + 3 x2 ≤ 9 .
Solution: The given problem is of maximization and all bi's are positive. We know that
the simplex method is applicable if the variables are non-negative. In the given problem,
x1 and x2 are unrestricted (may be +, – or zero). In such cases, we replace all these
variables by the difference of two non-negative variables and solve the problem in usual
manner.
B cB cj 2 −2 3 −3 0 0 0 Min. ratio
x BY ′, 2
xB Y1′ Y1″ Y2′ Y2 ″ Y3 Y4 Y5
y′i2 > 0
Y3 0 4 −1 1 2 −2 1 0 0 2 (min)
→
Y4 0 6 1 −1 1 −1 0 1 0
6
Y5 0 9 1 −1 3 −3 0 0 1
3
Y2′ 3 2 −1 2 12 1 −1 12 0 0
Y4 0 4 32 −3 2 0 0 −1 2 1 0 83
Y5 0 3 52 −5 2 0 0 −3 2 0 1 6 5 (min)
→
Z = c Bx B ∆j 72 −7 2 0 0 −3 2 0 0 x B Y3
=6 ↑ ↓
Y2′ 3 13 5 0 0 1 −1 15 0 15 13
Y4 0 11 5 0 0 0 0 25 1 −3 5 11 2 (min)
→
Y′ 2 65 1 −1 0 0 −3 5 0 25
1
Z = c Bx B ∆j 0 0 0 0 35 0 −7 5
= 51 5 ↑ ↓
Y2′ 3 32 0 0 1 −1 0 −1 2 1 2
Y3 0 11 2 0 0 0 0 1 5 2 −3 2
Y′ 2 92 1 −1 0 0 0 3 2 −1 2
1
Z = c Bx B ∆j 0 0 0 0 0 − 3 2 −1 2
= 27 2
Hence the optimal solution is x1 = x1′ − x1″ = 9/2 , x2 = x2′ − x2″ = 3/2 and Max. Z = 27/2 .
Example 5: Minimize Z = x1 + x2 + x3
subject to x1 − 3 x2 + 4 x3 = 5
x1 − 2 x2 ≤ 3
2 x2 − x3 ≥ 4,
Max. Z ′ = − x1 − x2 − x3 .
Also introducing slack variable x4 , surplus variable x5 and artificial variables x a and x a ,
1 2
the given problem becomes
x1 − 2 x2 + x4 = 3
2 x2 − x3′ + x3″ − x5 + x a2 = 4,
and solving by the simplex method in the usual manner, the optimal solution obtained is
x1 = 0, x2 = 21/5 , x3 = 22/5 and Min. Z = 43/5 .
Max. Z = 2 x1 − x2 + x3 + 50
subject to 2 x1 + 2 x2 − 6 x3 ≤ 16
12 x1 − 3 x2 + 3 x3 ≥ 6
−2 x1 − 3 x2 + x3 ≤ 4
and x1, x2 , x3 ≥ 0 .
Solution: Leaving the constant term 50 from the objective function in the beginning,
introducing slack, surplus and artificial variables, the given problem reduces to
Max. Z ′ = 2 x1 − x2 + x3 + 0 x4 + 0 x5 + 0 x6 − M x a1
subject to 2 x1 + 2 x2 − 6 x3 + x4 = 16
12 x1 − 3 x2 + 3 x3 − x5 + x a1 = 6
−2 x1 − 3 x2 + x3 + x6 = 4
and x1, x2 , x3 , x4 , x5 , x6 , x a1 ≥ 0.
The solution of the problem by simplex method is given in the following table :
cj 2 −1 1 0 0 0 −M Min. ratio
B cB x B Y1,
xB Y1 Y2 Y3 Y4 Y5 Y6 A yi1 > 0
Y4 0 16 2 2 −6 1 1 0 0 16 2
A −M 6 12 −3 3 0 −1 0 1 6 12 (min.) →
Y6 0 4 −2 −3 1 0 0 1 0
Z ′ = cB x B ∆j 2 + 12 M −1 − 3 M 1 + 3 M 0 −M 0 0 x B Y3 , yi3 > 0
= −6 M ↑ ↓
Y4 0 15 0 52 −13 2 1 16 0
Y1 2 12 1 −1 4 14 0 −1 2 0 2 (min.) →
Y6 0 5 0 −7 2 32 0 −1 6 1 10 3
Z ′ = xB x B ∆j 0 −1 2 12 0 16 0 x B Y5 ,
=1 ↓ ↑ yi5 > 0
Y4 0 28 26 −4 0 1 −2 0
Y3 1 2 4 −1 1 0 −1 3 0
Y6 0 2 −6 −2 0 0 1/3 1 6 (min.) →
Y4 0 40 −10 −16 0 1 0 6
Y3 1 4 −2 −3 1 0 0 1
Y5 0 6 −18 −6 0 0 1 3
Z ′ = CB x B ∆j 4 2 0 0 0 −1
=4 ↑
Here Y1 is the entering vector but all elements in column one are –ve, so we cannot select
the outgoing vector. Hence, the solution of the problem is unbounded.
Max. Z = 3 x1 + 5 x2 + 4 x3
subject to 2 x1 − 3 x2 ≤ 8
2 x2 + 5 x3 ≤ 10
3 x1 + 2 x2 + 4 x3 ≤ 15
x1 ≥ 2, x2 ≥ 4, x3 ≥ 0 .
Solution: Putting y1 = x1 − 2, y2 = x2 − 4, y3 = x3 (so that y1, y2 , y3 ≥ 0), the given problem becomes
Max. Z = 3 y1 + 5 y2 + 4 y3 + 26
subject to 2 y1 − 3 y2 ≤ 16, 2 y2 + 5 y3 ≤ 2,
3 y1 + 2 y2 + 4 y3 ≤ 1
and y1, y2 , y3 ≥ 0.
Introducing the slack variables y4 , y5 , y6 and leaving constant term 26 from Z in the
beginning, the above L.P.P. reduces to
Max. Z ′ = 3 y1 + 5 y2 + 4 y3 where Z = Z ′ + 26
subject to 2 y1 − 3 y2 + y4 = 16
2 y2 +5 y3 + y5 =2
3 y1 + 2 y2 + 4 y3 + y6 =1
and y1, y2 , y3 ≥ 0.
cj 3 5 4 0 0 0 Min. ratio
B cB x B Y2 ,
xB Y1 Y2 Y3 Y4 Y5 Y6 yi2 > 0
Y4 0 16 2 −3 0 1 0 0
Y5 0 2 0 2 5 0 1 0 22
Y6 0 1 3 2 4 0 0 1 1 2 (min. ) →
Z′ = cB x B ∆j 3 5 4 0 0 0
=0 ↑ ↓
Y4 0 35 2 13 2 0 6 1 0 32
Y5 0 1 −3 0 1 0 1 −1
Y2 5 12 32 1 2 0 0 12
Z′ = cB x B ∆j −9 2 0 −6 0 0 −5 2
=5 2
∴ Optimal solution is
y1 = 0, y2 = 1/2 , y3 = 0, Max. Z ′ = 5/2
∴ x1 = y1 + 2 = 2, x2 = y2 + 4 = 9/2 , x3 = y3 = 0, Max. Z = Z ′ + 26 = 57/2
i.e., x1 = 2, x2 = 9/2 , x3 = 0, Max. Z = 57/2 .
Example 1: Solve the following system of linear equations by using simplex method :
x1 − x3 + 4 x4 = 3 , 2 x1 − x2 = 3 , 3 x1 − 2 x2 − x4 = 1.
Solution: Here the variables are non-negative. Adding the non-negative artificial variables x a1 , x a2 , x a3 to the given equations and introducing the dummy objective function Z with cost zero for each given variable and cost −1 for each artificial variable, the given system can be written as a L.P.P. in the following form :
Max. Z = 0 x1 + 0 x2 + 0 x3 + 0 x4 − 1 x a1 − 1 x a2 − 1 x a3
subject to x1 − x3 + 4 x4 + x a1 = 3
2 x1 − x2 + x a2 = 3
3 x1 − 2 x2 − x4 + x a3 = 1
and x1, x2 , x3 , x4 , x a1 , x a2 , x a3 ≥ 0.
The solution to the problem using simplex algorithm is given in the following table :
cj 0 0 0 0 −1 −1 −1 Min. ratio
B cB x B Y1,
xB Y1 Y2 Y3 Y4 A1 A2 A3
yi1 > 0
A1 −1 3 1 0 −1 4 1 0 0 3
A2 −1 3 2 −1 0 0 0 1 0 32
A3 −1 1 3 −2 0 −1 0 0 1 1 3 (min)
→
Z = c Bx B ∆j 6 −3 −1 3 0 0 0 x B Y4 ,
= −7 ↑ ↓ yi4 > 0
A1 −1 83 0 23 −1 13 3 1 0 8 13 (min) →
A2 −1 73 0 13 0 23 0 1 72
Y1 0 13 1 −2 3 0 −1 3 0 0
Z = c Bx B ∆j 0 1 −1 5 0 0 x B Y2 ,
= −5 ↑ ↓ yi2 > 0
Y4 0 8 13 0 2 13 −3 13 1 0 4
→
A2 −1 25 13 0 3 13 2 13 0 1 25 3
Y1 0 7 13 1 −8 13 −1 13 0 0
Z = −25 13 ∆j 1 3 13 2 13 0 0 x B Y3 ,
↑ ↓ yi3 > 0
Y2 0 4 0 1 −3 2 13 2 0
A2 −1 1 0 0 12 −3 2 1 2→
Y1 0 3 1 0 −1 4 0
Z = −1 ∆j 0 0 12 −3 2 0
↑ ↓
Y2 0 7 0 1 0 2
Y3 0 2 0 0 1 −3
Y1 0 5 1 0 0 1
Z =0 ∆j 0 0 0 0
Hence the required solution of the given system of equations is x1 = 5, x2 = 7, x3 = 2, x4 = 0.
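A direct numerical check of this solution (Python with NumPy sketch, not part of the text; the third equation is taken from the constraint display in the solution above):

    import numpy as np

    A = np.array([[1.0,  0.0, -1.0,  4.0],
                  [2.0, -1.0,  0.0,  0.0],
                  [3.0, -2.0,  0.0, -1.0]])
    b = np.array([3.0, 3.0, 1.0])
    x = np.array([5.0, 7.0, 2.0, 0.0])      # solution read from the final simplex table
    assert np.allclose(A @ x, b)            # all three equations are satisfied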
2 x1 + x2 = 1
3 x1 + 4 x2 = 12 .
Solution: Here the two variables x1 and x2 are not required to be non-negative, so we
write x1 = x1′ − x1′′ and x2 = x2′ − x2′′ , such that x1′ , x1′′, x2′ , x2′′ ≥ 0.
Adding the non-negative artificial variables x a1 and x a2 in the two equations and introducing the dummy objective function Z with cost zero for each variable x1′ , x1″, x2′ , x2″ and cost −1 for each artificial variable x a1 , x a2 , the given problem as a L.P.P. is as follows :
subject to
Taking x1′ = 0, x1″ = 0, x2′ = 0, x2″ = 0, we get x a1 = 1, x a2 = 12 , which is the starting B.F.S.
The solution of the problem using simplex method is given in the following table :
B cB cj 0 0 0 0 −1 −1 Min. ratio
x B Y2′ , y′i2 > 0
xB Y1′ Y1′′ Y2′ Y2′′ A1 A2
A1 −1 1 2 −2 1 −1 1 0 1 1 → (min)
A2 −1 12 3 −3 4 −4 0 1 12 4
Z = −8 ∆j −5 5 0 0 −5 0
↑
Y2′ 0 21 5 0 0 1 −1 −3 5 25
Y1′′ 0 85 −1 1 0 0 −4 5 15
Z = 0     ∆j :   0   0   0   0   0   0
Since all ∆ j ≤ 0, the solution is x1 = x1′ − x1″ = 0 − 8/5 = −8/5 and x2 = x2′ − x2″ = 21/5 − 0 = 21/5 .
Let A be an n × n non-singular real matrix. To find the inverse of the matrix A by the simplex method, proceed as follows :
First form a dummy requirement vector b whose i-th component is the sum of the entries of the i-th row of A (so that x = [1, 1, ...., 1] satisfies A x = b), and consider the system A x = b.
Now introduce the artificial variables x a ≥ 0 and a dummy objective function Z, with costs zero for the variables in x and cost –1 for each artificial variable. Then find the solution of the following L.P.P. so formed, by using the simplex method :
Max. Z = 0 x − 1 x a
subject to A x = b, x, x a ≥ 0.
Then in the final simplex table which gives the optimal solution (i.e., when all ∆ j ≤ 0) and
the columns of A becomes the columns of unit matrix I (i.e., when A is converted to an
unit matrix or when all variables of vector x are in the basis), the inverse of matrix A is the
matrix formed by the column vectors which were the column vectors of the initial basis in
proper order.
If in the final simplex table which gives the optimal solution (i.e., when all ∆ j ≤ 0), the
matrix A is not converted to an unit matrix (i.e., all variables of vector x are not in the
basis and artificial variable / variables appear in the basis), then we continue the simplex
method by excluding the artificial variable and introducing the remaining variable (not
in the basis) into the basis. By this operation (iteration), the solution obtained must
remain optimal while it can be feasible or infeasible. The process is continued till A is
converted to an unit matrix. Finally, the inverse of A is obtained as above.
Example : Find the inverse of the matrix A = [[4, 3], [3, 2]] by using the simplex method.
Solution: Let b = [ b1, b2 ] = [ Σ a1 j , Σ a2 j ] = [4 + 3, 3 + 2] = [7, 5] be a dummy real column matrix. Consider the system of equations
A x = b
i.e., [[4, 3], [3, 2]] [ x1, x2 ] = [7, 5] i.e., 4 x1 + 3 x2 = 7 , 3 x1 + 2 x2 = 5.
Introducing the artificial variables x a1 , x a2 and the dummy objective function Z with cost 0 for each variable x1, x2 and cost –1 for each artificial variable x a1 , x a2 , the resulting L.P.P. is
Max. Z = 0 x1 + 0 x2 − 1 x a1 − 1 x a2
subject to 4 x1 + 3 x2 + x a1 = 7
3 x1 + 2 x2 + x a2 = 5
and x1, x2 , x a1 , x a2 ≥ 0.
The solution of the above L.P.P. by simplex method is given in the following table :
cj 0 0 −1 −1 Min. ratio
B cB x B Y1, yi1 > 0
xB Y1 Y2 A1 A2
A1 −1 7 4 3 1 0 74
A2 −1 5 3 2 0 1 5 3 (min.) →
Z = cB x B ∆j 7 5 0 0 x B Y2 , yi2 > 0
= −12 ↑ ↓
A1 −1 13 0 13 1 −4 3 1 (min. ) →
Y1 0 53 1 23 0 13 52
Z = cB x B ∆j 0 13 0 −7 3
1 ↑ ↓
=−
3
Y2 0 1 0 1 3 −4
Y1 0 1 1 0 −2 3
Z = cB x B ∆j 0 0 −1 −1
=0
The last table in proper order (i.e., form) of unit matrix for Y1, Y2 (i.e., A) can be written as
B cB xB Y1 Y2 A1 A2
Y1 0 1 1 0 −2 3
Y2 0 1 0 1 3 −4
Since the given matrix A (given by columns Y1, Y2 ) has been converted to a unit matrix,
therefore A −1 is given by the columns A1, A2 of the initial basis.
Hence A −1 = [[−2, 3], [3, −4]].
subject to x1 + x2 ≤ 4 subject to 3 x1 + 5 x2 ≤ 15
x1 − x2 ≤ 2, 5 x1 + 2 x2 ≤ 10
subject to 3 x1 − x2 + 2 x3 ≤ 7 subject to 3 x1 + 5 x2 + 2 x3 ≤ 60
−2 x1 + 4 x2 ≤ 12 4 x1 + 4 x2 + 4 x3 ≤ 72
−4 x1 + 3 x2 + 8 x3 ≤ 10 2 x1 + 4 x2 + 5 x3 ≤ 100
x1, x2 , x3 ≥ 0 x1, x2 , x3 ≥ 0
[Meerut 2005] [Meerut 2009 (BP), 12 (BP)]
subject to subject to x1 − x2 ≤ 10
x1 + 3 x2 + x4 ≤ 4 2 x1 − x2 ≤ 40
x1, x2 , x3 , x4 ≥ 0
5. Max. Z = 4 x1 + 10 x2 6. Max. Z = 2 x1 + 4 x2
subject to subject to
2 x1 + x2 ≤ 50 2 x1 + 3 x2 ≤ 48
2 x1 + 5 x2 ≤ 100 x1 + 3 x2 ≤ 42
2 x1 + 3 x2 ≤ 90 x1 + x2 ≤ 21
x1, x2 ≥ 0. [Meerut 2002] x1, x2 ≥ 0.
7. Max. Z = 2 x1 + x2 8. Max. Z = 3 x1 + 4 x2
subject to subject to
x1 + 2 x2 ≤ 10 x1 − x2 ≤ 1
x1 + x2 ≤ 6 − x1 + x2 ≤ 2
x1 − x2 ≤ 2 x1, x2 ≥ 0.
x1 − 2 x2 ≤ 1
x1, x2 ≥ 0.
x1 + 4 x2 ≥ 4 1
16 x1 + x − 6 x3 ≤ 5
2 2
x1 + x2 ≤ 5
3 x1 − x2 − x3 ≤ 0
x1, x2 ≥ 0. [Meerut 2004, 11 (BP)]
x1, x2 , x3 , x4 ≥ 0. [Kanpur 2009]
subject to subject to
4 x1 + 5 x2 + x3 + 5 x4 = 5 3 x1 − x2 − x3 ≥ 3
2 x1 − 3 x2 − 4 x3 + 5 x4 = 7 x1 − x2 + x3 ≥ 2
x1 + 4 x2 + 5 x3 − 4 x4 = 6 x1, x2 , x3 ≥ 0.
x1, x2 , x3 , x4 ≥ 0.
subject to subject to
3 x1 + x2 = 3 2 x1 + x2 ≥ 2
4 x1 + 3 x2 ≥ 6 x1 + 3 x2 ≤ 3
x1 + 2 x2 ≤ 4 x2 ≤ 4
x1, x2 ≥ 0. x1, x2 ≥ 0. [Meerut 2012]
x1 − x2 ≥ 0 x1 + x2 + x3 = 10
2 x1 + 3 x2 ≤ −6 x1 − x2 ≥ 1
x1, x2 are unrestricted. 2 x1 + 3 x2 + x3 ≤ 40
x1, x2 , x3 ≥ 0. [Meerut 2007]
subject to subject to
2 x1 + 5 x2 ≥ 1500 3 x1 + x2 ≤ 27
3 x1 + x2 ≥ 1200 x1 + x2 ≥ 21
x1, x2 ≥ 0. x1, x2 , ≥ 0. [Meerut 2011]
35. 3 x1 + 2 x2 = 4, 4 x1 − x2 = 6. 36. 3 x1 + 2 x2 = 5, 5 x1 + x2 = 9.
[Gorakhpur 2008]
37. x1 + x2 = 1, 2 x1 + x2 = 3.
(iii) [ 4  1  2 ; 0  1  0 ; 8  4  5 ]        (iv) [ 4  1 ; 2  9 ].      [Meerut 2006]
39. Food A contains 20 units of vitamin X and 40 units of vitamin Y per gram. Food B
contains 30 units each of vitamin X and Y. The daily minimum human
requirements of vitamins X and Y are 900 units and 1200 units respectively. How
many grams of each type of food should be consumed so as to minimize the cost if
food A costs 60 paise per gram and food B costs 80 paise per gram?
40. A finished product must weigh exactly 150 gms. The two raw materials used in
manufacturing the product are : A with cost of ` 2 per unit and B with a cost of ` 8
per unit. At least 14 units of B and not more than 20 units of A must be used. Each
unit of A and B weighs 5 and 10 grams respectively.
How much of each type of raw material should be used for each unit of the final
product in order to minimize the cost ?
41. A company produces two types of products say type A and B. Product B is of
superior quality and product A is of a lower quality. Profits on the two types of
products are ` 30 and ` 40 respectively. The data on resources required and capacity
available is given below:
How should the company manufacture the two types of products in order to have a
maximum overall profit ?
True/False
1. Fundamental theorem of L.P.P. states that if the given L.P.P. has an optimal
solution, then at least one basic solution must be optimal.
2. The problem has no feasible solution if the value of at least one artificial variable
present in the basis is non-zero and the optimality condition is satisfied.
3. In the phase I of two phase method, we remove artificial variables from the basis
matrix.
4. If we have to solve a linear programming problem by simplex method the variable x
should be unrestricted in sign. [Meerut 2005]
Exercise
1. (i) x1 = 3, x2 = 1, Max. Z = 11
3. x1 = 0, x2 = 4, Max. Z = 28
4. x1 = 0, x2 = 29/5, x3 = 21/5, Max. Z = 34/5
6. x1 = 6, x2 = 12, Z = 60
7. x1 = 4, x2 = 2, Max. Z = 10
8. unbounded
9. x1 = 50/7, x2 = 0, x3 = 55/7, x4 = 0, Max. Z = 695/7
11. x1 = x2 = x3 = 0, Max. Z = 0
12. x1 = 15/2, x2 = 0, x3 = 0, Max. Z = 45/2
15. x1 = 6/5, x2 = 0, x3 = 17/5, x4 = 0, x5 = 0, Max. Z = 18/5
16. x1 = 5/2, x2 = 0, x3 = 0, Min. Z = 10
17. x1 = 1/3, x2 = 5/3, x3 = 0, Max. Z = 22/3
18. unbounded
19. x1 = 21/13, x2 = 10/13, Min. Z = 31/13
20. x1 = 2, x2 = 0, Max. Z = 6
21. x1 = 0, x2 = 5, Max. Z = 40
22. unbounded
23. no feasible solution
24. x1 = 5/4, x2 = 0, x3 = 3/4, Max. Z = 75/8
26. x1 = 3, x2 = 0, Max. Z = 9
27. x1 = −6/5, x2 = −6/5, Max. Z = −48/5
28. x1 = 11/2, x2 = 9/2, Max. Z = 89/2
31. x1 = 4, x2 = 0, x3 = 1, Max. Z = 11
33. x1 = 4, x2 = 1, Min. Z = 11
34. x1 = 3/4, x2 = 0, x3 = 3/4, Min. Z = 3
35. x1 = 16/11, x2 = −2/11
36. x1 = 13/7, x2 = −2/7
37. x1 = 2, x2 = −1
38. (i) (1/4) [ −2  2 ; 3  −1 ]        (ii) (1/11) [ 1  2 ; 4  −3 ]
(iii) (1/4) [ 5  3  −2 ; 0  4  0 ; −8  −8  4 ]        (iv) (1/34) [ 9  −1 ; −2  4 ]
True/False
1. True 2. True 3. True
4. False
Unit-3
5.1 Introduction
In chapter 2 we have defined that a B.F.S. of a L.P. problem is said to be a degenerate B.F.S. if at least one of the basic variables is zero. So far we have considered the L.P. problems in which by the minimum ratio rule we get only one vector to be deleted from the basis. But there are L.P. problems where we get more than one vector which may be deleted from the basis.
Thus, if min_i { x_Bi / y_ik : y_ik > 0 } (α_k being the incoming vector) occurs at i = i1, i2, ..., is,
i.e., the minimum occurs for more than one value of i, then the problem is to select the vector to be deleted from the basis. In such cases, if we choose one vector, say β_i (i being one of i1, i2, ..., is), and delete it from the basis, then the next solution (obtained at this iteration) may be a degenerate B.F.S. Such a problem is called the problem of degeneracy. It can be seen that when the simplex method is applied to a degenerate B.F.S. to get a new B.F.S., the value of the objective function may remain unchanged, i.e., the value of the objective function is not improved. In some cases, due to the presence of degeneracy, the same sequence of simplex tables is repeated forever without ever reaching the optimal solution. This phenomenon is called cycling.
The procedure which prevents cycling within the simplex routine, so that an optimal solution is obtained in a finite number of steps, is called the resolution of degeneracy.
2. If none of the components of b is zero at any iteration and choice of the outgoing vector β r
at the same iteration is not unique, the next solution is bound to be degenerate.
Then to select the vector to be deleted from the basis (i.e., outgoing vector), proceed as
follows :
1. Denote the columns of the unit matrix in proper order (i.e., the columns
corresponding to the basic variables in proper order) in the simplex table by Y1, Y2
etc. Thus,
2. Compute min_{i ∈ I1} [ (element of Y1 in i-th row) / (element of Y_k in i-th row) ], where I1 = { i1, i2, i3, ... }.
If this minimum is unique, say attained for i = i_r, then the i_r-th vector is taken as the outgoing vector and the element where this i_r-th row is intersected by the column Y_k is taken as the key element. If this minimum is not unique, then let I2 be the set of all those values of i ∈ I1 for which there is a tie; obviously I2 ⊂ I1. Then proceed to the next step.
3. Compute min_{i ∈ I2} [ (element of Y2 in i-th row) / (element of Y_k in i-th row) ].
If this minimum is unique, say attained for i = i_s, then the i_s-th vector is taken as the outgoing vector and the element where this i_s-th row is intersected by the column Y_k is taken as the key element.
If this minimum is also not unique, then let I3 be the set of those values of i ∈ I2 for which there is a tie; obviously I3 ⊂ I2 ⊂ I1. Then proceed to the next step.
4. Compute min_{i ∈ I3} [ (element of Y3 in i-th row) / (element of Y_k in i-th row) ].
If this minimum is unique, say attained for i = i_t ∈ I3, then the i_t-th vector is taken as the outgoing vector and the element where this i_t-th row is intersected by the column Y_k is taken as the key element.
If this minimum is also not unique, then proceed similarly.
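The tie-breaking procedure above can be sketched in code as follows. This is only an illustrative sketch, assuming the current y_ij values are held in a NumPy array, x_B holds the values of the basic variables, k indexes the incoming column and unit_cols lists the renumbered unit-matrix columns Y1, Y2, ... in proper order; all names are hypothetical.

```python
import numpy as np

def outgoing_row(tableau, x_B, k, unit_cols):
    """Choose the outgoing row by the tie-breaking rule sketched above.

    tableau   : m x n array of the current y_ij values
    x_B       : current values of the basic variables (length m)
    k         : index of the incoming column (alpha_k)
    unit_cols : indices of the renumbered unit-matrix columns Y1, Y2, ...
    """
    y_k = tableau[:, k]
    candidates = [i for i in range(len(x_B)) if y_k[i] > 0]
    ratios = {i: x_B[i] / y_k[i] for i in candidates}
    best = min(ratios.values())
    tied = [i for i in candidates if np.isclose(ratios[i], best)]
    # Resolve ties with the columns Y1, Y2, ... taken in turn
    for col in unit_cols:
        if len(tied) == 1:
            break
        r = {i: tableau[i, col] / y_k[i] for i in tied}
        best = min(r.values())
        tied = [i for i in tied if np.isclose(r[i], best)]
    return tied[0]
```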
Example 1: Solve the following L.P.P. (Problem with some x Bi = 0 for which yik = 0).
Max. Z = 2 x1 + 3 x2 + 10 x3
subject to x1 + 2 x3 = 0
x2 + x3 = 1
x1, x2 , x3 ≥ 0
Max. Z = 2 x1 + 3 x2 + 10 x3
subject to x1 + 0. x2 + 2 x3 = 0
0. x1 + 1. x2 + 1. x3 = 1
Here the basis matrix I2 (formed by coefficients of x1 and x2 ) exists, so there is no need to
introduce the artificial variables.
        cj         2      3      10     Min. ratio
B    cB    xB      Y1     Y2     Y3     xB/Y3, yi3 > 0
Y1   2     0       1      0      2      0/2 (min.) →
Y2   3     1       0      1      1      1/1
Z = cB xB = 3    Δj   0      0      3
                               ↑
Y3   10    0       1/2    0      1
Y2   3     1       −1/2   1      0
Z = cB xB = 3    Δj   −3/2   0      0
Note : In this example it is important to note that we have obtained an optimal degenerate solution from a degenerate solution without improving the value of Z.
Example 2: Solve the following L.P.P. (problem with some x Bi = 0 for which yik < 0).
Max. Z = 2 x1 + 3 x2 + 10 x3
subject to x1 − 2 x3 = 0
x2 + x3 = 1
x1, x2 , x3 ≥ 0 .
Max. Z = 2 x1 + 3 x2 + 10 x3
subject to 1. x1 + 0 . x2 − 2 x3 = 0.
0 . x1 + 1. x2 + 1. x3 = 1
Here the basis matrix I2 (formed by coefficient of x1 and x2 ) exists, so there is no need to
introduce the artificial variables. Taking x3 (non-basic variable) = 0, we get x1 = 0, x2 = 1,
so the starting B.F.S. is x1 = 0, x2 = 1, x3 = 0 which is degenerate as the basic variable
x1 = 0.
        cj         2      3      10     Min. ratio
B    cB    xB      Y1     Y2     Y3     xB/Y3, yi3 > 0
Y1   2     0       1      0      −2     —
Y2   3     1       0      1      1      1/1 (min.) →
Z = cB xB = 3     Δj   0      0      11
                                ↑
Y1   2     2       1      2      0
Y3   10    1       0      1      1
Z = cB xB = 14    Δj   0      −11    0
Note : In this example it may be noted that a non-degenerate optimal solution is obtained from a degenerate B.F.S. and the value of the objective function is also improved.
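Both examples can be checked numerically. The sketch below uses SciPy's linprog (which minimises, so the objective is negated) purely to verify the tabular work for Example 2; it is not part of the hand method.

```python
from scipy.optimize import linprog

# Example 2: Max Z = 2x1 + 3x2 + 10x3, x1 - 2x3 = 0, x2 + x3 = 1, x >= 0
res = linprog(c=[-2, -3, -10],
              A_eq=[[1, 0, -2], [0, 1, 1]],
              b_eq=[0, 1],
              bounds=[(0, None)] * 3)
print(res.x, -res.fun)   # approximately [2, 0, 1] and Z = 14, as in the table
```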
Max. Z = 5 x1 + 3 x2
subject to x1 + x2 ≤ 2
5 x1 + 2 x2 ≤ 10
3 x1 + 8 x2 ≤ 12
and x1, x2 ≥ 0 .
Solution: Introducing the slack variables x3 , x4 , x5 the given L.P.P. can be written as
Max. Z = 5 x1 + 3 x2
subject to x1 + x2 + x3 =2
5 x1 + 2 x2 + x4 = 10
3 x1 + 8 x2 + x5 = 12
B cB cj 5 3 0 0 0 Min. Ratio
x B Y1
xB Y4 Y5 Y1 Y2 Y3
yi1 > 0
Y1 Y2 Y3 (β1) Y4 (β2 ) Y5(β3 )
Y3 0 2 1 1 1 0 0 2
Y4 0 10 5 2 0 1 0 2→
Y5 0 12 3 8 0 0 1 4
Z = cB x B = 0 ∆j 5 3 0 0 0
↑ ↓
By minimum ratio rule we find that minimum is not unique but occurs for i = 1 and i = 2
both.
∴ To select the vector to be deleted from the basis, we proceed as explained in article
5.3.
1. First of all we re-number the columns starting from the identity matrix in the proper
order by Y1, Y2 , Y3 , Y4 , Y5 .
∴ Y1 = Y3 , Y2 = Y4 , Y3 = Y5 , Y4 = Y1, Y5 = Y2
∴ I1 = {1, 2}
∴ The vector in the second row i.e., Y4 is to be deleted and key element is y21 = 5.
Further computation by simplex method is shown in the following table.
B cB cj 5 3 0 0 0 Min. Ratio
x B Y2 , yi2 > 0
xB Y1 Y2 Y3 Y4 Y5
Y3 0 0 0 35 1 −1 5 0 0 (Min. ) →
Y1 5 2 1 25 0 15 0 5
Y5 0 6 0 34 5 0 −3 5 1 15 17
Z = c B x B = 10 ∆j 1 0
0 –1 0
↑ ↓
Y2 3 0 0 1 53 −1 3 0
Y1 5 2 1 0 −2 3 13 0
Y5 0 6 0 0 −34 3 53 1
Z = c B x B = 10 ∆j 0 0 −5 3 −2 3 0
Max. Z = 2 x1 + x2 ,
subject to 4 x1 + 3 x2 ≤ 12
4 x1 + x2 ≤ 8
Solution: Introducing the slack variables x3 , x4 and x5 the given L.P.P. can be written as
Max. Z = 2 x1 + x2
subject to 4 x1 + 3 x2 + x3 = 12
4 x1 + x2 + x4 =8
4 x1 − x2 + x5 =8
and x1, x2 , x3 , x4 , x5 ≥ 0
B cB cj 2 1 0 0 0 Min. Ratio
Y3 0 12 4 3 1 0 0 3
Y4 0 8 4 y24 1 0 y21 1 y22 0 2
Y5 0 8 4 y34 −1 0 y31 0 y23 1 2→
Z = cB x B ∆j 2 1 0 0 0
↑ ↓
=0
Since Max ∆ j = 2 = ∆1
By minimum ratio rule we find that minimum is not unique but occurs for i = 2 and i = 3
both
∴ To select the vector to be deleted from the basis, we proceed as explained in article
5.3
1. First of all we renumber the columns of the above table starting from the identity
matrix in the proper order by Y1, Y2 , Y3 .
2. Since minimum ratio occurs for i = 2 and 3 ∴I1 = {2, 3} and incoming vector is Y1(Y4 )
Element of Y1 in i − th row
∴ Compute Mini.
i ∈ I1 Element of Y4 in i − th row
0 0
= Mini. , = Mini.{0, 0} Q i = 2, 3
4 4
Element of Y2 in i − th row
Compute Mini.
i ∈ I2 Element of Y4 in i − th row
=0
i.e., minimum occurs at i = 3. ∴ Vector in the 3rd row i.e., Y5 is the out going vector
and key element = y31 = 4. Thus, we enter Y1 in place of Y5 .
The computational work by simplex method is shown in the following table.
B cB cj 2 1 0 0 0 Min. Ratio
XB Y1 Y2 Y3 Y4 Y5 x B Y2 , Yi2 > 0
Y3 0 4 0 4 1 0 −1 1
Y4 0 0 0 2 0 1 −1 0(Min.) →
Y1 2 2 1 −1 4 0 0 14
Z = cB x B ∆j 0 32 0 0 −1 2 x B Y5 , Y i5 > 0
=4 ↑ ↓
Y3 0 4 0 0 1 −2 1 4 (Min.) →
Y2 1 0 0 1 0 12 −1 2
Y1 2 2 1 0 0 18 18 16
Z = cB x B ∆j 0 0 0 −3 4 14
=4 ↓ ↑
Y5 0 4 0 0 1 −2 1
Y2 1 2 0 1 12 −1 2 0
Y1 2 32 1 0 −1 8 38 0
Z = cB x B ∆j 0 0 −1 4 −1 2 0
=5
Max. Z = 5 x1 − 2 x2 + 3 x3
subject to 2 x1 + 2 x2 − x3 ≥ 2
3 x1 − 4 x2 ≤ 3
x2 + 3 x3 ≤ 5
Max. Z = 5 x1 − 2 x2 + 3 x3 + 0. x4 + 0. x5 + 0. x6 − M. x a
subject to 2 x1 + 2 x2 − x3 − x4 + xa =2
3 x1 − 4 x2 + x5 =3
x2 + 3 x3 + x6 =5
The computation of the solution by simplex method is given in the following table.
B cB cj 5 −2 3 0 0 0 − M Min. Ratio
X B Y1,
xB Y1 Y2 Y3 Y4 Y5 Y6 A Yi1 > 0
A −M 2 2 2 −1 −1 0 0 1 2 2 = 1→
Y5 0 3 3 −4 0 0 1 0 0 3 3 =1
Y6 0 5 0 1 0 0 0 1 0 −
Z = −2 M ∆j 5 + 2 M −2 + 2 M 3 − M −M 0 0 0
↑ ↓
Here we note that max. ∆ j = ∆1 so Y1 is the incoming vector and by minimum ratio rule
we find the same minimum ratio in row 1 and row 2. So it is a case of degeneracy. But
here in the first row we have the artificial vector A, so giving preference to A, to leave the
basis we choose A as the outgoing vector. Here there is no need to apply the rule of
resolving degeneracy.
Thus, entering vector Y1 in place of A in the basis, y11 = 2 is the key element, further
computations are shown in the following table :
B cB cj 5 −2 3 0 0 0 −M Min. Ratio
x B Y3
xB Y1 Y2 Y3 Y4 Y5 Y6 A yi > 0
3
Y1 5 1 1 1 −1 2 −1 2 0 0
Y5 0 0 0 −7 32 3 2 1 0 0 (3 2) Min.
→
Y6 0 5 0 1 3 0 0 1 53
Z =5 ∆j 0 −7 11 2 52 0 0 x BY2
↑ ↓ yi > 0
2
Y1 5 1 1 −4 3 0 0 13 0
Y3 3 0 0 −14 3 1 1 23 0
Y6 0 5 0 15 0 −3 −2 1 5 15 Min. →
Z =5 ∆j 0 56 3 0 −3 −11 3 0 x BY4
↑ ↓ yi4 > 0
Y1 5 13 9 1 0 0 −4 15 7 45 4 45
Y3 3 14 9 0 0 1 1 15 2 45 14 45 7 0 3 Min.
→
Y2 −2 1 3 0 1 0 −1 5 −2 15 1 15
Y1 5 23 3 1 0 4 0 13 43
Y4 0 70 3 0 0 15 1 23 14 3
Y2 −2 5 0 1 3 0 0 1
Z = 85 3 ∆j 0 0 −11 0 −5 3 −14 3
Since no Δj > 0, this solution is optimal. Hence the optimal solution of the given problem is x1 = 23/3, x2 = 5, x3 = 0 and Max. Z = 85/3.
2. What do you mean by degeneracy ? Discuss in detail the necessary and sufficient
conditions for the occurrence of degeneracy.
3. What are the problems caused by degeneracy ? Describe a procedure to avoid these
problems.
5. Prove that the degeneracy may appear in a L.P.P. at the very first iteration when
some component of vector b [i.e., bi] is zero.
Solve the following L.P.P.
6. Max. Z = 3 x1 + 9 x2 7. Max. Z = 3 x1 + 5 x2
subject to x1 + 4 x2 ≤ 8 subject to x1 + x3 = 4
x1 + 2 x2 ≤ 4 x2 + x4 = 6
and x1, x2 ≥ 0 3 x1 + 2 x2 + x5 = 12
and x i ≥ 0, i = 1, 2,...,5.
8. Max. Z = 3 x1 + 5 x2 + 4 x3 9. Max. Z = x1 − x2 + 3 x3
subject to 2 x1 + 3 x3 ≤ 18 subject to x1 + x2 + x3 ≤ 10
2 x2 + 5 x3 ≤ 18 2 x1 − x3 ≤ 2
3 x1 + 2 x2 + 4 x3 ≤ 25 2 x1 − 2 x2 + 3 x3 ≤ 0
and x1, x2 , x3 ≥ 0 and x1, x2 , x3 ≥ 0
0. x1 + 0. x2 + 1. x3 + 0. x4 ≤ 1
and x1, x2 , x3 , x4 ≥ 0
Exercise
6. x1 = 0, x2 = 2, Z = 18
7. x1 = 0, x2 = 6, x3 = 4, x4 = 0, x5 = 0, Z = 30
8. x1 = 7/3, x2 = 9, x3 = 0, Z = 52
9. x1 = 0, x2 = 6, x3 = 4, Z = 6
10. x1 = 3, x2 = 2, Z = 29
11. x1 = 1, x2 = 0, x3 = 1, x4 = 0, Z = 5/4
6.1 Introduction
The revised simplex method is an efficient computational procedure for solving linear programming problems on digital computers. The revised simplex method solves a linear programming problem in the same way as the simplex method. The "revised" aspect concerns the procedure of changing tableaux: in the revised simplex method we compute and store only the information that is of current need.
There are two standard forms for the revised simplex method.
Standard Form I : In standard form I, it is assumed that an initial identity (basis) matrix is available without introducing any artificial variables.
Standard Form II : In standard form II, it is assumed that artificial variables are needed for getting an initial identity matrix. In this case the two phase method of the ordinary simplex method will be used in a slightly different way to handle the artificial variables.
In the revised simplex method we deal with an (m + 1)-dimensional basis in standard form I and with an (m + 2)-dimensional basis in standard form II.
and x i ≥ 0, i = 1, 2,..., n.
Thus we have to find the solution of the system (1) of (m + 1) simultaneous equations in
n + m + 1 number of variables Z , x1, x2 ,..., x n, x n + 1,..., x n + m, such that Z is as large as
possible and unrestricted in sign.
where Z = x0 and −c_j = a_0j, j = 1, 2, ..., n + m. In matrix form the system can be written as
[ 1   a_01  a_02  ...  a_0n   a_0,n+1  ...  a_0,n+m ]   [ x0      ]   [ 0   ]
[ 0   a_11  a_12  ...  a_1n   1        ...  0       ]   [ x1      ]   [ b1  ]
[ ..  ..    ..    ...  ..     ..       ...  ..      ] · [ ...     ] = [ ... ]
[ 0   a_m1  a_m2  ...  a_mn   0        ...  1       ]   [ x_{n+m} ]   [ b_m ]
or [ 1  a_0 ; 0  A ] [ x0 ; x ] = [ 0 ; b ],      ...(3)
where
x = [ x1, x2 ,..., x n + m], (Column vector of given, slack and surplus variables).
b = [b1, b2 ,..., bm] (Column vector of requirements)
0 = [0, 0,..., 0] (m-dimensional column vector of zeroes)
so that [ 1  −c ; 0  A ] [ Z ; x ] = [ 0 ; b ].      ...(4)
Equations (1), (2) and (4) are referred to as standard form I for the revised simplex
method.
We shall use the subscript (1) on all vectors to indicate that we have (m + 1) components
in standard form I.
(i) Corresponding to each m-component column α_j of A we define the (m + 1)-component column
α_j^(1) = [ −c_j, a_1j, a_2j, ..., a_mj ] = [ −c_j, α_j ],
i.e., α_j^(1) = [ −c_j ; α_j ] = [ a_0j ; α_j ].
(ii) Corresponding to m component vector b, we can define (m + 1) component vector by
(iv) The basis matrix in revised simplex method, standard I will be denotes by B1.
Subscript 1 is used to indicate the matrix B1 is of order (m + 1) × (m + 1). e1 (defined in
‘iii’) will always be in the first column of B1 and the remaining m columns are any
(1) (1)
m of α j , j = 1, 2,..., (n + m) which are L.I. and are denoted by β i , i = 1, 2,..., m.
Thus B1 = [ 1  −C_B ; 0  B ], where B = ( β1, β2, ..., β_m ).
Since B^(-1) exists and is known, therefore using article 1.15 the inverse of the matrix B1 is given by
B1^(-1) = [ 1   C_B B^(-1) ; 0   B^(-1) ]
(comparing with article 1.15, we see that here I = 1, R = B and Q = −C_B).
In particular, if B = I_m, then B^(-1) = I_m^(-1) = I_m and
B1^(-1) = [ 1  C_B I_m ; 0  I_m ] = [ 1  C_B ; 0  I_m ].
Further, if only slack variables are added so that B = I_m, then C_B = (c_B1, c_B2, ..., c_Bm) = (0, 0, ..., 0) = 0, and
B1^(-1) = [ 1  0 ; 0  I_m ] = I_{m+1}.
Since any α_j^(1) not in the basis matrix B1 can be expressed as a linear combination of the columns of B1, we have α_j^(1) = B1 Y_j^(1), so that
Y_j^(1) = B1^(-1) α_j^(1) = [ 1  C_B B^(-1) ; 0  B^(-1) ] [ −c_j ; α_j ]
        = [ −c_j + C_B B^(-1) α_j ; B^(-1) α_j ]
        = [ −c_j + C_B Y_j ; Y_j ]        (since B^(-1) α_j = Y_j = [ y_1j, y_2j, ..., y_mj ])
        = [ −c_j + Z_j ; Y_j ]            (since Z_j = C_B Y_j)
or Y_j^(1) = [ −Δj ; Y_j ]                (since Δj = c_j − Z_j).      ...(2)
Thus the great advantage of treating the objective function as one of the constraints is that −Δj = Z_j − c_j, for any α_j^(1) not in the basis B1, can be easily computed by taking the product of the first row of B1^(-1) with α_j^(1).
If x_B^(1) is the basic solution corresponding to the basis matrix B1, then
x_B^(1) = B1^(-1) b^(1) = [ 1  C_B B^(-1) ; 0  B^(-1) ] [ 0 ; b ]
        = [ C_B (B^(-1) b) ; B^(-1) b ]
        = [ C_B x_B ; x_B ]               (since B^(-1) b = x_B)
or x_B^(1) = [ Z ; x_B ].                 ...(4)
x B (1) given by (4) is the B.S. not necessarily B.F.S., since Z may be negative also. From
(4) we conclude that the first component of x B(1) immediately gives us the value of the
objective function while the second component x B gives the B.F.S. (corresponding basis
B) of the original constraint system A x = b.
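Relations (2) and (4) can be illustrated with a few lines of NumPy. The sketch below assumes a small problem already written in standard form I, with the slack columns taken as the starting basis (so B = I and C_B = 0); the data and variable names are illustrative only.

```python
import numpy as np

# Illustrative data: c, A, b of a problem in standard form I,
# with the current basis taken as the slack columns (B = I, C_B = 0).
c = np.array([1.0, 2.0, 0.0, 0.0, 0.0])            # prices (slacks priced 0)
A = np.array([[1.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([3.0, 5.0, 6.0])

m = A.shape[0]
B1_inv = np.eye(m + 1)                              # initial B1^(-1) = I_{m+1}

# alpha_j^(1) = [-c_j, a_1j, ..., a_mj]; b^(1) = [0, b]
alpha1 = lambda j: np.concatenate(([-c[j]], A[:, j]))
b1 = np.concatenate(([0.0], b))

# Relation (2): the first component of Y_j^(1) is -Delta_j
Y1_1 = B1_inv @ alpha1(0)
print("Delta_1 =", -Y1_1[0])                        # 1.0 here

# Relation (4): the first component of x_B^(1) is Z, the rest is x_B
xB1 = B1_inv @ b1
print("Z =", xB1[0], " x_B =", xB1[1:])
```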
Step 3 : To find the initial basic feasible solution and the basis matrix B1
In this step we proceed to obtain the initial basis matrix B1 as an identity matrix.
Now we construct the simplex table for revised simplex method as follows :
Z 1 0 0 ... 0 0 Z k − ck
= − ∆k
x B1 0 1 0 ... 0 b1 y1k
x B2 0 0 1 ... 0 b2 y2 k
M M M M M M M
x Bm 0 0 0 ... 1 bm ymk
This is done by computing ∆ j for all α j(1) not in the basis B1 by the formula.
Step 6 : To find the vectors entering (incoming) and leaving (outgoing) the basis
(i) To find the incoming vector : The incoming vector is taken as α_k^(1) if Δ_k = Max_j { Δj } over those j for which α_j^(1) is not in the basis B1.
The vector β r(1) to be removed from the basis is determined by using minimum ratio rule.
It is taken corresponding to that value of r for which
x_Br / y_rk = Min_i { x_Bi / y_ik : y_ik > 0 },
where α k (1) is the incoming vector and Y k (1) is the column vector, corresponding to α k (1).
When α k (1) is the incoming vector and β r(1) the outgoing vector the element yrk is called
the key element.
method. Bringing α k (1) in place of β r(1), we construct the new (revised) simplex table.
Step 8 : Now test the above improved B.F.S. for optimality as in step 5
If this solution is not optimal then repeat steps (6) and (7) until an optimal solution is
finally obtained.
For the clear understanding of the procedure, few illustrative examples are given here.
subject to x1 + x2 ≤ 3
x1 + 2 x2 ≤ 5
3 x1 + x2 ≤ 6
x1, x2 ≥ 0 .
Max. Z = x1 + 2 x2 + 0. x3 + 0. x4 + 0. x5
subject to Z − x1 − 2 x2 + 0. x3 + 0. x4 + 0. x5 = 0
x1 + x2 + x3 + 0. x4 + 0. x5 = 3
x1 + 2 x2 + 0. x3 + 1. x4 + 0. x5 = 5
3 x1 + x2 + 0. x3 + 0. x4 + x5 = 6
x1, x2 ,......, x5 ≥ 0
Since here unit matrix I4 is obtained without the use of artificial variables so the problem
is of standard form I.
B1−1 Mini
Ratio
Variables β0(1) β1(1) β2(1) β3(1) Solution Yk(1) = Y2(1)
x B Y2
in the e1 α 3(1) α 4(1) α 5(1) x B(1) = B1−1 α 2 (1)
basis yi2 > 0
Z 1 0 0 0 0 −2 = − ∆2
x3 ( x B1) 0 1 0 0 3 1 ( y12 ) 31
x4 ( x B2 ) 0 0 1 0 5 2 ( y22 ) 5 2 (Mini)
→
x5 ( x B3 ) 0 0 0 1 6 1 ( y32 )
61
↓
outgoing vector
(Δ1, Δ2) = −(first row of B1^(-1)) [ α1^(1), α2^(1) ] = −(1, 0, 0, 0) [ −1  −2 ; 1  1 ; 1  2 ; 3  1 ] = [1, 2].
∴ Δ1 = 1, Δ2 = 2.
i.e., α 2(1) is the vector that must enter the basis i.e., the variables x2 will enter the
B.F.S.
Now x_Br / y_r2 = Min_i { x_Bi / y_i2 : y_i2 > 0 } = Min { 3/1, 5/2, 6/1 } = 5/2 = x_B2 / y_22, ∴ r = 2.
Hence β2(1) (α 4(1)), is the outgoing vector and so y22 = 2 is the key element.
Since y22 = 2 is the key element (marked in the table), we multiply the third row of Y2^(1) by 1/2 and then add 2, −1 and −1 times of the row thus obtained to the first, second and fourth rows respectively.
                       B1^(-1)                    Solution
Z            1     0      1      0        5
x3 (x_B1)    0     1     −1/2    0        1/2
x2 (x_B2)    0     0      1/2    0        5/2
x5 (x_B3)    0     0     −1/2    1        7/2
i.e., Z = 5 ; x2 = 5/2, x3 = 1/2, x5 = 7/2.
Now we see that α1(1) and α 4(1) are two columns corresponding to the variables not in the
basis, therefore, we compute ∆1 and ∆4 .
(Δ1, Δ4) = −(first row of B1^(-1)) [ α1^(1), α4^(1) ] = −(1, 0, 1, 0) [ −1  0 ; 1  0 ; 1  1 ; 3  0 ] = [0, −1].
∴ Δ1 = 0 and Δ4 = −1.
Since no Δj > 0, the solution is optimal :
x1 = 0, x2 = 5/2 and Max. Z = 5.
Note : Δ1 = 0 indicates that the problem has an alternative optimal solution also.
subject to 2 x1 + 3 x2 − x3 + 4 x4 ≤ 40
−2 x1 + 2 x2 + 5 x3 − x4 ≤ 35
x1 + x2 − 2 x3 + 3 x4 ≤ 100
and x1 ≥ 2, x2 ≥ 1, x3 ≥ 3, x4 ≥ 4.
Solution: Step 1 : Since the lower bounds of the variables are not zero
∴ we substitute x1 = y1 + 2, x2 = y2 + 1, x3 = y3 + 3, x4 = y4 + 4 in the given L.P.P.
which reduces to the following form :
Max. Z ′ = Z − 41 = 3 y1 + y2 + 2 y3 + 7 y4
subject to 2 y1 + 3 y2 − y3 + 4 y4 ≤ 20
−2 y1 + 2 y2 + 5 y3 − y4 ≤ 26
y1 + y2 − 2 y3 + 3 y4 ≤ 91
and y1 ≥ 0, y2 ≥ 0, y3 ≥ 0, y4 ≥ 0.
Max. Z ′ = 3 y1 + y2 + 2 y3 + 7 y4
s.t. Z ′ − 3 y1 − y2 − 2 y3 − 7 y4 + 0. y5 + 0. y6 + 0. y7 = 0
2 y1 + 3 y2 − y3 + 4 y4 + 1. y5 + 0. y6 + 0. y7 = 20
−2 y1 + 2 y2 + 5 y3 − y4 + 0. y5 + 1. y6 + 0. y7 = 26
y1 + y2 − 2 y3 + 3 y4 + 0. y5 + 0. y6 + 1. y7 = 91.
Since here unit matrix I4 is obtained without the use of the artificial variables, so the
problem is of standard form I.
Z′
β1(1) β2(1) β3(1) y
β 0 (1) 1
e1 α1(1) α 2(1) (1)
α3 α4 (1)
α 5(1) α 6(1) α 7(1) y2 0
1 −3 −1 −2 −7 0 0 0 y 20
3 =
0 2 3 −1 4 1 0 0 y4 26
y 91
0 −2 2 5 −1 0 1 0 5
0 1 1 −2 3 0 0 1 y6
y
7
Here x B(1) = [0, 20, 26, 91] is the initial B.F.S. and basis matrix B1 is given by
B1 = [β0 (1), β1(1), β2(1) , β3(1)] = (e1, α 5(1), α 6(1), α 7(1)] = I4 , which is a unit matrix.
∴ B1−1 = I4 −1 = I4 .
Z′ 1 0 0 0 0 −7
y5 ( x B1) 0 1 0 0 20 4 5
(Mini) →
y6 ( x B2 ) 0 0 1 0 26 –1
y7 ( x B3 ) 0 1 0 1 91 3 91 3
↓
Outgoing vector
−3 −1 −2 −7
2 2 −1 4
= − (1, 0, 0, 0) . = [3, 1, 2, 7]
−2 2 5 −1
1 1 −2 3
∴ ∆1 = 3, ∆2 = 1, ∆3 = 2, ∆4 = 7,
Since ∆ k = Max. ∆ j = 7 = ∆4 ∴ k =4
Now
x Br Mini x Bi
= , y > 0
yr4 i yi4 i4
20 91 20 x B1
= Mini , − , = = ∴ r =1
4 3 4 y14
In order to bring α4^(1) in place of β1^(1) (= α5^(1)) in B1, we divide the second row by 4 and then add its 7, 1 and −3 times to the first, third and fourth rows respectively.
Z′ 1 74 0 0 35 −15 4
y4 ( x B1) 0 14 0 0 5 −1 4
124 19
y6 ( x B2 ) 0 14 1 0 31 19 4 (Mini) →
y7 ( x B3 ) 0 −3 4 0 1 76 −5 4
↓
Outgoing Vector
Step 8 : Test of optimality for the solution given in the above table
−3 −1 −2 0
2 3 −1 1
7 1 17 15 7
= − , − , , −
= − 1, , 0, 0 .
4 −2 2 5 0 2 4 4 4
1 1 −2 0
∴ ∆1 = −1 2 , ∆2 = −17 4 , ∆3 = 15 4 , ∆5 = −7 4.
Since ∆ k = Max. ∆ j = 15 4 = ∆3
−15 −1 19 −5
Y3(1) = B1−1. α 3(1) = , , ,
4 4 4 4
∴ Key element = 19 4.
In order to bring α 3(1) in place of β2(1) (= α 6(1)) in the basis B1, we divide the third row by
19 4 and then add its 15 4, 1 4 and 5 4 times in first, second and fourth rows respectively.
Thus the simplex table giving the next improved solution is as follows :
Z′ 1 37 19 15 19 0 1130 19 −13 19
↓
Outgoing Vector
Step 11 : Test of optimality for the solution given in the above table
−3 −1 0 0
37 15 2 3 1 0 13 −122 −37 −15
= − 1, , ,0 . = , , ,
19 19 −2 2 0 1 19 19 19 19
1 1 0 0
Since ∆ k = Max ∆ j = 13 19 = ∆1
∴ k = 1 i.e., α1(1) is the vector entering the basis. The column vector Y1(1) corresponding
to α1(1) is given by
−13 8 −6 −7
Y1(1) = B1−1. α1(1) = , , ,
19 19 19 19
By minimum ratio rule we find that β1(1) (= α 4(1)), is the outgoing vector. ∴ key element
= y14 = 8 19.
In order to bring α1(1) in place of β1(1)(= α 4(1)), we divide second row by 8 19 then add its
13 19, 6 19 and 17 19 times in first, third and fourth rows respectively.
Variables in B1−1
the basis
β0(1) β1(1) β2(1) β3(1) Solution
e1 α1(1) α 3(1) α 7(1) x B(1)
Z′ 1 19 8 78 0 281 4
y1 ( x B1) 0 58 18 0 63 4
y3 ( x B2 ) 0 14 14 0 23 2
y7 ( x B3 ) 0 −1 8 38 1 393 4
Step 14 : Test of optimality for the solution given in the above table :
−1 −7 0 0
0 −63 −13 −19 −7
19 7 3 4 1 =
= − 1, , , 0 . , , ,
8 8 2 −1 0 1 8 8 8 8
1 3 0 0
−63 −13 −19 −7
∴ ∆2 = , ∆4 = , ∆5 = , ∆6 = .
8 8 8 8
x1 = y1 + 2 = 71/4, x2 = y2 + 1 = 1, x3 = y3 + 3 = 29/2, x4 = y4 + 4 = 4, and Max. Z = Z′ + 41 = 281/4 + 41 = 445/4.
4. Max. Z = 2 x1 + x2 5. Max. Z = 3 x1 + 2 x2 + 5 x3
subject to 3 x1 + 4 x2 ≤ 6 subject to x1 + 2 x2 + x3 ≤ 430
6 x1 + x2 ≤ 3 3 x1 + 2 x3 ≤ 460
x1, x2 ≥ 0 x1 + 4 x2 ≤ 420
x1, x2 , x3 ≥ 0
6. Max. Z = 5 x1 + 3 x2 7. Max. Z = 6 x1 − 2 x2 + 3 x3
subject to 3 x1 + 5 x2 ≤ 15 subject to 2 x1 − x2 + 2 x3 ≤ 2
5 x1 + 2 x2 ≤ 10 x1 + 4 x3 ≤ 4
x1, x2 ≥ 0 x1, x2 , x3 ≥ 0
8. Max. Z = x1 + x2 + 3 x3 9. Max. Z = 3 x1 + 5 x2
subject to 3 x1 + 2 x2 + x3 ≤ 3 subject to x1 ≤ 4, x2 ≤ 6
2 x1 + x2 + 2 x3 ≤ 2 3 x1 + 2 x2 ≤ 18
x1, x2 , x3 ≥ 0 x1, x2 ≥ 0
4. x1 = 2/7, x2 = 9/7, max. Z = 13/7
5. x1 = 0, x2 = 100, x3 = 230, max. Z = 1350
6. x1 = 20/19, x2 = 45/19, max. Z = 235/19
7. x1 = 4, x2 = 6, x3 = 0, max. Z = 12
8. x1 = 0, x2 = 0, x3 = 1, max. Z = 3
9. x1 = 2, x2 = 6, max. Z = 36
Phase I : To simplify the notation, we assume that the initial matrix does not contain any unit vector, i.e., that the starting basis of the original problem consists entirely of the artificial vectors corresponding to the artificial variables introduced in all the m constraints. Here we consider one more objective function Z_a, known as the artificial objective function, defined as
Z_a = − x_1a − x_2a − ... − x_ma,
where x_1a, x_2a, ..., x_ma are the artificial variables introduced in the first, second, ..., and m-th constraints respectively. In the objective function Z_a the prices of all the artificial vectors are taken as −1.
Here in this case we have two objective functions, so in place of considering the problem
with (m + 1) constraints (as in standard form I) we have to consider the problem in the
revised form with (m + 2) constraints in which m constraints correspond to the constraints
of the given problem and the other two constraints correspond to each of the objective
functions Z and Z a .
Thus in standard form II of the revised simplex method, the system of constraint
equations can be written as
x j ≥ 0, x ia ≥ 0, j = 1, 2,..., n, i = 1, 2,..., m.
Since all the artificial variables x_ia ≥ 0, in phase I our problem is first to maximize Z_a subject to the constraint equations (1), with Z_a and Z both unrestricted in sign. Two cases arise :
1. Max. Z_a = 0 : In this case all the artificial variables are zero, and the values of the original variables x1, x2, ..., x_n for this max. Z_a form the B.F.S. in phase I with which we shall start in phase II.
2. Max. Z_a < 0 : In this case it is clear that at least one artificial variable has a positive value. Hence in this case there exists no F.S. of the original problem.
Phase II : After driving all the artificial variables equal to zero in phase I we get a B.F.S.
of the problem.
We enter phase II with this B.F.S. obtained in phase I and proceed to get the optimal
solution exactly similarly as in revised simplex method in standard form I.
...(2)
or [e1(2 ), e2(2 ), α1(2 ), α 2(2 ) ..., α n(2 ), α n + 1(2 ), α n + 2(2 ), ..., α n + m(2 )] x B(2 ) = b(2 )
for j = n + 1, n + 2,..., n + m
Note that α j(2 ), j = n + 1,..., n + m are the columns of the coeffs. of artificial variables
x1a ,..., x ma .
e1(2 ) and e2(2 ) are (m + 2) component column vectors corresponding to the two
objective functions Z and Z a respectively.
               Z    Z_a    x_1a   x_2a   ...   x_ma
B2 = [ e1^(2), e2^(2), α_{n+1}^(2), α_{n+2}^(2), ..., α_{n+m}^(2) ]
   = [ 1    0      0      0     ...    0
       0    1      1      1     ...    1
       0    0      1      0     ...    0
       0    0      0      1     ...    0
       ..   ..     ..     ..    ...    ..
       0    0      0      0     ...    1 ]
   = [ 1  0  −C_B ; 0  1  −C_Ba ; 0  0  B ],      ...(3)
where −C_Ba = 1_m = (1, 1, ..., 1)
and c_j, c_Baj (the components of C_B and C_Ba) are the prices of the basic variables x_1a, x_2a, ..., x_ma in the given objective function Z and in the artificial objective function Z_a respectively.
We have
B2 = [ 1  0  −C_B ; 0  1  −C_Ba ; 0  0  B ] = [ I_2   −C_B^(2) ; 0   B ],
so that
B2^(-1) = [ I_2   C_B^(2) B^(-1) ; 0   B^(-1) ]      ...(4)
i.e., B2^(-1) = [ 1  0  C_B B^(-1) ; 0  1  C_Ba B^(-1) ; 0  0  B^(-1) ].
In particular, when B = I_m,
B2^(-1) = [ 1  0  C_B ; 0  1  C_Ba ; 0  0  I_m ],      since B^(-1) = I_m and C_B B^(-1) = C_B I_m = C_B.
4. Properties of B2^(-1)
(i) For any column a_j^(2) = [ −c_j, 0, a_j ] not in the basis (the price of an original variable in Z_a being zero), we have
B2^(-1) a_j^(2) = [ 1  0  C_B B^(-1) ; 0  1  C_Ba B^(-1) ; 0  0  B^(-1) ] [ −c_j ; 0 ; a_j ]
   = [ −c_j + C_B B^(-1) a_j ; 0 + C_Ba B^(-1) a_j ; B^(-1) a_j ]
   = [ C_B Y_j − c_j ; C_Ba Y_j − 0 ; Y_j ]        (since B^(-1) a_j = Y_j = [ y_1j, ..., y_mj ])
   = [ Z_j − c_j ; Z_ja − c_ja ; Y_j ] = [ −Δj ; −Δ_ja ; Y_j ],
where Z_j − c_j = −Δj and Z_ja − c_ja = −Δ_ja.
(ii) We have
B2^(-1) b^(2) = [ 1  0  C_B B^(-1) ; 0  1  C_Ba B^(-1) ; 0  0  B^(-1) ] [ 0 ; 0 ; b ]
   = [ C_B B^(-1) b ; C_Ba B^(-1) b ; B^(-1) b ]
   = [ C_B x_B ; C_Ba x_B ; x_B ] = [ Z ; Z_a ; x_B ]      (since B^(-1) b = x_B).
Thus, when we multiply the first row of B2^(-1) with b^(2) we get Z (the value of the objective function), when we multiply the second row of B2^(-1) with b^(2) we get Z_a (the value of the artificial objective function), and when we multiply the last m rows of B2^(-1) with b^(2) we get x_B, the solution of the original problem.
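The (m + 2)-row bookkeeping of properties (i) and (ii) can be sketched numerically as follows. This is a minimal illustration that assumes an all-artificial starting basis, so B = I_m, C_B = 0 and C_Ba = (−1, ..., −1); the array names are illustrative.

```python
import numpy as np

m = 2
C_B  = np.zeros(m)             # prices of the artificial variables in Z
C_Ba = -np.ones(m)             # prices of the artificial variables in Z_a
B_inv = np.eye(m)              # B = I_m at the start

# B2^(-1) as in relation (4) of article 6.7
B2_inv = np.block([
    [np.eye(2), np.vstack([C_B @ B_inv, C_Ba @ B_inv])],
    [np.zeros((m, 2)), B_inv],
])

b = np.array([6.0, 2.0])
b2 = np.concatenate(([0.0, 0.0], b))

out = B2_inv @ b2
print("Z   =", out[0])         # value of the objective function
print("Z_a =", out[1])         # value of the artificial objective function
print("x_B =", out[2:])        # current basic solution
```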
Phase I
Step 1 : To express the given problem in revised simplex form standard II
Convert the given L.P.P. (maximization problem) in the form of revised simplex method
standard II as in article 6.6.
Step 2 : To find the initial basic feasible solution and the basis matrix B2
In this step we proceed to obtain the initial basis matrix B2 and its inverse B2 −1 by
relations (3) and (4) of article 6.7 respectively.
Then the first B.F.S. x B is obtained by multiplying last m rows of B 2−1 by b(2 ).
x B(2 ) = [0, 0, x B ]
Now we construct the simplex table for revised simplex method as follows :
Z 1 M 0
Za 0 M 1
x B1 0 M 0
M M M
x Bi 0 M 0
M M M M
M M M M
x Bm 0 0
Step 4 : (i) If Z a = 0, then we conclude that all artificial variables are zero and so go to
phase II.
(ii) If Z a < 0, we compute ∆ ja for all α j(2 ) not in the basis B2 , by the formula
When Z a < 0 and all ∆ ja ≤ 0, then Z a is max and hence no feasible solution exists. If at
least one ∆ ja > 0, then we proceed to find entering and leaving vectors.
The entering vector is taken as α_k^(2) if Δ_ka = Max_j { Δ_ja } over those j for which α_j^(2) is not in the basis B2.
The vector β_r^(2) to be removed from the basis is determined by the minimum ratio rule; it is taken corresponding to that value of r for which
x_Br / y_rk = Min_i { x_Bi / y_ik : y_ik > 0 }.
After determining the entering and outgoing vectors; we get the next revised simplex
table as usual.
If Max. Z_a = 0, then we conclude that all the artificial variables are zero and so we proceed to phase II.
If all Δ_ja (for phase I) are ≤ 0 and Max. Z_a < 0, then no feasible solution exists.
Phase II : In phase II, Z_a is treated like an artificial variable, so it is removed from the basic solution. Thus we remove the second row and the second column from the constraints (2) of article 6.7. The reason is that in phase II we deal with the original objective function Z, which is to be maximized, and so the prices of all the artificial variables are zero.
Example : Min. Z = x1 + 2 x2
subject to 2 x1 + 5 x2 ≥ 6
x1 + x2 ≥ 2
and x1, x2 ≥ 0.
Solution : Converting the problem into a maximization problem, we take
Max. Z′ = − Z = − x1 − 2 x2.
Here all bi ≥ 0.
Introducing the surplus variables x3 , x4 and artificial variables x1a , x2 a the constraints of
the given L.P.P. reduce to
2 x1 + 5 x2 − x3 + x1a =6
x1 + x2 − x4 + x2 a =2
Since artificial variables are needed to get an identity matrix so this problem will be
solved by two phase method.
Phase I
Step 1 : To express the given problem in revised simplex form standard II
First we convert the given L.P.P. in the revised simple form standard II and get the
following constraints :
Z ′ + 1. x1 + 2. x2 + 0. x3 + 0. x4 + 0. x1a + 0. x2 a = 0
Za + x1a + x2 a =0
2 x1 + 5 x2 − x3 + x1a =6
x1 + x2 − x4 + x2 a =2
x j ≥ 0, x ia ≥ 0, j = 1, 2, 3, 4,; i = 1, 2
where C_B = (0, 0), C_Ba = (−1, −1), B = [ 1  0 ; 0  1 ] = I_2.
Since B^(-1) = I_2^(-1) = I_2,
∴ B2^(-1) = [ 1  0  C_B B^(-1) ; 0  1  C_Ba B^(-1) ; 0  0  I_2 ] = [ 1  0  C_B ; 0  1  C_Ba ; 0  0  I_2 ]
 = [ 1   0   0   0
     0   1  −1  −1
     0   0   1   0
     0   0   0   1 ].
Now B2^(-1) b^(2) = [ 1  0  0  0 ; 0  1  −1  −1 ; 0  0  1  0 ; 0  0  0  1 ] [ 0 ; 0 ; 6 ; 2 ] = [ 0 ; −8 ; 6 ; 2 ] = x_B^(2).
Step 3 : Construction of the starting simplex table for revised simplex method
Z′ 1 0 0 0 0 2
Za 0 1 −1 −1 −8 −6
x1a ( x B1) 0 0 1 0 6 5 6
(Mini) →
x2 a ( x B2 ) 0 0 0 1 2 1 5
2
↓
Outgoing vector
(Δ_1a, Δ_2a, Δ_3a, Δ_4a) = −(second row of B2^(-1)) [ a_1^(2), a_2^(2), a_3^(2), a_4^(2) ]
 = −(0, 1, −1, −1) [ 1  2  0  0 ; 0  0  0  0 ; 2  5  −1  0 ; 1  1  0  −1 ] = [3, 6, −1, −1].
∴ Δ_1a = 3, Δ_2a = 6, Δ_3a = −1, Δ_4a = −1.
Since few ∆ ja > 0, therefore we can improve Z a . So we find the entering and outgoing
vectors to improve Za.
∴ k = 2, i.e., α 2(2 ) is the entering vector and x2 the variable entering in B.F.S.
∴ We compute
x_Br / y_r2 = Min_i { x_Bi / y_i2 : y_i2 > 0 } = Min { 6/5, 2/1 } = 6/5 = x_B1 / y_12, ∴ r = 1.
Step. 5 : To find the improved solution : In order to bring α 2(2 ) in place of β1(2 ) in the
basis matrix, we divide the third row by 5 and then add −2, 6 and −1 time of it in the first,
second and fourth rows respectively. Thus the next simplex table is as follows :
Z′ 1 0 −2 5 0 −12 5 15
Za 0 1 15 −1 −4 5 −3 5
x2 ( x B1) 0 0 15 0 65 25 3
4
x2 a ( x B2 ) 0 0 −1 5 1 45 35 (Mini) →
3
Since Z_a = −4/5 < 0, we again compute Δ_ja for the columns not in the basis :
(Δ_1a, Δ_3a, Δ_4a, Δ_5a) = −(0, 1, 1/5, −1) [ 1  0  0  0 ; 0  0  0  1 ; 2  −1  0  1 ; 1  0  −1  0 ] = [ 3/5, 1/5, −1, −6/5 ].
∴ Δ_1a = 3/5, Δ_3a = 1/5, Δ_4a = −1, Δ_5a = −6/5.
Since some Δ_ja > 0, we proceed to find the entering and outgoing vectors to improve Z_a.
Δ_ka = Max. Δ_ja = 3/5 = Δ_1a, so α_1^(2) is the entering vector.
Applying the minimum ratio rule we find that β_2^(2) (= α_6^(2)) is the outgoing vector.
∴ Key element = 3/5.
Z′           1    0    −1/3    −1/3    −8/3
Z_a          0    1     0       0       0
x2 (x_B1)    0    0     1/3    −2/3     2/3
x1 (x_B2)    0    0    −1/3     5/3     4/3
Since Z a = 0, from which we come to the conclusion that all artificial variables are driven
to zero.
∴ Now we enter the phase II and forget all the artificial variables.
Phase II
Now we enter phase II to maximize Z ′. For this we remove Z a from the above table, i.e.,
we construct the following table in phase II by removing second row and second column
of the above table :
Z′           1    −1/3    −1/3    −8/3
x2 (x_B1)    0     1/3    −2/3     2/3
x1 (x_B2)    0    −1/3     5/3     4/3
For the columns α3, α4 not in the basis,
(Δ3, Δ4) = −(1, −1/3, −1/3) [ 0  0 ; −1  0 ; 0  −1 ] = [ −1/3, −1/3 ].
∴ Δ3 = −1/3, Δ4 = −1/3.
Since all Δj ≤ 0, the optimal solution is
x1 = 4/3, x2 = 2/3, Max. Z′ = −8/3,
∴ Min. Z = − Z′ = 8/3.
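The result of this two-phase computation can be verified with SciPy's linprog (a minimal check, with the ≥ constraints multiplied by −1 because linprog expects ≤ form):

```python
from scipy.optimize import linprog

# Min Z = x1 + 2x2 subject to 2x1 + 5x2 >= 6, x1 + x2 >= 2, x >= 0
res = linprog(c=[1, 2],
              A_ub=[[-2, -5], [-1, -1]],
              b_ub=[-6, -2],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # approximately [4/3, 2/3] and Min Z = 8/3
```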
6.9.2 Disadvantages
While solving a numerical problem by the revised simplex method, the original column vectors not in the basis matrix are also required. Thus there may be more computational mistakes in the revised simplex method than in the simplex method.
1. Describe the revised simplex method, when artificial vectors are added to obtain an
identity matrix for the initial basis matrix.
2. Min. Z = 2 x1 + x2 3. Max. Z = 5 x1 + 3 x2
subject to 3 x1 + x2 ≤ 3 subject to 4 x1 + 5 x2 ≥ 10
4 x1 + 3 x2 ≥ 6 5 x1 + 2 x2 ≤ 10
x1 + 2 x2 ≤ 3 3 x1 + 8 x2 ≤ 12
x1, x2 ≥ 0 x1, x2 ≥ 0
4. Min. Z = x1 + x2 5. Max. Z = −5 x2
subject to x1 + 2 x2 ≥ 7 subject to x1 + x2 ≤ 2
4 x1 + x2 ≥ 6 x1 + 5 x2 ≥ 10
x1, x2 ≥ 0 x1, x2 ≥ 0
(c) B (d) B2
Exercise
2. x1 = 3/5, x2 = 6/5, min. Z = 12/5
3. x1 = 28/17, x2 = 15/17, max. Z = 185/17
4. x1 = 5/7, x2 = 22/7, min. Z = 27/7
5. x1 = 0, x2 = 2, max. Z = −10
7.1 Introduction
The optimal solution of a linear programming problem, Max. Z = c x, subject to A x = b and x ≥ 0, depends upon the parameters a_ij, b_i and c_j. So far we have
assumed that these parameters are given (constant), but there are problems where these
are not constant and vary from time to time. For example, in a diet-problem the cost of
any individual feed will vary from time to time. Here in this chapter we are interested to
know the limits of variations of these parameters so that the solution remains optimal
feasible. In other words, we wish to see the sensitiveness of the optimal feasible solution
corresponding to the variations of these parameters.
The investigations that deal with changes in the optimal solutions due to discrete variations in the
parameters aij , bi and c j are called sensitivity analysis.
The purpose of this chapter, i.e., the objective of sensitivity analysis, is to find how to reduce to a minimum the additional computational effort which arises in solving the changed problem as a new one. In most cases it is not necessary to solve the problem again; a relatively small amount of calculation applied to the old optimal solution will be sufficient.
In this chapter we shall discuss the following changes (variations) in the L.P.P. :
1. Variations in the price vector c.
2. Variations in the requirement vector b.
3. Variations in the elements a_ij of the coefficient matrix A.
For a fixed basis B the solution x_B = B^(-1) b does not involve the price vector c, so a change in c will not change x_B, i.e., x_B will always remain a basic feasible solution.
The condition of optimality for the solution x_B is Δj = c_j − Z_j ≤ 0 for all j not in the basis, which is satisfied before any change in c_j. But when c_j changes, the condition of optimality may no longer be satisfied. The change in the price vector c may be made in the following two ways :
1. Variation in c j ∉ c B (i.e., change in c j which is the price of the non-basic
variable x j ).
If ∆ ck is the change in the cost ck and ck is not present in c B (the vector of the costs
associated to basic variables) then c B remains unaltered with this change. Also there is no
change in B.
∴ We must have
(ck + ∆ ck ) − Z k ≤ 0
or ∆ ck ≤ Z k − ck (= − ∆ k ). ...(1)
We know that
Z_j = c_B B^(-1) α_j = c_B Y_j = Σ_{i=1}^{m} c_Bi y_ij.
2. Variation in c_Bk ∈ c_B (i.e., change in the price of the basic variable x_Bk).
Let Δc_Bk be the change in c_Bk, the price of the basic variable x_Bk. If Z_j* is the new value of Z_j, then
Z_j* = Σ_{i=1, i≠k}^{m} c_Bi y_ij + (c_Bk + Δc_Bk) y_kj
     = Σ_{i=1}^{m} c_Bi y_ij + y_kj Δc_Bk = Z_j + y_kj Δc_Bk.
The solution x_B will remain optimal for the change Δc_Bk in c_Bk if c_j − Z_j* ≤ 0 for all j not in the basis,
i.e., (c_j − Z_j) − y_kj Δc_Bk ≤ 0, or y_kj Δc_Bk ≥ c_j − Z_j.
∴ Δc_Bk ≥ (c_j − Z_j) / y_kj, for y_kj > 0,
and Δc_Bk ≤ (c_j − Z_j) / y_kj, for y_kj < 0.      ...(2)
Hence, the range of Δc_Bk (the change in the price c_Bk of the basic variable x_Bk) so that the solution remains optimal is given by
Max_{y_kj > 0} (c_j − Z_j) / y_kj ≤ Δc_Bk ≤ Min_{y_kj < 0} (c_j − Z_j) / y_kj.      ...(3)
If no y_kj > 0, there is no lower bound to Δc_Bk, and if no y_kj < 0, there is no upper bound to Δc_Bk.
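Relation (3) reduces to a small computation on the Δ_j row and the k-th row of the final table. The sketch below is a generic helper (names are illustrative); the sample call uses the numbers that arise for c1 in the second worked example later in this article.

```python
import numpy as np

def cost_range_basic(delta_j, y_kj):
    """Range of Delta c_Bk keeping the current basis optimal.

    delta_j : Delta_j = c_j - Z_j for the non-basic columns
    y_kj    : corresponding entries of row k of the final simplex table
    """
    lower = [d / y for d, y in zip(delta_j, y_kj) if y > 0]
    upper = [d / y for d, y in zip(delta_j, y_kj) if y < 0]
    return (max(lower) if lower else -np.inf,
            min(upper) if upper else np.inf)

# k corresponds to x1 (c_B2 = 5); the non-basic columns are Y3, Y4
print(cost_range_basic(delta_j=[-5/19, -16/19], y_kj=[-2/19, 5/19]))
# (-16/5, 5/2), as obtained in the text
```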
The value of the objective function for the price vector c is given by
Z = c_B x_B = Σ_{i=1}^{m} c_Bi x_Bi.
When c_Bk ∈ c_B is changed to c_Bk + Δc_Bk, if Z* is the new value of the objective function, then
Z* = Σ_{i=1, i≠k}^{m} c_Bi x_Bi + (c_Bk + Δc_Bk) x_Bk
   = Σ_{i=1}^{m} c_Bi x_Bi + x_Bk Δc_Bk = Z + x_Bk Δc_Bk.
Hence, if Δc_Bk (the change in c_Bk) satisfies (3), the solution x_B will remain optimal and the value of the objective function will change by an amount x_Bk Δc_Bk, where x_Bk is the basic variable corresponding to c_Bk.
Note : If the variation in c_j is obtained as a ≤ c_j ≤ b, students are advised to check the optimality of the solution for the extreme values of c_j, i.e., for c_j = a and c_j = b. If the solution is not optimal for c_j = a, then the variation in c_j is given by a < c_j ≤ b; similarly for c_j = b.
Example : Max. Z = 3 x1 + 5 x2
subject to x1 + x2 ≤ 1
2 x1 + 3 x2 ≤ 1
and x1, x2 ≥ 0.
Obtain the variations in c_j (j = 1, 2) which are permitted without changing the optimal solution.
Solution: Introducing the slack variables x3 and x4, the given L.P.P. becomes
Max. Z = 3 x1 + 5 x2 + 0.x3 + 0.x4
subject to x1 + x2 + x3 = 1
2 x1 + 3 x2 + x4 = 1
x1, x2, x3, x4 ≥ 0.
The solution of the problem by simplex method is given in the following table :
cj 3 5 0 0 Min. Ratio
B c B (C B ) X B Y2 ,
x B (= X B ) Y1 Y2 Y3 Y4
yi2 > 0
Y3 0 1 1 1 1 0 11 =1
Y4 0 1 2 3 0 1 1 3 (min)
→
Z = c B. x B = 0 ∆j 3 5 0 0
↑ ↓
Y3 0 23 13 0 1 −1 3
Y2 5 13 23 1 0 13
Z = c B. x B = 5 3 ∆j −1 3 0 0 −5 3
From the last table the optimal solution is x1 = 0, x2 = 1/3 and Max. Z = 5/3.
From (1), article 7.2, the change Δc1 in c1, so that the solution remains optimal, is given by
Δc1 ≤ Z1 − c1 = −Δ1, i.e., Δc1 ≤ 1/3.
∴ The range over which c1 can vary maintaining the optimality of the solution given in the above table is
−∞ < c1 ≤ 3 + 1/3, i.e., −∞ < c1 ≤ 10/3.
The value of the objective function will not change in this case.
∴ From (3), article 7.2 the range of Δc_B2 (change in c_B2) is given by
Max_{y_2j > 0} (c_j − Z_j) / y_2j ≤ Δc_B2 ≤ Min_{y_2j < 0} (c_j − Z_j) / y_2j.
Here k = 2 and y_21 = 2/3 > 0, y_24 = 1/3 > 0. (We cannot consider y_22 and y_23, as Y2 (= α2 = β2) and Y3 (= α3 = β1) are in the basis.) There is no y_2j < 0.
∴ Max { Δ1 / y_21, Δ4 / y_24 } ≤ Δc_B2 < ∞,
or Max { (−1/3)/(2/3), (−5/3)/(1/3) } ≤ Δc_B2 < ∞,
or −1/2 ≤ Δc_B2 < ∞.
Hence the range of c2 maintaining optimality is 5 − 1/2 ≤ c2 < ∞, i.e., 9/2 ≤ c2 < ∞.
Example : Max. Z = 5 x1 + 3 x2
subject to 3 x1 + 5 x2 ≤ 15
5 x1 + 2 x2 ≤ 10
and x1, x2 ≥ 0.
Hence, find how far the component c1 of the vector c of the objective function Z = c x can be increased without destroying the optimality of the solution.
Solution: Introducing the slack variables x3 and x4, the given problem becomes
Max. Z = 5 x1 + 3 x2 + 0.x3 + 0.x4
subject to 3 x1 + 5 x2 + x3 = 15
5 x1 + 2 x2 + x4 = 10
and x1, x2, x3, x4 ≥ 0.
The solution of the problem by simplex method is given in the following table :
Y3 0 15 3 5 1 0 15 3 = 5
Y4 0 10 5 2 0 1 10 5 = 2 (min) →
Z = cB . x B = 0 ∆j 5 3 0 0 X B Y2 yi2 > 0
↑ ↓
Y3 0 9 0 19 5 1 −3 5 45 19 (min) →
Y1 5 2 1 25 0 15 5
Z = c B . x B = 10 ∆j 0 1 0 −1
↑ ↓
Y2 3 45 19 0 1 5 19 −3 19
Y1 5 20 19 1 0 −2 19 5 19
Z = cB . x B ∆j 0 0 −5 19 −16 19
= 235 19
From the last table the optimal solution is x1 = 20/19, x2 = 45/19 and Max. Z = 235/19. Here c1 = 5 is the price of the basic variable x1 = x_B2.
The range of variation Δc_B2 for the solution to remain optimal is given by
Max_{y_2j > 0} (c_j − Z_j) / y_2j ≤ Δc_B2 ≤ Min_{y_2j < 0} (c_j − Z_j) / y_2j.
Here y_23 = −2/19 < 0 and y_24 = 5/19 > 0. (We cannot consider y_21 and y_22, as Y1 (= α1 = β2) and Y2 (= α2 = β1) are in the basis.)
∴ (c4 − Z4) / y_24 ≤ Δc_B2 ≤ (c3 − Z3) / y_23,
or (−16/19)/(5/19) ≤ Δc_B2 ≤ (−5/19)/(−2/19),
or −16/5 ≤ Δc1 ≤ 5/2,
⇒ 5 − 16/5 ≤ c1 ≤ 5 + 5/2      [∵ given c1 = 5]
or 9/5 ≤ c1 ≤ 15/2,
which gives the variation in c1 without affecting the optimality of the solution.
Max. Z = − x2 + 3 x3 − 2 x5
subject to x1 + 3 x2 − x3 + 2 x5 = 7
−2 x2 + 4 x3 + x4 = 12
−4 x2 + 3 x3 + 8 x5 + x6 = 10
and x j ≥ 0 , j = 1, 2, . . . , 6
Find the variations of the costs c1, c2 , c3 , c4 , c5 and c6 for which the optimal solution
remains optimal.
Solution: The columns corresponding to the coefficients of x1, x4 and x6 form a unit
matrix, so x1, x4 , x6 may be taken as the basic variables and thus there is no need to
consider artificial variables.
The solution of the problem by simplex method is given in the following table.
cj 0 −1 3 0 −2 0 Min. Ratio
B C B(c B ) X B Y3 , yi3 > 0
X B (x B ) Y1 Y2 Y3 Y4 Y5 Y6
Y1 0 7 1 3 −1 0 2 0
Y4 0 12 0 −2 4 1 0 0 12 4 = 3 (min)
→
Y6 0 10 0 −4 3 0 8 1 10 3
Z = cB . x B = 0 ∆j 0 −1 3 0 −2 0 X B Y2
↑ ↓
Y1 0 10 1 52 0 14 2 0 4 (min) →
Y3 3 3 0 −1 2 1 14 0 0
Y6 0 1 0 −5 2 0 −3 4 8 1
Z = cB . x B = 9 ∆j 0 12 0 −3 4 −2 0
↓ ↑
Y2 −1 4 25 1 0 1 10 45 0
Y3 3 5 15 0 1 3 10 25 0
Y6 0 11 1 0 0 −1 2 10 1
Z = c B . x B = 11 ∆j −1 5 0 0 −4 5 −12 5 0
The optimal solution is x1 = 0, x2 = 4, x3 = 5, x4 = 0, x5 = 0, x6 = 11 and Max. Z = 11.
Since for c_k ∉ c_B, Δc_k ≤ Z_k − c_k (= −Δ_k),
∴ Δc1 ≤ −Δ1, i.e., Δc1 ≤ 1/5 ; Δc4 ≤ −Δ4, i.e., Δc4 ≤ 4/5 ; and Δc5 ≤ −Δ5, i.e., Δc5 ≤ 12/5.
∵ c1 = 0, c4 = 0 and c5 = −2,
∴ c1 ≤ 0 + Δc1, i.e., c1 ≤ 1/5 ; similarly c4 ≤ 4/5, c5 ≤ 2/5.
cj − Z j cj − Z j
Max. ≤ ∆cBk ≤ Min.
y kj > 0 ykj y kj < 0 ykj
cj − Z j cj − Z j
Max. ≤ ∆ cB1 ≤ Min.
y1j > 0 y1 j y1j < 0 y1 j
Here y11 = 2 5 > 0, y14 = 1 10 > 0, y15 = 4 5 > 0 no y1 j < 0. We cannot consider
y12 , y13 , y16 as Y2 (= α 2 = β1), Y3 (= α 3 = β2 ), Y6 (= α 6 = β3 ) are in the basis.
c − Z1 c4 − Z4 c5 − Z5
∴ Max. 1 , , ≤ ∆cB1 < ∞
y11 y14 y15
−1 5 −4 5 −12 5
Max. , , ≤ ∆c2 < ∞
2 5 1 10 4 5
cj − Z j cj − Z j
Max. ≤ ∆ cB2 ≤ Min.
y2 j > 0 y2 j y2 j < 0 y2 j
c − Z1 c4 − Z4 c5 − Z5
∴ Max. 1 , , ≤ ∆ cB2 < ∞
y21 y24 y25
−1 5 −4 5 −12 5
or Max. , , ≤ ∆ cB2 < ∞
1 5 3 10 2 5
cj − Z j cj − Z j
Max. ≤ ∆ cB3 ≤ Min.
y3 j > 0 y3 j y3 j < 0 y3 j
c − Z1 c5 − Z5 c − Z4
∴ Max. 1 , ≤ ∆ cB3 ≤ Min. 4
y
y31 y35 34
−1 5 −12 5 −4 5
or Max. , ≤ ∆ c6 ≤ Min.
1 10 −1 2
⇒ 0 −1 5 ≤ c6 ≤ 0 + 8 5 i.e., −1 5 ≤ c6 ≤ 8 5 Q c6 = 0
Hence, the variations of the costs c1, c2, ..., c6 for which the optimal solution remains optimal are as follows :
−∞ < c5 ≤ 2/5, −1/5 ≤ c6 ≤ 8/5.
Let the component b_l of the requirement vector b be changed to b_l + Δb_l. If the new requirement vector is b*, then
b* = [ b1, b2, ..., b_l + Δb_l, ..., b_m ]      (only the l-th component changes).
∴ x_B* = B^(-1) b* = B^(-1) b + Δb_l B^(-1) e_l = x_B + β_l Δb_l,      ...(1)
where β_l is the l-th column of B^(-1).
For the new solution x_B* to remain feasible we must have x_Bi + β_il Δb_l ≥ 0 for every i,
or β_il Δb_l ≥ − x_Bi.
∴ Δb_l ≥ − x_Bi / β_il, for β_il > 0,
and Δb_l ≤ − x_Bi / β_il, for β_il < 0.
Hence the range of Δb_l, so that the optimal solution x_B* also remains feasible, is given by
Max_{β_il > 0} ( − x_Bi / β_il ) ≤ Δb_l ≤ Min_{β_il < 0} ( − x_Bi / β_il ).      ...(2)
The value of the objective function for the requirement vector b is given by
Z = c_B x_B = Σ_{i=1}^{m} c_Bi x_Bi.
When b_l is changed to b_l + Δb_l, if Z* is the new value of the objective function, then
Z* = c_B x_B* = Σ_{i=1}^{m} c_Bi ( x_Bi + β_il Δb_l ) = Σ_{i=1}^{m} c_Bi x_Bi + Δb_l Σ_{i=1}^{m} c_Bi β_il = Z + Δb_l Σ_{i=1}^{m} c_Bi β_il.
Hence, if Δb_l (the change in b_l) satisfies (2), then the solution x_B* given by (1) is also an optimal feasible solution and the value of the objective function changes by an amount Δb_l Σ_{i=1}^{m} c_Bi β_il.
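Relation (2) can likewise be coded directly. The sketch below is a generic helper (names are illustrative); the sample data are the B^(-1) and x_B of the worked example that follows, and the call reproduces the range of Δb3 obtained there.

```python
import numpy as np

def rhs_range(B_inv, x_B, l):
    """Range of Delta b_l keeping the optimal basis feasible (l is 0-based)."""
    beta_l = B_inv[:, l]                       # l-th column of B^(-1)
    lower = [-x / b for x, b in zip(x_B, beta_l) if b > 0]
    upper = [-x / b for x, b in zip(x_B, beta_l) if b < 0]
    return (max(lower) if lower else -np.inf,
            min(upper) if upper else np.inf)

# B^(-1) and x_B taken from the final table of the example below
B_inv = np.array([[1.0, 0.0, -1.0],
                  [0.0, 0.0,  1.0],
                  [0.0, -1.0, 4.0]])
x_B = np.array([6.0, 4.0, 10.0])
print(rhs_range(B_inv, x_B, 2))                # (-5/2, 6) for Delta b_3
```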
Max. Z = − x1 + 2 x2 − x3
subject to 3 x1 + x2 − x3 ≤ 10
− x1 + 4 x2 + x3 ≥ 6
x2 + x3 ≤ 4
and x1, x2 , x3 ≥ 0 .
Find the separate range of b1, b2 and b3 (the constants on the right hand sides of the
constraints) consistent with the optimal solution.
Max. Z = − x1 + 2 x2 − x3 + 0. x4 + 0. x5 + 0. x6 − Mx a
subject to 3 x1 + x2 − x3 + x4 = 10
− x1 + 4 x2 + x3 − x5 + xa = 6
x2 + x3 + x6 =4
and x1,..., x7 ≥ 0
cB cj −1 2 −1 0 0 0 −M Min. Ratio
B
(C B ) X B Y2 ,
x B (X B) Y1 Y2 Y3 Y4 Y5 Y6 A1
yi2 > 0
Y4 0 10 3 1 −1 1 0 0 0 10 1 = 10
A1 −M 6 −1 4 1 0 −1 0 1 6 4 = 3 2(min)
→
Y6 0 4 0 1 1 0 0 1 0
4 1=4
Z = c B. x B ∆j −1 − M 2 + 4 M −1 + M 0 −M 0 0 X B Y5 yi5 > 0
= −6 M ↑ ↓
Y4 0 17 2 13 4 0 −5 4 1 14 0 −1 4 (17 2) (1 4) = 34
Y2 2 32 −1 4 1 14 0 −1 4 0 14
Y6 0 52 14 0 3 4 0 14 1 −1 4 (5 2) (1 4)
=10 (min)
→
Z = c B. x B ∆j −1 2 0 −3 2 0 12 0 −M
−1 2
=3 ↑ ↓
Y4 0 6 3 0 −2 1 0 −1 0
Y2 2 4 0 1 1 0 0 1 0
Y5 0 10 1 0 3 0 1 4 −1
Z = c B. x B ∆j −1 0 −3 0 0 −2 −M
=8
From the final table, x_B = [ x4, x2, x5 ] = [ x_B1, x_B2, x_B3 ] = [ 6, 4, 10 ]
and B^(-1) = ( β1, β2, β3 ) = [ 1  0  −1 ; 0  0  1 ; 0  −1  4 ].*
(* B^(-1) is obtained from the last simplex table; it consists of the columns corresponding to the initial basis (α4, A1, α6).)
To find the variation in b1 : After changing b1 to b1 + Δb1, the new requirement vector is given by
b* = [ 10 + Δb1, 6, 4 ].
From relation (2), article 7.3 the range of ∆b1 consistent with the optimal feasible
solution is given by
−x −x
Max Bi ≤ ∆b1 ≤ Min Bi
β i1 > 0 β i1 β i1 < 0 β i1
x
∴ We have Max. − B1 ≤ ∆b1 < ∞
β11
6
or − ≤ ∆b1 < ∞
1
b**
B = [10, 6 + ∆b2 , 4]
From relation (2), article 7.3 the range of ∆b2 consistent with the optimal feasible
solution is given by
−x −x
Max Bi ≤ ∆b2 ≤ Min Bi i.e., −∞ ≤ ∆b ≤ −10 = 10
2 −1
β i2 > 0 β i2 β i2 < 0 β i2
x B2
∴ We have − ∞ < ∆b2 ≤ Mini −
βi 2 < 0 β i2
−10
or − ∞ < ∆b2 ≤ = 10
−1
∴ Range of variation of b2 is
− ∞ < b2 ≤ 6 + 10 Q b2 = 6
i.e., − ∞ < b2 ≤ 16
To find the variation in b3 : The new requirement vector is b*** = [ 10, 6, 4 + Δb3 ].
From relation (2), article 7.3 the range of Δb3 consistent with the optimal feasible solution is given by
Max_{β_i3 > 0} ( − x_Bi / β_i3 ) ≤ Δb3 ≤ Min_{β_i3 < 0} ( − x_Bi / β_i3 ).
Here β_13 = −1 < 0, β_23 = 1 > 0, β_33 = 4 > 0.
∴ Max { − x_B2 / β_23, − x_B3 / β_33 } ≤ Δb3 ≤ Min { − x_B1 / β_13 },
i.e., Max { −4, −10/4 } ≤ Δb3 ≤ 6, or −5/2 ≤ Δb3 ≤ 6.
∴ The range of variation of b3 is
4 − 5/2 ≤ b3 ≤ 4 + 6      [∵ b3 = 4]
or 3/2 ≤ b3 ≤ 10.
Example : Maximize Z = 6 x1 + 8 x2
subject to 5 x1 + 10 x2 ≤ 60
4 x1 + 4 x2 ≤ 40
and x1, x2 ≥ 0.
Discuss the effect on the optimal solution when (i) the right hand side vector of the constraints is changed to [40, 20], and (ii) the right hand side vector of the constraints is changed to [20, 40].      [Meerut 2007]
A x = b      ...(1)
where A = [ 5  10 ; 4  4 ] = [ α1, α2 ], x = [ x1 ; x2 ], b = [ 60 ; 40 ].
Max. Z = 6 x1 + 8 x2 + 0. x3 + 0. x4
subject to 5 x1 + 10 x2 + x3 = 60
4 x1 + 4 x2 + x4 = 40
and x1, x2 , x3 , x4 ≥ 0
Taking x1 = 0, x2 = 0, we get x3 = 60, x4 = 40, which is the basic feasible solution of the
problem :
B cB cj 6 8 0 0 Mini. Ratio
x B Y2 , yi2 > 0
xB Y1 Y2 Y3 Y4
Y3 0 60 5 10 1 0 60 10 = 6 (mini)
→
Y4 0 40 4 4 0 1
40 4 = 10
Z = cB x B = 0 ∆j 6 8 0 0 x B Y1 y i1 > 0
↑ ↓
Y2 8 6 12 1 1 10 0 12
Y4 0 16 2 0 −2 5 1 16 2 (mini)
→
Z = c B x B = 48 ∆j 2 0 −4 5 0
↑
Y2 8 2 0 1 15 −1 4
Y1 6 8 1 0 −1 5 12
Z = c B x B = 64 ∆j
0 0 −2 5 −1
x1 = 8, x2 = 2 and Max. Z = 64
In the last simplex table, the matrix A reduced to the unit matrix, written in proper form, is
B    cB    xB    Y1    Y2    Y3     Y4
Y1   6     8     1     0     −1/5   1/2
Y2   8     2     0     1     1/5    −1/4
Hence B^(-1) = [ −1/5  1/2 ; 1/5  −1/4 ] = (1/20) [ −4  10 ; 4  −5 ].
From (1), A x = b ⇒ x = [ x1 ; x2 ] = B^(-1) b.
(i) When b = [ 60 ; 40 ] is changed to b′ = [ 40 ; 20 ], the new values of the basic variables in the table become x_B = B^(-1) b′,
i.e., x_B = [ x1 ; x2 ] = (1/20) [ −4  10 ; 4  −5 ] [ 40 ; 20 ] = [ 2 ; 3 ],
i.e., x1 = 2, x2 = 3.
Since both x1 and x2 are non-negative, this solution is a basic feasible solution. The new optimal value of Z = 6 × 2 + 8 × 3 = 36.
(ii) When b = [ 60 ; 40 ] is changed to b′′ = [ 20 ; 40 ], the new values of the basic variables in the final iteration of the above table become
x_B = B^(-1) b′′ = (1/20) [ −4  10 ; 4  −5 ] [ 20 ; 40 ] = [ 16 ; −6 ],
i.e., x1 = 16, x2 = −6.
Since x2 = −6 is negative, this solution becomes infeasible. The dual simplex algorithm (see 8.11) can be used to clear this infeasibility. The modified simplex table (obtained from the last table above) can be written as
B    cB    cj      6     8     0      0
           xB      Y1    Y2    Y3     Y4
Y1   6     16      1     0     −1/5   1/2
Y2   8     −6      0     1     1/5    −1/4   →
Z = cB xB = 48   Δj    0     0     −2/5   −1
                                           ↑
Y1   6     4       1     2     1/5    0
Y4   0     24      0     −4    −4/5   1
Z = cB xB = 24   Δj    0     −4    −6/5   0
Here Δ_k / y_2k = Min_j { Δ_j / y_2j : y_2j < 0 } = Δ_4 / y_24 = (−1) / (−1/4) = 4, so Y4 enters the basis in place of Y2. Hence, when b is changed to b′′, the basic feasible optimal solution is x1 = 4, x2 = 0 and Max. Z = 24.
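Both cases of this example amount to a single matrix product x_B = B^(-1) b_new, which can be checked as follows (a minimal NumPy sketch using the B^(-1) read off above):

```python
import numpy as np

B_inv = np.array([[-0.2, 0.5],       # B^(-1) from the final simplex table
                  [0.2, -0.25]])

for b_new in ([40, 20], [20, 40]):
    x_new = B_inv @ np.array(b_new, dtype=float)
    print(b_new, "->", x_new)
# [40, 20] -> [2, 3]    (still feasible, Z = 36)
# [20, 40] -> [16, -6]  (infeasible; dual simplex needed)
```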
If alk is not an element of the optimal basis B, a change in alk will not affect B and so B −1
remain the same. Thus, change in such alk will not affect the solution x B = B −1b. Hence,
the feasibility of the solution x B is not affected. But the change of alk may change the
optimality of the solution i.e., the solution x B may not be the optimal solution of the new
L.P.P. (obtained by changing alk to alk + ∆alk ). Thus, we have to find the range of
variation of alk so that the solution still remains optimal.
Since all the column vectors of the matrix A except α_k remain unaffected by the variation Δa_lk in a_lk, the solution x_B will remain optimal for the new L.P.P. also if
Δ_k* = c_k − Z_k* ≤ 0,      ...(1)
where Z_k* = c_B B^(-1) α_k* = c_B B^(-1) ( α_k + Δa_lk e_l )
= Z_k + c_B ( β_l Δa_lk ) = Z_k + Δa_lk c_B β_l
= Z_k + Δa_lk ( c_B1, c_B2, ..., c_Bm ) · [ β_1l, β_2l, ..., β_ml ]
= Z_k + Δa_lk Σ_{i=1}^{m} c_Bi β_il.
Hence, the range of Δa_lk (change in a_lk ∉ B), so that the solution x_B remains optimal and feasible, is given by
Δ_k / Σ_{i=1}^{m} c_Bi β_il ≤ Δa_lk      (if Σ_{i=1}^{m} c_Bi β_il > 0),
Δa_lk ≤ Δ_k / Σ_{i=1}^{m} c_Bi β_il      (if Σ_{i=1}^{m} c_Bi β_il < 0).      ...(2)
If Σ_{i=1}^{m} c_Bi β_il = 0, Δa_lk is unrestricted.
If Σ_{i=1}^{m} c_Bi β_il > 0, there is no upper bound to Δa_lk, and if Σ_{i=1}^{m} c_Bi β_il < 0, there is no lower bound to Δa_lk.
Note : There is no change in the value of the objective function if Δa_lk satisfies (2).
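For a coefficient a_lk outside the basis, relation (2) is a one-line computation. The sketch below is a generic helper (names are illustrative); the sample call uses the numbers of the a23 computation in the worked example later in this article.

```python
import numpy as np

def range_alk_nonbasic(delta_k, c_B, beta_l):
    """Range of Delta a_lk (a_lk not in B) keeping x_B optimal."""
    s = float(np.dot(c_B, beta_l))             # sum_i c_Bi * beta_il
    if s > 0:
        return (delta_k / s, np.inf)           # only a lower bound
    if s < 0:
        return (-np.inf, delta_k / s)          # only an upper bound
    return (-np.inf, np.inf)                   # unrestricted

# a23 in the worked example of article 7.4:
print(range_alk_nonbasic(delta_k=-82/27,
                         c_B=[3, 10, 5],
                         beta_l=[-5/27, 1/3, 1/27]))
# (-41/40, inf), as in the text
```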
If a_lk is an element of the optimal basis B and a_lk is changed to a_lk + Δa_lk, then the optimal basis B will certainly change and hence B^(-1) will also change. Consequently x_B = B^(-1) b and Z_j = c_B B^(-1) α_j will also change. A change in Z_j may affect the optimality of the solution.
When the element a_lk ∈ B is changed to a_lk + Δa_lk, let the optimal basis B, the solution x_B and Z_j be represented by B*, x_B* and Z_j* respectively.
Suppose a_lk occupies the (l, p)-th position of B, i.e., a_lk = b_lp. Then
B* = B + Δb_lp 0_lp,
where 0_lp is the null matrix except for the (l, p)-th element, which equals unity.
∴ B* = B ( I + B^(-1) Δb_lp 0_lp ),
so that B*^(-1) = { B ( I + B^(-1) Δb_lp 0_lp ) }^(-1) = ( I + B^(-1) Δb_lp 0_lp )^(-1) B^(-1)      ...(3)
[ ∵ (A B)^(-1) = B^(-1) A^(-1) ].
0 0 ... 0 ... 0
0 0 ... 0 ... 0
−1 −1 M M M M
I + B ∆blp.0 lp = I + B
0 0 ... ∆blp ... 0
M M M M
0 0 ... 0 ... 0
−β1l ∆blp
1 0 ... ... 0
D
−β2 l ∆blp
0 1 ... 0
−1 −1 M
D
∴ (I + B ∆blp.0 lp) = M M ... M ...(4)
1
0 0 ...
D
0
M M M M
−β ml ∆blp
0 0 ...
D
... 1
−1 Adj C
where D = 1 + β lp ∆blp. Q C =
| C|
1. To find the range of variation of alk (= blp ) for the feasibility of the solution.
x *B = B*−1. b
−β1l ∆blp
1 0 ... ... 0
D
−β2 l ∆blp x B1
0 1 ... ... 0 x
M D B2
M M M M
= → 1 x
p− th row 0 0 ... ... 0 Bp
D
M M
M M M
−β ml ∆blp x Bm
0
0 ... ... 1
D
↑
p− th column
1 ...(7)
and . x ≥ 0.
D Bp
Since x Bp ≥ 0
x Bi
∴ ∆blp ≤ , for β il x Bp − β pl x Bi > 0
β il x Bp − β pl x Bi
x Bi
and ∆blp ≤ , for β il x Bp − β pl x Bi < 0
β il x Bp − β pl x Bi
Hence, the range of ∆alk (= ∆blp) ∈ B, so that the solution remains feasible, is given by
x x
Max. Bi ≤ ∆alk ≤ Min Bi ...(9)
Pi < 0 Pi > 0
Pi = β il x Bp − β pl x Bi.
2. To find the range of variation of alk for the optimality of the solution.
−1
Now Z * j = c B B* α j = c B (I + B −1. ∆blp.0 lp)−1. B −1α j
β1l . ∆blp
y1 j − . y pj
D
β2 l . ∆blp
y2 j − . y pj
D
Z*j = c B M
1
⋅ y pj
D
M
β ml . ∆blp
ymj − . y pj
D
β1l ∆blp
y1 j − . y pj
D
β2 l ∆blp
y2 j − . y pj
D
= (cB1, cB2 ,..., cBp,..., cBm) M
1
⋅ y pj ←
D p− th row
M
β ml ∆blp
ymj − . y pj
D
m
β il ∆blp cBp. y pj
= ∑ cBi yij − D
y pj +
D
i =1
i≠ P
m
cBi β il ∆blp β pl ∆blp cBp. y pj
= ∑ cBi yij − D
y pj − cBp y pj −
D
y pj +
D
i =1
m m
1 1
= ∑ cBi yij −
D ∑ cBiβil ∆ blp ypj − D [cBp.(D − β pl ∆blp) ypj − cBp ypj ]
i =1 i =1
m
1
= Zj −
D ∑ cBi βil ∆blp ypj .
i =1
m
Since ∑ cBi yij = Z j and D = 1 + β pl ∆blp
i =1
m
or D (c j − Z j ) + ∑ cBi βil ∆blp ypj ≤ 0 assuming that D = 1 + β pl ∆blp > 0
i =1
m
or (1 + β pl ∆blp) (c j − Z j ) + ∑ cBi βil ∆blp ypj ≤ 0
i =1
m
β (c − Z ) +
or pl j j c∑Bi il pj ∆ blp ≤ −(c j − Z j )
β y
i =1
or Q_j Δb_lp ≤ − Δ_j,
where Q_j = β_pl ( c_j − Z_j ) + Σ_{i=1}^{m} c_Bi β_il y_pj = β_pl Δ_j + Σ_{i=1}^{m} c_Bi β_il y_pj.
∴ Δb_lp ≤ − Δ_j / Q_j, for Q_j > 0, and Δb_lp ≥ − Δ_j / Q_j, for Q_j < 0.
Hence, the range of Δa_lk (= Δb_lp) so that the solution remains optimal is given by
Max_{Q_j < 0} ( − Δ_j / Q_j ) ≤ Δa_lk ≤ Min_{Q_j > 0} ( − Δ_j / Q_j ).      ...(10)
If no Q_j < 0, there is no lower bound to Δa_lk, and if no Q_j > 0, there is no upper bound to Δa_lk.
Hence, a change ∆alk in alk (an element of basis matrix B) for the solution to remain
feasible and optimal can be made such that (8), (9) and (10) are satisfied.
Note : It is important to note that alk , the (l, k)th element of the coefficient matrix A, is
(l, p)th element blp of the basis matrix B.
Example : Max. Z = 10 x1 + 3 x2 + 6 x3 + 5 x4
subject to x1 + 2 x2 + x4 ≤ 6
3 x1 + 2 x3 ≤ 5
x2 + 4 x3 + 5 x4 ≤ 3
and x1, x2, x3, x4 ≥ 0.
Compute the limits of variation of a11 and a23 so that the new solution remains an optimal feasible solution.
Max. Z = 10 x1 + 3 x2 + 6 x3 + 5 x4 + 0. x5 + 0. x6 + 0. x7
subject to x1 + 2 x2 + 0. x3 + x4 + x5 = 6
3 x1 + 0. x2 + 2 x3 + 0. x4 + x6 = 5
0. x1 + 1x2 + 4. x3 + 5. x4 + x7 = 3
Taking x1 = 0 = x2 = x3 = x4 we get x5 = 6, x6 = 5, x7 = 3
The solution of the problem by simplex method is given in the following table.
B c B (C B ) cj 10 3 6 5 0 0 0 Min. Ratio
Y5 0 6 1 2 0 1 1 0 0 6 1=6
Y6 0 5 3 0 2 0 0 1 0 5 3 (min)
→
Y7 0 3 0 1 4 5 0 0 1
Z = c B. x B ∆j 10 3 6 5 0 0 0 X B Y4 , yi4 > 0
=0 ↑ ↓
Y5 0 13 3 0 2 −2 3 1 1 −1 3 0 13 / 3
Y1 10 5 3 1 0 2 3 0 0 13 0
Y7 0 3 0 1 4 5 0 0 1 3 5 (min)
→
Y5 0 56 15 0 9 5 −22 15 0 1 −1 3 −1 5 56 27 (min)
→
Y1 10 53 1 0 2 3 0 0 13 0
Y4 5 35 0 15 4 5 1 0 0 15
3
Z = c B. x B ∆j 0 2 −14 3 0 0 −10 3 −1
= 59 3 ↑ ↓
Y2 3 56 27 0 1 −22 27 0 5 9 −5 27 −1 9
Y1 10 5 3 1 0 23 0 0 13 0
Y4 5 5 27 0 0 26 27 1 −1 9 1 27 2 9
The optimal solution is x1 = 5/3, x2 = 56/27, x3 = 0, x4 = 5/27 and Z = 643/27.
∴ From the above table, B^(-1) = ( β1, β2, β3 ) = [ 5/9  −5/27  −1/9 ; 0  1/3  0 ; −1/9  1/27  2/9 ].
From (9) of article, 7.4, the range of variation of alk = a11 for the feasibility of the solution
is given by
x x
Max. Bi ≤ ∆alk ≤ Min. Bi ...(1)
P
i < 0 Pi > 0
where D = 1 + β pl ∆blp ≥ 0, Pi = β il x Bp − β pl x Bi
x B1 = 56 27 , x B2 = 5 3 , x B3 = 5 27
P2 = β21 x B2 − β21 x B2 = 0 . 5 3 − 0 = 0
x x B1
Max. B3 ≤ ∆a11 ≤ Min.
P
3 < 0 P1 > 0
5 27 56 27
or ≤ ∆a11 ≤
−5 27 25 27
or −1 ≤ ∆a11 ≤ 56 25 ...(A)
Again from (10) article 7.4, the range of variation of alk = a11 for the optimality of the
solution is given by
−∆ j −∆ j
Max. ≤ ∆alk ≤ Min. ...(2)
Qj < 0 Qj > 0
m
where Qj = β pl ∆ j + ∑
i = 1
cBi β il y pj for all j not in the basis
3
Here Qj = β21∆ j + ∑
i = 1
cBiβ i1 y2 j
= 0 + (3 . 5 9 + 10 . 0 − 5 .1 9) y2 j = (10 9) y2 j
Q3 = 10 9 . y23 = 10 9 . 2 3 = 20 27 > 0
Q5 = 10 9 , y25 = 10 9.0 = 0
Q6 = 10 9 . y26 = 10 9 .1 3 = 10 27 > 0
Q7 = 10 9 . y27 = 10 9.0 = 0
Since no Qj < 0
∆ ∆
− ∞ < ∆a11 ≤ Min. − 3 , − 6
Q3 Q6
82 27 80 27
or − ∞ < ∆a11 ≤ Min. ,
20 27 10 27
Since the range for ∆a11 so that the solution remains optimal and feasible is given by (A)
and (B) both,
if −1 ≤ ∆a11 ≤ 56 25
Since a11 = 1
−1 + 1 ≤ a11 ≤ 56 25 + 1
or 0 ≤ a11 ≤ 81 25
To find the limits of variation of a23 : Here a23 ∈ α3 (Y3), which is not in B. From (2), article 7.4 the range of Δa_lk (change in a_lk ∉ B), so that the solution remains optimal and feasible, is given by
Δ_k / Σ_{i=1}^{m} c_Bi β_il ≤ Δa_lk      (if the sum is > 0),      Δa_lk ≤ Δ_k / Σ_{i=1}^{m} c_Bi β_il      (if the sum is < 0).
Here k = 3, l = 2, m = 3 (the number of constraints), Δ_k = Δ3 = −82/27, and
Σ_{i=1}^{3} c_Bi β_i2 = 3 (−5/27) + 10 (1/3) + 5 (1/27) = 80/27 > 0.
∴ (−82/27) / (80/27) ≤ Δa23 < ∞, or −41/40 ≤ Δa23 < ∞.
Since a23 = 2, the limits of variation of a23 are 2 − 41/40 ≤ a23 < ∞, i.e., 39/40 ≤ a23 < ∞.
For the same basis B the solution x B of the original problem will remain optimal
(maxima) if
cn + 1 − Z n + 1 ≤ 0
or cn + 1 − c B B −1α n + 1 ≤ 0
...(1)
i.e., if (1) is satisfied then the new variable becomes just like a non-basic variable having
zero value.
If cn + 1 − Z n + 1 > 0, then the solution x B is no more optimal for the new problem and can
be improved by introducing α n + 1 in the basis. Here, we can start with the last simplex
table giving the optimal feasible solution of the original problem by introducing one
more column corresponding to the variable x n + 1.
Max. Z = 3 x1 + 5 x2
subject to x1 + x3 = 4
3 x1 + 2 x2 + x4 = 18
and x1, x2 , x3 , x4 ≥ 0
If a new variable x5 is introduced in the above L.P.P. with price 7, then we have the
following problem :
Max. Z′ = 3 x1 + 5 x2 + 7 x5
subject to x1 + x3 + x5 = 4
3 x1 + 2 x2 + x4 + 2 x5 = 18
and x1, x2 , x3 , x4 , x5 ≥ 0
(Here, the columns of the coefficients of x3 , x4 form a unit matrix, therefore x3 and x4
may be taken as the basic variables).
Proceeding as usual the successive simplex tables for the original L.P.P. are as follows :
B c B (C B ) cj 3 5 0 0 Mini. Ratio
x B( X B ) Y1 Y2 Y3 Y4 x B Y2 , yi2 > 0
Y3 0 4 1 0 1 0
Y4 0 18 3 2 0 1 18 2 = 9 (min) →
Z = c B. x B = 0 ∆j 3 5 0 0
↑ ↓
Y3 0 4 1 0 1 0
Y2 5 9 32 1 0 12
Z = c B . x B = 45 ∆j –9/2 0 0 –5/2
x1 = 0, x2 = 9, x3 = 4, x4 = 0,
Max. Z = 45
From the final table the solution of the dual of the given L.P.P. is
w1 = 0, w2 = 5 2
Min. Z D = 45
Revised L.P.P. : The dual of the new L.P.P. is the same as the dual of the original L.P.P. with one
more constraint w1 + 2 w2 ≥ 7, which corresponds to the new variable x5 .
The optimal solution of the dual of the original L.P.P. is w1 = 0, w2 = 5/2 , which does not
satisfy this constraint. Thus, we see that the optimal solution of the dual of the given
L.P.P. is not optimal solution of the dual of the revised L.P.P. Hence, the optimal
solution of the given L.P.P. is not optimal for the revised problem and can be improved
by introducing α 5 = [1, 2], (column of A corresponding to the new variable introduced) in
the basis.
Since B⁻¹ = [ 1  0 ; 0  1/2 ] ,
∴ Y5 = B⁻¹ α5 = [ 1  0 ; 0  1/2 ] [ 1 ; 2 ] = [ 1 ; 1 ]
B c B (C B ) cj 3 5 0 0 7 Min. Ratio
x B (X B) Y1 Y2 Y3 Y4 Y5 X B Y5, yi > 0
5
Y3 0 4 1 0 1 0 1 4 1(min) →
Y2 5 9 32 1 0 12 1 91
Z ′ = c B x B = 45 ∆j −9 2 0 0 −5 2 2
↓ ↑
Y5 7 4 1 0 1 0 1
Y2 5 5 12 1 −1 12 0
Z ′ = c B x B = 53 ∆j −13 2 0 −2 −5 2 0
x1 = 0, x2 = 5, x3 = 0, x4 = 0, x5 = 4 and Max. Z ′ = 53
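As a quick cross-check of ours (not in the text), the revised problem can be handed to SciPy's LP solver; linprog minimizes, so the maximization objective is entered with its sign reversed.

```python
from scipy.optimize import linprog

c    = [-3, -5, 0, 0, -7]                    # maximize 3x1 + 5x2 + 7x5
A_eq = [[1, 0, 1, 0, 1],                     #  x1       + x3       +  x5 = 4
        [3, 2, 0, 1, 2]]                     # 3x1 + 2x2      + x4 + 2x5 = 18
b_eq = [4, 18]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5, method="highs")
print(res.x, -res.fun)                       # expected: x = (0, 5, 0, 0, 4), Max Z' = 53
```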
Addition of a new constraint : Suppose a new constraint is annexed to the original L.P.P.
after its optimal solution has been obtained. Let Z * be the optimal (maximal) value of the
objective function for the new L.P.P., while it was Z for the original problem. Suppose, if
possible, Z * > Z . Since the new optimal solution satisfies the first m constraints as well as
the additional new constraint, it is in particular a feasible solution of the original problem
with objective value Z * > Z , which contradicts the fact that Z is the optimal value of the
original L.P.P. Hence Z * cannot be more than Z .
Case I : If the optimal solution of the original L.P.P. satisfies the new constraint, it is also
an optimal solution of the new L.P.P. In this case the additional constraint is redundant.
Case II : If the optimal solution of the original L.P.P. does not satisfy the new constraint,
a new optimal solution of the new L.P. problem must be obtained as follows :
Annexing the new constraint row, together with the column of its slack, surplus or artificial
variable, to the optimal basis B, we can write
B1 = [ B  0 ; α  ±1 ]    ...(1)
The last column of B1 corresponds to the slack, surplus or artificial vector associated with
the additional new constraint and α is the row vector of the coefficients, in the new
constraint, of the variables which correspond to the vectors in the optimal basis B.
B1⁻¹ = [ B⁻¹  0 ; ∓ α B⁻¹  ±1 ]    ...(2)
Let a m+1, j be the coefficient of x j in the (m + 1)th constraint and α*j the column vector of the
coefficients of x j in the enlarged problem. Also, if Y j* and Z *j denote Y j and Z j for the new
problem, then
Y j* = B1⁻¹ α*j = [ B⁻¹ α j ; ∓ α B⁻¹ α j ± a m+1, j ]
or Y j* = [ Y j ; ∓ α Y j ± a m+1, j ]    ...(3)
Now Z *j = c B1 Y j* = (c B , c B, m+1) [ Y j ; ∓ α Y j ± a m+1, j ] , where c B, m+1 = 0 for a slack or
surplus variable.
∴ from (3),
Z *j = c B Y j = Z j
∴ c j − Z *j = c j − Z j
∴ c j − Z *j
remains unchanged in this case. Since the optimal solution of the original problem does
not satisfy the new constraint, therefore the slack or surplus variable introduced in the
new constraint is negative. Hence, we can apply the dual simplex algorithm to find an optimal
feasible solution of the new problem.
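Formula (2) is mechanical to apply. Below is a small NumPy sketch of ours; the particular B⁻¹ and α used are made-up illustrative data (they do not come from any example in the text), and the new constraint is assumed to be of ≤ type with a slack column, so the bottom-right entry is +1.

```python
import numpy as np

B_inv = np.array([[1.0, 0.0],                   # B^-1 of the current optimal basis (assumed)
                  [0.0, 0.5]])
alpha = np.array([2.0, 3.0])                    # coefficients, in the new constraint, of the
                                                # variables attached to the basis vectors
m = B_inv.shape[0]
B1_inv = np.zeros((m + 1, m + 1))
B1_inv[:m, :m] = B_inv
B1_inv[m, :m]  = -alpha @ B_inv                 # last row:  -alpha B^-1
B1_inv[m, m]   = 1.0                            # +1 for the slack of the new constraint

# verify that B1_inv really inverts  B1 = [[B, 0], [alpha, 1]]
B  = np.linalg.inv(B_inv)
B1 = np.block([[B, np.zeros((m, 1))],
               [alpha.reshape(1, -1), np.ones((1, 1))]])
print(np.allclose(B1 @ B1_inv, np.eye(m + 1)))  # True
```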
2. If the artificial variable is introduced in the additional constraint i.e., if the
additional constraint is a perfect equality. In this case the additional vector is an
artificial vector. Now there are two possibilities :
(i) If the artificial variable in the basic solution is negative, then assigning a price
to the artificial variable we can use the dual simplex algorithm for the removal
of the artificial variable from the basis.
(ii) If the artificial variable in the basic solution is positive, then assigning a price
− M to the artificial variable we can use the standard simplex method for the
removal of the artificial variable from the basis. It is important to note that in
this case c j − Z j will be changed.
Note : If the addition of new constraint alters the nature of the problem, then the new
problem must be solved as a fresh problem.
Example 8: Consider the following table which presents an optimal solution to some
linear programming problem.
cj 2 4 1 3 2 0 0 0
B cB
xB Y1 Y2 Y3 Y4 Y5 Y6 Y7 Y8
Y1 2 3 1 0 0 −1 0 0.5 0.2 −1
Y2 4 1 0 1 0 2 1 −1 0 0.5
Y3 1 7 0 0 1 −1 −2 5 −0.3 2
Z = c B x B = 17 ∆j 0 0 0 −2 0 −2 −0.1 −2
Solution: From the table the optimal solution of the given L.P. problem is
x1 = 3, x2 = 1, x3 = 7, x4 = 0 = x5 = x6 = x7 = x8
Suppose the additional constraint
2 x1 + 3 x2 − x3 + 2 x4 − 4 x5 ≤ 5
is annexed to the problem. The optimal solution found above satisfies this constraint, since
2(3) + 3(1) − 7 + 2(0) − 4(0) = 2 ≤ 5.
Thus, the optimal solution of the given problem will not be changed if we introduce the
above constraint to it.
Hence, the additional constraint is redundant and the optimal sol. of the given L.P.P. is
also the optimal solution of the new L.P.P.
Max. Z = 15 x1 + 45 x2
subject to x2 ≤ 50
x1 + 1.6 x2 ≤ 240
and x1, x2 ≥ 0.
Max. Z = 3 x1 + 5 x2
subject to x1 ≤ 4
x2 ≤ 6
3 x1 + 2 x2 ≤ 18
and x1, x2 ≥ 0,
Z * = 3 x1 + x2 .
subject to x1 + 16 x2 ≤ 240
5 x1 + 2 x2 ≤ 162
x2 ≤ 50, x1, x2 ≥ 0
If max Z = ∑ c j x j , j = 1, 2, and c2 is fixed at 45, determine how much c1 can be
changed without affecting the above solution.
4. Solve the following L.P.P. and find how much can c1 be changed without affecting
the optimal solution.
Max. Z = 15 x1 + 45 x2 = c1 x1 + c2 x2
subject to x1 + 1.6 x2 ≤ 240
0.5 x1 + 2 x2 + x3 = 162
x2 + x4 = 50
and x1, x2 , x3 , x4 ≥ 0
5. Solve the L.P.P.
Max. Z = 3 x1 + 4 x2 + x3 + 7 x4
subject to 8 x1 + 3 x2 + 4 x3 + x4 ≤ 7
2 x1 + 6 x2 + x3 + 5 x4 ≤ 3
x1 + 4 x2 + 5 x3 + 2 x4 ≤ 8, and x1, x2 , x3 , x4 ≥ 0
Discuss the effect of discrete changes in c on the optimality of an optimum basic
feasible solution to the above L.P.P.
6. For L.P.P. given in Example 4, determine the effect of discrete changes in
c j ( j = 1, 2, 3) on the optimal solution.
7. For the L.P.P. given in Example 3, find the limits of variation of b2 without affecting the
optimality of the solution. [Meerut 2006]
8. Discuss the effect of discrete changes in the requirement (on the right hand sides of
the inequalities) for the L.P.P. given in Q.5.
9. Solve the problem.
(i) Max. Z = 5 x1 + 12 x2 + 4 x3
subject to, x1 + 2 x2 + x3 ≤ 5
2 x1 − x2 + 3 x3 = 2
and x1, x2 , x3 ≥ 0
Also
(ii) Discuss the effect of changing the requirement vector from (5, 2)′ to (7, 2)′ on the
optimum solution.
(iii) Discuss the effect of changing the requirement vector from (5, 2)′ to (3, 9)′ on the
optimum solution.
10. Consider the following optimum simplex table for a maximization problem (with all
constraint of ≤ type), where x4 is slack and a1 is an artificial variable. Let a new
variable x5 ≥ 0 be introduced in the problem with a cost 30 assigned to it in the
objective function. Also suppose that the coefficients of x5 in the two constraints
are 5 and 7 respectively.
B cB xB Y1 Y2 Y3 Y4 A1
Y2 12 8/5 0 1 −1/5 2/5 −1/5
Y1 5 9/5 1 0 7/5 1/5 2/5
Z = 141/5 ∆j 0 0 −3/5 −29/5 −M + 2/5
Discuss the effect of this new variable on the optimality of the given problem.
11. The following table gives the optimal solution to a L.P.P.
Max. Z = 3 x1 + 5 x2 + 4 x3 ,
subject to 2 x1 + 3 x2 ≤ 8
2 x2 + 5 x3 ≤ 10,
3 x1 + 2 x2 + 4 x3 ≤ 15
x1, x2 , x3 ≥ 0
For the above L.P.P. calculate the following :
(i) How much c3 and c4 can be increased before the present basic solution will no
longer be optimal ? Also, find the change in the value of the objective function
if possible.
(ii) How much b2 can be changed maintaining the feasibility of the solution ?
(iii) Find the limits for the changes in a14 and a24 so that the new solution remains
optimal feasible solution.
B cB xB Y1 Y2 Y3 Y4 Y5 Y6
Y2 5 50 41 0 1 0 15 41 8 41 −10 41
Y3 4 62 41 0 0 1 −6 41 5 41 4 41
Y1 3 89 41 1 0 0 −2 41 −12 41 15 41
12. Consider the following table which presents an optimal solution to some L.P.P.
B cB cj 2 3 1 0 0
xB Y1 Y2 Y3 Y4 Y5
Y1 2 1 1 0 0.5 4 −0.5
Y2 3 2 0 1 1 −1 2
Z =8 ∆j 0 0 −3 −5 −5
For the above problem, assuming that Y4 and Y5 were, in that order, in the initial
identity matrix basis, calculate the following :
(i) How much can b1 and b2 be increased without affecting the optimality and
feasibility of the solution ?
(ii) How much c3 can be increased before the present basic solution will no longer
be optimal ?
13. Discuss the effect of adding a new non-negative variable x8 in the Q.5 on the
optimality of its optimum solution. It is given that the coefficient of x8 in the
constraints of the problem are 2, 7 and 3 respectively, the cost component
associated with x8 being 5.
Also explain the situation when we have c8 = 10 instead of 5.
14. Consider the L.P.P.
Min. Z = x2 − 3 x3 + 2 x5
subject to 3 x2 − x3 + 2 x5 ≤ 7
−2 x2 + 4 x3 ≤ 12
−4 x2 − 3 x3 + 8 x5 ≤ 10
and x2 , x3 , x5 ≥ 0
B cB cj 0 1 −3 0 2 0
xB Y1 Y2 Y3 Y4 Y5 Y6
Y2 −1 4 25 1 0 1 10 45 0
Y3 3 5 15 0 1 3 10 25 0
Y6 0 11 1 0 0 −1 2 10 1
Z = c B x B = 11 ∆j −1 5 0 0 −4 5 −12 5 0
2. If ∆bl is the change in bl so that the optimal solution x * B also remains feasible, then
the value of the objective function is improved by the amount ............
3. If an additional constraint with ‘=’ sign is introduced in the L.P.P. and the artificial
variable used in this constraint appears in the basis with positive value, then for the
removal of this artificial variable from the basis we assign price − M to this variable
and use the ................. method.
True / False
1. If the coefficient ck of the non-basic variable x k changes to ck + ∆ck , to maintain the
optimality of the current optimal solution, the value of the objective function is
increased by ∆ck . x k .
2. If the coefficient cBk of the basic variable x k changes to cBk + ∆cBk to maintain the
optimality of the current optimal solution, the value of the objective function is
increased by x Bk . ∆cBk
3. The range of ∆bl , so that the optimal solution x *B also remains optimal is given by
Max. { − x Bi / βil : βil > 0 } ≤ ∆ bl ≤ Min. { − x Bi / βil : βil < 0 }
4. The addition of a new constraint to a L.P.P. will not change the existing optimal
solution if this optimal solution satisfies the new constraint.
3. (c) 4. (c)
5. (b) 6. (b)
7. (a)
3. Standard Simplex
True / False
1. False 2. True
3. True 4. True
mmm
Unit-4
8.1 Introduction
The concept of duality was one of the most important discoveries in the early
development of linear programming. It explains that corresponding to every linear
programming problem there is another linear programming problem. The original
problem is known as ‘primal’ and the other, the related problem is known as the ‘dual’.
The two problems are replicas of each other. If the primal problem is of maximization
type, the dual will be of the minimization type. If the optimal solution to one problem is
known to us, we can easily find the optimal solution of the other. This fact is important
because at times it is easier to solve the dual than the primal.
To understand the concept of duality more clearly, consider the following diet problem :
The amount of two vitamins A and B per unit present in two different foods A1 and A2
respectively are given below :
Vitamin            Food A1    Food A2    Minimum daily requirement
A                     6          9              60
B                     4         13             108
Cost (per unit)     ` 12       ` 18
The objective of the diet problem is to ascertain the quantities of foods A1 and A2 that
should be eaten to meet the minimum daily requirements of vitamins A and B at a
minimum cost.
If x1 and x2 denote the numbers of units of the foods A1 and A2 to be purchased, the problem is
Min. Z x = 12 x1 + 18 x2
subject to 6 x1 + 9 x2 ≥ 60
4 x1 + 13 x2 ≥ 108
x1, x2 ≥ 0
We shall consider this linear programming problem as the primal problem. Now we shall
formulate the dual corresponding to this primal.
Suppose a wholesale dealer sells two vitamins A and B. Customers purchase the two
vitamins from him in the form of two foods A1 and A2 (as given in the above table). The
foods A1 and A2 have their market value only because of their vitamin contents. Now the
dealer has to fix-up the maximum per unit selling prices for the two vitamins A and B in
such a way that the resulting prices of the foods A1 and A2 do not exceed their existing
market prices.
Suppose the dealer decides to fix up the per unit prices of the two vitamins A and B at y1
and y2 respectively. Then his problem is
Max. Z y = 60 y1 + 108 y2
subject to 6 y1 + 4 y2 ≤ 12
9 y1 + 13 y2 ≤ 18
y1, y2 ≥ 0
Observing the above primal and the dual a little closely, we find that
1. Primal is a minimization problem while the dual is a maximization problem.
2. The constraint values 60, 108 of the primal have become the coefficients of the dual
variables y1 and y2 in the objective function of the dual in that order. The
coefficients for the variables x1 and x2 in the objective function of the primal have
become the constraint values in the dual.
3. The constraint coefficient matrix of the dual is the transpose of the constraint
coefficient matrix of the primal.
4. The direction of inequalities in the dual is the reverse of that in the primal.
Primal                                              Dual
Minimize Z x = c x                                  Maximize Z y = b′ y
Z x = (12  18) (x1 ; x2)                            Z y = (60  108) (y1 ; y2)
subject to A x ≥ b                                  subject to A′ y ≤ c′
[ 6  9 ; 4  13 ] (x1 ; x2) ≥ (60 ; 108)             [ 6  4 ; 9  13 ] (y1 ; y2) ≤ (12 ; 18)
x1, x2 ≥ 0                                          y1, y2 ≥ 0
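As an aside of ours (not in the text), the diet problem and its dual can be solved with SciPy's linprog; the equal optimal values, about 149.54, illustrate the duality relationship just described.

```python
from scipy.optimize import linprog

# Primal:  Min 12x1 + 18x2   s.t.  6x1 + 9x2 >= 60,  4x1 + 13x2 >= 108,  x >= 0
primal = linprog(c=[12, 18], A_ub=[[-6, -9], [-4, -13]], b_ub=[-60, -108],
                 bounds=[(0, None)] * 2, method="highs")

# Dual:    Max 60y1 + 108y2  s.t.  6y1 + 4y2 <= 12,   9y1 + 13y2 <= 18,   y >= 0
dual = linprog(c=[-60, -108], A_ub=[[6, 4], [9, 13]], b_ub=[12, 18],
               bounds=[(0, None)] * 2, method="highs")

print(primal.x, primal.fun)     # x = (0, 108/13),  Z_x = 1944/13 ~ 149.54
print(dual.x, -dual.fun)        # y = (0, 18/13),   Z_y = 1944/13 ~ 149.54  (equal optima)
```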
While dealing with duality theory, the given L.P.P. (known as primal problem) should be
changed to the standard (or canonical) form. A linear programming problem is said to be
in standard primal form (or canonical primal form) if
1. For a maximization L.P.P. all the constraints have ≤ sign.
2. For a minimization L.P.P. all the constraints have ≥ sign.
Consider the standard primal problem
Max. Z p = c1 x1 + c2 x2 + ... + cn x n
subject to a11 x1 + a12 x2 + ... + a1n x n ≤ b1
a21 x1 + a22 x2 + ... + a2 n x n ≤ b2
... ... ...
am1 x1 + am2 x2 + ... + amn x n ≤ bm ,
x1, x2 , ..., x n ≥ 0.
To obtain the dual of the above primal, the following steps are required (a small computational sketch follows the list) :
(i) Minimize the objective function instead of maximizing it.
(ii) Interchange the role of constant terms and the coefficients of the objective
function.
(iii) Find A′, where A′ denotes the transpose of the coefficient matrix A.
(iv) Reverse the direction of inequalities.
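The four steps amount to transposing A and interchanging the roles of b and c. The tiny helper below is our own sketch (the function name and calling convention are assumptions, not the book's); it is applied to the small problem discussed immediately below in the text.

```python
import numpy as np

def build_dual(c, A, b):
    """For a primal  Max c x  s.t.  A x <= b, x >= 0, return the data (b, A', c)
    of the dual  Min b'w  s.t.  A'w >= c', w >= 0."""
    return np.asarray(b, float), np.asarray(A, float).T, np.asarray(c, float)

# Primal:  Max 5x1 + 9x2  s.t.  x1 <= 6,  x1 + x2 <= 13,  x2 <= 8
d_cost, d_A, d_rhs = build_dual([5, 9], [[1, 0], [1, 1], [0, 1]], [6, 13, 8])
print(d_cost)    # [ 6. 13.  8.]   ->  Min 6w1 + 13w2 + 8w3
print(d_A)       # [[1. 1. 0.]     ->  w1 + w2      >= 5
                 #  [0. 1. 1.]]    ->       w2 + w3 >= 9
```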
In matrix notation, the primal is
Max. Z x = c x, c ∈ R n
subject to A x ≤ b, b ∈ R m , x ≥ 0,
and its dual is
Min. Z w = b′ w
subject to A′ w ≥ c′, w ≥ 0.
Max. Z x = 5 x1 + 9 x2
subject to x1 ≤ 6
x1 + x2 ≤ 13
x2 ≤ 8,
x1, x2 ≥ 0.
Min. Z w = 6 w1 + 13 w2 + 8w3
subject to w1 + w2 ≥ 5
w2 + w3 ≥ 9,
w1, w2 , w3 ≥ 0.
Note : The following table gives a simple and convenient method to remember the
primal-dual relationship :
For primal constraints, read across the table while for dual constraints read down the
columns.
If the given primal is
Max. Z x = c x, c ∈ R n
subject to A x = b, b ∈ R m , x ≥ 0,
its dual is
Min. Z w = b′ w
subject to A′ w ≥ c′, w unrestricted in sign.
1. If some constraint is an equation, replace it by a pair of inequalities. For example, the
equation 2 x1 + 5 x2 = 9 is equivalent to
2 x1 + 5 x2 ≤ 9 …(1)
and 2 x1 + 5 x2 ≥ 9 …(2)
In a minimization problem it may likewise be written as
2 x1 + 5 x2 ≥ 9 ...(3)
and 2 x1 + 5 x2 ≤ 9 ...(4)
In the problem of maximization, all constraints should have ≤ sign. So we multiply
both sides of constraint (2) by –1.
So in maximization problem, the equation 2 x1 + 5 x2 = 9 is replaced by 2 x1 + 5 x2 ≤ 9
and −2 x1 − 5 x2 ≤ −9.
Similarly, if the problem is of minimization, all constraints should have ≥ sign.
So in this case the equation say 2 x1 + 5 x2 = 9 is replaced by 2 x1 + 5 x2 ≥ 9 and
−2 x1 − 5 x2 ≥ −9.
2. If there is some unrestricted variable, replace it by the difference of two
non-negative variables.
3. Now to find the dual problem proceed as in article 8.3.1.
Min. Z = 10 x1 + 20 x2
subject to 3 x1 + 2 x2 ≥ 18
x1 + 3 x2 ≥ 8
2 x1 − x2 ≤ 6, x1, x2 ≥ 0 .
Min. Z = 10 x1 + 20 x2
subject to 3 x1 + 2 x2 ≥ 18
x1 + 3 x2 ≥ 8
−2 x1 + x2 ≥ −6
and x1, x2 ≥ 0
subject to [ 3  2 ; 1  3 ; −2  1 ] (x1 ; x2) ≥ (18 ; 8 ; −6), or A x ≥ b, x1, x2 ≥ 0.
The dual is Max. Z D = b′ y = 18 y1 + 8 y2 − 6 y3
subject to A′ y ≤ c′ , i.e., [ 3  1  −2 ; 2  3  1 ] (y1 ; y2 ; y3) ≤ (10 ; 20)
or ( 3 y1 + y2 − 2 y3 ; 2 y1 + 3 y2 + y3 ) ≤ (10 ; 20).
Hence the dual of the given problem is
Max. Z D = 18 y1 + 8 y2 − 6 y3
subject to 3 y1 + y2 − 2 y3 ≤ 10
2 y1 + 3 y2 + y3 ≤ 20
y1, y2 , y3 ≥ 0.
Min. Z = 3 x1 − 2 x2 + 4 x3
subject to 3 x1 + 5 x2 + 4 x3 ≥ 7
6 x1 + x2 + 3 x3 ≥ 4
7 x1 − 2 x2 − x3 ≤ 10
x1 − 2 x2 + 5 x3 ≥ 3
4 x1 + 7 x2 − 2 x3 ≥ 2
Min. Z = 3 x1 − 2 x2 + 4 x3
subject to 3 x1 + 5 x2 + 4 x3 ≥ 7
6 x1 + x2 + 3 x3 ≥ 4
−7 x1 + 2 x2 + x3 ≥ −10
x1 − 2 x2 + 5 x3 ≥ 3
4 x1 + 7 x2 − 2 x3 ≥ 2, x1, x2 , x3 ≥ 0.
subject to [ 3  5  4 ; 6  1  3 ; −7  2  1 ; 1  −2  5 ; 4  7  −2 ] (x1 ; x2 ; x3) ≥ (7 ; 4 ; −10 ; 3 ; 2),
or A x ≥ b, x1, x2 , x3 ≥ 0.
The dual is Max. Z D = b′ y = 7 y1 + 4 y2 − 10 y3 + 3 y4 + 2 y5
subject to A′ y ≤ c′ , i.e.,
[ 3  6  −7  1  4 ; 5  1  2  −2  7 ; 4  3  1  5  −2 ] (y1 ; y2 ; y3 ; y4 ; y5) ≤ (3 ; −2 ; 4)
or ( 3 y1 + 6 y2 − 7 y3 + y4 + 4 y5 ; 5 y1 + y2 + 2 y3 − 2 y4 + 7 y5 ; 4 y1 + 3 y2 + y3 + 5 y4 − 2 y5 ) ≤ (3 ; −2 ; 4)
or Max. Z D = 7 y1 + 4 y2 − 10 y3 + 3 y4 + 2 y5
subject to 3 y1 + 6 y2 − 7 y3 + y4 + 4 y5 ≤ 3
5 y1 + y2 + 2 y3 − 2 y4 + 7 y5 ≤ −2
4 y1 + 3 y2 + y3 + 5 y4 − 2 y5 ≤ 4,
y1, y2 , y3 , y4 , y5 ≥ 0.
Min. Z = 2 x2 + 5 x3
subject to x1 + x2 ≥ 2
2 x1 + x2 + 6 x3 ≤ 6
x1 − x2 + 3 x3 = 4
Solution: First we shall convert the given problem to standard primal form.
1. Since the problem is of minimization therefore all the constraints should have the
sign ≥.
2. Multiplying the second constraint by −1, if becomes
−2 x1 − x2 − 6 x3 ≥ −6
3. Since the third constraint is an equality, so replacing it by the following two
constraints :
x1 − x2 + 3 x3 ≥ 4
and x1 − x2 + 3 x3 ≤ 4
or x1 − x2 + 3 x3 ≥ 4
and − x1 + x2 − 3 x3 ≥ −4
Min. Z = 0 x1 + 2 x2 + 5 x3
subject to x1 + x2 ≥ 2
−2 x1 − x2 − 6 x3 ≥ −6
x1 − x2 + 3 x3 ≥ 4
− x1 + x2 − 3 x3 ≥ −4
x1, x2 , x3 ≥ 0.
subject to [ 1  1  0 ; −2  −1  −6 ; 1  −1  3 ; −1  1  −3 ] (x1 ; x2 ; x3) ≥ (2 ; −6 ; 4 ; −4),
or A x ≥ b, x1, x2 , x3 ≥ 0.
The dual is Max. Z D = b′ y = 2 y1 − 6 y2 + 4 ( y3′ − y3′′ )
subject to A′ y ≤ c′ , i.e., [ 1  −2  1  −1 ; 1  −1  −1  1 ; 0  −6  3  −3 ] (y1 ; y2 ; y3′ ; y3′′) ≤ (0 ; 2 ; 5),
i.e., Max. Z D = 2 y1 − 6 y2 + 4 ( y3′ − y3′′ )
subject to
y1 − 2 y2 + ( y3′ − y3′′ ) ≤ 0
y1 − y2 − ( y3′ − y3′′ ) ≤ 2
−6 y2 + 3 ( y3′ − y3′′ ) ≤ 5,
Max. Z D = 2 y1 − 6 y2 + 4 y3
subject to
y1 − 2 y2 + y3 ≤ 0, y1 − y2 − y3 ≤ 2, − 6 y2 + 3 y3 ≤ 5,
where y1, y2 ≥ 0 and y3 is unrestricted in sign.
Note : The variable corresponding to the equality equation (here third equation) in the
constraint will be unrestricted in sign in dual problem.
Max. Z = 2 x1 + 3 x2 + x3
subject to 4 x1 + 3 x2 + x3 = 6
x1 + 2 x2 + 5 x3 = 4,
x1, x2 , x3 ≥ 0 .
Solution: First we shall convert the given L.P.P. into standard primal form.
Here both constraints are equalities, so replacing each by two inequalities, we get the
constraints,
4 x1 + 3 x2 + x3 ≤ 6 and 4 x1 + 3 x2 + x3 ≥ 6
x1 + 2 x2 + 5 x3 ≤ 4 and x1 + 2 x2 + 5 x3 ≥ 4.
Since the given problem is of maximization, so all the constraints should have the sign ≤.
Max. Z = 2 x1 + 3 x2 + x3
subject to 4 x1 + 3 x2 + x3 ≤ 6
−4 x1 − 3 x2 − x3 ≤ −6
x1 + 2 x2 + 5 x3 ≤ 4
− x1 − 2 x2 − 5 x3 ≤ −4,
x1, x2 , x3 ≥ 0.
subject to [ 4  3  1 ; −4  −3  −1 ; 1  2  5 ; −1  −2  −5 ] (x1 ; x2 ; x3) ≤ (6 ; −6 ; 4 ; −4),
or A x ≤ b, x1, x2 , x3 ≥ 0.
The dual is Min. Z D = b′ y = 6 ( y1′ − y1′′ ) + 4 ( y2′ − y2′′ )
subject to A′ y ≥ c′ , i.e., [ 4  −4  1  −1 ; 3  −3  2  −2 ; 1  −1  5  −5 ] (y1′ ; y1′′ ; y2′ ; y2′′) ≥ (2 ; 3 ; 1).
Writing y1 = y1′ − y1′′ and y2 = y2′ − y2′′ , the dual becomes
Min. Z D = 6 y1 + 4 y2
subject to 4 y1 + y2 ≥ 2, 3 y1 + 2 y2 ≥ 3, y1 + 5 y2 ≥ 1,
y1 and y2 being unrestricted in sign.
Min. Z = x1 + x2 + x3
subject to x1 − 3 x2 + 4 x3 = 5, x1 − 2 x2 ≤ 3, 2 x2 − x3 ≥ 4,
x1, x2 ≥ 0 and x3 is unrestricted in sign.
Solution: First we shall write the given L.P.P. in the standard primal form, substituting
x3 = x3′ − x3′′ , x3′ ≥ 0, x3′′ ≥ 0. The given problem can be written in the standard primal
form as
Min. Z = x1 + x2 + ( x3′ − x3′′ )
subject to x1 − 3 x2 + 4 ( x3′ − x3′′ ) ≥ 5
− x1 + 3 x2 − 4 ( x3′ − x3′′ ) ≥ −5
− x1 + 2 x2 ≥ −3
2 x2 − ( x3′ − x3′′ ) ≥ 4, x1, x2 , x3′ , x3′′ ≥ 0,
or [ 1  −3  4  −4 ; −1  3  −4  4 ; −1  2  0  0 ; 0  2  −1  1 ] (x1 ; x2 ; x3′ ; x3′′) ≥ (5 ; −5 ; −3 ; 4), i.e., A x ≥ b.
The dual is Max. Z D = b′ y = 5 ( y1′ − y1′′ ) − 3 y2 + 4 y3
subject to A′ y ≤ c′ , i.e.,
[ 1  −1  −1  0 ; −3  3  2  2 ; 4  −4  0  −1 ; −4  4  0  1 ] (y1′ ; y1′′ ; y2 ; y3) ≤ (1 ; 1 ; 1 ; −1)
or ( y1′ − y1′′ ) − y2 ≤ 1
−3 ( y1′ − y1′′ ) + 2 y2 + 2 y3 ≤ 1
4 ( y1′ − y1′′ ) − y3 ≤ 1, − 4 ( y1′ − y1′′ ) + y3 ≤ −1,
y1′ , y1′′, y2 , y3 ≥ 0.
Max. Z D = 5 y1 − 3 y2 + 4 y3
subject to y1 − y2 ≤ 1, − 3 y1 + 2 y2 + 2 y3 ≤ 1, 4 y1 − y3 ≤ 1, 4 y1 − y3 ≥ 1
y2 , y3 ≥ 0 and y1 is unrestricted.
Max. Z D = 5 y1 − 3 y2 + 4 y3
subject to y1 − y2 ≤ 1, − 3 y1 + 2 y2 + 2 y3 ≤ 1, 4 y1 − y3 = 1,
y2 , y3 ≥ 0, y1 is unrestricted in sign.
Max. Z = 3 x1 + 5 x2 + 7 x3
subject to x1 + x2 + 3 x3 ≤ 10
4 x1 − x2 + 2 x3 ≥ 15,
x1, x2 ≥ 0 , x3 is unrestricted.
Solution: First we shall write the given L.P.P. in the standard primal form.
Since the given problem is of maximization, all the constraints should have the sign ≤.
Substituting x3 = x3′ − x3′′ , x3′ ≥ 0, x3′′ ≥ 0, the standard primal form of the given problem is
Max. Z = 3 x1 + 5 x2 + 7 ( x3′ − x3′′ )
subject to x1 + x2 + 3 x3′ − 3 x3′′ ≤ 10
− 4 x1 + x2 − 2 x3′ + 2 x3′′ ≤ −15, x1, x2 , x3′ , x3′′ ≥ 0,
or [ 1  1  3  −3 ; −4  1  −2  2 ] (x1 ; x2 ; x3′ ; x3′′) ≤ (10 ; −15), i.e., A x ≤ b.
The dual is Min. Z D = b′ y = 10 y1 − 15 y2
subject to A′ y ≥ c′ , i.e., [ 1  −4 ; 1  1 ; 3  −2 ; −3  2 ] (y1 ; y2) ≥ (3 ; 5 ; 7 ; −7)
or ( y1 − 4 y2 ; y1 + y2 ; 3 y1 − 2 y2 ; −3 y1 + 2 y2 ) ≥ (3 ; 5 ; 7 ; −7)
or Min. Z D = 10 y1 − 15 y2
subject to y1 − 4 y2 ≥ 3, y1 + y2 ≥ 5, 3 y1 − 2 y2 ≥ 7, − 3 y1 + 2 y2 ≥ −7
or y1 − 4 y2 ≥ 3, y1 + y2 ≥ 5, 3 y1 − 2 y2 ≥ 7, 3 y1 − 2 y2 ≤ 7
y1, y2 ≥ 0.
Min. Z D = 10 y1 − 15 y2
subject to y1 − 4 y2 ≥ 3, y1 + y2 ≥ 5, 3 y1 − 2 y2 = 7, y1, y2 ≥ 0.
2. Max. Z = x1 − x2 + 3 x3 3. Max. Z = 3 x1 + 5 x2 + 4 x3
subject to subject to
x1 + x2 + x3 ≤ 10 2 x1 + 3 x2 ≤ 8
2 x1 − x3 ≤ 2 2 x2 + 5 x3 ≤ 10
2 x1 − 2 x2 + 3 x3 ≤ 6 3 x1 + 2 x2 + 4 x3 ≤ 15
x1, x2 , x3 ≥ 0. x1, x2 , x3 ≥ 0. [Meerut 2012]
4. Max. Z = 5 x1 + 3 x2 5. Min. Z = 4 x1 + 6 x2 + 18 x3
subject to subject to
3 x1 + 5 x2 ≤ 15 x1 + 3 x3 ≥ 3
5 x1 + 2 x2 ≤ 15 x2 + 2 x3 ≥ 5
x1, x2 ≥ 0. [Gorakhpur 2008] x1, x2 , x3 ≥ 0.
6. Max. Z = 3 x1 + 4 x2 7. Min. Z = 3 x1 + x2
subject to subject to
2 x1 + 6 x2 ≤ 16 2 x1 + 3 x2 ≥ 2
5 x1 + 2 x2 ≥ 20 x1 + x2 ≥ 1
x1, x2 ≥ 0. [Kanpur 2007] x1, x2 ≥ 0. [Kanpur 2012]
8. Min. Z = 2 x1 + 2 x2 + 4 x3 9. Min. Z = 7 x1 + 3 x2 + 8 x3
subject to subject to
2 x1 + 3 x2 + 5 x3 ≥ 2 8 x1 + 2 x2 + x2 ≥ 3
3 x1 + x2 + 7 x3 ≤ 3 3 x1 + 6 x2 + 4 x3 ≥ 4
x1 + 4 x2 + 6 x3 ≤ 5 4 x1 + x2 + 5 x3 ≥ 1
x1, x2 , x3 ≥ 0. [Meerut 2011 (BP)] x1 + 5 x2 + 2 x3 ≥ 7
x1, x2 , x3 ≥ 0.
10. Max. Z = 2 x1 + 5 x2 + 6 x3 11. Min. Z = 3 x1 − 2 x2 + 4 x3
subject to subject to
5 x1 + 6 x2 − x3 ≤ 3 3 x1 + 5 x2 + 4 x3 ≤ 7
−2 x1 + x2 + 4 x3 ≤ 4 6 x1 + 3 x2 + 2 x3 ≤ 9
x1 − 5 x2 + 3 x3 ≤ 1 2 x1 + 7 x2 + 9 x3 ≥ 5
−3 x1 − 3 x2 + 7 x3 ≤ 6 4 x1 − 2 x2 + 7 x3 ≥ 1
x1, x2 , x3 ≥ 0. [Kanpur 2011] x1, x2 , x3 ≥ 0. [Gorakhpur 2011]
2. Min. Z D = 10 y1 + 2 y2 + 6 y3 3. Min. Z D = 8 y1 + 10 y2 + 15 y3
subject to subject to
y1 + 2 y2 + 2 y3 ≥ 1 2 y1 + 3 y3 ≥ 3
− y1 + 2 y3 ≤ 1 3 y1 + 2 y2 + 2 y3 ≥ 5
y1 − y2 + 3 y3 ≥ 3 5 y2 + 4 y3 ≥ 4
y1, y2 , y3 ≥ 0 y1, y2 , y3 ≥ 0
4. Min. Z D = 15 y1 + 15 y2 5. Max. Z D = 3 y1 + 5 y2
subject to subject to
3 y1 + 5 y2 ≥ 5 y1 ≤ 4, y2 ≤ 6
5 y1 + 2 y2 ≥ 3 3 y1 + 2 y2 ≤ 18
y1, y2 ≥ 0 y1, y2 ≥ 0
6. Min. Z D = 16 y1 − 20 y2 7. Max. Z D = 2 y1 + y2
subject to subject to
2 y1 − 5 y2 ≥ 3 2 y1 + y2 ≤ 3
6 y1 − 2 y2 ≥ 4 3 y1 + y2 ≤ 1
y1, y2 ≥ 0 y1, y2 ≥ 0.
8. Max. Z D = 2 y1 − 3 y2 − 5 y3 9. Max. Z D = 3 y1 + 4 y2 + y3 + 7 y4
subject to subject to
2 y1 − 3 y2 − y3 ≤ 2 8 y1 + 3 y2 + 4 y3 + y4 ≤ 7
3 y1 − y2 − 4 y3 ≤ 2 2 y1 + 6 y2 + y3 + 5 y4 ≤ 3
5 y1 − 7 y2 − 6 y3 ≤ 4 y1 + 4 y2 + 5 y3 + 2 y4 ≤ 8
y1, y2 , y3 ≥ 0. y1, y2 , y3 , y4 ≥ 0.
subject to subject to
5 y1 − 2 y2 + y3 − 3 y4 ≥ 2 −3 y1 − 6 y2 + 2 y3 + 4 y4 ≤ 3
6 y1 + y2 − 5 y3 − 3 y4 ≥ 5 5 y1 + 3 y2 − 7 y3 + 2 y4 ≥ 2
− y1 + 4 y2 + 3 y3 + 7 y4 ≥ 6 −4 y1 − 2 y2 + 9 y3 + 7 y4 ≤ 4
y1, y2 , y3 y4 ≥ 0 y1, y2 , y3 , y4 ≥ 0
Here we shall prove a number of fundamental theorems to describe the relation between
the primal and its dual. The relationship between the primal and dual is extremely useful
in the development of mathematical programming.
Theorem 1: The dual of a dual of the given primal is the primal itself. [Kanpur 2011]
Primal Problem
Max. Z P = c1 x1 + c2 x2 + ... + cn x n
subject to
Dual Problem
subject to
Now we have to construct the dual of the above dual. First we shall change the above dual
to standard maximization form.
subject to
Dual of the dual : Considering the above dual a primal, its dual can be written as
Min. Z y = − c1 y1 − c2 y2 − ... − cn yn
subject to
− a11 y1 − a12 y2 − . .. − a1n yn ≥ − b1
− a21 y1 − a22 y2 − ... − a2 n yn ≥ − b2
... ... ...
... ... ... ...(4)
− am1 y1 − am2 y2 − ... − amn yn ≥ − bm.
y1, y2 ,..., yn ≥ 0
Changing the above problem to maximization and multiplying each constraint by −1, we
get
subject to a11 y1 + a12 y2 + ... + a1n yn ≤ b1
a21 y1 + a22 y2 + ... + a2 n yn ≤ b2
... ... ...
... ... ... ...(5)
am1 y1 + am2 y2 + ... + amn yn ≤ bm,
y1, y2 ,..., yn ≥ 0
Min. Z D = b′ w ...(2)
subject to A′ w ≥ c′, w ≥ 0.
Let w = [w1, w2 ,..., wm] be any feasible solution to the dual (2).
Now w (m-component column vector) is the feasible solution of the dual and A x ≤ b are
the constraints of the primal (1). If we multiply both sides of A x ≤ b with w′, the sign of
the inequality remains unchanged.
∴ w′ ( A x ) ≤ w′ b
Similarly x (n-component column vector) is the feasible solution of the primal (1) and
A′ w ≥ c′ denotes the constraints of the dual (2) so we can write
x ′ ( A′ w) ≥ x ′ c′
⇒ x ′ (w′ A)′ ≥ (c x )′
⇒ [(w′ A) x ]′ ≥ (c x )′
⇒ (w′ A) x ≥ c x
⇒ ( A′ w)′ x ≥ c x , i.e., w′ A x ≥ c x ...(4)
Combining this with w′ ( A x ) ≤ w′ b, we get c x ≤ w′ A x ≤ w′ b = (b′ w)′ = b′ w
⇒ ZP ≤ ZD.
Theorem 3: If x̂ is a feasible solution to the primal problem Max. Z P = c x subject to
A x ≤ b, x ≥ 0, and ŵ is a feasible solution to its dual
such that c x̂ = b′ ŵ , then x̂ is the optimal solution of the primal and ŵ is the optimal
solution of the dual problem.
Or
The necessary and sufficient condition for any linear programming problem and its dual to
have optimal solution is that both have feasible solutions.
Max. Z P = c x
...(1)
subject to A x ≤ b, x ≥ 0.
Min. Z D = b′ w
...(2)
subject to A′ w ≥ c′, w ≥ 0
Let ŵ be any feasible solution to the dual (2). Then, by Theorem 2, for every feasible
solution x of the primal,
c x ≤ b′ ŵ
⇒ c x ≤ c x̂ , since c x̂ = b′ ŵ
⇒ the value of the objective function of the primal problem at the feasible solution x̂ is
not less than its value at any other feasible solution x.
Hence x̂ is the optimal solution of the primal problem (for the maximization problem).
Similarly, for every feasible solution w of the dual,
c x̂ ≤ b′ w
⇒ b′ ŵ ≤ b′ w , since c x̂ = b′ ŵ
⇒ the value of the objective function of the dual problem at the given feasible solution ŵ
is not greater than its value at any other feasible solution w.
Hence ŵ is the optimal solution of the dual problem (for the minimization problem).
If x 0 is an optimal solution to the primal, then there exists a feasible solution w0 to the dual such that c x 0 = b′ w0 .
Primal Problem
Max. Z P = c x
subject to Ax ≤ b, x ≥ 0 ...(1)
Dual Problem
Min. Z D = b′ w
subject to A′ w ≥ c′, w ≥ 0 ...(2)
Suppose x 0 is an optimum solution to the primal. Then to solve the primal problem by
simplex method, introduce slack variables to each of the constraints.
The columns corresponding to the slack variables form an identity matrix. If B is the
optimal basis and x B = B⁻¹ b the corresponding basic solution, then
Z = c x 0 = c B x B .
We know
Z j − c j = c B Y j − c j = c B B⁻¹ α j − c j , ∀ α j ∈ A,
and Z j − c j = c B B⁻¹ e j − 0 , ∀ e j ∈ I.
Since x 0 is optimal, Z j − c j ≥ 0, ∀ j
⇒ c B B⁻¹ A ≥ c and c B B⁻¹ ≥ 0
⇒ A′ w0 ≥ c′ ; w0 ≥ 0 , taking w0 ′ = c B B⁻¹ , w0 ∈ R m
b′ w0 = w0′ b = c B B −1b = c B x B = cx 0
Hence corresponding to a given optimal solution x 0 of the primal there exists a feasible
solution w0 of the dual such that
c x 0 = b′ w0
If w0 is an optimum solution to the dual, then there exists a feasible solution x 0 to the
primal such that c x 0 = b′ w0
Primal Problem
Max. Z P = c x
...(1)
subject to Ax ≤ b, x ≥ 0
Dual Problem
Min. Z D = b′ w
subject to A′ w ≥ c′ ...(2)
w ≥ 0
To prove the theorem we shall construct an optimal solution to the dual from a given
optimal solution of the primal.
Let us assume that the primal has a finite optimal feasible solution x B .
To solve the primal (1) by simplex method, introduce slack variables to each of the
constraints.
Now (1) can be written as
Max. Z P = c x
subject to A x + I x s = b ...(3)
x ≥ 0 , x s ≥ 0,
Since x B is an optimal solution of the primal, c j − Z j ≤ 0, ∀ j
We know Z j = c B Y j = c B B −1α j
∴ c j − c B B −1α j ≤ 0, ∀ j
or c B B −1α j ≥ c j
...(4)
⇒ c B B −1 A ≥ c ...(5)
Taking (ŵ)′ = c B B⁻¹ , where ŵ = [w1, w2 ,..., wm], (5) gives
(ŵ)′ A ≥ c ⇒ [(ŵ)′ A]′ ≥ c′
⇒ A′ ŵ ≥ c′
⇒ ŵ is a solution of the dual (2).
Again considering the relation (4) with α j corresponding to slack variables, we have
c B B⁻¹ e j ≥ 0 , j = 1, 2, ..., m (since c j = 0 for the slack variables)
⇒ (ŵ)′ e j ≥ 0 ⇒ (ŵ)′ ≥ 0
⇒ ŵ ≥ 0
⇒ ŵ is a feasible solution of the dual (2).
We have
Z D = b′ ŵ = [(ŵ)′ b]′ = (ŵ)′ b
= (c B B⁻¹) b = c B (B⁻¹ b)
= c B x B = Z P .
Since ŵ and x B are the feasible solutions of the dual (2) and the primal (1) respectively
and Z D = Z P ,
therefore ŵ is the optimal solution of the dual (2). (See the result of Theorem 3.)
(ii) We shall prove it by contradiction. Suppose, when the primal problem has an
unbounded solution, the dual problem has a finite optimal solution.
We have proved in theorem 1 that dual of the dual is the original primal. Also in
part (i) of this theorem we have proved that if a primal has a finite optimal solution,
the dual also has a finite optimal solution.
Thus if we consider the dual as the primal, then its dual (which is the original
primal) must have a finite optimal solution, which contradicts the hypothesis.
Hence the dual has no finite optimal solution. If the primal problem has an
unbounded solution, either the dual has no solution or an unbounded solution.
Similarly, we can prove that if the dual has an unbounded solution, the primal has
no solution or an unbounded solution.
Theorem 6: If any of the constraints in the primal is a perfect equality, the corresponding
dual variable is unrestricted in sign.
Proof: Suppose in the given primal the k-th constraint is an equality. Writing the primal
in the standard primal form, we have
Max. Z = c1 x1 + c2 x2 + ... + cn x n
subject to
a11 x1 + a12 x2 + ... + a1n x n ≤ b1
a21 x1 + a22 x2 + ... + a2 n x n ≤ b2
... ... ...
... ... ...
ak1 x1 + ak 2 x2 + ... + akn x n ≤ bk
− ak1 x1 − ak 2 x2 − ... − akn x n ≤ − bk
... ... ...
... ... ...
am1 x1 + am2 x2 + ... + amn x n ≤ bm,
x1, x2 ,..., x n ≥ 0
The dual of the above problem is
Min. Z D = b1 w1 + b2 w2 + ... + bk ( w′k − w′′k ) + ... + bm wm
subject to
a11w1 + a21w2 + ... + ak1 (w′k − w′′k ) + ... + am1wm ≥ c1
a12 w1 + a22 w2 + ... + ak 2 (w′k − w′′k ) + ... + am2 wm ≥ c2
... ... ...
... ... ...
a1nw1 + a2 nw2 + ... + akn (w′k − w′′k ) + ... + amnwm ≥ cn
w1, w2 , ..., w′k , w′′k ,..., wm ≥ 0.
Putting wk = w′k − w′′k , which is unrestricted in sign, the dual takes the form
Min. Z D = b1 w1 + b2 w2 + ... + bk wk + ... + bm wm
subject to
a11w1 + a21w2 + ... + ak1 wk + ... + am1wm ≥ c1
a12 w1 + a22 w2 + ... + ak 2 wk + ... + am2 wm ≥ c2
... ... ...
... ... ...
a1nw1 + a2 nw2 + ... + akn wk + ... + amnwm ≥ cn
w1, w2 ,..., wk −1, wk + 1,..., wm ≥ 0, wk unrestricted in sign.
Thus, if the k-th constraint of the primal is a perfect equality, the corresponding dual
variable wk is unrestricted in sign. Conversely, if a variable of the primal is unrestricted in
sign, the corresponding constraint of the dual is a perfect equality, as we now show.
Proof: Consider the primal in which the k-th variable is unrestricted in sign.
subject to
a11 x1 + a12 x2 + ... + a1k x k + ... + a1n x n ≤ b1
a21 x1 + a22 x2 + ... + a2 k x k + ... + a2 n x n ≤ b2
... ... ...
... ... ...
am1 x1 + am2 x2 + ... + amk x k + ... + amn x n ≤ bm
x1, x2 ,..., x k −1, x k + 1,..., x n ≥ 0,
x k is unrestricted in sign.
Substituting x k = x′k − x′′k , x′k ≥ 0, x′′k ≥ 0, the problem becomes
Max. Z = c1 x1 + c2 x2 + ... + ck ( x′k − x′′k ) + ... + cn x n
subject to
a11 x1 + a12 x2 + ... + a1k (x′k − x′′k ) + ... + a1n x n ≤ b1
a21 x1 + a22 x2 + ... + a2 k (x′k − x′′k ) + ... + a2 n x n ≤ b2
... ... ... ...
... ... ... ...
am1 x1 + am2 x2 + ... + amk (x′k − x′′k ) + ... + amn x n ≤ bm,
x1, x2 ,..., x k −1, x′k , x′′k , x k + 1,..., x n ≥ 0
The dual of this problem contains, corresponding to the columns of x′k and x′′k , the two
constraints
a1k w1 + a2 k w2 + ... + amk wm ≥ ck
and − a1k w1 − a2 k w2 − ... − amk wm ≥ − ck ,
which together are equivalent to the single equality
a1k w1 + a2 k w2 + ... + amk wm = ck .
Hence if the k-th variable of the primal is unrestricted, the k-th constraint in the dual is an
equality.
Consider again the primal problem
Max. Z P = c x , subject to A x ≤ b, x ≥ 0,
and its dual
Min. Z D = b′ w , subject to A′ w ≥ c′, w ≥ 0.
Conversely, let us assume that both the primal and the dual possess feasible solution ^
x
and ^
w respectively. Then c ^
x and b′ ^
w are both finite and c ^
x ≤ b′ ^
w i.e.,b′ ^
w acts as an
upper bound on c ^
x although not necessarily the least upper bound. Hence the primal
must have finite optimum solution.
Max. Z = c x,
subject to A x ≤ b, x ≥ 0
Min, Z D = b′ w
subject to A′ w ≥ c′, w ≥ 0
Suppose there does not exist any feasible solution to the dual but there does exist a
feasible solution to the primal.
If we suppose that x̂ is an optimum solution to the primal, then by the fundamental
theorem of duality there must exist a feasible solution to the dual, which contradicts
the hypothesis. Therefore no feasible solution to the primal can be optimal.
Proof: The objective function of the primal and dual problems in explicit form can be
written as
Now, multiplying the equations (3) by w1, w2 ,..., wm respectively and then adding the
resulting equations, we have
x1 Σ_{i=1}^{m} a i1 w i + x2 Σ_{i=1}^{m} a i2 w i + ... + x n Σ_{i=1}^{m} a in w i
+ x n+1 w1 + x n+2 w2 + ... + x n+m wm = Σ_{i=1}^{m} b i w i    ...(5)
( c1 − Σ_{i=1}^{m} a i1 w i ) x1 + ( c2 − Σ_{i=1}^{m} a i2 w i ) x2 + ... + ( cn − Σ_{i=1}^{m} a in w i ) x n
− w1 x n+1 − w2 x n+2 − ... − wm x n+m = Z P − Σ_{i=1}^{m} b i w i = Z P − Z D    ...(6)
Since, from the dual constraints written with the surplus variables, c j − Σ_{i=1}^{m} a ij w i = − w m+j ,
the left hand side of (6) equals
− ( w m+1 x1 + w m+2 x2 + ... + w m+n x n ) − ( w1 x n+1 + w2 x n+2 + ... + wm x n+m ).
Now if x * = [ x1*, x2*,..., x n*] and w * = [w1*, w2*,..., wm*] be the optimal solutions to the
primal and the dual problems respectively, then by Duality theorem
Z P * = Z D*
Thus for the optimal solutions x * and w* of the primal and dual problems the
corresponding slack and surplus variables are
Since all the variables in (9) are non-negative so their product terms in (9) are also
non-negative. For the validity of relation (9) each term must be individually equal to zero
1. From (11), if x*n + i > 0 then we must have w*i = 0, i.e., if the slack variable in the i-th
relation of the primal is positive then i-th variable of the dual is zero.
Again from (10), if w*m + j > 0 then we must have x*j = 0, i.e., if the surplus variable in
the j-th relation of the dual is positive then the j-th variable of the primal (dual of the
dual) is zero.
This proves the part (1) of the theorem.
2. Also from (10), if x*j > 0, then we must have w*m + j = 0 i.e., if the j-th variable in the
primal is positive then the j-th relation in the dual is strictly an equality (as
w*m + j = 0).
Similarly, from (11), if w*i > 0 then we must have x*n + i = 0, i.e., if the i-th variable in the dual
is positive then the i-th relation in the primal (dual of the dual) is strictly an equality (as x*n + i = 0).
Alternative Statement : A necessary and sufficient condition for a pair of feasible
solutions to the primal and dual to be optimal is that x*j w*m + j = 0 for every j and
w*i x*n + i = 0 for every i, i.e., whenever a slack or surplus variable is positive the
corresponding dual (or primal) variable vanishes.
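These conditions are easy to verify numerically. The sketch below is ours; it solves a small primal–dual pair with SciPy — the data are those of the L.P.P. Max Z = 3x1 + 5x2 subject to x1 ≤ 4, x2 ≤ 6, 3x1 + 2x2 ≤ 18 that appears among the exercises of this chapter — and checks that every product of a variable with its complementary slack or surplus vanishes.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [3.0, 2.0]])
b = np.array([4.0, 6.0, 18.0])

p = linprog(-c, A_ub=A,    b_ub=b,  bounds=[(0, None)] * 2, method="highs")   # primal
d = linprog(b,  A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")   # dual

x, w = p.x, d.x
primal_slack = b - A @ x           # the x_{n+i}
dual_surplus = A.T @ w - c         # the w_{m+j}
print(np.round(w * primal_slack, 8))   # every w_i * x_{n+i}  -> 0
print(np.round(x * dual_surplus, 8))   # every x_j * w_{m+j}  -> 0
```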
Primal and Dual Correspondence : With the help of the duality theorems developed
in article 8.5 and 8.6 we note the following correspondence between the primal and the
dual problems :
1. The optimal value of the primal objective function is equal to the optimal value of
the dual objective function.
2. The ∆ j 's (∆ j = c j − Z j ) with sign changed for the slack (or surplus) variables in the
final simplex table for the primal are the values of the corresponding optimal dual
variables in the final simplex table for the dual.
Proof: In the final simplex table (for the primal), corresponding to the slack and surplus
variables,
− ∆ j = − (c j − Z j ) = − (0 − Z j ) = c B Y j = c B B⁻¹ α j .
But, for the i-th slack variable, α j becomes the unit vector e i with 1 at the i-th place, so that
− ∆ j = c B B⁻¹ e i = i-th component of c B B⁻¹ = w i ,
the value of the i-th dual variable at the optimum.
3. If either problem has an unbounded solution, then the other will have no feasible
solution.
Note : It is always advantageous to apply the simplex method to the problem
having lesser number of constraints. Therefore, first we shall solve the problem
(primal or dual) with lesser number of constraints by simplex method and then read
the solution of the other from the final simplex table according to the rules
described above.
Duality plays an important role not only in linear programming but also in physics,
economics, engineering etc.
In game theory, we use it to find the optimal strategies of the player B when he minimizes
his losses. Then, using duality, we can change the player A's problem into player B's
problem and vice-versa.
Max. Z = 4 x1 + 2 x2
subject to − x1 − x2 ≤ −3
− x1 + x2 ≤ −2, x1, x2 ≥ 0 .
Min. Z D = −3 w1 − 2 w2
subject to − w1 − w2 ≥ 4
− w1 + w2 ≥ 2
w1, w2 ≥ 0.
Changing the dual to maximization and introducing surplus variables w3 , w4 and
artificial variables wa1 , wa2 , the problem becomes
Max. Z D′ = 3 w1 + 2 w2 + 0 w3 + 0 w4 − M wa1 − M wa2
subject to − w1 − w2 − w3 + wa1 = 4
− w1 + w2 − w4 + wa2 = 2 , w1, w2 , w3 , w4 , wa1 , wa2 ≥ 0.
cj 3 2 0 0 −M −M Min. ratio
B cB w B W2
wB W1 W2 W3 W4 A1 A2
w i2 > 0
A1 −M 4 −1 −1 −1 0 1 0
A2 −M 2 −1 1 0 −1 0 1 2 (min)
→
Z D′ = −6 M ∆j 3 −2 M 2 −M −M 0 0
↑ ↓
A1 −M 6 −2 0 −1 −1 1 1
W2 2 2 −1 1 0 −1 0 1
Z D′ = 4 − 6 M ∆j 5 −2 M 0 −M 2−M 0 −2
In the last simplex table no ∆ j > 0 and a non-zero artificial variable appears in the basis,
therefore the dual problem does not possess any feasible solution. Consequently, the given
problem cannot have a finite optimum solution : it is either infeasible or unbounded. Since
x1 = 3, x2 = 0 satisfies both constraints, the given problem in fact has an unbounded solution.
Example 2: Write the dual of the following linear programming problem and hence solve
it :
Max. Z = 3 x1 − 2 x2
subject to x1 ≤ 4
x2 ≤ 6
x1 + x2 ≤ 5
− x2 ≤ −1
Solution: The given problem is in standard primal form. The dual of the given primal is
Min. Z D = 4 w1 + 6 w2 + 5 w3 − w4
subject to w1 + w3 ≥ 3
w2 + w3 − w4 ≥ −2,
w1, w2 , w3 , w4 ≥ 0.
Changing the dual problem to maximization, multiplying both sides of the second constraint
by −1 to make the R.H.S. positive, and introducing the surplus variable w5 and the slack
variable w6 to change the inequalities into equations, the dual problem becomes
Max. Z D′ = −4 w1 − 6 w2 − 5 w3 + w4 + 0 w5 + 0 w6
subject to w1 + w3 − w5 = 3
− w2 − w3 + w4 + w6 = 2,
w1, w2 ,..., w6 ≥ 0.
Here we have not introduced the artificial variable in the first constraint equation as w1
will serve the purpose.
cj −4 −6 −5 1 0 0 Min. ratio
B cB wB W1 W2 W3 W4 W5 W6 w B W4
w i4 > 0
W1 −4 3 1 0 1 0 −1 0
W6 0 2 0 −1 −1 1 0 1 2 (min)
→
Z D′ = −12 ∆j 0 −6 −1 1 −4 0
↑ ↓
W1 −4 3 1 0 1 0 −1 0
W4 1 2 0 −1 −1 1 0 1
Z D′ = −10 ∆j 0 −5 0 0 −4 −1
Since all ∆ j ≤ 0, the last table gives the optimal solution of the dual :
w1 = 3, w2 = 0, w3 = 0, w4 = 2, Min. Z D = 10.
To read the solution of the primal from the final simplex table of the dual : the values of the
primal variables are the ∆ j , with sign changed, of the columns of the surplus and slack
variables w5 and w6 , i.e., x1 = − ∆5 = 4, x2 = − ∆6 = 1 and Max. Z = Min. Z D = 10.
Min. Z = 3 x1 + x2
subject to x1 + x2 ≥ 1
2 x1 + 3 x2 ≥ 2 and x1, x2 ≥ 0.
Solution: The given L.P.P. is in the standard primal form. The dual problem is given by
Max. Z D = w1 + 2 w2
subject to w1 + 2 w2 ≤ 3
w1 + 3 w2 ≤ 1, w1, w2 ≥ 0.
Max. Z D = w1 + 2 w2 + 0 w2 + 0 w4
subject to w1 + 2 w2 + w3 =3
w1 + 3 w2 + w4 = 1,
w1, w2 , w3 , w4 ≥ 0.
cj 1 2 0 0 Min ratio
B cB w B W2
wB W1 W2 W3 W4 w i2 > 0
W3 0 3 1 2 1 0 32
W4 0 1 1 3 0 1 13→
(min)
ZD = 0 ∆j 1 2 0 0 w B W1
↑ ↓
W3 0 73 13 0 1 −2 3 7
W2 2 13 13 1 0 13 1→
(min)
ZD = 2 3 ∆j 13 0 0 −2 3
↑ ↓
W3 0 2 0 −1 1 −1
W1 1 1 1 3 0 1
ZD = 1 ∆j 0 −1 0 −1
In last table, since all ∆ j ≤ 0, therefore the dual problem has optimal solution
w1 = 1, w2 = 0, Max. Z D = 1
Now the solution of the original primal problem from the last simplex table of the dual is
x1 = − ∆3 = 0, x2 = − ∆4 = 1, min Z = max. Z D = 1.
Max. Z = 30 x1 + 23 x2 + 29 x3
subject to 6 x1 + 5 x2 + 3 x3 ≤ 26
4 x1 + 2 x2 + 5 x3 ≤ 7
x1, x2 , x3 ≥ 0 .
Hence or otherwise find the solution to the dual of the above problem.
Max. Z = 30 x1 + 23 x2 + 29 x3 + 0 x4 + 0 x5 ,
subject to 6 x1 + 5 x2 + 3 x3 + x4 = 26
4 x1 + 2 x2 + 5 x3 + x5 = 7,
x1, x2 ,...., x5 ≥ 0.
The solution of the problem by simplex method is given in the following table :
cj 30 23 29 0 0 Min. ratio
x B Y1
B cB
xB Y1 Y2 Y3 Y4 Y5 yi1 > 0
Y4 0 26 6 5 3 1 0 26 6
Y5 0 7 4 2 5 0 1 74→
(min)
Z = c Bx B = 0 ∆j 30 23 29 0 0 x B Y2
↑ ↓ yi2 > 0
Y4 0 31 2 0 2 −9 2 1 −3 2 31 4
Y1 30 74 1 12 54 0 14 72→
(min)
Y4 0 17 2 −4 0 −19 2 1 −5 2
Y2 23 72 2 1 52 0 12
From the last table, the optimal solution of the given problem is x1 = 0, x2 = 7/2 , x3 = 0
and Max. Z = 161/2 .
Dual Problem
The given problem is in standard primal form. The dual of the given problem is
Min. Z D = 26 w1 + 7w2
subject to 6 w1 + 4 w2 ≥ 30
5 w1 + 2 w2 ≥ 23
3 w1 + 5 w2 ≥ 29,
w1, w2 ≥ 0.
To read the solution of the dual from the final simplex table of the primal problem :
w1 = − ∆4 = 0, w2 = − ∆5 = 23/2 and Min. Z D = Max. Z = 161/2 .
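A two-line numerical check of ours: the dual values read off above are feasible for the dual and give the same objective value, 161/2, as the primal optimum.

```python
import numpy as np

w = np.array([0.0, 23 / 2])
A = np.array([[6.0, 5, 3], [4.0, 2, 5]])     # primal constraint matrix
c = np.array([30.0, 23, 29])
b = np.array([26.0, 7])

print(A.T @ w >= c)     # [ True  True  True ]  -> w is feasible for the dual
print(b @ w)            # 80.5 = 161/2 = Max Z of the primal
```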
Example : Using duality, solve the L.P.P.
Max. Z = 5 x1 − 2 x2 + 3 x3
subject to 2 x1 + 2 x2 − x3 ≥ 2
3 x1 − 4 x2 ≤ 3
x2 + 3 x3 ≤ 5 , x1, x2 , x3 ≥ 0.
Solution: Writing the given problem in the standard primal form, we get
Max. Z = 5 x1 − 2 x2 + 3 x3
subject to
−2 x1 − 2 x2 + x3 ≤ −2
3 x1 − 4 x2 ≤ 3
x2 + 3 x3 ≤ 5, and x1, x2 , x3 ≥ 0
Min. Z D = −2 w1 + 3 w2 + 5 w3
subject to
−2 w1 + 3 w2 ≥ 5
−2 w1 − 4 w2 + w3 ≥ −2
w1 + 3 w3 ≥ 3, and w1, w2 , w3 ≥ 0.
Max. Z ′D = 2 w1 − 3 w2 − 5 w3
subject to −2 w1 + 3 w2 + 0 w3 ≥ 5
2 w1 + 4 w2 − w3 ≤ 2
w1 + 0 w2 + 3 w3 ≥ 3,
w1, w2 , w3 ≥ 0.
subject to − 2 w1 + 3 w2 + 0 w3 − w4 + wa1 = 5
2 w1 + 4 w2 − w3 + 0 w4 + w5 = 2
w1 + 0 w2 + 3 w3 − w6 + wa2 = 3
w1, w2 , ..., w6 , wa1 , wa2 ≥ 0.
The solution of this dual problem by simplex method is given in the following table :
cj 2 −3 −5 0 0 0 −M −M Min.
ratio
B cB
wB W1 W2 W3 W4 W5 W6 A1 A2 w B W2
w i2 > 0
A1 −M 5 −2 3 0 −1 0 0 1 0 53
W5 0 2 2 4 −1 0 1 0 0 0 2 4(min)
A2 −M 3 1 0 3 0 0 −1 0 1 →
Z ′ D = −8 M ∆j 2−M −3 + 3M −5 + 3 M −M 0 −M 0 0 w B W3
↑ ↓ w i3 > 0
A1 −M 72 −7 2 0 34 −1 −3 4 0 1 0 14 3
W2 −3 12 12 1 −1 4 0 14 0 0 0 Neg.
A2 −M 3 1 0 3 0 0 −1 0 1 1 (min)
→
Z ′D = ∆j 7 − 5M 0 15 M − 23 −M 3 − 3M −M 0 0 w B W6
13 3 2 4 4 ↓ w i6 > 0
− M −
2 2 ↑
A1 − M 11 4 −15 4 0 0 −1 −3 4 14 1 11(min)
→
W2 −3 3 4 7 12 1 0 0 14 −1 12 0
W3 −5 1 13 0 1 0 0 −1 3 0
Z ′D = ∆j 65 − 45M 0 0 −M 3 − 3 M 3 M − 23 0
11 29 12 4 12 ↓
− M −
4 4 ↑
W6 0 11 −15 0 0 −4 −3 1
W2 −3 53 −2 3 1 0 −1 3 0 0
W3 −5 14 3 −14 3 0 1 −4 3 −1 0
85 ∆j −70 3 0 0 −23 3 −5 0
ZD
′ =−
3
In the last simplex table all ∆ j ≤ 0, therefore it gives the optimal solution of the dual problem :
w1 = 0, w2 = 5/3 , w3 = 14/3 with Min. Z D = 85/3 .
Reading the solution of the primal from the ∆ j of the surplus and slack columns,
x1 = − ∆4 = 23/3 , x2 = − ∆5 = 5, x3 = − ∆6 = 0 and Max. Z = 85/3 .
Example 6: One unit of product A contributes ` 7 and requires 3 units of raw material
and 2 hours of labour. One unit of product B contributes ` 5 and requires one unit of raw
material and one hour of labour. Availability of the raw material at present is 48 units
and there are 40 hours of labour.
(i) Formulate the linear programming problem.
(ii) Write the dual and solve it by simplex method. Also find the optimal product mix.
[Meerut 2009, 11 (BP), 12]
Max. Z = 7 x1 + 5 x2
subject to 3 x1 + x2 ≤ 48
2 x1 + x2 ≤ 40
x1, x2 ≥ 0
Min. Z D = 48w1 + 40 w2
subject to 3 w1 + 2 w2 ≥ 7
w1 + w2 ≥ 5,
w1, w2 ≥ 0
To solve the dual problem by the simplex method, changing the dual objective function to
maximization and introducing surplus variables w3 , w4 and artificial variables wa1 , wa2
respectively to change the constraint inequalities into equations, the dual problem can
be written as
Max. Z D′ = − 48 w1 − 40 w2 + 0 w3 + 0 w4 − M wa1 − M wa2
subject to 3 w1 + 2 w2 − w3 + wa1 = 7
w1 + w2 − w4 + wa2 = 5
w1, w2 , w3 , w4 , wa1 , wa2 ≥ 0
A1 −M 7 3 2 −1 0 1 0 73→
A2 −M 5 1 1 0 −1 0 1 (min)
5
Z ′D = −12 M ∆j 4 M − 48 3 M − 40 −M −M 0 0 w B W2
↑ ↓ w i2 > 0
W1 −48 73 1 23 −1 3 0 13 0 72→
A2 −M 83 0 13 13 −1 −1 3 1 (min)
8
Z ′D = − ∆j 0 M −16 −M 4M 0 w B W3
−8 16 −
8 ↓ 3 3
(112 + M) wi3 > 0
3 ↑
W2 −40 72 32 1 −1 2 0 −1 2 0
A2 − M 32 −1 2 0 12 −1 −1 2 1 3 (min)
→
Z ′D = − ∆j 1 0 1 −M 3 0
12 − M M −20 20 − M
3 2 2 2 ↓
(140 + M ) ↑
2
W2 −40 5 1 1 0 −1 0 1
W3 0 3 −1 0 1 −2 −1 2
Z ′D = −200 ∆j −8 0 0 −40 −M 40 − M
From the final table, x1 = − ∆3 = 0, x2 = − ∆4 = 40 and Max. Z = Min. Z D = 200, i.e., the optimal
product mix is : none of product A and 40 units of product B, for a total contribution of ` 200.
Max. Z x = 40 x1 + 50 x2
subject to 2 x1 + 3 x2 ≤ 3, 8 x1 + 4 x2 ≤ 5, x1, x2 ≥ 0.
Write the dual of the given L.P.P. Also find the solution of the dual problem.
2. Min. Z = x1 − x2
3. Max. Z = 3 x1 + 2 x2
subject to 2 x1 + x2 ≤ 5, x1 + x2 ≤ 3 and x1, x2 ≥ 0. [Meerut 2010]
4. Max. Z = 2 x1 + x2
subject to x1 + 2 x2 ≤ 10, x1 + x2 ≤ 6
5. Min. Z = 2 x1 + 2 x2
subject to 2 x1 + 4 x2 ≥ 1, x1 + 2 x2 ≥ 1.
6. Max. Z = 3 x1 + 2 x2
subject to x1 + x2 ≥ 1, x1 + x2 ≤ 7,
7. Formulate the dual of the given L.P.P. and hence solve it.
Min. Z = 3 x1 − 2 x2 + 4 x3
subject to 3 x1 + 5 x2 + 4 x3 ≥ 7, 6 x1 + x2 + 3 x3 ≥ 4,
7 x1 − 2 x2 − x3 ≤ 10, x1 − 2 x2 + 5 x3 ≥ 3
4 x1 + 7 x2 − 2 x3 ≥ 2
and x1, x2 , x3 ≥ 0.
Min. Z = 4 x1 + 3 x2 + 6 x3
Min. Z = 50 x1 − 80 x2 − 140 x3 ,
subject to x1 − x2 − 3 x3 ≥ 4, x1 − 2 x2 − 2 x3 ≥ 3
and x1, x2 , x3 ≥ 0.
Max. Z = 3 x1 − 2 x2
subject to x1 + x2 ≤ 5, x1 ≥ 4, 1 ≤ x2 ≤ 6
and x1, x2 ≥ 0.
11. Solve the following primal problem and its dual by simplex method :
Max. Z = 5 x1 + 12 x2 + 4 x3
subject to x1 + 2 x2 + x3 ≤ 5, 2 x1 − x2 + 3 x3 = 2, x1, x2 , x3 ≥ 0.
Verify that the solution of the primal can be read from the optimal table of the dual
and vice-versa. [Meerut 2007]
Max. Z = 40 x1 + 35 x2
Solve the primal and the dual by simplex method. Compare the optimal solutions of
the two problems. [Meerut 2006 (BP)]
Max. Z = 4 x1 + 3 x2
Solve the primal problem by simplex method and deduce from it the solution to the
dual problem.
Min. Z = 10 x1 + 6 x2 + 2 x3
Max. Z = 2 x1 + 3 x2 ,
subject to 2 x1 + 2 x2 ≤ 10, 2 x1 + x2 ≤ 6,
x1 + 2 x2 ≤ 6, x1, x2 ≥ 0.
Solve the primal and then find the solution to the dual.
subject to x1 ≥ 4, x2 ≥ 6, x1 + 2 x2 ≥ 20, 2 x1 + x2 ≥ 18
Max. Z = 2 x1 + x2
20. A company makes three products X , Y , Z out of three raw materials A, B, C. The
number of units of raw materials required to produce one unit of the product is as
given in the following table :
Raw material X Y Z
A 1 2 1
B 2 1 4
C 2 5 1
The unit profit contributions of the products X , Y and Z are ` 40, 25 and 50
respectively. The numbers of units of raw materials available are 36, 60 and 45
respectively.
(i) Determine the product mix that will maximize the total profit.
(ii) Through the final simplex table write the solution to the dual problem.
1. x1 = 3/16 , x2 = 7/8 , max. Z x = 51.25
2. no feasible solution.
3. x1 = 2, x2 = 1, max. Z = 8
4. x1 = 4, x2 = 2, max. Z = 10
5. x1 = 1/3 , x2 = 1/3 , min. Z = 4/3 .
6. x1 = 7, x2 = 0, Max. Z = 21.
7. no feasible solution.
8. x1 = 0, x2 = 3, x3 = 2, min. Z = 21
9. no feasible solution.
10. x1 = 4, x2 = 1, max. Z = 10
11. x1 = 9/5 , x2 = 8/5 , x3 = 0, max. Z = 28 1/5 ;
Dual : w1 = 29/5 , w2 = −2/5 , Min. Z D = 28 1/5
13. x1 = 4, x2 = 3, max. Z = 25
15. x = 0, y = 0, z = 5/2 , Min. Z = 5
16. x1 = 1/4 , x2 = 5/4 , x3 = 0, Min. Z = 10
17. x1 = 2, x2 = 2, max. Z = 10
Dual : w1 = 0, w2 = 1/3 , w3 = 4/3 , min. Z D = 10
18. Unbounded
19. x1 = 3, x2 = 1, max Z = 7
independent of the requirement vector b. Thus a basic solution with all c j − Z j ≤ 0 need
not be feasible, but any basic feasible solution with all c j − Z j ≤ 0 will certainly be an
optimal solution. The dual simplex method uses the above remarks. Thus, whereas the
regular simplex method starts with a basic feasible but non-optimal solution and
proceeds towards optimality, the dual simplex method starts with a basic infeasible but
optimal solution and proceeds towards feasibility.
The dual simplex algorithm was discovered by C.E. Lemke, a student of Charnes while
applying the simplex method to the dual of a L.P.P.
1. If all ∆ j ≤ 0 and all x Bi are non-negative, the solution found above is optimum basic
feasible solution.
2. If all ∆ j ≤ 0 and at least one x Bi is negative, then proceed to step V.
3. If any ∆ j > 0 then the method fails.
Step V : To find the vectors entering (incoming) and leaving (outgoing) the basis.
To determine the outgoing vector : the r-th vector β r of the basis, i.e., the corresponding
basic variable x Br , leaves the basis, where x Br = Min. { x Bi : x Bi < 0 }.
To determine the incoming vector : the vector Y k enters the basis, where
∆ k / y rk = Min_j { ∆ j / y rj : y rj < 0 }.
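For readers who wish to experiment, the following is a compact Python sketch of our own of one dual simplex iteration (the tableau layout is our assumption, not the book's); it applies exactly the two rules stated above, followed by the usual Gauss–Jordan pivot. Repeated calls drive every x Bi to a non-negative value while keeping all ∆ j ≤ 0.

```python
import numpy as np

def dual_simplex_pivot(T, x_B, Delta, basis):
    """One dual simplex iteration.  T holds the y_ij, x_B the solution column,
    Delta the c_j - Z_j row.  Returns False once x_B >= 0 (feasible, hence optimal)."""
    if (x_B >= -1e-9).all():
        return False
    r = int(np.argmin(x_B))                                # outgoing: most negative x_Br
    cand = [j for j in range(T.shape[1]) if T[r, j] < -1e-9]
    if not cand:                                           # x_Br < 0 but no y_rj < 0
        raise ValueError("the problem has no feasible solution")
    k = min(cand, key=lambda j: Delta[j] / T[r, j])        # incoming: min Delta_j / y_rj
    piv = T[r, k]
    x_B[r] /= piv
    T[r, :] /= piv                                         # normalise the pivot row
    for i in range(T.shape[0]):
        if i != r:
            x_B[i] -= T[i, k] * x_B[r]
            T[i, :] -= T[i, k] * T[r, :]
    Delta -= Delta[k] * T[r, :]                            # keeps every Delta_j <= 0
    basis[r] = k
    return True
```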
Example : Use the dual simplex method to solve the L.P.P.
Min. Z = 3 x1 + x2
subject to x1 + x2 ≥ 1
2 x1 + 3 x2 ≥ 2
Solution: For the clear understanding of the procedure we shall solve this example step
by step.
Step I : First we write the given L.P.P. in the standard primal form.
Max. Z P = − Z = −3 x1 − x2
subject to − x1 − x2 ≤ −1
−2 x1 − 3 x2 ≤ −2
and x1, x2 ≥ 0
∴ It is possible to find an infeasible but optimal basic solution and hence we can solve
this problem by dual simplex algorithm.
Step II : Introducing the slack variables x3 and x4 the constraints reduce to the following
equalities :
− x1 − x2 + x3 = −1
−2 x1 − 3 x2 + x4 = −2
Taking x1 = 0 = x2 , we get x3 = −1, x4 = −2, which is the starting basic infeasible solution
to the primal.
cj −3 −1 0 0
B cB
X B (= x B ) Y1 Y2 Y3 (β1) Y4 (β2 )
Y3 0 −1 −1 −1 1 0
Y4 0 −2 −2 −3 0 1 →
ZP = cB x B ∆j −3 −1 0 0
=0 ↑ ↓
Step IV : ∆1 = c1 − Z1 = c1 − c BY1 = −3
∆2 = c2 − Z2 = c2 − c BY2 = −1
Step V : To find the improved solution we shall determine the leaving vector and
entering vector to the basis.
∆ k / y rk = ∆ k / y 2k = Min_j { ∆ j / y 2j : y 2j < 0 }
= Min. { ∆1 / y 21 , ∆2 / y 22 } = Min. { −3/−2 , −1/−3 } = 1/3 = ∆2 / y 22 ,
so Y2 enters the basis and Y4 leaves it. Pivoting, we obtain the next table, in which
x B1 = −1/3 < 0, so that Y3 (β1) leaves the basis ; now
∆ k / y rk = ∆ k / y 1k = Min_j { ∆ j / y 1j : y 1j < 0 } = Min. { ∆1 / y 11 , ∆4 / y 14 }
= Min. { (−7/3)/(−1/3) , (−1/3)/(−1/3) } = Min. {7, 1} = 1 = ∆4 / y 14 ,
so that Y4 enters the basis.
cj −3 −1 0 0
B cB
X B (= x B ) Y1 Y2 (β2 ) Y3 Y4 (β1)
Y4 0 1 1 0 −3 1
Y2 −1 1 1 1 −1 0
Z P = −1 ∆j −2 0 −1 0
∵ all ∆ j ≤ 0 and all x Bi ≥ 0, the solution is optimal and feasible, which is
x1 = 0, x2 = 1 with Min. Z = − Max. Z P = 1.
The above solution in different steps can be more conveniently represented by a single
table as shown below :
cj −3 −1 0 0 Mini x Br
B cB
xB Y1 Y2 Y3 Y4 x Br < 0
Y3 (β1) 0 −1 −1 −1 1 0
Y4 (β2 ) 0 −2 −2 −3 0 1 − 2 ( x B2 ) →
ZP = cB x B ∆j −3 −1 0 0
=0
Mini ratio −3 −2 −1 −3 — —
∆ j y2 j y2 j < 0 =3 2
=1 3 ↑
Y3 (β1) 0 −1 3 −1 3 0 1 −1 3 − 1 3 ( x B1) →
Y2 (β2 ) −1 23 23 1 0 −1 3
Z P = −2 3 ∆j −7 3 0 0 −1 3
Mini ratio −7 3 — — −1 3
=7 =1
∆ j y1 j y1 j < 0 −1 3 −1 3
↑
Y4 0 1 1 0 −3 1 All x Br > 0
Y2 −1 1 1 1 −1 0
Z P = −1 ∆j −2 0 −1 0
Example : Use the dual simplex method to solve the L.P.P.
Min. Z = 3 x1 + 2 x2 + x3 + 4 x4
subject to 2 x1 + 4 x2 + 5 x3 + x4 ≥ 10
3 x1 − x2 + 7 x3 − 2 x4 ≥ 2
5 x1 + 2 x2 + x3 + 6 x4 ≥ 15
and x1, x2 , x3 , x4 ≥ 0 .
Max. Z P = −3 x1 − 2 x2 − x3 − 4 x4
subject to −2 x1 − 4 x2 − 5 x3 − x4 ≤ −10
−3 x1 + x2 − 7 x3 + 2 x4 ≤ −2
−5 x1 − 2 x2 − x3 − 6 x4 ≤ −15
and x1, x2 , x3 , x4 ≥ 0.
Since objective function is of maximization and all c j < 0, we can solve this L.P.P. by dual
simplex algorithm.
Step II : Introducing the slack variables x5 , x6 and x7 the constraints of the above
problem reduce to the following equalities :
−2 x1 − 4 x2 − 5 x3 − x4 + x5 = −10
−3 x1 + x2 − 7 x3 + 2 x4 + x6 = −2
−5 x1 − 2 x2 − x3 − 6 x4 + x7 = −15
The solution to the problem using dual simplex algorithm is given in the following table :
cj −3 −2 −1 −4 0 0 0 Mini x Br
B cB
xB Y1 Y2 Y3 Y4 Y5 Y6 Y7 x Br < 0
Y5 0 −10 −2 −4 −5 −1 1 0 0
Y6 0 −2 −3 1 −7 2 0 1 0
−15 ( xB3 )
Y7 0 −15 −5 −2 −1 −6 0 0 1 →
Z = c B xB = 0 ∆j −3 −2 −1 −4 0 0 0
Mini ratio −3 −5 −2 −1 −4 2
=1 =1 =
∆ j y3 j =35 −2 −1 −6 3
Y3 j < 0 ↑ min
Z = −9 ∆j 0 −4 5 −2 5 −2 5 0 0 −3 5
Mini ratio — −4 5 −2 5 — — — −3 5
∆ j y1 j −16 5 −23 5 −2 5
=14 = 2 23 =32
Y1 j < 0
min ↑
Y3 −1 20 23 0 16 23 1 −7 23 − 5 23 0 2 23 All xBr > 0
Y6 0 289 23 0 153 23 0 84 23 −32 23 1 −1 23
Y1 −3 65 23 1 6 23 0 29 23 1 23 0 −5 23
x1 = 65/23 , x2 = 0, x3 = 20/23 , x4 = 0 = x5 , x6 = 289/23 , x7 = 0
Hence the optimal solution of the given problem is x1 = 65/23 , x2 = 0, x3 = 20/23 , x4 = 0
with Min. Z = 215/23 .
Min. Z = 6 x1 + 7 x2 + 3 x3 + 5 x4
subject to, 5 x1 + 6 x2 − 3 x3 + 4 x4 ≥ 12
x2 + 5 x3 − 6 x4 ≥ 10
2 x1 + 5 x2 + x3 + x4 ≥ 8
Max. Z P = −6 x1 − 7 x2 − 3 x3 − 5 x4
subject to −5 x1 − 6 x2 + 3 x3 − 4 x4 ≤ −12
− x2 − 5 x3 + 6 x4 ≤ −10
−2 x1 − 5 x2 − x3 − x4 ≤ −8
and x1, x2 , x3 , x4 ≥ 0
Since objective function is of maximization and all c j < 0,we can solve this L.P.P. by dual
simplex algorithm.
Step II : Introducing the slack variables x5 , x6 and x7 the constraints of the above
problem reduce to the following equalities :
−5 x1 − 6 x2 + 3 x3 − 4 x4 + x5 = −12
− x2 − 5 x3 + 6 x4 + x6 = −10
−2 x1 − 5 x2 − x3 − x4 + x7 = −8
The solution to the problem using dual simplex algorithm is given in the following table :
cj −6 −7 −3 −5 0 0 0 Min.
B cB x Br
xB Y1 Y2 Y3 Y4 Y5 Y6 Y7 x Br < 0
Y5 0 −12 −5 −6 3 −4 1 0 0 −12 ( xB1 )
Y6 0 −10 0 −1 −5 6 0 1 0
Y7 0 −8 −2 −5 −1 −1 0 0 1
Z P = CB X B = 0 ∆j −6 −7 −3 −5 0 0 0
Mini. Ratio −6 ( −5 ) −7 ( −6 ) — ( −5 ) ( −4 )
∆ j y 1j =65 =76 =54
y 1 j< 0 (min)
↑
Y2 −7 2 56 1 −1 2 23 −1 6 0 0
Y6 0 −8 56 0 −11 2 20 3 −1 6 1 0 −8 ( xB2 )
Y7 0 2 13 6 0 −7 2 73 −5 6 0 1
Z P = −14 ∆j −1 6 0 −13 2 −1 3 −7 6 0 0
Y2 −7 30 11 25 33 1 0 2 33 −5 33 −1 11 0 All
Y3 −3 16 11 −5 33 0 1 −40 33 1 33 −2 11 0 xBr > 0
Y7 0 78 11 18 11 0 0 −21 11 −8 11 −7 11 1
7. Min. Z = x1 + 2 x2
subject to 2 x1 + x2 ≥ 4
x1 + 7 x2 ≥ 7
and x1, x2 ≥ 0
8. Min. Z = x1 + 2 x2 + 3 x3
subject to 2 x1 − x2 + x3 ≥ 4
x1 + x2 + 2 x3 ≤ 8
x2 − x3 ≥ 2
and x1, x2 , x3 ≥ 0
11. Max. Z = −2 x1 − x3
subject to x1 − 2 x2 + 4 x3 ≥ 8
x1 + x2 − x3 ≥ 5
and x1, x2 , x3 ≥ 0
12. Can you apply Dual Simplex Algorithm to the following L.P.P. ?
Max. Z = 5 x1 + 3 x2
13. A diet-conscious housewife wishes to ensure a certain minimum intake of vitamins
A, B and C for the family. The minimum daily (quantity) needs of the vitamins A, B
and C for the family are respectively 30, 20 and 16 units. For the supply of these
minimum vitamin requirements, the house wife relies on two fresh foods. The first
one provides 7, 5, 2 units of the three vitamins per gram respectively and the second
one provides 2, 4, 8 units of the same three vitamins per gram of the food stuff
respectively. The first food stuff costs ` 3 per gram and the second ` 2 per gram. The
problem is how many grams of each food stuff should the house wife buy everyday
to keep her food bill as low as possible ?
(i) Formulate the underlying L.P.P.
(ii) Write the “Dual” problem.
(iii) Solve the “Dual” problem by using simplex method.
(iv) Solve the primal problem graphically.
(v) Interpret the dual problem and its solution.
4. If the primal has a finite optimal solution then the values of the objective functions
of the primal and dual are .................. .
5. If x is any feasible solution to the primal and w is any feasible solution to the dual
problem then Z p .................. Z D where Z P and Z D are the objective functions of
the primal and dual respectively.
True or False
1. The dual of a dual is the primal itself.
2. If the primal problem has a finite feasible solution, the dual problem has no
solution.
3. The coefficient matrix of the dual is obtained by transposing the coefficient matrix
of the primal.
4. If the primal is a maximization problem, the dual is also a maximization problem.
5. Requirement vector of the primal is the price vector of the dual.
6. In a standard primal problem, if all the constraints have the sign ≥, it is a
maximization problem.
Exercise 8.3
7. x1 = 21/13 , x2 = 10/13 , Min. Z = 41/13
8. x1 = 3, x2 = 2, x3 = 0, Min. Z = 7
9. x1 = 0, x2 = 2/3 , x3 = 0, Max. Z = − 4/3
10. x1 = 3/5 , x2 = 6/5 , Min. Z = 12/5
12. No
13. If x1, x2 grams of the two food stuffs are purchased, then
(i) the L.P.P. is Min. Z = 3 x1 + 2 x2 ,
subject to 7 x1 + 2 x2 ≥ 30, 5 x1 + 4 x2 ≥ 20,
2 x1 + 8 x2 ≥ 16 and x1, x2 ≥ 0
(ii) Dual : Max. Z D = 30 w1 + 20 w2 + 16 w3
subject to 7 w1 + 5 w2 + 2 w3 ≤ 3,
2 w1 + 4 w2 + 8 w3 ≤ 2 and w1, w2 , w3 ≥ 0
(iii) w1 = 5/13 , w2 = 0, w3 = 2/13 , Max. Z D = ` 14
(iv) x1 = 4, x2 = 1, Min. Z = ` 14
True or False
1. True 2. False
3. True 4. False
5. True 6. False
mmm
9.1 Introduction
The ‘Integer Programming Problem’, abbreviated as I.P.P., is a special class of linear
programming problem (L.P.P.) in which all or some of the variables in the optimal
solution are restricted to assume non-negative integer values.
and xj ≥ 0
1. All Integer Programming Problem (All I.P.P.) : An I.P.P. is termed as all I.P.P.
(or pure I.P.P.) if all the variables in the optimal solution are restricted to assume
non-negative integer values. [Agra 2000, 01]
After this, several algorithms came up to solve the I.P.P. An efficient method with a relatively
new approach, developed by A.H. Land and A.G. Doig, is the “Branch and Bound Technique”.
problem is solved to get an integer valued optimum solution. This procedure is repeated
iteratively until the required integer valued optimum solution is obtained.
For example,
(i) if a = 4 1/3 , then [a] = 4 and f = 1/3 , so that 4 1/3 = 4 + 1/3 ,
and (ii) if a = −4 1/3 , then [a] = −5 and f = 2/3 , so that −4 1/3 = −5 + 2/3 .
Note that, for 1 ≤ i ≤ m,
x i = x Bi − Σ_{j=m+1}^{n} y ij x j    ...(1)
Writing x Bi = [x Bi] + f Bi and y ij = [y ij] + f ij , (1) may be rearranged as
f Bi − Σ_{j=m+1}^{n} f ij x j = x i + Σ_{j=m+1}^{n} [y ij] x j − [x Bi]    ...(2)
Clearly [x Bi] ≤ x Bi , [y ij] ≤ y ij , 0 ≤ f Bi < 1 and 0 ≤ f ij < 1.
Now if the variables x i (i = 1, 2, ..., m) and x j (j = m + 1, ..., n) are all integers, then the R.H.S.
of (2) is an integer and hence the L.H.S. f Bi − Σ_{j=m+1}^{n} f ij x j of (2) must also be an integer.
Since Σ_{j=m+1}^{n} f ij x j is non-negative, f Bi − Σ_{j=m+1}^{n} f ij x j ≤ f Bi < 1,
i.e., f Bi − Σ_{j=m+1}^{n} f ij x j is an integer less than 1. Thus it can be either zero or a negative
integer.
∴ f Bi − Σ_{j=m+1}^{n} f ij x j ≤ 0
or − Σ_{j=m+1}^{n} f ij x j ≤ − f Bi
or − Σ_{j ∈ R} f ij x j ≤ − f Bi ,    ...(3)
where R denotes the set of indices of the non-basic variables. Introducing the non-negative
slack variable x G1 , the above inequality reduces to the constraint equation
− Σ_{j ∈ R} f ij x j + x G1 = − f Bi    ...(4)
The constraint equation (3) is called Gomory constraint equation or Gomory cutting
plane.
Adding the Gomory constraint equation (3) to the optimum simplex table 9.1 we obtain
the following new table 9.2 :
Table 9.2 : (Table 9.1 with one additional row appended at the bottom, corresponding to the
Gomory slack variable x G1 , with entries − f ij in the non-basic columns and − f Bi in the x B column.)
Since − fBi is negative, the optimum solution given by the above table is not feasible
hence we apply the dual simplex algorithm to obtain the optimum feasible solution.
If all the variables in the solution thus obtained are integers then the process ends,
otherwise we construct the second Gomory constraint from the resulting simplex table,
introduce it in that table and solve by the dual simplex algorithm. The process is repeated
until an integer valued solution is obtained.
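Constraint (4) is mechanical to build from a source row. The sketch below is ours (the function name and data layout are assumptions); it takes the fractional parts of x Bi and of the y ij in the non-basic columns. The sample data are those of the source row used in the example that follows.

```python
import math

def gomory_cut(x_Bi, y_row, nonbasic):
    """Return (coefficients, rhs) of the cut  -sum_j f_ij x_j + x_G = -f_Bi ."""
    f_Bi   = x_Bi - math.floor(x_Bi)                          # 0 <= f_Bi < 1
    coeffs = {j: -(y_row[j] - math.floor(y_row[j])) for j in nonbasic}
    return coeffs, -f_Bi

# source row of the example below:  x_B1 = 3/5,  y13 = 1/5,  y14 = -2/5
print(gomory_cut(3/5, {3: 1/5, 4: -2/5}, [3, 4]))
# ({3: -0.2, 4: -0.6}, -0.6)   i.e.   -(1/5)x3 - (3/5)x4 + x_G1 = -3/5
```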
Step 3 : Convert the constraints into equations by introducing the non-negative slack
and/or surplus variables.
Step 4 : Obtain the optimum solution of the given L.P.P. ignoring the integer condition
of the variables by using simplex algorithm.
Step 5 : Test the integerability of the optimum solution obtained in step 4. Now there
are two possibilities.
1. The optimum solution have all integer values, then the required solution has been
obtained.
2. The optimum solution does not have all integral values then proceed to the next
step.
Step 6 : If only one variable, say x k = x Bi , has a fractional value, then corresponding to
the i-th row in which this fractional variable lies in the optimal simplex table (obtained
in step 4), form Gomory's constraint by using the formula
− Σ_{j ∈ R} f ij x j + x G1 = − f Bi
Step 7 : Add the Gomory constraint equation at the bottom of the optimal simplex
table obtained in step 4. The solution in this table will then be an infeasible optimal
solution, as −fBi < 0 and ∆j ≤ 0 ∀ j. Now use the dual simplex method to change the
infeasible solution into a feasible optimum solution. Here the slack variable xG1 will be taken
as the first leaving basic variable in the above table.
Step 8 : Test the integrality of the optimum feasible solution obtained in step 7. Now
again there are two possibilities :
1. The optimum solution obtained in step 7 has all integer values; then the required
solution is attained.
2. The optimum solution does not have all integer values. In this case repeat steps 6 to
8 until the required optimum solution is obtained.
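To make the cut-generation of step 6 concrete, the following short Python sketch (an illustration, not part of the original text; the function name and data layout are assumptions) computes the fractional parts fBi and fij of a source row of the optimal simplex table and returns the coefficients of the Gomory constraint −∑ fij xj + xG1 = −fBi.

    import math

    def gomory_cut(x_Bi, row):
        """Given the value x_Bi of a basic variable and the coefficients y_ij of
        the non-basic variables in its row of the optimal simplex table, return
        (cut, rhs) for the constraint  sum_j cut[j] * x_j + x_G1 = rhs,
        i.e. -sum_j f_ij x_j + x_G1 = -f_Bi.  `row` is a dict {j: y_ij}."""
        f_Bi = x_Bi - math.floor(x_Bi)                      # fractional part of x_Bi
        cut = {j: -(y - math.floor(y)) for j, y in row.items()}   # -f_ij
        return cut, -f_Bi

    # Example: the first row of the final part of table 9.3 in the example that
    # follows, x_B1 = 3/5 with y_13 = 1/5 and y_14 = -2/5.
    print(gomory_cut(3/5, {3: 1/5, 4: -2/5}))
    # ({3: -0.2, 4: -0.6}, -0.6)  i.e.  -(1/5)x3 - (3/5)x4 <= -3/5

Note that the fractional part of a negative coefficient such as −2/5 is 3/5 (since −2/5 = −1 + 3/5), which is exactly the point made in the footnote to the worked example below.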
Example 1: Maximize Z = 3x2,
subject to 3x1 + 2x2 ≤ 7,
x1 − x2 ≥ −2,
x1, x2 ≥ 0 and are integers.
Solution: We shall solve this example step wise, so that the students may understand the
procedure.
Steps 1 and 2 : The problem is already in the maximization form. Writing the second
constraint x1 − x2 ≥ −2 as −x1 + x2 ≤ 2, the constraints become
3x1 + 2x2 ≤ 7
−x1 + x2 ≤ 2
Step 3 : Now the inequalities are converted to equalities by the introduction of slack
variables x3 and x4 as follows :
3x1 + 2x2 + x3 = 7
−x1 + x2 + x4 = 2
Step 4 : Now we solve the given L.P.P. by the simplex method, ignoring the integer condition
on the variables. Taking x1 = 0, x2 = 0, we get x3 = 7, x4 = 2, which is the starting B.F.S.
The solution of the problem by the simplex method is given in the following table :
Table 9.3
                 cj →     0      3      0      0     Min. Ratio
  B      cB     xB       Y1     Y2     Y3     Y4     xB/Y2 , yi2 > 0
  Y3      0      7        3      2      1      0       7/2
  Y4      0      2       −1      1      0      1       2/1 (Min) →
  Y3      0      3        5      0      1     −2       3/5 (Min) →
  Y2      3      2       −1      1      0      1
  Z = 6         ∆j        3      0      0     −3
                          ↑                              ↓
  Y1      0     3/5       1      0     1/5   −2/5
  Y2      3    13/5       0      1     1/5    3/5
  Z = 39/5      ∆j        0      0    −3/5   −9/5

∴ x1 = 3/5, x2 = 13/5 = 2 + 3/5, Z = 39/5
Step 5 : Since the optimum solution obtained above does not have all integer values, we
proceed to the next step.
Step 6 : Since the fractional parts in the values of x1 and x2 are each equal to 3/5, we may
select either one. Let us choose x1 = xB1, which lies in the first row of the last part of the
above optimum simplex table 9.3.
∴ Here i = 1, m = 2, n = 4.
Putting these values in (1) of article 9.6, the corresponding Gomory constraint is given by
−∑_{j∈R} f1j xj ≤ −fB1        ...(1)
Since xB1 = 3/5, we have fB1 = 3/5; also f13 = 1/5 and f14 = 3/5 (as y14 = −2/5 = −1 + 3/5).
∴ the Gomory constraint is
−(1/5)x3 − (3/5)x4 ≤ −3/5
Adding the non-negative slack variable xG1, the corresponding Gomory constraint
equation is given by
−(1/5)x3 − (3/5)x4 + xG1 = −3/5
Step 7 : Adding the above new constraint to the optimum simplex table, we get the
following table 9.4 :
Table 9.4
                 cj →     0      3      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     YG1
  Y1      0     3/5       1      0     1/5   −2/5     0
  Y2      3    13/5       0      1     1/5    3/5     0
  YG1     0    −3/5       0      0    −1/5   −3/5     1   →
  Z = 39/5      ∆j        0      0    −3/5   −9/5     0
                                        ↑                  ↓
* Another method of finding the Gomory constraint is as follows. Taking the first row as the source row,
the corresponding equation is
1·x1 + 0·x2 + (1/5)x3 − (2/5)x4 = 3/5
or x1 + (1/5)x3 + (−1 + 3/5)x4 = 3/5
or (1/5)x3 + (3/5)x4 = 3/5 + (−x1 + x4)
Since all the variables must have non-negative integral values, the L.H.S. is non-negative
and so the R.H.S. should also be non-negative.
∴ (1/5)x3 + (3/5)x4 = 3/5 + (a non-negative integer)
∴ (1/5)x3 + (3/5)x4 ≥ 3/5
or −(1/5)x3 − (3/5)x4 ≤ −3/5
Since here xG1 = −3/5 < 0, the solution given by the above table is not feasible. Here
∆k/yrk = ∆k/y3k = Min_j { ∆j/y3j : y3j < 0 }
       = Min { ∆3/y33 , ∆4/y34 } = Min { (−3/5)/(−1/5) , (−9/5)/(−3/5) }
       = Min { 3, 3 }     ∴ k = 3 or 4
Taking k = 3, i.e., taking Y3 (= α3) as the entering vector, the revised simplex table is
Table 9.5
                 cj →     0      3      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     YG1
  Y1      0      0        1      0      0     −1      1
  Y2      3      2        0      1      0      0      1
  Y3      0      3        0      0      1      3     −5
  Z = 6         ∆j        0      0      0      0     −3

∴ x1 = 0, x2 = 2 and maximum Z = 6, which is the required optimal all-integer solution.
Note 1 : All the calculations done in step 7 may also be done in a single table as follows.
Adding the Gomory constraint equation to the last part of table 9.3, we get the
following table, in which the basic variable xG1 = −3/5 < 0.
Table 9.6
                 cj →     0      3      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     YG1
  Y1      0     3/5       1      0     1/5   −2/5     0
  Y2      3    13/5       0      1     1/5    3/5     0
  YG1     0    −3/5       0      0    −1/5   −3/5     1   →
  Z = cB xB = 39/5   ∆j   0      0    −3/5   −9/5     0
  Min. ratio
  ∆j/y3j , y3j < 0   —    —   (−3/5)/(−1/5) = 3   (−9/5)/(−3/5) = 3
                                        ↑
  Y1      0      0        1      0      0     −1      1
  Y2      3      2        0      1      0      0      1
  Y3      0      3        0      0      1      3     −5
  Z = cB xB = 6      ∆j   0      0      0      0     −3
From the above table the optimal and integral solution of the given problem is
x1 = 0, x2 = 2 and Max. Z = 6
An alternative optimal integer solution is x1 = 1, x2 = 2 and maximum Z = 6,
which is obtained by choosing Y4 as the entering vector in the first part of table 9.4 or 9.6.
The same result may be verified graphically. The solution of the given problem, ignoring
the integer condition on x1, x2, is
x1 = 3/5, x2 = 13/5 and Max. Z = 39/5.
To find the integer-valued solution, we add the following constraint, known as the Gomory
constraint :
−(1/5)x3 − (3/5)x4 ≤ −3/5
or x3 + 3x4 ≥ 3 (see step 6)        ...(1)
Fig. 9.1 : The feasible region of the L.P.P. (bounded by 3x1 + 2x2 = 7 and −x1 + x2 = 2 in
the first quadrant), with the non-integer optimum P(3/5, 13/5), the integer points (0, 2)
and (1, 2), and the Gomory cut x2 = 2.
Adding the non-negative slack variables x3, x4, the given inequalities reduce to the following
equalities
3x1 + 2x2 + x3 = 7 and −x1 + x2 + x4 = 2,
giving x3 = 7 − 3x1 − 2x2 and x4 = 2 + x1 − x2.
Substituting these in (1), the Gomory constraint becomes
(7 − 3x1 − 2x2) + 3(2 + x1 − x2) ≥ 3
or x2 ≤ 2.
Drawing the line x2 = 2, the above feasible region is cut down to the shaded region shown by
the dots and crosses (×) together in Fig. 9.1. The optimal integer solutions are therefore
x1 = 0, x2 = 2 and max. Z = 6
or x1 = 1, x2 = 2 and max. Z = 6.
Example 2: The owner of a ready-made garments shop makes two types of shirts, known as Zee
shirts and Button-Down shirts. He makes a profit of ` 1 and ` 4 per shirt on Zee shirts and
Button-Down shirts respectively. He has two tailors (Tailor A and Tailor B) at his
disposal to stitch these shirts. Tailor A and Tailor B can devote at the most 7 hours and
15 hours per day respectively. Both these shirts are to be stitched by both the tailors.
Tailors A and B spend two hours and five hours respectively in stitching a Zee
shirt, and four hours and three hours respectively in stitching a Button-Down shirt. How
many shirts of both the types should be stitched in order to maximize the daily profit?
(i) Set up and solve the L.P.P.
(ii) If the optimal solution is not integer-valued, use the Gomory technique to derive the
optimal integer solution.
(i) Formulation : Let the owner manufacture x1 and x2 shirts of the two types respectively.
The data of the problem are :

                      Zee shirt (x1)   Button-Down shirt (x2)   Hours available
  Tailor A (hours)          2                    4                     7
  Tailor B (hours)          5                    3                    15
  Profit (in `)             1                    4

Profit Z = ` (1x1 + 4x2). Hence the problem is
Max. Z = x1 + 4x2
subject to 2x1 + 4x2 ≤ 7
5x1 + 3x2 ≤ 15
x1, x2 ≥ 0.
(ii) Solution of the above L.P.P. : The problem is of maximization and both bi's are
positive. Introducing the non-negative slack variables x3 and x4 , the given problem
becomes.
Max. Z = x1 + 4 x2 + 0 . x3 + 0 . x4
subject to 2 x1 + 4 x2 + x3 =7
5 x1 + 3 x2 + x4 = 15
x1, x2 , x3 , x4 ≥ 0
Now we solve the problem by simplex method, ignoring the integer condition of
variables.
The solution of the problem by simplex method is given in the following table :
Table 9.7
                 cj →     1      4      0      0     Min. Ratio
  B      cB     xB       Y1     Y2     Y3     Y4     xB/Y2 , yi2 > 0
  Y3      0      7        2      4      1      0      7/4 (Min) →
  Y4      0     15        5      3      0      1     15/3
  Z = cB xB = 0      ∆j   1      4      0      0
                                 ↑
  Y2      4     7/4      1/2     1     1/4     0
  Y4      0    39/4      7/2     0    −3/4     1
  Z = cB xB = 7      ∆j  −1      0     −1      0

∴ x1 = 0, x2 = 7/4 = 1 + 3/4, x3 = 0, x4 = 39/4 = 9 + 3/4, Max. Z = 7
Here the fractional part in both x2 and x4 is equal to 3/4, so we may consider either of these.
Considering x2 = 7/4 = 1 + 3/4 = xB1, which lies in row one, the corresponding equation is
(1/2)x1 + 1·x2 + (1/4)x3 + 0·x4 = 7/4 = 1 + 3/4
or (1/2)x1 + (1/4)x3 = 3/4 + (1 − x2)
∴ (1/2)x1 + (1/4)x3 ≥ 3/4,     [∵ 1 − x2 is a non-negative integer]
or −(1/2)x1 − (1/4)x3 ≤ −3/4
Introducing the non-negative slack variable xG1, the Gomory constraint equation is
−(1/2)x1 − (1/4)x3 + xG1 = −3/4
Adding this Gomory constraint equation at the bottom of the last part of the above table, we
get the table in which the basic variable xG1 = −3/4 < 0.
Table 9.8
                 cj →     1      4      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     YG1
  Y2      4     7/4      1/2     1     1/4     0      0
  Y4      0    39/4      7/2     0    −3/4     1      0
  xG1     0    −3/4     −1/2     0    −1/4     0      1   →
  Min. ratio
  ∆j/y3j , y3j < 0   (−1)/(−1/2) = 2 (min)   —   (−1)/(−1/4) = 4
                          ↑
  Y2      4      1        0      1      0      0      1
  Y4      0     9/2       0      0    −5/2     1      7
  Y1      1     3/2       1      0     1/2     0     −2
  Z = cB xB = 11/2   ∆j   0      0    −1/2     0     −2

∴ x1 = 3/2 = 1 + 1/2, x2 = 1, x4 = 9/2 = 4 + 1/2, Max. Z = 11/2
This solution is also not integer valued, so we form the second Gomory constraint.
Here the fractional part in both x1 and x4 is 1/2, so we may take either of these. Taking
x1 = 3/2 = 1 + 1/2, which lies in the third row, the corresponding equation is
1·x1 + 0·x2 + (1/2)x3 + 0·x4 − 2xG1 = 1 + 1/2
or (1/2)x3 = 1/2 + (1 − x1 + 2xG1)
∴ (1/2)x3 ≥ 1/2,     [∵ 1 − x1 + 2xG1 is a non-negative integer]
or −(1/2)x3 ≤ −1/2
Introducing the non-negative slack variable xG2, the second Gomory constraint equation is
−(1/2)x3 + xG2 = −1/2
Adding this Gomory constraint equation at the bottom of the last part of the above table,
we get the table in which the basic variable xG2 = −1/2 < 0.
Table 9.9
                 cj →     1      4      0      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     YG1    YG2
  Y2      4      1        0      1      0      0      1      0
  Y4      0     9/2       0      0    −5/2     1      7      0
  Y1      1     3/2       1      0     1/2     0     −2      0
  YG2     0    −1/2       0      0    −1/2     0      0      1   →
  Z = cB xB = 11/2   ∆j   0      0    −1/2     0     −2      0
  Min. ratio
  ∆j/y4j , y4j < 0   —    —   (−1/2)/(−1/2) = 1 (min)   —    —
                                        ↑
  Y2      4      1        0      1      0      0      1      0
  Y4      0      7        0      0      0      1      7     −5
  Y1      1      1        1      0      0      0     −2      1
  Y3      0      1        0      0      1      0      0     −2
  Z = cB xB = 5      ∆j   0      0      0      0     −2     −1
From the above table, the optimal and integral solution of the given problem is
x1 = 1, x2 = 1 and Max. Z = ` 5.
Hence the owner of the ready made garments should make one Zee-shirt and one
Button-Down shirt to get a maximum profit of ` 5.
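As a quick cross-check of this result, the following sketch (not part of the original text; a plain enumeration, feasible here only because the variable bounds are tiny) evaluates every integer point satisfying the two tailor constraints and confirms that the maximum profit is ` 5.

    # Enumerate integer (x1, x2) with 2*x1 + 4*x2 <= 7 and 5*x1 + 3*x2 <= 15.
    best = None
    for x1 in range(0, 4):            # 2*x1 <= 7  =>  x1 <= 3
        for x2 in range(0, 2):        # 4*x2 <= 7  =>  x2 <= 1
            if 2*x1 + 4*x2 <= 7 and 5*x1 + 3*x2 <= 15:
                z = x1 + 4*x2
                if best is None or z > best[0]:
                    best = (z, x1, x2)
    print(best)    # (5, 1, 1): one shirt of each type, profit ` 5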
2. Max. Z = x1 − 2x2
   subject to the constraints
   4x1 + 2x2 ≤ 15
   x1, x2 ≥ 0 and are integers.
3. Max. Z = x1 + 2x2
   subject to the constraints
   x1 + x2 ≤ 7
   2x1 ≤ 11, 2x2 ≤ 7
   x1, x2 ≥ 0 and are integers.        [Meerut 2006 (BP)]
4. Max. Z = x1 + x2
   subject to the constraints
   3x1 + 2x2 ≤ 5
   x2 ≤ 2
   x1, x2 ≥ 0 and are integers.
5. Max. Z = 4x1 + 3x2
   subject to the constraints
   x1 + 2x2 ≤ 4
   2x1 + x2 ≤ 6
   x1, x2 ≥ 0 and are integers.        [Meerut 2005 (BP)]
8. Max. Z = 3x2
   subject to the constraints
   3x1 + 2x2 ≤ 7
   x1 − x2 ≤ −2
   x1, x2 ≥ 0
9. Max. Z = 7x1 + 9x2
   subject to the constraints
   −x1 + 3x2 ≤ 6
   7x1 + x2 ≤ 35
   x1, x2 ≥ 0 and integers.

Answers
2. x1 = 3, x2 = 0, max. Z = 3
3. x1 = 4, x2 = 3, max. Z = 10
4. x1 = 0, x2 = 2 or x1 = 1, x2 = 1, max. Z = 2
5. x1 = 3, x2 = 0, max. Z = 12
6. x1 = 1, x2 = 0, max. Z = 1
7. x1 = 2, x2 = 3, max. Z = 34
8. x1 = 0, x2 = 3, max. Z = 9
9. x1 = 4, x2 = 3, max. Z = 55
Step 1 : Formulate the given L.P.P. into the standard maximization form and determine
its optimum solution, ignoring the integer condition on the variables, by using the simplex
algorithm.
Step 2 : Test the integrality of the optimum solution obtained in the above step. Now
there are two possibilities :
(i) The optimum solution has integral values of all the integer-restricted variables; then the
required solution has been obtained.
(ii) The optimum solution does not have integral values of all the integer-restricted
variables; then proceed to the next step.
Step 3 : If only one of the integer-restricted variables has a fractional value, then
corresponding to the row in which this fractional variable lies in the optimum simplex
table, form the Gomory constraint by using the following formula :
If the basic variable xBi = xk (an integer-restricted variable) has a fractional value and lies
in the i-th row of the optimum simplex table, then the Gomory constraint is
−∑_{j∈R+} yij xj − (fBi/(fBi − 1)) ∑_{j∈R−} yij xj ≤ −fBi
where R+ = { j : yij ≥ 0 } and R− = { j : yij < 0 } among the non-basic indices.
Thus, introducing the non-negative slack variable xG1, the Gomory constraint
equation, i.e., the Gomory cutting plane, is given by
−∑_{j∈R+} yij xj − (fBi/(fBi − 1)) ∑_{j∈R−} yij xj + xG1 = −fBi
However, if more than one integer-restricted variable is non-integral, then the
non-integral variable which has the largest fractional part is taken.
Step 4 : Adding the Gomory constraint equation at the bottom of the optimum simplex
table obtained in step 1, obtain the new optimum solution by using the dual simplex
algorithm. Note that the slack variable xG1 will be taken as the first leaving basic variable
in the above table.
Step 5 : Test the integrality of the optimum solution obtained in step 4. Now again
there are two possibilities :
(i) The optimum solution obtained in step 4 has integral values of all the integer-restricted
variables; then the required solution has been obtained.
(ii) The optimum solution does not have integral values of all the integer-restricted
variables. In this case repeat steps 3 and 4 until the required optimal solution is
obtained.
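The following Python sketch (an illustration only, not from the text; the function name and data layout are assumptions) computes the coefficients of the mixed-integer Gomory constraint of step 3 from a source row of the optimum simplex table.

    import math

    def mixed_gomory_cut(x_Bi, row):
        """Coefficients of the cut
            -sum_{j in R+} y_ij x_j - (f_Bi/(f_Bi-1)) sum_{j in R-} y_ij x_j <= -f_Bi.
        `row` is a dict {j: y_ij} over the non-basic indices of the source row."""
        f_Bi = x_Bi - math.floor(x_Bi)
        factor = f_Bi / (f_Bi - 1.0)          # f_Bi - 1 < 0, so factor is negative
        coeffs = {j: (-y if y >= 0 else -factor * y) for j, y in row.items()}
        return coeffs, -f_Bi

    # Source row of table 9.10 in the example that follows:
    # x_B = 1/3 with y_13 = 1/3 and y_14 = -2/3.
    print(mixed_gomory_cut(1/3, {3: 1/3, 4: -2/3}))
    # ({3: -0.333..., 4: -0.333...}, -0.333...)  i.e. -(1/3)x3 - (1/3)x4 <= -1/3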
Solution: Step 1 : Introducing the slack variables x3 and x4, the given L.P.P. in the
standard maximization form is
Max. Z = x1 + x2 + 0·x3 + 0·x4
s.t. 3x1 + 2x2 + x3 = 5
     x2 + x4 = 2
and x1, x2, x3, x4 ≥ 0.
Ignoring the integer condition on the variable x1 and proceeding as usual by the simplex
method, all the computation work is shown in the following table. The initial B.F.S.,
obtained by taking x1 = 0, x2 = 0, is x3 = 5, x4 = 2.
Table 9.10
                 cj →     1      1      0      0     Min. Ratio
  B      cB     xB       Y1     Y2     Y3     Y4     xB/Y2 , yi2 > 0
  Y3      0      5        3      2      1      0      5/2
  Y4      0      2        0      1      0      1      2/1 (Min.) →
  Z = 0         ∆j        1      1      0      0     xB/Y1 , yi1 > 0
                                 ↑                       ↓
  Y3      0      1        3      0      1     −2      1/3 (Min.) →
  Y2      1      2        0      1      0      1
  Z = 2         ∆j        1      0      0     −1
                          ↑                              ↓
  Y1      1     1/3       1      0     1/3   −2/3
  Y2      1      2        0      1      0      1
  Z = 7/3       ∆j        0      0    −1/3   −1/3

∴ x1 = 1/3, x2 = 2, Max. Z = 7/3
Step 2 : In the above optimum solution, x1 = 1/3 is fractional, while the condition is that x1
must be an integer. So we proceed to the next step.
Step 3 : Here xB1 = 1/3, so fB1 = 1/3. Using the formula
−∑_{j∈R+} y1j xj − (fB1/(fB1 − 1)) ∑_{j∈R−} y1j xj ≤ −fB1
the Gomory constraint is
−y13 x3 − [(1/3)/((1/3) − 1)] y14 x4 ≤ −1/3
or −(1/3)x3 − (−1/2)(−2/3)x4 ≤ −1/3
i.e., −(1/3)x3 − (1/3)x4 ≤ −1/3.
Introducing the non-negative slack variable xG1, the Gomory constraint equation is
−(1/3)x3 − (1/3)x4 + xG1 = −1/3
Step 4 : Adding the Gomory constraint equation at the bottom of the last part of the
above table, we get the following table, in which the basic variable xG1 = −1/3 < 0 :
Table 9.11
                 cj →     1      1      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     YG1
  Y1      1     1/3       1      0     1/3   −2/3     0
  Y2      1      2        0      1      0      1      0
  YG1     0    −1/3       0      0    −1/3   −1/3     1   →
  Z = cB xB = 7/3    ∆j   0      0    −1/3   −1/3     0
  Min. ratio
  ∆j/y3j , y3j < 0   —    —   (−1/3)/(−1/3) = 1   (−1/3)/(−1/3) = 1
                                                         ↑
  Y1      1      1        1      0      1      0     −2
  Y2      1      1        0      1     −1      0      3
  Y4      0      1        0      0      1      1     −3
  Z = cB xB = 2      ∆j   0      0      0      0     −1
From the above table, the optimal solution of the given problem with x1 as an integer is
x1 = 1, x2 = 1 and max. Z = 2.
Note : In the last table 9.11, if we take Y3 as the entering vector in place of Y4, then the
alternative optimal solution x1 = 0, x2 = 2 and max. Z = 2 is obtained.
Max. Z = 3x1 + x2 + 3x3
subject to −x1 + 2x2 + x3 ≤ 4, 4x2 − 3x3 ≤ 2, x1 − 3x2 + 2x3 ≤ 3,
x1, x2, x3 ≥ 0, and x1, x3 are integers.
Solution: Step 1 : Introducing the slack variables x4, x5 and x6, the given L.P.P. in the
standard maximization form is
Max. Z = 3x1 + x2 + 3x3
s.t. −x1 + 2x2 + x3 + x4 = 4
     4x2 − 3x3 + x5 = 2
     x1 − 3x2 + 2x3 + x6 = 3
     x1, x2, x3, x4, x5, x6 ≥ 0.
Ignoring the integer condition on the variables x1 and x3, and proceeding as usual, the final
simplex table, giving the optimum solution, is as follows :
Table 9.12
                 cj →     3      1      3      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     Y5     Y6
  Y3      3    10/3       0      0      1     4/9    1/9    4/9
  Y2      1      3        0      1      0     1/3    1/3    1/3
  Y1      3    16/3       1      0      0     1/9    7/9   10/9
  Z = 29        ∆j        0      0      0     −2     −3     −5

∴ x1 = 16/3 = 5 + 1/3, x2 = 3, x3 = 10/3 = 3 + 1/3 and Max. Z = 29
Step 2 : Here x1 and x3 are not integers, as required, so we proceed to the next step.
Step 3 : Here x1 = 16/3 = 5 + 1/3 and x3 = 10/3 = 3 + 1/3. Taking the first row of the above
table (in which x3 = xB1 = 10/3 lies) as the source row, the Gomory constraint is obtained from
−∑_{j∈R+} y1j xj − (fB1/(fB1 − 1)) ∑_{j∈R−} y1j xj ≤ −fB1        ...(1)
taken over the non-basic variables. Here all y1j are ≥ 0, so R− is empty and the second sum is zero.
∴ the Gomory constraint is
−y14 x4 − y15 x5 − y16 x6 − [(1/3)/((1/3) − 1)]·0 ≤ −1/3
or −(4/9)x4 − (1/9)x5 − (4/9)x6 ≤ −1/3
Introducing the non-negative slack variable xG1, the Gomory constraint equation (i.e., the
Gomory cutting plane) is
−(4/9)x4 − (1/9)x5 − (4/9)x6 + xG1 = −1/3
Adding this Gomory constraint equation at the bottom of the above table 9.12, we get
the following table, in which the basic variable xG1 = −1/3 < 0, i.e., the solution is not
feasible. ∴ we proceed by using the dual simplex method as follows :
Table 9.13
                 cj →     3      1      3      0      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     Y5     Y6    yG1 (β4)
  Y3      3    10/3       0      0      1     4/9    1/9    4/9     0
  Y2      1      3        0      1      0     1/3    1/3    1/3     0
  Y1      3    16/3       1      0      0     1/9    7/9   10/9     0
  yG1     0    −1/3       0      0      0    −4/9   −1/9   −4/9     1   →
  Z = 29        ∆j        0      0      0     −2     −3     −5      0
  Min. ratio
  ∆j/y4j , y4j < 0   —  —  —  (−2)/(−4/9) = 9/2 (min)  (−3)/(−1/9) = 27  (−5)/(−4/9) = 45/4
                                        ↑
  Y3      3      3        0      0      1      0      0      0      1
  Y2      1    11/4       0      1      0      0     1/4     0     3/4
  Y1      3    21/4       1      0      0      0     3/4     1     1/4
  Y4      0     3/4       0      0      0      1     1/4     1    −9/4
  Z = 55/2      ∆j        0      0      0      0    −5/2    −3    −9/2
In this solution x1 = 21/4 = 5 + 1/4 is still not an integer. Taking the third row (in which x1
lies) as the source row, the second Gomory constraint is
−y35 x5 − y36 x6 − y37 xG1 − [(1/4)/((1/4) − 1)]·0 ≤ −1/4
or −(3/4)x5 − 1·x6 − (1/4)xG1 ≤ −1/4
Introducing the non-negative slack variable xG2, the second Gomory constraint equation is
−(3/4)x5 − x6 − (1/4)xG1 + xG2 = −1/4.
Adding this second Gomory constraint equation at the bottom of the last part of the above
table, we get the table in which the basic variable xG2 = −1/4 < 0.
Table 9.14
                 cj →     3      1      3      0      0      0      0      0
  B      cB     xB       Y1     Y2     Y3     Y4     Y5     Y6    yG1   yG2 (β5)
  Y3      3      3        0      0      1      0      0      0      1      0
  Y2      1    11/4       0      1      0      0     1/4     0     3/4     0
  Y1      3    21/4       1      0      0      0     3/4     1     1/4     0
  Y4      0     3/4       0      0      0      1     1/4     1    −9/4     0
  yG2     0    −1/4       0      0      0      0    −3/4    −1    −1/4     1   →
  Z = 55/2      ∆j        0      0      0      0    −5/2    −3    −9/2     0
  Min. ratio
  ∆j/y5j , y5j < 0   —  —  —  —  (−5/2)/(−3/4) = 10/3   (−3)/(−1) = 3 (min)   (−9/2)/(−1/4) = 18
                                                              ↑
  Y3      3      3        0      0      1      0      0      0      1      0
  Y2      1    11/4       0      1      0      0     1/4     0     3/4     0
  Y1      3      5        1      0      0      0      0      0      0      1
  Y4      0     1/2       0      0      0      1    −1/2     0    −5/2     1
  Y6      0     1/4       0      0      0      0     3/4     1     1/4    −1
  Z = 107/4     ∆j        0      0      0      0    −1/4     0   −15/4    −3
From the above table, the optimal solution of the given problem with x1 and x3 having
integral values is x1 = 5, x2 = 11/4, x3 = 3 and max. Z = 107/4.
1. Max. Z = 3x1 + 4x2
   subject to the constraints
   3x1 − x2 ≤ 12
   3x1 + 11x2 ≤ 66
   x1, x2 ≥ 0 and x2 is an integer.
2. Max. Z = 7x1 + 9x2
   subject to the constraints
   −x1 + 3x2 ≤ 6
   7x1 + x2 ≤ 35
   x1, x2 ≥ 0 and x1 is an integer.
3. Max. Z = 4x1 + 6x2 + 2x3
   subject to the constraints
   4x1 − 4x2 ≤ 5, −x1 + 6x2 ≤ 5

Answers
1. x1 = 16/3, x2 = 4, Z = 32
2. x1 = 4, x2 = 10/3, Z = 58
3. x1 = 2, x2 = 1, x3 = 6, Z = 26
The branch-and-bound method was first developed by A.H. Land and A.G. Doig and was
further developed by J.D.C. Little et al. and other researchers.
This technique is applicable to all-integer programming problems as well as mixed-integer
programming problems. It is the most general technique for the solution of an
I.P.P. in which a few or all of the variables are constrained by their upper or lower
bounds, or by both.
xj is an integer, for j = 1, 2,..., r (r ≤ n)        ...(3)
Also let there exist lower and upper bounds for the optimum value of each integer-valued
variable xj, such that
Lj ≤ xj ≤ Uj , j = 1, 2,..., r.        ...(5)
Thus, any optimum solution of (1) to (5) must satisfy exactly one of the constraints
xj ≤ [xj]        ...(6)
and xj ≥ [xj] + 1        ...(7)
Thus, ignoring the integer restriction (3), if x*t is the value of the variable xt in the
optimum solution of the above L.P.P. given by (1) to (5), then in an integer-valued
solution we have
either Lt ≤ xt ≤ [x*t]        ...(8)
or [x*t] + 1 ≤ xt ≤ Ut        ...(9)
For example, if x1 = 3.5 on ignoring the integer constraint, then in an integer-valued solution
either L1 ≤ x1 ≤ 3 or 4 ≤ x1 ≤ U1.
Thus, the given I.P.P. given by (1) to (5) splits into two sub-I.P. problems :
(i) the problem given by (1), (2), (3), (4) and (8), and
(ii) the problem given by (1), (2), (3), (4) and (9).
In these two sub-problems the constraint (5) is modified only for xt (i.e., for xj with j = t).
Now solve these two sub-I.P. problems. If the two problems possess integer-valued
solutions, then the solution having the larger value of Z is taken as the optimum solution
of the given problem. If either of these sub-problems does not have an integer-valued
solution, then sub-divide it again into two sub-problems and proceed similarly till an
optimum integer-valued solution is obtained.
Step 1 : Solve the given I.P.P. ignoring the integer-valued condition.
Step 2 : Test the integrality of the optimum solution obtained in step 1. Now there are
two possibilities :
(i) The optimum solution is integer valued; then the required solution is obtained.
(ii) The optimum solution is not integer valued; then proceed to step 3.
Step 3 : If the optimal value x*t of the variable xt is fractional, then form two sub-problems :
    Sub-problem 1 : the given problem with the one additional constraint xt ≤ [x*t].
    Sub-problem 2 : the given problem with the one additional constraint xt ≥ [x*t] + 1.
Step 4 : Solve the two sub-problems 1 and 2 obtained in step 3. Now there are three
possibilities :
(i) If the optimal solutions of the two sub-problems are integer valued, then the required
solution is the one which gives the larger value of Z.
(ii) If the optimal solution of one sub-problem is integer valued and the other
sub-problem has no feasible optimal solution, then the required solution is that of
the sub-problem having the integer-valued solution.
(iii) If the optimal solution of one sub-problem is integer valued while that of the other
sub-problem is fractional valued, then record the integer-valued solution and repeat
steps 3 and 4 for the fractional-valued sub-problem. Continue steps 3 and 4
iteratively till all integer-valued solutions are recorded.
Step 5 : From all the recorded integer-valued solutions choose the one which gives
the largest value of Z. This is the required optimal solution of the problem.
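As an illustration of these steps (not part of the original text), the following Python sketch implements a small branch-and-bound routine for a pure integer maximization problem, using scipy.optimize.linprog for the L.P. relaxations; the function name and data layout are assumptions made for the example.

    import math
    from scipy.optimize import linprog

    def branch_and_bound(c, A_ub, b_ub, bounds):
        """Maximize c.x subject to A_ub x <= b_ub, the given bounds, x integer.
        Returns (best_value, best_solution) or (None, None) if infeasible."""
        # linprog minimizes, so minimize -c.x for the relaxation (step 1).
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub,
                      bounds=bounds, method="highs")
        if not res.success:
            return None, None                       # sub-problem infeasible
        x, z = res.x, -res.fun
        for t, xt in enumerate(x):                  # step 2: look for a fractional x_t
            if abs(xt - round(xt)) > 1e-6:
                lo, hi = bounds[t]
                down = math.floor(xt)
                z1 = x1 = z2 = x2 = None
                if down >= lo:                      # step 3: branch x_t <= [x_t*]
                    z1, x1 = branch_and_bound(c, A_ub, b_ub,
                        bounds[:t] + [(lo, down)] + bounds[t+1:])
                if down + 1 <= hi:                  # and x_t >= [x_t*] + 1
                    z2, x2 = branch_and_bound(c, A_ub, b_ub,
                        bounds[:t] + [(down + 1, hi)] + bounds[t+1:])
                if z1 is None:                      # steps 4-5: keep the better record
                    return z2, x2
                if z2 is None:
                    return z1, x1
                return (z1, x1) if z1 >= z2 else (z2, x2)
        return z, [round(v) for v in x]             # all integer: record it

    # The example solved below: Max Z = 7x1 + 9x2, -x1 + 3x2 <= 6, 7x1 + x2 <= 35,
    # 0 <= x1, x2 <= 7 and integers.
    print(branch_and_bound([7, 9], [[-1, 3], [7, 1]], [6, 35], [(0, 7), (0, 7)]))
    # (55.0, [4, 3])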
Max. Z = 7 x1 + 9 x2
subject to − x1 + 3 x2 ≤ 6
7 x1 + x2 ≤ 35
0 ≤ x1, x2 ≤ 7
Max. Z = 7 x1 + 9 x2
subject to − x1 + 3 x2 ≤ 6
7 x1 + x2 ≤ 35
x1 ≤ 7
x2 ≤ 7
Ignoring the condition of integer values of variables, the optimal solution of the given
problem, by graphical method, is given by
x1 = 9/2, x2 = 7/2 and Max. Z = 7 × (9/2) + 9 × (7/2) = 63 (see Fig. 9.2).
Fig. 9.2 : The feasible region bounded by −x1 + 3x2 = 6, 7x1 + x2 = 35, x1 = 7 and x2 = 7 in
the first quadrant; the objective line Z = 7x1 + 9x2 = 0 gives x1/x2 = −9/7, and the optimum
of the relaxed problem is at the vertex (9/2, 7/2).
This value of Z represents the initial upper bound of the value of the objective function Z. So
the value of the objective function Z for integral values of the variables in the permissible
region cannot exceed 63.
Sub-Problem 1 Sub-Problem 2
Max. Z = 7 x1 + 9 x2 Max. Z = 7 x1 + 9 x2
subject to − x1 + 3 x2 ≤ 6 subject to − x1 + 3 x2 ≤ 6
7 x1 + x2 ≤ 35 7 x1 + x2 ≤ 35
x1 ≤ 4 x1 ≥ 5
x2 ≤ 7 x2 ≤ 7
x1, x2 ≥ 0 x1, x2 ≥ 0
Step 4 : By the graphical method the optimal solutions of the above two sub-problems are as
follows (see Fig. 9.3) :
Sub-problem 1 : x1 = 4, x2 = 10/3, Max. Z = 58
Sub-problem 2 : x1 = 5, x2 = 0, Max. Z = 35
Fig. 9.3 : The feasible regions of sub-problem 1 (x1 ≤ 4) and sub-problem 2 (x1 ≥ 5); their
optima are at (4, 10/3) and (5, 0) respectively, while the point (9/2, 7/2) is excluded by both.
Since the solution of sub-problem 1 is not integer valued, we branch on x2 (x2 ≤ 3 and
x2 ≥ 4) and form the following two sub-problems :
Sub-Problems 3 Sub-Problems 4
Max. Z = 7 x1 + 9 x2 Max. Z = 7 x1 + 9 x2
subject to − x1 + 3 x2 ≤ 6 subject to − x1 + 3 x2 ≤ 6
7 x1 + x2 ≤ 35 7 x1 + x2 ≤ 35
x1 ≤ 4 x1 ≤ 4
x2 ≤ 3 x2 ≥ 4
x1, x2 ≥ 0 x1, x2 ≥ 0
Fig. 9.4 : The feasible regions of sub-problems 3 (x2 ≤ 3) and 4 (x2 ≥ 4).
Fig. 9.5 : The branch and bound tree :
Given problem : x1 = 9/2 = 4.5, x2 = 7/2 = 3.5, Max. Z = 63
  x1 ≤ 4 → Sub-problem 1 : x1 = 4, x2 = 10/3 = 3 + 1/3, Max. Z = Z1 = 58
      x2 ≤ 3 → Sub-problem 3 : x1 = 4, x2 = 3, Max. Z = Z3 = 55
      x2 ≥ 4 → Sub-problem 4 : No F.S.
  x1 ≥ 5 → Sub-problem 2 : x1 = 5, x2 = 0, Max. Z = Z2 = 35
The solutions of sub-problems 2 and 3 are integer valued and Z3 > Z2 . Hence the
optimal integral solution of the given problem is x1 = 4, x2 = 3 and Max. Z = 55.
3. Max. Z = x1 + x2
   subject to 3x1 + 2x2 ≤ 12
   x2 ≤ 2
   x1, x2 ≥ 0 and are integers.
4. Max. Z = x1 + x2
   subject to 4x1 − x2 ≤ 10
   2x1 + 5x2 ≤ 10
   4x1 − 3x2 ≤ 6
   x1, x2 = 0, 1, 2,...
5. Max. Z = 7x1 + 9x2
   subject to −x1 + 3x2 ≤ 6
   7x1 + x2 ≤ 36
   x1, x2 ≥ 0 and x1, x2 are integers.
6. Min. Z = 4x1 + 3x2
   subject to 5x1 + 3x2 ≥ 30
   x1 ≤ 4, x2 ≤ 6
   x1, x2 ≥ 0 and x1, x2 are integers.        [Agra 2002]
7. Max. Z = x1 + x2
   subject to x1 + 7x2 ≤ 28
   14x1 + 4x2 ≤ 63
   7x1 − 3x2 ≤ 14
   x1, x2 ≥ 0 and integers.
8. Min. Z = 3x1 + 2.5x2
   subject to x1 + 2x2 ≥ 20
   3x1 + 2x2 ≥ 50
   x2 ≤ 3 2
   x1, x2 ≥ 0 and integers.
2. While solving an I.P.P., any non-integer variable in the solution obtained by the
simplex method is picked up to :
(a) Enter the solution (b) Leave the solution
(c) Obtain the Gomory cut constraint (d) None of these
4. The infeasible solution obtained by the addition of the Gomory constraint equation
to the optimal simplex table is changed to feasible optimal solution by using :
(a) Simplex method (b) Dual simplex method
(c) Revised simplex method (d) None of these
Exercise 9.3
3. x1 = 2, x2 = 2; x1 = 3, x2 = 1; x1 = 4, x2 = 0; Z = 4
4. x1 = 2, x2 = 1, Z = 3
5. x1 = 4, x2 = 3, Z = 55 6. x1 = 3, x2 = 5, Z = 27
7. x1 = 3, x2 = 3, Z = 6 8. x1 = 14, x2 = 4, Z = 52
9. x1 = 3, x2 = 3, Z = 15 10. x1 = 1, x2 = 1, Z = 3
3. (a) 4. (b)
5. (c) 6. (c)
10.1 Introduction
As discussed earlier, the most general method for solving a linear programming
problem is the simplex method. However, there is a particular class of linear
programming problems, related to many very practical situations, generally called
‘Transportation problems’. It is far simpler to solve these problems by the
‘Transportation Technique’ than by the simplex method.
Let there be m origins and n destinations (n may or may not be equal to m), with the i-th origin
possessing ai units of a certain product and the j-th destination requiring bj units of the same
product. Assume that the total quantity available is equal to the total quantity required,
i.e., ∑_{i=1}^{m} ai = ∑_{j=1}^{n} bj        ...(1)
Let cij be the cost of transporting one unit of the product from the i-th origin to the j-th
destination, and xij the quantity transported from the i-th origin to the j-th destination. Then
the problem is to determine non-negative (≥ 0) values of xij, satisfying both the availability
restrictions and the requirement restrictions, in such a way that the total transportation cost
is minimized, i.e., to
minimize Z = ∑_{i=1}^{m} ∑_{j=1}^{n} cij xij        ...(2)
such that ∑_{j=1}^{n} xij = ai , i = 1, 2,..., m        ...(3)
and ∑_{i=1}^{m} xij = bj , j = 1, 2,..., n        ...(4)
The equations (3) and (4) may be called the row and column equations respectively.
Note : 1. The objective function (2) and the constraint equations (3) and (4) are all
linear in xij, so the above problem may be looked upon as a linear programming problem.
Thus a transportation problem is a special type of L.P.P.
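To illustrate this note (the sketch below is not part of the original text; the data are invented purely for illustration), a balanced transportation problem can be handed to a general L.P. solver by flattening xij into a single vector and writing (3) and (4) as equality constraints.

    import numpy as np
    from scipy.optimize import linprog

    # Illustrative data (m = 2 origins, n = 3 destinations), chosen so that
    # sum(a) == sum(b); the numbers are not taken from the text.
    cost = np.array([[4.0, 6.0, 8.0],
                     [5.0, 3.0, 2.0]])
    a = [30, 40]            # availabilities
    b = [20, 25, 25]        # requirements
    m, n = cost.shape

    # Row equations (3): sum_j x_ij = a_i ; column equations (4): sum_i x_ij = b_j.
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_eq[m + j, j::n] = 1.0

    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=a + b,
                  bounds=[(0, None)] * (m * n), method="highs")
    print(res.x.reshape(m, n), res.fun)   # optimal plan and minimum cost

The transportation technique developed in this chapter reaches the same optimum far more quickly by exploiting the special structure of these constraints.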
The calculations are made directly on the transportation array given below, which gives
the current trial solution.

  Destination →     W1     W2    ...    Wj    ...    Wn     Available
  Origin ↓
  F1               x11    x12   ...   x1j    ...   x1n        a1
  F2               x21    x22   ...   x2j    ...   x2n        a2
  ...
  Fi               xi1    xi2   ...   xij    ...   xin        ai
  ...
  Fm               xm1    xm2   ...   xmj    ...   xmn        am
  Requirement       b1     b2   ...    bj    ...    bn     ∑ai = ∑bj

The above two tables can be combined together by writing the costs cij within brackets ( ).
∑ ai = ∑ bj (i = 1, 2, . . . , m ; j = 1, 2, . . . , n).
[Kanpur 2009]
Proof: The condition is necessary : Let there exist a feasible solution to the
transportation problem. Then
∑_{i=1}^{m} ∑_{j=1}^{n} xij = ∑_{i=1}^{m} ai and ∑_{j=1}^{n} ∑_{i=1}^{m} xij = ∑_{j=1}^{n} bj
⇒ ∑_{i=1}^{m} ai = ∑_{j=1}^{n} bj .
The condition is sufficient : Let ∑_{i=1}^{m} ai = ∑_{j=1}^{n} bj = k (say), and let λi = ai/k.
Then xij = λi bj = ai bj / k ≥ 0, for all i and j, satisfies the constraints (3) and (4), and so it is
a feasible solution.
∑ ai = ∑ bj can be used to reduce one constraint. This can be easily justified by proving
the following theorem :
∑_{j=1}^{n} xij = ai , i = 1, 2,..., m        ...(1)
and ∑_{i=1}^{m} xij = bj , j = 1, 2,..., n − 1        ...(2)
Summing (1) over i and (2) over j, we get
∑_{i=1}^{m} ∑_{j=1}^{n} xij = ∑_{i=1}^{m} ai        ...(3)
∑_{j=1}^{n−1} ∑_{i=1}^{m} xij = ∑_{j=1}^{n−1} bj        ...(4)
Subtracting (4) from (3),
∑_{i=1}^{m} ∑_{j=1}^{n} xij − ∑_{j=1}^{n−1} ∑_{i=1}^{m} xij = ∑_{i=1}^{m} ai − ∑_{j=1}^{n−1} bj
or ∑_{i=1}^{m} [ ∑_{j=1}^{n} xij − ∑_{j=1}^{n−1} xij ] = ∑_{j=1}^{n} bj − ∑_{j=1}^{n−1} bj        [∵ ∑ai = ∑bj]
or ∑_{i=1}^{m} xin = bn , which is the n-th destination-constraint.
It follows that if m + n −1 constraints are satisfied then the (m + n)th constraint will be
automatically satisfied due to the condition ∑ ai = ∑ bj . Thus we have only (m + n −1)
linearly independent equations. Out of (m + n) equations, one (any) is redundant.
It indicates that a B.F.S. will contain at most m + n −1 positive variables, others being
zero.
It follows that a feasible solution of the problem exists, i.e., xij ≥ 0 for all i and j, with
0 ≤ xij ≤ min (ai, bj); i.e., the feasible region of the problem is non-empty, closed and
bounded.
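The sufficiency construction above is easy to check numerically. The following sketch (not in the original text) verifies that xij = ai bj / ∑ai satisfies the row and column equations, using availabilities and requirements that also appear in Example 1 further below (19, 37, 34 and 16, 18, 31, 25, both summing to 90).

    import numpy as np

    a = np.array([19.0, 37.0, 34.0])          # availabilities
    b = np.array([16.0, 18.0, 31.0, 25.0])    # requirements, sum(b) == sum(a)
    k = a.sum()
    x = np.outer(a, b) / k                     # x_ij = a_i * b_j / k

    print(np.allclose(x.sum(axis=1), a))       # True: row sums equal a_i  (eq. 3)
    print(np.allclose(x.sum(axis=0), b))       # True: column sums equal b_j (eq. 4)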
The first cell of the set will follow the last one in the set.
We get a closed path satisfying the above conditions (1) and (2) if we join the cells of a
loop by horizontal and vertical line segments.
where the (i, j)-th cell of the transportation table is denoted by (i, j). Then it can be observed
that the set L forms a loop, while the set L′ does not form a loop, because the three cells
(2, 4), (2, 3) and (2, 2) of L′ lie in the same row.
Step 1 : Start with the cell (1, 1) at the North-West corner i.e., the top-most left corner
and allocate there maximum possible amount. Thus x11 = min (a1, b1).
Step 2 : (i) If b1 < a1 then x11 = b1 and there is still some quantity available left in row 1.
So move to the right hand cell (1, 2) and make the second allocation of amount
x12 = min (a1 − x11, b2 ) in the cell (1, 2).
(ii) If b1 > a1, then x11 = a1 and there is still some requirement left in column 1. So move
vertically downwards to the cell (2, 1) and make the second allocation of amount
x21 = min (a2 , b1 − x11) in this cell.
(iii) If b1 = a1 then x12 = 0 or x21 = 0.
Start from the new North-West corner of the transportation table and allocate there as
much as possible.
Step 3 : Repeat steps 1 and 2 until all the available quantity is exhausted or all the
requirement is satisfied.
Find the initial basic feasible solution of the following transportation problem :
To
W1 W2 W3 Available
F1 2 7 4 5
F2 3 3 1 8
From F3 5 4 7 7
F4 1 6 2 14
Requirement 7 9 18 34
First we construct an empty 4 by 3 matrix complete with row and column requirements.
Start with the cell (1, 1) at the North-West corner (top-most left corner) and allocate it
maximum possible amount. Thus x11 = 5 as minimum of a1 = 5 and b1 = 7 is 5.
  To
            W1        W2        W3      Available
  F1      5 (2)                             5
  F2      2 (3)     6 (3)                   8
  F3                3 (4)     4 (7)         7
  F4                         14 (2)        14
  Requirement 7       9        18
Since the requirement of W1 is not yet satisfied, we move vertically downwards and allocate the
maximum amount 2 to the cell (2, 1), i.e., x21 = 2. Thus allocations for column 1 are
complete. Now we move to the right of the cell (2, 1). Since the amount 6 is still available
in row 2 and amount 9 is needed in column 2, so we allocate the maximum amount 6 in
the cell (2, 2) i.e., x22 = 6. This completes the allocations for row 2. Now we move
vertically downwards to the cell (3, 2). In column 2 the amount 3 is still needed and in
row 3 amount available is 7 so we allocate the maximum amount 3 in the cell (3, 2) i.e.,
x32 = 3. Thus allocations for column 2 are complete. Now the amount 4 is still available
in row 3 and amount 18 is needed in column 3, so we move to the cell (3, 3) and allocate
the maximum amount 4 to this cell i.e., x33 = 4. Thus, there is no amount left available at
source 3. Now we move downwards to the cell (4, 3). The amount 14 is still needed in
column 3 and an equal amount 14 is available in row 4, so we allocate the amount 14 to
the cell (4, 3) i.e., x43 = 14. The resulting feasible solution is shown in the above table.
Allocations in the cells are in such a way that the total in each row and each column is the
same as shown against the respective rows and columns.
Multiplying each individual allocation by its corresponding unit cost in ( ), and adding,
the total cost corresponding to this feasible solution is
= ` (5 . 2 + 2 . 3 + 6 . 3 + 3 . 4 + 4 . 7 + 14 . 2) = ` 102.
Note : In this method we always move to the right or down, so no loop can be formed
by drawing horizontal and vertical lines through the allocations. Also at each step (allocation)
at least one row or column is discarded from further consideration, while the last
allocation discards both a row and a column simultaneously, so we cannot get more than
(m + n −1) individual positive allocations. Thus we always get a non-degenerate basic feasible
solution by the North-West corner rule.
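A direct transcription of the North-West corner rule into Python is given below (an illustration, not part of the original text; the function name is an assumption). Run on the data of the example above, it reproduces the allocations x11 = 5, x21 = 2, x22 = 6, x32 = 3, x33 = 4, x43 = 14 and the cost ` 102.

    def north_west_corner(avail, req):
        """Return a dict {(i, j): quantity} built by the North-West corner rule."""
        a, b = list(avail), list(req)          # working copies, consumed as we go
        i = j = 0
        alloc = {}
        while i < len(a) and j < len(b):
            q = min(a[i], b[j])                # allocate as much as possible
            alloc[(i, j)] = q
            a[i] -= q
            b[j] -= q
            if a[i] == 0:                      # row exhausted: move down
                i += 1                         # (ties a[i] == b[j] also move down,
            else:                              #  which may give a degenerate B.F.S.)
                j += 1                         # column satisfied: move right
        return alloc

    cost = [[2, 7, 4], [3, 3, 1], [5, 4, 7], [1, 6, 2]]
    alloc = north_west_corner([5, 8, 7, 14], [7, 9, 18])
    print(alloc)   # {(0,0):5, (1,0):2, (1,1):6, (2,1):3, (2,2):4, (3,2):14}
    print(sum(q * cost[i][j] for (i, j), q in alloc.items()))   # 102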
Step 1 : Examine the cost matrix carefully and find the lowest cost. Let it be cij. Then
allocate the maximum possible amount xij = min (ai, bj) in the cell (i, j).
Step 2 : (i) If x ij = ai, then the capacity of the i-th origin is completely exhausted. In this
case cross out the i-th row of the transportation table and decrease the requirement bj by
ai. Now go to step 3.
(ii) If x ij = bj , then the requirement of j-th destination is completely satisfied. In this
case cross out the j-th column of the transportation table and decrease ai by bj . Now
go to step 3.
(iii) If x ij = ai = bj , then either cross-out the i-th row or j-th column but not both. Now go
to step 3.
Step 3 : Repeat steps 1 and 2 for the reduced transportation table until all the available is
exhausted or all the requirement is satisfied.
Note : If the cell of lowest cost is not unique, we can select any one of these cells.
The method is well explained by taking the same numerical example as in method 1.
(The cost and requirement matrix is the same as that used in the example of method 1 above.)
First we write the cost and requirement matrix. We examine the cost matrix and find that
there is lowest cost 1 in cell (2, 3) and in (4, 1). We choose any one of these, say the cell
(2, 3) and allocate the maximum possible amount 8 to this cell. This exhausts the
availability from F2 and leaves the requirement 10 of W3 . Now leaving the second row in
the reduced transportation table we find that there is lowest cost 1 in cell (4, 1). Here we
allocate the maximum possible amount 7. This satisfies the requirement of W1 and leaves
the availability 7 in F4 . Leaving the first column, in the reduced transportation table we
find the lowest cost 2 in the cell (4, 3). We allocate the maximum possible amount 7 to
this cell. This exhausts the availability from F4 and leaves requirement 3 of W3 . Leaving
the 4th row, in the reduced transportation table, we find that there is lowest cost 4 in cell
(1, 3) and in cell (3, 2). We allocate the amount 3 in the cell (1, 3). This satisfies the
requirement of W3 and leaves the availability 2 in F1. In order to satisfy the availability of
F1 we allocate the amount 2 to the cell (1, 2). To complete the requirement of 9 units in
column 2, we allocate the amount 7 to the cell (3, 2).
Step 1 : Identify the smallest and next to smallest costs for each row of the
transportation table. Find the difference between them for each row. Write these
differences alongside the transportation table against the respective rows by enclosing
them in parentheses. Write the similar differences for each column below the
corresponding column. These are called ‘penalties’.
Step 2 : Now select the row or column for which the penalty is the largest. If a tie occurs,
use any arbitrary tie-breaking choice. Allocate the maximum possible amount to the cell
with the lowest cost in that particular row or column. Let the largest penalty correspond to the
i-th row and let cij be the smallest cost in the i-th row. Allocate the amount xij = min (ai, bj)
in the cell (i, j). Then cross out the i-th row or the j-th column in the usual manner and
construct the reduced matrix with the remaining availabilities and requirements.
Step 3 : Now compute the row and column penalties for the reduced transportation table
and repeat the step 2. We continue this process until all the available quantity is
exhausted or all the requirements are satisfied.
The method is well explained by taking the same numerical example as in method 1.
First we write the cost and requirement matrix and compute the penalties as follows :
W1 W2 W3 Available Penalties
Requirement 7 9 18
Penalties (1) (1) (1)
We find that the maximum penalty (2) is associated with row 1 and row 2, so we may
select any one of these. If we select row 1, then we allocate the maximum possible
amount to the lowest cost cell in this row i.e., cell (1, 1). Thus x11 = min (5, 7) = 5. This
exhausts the availability from F1. So we cross the row 1. Leaving this row, the reduced
cost and requirement matrix is as follows :
W1 W2 W3 Available Penalties
Requirement 2 9 18
Penalties (2) (1) (1)
Requirement 2 9 10
Penalties (4) (2) (5)
In this table, the maximum penalty (5) is associated with column 3 so the maximum
possible amount 10 is allocated to the cell with lowest cost 2 in this column. This
completes the requirement of W3 . After leaving the column corresponding to W3 the
remaining table is as follows :
W1 W2 Available Penalties
Requirement 2 9
Penalties (4) (2)
In this table, the maximum penalty (5) is associated with row 2, so the maximum possible
amount 2 is allocated to the cell with lowest cost 1 in this row.
The remaining amount 2 available to F4 is allocated to the cell with cost 6. In the last to
meet the requirement of W2 the amount 7 is allocated to the cell with cost 4.
The resulting complete set of allocations is x11 = 5, x23 = 8, x32 = 7, x41 = 2, x42 = 2,
x43 = 10, and the corresponding transportation cost is
= ` (5 . 2 + 8 . 1 + 7 . 4 + 2 . 1 + 2 . 6 + 10 . 2) = ` 80
Note : If in the selected row or column the minimum cost is not unique, then allocate in
that cell through which more allocations can subsequently be made at low cost.
Important : Although Vogel's method takes more time as compared to the other methods,
it reduces the time required to reach the optimal solution. To obtain the optimal solution,
the students are advised to find the initial B.F.S. by Vogel's method.
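The following Python sketch (not part of the original text; the function name is an assumption) implements Vogel's approximation method for a balanced problem. Penalty ties are broken by taking the first row or column found, so on some data the allocations may differ from a hand computation that breaks ties differently, although any such B.F.S. is an acceptable starting solution. On the example above it gives the cost ` 80.

    def vogel(cost, avail, req):
        """Initial B.F.S. of a balanced transportation problem by Vogel's method.
        Returns a dict {(i, j): quantity}."""
        a, b = list(avail), list(req)
        rows, cols = set(range(len(a))), set(range(len(b)))
        alloc = {}

        def penalty(costs):
            s = sorted(costs)
            return s[1] - s[0] if len(s) > 1 else s[0]

        while rows and cols:
            if len(rows) == 1 or len(cols) == 1:
                # Only one line left: allocate the remaining amounts directly.
                for i in rows:
                    for j in cols:
                        alloc[(i, j)] = min(a[i], b[j])
                break
            # Row and column penalties (difference of the two smallest costs).
            best = None                            # (penalty, kind, index)
            for i in rows:
                p = penalty([cost[i][j] for j in cols])
                if best is None or p > best[0]:
                    best = (p, 'row', i)
            for j in cols:
                p = penalty([cost[i][j] for i in rows])
                if best is None or p > best[0]:
                    best = (p, 'col', j)
            # Lowest-cost cell in the selected row or column.
            if best[1] == 'row':
                i = best[2]
                j = min(cols, key=lambda jj: cost[i][jj])
            else:
                j = best[2]
                i = min(rows, key=lambda ii: cost[ii][j])
            q = min(a[i], b[j])
            alloc[(i, j)] = q
            a[i] -= q
            b[j] -= q
            if a[i] == 0:
                rows.discard(i)        # origin exhausted
            else:
                cols.discard(j)        # destination satisfied
        return alloc

    cost = [[2, 7, 4], [3, 3, 1], [5, 4, 7], [1, 6, 2]]
    alloc = vogel(cost, [5, 8, 7, 14], [7, 9, 18])
    print(sum(q * cost[i][j] for (i, j), q in alloc.items()))   # 80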
3. If all the sources are emptied and all the destinations are filled show that
5. Explain the North-West corner rule for obtaining an initial basic feasible solution of
a transportation problem.
6. Explain the lowest cost entry method for obtaining an initial basic feasible solution
of a transportation problem.
8. Use North-West corner rule to determine an initial basic feasible solution to the
following transportation problem :
A 13 11 15 20 2 O1 6 4 1 5 14
From B 17 14 12 13 6 Origin O2 8 9 2 7 16
C 18 18 15 12 7 O3 4 3 6 2 5
A 2 11 10 3 7 4
B 1 4 7 2 1 8
Origin
C 3 9 4 8 12 9
Demand 3 3 4 5 6
10. Using ‘lowest cost entry method’ find the initial B.F.S. of the following
transportation problem :
(i) Destinations
A B C D Supply
I 1 5 3 3 34
Origins II 3 3 1 2 15
III 0 2 2 3 12
IV 2 7 2 4 19
(ii) Destination
D1 D2 D3 D4 Capacity
O1 1 2 3 4 6
Origin O2 4 3 2 0 8
O3 0 2 2 1 10
Demand 4 6 8 6
11. Obtain an initial B.F.S. to the following transportation problem using Vogel's
approximation method :
To
I II III IV Available
A 5 1 3 3 34
3 3 5 4 15
From B
C 6 4 4 3 12
D 4 1 4 2 19
Requirement 21 25 17 17
12. Determine an initial B.F.S. to the following transportation table using (i) matrix
minima method (ii) Vogel's approximation method :
Destination
D1 D2 D3 D4 Supply
O1 1 2 1 4 30
Origin O2 3 3 2 1 50
O3 4 2 5 9 20
Demand 20 40 30 10 100
13. Find the initial basic feasible solution of the following transportation problem using
(i) North-West corner rule (ii) matrix minima method (iii) Vogel's approximation
method:
Warehouse
W1 W2 W3 W4 Capacity
F1 19 30 50 10 7
Factory F2 70 30 40 60 9
F3 40 8 70 20 18
Requirement 5 8 7 14
14. Find initial basic feasible solutions of the following transportation problems by
Vogel's Approximation method :
O1 5 8 3 6 30 S1 3 7 6 4 5
Sources O2 4 5 7 4 50 Sources S2 2 4 3 2 2
O3 6 2 4 6 20 S3 4 3 8 5 3
O1 11 13 17 14 250 O1 21 16 25 13 11
O3 21 24 13 10 400 O3 32 27 18 41 19
(v) Stores
S1 S2 S3 S4 Supply
A 5 1 3 3 34
Warehouse B 3 3 5 4 15
[Kanpur 2008]
C 6 4 4 3 12
D 4 1 4 2 19
Demand 21 25 17 17 80
11. x12 = 25, x13 = 9, x21 = 15, x33 = 8, x34 = 4, x41 = 6, x44 = 13
12. For (i) and (ii) both : x11 = 20, x13 = 10, x22 = 20, x 23 = 20, x24 = 10, x32 = 20
13. (i) x11 = 5, x12 = 2, x22 = 6, x23 = 3, x33 = 4, x34 = 14
(ii) x14 = 7, x21 = 2, x23 = 7, x31 = 3, x32 = 8, x34 = 7
(iii) x11 = 5, x14 = 2, x23 = 7, x24 = 2, x32 = 8, x34 = 10
14. (i) x11 = 10, x13 = 20, x21 = 20, x22 = 20, x24 = 10, x32 = 20
(ii) x11 = 3, x14 = 2, x23 = 2, x32 = 3
(iii) x11 = 200, x12 = 50, x22 = 175, x24 = 125, x33 = 275, x34 = 125
(iv) x14 = 11, x21 = 6, x22 = 3, x24 = 4, x32 = 7, x33 = 12
(v) x12 = 25, x13 = 9, x21 = 15, x31 = 4, x33 = 8, x41 = 2, x44 = 17
Independent position of a set of allocations means that it is always impossible to form any
closed loop through these allocations. A loop may or may not involve all the allocations. It
consists of (at least 4) horizontal and vertical lines with an allocation at each corner
which, in turn, is a junction of a horizontal and a vertical line.
Since there are m . n − (m + n − 1) = (m − 1)(n − 1) empty cells, therefore there are (m − 1)(n − 1)
such cell evaluations. So the process of computing individually cell evaluations for all
unoccupied cells is very complicated. To avoid this complication, we prove the following
result :
evaluation dij corresponding to each empty cell (i, j) is given by dij = c ij − (ui + v j ).
∑_{j=1}^{n} xij = ai or 0 = ai − ∑_{j=1}^{n} xij , i = 1, 2,..., m        ...(2)
and ∑_{i=1}^{m} xij = bj or 0 = bj − ∑_{i=1}^{m} xij , j = 1, 2,..., n        ...(3)
Multiplying (2) by ui (i = 1, 2,..., m), (3) by vj (j = 1, 2,..., n) and adding to (1), we have
Z = ∑_{i=1}^{m} ∑_{j=1}^{n} cij xij + ∑_{i=1}^{m} ui ( ai − ∑_{j=1}^{n} xij ) + ∑_{j=1}^{n} vj ( bj − ∑_{i=1}^{m} xij )
or Z = ∑_{i=1}^{m} ∑_{j=1}^{n} [ cij − (ui + vj) ] xij + ∑_{i=1}^{m} ui ai + ∑_{j=1}^{n} vj bj        ...(4)
But it is given that for each occupied cell (r , s) (cell with positive allocation)
crs = ur + vs . ...(5)
Hence in the objective function (4), all the terms of positive allocations vanish as their
coefficients are zero. Thus, for this feasible solution the value of the objective function
(4) reduces to
Z = ∑_{i=1}^{m} ui ai + ∑_{j=1}^{n} vj bj .        ...(6)
Let us determine the cell evaluation for the empty cell (h, k). When we allocate one unit
to this empty cell, the positive allocations become (m + n) in number and hence they
become dependent in position. So a closed loop can be formed. Let the closed loop thus
formed be as shown in the following figure :
The loop joins the empty cell (h, k) with the occupied cells (h, s), (r, s) and (r, k); one unit is
added at (h, k) and (r, s) and one unit is subtracted at (h, s) and (r, k) :

    (h, k) +1 ———— (h, s) −1
       |                 |
    (r, k) −1 ———— (r, s) +1
Here the cells (h, s), (r, s) and (r, k) are all occupied cells and hence
chs = uh + vs , crs = ur + vs and crk = ur + vk .
We have to decrease the individual allocations at the cells (h, s) and (r, k) and increase the
allocation at the cell (r, s) by 1 unit, to maintain the row and column sums. So the values of
the individual allocations in these occupied cells are changed, but in the objective function (4)
they contribute nothing, as their coefficients are necessarily zero. Thus, corresponding to this
new solution [allocating 1 unit at the cell (h, k)], the value of the objective function is given by
Z′ = chk − (uh + vk) + ∑_{i=1}^{m} ui ai + ∑_{j=1}^{n} vj bj        ...(7)
Thus, in general, the cell evaluation corresponding to each empty cell (i, j) is given by
dij = cij − (ui + vj).
Note 1 : The above result is proved by considering a square (or rectangle) shaped loop. It
can be generalised by considering a loop of an arbitrary shape connecting empty cell to
the occupied cells.
Note 2: Since there are (m + n −1) number of equations of the form (5) in (m + n) number
of unknowns ui and v j , so assigning an arbitrary value to one of ui or v j the rest of the
(m + n −1) unknowns can be easily found. Generally, we choose that ui or v j = 0 for which
the corresponding row or column has the maximum number of individual allocations.
The numbers ui and v j are called Fictitious costs or Shadow costs.
Step 1 : Construct a transportation table entering the capacities a1, a2,..., am of the
origins and the requirements b1, b2,..., bn of the destinations. Enter the costs cij in ( ) at
the upper left corners of all the cells.
Step 2 : Obtain an initial basic feasible solution (preferably by Vogel's method).
Step 3 : Find a set of (m + n) numbers ui (i = 1, 2,..., m) and vj (j = 1, 2,..., n) such that for
each occupied cell (r, s), crs = ur + vs.
Step 4 : Find the evaluation ui + vj for each unoccupied cell (i, j) and enter it in the
corresponding cell.
Step 5 : Find the cell evaluation dij = cij − (ui + vj) for each unoccupied cell (i, j) and enter it
at the lower right corner of the corresponding cell.
Step 6 : Examine the sign of each dij for the unoccupied cells and apply the optimality test :
(i) If all dij > 0, then the solution under test is optimal and unique.
(ii) If all dij ≥ 0, with at least one dij = 0, then the solution under test is optimal and an
alternative optimal solution exists.
(iii) If at least one dij < 0, then the solution is not optimal. In this case go to step 7.
Step 7 : Select the minimum (most negative) dij. Form a closed loop joining this cell of
minimum dij with cells of positive allocations. Then allocate to this cell as much as you can
by vacating at least one of the pre-occupied cells and maintaining the row and column sum
restrictions. This will give a new B.F.S.
Step 8 : Repeat steps 3 to 6 to test the optimality of this new B.F.S. Continue the
process until an optimum basic feasible solution is obtained.
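The following Python sketch (an illustration only; the function name and data layout are assumptions) carries out steps 3 to 6 for a given basic feasible solution: it solves crs = ur + vs over the occupied cells and then computes dij = cij − (ui + vj) for the empty cells. Applied to the initial B.F.S. of Example 1 below, it reports every dij ≥ 0 (with one evaluation equal to zero), confirming that that solution is optimal.

    def modi_check(cost, alloc, m, n):
        """Steps 3-6 of the MODI method: the numbers u_i, v_j from the occupied
        cells and the evaluations d_ij of the empty cells.
        `alloc` is a dict {(i, j): quantity} with m + n - 1 independent allocations."""
        u, v = [None] * m, [None] * n
        u[0] = 0                                # one of the u_i or v_j is arbitrary
        # Repeatedly apply c_rs = u_r + v_s on occupied cells until all are known.
        for _ in range(m + n):
            for (i, j) in alloc:
                if u[i] is not None and v[j] is None:
                    v[j] = cost[i][j] - u[i]
                elif v[j] is not None and u[i] is None:
                    u[i] = cost[i][j] - v[j]
        d = {(i, j): cost[i][j] - (u[i] + v[j])
             for i in range(m) for j in range(n) if (i, j) not in alloc}
        return u, v, d

    # Initial B.F.S. of Example 1 below (found by Vogel's method).
    cost = [[5, 3, 6, 2], [4, 7, 9, 1], [3, 4, 7, 5]]
    alloc = {(0, 1): 18, (0, 2): 1, (1, 0): 12, (1, 3): 25, (2, 0): 4, (2, 2): 30}
    u, v, d = modi_check(cost, alloc, 3, 4)
    print(d)                       # all cell evaluations are >= 0
    print(min(d.values()) >= 0)    # True: the solution under test is optimal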
Example 1: Determine the optimum basic feasible solution to the following transportation
problem :
D1 D2 D3 D4 ai↓
O1 5 3 6 2 19
O2 4 7 9 1 37
O3 3 4 7 5 34 [Meerut 2005]
bj→ 16 18 31 25
Solution: Step 1 : Using Vogel's method, the initial basic feasible solution is obtained as
follows :
            D1         D2         D3         D4       ai ↓
  O1       (5)       (3) 18     (6) 1       (2)         19
  O2       (4) 12     (7)        (9)       (1) 25       37
  O3       (3) 4      (4)       (7) 30      (5)         34
  bj →      16         18         31         25

Transportation cost for this solution
= ` (18 . 3 + 1 . 6 + 12 . 4 + 25 . 1 + 4 . 3 + 30 . 7) = ` 355.
Step 2 : Now we determine a set of numbers ui and vj by writing crs = ur + vs for each
occupied cell (r, s), i.e., using the cost in the respective occupied cells only.
Since all rows contain the same (maximum) number of allocations, we take any one of the ui
(say u3) equal to zero.
Then c33 = u3 + v3 or 7 = 0 + v3 or v3 = 7.
Similarly c31 = u3 + v1 or 3 = 0 + v1 or v1 = 3.
Again c21 = u2 + v1 or 4 = u2 + 3 or u2 = 1.
Step 3 : Now we find the evaluation ui + v j for each unoccupied cell (i, j) and enter it at
Step 4 : The we find the cell evaluation dij = cij − (ui + v j ) for each unoccupied cell (i, j)
and enter at the lower right corner of the corresponding unoccupied cell.
Step 5 : Since all dij ≥ 0, the solution under test is optimal; and since one of the cell
evaluations is zero, an alternative optimal solution also exists.
D1 D2 D3 D4 Supply
O1 23 27 16 18 30
O2 12 17 20 51 40
O3 22 28 12 32 53
Demand 22 35 25 41
[Meerut 2009]
Solution: Step 1: By Vogel's method an initial B.F.S. of the given problem is given in the
following table :
D1 D2 D3 D4 ai ↓
bj → 22 35 25 41
Step 2 : Now we determine a set of ui and v j such that for each occupied cell
(r , s), crs = ur + vs .
Step 3 : Now we find the evaluation ui + v j for each unoccupied cell (i, j) and enter at the
Step 4 : Then we find the cell evaluation dij = cij − (ui + v j ) for each unoccupied cell (i, j)
and enter at the lower right corner of the corresponding unoccupied cell.
Step 5 : Since all dij for empty cells are > 0, the solution under test is optimal.
x14 = 30, x21 = 5, x22 = 35, x31 = 17, x33 = 25, x34 = 11
Example 3: Solve the following transportation problem in which cell entries represent unit
cost :
To
O/D D1 D2 D3 Available
O1 2 7 4 5
From  O2    3    3    1    8
O3 5 4 7 7
O4 1 6 2 14
Required 7 9 18 34
Solution: Step 1 : Using Vogel's method, the initial B.F.S. is obtained as given below; it is the
solution x11 = 5, x23 = 8, x32 = 7, x41 = 2, x42 = 2, x43 = 10 found earlier by Vogel's method.
ui (i = 1, 2, 3, 4) and v j ( j = 1, 2, 3)
For this let us choose u4 = 0 (as row 4 contains maximum number of allocations).
∴ v1 = 1, v2 = 6, v3 = 2.
∴ u1 = 1, u2 = −1, u3 = −2.
Step 3 : Now we find the evaluation ui + v j for each unoccupied cell (i, j) and enter it at
Step 4 : Then we find the cell evaluation dij = cij − (ui + v j ) for each unoccupied cell (i, j)
and enter it at the lower right corner of the corresponding unoccupied cell.
vj → 1 6 2
Step 5 : From the above table, we observe that the cell evaluation d22 = −2, is negative.
Therefore the solution under test is not optimal. The solution can be improved as shown
in the next step.
Step 6 : Since minimum dij is d22 = −2 (negative), so we allocate (say, θ) to the cell (2, 2)
as much as possible.
Now when we have decided to include one more cell to the solution the occupied cells
become dependent. Therefore we first identify the loop joining cell (2, 2) with the
occupied cells.
It is easily seen by the following rule that at the most θ = 2 units can be allocated from cell
(4, 2) to cell (2, 2) still satisfying the row and column total and non-negativity
restrictions on the allocations.
Here min [8 − θ, 2 − θ] = 0
or 2 −θ = 0
or θ = 2 units.
Thus cell (2, 2) enters the solution, while cell (4, 2) leaves the solution i.e., it becomes
empty. Thus improved B.F.S. is obtained. The illustration is shown in the following
tables :
= ` (2 . 5 + 3 . 2 + 1. 6 + 4 . 7 + 1. 2 + 2 .12)
= ` 76,
Steps 7 and 8 : For the improved solution x11 = 5, x22 = 2, x23 = 6, x32 = 7, x41 = 2, x43 = 12,
taking u4 = 0 as before, we get
u1 = 1, u2 = −1, u3 = 0, u4 = 0 and v1 = 1, v2 = 4, v3 = 2,
and the cell evaluations of the empty cells are
d12 = 2, d13 = 1, d21 = 3, d31 = 4, d33 = 5, d42 = 2.
Since all the dij for the empty cells are > 0, the solution under test is optimal.
Example 4: There are three parties who supply and three who require the following
quantities of coal :
1 6 8 4
2 4 9 3
3 1 2 6
Solution: By matrix minima method the initial B.F.S. of the problem is as follows :
Destinations
A B C
Available
(6) (8) (4)
1 1 10 3 14
Required 6 10 15
To test the solution for optimality : Finding the set of ui (i = 1, 2, 3), v j ( j = 1, 2, 3) such
that for occupied cells crs = ur + vs and then entering evaluations (ui + v j ) and dij in the
unoccupied cells, we get the following table :
ui↓
(6) (8) (4)
1 10 3 0
vj→ 6 8 4
Since all dij are not ≥ 0, the solution under test is not optimal.
First iteration : Here two cell evaluations are negative and they are both most negative.
We shall include any one of them in the solution. Let us allocate at cell (2,1) as much as
possible.
Improved solution
(6) (8) (4)
1–θ 10 3+θ 10 4
⇒ 1− θ = 0
⇒ θ = 1.
Thus cell (2,1) enters the solution while cell (1, 1) leaves the solution i.e., it becomes
empty.
= ` 138.
To test the improved solution for optimality : We have the following table giving all
the necessary information :
ui↓
(6) (5) (8) (4)
10 4 0
(1)
(4) (9) (7) (3)
1 11 –1
(2)
(1) (2) (4) (6) (0)
5 –4
(–2) (6)
vj→ 5 8 4
Since all dij are not ≥ 0, the solution under test is not optimal.
Second Iteration : Since the largest negative cell evaluation is d32 = −2, so allocate as
much as possible to cell (3, 2)
Here min [5 − θ, 10 − θ, 10 − θ] = 0 ⇒ 5 − θ = 0 ⇒ θ = 5.
Thus cell (3, 2) enters the solution while cell (3, 1) leaves the solution.
vj → 4 7 3
Since all dij for empty cells are > 0, so the solution under test is optimal.
Destinations
1 2 3 Capacity
1 2 2 3 10
Sources 2 4 1 2 15
3 1 3 × 40
Demand 20 15 30
The cost of shipment from the third source to the third destination is not known. How many
units should be transported from the sources to the destinations so that the total cost of
transporting all the units to their destinations is a minimum?
Solution: Since the cost c33 is unknown, we assign a large cost, say M, to this cell.
Then using Vogel's method an initial B.F.S. is obtained as shown in the table.
1 2 3
ai ↓
(2) (2) (3)
1 10 10
bj → 20 15 30
vj → 1 3 M
Since M is very large, we observe that all the cell evaluations dij ≥ 0. Hence the current
solution is optimum.
Note : The cell (3, 3) also appears in the solution for which the cost of shipment is not
known. This is known as pseudo optimum basic feasible solution.
2. It may become degenerate at any intermediate stage, when the selection of one
entering cell empties two or more pre-occupied cells simultaneously.
The extremely small quantity usually denoted by ∆ (delta) or ε (epsilon) satisfies the
following conditions :
2. x ij + ∆ = x ij = x ij − ∆, x ij > 0
3. ∆ + 0 = ∆.
4. If there are more than one ∆'s introduced in the solution, then
(i) If ∆, ∆′ are in the same row, ∆ < ∆′ when ∆ is to the left to ∆′ and
(ii) If ∆, ∆′ are in the same column, ∆ < ∆′ when ∆ is above ∆′.
The above rules show that even after introducing ∆, the original solution of the problem is
not changed. It is merely a technique for applying the optimality test. As ∆ has no physical
significance, it is ultimately omitted.
Example 1: A manufacturer wants to ship 8 loads of his product as shown in the table.
The matrix gives the mileage from origins O to destinations D. Shipping costs are ` 10 per
load per mile. What shipping schedule should be used ?
D1 D2 D3 Available
O1 50 30 220 1
O2 90 45 170 3
O3 250 200 50 4
Required 4 2 2
D1 D2 D3
ai ↓
(50) (30) (220)
O1 1 1
bj → 4 2 2
Since the total number of allocations is 4, which is one less than m + n − 1 = 5, this
solution is a degenerate solution. So an attempt to assign ui and vj values to the above
table will not succeed.
To resolve this degeneracy we allocate a very small amount ∆ to some suitable cell. We
allocate ∆ to the cell (1, 2), getting 5 allocations in independent positions.
4 2+ ∆ = 2 2
To test the solution for optimality : Now to test solution for optimality we have the
following table :
ui ↓
(50) (30) (220) –120
1+θ ∆−θ –170
340
(90) (45) 70 (170) –80
3−θ θ –130
–25 250
(250) 220 (200) (50)
2 2 0
30
vj → 220 200 50
Since d22 = −25 < 0, the solution under test is not optimal. Now we shall allocate to
this cell (2, 2) as much as possible. Thus we take ∆ from the cell (1, 2) to the cell (2, 2) and
form the new table to check the solution for optimality.
ui ↓
(50) (30) 5 (220) –145
1 –195
25 365
(90) (45) (170) –105
3 ∆ –155
275
(250) 245 (200) (50)
2 2 0
5
vj → 245 200 50
Since all the dij for the empty cells are > 0, the solution under test is optimal.
O1 7 4 0 5
Sources O2 6 8 0 15
O3 3 9 0 9
bj → 15 6 8
[Meerut 2006]
15 6 8
Now to test this solution for optimality we get the following table in usual manner :
ui ↓
(7) (4) 9 (0) 0
5 0
–5 0
(6) (8) (0) –1
10 5 –1
1
(3) 7 (9) (0)
1 8 0
–4
vj → 7 9 0
Since all the cell evaluations are not ≥ 0, so the solution under test is not optimal.
The largest negative cell evaluation is d12 = −5, so we allocate as much as possible to this
cell (1, 2).
15 6 8 15 6 8
Since two cells vacate simultaneously, the number of allocations becomes less than
m + n − 1, i.e., 5. Hence this is a degenerate solution, and technically we cannot apply the
optimality test. A negligible quantity ∆ may also be introduced in the independent cell
(3, 1), although the least-cost independent cell is (2, 3).
15 +∆ = 15 6 8
vj →
3 9 0
Since all cell evaluations are not ≥ 0, so the solution under test is not optimal. The largest
negative cell evaluation is d22 = −4, so allocate as much as possible to the cell (2, 2).
15 6 8
To test this new improved solution for optimality we have the following table :
ui ↓
(7) 2 (4) (0) –1
5 –4
5 1
(6) (8) (0) 3
14 1 0
–3
(3) (9) 5 (0)
1 8 –3
4
vj → 6 8 3
Since all the cell evaluations are not ≥ 0, so the solution under test is not optimal. The
largest negative cell evaluation is d23 = −3, so allocate as much as possible to the cell (2, 3).
15 6 8
To test this new improved solution for optimality we have the following table :
ui ↓
(7) 2 (4) (0) –4
5 –4
5 4
(6) (8) (0)
6 1 8 0
Since all the cell evaluations for empty cells are positive, the solution under test is
optimal.
transportation problem.
Example 1: Solve the following unbalanced transportation problem (symbols have their
usual meanings) :
D1 D2 D3 ai ↓
O1 4 3 2 10
O2 2 5 0 13
O3 3 8 6 12
bj → 8 5 4
O1 4 3 2 0 10
O2 2 5 0 0 13
O3 3 8 6 0 12
bj → 8 5 4 18
Applying Vogel's method in the usual manner, the initial B.F.S. is obtained as given
below :
D1 D2 D3 D4
ai ↓
(4) (3) (0) (0)
O1 5 5 10
bj → 8 5 4 18
= ` (3 . 5 + 0 . 5 + 2 . 8 + 0 . 4 + 0 .1 + 0 .12) = ` 31.
Thus the solution of the given problem is x12 = 5, x21 = 8, x23 = 4 and min. cost = ` 31.
F1 19 30 50 10 7
F2 70 30 40 60 9
F3 40 8 70 20 18
O1 1 2 1 4 30
O2 3 3 2 1 50
O3 4 2 5 9 20
O1 5 5 6 4 2 9
O2 6 9 7 8 5 13
O3 5 6 4 6 3 9
Demand 3 7 8 5 8
where Oi and D j denote i-th origin and j-th destination respectively.
[Meerut 2012 (BP)]
9. Is x13 = 50, x14 = 20, x21 = 55, x31 = 30, x32 = 35, x34 = 25 an optimum solution
of the following transportation problem ?
Available units
6 1 9 3 70
11 5 2 8 55
10 12 4 7 90
Required units 85 35 50 45
10. A company has four plants P1, P2 , P3 , P4 from which it supplies to three markets
M1, M2 , M3 . Determine the optimal transportation plan from the following data
giving the plant to market shifting costs, quantities available at each plant and
quantities required at each market.
P1 P2 P3 P4 Required
M1 19 14 23 11 11
M2 15 16 12 21 13
M3 30 25 16 39 19
Available 6 10 12 15
11. The following table gives the cost for transporting material from supply points
A, B, C and D to demand points E, F, G, H and J.
E F G H J
A 8 10 12 17 15
B 15 13 18 11 9
From
C 14 20 6 10 13
D 13 19 7 5 12
To
I II III IV
A 15 10 17 18 2
From B 16 13 12 13 6
C 12 17 20 11 7
21 16 25 13 11
From 17 18 14 23 13
32 27 18 41 19
7 3 4 2
2 1 3 3
From
3 4 6 5
15. Given below the unit cost array with supplies ai ; i = 1, 2, 3 and demand bj ;
j = 1, 2, 3, 4.
ai ↓
8 10 7 6 50
12 9 4 7 40
9 11 10 8 30
bj →
25 32 40 23
X Y Z
A 8 7 3
B 3 8 9
C 11 3 5
F1 4 3 1 2 6 40
F2 5 2 3 4 5 30
From
F3 3 5 6 3 2 20
F4 2 4 4 5 3 10
Required 30 30 15 20 5
Obtain the optimal solution of the problem.
18. Solve the transportation problem where all entries are unit costs.
D1 D2 D3 D4 D5 ai ↓
O1 73 40 9 79 20 8
O2 62 93 96 8 13 7
O3 96 65 80 50 65 9
O4 57 58 29 12 87 3
O5 56 23 87 18 12 5
bj →
6 8 10 4 4 [Meerut 2004]
19. Solve the following transportation problem (cell entries represent unit cost) :
Available
5 3 7 3 8 5 3
5 6 12 5 7 11 4
2 1 2 4 8 2 2
9 6 10 5 10 9 8
Required 3 3 6 2 1 2 17
Demand 8 12 13 6 Demand 4 6 8 6
Since there is not enough supply, some of the demands at these destinations may
not be satisfied. Suppose there are penalty costs for every unsatisfied demand unit
which are given by 5, 3 and 2 for destination 1, 2 and 3 respectively. Find the
optimal solution.
22. Solve the following transportation problem :
To
D1 D2 D3 D4 ai ↓
O1 5 3 6 2 19
From O2 4 7 9 1 37
O3 3 4 7 5 34
bj →
16 18 32 25 [Meerut 2010]
3. The necessary and sufficient condition for the existence of a feasible solution of a
transportation problem is :
(a) ∑ ai = ∑ bj (b) ∑ ai ≠ ∑ bj
(c) ∑ ai = 0 (d) ∑ bj = 0
4. In a transportation problem a loop may be defined as an ordered set of at least :
(a) 3 cells (b) 4 cells
(c) 5 cells (d) 6 cells
5. If we have a feasible solution consisting of m + n − 1 independent allocations, and if
numbers ui and vj satisfy crs = ur + vs for each occupied cell (r, s), then the
evaluation dij corresponding to each empty cell (i, j) is given by :
(a) dij = cij − (ui + v j ) (b) dij = cij + (ui + v j )
(c) dij = cij − (ui − v j ) (d) dij = cij + (ui − v j )
6. To improve the current B.F.S. if it is not optimal we allocate to the cell for which dij
is :
(a) Minimum and negative (b) Maximum and positive
(c) 0 (d) None of these
7. In a transportation problem the solution under test will be optimal if all the cell
evaluations are :
(a) <0 (b) >0
(c) ≤0 (d) ≥0
8. In Vogel's approximation method we select the row or column for which the penalty
is the :
(a) Largest (b) Smallest
(c) Zero (d) None of these
9. To find an initial B.F.S. we start with the cell (1, 1) in :
(a) North-West Corner Rule (b) Lowest Cost Entry Method
(c) Vogel's approximation method (d) None of these
10. To find an initial B.F.S. by Matrix Minima Method, we first choose the cell with :
(a) Zero cost (b) Highest cost
(c) Lowest cost (d) None of these
True or False
1. A feasible solution is said to be optimal if it minimizes the total transportation cost.
[Meerut 2005]
7. In optimality test for all the occupied cells, cell evaluations dij are non-zero.
8. If the cell evaluations dij are > 0 for all the unoccupied cells, then the optimum
solution is unique.
9. There are m + n independent equations in a transportation problem, m and n being
the number of origins and destinations.
10. In a non-degenerate basic feasible solution the allocations must be in independent
positions.
Exercise 10.2
6. Initial B.F.S. is x11 = 5, x14 = 2, x23 = 7, x24 = 2, x32 = 8, x34 = 10 ; not optimal.
Optimal solution is x11 = 5, x14 = 2, x22 = 2, x23 = 7, x32 = 6, x34 = 12
7. x11 = 20, x13 = 10, x22 = 20, x23 = 20, x24 = 10, x32 = 20 ; min. cost = ` 180
8. x12 = 4, x14 = 5, x21 = 3, x22 = 2, x25 = 8, x32 = 1, x33 = 8 min cost = ` 154
9. No ; x12 = 30, x14 = 40, x22 = 5, x23 = 50, x31 = 85, x34 = 5
and min. cost = ` 1160
10. x14 = 11, x21 = 6, x22 = 3, x24 = 4, x32 = 7, x33 = 12 ; min. cost = ` 710.
11. Not; x12 = 100, x22 = 70, x25 = 80, x31 = 90, x33 = 50, x35 = 40, x44 = 201, x45 = 70
or x12 = 100, x22 = 70, x25 = 80, x31 = 20, x33 = 50, x35 = 110, x41 = 70, x44 = 210
12. x12 = 2, x22 = 1, x23 = 4, x24 = 1, x31 = 3, x34 = 4 ; min. cost = ` 174
13. x14 = 11, x21 = 6, x22 = 3, x24 = 4, x32 = 7, x33 = 12 ; min. cost = ` 796
15. x11 = 25, x12 = 2, x14 = 23, x23 = 40, x32 = 30 ; min. cost = ` 848
16. x13 = 60, x21 = 50, x23 = 20, x32 = 80 ; min. cost = ` 750
17. x11 = 5, x13 = 15, x14 = 20, x22 = 30, x31 = 15, x35 = 5, x41 = 10; min.cost = ` 210
18. x13 = 8, x24 = 4, x25 = 3, x31 = 5, x32 = 4, x41 = 1, x43 = 2, x52 = 4, x55 = 1 ;
min. cost = ` 33
20. (i) x11 = 8, x12 = 7, x22 = 5, x24 = 6, x33 = 13 ; min cost = ` 224
(ii) x12 = 6, x23 = 2, x24 = 6, x31 = 4, x33 = 6 ; min. cost = ` 28
21. x12 = 10, x21 = 60, x22 = 10, x23 = 10, x31 = 15 ; min. cost = ` 515
22. x12 = 18, x13 = 6, x21 = 12, x24 = 25, x31 = 4, x33 = 30, min. cost = ` 355.
1. minimum                    2. Σ_{j=1}^{n} bj
3. assignment problem 4. m + n −1
5. non-degenerate. 6. m + n −1
7. >0 8. penalties
9. maximum 10. MODI Method
True or False
1. True 2. False
3. True 4. True
5. True 6. False
7. False 8. True
9. False 10. True
11.1 Introduction
In the present chapter we deal with a special type of linear programming problem, generally called the ‘Assignment problem’. Although the simplex method is powerful enough to solve all linear programming problems, problems of this type may be solved by a very interesting method called the ‘Assignment Technique’, which is described in this chapter.
The classical problems in which the objective is to assign a number of origins (jobs) to an equal number of destinations (persons) at a minimum cost (or maximum profit) are called ‘Assignment problems’. Here we make the assumption that each person can perform each job, but with varying degrees of efficiency.
Such types of problems may consist of assigning men to offices, classes to rooms, drivers
to trucks, trucks to delivery routes or problems to research teams, etc.
The above assignment problem can be stated in the form of an n × n matrix [cij ] of real numbers, called the cost matrix or effectiveness matrix, as follows :
Cost Matrix
                              Jobs
             1     2     3    ...    j    ...    n
        1   c11   c12   c13   ...   c1j   ...   c1n
        2   c21   c22   c23   ...   c2j   ...   c2n
        3   c31   c32   c33   ...   c3j   ...   c3n
Persons .     .     .     .           .           .
        i   ci1   ci2   ci3   ...   cij   ...   cin
        .     .     .     .           .           .
        n   cn1   cn2   cn3   ...   cnj   ...   cnn
(ii) Σ_{i=1}^{n} xij = 1 (only one person should be assigned to the j-th job), j = 1, 2, ..., n.
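For small n the above formulation can be checked directly by enumerating all permutations, since every feasible 0–1 matrix xij corresponds to one permutation of the jobs. The following Python sketch is not part of the text; it only illustrates the objective being minimized, and the 3 × 3 cost matrix in it is hypothetical.

# Brute-force illustration of the assignment formulation: choose one cell in
# each row and each column (i.e., a permutation) minimizing sum c[i][p[i]].
from itertools import permutations

def brute_force_assignment(cost):
    n = len(cost)
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(n)):      # perm[i] = job assigned to person i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

cost = [[4, 2, 7],      # hypothetical cost matrix, for illustration only
        [3, 6, 5],
        [8, 1, 4]]
print(brute_force_assignment(cost))    # -> ((1, 0, 2), 9)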
Reduction Theorem : In an assignment problem, if a constant is added to (or subtracted from) every element of a row (or a column) of the cost matrix, then an assignment which minimizes the total cost for one matrix also minimizes the total cost for the other matrix.
Or
Mathematical Statement of Reduction Theorem : If xij = Xij minimizes
Z = Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij over all xij = 0 or 1 such that Σ_{j=1}^{n} xij = 1 and Σ_{i=1}^{n} xij = 1,
then xij = Xij also minimizes
Z′ = Σ_{i=1}^{n} Σ_{j=1}^{n} c′ij xij, where c′ij = cij ± ai ± bj for all i, j = 1, 2, ..., n, and ai , bj are given constants.
Proof : We have
Z′ = Σ_{i=1}^{n} Σ_{j=1}^{n} c′ij xij = Σ_{i=1}^{n} Σ_{j=1}^{n} (cij ± ai ± bj) xij
   = Σ_{i} Σ_{j} cij xij ± Σ_{i} Σ_{j} ai xij ± Σ_{j} Σ_{i} bj xij
   = Z ± Σ_{i=1}^{n} ai . 1 ± Σ_{j=1}^{n} bj . 1        [∵ Z = Σ_{i} Σ_{j} cij xij and Σ_{j} xij = 1 = Σ_{i} xij]
   = Z ± Σ_{i=1}^{n} ai ± Σ_{j=1}^{n} bj.
Since the terms Σ ai and Σ bj are independent of the xij's, it follows that Z′ is minimized whenever Z is minimized. Hence the assignment xij = Xij which minimizes Z also minimizes Z′.
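The theorem can also be checked numerically. In the sketch below (an illustration only, in plain Python), the cost matrix is the one of Example 1 later in this chapter; subtracting the row minima and then the column minima changes every assignment's total by the same constant, so the minimizing permutation is unchanged.

# Numerical check of the reduction theorem: row/column reduction does not
# change which assignment is optimal, only the optimal value is shifted.
from itertools import permutations

def best_assignment(cost):
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))

c = [[ 8, 26, 17, 11],
     [13, 28,  4, 26],
     [38, 19, 18, 15],
     [19, 26, 24, 10]]                      # cost matrix of Example 1 below

reduced = [[v - min(row) for v in row] for row in c]                    # subtract row minima
col_min = [min(col) for col in zip(*reduced)]
reduced = [[v - m for v, m in zip(row, col_min)] for row in reduced]    # then column minima

print(best_assignment(c))          # (0, 2, 1, 3), i.e. A->I, B->III, C->II, D->IV
print(best_assignment(reduced))    # the same permutation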
Theorem : If for an assignment problem all cij ≥ 0 and there exists a solution xij = Xij satisfying
Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij = 0,
then this solution is an optimal solution for the problem (i.e., minimizes the objective function).
Proof : Since all cij ≥ 0 and all xij ≥ 0, the objective function Z = Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij cannot be negative; its smallest possible value is zero, and this value is attained by xij = Xij. Hence xij = Xij is an optimal solution.
These two results are the basis of the following assignment algorithm (Hungarian method), which proceeds step by step.
Step 1 : Subtract the minimum element of each row of the cost matrix, from all the
elements of the respective rows. Further, modify the resulting matrix by subtracting the
minimum element of each column from all the elements of the respective columns. These
operations create zeros.
Step 2 : Make assignments using only these zeros. If a complete assignment is possible, it is the required optimal assignment plan; if not, we shall modify the cost matrix to create some more zeros in it.
Thus, at the end of Step 1 the question is to decide whether a complete assignment is possible or not. This is easy for small cost matrices but not so easy for larger ones, so we apply the following procedure :
Starting with row 1 of the matrix obtained in Step 1, examine the rows successively until a row with exactly one zero element is found. Mark ‘□’ at this zero, as an assignment will be made there. Mark ‘×’ at all other zeros lying in the column containing the assigned zero; this eliminates the possibility of making a further assignment in that column. Continue in this manner until all the rows have been examined.
When the set of rows has been completely examined, an identical procedure is applied successively to the columns. Starting with column 1, examine the columns until a column containing exactly one unmarked zero is found. Make an assignment in that position (indicated by ‘□’) and mark ‘×’ at all other zeros in the row containing this marked zero. Proceed in this way until the last column has been examined.
Continue the above operations on rows and columns successively until we reach either of the two situations :
(i) all the zeros have been marked ‘□’ or ‘×’, or
(ii) the remaining unmarked zeros lie at least two in each row and in each column.
In situation (ii), mark ‘□’ arbitrarily at one of the zeros of a row (or column) containing the fewest such zeros, cross the other zeros of its row and column, and continue as before.
Step 3 : If every row and every column contains an assignment, the complete (optimal) assignment has been obtained. Otherwise draw the minimum number of horizontal and vertical lines required to cover all the zeros, as follows :
(i) Mark (√) all rows in which no assignment has been made.
(ii) Mark (√) all columns which have a zero in any marked row.
(iii) Mark (√) all rows which have an assignment in any marked column.
(iv) Repeat (ii) and (iii) until no more rows or columns can be marked.
(v) Draw straight lines through all the unmarked rows and through all the marked columns.
Note 1: The lines thus drawn (horizontal and vertical both) are the minimum number of
lines to pass through all the zeros of the matrix. It can be shown that the minimum
number of lines required to pass through all the zeros of the matrix is the same as the
maximum number of assigned independent zeros of the matrix.
Thus if the number of lines is exactly n, then the complete assignment plan is obtained
while if the number of lines is less than n, then the complete assignment is not possible.
2. These lines cover all the zeros and each line passes through one and only one marked zero (assignment). If there were two marked ‘□’ zeros in a row, it would mean that two jobs are assigned to one person, which violates the hypothesis. Thus no line passes through more than one marked ‘□’ zero.
Step 4 : Select the smallest of the elements that do not have a line through them, subtract it from all the elements that do not have a line through them, add it to every element that lies at the intersection of two lines, and leave the remaining elements of the matrix unchanged. In the modified matrix the number of zeros never decreases in comparison with the matrix of Step 2. Now apply Step 2 to this new matrix. If a complete optimal assignment is still not possible, repeat Steps 3 and 4 iteratively, continuing until the minimum number of lines becomes n.
Thus exactly one marked ‘□’ zero is obtained in each row and each column of the matrix. The assignment corresponding to these marked ‘□’ zeros gives the optimal assignment.
Note : The procedure of subtracting the minimum uncovered element from all the uncovered elements and adding it to the elements at the intersections of two lines does not change the optimum solution, i.e., the two matrices have the same optimal assignments. This is because the two operations of addition and subtraction are together equivalent to subtracting the chosen minimum element from every row not covered by a line and adding it to every column covered by a line, and by the reduction theorem such operations do not change the optimum solution.
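As a rough illustration of Steps 1 and 2 only (the covering and adjustment of Steps 3 and 4 are omitted), the following Python sketch reduces the matrix and then repeatedly assigns any row or column containing exactly one available zero. It is a simplified reading of the procedure, not the book's algorithm in full; the matrix used is that of Example 1 below, for which these two steps already yield a complete assignment.

# Sketch of Steps 1-2: create zeros by row/column reduction, then assign any
# row or column that has exactly one remaining zero, crossing out the other
# zeros in the same row and column.  (Ties and Steps 3-4 are not handled.)

def reduce_matrix(c):
    c = [[v - min(row) for v in row] for row in c]                 # Step 1: rows
    col_min = [min(col) for col in zip(*c)]
    return [[v - m for v, m in zip(row, col_min)] for row in c]    # then columns

def zero_assignment(c):
    n = len(c)
    free = {(i, j) for i in range(n) for j in range(n) if c[i][j] == 0}
    assigned = []
    while free:
        pick = None
        for i in range(n):                                   # row with a single zero
            zs = [cell for cell in free if cell[0] == i]
            if len(zs) == 1:
                pick = zs[0]
                break
        if pick is None:
            for j in range(n):                               # column with a single zero
                zs = [cell for cell in free if cell[1] == j]
                if len(zs) == 1:
                    pick = zs[0]
                    break
        if pick is None:
            break                        # ambiguous case: Steps 3-4 would be needed
        assigned.append(pick)
        free = {(r, s) for (r, s) in free if r != pick[0] and s != pick[1]}
    return assigned

c = [[ 8, 26, 17, 11],
     [13, 28,  4, 26],
     [38, 19, 18, 15],
     [19, 26, 24, 10]]                   # Example 1 below
print(sorted(zero_assignment(reduce_matrix(c))))   # [(0, 0), (1, 2), (2, 1), (3, 3)]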
Example 1: A department head has four subordinates, and four tasks have to be
performed. Subordinates differ in efficiency and tasks differ in their intrinsic difficulty.
Time each man would take to perform each task is given in the effectiveness matrix below.
How should the tasks be allocated, one to a man, so as to minimize the total man hours ?
[Kanpur 2007]
Subordinates
I II III IV
A 8 26 17 11
B 13 28 4 26
Tasks C 38 19 18 15
D 19 26 24 10
Solution: We shall solve this problem step by step to understand the method described
above.
Step 1 : Subtracting the minimum element of each row from every element of the
corresponding row, the matrix reduces to
I II III IV
A 0 18 9 3
B 9 24 0 22
C 23 4 3 0
D 9 16 14 0
Now subtracting the minimum element of each column from every element of the
corresponding column, the matrix reduces to
I II III IV
A 0 14 9 3
B 9 20 0 22
C 23 0 3 0
D 9 12 14 0
Step 2 : Starting with row 1 of this reduced matrix, we examine the rows one by one until a row containing exactly one zero element is found. We mark ‘□’ at this zero, i.e., make an assignment, and mark a cross ‘×’ over all zeros lying in the column containing the assigned zero. Continuing in this manner until all the rows have been examined, we get the following matrix.
        I    II   III   IV          (□0 denotes an assigned zero, ×0 a crossed-out zero)
  A    □0   14    9     3
  B     9   20   □0    22
  C    23    0    3    ×0
  D     9   12   14    □0
Now starting with column 1, we examine the columns until a column containing exactly one unmarked zero (column 2) is found. We mark ‘□’ at this zero, i.e., make an assignment, and cross any other zeros lying in the row containing this marked zero. Continuing in this manner until all the columns have been examined, we get the following matrix.
        I    II   III   IV
  A    □0   14    9     3
  B     9   20   □0    22
  C    23   □0    3    ×0
  D     9   12   14    □0
At this stage we see that all the zeros have been either assigned or crossed out and that every row and every column has an assignment. Hence an optimal solution has been obtained.
The optimal solution is A → I, B → III, C → II, D → IV.
And the minimum total man hours (from the original matrix)
= 8 + 4 + 19 + 10 = 41.
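As a cross-check only (assuming NumPy and SciPy are available), the same minimum can be obtained with SciPy's built-in assignment solver:

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[ 8, 26, 17, 11],
                 [13, 28,  4, 26],
                 [38, 19, 18, 15],
                 [19, 26, 24, 10]])
rows, cols = linear_sum_assignment(cost)               # Hungarian-style minimization
print(list(zip(rows, cols)), cost[rows, cols].sum())   # total 41, as found above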
Example 2 : Assign the four jobs I, II, III, IV to the four operators A, B, C, D, one job to each operator, so as to minimize the total cost given in the following matrix :
Jobs
I II III IV
A 2 3 4 5
B 4 5 6 7
Operators C 7 8 9 8
D 3 5 8 4
[Meerut 2005; Kanpur 2009]
Solution: Step 1 : Subtracting the minimum element of each row from every element of
the corresponding row, the matrix reduces to
0 1 2 3
0 1 2 3
0 1 2 1
0 2 5 1
Now subtracting the minimum element of each column from every element of the
corresponding column, the matrix reduces to
0 0 0 2
0 0 0 2
0 0 0 0
0 1 3 0
Step 2 : Now test whether it is possible to make a complete assignment using only zeros. Here none of the rows or columns contains exactly one zero, so we start with row 1 and look for a row containing exactly two zeros. Examining the rows successively, we find that row 4 has two zeros. We arbitrarily make an assignment (indicated by ‘□’) at one of these two zeros, say the zero in column 1, and cross the other zeros in row 4 and in column 1 (table 1). Next we examine the columns and find that column 4 contains only one unmarked zero, in row 3. We make an assignment (indicated by ‘□’) at this zero and cross all other zeros of this row (table 2). We again check the rows and columns for a single unmarked zero; there is no such row or column, so we look for a row with two unmarked zeros and find row 1. We mark ‘□’ at any one of these zeros, say the zero of column 2, and cross the other zeros of row 1 (not already crossed) and of column 2. Now the second row contains only one unmarked zero, in the third column, where we make an assignment (indicated by ‘□’) (table 3).
At this stage all the zeros have been either assigned or crossed out. We observe that every row and every column has one assignment, so we have the complete ‘zero assignment’. Tables 1, 2 and 3 show the necessary steps for reaching the optimal assignment.
            Table 1                       Table 2                       Table 3
     ×0    0    0    2             ×0    0    0    2             ×0   □0   ×0    2
     ×0    0    0    2             ×0    0    0    2             ×0   ×0   □0    2
     ×0    0    0    0             ×0   ×0   ×0   □0             ×0   ×0   ×0   □0
     □0    1    3   ×0             □0    1    3   ×0             □0    1    3   ×0
Thus the complete ‘zero assignment’ is A → II, B → III, C → IV, D → I.
Minimum cost = 3 + 6 + 8 + 3 = 20
Note : In this example other optimal assignments are also possible. Students should try to find them; each will have the same cost 20.
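One way to see this (purely as an illustration, not part of the text) is to enumerate every permutation of the 4 × 4 matrix and list those attaining the minimum cost 20:

from itertools import permutations

cost = [[2, 3, 4, 5],      # rows A, B, C, D ; columns I, II, III, IV
        [4, 5, 6, 7],
        [7, 8, 9, 8],
        [3, 5, 8, 4]]
totals = {p: sum(cost[i][p[i]] for i in range(4)) for p in permutations(range(4))}
best = min(totals.values())
optimal = [p for p, t in totals.items() if t == best]
print(best, len(optimal))        # 20 and 8 optimal assignments for this matrix
for p in optimal:
    print(p)                     # e.g. (1, 2, 3, 0) is the assignment found above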
Example 3: A car hire company has one car at each of five depots a, b, c, d and e. A
customer requires a car in each town namely A, B, C, D and E. Distance (in kms) between
depots (origins) and towns (destinations) are given in the following distance matrix :
a b c d e
D 50 50 80 80 110
E 55 35 70 80 105
Solution: Step 1 : Subtracting the minimum element of each row from every element of
the corresponding row, the matrix reduces to
30 0 45 60 70
15 0 10 40 55
30 0 45 60 75
0 0 30 30 60
20 0 35 45 70
Now subtracting the minimum element of each column from every element of the
corresponding column, the matrix reduces to
30 0 35 30 15
15 0 0 10 0
30 0 35 30 20
0 0 20 0 5
20 0 25 15 15
Step 2 : Now we give the zero assignments in our usual manner. Row 1 has a single zero, in column 2. Make an assignment by marking ‘□’ at it and delete any other zeros in column 2 by marking ‘×’. After the set of rows has been examined completely, an identical procedure is applied successively to the columns.
   30   □0   35   30   15
   15   ×0   □0   10   ×0
   30   ×0   35   30   20
   □0   ×0   20   ×0    5
   20   ×0   25   15   15
Now column 1 has a single zero, in row 4. Make an assignment by marking ‘□’ at this zero and cross the other zero of row 4 which is not yet crossed. Column 3 has a single zero, in row 2; make an assignment at this zero by putting ‘□’ and cross the other zero of row 2 which is not yet crossed. At this stage all the zeros have been either assigned or crossed out. It is observed that row 3, row 5, column 4 and column 5 have no assignment. Hence the required solution cannot be obtained at this stage, and we proceed to the next step.
Step 3 : In this step we draw minimum number of lines to cover all zeros at least once.
For this we proceed as follows :
(i) Mark (√) row 3 and row 5 as they have no assignments.
(ii) Mark (√) column 2 as having zeros in the marked rows 3 and 5.
(iii) Mark (√) row 1 as it contains assignment in the marked column 2.
No further rows or columns will be required to mark during this procedure.
(iv) Now draw line L1 through marked column 2. Then draw lines L2 and L3 through
unmarked rows 2 and 4.
             L1
       30   □0   35   30   15     √(4)
  L2   15   ×0   □0   10   ×0
       30   ×0   35   30   20     √(1)
  L3   □0   ×0   20   ×0    5
       20   ×0   25   15   15     √(2)
             √
            (3)
(The bracketed numbers show the order in which the rows and columns are marked.)
Step 4 : In this step we select the smallest element among all uncovered elements of the
matrix of step 3.
Here this element is 15. Subtracting this element 15 from all the elements that do not
have a line through them and adding to every element that lies at the intersection of two
lines and leaving the remaining elements unchanged we get the following matrix.
15 0 20 15 0
15 15 0 10 0
15 0 20 15 5
0 15 20 0 5
5 0 10 0 0
Step 5 : Now, performing Step 2 again, we make the zero assignments. It is observed that every row and every column has an assignment, as shown in the table.
   15   ×0   20   15   □0
   15   15   □0   10   ×0
   15   □0   20   15    5
   □0   15   20   ×0    5
    5   ×0   10   □0   ×0
Thus the optimal assignment is
A → e, B → c, C → b, D → a, E → d.
Example 4 : Solve the following minimal assignment problem :
I II III IV V VI
A 9 22 58 11 19 27
B 43 78 72 50 63 48
C 41 28 91 37 45 33
D 74 42 27 49 39 32
E 36 11 57 22 25 18
F 3 56 53 31 17 28
[Kanpur 2009]
Solution: Step 1: Subtracting the minimum element of each row from every element of
the corresponding row and then subtracting the minimum element of each column from
every element of the corresponding column, the matrix reduces to
0 13 49 0 0 13
0 35 29 5 10 0
13 0 63 7 7 0
47 15 0 20 2 0
25 0 46 9 4 2
0 53 50 26 4 20
Step 2 : Make the ‘zero assignments’ in the usual manner, as illustrated in the following table.
   ×0   13   49   □0   ×0   13
   ×0   35   29    5   10   □0
   13   ×0   63    7    7   ×0
   47   15   □0   20    2   ×0
   25   □0   46    9    4    2
   □0   53   50   26    4   20
Since row 3 and column 5 have no assignments so we proceed to the next step.
Step 3 : Draw minimum number of lines to cover all zeros at least once. For this we
proceed as follows :
(i) Mark (√) row 3 as having no assignment.
(ii) Mark (√) columns 2 and 6 as having zeros in the marked row 3.
(iii) Mark (√) rows 5 and 2 as having assignments in the marked columns 2 and 6.
(iv) Mark (√) column 1 (not already marked) as having a zero in the marked row 2.
(v) Then mark (√) row 6 as having an assignment in the marked column 1.
Now draw lines L1, L2, L3 through the marked columns 1, 2, 6 respectively, and L4, L5 through the unmarked rows 1, 4 respectively. In this way a minimum set of five lines (5 < 6) covering all the zeros is obtained.
        L1   L2                       L3
  L4   ×0   13   49   □0   ×0   13
       ×0   35   29    5   10   □0    √(5)
       13   ×0   63    7    7   ×0    √(1)
  L5   47   15   □0   20    2   ×0
       25   □0   46    9    4    2    √(4)
       □0   53   50   26    4   20    √(7)
        √    √                   √
       (6)  (2)                 (3)
Step 4 : Now the smallest element among all uncovered elements is 4. Subtracting this
element 4 from all the uncovered elements, adding to every element that lies at the
intersection of two lines and leaving the remaining elements unchanged, the matrix of step 3 reduces to the new form shown in the following table.
4 17 49 0 0 17
0 35 25 1 6 0
13 0 59 3 3 0
51 19 0 20 2 4
25 0 42 5 0 2
0 53 46 22 0 20
Step 5 : Repeating Step 2, make the ‘zero assignments’ as shown in the following table. Thus exactly one marked ‘□’ zero is obtained in each row and each column of the matrix.
    4   17   49   □0   ×0   17
   □0   35   25    1    6   ×0
   13   ×0   59    3    3   □0
   51   19   □0   20    2    4
   25   □0   42    5   ×0    2
   ×0   53   46   22   □0   20
Thus the optimal assignment is A → IV, B → I, C → VI, D → III, E → II, F → V, and the minimum cost (from the original matrix) = 11 + 43 + 33 + 27 + 11 + 17 = 142.
Note : Another optimal solution of this assignment problem is shown in the following
table i.e., A → IV, B → VI, C → II, D → III, E → V, F → I.
    4   17   49   □0   ×0   17
   ×0   35   25    1    6   □0
   13   □0   59    3    3   ×0
   51   19   □0   20    2    4
   25   ×0   42    5   □0    2
   □0   53   46   22   ×0   20
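A hedged cross-check with SciPy (assuming it is available) confirms that both assignments shown above are optimal:

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[ 9, 22, 58, 11, 19, 27],
                 [43, 78, 72, 50, 63, 48],
                 [41, 28, 91, 37, 45, 33],
                 [74, 42, 27, 49, 39, 32],
                 [36, 11, 57, 22, 25, 18],
                 [ 3, 56, 53, 31, 17, 28]])
rows, cols = linear_sum_assignment(cost)
print(cost[rows, cols].sum())     # 142, the cost of both assignments given above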
Example 5: An airline that operates seven days a week, has the time table shown below.
Crews must have a minimum layover of 5 hours between flights. Obtain the pair of flights
that minimizes layover time away from home. For any given pair the crew will be based at
the city that results in the smallest layover.
Delhi-Jaipur Jaipur-Delhi
For each pair, mention the town where the crew should be based.
Solution: First we construct the tables for layover times between the flights. Suppose we
pair the flight no. 1 with flight no. 103 when crew is based at Delhi. Then the time of stay
at Jaipur will be the layover time away from home. Now a plane of flight no. 1 which
reaches Jaipur at 8.00 A.M., cannot fly at 12.00 Noon on the same day as minimum
layover time is 5 hours. So it will depart Jaipur on the next day which will result in a
layover time of 28 hours. Similarly, other layover times can be calculated.
To avoid the fractions we measure the layover times in terms of quarter hour (0.25 hr. or
15 minutes) as one unit of time. Thus multiplying the above tables by 4, the modified
tables are as follows :
When crew based at Delhi When crew based at Jaipur
1 96 98 112 38 1 87 85 71 49
2 92 94 108 34 2 91 89 75 53
4 50 52 66 88 4 37 35 21 95
As a next step we combine the above two tables, choosing that base which gives a lesser
layover time for each pairing. The layover times marked with ‘*’ denote that crew is based
at Jaipur, otherwise the crew is based at Delhi.
Minimum layover time table
3 70 72 86 75*
1 4* 0* 0* 0
2 12* 8* 8* 0
3 0 0 28 50*
4 4* 0* 0* 100
   1    4*   □0*  ×0*   ×0           1    4*   ×0*  □0*   ×0
   2   12*    8*   8*   □0           2   12*    8*   8*   □0
   3   □0    ×0   28    50*          3   □0    ×0   28    50*
   4    4*   ×0*  □0*   100          4    4*   □0*  ×0*   100
In both the cases minimum layover time is 210 quarter hours i.e., 52 hours 30 minutes.
A maximization (profit) assignment problem is first converted into an equivalent minimization problem in either of the following ways :
1. Subtract every element of the profit matrix from the greatest element of the matrix, and minimize the resulting matrix.
2. Place a minus sign before each element of the profit matrix to get the modified matrix. In this case, if [cij ] is the given profit matrix, the modified matrix is [c′ij ] = [−cij ], and the assignment xij = Xij which minimizes Z′ = Σ Σ c′ij xij maximizes the total profit Z = Σ Σ cij xij.
Example 1: A company has 5 jobs to be done. The following matrix shows the return in rupees on assigning the i-th machine (i = 1, 2, 3, 4, 5) to the j-th job ( j = A, B, C, D, E).
Assign the five jobs to the five machines so as to maximize the total expected profit.
Jobs
A B C D E
1 5 11 10 12 4
2 2 4 6 3 5
Machines 3 3 12 5 14 6
4 6 14 4 11 7
5 7 9 8 12 5
[Meerut 2007]
Solution: First we shall convert the problem from maximization to minimization. The
greatest element of the given matrix is 14. Subtracting all the elements of the given
matrix from 14, the modified matrix is as follows.
9 3 4 2 10
12 10 8 11 9
11 2 9 0 8
8 0 10 3 7
7 5 6 2 9
Step 1 : Subtracting the minimum element of each row from every element of the corresponding row and then the minimum element of each column from every element of the corresponding column, the above matrix reduces to
3 1 2 0 7
0 2 0 3 0
7 2 9 0 7
4 0 10 3 6
1 3 4 0 6
Step 2 : Giving zero assignments in the usual manner we observe that rows 3, 5 and
columns 3, 5 have no assignments. So we draw minimum number of lines to cover all the
zeros at least once. The number of such lines is 3.
                       L1
        3    1    2   □0    7     √(4)
  L2   □0    2   ×0    3   ×0
        7    2    9   ×0    7     √(1)
  L3    4   □0   10    3    6
        1    3    4   ×0    6     √(2)
                        √
                       (3)
Step 3 : In this table the smallest of the uncovered elements is 1. Subtracting this
element from all uncovered elements, adding to each element that is at the intersection
of two lines and leaving all remaining elements unchanged, the reduced matrix is as
follows.
2 0 1 0 6
0 2 0 4 0
6 1 8 0 6
4 0 10 4 6
0 2 3 0 5
Step 4 : Giving zero assignments in the usual manner, we observe that row 1 and column 5 have no zero assignments. So we again draw the minimum number of lines to cover all zeros at least once. The number of such lines is 4.
             L1        L2
        2   ×0    1   ×0    6     √(1)
  L3   ×0    2   □0    4   ×0
        6    1    8   □0    6     √(5)
        4   □0   10    4    6     √(4)
  L4   □0    2    3   ×0    5
             √         √
            (2)       (3)
Step 5 : In the last reduced table the smallest uncovered element is 1. Subtracting this
element 1 from all uncovered elements, adding to each element that lies at the
intersection of two lines and leaving remaining elements unchanged, the reduced matrix
is
1 0 0 0 5
0 3 0 5 0
5 1 7 0 5
3 0 9 4 5
0 3 3 1 5
Step 6 : Giving zero assignments in the usual manner we observe that each row and each
column have an assignment.
        A    B    C    D    E
  1     1   ×0   □0   ×0    5
  2    ×0    3   ×0    5   □0
  3     5    1    7   □0    5
  4     3   □0    9    4    5
  5    □0    3    3    1    5
Thus the optimal assignment is 1 → C, 2 → E, 3 → D, 4 → B, 5 → A, and the maximum expected profit (from the original matrix) = 10 + 5 + 14 + 14 + 7 = ₹ 50.
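Either of the two conversions described earlier leads to the same maximum. The sketch below is an illustration only (assuming NumPy and SciPy ≥ 1.4 are available) and applies both conversions to the profit matrix of this example:

import numpy as np
from scipy.optimize import linear_sum_assignment

profit = np.array([[ 5, 11, 10, 12,  4],
                   [ 2,  4,  6,  3,  5],
                   [ 3, 12,  5, 14,  6],
                   [ 6, 14,  4, 11,  7],
                   [ 7,  9,  8, 12,  5]])

# Conversion 1: subtract every element from the greatest element and minimize.
r1, c1 = linear_sum_assignment(profit.max() - profit)

# Conversion 2: maximize directly (equivalent to minimizing the negated matrix).
r2, c2 = linear_sum_assignment(profit, maximize=True)

print(profit[r1, c1].sum(), profit[r2, c2].sum())   # both print the maximum profit 50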
Example 2: A company has four territories open and four salesmen available for
assignment. The territories are not equally rich in their sales potential ; it is estimated
that a typical salesman operating in each territory would bring in the following annual
sales :
Territory : I II III IV
Annual sales (`) : 60,000 50,000 40,000 30,000
Four salesman are also considered to differ in their ability : It is estimated that, working
under the same conditions, their yearly sales would be proportionately as follows :
Salesman : A B C D
Proportion : 7 5 5 4
If the criterion is maximum expected total sales, then intuitive answer is to assign the best
salesman to the richest territory, the next best salesman to the second richest, and so on.
Verify this answer by the assignment technique.
Solution : Considering ₹ 10,000 as one unit, the yearly sales in the four territories are 6, 5, 4 and 3 units respectively, and the sum of the salesmen's proportions is 7 + 5 + 5 + 4 = 21. Hence the expected annual sales by each salesman in each territory are :
Salesman A : (7/21) × 6, (7/21) × 5, (7/21) × 4, (7/21) × 3, i.e., 42/21, 35/21, 28/21, 21/21
Salesman B : (5/21) × 6, (5/21) × 5, (5/21) × 4, (5/21) × 3, i.e., 30/21, 25/21, 20/21, 15/21
Salesman C : (5/21) × 6, (5/21) × 5, (5/21) × 4, (5/21) × 3, i.e., 30/21, 25/21, 20/21, 15/21
Salesman D : (4/21) × 6, (4/21) × 5, (4/21) × 4, (4/21) × 3, i.e., 24/21, 20/21, 16/21, 12/21
Ignoring the common factor 1/21, the sales (profit) matrix is :
            I     II    III   IV
  7   A    42    35    28    21
  5   B    30    25    20    15
  5   C    30    25    20    15
  4   D    24    20    16    12
To apply the assignment technique we first convert this maximization problem into a minimization problem by subtracting every element from the greatest element 42.
Step 1 : Subtracting the minimum element of each row from every element of the
corresponding row and then subtracting the minimum element of each column from
every element of the corresponding column, the reduced matrix is
0 3 6 9
0 1 2 3
0 1 2 3
0 0 0 0
Step 2 : Giving zero assignments in the usual manner, we observe that the row 2, 3 and
column 3, 4 have no assignments. So we draw minimum number of lines to cover all zeros
at least once. Number of such lines is 2.
       L1
      □0    3    6    9     √(4)
      ×0    1    2    3     √(1)
      ×0    1    2    3     √(2)
  L2  ×0   □0   ×0   ×0
       √
      (3)
Step 3 : In the last table the smallest uncovered element is 1. Subtracting this element 1
from all uncovered elements, adding to each element that lies at the intersection of two
lines and leaving remaining elements unchanged, the matrix reduces to
0 2 5 8
0 0 1 2
0 0 1 2
1 0 0 0
Step 4 : Giving zero assignments in the usual manner, we observe that row 3 and column
4 have no assignments.
       L1   L2
      □0    2    5    8     √(4)
      ×0   □0    1    2     √(5)
      ×0   ×0    1    2     √(1)
  L3   1   ×0   □0   ×0
       √    √
      (2)  (3)
So we again draw minimum number of lines to cover all zeros at least once. The number
of such lines is 3.
Step 5 : In this table the smallest uncovered element is 1. Subtracting this element 1
from all uncovered elements, adding to each element that lies at the intersection of two
lines and leaving remaining elements unchanged, we get the following reduced matrix.
0 2 4 7
0 0 0 1
0 0 0 1
2 1 0 0
Step 6 : Giving ‘zero assignments’, we get the two optimal assignments shown in the following tables :
        I    II   III  IV               I    II   III  IV
  A    □0    2    4    7          A    □0    2    4    7
  B    ×0   □0   ×0    1          B    ×0   ×0   □0    1
  C    ×0   ×0   □0    1          C    ×0   □0   ×0    1
  D     2    1   ×0   □0          D     2    1   ×0   □0
From both the solutions, it is obvious that the best salesman A is assigned to the richest
territory I, the worst salesman D to the poorest territory IV. Salesman B and C being
equally good, so they may be assigned to either II to III. This verifies the given intuitive
answer.
In case the number of tasks (jobs) is not equal to the number of facilities (persons), the assignment problem is called an unbalanced assignment problem. Thus the cost matrix of an unbalanced assignment problem is not a square matrix. To solve such a problem we add dummy (fictitious) rows or columns, with zero costs, to the given matrix so as to make it a square matrix. The usual assignment algorithm can then be applied to the resulting balanced assignment problem.
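A minimal sketch of this balancing step (assuming NumPy and SciPy are available; the data are those of Example 1 below, where a dummy subordinate must be added):

import numpy as np
from scipy.optimize import linear_sum_assignment

time = np.array([[ 9, 26, 15],          # 4 tasks x 3 subordinates (Example 1 below)
                 [13, 27,  6],
                 [35, 20, 15],
                 [18, 30, 20]])

# Add a dummy subordinate (a column of zero costs) to make the matrix square.
square = np.hstack([time, np.zeros((time.shape[0], 1), dtype=int)])
rows, cols = linear_sum_assignment(square)
print(list(zip(rows, cols)), square[rows, cols].sum())   # total 35 man hours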
Example 1: A department head has four tasks to be performed and three subordinates.
The subordinates differ in efficiency. The estimates of the time, each subordinate would
take to perform, are given below in the matrix. How should he allocate the tasks, one to
each man, so as to minimize the total man hours ?
Subordinates
1 2 3
I 9 26 15
Tasks II 13 27 6
III 35 20 15
IV 18 30 20
Solution : Since the number of tasks (4) exceeds the number of subordinates (3), we first add a dummy subordinate 4 with zero time for every task, so that the matrix becomes a 4 × 4 square matrix.
Step 1 : Subtracting the minimum element of each row from every element of the corresponding row and then subtracting the minimum element of each column from every element of the corresponding column, the matrix reduces to
0 6 9 0
4 7 0 0
26 0 9 0
9 10 14 0
Step 2 : Giving zero assignments in the usual manner, we observe that, each row and
each column have zero assignments.
        1    2    3    4
  I    □0    6    9   ×0
  II    4    7   □0   ×0
  III  26   □0    9   ×0
  IV    9   10   14   □0
Thus the optimal assignment is I → 1, II → 3, III → 2, while task IV goes to the dummy subordinate 4, i.e., it is left unassigned.
From the original matrix, the total time (man hours) = 9 + 6 + 20 = 35 hours.
Example 2: A company is faced with the problem of assigning six different machines to
five different jobs. The costs are estimated as follows (in hundreds of rupees) :
Jobs
1 2 3 4 5
1 2.5 5 1 6 1
2 2 5 1.5 7 3
Machines 3 3 6.5 2 8 3
4 3.5 7 2 9 4.5
5 4 7 3 9 6
6 6 9 5 10 6
Solve the problem assuming that the objective is to minimize the total cost. [Meerut 2006]
Solution : Since there are 6 machines but only 5 jobs, we add a dummy job 6 with zero cost for every machine; to avoid fractional values all the costs are also multiplied by 2. The resulting cost matrix is
1 2 3 4 5 6
1 5 10 2 12 2 0
2 4 10 3 14 6 0
3 6 13 4 16 6 0
4 7 14 4 18 9 0
5 8 14 6 18 12 0
6 12 18 10 20 12 0
Step 1 : Subtracting the smallest element in each row from every element of the
corresponding row and then subtracting the smallest element in each column from every
element of the corresponding column, we get the following table :
1 0 0 0 0 0
0 0 1 2 4 0
2 3 2 4 4 0
3 4 2 6 7 0
4 4 4 6 10 0
8 8 8 8 10 0
Step 2 : Giving zero assignments in the usual manner, we observe that rows 4, 5, 6 and columns 3, 4, 5 have no zero assignments.
                                   L1
  L2    1   □0   ×0   ×0   ×0     ×0
  L3   □0   ×0    1    2    4     ×0
        2    3    2    4    4     □0     √(5)
        3    4    2    6    7     ×0     √(1)
        4    4    4    6   10     ×0     √(2)
        8    8    8    8   10     ×0     √(3)
                                   √
                                  (4)
Step 3 : The smallest element among all the uncovered elements is 2. Subtracting this
element 2 from all uncovered elements, adding to each element that lies at the
intersection of two lines and leaving all other elements unchanged, we get the following
matrix.
1 0 0 0 0 2
0 0 1 2 4 2
0 1 0 2 2 0
1 2 0 4 5 0
2 2 2 4 8 0
6 6 6 6 8 0
Step 4 : Giving zero assignments in the usual manner, we observe that row 6 and column
5 have no zero assignments.
So we again draw minimum number of lines to cover all zeros at least once.
                                   L1
  L2    1   ×0   ×0   □0   ×0      2
  L3   ×0   □0    1    2    4      2
  L4   □0    1   ×0    2    2     ×0
  L5    1    2   □0    4    5     ×0
        2    2    2    4    8     □0     √(3)
        6    6    6    6    8     ×0     √(1)
                                   √
                                  (2)
Step 5 : The smallest of the uncovered elements is 2. Subtracting this element 2 from all
uncovered elements adding to each element that lies at the intersection of two lines and
leaving all remaining elements unchanged, we get the following table.
1 0 0 0 0 4
0 0 1 2 4 4
0 1 0 2 2 2
1 2 0 4 5 2
0 0 0 2 6 0
4 4 4 4 6 0
Step 6 : Giving zero assignments in the usual manner, we observe that row 2 and column
5 have no zero assignments.
So we again draw minimum number of lines to cover all zeros at least once.
        L1   L2   L3                  L4
  L5    1   ×0   ×0   □0   ×0        4
       ×0   ×0    1    2    4        4     √(1)
       □0    1   ×0    2    2        2     √(4)
        1    2   □0    4    5        2     √(6)
       ×0   □0   ×0    2    6       ×0     √(7)
        4    4    4    4    6       □0     √(9)
        √    √    √                  √
       (2)  (3)  (5)                (8)
Step 7 : The smallest element among the uncovered elements is 2. Subtracting this
element 2 from all the uncovered elements, adding to each element that lies at the
intersection of two lines and leaving remaining elements unchanged, we get the following
table.
3 2 2 0 0 6
0 0 1 0 2 4
0 1 0 0 0 2
1 2 0 2 3 2
0 0 0 0 4 0
4 4 4 2 4 0
Step 8 : Giving zero assignments in the usual manner, we get a number of optimal
assignments. Here two optimal assignments are given as follows :
        1    2    3    4    5    6              1    2    3    4    5    6
  1     3    2    2   □0   ×0    6        1     3    2    2   □0   ×0    6
  2    ×0   □0    1   ×0    2    4        2    □0   ×0    1   ×0    2    4
  3    ×0    1   ×0   ×0   □0    2        3    ×0    1   ×0   ×0   □0    2
  4     1    2   □0    2    3    2        4     1    2   □0    2    3    2
  5    □0   ×0   ×0   ×0    4   ×0        5    ×0   □0   ×0   ×0    4   ×0
  6     4    4    4    2    4   □0        6     4    4    4    2    4   □0
Machine → Job :
(i) 1 → 4, 2 → 2, 3 → 5, 4 → 3, 5 → 1
and (ii) 1 → 4, 2 → 1, 3 → 5, 4 → 3, 5 → 2.
In both the cases, from the original matrix, total minimum cost
= 20 i.e., ` 2,000.
Example 1: Four engineers are available to design four projects. Engineer 2 is not
competent to design the project B. Given the following time estimates needed to each
engineer to design a given project, find how the engineers should be assigned to the projects so as to minimize the total design time of the four projects.
Projects
A B C D
1 12 10 10 8
Engineers 2 14 Not suitable 15 11
3 6 10 16 4
4 8 10 9 7
Solution: To avoid the assignment 2 → B, we take its time to be very large (say ∞). Then
the cost matrix of the resulting assignment problem is shown in the following table :
12 10 10 8
14 ∞ 15 11
6 10 16 4
8 10 9 7
Step 1 : Subtracting the minimum element of each row from every element of the
corresponding row and then subtracting minimum element of each column from every
element of the corresponding column, the reduced matrix is
3 0 0 0
2 ∞ 2 0
1 4 10 0
0 1 0 0
Step 2 : Giving zero assignments in the usual manner, we observe that row 3 and column 3 have no zero assignments. So we draw the minimum number of lines to cover all zeros at least once. The number of such lines is 3.
                      L1
  L2    3   □0   ×0   ×0
        2    ∞    2   □0    √(3)
        1    4   10   ×0    √(1)
  L3   □0    1   ×0   ×0
                       √
                      (2)
Step 3 : In the above table, the smallest of the uncovered elements is 1. Subtracting this
element 1 from all uncovered elements, adding to each element that lies at the
intersection of two lines and leaving remaining elements unchanged we get the following
matrix.
3 0 0 1
1 ∞ 1 0
0 3 9 0
0 1 0 1
Step 4 : Giving zero assignments in the usual manner, we observe that each row and each
column have a zero assignment.
        A    B    C    D
  1     3   □0   ×0    1
  2     1    ∞    1   □0
  3    □0    3    9   ×0
  4    ×0    1   □0    1
Engineer → Project : 1 → B, 2 → D, 3 → A, 4 → C.
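The same prohibition can be handled numerically by replacing ∞ with a very large finite penalty. A hedged sketch (assuming NumPy and SciPy are available), using the data of this example:

import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 10**6                              # a large penalty standing in for infinity
time = np.array([[12,  10, 10,  8],
                 [14, BIG, 15, 11],      # engineer 2 cannot design project B
                 [ 6,  10, 16,  4],
                 [ 8,  10,  9,  7]])
rows, cols = linear_sum_assignment(time)
print(list(zip(rows, cols)), time[rows, cols].sum())
# -> 1->B, 2->D, 3->A, 4->C with total design time 36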
Assign the programmers to the programmes in such a way that the total computer
time is least. [Agra 2003]
9. Find the optimal solution for the assignment problem with the following cost
matrix :
I II III IV V
A 11 17 8 16 20
B 9 7 12 6 15
C 13 16 15 12 16
D 21 24 17 28 26
E 14 10 12 11 15
[Meerut 2009, 11]
10. Find the optimal assignment for the problems with given cost matrix
A 5 3 1 8 A 10 12 19 11
B 7 9 2 6 B 5 10 7 8
C 6 4 5 7 C 12 14 13 11
D 5 7 7 6 D 8 15 11 9
1 2 3 4 5 1 2 3 4 5
A 8 4 2 6 1 I 12 8 7 15 4
B 0 9 5 5 4 II 7 9 17 14 10
D 4 3 1 0 3 IV 7 6 14 6 10
E 9 5 8 9 5 V 9 6 12 10 6
[Meerut 2004]
I II III IV V I II III IV V
A 5 11 10 12 4 A 1 3 2 3 6
B 2 4 6 3 5 B 2 4 3 1 5
Machine C 3 12 5 14 6 Job C 5 6 3 4 6
D 6 14 4 11 7 D 3 1 4 2 2
E 7 9 8 12 5 E 1 5 6 5 4
[Meerut 2008 (BP), 10, 11 (BP), 12 (BP)] [Meerut 2009 (BP), 12]
12. One car is available at each of the stations 1, 2, 3, 4, 5, 6 and one car is required at
each of the stations 7, 8, 9, 10, 11, 12. The distances between the various stations
are given in the matrix below. How should the cars be despatched so as to minimize
the total mileage covered ?
7 8 9 10 11 12
1 41 72 39 52 25 51
2 22 29 49 65 81 50
3 27 39 60 51 32 32
4 45 50 48 52 37 43
5 29 40 39 26 30 33
6 82 40 40 60 51 30
13. A national truck-rental service has a surplus of one truck in each of the cities 1, 2, 3,
4, 5, 6 and a deficit of one truck in each of the cities 7, 8, 9, 10, 11, 12. The
distances (in kilometre) between the cities with a surplus and the cities with a
deficit are displayed below :
To
7 8 9 10 11 12
1 31 62 29 42 15 41
2 12 19 39 55 71 40
From 3 17 29 50 41 22 22
4 35 40 38 42 27 33
5 19 30 29 16 20 23
6 72 30 30 50 41 20
How should the trucks be dispersed so as to minimize the total distance travelled ?
14. Five wagons are available at five stations 1, 2, 3, 4 and 5. These are required at five
stations I, II, III, IV and V. The mileages between various stations are given by the
following table :
I II III IV V
1 10 5 9 18 11
2 13 9 6 12 14
3 3 2 4 4 5
4 18 9 12 17 15
5 11 6 14 19 10
How should the wagons be transported so as to minimize the total mileage covered?
15. A marketing manager has 5 salesmen and 5 sales-districts. Considering the
capabilities of the salesmen and the nature of districts, the marketing manager
estimates that sales per month (in hundred rupees) for each salesman in each
district would be as follows :
Districts
A B C D E
1 32 38 40 28 40
2 40 24 28 21 36
Salesmen 3 41 27 33 30 37
4 22 38 41 36 36
5 29 33 40 35 39
Find the assignment of salesmen to districts that will result in maximum sales.
16. Find the minimum cost solution for the 5 × 5 assignment problem whose cost
coefficients are as given below :
I II III IV V
1 −2 −4 −8 −6 −1
2 0 −9 −5 −5 −4
3 −3 −8 0 −2 −6
4 −4 −3 −1 0 −3
5 −9 −5 −9 −9 −5
17. Alpha corporation has four plants each of which can manufacture any of the four
products. Production costs differ from plant to plant as do sales revenue. From the
following data, obtain which product each plant should produce to maximize
profit?
Plant ↓ 1 2 3 4 Plant ↓ 1 2 3 4
A 50 68 49 62 A 49 60 45 61
B 60 70 51 74 B 55 63 45 69
C 55 67 53 70 C 52 62 49 68
D 58 65 54 69 D 55 64 48 66
Delhi-Calcutta Calcutta-Delhi
Flight No. Departure Arrival Flight No. Departure Arrival
1 7.00 A.M. 9.00 A.M. 101 9.00 A.M. 11.00 A.M.
2 9.00 A.M. 11.00 A.M. 102 10.00 A.M. 12.00 Noon
3 1.30 P.M. 3.30 P.M. 103 3.30 P.M. 5.30 P.M.
4 7.30 P.M. 9.30 P.M. 104 8.00 P.M. 10.00 P.M.
For each pair also mention the town where the crew should be based.
19. Solve the following minimal assignment problems.
[Kanpur 2009]
21. Four jobs are to be done on four different machines. The cost (in rupees) of
producing i-th job on the j-th machine is given below :
Machines ↓
Jobs ↓
M1 M2 M3 M4
J1 15 13 14 17
J2 11 12 15 13
J3 13 12 10 11
J4 15 17 14 16
Assign the jobs to different machines so as to minimize the total cost. What is the
minimum total cost ? [Gorakhpur 2009]
22. Four jobs are to be done on four different machines. The cost (in rupees) of
producing i-th job on the j-th machine is given below :
Machines ↓
Jobs ↓
M1 M2 M3 M4
J1 15 11 13 15
J2 17 12 12 13
J3 14 15 10 14
J4 16 13 11 17
Assign the jobs to different machines so as to minimize the total cost. What is the
minimum total cost ? [Gorakhpur 2008, 10]
23. Four salesmen are to be assigned to four districts. Estimates of the sales revenue in
hundred of ` for sale are as below :
Salesmen / Districts A B C D
1 320 350 400 280
2 400 250 300 220
3 420 270 340 300
4 250 390 410 350
25. A company has 4 machines to do 3 jobs. Each job can be assigned to one and only
one machine. The cost of each job on each machine is given in the following table :
Machine
W X Y Z
A 18 24 28 32
Job B 8 13 17 19
C 10 15 19 22
What are the job assignments which will minimize the cost ? [Meerut 2004]
26. A company is faced with the problem of assigning 4 machines to 6 different jobs (one machine to one job only). The profits are estimated as follows :
Machine
A B C D
1 3 6 2 6
2 7 1 4 4
Job 3 3 8 5 8
4 6 4 3 7
5 5 2 4 3
6 5 7 6 4
Find the assignment of machines to jobs that will result in maximum profit. Which jobs should be declined ?
28. Five operators have to be assigned to five machines. The assignment costs are given
in the table below :
Machine
I II III IV V
A 5 5 — 2 6
B 7 4 2 3 4
C 9 3 5 — 3
D 7 2 6 7 2
E 6 5 7 9 1
Operator A cannot operate machine III and operator C cannot operate machine IV.
Find the optimal assignment schedule.
29. There are 3 persons P1, P2 and P3 and 5 jobs J1, J2 ,..., J5 . Each person can do only
one job and a job is to be done by one person only. Using Hungarian method, find
which 2 jobs should be left undone in the following cost minimizing assignment
problem.
J1 J2 J3 J4 J5
P1 7 8 6 5 9
P2 9 6 7 6 10
P3 8 7 9 5 6
30. Use the Hungarian method to find which of the two jobs should be left undone
when each of the 4 persons will do only one job in the following cost minimizing
assignment problem :
Job
J1 J2 J3 J4 J5 J6
P1 10 9 11 12 8 5
Person P2 12 10 9 11 9 4
P3 8 11 10 7 12 6
P4 10 7 8 10 10 5
8. 1 → C, 2 → B, 3 → A.
9. A → I, B → IV, C → V, D → III, E → II ; min. cost = 60.
10. (i) A → III, B → IV, C → II, D → I, min. cost = 16
(ii) A → 2, B → 3, C → 4, D → 1, min. cost =38
11. (i) A → 5, B → 1, C → 4, D → 3, E → 2 ; min. cost =9
(ii) I → 3, II → 1, III → 2, IV → 4, V → 5
or I → 3, II → 1, III → 4, IV → 2, V → 5.
(iii) A → V, B → IV, C → I, D → III, E → II
(iv) A → I, B → IV, C → III, D → II, E → V.
12. 1 → 11, 2 → 8, 3 → 7, 4 → 9, 5 → 10, 6 → 12.
13. 1 → 11, 2 → 8, 3 → 7, 4 → 9, 5 → 10, 6 → 12.
14. 1 → I, 2 → III, 3 → IV, 4 → II, 5 → V ; 39 miles.
15. 1 → B, 2 → A, 3 → E, 4 → C, 5 → D
or 1 → B, 2 → A, 3 → E, 4 → D, 5 → C, etc.
16. 1 → III, 2 → II, 3 → V, 4 → I, 5 → IV.
17. A → 2, B → 4, C → 1, D → 3.
18. (1 → 103), (2 → 104), (3 → 101) ;(4 → 102)
Delhi Delhi Delhi Calcutta
19. (i) I → B, II → D, III → C, IV → A
or I → D, II → B, III → C, IV → A
(ii) I → A, II → C, III → B, IV → D
20. (i) A → III, B → I, C → II, D → IV (ii) I → 4, II → 3, III → 2, IV → 1
(iii) A → I, B → II, C → III, D → IV or A → II, B → III, C → IV, D → I
or A → III, B → II, C → IV, D → I etc.
(iv) I → 2, II → 4, III → 3, IV → 1 or I → 4, II → 2, III → 3, IV → 1.
21. J1 → M2 , J2 → M1, J3 → M4 , J4 → M3 . min cost ` 49.
22. J1 → M2 , J2 → M4 , J3 → M1, J4 → M3 . min cost ` 49.
23. 1 → C, 2 → A, 3 → D, 4 → B.
24. A → X, B → Y , C → Z
25. A → W , B → X , C → Y or A → W , B → Y , C → X . No job to machine Z.
26. 2 → A, 3 → B, 4 → D, 6 → C ; max. profit = 28.
27. 1 → D, 2 → B, 3 → C, 4 → E ; Job A should be declined.
28. A → IV, B → III, C → II, D → I, E → V
or A → IV, B → III, C → V, D → II, E → I
29. P1 → J4 , P2 → J2 , P3 → J5 ; Jobs J1 and J3 left undone.
30. P1 → J5 , P2 → J6 , P3 → J4 , P4 → J2 ; Jobs J1 and J3 left undone.
Further we note that for two cities there is only one possible route, i.e., there is no choice.
In case of three cities, say A, B and C, one of them (say A) is the home base, there are two
possible routes ; A → B → C and A → C → B. For four cities there are 3 ! = 6 possible
routes. In general, there are (n −1) ! possible routes if there are n cities. Thus, practically it
is impossible to find the best route by trying each one. That is why the travelling salesman problem is regarded as a puzzle by mathematicians. The best procedure is to solve the problem as if it were an assignment problem. We formulate the problem of the
travelling salesman in the form of an assignment problem with the additional restriction
on his choice of route.
Let xij = 1 if the salesman goes directly from city Ai to city Aj, and xij = 0 otherwise. Also, let cij be the distance (or time, or cost) from city Ai to city Aj. Then our problem is to minimize Z = Σ_{i} Σ_{j} cij xij, with the additional restriction that the xij's must be so chosen that no city is visited twice before the tour of all the cities is completed. In particular, the salesman cannot go directly from a city Ai to Ai itself; to rule this out in the minimization process we adopt the convention cii = ∞, so that xij can never be unity when i = j. Also we note that exactly one xij = 1 for each value of i, and exactly one for each value of j. The distance (or time or cost) matrix for this problem is given in the following table :
              To
         A1    A2   ...    An
    A1    ∞   c12   ...   c1n
    A2   c21    ∞   ...   c2n
From  .    .     .           .
    An   cn1   cn2  ...     ∞
We can omit the variable x ij from the problem specification. Our problem is to determine
a set of n elements of this matrix, one in each row and one in each column, so as to
minimize the sum of the elements determined above.
Note : A problem similar to travelling salesman arises when n items say Ai, i = 1, 2,..., n are
to be produced on a machine in continuation, given that cij (i, j = 1, 2,..., n) is the setup
cost of the machine when item Ai is followed by A j . Here two additional restrictions are
imposed. One restriction is that we do not follow Ai again by Ai. The other restriction is
that we do not produce an item again until all items are produced once.
Example 1: Given the matrix of setup costs, show how to sequence the production so as to
minimize the setup cost per cycle.
To
A1 A2 A3 A4 A5
A1 ∞ 2 5 7 1
A2 6 ∞ 3 8 2
From A3 8 7 ∞ 4 7
A4 12 4 6 ∞ 5
A5 1 3 2 8 ∞
[Meerut 2007]
Solution : Subtracting the minimum element of each row from every element of that row, then the minimum element of each column from every element of that column, and making the zero assignments in the usual manner, we obtain the following table.
              To
         A1   A2   A3   A4   A5
    A1    ∞    1    3    6   □0
    A2    4    ∞   □0    6   ×0
From A3   4    3    ∞   □0    3
    A4    8   □0    1    ∞    1
    A5   □0    2   ×0    7    ∞
A1 → A5 , A5 → A1, A2 → A3 , A3 → A4 , A4 → A2 .
This solution indicates to produce the products A1, then A5 and then again A1, without
producing the products A2 , A3 and A4 , which violates the additional restriction of
producing each product once and only once before returning to the first product. So this
is not a solution of the travelling salesman problem.
Now we try to find the next best solution which also satisfies the additional restriction.
The next minimum element (non-zero) in the matrix is 1. We try to bring 1 in the
solution. The cost 1 also occurs at three places.
Start by making unity-assignment in the cell (1, 2) instead of zero assignment in the cell
(1, 5). Then no other assignment can be made in the first row and the second column. So
the assignment in cell (4, 2) is shifted to cell (4, 5). The best solution of the problem lies
in the marked ‘,’ elements as shown in the table.
              To
         A1   A2   A3   A4   A5
    A1    ∞   □1    3    6   ×0
    A2    4    ∞   □0    6   ×0
From A3   4    3    ∞   □0    3
    A4    8   ×0    1    ∞   □1
    A5   □0    2   ×0    7    ∞
(□1 marks the unit elements brought into the solution.)
Thus the required sequence is A1 → A2 → A3 → A4 → A5 → A1, the minimum setup cost per cycle being 2 + 3 + 4 + 5 + 1 = 15.
On the other hand if we select the element 1 in the cell (4, 3) in the solution, then no
feasible solution is available in terms of zeros or for which the reduced matrix gives the
minimum cost less than 2.
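For a matrix as small as this one the tour can also be confirmed by brute force. The sketch below (an illustration only, in plain Python) enumerates every cyclic route through the five items and reports the cheapest one:

from itertools import permutations

INF = float("inf")
c = [[INF,   2,   5,   7,   1],        # setup-cost matrix of Example 1
     [  6, INF,   3,   8,   2],
     [  8,   7, INF,   4,   7],
     [ 12,   4,   6, INF,   5],
     [  1,   3,   2,   8, INF]]

def best_tour(cost):
    n = len(cost)
    best, best_cost = None, INF
    for order in permutations(range(1, n)):        # cycles starting and ending at A1
        tour = (0,) + order + (0,)
        total = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
        if total < best_cost:
            best, best_cost = tour, total
    return best, best_cost

print(best_tour(c))    # ((0, 1, 2, 3, 4, 0), 15): A1->A2->A3->A4->A5->A1, cost 15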
Example 2: Solve the travelling salesman problem given by the following data :
c12 = 20 , c13 = 4, c14 = 10 , c23 = 5, c34 = 6
and there is no route between cities i and j if the value for c ij is not given above.
[Meerut 2005, 06 (BP)]
Solution : If there is no route between cities i and j, we take cij = ∞ so as to rule out the possibility of going directly from the i-th city to the j-th city.
Now we shall solve the problem by usual assignment algorithm. The following tables
show the necessary steps for reaching the solution.
                 Table 1                                  Table 2
         1     2     3     4     5              1     2     3     4     5
   1     ∞    15    □0     4     ∞        1     ∞    12    □0     1     ∞
   2    15     ∞    ×0     ∞     3        2    12     ∞    ×0     ∞    □0
   3    □0    ×0     ∞    ×0    ×0        3    □0    ×0     ∞    ×0    ×0
   4     4     ∞    ×0     ∞    12        4     1     ∞    ×0     ∞     9
   5     ∞     3    ×0    12     ∞        5     ∞    □0    ×0     9     ∞

                 Table 3                                  Table 4
         1     2     3     4     5              1     2     3     4     5
   1     ∞    11    □0    ×0     ∞        1     ∞    11    ×0    □0     ∞
   2    12     ∞     1     ∞    □0        2    12     ∞     1     ∞    □0
   3    ×0    ×0     ∞    □0    ×0   or   3    □0    ×0     ∞    ×0    ×0
   4    □0     ∞    ×0     ∞     8        4    ×0     ∞    □0     ∞     8
   5     ∞    □0     1     9     ∞        5     ∞    □0     1     9     ∞
1 → 3, 3 → 4, 4 → 1, 2 → 5, 5 → 2 , from table 3
or 1 → 4, 4 → 3, 3 → 1, 2 → 5, 5 → 2 , from table 4.
But none of these solutions is the solution for the travelling salesman problem, as it is not
allowed to return to the starting city 1 without visiting the cities 2 and 5.
Now we try to find the best solution which satisfies the restrictions of the travelling
salesman by shifting the positions of assignments.
2 12 ∞ 1 ∞ 0
3 0 0 ∞ 0 0
4 0 ∞ 0 ∞ 8
5 ∞ 0 1 9 ∞
2 12 ∞ 1 ∞ 0
3 0 0 ∞ 0 0
4 0 ∞ 0 ∞ 8
5 ∞ 0 1 9 ∞
1 2 3 4 5
1 ∞ 6 12 6 4
2 6 ∞ 10 5 4
3 8 7 ∞ 11 3
4 5 4 11 ∞ 5
5 5 2 7 8 ∞
6. A salesman has to visit five cities, A, B, C, D and E. The distances (in hundred
miles) between the five cities are as follows :
To
A B C D E
A 7 6 8 4
B 7 8 5 6
From C 6 8 9 7
D 8 5 9 8
E 4 6 7 8
If the salesman starts from city A and has to come back to city A, which route
should he select so that the total distance travelled is minimum. [Gorakhpur 2007]
To
1 2 3 4 5 6
1 ∞ 20 23 27 29 34
2 21 ∞ 19 26 31 24
From 3 26 28 ∞ 15 36 26
4 25 16 25 ∞ 23 18
5 23 40 23 31 ∞ 10
6 27 18 12 35 16 ∞ [Meerut 2006]
3. In an assignment problem with costs (cij ), if all cij ≥ 0, then a feasible solution ( xij )
which satisfies Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij = ................. , is optimal for the problem.    [Meerut 2005]
4. In travelling salesman problem the elements of the leading diagonal of the cost
matrix are taken to be ................. .
5. If the cost matrix of an assignment problem is not a square matrix (number of
sources is not equal to the number of destinations), the assignment problem is
called an ................. assignment problem.
True or False
1. If in an assignment problem, a constant is added or subtracted to every element of a
row (or column) of the cost matrix [cij ], then an assignment which minimizes the
total cost for one matrix, also minimizes the total cost for the other matrix.
2. For solving an assignment problem we modify the cost matrix by creating zeros in it. [Meerut 2004]
3. If there is no solution to the travelling salesman problem among zeros then the best
solution lies in assigning the greatest element of the reduced matrix in place of zero.
4. The procedure of subtracting the minimum element not covered by any line, from
all the uncovered elements and adding the same element to all the elements lying at
the intersection of two lines results in a matrix with different optimal assignments
as the original matrix.
Exercise 11.2
4. 1 → 3 → 2 → 4 → 1 ; or 1 → 4 → 2 → 3 → 1 ; min. cost =19
5. 1 → 3 → 5 → 2 → 4 → 1 ; min. cost =27
6. A → E → B → D → C → A ; or A → C → D → B → E → A ; min distance = 30
hundred miles.
7. 1 → 5 → 6 → 3 → 4 → 2 → 1 ; min. cost =103.
True or False
1. True 2. True
3. False 4. False