Duality and Sensitivity Analysis

The document discusses duality and sensitivity analysis in linear programming problems. It begins by presenting a sample primal linear program with two constraints and two variables. It then describes how to derive the dual linear program by multiplying and adding the primal constraints in different ways to obtain upper bounds on the optimal objective value. The document shows that the dual problem minimizes this upper bound. It then generalizes this process and describes how to write the dual problem for any primal linear program, including ones with different variable types and constraint types. Properties of the primal-dual relationship are outlined, such as how maximization problems have minimization duals and vice versa.

Duality and Sensitivity Analysis

Maximize Z = 6X1 +5X2


Subject to
X1 +X2  5 (3.1)
3X1 +2X2  12 (3.2)
X1, X2  0

Let us assume that we do not know the optimum solution.

Let Z* be the optimum value of Z. Let us try to find a value higher than Z* without solving the problem.

We wish to have as small a value as possible, but it should be greater than or equal to Z*.
• The obvious value is infinity because it is a maximization problem. If we ignore both the constraints,
the value will be infinity.

Maximize Z = 6X1 + 5X2


Subject to
X1 + X 2  5 (3.1)
3X1 + 2X2  12 (3.2)
X1 , X 2  0

• Let us multiply the second constraint by 3 to get 9X1 + 6X2 ≤ 36.

• The optimum solution has to be feasible (X1, X2 ≥ 0) and therefore should satisfy 9X1 + 6X2 ≤ 36.
• Since 6X1 + 5X2 ≤ 9X1 + 6X2 ≤ 36 for X1, X2 ≥ 0, the upper estimate of Z* is 36.

• Let us multiply the first constraint by 6 to get 6X1 + 6X2 ≤ 30. The optimum solution has to be feasible (X1, X2 ≥ 0) and therefore should satisfy 6X1 + 6X2 ≤ 30.
• Since 6X1 + 5X2 ≤ 6X1 + 6X2 ≤ 30 for X1, X2 ≥ 0, the upper estimate of Z* is 30.

• Let us multiply the first constraint by 1 and the second constraint by 2 and add them to get 7X1 + 5X2 ≤ 29.
• The optimum solution has to be feasible (X1, X2 ≥ 0) and therefore should satisfy 7X1 + 5X2 ≤ 29.
• Since 6X1 + 5X2 ≤ 7X1 + 5X2 ≤ 29 for X1, X2 ≥ 0, the upper estimate of Z* is 29.
• From the above arguments we understand that if we multiply the
• first constraint by a non-negative quantity a and
• the second constraint by a non-negative quantity b and
• add them such that the resultant constraint has all coefficients greater than or equal to those in the objective function, then the right hand side value is an upper bound to Z*.
• This is under the assumption that the constraints are of the ≤ type.

• The lowest value that can be achieved will have a certain value of a and b, if
such a value exists. Let such values of a and b be Y1 and Y2.

• In order to get the upper estimate of Z*, Y1 and Y2 should satisfy


Y1 + 3Y2  6 (3.3)
Y1 + 2Y2  5 (3.4)
Y1, Y2  0 (3.5)

• For every Y1 and Y2 satisfying (3.3) to (3.5), W = 5Y1 + 12Y2 is an upper estimate of Z*.
• The lowest value W can take is given by the Linear Programming problem
(P2)
Minimize W = 5Y1 + 12Y2
Subject to
Y1 + 3Y2  6 (3.3)
Y1 + 2Y2  5 (3.4)
Y1, Y2  0 (3.5)
• Problem P2 is called the dual of the given problem P1. The given problem is
called the Primal.
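Since P1 and P2 are both two-variable problems, the bound argument can be checked numerically. The sketch below (the helper names solve2 and best_vertex are mine, not from the text) enumerates the corner points of each feasible region with exact fractions and confirms that the smallest dual objective value equals the largest primal objective value.

```python
from fractions import Fraction as F
from itertools import combinations

def solve2(e1, e2):
    """Intersection of a1*x + a2*y = c with b1*x + b2*y = d (None if parallel)."""
    (a1, a2, c), (b1, b2, d) = e1, e2
    det = a1 * b2 - b1 * a2
    if det == 0:
        return None
    return F(c * b2 - d * a2, det), F(a1 * d - b1 * c, det)

def best_vertex(lines, feasible, value, better):
    """Best objective value over feasible pairwise intersections of boundary lines."""
    best = None
    for e1, e2 in combinations(lines, 2):
        p = solve2(e1, e2)
        if p is not None and feasible(*p):
            z = value(*p)
            if best is None or better(z, best):
                best = z
    return best

# Primal P1: max 6X1 + 5X2 s.t. X1 + X2 <= 5, 3X1 + 2X2 <= 12, X >= 0
zstar = best_vertex(
    [(1, 1, 5), (3, 2, 12), (1, 0, 0), (0, 1, 0)],
    lambda x, y: x >= 0 and y >= 0 and x + y <= 5 and 3*x + 2*y <= 12,
    lambda x, y: 6*x + 5*y,
    lambda a, b: a > b)

# Dual P2: min 5Y1 + 12Y2 s.t. Y1 + 3Y2 >= 6, Y1 + 2Y2 >= 5, Y >= 0
wstar = best_vertex(
    [(1, 3, 6), (1, 2, 5), (1, 0, 0), (0, 1, 0)],
    lambda u, v: u >= 0 and v >= 0 and u + 3*v >= 6 and u + 2*v >= 5,
    lambda u, v: 5*u + 12*v,
    lambda a, b: a < b)

print(zstar, wstar)  # 27 27
```

The dual's minimum and the primal's maximum coincide at 27, which is the value the later slides report for this pair.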

Writing the dual to the standard form of the LPP


From the previous section, we generalize that if the Primal is
Maximize Z = CX
Subject to
AX  b
X0
The Dual is
Minimize W = Yb
Subject to
AT Y  C
Y0
• We are aware that we can have
• two types of objective functions (Maximize and Minimize),
• three types of constraints (≤ type, equation and ≥ type) and
• three types of variables (≥ type, unrestricted and ≤ type).

Let us consider a Primal that has all these features.

• Consider problem P3 given by

Minimize 8X1 + 5X2 + 4X3
Subject to
4X1 + 2X2 + 8X3 = 12 (3.6)
7X1 + 5X2 + 6X3 ≥ 9 (3.7)
8X1 + 5X2 + 4X3 ≤ 10 (3.8)
3X1 + 7X2 + 9X3 ≥ 7 (3.9)
X1 ≥ 0, X2 unrestricted in sign and X3 ≤ 0
Let us bring the primal to the standard form
Maximize Z = CX
Subject to
AX  b
X0

Variable X2 is unrestricted in sign. We replace it as the difference of two variables

X2 = X4 – X5,
where both X4 and X5 ≥ 0.

Variable X3 ≤ 0 is replaced by –X6 where X6 ≥ 0.


The problem P3 becomes
Minimize 8X1 + 5X4 – 5X5 – 4X6
Subject to
4X1 + 2X4 – 2X5 – 8X6 = 12
7X1 + 5X4 – 5X5 – 6X6 ≥ 9
8X1 + 5X4 – 5X5 – 4X6 ≤ 10
3X1 + 7X4 – 7X5 – 9X6 ≥ 7
X1, X4, X5, X6 ≥ 0

We convert the objective function to maximization by multiplying with –1.

The first constraint, which is an equation, is rewritten as two constraints, one with a ≤ sign and another with a ≥ sign.

All the ≥ constraints are written as ≤ constraints by multiplying them with –1.
Making these changes we get
Maximize – 8X1 – 5X4 + 5X5 + 4X6
Subject to
4X1 + 2X4 – 2X5 – 8X6 ≤ 12
– 4X1 – 2X4 + 2X5 + 8X6 ≤ –12
– 7X1 – 5X4 + 5X5 + 6X6 ≤ –9
8X1 + 5X4 – 5X5 – 4X6 ≤ 10
– 3X1 – 7X4 + 7X5 + 9X6 ≤ –7
X1, X4, X5, X6 ≥ 0
• Now the primal is converted to the standard form for which we
can write the dual. Introducing variables Y1 to Y5 for each of the
constraints we write the dual as:

Minimize 12Y1 – 12Y2 – 9Y3 + 10Y4 – 7Y5

Subject to
4Y1 – 4Y2 – 7Y3 + 8Y4 – 3Y5 ≥ –8 (3.10)
2Y1 – 2Y2 – 5Y3 + 5Y4 – 7Y5 ≥ –5 (3.11)
–2Y1 + 2Y2 + 5Y3 – 5Y4 + 7Y5 ≥ 5 (3.12)
–8Y1 + 8Y2 + 6Y3 – 4Y4 + 9Y5 ≥ 4 (3.13)
Y1, Y2, Y3, Y4, Y5 ≥ 0

Since the primal is a minimization problem, the dual should be a maximization problem.
Maximize –12Y1 + 12Y2 + 9Y3 – 10Y4 + 7Y5
Subject to
4Y1 – 4Y2 – 7Y3 + 8Y4 – 3Y5 ≥ –8
2Y1 – 2Y2 – 5Y3 + 5Y4 – 7Y5 ≥ –5
–2Y1 + 2Y2 + 5Y3 – 5Y4 + 7Y5 ≥ 5
–8Y1 + 8Y2 + 6Y3 – 4Y4 + 9Y5 ≥ 4
Y1, Y2, Y3, Y4, Y5 ≥ 0

Writing variable Y2 – Y1 as Y6 we get

Maximize 12Y6 + 9Y3 – 10Y4 + 7Y5
Subject to
– 4Y6 – 7Y3 + 8Y4 – 3Y5 ≥ –8
– 2Y6 – 5Y3 + 5Y4 – 7Y5 ≥ –5
2Y6 + 5Y3 – 5Y4 + 7Y5 ≥ 5
8Y6 + 6Y3 – 4Y4 + 9Y5 ≥ 4
Y3, Y4, Y5 ≥ 0

Y6 unrestricted in sign.

Replacing Y4 by –Y7 such that Y7 ≤ 0 we get

Maximize 12Y6 + 9Y3 + 10Y7 + 7Y5
Subject to
– 4Y6 – 7Y3 – 8Y7 – 3Y5 ≥ –8 (3.14)
– 2Y6 – 5Y3 – 5Y7 – 7Y5 ≥ –5 (3.15)
2Y6 + 5Y3 + 5Y7 + 7Y5 ≥ 5 (3.16)
8Y6 + 6Y3 + 4Y7 + 9Y5 ≥ 4 (3.17)
Y3, Y5 ≥ 0
Y7 ≤ 0
Y6 unrestricted in sign.

Multiplying constraints 3.14 and 3.15 by –1 and writing the modified constraint 3.15 and 3.16 as an equation, we get the dual as
Maximize 12Y6 + 9Y3 + 10Y7 + 7Y5
Subject to
4Y6 + 7Y3 + 8Y7 + 3Y5 ≤ 8 (3.18)
2Y6 + 5Y3 + 5Y7 + 7Y5 = 5 (3.19)
8Y6 + 6Y3 + 4Y7 + 9Y5 ≥ 4 (3.20)
Y3, Y5 ≥ 0
Y7 ≤ 0
Y6 unrestricted in sign.

Let this be Problem P4.

Problem P3 is given by
Minimize 8X1 + 5X2 + 4X3
Subject to
4X1 + 2X2 + 8X3 = 12
7X1 + 5X2 + 6X3 ≥ 9
8X1 + 5X2 + 4X3 ≤ 10
3X1 + 7X2 + 9X3 ≥ 7
X1 ≥ 0, X2 unrestricted in sign and X3 ≤ 0

Problem P4 is given by
Maximize 12Y6 + 9Y3 + 10Y7 + 7Y5
Subject to
4Y6 + 7Y3 + 8Y7 + 3Y5 ≤ 8
2Y6 + 5Y3 + 5Y7 + 7Y5 = 5
8Y6 + 6Y3 + 4Y7 + 9Y5 ≥ 4
Y3, Y5 ≥ 0, Y7 ≤ 0, Y6 unrestricted in sign

We observe that if problem P4 were the primal, problem P3 would be the dual.

Result 3.1
In any LPP, the dual of the dual is the Primal itself
We also observe the following based on our
examples
Primal Dual
1. Maximization Minimization
2. Minimization Maximization
3. Number of variables (n) Number of constraints (n)
4. Number of constraints (m) Number of variables (m)
5. RHS (b) Objective function coefficients (cj)
6. Objective function coefficients (cj) RHS (b)
7. Constraint coefficients (A) Constraint coefficients (AT)

If the primal is a maximization problem, the dual is a minimization problem and vice
versa.
We have as many variables in the dual as the number of constraints in the primal
and vice versa.
The RHS values of the primal become the objective function coefficients of the dual and vice versa.
There is also a relationship between the types of variables (of the primal) and
the type of constraint (of the dual) and vice versa.

Primal (Maximization) Dual (Minimization)

≤ constraint ≥ 0 variable
≥ constraint ≤ 0 variable
equation constraint unrestricted variable
≥ 0 variable ≥ constraint
≤ 0 variable ≤ constraint
unrestricted variable equation constraint
If the first constraint of the primal (maximization) is of the ≤ type, the first variable of the dual is of the ≥ type and so on.

If the primal is a minimization problem, the table is to be interpreted correspondingly.

If the primal is a minimization problem and has its third constraint as ≥ type (say), then the third variable of the dual is of the ≥ type.
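The correspondence rules above can be mechanized. The sketch below is my own encoding (the function name dual_of and the strings used for senses, constraint types and variable types are assumptions, not the text's notation); applied to problem P3, it reproduces the structure of problem P4.

```python
def dual_of(sense, A, b, c, con_types, var_types):
    """Form the dual of: (sense) c.x  s.t.  A x (con_types) b, x (var_types).
    Encodings: constraint type in {'<=', '=', '>='}; variable type in
    {'>=0', 'free', '<=0'}.  Rules follow the primal-dual table."""
    m, n = len(A), len(A[0])
    AT = [[A[i][j] for i in range(m)] for j in range(n)]   # transpose
    if sense == 'max':
        var_of_con = {'<=': '>=0', '=': 'free', '>=': '<=0'}
        con_of_var = {'>=0': '>=', 'free': '=', '<=0': '<='}
        dsense = 'min'
    else:
        var_of_con = {'>=': '>=0', '=': 'free', '<=': '<=0'}
        con_of_var = {'>=0': '<=', 'free': '=', '<=0': '>='}
        dsense = 'max'
    # dual: (dsense) b.y  s.t.  AT y (dual con types) c, y (dual var types)
    return (dsense, AT, b, c,
            [con_of_var[t] for t in var_types],
            [var_of_con[t] for t in con_types])

# Problem P3: Min 8X1 + 5X2 + 4X3 with constraints (3.6)-(3.9)
dual = dual_of('min',
               [[4, 2, 8], [7, 5, 6], [8, 5, 4], [3, 7, 9]],
               [12, 9, 10, 7], [8, 5, 4],
               ['=', '>=', '<=', '>='],
               ['>=0', 'free', '<=0'])
print(dual[0], dual[4], dual[5])
```

The result is a maximization with objective coefficients (12, 9, 10, 7), constraint types ≤, =, ≥ and variable types free, ≥ 0, ≤ 0, matching problem P4 with the dual variables taken in constraint order.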
Primal- Dual Relationships

Result 3.2 Weak Duality Theorem


For a maximization primal, every feasible solution to the dual has an objective function value greater than or equal to that of every feasible solution to the primal.
Let us apply this result to example P1.
The primal is Maximize Z = 6X1 + 5X2
Subject to
X1 + X 2  5 (3.21)
3X1 + 2X2  12 (3.22)
X1 , X 2  0
The dual is Minimize W = 5Y1 + 12Y2
Subject to
Y1 + 3Y2  6 (3.23)
Y1 + 2Y2  5 (3.24)
Y1, Y2  0 (3.25)
Since there exists a feasible solution X1, X2 to the primal,
we have X1 + X2 ≤ 5 and 3X1 + 2X2 ≤ 12.
Substituting in the objective function value of a solution feasible to the dual we get
W ≥ (X1 + X2) Y1 + (3X1 + 2X2) Y2

Rearranging the terms, we get

W ≥ X1 (Y1 + 3Y2) + X2 (Y1 + 2Y2)
Since Y1 and Y2 are feasible to the dual, we have Y1 + 3Y2 ≥ 6 and Y1 + 2Y2 ≥ 5.
Substituting, we get
W ≥ 6X1 + 5X2
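The inequality chain above can be spot-checked numerically: sampling feasible points of P1 and its dual on a half-integer grid (the grid resolution is an arbitrary choice of mine), every dual objective value should dominate every primal one.

```python
from fractions import Fraction as F
from itertools import product

grid = [F(k, 2) for k in range(21)]   # 0, 1/2, 1, ..., 10

primal_vals = [6*x1 + 5*x2 for x1, x2 in product(grid, repeat=2)
               if x1 + x2 <= 5 and 3*x1 + 2*x2 <= 12]
dual_vals = [5*y1 + 12*y2 for y1, y2 in product(grid, repeat=2)
             if y1 + 3*y2 >= 6 and y1 + 2*y2 >= 5]

# Weak duality: every dual-feasible W is >= every primal-feasible Z
assert max(primal_vals) <= min(dual_vals)
print(max(primal_vals), min(dual_vals))  # 27 27
```

On this grid the two extremes actually meet at 27, anticipating the strong-duality behaviour seen in the optimal solutions below.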
Let us apply the Complementary Slackness Conditions to problem P1.
The primal is
Maximize Z = 6X1 + 5X2
Subject to
X1 + X2 + u1 = 5
3X1 + 2X2 + u2 = 12
X1, X2, u1, u2  0
(where u1 and u2 are slack variables)

The dual is
Minimize W = 5Y1 + 12Y2
Subject to
Y1 + 3Y2 – v1 = 6
Y1 + 2Y2 – v2 = 5
Y1, Y2, v1, v2  0

The optimal solution to the primal is X1* = 2, X2* = 3 Z* = 27


The optimal solution to the dual is Y1* = 3, Y2* = 1 W* = 27
• Applying the complementary slackness conditions we verify that since X1 and X2 are the basic variables at the optimum, v1 and v2 are non-basic and take value zero.

• Similarly, since Y1 and Y2 are basic variables of the dual at the optimum, we verify that u1 and u2 are non-basic with value zero.

Let us apply the complementary slackness conditions to problem P2

Maximize 3X1 + 4X2
Subject to
X1 + X2 ≤ 12
2X1 + 3X2 ≤ 30
X1 + 4X2 ≤ 36
X1, X2 ≥ 0

The dual is
Minimize 12Y1 + 30Y2 + 36Y3
Subject to
Y1 + 2Y2 + Y3 ≥ 3
Y1 + 3Y2 + 4Y3 ≥ 4
Y1, Y2, Y3 ≥ 0

(Variables u1, u2 and u3 are primal slack variables while variables v1 and v2 are dual surplus variables)
We know that the optimal solution to the primal is
X1* = 6, X2* = 6, u3* = 6 Z* = 42
We know that the optimal solution to the dual is
Y1* = 1, Y2* = 1 W* = 42
• Since X1, X2 are basic variables of the primal at the optimum, dual slack
variables v1 and v2 are non basic at the optimum to the dual with value zero.
• Since variable u3 is basic at the optimum to the primal, variable Y3 is non basic to
the dual at the optimum with value zero.
• Since variables Y1 and Y2 are basic to the dual at the optimum, slack variables u1 and u2 are non-basic at the optimum to the primal.

• In fact, we can evaluate the optimum solution to the dual if the optimum
solution to the primal is known and vice versa.
• From the optimum solution X1* = 6, X2* = 6, u3* = 6 Z* = 42, we observe that
variables Y1 and Y2 are basic at the optimum to the dual (because slack variables
u1 and u2 are non basic at the optimum to the primal).
• Also variable Y3 is non basic at the optimum to the dual because u3 is basic to the
primal at the optimum.

• It is enough now to solve for Y1 + 2Y2 = 3 and Y1 + 3Y2 = 4. This gives us Y1* =
1, Y2* = 1 W* = 42.

(Using the solution Y1* = 1, Y2* = 1 W* = 42 we can compute the optimum


solution to the primal).
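The two equations above pin down the dual optimum once complementary slackness has fixed which dual variables are zero. A tiny sketch of that computation with exact arithmetic (the elimination step is mine, not the text's):

```python
from fractions import Fraction as F

# u1, u2 nonbasic at the primal optimum => the first two dual
# constraints hold as equations:
#   Y1 + 2*Y2 = 3
#   Y1 + 3*Y2 = 4
# u3 basic => Y3 = 0.
y2 = F(4 - 3, 1)        # subtract the first equation from the second
y1 = 3 - 2 * y2
y3 = 0
w = 12 * y1 + 30 * y2 + 36 * y3   # dual objective
print(y1, y2, w)  # 1 1 42
```

This recovers Y1* = 1, Y2* = 1, W* = 42 without solving the dual LP from scratch.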
Mathematical explanation to the dual

• Let us explain the dual mathematically using problem P1.


Maximize Z = 6X1 + 5X2
Subject to
X1 + X2 ≤ 5 (3.1)
3X1 + 2X2 ≤ 12 (3.2)
X1, X2 ≥ 0

• The optimal solution to the primal is X1* = 2, X2* = 3, Z* = 27


• The optimal solution to the dual is Y1* = 3, Y2* = 1 W* = 27
• Now let us add a small quantity Δ to the first constraint such that the resource available now is 5 + Δ.
• Assuming that X1 and X2 will remain as basic variables at the optimum and solving for X1 and X2 we get X1* = 2 – 2Δ, X2* = 3 + 3Δ, Z* = 27 + 3Δ.
The increase in objective function value at the optimum for a small increase Δ of the first constraint (resource) is 3Δ, where ‘3’ is the value of the first dual variable (Y1) at the optimum.

The value of the dual variable at the optimum is the rate of change of objective function for a
small change in the value of the resource.

It can be viewed as the change in the objective function for a unit change of the resource at
the optimum (assuming that the change is not significant enough to change the set of basic
variable themselves).
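The perturbation argument can be replayed in code: re-solving the two binding constraints with resource 5 + Δ should increase Z at exactly the rate Y1* = 3. A minimal sketch (the helper name z_of is mine):

```python
from fractions import Fraction as F

def z_of(delta):
    """Solve the binding constraints X1 + X2 = 5 + delta,
    3X1 + 2X2 = 12 (basis assumed unchanged) and return Z = 6X1 + 5X2."""
    x1 = 12 - 2 * (5 + delta)   # eliminate X2: (3X1 + 2X2) - 2(X1 + X2)
    x2 = (5 + delta) - x1
    return 6 * x1 + 5 * x2

base = z_of(F(0))
for d in (F(1, 10), F(1, 2), F(1)):
    assert z_of(d) - base == 3 * d    # rate of change equals Y1* = 3
print(base, z_of(F(1)))  # 27 30
```

For every small Δ tried, the objective moves by exactly 3Δ, the marginal value of the first resource.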
Economic Interpretation of the dual
• From the previous discussion we know that the objective function increases by 3 for
a unit increase in the first resource.
• If we have to buy the resource we will be willing to pay a maximum of Rs 3 for the
unit increase.
• Otherwise we will end up making a loss and it will not be profitable considering the
purchase of the extra resource
• The value of the dual variable is the marginal value of the corresponding resource
at the optimum
• We have defined the primal earlier as the problem of the carpenter who makes
tables and chairs.
• Now the dual is the problem faced by the person who is assumed to be selling the
resources to the carpenter.
• If the person sells the extra resource for a price less than Rs 3, the carpenter will buy
and make more profit than what the problem allows him to make (which the seller
would not want)

• On the other hand, if the seller charges more than Rs 3, the carpenter will not buy the resource and the seller cannot make money.

• So both the carpenter and the seller will agree for Rs 3 (in a competitive
environment) and each will make their money and the associated profit.
Let us consider problem P4
Maximize 3X1 + 4X2
Subject to X1 + X2  12
2X1 + 3X2  30
X1 + 4X2  36
X 1, X 2  0

We know that the optimal solution to the primal is X1* = 6, X2* = 6, u3* = 6 Z* = 42
We know that the optimal solution to the dual is Y1* = 1, Y2* = 1 W* = 42

If we add a small Δ to the third resource and solve the resultant problem assuming X1, X2 and u3 as basic variables, we realize that the solution does not change and the optimum value of Z remains at 42.

This means that the marginal value of the third resource at the optimum is zero. This is because
the resource is not completely used at the optimum.

The fact that u3 = 6 at the optimum means that only 30 of the 36 units are consumed and a balance of 6 units is available.

Therefore the person will not want to buy extra resources at extra cost because the resource is
already available.

Therefore the marginal value of the resource is zero. When a slack variable is in the basis, the corresponding dual decision variable is non-basic, indicating that the marginal value of the corresponding resource is zero.
Simplex method solves both the primal and the dual
Maximize Z = 6X1 + 5X2
Subject to
X1 + X2 ≤ 5 (3.1)
3X1 + 2X2 ≤ 12 (3.2)
X1, X2 ≥ 0
The simplex table is shown in the usual notation in following Table

6 5 0 0
X1 X2 u1 u2 RHS θ
0 u1 1 1 1 0 5 5
0 u2 3 2 0 1 12 4
Cj - Zj 6 5 0 0 0
Simplex method solves both the primal and the dual …..

6 5 0 0
X1 X2 u1 u2 RHS θ
0 u1 0 1/3 1 -1/3 1 3
6 X1 1 2/3 0 1/3 4 6
Cj - Zj 0 1 0 -2 24

5 X2 0 1 3 -1 3
6 X1 1 0 -2 1 2
Cj - Zj 0 0 -3 -1 27
In the optimal tableau, let us observe the Cj – Zj values.
These are 0, 0, -3 and –1 for variables X1, X2, u1 and u2.
We also know (from complementary slackness conditions) that there is a relationship between Xj and vj and between uj and Yj.

The values of Cj – Zj corresponding to Xj are the negatives of the values of the dual slack variables vj, and the values of Cj – Zj corresponding to uj are the negatives of the values of the dual decision variables Yj.

Therefore
Y1* = 3, Y2* = 1, v1* = 0, v2* = 0. We also know from complementary slackness conditions that if Xj is basic then vj = 0.
This is also true because when Xj is basic, Cj – Zj is zero.
When uj is non-basic and has value zero, its Cj – Zj is negative at the optimum, indicating a non-negative value of the corresponding dual variable Yj.

The optimum solution to the dual can be read from the optimum tableau of the
primal in the simplex algorithm. We need not solve the dual explicitly.
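The claim that the simplex method solves both problems at once can be illustrated with a small tableau implementation. The sketch below (simplex_max is my own minimal version, assuming a bounded problem with non-negative RHS, not the text's code) solves P1 and reads the dual optimum off the Cj – Zj entries of the slack columns.

```python
from fractions import Fraction as F

def simplex_max(A, b, c):
    """Tableau simplex for: max c.x  s.t.  A x <= b (all b >= 0), x >= 0.
    Assumes a bounded optimum. Returns (all variable values, Cj - Zj row)."""
    m, n = len(A), len(A[0])
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if k == i else F(0) for k in range(m)]   # slack columns
         + [F(b[i])] for i in range(m)]
    cost = [F(v) for v in c] + [F(0)] * m
    basis = list(range(n, n + m))                         # slacks start basic
    while True:
        cb = [cost[j] for j in basis]
        red = [cost[j] - sum(cb[i] * T[i][j] for i in range(m))
               for j in range(n + m)]                     # Cj - Zj row
        enter = max(range(n + m), key=lambda j: red[j])
        if red[enter] <= 0:
            break                                         # optimal
        _, r = min((T[i][-1] / T[i][enter], i)            # min-ratio rule
                   for i in range(m) if T[i][enter] > 0)
        piv = T[r][enter]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][enter] != 0:
                f = T[i][enter]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        basis[r] = enter
    x = [F(0)] * (n + m)
    for i, j in enumerate(basis):
        x[j] = T[i][-1]
    return x, red

x, red = simplex_max([[1, 1], [3, 2]], [5, 12], [6, 5])
print(x[0], x[1])        # 2 3, so Z* = 27
print(-red[2], -red[3])  # 3 1: the dual optimum, read off the slack columns
```

The negatives of the final Cj – Zj values under u1 and u2 are exactly Y1* = 3 and Y2* = 1, as the slides state.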
Let us look at an intermediate iteration (say with basic variables u1 and X1).
The basic feasible solution is u1 = 1, X1 = 4 with non-basic variables X2 = 0 and u2 = 0.
When we apply the above rule (and complementary slackness conditions) we get the corresponding dual solution to be Y1 = 0, Y2 = 2, v1 = 0, v2 = -1 with W = 24 (same value as Z).

• This solution is infeasible to the dual because variable v2 takes a negative value.
• This means that the second dual constraint Y1 + 2Y2 ≥ 5 is not satisfied, making v2 = -1.

• The value of v2 is the extent of infeasibility of the dual, which is the rate at which the
objective function can increase by entering the corresponding primal variable.

• A non-optimal basic feasible solution to the primal results in an infeasible dual when complementary slackness conditions are applied.

• At the optimum, when complementary slackness conditions are applied, the resultant solution is also feasible and hence optimal.
• The Simplex algorithm can be seen as one that evaluates basic feasible solutions to the primal (with non-decreasing values of the objective function for maximization problems), applies complementary slackness conditions and evaluates the dual.

• When the primal basic feasible solution is non optimal, the dual
will be infeasible and when the dual becomes feasible, the optimal
solution for both primal and dual is known.
The Dual Simplex algorithm
Consider the linear programming problem (P5) given by
Minimize 4X1 + 7X2
Subject to
2X1 + 3X2  5
X1 + 7X2  9
X1, X2  0

 Normally we would have added two artificial variables a1 and a2 to get an initial basic feasible
solution.

 We do not add these now but write the constraints as equations with slack variables only. The
equations are written with a negative RHS (something that the simplex method does not approve).

 We also convert it as a maximization problem by multiplying with –1.

The problem becomes

Maximize –4X1 – 7X2 + 0X3 + 0X4
Subject to –2X1 – 3X2 + X3 = –5
–X1 – 7X2 + X4 = –9
X1, X2, X3, X4 ≥ 0
We set up the simplex table as shown in Table 3.4 with slack variables X3
and X4 as basic variables and with a negative RHS

-4 -7 0 0
X1 X2 X3 X4 RHS
0 X3 -2 -3 1 0 -5
0 X4 -1 -7 0 1 -9
Cj - Zj -4 -7 0 0
θ 4 1

This solution is infeasible because both the basic variables have negative
sign. However, it satisfies the optimality condition because all Cj – Zj are ≤ 0.
To get the optimal solution, we have to make the basic variables feasible.
We first decide on the leaving variable (most negative) and choose
variable X4.
This row becomes the pivot row.
In order that we have feasibility, we should get a non-negative value in the next iteration. This is possible only when the pivot is negative.

To find out the entering variable, we divide Cj – Zj by the corresponding coefficient in the pivot row. We compute the θ row and find the minimum θ = 1 for variable X2. This variable replaces variable X4 in the basis.
We have the following iterations

-4 -7 0 0
X1 X2 X3 X4 RHS
0 X3 -2 -3 1 0 -5
0 X4 -1 -7 0 1 -9
Cj - Zj -4 -7 0 0
θ 4 1

0 X3 -11/7 0 1 -3/7 -8/7
-7 X2 1/7 1 0 -1/7 9/7
Cj - Zj -3 0 0 -1 -9
θ 21/11 7/3
There is only one candidate for leaving variable because only X3 has negative value.
Variable X3 leaves the basis and row X3 is the pivot row. We compute θ. Variables X1 and X4 have negative coefficients in the pivot row.
Variable X1 has minimum θ and enters the basis replacing variable X3.
The θ value for variable X1 is –3 ÷ (–11/7) = 21/11 and for variable X4,
θ = –1 ÷ (–3/7) = 7/3.
In the next iteration we have

-4 -7 0 0
X1 X2 X3 X4 RHS
-4 X1 1 0 -7/11 3/11 8/11
-7 X2 0 1 1/11 -2/11 13/11
Cj - Zj 0 0 -21/11 -2/11 123/11

Now, both the basic variables X1 and X2 are feasible. In this algorithm, right from the
first iteration the optimality condition is satisfied. We do not have a leaving variable.
The optimum solution has been reached.
• The above algorithm is called the dual simplex algorithm.

• When this is used to solve a minimization problem with all ≥ type constraints, the optimality condition is satisfied all the time.

• The feasibility condition is not satisfied by the starting solution and after some
iterations, when the solution is feasible we say that the optimum solution is
reached.

• This is called dual simplex because at any iteration, the dual (of the problem that
we are solving) has a feasible solution and the moment we get a primal feasible
solution, the optimum is reached.

• In fact, for the above example, the moment we get a feasible solution to the primal (RHS ≥ 0) we can terminate even without showing that the optimality condition is satisfied.
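The dual simplex steps above can be sketched as a small routine. The code below (dual_simplex is my own minimal version, not the text's; it assumes an entering column always exists, i.e. the problem is feasible) reproduces the iterations for P5 and ends at X1 = 8/11, X2 = 13/11 with cost 123/11.

```python
from fractions import Fraction as F

def dual_simplex(T, cost, basis):
    """Dual simplex on a tableau satisfying optimality (all Cj - Zj <= 0)
    but with some negative RHS. Each row of T is [coefficients..., rhs]."""
    m, n = len(T), len(T[0]) - 1
    while True:
        r = min(range(m), key=lambda i: T[i][-1])   # leaving: most negative RHS
        if T[r][-1] >= 0:
            return T, basis                         # primal feasible -> optimal
        cb = [cost[j] for j in basis]
        red = [cost[j] - sum(cb[i] * T[i][j] for i in range(m))
               for j in range(n)]
        # entering column: negative pivot entry, minimum |Cj - Zj| / |entry|
        _, e = min((-red[j] / -T[r][j], j) for j in range(n) if T[r][j] < 0)
        piv = T[r][e]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][e] != 0:
                f = T[i][e]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        basis[r] = e

# P5 as a maximization: max -4X1 - 7X2 with slack basis X3, X4, negative RHS
T = [[F(-2), F(-3), F(1), F(0), F(-5)],
     [F(-1), F(-7), F(0), F(1), F(-9)]]
cost = [F(-4), F(-7), F(0), F(0)]
T, basis = dual_simplex(T, cost, [2, 3])
sol = {basis[i]: T[i][-1] for i in range(2)}
print(sol[0], sol[1], 4 * sol[0] + 7 * sol[1])  # 8/11 13/11 123/11
```

The routine performs exactly the two pivots shown in the tables: X2 replaces X4, then X1 replaces X3.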
Solving problems with mixed type of constraints
Consider the following problem (P5)
Maximize -X1 + 5X2
Subject to
2X1 – 3X2 ≥ 1
X1 + X2 ≤ 3
X1, X2 ≥ 0
We solve this problem without introducing artificial variables. We rewrite the constraints as
- 2X1 + 3X2 + X3 = -1
X1 + X2 + X4 = 3
The simplex table is set as shown in the following Table with X3 and X4 as basic variables.

-1 5 0 0
X1 X2 X3 X4 RHS θ
0 X3 -2 3 1 0 -1 --
0 X4 1 1 0 1 3 3
Cj - Zj -1 5 0 0
We have a negative value for variable X3 as well as a positive value for Cj-Zj.
We can do a simplex iteration by entering variable X1 or do a dual simplex iteration considering
variable X3.
We choose the simplex iteration (When both are possible it is better to do the simplex iteration
first).
Variable X2 is the entering variable and replaces X4 (the only candidate for leaving variable).
In the next iteration we have

-1 5 0 0
X1 X2 X3 X4 RHS θ
0 X3 -5 0 1 -3 -10
5 X2 1 1 0 1 3
Cj - Zj -6 0 0 -5
θ 6/5 5/3

Now the primal is not feasible but the dual is.


We can only do a dual simplex iteration with X3 as leaving variable. This row becomes
the pivot row.
We have to find an entering variable. Variables X1 and X4 have negative Cj – Zj and a negative coefficient in the pivot row.
We compute the θ row (dual simplex iteration) and enter X1 having minimum θ value.
We perform a dual simplex iteration leaving variable X3 from the basis and replacing it
with variable X1.
In the next iteration we have

-1 5 0 0
X1 X2 X3 X4 RHS
-1 X1 1 0 -1/5 3/5 2
5 X2 0 1 1/5 2/5 1
Cj - Zj 0 0 -6/5 -7/5 3

Here both the primal and dual are feasible. The optimal solution is
X1 = 2, X2 = 1, Z =3.
The optimal solution to the dual is Y1 = 6/5, Y2 = 7/5 W = 3
Consider another problem P6 given by
Maximize 7X1 – 3X2
Subject to
5X1 – 2X2  12
6X1 + 3X2  10
X1, X2  0

We solve this problem without introducing artificial variables. We rewrite the constraints as
- 5X1 + 2X2 + X3 = - 12
6X1 + 3X2 + X4 = 10

The simplex table is set as shown in the following Table with X3 and X4 as basic variables.

7 -3 0 0
X1 X2 X3 X4 RHS θ
0 X3 -5 2 1 0 -12 --
0 X4 6 3 0 1 10 5/3
Cj - Zj 7 -3 0 0

0 X3 0 9/2 1 5/6 -11/3
7 X1 1 1/2 0 1/6 5/3
Cj - Zj 0 -13/2 0 -7/6
We have a negative value for variable X3 as well as a positive value for C1-Z1.

We can do a simplex iteration by entering variable X1 or do a dual simplex iteration


considering variable X3.
We choose the simplex iteration (When both are possible it is better to do the simplex
iteration first).

Variable X1 is the entering variable and replaces X4 (the only candidate for leaving
variable). The simplex iteration (after the first iteration) is shown in the above Table
Now the primal is not feasible but the dual is.
We can only do a dual simplex iteration with X3 as the leaving variable.
However, we do not find an entering variable because there is no variable with a negative coefficient in the pivot row.
This indicates an infeasible solution.
Matrix Representation of the Simplex method

• Let us consider a linear programming problem with m constraints and n variables (including slack variables).

• In the tabular form that we have been using, we compute (or store) an m × n matrix of the constraint coefficients.

• We also compute m values of the RHS and a maximum of m values of θ.

• We also store n values of Cj – Zj and one value of the objective function.


We store mn + m + n + 1 values or (m+1)(n+1) values.
• In a simplex iteration, the first thing that we need is to check whether the optimality
condition is satisfied.
• For this purpose we need the Cj – Zj values of all the non basic variables (m values).
• For a maximization problem, we need to either find the variable with the most positive Cj – Zj or
the first variable with a positive value (depending on the rule for the entering variable).

• Once the entering variable is known, we need to find the leaving variable.
• For this purpose we need to compute , for which we need the RHS values (m values) and the
column corresponding to the entering variable (m values).
• If the optimal solution is reached, we need to compute the value of the objective function.

• Therefore, in EACH iteration, we need only 3m values. Can we compute only these values
and get to the optimum or can we compute fewer than (m+1)(n+1) values per iteration?
• In the tabular form, we compute the values in iteration from the values of the previous iteration.
• Can we make these values dependent only on the problem data?

We answer these questions using the matrix representation of the simplex method.
The simplex method tries to identify the optimal set of basic variables.
Every iteration is characterized by the set of basic variables. Let us call this as XB.
The rest of the variables are non basic and take value zero. We solve for the set XB such
that

BXB = b,
where
b is the RHS of the given problem and
B is the matrix of the values corresponding to the basic variables.
If Pj is a column vector corresponding to variable Xj in the constraint matrix A, then B is a submatrix of A made of the columns Pj of the variables in XB.

From BXB = b
we have XB = B-1b (3.17)

Let CB be the objective function coefficients corresponding to the variables in XB.

Let us assume that in a given iteration P̄j is the column corresponding to variable Xj in the current table.
Let us define
P̄j = B-1Pj (3.18)
(We shall prove equation 3.18 later)
We also have from 3.18
Cj - Zj = Cj - CB P̄j = Cj - CB B-1Pj (3.19)
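Equations 3.17 to 3.19 can be illustrated on problem P1 with the optimal basis {X1, X2}. A sketch with exact fractions (the helper name inv2 is mine, not the text's):

```python
from fractions import Fraction as F

def inv2(M):
    """Exact inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

# Problem P1 with slack columns u1, u2 appended; the optimal basis is {X1, X2}
A = [[1, 1, 1, 0],
     [3, 2, 0, 1]]
b = [5, 12]
c = [6, 5, 0, 0]
basis = [0, 1]                                   # columns of X1 and X2
B = [[A[i][j] for j in basis] for i in range(2)]
Binv = inv2(B)

# (3.17): XB = B^-1 b
XB = [sum(Binv[i][j] * b[j] for j in range(2)) for i in range(2)]
# dual values y = CB B^-1
CB = [c[j] for j in basis]
y = [sum(CB[i] * Binv[i][j] for i in range(2)) for j in range(2)]
# (3.19): Cj - Zj = Cj - CB B^-1 Pj, evaluated for the slack columns
red = [c[j] - sum(y[i] * A[i][j] for i in range(2)) for j in (2, 3)]

print(XB[0], XB[1], y[0], y[1], red[0], red[1])  # 2 3 3 1 -3 -1
```

The basic values (2, 3), the duals (3, 1) and the slack-column Cj – Zj entries (-3, -1) all match the optimal tableau computed earlier, using only the problem data and B-1.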
Sensitivity Analysis
Consider the linear programming problem
Maximize 4X1 + 3X2 + 5X3
Subject to
X1 + 2X2 + 3X3  9
2X1 + 3X2 + X3  12
X1, X2,X3  0
The simplex solution for this problem is

4 3 5 0 0
X1 X2 X3 X4 X5 RHS θ
0 X4 1 2 3 1 0 9 3
0 X5 2 3 1 0 1 12 12
Cj - Zj 4 3 5 0 0 0

5 X3 1/3 2/3 1 1/3 0 3 9
0 X5 5/3 7/3 0 -1/3 1 9 27/5
Cj - Zj 7/3 -1/3 0 -5/3 0 15

5 X3 0 1/5 1 2/5 -1/5 6/5
4 X1 1 7/5 0 -1/5 3/5 27/5
Cj - Zj 0 -18/5 0 -6/5 -7/5 138/5

The optimal solution is given by X1 = 27/5, X3 = 6/5, Z = 138/5

In sensitivity analysis we attempt to address issues such as

What happens to the solution if the values of the objective function coefficients change?

What happens when values of RHS change?

What happens when the coefficients in the constraints change?

What happens when we add a new variable?

What happens when we add a new constraint?

We can address each issue separately or consider a combination of these issues by solving a new problem all over again.

However, here we iterate from the given optimal solution and evaluate the effect of these changes.
Changes in values of objective function coefficients (Cj)

We consider two cases


Changes in Cj value of a non basic variable
Changes in Cj value of a basic variable.

Changes in Cj value of a non basic variable


Maximize 4X1 + 3X2 + 5X3
Subject to
X1 + 2X2 + 3X3  9
2X1 + 3X2 + X3  12
X1, X2,X3  0

In our example, variable X2 is non basic at the optimum.


Let us consider changes in C2 values.
Let us assume a value C2 instead of the given value of 3. This will mean that C2 – Z2 alone will change.
We compute C2 – Z2 = C2 – 33/5.
4 3 5 0 0
X1 X2 X3 X4 X5 RHS θ
5 X3 0 1/5 1 2/5 -1/5 6/5
4 X1 1 7/5 0 -1/5 3/5 27/5
Cj - Zj 0 -18/5 0 -6/5 -7/5 138/5

The present value of C2 = 3 results in C2 – Z2 becoming negative.

A value of C2 > 33/5 would make C2 – Z2 take a positive value resulting in variable
X2 entering the basis

Let us consider C2 = 7. This would make C2 – Z2 = 2/5 and the variable X2 enters
the basis.

We perform simplex iterations till we reach the optimum.


The present optimal table (with the modified values of C2 and C2 – Z2) is shown and the subsequent iterations are shown below

4 7 5 0 0
X1 X2 X3 X4 X5 RHS θ
5 X3 0 1/5 1 2/5 -1/5 6/5 6
4 X1 1 7/5 0 -1/5 3/5 27/5 27/7
Cj - Zj 0 2/5 0 -6/5 -7/5 138/5

5 X3 -1/7 0 1 3/7 -2/7 3/7
7 X2 5/7 1 0 -1/7 3/7 27/7
Cj - Zj -2/7 0 0 -8/7 -11/7 204/7
The new optimum solution is given by X3 = 3/7 X2 = 27/7 with Z = 204/7.

The optimum was found in one iteration starting from the modified optimum table.
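The threshold C2 = 33/5 can be verified directly from the dual values read off the optimal tableau: Z2 = yP2 = 33/5, and the current basis stays optimal exactly while C2 – Z2 ≤ 0. A small check with exact fractions (the sweep values are my own choice):

```python
from fractions import Fraction as F

y = [F(6, 5), F(7, 5)]          # dual values from the optimal tableau
P2 = [2, 3]                     # column of X2 in the two constraints
Z2 = sum(yi * a for yi, a in zip(y, P2))
assert Z2 == F(33, 5)

for C2 in (3, 6, F(33, 5), 7):
    red = C2 - Z2               # reduced cost of the nonbasic variable X2
    # the current basis stays optimal exactly while C2 - Z2 <= 0
    print(C2, red, "optimal" if red <= 0 else "X2 enters")
```

At the original C2 = 3 the reduced cost is -18/5, and at C2 = 7 it is 2/5, triggering the single re-optimization iteration shown above.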
Change in Cj value of a basic variable
Let us consider a change in the objective function coefficient of variable X1. Let us call it
C1. The change will affect all the non-basic Cj – Zj values.
We compute
C2 – Z2 = 3 – 1 – 7C1/5
C4 – Z4 = 0 – 2 + C1/5
C5 – Z5 = 0 + 1 – 3C1/5
For the present value of C1 = 4, the values are –18/5, -6/5 and –7/5. All the values are
negative.

4 3 5 0 0
X1 X2 X3 X4 X5 RHS
5 X3 0 1/5 1 2/5 -1/5 6/5
4 X1 1 7/5 0 -1/5 3/5 27/5
Cj - Zj 0 -18/5 0 -6/5 -7/5 138/5
We also observe that for C1 < 10/7, C2 – Z2 can become positive.
For C1 > 10, C4 – Z4 > 0 and for C1 < 5/3, C5 – Z5 > 0.
In the range 5/3 ≤ C1 ≤ 10, the present set of basic variables will be optimal.
Let us consider C1 = 12.
C2 – Z2 = C2 – y P2 = 3 – [12 × 7/5 + 1] = -74/5,
C4 – Z4 = -2 + 12/5 = 2/5 and
C5 – Z5 = 1 – 36/5 = -31/5.
Variable X4 enters the basis and we perform simplex iterations till it
reaches the optimum.
The modified optimum (for C1 = 12) and the final solutions are shown below

12 3 5 0 0
X1 X2 X3 X4 X5 RHS θ
5 X3 0 1/5 1 2/5 -1/5 6/5 3
12 X1 1 7/5 0 -1/5 3/5 27/5
Cj - Zj 0 -74/5 0 2/5 -31/5 354/5

0 X4 0 1/2 5/2 1 -1/2 3
12 X1 1 3/2 1/2 0 1/2 6
Cj - Zj 0 -15 -1 0 -6 72

Here also, the optimum was found within one iteration. The optimum solution now is X1 = 6 and X4 = 3 with Z = 72
Changes in RHS values
Let us consider changes in RHS values. Let us consider changing the RHS of the first constraint. Let the value be b1. Due to this, the RHS values of the solution will change. These become

RHS = B-1 b

We can read B-1 from the simplex table under the initial identity matrix for the chosen basic variables.

 2 / 5  1 / 5 b1 
  
/ 5 
 1 / 5 3and
B-1 = b=
12
 2 / 5  1 / 5 b1  (2b1  12) / 5
RHS =    =  
 1 / 5 3 / 5 
12 (36  b1 ) / 5 

As long as b1 is such that (2b1 - 12)/5 and (36 - b1)/5 are ≥ 0, the variables X3 and X1 will remain the basic variables at the optimum, with values (2b1 - 12)/5 and (36 - b1)/5 respectively.

This means that as long as 6 ≤ b1 ≤ 36, variables X3 and X1 will be the basic variables at the optimum.

If b1 is outside this range, then one of the variables X1 or X3 becomes negative and the solution is infeasible.

For example, if b1 = 40, we get X3 = 68/5 and X1 = -4/5. In this case, we perform dual simplex iterations till the optimum is reached. This is shown in Table 3.13 (next slide).
Cj             4      3      5      0      0
    Basis     X1     X2     X3     X4     X5      RHS
5   X3         0    1/5      1    2/5   -1/5     68/5
4   X1         1    7/5      0   -1/5    3/5     -4/5  ←
    Cj - Zj    0  -18/5      0   -6/5   -7/5
    Ratio                          6

5   X3         2      3      1      0      1       12
0   X4        -5     -7      0      1     -3        4
    Cj - Zj   -6    -12      0      0     -5       60

The optimum solution is X3 = 12, X4 = 4, Z = 60. This was also
obtained in one iteration from the modified solution.
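The RHS range and the b1 = 40 case can be verified with a few lines of exact arithmetic. In this sketch (names are ours) the original b1 is recovered as 9, since (2b1 - 12)/5 must equal the optimal value X3 = 6/5:

```python
from fractions import Fraction as F

# B-inverse read off the optimal tableau under the slack columns X4, X5
Binv = [[F(2, 5), F(-1, 5)],
        [F(-1, 5), F(3, 5)]]

def rhs(b1):
    """Basic-variable values (X3, X1) for a right-hand side of [b1, 12]."""
    b = [F(b1), F(12)]
    return [sum(r * v for r, v in zip(row, b)) for row in Binv]

assert rhs(9)  == [F(6, 5), F(27, 5)]    # the original optimum
assert rhs(6)  == [F(0), F(6)]           # lower end of the range: X3 hits zero
assert rhs(36) == [F(12), F(0)]          # upper end of the range: X1 hits zero
assert rhs(40) == [F(68, 5), F(-4, 5)]   # outside the range: infeasible basis
```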
Changes in constraint coefficient (of a non-basic variable)

Let us consider changes in the constraint coefficient of a non-basic variable. Specifically, we consider variable X2 and its coefficient in the second constraint. Let the value be a. We have

P2 = | 2 |
     | a |

P2' = B-1 P2 and C2 - Z2 = C2 - yP2.

From the simplex table, the dual solution is y = [6/5  7/5], so

C2 - Z2 = 3 - [6/5  7/5] | 2 |  =  3 - 12/5 - 7a/5 = 3/5 - 7a/5
                         | a |

If a is such that 3/5 - 7a/5 ≤ 0, the present solution will be optimal. That is, if a ≥ 3/7, the present solution is optimal. Let us consider the case when a = 0.
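Pricing the modified column only needs the dual values; a minimal sketch (names are ours):

```python
from fractions import Fraction as F

y = [F(6, 5), F(7, 5)]                 # dual values from the optimal tableau

def c2_minus_z2(a):
    """Reduced cost of X2 when its constraint column is [2, a]."""
    p2 = [F(2), a]
    return F(3) - sum(yi * pi for yi, pi in zip(y, p2))

assert c2_minus_z2(F(3, 7)) == 0       # breakeven value a = 3/7
assert c2_minus_z2(F(1)) < 0           # a > 3/7: present solution still optimal
assert c2_minus_z2(F(0)) == F(3, 5)    # a = 0: X2 prices out and enters
```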
 2 / 5  1 / 5 2  4 / 5 
P2      
a    2 / 5
= B P2 = =
 1 / 5 3 / 5 
-1

2 
C2 – Z2 = 3 – [6/5 7/5]
  = 3/5
a 
Since C2 - Z2 > 0, variable X2 enters the basis. The values of P2' and C2 - Z2 in the tableau change accordingly, and we perform simplex iterations till we reach the optimum.
Cj             4      3      5      0      0
    Basis     X1     X2     X3     X4     X5      RHS    Ratio
5   X3         0    4/5      1    2/5   -1/5      6/5     3/2  ←
4   X1         1   -2/5      0   -1/5    3/5     27/5
    Cj - Zj    0    3/5      0   -6/5   -7/5    138/5

3   X2         0      1    5/4    1/2   -1/4      3/2
4   X1         1      0    1/2      0    1/2        6
    Cj - Zj    0      0   -3/4   -3/2   -5/4     57/2

The optimum solution is given by X1 = 6, X2 = 3/2 with Z = 57/2. Here too, the optimum is obtained in one iteration.
Adding a new product

Let us consider adding a new product to the problem.

This is incorporated in the formulation by introducing a new variable X6. Let us assume that this product requires 1 unit of resource 1 and 3 units of resource 2 per unit made, and fetches a profit of Rs 6/unit.

Deciding whether to make the product is the same as verifying whether variable X6 can enter the basis, i.e., whether C6 - Z6 > 0. Now

P6 = | 1 |
     | 3 |

C6 - Z6 = C6 - yP6 = 6 - [6/5  7/5] | 1 |  =  6 - 27/5 = 3/5
                                    | 3 |

P6' = B-1 P6 = |  2/5   -1/5 | | 1 |  =  | -1/5 |
               | -1/5    3/5 | | 3 |     |  8/5 |

Variable X6 enters the basis. The modified optimum table incorporating the changes in P6'
and C6 - Z6, and the simplex iterations, are shown in the table (next slide).
Cj             4      3      5      0      0      6
    Basis     X1     X2     X3     X4     X5     X6      RHS    Ratio
5   X3         0    1/5      1    2/5   -1/5   -1/5      6/5
4   X1         1    7/5      0   -1/5    3/5    8/5     27/5    27/8  ←
    Cj - Zj    0  -18/5      0   -6/5   -7/5    3/5    138/5

5   X3       1/8    3/8      1    3/8   -1/8      0     15/8
6   X6       5/8    7/8      0   -1/8    3/8      1     27/8
    Cj - Zj -3/8  -33/8      0   -9/8  -13/8      0    237/8

The optimum solution is given by X3 = 15/8, X6 = 27/8, Z = 237/8.


Adding a new constraint
It may be possible to add a new constraint to an existing problem. Let us consider such a situation.

Let the constraint be X1 + X2 + X3 ≤ 8.

We verify whether the constraint is satisfied by the optimal solution.

The solution X1 = 27/5, X3 = 6/5 satisfies this constraint (27/5 + 6/5 = 33/5 ≤ 8), and the present solution continues to be optimal even with the additional constraint.

If the constraint were X1 + X2 + X3 ≤ 6 instead, the present optimal solution (33/5 > 6) would violate it. We rewrite the constraint as an equation by adding a slack variable X6, and write the basic variables in terms of the non-basic variables.

X1 + X2 + X3 + X6 = 6.

From the optimum simplex table we have

X1 + 7/5 X2 - 1/5 X4 + 3/5 X5 = 27/5 and X3 + 1/5 X2 + 2/5 X4 - 1/5 X5 = 6/5

Substituting, we get

27/5 - 7/5 X2 + 1/5 X4 - 3/5 X5 + X2 + 6/5 - 1/5 X2 - 2/5 X4 + 1/5 X5 + X6 = 6
-3/5 X2 - 1/5 X4 - 2/5 X5 + X6 = -3/5

(Please observe that when the constraint is binding, as in this case, the final equation after substitution has a negative right-hand side while the slack variable has a +1 coefficient.) We include this row in the optimal table and proceed with dual simplex iterations till the new optimum is reached.
Cj             4      3      5      0      0      0
    Basis     X1     X2     X3     X4     X5     X6      RHS
5   X3         0    1/5      1    2/5   -1/5      0      6/5
4   X1         1    7/5      0   -1/5    3/5      0     27/5
0   X6         0   -3/5      0   -1/5   -2/5      1     -3/5  ←
    Cj - Zj    0  -18/5      0   -6/5   -7/5      0    138/5
    Ratio             6             6    7/2

5   X3         0    1/2      1    1/2      0   -1/2      3/2
4   X1         1    1/2      0   -1/2      0    3/2      9/2
0   X5         0    3/2      0    1/2      1   -5/2      3/2
    Cj - Zj    0   -3/2      0   -1/2      0   -7/2     51/2

The optimum solution is obtained in one iteration and is
given by X1 = 9/2, X3 = 3/2, X5 = 3/2, with Z = 51/2.
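The new dual-simplex row can be obtained mechanically by eliminating the basic variables from the added constraint; a sketch with exact fractions (names are ours):

```python
from fractions import Fraction as F

# Optimal-tableau rows (columns X1..X5, RHS) of the basic variables
row_x3 = [F(0), F(1, 5), F(1), F(2, 5), F(-1, 5), F(6, 5)]
row_x1 = [F(1), F(7, 5), F(0), F(-1, 5), F(3, 5), F(27, 5)]

# New constraint X1 + X2 + X3 + X6 = 6, written over X1..X5 plus RHS
# (the X6 column, a +1 entry, is carried separately in the tableau)
new = [F(1), F(1), F(1), F(0), F(0), F(6)]

# Eliminate the basic variables X1 and X3 (each has coefficient 1 in the row)
new = [n - x1 - x3 for n, x1, x3 in zip(new, row_x1, row_x3)]
assert new == [F(0), F(-3, 5), F(0), F(-1, 5), F(-2, 5), F(-3, 5)]
```

The negative right-hand side -3/5 confirms that the constraint is binding and that dual simplex iterations are needed.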
Changes in constraint coefficient of
basic variables
• This is the only case that we have not addressed.
• In this case, we do not suggest sensitivity analysis but suggest solving the problem from the beginning.
• This is because any change in the constraint coefficient of a basic variable will change the basis matrix B and hence B-1.
• We propose sensitivity analysis in all situations where the basis matrix B is not affected. When B changes, it is not advisable to perform sensitivity analysis.
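The data of the underlying LP are not restated in this section, but they can be reconstructed from B-1, the duals y = [6/5, 7/5] and the RHS formula (an inference on our part, not given here): maximize 4X1 + 3X2 + 5X3 subject to X1 + 2X2 + 3X3 ≤ 9 and 2X1 + 3X2 + X3 ≤ 12. Under that reconstruction, the base optimum used throughout checks out:

```python
from fractions import Fraction as F

# Reconstructed primal data (assumed, derived from B-1 and y in this section)
A = [[F(1), F(2), F(3)],
     [F(2), F(3), F(1)]]
b = [F(9), F(12)]
c = [F(4), F(3), F(5)]

def check(x, z):
    """Assert that x = (X1, X2, X3) is feasible and attains objective value z."""
    assert all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))
    assert sum(cj * xj for cj, xj in zip(c, x)) == z

check([F(27, 5), F(0), F(6, 5)], F(138, 5))   # the base optimum, Z = 138/5
```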
