
LPP

October 10, 2022

LPP October 10, 2022 1 / 100


Linear Programming problem

Mathematical Formulation

The procedure for the mathematical formulation of a linear programming problem
consists of the following steps:

Step 1. Study the given situation to find the key decisions to be made.

Step 2. Identify the variables involved and designate them by symbols
xj (j = 1, 2, · · · , n).

Step 3. State the feasible alternatives, which generally are xj ≥ 0 for all j.

Step 4. Identify the constraints in the problem and express them as linear
inequalities or equations, the LHS of which are linear functions of the decision
variables.

Step 5. Identify the objective function and express it as a linear function of the
decision variables.



Example
A furniture dealer deals in only two items, viz., tables and chairs. He has
Rs.10,000/– to invest and space to store at most 60 pieces. A table costs him
Rs.500/– and a chair Rs.200/–. He can sell all the items that he buys, getting a
profit of Rs.50 per table and Rs.15 per chair. Formulate this problem as an LPP,
so as to maximize the profit.

SOLUTION:
Step 1. Key decision: how many tables and how many chairs the dealer should buy.

Step 2. Variables: Let x1 and x2 denote the number of tables and chairs
respectively.

Step 3. Feasible alternatives: x1 ≥ 0, x2 ≥ 0.

Step 4. Identify the constraints in the problem and express them as linear
inequalities or equations.
The dealer has a space to store at most 60 pieces
x1 + x2 ≤ 60



The cost of x1 tables = Rs.500x1
The cost of x2 chairs = Rs.200x2
Total cost = 500x1 + 200x2 , which cannot be more than 10000:
500x1 + 200x2 ≤ 10000, i.e.,
5x1 + 2x2 ≤ 100
Step 5. Identify the objective function.
Profit on x1 tables = 50x1
Profit on x2 chairs = 15x2
Total profit = 50x1 + 15x2
Let Z = 50x1 + 15x2 , which is the objective function.
Since the total profit is to be maximized, we have to maximize Z = 50x1 + 15x2 .

Thus, the mathematical formulation of the LPP is


Maximize Z = 50x1 + 15x2
Subject to the constraints:
x1 + x2 ≤ 60
5x1 + 2x2 ≤ 100
x1 , x2 ≥ 0
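The formulation above can be checked numerically. The following Python sketch (an illustration of my own, not part of the original text; the helper name `corner_points` is mine) enumerates the corner points of the feasible region and evaluates the profit at each:

```python
from itertools import combinations

# Constraints of the furniture LPP written as a*x1 + b*x2 <= c,
# including the non-negativity conditions x1 >= 0 and x2 >= 0.
constraints = [
    (1, 1, 60),    # x1 + x2 <= 60    (storage space)
    (5, 2, 100),   # 5x1 + 2x2 <= 100 (investment, scaled)
    (-1, 0, 0),    # -x1 <= 0, i.e. x1 >= 0
    (0, -1, 0),    # -x2 <= 0, i.e. x2 >= 0
]

def corner_points(cons):
    """Intersect every pair of constraint lines and keep the feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel lines: no unique intersection
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

best = max(corner_points(constraints), key=lambda p: 50 * p[0] + 15 * p[1])
z = 50 * best[0] + 15 * best[1]
print(best, z)   # maximum profit 1000 at 20 tables and no chairs
```

With these data the investment constraint is the binding one, so the whole budget goes into tables.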



Graphical Solutions

A linear programming problem involving two decision variables can easily be
solved by the graphical method.
The major steps in the solution of an LPP by the graphical method are as follows:
Step 1. Identify the problem: the decision variables, the objective function and the
restrictions.
Step 2. Set up the mathematical formulation of the problem.
Step 3. Plot the graph representing all the constraints of the problem and
identify the feasible region (solution space). The feasible region is the intersection
of all regions represented by the constraints of the problem and is restricted to the
first quadrant only.
Step 4. The feasible region may be bounded or unbounded. Compute the
coordinates of all the corner points of the feasible region.
Step 5. Find the value of the objective function at each corner point.
Step 6. Select the corner point that optimizes the value of the objective function.
It gives the optimum feasible solution.



Example
Solve the given linear programming problems graphically:
Minimize: z = 20x + 10y
and the constraints are :
x + 2y ≤ 40,
3x + y ≥ 30,
4x + 3y ≥ 60,
x ≥ 0, y ≥ 0 (non-negativity constraints)
SOLUTION: Step.1: The appropriate model formulation of the given LPP
Minimize: z = 20x + 10y
and the constraints are :
x + 2y ≤ 40,
3x + y ≥ 30,
4x + 3y ≥ 60,
x ≥ 0, y ≥ 0
Step.2: Now plot the constraint lines on the graph and find the feasible region.

[Graph: the lines x + 2y = 40, 3x + y = 30 and 4x + 3y = 60 are plotted in the
first quadrant; the feasible region has corner points A, B, C and D, and the
objective line z = 20x + 10y = 240 passes through the optimal corner.]



Step.3 The optimum value of the objective function occurs at one of the extreme
points of the feasible region.
That is, A = (15, 0), B = (40, 0), C = (6, 12) and D = (4, 18) are the extreme
points.

Step.4 Compute the z values at the extreme points:

Extreme point    (x, y)      z = 20x + 10y
A                (15, 0)     300
B                (40, 0)     800
C                (6, 12)     240
D                (4, 18)     260
Step.5 The optimum solution is the extreme point at which the objective
function has the least value. Thus the minimum of z = 20x + 10y is 240, attained
at C = (6, 12).
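As a quick check of Steps 4 and 5, the corner-point evaluation above can be reproduced in a few lines of Python (an illustrative sketch of my own; the variable names are not from the slides):

```python
# Corner points of the feasible region from the slide, and z = 20x + 10y.
corners = {"A": (15, 0), "B": (40, 0), "C": (6, 12), "D": (4, 18)}

z = {name: 20 * x + 10 * y for name, (x, y) in corners.items()}
best = min(z, key=z.get)          # minimization: pick the least z
print(z)                          # {'A': 300, 'B': 800, 'C': 240, 'D': 260}
print(best, z[best])              # C 240
```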



Definitions:

1 Decision variables
2 Objective function
3 Constraints
4 Non-negative constraints
5 Feasible solutions
6 Infeasible solutions
7 Corner points



Example

Maximize: z = 4x1 + 3x2

and the constraints are:

2x1 + x2 ≤ 1000
x1 + x2 ≤ 800
x1 ≤ 400 and x2 ≤ 700
x1 ≥ 0 and x2 ≥ 0 (non-negativity constraints)

SOLUTION:

Step.1: The appropriate model formulation of the given LPP.

Step.2: Now plot the constraint lines on the graph and find the feasible region. As
each point has coordinates of the type (x1 , x2 ), any point satisfying the conditions
x1 ≥ 0 and x2 ≥ 0 lies in the first quadrant.
The inequalities are graphed through the corresponding equations: 2x1 + x2 ≤ 1000
is graphed via 2x1 + x2 = 1000, and x1 + x2 ≤ 800 via x1 + x2 = 800.
Further, the constraints x1 ≤ 400 and x2 ≤ 700 are plotted on the graph,
representing the area between the lines x1 = 400 and x2 = 700.
[Graph: the lines 2x1 + x2 = 1000, x1 + x2 = 800, x1 = 400 and x2 = 700 bound
the feasible region with corner points O, A, B, C, D and E; the objective line
z = 4x1 + 3x2 = 2600 passes through the optimal corner.]

Now all the constraints are graphed. The area bounded by all these constraints is
called the feasible region, shown as the shaded region.



Step.3 The optimum value of the objective function occurs at one of the extreme
points of the feasible region.
That is, O = (0, 0), A = (400, 0), B = (400, 200), C = (200, 600), D = (100, 700)
and E = (0, 700) are the extreme points.
Step.4 Compute the z values at the extreme points:

Extreme point    (x1 , x2 )    z = 4x1 + 3x2
O                (0, 0)        0
A                (400, 0)      1600
B                (400, 200)    2200
C                (200, 600)    2600
D                (100, 700)    2500
E                (0, 700)      2100
Step.5 The optimum solution is the extreme point at which the objective
function has the greatest value. Thus the maximum of z = 4x1 + 3x2 is 2600,
attained at C = (200, 600).
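The same corner-point computation, including a feasibility check of each listed extreme point, can be sketched in Python (my own illustration; the helper `feasible` is not from the slides):

```python
# Extreme points from the slide and the objective z = 4x1 + 3x2.
corners = {"O": (0, 0), "A": (400, 0), "B": (400, 200),
           "C": (200, 600), "D": (100, 700), "E": (0, 700)}

def feasible(x1, x2):
    # All four constraints plus non-negativity.
    return (2 * x1 + x2 <= 1000 and x1 + x2 <= 800
            and x1 <= 400 and x2 <= 700 and x1 >= 0 and x2 >= 0)

assert all(feasible(*p) for p in corners.values())

z = {name: 4 * x1 + 3 * x2 for name, (x1, x2) in corners.items()}
best = max(z, key=z.get)          # maximization: pick the greatest z
print(best, z[best])              # C 2600
```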



Simplex Method

A standard method of maximizing a linear function of several variables under
several constraints on other linear functions.
The simplex method is an approach to solving linear programming models by hand
using slack variables, tableaus, and pivot variables as a means of finding the
optimal solution of an optimization problem. The simplex tableau is used to
perform row operations on the linear programming model as well as to check
optimality.

Definition (Basic Solution)

Given a system of m simultaneous linear equations in n unknowns (m < n),

Ax = b, xT ∈ Rn

where A is an m × n matrix of rank m. Let B be an m × m sub-matrix formed by
m linearly independent columns of A. Then the solution obtained by setting the
n − m variables not associated with the columns of B equal to zero, and solving
the resulting system, is called a basic solution to the given system of equations.
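To make the definition concrete, the sketch below (my own illustration, reusing the earlier furniture constraints with slack variables added, so m = 2 and n = 4) enumerates every basic solution by choosing each set of m linearly independent columns of A:

```python
from itertools import combinations
import numpy as np

# System Ax = b with m = 2 equations and n = 4 unknowns (x1, x2, s1, s2):
# x1 + x2 + s1 = 60 and 5x1 + 2x2 + s2 = 100.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [5.0, 2.0, 0.0, 1.0]])
b = np.array([60.0, 100.0])
m, n = A.shape

basic, basic_feasible = [], []
for cols in combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue  # columns not linearly independent: no basic solution here
    xB = np.linalg.solve(B, b)      # solve B xB = b
    x = np.zeros(n)
    x[list(cols)] = xB              # the other n - m variables stay zero
    basic.append(x)
    if (x >= -1e-9).all():          # non-negative => basic feasible
        basic_feasible.append(x)

print(len(basic), len(basic_feasible))   # 6 basic solutions, 3 of them feasible
```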



Definition (Feasible Solution)
Any solution to a general LPP which also satisfies the non-negativity restrictions
of the problem is called a feasible solution to the general LPP.

Definition (Degenerate solution)


A basic solution to the system is called degenerate if one or more of the basic
variables vanish.



Definition (Basic feasible solution)
A feasible solution to the LPP which is also a basic solution to the problem is
called a basic feasible solution to the LPP.

Definition (Improved feasible solution)


Let XB and X̂B be two basic feasible solutions to the LPP. Then X̂B is said to be
an improved basic feasible solution, as compared to XB , if

ĈB X̂B ≥ CB XB

where ĈB is the cost vector corresponding to X̂B .

Definition (Optimum basic feasible solution)


A basic feasible solution XB to the LPP

Maximize z = cx subject to the constraints Ax = b, x ≥ 0

is called an optimum basic feasible solution if z0 = CB XB ≥ z ∗ , where z ∗ is the
value of the objective function for any feasible solution.
Fundamental properties of the solution

Theorem (Reduction of a feasible solution to a Basic feasible solution)


If an LPP has a feasible solution, then it also has a basic feasible solution.

Proof.
Let the L.P.P. be to determine x so as to

Maximize z = cx, c, xT ∈ Rn

subject to the constraints:
Ax = b, x ≥ 0
where A is an m × n real matrix and b, c are m × 1 and 1 × n real matrices
respectively. Let ρ(A) = m.
Since there exists a feasible solution, we must have ρ(A, b) = ρ(A) and m < n.
Let x = (x1 , x2 , . . . , xn ) be a feasible solution, so that xj ≥ 0 for all j.
To be precise, suppose that x has p positive components and that the remaining
n − p components are zero.
Proof cts...
Let us so relabel our components that the positive components are the first
components and assume that the columns of A have been relabelled accordingly.
Then
a1 x1 + a2 x2 + · · · + ap xp = b
Where a1 , a2 , . . . ap are the first p columns of A.
Two cases now do arise:
(i) The vectors a1 , a2 , . . . , ap form a linearly independent set. Then p ≤ m.
If p = m, the given solution is a non-degenerate basic feasible solution, with
x1 , x2 , . . . , xp as the basic variables.
If p < m, the set {a1 , a2 , . . . , ap } can be extended to a set
{a1 , a2 , . . . , ap , ap+1 , . . . , am } which forms a basis for the columns of A.
Then we have
a1 x1 + a2 x2 + · · · + am xm = b
where xj = 0 for j = p + 1, p + 2, . . . , m.
Thus we have, in this case, a degenerate basic feasible solution with m − p of
the basic variables zero.



Proof cts....
(ii) The set {a1 , a2 , . . . , ap } is linearly dependent (this is certainly the case if
p > m). Let α1 , α2 , . . . , αp be constants (not all zero) such that

α1 a1 + α2 a2 + · · · + αp ap = 0

Suppose αr ̸= 0 for some index r . Then

ar = − Σj̸=r (αj /αr ) aj

Substituting this into Σ aj xj = b gives

Σj̸=r aj xj + ( − Σj̸=r (αj /αr ) aj ) xr = b

or

Σj̸=r ( xj − (αj /αr ) xr ) aj = b

Thus we have a solution with not more than p − 1 non-zero components.



Proof cts....
(ii) [cts...] To ensure that these components are non-negative, we choose r in
such a way that

xj − (αj /αr ) xr ≥ 0 for all j ̸= r

This requires that either αj = 0 or

xj /αj ≥ xr /αr , if αj > 0 and xj /αj ≤ xr /αr , if αj < 0

Thus, if we select r such that

xr /αr = min j { xj /αj : αj > 0 }

then each of the p − 1 values xj − (αj /αr ) xr is non-negative, and so we
have a feasible solution with not more than p − 1 non-zero components.



Proof cts...

1 Consider now this new feasible solution with not more than p − 1 non-zero
components.
2 If the corresponding set of p − 1 columns of A is linearly independent, case
(i) applies and we have arrived at a basic feasible solution.
3 If this set is again linearly dependent, we may repeat the process to arrive at
a feasible solution with not more than p − 2 non-zero components.
4 The argument can be repeated. Ultimately, we get a feasible solution whose
associated set of column vectors of A is linearly independent.
5 The discussion of case (i) then applies and we do get a basic feasible solution.
This completes the proof.
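The reduction step in case (ii) can be illustrated with a tiny numerical example (my own, not from the slides): one equation in three unknowns, a feasible solution with p = 3 positive components, and a dependence relation among the columns.

```python
from fractions import Fraction as F

# One constraint: 1*x1 + 1*x2 + 2*x3 = 4 (m = 1; columns a1=1, a2=1, a3=2),
# with feasible solution x = (1, 1, 1), all three components positive.
a = [F(1), F(1), F(2)]
x = [F(1), F(1), F(1)]
alpha = [F(1), F(1), F(-1)]       # dependence: 1*a1 + 1*a2 - 1*a3 = 0

# Choose r with x_r/alpha_r = min over alpha_j > 0 of x_j/alpha_j.
candidates = [j for j in range(3) if alpha[j] > 0]
r = min(candidates, key=lambda j: x[j] / alpha[j])
t = x[r] / alpha[r]

# The new solution x_j - t*alpha_j gains at least one more zero component.
x_new = [xj - t * aj for xj, aj in zip(x, alpha)]
assert sum(ai * xi for ai, xi in zip(a, x_new)) == 4   # Ax = b still holds
print(x_new)   # [Fraction(0, 1), Fraction(0, 1), Fraction(2, 1)]
```

Here the ratio is tied, so two components vanish at once; the surviving column a3 alone is independent, giving a basic feasible solution.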



Theorem (Replacement of a basic vector)
Let an LPP have a basic feasible solution. If we drop one of the basic vectors and
introduce a non-basic vector in the basic set, then the new solution obtained is
also a basic feasible solution.

Theorem (Improved Basic feasible solution)


Let XB be a basic feasible solution to the LPP:

Maximize z = cx subject to the constraints Ax = b, x ≥ 0

Let X̂B be another basic feasible solution obtained by admitting a non-basic
column vector aj into the basis, for which the net evaluation zj − cj is negative.
Then X̂B is an improved basic feasible solution to the problem, that is,

ĈB X̂B > CB XB



Theorem (Unbounded solution)
Let there exist a basic feasible solution to a given LPP. If there is at least one j
for which zj − cj is negative and yij ≤ 0 (i = 1, 2, · · · , m), then there does not
exist any optimum solution to the LPP (the solution is unbounded).

Theorem (Condition for optimality)


A sufficient condition for a basic feasible solution to an LPP to be
optimal (maximal) is that zj − cj ≥ 0 for all j for which the column vector
aj ∈ A is not in the basis B.

Corollary
A necessary and sufficient condition for a basic feasible solution to an LPP
to be optimal (maximal) is that zj − cj ≥ 0 for all j for which aj ∈
/B

Theorem
Any convex combination of k different optimal solutions to an LPP is again an
optimal solution to the problem.



Computational procedure
The two fundamental conditions on which the simplex method is based are:
Condition of feasibility: it ensures that if the initial solution is basic
feasible, then only basic feasible solutions occur during the computation.
Condition of optimality: it guarantees that only better solutions will be
encountered.
The computation of the simplex method requires the construction of a simplex
table. The initial simplex table is constructed by writing out the coefficients and
the constants of the LPP in a systematic tabular manner:
CB     yB     xB     y1        y2      · · ·   yn
CB1    yB1    xB1    y11       y12     · · ·   y1n
CB2    yB2    xB2    y21       y22     · · ·   y2n
 ...    ...    ...    ...       ...             ...
CBm    yBm    xBm    ym1       ym2     · · ·   ymn
              z0     z1 − c1           · · ·   zn − cn



The optimum of the general LPP is obtained in the following steps:
Step.1. Select an initial basic feasible solution to initiate the algorithm.

Step.2. Check the objective function to see whether there is some non-basic
variable that would improve the objective function if brought into the basis. If
such a variable exists, go to the next step; otherwise stop.
Step.3. Determine how large the variable found in Step.2 can be made until one
of the basic variables in the current solution becomes zero. Eliminate the latter
variable and let the next trial solution contain the newly found variable instead.
Step.4. Check the current solution for optimality.
Step.5. Continue the iterations until either an optimal solution is attained or
there is an indication that an unbounded solution exists.



Simplex algorithm
The steps for the computation of the optimum solution are as follows:
Step.1. Check whether the objective function of the given LPP is to be maximized or
minimized. If it is to be minimized, then we can change it to a maximization problem via

Minimum(z) = −Maximum(−z)

Step.2. Check whether all bi are non-negative; if any bi is negative, then multiply the
corresponding in-equation of the constraints by (−1) so that all the bi are
non-negative.
Step.3. Convert all the inequalities of the constraints into equations by introducing
slack and/or surplus variables in the constraints. Put the costs of these variables equal
to zero.
Step.4. Obtain the initial basic feasible solution of the form XB = B−1 b and put it in
the first column of the simplex table.
Step.5. Compute the net evaluations zj − cj (j = 1, 2, · · · , n) by using the relation
zj − cj = CB yj − cj
Examine the sign of zj − cj :
If all zj − cj ≥ 0, then the initial basic feasible solution xB is an optimum basic
feasible solution.
If at least one zj − cj < 0, proceed on to the next step.



Step.6. If there is more than one negative zj − cj , then choose the most negative of
them. Let it be zr − cr for some j = r .
If all yir ≤ 0 (i = 1, 2, · · · , m), then there is an unbounded solution to the given
problem.
If at least one yir > 0, then the corresponding vector yr enters the basis yB .
Step.7. Compute the ratios xBi /yir , yir > 0, i = 1, 2, · · · , m and choose the minimum of
them. Let the minimum of these ratios be xBk /ykr . Then yk will leave the basis yB . The
common element ykr , which is in the k th row and r th column, is known as the leading
element or the pivotal element of the table.

Step.8. Convert the leading element to unity by dividing its row by the leading element
itself, and all other elements in its column to zero, by making use of the relations:

ŷij = yij − (ykj /ykr ) yir ,  i ̸= k
ŷkj = ykj /ykr

Step.9. Go to Step.5 and repeat the computational procedure until either an optimum
solution is obtained or there is an indication of an unbounded solution.
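The nine steps above can be collected into a short tableau implementation. This is a sketch of my own (exact arithmetic via `Fraction`; maximization with ≤ constraints and non-negative b only, so no artificial variables are needed); applied to the data of the example that the slides solve next, it reproduces z = 200 at x = (0, 20):

```python
from fractions import Fraction

def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, with all b_i >= 0.

    Returns (optimal z, optimal x); raises ValueError if unbounded."""
    m, n = len(A), len(c)
    # Tableau rows [A | I | xB]; the slack variables form the initial basis.
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if k == i else 0) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    # Net evaluations z_j - c_j (initially -c_j), with z0 in the last entry.
    net = [Fraction(-cj) for cj in c] + [Fraction(0)] * (m + 1)
    basis = list(range(n, n + m))
    while True:
        r = min(range(n + m), key=lambda j: net[j])   # most negative z_j - c_j
        if net[r] >= 0:
            break                                     # Step 5: optimal
        rows = [i for i in range(m) if T[i][r] > 0]
        if not rows:
            raise ValueError("unbounded solution")    # Step 6
        k = min(rows, key=lambda i: T[i][-1] / T[i][r])  # Step 7: min ratio
        piv = T[k][r]
        T[k] = [v / piv for v in T[k]]                # Step 8: normalize pivot row
        for i in range(m):
            if i != k and T[i][r] != 0:
                T[i] = [a - T[i][r] * p for a, p in zip(T[i], T[k])]
        net = [a - net[r] * p for a, p in zip(net, T[k])]
        basis[k] = r
    x = [Fraction(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return net[-1], x

z, x = simplex([4, 10], [[2, 1], [2, 5], [2, 3]], [50, 100, 90])
print(z, x)   # 200 [Fraction(0, 1), Fraction(20, 1)]
```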



Example

Maximize z = 4x1 + 10x2

subject to the constraints:
2x1 + x2 ≤ 50
2x1 + 5x2 ≤ 100
2x1 + 3x2 ≤ 90
x1 ≥ 0, and x2 ≥ 0
SOLUTION:
By introducing slack variables s1 ≥ 0, s2 ≥ 0 and s3 ≥ 0, the constraints of the
LPP are converted into the system of equations

[ 2  1  1  0  0 ] [x1]   [ 50 ]
[ 2  5  0  1  0 ] [x2] = [100]
[ 2  3  0  0  1 ] [s1]   [ 90 ]
                  [s2]
                  [s3]




The modified function is to maximize

z = 4x1 + 10x2 + 0s1 + 0s2 + 0s3

The initial basic feasible solution is now represented as in the simplex table

cj 4 10 0 0 0
CB yB xB x1 [y1 ] x2 [y2 ] s1 [y3 ] s2 [y4 ] s3 [y5 ] θ
0 s1 50 2 1 1 0 0 50/1
← 0 s2 ← 100 2 5 0 1 0 100/5 ←
0 s3 90 2 3 0 0 1 90/3
zj 0 0 0 0 0
zj − cj -4 −10 ↑ 0 0 0

where cj = coefficient of the variable in z, zj = CB × yj , and
θ = xBi /yir , yir > 0 (i = 1, 2, 3); choose the minimum of them.



Convert the leading element into unity and all other elements in its column to
zero, using

ŷij = yij − (ykj /ykr ) yir , i ̸= k;   ŷkj = ykj /ykr

Here the pivotal element is y22 = 5, so

ŷ21 = y21 /y22 = 2/5;  ŷ20 = y20 /y22 = 100/5 = 20, and so on for the same row;
ŷ10 = y10 − (y20 /y22 ) y12 = 50 − (100/5) × 1 = 30
ŷ30 = y30 − (y20 /y22 ) y32 = 90 − (100/5) × 3 = 30
ŷ31 = y31 − (y21 /y22 ) y32 = 2 − 2/5 × 3 = 4/5
ŷ11 = y11 − (y21 /y22 ) y12 = 2 − 2/5 × 1 = 8/5
ŷ14 = y14 − (y24 /y22 ) y12 = 0 − 1/5 × 1 = −1/5

and so on.
Using computation the iterated simplex table is
cj 4 10 0 0 0
CB yB xB x1 x2 s1 s2 s3
0 s1 30 8/5 0 1 -1/5 0
10 x2 20 2/5 1 0 1/5 0
0 s3 30 4/5 0 0 -3/5 1
zj 4 10 0 2 0
zj − c j z(= 200) 0 0 0 2 0

The above simplex table yields a new basic feasible solution with an increased
value of z. Moreover, no further improvement in the value of z is possible, since all
zj − cj ≥ 0.
Hence the optimum feasible solution is x1 = 0, x2 = 20, with maximum z = 200.



Example

Maximize z = 107x1 + x2 + 2x3


subject to the constraints:
14x1 + x2 − 6x3 + 3x4 = 7
16x1 + x2 − 6x3 ≤ 5
3x1 − x2 − x3 ≤ 0
x1 , x2 , x3 , x4 ≥ 0
SOLUTION:
By introducing slack variables s1 ≥ 0 and s2 ≥ 0, the set of constraints of the
given LPP is converted (after dividing the first constraint by 3) into

[ 14/3  1/3  −2  1  0  0 ] [x1]   [7/3]
[  16    1   −6  0  1  0 ] [x2] = [ 5 ]
[   3   −1   −1  0  0  1 ] [x3]   [ 0 ]
                           [x4]
                           [s1]
                           [s2]
The initial basic feasible solution is now represented in the simplex table.
cj                107    1    2    0   0   0
CB    yB    xB     x1   x2   x3   x4  s1  s2    θ
0     x4    7/3  14/3  1/3   -2    1   0   0    1/2
0     s1    5      16    1   -6    0   1   0    5/16
0 ←   s2 ←  0       3   -1   -1    0   0   1    0 ←
      zj            0    0    0    0   0   0
   zj − cj     −107 ↑   -1   -2    0   0   0



The Final iteration. Unbounded solution
cj 107 1 2 0 0 0
CB yB xB x1 x2 x3 x4 s1 s2
0 x4 7/3 0 17/9 -4/9 1 0 -14/9
0 s1 5 0 19/3 -2/3 0 1 -16/3
107 x1 0 1 -1/3 -1/3 0 0 1/3
zj 107 -107/3 -107/3 0 0 107/3
zj − cj z =0 0 -110/3 −113/3 ↑ 0 0 107/3

We can see that z2 − c2 and z3 − c3 are both negative, the most negative being
z3 − c3 . But x3 cannot enter the basis yB , since all the yi3 are non-positive. Thus
we can say that the solution is unbounded.



Use of Artificial Variable

If an original constraint is an equation or of the (≥) type, we may no
longer have a ready starting basic feasible solution.

In order to obtain an initial basic feasible solution, we first put the given
L.P.P. into its standard form, and then a non-negative variable is added to the
left side of each equation that lacks the much-needed starting basic variables.

The added variable is called an artificial variable and plays the same role as a
slack variable in providing an initial basic feasible solution.

Artificial variables have no physical meaning from the point of view of the
original problem, so the method will be valid only if we are able to force these
variables to be out of the basis, or at zero level, when the optimum solution is
attained.



TWO-PHASE METHOD

In the first phase of this method, the sum of the artificial variables is minimized subject
to the given constraints (known as the auxiliary L.P.P.) to get a basic feasible solution
to the original L.P.P. The second phase then optimizes the original objective function
starting with the basic feasible solution obtained at the end of Phase 1.
The iterative procedure of the algorithm may be summarised as below.
Step 1. Write the given L.P.P. in its standard form and check whether there exists a
starting basic feasible solution.
1 If there is a ready starting basic feasible solution, go to Phase 2.
2 If there does not exist a ready starting basic feasible solution, go on to the next step.
PHASE 1
Step 2. Add artificial variables to the left side of each equation that lacks the
needed starting basic variables. Construct an auxiliary objective function aimed at
minimizing the sum of all artificial variables. Thus, the new objective is to
Minimize z = A1 + A2 + . . . + Am , i.e., Maximize z ∗ = −A1 − A2 − . . . − Am
where Ai (i = 1, 2, . . . , m) are the non-negative artificial variables.



Step 3. Apply the simplex algorithm to the specially constructed L.P.P. The following
three cases may arise at the last iteration:
1 max z ∗ < 0 and at least one artificial variable is present in the basis with positive
value. In such a case, the original L.P.P. does not possess any feasible solution.
2 max z ∗ = 0 and at least one artificial variable is present in the basis at zero value.
In such a case, the original L.P.P. possesses a feasible solution. In order to get a basic
feasible solution we may proceed directly to Phase 2, or else eliminate the artificial
variables and then proceed to Phase 2.
3 max z ∗ = 0 and no artificial variable is present in the basis. In such a case, a basic
feasible solution to the original L.P.P. has been found. Go to Phase 2.
PHASE 2
Step 4. Consider the optimum basic feasible solution of Phase 1 as a starting basic
feasible solution for the original L.P.P. Assign actual coefficients to the variables in the
objective function and a value zero to the artificial variables that appear at zero value in
the final simplex table of Phase 1.
Apply the usual simplex method to the modified simplex table to get the optimum
solution.



Example

Maximize z = 5x1 − 4x2 + 3x3 subject to the constraints:

2x1 + x2 − 6x3 = 20
6x1 + 5x2 + 10x3 ≤ 76
8x1 − 3x2 + 6x3 ≤ 50
x1 , x2 , x3 ≥ 0

SOLUTION:
We convert the LPP into standard form by using slack variables s1 ≥ 0 and s2 ≥ 0
and an artificial variable A1 ≥ 0.
Maximize z = 5x1 − 4x2 + 3x3 subject to the constraints:

2x1 + x2 − 6x3 + A1 = 20
6x1 + 5x2 + 10x3 + s1 = 76
8x1 − 3x2 + 6x3 + s2 = 50
x1 , x2 , x3 , s1 , s2 , A1 ≥ 0
An initial basic feasible solution is x1 = x2 = x3 = 0, A1 = 20, s1 = 76, s2 = 50.

Phase: I
Auxiliary LPP

Maximize z = −A1 subject to the constraints:

2x1 + x2 − 6x3 + A1 = 20
6x1 + 5x2 + 10x3 + s1 = 76
8x1 − 3x2 + 6x3 + s2 = 50

Writing in matrix form AX = b:

[ 2   1  −6  0  0  1 ] [x1]   [20]
[ 6   5  10  1  0  0 ] [x2] = [76]
[ 8  −3   6  0  1  0 ] [x3]   [50]
                       [s1]
                       [s2]
                       [A1]



cj                 0     0     0    0    0    -1    min ratio
CB    yB    xB     x1    x2    x3   s1   s2   A1    xB /yi1 , yi1 > 0
-1    A1    20      2     1    -6    0    0    1    10
0     s1    76      6     5    10    1    0    0    76/6
0 ←   s2 ←  50      8    -3     6    0    1    0    50/8 ←
   zj − cj       −2 ↑    -1     6    0    0    0

zj − cj = CB yj − cj



cj                  0      0      0     0     0    -1    min ratio
CB    yB     xB     x1     x2     x3    s1    s2   A1    xB /yi2 , yi2 > 0
-1    A1 ←  15/2    0     7/4  -15/2    0   -1/4    1    30/7 ≈ 4.28 ←
0     s1    77/2    0    29/4   11/2    1   -3/4    0    154/29 ≈ 5.31
0     x1    25/4    1    -3/8    3/4    0    1/8    0    negative
   zj − cj          0   −7/4 ↑  15/2    0    1/4    0



cj                  0     0      0     0     0     -1     min ratio
CB    yB     xB     x1    x2     x3    s1    s2    A1     xB /yi , yi > 0
0     x2    30/7    0     1   -30/7    0   -1/7   4/7
0     s1    52/7    0     0   256/7    1    2/7  -29/7
0     x1    55/7    1     0    -6/7    0   1/14   3/14
   zj − cj          0     0      0     0     0      1

Thus we get the optimal value

max z ∗ = 0

Since max z ∗ = 0 and no artificial variable is present in the basis, a basic feasible
solution to the original L.P.P. has been found. Go to Phase 2.



Phase:II

Consider the final simplex table of Phase 1 with the actual costs associated with
the original variables. Delete the artificial variable column A1 from the table, as it
has been eliminated in Phase 1.
cj                  5    -4      0     0     0
CB    yB     xB     x1    x2     x3    s1    s2
-4    x2    30/7    0     1   -30/7    0   -1/7
0     s1    52/7    0     0   256/7    1    2/7
5     x1    55/7    1     0    -6/7    0   1/14
   zj − cj          0     0   69/7     0  13/14

zj − cj = CB yj − cj
Since all zj − cj ≥ 0, an optimal feasible solution has been reached. Hence, an
optimum feasible solution to the given LPP is

x1 = 55/7, x2 = 30/7, x3 = 0

Max z = 5 × 55/7 − 4 × 30/7 + 0 = 155/7
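The optimum reported by Phase 2 can be verified directly against the original constraints, in exact arithmetic (an illustrative check of my own):

```python
from fractions import Fraction as F

x1, x2, x3 = F(55, 7), F(30, 7), F(0)

# Original constraints of the example.
assert 2 * x1 + x2 - 6 * x3 == 20          # equality constraint holds
assert 6 * x1 + 5 * x2 + 10 * x3 <= 76     # 480/7 <= 76
assert 8 * x1 - 3 * x2 + 6 * x3 <= 50      # 350/7 = 50, binding

z = 5 * x1 - 4 * x2 + 3 * x3
print(z)   # 155/7
```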


Big M Method

If an LP has any ≥ or = constraints, a starting BFS may not be readily
available.
In order to use the simplex method, a BFS is needed. To remedy the
predicament, artificial variables are created.

Solution Strategy

1 In the optimal solution, all artificial variables must be set equal to zero.
2 To accomplish this, in a min LP, a term MAi is added to the objective
function for each artificial variable Ai . For a max LP, the term −MAi is
added to the objective function for each Ai .
3 M represents some very large number.



Big M Method

1 Modify the constraints so that the RHS of each constraint is non-negative.
Identify each constraint that is now an = or ≥ constraint.
2 Convert each inequality constraint to standard form (add a slack variable for
≤ constraints; subtract an excess variable for ≥ constraints).
3 For each ≥ or = constraint, add an artificial variable Ai , with the sign
restriction Ai ≥ 0.
4 Let M denote a very large positive number. For each artificial variable, add
MAi to a min problem's objective function, or −MAi to a max problem's
objective function.
5 Since each artificial variable will be in the starting basis, all artificial variables
must be eliminated from row 0 before beginning the simplex. Remembering that
M represents a very large number, solve the transformed problem by the
simplex.
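Because M only ever needs to be "large enough", it can also be kept symbolic: represent a value aM + b as the pair (a, b) and compare lexicographically, since the M-term always dominates. The sketch below (an illustration of my own; the class name `BigM` is mine) shows how the entering column of the example that follows is selected from net evaluations such as −M − 6 and −M − 4:

```python
from functools import total_ordering

@total_ordering
class BigM:
    """A number a*M + b, with M a symbolically 'very large' positive value."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)
    def __lt__(self, other):
        # M dominates: compare the M-coefficients first, the constants second.
        return (self.a, self.b) < (other.a, other.b)
    def __repr__(self):
        return f"{self.a}M{self.b:+d}"

# Net evaluations z_j - c_j of the starting Big-M tableau in the next example:
net = {"x1": BigM(-1, -6), "x2": BigM(-1, -4),
       "s1": BigM(0, 0), "s2": BigM(0, 0), "s3": BigM(1, 0)}
entering = min(net, key=net.get)
print(entering)   # x1  (since -M - 6 < -M - 4 < 0 < M)
```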



Example

Maximize z = 6x1 + 4x2 subject to the constraints:

2x1 + 3x2 ≤ 30
3x1 + 2x2 ≤ 24
x1 + x2 ≥ 3
x1 , x2 ≥ 0
SOLUTION:

Introducing slack variables s1 , s2 , surplus variable s3 and artificial variable A1 in


the constraints of LPP,
Maximize z = 6x1 + 4x2 + 0s1 + 0s2 + 0s3 subject to the constraints:

2x1 + 3x2 + s1 = 30
3x1 + 2x2 + s2 = 24
x1 + x2 − s 3 = 3
x1 , x2 , s 1 , s 2 , s 3 ≥ 0
We don’t have an initial basic feasible solution, so we introduce an artificial
variable A1 in the third constraint. Then the initial basic feasible solution is
s1 = 30, s2 = 24 and A1 = 3.
Now the iterative simplex tables:
Initial iteration: introduce x1 and drop A1
cj                 6     4     0    0    0    -M
CB    yB    xB     x1    x2   s1   s2   s3    A1
0     s1    30      2     3    1    0    0     0
0     s2    24      3     2    0    1    0     0
-M    A1     3      1     1    0    0   -1     1
      zj   -3M     -M    -M    0    0    M    -M
   zj − cj       -M-6  -M-4    0    0    M     0



Introduce s3 and drop s2
cj 6 4 0 0 0
CB yB xB x1 x2 s1 s2 s3
0 s1 24 0 1 1 0 2
0 s2 15 0 -1 0 1 3
6 x1 3 1 1 0 0 -1
zj 18 6 6 0 0 -6
zj − c j 0 2 0 0 -6



Optimal solution:
cj 6 4 0 0 0
CB yB xB x1 x2 s1 s2 s3
0 s1 14 0 5/3 1 -2/3 0
0 s3 5 0 -1/3 0 1/3 1
6 x1 8 1 2/3 0 1/3 0
zj 48 6 4 0 2 0
zj − c j 0 0 0 2 0

Since all zj − cj ≥ 0, an optimal solution has been reached. It is attained at

x1 = 8, x2 = 0, z = 48



Alternate solution

It is evident from the net evaluations of the optimum table that the net evaluation
of the non-basic variable x2 is 0. This indicates that the current solution is not
unique and an alternate solution exists. Thus we bring x2 into the basis (and s1
leaves), giving
cj 6 4 0 0 0
CB yB xB x1 x2 s1 s2 s3
4 x2 42/5 0 1 3/5 -2/5 0
0 s3 39/5 0 0 1/5 1/5 1
6 x1 12/5 1 0 -2/5 3/5 0
zj 48 6 4 0 2 0
zj − cj 0 0 0 2 0

Thus we observe that the optimum value is the same. Hence the optimal solutions
are (x1 , x2 ) = (8, 0) and (x1 , x2 ) = (12/5, 42/5).
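Both optima can be checked against the constraints, confirming the tie (an illustrative check of my own, in exact arithmetic, using the constraint coefficients as they appear in the initial tableau):

```python
from fractions import Fraction as F

def feasible(x1, x2):
    return (2 * x1 + 3 * x2 <= 30 and 3 * x1 + 2 * x2 <= 24
            and x1 + x2 >= 3 and x1 >= 0 and x2 >= 0)

solutions = [(F(8), F(0)), (F(12, 5), F(42, 5))]
values = []
for x1, x2 in solutions:
    assert feasible(x1, x2)
    values.append(6 * x1 + 4 * x2)   # objective z = 6x1 + 4x2

print(values)   # both points give the same optimum z = 48
```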



Example

Minimize z = 5x1 − 6x2 − 7x3 subject to the constraints:

x1 + 5x2 − 3x3 ≥ 15
5x1 − 6x2 + 10x3 ≥ 0
x1 + x2 + x3 = 5
x1 , x2 , x3 ≥ 0

SOLUTION:
Constraint 1: it has a "≥" (greater-than-or-equal) sign, so the surplus variable s1
will be subtracted and the artificial variable A1 will be added.
Constraint 2: it has a null independent term. All coefficients are multiplied by −1
and the sign is changed from "≥" to "≤"; in this way it corresponds to adding the
slack variable s2 .
Constraint 3: it has an "=" sign (equality), so the artificial variable A2 will be
added.



Minimize z = 5x1 − 6x2 − 7x3 + 0s1 + 0s2 + MA1 + MA2 subject to the constraints:

x1 + 5x2 − 3x3 − s1 + A1 = 15
−5x1 + 6x2 − 10x3 + s2 = 0
x1 + x2 + x3 + A2 = 5
x1 , x2 , x3 , s1 , s2 , A1 , A2 ≥ 0

Initial table
cj                 5      -6     -7     0    0    M    M
CB    yB    xB     x1     x2     x3    s1   s2   A1   A2
M     A1    15      1      5     -3    -1    0    1    0
0     s2     0     -5      6    -10     0    1    0    0
M     A2     5      1      1      1     0    0    0    1
      zj   20M     2M     6M    -2M    -M    0    M    M
   zj − cj       2M-5   6M+6  -2M+7    -M    0    0    0



In the case of a minimization problem we take the maximum of zj − cj .
Thus the variable x2 enters and the variable s2 leaves the basis. The pivot element
is 6.
cj                 5      -6     -7      0     0     M    M
CB    yB    xB     x1     x2     x3     s1    s2    A1   A2
M     A1    15    31/6     0    16/3    -1   -5/6    1    0
-6    x2     0    -5/6     1    -5/3     0    1/6    0    0
M     A2     5    11/6     0     8/3     0   -1/6    0    1
      zj         7M+5     -6   8M+10    -M   -M-1    M    M
   zj − cj         7M      0   8M+17    -M   -M-1    0    0

Enter the variable x3 and the variable A2 leaves the base. The pivot element is 8/3



cj                  5              -6    -7    0         0          M       M
CB    yB     xB     x1             x2    x3   s1        s2         A1      A2
M     A1      5     3/2             0     0   -1       -1/2         1      -2
-6    x2    25/8   5/16             1     0    0       1/16         0      5/8
-7    x3    15/8  11/16             0     1    0      -1/16         0      3/8
      zj    (3/2)M − 107/16        -6    -7   -M   −(1/2)M + 1/16   M   -2M − 51/8
   zj − cj  (3/2)M − 187/16         0     0   -M   −(1/2)M + 1/16   0   -3M − 51/8

Enter the variable x1 and the variable x3 leaves the base. The pivot element is
11/16



cj                  5    -6        -7          0        0          M        A2
CB    yB     xB     x1   x2        x3         s1       s2         A1        A2
M     A1   10/11    0     0     -24/11        -1    -4/11          1     -31/11
-6    x2   25/11    0     1      -5/11         0     1/11          0       5/11
5     x1   30/11    1     0      16/11         0    -1/11          0       6/11
      zj  (10/11)M  5    -6   −(24/11)M+10    -M   −(4/11)M−1      M    −(31/11)M
   zj − cj          0     0   −(24/11)M+17    -M   −(4/11)M−1      0    −(42/11)M

The optimality condition is now satisfied, but the artificial variable A1 remains in
the basis with a strictly positive value (10/11), so the problem has no feasible
solution (infeasible).
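This conclusion can be cross-checked numerically. The sketch below (assuming SciPy is available) feeds the original constraints to `scipy.optimize.linprog`, which detects the infeasibility directly, with no Big-M bookkeeping.

```python
# Hedged sketch: verify with scipy.optimize.linprog that the LPP
#   min 5x1 - 6x2 - 7x3
#   s.t. x1 + 5x2 - 3x3 >= 15, 5x1 - 6x2 + 10x3 >= 0, x1 + x2 + x3 = 5, x >= 0
# has no feasible solution.
from scipy.optimize import linprog

c = [5, -6, -7]
# linprog uses A_ub @ x <= b_ub, so each ">=" row is negated.
A_ub = [[-1, -5, 3],      # -(x1 + 5x2 - 3x3) <= -15
        [-5, 6, -10]]     # -(5x1 - 6x2 + 10x3) <= 0
b_ub = [-15, 0]
A_eq = [[1, 1, 1]]
b_eq = [5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
print(res.status)  # status 2 means "problem is infeasible"
```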



Duality in LPP
General Primal-Dual pair

Definition (1)
Maximize z = c1 x1 + c2 x2 + · · · + cn xn .
subject to the constraints:

ai1 x1 + ai2 x2 + · · · + ain xn = bi ; i = 1, 2, · · · , m


xj ≥ 0; j = 1, 2, · · · , n

Dual problem

Minimize z ∗ = b1 w1 + b2 w2 + · · · + bm wm

a1j w1 + a2j w2 + · · · + amj wm ≥ cj ; j = 1, 2, · · · , n


wi (i = 1, 2, · · · , m), unrestricted

Note: xj ’s are primal variables and wi ’s are dual variables.


Definition (2)
Minimize z = c1 x1 + c2 x2 + · · · + cn xn .
subject to the constraints:

ai1 x1 + ai2 x2 + · · · + ain xn = bi ; i = 1, 2, · · · , m


xj ≥ 0; j = 1, 2, · · · , n

Dual problem

Maximize z ∗ = b1 w1 + b2 w2 + · · · + bm wm

a1j w1 + a2j w2 + · · · + amj wm ≤ cj ; j = 1, 2, · · · , n


wi (i = 1, 2, · · · , m), unrestricted



Definition (matrix form)
Primal Form: Find x ∈ Rn so as to

Maximize z = cx, c ∈ Rn subject to Ax = b, and x ≥ 0, b ∈ Rm

where A is an m × n real matrix.


Dual Form: Find w ∈ Rm so as to

Minimize z ∗ = bT w , b ∈ Rm subject to AT w ≥ cT , c ∈ Rn

where AT is the transpose of the m × n real matrix A and w is unrestricted in sign

Definition
Primal Form: Find x ∈ Rn so as to

Minimize z = cx, c ∈ Rn subject to Ax = b, and x ≥ 0, b ∈ Rm

where A is an m × n real matrix.


Dual Form: Find w ∈ Rm so as to

Maximize z ∗ = bT w , b ∈ Rm subject to AT w ≤ cT , c ∈ Rn

where AT is the transpose of the m × n real matrix A and w is unrestricted in sign


Theorem (1)
The dual of the dual is the primal.

Theorem (weak duality theorem)


Let x0 be a feasible solution to the primal problem

Maximize f (x) = cx subject to Ax ≤ b, x ≥ 0

where xT and c ∈ Rn , b ∈ Rm and A is an m × n real matrix. If w0 is a feasible


solution to the dual of the primal, namely

Minimize g (w ) = bT w subject to AT w ≥ cT , w ≥ 0

where wT ∈ Rm , then cx0 ≤ bT w0

Theorem (Existence theorem)


If either the primal or the dual problem has an unbounded objective function
value, then the other problem has no feasible solution.



Theorem (Fundamental theorem of Duality)
If the primal or the dual has a finite optimum solution, then the other problem also
possesses a finite optimum solution, and the optimum values of the objective
functions of the two problems are equal.

Theorem (Basic duality theorem)


Let a primal problem be

Maximize f (x) = cx subject to Ax ≤ b, x ≥ 0

and the associated dual be

Minimize g (w ) = b T w , subject to AT w ≥ c T , w ≥ 0, w T , b T ∈ Rm

If x0 (w0 ) is the optimum solution to the primal (dual), then there exists a feasible
solution w0 (x0 ) to the dual (primal) such that

cx0 = b T w0



Formulation: Dual problem

Steps involved in the formulation of the dual problem are:


Step.1. Put the linear programming problem into its standard form. Consider it
as primal form.
Step.2. Identify the variables to be used in the dual form. The number of these
variables is equal to the number of constraint equations in the primal.
Step.3. Write down the objective function of the dual, using the right-hand-side
constants of the primal constraints.
If the primal problem is of maximization type, the dual will be a minimization
problem and vice-versa.
Step.4. Making use of the dual variables identified in Step 2, write the constraints
of the dual problem.
If the primal is a maximization problem, the dual constraints must all be of ′ ≥′
type. If the primal is a minimization problem, the dual constraints must all be of
′ ≤′ type.
The column coefficients of the primal constraints become the row
coefficients of the dual constraints.



The coefficients of the primal objective function become the right-hand-side
constants of the dual constraints.
The dual variables are defined to be unrestricted in sign.
Step.5. Using Step.3 and 4, write down the dual of LPP

Example

Formulate the dual of the following LPP:

Maximize z = 5x1 + 3x2

subject to the constraints:

3x1 + 5x2 ≤ 15, 5x1 + 2x2 ≤ 10, x1 ≥ 0, x2 ≥ 0

SOLUTION:
Standard Primal. Introducing the slack variables s1 ≥ 0 and s2 ≥ 0, the standard
LPP is:
Maximize z = 5x1 + 3x2 + 0s1 + 0s2
subject to the constraints:



3x1 + 5x2 + s1 + 0s2 = 15,
5x1 + 2x2 + 0s1 + s2 = 10,
x1 , x2 , s 1 , s 2 ≥ 0

Dual. Let w1 and w2 be the dual variables corresponding to the primal


constraints. then the dual problem:

Minimize z ∗ = 15w1 + 10w2

subject to the constraints:

3w1 + 5w2 ≥ 5
5w1 + 2w2 ≥ 3
w1 + 0w2 ≥ 0 ⇒ w1 ≥ 0
0w1 + w2 ≥ 0 ⇒ w2 ≥ 0
w1 and w2 unrestricted



Example

Write down the dual of LPP:

minimize z = 4x1 + 6x2 + 18x3

subject to the constraints:

x1 + 3x2 ≥ 3, x2 + 2x3 ≥ 5, and xj ≥ 0

SOLUTION:
Standard Primal. Introducing the surplus variables s1 ≥ 0 and s2 ≥ 0, the standard
LPP is:
minimize z = 4x1 + 6x2 + 18x3 + 0s1 + 0s2
subject to the constraints:

x1 + 3x2 − s1 = 3,
0x1 + x2 + 2x3 − s2 = 5,
xj ≥ 0



Dual. Let w1 and w2 be the dual variables corresponding to the primal
constraints. then the dual problem:

Maximize z ∗ = 3w1 + 5w2

subject to the constraints:

w1 + 0w2 ≤ 4
3w1 + w2 ≤ 6
0w1 + 2w2 ≤ 18
−w1 + 0w2 ≤ 0 ⇒ w1 ≥ 0
0w1 − w2 ≤ 0 ⇒ w2 ≥ 0
w1 and w2 unrestricted

The dual becomes:


Maximize z ∗ = 3w1 + 5w2
subject to the constraints:

w1 ≤ 4, 3w1 + w2 ≤ 6, w2 ≤ 9
w1 ≥ 0, w2 ≥ 0
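As a numerical illustration of this example (a sketch assuming SciPy is available), both the primal and the dual above can be solved with `scipy.optimize.linprog`; their optimal objective values coincide, as the fundamental theorem of duality predicts.

```python
# Hedged sketch: solve the primal and its dual with scipy.optimize.linprog
# and check that the optimal objective values coincide (strong duality).
from scipy.optimize import linprog

# Primal: min 4x1 + 6x2 + 18x3  s.t.  x1 + 3x2 >= 3,  x2 + 2x3 >= 5,  x >= 0
primal = linprog(c=[4, 6, 18],
                 A_ub=[[-1, -3, 0], [0, -1, -2]], b_ub=[-3, -5],
                 bounds=[(0, None)] * 3, method="highs")

# Dual: max 3w1 + 5w2  s.t.  w1 <= 4,  3w1 + w2 <= 6,  w2 <= 9,  w >= 0
# (linprog minimizes, so the objective is negated.)
dual = linprog(c=[-3, -5],
               A_ub=[[1, 0], [3, 1], [0, 1]], b_ub=[4, 6, 9],
               bounds=[(0, None)] * 2, method="highs")

print(primal.fun, -dual.fun)  # both equal 30.0
```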



General non-Linear programming problem

Let z be a real valued function of n variables defined by

z = f (x1 , x2 , · · · , xn ).

Let b1 , b2 , · · · , bm be a set of constants such that

gi (x1 , x2 , · · · , xn ) {≥, ≤, or =} bi , i = 1, 2, · · · , m

where gi ’s are real valued functions of n variables. Finally let

xj ≥ 0, j = 1, 2, · · · , n

If either f or gi , i = 1, 2, · · · , m, or both are non-linear, then the problem of
determining the variables (x1 , x2 , · · · , xn ) which maximize/minimize z and
satisfy the above constraints and non-negativity conditions is called a general
non-linear programming problem (GNLPP)



Matrix Representation

Determine x ∈ IRn which maximizes/minimizes the objective function z = f (x),
subject to the constraints:

gi (x) {≥, ≤, or =} bi , x ≥ 0

where either f (x) or gi or both are non linear in x

Note: The constraint gi (x) {≥, ≤, or =} bi can conveniently be written as

hi (x) {≥, ≤, or =} 0, where hi (x) = gi (x) − bi



Convex Optimization

A convex optimization problem is one of the form

min f0 (x), x ∈ Rn

such that:

fi (x) ≤ 0, i = 1, 2, · · · , m
aiT x = bi , i = 1, 2, · · · , p.

where f0 , f1 , · · · , fm are convex functions. Compared with the general standard
form, the convex problem has three additional requirements:
the objective function must be convex,
the inequality constraint function must be convex,
the equality constraint function hi (x) = aiT x − bi must be affine



Constrained Optimization with Equality Constraints
If the non linear programming problem is composed of some differentiable
objective function and equality constraints, the optimisation may be achieved by
use of Lagrange multipliers. The method of Lagrange multipliers is used to
optimize a function subject to constraints.

Consider a two variable function

max / min f (x1 , x2 ),

subject to
h(x1 , x2 ) = 0
Taking the total differential of the function at (x1 , x2 ):

df = (∂f /∂x1 ) dx1 + (∂f /∂x2 ) dx2 = 0    (1)
If (x1∗ , x2∗ ) be the solution of constrained optimization problem, then

h(x1∗ , x2∗ ) = 0



Now the variations dx1 and dx2 are admissible only if

h(x1∗ + dx1 , x2∗ + dx2 ) = 0,

which can be expanded as

h(x1∗ + dx1 , x2∗ + dx2 ) = h(x1∗ , x2∗ ) + (∂h(x1∗ , x2∗ )/∂x1 ) dx1 + (∂h(x1∗ , x2∗ )/∂x2 ) dx2 = 0

dh = (∂h/∂x1 ) dx1 + (∂h/∂x2 ) dx2 = 0 ⇒ dx2 = −[(∂h/∂x1 )/(∂h/∂x2 )] dx1

Putting in (1):

df = (∂f /∂x1 ) dx1 − (∂f /∂x2 ) [(∂h/∂x1 )/(∂h/∂x2 )] dx1 = 0

[ (∂f /∂x1 )(∂h/∂x2 ) − (∂f /∂x2 )(∂h/∂x1 ) ] dx1 = 0

∂f /∂x1 − [(∂f /∂x2 )/(∂h/∂x2 )] (∂h/∂x1 ) = 0



Thus,

∂f /∂x1 − λ (∂h/∂x1 ) = 0,

and similarly

∂f /∂x2 − λ (∂h/∂x2 ) = 0,

where λ = (∂f /∂x1 )/(∂h/∂x1 ) = (∂f /∂x2 )/(∂h/∂x2 ) is a single scalar.

Hence, we need to find where the gradient of f and the gradient of h are aligned:

∇f (x) = λ∇h(x)

for some Lagrange multiplier λ. We need the scalar λ because the magnitudes of
the gradients may not be the same.



Let’s consider a maximization/minimization problem

min /(max)f (x1 , x2 )

subject to the constraints:

g (x1 , x2 ) = c, x1 , x2 ≥ 0

where c is a constant.

To find the necessary conditions for a maximum/minimum value of z, a new
function is formed by using the Lagrange multiplier λ as

L(x1 , x2 , λ) = f (x1 , x2 ) − λ(g (x1 , x2 ) − c)

where the function L(x1 , x2 , λ) is known as the Lagrange function with
Lagrange multiplier λ



The necessary conditions are:

∂L(x1 , x2 , λ)/∂x1 = 0 ⇒ ∂f /∂x1 − λ ∂(g (x1 , x2 ) − c)/∂x1 = 0
∂L(x1 , x2 , λ)/∂x2 = 0 ⇒ ∂f /∂x2 − λ ∂(g (x1 , x2 ) − c)/∂x2 = 0
∂L(x1 , x2 , λ)/∂λ = 0 ⇒ g (x1 , x2 ) = c

The necessary conditions for a maximum/minimum of f (x1 , x2 ) are thus given by

∇f (x1 , x2 ) = λ∇h(x1 , x2 ),
h(x1 , x2 ) = 0

where h(x1 , x2 ) = g (x1 , x2 ) − c



Necessary condition for general NLPP

Consider the general NLPP:

Maximize or minimize z = f (x1 , x2 , · · · , xn ) subject to the constraints:

gi (x1 , x2 , · · · , xn ) = bi , i = 1, 2, · · · , m, and xj ≥ 0, j = 1, 2, · · · , n.

The constraints are reduced to hi (x1 , x2 , · · · , xn ) = 0 for i = 1, 2, · · · , m by
the transformation hi (x1 , x2 , · · · , xn ) = gi (x1 , x2 , · · · , xn ) − bi

Thus the problem in the matrix form as

Maximize/minimize z = f (x); x ∈ IRn

subject to
hi (x) = 0, x ≥ 0

To find the necessary condition, the Lagrangian function L(x, λ), is formed by
introducing m Lagrange multipliers λ = (λ1 , λ2 , · · · , λm ).



The function is defined as

L(x, λ) = f (x) − Σ_{i=1}^{m} λi hi (x)

If L, f , hi are partially differentiable with respect to x1 , x2 , · · · , xn and
λ1 , λ2 , · · · , λm , the necessary conditions for maxima/minima of z are

∂L/∂xj = 0 ⇒ ∂f /∂xj − Σ_{i=1}^{m} λi ∂hi /∂xj = 0, j = 1, 2, · · · , n
∂L/∂λi = 0 ⇒ hi (x) = 0, i = 1, 2, · · · , m

That is, the conditions are:

∇f (x) = Σ_{i=1}^{m} λi ∇hi (x)
hi (x) = 0



Example - 01
Consider the minimization problem:
 
min over x:  −exp[ −(x1 x2 − 3/2)² − (x2 − 3/2)² ]

subject to : x1 − x2² = 0

SOLUTION:

We can use the method of Lagrange multipliers to solve the problem,


 
L(x1 , x2 , λ) = −exp[ −(x1 x2 − 3/2)² − (x2 − 3/2)² ] − λ(x1 − x2²)

and compute the gradient as

∂L/∂x1 = 2x2 (x1 x2 − 3/2) exp[ −(x1 x2 − 3/2)² − (x2 − 3/2)² ] − λ

∂L/∂x2 = [ 2x1 (x1 x2 − 3/2) + 2(x2 − 3/2) ] exp[ −(x1 x2 − 3/2)² − (x2 − 3/2)² ] + 2λx2

∂L/∂λ = −(x1 − x2²)

Setting these derivatives to zero and solving yields x1 ≈ 1.358, x2 ≈ 1.165, and
λ ≈ 0.170.
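A hedged numerical check of this example (assuming SciPy is available): `scipy.optimize.minimize` with the SLSQP method handles the equality constraint directly and should recover the same stationary point.

```python
# Hedged sketch: numerically confirm the stationary point of Example 1.
import numpy as np
from scipy.optimize import minimize

# Objective: -exp(-(x1*x2 - 3/2)^2 - (x2 - 3/2)^2)
f = lambda x: -np.exp(-(x[0] * x[1] - 1.5) ** 2 - (x[1] - 1.5) ** 2)
# Equality constraint: x1 - x2^2 = 0
con = {"type": "eq", "fun": lambda x: x[0] - x[1] ** 2}

res = minimize(f, x0=[1.0, 1.0], constraints=[con], method="SLSQP")
print(res.x)  # approximately [1.358, 1.165]
```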

Example - 02
Consider the problem,

min f (X ) =x12 + x22 + x32

subject to

g1 (X ) = x1 + x2 + 3x3 − 2 = 0
g2 (X ) = 5x1 + 2x2 + x3 − 5 = 0



SOLUTION:

The Lagrangian is defined as,


L(X , λ) = x12 + x22 + x32 − λ1 (x1 + x2 + 3x3 − 2) − λ2 (5x1 + 2x2 + x3 − 5)
This yields the following necessary conditions:
∂L
= 2x1 − λ1 − 5λ2 =0
∂x1
∂L
= 2x2 − λ1 − 2λ2 =0
∂x2
∂L
= 2x3 − 3λ1 − λ2 =0
∂x3
∂L
= −(x1 + x2 + 3x3 − 2) =0
∂λ1
∂L
= −(5x1 + 2x2 + x3 − 5) =0
∂λ2
The solution to these equations yields,
X0 =(x1 , x2 , x3 ) = (0.8043, 0.3478, 0.2826)
λ =(λ1 , λ2 ) = (0.0870, 0.3043)
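Because the necessary conditions here are linear in (x1, x2, x3, λ1, λ2), the stationary point can be recovered by solving a 5 × 5 linear system; a sketch with NumPy (assumed available):

```python
# Hedged sketch: solve the linear necessary conditions of Example 2 directly.
import numpy as np

# Unknowns ordered as (x1, x2, x3, lambda1, lambda2).
A = np.array([[2, 0, 0, -1, -5],   # dL/dx1 = 2x1 - l1 - 5*l2 = 0
              [0, 2, 0, -1, -2],   # dL/dx2 = 2x2 - l1 - 2*l2 = 0
              [0, 0, 2, -3, -1],   # dL/dx3 = 2x3 - 3*l1 - l2 = 0
              [1, 1, 3, 0, 0],     # x1 + x2 + 3x3 = 2
              [5, 2, 1, 0, 0]],    # 5x1 + 2x2 + x3 = 5
             dtype=float)
b = np.array([0, 0, 0, 2, 5], dtype=float)

x1, x2, x3, l1, l2 = np.linalg.solve(A, b)
print(round(x1, 4), round(x2, 4), round(x3, 4))  # 0.8043 0.3478 0.2826
```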



Example-03

For a rectangle whose perimeter is 20 m, use the Lagrange multiplier method to
find the dimensions that will maximize the area.

SOLUTION:

With x and y representing the width and height, respectively, of the rectangle,
this problem can be stated as:

max f (x, y ) = xy ,

given: g (x, y ) = 2x + 2y = 20
Then solve the equation ∇f (x, y ) = λ∇g (x, y ), for some λ, by solving the
equations

∂f /∂x = λ ∂g /∂x,  ∂f /∂y = λ ∂g /∂y
⇒ y = 2λ, x = 2λ



The general idea is to solve for λ in both equations, then set those expressions
equal (since they both equal λ ) to solve for x and y . Doing this we get
x/2 = λ = y /2 ⇒ x = y
2 2
so now substitute either of the expressions for x or y into the constraint equation
to solve for x and y :
g (x, y ) = 2x + 2y = 20
⇒ 4x = 20 ⇒ x = 5
There must be a maximum area, since the minimum area is 0 and f (5, 5) = 25, so
the point (5, 5) that we found (called a constrained critical point) must be the
constrained maximum.
The maximum area occurs for a rectangle whose width and height both are 5m.
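The same answer can be checked numerically; a small sketch (assuming SciPy is available) maximizes the area under the perimeter constraint:

```python
# Hedged sketch: maximize the area xy (i.e. minimize -xy) subject to
# the perimeter constraint 2x + 2y = 20.
from scipy.optimize import minimize

res = minimize(lambda v: -v[0] * v[1], x0=[2.0, 8.0],
               constraints=[{"type": "eq",
                             "fun": lambda v: 2 * v[0] + 2 * v[1] - 20}],
               method="SLSQP")
print(res.x, -res.fun)  # approximately [5. 5.] and an area of 25.0
```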



Sufficient condition for GNLPP to have extrema

Sufficient condition for an n variable function to have extrema

Theorem
A sufficient condition for a stationary point X0 , to be an extremum is that the
Hessian H satisfies the following condition:
1 H is positive definite if X0 is a minimum point.
2 H is negative definite if X0 is a maximum point.

Example

Consider the function f (x, y , z) = x + 2z + yz − x 2 − y 2 − z 2

SOLUTION:
The necessary conditions are ∇f = 0:

∂f /∂x = 0 ⇒ 1 − 2x = 0



∂f /∂y = 0 ⇒ z − 2y = 0
∂f /∂z = 0 ⇒ 2 + y − 2z = 0

Thus the critical point is X0 = (1/2, 2/3, 4/3)
To determine the nature of the stationary point, consider the Hessian matrix:
         | ∂²f/∂x²   ∂²f/∂x∂y  ∂²f/∂x∂z |   | −2  0   0 |
H|X0 =   | ∂²f/∂y∂x  ∂²f/∂y²   ∂²f/∂y∂z | = |  0 −2   1 |
         | ∂²f/∂z∂x  ∂²f/∂z∂y  ∂²f/∂z²  |   |  0  1  −2 |

The principal minor determinants of H|X0 are −2, 4, −6 respectively, so H
is negative definite, and X0 = (1/2, 2/3, 4/3) represents the maximum point
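A quick numerical confirmation (assuming NumPy is available): the leading principal minors and the eigenvalues of H at X0 both certify negative definiteness.

```python
# Hedged sketch: check negative definiteness of the Hessian at X0 via its
# leading principal minors and, equivalently, its eigenvalues.
import numpy as np

H = np.array([[-2, 0, 0],
              [0, -2, 1],
              [0, 1, -2]], dtype=float)

minors = [np.linalg.det(H[:k, :k]) for k in (1, 2, 3)]
print([round(m) for m in minors])         # [-2, 4, -6]: signs alternate -, +, -
print(np.all(np.linalg.eigvalsh(H) < 0))  # True: H is negative definite
```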



Sufficient Condition: Single constraint GNLPP

Let the Lagrangian function for general NLPP involving n variable, one constraint
be:

L(x, λ) = f (x) − λh(x)

The necessary conditions for stationary points to be a maximum/minimum are:

∂L/∂xj = ∂f /∂xj − λ ∂h/∂xj = 0, j = 1, 2, · · · , n
∂L/∂λ = −h(x) = 0,

and the value of λ is obtained from

λ = (∂f /∂xj ) / (∂h/∂xj ), j = 1, 2, · · · , n



The sufficient condition for a maximum or minimum requires the evaluation, at
each stationary point, of the (n − 1) principal minors of the determinant given below:

      | 0        ∂h/∂x1                       ∂h/∂x2                       · · ·  ∂h/∂xn                      |
      | ∂h/∂x1   ∂²f/∂x1² − λ ∂²h/∂x1²       ∂²f/∂x1∂x2 − λ ∂²h/∂x1∂x2    · · ·  ∂²f/∂x1∂xn − λ ∂²h/∂x1∂xn  |
H B = | ∂h/∂x2   ∂²f/∂x2∂x1 − λ ∂²h/∂x2∂x1   ∂²f/∂x2² − λ ∂²h/∂x2²        · · ·  ∂²f/∂x2∂xn − λ ∂²h/∂x2∂xn  |
      | ...                                                                                                  |
      | ∂h/∂xn   ∂²f/∂xn∂x1 − λ ∂²h/∂xn∂x1   ∂²f/∂xn∂x2 − λ ∂²h/∂xn∂x2    · · ·  ∂²f/∂xn² − λ ∂²h/∂xn²      |

where H B is known as the bordered Hessian matrix, and ∆n+1 = det(H B )

If ∆3 > 0, ∆4 < 0, ∆5 > 0, · · · , i.e. the signs alternate starting with positive,
the stationary point is a local maximum
If ∆3 < 0, ∆4 < 0, ∆5 < 0, · · · , i.e. the signs are all negative, the
stationary point is a local minimum



Example

Solve the non-linear programming problem :


Minimize z = 2x² − 24x + 2y² − 8y + 2z² − 12z + 200 subject to the constraint:

x + y + z = 11, x, y , z ≥ 0

SOLUTION:

We formulate the Lagrangian as:

L(x, y , z, λ) = 2x² − 24x + 2y² − 8y + 2z² − 12z + 200 − λ(x + y + z − 11)

The necessary conditions for stationary points are

∂L/∂x = 4x − 24 − λ = 0,  ∂L/∂y = 4y − 8 − λ = 0
∂L/∂z = 4z − 12 − λ = 0,  ∂L/∂λ = −(x + y + z − 11) = 0

The solution of these equations yields the stationary point x0 = (6, 2, 3); λ = 0



The sufficient condition for the stationary point to be a minimum is that ∆3 and
∆4 are both negative. Thus

      | 0 1 1 |              | 0 1 1 1 |
∆3 =  | 1 4 0 |  = −8,  ∆4 = | 1 4 0 0 | = −48
      | 1 0 4 |              | 1 0 4 0 |
                             | 1 0 0 4 |

Both are negative; thus x0 = (6, 2, 3) provides the (minimum) solution of the NLPP
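The two minors can be recomputed numerically; a sketch assuming NumPy is available:

```python
# Hedged sketch: recompute the bordered-Hessian minors Delta3 and Delta4
# for the stationary point (6, 2, 3) of the example.
import numpy as np

HB = np.array([[0, 1, 1, 1],
               [1, 4, 0, 0],
               [1, 0, 4, 0],
               [1, 0, 0, 4]], dtype=float)

d3 = np.linalg.det(HB[:3, :3])  # Delta3: drop the last row and column
d4 = np.linalg.det(HB)          # Delta4: the full determinant
print(round(d3), round(d4))     # -8 -48: both negative, so (6, 2, 3) is a minimum
```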



Sufficient Condition: m(< n) constraint GNLPP

Introducing m Lagrange multipliers λ = (λ1 , λ2 , · · · , λm ), the Lagrangian function
can be defined as:

L(x, λ) = f (x) − Σ_{i=1}^{m} λi hi (x)

The necessary conditions for stationary points are

∂L/∂xi = 0,  ∂L/∂λj = 0  (i = 1, 2, · · · , n; j = 1, 2, · · · , m)

Thus the optimization of f (x) subject to the constraints h(x) = 0 is equivalent to
the optimization of L(x, λ).
Let us assume that the functions L(x, λ), f (x), h(x) all possess partial derivatives
of order one and two with respect to the decision variables.



The sufficient condition for a stationary point of f (x), obtained by the Lagrange
multiplier method, to be a maximum or minimum is as follows.
Let

V = [ ∂²L/∂xi ∂xj ] (n×n)

be the matrix of all second-order partial derivatives of L(x, λ) with respect to the
decision variables, and

U = [ ∂hi (x)/∂xj ] (m×n),  (i = 1, 2, · · · , m; j = 1, 2, · · · , n)

Define the matrix

HB = | 0    U |
     | U T  V | ((m+n)×(m+n))

where HB is the Bordered Hessian matrix.



Then the sufficient conditions for a maximum or minimum require the evaluation
at each stationary point, as follows.

Let (x0 , λ0 ) be the stationary point of the function L(x, λ), and let HB0 be the
Bordered Hessian matrix calculated at the stationary point. Then x0 is

a local maximum if, starting with the principal minor of order (2m + 1), the
last (n − m) principal minors of HB0 form an alternating sign pattern
beginning with the sign of (−1)^(m+1)
a local minimum if, starting with the principal minor of order (2m + 1), the
last (n − m) principal minors of HB0 all have the sign of (−1)^m



Example
Optimize z = 4x1² + 2x2² + x3² − 4x1 x2 subject to the constraints:

x1 + x2 + x3 = 15, 2x1 − x2 + 2x3 = 20

SOLUTION:
we have

f (x) = 4x1² + 2x2² + x3² − 4x1 x2 ,  h1 (x) = x1 + x2 + x3 − 15

h2 (x) = 2x1 − x2 + 2x3 − 20


Construct the Lagrangian function as:

L(x, λ) = 4x1² + 2x2² + x3² − 4x1 x2 − λ1 (x1 + x2 + x3 − 15) − λ2 (2x1 − x2 + 2x3 − 20)

Now the stationary point can be obtained from

∂L/∂x1 = 8x1 − 4x2 − λ1 − 2λ2 = 0
∂L/∂x2 = 4x2 − 4x1 − λ1 + λ2 = 0
∂L/∂x3 = 2x3 − λ1 − 2λ2 = 0
∂L/∂λ1 = −[x1 + x2 + x3 − 15] = 0
∂L/∂λ2 = −[2x1 − x2 + 2x3 − 20] = 0
The solution to these equations yields

x0 = (x1 , x2 , x3 ) = (11/3, 10/3, 8),  λ = (λ1 , λ2 ) = (40/9, 52/9)

The bordered Hessian matrix at the point (x0 , λ0 ) is

      | 0  0   1   1  1 |
      | 0  0   2  −1  2 |
H0B = | 1  2   8  −4  0 |
      | 1 −1  −4   4  0 |
      | 1  2   0   0  2 |

Here n = 3 and m = 2, so n − m = 1 and 2m + 1 = 5. This means that one
needs to check only the determinant of H0B itself, which must have the sign of
(−1)² = +1 for a minimum.

Since det(H0B ) = 90 > 0, x0 is the minimum point
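The sign condition can be verified numerically by assembling the bordered Hessian from its blocks; a sketch assuming NumPy is available (`np.block` stacks the 0, U, U-transpose and V blocks):

```python
# Hedged sketch: build the bordered Hessian of the two-constraint example
# and check the sign condition numerically.
import numpy as np

U = np.array([[1, 1, 1],     # gradient of h1 = x1 + x2 + x3 - 15
              [2, -1, 2]])   # gradient of h2 = 2x1 - x2 + 2x3 - 20
V = np.array([[8, -4, 0],    # second derivatives of L w.r.t. x
              [-4, 4, 0],
              [0, 0, 2]])

HB = np.block([[np.zeros((2, 2)), U],
               [U.T, V]])

det = np.linalg.det(HB)
print(det > 0)  # True: the sign matches (-1)^m = (-1)^2, so x0 is a minimum
```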



Constrained optimization with inequality constraints

The Karush–Kuhn–Tucker (KKT) conditions are necessary conditions, and under
suitable convexity assumptions also sufficient conditions, for an optimal solution
of a general NLPP.
One inequality constraint GNLPP
Optimize z = f (x1 , x2 , · · · , xn ) subject to the constraints:

g (x1 , x2 , · · · , xn ) ≤ C , xj ≥ 0, j = 1, 2, · · · , n

The constraint is reduced to h(x1 , x2 , · · · , xn ) ≤ 0 by the
transformation h(x1 , x2 , · · · , xn ) = g (x1 , x2 , · · · , xn ) − C

Thus the problem in the matrix form as

Optimize z = f (x); x ∈ IRn

subject to
h(x) ≤ 0, x ≥ 0



Modify the problem slightly by introducing the slack variable S, defined by
h(x) + S² = 0 (the square is taken so that the added quantity is automatically
non-negative).

Now the problem is restated as

Maximize/minimize z = f (x); x ∈ IRn

subject to
h(x) + S² = 0, x ≥ 0
To determine the stationary points, consider the Lagrangian function L(x, S, λ),
L(x, S, λ) = f (x) − λ(h(x) + S²),
where λ is the Lagrange multiplier. The necessary conditions for stationary points
are:
∂L/∂xj = ∂f /∂xj − λ ∂h/∂xj = 0, j = 1, 2, · · · , n    (2)
∂L/∂λ = −[h(x) + S²] = 0,    (3)
∂L/∂S = −2λS = 0    (4)
Equation (4) states that −2λS = 0, which requires either λ = 0 or S = 0. If S = 0,
then h(x) = 0; thus, combining (3) and (4), λh(x) = 0.
The variable S was introduced merely to convert the inequality constraint into an
equality one, and may therefore be discarded. Moreover, since S² ≥ 0, (3) gives
h(x) ≤ 0.
Whenever h(x) < 0 we get λ = 0, and whenever λ > 0 we get h(x) = 0. However,
λ is unrestricted in sign whenever h(x) = 0.
The necessary condition for a point to be a point of maximum are thus restated as

KKT condition

∂f /∂xj − λ ∂h/∂xj = 0
λh = 0
h ≤ 0
λ ≥ 0



Similar argument for minimization NLPP also,

Minimize z = f (x); x ∈ IRn

subject to
h(x) ≥ 0, x ≥ 0
The constraint is reduced to h(x1 , x2 , · · · , xn ) ≥ 0 by the
transformation h(x1 , x2 , · · · , xn ) = g (x1 , x2 , · · · , xn ) − C

Modify slightly by introducing the surplus variable S0 , defined by h(x) − S0² = 0
(the square is taken so that the subtracted quantity is automatically non-negative)

Now the problem is restated as


minimize z = f (x); x ∈ IRn

subject to
h(x) − S02 = 0, x ≥ 0



To determine the stationary points, consider the Lagrangian function L(x, S0 , λ),

L(x, S0 , λ) = f (x) − λ(h(x) − S0²),

where λ is the Lagrange multiplier. The necessary conditions for stationary points
give the

KKT condition

∂f /∂xj − λ ∂h/∂xj = 0
λh = 0
h ≥ 0
λ ≥ 0



Sufficiency of the KKT conditions
Theorem
The KKT conditions for a maximization NLPP of maximizing f (x) subject to the
constraints h(x) ≤ 0 and x ≥ 0 are sufficient for a maximum of f (x) if f (x) is
concave and h(x) is convex.

Proof.
The result follows if we are able to show that the Lagrange function

L(x, S, λ) = f (x) − λ(h(x) + S 2 ),

where S is defined by h(x) + S 2 = 0, is concave in x under given conditions.


In that case the stationary points from KKT condition must be the global
maximum point.
Now since h(x) + S² = 0, it follows from the necessary conditions that λS² = 0.
Since h(x) is convex and λ ≥ 0, it follows that λh(x) is convex and −λh(x) is
concave. Thus we conclude that f (x) − λh(x), and hence
f (x) − λ(h(x) + S²) = L(x, S, λ), is concave
Theorem
The KKT conditions for a minimization NLPP of minimizing f (x) subject to the
constraints h(x) ≥ 0 and x ≥ 0 are sufficient for a minimum of f (x) if f (x) is
convex and h(x) is concave.

Example
Maximize z = 3.6x − 0.4x 2 + 1.6y − 0.2y 2 subject to the constraint :
2x + y ≤ 10, x, y ≥ 0
SOLUTION:
Here
f (x, y ) = 3.6x − 0.4x² + 1.6y − 0.2y²
h(x, y ) = 2x + y − 10
The KKT conditions are:

∂f /∂xj − λ ∂h/∂xj = 0,  λh = 0,  h ≤ 0,  λ ≥ 0
That is,

3.6 − 0.8x = 2λ
1.6 − 0.4y = λ
λ(2x + y − 10) = 0
2x + y − 10 ≤ 0,  λ ≥ 0

From the third equation, either λ = 0 or 2x + y − 10 = 0.

If λ = 0, then x = 4.5, y = 4; with these values the feasibility condition
2x + y ≤ 10 is not satisfied, so the optimum cannot occur with λ = 0.
Now let λ ̸= 0 ⇒ 2x + y = 10. Solving together with the first two equations
gives x0 = (3.5, 3), with λ = 0.4 ≥ 0.
It is easy to observe that h(x, y ) is convex and f (x, y ) is concave, so the KKT
conditions are sufficient for a maximum.
The maximum value of z is 10.7, at the stationary point (3.5, 3)
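A hedged numerical check (assuming SciPy is available): maximizing z is the same as minimizing −z, and SLSQP accepts the inequality constraint in the form fun(v) ≥ 0.

```python
# Hedged sketch: confirm the KKT solution (3.5, 3) with scipy.optimize.minimize.
from scipy.optimize import minimize

# Negated objective: minimize -z.
z = lambda v: -(3.6 * v[0] - 0.4 * v[0] ** 2 + 1.6 * v[1] - 0.2 * v[1] ** 2)
# SLSQP expects inequality constraints as fun(v) >= 0, i.e. 10 - 2x - y >= 0.
con = {"type": "ineq", "fun": lambda v: 10 - 2 * v[0] - v[1]}

res = minimize(z, x0=[1.0, 1.0], constraints=[con],
               bounds=[(0, None), (0, None)], method="SLSQP")
print(res.x, -res.fun)  # approximately [3.5, 3.0] and a maximum of 10.7
```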



KKT condition for general NLPP: m constraints
Introducing the slack variables S = (S1 , S2 , · · · , Sm ), the Lagrangian function for
the GNLPP with m(< n) constraints is

L(x, S, λ) = f (x) − Σ_{i=1}^{m} λi (hi (x) + Si²)

where λ = (λ1 , λ2 , · · · , λm ) are the Lagrange multipliers.

The necessary conditions for f (x) to be a maximum are

∂L/∂xj = ∂f /∂xj − Σ_{i=1}^{m} λi ∂hi /∂xj = 0, j = 1, 2, · · · , n
∂L/∂λi = −[hi (x) + Si²] = 0, i = 1, 2, · · · , m
∂L/∂Si = −2λi Si = 0, i = 1, 2, · · · , m

Eliminating the slack variables Si as before, these yield the KKT conditions

λi hi (x) = 0, hi (x) ≤ 0, λi ≥ 0, i = 1, 2, · · · , m


