October 10, 2022 1 / 100
Mathematical Formulation
Step 1. Study the given situation to find the key decisions to be made
Step 3. State the feasible alternatives, which generally are xj ≥ 0 for all j
Step 4. Identify the constraints in the problem and express them as linear
inequalities or equations, the LHS of which are linear functions of the decision
variables.
Step 5. Identify the objective function and express it as a linear function of the
decision variables.
SOLUTION:
Step 1. Study the given situation to find the key decisions to be made
Step 2. Variables: Let x1 and x2 denote the number of tables and chairs
respectively.
Step 3. State the feasible alternatives, which here are x1 ≥ 0, x2 ≥ 0
Step 4. Identify the constraints in the problem and express them as linear
inequalities or equations.
The dealer has a space to store at most 60 pieces
x1 + x2 ≤ 60
A linear programming problem involving two decision variables can be easily solved
by the graphical method.
The major steps in the solution of an LPP by the graphical method are as follows:
Step 1. Identify the problem: the decision variables, the objective function and the
restrictions.
Step 2. Set up the mathematical formulation of the problem.
Step 3. Plot the graph representing all the constraints of the problem and
identify the feasible region (solution space). The feasible region is the intersection of
all the regions represented by the constraints of the problem and is restricted to the
first quadrant only.
Step 4. The feasible region may be bounded or unbounded. Compute the
coordinates of all the corner points of the feasible region.
Step 5. Find the value of the objective function at each corner point.
Step 6. Select the corner point that optimizes the value of the objective function.
This gives the optimum feasible solution.
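The corner-point procedure above can be sketched in a few lines of code. This is a minimal illustration on a made-up LP (not one of the examples in this text), assuming NumPy is available: enumerate the pairwise intersections of the constraint lines, keep the feasible ones, and evaluate the objective at each.

```python
import itertools

import numpy as np

# Hypothetical LP for illustration: maximize z = 3x + 2y
# subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# Every constraint (including non-negativity) is written as a_i . p <= b_i.
A = np.array([[1.0, 1.0], [1.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([3.0, 2.0])

corners = []
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:      # parallel lines: no intersection point
        continue
    p = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ p <= b + 1e-9):         # keep only feasible corner points
        corners.append(p)

best = max(corners, key=lambda p: c @ p)  # Step 6: optimize over the corners
print(best, c @ best)
```

Here the maximum z = 12 is attained at the corner (4, 0); for more than two variables this brute-force enumeration is replaced by the simplex method.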
[Figure: feasible region bounded by the lines 3x + y = 30, 4x + 3y = 60 and
x + 2y = 40, with corner points A, B, C, D; the objective z = 20x + 10y attains
approximately 360 at D.]
1. Decision variables
2. Objective function
3. Constraints
4. Non-negative constraints
5. Feasible solutions
6. Infeasible solutions
7. Corner points
2x1 + x2 ≤1000
x1 + x2 ≤800
x1 ≤ 400 and x2 ≤ 700
x1 ≥ 0 and x2 ≥ 0 (non-negativity constraints)
SOLUTION:
Step 2. Now plot these points on the graph and find the feasible region. As each
point has coordinates of the type (x1 , x2 ), any point satisfying the conditions
x1 ≥ 0 and x2 ≥ 0 lies in the first quadrant.
The inequalities are graphed through the corresponding equations: 2x1 + x2 ≤ 1000
is graphed as the line 2x1 + x2 = 1000, and x1 + x2 ≤ 800 as x1 + x2 = 800.
Further, the constraints x1 ≤ 400 and x2 ≤ 700 are plotted on the graph, which
represents the area between the lines x1 = 400 and x2 = 700.
[Figure: feasible region bounded by the lines 2x1 + x2 = 1000, x1 + x2 = 800,
x1 = 400 and x2 = 700, with corner points O, A, B, C, D, E; the maximum
z = 4x1 + 3x2 = 2600.]
Now all the constraints are graphed. The area bounded by all these constraints
is called the feasible region, shown as the shaded region.
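This graphical solution can be cross-checked mechanically. A sketch assuming SciPy is available (`linprog` minimizes, so the objective coefficients are negated):

```python
from scipy.optimize import linprog

# Maximize z = 4*x1 + 3*x2 subject to
# 2*x1 + x2 <= 1000, x1 + x2 <= 800, x1 <= 400, x2 <= 700, x1, x2 >= 0.
res = linprog(
    c=[-4, -3],                                # negate: linprog minimizes
    A_ub=[[2, 1], [1, 1], [1, 0], [0, 1]],
    b_ub=[1000, 800, 400, 700],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)                         # corner (200, 600), z = 2600
```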
Ax = b, xT ∈ Rn
ĈB X̂B ≥ CB XB
Proof.
Let the L.P.P be to determine x so as to
Maximize z=c x, c, xT ∈ Rn
Subject to constraints:
A x = b, x ≥ 0
where A is an m × n real matrix and b, c are m × 1 and 1 × n real matrices,
respectively. Let ρ(A) = m.
Since a feasible solution exists, we must have ρ(A, b) = ρ(A) and m < n.
Let x = (x1 , x2 , . . . , xn ) be a feasible solution so that xj ≥ 0 for all j.
To be precise, let us suppose that x has p positive components and let the
remaining n − p components be zero.
(contd.)
Let us relabel the components so that the positive components are the first p
components, and assume that the columns of A have been relabelled accordingly.
Then
a1 x1 + a2 x2 + · · · + ap xp = b
where a1 , a2 , . . . , ap are the first p columns of A.
Two cases now arise:
(i) The vectors a1 , a2 , . . . , ap form a linearly independent set. Then p ≤ m.
If p = m, the given solution is a non-degenerate basic feasible solution, with
x1 , x2 , . . . , xp as the basic variables.
If p < m, then the set {a1 , a2 , . . . , ap } can be extended to a set
{a1 , a2 , . . . , ap , ap+1 , . . . , am } that forms a basis for the column space of A.
Then we have
a1 x1 + a2 x2 + · · · + am xm = b
where xj = 0 for j = p + 1, p + 2, . . . , m.
Thus we have, in this case, a degenerate basic feasible solution with m − p of
the basic variables zero.
(ii) The vectors a1 , a2 , . . . , ap form a linearly dependent set. Then there exist
scalars α1 , α2 , . . . , αp , not all zero, such that
α1 a1 + α2 a2 + · · · + αp ap = 0
Suppose that for some index r , αr ̸= 0. Then

ar = − Σ_{j=1, j̸=r}^{p} (αj /αr ) aj

Substituting this in a1 x1 + · · · + ap xp = b gives

Σ_{j̸=r} aj xj − Σ_{j̸=r} (αj /αr ) aj xr = b

or

Σ_{j̸=r} ( xj − (αj /αr ) xr ) aj = b
1. Consider now this new feasible solution, which has not more than p − 1 non-zero
components.
2. If the corresponding set of p − 1 columns of A is linearly independent, Case
(i) applies and we have arrived at a basic feasible solution.
3. If this set is again linearly dependent, we may repeat the process to arrive at
a feasible solution with not more than p − 2 non-zero components.
4. The argument can be repeated. Ultimately, we get a feasible solution whose
associated set of column vectors of A is linearly independent.
5. The discussion of Case (i) then applies, and we get a basic feasible solution.
This completes the proof.
Corollary
A necessary and sufficient condition for a basic feasible solution to an LPP
to be optimal (maximum) is that zj − cj ≥ 0 for all j for which aj ∉ B.
Theorem
Any convex combination of k different optimal solutions to an LPP is again an
optimal solution to the problem.
Step 2. Check the objective function to see whether there is some non-basic
variable that would improve the objective function if brought into the basis. If such a
variable exists, go to the next step; otherwise stop.
Step 3. Determine how large the variable found in Step 2 can be made until
one of the basic variables in the current solution becomes zero. Eliminate the latter
variable and let the next trial solution contain the newly found variable instead.
Step 4. Check the current solution for optimality.
Step 5. Continue the iterations until either an optimal solution is attained or
there is an indication that an unbounded solution exists.
Minimum(z) = −Maximum(−z)
Step 2. Check whether all the bi are non-negative; if any bi is negative, multiply
the corresponding constraint inequality by (−1) so that all the bi become
non-negative.
Step 3. Convert all the inequality constraints into equations by introducing slack
and/or surplus variables. Set the cost of these variables equal to zero.
Step.4. Obtain the initial basic feasible solution of the form XB = B−1 b and put it in
the first column of the simplex table.
Step.5. Compute the net evaluations zj − cj (j = 1, 2, · · · , n) by using the relation
zj − cj = CB yj − cj
Examine the sign of zj − cj
If all zj − cj ≥ 0, then the initial basic feasible solution xB is the optimum basic
feasible solution.
If at least one zj − cj < 0, proceed to the next step.
The initial basic feasible solution is now represented as in the simplex table
cj 4 10 0 0 0
CB yB xB x1 [y1 ] x2 [y2 ] s1 [y3 ] s2 [y4 ] s3 [y5 ] θ
0 s1 50 2 1 1 0 0 50/1
← 0 s2 ← 100 2 5 0 1 0 100/5 ←
0 s3 90 2 3 0 0 1 90/3
zj 0 0 0 0 0
zj − cj -4 −10 ↑ 0 0 0
where cj = coefficient of the variables in z, zj = CB × yj , and
θ = xBi / yir , yir > 0, (i = 1, 2, 3); choose the minimum of these.
ŷ21 = y21 /y22 = 2/5; ŷ20 = y20 /y22 = 100/5 = 20; and so on for the rest of the pivot row.
ŷ10 = y10 − (y20 /y22 ) y12 = 50 − (100/5) × 1 = 30
ŷ30 = y30 − (y20 /y22 ) y32 = 90 − (100/5) × 3 = 30
ŷ31 = y31 − (y21 /y22 ) y32 = 2 − (2/5) × 3 = 4/5
ŷ11 = y11 − (y21 /y22 ) y12 = 2 − (2/5) × 1 = 8/5
ŷ14 = y14 − (y24 /y22 ) y12 = 0 − (1/5) × 1 = −1/5
and so on.
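These updates are ordinary Gauss-Jordan row operations on the pivot element y22 = 5. A small sketch of the same computation, assuming NumPy:

```python
import numpy as np

# Rows [xB | x1 x2 s1 s2 s3] of the initial tableau from the example above.
T = np.array([
    [ 50.0, 2.0, 1.0, 1.0, 0.0, 0.0],   # s1 row
    [100.0, 2.0, 5.0, 0.0, 1.0, 0.0],   # s2 row (pivot row, min ratio 100/5)
    [ 90.0, 2.0, 3.0, 0.0, 0.0, 1.0],   # s3 row
])
r, k = 1, 2                     # pivot on y22 = 5: x2 enters, s2 leaves
T[r] /= T[r, k]                 # normalise the pivot row
for i in range(len(T)):
    if i != r:
        T[i] -= T[i, k] * T[r]  # eliminate x2 from every other row
print(T[:, 0])                  # new xB column: [30, 20, 30]
```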
Using these computations, the iterated simplex table is
cj 4 10 0 0 0
CB yB xB x1 x2 s1 s2 s3
0 s1 30 8/5 0 1 -1/5 0
10 x2 20 2/5 1 0 1/5 0
0 s3 30 4/5 0 0 -3/5 1
zj 4 10 0 2 0
zj − c j z(= 200) 0 0 0 2 0
The above simplex table yields a new basic feasible solution with an increased value
of z. Moreover, no further improvement in the value of z is possible, since all
zj − cj ≥ 0.
Hence the optimum feasible solution is x1 = 0, x2 = 20, with maximum z = 200.
We can see that z2 − c2 and z3 − c3 are both negative, but the more negative one
is z3 − c3 . However, x3 cannot enter the basis yB since all the xi3 are non-positive.
Thus we can say that the solution is unbounded.
In order to obtain an initial basic feasible solution, we first put the given
L.P.P. into its standard form, and then a non-negative variable is added to the
left side of each equation that lacks the needed starting basic variables.
The added variable is called an artificial variable and plays the same role as a
slack variable in providing an initial basic feasible solution.
Artificial variables have no physical meaning from the point of view of the original
problem; the method is valid only if we are able to force these variables to
be out, or at zero level, when the optimum solution is attained.
In the first phase of this method, the sum of the artificial variables is minimized, subject
to the given constraints (known as the auxiliary L.P.P.), to get a basic feasible solution to
the original L.P.P. The second phase then optimizes the original objective function starting
with the basic feasible solution obtained at the end of Phase 1.
The iterative procedure of the algorithm may be summarised as below.
Step 1. Write the given L.P.P. in its standard form and check whether there exists a
starting basic feasible solution.
1. If there is a ready starting basic feasible solution, go to Phase 2.
2. If there does not exist a ready starting basic feasible solution, go on to the next step.
PHASE I
Step 2. Add the artificial variable to the left side of the each equation that lacks the
needed starting basic variables. Construct an auxiliary objective function aimed at
minimizing the sum of all artificial variables. Thus, the new objective is to
Minimize z = A1 + A2 + . . . + Am , i.e.
Maximize z ∗ = −A1 − A2 − . . . − Am
where Ai (i = 1, 2, . . . , m) are the non-negative artificial variables.
2x1 + x2 − 6x3 = 20
6x1 + 5x2 + 10x3 ≤76
8x1 − 3x2 + 6x3 ≤50
x1 , x2 , x3 ≥ 0
SOLUTION:
We convert the LPP into standard form by using slack variables s1 ≥ 0 and s2 ≥ 0
and artificial variable A1
Maximize z = 5x1 − 4x2 + 3x3 subject to the constraints:
2x1 + x2 − 6x3 + A1 = 20
6x1 + 5x2 + 10x3 + s1 = 76
8x1 − 3x2 + 6x3 + s2 = 50
x1 , x2 , x3 , s1 , s2 , A1 ≥ 0
An initial basic feasible solution is x1 = x2 = x3 = 0, A1 = 20, s1 = 76, s2 = 50.
Phase: I
Auxiliary LPP
2x1 + x2 − 6x3 + A1 = 20
6x1 + 5x2 + 10x3 + s1 = 76
8x1 − 3x2 + 6x3 + s2 = 50
In matrix form,

[ 2    1   −6   0   0   1 ]
[ 6    5   10   1   0   0 ]  (x1 , x2 , x3 , s1 , s2 , A1 )T = (20, 76, 50)T
[ 8   −3    6   0   1   0 ]
zj − cj = CB yj − cj
max z ∗ = 0 and no artificial variable is present in the basis. In such a case, a basic
feasible solution to the original L.P.P. has been found. Go to Phase 2.
Consider the final simplex table of Phase I, now with the actual costs associated
with the original variables. Delete the artificial variable column A1 from the table,
as it has been eliminated in Phase I.
cj 5 -4 0 0 0
CB yB xB x1 x2 x3 s1 s2
-4 x2 30/7 0 1 -30/7 0 -1/7
0 s1 52/7 0 0 256/7 1 2/7
5 x1 55/7 1 0 -6/7 0 1/14
zj − c j 0 0 69/7 0 13/14
zj − cj = CB yj − cj
Since all zj − cj ≥ 0, an optimal feasible solution has been reached. Hence, an
optimum feasible solution to the given LPP is
x1 = 55/7 x2 = 30/7 x3 = 0
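This two-phase result can be checked numerically, assuming SciPy is available (the equality constraint goes through `A_eq`):

```python
from scipy.optimize import linprog

# Maximize z = 5*x1 - 4*x2 + 3*x3 (negated, since linprog minimizes) s.t.
# 2*x1 + x2 - 6*x3 = 20, 6*x1 + 5*x2 + 10*x3 <= 76, 8*x1 - 3*x2 + 6*x3 <= 50.
res = linprog(
    c=[-5, 4, -3],
    A_eq=[[2, 1, -6]], b_eq=[20],
    A_ub=[[6, 5, 10], [8, -3, 6]], b_ub=[76, 50],
    bounds=[(0, None)] * 3,
)
print(res.x, -res.fun)   # x = (55/7, 30/7, 0), z = 155/7
```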
1 In the optimal solution, all artificial variables must be set equal to zero.
2 To accomplish this, in a min LP, a term MAi is added to the objective
function for each artificial variable Ai . For a max LP, the term −MAi is
added to the objective function for each Ai .
3 M represents some very large number.
2x1 + 3x2 ≤ 30
3x1 + 2x2 ≤ 24
x1 + x2 ≥ 3
x1 , x2 ≥ 0
SOLUTION:
2x1 + 3x2 + s1 = 30
3x1 + 2x2 + s2 = 24
x1 + x2 − s 3 = 3
x1 , x2 , s 1 , s 2 , s 3 ≥ 0
LPP October 10, 2022 46 / 100
We don’t have an initial basic feasible solution, so we introduce the artificial variable
A1 in the third constraint. Then the initial basic feasible solution is s1 = 30, s2 = 24
and A1 = 3.
Now the iterative simplex table
Initial iteration: introduce x1 and drop A1
cj 6 4 0 0 0 -M
CB yB xB x1 x2 s1 s2 s3 A1
0 s1 30 2 3 1 0 0 0
0 s2 24 3 2 0 1 0 0
-M A1 3 1 1 0 0 -1 1
zj -3M -M -M 0 0 M 0
zj − cj -M-6 -M-4 0 0 M 0
It is evident from the net evaluations of the optimum table that the net evaluation
of the non-basic variable x2 is 0. This is an indication that the current solution
is not unique and an alternate solution exists. Thus we bring x2 into the basis instead
of s1 or s3 , thus
cj 6 4 0 0 0
CB yB xB x1 x2 s1 s2 s3
4 x2 42/5 0 1 3/5 -2/5 0
0 s3 39/5 0 0 1/5 1/5 1
6 x1 12/5 1 0 -2/5 3/5 0
zj 48 6 4 0 2 0
zj − cj 0 0 0 2 0
Thus we observe that the optimum value is the same; hence the optimal feasible
solutions are x(1) = [8, 0] and x(2) = [12/5, 42/5]
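Alternate optima can be verified directly: both points lie on the binding constraint 3x1 + 2x2 = 24, and since the objective z = 6x1 + 4x2 = 2(3x1 + 2x2) is proportional to it, every point of that edge is optimal. A quick check:

```python
# Both reported optima for max z = 6*x1 + 4*x2: they lie on the binding
# edge 3*x1 + 2*x2 = 24, so they (and every convex combination) give z = 48.
points = [(8.0, 0.0), (12 / 5, 42 / 5)]
for x1, x2 in points:
    assert abs(3 * x1 + 2 * x2 - 24) < 1e-9   # on the binding constraint
    assert abs(6 * x1 + 4 * x2 - 48) < 1e-9   # same objective value
print("both points give z = 48")
```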
x1 + 5x2 − 3x3 ≥ 15
5x1 − 6x2 + 10x3 ≥0
x1 + x2 + x3 =5
x1 , x2 , x3 ≥ 0
SOLUTION:
Constraint 1: It has a "≥" (greater-than-or-equal) sign, so the surplus variable s1
will be subtracted and the artificial variable A1 will be added.
Constraint 2: It has a negative or null independent term. All coefficients will be
multiplied by −1 and the sign will be changed from "≥" to "≤". In this way it
corresponds to adding the slack variable s2 .
Constraint 3: It has an "=" (equality) sign, so the artificial variable A2 will be
added.
x1 + 5x2 − 3x3 − s1 + A1 = 15
−5x1 + 6x2 − 10x3 + s2 =0
x1 + x2 + x3 + A2 =5
x1 , x2 , x3 , s1 , s2 , A1 , A2 ≥ 0
Initial table
cj 5 -6 -7 0 0 M M
CB yB xB x1 x2 x3 s1 s2 A1 A2
M A1 15 1 5 -3 -1 0 1 0
0 s2 0 -5 6 -10 0 1 0 0
M A2 5 1 1 1 0 0 0 1
zj 20M 2M 6M -2M -M 0 0 0
zj − c j 2M-5 6M+6 -2M+7 -M 0 0 0
Enter the variable x3 ; the variable A2 leaves the basis. The pivot element is 8/3.
Enter the variable x1 ; the variable x3 leaves the basis. The pivot element is
11/16.
The iterations have been completed, and there are still artificial variables in the basis
with values strictly greater than 0, so the problem has no solution (it is infeasible).
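A solver reaches the same verdict. A sketch assuming SciPy (the cj row suggests the objective 5x1 − 6x2 − 7x3, though the objective is irrelevant to feasibility); `status == 2` means infeasible:

```python
from scipy.optimize import linprog

# x1 + 5*x2 - 3*x3 >= 15, 5*x1 - 6*x2 + 10*x3 >= 0, x1 + x2 + x3 = 5, x >= 0.
# The ">=" rows are negated to fit linprog's A_ub @ x <= b_ub convention.
res = linprog(
    c=[-5, 6, 7],
    A_ub=[[-1, -5, 3], [-5, 6, -10]], b_ub=[-15, 0],
    A_eq=[[1, 1, 1]], b_eq=[5],
    bounds=[(0, None)] * 3,
)
print(res.status)   # 2: no feasible point exists
```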
Definition (1)
Maximize z = c1 x1 + c2 x2 + · · · + cn xn .
subject to the constraints:
Dual problem
Minimize z ∗ = b1 w1 + b2 w2 + · · · + bm wm
Dual problem
Maximize z ∗ = b1 w1 + b2 w2 + · · · + bm wm
Minimize z ∗ = b T w , b ∈ Rm subject to AT w ≥ c T , c ∈ Rn
Definition
Dual Form: Find w T ∈ Rm so as to
Minimize g (w ) = b T w , subject to AT w ≥ c T , w ≥ 0, w T , b T ∈ Rm
If x0 (w0 ) is the optimum solution to the primal (dual), then there exists a feasible
solution w0 (x0 ) to the dual (primal) such that
cx0 = b T w0
SOLUTION:
Standard Primal. Introducing the slack variables s1 ≥ 0 and s2 ≥ 0, the standard
LPP is:
Maximize z = 5x1 + 3x2 + 0s1 + 0s2
subject to the constraints:
3w1 + 5w2 ≥ 5
5w1 + 2w2 ≥ 3
w1 + 0w2 ≥ 0 ⇒ w1 ≥ 0
0w1 + w2 ≥ 0 ⇒ w2 ≥ 0
w1 and w2 unrestricted
SOLUTION:
Standard Primal. Introducing the surplus variables s1 ≥ 0 and s2 ≥ 0, the standard
LPP is:
Minimize z = 4x1 + 6x2 + 18x3 + 0s1 + 0s2
subject to the constraints:
x1 + 3x2 − s1 = 3,
0x1 + x2 + 2x3 − s2 = 5,
xj ≥ 0
w1 + 0w2 ≤ 4
3w1 + w2 ≤ 6
0w1 + 2w2 ≤ 18
−w1 + 0w2 ≤ 0
0w1 − w2 ≤ 0
w1 and w2 unrestricted
w1 ≤ 4, 3w1 + w2 ≤ 6, w2 ≤ 9
w1 ≥ 0, w2 ≥ 0
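This primal-dual pair illustrates the duality theorem above (cx0 = bT w0): both problems attain the value 30. A numerical sketch, assuming SciPy is available:

```python
from scipy.optimize import linprog

# Primal: minimize 4*x1 + 6*x2 + 18*x3
#         s.t. x1 + 3*x2 >= 3, x2 + 2*x3 >= 5, x >= 0.
primal = linprog(c=[4, 6, 18],
                 A_ub=[[-1, -3, 0], [0, -1, -2]], b_ub=[-3, -5],
                 bounds=[(0, None)] * 3)

# Dual: maximize 3*w1 + 5*w2 s.t. w1 <= 4, 3*w1 + w2 <= 6, w2 <= 9, w >= 0.
dual = linprog(c=[-3, -5],
               A_ub=[[1, 0], [3, 1], [0, 1]], b_ub=[4, 6, 9],
               bounds=[(0, None)] * 2)

print(primal.fun, -dual.fun)   # both objective values equal 30
```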
z = f (x1 , x2 , · · · , xn ).
gi (x1 , x2 , · · · , xn ) {≥, ≤, or =} bi , i = 1, 2, · · · , m
xj ≥ 0, j = 1, 2, · · · , n
gi (x) {≥, ≤, or =} bi , x ≥ 0
min f0 (x), x ∈ Rn
such that:
fi (x) ≤ 0, i = 1, 2, · · · , m
aiT x =bi , i = 1, 2, · · · , p.
subject to
h(x1 , x2 ) = 0
Taking the total differential of the function at (x1 , x2 ):

df = (∂f /∂x1 ) dx1 + (∂f /∂x2 ) dx2 = 0        (1)

If (x1∗ , x2∗ ) is the solution of the constrained optimization problem, then
h(x1∗ , x2∗ ) = 0 and

dh = (∂h/∂x1 ) dx1 + (∂h/∂x2 ) dx2 = 0  ⇒  dx2 = − [(∂h/∂x1 )/(∂h/∂x2 )] dx1
Putting this in (1):

df = (∂f /∂x1 ) dx1 − (∂f /∂x2 ) [(∂h/∂x1 )/(∂h/∂x2 )] dx1 = 0

[ (∂f /∂x1 )(∂h/∂x2 ) − (∂f /∂x2 )(∂h/∂x1 ) ] dx1 = 0

(∂f /∂x1 ) − (∂f /∂x2 ) (∂h/∂x1 )/(∂h/∂x2 ) = 0

where λ = (∂f /∂x1 )/(∂h/∂x1 ) = (∂f /∂x2 )/(∂h/∂x2 )
Hence, we need to find where the gradient of f and the gradient of h are aligned:
∇f (x) = λ∇h(x)
for some Lagrange multiplier λ. We need the scalar λ because the magnitudes of
the gradients may not be the same.
g (x1 , x2 ) = c, x1 , x2 ≥ 0
where c is a constant.
∇f (x1 , x2 ) = λ∇h(x1 , x2 ),
h(x1 , x2 ) = 0
subject to
hi (x) = 0, x ≥ 0
To find the necessary conditions, the Lagrangian function L(x, λ) is formed by
introducing m Lagrange multipliers λ = (λ1 , λ2 , · · · , λm ).
subject to: x1 − x2² = 0
SOLUTION:
and ∂L/∂λ = x1 − x2²
Setting these derivatives to zero and solving yields x1 ≈ 1.358, x2 ≈ 1.165, and
λ ≈ 0.170.
Example - 02
Consider the problem,
subject to
g1 (X ) =x1 + x2 + 3x3 − 2
g2 (X ) =5x1 + 2x2 + x3 − 5
SOLUTION:
Let x and y represent the width and height, respectively, of the rectangle. The
problem can then be stated as:
max f (x, y ) = xy ,
subject to: g (x, y ) = 2x + 2y = 20
Then we solve the equation ∇f (x, y ) = λ∇g (x, y ) for some λ, i.e. the equations

∂f /∂x = λ ∂g /∂x ,   ∂f /∂y = λ ∂g /∂y
⇒ y = 2λ, x = 2λ

so x = y , and with 2x + 2y = 20 this gives x = y = 5, a maximum area of 25.
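A crude numerical check of this example: eliminate y with the perimeter constraint (y = 10 − x) and scan the area A(x) = x(10 − x) over a grid.

```python
# With 2x + 2y = 20, y = 10 - x, so the area reduces to A(x) = x*(10 - x).
xs = [i / 100 for i in range(1001)]            # grid over [0, 10]
best_x = max(xs, key=lambda x: x * (10 - x))   # maximiser of the area
print(best_x, best_x * (10 - best_x))          # x = 5.0, area 25.0
```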
Theorem
A sufficient condition for a stationary point X0 to be an extremum is that the
Hessian H evaluated at X0 satisfies the following condition:
1. X0 is a minimum point if H is positive definite.
2. X0 is a maximum point if H is negative definite.
Example
SOLUTION:
The necessary conditions are ∇f = 0
∂f /∂x = 0 ⇒ 1 − 2x = 0
Since the principal minor determinants of H|X0 are −2, 4, −6 respectively, H
is negative definite, and X0 = (1/2, 2/3, 4/3) represents the maximum point.
Let the Lagrangian function for a general NLPP involving n variables and one
constraint be:
∂L/∂xj = ∂f /∂xj − λ ∂h/∂xj = 0,   j = 1, 2, · · · , n
∂L/∂λ = −h(x) = 0,
the value of λ is obtained from
λ = (∂f /∂xj ) / (∂h/∂xj ),   j = 1, 2, · · · , n
x + y + z = 11, x, y , z ≥ 0
SOLUTION:
The solution of these equations yields the stationary point x0 = (6, 2, 3); λ = 0
∆3 = | 0 1 1 |
     | 1 4 0 |  = −8,
     | 1 0 4 |

∆4 = | 0 1 1 1 |
     | 1 4 0 0 |
     | 1 0 4 0 |  = −48
     | 1 0 0 4 |
both of which are negative. Thus x0 = (6, 2, 3) provides the solution of the NLPP.
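The two bordered-Hessian determinants can be checked numerically, assuming NumPy:

```python
import numpy as np

# Bordered Hessian from the example; Delta_3 is its leading 3x3 minor.
HB = np.array([[0, 1, 1, 1],
               [1, 4, 0, 0],
               [1, 0, 4, 0],
               [1, 0, 0, 4]], dtype=float)
d3 = np.linalg.det(HB[:3, :3])
d4 = np.linalg.det(HB)
print(round(d3), round(d4))   # -8 -48
```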
Thus the optimization of f (x) subject to the constraint h(x) = 0 is equivalent to
the optimization of L(x, λ).
Let us assume that the functions L(x, λ), f (x), h(x) all possess partial derivatives
of order one and two with respect to the decision variables.
be the matrix of all second-order partial derivatives of L(x, λ) with respect to the
decision variables.
U = [ ∂hi (x)/∂xj ]m×n ,   (i = 1, 2, · · · , m; j = 1, 2, · · · , n)
Let (x0 , λ0 ) be a stationary point of the function L(x, λ), and let HB0 be the
Bordered Hessian matrix evaluated at the stationary point. Then x0 is a
local maximum if, starting with the principal minor of order (2m + 1), the
last (n − m) principal minors of HB0 form an alternating sign pattern starting
with (−1)m+1 ,
and a local minimum if, starting with the principal minor of order (2m + 1), the
last (n − m) principal minors of HB0 all have the sign of (−1)m .
SOLUTION:
we have
g (x1 , x2 , · · · , xn ) ≤ C ,   xi ≥ 0, i = 1, 2, · · · , n
subject to
h(x) ≤ 0, x ≥ 0
subject to
h(x) + S 2 = 0, x ≥ 0
To determine the stationary points, consider the Lagrangian function L(x, S, λ),
L(x, S, λ) = f (x) − λ(h(x) + S 2 ),
where λ is the Lagrange multiplier. The necessary conditions for a stationary point
to be optimal are:
∂L/∂xj = ∂f /∂xj − λ ∂h/∂xj = 0,   j = 1, 2, · · · , n        (2)

∂L/∂λ = −[h(x) + S 2 ] = 0        (3)

∂L/∂S = −2λS = 0        (4)
Equation (4) states that ∂L/∂S = −2λS = 0, which requires either λ = 0 or S = 0.
If S = 0, then h(x) = 0. Thus, from (3) and (4), λh(x) = 0.
The variable S was introduced merely to convert the inequality constraint into an
equality one, and may therefore be discarded. Moreover, since S 2 ≥ 0, (3) gives
h(x) ≤ 0.
Whenever h(x) < 0 we get λ = 0, and whenever λ > 0, h(x) = 0. However, λ is
unrestricted in sign whenever h(x) = 0.
The necessary conditions for a point to be a point of maximum are thus restated as the
KKT conditions
∂f /∂xj − λ ∂h/∂xj = 0
λh = 0
h≤0
λ≥0
subject to
h(x) ≥ 0, x ≥ 0
The constraints are reduced to hi (x1 , x2 , · · · , xn ) ≤ 0 for i = 1, 2, · · · n by the
transformation gi (x1 , x2 , · · · , xn ) − C = hi (x1 , x2 , · · · , xn )
Modify slightly by introducing the slack variable S0 , defined by h(x) − S02 = 0 (S02
is taken so as to ensure non-negativity).
subject to
h(x) − S02 = 0, x ≥ 0
KKT condition
∂f /∂xj − λ ∂h/∂xj = 0
λh = 0
h≥0
λ≥0
Proof.
The result follows if we are able to show that the Lagrange function
Example
Maximize z = 3.6x − 0.4x² + 1.6y − 0.2y² subject to the constraint:
2x + y ≤ 10, x, y ≥ 0
SOLUTION:
Here
f (x) = 3.6x − 0.4x² + 1.6y − 0.2y²
h(x) = 2x + y − 10
The KKT conditions are:
∂f /∂xj − λ ∂h/∂xj = 0,   λh = 0,   h ≤ 0,   λ ≥ 0
That is,
3.6 − 0.8x − 2λ = 0,   1.6 − 0.4y − λ = 0,   λ(2x + y − 10) = 0,
2x + y − 10 ≤ 0,   λ ≥ 0
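Solving this system: λ = 0 would give the unconstrained stationary point (4.5, 4), which violates 2x + y ≤ 10, so the constraint is active and x = 3.5, y = 3, λ = 0.4, z = 10.7. A numerical cross-check, assuming SciPy is available (maximize by minimizing −z):

```python
from scipy.optimize import minimize

# Maximize z = 3.6x - 0.4x^2 + 1.6y - 0.2y^2 s.t. 2x + y <= 10, x, y >= 0.
neg_z = lambda v: -(3.6 * v[0] - 0.4 * v[0] ** 2 + 1.6 * v[1] - 0.2 * v[1] ** 2)
con = {"type": "ineq", "fun": lambda v: 10 - 2 * v[0] - v[1]}  # g(v) >= 0 form
res = minimize(neg_z, x0=[0.0, 0.0], method="SLSQP",
               constraints=[con], bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # x = 3.5, y = 3.0, z = 10.7
```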