Wolfe Method
Consider the quadratic programming problem (QPP) with constraints

∑_{j=1}^{n} a_ij x_j ≤ b_i        …(1)
x_j ≥ 0, for all i and j,        …(2)

where D = {d_jk} is an n × n symmetric matrix (d_jk = d_kj) and A = {a_ij} is an m × n matrix.
The matrix D is symmetric. If D is positive definite (i.e., the quadratic term xᵀDx is positive for all values of x except x = 0), the problem is of the minimization type. If xᵀDx is negative definite (negative for all values of x except x = 0), the problem is of the maximization type. Correspondingly, the objective function of the QPP is strictly convex in x for minimization and strictly concave in x for maximization. If the matrix D is null, the QPP reduces to a standard LPP.
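As a quick numerical aside (my own addition, not part of the text), the definiteness of D can be checked from its eigenvalues; the 2 × 2 matrix used below is a made-up example.

```python
import numpy as np

def classify_quadratic_form(D: np.ndarray) -> str:
    """Classify x^T D x using the eigenvalues of the symmetric matrix D."""
    eig = np.linalg.eigvalsh(D)          # eigenvalues of a symmetric matrix
    if np.all(eig > 0):
        return "positive definite (minimization-type QPP)"
    if np.all(eig < 0):
        return "negative definite (maximization-type QPP)"
    if not np.any(eig):
        return "null matrix (QPP reduces to an LPP)"
    return "indefinite or semi-definite"

# Hypothetical example: D = [[4, 0], [0, 2]] gives a strictly convex quadratic term x^T D x.
print(classify_quadratic_form(np.array([[4.0, 0.0], [0.0, 2.0]])))
```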
Step 1: Introducing slack variables s_i² and r_j² in constraints (1) and (2), the problem becomes
Max f(x) = ∑_{j=1}^{n} c_j x_j − (1/2) ∑_{j=1}^{n} ∑_{k=1}^{n} x_j d_jk x_k

Subject to the constraints,

∑_{j=1}^{n} a_ij x_j + s_i² = b_i,   i = 1, 2, …, m
−x_j + r_j² = 0,                     j = 1, 2, …, n.
Step 2: Form the Lagrangian function

L(x, s, r, λ, μ) = f(x) − ∑_{i=1}^{m} λ_i ( ∑_{j=1}^{n} a_ij x_j + s_i² − b_i ) − ∑_{j=1}^{n} μ_j ( −x_j + r_j² )
Step 3: Differentiate L(x, s, r, λ, μ) partially with respect to x, s, r, λ and μ, and equate the derivatives to zero in order to get the required Kuhn-Tucker necessary conditions.
(i) c − (1/2)(2xᵀD) − λA + μ = 0, or

    c_j − ∑_{k=1}^{n} x_k d_jk − ∑_{i=1}^{m} λ_i a_ij + μ_j = 0,   j = 1, 2, …, n.
(ii) −2λ_i s_i = 0, i.e., λ_i s_i² = 0, or

    λ_i ( ∑_{j=1}^{n} a_ij x_j − b_i ) = 0,   i = 1, 2, …, m.

(iii) −2μ_j r_j = 0, i.e., μ_j r_j² = 0, or μ_j x_j = 0,   j = 1, 2, …, n.

(iv) Ax + s² − b = 0, i.e., Ax ≤ b, or

    ∑_{j=1}^{n} a_ij x_j ≤ b_i,   i = 1, 2, …, m.

(v) −x_j + r_j² = 0, i.e., x_j ≥ 0,   j = 1, 2, …, n.

(vi) λ_i, μ_j, x_j, s_i, r_j ≥ 0.
These conditions, except (ii) and (iii), are linear constraints involving 2(n + m) variables. The conditions μ_j x_j = 0 and λ_i s_i = 0 imply that x_j and μ_j, and likewise s_i and λ_i, cannot both be basic variables at the same time in a non-degenerate basic feasible solution. The conditions μ_j x_j = 0 and λ_i s_i = 0 are called the Complementary Slackness Conditions.
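Numerically, conditions (i)–(vi) can be checked for any candidate point. The following Python sketch is my own addition (function name and tolerances are illustrative); it assumes the maximization form Max cᵀx − ½xᵀDx with Ax ≤ b used above.

```python
import numpy as np

def kkt_residuals(c, D, A, b, x, lam, mu, tol=1e-8):
    """Check Kuhn-Tucker conditions (i)-(vi) for a candidate (x, lambda, mu).

    s_i^2 is recovered as the constraint slack b_i - sum_j a_ij x_j.
    """
    s_sq = b - A @ x                                   # slack in Ax <= b
    stationarity = c - D @ x - A.T @ lam + mu          # condition (i)
    return {
        "stationarity (i)":            np.all(np.abs(stationarity) <= tol),
        "lambda_i * slack_i = 0 (ii)": np.all(np.abs(lam * s_sq) <= tol),
        "mu_j * x_j = 0 (iii)":        np.all(np.abs(mu * x) <= tol),
        "Ax <= b (iv)":                np.all(s_sq >= -tol),
        "x >= 0 (v)":                  np.all(x >= -tol),
        "lambda, mu >= 0 (vi)":        np.all(lam >= -tol) and np.all(mu >= -tol),
    }
```

The optimum found in the worked examples below can be substituted into this check to confirm that all six conditions hold.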
Written out in full, the Lagrangian function is

L(x, s, r, λ, μ) = f(x) − ∑_{i=1}^{m} λ_i ( ∑_{j=1}^{n} a_ij x_j − b_i + s_i² ) − ∑_{j=1}^{n} μ_j ( −x_j + r_j² )

where x = (x1, x2, …, xn), s = (s1, s2, …, sm), r = (r1, r2, …, rn), λ = (λ1, λ2, …, λm), μ = (μ1, μ2, …, μn).
Differentiate L(x, s, r, λ, μ) partially with respect to x, s, r, λ and μ, equate the first-order partial derivatives to zero, and derive the Kuhn-Tucker conditions from the resulting equations.
Step 3: Introduce artificial variables A_j (j = 1, 2, …, n) in the Kuhn-Tucker conditions (i). Then we have

c_j − ∑_{k=1}^{n} x_k d_jk − ∑_{i=1}^{m} λ_i a_ij + μ_j + A_j = 0,   j = 1, 2, …, n.
Step 4: Apply phase I of the simplex method to check the feasibility of the constraints Ax ≤ b. If there is
no feasible solution, terminate the procedure; otherwise obtain an initial basic feasible solution for phase II.
To get the desired feasible solution, solve the following problem:

Minimize Z = A1 + A2 + … + An

subject to the Kuhn-Tucker conditions

c_j − ∑_{k=1}^{n} x_k d_jk − ∑_{i=1}^{m} λ_i a_ij + μ_j + A_j = 0,   j = 1, 2, …, n

∑_{j=1}^{n} a_ij x_j + s_i² = b_i,   i = 1, 2, …, m

λ_i s_i = 0 ⎫
            ⎬ Complementary Slackness Conditions.
μ_j x_j = 0 ⎭
Thus, while deciding on a variable to enter the basis at each iteration, care is taken that the Complementary
Slackness Conditions remain satisfied. This problem has 2(m + n) variables and (m + n) linear constraints,
together with the (m + n) Complementary Slackness Conditions.
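This restricted-entry rule can be stated as a small helper; the function and variable names below are illustrative choices of mine, not notation from the text.

```python
def eligible_entering_vars(candidates, basis, complement):
    """Restricted-entry rule: a non-basic variable may enter the basis only if its
    complementary partner (x_j <-> mu_j, s_i <-> lambda_i) is not currently basic.

    candidates : variable names with negative z_j - c_j (improving columns)
    basis      : set of variable names currently in the basis
    complement : dict mapping each variable to its complementary partner
    """
    return [v for v in candidates if complement.get(v) not in basis]

# Illustrative use, matching the first tableau of the worked example below:
complement = {"x1": "mu1", "mu1": "x1", "x2": "mu2", "mu2": "x2",
              "lam1": "s1", "s1": "lam1", "lam2": "s2", "s2": "lam2"}
basis = {"A1", "A2", "s1", "s2"}           # s1, s2 stand for the slack variables x3, x4
print(eligible_entering_vars(["x1", "lam1", "lam2"], basis, complement))  # -> ['x1']
```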
Step 5: Apply phase II of the simplex method to obtain an optimal solution of the problem formulated in Step 4.
The solution so obtained is an optimum solution of the given QPP.
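To make Steps 3 and 4 concrete, here is a minimal sketch (my own construction; the function name wolfe_phase1_system and the variable ordering are assumptions) that assembles the linear constraint system of the phase-I problem for a QPP in the maximization form used above.

```python
import numpy as np

def wolfe_phase1_system(c, D, A, b):
    """Assemble the linear constraints of the phase-I problem of Step 4.

    Variable order: [x (n), lambda (m), mu (n), s^2 (m), artificial A (n)].
    Rows 1..n     :  D x + A^T lambda - mu + A_art = c   (Kuhn-Tucker condition (i))
    Rows n+1..n+m :  A x + s^2 = b                       (original constraints with slacks)
    The complementary pairs (x_j, mu_j) and (s_i, lambda_i) must still be enforced
    by the restricted-entry rule during the simplex iterations.
    """
    n, m = len(c), len(b)
    top = np.hstack([D, A.T, -np.eye(n), np.zeros((n, m)), np.eye(n)])
    bottom = np.hstack([A, np.zeros((m, m)), np.zeros((m, n)), np.eye(m), np.zeros((m, n))])
    return np.vstack([top, bottom]), np.concatenate([c, b])

# Data of the first worked example below: Max 2x1 + 3x2 - 2x1^2, x1 + 4x2 <= 4, x1 + x2 <= 2.
c = np.array([2.0, 3.0]); D = np.array([[4.0, 0.0], [0.0, 0.0]])
A = np.array([[1.0, 4.0], [1.0, 1.0]]); b = np.array([4.0, 2.0])
M, rhs = wolfe_phase1_system(c, D, A, b)   # 4 constraints in 10 variables, as in the tableau
```

The first two rows of M reproduce the constraints 4x1 + λ1 + λ2 − μ1 = 2 and 4λ1 + λ2 − μ2 = 3 that appear in the example below.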
Example: Apply Wolfe's method to solve the QPP
Maximize Z = 2x1 + 3x2 − 2x1²
Subject to, x1 + 4x2 ≤ 4
x1 + x2 ≤ 2, x1, x2 ≥ 0.
Solution: Introducing slack variables, the constraints become
x1 + 4x2 + s1² = 4
x1 + x2 + s2² = 2
−x1 + r1² = 0
−x2 + r2² = 0
Step 1: Construct the Lagrangian function
L(x1, x2, s1, s2, λ1, λ2, μ1, μ2, r1, r2)
= 2x1 + 3x2 − 2x1² − λ1(x1 + 4x2 + s1² − 4) − λ2(x1 + x2 + s2² − 2) − μ1(−x1 + r1²) − μ2(−x2 + r2²)
The necessary and sufficient conditions for the maximum of L, and hence of Z, are:
∂L/∂x1 = 2 − 4x1 − λ1 − λ2 + μ1 = 0
∂L/∂x2 = 3 − 4λ1 − λ2 + μ2 = 0
∂L/∂s1 = −2λ1s1 = 0          ∂L/∂λ1 = x1 + 4x2 + s1² − 4 = 0
∂L/∂s2 = −2λ2s2 = 0          ∂L/∂λ2 = x1 + x2 + s2² − 2 = 0
∂L/∂r1 = 2μ1r1 = 0           ∂L/∂μ1 = −x1 + r1² = 0
∂L/∂r2 = 2μ2r2 = 0           ∂L/∂μ2 = −x2 + r2² = 0

After simplification, these conditions give
4x1 + λ1 + λ2 − μ1 = 2        (1)
4λ1 + λ2 − μ2 = 3             (2)
x1 + 4x2 + s1² = 4            (3)
x1 + x2 + s2² = 2             (4)
x1, x2, s1², s2², λ1, λ2, μ1, μ2 ≥ 0
μ1x1 = μ2x2 = 0 ⎫
                ⎬ Complementary Slackness Conditions.
λ1s1 = λ2s2 = 0 ⎭
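These partial derivatives can also be reproduced symbolically; the short sympy sketch below is an addition of mine, not part of the original solution.

```python
import sympy as sp

# Symbols for this example: Max Z = 2*x1 + 3*x2 - 2*x1**2, x1 + 4*x2 <= 4, x1 + x2 <= 2.
x1, x2, s1, s2, r1, r2, lam1, lam2, mu1, mu2 = sp.symbols(
    'x1 x2 s1 s2 r1 r2 lam1 lam2 mu1 mu2')

L = (2*x1 + 3*x2 - 2*x1**2
     - lam1*(x1 + 4*x2 + s1**2 - 4)
     - lam2*(x1 + x2 + s2**2 - 2)
     - mu1*(-x1 + r1**2)
     - mu2*(-x2 + r2**2))

# Each stationarity condition above is dL/d(variable) = 0.
for v in (x1, x2, s1, s2, r1, r2, lam1, lam2, mu1, mu2):
    print(f'dL/d{v} =', sp.diff(L, v), '= 0')
```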
By introducing artificial variables A1 and A2 in the first two constraints,
Step 2: The modified LPP becomes
Max Z = −A1 − A2
Subject to, 4x1 + λ1 + λ2 − μ1 + A1 = 2
4λ1 + λ2 − μ2 + A2 = 3
x1 + 4x2 + x3 = 4
x1 + x2 + x4 = 2
where x3 = s1², x4 = s2², and all variables are ≥ 0.
The initial basic feasible solution to the LPP is shown in the table below.
            cj     0    0    0    0    0    0    0    0   −1   −1
CB   yB    xB     x1   x2   x3   x4   λ1   λ2   μ1   μ2   A1   A2
−1   A1     2      4    0    0    0    1    1   −1    0    1    0
−1   A2     3      0    0    0    0    4    1    0   −1    0    1
 0   x3     4      1    4    1    0    0    0    0    0    0    0
 0   x4     2      1    1    0    1    0    0    0    0    0    0
     zj − cj  −5   −4    0    0    0   −5   −2    1    1    0    0
From the above table, it is observed that x1, λ1, λ2 can enter the basis. But λ1 and λ2 will not enter the
basis because x3 (s1²) and x4 (s2²) are in the basis. Introduce x1 and drop A1.
First iteration
            cj     0    0    0    0     0     0     0    0   −1
CB   yB    xB     x1   x2   x3   x4    λ1    λ2    μ1   μ2   A2
 0   x1    1/2     1    0    0    0   1/4   1/4  −1/4    0    0
−1   A2     3      0    0    0    0    4     1     0    −1    1
 0   x3    7/2     0    4    1    0  −1/4  −1/4   1/4    0    0
 0   x4    3/2     0    1    0    1  −1/4  −1/4   1/4    0    0
     zj − cj  −3    0    0    0    0   −4    −1     0     1    0
From the above table, it is observed that neither λ1 nor λ2 can enter the basis, as x3 and x4 are still in
the basis and the Complementary Slackness Conditions must be satisfied. Since μ2 is not in the basis, x2 can enter
the basis (as μ2x2 = 0).
Second iteration
Introduce x2 and drop x3.

            cj     0    0    0    0     0      0      0     0   −1
CB   yB    xB     x1   x2   x3   x4    λ1     λ2     μ1    μ2   A2
 0   x1    1/2     1    0    0    0    1/4    1/4  −1/4     0    0
−1   A2     3      0    0    0    0    4      1      0     −1    1
 0   x2    7/8     0    1   1/4   0  −1/16  −1/16   1/16    0    0
 0   x4    5/8     0    0  −1/4   1  −3/16  −3/16   3/16    0    0
     zj − cj  −3    0    0    0    0   −4     −1      0      1    0
Since x4 (s2²) is in the basis, λ2 cannot enter the basis. Hence λ1 enters the basis.
Third iteration
Drop A2 and introduce λ1.

            cj      0    0    0    0    0     0      0      0
CB   yB    xB      x1   x2   x3   x4   λ1    λ2     μ1     μ2
 0   x1    5/16     1    0    0    0    0   3/16   −1/4   1/16
 0   λ1    3/4      0    0    0    0    1   1/4     0    −1/4
 0   x2    59/64    0    1   1/4   0    0  −3/64   1/16  −1/64
 0   x4    49/64    0    0  −1/4   1    0  −9/64   3/16  −3/64
     zj − cj   0     0    0    0    0    0    0      0      0

Since all zj − cj = 0, phase I is complete and the solution satisfies the Complementary Slackness
Conditions. The optimum solution of the QPP is x1 = 5/16, x2 = 59/64 (with λ1 = 3/4, λ2 = μ1 = μ2 = 0,
s1² = 0, s2² = 49/64), and the maximum value of the objective function is
Max Z = 2(5/16) + 3(59/64) − 2(5/16)² = 409/128.
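As a numerical cross-check (not part of the original text), the same QPP can be handed to a general-purpose solver; the call below uses scipy's SLSQP method.

```python
import numpy as np
from scipy.optimize import minimize

# Cross-check of the worked example:
# Maximize Z = 2*x1 + 3*x2 - 2*x1**2 subject to x1 + 4*x2 <= 4, x1 + x2 <= 2, x >= 0.
res = minimize(
    lambda x: -(2*x[0] + 3*x[1] - 2*x[0]**2),      # minimize the negative objective
    x0=np.array([0.0, 0.0]),
    method="SLSQP",
    bounds=[(0, None), (0, None)],
    constraints=[{"type": "ineq", "fun": lambda x: 4 - x[0] - 4*x[1]},
                 {"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}],
)
print(res.x, -res.fun)   # expected: approximately [5/16, 59/64] and 409/128 ≈ 3.1953
```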
Example: Apply Wolfe's method to solve the QPP
Maximize Z = 2x1 + x2 − x1²
Subject to, 2x1 + 3x2 ≤ 6
2x1 + x2 ≤ 4, x1, x2 ≥ 0.
Solution: Introducing slack variables, the constraints become
(i) 2x1 + 3x2 + s1² = 6
(ii) 2x1 + x2 + s2² = 4
(iii) −x1 + r1² = 0
(iv) −x2 + r2² = 0
Form the Lagrangian function L(x, s, λ, r, μ)
= (2x1 + x2 − x1²) − λ1(2x1 + 3x2 + s1² − 6) − λ2(2x1 + x2 + s2² − 4) − μ1(−x1 + r1²) − μ2(−x2 + r2²)
The necessary and sufficient conditions for the maximum of L, and hence of Z, are:
∂L/∂x1 = 2 − 2x1 − 2λ1 − 2λ2 + μ1 = 0
∂L/∂x2 = 1 − 3λ1 − λ2 + μ2 = 0
∂L/∂s1 = −2λ1s1 = 0
∂L/∂s2 = −2λ2s2 = 0
∂L/∂r1 = 2μ1r1 = 0
∂L/∂r2 = 2μ2r2 = 0
∂L/∂λ1 = 2x1 + 3x2 + s1² − 6 = 0
∂L/∂λ2 = 2x1 + x2 + s2² − 4 = 0
∂L/∂μ1 = −x1 + r1² = 0          ∂L/∂μ2 = −x2 + r2² = 0
After simplifying the conditions, we get
(i) 2x1 + 2λ1 + 2λ2 − μ1 = 2
(ii) 3λ1 + λ2 − μ2 = 1
(iii) 2x1 + 3x2 + s1² = 6
(iv) 2x1 + x2 + s2² = 4
(v) λ1s1 = λ2s2 = 0, μ1x1 = μ2x2 = 0
and x1, x2, λ1, λ2, μ1, μ2, s1, s2 ≥ 0.
Introduce artificial variables A1 and A2 in the first two constraints. The modified LPP becomes
Minimize Z = A1 + A2, or equivalently
Maximize Z = −A1 − A2
Subject to the constraints
(i) 2x1 + 2λ1 + 2λ2 − μ1 + A1 = 2
(ii) 3λ1 + λ2 − μ2 + A2 = 1
(iii) 2x1 + 3x2 + s1² = 6
(iv) 2x1 + x2 + s2² = 4
where λ1s1 = λ2s2 = 0 and μ1x1 = μ2x2 = 0 (Complementary Slackness Conditions),
and x1, x2, s1, s2, A1, A2, λ1, λ2, μ1, μ2 ≥ 0.
The initial basic feasible solution to the LPP is shown in the table below.
            cj     0    0    0    0    0    0    0    0   −1   −1
CB   yB    xB     x1   x2   λ1   λ2   μ1   μ2   s1   s2   A1   A2
−1   A1     2      2    0    2    2   −1    0    0    0    1    0
−1   A2     1      0    0    3    1    0   −1    0    0    0    1
 0   s1     6      2    3    0    0    0    0    1    0    0    0
 0   s2     4      2    1    0    0    0    0    0    1    0    0
     zj − cj      −2    0   −5   −3    1    1    0    0    0    0
From the above table, λ1 (or λ2) would be chosen to enter the basis, but due to the Complementary Slackness
Conditions λ1s1 = λ2s2 = 0 this is not possible, since s1 and s2 are in the basis. Since μ1 is non-basic
(μ1 = 0), x1 can be entered into the basis, with A1 as the leaving variable.
First iteration: Introduce x1 and drop A1.
            cj     0    0    0    0     0    0    0    0   −1
CB   yB    xB     x1   x2   λ1   λ2    μ1   μ2   s1   s2   A2
 0   x1     1      1    0    1    1  −1/2    0    0    0    0
−1   A2     1      0    0    3    1    0    −1    0    0    1
←0   s1     4      0    3   −2   −2    1     0    1    0    0
 0   s2     2      0    1   −2   −2    1     0    0    1    0
     zj − cj       0    0↑  −3   −1    0     1    0    0    0
From the above table, it is observed that λ1 and λ2 cannot enter the basis, and neither can μ1, due to the
Complementary Slackness Conditions. Hence we enter x2 into the basis, because μ2 = 0 (μ2 is non-basic).
Second iteration: Introduce x2 and drop s1.
            cj     0    0     0     0     0    0     0    0   −1
CB   yB    xB     x1   x2    λ1    λ2    μ1   μ2    s1   s2   A2
 0   x1     1      1    0     1     1   −1/2   0     0    0    0
←−1  A2     1      0    0     3     1     0   −1     0    0    1
 0   x2    4/3     0    1   −2/3  −2/3   1/3   0    1/3   0    0
 0   s2    2/3     0    0   −4/3  −4/3   2/3   0   −1/3   1    0
     zj − cj       0    0    −3↑   −1     0    1     0    0    0
Third iteration: Introduce λ1 and drop A2.

            cj     0    0    0     0     0     0     0    0
CB   yB    xB     x1   x2   λ1    λ2    μ1    μ2    s1   s2
 0   x1    2/3     1    0    0    2/3  −1/2   1/3    0    0
 0   λ1    1/3     0    0    1    1/3    0   −1/3    0    0
 0   x2   14/9     0    1    0   −4/9   1/3  −2/9   1/3   0
 0   s2   10/9     0    0    0   −8/9   2/3  −4/9  −1/3   1
     zj − cj       0    0    0     0     0     0     0    0
Since all zj − cj = 0, an optimum solution for phase I is reached. The solution is given by x1 = 2/3,
x2 = 14/9, λ1 = 1/3, λ2 = 0, μ1 = μ2 = 0, s1² = 0, s2² = 10/9.
The solution also satisfies the Complementary Slackness Conditions, and Max Z = 0, so the current solution
is also feasible. The maximum value of the objective function of the given QPP is
Max Z = 2(2/3) + (14/9) − (2/3)² = 22/9.
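As with the first example, the tabulated optimum can be verified numerically; the short numpy check below is my own addition and plugs x1 = 2/3, x2 = 14/9, λ1 = 1/3 into the Kuhn-Tucker conditions.

```python
import numpy as np

# Max Z = 2*x1 + x2 - x1**2, 2*x1 + 3*x2 <= 6, 2*x1 + x2 <= 4; here D = [[2, 0], [0, 0]].
c = np.array([2.0, 1.0]); D = np.array([[2.0, 0.0], [0.0, 0.0]])
A = np.array([[2.0, 3.0], [2.0, 1.0]]); b = np.array([6.0, 4.0])

x = np.array([2/3, 14/9]); lam = np.array([1/3, 0.0]); mu = np.zeros(2)

slack = b - A @ x                                       # s_i^2 = b_i - sum_j a_ij x_j
print("stationarity:", c - D @ x - A.T @ lam + mu)      # condition (i); should be ~[0, 0]
print("complementary slackness:", lam * slack, mu * x)  # should all be ~0
print("objective:", c @ x - 0.5 * x @ D @ x)            # should be 22/9 ≈ 2.444
```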
EXERCISES
1. Use Wolfe's method to solve
Maximize Z = 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2²
Subject to, x1 + 2x2 ≤ 2, x1, x2 ≥ 0.
(Ans. x1 = 1/3, x2 = 5/6, λ1 = 1, μ1 = μ2 = 0, Max Z = 75/18.)
2. Use Wolfe's method to solve
Maximize Z = 2x + y − x²
Subject to, 2x + 3y ≤ 6
2x + y ≤ 4, x, y ≥ 0.