
Nonlinear Programming

Presented by

Dr. Deepak Gupta


Assistant Professor, SM-IEEE
Department of CSE
MNNIT Allahabad, Prayagraj, India
deepakg@mnnit.ac.in; deepakjnu85@gmail.com
Non-linear Programming
• A general non-linear programming problem (NLPP) is formulated as

  Maximize/Minimize f(X)
  subject to: g_i(X) (≤, =, ≥) b_i,   i = 1, 2, ..., m

  where either f(X) or at least one of the constraints g_i(X) is nonlinear in form.

For example:

  Maximize f(X) = x1^2 + x2^2 − x1·x2 + 5
  subject to: x1 + 2x2 = 5
              x1, x2 ≥ 0

  Maximize f(X) = 2x1 + 2x2 − 3x3
  subject to: x1^2 + x2^2 = 1
              2x1 − 4x2 + 3x3 ≤ 5
              x1, x2, x3 ≥ 0

  Minimize f(X) = e^(x1+2) + e^(x2−4)
  subject to: x1 + 4x2 = 11
              x1, x2 ≥ 0
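Such a problem can also be handed to a general-purpose numerical solver. The sketch below (a minimal illustration assuming NumPy and SciPy are available; the starting point and variable names are arbitrary choices) encodes the first example. A local method such as SLSQP only returns a local optimum of this non-convex problem, so it illustrates how the pieces of an NLPP fit together rather than the theory developed in these slides.

  # Encoding the first example NLPP: maximize x1^2 + x2^2 - x1*x2 + 5
  # subject to x1 + 2*x2 = 5 and x1, x2 >= 0.
  import numpy as np
  from scipy.optimize import minimize

  def f(x):
      x1, x2 = x
      return x1**2 + x2**2 - x1*x2 + 5

  eq_con = {"type": "eq", "fun": lambda x: x[0] + 2*x[1] - 5}   # x1 + 2*x2 = 5
  bounds = [(0, None), (0, None)]                               # x1, x2 >= 0

  # maximize f  <=>  minimize -f
  res = minimize(lambda x: -f(x), x0=np.array([1.0, 2.0]),
                 method="SLSQP", bounds=bounds, constraints=[eq_con])
  print(res.x, -res.fun)   # a local maximizer and its objective value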
Quadratic form/ function
Suppose we have n variables. The quadratic (quadric) form can be written as

  f(X) = a11·x1^2 + a22·x2^2 + ... + ann·xn^2
       + a12·x1·x2 + a13·x1·x3 + ... + a1n·x1·xn
       + ...
       + an1·xn·x1 + an2·xn·x2 + ... + a(n−1)n·x(n−1)·xn

We can express the given quadratic form as

  f(X) = X^T A X

where A = (a_ij)_(n×n) is a real symmetric matrix of order n whose entries are

  A_ii = a_ii            (the coefficient of x_i^2)
  A_ij = A_ji = a_ij/2   for i ≠ j   (half the coefficient of x_i·x_j), so that a_ij = a_ji.
For example 1:

  f(X) = x1^2 − 2x2^2 + x1·x2 − 2x2·x3 + 5x1·x3

Here we have only 3 variables x1, x2, x3, and using A_ii = a_ii, A_ij = A_ji = a_ij/2:

                      [ 1    0.5   2.5 ] [x1]
  f(X) = [x1 x2 x3] · [ 0.5  −2    −1  ] [x2]
                      [ 2.5  −1     0  ] [x3]
Example 2:

  f(X) = −3x1^2 + 2x2^2 − 3x3^2

So, we get
                      [ −3  0   0 ] [x1]
  f(X) = [x1 x2 x3] · [  0  2   0 ] [x2]
                      [  0  0  −3 ] [x3]

Example 3:

  f(X) = x1^2 − 2x2 + x1 + 10

So, we get
                   [ 1  0 ] [x1]             [x1]
  f(X) = [x1 x2] · [ 0  0 ] [x2] + [1  −2] · [x2] + 10
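As a quick numerical sanity check of this construction, the sketch below (assuming NumPy; the test point is arbitrary) builds the symmetric matrix A of Example 1 and verifies that X^T A X reproduces f(X).

  # f(X) = x1^2 - 2*x2^2 + x1*x2 - 2*x2*x3 + 5*x1*x3 and its symmetric matrix A.
  import numpy as np

  A = np.array([[1.0,  0.5,  2.5],
                [0.5, -2.0, -1.0],
                [2.5, -1.0,  0.0]])

  def f(x):
      x1, x2, x3 = x
      return x1**2 - 2*x2**2 + x1*x2 - 2*x2*x3 + 5*x1*x3

  x = np.array([1.0, -2.0, 3.0])   # any test point
  print(f(x), x @ A @ x)           # both print 18.0: the two expressions agree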
Definiteness

A quadratic form X^T A X is said to be

Positive definite        if X^T A X > 0 for all X ≠ 0.

Positive semi-definite   if X^T A X ≥ 0 for all X ≠ 0,
                         and there is at least one X ≠ 0 for which X^T A X = 0.

Negative definite        if X^T A X < 0 for all X ≠ 0.

Negative semi-definite   if X^T A X ≤ 0 for all X ≠ 0,
                         and there is at least one X ≠ 0 for which X^T A X = 0.

Indefinite: a quadratic form which is neither positive nor negative (semi-)definite is
called indefinite.
Positive Definite
If X^T A X > 0 for all X ≠ 0.
For example:
  f(X) = x1^2 + 2x2^2 + 3x3^2 is positive definite,
  since f(X) > 0 for all X = (x1, x2, x3) ≠ (0, 0, 0).

Negative Definite
If X^T A X < 0 for all X ≠ 0.
For example:
  f(X) = −x1^2 − 2x2^2 − 3x3^2 is negative definite,
  since f(X) < 0 for all X = (x1, x2, x3) ≠ (0, 0, 0).

Positive semi-definite
If X^T A X ≥ 0 for all X ≠ 0, and at least one X ≠ 0 gives X^T A X = 0.
For example:
  f(X) = x1^2 + (x2 − 2)^2 + 5x3^2 is positive semi-definite,
  since f(X) ≥ 0 for all X = (x1, x2, x3) and f(X) = 0 for X = (0, 2, 0).

Negative semi-definite
If X^T A X ≤ 0 for all X ≠ 0, and at least one X ≠ 0 gives X^T A X = 0.
For example:
  f(X) = −x1^2 − (x2 − 2)^2 − 5x3^2 is negative semi-definite,
  since f(X) ≤ 0 for all X = (x1, x2, x3) and f(X) = 0 for X = (0, 2, 0).
Indefinite: A function which is neither positive nor negative definite is called indefinite.

For example:
  f(X) = x1^2 − 2x2^2
  At X = (1, 1):  f(X) = −1 < 0
  At X = (3, −1): f(X) = 7 > 0
Hence it is indefinite.

Remark: Sometimes it is not easy to check the definiteness of a function quickly.
For example:
  f(X) = 3x1^2 + 2x2^2 − 6x3^2 + x1·x2 − 2x2·x3 + 9x1·x3 + 100
Clearly, the definiteness of the function f(X) depends on the nature of the matrix A
in X^T A X. So, there are two methods to check the definiteness of the function:

1. Matrix Minor Test


2. Eigen Value Test

1. Matrix Minor Test: For any square matrix A, we define the leading principal minors
D_m (the determinant of the m×m submatrix that contains the first m elements of the
principal diagonal):

       [ a11  a12  a13 ]
  A  = [ a21  a22  a23 ]
       [ a31  a32  a33 ]

  D1 = a11

       | a11  a12 |
  D2 = | a21  a22 |

       | a11  a12  a13 |
  D3 = | a21  a22  a23 |
       | a31  a32  a33 |
A function f(X) = X^T A X is said to be

Positive definite        D1 > 0, D2 > 0, D3 > 0, ...

Positive semi-definite   D1 ≥ 0, D2 ≥ 0, D3 ≥ 0, ...,
                         with at least one Di = 0 for i = 2, 3, ..., n.

Negative definite        D1 < 0, D2 > 0, D3 < 0, ...
                         i.e. alternating signs, with the first principal minor negative.

Negative semi-definite   D1 ≤ 0, D2 ≥ 0, D3 ≤ 0, ...,
                         i.e. alternating signs, with at least one Di = 0 for i = 2, 3, ..., n.

Indefinite: if none of the above sign patterns holds (for example D1 > 0, D2 < 0, ...,
or D1 < 0, D2 < 0, ..., and so on).
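The matrix minor test is easy to mechanize. A minimal sketch (assuming NumPy; the helper name leading_principal_minors is an illustrative choice), shown on the matrix of the worked example a few slides below:

  # Leading principal minors D1, ..., Dn of a square matrix A.
  import numpy as np

  def leading_principal_minors(A):
      return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

  A = np.array([[-3.0, -5.0,  3.0],
                [-5.0,  2.0,  2.0],
                [ 3.0,  2.0, -3.0]])
  print(leading_principal_minors(A))   # approximately [-3, -31, 27]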
2. Eigen Value Test: Since the matrix A in X^T A X is a real symmetric matrix, all its
eigen values (λi) are real. Thus X^T A X is:

Positive definite        if all λi > 0.

Positive semi-definite   if all λi ≥ 0 and at least one λi = 0.

Negative definite        if all λi < 0.

Negative semi-definite   if all λi ≤ 0 and at least one λi = 0.

Indefinite: if matrix A has both positive and negative eigen values.
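A minimal sketch of the eigen value test (assuming NumPy; the function name and the tolerance are illustrative choices, not part of the slides):

  # Classify the definiteness of a real symmetric matrix from its eigenvalues.
  import numpy as np

  def definiteness(A, tol=1e-10):
      lam = np.linalg.eigvalsh(A)        # real eigenvalues of a symmetric matrix
      if np.all(lam > tol):
          return "positive definite"
      if np.all(lam < -tol):
          return "negative definite"
      if np.all(lam >= -tol):
          return "positive semi-definite"
      if np.all(lam <= tol):
          return "negative semi-definite"
      return "indefinite"

  print(definiteness(np.diag([1.0, 2.0, 3.0])))              # positive definite
  print(definiteness(np.array([[1.0, 0.0], [0.0, -2.0]])))   # indefinite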


Example:
  f(X) = −3x1^2 + 2x2^2 − 3x3^2 − 10x1·x2 + 4x2·x3 + 6x1·x3

Here we have only 3 variables x1, x2, x3, so we can write f(X) = X^T A X with

       [ −3  −5   3 ]
  A  = [ −5   2   2 ]
       [  3   2  −3 ]

Hence, the principal minors are:

  D1 = −3 < 0

       | −3  −5 |
  D2 = | −5   2 | = −31 < 0

       | −3  −5   3 |
  D3 = | −5   2   2 | = 27 > 0
       |  3   2  −3 |

Since D1 < 0, D2 < 0 and D3 > 0, none of the sign patterns holds, so the function is
indefinite.
Example:
  f(X) = −x1^2 − x2^2 + 2x1 + 3x2 + 2

So, we get
                   [ −1   0 ] [x1]            [x1]
  f(X) = [x1 x2] · [  0  −1 ] [x2] + [2  3] · [x2] + 2

1. Matrix Minor Test
  D1 = −1 < 0
       | −1   0 |
  D2 = |  0  −1 | = 1 > 0
  Since D1 < 0 and D2 > 0, the function is negative definite.

2. Eigen Value Test
  The eigen values are −1, −1. As both are negative, it is negative definite.

Example: Check the definiteness of the function
  f(X) = 3e^(2x1+1) + 2e^(x2+5)

This f(X) is not a quadratic form, so we are unable to write it as X^T A X: the matrix A
is not defined. For such functions we use the Hessian matrix, introduced next.
Hessian Matrix
For a function f(X) of n variables with f(X) ∈ C^2, where C^2 is the space of all real
functions whose second-order partial derivatives are continuous, the Hessian matrix of
f(X) is the n×n symmetric matrix of second-order partial derivatives defined by

       [ ∂²f/∂x1²     ∂²f/∂x1∂x2   ...   ∂²f/∂x1∂xn ]
       [ ∂²f/∂x2∂x1   ∂²f/∂x2²     ...   ∂²f/∂x2∂xn ]
  H  = [     .             .        .        .      ]
       [ ∂²f/∂xn∂x1   ∂²f/∂xn∂x2   ...   ∂²f/∂xn²   ]

Remark: For a quadratic function f(X) = X^T A X, we have

  H = 2A

So we can check the definiteness of the function from the Hessian matrix H, using the
principal minor test or the eigen value test, exactly as for A.
Example: Construct the Hessian matrix for the function and check its definiteness
  f(X) = 3e^(2x1+1) + 2e^(x2+5)

We know:
       [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]   [ 12e^(2x1+1)      0      ]
  H  = [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ] = [      0       2e^(x2+5)  ]

Now, the principal minors are:

  D1 = 12e^(2x1+1) > 0

       | 12e^(2x1+1)      0      |
  D2 = |      0       2e^(x2+5)  | = 24e^(2x1+x2+6) > 0

Since D1 > 0 and D2 > 0, the function is positive definite.
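The same computation can be reproduced symbolically. A small sketch assuming SymPy is available:

  # Hessian of f(X) = 3*e^(2*x1+1) + 2*e^(x2+5) and its leading principal minors.
  import sympy as sp

  x1, x2 = sp.symbols('x1 x2', real=True)
  f = 3*sp.exp(2*x1 + 1) + 2*sp.exp(x2 + 5)

  H = sp.hessian(f, (x1, x2))
  print(H)                               # diagonal: 12*exp(2*x1+1) and 2*exp(x2+5)
  print(H[0, 0], sp.simplify(H.det()))   # D1 > 0 and D2 = 24*exp(2*x1+x2+6) > 0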
Example: Construct the Hessian matrix for the function and check its definiteness
  f(X) = 2x1^2 + x2^2 + 3x3^2 + 10x1 + 8x2 + 6x3 − 100

       [ 4  0  0 ]
  H  = [ 0  2  0 ]
       [ 0  0  6 ]

Hence, the principal minors of H are:

  D1 = 4 > 0

       | 4  0 |
  D2 = | 0  2 | = 8 > 0

       | 4  0  0 |
  D3 = | 0  2  0 | = 48 > 0
       | 0  0  6 |

Since D1 > 0, D2 > 0 and D3 > 0, the function is positive definite.
Example: Construct the Hessian matrix for the function and check its definiteness
  f(X) = −x1^2 − x2^2 + 6x1 + 8x2

Try this one yourself.
Convex Function
A function f : S → R is said to be convex if

  f(λX1 + (1 − λ)X2) ≤ λ·f(X1) + (1 − λ)·f(X2)

for any X1, X2 ∈ S, where S is a convex set and λ ∈ [0, 1].

Geometrically, for a convex function the chord joining any two points on the graph lies
on or above the graph (equivalently, every tangent line lies below the graph); if the
opposite holds, the function is concave.
For one-dimensional functions

  If f''(x) ≥ 0 at all points of the domain, then f is convex.
    e.g. f(x) = x^2

  If f''(x) ≤ 0 at all points of the domain, then f is concave.
    e.g. f(x) = log(x), √x, 1 − x^2

  If f''(x) > 0 at some points of the domain and f''(x) < 0 at others, then f is neither
  convex nor concave.
    e.g. f(x) = x^3, sin x
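A small SymPy sketch illustrating the three second-derivative cases just listed (the sample functions follow the examples above):

  # Second derivatives of x^2, log(x) and x^3.
  import sympy as sp

  x = sp.symbols('x', real=True)
  for f in (x**2, sp.log(x), x**3):
      print(f, sp.diff(f, x, 2))
  # x**2   -> 2        (>= 0 everywhere: convex)
  # log(x) -> -1/x**2  (<= 0 on the domain x > 0: concave)
  # x**3   -> 6*x      (changes sign: neither convex nor concave)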
For higher-dimensional functions: we will use the Hessian matrix H.

  f(X) is convex           if H is positive semi-definite
                           (D1 ≥ 0, D2 ≥ 0, D3 ≥ 0, ..., with at least one Di = 0
                           for i = 2, 3, ..., n).

  f(X) is strictly convex  if H is positive definite
                           (D1 > 0, D2 > 0, D3 > 0, ...).

  f(X) is concave          if −f(X) is convex, i.e. H is negative definite
                           (D1 < 0, D2 > 0, D3 < 0, ..., alternating signs with the
                           first principal minor negative).
Remark:

  A convex function is always associated with a minimization problem (it attains a
  minimum); a concave function is always associated with a maximization problem (it
  attains a maximum).

  If f''(x) ≥ 0 at all points of the domain, the function is convex.
  If f''(x) ≤ 0 at all points of the domain, the function is concave.

Example: Construct the Hessian matrix for the function and check its definiteness
  f(X) = x1^2 + x2^2 + x1 + x2 − 1

       [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]   [ 2  0 ]
  H  = [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ] = [ 0  2 ]

Hence, the principal minors of H are:
  D1 = 2 > 0
       | 2  0 |
  D2 = | 0  2 | = 4 > 0

Since D1 > 0 and D2 > 0, the function is positive definite.
Thus f(X) is strictly convex; hence it gives a minimum.
Example: Construct the Hessian matrix for the function and check its definiteness
  f(X) = 4x1 + 6x2 − 2x1^2 − 2x1·x2 − 2x2^2 + 41

       [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]   [ −4  −2 ]
  H  = [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ] = [ −2  −4 ]

Hence, the principal minors of H are:
  D1 = −4 < 0
       | −4  −2 |
  D2 = | −2  −4 | = 12 > 0

Since D1 < 0 and D2 > 0, the function is negative definite.
Thus f(X) is strictly concave; hence it gives a maximum.
Remark/ Properties
  Every U-shaped function is a convex function, e.g. x^2.
  A function which is both convex and concave has to be a linear function.
  A convex/concave function need not be differentiable.
    For example, f(x) = |x|, x ∈ R, is convex but not differentiable.
  If f and g are two convex functions over a convex set, then
    f + g,  αf (α ≥ 0)  and  max(f, g)  are also convex functions.
    For example, f(x) = −x + 2x^2 + |x| is a convex function: −x is linear (hence
    convex), 2x^2 has f'' > 0 (convex), and |x| is convex (see the sketch below for a
    numerical spot-check).
  If f and g are two concave functions, then
    f + g,  αf (α ≥ 0)  and  min(f, g)  are also concave functions.
    For example, f(x) = −x − 2x^2 + log(x) is a concave function: −x is linear (hence
    concave), −2x^2 has f'' < 0 (concave), and log(x) has f'' < 0 (concave).
  If f is a convex function defined on a convex set, then every local minimum of f is a
  global minimum of f.
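The numerical spot-check promised above: a sketch (assuming NumPy; the sample range and count are arbitrary) that tests the convexity inequality for f(x) = −x + 2x^2 + |x| on random points.

  # Check f(a*u + (1-a)*v) <= a*f(u) + (1-a)*f(v) for many random u, v, a.
  import numpy as np

  f = lambda x: -x + 2*x**2 + np.abs(x)

  rng = np.random.default_rng(0)
  ok = True
  for _ in range(1000):
      u, v = rng.uniform(-5, 5, size=2)
      a = rng.uniform(0, 1)
      ok &= f(a*u + (1 - a)*v) <= a*f(u) + (1 - a)*f(v) + 1e-12
  print(ok)   # True: no violation found on these samples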
Maxima and Minima of the function
Optimization can be defined as the process of finding the conditions that give the
maximum or minimum of a function f(x).

Necessary Condition:
  f'(x) = 0. After solving, we get x = x*, called the stationary point(s).

Sufficient Condition:
  Let f'(x*) = f''(x*) = ... = f^(n−1)(x*) = 0 but f^(n)(x*) ≠ 0. Then f(x*) is

    a minimum value of f(x) if f^(n)(x*) > 0 and n is even;
    a maximum value of f(x) if f^(n)(x*) < 0 and n is even;
    neither a minimum nor a maximum if n is odd.
Example: f(x) = x^4

We have:
  f'(x) = 4x^3, so f'(x) = 0 gives x = 0.
  After solving, we get the stationary point x = x* = 0.

  f''(x) = 12x^2   ⇒  f''(x*) = 0
  f'''(x) = 24x    ⇒  f'''(x*) = 0
  f''''(x) = 24    ⇒  f''''(x*) = 24 > 0

The first non-vanishing derivative is of even order (n = 4) and positive, hence x = 0 is
a minimum point.
Example: Find the maximum or minimum of the function
  f(x) = 12x^5 − 45x^4 + 40x^3 + 5

Necessary Condition:
  f'(x) = 60x^4 − 180x^3 + 120x^2 = 60x^2(x^2 − 3x + 2) = 60x^2(x − 1)(x − 2)
  Hence f'(x) = 0 gives x = 0, 1, 2.

Sufficient Condition:
  f''(x) = 60(4x^3 − 9x^2 + 4x),   f'''(x) = 60(12x^2 − 18x + 4)

  At x = 0: f''(0) = 0 and f'''(0) = 240 ≠ 0. The first non-vanishing derivative is of
            odd order (n = 3), so x = 0 is neither a minimum nor a maximum.
  At x = 1: f''(1) = −60 < 0, so x = 1 is a maximum, with f(1) = 12.
  At x = 2: f''(2) = 240 > 0, so x = 2 is a minimum, with f(2) = −11.
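A SymPy sketch reproducing this higher-order derivative test: at each stationary point it searches for the first non-vanishing derivative and reports its order and value.

  # f(x) = 12x^5 - 45x^4 + 40x^3 + 5 at the stationary points x = 0, 1, 2.
  import sympy as sp

  x = sp.symbols('x', real=True)
  f = 12*x**5 - 45*x**4 + 40*x**3 + 5

  for x0 in (0, 1, 2):
      n = 1
      while sp.diff(f, x, n).subs(x, x0) == 0:   # first non-vanishing derivative
          n += 1
      print(x0, n, sp.diff(f, x, n).subs(x, x0))
  # (0, 3, 240): n odd, neither; (1, 2, -60): maximum; (2, 2, 240): minimum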
For higher-dimensional functions: we will use the Hessian matrix H.

Necessary Condition: all the first-order partial derivatives are zero,

  ∂f/∂x1 = 0,  ∂f/∂x2 = 0,  ...,  ∂f/∂xn = 0,

and we solve these equations to find the stationary points.

Sufficient Condition: we compute the Hessian matrix H at each stationary point.

  H positive semi-definite (D1 ≥ 0, D2 ≥ 0, ..., with at least one Di = 0 for
  i = 2, 3, ..., n):  f(X) is convex, so the point is a minimum point.

  H positive definite (D1 > 0, D2 > 0, D3 > 0, ...):  f(X) is strictly convex, so the
  point is a minimum point.

  H negative definite (D1 < 0, D2 > 0, D3 < 0, ..., alternating signs, first principal
  minor negative):  f(X) is concave (−f(X) is convex), so the point is a maximum point.
Example: Find the maximum or minimum of the function
  f(X) = 4x1 + 6x2 − 2x1^2 − 2x1·x2 − 2x2^2 + 41

Necessary Condition:
  ∂f/∂x1 = 0  ⇒  4 − 4x1 − 2x2 = 0
  ∂f/∂x2 = 0  ⇒  6 − 2x1 − 4x2 = 0

Thus the stationary point is (x1, x2) = (1/3, 4/3).

Sufficient Condition:
                                  [ −4  −2 ]
  The Hessian matrix will be H  = [ −2  −4 ]

Hence, the principal minors of H are:
  D1 = −4 < 0
       | −4  −2 |
  D2 = | −2  −4 | = 12 > 0

Since D1 < 0 and D2 > 0, H is negative definite, so f(X) is strictly concave.
Hence (1/3, 4/3) is a maximum point.
Example: Find the maximum or minimum of the function
  f(X) = x1^3 + x2^3 + 2x1^2 + 4x2^2 + 6

Necessary Condition:
  ∂f/∂x1 = 0  ⇒  x1(3x1 + 4) = 0
  ∂f/∂x2 = 0  ⇒  x2(3x2 + 8) = 0

Thus the stationary points are (0, 0), (0, −8/3), (−4/3, 0), (−4/3, −8/3).

Sufficient Condition:
                                  [ 6x1 + 4      0     ]
  The Hessian matrix will be H  = [    0      6x2 + 8  ]

Hence, the principal minors of H are:
  D1 = 6x1 + 4
       | 6x1 + 4      0     |
  D2 = |    0      6x2 + 8  | = (6x1 + 4)(6x2 + 8)

  Point X        D1    D2    Nature of H          Nature of X     f(X)
  (0, 0)          4    32    Positive definite    Minimum         6
  (0, −8/3)       4   −32    Indefinite           Saddle point    −
  (−4/3, 0)      −4   −32    Indefinite           Saddle point    −
  (−4/3, −8/3)   −4    32    Negative definite    Maximum         50/3

Hence,
  Minimum at (0, 0), i.e. f = 6
  Maximum at (−4/3, −8/3), i.e. f = 50/3
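A SymPy sketch reproducing the table above (variable names are arbitrary):

  # Stationary points and Hessian classification of f(X) = x1^3 + x2^3 + 2x1^2 + 4x2^2 + 6.
  import sympy as sp

  x1, x2 = sp.symbols('x1 x2', real=True)
  f = x1**3 + x2**3 + 2*x1**2 + 4*x2**2 + 6

  grad = [sp.diff(f, v) for v in (x1, x2)]
  points = sp.solve(grad, (x1, x2), dict=True)    # the four stationary points
  H = sp.hessian(f, (x1, x2))

  for p in points:
      Hp = H.subs(p)
      print(p, Hp[0, 0], Hp.det(), f.subs(p))     # signs of D1, D2 give min / max / saddle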


Lagrangian Method
The Lagrangian method helps to solve constrained optimization problems with equality
constraints:

  Maximize/Minimize f(X)
  subject to: g_i(X) = 0

Here, we convert the constrained optimization problem into an equivalent unconstrained
problem with the help of certain unspecified parameters known as Lagrangian multipliers.

Example (only one equality constraint):
  Maximize z = x1^2 + 3x2^2 + 5x3^2
  subject to: x1 + x2 + 3x3 = 4

Example (two equality constraints):
  Maximize z = x1^2 + 3x2^2 + 2x3^2 − 2x1·x2 + 3x1·x3 − 100
  subject to: x1 + x2 + 3x3 = 4
              x1 − 3x2 + 4x2^2 = 1
The following classical problem:

  Minimize f(X)
  subject to: g(X) = b

can be converted to

  Minimize L(X, λ) = f(X) + λ(g(X) − b)

where λ, the Lagrangian multiplier, is a constant that is unrestricted in sign.

Necessary Conditions:

  ∂L/∂X = 0   and   ∂L/∂λ = 0

After solving these equations, we get the stationary point(s) (X*, λ*).
Sufficient Conditions:
We first compute the bordered Hessian matrix H^B as

         [ 0    U ]
  H^B  = [ U^T  V ]

which is an (m + n) × (m + n) matrix, where
  m is the number of constraints,
  n is the number of variables,
  0 is the m × m null matrix,
  U = ∂g/∂X = [∂g_i/∂x_j] is the m × n matrix whose i-th row is
      [∂g_i/∂x1  ∂g_i/∂x2  ...  ∂g_i/∂xn],
  V = [∂²L/∂x_i∂x_j] is the n × n matrix of second-order partial derivatives of L.

At a stationary point (X*, λ*), starting with the principal minor of order 2m + 1,
compute the last n − m principal minors of H^B at (X*, λ*). The point is a

  Maximum point if these principal minors form an alternating sign pattern, starting
  with the sign of (−1)^(m+n).

  Minimum point if these principal minors of H^B all have the sign of (−1)^m.
Example:  Minimize f(X) = Z = x1^2 + x2^2 + x3^2
          subject to: x1 + x2 + 3x3 = 2
                      5x1 + 2x2 + x3 = 5

First, define the Lagrangian function as:

  L(X, λ) = f(X) − λ1·g1(X) − λ2·g2(X)
          = (x1^2 + x2^2 + x3^2) − λ1(x1 + x2 + 3x3 − 2) − λ2(5x1 + 2x2 + x3 − 5)

The necessary conditions are ∂L/∂X = 0 and ∂L/∂λ = 0:

  ∂L/∂x1 = 0  ⇒  2x1 − λ1 − 5λ2 = 0  ⇒  x1 = (λ1 + 5λ2)/2     (1)
  ∂L/∂x2 = 0  ⇒  2x2 − λ1 − 2λ2 = 0  ⇒  x2 = (λ1 + 2λ2)/2     (2)
  ∂L/∂x3 = 0  ⇒  2x3 − 3λ1 − λ2 = 0  ⇒  x3 = (3λ1 + λ2)/2     (3)
  ∂L/∂λ1 = 0  ⇒  x1 + x2 + 3x3 = 2                             (4)
  ∂L/∂λ2 = 0  ⇒  5x1 + 2x2 + x3 = 5                             (5)

Thus, the stationary point is

  (x1, x2, x3, λ1, λ2) = (37/46, 16/46, 13/46, 2/23, 7/23)
Sufficient Conditions:

Now, the bordered Hessian matrix is

         [ 0    U ]
  H^B  = [ U^T  V ]

with m = 2 constraints and n = 3 variables, where

       [ 1  1  3 ]          [ 2  0  0 ]
  U  = [ 5  2  1 ]  ,  V  = [ 0  2  0 ]  (the second-order partial derivatives of L).
                            [ 0  0  2 ]

Hence

         [ 0  0  1  1  3 ]
         [ 0  0  5  2  1 ]     Here n − m = 1, so we compute one principal minor,
  H^B  = [ 1  5  2  0  0 ]     starting from order 2m + 1 = 2(2) + 1 = 5, i.e. D5.
         [ 1  2  0  2  0 ]
         [ 3  1  0  0  2 ]

       | 0  0  1  1  3 |
       | 0  0  5  2  1 |
  D5 = | 1  5  2  0  0 | = 460 > 0
       | 1  2  0  2  0 |
       | 3  1  0  0  2 |

  (−1)^(m+n) = (−1)^5 = −1 < 0, which does not match the sign of D5, so not a maximum.
  (−1)^m = (−1)^2 = +1 > 0, and D5 > 0 has this sign, hence a minimum.

Hence, the stationary point is a minimum point.
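A NumPy sketch reproducing this example numerically: it solves the stationarity system for (X*, λ*) and evaluates the determinant of the bordered Hessian (the array names are arbitrary).

  # Minimize x1^2 + x2^2 + x3^2 subject to the two equality constraints above.
  import numpy as np

  U = np.array([[1.0, 1.0, 3.0],     # constraint gradients (rows of U)
                [5.0, 2.0, 1.0]])
  b = np.array([2.0, 5.0])

  # Stationarity 2x = U^T lam together with U x = b gives (U U^T) lam = 2b.
  lam = np.linalg.solve(U @ U.T, 2*b)
  x = U.T @ lam / 2
  print(x, lam)                      # approx [0.804, 0.348, 0.283] and [2/23, 7/23]

  V = 2*np.eye(3)                    # second derivatives of L with respect to x
  HB = np.block([[np.zeros((2, 2)), U],
                 [U.T,              V]])
  print(np.linalg.det(HB))           # approx 460 > 0, the sign of (-1)^m, so a minimum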


Karush-Kuhn-Tucker (KKT) Conditions
Karush-Kuhn-Tucker (KKT) conditions, also known as Kuhn-Tucker conditions, are
first-derivative tests for a solution of a non-linear programming problem to be optimal,
provided that some regularity conditions are satisfied.

So, we will find the necessary and sufficient conditions for solving an NLPP with
inequality constraints.

Consider the NLPP

  Maximize f(X)
  subject to: g_i(X) ≤ b_i

Convert each i-th inequality constraint into an equation by adding a non-negative slack
variable S_i^2:

  g_i(X) + S_i^2 = b_i,   or   h_i(X) = g_i(X) + S_i^2 − b_i

Thus, the given NLPP reduces to

  Maximize f(X)
  subject to: h_i(X) = 0

Formulate the Lagrangian function as

  L(X, S, λ) = f(X) − Σ_i λ_i·h_i(X)
             = f(X) − Σ_i λ_i·(g_i(X) + S_i^2 − b_i)
Necessary Condition:

  ∂L/∂X = 0    ⇒  ∂f/∂X − Σ_i λ_i·∂g_i/∂X = 0        (1)
  ∂L/∂λ_i = 0  ⇒  g_i(X) + S_i^2 − b_i = 0            (2)
  ∂L/∂S_i = 0  ⇒  −2·λ_i·S_i = 0                      (3)

Multiplying (3) by (1/2)S_i, we get

  λ_i·S_i^2 = 0,   or   λ_i·(b_i − g_i(X)) = 0        (4)
From equation (3), either λ_i = 0 or S_i = 0 (or both) at the optimum.

Case 1: when S_i ≠ 0
  g_i(X) + S_i^2 = b_i  ⇒  g_i(X) < b_i

  If S_i ≠ 0, the constraint is satisfied as a strict inequality and, consequently, if we
  relax the constraint (make b_i larger), the stationary point is not affected.
  Hence λ_i = 0.

Case 2: when S_i = 0 (so λ_i may be nonzero)
  g_i(X) + S_i^2 = b_i  ⇒  g_i(X) = b_i

  So the constraint is satisfied as an equality (the constraint is active).

  Suppose λ_i < 0, i.e. ∂f/∂b_i < 0. That would imply:
  1. The optimal objective value decreases as b_i is increased.
  2. However, as b_i increases, more space becomes feasible, and the optimal value of the
     objective function f(X) clearly cannot decrease (it can only increase or stay the
     same). This contradicts the assumption, so λ_i < 0 cannot hold at the optimum.

  Hence, at an optimal solution, λ_i ≥ 0.


Conclusion
Hence, for the NLPP:

  Maximize f(X)                        Minimize f(X)
  subject to: g_i(X) ≤ b_i             subject to: g_i(X) ≥ b_i

  Necessary Conditions:                Necessary Conditions:
    ∂f/∂X − Σ_i λ_i·∂g_i/∂X = 0          ∂f/∂X − Σ_i λ_i·∂g_i/∂X = 0
    λ_i·(g_i(X) − b_i) = 0               λ_i·(g_i(X) − b_i) = 0
    g_i(X) ≤ b_i                         g_i(X) ≥ b_i
    λ_i ≥ 0                              λ_i ≥ 0
Sufficient Conditions:
The Kuhn-Tucker necessary conditions are also sufficient if

  f(X) is concave and the feasible space defined by the g_i(X) is convex   (for maximization)
or
  f(X) is convex and the feasible space defined by the g_i(X) is convex    (for minimization)

In that case the KKT necessary conditions are also sufficient for a global maximum
(respectively, global minimum) of f(X) at x*.
Example: Find the maximum of the function

  Max f(X) = Z = 3.6x1 − 0.4x1^2 + 1.6x2 − 0.2x2^2
  subject to: 2x1 + x2 ≤ 10
              x1, x2 ≥ 0

Now define the Lagrangian function as:

  L = f(x) − λ·g(x) = (3.6x1 − 0.4x1^2 + 1.6x2 − 0.2x2^2) − λ(2x1 + x2 − 10)

The necessary conditions are

  ∂L/∂X = 0;   λ·g(X) = 0;   λ ≥ 0;   g(X) ≤ 0;   X ≥ 0

i.e.

  ∂L/∂x1 = 0  ⇒  3.6 − 0.8x1 − 2λ = 0       (1)
  ∂L/∂x2 = 0  ⇒  1.6 − 0.4x2 − λ = 0        (2)
  λ·g(X) = 0  ⇒  λ(2x1 + x2 − 10) = 0       (3)
  λ ≥ 0                                      (4)
  g(X) ≤ 0    ⇒  2x1 + x2 ≤ 10              (5)
  X ≥ 0       ⇒  x1, x2 ≥ 0                 (6)

Case 1: λ = 0
  From (1) and (2): x1 = 4.5, x2 = 4.
  But using equation (5): 2x1 + x2 ≤ 10  ⇒  2(4.5) + 4 ≤ 10  ⇒  13 ≤ 10,
  which is not true. Hence this point will be rejected.

Case 2: λ > 0
  From equation (3): 2x1 + x2 − 10 = 0                        (7)
  From equations (1) and (2):
    x1 = (3.6 − 2λ)/0.8,   x2 = (1.6 − λ)/0.4
  Therefore, using (7):
    2·(3.6 − 2λ)/0.8 + (1.6 − λ)/0.4 = 10  ⇒  λ = 0.4
  Hence x1 = 3.5, x2 = 3.
The sufficient conditions: f(X) should be concave and g_i(X) convex.

We construct the Hessian matrix as:

       [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]   [ −0.8    0   ]
  H  = [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ] = [   0   −0.4  ]

So, the principal minors of H are:

  D1 = −0.8 < 0
       | −0.8    0   |
  D2 = |   0   −0.4  | = 0.32 > 0

Since D1 < 0 and D2 > 0, the function is negative definite, so f(X) is strictly concave.

Also, the constraint 2x1 + x2 ≤ 10 is linear, and we know that every linear function is
convex.

Hence, the KKT necessary conditions are also sufficient conditions for the maximum.
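A quick numerical check of the KKT conditions at the point just found (assuming NumPy; (x1, x2, λ) = (3.5, 3, 0.4)):

  # Stationarity, complementary slackness, feasibility and sign of the multiplier.
  import numpy as np

  x1, x2, lam = 3.5, 3.0, 0.4
  grad_f = np.array([3.6 - 0.8*x1, 1.6 - 0.4*x2])
  grad_g = np.array([2.0, 1.0])
  g = 2*x1 + x2 - 10

  print(grad_f - lam*grad_g)      # stationarity: [0, 0]
  print(lam * g)                  # complementary slackness: 0
  print(g <= 0, lam >= 0)         # feasibility and sign condition: True True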
Example: Find the maximum of the function

  Max f(X) = Z = 12x1 + 21x2 + 2x1·x2 − 2x1^2 − 2x2^2
  subject to: x1 + x2 ≤ 10,  x2 ≤ 8
              x1, x2 ≥ 0

First, define the Lagrangian function as:

  L(X, λ) = f(X) − λ1·g1(X) − λ2·g2(X)
          = (12x1 + 21x2 + 2x1·x2 − 2x1^2 − 2x2^2) − λ1(x1 + x2 − 10) − λ2(x2 − 8)

The necessary conditions are ∂L/∂X = 0; λ_i·g_i(X) = 0; λ_i ≥ 0; g_i(X) ≤ 0; X ≥ 0, i.e.

  ∂L/∂x1 = 0     ⇒  12 + 2x2 − 4x1 − λ1 = 0           (1)
  ∂L/∂x2 = 0     ⇒  21 + 2x1 − 4x2 − λ1 − λ2 = 0      (2)
  λ1·g1(X) = 0   ⇒  λ1(x1 + x2 − 10) = 0              (3)
  λ2·g2(X) = 0   ⇒  λ2(x2 − 8) = 0                    (4)
  λ_i ≥ 0        ⇒  λ1, λ2 ≥ 0                        (5)
  g_i(X) ≤ 0     ⇒  x1 + x2 ≤ 10,  x2 ≤ 8             (6)
  X ≥ 0          ⇒  x1, x2 ≥ 0                        (7)
We have the following cases:

Case 1: λ1 = 0, λ2 = 0
  From (1) and (2): 12 + 2x2 − 4x1 = 0 and 21 + 2x1 − 4x2 = 0, giving x1 = 15/2, x2 = 9.
  This does not satisfy equation (6), since x1 + x2 = 16.5 > 10. Thus it is discarded.

Case 2: λ1 = 0, λ2 > 0
  From (4): x2 = 8. From (1): x1 = 7.
  This violates equation (6), since x1 + x2 = 15 > 10. Thus this case is also discarded.

Case 3: λ1 > 0, λ2 > 0
  From (3) and (4): x1 + x2 − 10 = 0 and x2 − 8 = 0, so x1 = 2, x2 = 8.
  From (1) and (2): λ1 = 20, λ2 = −27, which violates equation (5). Thus this case is
  also discarded.

Case 4: λ1 > 0, λ2 = 0
  From (3): x1 + x2 = 10.
  From (1) and (2):
    12 + 2x2 − 4x1 = λ1
    21 + 2x1 − 4x2 = λ1
  On subtracting: −9 + 6x2 − 6x1 = 0, which together with x1 + x2 = 10 gives
  x1 = 4.25, x2 = 5.75, and then λ1 = 13/2 from (1).
  This satisfies all the equations, so this case will be considered.

Hence, the stationary point is

  x1 = 4.25,  x2 = 5.75,  λ1 = 13/2,  λ2 = 0

The optimal value is:

  Z = 12x1 + 21x2 + 2x1·x2 − 2x1^2 − 2x2^2
    = 12(4.25) + 21(5.75) + 2(4.25)(5.75) − 2(4.25)^2 − 2(5.75)^2
    = 947/8 = 118.375
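A similar numerical check at the accepted point of this example (assuming NumPy; (x1, x2, λ1, λ2) = (4.25, 5.75, 6.5, 0)):

  # Stationarity, complementary slackness, feasibility and signs of the multipliers.
  import numpy as np

  x1, x2, l1, l2 = 4.25, 5.75, 6.5, 0.0
  grad_f = np.array([12 + 2*x2 - 4*x1, 21 + 2*x1 - 4*x2])
  grad_g1, grad_g2 = np.array([1.0, 1.0]), np.array([0.0, 1.0])

  print(grad_f - l1*grad_g1 - l2*grad_g2)           # stationarity: [0, 0]
  print(l1*(x1 + x2 - 10), l2*(x2 - 8))             # complementary slackness: 0 0
  print(x1 + x2 <= 10, x2 <= 8, l1 >= 0, l2 >= 0)   # feasibility and signs: all True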
Example: Find the minimum of the function

  Min f(X) = Z = 2x1 + 3x2 − x1^2 − 2x2^2
  subject to: x1 + 3x2 ≤ 6,  5x1 + 2x2 ≤ 10
              x1, x2 ≥ 0

First, define the Lagrangian function as:

  L(X, λ) = f(X) − λ1·g1(X) − λ2·g2(X)
          = (2x1 + 3x2 − x1^2 − 2x2^2) − λ1(−x1 − 3x2 + 6) − λ2(−5x1 − 2x2 + 10)

The necessary conditions are ∂L/∂X = 0; λ_i·g_i(X) = 0; λ_i ≥ 0; g_i(X) ≥ 0; X ≥ 0, i.e.

  ∂L/∂x1 = 0     ⇒  2 − 2x1 + λ1 + 5λ2 = 0                      (1)
  ∂L/∂x2 = 0     ⇒  3 − 4x2 + 3λ1 + 2λ2 = 0                     (2)
  λ1·g1(X) = 0   ⇒  λ1(−x1 − 3x2 + 6) = 0                       (3)
  λ2·g2(X) = 0   ⇒  λ2(−5x1 − 2x2 + 10) = 0                     (4)
  λ_i ≥ 0        ⇒  λ1, λ2 ≥ 0                                  (5)
  g_i(X) ≥ 0     ⇒  −x1 − 3x2 + 6 ≥ 0,  −5x1 − 2x2 + 10 ≥ 0     (6)
  X ≥ 0          ⇒  x1, x2 ≥ 0                                  (7)
