ClassicalNLP_ConstrainedOptimization

The document discusses optimization of functions of multiple variables under equality and inequality constraints, detailing methods such as direct substitution and Lagrange multipliers. It introduces the Kuhn-Tucker conditions for solving constrained optimization problems with inequality constraints. Examples illustrate the application of these methods to find optimal solutions.


CE676 Water Resources System

Optimization of Functions of Multiple Variables subject to Equality Constraints

The optimization of functions of multiple variables subject to equality constraints can be performed using (i) the direct substitution approach, or (ii) the method of Lagrange multipliers.

Constrained optimization
A function of multiple variables, f(X), is to be optimized subject to one or more equality
constraints in those variables. These equality constraints, gj(X), may or may not be linear.
The problem statement is as follows:
Maximize (or minimize) f(X)
subject to
gj(X) = 0, j = 1, 2, … , m
where X is a vector of n variables,
with the condition that m ≤ n (if m > n, the problem becomes an over-determined one
and in general there will be no solution).

Solution by Direct Substitution Approach:

This method is most convenient when the equality constraints are linear, and the number of
decision variables must be greater than the number of constraints (i.e., n > m).
In this method, we express m variables in terms of the remaining n − m variables and thereby
convert the constrained optimization problem into an unconstrained one. We then apply
the procedures discussed for single- or multi-variable unconstrained optimization.
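As a sketch of the direct substitution idea, the following pure-Python snippet eliminates one variable through a single linear constraint and minimizes the resulting one-variable quadratic. The objective and constraint values here are illustrative choices, not part of the notes' derivation:

```python
# Direct substitution sketch on a small illustrative problem:
#   minimize f(x1, x2) = 3*x1**2 + 6*x1*x2 + 5*x2**2 - 7*x1 - 5*x2
#   subject to x1 + x2 = 5   (n = 2 variables, m = 1 linear constraint)

def f(x1, x2):
    return 3*x1**2 + 6*x1*x2 + 5*x2**2 - 7*x1 - 5*x2

# Use the constraint to eliminate x1 (x1 = 5 - x2); the problem becomes an
# unconstrained minimization in the remaining n - m = 1 variable.
def f_reduced(x2):
    return f(5 - x2, x2)

# f_reduced is a quadratic a*x2**2 + b*x2 + c; recover a and b from three
# evaluations and minimize analytically at x2 = -b / (2*a).
a = (f_reduced(1.0) + f_reduced(-1.0)) / 2 - f_reduced(0.0)
b = (f_reduced(1.0) - f_reduced(-1.0)) / 2
x2_star = -b / (2 * a)
x1_star = 5 - x2_star

print(x1_star, x2_star, f_reduced(x2_star))  # 5.5 -0.5 39.5
```

The same substitution works for any constraint that can be solved explicitly for one variable; the quadratic-fit step stands in for whatever unconstrained method one prefers.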

Solution by method of Lagrange multipliers


Let us consider the case of the optimization problem with n = 2 and m = 1. In this approach,
a function known as the Lagrangian function is first defined by combining the objective and
constraint equations as
L(x1, x2, λ) = f(x1, x2) + λ g(x1, x2)
where λ is called the Lagrange multiplier.
Then, treating L as a function of x1, x2 and λ, the necessary conditions for its extremum
are given by
∂L/∂x1 (x1, x2, λ) = ∂f/∂x1 (x1, x2) + λ ∂g/∂x1 (x1, x2) = 0
∂L/∂x2 (x1, x2, λ) = ∂f/∂x2 (x1, x2) + λ ∂g/∂x2 (x1, x2) = 0
∂L/∂λ (x1, x2, λ) = g(x1, x2) = 0


General problem
For a general problem with n variables and m equality constraints, the Lagrange function, L,
will have one Lagrange multiplier λj for each constraint gj(X):
L(x1, x2, …, xn, λ1, λ2, …, λm) = f(X) + λ1 g1(X) + λ2 g2(X) + … + λm gm(X)   (1)
L is now a function of n + m unknowns: x1, x2, …, xn, λ1, λ2, …, λm.

Necessary conditions
The necessary conditions for a stationary point in the above case are given by
∂L/∂xi = ∂f/∂xi (X) + Σ_{j=1}^{m} λj ∂gj/∂xi (X) = 0,  i = 1, 2, …, n   (2)
∂L/∂λj = gj(X) = 0,  j = 1, 2, …, m
which represent n + m equations in terms of the n + m unknowns, xi and λj. The solution to
this set of equations gives
X* = (x1*, x2*, …, xn*)  and  λ* = (λ1*, λ2*, …, λm*)   (3)
The vector X* corresponds to the relative constrained minimum or maximum of f(X) (subject
to the verification of the sufficient conditions).
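When f is quadratic and the constraints are linear, the n + m necessary conditions in (2) form a linear system in the unknowns (x, λ). A minimal pure-Python sketch (the coefficient values below are illustrative):

```python
# Necessary conditions dL/dx_i = 0 (i = 1..n) and dL/dlam_j = 0 (j = 1..m)
# written as one linear system, for the illustrative data
#   f = 3*x1**2 + 6*x1*x2 + 5*x2**2 - 7*x1 - 5*x2,  g = x1 + x2 - 5 = 0,
# with L = f + lam*g.

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting on a small dense system."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Rows: dL/dx1 = 0, dL/dx2 = 0, dL/dlam = 0
A = [[6.0, 6.0, 1.0],
     [6.0, 10.0, 1.0],
     [1.0, 1.0, 0.0]]
b = [7.0, 5.0, 5.0]
x1, x2, lam = solve_linear(A, b)
print(x1, x2, lam)  # approximately 5.5 -0.5 -23.0
```

For nonlinear f or gj the same stationarity system becomes nonlinear and would need an iterative solver instead of one linear solve.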

Sufficient conditions for a general problem


A sufficient condition for f(X) to have a relative minimum at X* is that each
root of the polynomial in z, defined by the following determinant equation, be positive:

| L11 − z   L12   …   L1n    g11   g21   …   gm1 |
| L21    L22 − z  …   L2n    g12   g22   …   gm2 |
|   ⋮        ⋮          ⋮      ⋮     ⋮          ⋮  |
| Ln1      Ln2    …  Lnn − z g1n   g2n   …   gmn |
| g11      g12    …   g1n     0     0    …    0  |  = 0   (4)
| g21      g22    …   g2n     0     0    …    0  |
|   ⋮        ⋮          ⋮      ⋮     ⋮          ⋮  |
| gm1      gm2    …   gmn     0     0    …    0  |

where


Lij = ∂²L/∂xi∂xj (X*, λ*), for i, j = 1, 2, …, n
(5)
gpq = ∂gp/∂xq (X*), where p = 1, 2, …, m and q = 1, 2, …, n

Similarly, a sufficient condition for f(X) to have a relative minimum (maximum) at X* is that
each root of the polynomial in z, defined by equation (4), be positive (negative). If equation
(4), on solving, yields some roots that are positive and others negative, then the point X*
is neither a maximum nor a minimum.
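The roots of the determinant polynomial in equation (4) can be found numerically. A sketch for an n = 2, m = 1 case, where the bordered determinant is linear in z so two evaluations determine the single root (the Lij and gij values below are illustrative):

```python
# Bordered-determinant sufficiency check for n = 2, m = 1:
#   | L11-z  L12    g11 |
#   | L21    L22-z  g12 |  = 0, solved for z.
#   | g11    g12    0   |

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def bordered_det(z, L11, L12, L22, g11, g12):
    return det3([[L11 - z, L12, g11],
                 [L12, L22 - z, g12],
                 [g11, g12, 0.0]])

# With one constraint and two variables the determinant is linear in z, so
# det(z) = det(0) + slope*z and the root follows from two evaluations.
L11, L12, L22, g11, g12 = 6.0, 6.0, 10.0, 1.0, 1.0   # illustrative values
d0 = bordered_det(0.0, L11, L12, L22, g11, g12)
d1 = bordered_det(1.0, L11, L12, L22, g11, g12)
z_root = -d0 / (d1 - d0)
print(z_root)  # 2.0 -> positive, indicating a relative minimum
```

For larger n or m the determinant is a higher-degree polynomial in z and a general root finder would be needed in place of the two-point trick.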

Example
Minimize
f(X) = 3x1² + 6x1x2 + 5x2² − 7x1 − 5x2
subject to x1 + x2 = 5
Solution
g1(X) = x1 + x2 − 5 = 0
Here the Lagrange function with n = 2 and m = 1 is
L = 3x1² + 6x1x2 + 5x2² − 7x1 − 5x2 + λ1(x1 + x2 − 5)
∂L/∂x1 = 6x1 + 6x2 − 7 + λ1 = 0
⇒ x1 + x2 = (1/6)(7 − λ1)
⇒ 5 = (1/6)(7 − λ1), or λ1 = −23

∂L/∂x2 = 6x1 + 10x2 − 5 + λ1 = 0
⇒ 3x1 + 5x2 = (1/2)(5 − λ1)
⇒ 3(x1 + x2) + 2x2 = (1/2)(5 − λ1)
With x1 + x2 = 5 and λ1 = −23: 15 + 2x2 = 14, so
x2 = −1/2
and x1 = 11/2

Hence X* = (11/2, −1/2); λ* = −23
Applying the sufficiency condition (4):
| L11 − z   L12    g11 |
| L21    L22 − z   g12 |  = 0
| g11      g12      0  |


L11 = ∂²L/∂x1² at (X*, λ*) = 6
L12 = L21 = ∂²L/∂x1∂x2 at (X*, λ*) = 6
L22 = ∂²L/∂x2² at (X*, λ*) = 10
g11 = ∂g1/∂x1 at (X*) = 1
g12 = ∂g1/∂x2 at (X*) = 1

The determinant becomes

| 6 − z    6     1 |
|   6   10 − z   1 |  = 0
|   1      1     0 |

Expanding along the first row:
(6 − z)(−1) − 6(−1) + 1(6 − (10 − z)) = 0, i.e. 2z − 4 = 0
or
z = 2
Since z is positive, [X*, λ*] corresponds to a relative minimum, consistent with the
minimization problem posed.
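As a quick numerical check, substituting the candidate point and multiplier from this example back into the three necessary conditions should give zero in each:

```python
# Check of the worked example: candidate x1 = 11/2, x2 = -1/2, lam = -23 for
#   minimize f = 3*x1**2 + 6*x1*x2 + 5*x2**2 - 7*x1 - 5*x2  s.t.  x1 + x2 = 5.
x1, x2, lam = 5.5, -0.5, -23.0

dL_dx1 = 6*x1 + 6*x2 - 7 + lam   # dL/dx1 = df/dx1 + lam * dg/dx1
dL_dx2 = 6*x1 + 10*x2 - 5 + lam  # dL/dx2 = df/dx2 + lam * dg/dx2
dL_dlam = x1 + x2 - 5            # dL/dlam = g(x1, x2)

print(dL_dx1, dL_dx2, dL_dlam)  # 0.0 0.0 0.0
```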


Constrained Optimization with Inequality Constraints

In the previous section, constrained optimization with multiple variables and equality
constraints was handled using the method of Lagrange multipliers. For constrained
optimization problems with inequality constraints, the Kuhn-Tucker conditions can be
applied to find optimal solutions.
Kuhn-Tucker Conditions
It was previously established that for both an unconstrained optimization problem and an
optimization problem with an equality constraint the first-order conditions are sufficient for a
global optimum when the objective and constraint functions satisfy appropriate
concavity/convexity conditions. The same is true for an optimization problem with inequality
constraints.
The Kuhn-Tucker conditions are both necessary and sufficient for finding minima (maxima) if
the objective function is convex (concave) and each constraint is linear or the constraint set
forms a convex region (i.e., the problem belongs to the class of convex programming
problems).
First, the given optimization problem should be converted to the standard form below; the
Kuhn-Tucker conditions are then applied.

Maximization type optimization problem (in standard form):

Maximize f(X)
s.t. gj(X) ≤ bj for j = 1, 2, …, m ; where X = [x1 x2 . . . xn]

The Lagrangian function:

L(x1, x2, …, xn, λ1, λ2, …, λm) = f(X) + λ1(b1 − g1(X)) + λ2(b2 − g2(X)) + … + λm(bm − gm(X))

Then the Kuhn-Tucker conditions for X* = [x1* x2* . . . xn*] to be a local maximum are

(i) gradient conditions: ∂f/∂xi + Σ_{j=1}^{m} λj ∂(bj − gj)/∂xi = 0,  i = 1, 2, …, n
(ii) feasibility conditions: gj ≤ bj,  j = 1, 2, …, m
(iii) complementary slackness: λj(bj − gj) = 0,  j = 1, 2, …, m
(iv) non-negativity of Lagrange multipliers: λj ≥ 0,  j = 1, 2, …, m

The solution has to satisfy all the above conditions including the non-negativity of Lagrange
multipliers.
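The four condition groups can be collected into a small checker. This is a sketch, not from the notes: the function names are hypothetical, and gradients are approximated by central finite differences.

```python
# Sketch of a Kuhn-Tucker checker for the standard maximization form
#   maximize f(X)  s.t.  g_j(X) <= b_j,  with multipliers lam_j >= 0.

def grad(fun, x, h=1e-6):
    """Central-difference gradient of fun at the point x (a list of floats)."""
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((fun(xp) - fun(xm)) / (2 * h))
    return g

def kt_satisfied(f, gs, bs, x, lams, tol=1e-4):
    """Test conditions (i)-(iv) at candidate x with multipliers lams."""
    gf = grad(f, x)
    ggs = [grad(g, x) for g in gs]
    # (i) gradient: df/dxi + sum_j lam_j * d(b_j - g_j)/dxi = 0,
    #     i.e. df/dxi - sum_j lam_j * dg_j/dxi = 0
    gradient_ok = all(
        abs(gf[i] - sum(l * gg[i] for l, gg in zip(lams, ggs))) < tol
        for i in range(len(x)))
    feasible = all(g(x) <= b + tol for g, b in zip(gs, bs))        # (ii)
    slack_ok = all(abs(l * (b - g(x))) < tol
                   for l, g, b in zip(lams, gs, bs))               # (iii)
    lam_ok = all(l >= -tol for l in lams)                          # (iv)
    return gradient_ok and feasible and slack_ok and lam_ok
```

For instance, maximizing f(x) = −(x1 − 1)² − (x2 − 1)² subject to x1 + x2 ≤ 1 has its constrained maximum at (1/2, 1/2) with λ = 1, and the checker confirms all four conditions there.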

Note: In the case of minimization type problems, first convert to the standard form:

Minimize f(X)
subject to gj(X) ≥ bj for j = 1, 2, …, m


Example (1) Solve the following problem using the Kuhn-Tucker conditions.

Minimize f = x1² + x2² + 60x1

subject to the constraints
g1 = x1 − 80 ≥ 0
g2 = x1 + x2 − 120 ≥ 0
Solution
The Kuhn-Tucker conditions are given by
a) ∂f/∂xi − λ1 ∂g1/∂xi − λ2 ∂g2/∂xi = 0, i = 1, 2:
2x1 + 60 − λ1 − λ2 = 0 (11)
2x2 − λ2 = 0 (12)
b) λj gj = 0:
λ1(x1 − 80) = 0 (13)
λ2(x1 + x2 − 120) = 0 (14)
c) gj ≥ 0:
x1 − 80 ≥ 0 (15)
x1 + x2 − 120 ≥ 0 (16)
d) λj ≥ 0:
λ1 ≥ 0 (17)
λ2 ≥ 0 (18)
From (13), either λ1 = 0 or x1 − 80 = 0.
Case 1: λ1 = 0

From (11) and (12) we have x1 = λ2/2 − 30 and x2 = λ2/2.
Using these in (14) we get λ2(λ2 − 150) = 0; λ2 = 0 or 150.
Considering λ2 = 0, X* = [−30, 0].
But this solution set violates (15) and (16).
For λ2 = 150, X* = [45, 75].
But this solution set violates (15).
Case 2: x1 − 80 = 0
Using x1 = 80 in (11) and (12), we have
λ2 = 2x2
λ1 = −2x2 + 220 (19)
Substituting λ2 = 2x2 in (14), we have 2x2(x2 − 40) = 0.
For this to be true, either x2 = 0 or x2 − 40 = 0.
For x2 = 0, λ1 = 220 and λ2 = 0; but this solution set violates (16).
For x2 − 40 = 0, λ1 = 140 and λ2 = 80. This solution set satisfies all the conditions
(11)-(18) and hence is the desired one. Therefore, the solution of this optimization problem
is X* = [80, 40].
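The case analysis above can be re-verified mechanically; a pure-Python check that the final candidate satisfies conditions (11)-(18):

```python
# Re-check Example (1): candidate X* = (80, 40) with lam1 = 140, lam2 = 80 for
#   minimize f = x1**2 + x2**2 + 60*x1  s.t.  x1 - 80 >= 0, x1 + x2 - 120 >= 0.
x1, x2 = 80.0, 40.0
lam1, lam2 = 140.0, 80.0

checks = {
    "(11) 2*x1 + 60 - lam1 - lam2": 2*x1 + 60 - lam1 - lam2,
    "(12) 2*x2 - lam2":             2*x2 - lam2,
    "(13) lam1*(x1 - 80)":          lam1 * (x1 - 80),
    "(14) lam2*(x1 + x2 - 120)":    lam2 * (x1 + x2 - 120),
}
assert all(abs(v) < 1e-9 for v in checks.values())   # equalities (11)-(14)
assert x1 - 80 >= 0 and x1 + x2 - 120 >= 0           # feasibility (15), (16)
assert lam1 >= 0 and lam2 >= 0                       # multipliers (17), (18)
print("all Kuhn-Tucker conditions satisfied")
```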
