
7. Non-Linear Programming (NLP)

• Introduction
• General NLP model
• Graphical illustration of NLP problems
• Types of NLP problems and solution approaches
7.1 Introduction
• In LP, the objective function and all constraints are linear; however, many realistic problems involve relationships that can be modeled only with nonlinear functions.
• Problems that fit the general LP format but include nonlinear functions are referred to as nonlinear programming (NLP) problems.
• Determining an optimal solution is considerably more complex, even for a relatively small problem.
• In NLP, there may be no intersection or corner points; instead, the solution space can be made up of curves or surfaces containing virtually an infinite number of points.
• Solution techniques generally involve searching a solution surface for high or low points, which requires the use of advanced mathematics.
7.2 General NLP model formulation
The general NLP model formulation is as follows:
Find x1, …, xn so as to
Optimize (min or max) z = f(x1, …, xn)
subject to
gi(x1, …, xn) ≤ or = or ≥ bi, for i = 1, 2, …, m
x1, …, xn ≥ 0, for n decision variables,
where at least one of the f(x) and gi(x) functions is nonlinear.
• There are different types of nonlinear programs, depending on the characteristics of the f(x) and gi(x) functions.
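To make the formulation concrete, here is a minimal numerical sketch (not an example from this chapter; the objective, constraint, and coefficients are assumptions for illustration) of how a model of this form can be handed to SciPy's minimize:

from scipy.optimize import minimize

# Sketch of the general NLP form: minimize a nonlinear f subject to one
# nonlinear constraint g(x) <= b and x >= 0. SciPy expects inequality
# constraints in the form "expression >= 0", hence b - g(x) below.
def f(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2   # nonlinear (convex) objective

cons = [{"type": "ineq", "fun": lambda x: 4 - (x[0] ** 2 + x[1] ** 2)}]  # g(x) <= 4
bounds = [(0, None), (0, None)]                 # x1, x2 >= 0

res = minimize(f, x0=[0.5, 0.5], bounds=bounds, constraints=cons)
print(res.x, res.fun)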
7.3 Graphical Illustration of NLP problems
• When an NLP problem has just one or two variables, it can be represented graphically in the two-dimensional plane.
• A graphical representation gives considerable insight into the properties of optimal solutions for NLP problems, just as it does for LP problems.
• It also illustrates the difference between LP and NLP problems.
Example: Consider the LP problem from Chapter 2.
Maximize z = 3x1 + 5x2
subject to
x1 ≤ 4
2x2 ≤ 12
3x1 + 2x2 ≤ 18
x1, x2 ≥ 0
(Graph: the feasible region is the polygon with corner points O, A, B, C, D; the optimal solution z = 36 occurs at the corner point (x1, x2) = (2, 6).)
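As a quick numeric cross-check of this LP, a sketch using SciPy's linprog (the solver choice is ours, not the chapter's):

from scipy.optimize import linprog

# linprog minimizes, so maximize z = 3x1 + 5x2 by minimizing -z.
c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # -> [2. 6.] 36.0, the corner point above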
Example 1: Modifying the constraint functions into nonlinear (quadratic) form.
• An example with nonlinear constraints in which the optimal solution is not a corner-point feasible solution.
Example 2: Modifying the objective function into nonlinear (quadratic) form.
• An example with linear constraints but a nonlinear objective function, in which the optimal solution is not a corner-point feasible solution.
Example 3: Modifying the objective function into another nonlinear (elliptic) form.
• An example in which the optimal solution lies inside the boundary of the feasible region.
Local and global optimum values
• The most important complication that arises in NLP is that a local optimum need not be a global optimum (the overall optimal solution).
• NLP algorithms generally are unable to distinguish between a local optimum and a global optimum (except by finding another, better local optimum).
Example: Over the interval 0 ≤ x ≤ 5, a function may have three local maxima, at x = 0, x = 2, and x = 4, of which only x = 4 is a global maximum.
Similarly, it may have three local minima, at x = 1, 3, and 5, of which only x = 5 is a global minimum.
Such a function is neither concave (curving downward) nor convex (curving upward), because it alternates between curving upward and curving downward.
Examples of concave and convex functions (single variable)
• Functions of multiple variables can also be characterized as concave or convex if they always curve downward or curve upward.
Convexity test for functions of a single variable (sufficient conditions)
For a twice-differentiable function f(x):
― f(x) is convex if d²f/dx² ≥ 0 for all x.
― f(x) is concave if d²f/dx² ≤ 0 for all x.
Convexity test for functions of two variables (sufficient conditions)
For a twice-differentiable function f(x1, x2), the test uses the second partial derivatives:
― f is convex if ∂²f/∂x1² ≥ 0, ∂²f/∂x2² ≥ 0, and (∂²f/∂x1²)(∂²f/∂x2²) - (∂²f/∂x1∂x2)² ≥ 0 for all (x1, x2).
― f is concave if ∂²f/∂x1² ≤ 0, ∂²f/∂x2² ≤ 0, and (∂²f/∂x1²)(∂²f/∂x2²) - (∂²f/∂x1∂x2)² ≥ 0 for all (x1, x2).
Convexity test for functions of more than two variables (sufficient conditions)
• A convenient way of checking a function of more than two variables, when the function consists of a sum of smaller functions of just one or two variables each:
― If each smaller function is concave, then the overall function is concave.
― Similarly, the overall function is convex if each smaller function is convex.
― To illustrate, consider a function such as:
f(x1, x2, x3) = 4x1 - x1² + 8x2 - x2² - x3²
― Is it concave or convex? Check it: each of the smaller functions 4x1 - x1², 8x2 - x2², and -x3² is concave (second derivative negative), so the overall function is concave.
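These tests can also be automated. A small sketch with SymPy, applied to the illustration above (sympy.hessian builds the matrix of second partial derivatives; for a separable sum the Hessian is diagonal, so checking the diagonal suffices):

import sympy as sp

# Sketch: check concavity of f(x1, x2, x3) = 4x1 - x1**2 + 8x2 - x2**2 - x3**2.
x1, x2, x3 = sp.symbols("x1 x2 x3")
f = 4 * x1 - x1 ** 2 + 8 * x2 - x2 ** 2 - x3 ** 2
H = sp.hessian(f, (x1, x2, x3))              # matrix of second partial derivatives
print(H)                                     # diagonal matrix diag(-2, -2, -2)
print(all(H[i, i] <= 0 for i in range(3)))   # True: each smaller function is concave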
• If an NLP problem has no constraints (unconstrained), a concave objective function guarantees that a local maximum is a global maximum.
• Similarly, a convex objective function ensures that a local minimum is a global minimum.
• If there are constraints, then one more condition provides this guarantee, namely, that the feasible region is a convex set.
• A convex set is simply a set of points such that, for each pair of points in the set, the entire line segment joining the two points also lies in the set.
• In general, the feasible region of an NLP problem is a convex set whenever all of the gi(x) (for the constraints gi(x) ≤ bi) are convex functions.
• The feasible regions of the previous three graphical examples are convex sets.
Example 4: Modifying the constraint functions into nonlinear (concave) form.
• An example in which a local maximum is not a global maximum (the feasible region is not a convex set).
• Therefore, to guarantee that a local maximum is a global maximum for an NLP problem with constraints gi(x) ≤ bi (i = 1, 2, …, m) and x ≥ 0:
1) the objective function f(x) must be a concave function, and
2) each constraint function gi(x) must be a convex function.
• Similarly, to guarantee that a local minimum is a global minimum:
1) the objective function f(x) must be a convex function, and
2) each constraint function gi(x) must be a convex function.
• Such a problem is called a convex programming problem, which is one of the key types of NLP problems.
• Convex programming covers a broad class of problems.
• It must be noted that linear functions are both convex and concave.
7.4 Types of NLP problems and solution methods
― NLP problems come in many different shapes and forms.
― Unlike the simplex method for LP, no single algorithm can solve all NLP problems; instead, algorithms have been developed for various individual classes of NLP problems.
1. Unconstrained optimization
• Unconstrained optimization problems have no constraints, so the objective is simply to
Optimize (max or min) z = f(x)
These problems can be classified as:
1) single-variable, and
2) multiple-variable.
Single-variable unconstrained optimization
• In unconstrained optimization with just a single variable x (n = 1), where the differentiable objective function f(x) to be maximized/minimized is concave/convex (a sufficient condition), the problem can be solved analytically by setting the first derivative equal to zero (the necessary condition).
• Under this condition, a local optimum is a global optimum.
• If f(x) is neither concave nor convex, so that the derivative is not just a linear or quadratic function, it may not be possible to solve the equation analytically.
• In that case, a one-dimensional search procedure provides a straightforward way of solving the problem numerically (a sequence of trial solutions that leads toward an optimal solution), as sketched below.
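A minimal sketch of such a one-dimensional search, here implemented as bisection on the first derivative, assuming f is concave so that f'(x) is decreasing (the example function is an assumption for illustration):

# Bisection on f'(x): for a concave, differentiable f on [lo, hi], the
# maximizer is where f'(x) = 0, and f' changes sign from + to - there.
def bisection_max(df, lo, hi, tol=1e-6):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if df(mid) > 0:    # f still increasing: maximizer lies to the right
            lo = mid
        else:              # f decreasing: maximizer lies to the left
            hi = mid
    return (lo + hi) / 2

# Example: f(x) = 12x - x**3 on [0, 5]; f'(x) = 12 - 3x**2, maximum at x = 2.
print(bisection_max(lambda x: 12 - 3 * x ** 2, 0, 5))   # ~2.0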
Example: Nonlinear profit analysis
• Profit function z, with volume independent of price:
z = vp - cf - vcv
where:
v = sales volume
p = price
cf = fixed cost
cv = unit variable cost
• Let us suppose that the dependence of demand (volume) on price is defined by the following linear function:
v = 1,500 - 24.6p
• This is more realistic, since demand now varies as price increases or decreases.
• Adding the volume-price relationship to the previous z equation gives profit as a function of price alone:
z = vp - cf - vcv = (1,500 - 24.6p)p - cf - (1,500 - 24.6p)cv
• With fixed cost cf = 10,000 and unit variable cost cv = 8:
z = 1,696.8p - 24.6p² - 22,000 (a concave function)
• Find the maximum profit and the price and sales volume v that maximize the profit.
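Since z is concave, the maximum is found by setting the first derivative of the profit function equal to zero:
dz/dp = 1,696.8 - 49.2p = 0
p = 1,696.8 / 49.2 ≈ 34.49 (profit-maximizing price)
v = 1,500 - 24.6(34.49) ≈ 651.6 (sales volume)
z = 1,696.8(34.49) - 24.6(34.49)² - 22,000 ≈ 7,259.45 (maximum profit)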
Multiple-variable unconstrained optimization
• When the objective function f(x1, x2, …, xn) is concave/convex (a sufficient condition), it can be maximized/minimized by setting the respective partial derivatives of the function equal to zero (the necessary condition).
• When the problem cannot be solved analytically, a numerical search procedure must be used, such as the Newton-Raphson method or the gradient search procedure.
Example: Consider the following two-variable problem:
Maximize f(x1, x2) = z = 2x1x2 + 2x2 - x1² - 2x2²
Solution
The function is concave, so setting the partial derivatives equal to zero locates the global maximum:
∂f/∂x1 = 2x2 - 2x1 = 0 and ∂f/∂x2 = 2x1 + 2 - 4x2 = 0,
which solve simultaneously to give x1 = 1, x2 = 1, with f(1, 1) = 1.
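The same example can illustrate the gradient search idea numerically (a sketch only: the fixed step size is an assumption; a full implementation would choose each step by a one-dimensional search along the gradient):

# Gradient ascent on f(x1, x2) = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2.
def grad(x1, x2):
    return (2 * x2 - 2 * x1, 2 * x1 + 2 - 4 * x2)

x1, x2, step = 0.0, 0.0, 0.1
for _ in range(200):                 # repeatedly move uphill along the gradient
    g1, g2 = grad(x1, x2)
    x1, x2 = x1 + step * g1, x2 + step * g2
print(round(x1, 4), round(x2, 4))    # -> approximately (1, 1), as derived above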
2. Constrained optimization
• Constrained optimization problems typically consist of a nonlinear objective function and one or more linear or nonlinear constraints.
• These can be classified as convex programming (the feasible region is a convex set) and non-convex programming.
In convex programming:
• Finding a local optimum ensures a global optimum.
In non-convex programming:
• Even if you are successful in finding a local maximum, there is no assurance that it will also be a global maximum.
• There is no algorithm that will guarantee finding an optimal solution for all such problems.
• In some special cases (geometric programming, fractional programming), the problem can be reduced to an equivalent convex programming problem.
Convex programming
There are two types of such problems:
1) multi-variable optimization constrained with equality (=) constraints, and
2) multi-variable optimization constrained with inequality constraints.
• There are two major solution methods for the first category:
a) the direct substitution method, and
b) the Lagrange multiplier method.
1. Direct substitution method
• The least complex method for solving nonlinear programming problems.
• Restricted to problems with only a few decision variables.
Example: The total profit from the production of two items is expressed as:
Maximize z = 4x1 - 0.1x1² + 5x2 - 0.2x2²
subject to
x1 + 2x2 = 40
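Solving the constraint for x1 = 40 - 2x2 and substituting into the objective eliminates the constraint:
z = 4(40 - 2x2) - 0.1(40 - 2x2)² + 5x2 - 0.2x2² = 13x2 - 0.6x2²
Setting dz/dx2 = 13 - 1.2x2 = 0 gives x2 ≈ 10.83, x1 = 40 - 2(10.83) ≈ 18.33, and maximum profit z ≈ 70.42.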
2. Lagrange multiplier method
• The method of Lagrange multipliers is a general mathematical technique that can be used for solving constrained optimization problems consisting of a nonlinear objective function and one or more linear or nonlinear constraint equations.
• In this method, the constraints, as multiples of a Lagrange multiplier λi, are subtracted from the objective function.
• The Lagrange multiplier λi in NLP problems is analogous to the dual variables in a linear programming problem.
• It reflects the approximate change in the objective function resulting from a unit change in the quantity (right-hand-side) value of the constraint equation.
Lagrangian function development
• For example, restricting the general NLP model to equality constraints:
Optimize (min or max) z = f(x) = f(x1, …, xn)
subject to
gi(x) = gi(x1, …, xn) = bi, or
gi(x) - bi = 0, for i = 1, 2, …, m and n decision variables.
The Lagrangian function can be formulated as:
L(x, λ) = f(x) - Σi λi [gi(x) - bi]   (1)
At the optimum point (necessary conditions):
∂L/∂xj = ∂f/∂xj - Σi λi ∂gi/∂xj = 0, for j = 1, 2, …, n   (2)
∂L/∂λi = -[gi(x) - bi] = 0, for i = 1, 2, …, m   (3)
Example:
Minimize z = x1² + x2² + x3² (convex objective function)
subject to
x1 + x2 + 3x3 = 2
5x1 + 2x2 + x3 = 5

Solution
The objective function is convex and the constraints, being linear, are convex (a local optimum is a global optimum).
The Lagrangian function is:
L = x1² + x2² + x3² - [λ1(x1 + x2 + 3x3 - 2) + λ2(5x1 + 2x2 + x3 - 5)]
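Setting the partial derivatives of L to zero, per conditions (2) and (3):
∂L/∂x1 = 2x1 - λ1 - 5λ2 = 0, ∂L/∂x2 = 2x2 - λ1 - 2λ2 = 0, ∂L/∂x3 = 2x3 - 3λ1 - λ2 = 0,
together with the two constraint equations. Solving the five equations simultaneously gives
λ1 = 2/23, λ2 = 7/23, and x1 = 37/46 ≈ 0.80, x2 = 8/23 ≈ 0.35, x3 = 13/46 ≈ 0.28, with z ≈ 0.85.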
Exercises
Solve the following problems by using the Lagrangian method.
a) Maximize z = f(x1, x2, x3)
subject to
g1(x1, x2, x3) = 6
g2(x1, x2, x3) = 9
x1, x2, x3 ≥ 0
b) Minimize z = f(x1, x2, x3)
subject to
g1(x1, x2, x3) = 5
g2(x1, x2, x3) = 7
x1, x2, x3 ≥ 0
Multi-variable optimization constrained with inequality constraints
• This section extends the Lagrangian method to problems with inequality constraints.
• The main contribution of the section is the development of the general Karush-Kuhn-Tucker (KKT) necessary conditions for determining the stationary points.
Consider the problem:
Maximize z = f(x) = f(x1, …, xn)
subject to
gi(x) ≤ bi, or
gi(x) - bi ≤ 0, for i = 1, 2, …, m and n decision variables.
The inequality constraints may be converted into equations by adding a non-negative slack quantity si² (≥ 0) to the ith constraint.
The equation form of the maximization problem is:
Maximize z = f(x1, …, xn)
subject to
gi(x) - bi + si² = 0, for i = 1, 2, …, m.
The Lagrangian function is formulated as:
L(x, s, λ) = f(x) - Σi λi [gi(x) - bi + si²]   (1)
At the optimum point (necessary conditions):
∂L/∂xj = ∂f/∂xj - Σi λi ∂gi/∂xj = 0, for j = 1, 2, …, n   (2)
∂L/∂λi = -[gi(x) - bi + si²] = 0, for i = 1, 2, …, m   (3)
∂L/∂si = -2λisi = 0, for i = 1, 2, …, m   (4)
The fourth equation reveals the following results:
1) If λi > 0, then si = 0, which means gi(x) = bi: the resource is scarce, and it is consumed completely (the constraint holds as an equality).
2) If si² > 0 (that is, gi(x) - bi < 0), then λi = 0: resource i is not scarce, and it has no effect on the value of z.
From the third and fourth equations, it is possible to obtain the complementary slackness equation:
λi [bi - gi(x)] = 0, for i = 1, 2, …, m.
• The KKT necessary conditions for the maximization problem are summarized as:
1) λi ≥ 0, for i = 1, 2, …, m (non-negativity)
2) ∂f/∂xj - Σi λi ∂gi/∂xj = 0, for j = 1, 2, …, n (optimality)
3) λi [bi - gi(x)] = 0, for i = 1, 2, …, m (complementary slackness)
4) gi(x) ≤ bi, for i = 1, 2, …, m (feasibility)
• From conditions 2 and 3, the number of equations (n + m) equals the number of unknown variables (the xj plus the λi), so the equations can be solved simultaneously as a set.
• These conditions apply to the minimization case as well, except that λi must be non-positive.
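To illustrate solving such a set of equations, here is a small sketch in SymPy on a one-variable problem of our own (maximize f(x) = 4x - x² subject to x ≤ 3; this problem is an assumption for illustration, not an example from the slides):

import sympy as sp

# KKT system for: maximize f(x) = 4x - x**2 subject to g(x) = x <= 3.
x, lam = sp.symbols("x lam", nonnegative=True)   # condition 1: lam >= 0
optimality = sp.Eq(4 - 2 * x - lam, 0)           # condition 2: f'(x) - lam*g'(x) = 0
slackness = sp.Eq(lam * (3 - x), 0)              # condition 3: lam*(b - g(x)) = 0

for sol in sp.solve([optimality, slackness], [x, lam], dict=True):
    if sol[x] <= 3:                              # condition 4: feasibility
        print(sol)   # {x: 2, lam: 0}: the constraint is not binding at the optimum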
Justification for the sign of λi
― The necessary condition that λi ≥ 0 for maximization and λi ≤ 0 for minimization problems is justified by the fact that λi measures the rate of change of f(x) with respect to bi, that is:
λi = ∂f/∂bi
― In the maximization case, as the right-hand side bi of a constraint gi(x) ≤ bi increases, the solution space becomes less constrained and hence f(x) can increase, meaning that λi ≥ 0.
― Similarly, for minimization, as the right-hand side of a constraint increases, f(x) cannot increase, which implies that λi ≤ 0.
― If the constraints are equalities, that is, gi(x) - bi = 0, then λi becomes unrestricted in sign.
Example:
Maximize z = f(x)
subject to
g(x) ≤ b
The Lagrangian function (with slack variable s) is:
L = f(x) - λ [g(x) - b + s²]
Exercises
Write the KKT necessary conditions for the following problems and solve them.
a) Maximize z = f(x1, x2)
subject to
g1(x1, x2) ≤ 5
x1, x2 ≥ 0
b) Minimize z = f(x1, x2)
subject to
g1(x1, x2) ≤ b1
x1, x2 ≥ 0
Quadratic programming
― A quadratic program has all linear constraints, but the objective function f(x) must be quadratic.
― That is, some of the terms in the objective function involve the square of a variable (xj² terms) and/or the product of two variables (xixj terms, i ≠ j).
Example: Solve the previous KKT example.
Solution approaches:
• Formulate an LP using KKT conditions (2) and (4), and satisfy the remaining conditions.
• Write the dual form of the NLP (a more advanced topic).
• Apply phase 1 of the two-phase method while enforcing the complementary slackness condition.
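For a numerical cross-check, a small sketch with SciPy (the problem data reuse the earlier direct-substitution example, with the equality relaxed to ≤ so that the problem is a quadratic program with a linear constraint; that relaxation is our assumption):

from scipy.optimize import minimize

# Sketch: maximize 4x1 - 0.1x1**2 + 5x2 - 0.2x2**2 s.t. x1 + 2x2 <= 40, x >= 0.
obj = lambda x: -(4 * x[0] - 0.1 * x[0] ** 2 + 5 * x[1] - 0.2 * x[1] ** 2)
cons = [{"type": "ineq", "fun": lambda x: 40 - (x[0] + 2 * x[1])}]
res = minimize(obj, x0=[1.0, 1.0], bounds=[(0, None), (0, None)], constraints=cons)
print(res.x, -res.fun)   # ~ [18.33, 10.83], 70.42: the constraint is binding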
Separable programming
― A special case of convex programming, with the one additional assumption that all the f(x) and gi(x) functions are separable functions.
― A separable function is a function in which each term involves just a single variable, so the function separates into a sum of functions of the individual variables.
― Such a problem can be closely approximated by a linear programming problem, so that the extremely efficient simplex method can be used.
― If f(x) is a separable function, it can be expressed as:
f(x1, …, xn) = f1(x1) + f2(x2) + … + fn(xn)
Example: f(x1, x2) = x1² + 3x2 can be separated into f1(x1) = x1² and f2(x2) = 3x2.
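The LP approximation rests on replacing each nonlinear fj(xj) by a piecewise-linear function between chosen breakpoints. A minimal sketch of that idea (the function and breakpoints are assumptions for illustration):

# Piecewise-linear approximation of f(x) = x**2 on [0, 4], the building
# block of separable programming: any x between two breakpoints is a
# convex combination of them, and f is interpolated with the same weights.
breaks = [0, 1, 2, 3, 4]
fvals = [b ** 2 for b in breaks]             # f evaluated at each breakpoint

def approx(x):
    for b0, b1, f0, f1 in zip(breaks, breaks[1:], fvals, fvals[1:]):
        if b0 <= x <= b1:
            w = (x - b0) / (b1 - b0)         # interpolation weight
            return (1 - w) * f0 + w * f1
    raise ValueError("x outside breakpoint range")

print(approx(2.5), 2.5 ** 2)   # 6.5 vs 6.25: the approximation error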