Chapter VII Nonlinear Programming (NLP)
7.3 Graphical Illustration of NLP problems
When an NLP problem has just one or two variables, it can be represented
graphically in the two-dimensional plane.
A graphical representation gives considerable insight into the properties of
optimal solutions for NLP problems, just as it does for LP problems, and it
illustrates the differences between LP and NLP problems.
Example: Consider the LP problem in Chapter 2.
Maximize z = 3x1+ 5x2
subject to
x1 ≤ 4
2x2 ≤ 12
3x1 + 2x2 ≤ 18
x1, x2 ≥ 0
[Figure: the feasible region, with corner points labeled O, A, B, C, and D]
Example 1: Modifying the constraint functions to be nonlinear (quadratic).
Example 2: Modifying the objective function to be nonlinear (quadratic).
Example 3: Modifying the objective function to be another nonlinear form (elliptic).
Local and global optimum values
The most important complication that arises in NLP is that a local
optimum need not be a global optimum (the overall optimal solution).
NLP algorithms generally are unable to distinguish between a local
optimum and a global optimum (except by finding another, better
local optimum).
Over the interval 0 ≤ x ≤ 5, this function has three local maxima,
at x = 0, x = 2, and x = 4, but only one of these, x = 4, is a global
maximum.
Similarly, there are three local minima, at x = 1, 3, and 5,
but only x = 5 is a global minimum.
The figure illustrates a function that is neither concave (curving
downward) nor convex (curving upward), because it alternates
between curving upward and curving downward.
Examples of concave and convex functions (single variable)
Convexity test for functions of a single variable (sufficient conditions)
A twice-differentiable function f(x) is convex if d²f/dx² ≥ 0 for all x, and concave if d²f/dx² ≤ 0 for all x.
Convexity test for functions of two variables (sufficient conditions)
A twice-differentiable function f(x1, x2) is convex if, for all (x1, x2): ∂²f/∂x1² ≥ 0, ∂²f/∂x2² ≥ 0, and (∂²f/∂x1²)(∂²f/∂x2²) − (∂²f/∂x1∂x2)² ≥ 0.
It is concave if instead ∂²f/∂x1² ≤ 0 and ∂²f/∂x2² ≤ 0, with the same third condition.
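As a quick illustration of this test, the second-order conditions can be checked mechanically with a computer algebra system. A minimal sketch in Python using sympy, applied to a function that also appears in the unconstrained example later in this chapter:

```python
# Verify the two-variable concavity test for f = 2*x1*x2 + 2*x2 - x1^2 - 2*x2^2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2

fx1x1 = sp.diff(f, x1, 2)               # -2
fx2x2 = sp.diff(f, x2, 2)               # -4
fx1x2 = sp.diff(f, x1, x2)              # 2
discriminant = fx1x1*fx2x2 - fx1x2**2   # (-2)(-4) - 2^2 = 4

# Concavity test: fx1x1 <= 0, fx2x2 <= 0, and discriminant >= 0 everywhere.
print(fx1x1, fx2x2, discriminant)       # -2 -4 4  ->  f is concave
```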
Convexity test for functions of more than two variables (sufficient conditions)
Formally, a twice-differentiable function of n variables is convex if its Hessian matrix is positive semidefinite for all x, and concave if the Hessian is negative semidefinite for all x.
A convenient way of checking a function of more than two variables,
when it consists of a sum of smaller functions of just one or two
variables each, is:
― If each smaller function is concave, then the overall function is
concave.
― Similarly, the overall function is convex if each smaller function is
convex.
― To illustrate, f(x1, x2, x3) = 4x1 − x1² − (x2 − x3)² is the sum of 4x1 − x1², which is concave, and −(x2 − x3)², which is also concave, so the overall function is concave.
If an NLP problem has no constraints (unconstrained), the objective function
being concave guarantees that a local maximum is a global maximum.
Similarly, the objective function being convex ensures that a local minimum
is a global minimum.
If there are constraints, then one more condition will provide this guarantee,
namely, that the feasible region is a convex set.
A convex set is simply a set of points such that, for each pair of points in the
collection, the entire line segment joining these two points is also in the
collection.
In general, the feasible region for an NLP problem is a convex set whenever
all the gi(x) (for the constraints gi(x) ≤ bi) are convex functions.
The feasible regions of the previous three graphical examples are convex
sets.
Example 4: Modifying the constraint functions to be nonlinear (concave).
Therefore, to guarantee that a local maximum is a global maximum for
an NLP problem with constraints gi(x) ≤ bi (i = 1, 2, . . . , m) and x ≥ 0,
1) The objective function f(x) must be a concave function and
2) Each constraint gi(x) function must be a convex function.
Similarly, to guarantee that a local minimum is a global minimum:
1) The objective function f(x) must be a convex function, and
2) Each constraint function gi(x) must be a convex function.
Such a problem is called a convex programming problem, which is one
of the key types of NLP problems and covers a broad class of problems.
It must be noted that linear functions are both convex and concave.
7.4 Types of NLP problems and solution methods
― NLP problems come in many different shapes and forms.
― Unlike the simplex method for an LP, no single algorithm can solve all
NLP problems; instead, algorithms have been developed for various
individual classes of NLP problems.
1. Unconstrained optimization
Unconstrained optimization problems have no constraints, so the
objective is simply to
Optimize (max or min) z = f(x)
These problems can be classified as:
1) Single variable and
2) Multiple variable
Single variable unconstrained optimization
For unconstrained optimization with just a single variable x (n = 1), where
the differentiable objective function f(x) to be maximized (minimized)
is concave (convex), a sufficient condition, the problem can be solved
analytically by setting the first derivative equal to zero, the necessary
condition. Under these conditions, a local optimum is a global optimum.
If f(x) is neither concave nor convex, or the derivative is not just a
linear or quadratic function, you may not be able to solve the
equation analytically.
In this case, a one-dimensional search procedure provides a
straightforward way of solving the problem numerically: it generates
a sequence of trial solutions that converges toward an optimal solution.
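The following is a minimal sketch of one such one-dimensional search (interval bisection on the derivative), assuming f is concave and differentiable; the function f(x) = 12x − 3x⁴ − 2x⁶ is used purely for illustration:

```python
# One-dimensional (bisection) search: shrink [a, b] around the point
# where f'(x) changes sign, i.e., where f'(x) = 0.
def f_prime(x):
    # Derivative of the illustrative function f(x) = 12x - 3x^4 - 2x^6.
    return 12 - 12*x**3 - 12*x**5

def bisection_search(df, a, b, tol=1e-6):
    while b - a > tol:
        mid = (a + b) / 2
        if df(mid) > 0:      # f still increasing: maximum lies to the right
            a = mid
        else:                # f decreasing: maximum lies to the left
            b = mid
    return (a + b) / 2

x_star = bisection_search(f_prime, 0.0, 2.0)
print(round(x_star, 4))      # ~0.8376
```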
Example: Nonlinear profit analysis
Let us suppose that the dependence of demand (volume v) on price p is
defined by the following linear function:
v = 1,500 − 24.6p
This is more realistic than assuming a fixed volume, since demand
varies as the price increases or decreases.
Substituting this volume-price relationship into the previous profit
equation z = vp − cf − cv·v expresses profit as a function of price alone.
With fixed cost cf = 10,000 and variable cost cv = 8:
z = 1,500p − 24.6p² − 10,000 − 8(1,500 − 24.6p)
 = 1,696.8p − 24.6p² − 22,000 (a concave function)
Setting dz/dp = 1,696.8 − 49.2p = 0 gives the optimal price p = 34.49,
with volume v = 651.6 and profit z = 7,259.45.
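A quick numerical check of these values in Python:

```python
# Verify the optimum of z = 1696.8p - 24.6p^2 - 22000 (concave, since
# d2z/dp2 = -49.2 < 0, so the stationary point is the global maximum).
p_star = 1696.8 / (2 * 24.6)        # dz/dp = 0  ->  p* = 34.49
v_star = 1500 - 24.6 * p_star       # optimal volume = 651.6
z_star = 1696.8 * p_star - 24.6 * p_star**2 - 22000
print(round(p_star, 2), round(v_star, 1), round(z_star, 2))
# 34.49 651.6 7259.45
```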
Multiple variable unconstrained optimization
When the objective function f(x1, x2, …, xn) is concave (convex), a
sufficient condition, it can be maximized (minimized) by setting the
respective partial derivatives of the function equal to zero, the
necessary condition.
When the resulting system of equations cannot be solved analytically,
a numerical search procedure must be used, such as the Newton-Raphson
method or the gradient search procedure.
Example. Consider the following two-variable problem:
Maximize f(x1, x2) = z = 2x1x2 + 2x2 − x1² − 2x2²
Solution
The function is concave, so setting the partial derivatives to zero gives
the global maximum:
∂f/∂x1 = 2x2 − 2x1 = 0 and ∂f/∂x2 = 2x1 + 2 − 4x2 = 0,
which yield x1 = x2 = 1 with f(1, 1) = 1.
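A minimal sketch verifying this solution with sympy, by solving the necessary conditions ∂f/∂x1 = ∂f/∂x2 = 0:

```python
# Solve the two-variable example by setting partial derivatives to zero.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2

grad = [sp.diff(f, v) for v in (x1, x2)]   # [2x2 - 2x1, 2x1 + 2 - 4x2]
solution = sp.solve(grad, [x1, x2])        # {x1: 1, x2: 1}
print(solution, f.subs(solution))          # f(1, 1) = 1
```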
2. Constrained optimization
Constrained optimization problems mostly consist of a nonlinear
objective function and one or more linear or nonlinear constraints.
These can be classified into convex programming (where the feasible
region is a convex set) and non-convex programming.
In convex programming:
Finding a local optimum ensures a global optimum.
In non-convex programming:
Even if you are successful in finding a local maximum, there is no assurance
that it also will be a global maximum.
There is no algorithm that will guarantee finding an optimal solution for all
such problems.
In some special cases (Geometric programming, Fractional programming), the
problem can be reduced to an equivalent convex programming problem.
Convex programming
There are two types of such problems:
1) Multi-variable optimization constrained with equality (=) constraints
2) Multi-variable optimization constrained with inequality constraints
There are two major solution methods for the first category:
a) Direct substitution method
b) Lagrange multiplier method
1. Direct substitution method
This is the least complex method for solving nonlinear programming
problems, but it is restricted to problems with a few decision variables.
Example: The total profit from the production of two items is expressed as:
Maximize z = 4x1 − 0.1x1² + 5x2 − 0.2x2²
subject to
x1 + 2x2 = 40
The constraint gives x1 = 40 − 2x2; substituting into z yields a
single-variable function that can be maximized by setting its derivative
to zero, as sketched below.
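A sketch of this substitution worked out in sympy (the optimum comes out to approximately x1 = 18.33, x2 = 10.83, z = 70.42):

```python
# Direct substitution: use the equality constraint x1 + 2x2 = 40 to
# eliminate x1, then maximize the resulting single-variable function.
import sympy as sp

x2 = sp.symbols('x2')
x1 = 40 - 2*x2                                  # from the constraint
z = 4*x1 - 0.1*x1**2 + 5*x2 - 0.2*x2**2         # substituted objective

x2_star = sp.solve(sp.diff(z, x2), x2)[0]       # dz/dx2 = 13 - 1.2*x2 = 0
x1_star = 40 - 2*x2_star
print(x1_star, x2_star, z.subs(x2, x2_star))    # ~18.33, ~10.83, ~70.42
```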
2. Lagrange multiplier method
Lagrangian function development
Solution
The objective function is convex and the constraints are convex, so a
local optimum is a global optimum.
The Lagrangian function, with multipliers λ1 and λ2 attached to the two
constraints g1(x) = 2 and g2(x) = 5, is:
L(x, λ1, λ2) = f(x) − [λ1(g1(x) − 2) + λ2(g2(x) − 5)]
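A minimal sketch of the method on an illustrative problem of exactly this form (the specific objective and constraints below are assumed for demonstration): minimize f = x1² + x2² + x3² subject to x1 + x2 + 3x3 = 2 and 5x1 + 2x2 + x3 = 5. Because f is quadratic and the constraints are linear, the conditions ∂L/∂xj = 0 together with the two constraints form a linear system:

```python
# Lagrangian method on an assumed example:
#   minimize f = x1^2 + x2^2 + x3^2
#   subject to x1 + x2 + 3x3 = 2 and 5x1 + 2x2 + x3 = 5.
# Setting dL/dxj = 0 and restoring the constraints gives a linear system
# in (x1, x2, x3, lam1, lam2).
import numpy as np

A = np.array([
    [2, 0, 0, -1, -5],   # dL/dx1: 2x1 - lam1 - 5*lam2 = 0
    [0, 2, 0, -1, -2],   # dL/dx2: 2x2 - lam1 - 2*lam2 = 0
    [0, 0, 2, -3, -1],   # dL/dx3: 2x3 - 3*lam1 - lam2 = 0
    [1, 1, 3,  0,  0],   # g1: x1 + x2 + 3x3 = 2
    [5, 2, 1,  0,  0],   # g2: 5x1 + 2x2 + x3 = 5
], dtype=float)
b = np.array([0, 0, 0, 2, 5], dtype=float)

x1, x2, x3, lam1, lam2 = np.linalg.solve(A, b)
print(x1, x2, x3, lam1, lam2)   # ~0.804, 0.348, 0.283, 0.087, 0.304
```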
Exercises
Solve the following problems by using the Lagrangian method:
a) Maximize z = …
subject to
… = 6
… = 9
x ≥ 0
b) Minimize z = …
subject to
… = 5
… = 7
x ≥ 0
Multi-variable optimization constrained with inequality
The KKT necessary conditions for a maximization problem (maximize f(x)
subject to gi(x) ≤ bi, i = 1, 2, …, m) are summarized as:
1) λi ≥ 0 for all i (nonnegativity)
2) ∂f/∂xj − Σi λi ∂gi/∂xj = 0 for j = 1, 2, …, n (optimality)
3) λi [gi(x) − bi] = 0 for all i (complementary slackness)
4) gi(x) ≤ bi for all i (feasibility)
From conditions 2 and 3, the number of equations and the number of
unknown variables (n + m) are equal, so they can be solved
simultaneously as a set of equations.
These conditions apply to the minimization case as well,
except that λi must be non-positive.
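A minimal sketch of checking these conditions on an illustrative problem (assumed here for demonstration): maximize f = 10x1 − x1² + 10x2 − x2² subject to x1 + x2 ≤ 8, x ≥ 0. Since the unconstrained maximum (5, 5) is infeasible, the constraint is treated as binding (λ > 0), and conditions 2 and 3 are solved as equations:

```python
# Check the KKT conditions on an assumed example:
#   maximize f = 10x1 - x1^2 + 10x2 - x2^2  subject to  x1 + x2 <= 8.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = 10*x1 - x1**2 + 10*x2 - x2**2
g = x1 + x2                       # constraint: g <= 8

# Case: constraint binding (lam > 0). Condition 3 then forces g = 8, and
# condition 2 gives the stationarity equations.
eqs = [sp.diff(f, x1) - lam*sp.diff(g, x1),
       sp.diff(f, x2) - lam*sp.diff(g, x2),
       g - 8]
sol = sp.solve(eqs, [x1, x2, lam], dict=True)[0]
print(sol)  # {x1: 4, x2: 4, lam: 2}: lam >= 0 and feasible, so KKT holds
```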
Justification for the values of λ
― The necessary condition that λi ≥ 0 for maximization and λi ≤ 0 for
minimization problems is justified by the fact that λi measures the rate of
change of the optimal value of f(x) with respect to the right-hand side bi,
that is, λi = ∂f/∂bi. For a maximization problem, increasing bi enlarges the
feasible region defined by gi(x) ≤ bi, so the optimal value of f cannot
decrease, and λi cannot be negative.
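This interpretation can be demonstrated numerically on the equality-constrained Lagrangian sketch above: perturbing the right-hand side b1 and re-solving shows that the change in the optimal f per unit change in b1 matches λ1:

```python
# Numerically illustrate that lam_i = df/db_i, reusing the assumed
# Lagrangian example from the sketch above.
import numpy as np

def solve_for_b1(b1):
    A = np.array([[2, 0, 0, -1, -5],
                  [0, 2, 0, -1, -2],
                  [0, 0, 2, -3, -1],
                  [1, 1, 3,  0,  0],
                  [5, 2, 1,  0,  0]], dtype=float)
    b = np.array([0, 0, 0, b1, 5], dtype=float)
    x1, x2, x3, lam1, lam2 = np.linalg.solve(A, b)
    return x1**2 + x2**2 + x3**2, lam1

f0, lam1 = solve_for_b1(2.0)
f1, _ = solve_for_b1(2.01)
print((f1 - f0) / 0.01, lam1)   # both ~0.087 = 2/23
```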
Exercises
Write the KKT necessary conditions for the following problems and solve
them.
a) Maximize z = …
subject to
… ≤ 5
x ≥ 0
b) Minimize z = …
subject to
…
x ≥ 0
Quadratic programming
― A quadratic programming problem has all linear constraints, but the
objective function f(x) must be quadratic.
― Some of the terms in the objective function involve the square of a
variable (xj²) and/or the product of two variables (xixj, i ≠ j).
Example: Solve the previous KKT example.
Solution:
Formulate an LP using the KKT conditions (2) and (4), and then satisfy
the remaining conditions:
Write the dual form of the NLP (a more advanced topic).
Apply phase 1 of the two-phase method, introducing a complementary
variable for each condition.
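Since the slide's own example is solved with the KKT-based LP procedure, the following sketch only cross-checks an illustrative quadratic program (objective and constraint assumed for demonstration) with a general-purpose solver, not with the modified simplex method itself:

```python
# An assumed example QP: maximize f = 15x1 + 30x2 + 4x1x2 - 2x1^2 - 4x2^2
# subject to x1 + 2x2 <= 30, x >= 0 (linear constraints, quadratic objective).
from scipy.optimize import minimize

def neg_f(x):
    x1, x2 = x
    return -(15*x1 + 30*x2 + 4*x1*x2 - 2*x1**2 - 4*x2**2)

res = minimize(
    neg_f, x0=[0.0, 0.0],
    constraints=[{'type': 'ineq', 'fun': lambda x: 30 - (x[0] + 2*x[1])}],
    bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # ~(12, 9) with f ~ 270; constraint binding
```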
Separable programming
― It is a special case of convex programming, with the one additional
assumption that all the f(x) and gi(x) functions are separable
functions.
― A separable function is a function in which each term involves just a
single variable, so the function separates into a sum of functions of
individual variables.
― Such a problem can be closely approximated by a linear programming
problem, so that the extremely efficient simplex method can be used.
― If f(x) is a separable function, it can be expressed as
f(x) = f1(x1) + f2(x2) + … + fn(xn),
where each fj(xj) is a function of the single variable xj only.
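A minimal sketch of why the LP approximation works, on an assumed one-variable example: the concave term f(x) = 4x − x² is replaced by a piecewise-linear function over a grid of breakpoints, and the resulting model is solved as an LP:

```python
# Separable programming sketch: approximate the concave term f(x) = 4x - x^2
# over 0 <= x <= 4 with breakpoints b_k and weights w_k (lambda-formulation):
#   x = sum(w_k * b_k),  f ~ sum(w_k * f(b_k)),  sum(w_k) = 1,  w_k >= 0.
import numpy as np
from scipy.optimize import linprog

breakpoints = np.array([0, 1, 2, 3, 4], dtype=float)
f_vals = 4*breakpoints - breakpoints**2          # f at each breakpoint

# linprog minimizes, so minimize -sum(w_k * f(b_k)) s.t. sum(w_k) = 1.
res = linprog(c=-f_vals,
              A_eq=np.ones((1, len(breakpoints))), b_eq=[1.0],
              bounds=[(0, None)] * len(breakpoints))

x_approx = breakpoints @ res.x
print(x_approx, -res.fun)   # x ~ 2, f ~ 4 (the true maximum of 4x - x^2)
```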