Nonlinear Programming
Solving NLPs with One Variable
to accompany
Operations Research: Applications and Algorithms
4th edition
by Wayne L. Winston
◼ There are three types of points at which the NLP can have a local maximum or minimum (these points are often called extremum candidates).
Case 1. Points where a < x < b and f′(x) = 0 [called a stationary point of f(x)].
Case 2. Points where f′(x) does not exist.
Case 3. Endpoints a and b of the interval [a, b].
Case 1
◼ Suppose a < x₀ < b and f′(x₀) exists. If x₀ is a local maximum or a local minimum, then f′(x₀) = 0.
How to Determine Whether x₀ Is a Local Maximum or a Local Minimum When f′(x₀) Exists
◼ Theorem: If f′(x₀) = 0 and f″(x₀) < 0, then x₀ is a local maximum.
◼ Theorem: If f′(x₀) = 0 and f″(x₀) > 0, then x₀ is a local minimum.
◼ If f′(x₀) = 0 and f″(x₀) = 0, the test is inconclusive, and higher-order derivatives must be examined.
Case 2. Points Where f′(x) Does Not Exist
◼ If f(x) is continuous at x₀ but f′(x₀) does not exist, x₀ can still be a local extremum: x₀ is a local maximum if f′(x) > 0 just to the left of x₀ and f′(x) < 0 just to the right, and a local minimum if these signs are reversed.
Case 3. Endpoints a and b of [a, b]
◼ The endpoint a is a local maximum if f′(a) < 0 and a local minimum if f′(a) > 0; likewise, b is a local maximum if f′(b) > 0 and a local minimum if f′(b) < 0.
Example 21: Profit Maximization by Monopolist
◼ It costs a monopolist $5/unit to produce a product.
◼ If he produces x units of the product, then each can be sold for 10 − x dollars.
◼ To maximize profit, how much should the monopolist produce?
Example 21: Solution
◼ The monopolist's profit is P(x) = x(10 − x) − 5x = 5x − x², so he wants to solve the NLP
max P(x) = 5x − x²
s.t. 0 ≤ x ≤ 10
◼ P′(x) = 5 − 2x = 0 gives the stationary point x = 2.5, and P″(x) = −2 < 0, so x = 2.5 is a local maximum.
◼ At the endpoints, P(0) = 0 and P(10) = −50, both less than P(2.5) = 6.25, so the monopolist should produce 2.5 units.
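A quick numeric check of this result, written as a minimal Python sketch (my own illustration, not from Winston): it walks the extremum-candidate checklist, comparing the stationary point from Case 1 with the endpoints from Case 3.

```python
# Profit for the monopolist: revenue x(10 - x) minus cost 5x.
P = lambda x: x * (10 - x) - 5 * x

# Extremum candidates: the stationary point x = 2.5 (Case 1) and the
# endpoints of [0, 10] (Case 3); P'(x) = 5 - 2x exists everywhere,
# so Case 2 contributes no candidates.
candidates = [2.5, 0, 10]
print(max(candidates, key=P))   # -> 2.5, with profit P(2.5) = 6.25
```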
Finding Global Maximum When Endpoint Is a Maximum (Winston Example 22)
Problem 6
Methods for One-Variable Unconstrained Optimization
◼ A number of search procedures are available for solving the problem numerically:
Newton’s Method
Bisection Method
Golden Section Search Method
Newton’s Method
◼ Fit a quadratic approximation to f(x) using both gradient and curvature information at x.
◼ Expand f(x) locally using a Taylor series:
f(x) ≈ f(xᵢ) + f′(xᵢ)(x − xᵢ) + ½f″(xᵢ)(x − xᵢ)²
◼ Maximizing this quadratic approximation (setting its derivative to zero) yields the next trial solution:
xᵢ₊₁ = xᵢ − f′(xᵢ)/f″(xᵢ)
When to stop
◼ Iterations generating new trial solutions in this
way would continue until these solutions have
essentially converged.
1. One criterion for convergence is that |xᵢ₊₁ − xᵢ| has become sufficiently small.
2. |f(xᵢ₊₁) − f(xᵢ)| is sufficiently small.
3. f′(x) is sufficiently close to zero.
Summary of Newton’s Method
◼ Initialization: Select the tolerance ϵ. Find an initial trial solution x₁ by inspection. Set i = 1.
◼ Iteration i:
1. Calculate f′(xᵢ) and f″(xᵢ). [Calculating f(xᵢ) is optional.]
2. Set xᵢ₊₁ = xᵢ − f′(xᵢ)/f″(xᵢ).
3. If |xᵢ₊₁ − xᵢ| < ϵ, stop; otherwise set i = i + 1 and repeat.
Example
◼ Suppose that the function to be maximized is
f(x) = 12x − 3x⁴ − 2x⁶
◼ Solution
df(x)/dx = 12(1 − x³ − x⁵)
d²f(x)/dx² = −12(3x² + 5x⁴)
◼ Begin with the initial trial solution x₁ = 1.
◼ Iteration 1
x₁ = 1, f′(1) = −12, f″(1) = −96
x₂ = x₁ − f′(x₁)/f″(x₁) = 1 − (−12)/(−96) = 0.875
◼ Iteration 2
x₂ = 0.875, f′(0.875) ≈ −2.19, f″(0.875) ≈ −62.73
x₃ = x₂ − f′(x₂)/f″(x₂) = 0.875 − (−2.19)/(−62.73) ≈ 0.8400
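The summary and example above translate directly into code. Below is a minimal Python sketch (the slides specify no language; the function names, tolerance, and iteration cap are illustrative choices), using stopping criterion 1, |xᵢ₊₁ − xᵢ| < ϵ.

```python
def newton(f_prime, f_double_prime, x, eps=1e-5, max_iter=100):
    """Newton's method for one-variable optimization: iterate
    x_{i+1} = x_i - f'(x_i)/f''(x_i) until |x_{i+1} - x_i| < eps."""
    for _ in range(max_iter):
        x_next = x - f_prime(x) / f_double_prime(x)
        if abs(x_next - x) < eps:
            return x_next
        x = x_next
    return x

# The example above: f(x) = 12x - 3x^4 - 2x^6, starting from x1 = 1.
f_prime = lambda x: 12 * (1 - x**3 - x**5)
f_double_prime = lambda x: -12 * (3 * x**2 + 5 * x**4)
print(newton(f_prime, f_double_prime, 1.0))  # converges to about 0.83762
```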
Bisection Method
◼ Can always be applied when f(x) is concave.
◼ It can also be used for certain other functions.
◼ If x* denotes the optimal solution, all that is needed is that
f′(x) > 0 if x < x*
f′(x) = 0 if x = x*
f′(x) < 0 if x > x*
Bisection Method - Overview
◼ Given two values a < b with f′(a) > 0 and f′(b) < 0:
x′: current trial solution
a: current lower bound on x*
b: current upper bound on x*
ϵ: error tolerance for x*
◼ Find the midpoint x′ = (a + b)/2 and evaluate f′(x′).
◼ If f′(x′) > 0, set a = x′; if f′(x′) < 0, set b = x′. Stop when b − a ≤ 2ϵ, and take the final midpoint as the estimate of x*; see the sketch below.
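A minimal Python sketch of this procedure (names and tolerance are my own choices). The stopping rule b − a ≤ 2ϵ guarantees the final midpoint lies within ϵ of x*. It is set up with f(x) = 12x − 3x⁴ − 2x⁶ on [0, 2], whose first iterations are worked out on the slides that follow.

```python
def bisection(f_prime, a, b, eps):
    """Bisection for a maximum: assumes f'(a) > 0 and f'(b) < 0,
    so x* lies in [a, b]; the interval is halved each iteration."""
    while b - a > 2 * eps:
        x = (a + b) / 2            # current trial solution
        if f_prime(x) > 0:         # x* lies to the right of x
            a = x
        else:                      # x* lies to the left of x
            b = x
    return (a + b) / 2             # within eps of x*

# The iterations below: f(x) = 12x - 3x^4 - 2x^6 on [0, 2].
f_prime = lambda x: 12 * (1 - x**3 - x**5)
print(bisection(f_prime, 0.0, 2.0, 0.01))  # about 0.836, near x* = 0.83762
```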
◼ Iteration 1: a = 0, b = 2
f′(a) = 12, f′(b) = −468
x′ = (a + b)/2 = (0 + 2)/2 = 1
f′(x′) = −12 < 0, so set b = 1
◼ Iteration 2: a = 0, b = 1
f′(a) = 12, f′(b) = −12
x′ = (0 + 1)/2 = 0.5
f′(x′) = 10.125 > 0, so set a = 0.5
◼ Iteration 3: a = 0.5, b = 1
f′(a) = 10.125, f′(b) = −12
x′ = (0.5 + 1)/2 = 0.75
f′(x′) = 4.09 > 0, so set a = 0.75
◼ Iteration 4: a = 0.75, b = 1
f′(a) = 4.09, f′(b) = −12
x′ = (0.75 + 1)/2 = 0.875
f′(x′) = −2.19 < 0, so set b = 0.875
◼ Iteration 5: a = 0.75, b = 0.875
f′(a) = 4.09, f′(b) = −2.19
x′ = (0.75 + 0.875)/2 = 0.8125
f′(x′) = 1.31 > 0, so set a = 0.8125
11.5 Golden Section Search
◼ The Golden Section Method can be used if the function is a unimodal function.
◼ A function f(x) is unimodal on [a, b] if for some point x̄ on [a, b], f(x) is strictly increasing on [a, x̄] and strictly decreasing on [x̄, b].
◼ The optimal solution of the NLP is some point on the interval [a, b]. By evaluating f(x) at two points x₁ and x₂ on [a, b], we may reduce the size of the interval in which the solution to the NLP must lie.
◼ The values can be calculated with
x₁ = b − r(b − a)
x₂ = a + r(b − a)
where r = (√5 − 1)/2 ≈ 0.618 (the golden ratio).
◼ The selection of x₁ and x₂ guarantees that a < x₁ < x₂ < b.
◼ Interval before iteration i: Iᵢ = [a, b]. Interval after: Iᵢ₊₁ = (x₁, b] or Iᵢ₊₁ = [a, x₂).
◼ After evaluating f(x₁) and f(x₂), one of these cases must occur. It can be shown in each case that the optimal solution x̄ will lie in a subset of [a, b].
◼ Case 1 - f(x₁) < f(x₂) and x̄ ∈ (x₁, b]
◼ Case 2 - f(x₁) = f(x₂) and x̄ ∈ [a, x₂)
◼ Case 3 - f(x₁) > f(x₂) and x̄ ∈ [a, x₂)
◼ Many search algorithms use these ideas to reduce the interval of uncertainty. Most of these algorithms proceed as follows:
1. Begin with the region of uncertainty for x being [a, b]. Evaluate f(x) at two reasonably chosen points x₁ and x₂.
2. Determine which of Cases 1-3 holds, and find a reduced interval of uncertainty.
3. Evaluate f(x) at two new points (the algorithm specifies how the two new points are chosen). Return to step 2 unless the length of the interval of uncertainty is sufficiently small.
◼ Spreadsheets or programming software can be used to conduct Golden Section Search, as in the sketch below.
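For instance, here is a minimal Python sketch of Golden Section Search (my own naming and structure; the slides only note that software can be used). A standard design choice, used here, is to reuse the surviving interior point each iteration, so only one new evaluation of f(x) is needed. The data match Example 25 below: f(x) = −x² − 1 on [−1, 0.75] with ϵ = 0.25.

```python
import math

R = (math.sqrt(5) - 1) / 2   # golden ratio r = 0.618...

def golden_section(f, a, b, eps):
    """Golden Section Search for the maximum of a unimodal f on [a, b].

    Shrinks the interval of uncertainty by the factor r each iteration,
    stopping once its length is below eps."""
    x1, x2 = b - R * (b - a), a + R * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a >= eps:
        if f1 < f2:                  # Case 1: solution in (x1, b]
            a, x1, f1 = x1, x2, f2   # old x2 becomes the new x1
            x2 = a + R * (b - a)
            f2 = f(x2)
        else:                        # Cases 2-3: solution in [a, x2)
            b, x2, f2 = x2, x1, f1   # old x1 becomes the new x2
            x1 = b - R * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Example 25: maximize f(x) = -x^2 - 1 over [-1, 0.75] with eps = 0.25.
print(golden_section(lambda x: -x**2 - 1, -1.0, 0.75, 0.25))  # about 0.003
```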
Stopping criterion
◼ The algorithm can be stopped at iteration k when
rᵏ(b − a) < ϵ
◼ This can be used to calculate the number of iterations k the algorithm requires to make Lₖ < ϵ, where Lₖ is the length of the interval of uncertainty after k iterations:
As L₂ = rL₁ and L₁ = r(b − a), then L₂ = r²(b − a).
In general, Lₖ = rLₖ₋₁, so Lₖ = rᵏ(b − a).
Example 25 Winston
◼ Use Golden Section Search to find the approximate maximum of f(x) = −x² − 1 on the interval [−1, 0.75], with a final interval of uncertainty of length less than ϵ = 0.25.
Solution
◼ Here a = −1, b = 0.75, and b − a = 1.75. To determine the number k of iterations that must be performed, we solve for k using
1.75(0.618)ᵏ < 0.25
◼ Taking logarithms to base e of both sides, we obtain
k ln 0.618 < ln(1/7)
k(−0.48) < −1.95
k > 1.95/0.48 = 4.06
◼ Thus, five iterations of Golden Section Search must be performed.
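The iteration count can also be checked numerically; a two-line Python sketch (my addition) finds the smallest integer k with 1.75(0.618)ᵏ < 0.25:

```python
import math
# Smallest k with 1.75 * 0.618**k < 0.25, i.e. k > ln(1/7) / ln(0.618):
print(math.ceil(math.log(0.25 / 1.75) / math.log(0.618)))  # -> 5
```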
Solution 1st iteration
◼ First calculate x₁ and x₂:
x₁ = 0.75 − (0.618)(1.75) = −0.3315
x₂ = −1 + (0.618)(1.75) = 0.0815
◼ Then calculate f(x₁) and f(x₂):
f(x₁) = −(−0.3315)² − 1 = −1.1099
f(x₂) = −(0.0815)² − 1 = −1.0066
◼ Because f(x₁) < f(x₂), the new interval of uncertainty is
I₁ = (x₁, b] = (−0.3315, 0.75]
Solution 2nd iteration
◼ First calculate x₁ and x₂:
x₁ = 0.75 − 0.618(0.75 − (−0.3315)) = 0.0815
x₂ = −0.3315 + 0.618(0.75 − (−0.3315)) = 0.3369
◼ Then calculate f(x₁) and f(x₂):
f(x₁) = −(0.0815)² − 1 = −1.0066
f(x₂) = −(0.3369)² − 1 = −1.1135
◼ Because f(x₁) > f(x₂), the new interval of uncertainty is
I₂ = [a, x₂) = [−0.3315, 0.3369)
a        b       x₁        x₂        f(x₁)    f(x₂)    Case             New interval
−1       0.75    −0.3315   0.0815    −1.1099  −1.0066  f(x₁) < f(x₂)    (−0.3315, 0.75]
−0.3315  0.75    0.0815    0.3369    −1.0066  −1.1135  f(x₁) > f(x₂)    [−0.3315, 0.3369)
−0.3315  0.3369  −0.0762   0.0815    −1.0058  −1.0066  f(x₁) > f(x₂)    [−0.3315, 0.0815)
−0.3315  0.0815  −0.1737   −0.0762   −1.0302  −1.0058  f(x₁) < f(x₂)    (−0.1737, 0.0815]
−0.1737  0.0815  −0.0762   −0.016    −1.0058  −1.0003  f(x₁) < f(x₂)    (−0.0762, 0.0815]
◼ After five iterations, the interval of uncertainty (−0.0762, 0.0815] has length 0.1577 < 0.25, and it contains the maximizer x̄ = 0.