Nonlinear programming algorithm: Unconstrained optimization
Nur Aini Masruroh
Scope
- One-dimensional unconstrained optimization
- Direct method (function-value-based method): bracketing method
- Indirect method: Newton method
- Example: minimize f(x) = (x – 100)²
Bracketing method: example (cont'd)
Minimize f(x) = (x – 100)²
Consider a sequence of x given by the following formula:
x_{k+1} = x_k + δ·2^(k-1)
Set δ = 1 and x₁ = 0:

x    | 0     | 1    | 3    | 7    | 15   | 31   | 63   | 127 | 255
f(x) | 10000 | 9801 | 9409 | 8649 | 7225 | 4761 | 1369 | 729 | 24025

Since f(x) increases again at x = 255, the new bracket containing the minimum is [63, 255].
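The doubling search above can be sketched as follows (a minimal sketch; the function name and the choice of returning the last three points' outer pair are illustrative, not from the slides):

```python
def bracket_minimum(f, x1=0.0, delta=1.0):
    """Expand steps as x_{k+1} = x_k + delta * 2**(k-1) until f starts
    increasing; the minimum then lies between the last three points."""
    xs = [x1, x1 + delta]          # k = 1 step: x2 = x1 + delta * 2**0
    k = 2
    while f(xs[-1]) < f(xs[-2]):   # keep doubling while f still decreases
        xs.append(xs[-1] + delta * 2 ** (k - 1))
        k += 1
    # the minimum is bracketed by the last three points visited
    return xs[-3], xs[-1]

lo, hi = bracket_minimum(lambda x: (x - 100.0) ** 2)
print(lo, hi)  # the bracket [63, 255] contains x* = 100
```

A bisection or golden-section search would then shrink this bracket further.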
Minimize f(x) = x² – x
f′(x) = 2x – 1
f″(x) = 2
[Flowchart: compute x_{k+1}; if x_{k+1} is optimum, stop (yes); otherwise iterate (no)]
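For one variable, the Newton iteration x_{k+1} = x_k – f′(x_k)/f″(x_k) can be sketched as (names and the convergence test on the step size are illustrative):

```python
def newton_1d(fp, fpp, x0, tol=1e-8, max_iter=50):
    """Newton's method in one dimension: fp and fpp are the first and
    second derivatives of the objective."""
    x = x0
    for _ in range(max_iter):
        step = fp(x) / fpp(x)   # Newton step f'(x) / f''(x)
        x -= step
        if abs(step) < tol:     # stop once the step is negligible
            break
    return x

# f(x) = x**2 - x, so f'(x) = 2x - 1 and f''(x) = 2
x_star = newton_1d(lambda x: 2 * x - 1, lambda x: 2.0, x0=3.0)
print(x_star)  # 0.5 (exact in one step, since f is quadratic)
```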
- Search direction:
  Differentiate f(x) and set ∇f(x) = 0,
  i.e. ∇f(x) = ∇f(x_k) + H(x_k)Δx_k = 0,
  then Δx_k = -[H(x_k)]⁻¹ ∇f(x_k)
  S_k = Δx_k
Newton method (cont'd)
Procedure:
1. Set an initial point x₀
2. Calculate f(x_k), ∇f(x_k), H(x_k)
3. Set the search direction as S_k = Δx_k = -[H(x_k)]⁻¹ ∇f(x_k)
4. Set x_{k+1} = x_k + Δx_k
5. Go to step 2 until the convergence criteria are met
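The procedure above can be sketched for two variables using the explicit 2×2 inverse formula (a minimal sketch; the function names and the convergence test on the step size are assumptions, not from the slides):

```python
def newton_2d(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method for two variables; grad returns [g1, g2] and
    hess returns [[a, b], [c, d]] at a point."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)                        # step 2: gradient at x_k
        (a, b), (c, d) = hess(x)
        det = a * d - b * c                # H must be nonsingular
        # step 3: dx = -H^{-1} g, using the 2x2 inverse (1/det)[[d,-b],[-c,a]]
        dx = [-( d * g[0] - b * g[1]) / det,
              -(-c * g[0] + a * g[1]) / det]
        x = [x[0] + dx[0], x[1] + dx[1]]   # step 4: x_{k+1} = x_k + dx
        if max(abs(v) for v in dx) < tol:  # step 5: convergence check
            break
    return x

# Example 1 below: f(x) = 4*x1**2 + x2**2 - 2*x1*x2 from x0 = [1, 1]
grad = lambda x: [8 * x[0] - 2 * x[1], 2 * x[1] - 2 * x[0]]
hess = lambda x: [[8.0, -2.0], [-2.0, 2.0]]
print(newton_2d(grad, hess, [1.0, 1.0]))  # [0.0, 0.0] in a single step
```

Because the objective is quadratic, the Hessian is constant and one Newton step lands exactly on the minimizer.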
Newton method: remarks
- If H(x_k) = I, the method reduces to the gradient method (steepest descent)
- The Newton method generally works better than the gradient method because it uses a second-order approximation
- It has the fastest convergence rate (second order)
- The search direction requires inverting a matrix, which is time-consuming
- It can fail if:
  - x₀ is far from the optimum
  - H(x) is singular at some points
  - x_k is a saddle point (H(x_k) is indefinite)
Newton method: example 1
Minimize f(x) = 4x₁² + x₂² – 2x₁x₂, using the starting point x₀ = [1 1]ᵀ
Solution:
∇f(x_k) = [8x₁ – 2x₂, 2x₂ – 2x₁]ᵀ
H(x_k) = [ 8  -2 ]
         [ -2  2 ]
Iteration 0:
For x₀ = [1 1]ᵀ, ∇f(x₀) = [6 0]ᵀ
∇f(x₀) + H(x₀)Δx₀ = 0:
[ 6 ]   [ 8  -2 ] [ Δx₁ ]   [ 0 ]
[ 0 ] + [ -2  2 ] [ Δx₂ ] = [ 0 ]
which gives Δx₀ = [-1 -1]ᵀ, so x₁ = x₀ + Δx₀ = [0 0]ᵀ.
Iteration 1:
For x₁ = [0 0]ᵀ, ∇f(x₁) = [0 0]ᵀ, so the current point is already optimal.
Newton method: example 2
Minimize the non-quadratic function
f(x) = (x₁ – 2)⁴ + (x₁ – 2)²x₂² + (x₂ + 1)², using the starting point x₀ = [1 1]ᵀ → f(x₀) = 6
Solution:
∇f(x_k) = [ 4(x₁ – 2)³ + 2(x₁ – 2)x₂² ]
          [ 2(x₁ – 2)²x₂ + 2(x₂ + 1)  ]
H(x_k) = [ 12(x₁ – 2)² + 2x₂²   4(x₁ – 2)x₂    ]
         [ 4(x₁ – 2)x₂          2(x₁ – 2)² + 2 ]
Iteration 0:
For x₀ = [1 1]ᵀ, ∇f(x₀) = [-6 6]ᵀ and
H(x₀) = [ 14  -4 ]
        [ -4   4 ]
∇f(x₀) + H(x₀)Δx₀ = 0:
[ -6 ]   [ 14  -4 ] [ Δx₁ ]   [ 0 ]
[  6 ] + [ -4   4 ] [ Δx₂ ] = [ 0 ]
which gives Δx₀ = [0 -3/2]ᵀ, so x₁ = x₀ + Δx₀ = [1 -1/2]ᵀ.
Iteration 1:
Please continue on your own; for iteration 1 use x₁ = [1 -1/2]ᵀ, and so on for the subsequent iterations.
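The remaining iterations can be carried out numerically. The sketch below hard-codes the gradient and Hessian derived above and applies the Newton step via the 2×2 inverse formula (the helper name and iteration count are illustrative, not part of the slides):

```python
def newton_step(x):
    """One Newton step for f(x) = (x1-2)**4 + (x1-2)**2*x2**2 + (x2+1)**2."""
    g = [4 * (x[0] - 2) ** 3 + 2 * (x[0] - 2) * x[1] ** 2,
         2 * (x[0] - 2) ** 2 * x[1] + 2 * (x[1] + 1)]
    a = 12 * (x[0] - 2) ** 2 + 2 * x[1] ** 2   # Hessian entries
    b = 4 * (x[0] - 2) * x[1]                  # (symmetric: H12 = H21 = b)
    d = 2 * (x[0] - 2) ** 2 + 2
    det = a * d - b * b
    # x_{k+1} = x_k - H^{-1} g, using the 2x2 inverse (1/det)[[d,-b],[-b,a]]
    return [x[0] - ( d * g[0] - b * g[1]) / det,
            x[1] - (-b * g[0] + a * g[1]) / det]

x = [1.0, 1.0]
for k in range(10):
    x = newton_step(x)
    print(k + 1, x)   # iteration 1 gives [1.0, -0.5], as derived above
# x approaches the minimizer [2, -1], where f = 0
```

Unlike example 1, convergence takes several iterations here because the objective is not quadratic, so the Hessian changes from point to point.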
General comments
- Indirect methods have generally proven more efficient than direct methods in experimental testing. However,
- the major difficulty with indirect methods is obtaining the derivatives analytically, which is sometimes difficult or impractical
- this disadvantage may be overcome by replacing the derivatives with finite-difference approximations
- like any other local method, this method cannot guarantee finding the global optimum when multiple local solutions exist
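The finite-difference substitution mentioned above can be sketched with a central-difference gradient (the function name and step size h are illustrative assumptions):

```python
def fd_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x by central differences:
    g_i ~ (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

# Check against example 1, where the analytic gradient at [1, 1] is [6, 0]
f = lambda x: 4 * x[0] ** 2 + x[1] ** 2 - 2 * x[0] * x[1]
print(fd_gradient(f, [1.0, 1.0]))  # close to [6, 0]
```

The same idea applies to the Hessian, at the cost of extra function evaluations and some loss of accuracy from the truncation and round-off error of the differences.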