
Unconstrained NLP Algorithm

The document discusses various algorithms for unconstrained optimization problems, including both direct and indirect methods. Direct methods like bracketing methods search for optima by comparing function values at trial points, while indirect or derivative-based methods use derivatives at each iteration to determine potential optima. Specific methods covered include the Newton method, gradient/steepest descent method, and bracketing methods. Examples are provided to illustrate how these various algorithms work to find the minimum or maximum of both one-dimensional and multidimensional functions.


Non-linear programming algorithm: Unconstrained optimization
Nur Aini Masruroh
Scope
- One-dimensional unconstrained optimization
  - Direct method / function-value-based method: bracketing method
  - Indirect method: Newton method
- Multidimensional unconstrained optimization
  - Gradient method: steepest descent (ascent) method
  - Newton method
Two different methods for searching for optimal solutions
- Function-value-based methods
  - Search for the optimum by comparing function values at a sequence of trial points
- Derivative-based methods
  - Involve derivatives at each iteration stage and determine potential optima by the necessary condition
General procedure of function-value-based methods
For a minimization problem:
1. Start with an initial value, x_0
2. Calculate f(x_0)
3. Update x_0 to obtain the next point, x_1
4. Calculate f(x_1)
5. Make sure f(x_{k+1}) < f(x_k) at each stage
6. Stop when |f(x_{k+1}) - f(x_k)| < ε, where ε is a pre-specified tolerance

Note: for maximization, minimize -f(x) instead.
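A minimal sketch of steps 1-6 in Python. The slides leave the rule for generating the next trial point open, so this sketch assumes a simple reverse-and-halve stepping rule as a placeholder (the bracketing method below is one concrete alternative):

```python
# Minimal sketch of the function-value-based loop above.
# The rule for producing the next trial point is an assumption here
# (fixed step; reverse and halve whenever f fails to decrease).

def value_based_minimize(f, x0, step=1.0, eps=1e-8, max_iter=10000):
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        x_new = x + step                 # step 3: next trial point
        f_new = f(x_new)                 # step 4
        if f_new >= fx:                  # step 5: require f(x_{k+1}) < f(x_k)
            step = -step / 2             # otherwise reverse and shrink the step
            continue
        if abs(f_new - fx) < eps:        # step 6: stopping rule
            return x_new, f_new
        x, fx = x_new, f_new
    return x, fx

# Example: minimize f(x) = (x - 100)^2 from x0 = 0
x_min, f_min = value_based_minimize(lambda x: (x - 100.0)**2, 0.0)
print(x_min, f_min)   # close to 100, 0
```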


Bracketing method
- To avoid an excessive search over a wide range, a bracketing method can be applied:
  - First assume an initial bracket, Δ_0, which should contain the optimum of the function
  - Then determine the reduced bracket, Δ_k, at stage k
  - This procedure is repeated until an optimum is found
- Example: minimize f(x) = (x - 100)^2
Bracketing method: example (cont'd)
minimize f(x) = (x - 100)^2
Consider a sequence of x given by the following formula:
x_{k+1} = x_k + δ·2^{k-1}
Set δ = 1 and x_1 = 0

x    |     0 |    1 |    3 |    7 |   15 |   31 |   63 | 127 |   255
f(x) | 10000 | 9801 | 9409 | 8649 | 7225 | 4761 | 1369 | 729 | 24025

f decreases up to x = 127 and increases again at x = 255, so the new bracket containing the minimum is [63, 255].
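A sketch of this bracket-expansion phase under the doubling rule above; the helper name is illustrative:

```python
# Take doubling steps x_{k+1} = x_k + delta * 2^(k-1) until f(x) starts
# to increase; the minimum then lies between the last point where f
# decreased and the first point where f increased.

def expand_bracket(f, x1=0.0, delta=1.0, max_steps=60):
    xs = [x1]
    fs = [f(x1)]
    for k in range(1, max_steps + 1):
        xs.append(xs[-1] + delta * 2**(k - 1))   # x_{k+1} = x_k + delta*2^(k-1)
        fs.append(f(xs[-1]))
        if fs[-1] > fs[-2]:                      # f went up: minimum is bracketed
            lo = xs[-3] if len(xs) >= 3 else xs[0]
            return lo, xs[-1]
    raise RuntimeError("no bracket found; try a larger max_steps")

# Example from the slide: f(x) = (x - 100)^2, delta = 1, x1 = 0
lo, hi = expand_bracket(lambda x: (x - 100.0)**2)
print(lo, hi)   # 63.0 255.0 -> the minimum x* = 100 lies inside
```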
Newton method: example
Minimize f(x) = x^2 - x

f'(x) = 2x - 1
f''(x) = 2

Suppose the starting point is x_0 = 1, so f(x_0) = 0; f'(x_0) = 1; f''(x_0) = 2.

Iteration 0: x_1 = x_0 - f'(x_0)/f''(x_0) = 1 - (1/2) = 1/2, giving f(x_1) = -1/4
Iteration 1: x_2 = x_1 - f'(x_1)/f''(x_1) = 1/2 - (0/2) = 1/2, giving f(x_2) = -1/4 → no change

Since f(x_2) = f(x_1), the optimum has been reached.

This shows that to solve a quadratic function, the Newton method needs only one iteration to reach the optimal value.

Note: other starting points can be tried, e.g. with the sketch below.
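A minimal sketch of the one-dimensional Newton update x_{k+1} = x_k - f'(x_k)/f''(x_k) used above; the function names are illustrative:

```python
# One-dimensional Newton iteration for unconstrained optimization.

def newton_1d(df, d2f, x0, eps=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - df(x) / d2f(x)     # x_{k+1} = x_k - f'(x_k)/f''(x_k)
        if abs(x_new - x) < eps:       # stop when the iterate stops moving
            return x_new
        x = x_new
    return x

# f(x) = x^2 - x, f'(x) = 2x - 1, f''(x) = 2; from x0 = 1 one step gives 1/2
x_opt = newton_1d(lambda x: 2*x - 1, lambda x: 2.0, x0=1.0)
print(x_opt)   # 0.5
```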


Newton method (cont'd)
- The Newton method approximates the function by a quadratic function
- For quadratic functions, the optimum is obtained in one iteration
- Both f'(x) and f''(x) have to be calculated
- If f''(x) → 0, the method converges slowly
Multidimensional optimization problems
Gradient method
Procedure:
1. Set an initial point x_0
2. Calculate f(x_k) and ∇f(x_k)
3. Set the direction:
   - Steepest descent (minimization): S_k = -∇f(x_k), so x_{k+1} = x_k - λ_k ∇f(x_k)
   - Steepest ascent (maximization): S_k = ∇f(x_k)
4. Do a line search for a minimum to determine λ_k
5. Go to 2 until the convergence criteria are met
Convergence criteria
- The change in function value in two consecutive iterations is small
- The partial derivatives (components of the gradient) of f are small
- The change in the design vector in two consecutive iterations is small
Steepest descent (flowchart)
1. Start with x_1 and set k = 1
2. Find the search direction S_k = -∇f(x_k)
3. Find λ_k* to minimize f(x_k + λ_k S_k)
4. Set x_{k+1} = x_k + λ_k* S_k
5. Is x_{k+1} optimal? If not, set k = k + 1 and go to 2; if yes, take x_opt = x_{k+1} and stop
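A minimal sketch of this loop, assuming scipy's minimize_scalar as the one-dimensional line search (any line-search routine would do):

```python
# Steepest-descent loop from the flowchart above.
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = -grad(x)                                      # S_k = -grad f(x_k)
        if np.linalg.norm(s) < tol:                       # gradient ~ 0: stop
            break
        lam = minimize_scalar(lambda t: f(x + t * s)).x   # line search for lambda_k*
        x = x + lam * s                                   # x_{k+1} = x_k + lambda_k* S_k
    return x

# Example: minimize f(x) = 4*x1^2 + x2^2 - 2*x1*x2 from [1, 1]
f = lambda x: 4*x[0]**2 + x[1]**2 - 2*x[0]*x[1]
grad = lambda x: np.array([8*x[0] - 2*x[1], 2*x[1] - 2*x[0]])
print(steepest_descent(f, grad, [1.0, 1.0]))   # approximately [0, 0]
```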
Newton method
- Basic idea: use a second-order approximation:
  f(x) ≈ f(x_k) + ∇f(x_k)^T Δx_k + ½ Δx_k^T H(x_k) Δx_k
  where Δx_k = x_{k+1} - x_k, and H(x_k) is the Hessian matrix of f(x) at x_k
- Search direction:
  Differentiate f(x) and set ∇f(x) = 0,
  i.e. ∇f(x) ≈ ∇f(x_k) + H(x_k) Δx_k = 0,
  so Δx_k = -[H(x_k)]^{-1} ∇f(x_k) and S_k = Δx_k
Newton method (cont'd)
Procedure:
1. Set an initial point
2. Calculate f(x_k), ∇f(x_k), H(x_k)
3. Set the search direction as S_k = Δx_k = -[H(x_k)]^{-1} ∇f(x_k)
4. Set x_{k+1} = x_k + Δx_k
5. Go to 2 until the convergence criteria are met
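A minimal sketch of this procedure, assuming the caller supplies gradient and Hessian callables; it solves the linear system H(x_k)Δx_k = -∇f(x_k) with np.linalg.solve rather than forming the inverse explicitly, which is cheaper and numerically safer:

```python
# Multidimensional Newton procedure from the steps above.
import numpy as np

def newton_method(grad, hess, x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # converged: gradient is ~ 0
            return x
        dx = np.linalg.solve(hess(x), -g)  # dx_k = -[H(x_k)]^{-1} grad f(x_k)
        x = x + dx                         # x_{k+1} = x_k + dx_k
    return x
```

Examples 1 and 2 below can be reproduced by passing their gradients and Hessians to this function.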
Newton method: remarks
- If H(x_k) = I, the method reduces to the gradient method (steepest descent)
- The Newton method generally works better than the gradient method due to the second-order approximation
- The method has the fastest convergence properties (second order)
- The search direction requires inversion of a matrix (time consuming)
- It can fail if:
  - x_0 is far from the optimum
  - H(x) is singular at some points
  - x_k is a saddle point (H(x_k) is indefinite)
Newton method: example 1
Minimize f(x) = 4x_1^2 + x_2^2 - 2x_1x_2, using the starting point x_0 = [1 1]^T

Solution:
∇f(x) = [8x_1 - 2x_2, 2x_2 - 2x_1]^T

H(x) = [  8  -2 ]
       [ -2   2 ]

Iteration 0:
For x_0 = [1 1]^T: ∇f(x_0) = [6 0]^T
∇f(x_0) + H(x_0) Δx_0 = 0:

[ 6 ]   [  8  -2 ] [ Δx_1 ]   [ 0 ]
[ 0 ] + [ -2   2 ] [ Δx_2 ] = [ 0 ]

6 + 8Δx_1 - 2Δx_2 = 0   (1)
-2Δx_1 + 2Δx_2 = 0      (2)
→ Δx_1 = -1, Δx_2 = -1

So x_1 = x_0 + Δx_0 = [1 1]^T + [-1 -1]^T = [0 0]^T; f(x_1) = 0

Iteration 1:
For x_1 = [0 0]^T: ∇f(x_1) = [0 0]^T, so the current point is already optimal.
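A quick numerical check of iteration 0 (a sketch using numpy; the linear system is the one written out above):

```python
# Reproduce iteration 0 of example 1: solve H(x0) dx0 = -grad f(x0).
import numpy as np

H0 = np.array([[8.0, -2.0],
               [-2.0, 2.0]])
g0 = np.array([6.0, 0.0])            # grad f at x0 = [1, 1]
dx0 = np.linalg.solve(H0, -g0)
print(dx0)                           # [-1. -1.]
print(np.array([1.0, 1.0]) + dx0)    # x1 = [0. 0.]
```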
Newton method: example 2
Minimize the non-quadratic function
f(x) = (x_1 - 2)^4 + (x_1 - 2)^2 x_2^2 + (x_2 + 1)^2, using the starting point x_0 = [1 1]^T → f(x_0) = 6

Solution:
∇f(x) = [ 4(x_1 - 2)^3 + 2(x_1 - 2)x_2^2 ]
        [ 2(x_1 - 2)^2 x_2 + 2(x_2 + 1)  ]

H(x) = [ 12(x_1 - 2)^2 + 2x_2^2    4(x_1 - 2)x_2     ]
       [ 4(x_1 - 2)x_2             2(x_1 - 2)^2 + 2  ]

Iteration 0:
For x_0 = [1 1]^T: ∇f(x_0) = [-6 6]^T

H(x_0) = [ 14  -4 ]
         [ -4   4 ]

∇f(x_0) + H(x_0) Δx_0 = 0:

[ -6 ]   [ 14  -4 ] [ Δx_1 ]   [ 0 ]
[  6 ] + [ -4   4 ] [ Δx_2 ] = [ 0 ]

-6 + 14Δx_1 - 4Δx_2 = 0   (1)
6 - 4Δx_1 + 4Δx_2 = 0     (2)
→ Δx_1 = 0, Δx_2 = -3/2

So x_1 = x_0 + Δx_0 = [1 1]^T + [0 -3/2]^T = [1 -1/2]^T; f(x_1) = 3/2

Iteration 1:
Please continue on your own; for iteration 1 use x_1 = [1 -1/2]^T, and so on for the subsequent iterations (a code sketch for continuing follows below).
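The remaining iterations can be continued numerically with the newton_method sketch given after the procedure slide (this snippet assumes that function is in scope). Note that the exact minimizer is x* = [2 -1]^T with f(x*) = 0, since all three terms of f vanish there.

```python
# Continue example 2, reusing the newton_method sketch defined earlier.
import numpy as np

grad = lambda x: np.array([
    4*(x[0] - 2)**3 + 2*(x[0] - 2)*x[1]**2,
    2*(x[0] - 2)**2*x[1] + 2*(x[1] + 1),
])
hess = lambda x: np.array([
    [12*(x[0] - 2)**2 + 2*x[1]**2, 4*(x[0] - 2)*x[1]],
    [4*(x[0] - 2)*x[1],            2*(x[0] - 2)**2 + 2],
])
x_opt = newton_method(grad, hess, [1.0, 1.0])
print(x_opt)   # approaches [2, -1], where f = 0
```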
General comments
- Indirect methods have proven more efficient than direct methods in experimental testing. However:
  - The major difficulty with indirect methods is finding the derivatives analytically, which is sometimes hard or impractical
  - This disadvantage may be overcome by replacing the derivatives with finite-difference substitutes
- Within the indirect methods, procedures that use second-order information (e.g. Newton-like methods) are far superior to those using first-order information (e.g. gradient-based methods): Newton-like methods have a second-order rate of convergence, compared with the linear rate of convergence of gradient-based methods
- Newton methods are probably the fastest methods, provided the objective function can be approximated reasonably well by a quadratic function in the vicinity of the minimum
Disadvantages of Newton-like methods
- For an initial point far away from the minimum, other methods may converge faster than Newton's method
- This method, like any other method, cannot find the global optimum if multiple local solutions exist
- The method may fail to find a local solution if it proceeds to a saddle point
Assignment 1 (no. 1)
- An electricity provider applies differentiated tariff rules. During peak time, the tariff charged to customers is p_1 $/kWh, and customer demand during peak time is estimated at (60 - 0.5p_1) kWh. During off-peak time, the tariff charged is p_2, with an estimated demand of (40 - p_2) kWh. The company is guaranteed to be able to meet all of its customers' electricity needs, and it incurs a daily operating cost of $10 per kWh to keep the electricity available. Determine the p_1 and p_2 that maximize the company's income, using the Newton method.

Submit via Google Classroom.


Assignment 1 (no. 2)
Solve the following non-linear optimization problems:
- Maximize f(x) = 4x_1 + 6x_2 - 2x_1^2 - 2x_1x_2 - 2x_2^2 using the Newton method
- Minimize f(x) = (x_2 - x_1^2)^2 + (1 - x_1) using the steepest descent method with starting point [0, 0]. Perform only two iterations.

Submit via Google Classroom.
