
1 Polynomial Equations

1.1 Bisection Method


Based on repeated application of the Intermediate Value Theorem.

m_{k+1} = (a_k + b_k)/2,   k = 0, 1, 2, ...

(a_{k+1}, b_{k+1}) = (a_k, m_{k+1})   if f(a_k) f(m_{k+1}) < 0
(a_{k+1}, b_{k+1}) = (m_{k+1}, b_k)   if f(m_{k+1}) f(b_k) < 0

Create a table with columns k, a_{k-1}, b_{k-1}, m_k, f(a_{k-1}) f(m_k). The final answer is the midpoint of the final interval.
After n steps the error satisfies

(b_0 - a_0)/2^n ≤ ε

so n can be chosen in advance to guarantee accuracy ε.
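The scheme above translates directly into code. A minimal Python sketch (the function name `bisect` and the cubic test equation are illustrative):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping the half with a sign change."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        if f(a) * f(m) <= 0:
            b = m          # root lies in [a, m]
        else:
            a = m          # root lies in [m, b]
        if b - a < tol:    # error bound (b0 - a0)/2^n <= tol reached
            break
    return (a + b) / 2.0

# Example: root of x^3 - x - 2 on [1, 2]
root = bisect(lambda x: x**3 - x - 2.0, 1.0, 2.0)
```

Since the interval halves each step, about log2((b0 − a0)/ε) iterations suffice regardless of f.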

1.2 Secant and Regula-Falsi Methods


Assume f(x) = a_0 x + a_1 (a linear model through the last two iterates). Then f_{k-1} = a_0 x_{k-1} + a_1 and f_k = a_0 x_k + a_1, which determines a_0, a_1. Using them to find the next value:

x_{k+1} = x_k - f_k (x_k - x_{k-1})/(f_k - f_{k-1}) = x_k - f_k Δx/Δf   (Secant Method)

If the approximations are such that f_k f_{k-1} < 0 at all steps, the method is called the Regula-Falsi Method. Perform the first step on the given interval (x_0, x_1) to get an approximation x_2. Select the next interval as (x_0, x_2) if f_0 f_2 < 0, or else (x_2, x_1).
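Both variants can be sketched in a few lines of Python; the zero-denominator guard and the cubic test function are illustrative additions:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant: x_{k+1} = x_k - f_k (x_k - x_{k-1}) / (f_k - f_{k-1})."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                    # guard against a zero denominator
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(f(x1)) < tol:
            break
    return x1

def regula_falsi(f, a, b, tol=1e-12, max_iter=200):
    """Same step, but always keep a bracketing interval with f(a) f(b) < 0."""
    c = a
    for _ in range(max_iter):
        fa, fb = f(a), f(b)
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b = c                       # root in (a, c)
        else:
            a = c                       # root in (c, b)
    return c

g = lambda x: x**3 - x - 2.0
root_s = secant(g, 1.0, 2.0)
root_rf = regula_falsi(g, 1.0, 2.0)
```

Note the trade-off visible in the code: regula-falsi keeps the bracket (guaranteed convergence) at the cost of the slower, one-sided updates.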

1.3 Newton-Raphson Method


Find the point of intersection of the tangent at (x_k, f_k) with the X-axis. Use the equations f_k = a_0 x_k + a_1 and f'_k = a_0 to determine a_0, a_1. Use them to predict the next value:

x_{k+1} = x_k - f_k/f'_k   (Newton-Raphson)
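A minimal Python sketch of the iteration (the square-root test equation is illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:   # stop when the update is negligible
            break
    return x

# Example: sqrt(2) as the root of x^2 - 2
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```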

1.4 Rate of Convergence
An iterative method is said to have order (or rate) of convergence p if p is the largest positive real number for which there exists a finite, non-zero constant C (the asymptotic error constant) such that

|ε_{k+1}| ≤ C |ε_k|^p   (Error Equation)

Let the actual root be ξ, so that x_k = ξ + ε_k. Substitute this in the iteration formula and use a Taylor series approximation to get a relation between ε_{k+1} and ε_k:

Method           Error equation                 Order p
Secant           ε_{k+1} = C ε_k ε_{k-1}        (1 + √5)/2 ≈ 1.618
Regula-Falsi     ε_{k+1} = C ε_0 ε_k            1
Newton-Raphson   ε_{k+1} = C ε_k²               2

In all of the cases above, C = f''(ξ)/(2 f'(ξ)).
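The error equation gives a way to check an implementation's order empirically: from three successive errors, p ≈ log(e_{k+1}/e_k)/log(e_k/e_{k-1}). A short Python sketch using Newton-Raphson iterates for √2 (the helper name `estimate_order` is illustrative):

```python
import math

def estimate_order(errors):
    """p ≈ log(e_{k+1}/e_k) / log(e_k/e_{k-1}), from |e_{k+1}| ≈ C |e_k|^p."""
    e = errors
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Newton-Raphson iterates for sqrt(2): errors should shrink quadratically
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
errs = [abs(x - math.sqrt(2.0)) for x in xs]
```

Only a few iterates are usable: once the error reaches machine precision the ratios become meaningless.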

1.5 Multiple Roots


In case ξ is a multiple root, all methods listed above have linear rate of convergence. For the Newton-Raphson Method, if ξ has multiplicity m,

f(x_k) = f(ξ + ε_k) = (ε_k^m / m!) f^{(m)}(ξ) + (ε_k^{m+1} / (m+1)!) f^{(m+1)}(ξ) + ···

Term by term differentiation gives

f'(x_k) = f'(ξ + ε_k) = (ε_k^{m-1} / (m-1)!) f^{(m)}(ξ) + (ε_k^m / m!) f^{(m+1)}(ξ) + ···

The error equation is

ε_{k+1} = (1 - 1/m) ε_k + (1 / (m²(m+1))) (f^{(m+1)}(ξ) / f^{(m)}(ξ)) ε_k² + O(ε_k³)

If m ≠ 1, the method has linear rate of convergence. For stronger convergence, consider

x_{k+1} = x_k - α f_k / f'_k

This has a quadratic rate of convergence if α = m, the multiplicity of the root.
If the multiplicity m is not known: g(x) = f(x)/f'(x) has ξ as a root of single multiplicity. Use the Newton-Raphson or secant method on g(x).
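The modified iteration x_{k+1} = x_k − m f_k/f'_k can be sketched as follows; the test polynomial (x − 1)²(x + 2), with a double root at x = 1, and the zero guards are illustrative:

```python
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=100):
    """x_{k+1} = x_k - m f(x_k)/f'(x_k); quadratic when m is the multiplicity."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0 or dfx == 0.0:   # landed exactly on the root
            break
        step = m * fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda x: (x - 1.0)**2 * (x + 2.0)
df = lambda x: 2.0 * (x - 1.0) * (x + 2.0) + (x - 1.0)**2
root = modified_newton(f, df, 2.0, m=2)
```

With m = 1 the same code is plain Newton-Raphson and would creep toward x = 1 only linearly.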

1.6 System of Nonlinear Equations


Let (ξ, η) be the solution to the system f(x, y) = 0, g(x, y) = 0. First order Taylor series expansion around (x_k, y_k) gives

[ f_x  f_y ]      [ Δx ]     [ f(x_k, y_k) ]
[ g_x  g_y ]      [ Δy ]  = -[ g(x_k, y_k) ] ,   evaluated at (x_k, y_k),   i.e.   J_k Δx = -F_k

So,

Δx = -J_k^{-1} F(x^{(k)}),   x^{(k+1)} = x^{(k)} - J_k^{-1} F(x^{(k)})

The matrix J_k^{-1} has to be evaluated at each iteration.
The error equation is J_k ε^{(k)} = -F(x^{(k)}). A necessary and sufficient condition for convergence is ρ(J_k) < 1, while a sufficient condition for convergence is ||J_k^{-1}|| < 1 for all k.
It is an extension of the Newton-Raphson method. The rate of convergence is 2, if the system converges.
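For a 2×2 system the linear solve J Δx = −F can be done by Cramer's rule. A Python sketch (the circle/hyperbola test system and the function names are illustrative):

```python
def newton_system(F, J, x, y, tol=1e-12, max_iter=50):
    """2D Newton: solve J * d = -F by Cramer's rule at each step."""
    for _ in range(max_iter):
        f, g = F(x, y)
        fx, fy, gx, gy = J(x, y)        # Jacobian entries at (x, y)
        det = fx * gy - fy * gx
        dx = (-f * gy + fy * g) / det   # first column replaced by -F
        dy = (-fx * g + f * gx) / det   # second column replaced by -F
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Test system: x^2 + y^2 = 4, x y = 1
F = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
J = lambda x, y: (2.0 * x, 2.0 * y, y, x)
xr, yr = newton_system(F, J, 2.0, 0.5)
```

For larger systems one would replace Cramer's rule with a general linear solver, re-factoring J_k every iteration as the text notes.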

1.7 General Iteration Method


x = F(x, y),   y = G(x, y). At a solution (ξ, η),

ε_{k+1} = ξ - x_{k+1} = F(ξ, η) - F(x_k, y_k) ≈ ε_k F_x + δ_k F_y

δ_{k+1} = η - y_{k+1} = G(ξ, η) - G(x_k, y_k) ≈ ε_k G_x + δ_k G_y

So, in matrix form, ε^{(k+1)} = A_k ε^{(k)}.

2 System of Linear Equations
General form of the iterative equations is x^{(k+1)} = H x^{(k)} + c. In the limiting case, x^{(k+1)} = x^{(k)} = A^{-1} b; substituting this gives the value of c.

2.1 Jacobi Iteration


Assume that the diagonal entries a_{ii} ≠ 0 (pivot quantities). In matrix form,

D x^{(k+1)} = -(L + U) x^{(k)} + b,   x^{(k+1)} = -D^{-1}(L + U) x^{(k)} + D^{-1} b

v^{(k)} = x^{(k+1)} - x^{(k)} is the error, r^{(k)} = b - A x^{(k)} is the residue.

v^{(k)} = x^{(k+1)} - x^{(k)} = -D^{-1}(D + L + U) x^{(k)} + D^{-1} b = D^{-1} r^{(k)}

So, the Jacobi iteration in error format is v^{(k)} = D^{-1} r^{(k)}.

2.2 Gauss-Seidel Iteration


(D + L) x^{(k+1)} = -U x^{(k)} + b,   x^{(k+1)} = -(D + L)^{-1} U x^{(k)} + (D + L)^{-1} b

In error format,

v^{(k)} = (D + L)^{-1} r^{(k)}
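Component-wise, both iterations solve the i-th equation for x_i; Gauss-Seidel simply reuses already-updated components. A Python sketch (the 2×2 diagonally dominant test system and fixed iteration counts are illustrative; no convergence check):

```python
def jacobi(A, b, x0, iters=50):
    """Jacobi: every component updated from the previous iterate."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel: updates overwrite x in place, so new values are reused."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# Diagonally dominant test system with exact solution (1, 2)
A = [[4.0, 1.0], [2.0, 5.0]]
b = [6.0, 12.0]
x_j = jacobi(A, b, [0.0, 0.0], iters=60)
x_gs = gauss_seidel(A, b, [0.0, 0.0], iters=30)
```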

2.3 Successive Over Relaxation (SOR)


(D + wL) x^{(k+1)} = [(1 - w)D - wU] x^{(k)} + wb

2.4 Condition Number


K(A) = ||A−1 || ||A||

3 Interpolation
3.1 Taylor Series Approximation
P(x) = f(x_0) + (x - x_0) f'(x_0) + ··· + ((x - x_0)^n / n!) f^{(n)}(x_0)

The remainder/truncation term is R_n = ((x - x_0)^{n+1} / (n+1)!) f^{(n+1)}(ξ), x_0 < ξ < x. This can be used to estimate the truncation error.

3.2 Lagrange Polynomial


Lagrange fundamental polynomials of degree n for n + 1 distinct points a = x_0 < x_1 < x_2 < ··· < x_n = b:

l_i(x) = [(x - x_0)(x - x_1) ··· (x - x_{i-1})(x - x_{i+1}) ··· (x - x_n)] / [(x_i - x_0)(x_i - x_1) ··· (x_i - x_{i-1})(x_i - x_{i+1}) ··· (x_i - x_n)]

Another formulation is

l_i(x) = π(x) / ((x - x_i) π'(x_i)),   π(x) = ∏_{j=0}^{n} (x - x_j)

The interpolating polynomial is

P(x) = Σ_{i=0}^{n} l_i(x) f(x_i)
Error: E_n(f; x) = f(x) - P(x). Then E_n = 0 for x = x_i.
For x ∈ [a, b], x ≠ x_i, define

g(t) = f(t) - P(t) - [f(x) - P(x)] ∏_i (t - x_i) / ∏_i (x - x_i)

Then g(t) = 0 at the n + 2 points t = x, x_0, ..., x_n. Repeated application of Rolle's theorem to g(t), g'(t), ..., g^{(n)}(t) gives g^{(n+1)}(ξ) = 0 for some min(x, x_0, x_1, ..., x_n) < ξ < max(x, x_0, x_1, ..., x_n). Since

g^{(n+1)}(t) = f^{(n+1)}(t) - (n + 1)! [f(x) - P(x)] / ∏_i (x - x_i)

solving g^{(n+1)}(ξ) = 0 gives

f(x) = P(x) + (w(x) / (n+1)!) f^{(n+1)}(ξ),   E(f; x) = (w(x) / (n+1)!) f^{(n+1)}(ξ)

where w(x) = ∏_i (x - x_i).
TODO - Iterated Interpolation
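Evaluating P(x) from the fundamental polynomials is direct to code. A Python sketch (the quadratic test data is illustrative):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        li = 1.0
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])   # fundamental polynomial l_i
        total += ys[i] * li
    return total

# Three points on f(x) = x^2; the degree-2 interpolant reproduces it exactly
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
```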

3.3 Newton’s Forward Difference
Define the k-th forward divided difference as

f[x_0, x_1, ..., x_k] = (f[x_1, x_2, ..., x_k] - f[x_0, x_1, ..., x_{k-1}]) / (x_k - x_0) = Σ_{i=0}^{k} f(x_i) / ∏_{j≠i} (x_i - x_j)

For instance,

f[x_0, x_1, x_2] = f(x_0)/((x_0 - x_1)(x_0 - x_2)) + f(x_1)/((x_1 - x_0)(x_1 - x_2)) + f(x_2)/((x_2 - x_0)(x_2 - x_1))

The interpolating polynomial is

P_n(x) = f[x_0] + (x - x_0) f[x_0, x_1] + ··· + (x - x_0) ··· (x - x_{n-1}) f[x_0, x_1, ..., x_n]

Suppose the nodes are uniformly separated by h. Define u = (x - x_0)/h. Then

f(x) = f(x_0 + uh) = E^u f(x_0) = (1 + Δ)^u f(x_0) = f_0 + u Δf_0 + (u(u-1)/2!) Δ²f_0 + ···

where E is the shift operator and Δ is the difference operator. The error is

E(f; x) = (u(u-1) ··· (u-n) / (n+1)!) h^{n+1} f^{(n+1)}(ξ)

3.4 Newton’s Backward Difference


Let u = (x - x_n)/h. Then

f(x) = f(x_n + uh) = E^u f(x_n) = (1 - ∇)^{-u} f(x_n) = f_n + u ∇f_n + (u(u+1)/2!) ∇²f_n + ···

The error is

E(f; x) = (u(u+1) ··· (u+n) / (n+1)!) h^{n+1} f^{(n+1)}(ξ)
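The divided-difference coefficients f[x_0], f[x_0, x_1], ... can be built with the recurrence above and the Newton form evaluated by nested multiplication. A Python sketch (the quadratic test data is illustrative):

```python
def divided_differences(xs, ys):
    """Return the Newton-form coefficients f[x0], f[x0,x1], ..., f[x0..xn]."""
    n = len(xs)
    coef = list(ys)
    for k in range(1, n):
        # Work backwards so lower-order differences are still available
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form by Horner-style nested multiplication."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Same quadratic data: P(x) = x^2 in Newton form
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
coef = divided_differences(xs, ys)
```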

4 Approximation
4.1 Least Squares Approximation
Let the coordinate functions be φ_i(x), usually taken as x^i. The error in the approximation is

I(c_0, c_1, ..., c_n) = ∫_a^b W(x) [f(x) - Σ_{i=0}^{n} c_i φ_i(x)]² dx

The condition for a minimum gives the Normal Equations:

∂I/∂c_j = 0   ⟹   ∫_a^b W(x) [f(x) - Σ_{i=0}^{n} c_i φ_i(x)] φ_j(x) dx = 0,   j = 0, ..., n
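The continuous normal equations above have a discrete analogue: replace the integral by a sum over data points. A Python sketch under the assumptions φ_i(x) = x^i and W ≡ 1, solving the normal system by Gaussian elimination (the helper name `lstsq_poly` and the test data are illustrative):

```python
def lstsq_poly(xs, ys, degree):
    """Discrete least-squares polynomial fit via the normal equations."""
    n = degree + 1
    # Normal matrix A[i][j] = sum x^(i+j); right-hand side b[i] = sum y x^i
    A = [[sum(x**(i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

# Points on y = 2x + 1: the degree-1 fit should recover [1, 2] exactly
coef = lstsq_poly([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0], 1)
```

Monomial normal matrices become ill-conditioned for high degrees; orthogonal coordinate functions avoid this, which is one motivation for the general φ_i formulation.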

5 Numerical Differentiation
5.1 Based on Interpolation
Suppose n + 1 distinct points are interpolated by Lagrange polynomials:

P_n(x) = Σ l_i(x) f_i,   P'_n(x) = Σ l'_i(x) f_i,   E'_n(x) = (π'(x)/(n+1)!) f^{(n+1)}(ξ) + (π(x)/(n+1)!) (d/dx) f^{(n+1)}(ξ)

ξ(x) is not explicitly known, but at any nodal point x_k the second term vanishes (π(x_k) = 0) and the error may be calculated. Higher order error terms can be calculated using

(1/(n+1)!) (d^j/dx^j) f^{(n+1)}(ξ) = (j!/(n+j+1)!) f^{(n+j+1)}(η_j)

where min(x, x_0, x_1, ..., x_n) < η_j < max(x, x_0, x_1, ..., x_n).

Don't use the above formula for derivation. Instead write E_n^{(r)}(x_i) = f^{(r)}(x_i) - F(f_0, f_1, ..., f_n), where F is the expression obtained by differentiating the Lagrange polynomial. Expand f_0, f_1, ... around x_i by Taylor series and solve.

5.2 Based on Finite Differences


E f(x) = f(x + h) = (1 + hD + h²D²/2! + ···) f(x) = e^{hD} f(x)

hD = log E = log(1 + Δ) = Δ - Δ²/2 + Δ³/3 - ···
hD = log E = -log(1 - ∇) = ∇ + ∇²/2 + ∇³/3 + ···

Use Newton's forward and backward difference formulas in this case.

5.3 Optimum Step Length


f'(x_0) = (f_1 - f_0)/h + (ε_1 - ε_0)/h - (h/2) f''(ξ)

where the ε term is the rounding error (RE), so that the computed value is f(x_0) = f_0 + ε_0, and the derivative term is the truncation error (TE).
Criteria for optimum step length h: |RE| = |TE|, or |RE| + |TE| = min.
TODO - Richardson’s Extrapolation

6 Numerical Integration
The basic idea is to find a set of nodes and weights such that
∫_a^b w(x) f(x) dx = Σ_{k=0}^{n} λ_k f_k

6.1 Composite Integration Methods


6.1.1 Trapezoid Rule
Divide [a, b] into N sub-intervals of length h.

I = ∫_{x_0}^{x_1} + ∫_{x_1}^{x_2} + ··· + ∫_{x_{N-1}}^{x_N} = (h/2) [f_0 + 2(f_1 + ··· + f_{N-1}) + f_N]

The error is

R = -(h³/12) Σ_{k=1}^{N} f''(ξ_k),   |R| ≤ (h³ N / 12) |f''(η)|,   a < η < b

6.1.2 Simpson’s Rule


Divide [a, b] into 2N sub-intervals of length h.

I = ∫_{x_0}^{x_2} + ∫_{x_2}^{x_4} + ··· + ∫_{x_{2N-2}}^{x_{2N}} = (h/3) [f_0 + 4(f_1 + f_3 + ··· + f_{2N-1}) + 2(f_2 + f_4 + ··· + f_{2N-2}) + f_{2N}]

The error is

R = -(h⁵/90) Σ_{k=1}^{N} f^{(iv)}(ξ_k),   |R| ≤ (h⁵ N / 90) |f^{(iv)}(η)|,   a < η < b
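Both composite rules are short loops over the node values. A Python sketch (the sine test integral, with exact value 2, is illustrative):

```python
import math

def trapezoid(f, a, b, N):
    """Composite trapezoid rule on N sub-intervals of width h."""
    h = (b - a) / N
    s = f(a) + f(b) + 2.0 * sum(f(a + k * h) for k in range(1, N))
    return h * s / 2.0

def simpson(f, a, b, N):
    """Composite Simpson rule on 2N sub-intervals of width h."""
    h = (b - a) / (2 * N)
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * k - 1) * h) for k in range(1, N + 1))  # odd nodes
    s += 2.0 * sum(f(a + 2 * k * h) for k in range(1, N))            # even interior
    return h * s / 3.0

area_t = trapezoid(math.sin, 0.0, math.pi, 100)
area_s = simpson(math.sin, 0.0, math.pi, 20)
```

The error bounds above predict the behavior seen here: Simpson with 40 panels beats the trapezoid rule with 100.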

6.2 Based on Interpolation


Use the interpolation

f(x) = Σ_{k=0}^{n} l_k(x) f_k + (π(x)/(n+1)!) f^{(n+1)}(ξ)

The integration error is R_n = ∫_a^b E_n(f; x) dx. The same formulas can be obtained by the method of undetermined coefficients.
Another method to find the error: R_n = 0 for f(x) = x^i, i ≤ n. Then

R_n = (C/(n+1)!) f^{(n+1)}(ξ),   C = ∫_a^b x^{n+1} dx - Σ_k λ_k x_k^{n+1}

as f^{(n+1)} is constant for f(x) = x^{n+1}. This will give C in terms of f_k or h for equi-spaced points.

6.3 Quadrature Methods


Both the weights λ_k and the nodes x_k are to be determined. First transform the interval to [-1, 1].

∫_{-1}^{1} w(x) f(x) dx = Σ_{k=0}^{n} λ_k f_k

6.3.1 Gauss-Legendre
n = 0, or the 1-point formula, exact for f(x) = 1, x:

∫_{-1}^{1} f(x) dx = 2 f(0),   C = ∫_{-1}^{1} x² dx - 2·[0]² = 2/3

n = 1, or the 2-point formula, exact for f(x) = 1, x, x², x³:

∫_{-1}^{1} f(x) dx = f(-1/√3) + f(1/√3),   C = ∫_{-1}^{1} x⁴ dx - 2/9 = 8/45,   R = (C/4!) f^{(4)}(ξ)

n = 2, or the 3-point formula, exact for f(x) = 1, ..., x⁵:

∫_{-1}^{1} f(x) dx = (1/9)[5 f(-√(3/5)) + 8 f(0) + 5 f(√(3/5))],   C = 8/175,   R = (C/6!) f^{(6)}(ξ)
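The 3-point rule, together with the usual linear map from [a, b] to [−1, 1], can be sketched as follows (the function name and test integrands are illustrative):

```python
import math

def gauss_legendre_3(f, a=-1.0, b=1.0):
    """3-point Gauss-Legendre rule, exact up to degree 5, mapped to [a, b]."""
    nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
    weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
    mid, half = (a + b) / 2.0, (b - a) / 2.0   # map t in [-1,1] to x in [a,b]
    return half * sum(w * f(mid + half * t) for w, t in zip(weights, nodes))

# x^4 has degree <= 5, so the rule is exact: integral over [-1, 1] is 2/5
val = gauss_legendre_3(lambda x: x**4)
# Mapped interval: integral of x^2 over [0, 1] is exactly 1/3
val2 = gauss_legendre_3(lambda x: x * x, 0.0, 1.0)
```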

6.3.2 Gauss-Chebyshev
w(x) = 1/√(1 - x²)

n = 0, or the 1-point formula, exact for f(x) = 1, x:

∫_{-1}^{1} f(x)/√(1 - x²) dx = π f(0),   C = ∫_{-1}^{1} x²/√(1 - x²) dx = π/2

n = 1, or the 2-point formula, exact for f(x) = 1, x, x², x³:

∫_{-1}^{1} f(x)/√(1 - x²) dx = (π/2)[f(-1/√2) + f(1/√2)],   C = π/8,   R = (C/4!) f^{(4)}(ξ),   -1 < ξ < 1

n = 2, or the 3-point formula, exact for f(x) = 1, ..., x⁵:

∫_{-1}^{1} f(x)/√(1 - x²) dx = (π/3)[f(-√3/2) + f(0) + f(√3/2)],   C = π/32,   R = (C/6!) f^{(6)}(ξ),   -1 < ξ < 1

6.3.3 Gauss-Laguerre
w(x) = exp(-x),   ∫_0^∞ e^{-x} f(x) dx = Σ_{k=0}^{n} λ_k f(x_k)

1-point formula:   ∫_0^∞ e^{-x} f(x) dx = f(1),   C = 1

2-point formula:   ∫_0^∞ e^{-x} f(x) dx = (1/4)[(2 + √2) f(2 - √2) + (2 - √2) f(2 + √2)],   C = 4

7 Numerical Solution of ODE
Given u' = f(t, u), the Euler Method is

(u_{j+1} - u_j)/h = f(t_j, u_j),   i.e.   u_{j+1} = u_j + h f_j

The truncation error is u(t_{j+1}) - u_{j+1} = u(t_{j+1}) - [u(t_j) + h f(t_j, u_j)] = (h²/2) u''(ξ).

7.1 Backward Euler


(u_{j+1} - u_j)/h = f(t_{j+1}, u_{j+1}),   i.e.   u_{j+1} = u_j + h f_{j+1}

This is an implicit system, solved using the Newton-Raphson Method.

7.2 Runge-Kutta Method


7.2.1 Euler-Cauchy
Second order method, reduces to trapezoidal rule if f (t, u) is independent of u.
u_{j+1} = u_j + (1/2)[K_1 + K_2]
K_1 = h f(t_j, u_j)
K_2 = h f(t_j + h, u_j + K_1)

7.2.2 Fourth Order

u_{j+1} = u_j + (1/6)[K_1 + 2K_2 + 2K_3 + K_4]
K_1 = h f(t_j, u_j)
K_2 = h f(t_j + 0.5h, u_j + 0.5K_1)
K_3 = h f(t_j + 0.5h, u_j + 0.5K_2)
K_4 = h f(t_j + h, u_j + K_3)

For multi-variate setting, replace Ki by column vector Ki .
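The four stages above can be sketched directly in Python (the test problem u' = u, u(0) = 1, with exact solution e^t, is illustrative):

```python
import math

def rk4(f, t0, u0, h, steps):
    """Classical fourth-order Runge-Kutta for u' = f(t, u)."""
    t, u = t0, u0
    for _ in range(steps):
        K1 = h * f(t, u)
        K2 = h * f(t + 0.5 * h, u + 0.5 * K1)
        K3 = h * f(t + 0.5 * h, u + 0.5 * K2)
        K4 = h * f(t + h, u + K3)
        u += (K1 + 2.0 * K2 + 2.0 * K3 + K4) / 6.0
        t += h
    return u

# u' = u, u(0) = 1  =>  u(1) = e; 10 steps of h = 0.1
approx = rk4(lambda t, u: u, 0.0, 1.0, 0.1, 10)
```

For a system, the same code works if u and the K_i are vectors, matching the remark above.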

8 Syllabus
Numerical methods: Solution of algebraic and transcendental equations of one variable by bisection, Regula-Falsi and Newton-Raphson methods; solution of systems of linear equations by Gaussian elimination and Gauss-Jordan (direct) and Gauss-Seidel (iterative) methods. Newton's (forward and backward) interpolation, Lagrange's interpolation.
Numerical integration: Trapezoidal rule, Simpson's rule, Gaussian quadrature formula.
Numerical solution of ordinary differential equations: Euler and Runge-Kutta methods.
Computer Programming : Binary system; Arithmetic and logical operations on num-
bers; Octal and Hexadecimal Systems; Conversion to and from decimal Systems; Algebra
of binary numbers.
Elements of computer systems and concept of memory; Basic logic gates and truth
tables, Boolean algebra, normal forms.
Representation of unsigned integers, signed integers and reals, double precision reals
and long integers. Algorithms and flow charts for solving numerical analysis problems.
