1 Polynomial Equations
1.1 Bisection Method
\[ m_{k+1} = \frac{a_k + b_k}{2}, \qquad k = 0, 1, 2, \dots \]
\[ (a_{k+1}, b_{k+1}) =
\begin{cases}
(a_k, m_{k+1}) & \text{if } f(a_k)\, f(m_{k+1}) < 0 \\
(m_{k+1}, b_k) & \text{if } f(m_{k+1})\, f(b_k) < 0
\end{cases} \]
Create a table with columns $k$, $a_{k-1}$, $b_{k-1}$, $m_k$, $f(a_{k-1})f(m_k)$. The final answer is the midpoint of the final interval.
The error after $n$ steps is at most $(b_0 - a_0)/2^n$; choosing $n$ such that
\[ \frac{b_0 - a_0}{2^n} \le \varepsilon \]
guarantees accuracy $\varepsilon$.
If the approximations are such that $f_k f_{k-1} < 0$ at all steps, then the method is called the Regula-Falsi method. Perform the first step on the interval $(x_0, x_1)$, i.e. $x_2 = x_1 - \frac{f_1 (x_1 - x_0)}{f_1 - f_0}$, to get the next approximation $x_2$. Select the next interval as $(x_0, x_2)$ if $f_0 f_2 < 0$, or else $(x_2, x_1)$.
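A minimal sketch of the bisection iteration from 1.1 (the function name `bisection` and the sample equation $x^3 - x - 2 = 0$ are illustrative choices, not from the notes):

```python
def bisection(f, a0, b0, eps=1e-6, max_iter=100):
    """Bisection: halve the bracketing interval until (b0 - a0) / 2**n <= eps."""
    a, b = a0, b0
    assert f(a) * f(b) < 0, "f must change sign on [a0, b0]"
    for _ in range(max_iter):
        m = (a + b) / 2
        if f(a) * f(m) < 0:      # root lies in (a, m)
            b = m
        else:                    # root lies in (m, b)
            a = m
        if (b - a) / 2 <= eps:
            break
    return (a + b) / 2           # midpoint of the final interval

# Example: root of x^3 - x - 2 = 0 near 1.52
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
```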
1.4 Rate of Convergence
An iterative method is said to have order (or rate of convergence) $p$ if $p$ is the largest positive real number for which there exists a finite, non-zero constant $C$ (the asymptotic error constant) such that
\[ |\varepsilon_{k+1}| \le C\,|\varepsilon_k|^p \qquad \text{(error equation)} \]
Let the actual root be $\xi$. Then $x_k = \xi + \varepsilon_k$. Substitute this in the iteration equation and use Taylor's series approximation to get a relation between $\varepsilon_{k+1}$ and $\varepsilon_k$:

Secant: $\varepsilon_{k+1} = C\,\varepsilon_k \varepsilon_{k-1}$, order $\frac{1+\sqrt{5}}{2} \approx 1.618$
Regula-Falsi: $\varepsilon_{k+1} = C\,\varepsilon_0 \varepsilon_k$, order $1$
Newton-Raphson: $\varepsilon_{k+1} = C\,\varepsilon_k^{2}$, order $2$

In all of the cases above, $C = \frac{f''(\xi)}{2 f'(\xi)}$.
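A short sketch checking the Newton-Raphson error relation numerically; the test function $f(x) = x^2 - 2$ and root $\xi = \sqrt{2}$ are illustrative choices, not from the notes. The printed ratio $|\varepsilon_{k+1}|/|\varepsilon_k|^2$ should settle near $|f''(\xi)/(2f'(\xi))| = 1/(2\sqrt{2}) \approx 0.354$:

```python
import math

# Newton-Raphson on f(x) = x^2 - 2, whose root is xi = sqrt(2).
f = lambda x: x**2 - 2
df = lambda x: 2 * x
xi = math.sqrt(2)

x = 1.5                                   # initial guess
for k in range(3):
    e_k = abs(x - xi)
    x = x - f(x) / df(x)                  # Newton step
    e_next = abs(x - xi)
    # Ratio should approach |f''(xi) / (2 f'(xi))| = 1/(2*sqrt(2)) ~ 0.354
    print(k, e_next / e_k**2)
```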
So,
\[ \Delta x = -J_k^{-1} F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - J_k^{-1} F(x^{(k)}) \]
The matrix $J_k^{-1}$ has to be evaluated at each iteration.
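A minimal sketch of this update for a $2\times 2$ system, assuming NumPy is available; in practice one solves $J_k\,\Delta x = -F(x^{(k)})$ instead of forming $J_k^{-1}$ explicitly. The example system is an illustrative choice, not from the notes:

```python
import numpy as np

# Illustrative system: x^2 + y^2 - 4 = 0,  x*y - 1 = 0
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4, x[0] * x[1] - 1])

def J(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [x[1],     x[0]]])

x = np.array([2.0, 0.5])                  # initial guess
for _ in range(10):
    dx = np.linalg.solve(J(x), -F(x))     # solve J_k dx = -F(x^(k))
    x = x + dx                            # x^(k+1) = x^(k) + dx
    if np.linalg.norm(dx) < 1e-12:
        break
print(x)                                  # approx (1.932, 0.518)
```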
The error equation is $J_k\, \varepsilon^{(k)} = -F(x^{(k)})$. A necessary and sufficient condition for
2 System of Linear Equations
The general form of the iterative equations is $x^{(k+1)} = Hx^{(k)} + c$. In the limiting case, $x^{(k+1)} = x^{(k)} = A^{-1}b$; substituting this gives the value of $c$, namely $c = (I - H)A^{-1}b$.
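A minimal Gauss-Seidel sketch, one standard iteration of this fixed-point form, assuming NumPy; the diagonally dominant $3\times 3$ system is an illustrative choice, not from the notes:

```python
import numpy as np

# Illustrative diagonally dominant system A x = b
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

x = np.zeros(3)
for _ in range(50):
    x_old = x.copy()
    for i in range(3):
        # Use the newest available components (Gauss-Seidel sweep)
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    if np.linalg.norm(x - x_old, ord=np.inf) < 1e-10:
        break
print(x)            # should agree with np.linalg.solve(A, b)
```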
3 Interpolation
3.1 Taylor Series Approximation
\[ P(x) = f(x_0) + (x - x_0) f'(x_0) + \cdots + \frac{1}{n!} (x - x_0)^n f^{(n)}(x_0) \]
The remainder/truncation term is $R_n = \frac{1}{(n+1)!} (x - x_0)^{n+1} f^{(n+1)}(\xi)$, $x_0 < \xi < x$. This can be used to estimate the truncation error.
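A small sketch comparing the Taylor polynomial with its truncation bound, using $f(x) = e^x$ about $x_0 = 0$ as an illustrative choice (not from the notes):

```python
import math

# Taylor polynomial of e^x about x0 = 0: all derivatives at x0 equal 1.
def taylor_exp(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 0.5, 4
approx = taylor_exp(x, n)
actual_err = abs(math.exp(x) - approx)
# |R_n| <= max|f^(n+1)| on [0, x] * x^(n+1) / (n+1)!   (here the max is e^0.5)
bound = math.exp(x) * x**(n + 1) / math.factorial(n + 1)
print(approx, actual_err, bound)
```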
For Lagrange interpolation on the nodes $x_0, x_1, \dots, x_n$,
\[ P(x) = \sum_{i=0}^{n} l_i(x) f(x_i) \]
Error: $E_n(f; x) = f(x) - P(x)$. Then $E_n = 0$ for $x = x_i$.
For $x \in [a, b]$, $x \ne x_i$, define
\[ g(t) = f(t) - P(t) - [f(x) - P(x)]\, \frac{\prod_i (t - x_i)}{\prod_i (x - x_i)} \]
Then $g(t) = 0$ at $t = x, x_0, x_1, \dots, x_n$. Repeated application of Rolle's theorem to $g(t), g'(t), \dots, g^{(n)}(t)$ gives $g^{(n+1)}(\xi) = 0$ for some $\min(x, x_0, x_1, \dots, x_n) < \xi < \max(x, x_0, x_1, \dots, x_n)$. Since $P$ has degree $n$,
\[ g^{(n+1)}(t) = f^{(n+1)}(t) - \frac{(n+1)!\,[f(x) - P(x)]}{\prod_i (x - x_i)} \]
Setting $t = \xi$ gives $E_n(f; x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_i (x - x_i)$.
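A minimal sketch of evaluating the Lagrange form $P(x) = \sum_i l_i(x) f(x_i)$ directly; the nodes and the function $f(x) = 1/(1+x)$ are illustrative choices, not from the notes:

```python
def lagrange_eval(xs, fs, x):
    """Evaluate P(x) = sum_i l_i(x) f(x_i) for nodes xs and values fs."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # Lagrange basis l_i(x)
        total += li * fi
    return total

# Interpolate f(x) = 1/(1 + x) at three nodes and evaluate at x = 1.5
xs = [1.0, 2.0, 3.0]
fs = [1 / (1 + x) for x in xs]
print(lagrange_eval(xs, fs, 1.5), 1 / (1 + 1.5))
```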
3.3 Newton’s Forward Difference
Define the $k$-th forward divided difference as
\[ f[x_0, x_1, \dots, x_k] = \frac{f[x_1, x_2, \dots, x_k] - f[x_0, x_1, \dots, x_{k-1}]}{x_k - x_0} = \sum_{i=0}^{k} \frac{f(x_i)}{\prod_{j \ne i} (x_i - x_j)} \]
For instance,
\[ f(x) = f(x_0 + uh) = E^u f(x_0) = (1 + \Delta)^u f(x_0) = f_0 + u\,\Delta f_0 + \frac{u(u-1)}{2!}\,\Delta^2 f_0 + \cdots \]
where $E$ is the shift operator and $\Delta$ is the difference operator. Error (from the interpolation error formula above, with $\prod_i (x - x_i) = u(u-1)\cdots(u-n)\,h^{n+1}$):
\[ E(f; x) = \frac{u(u-1)\cdots(u-n)}{(n+1)!}\, h^{n+1} f^{(n+1)}(\xi) \]
Similarly, Newton's backward difference formula is
\[ f(x) = f(x_n + uh) = E^u f(x_n) = (1 - \nabla)^{-u} f(x_n) = f_n + u\,\nabla f_n + \frac{u(u+1)}{2!}\,\nabla^2 f_n + \cdots \]
Error:
\[ E(f; x) = \frac{u(u+1)\cdots(u+n)}{(n+1)!}\, h^{n+1} f^{(n+1)}(\xi) \]
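A small sketch of Newton's forward difference formula on equispaced nodes, building the difference table and summing $\frac{u(u-1)\cdots(u-k+1)}{k!}\,\Delta^k f_0$; the tabulated function $\sin x$ is an illustrative choice, not from the notes:

```python
import math

def newton_forward(x0, h, fs, x):
    """Newton's forward difference interpolation from values fs at x0, x0+h, ..."""
    # Build the leading forward differences: diffs[k] = Delta^k f_0
    table = list(fs)
    diffs = [table[0]]
    for k in range(1, len(fs)):
        table = [table[i + 1] - table[i] for i in range(len(table) - 1)]
        diffs.append(table[0])
    u = (x - x0) / h
    result, term = 0.0, 1.0
    for k, d in enumerate(diffs):
        result += term * d / math.factorial(k)   # u(u-1)...(u-k+1)/k! * Delta^k f_0
        term *= (u - k)
    return result

# Values of sin(x) at 0.0, 0.1, 0.2, 0.3; interpolate at x = 0.15
fs = [math.sin(0.1 * i) for i in range(4)]
print(newton_forward(0.0, 0.1, fs, 0.15), math.sin(0.15))
```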
4 Approximation
4.1 Least Squares Approximation
Let the coordinate functions be $\phi_i(x)$, usually taken as $x^i$. The error in the approximation is
\[ I(c_0, c_1, \dots, c_n) = \int_a^b W(x)\left[ f(x) - \sum_{i=0}^{n} c_i \phi_i(x) \right]^2 dx \]
Minimising $I$, i.e. setting $\partial I / \partial c_j = 0$ for each $j$, gives the normal equations for the coefficients.
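A minimal sketch of a discrete least-squares fit with $\phi_i(x) = x^i$ and unit weight, solving the normal equations with NumPy; the data values are illustrative, not from the notes:

```python
import numpy as np

# Fit c0 + c1*x + c2*x^2 to data in the least-squares sense.
xs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
ys = np.array([1.0, 1.3, 2.1, 3.2, 4.9])

# Design matrix with columns phi_i(x) = x^i
Phi = np.vander(xs, 3, increasing=True)
# Normal equations: (Phi^T Phi) c = Phi^T y
c = np.linalg.solve(Phi.T @ Phi, Phi.T @ ys)
print(c)                      # coefficients c0, c1, c2
print(Phi @ c - ys)           # residuals
```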
5 Numerical Differentiation
5.1 Based on Interpolation
Suppose n + 1 distinct points are interpolated by Lagrange’s polynomials
\[ P_n(x) = \sum_i l_i(x) f_i, \qquad P_n'(x) = \sum_i l_i'(x) f_i, \qquad E_n'(f; x) = \frac{\pi'(x)}{(n+1)!} f^{(n+1)}(\xi) + \frac{\pi(x)}{(n+1)!} \frac{d}{dx}\left[ f^{(n+1)}(\xi) \right] \]
where $\pi(x) = \prod_i (x - x_i)$.
ξ(x) is not explicitly known. But at any nodal point xk , the second term vanishes and
the error may be calculated. Higher order error terms can be calculated using
\[ \frac{1}{(n+1)!} \frac{d^j}{dx^j}\left[ f^{(n+1)}(\xi) \right] = \frac{j!}{(n+j+1)!}\, f^{(n+j+1)}(\eta_j) \]
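A short sketch of differentiating the interpolating polynomial at a nodal point, $P_n'(x_k) = \sum_i l_i'(x_k) f_i$; with three equispaced nodes this reproduces the central difference $(f_2 - f_0)/(2h)$. The function $f(x) = \ln x$ is an illustrative choice, not from the notes:

```python
import math

def lagrange_derivative_at_node(xs, fs, k):
    """Derivative of the interpolating polynomial at node x_k:
    P_n'(x_k) = sum_i l_i'(x_k) f_i."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        if i == k:
            # l_k'(x_k) = sum over j != k of 1/(x_k - x_j)
            dli = sum(1.0 / (xs[k] - xs[j]) for j in range(n) if j != k)
        else:
            # l_i'(x_k) = prod_{j != i,k} (x_k - x_j) / prod_{j != i} (x_i - x_j)
            num, den = 1.0, 1.0
            for j in range(n):
                if j != i:
                    den *= xs[i] - xs[j]
                    if j != k:
                        num *= xs[k] - xs[j]
            dli = num / den
        total += dli * fs[i]
    return total

xs = [0.9, 1.0, 1.1]
fs = [math.log(x) for x in xs]
print(lagrange_derivative_at_node(xs, fs, 1), 1.0)   # f'(1) = 1 for f = ln x
```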
6 Numerical Integration
The basic idea is to find a set of nodes and weights such that
\[ \int_a^b w(x) f(x)\,dx = \sum_{k=0}^{n} \lambda_k f_k \]
as the corresponding derivative of $x^{n+1}$ is constant. This will give $C$ in terms of $f_k$ or $h$ for equispaced points.
6.3.1 Gauss-Legendre
$n = 0$, or 1-point formula, exact for $f(x) = 1, x$:
\[ \int_{-1}^{1} f(x)\,dx = 2 f(0), \qquad C = \int_{-1}^{1} x^2\,dx - 2\,(0)^2 = \tfrac{2}{3} \approx 0.67 \]
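A quick numerical check of the one-point rule and its error constant; the two-point Gauss-Legendre rule $f(-1/\sqrt{3}) + f(1/\sqrt{3})$ is included for comparison (a standard result, not derived in the notes above):

```python
import math

f = lambda x: x**2

# 1-point Gauss-Legendre: integral_{-1}^{1} f(x) dx ~ 2 f(0)
one_point = 2 * f(0)
exact = 2 / 3                      # integral of x^2 over [-1, 1]
print(exact - one_point)           # error constant C ~ 0.67

# 2-point rule (nodes +-1/sqrt(3), weights 1) integrates x^2 exactly
two_point = f(-1 / math.sqrt(3)) + f(1 / math.sqrt(3))
print(two_point)                   # 0.666...
```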
6.3.2 Gauss-Chebyshev
\[ w(x) = \frac{1}{\sqrt{1 - x^2}} \]
$n = 0$, or 1-point formula, exact for $f(x) = 1, x$:
\[ \int_{-1}^{1} \frac{f(x)}{\sqrt{1 - x^2}}\,dx = \pi f(0), \qquad C = \int_{-1}^{1} \frac{x^2}{\sqrt{1 - x^2}}\,dx = \frac{\pi}{2} \]
6.3.3 Gauss-Laguerre
\[ w(x) = e^{-x}, \qquad \int_0^\infty e^{-x} f(x)\,dx = \sum_{k=0}^{n} \lambda_k f(x_k) \]
One-point formula:
\[ \int_0^\infty e^{-x} f(x)\,dx = f(1), \qquad C = 1 \]
Two-point formula:
\[ \int_0^\infty e^{-x} f(x)\,dx = \frac{1}{4}\left[ (2 + \sqrt{2})\, f(2 - \sqrt{2}) + (2 - \sqrt{2})\, f(2 + \sqrt{2}) \right], \qquad C = 4 \]
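A quick check of the two-point Gauss-Laguerre rule against $\int_0^\infty e^{-x} x^3\,dx = 3! = 6$, which it should reproduce exactly since the rule has degree of precision $3$ (the test integrand is an illustrative choice):

```python
import math

# Two-point Gauss-Laguerre: nodes 2 -+ sqrt(2), weights (2 +- sqrt(2))/4
nodes = [2 - math.sqrt(2), 2 + math.sqrt(2)]
weights = [(2 + math.sqrt(2)) / 4, (2 - math.sqrt(2)) / 4]

f = lambda x: x**3                     # integral_0^inf e^{-x} x^3 dx = 6
approx = sum(w * f(x) for w, x in zip(weights, nodes))
print(approx)                          # 6.0 (exact)
```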
7 Numerical Solution of ODE
Given $u' = f(t, u)$, the Euler method is
\[ \frac{u_{j+1} - u_j}{h} = f(t_j, u_j) \quad \Longrightarrow \quad u_{j+1} = u_j + h f_j \]
The truncation error is $u(t_{j+1}) - u_{j+1} = u(t_{j+1}) - [u(t_j) + h f(t_j, u_j)] = \frac{h^2}{2} u''(\xi)$.
The classical fourth-order Runge-Kutta method is
\[ u_{j+1} = u_j + \frac{1}{6}\left[ K_1 + 2K_2 + 2K_3 + K_4 \right] \]
K1 = hf (tj , uj )
K2 = hf (tj + 0.5h, uj + 0.5K1 )
K3 = hf (tj + 0.5h, uj + 0.5K2 )
K4 = hf (tj + h, uj + K3 )
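A minimal sketch of integrating with these RK4 stages; the test problem $u' = -2u$, $u(0) = 1$ with exact solution $e^{-2t}$ is an illustrative choice, not from the notes:

```python
import math

def rk4_step(f, t, u, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    K1 = h * f(t, u)
    K2 = h * f(t + 0.5 * h, u + 0.5 * K1)
    K3 = h * f(t + 0.5 * h, u + 0.5 * K2)
    K4 = h * f(t + h, u + K3)
    return u + (K1 + 2 * K2 + 2 * K3 + K4) / 6

# Test problem: u' = -2u, u(0) = 1, exact solution u(t) = exp(-2t)
f = lambda t, u: -2 * u
t, u, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    u = rk4_step(f, t, u, h)
    t += h
print(u, math.exp(-2.0))      # RK4 result vs exact value at t = 1
```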
8 Syllabus
Numerical methods: Solution of algebraic and transcendental equations of one variable
by bisection, Regula-Falsi and Newton-Raphson methods; solution of system of linear
equations by Gaussian elimination and Gauss-Jordan (direct), Gauss-Seidel (iterative)
methods. Newton’s (forward and backward) interpolation, Lagrange’s interpolation.
Numerical integration: Trapezoidal rule, Simpson's rule, Gaussian quadrature formula.
Numerical solution of ordinary differential equations: Euler and Runge-Kutta methods.
Computer Programming: Binary system; Arithmetic and logical operations on numbers; Octal and Hexadecimal Systems; Conversion to and from decimal Systems; Algebra of binary numbers.
Elements of computer systems and concept of memory; Basic logic gates and truth
tables, Boolean algebra, normal forms.
Representation of unsigned integers, signed integers and reals, double precision reals
and long integers. Algorithms and flow charts for solving numerical analysis problems.