473 - CSC 251
COURSE DETAILS:
Course Lecturer: Dr. (Mrs.) O. R. Vincent
Email: vincent.rebecca@gmail.com
Office Location: Room B201, COLNAS
Consultation hours: 12-2pm, Wednesdays & Fridays
Course Content:
NUMERICAL ANALYSIS I
Introduction
Numerical analysis is concerned with the process by which mathematical problems are solved by the
operations of ordinary arithmetic.
We shall be concerned with fundamental mathematical problems such as the solution of equations, the evaluation of functions, integration, etc. Although quite a lot of these problems have exact solutions, the range
of problems which can be solved exactly is very limited. Therefore, we require efficient methods of obtaining
good approximations.
Types of Errors
1. Blunder (Human Error): This occurs when a different answer is written down from the one actually obtained, e.g.
writing 0.7951 instead of 0.7591.
2. Truncation Error: These arise when an infinite process is replaced by a finite one, for instance when only a
finite number of terms of an infinite series is retained. For example,
√(x + 1) = 1 + x/2 − x²/8 + … has a truncation error of x³/16 − … when the series is stopped after the x² term.
If we consider the binomial expansion
(x + 1)^(1/2) = √(x + 1) = 1 + (1/2)x + (1/2)(1/2 − 1)x²/2! + (1/2)(1/2 − 1)(1/2 − 2)x³/3! + …
= 1 + x/2 − x²/8 + x³/16 − …
the neglected terms make up the truncation error.
Consider the Taylor series expansion
e^x = 1 + x + x²/2! + x³/3! + …
If this formula is used to calculate f = e^0.1 we get
f = 1 + 0.1 + (0.1)²/2! + (0.1)³/3! + …
Where do we stop? Theoretically, the calculation will never stop; there are always more terms to add on. If we do
stop after a finite number of terms, we will not get the exact answer (see the sketch after this list of error types).
3. Round-off Error: Numbers are stored to a fixed number of decimal or binary digits and so are often rounded, e.g. 1/3 ≈ 0.3333.
If we multiply by 3 we have 0.9999, which is not exactly 1.
The effect of round-off errors can be reduced by avoiding the cancellation that occurs when large terms of opposite sign, or nearly equal numbers, are subtracted.
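To make the truncation and round-off errors above concrete, here is a small Python sketch (an illustration added to these notes, not part of them); the number of terms kept in the e^0.1 series is an arbitrary choice:

import math

# Truncation error: partial sums of e^x = 1 + x + x^2/2! + ... for x = 0.1
x, exact = 0.1, math.exp(0.1)
partial_sum, term = 0.0, 1.0          # term holds x^k / k!, starting at k = 0
for k in range(6):                    # keep 6 terms (arbitrary cut-off)
    partial_sum += term
    term *= x / (k + 1)
    print(f"{k + 1} terms: truncation error = {exact - partial_sum:.2e}")

# Round-off error: 1/3 rounded to 4 decimal places, then multiplied back by 3
print(round(1 / 3, 4) * 3)            # 0.9999..., not exactly 1

# Cancellation: subtracting nearly equal numbers loses significant figures
print(1.23456789 - 1.23456780)        # only a couple of significant figures remain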
1.2 Notation: Let ε (epsilon) be the error, and let the true value and the approximation be x and x1 respectively. The absolute
error is |ε| = |x − x1|, and the relative error is
|ε| / |x| = |x − x1| / |x|, provided x ≠ 0.
Definition: A number x is said to be rounded to a d-decimal place number x(d) if the error ε satisfies |ε| = |x − x(d)|
≤ ½ × 10^(−d).
Example: Take x = 0.142857142.
Rounded to 2 decimal places, x(2) = 0.14.
Subtracting, the error is 0.002857142, and
|ε| = 0.002857142 ≤ ½ × 10^(−2) = 0.005.
Suppose x(4) = 0.1429 (4 decimal places); then
|ε| = |x − x(4)| = 0.000042858 ≤ ½ × 10^(−4) = 0.00005.
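These checks can be reproduced with a few lines of Python (a sketch added for illustration; x is the value from the example above):

x = 0.142857142                       # true value from the example
for d, xd in [(2, 0.14), (4, 0.1429)]:
    abs_err = abs(x - xd)             # absolute error |x - x(d)|
    rel_err = abs_err / abs(x)        # relative error, defined since x != 0
    bound = 0.5 * 10 ** (-d)          # rounding bound (1/2) * 10^(-d)
    print(d, abs_err, rel_err, abs_err <= bound)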
Assignment
Compute the multiplication and division
       x0 = 0    x0 = 3    x0 = 4
x0     0         3         4
x1     0.5000    2.75      4.5000
x2     0.5625    2.3906    5.5625
x3     0.5791    1.9287    8.2354
x4     0.5838    1.4300    17.4554
x5     0.5852    1.0112    76.6715
x6     0.5856    0.7556
x7     0.5857    0.6427
x8     0.5858    0.6033
x9     0.5858    0.5910
The iteration converges to x ≈ 0.5858, which is obtained from the initial estimates 0 and 3; 4 is obviously a bad initial value. The actual roots of x² − 4x + 2 = 0 are
x = (4 ± √(4² − 4(2))) / 2 = 2 ± √2,
i.e. x = 0.586 or 3.414. The other root, 3.414, cannot be obtained by this method. The case can be
illustrated below.
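The tabulated values are consistent with the fixed point rearrangement x = (x² + 2)/4 of x² − 4x + 2 = 0 (an inference from the table, since the rearrangement itself is not restated here). A short Python sketch of that iteration, added for illustration:

def g(x):
    # assumed fixed point rearrangement of x^2 - 4x + 2 = 0: x = (x^2 + 2)/4
    return (x * x + 2) / 4

for x0 in (0.0, 3.0, 4.0):
    x = x0
    print(f"x0 = {x0}")
    for n in range(1, 10):
        x = g(x)
        print(f"  x{n} = {x:.4f}")

From 0 and 3 the iterates settle at 0.5858, while from 4 they grow without bound; starting near 3.414 the iterates also drift away from it, which is why that root cannot be obtained with this rearrangement.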
Assignment
1. Find an approximation to the smallest positive root of F(x) = 8x⁴ − 8x² + 1 = 0 (take x0 = 0.3).
2. Draw a flow chart illustrating the use of the fixed point formula to solve x² − 4x + 2 = 0.
Hence, write a FORTRAN program to solve the equation.
Once an interval containing a sign change has been located, the method is very easy to implement, very reliable and has good error bounds.
The only disadvantage is that the method is slow.
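A minimal Python sketch of the bisection method (added for illustration, assuming a continuous F with a sign change on the starting interval):

def bisect(F, a, b, tol=1e-6):
    # assumes F(a) and F(b) have opposite signs
    fa = F(a)
    while (b - a) > tol:
        m = (a + b) / 2
        fm = F(m)
        if fa * fm <= 0:       # sign change in [a, m]: keep the left half
            b = m
        else:                  # otherwise the root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# the smaller root of x^2 - 4x + 2 = 0 lies in [0, 1]
print(bisect(lambda x: x * x - 4 * x + 2, 0.0, 1.0))   # about 0.5858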
The interval-halving or bisection method can be sped up by making better use of the information computed.
The method uses only the sign of the function F and not its value. Thus, if the absolute value of the function is
much smaller at one end than at the other, it is likely that the root will be close to the end where the function is
smaller. This idea is exploited in the regula falsi method.
3.1.3 Regula Falsi
For example, for F(x) = x² − 3x + 1 = 0, choose x0 = 0, so that F(x0) = 1 (+ve), and x1 = 1, so that F(x1) = −1 (−ve).
The straight line joining the points (x0, F(x0)) and (x1, F(x1)) cuts the x-axis at
x2 = (x0 y1 − x1 y0) / (y1 − y0), where y0 = F(x0) and y1 = F(x1).
This may be re-written in linear interpolation form as
x2 = (x0 F(x1) − x1 F(x0)) / (F(x1) − F(x0)) … 3.5
This can also be written as
x2 = x1 − (x1 − x0) F(x1) / (F(x1) − F(x0)) … 3.6
This may be repeated so that an iterative scheme results. As in the case of binary search (the bisection rule), the two
estimates should give opposite signs in F(x).
The convergence of this method is faster than that of the bisection rule.
Example: F(x) = x² − 3x + 1 = 0. Choose x0 = 0 and x1 = 1.
Since F(x0) = 1 is +ve and F(x1) = −1 is −ve,
x2 = (x0 F(x1) − x1 F(x0)) / (F(x1) − F(x0)) = (0 × (−1) − 1 × 1) / (−1 − 1) = (−1)/(−2) = ½ = 0.5
F(x2) = F(0.5) = (0.5)² − 3(0.5) + 1 = 0.25 − 1.5 + 1 = −0.25 (−ve)
So, we reject x1.
x3 = (x0 F(x2) − x2 F(x0)) / (F(x2) − F(x0)) = (0 × (−0.25) − 0.5 × 1) / (−0.25 − 1) = (−0.5)/(−1.25) = 0.4
F(x3) = F(0.4) = (0.4)² − 3(0.4) + 1 = 0.16 − 1.2 + 1 = −0.04 (−ve)
So, we reject x2.
x4 = (x0 F(x3) − x3 F(x0)) / (F(x3) − F(x0)) = (0 × (−0.04) − 0.4 × 1) / (−0.04 − 1) = (−0.4)/(−1.04) = 0.3846
F(x4) = (0.3846)² − 3(0.3846) + 1 ≈ −0.0059 (−ve)
And so on; we continue until convergence is achieved.
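The hand computation above can be checked with a short Python sketch of scheme 3.5 (added for illustration; the stopping tolerance and iteration limit are arbitrary choices):

def F(x):
    return x * x - 3 * x + 1

def regula_falsi(F, x0, x1, tol=1e-4, max_iter=50):
    # assumes F(x0) and F(x1) have opposite signs
    f0, f1 = F(x0), F(x1)
    x2 = x0
    for _ in range(max_iter):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)   # formula 3.5
        f2 = F(x2)
        print(f"x = {x2:.4f}, F(x) = {f2:.4f}")
        if abs(f2) < tol:
            break
        if f0 * f2 < 0:        # root between x0 and x2: reject x1
            x1, f1 = x2, f2
        else:                  # root between x2 and x1: reject x0
            x0, f0 = x2, f2
    return x2

regula_falsi(F, 0.0, 1.0)      # 0.5, 0.4, 0.3846, ... -> about 0.3820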
Assignment
Write a FORTRAN program to illustrate the use of regula falsi to solve x² − 5x + 1 = 0.
From the figure, a is the point at which F(x) = 0 and x0 is an estimate of a. The Newton–Raphson method
computes a new estimate, x1, as
x1 = x0 − F(x0) / F'(x0).
The method has second-order convergence.
Let x be a root of equation (3.1), F(x) = 0.
If xn is the nth estimate with error εn, then
x = xn + εn … 4.1
F(x) = F(xn + εn) = 0
F(xn) + εn F'(xn) + (εn²/2!) F''(xn) + … = 0
Neglecting the terms in εn² and higher,
F(xn) + εn F'(xn) ≈ 0
⇒ εn = −F(xn) / F'(xn)
The iteration scheme in 4.1 then becomes
xn+1 = xn − F(xn) / F'(xn), the Newton–Raphson (N–R) method.
For example, for
F(x) = x² − 4x + 2 = 0,
F'(x) = 2x − 4,
therefore
xn+1 = xn − (xn² − 4xn + 2) / (2xn − 4).
Using our former initial estimates, 0 and 3:
       x0 = 0    x0 = 3
x0     0         3
x1     0.5       3.5
x2     0.5833    3.4167
x3     0.5858    3.4142
x4     0.5858    3.4142
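A corresponding Python sketch of the N–R scheme (added for illustration), which reproduces both columns of the table:

def F(x):
    return x * x - 4 * x + 2

def dF(x):
    return 2 * x - 4

for x0 in (0.0, 3.0):
    x = x0
    print(f"x0 = {x0}")
    for n in range(1, 5):
        x = x - F(x) / dF(x)   # Newton-Raphson update x_{n+1} = x_n - F(x_n)/F'(x_n)
        print(f"  x{n} = {x:.4f}")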