Compilation of Arithmetic
COMPILATION OF ALGORITHMS
ENM 331 - NUMERICAL METHODS AND ANALYSIS
SUBMITTED BY:
UMBAC, NICETTE ANNE D.
SUBMITTED TO:
ENGR. IRISMAY JUMAWAN
TABLE OF CONTENTS
I. INTRODUCTION
Numerical Methods
Algorithm
Symbols for Different Operations
Examples of Algorithm
Types of Algorithm
V. NUMERICAL INTEGRATION
A. Euler's Method
B. Trapezoidal Rule
C. Simpson Rule
D. Runge-Kutta Methods
VI. REFERENCES
I. INTRODUCTION
Open Problem
In science and mathematics, an open problem or an open question is a known problem which can
be accurately stated, and which is assumed to have an objective and verifiable solution, but
which has not yet been solved (no solution for it is known).
Algorithm
EXAMPLES OF ALGORITHM
Simple examples of algorithms include a recipe for baking a cake, the method we use in
solving long division, the process of doing laundry, and the way a search engine works.
TYPES OF ALGORITHM
- Sequence
- Branching (if – then)
- Loop – doing a sequence of steps repeatedly
NUMERICAL METHODS
A numerical method is an approach for solving mathematical or physical equations using
computers. This is done by converting differential equations defined in continuous space and
time into a large system of equations on a discretized domain.
A system of equations is a set of two or more equations that are to be solved simultaneously.
A solution of a system of two equations in two variables is an ordered pair of numbers that
makes both equations true. The numbers in the ordered pair correspond to the variables in
alphabetical order.
II. SOLVING SYSTEM OF LINEAR EQUATIONS
A. Small Scale
Graphical Method
The graphs intersect at one point. The system is consistent and has one solution. Since neither
equation is a multiple of the other, they are independent.
The graphs are parallel. The system is inconsistent because there is no solution. Since the
equations are not equivalent, they are independent.
Equations have the same graph. The system is consistent and has an infinite number of
solutions. The equations are dependent since they are equivalent.
EXAMPLE 1. 2x^2 + x − 1 = 0
x	f(x)
-2	5
-1	0
0	-1
1	2
2	9
3	20
[Plot of f(x) for x = -3 to 4]
EXAMPLE 2. 4x^3 + 2x^2 − x + 1 = 0
x	f(x)
-3	-86
-2	-21
-1	0
0	1
1	6
2	39
3	124
4	285
5	546
[Plot of f(x) for x = -4 to 6]
EXAMPLE 3. Two lines f(x) and g(x):
x	f(x)	g(x)
-4	-25	-22
-3	-20	-19
-2	-15	-16
-1	-10	-13
0	-5	-10
1	0	-7
2	5	-4
3	10	-1
4	15	2
5	20	5
[Plot of f(x) and g(x) for x = -6 to 6; the graphs cross between x = -3 and x = -2]
CRAMER'S RULE
Cramer's rule is used to find the solution of a system of n linear equations in n unknowns
(x, y, z, ...) using determinants.
In solving by Cramer's Rule, for the system
a1x + b1y = c1
a2x + b2y = c2
the determinants are
D = |a1 b1; a2 b2| = a1b2 − b1a2
Dx = |c1 b1; c2 b2| = c1b2 − b1c2
Dy = |a1 c1; a2 c2| = a1c2 − c1a2
and the solution is
x = Dx/D
y = Dy/D
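The 2x2 rule above can be sketched in a few lines of Python (the document works its examples by hand; this code and its function name `cramer_2x2` are only an illustrative assumption):

```python
# Cramer's rule for a 2x2 system: D, Dx, Dy as defined above.
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2."""
    D = a1 * b2 - b1 * a2          # coefficient determinant
    if D == 0:
        raise ValueError("system has no unique solution")
    Dx = c1 * b2 - b1 * c2         # constants replace the x-column
    Dy = a1 * c2 - c1 * a2         # constants replace the y-column
    return Dx / D, Dy / D

# Example 2 below: 12x - 4y = 20, 5x + 3y = 15
x, y = cramer_2x2(12, -4, 20, 5, 3, 15)
```

Running it on Example 2's system reproduces x = 2.1429 and y = 1.4286.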
EXAMPLE 1. 2x + 6y = 10
           5x − 4y = 8
D = |2 6; 5 -4| = 2(−4) − 6(5) = −38
Dx = |10 6; 8 -4| = 10(−4) − 6(8) = −88	x = Dx/D = 2.3158
Dy = |2 10; 5 8| = 2(8) − 10(5) = −34	y = Dy/D = 0.8947
Check: 5(2.3158) − 4(0.8947) = 8
8 = 8
EXAMPLE 2. 12x − 4y = 20
           5x + 3y = 15
Matrix form AX = B:
[12 -4][x]   [20]
[5   3][y] = [15]
D = |12 -4; 5 3| = 56
Dx = |20 -4; 15 3| = 120	x = Dx/D = 2.1429
Dy = |12 20; 5 15| = 80	y = Dy/D = 1.4286
Check: 5(2.1429) + 3(1.4286) = 15
15 = 15
EXAMPLE 3. 4x − 2y + z = 8
           2x + 6y − 3z = 10
           2x + 4y + 2z = 6
Matrix form AX = B:
[4 -2  1][x]   [8]
[2  6 -3][y] = [10]
[2  4  2][z]   [6]
D = det[4 -2 1; 2 6 -3; 2 4 2] = 112
Dx = det[8 -2 1; 10 6 -3; 6 4 2] = 272	x = Dx/D = 2.4286
Dy = det[4 8 1; 2 10 -3; 2 6 2] = 64	y = Dy/D = 0.5714
Dz = det[4 -2 8; 2 6 10; 2 4 6] = -64	z = Dz/D = -0.5714
Check by substituting the values of x, y, and z into the equations:
4(2.4286) − 2(0.5714) + (−0.5714) = 8
8 = 8
2(2.4286) + 6(0.5714) − 3(−0.5714) = 10
10 = 10
2(2.4286) + 4(0.5714) + 2(−0.5714) = 6
6 = 6
B. Large Scale
GAUSSIAN ELIMINATION
● To perform Gaussian elimination, we form an Augmented Matrix by combining the
matrix A with the column vector b:
AUGMENTED MATRIX: AX = B
Example: A B
2 4 2 10
1 5 3 6
4 6 8 12
● Row reduction is then performed on this matrix. Allowed operations are:
(1) multiply any row by a constant
(2) add multiple of one row to another row
(3) interchange the order of any rows
The goal is to convert the original matrix into an upper-triangular matrix.
A B
2 4 2 10
0 5 3 6
0 0 8 12
● The resulting equations can be determined from the matrix, and by back substituting
we can get the values of x.
● To check, substitute the values to the problem equation. If the result is equal then
the answer is correct.
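The elimination and back-substitution steps described above can be sketched in Python (the document gives no code; this naive version, without pivoting, is an illustrative assumption and fails on a zero pivot):

```python
# Naive Gaussian elimination with back substitution.
def gauss_solve(A, b):
    n = len(A)
    # augmented matrix [A | b], copied so the inputs are untouched
    M = [[float(v) for v in row] + [float(bi)] for row, bi in zip(A, b)]
    # forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1 below: 2x1+6x2-4x3 = 8, x1+4x2+2x3 = 6, 4x1+2x2-x3 = 12
x = gauss_solve([[2, 6, -4], [1, 4, 2], [4, 2, -1]], [8, 6, 12])
```

On Example 1's system this returns approximately (2.7660, 0.6383, 0.3404), matching the hand computation.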
EXAMPLE 1. 2x1 + 6x2 − 4x3 = 8
           x1 + 4x2 + 2x3 = 6
           4x1 + 2x2 − x3 = 12
AUGMENTED MATRIX:
2 6 -4 | 8
1 4 2 | 6
4 2 -1 | 12
2R2−R1, R3−2R1:
2 6 -4 | 8
0 2 8 | 4
0 -10 7 | -4
R3+5R2:
2 6 -4 | 8
0 2 8 | 4
0 0 47 | 16
Back substitute to find the values of x:
47x3 = 16	x3 = 0.3404
2x2 + 8x3 = 4	x2 = 0.6383
2x1 + 6x2 − 4x3 = 8	x1 = 2.7660
Check:
2(2.7660) + 6(0.6383) − 4(0.3404) = 8
8 = 8
(2.7660) + 4(0.6383) + 2(0.3404) = 6
6 = 6
4(2.7660) + 2(0.6383) − (0.3404) = 12
12 = 12
EXAMPLE 2. x1 + 3x2 − 2x3 = 10
           x1 − 4x2 + 2x3 = 4
           6x1 + 4x2 − 2x3 = 12
AUGMENTED MATRIX:
1 3 -2 | 10
1 -4 2 | 4
6 4 -2 | 12
R2−R1, R3−6R1:
1 3 -2 | 10
0 -7 4 | -6
0 -14 10 | -48
R3−2R2:
1 3 -2 | 10
0 -7 4 | -6
0 0 2 | -36
Back substitute to find the values of x:
2x3 = -36	x3 = -18.0000
-7x2 + 4x3 = -6	x2 = -9.4286
x1 + 3x2 − 2x3 = 10	x1 = 2.2857
Check:
(2.2857) + 3(−9.4286) − 2(−18.0000) = 10
10 = 10
(2.2857) − 4(−9.4286) + 2(−18.0000) = 4
4 = 4
6(2.2857) + 4(−9.4286) − 2(−18.0000) = 12
12 = 12
EXAMPLE 3. 2x1 + 3x2 − x3 = 6
           2x1 − x2 + 4x3 = 12
           6x1 + 4x2 − 2x3 = 16
AUGMENTED MATRIX:
2 3 -1 | 6
2 -1 4 | 12
6 4 -2 | 16
R2−R1, R3−3R1:
2 3 -1 | 6
0 -4 5 | 6
0 -5 1 | -2
4R3−5R2:
2 3 -1 | 6
0 -4 5 | 6
0 0 -21 | -38
Back substitute to find the values of x:
-21x3 = -38	x3 = 1.8095
-4x2 + 5x3 = 6	x2 = 0.7619
2x1 + 3x2 − x3 = 6	x1 = 2.7619
To check, substitute the values into the equations:
2(2.7619) + 3(0.7619) − (1.8095) = 6
6 = 6
2(2.7619) − (0.7619) + 4(1.8095) = 12
12 = 12
6(2.7619) + 4(0.7619) − 2(1.8095) = 16
16 = 16
NOTE: In partial pivoting, the pivot element is the entry of largest absolute value in the
current column, and its row is interchanged with the pivot row before eliminating.
For the examples, we will use the same problems as in Gaussian elimination:
EXAMPLE 1. 2𝑥1 + 6𝑥2 − 4𝑥3 = 8
𝑥1 + 4𝑥2 + 2𝑥3 = 6
4𝑥1 + 2𝑥2 − 𝑥3 = 12
AUGMENTED MATRIX:
A X B
2 6 -4 𝑥1 8
𝑥2 =
1 4 2 6
4 2 -1 𝑥3 12
2 6 -4 | 8
1 4 2 | 6	M12 = 1/2 = 0.5
4 2 -1 | 12	M13 = 4/2 = 2
2 6 -4 | 8
0 1 4 | 2
0 -10 7 | -4	M23 = -10/1 = -10
2 6 -4 | 8
0 1 4 | 2
0 0 47 | 16
Back substitute to find the values of x:
47x3 = 16	x3 = 0.3404
x2 + 4x3 = 2	x2 = 0.6383
2x1 + 6x2 − 4x3 = 8	x1 = 2.7660
To check, substitute the values into the equations:
2(2.7660) + 6(0.6383) − 4(0.3404) = 8
8 = 8
(2.7660) + 4(0.6383) + 2(0.3404) = 6
6 = 6
4(2.7660) + 2(0.6383) − (0.3404) = 12
12 = 12
EXAMPLE 2. x1 + 3x2 − 2x3 = 10
           x1 − 4x2 + 2x3 = 4
           6x1 + 4x2 − 2x3 = 12
AUGMENTED MATRIX:
1 3 -2 | 10
1 -4 2 | 4	M12 = 1/1 = 1
6 4 -2 | 12	M13 = 6/1 = 6
1 3 -2 | 10
0 -7 4 | -6
0 -14 10 | -48	M23 = -14/-7 = 2
1 3 -2 | 10
0 -7 4 | -6
0 0 2 | -36
Back substitute to find the values of x:
2x3 = -36	x3 = -18.0000
-7x2 + 4x3 = -6	x2 = -9.4286
x1 + 3x2 − 2x3 = 10	x1 = 2.2857
To check, substitute the values into the equations:
(2.2857) + 3(−9.4286) − 2(−18.0000) = 10
10 = 10
(2.2857) − 4(−9.4286) + 2(−18.0000) = 4
4 = 4
6(2.2857) + 4(−9.4286) − 2(−18.0000) = 12
12 = 12
EXAMPLE 3. 2𝑥1 + 3𝑥2 − 𝑥3 = 6
2𝑥1 − 𝑥2 + 4𝑥3 = 12
6𝑥1 + 4𝑥2 − 2𝑥3 = 16
AUGMENTED MATRIX:
A X B
2 3 -1 𝑥1 6
𝑥2 =
2 -1 4 12
6 4 -2 𝑥3 16
2 3 -1 | 6
2 -1 4 | 12	M12 = 2/2 = 1
6 4 -2 | 16	M13 = 6/2 = 3
2 3 -1 | 6
0 -4 5 | 6
0 -5 1 | -2	M23 = -5/-4 = 1.25
2 3 -1 | 6
0 -4 5 | 6
0 0 -5.25 | -9.5
Back substitute to find the values of x:
-5.25x3 = -9.5	x3 = 1.8095
-4x2 + 5x3 = 6	x2 = 0.7619
2x1 + 3x2 − x3 = 6	x1 = 2.7619
To check, substitute the values into the equations:
2(2.7619) + 3(0.7619) − (1.8095) = 6
6 = 6
2(2.7619) − (0.7619) + 4(1.8095) = 12
12 = 12
6(2.7619) + 4(0.7619) − 2(1.8095) = 16
16 = 16
LU DECOMPOSITION
LU decomposition of a matrix is the factorization of a given square matrix into two triangular
matrices, one upper triangular matrix and one lower triangular matrix, such that the product of
these two matrices gives the original matrix. It was introduced by Alan Turing in 1948, who also
created the Turing machine.
This factorization has various applications: solving systems of equations (itself an integral
part of many applications such as finding currents in a circuit and solving discrete dynamical
system problems), finding the inverse of a matrix, and computing its determinant.
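A compact Python sketch of the procedure (Doolittle factorization, then forward and back substitution; the document gives no code, so the function name `lu_solve` and this layout are assumptions):

```python
# Doolittle LU factorization: A = L*U, then solve L*y = b and U*x = y.
def lu_solve(A, b):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]  # unit lower
    U = [[float(v) for v in row] for row in A]                 # becomes upper
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]        # store the multiplier
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    y = [0.0] * n                              # forward substitution: L*y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n                              # back substitution: U*x = y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# First worked example below: A = [[6,2,-12],[2,-4,3],[4,1,-2]], b = [6,8,10]
x = lu_solve([[6, 2, -12], [2, -4, 3], [4, 1, -2]], [6, 8, 10])
```

This reproduces the first example's solution, approximately (2.9481, 0.2338, 1.0130).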
EXAMPLE 1. Augmented matrix and elimination:
6 2 -12 | 6
2 -4 3 | 8	M12 = 2/6 = 0.3333
4 1 -2 | 10	M13 = 4/6 = 0.6667
6 2 -12 | 6
0 -4.6667 7 | 6
0 -0.3333 6 | 6	M23 = -0.3333/-4.6667 = 0.0714
6 2 -12 | 6
0 -4.6667 7 | 6
0 0 5.5 | 5.5714
Forward substitution, LY = B:
[1 0 0][y1]   [6]
[0.3333 1 0][y2] = [8]
[0.6667 0.0714 1][y3]   [10]
y1 = 6
y2 = 8 − 0.3333(6) = 6
y3 = 10 − 0.6667(6) − 0.0714(6) = 5.5714
Back substitution, UX = Y:
[6 2 -12][x1]   [6]
[0 -4.6667 7][x2] = [6]
[0 0 5.5][x3]   [5.5714]
x1 = 2.9481
x2 = 0.2338
x3 = 1.0130
EXAMPLE 2. Augmented matrix and elimination:
5 2 -1 | 12
2 4 3 | 8	M12 = 2/5 = 0.4000
3 6 -2 | 4	M13 = 3/5 = 0.6000
5 2 -1 | 12
0 3.2000 3.4 | 3.2
0 4.8000 -1.4 | -3.2	M23 = 4.8/3.2 = 1.5000
5 2 -1 | 12
0 3.2000 3.4 | 3.2
0 0 -6.5 | -8.0000
Forward substitution, LY = B:
[1 0 0][y1]   [12]
[0.4000 1 0][y2] = [8]
[0.6000 1.5000 1][y3]   [4]
y1 = 12
y2 = 8 − 0.4(12) = 3.2
y3 = 4 − 0.6(12) − 1.5(3.2) = -8.0000
Back substitution, UX = Y:
[5 2 -1][x1]   [12]
[0 3.2 3.4][x2] = [3.2]
[0 0 -6.5][x3]   [-8]
x1 = 2.7692
x2 = -0.3077
x3 = 1.2308
EXAMPLE 3. Augmented matrix and elimination:
12 2 -3 | 8
5 1 4 | 6	M12 = 5/12 = 0.4167
3 4 -6 | 10	M13 = 3/12 = 0.2500
12 2 -3 | 8
0 0.1667 5.25 | 2.6667
0 3.5000 -5.25 | 8	M23 = 3.5/0.1667 = 21.0000
12 2 -3 | 8
0 0.1667 5.25 | 2.6667
0 0 -115.5 | -48.000
Forward substitution, LY = B:
[1 0 0][y1]   [8]
[0.4167 1 0][y2] = [6]
[0.2500 21.0000 1][y3]   [10]
y1 = 8
y2 = 6 − 0.4167(8) = 2.6667
y3 = 10 − 0.25(8) − 21(2.6667) = -48.0000
Back substitution, UX = Y:
[12 2 -3][x1]   [8]
[0 0.1667 5.25][x2] = [2.6667]
[0 0 -115.5][x3]   [-48]
x1 = 0.2857
x2 = 2.9091
x3 = 0.4156
ITERATION METHOD
JACOBI METHOD
The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along
its main diagonal (Bronshtein and Semendyayev 1997, p. 892). Each diagonal element is solved
for, and an approximate value plugged in. The process is then iterated until it converges. This
algorithm is a stripped-down version of the Jacobi transformation method of matrix
diagonalization.
The Jacobi method is easily derived by examining each of the n equations in the linear system of
equations Ax=b in isolation.
In this method, the order in which the equations are examined is irrelevant, since the Jacobi
method treats them independently.
If any of the diagonal entries a11, a22,…, ann are zero, then we should interchange the rows or
columns to obtain a coefficient matrix that has nonzero entries on the main diagonal. Follow the
steps given below to get the solution of a given system of equations.
NOTE:
Gauss Jacobi method takes the values obtained from the previous step.
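The update rule can be sketched in Python (the document works its examples in tables; this function and its name `jacobi` are illustrative assumptions):

```python
# Jacobi iteration: each new component is computed from the PREVIOUS
# iterate only, as the note above states.
def jacobi(A, b, iterations=50):
    n = len(A)
    x = [0.0] * n                      # starting values all zero
    for _ in range(iterations):
        x_new = []
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new                      # replace the whole vector at once
    return x

# Diagonally dominant system of the first example below:
A = [[12, 4, -3, 1], [2, 6, -1, 2], [1, -2, 8, 4], [2, -1, 3, 7]]
x = jacobi(A, [6, -7, 4, 10])
```

After enough iterations this settles at approximately (0.884, -1.976, -0.702, 1.194), matching the first table below.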
Augmented Matrix (the system is diagonally dominant: 12 > 4+3+1, 6 > 2+1+2, 8 > 1+2+4, 7 > 2+1+3):
A X B
12 4 -3 1 | x1 | 6
2 6 -1 2 | x2 | -7
1 -2 8 4 | x3 | 4
2 -1 3 7 | x4 | 10
k X1 X2 X3 X4
0 0 0 0 0
1 0.500 -1.167 0.500 1.429
2 0.895 -1.726 -0.568 0.905
3 0.858 -1.861 -0.496 1.170
4 0.899 -1.925 -0.658 1.130
5 0.883 -1.953 -0.659 1.178
6 0.888 -1.964 -0.688 1.180
7 0.884 -1.970 -0.692 1.189
8 0.885 -1.973 -0.698 1.191
9 0.884 -1.975 -0.699 1.193
10 0.884 -1.976 -0.701 1.194
11 0.884 -1.976 -0.701 1.194
12 0.884 -1.976 -0.702 1.194
13 0.884 -1.976 -0.702 1.194
14 0.884 -1.976 -0.702 1.194
To check, substitute the values x1, x2, x3, x4 to the equation.
12x1+4x2-3x3+x4 = 6 = 6
2x1+6x2-x3+2x4 = -7 = -7
x1-2x2+8x3+4x4 = 4 = 4
2x1-x2+3x3+7x4 = 10 = 10
Augmented Matrix:
A X B
11>3+2+1 11 3 -2 1 x1 12
7>1+2+3 1 7 -2 3 x2 8
10>5+2+1 5 -2 10 1 x3 9
8>3+2+1 3 -2 1 8 x4 10
k X1 X2 X3 X4
0 0 0 0 0
1 1.091 1.143 0.900 1.250
2 0.829 0.708 0.458 1.014
3 0.889 0.721 0.526 1.059
4 0.894 0.712 0.494 1.031
5 0.893 0.714 0.492 1.031
6 0.892 0.714 0.493 1.032
7 0.892 0.714 0.494 1.032
8 0.892 0.714 0.494 1.032
To check, substitute the values of x1, x2, x3, and x4 to the equation:
11x1+3x2-2x3+x4 = 12 = 12
x1+7x2-2x3+3x4 = 8 = 8
5x1-2x2+10x3+x4 = 9 = 9
3x1-2x2+x3+8x4 = 10 = 10
Augmented Matrix:
A X B
8>4+1+2 8 4 -1 2 x1 11
12>2+5+1 2 12 -5 1 x2 6
6>1+2+2 1 -2 6 2 x3 13
14>4+2+5 4 -2 5 14 x4 8
k X1 X2 X3 X4
0 0 0 0 0
1 1.375 0.500 2.167 0.571
2 1.253 1.126 1.914 -0.524
3 1.182 1.132 2.508 -0.309
4 1.200 1.374 2.450 -0.500
5 1.119 1.363 2.591 -0.450
6 1.130 1.431 2.584 -0.479
7 1.103 1.428 2.615 -0.470
8 1.105 1.445 2.616 -0.473
9 1.098 1.445 2.622 -0.472
10 1.098 1.449 2.623 -0.472
11 1.096 1.449 2.624 -0.472
12 1.096 1.450 2.624 -0.472
13 1.096 1.450 2.625 -0.472
14 1.096 1.450 2.625 -0.472
To check, substitute the values of x1, x2, x3, and x4 to the equation:
8x1+4x2-x3+2x4 = 11 = 11
2x1+12x2-5x3+x4 = 6 = 6
x1-2x2+6x3+2x4 = 13 = 13
4x1-2x2+5x3+14x4 = 8 = 8
GAUSS–SEIDEL METHOD
The Gauss–Seidel method is an iterative technique for solving a square system of n linear
equations Ax = b, updating one unknown at a time in sequence. This method is applicable to
strictly diagonally dominant or symmetric positive definite matrices A.
The Gauss–Seidel method is an improved form of the Jacobi method, also known as the successive
displacement method. It is named after Carl Friedrich Gauss (Apr. 1777–Feb. 1855)
and Philipp Ludwig von Seidel (Oct. 1821–Aug. 1896). Again, we assume that the starting
values of the unknowns are zero. The difference between the Gauss–Seidel and Jacobi methods is
that the Jacobi method uses only the values obtained in the previous step, while the
Gauss–Seidel method always applies the latest updated values during the iterative procedure.
The reason the Gauss–Seidel method is commonly known as the successive displacement method is
that the second unknown is determined from the first unknown in the current iteration, the
third unknown is determined from the first and second unknowns, and so on.
NOTE: Gauss–Seidel method always applies the latest updated values during the iterative procedures.
We will use the same examples in the Jacobi Method to compare its convergence speed.
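A Python sketch of the in-place update (again an illustrative assumption, mirroring the Jacobi sketch so the two can be compared):

```python
# Gauss-Seidel iteration: unlike Jacobi, each component is overwritten
# immediately, so later components already use the latest values.
def gauss_seidel(A, b, iterations=25):
    n = len(A)
    x = [0.0] * n                      # starting values all zero
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]    # update in place
    return x

# Same system as the first example below:
A = [[12, 4, -3, 1], [2, 6, -1, 2], [1, -2, 8, 4], [2, -1, 3, 7]]
x = gauss_seidel(A, [6, -7, 4, 10])
```

It reaches the same solution as the Jacobi sketch, approximately (0.884, -1.976, -0.702, 1.194), in fewer sweeps.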
Augmented Matrix:
A X B
12>4+3+1 12 4 -3 1 x1 6
6>2+1+2 2 6 -1 2 x2 -7
8>1+2+4 1 -2 8 4 x3 4
7>2+1+3 2 -1 3 7 x4 10
k X1 X2 X3 X4
0 0 0 0 0
1 0.500 -1.333 0.104 1.051
2 0.883 -1.794 -0.584 1.170
3 0.854 -1.939 -0.677 1.197
4 0.877 -1.971 -0.701 1.197
5 0.882 -1.976 -0.703 1.195
6 0.884 -1.977 -0.702 1.195
7 0.884 -1.977 -0.702 1.195
8 0.884 -1.976 -0.702 1.195
9 0.884 -1.976 -0.702 1.194
To check, substitute the values x1, x2, x3, x4 to the equation.
12x1+4x2-3x3+x4 = 6 = 6
2x1+6x2-x3+2x4 = -7 = -7
x1-2x2+8x3+4x4 = 4 = 4
2x1-x2+3x3+7x4 = 10 = 10
Augmented Matrix:
A X B
11>3+2+1 11 3 -2 1 x1 12
7>1+2+3 1 7 -2 3 x2 8
10>5+2+1 5 -2 10 1 x3 9
8>3+2+1 3 -2 1 8 x4 10
k X1 X2 X3 X4
0 0 0 0 0
1 1.091 0.987 0.552 1.019
2 0.829 0.745 0.532 1.059
3 0.888 0.714 0.493 1.034
4 0.892 0.713 0.493 1.032
5 0.892 0.714 0.493 1.032
6 0.892 0.714 0.494 1.032
7 0.892 0.714 0.494 1.032
To check, substitute the values of x1, x2, x3, and x4 to the equation:
11x1+3x2-2x3+x4 = 12 = 12
x1+7x2-2x3+3x4 = 8 = 8
5x1-2x2+10x3+x4 = 9 = 9
3x1-2x2+x3+8x4 = 10 = 10
For the next example, we again use the system 8x1+4x2−x3+2x4 = 11; 2x1+12x2−5x3+x4 = 6;
x1−2x2+6x3+2x4 = 13; 4x1−2x2+5x3+14x4 = 8 from the Jacobi method.
k X1 X2 X3 X4
0 0 0 0 0
1 1.375 0.271 2.028 -0.507
2 1.620 1.117 2.438 -0.603
3 1.272 1.354 2.607 -0.530
4 1.156 1.438 2.630 -0.493
5 1.108 1.452 2.630 -0.477
6 1.097 1.453 2.627 -0.473
7 1.095 1.452 2.626 -0.472
8 1.095 1.451 2.625 -0.472
9 1.096 1.450 2.625 -0.472
10 1.096 1.450 2.625 -0.472
To check, substitute the values of x1, x2, x3, and x4 to the equation:
8x1+4x2-x3+2x4 = 11 = 11
2x1+12x2-5x3+x4 = 6 = 6
x1-2x2+6x3+2x4 = 13 = 13
4x1-2x2+5x3+14x4 = 8 = 8
As we can see from the examples above, the Gauss–Seidel method converges faster
than the Jacobi method.
FIXED POINT ITERATION
In fixed point iteration, the equation f(x) = 0 is rearranged into the form x = g(x); the
iteration x(k+1) = g(x(k)) with some initial guess x0 is called the fixed point iterative scheme.
EXAMPLE 1. 2x^2 − x − 2 = 0
2x^2 − x = 2
x(2x − 1) = 2
Formula: x = 2/(2x − 1)
e = (x_new − x_old)/x_new × 100%
I X g(x) e
1 0.0000 -2.0000
2 -2.0000 -0.4000 100%
3 -0.4000 -1.1111 -400%
4 -1.1111 -0.6207 64%
5 -0.6207 -0.8923 -79%
6 -0.8923 -0.7182 30%
7 -0.7182 -0.8209 -24%
8 -0.8209 -0.7571 13%
9 -0.7571 -0.7955 -8%
10 -0.7955 -0.7719 5%
11 -0.7719 -0.7862 -3%
12 -0.7862 -0.7775 2%
13 -0.7775 -0.7828 -1%
14 -0.7828 -0.7795 1%
15 -0.7795 -0.7815 0%
16 -0.7815 -0.7803 0%
17 -0.7803 -0.7811 0%
18 -0.7811 -0.7806 0%
19 -0.7806 -0.7809 0%
20 -0.7809 -0.7807 0%
21 -0.7807 -0.7808 0%
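The table above can be reproduced with a short Python loop (an illustrative sketch; the stopping test on e is simplified here to a fixed iteration count):

```python
# Fixed point iteration for Example 1: x = g(x) = 2/(2x - 1).
def fixed_point(g, x0, iterations=60):
    x = x0
    for _ in range(iterations):
        x = g(x)                       # x_new = g(x_old)
    return x

root = fixed_point(lambda x: 2.0 / (2.0 * x - 1.0), 0.0)
```

Starting from x0 = 0 the iterates converge to about -0.7808, as in the table.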
EXAMPLE 2. x^2 + 3x + 1 = 0
x(x + 3) = −1
Formula: x = −1/(x + 3)
e = (x_new − x_old)/x_new × 100%
k x g(x) e
1 4.0000 -0.1429
2 -0.1429 -0.3500 2900%
3 -0.3500 -0.3774 59%
4 -0.3774 -0.3813 7%
5 -0.3813 -0.3819 1%
6 -0.3819 -0.3820 0%
7 -0.3820 -0.3820 0%
NEWTON-RAPHSON METHOD
"The Newton–Raphson Method" uses one initial approximation to solve a given equation y = f(x).
In this method the function f(x) is approximated by a tangent line, whose equation is found
from the value of f(x) and its first derivative at the initial approximation.
The tangent line then intersects the x-axis at a second point. This second point is used as the
next approximation to find the third point.
The Newton–Raphson method (also known as Newton's method) is a way to quickly find a good
approximation for the root of a real-valued function f(x) = 0. It uses the idea that a
continuous and differentiable function can be approximated by a straight line tangent to it.
Formula:
x_new = x_old − f(x_old)/f'(x_old)
e = (x_new − x_old)/x_new × 100%
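A minimal Python sketch of this update (the function name `newton` and the tolerance are assumptions, not part of the original):

```python
# Newton-Raphson iteration: x_new = x_old - f(x_old)/f'(x_old).
def newton(f, fp, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fp(x)
        if abs(x_new - x) < tol:       # stop once the update is tiny
            return x_new
        x = x_new
    return x

# Example 1 below: f(x) = 2x^2 - x - 2, f'(x) = 4x - 1, starting at x = 0
root = newton(lambda x: 2*x**2 - x - 2, lambda x: 4*x - 1, 0.0)
```

From x0 = 0 it converges to about -0.7808, as the table in Example 1 shows.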
EXAMPLE 1. f(x) = 2x^2-x-2
f'(x) = 4x - 1
k x f(x) f'(x) e
1 0.0000 -2.0000 -1.0000
2 -2.0000 8.0000 -9.0000 100.00%
3 -1.1111 1.5802 -5.4444
4 -0.8209 0.1685 -4.2834 35.36%
5 -0.7815 0.0031 -4.1261
6 -0.7808 0.0000 -4.1231 0.10%
7 -0.7808 0.0000 -4.1231
8 -0.7808 0.0000 -4.1231 0.00%
9 -0.7808 0.0000 -4.1231
10 -0.7808 0.0000 -4.1231 0.00%
EXAMPLE 2. f(x) = x^2+3x+1
f'(x) = 2x + 3
k x f(x) f'(x) e
1 4.0000 29.0000 11.0000
2 1.3636 6.9504 5.7273 193.33%
3 0.1501 1.4727 3.3001
4 -0.2962 0.1992 2.4076 150.67%
5 -0.3789 0.0068 2.2422
6 -0.3820 0.0000 2.2361 0.80%
7 -0.3820 0.0000 2.2361
8 -0.3820 0.0000 2.2361 0.00%
9 -0.3820 0.0000 2.2361
10 -0.3820 0.0000 2.2361 0.00%
SECANT METHOD
The secant method is also a recursive method for finding the root of a polynomial by successive
approximation.
In this method, the root is approximated by a secant line (chord) through two points on the
function f(x). An advantage of this method is that we do not need to differentiate the given
function f(x), as we do in the Newton–Raphson method.
Note: To start the solution, two initial guesses x0 and x1 are required, ideally such that
f(x0) and f(x1) have opposite signs.
Advantages of the Secant Method:
The speed of convergence of the secant method is faster than that of the Bisection and Regula
Falsi methods.
It uses the two most recent approximations of the root to find new approximations, instead of
using only those approximations which bound an interval enclosing the root.
Formula:
x_new = x_b − f(x_b)(x_b − x_a)/(f(x_b) − f(x_a))
e = (x_new − x_old)/x_new × 100%
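A Python sketch of the secant update (function name and stopping rule are assumptions):

```python
# Secant iteration: always keeps the two most recent approximations.
def secant(f, xa, xb, tol=1e-10, max_iter=60):
    for _ in range(max_iter):
        x_new = xb - f(xb) * (xb - xa) / (f(xb) - f(xa))
        xa, xb = xb, x_new             # slide the pair forward
        if abs(xb - xa) < tol:
            break
    return xb

# Example 1 below: f(x) = 2x^2 - x - 2 with Xa = 0 and Xb = 4
root = secant(lambda x: 2*x**2 - x - 2, 0.0, 4.0)
```

With those starting guesses it converges to about 1.2808, matching the table.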
EXAMPLE 1. f(x) = 2x^2-x-2
Let:
Xa = 0
Xb = 4
k Xa Xb f(Xa) f(Xb) e
1 0.0000 4.0000 -2.0000 26.0000
2 4.0000 0.2857 26.0000 -2.1224 1300.00%
3 0.2857 0.5660 -2.1224 -1.9252 49.52%
4 0.5660 3.3027 -1.9252 16.5127 82.86%
5 3.3027 0.8518 16.5127 -1.4007 287.73%
6 0.8518 1.0434 -1.4007 -0.8659 18.37%
7 1.0434 1.3538 -0.8659 0.3115 22.92%
8 1.3538 1.2716 0.3115 -0.0375 6.46%
9 1.2716 1.2805 -0.0375 -0.0013 0.69%
10 1.2805 1.2808 -0.0013 0.0000 0.02%
11 1.2808 1.2808 0.0000 0.0000 0.00%
12 1.2808 1.2808 0.0000 0.0000 0.00%
EXAMPLE 2. f(x) = x^2+3x+1
Let:
Xa = 2
Xb = 4
k Xa Xb f(Xa) f(Xb) e
1 2.0000 4.0000 11.0000 29.0000
2 4.0000 0.7778 29.0000 3.9383 414.29%
3 0.7778 0.2714 3.9383 1.8880 186.55%
4 0.2714 -0.1948 1.8880 0.4535 239.32%
5 -0.1948 -0.3422 0.4535 0.0904 43.07%
6 -0.3422 -0.3789 0.0904 0.0068 9.69%
7 -0.3789 -0.3819 0.0068 0.0001 0.78%
8 -0.3819 -0.3820 0.0001 0.0000 0.01%
9 -0.3820 -0.3820 0.0000 0.0000 0.00%
10 -0.3820 -0.3820 0.0000 0.0000 0.00%
B. Bracketing Method
BISECTION METHOD
The Bisection Method, also called the interval halving method, the binary search method, or the
dichotomy method, is based on Bolzano's theorem for continuous functions.
The Bisection Method looks to find the value c for which the plot of the function f crosses the x-
axis. The c value is in this case is an approximation of the root of the function f(x). How close
the value of c gets to the real root depends on the value of the tolerance we set for the algorithm.
The algorithm ends when the values of f(c) is less than a defined tolerance (e.g. 0.001). In this
case we say that c is close enough to be the root of the function for which f(c) ~= 0.
Midpoint: ck = (ak + bk)/2
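A minimal Python sketch of the halving loop (the tolerance handling in the text is replaced here by a fixed iteration count, which is an assumption):

```python
# Bisection: repeatedly halve the bracket [a, b] around the sign change.
def bisect(f, a, b, iterations=40):
    assert f(a) * f(b) <= 0, "f must change sign on [a, b]"
    for _ in range(iterations):
        c = (a + b) / 2.0              # ck = (ak + bk)/2
        if f(a) * f(c) <= 0:
            b = c                      # root lies in [a, c]
        else:
            a = c                      # root lies in [c, b]
    return (a + b) / 2.0

# Example 1 below: f(x) = x^3 + 3x - 5 on the interval (-5, 5)
root = bisect(lambda x: x**3 + 3*x - 5, -5.0, 5.0)
```

This converges to about 1.154, the value the iteration table below reaches.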
EXAMPLE 1. f(x) = x^3 + 3x − 5, Interval (−5, 5)
x	f(x)
0	-5
1	-1
2	9
3	31
4	71
5	135
6	229
[Plot of f(x) for x = 0 to 7]
k	left endpoint ak	midpoint ck	right endpoint bk	f(ak)	f(ck)	f(bk)
0 -5.000 0.000 5.000 -145.000 -5.000 135.000
1 0.000 2.500 5.000 -5.000 18.125 135.000
2 0.000 1.250 2.500 -5.000 0.703 18.125
3 0.000 0.625 1.250 -5.000 -2.881 0.703
4 0.625 0.938 1.250 -2.881 -1.364 0.703
5 0.938 1.094 1.250 -1.364 -0.410 0.703
6 1.094 1.172 1.250 -0.410 0.125 0.703
7 1.094 1.133 1.172 -0.410 -0.148 0.125
8 1.133 1.152 1.172 -0.148 -0.013 0.125
9 1.152 1.162 1.172 -0.013 0.056 0.125
10 1.152 1.157 1.162 -0.013 0.021 0.056
11 1.152 1.155 1.157 -0.013 0.004 0.021
12 1.152 1.154 1.155 -0.013 -0.004 0.004
13 1.154 1.154 1.155 -0.004 0.000 0.004
14 1.154 1.154 1.154 -0.004 -0.002 0.000
15 1.154 1.154 1.154 -0.002 -0.001 0.000
16 1.154 1.154 1.154 -0.001 -0.001 0.000
17 1.154 1.154 1.154 -0.001 0.000 0.000
18 1.154 1.154 1.154 0.000 0.000 0.000
19 1.154 1.154 1.154 0.000 0.000 0.000
20 1.154 1.154 1.154 0.000 0.000 0.000
EXAMPLE 2. f(x) = x^2 − 2x − 1, Interval (−10, 0)
x	f(x)
-2	7
-1	2
0	-1
1	-2
2	-1
3	2
4	7
5	14
6	23
7	34
8	47
[Plot of f(x) for x = -4 to 10]
k	left endpoint ak	midpoint ck	right endpoint bk	f(ak)	f(ck)	f(bk)
0 -10.0000 -5.0000 0.0000 119.0000 34.0000 -1.0000
1 -5.0000 -2.5000 0.0000 34.0000 10.2500 -1.0000
2 -2.5000 -1.2500 0.0000 10.2500 3.0625 -1.0000
3 -1.2500 -0.6250 0.0000 3.0625 0.6406 -1.0000
4 -0.6250 -0.3125 0.0000 0.6406 -0.2773 -1.0000
5 -0.6250 -0.4688 -0.3125 0.6406 0.1572 -0.2773
6 -0.4688 -0.3906 -0.3125 0.1572 -0.0662 -0.2773
7 -0.4688 -0.4297 -0.3906 0.1572 0.0440 -0.0662
8 -0.4297 -0.4102 -0.3906 0.0440 -0.0115 -0.0662
9 -0.4297 -0.4199 -0.4102 0.0440 0.0162 -0.0115
10 -0.4199 -0.4150 -0.4102 0.0162 0.0023 -0.0115
11 -0.4150 -0.4126 -0.4102 0.0023 -0.0046 -0.0115
12 -0.4150 -0.4138 -0.4126 0.0023 -0.0011 -0.0046
13 -0.4150 -0.4144 -0.4138 0.0023 0.0006 -0.0011
14 -0.4144 -0.4141 -0.4138 0.0006 -0.0003 -0.0011
15 -0.4144 -0.4143 -0.4141 0.0006 0.0002 -0.0003
16 -0.4143 -0.4142 -0.4141 0.0002 0.0000 -0.0003
17 -0.4142 -0.4142 -0.4141 0.0000 -0.0001 -0.0003
18 -0.4142 -0.4142 -0.4142 0.0000 -0.0001 -0.0001
19 -0.4142 -0.4142 -0.4142 0.0000 -0.0001 -0.0001
20 -0.4142 -0.4142 -0.4142 0.0000 -0.0001 -0.0001
FALSE POSITION METHOD
The Regula–Falsi Method is a numerical method for estimating the roots of a polynomial f(x).
A value x replaces the midpoint in the Bisection Method and serves as the new approximation of
a root of f(x). The objective is to make convergence faster. Assume that f(x) is continuous.
Algorithm for the Regula–Falsi Method: Given a continuous function f(x)
● Find points a and b such that a < b and f(a) * f(b) < 0.
● Take the interval [a, b] and determine the next value of x1.
● If f(x1) = 0 then x1 is an exact root, else if f(x1) * f(b) < 0
then let a = x1, else if f(a) * f(x1) < 0 then let b = x1.
● Repeat steps 2 & 3 until f(xi) = 0 or |f(xi)| ≤ DOA, where
DOA stands for degree of accuracy.
Graphically, if the root is in [a, xi], then the next interpolation line is drawn between
(a, f(a)) and (xi, f(xi)); otherwise, if the root is in [xi, b], then the next interpolation
line is drawn between (xi, f(xi)) and (b, f(b)).
Interpolated point: ck = b − f(b)(b − a)/(f(b) − f(a))
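The steps above can be sketched in Python (an illustrative assumption; here it is tried on f(x) = x^3 + 3x − 5 over [1, 2], where f changes sign):

```python
# Regula-Falsi: like bisection, but the interpolated point ck replaces
# the midpoint.
def false_position(f, a, b, iterations=60):
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    c = a
    for _ in range(iterations):
        c = b - f(b) * (b - a) / (f(b) - f(a))   # interpolated point ck
        if f(c) == 0:
            break                                # exact root found
        if f(a) * f(c) < 0:
            b = c                                # root lies in [a, c]
        else:
            a = c                                # root lies in [c, b]
    return c

root = false_position(lambda x: x**3 + 3*x - 5, 1.0, 2.0)
```

It converges to about 1.1542, the same root the bisection example found, typically in fewer iterations.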
EXAMPLE 1. f(x) = x^3 − 3x + 1, Interval (0, 1)
x	f(x)
0	1
0.5	-0.375
1	-1
[Plot of f(x) on the interval]
k	left endpoint ak	interpolated point ck	right endpoint bk	f(ak)	f(ck)	f(bk)
0 0 0.500 1.000 1.000 -0.375 -1.000
1 0 0.364 0.500 1.000 -0.043 -0.375
2 0 0.349 0.364 1.000 -0.004 -0.043
3 0 0.347 0.349 1.000 0.000 -0.004
4 0.347 0.347 0.349 0.000 0.000 -0.004
LINEAR REGRESSION
The least-squares line y = a + bx is found from:
b (slope) = (n Σxy − (Σx)(Σy)) / (n Σx^2 − (Σx)^2)
a (intercept) = ((Σy)(Σx^2) − (Σx)(Σxy)) / (n Σx^2 − (Σx)^2)
where:
b = slope of the line
a = y-intercept of the line
x = values of the first data set
y = values of the second data set
n = number of data points
EXAMPLE 1:
n	x	y	x^2	xy
1	5	6	25	30
2	10	12	100	120
3	12	16	144	192
4	16	20	256	320
Σ	43	54	525	662
b (slope) = (4(662) − (43)(54)) / (4(525) − 43^2) = 1.2988
a (intercept) = ((54)(525) − (43)(662)) / (4(525) − 43^2) = −0.4622
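The slope and intercept formulas can be checked with a few lines of Python (an illustrative sketch; the function name `linear_fit` is an assumption):

```python
# Least-squares slope b and intercept a from the summation formulas above.
def linear_fit(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = n * sxx - sx * sx
    b = (n * sxy - sx * sy) / denom          # slope
    a = (sy * sxx - sx * sxy) / denom        # intercept
    return a, b

# Example 1's data
a, b = linear_fit([5, 10, 12, 16], [6, 12, 16, 20])
```

This reproduces b = 1.2988 and a = -0.4622 from Example 1.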
The correlation coefficient and the sample standard deviations are:
r = Σ((x − x̄)(y − ȳ)) / sqrt( Σ(x − x̄)^2 Σ(y − ȳ)^2 )
Sx = sqrt( Σ(x − x̄)^2 / (n − 1) )
Sy = sqrt( Σ(y − ȳ)^2 / (n − 1) )
[Scatter plot with fitted trend line y = 0.6558x + 42.146, R² = 0.3536]
POLYNOMIAL REGRESSION
Polynomial regression is one of several methods of curve fitting. With polynomial regression,
the data is approximated using a polynomial function. A polynomial is a function that takes the
form f(x) = c0 + c1x + c2x^2 + ⋯ + cnx^n, where n is the degree of the polynomial.
When to Use Polynomial Regression:
We use polynomial regression when the relationship between a predictor
and response variable is nonlinear.
1. Create a scatterplot. It shows the relationship of the explanatory variable (x)
and the response variable (y).
2. Create a residuals vs. fitted plot.
3. Calculate the R^2 of the model.
n	xk	yk	xk^2	xk^3	xk^4	(xk)(yk)	(xk^2)(yk)
1	1	2	1	1	1	2	2
2	2	4	4	8	16	8	16
3	3	6	9	27	81	18	54
4	3	8	9	27	81	24	72
5	2	10	4	8	16	20	40
6	1	12	1	1	1	12	12
sum	12	42	28	72	196	84	196
mean	2	7.000
Normal Equations (for y = b0 + b1x + b2x^2):
6b0 + 12b1 + 28b2 = 42
12b0 + 28b1 + 72b2 = 84
28b0 + 72b1 + 196b2 = 196
In matrix form:
[6 12 28][b0]   [42]
[12 28 72][b1] = [84]
[28 72 196][b2]   [196]
Using triangularization followed by back substitution:
6 12 28 | 42
12 28 72 | 84	M12 = 2
28 72 196 | 196	M13 = 4.6667
6 12 28 | 42
0 4 16 | 0
0 16 65.3333 | 0	M23 = 4
6 12 28 | 42
0 4 16 | 0
0 0 1.3333 | 0
Back substitution gives b2 = 0, b1 = 0, b0 = 7.
Therefore, y = 7.000. Because the x-values are symmetric (each of x = 1, 2, 3 appears twice,
with y-values averaging 7), the best-fit quadratic reduces to the mean of y.
The residual sum of squares is Sr = Σ(yk − b0 − b1xk − b2xk^2)^2 = Σ(yk − 7)^2
= 25 + 9 + 1 + 1 + 9 + 25 = 70, and the standard error is S = sqrt(Sr/(n − 3)) = 4.830.
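The normal-equation procedure above can be sketched in Python (an illustrative assumption; it builds the sums, forms the 3x3 system, and solves it by elimination):

```python
# Degree-2 polynomial regression via the normal equations.
def quad_fit(xs, ys):
    n = len(xs)
    s = lambda p, q: sum(x**p * y**q for x, y in zip(xs, ys))
    # normal equations for y = b0 + b1*x + b2*x^2, as an augmented matrix
    M = [[n,       s(1, 0), s(2, 0), s(0, 1)],
         [s(1, 0), s(2, 0), s(3, 0), s(1, 1)],
         [s(2, 0), s(3, 0), s(4, 0), s(2, 1)]]
    M = [[float(v) for v in row] for row in M]
    for k in range(2):                      # forward elimination
        for i in range(k + 1, 3):
            m = M[i][k] / M[k][k]
            for j in range(k, 4):
                M[i][j] -= m * M[k][j]
    b = [0.0] * 3                           # back substitution
    for i in range(2, -1, -1):
        b[i] = (M[i][3] - sum(M[i][j] * b[j] for j in range(i + 1, 3))) / M[i][i]
    return b

# The example's data; the symmetric x-values give b ≈ (7, 0, 0)
b = quad_fit([1, 2, 3, 3, 2, 1], [2, 4, 6, 8, 10, 12])
```

For less symmetric data the same routine returns a genuinely curved fit.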
V. NUMERICAL INTEGRATION
EULER'S METHOD
The Euler method is the most straightforward method to integrate a differential
equation. Consider the first-order differential equation
ẋ = f(t, x),
with the initial condition x(0) = x0. Define tn = nΔt and xn = x(tn). A Taylor
series expansion of xn+1 results in
xn+1 = x(tn + Δt)
     = x(tn) + Δt ẋ(tn) + O(Δt^2)
     = x(tn) + Δt f(tn, xn) + O(Δt^2).
The Euler Method is therefore written as
xn+1 = xn + Δt f(tn, xn).
We say that the Euler method steps forward in time using a time-step Δt, starting
from the initial value x0 = x(0). The local error of the Euler Method is O(Δt^2).
The global error, however, incurred when integrating to a time T, is a factor of 1/Δt
larger and is given by O(Δt). It is therefore customary to call the Euler Method a
first-order method.
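The stepping rule can be sketched in Python; the test problem dx/dt = x, x(0) = 1, is an assumption (the document gives no concrete Euler example), chosen because its exact value at t = 1 is e:

```python
import math

# Euler's method: x_{n+1} = x_n + dt * f(t_n, x_n).
def euler(f, x0, t_end, dt):
    t, x = 0.0, x0
    while t < t_end - 1e-12:       # guard against floating-point drift
        x = x + dt * f(t, x)
        t += dt
    return x

# dx/dt = x, x(0) = 1, integrated to t = 1; exact answer is e ≈ 2.71828
x = euler(lambda t, x: x, 1.0, 1.0, 0.001)
```

With Δt = 0.001 the result is about 2.7169; halving Δt roughly halves the error, the signature of a first-order method.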
TRAPEZOIDAL METHOD
Trapezoidal Rule is a rule that evaluates the area under the curves by dividing the total
area into smaller trapezoids rather than using rectangles. This integration works by
approximating the region under the graph of a function as a trapezoid, and it
calculates the area. This rule takes the average of the left and the right sum.
We suppose that the function f(x) is known at the n+1 points labeled as x0, x1, ..., xn,
with the endpoints given by x0 = a and xn = b. Define
fi = f(xi), hi = xi+1 − xi,
where the subinterval widths arise from the change of variables s = x − xi. Applying the
trapezoidal rule to the integral over each subinterval and summing, if the points are not
evenly spaced, say because the data are experimental values, then the hi may differ for each
value of i and are used directly.
However, if the points are evenly spaced, say because f(x) can be computed, we
have hi = h, independent of i. We can then define
xi = a + ih, i = 0, 1, ..., n;
and since the end point b satisfies b = a + nh, we have
h = (b − a)/n.
The composite trapezoidal rule for evenly spaced points then becomes
∫ab f(x)dx ≈ Tn = (h/2)[f(x0) + 2f(x1) + 2f(x2) + ... + 2f(xn−1) + f(xn)].
The first and last terms have a multiple of one; all other terms have a multiple of
two; and the entire sum is multiplied by h/2.
EXAMPLE 1: Δx = 2, n = 6
x	y
-4	0
-2	3
0	5
2	7
4	4
6	9
8	2
T6 = (Δx/2)[f(x0) + 2f(x1) + 2f(x2) + ... + 2f(xn−1) + f(xn)]
   = (2/2)[0 + 2(3 + 5 + 7 + 4 + 9) + 2] = 58
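The same evaluation in Python (an illustrative one-liner over the tabulated data):

```python
# Composite trapezoidal rule on equally spaced tabulated values ys with step h.
def trapezoid(ys, h):
    return (h / 2.0) * (ys[0] + 2.0 * sum(ys[1:-1]) + ys[-1])

# Example 1's table with h = 2
T = trapezoid([0, 3, 5, 7, 4, 9, 2], 2.0)   # (2/2)[0 + 2(3+5+7+4+9) + 2] = 58.0
```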
SIMPSON'S RULE
Simpson’s rule is a numerical method for evaluating or approximating a definite integral;
it approximates the integrand by quadratic polynomials over pairs of subintervals.
We here consider the composite Simpson’s rule for evenly spaced points. We apply
Simpson’s rule over intervals of 2h, starting from a and ending at b.
Note that n must be even for this scheme to work. Combining terms, we have
∫ab f(x)dx ≈ (h/3)[f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + ... + 2f(xn−2) + 4f(xn−1) + f(xn)].
The first and last terms have a multiple of one; the even-indexed terms have a
multiple of 2; the odd-indexed terms have a multiple of 4; and the entire sum is
multiplied by h/3.
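The 1-4-2-...-4-1 weighting can be sketched in Python (an illustrative assumption), checked here on f(x) = 1/x over [1, 3], whose exact integral is ln 3 ≈ 1.0986:

```python
import math

# Composite Simpson's 1/3 rule with n (even) subintervals.
def simpson(f, a, b, n):
    assert n % 2 == 0, "n must be even"
    h = (b - a) / n
    total = f(a) + f(b)                    # endpoint weights of 1
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)   # 4 odd, 2 even
    return h / 3.0 * total

S = simpson(lambda x: 1.0 / x, 1.0, 3.0, 12)
```

With n = 12 this gives about 1.0986, agreeing with Example 1 below.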
EXAMPLE 1: f(x) = 1/x, a = 1, b = 3, n = 12, Δx = 0.1667
xs	f(x)	coefficient	c*f(x)
1.0000	1.0000	1	1.0000
1.1667	0.8571	4	3.4286
1.3333	0.7500	2	1.5000
1.5000	0.6667	4	2.6667
1.6667	0.6000	2	1.2000
1.8333	0.5455	4	2.1818
2.0000	0.5000	2	1.0000
2.1667	0.4615	4	1.8462
2.3333	0.4286	2	0.8571
2.5000	0.4000	4	1.6000
2.6667	0.3750	2	0.7500
2.8333	0.3529	4	1.4118
3.0000	0.3333	1	0.3333
sum = 19.7755
Integral ≈ (Δx/3)(19.7755) = 1.0986
[Plot of the tabulated values of f(x) over the interval]
EXAMPLE 2: f(x) = 1/x, a = 2, b = 4, n = 10, Δx = 0.2000
xs	f(x)	coefficient	c*f(x)
2.0000	0.5000	1	0.5000
2.2000	0.4545	4	1.8182
2.4000	0.4167	2	0.8333
2.6000	0.3846	4	1.5385
2.8000	0.3571	2	0.7143
3.0000	0.3333	4	1.3333
3.2000	0.3125	2	0.6250
3.4000	0.2941	4	1.1765
3.6000	0.2778	2	0.5556
3.8000	0.2632	4	1.0526
4.0000	0.2500	1	0.2500
sum = 10.3973
Integral ≈ (Δx/3)(10.3973) = 0.6932
[Plot of the tabulated values of f(x) over the interval]
RUNGE-KUTTA METHOD
There are two common types of Runge-Kutta method: the 2nd order and the 4th order
Runge-Kutta method. Both are iterative methods for approximating solutions of differential
equations. These methods, in several versions, were developed around the 1900s by the German
mathematicians C. Runge and M.W. Kutta.
The essence of the Runge-Kutta methods is numerical integration of the function in the
differential equation using a trial step at the midpoint of an interval, e.g., within a step
Δx or h, by numerical integration techniques such as the trapezoidal or Simpson rules. The
numerical integrations allow the cancellation of low-order error terms for more accurate
solutions.
For the second order (midpoint) method, with y' = f(x, y):
y(n+1) = yn + k2,
where
k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
FOURTH ORDER RUNGE-KUTTA METHOD
This is the most popular version of the Runge-Kutta method for solving differential
equations in initial value problems. Formulation of this method is similar to that of the
second order method:
y(n+1) = yn + (1/6)(k1 + 2k2 + 2k3 + k4),
where
k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3)
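The classical fourth-order scheme can be sketched in Python; the test problem dy/dx = y, y(0) = 1, is an assumption (the document gives no concrete example), chosen because the exact value at x = 1 is e:

```python
import math

# Classical 4th-order Runge-Kutta: four slope samples per step, combined
# with weights 1, 2, 2, 1.
def rk4(f, x0, y0, x_end, h):
    x, y = x0, y0
    while x < x_end - 1e-12:
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        x += h
    return y

# dy/dx = y, y(0) = 1, integrated to x = 1 with h = 0.1
y = rk4(lambda x, y: y, 0.0, 1.0, 1.0, 0.1)
```

Even with the coarse step h = 0.1 the result matches e ≈ 2.71828 to about six digits, illustrating the method's fourth-order accuracy compared with Euler's method above.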
VI. REFERENCES
https://www.sciencedirect.com/topics/engineering/numerical-method
onlinelearningmath.com
https://www.google.com/amp/s/www.geeksforgeeks.org/l-u-decomposition-system-linear-equations/amp
https://mathworld.wolfram.com/JacobiMethod.html
https://www.easycalculation.com/algebra/gauss-seidel-method.php
https://www.sciencedirect.com/topics/engineering/gauss-seidel-method
https://math.iitm.ac.in/public_html/sryedida/caimna/transcendental/iteration%20methods/fixed-point/iteration.html
https://www.google.com/amp/s/www.geeksforgeeks.org/secant-method-of-numerical-analysis/amp
https://www.mathworks.com/matlabcentral/fileexchange/68885-the-newton-raphson-method
https://brilliant.org/wiki/newton-raphson-method/
https://x-engineer.org/bisection-method
http://www2.lv.psu.edu/ojj/courses/cmpsc-201/numerical/regula.html