
Course Name : BCA / MCA

Subject Name: Numerical Methods

Prepared by Assistant Professor’s Team


of
Microtek College of Management & Technology

Under Guidance of

Dr. Pankaj Rajhans


An Alumnus of IIT-Delhi
President & Executive Director
Microtek College of Management & Technology
Jaunpur & Varanasi (U.P)
UNIT- I

BISECTION METHOD

The Bisection Method is a numerical method for estimating the roots of a polynomial f(x). It is one of the simplest and most reliable methods, but it is not the fastest. Assume that f(x) is continuous.

Algorithm for the Bisection Method: Given a continuous function f(x)

1. Find points a and b such that a < b and f(a) * f(b) < 0.
2. Take the interval [a, b] and find its midpoint x1.
3. If f(x1) = 0 then x1 is an exact root, else if f(x1) * f(b) < 0 then let a = x1, else
if f(a) * f(x1) < 0 then let b = x1.
4. Repeat steps 2 & 3 until f(xi) = 0 or |f(xi)| <= DOA, where DOA stands for
degree of accuracy.
5. For the ith iteration, where i = 1, 2, ..., n, the interval width is
   Δxi = 0.5 Δxi–1 = (0.5)^i (b – a),
   and the new midpoint is
   xi = ai–1 + Δxi.

EXAMPLE: Consider f(x) = x³ + 3x – 5, where [a = 1, b = 2] and DOA = 0.001.

i   a           x            b        f(a)                  f(x)                   f(b)

1   1           1.5          2        –1                    2.875                  9

2   1           1.25         1.5      –1                    0.703125               2.875

3   1           1.125        1.25     –1                    –0.201171875           0.703125

4   1.125       1.1875       1.25     –0.201171875          0.237060546875         0.703125

5   1.125       1.15625      1.1875   –0.201171875          0.014556884765625      0.237060546875

6   1.125       1.140625     1.15625  –0.201171875          –0.0941429138183594    0.014556884765625

7   1.140625    1.1484375    1.15625  –0.0941429138183594   –0.0400032997131348    0.014556884765625

8   1.1484375   1.15234375   1.15625  –0.0400032997131348   –0.0127759575843811    0.014556884765625

9   1.15234375  1.154296875  1.15625  –0.0127759575843811   0.000877253711223602   0.014556884765625
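The steps above translate directly into a short program. The following is a minimal Python sketch (the function names are our own choice, not part of the original notes); it reproduces the table for f(x) = x³ + 3x – 5 on [1, 2] with DOA = 0.001.

def bisection(f, a, b, doa=0.001, max_iter=100):
    """Bisection method: f must be continuous and f(a), f(b) of opposite sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for i in range(1, max_iter + 1):
        x = (a + b) / 2.0                  # midpoint of the current bracket
        fx = f(x)
        if fx == 0 or abs(fx) <= doa:      # |f(x)| within the degree of accuracy
            return x, i
        if f(a) * fx < 0:                  # root lies in [a, x]
            b = x
        else:                              # root lies in [x, b]
            a = x
    return x, max_iter

# f(x) = x^3 + 3x - 5 on [1, 2] with DOA = 0.001, as in the table above
root, iterations = bisection(lambda x: x**3 + 3*x - 5, 1.0, 2.0)
print(root, iterations)   # ~1.1543 after 9 iterations, matching the last row of the table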

____________________________________________________

2. REGULA-FALSI METHOD
The convergence process in the bisection method is very slow. It depends only on the choice of end points of the interval [a,b]. The function f(x) does not have any role in finding the point c (which is just the mid-point of a and b). It is used only to decide the next smaller interval [a,c] or [c,b]. A better approximation to c can be obtained by taking the straight line L joining the points (a,f(a)) and (b,f(b)) and finding where it intersects the x-axis. To obtain the value of c we can equate the two expressions for the slope m of the line L.
m = (f(b) - f(a)) / (b - a) = (0 - f(b)) / (c - b)

=> (c - b) * (f(b) - f(a)) = -(b - a) * f(b)

=> c = b - f(b) * (b - a) / (f(b) - f(a))
Now the next smaller interval which brackets the root can be obtained by checking

f(a) * f(c) < 0 then b = c
f(a) * f(c) > 0 then a = c
f(a) * f(c) = 0 then c is the root.

Selecting c by the above expression is called the Regula-Falsi method or False Position method.

Although the length of the interval gets smaller in each iteration, it is possible that it may not go to zero. If the graph y = f(x) is concave near the root s, one of the endpoints becomes fixed and the other end marches towards the root.

EXAMPLE:

Find the root between (2, 3) of x³ - 2x - 5 = 0 by using the regula falsi method.
Given
f(x) = x³ - 2x - 5
f(2) = 2³ - 2(2) - 5 = -1 (negative)
f(3) = 3³ - 2(3) - 5 = 16 (positive)
Let us take a = 2 and b = 3.
The first approximation to the root is x1, given by
x1 = (a f(b) - b f(a)) / (f(b) - f(a))
   = (2 f(3) - 3 f(2)) / (f(3) - f(2))
   = (2 × 16 - 3 × (-1)) / (16 - (-1))
   = (32 + 3) / (16 + 1) = 35/17
   = 2.058

Now f(2.058) = 2.058³ - 2 × 2.058 - 5


= 8.716 - 4.116 - 5
= - 0.4
The root lies between 2.058 and 3

Taking a = 2.058 and b = 3, we have the second approximation to the root given by
x2 = (a f(b) - b f(a)) / (f(b) - f(a))
   = (2.058 × f(3) - 3 × f(2.058)) / (f(3) - f(2.058))
   = (2.058 × 16 - 3 × (-0.4)) / (16 - (-0.4))
   = 2.081

Now f(2.081) = 2.081³ - 2 × 2.081 - 5


= -0.15
The root lies between 2.081 and 3
Take a = 2.081 and b = 3.
The third approximation to the root is given by
x3 = (a f(b) - b f(a)) / (f(b) - f(a))
   = (2.081 × 16 - 3 × (-0.15)) / (16 - (-0.15))
   = 2.090 (approximately)
The required root, correct to two decimal places, is 2.09.
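A minimal Python sketch of this procedure (illustrative naming, not a prescribed implementation), applied to the same equation x³ - 2x - 5 = 0 on [2, 3]:

def regula_falsi(f, a, b, tol=1e-3, max_iter=100):
    """False position: c is where the chord through (a, f(a)) and (b, f(b)) cuts the x-axis."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # c = b - f(b)(b - a)/(f(b) - f(a))
        fc = f(c)
        if abs(fc) < tol:                  # stop once f(c) is small enough
            break
        if fa * fc < 0:                    # root now bracketed by [a, c]
            b, fb = c, fc
        else:                              # root now bracketed by [c, b]
            a, fa = c, fc
    return c

print(regula_falsi(lambda x: x**3 - 2*x - 5, 2.0, 3.0))   # ~2.0945, i.e. 2.09 to two decimal places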

____________________________________________________

3.Newton-Raphson:
In numerical analysis, Newton's method (also known as the Newton–Raphson method or the
Newton–Fourier method) is an efficient algorithm for finding approximations to the zeros (or
roots) of a real-valued function. As such, it is an example of a root-finding algorithm.

Any zero-finding method (Bisection Method, False Position Method, Newton-Raphson, etc.) can
also be used to find a minimum or maximum of such a function, by finding a zero in the
function's first derivative, see Newton's method as an optimization algorithm.

Description of the method:

The tangent through the point (xn, f(xn)) is

y = f(xn) + f'(xn)(x - xn).

The next approximation, xn+1, is where the tangent line intersects the x-axis, so where y = 0. Rearranging, we find

xn+1 = xn - f(xn)/f'(xn).

Example

Let's consider f(x) = x² - a. Here, we know the roots exactly, so we can see better just how well the method converges.

We have f'(x) = 2x, so the iteration is

xn+1 = xn - (xn² - a)/(2xn) = (xn + a/xn)/2.

This method is easily implemented, even with just pen and paper, and has been used to rapidly estimate square roots since long before Newton.

The nth error is en = xn - √a, so we have

en+1 = xn+1 - √a = (xn - √a)²/(2xn) = en²/(2xn).

If a = 0, this simplifies to en+1 = en/2, as expected.

If a > 0, en+1 will be positive, provided en is greater than -√a, i.e. provided xn is positive. Thus, starting from any positive number, all the errors, except perhaps the first, will be positive.

The method converges when |en+1| < |en|, so, assuming en is positive, it converges when

en²/(2xn) < en, i.e. en < 2xn,

which is always true, since en = xn - √a < xn.

This method converges to the square root, starting from any positive number, and it does so
quadratically.
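A minimal Python sketch of this square-root iteration (illustrative, with our own naming):

def newton_sqrt(a, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = x^2 - a: x_{n+1} = (x_n + a/x_n) / 2."""
    x = x0
    for _ in range(max_iter):
        x_new = 0.5 * (x + a / x)          # one Newton step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(newton_sqrt(2.0, 1.0))               # 1.4142135623..., converging quadratically in a few steps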

____________________________________________________
<1>Gauss Elimination Method
DEFINITION 2.2.10 (Forward/Gauss Elimination Method) Gaussian elimination is a method
of solving a linear system (consisting of equations in unknowns) by bringing the
augmented matrix

to an upper triangular form


This elimination process is also called the forward elimination method.

The following examples illustrate the Gauss elimination procedure.

EXAMPLE 2.2.11 Solve the linear system by Gauss elimination method.

Solution: In this case, the augmented matrix is The method proceeds along the
following steps.

1. Interchange and equation (or ).

2. Divide the equation by (or ).

3. Add times the equation to the equation (or ).


4. Add times the equation to the equation (or ).

5. Multiply the equation by (or ).

The last equation gives the second equation now gives Finally the first equation

gives Hence the set of solutions is A UNIQUE SOLUTION.

EXAMPLE 2.2.12 Solve the linear system by Gauss elimination method.

Solution: In this case, the augmented matrix is and the method proceeds as
follows:

1. Add times the first equation to the second equation.


2. Add times the first equation to the third equation.

3. Add times the second equation to the third equation

Thus, the set of solutions is with


arbitrary. In other words, the system has INFINITE NUMBER OF SOLUTIONS.
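The elimination and back-substitution procedure can be sketched in a few lines of Python. The system used below is an illustrative one (not that of the examples above), and partial pivoting has been added for numerical stability.

import numpy as np

def gauss_eliminate(A, b):
    """Forward elimination to upper triangular form, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + int(abs(A[k:, k]).argmax())   # partial pivoting: largest pivot in column k
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]             # multiplier for row i
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution from the last row upwards
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_eliminate(A, b))                  # [ 2.  3. -1.]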

<2>Jacobi method
In numerical linear algebra, the Jacobi method is an algorithm for determining the solutions of a system of linear equations in which each diagonal element dominates, in absolute value, the remaining elements in its row (a diagonally dominant system). Each diagonal element is solved for, and an approximate value plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.

Given a square system of n linear equations:

where:
Then A can be decomposed into a diagonal component D and the remainder R:

A = D + R.

The solution is then obtained iteratively via

x(k+1) = D^(-1) (b - R x(k)).

The element-based formula is thus:

xi(k+1) = (bi - Σj≠i aij xj(k)) / aii.

Note that the computation of xi(k+1) requires each element in x(k) except itself. Unlike the Gauss–
Seidel method, we can't overwrite xi(k) with xi(k+1), as that value will be needed by the rest of the
computation. The minimum amount of storage is two vectors of size n.
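A minimal Python sketch of the iteration x(k+1) = D^(-1)(b - R x(k)); the system below is an illustrative diagonally dominant one, not the one of the example that follows.

import numpy as np

def jacobi(A, b, x0, iters=25):
    """Jacobi iteration: x(k+1) = D^(-1) (b - R x(k)) with D = diag(A), R = A - D."""
    D = np.diag(A)                  # the diagonal entries of A
    R = A - np.diagflat(D)          # the off-diagonal remainder
    x = x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / D         # solve each diagonal element for its own unknown
    return x

# Illustrative diagonally dominant system with solution (1, 2, -1)
A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 4.0]])
b = np.array([5.0, 9.0, 1.0])
print(jacobi(A, b, np.zeros(3)))    # approaches [ 1.  2. -1.] after 25 iterations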

A linear system of the form Ax = b with initial estimate x(0) is given by

We use the equation x(k+1) = D^(-1)(b - R x(k)), described above, to estimate x. First, we rewrite the equation in a more convenient form x(k+1) = T x(k) + C, where T = -D^(-1)(L + U) and C = D^(-1) b. Note that R = L + U, where L and U are the strictly lower and strictly upper parts of A. From the known values we determine T as

Further, C is found as

With T and C calculated, we estimate as :

The next iteration yields

This process is repeated until convergence (i.e., until is small). The solution after
25 iterations is

<3>Gauss–Seidel method
In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or
the method of successive displacement, is an iterative method used to solve a linear system of
equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp
Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix
with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either
diagonally dominant, or symmetric and positive definite.

Description

Given a square system of n linear equations with unknown x:

where:

Then A can be decomposed into a lower triangular component L* and a strictly upper triangular component U:

A = L* + U.

The system of linear equations may be rewritten as:

L* x = b - U x.

The Gauss–Seidel method is an iterative technique that solves the left hand side of this expression for x, using the previous value for x on the right hand side. Analytically, this may be written as:

x(k+1) = L*^(-1) (b - U x(k)).

However, by taking advantage of the triangular form of L*, the elements of x(k+1) can be computed sequentially using forward substitution:

xi(k+1) = (bi - Σj<i aij xj(k+1) - Σj>i aij xj(k)) / aii.

Note that the computation of xi(k+1) uses the elements of x(k+1) that have already been computed, and only those elements of x(k) that have not yet been updated.

The procedure is generally continued until the changes made by an iteration are below some
tolerance.
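A minimal Python sketch of the forward-substitution form above. The coefficients here are an illustrative diagonally dominant system whose exact solution is (1, 2, -1, 1), the solution quoted for the equation-version example further below; they are not the document's original matrices.

import numpy as np

def gauss_seidel(A, b, x0, sweeps=10):
    """Gauss-Seidel: each updated component is used immediately within the same sweep."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):
            s1 = A[i, :i] @ x[:i]          # components already updated in this sweep
            s2 = A[i, i + 1:] @ x[i + 1:]  # components still from the previous sweep
            x[i] = (b[i] - s1 - s2) / A[i, i]
    return x

A = np.array([[10.0, -1.0,  2.0,  0.0],
              [-1.0, 11.0, -1.0,  3.0],
              [ 2.0, -1.0, 10.0, -1.0],
              [ 0.0,  3.0, -1.0,  8.0]])
b = np.array([6.0, 25.0, -11.0, 15.0])
print(gauss_seidel(A, b, np.zeros(4)))     # converges to [ 1.  2. -1.  1.]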

An example for the matrix version


A linear system shown as is given by:

and

We want to use the equation

in the form

where:

and

We must decompose into the sum of a lower triangular component and a strict upper
triangular component :

and

The inverse of is:

Now we can find:


Now we have and and we can use them to obtain the vectors iteratively.

First of all, we have to choose : we can only guess. The better the guess, the quicker the
algorithm will perform.

We suppose:

We can then calculate:

As expected, the algorithm converges to the exact solution:


In fact, the matrix A is diagonally dominant (but not positive definite).

An example for the equation version


Suppose we are given k equations, where the xn are the unknowns of these equations, and a starting point x0. From the first equation, solve for x1 in terms of the remaining unknowns. For the next equations, substitute the previously computed values of the xs.

To make it clear let's consider an example.

Solving for , , and gives:

Suppose we choose (0, 0, 0, 0) as the initial approximation, then the first approximate solution is
given by
Using the approximations obtained, the iterative procedure is repeated until the desired accuracy
has been reached. The following are the approximated solutions after four iterations.

The exact solution of the system is (1, 2, −1, 1).

Another example for the matrix version


Another linear system shown as is given by:

and

We want to use the equation

in the form

where:

and
We must decompose into the sum of a lower triangular component and a strict upper
triangular component :

and

The inverse of is:

Now we can find:

Now we have and and we can use them to obtain the vectors iteratively.

First of all, we have to choose x(0): we can only guess. The better the guess, the quicker the algorithm will perform.

We suppose:

We can then calculate:


If we test for convergence we'll find that the algorithm diverges. In fact, the matrix A is neither
diagonally dominant nor positive definite. Then, convergence to the exact solution

is not guaranteed and, in this case, will not occur.

Unit-2

Bisection method: - Bisection method is also known as the Bolzano method. It is based upon repeated application of the intermediate value property, which states: if f(x) is a continuous function in [a, b], and f(a) and f(b) have different signs, then f(x) = 0 has at least one root lying between x = a and x = b.

We bisect the interval at x1 = (a + b)/2; the root then lies between a and x1 or between x1 and b, depending upon the sign of f(x1) (+ve or -ve). Then we bisect the new interval as before and continue the process until the root is found to the desired accuracy.
Regula falsi method:- The regula falsi method is used to find roots of algebraic as well as transcendental equations. It is also known as the method of false position. This method closely resembles the bisection method.

The general formula for the successive approximation, as derived in Unit-I, is

c = b - f(b) (b - a) / (f(b) - f(a)).

Newton-Raphson

In numerical analysis, Newton's method (also known as the Newton–Raphson method or the
Newton–Fourier method) is an efficient algorithm for finding approximations to the zeros (or
roots) of a real-valued function. As such, it is an example of a root-finding algorithm.

Any zero-finding method (Bisection Method, False Position Method, Newton-Raphson, etc.) can
also be used to find a minimum or maximum of such a function, by finding a zero in the
function's first derivative, see Newton's method as an optimization algorithm.

Description of the method

The idea of the Newton-Raphson method is as follows: one starts with an initial guess which is
reasonably close to the true root, then the function is approximated by its tangent line (which can
be computed using the tools of calculus), and one computes the x-intercept of this tangent line
(which is easily done with elementary algebra). This x-intercept will typically be a better
approximation to the function's root than the original guess, and the method can be iterated.
Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the
real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation xn. Then we can derive the formula for a better approximation, xn+1, as follows. We know from the definition of the derivative at a given point that it is the slope of the tangent at that point.

We can get better convergence if we know about the function's derivatives. Consider the tangent
to the function:

[Figure: the tangent (red) approximating the function (blue) near a point.]

Near any point, the tangent at that point is approximately the same as f(x) itself, so we can use the tangent to approximate the function.

The tangent through the point (xn, f(xn)) is

y = f(xn) + f'(xn)(x - xn).

The next approximation, xn+1, is where the tangent line intersects the x-axis, so where y = 0. Rearranging, we find

xn+1 = xn - f(xn)/f'(xn).

Example:-

Find the three roots of the equation x³ - 4x + 1 = 0 to 3 significant digits using the Newton-Raphson method.

Second Iteration

Case II. The second root lies in (0, 1): the root lies between 0 and 1.
Second approximation:
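A minimal Python sketch of the iteration for this example (the starting guesses are our own choice, one near each interval where f changes sign):

def newton(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: x**3 - 4*x + 1
df = lambda x: 3*x**2 - 4

for x0 in (-3.0, 0.0, 2.0):                 # one guess near each of the three roots
    print(round(newton(f, df, x0), 3))      # approximately -2.115, 0.254 and 1.861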

Show that the order of convergence of the Newton-Raphson method is two.

Sol. The Newton-Raphson method is given by xn+1 = xn - f(xn)/f'(xn). If α is the exact root and en = xn - α, expanding f about α by Taylor's theorem gives en+1 ≈ (f''(α) / (2 f'(α))) en². This shows that the subsequent error at each step is proportional to the square of the previous error. Thus the order of convergence is two (quadratic).

Gaussian elimination:-

Gaussian elimination is a method for solving matrix equations of the form

(1)

To perform Gaussian elimination starting with the system of equations

(2)

compose the "augmented matrix equation"

(3)
Here, the column vector in the variables is carried along for labeling the matrix rows. Now,
perform elementary row operations to put the augmented matrix into the upper triangular form

(4)

Solve the equation of the kth row for xk, then substitute back into the equation of the (k-1)st row to obtain a solution for xk-1, etc., according to the formula

(5)

In Mathematica, RowReduce performs a version of Gaussian elimination, with the equation m.x == v being solved by

GaussianElimination[m_?MatrixQ, v_?VectorQ] :=
Last /@ RowReduce[Flatten /@ Transpose[{m, v}]]

LU decomposition of a matrix is frequently used as part of a Gaussian elimination process for


solving a matrix equation.

A matrix that has undergone Gaussian elimination is said to be in echelon form.

For example, consider the matrix equation

(6)

In augmented form, this becomes

(7)

Switching the first and third rows (without switching the elements in the right-hand column
vector) gives

(8)

Subtracting 9 times the first row from the third row gives
(9)

Subtracting 4 times the first row from the second row gives

(10)

Finally, adding times the second row to the third row gives

(11)

Restoring the transformed matrix equation gives

(12)

which can be solved immediately to give , back-substituting to obtain (which


actually follows trivially in this example), and then again back-substituting to find

Jacobi Method

The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along
its main diagonal (Bronshtein and Semendyayev 1997, p. 892). Each diagonal element is solved
for, and an approximate value plugged in. The process is then iterated until it converges. This
algorithm is a stripped-down version of the Jacobi transformation method of matrix
diagonalization.

The Jacobi method is easily derived by examining each of the equations in the linear system of equations in isolation. If, in the ith equation

(1)

we solve for the value of xi while assuming the other entries of x remain fixed, we obtain

(2)

which is the Jacobi method.

In this method, the order in which the equations are examined is irrelevant, since the Jacobi
method treats them independently. The definition of the Jacobi method can be expressed with
matrices as

(3)

where the matrices , , and represent the diagonal, strictly lower triangular, and strictly
upper triangular parts of , respectively.

A linear system of the form with initial estimate is given by

We use the equation , described above, to estimate . First, we


rewrite the equation in a more convenient form , where
and . Note that where and are the strictly lower
and upper parts of . From the known values

we determine as

Further, C is found as

With T and C calculated, we estimate as :


The next iteration yields

This process is repeated until convergence (i.e., until is small). The solution after
25 iterations is

Gauss-Seidel Method

The Gauss-Seidel method (called Seidel's method by Jeffreys and Jeffreys 1988, p. 305) is a
technique for solving the equations of the linear system of equations one at a time in
sequence, and uses previously computed results as soon as they are available,

There are two important characteristics of the Gauss-Seidel method that should be noted. Firstly, the
computations appear to be serial. Since each component of the new iterate depends upon all
previously computed components, the updates cannot be done simultaneously as in the Jacobi
method. Secondly, the new iterate depends upon the order in which the equations are
examined. If this ordering is changed, the components of the new iterates (and not just their
order) will also change.

In terms of matrices, the definition of the Gauss-Seidel method can be expressed as

where the matrices , , and represent the diagonal, strictly lower triangular, and strictly
upper triangular parts of , respectively.

The Gauss-Seidel method is applicable to strictly diagonally dominant, or symmetric and positive definite, matrices.

A linear system shown as is given by:


and

We want to use the equation

in the form

where:

and

We must decompose into the sum of a lower triangular component and a strict upper
triangular component :

and

The inverse of is:

Now we can find:

Now we have and and we can use them to obtain the vectors iteratively.

First of all, we have to choose : we can only guess. The better the guess, the quicker the
algorithm will perform.

We suppose:
We can then calculate:

As expected, the algorithm converges to the exact solution:

In fact, the matrix A is diagonally dominant (but not positive definite).

Linear System of Equations

A linear system of equations is a set of linear equations in variables (sometimes called


"unknowns"). Linear systems can be represented in matrix form as the matrix equation

(1)

where A is the matrix of coefficients, x is the column vector of variables, and b is the column vector of solutions.

If there are more equations than unknowns, then the system is (in general) overdetermined and there is no solution.

If the number of equations equals the number of unknowns and the matrix of coefficients is nonsingular, then the system has a unique solution in the variables. In particular, as shown by Cramer's rule, there is a unique solution if A has a matrix inverse. In this case,

x = A^(-1) b.    (2)

If b = 0, then the solution is simply x = 0. If A has no matrix inverse, then the solution set is the translate of a subspace of dimension less than the number of unknowns, or the empty set.

If two equations are multiples of each other, solutions are of the form

(3)

for a real parameter. More generally, if there are fewer equations than unknowns, then the system is underdetermined. In this case, elementary row and column operations can be used to solve the system as far as possible, then the first components can be solved in terms of the last components to find the solution space.
Unit-3

Newton's Interpolation Formulae


As stated earlier, interpolation is the process of approximating a given function, whose values are known at tabular points, by a suitable polynomial which takes the prescribed values at those tabular points. Note that if the given data has errors, they will also be reflected in the polynomial so obtained.

In the following, we shall use forward and backward differences to obtain polynomial function

approximating when the tabular points 's are equally spaced. Let

where the polynomial is given in the following form:

(11.4.1)

for some constants to be determined using the fact that for

So, for substitute in (11.4.1) to get This gives us Next,

So, For or
equivalently
Thus, Now, using mathematical induction, we get

Thus,

As this uses the forward differences, it is called NEWTON'S FORWARD DIFFERENCE FORMULA for
interpolation, or simply, forward interpolation formula.

EXERCISE 11.4.1 Show that

and

and in general,

For the sake of numerical calculations, we give below a convenient form of the forward
interpolation formula.

Let u = (x - x0)/h, where h is the uniform spacing; then
With this transformation the above forward interpolation formula is simplified to the following form:

(11.4.2)

If n = 1, we have a linear interpolation given by

(11.4.3)

For n = 2 we get a quadratic interpolating polynomial:

(11.4.4)

and so on.
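A minimal Python sketch of formula (11.4.2): it builds the leading forward differences and accumulates the terms in u. It is illustrated on the tabular data of Example 11.4.5 below; the evaluation point 0.0015 is an arbitrary choice for the illustration.

def leading_forward_differences(y):
    """Return [y0, Δy0, Δ²y0, ...], the top entries of the forward difference table."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return [row[0] for row in table]

def newton_forward(xs, ys, x):
    """Evaluate the Newton forward interpolating polynomial at x (equally spaced xs)."""
    h = xs[1] - xs[0]
    u = (x - xs[0]) / h
    diffs = leading_forward_differences(ys)
    term, value = 1.0, diffs[0]
    for k in range(1, len(diffs)):
        term *= (u - (k - 1)) / k           # u(u-1)...(u-k+1)/k!
        value += term * diffs[k]
    return value

xs = [0, 0.001, 0.002, 0.003, 0.004, 0.005]      # data of Example 11.4.5 below
ys = [1.121, 1.123, 1.1255, 1.127, 1.128, 1.1285]
print(newton_forward(xs, ys, 0.0015))            # interpolated value at an illustrative point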

It may be pointed out here that if is a polynomial function of degree then

coincides with on the given interval. Otherwise, this gives only an approximation to

the true values of


If we are given additional point also, then the error, denoted by

is estimated by

Similarly, if we assume, is of the form

then using the fact that we have

Thus, using backward differences and the transformation we obtain the Newton's
backward interpolation formula as follows:
(11.4.5)

EXERCISE 11.4.2 Derive the Newton's backward interpolation formula (11.4.5) for

Remark 11.4.3 If the interpolating point lies closer to the beginning of the interval then one uses the
Newton's forward formula and if it lies towards the end of the interval then Newton's backward formula
is used.

Remark 11.4.4 For a given set of n tabular points, in general, all the n points need not be used for the interpolating polynomial. In fact, N is chosen so that the Nth forward/backward difference remains almost constant. Thus N is less than or equal to n.

EXAMPLE 11.4.5

1. Obtain the Newton's forward interpolating polynomial, for the following tabular data
and interpolate the value of the function at

x 0 0.001 0.002 0.003 0.004 0.005

y 1.121 1.123 1.1255 1.127 1.128 1.1285

2.
Solution: For this data, we have the forward difference table

0 1.121 0.002 0.0005 -0.0015 0.002 -.0025

.001 1.123 0.0025 -0.0010 0.0005 -0.0005

.002 1.1255 0.0015 -0.0005 0.0

.003 1.127 0.001 -0.0005


.004 1.128 0.0005

.005 1.1285

3. Thus, for where and we get

4.
Thus,

5.

6. Using the following table for approximate its value at Also, find an error estimate

(Note ).

0.70 0.72 0.74 0.76 0.78

0.84229 0.87707 0.91309 0.95045 0.98926

7. Solution: As the point lies towards the initial tabular values, we shall use Newton's
Forward formula. The forward difference table is:

0.70 0.84229 0.03478 0.00124 0.0001 0.00001

0.72 0.87707 0.03602 0.00134 0.00011

0.74 0.91309 0.03736 0.00145

0.76 0.95045 0.03881

0.78 0.98926

8. In the above table, we note that is almost constant, so we shall attempt degree
polynomial interpolation.

9. Note that gives Thus, using forward

interpolating polynomial of degree we get

10.
11.
12.
An error estimate for the approximate value is

13.

14. Note that exact value of (upto decimal place) is and the approximate
value, obtained using the Newton's interpolating polynomial is very close to this value. This is
also reflected by the error estimate given above.
15. Apply degree interpolation polynomial for the set of values given in Example 11.2.15, to

estimate the value of by taking

Also, find approximate value of


Solution: Note that is closer to the values lying in the beginning of tabular values,
while is towards the end of tabular values. Therefore, we shall use forward difference
formula for and the backward difference formula for Recall that the
interpolating polynomial of degree is given by

Therefore,

1. for and we have This gives,

2.

3. for and we have This gives,


4.

Note: as is closer to we may expect estimate calculated using

to be a better approximation.

5. for we use the backward interpolating polynomial, which gives,

Therefore, taking and we have

This gives,

What is central difference?

Sol. In this system δ is called the central difference operator

and

Higher central differences are defined as


Shift and average operator.

Shift operator E. It is the operation of increasing the argument by h so that E f(x) = f(x + h).

Prove Newton’s forward interpolation formula.


It is called Newton's forward interpolation formula, which contains y0 and the forward differences of y0. This formula is used to interpolate values near the start of the table.
Prove Newton’s Backward interpolation formula.

It is called Newton's backward interpolation formula, which contains yn and the backward differences of yn.

This formula is used for interpolating the values of y near the end of the table.

If the tabulated function is an nth degree polynomial, then Δⁿy0 will be constant. Hence the nth divided difference will also be constant.
Gauss's and Stirling's Formulas
In case of equidistant tabular points a convenient form for interpolating polynomial can be derived from
Lagrange's interpolating polynomial. The process involves renaming or re-designating the tabular points.
We illustrate it by deriving the interpolating formula for 6 tabular points. This can be generalized for
more number of points. Let the given tabular points be

These six points in


the given order are not equidistant. We re-designate them for the sake of convenience as follows:
These re-designated tabular points in
their given order are equidistant. Now recall from remark (12.3.3) that Lagrange's interpolating
polynomial can also be written as :

which on using the re-designated points give:

Now note that the points and are equidistant and the divided difference are
independent of the order of their arguments. Thus, we have
where for Now using the above relations and the transformation

we get

Thus we get the following form of interpolating polynomial

(12.4.1)

Similarly using the tabular points

and the re-

designating them, as and we get another form of interpolating polynomial


as follows:
(12.4.2)

Now taking the average of the two interpolating polynomials (12.4.1) and (12.4.2) (called GAUSS'S FIRST AND SECOND INTERPOLATING FORMULAS, respectively), we obtain Stirling's formula of interpolation:

(12.4.3)

These are very helpful when, the point of interpolation lies near the middle of the interpolating interval.
In this case one usually writes the diagonal form of the difference table.

EXAMPLE 12.4.1 Using the following data, find by Stirling's formula, the value of at

0.20 0.21 0.22 0.23 0.24

1.37638 1.28919 1.20879 1.13427 1.06489

Here the point lies near the central tabular point Thus , we define

to get the difference table in


diagonal form as:
(here,

an

d etc.).

Using Stirling's formula with we get as follows:

Note that tabulated value of at is 1.1708496.


Newton's Divided Differences

As we have seen, the interpolating polynomial is unique. However, there are forms other than the
Lagrange form which are more useful in some cases. We start by writing our interpolating
polynomial in the form

When we impose the interpolating conditions, we find successively

The advantage of this procedure over the Lagrange method is that we can increase the degree of p from

n to n+1 simply by adding the extra term . This


already interpolates at the original n+1 points, since the added term vanishes at all these points, and we
can find an+1 from the interpolation condition at xn+1 (compare this with the work involved in increasing
the degree using the Lagrange formulation, where every term would be altered).

Because of the uniqueness of the interpolating polynomial, we can equate the coefficient of xn in
the two forms to obtain

This is true whatever the degree n of the interpolating polynomial, and so gives an expression for
the coefficients in the Newton formula of any degree. This shows that an depends only on
which motivates the following notation:
which is called the nth divided difference of f. We note that this is independent of the order of the
points.

Neither of the above forms is very convenient for evaluating divided differences, so we derive an
alternative and more practical form. Since the order of points does not matter, we can write down
the same interpolating polynomial with the order of the points reversed:

Now equate the coefficients of xn and xn-1 in this and the previous form to obtain

using the first result. We find from this that

or using our new divided difference notation,

This formula is true for any value of n since the divided differences are the same for any degree of
interpolating polynomial.

The interpolating polynomial may now be written as

and is called Newton's Divided Difference Polynomial of degree n.
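A minimal Python sketch of the divided-difference construction. The data here are hypothetical samples of the cubic x³ - 2x + 1, chosen so that (as in the worked table below) the interpolating polynomial reproduces a cubic exactly.

def divided_differences(xs, ys):
    """Return the coefficients f[x0], f[x0,x1], ..., f[x0,...,xn] of Newton's form."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):   # work upwards so lower entries stay usable
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_divided_poly(xs, ys, x):
    """Evaluate the Newton divided-difference interpolating polynomial at x."""
    coef = divided_differences(xs, ys)
    value, term = coef[0], 1.0
    for k in range(1, len(xs)):
        term *= (x - xs[k - 1])
        value += coef[k] * term
    return value

xs = [-2, -1, 1, 2, 3]                       # hypothetical tabular points
ys = [x**3 - 2*x + 1 for x in xs]
print(newton_divided_poly(xs, ys, 0))        # 1.0, because the underlying cubic gives f(0) = 1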

The divided differences are conveniently evaluated within a table, shown in symbolic form in the table below. Notice that the table is arranged so that the function values required at each stage are adjacent.
Table: Divided Difference Table

Table: Divided Difference Table

Table shows a numerical example; the table would normally look more compact because
the column headings would be omitted. Using the numbers in the table, we find the interpolating
polynomial of degree 4 as
It has actually turned out to be a cubic, since the given data was actually obtained from a cubic.

Suppose that we now wanted to add an extra point f(0) = -13; rather than insert this point in its place between x = -1 and x = 1, we simply add it onto the end of the table, calling it x5. We find the new entries f[x4, x5], f[x3, x4, x5], f[x2, x3, x4, x5], f[x1, x2, x3, x4, x5] and f[x0, x1, x2, x3, x4, x5]. The interpolating polynomial of degree 5 is now the previous polynomial with an extra term added:
Unit-4

Numerical Differentiation

Numerical differentiation is the process of finding the numerical value of a derivative of a given
function at a given point. In general, numerical differentiation is more difficult than numerical
integration. This is because while numerical integration requires only good continuity properties
of the function being integrated, numerical differentiation requires more complicated properties
such as Lipschitz classes. Numerical differentiation is implemented as ND[f, x, x0, Scale ->
scale] in the Mathematica package NumericalCalculus` .

There are many applications where derivatives need to be computed numerically. The simplest approach simply uses the definition of the derivative,

f'(x) ≈ (f(x + h) - f(x)) / h,

for some small numerical value of h.
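For instance, a minimal Python sketch of this difference-quotient approach (with a central difference added for comparison, since it is typically more accurate):

import math

def forward_diff(f, x, h=1e-6):
    """Forward difference: (f(x+h) - f(x)) / h, error of order h."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    """Central difference: (f(x+h) - f(x-h)) / (2h), error of order h^2."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(forward_diff(math.sin, 1.0))   # ~0.5403, compare with cos(1) = 0.54030...
print(central_diff(math.sin, 1.0))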


Numerical integration

There are two main reasons for you to need to do numerical integration: analytical integration may be impossible or
infeasible, or you may wish to integrate tabulated data rather than known functions. In this section we outline the
main approaches to numerical integration. Which is preferable depends in part on the results required, and in part on
the function or data to be integrated.

Manual method

If you were to perform the integration by hand, one approach is to superimpose a grid on a graph of the function to
be integrated, and simply count the squares, counting only those covered by 50% or more of the function. Provided
the grid is sufficiently fine, a reasonably accurate estimate may be obtained. Figure 11 demonstrates how this may
be achieved.
Trapezium rule

Consider the Taylor Series expansion integrated from x0 to x0+Δx:

(46)

The approximation represented by 1/2 [f(x0) + f(x0+Δx)] Δx is called the Trapezium Rule, based on its geometric interpretation.

As we can see from equation (46), the error in the Trapezium Rule is proportional to Δx³. Thus, if we were to halve Δx, the error would be decreased by a factor of eight. However, the size of the domain would be halved, thus requiring the Trapezium Rule to be evaluated twice and the contributions summed. The net result is the error decreasing by a factor of four rather than eight. The Trapezium Rule used in this manner is sometimes termed the Compound Trapezium Rule, but more often simply the Trapezium Rule. In general it consists of the sum of integrations over a smaller distance Δx to obtain a smaller error.

Suppose we need to integrate from x0 to x1. We shall subdivide this interval into n steps of size Δx = (x1 - x0)/n. The Compound Trapezium Rule approximation to the integral is therefore

(47)

While the error for each step is O(Δx³), the cumulative error is n times this, or O(Δx²) ~ O(n⁻²).

The above analysis assumes Δx is constant over the interval being integrated. This is not necessary, and an extension to this procedure to utilise a smaller step size Δxi in regions of high curvature would reduce the total error in the calculation, although it would remain O(Δx²). We would choose to reduce Δx in the regions of high curvature, as we can see from equation (46) that the leading order truncation error is scaled by f''.
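A minimal Python sketch of the Compound Trapezium Rule (47). The test integral of sin x from 0 to π (exact value 2) is our own illustrative choice; doubling n (halving Δx) should reduce the error by roughly a factor of four, as argued above.

import math

def trapezium(f, x0, x1, n):
    """Compound Trapezium Rule with n equal steps of size dx = (x1 - x0)/n."""
    dx = (x1 - x0) / n
    total = 0.5 * (f(x0) + f(x1))        # end points carry weight 1/2
    for i in range(1, n):
        total += f(x0 + i * dx)          # interior points carry weight 1
    return total * dx

for n in (8, 16, 32):
    print(n, trapezium(math.sin, 0.0, math.pi, n))   # approaches the exact value 2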

5.3 Mid-point rule

A variant on the Trapezium Rule is obtained by integrating the Taylor Series from x0-Δx/2 to x0+Δx/2:

(48)

By evaluating the function f(x) at the midpoint of each interval the error may be slightly reduced relative to the
Trapezium rule (the coefficient in front of the curvature term is 1/24 for the Mid-point Rule compared with 1/12 for
the Trapezium Rule) but the method remains of the same order. Figure 14 provides a graphical interpretation of this
approach.

Again we may reduce the error when integrating the interval x0 to x1 by subdividing it into n smaller steps. This
Compound Mid-point Rule is then
(49)
,

with the graphical interpretation shown in figure 15 . The difference between the Trapezium
Rule and Mid-point Rule is greatly diminished in their compound forms. Comparison of
equations ( 47 ) and ( 49 ) show the only difference is in the phase relationship between the
points used and the domain, plus how the first and last intervals are calculated.

There are two further advantages of the Mid-point Rule over the Trapezium Rule. The first is that it requires one fewer function evaluation for a given number of subintervals, and the second is that it can be used more effectively for determining the integral near an integrable singularity.

Simpson's rule

An alternative approach to decreasing the step size Δx for the integration is to increase the accuracy of the functions used to approximate the integrand. Integrating the Taylor series over an interval 2Δx shows

(50)

Whereas the error in the Trapezium rule was O(Δx³), Simpson's rule is two orders more accurate at O(Δx⁵), giving exact integration of cubics.

To improve the accuracy when integrating over larger intervals, the interval x0 to x1 may again be subdivided into n steps. The three-point evaluation for each subinterval requires that there are an even number of subintervals. Hence we must be able to express the number of intervals as n = 2m. The Compound Simpson's rule is then

(51)

and the corresponding error is O(nΔx⁵) or O(Δx⁴).
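A minimal Python sketch of the Compound Simpson's Rule (51), again on the illustrative integral of sin x from 0 to π (exact value 2):

import math

def simpson(f, x0, x1, n):
    """Compound Simpson's Rule; n must be even (n = 2m subintervals)."""
    if n % 2:
        raise ValueError("n must be even")
    dx = (x1 - x0) / n
    total = f(x0) + f(x1)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(x0 + i * dx)   # interior weights 4, 2, 4, ..., 4
    return total * dx / 3

print(simpson(math.sin, 0.0, math.pi, 8))   # very close to 2; the error is O(dx^4)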

Quadratic triangulation*

Simpson's Rule may be employed in a manual way to determine the integral with nothing more than a ruler. The
approach is to cover the domain to be integrated with a triangle or trapezium (whichever is geometrically more
appropriate) as is shown in figure 17 . The integrand may cross the side of the trapezium (triangle) connecting the
end points. For each arc-like region so created (there are two in figure 17 ) the maximum deviation (indicated by
arrows in figure 17 ) from the line should be measured, as should the length of the chord joining the points of
crossing. From Simpson's rule we may approximate the area between each of these arcs and the chord as

area = (2/3) × chord × maxDeviation,    (52)

remembering that some increase the area while others decrease it relative to the initial trapezoidal (triangular) estimate. The overall estimate (ignoring linear measurement errors) will be O(l⁵), where l is the length of the (longest) chord.
Newton-Cotes Integration

The Newton-Cotes formulas, the most commonly used numerical integration methods,
approximate the integration of a complicated function by replacing the function with many
polynomials across the integration interval. The integration of the original function can then be
obtained by summing up all polynomials whose "areas" are calculated by the weighting
coefficients and the values of the function at the nodal points.

Trapezoidal Rule:      ∫ f(x) dx ≈ (h/2) (f1 + f2)

Simpson's Rule:        ∫ f(x) dx ≈ (h/3) (f1 + 4 f2 + f3)

Simpson's 3/8 Rule:    ∫ f(x) dx ≈ (3h/8) (f1 + 3 f2 + 3 f3 + f4)

Boole's Rule:          ∫ f(x) dx ≈ (2h/45) (7 f1 + 32 f2 + 12 f3 + 32 f4 + 7 f5)

where f1, f2, ... are the values of the function at equally spaced nodes a distance h apart, and each integral is taken over the corresponding span of nodes.
Unit-5
The Euler method

For more details on this topic, see Euler method.

A brief explanation: From any point on a curve, you can find an approximation of a nearby point
on the curve by moving a short distance along a line tangent to the curve.

Rigorous development: Starting with the differential equation (1), we replace the derivative y' by the finite difference approximation

y'(t) ≈ (y(t + h) - y(t)) / h,    (2)

which when re-arranged yields the following formula

y(t + h) ≈ y(t) + h y'(t),

and using (1) gives:

y(t + h) ≈ y(t) + h f(t, y(t)).    (3)

This formula is usually applied in the following way. We choose a step size h, and we construct the sequence t0, t1 = t0 + h, t2 = t0 + 2h, … We denote by yn a numerical estimate of the exact solution y(tn). Motivated by (3), we compute these estimates by the following recursive scheme

yn+1 = yn + h f(tn, yn).    (4)

This is the Euler method (or forward Euler method, in contrast with the backward Euler method,
to be described below). The method is named after Leonhard Euler who described it in 1768.

The Euler method is an example of an explicit method. This means that the new value yn+1 is
defined in terms of things that are already known, like yn.
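A minimal Python sketch of the forward Euler scheme (4). The test problem y' = y, y(0) = 1 (exact solution e^t) is our own illustrative choice.

def euler(f, t0, y0, h, steps):
    """Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(steps):
        y = y + h * f(t, y)       # explicit update from already-known values
        t = t + h
        values.append((t, y))
    return values

for t, y in euler(lambda t, y: y, 0.0, 1.0, 0.25, 4):
    print(t, y)                   # ends at y(1) ~ 2.441, versus the exact e = 2.718...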

The backward Euler method

For more details on this topic, see Backward Euler method.

If, instead of (2), we use the approximation

y'(t) ≈ (y(t) - y(t - h)) / h,

we get the backward Euler method:

yn+1 = yn + h f(tn+1, yn+1).    (6)

The backward Euler method is an implicit method, meaning that we have to solve an equation to
find yn+1. One often uses fixed point iteration or (some modification of) the Newton–Raphson
method to achieve this. Of course, it costs time to solve this equation; this cost must be taken into
consideration when one selects the method to use. The advantage of implicit methods such as (6)
is that they are usually more stable for solving a stiff equation, meaning that a larger step size h
can be used.
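A minimal Python sketch of the backward Euler scheme (6), using simple fixed-point iteration to solve the implicit equation at each step. The decay problem y' = -5y is an illustrative choice; for large h·λ a Newton solve would be preferable, since fixed-point iteration may then fail to converge.

def backward_euler(f, t0, y0, h, steps, inner_iters=20):
    """Backward Euler: y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}), solved by fixed-point iteration."""
    t, y = t0, y0
    for _ in range(steps):
        t_next = t + h
        y_next = y                         # initial guess for the implicit equation
        for _ in range(inner_iters):
            y_next = y + h * f(t_next, y_next)
        t, y = t_next, y_next
    return y

print(backward_euler(lambda t, y: -5.0 * y, 0.0, 1.0, 0.1, 10))   # stable, positive estimate of y(1)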

The exponential Euler method

If the differential equation is of the form

then an approximate explicit solution can be given by

This method is commonly employed in neural simulations and it is the default integrator in the
GENESIS neural simulator.[1]

Generalizations

The Euler method is often not accurate enough. In more precise terms, it only has order one (the
concept of order is explained below). This caused mathematicians to look for higher-order
methods.

One possibility is to use not only the previously computed value yn to determine yn+1, but to make
the solution depend on more past values. This yields a so-called multistep method. Perhaps the
simplest is the Leapfrog method which is second order and (roughly speaking) relies on two time
values.

Almost all practical multistep methods fall within the family of linear multistep methods, which
have the form
Another possibility is to use more points in the interval [tn,tn+1]. This leads to the family of
Runge–Kutta methods, named after Carl Runge and Martin Kutta. One of their fourth-order
methods is especially popular.

Advanced features

A good implementation of one of these methods for solving an ODE entails more than the time-
stepping formula.

It is often inefficient to use the same step size all the time, so variable step-size methods have
been developed. Usually, the step size is chosen such that the (local) error per step is below some
tolerance level. This means that the methods must also compute an error indicator, an estimate
of the local error.

An extension of this idea is to choose dynamically between different methods of different orders
(this is called a variable order method). Methods based on Richardson extrapolation, such as the
Bulirsch–Stoer algorithm, are often used to construct various methods of different orders.

Other desirable features include:

 dense output: cheap numerical approximations for the whole integration interval, and not only
at the points t0, t1, t2, ...
 event location: finding the times where, say, a particular function vanishes. This typically
requires the use of a root-finding algorithm.
 support for parallel computing.
 when used for integrating with respect to time, time reversibility

Alternative methods

Many methods do not fall within the framework discussed here. Some classes of alternative
methods are:

 multiderivative methods, which use not only the function f but also its derivatives. This class
includes Hermite–Obreschkoff methods and Fehlberg methods, as well as methods like the
Parker–Sochacki method or Bychkov-Scherbakov method, which compute the coefficients of the
Taylor series of the solution y recursively.
 methods for second order ODEs. We said that all higher-order ODEs can be transformed to first-
order ODEs of the form (1). While this is certainly true, it may not be the best way to proceed. In
particular, Nyström methods work directly with second-order equations.
 geometric integration methods are especially designed for special classes of ODEs (e.g.,
symplectic integrators for the solution of Hamiltonian equations). They take care that the
numerical solution respects the underlying structure or geometry of these classes.
Analysis

Numerical analysis is not only the design of numerical methods, but also their analysis. Three
central concepts in this analysis are:

 convergence: whether the method approximates the solution,


 order: how well it approximates the solution, and
 stability: whether errors are damped out.

Convergence

A numerical method is said to be convergent if the numerical solution approaches the exact
solution as the step size h goes to 0. More precisely, we require that for every ODE (1) with a
Lipschitz function f and every t* > 0,

All the methods mentioned above are convergent. In fact, a numerical scheme has to be
convergent to be of any use.

Consistency and order

For more details on this topic, see Truncation error (numerical integration).

Suppose the numerical method is

The local (truncation) error of the method is the error committed by one step of the method.
That is, it is the difference between the result given by the method, assuming that no error was
made in earlier steps, and the exact solution:

The method is said to be consistent if

The method has order if


Hence a method is consistent if it has an order greater than 0. The (forward) Euler method (4)
and the backward Euler method (6) introduced above both have order 1, so they are consistent.
Most methods being used in practice attain higher order. Consistency is a necessary condition for
convergence, but not sufficient; for a method to be convergent, it must be both consistent and
zero-stable.

A related concept is the global (truncation) error, the error sustained in all the steps one needs to
reach a fixed time t. Explicitly, the global error at time t is yN − y(t) where N = (t−t0)/h. The
global error of a pth order one-step method is O(hp); in particular, such a method is convergent.
This statement is not necessarily true for multi-step methods.

The Runge-Kutta Method

In the last lab you learned to use Heun's Method to generate a numerical solution to an initial
value problem of the form:

y′ = f(x, y)

y(xo) = yo

We discussed the fact that Heun's Method was an improvement over the rather simple Euler
Method, and that though it uses Euler's method as a basis, it goes beyond it, attempting to
compensate for the Euler Method's failure to take the curvature of the solution curve into
account. Heun's Method is one of the simplest of a class of methods called predictor-corrector
algorithms. In this lab we will address one of the most powerful predictor-corrector algorithms
of all—one which is so accurate, that most computer packages designed to find numerical
solutions for differential equations will use it by default—the fourth order Runge-Kutta
Method. (For simplicity of language we will refer to the method as simply the Runge-Kutta
Method in this lab, but you should be aware that Runge-Kutta methods are actually a general
class of algorithms, the fourth order method being the most popular.)

The Runge-Kutta algorithm may be very crudely described as "Heun's Method on steroids." It
takes to extremes the idea of correcting the predicted value of the next solution point in the
numerical solution. (It should be noted here that the actual, formal derivation of the Runge-Kutta
Method will not be covered in this course. The calculations involved are complicated, and
rightly belong in a more advanced course in differential equations, or numerical methods.)
Without further ado, using the same notation as in the previous two labs, here is a summary of
the method:
xn+1 = xn + h

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

where

k1 = h f(xn, yn)

k2 = h f(xn + h/2, yn + k1/2)

k3 = h f(xn + h/2, yn + k2/2)

k4 = h f(xn + h, yn + k3)

Even though we do not plan on deriving these formulas formally, it is valuable to understand the
geometric reasoning that supports them. Let's briefly discuss the components in the algorithm
above.

First we note that, just as with the previous two methods, the Runge-Kutta method iterates the x-
values by simply adding a fixed step-size of h at each iteration.

The y-iteration formula is far more interesting. It is a weighted average of four values—k1, k2, k3,
and k4. Visualize distributing the factor of 1/6 from the front of the sum. Doing this we see that
k1 and k4 are given a weight of 1/6 in the weighted average, whereas k2 and k3 are weighted 1/3,
or twice as heavily as k1 and k4. (As usual with a weighted average, the sum of the weights 1/6,
1/3, 1/3 and 1/6 is 1.) So what are these ki values that are being used in the weighted average?

k1 you may recognize, as we've used this same quantity on both of the previous labs. This
quantity, h f(xn, yn), is simply Euler's prediction for what we've previously called Δy—the
vertical jump from the current point to the next Euler-predicted point along the numerical
solution.

k2 we have never seen before. Notice the x-value at which it is evaluating the function f. xn + h/2
lies halfway across the prediction interval. What about the y-value that is coupled with it?
yn + k1/2 is the current y-value plus half of the Euler-predicted Δy that we just discussed as being
the meaning of k1. So this too is a halfway value, this time vertically halfway up from the current
point to the Euler-predicted next point. To summarize, then, the function f is being evaluated at a
point that lies halfway between the current point and the Euler-predicted next point. Recalling
that the function f gives us the slope of the solution curve, we can see that evaluating it at the
halfway point just described, i.e. f(xn + h/2, yn + k1/2), gives us an estimate of the slope of the
solution curve at this halfway point. Multiplying this slope by h, just as with the Euler Method
before, produces a prediction of the y-jump made by the actual solution across the whole width
of the interval, only this time the predicted jump is not based on the slope of the solution at the
left end of the interval, but on the estimated slope halfway to the Euler-predicted next point.
Whew! Maybe that could use a second reading for it to sink in!

k3 has a formula which is quite similar to that of k2, except that where k1 used to be, there is now
a k2. Essentially, the f-value here is yet another estimate of the slope of the solution at the
"midpoint" of the prediction interval. This time, however, the y-value of the midpoint is not
based on Euler's prediction, but on the y-jump predicted already with k2. Once again, this slope-
estimate is multiplied by h, giving us yet another estimate of the y-jump made by the actual
solution across the whole width of the interval.

k4 evaluates f at xn + h, which is at the extreme right of the prediction interval. The y-value
coupled with this, yn + k3, is an estimate of the y-value at the right end of the interval, based on
the y-jump just predicted by k3. The f-value thus found is once again multiplied by h, just as with
the three previous ki, giving us a final estimate of the y-jump made by the actual solution across
the whole width of the interval.

In summary, then, each of the ki gives us an estimate of the size of the y-jump made by the actual
solution across the whole width of the interval. The first one uses Euler's Method, the next two
use estimates of the slope of the solution at the midpoint, and the last one uses an estimate of the
slope at the right end-point. Each ki uses the earlier ki as a basis for its prediction of the y-jump.

This means that the Runge-Kutta formula for yn+1, namely:

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

is simply the y-value of the current point plus a weighted average of four different y-jump
estimates for the interval, with the estimates based on the slope at the midpoint being weighted
twice as heavily as those using the slope at the end-points.
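A minimal Python sketch of one RK4 step, using the ki formulas summarised above. The test problem y' = y, y(0) = 1 with h = 0.25 is an illustrative choice; four steps already give y(1) ≈ 2.7182, very close to e.

def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

x, y, h = 0.0, 1.0, 0.25
for _ in range(4):
    y = rk4_step(lambda x, y: y, x, y, h)
    x = x + h
print(x, y)          # 1.0  ~2.7182, compared with the exact e = 2.71828...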

As we have just seen, the Runge-Kutta algorithm is a little hard to follow even when one only
considers it from a geometric point of view. In reality the formula was not originally derived in
this fashion, but with a purely analytical approach. After all, among other things, our geometric
"explanation" doesn't even account for the weights that were used. If you're feeling ambitious, a
little research through a decent mathematics library should yield a detailed analysis of the
derivation of the method.

Common fourth-order Runge–Kutta method

One member of the family of Runge–Kutta methods is so commonly used that it is often referred
to as "RK4", "classical Runge–Kutta method" or simply as "the Runge–Kutta method".

Let an initial value problem be specified as follows.


In words, what this means is that the rate at which y changes is a function of y itself and of t
(time). At the start, time is t0 and y is y0. In the equation, y may be a scalar or a vector.

The RK4 method for this problem is given by the following equations:

where is the RK4 approximation of , and

[1]

(Note: the above equations have different but equivalent definitions in different texts).[2]

Thus, the next value ( ) is determined by the present value ( ) plus the weighted average
of four increments, where each increment is the product of the size of the interval, h, and an
estimated slope specified by function f on the right-hand side of the differential equation.

 is the increment based on the slope at the beginning of the interval, using , (Euler's
method) ;
 is the increment based on the slope at the midpoint of the interval, using ;
 is again the increment based on the slope at the midpoint, but now using ;
 is the increment based on the slope at the end of the interval, using .

In averaging the four increments, greater weight is given to the increments at the midpoint. The
weights are chosen such that if f is independent of y, so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule.[3]

The RK4 method is a fourth-order method, meaning that the error per step is on the order of h⁵, while the total accumulated error has order h⁴.

Explicit Runge–Kutta methods

The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned
above. It is given by
where

[4]

(Note: the above equations have different but equivalent definitions in different texts).[2]

To specify a particular method, one needs to provide the integer s (the number of stages), and the
coefficients aij (for 1 ≤ j < i ≤ s), bi (for i = 1, 2, ..., s) and ci (for i = 2, 3, ..., s). The matrix [aij] is
called the Runge–Kutta matrix, while the bi and ci are known as the weights and the nodes.[5]
These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John
C. Butcher):

The Runge–Kutta method is consistent if

There are also accompanying requirements if we require the method to have a certain order p,
meaning that the local truncation error is O(hp+1). These can be derived from the definition of the
truncation error itself. For example, a 2-stage method has order 2 if b1 + b2 = 1, b2c2 = 1/2, and
a21 = c2.[6]
Examples

The RK4 method falls in this framework. Its tableau is:[7]

0    |
1/2  |  1/2
1/2  |  0     1/2
1    |  0     0     1
-----+------------------------
     |  1/6   1/3   1/3   1/6

However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula yn+1 = yn + h f(tn, yn). This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is:

0  |
---+----
   |  1

Second-order methods with two stages

An example of a second-order method with two stages is provided by the midpoint method

The corresponding tableau is:

0    |
1/2  |  1/2
-----+-----------
     |  0     1

The midpoint method is not the only second-order Runge–Kutta method with two stages. In fact,
there is a family of such methods, parameterized by α, and given by the formula
[8]

Its Butcher tableau is

In this family, α = 1/2 gives the midpoint method and α = 1 is Heun's method.[3]

Usage

As an example, consider the two-stage second-order Runge–Kutta method with α = 2/3. It is


given by the tableau

0    |
2/3  |  2/3
-----+------------
     |  1/4   3/4

with the corresponding equations

This method is used to solve the initial-value problem

with step size h = 0.025, so the method needs to take four steps.

The method proceeds as follows:


The numerical solutions correspond to the underlined values.

Adaptive Runge–Kutta methods

The adaptive methods are designed to produce an estimate of the local truncation error of a
single Runge–Kutta step. This is done by having two methods in the tableau, one with order p and one with order p − 1.

The lower-order step is given by

where the ki are the same as for the higher-order method. Then the error is the difference between the two estimates, which is O(h^p). The Butcher tableau for this kind of method is extended to give the values of the lower-order weights b*i:

The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher
tableau is:

0     |
1/4   |  1/4
3/8   |  3/32        9/32
12/13 |  1932/2197   −7200/2197   7296/2197
1     |  439/216     −8           3680/513     −845/4104
1/2   |  −8/27       2            −3544/2565   1859/4104    −11/40
------+----------------------------------------------------------------------
      |  16/135      0            6656/12825   28561/56430  −9/50    2/55
      |  25/216      0            1408/2565    2197/4104    −1/5     0

However, the simplest adaptive Runge–Kutta method involves combining the Heun method,
which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is:

0  |
1  |  1
---+------------
   |  1/2   1/2
   |  1     0

The error estimate is used to control the stepsize.

Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the
Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4).

Implicit Runge–Kutta methods

All Runge–Kutta methods mentioned up to now are explicit methods. Unfortunately, explicit
Runge–Kutta methods are generally unsuitable for the solution of stiff equations because their
region of absolute stability is small; in particular, it is bounded.[9] This issue is especially
important in the solution of partial differential equations.

The instability of explicit Runge–Kutta methods motivates the development of implicit methods.
An implicit Runge–Kutta method has the form

    y_{n+1} = y_n + h ( b1 k1 + b2 k2 + ... + bs ks ),

where

    ki = f( t_n + ci h, y_n + h ( ai1 k1 + ai2 k2 + ... + ais ks ) ),   i = 1, 2, ..., s.[10]

The difference with an explicit method is that in an explicit method the sum over j only goes up
to i − 1. This also shows up in the Butcher tableau: for an implicit method, the coefficient matrix
is not necessarily lower triangular:[7]

c1  | a11   a12   ...   a1s
c2  | a21   a22   ...   a2s
... | ...   ...         ...
cs  | as1   as2   ...   ass
----+------------------------------
    | b1    b2    ...   bs

The consequence of this difference is that at every step, a system of algebraic equations has to be
solved. This increases the computational cost considerably. If a method with s stages is used to
solve a differential equation with m components, then the system of algebraic equations has ms
components. In contrast, an implicit s-step linear multistep method needs to solve a system of
algebraic equations with only m components.[11]

Examples

The simplest example of an implicit Runge–Kutta method is the backward Euler method:

    y_{n+1} = y_n + h f( t_n + h, y_{n+1} ).

The Butcher tableau for this is simply:

1  | 1
---+----
   | 1

This Butcher tableau corresponds to the formulas

    k1 = f( t_n + h, y_n + h k1 )   and   y_{n+1} = y_n + h k1,

which can be re-arranged to get the formula for the backward Euler method listed above.
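To make the implicit solve concrete, the sketch below performs one backward Euler step by solving the stage equation k1 = f(t_n + h, y_n + h k1) with a simple fixed-point iteration; for genuinely stiff problems a Newton iteration would normally replace this loop. The function names, tolerance and test problem are choices made for this sketch.

# Sketch: one backward Euler step, with the implicit stage equation
#   k1 = f(t + h, y + h*k1)
# solved by fixed-point iteration.
import math

def backward_euler_step(f, t, y, h, iters=50, tol=1e-12):
    k1 = f(t, y)                      # explicit Euler slope as a starting guess
    for _ in range(iters):
        k1_new = f(t + h, y + h * k1)
        if abs(k1_new - k1) < tol:
            k1 = k1_new
            break
        k1 = k1_new
    return y + h * k1                 # y_{n+1} = y_n + h*k1

# Example on the moderately stiff test problem y' = -50*(y - cos(t)).
f = lambda t, y: -50.0 * (y - math.cos(t))
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = backward_euler_step(f, t, y, h)
    t += h
print(t, y)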

Another example of an implicit Runge–Kutta method is the trapezoidal rule. Its Butcher tableau
is:

0  | 0     0
1  | 1/2   1/2
---+------------
   | 1/2   1/2
The trapezoidal rule is a collocation method (as discussed in that article). All collocation
methods are implicit Runge–Kutta methods, but not all implicit Runge–Kutta methods are
collocation methods.[12]

The Gauss–Legendre methods form a family of collocation methods based on Gauss quadrature.
A Gauss–Legendre method with s stages has order 2s (thus, methods with arbitrarily high order
can be constructed).[13] The method with two stages (and thus order four) has Butcher tableau:

1/2 − √3/6  | 1/4           1/4 − √3/6
1/2 + √3/6  | 1/4 + √3/6    1/4
------------+------------------------------
            | 1/2           1/2
[11]

Stability
The advantage of implicit Runge–Kutta methods over explicit ones is their greater stability,
especially when applied to stiff equations. Consider the linear test equation y' = λy. A Runge–
Kutta method applied to this equation reduces to the iteration y_{n+1} = r(hλ) y_n, with r given
by

    r(z) = 1 + z b^T (I − zA)^(−1) e = det( I − zA + z e b^T ) / det( I − zA ),[14]

where e stands for the vector of ones. The function r is called the stability function.[15] It follows
from the formula that r is the quotient of two polynomials of degree s if the method has s stages.
Explicit methods have a strictly lower triangular matrix A, which implies that det(I − zA) = 1 and
that the stability function is a polynomial.[16]

The numerical solution to the linear test equation decays to zero if | r(z) | < 1 with z = hλ. The set
of such z is called the domain of absolute stability. In particular, the method is said to be A-stable
if all z with Re(z) < 0 are in the domain of absolute stability. The stability function of an explicit
Runge–Kutta method is a polynomial, so explicit Runge–Kutta methods can never be A-
stable.[16]
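The stability function can be evaluated directly from the tableau. The sketch below (illustrative code; the sampled points are arbitrary choices) computes r(z) = 1 + z b^T (I − zA)^(−1) e for the classical RK4 coefficients and shows |r(z)| exceeding 1 for sufficiently negative real z, reflecting the bounded stability region just described.

# Sketch: evaluating the stability function r(z) = 1 + z * b^T (I - zA)^{-1} e
# for the classical RK4 tableau, along the negative real axis.
import numpy as np

A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
e = np.ones(4)

def r(z):
    return 1.0 + z * (b @ np.linalg.solve(np.eye(4) - z * A, e))

for z in [-0.5, -1.0, -2.0, -2.5, -3.0]:
    print(z, abs(r(z)))
# |r(z)| < 1 for moderate negative z but exceeds 1 near z = -2.8,
# illustrating that the region of absolute stability is bounded.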

If the method has order p, then the stability function satisfies r(z) = e^z + O(z^(p+1)) as z → 0.
Thus, it is of interest to study quotients of polynomials of given degrees that approximate
the exponential function the best. These are known as Padé approximants. A Padé approximant
with numerator of degree m and denominator of degree n is A-stable if and only if m ≤ n ≤ m +
2.[17]

The Gauss–Legendre method with s stages has order 2s, so its stability function is the Padé
approximant with m = n = s. It follows that the method is A-stable.[18] This shows that A-stable
Runge–Kutta methods can have arbitrarily high order. In contrast, the order of A-stable linear multistep
methods cannot exceed two.[19]

Derivation of the Runge–Kutta fourth order method

In general a Runge–Kutta method with s stages can be written as

    y_{t+h} = y_t + h ( b1 k1 + b2 k2 + ... + bs ks ),

where the ki are increments obtained by evaluating the derivative of y at intermediate points of
the interval [t, t + h].

We develop the derivation[20] for the Runge–Kutta fourth-order method using the general formula
with s = 4, where the increments are evaluated, as explained above, at the starting point, the
midpoint and the end point of any interval [t, t + h]; the coefficients are chosen accordingly, as
in the RK4 tableau above. We begin by defining the following quantities:

If we define:

and from the previous relations we can show that the following equalities hold to the required order in h:
where

    ḟ = df/dt = ∂f/∂t + (∂f/∂y) ẏ

is the total derivative of f with respect to time.

If we now express the general formula using what we just derived, we obtain:
and comparing this with the Taylor series of y_{t+h} around t:

we obtain a system of constraints on the coefficients; solving it gives the RK4 coefficients stated above.

Predictor–corrector method

In mathematics, particularly numerical analysis, a predictor–corrector method is an algorithm
that proceeds in two steps. First, the prediction step calculates a rough approximation of the
desired quantity. Second, the corrector step refines the initial approximation using another
means, typically an implicit method.

Predictor–corrector methods for solving ODEs

When considering the numerical solution of ordinary differential equations (ODEs), a predictor–
corrector method typically uses an explicit method for the predictor step and an implicit method
for the corrector step.

Example: Euler method with the trapezoidal rule

A simple predictor–corrector method can be constructed from the Euler method (an explicit
method) and the trapezoidal rule (an implicit method).

Consider the differential equation

    y' = f(t, y),   y(t0) = y0,

and denote the step size by h.

First, the predictor step: starting from the current value y_n, calculate an initial guess value
ỹ_{n+1} via the Euler method,

    ỹ_{n+1} = y_n + h f(t_n, y_n).

Next, the corrector step: improve the initial guess using the trapezoidal rule,

    y_{n+1} = y_n + (h/2) ( f(t_n, y_n) + f(t_{n+1}, ỹ_{n+1}) ).

That value y_{n+1} is used as the starting value for the next step.
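A minimal sketch of this Euler-predictor / trapezoidal-corrector step in code (the test problem y' = −2y and the step count are choices made for this sketch, not part of the original text):

# Sketch: one step of the Euler (predictor) / trapezoidal rule (corrector) pair.
import math

def predictor_corrector_step(f, t, y, h):
    f_n = f(t, y)                               # evaluate at the current point
    y_pred = y + h * f_n                        # predict (explicit Euler)
    f_pred = f(t + h, y_pred)                   # evaluate at the predicted value
    return y + h / 2 * (f_n + f_pred)           # correct (trapezoidal rule)

# Example: y' = -2y, y(0) = 1 on [0, 1] with h = 0.1.
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = predictor_corrector_step(lambda t, y: -2.0 * y, t, y, h)
    t += h
print(y, math.exp(-2.0))    # the pair is second order, so the values agree closely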

PEC mode and PECE mode

There are different variants of a predictor–corrector method, depending on how often the
corrector method is applied. The Predict–Evaluate–Correct–Evaluate (PECE) mode refers to the
variant in the above example: predict ỹ_{n+1}, evaluate f(t_{n+1}, ỹ_{n+1}), correct to obtain
y_{n+1}, and then evaluate f(t_{n+1}, y_{n+1}) so that this value can be reused in the next step.

It is also possible to evaluate the function f only once per step by using the method in Predict–
Evaluate–Correct (PEC) mode: here the value f(t_{n+1}, ỹ_{n+1}) computed from the predicted
value is stored and reused in the next step, instead of re-evaluating f at the corrected value.

Additionally, the corrector step can be repeated in the hope that this achieves an even better
approximation to the true solution. If the corrector method is run twice, with an evaluation of f
after each correction, this yields the PECECE mode.

The PECEC mode has one fewer function evaluation than PECECE. More generally, if the corrector is run k
times, the method is in P(EC)^k or P(EC)^kE mode. If the corrector method is iterated until it
converges, this could be called PE(CE)∞ mode.
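These modes are easy to express in code. The sketch below (illustrative; the parameter k and the function name are assumptions of this sketch) applies the corrector k times with a re-evaluation of f after each correction, i.e. the P(EC)^k pattern described above, for the Euler/trapezoidal pair.

# Sketch: P(EC)^k mode for the Euler/trapezoidal pair; the corrector is
# applied k times, re-evaluating f after each correction.

def pec_k_step(f, t, y, h, k=2):
    f_n = f(t, y)
    y_new = y + h * f_n                     # Predict
    for _ in range(k):                      # (Evaluate, Correct) repeated k times
        f_new = f(t + h, y_new)             # Evaluate
        y_new = y + h / 2 * (f_n + f_new)   # Correct
    return y_new                            # a final Evaluate would give the value reused next step

print(pec_k_step(lambda t, y: -2.0 * y, 0.0, 1.0, 0.1, k=3))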
