
John S Butler

Numerical Methods for Differential Equations with Python

CONTENTS

i initial value problems

1 numerical solutions to initial value problems
  1.1 Numerical approximation of Differentiation
      1.1.1 Derivation of Forward Euler for one step
      1.1.2 Theorems about Ordinary Differential Equations
  1.2 One-Step Methods
      1.2.1 Euler's Method
  1.3 Problem Sheet
2 higher order methods
  2.1 Higher order Taylor Methods
3 runge–kutta method
  3.1 Derivation of Second Order Runge Kutta
      3.1.1 Runge Kutta second order: Midpoint method
      3.1.2 2nd Order Runge Kutta a0 = 0.5: Heun's method
  3.2 Third Order Runge Kutta methods
  3.3 Runge Kutta fourth order
  3.4 Butcher Tableau
      3.4.1 Heun's Method
      3.4.2 4th Order Runge Kutta
  3.5 Convergence Analysis
  3.6 The choice of method and step-size
  3.7 Problem Sheet 2
4 multi-step methods
  4.1 Derivation of an explicit multistep method
      4.1.1 General Derivation of an explicit method Adams-Bashforth
      4.1.2 Adams-Bashforth three step method
      4.1.3 Adams-Bashforth four step method
  4.2 Derivation of the implicit multi-step method
  4.3 Table of Adams methods
  4.4 Predictor-Corrector method
  4.5 Improved step-size multi-step method
  4.6 Problem Sheet 3
5 consistency, convergence and stability
  5.1 One Step Methods
  5.2 Multi-step methods
  5.3 Problem Sheet 4
  5.4 Initial Value Problem Review Questions

ii numerical solutions to boundary value problems

6 boundary value problems
  6.1 Systems of equations
  6.2 Higher order Equations
  6.3 Boundary Value Problems
  6.4 Some theorems about boundary value problems
  6.5 Shooting Methods
      6.5.1 Linear Shooting method
      6.5.2 The Shooting method for non-linear equations
  6.6 Finite Difference method

iii numerical solutions to partial differential equations

7 partial differential equations
  7.1 Introduction
  7.2 PDE Classification
  7.3 Difference Operators
8 parabolic equations
  8.1 Example Heat Equation
  8.2 An explicit method for the heat equation
  8.3 An implicit (BTCS) method for the Heat Equation
      8.3.1 Example implicit (BTCS) for the Heat Equation
  8.4 Crank-Nicolson Implicit method
      8.4.1 Example Crank-Nicolson solution of the Heat Equation
  8.5 The Theta Method
  8.6 The General Matrix form
  8.7 Derivative Boundary Conditions
      8.7.1 Example Derivative Boundary Conditions
  8.8 Local Truncation Error and Consistency
  8.9 Consistency and Compatibility
  8.10 Convergence and Stability
  8.11 Stability by the Fourier Series method (von Neumann's method)
      8.11.1 Stability for the explicit FTCS Method
      8.11.2 Stability for the implicit BTCS Method
      8.11.3 Stability for the Crank-Nicolson Method
  8.12 Parabolic Equations Questions
      8.12.1 Explicit Equations
      8.12.2 Implicit Methods
      8.12.3 Crank-Nicolson Methods
9 elliptic pdes
  9.1 The five point approximation of the Laplacian
      9.1.1 Matrix representation of the five point scheme
      9.1.2 Generalised Matrix form of the discrete Poisson Equation
  9.2 Specific Examples
      9.2.1 Example 1: Homogeneous equation with non-zero boundary
      9.2.2 Example 2: Non-homogeneous equation with zero boundary
      9.2.3 Example 3: Inhomogeneous equation with non-zero boundary
  9.3 Consistency and Convergence
  9.4 Elliptic Equations Questions
10 hyperbolic equations
  10.1 The Wave Equation
  10.2 Finite Difference Method for Hyperbolic equations
      10.2.1 Discretisation of the scalar equation
  10.3 Analysis of the Finite Difference Methods
  10.4 Consistency
  10.5 Stability
  10.6 Courant-Friedrichs-Lewy Condition
      10.6.1 von Neumann stability for the Forward Euler
      10.6.2 von Neumann stability for the Lax-Friedrichs
11 variational methods
  11.1 Ritz-Galerkin Method
  11.2 Finite Element
      11.2.1 Error bounds of Finite Element methods
ACRONYMS

IVP Initial Value Problems
BVP Boundary Value Problems
ODE Ordinary Differential Equations
PDE Partial Differential Equations
RK Runge Kutta
Part I

INITIAL VALUE PROBLEMS

1 NUMERICAL SOLUTIONS TO INITIAL VALUE PROBLEMS

Differential equations have numerous applications, describing dynamics from physics to biology to economics.

Initial value problems are a subset of Ordinary Differential Equations (ODEs) of the form

y' = f(x),   (1)

where f is a given function. The general solution to (1) is

y = \int f(x) dx + c,

containing an arbitrary constant c. In order to determine the solution uniquely it is necessary to impose an initial condition,

y(x_0) = y_0.   (2)

Example 1
Simple Example
The differential equation describes the rate of change of an oscillating input. The general solution of the equation

y' = sin(x)   (3)

is

y = -cos(x) + c.

With the initial condition

y(0) = 2,

it is easy to find c = 2. Thus the desired solution is

y = 2 - cos(x).

The more general Ordinary Differential Equation is of the form

y' = f(x, y),   (4)

and is approached in a similar fashion.


Let us consider

y' = a(x) y(x) + b(x),

where the given functions a(x) and b(x) are assumed continuous. For this equation

f(x, z) = a(x) z + b(x),

and the general solution can be found using the method of integrating factors.

Example 2
General Example
Differential equations of the form

y'(x) = λy(x) + b(x),   x ≥ x_0,   (5)

where λ is a given constant and b(x) is a continuous integrable function, have a unique analytic solution. Multiplying equation (5) by the integrating factor e^{-λx}, we can reformulate it as

d(e^{-λx} y(x))/dx = e^{-λx} b(x).

Integrating both sides from x_0 to x we obtain

e^{-λx} y(x) = c + \int_{x_0}^{x} e^{-λt} b(t) dt,

so the general solution is

y(x) = c e^{λx} + \int_{x_0}^{x} e^{λ(x-t)} b(t) dt,

with c an arbitrary constant,

c = e^{-λx_0} y(x_0).

For a great number of Initial Value Problems there is no known exact (analytic) solution, as the equations are non-linear (for example y' = e^{xy^4}), discontinuous or stochastic. Therefore a numerical method is used to approximate the solution.

1.1 numerical approximation of differentiation

1.1.1 Derivation of Forward Euler for one step

The left hand side of an initial value problem, df/dx, can be approximated by Taylor's theorem expanded about a point x_0, giving:

f(x_1) = f(x_0) + (x_1 - x_0) f'(x_0) + τ,   (6)

where τ is the truncation error,

τ = ((x_1 - x_0)^2 / 2!) f''(ξ),   ξ ∈ [x_0, x_1].   (7)

Rearranging, and letting h = x_1 - x_0, the equation becomes

f'(x_0) = (f(x_1) - f(x_0))/h - (h/2) f''(ξ).
The forward Euler method can also be derived using a variation on the Lagrange interpolation formula called the divided difference. Any function f(x) can be approximated by a polynomial P_n(x) of degree n and an error term:

f(x) = P_n(x) + error
     = f(x_0) + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1)
       + ... + f[x_0, ..., x_n] \prod_{i=0}^{n-1}(x - x_i) + error,

where

f[x_0, x_1] = (f(x_1) - f(x_0))/(x_1 - x_0),

f[x_0, x_1, x_2] = (f[x_1, x_2] - f[x_0, x_1])/(x_2 - x_0),

f[x_0, x_1, .., x_n] = (f[x_1, x_2, ..., x_n] - f[x_0, x_1, ..., x_{n-1}])/(x_n - x_0).

Differentiating P_n(x),

P_n'(x) = f[x_0, x_1] + f[x_0, x_1, x_2]{(x - x_0) + (x - x_1)}
          + ... + f[x_0, ..., x_n] \sum_{i=0}^{n-1} ((x - x_0)...(x - x_{n-1}))/(x - x_i),

and the error becomes

error = (x - x_0)...(x - x_n) f^{(n+1)}(ξ)/(n + 1)!.

Applying this to define our first derivative, we have

f'(x) = f[x_0, x_1] = (f(x_1) - f(x_0))/(x_1 - x_0).

This leads us to other formulas for computing derivatives:

f'(x) = (f(x_1) - f(x_0))/(x_1 - x_0) + O(h),   Euler (forward),

f'(x) = (f(x_1) - f(x_{-1}))/(x_1 - x_{-1}) + O(h^2),   central.

Using the same method we can get our computational estimates for the 2nd derivative:

f''(x_0) = (f_2 - 2f_1 + f_0)/h^2 + O(h^2),

f''(x_0) = (f_1 - 2f_0 + f_{-1})/h^2 + O(h^2),   central.
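These orders of accuracy are easy to check numerically. The following short sketch (not from the original text; the test function sin(x) and the point x = 1 are illustrative assumptions) compares the forward and central difference approximations: halving h roughly halves the forward error and quarters the central error.

# Sketch: compare forward and central difference errors for f(x) = sin(x).
import math

def forward_diff(f, x, h):
    # first order accurate: error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # second order accurate: error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
exact = math.cos(x)  # derivative of sin(x)
for h in [0.1, 0.05, 0.025]:
    err_f = abs(forward_diff(math.sin, x, h) - exact)
    err_c = abs(central_diff(math.sin, x, h) - exact)
    print(f"h={h:6.3f}  forward error={err_f:.2e}  central error={err_c:.2e}")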

Example 3
To numerically solve the first order Ordinary Differential Equation (4)

y' = f(x, y),   a ≤ x ≤ b,

the derivative y' is approximated by

(w_{i+1} - w_i)/(x_{i+1} - x_i) = (w_{i+1} - w_i)/h,

where w_i is the numerical approximation of y at x_i. The Differential Equation is converted to a discrete difference equation with steps of size h,

(w_{i+1} - w_i)/h = f(x_i, w_i).

Rearranging the difference equation gives

w_{i+1} = w_i + h f(x_i, w_i),

which can be used to approximate the solution at w_{i+1} given information about y at the point x_i.

1.1.1.1 Simple example ODE y' = sin(x)

Example 4
Applying the Euler formula to the first order equation with an oscillating input (3),

y' = sin(x),   0 ≤ x ≤ 10,

the equation can be approximated using the forward Euler method as

(w_{i+1} - w_i)/h = sin(x_i).

Rearranging the equation gives the discrete difference equation, with the unknowns on the left and the known values on the right:

w_{i+1} = w_i + h sin(x_i).

The Python code below implements this difference equation. The output of the code is shown in Figure 1.1.1.
# Numerical solution of the sine differential equation y' = sin(x)
import numpy as np
import math
import matplotlib.pyplot as plt

h = 0.01
a = 0
b = 10

N = int((b - a) / h)
w = np.zeros(N)
x = np.zeros(N)
Analytic_Solution = np.zeros(N)

# Initial conditions
w[0] = 1.0
x[0] = 0
Analytic_Solution[0] = 1.0
for i in range(1, N):
    w[i] = w[i - 1] + h * math.sin(x[i - 1])
    x[i] = x[i - 1] + h
    Analytic_Solution[i] = 2.0 - math.cos(x[i])

fig = plt.figure(figsize=(8, 4))

# --- left hand plot: numerical solution
ax = fig.add_subplot(1, 3, 1)
plt.plot(x, w, color='red')
plt.title('Numerical Solution')

# --- middle plot: analytic solution
ax = fig.add_subplot(1, 3, 2)
plt.plot(x, Analytic_Solution, color='blue')
plt.title('Analytic Solution')

# --- right hand plot: error
ax = fig.add_subplot(1, 3, 3)
plt.plot(x, Analytic_Solution - w, color='blue')
plt.title('Error')

# --- title and layout
fig.suptitle('Sine Solution', fontsize=20)
plt.tight_layout()
plt.subplots_adjust(top=0.85)

Listing 1.1: Python Numerical and Analytical Solution of Eqn (3)

Figure 1.1.1: Python output: Numerical (left), Analytic (middle) and error (right) for y' = sin(x), Equation (3), with h = 0.01

1.1.1.2 Simple example problem population growth y' = εy

Example 5
Simple population growth can be described as a first order differential equation of the form

y' = εy.   (8)

This has an exact solution of

y = C e^{εx}.

Given the initial condition

y(0) = 1

and a rate of change of

ε = 0.5,

the analytic solution is

y = e^{0.5x}.

Example 6
Applying the Euler formula to the first order equation (8),

y' = 0.5y

is approximated by

(w_{i+1} - w_i)/h = 0.5 w_i.

Rearranging the equation gives the difference equation

w_{i+1} = w_i + h(0.5 w_i).

The Python code is below and the output is plotted in Figure 1.1.2.
# Numerical solution of the exponential growth equation y' = eps*y
import numpy as np
import math
import matplotlib.pyplot as plt

h = 0.01
eps = 0.5
a = 0
b = 10

N = int((b - a) / h)
w = np.zeros(N)
x = np.zeros(N)
Analytic_Solution = np.zeros(N)

# Initial conditions
w[0] = 1.0
x[0] = 0
Analytic_Solution[0] = 1.0

for i in range(1, N):
    w[i] = w[i - 1] + h * eps * w[i - 1]
    x[i] = x[i - 1] + h
    Analytic_Solution[i] = math.exp(eps * x[i])

fig = plt.figure(figsize=(8, 4))

# --- left hand plot: numerical solution
ax = fig.add_subplot(1, 3, 1)
plt.plot(x, w, color='red')
plt.title('Numerical Solution')

# --- middle plot: analytic solution
ax = fig.add_subplot(1, 3, 2)
plt.plot(x, Analytic_Solution, color='blue')
plt.title('Analytic Solution')

# --- right hand plot: error
ax = fig.add_subplot(1, 3, 3)
plt.plot(x, Analytic_Solution - w, color='blue')
plt.title('Error')

# --- title and layout
fig.suptitle('Exponential Growth Solution', fontsize=20)
plt.tight_layout()
plt.subplots_adjust(top=0.85)

Listing 1.2: Python Numerical and Analytical Solution of Eqn (8)

Figure 1.1.2: Python output: Numerical (left), Analytic (middle) and error (right) for y' = εy, Eqn (8), with h = 0.01 and ε = 0.5

1.1.1.3 Example of exponential growth with a wiggle

Example 7
An extension of the exponential growth differential equation includes a sinusoidal component:

y' = ε(y + y sin(x)).   (9)

This complicates the exact solution but the numerical approach is more or less the same. With ε = 0.5 the difference equation is

w_{i+1} = w_i + 0.5h(w_i + w_i sin(x_i)).

Figure 1.1.3 illustrates the numerical solution of the differential equation.

Figure 1.1.3: Python output: Numerical solution for y' = ε(y + y sin(x)), Equation (9), with h = 0.01 and ε = 0.5

1.1.2 Theorems about Ordinary Differential Equations

Definition A function f(t, y) is said to satisfy a Lipschitz Condition in the variable y on the set D ⊂ R^2 if a constant L > 0 exists with the property that

|f(t, y_1) - f(t, y_2)| ≤ L|y_1 - y_2|,

whenever (t, y_1), (t, y_2) ∈ D. The constant L is called the Lipschitz constant of f.

Definition A set D ⊂ R^2 is said to be convex if, whenever (t_1, y_1), (t_2, y_2) belong to D, the point ((1 - λ)t_1 + λt_2, (1 - λ)y_1 + λy_2) also belongs to D for each λ ∈ [0, 1].

Theorem 1.1.1. Suppose f(t, y) is defined on a convex set D ⊂ R^2. If a constant L > 0 exists with

|∂f(t, y)/∂y| ≤ L,

then f satisfies a Lipschitz Condition on D in the variable y with Lipschitz constant L.

Theorem 1.1.2. Suppose that D = {(t, y) | a ≤ t ≤ b, -∞ < y < ∞}, and that f(t, y) is continuous on D and satisfies a Lipschitz Condition on D in the variable y. Then the initial value problem has a unique solution y(t) for a ≤ t ≤ b.
Definition The initial-value problem

dy/dt = f(t, y),   a ≤ t ≤ b,

with initial condition

y(a) = α,

is said to be well-posed if:

• A unique solution y(t) to the problem exists;

• For any ε > 0 there exists a positive constant k(ε) with the property that, whenever |ε_0| < ε and |δ(t)| < ε on [a, b], a unique solution z(t) to the problem

dz/dt = f(t, z) + δ(t),   a ≤ t ≤ b,   (10)

z(a) = α + ε_0,

exists with

|z(t) - y(t)| < k(ε)ε.

The problem specified by (10) is called a perturbed problem associated with the original problem.

It assumes the possibility of an error δ(t) being introduced to the statement of the differential equation, as well as an error ε_0 being present in the initial condition.

Theorem 1.1.3. Suppose D = {(t, y) | a ≤ t ≤ b, -∞ < y < ∞}. If f(t, y) is continuous and satisfies a Lipschitz Condition in the variable y on the set D, then the initial value problem

dy/dt = f(t, y),   a ≤ t ≤ b,

with initial condition

y(a) = α,

is well-posed.

Example 8
The problem

y'(x) = -y(x) + 1,   0 ≤ x ≤ b,   y(0) = 1,

has the solution y(x) = 1. The perturbed problem

z'(x) = -z(x) + 1,   0 ≤ x ≤ b,   z(0) = 1 + ε,

has the solution z(x) = 1 + εe^{-x}, x ≥ 0. Thus

y(x) - z(x) = -εe^{-x},

|y(x) - z(x)| ≤ |ε|,   x ≥ 0.

Therefore the problem is said to be stable. This is illustrated in Figure 1.1.4.

Figure 1.1.4: Python output: Illustrating stability for y'(x) = -y(x) + 1 with the initial condition y(0) = 1, and z'(x) = -z(x) + 1 with the initial condition z(0) = 1 + ε, ε = 0.1
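A minimal sketch (not part of the original text; the interval length b = 5 is an arbitrary choice) reproducing the comparison behind Figure 1.1.4: both problems are solved with forward Euler and the gap between the two numerical solutions never exceeds ε.

# Sketch: forward Euler for y' = -y + 1, y(0) = 1, and the perturbed
# problem z' = -z + 1, z(0) = 1 + eps; the gap |y - z| stays <= eps.
import numpy as np

h, b, eps = 0.01, 5.0, 0.1
N = int(b / h)
y = np.zeros(N + 1); z = np.zeros(N + 1)
y[0], z[0] = 1.0, 1.0 + eps
for i in range(N):
    y[i + 1] = y[i] + h * (-y[i] + 1)
    z[i + 1] = z[i] + h * (-z[i] + 1)
print("max |y - z| =", np.abs(y - z).max())  # about eps, decaying in x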

1.2 one-step methods

Divide [a, b] into N subsections, so that we have N + 1 points of equal spacing h = (b - a)/N. This gives the formula t_i = a + ih for i = 0, 1, ..., N. One-Step Methods for Ordinary Differential Equations only use one previous point to get the approximation for the next point. The initial condition y(a = t_0) = α gives the starting point of our one-step method. The general formula for one-step methods is

w_{i+1} = w_i + hΦ(t_i, w_i, h),

where w_i is the approximated solution of the Ordinary Differential Equation at the point t_i:

w_i ≈ y_i.

1.2.1 Euler’s Method

The simplest example of a one-step method is the Euler method, in which the derivative is replaced by the Euler approximation. The Ordinary Differential Equation

dy/dt = f(t, y)

is discretised as

(y_i - y_{i-1})/h = f(t_{i-1}, y_{i-1}) + T,

where T is the truncation error.

Example 9
Consider the Initial Value Problem

y' = -y^2/(1 + t),   a = 0 ≤ t ≤ b = 0.5,

with the initial condition y(0) = 1. The Euler approximation is

w_{i+1} = w_i - h w_i^2/(1 + t_i),

where w_i is the approximation of y at t_i.
Solving: let t_i = ih with h = 0.05. From the initial condition we have w_0 = 1, and at i = 0 our method gives

w_1 = w_0 - 0.05 w_0^2/(1 + t_0) = 1 - 0.05/(1 + 0) = 0.95,

and so forth, each approximation w_i requiring the previous one, thus creating a sequence running from w_0 to w_N. The table below shows the numerical approximation for 10 steps:

i    t_i    w_i
0    0      1
1    0.05   0.95
2    0.1    0.90702381
3    0.15   0.86962871
4    0.2    0.8367481
5    0.25   0.80757529
6    0.3    0.78148818
7    0.35   0.7579988
8    0.4    0.73671872
9    0.45   0.71733463
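The table can be reproduced with a few lines of Python; this is an illustrative sketch rather than code from the text.

# Sketch: forward Euler for y' = -y^2/(1+t), y(0) = 1, h = 0.05,
# reproducing the table of approximations above.
h = 0.05
w, t = 1.0, 0.0
print(0, t, w)
for i in range(1, 10):
    w = w - h * w**2 / (1 + t)   # w_{i+1} = w_i - h*w_i^2/(1 + t_i)
    t = round(i * h, 2)
    print(i, t, w)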

Lemma 1.2.1. For all x ≥ -1 and any positive m we have

0 ≤ (1 + x)^m ≤ e^{mx}.

Lemma 1.2.2. If s and t are positive real numbers and {a_i}_{i=0}^{N} is a sequence satisfying a_0 ≥ -t/s and a_{i+1} ≤ (1 + s)a_i + t, then

a_{i+1} ≤ e^{(i+1)s}(a_0 + t/s) - t/s.

Theorem 1.2.3. Suppose f is continuous and satisfies a Lipschitz Condition with constant L on D = {(t, y) | a ≤ t ≤ b, -∞ < y < ∞}, and that a constant M exists with the property that

|y''(t)| ≤ M.

Let y(t) denote the unique solution of the Initial Value Problem

y' = f(t, y),   a ≤ t ≤ b,   y(a) = α,

and let w_0, w_1, ..., w_N be the approximations generated by the Euler method for some positive integer N. Then for i = 0, 1, ..., N,

|y(t_i) - w_i| ≤ (Mh/2L)|e^{L(t_i - a)} - 1|.
Proof. When i = 0 the result is clearly true since y(t_0) = w_0 = α.
From Taylor's theorem we have

y(t_{i+1}) = y(t_i) + h f(t_i, y(t_i)) + (h^2/2) y''(ξ_i),

where ξ_i ∈ (t_i, t_{i+1}), and from this we get the Euler approximation

w_{i+1} = w_i + h f(t_i, w_i).

Consequently

y(t_{i+1}) - w_{i+1} = y(t_i) - w_i + h[f(t_i, y(t_i)) - f(t_i, w_i)] + (h^2/2) y''(ξ_i),

and

|y(t_{i+1}) - w_{i+1}| ≤ |y(t_i) - w_i| + h|f(t_i, y(t_i)) - f(t_i, w_i)| + (h^2/2)|y''(ξ_i)|.

Since f satisfies a Lipschitz Condition in the second variable with constant L, and |y''| ≤ M, we have

|y(t_{i+1}) - w_{i+1}| ≤ (1 + hL)|y(t_i) - w_i| + (h^2/2)M.

Using Lemmas 1.2.1 and 1.2.2, and letting a_j = y(t_j) - w_j for each j = 0, .., N, while s = hL and t = h^2 M/2, we see that

|y(t_{i+1}) - w_{i+1}| ≤ e^{(i+1)hL}(|y(t_0) - w_0| + h^2 M/(2hL)) - h^2 M/(2hL).

Since w_0 - y(t_0) = 0 and (i + 1)h = t_{i+1} - t_0 = t_{i+1} - a, we have

|y(t_i) - w_i| ≤ (Mh/2L)|e^{L(t_i - a)} - 1|,

for each i = 0, 1, ..., N - 1.

Example 10
Consider

y' = y - t^2 + 1,   0 ≤ t ≤ 2,   y(0) = 0.5;

the Euler approximation is

w_{i+1} = w_i + h(w_i - t_i^2 + 1),

choosing h = 0.2, t_i = 0.2i and w_0 = 0.5. Here

f(t, y) = y - t^2 + 1,   ∂f/∂y = 1,

so L = 1. The exact solution is y(t) = (t + 1)^2 - (1/2)e^t, and from this we have

y''(t) = 2 - 0.5e^t,

|y''(t)| ≤ 0.5e^2 - 2,   t ∈ [0, 2].

Using the above inequality we have

|y_i - w_i| ≤ (h/2)(0.5e^2 - 2)(e^{t_i} - 1).

Figure 1.2.1 illustrates the upper bound of the error and the actual error.

Figure 1.2.1: Python output: Illustrating the upper bound for y' = y - t^2 + 1 with the initial condition y(0) = 0.5

The Euler method is a typical one step method; in general such methods are given by a function Φ(t, y; h; f). Our initial condition is w_0 = y_0 and, for i = 0, 1, ...,

w_{i+1} = w_i + hΦ(t_i, w_i; h; f),

with t_{i+1} = t_i + h.
In the Euler case Φ(t, y; h; f) = f(t, y) and the method is of order 1.
Theorem 1.2.3 can be extended to higher order one step methods with the variation

|y(t_i) - w_i| ≤ (Mh^p/2L)|e^{L(t_i - a)} - 1|,

where p is the order of the method.

Definition The difference method w_0 = α,

w_{i+1} = w_i + hΦ(t_i, w_i),

for i = 0, 1, ..., N - 1, has a local truncation error given by

τ_{i+1}(h) = (y_{i+1} - (y_i + hΦ(t_i, y_i)))/h
           = (y_{i+1} - y_i)/h - Φ(t_i, y_i),

for each i = 0, .., N - 1, where as usual y_i = y(t_i) denotes the exact solution at t_i.

For the Euler method the local truncation error at the ith step for the problem

y' = f(t, y),   a ≤ t ≤ b,   y(a) = α,

is

τ_{i+1}(h) = (y_{i+1} - y_i)/h - f(t_i, y_i),

for i = 0, .., N - 1.
But we know Euler has

τ_{i+1} = (h/2) y''(ξ_i),   ξ_i ∈ (t_i, t_{i+1}).

When y''(t) is known to be bounded by a constant M on [a, b] this implies

|τ_{i+1}(h)| ≤ (h/2)M ∼ O(h).

O(h) indicates a linear order of error. The higher the order, the more accurate the method.

1.3 problem sheet

1. Show that the following functions satisfy the Lipschitz condition in y on the indicated set D:
   a) f(t, y) = ty^3, D = {(t, y); -1 ≤ t ≤ 1, 0 ≤ y ≤ 10};
   b) f(t, y) = t^2 y^2/(1 + t^2), D = {(t, y); 0 ≤ t, -10 ≤ y ≤ 10}.

2. Apply Euler's Method to approximate the solution of the given initial value problems using the indicated number of time steps. Compare the approximate solution with the given exact solution, and compare the actual error with the theoretical error.
   a) y' = t - y, (0 ≤ t ≤ 4),
      with the initial condition y(0) = 1,
      N = 4, y(t) = 2e^{-t} + t - 1.
      The Lipschitz constant is determined on D = {(t, y); 0 ≤ t ≤ 4, y ∈ R}.
   b) y' = y - t, (0 ≤ t ≤ 2),
      with the initial condition y(0) = 2,
      N = 4, y(t) = e^t + t + 1.
      The Lipschitz constant is determined on D = {(t, y); 0 ≤ t ≤ 2, y ∈ R}.
2 HIGHER ORDER METHODS

2.1 higher order taylor methods

The Taylor expansion

y(t_{i+1}) = y(t_i) + h y'(t_i) + (h^2/2) y''(t_i) + ... + (h^{n+1}/(n + 1)!) y^{(n+1)}(ξ_i)

can be used to design more accurate higher order methods. By differentiating the original Ordinary Differential Equation y' = f(t, y), higher order methods can be derived; this requires the function to be continuous and differentiable.
In the general case, the Taylor method of order n is:

w_0 = α,

w_{i+1} = w_i + h T^n(t_i, w_i),   for i = 0, ..., N - 1,

where

T^n(t_i, w_i) = f(t_i, w_i) + (h/2) f'(t_i, w_i) + ... + (h^{n-1}/n!) f^{(n-1)}(t_i, w_i).   (11)

Example 11
Applying the general Taylor method to create methods of order two and four for the initial value problem

y' = y - t^2 + 1,   0 ≤ t ≤ 2,   y(0) = 0.5,

we have

f'(t, y(t)) = d/dt (y - t^2 + 1) = y' - 2t = y - t^2 + 1 - 2t,

f''(t, y(t)) = y - t^2 - 2t - 1,

f'''(t, y(t)) = y - t^2 - 2t - 1.

From these derivatives we have

T^2(t_i, w_i) = f(t_i, w_i) + (h/2) f'(t_i, w_i)
             = w_i - t_i^2 + 1 + (h/2)(w_i - t_i^2 - 2t_i + 1)
             = (1 + h/2)(w_i - t_i^2 + 1) - h t_i

and

T^4(t_i, w_i) = f(t_i, w_i) + (h/2) f'(t_i, w_i) + (h^2/6) f''(t_i, w_i) + (h^3/24) f'''(t_i, w_i)
             = (1 + h/2 + h^2/6 + h^3/24)(w_i - t_i^2)
               - (1 + h/3 + h^2/12) h t_i
               + 1 + h/2 - h^2/6 - h^3/24.

From these equations we have, for the Taylor method of order two,

w_0 = 0.5,

w_{i+1} = w_i + h[(1 + h/2)(w_i - t_i^2 + 1) - h t_i],

and for the Taylor method of order four,

w_{i+1} = w_i + h[(1 + h/2 + h^2/6 + h^3/24)(w_i - t_i^2) - (1 + h/3 + h^2/12) h t_i + 1 + h/2 - h^2/6 - h^3/24].

The local truncation error for the 2nd order method is

τ_{i+1}(h) = (y_{i+1} - y_i)/h - T^2(t_i, y_i) = (h^2/6) f''(ξ_i, y(ξ_i)),

where ξ_i ∈ (t_i, t_{i+1}).

In general, if y ∈ C^{n+1}[a, b],

τ_{i+1}(h) = (h^n/(n + 1)!) f^{(n)}(ξ_i, y(ξ_i)) ∼ O(h^n).

The issue is that for every differential equation a new method has to be derived.
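For the example above the order two method is straightforward to code once T^2 has been derived by hand. The following is an illustrative sketch (not from the text; the choice N = 10 is an assumption):

# Sketch: Taylor method of order two for y' = y - t^2 + 1, y(0) = 0.5.
import numpy as np

a, b, N = 0.0, 2.0, 10
h = (b - a) / N
t = np.linspace(a, b, N + 1)
w = np.zeros(N + 1)
w[0] = 0.5
for i in range(N):
    # T2(t_i, w_i) = (1 + h/2)(w_i - t_i^2 + 1) - h*t_i
    T2 = (1 + h / 2) * (w[i] - t[i]**2 + 1) - h * t[i]
    w[i + 1] = w[i] + h * T2

exact = (t + 1)**2 - 0.5 * np.exp(t)  # exact solution from Example 10
print("max error:", np.abs(exact - w).max())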
3 RUNGE–KUTTA METHOD

The Runge-Kutta (RK) method is closely related to the Taylor series expansions, but no differentiation of f is necessary.
All RK methods can be written in the form

w_{n+1} = w_n + h F(t_n, w_n, h; f),   n ≥ 0.   (12)

The truncation error for (12) is defined by

T_n(y) = y(t_{n+1}) - y(t_n) - h F(t_n, y(t_n), h; f),

where the error is written in terms of τ_n(y) as

T_n = h τ_n(y).

Rearranging, we get

y(t_{n+1}) = y(t_n) + h F(t_n, y(t_n), h; f) + h τ_n(y).

Theorem 3.0.1. Suppose f(t, y) and all its partial derivatives of order less than or equal to n + 1 are continuous on D = {(t, y) | a ≤ t ≤ b, c ≤ y ≤ d}, and let (t_0, y_0) ∈ D. For every (t, y) ∈ D there exist ξ ∈ (t, t_0) and µ ∈ (y, y_0) with

f(t, y) = P_n(t, y) + R_n(t, y),

where

P_n(t, y) = f(t_0, y_0)
  + [(t - t_0) ∂f/∂t(t_0, y_0) + (y - y_0) ∂f/∂y(t_0, y_0)]
  + [((t - t_0)^2/2) ∂^2 f/∂t^2(t_0, y_0) + (y - y_0)(t - t_0) ∂^2 f/∂y∂t(t_0, y_0) + ((y - y_0)^2/2) ∂^2 f/∂y^2(t_0, y_0)]
  + ...
  + (1/n!) \sum_{j=0}^{n} \binom{n}{j} (t - t_0)^{n-j} (y - y_0)^j ∂^n f/(∂y^j ∂t^{n-j})(t_0, y_0)

and

R_n(t, y) = (1/(n + 1)!) \sum_{j=0}^{n+1} \binom{n+1}{j} (t - t_0)^{n+1-j} (y - y_0)^j ∂^{n+1} f/(∂y^j ∂t^{n+1-j})(ξ, µ).

3.1 derivation of second order runge kutta

Consider the explicit one-step method

(w_{i+1} - w_i)/h = F(f, t_i, w_i, h)   (13)

with

F(f, t, y, h) = a_0 k_1 + a_1 k_2,   (14)

F(f, t, y, h) = a_0 f(t, y) + a_1 f(t + α_1, y + β_1),   (15)

where a_0 + a_1 = 1.
There is a free parameter in the derivation of the Runge Kutta method; for this reason a_0 must be chosen.
We derive the second order Runge-Kutta method by using Theorem 3.0.1 to determine values a_1, α_1 and β_1 with the property that a_1 f(t + α_1, y + β_1) approximates the second order Taylor term

f(t, y) + (h/2) f'(t, y)

with error no greater than O(h^2), the local truncation error for the Taylor method of order two.
Using

f'(t, y) = ∂f/∂t(t, y) + ∂f/∂y(t, y) · y'(t),

the second order Taylor term can be re-written as

f(t, y) + (h/2) ∂f/∂t(t, y) + (h/2) ∂f/∂y(t, y) · f(t, y).   (16)

Expanding a_1 f(t + α_1, y + β_1) in its Taylor polynomial of degree one about (t, y) gives

a_1 f(t + α_1, y + β_1) = a_1 f(t, y) + a_1 α_1 ∂f/∂t(t, y) + a_1 β_1 ∂f/∂y(t, y) + a_1 R_1(t + α_1, y + β_1),   (17)

where

R_1(t + α_1, y + β_1) = (α_1^2/2) ∂^2 f/∂t^2(ξ, µ) + α_1 β_1 ∂^2 f/∂t∂y(ξ, µ) + (β_1^2/2) ∂^2 f/∂y^2(ξ, µ),
for some ξ ∈ [t, t + α_1] and µ ∈ [y, y + β_1].
Matching the coefficients of f and its derivatives in equations (16) and (17) gives the equations

f(t, y):  a_1 = 1,

∂f/∂t(t, y):  a_1 α_1 = h/2,

∂f/∂y(t, y):  a_1 β_1 = (h/2) f(t, y).

3.1.1 Runge Kutta second order: Midpoint method

Choosing a_0 = 0 gives the unique values a_1 = 1, α_1 = h/2 and β_1 = (h/2) f(t, y), so

T^2(t, y) = f(t + h/2, y + (h/2) f(t, y)) - R_1(t + h/2, y + (h/2) f(t, y)),

and from

R_1(t + h/2, y + (h/2) f(t, y)) = (h^2/8) ∂^2 f/∂t^2(ξ, µ) + (h^2/4) f(t, y) ∂^2 f/∂t∂y(ξ, µ) + (h^2/8) f(t, y)^2 ∂^2 f/∂y^2(ξ, µ),

for some ξ ∈ [t, t + h/2] and µ ∈ [y, y + (h/2) f(t, y)].
If all the second-order partial derivatives are bounded, then

R_1(t + h/2, y + (h/2) f(t, y)) ∼ O(h^2).

The Midpoint second order Runge-Kutta method for the initial value problem

y' = f(t, y)

with the initial condition y(t_0) = α is given by

w_0 = α,

w_{i+1} = w_i + h f(t_i + h/2, w_i + (h/2) f(t_i, w_i)),

for each i = 0, 1, ..., N - 1, with an error of order O(h^2). Figure 3.1.1 illustrates the numerical solution of y' = -xy.

Figure 3.1.1: Python output: Numerical solution of y' = -xy with the initial condition y(0) = 1

3.1.2 2nd Order Runge Kutta a0 = 0.5: Heun’s method

Choosing a_0 = 0.5 gives the unique values a_1 = 0.5, α_1 = h and β_1 = h f(t, y), such that

T^2(t, y) = F(t, y) = 0.5 f(t, y) + 0.5 f(t + h, y + h f(t, y)) - R_1(t + h, y + h f(t, y)),

with the error value from

R_1(t + h, y + h f(t, y)) = (h^2/2) ∂^2 f/∂t^2(ξ, µ) + h^2 f(t, y) ∂^2 f/∂t∂y(ξ, µ) + (h^2/2) f(t, y)^2 ∂^2 f/∂y^2(ξ, µ),

for some ξ ∈ [t, t + h] and µ ∈ [y, y + h f(t, y)].
Thus Heun's second order Runge-Kutta method for the initial value problem

y' = f(t, y)

with the initial condition y(t_0) = α is given by

w_0 = α,

w_{i+1} = w_i + (h/2)[f(t_i, w_i) + f(t_i + h, w_i + h f(t_i, w_i))],

with an error of order O(h^2).
For ease of calculation this can be rewritten as:

k_1 = f(t_i, w_i),
k_2 = f(t_i + h, w_i + h k_1),
w_{i+1} = w_i + (h/2)[k_1 + k_2].
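A minimal Python sketch (not from the text) of Heun's method in this k_1, k_2 form, applied to the running example y' = y - t^2 + 1:

# Sketch: Heun's second order Runge Kutta for y' = f(t, y).
import numpy as np

def heun(f, a, b, alpha, N):
    h = (b - a) / N
    t = np.linspace(a, b, N + 1)
    w = np.zeros(N + 1)
    w[0] = alpha
    for i in range(N):
        k1 = f(t[i], w[i])
        k2 = f(t[i] + h, w[i] + h * k1)
        w[i + 1] = w[i] + (h / 2) * (k1 + k2)
    return t, w

# usage on y' = y - t^2 + 1, y(0) = 0.5
t, w = heun(lambda t, y: y - t**2 + 1, 0, 2, 0.5, 10)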

3.2 third order runge kutta methods

Higher order methods are derived in a similar fashion. For the third order Runge Kutta methods,

(w_{i+1} - w_i)/h = F(f, t_i, w_i, h),   (18)

with

F(f, t, w, h) = a_0 k_1 + a_1 k_2 + a_2 k_3,   (19)

where

a_0 + a_1 + a_2 = 1

and

k_1 = f(t_i, w_i),
k_2 = f(t_i + α_1 h, w_i + β_11 h k_1),
k_3 = f(t_i + α_2 h, w_i + β_21 h k_1 + β_22 h k_2).

The values of a_0, a_1, a_2, α_1, α_2, β_11, β_21, β_22 are derived by grouping terms in the Taylor expansion,

y_{i+1} = y_i + h f(t_i, y_i) + (h^2/2)(f_t + f_y f)|_{(t_i, y_i)}
        + (h^3/6)(f_tt + 2 f_ty f + f_t f_y + f_yy f^2 + f_y f_y f)|_{(t_i, y_i)}
        + O(h^4),

with the 3rd order expanded form

y_{i+1} = y_i + h a_0 f(t_i, y_i)
        + h a_1 [f + α_1 h f_t + β_11 h f_y f + (h^2/2)(f_tt α_1^2 + f_yy β_11^2 f^2 + 2 f_ty α_1 β_11 f) + O(h^3)]
        + h a_2 [f + α_2 h f_t + h f_y (β_21 f + β_22 (f + α_1 h f_t + β_11 h f_y f + O(h^2)))
          + (1/2)(f_tt (α_2 h)^2 + f_yy h^2 (β_21 f + β_22 (f + α_1 h f_t + β_11 h f_y f + O(h^2)))^2
          + 2 f_ty α_2 h^2 (β_21 f + β_22 (f + α_1 h f_t + β_11 h f_y f + O(h^2))))],

where all f terms on the right are evaluated at (t_i, y_i).

This results in 8 equations with 8 unknowns, but only 6 of these equations are independent. For this reason there are two free parameters to choose.
For example, we can choose

α_2 = 1,   β_11 = 1/2;

then we obtain the following difference equation:

w_{i+1} = w_i + (h/6)(k_1 + 4k_2 + k_3),

where

k_1 = f(t_i, w_i),
k_2 = f(t_i + h/2, w_i + (h/2) k_1),
k_3 = f(t_i + h, w_i - h k_1 + 2h k_2).

3.3 runge kutta fourth order

w_0 = α,

k_1 = h f(t_i, w_i),
k_2 = h f(t_i + h/2, w_i + (1/2) k_1),
k_3 = h f(t_i + h/2, w_i + (1/2) k_2),
k_4 = h f(t_{i+1}, w_i + k_3),

w_{i+1} = w_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4).
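A compact sketch (illustrative, not from the text) of the classical fourth order step:

# Sketch: classical 4th order Runge Kutta for y' = f(t, y).
import numpy as np

def rk4(f, a, b, alpha, N):
    h = (b - a) / N
    t = np.linspace(a, b, N + 1)
    w = np.zeros(N + 1)
    w[0] = alpha
    for i in range(N):
        k1 = h * f(t[i], w[i])
        k2 = h * f(t[i] + h / 2, w[i] + k1 / 2)
        k3 = h * f(t[i] + h / 2, w[i] + k2 / 2)
        k4 = h * f(t[i] + h, w[i] + k3)
        w[i + 1] = w[i] + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return t, w

# usage on y' = y - t^2 + 1, y(0) = 0.5 (Example 13)
t, w = rk4(lambda t, y: y - t**2 + 1, 0, 2, 0.5, 10)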

Example 12
The Midpoint method applied to

y' = y - t^2 + 1,   0 ≤ t ≤ 2,   y(0) = 0.5,

with N = 10, t_i = 0.2i, h = 0.2:

w_0 = 0.5,

w_{i+1} = w_i + 0.2 f(t_i + 0.2/2, w_i + (0.2/2) f(t_i, w_i))
        = w_i + 0.2 f(t_i + 0.1, w_i + 0.1(w_i - t_i^2 + 1))
        = w_i + 0.2(w_i + 0.1(w_i - t_i^2 + 1) - (t_i + 0.1)^2 + 1).
Example 13
The Runge Kutta fourth order method applied to

y' = y - t^2 + 1,   0 ≤ t ≤ 2,   y(0) = 0.5,

with N = 10, t_i = 0.2i, h = 0.2:

w_0 = 0.5,

k_1 = h(w_i - t_i^2 + 1),
k_2 = h(w_i + (1/2)k_1 - (t_i + h/2)^2 + 1),
k_3 = h(w_i + (1/2)k_2 - (t_i + h/2)^2 + 1),
k_4 = h(w_i + k_3 - (t_i + h)^2 + 1),

w_{i+1} = w_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4).

Figure 3.3.1: Python output: Numerical solution of y' = -xy with the initial condition y(0) = 1

Figure 3.3.2: Python output: Numerical solution of y' = -xy with the initial condition y(0) = 1

3.4 butcher tableau

Another way of representing a Runge Kutta method is called the Butcher tableau, named after John C Butcher (born 31 March 1933). A general s-stage method reads

y_{i+1} = y_i + h \sum_{n=1}^{s} a_n k_n,

where

k_1 = f(t_i, y_i),
k_2 = f(t_i + α_2 h, y_i + h(β_21 k_1)),
k_3 = f(t_i + α_3 h, y_i + h(β_31 k_1 + β_32 k_2)),
...
k_s = f(t_i + α_s h, y_i + h(β_s1 k_1 + β_s2 k_2 + ... + β_s,s-1 k_{s-1})).

These data are usually arranged in a mnemonic device, known as a Butcher tableau:

0    |
α_2  | β_21
α_3  | β_31   β_32
...  | ...
α_s  | β_s1   β_s2   ...   β_s,s-1
     | a_1    a_2    ...   a_{s-1}   a_s

The method is consistent if

\sum_{j=1}^{s-1} β_sj = α_s.

3.4.1 Heun’s Method

The Butcher’s Tableau for Heun’s Method is:


0
1 1
1 1
2 2

3.4.2 4th Order Runge Kutta

The Butcher’s Tableau for the 4th Order Runge Kutta is:

0
1 1
2 2
1 1
2 0 2
1 0 0 1
1 2 2 1
6 6 6 6
3.5 Convergence Analysis 33

3.5 convergence analysis

In order to obtain convergence of the general Runge Kutta method we need the truncation error τ_n(y) → 0 as h → 0. Since

τ_n(y) = (y(t_{n+1}) - y(t_n))/h - F(t_n, y(t_n), h; f),

we require

F(t, y, h; f) → y'(t) = f(t, y(t)).

More precisely, define

δ(h) = max_{a ≤ t ≤ b, -∞ < y < ∞} |f(t, y) - F(t, y, h; f)|,

and assume

δ(h) → 0, as h → 0.   (20)

This is called the consistency condition for the RK method.
We will also need a Lipschitz Condition on F:

|F(t, y, h; f) - F(t, z, h; f)| ≤ L|y - z|,   (21)

for all a ≤ t ≤ b, -∞ < y, z < ∞ and small h > 0.

Example 14
Looking at the midpoint method,

|F(t, w, h; f) - F(t, z, h; f)| = |f(t + h/2, w + (h/2) f(t, w)) - f(t + h/2, z + (h/2) f(t, z))|
                                ≤ K |w - z + (h/2)[f(t, w) - f(t, z)]|
                                ≤ K (1 + (h/2)K)|w - z|.

Theorem 3.5.1. Assume that the Runge Kutta method satisfies the Lipschitz Condition. Then, for the initial value problem

y' = f(x, y),   y(x_0) = y_0,

the numerical solution {w_n} satisfies

max_{a ≤ x ≤ b} |y(x_n) - w_n| ≤ e^{(b-a)L} |y_0 - w_0| + [(e^{(b-a)L} - 1)/L] τ(h),

where

τ(h) = max_{a ≤ x ≤ b} |τ_n(h)|.

If the consistency condition

δ(h) → 0 as h → 0

holds, where

δ(h) = max_{a ≤ x ≤ b} |f(x, y) - F(x, y; h; f)|,

then the numerical solution converges to y(x).

Proof. Subtracting

w_{n+1} = w_n + h F(t_n, w_n, h; f)

and

y(t_{n+1}) = y(t_n) + h F(t_n, y(t_n), h; f) + h τ_n(h),

we obtain

e_{n+1} = e_n + h[F(t_n, y(t_n), h; f) - F(t_n, w_n, h; f)] + h τ_n(h),

in which e_n = y(t_n) - w_n. Applying the Lipschitz Condition with constant L and the truncation error, we obtain

|e_{n+1}| ≤ (1 + hL)|e_n| + h τ(h).

This leads to the result.


In most cases it is known by direct computation that τ (h) → 0 as
h → 0 an in that case convergence of {wn } and y(tn ) is immediately
proved.
But all we need to know is that (20) is satisfied . To see this we write

hτn = y(tn+1 ) − y(tn ) − hF (tn , y(tn ), h; f ),


0 2 00
= hy (tn ) + h2 y (ξ n ) − hF (tn , y(tn ), h; f ),
2 00
h|τn | ≤ hδ(h) + h2 |y |.
00
|τn | ≤ δ(h) + 2h |y |.

Thus τ (h) → 0 as h → 0

From this we have:

Corollary 3.5.2. If the RK method has a truncation error τ(h) = O(h^{m+1}), then the rate of convergence of {w_n} to y(t) is O(h^m).

3.6 the choice of method and step-size

An interesting question arises: the fourth order Runge-Kutta method requires four function evaluations per step where Euler requires only one, so is it more beneficial to use a smaller h with a lower order method than a larger h with a higher order method?
This leads us to the question of how to choose our h to make the best use of the solution we have.
An ideal difference-equation method

w_{i+1} = w_i + hφ(t_i, w_i, h),   i = 0, .., N - 1,

for approximating the solution y(t) to the Initial Value Problem y' = f(t, y) would have the property that, given a tolerance ε > 0, the minimal number of mesh points is used to ensure that the global error |y(t_i) - w_i| does not exceed ε for any i = 0, ..., N.
We do this by finding an appropriate choice of mesh points. Although we cannot generally determine the global error of a method, there is a close relation between the local truncation error and the global error. By using methods of differing order we can predict the local truncation error and, using this prediction, choose a step size that will keep the global error in check.
Suppose we have two techniques:

1. An nth order Taylor method of the form

   y(t_{i+1}) = y(t_i) + hφ(t_i, y(t_i), h) + O(h^{n+1}),

   producing approximations

   w_0 = α,
   w_{i+1} = w_i + hφ(t_i, w_i, h),

   with local truncation error τ_{i+1} = O(h^n).

2. An (n+1)st order Taylor method of the form

   y(t_{i+1}) = y(t_i) + hψ(t_i, y(t_i), h) + O(h^{n+2}),

   producing approximations

   v_0 = α,
   v_{i+1} = v_i + hψ(t_i, v_i, h),

   with local truncation error υ_{i+1} = O(h^{n+1}).

We first make the assumption that w_i ≈ y(t_i) ≈ v_i and choose a fixed step size to generate w_{i+1} and v_{i+1} to approximate y(t_{i+1}). Then

τ_{i+1} = (y(t_{i+1}) - y(t_i))/h - φ(t_i, y(t_i), h)
        = (y(t_{i+1}) - w_i)/h - φ(t_i, w_i, h)
        = (y(t_{i+1}) - (w_i + hφ(t_i, w_i, h)))/h
        = (y(t_{i+1}) - w_{i+1})/h.

Similarly,

Υ_{i+1} = (y(t_{i+1}) - v_{i+1})/h.

As a consequence,

τ_{i+1} = (y(t_{i+1}) - w_{i+1})/h
        = ((y(t_{i+1}) - v_{i+1}) + (v_{i+1} - w_{i+1}))/h
        = Υ_{i+1}(h) + (v_{i+1} - w_{i+1})/h.

But τ_{i+1}(h) is O(h^n) and Υ_{i+1}(h) is O(h^{n+1}), so the significant part of τ_{i+1}(h) must come from (v_{i+1} - w_{i+1})/h. This gives us an easily computed approximation of the local truncation error of the O(h^n) method:

τ_{i+1} ≈ (v_{i+1} - w_{i+1})/h.
The object is not just to estimate the local truncation error but to adjust the step size to keep it within a specified bound. To do this we assume that, since τ_{i+1}(h) is O(h^n), a number K independent of h exists with

τ_{i+1}(h) ≈ K h^n.

Then the local truncation error produced by applying the nth order method with a new step size qh can be estimated using the original approximations w_{i+1} and v_{i+1}:

τ_{i+1}(qh) ≈ K(qh)^n ≈ q^n τ_{i+1}(h) ≈ (q^n/h)(v_{i+1} - w_{i+1}).

To bound τ_{i+1}(qh) by ε we choose q such that

(q^n/h)|v_{i+1} - w_{i+1}| ≈ |τ_{i+1}(qh)| ≤ ε,

which leads to

q ≤ (εh/|v_{i+1} - w_{i+1}|)^{1/n},

which can be used to control the error.
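In code this bound is a one-liner. The sketch below is illustrative (the function name and the safety factor are assumptions; a safety factor below 1 is a common practical choice, mirroring the conservative q = 1.5(...)^{1/4} used for fourth order pairs in the next chapter):

# Sketch: step-size factor q from two approximations w1 (order n) and
# v1 (order n+1) of the same step; eps is the error tolerance.
def step_factor(w1, v1, h, eps, n, safety=0.9):
    # q < (eps*h/|v1 - w1|)^(1/n)
    return safety * (eps * h / abs(v1 - w1)) ** (1.0 / n)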

3.7 problem sheet 2

1. Apply the Taylor method to approximate the solution of the initial value problem

   y' = ty + ty^2, (0 ≤ t ≤ 2), y(0) = 1/2,

   using N = 4 steps.

2. Apply the Midpoint Method to approximate the solution of the given initial value problems using the indicated number of time steps. Compare the approximate solution with the given exact solution.
   a) y' = t - y, (0 ≤ t ≤ 4),
      with the initial condition y(0) = 1,
      N = 4, with the exact solution y(t) = 2e^{-t} + t - 1.
   b) y' = y - t, (0 ≤ t ≤ 2),
      with the initial condition y(0) = 2,
      N = 4, with the exact solution y(t) = e^t + t + 1.

3. Apply the 4th Order Runge Kutta Method to approximate the solution of the given initial value problems using the indicated number of time steps. Compare the approximate solution with the given exact solution.
   a) y' = t - y, (0 ≤ t ≤ 4),
      with the initial condition y(0) = 1,
      N = 4, with the exact solution y(t) = 2e^{-t} + t - 1.
   b) y' = y - t, (0 ≤ t ≤ 2),
      with the initial condition y(0) = 2,
      N = 4, with the exact solution y(t) = e^t + t + 1.

4. Derive the difference equation for the Midpoint Runge Kutta method

   w_{n+1} = w_n + k_2,
   k_1 = h f(t_n, w_n),
   k_2 = h f(t_n + (1/2)h, w_n + (1/2)k_1),

   for solving the ordinary differential equation

   dy/dt = f(t, y),   y(t_0) = y_0,

   by using a formula of the form

   w_{n+1} = w_n + a k_1 + b k_2,

   where k_1 is defined as above,

   k_2 = h f(t_n + αh, w_n + βk_1),

   and a, b, α and β are constants to be determined. Prove that a + b = 1 and bα = bβ = 1/2, and choose appropriate values to give the Midpoint Runge Kutta method.
4 MULTI-STEP METHODS

Methods using the approximation at more than one previous point to determine the approximation at the next point are called multi-step methods.

Definition An m-step multi-step method for solving the Initial Value Problem

y' = f(t, y),   a ≤ t ≤ b,   y(a) = α,

is one whose difference equation for finding the approximation w_{i+1} at the mesh point t_{i+1} can be represented by the following equation, where m is an integer greater than 1:

w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + ... + a_0 w_{i+1-m}
        + h[b_m f(t_{i+1}, w_{i+1}) + b_{m-1} f(t_i, w_i) + ... + b_0 f(t_{i+1-m}, w_{i+1-m})],   (22)

for i = m - 1, m, ..., N - 1, where h = (b - a)/N, the a_0, a_1, ..., a_{m-1} and b_0, b_1, ..., b_m are constants, and the starting values

w_0 = α,   w_1 = α_1,   w_2 = α_2,   ...,   w_{m-1} = α_{m-1}

are specified.
When b_m = 0 the method is called explicit or open, since (22) then gives w_{i+1} explicitly in terms of previously determined approximations.
When b_m ≠ 0 the method is called implicit or closed, since w_{i+1} occurs on both sides of (22).

Example 15
Fourth order Adams-Bashforth:

w_0 = α,   w_1 = α_1,   w_2 = α_2,   w_3 = α_3,

w_{i+1} = w_i + (h/24)[55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3})],

for each i = 3, 4, ..., N - 1, defines an explicit four step method known as the fourth order Adams-Bashforth technique.
The equations

w_0 = α,   w_1 = α_1,   w_2 = α_2,

w_{i+1} = w_i + (h/24)[9 f(t_{i+1}, w_{i+1}) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})],

for each i = 2, 3, ..., N - 1, define an implicit three step method known as the fourth order Adams-Moulton technique.

For both of these methods we need to generate the starting values α_1, α_2 and α_3 by using a one step method.

4.1 derivation of an explicit multistep method

4.1.1 General Derivation of an explicit method Adams-Bashforth

Integrating the differential equation over [t_i, t_{i+1}],

y(t_{i+1}) - y(t_i) = \int_{t_i}^{t_{i+1}} y'(t) dt = \int_{t_i}^{t_{i+1}} f(t, y(t)) dt.

Consequently

y(t_{i+1}) = y(t_i) + \int_{t_i}^{t_{i+1}} f(t, y(t)) dt.

Since we cannot integrate f(t, y(t)) without knowing y(t), the solution to the problem, we instead integrate an interpolating polynomial P(t) to f(t, y(t)), determined by some of the previously obtained data points (t_0, w_0), (t_1, w_1), ..., (t_i, w_i). When we assume in addition that y(t_i) ≈ w_i,

y(t_{i+1}) ≈ w_i + \int_{t_i}^{t_{i+1}} P(t) dt.

We use Newton's backward difference formula to derive an Adams-Bashforth explicit m-step technique: we form the backward difference polynomial P_{m-1}(t) through (t_i, f(t_i)), (t_{i-1}, f(t_{i-1})), ..., (t_{i+1-m}, f(t_{i+1-m})),

f(t, y(t)) = P_{m-1}(t) + ((t - t_i)...(t - t_{i+1-m})/m!) f^{(m)}(ξ, y(ξ)).   (23)
P_{m-1}(t) = \sum_{j=1}^{m} L_{m-1,j}(t) ∇f(t_{i+1-j}, y(t_{i+1-j})),   (24)

where

∇f(t_i, y(t_i)) = f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1})),

∇^2 f(t_i, y(t_i)) = ∇f(t_i, y(t_i)) - ∇f(t_{i-1}, y(t_{i-1}))
                   = f(t_i, y(t_i)) - 2f(t_{i-1}, y(t_{i-1})) + f(t_{i-2}, y(t_{i-2})).

Derivation of an explicit two-step method: Adams-Bashforth

To derive the two step Adams-Bashforth technique, integrate the linear interpolant through (t_{i-1}, f_{i-1}) and (t_i, f_i):

\int_{t_i}^{t_{i+1}} f(t, y) dt = \int_{t_i}^{t_{i+1}} [f(t_i, y(t_i)) + ((t - t_i)/h) ∇f(t_i, y(t_i)) + error] dt,

so

y_{i+1} - y_i = [t f(t_i, y(t_i)) + ((t^2/2 - t_i t)/h) ∇f(t_i, y(t_i))]_{t_i}^{t_{i+1}} + Error,

y_{i+1} = y(t_i) + (t_{i+1} - t_i) f(t_i, y(t_i)) + ((t_{i+1}^2/2 - t_{i+1} t_i + t_i^2/2)/h) ∇f(t_i, y(t_i)) + Error
        = y(t_i) + h f(t_i, y(t_i)) + ((t_{i+1} - t_i)^2/(2h))(f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))) + Error
        = y(t_i) + h f(t_i, y(t_i)) + (h/2)(f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))) + Error
        = y(t_i) + (h/2)[3 f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))] + Error.

The two step Adams-Bashforth method is w_0 = α_0, w_1 = α_1, with

w_{i+1} = w_i + (h/2)[3 f(t_i, w_i) - f(t_{i-1}, w_{i-1})],   for i = 1, .., N - 1.
The local truncation error is

τ_{i+1}(h) = (y(t_{i+1}) - y(t_i))/h - (1/2)[3 f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))] = Error/h,

where

Error = \int_{t_i}^{t_{i+1}} ((t - t_i)(t - t_{i-1})/2) f''(µ_i, y(µ_i)) dt = (5/12) h^3 f''(µ_i, y(µ_i)),

so

τ_{i+1}(h) = (5/12) h^2 f''(µ_i, y(µ_i)).

The local truncation error for the two step Adams-Bashforth method is of order 2:

τ_{i+1}(h) = O(h^2).

General Derivation of an explicit method Adams-Bashforth (cont.)

Definition The Lagrange polynomial L_{m-1,j}(t) has degree m - 1 and is associated with the interpolation point t_j in the sense that

L_{m-1,j}(t_i) = 1 if i = j,   L_{m-1,j}(t_i) = 0 if i ≠ j,

L_{m-1,j}(t) = ((t - t_0)...(t - t_{m-1}))/((t_j - t_0)...(t_j - t_{m-1})) = \prod_{k=0, k≠j}^{m-1} (t - t_k)/(t_j - t_k).   (25)

Introducing the variable t = t_i + sh, with dt = h ds, into L_{m-1,j}(t) gives

L_{m-1,j}(t) = \prod_{k=0, k≠j}^{m-1} (t_i + sh - t_k)/(t_j - t_k) = (-1)^{m-1} \binom{-s}{m-1},   (26)

and so

\int_{t_i}^{t_{i+1}} f(t, y(t)) dt = \int_{t_i}^{t_{i+1}} \sum_{k=0}^{m-1} \binom{-s}{k} (-1)^k ∇^k f(t_i, y(t_i)) dt
  + \int_{t_i}^{t_{i+1}} ((t - t_i)...(t - t_{i+1-m})/m!) f^{(m)}(ξ, y(ξ)) dt

= \sum_{k=0}^{m-1} ∇^k f(t_i, y(t_i)) h (-1)^k \int_0^1 \binom{-s}{k} ds
  + (h^{m+1}/m!) \int_0^1 s(s+1)...(s+m-1) f^{(m)}(ξ, y(ξ)) ds.

The integrals (-1)^k \int_0^1 \binom{-s}{k} ds for various values of k are computed as follows.

Example 16
For k = 2:

(-1)^2 \int_0^1 \binom{-s}{2} ds = \int_0^1 ((-s)(-s - 1)/(1·2)) ds
                                 = (1/2) \int_0^1 (s^2 + s) ds
                                 = (1/2)[s^3/3 + s^2/2]_0^1 = 5/12.

k                                   0    1     2      3     ...
(-1)^k \int_0^1 \binom{-s}{k} ds    1    1/2   5/12   3/8   ...

Table 1: Table of Adams-Bashforth coefficients
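These coefficients can be checked symbolically; the short sketch below uses SymPy, which is an assumption here — any computer algebra system would do.

# Sketch: reproduce the Adams-Bashforth coefficients of Table 1,
# (-1)^k * integral_0^1 binomial(-s, k) ds for k = 0, 1, 2, 3.
from sympy import symbols, binomial, integrate

s = symbols('s')
for k in range(4):
    coeff = (-1)**k * integrate(binomial(-s, k), (s, 0, 1))
    print(k, coeff)   # 1, 1/2, 5/12, 3/8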

As a consequence,

\int_{t_i}^{t_{i+1}} f(t, y(t)) dt = h[f(t_i, y(t_i)) + (1/2)∇f(t_i, y(t_i)) + (5/12)∇^2 f(t_i, y(t_i)) + ...]
  + (h^{m+1}/m!) \int_0^1 s(s+1)...(s+m-1) f^{(m)}(ξ, y(ξ)) ds.

Since s(s+1)...(s+m-1) does not change sign on [0, 1], it can be stated that for some µ_i, where t_{i+1-m} < µ_i < t_{i+1}, the error term becomes

(h^{m+1}/m!) \int_0^1 s(s+1)...(s+m-1) f^{(m)}(ξ, y(ξ)) ds = (h^{m+1}/m!) f^{(m)}(µ, y(µ)) \int_0^1 s(s+1)...(s+m-1) ds.

Since y(t_{i+1}) - y(t_i) = \int_{t_i}^{t_{i+1}} f(s, y(s)) ds, this can be written as

y(t_{i+1}) = y(t_i) + h[f(t_i, y(t_i)) + (1/2)∇f(t_i, y(t_i)) + (5/12)∇^2 f(t_i, y(t_i)) + ...].

Example 17
To derive the two step Adams-Bashforth method:

y(t_{i+1}) ≈ y(t_i) + h[f(t_i, y(t_i)) + (1/2)∇f(t_i, y(t_i))]
           = y(t_i) + h[f(t_i, y(t_i)) + (1/2)(f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1})))]
           = y(t_i) + (h/2)[3 f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))].

The two step Adams-Bashforth method is w_0 = α_0, w_1 = α_1, with

w_{i+1} = w_i + (h/2)[3 f(t_i, w_i) - f(t_{i-1}, w_{i-1})],   for i = 1, .., N - 1.

Definition If y(t) is a solution of the Initial Value Problem

y' = f(t, y),   a ≤ t ≤ b,   y(a) = α,

and

w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + ... + a_0 w_{i+1-m}
        + h[b_m f(t_{i+1}, w_{i+1}) + b_{m-1} f(t_i, w_i) + ... + b_0 f(t_{i+1-m}, w_{i+1-m})]

is the (i+1)th step in a multi-step method, the local truncation error at this step is

τ_{i+1}(h) = (y(t_{i+1}) - a_{m-1} y(t_i) - ... - a_0 y(t_{i+1-m}))/h
           - [b_m f(t_{i+1}, y(t_{i+1})) + b_{m-1} f(t_i, y(t_i)) + ... + b_0 f(t_{i+1-m}, y(t_{i+1-m}))],

for each i = m - 1, ..., N - 1.

Example 18
The truncation error for the two step Adams-Bashforth method follows from

h^3 f''(µ_i, y(µ_i)) (-1)^2 \int_0^1 \binom{-s}{2} ds = (5h^3/12) f''(µ_i, y(µ_i)).

Using the fact that f''(µ_i, y(µ_i)) = y'''(µ_i),

τ_{i+1}(h) = (y(t_{i+1}) - y(t_i))/h - (1/2)[3 f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))]
           = (1/h)[(5h^3/12) f''(µ_i, y(µ_i))] = (5/12) h^2 y'''(µ_i).

4.1.2 Adams-Bashforth three step method

w_0 = α,   w_1 = α_1,   w_2 = α_2,

w_{i+1} = w_i + (h/12)[23 f(t_i, w_i) - 16 f(t_{i-1}, w_{i-1}) + 5 f(t_{i-2}, w_{i-2})],

where i = 2, 3, ..., N - 1.
The local truncation error is of order 3:

τ_{i+1}(h) = (3/8) h^3 y^{(4)}(µ_i),   µ_i ∈ (t_{i-2}, t_{i+1}).

4.1.3 Adams-Bashforth four step method

w_0 = α,   w_1 = α_1,   w_2 = α_2,   w_3 = α_3,

w_{i+1} = w_i + (h/24)[55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3})],

where i = 3, ..., N - 1.
The local truncation error is of order 4:

τ_{i+1}(h) = (251/720) h^4 y^{(5)}(µ_i),   µ_i ∈ (t_{i-3}, t_{i+1}).

Example 19
Consider

y' = y - t^2 + 1,   0 ≤ t ≤ 2,   y(0) = 0.5.

The Adams-Bashforth two step method is w_0 = α_0 and w_1 = α_1 with

w_{i+1} = w_i + (h/2)[3 f(t_i, w_i) - f(t_{i-1}, w_{i-1})],   for i = 1, .., N - 1,

with truncation error

τ_{i+1}(h) = (5/12) h^2 y'''(µ_i),   µ_i ∈ (t_{i-1}, t_{i+1}).

1. Calculate α_0 and α_1.
   From the initial condition we have w_0 = 0.5.
   To calculate w_1 we use the modified Euler method:

   w_0 = α,
   w_{i+1} = w_i + (h/2)[f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i))].

   We only need this to calculate w_1:

   w_0 = 0.5,
   w_1 = w_0 + (h/2)[f(t_0, w_0) + f(t_1, w_0 + h f(t_0, w_0))]
       = w_0 + (0.2/2)[w_0 - t_0^2 + 1 + w_0 + h(w_0 - t_0^2 + 1) - t_1^2 + 1]
       = 0.5 + (0.2/2)[0.5 - 0 + 1 + 0.5 + 0.2(1.5) - (0.2)^2 + 1]
       = 0.826.

   We now have α_1 = w_1 = 0.826.


2. Calculate w_i for i = 2, ..., N:

   w_2 = w_1 + (h/2)[3 f(t_1, w_1) - f(t_0, w_0)]
       = w_1 + (h/2)[3(w_1 - t_1^2 + 1) - (w_0 - t_0^2 + 1)]
       = 0.826 + (0.2/2)[3(0.826 - 0.2^2 + 1) - (0.5 - 0^2 + 1)]
       = 1.2118,

   and in general

   w_{i+1} = w_i + (0.2/2)[3(w_i - t_i^2 + 1) - (w_{i-1} - t_{i-1}^2 + 1)].

This method can be generalised for all Adams-Bashforth methods.
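An illustrative Python sketch (not from the text) of this two step Adams-Bashforth method, using the modified Euler step above to generate w_1:

# Sketch: 2-step Adams-Bashforth for y' = y - t^2 + 1, y(0) = 0.5.
import numpy as np

f = lambda t, y: y - t**2 + 1
a, b, N = 0.0, 2.0, 10
h = (b - a) / N
t = np.linspace(a, b, N + 1)
w = np.zeros(N + 1)
w[0] = 0.5
# starting value w_1 from the modified Euler method
w[1] = w[0] + (h / 2) * (f(t[0], w[0]) + f(t[1], w[0] + h * f(t[0], w[0])))
for i in range(1, N):
    w[i + 1] = w[i] + (h / 2) * (3 * f(t[i], w[i]) - f(t[i - 1], w[i - 1]))
print(w[1], w[2])  # 0.826 and 1.2118, as in the worked example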

4.2 derivation of the implicit multi-step method

4.2.0.1 Derivation of an implicit one-step method Adams Moulton


To derive one step Adams-Moulton technique
Z t i +1 Z t i +1
( t − t i +1 )
f (t, y)dt = [ f (ti+1 , y(ti+1 )) + ∇ f (ti+1 , y(ti+1 )) + error ]dt
ti ti h
yi+1 − yi = [t f (ti+1 , y(ti+1 ))
t( 2t − ti+1 ) t
+ ∇ f (ti+1 , y(ti+1 ))]tii+1 + Error
h
y i +1 = y(ti ) + (ti+1 − ti ) f (ti+1 , y(ti+1 ))
t2i+1 t2i
− t2i+1 + ti ti+1 −
2 2
+ ∇( f (ti+1 , y(ti+1 ))
h
+ Error
= y(ti ) + h f (ti+1 , y(ti+1 ))
−(ti+1 − ti )2
+ ( f (ti+1 , y(ti+1 )) − f (ti , y(ti )))
2h
+ Error
= y(ti ) + h f (ti+1 , y(ti+1 ))
h
− ( f (ti+1 , y(ti+1 )) − f (ti , y(ti ))) + Error
2
h
= y(ti ) + [ f (ti+1 , y(ti+1 )) + f (ti−1 , y(ti−1 ))] + Error
2
The two step Adams-Moulton is w0 = α0 and w1 = α1 with

h
w i +1 = w i + [ w i +1 + w i ] f or i = 0, .., N − 1
2
4.2 Derivation of the implicit multi-step method 47

The local truncation error is

τ_{i+1}(h) = (y(t_{i+1}) - y(t_i))/h - (1/2)[f(t_{i+1}, y(t_{i+1})) + f(t_i, y(t_i))] = Error/h,

where

Error = \int_{t_i}^{t_{i+1}} ((t - t_{i+1})(t - t_i)/2) f''(µ_i, y(µ_i)) dt = -(1/12) h^3 f''(µ_i, y(µ_i)),

so

τ_{i+1}(h) = -(1/12) h^2 f''(µ_i, y(µ_i)).

The local truncation error for the one step Adams-Moulton method is of order 2:

τ_{i+1}(h) = O(h^2).

derivation of the implicit multi-step method (cont.)

As before, with t = t_{i+1} + sh,

y(t_{i+1}) - y(t_i) = \int_{-1}^{0} y'(t_{i+1} + sh) h ds
= \sum_{k=0}^{m-1} ∇^k f(t_{i+1}, y(t_{i+1})) h (-1)^k \int_{-1}^{0} \binom{-s}{k} ds
+ (h^{m+1}/m!) \int_{-1}^{0} s(s+1)...(s+m-1) f^{(m)}(ξ, y(ξ)) ds.

Example 20
For k = 3 we have

(-1)^3 \int_{-1}^{0} \binom{-s}{3} ds = \int_{-1}^{0} (s(s+1)(s+2)/(1·2·3)) ds
                                      = (1/6)[s^4/4 + s^3 + s^2]_{-1}^{0} = -1/24.

The general form of the Adams-Moulton method is

y(t_{i+1}) = y(t_i) + h[f(t_{i+1}, y(t_{i+1})) - (1/2)∇f(t_{i+1}, y(t_{i+1})) - (1/12)∇^2 f(t_{i+1}, y(t_{i+1})) - ...]
           + (h^{m+1}/m!) \int_{-1}^{0} s(s+1)...(s+m-1) f^{(m)}(ξ, y(ξ)) ds.

adams-moulton two step method

w_0 = α,   w_1 = α_1,

w_{i+1} = w_i + (h/12)[5 f(t_{i+1}, w_{i+1}) + 8 f(t_i, w_i) - f(t_{i-1}, w_{i-1})],

where i = 1, 2, ..., N - 1.
The local truncation error is

τ_{i+1}(h) = -(1/24) h^3 y^{(4)}(µ_i),   µ_i ∈ (t_{i-1}, t_{i+1}).

adams-moulton three step method

w_0 = α,   w_1 = α_1,   w_2 = α_2,

w_{i+1} = w_i + (h/24)[9 f(t_{i+1}, w_{i+1}) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})],

where i = 2, ..., N - 1.
The local truncation error is

τ_{i+1}(h) = -(19/720) h^4 y^{(5)}(µ_i),   µ_i ∈ (t_{i-2}, t_{i+1}).

Example 21
Consider

y' = y - t^2 + 1,   0 ≤ t ≤ 2,   y(0) = 0.5.

The Adams-Moulton two step method is w_0 = α_0 and w_1 = α_1 with

w_{i+1} = w_i + (h/12)[5 f(t_{i+1}, w_{i+1}) + 8 f(t_i, w_i) - f(t_{i-1}, w_{i-1})],   for i = 1, .., N - 1,

with truncation error τ_{i+1}(h) = -(1/24) h^3 y^{(4)}(µ_i), µ_i ∈ (t_{i-1}, t_{i+1}).

1. Calculate α_0 and α_1.
   From the initial condition we have w_0 = 0.5.
   To calculate w_1 we use the modified Euler method, as in Example 19, giving α_1 = w_1 = 0.826.

2. Calculate w_i for i = 2, ..., N. Since f is linear in y, the implicit equation

   w_2 = w_1 + (h/12)[5 f(t_2, w_2) + 8 f(t_1, w_1) - f(t_0, w_0)]
       = w_1 + (h/12)[5(w_2 - t_2^2 + 1) + 8(w_1 - t_1^2 + 1) - (w_0 - t_0^2 + 1)]

   can be rearranged so that w_2 appears only on the left. In general,

   (1 - 5h/12) w_{i+1} = (h/12)[(8 + 12/h) w_i - w_{i-1} - 5t_{i+1}^2 - 8t_i^2 + t_{i-1}^2 + 12].

This, of course, can be generalised. The only unfortunate aspect of the implicit method is that it must be converted into an explicit form, which is not always possible, e.g. for y' = e^y.
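Because f here is linear in y, the implicit step can be solved exactly at each iteration; the following sketch (illustrative, not from the text) does exactly that. For a nonlinear f a root-finder would be needed at each step instead.

# Sketch: 2-step Adams-Moulton for y' = y - t^2 + 1, y(0) = 0.5,
# solving the linear implicit equation for w_{i+1} at each step.
import numpy as np

f = lambda t, y: y - t**2 + 1
a, b, N = 0.0, 2.0, 10
h = (b - a) / N
t = np.linspace(a, b, N + 1)
w = np.zeros(N + 1)
w[0] = 0.5
w[1] = w[0] + (h / 2) * (f(t[0], w[0]) + f(t[1], w[0] + h * f(t[0], w[0])))
for i in range(1, N):
    # (1 - 5h/12) w_{i+1} = w_i + (h/12)[5(-t_{i+1}^2 + 1)
    #                        + 8 f(t_i, w_i) - f(t_{i-1}, w_{i-1})]
    rhs = w[i] + (h / 12) * (5 * (-t[i + 1]**2 + 1)
                             + 8 * f(t[i], w[i]) - f(t[i - 1], w[i - 1]))
    w[i + 1] = rhs / (1 - 5 * h / 12)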

4.3 table of adam’s methods

Order   Formula                                                              LTE

1       y_{n+1} = y_n + h f_n                                                (h^2/2) y''(η)
2       y_{n+1} = y_n + (h/2)[3f_n - f_{n-1}]                                (5h^3/12) y'''(η)
3       y_{n+1} = y_n + (h/12)[23f_n - 16f_{n-1} + 5f_{n-2}]                 (3h^4/8) y^{(4)}(η)
4       y_{n+1} = y_n + (h/24)[55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3}]     (251h^5/720) y^{(5)}(η)

Table 2: Adams-Bashforth formulas of different order. LTE stands for local truncation error.

Order   Formula                                                              LTE

0       y_{n+1} = y_n + h f_{n+1}                                            -(h^2/2) y''(η)
1       y_{n+1} = y_n + (h/2)[f_{n+1} + f_n]                                 -(h^3/12) y'''(η)
2       y_{n+1} = y_n + (h/12)[5f_{n+1} + 8f_n - f_{n-1}]                    -(h^4/24) y^{(4)}(η)
3       y_{n+1} = y_n + (h/24)[9f_{n+1} + 19f_n - 5f_{n-1} + f_{n-2}]        -(19h^5/720) y^{(5)}(η)

Table 3: Adams-Moulton formulas of different order. LTE stands for local truncation error.

4.4 predictor-corrector method

In practice implicit methods are not used as above. Instead, they are used to improve approximations obtained by explicit methods. The combination of the two is called a predictor-corrector method.

Example 22
Consider the following fourth order method for solving an initial-value problem.
1. Calculate w_0, w_1, w_2, w_3 for the four step Adams-Bashforth method; to do this we use a 4th order one step method, e.g. Runge Kutta.
2. Calculate an approximation w_4^0 to y(t_4) using the Adams-Bashforth method as the predictor:

   w_4^0 = w_3 + (h/24)[55 f(t_3, w_3) - 59 f(t_2, w_2) + 37 f(t_1, w_1) - 9 f(t_0, w_0)].

3. This approximation is improved by inserting w_4^0 in the right hand side of the three step Adams-Moulton method and using it as a corrector:

   w_4^1 = w_3 + (h/24)[9 f(t_4, w_4^0) + 19 f(t_3, w_3) - 5 f(t_2, w_2) + f(t_1, w_1)].

   The only new function evaluation is f(t_4, w_4^0).
4. w_4^1 is the approximation of y(t_4).
5. Repeat steps 2-4 to calculate the approximation of y(t_5).
6. Repeat until y(t_N).

Improved approximations to y(t_{i+1}) can be obtained by iterating the Adams-Moulton formula,

w_{i+1}^{k+1} = w_i + (h/24)[9 f(t_{i+1}, w_{i+1}^k) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})].

However, w_{i+1}^{k+1} converges to the approximation given by the implicit method rather than to the solution y(t_{i+1}). A more effective approach is to reduce the step size if improved accuracy is needed.
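The loop structure of Example 22 is summarised in the following sketch (illustrative, not from the text; RK4 supplies the starting values as in step 1):

# Sketch: Adams-Bashforth 4-step predictor with Adams-Moulton 3-step
# corrector; starting values w_1..w_3 from classical RK4.
import numpy as np

def abm4(f, a, b, alpha, N):
    h = (b - a) / N
    t = np.linspace(a, b, N + 1)
    w = np.zeros(N + 1)
    w[0] = alpha
    for i in range(3):  # RK4 starting values
        k1 = h * f(t[i], w[i])
        k2 = h * f(t[i] + h / 2, w[i] + k1 / 2)
        k3 = h * f(t[i] + h / 2, w[i] + k2 / 2)
        k4 = h * f(t[i] + h, w[i] + k3)
        w[i + 1] = w[i] + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    for i in range(3, N):
        # predictor: 4-step Adams-Bashforth
        wp = w[i] + (h / 24) * (55 * f(t[i], w[i]) - 59 * f(t[i - 1], w[i - 1])
                                + 37 * f(t[i - 2], w[i - 2]) - 9 * f(t[i - 3], w[i - 3]))
        # corrector: 3-step Adams-Moulton using the predicted value
        w[i + 1] = w[i] + (h / 24) * (9 * f(t[i + 1], wp) + 19 * f(t[i], w[i])
                                      - 5 * f(t[i - 1], w[i - 1]) + f(t[i - 2], w[i - 2]))
    return t, w

t, w = abm4(lambda t, y: y - t**2 + 1, 0, 2, 0.5, 10)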

4.5 improved step-size multi-step method

As the predictor-corrector technique produces two approximations at each step, it is a natural candidate for error control (see the previous section).

Example 23
The Adams-Bashforth 4-step method

w_0 = α,   w_1 = α_1,   w_2 = α_2,   w_3 = α_3,

y(t_{i+1}) = y(t_i) + (h/24)[55 f(t_i, y(t_i)) - 59 f(t_{i-1}, y(t_{i-1})) + 37 f(t_{i-2}, y(t_{i-2})) - 9 f(t_{i-3}, y(t_{i-3}))] + (251/720) h^5 y^{(5)}(µ_i),

with µ_i ∈ (t_{i-3}, t_{i+1}), has truncation error

(y(t_{i+1}) - w_{i+1}^0)/h = (251/720) h^4 y^{(5)}(µ_i).   (27)

Similarly, for the Adams-Moulton three step method,

y(t_{i+1}) = y(t_i) + (h/24)[9 f(t_{i+1}, y(t_{i+1})) + 19 f(t_i, y(t_i)) - 5 f(t_{i-1}, y(t_{i-1})) + f(t_{i-2}, y(t_{i-2}))] - (19/720) h^5 y^{(5)}(ξ_i),

with ξ_i ∈ (t_{i-2}, t_{i+1}), the truncation error is

(y(t_{i+1}) - w_{i+1})/h = -(19/720) h^4 y^{(5)}(ξ_i).   (28)

We make the assumption that for small h,

y^{(5)}(ξ_i) ≈ y^{(5)}(µ_i).

Subtracting (28) from (27) we have

(w_{i+1} - w_{i+1}^0)/h = (h^4/720)[251 y^{(5)}(µ_i) + 19 y^{(5)}(ξ_i)] ≈ (3/8) h^4 y^{(5)}(ξ_i),

so

y^{(5)}(ξ_i) ≈ (8/(3h^5))(w_{i+1} - w_{i+1}^0).   (29)

Using this result to eliminate h^4 y^{(5)}(ξ_i) from (28),

|τ_{i+1}(h)| = |y(t_{i+1}) - w_{i+1}|/h ≈ (19h^4/720)(8/(3h^5))|w_{i+1} - w_{i+1}^0| = (19 |w_{i+1} - w_{i+1}^0|)/(270h).

Now consider a new step size qh generating new approximations ŵ_0, ŵ_1, .., ŵ_{i+1}. The objective is to choose a q so that the local truncation error is bounded by a tolerance ε:

|y(t_i + qh) - ŵ_{i+1}|/(qh) = (19h^4/720) q^4 |y^{(5)}(ν)| ≈ (19h^4/720)(8/(3h^5))|w_{i+1} - w_{i+1}^0| q^4,

and we need to choose q so that

|y(t_i + qh) - ŵ_{i+1}|/(qh) ≈ (19 q^4 |w_{i+1} - w_{i+1}^0|)/(270h) < ε,

that is, we choose q so that

q < ((270/19) hε/|w_{i+1} - w_{i+1}^0|)^{1/4} ≈ 2 (hε/|w_{i+1} - w_{i+1}^0|)^{1/4}.

q is normally chosen as

q = 1.5 (hε/|w_{i+1} - w_{i+1}^0|)^{1/4}.

With this knowledge we can change step sizes and control our error.

4.6 problem sheet 3

1. Apply the 3-step Adams-Bashforth to approximate the solution


of the given initial value problems using the indicated number
of time steps. Compare the approximate solution with the given
exact solution
a) y0 = t − y, (0 ≤ t ≤ 4)
with the initial condition y(0) = 1,
N = 4, y(t) = 2e−t + t − 1

b) y0 = y − t, (0 ≤ t ≤ 2)
with the initial condition y(0) = 2,
N = 4, y(t) = et + t + 1

2. Apply the 2-step Adams-Moulton Method to approximate the


solution of the given initial value problems using the indicated
number of time steps. Compare the approximate solution with
the given exact solution
a) y0 = t − y, (0 ≤ t ≤ 4)
with the initial condition y(0) = 1,
N = 4, y(t) = 2e−t + t − 1

b) y0 = y − t, (0 ≤ t ≤ 2)
with the initial condition y(0) = 2,
N = 4, y(t) = et + t + 1

3. Derive the difference equation for the 1-step Adams-Bashforth


method:

w n +1 = w n + h f ( t n , w n ),
with the local truncation error
h 2
τn+1 (h) = y (µn )
2
where µn ∈ (tn , tn+1 ).

4. Derive the difference equation for the 2-step Adams-Bashforth


method:

    w_{n+1} = w_n + (3/2) h f(t_n, w_n) - (1/2) h f(t_{n-1}, w_{n-1}),

with the local truncation error

    τ_{n+1}(h) = (5h^2/12) y'''(µ_n)

where µ_n ∈ (t_{n-1}, t_{n+1}).


5. Derive the difference equation for the 3-step Adams-Bashforth
method:

    w_{n+1} = w_n + (23/12) h f(t_n, w_n) - (4/3) h f(t_{n-1}, w_{n-1}) + (5/12) h f(t_{n-2}, w_{n-2}),

with the local truncation error

    τ_{n+1}(h) = (9h^3/24) y^{(4)}(µ_n)

where µ_n ∈ (t_{n-2}, t_{n+1}).
6. Derive the difference equation for the 0-step Adams-Moulton
method:

    w_{n+1} = w_n + h f(t_{n+1}, w_{n+1}),

with the local truncation error

    τ_{n+1}(h) = -(h/2) y''(µ_n)

where µ_n ∈ (t_n, t_{n+1}).
7. Derive the difference equation for the 1-step Adams-Moulton
method:

    w_{n+1} = w_n + (1/2) h f(t_{n+1}, w_{n+1}) + (1/2) h f(t_n, w_n),

with the local truncation error

    τ_{n+1}(h) = -(h^2/12) y'''(µ_n)

where µ_n ∈ (t_n, t_{n+1}).
8. Derive the difference equation for the 2-step Adams-Moulton
method:

    w_{n+1} = w_n + (5/12) h f(t_{n+1}, w_{n+1}) + (8/12) h f(t_n, w_n) - (1/12) h f(t_{n-1}, w_{n-1}),

with the local truncation error

    τ_{n+1}(h) = -(h^3/24) y^{(4)}(µ_n)

where µ_n ∈ (t_{n-1}, t_{n+1}).
9. Derive the difference equation for the 3-step Adams-Moulton
method:

    w_{n+1} = w_n + (9/24) h f(t_{n+1}, w_{n+1}) + (19/24) h f(t_n, w_n) - (5/24) h f(t_{n-1}, w_{n-1}) + (1/24) h f(t_{n-2}, w_{n-2}),

with the local truncation error

    τ_{n+1}(h) = -(19h^4/720) y^{(5)}(µ_n)

where µ_n ∈ (t_{n-2}, t_{n+1}).
5
C O N S I S T E N C Y, C O N V E R G E N C E A N D S TA B I L I T Y

5.1 one step methods

Stability is why some methods give satisfactory results and some do


not.

Definition A one-step method with local truncation error τ_i(h) at
the ith step is said to be consistent with the differential equation it
approximates if

    lim_{h→0} ( max_{1≤i≤N} |τ_i(h)| ) = 0,

where

    τ_i(h) = (y_{i+1} - y_i)/h - F(t_i, y_i, h, f).

That is, as h → 0 we require F(t_i, y_i, h, f) → f(t, y).

Definition A one-step method difference equation is said to be
convergent with respect to the differential equation it approximates if

    lim_{h→0} max_{1≤i≤N} |y(t_i) - w_i| = 0,

where w_i is the approximation obtained from the difference method at
the ith step.

For Euler's method we have

    max_{1≤i≤N} |w_i - y(t_i)| ≤ (Mh/2L) |e^{L(b-a)} - 1|,

so Euler's method is convergent with respect to the differential equation.

Theorem 5.1.1. Suppose the initial value problem

    y' = f(t, y),  a ≤ t ≤ b,  y(a) = α

is approximated by a one step difference method in the form

    w_0 = α
    w_{i+1} = w_i + hF(t_i, w_i : h).

Suppose also that a number h_0 > 0 exists and that F(t_i, w_i : h) is continuous
and satisfies a Lipschitz Condition in the variable w with Lipschitz constant
L on the set

    D = {(t, w, h) | a ≤ t ≤ b, -∞ < w < ∞, 0 ≤ h ≤ h_0}.

Then

1. The method is stable;

2. The difference method is convergent if and only if it is consistent,
   that is, if and only if

       F(t_i, w_i : 0) = f(t, y) for all a ≤ t ≤ b;

3. If a function τ exists and, for each i = 1, 2, .., N, the local truncation
   error τ_i(h) satisfies |τ_i(h)| ≤ τ(h) whenever 0 ≤ h ≤ h_0, then

       |y(t_i) - w_i| ≤ (τ(h)/L) e^{L(t_i - a)}.

Example 24
Consider the modified Euler method given by

    w_0 = α
    w_{i+1} = w_i + (h/2)[f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i))].

Verify that this method satisfies the theorem. For this method

    F(t, w : h) = (1/2) f(t, w) + (1/2) f(t + h, w + h f(t, w)).

If f satisfies the Lipschitz Condition on {(t, w) | a ≤ t ≤ b, -∞ < w < ∞}
in the variable w with constant L, then

    F(t, w : h) - F(t, ŵ : h) = (1/2) f(t, w) + (1/2) f(t + h, w + h f(t, w))
                                - (1/2) f(t, ŵ) - (1/2) f(t + h, ŵ + h f(t, ŵ)),

and the Lipschitz Condition on f leads to

    |F(t, w : h) - F(t, ŵ : h)| ≤ (1/2) L|w - ŵ| + (1/2) L|w + h f(t, w) - ŵ - h f(t, ŵ)|
                                ≤ L|w - ŵ| + (1/2) L|h f(t, w) - h f(t, ŵ)|
                                ≤ L|w - ŵ| + (1/2) hL^2 |w - ŵ|
                                ≤ (L + (1/2) hL^2)|w - ŵ|.

Therefore, F satisfies a Lipschitz Condition in w on the set

    D = {(t, w, h) | a ≤ t ≤ b, -∞ < w < ∞, 0 ≤ h ≤ h_0}

for any h_0 > 0 with constant L' = L + (1/2) h_0 L^2.
Finally, if f is continuous on {(t, w) | a ≤ t ≤ b, -∞ < w < ∞}, then F is
continuous on

    D = {(t, w, h) | a ≤ t ≤ b, -∞ < w < ∞, 0 ≤ h ≤ h_0},

so this implies that the method is stable. Letting h = 0 we have

    F(t, w : 0) = (1/2) f(t, w) + (1/2) f(t + 0, w + 0 f(t, w)) = f(t, w),

so the consistency condition holds.
We know that the method is of order O(h^2).
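As an illustration (not part of the original notes), a short script can confirm the O(h^2) behaviour of the modified Euler method on the test problem y' = y, y(0) = 1, whose exact solution is e^t:

```python
import numpy as np

def modified_euler(f, a, b, alpha, N):
    """w_{i+1} = w_i + h/2 [f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i))]."""
    h = (b - a) / N
    t = np.linspace(a, b, N + 1)
    w = np.zeros(N + 1)
    w[0] = alpha
    for i in range(N):
        k = f(t[i], w[i])
        w[i + 1] = w[i] + 0.5 * h * (k + f(t[i + 1], w[i] + h * k))
    return t, w

f = lambda t, y: y                    # test problem y' = y, exact solution e^t
for N in [10, 20, 40]:
    t, w = modified_euler(f, 0.0, 1.0, 1.0, N)
    print(N, abs(w[-1] - np.e))       # error roughly quarters as h halves
```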

5.2 multi-step methods

The general multi-step method for approximating the solution to the
Initial Value Problem

    y' = f(t, y),  a ≤ t ≤ b,  y(a) = α

can be written in the form

    w_0 = α,  w_1 = α_1,  ...,  w_{m-1} = α_{m-1}

    w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + ... + a_0 w_{i+1-m} + hF(t_i, h, w_{i+1}, ..., w_{i+1-m}),

for each i = m - 1, ..., N - 1, where a_0, a_1, .., a_{m-1} are constants.
The local truncation error for a multi-step method expressed in this
form is

    τ_{i+1}(h) = (y(t_{i+1}) - a_{m-1} y(t_i) - a_{m-2} y(t_{i-1}) - ... - a_0 y(t_{i+1-m}))/h
                 - F(t_i, h, y(t_{i+1}), ..., y(t_{i+1-m}))

for each i = m - 1, ..., N - 1.

Definition A multi-step method is consistent if both

    lim_{h→0} |τ_i(h)| = 0  for all i = m, ..., N,

    lim_{h→0} |α_i - y(t_i)| = 0  for all i = 0, .., m - 1.

Definition A multi-step method is convergent if the solution to the
difference equation approaches the solution of the differential
equation as the step size approaches zero, that is,

    lim_{h→0} |w_i - y(t_i)| = 0.

Theorem 5.2.1. Suppose the Initial Value Problem

    y' = f(t, y),  a ≤ t ≤ b,  y(a) = α

is approximated by an Adams predictor-corrector method with an m-step
Adams-Bashforth predictor equation

    w_{i+1} = w_i + h[b_{m-1} f(t_i, w_i) + ... + b_0 f(t_{i+1-m}, w_{i+1-m})]

with local truncation error τ_{i+1}(h), and an (m-1)-step Adams-Moulton
equation

    w_{i+1} = w_i + h[b̂_{m-1} f(t_{i+1}, w_{i+1}) + ... + b̂_0 f(t_{i+2-m}, w_{i+2-m})]

with local truncation error τ̂_{i+1}(h). In addition suppose that f(t, y) and
f_y(t, y) are continuous on D = {(t, y) | a ≤ t ≤ b, -∞ < y < ∞} and that
f_y is bounded. Then the local truncation error σ_{i+1}(h) of the
predictor-corrector method is

    σ_{i+1}(h) = τ̂_{i+1}(h) + h τ_{i+1}(h) b̂_{m-1} (∂f/∂y)(t_{i+1}, θ_{i+1})

where θ_{i+1} ∈ [0, hτ_{i+1}(h)].
Moreover, there exist constants k_1 and k_2 such that

    |w_i - y(t_i)| ≤ [ max_{0≤j≤m-1} |w_j - y(t_j)| + k_1 σ(h) ] e^{k_2(t_i - a)},

where σ(h) = max_{m≤i≤N} |σ_i(h)|.

Definition Associated with the difference equation

    w_0 = α,  w_1 = α_1,  ...,  w_{m-1} = α_{m-1}

    w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + ... + a_0 w_{i+1-m} + hF(t_i, h, w_{i+1}, ..., w_{i+1-m}),

is the characteristic equation given by

    λ^m - a_{m-1} λ^{m-1} - a_{m-2} λ^{m-2} - ... - a_0 = 0.

Definition Let λ_1, ..., λ_m denote the roots of the characteristic
equation

    λ^m - a_{m-1} λ^{m-1} - a_{m-2} λ^{m-2} - ... - a_0 = 0

associated with the multi-step difference method

    w_0 = α,  w_1 = α_1,  ...,  w_{m-1} = α_{m-1}

    w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + ... + a_0 w_{i+1-m} + hF(t_i, h, w_{i+1}, ..., w_{i+1-m}).

If |λ_i| ≤ 1 for each i = 1, ..., m and all roots with absolute value 1
are simple roots, then the difference equation is said to satisfy the
root condition.

Definition 1. Methods that satisfy the root condition and have


λ = 1 as the only root of the characteristic equation of mag-
nitude one are called strongly stable;

2. Methods that satisfy the root condition and have more than one
distinct root with magnitude one are called weakly stable;

3. Methods that do not satisfy the root condition are called unsta-
ble.

Theorem 5.2.2. A multi-step method of the form

    w_0 = α,  w_1 = α_1,  ...,  w_{m-1} = α_{m-1}

    w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + ... + a_0 w_{i+1-m} + hF(t_i, h, w_{i+1}, ..., w_{i+1-m})

is stable if and only if it satisfies the root condition. Moreover, if the
difference method is consistent with the differential equation, then the
method is stable if and only if it is convergent.

Example 25
We have seen that the fourth order Adams-Bashforth method can be
expressed as

    w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + ... + a_0 w_{i+1-m} + hF(t_i, h, w_{i+1}, w_i, ..., w_{i-3})

where

    F(t_i, h, w_{i+1}, w_i, ..., w_{i-3}) = (1/24)[55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3})],

so m = 4, a_0 = 0, a_1 = 0, a_2 = 0 and a_3 = 1.
The characteristic equation is

    λ^4 - λ^3 = λ^3(λ - 1) = 0

which has the roots λ_1 = 1, λ_2 = 0, λ_3 = 0 and λ_4 = 0.
It satisfies the root condition and is strongly stable.

Example 26
The explicit multi-step method given by

    w_{i+1} = w_{i-3} + (4h/3)[2 f(t_i, w_i) - f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})]

has the characteristic equation

    λ^4 - 1 = 0

which has the roots λ_1 = 1, λ_2 = -1, λ_3 = i and λ_4 = -i. The method
satisfies the root condition, but is only weakly stable.

Example 27
The explicit multi-step method given by

    w_{i+1} = a w_{i-3} + (4h/3)[2 f(t_i, w_i) - f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})]

has the characteristic equation

    λ^4 - a = 0

which has the roots λ_1 = a^{1/4}, λ_2 = -a^{1/4}, λ_3 = i a^{1/4} and
λ_4 = -i a^{1/4}. When a > 1 the method does not satisfy the root
condition, and hence is unstable.
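The root condition can be checked numerically. A minimal sketch using numpy.roots (the multiplicity check for repeated roots on the unit circle is omitted, so this assumes the roots of magnitude one are simple):

```python
import numpy as np

def check_root_condition(coeffs, tol=1e-12):
    """coeffs: characteristic polynomial coefficients, highest power first."""
    roots = np.roots(coeffs)
    on_circle = np.isclose(abs(roots), 1.0, atol=1e-8)
    if np.any(abs(roots) > 1 + tol):
        return "unstable"
    return "strongly stable" if np.sum(on_circle) == 1 else "weakly stable"

print(check_root_condition([1, -1, 0, 0, 0]))   # lambda^4 - lambda^3: strongly stable
print(check_root_condition([1, 0, 0, 0, -1]))   # lambda^4 - 1: weakly stable
print(check_root_condition([1, 0, 0, 0, -1.5])) # lambda^4 - 1.5: unstable
```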

Example 28
Solving the Initial Value Problem

    y' = -0.5y^2,  y(0) = 1

using the weakly stable method

    w_{i+1} = w_{i-3} + (4h/3)[2 f(t_i, w_i) - f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})]

and two different unstable methods:

1.
    w_{i+1} = 1.0001 w_{i-3} + (4h/3)[2 f(t_i, w_i) - f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})]

2.
    w_{i+1} = 1.5 w_{i-3} + (4h/3)[2 f(t_i, w_i) - f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})]

By Example 27, the characteristic equation of the last two methods is
λ^4 - a = 0 with a = 1.0001 and a = 1.5 respectively; since a > 1, the
root condition fails and both methods are unstable.

Figure 5.2.1: Python output: Left: weakly stable solution, middle: unstable, right: very unstable.
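A sketch of the experiment behind this figure, assuming (as an illustration, not stated in the text) that the start-up values w_0, ..., w_3 are taken from the exact solution y(t) = 1/(1 + 0.5t):

```python
import numpy as np

f = lambda t, y: -0.5 * y**2
exact = lambda t: 1.0 / (1.0 + 0.5 * t)   # exact solution of y' = -0.5y^2, y(0) = 1

def four_step(a_coef, N=40, h=0.1):
    t = np.arange(N + 1) * h
    w = exact(t[:4]).tolist()             # start-up values from the exact solution
    for i in range(3, N):
        w.append(a_coef * w[i - 3]
                 + 4 * h / 3 * (2 * f(t[i], w[i])
                                - f(t[i - 1], w[i - 1])
                                + f(t[i - 2], w[i - 2])))
    return t, np.array(w)

for a in (1.0, 1.0001, 1.5):              # weakly stable, unstable, very unstable
    t, w = four_step(a)
    print(a, abs(w[-1] - exact(t[-1])))   # error grows as a moves past 1
```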

5.3 problem sheet 4

1. Determine whether the 2-step Adams-Bashforth Method is con-


sistent, stable and convergent

    w_{n+1} = w_n + (3/2) h f(t_n, w_n) - (1/2) h f(t_{n-1}, w_{n-1}).
2 2

2. Determine whether the 2-step Adams-Moulton Method is con-


sistent, stable and convergent

    w_{n+1} = w_n + (5/12) h f(t_{n+1}, w_{n+1}) + (8/12) h f(t_n, w_n) - (1/12) h f(t_{n-1}, w_{n-1}).

3. Determine whether the following linear multistep methods are
consistent, stable and convergent
a)

    w_{n+1} = w_{n-1} + (1/3) h [f(t_{n+1}, w_{n+1}) + 4 f(t_n, w_n) + f(t_{n-1}, w_{n-1})].

b)

    w_{n+1} = (4/3) w_n - (1/3) w_{n-1} + (2/3) h [f(t_{n+1}, w_{n+1})].

5.4 initial value problem review questions

1. a) Derive the Euler approximation and show that it has a local
truncation error of O(h) for the Ordinary Differential Equation

    y'(x) = f(x, y)

with initial condition

    y(a) = α.

[8 marks]
b) Suppose f is continuous and satisfies a Lipschitz condition with
constant L on D = {(t, y) | a ≤ t ≤ b, -∞ < y < ∞} and that a constant M
exists with the property that

    |y''(t)| ≤ M.

Let y(t) denote the unique solution of the IVP

    y' = f(t, y),  a ≤ t ≤ b,  y(a) = α

and w_0, w_1, ..., w_N be the approximations generated by the Euler
method for some positive integer N. Then show for i = 0, 1, ..., N that

    |y(t_i) - w_i| ≤ (Mh/2L)|e^{L(t_i - a)} - 1|.

You may assume the two lemmas:
If s and t are positive real numbers and {a_i}_{i=0}^N is a sequence
satisfying a_0 ≥ -t/s and a_{i+1} ≤ (1 + s)a_i + t, then

    a_{i+1} ≤ e^{(i+1)s} (a_0 + t/s) - t/s.

For all x ≥ -1 and any positive m we have

    0 ≤ (1 + x)^m ≤ e^{mx}.

[17 marks]
c) Use Euler's method to estimate the solution of

    y' = (1 - x)y^2 - y;  y(0) = 1

at x = 1, using h = 0.25.
[8 marks]

2. a) Derive the difference equation for the Midpoint Runge-Kutta
method

    w_{n+1} = w_n + k_2
    k_1 = h f(t_n, w_n)
    k_2 = h f(t_n + (1/2)h, w_n + (1/2)k_1)

for solving the ordinary differential equation

    dy/dt = f(t, y),  y(t_0) = y_0,

by using a formula of the form

    w_{n+1} = w_n + a k_1 + b k_2

where k_1 is defined as above,

    k_2 = h f(t_n + αh, w_n + βk_1),

and a, b, α and β are constants to be determined. Prove that a + b = 1
and bα = bβ = 1/2, and choose appropriate values to give the Midpoint
Runge-Kutta method.
[18 marks]
b) Show that the Midpoint Runge-Kutta method is stable.
[5 marks]
c) Use the Runge-Kutta method to approximate the solution of the
following initial value problem

    y' = 1 + (t - y)^2,  2 ≤ t ≤ 3,  y(2) = 1

with h = 0.2, given the exact solution y(t) = t + 1/(1 - t).
[10 marks]

3. a) Derive the two step Adams-Bashforth method:

    w_{n+1} = w_n + (3/2) h f(t_n, w_n) - (1/2) h f(t_{n-1}, w_{n-1}),

and the local truncation error

    τ_{n+1}(h) = (5h^2/12) y'''(µ_n).

[18 marks]
b) Apply the two step Adams-Bashforth method to approximate the
solution of the initial value problem:

    y' = ty - y,  (0 ≤ t ≤ 2),  y(0) = 1,

using N = 4 steps, given that y_1 = 0.6872.
[15 marks]

4. a) Derive the Adams-Moulton two step method and its truncation
error, which is of the form

    w_0 = α_0,  w_1 = α_1

    w_{n+1} = w_n + (h/12)[5 f(t_{n+1}, w_{n+1}) + 8 f(t_n, w_n) - f(t_{n-1}, w_{n-1})]

with the local truncation error

    τ_{n+1}(h) = -(h^3/24) y^{(4)}(µ_n).

[23 marks]
b) Define the terms strongly stable, weakly stable and unstable with
respect to the characteristic equation.
[5 marks]
c) Show that the Adams-Bashforth two step method is strongly stable.
[5 marks]

5. a) Given the initial value problem:

    y' = f(t, y),  y(t_0) = y_0,

and a numerical method which generates a numerical solution
(w_n)_{n=0}^N, explain what it means for the method to be convergent.
[5 marks]
b) Using the 2-step Adams-Bashforth method:

    w_{n+1} = w_n + (3/2) h f(t_n, w_n) - (1/2) h f(t_{n-1}, w_{n-1})

as a predictor, and the 2-step Adams-Moulton method:

    w_{n+1} = w_n + (h/12)[5 f(t_{n+1}, w_{n+1}) + 8 f(t_n, w_n) - f(t_{n-1}, w_{n-1})]

as a corrector, apply the 2-step Adams predictor-corrector method to
approximate the solution of the initial value problem

    y' = ty^3 - y,  (0 ≤ t ≤ 2),  y(0) = 1,

using N = 4 steps, given y_1 = 0.5.
[18 marks]
c) Using the predictor-corrector, define a bound for the error by
controlling the step size.
[10 marks]

6. a) Given the Midpoint (Runge-Kutta) method

    w_0 = y_0

    w_{i+1} = w_i + h f(x_i + h/2, w_i + (h/2) f(x_i, w_i)),

assume that the Runge-Kutta method satisfies the Lipschitz condition.
Then for the initial value problem

    y' = f(x, y),  y(x_0) = Y_0,

show that the numerical solution {w_n} satisfies

    max_{a≤x≤b} |y(x_n) - w_n| ≤ e^{(b-a)L} |y_0 - w_0| + [ (e^{(b-a)L} - 1)/L ] τ(h)

where

    τ(h) = max_{a≤x≤b} |τ_n(y)|.

If the consistency condition

    δ(h) → 0 as h → 0,

where

    δ(h) = max_{a≤x≤b} |f(x, y) - F(x, y; h; f)|,

is satisfied, then the numerical solution w_n converges to Y(x_n).
[18 marks]
b) Consider the differential equation

    y' - y + x - 2 = 0,  0 ≤ x ≤ 1,  y(0) = 0.

Apply the midpoint method to approximate the solution at y(0.4) using
h = 0.2.
[11 marks]
c) How would you improve on this result?
[4 marks]
Part II

N U M E R I C A L S O L U T I O N S TO B O U N D A RY
VA L U E P R O B L E M S
6
B O U N D A R Y VA L U E P R O B L E M S

6.1 systems of equations

An mth-order system of first order Initial Value Problems can be
expressed in the form

    du_1/dt = f_1(t, u_1, ..., u_m)
    du_2/dt = f_2(t, u_1, ..., u_m)
    ...                                                          (30)
    du_m/dt = f_m(t, u_1, ..., u_m)

for a ≤ t ≤ b with the initial conditions

    u_1(a) = α_1
    u_2(a) = α_2
    ...                                                          (31)
    u_m(a) = α_m.

This can also be written in vector form

    u' = f(t, u)

with initial conditions

    u(a) = α.
Definition The function f(t, u_1, ..., u_m), defined on the set

    D = {(t, u_1, .., u_m) | a ≤ t ≤ b, -∞ < u_i < ∞, i = 1, .., m},

is said to satisfy a Lipschitz Condition on D in the variables
u_1, ..., u_m if a constant L, the Lipschitz Constant, exists with the
property that

    |f(t, u_1, ..., u_m) - f(t, z_1, z_2, ..., z_m)| ≤ L Σ_{j=1}^m |u_j - z_j|

for all (t, u_1, ..., u_m) and (t, z_1, z_2, ..., z_m) in D.


Theorem 6.1.1. Suppose the functions f_1, ..., f_m are continuous on

    D = {(t, u_1, .., u_m) | a ≤ t ≤ b, -∞ < u_i < ∞, i = 1, .., m}

and satisfy a Lipschitz Condition there. Then the system of 1st order
equations, subject to the initial conditions, has a unique solution
u_1(t), u_2(t), ..., u_m(t) for a ≤ t ≤ b.

Example 29
Using the Euler method on the system

    u' = u^2 - 2uv,           u(0) = 1
    v' = tu + u^2 sin(v),     v(0) = -1

for 0 ≤ t ≤ 0.5 and h = 0.05, the general Euler difference system of
equations is of the form

    û_{i+1} = û_i + h f(t_i, û_i, v̂_i)
    v̂_{i+1} = v̂_i + h g(t_i, û_i, v̂_i).

Applied to the Initial Value Problem,

    û_{i+1} = û_i + 0.05(û_i^2 - 2û_i v̂_i)
    v̂_{i+1} = v̂_i + 0.05(t_i û_i + û_i^2 sin(v̂_i)).

We know for i = 0 that û_0 = 1 and v̂_0 = -1 from the initial conditions.
For i = 0 we have

    û_1 = û_0 + 0.05(û_0^2 - 2û_0 v̂_0) = 1.15
    v̂_1 = v̂_0 + 0.05(t_0 û_0 + û_0^2 sin(v̂_0)) = -1.042

and so forth.
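A minimal sketch of this computation (the function names are illustrative, not from the text):

```python
import numpy as np

def euler_system(f, g, u0, v0, t0, h, steps):
    """Euler's method for the coupled system u' = f(t,u,v), v' = g(t,u,v)."""
    u, v, t = [u0], [v0], t0
    for _ in range(steps):
        u.append(u[-1] + h * f(t, u[-1], v[-1]))
        v.append(v[-1] + h * g(t, u[-2], v[-1]))  # use old u: u[-2] after append
        t += h
    return np.array(u), np.array(v)

f = lambda t, u, v: u**2 - 2 * u * v            # u' = u^2 - 2uv
g = lambda t, u, v: t * u + u**2 * np.sin(v)    # v' = tu + u^2 sin v
u, v = euler_system(f, g, 1.0, -1.0, 0.0, 0.05, 10)
print(u[1], v[1])   # 1.15 and approximately -1.042, as in the worked example
```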

6.2 higher order equations

Definition A general mth order initial value problem

    y^{(m)}(t) = f(t, y, y', ..., y^{(m-1)}),  a ≤ t ≤ b,

with initial conditions

    y(a) = α_1, y'(a) = α_2, ..., y^{(m-1)}(a) = α_m,

can be converted into a system of equations as in (30) and (31).
Let u_1(t) = y(t), u_2(t) = y'(t), ..., u_m(t) = y^{(m-1)}(t). This
produces the first order system of equations

    du_1/dt = dy/dt = u_2
    du_2/dt = dy'/dt = u_3
    ...
    du_{m-1}/dt = dy^{(m-2)}/dt = u_m
    du_m/dt = dy^{(m-1)}/dt = f(t, y, y', ..., y^{(m-1)}) = f(t, u_1, ..., u_m),

with initial conditions

    u_1(a) = y(a) = α_1
    u_2(a) = y'(a) = α_2
    ...
    u_m(a) = y^{(m-1)}(a) = α_m.

Example 30

    y'' + 3y' + 2y = e^t

with initial conditions y(0) = 1 and y'(0) = 2 can be converted to the
system

    u' = v,                   u(0) = 1
    v' = e^t - 2u - 3v,       v(0) = 2.

The Euler difference equations are of the form

    û_{i+1} = û_i + h v̂_i
    v̂_{i+1} = v̂_i + h(e^{t_i} - 2û_i - 3v̂_i).

6.3 boundary value problems

Consider the second order differential equation

    y'' = f(x, y, y')                                            (32)

defined on an interval a ≤ x ≤ b. Here f is a function of three
variables and y is an unknown. The general solution to (32) contains
two arbitrary constants, so in order to determine it uniquely it is
necessary to impose two additional conditions on y. When one of these
is given at x = a and the other at x = b, the problem is called a
boundary value problem, and the associated conditions are called
boundary conditions.
The simplest type of boundary conditions are

    y(a) = α
    y(b) = β

for given numbers α and β. However more general conditions such as

    λ_1 y(a) + λ_2 y'(a) = α_1
    µ_1 y(b) + µ_2 y'(b) = α_2

for given numbers α_i, λ_i and µ_i (i = 1, 2) are sometimes imposed.
Unlike Initial Value Problems, which are uniquely solvable, boundary
value problems can have no solution or many solutions.

Example 31
The differential equation

    y'' + y = 0

can be written as the first order system

    y_1(x) = y(x),  y_2(x) = y'(x)
    y_1' = y_2
    y_2' = -y_1.

It has the general solution

    w(x) = C_1 sin(x) + C_2 cos(x)

where C_1, C_2 are constants.
The special solution w(x) = sin(x) is the only solution that satisfies

    w(0) = 0,  w(π/2) = 1.

All functions of the form w(x) = C_1 sin(x), where C_1 is an arbitrary
constant, satisfy

    w(0) = 0,  w(π) = 0,

while there is no solution for the boundary conditions

    w(0) = 0,  w(π) = 1.

While we cannot state that all boundary value problems are uniquely
solvable, we can say a few things.

6.4 some theorems about boundary value problem

Writing the general linear Boundary Value Problem

    y'' = p(x)y' + q(x)y + g(x),  a < x < b,

    A (y(a), y'(a))^T + B (y(b), y'(b))^T = (γ_1, γ_2)^T,        (33)

the homogeneous problem is the case in which g(x) = 0 and γ_1 = γ_2 = 0.

Theorem 6.4.1. The non-homogeneous problem (33) has a unique solution
y(x) on [a, b] for each set of given {g(x), γ_1, γ_2} if and only if the
homogeneous problem has only the trivial solution y(x) = 0.

For conditions under which the homogeneous problem (33) has only the
zero solution we consider the following subset of problems:

    y'' = p(x)y' + q(x)y + g(x),  a < x < b,
    a_0 y(a) - a_1 y'(a) = γ_1,                                  (34)
    b_0 y(b) + b_1 y'(b) = γ_2.

Assume the following conditions:

    q(x) > 0,  a ≤ x ≤ b,
    a_0, a_1 ≥ 0,  b_0, b_1 ≥ 0,                                 (35)
    |a_1| + |a_0| ≠ 0,  |b_1| + |b_0| ≠ 0,  |a_0| + |b_0| ≠ 0.

Then the homogeneous problem for (34) has only the zero solution;
therefore the theorem is applicable and the non-homogeneous problem has
a unique solution for each set of data {g(x), γ_1, γ_2}.
The theory for a non-linear problem is far more complicated than that
of a linear problem. Looking at the class of problems

    y'' = f(x, y, y'),  a < x < b,
    a_0 y(a) - a_1 y'(a) = γ_1,                                  (36)
    b_0 y(b) + b_1 y'(b) = γ_2,

the function f is assumed to satisfy the following Lipschitz Condition:

    |f(x, u_1, v) - f(x, u_2, v)| ≤ K_1 |u_1 - u_2|,
    |f(x, u, v_1) - f(x, u, v_2)| ≤ K_2 |v_1 - v_2|,             (37)

for all points in the region

    R = {(x, u, v) | a ≤ x ≤ b, -∞ < u, v < ∞}.


Theorem 6.4.2. Suppose f(x, u, v) in problem (36) is continuous on the
region R and satisfies the Lipschitz condition (37). In addition assume
that f, on R, satisfies

    ∂f(x, u, v)/∂u > 0,  |∂f(x, u, v)/∂v| ≤ M

for some constant M > 0. For the boundary conditions of (36) assume
that |a_1| + |a_0| ≠ 0, |b_1| + |b_0| ≠ 0, |a_0| + |b_0| ≠ 0. Then the
boundary value problem has a unique solution.

Example 32
The boundary value problem

    y'' + e^{-xy} + sin(y') = 0,  1 < x < 2,

with y(1) = y(2) = 0, has

    f(x, y, y') = -e^{-xy} - sin(y').

Since

    ∂f(x, y, y')/∂y = x e^{-xy} > 0

and

    |∂f(x, y, y')/∂y'| = |-cos(y')| ≤ 1,

this problem has a unique solution.

6.5 shooting methods

The principle of the shooting method is to change our original boundary
value problem into two Initial Value Problems.

6.5.1 Linear Shooting method

Looking at problem class (34), we break this down into two Initial
Value Problems:

    y_1'' = p(x)y_1' + q(x)y_1 + r(x),  y_1(a) = α,  y_1'(a) = 0,
    y_2'' = p(x)y_2' + q(x)y_2,         y_2(a) = 0,  y_2'(a) = 1.   (38)

Combining these results together gives the unique solution

    y(x) = y_1(x) + ((β - y_1(b))/y_2(b)) y_2(x)                    (39)

provided that y_2(b) ≠ 0.

Example 33

    y'' = 2y' + 3y - 6

with boundary conditions

    y(0) = 3,
    y(1) = e^3 + 2.

The exact solution is

    y = e^{3x} + 2.

Breaking this boundary value problem into two Initial Value Problems:

    y_1'' = 2y_1' + 3y_1 - 6,  y_1(a) = 3,  y_1'(a) = 0,            (40)

    y_2'' = 2y_2' + 3y_2,      y_2(a) = 0,  y_2'(a) = 1.            (41)

Discretising (40), let u_1 = y_1 and u_2 = y_1':

    u_1' = u_2,                 u_1(a) = 3,
    u_2' = 2u_2 + 3u_1 - 6,     u_2(a) = 0.

Using the Euler method we have the two difference equations

    u_{1,i+1} = u_{1,i} + h u_{2,i},
    u_{2,i+1} = u_{2,i} + h(2u_{2,i} + 3u_{1,i} - 6).

Discretising (41), let w_1 = y_2 and w_2 = y_2':

    w_1' = w_2,                 w_1(a) = 0,
    w_2' = 2w_2 + 3w_1,         w_2(a) = 1.

Using the Euler method we have the two difference equations

    w_{1,i+1} = w_{1,i} + h w_{2,i},
    w_{2,i+1} = w_{2,i} + h(2w_{2,i} + 3w_{1,i}).

Combining all these we get our solution

    y_i = u_{1,i} + ((β - u_1(b))/w_1(b)) w_{1,i}.

It can be said that

    |y_i - y(x_i)| ≤ K h^n |1 + w_{1,i}/u_{1,i}|

where O(h^n) is the order of the method.
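A sketch of the linear shooting method for Example 33, using Euler's method for both initial value problems (N = 1000 is an arbitrary choice):

```python
import numpy as np

# Linear shooting for y'' = 2y' + 3y - 6, y(0) = 3, y(1) = e^3 + 2.
N = 1000
a, b = 0.0, 1.0
h = (b - a) / N
alpha, beta = 3.0, np.exp(3) + 2

u1, u2 = np.zeros(N + 1), np.zeros(N + 1)   # IVP (40): y1(a) = 3, y1'(a) = 0
w1, w2 = np.zeros(N + 1), np.zeros(N + 1)   # IVP (41): y2(a) = 0, y2'(a) = 1
u1[0], w2[0] = alpha, 1.0
for i in range(N):
    u1[i+1] = u1[i] + h * u2[i]
    u2[i+1] = u2[i] + h * (2 * u2[i] + 3 * u1[i] - 6)
    w1[i+1] = w1[i] + h * w2[i]
    w2[i+1] = w2[i] + h * (2 * w2[i] + 3 * w1[i])

y = u1 + (beta - u1[-1]) / w1[-1] * w1       # combine the two solutions
x = np.linspace(a, b, N + 1)
print(np.max(np.abs(y - (np.exp(3 * x) + 2))))   # error, O(h) for Euler
```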

Figure 6.5.1: Python output: Shooting Method

Figure 6.5.2: Python output: Shooting Method error

6.5.2 The Shooting method for non-linear equations

Example 34

    y'' = -2yy',  y(0) = 0,  y(1) = 1.

The corresponding initial value problem is

    y'' = -2yy',  y(0) = 0,  y'(0) = λ.                             (42)

This reduces to the first order system, letting y_1 = y and y_2 = y':

    y_1' = y_2,        y_1(0) = 0,
    y_2' = -2y_1 y_2,  y_2(0) = λ.

Take λ = 1 and λ = 2 as the first and second guesses of y'(0). The
solution of (42) depends on the two variables x and λ.

How do we choose λ?
Our goal is to choose λ such that

    F(λ) = y(b, λ) - β = 0.

We use Newton's method to generate the sequence λ_k with only the
initial approximation λ_0. The iteration has the form

    λ_k = λ_{k-1} - (y(b, λ_{k-1}) - β) / (dy/dλ)(b, λ_{k-1})

and requires knowledge of (dy/dλ)(b, λ_{k-1}). This presents a
difficulty since an explicit representation for y(b, λ) is unknown; we
only know y(b, λ_0), y(b, λ_1), ..., y(b, λ_{k-1}).
Rewriting our Initial Value Problem so that it depends on both x and λ:

    y''(x, λ) = f(x, y(x, λ), y'(x, λ)),  a ≤ x ≤ b,
    y(a, λ) = α,  y'(a, λ) = λ.

Differentiating with respect to λ, and letting z(x, λ) denote
∂y/∂λ(x, λ), we have

    ∂/∂λ (y'') = ∂f/∂λ = (∂f/∂y)(∂y/∂λ) + (∂f/∂y')(∂y'/∂λ).

Now

    ∂y'/∂λ = ∂/∂λ (∂y/∂x) = ∂/∂x (∂y/∂λ) = ∂z/∂x = z',

so we have

    z''(x, λ) = (∂f/∂y) z(x, λ) + (∂f/∂y') z'(x, λ)

for a ≤ x ≤ b, with the boundary conditions

    z(a, λ) = 0,  z'(a, λ) = 1.

Now we have

    λ_k = λ_{k-1} - (y(b, λ_{k-1}) - β) / z(b, λ_{k-1}).

We can solve the original non-linear Boundary Value Problem by solving
the two Initial Value Problems.

Example 35
(cont.)

    ∂f/∂y = -2y',  ∂f/∂y' = -2y.

We now have the two Initial Value Problems

    y'' = -2yy',  y(0) = 0,  y'(0) = λ,

    z'' = (∂f/∂y) z(x, λ) + (∂f/∂y') z'(x, λ)
        = -2y'z - 2yz',  z(0) = 0,  z'(0) = 1.

Discretising, we let y_1 = y, y_2 = y', z_1 = z and z_2 = z':

    y_1' = y_2,                   y_1(0) = 0,
    y_2' = -2y_1 y_2,             y_2(0) = λ_k,
    z_1' = z_2,                   z_1(0) = 0,
    z_2' = -2z_1 y_2 - 2y_1 z_2,  z_2(0) = 1,

with

    λ_k = λ_{k-1} - (y_1(b) - β) / z_1(b).

Then solve using a one step method.
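A minimal sketch of the Newton iteration for Example 35, using Euler's method as the one step solver (the names and iteration count are illustrative):

```python
import numpy as np

# Nonlinear shooting for y'' = -2yy', y(0) = 0, y(1) = 1.
def shoot(lam, N=1000):
    h = 1.0 / N
    y1, y2, z1, z2 = 0.0, lam, 0.0, 1.0
    for _ in range(N):
        y1, y2, z1, z2 = (y1 + h * y2,
                          y2 + h * (-2 * y1 * y2),
                          z1 + h * z2,
                          z2 + h * (-2 * z1 * y2 - 2 * y1 * z2))
    return y1, z1           # y(1, lambda) and z(1, lambda) = dy/dlambda at x = 1

lam, beta = 1.0, 1.0        # first guess lambda_0 = 1
for k in range(8):
    yb, zb = shoot(lam)
    lam = lam - (yb - beta) / zb      # Newton update for lambda
print(lam, shoot(lam)[0])             # converged slope, and y(1) close to 1
```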

Figure 6.5.3: Python output: Nonlinear Shooting Method



Figure 6.5.4: Python output: Nonlinear Shooting Method result

Figure 6.5.5: Python output: Nonlinear Shooting Method λ

6.6 finite difference method

Each finite difference operator can be derived from a Taylor expansion.
Once again looking at a linear second order differential equation

    y'' = p(x)y' + q(x)y + r(x)

on [a, b] subject to boundary conditions

    y(a) = α,  y(b) = β.

As with all cases we divide the interval into evenly spaced mesh points

    x_0 = a,  x_N = b,  x_i = x_0 + ih,  h = (b - a)/N.

We now replace the derivatives y'(x) and y''(x) with the centered
difference approximations

    y'(x_i) = (1/2h)(y(x_{i+1}) - y(x_{i-1})) - (h^2/6) y'''(ξ_i)

    y''(x_i) = (1/h^2)(y(x_{i+1}) - 2y(x_i) + y(x_{i-1})) - (h^2/12) y^{(4)}(µ_i)

for some x_{i-1} ≤ ξ_i, µ_i ≤ x_{i+1}, for i = 1, ..., N-1.
We now have the equation

    (1/h^2)(y(x_{i+1}) - 2y(x_i) + y(x_{i-1})) = p(x_i)(1/2h)(y(x_{i+1}) - y(x_{i-1})) + q(x_i)y(x_i) + r(x_i).

This is rearranged such that we have all the unknowns together:

    (1 + hp(x_i)/2) y(x_{i-1}) - (2 + h^2 q(x_i)) y(x_i) + (1 - hp(x_i)/2) y(x_{i+1}) = h^2 r(x_i)

for i = 1, .., N - 1.
Since the values of p(x_i), q(x_i) and r(x_i) are known, this represents
a linear algebraic equation involving y(x_{i-1}), y(x_i), y(x_{i+1}).
This produces a system of N - 1 linear equations with N - 1 unknowns
y(x_1), ..., y(x_{N-1}).
The first equation, corresponding to i = 1, simplifies to

    -(2 + h^2 q(x_1)) y(x_1) + (1 - hp(x_1)/2) y(x_2) = h^2 r(x_1) - (1 + hp(x_1)/2) α

because of the boundary condition y(a) = α, and for i = N - 1,

    (1 + hp(x_{N-1})/2) y(x_{N-2}) - (2 + h^2 q(x_{N-1})) y(x_{N-1}) = h^2 r(x_{N-1}) - (1 - hp(x_{N-1})/2) β

because y(b) = β.
The values of y_i (i = 1, ..., N - 1) can therefore be found by solving
the tridiagonal system

    Ay = b

where
A is the tridiagonal matrix

    A =
    [ -(2 + h^2 q(x_1))     1 - hp(x_1)/2                                                   ]
    [ 1 + hp(x_2)/2         -(2 + h^2 q(x_2))     1 - hp(x_2)/2                             ]
    [                       ...                   ...                    ...                ]
    [                       1 + hp(x_{N-2})/2     -(2 + h^2 q(x_{N-2}))  1 - hp(x_{N-2})/2  ]
    [                                             1 + hp(x_{N-1})/2     -(2 + h^2 q(x_{N-1})) ]

and

    y = (y_1, y_2, ..., y_{N-2}, y_{N-1})^T,

    b = (h^2 r(x_1) - (1 + hp(x_1)/2)α,  h^2 r(x_2),  ...,  h^2 r(x_{N-2}),  h^2 r(x_{N-1}) - (1 - hp(x_{N-1})/2)β)^T.

Example 36
Looking at the simple case

    d²y/dx² = 4y,  y(0) = 1.1752,  y(1) = 10.0179.

Our difference equation is

    (1/h^2)(y(x_{i+1}) - 2y(x_i) + y(x_{i-1})) = 4y(x_i),  i = 1, .., N - 1.

Dividing [0, 1] into 4 subintervals we have h = (1 - 0)/4 and

    x_i = x_0 + ih = 0 + i(0.25).

In this simple example q(x) = 4, p(x) = 0 and r(x) = 0. Rearranging the
equation we have

    (1/h^2) y(x_{i+1}) - (2/h^2 + 4) y(x_i) + (1/h^2) y(x_{i-1}) = 0.

Multiplying across by h^2,

    y(x_{i+1}) - (2 + 4h^2) y(x_i) + y(x_{i-1}) = 0,

with the boundary conditions y(x_0) = 1.1752 and y(x_4) = 10.0179. Our
equations are of the form

    y(x_2) - 2.25 y(x_1) = -1.1752
    y(x_3) - 2.25 y(x_2) + y(x_1) = 0
    -2.25 y(x_3) + y(x_2) = -10.0179.

Putting this into matrix form:

    [ -2.25   1      0    ] [ y_1 ]   [ -1.1752  ]
    [  1     -2.25   1    ] [ y_2 ] = [  0       ]
    [  0      1     -2.25 ] [ y_3 ]   [ -10.0179 ]

    x     y        Exact sinh(2x + 1)
    0     1.1752   1.1752
    0.25  2.1467   2.1293
    0.5   3.6549   3.6269
    0.75  6.0768   6.0502
    1.0   10.0179  10.0179
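A sketch of Example 36 in Python; here the tridiagonal system is assembled densely and solved with numpy.linalg.solve for simplicity:

```python
import numpy as np

# Finite differences for y'' = 4y, y(0) = 1.1752, y(1) = 10.0179.
N = 4
h = 1.0 / N
alpha, beta = 1.1752, 10.0179
A = np.zeros((N - 1, N - 1))
rhs = np.zeros(N - 1)
for i in range(N - 1):
    A[i, i] = -(2 + 4 * h**2)       # diagonal: -(2 + h^2 q(x_i)) with q = 4
    if i > 0:
        A[i, i - 1] = 1.0
    if i < N - 2:
        A[i, i + 1] = 1.0
rhs[0] -= alpha                      # boundary values move to the right side
rhs[-1] -= beta
y = np.linalg.solve(A, rhs)
print(y)   # approximately [2.1467, 3.6549, 6.0768], cf. the table above
```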


Example 37
Looking at a more involved boundary value problem:

    y'' = xy' - 3y + e^x,  y(0) = 1,  y(1) = 2.

Let N = 5, then h = (1 - 0)/5 = 0.2. The difference equation is of the
form

    (1/h^2)(y(x_{i+1}) - 2y(x_i) + y(x_{i-1})) = x_i (1/2h)(y(x_{i+1}) - y(x_{i-1})) - 3y(x_i) + e^{x_i}.

Rearranging and putting h = 0.2,

    (1 + 0.2x_i/2) y(x_{i-1}) - 1.88 y(x_i) + (1 - 0.2x_i/2) y(x_{i+1}) = 0.04 e^{x_i}.

In matrix form this is

    [ -1.88   0.98   0      0     ] [ y_1 ]   [ 0.04e^{0.2} - 1.02 ]
    [  1.04  -1.88   0.96   0     ] [ y_2 ]   [ 0.04e^{0.4}        ]
    [  0      1.06  -1.88   0.94  ] [ y_3 ] = [ 0.04e^{0.6}        ]
    [  0      0      1.08  -1.88  ] [ y_4 ]   [ 0.04e^{0.8} - 1.84 ]

giving y_1 = 1.4651, y_2 = 1.8196, y_3 = 2.0283 and y_4 = 2.1023.

solving a tri-diagonal system

To solve a tri-diagonal system we can use the method discussed in the
approximation theory.
Part III

N U M E R I C A L S O L U T I O N S T O PA R T I A L
D I F F E R E N T I A L E Q U AT I O N S
7
PA R T I A L D I F F E R E N T I A L E Q U AT I O N S

7.1 introduction

Partial Differential Equations (PDEs) occur frequently in maths,
natural science and engineering.
PDE problems involve rates of change of functions of several variables.
The following involve 2 independent variables:

    -∇²u = -∂²u/∂x² - ∂²u/∂y² = f(x, y)      Poisson Equation
    ∂u/∂t + v ∂u/∂x = 0                      Advection Equation
    ∂u/∂t - D ∂²u/∂x² = 0                    Heat Equation
    ∂²u/∂t² - c² ∂²u/∂x² = 0                 Wave Equation

Here v, D, c are real positive constants. In these cases x, y are the
space coordinates, and t, x are often viewed as time and space
coordinates, respectively.
These are only examples and do not cover all cases. In real occurrences
PDEs usually have 3 or 4 variables.

7.2 pde classification

PDEs in two independent variables x and y have the form

    Φ(x, y, u, ∂u/∂x, ∂u/∂y, ∂²u/∂x², ...) = 0

where the symbol Φ stands for some functional relationship.
As we saw with BVPs this is too general a case, so we must define new
classes of the general PDE.

Definition The order of a PDE is the order of the highest derivative
that appears,
i.e. Poisson is 2nd order and the Advection equation is 1st order.

Most of the mathematical theory of PDEs concerns linear equations of
first or second order.
After order and linearity (linear or non-linear), the most important
classification scheme for PDEs involves geometry.
Introducing the ideas with an example:

Example 38

    α(t, x) ∂u/∂t + β(t, x) ∂u/∂x = γ(t, x)                      (43)

A solution u(t, x) to this PDE defines a surface {t, x, u(t, x)} lying
over some region of the (t, x)-plane.
Consider any smooth path in the (t, x)-plane lying below the solution
{t, x, u(t, x)}.
Such a path has a parameterization (t(s), x(s)), where the parameter s
measures progress along the path.
What is the rate of change du/ds of the solution as we travel along the
path (t(s), x(s))?
The chain rule provides the answer:

    (dt/ds)(∂u/∂t) + (dx/ds)(∂u/∂x) = du/ds.                     (44)

Equation (44) holds for an arbitrary smooth path in the (t, x)-plane.
Restricting attention to a specific family of paths leads to a useful
observation: when

    dt/ds = α(t, x)  and  dx/ds = β(t, x),                       (45)

the simultaneous validity of (43) and (44) requires that

    du/ds = γ(t, x).                                             (46)

Equation (45) defines a family of curves (t(s), x(s)), called
characteristic curves, in the (t, x)-plane.
Equation (46) is an ODE, called the characteristic equation, that the
solution must satisfy along the characteristic curves.
Thus the original PDE collapses to an ODE along the characteristic
curves. Characteristic curves are paths along which information about
the solution to the PDE propagates from points where the initial values
or boundary values are known.

Consider a second order PDE having the form

    α(x, y) ∂²u/∂x² + β(x, y) ∂²u/∂x∂y + γ(x, y) ∂²u/∂y² = Ψ(x, y, u, ∂u/∂x, ∂u/∂y).   (47)

Along an arbitrary smooth curve (x(s), y(s)) in the (x, y)-plane, the
gradient (∂u/∂x, ∂u/∂y) of the solution varies according to the chain
rule:

    (dx/ds)(∂²u/∂x²) + (dy/ds)(∂²u/∂x∂y) = (d/ds)(∂u/∂x),
    (dx/ds)(∂²u/∂x∂y) + (dy/ds)(∂²u/∂y²) = (d/ds)(∂u/∂y).

If the solution u(x, y) is continuously differentiable, then these
relationships together with the original PDE yield the following
system:

    [ α       β       γ     ] [ ∂²u/∂x²   ]   [ Ψ              ]
    [ dx/ds   dy/ds   0     ] [ ∂²u/∂x∂y  ] = [ (d/ds)(∂u/∂x)  ]     (48)
    [ 0       dx/ds   dy/ds ] [ ∂²u/∂y²   ]   [ (d/ds)(∂u/∂y)  ]

By analogy with the first order case we determine the characteristic
curves by where the PDE is redundant with the chain rule. This occurs
when the determinant of the matrix in (48) vanishes, that is, when

    α (dy/ds)² - β (dy/ds)(dx/ds) + γ (dx/ds)² = 0.

Eliminating the parameter s reduces this equation to the equivalent
condition

    α (dy/dx)² - β (dy/dx) + γ = 0.

Formally solving this quadratic for dy/dx, we find

    dy/dx = (β ± √(β² - 4αγ)) / (2α).

This pair of ODEs determines the characteristic curves. From this
equation we divide into 3 classes, each defined with respect to the
discriminant β² - 4αγ:

1. HYPERBOLIC
   β² - 4αγ > 0. This gives two families of real characteristic curves.

2. PARABOLIC
   β² - 4αγ = 0. This gives exactly one family of real characteristic
   curves.

3. ELLIPTIC
   β² - 4αγ < 0. This gives no real characteristic curves.
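A small helper illustrating this test at a point (an illustration, not from the text; the exact-zero comparison is adequate for constant coefficients):

```python
def classify(alpha, beta, gamma):
    """Classify a second order PDE by the sign of the discriminant."""
    disc = beta**2 - 4 * alpha * gamma
    if disc > 0:
        return "hyperbolic"
    return "parabolic" if disc == 0 else "elliptic"

print(classify(1.0, 0.0, -1.0))   # wave equation (c = 1): hyperbolic
print(classify(1.0, 0.0, 1.0))    # Laplace equation: elliptic
print(classify(1.0, 0.0, 0.0))    # heat equation: parabolic
```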

Example 39
The wave equation

    c² ∂²u/∂x² - ∂²u/∂t² = 0.

Equating this with our formula for the characteristics (with α = c²,
β = 0 and γ = -1) we have

    dt/dx = (0 ± √(4c²)) / (2c²) = ±1/c.

This implies that the characteristics are x + ct = const and
x - ct = const, which means that effects travel along the
characteristics.
The Laplace equation

    ∂²u/∂x² + ∂²u/∂y² = 0.

From this we have β² - 4αγ = -4(1)(1) < 0, which implies it is elliptic.
This means that information at one point affects all other points.
The heat equation

    ∂²u/∂x² - ∂u/∂t = 0.

From this we have β² - 4αγ = 0, which implies that the equation is
parabolic; the single family of characteristics satisfies

    dt/dx = 0.


We can also state that hyperbolic and parabolic equations are
initial-boundary value problems, while elliptic problems are boundary
value problems.

7.3 difference operators

Throughout this chapter we will use U to denote the exact solution and
w to denote the numerical (approximate) solution.
1-D difference operators:

    D+ U_i = (U_{i+1} - U_i)/h_{i+1}                  Forward
    D- U_i = (U_i - U_{i-1})/h_i                      Backward
    D0 U_i = (U_{i+1} - U_{i-1})/(x_{i+1} - x_{i-1})  Centered

For 2-D difference schemes it is similar: when dealing with the
x-direction we hold the y-direction constant, and when dealing with the
y-direction we hold the x-direction constant.

    Dx+ U_ij = (U_{i+1,j} - U_{ij})/(x_{i+1} - x_i)        Forward in the x-direction
    Dy+ U_ij = (U_{i,j+1} - U_{ij})/(y_{j+1} - y_j)        Forward in the y-direction
    Dx- U_ij = (U_{ij} - U_{i-1,j})/(x_i - x_{i-1})        Backward in the x-direction
    Dy- U_ij = (U_{ij} - U_{i,j-1})/(y_j - y_{j-1})        Backward in the y-direction
    Dx0 U_ij = (U_{i+1,j} - U_{i-1,j})/(x_{i+1} - x_{i-1}) Centered in the x-direction
    Dy0 U_ij = (U_{i,j+1} - U_{i,j-1})/(y_{j+1} - y_{j-1}) Centered in the y-direction

Second derivatives:

    δx² U_ij = (2/(x_{i+1} - x_{i-1})) [ (U_{i+1,j} - U_{ij})/(x_{i+1} - x_i) - (U_{ij} - U_{i-1,j})/(x_i - x_{i-1}) ]   Centered in the x-direction
    δy² U_ij = (2/(y_{j+1} - y_{j-1})) [ (U_{i,j+1} - U_{ij})/(y_{j+1} - y_j) - (U_{ij} - U_{i,j-1})/(y_j - y_{j-1}) ]   Centered in the y-direction
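A minimal numpy sketch of these operators on a uniform mesh (the definitions above allow non-uniform spacing; constant h is assumed here for simplicity):

```python
import numpy as np

def D_plus(U, h):
    # forward difference (U_{i+1} - U_i)/h; read at index i+1 it is the
    # backward difference D- U_{i+1}
    return (U[1:] - U[:-1]) / h

def D_zero(U, h):
    # centered difference (U_{i+1} - U_{i-1})/(2h) at the interior points
    return (U[2:] - U[:-2]) / (2 * h)

def delta2(U, h):
    # second central difference (U_{i+1} - 2U_i + U_{i-1})/h^2
    return (U[2:] - 2 * U[1:-1] + U[:-2]) / h**2

x = np.linspace(0, 1, 11)
U = np.sin(x)
h = x[1] - x[0]
print(D_zero(U, h)[:3])   # approximates cos(x) at the interior points
print(delta2(U, h)[:3])   # approximates -sin(x) at the interior points
```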
8
PA R A B O L I C E Q U AT I O N S

We will look at the Heat Equation as our sample parabolic equation:

    ∂U/∂T = K ∂²U/∂X²  on Ω,

with

    U = g(x, y)  on the boundary δΩ.

This can be transformed, without loss of generality, by a
non-dimensional transformation to

    ∂U/∂t = ∂²U/∂x²                                              (49)

with the domain

    Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.

8.1 example heat equation

In this case we look at a rod of unit length with each end in ice.
The rod is heat insulated along its length so that temperature changes
occur through heat conduction along its length and heat transfer at its
ends, where w denotes temperature.
Given that the ends of the rod are kept in contact with ice, the
initial temperature distribution in non-dimensional form is

1. U = 2x for 0 ≤ x ≤ 1/2,
2. U = 2(1 - x) for 1/2 ≤ x ≤ 1.

In other words we are seeking a numerical solution of

    ∂U/∂t = ∂²U/∂x²

which satisfies

1. U = 0 at x = 0 and x = 1 for all t > 0 (the boundary condition),
2. U = 2x for 0 ≤ x ≤ 1/2 at t = 0,
   U = 2(1 - x) for 1/2 ≤ x ≤ 1 at t = 0 (the initial condition).


Due to the initial conditions the problem is symmetric with respect to
x = 0.5. To illustrate the implementation and limitations of the
explicit, implicit and Crank-Nicholson methods we will numerically
solve the Heat Equation of the rod for three different values of r:

Case 1: Let h = 1/10 and k = 1/1000 so that r = k/h² = 1/10;

Case 2: Let h = 1/10 and k = 1/200 so that r = k/h² = 1/2;

Case 3: Let h = 1/10 and k = 1/100 so that r = k/h² = 1.

8.2 an explicit method for the heat eqn

The explicit Forward Time Centered Space (FTCS) difference equation of
the differential equation (49) is

    (w_{i,j+1} - w_{ij})/(t_{j+1} - t_j) = (w_{i+1,j} - 2w_{ij} + w_{i-1,j})/h²,   (50)

that is,

    (w_{i,j+1} - w_{ij})/k = (w_{i+1,j} - 2w_{ij} + w_{i-1,j})/h².

When approaching this we have divided up the area into two uniform
meshes, one in the x direction and the other in the t-direction. We
define t_j = jk, where k is the step size in the time direction, and
x_i = ih, where h is the step size in the space direction.
w_{ij} denotes the numerical approximation of U at (x_i, t_j).
Rearranging the equation we get

    w_{i,j+1} = r w_{i-1,j} + (1 - 2r) w_{ij} + r w_{i+1,j}                        (51)

where r = k/h².
This gives the formula for the unknown term w_{i,j+1} at the (i, j+1)
mesh point in terms of all x_i along the jth time row.
Hence we can calculate the unknown pivotal values of w along the first
row, t = k or j = 1, in terms of the known boundary conditions.
This can be written in matrix form:

    w_{j+1} = A w_j + b_j

for which A is the (N-1) × (N-1) tridiagonal matrix

    A =
    [ 1 - 2r   r                              ]
    [ r        1 - 2r   r                     ]
    [          r        1 - 2r   r            ]
    [                   ...      ...    ...   ]
    [                   r      1 - 2r   r     ]
    [                          r      1 - 2r  ]

where r = k/h² > 0, w_j is the vector

    w_j = (w_{1,j}, w_{2,j}, ..., w_{N-2,j}, w_{N-1,j})^T

and b_j is the vector of boundary values

    b_j = (r w_{0,j}, 0, ..., 0, r w_{N,j})^T.

It is assumed that the boundary values w_{0,j} and w_{N,j} are known for
j = 1, 2, ..., and w_{i,0} is the initial condition.
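A sketch of the explicit FTCS scheme for the worked example that follows (h = 1/5, k = 1/250, so r = 1/10; the zero boundary values make b_j vanish):

```python
import numpy as np

N, h, k = 5, 1/5, 1/250
r = k / h**2                                  # r = 1/10
x = np.linspace(0, 1, N + 1)
w = np.where(x <= 0.5, 2 * x, 2 * (1 - x))    # hat-shaped initial condition
A = np.diag((1 - 2*r) * np.ones(N - 1)) \
  + np.diag(r * np.ones(N - 2), 1) \
  + np.diag(r * np.ones(N - 2), -1)
for j in range(10):                           # 10 time steps
    w[1:N] = A @ w[1:N]                       # b_j = 0 since w_0j = w_Nj = 0
print(w)   # after one step the interior is 0.4, 0.76, 0.76, 0.4 (cf. Table 4)
```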

Example 40

8.2.0.1 Explicit FTCS method for the Heat Equation with r = 1/10

Let h = 1/5 and k = 1/250 so that r = k/h² = 1/10, and difference
equation (51) becomes

    w_{i,j+1} = (1/10)(w_{i-1,j} + 8w_{ij} + w_{i+1,j}).

This can be written in matrix form:

    [ w_{1,j+1} ]   [ 0.8  0.1  0    0   ] [ w_{1,j} ]         [ w_{0,j} ]
    [ w_{2,j+1} ] = [ 0.1  0.8  0.1  0   ] [ w_{2,j} ] + 0.1 * [ 0       ]
    [ w_{3,j+1} ]   [ 0    0.1  0.8  0.1 ] [ w_{3,j} ]         [ 0       ]
    [ w_{4,j+1} ]   [ 0    0    0.1  0.8 ] [ w_{4,j} ]         [ w_{5,j} ]

Figure 8.2.1 shows a graphical representation of the matrix.

Figure 8.2.1: Graphical representation of the matrix A for r = 1/10.

To solve for w_{2,1} we have

    w_{2,1} = (1/10)(w_{1,0} + 8w_{2,0} + w_{3,0}) = (1/10)(0.4 + 8 × 0.8 + 0.8) = 0.76.

Table 4 shows the initial condition and one time step for the Heat
Equation.

    j/x    0   0.2   0.4   0.6   0.8   1.0
    0      0   0.4   0.8   0.8   0.4   0.0
    1/250  0   0.4   0.76  0.76  0.4   0.0

Table 4: The explicit numerical solution w of the Heat Equation for
r = 1/10 for 1 time step.

Figure 8.2.2 shows the explicit numerical solution w of the Heat
Equation for r = 1/10 for 10 time steps, each represented by a
different line.

Figure 8.2.2: The explicit numerical solution w of the Heat Equation
for r = 1/10 for 10 time steps, each represented by a different line.

Figure 8.2.3 shows the explicit numerical solution w of the Heat
Equation for r = 1/10 for 10 time steps as a colour plot.

Figure 8.2.3: The colour plot of the explicit numerical solution w of
the Heat Equation for r = 1/10.

The forward time and centered space numerical solution for the Heat
Equation shown in Figures 8.2.3 and 8.2.2 tends to 0 in a monotonic
fashion as time progresses for r = 1/10.

Example 41

8.2.0.2 Explicit FTCS method for the Heat Equation with r = 1/2

Let h = 1/5 and k = 1/50 so that r = k/h² = 1/2, and difference
equation (51) becomes

    w_{i,j+1} = (1/2)(w_{i-1,j} + w_{i+1,j}).

This can be written in matrix form:

    [ w_{1,j+1} ]   [ 0    0.5  0    0   ] [ w_{1,j} ]         [ w_{0,j} ]
    [ w_{2,j+1} ] = [ 0.5  0    0.5  0   ] [ w_{2,j} ] + 0.5 * [ 0       ]
    [ w_{3,j+1} ]   [ 0    0.5  0    0.5 ] [ w_{3,j} ]         [ 0       ]
    [ w_{4,j+1} ]   [ 0    0    0.5  0   ] [ w_{4,j} ]         [ w_{5,j} ]

Figure 8.2.4 is a graphical representation of the matrix A.

Figure 8.2.4: Graphical representation of the matrix A for r = 1/2.

Table 5 shows the explicit numerical solution w of the Heat Equation
for r = 1/2 for 1 time step.

    t/x   0   0.2   0.4   0.6   0.8   1.0
    0     0   0.4   0.8   0.8   0.4   0.0
    1/50  0   0.4   0.6   0.6   0.4   0.0

Table 5: The explicit numerical solution w of the Heat Equation for
r = 1/2 for 1 time step.

Figure 8.2.5 shows the explicit numerical solution w of the Heat
Equation for r = 1/2 for 10 time steps, each represented by a different
line.

Figure 8.2.5: The explicit numerical solution w of the Heat Equation
for r = 1/2 for 10 time steps, each represented by a different line.

Figure 8.2.6 shows the explicit numerical solution w of the Heat
Equation for r = 1/2 for 10 time steps as a colour plot.

Figure 8.2.6: The colour plot of the explicit numerical solution w of
the Heat Equation for r = 1/2.

The choice of r = 1/2 gives an acceptable approximation to the solution
of the Heat Equation, as shown in Figures 8.2.5 and 8.2.6.

Example 42

8.2.0.3 Explicit FTCS method for the Heat Equation with r = 1

Let h = 1/5 and k = 1/25 so that r = k/h² = 1, and difference equation
(51) becomes

    w_{i,j+1} = w_{i-1,j} - w_{ij} + w_{i+1,j}.

This can be written in matrix form:

    [ w_{1,j+1} ]   [ -1.0   1.0   0     0   ] [ w_{1,j} ]   [ w_{0,j} ]
    [ w_{2,j+1} ] = [  1.0  -1.0   1.0   0   ] [ w_{2,j} ] + [ 0       ]
    [ w_{3,j+1} ]   [  0     1.0  -1.0   1.0 ] [ w_{3,j} ]   [ 0       ]
    [ w_{4,j+1} ]   [  0     0     1.0  -1.0 ] [ w_{4,j} ]   [ w_{5,j} ]

Figure 8.2.7: Graphical representation of the matrix A for r = 1.

    t/x   0   0.2   0.4   0.6   0.8   1.0
    0     0   0.4   0.8   0.8   0.4   0.0
    1/25  0   0.4   0.4   0.4   0.4   0.0
    2/25  0   0.0   0.4   0.4   0.0   0.0
    3/25  0   0.4   0.0   0.0   0.4   0.0

Table 6: The explicit numerical solution w of the Heat Equation for
r = 1 for 3 time steps.

Figure 8.2.8: The explicit numerical solution w of the Heat Equation
for r = 1 for 10 time steps, each represented by a different line.

Figure 8.2.9: The colour plot of the explicit numerical solution w of
the Heat Equation for r = 1.

Considered as a solution to the Heat Equation this is meaningless,
although it is the correct solution of the difference equation with
respect to the initial conditions and the boundary conditions.

8.3 an implicit (btcs) method for the heat equation

The implicit Backward Time Centered Space (BTCS) difference equation of
the differential Heat Equation (49) is

    (w_{i,j+1} - w_{ij})/k = (w_{i+1,j+1} - 2w_{i,j+1} + w_{i-1,j+1})/h².          (52)

When approaching this we have divided up the area into two uniform
meshes, one in the x direction and the other in the t-direction. We
define t_j = jk, where k is the step size in the time direction, and
x_i = ih, where h is the step size in the space direction.
w_{ij} denotes the numerical approximation of U at (x_i, t_j).
Rearranging the equation we get

    -r w_{i-1,j+1} + (1 + 2r) w_{i,j+1} - r w_{i+1,j+1} = w_{ij}                   (53)

where r = k/h².
This gives the formula for the unknown term w_{i,j+1} at the (i, j+1)
mesh point in terms of terms along the jth time row.
Hence we can calculate the unknown pivotal values of w along the first
row, t = k or j = 1, in terms of the known boundary conditions.
This can be written in matrix form

    A w_{j+1} = w_j + b_{j+1}

for which A is the (N-1) × (N-1) tridiagonal matrix

    A =
    [ 1 + 2r   -r                               ]
    [ -r       1 + 2r   -r                      ]
    [          -r       1 + 2r   -r             ]
    [                   ...      ...     ...    ]
    [                   -r     1 + 2r    -r     ]
    [                          -r      1 + 2r   ]

where r = k/h² > 0, w_j is the vector

    w_j = (w_{1,j}, w_{2,j}, ..., w_{N-2,j}, w_{N-1,j})^T

and b_{j+1} is the vector of boundary values

    b_{j+1} = (r w_{0,j+1}, 0, ..., 0, r w_{N,j+1})^T.

It is assumed that the boundary values w_{0,j} and w_{N,j} are known for
j = 1, 2, ..., and w_{i,0} is the initial condition.
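A sketch of the BTCS scheme for the same problem with r = 1/10; each step now requires a tridiagonal solve, done here with numpy.linalg.solve for simplicity:

```python
import numpy as np

N, h, k = 5, 1/5, 1/250
r = k / h**2
x = np.linspace(0, 1, N + 1)
w = np.where(x <= 0.5, 2 * x, 2 * (1 - x))
A = np.diag((1 + 2*r) * np.ones(N - 1)) \
  - np.diag(r * np.ones(N - 2), 1) \
  - np.diag(r * np.ones(N - 2), -1)
for j in range(10):
    w[1:N] = np.linalg.solve(A, w[1:N])   # b_{j+1} = 0 at these boundaries
print(w)   # after one step: 0.39694656, 0.76335878, ... (cf. Table 7)
```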

8.3.1 Example implicit (BTCS) for the Heat Equation

In this case we look at a rod of unit length with each end in ice.
The rod is heat insulated along its length so that temperature changes
occur through heat conduction along its length and heat transfer at its
ends, where w denotes temperature.
Given that the ends of the rod are kept in contact with ice, the
initial temperature distribution in non-dimensional form is

1. U = 2x for 0 ≤ x ≤ 1/2,
2. U = 2(1 - x) for 1/2 ≤ x ≤ 1.

In other words we are seeking a numerical solution of

    ∂U/∂t = ∂²U/∂x²

which satisfies

1. U = 0 at x = 0 and x = 1 for all t > 0 (the boundary condition),
2. U = 2x for 0 ≤ x ≤ 1/2 at t = 0,
   U = 2(1 - x) for 1/2 ≤ x ≤ 1 at t = 0 (the initial condition).

Example 43

8.3.1.1 Implicit (BTCS) method for the Heat Equation with r = 1/10

Let h = 1/5 and k = 1/250 so that r = k/h² = 1/10, and difference
equation (53) becomes

    (1/10)(-w_{i-1,j+1} + 12 w_{i,j+1} - w_{i+1,j+1}) = w_{ij}.

This can be written in matrix form:

    [ 1.2   -0.1   0     0    ] [ w_{1,j+1} ]   [ w_{1,j} ]         [ w_{0,j+1} ]
    [ -0.1   1.2  -0.1   0    ] [ w_{2,j+1} ] = [ w_{2,j} ] + 0.1 * [ 0         ]
    [ 0     -0.1   1.2  -0.1  ] [ w_{3,j+1} ]   [ w_{3,j} ]         [ 0         ]
    [ 0      0    -0.1   1.2  ] [ w_{4,j+1} ]   [ w_{4,j} ]         [ w_{5,j+1} ]

Figure 8.3.1: Graphical representation of the matrix A for r = 1/10.

To solve we need to invert the matrix, to get

    w_{j+1} = A^{-1}(w_j + b_{j+1}).

    j/x    0   0.2         0.4         0.6         0.8         1.0
    0      0   0.4         0.8         0.8         0.4         0.0
    1/250  0   0.39694656  0.76335878  0.76335878  0.39694656  0.0

Table 7: The implicit numerical solution w of the Heat Equation for
r = 1/10 for 1 time step.

Figure 8.3.2: The implicit numerical solution w of the Heat Equation
for r = 1/10 for 10 time steps, each represented by a different line.

Figure 8.3.3: The colour plot of the implicit numerical solution w of
the Heat Equation for r = 1/10.

Example 44

8.3.1.2 Implicit (BTCS) method for the Heat Equation for r = 1/2

Let h = 1/5 and k = 1/50 so that r = k/h² = 1/2, and difference
equation (53) becomes

    (1/2)(-w_{i-1,j+1} + 4 w_{i,j+1} - w_{i+1,j+1}) = w_{ij}.

This can be written in matrix form:

    [ 2     -0.5   0     0    ] [ w_{1,j+1} ]   [ w_{1,j} ]
    [ -0.5   2    -0.5   0    ] [ w_{2,j+1} ] = [ w_{2,j} ]
    [ 0     -0.5   2    -0.5  ] [ w_{3,j+1} ]   [ w_{3,j} ]
    [ 0      0    -0.5   2    ] [ w_{4,j+1} ]   [ w_{4,j} ]

Figure 8.3.4: Graphical representation of the matrix A for r = 1/2.

    t/x   0   0.2         0.4         0.6         0.8         1.0
    0     0   0.4         0.8         0.8         0.4         0.0
    1/50  0   0.36363636  0.65454545  0.65454545  0.36363636  0.0

Table 8: The implicit numerical solution w of the Heat Equation for
r = 1/2 for 1 time step.

Figure 8.3.5: The implicit numerical solution w of the Heat Equation
for r = 1/2 for 10 time steps, each represented by a different line.

Figure 8.3.6: The colour plot of the implicit numerical solution w of
the Heat Equation for r = 1/2.

This method also gives an acceptable approximation to the solution of
the PDE.

Example 45

8.3.1.3 Implicit (BTCS) method for the Heat Equation for r = 1

Let h = 1/5 and k = 1/25 so that r = k/h² = 1, and difference equation
(53) becomes

    -w_{i-1,j+1} + 3 w_{i,j+1} - w_{i+1,j+1} = w_{ij}.

This can be written in matrix form:

    [ 3   -1    0    0  ] [ w_{1,j+1} ]   [ w_{1,j} ]   [ w_{0,j+1} ]
    [ -1   3   -1    0  ] [ w_{2,j+1} ] = [ w_{2,j} ] + [ 0         ]
    [ 0   -1    3   -1  ] [ w_{3,j+1} ]   [ w_{3,j} ]   [ 0         ]
    [ 0    0   -1    3  ] [ w_{4,j+1} ]   [ w_{4,j} ]   [ w_{5,j+1} ]



Figure 8.3.7: Graphical representation of the matrix A for r = 1.

    t/x   0   0.2    0.4    0.6    0.8    1.0
    0     0   0.4    0.8    0.8    0.4    0.0
    1/25  0   0.32   0.56   0.56   0.32   0.0
    2/25  0   0.24   0.4    0.4    0.24   0.0
    3/25  0   0.176  0.288  0.288  0.176  0.0

Table 9: The implicit numerical solution w of the Heat Equation for
r = 1 for 3 time steps.

Figure 8.3.8: The implicit numerical solution w of the Heat Equation
for r = 1 for 10 time steps, each represented by a different line.

Figure 8.3.9: The colour plot of the implicit numerical solution w of
the Heat Equation for r = 1.

8.4 crank nicholson implicit method

Since the explicit method requires that k ≤ (1/2)h², a new method was
needed which would work for all finite values of r.
Crank and Nicholson considered the partial differential equation as
being satisfied at the midpoint {ih, (j + 1/2)k} and replaced ∂²U/∂x²
by the mean of its finite difference approximations at the jth and
(j+1)th time levels. In other words they approximated the equation

    (∂U/∂t)_{i,j+1/2} = (∂²U/∂x²)_{i,j+1/2}

by

    (w_{i,j+1} - w_{ij})/k = (1/2)[(w_{i+1,j+1} - 2w_{i,j+1} + w_{i-1,j+1})/h² + (w_{i+1,j} - 2w_{ij} + w_{i-1,j})/h²]

giving

    -r w_{i-1,j+1} + (2 + 2r) w_{i,j+1} - r w_{i+1,j+1} = r w_{i-1,j} + (2 - 2r) w_{ij} + r w_{i+1,j}   (54)

with r = k/h².
In general the left-hand side contains 3 unknowns and the right-hand
side 3 known pivotal values.
If there are N intervals between the mesh points along each row, then
for j = 0 and i = 1, .., N - 1 this gives N - 1 simultaneous equations
for the N - 1 unknown pivotal values along the first row.
This can be described in matrix form

    B w_{j+1} = C w_j + b_j + b_{j+1}

as

    [ 2 + 2r   -r                            ] [ w_{1,j+1}   ]
    [ -r       2 + 2r   -r                   ] [ w_{2,j+1}   ]
    [          ...      ...      ...         ] [ ...         ]
    [                   -r    2 + 2r   -r    ] [ w_{N-2,j+1} ]
    [                         -r     2 + 2r  ] [ w_{N-1,j+1} ]

      [ 2 - 2r   r                           ] [ w_{1,j}   ]
      [ r        2 - 2r   r                  ] [ w_{2,j}   ]
    = [          ...      ...     ...        ] [ ...       ]  + b_j + b_{j+1}
      [                   r    2 - 2r   r    ] [ w_{N-2,j} ]
      [                        r      2 - 2r ] [ w_{N-1,j} ]

where b_j and b_{j+1} are vectors of known boundary conditions:

    b_j = (r w_{0,j}, 0, ..., 0, r w_{N,j})^T,
    b_{j+1} = (r w_{0,j+1}, 0, ..., 0, r w_{N,j+1})^T.               (55)
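A sketch of the Crank-Nicholson scheme for the same problem with r = 1/10, solving B w_{j+1} = C w_j at each step (the boundary vectors vanish here):

```python
import numpy as np

N, h, k = 5, 1/5, 1/250
r = k / h**2
x = np.linspace(0, 1, N + 1)
w = np.where(x <= 0.5, 2 * x, 2 * (1 - x))
main, off = np.ones(N - 1), np.ones(N - 2)
B = np.diag((2 + 2*r) * main) - r * (np.diag(off, 1) + np.diag(off, -1))
C = np.diag((2 - 2*r) * main) + r * (np.diag(off, 1) + np.diag(off, -1))
for j in range(10):
    w[1:N] = np.linalg.solve(B, C @ w[1:N])   # b_j and b_{j+1} are zero here
print(w)   # after one step: 0.39826464, 0.76182213, ... (cf. Table 10)
```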

8.4.1 Example Crank-Nicholson solution of the Heat Equation

In this case we look at a rod of unit length with each end in ice.
The rod is heat insulated along its length so that temperature changes
occur through heat conduction along its length and heat transfer at its
ends, where w denotes temperature.
Simple case: given that the ends of the rod are kept in contact with
ice, the initial temperature distribution in non-dimensional form is

1. U = 2x for 0 ≤ x ≤ 1/2,
2. U = 2(1 - x) for 1/2 ≤ x ≤ 1.

In other words we are seeking a numerical solution of

    ∂U/∂t = ∂²U/∂x²

which satisfies

1. U = 0 at x = 0 and x = 1 for all t > 0 (the boundary condition),
2. U = 2x for 0 ≤ x ≤ 1/2 at t = 0, U = 2(1 - x) for 1/2 ≤ x ≤ 1 at
   t = 0 (the initial condition).

Example 46

8.4.1.1 Crank-Nicholson method r = 1/10

Let h = 1/5 and k = 1/250 so that r = k/h² = 1/10, and difference
equation (54) becomes

    -0.1 w_{i-1,j+1} + 2.2 w_{i,j+1} - 0.1 w_{i+1,j+1} = 0.1 w_{i-1,j} + 1.8 w_{i,j} + 0.1 w_{i+1,j}.

Let j = 0:

i=1:  -0.1 w_{0,1} + 2.2 w_{1,1} - 0.1 w_{2,1} = 0.1 w_{0,0} + 1.8 w_{1,0} + 0.1 w_{2,0}
i=2:  -0.1 w_{1,1} + 2.2 w_{2,1} - 0.1 w_{3,1} = 0.1 w_{1,0} + 1.8 w_{2,0} + 0.1 w_{3,0}
i=3:  -0.1 w_{2,1} + 2.2 w_{3,1} - 0.1 w_{4,1} = 0.1 w_{2,0} + 1.8 w_{3,0} + 0.1 w_{4,0}
i=4:  -0.1 w_{3,1} + 2.2 w_{4,1} - 0.1 w_{5,1} = 0.1 w_{3,0} + 1.8 w_{4,0} + 0.1 w_{5,0}

In matrix form:

    [ 2.2   -0.1   0     0    ] [ w_{1,1} ]
    [ -0.1   2.2  -0.1   0    ] [ w_{2,1} ]
    [ 0     -0.1   2.2  -0.1  ] [ w_{3,1} ]
    [ 0      0    -0.1   2.2  ] [ w_{4,1} ]

      [ 1.8   0.1   0     0   ] [ w_{1,0} ]         [ w_{0,1} + w_{0,0} ]
    = [ 0.1   1.8   0.1   0   ] [ w_{2,0} ] + 0.1 * [ 0                 ]
      [ 0     0.1   1.8   0.1 ] [ w_{3,0} ]         [ 0                 ]
      [ 0     0     0.1   1.8 ] [ w_{4,0} ]         [ w_{5,1} + w_{5,0} ]

To solve we need to invert the matrix, to get

    w_{j+1} = B^{-1}(C w_j + b_{j+1} + b_j).

    j/x    0   0.2         0.4         0.6         0.8         1.0
    0      0   0.4         0.8         0.8         0.4         0.0
    1/250  0   0.39826464  0.76182213  0.76182213  0.39826464  0.0

Table 10: The Crank-Nicholson numerical solution w of the Heat Equation
for r = 1/10 for 1 time step.

Figure 8.4.1: The Crank-Nicholson numerical solution w of the Heat
Equation for r = 1/10 for 10 time steps, each represented by a
different line.

Figure 8.4.2: The colour plot of the Crank-Nicholson numerical solution
w of the Heat Equation for r = 1/10.

Example 47

8.4.1.2 Crank-Nicholson method r = 1/2

Let h = 1/5 and k = 1/50 so that r = k/h² = 1/2, and difference
equation (54) becomes

    -0.5 w_{i-1,j+1} + 3 w_{i,j+1} - 0.5 w_{i+1,j+1} = 0.5 w_{i-1,j} + w_{i,j} + 0.5 w_{i+1,j}.

Let j = 0:

i=1:  -0.5 w_{0,1} + 3 w_{1,1} - 0.5 w_{2,1} = 0.5 w_{0,0} + w_{1,0} + 0.5 w_{2,0}
i=2:  -0.5 w_{1,1} + 3 w_{2,1} - 0.5 w_{3,1} = 0.5 w_{1,0} + w_{2,0} + 0.5 w_{3,0}
i=3:  -0.5 w_{2,1} + 3 w_{3,1} - 0.5 w_{4,1} = 0.5 w_{2,0} + w_{3,0} + 0.5 w_{4,0}
i=4:  -0.5 w_{3,1} + 3 w_{4,1} - 0.5 w_{5,1} = 0.5 w_{3,0} + w_{4,0} + 0.5 w_{5,0}

In matrix form:

    [ 3     -0.5   0     0    ] [ w_{1,1} ]
    [ -0.5   3    -0.5   0    ] [ w_{2,1} ]
    [ 0     -0.5   3    -0.5  ] [ w_{3,1} ]
    [ 0      0    -0.5   3    ] [ w_{4,1} ]

      [ 1     0.5   0     0   ] [ w_{1,0} ]         [ w_{0,1} + w_{0,0} ]
    = [ 0.5   1     0.5   0   ] [ w_{2,0} ] + 0.5 * [ 0                 ]
      [ 0     0.5   1     0.5 ] [ w_{3,0} ]         [ 0                 ]
      [ 0     0     0.5   1   ] [ w_{4,0} ]         [ w_{5,1} + w_{5,0} ]

    t/x   0   0.2         0.4         0.6         0.8         1.0
    0     0   0.4         0.8         0.8         0.4         0.0
    1/50  0   0.37241379  0.63448276  0.63448276  0.37241379  0.0

Table 11: The Crank-Nicholson numerical solution w of the Heat Equation
for r = 1/2 for 1 time step.

Figure 8.4.3: The Crank-Nicholson numerical solution w of the Heat
Equation for r = 1/2 for 10 time steps, each represented by a different
line.

Figure 8.4.4: The colour plot of the Crank-Nicholson numerical solution
w of the Heat Equation for r = 1/2.

This method also gives a good approximation to the solution of the PDE.

Example 48

8.4.1.3 Crank-Nicholson method for the Heat Equation with r = 1

Let h = 1/5 and k = 1/25 so that r = k/h² = 1, and difference equation
(54) becomes

    -w_{i-1,j+1} + 4 w_{i,j+1} - w_{i+1,j+1} = w_{i-1,j} + 0 w_{i,j} + w_{i+1,j}.

Let j = 0:

i=1:  -w_{0,1} + 4 w_{1,1} - w_{2,1} = w_{0,0} + 0 w_{1,0} + w_{2,0}
i=2:  -w_{1,1} + 4 w_{2,1} - w_{3,1} = w_{1,0} + 0 w_{2,0} + w_{3,0}
i=3:  -w_{2,1} + 4 w_{3,1} - w_{4,1} = w_{2,0} + 0 w_{3,0} + w_{4,0}
i=4:  -w_{3,1} + 4 w_{4,1} - w_{5,1} = w_{3,0} + 0 w_{4,0} + w_{5,0}

In matrix form:

    [ 4   -1    0    0  ] [ w_{1,1} ]
    [ -1   4   -1    0  ] [ w_{2,1} ]
    [ 0   -1    4   -1  ] [ w_{3,1} ]
    [ 0    0   -1    4  ] [ w_{4,1} ]

      [ 0   1   0   0 ] [ w_{1,0} ]   [ w_{0,1} + w_{0,0} ]
    = [ 1   0   1   0 ] [ w_{2,0} ] + [ 0                 ]
      [ 0   1   0   1 ] [ w_{3,0} ]   [ 0                 ]
      [ 0   0   1   0 ] [ w_{4,0} ]   [ w_{5,1} + w_{5,0} ]

    t/x   0   0.2         0.4         0.6         0.8         1.0
    0     0   0.4         0.8         0.8         0.4         0.0
    1/25  0   0.32727273  0.50909091  0.50909091  0.32727273  0.0
    2/25  0   0.21487603  0.35041322  0.35041322  0.21487603  0.0
    3/25  0   0.14695718  0.23741548  0.23741548  0.14695718  0.0

Table 12: The Crank-Nicholson numerical solution w of the Heat Equation
for r = 1 for 3 time steps.

Figure 8.4.5: The Crank-Nicholson numerical solution w of the Heat
Equation for r = 1 for 10 time steps, each represented by a different
line.

Figure 8.4.6: The colour plot of the Crank-Nicholson numerical solution
w of the Heat Equation for r = 1.

8.5 the theta method

The Theta Method is a generalization of the Crank-Nicholson method and
expresses our partial differential equation as

    (w_{i,j+1} - w_{ij})/k = θ (w_{i+1,j+1} - 2w_{i,j+1} + w_{i-1,j+1})/h² + (1 - θ)(w_{i+1,j} - 2w_{ij} + w_{i-1,j})/h².   (56)

• When θ = 0 we get the explicit scheme;

• when θ = 1/2 we get the Crank-Nicholson scheme;

• and when θ = 1 we get the fully implicit backward finite difference
  method.

The equations are unconditionally valid for 1/2 ≤ θ ≤ 1. For
0 ≤ θ < 1/2 we must have

    r ≤ 1/(2(1 - 2θ)).
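A sketch of a single theta-method step (an illustration, not from the text); with L the usual second difference matrix, θ = 0, 1/2 and 1 reproduce the explicit, Crank-Nicholson and implicit schemes above:

```python
import numpy as np

def theta_step(w, r, theta):
    """One theta-method step on the interior points, zero boundary values."""
    n = len(w) - 2
    off = np.ones(n - 1)
    L = -2 * np.eye(n) + np.diag(off, 1) + np.diag(off, -1)
    B = np.eye(n) - theta * r * L          # implicit (left-hand) operator
    C = np.eye(n) + (1 - theta) * r * L    # explicit (right-hand) operator
    w[1:-1] = np.linalg.solve(B, C @ w[1:-1])
    return w

x = np.linspace(0, 1, 6)
w = np.where(x <= 0.5, 2 * x, 2 * (1 - x))
w = theta_step(w, 0.1, 0.5)                # theta = 1/2: one Crank-Nicholson step
print(w)
```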

8.6 the general matrix form

Let the solution domain of the PDE be the finite rectangle 0 ≤ x ≤ 1
and 0 ≤ t ≤ T, and subdivide it into a uniform rectangular mesh by the
lines x_i = ih for i = 0 to N and t_j = jk for j = 0 to J. It will be
assumed that h is related to k by some relationship such as k = rh or
k = rh², with r > 0 and finite, so that k → 0 as h → 0.
Assume that the finite difference equation relating the mesh point
values along the (j + 1)th and jth rows is

    b_{i-1} w_{i-1,j+1} + b_i w_{i,j+1} + b_{i+1} w_{i+1,j+1} = c_{i-1} w_{i-1,j} + c_i w_{ij} + c_{i+1} w_{i+1,j}

where the coefficients are constant. If the boundary values at i = 0
and i = N for j > 0 are known, these (N - 1) equations for i = 1 to
N - 1 can be written in matrix form as

    B w_{j+1} = C w_j + d_j

where B is the (N - 1) × (N - 1) tridiagonal matrix with rows
(b_{i-1}, b_i, b_{i+1}), C is the corresponding tridiagonal matrix with
rows (c_{i-1}, c_i, c_{i+1}), w_j denotes a column vector, and d_j
denotes the column vector of boundary values

    d_j = (c_0 w_{0,j} - b_0 w_{0,j+1}, 0, ..., 0, c_N w_{N,j} - b_N w_{N,j+1})^T.

Hence

    w_{j+1} = B^{-1} C w_j + B^{-1} d_j,

expressed in a more conventional manner as

    w_{j+1} = A w_j + f_j

where A = B^{-1}C and f_j = B^{-1} d_j.

8.7 derivative boundary conditions

Boundary conditions expressed in terms of derivatives occur frequently.

8.7.1 Example Derivative Boundary Conditions

∂U
= H (U − v0 ) at x = 0
∂x
where H is a positive constant and v0 is the surrounding temperature.

How do we deal with this type of boundary condition?

1. By using a forward difference for $\frac{\partial U}{\partial x}$, we have

   $$\frac{w_{1,j} - w_{0,j}}{h_x} = H(w_{0,j} - v_0),$$

   where $h_x = x_1 - x_0$. This gives us one extra equation for the boundary temperature $w_{0,j}$.

2. If we wish to represent $\frac{\partial U}{\partial x}$ more accurately at $x = 0$, we use a central difference formula. It is necessary to introduce a fictitious temperature $w_{-1,j}$ at the external mesh points $(-h_x, jk)$. The temperature $w_{-1,j}$ is unknown and needs another equation. This is obtained by assuming that the heat conduction equation is satisfied at the end points. The unknown $w_{-1,j}$ can then be eliminated between these equations.

Solve the equation

$$\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}$$

satisfying the initial condition

$$U = 1 \quad \text{for } 0 \leq x \leq 1 \text{ when } t = 0,$$

and the boundary conditions

$$\frac{\partial U}{\partial x} = U \quad \text{at } x = 0 \text{ for all } t,$$
$$\frac{\partial U}{\partial x} = -U \quad \text{at } x = 1 \text{ for all } t.$$

8.7.1.1 Example 1
Using the forward difference approximation for the derivative boundary condition and the explicit method to approximate the PDE, our difference equation is

$$\frac{w_{i,j+1} - w_{i,j}}{k} = \frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h_x^2},$$

$$w_{i,j+1} = w_{i,j} + r(w_{i-1,j} - 2w_{i,j} + w_{i+1,j}) \qquad (57)$$

where $r = \frac{k}{h_x^2}$.
At $i = 1$, (57) is

$$w_{1,j+1} = w_{1,j} + r(w_{0,j} - 2w_{1,j} + w_{2,j}). \qquad (58)$$

The boundary condition at $x = 0$ is $\frac{\partial U}{\partial x} = U$; in terms of a forward difference this is

$$\frac{w_{1,j} - w_{0,j}}{h_x} = w_{0,j}.$$

Rearranging,

$$w_{0,j} = \frac{w_{1,j}}{1 + h_x}. \qquad (59)$$

Using (59) and (58) to eliminate $w_{0,j}$ we get

$$w_{1,j+1} = \left(1 - 2r + \frac{r}{1 + h_x}\right)w_{1,j} + rw_{2,j}.$$

At $i = N-1$, (57) is

$$w_{N-1,j+1} = w_{N-1,j} + r(w_{N-2,j} - 2w_{N-1,j} + w_{N,j}). \qquad (60)$$

The boundary condition at $x = 1$ is $\frac{\partial U}{\partial x} = -U$; in terms of a forward difference this is

$$\frac{w_{N,j} - w_{N-1,j}}{h_x} = -w_{N,j}.$$

Rearranging,

$$w_{N,j} = \frac{w_{N-1,j}}{1 + h_x}. \qquad (61)$$

Using (61) and (60) to eliminate $w_{N,j}$ we get

$$w_{N-1,j+1} = rw_{N-2,j} + \left(1 - 2r + \frac{r}{1 + h_x}\right)w_{N-1,j}.$$

Choose $h_x = \frac{1}{5}$ and $k = \frac{1}{100}$, so that $r = \frac{1}{4}$.
The equations become

$$w_{1,j+1} = \frac{17}{24}w_{1,j} + \frac{1}{4}w_{2,j},$$

$$w_{i,j+1} = \frac{1}{4}(w_{i-1,j} + 2w_{i,j} + w_{i+1,j}) \quad i = 2, 3,$$

and

$$w_{4,j+1} = \frac{1}{4}w_{3,j} + \frac{17}{24}w_{4,j}.$$

In matrix form:

$$\begin{pmatrix} w_{1,j+1}\\ w_{2,j+1}\\ w_{3,j+1}\\ w_{4,j+1} \end{pmatrix}
=
\begin{pmatrix}
\frac{17}{24} & \frac{1}{4} & 0 & 0\\
\frac{1}{4} & \frac{1}{2} & \frac{1}{4} & 0\\
0 & \frac{1}{4} & \frac{1}{2} & \frac{1}{4}\\
0 & 0 & \frac{1}{4} & \frac{17}{24}
\end{pmatrix}
\begin{pmatrix} w_{1,j}\\ w_{2,j}\\ w_{3,j}\\ w_{4,j} \end{pmatrix},$$

with the boundaries given by

$$w_{0,j+1} = \frac{10}{12}w_{1,j+1}, \qquad w_{5,j+1} = \frac{10}{12}w_{4,j+1}.$$
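This explicit scheme with the eliminated derivative boundaries takes only a few lines of numpy; a sketch (my own illustration, using the matrix just derived) advances the solution and recovers the boundary temperatures afterwards:

```python
import numpy as np

# Explicit scheme for U_t = U_xx with radiation boundaries dU/dx = U at x = 0
# and dU/dx = -U at x = 1, after the forward-difference boundary elimination.
hx, k = 1/5, 1/100
r = k / hx**2                      # r = 1/4
A = np.array([[17/24, 1/4,  0,    0    ],
              [1/4,   1/2,  1/4,  0    ],
              [0,     1/4,  1/2,  1/4  ],
              [0,     0,    1/4,  17/24]])
w = np.ones(4)                     # interior values; U = 1 at t = 0
for j in range(10):                # ten time steps
    w = A @ w
w0, w5 = (10/12) * w[0], (10/12) * w[-1]   # boundary temperatures
```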

8.7.1.2 Example 2
Using the central difference approximation for the derivative boundary condition and the explicit method to approximate the PDE, the difference equation is as in (57). At $i = 0$ we have

$$w_{0,j+1} = w_{0,j} + r(w_{-1,j} - 2w_{0,j} + w_{1,j}). \qquad (62)$$

The boundary condition at $x = 0$, in terms of central differences, can be written as

$$\frac{w_{1,j} - w_{-1,j}}{2h_x} = w_{0,j}. \qquad (63)$$

Using (63) and (62) to eliminate the fictitious term $w_{-1,j}$ we get

$$w_{0,j+1} = w_{0,j} + 2r((-1 - h_x)w_{0,j} + w_{1,j}).$$



8.7.1.3 Example 3
Using the central difference approximation for the derivative boundary condition and the Crank-Nicholson method to approximate the PDE, the difference equation is

$$\frac{w_{i,j+1} - w_{i,j}}{k} = \frac{1}{2}\left(\frac{w_{i+1,j+1} - 2w_{i,j+1} + w_{i-1,j+1}}{h^2} + \frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2}\right),$$

giving

$$-rw_{i-1,j+1} + (2+2r)w_{i,j+1} - rw_{i+1,j+1} = rw_{i-1,j} + (2-2r)w_{i,j} + rw_{i+1,j} \qquad (64)$$

with $r = \frac{k}{h^2}$.
The boundary condition at $x = 0$, in terms of central differences, can be written as

$$\frac{w_{1,j} - w_{-1,j}}{2h_x} = w_{0,j}.$$

Rearranging, we have

$$w_{-1,j} = w_{1,j} - 2h_x w_{0,j} \qquad (65)$$

and

$$w_{-1,j+1} = w_{1,j+1} - 2h_x w_{0,j+1}. \qquad (66)$$

Let $j = 0$ and $i = 0$; the difference equation becomes

$$-rw_{-1,1} + (2+2r)w_{0,1} - rw_{1,1} = rw_{-1,0} + (2-2r)w_{0,0} + rw_{1,0}. \qquad (67)$$

Using (65), (66) and (67) we can eliminate the fictitious terms $w_{-1,j}$ and $w_{-1,j+1}$.

8.8 local truncation error and consistency

Let $F_{ij}(w)$ represent the difference equation approximating the PDE at the $ij$th point, with exact solution $w$.
If $w$ is replaced by $U$ at the mesh points of the difference equation, where $U$ is the exact solution of the PDE, the value of $F_{ij}(U)$ is the local truncation error $T_{ij}$ at the $ij$th mesh point.
Using Taylor expansions it is easy to express $T_{ij}$ in terms of $h_x$ and $k$ and partial derivatives of $U$ at $(ih_x, jk)$.
Although $U$ and its derivatives are generally unknown, this is worthwhile because it provides a method for comparing the local accuracies of different difference schemes approximating the PDE.

Example 49
The local truncation error of the classical explicit difference approach to

$$\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2} = 0$$

with

$$F_{ij}(w) = \frac{w_{i,j+1} - w_{i,j}}{k} - \frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h_x^2} = 0$$

is

$$T_{ij} = F_{ij}(U) = \frac{U_{i,j+1} - U_{i,j}}{k} - \frac{U_{i+1,j} - 2U_{i,j} + U_{i-1,j}}{h_x^2}.$$

By Taylor expansions we have

$$U_{i+1,j} = U((i+1)h_x, jk) = U(x_i + h_x, t_j) = U_{ij} + h_x\left(\frac{\partial U}{\partial x}\right)_{ij} + \frac{h_x^2}{2}\left(\frac{\partial^2 U}{\partial x^2}\right)_{ij} + \frac{h_x^3}{6}\left(\frac{\partial^3 U}{\partial x^3}\right)_{ij} + \ldots$$

$$U_{i-1,j} = U((i-1)h_x, jk) = U(x_i - h_x, t_j) = U_{ij} - h_x\left(\frac{\partial U}{\partial x}\right)_{ij} + \frac{h_x^2}{2}\left(\frac{\partial^2 U}{\partial x^2}\right)_{ij} - \frac{h_x^3}{6}\left(\frac{\partial^3 U}{\partial x^3}\right)_{ij} + \ldots$$

$$U_{i,j+1} = U(ih_x, (j+1)k) = U(x_i, t_j + k) = U_{ij} + k\left(\frac{\partial U}{\partial t}\right)_{ij} + \frac{k^2}{2}\left(\frac{\partial^2 U}{\partial t^2}\right)_{ij} + \frac{k^3}{6}\left(\frac{\partial^3 U}{\partial t^3}\right)_{ij} + \ldots$$

Substitution into the expression for $T_{ij}$ then gives

$$T_{ij} = \left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}\right)_{ij} + \frac{k}{2}\left(\frac{\partial^2 U}{\partial t^2}\right)_{ij} - \frac{h_x^2}{12}\left(\frac{\partial^4 U}{\partial x^4}\right)_{ij} + \frac{k^2}{6}\left(\frac{\partial^3 U}{\partial t^3}\right)_{ij} - \frac{h_x^4}{360}\left(\frac{\partial^6 U}{\partial x^6}\right)_{ij} + \ldots$$

But $U$ is the solution to the differential equation, so

$$\left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}\right)_{ij} = 0,$$

and the principal part of the local truncation error is

$$\frac{k}{2}\left(\frac{\partial^2 U}{\partial t^2}\right)_{ij} - \frac{h_x^2}{12}\left(\frac{\partial^4 U}{\partial x^4}\right)_{ij}.$$

Hence

$$T_{ij} = O(k) + O(h_x^2).$$
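The order of the truncation error can also be checked numerically; the following sketch (my own illustration, using the exact heat-equation solution $U = e^{-\pi^2 t}\sin(\pi x)$ chosen for convenience) evaluates $T_{ij}$ at a fixed point for shrinking mesh sizes:

```python
import numpy as np

# Numerical check of T_ij = O(k) + O(h^2) for the explicit scheme, using the
# exact solution U(x,t) = exp(-pi^2 t) sin(pi x) of U_t = U_xx.
U = lambda x, t: np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
x0, t0 = 0.3, 0.1
for h in [0.1, 0.05, 0.025]:
    k = h**2                       # so both error terms shrink like h^2
    T = ((U(x0, t0 + k) - U(x0, t0)) / k
         - (U(x0 + h, t0) - 2*U(x0, t0) + U(x0 - h, t0)) / h**2)
    print(h, T)                    # T decreases by roughly 4 when h halves
```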

8.9 consistency and compatibility

It is sometimes possible to approximate a parabolic or hyperbolic equation with a finite difference scheme that is stable but which does not converge to the solution of the differential equation as the mesh lengths tend to zero. Such a scheme is called inconsistent or incompatible.
This is useful when considering the theorem which states that if a linear finite difference equation is consistent with a properly posed linear IVP, then stability guarantees convergence of $w$ to $U$ as the mesh lengths tend to zero.

Definition Let $L(U) = 0$ represent the PDE in the independent variables $x$ and $t$, with exact solution $U$.
Let $F(w) = 0$ represent the approximate finite difference equation with exact solution $w$.
Let $v$ be a continuous function of $x$ and $t$ with sufficient derivatives to enable $L(v)$ to be evaluated at the point $(ih_x, jk)$. Then the truncation error $T_{ij}(v)$ at $(ih_x, jk)$ is defined by

$$T_{ij}(v) = F_{ij}(v) - L(v_{ij}).$$

If $T_{ij}(v) \to 0$ as $h \to 0$, $k \to 0$, the difference equation is said to be consistent or compatible with the PDE. ◦

Looking back at the previous example, it follows that the classical explicit difference equation is consistent with the PDE

$$\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}.$$

8.10 convergence and stability

Definition By convergence we mean that the results of the method approach the analytical solution as $k$ and $h_x$ tend to zero. ◦

Definition By stability we mean that errors at one stage of the calculations do not cause increasingly large errors as the computations are continued. ◦

8.11 stability by the fourier series method (von neumann's method)

This method uses a Fourier series to express $w_{pq} = w(ph_x, qk)$, which is

$$w_{pq} = e^{i\beta x}\xi^q,$$

where $\xi = e^{\alpha k}$; in this case $i$ denotes the complex number $i = \sqrt{-1}$, and $\beta$ takes the values needed to satisfy the initial conditions. $\xi$ is known as the amplification factor. The finite difference equation will be stable if $|w_{pq}|$ remains bounded for all $q$ as $h \to 0$, $k \to 0$ and for all $\beta$.
If the exact solution does not increase exponentially with time, then a necessary and sufficient condition is that

$$|\xi| \leq 1.$$

8.11.1 Stability for the explicit FTCS Method

Investigating the stability of the fully explicit difference equation

$$\frac{1}{k}(w_{p,q+1} - w_{pq}) = \frac{1}{h_x^2}(w_{p-1,q} - 2w_{pq} + w_{p+1,q})$$

approximating $\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}$ at $(ph_x, qk)$. Substituting $w_{pq} = e^{i\beta x}\xi^q$ into the difference equation gives

$$e^{i\beta ph}\xi^{q+1} - e^{i\beta ph}\xi^q = r\{e^{i\beta(p-1)h}\xi^q - 2e^{i\beta ph}\xi^q + e^{i\beta(p+1)h}\xi^q\},$$

where $r = \frac{k}{h_x^2}$. Dividing across by $e^{i\beta ph}\xi^q$ leads to

$$\xi = 1 + r(e^{-i\beta h} - 2 + e^{i\beta h}) = 1 + r(2\cos(\beta h) - 2) = 1 - 4r\sin^2\left(\frac{\beta h}{2}\right).$$

For stability we need $|\xi| \leq 1$, that is

$$-1 \leq 1 - 4r\sin^2\left(\frac{\beta h}{2}\right) \leq 1;$$

for this to hold for all $\beta$ we need

$$4r\sin^2\left(\frac{\beta h}{2}\right) \leq 2,$$

which means

$$r \leq \frac{1}{2}.$$

Since $|\xi| \leq 1$ for $r \leq \frac{1}{2}$ and all $\beta$, the equation is conditionally stable.
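This condition is easy to probe numerically; a small sketch (my own illustration in numpy) evaluates the FTCS amplification factor over a sweep of $\beta$:

```python
import numpy as np

# Maximum |xi| of the FTCS amplification factor xi = 1 - 4 r sin^2(beta h / 2).
beta_h = np.linspace(0, np.pi, 200)          # beta*h covers one period
for r in [0.25, 0.5, 0.75]:
    xi = 1 - 4 * r * np.sin(beta_h / 2)**2
    print(r, np.abs(xi).max())               # exceeds 1 once r > 1/2
```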

8.11.2 Stability for the implicit BTCS Method

Investigating the stability of the fully implicit difference equation

$$\frac{1}{k}(w_{p,q+1} - w_{pq}) = \frac{1}{h_x^2}(w_{p-1,q+1} - 2w_{p,q+1} + w_{p+1,q+1})$$

approximating $\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}$ at $(ph_x, qk)$. Substituting $w_{pq} = e^{i\beta x}\xi^q$ into the difference equation gives

$$e^{i\beta ph}\xi^{q+1} - e^{i\beta ph}\xi^q = r\{e^{i\beta(p-1)h}\xi^{q+1} - 2e^{i\beta ph}\xi^{q+1} + e^{i\beta(p+1)h}\xi^{q+1}\},$$

where $r = \frac{k}{h_x^2}$. Dividing across by $e^{i\beta ph}\xi^q$ leads to

$$\xi - 1 = r\xi(e^{-i\beta h} - 2 + e^{i\beta h}) = r\xi(2\cos(\beta h) - 2) = -4r\xi\sin^2\left(\frac{\beta h}{2}\right).$$

Hence

$$\xi = \frac{1}{1 + 4r\sin^2\left(\frac{\beta h}{2}\right)}.$$

Since $0 < \xi \leq 1$ for all $r > 0$ and all $\beta$, the equation is unconditionally stable.

8.11.3 Stability for the Crank Nicholson Method

Investigating the stability of the Crank-Nicholson difference equation

$$\frac{1}{k}(w_{p,q+1} - w_{pq}) = \frac{1}{2h_x^2}(w_{p-1,q+1} - 2w_{p,q+1} + w_{p+1,q+1}) + \frac{1}{2h_x^2}(w_{p-1,q} - 2w_{pq} + w_{p+1,q})$$

approximating $\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}$ at $(ph_x, qk)$. Substituting $w_{pq} = e^{i\beta x}\xi^q$ into the difference equation gives

$$e^{i\beta ph}\xi^{q+1} - e^{i\beta ph}\xi^q = \frac{r}{2}\{e^{i\beta(p-1)h}\xi^{q+1} - 2e^{i\beta ph}\xi^{q+1} + e^{i\beta(p+1)h}\xi^{q+1} + e^{i\beta(p-1)h}\xi^q - 2e^{i\beta ph}\xi^q + e^{i\beta(p+1)h}\xi^q\},$$

where $r = \frac{k}{h_x^2}$. Dividing across by $e^{i\beta ph}\xi^q$ leads to

$$\xi - 1 = \frac{r}{2}\xi(e^{-i\beta h} - 2 + e^{i\beta h}) + \frac{r}{2}(e^{-i\beta h} - 2 + e^{i\beta h}) = -2r\xi\sin^2\left(\frac{\beta h}{2}\right) - 2r\sin^2\left(\frac{\beta h}{2}\right).$$

Hence

$$\xi = \frac{1 - 2r\sin^2\left(\frac{\beta h}{2}\right)}{1 + 2r\sin^2\left(\frac{\beta h}{2}\right)}.$$

Since $|\xi| \leq 1$ for all $r > 0$ and all $\beta$, the equation is unconditionally stable.
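For comparison, the three amplification factors can be evaluated side by side (again a numpy illustration, with r chosen well beyond the explicit limit):

```python
import numpy as np

# Amplification factors at r = 2 (beyond the explicit limit r = 1/2).
r = 2.0
s = np.sin(np.linspace(0, np.pi, 200) / 2)**2    # sin^2(beta h / 2)
ftcs = 1 - 4 * r * s
btcs = 1 / (1 + 4 * r * s)
cn = (1 - 2 * r * s) / (1 + 2 * r * s)
for name, xi in [("FTCS", ftcs), ("BTCS", btcs), ("CN", cn)]:
    print(name, np.abs(xi).max())   # only FTCS exceeds 1
```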

8.12 parabolic equations questions

8.12.1 Explicit Equations

1. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the explicit numerical scheme
      $$w_{j,k+1} = rw_{j-1,k} + (1 - 2r)w_{j,k} + rw_{j+1,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[10 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 1, \quad u(1, t) = 1,$$
      and initial condition
      $$u(x, 0) = 4x^2 - 4x + 1.$$
      Taking $h = \frac{1}{4}$ in the x-direction and $k = \frac{1}{32}$ in the t-direction, set up and solve the corresponding systems of finite difference equations for one time step.

[18 marks]
c) For the explicit method what is the step-size requirement
for h and k for the method to be stable.
[5 marks]

2. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the explicit numerical scheme
      $$w_{j,k+1} = rw_{j-1,k} + (1 - 2r)w_{j,k} + rw_{j+1,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[10 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 1, \quad u(1, t) = 1,$$
      and initial condition
      $$u(x, 0) = \begin{cases} 1 - x & \text{for } 0 \leq x \leq \frac{1}{2}, \\ x & \text{for } \frac{1}{2} \leq x \leq 1. \end{cases}$$
      Taking $h = \frac{1}{5}$ in the x-direction and $k = \frac{1}{250}$ in the t-direction, set up and solve the corresponding systems of finite difference equations for one time step.

[18 marks]
c) For the explicit method what is the step-size requirement
for h and k for the method to be stable.
[5 marks]

3. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the explicit numerical scheme
      $$w_{j,k+1} = rw_{j-1,k} + (1 - 2r)w_{j,k} + rw_{j+1,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[10 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 0, \quad u(1, t) = 0,$$
      and initial condition
      $$u(x, 0) = 2\sin(2\pi x).$$
      Taking $h = \frac{1}{6}$ in the x-direction and $k = \frac{1}{144}$ in the t-direction, set up and solve the corresponding systems of finite difference equations for one time step.

[18 marks]
c) For the explicit method what is the step-size requirement
for h and k for the method to be stable.
[5 marks]

8.12.2 Implicit Methods

4. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the implicit numerical scheme
      $$-rw_{j-1,k+1} + (1 + 2r)w_{j,k+1} - rw_{j+1,k+1} = w_{j,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[13 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 1, \quad u(1, t) = 1,$$
      and initial condition
      $$u(x, 0) = 4x^2 - 4x + 1.$$
      Taking $h = \frac{1}{4}$ in the x-direction and $k = \frac{1}{32}$ in the t-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations for one time step.

[20 marks]

5. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the implicit numerical scheme
      $$-rw_{j-1,k+1} + (1 + 2r)w_{j,k+1} - rw_{j+1,k+1} = w_{j,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[13 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 1, \quad u(1, t) = 1,$$
      and initial condition
      $$u(x, 0) = \begin{cases} 1 - x & \text{for } 0 \leq x \leq \frac{1}{2}, \\ x & \text{for } \frac{1}{2} \leq x \leq 1. \end{cases}$$
      Taking $h = \frac{1}{5}$ in the x-direction and $k = \frac{1}{250}$ in the t-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations for one time step.

[20 marks]

6. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the implicit numerical scheme
      $$-rw_{j-1,k+1} + (1 + 2r)w_{j,k+1} - rw_{j+1,k+1} = w_{j,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[13 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 0, \quad u(1, t) = 0,$$
      and initial condition
      $$u(x, 0) = 2\sin(2\pi x).$$
      Taking $h = \frac{1}{6}$ in the x-direction and $k = \frac{1}{144}$ in the t-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations for one time step.

[20 marks]

8.12.3 Crank Nicholson Methods

7. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the Crank Nicholson numerical scheme
      $$-rw_{j-1,k+1} + (2 + 2r)w_{j,k+1} - rw_{j+1,k+1} = rw_{j-1,k} + (2 - 2r)w_{j,k} + rw_{j+1,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[13 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 1, \quad u(1, t) = 1,$$
      and initial condition
      $$u(x, 0) = 4x^2 - 4x + 1.$$
      Taking $h = \frac{1}{4}$ in the x-direction and $k = \frac{1}{32}$ in the t-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations for one time step.

[20 marks]

8. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the Crank Nicholson numerical scheme
      $$-rw_{j-1,k+1} + (2 + 2r)w_{j,k+1} - rw_{j+1,k+1} = rw_{j-1,k} + (2 - 2r)w_{j,k} + rw_{j+1,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[13 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 1, \quad u(1, t) = 1,$$
      and initial condition
      $$u(x, 0) = \begin{cases} 1 - x & \text{for } 0 \leq x \leq \frac{1}{2}, \\ x & \text{for } \frac{1}{2} \leq x \leq 1. \end{cases}$$
      Taking $h = \frac{1}{5}$ in the x-direction and $k = \frac{1}{250}$ in the t-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations for one time step.

[20 marks]

9. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the Crank Nicholson numerical scheme
      $$-rw_{j-1,k+1} + (2 + 2r)w_{j,k+1} - rw_{j+1,k+1} = rw_{j-1,k} + (2 - 2r)w_{j,k} + rw_{j+1,k},$$
      where $r = \frac{k}{h^2}$, $k$ is the step in the time direction and $h$ is the step in the x direction, for the Heat equation
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\}.$$

[13 marks]
   b) Consider the problem
      $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
      on the rectangular domain
      $$\Omega = \{(t, x)\,|\; 0 \leq t,\ 0 \leq x \leq 1\},$$
      with the boundary conditions
      $$u(0, t) = 0, \quad u(1, t) = 0,$$
      and initial condition
      $$u(x, 0) = 2\sin(2\pi x).$$
      Taking $h = \frac{1}{6}$ in the x-direction and $k = \frac{1}{144}$ in the t-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations for one time step.

[20 marks]
9
ELLIPTIC PDES

The Poisson equation is

$$-\nabla^2 U(x,y) = f(x,y), \quad (x,y) \in \Omega = (0,1)\times(0,1), \qquad (68)$$

where $\nabla^2$ is the Laplacian,

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2},$$

with boundary conditions

$$U(x,y) = g(x,y), \quad (x,y) \in \partial\Omega \text{ (the boundary)}.$$

9.1 the five point approximation of the laplacian

To numerically approximate the solution of the Poisson Equation (68), the unit square region $\bar{\Omega} = [0,1]\times[0,1] = \Omega \cup \partial\Omega$ must be discretised into a uniform grid

$$\bar{\Omega}_h = \{(x_i, y_j) \in [0,1]\times[0,1] : x_i = ih,\ y_j = jh\}$$

for $i = 0, 1, \ldots, N$ and $j = 0, 1, \ldots, N$, where $N$ is a positive constant. The interior nodes of the grid are defined as

$$\Omega_h = \{(x_i, y_j) \in \bar{\Omega}_h : 1 \leq i, j \leq N-1\},$$

and the boundary nodes are

$$\partial\Omega_h = \{(x_0, y_j), (x_N, y_j), (x_i, y_0), (x_i, y_N) : 1 \leq i, j \leq N-1\}.$$

The Poisson Equation (68) is discretised using $\delta_x^2$, the central difference approximation of the second derivative in the x direction,

$$\delta_x^2 w_{ij} = \frac{1}{h^2}(w_{i+1,j} - 2w_{ij} + w_{i-1,j}),$$


and $\delta_y^2$, the central difference approximation of the second derivative in the y direction,

$$\delta_y^2 w_{ij} = \frac{1}{h^2}(w_{i,j+1} - 2w_{ij} + w_{i,j-1}).$$

This gives the Poisson Difference Equation,

$$-\nabla_h^2 w_{ij} = f_{ij} \quad (x_i, y_j) \in \Omega_h, \qquad (69)$$
$$-(\delta_x^2 w_{ij} + \delta_y^2 w_{ij}) = f_{ij} \quad (x_i, y_j) \in \Omega_h, \qquad (70)$$
$$w_{ij} = g_{ij} \quad (x_i, y_j) \in \partial\Omega_h, \qquad (71)$$

where $w_{ij}$ is the numerical approximation of $U$ at $x_i$ and $y_j$. Expanding the Poisson Difference Equation (70) gives the five point method,

$$-(w_{i-1,j} + w_{i,j-1} - 4w_{ij} + w_{i,j+1} + w_{i+1,j}) = h^2 f_{ij}$$

for $i = 1, \ldots, N-1$ and $j = 1, \ldots, N-1$, which is depicted in Figure 9.1.1 on a $6\times 6$ discrete grid.

Figure 9.1.1: Graphical representation of the difference equation stencil.

Unlike the Parabolic equation, the Elliptic equation cannot be es-


timated by holding one variable constant and then stepping in that
direction. The approximation must be solved at all points at the same
instant.

9.1.1 Matrix representation of the five point scheme

The five point scheme results in a system of $(N-1)^2$ equations for the $(N-1)^2$ unknowns. This is depicted in Figure 9.1.1 on a $6\times 6 = 36$ point grid, where there is a grid of $4\times 4 = 16$ unknowns (red) surrounded by the boundary of 20 known values. The general set of $4\times 4$ equations of the Poisson difference equation on the $6\times 6$ grid, where

$$h = \frac{1}{6-1} = \frac{1}{5},$$

can be written as:

$j = 1$:
  $i = 1$: $\; w_{0,1} + w_{1,0} - 4w_{1,1} + w_{1,2} + w_{2,1} = -\left(\frac{1}{5}\right)^2 f_{1,1}$
  $i = 2$: $\; w_{1,1} + w_{2,0} - 4w_{2,1} + w_{2,2} + w_{3,1} = -\left(\frac{1}{5}\right)^2 f_{2,1}$
  $i = 3$: $\; w_{2,1} + w_{3,0} - 4w_{3,1} + w_{3,2} + w_{4,1} = -\left(\frac{1}{5}\right)^2 f_{3,1}$
  $i = 4$: $\; w_{3,1} + w_{4,0} - 4w_{4,1} + w_{4,2} + w_{5,1} = -\left(\frac{1}{5}\right)^2 f_{4,1}$

$j = 2$:
  $i = 1$: $\; w_{0,2} + w_{1,1} - 4w_{1,2} + w_{1,3} + w_{2,2} = -\left(\frac{1}{5}\right)^2 f_{1,2}$
  $i = 2$: $\; w_{1,2} + w_{2,1} - 4w_{2,2} + w_{2,3} + w_{3,2} = -\left(\frac{1}{5}\right)^2 f_{2,2}$
  $i = 3$: $\; w_{2,2} + w_{3,1} - 4w_{3,2} + w_{3,3} + w_{4,2} = -\left(\frac{1}{5}\right)^2 f_{3,2}$
  $i = 4$: $\; w_{3,2} + w_{4,1} - 4w_{4,2} + w_{4,3} + w_{5,2} = -\left(\frac{1}{5}\right)^2 f_{4,2}$

$j = 3$:
  $i = 1$: $\; w_{0,3} + w_{1,2} - 4w_{1,3} + w_{1,4} + w_{2,3} = -\left(\frac{1}{5}\right)^2 f_{1,3}$
  $i = 2$: $\; w_{1,3} + w_{2,2} - 4w_{2,3} + w_{2,4} + w_{3,3} = -\left(\frac{1}{5}\right)^2 f_{2,3}$
  $i = 3$: $\; w_{2,3} + w_{3,2} - 4w_{3,3} + w_{3,4} + w_{4,3} = -\left(\frac{1}{5}\right)^2 f_{3,3}$
  $i = 4$: $\; w_{3,3} + w_{4,2} - 4w_{4,3} + w_{4,4} + w_{5,3} = -\left(\frac{1}{5}\right)^2 f_{4,3}$

$j = 4$:
  $i = 1$: $\; w_{0,4} + w_{1,3} - 4w_{1,4} + w_{1,5} + w_{2,4} = -\left(\frac{1}{5}\right)^2 f_{1,4}$
  $i = 2$: $\; w_{1,4} + w_{2,3} - 4w_{2,4} + w_{2,5} + w_{3,4} = -\left(\frac{1}{5}\right)^2 f_{2,4}$
  $i = 3$: $\; w_{2,4} + w_{3,3} - 4w_{3,4} + w_{3,5} + w_{4,4} = -\left(\frac{1}{5}\right)^2 f_{3,4}$
  $i = 4$: $\; w_{3,4} + w_{4,3} - 4w_{4,4} + w_{4,5} + w_{5,4} = -\left(\frac{1}{5}\right)^2 f_{4,4}.$

This set of equations can be re-arranged by bringing the known boundary conditions $w_{0,j}$, $w_{5,j}$, $w_{i,0}$ and $w_{i,5}$ to the right hand side. This can be written as a $16\times 16$ matrix equation of the form:

$$\left(\begin{smallmatrix}
-4&1&0&0&1&0&0&0&0&0&0&0&0&0&0&0\\
1&-4&1&0&0&1&0&0&0&0&0&0&0&0&0&0\\
0&1&-4&1&0&0&1&0&0&0&0&0&0&0&0&0\\
0&0&1&-4&0&0&0&1&0&0&0&0&0&0&0&0\\
1&0&0&0&-4&1&0&0&1&0&0&0&0&0&0&0\\
0&1&0&0&1&-4&1&0&0&1&0&0&0&0&0&0\\
0&0&1&0&0&1&-4&1&0&0&1&0&0&0&0&0\\
0&0&0&1&0&0&1&-4&0&0&0&1&0&0&0&0\\
0&0&0&0&1&0&0&0&-4&1&0&0&1&0&0&0\\
0&0&0&0&0&1&0&0&1&-4&1&0&0&1&0&0\\
0&0&0&0&0&0&1&0&0&1&-4&1&0&0&1&0\\
0&0&0&0&0&0&0&1&0&0&1&-4&0&0&0&1\\
0&0&0&0&0&0&0&0&1&0&0&0&-4&1&0&0\\
0&0&0&0&0&0&0&0&0&1&0&0&1&-4&1&0\\
0&0&0&0&0&0&0&0&0&0&1&0&0&1&-4&1\\
0&0&0&0&0&0&0&0&0&0&0&1&0&0&1&-4
\end{smallmatrix}\right)
\left(\begin{smallmatrix}
w_{1,1}\\w_{2,1}\\w_{3,1}\\w_{4,1}\\w_{1,2}\\w_{2,2}\\w_{3,2}\\w_{4,2}\\w_{1,3}\\w_{2,3}\\w_{3,3}\\w_{4,3}\\w_{1,4}\\w_{2,4}\\w_{3,4}\\w_{4,4}
\end{smallmatrix}\right)
= -\left(\tfrac{1}{5}\right)^2
\left(\begin{smallmatrix}
f_{1,1}\\f_{2,1}\\f_{3,1}\\f_{4,1}\\f_{1,2}\\f_{2,2}\\f_{3,2}\\f_{4,2}\\f_{1,3}\\f_{2,3}\\f_{3,3}\\f_{4,3}\\f_{1,4}\\f_{2,4}\\f_{3,4}\\f_{4,4}
\end{smallmatrix}\right)
+\left(\begin{smallmatrix}
-w_{1,0}\\-w_{2,0}\\-w_{3,0}\\-w_{4,0}\\0\\0\\0\\0\\0\\0\\0\\0\\-w_{1,5}\\-w_{2,5}\\-w_{3,5}\\-w_{4,5}
\end{smallmatrix}\right)
+\left(\begin{smallmatrix}
-w_{0,1}\\0\\0\\-w_{5,1}\\-w_{0,2}\\0\\0\\-w_{5,2}\\-w_{0,3}\\0\\0\\-w_{5,3}\\-w_{0,4}\\0\\0\\-w_{5,4}
\end{smallmatrix}\right)$$

The horizontal and vertical lines are for display purposes, to help indicate each of the four sets of four equations.

9.1.2 Generalised Matrix form of the discrete Poisson Equation

The generalised form of this matrix system of equations results in $(N-1)^2$ equations, written as an $(N-1)^2 \times (N-1)^2$ square matrix $A$ and the $(N-1)^2 \times 1$ vectors $\mathbf{w}$, $\mathbf{f}$ and $\mathbf{b}$:

$$A\mathbf{w} = -h^2\mathbf{f} + \mathbf{b}.$$

The matrix can be written in the following block tridiagonal structure (Figure 9.1.2):

$$\begin{pmatrix}
T & I & 0 & \cdots & & \\
I & T & I & 0 & \cdots & \\
0 & \ddots & \ddots & \ddots & & \\
 & & \ddots & \ddots & \ddots & \\
 & \cdots & 0 & I & T & I \\
 & & \cdots & 0 & I & T
\end{pmatrix}
\begin{pmatrix} \mathbf{w}_1\\ \mathbf{w}_2\\ \vdots\\ \mathbf{w}_{N-2}\\ \mathbf{w}_{N-1} \end{pmatrix}
= -h^2
\begin{pmatrix} \mathbf{f}_1\\ \mathbf{f}_2\\ \vdots\\ \mathbf{f}_{N-2}\\ \mathbf{f}_{N-1} \end{pmatrix}
+
\begin{pmatrix} \mathbf{b}_1\\ \mathbf{b}_2\\ \vdots\\ \mathbf{b}_{N-2}\\ \mathbf{b}_{N-1} \end{pmatrix},$$

where $I$ denotes an $(N-1)\times(N-1)$ identity matrix and $T$ is an $(N-1)\times(N-1)$ tridiagonal matrix of the form

$$T = \begin{pmatrix}
-4 & 1 & 0 & \cdots & & \\
1 & -4 & 1 & 0 & \cdots & \\
0 & \ddots & \ddots & \ddots & & \\
 & & \ddots & \ddots & \ddots & \\
 & \cdots & 0 & 1 & -4 & 1 \\
 & & \cdots & 0 & 1 & -4
\end{pmatrix};$$

$\mathbf{w}_j$ is an $(N-1)\times 1$ vector of approximations $w_{ij}$,

$$\mathbf{w}_j = \begin{pmatrix} w_{1,j}\\ w_{2,j}\\ \vdots\\ w_{N-2,j}\\ w_{N-1,j} \end{pmatrix};$$

the vector $\mathbf{f}$ is made up of $(N-1)$ vectors of length $(N-1)\times 1$,

$$\mathbf{f}_j = \begin{pmatrix} f_{1,j}\\ f_{2,j}\\ \vdots\\ f_{N-2,j}\\ f_{N-1,j} \end{pmatrix};$$

finally, $\mathbf{b}$ is the vector of boundary conditions, made up for each $j$ of two vectors of length $(N-1)\times 1$,

$$\mathbf{b}_j = \mathbf{b}_{\text{left,right},j} + \mathbf{b}_{\text{top,bottom},j} = -\begin{pmatrix} g_{0,j}\\ 0\\ \vdots\\ 0\\ g_{N,j} \end{pmatrix} - \begin{pmatrix} 0\\ 0\\ \vdots\\ 0\\ 0 \end{pmatrix}$$

for $j = 2, \ldots, N-2$, while for $j = 1$ and $j = N-1$

$$\mathbf{b}_1 = -\begin{pmatrix} g_{0,1}\\ 0\\ \vdots\\ 0\\ g_{N,1} \end{pmatrix} - \begin{pmatrix} g_{1,0}\\ g_{2,0}\\ \vdots\\ g_{N-2,0}\\ g_{N-1,0} \end{pmatrix}, \qquad \mathbf{b}_{N-1} = -\begin{pmatrix} g_{0,N-1}\\ 0\\ \vdots\\ 0\\ g_{N,N-1} \end{pmatrix} - \begin{pmatrix} g_{1,N}\\ g_{2,N}\\ \vdots\\ g_{N-2,N}\\ g_{N-1,N} \end{pmatrix}.$$

Figure 9.1.2: Graphical representation of the large sparse matrix A for the discrete solution of the Poisson Equation.

The matrix has a unique solution. For sparse matrices of this form an iterative method is usually used, as it would be too computationally expensive to compute the inverse.
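A minimal sketch of how this block structure can be assembled and solved in Python (my own illustration using scipy.sparse, with N and f chosen arbitrarily):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 5                               # grid 0..N, (N-1)^2 interior unknowns
h = 1.0 / N
n = N - 1
# T = tridiag(1, -4, 1); A = block tridiagonal (I, T, I) via Kronecker products.
T = sp.diags([1, -4, 1], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = sp.kron(I, T) + sp.kron(sp.diags([1, 1], [-1, 1], shape=(n, n)), I)

x = y = np.arange(1, N) * h
X, Y = np.meshgrid(x, y)            # interior points, one grid row per j
f = (X**2 + Y**2).ravel()           # an arbitrary right-hand side
b = np.zeros(n * n)                 # zero boundary data for this sketch
w = spla.spsolve(A.tocsr(), -h**2 * f + b)
```

For larger grids an iterative solver (e.g. conjugate gradients on the symmetric system) would replace the direct solve, as noted above.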

9.2 specific examples

This section will work through three example problems:

1. the homogeneous form of the Poisson Equation (the Laplacian),

2. the Poisson Equation with zero boundary conditions,

3. the Poisson Equation with non-zero boundary conditions.



9.2.1 Example 1: Homogeneous equation with non-zero boundary

Consider the Homogeneous Poisson Equation (also known as the Laplacian):

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \quad (x,y) \in \Omega = (0,1)\times(0,1),$$

with boundary conditions:
lower boundary,
$$u(x, 0) = \sin(2\pi x),$$
upper boundary,
$$u(x, 1) = \sin(2\pi x),$$
left boundary,
$$u(0, y) = 2\sin(2\pi y),$$
right boundary,
$$u(1, y) = 2\sin(2\pi y).$$

The general difference equation for the Laplacian is of the form

$$-(w_{i-1,j} + w_{i,j-1} - 4w_{ij} + w_{i,j+1} + w_{i+1,j}) = 0.$$

Here, $N = 4$, which gives the step-size

$$h = \frac{1}{4},$$

and

$$x_i = \frac{i}{4}, \quad y_j = \frac{j}{4},$$

for $i = 0, 1, 2, 3, 4$ and $j = 0, 1, 2, 3, 4$. This gives a system of $3\times 3 = 9$ equations:

j=1
12
i = 1 w0,1 + w1,0 − 4w1,1 + w1,2 + w2,1 = 4 0
12
i = 2 w1,1 + w2,0 − 4w2,1 + w2,2 + w3,1 = 4 0
12
i = 3 w2,1 + w3,0 − 4w3,1 + w3,2 + w4,1 = 4 0

j=2
12
i = 1 w0,2 + w1,1 − 4w1,2 + w1,3 + w2,2 = 4 0
12
i = 2 w1,2 + w2,1 − 4w2,2 + w2,3 + w3,2 = 4 0
12
i = 3 w2,2 + w3,1 − 4w3,2 + w3,3 + w4,2 = 4 0

j=3
12
i = 1 w0,3 + w1,2 − 4w1,3 + w1,4 + w2,3 = 4 0
12
i = 2 w1,3 + w2,2 − 4w2,3 + w2,4 + w3,3 = 4 0
12
i = 3 w2,3 + w3,2 − 4w3,3 + w3,4 + w4,3 = 4 0.

This system is then rearranged by bringing the known boundary conditions to the right hand side, to give:

$j = 1$:
  $i = 1$: $\; -4w_{1,1} + w_{1,2} + w_{2,1} = \frac{1}{4^2}\cdot 0 - w_{0,1} - w_{1,0}$
  $i = 2$: $\; w_{1,1} - 4w_{2,1} + w_{2,2} + w_{3,1} = \frac{1}{4^2}\cdot 0 - w_{2,0}$
  $i = 3$: $\; w_{2,1} - 4w_{3,1} + w_{3,2} = \frac{1}{4^2}\cdot 0 - w_{4,1} - w_{3,0}$

$j = 2$:
  $i = 1$: $\; w_{1,1} - 4w_{1,2} + w_{1,3} + w_{2,2} = \frac{1}{4^2}\cdot 0 - w_{0,2}$
  $i = 2$: $\; w_{1,2} + w_{2,1} - 4w_{2,2} + w_{2,3} + w_{3,2} = \frac{1}{4^2}\cdot 0$
  $i = 3$: $\; w_{2,2} + w_{3,1} - 4w_{3,2} + w_{3,3} = \frac{1}{4^2}\cdot 0 - w_{4,2}$

$j = 3$:
  $i = 1$: $\; w_{1,2} - 4w_{1,3} + w_{2,3} = \frac{1}{4^2}\cdot 0 - w_{0,3} - w_{1,4}$
  $i = 2$: $\; w_{1,3} + w_{2,2} - 4w_{2,3} + w_{3,3} = \frac{1}{4^2}\cdot 0 - w_{2,4}$
  $i = 3$: $\; w_{2,3} + w_{3,2} - 4w_{3,3} = \frac{1}{4^2}\cdot 0 - w_{4,3} - w_{3,4}$
Given the discrete boundary conditions:

Left boundary ($x_0 = 0$, $u(0,y) = 2\sin(2\pi y)$):
  $w_{0,0} = 0$, $w_{0,1} = 2\sin(2\pi y_1) = 2$, $w_{0,2} = 2\sin(2\pi y_2) = 0$, $w_{0,3} = 2\sin(2\pi y_3) = -2$, $w_{0,4} = 2\sin(2\pi y_4) = 0$.

Right boundary ($x_4 = 1$, $u(1,y) = 2\sin(2\pi y)$):
  $w_{4,0} = 0$, $w_{4,1} = 2$, $w_{4,2} = 0$, $w_{4,3} = -2$, $w_{4,4} = 0$.

Lower boundary ($y_0 = 0$, $u(x,0) = \sin(2\pi x)$):
  $w_{0,0} = 0$, $w_{1,0} = \sin(2\pi x_1) = 1$, $w_{2,0} = 0$, $w_{3,0} = -1$, $w_{4,0} = 0$.

Upper boundary ($y_4 = 1$, $u(x,1) = \sin(2\pi x)$):
  $w_{0,4} = 0$, $w_{1,4} = 1$, $w_{2,4} = 0$, $w_{3,4} = -1$, $w_{4,4} = 0$.

Figure 9.2.1: Sine Wave Boundary Conditions.

The system of equations is written in matrix form:

$$\begin{pmatrix}
-4&1&0&1&0&0&0&0&0\\
1&-4&1&0&1&0&0&0&0\\
0&1&-4&0&0&1&0&0&0\\
1&0&0&-4&1&0&1&0&0\\
0&1&0&1&-4&1&0&1&0\\
0&0&1&0&1&-4&0&0&1\\
0&0&0&1&0&0&-4&1&0\\
0&0&0&0&1&0&1&-4&1\\
0&0&0&0&0&1&0&1&-4
\end{pmatrix}
\begin{pmatrix} w_{1,1}\\ w_{2,1}\\ w_{3,1}\\ w_{1,2}\\ w_{2,2}\\ w_{3,2}\\ w_{1,3}\\ w_{2,3}\\ w_{3,3} \end{pmatrix}
=
\begin{pmatrix} -w_{1,0}\\ -w_{2,0}\\ -w_{3,0}\\ 0\\ 0\\ 0\\ -w_{1,4}\\ -w_{2,4}\\ -w_{3,4} \end{pmatrix}
+
\begin{pmatrix} -w_{0,1}\\ 0\\ -w_{4,1}\\ -w_{0,2}\\ 0\\ -w_{4,2}\\ -w_{0,3}\\ 0\\ -w_{4,3} \end{pmatrix},$$

where the matrix is $9\times 9$, and is graphically represented in Figure 9.2.2, where the colours indicate the values in each cell.

Figure 9.2.2: Graphical representation of the matrix



For the given boundary conditions the matrix equation is written as:

$$\begin{pmatrix}
-4&1&0&1&0&0&0&0&0\\
1&-4&1&0&1&0&0&0&0\\
0&1&-4&0&0&1&0&0&0\\
1&0&0&-4&1&0&1&0&0\\
0&1&0&1&-4&1&0&1&0\\
0&0&1&0&1&-4&0&0&1\\
0&0&0&1&0&0&-4&1&0\\
0&0&0&0&1&0&1&-4&1\\
0&0&0&0&0&1&0&1&-4
\end{pmatrix}
\begin{pmatrix} w_{1,1}\\ w_{2,1}\\ w_{3,1}\\ w_{1,2}\\ w_{2,2}\\ w_{3,2}\\ w_{1,3}\\ w_{2,3}\\ w_{3,3} \end{pmatrix}
=
\begin{pmatrix} -1\\ 0\\ 1\\ 0\\ 0\\ 0\\ -1\\ 0\\ 1 \end{pmatrix}
+
\begin{pmatrix} -2\\ 0\\ -2\\ 0\\ 0\\ 0\\ 2\\ 0\\ 2 \end{pmatrix}.$$

Figure 9.2.3 shows the approximate solution of the Laplacian Equation for the given boundary conditions and $h = \frac{1}{4}$.

Figure 9.2.3: Numerical solution of the homogeneous differential equation.
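This $9\times 9$ system is small enough to solve directly; a sketch in numpy (my own illustration, using the boundary vectors derived above):

```python
import numpy as np

# Assemble the 9x9 five-point matrix for the 3x3 interior grid.
n = 3
A = np.zeros((n * n, n * n))
for j in range(n):
    for i in range(n):
        p = j * n + i
        A[p, p] = -4
        if i > 0:     A[p, p - 1] = 1   # left neighbour
        if i < n - 1: A[p, p + 1] = 1   # right neighbour
        if j > 0:     A[p, p - n] = 1   # lower neighbour
        if j < n - 1: A[p, p + n] = 1   # upper neighbour

# Right-hand side from the sine boundary values derived above.
rhs = np.array([-1, 0, 1, 0, 0, 0, -1, 0, 1]) + \
      np.array([-2, 0, -2, 0, 0, 0, 2, 0, 2])
w = np.linalg.solve(A, rhs)
print(w.reshape(n, n))   # rows j = 1..3 of the interior solution
```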

9.2.2 Example 2: Non-homogeneous equation with zero boundary

Consider the Poisson Equation

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = x^2 + y^2, \quad (x,y) \in \Omega = (0,1)\times(0,1),$$

with zero boundary conditions:
lower boundary,
$$u(x, 0) = 0,$$
upper boundary,
$$u(x, 1) = 0,$$
left boundary,
$$u(0, y) = 0,$$
right boundary,
$$u(1, y) = 0.$$
The difference equation is of the form

$$w_{i-1,j} + w_{i,j-1} - 4w_{ij} + w_{i,j+1} + w_{i+1,j} = h^2(x_i^2 + y_j^2).$$

Here, $N = 4$, which gives the step-size

$$h = \frac{1}{4},$$

and

$$x_i = \frac{i}{4}, \quad y_j = \frac{j}{4},$$

for $i = 0, 1, 2, 3, 4$ and $j = 0, 1, 2, 3, 4$. This gives a system of $3\times 3 = 9$ equations:

$j = 1$:
  $i = 1$: $\; w_{0,1} + w_{1,0} - 4w_{1,1} + w_{1,2} + w_{2,1} = \frac{1}{4^2}(x_1^2 + y_1^2)$
  $i = 2$: $\; w_{1,1} + w_{2,0} - 4w_{2,1} + w_{2,2} + w_{3,1} = \frac{1}{4^2}(x_2^2 + y_1^2)$
  $i = 3$: $\; w_{2,1} + w_{3,0} - 4w_{3,1} + w_{3,2} + w_{4,1} = \frac{1}{4^2}(x_3^2 + y_1^2)$

$j = 2$:
  $i = 1$: $\; w_{0,2} + w_{1,1} - 4w_{1,2} + w_{1,3} + w_{2,2} = \frac{1}{4^2}(x_1^2 + y_2^2)$
  $i = 2$: $\; w_{1,2} + w_{2,1} - 4w_{2,2} + w_{2,3} + w_{3,2} = \frac{1}{4^2}(x_2^2 + y_2^2)$
  $i = 3$: $\; w_{2,2} + w_{3,1} - 4w_{3,2} + w_{3,3} + w_{4,2} = \frac{1}{4^2}(x_3^2 + y_2^2)$

$j = 3$:
  $i = 1$: $\; w_{0,3} + w_{1,2} - 4w_{1,3} + w_{1,4} + w_{2,3} = \frac{1}{4^2}(x_1^2 + y_3^2)$
  $i = 2$: $\; w_{1,3} + w_{2,2} - 4w_{2,3} + w_{2,4} + w_{3,3} = \frac{1}{4^2}(x_2^2 + y_3^2)$
  $i = 3$: $\; w_{2,3} + w_{3,2} - 4w_{3,3} + w_{3,4} + w_{4,3} = \frac{1}{4^2}(x_3^2 + y_3^2)$
This system is then rearranged by bringing the known boundary conditions to the right hand side, to give:

$j = 1$:
  $i = 1$: $\; -4w_{1,1} + w_{1,2} + w_{2,1} = \frac{1}{4^2}(x_1^2 + y_1^2) - w_{0,1} - w_{1,0}$
  $i = 2$: $\; w_{1,1} - 4w_{2,1} + w_{2,2} + w_{3,1} = \frac{1}{4^2}(x_2^2 + y_1^2) - w_{2,0}$
  $i = 3$: $\; w_{2,1} - 4w_{3,1} + w_{3,2} = \frac{1}{4^2}(x_3^2 + y_1^2) - w_{4,1} - w_{3,0}$

$j = 2$:
  $i = 1$: $\; w_{1,1} - 4w_{1,2} + w_{1,3} + w_{2,2} = \frac{1}{4^2}(x_1^2 + y_2^2) - w_{0,2}$
  $i = 2$: $\; w_{1,2} + w_{2,1} - 4w_{2,2} + w_{2,3} + w_{3,2} = \frac{1}{4^2}(x_2^2 + y_2^2)$
  $i = 3$: $\; w_{2,2} + w_{3,1} - 4w_{3,2} + w_{3,3} = \frac{1}{4^2}(x_3^2 + y_2^2) - w_{4,2}$

$j = 3$:
  $i = 1$: $\; w_{1,2} - 4w_{1,3} + w_{2,3} = \frac{1}{4^2}(x_1^2 + y_3^2) - w_{0,3} - w_{1,4}$
  $i = 2$: $\; w_{1,3} + w_{2,2} - 4w_{2,3} + w_{3,3} = \frac{1}{4^2}(x_2^2 + y_3^2) - w_{2,4}$
  $i = 3$: $\; w_{2,3} + w_{3,2} - 4w_{3,3} = \frac{1}{4^2}(x_3^2 + y_3^2) - w_{4,3} - w_{3,4}.$
Given the zero boundary conditions:

Left boundary ($x_0 = 0$, $u(0,y) = 0$): $w_{0,j} = 0$ for $j = 0, \ldots, 4$.
Right boundary ($x_4 = 1$, $u(1,y) = 0$): $w_{4,j} = 0$ for $j = 0, \ldots, 4$.
Lower boundary ($y_0 = 0$, $u(x,0) = 0$): $w_{i,0} = 0$ for $i = 0, \ldots, 4$.
Upper boundary ($y_4 = 1$, $u(x,1) = 0$): $w_{i,4} = 0$ for $i = 0, \ldots, 4$.

Figure 9.2.4: Zero Boundary Conditions.

The system of equations can be written in matrix form:

$$\begin{pmatrix}
-4&1&0&1&0&0&0&0&0\\
1&-4&1&0&1&0&0&0&0\\
0&1&-4&0&0&1&0&0&0\\
1&0&0&-4&1&0&1&0&0\\
0&1&0&1&-4&1&0&1&0\\
0&0&1&0&1&-4&0&0&1\\
0&0&0&1&0&0&-4&1&0\\
0&0&0&0&1&0&1&-4&1\\
0&0&0&0&0&1&0&1&-4
\end{pmatrix}
\begin{pmatrix} w_{1,1}\\ w_{2,1}\\ w_{3,1}\\ w_{1,2}\\ w_{2,2}\\ w_{3,2}\\ w_{1,3}\\ w_{2,3}\\ w_{3,3} \end{pmatrix}
= h^2
\begin{pmatrix} x_1^2 + y_1^2\\ x_2^2 + y_1^2\\ x_3^2 + y_1^2\\ x_1^2 + y_2^2\\ x_2^2 + y_2^2\\ x_3^2 + y_2^2\\ x_1^2 + y_3^2\\ x_2^2 + y_3^2\\ x_3^2 + y_3^2 \end{pmatrix}.$$

Substituting values into the right hand side gives the specific matrix form:

$$\begin{pmatrix}
-4&1&0&1&0&0&0&0&0\\
1&-4&1&0&1&0&0&0&0\\
0&1&-4&0&0&1&0&0&0\\
1&0&0&-4&1&0&1&0&0\\
0&1&0&1&-4&1&0&1&0\\
0&0&1&0&1&-4&0&0&1\\
0&0&0&1&0&0&-4&1&0\\
0&0&0&0&1&0&1&-4&1\\
0&0&0&0&0&1&0&1&-4
\end{pmatrix}
\begin{pmatrix} w_{1,1}\\ w_{2,1}\\ w_{3,1}\\ w_{1,2}\\ w_{2,2}\\ w_{3,2}\\ w_{1,3}\\ w_{2,3}\\ w_{3,3} \end{pmatrix}
=
\begin{pmatrix} 0.0078125\\ 0.01953125\\ 0.0390625\\ 0.01953125\\ 0.03125\\ 0.05078125\\ 0.0390625\\ 0.05078125\\ 0.0703125 \end{pmatrix}.$$

Figure 9.2.5 shows the numerical solution of the Poisson Equation with zero boundary conditions.

Figure 9.2.5: Numerical solution of the differential equation with zero boundary conditions.

9.2.3 Example 3: Inhomogeneous equation with non-zero boundary

Consider the Poisson Equation

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = xy, \quad (x,y) \in \Omega = (0,1)\times(0,1),$$

with boundary conditions:
lower boundary,
$$u(x, 0) = -x^2 + x,$$
upper boundary,
$$u(x, 1) = x^2 - x,$$
left boundary,
$$u(0, y) = -y^2 + y,$$
right boundary,
$$u(1, y) = -y^2 + y.$$
The five point difference equation is of the form

$$w_{i-1,j} + w_{i,j-1} - 4w_{ij} + w_{i,j+1} + w_{i+1,j} = h^2(x_i y_j).$$

Here, $N = 4$, which gives the step-size

$$h = \frac{1}{4},$$

and

$$x_i = \frac{i}{4}, \quad y_j = \frac{j}{4},$$

for $i = 0, 1, 2, 3, 4$ and $j = 0, 1, 2, 3, 4$. This gives a system of $3\times 3 = 9$ equations:

$j = 1$:
  $i = 1$: $\; w_{0,1} + w_{1,0} - 4w_{1,1} + w_{1,2} + w_{2,1} = \frac{1}{4^2}(x_1 y_1)$
  $i = 2$: $\; w_{1,1} + w_{2,0} - 4w_{2,1} + w_{2,2} + w_{3,1} = \frac{1}{4^2}(x_2 y_1)$
  $i = 3$: $\; w_{2,1} + w_{3,0} - 4w_{3,1} + w_{3,2} + w_{4,1} = \frac{1}{4^2}(x_3 y_1)$

$j = 2$:
  $i = 1$: $\; w_{0,2} + w_{1,1} - 4w_{1,2} + w_{1,3} + w_{2,2} = \frac{1}{4^2}(x_1 y_2)$
  $i = 2$: $\; w_{1,2} + w_{2,1} - 4w_{2,2} + w_{2,3} + w_{3,2} = \frac{1}{4^2}(x_2 y_2)$
  $i = 3$: $\; w_{2,2} + w_{3,1} - 4w_{3,2} + w_{3,3} + w_{4,2} = \frac{1}{4^2}(x_3 y_2)$

$j = 3$:
  $i = 1$: $\; w_{0,3} + w_{1,2} - 4w_{1,3} + w_{1,4} + w_{2,3} = \frac{1}{4^2}(x_1 y_3)$
  $i = 2$: $\; w_{1,3} + w_{2,2} - 4w_{2,3} + w_{2,4} + w_{3,3} = \frac{1}{4^2}(x_2 y_3)$
  $i = 3$: $\; w_{2,3} + w_{3,2} - 4w_{3,3} + w_{3,4} + w_{4,3} = \frac{1}{4^2}(x_3 y_3).$

Re-arranging the system such that the known values are on the right hand side:

$j = 1$:
  $i = 1$: $\; -4w_{1,1} + w_{1,2} + w_{2,1} = \frac{1}{4^2}(x_1 y_1) - w_{0,1} - w_{1,0}$
  $i = 2$: $\; w_{1,1} - 4w_{2,1} + w_{2,2} + w_{3,1} = \frac{1}{4^2}(x_2 y_1) - w_{2,0}$
  $i = 3$: $\; w_{2,1} - 4w_{3,1} + w_{3,2} = \frac{1}{4^2}(x_3 y_1) - w_{4,1} - w_{3,0}$

$j = 2$:
  $i = 1$: $\; w_{1,1} - 4w_{1,2} + w_{1,3} + w_{2,2} = \frac{1}{4^2}(x_1 y_2) - w_{0,2}$
  $i = 2$: $\; w_{1,2} + w_{2,1} - 4w_{2,2} + w_{2,3} + w_{3,2} = \frac{1}{4^2}(x_2 y_2)$
  $i = 3$: $\; w_{2,2} + w_{3,1} - 4w_{3,2} + w_{3,3} = \frac{1}{4^2}(x_3 y_2) - w_{4,2}$

$j = 3$:
  $i = 1$: $\; w_{1,2} - 4w_{1,3} + w_{2,3} = \frac{1}{4^2}(x_1 y_3) - w_{0,3} - w_{1,4}$
  $i = 2$: $\; w_{1,3} + w_{2,2} - 4w_{2,3} + w_{3,3} = \frac{1}{4^2}(x_2 y_3) - w_{2,4}$
  $i = 3$: $\; w_{2,3} + w_{3,2} - 4w_{3,3} = \frac{1}{4^2}(x_3 y_3) - w_{4,3} - w_{3,4}.$
The discrete boundary conditions are:

Left boundary ($x_0 = 0$, $u(0,y) = -y^2 + y$):
  $w_{0,0} = 0$, $w_{0,1} = -y_1^2 + y_1 = \frac{3}{16}$, $w_{0,2} = -y_2^2 + y_2 = \frac{1}{4}$, $w_{0,3} = -y_3^2 + y_3 = \frac{3}{16}$, $w_{0,4} = 0$.

Right boundary ($x_4 = 1$, $u(1,y) = -y^2 + y$):
  $w_{4,0} = 0$, $w_{4,1} = \frac{3}{16}$, $w_{4,2} = \frac{1}{4}$, $w_{4,3} = \frac{3}{16}$, $w_{4,4} = 0$.

Lower boundary ($y_0 = 0$, $u(x,0) = -x^2 + x$):
  $w_{0,0} = 0$, $w_{1,0} = -x_1^2 + x_1 = \frac{3}{16}$, $w_{2,0} = \frac{1}{4}$, $w_{3,0} = \frac{3}{16}$, $w_{4,0} = 0$.

Upper boundary ($y_4 = 1$, $u(x,1) = x^2 - x$):
  $w_{0,4} = 0$, $w_{1,4} = x_1^2 - x_1 = -\frac{3}{16}$, $w_{2,4} = -\frac{1}{4}$, $w_{3,4} = -\frac{3}{16}$, $w_{4,4} = 0$.
The system of equations can be written in $9\times 9$ matrix form:

$$\begin{pmatrix}
-4&1&0&1&0&0&0&0&0\\
1&-4&1&0&1&0&0&0&0\\
0&1&-4&0&0&1&0&0&0\\
1&0&0&-4&1&0&1&0&0\\
0&1&0&1&-4&1&0&1&0\\
0&0&1&0&1&-4&0&0&1\\
0&0&0&1&0&0&-4&1&0\\
0&0&0&0&1&0&1&-4&1\\
0&0&0&0&0&1&0&1&-4
\end{pmatrix}
\begin{pmatrix} w_{1,1}\\ w_{2,1}\\ w_{3,1}\\ w_{1,2}\\ w_{2,2}\\ w_{3,2}\\ w_{1,3}\\ w_{2,3}\\ w_{3,3} \end{pmatrix}
= h^2
\begin{pmatrix} x_1 y_1\\ x_2 y_1\\ x_3 y_1\\ x_1 y_2\\ x_2 y_2\\ x_3 y_2\\ x_1 y_3\\ x_2 y_3\\ x_3 y_3 \end{pmatrix}
+
\begin{pmatrix} -w_{1,0}\\ -w_{2,0}\\ -w_{3,0}\\ 0\\ 0\\ 0\\ -w_{1,4}\\ -w_{2,4}\\ -w_{3,4} \end{pmatrix}
+
\begin{pmatrix} -w_{0,1}\\ 0\\ -w_{4,1}\\ -w_{0,2}\\ 0\\ -w_{4,2}\\ -w_{0,3}\\ 0\\ -w_{4,3} \end{pmatrix},$$

inputting the specific boundary values and the right hand side of the equation gives:

$$\begin{pmatrix}
-4&1&0&1&0&0&0&0&0\\
1&-4&1&0&1&0&0&0&0\\
0&1&-4&0&0&1&0&0&0\\
1&0&0&-4&1&0&1&0&0\\
0&1&0&1&-4&1&0&1&0\\
0&0&1&0&1&-4&0&0&1\\
0&0&0&1&0&0&-4&1&0\\
0&0&0&0&1&0&1&-4&1\\
0&0&0&0&0&1&0&1&-4
\end{pmatrix}
\begin{pmatrix} w_{1,1}\\ w_{2,1}\\ w_{3,1}\\ w_{1,2}\\ w_{2,2}\\ w_{3,2}\\ w_{1,3}\\ w_{2,3}\\ w_{3,3} \end{pmatrix}
=
\left(\frac{1}{4}\right)^2
\begin{pmatrix} 0.0625\\ 0.125\\ 0.1875\\ 0.125\\ 0.25\\ 0.375\\ 0.1875\\ 0.375\\ 0.5625 \end{pmatrix}
+
\begin{pmatrix} -\frac{3}{16}\\ -\frac{1}{4}\\ -\frac{3}{16}\\ 0\\ 0\\ 0\\ \frac{3}{16}\\ \frac{1}{4}\\ \frac{3}{16} \end{pmatrix}
+
\begin{pmatrix} -\frac{3}{16}\\ 0\\ -\frac{3}{16}\\ -\frac{1}{4}\\ 0\\ -\frac{1}{4}\\ -\frac{3}{16}\\ 0\\ -\frac{3}{16} \end{pmatrix}.$$
Figure 9.2.6 shows the numerical solution of the Poisson Equation with non-zero boundary conditions.

Figure 9.2.6: Numerical solution of the differential equation with non-zero boundary conditions.

9.3 consistency and convergence

We now ask how well the grid function determined by the five point
scheme approximates the exact solution of the Poisson problem.

Definition Let $L_h$ denote the finite difference approximation, associated with the grid $\Omega_h$ having mesh size $h$, to a partial differential operator $L$ defined on a simply connected, open set $\Omega \subset \mathbb{R}^2$. For a given function $\varphi \in C^\infty(\Omega)$, the truncation error of $L_h$ is

$$\tau_h(\mathbf{x}) = (L - L_h)\varphi(\mathbf{x}).$$

The approximation $L_h$ is consistent with $L$ if

$$\lim_{h\to 0}\tau_h(\mathbf{x}) = 0,$$

for all $\mathbf{x} \in \Omega$ and all $\varphi \in C^\infty(\Omega)$. The approximation is consistent to order $p$ if $\tau_h(\mathbf{x}) = O(h^p)$. ◦

We have seen this definition a few times; the notation varies between settings, but the underlying idea is always the same.

Proposition 9.3.1. The five-point difference analog $-\nabla_h^2$ is consistent to order 2 with $-\nabla^2$.

Proof. Pick $\varphi \in C^\infty(\Omega)$, and let $(x,y) \in \Omega$ be a point such that $(x \pm h, y), (x, y \pm h) \in \Omega \cup \partial\Omega$. By the Taylor Theorem

$$\varphi(x \pm h, y) = \varphi(x,y) \pm h\frac{\partial\varphi}{\partial x}(x,y) + \frac{h^2}{2!}\frac{\partial^2\varphi}{\partial x^2}(x,y) \pm \frac{h^3}{3!}\frac{\partial^3\varphi}{\partial x^3}(x,y) + \frac{h^4}{4!}\frac{\partial^4\varphi}{\partial x^4}(\zeta^\pm, y),$$

where $\zeta^\pm \in (x-h, x+h)$. Adding this pair of equations together and rearranging, we get

$$\frac{1}{h^2}[\varphi(x+h,y) - 2\varphi(x,y) + \varphi(x-h,y)] - \frac{\partial^2\varphi}{\partial x^2}(x,y) = \frac{h^2}{4!}\left[\frac{\partial^4\varphi}{\partial x^4}(\zeta^+, y) + \frac{\partial^4\varphi}{\partial x^4}(\zeta^-, y)\right].$$

By the intermediate value theorem

$$\frac{\partial^4\varphi}{\partial x^4}(\zeta^+, y) + \frac{\partial^4\varphi}{\partial x^4}(\zeta^-, y) = 2\frac{\partial^4\varphi}{\partial x^4}(\zeta, y)$$

for some $\zeta \in (x-h, x+h)$. Therefore

$$\delta_x^2\varphi(x,y) = \frac{\partial^2\varphi}{\partial x^2}(x,y) + \frac{h^2}{12}\frac{\partial^4\varphi}{\partial x^4}(\zeta, y).$$

Similar reasoning shows that

$$\delta_y^2\varphi(x,y) = \frac{\partial^2\varphi}{\partial y^2}(x,y) + \frac{h^2}{12}\frac{\partial^4\varphi}{\partial y^4}(x, \eta)$$

for some $\eta \in (y-h, y+h)$. We conclude that $\tau_h(x,y) = (\nabla^2 - \nabla_h^2)\varphi(x,y) = O(h^2)$. •

Consistency does not guarantee that the solution to the difference


equations approximates the exact solution to the PDE.

Definition Let $L_h w(\mathbf{x}_j) = f(\mathbf{x}_j)$ be a finite difference approximation, defined on a grid of mesh size $h$, to a PDE $LU(\mathbf{x}) = f(\mathbf{x})$ on a simply connected set $D \subset \mathbb{R}^n$. Assume that $w(x,y) = U(x,y)$ at all points $(x,y)$ on the boundary $\partial\Omega$. The finite difference scheme converges (or is convergent) if

$$\max_j |U(\mathbf{x}_j) - w(\mathbf{x}_j)| \to 0 \quad \text{as } h \to 0.$$

For the five point scheme there is a direct connection between consistency and convergence. Underlying this connection is an argument based on the following principle:

Theorem 9.3.2. (DISCRETE MAXIMUM PRINCIPLE). If $\nabla_h^2 V_{ij} \geq 0$ for all points $(x_i, y_j) \in \Omega_h$, then

$$\max_{(x_i,y_j)\in\Omega_h} V_{ij} \leq \max_{(x_i,y_j)\in\partial\Omega_h} V_{ij}.$$

If $\nabla_h^2 V_{ij} \leq 0$ for all points $(x_i, y_j) \in \Omega_h$, then

$$\min_{(x_i,y_j)\in\Omega_h} V_{ij} \geq \min_{(x_i,y_j)\in\partial\Omega_h} V_{ij}.$$

In other words, a grid function $V$ for which $\nabla_h^2 V$ is nonnegative on $\Omega_h$ attains its maximum on the boundary $\partial\Omega_h$ of the grid. Similarly, if $\nabla_h^2 V$ is nonpositive on $\Omega_h$, then $V$ attains its minimum on the boundary $\partial\Omega_h$.

Proof. The proof is by contradiction. We argue for the case $\nabla_h^2 V_{ij} \geq 0$, the reasoning for the case $\nabla_h^2 V_{ij} \leq 0$ being similar.
Assume that $V$ attains its maximum value $M$ at an interior grid point $(x_I, y_J)$ and that $\max_{(x_i,y_j)\in\partial\Omega_h} V_{ij} < M$. The hypothesis $\nabla_h^2 V_{IJ} \geq 0$ implies that

$$V_{IJ} \leq \frac{1}{4}(V_{I+1,J} + V_{I-1,J} + V_{I,J+1} + V_{I,J-1}).$$

This cannot hold unless

$$V_{I+1,J} = V_{I-1,J} = V_{I,J+1} = V_{I,J-1} = M.$$

If any of the corresponding grid points $(x_{I+1}, y_J), (x_{I-1}, y_J), (x_I, y_{J+1}), (x_I, y_{J-1})$ lies in $\partial\Omega_h$, then we have reached the desired contradiction. Otherwise, we continue arguing in this way until we conclude that $V_{I+i,J+j} = M$ for some point $(x_{I+i}, y_{J+j}) \in \partial\Omega_h$, which again gives a contradiction. •

This leads to the following results.

Proposition 9.3.3. 1. The zero grid function (for which $U_{ij} = 0$ for all $(x_i, y_j) \in \Omega_h \cup \partial\Omega_h$) is the only solution to the finite difference problem

$$\nabla_h^2 U_{ij} = 0 \text{ for } (x_i, y_j) \in \Omega_h, \qquad U_{ij} = 0 \text{ for } (x_i, y_j) \in \partial\Omega_h.$$

2. For prescribed grid functions $f_{ij}$ and $g_{ij}$, there exists a unique solution to the problem

$$\nabla_h^2 U_{ij} = f_{ij} \text{ for } (x_i, y_j) \in \Omega_h, \qquad U_{ij} = g_{ij} \text{ for } (x_i, y_j) \in \partial\Omega_h.$$
Definition For any grid function $V : \Omega_h \cup \partial\Omega_h \to \mathbb{R}$,

$$\|V\|_\Omega = \max_{(x_i,y_j)\in\Omega_h} |V_{ij}|, \qquad \|V\|_{\partial\Omega} = \max_{(x_i,y_j)\in\partial\Omega_h} |V_{ij}|.$$


Lemma 9.3.4. If the grid function V : Ωh ∂Ωh → R satisfies the bound-
S

ary condition Vij = 0 for ( xi , y j ) ∈ ∂Ωh , then

1
||V| |Ω ≤ ||∇2h V ||Ω
8

Proof. Let ν = ||∇2h V ||Ω . Clearly for all points ( xi , y j ) ∈ Ωh ,

−ν ≤ ∇2h Vij ≤ ν (72)

Now we define W : Ωh ∂Ωh → R by setting Wij = 41 [( xi − 12 )2 +


S

(y j − 12 )2 ], which is nonnegative. Also ∇2h Wij = 1 and that ||W ||∂Ω =


8 . The inequality (72) implies that, for all points ( xi , y j ) ∈ Ω h ,
1

∇2h (Vij + νWij ) ≥ 0

∇2h (Vij − νWij ) ≤ 0


By the discrete minimum principle and the fact that V vanishes on
∂Ωh
Vij ≤ Vij + νWij ≤ ν||W ||∂Ω
Vij ≥ Vij − νWij ≥ −ν||W ||∂Ω
1
Since ||W ||∂Ω = 8

1 1
||V| |Ω ≤ ν = ||∇2h V ||Ω
8 8

Finally we prove that the five point scheme for the Poisson equation is convergent.

Theorem 9.3.5. Let $U$ be a solution to the Poisson equation and let $w$ be the grid function that satisfies the discrete analog

$$-\nabla_h^2 w_{ij} = f_{ij} \text{ for } (x_i, y_j) \in \Omega_h, \qquad w_{ij} = g_{ij} \text{ for } (x_i, y_j) \in \partial\Omega_h.$$

Then there exists a positive constant $K$ such that

$$\|U - w\|_\Omega \leq KMh^2,$$

where

$$M = \max\left\{\left\|\frac{\partial^4 U}{\partial x^4}\right\|_\infty, \left\|\frac{\partial^4 U}{\partial x^3 \partial y}\right\|_\infty, \ldots, \left\|\frac{\partial^4 U}{\partial y^4}\right\|_\infty\right\}.$$

The statement of the theorem assumes that $U \in C^4(\bar{\Omega})$. This assumption holds if $f$ and $g$ are smooth enough.

Proof. Following from the proof of Proposition 9.3.1, we have

$$(\nabla_h^2 - \nabla^2)U_{ij} = \frac{h^2}{12}\left[\frac{\partial^4 U}{\partial x^4}(\zeta_i, y_j) + \frac{\partial^4 U}{\partial y^4}(x_i, \eta_j)\right]$$

for some $\zeta_i \in (x_{i-1}, x_{i+1})$ and $\eta_j \in (y_{j-1}, y_{j+1})$. Therefore

$$-\nabla_h^2 U_{ij} = f_{ij} - \frac{h^2}{12}\left[\frac{\partial^4 U}{\partial x^4}(\zeta_i, y_j) + \frac{\partial^4 U}{\partial y^4}(x_i, \eta_j)\right].$$

If we subtract from this the identity $-\nabla_h^2 w_{ij} = f_{ij}$ and note that $U - w$ vanishes on $\partial\Omega_h$, we find that

$$\nabla_h^2(U_{ij} - w_{ij}) = \frac{h^2}{12}\left[\frac{\partial^4 U}{\partial x^4}(\zeta_i, y_j) + \frac{\partial^4 U}{\partial y^4}(x_i, \eta_j)\right].$$

It follows from Lemma 9.3.4 that

$$\|U - w\|_\Omega \leq \frac{1}{8}\|\nabla_h^2(U - w)\|_\Omega \leq KMh^2. \qquad \bullet$$

9.4 elliptic equations questions

1. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the numerical scheme
      $$w_{j-1,k} + w_{j+1,k} + w_{j,k-1} + w_{j,k+1} - 4w_{j,k} = h^2 f_{j,k}$$
      for the Elliptic equation
      $$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x,y)$$
      on the rectangular domain
      $$\Omega = \{(x, y)\,|\; a \leq x \leq b,\ c \leq y \leq d\}.$$

[10 marks]
   b) Consider the problem
      $$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$
      on the rectangular domain
      $$\Omega = \{(x, y)\,|\; 0 \leq x \leq 1,\ 0 \leq y \leq 1\},$$
      with the boundary conditions
      $$u(x, 0) = 4x^2 - 4x + 1, \quad u(x, 1) = 4x^2 - 4x + 1,$$
      $$u(0, y) = 4y^2 - 4y + 1, \quad u(1, y) = 4y^2 - 4y + 1.$$
      Taking $N = 4$ steps in the x-direction and $M = 4$ steps in the y-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations.
[23 marks]

2. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the numerical scheme
      $$w_{j-1,k} + w_{j+1,k} + w_{j,k-1} + w_{j,k+1} - 4w_{j,k} = h^2 f_{j,k}$$
      for the Elliptic equation
      $$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x,y)$$
      on the rectangular domain
      $$\Omega = \{(x, y)\,|\; a \leq x \leq b,\ c \leq y \leq d\}.$$

[10 marks]
   b) Consider the problem
      $$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = xy + x^2$$
      on the rectangular domain
      $$\Omega = \{(x, y)\,|\; 0 \leq x \leq 1,\ 0 \leq y \leq 1\},$$
      with the boundary conditions
      $$u(x, 0) = 0, \quad u(x, 1) = 0,$$
      $$u(0, y) = 0, \quad u(1, y) = 0.$$
      Taking $N = 4$ steps in the x-direction and $M = 4$ steps in the y-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations.
[23 marks]

3. a) Use the central difference formula for the second derivative
      $$f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)$$
      to derive the numerical scheme
      $$w_{j-1,k} + w_{j+1,k} + w_{j,k-1} + w_{j,k+1} - 4w_{j,k} = h^2 f_{j,k}$$
      for the Elliptic equation
      $$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x,y)$$
      on the rectangular domain
      $$\Omega = \{(x, y)\,|\; a \leq x \leq b,\ c \leq y \leq d\}.$$

[10 marks]
   b) Consider the problem
      $$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = y^2$$
      on the rectangular domain
      $$\Omega = \{(x, y)\,|\; 0 \leq x \leq 1,\ 0 \leq y \leq 1\},$$
      with the boundary conditions
      $$u(x, 0) = x, \quad u(x, 1) = x,$$
      $$u(0, y) = 0, \quad u(1, y) = 1.$$
      Taking $N = 4$ steps in the x-direction and $M = 4$ steps in the y-direction, set up and write in matrix form (but do not solve) the corresponding systems of finite difference equations.
[23 marks]
10
HYPERBOLIC EQUATIONS

First-order scalar equation:

$$\begin{aligned}
\frac{\partial U}{\partial t} &= -a\frac{\partial U}{\partial x} & x \in \mathbb{R},\ t > 0,\\
U(x, 0) &= U_0(x) & x \in \mathbb{R},
\end{aligned} \qquad (73)$$

where $a$ is a positive real number. Its solution is given by

$$U(x, t) = U_0(x - at), \quad t \geq 0,$$

and represents a traveling wave with velocity $a$. The curves $(x(t), t)$ in the plane $(x, t)$ are the characteristic curves. They are the straight lines $x(t) = x_0 + at$, $t > 0$. The solution of (73) remains constant along them.
For the more general problem

$$\begin{aligned}
\frac{\partial U}{\partial t} + a\frac{\partial U}{\partial x} + a_0 U &= f & x \in \mathbb{R},\ t > 0,\\
U(x, 0) &= U_0(x) & x \in \mathbb{R},
\end{aligned} \qquad (74)$$

where $a$, $a_0$ and $f$ are given functions of the variables $(x, t)$, the characteristic curves are still defined as before. In this case the solutions of (74) satisfy along the characteristics the following differential equation:

$$\frac{du}{dt} = f - a_0 u \quad \text{on } (x(t), t).$$

10.1 the wave equation

Consider the second-order hyperbolic equation

$$\frac{\partial^2 U}{\partial t^2} - \gamma\frac{\partial^2 U}{\partial x^2} = f \quad x \in (\alpha, \beta),\ t > 0, \qquad (75)$$

with initial data

$$U(x, 0) = u_0(x) \quad \text{and} \quad \frac{\partial U}{\partial t}(x, 0) = v_0(x), \quad x \in (\alpha, \beta),$$

and boundary data

$$U(\alpha, t) = 0 \quad \text{and} \quad U(\beta, t) = 0, \quad t > 0.$$

In this case, $U$ may represent the transverse displacement of an elastic vibrating string of length $\beta - \alpha$, fixed at the endpoints, and $\gamma$ is a coefficient depending on the specific mass of the string and its tension. The string is subject to a vertical force of density $f$.
The functions $u_0(x)$ and $v_0(x)$ denote respectively the initial displacement and the initial velocity of the string.
The change of variables

$$\omega_1 = \frac{\partial U}{\partial x}, \qquad \omega_2 = \frac{\partial U}{\partial t}$$

transforms (75) into the first-order system

$$\frac{\partial\hat{\omega}}{\partial t} + A\frac{\partial\hat{\omega}}{\partial x} = 0, \qquad \hat{\omega} = \begin{pmatrix} \omega_1\\ \omega_2 \end{pmatrix},$$

where (for $f = 0$) $A = \begin{pmatrix} 0 & -1\\ -\gamma & 0 \end{pmatrix}$. The initial conditions are $\omega_1(x, 0) = u_0'(x)$ and $\omega_2(x, 0) = v_0(x)$.

Aside

Notice that replacing $\frac{\partial^2 u}{\partial t^2}$ by $t^2$, $\frac{\partial^2 u}{\partial x^2}$ by $x^2$ and $f$ by 1, the wave equation becomes

$$t^2 - \gamma^2 x^2 = 1,$$

which represents a hyperbola in the $(x, t)$ plane. Proceeding analogously in the case of the heat equation we end up with

$$t - x^2 = 1,$$

which represents a parabola in the $(x, t)$ plane. Finally, for the Poisson equation we get

$$x^2 + y^2 = 1,$$

which represents an ellipse in the $(x, y)$ plane.
Due to this geometric interpretation, the corresponding differential operators are classified as hyperbolic, parabolic and elliptic.

10.2 finite difference method for hyperbolic equations

As always we discretise the domain by space-time finite differences. To this aim, the half-plane $\{(x, t) : -\infty < x < \infty,\ t > 0\}$ is discretised by choosing a spatial grid size $\Delta x$, a temporal step $\Delta t$ and the grid points $(x_j, t^n)$ as follows:

$$x_j = j\Delta x \quad j \in \mathbb{Z}, \qquad t^n = n\Delta t \quad n \in \mathbb{N},$$

and let

$$\lambda = \frac{\Delta t}{\Delta x}.$$

10.2.1 Discretisation of the scalar equation

Here are some explicit methods:

• Forward Euler/centered method:
  $$u_j^{n+1} = u_j^n - \frac{\lambda}{2}a(u_{j+1}^n - u_{j-1}^n),$$

• Lax-Friedrichs method:
  $$u_j^{n+1} = \frac{u_{j+1}^n + u_{j-1}^n}{2} - \frac{\lambda}{2}a(u_{j+1}^n - u_{j-1}^n),$$

• Lax-Wendroff method:
  $$u_j^{n+1} = u_j^n - \frac{\lambda}{2}a(u_{j+1}^n - u_{j-1}^n) + \frac{\lambda^2}{2}a^2(u_{j+1}^n - 2u_j^n + u_{j-1}^n),$$

• Upwind method:
  $$u_j^{n+1} = u_j^n - \frac{\lambda}{2}a(u_{j+1}^n - u_{j-1}^n) + \frac{\lambda}{2}|a|(u_{j+1}^n - 2u_j^n + u_{j-1}^n).$$

The last three methods can be obtained from the forward Euler/centered method by adding a term proportional to a numerical approximation of a second derivative, so that they can be written in the equivalent form

$$u_j^{n+1} = u_j^n - \frac{\lambda}{2}a(u_{j+1}^n - u_{j-1}^n) + \frac{1}{2}k\,\frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{(\Delta x)^2},$$

where $k$ is an artificial viscosity term. A sketch of these schemes in code is given below.

An example of an implicit method is the backward Euler/centered scheme

$$u_j^{n+1} + \frac{\lambda}{2}a(u_{j+1}^{n+1} - u_{j-1}^{n+1}) = u_j^n.$$

10.3 analysis of the finite difference methods

10.4 consistency

A numerical method is convergent if

$$\lim_{\Delta t, \Delta x \to 0}\ \max_{j,n} |U(x_j, t^n) - u_j^n| = 0.$$

The local truncation error at $(x_j, t^n)$ is defined as

$$\tau_j^n = L(U_j^n),$$

and the truncation error is

$$\tau(\Delta t, \Delta x) = \max_{j,n} |\tau_j^n|.$$

When $\tau(\Delta t, \Delta x)$ goes to zero as $\Delta t$ and $\Delta x$ tend to zero independently, the method is said to be consistent.

10.5 stability

10.6 courant–friedrichs–lewy condition

A method is said to be stable if, for any $T$, there exist a constant $C_T > 0$ and a $\delta_0 > 0$ such that

$$\|u^n\|_\Delta \leq C_T\|u^0\|_\Delta$$

for any $n$ such that $n\Delta t \leq T$ and for any $\Delta t, \Delta x$ such that $0 < \Delta t \leq \delta_0$, $0 < \Delta x \leq \delta_0$. We have denoted by $\|\cdot\|_\Delta$ a suitable discrete norm.
The forward Euler/centered scheme

$$u_j^{n+1} = u_j^n - \frac{\lambda}{2}a(u_{j+1}^n - u_{j-1}^n)$$

has truncation error

$$O(\Delta t, (\Delta x)^2).$$

For an explicit method to be stable we need

$$|a\lambda| = \left|a\frac{\Delta t}{\Delta x}\right| \leq 1;$$

this is known as the Courant–Friedrichs–Lewy (CFL) condition.
Using von Neumann stability analysis we can check whether a given method is stable under the Courant–Friedrichs–Lewy condition.

10.6.1 von Neumann stability for the Forward Euler

Set

$$u_j^n = e^{i\beta j\Delta x}\xi^n, \qquad \xi = e^{\alpha\Delta t}.$$

It is sufficient to show that $|\xi| \leq 1$. Substituting into the scheme,

$$\xi^{n+1}e^{i\beta j\Delta x} = \xi^n e^{i\beta j\Delta x} - \frac{\lambda}{2}a(\xi^n e^{i\beta(j+1)\Delta x} - \xi^n e^{i\beta(j-1)\Delta x}),$$

$$\xi = 1 - \frac{\lambda}{2}a(e^{i\beta\Delta x} - e^{-i\beta\Delta x}) = 1 - i\frac{\lambda}{2}a(2\sin(\beta\Delta x)) = 1 - i\lambda a\sin(\beta\Delta x),$$

$$|\xi| = \sqrt{1 + (\lambda a\sin(\beta\Delta x))^2}.$$

Hence $|\xi| > 1$ whenever $\sin(\beta\Delta x) \neq 0$, and the method is unstable even when the Courant–Friedrichs–Lewy condition is satisfied.

10.6.2 von Neumann stability for the Lax-Friedrichs

$$\xi^{n+1}e^{i\beta j\Delta x} = \frac{\xi^n e^{i\beta(j+1)\Delta x} + \xi^n e^{i\beta(j-1)\Delta x}}{2} - \frac{\lambda}{2}a(\xi^n e^{i\beta(j+1)\Delta x} - \xi^n e^{i\beta(j-1)\Delta x}),$$

$$\xi = \frac{e^{i\beta\Delta x} + e^{-i\beta\Delta x}}{2} - \frac{\lambda}{2}a(e^{i\beta\Delta x} - e^{-i\beta\Delta x}) = \frac{1 - \lambda a}{2}e^{i\beta\Delta x} + \frac{1 + \lambda a}{2}e^{-i\beta\Delta x},$$

$$\xi = \cos(\beta\Delta x) - i\lambda a\sin(\beta\Delta x),$$

$$|\xi|^2 = \cos^2(\beta\Delta x) + (a\lambda)^2\sin^2(\beta\Delta x).$$

Hence $|\xi| \leq 1$ for $|a\lambda| \leq 1$.

Example 50
These methods can be applied to the Burgers equation

$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = 0,$$

as it is a non-trivial non-linear hyperbolic equation. Taking the initial condition

$$u(x, 0) = u_0(x) = \begin{cases} 1, & x_0 \leq 0,\\ 1 - x_0, & 0 \leq x_0 \leq 1,\\ 0, & x_0 \geq 1, \end{cases}$$

the characteristic line issuing from the point $(x_0, 0)$ is given by

$$x(t) = x_0 + tu_0(x_0) = \begin{cases} x_0 + t, & x_0 \leq 0,\\ x_0 + t(1 - x_0), & 0 \leq x_0 \leq 1,\\ x_0, & x_0 \geq 1. \end{cases}$$
11
VARIATIONAL METHODS

Variational methods are based on the fact that the solutions of some Boundary Value Problems,

$$-(p(x)u'(x))' + q(x)u(x) = g(x, u(x)), \qquad u(a) = \alpha,\ u(b) = \beta, \qquad (76)$$

under the assumptions that

$$p \in C^1[a,b],\ p(x) \geq p_0 > 0, \qquad q \in C[a,b],\ q(x) \geq 0, \qquad g \in C^1([a,b]\times\mathbb{R}),\ g_u(x,u) \leq \lambda_0, \qquad (77)$$

can be rewritten as follows: if $u(x)$ is the solution of (76), it can be written in the form $y(x) = u(x) - l(x)$, with

$$l(x) = \alpha\frac{b-x}{b-a} + \beta\frac{a-x}{a-b}, \qquad l(a) = \alpha,\ l(b) = \beta,$$

where $y$ is the solution of a boundary value problem

$$-(p(x)y'(x))' + q(x)y(x) = f(x), \qquad y(a) = 0,\ y(b) = 0, \qquad (78)$$

with zero boundary values. Without loss of generality we can therefore just consider problems of the form (78), known as the:

Classical Problem (D):

$$-(p(x)u'(x))' + q(x)u(x) = f(x), \qquad u(a) = 0,\ u(b) = 0.$$

The assumptions on the Classical Problem can be relaxed so that $f \in L^2([a,b])$, with

$$u(x) \in D_L = \{u \in C^2[a,b]\ |\ u(a) = 0,\ u(b) = 0\}.$$
u( x ) ∈ DL = {u ∈ C2 [ a, b] | u( a) = 0, u(b) = 0}.

Multiplying the Classical Problem (D) by a function $v(x)$ and integrating gives the problem

$$\int_a^b [-(p(x)u'(x))' + q(x)u(x)]v(x)\,dx = \int_a^b f(x)v(x)\,dx,$$

where $v \in D_L$. Integrating by parts gives the simplified problem, the:

Weak Form Problem (W):

$$\int_a^b [p(x)u'(x)v'(x) + q(x)u(x)v(x)]\,dx = \int_a^b f(x)v(x)\,dx.$$

It is sufficient to solve the Weak Form (W) of the Classical Problem (D).
From the Weak Form we have two definitions:

1. Definition (Bilinear Form):

   $$a(u, v) = \int_a^b [p(x)u'(x)v'(x) + q(x)u(x)v(x)]\,dx;$$

2. $$(f, v) = \int_a^b f(x)v(x)\,dx,$$

where $f \in L^2([a,b])$.

From these definitions the Weak Form of the ODE problem (D) is then given by

$$a(u, v) = (f, v),$$

where $u \in D_L$ is the solution to the Classical Problem.
The Weak Form of the problem can be equivalently written in a Variational or Minimization form:

Variational/Minimization form (M):

$$F(v) = \frac{1}{2}a(v, v) - (f, v),$$

where $f \in L^2([a,b])$. This gives the problem: find $u \in D_L$ such that

$$F(u) \leq F(v) \quad \text{for all } v \in D_L,$$

i.e. $u$ is the function that minimizes $F$ over $D_L$.

Theorem 11.0.1. We have the following relationships between the solutions to the three problems: Classical Problem (D), Weak Form (W) and Minimization Form (M).

1. If the function $u$ solves Classical Problem (D), then $u$ solves Weak Form (W).

2. The function $u$ solves Weak Form (W) if and only if $u$ solves Minimization Form (M).

3. If $f \in C([a,b])$ and $u \in C^2([a,b])$ solves Weak Form (W), then $u$ solves Classical Problem (D).

Proof. 1. Let $u$ be the solution to Classical Problem (D); that $u$ then solves Weak Form (W) is obvious, since Weak Form (W) derives directly from Classical Problem (D).

2. a) Show Weak Form (W) ⇒ Minimization Form (M).
   Let $u$ solve Weak Form (W), and define $v(x) = u(x) + z(x)$, with $u, z \in D_L$. By linearity

   $$F(v) = \frac{1}{2}a(u+z, u+z) - (f, u+z) = F(u) + \frac{1}{2}a(z,z) + a(u,z) - (f,z) = F(u) + \frac{1}{2}a(z,z),$$

   since $a(u,z) = (f,z)$. This implies that $F(v) \geq F(u)$, and therefore $u$ solves Minimization Form (M).

   b) Show Weak Form (W) ⇐ Minimization Form (M).
   Let $u$ solve Minimization Form (M) and choose $\varepsilon \in \mathbb{R}$, $v \in D_L$. Then $F(u) \leq F(u + \varepsilon v)$, since $u + \varepsilon v \in D_L$. Now $F(u + \varepsilon v)$ is a quadratic form in $\varepsilon$ and its minimum occurs at $\varepsilon = 0$, i.e.

   $$0 = \left.\frac{dF(u + \varepsilon v)}{d\varepsilon}\right|_{\varepsilon = 0} = a(u, v) - (f, v);$$

   it follows that $u$ solves Weak Form (W).

3. Is immediate. •

11.1 ritz-galerkin method

This is a classical approach which we exploit to find a "discrete" approximation to the problem Weak Form (W) / Minimization Form (M). We look for a solution $u_S$ in a finite dimensional subspace $S$ of $D_L$, such that $u_S$ is an approximation to the solution of the continuous problem,

$$u_S = u_1\phi_1 + u_2\phi_2 + \ldots + u_n\phi_n.$$

Discrete Weak Form ($W_S$):
Find $u_S \in S = \mathrm{span}\{\phi_1, \phi_2, \ldots, \phi_n\}$, $n < \infty$, such that

$$a(u_S, v) = (f, v) \quad \text{for all } v \in S, \qquad u \approx u_S = u_1\phi_1 + u_2\phi_2 + \ldots + u_n\phi_n.$$

Similarly, the
Discrete Variational/Minimization form ($M_S$):
Find $u_S \in S = \mathrm{span}\{\phi_1, \phi_2, \ldots, \phi_n\}$, $n < \infty$, that satisfies

$$F(u_S) \leq F(v) \quad \text{for all } v \in S,$$

where

$$F(v) = \frac{1}{2}a(v, v) - (f, v), \quad v \in D_L.$$

Theorem 11.1.1. Given $f \in L^2([a,b])$, then ($W_S$) has a unique solution.

Proof. We write $u_S = \sum_{j=1}^n u_j\phi_j(x)$ and look for constants $u_j$, $j = 1, \ldots, n$, to solve the discrete problem. We define

$$A = \{A_{ij}\} = \{a(\phi_i, \phi_j)\} = \left\{\int_a^b [p(x)\phi_i'\phi_j' + q(x)\phi_i\phi_j]\,dx\right\}$$

and

$$\bar{F} = \{F_j\} = \{(f, \phi_j)\} = \left\{\int_a^b f\phi_j\,dx\right\}.$$

Then we require

$$a(u_S, v) = a\left(\sum_{j=1}^n u_j\phi_j(x), v\right) = (f, v) \quad \text{for all } v \in S.$$

Hence, for each basis function $\phi_i \in S$ we must have

$$a(u_S, \phi_i) = a\left(\sum_{j=1}^n u_j\phi_j(x), \phi_i\right) = (f, \phi_i) \quad \text{for all } i = 1, \ldots, n.$$

This gives the matrix equation

$$\begin{pmatrix}
a(\phi_1, \phi_1) & \cdots & a(\phi_n, \phi_1)\\
\vdots & \ddots & \vdots\\
a(\phi_1, \phi_n) & \cdots & a(\phi_n, \phi_n)
\end{pmatrix}
\begin{pmatrix} u_1\\ \vdots\\ u_n \end{pmatrix}
=
\begin{pmatrix} (f, \phi_1)\\ \vdots\\ (f, \phi_n) \end{pmatrix},$$

which can be written as

$$A\bar{u} = \bar{F}.$$

Hence $u_S$ is found by the solution of a matrix equation. We now show existence/uniqueness of the solution to the algebraic problem. We show by contradiction that $A$ is full rank, i.e. that the only solution to $A\bar{u} = 0$ is $\bar{u} = 0$.
Suppose that there exists a vector $\bar{v} = \{v_j\} \neq 0$ such that $A\bar{v} = 0$, and construct $v(x) = \sum_{j=1}^n v_j\phi_j \in S$. Then

$$A\bar{v} = 0 \;\Leftrightarrow\; \sum_j a(\phi_j, \phi_k)v_j = a(v, \phi_k) = 0 \text{ for all } k \;\Rightarrow\; \sum_k a(v, \phi_k)v_k = a\left(v, \sum_k v_k\phi_k\right) = a(v, v) = 0 \;\Rightarrow\; v = 0,$$

a contradiction. •

Classically, in the Ritz-Galerkin method, the basis functions are chosen to be continuous functions over the entire interval [a, b]; for example, {sin(mx), cos(mx)} gives trigonometric polynomial approximations to the solutions of the ODEs.
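As a concrete illustration, the following minimal Python sketch (not from the text; the right-hand side f(x) = 1 and the choice p(x) = 1, q(x) = 0 are assumed purely for the example) applies the Ritz-Galerkin method with the sine basis φ_j(x) = sin(jπx) to −u″ = f on [0, 1] with u(0) = u(1) = 0, computing a(φ_i, φ_j) and (f, φ_j) by numerical quadrature.

import numpy as np
from scipy.integrate import quad

n = 5                                     # number of basis functions
f = lambda x: 1.0                         # assumed right-hand side
phi  = lambda j, x: np.sin(j * np.pi * x)              # basis function
dphi = lambda j, x: j * np.pi * np.cos(j * np.pi * x)  # its derivative

# Assemble A_ij = a(phi_i, phi_j) = int_0^1 phi_i' phi_j' dx  (p = 1, q = 0)
A = np.zeros((n, n))
F = np.zeros(n)
for i in range(1, n + 1):
    for j in range(1, n + 1):
        A[i - 1, j - 1] = quad(lambda x: dphi(i, x) * dphi(j, x), 0, 1)[0]
    F[i - 1] = quad(lambda x: f(x) * phi(i, x), 0, 1)[0]

u = np.linalg.solve(A, F)                 # coefficients u_j

# Compare u_S with the exact solution u(x) = x(1 - x)/2 of -u'' = 1
xs = np.linspace(0, 1, 11)
uS = sum(u[j - 1] * phi(j, xs) for j in range(1, n + 1))
print(np.max(np.abs(uS - xs * (1 - xs) / 2)))   # small truncation error

For this particular p and q the sine functions are a(·,·)-orthogonal, so A turns out diagonal; for general coefficients A is full, which is one motivation for the locally supported finite element basis introduced next.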

11.2 finite element

We choose the basis functions {φ_i}_{i=1}^n to be piecewise polynomials with compact support. In the simplest case φ_i is linear. We divide the region into n intervals or "elements",

a = x_0 < x_1 < ... < x_n = b,

and let E_i denote the element [x_{i−1}, x_i], with h_i = x_i − x_{i−1}.

Definition. Let S_h ⊂ D_L be the space of functions v(x) that are continuous on [a, b], linear on each E_i, and satisfy v(a) = v(b) = 0, i.e.

S_h = {v(x) : v piecewise linear and continuous on [a, b], v(a) = v(b) = 0}.

The basis functions φi ( x ) for Sh are defined such that φi ( x ) is linear


on Ei , Ei+1 and φi ( x j ) = δij .

Figure 11.2.1: Hat functions φi form a basis for the space Sh

We now show that the hat functions φi form a basis for the space
Sh (Figure 11.2.1).
Lemma 11.2.1. The set of functions {φ_i}_{i=1}^n is a basis for the space S_h.
Proof. We show first that the set {φ_i}_{i=1}^n is linearly independent. If ∑_{i=1}^n c_i φ_i(x) = 0 for all x ∈ [a, b], then taking x = x_j implies c_j = 0 for each value of j, and hence the functions are independent.
To show S_h = span{φ_i}, we only need to show that every v(x) ∈ S_h coincides with its nodal interpolant,

v(x) = v_I(x) = ∑_j v_j φ_j(x), where v_j = v(x_j).

This is proved by construction. Since (v − v_I) is linear on each [x_{i−1}, x_i] and v − v_I = 0 at all nodes x_j, it follows that v = v_I on each E_i.
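The hat functions are straightforward to realise in code. The short Python sketch below (a uniform grid is assumed purely for illustration) implements φ_i, checks the nodal property φ_i(x_j) = δ_ij, and builds the interpolant v_I of Lemma 11.2.1.

import numpy as np

a, b, n = 0.0, 1.0, 5
x = np.linspace(a, b, n + 1)                  # nodes x_0, ..., x_n

def phi(i, t):
    """Hat function centred at x_i: linear on E_i and E_{i+1}, zero elsewhere."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    left  = (t >= x[i - 1]) & (t <= x[i])      # rising part on E_i
    right = (t >  x[i]) & (t <= x[i + 1])      # falling part on E_{i+1}
    out[left]  = (t[left] - x[i - 1]) / (x[i] - x[i - 1])
    out[right] = (x[i + 1] - t[right]) / (x[i + 1] - x[i])
    return out

# Nodal property phi_i(x_j) = delta_ij at the interior nodes i = 1, ..., n-1
for i in range(1, n):
    print(phi(i, x))                           # a 1 appears in position i only

# Piecewise linear interpolant v_I of a function v with v(a) = v(b) = 0
v  = lambda t: np.sin(np.pi * t)
t  = np.linspace(a, b, 101)
vI = sum(v(x[j]) * phi(j, t) for j in range(1, n))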

We now consider the matrix equation Aū = F̄ in the case where the basis functions are chosen to be the "hat functions". In this case the elements of A can be found explicitly. We have

φ_i = 0, φ_i′ = 0, for x ∉ [x_{i−1}, x_{i+1}) = E_i ∪ E_{i+1},

where
φ_i = (x − x_{i−1})/(x_i − x_{i−1}) = (1/h_i)(x − x_{i−1}), φ_i′ = 1/h_i, on E_i,

and
φ_i = (x_{i+1} − x)/(x_{i+1} − x_i) = (1/h_{i+1})(x_{i+1} − x), φ_i′ = −1/h_{i+1}, on E_{i+1}.

Therefore we have the elements of the matrix A:

A_{i,i} = ∫_{x_{i−1}}^{x_i} (1/h_i²) p(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1}²) p(x) dx
        + ∫_{x_{i−1}}^{x_i} (1/h_i²)(x − x_{i−1})² q(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1}²)(x_{i+1} − x)² q(x) dx,

A_{i,i+1} = ∫_{x_i}^{x_{i+1}} (−1/h_{i+1}²) p(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1}²)(x_{i+1} − x)(x − x_i) q(x) dx,

A_{i,i−1} = ∫_{x_{i−1}}^{x_i} (−1/h_i²) p(x) dx + ∫_{x_{i−1}}^{x_i} (1/h_i²)(x_i − x)(x − x_{i−1}) q(x) dx,

and

F_i = ∫_{x_{i−1}}^{x_i} (1/h_i)(x − x_{i−1}) f(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1})(x_{i+1} − x) f(x) dx.
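These entries make the assembly of the tridiagonal system mechanical. The Python sketch below assembles A and F̄ with scipy quadrature and solves for the nodal values; the coefficients p(x) = 1 + x, q(x) = x and right-hand side f(x) = x are assumptions made purely for illustration.

import numpy as np
from scipy.integrate import quad

p = lambda s: 1.0 + s          # assumed coefficient functions, for illustration
q = lambda s: s
f = lambda s: s

a, b, n = 0.0, 1.0, 20
x = np.linspace(a, b, n + 1)
h = np.diff(x)                 # h_i = x_i - x_{i-1}, stored as h[i-1]

N = n - 1                      # unknowns at the interior nodes x_1, ..., x_{n-1}
A = np.zeros((N, N))
F = np.zeros(N)

for i in range(1, n):
    hi, hi1 = h[i - 1], h[i]
    # Diagonal entry A_{i,i}
    A[i - 1, i - 1] = (quad(lambda s: p(s) / hi**2, x[i - 1], x[i])[0]
        + quad(lambda s: p(s) / hi1**2, x[i], x[i + 1])[0]
        + quad(lambda s: q(s) * (s - x[i - 1])**2 / hi**2, x[i - 1], x[i])[0]
        + quad(lambda s: q(s) * (x[i + 1] - s)**2 / hi1**2, x[i], x[i + 1])[0])
    # Off-diagonal entries; A is symmetric, so A_{i+1,i} = A_{i,i+1}
    if i < n - 1:
        A[i - 1, i] = (quad(lambda s: -p(s) / hi1**2, x[i], x[i + 1])[0]
            + quad(lambda s: q(s) * (x[i + 1] - s) * (s - x[i]) / hi1**2,
                   x[i], x[i + 1])[0])
        A[i, i - 1] = A[i - 1, i]
    # Load vector F_i
    F[i - 1] = (quad(lambda s: f(s) * (s - x[i - 1]) / hi, x[i - 1], x[i])[0]
        + quad(lambda s: f(s) * (x[i + 1] - s) / hi1, x[i], x[i + 1])[0])

u = np.linalg.solve(A, F)      # nodal values u_1, ..., u_{n-1} of u_S

Here np.linalg.solve is adequate for a small dense system; in practice the tridiagonal structure would be exploited with a banded or sparse solver.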

11.2.1 Error bounds of Finite Element methods

Lemma 11.2.2. Assume u_S solves (W_S). Then

a(u − u_S, w) = 0, for all w ∈ S.

Proof. Given that


a ( u S , w ) = ( f , w ),
and
a(u, w) = ( f , w),
for all w ∈ S. Since a is bilinear, taking the difference gives

a(u − uS , w) = 0.

The error bounds we are interested in will be in terms of the energy norm,

||v||_E = [a(v, v)]^{1/2},

for all v ∈ D_L. The energy norm satisfies the properties:

||αv||_E = |α| ||v||_E, ||v + z||_E ≤ ||v||_E + ||z||_E.

Theorem 11.2.3. The approximation u_S is the best fit to u in the energy norm, in the sense that

||u − u_S||_E = min_{v∈S} ||u − v||_E.

Proof. By the Cauchy-Schwarz inequality we have |a(u, v)| ≤ ||u||_E ||v||_E. For any v ∈ S let w = u_S − v ∈ S. Using the previous lemma, a(u − u_S, w) = 0, we obtain

||u − u_S||²_E = a(u − u_S, u − u_S)
             = a(u − u_S, u − u_S) + a(u − u_S, w)
             = a(u − u_S, u − u_S + w) = a(u − u_S, u − v)
             ≤ ||u − u_S||_E ||u − v||_E.

If ||u − u_S||_E = 0, the theorem holds trivially. Otherwise, dividing through by ||u − u_S||_E gives ||u − u_S||_E ≤ ||u − v||_E for every v ∈ S; since u_S ∈ S we also have min_{v∈S} ||u − v||_E ≤ ||u − u_S||_E, and the result follows.

Theorem 11.2.4. Error bounds

||u − u_S||_E ≤ C h ||u″||_∞,

where C is a constant.
Proof. First, from the previous theorem we have that

||u − u_S||_E = min_{v∈S} ||u − v||_E ≤ ||u − u_I||_E.

We look for a bound on ||u − u_I||_E, where u_I is the nodal interpolant

u_I(x) = ∑_j ū_j φ_j, ū_j = u(x_j).

We assume that
u_S(x) = ∑_j u_j φ_j,
where u = {u_j} solves Au = F. We define e = u − u_I. Since u_I ∈ S, u_I is piecewise linear, so u_I″ = 0 on each element and therefore e″ = u″.
Looking at the subinterval [x_i, x_{i+1}]: since e(x_i) = 0 we can write e(x) = ∫_{x_i}^x e′(ξ) dξ, and the Schwarz inequality yields the estimate

(e(x))² ≤ ∫_{x_i}^x 1² dξ ∫_{x_i}^x (e′(ξ))² dξ
       ≤ (x − x_i) ∫_{x_i}^x (e′(ξ))² dξ
       ≤ h_i ∫_{x_i}^{x_{i+1}} (e′(ξ))² dξ

and thus

||e||²_∞ ≤ h_i ∫_{x_i}^{x_{i+1}} (e′(ξ))² dξ ≤ h_i² ||e′||²_∞.

Similarly (since e(x_i) = e(x_{i+1}) = 0, Rolle's theorem gives a point of the element at which e′ vanishes),

(e′(x))² ≤ ∫_{x_i}^x 1² dξ ∫_{x_i}^x (e″(ξ))² dξ
        ≤ (x − x_i) ∫_{x_i}^x (e″(ξ))² dξ
        ≤ h_i ∫_{x_i}^{x_{i+1}} (e″(ξ))² dξ

and thus

||e′||²_∞ ≤ h_i ∫_{x_i}^{x_{i+1}} (e″(ξ))² dξ ≤ h_i² ||e″||²_∞.

Finally we also have

a(e, e) = ∫_{x_i}^{x_{i+1}} ( p(x)[e′]² + q(x)[e(x)]² ) dx
       ≤ ||p||_∞ ∫_{x_i}^{x_{i+1}} [e′]² dx + ||q||_∞ ∫_{x_i}^{x_{i+1}} [e(x)]² dx
       ≤ ||p||_∞ h_i² ||e″||²_∞ + ||q||_∞ h_i² ||e′||²_∞
       ≤ ||p||_∞ h_i² ||e″||²_∞ + ||q||_∞ h_i⁴ ||e″||²_∞
       ≤ C h_i² ||u″||²_∞,

and hence, taking square roots,

||u − u_S||_E = min_{v∈S} ||u − v||_E ≤ ||u − u_I||_E ≤ C h ||u″||_∞,

where h = max{h_i}.
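The O(h) rate in the energy norm can be checked numerically. The Python sketch below (assuming the model problem −u″ = 1, u(0) = u(1) = 0, with exact solution u(x) = x(1 − x)/2) assembles the linear-element system on uniform grids and approximates ||u − u_S||_E = (∫(u′ − u_S′)² dx)^{1/2}; halving h should roughly halve the error.

import numpy as np

def energy_error(n):
    """Energy-norm error for -u'' = 1, u(0) = u(1) = 0 with linear elements."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    # Stiffness matrix for p = 1, q = 0: (1/h) * tridiag(-1, 2, -1)
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    F = h * np.ones(n - 1)                 # (f, phi_i) = h for f = 1
    u = np.concatenate(([0.0], np.linalg.solve(A, F), [0.0]))
    # ||u' - u_S'||_{L^2}: u_S' is the constant slope on each element
    err2 = 0.0
    for i in range(n):
        slope = (u[i + 1] - u[i]) / h
        s = np.linspace(x[i], x[i + 1], 21)
        vals = (0.5 - s - slope) ** 2      # exact u'(x) = 1/2 - x
        err2 += np.sum(0.5 * (vals[:-1] + vals[1:])) * (s[1] - s[0])
    return np.sqrt(err2)

for n in (8, 16, 32, 64):
    print(n, energy_error(n))              # the error halves as h halves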
PROBLEM SHEET

1. a) State the 3 classes and conditions of 2nd order Partial Dif-


ferential Equations defined by the characteristic curves.
b) Given the non-dimensional form of the heat equation

∂U/∂t = ∂²U/∂x².
supply sample boundary conditions to specify this prob-
lem.
Write a fully implicit scheme to solve this partial differen-
tial equation.
c) Derive the local truncation error for the fully implicit method,
for the heat equation.
d) Show that the method is unconditionally stable using von
Neumann’s method.

2. a) State the 3 classes and conditions of 2nd order Partial Dif-


ferential Equations defined by the characteristic curves.
b) Given the non-dimensional form of the heat equation

∂U/∂t = ∂²U/∂x²,
supply sample boundary conditions to specify this prob-
lem.
Write an explicit scheme to solve this partial differential
equation.
c) Derive the local truncation error for the explicit method,
for the heat equation.

d) Show that the method is consistent, convergent and stable for k/h² < 1/2, where k is the step-size in the t direction and h is the step-size in the x direction.

3. a) State the 3 classes and conditions of 2nd order Partial Dif-


ferential Equations defined by the characteristic curves.
b) Given the non-dimensional form of the heat equation

∂U/∂t = ∂²U/∂x²,


supply sample boundary conditions to specify this prob-


lem.
Write the Crank-Nicholson method to solve this partial differential equation.
c) Derive the local truncation error for the Crank-Nicholson
method, for the heat equation.

d) Show that the method is unconditionally stable using von


Neumann’s method.

4. a) Approximate the Poisson equation

−∇²U(x, y) = f(x, y), (x, y) ∈ Ω = (0, 1) × (0, 1),

with boundary conditions

U(x, y) = g(x, y), (x, y) ∈ ∂Ω (the boundary),

using the five point method. Sketch how the finite difference scheme may be rewritten in the form Ax = b, where A is a sparse N² × N² matrix, b is an N²-component vector and x is an N²-component vector of unknowns. (Assume your 2D discretised grid contains N components in the x and y directions.)
b) Prove (DISCRETE MAXIMUM PRINCIPLE): If ∇²_h V_ij ≥ 0 for all points (x_i, y_j) ∈ Ω_h, then

max_{(x_i,y_j)∈Ω_h} V_ij ≤ max_{(x_i,y_j)∈∂Ω_h} V_ij.

If ∇²_h V_ij ≤ 0 for all points (x_i, y_j) ∈ Ω_h, then

min_{(x_i,y_j)∈Ω_h} V_ij ≥ min_{(x_i,y_j)∈∂Ω_h} V_ij.

c) Hence prove:
Let U be a solution to the Poisson equation and let w be
the grid function that satisfies the discrete analog

−∇2h wij = f ij for ( xi , y j ) ∈ Ωh ,

wij = gij for ( xi , y j ) ∈ ∂Ωh .


Then there exists a positive constant K such that

||U − w||_Ω ≤ K M h²,



where

M = max{ ||∂⁴U/∂x⁴||_∞, ||∂⁴U/∂x³∂y||_∞, ..., ||∂⁴U/∂y⁴||_∞ }.

You may assume the following lemma:

Lemma. If the grid function V : Ω_h ∪ ∂Ω_h → R satisfies the boundary condition V_ij = 0 for (x_i, y_j) ∈ ∂Ω_h, then

||V||_Ω ≤ (1/8) ||∇²_h V||_Ω.

5. a) For a finite difference scheme approximating a partial differential equation of the form

∂U/∂t = −a ∂U/∂x + f(x, t), x ∈ R, t > 0,
U(x, 0) = U_0(x), x ∈ R,

define what is meant by:
i. convergence,
ii. consistency,
iii. stability.
b) Describe the forward Euler/centered difference method for
the transport equation and derive the local truncation er-
ror.
c) Define the Courant Friedrichs Lewy condition and state
how it is related to stability.
d) Show that the method is stable under the Courant Friedrichs Lewy condition using von Neumann analysis; you may assume f(x, t) = 0.

6. Consider the second order differential equation

d²u/dx² + u = x

with boundary conditions

u(0) = 0, u(1) = 0.

a) Show that the solution u( x ) of this equation satisfies the


weak form
∫₀¹ ( −(du/dx)(dv/dx) + u v − x v ) dx = 0

for all v(x) which are sufficiently smooth and which satisfy

v(0) = 0, v(1) = 0.

b) By splitting the interval x ∈ [0, 1] into N equal elements of size h, where Nh = 1, one can define nodes x_i and finite element shape functions as follows:

x_i = ih,

φ_i(x) = 0 for 0 ≤ x ≤ x_{i−1},
φ_i(x) = (x − x_{i−1})/h for x_{i−1} ≤ x ≤ x_i,
φ_i(x) = (x_{i+1} − x)/h for x_i ≤ x ≤ x_{i+1},
φ_i(x) = 0 for x_{i+1} ≤ x ≤ 1.
A finite element approximation to the differential equation
is obtained by approximating u( x ) and v( x ) with linear
combinations of these finite element shape functions, φi ,
where
u_n = ∑_{i=1}^{N−1} α_i φ_i(x),
v_n = ∑_{j=1}^{N−1} β_j φ_j(x).

Show that the equation which results from this approximation has the form

Kα = F,

where K is an (N − 1) × (N − 1) sparse matrix, F is an (N − 1)-component vector and α is an (N − 1)-component vector of unknown coefficients α_i.
c) What structure does the matrix K have? Evaluate the first
component of the main diagonal of K.
