Numerical Analysis For Differential Equations
Part I

INITIAL VALUE PROBLEMS

1 NUMERICAL SOLUTIONS TO INITIAL VALUE PROBLEMS
\[ y(x_0) = y_0. \tag{2} \]
Example 1 (Simple Example). The differential equation describes the rate of change of an oscillating input. The general solution of the equation
\[ y' = \sin(x) \tag{3} \]
is
\[ y = -\cos(x) + c. \]
With the initial condition
\[ y(0) = 2, \]
the constant is c = 3, so the solution is
\[ y = 3 - \cos(x). \]
Example 2 (General Example). Differential equations of the form
\[ y'(x) = \lambda y(x) + b(x), \quad x \ge x_0, \tag{5} \]
can be solved with the integrating factor $e^{-\lambda x}$, which gives
\[ \frac{d}{dx}\left(e^{-\lambda x} y(x)\right) = e^{-\lambda x} b(x). \]
Integrating both sides from x_0 to x we obtain
\[ e^{-\lambda x} y(x) = c + \int_{x_0}^{x} e^{-\lambda t} b(t)\,dt, \]
where
\[ c = e^{-\lambda x_0} y(x_0). \]
The left-hand side of an initial value problem, df/dx, can be approximated by Taylor's theorem expanded about a point x_0, giving
\[ f(x_1) = f(x_0) + (x_1 - x_0) f'(x_0) + \tau, \tag{6} \]
where
\[ \tau = \frac{(x_1 - x_0)^2}{2!} f''(\xi), \quad \xi \in [x_0, x_1]. \tag{7} \]
Rearranging, and letting h = x_1 - x_0, the equation becomes
\[ f'(x_0) = \frac{f(x_1) - f(x_0)}{h} - \frac{h}{2} f''(\xi). \]
The forward Euler method can also be derived using a variation on the Lagrange interpolation formula called the divided difference. Any function f(x) can be approximated by a polynomial P_n(x) of degree n plus an error term:
\begin{align*}
f(x) &= P_n(x) + \text{error} \\
&= f(x_0) + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) \\
&\quad + \cdots + f[x_0, \dots, x_n] \prod_{i=0}^{n-1}(x - x_i) + \text{error},
\end{align*}
where
\[ f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}, \quad f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0}, \quad f[x_0, \dots, x_n] = \frac{f[x_1, \dots, x_n] - f[x_0, \dots, x_{n-1}]}{x_n - x_0}. \]
Differentiating P_n(x) gives
\[ P_n'(x) = f[x_0, x_1] + f[x_0, x_1, x_2]\{(x - x_0) + (x - x_1)\} + \cdots + f[x_0, \dots, x_n] \sum_{i=0}^{n-1} \frac{(x - x_0)\cdots(x - x_{n-1})}{(x - x_i)}, \]
with
\[ \text{error} = (x - x_0)\cdots(x - x_n)\, \frac{f^{(n+1)}(\xi)}{(n+1)!}. \]
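The divided differences are easy to build column by column; the following is a minimal sketch (the function name `divided_differences` is our own):

```python
# Build the divided-difference coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]
# by the standard recursive column construction.
def divided_differences(xs, ys):
    n = len(xs)
    table = [list(ys)]                      # zeroth column: f(x_i)
    for k in range(1, n):
        prev = table[-1]
        col = [(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
               for i in range(n - k)]
        table.append(col)
    return [col[0] for col in table]        # top diagonal: coefficients of P_n

# For f(x) = x^2 on 0, 1, 2: f[x0,x1] = 1 and f[x0,x1,x2] = f''/2! = 1.
coeffs = divided_differences([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
```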
1.1 Numerical approximation of Differentiation
Truncating after the first divided difference gives
\[ f'(x) \approx f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}, \]
\[ f'(x) = \frac{f(x_1) - f(x_0)}{x_1 - x_0} + O(h) \quad \text{(forward Euler)}, \]
\[ f'(x) = \frac{f(x_1) - f(x_{-1})}{x_1 - x_{-1}} + O(h^2) \quad \text{(central)}. \]
Using the same method we can get our computational estimates of the second derivative:
\[ f''(x_0) = \frac{f_2 - 2 f_1 + f_0}{h^2} + O(h) \quad \text{(forward)}, \]
\[ f''(x_0) = \frac{f_1 - 2 f_0 + f_{-1}}{h^2} + O(h^2) \quad \text{(central)}. \]
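These orders are easy to check numerically; a quick sketch using math.sin, whose derivatives are known:

```python
import math

h = 1e-3
f = math.sin
x = 1.0

forward = (f(x + h) - f(x)) / h                     # O(h)
central = (f(x + h) - f(x - h)) / (2 * h)           # O(h^2)
second  = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # O(h^2), central

err_forward = abs(forward - math.cos(x))            # f'(x) = cos(x)
err_central = abs(central - math.cos(x))
err_second  = abs(second + math.sin(x))             # f''(x) = -sin(x)
```

The central differences are several orders of magnitude more accurate at the same h, as the error terms predict.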
Example 3. To numerically solve the first order Ordinary Differential Equation (4)
\[ y' = f(x, y), \quad a \le x \le b, \]
the derivative y' is approximated by
\[ \frac{w_{i+1} - w_i}{x_{i+1} - x_i} = \frac{w_{i+1} - w_i}{h} = f(x_i, w_i). \]
Rearranging the difference equation gives the equation
\[ w_{i+1} = w_i + h f(x_i, w_i), \]
which can be used to approximate the solution at w_{i+1} given information about y at point x_i.
Example 4. Applying the Euler formula to the first order equation with an oscillating input (3),
\[ y' = \sin(x), \quad 0 \le x \le 10, \]
the equation can be approximated using the forward Euler method as
\[ \frac{w_{i+1} - w_i}{h} = \sin(x_i). \]
Rearranging gives the discrete difference equation, with the unknowns on the left and the known values on the right:
\[ w_{i+1} = w_i + h \sin(x_i). \]
# --- middle plot
ax = fig.add_subplot(1, 3, 2)
plt.plot(x, AnalyticSolution, color='blue')
plt.title('Analytic Solution')

# ax.legend(loc='best')
ax = fig.add_subplot(1, 3, 3)
plt.plot(x, AnalyticSolution - w, color='blue')
plt.title('Error')

# --- title, explanatory text and save
fig.suptitle('Sine Solution', fontsize=20)
plt.tight_layout()
plt.subplots_adjust(top=0.85)

Listing 1.1: Python Numerical and Analytical Solution of Eqn 3 (plotting section)
1.1.1.2 Simple example problem: population growth y' = εy
Example 5. Simple population growth can be described as a first order differential equation of the form
\[ y' = \varepsilon y, \tag{8} \]
with general solution
\[ y = C e^{\varepsilon x}. \]
With the initial condition y(0) = 1 and ε = 0.5, the solution is
\[ y = e^{0.5x}. \]
Example 6. Applying the Euler formula to the first order equation (8),
\[ y' = 0.5 y, \]
the derivative is approximated by
\[ \frac{w_{i+1} - w_i}{h} = 0.5 w_i. \]
Rearranging the equation gives the difference equation
\[ w_{i+1} = w_i + h(0.5 w_i). \]
The Python code is below and the output is plotted in Figure 1.1.2.
# Numerical solution of a differential equation
import numpy as np
import math
import matplotlib.pyplot as plt

h = 0.01
tau = 0.5
a = 0
b = 10

N = int((b - a) / h)
w = np.zeros(N)
x = np.zeros(N)
AnalyticSolution = np.zeros(N)

x[0] = 0
w[0] = 1
AnalyticSolution[0] = 1

for i in range(1, N):
    w[i] = w[i - 1] + h * tau * w[i - 1]
    x[i] = x[i - 1] + h
    AnalyticSolution[i] = math.exp(tau * x[i])

fig = plt.figure(figsize=(8, 4))
# --- left hand plot
ax = fig.add_subplot(1, 3, 1)
plt.plot(x, w, color='red')
# ax.legend(loc='best')
plt.title('Numerical Solution')

# --- middle plot
ax = fig.add_subplot(1, 3, 2)
plt.plot(x, AnalyticSolution, color='blue')
plt.title('Analytic Solution')

# ax.legend(loc='best')
ax = fig.add_subplot(1, 3, 3)
plt.plot(x, AnalyticSolution - w, color='blue')
plt.title('Error')

# --- title, explanatory text and save
fig.suptitle('Exponential Growth Solution', fontsize=20)
plt.tight_layout()
plt.subplots_adjust(top=0.85)

Listing 1.2: Python Numerical and Analytical Solution of Eqn 8
Example 7. An extension of the exponential growth differential equation includes a sinusoidal component:
\[ y' = \varepsilon(y + y\sin(x)). \tag{9} \]
Figure 1.1.3: Python output: numerical solution for y' = ε(y + y sin(x)) (Equation 9) with h = 0.01 and ε = 0.5.
Consider the initial value problem
\[ \frac{dy}{dt} = f(t, y), \quad a \le t \le b, \]
with initial condition
\[ y(a) = \alpha. \]
• For any ε > 0 there exists a positive constant k(ε) with the property that whenever |ε_0| < ε and |δ(t)| < ε on [a, b], a unique solution z(t) to the problem
\[ \frac{dz}{dt} = f(t, z) + \delta(t), \quad a \le t \le b, \tag{10} \]
\[ z(a) = \alpha + \varepsilon_0, \]
exists with
\[ |z(t) - y(t)| < k(\varepsilon)\varepsilon. \]
The problem specified by (10) is called a perturbed problem associated with the original problem. When such a constant exists, the initial value problem
\[ \frac{dy}{dt} = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha, \]
is well-posed.
Example 8. The initial value problem
\[ y'(x) = -y(x) + 1, \quad 0 \le x \le b, \quad y(0) = 1, \]
has the solution y(x) = 1. The perturbed problem is
\[ z'(x) = -z(x) + 1, \quad 0 \le x \le b, \quad z(0) = 1 + \varepsilon. \]
Figure 1.1.4: Python output: illustrating stability for y'(x) = -y(x) + 1 with the initial condition y(0) = 1, and z'(x) = -z(x) + 1 with the initial condition z(0) = 1 + ε, ε = 0.1.
1.2 One-Step Methods

Example 9. Consider the Initial Value Problem
\[ y' = -\frac{y^2}{1 + t}, \quad a = 0 \le t \le b = 0.5, \quad y(0) = 1. \]
The first Euler step with h = 0.05 is
\[ w_1 = w_0 - \frac{0.05\, w_0^2}{1 + t_0} = 1 - \frac{0.05}{1 + 0} = 0.95, \]
and continuing gives:

i   t_i    w_i
0   0      1
1   0.05   0.95
2   0.1    0.90702381
3   0.15   0.86962871
4   0.2    0.8367481
5   0.25   0.80757529
6   0.3    0.78148818
7   0.35   0.7579988
8   0.4    0.73671872
9   0.45   0.71733463
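The table can be reproduced with a few lines of Python, a direct sketch of the same Euler recursion:

```python
# Forward Euler w_{i+1} = w_i + h f(t_i, w_i) for y' = -y^2/(1 + t),
# y(0) = 1, with h = 0.05.
h, N = 0.05, 10
t, w = [0.0], [1.0]
for i in range(N - 1):
    w.append(w[i] + h * (-w[i] ** 2 / (1 + t[i])))
    t.append(t[i] + h)
```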
For all x ≥ −1 and any positive m,
\[ 0 \le (1 + x)^m \le e^{mx}. \]
Let y(t) denote the unique solution of the Initial Value Problem
\[ y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha, \]
and let w_0, w_1, \dots, w_N be the approximations generated by the Euler method for some positive integer N. Then for i = 0, 1, \dots, N,
\[ |y(t_i) - w_i| \le \frac{Mh}{2L}\left|e^{L(t_i - a)} - 1\right|. \]
Proof. When i = 0 the result is clearly true since y(t_0) = w_0 = α. From Taylor's theorem we have
\[ y(t_{i+1}) = y(t_i) + h f(t_i, y(t_i)) + \frac{h^2}{2} y''(\xi_i), \]
where t_i \le \xi_i \le t_{i+1}, and from this we get the Euler approximation
\[ w_{i+1} = w_i + h f(t_i, w_i). \]
Consequently we have
\[ y(t_{i+1}) - w_{i+1} = y(t_i) - w_i + h\left[f(t_i, y(t_i)) - f(t_i, w_i)\right] + \frac{h^2}{2} y''(\xi_i), \]
and
\[ |y(t_{i+1}) - w_{i+1}| \le |y(t_i) - w_i| + h\left|f(t_i, y(t_i)) - f(t_i, w_i)\right| + \frac{h^2}{2}\left|y''(\xi_i)\right|. \]
Since f satisfies a Lipschitz Condition in the second variable with constant L and |y''| ≤ M, we have
\[ |y(t_{i+1}) - w_{i+1}| \le (1 + hL)|y(t_i) - w_i| + \frac{h^2}{2} M. \]
Using Lemmas 1.2.1 and 1.2.2, and letting a_j = |y(t_j) - w_j| for each j = 0, \dots, N, while s = hL and t = \frac{h^2 M}{2}, we see that
\[ |y(t_{i+1}) - w_{i+1}| \le e^{(i+1)hL}\left(|y(t_0) - w_0| + \frac{h^2 M}{2hL}\right) - \frac{h^2 M}{2hL}. \]
Since w_0 - y_0 = 0 and (i + 1)h = t_{i+1} - t_0 = t_{i+1} - a, we have
\[ |y(t_{i+1}) - w_{i+1}| \le \frac{Mh}{2L}\left|e^{L(t_{i+1} - a)} - 1\right|, \]
for each i = 0, 1, \dots, N - 1.
Example 10. Consider
\[ y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5. \]
For the Euler approximation we have
\[ \frac{\partial f}{\partial y} = 1, \]
so L = 1, and with M = 0.5e^2 - 2 bounding |y''| on [0, 2] the error bound is
\[ |y_i - w_i| \le \frac{h}{2}(0.5 e^2 - 2)(e^{t_i} - 1). \]
Figure 1.2.1 illustrates the upper bound of the error and the actual error.
Figure 1.2.1: Python output: illustrating the upper bound for y' = y - t^2 + 1 with the initial condition y(0) = 0.5.
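The bound can be checked numerically; the sketch below uses the exact solution y(t) = (t+1)² − 0.5eᵗ of this problem and verifies that the Euler error stays below the bound at every step:

```python
import math

# Euler for y' = y - t^2 + 1, y(0) = 0.5, with the theoretical bound
# |y(t_i) - w_i| <= (h M / (2 L)) (e^{L t_i} - 1), L = 1, M = 0.5 e^2 - 2.
h, N = 0.2, 10
M, L = 0.5 * math.e**2 - 2, 1.0
t, w = 0.0, 0.5
ok = True
for i in range(N):
    w = w + h * (w - t**2 + 1)
    t = t + h
    exact = (t + 1)**2 - 0.5 * math.exp(t)       # exact solution
    bound = (h * M / (2 * L)) * (math.exp(L * t) - 1)
    ok = ok and abs(exact - w) <= bound          # bound holds at every step
```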
The general one step method can be written as
\[ w_{i+1} = w_i + h\Phi(t_i, w_i). \]
For the Euler method, the local truncation error at the i-th step for the problem
\[ y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha, \]
is
\[ \tau_{i+1}(h) = \frac{y_{i+1} - y_i}{h} - f(t_i, y_i), \quad i = 0, \dots, N - 1. \]
But we know Euler has
\[ \tau_{i+1}(h) = \frac{h}{2} y''(\xi_i), \quad \xi_i \in (t_i, t_{i+1}). \]
When y''(t) is known to be bounded by a constant M on [a, b] this implies
\[ |\tau_{i+1}(h)| \le \frac{h}{2} M \sim O(h). \]
O(h) indicates a linear order of error. The higher the order, the more accurate the method.
1.3 Problem Sheet
The Taylor expansion
\[ y(t_{i+1}) = y(t_i) + h y'(t_i) + \frac{h^2}{2} y''(t_i) + \cdots + \frac{h^{n+1}}{(n+1)!} y^{(n+1)}(\xi_i) \]
motivates the Taylor method of order n:
\[ w_0 = \alpha, \]
\[ T^{(n)}(t_i, w_i) = f(t_i, w_i) + \frac{h}{2} f'(t_i, w_i) + \cdots + \frac{h^{n-1}}{n!} f^{(n-1)}(t_i, w_i). \tag{11} \]
2.1 Higher order Taylor Methods

Example 11. Applying the general Taylor method to create methods of order two and four for the initial value problem
\[ y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5, \]
we have f(t, w) = w - t^2 + 1, so
\[ T^{(2)}(t_i, w_i) = f(t_i, w_i) + \frac{h}{2} f'(t_i, w_i) = \left(1 + \frac{h}{2}\right)(w_i - t_i^2 + 1) - h t_i \]
and
\begin{align*}
T^{(4)}(t_i, w_i) &= f(t_i, w_i) + \frac{h}{2} f'(t_i, w_i) + \frac{h^2}{6} f''(t_i, w_i) + \frac{h^3}{24} f'''(t_i, w_i) \\
&= \left(1 + \frac{h}{2} + \frac{h^2}{6} + \frac{h^3}{24}\right)(w_i - t_i^2) - \left(1 + \frac{h}{3} + \frac{h^2}{12}\right) h t_i + 1 + \frac{h}{2} - \frac{h^2}{6} - \frac{h^3}{24}.
\end{align*}
From these equations we have Taylor of order two,
\[ w_0 = 0.5, \quad w_{i+1} = w_i + h\left[\left(1 + \frac{h}{2}\right)(w_i - t_i^2 + 1) - h t_i\right], \]
and Taylor of order four,
\[ w_{i+1} = w_i + h\left[\left(1 + \frac{h}{2} + \frac{h^2}{6} + \frac{h^3}{24}\right)(w_i - t_i^2) - \left(1 + \frac{h}{3} + \frac{h^2}{12}\right) h t_i + 1 + \frac{h}{2} - \frac{h^2}{6} - \frac{h^3}{24}\right]. \]
The local truncation error of the order two method is
\[ \tau_{i+1}(h) = \frac{y_{i+1} - y_i}{h} - T^{(2)}(t_i, y_i) = \frac{h^2}{6} f''(\xi_i, y(\xi_i)), \]
where ξ_i ∈ (t_i, t_{i+1}). In general, if y ∈ C^{n+1}[a, b],
\[ \tau_{i+1}(h) = \frac{h^n}{(n+1)!} f^{(n)}(\xi_i, y(\xi_i)) \sim O(h^n). \]
The issue is that for every differential equation a new method has to be derived.
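As a sketch, the order two Taylor method above can be coded directly for this problem (the exact solution y(t) = (t+1)² − 0.5eᵗ is used only to measure the error):

```python
import math

# Taylor method of order two for y' = y - t^2 + 1, y(0) = 0.5:
# w_{i+1} = w_i + h [ (1 + h/2)(w_i - t_i^2 + 1) - h t_i ].
h, N = 0.1, 20
t, w = 0.0, 0.5
for i in range(N):
    w = w + h * ((1 + h / 2) * (w - t**2 + 1) - h * t)
    t = t + h

exact = (t + 1)**2 - 0.5 * math.exp(t)   # exact solution at t = 2
error = abs(exact - w)                   # should be O(h^2)
```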
3 RUNGE–KUTTA METHOD
\[ T_n = h\tau_n(y). \]
Rearranging, we get:
Theorem 3.0.1. Suppose f(t, y) and all its partial derivatives of order less than or equal to n + 1 are continuous on D = {(t, y) | a ≤ t ≤ b, c ≤ y ≤ d}, and let (t_0, y_0) ∈ D. For every (t, y) ∈ D there exist ξ ∈ (t, t_0) and µ ∈ (y, y_0) with
\[ f(t, y) = P_n(t, y) + R_n(t, y), \]
where
\begin{align*}
P_n(t, y) &= f(t_0, y_0) + (t - t_0)\frac{\partial f}{\partial t}(t_0, y_0) + (y - y_0)\frac{\partial f}{\partial y}(t_0, y_0) \\
&\quad + \frac{(t - t_0)^2}{2}\frac{\partial^2 f}{\partial t^2}(t_0, y_0) + (y - y_0)(t - t_0)\frac{\partial^2 f}{\partial y \partial t}(t_0, y_0) + \frac{(y - y_0)^2}{2}\frac{\partial^2 f}{\partial y^2}(t_0, y_0) \\
&\quad + \cdots + \frac{1}{n!}\sum_{j=0}^{n}\binom{n}{j}(t - t_0)^{n-j}(y - y_0)^j\, \frac{\partial^n f}{\partial y^j \partial t^{n-j}}(t_0, y_0)
\end{align*}
and
\[ R_n(t, y) = \frac{1}{(n+1)!}\sum_{j=0}^{n+1}\binom{n+1}{j}(t - t_0)^{n+1-j}(y - y_0)^j\, \frac{\partial^{n+1} f}{\partial y^j \partial t^{n+1-j}}(\xi, \mu). \]

3.1 Derivation of Second Order Runge Kutta
A second order Runge-Kutta method uses
\[ F(f, t, y, h) = a_0 f(t, y) + a_1 f(t + \alpha_1, y + \beta_1), \tag{15} \]
where a_0 + a_1 = 1. There is a free parameter in the derivation of the Runge-Kutta method; for this reason a_0 must be chosen. We derive the second order Runge-Kutta method by using Theorem 3.0.1 to determine values for a_1, α_1 and β_1 with the property that a_1 f(t + α_1, y + β_1) approximates the second order Taylor term
\[ f(t, y) + \frac{h}{2} f'(t, y), \]
with error no greater than O(h²), the local truncation error for the Taylor method of order two.
Using
\[ f'(t, y) = \frac{\partial f}{\partial t}(t, y) + \frac{\partial f}{\partial y}(t, y)\, y'(t), \]
the second order Taylor term can be rewritten as
\[ f(t, y) + \frac{h}{2}\frac{\partial f}{\partial t}(t, y) + \frac{h}{2}\frac{\partial f}{\partial y}(t, y)\, f(t, y). \tag{16} \]
Expanding with Theorem 3.0.1,
\[ a_1 f(t + \alpha_1, y + \beta_1) = a_1 f(t, y) + a_1\alpha_1\frac{\partial f}{\partial t}(t, y) + a_1\beta_1\frac{\partial f}{\partial y}(t, y) + a_1 R_1(t + \alpha_1, y + \beta_1), \tag{17} \]
where
\[ R_1(t + \alpha_1, y + \beta_1) = \frac{\alpha_1^2}{2}\frac{\partial^2 f}{\partial t^2}(\xi, \mu) + \alpha_1\beta_1\frac{\partial^2 f}{\partial t \partial y}(\xi, \mu) + \frac{\beta_1^2}{2}\frac{\partial^2 f}{\partial y^2}(\xi, \mu). \]
Choosing a_0 = 0 gives the unique values a_1 = 1, α_1 = h/2 and β_1 = (h/2) f(t, y), so
\[ T^{(2)}(t, y) = f\left(t + \frac{h}{2}, y + \frac{h}{2} f(t, y)\right) - R_1\left(t + \frac{h}{2}, y + \frac{h}{2} f(t, y)\right), \]
and from
\[ R_1\left(t + \frac{h}{2}, y + \frac{h}{2} f(t, y)\right) = \frac{h^2}{8}\frac{\partial^2 f}{\partial t^2}(\xi, \mu) + \frac{h^2}{4} f(t, y)\frac{\partial^2 f}{\partial t \partial y}(\xi, \mu) + \frac{h^2}{8} f(t, y)^2\frac{\partial^2 f}{\partial y^2}(\xi, \mu), \]
it follows that
\[ R_1\left(t + \frac{h}{2}, y + \frac{h}{2} f(t, y)\right) \sim O(h^2). \]
The Midpoint second order Runge-Kutta method for the initial value problem
\[ y' = f(t, y) \]
is
\[ w_0 = \alpha, \quad w_{i+1} = w_i + h f\left(t_i + \frac{h}{2}, w_i + \frac{h}{2} f(t_i, w_i)\right), \]
with an error of order O(h²). Figure 3.1.1 illustrates the solution to y' = −xy.
Figure 3.1.1: Python output: midpoint-method solution of y' = −xy with the initial condition y(0) = 1.
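A minimal sketch of the midpoint method applied to y' = −xy, whose exact solution is y = e^{−x²/2}:

```python
import math

# Midpoint RK2 for y' = -x y, y(0) = 1, on [0, 2].
def f(x, y):
    return -x * y

h, N = 0.1, 20
x, w = 0.0, 1.0
max_err = 0.0
for i in range(N):
    w = w + h * f(x + h / 2, w + (h / 2) * f(x, w))
    x = x + h
    max_err = max(max_err, abs(w - math.exp(-x**2 / 2)))   # vs exact
```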
Choosing instead a_0 = a_1 = 1/2, α_1 = h and β_1 = h f(t, y) gives
\[ R_1(t + h, y + h f(t, y)) = \frac{h^2}{2}\frac{\partial^2 f}{\partial t^2}(\xi, \mu) + h^2 f(t, y)\frac{\partial^2 f}{\partial t \partial y}(\xi, \mu) + \frac{h^2}{2} f(t, y)^2\frac{\partial^2 f}{\partial y^2}(\xi, \mu). \]
Thus Heun's second order Runge-Kutta method for the initial value problem
\[ y' = f(t, y), \]
with the initial condition y(t_0) = α, is given by
\[ w_0 = \alpha, \quad w_{i+1} = w_i + \frac{h}{2}\left[f(t_i, w_i) + f(t_i + h, w_i + h f(t_i, w_i))\right], \]
with an error of order O(h²). For ease of calculation this can be rewritten as:
\[ k_1 = f(t_i, w_i), \quad k_2 = f(t_i + h, w_i + h k_1), \quad w_{i+1} = w_i + \frac{h}{2}[k_1 + k_2]. \]
3.2 Third Order Runge Kutta methods

Higher order methods are derived in a similar fashion. For the third order Runge-Kutta methods,
\[ \frac{w_{i+1} - w_i}{h} = F(f, t_i, w_i, h), \tag{18} \]
with
\[ F(f, t, w, h) = a_0 k_1 + a_1 k_2 + a_2 k_3, \tag{19} \]
where
\[ a_0 + a_1 + a_2 = 1 \]
and
\[ k_1 = f(t_i, w_i), \quad k_2 = f(t_i + \alpha_1 h, w_i + \beta_{11} h k_1), \quad k_3 = f(t_i + \alpha_2 h, w_i + \beta_{21} h k_1 + \beta_{22} h k_2). \]
The values of a_0, a_1, a_2, α_1, α_2, β_11, β_21, β_22 are derived by matching terms with the Taylor expansion,
\[ y_{i+1} = y_i + h f(t_i, y_i) + \frac{h^2}{2}(f_t + f_y f)\big|_{(t_i, y_i)} + \frac{h^3}{6}\left(f_{tt} + 2 f_{ty} f + f_t f_y + f_{yy} f^2 + f_y f_y f\right)\big|_{(t_i, y_i)} + O(h^4). \]
Expanding k_2 and k_3 about (t_i, y_i) in the same way,
\[ h a_1 k_2 = h a_1\left(f + \alpha_1 h f_t + \beta_{11} h f_y f + \frac{h^2}{2}\left(f_{tt}\alpha_1^2 + f_{yy}\beta_{11}^2 f^2 + 2 f_{ty}\alpha_1\beta_{11} f\right) + O(h^3)\right), \]
\[ h a_2 k_3 = h a_2\left(f + \alpha_2 h f_t + h f_y(\beta_{21} k_1 + \beta_{22} k_2) + \frac{1}{2}\left(f_{tt}(\alpha_2 h)^2 + f_{yy} h^2(\beta_{21} k_1 + \beta_{22} k_2)^2 + 2 f_{ty}\alpha_2 h^2(\beta_{21} k_1 + \beta_{22} k_2)\right) + O(h^3)\right), \]
and comparing coefficients gives, for Kutta's third order method,
\[ \alpha_1 = \frac{1}{2}, \quad \beta_{11} = \frac{1}{2}, \quad \alpha_2 = 1, \quad \beta_{21} = -1, \quad \beta_{22} = 2, \quad a_0 = \frac{1}{6}, \quad a_1 = \frac{4}{6}, \quad a_2 = \frac{1}{6}. \]
That is,
\[ w_{i+1} = w_i + \frac{h}{6}(k_1 + 4 k_2 + k_3), \]
where
\[ k_1 = f(t_i, w_i), \quad k_2 = f\left(t_i + \frac{h}{2}, w_i + \frac{h}{2} k_1\right), \quad k_3 = f(t_i + h, w_i - h k_1 + 2 h k_2). \]

3.3 Runge Kutta fourth order

The classical fourth order Runge-Kutta method is
\[ w_0 = \alpha, \]
\[ k_1 = h f(t_i, w_i), \]
\[ k_2 = h f\left(t_i + \frac{h}{2}, w_i + \frac{1}{2} k_1\right), \]
\[ k_3 = h f\left(t_i + \frac{h}{2}, w_i + \frac{1}{2} k_2\right), \]
\[ k_4 = h f(t_{i+1}, w_i + k_3), \]
\[ w_{i+1} = w_i + \frac{1}{6}(k_1 + 2 k_2 + 2 k_3 + k_4). \]
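A sketch of the classical fourth order Runge-Kutta method applied to y' = y − t² + 1 (the exact solution is used only to measure the error):

```python
import math

# Classical RK4 for y' = y - t^2 + 1, y(0) = 0.5, on [0, 2] with h = 0.2.
def f(t, y):
    return y - t**2 + 1

h, N = 0.2, 10
t, w = 0.0, 0.5
for i in range(N):
    k1 = h * f(t, w)
    k2 = h * f(t + h / 2, w + k1 / 2)
    k3 = h * f(t + h / 2, w + k2 / 2)
    k4 = h * f(t + h, w + k3)
    w = w + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t = t + h

error = abs(w - ((t + 1)**2 - 0.5 * math.exp(t)))   # exact y(2)
```

Even with the fairly coarse step h = 0.2, the fourth order method is accurate to several decimal places.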
Example 12. Applying the midpoint method with h = 0.2 to
\[ y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5, \]
gives
\begin{align*}
w_{i+1} &= w_i + 0.2\, f\left(t_i + \frac{0.2}{2}, w_i + \frac{0.2}{2} f(t_i, w_i)\right) \\
&= w_i + 0.2\, f\left(t_i + 0.1, w_i + 0.1(w_i - t_i^2 + 1)\right) \\
&= w_i + 0.2\left(w_i + 0.1(w_i - t_i^2 + 1) - (t_i + 0.1)^2 + 1\right).
\end{align*}
Example 13. Applying the Runge-Kutta fourth order method to
\[ y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5, \]
with w_0 = 0.5.
Figure 3.3.1: Python output: illustrating y' = −xy with the initial condition y(0) = 1.
Figure 3.3.2: Python output: illustrating y' = −xy with the initial condition y(0) = 1.
The general explicit s-stage Runge-Kutta method is
\[ y_{i+1} = y_i + h\sum_{n=1}^{s} a_n k_n, \]
where
\begin{align*}
k_1 &= f(t_i, y_i), \\
k_2 &= f(t_i + \alpha_2 h, y_i + h(\beta_{21} k_1)), \\
k_3 &= f(t_i + \alpha_3 h, y_i + h(\beta_{31} k_1 + \beta_{32} k_2)), \\
&\ \,\vdots \\
k_s &= f(t_i + \alpha_s h, y_i + h(\beta_{s1} k_1 + \beta_{s2} k_2 + \cdots + \beta_{s,s-1} k_{s-1})).
\end{align*}
The Butcher's Tableau for the 4th Order Runge Kutta is:

0    |
1/2  | 1/2
1/2  | 0     1/2
1    | 0     0     1
-----+---------------------
     | 1/6   2/6   2/6   1/6
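Any explicit tableau can drive a generic stepper; a sketch (the name `rk_step` is our own):

```python
import math

# One explicit Runge-Kutta step driven directly by a Butcher tableau
# (nodes alpha, coefficients beta, weights a).
def rk_step(f, t, w, h, alpha, beta, a):
    k = []
    for i in range(len(a)):
        yi = w + h * sum(beta[i][j] * k[j] for j in range(i))
        k.append(f(t + alpha[i] * h, yi))
    return w + h * sum(a[i] * k[i] for i in range(len(a)))

# The classical RK4 tableau from the text.
alpha = [0, 1/2, 1/2, 1]
beta = [[], [1/2], [0, 1/2], [0, 0, 1]]
a = [1/6, 2/6, 2/6, 1/6]

# One step of h = 0.1 on y' = y, y(0) = 1, should be close to e^{0.1}.
w = rk_step(lambda t, y: y, 0.0, 1.0, 0.1, alpha, beta, a)
```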
3.5 Convergence Analysis

With the local truncation error
\[ \tau_n(y) = \frac{y(t_{n+1}) - y(t_n)}{h} - F(t_n, y(t_n), h; f), \]
for consistency we require
\[ F(x, y, h; f) \to y'(x) = f(x, y(x)). \]
More precisely, define
\[ \delta(h) = \max_{a \le x \le b} |f(x, y) - F(x, y, h; f)| \]
and assume
\[ \delta(h) \to 0, \quad \text{as } h \to 0. \tag{20} \]
This is called the consistency condition for the RK method. We will also need a Lipschitz Condition on F:
\[ |F(t, w, h; f) - F(t, z, h; f)| \le K|w - z|. \]
Example 14. Looking at the midpoint method,
\begin{align*}
|F(t, w, h; f) - F(t, z, h; f)| &= \left| f\left(t + \frac{h}{2}, w + \frac{h}{2} f(t, w)\right) - f\left(t + \frac{h}{2}, z + \frac{h}{2} f(t, z)\right) \right| \\
&\le K\left| w - z + \frac{h}{2}\left[f(t, w) - f(t, z)\right] \right| \\
&\le K\left(1 + \frac{h}{2}K\right)|w - z|.
\end{align*}
Theorem 3.5.1. Assume that the Runge-Kutta method satisfies the Lipschitz Condition. Then, for the initial value problem
\[ y' = f(x, y), \quad y(x_0) = y_0, \]
the numerical solution {w_n} satisfies
\[ \max_{a \le x_n \le b} |y(x_n) - w_n| \le e^{(b-a)L}|y_0 - w_0| + \left[\frac{e^{(b-a)L} - 1}{L}\right]\tau(h), \]
where
\[ \tau(h) = \max_{a \le x_n \le b} |\tau_n(h)|. \]
If the consistency condition
\[ \delta(h) \to 0 \quad \text{as } h \to 0, \]
where
\[ \delta(h) = \max_{a \le x \le b} |f(x, y) - F(x, y; h; f)|, \]
also holds, the numerical solution converges.
Proof. Subtracting
\[ w_{n+1} = w_n + h F(t_n, w_n, h; f) \]
and
\[ y(t_{n+1}) = y(t_n) + h F(t_n, y(t_n), h; f) + h\tau_n(h), \]
we obtain a recursion for the error e_n = y(t_n) − w_n,
\[ e_{n+1} = e_n + h\left[F(t_n, y(t_n), h; f) - F(t_n, w_n, h; f)\right] + h\tau_n(h), \]
so |e_{n+1}| ≤ (1 + hL)|e_n| + hτ(h) and the bound follows as in the Euler analysis. Thus τ(h) → 0 as h → 0 gives convergence.
3.6 The choice of method and step-size

Consider an nth order method producing approximations
\[ w_0 = \alpha, \quad w_{i+1} = w_i + h\phi(t_i, w_i, h), \]
with local truncation error τ_{i+1} = O(h^n), and a second method producing approximations
\[ v_0 = \alpha, \quad v_{i+1} = v_i + h\psi(t_i, v_i, h), \]
with local truncation error υ_{i+1} = O(h^{n+1}). Assuming w_i ≈ y(t_i),
\begin{align*}
\tau_{i+1} &= \frac{y(t_{i+1}) - y(t_i)}{h} - \phi(t_i, y(t_i), h) \\
&\approx \frac{y(t_{i+1}) - w_i}{h} - \phi(t_i, w_i, h) \\
&= \frac{y(t_{i+1}) - (w_i + h\phi(t_i, w_i, h))}{h} \\
&= \frac{y(t_{i+1}) - w_{i+1}}{h}.
\end{align*}
Similarly,
\[ \Upsilon_{i+1} = \frac{y(t_{i+1}) - v_{i+1}}{h}. \]
As a consequence,
\[ \tau_{i+1} = \frac{y(t_{i+1}) - w_{i+1}}{h} = \frac{(y(t_{i+1}) - v_{i+1}) + (v_{i+1} - w_{i+1})}{h} = \Upsilon_{i+1}(h) + \frac{v_{i+1} - w_{i+1}}{h}. \]
But τ_{i+1}(h) is O(h^n) and Υ_{i+1}(h) is O(h^{n+1}), so the significant part of τ_{i+1}(h) must come from (v_{i+1} − w_{i+1})/h. This gives us an easily computed approximation for the O(h^n) method:
\[ \tau_{i+1} \approx \frac{v_{i+1} - w_{i+1}}{h}. \]
The object is not to estimate the local truncation error but to adjust the step size to keep it within a specified bound. To do this we assume that, since τ_{i+1}(h) is O(h^n), a number K independent of h exists with
\[ \tau_{i+1}(h) \approx K h^n. \]
Then the local truncation error produced by applying the nth order method with a new step size qh can be estimated using the original approximations w_{i+1} and v_{i+1}:
\[ \tau_{i+1}(qh) \approx K(qh)^n \approx q^n \tau_{i+1}(h) \approx \frac{q^n}{h}(v_{i+1} - w_{i+1}). \]
To bound τ_{i+1}(qh) by ε we choose q such that
\[ \frac{q^n}{h}|v_{i+1} - w_{i+1}| \approx |\tau_{i+1}(qh)| \le \varepsilon, \]
which leads to
\[ q \le \left(\frac{\varepsilon h}{|v_{i+1} - w_{i+1}|}\right)^{1/n}, \]
which can be used to control the error.
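A sketch of this step-size control using an Euler/Heun pair (the n = 1 case); the safety factor 0.9 and the clamps on q are conventional choices of our own, not from the text:

```python
import math

# Step-size control with an order 1 / order 2 pair (Euler and Heun):
# tau_{i+1}(h) is estimated by |v - w|/h and the new step is q h with
# q = eps h / |v - w|  (the n = 1 case of the formula above).
def f(t, y):
    return y                  # test problem y' = y, y(0) = 1, exact e^t

eps = 1e-4
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    h = min(h, 1.0 - t)                          # do not step past t = 1
    w = y + h * f(t, y)                          # order 1 (Euler)
    v = y + (h / 2) * (f(t, y) + f(t + h, w))    # order 2 (Heun)
    if abs(v - w) / h <= eps:                    # estimated error acceptable
        t, y = t + h, v                          # accept the order 2 value
    if v != w:
        q = eps * h / abs(v - w)
        h = h * max(0.1, min(4.0, 0.9 * q))      # safety factor and clamps
    else:
        h = h * 4.0
```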
3.7 Problem Sheet 2

\[ y' = ty + ty^2, \quad (0 \le t \le 2), \quad y(0) = \frac{1}{2}, \]
using N = 4 steps.
b) y' = y − t, (0 ≤ t ≤ 2), with the initial condition y(0) = 2, N = 4, with the exact solution y(t) = e^t + t + 1.
b) y' = y − t, (0 ≤ t ≤ 2), with the initial condition y(0) = 2, N = 4, with the exact solution y(t) = e^t + t + 1.
\[ w_{n+1} = w_n + k_2, \quad k_1 = h f(t_n, w_n), \quad k_2 = h f\left(t_n + \frac{1}{2}h, w_n + \frac{1}{2}k_1\right), \]
for solving the ordinary differential equation
\[ \frac{dy}{dt} = f(t, y), \quad y(t_0) = y_0, \]
by using a formula of the form
are specified.
When b_m = 0 the method is called explicit, or open, since (22) then gives w_{i+1} explicitly in terms of previously determined approximations. When b_m ≠ 0 the method is called implicit, or closed, since w_{i+1} occurs on both sides of (22).
Example 15. The fourth order Adams-Bashforth method (explicit),
\[ w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad w_3 = \alpha_3, \]
\[ w_{i+1} = w_i + \frac{h}{24}\left[55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3})\right], \]
and the three step Adams-Moulton method (implicit),
\[ w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \]
\[ w_{i+1} = w_i + \frac{h}{24}\left[9 f(t_{i+1}, w_{i+1}) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})\right]. \]
4.1 Derivation of an explicit multistep method

Consequently,
\[ y(t_{i+1}) = y(t_i) + \int_{t_i}^{t_{i+1}} f(t, y(t))\,dt. \]
Since we cannot integrate f(t, y(t)) without knowing y(t), the solution to the problem, we instead integrate an interpolating polynomial P(t) to f(t, y(t)), determined by some of the previously obtained data points (t_0, w_0), (t_1, w_1), \dots, (t_i, w_i). When we assume in addition that y(t_i) ≈ w_i,
\[ y(t_{i+1}) \approx w_i + \int_{t_i}^{t_{i+1}} P(t)\,dt. \]
The interpolation error formula gives
\[ f(t, y(t)) = P_{m-1}(t) + \frac{(t - t_i)\cdots(t - t_{i+1-m})}{m!} f^{(m)}(\xi, y(\xi)), \tag{23} \]
with
\[ P_{m-1}(t) = \sum_{j=1}^{m} L_{m-1,j}(t)\, f(t_{i+1-j}, y(t_{i+1-j})), \tag{24} \]
where the backward differences are defined by
\[ \nabla f(t_i, y(t_i)) = f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1})), \]
\[ \nabla^2 f(t_i, y(t_i)) = \nabla f(t_i, y(t_i)) - \nabla f(t_{i-1}, y(t_{i-1})) = f(t_i, y(t_i)) - 2 f(t_{i-1}, y(t_{i-1})) + f(t_{i-2}, y(t_{i-2})). \]
This leads to the two step Adams-Bashforth method,
\[ w_{i+1} = w_i + \frac{h}{2}\left[3 f(t_i, w_i) - f(t_{i-1}, w_{i-1})\right] \quad \text{for } i = 1, \dots, N - 1. \]
The local truncation error is
\[ \tau_{i+1}(h) = \frac{y(t_{i+1}) - y(t_i)}{h} - \frac{1}{2}\left[3 f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))\right] = \frac{\text{Error}}{h}, \]
where
\[ \text{Error} = \int_{t_i}^{t_{i+1}} \frac{(t - t_i)(t - t_{i-1})}{2!} f^{(2)}(\mu_i, y(\mu_i))\,dt = \frac{5}{12} h^3 f^{(2)}(\mu_i, y(\mu_i)), \]
so
\[ \tau_{i+1}(h) = \frac{\frac{5}{12} h^3 f^{(2)}(\mu_i, y(\mu_i))}{h} = \frac{5}{12} h^2 f^{(2)}(\mu_i, y(\mu_i)). \]
The local truncation error for the two step Adams-Bashforth method is of order 2:
\[ \tau_{i+1}(h) = O(h^2). \]
In terms of the variable s, with t = t_i + sh, the Lagrange basis functions
\[ L_{m-1,j}(t) = \frac{(t - t_0)\cdots(t - t_{m-1})}{(t_j - t_0)\cdots(t_j - t_{m-1})} = \prod_{k=0, k \ne j}^{m-1} \frac{t - t_k}{t_j - t_k} \tag{25} \]
combine with the backward differences to give the binomial form
\[ P_{m-1}(t_i + sh) = \sum_{k=0}^{m-1} (-1)^k \binom{-s}{k} \nabla^k f(t_i, y(t_i)). \tag{26} \]
Consequently,
\begin{align*}
\int_{t_i}^{t_{i+1}} f(t, y(t))\,dt &= \int_{t_i}^{t_{i+1}} \sum_{k=0}^{m-1} (-1)^k \binom{-s}{k} \nabla^k f(t_i, y(t_i))\,dt + \int_{t_i}^{t_{i+1}} \frac{(t - t_i)\cdots(t - t_{i+1-m})}{m!} f^{(m)}(\xi, y(\xi))\,dt \\
&= \sum_{k=0}^{m-1} \nabla^k f(t_i, y(t_i))\, h\, (-1)^k \int_0^1 \binom{-s}{k}\,ds + \frac{h^{m+1}}{m!} \int_0^1 s(s+1)\cdots(s+m-1)\, f^{(m)}(\xi, y(\xi))\,ds.
\end{align*}
The integrals (-1)^k \int_0^1 \binom{-s}{k}\,ds for various values of k are computed as such.
Example 16. For k = 2,
\[ (-1)^2 \int_0^1 \binom{-s}{2}\,ds = \int_0^1 \frac{-s(-s-1)}{1 \cdot 2}\,ds = \frac{1}{2}\int_0^1 (s^2 + s)\,ds = \frac{1}{2}\left[\frac{s^3}{3} + \frac{s^2}{2}\right]_0^1 = \frac{5}{12}. \]

k                                      0     1      2       3     ...
(-1)^k \int_0^1 \binom{-s}{k}\,ds      1     1/2    5/12    3/8   ...
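The entries of this table can be verified with exact rational arithmetic, using (−1)^k C(−s, k) = s(s+1)⋯(s+k−1)/k! (the name `ab_coefficient` is our own):

```python
from fractions import Fraction

# (-1)^k C(-s, k) = s(s+1)...(s+k-1)/k!, integrated exactly over [0, 1]
# with Fraction polynomial coefficients (ascending powers of s).
def ab_coefficient(k):
    poly = [Fraction(1)]                          # the empty product is 1
    for j in range(k):
        shifted = [Fraction(0)] + poly            # s * poly
        scaled = [Fraction(j) * c for c in poly] + [Fraction(0)]  # j * poly
        poly = [a + b for a, b in zip(shifted, scaled)]           # (s + j) * poly
    fact = 1
    for j in range(1, k + 1):
        fact *= j
    # integrate c_m s^m over [0, 1]: c_m / (m + 1)
    return sum(c / (m + 1) for m, c in enumerate(poly)) / fact

coeffs = [ab_coefficient(k) for k in range(4)]    # 1, 1/2, 5/12, 3/8
```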
As a consequence,
\[ \int_{t_i}^{t_{i+1}} f(t, y(t))\,dt = h\left[f(t_i, y(t_i)) + \frac{1}{2}\nabla f(t_i, y(t_i)) + \frac{5}{12}\nabla^2 f(t_i, y(t_i)) + \cdots\right] + \frac{h^{m+1}}{m!}\int_0^1 s(s+1)\cdots(s+m-1)\, f^{(m)}(\xi, y(\xi))\,ds, \]
so
\[ y(t_{i+1}) = y(t_i) + h\left[f(t_i, y(t_i)) + \frac{1}{2}\nabla f(t_i, y(t_i)) + \frac{5}{12}\nabla^2 f(t_i, y(t_i)) + \cdots\right]. \]
Example 17. To derive the two step Adams-Bashforth method, truncate after the first backward difference:
\begin{align*}
y(t_{i+1}) &\approx y(t_i) + h\left[f(t_i, y(t_i)) + \frac{1}{2}\nabla f(t_i, y(t_i))\right] \\
&= y(t_i) + h\left[f(t_i, y(t_i)) + \frac{1}{2}\left(f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))\right)\right] \\
&= y(t_i) + \frac{h}{2}\left[3 f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))\right].
\end{align*}
The two step Adams-Bashforth method is w_0 = α_0 and w_1 = α_1 with
\[ w_{i+1} = w_i + \frac{h}{2}\left[3 f(t_i, w_i) - f(t_{i-1}, w_{i-1})\right] \quad \text{for } i = 1, \dots, N - 1, \]
and more generally an m-step explicit method has the form
\[ w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + \cdots + a_0 w_{i+1-m} + h F(t_i, h, w_i, \dots, w_{i+1-m}). \]
Example 18. The truncation error for the two step Adams-Bashforth method comes from the remainder term,
\[ h^3 f^{(2)}(\mu_i, y(\mu_i))\,(-1)^2 \int_0^1 \binom{-s}{2}\,ds = \frac{5 h^3}{12} f^{(2)}(\mu_i, y(\mu_i)), \]
so
\[ \tau_{i+1}(h) = \frac{y(t_{i+1}) - y(t_i)}{h} - \frac{1}{2}\left[3 f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1}))\right] = \frac{1}{h}\cdot\frac{5 h^3}{12} f^{(2)}(\mu_i, y(\mu_i)) = \frac{5}{12} h^2 y^{(3)}(\mu_i). \]
The three step Adams-Bashforth method is
\[ w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \]
\[ w_{i+1} = w_i + \frac{h}{12}\left[23 f(t_i, w_i) - 16 f(t_{i-1}, w_{i-1}) + 5 f(t_{i-2}, w_{i-2})\right], \]
where i = 2, 3, \dots, N - 1. The local truncation error is of order 3,
\[ \tau_{i+1}(h) = \frac{3}{8} h^3 y^{(4)}(\mu_i), \quad \mu_i \in (t_{i-2}, t_{i+1}). \]
The four step Adams-Bashforth method is
\[ w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad w_3 = \alpha_3, \]
\[ w_{i+1} = w_i + \frac{h}{24}\left[55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3})\right], \]
where i = 3, \dots, N - 1. The local truncation error is of order 4,
\[ \tau_{i+1}(h) = \frac{251}{720} h^4 y^{(5)}(\mu_i), \quad \mu_i \in (t_{i-3}, t_{i+1}). \]
Example 19. Consider
\[ y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5, \]
with the Adams-Bashforth two step method, w_0 = α_0 and w_1 = α_1, with
\[ w_{i+1} = w_i + \frac{h}{2}\left[3 f(t_i, w_i) - f(t_{i-1}, w_{i-1})\right] \quad \text{for } i = 1, \dots, N - 1, \]
and truncation error
\[ \tau_{i+1}(h) = \frac{5}{12} h^2 y^{(3)}(\mu_i), \quad \mu_i \in (t_{i-1}, t_{i+1}). \]
1. Calculate α_0 and α_1. From the initial condition we have w_0 = 0.5. To calculate w_1 we use the modified Euler method,
\[ w_0 = \alpha, \quad w_{i+1} = w_i + \frac{h}{2}\left[f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i))\right]. \]
We only need this to calculate w_1. With h = 0.2 and w_0 = 0.5,
\begin{align*}
w_1 &= w_0 + \frac{h}{2}\left[f(t_0, w_0) + f(t_1, w_0 + h f(t_0, w_0))\right] \\
&= w_0 + \frac{0.2}{2}\left[w_0 - t_0^2 + 1 + w_0 + h(w_0 - t_0^2 + 1) - t_1^2 + 1\right] \\
&= 0.5 + \frac{0.2}{2}\left[0.5 - 0 + 1 + 0.5 + 0.2(1.5) - (0.2)^2 + 1\right] \\
&= 0.826.
\end{align*}
2. The two step method can now be used:
\begin{align*}
w_2 &= w_1 + \frac{h}{2}\left[3 f(t_1, w_1) - f(t_0, w_0)\right] \\
&= w_1 + \frac{h}{2}\left[3(w_1 - t_1^2 + 1) - (w_0 - t_0^2 + 1)\right] \\
&= 0.826 + \frac{0.2}{2}\left[3(0.826 - 0.2^2 + 1) - (0.5 - 0^2 + 1)\right] \\
&= 1.2118,
\end{align*}
and in general,
\[ w_{i+1} = w_i + \frac{0.2}{2}\left[3(w_i - t_i^2 + 1) - (w_{i-1} - t_{i-1}^2 + 1)\right]. \]
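The hand calculation can be reproduced as follows (a sketch):

```python
# Two step Adams-Bashforth for y' = y - t^2 + 1, y(0) = 0.5, h = 0.2,
# started with one modified-Euler step, as in the worked example.
def f(t, y):
    return y - t**2 + 1

h, N = 0.2, 10
t = [i * h for i in range(N + 1)]
w = [0.5]
# modified Euler start for w1
w.append(w[0] + (h / 2) * (f(t[0], w[0]) + f(t[1], w[0] + h * f(t[0], w[0]))))
for i in range(1, N):
    w.append(w[i] + (h / 2) * (3 * f(t[i], w[i]) - f(t[i - 1], w[i - 1])))
```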
The one step Adams-Moulton (trapezoidal) method is
\[ w_{i+1} = w_i + \frac{h}{2}\left[f(t_{i+1}, w_{i+1}) + f(t_i, w_i)\right] \quad \text{for } i = 0, \dots, N - 1. \]

4.2 Derivation of the implicit multi-step method

The error in the underlying interpolation is
\[ \text{Error} = \int_{t_i}^{t_{i+1}} \frac{(t - t_{i+1})(t - t_i)}{2!} f^{(2)}(\mu_i, y(\mu_i))\,dt = -\frac{1}{12} h^3 f^{(2)}(\mu_i, y(\mu_i)), \]
so
\[ \tau_{i+1}(h) = \frac{-\frac{1}{12} h^3 f^{(2)}(\mu_i, y(\mu_i))}{h} = -\frac{1}{12} h^2 f^{(2)}(\mu_i, y(\mu_i)). \]
The local truncation error for the one step Adams-Moulton method is of order 2:
\[ \tau_{i+1}(h) = O(h^2). \]
As before,
\[ y(t_{i+1}) - y(t_i) = \int_{-1}^{0} h\, y'(t_{i+1} + sh)\,ds = \sum_{k=0}^{m-1} \nabla^k f(t_{i+1}, y(t_{i+1}))\, h\, (-1)^k \int_{-1}^{0} \binom{-s}{k}\,ds + \frac{h^{m+1}}{m!}\int_{-1}^{0} s(s+1)\cdots(s+m-1)\, f^{(m)}(\xi, y(\xi))\,ds. \]
Example 20. For k = 3 we have
\[ (-1)^3 \int_{-1}^{0} \binom{-s}{3}\,ds = \int_{-1}^{0} \frac{s(s+1)(s+2)}{1 \cdot 2 \cdot 3}\,ds = \frac{1}{6}\left[\frac{s^4}{4} + s^3 + s^2\right]_{-1}^{0} = -\frac{1}{24}. \]
As a consequence,
\[ y(t_{i+1}) = y(t_i) + h\left[f(t_{i+1}, y(t_{i+1})) - \frac{1}{2}\nabla f(t_{i+1}, y(t_{i+1})) - \frac{1}{12}\nabla^2 f(t_{i+1}, y(t_{i+1})) - \cdots\right] + \frac{h^{m+1}}{m!}\int_{-1}^{0} s(s+1)\cdots(s+m-1)\, f^{(m)}(\xi, y(\xi))\,ds. \]
The two step Adams-Moulton method is
\[ w_0 = \alpha, \quad w_1 = \alpha_1, \]
\[ w_{i+1} = w_i + \frac{h}{12}\left[5 f(t_{i+1}, w_{i+1}) + 8 f(t_i, w_i) - f(t_{i-1}, w_{i-1})\right], \]
where i = 1, 2, \dots, N - 1. The local truncation error is
\[ \tau_{i+1}(h) = -\frac{1}{24} h^3 y^{(4)}(\mu_i), \quad \mu_i \in (t_{i-1}, t_{i+1}). \]
The three step Adams-Moulton method is
\[ w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \]
\[ w_{i+1} = w_i + \frac{h}{24}\left[9 f(t_{i+1}, w_{i+1}) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2})\right], \]
where i = 2, 3, \dots, N - 1. The local truncation error is
\[ \tau_{i+1}(h) = -\frac{19}{720} h^4 y^{(5)}(\mu_i), \quad \mu_i \in (t_{i-2}, t_{i+1}). \]
Example 21. Consider
\[ y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5, \]
with the Adams-Moulton two step method, w_0 = α_0 and w_1 = α_1, with
\[ w_{i+1} = w_i + \frac{h}{12}\left[5 f(t_{i+1}, w_{i+1}) + 8 f(t_i, w_i) - f(t_{i-1}, w_{i-1})\right] \quad \text{for } i = 1, \dots, N - 1, \]
and truncation error τ_{i+1}(h) = -\frac{1}{24} h^3 y^{(4)}(\mu_i), µ_i ∈ (t_{i-1}, t_{i+1}).
1. Calculate α_0 and α_1. From the initial condition we have w_0 = 0.5. To calculate w_1 we use the modified Euler method; we now have α_1 = w_1 = 0.826.
2. Since f is linear in y, the implicit equation can be solved directly:
\begin{align*}
w_2 &= w_1 + \frac{h}{12}\left[5 f(t_2, w_2) + 8 f(t_1, w_1) - f(t_0, w_0)\right] \\
&= w_1 + \frac{h}{12}\left[5(w_2 - t_2^2 + 1) + 8(w_1 - t_1^2 + 1) - (w_0 - t_0^2 + 1)\right],
\end{align*}
and in general, collecting the w_{i+1} terms on the left,
\[ \left(1 - \frac{5h}{12}\right) w_{i+1} = \left(1 + \frac{8h}{12}\right) w_i - \frac{h}{12} w_{i-1} + \frac{h}{12}\left(-5 t_{i+1}^2 - 8 t_i^2 + t_{i-1}^2 + 12\right). \]

4.3 Table of Adams methods

In practice implicit methods are not used as above. They are used to improve approximations obtained by explicit methods. The combination of the two is called the predictor-corrector method.
Example 22. Consider the following fourth order method for solving an initial-value problem.
1. Calculate w_0, w_1, w_2, w_3 for the four step Adams-Bashforth method; to do this we use a 4th order one step method, e.g. Runge-Kutta.
2. Calculate an approximation w_4^{(0)} to y(t_4) using the Adams-Bashforth method as the predictor:
\[ w_4^{(0)} = w_3 + \frac{h}{24}\left[55 f(t_3, w_3) - 59 f(t_2, w_2) + 37 f(t_1, w_1) - 9 f(t_0, w_0)\right]. \]
3. Correct this with the three step Adams-Moulton method as the corrector:
\[ w_4^{(1)} = w_3 + \frac{h}{24}\left[9 f(t_4, w_4^{(0)}) + 19 f(t_3, w_3) - 5 f(t_2, w_2) + f(t_1, w_1)\right]. \]
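A sketch of this predictor-corrector scheme applied to y' = y − t² + 1, y(0) = 0.5, with an RK4 start (the exact solution is used only to measure the error):

```python
import math

# Adams-Bashforth four step predictor, Adams-Moulton three step corrector,
# with RK4 starting values w1, w2, w3.
def f(t, y):
    return y - t**2 + 1

h, N = 0.1, 20
t = [i * h for i in range(N + 1)]
w = [0.5]
for i in range(3):                              # RK4 start
    k1 = h * f(t[i], w[i])
    k2 = h * f(t[i] + h / 2, w[i] + k1 / 2)
    k3 = h * f(t[i] + h / 2, w[i] + k2 / 2)
    k4 = h * f(t[i] + h, w[i] + k3)
    w.append(w[i] + (k1 + 2 * k2 + 2 * k3 + k4) / 6)
for i in range(3, N):
    pred = w[i] + (h / 24) * (55 * f(t[i], w[i]) - 59 * f(t[i-1], w[i-1])
                              + 37 * f(t[i-2], w[i-2]) - 9 * f(t[i-3], w[i-3]))
    w.append(w[i] + (h / 24) * (9 * f(t[i+1], pred) + 19 * f(t[i], w[i])
                                - 5 * f(t[i-1], w[i-1]) + f(t[i-2], w[i-2])))

error = abs(w[N] - ((t[N] + 1)**2 - 0.5 * math.exp(t[N])))   # exact y(2)
```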
Example 23. The Adams-Bashforth four step method,
\[ w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad w_3 = \alpha_3, \]
comes from
\[ y(t_{i+1}) = y(t_i) + \frac{h}{24}\left[55 f(t_i, y(t_i)) - 59 f(t_{i-1}, y(t_{i-1})) + 37 f(t_{i-2}, y(t_{i-2})) - 9 f(t_{i-3}, y(t_{i-3}))\right] + \frac{251}{720} h^5 y^{(5)}(\mu_i), \]
with µ_i ∈ (t_{i-3}, t_{i+1}), so the truncation error is \frac{251}{720} h^4 y^{(5)}(\mu_i). The three step Adams-Moulton method comes from
\[ y(t_{i+1}) = y(t_i) + \frac{h}{24}\left[9 f(t_{i+1}, y(t_{i+1})) + 19 f(t_i, y(t_i)) - 5 f(t_{i-1}, y(t_{i-1})) + f(t_{i-2}, y(t_{i-2}))\right] - \frac{19}{720} h^5 y^{(5)}(\xi_i), \]
with ξ_i ∈ (t_{i-2}, t_{i+1}), so the truncation error is
\[ \frac{y(t_{i+1}) - w_{i+1}}{h} = -\frac{19}{720} h^4 y^{(5)}(\xi_i). \tag{28} \]
We make the assumption that for small h,
\[ y^{(5)}(\xi_i) \approx y^{(5)}(\mu_i). \]
Subtracting the predictor from the corrector,
\[ \frac{w_{i+1} - w_{i+1}^{(0)}}{h} = \frac{h^4}{720}\left[251 y^{(5)}(\mu_i) + 19 y^{(5)}(\xi_i)\right] \approx \frac{3}{8} h^4 y^{(5)}(\xi_i), \]
so
\[ y^{(5)}(\xi_i) \approx \frac{8}{3 h^5}\left(w_{i+1} - w_{i+1}^{(0)}\right). \tag{29} \]
Using this result to eliminate h^4 y^{(5)}(\xi_i) from (28),
\[ |\tau_{i+1}(h)| = \frac{|y(t_{i+1}) - w_{i+1}|}{h} \approx \frac{19 h^4}{720} \cdot \frac{8}{3 h^5}\left|w_{i+1} - w_{i+1}^{(0)}\right| = \frac{19\left|w_{i+1} - w_{i+1}^{(0)}\right|}{270 h}. \]
4.5 Improved step-size multi-step method

q is normally chosen as
\[ q = 1.5\left(\frac{h\varepsilon}{\left|w_{i+1} - w_{i+1}^{(0)}\right|}\right)^{1/4}. \]
With this knowledge we can change step sizes and control our error.
4.6 Problem Sheet 3

b) y' = y − t, (0 ≤ t ≤ 2), with the initial condition y(0) = 2, N = 4, y(t) = e^t + t + 1.
b) y' = y − t, (0 ≤ t ≤ 2), with the initial condition y(0) = 2, N = 4, y(t) = e^t + t + 1.
Derive the difference equation for the 1-step Adams-Bashforth (Euler) method:
\[ w_{n+1} = w_n + h f(t_n, w_n), \]
with the local truncation error
\[ \tau_{n+1}(h) = \frac{h}{2} y^{(2)}(\mu_n), \]
where µ_n ∈ (t_n, t_{n+1}).
Derive the difference equation for the 2-step Adams-Bashforth method:
\[ w_{n+1} = w_n + \frac{3}{2} h f(t_n, w_n) - \frac{1}{2} h f(t_{n-1}, w_{n-1}), \]
with the local truncation error
\[ \tau_{n+1}(h) = \frac{5 h^2}{12} y^{(3)}(\mu_n), \]
where µ_n ∈ (t_{n-1}, t_{n+1}).
Derive the difference equation for the 3-step Adams-Bashforth method:
\[ w_{n+1} = w_n + h\left(\frac{23}{12} f(t_n, w_n) - \frac{4}{3} f(t_{n-1}, w_{n-1}) + \frac{5}{12} f(t_{n-2}, w_{n-2})\right), \]
with the local truncation error
\[ \tau_{n+1}(h) = \frac{9 h^3}{24} y^{(4)}(\mu_n), \]
where µ_n ∈ (t_{n-2}, t_{n+1}).
6. Derive the difference equation for the 0-step Adams-Moulton method:
\[ w_{n+1} = w_n + h f(t_{n+1}, w_{n+1}), \]
with the local truncation error
\[ \tau_{n+1}(h) = -\frac{h}{2} y^{(2)}(\mu_n), \]
where µ_n ∈ (t_n, t_{n+1}).
7. Derive the difference equation for the 1-step Adams-Moulton method:
\[ w_{n+1} = w_n + \frac{1}{2} h f(t_{n+1}, w_{n+1}) + \frac{1}{2} h f(t_n, w_n), \]
with the local truncation error
\[ \tau_{n+1}(h) = -\frac{h^2}{12} y^{(3)}(\mu_n), \]
where µ_n ∈ (t_n, t_{n+1}).
8. Derive the difference equation for the 2-step Adams-Moulton method:
\[ w_{n+1} = w_n + \frac{5}{12} h f(t_{n+1}, w_{n+1}) + \frac{8}{12} h f(t_n, w_n) - \frac{1}{12} h f(t_{n-1}, w_{n-1}), \]
with the local truncation error
\[ \tau_{n+1}(h) = -\frac{h^3}{24} y^{(4)}(\mu_n), \]
where µ_n ∈ (t_{n-1}, t_{n+1}).
9. Derive the difference equation for the 3-step Adams-Moulton method:
\[ w_{n+1} = w_n + \frac{9}{24} h f(t_{n+1}, w_{n+1}) + \frac{19}{24} h f(t_n, w_n) - \frac{5}{24} h f(t_{n-1}, w_{n-1}) + \frac{1}{24} h f(t_{n-2}, w_{n-2}), \]
with the local truncation error
\[ \tau_{n+1}(h) = -\frac{19 h^4}{720} y^{(5)}(\mu_n), \]
where µ_n ∈ (t_{n-2}, t_{n+1}).
5 CONSISTENCY, CONVERGENCE AND STABILITY

The local truncation error is
\[ \tau_i(h) = \frac{y_{i+1} - y_i}{h} - F(t_i, y_i, h, f). \]
As h → 0, does F(t_i, y_i, h, f) → f(t, y)? Consider the one step method
\[ w_0 = \alpha, \quad w_{i+1} = w_i + h F(t_i, w_i; h). \]

5.1 One Step Methods

Suppose also that a number h_0 > 0 exists and that F(t_i, w_i; h) is continuous and satisfies a Lipschitz Condition in the variable w with Lipschitz constant L on the set
\[ D = \{(t, w, h) \mid a \le t \le b,\ -\infty < w < \infty,\ 0 \le h \le h_0\}. \]
Then
\[ |y(t_i) - w_i| \le \frac{\tau(h)}{L} e^{L(t_i - a)}. \]
Example 24. Consider the modified Euler method given by
\[ w_0 = \alpha, \quad w_{i+1} = w_i + \frac{h}{2}\left[f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i))\right]. \]
Verify that this method satisfies the theorem. For this method,
\[ F(t, w; h) = \frac{1}{2} f(t, w) + \frac{1}{2} f(t + h, w + h f(t, w)). \]
If f satisfies the Lipschitz Condition on {(t, w) | a ≤ t ≤ b, −∞ < w < ∞} in the variable w with constant L, then
\[ F(t, w; h) - F(t, \hat{w}; h) = \frac{1}{2} f(t, w) + \frac{1}{2} f(t + h, w + h f(t, w)) - \frac{1}{2} f(t, \hat{w}) - \frac{1}{2} f(t + h, \hat{w} + h f(t, \hat{w})), \]
so
\begin{align*}
|F(t, w; h) - F(t, \hat{w}; h)| &\le \frac{1}{2} L|w - \hat{w}| + \frac{1}{2} L\left|w + h f(t, w) - \hat{w} - h f(t, \hat{w})\right| \\
&\le L|w - \hat{w}| + \frac{1}{2} L\left|h f(t, w) - h f(t, \hat{w})\right| \\
&\le L|w - \hat{w}| + \frac{1}{2} h L^2 |w - \hat{w}| \\
&= \left(L + \frac{1}{2} h L^2\right)|w - \hat{w}|.
\end{align*}
5.2 Multi-step methods

Consider an m-step Adams-Bashforth predictor equation
\[ w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + \cdots + a_0 w_{i+1-m} + h F(t_i, h, w_{i+1}, \dots, w_{i+1-m}), \]
with local truncation error τ_{i+1}(h), and an (m−1)-step Adams-Moulton corrector equation with local truncation error τ̂_{i+1}(h). In addition suppose that f(t, y) and f_y(t, y) are continuous on D = {(t, y) | a ≤ t ≤ b, −∞ < y < ∞} and that f_y is bounded. Then the local truncation error σ_{i+1}(h) of the predictor-corrector method is
\[ \sigma_{i+1}(h) = \hat{\tau}_{i+1}(h) + h\, \tau_{i+1}(h)\, \hat{b}_{m-1}\, \frac{\partial f}{\partial y}(t_{i+1}, \theta_{i+1}). \]
Associated with the difference equation
\[ w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + \cdots + a_0 w_{i+1-m} + h F(t_i, h, w_{i+1}, \dots, w_{i+1-m}) \]
is the characteristic equation, given by
\[ \lambda^m - a_{m-1}\lambda^{m-1} - a_{m-2}\lambda^{m-2} - \cdots - a_0 = 0. \]
If |λ_i| ≤ 1 for each i = 1, \dots, m, and all roots with absolute value 1 are simple roots, then the difference equation is said to satisfy the root condition.
1. Methods that satisfy the root condition and have λ = 1 as the only root with magnitude one are called strongly stable;
2. Methods that satisfy the root condition and have more than one distinct root with magnitude one are called weakly stable;
3. Methods that do not satisfy the root condition are called unstable.
Theorem. A multistep method of the form
\[ w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + \cdots + a_0 w_{i+1-m} + h F(t_i, h, w_{i+1}, \dots, w_{i+1-m}) \]
is stable if and only if it satisfies the root condition. Moreover, if the difference method is consistent with the differential equation, then the method is stable if and only if it is convergent.
Example 25
We have seen that the fourth order Adams-Bashforth
method can be expressed as
wi+1 = am−1 wi + am−2 wi−1 + ... + a0 wi+1−m + hF (ti , h, wi+1 , wi , ..., wi−3 )
5.2 Multi-step methods 61
where
F(t_i, h, w_{i+1}, w_i, ..., w_{i−3}) = (1/24)[55 f(t_i, w_i) − 59 f(t_{i−1}, w_{i−1}) + 37 f(t_{i−2}, w_{i−2}) − 9 f(t_{i−3}, w_{i−3})],
so m = 4, a_0 = 0, a_1 = 0, a_2 = 0 and a_3 = 1.
The characteristic equation is
λ⁴ − λ³ = λ³(λ − 1) = 0,
whose only root of magnitude one is the simple root λ = 1, so the method satisfies the root condition and is strongly stable.
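The root-condition check can be sketched numerically. The helper below is illustrative (not from the notes); the roots of λ⁴ − λ³ = 0 and λ⁴ − 1 = 0 are written down directly since both polynomials factor by inspection.

```python
# Numerical check of the root condition for a characteristic polynomial.

def satisfies_root_condition(roots, tol=1e-12):
    """All |root| <= 1, and every root of magnitude 1 must be simple."""
    if any(abs(r) > 1 + tol for r in roots):
        return False
    on_circle = [r for r in roots if abs(abs(r) - 1) <= tol]
    for a in range(len(on_circle)):
        for b in range(a + 1, len(on_circle)):
            if abs(on_circle[a] - on_circle[b]) <= tol:
                return False          # repeated root on the unit circle
    return True

ab4_roots = [0.0, 0.0, 0.0, 1.0]      # lambda^3 (lambda - 1) = 0
milne_roots = [1.0, -1.0, 1j, -1j]    # lambda^4 - 1 = 0 (see Example 26)
print(satisfies_root_condition(ab4_roots))    # True
print(satisfies_root_condition(milne_roots))  # True (but only weakly stable)
```

Note that the root condition alone does not distinguish strong from weak stability; that requires counting how many roots sit on the unit circle.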
Example 26
The explicit multi-step method given by
w_{i+1} = w_{i−3} + (4h/3)[2 f(t_i, w_i) − f(t_{i−1}, w_{i−1}) + 2 f(t_{i−2}, w_{i−2})]
has the characteristic equation
λ⁴ − 1 = 0,
whose roots λ = 1, −1, i, −i are all simple and of magnitude one, so the method satisfies the root condition but, having more than one root of magnitude one, is only weakly stable.
Example 27
The explicit multi-step method given by
w_{i+1} = a w_{i−3} + (4h/3)[2 f(t_i, w_i) − f(t_{i−1}, w_{i−1}) + 2 f(t_{i−2}, w_{i−2})]
has the characteristic equation
λ⁴ − a = 0,
which has the roots λ₁ = a^{1/4}, λ₂ = −a^{1/4}, λ₃ = i a^{1/4} and λ₄ = −i a^{1/4}; when a > 1 the method does not satisfy the root condition, and hence is unstable.
Example 28
Solving the Initial Value Problem
y′ = −0.5y²,  y(0) = 1,
using the weakly stable method
w_{i+1} = w_{i−3} + (4h/3)[2 f(t_i, w_i) − f(t_{i−1}, w_{i−1}) + 2 f(t_{i−2}, w_{i−2})]
and two different unstable methods:
1. w_{i+1} = 1.0001 w_{i−3} + (4h/3)[2 f(t_i, w_i) − f(t_{i−1}, w_{i−1}) + 2 f(t_{i−2}, w_{i−2})],
2. w_{i+1} = 1.5 w_{i−3} + (4h/3)[2 f(t_i, w_i) − f(t_{i−1}, w_{i−1}) + 2 f(t_{i−2}, w_{i−2})].
Each of the unstable methods has the characteristic equation
λ⁴ − a = 0,
with roots λ = ±a^{1/4} and λ = ±i a^{1/4}; since a > 1 in both cases, the root condition is not satisfied, and hence the methods are unstable.
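A small experiment illustrates the contrast. The sketch below (names are illustrative) uses exact starting values taken from the exact solution y(t) = 2/(t + 2) of this IVP, then steps the method with a = 1 (weakly stable) and a = 1.5 (unstable):

```python
# Compare the weakly stable method (a = 1) with an unstable one (a = 1.5)
# on y' = -0.5 y^2, y(0) = 1, whose exact solution is y(t) = 2/(t + 2).

def f(t, y):
    return -0.5 * y * y

def exact(t):
    return 2.0 / (t + 2.0)

def run(a, h=0.05, steps=20):
    w = [exact(i * h) for i in range(4)]   # exact starting values w_0..w_3
    for i in range(3, 3 + steps):
        t = i * h
        w.append(a * w[i - 3] + (4 * h / 3) * (
            2 * f(t, w[i]) - f(t - h, w[i - 1]) + 2 * f(t - 2 * h, w[i - 2])))
    return w

def max_error(a):
    w = run(a)
    return max(abs(wi - exact(i * 0.05)) for i, wi in enumerate(w))

err_weak = max_error(1.0)      # stays close to the true solution
err_unstable = max_error(1.5)  # a > 1: the numerical solution departs at once
print(err_weak < 0.05, err_unstable > 0.3)  # True True
```

Even with exact starting values, the a = 1.5 method is wrong by O(1) after its very first step, exactly as the root-condition analysis predicts.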
w_{n+1} = w_n + (3/2) h f(t_n, w_n) − (1/2) h f(t_{n−1}, w_{n−1}),

w_{n+1} = w_n + (5/12) h f(t_{n+1}, w_{n+1}) + (8/12) h f(t_n, w_n) − (1/12) h f(t_{n−1}, w_{n−1}),

w_{n+1} = w_{n−1} + (h/3)[f(t_{n+1}, w_{n+1}) + 4 f(t_n, w_n) + f(t_{n−1}, w_{n−1})].
b)
w_{n+1} = (4/3) w_n − (1/3) w_{n−1} + (2/3) h f(t_{n+1}, w_{n+1}).
5.4 Initial Value Problem Review Questions 64
y( a) = α.
[8 marks]
b) Suppose f is continuous and satisfies a Lipschitz condition with constant L on D = {(t, y) | a ≤ t ≤ b, −∞ < y < ∞}, and that a constant M exists with the property that
|y″(t)| ≤ M.
0 ≤ (1 + x )m ≤ emx
[17 marks]
c) Use Euler’s method to estimate the solution of
y′ = (1 − x) y² − y,  y(0) = 1
w_{n+1} = w_n + k_2,
k_1 = h f(t_n, w_n),
k_2 = h f(t_n + (1/2) h, w_n + (1/2) k_1),
for solving the ordinary differential equation
dy/dt = f(t, y),
y(t_0) = y_0,
by using a formula of the form
w_{n+1} = w_n + (3/2) h f(t_n, w_n) − (1/2) h f(t_{n−1}, w_{n−1}),
and the local truncation error
τ_{n+1}(h) = −(5h²/12) y‴(µ_n).
[18 marks]
y′ = ty − y,  (0 ≤ t ≤ 2),  y(0) = 1,
w_0 = α_0,  w_1 = α_1,
w_{n+1} = w_n + (h/12)[5 f(t_{n+1}, w_{n+1}) + 8 f(t_n, w_n) − f(t_{n−1}, w_{n−1})],
and the local truncation error
τ_{n+1}(h) = −(h³/24) y⁗(µ_n).
[23 marks]
b) Define the terms strongly stable, weakly stable and unsta-
ble with respect to the characteristic equation.
[5 marks]
c) Show that the Adams-Bashforth two step method is strongly stable.
[5 marks]
y′ = f(t, y),  y(t_0) = y_0,
w_{n+1} = w_n + (3/2) h f(t_n, w_n) − (1/2) h f(t_{n−1}, w_{n−1})
as a predictor, and the 2-step Adams-Moulton method:
w_{n+1} = w_n + (h/12)[5 f(t_{n+1}, w_{n+1}) + 8 f(t_n, w_n) − f(t_{n−1}, w_{n−1})]
w_0 = y_0,
w_{i+1} = w_i + h f(x_i + h/2, w_i + (h/2) f(x_i, w_i)).
Assume that the Runge Kutta method satisfies the Lipschitz condition. Then for the initial value problem
y′ = f(x, y),
y(x_0) = Y_0,
show that the numerical solution {w_n} satisfies
max_{a≤x≤b} |y(x_n) − w_n| ≤ e^{(b−a)L} |y_0 − w_0| + [(e^{(b−a)L} − 1)/L] τ(h),
where
τ(h) = max_{a≤x≤b} |τ_n(y)|,
with
δ(h) → 0 as h → 0,
where
δ(h) = max_{a≤x≤b} |f(x, y) − F(x, y; h; f)|.
[11 marks]
c) How would you improve on this result?
[4 marks]
Part II
NUMERICAL SOLUTIONS TO BOUNDARY VALUE PROBLEMS
6
BOUNDARY VALUE PROBLEMS
u_1(a) = α_1,
u_2(a) = α_2,
⋮
u_m(a) = α_m.   (31)
6.2 Higher order Equations 71
Example 29
Using Euler's method on the system
u′ = u² − 2uv,  u(0) = 1,
v′ = tu + u² sin(v),  v(0) = −1,
gives the difference equations
u_{i+1} = u_i + h(u_i² − 2u_i v_i),
v_{i+1} = v_i + h(t_i u_i + u_i² sin(v_i)),
and so forth.
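These componentwise updates can be sketched directly (the function name is illustrative):

```python
import math

# Euler's method applied componentwise to the system of Example 29:
# u' = u^2 - 2uv, u(0) = 1;  v' = tu + u^2 sin(v), v(0) = -1.

def euler_system(h, n_steps):
    t, u, v = 0.0, 1.0, -1.0
    for _ in range(n_steps):
        du = u * u - 2.0 * u * v
        dv = t * u + u * u * math.sin(v)
        u, v = u + h * du, v + h * dv   # update both components together
        t += h
    return u, v

u1, v1 = euler_system(0.1, 1)   # a single step, checkable by hand:
print(u1)                       # 1 + 0.1*(1 + 2) = 1.3
```

Note the simultaneous assignment: both right-hand sides must use the values from the previous step, not a half-updated mixture.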
du_1/dt = dy/dt = u_2,
du_2/dt = dy′/dt = u_3,
⋮
du_{m−1}/dt = dy^{(m−2)}/dt = u_m,
du_m/dt = dy^{(m−1)}/dt = f(t, y, y′, ..., y^{(m−1)}) = f(t, u_1, ..., u_m),
with initial conditions
u_1(a) = y(a) = α_1,
u_2(a) = y′(a) = α_2,
⋮
u_m(a) = y^{(m−1)}(a) = α_m.
Example 30
y″ + 3y′ + 2y = e^t
with initial conditions y(0) = 1 and y′(0) = 2 can be converted to the system
u′ = v,  u(0) = 1,
v′ = e^t − 2u − 3v,  v(0) = 2.
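This system has the closed-form solution y(t) = (7/2)e^{−t} − (8/3)e^{−2t} + e^t/6 (characteristic roots −1 and −2 plus the particular solution e^t/6), which lets a sketch of Euler's method be checked against the truth:

```python
import math

# Euler's method for the converted system of Example 30:
# u' = v, v' = e^t - 2u - 3v, with u(0) = 1, v(0) = 2.

def euler(h, t_end):
    t, u, v = 0.0, 1.0, 2.0
    for _ in range(round(t_end / h)):
        u, v = u + h * v, v + h * (math.exp(t) - 2 * u - 3 * v)
        t += h
    return u

approx = euler(0.0005, 1.0)
exact = 3.5 * math.exp(-1) - (8 / 3) * math.exp(-2) + math.exp(1) / 6
print(abs(approx - exact))  # small O(h) error
```

Halving h should roughly halve the printed error, consistent with Euler's first-order accuracy.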
tions.
The simplest type of boundary conditions are
y(a) = α,
y(b) = β,
for given numbers α and β. However more general conditions such as
λ_1 y(a) + λ_2 y′(a) = α_1,
µ_1 y(b) + µ_2 y′(b) = α_2,
for given numbers α_i, λ_i and µ_i (i = 1, 2), are sometimes imposed.
Unlike Initial Value Problems, which are uniquely solvable, a boundary value problem can have no solution or many solutions.
Example 31
The differential equation
y″ + y = 0,
with y_1(x) = y(x) and y_2(x) = y′(x), can be written as the system
y_1′ = y_2,
y_2′ = −y_1.
It has the general solution
w(x) = C_1 sin(x) + C_2 cos(x).
With the boundary conditions
w(0) = 0,  w(π) = 0,
the condition at 0 forces C_2 = 0, while the condition at π then holds for every C_1, so there are infinitely many solutions. With the boundary conditions
w(0) = 0,  w(π) = 1,
the condition at 0 again forces C_2 = 0, but then w(π) = 0 ≠ 1, so there is no solution.
While we cannot state that all boundary value problems have unique solutions, we can say a few things.
6.4 Some theorems about boundary value problem 74
q(x) > 0,  a ≤ x ≤ b,
a_0, a_1 ≥ 0,  b_0, b_1 ≥ 0,   (35)
|f(x, u_1, v_1) − f(x, u_2, v_2)| ≤ K_1 |u_1 − u_2|,
|f(x, u_1, v_1) − f(x, u_2, v_2)| ≤ K_2 |v_1 − v_2|,   (37)
for all points in the region,
∂f(x, u, v)/∂u > 0,  |∂f(x, u, v)/∂v| ≤ M,
for some constant M > 0. For the boundary conditions of (36) assume that |a_1| + |a_0| ≠ 0, |b_1| + |b_0| ≠ 0 and |a_0| + |b_0| ≠ 0. Then the boundary value problem has a unique solution.
Example 32
The boundary value problem
y″ + e^{−xy} + sin(y′) = 0,  1 < x < 2,
with y(1) = y(2) = 0, has
f(x, y, y′) = −e^{−xy} − sin(y′).
Since
∂f(x, y, y′)/∂y = x e^{−xy} > 0
and
|∂f(x, y, y′)/∂y′| = |−cos(y′)| ≤ 1,
this problem has a unique solution.
Looking at problem class (34), we break this down into two Initial Value Problems:
y_1″ = p(x) y_1′ + q(x) y_1 + r(x),  y_1(a) = α,  y_1′(a) = 0,
y_2″ = p(x) y_2′ + q(x) y_2,  y_2(a) = 0,  y_2′(a) = 1.   (38)
The solution of the boundary value problem is then
y(x) = y_1(x) + [(β − y_1(b))/y_2(b)] y_2(x).   (39)
6.5 Shooting Methods 76
Example 33
y″ = 2y′ + 3y − 6
with boundary conditions
y(0) = 3,
y(1) = e³ + 2.
The exact solution is
y = e^{3x} + 2.
Following (38), the two initial value problems are
y_1″ = 2y_1′ + 3y_1 − 6,  y_1(0) = 3,  y_1′(0) = 0,   (40)
y_2″ = 2y_2′ + 3y_2,  y_2(0) = 0,  y_2′(0) = 1.   (41)
Discretising (40)
Let u_1 = y_1 and u_2 = y_1′, so that
u_1′ = u_2,  u_1(0) = 3,
u_2′ = 2u_2 + 3u_1 − 6,  u_2(0) = 0;
using the Euler method we have the two difference equations
u_{1,i+1} = u_{1,i} + h u_{2,i},
u_{2,i+1} = u_{2,i} + h(2u_{2,i} + 3u_{1,i} − 6).
Discretising (41)
Let w_1 = y_2 and w_2 = y_2′, so that
w_1′ = w_2,  w_1(0) = 0,
w_2′ = 2w_2 + 3w_1,  w_2(0) = 1;
using the Euler method we have the two difference equations
w_{1,i+1} = w_{1,i} + h w_{2,i},
w_{2,i+1} = w_{2,i} + h(2w_{2,i} + 3w_{1,i}).
y_i = u_{1,i} + [(β − u_1(b))/w_1(b)] w_{1,i}.
It can be said that
|y_i − y(x_i)| ≤ K h^n |1 + w_{1,i}/u_{1,i}|.
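A minimal sketch of linear shooting for this example (function names are illustrative) integrates both Euler systems, combines them as above, and compares the result with the exact solution e^{3x} + 2:

```python
import math

# Linear shooting for Example 33: y'' = 2y' + 3y - 6, y(0) = 3, y(1) = e^3 + 2,
# via Euler on the two IVP systems for y1 and y2, combined through (39).

def euler_pair(h):
    u1, u2 = 3.0, 0.0          # y1 and y1'
    w1, w2 = 0.0, 1.0          # y2 and y2'
    u_path, w_path = [u1], [w1]
    for _ in range(round(1.0 / h)):
        u1, u2 = u1 + h * u2, u2 + h * (2 * u2 + 3 * u1 - 6)
        w1, w2 = w1 + h * w2, w2 + h * (2 * w2 + 3 * w1)
        u_path.append(u1)
        w_path.append(w1)
    return u_path, w_path

h = 0.0005
u_path, w_path = euler_pair(h)
beta = math.exp(3) + 2
c = (beta - u_path[-1]) / w_path[-1]          # the factor (beta - y1(b))/y2(b)
y_mid = u_path[len(u_path) // 2] + c * w_path[len(w_path) // 2]
print(abs(y_mid - (math.exp(1.5) + 2)))       # O(h) error at x = 0.5
```

By construction the combined solution hits β at x = 1 exactly, so the interesting error is at interior points, as printed here.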
Example 34
y″ = −2yy′,  y(0) = 0,  y(1) = 1.
The corresponding initial value problem is
y″ = −2yy′,  y(0) = 0,  y′(0) = λ.   (42)
How to choose λ?
Our goal is to choose λ such that
F(λ) = y(b, λ) − β = 0.
Newton's method gives
λ_k = λ_{k−1} − [y(b, λ_{k−1}) − β] / (dy/dλ)(b, λ_{k−1}),
and requires knowledge of (dy/dλ)(b, λ_{k−1}). This presents a difficulty, since an explicit representation for y(b, λ) is unknown; we only know y(b, λ_0), y(b, λ_1), ..., y(b, λ_{k−1}).
Rewriting our Initial Value Problem so that it depends on both x and λ:
y″(x, λ) = f(x, y(x, λ), y′(x, λ)),  a ≤ x ≤ b,
y(a, λ) = α,  y′(a, λ) = λ.
Differentiating with respect to λ, and letting z(x, λ) denote ∂y/∂λ(x, λ), we have
∂(y″)/∂λ = ∂f/∂λ = (∂f/∂y)(∂y/∂λ) + (∂f/∂y′)(∂y′/∂λ).
Now
∂y′/∂λ = ∂/∂λ(∂y/∂x) = ∂/∂x(∂y/∂λ) = ∂z/∂x = z′,
so we have
z″(x, λ) = (∂f/∂y) z(x, λ) + (∂f/∂y′) z′(x, λ),
for a ≤ x ≤ b, and the boundary conditions
z(a, λ) = 0,  z′(a, λ) = 1.
Now we have
λ_k = λ_{k−1} − [y(b, λ_{k−1}) − β] / z(b, λ_{k−1}).
We can solve the original non-linear Boundary Value Problem by solving these two Initial Value Problems.
Example 35
(cont.)
For f(x, y, y′) = −2yy′ we have
∂f/∂y = −2y′,  ∂f/∂y′ = −2y.
We now have the two Initial Value Problems
y″ = −2yy′,  y(0) = 0,  y′(0) = λ,
z″ = (∂f/∂y) z + (∂f/∂y′) z′ = −2y′z − 2yz′,  z(0) = 0,  z′(0) = 1.
Discretising, we let y_1 = y, y_2 = y′, z_1 = z and z_2 = z′:
y_1′ = y_2,  y_1(0) = 0,
y_2′ = −2y_1 y_2,  y_2(0) = λ_k,
z_1′ = z_2,  z_1(0) = 0,
z_2′ = −2z_1 y_2 − 2y_1 z_2,  z_2(0) = 1,
with
λ_k = λ_{k−1} − [y_1(b) − β] / z_1(b).
Then solve using a one step method.
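The whole nonlinear shooting loop can be sketched as follows (Euler is used as the one-step method, as in the text; names are illustrative). The z-recurrence is the exact derivative of the discrete y-recurrence, so Newton converges to a residual near machine precision:

```python
# Nonlinear shooting with Newton's method for Example 34:
# y'' = -2yy', y(0) = 0, y(1) = 1, with sensitivity z'' = -2y'z - 2yz'.

def integrate(lam, h=0.001):
    y1, y2 = 0.0, lam     # y and y'
    z1, z2 = 0.0, 1.0     # z = dy/dlambda and z'
    for _ in range(round(1.0 / h)):
        y1, y2, z1, z2 = (
            y1 + h * y2,
            y2 + h * (-2 * y1 * y2),
            z1 + h * z2,
            z2 + h * (-2 * z1 * y2 - 2 * y1 * z2),
        )
    return y1, z1

lam = 1.0                               # initial guess for y'(0)
for _ in range(25):
    yb, zb = integrate(lam)
    lam = lam - (yb - 1.0) / zb         # Newton update lambda_k

yb, _ = integrate(lam)
print(abs(yb - 1.0))  # boundary residual after the Newton iteration
```

The residual measures how well the discrete trajectory hits y(1) = 1; the remaining discretisation error in y itself is still O(h) from Euler.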
y(a) = α,  y(b) = β.
As with all cases we divide the interval into evenly spaced mesh points
x_0 = a,  x_N = b,  x_i = x_0 + ih,  h = (b − a)/N.
We now replace the derivatives y′(x) and y″(x) with the centered difference approximations
y′(x_i) = (1/2h)(y(x_{i+1}) − y(x_{i−1})) − (h²/12) y‴(ξ_i),
6.6 Finite Difference method 81
y″(x_i) = (1/h²)(y(x_{i+1}) − 2y(x_i) + y(x_{i−1})) − (h²/12) y⁗(µ_i),
for some x_{i−1} ≤ ξ_i, µ_i ≤ x_{i+1}, for i = 1, ..., N − 1.
We now have the equation
(1/h²)(y(x_{i+1}) − 2y(x_i) + y(x_{i−1})) = p(x_i)(1/2h)(y(x_{i+1}) − y(x_{i−1})) + q(x_i) y(x_i) + r(x_i).
This is rearranged so that all the unknowns are together:
(1 + (h/2) p(x_i)) y(x_{i−1}) − (2 + h² q(x_i)) y(x_i) + (1 − (h/2) p(x_i)) y(x_{i+1}) = h² r(x_i),
for i = 1, ..., N − 1.
Since the values of p(x_i), q(x_i) and r(x_i) are known, each of these is a linear algebraic equation involving y(x_{i−1}), y(x_i) and y(x_{i+1}).
This produces a system of N − 1 linear equations with N − 1 unknowns y(x_1), ..., y(x_{N−1}).
The first equation, corresponding to i = 1, simplifies to
−(2 + h² q(x_1)) y(x_1) + (1 − (h/2) p(x_1)) y(x_2) = h² r(x_1) − (1 + (h/2) p(x_1)) α,
because y(a) = α; similarly, in the last equation (i = N − 1) the known term (1 − (h/2) p(x_{N−1})) β moves to the right-hand side because y(b) = β.
The values of y_i, (i = 1, ..., N − 1), can therefore be found by solving the tridiagonal system
A y = b,
where
A =
[ −(2 + h²q(x_1))      1 − (h/2)p(x_1)        0                    ...   0                   ]
[ 1 + (h/2)p(x_2)      −(2 + h²q(x_2))        1 − (h/2)p(x_2)      ...                       ]
[        ⋱                    ⋱                      ⋱                                        ]
[ ...                  1 + (h/2)p(x_{N−2})    −(2 + h²q(x_{N−2}))        1 − (h/2)p(x_{N−2}) ]
[ 0                    ...                    1 + (h/2)p(x_{N−1})        −(2 + h²q(x_{N−1})) ]
y = (y_1, y_2, ..., y_{N−2}, y_{N−1})^T,
b = (h²r_1 − (1 + (h/2)p_1) α,  h²r_2,  ...,  h²r_{N−2},  h²r_{N−1} − (1 − (h/2)p_{N−1}) β)^T.
Example 36
Looking at the simple case
d²y/dx² = 4y,  y(0) = 1.1752,  y(1) = 10.0179.
Our difference equation is
(1/h²)(y(x_{i+1}) − 2y(x_i) + y(x_{i−1})) = 4y(x_i),  i = 1, ..., N − 1;
dividing [0, 1] into 4 subintervals we have h = (1 − 0)/4 and
x_i = x_0 + ih = 0 + i(0.25);
multiplying across by h² gives
y(x_2) − 2.25 y(x_1) = −1.1752,
y(x_3) − 2.25 y(x_2) + y(x_1) = 0,
−2.25 y(x_3) + y(x_2) = −10.0179,
or in matrix form
[ −2.25    1      0   ] [y_1]   [ −1.1752  ]
[   1    −2.25    1   ] [y_2] = [     0    ]
[   0      1    −2.25 ] [y_3]   [ −10.0179 ]
x y Exact sinh(2x + 1)
0 1.1752 1.1752
0.25 2.1467 2.1293
0.5 3.6549 3.6269
0.75 6.0768 6.0502
1.0 10.0179 10.0179
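The tridiagonal system above can be solved with the Thomas algorithm, reproducing the tabulated values; this is a self-contained sketch:

```python
# Finite-difference solution of y'' = 4y, y(0) = 1.1752, y(1) = 10.0179,
# with h = 0.25 (Example 36), solved by the Thomas (tridiagonal) algorithm.

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system; sub[0] and sup[-1] are unused."""
    n = len(diag)
    c, d = sup[:], rhs[:]
    c[0] /= diag[0]
    d[0] /= diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] /= m
        d[i] = (d[i] - sub[i] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):       # back substitution
        d[i] -= c[i] * d[i + 1]
    return d

# y_{i+1} - 2.25 y_i + y_{i-1} = 0 at the three interior points
y = thomas([0.0, 1.0, 1.0], [-2.25, -2.25, -2.25], [1.0, 1.0, 0.0],
           [-1.1752, 0.0, -10.0179])
print([round(v, 4) for v in y])  # [2.1467, 3.6549, 6.0768]
```

The printed values match the table, and the small discrepancy from sinh(2x + 1) at the interior points is the O(h²) discretisation error.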
Example 37
Looking at a more involved boundary value problem
y″ = x y′ − 3y + e^x,  y(0) = 1,  y(1) = 2.
Let N = 5, then h = (1 − 0)/5 = 0.2. The difference equation is of the form
(1/h²)(y(x_{i+1}) − 2y(x_i) + y(x_{i−1})) = x_i (1/2h)(y(x_{i+1}) − y(x_{i−1})) − 3y(x_i) + e^{x_i}.
Rearranging and putting h = 0.2,
(1 + 0.1 x_i) y(x_{i−1}) − 1.88 y(x_i) + (1 − 0.1 x_i) y(x_{i+1}) = 0.04 e^{x_i}.
In matrix form this is a 4 × 4 tridiagonal system, which is solved as in the previous example.
NUMERICAL SOLUTIONS TO PARTIAL DIFFERENTIAL EQUATIONS
7
PARTIAL DIFFERENTIAL EQUATIONS
7.1 introduction
−∇²u = −∂²u/∂x² − ∂²u/∂y² = f(x, y)   (Poisson Eqn)
∂u/∂t + v ∂u/∂x = 0   (Advection Eqn)
∂u/∂t − D ∂²u/∂x² = 0   (Heat Eqn)
∂²u/∂t² − c² ∂²u/∂x² = 0   (Wave Equation)
Here v, D, c are real positive constants. In these cases x, y are the
space coordinates and t, x are often viewed as time and space coordi-
nates, respectively.
These are only examples and do not cover all cases. In real occur-
rences PDE’s usually have 3 or 4 variables.
Φ(x, y, u, ∂u/∂x, ∂u/∂y, ∂²u/∂x², ...) = 0
7.2 PDE Classification 86
Example 38
α(t, x) ∂u/∂t + β(t, x) ∂u/∂x = γ(t, x)   (43)
A solution u(t, x ) to this PDE defines a surface {t, x, u(t, x )}
lying over some region of the (t, x )-plane.
Consider any smooth path in the (t, x )-plane lying be-
low the solution {t, x, u(t, x )}.
Such a path has a parameterization (t(s), x (s)) where
the parameter s measures progress along the path.
What is the rate of change du/ds of the solution as we travel along the path (t(s), x(s))?
The chain rule provides the answer
(dt/ds) ∂u/∂t + (dx/ds) ∂u/∂x = du/ds   (44)
Equation (44) holds for an arbitrary smooth path in
the (t, x )-plane. Restricting attention to a specific fam-
ily of paths leads to a useful observation: When
dt/ds = α(t, x) and dx/ds = β(t, x)   (45)
the simultaneous validity of (43) and (44) requires that
du/ds = γ(t, x).   (46)
Equation (45) defines a family of curves (t(s), x(s)), called characteristic curves, in the (t, x)-plane.
Equation (46) is an ODE, called the characteristic equation, that the solution must satisfy along each characteristic curve.
Thus the original PDE collapses to an ODE along the
characteristic curves. Characteristic curves are paths
along which information about the solution to the
PDE propagates from points where the initial value
or boundary values are known.
α(x, y) ∂²u/∂x² + β(x, y) ∂²u/∂x∂y + γ(x, y) ∂²u/∂y² = Ψ(x, y, u, ∂u/∂x, ∂u/∂y)   (47)
(dx/ds) ∂²u/∂x² + (dy/ds) ∂²u/∂x∂y = (d/ds)(∂u/∂x),
(dx/ds) ∂²u/∂x∂y + (dy/ds) ∂²u/∂y² = (d/ds)(∂u/∂y).
If the solution u(x, y) is continuously differentiable then these relationships together with the original PDE yield the following system:
[  α       β       γ    ] [ ∂²u/∂x²  ]   [       Ψ       ]
[ dx/ds   dy/ds    0    ] [ ∂²u/∂x∂y ] = [ (d/ds)(∂u/∂x) ]   (48)
[  0      dx/ds   dy/ds ] [ ∂²u/∂y²  ]   [ (d/ds)(∂u/∂y) ]
1. HYPERBOLIC
β2 − 4αγ > 0 This gives two families of real characteristic curves.
2. PARABOLIC
β2 − 4αγ = 0 This gives exactly one family of real characteristic
curves.
3. ELLIPTIC
β² − 4αγ < 0 This gives no real characteristic curves.
Example 39
c² ∂²u/∂x² − ∂²u/∂t² = 0;
now equating this with our formula for the characteristics we have
dx/dt = (0 ± √(0 + 4c²))/2 = ±c;
this implies that the characteristics are x + ct = const and x − ct = const. This means that the effects travel along the characteristics.
Laplace equation
∂²u/∂x² + ∂²u/∂y² = 0
has β² − 4αγ = −4 < 0, so
dy/dx = (0 ± √(−4))/2
has no real solutions and there are no real characteristic curves.
We can also state that hyperbolic and parabolic equations are initial-boundary value problems, while elliptic problems are boundary value problems.
7.3 Difference Operators 89
Throughout this chapter we will use U to denote the exact solution and w to denote the numerical (approximate) solution.
1-D difference operators
D⁺U_i = (U_{i+1} − U_i)/h_{i+1}   (Forward)
D⁻U_i = (U_i − U_{i−1})/h_i   (Backward)
D⁰U_i = (U_{i+1} − U_{i−1})/(x_{i+1} − x_{i−1})   (Centered)
Dx⁺U_{i,j} = (U_{i+1,j} − U_{i,j})/(x_{i+1} − x_i)   (Forward in the x-direction)
Dy⁺U_{i,j} = (U_{i,j+1} − U_{i,j})/(y_{j+1} − y_j)   (Forward in the y-direction)
Dx⁻U_{i,j} = (U_{i,j} − U_{i−1,j})/(x_i − x_{i−1})   (Backward in the x-direction)
Dy⁻U_{i,j} = (U_{i,j} − U_{i,j−1})/(y_j − y_{j−1})   (Backward in the y-direction)
Dx⁰U_{i,j} = (U_{i+1,j} − U_{i−1,j})/(x_{i+1} − x_{i−1})   (Centered in the x-direction)
Dy⁰U_{i,j} = (U_{i,j+1} − U_{i,j−1})/(y_{j+1} − y_{j−1})   (Centered in the y-direction)
Second derivatives
δx²U_{i,j} = (2/(x_{i+1} − x_{i−1}))[(U_{i+1,j} − U_{i,j})/(x_{i+1} − x_i) − (U_{i,j} − U_{i−1,j})/(x_i − x_{i−1})]   (Centered in the x-direction)
δy²U_{i,j} = (2/(y_{j+1} − y_{j−1}))[(U_{i,j+1} − U_{i,j})/(y_{j+1} − y_j) − (U_{i,j} − U_{i,j−1})/(y_j − y_{j−1})]   (Centered in the y-direction)
8
PARABOLIC EQUATIONS
∂U/∂T = K ∂²U/∂X² on Ω
and
U = g(x, y) on the boundary δΩ;
this can be transformed without loss of generality by a non-dimensional transformation to
∂U/∂t = ∂²U/∂x²   (49)
with the domain
Ω = {(t, x )| 0 ≤ t, 0 ≤ x ≤ 1}.
In this case we look at a rod of unit length with each end in ice. The rod is heat insulated along its length, so that temperature changes occur through heat conduction along its length and heat transfer at its ends, where w denotes temperature.
Given that the ends of the rod are kept in contact with ice and the initial temperature distribution in non-dimensional form is
1. U = 2x for 0 ≤ x ≤ 1/2,
2. U = 2(1 − x) for 1/2 ≤ x ≤ 1.
In other words we are seeking a numerical solution of
∂U ∂2 U
=
∂t ∂x2
which satisfies
1. U = 0 at x = 0 and x = 1 for all t > 0 (the boundary condition)
2. U = 2x for 0 ≤ x ≤ 1/2 for t = 0,
U = 2(1 − x) for 1/2 ≤ x ≤ 1 for t = 0 (the initial condition).
8.2 An explicit method for the heat eqn 91
Case 2: let h = 1/10 and k = 1/200 so that r = k/h² = 1/2;
Case 3: let h = 1/10 and k = 1/100 so that r = k/h² = 1;
where r = k/h².
This gives the formula for the unknown term w_{i,j+1} at the (i, j+1) mesh point in terms of values along the jth time row.
Hence we can calculate the unknown pivotal values of w along the first row, t = k or j = 1, in terms of the known boundary conditions.
This can be written in matrix form:
w j+1 = Aw j + b j
where r = k/h_x² > 0, w_j is
(w_{1,j}, w_{2,j}, ..., w_{N−2,j}, w_{N−1,j})^T
and b_j is
(r w_{0,j}, 0, ..., 0, r w_{N,j})^T.
It is assumed that the boundary values w0j and w Nj are known for
j = 1, 2, ..., and wi0 is the initial condition.
Example 40
Let h = 1/5 and k = 1/250 so that r = k/h² = 1/10; difference equation (51) becomes
w_{i,j+1} = (1/10)(w_{i−1,j} + 8w_{i,j} + w_{i+1,j}).
This can be written in matrix form.
This can be written in matrix form
Figure 8.2.1: Graphical representation of the matrix A for r = 1/10.
w_{2,1} = (1/10)(w_{1,0} + 8w_{2,0} + w_{3,0}) = (1/10)(0.4 + 8 × 0.8 + 0.8) = 0.76.
Table 4 shows the initial condition and one time step
for the Heat Equation.
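One explicit time step can be sketched in a few lines and checked against the hand calculation of w_{2,1} (the function name is illustrative):

```python
# One explicit (FTCS) step for the heat equation with h = 1/5, k = 1/250,
# r = 1/10, reproducing w_{2,1} = 0.76 from Example 40.

def ftcs_step(w, r):
    # w includes the boundary values w[0] and w[-1], which stay fixed
    new = w[:]
    for i in range(1, len(w) - 1):
        new[i] = r * w[i - 1] + (1 - 2 * r) * w[i] + r * w[i + 1]
    return new

x = [i / 5 for i in range(6)]
w0 = [2 * xi if xi <= 0.5 else 2 * (1 - xi) for xi in x]  # initial hat profile
w0[0] = w0[-1] = 0.0                                      # ends held in ice
w1 = ftcs_step(w0, 0.1)
print(round(w1[2], 10))  # 0.76
```

The same function iterated in time gives the full table of pivotal values.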
Example 41
Let h = 1/5 and k = 1/50 so that r = k/h² = 1/2; difference equation (51) becomes
w_{i,j+1} = (1/2)(w_{i−1,j} + w_{i+1,j}).
This can be written in matrix form.
Example 42
Let h = 1/5 and k = 1/25 so that r = k/h² = 1; difference equation (51) becomes
w_{i,j+1} = w_{i−1,j} − w_{i,j} + w_{i+1,j},
which can be written in matrix form with
b_j = (w_{0,j}, 0, 0, w_{5,j})^T.
where r = k/h².
This gives a formula for the unknown term w_{i,j+1} at the (i, j+1) mesh point in terms of terms along the jth time row.
Hence we can calculate the unknown pivotal values of w along the first row, t = k or j = 1, in terms of the known boundary conditions.
This can be written in matrix form
A w_{j+1} = w_j + b_{j+1}
8.3 An implicit (BTCS) method for the Heat Equation 99
for which A is
[ 1+2r   −r     0     .     .     .  ]
[  −r   1+2r   −r     0     .     .  ]
[   0    −r   1+2r   −r     0     .  ]
[   .     .     ⋱     ⋱     ⋱     .  ]
[   .     .     .    −r   1+2r   −r  ]
[   .     .     .     .    −r   1+2r ]
where r = k/h_x² > 0, w_j is
(w_{1,j}, w_{2,j}, ..., w_{N−2,j}, w_{N−1,j})^T
and b_{j+1} is
(r w_{0,j+1}, 0, ..., 0, r w_{N,j+1})^T.
It is assumed that the boundary values w0j and w Nj are known for
j = 1, 2, ..., and wi0 is the initial condition.
In this case we look at a rod of unit length with each end in ice.
The rod is heat insulated along its length so that temperature changes
occur through heat conduction along its length and heat transfer at
its ends, where w denotes temperature.
Given that the ends of the rod are kept in contact with ice and the initial temperature distribution in non-dimensional form is
1. U = 2x for 0 ≤ x ≤ 1/2,
2. U = 2(1 − x) for 1/2 ≤ x ≤ 1,
in other words we are seeking a numerical solution of
∂U/∂t = ∂²U/∂x²
which satisfies
1. U = 0 at x = 0 and x = 1 for all t > 0 (the boundary condition),
2. U = 2x for 0 ≤ x ≤ 1/2 for t = 0,
U = 2(1 − x) for 1/2 ≤ x ≤ 1 for t = 0 (the initial condition).
Example 43
Let h = 1/5 and k = 1/250 so that r = k/h² = 1/10; difference equation (51) becomes
(1/10)(−w_{i−1,j+1} + 12 w_{i,j+1} − w_{i+1,j+1}) = w_{i,j}.
This can be written in matrix form.
Figure 8.3.1: Graphical representation of the matrix A for r = 1/10.
w_{j+1} = A^{−1}(w_j + b_j)
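In practice one never forms A^{−1}; the tridiagonal system is solved directly each step. A sketch for the r = 1/10 case (names are illustrative, boundaries held at zero):

```python
# One implicit (BTCS) step with h = 1/5, k = 1/250, r = 1/10: solve
# (1 + 2r) w_i^{new} - r (w_{i-1}^{new} + w_{i+1}^{new}) = w_i^{old}.

def btcs_step(w, r):
    n = len(w) - 2                      # interior unknowns
    sub, diag, sup = [-r] * n, [1 + 2 * r] * n, [-r] * n
    rhs = w[1:-1]                       # slicing copies the interior values
    for i in range(1, n):               # Thomas algorithm, forward sweep
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return [w[0]] + x + [w[-1]]

w0 = [0.0, 0.4, 0.8, 0.8, 0.4, 0.0]    # initial hat profile, ends in ice
w1 = btcs_step(w0, 0.1)
print(w1[2] < max(w0))                  # True: the implicit step is damping
```

Unlike the explicit scheme, this step is stable for any r, at the cost of a linear solve per time level.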
Example 44
Let h = 1/5 and k = 1/50 so that r = k/h² = 1/2; difference equation (51) becomes
(1/2)(−w_{i−1,j+1} + 4 w_{i,j+1} − w_{i+1,j+1}) = w_{i,j}.
This can be written in matrix form.
Figure 8.3.4: Graphical representation of the matrix A for r = 1/2.
Example 45
Let h = 1/5 and k = 1/25 so that r = k/h² = 1; difference equation (51) becomes
−w_{i−1,j+1} + 3 w_{i,j+1} − w_{i+1,j+1} = w_{i,j}.
The Crank-Nicholson method approximates
(∂U/∂t)_{i,j+1/2} = (∂²U/∂x²)_{i,j+1/2}
by averaging the centered second differences at time levels j and j + 1, giving
B w_{j+1} = C w_j + b_j
8.4 Crank Nicholson Implicit method 106
as
[ 2+2r   −r     0     .     .  ] [ w_{1,j+1}   ]   [ 2−2r    r     0     .     .  ] [ w_{1,j}   ]
[  −r   2+2r   −r     0     .  ] [ w_{2,j+1}   ]   [   r   2−2r    r     0     .  ] [ w_{2,j}   ]
[   0    −r     ⋱     ⋱     .  ] [     ⋮       ] = [   0     r     ⋱     ⋱     .  ] [     ⋮     ] + b_j + b_{j+1}
[   .     .    −r   2+2r   −r  ] [ w_{N−2,j+1} ]   [   .     .     r   2−2r    r  ] [ w_{N−2,j} ]
[   .     .     .    −r   2+2r ] [ w_{N−1,j+1} ]   [   .     .     .     r   2−2r ] [ w_{N−1,j} ]
In this case we look at a rod of unit length with each end in ice.
The rod is heat insulated along its length so that temperature changes
occur through heat conduction along its length and heat transfer at
its ends, where w denotes temperature.
Simple case
Given that the ends of the rod are kept in contact with ice and the initial temperature distribution in non-dimensional form is
1. U = 2x for 0 ≤ x ≤ 1/2,
2. U = 2(1 − x) for 1/2 ≤ x ≤ 1,
we again seek a numerical solution of
∂U/∂t = ∂²U/∂x²
which satisfies these boundary and initial conditions.
Example 46
8.4.1.1 Crank-Nicholson method r = 1/10
Let h = 1/5 and k = 1/250 so that r = k/h² = 1/10; difference equation (54) becomes
−0.1 w_{i−1,j+1} + 2.2 w_{i,j+1} − 0.1 w_{i+1,j+1} = 0.1 w_{i−1,j} + 1.8 w_{i,j} + 0.1 w_{i+1,j}.
Let j = 0:
i = 1:  −0.1w_{0,1} + 2.2w_{1,1} − 0.1w_{2,1} = 0.1w_{0,0} + 1.8w_{1,0} + 0.1w_{2,0}
i = 2:  −0.1w_{1,1} + 2.2w_{2,1} − 0.1w_{3,1} = 0.1w_{1,0} + 1.8w_{2,0} + 0.1w_{3,0}
i = 3:  −0.1w_{2,1} + 2.2w_{3,1} − 0.1w_{4,1} = 0.1w_{2,0} + 1.8w_{3,0} + 0.1w_{4,0}
i = 4:  −0.1w_{3,1} + 2.2w_{4,1} − 0.1w_{5,1} = 0.1w_{3,0} + 1.8w_{4,0} + 0.1w_{5,0}
In matrix form
Example 47
8.4.1.2 Crank-Nicholson method r = 1/2
Let h = 1/5 and k = 1/50 so that r = k/h² = 1/2; difference equation (54) becomes, for j = 0:
i = 1:  −0.5w_{0,1} + 3w_{1,1} − 0.5w_{2,1} = 0.5w_{0,0} + 1w_{1,0} + 0.5w_{2,0}
i = 2:  −0.5w_{1,1} + 3w_{2,1} − 0.5w_{3,1} = 0.5w_{1,0} + 1w_{2,0} + 0.5w_{3,0}
i = 3:  −0.5w_{2,1} + 3w_{3,1} − 0.5w_{4,1} = 0.5w_{2,0} + 1w_{3,0} + 0.5w_{4,0}
i = 4:  −0.5w_{3,1} + 3w_{4,1} − 0.5w_{5,1} = 0.5w_{3,0} + 1w_{4,0} + 0.5w_{5,0}
In matrix form (the boundary terms vanish since w_{0,j} = w_{5,j} = 0):
[  3   −0.5    0     0  ] [w_{1,1}]   [  1   0.5    0     0  ] [w_{1,0}]
[−0.5    3   −0.5    0  ] [w_{2,1}] = [ 0.5   1    0.5    0  ] [w_{2,0}]
[  0   −0.5    3   −0.5 ] [w_{3,1}]   [  0   0.5    1    0.5 ] [w_{3,0}]
[  0     0   −0.5    3  ] [w_{4,1}]   [  0    0    0.5    1  ] [w_{4,0}]
Example 48
Let h = 1/5 and k = 1/25 so that r = k/h² = 1; difference equation (54) becomes, for j = 0:
i = 1:  −w_{0,1} + 4w_{1,1} − w_{2,1} = w_{0,0} + 0w_{1,0} + w_{2,0}
i = 2:  −w_{1,1} + 4w_{2,1} − w_{3,1} = w_{1,0} + 0w_{2,0} + w_{3,0}
i = 3:  −w_{2,1} + 4w_{3,1} − w_{4,1} = w_{2,0} + 0w_{3,0} + w_{4,0}
i = 4:  −w_{3,1} + 4w_{4,1} − w_{5,1} = w_{3,0} + 0w_{4,0} + w_{5,0}
In matrix form (the boundary terms vanish since w_{0,j} = w_{5,j} = 0):
[  4  −1   0   0 ] [w_{1,1}]   [ 0  1  0  0 ] [w_{1,0}]
[ −1   4  −1   0 ] [w_{2,1}] = [ 1  0  1  0 ] [w_{2,0}]
[  0  −1   4  −1 ] [w_{3,1}]   [ 0  1  0  1 ] [w_{3,0}]
[  0   0  −1   4 ] [w_{4,1}]   [ 0  0  1  0 ] [w_{4,0}]
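A Crank-Nicholson step is an explicit matrix multiply on the right followed by a tridiagonal solve on the left; this sketch carries out the r = 1/2 step of Example 47 (names are illustrative, boundaries zero):

```python
# One Crank-Nicolson step with h = 1/5, k = 1/50, r = 1/2 (Example 47):
# (2 + 2r) and -r on the left, (2 - 2r) and r on the right.

def crank_nicolson_step(w, r):
    n = len(w) - 2
    # right-hand side C w_j; boundary contributions are zero here
    rhs = [r * w[i - 1] + (2 - 2 * r) * w[i] + r * w[i + 1]
           for i in range(1, len(w) - 1)]
    sub, diag, sup = [-r] * n, [2 + 2 * r] * n, [-r] * n
    for i in range(1, n):               # Thomas algorithm, forward sweep
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return [w[0]] + x + [w[-1]]

w0 = [0.0, 0.4, 0.8, 0.8, 0.4, 0.0]
w1 = crank_nicolson_step(w0, 0.5)
```

Solving the 4 × 4 system by hand (using the symmetry w_{1,1} = w_{4,1}, w_{2,1} = w_{3,1}) gives w_{1,1} ≈ 0.3724 and w_{2,1} ≈ 0.6345, which the code reproduces.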
bi−1 wi−1j+1 + bi wij+1 + bi+1 wi+1j+1 = ci−1 wi−1j + ci wij + ci+1 wi+1j
Bw j+1 = Cw j + d j
w j+1 = Aw j + f j
∂U/∂x = H(U − v_0) at x = 0,
where H is a positive constant and v_0 is the surrounding temperature.
1. By using a forward difference for ∂U/∂x, we have
(w_{1,j} − w_{0,j})/h_x = H(w_{0,j} − v_0).
U = 1 for 0 ≤ x ≤ 1 when t = 0,
∂U/∂x = U at x = 0 for all t,
∂U/∂x = −U at x = 1 for all t.
8.7.1.1 Example 1
Using forward difference approximation for the derivative boundary
condition and the explicit method to approximate the PDE.
Our difference equation is,
At i = N − 1, (57) is,
Choose h_x = 1/5 and k = 1/100 such that r = 1/4.
The equations become
w_{1,j+1} = (17/24) w_{1,j} + (1/4) w_{2,j},
w_{i,j+1} = (1/4)(w_{i−1,j} + 2w_{i,j} + w_{i+1,j}),  i = 2, 3,
and
w_{4,j+1} = (1/4) w_{3,j} + (13/16) w_{4,j}.
In matrix form,
[w_{1,j+1}]   [17/24  1/4    0     0   ] [w_{1,j}]
[w_{2,j+1}] = [ 1/4   1/2   1/4    0   ] [w_{2,j}]
[w_{3,j+1}]   [  0    1/4   1/2   1/4  ] [w_{3,j}]
[w_{4,j+1}]   [  0     0    1/4  13/16 ] [w_{4,j}]
with the boundaries given by
w_{0,j+1} = (10/12) w_{1,j+1},
w_{5,j+1} = (10/8) w_{4,j+1}.
8.7.1.2 Example 2
Using central difference approximation for the derivative boundary
condition and the explicit method to approximate the PDE.
Our difference equation is as in (57).
At i = 0 we have
8.7.1.3 Example 3
Using central difference approximation for the derivative boundary
condition and the Crank-Nicholson method to approximate the PDE.
The difference equation is,
giving
and
w−1j+1 = w1j+1 − 2h x w0j+1 (66)
Let j = 0 and i = 0 the difference equation becomes
Using (65), (66) and (67) we can eliminate the fictitious terms w_{−1,j} and w_{−1,j+1}.
Let Fij (w) represent the difference equation approximating the PDE
at the ijth point with exact solution w.
If w is replaced by U at the mesh points of the difference equation, where U is the exact solution of the PDE, the value of F_{ij}(U) is the local truncation error T_{ij} at the (i, j) mesh point.
Using Taylor expansions it is easy to express Tij in terms of h x and k
and partial derivatives of U at (ih x , jk).
Although U and its derivatives are generally unknown, this is worthwhile because it provides a method for comparing the local accuracies of different difference schemes approximating the PDE.
Example 49
8.8 Local Truncation Error and Consistency 117
∂U/∂t − ∂²U/∂x² = 0
with
F_{ij}(w) = (w_{i,j+1} − w_{i,j})/k − (w_{i+1,j} − 2w_{i,j} + w_{i−1,j})/h_x² = 0
is
T_{ij} = F_{ij}(U) = (U_{i,j+1} − U_{i,j})/k − (U_{i+1,j} − 2U_{i,j} + U_{i−1,j})/h_x².
Expanding in Taylor series about the (i, j) mesh point,
T_{ij} = (∂U/∂t − ∂²U/∂x²)_{ij} + (k/2)(∂²U/∂t²)_{ij} − (h_x²/12)(∂⁴U/∂x⁴)_{ij} + (k²/6)(∂³U/∂t³)_{ij} − (h_x⁴/360)(∂⁶U/∂x⁶)_{ij} + ...
But U is the exact solution, so
(∂U/∂t − ∂²U/∂x²)_{ij} = 0,
and the principal part of the local truncation error is
(k/2)(∂²U/∂t²)_{ij} − (h_x²/12)(∂⁴U/∂x⁴)_{ij}.
Hence
T_{ij} = O(k) + O(h_x²).
8.9 Consistency and Compatibility 118
∂U/∂t = ∂²U/∂x²
is consistent with the difference equation.
√
where ξ = eαk in this case i denotes the complex number i = −1 and
for values of β needed to satisfy the initial conditions. ξ is known as
the amplification factor. The finite difference equation will be stable
if |w pq | remains bounded for all q as h → 0, k → 0 and all β.
If the exact solution does not increase exponentially with time then a
necessary and sufficient condition is that
|ξ | ≤ 1
(1/k)(w_{p,q+1} − w_{p,q}) = (1/h_x²)(w_{p−1,q} − 2w_{p,q} + w_{p+1,q}),
approximating ∂U/∂t = ∂²U/∂x² at (p h_x, q k). Substituting w_{p,q} = e^{iβx} ξ^q into the difference equation and dividing by e^{iβph} ξ^q gives
ξ − 1 = r(e^{−iβh} − 2 + e^{iβh}),
ξ = 1 + r(2 cos(βh) − 2) = 1 − 4r sin²(βh/2).
Hence we require
|1 − 4r sin²(βh/2)| ≤ 1;
for this to hold
4r sin²(βh/2) ≤ 2,
which means
r ≤ 1/2.
Thus |ξ| ≤ 1 for r ≤ 1/2 and all β; therefore the equation is conditionally stable.
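The bound can be confirmed by scanning the amplification factor over many Fourier modes (a small illustrative check, not part of the analysis itself):

```python
import math

# Scan the FTCS amplification factor xi(beta) = 1 - 4 r sin^2(beta h / 2):
# max |xi| <= 1 exactly when r <= 1/2.

def max_amplification(r, h=0.1, modes=200):
    betas = [math.pi * m / (modes * h) for m in range(1, modes + 1)]
    return max(abs(1 - 4 * r * math.sin(beta * h / 2) ** 2) for beta in betas)

print(max_amplification(0.4) <= 1.0)   # True: every mode is damped or bounded
print(max_amplification(0.6) > 1.0)    # True: some mode grows, so unstable
```

The worst mode is the sawtooth βh = π, for which ξ = 1 − 4r; this is exactly the mode whose growth ruins FTCS runs with r > 1/2.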
(1/k)(w_{p,q+1} − w_{p,q}) = (1/h_x²)(w_{p−1,q+1} − 2w_{p,q+1} + w_{p+1,q+1}),
8.11 Stability by the Fourier Series method (von Neumann’s method) 120
approximating ∂U/∂t = ∂²U/∂x² at (p h_x, q k). Substituting w_{p,q} = e^{iβx} ξ^q into the difference equation gives
e^{iβph} ξ^{q+1} − e^{iβph} ξ^q = r{e^{iβ(p−1)h} ξ^{q+1} − 2e^{iβph} ξ^{q+1} + e^{iβ(p+1)h} ξ^{q+1}},
where r = k/h_x². Dividing across by e^{iβph} ξ^q leads to
ξ − 1 = rξ(e^{−iβh} − 2 + e^{iβh}) = rξ(2 cos(βh) − 2) = −4rξ sin²(βh/2).
Hence
ξ = 1/(1 + 4r sin²(βh/2)).
0 < ξ ≤ 1 for all r > 0 and all β; therefore the equation is unconditionally stable.
(1/k)(w_{p,q+1} − w_{p,q}) = (1/2h_x²)(w_{p−1,q+1} − 2w_{p,q+1} + w_{p+1,q+1}) + (1/2h_x²)(w_{p−1,q} − 2w_{p,q} + w_{p+1,q}),
approximating ∂U/∂t = ∂²U/∂x² at (p h_x, q k). Substituting w_{p,q} = e^{iβx} ξ^q into the difference equation gives
e^{iβph} ξ^{q+1} − e^{iβph} ξ^q = (r/2){e^{iβ(p−1)h} ξ^{q+1} − 2e^{iβph} ξ^{q+1} + e^{iβ(p+1)h} ξ^{q+1} + e^{iβ(p−1)h} ξ^q − 2e^{iβph} ξ^q + e^{iβ(p+1)h} ξ^q},
where r = k/h_x². Dividing across by e^{iβph} ξ^q leads to
ξ − 1 = (r/2)ξ(e^{−iβh} − 2 + e^{iβh}) + (r/2)(e^{−iβh} − 2 + e^{iβh})
= (r/2)ξ(2 cos(βh) − 2) + (r/2)(2 cos(βh) − 2)
= −2rξ sin²(βh/2) − 2r sin²(βh/2).
Hence
ξ = (1 − 2r sin²(βh/2))/(1 + 2r sin²(βh/2)).
|ξ| ≤ 1 for all r > 0 and all β; therefore the equation is unconditionally stable.
8.12 Parabolic Equations Questions 121
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[10 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 1, u(1, t) = 1,
u(x, 0) = 4x² − 4x + 1.
[18 marks]
c) For the explicit method what is the step-size requirement for h and k for the method to be stable?
[5 marks]
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[10 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 1, u(1, t) = 1.
[18 marks]
c) For the explicit method what is the step-size requirement for h and k for the method to be stable?
[5 marks]
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[10 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 0, u(1, t) = 0,
u(x, 0) = 2 sin(2πx).
[18 marks]
c) For the explicit method what is the step-size requirement for h and k for the method to be stable?
[5 marks]
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[13 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 1, u(1, t) = 1,
u(x, 0) = 4x² − 4x + 1.
[20 marks]
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[13 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 1, u(1, t) = 1.
[20 marks]
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[13 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 0, u(1, t) = 0,
u(x, 0) = 2 sin(2πx).
[20 marks]
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[13 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 1, u(1, t) = 1,
u(x, 0) = 4x² − 4x + 1.
[20 marks]
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[13 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 1, u(1, t) = 1.
[20 marks]
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1}.
[13 marks]
b) Consider the problem
∂u/∂t = ∂²u/∂x²
on the rectangular domain
Ω = {(t, x) | 0 ≤ t, 0 ≤ x ≤ 1},
u(0, t) = 0, u(1, t) = 0,
u(x, 0) = 2 sin(2πx).
[20 marks]
9
ELLIPTIC PDEs
∇² = ∂²/∂x² + ∂²/∂y²,
with boundary conditions,
U(x, y) = g(x, y),  (x, y) ∈ δΩ (the boundary).
Ω_h = {(x_i, y_j) ∈ Ω : 1 ≤ i, j ≤ N − 1},
∂Ω_h = {(x_0, y_j), (x_N, y_j), (x_i, y_0), (x_i, y_N) : 1 ≤ i, j ≤ N − 1}.
δ_x² w_{i,j} = (1/h²)(w_{i+1,j} − 2w_{i,j} + w_{i−1,j}),
9.1 The five point approximation of the Laplacian 131
δ_y² w_{i,j} = (1/h²)(w_{i,j+1} − 2w_{i,j} + w_{i,j−1}).
This gives the Poisson Difference Equation.
The five point scheme results in a system of (N − 1)² equations for the (N − 1)² unknowns. This is depicted in Figure 9.1.1 on a 6 × 6 = 36 grid, where there is a grid of 4 × 4 = 16 unknowns (red) surrounded by the boundary of 20 known values. The general set of 4 × 4 equations of the Poisson difference equation on the 6 × 6 grid, where
h = 1/(6 − 1) = 1/5,
can be written as:
j = 1:
i = 1:  w_{0,1} + w_{1,0} − 4w_{1,1} + w_{1,2} + w_{2,1} = (1/5)² f_{1,1}
i = 2:  w_{1,1} + w_{2,0} − 4w_{2,1} + w_{2,2} + w_{3,1} = (1/5)² f_{2,1}
i = 3:  w_{2,1} + w_{3,0} − 4w_{3,1} + w_{3,2} + w_{4,1} = (1/5)² f_{3,1}
i = 4:  w_{3,1} + w_{4,0} − 4w_{4,1} + w_{4,2} + w_{5,1} = (1/5)² f_{4,1}
j = 2:
i = 1:  w_{0,2} + w_{1,1} − 4w_{1,2} + w_{1,3} + w_{2,2} = (1/5)² f_{1,2}
i = 2:  w_{1,2} + w_{2,1} − 4w_{2,2} + w_{2,3} + w_{3,2} = (1/5)² f_{2,2}
i = 3:  w_{2,2} + w_{3,1} − 4w_{3,2} + w_{3,3} + w_{4,2} = (1/5)² f_{3,2}
i = 4:  w_{3,2} + w_{4,1} − 4w_{4,2} + w_{4,3} + w_{5,2} = (1/5)² f_{4,2}
j = 3:
i = 1:  w_{0,3} + w_{1,2} − 4w_{1,3} + w_{1,4} + w_{2,3} = (1/5)² f_{1,3}
i = 2:  w_{1,3} + w_{2,2} − 4w_{2,3} + w_{2,4} + w_{3,3} = (1/5)² f_{2,3}
i = 3:  w_{2,3} + w_{3,2} − 4w_{3,3} + w_{3,4} + w_{4,3} = (1/5)² f_{3,3}
i = 4:  w_{3,3} + w_{4,2} − 4w_{4,3} + w_{4,4} + w_{5,3} = (1/5)² f_{4,3}
j = 4:
i = 1:  w_{0,4} + w_{1,3} − 4w_{1,4} + w_{1,5} + w_{2,4} = (1/5)² f_{1,4}
i = 2:  w_{1,4} + w_{2,3} − 4w_{2,4} + w_{2,5} + w_{3,4} = (1/5)² f_{2,4}
i = 3:  w_{2,4} + w_{3,3} − 4w_{3,4} + w_{3,5} + w_{4,4} = (1/5)² f_{3,4}
i = 4:  w_{3,4} + w_{4,3} − 4w_{4,4} + w_{4,5} + w_{5,4} = (1/5)² f_{4,4}
The horizontal and vertical lines are for display purposes, to help indicate each of the four sets of four equations.
The matrix has a unique solution. For sparse matrices of this form an iterative method is used, as it would be too computationally expensive to compute the inverse.
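The simplest such iterative method is Jacobi iteration, sketched below for the five-point scheme with the sign convention ∇²u = f (function names are illustrative):

```python
# Jacobi iteration for the five-point Laplacian: each sweep replaces w_ij by
# the average of its four neighbours minus (h^2/4) f_ij.

def jacobi_poisson(w, f, h, sweeps):
    # w is an (N+1)x(N+1) grid holding boundary values; interior is updated.
    n = len(w) - 1
    for _ in range(sweeps):
        new = [row[:] for row in w]
        for i in range(1, n):
            for j in range(1, n):
                new[i][j] = 0.25 * (w[i - 1][j] + w[i + 1][j]
                                    + w[i][j - 1] + w[i][j + 1]
                                    - h * h * f[i][j])
        w = new
    return w

# Laplace's equation (f = 0) on a 5x5 grid, value 1 on one edge, 0 elsewhere:
N = 4
w = [[0.0] * (N + 1) for _ in range(N + 1)]
f = [[0.0] * (N + 1) for _ in range(N + 1)]
for j in range(N + 1):
    w[0][j] = 1.0
w = jacobi_poisson(w, f, 1 / N, 200)
```

After convergence, each interior value of the discrete Laplace solution equals the average of its four neighbours (the discrete mean value property), which gives a cheap correctness check.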
∂²u/∂x² + ∂²u/∂y² = 0,  (x, y) ∈ Ω = (0, 1) × (0, 1),
with boundary conditions:
lower boundary,
u(x, 0) = sin(2πx),
upper boundary,
u(x, 1) = sin(2πx),
left boundary,
u(0, y) = 2 sin(2πy),
right boundary,
u(1, y) = 2 sin(2πy).
The general difference equation for the Laplacian is of the form
with
h = 1/4,
and
x_i = i/4,  y_j = j/4,
for i = 0, 1, 2, 3, 4 and j = 0, 1, 2, 3, 4. This gives the system of 3 × 3 = 9 equations:
j = 1:
i = 1:  w_{0,1} + w_{1,0} − 4w_{1,1} + w_{1,2} + w_{2,1} = 0
i = 2:  w_{1,1} + w_{2,0} − 4w_{2,1} + w_{2,2} + w_{3,1} = 0
i = 3:  w_{2,1} + w_{3,0} − 4w_{3,1} + w_{3,2} + w_{4,1} = 0
j = 2:
i = 1:  w_{0,2} + w_{1,1} − 4w_{1,2} + w_{1,3} + w_{2,2} = 0
i = 2:  w_{1,2} + w_{2,1} − 4w_{2,2} + w_{2,3} + w_{3,2} = 0
i = 3:  w_{2,2} + w_{3,1} − 4w_{3,2} + w_{3,3} + w_{4,2} = 0
j = 3:
i = 1:  w_{0,3} + w_{1,2} − 4w_{1,3} + w_{1,4} + w_{2,3} = 0
i = 2:  w_{1,3} + w_{2,2} − 4w_{2,3} + w_{2,4} + w_{3,3} = 0
i = 3:  w_{2,3} + w_{3,2} − 4w_{3,3} + w_{3,4} + w_{4,3} = 0
9.2 Specific Examples 137
j = 1:
i = 1:  −4w_{1,1} + w_{1,2} + w_{2,1} = −w_{0,1} − w_{1,0}
i = 2:  w_{1,1} − 4w_{2,1} + w_{2,2} + w_{3,1} = −w_{2,0}
i = 3:  w_{2,1} − 4w_{3,1} + w_{3,2} = −w_{4,1} − w_{3,0}
j = 2:
i = 1:  w_{1,1} − 4w_{1,2} + w_{1,3} + w_{2,2} = −w_{0,2}
i = 2:  w_{1,2} + w_{2,1} − 4w_{2,2} + w_{2,3} + w_{3,2} = 0
i = 3:  w_{2,2} + w_{3,1} − 4w_{3,2} + w_{3,3} = −w_{4,2}
j = 3:
i = 1:  w_{1,2} − 4w_{1,3} + w_{2,3} = −w_{0,3} − w_{1,4}
i = 2:  w_{1,3} + w_{2,2} − 4w_{2,3} + w_{3,3} = −w_{2,4}
i = 3:  w_{2,3} + w_{3,2} − 4w_{3,3} = −w_{4,3} − w_{3,4}
Given the discrete boundary conditions:
[ −4   1   0   1   0   0   0   0   0 ] [w_{1,1}]   [0]   [ −w_{1,0} − w_{0,1} ]
[  1  −4   1   0   1   0   0   0   0 ] [w_{2,1}]   [0]   [ −w_{2,0}           ]
[  0   1  −4   0   0   1   0   0   0 ] [w_{3,1}]   [0]   [ −w_{3,0} − w_{4,1} ]
[  1   0   0  −4   1   0   1   0   0 ] [w_{1,2}]   [0]   [ −w_{0,2}           ]
[  0   1   0   1  −4   1   0   1   0 ] [w_{2,2}] = [0] + [ 0                  ],
[  0   0   1   0   1  −4   0   0   1 ] [w_{3,2}]   [0]   [ −w_{4,2}           ]
[  0   0   0   1   0   0  −4   1   0 ] [w_{1,3}]   [0]   [ −w_{1,4} − w_{0,3} ]
[  0   0   0   0   1   0   1  −4   1 ] [w_{2,3}]   [0]   [ −w_{2,4}           ]
[  0   0   0   0   0   1   0   1  −4 ] [w_{3,3}]   [0]   [ −w_{3,4} − w_{4,3} ]
and substituting the boundary values gives
[−4  1  0  1  0  0  0  0  0][w_{1,1}]   [0]   [−1]   [−2]
[ 1 −4  1  0  1  0  0  0  0][w_{2,1}]   [0]   [ 0]   [ 0]
[ 0  1 −4  0  0  1  0  0  0][w_{3,1}]   [0]   [ 1]   [−2]
[ 1  0  0 −4  1  0  1  0  0][w_{1,2}]   [0]   [ 0]   [ 0]
[ 0  1  0  1 −4  1  0  1  0][w_{2,2}] = [0] + [ 0] + [ 0].
[ 0  0  1  0  1 −4  0  0  1][w_{3,2}]   [0]   [ 0]   [ 0]
[ 0  0  0  1  0  0 −4  1  0][w_{1,3}]   [0]   [−1]   [ 2]
[ 0  0  0  0  1  0  1 −4  1][w_{2,3}]   [0]   [ 0]   [ 0]
[ 0  0  0  0  0  1  0  1 −4][w_{3,3}]   [0]   [ 1]   [ 2]
∂²u/∂x² + ∂²u/∂y² = x² + y²,   (x, y) ∈ Ω = (0, 1) × (0, 1),
with zero boundary conditions:
lower boundary, u(x, 0) = 0,
upper boundary, u(x, 1) = 0,
left boundary, u(0, y) = 0,
right boundary, u(1, y) = 0.
The difference equation is applied with
h = 1/4,
and
x_i = i/4,   y_j = j/4,
for i = 0, 1, 2, 3, 4 and j = 0, 1, 2, 3, 4. This gives the system of 3 × 3 = 9 equations:
j = 1:
i = 1:  w_{0,1} + w_{1,0} − 4w_{1,1} + w_{1,2} + w_{2,1} = (1/4²)(x_1² + y_1²)
i = 2:  w_{1,1} + w_{2,0} − 4w_{2,1} + w_{2,2} + w_{3,1} = (1/4²)(x_2² + y_1²)
i = 3:  w_{2,1} + w_{3,0} − 4w_{3,1} + w_{3,2} + w_{4,1} = (1/4²)(x_3² + y_1²)
j = 2:
i = 1:  w_{0,2} + w_{1,1} − 4w_{1,2} + w_{1,3} + w_{2,2} = (1/4²)(x_1² + y_2²)
i = 2:  w_{1,2} + w_{2,1} − 4w_{2,2} + w_{2,3} + w_{3,2} = (1/4²)(x_2² + y_2²)
i = 3:  w_{2,2} + w_{3,1} − 4w_{3,2} + w_{3,3} + w_{4,2} = (1/4²)(x_3² + y_2²)
j = 3:
i = 1:  w_{0,3} + w_{1,2} − 4w_{1,3} + w_{1,4} + w_{2,3} = (1/4²)(x_1² + y_3²)
i = 2:  w_{1,3} + w_{2,2} − 4w_{2,3} + w_{2,4} + w_{3,3} = (1/4²)(x_2² + y_3²)
i = 3:  w_{2,3} + w_{3,2} − 4w_{3,3} + w_{3,4} + w_{4,3} = (1/4²)(x_3² + y_3²).
This system is then rearranged by bringing the known boundary con-
ditions to the right hand side, to give:
j = 1:
i = 1:  −4w_{1,1} + w_{1,2} + w_{2,1} = (1/4²)(x_1² + y_1²) − w_{0,1} − w_{1,0}
i = 2:  w_{1,1} − 4w_{2,1} + w_{2,2} + w_{3,1} = (1/4²)(x_2² + y_1²) − w_{2,0}
i = 3:  w_{2,1} − 4w_{3,1} + w_{3,2} = (1/4²)(x_3² + y_1²) − w_{4,1} − w_{3,0}
j = 2:
i = 1:  w_{1,1} − 4w_{1,2} + w_{1,3} + w_{2,2} = (1/4²)(x_1² + y_2²) − w_{0,2}
i = 2:  w_{1,2} + w_{2,1} − 4w_{2,2} + w_{2,3} + w_{3,2} = (1/4²)(x_2² + y_2²)
i = 3:  w_{2,2} + w_{3,1} − 4w_{3,2} + w_{3,3} = (1/4²)(x_3² + y_2²) − w_{4,2}
j = 3:
i = 1:  w_{1,2} − 4w_{1,3} + w_{2,3} = (1/4²)(x_1² + y_3²) − w_{0,3} − w_{1,4}
i = 2:  w_{1,3} + w_{2,2} − 4w_{2,3} + w_{3,3} = (1/4²)(x_2² + y_3²) − w_{2,4}
i = 3:  w_{2,3} + w_{3,2} − 4w_{3,3} = (1/4²)(x_3² + y_3²) − w_{4,3} − w_{3,4}.
Given the zero boundary conditions, the matrix form is
[−4  1  0  1  0  0  0  0  0][w_{1,1}]      [x_1² + y_1²]
[ 1 −4  1  0  1  0  0  0  0][w_{2,1}]      [x_2² + y_1²]
[ 0  1 −4  0  0  1  0  0  0][w_{3,1}]      [x_3² + y_1²]
[ 1  0  0 −4  1  0  1  0  0][w_{1,2}]      [x_1² + y_2²]
[ 0  1  0  1 −4  1  0  1  0][w_{2,2}] = h² [x_2² + y_2²].
[ 0  0  1  0  1 −4  0  0  1][w_{3,2}]      [x_3² + y_2²]
[ 0  0  0  1  0  0 −4  1  0][w_{1,3}]      [x_1² + y_3²]
[ 0  0  0  0  1  0  1 −4  1][w_{2,3}]      [x_2² + y_3²]
[ 0  0  0  0  0  1  0  1 −4][w_{3,3}]      [x_3² + y_3²]
Substituting values into the right hand side gives the specific matrix form:
[−4  1  0  1  0  0  0  0  0][w_{1,1}]   [0.0078125 ]
[ 1 −4  1  0  1  0  0  0  0][w_{2,1}]   [0.01953125]
[ 0  1 −4  0  0  1  0  0  0][w_{3,1}]   [0.0390625 ]
[ 1  0  0 −4  1  0  1  0  0][w_{1,2}]   [0.01953125]
[ 0  1  0  1 −4  1  0  1  0][w_{2,2}] = [0.03125   ].
[ 0  0  1  0  1 −4  0  0  1][w_{3,2}]   [0.05078125]
[ 0  0  0  1  0  0 −4  1  0][w_{1,3}]   [0.0390625 ]
[ 0  0  0  0  1  0  1 −4  1][w_{2,3}]   [0.05078125]
[ 0  0  0  0  0  1  0  1 −4][w_{3,3}]   [0.0703125 ]
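The assembly of this system, and the right hand side values above, can be checked with a short NumPy sketch (the assembly loop and variable names are illustrative, not from the text):

```python
import numpy as np

h = 1 / 4
n = 3                      # interior nodes per direction
x = np.array([(i + 1) * h for i in range(n)])
y = x.copy()

# assemble the 9x9 five point matrix (-4 on the diagonal, 1 for neighbours)
A = np.zeros((n * n, n * n))
b = np.zeros(n * n)
for j in range(n):
    for i in range(n):
        k = j * n + i      # row index for the unknown w_{i+1, j+1}
        A[k, k] = -4.0
        if i > 0:     A[k, k - 1] = 1.0
        if i < n - 1: A[k, k + 1] = 1.0
        if j > 0:     A[k, k - n] = 1.0
        if j < n - 1: A[k, k + n] = 1.0
        b[k] = h**2 * (x[i]**2 + y[j]**2)   # zero boundary data

print(b[0])                # 0.0078125, the first right hand side entry
w = np.linalg.solve(A, b)  # the matrix is nonsingular, so this succeeds
```

The same loop generalises to any h by changing `n` and the mesh, which is how larger five point systems would be set up in practice.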
∂²u/∂x² + ∂²u/∂y² = xy,   (x, y) ∈ Ω = (0, 1) × (0, 1),
with boundary conditions
lower boundary, u(x, 0) = −x² + x,
upper boundary, u(x, 1) = x² − x,
left boundary, u(0, y) = −y² + y,
right boundary, u(1, y) = −y² + y.
The five point difference equation is applied with
h = 1/4,
and
x_i = i/4,   y_j = j/4,
for i = 0, 1, 2, 3, 4 and j = 0, 1, 2, 3, 4. This gives the system of 3 × 3 = 9 equations:
j = 1:
i = 1:  w_{0,1} + w_{1,0} − 4w_{1,1} + w_{1,2} + w_{2,1} = (1/4²)(x_1 y_1)
i = 2:  w_{1,1} + w_{2,0} − 4w_{2,1} + w_{2,2} + w_{3,1} = (1/4²)(x_2 y_1)
i = 3:  w_{2,1} + w_{3,0} − 4w_{3,1} + w_{3,2} + w_{4,1} = (1/4²)(x_3 y_1)
j = 2:
i = 1:  w_{0,2} + w_{1,1} − 4w_{1,2} + w_{1,3} + w_{2,2} = (1/4²)(x_1 y_2)
i = 2:  w_{1,2} + w_{2,1} − 4w_{2,2} + w_{2,3} + w_{3,2} = (1/4²)(x_2 y_2)
i = 3:  w_{2,2} + w_{3,1} − 4w_{3,2} + w_{3,3} + w_{4,2} = (1/4²)(x_3 y_2)
j = 3:
i = 1:  w_{0,3} + w_{1,2} − 4w_{1,3} + w_{1,4} + w_{2,3} = (1/4²)(x_1 y_3)
i = 2:  w_{1,3} + w_{2,2} − 4w_{2,3} + w_{2,4} + w_{3,3} = (1/4²)(x_2 y_3)
i = 3:  w_{2,3} + w_{3,2} − 4w_{3,3} + w_{3,4} + w_{4,3} = (1/4²)(x_3 y_3).
Re-arranging the system such that the known values are on the right
hand side:
j = 1:
i = 1:  −4w_{1,1} + w_{1,2} + w_{2,1} = (1/4²)(x_1 y_1) − w_{0,1} − w_{1,0}
i = 2:  w_{1,1} − 4w_{2,1} + w_{2,2} + w_{3,1} = (1/4²)(x_2 y_1) − w_{2,0}
i = 3:  w_{2,1} − 4w_{3,1} + w_{3,2} = (1/4²)(x_3 y_1) − w_{4,1} − w_{3,0}
j = 2:
i = 1:  w_{1,1} − 4w_{1,2} + w_{1,3} + w_{2,2} = (1/4²)(x_1 y_2) − w_{0,2}
i = 2:  w_{1,2} + w_{2,1} − 4w_{2,2} + w_{2,3} + w_{3,2} = (1/4²)(x_2 y_2)
i = 3:  w_{2,2} + w_{3,1} − 4w_{3,2} + w_{3,3} = (1/4²)(x_3 y_2) − w_{4,2}
j = 3:
i = 1:  w_{1,2} − 4w_{1,3} + w_{2,3} = (1/4²)(x_1 y_3) − w_{0,3} − w_{1,4}
i = 2:  w_{1,3} + w_{2,2} − 4w_{2,3} + w_{3,3} = (1/4²)(x_2 y_3) − w_{2,4}
i = 3:  w_{2,3} + w_{3,2} − 4w_{3,3} = (1/4²)(x_3 y_3) − w_{4,3} − w_{3,4}.
In matrix form, with the discrete boundary conditions collected on the right hand side, the system is
[−4  1  0  1  0  0  0  0  0][w_{1,1}]      [x_1 y_1]   [−w_{1,0}]   [−w_{0,1}]
[ 1 −4  1  0  1  0  0  0  0][w_{2,1}]      [x_2 y_1]   [−w_{2,0}]   [   0    ]
[ 0  1 −4  0  0  1  0  0  0][w_{3,1}]      [x_3 y_1]   [−w_{3,0}]   [−w_{4,1}]
[ 1  0  0 −4  1  0  1  0  0][w_{1,2}]      [x_1 y_2]   [   0    ]   [−w_{0,2}]
[ 0  1  0  1 −4  1  0  1  0][w_{2,2}] = h² [x_2 y_2] + [   0    ] + [   0    ],
[ 0  0  1  0  1 −4  0  0  1][w_{3,2}]      [x_3 y_2]   [   0    ]   [−w_{4,2}]
[ 0  0  0  1  0  0 −4  1  0][w_{1,3}]      [x_1 y_3]   [−w_{1,4}]   [−w_{0,3}]
[ 0  0  0  0  1  0  1 −4  1][w_{2,3}]      [x_2 y_3]   [−w_{2,4}]   [   0    ]
[ 0  0  0  0  0  1  0  1 −4][w_{3,3}]      [x_3 y_3]   [−w_{3,4}]   [−w_{4,3}]
and inputting the specific boundary values and the right hand side of the equation gives:
[−4  1  0  1  0  0  0  0  0][w_{1,1}]          [0.0625]   [−3/16]   [−3/16]
[ 1 −4  1  0  1  0  0  0  0][w_{2,1}]          [0.125 ]   [−1/4 ]   [  0  ]
[ 0  1 −4  0  0  1  0  0  0][w_{3,1}]          [0.1875]   [−3/16]   [−3/16]
[ 1  0  0 −4  1  0  1  0  0][w_{1,2}]          [0.125 ]   [  0  ]   [−1/4 ]
[ 0  1  0  1 −4  1  0  1  0][w_{2,2}] = (1/4²) [0.25  ] + [  0  ] + [  0  ].
[ 0  0  1  0  1 −4  0  0  1][w_{3,2}]          [0.375 ]   [  0  ]   [−1/4 ]
[ 0  0  0  1  0  0 −4  1  0][w_{1,3}]          [0.1875]   [ 3/16]   [−3/16]
[ 0  0  0  0  1  0  1 −4  1][w_{2,3}]          [0.375 ]   [ 1/4 ]   [  0  ]
[ 0  0  0  0  0  1  0  1 −4][w_{3,3}]          [0.5625]   [ 3/16]   [−3/16]
Figure 9.2.6 shows the numerical solution of the Poisson Equation
with non-zero boundary conditions.
We now ask how well the grid function determined by the five point
scheme approximates the exact solution of the Poisson problem.
τ_h(x) = (L − L_h)φ(x),
and the scheme is consistent if
lim_{h→0} τ_h(x) = 0.
By Taylor's theorem,
φ(x ± h, y) = φ(x, y) ± h ∂φ/∂x(x, y) + (h²/2!) ∂²φ/∂x²(x, y) ± (h³/3!) ∂³φ/∂x³(x, y) + (h⁴/4!) ∂⁴φ/∂x⁴(ζ±, y),
where ζ± ∈ (x − h, x + h). Adding this pair of equations together and rearranging, we get
(1/h²)[φ(x + h, y) − 2φ(x, y) + φ(x − h, y)] − ∂²φ/∂x²(x, y) = (h²/4!)[∂⁴φ/∂x⁴(ζ⁺, y) + ∂⁴φ/∂x⁴(ζ⁻, y)].
By the intermediate value theorem,
∂⁴φ/∂x⁴(ζ⁺, y) + ∂⁴φ/∂x⁴(ζ⁻, y) = 2 ∂⁴φ/∂x⁴(ζ, y),
so
δ_x² φ(x, y) = ∂²φ/∂x²(x, y) + (h²/12) ∂⁴φ/∂x⁴(ζ, y).
Similar reasoning shows that
δ_y² φ(x, y) = ∂²φ/∂y²(x, y) + (h²/12) ∂⁴φ/∂y⁴(x, η).
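The O(h²) truncation error of the centered second difference is easy to confirm numerically; the test function sin(x) and the sample point below are arbitrary illustrative choices:

```python
import numpy as np

def second_difference(f, x, h):
    # centered second difference delta_x^2, as used in the five point scheme
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

f = np.sin            # smooth test function whose second derivative is -sin
x0 = 0.7
errs = [abs(second_difference(f, x0, h) + np.sin(x0))
        for h in (0.1, 0.05, 0.025)]

# an O(h^2) error halves twice (a factor of four) each time h is halved
print(errs[0] / errs[1], errs[1] / errs[2])  # both ratios close to 4
```

The measured ratios sit very close to 4 because the leading error term is exactly (h²/12) f⁗(ζ), with the next correction O(h⁴).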
9.3 Consistency and Convergence 147
max_j |U(x_j) − w(x_j)| → 0 as h → 0.
For the five point scheme there is a direct connection between consis-
tency and convergence. Underlying this connection is an argument
based on the following principle:
Proof. The proof is by contradiction. We argue the case ∇²_h V_ij ≥ 0; the reasoning for the case ∇²_h V_ij ≤ 0 is similar.
Assume that V attains its maximum value M at an interior grid point (x_I, y_J) and that max_{(x_i,y_j)∈∂Ω_h} V_ij < M. The hypothesis ∇²_h V_ij ≥ 0 implies that
V_{I,J} ≤ (1/4)(V_{I+1,J} + V_{I−1,J} + V_{I,J+1} + V_{I,J−1}).
This cannot hold unless
V_{I+1,J} = V_{I−1,J} = V_{I,J+1} = V_{I,J−1} = M.
Repeating the argument at the neighbouring points eventually forces V_{I+i,J+j} = M at some boundary point (x_{I+i}, y_{J+j}) ∈ ∂Ω_h, which again gives a contradiction. •
1. The only solution to the problem
∇²_h U_ij = 0 for (x_i, y_j) ∈ Ω_h,
U_ij = 0 for (x_i, y_j) ∈ ∂Ω_h,
is U_ij = 0.
2. For prescribed grid functions f_ij and g_ij, there exists a unique solution to the problem
∇²_h U_ij = f_ij for (x_i, y_j) ∈ Ω_h,
U_ij = g_ij for (x_i, y_j) ∈ ∂Ω_h.
Definition For any grid function V : Ω_h ∪ ∂Ω_h → R, let
||V||_Ω = max_{(x_i, y_j) ∈ Ω_h} |V_ij|.
Lemma 9.3.4. If the grid function V : Ω_h ∪ ∂Ω_h → R satisfies the boundary condition V_ij = 0 for (x_i, y_j) ∈ ∂Ω_h, then
||V||_Ω ≤ (1/8) ||∇²_h V||_Ω.
Finally we prove that the five point scheme for the Poisson equation
is convergent.
where
M = max{ ||∂⁴U/∂x⁴||_∞, ||∂⁴U/∂x³∂y||_∞, ..., ||∂⁴U/∂y⁴||_∞ }.
From the truncation error analysis,
(∇²_h − ∇²)U_ij = (h²/12)[∂⁴U/∂x⁴(ζ_i, y_j) + ∂⁴U/∂y⁴(x_i, η_j)],
so
−∇²_h U_ij = f_ij − (h²/12)[∂⁴U/∂x⁴(ζ_i, y_j) + ∂⁴U/∂y⁴(x_i, η_j)].
If we subtract from this the identity −∇²_h w_ij = f_ij and note that U − w vanishes on ∂Ω_h, we find that
∇²_h(U_ij − w_ij) = (h²/12)[∂⁴U/∂x⁴(ζ_i, y_j) + ∂⁴U/∂y⁴(x_i, η_j)].
It follows that
||U − w||_Ω ≤ (1/8) ||∇²_h(U − w)||_Ω ≤ K M h².
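The O(h²) convergence asserted by this bound can be observed numerically. The NumPy sketch below (dense assembly for brevity; the manufactured solution u = sin(πx) sin(πy) and the grid sizes are illustrative choices) solves the five point scheme and checks that halving h divides the maximum error by roughly four:

```python
import numpy as np

def five_point_error(n):
    """Max nodal error of the five point scheme for u = sin(pi x) sin(pi y),
    which satisfies u_xx + u_yy = -2 pi^2 u with zero boundary data."""
    h = 1.0 / n
    m = n - 1                      # interior nodes per direction
    x = np.arange(1, n) * h
    A = np.zeros((m * m, m * m))
    b = np.zeros(m * m)
    for j in range(m):
        for i in range(m):
            k = j * m + i
            A[k, k] = -4.0
            if i > 0:     A[k, k - 1] = 1.0
            if i < m - 1: A[k, k + 1] = 1.0
            if j > 0:     A[k, k - m] = 1.0
            if j < m - 1: A[k, k + m] = 1.0
            b[k] = -2 * np.pi**2 * np.sin(np.pi * x[i]) \
                   * np.sin(np.pi * x[j]) * h**2
    w = np.linalg.solve(A, b)
    exact = np.array([np.sin(np.pi * x[k % m]) * np.sin(np.pi * x[k // m])
                      for k in range(m * m)])
    return np.max(np.abs(w - exact))

e8, e16 = five_point_error(8), five_point_error(16)
print(e8 / e16)   # close to 4, consistent with the K M h^2 bound
```

The ratio is not exactly 4 because of the O(h⁴) remainder, but it approaches 4 as the grids are refined.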
9.4 Elliptic Equations Questions 150
∂²u/∂x² + ∂²u/∂y² = f(x, y)
Ω = {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}.
[10 marks]
b) Consider the problem
∂²u/∂x² + ∂²u/∂y² = 0
Ω = {(x, y) | 0 ≤ x ≤ 1, 0 ≤ y ≤ 1},
u(x, 0) = 4x² − 4x + 1,   u(x, 1) = 4x² − 4x + 1,
∂²u/∂x² + ∂²u/∂y² = f(x, y)
Ω = {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}.
[10 marks]
b) Consider the problem
∂²u/∂x² + ∂²u/∂y² = xy + x²
Ω = {(x, y) | 0 ≤ x ≤ 1, 0 ≤ y ≤ 1},
u( x, 0) = 0, u( x, 1) = 0,
u(0, y) = 0, u(1, y) = 0.
Taking N = 4 steps in the x-direction and M = 4 steps
in the y-direction, set up and write in matrix form (but do
not solve) the corresponding systems of finite difference
equations.
[23 marks]
∂²u/∂x² + ∂²u/∂y² = f(x, y)
Ω = {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}.
[10 marks]
b) Consider the problem
∂²u/∂x² + ∂²u/∂y² = y²
Ω = {(x, y) | 0 ≤ x ≤ 1, 0 ≤ y ≤ 1},
u( x, 0) = x, u( x, 1) = x,
u(0, y) = 0, u(1, y) = 1.
Taking N = 4 steps in the x-direction and M = 4 steps
in the y-direction, set up and write in matrix form (but do
not solve) the corresponding systems of finite difference
equations.
[23 marks]
10
H Y P E R B O L I C E Q U AT I O N S
U(x, t) = U₀(x − at),   t ≥ 0
∂²U/∂t² − γ² ∂²U/∂x² = f,   x ∈ (α, β), t > 0,   (75)
with initial data
U(x, 0) = u₀(x) and ∂U/∂t(x, 0) = v₀(x),   x ∈ (α, β),
and boundary data
10.2 Finite Difference Method for Hyperbolic equations 154
ω₁ = ∂U/∂x,   ω₂ = ∂U/∂t
transforms (75) into
∂ω̂/∂t + A ∂ω̂/∂x = 0,
where
ω̂ = [ω₁, ω₂]ᵀ.
The initial conditions are ω₁(x, 0) = u₀′(x) and ω₂(x, 0) = v₀(x).
Aside
Notice that replacing ∂²u/∂t² by t², ∂²u/∂x² by x², and f by 1, the wave equation becomes
t² − γ²x² = 1,
which represents a hyperbola in the (x, t) plane. Proceeding analogously in the case of the heat equation we end up with
t − x² = 1,
a parabola.
and let
λ = Δt/Δx.
• Lax-Friedrichs method,
u_j^{n+1} = (u_{j+1}^n + u_{j−1}^n)/2 − (λ/2) a (u_{j+1}^n − u_{j−1}^n),
• Lax-Wendroff method,
u_j^{n+1} = u_j^n − (λ/2) a (u_{j+1}^n − u_{j−1}^n) + (λ²/2) a² (u_{j+1}^n − 2u_j^n + u_{j−1}^n),
• Upwind method,
u_j^{n+1} = u_j^n − (λ/2) a (u_{j+1}^n − u_{j−1}^n) + (λ/2) |a| (u_{j+1}^n − 2u_j^n + u_{j−1}^n).
The last three methods can be obtained from the forward Euler/centered method by adding a term proportional to a numerical approximation of a second derivative, so that they can be written in the equivalent form
u_j^{n+1} = u_j^n − (λ/2) a (u_{j+1}^n − u_{j−1}^n) + (k/2) (u_{j+1}^n − 2u_j^n + u_{j−1}^n)/(Δx)².
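The three schemes can be sketched in a few lines of NumPy on a periodic grid; the grid size, the value λ = 0.8, and the choice of a sine wave as test data are illustrative, not from the text:

```python
import numpy as np

def step(u, a, lam, method):
    """One time step for u_t + a u_x = 0 on a periodic grid (lam = dt/dx)."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    if method == "lax_friedrichs":
        return 0.5 * (up + um) - 0.5 * lam * a * (up - um)
    if method == "lax_wendroff":
        return u - 0.5 * lam * a * (up - um) \
               + 0.5 * lam**2 * a**2 * (up - 2 * u + um)
    if method == "upwind":
        return u - 0.5 * lam * a * (up - um) \
               + 0.5 * lam * abs(a) * (up - 2 * u + um)
    raise ValueError(method)

# transport a smooth periodic profile one full period, so u(x, 1) = u(x, 0)
nx, a, lam = 100, 1.0, 0.8
dx = 1.0 / nx
dt = lam * dx
x = np.arange(nx) * dx
u0 = np.sin(2 * np.pi * x)
u = u0.copy()
for _ in range(int(round(1.0 / dt))):
    u = step(u, a, lam, "lax_wendroff")
print(np.max(np.abs(u - u0)))  # small for Lax-Wendroff at lam = 0.8
```

Swapping the method string shows the characteristic behaviour of each scheme: Lax-Wendroff is the most accurate on smooth data, while Lax-Friedrichs and upwind visibly smear the profile.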
10.4 consistency
τjn = L(Ujn )
10.5 stability
A method is said to be stable if, for any T, there exist a constant C_T > 0 and a δ₀ > 0 such that
||uⁿ||_Δ ≤ C_T ||u⁰||_Δ
for any n such that nΔt ≤ T and for any Δt, Δx such that 0 < Δt ≤ δ₀, 0 < Δx ≤ δ₀. We have denoted by ||·||_Δ a suitable discrete norm.
Forward Euler/centered
u_j^{n+1} = u_j^n − (λ/2) a (u_{j+1}^n − u_{j−1}^n).
Truncation error: O(Δt, (Δx)²).
For an explicit method to be stable we need
|aλ| = |a| Δt/Δx ≤ 1.
Substitute the Fourier mode
u_j^n = e^{iβjΔx} ξⁿ,
where
ξ = e^{αΔt}.
It is sufficient to show
|ξ| ≤ 1.
Substituting into the scheme gives
ξ^{n+1} e^{iβjΔx} = ξⁿ e^{iβjΔx} − (λ/2) a (ξⁿ e^{iβ(j+1)Δx} − ξⁿ e^{iβ(j−1)Δx}),
so
ξ = 1 − (λ/2) a (e^{iβΔx} − e^{−iβΔx})
  = 1 − i (λ/2) a (2 sin(βΔx))
  = 1 − iλa sin(βΔx),
and
|ξ| = √(1 + (λa sin(βΔx))²).
Hence |ξ| > 1 whenever sin(βΔx) ≠ 0, and therefore the forward Euler/centered method is unstable even when the Courant Friedrichs Lewy condition holds.
10.6 Courant Friedrichs Lewy Condition 157
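Amplification factors from this kind of von Neumann analysis are easy to tabulate numerically. The sketch below (the factors for Lax-Friedrichs and Lax-Wendroff follow from the same substitution; aλ = 0.5 is an illustrative choice) confirms that only the forward Euler/centered factor exceeds 1:

```python
import numpy as np

def amplification_factor(method, a_lam, theta):
    """|xi| for the Fourier mode u_j^n = xi^n e^{i j theta}, theta = beta dx."""
    if method == "forward_centered":
        xi = 1.0 - 1j * a_lam * np.sin(theta)
    elif method == "lax_friedrichs":
        xi = np.cos(theta) - 1j * a_lam * np.sin(theta)
    elif method == "lax_wendroff":
        xi = 1.0 - 1j * a_lam * np.sin(theta) \
             - a_lam**2 * (1.0 - np.cos(theta))
    else:
        raise ValueError(method)
    return np.abs(xi)

theta = np.linspace(1e-3, np.pi - 1e-3, 500)
print(amplification_factor("forward_centered", 0.5, theta).max())  # above 1
print(amplification_factor("lax_friedrichs", 0.5, theta).max())    # at most 1
print(amplification_factor("lax_wendroff", 0.5, theta).max())      # at most 1
```

For |aλ| ≤ 1 the Lax-Friedrichs and Lax-Wendroff factors stay inside the unit circle for every mode, which is exactly the von Neumann stability criterion.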
Example 50
These methods can be applied to the Burgers equation
∂u/∂t + u ∂u/∂x = 0,
as it is a non-trivial non-linear hyperbolic equation. Take the initial condition
u(x, 0) = u₀(x) =
  1,       x ≤ 0,
  1 − x,   0 ≤ x ≤ 1,
  0,       x ≥ 1.
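A sketch of a conservative upwind scheme applied to this initial condition (valid here because u ≥ 0, so information travels rightwards; the mesh, final time, and the exact solution used for comparison, obtained from the method of characteristics, are all illustrative additions):

```python
import numpy as np

def burgers_upwind(u0, lam, steps):
    """Conservative upwind sketch for u_t + (u^2/2)_x = 0, assuming u >= 0."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        flux = 0.5 * u**2
        u[1:] = u[1:] - lam * (flux[1:] - flux[:-1])  # backward differences
    return u

nx = 301                              # so that dx = 0.01 exactly
x = np.linspace(-1.0, 2.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx                         # CFL: lam * max|u| = 0.5 <= 1
u0 = np.clip(1.0 - x, 0.0, 1.0)       # the initial condition of Example 50
u = burgers_upwind(u0, dt / dx, int(round(0.5 / dt)))
# by characteristics, the exact solution at t = 0.5 is 1 for x <= 0.5,
# 2(1 - x) on [0.5, 1], and 0 for x >= 1; the ramp steepens into a shock at t = 1
```

The computed profile follows the steepening ramp closely away from the kinks, where the first order scheme smears the solution over a few cells.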
Variational methods are based on the fact that the solutions of some
Boundary Value Problems,
−(p(x)u′(x))′ + q(x)u(x) = g(x, u(x)),
u(a) = α,   u(b) = β,     (76)
where
p ∈ C¹[a, b],   p(x) ≥ p₀ > 0,
q ∈ C¹[a, b],   q(x) ≥ 0,     (77)
g ∈ C¹([a, b] × R),   g_u(x, u) ≤ λ₀.
The solution can be written as u(x) = l(x) + y(x), where
l(x) = α (b − x)/(b − a) + β (a − x)/(a − b),   l(a) = α,   l(b) = β,
and y is the solution of a boundary value problem
−(p(x)y′(x))′ + q(x)y(x) = f(x),
y(a) = 0,   y(b) = 0,     (78)
with
u(x) ∈ D_L = {u ∈ C²[a, b] | u(a) = 0, u(b) = 0}.
2.
(f, v) = ∫_a^b f(x) v(x) dx,
where f ∈ L²([a, b]).
From these definitions the Weak Form of the ODE problem (D) is then
given by
a(u, v) = ( f , v),
where u ∈ DL is the solution to the Classical Problem.
The Weak Form of the problem can be written equivalently in a Variational or Minimisation form: find u ∈ D_L such that
F(u) ≤ F(v) for all v ∈ D_L,
2. The function u solves Weak Form (W) if and only if u solves Mini-
mization Form (M).
3. If f ∈ C ([0, 1]) and u ∈ C2 ([0, 1]) solves Weak Form (W), then u
solves Classical Problem (D).
11.1 Ritz -Galerkin Method 160
F(v) = (1/2) a(u + z, u + z) − (f, u + z)
     = F(u) + (1/2) a(z, z) + a(u, z) − (f, z)
     = F(u) + (1/2) a(z, z).
uS = u1 φ1 + u2 φ2 + ... + un φn .
Discrete Weak Form (WS ):
Find uS ∈ S = span{φ1 , φ2 , ..., φn }, n < ∞ such that
a ( u S , v ) = ( f , v ),
u ≈ uS = u1 φ1 + u2 φ2 + ... + un φn .
Similarly the
Discrete Variational/Minimization form (M_S):
Find u_S ∈ S = span{φ₁, φ₂, ..., φ_n}, n < ∞, that satisfies
F(u_S) ≤ F(v) for all v ∈ S,
where
F(v) = (1/2) a(v, v) − (f, v).
Theorem 11.1.1. Given f ∈ L2 ([0, 1]), then (WS ) has a unique solution.
and
F̄ = {F_j} = {(f, φ_j)} = { ∫_a^b f φ_j dx }.
Then we require
a(u_S, v) = a( ∑_{j=1}^n u_j φ_j(x), v ) = (f, v) for all v ∈ S.
Therefore a contradiction.
11.2 Finite Element 162
We now show that the hat functions φi form a basis for the space
Sh (Figure 11.2.1).
Lemma 11.2.1. The set of functions {φ_i}_{i=1}^n is a basis for the space S_h.
Proof. We show first that the set {φ_i}_{i=1}^n is linearly independent. If ∑_{i=1}^n c_i φ_i(x) = 0 for all x ∈ [a, b], then taking x = x_j implies c_j = 0 for each value of j, and hence the functions are independent.
To show S_h = span{φ_i}, we only need to show that
v(x) = v_I(x) = ∑_j v_j φ_j(x) for all v(x) ∈ S_h.
We now consider the matrix equation Aû = F̂ in the case where the basis functions are chosen to be the "hat functions". In this case the elements of A can be found explicitly. We have
φ_i = 0,   φ_i′ = 0,   for x ∉ [x_{i−1}, x_{i+1}) = E_i ∪ E_{i+1},
where
φ_i = (x − x_{i−1})/(x_i − x_{i−1}) = (1/h_i)(x − x_{i−1}),   φ_i′ = 1/h_i,   on E_i,
and
φ_i = (x_{i+1} − x)/(x_{i+1} − x_i) = (1/h_{i+1})(x_{i+1} − x),   φ_i′ = −1/h_{i+1},   on E_{i+1}.
Then
A_{i,i+1} = ∫_{x_i}^{x_{i+1}} (−1/h_{i+1}²) p(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1}²)(x_{i+1} − x)(x − x_i) q(x) dx,
A_{i,i−1} = ∫_{x_{i−1}}^{x_i} (−1/h_i²) p(x) dx + ∫_{x_{i−1}}^{x_i} (1/h_i²)(x_i − x)(x − x_{i−1}) q(x) dx,
and
F_i = ∫_{x_{i−1}}^{x_i} (1/h_i)(x − x_{i−1}) f(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1})(x_{i+1} − x) f(x) dx.
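In the special case p = 1, q = 0 on a uniform mesh these integrals reduce to A_{i,i} = 2/h and A_{i,i±1} = −1/h, which the sketch below assembles. The lumped load F_i ≈ h f(x_i) used here is an illustrative simplification of the exact integral (for constant f the two coincide), and the function and mesh names are assumptions:

```python
import numpy as np

def fem_poisson_1d(f, n):
    """Linear (hat function) FEM for -u'' = f on (0, 1), u(0) = u(1) = 0,
    on a uniform mesh with n elements."""
    h = 1.0 / n
    x = np.arange(1, n) * h            # interior nodes
    m = n - 1
    # stiffness matrix: the p = 1, q = 0 case of the element integrals above
    A = (np.diag(np.full(m, 2.0 / h))
         + np.diag(np.full(m - 1, -1.0 / h), 1)
         + np.diag(np.full(m - 1, -1.0 / h), -1))
    F = h * f(x)                       # lumped load, an approximation in general
    return x, np.linalg.solve(A, F)

# -u'' = 1 has exact solution u = x(1 - x)/2; for constant f the lumped
# load equals the exact integral, and the nodal values come out exact
x, u = fem_poisson_1d(lambda x: np.ones_like(x), 8)
print(np.max(np.abs(u - x * (1 - x) / 2)))
```

Nodal exactness for this model problem is a known special property of linear elements applied to −u″; with variable p and q the full integrals above must be evaluated, usually by quadrature.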
a(u − uS , w) = 0.
Proof. By the Cauchy-Schwarz inequality, we have |a(u, v)| ≤ ||u||_E ||v||_E. Let w = u_S − v ∈ S. Using the previous lemma we obtain
where C is a constant.
Proof. First from the previous theorem we have that
u_I(x) = ∑_j ū_j φ_j,   ū_j = u(x_j).
We assume that
u_S(x) = ∑_j u_j φ_j,
and thus
||e||²_∞ ≤ h_i ∫_{x_i}^{x_{i+1}} (e′(ξ))² dξ ≤ h_i² ||e′||²_∞.
Similarly,
(e′(x))² ≤ ∫_{x_i}^x 1² dξ · ∫_{x_i}^x (e″(ξ))² dξ
         ≤ (x − x_i) ∫_{x_i}^x (e″(ξ))² dξ
         ≤ h_i ∫_{x_i}^{x_{i+1}} (e″(ξ))² dξ,
and thus
||e′||²_∞ ≤ h_i ∫_{x_i}^{x_{i+1}} (e″(ξ))² dξ ≤ h_i² ||e″||²_∞,
where h = max{h_i}.
PROBLEM SHEET
∂U/∂t = ∂²U/∂x²,
supply sample boundary conditions to specify this prob-
lem.
Write a fully implicit scheme to solve this partial differen-
tial equation.
c) Derive the local truncation error for the fully implicit method,
for the heat equation.
d) Show that the method is unconditionally stable using von
Neumann’s method.
∂U/∂t = ∂²U/∂x²,
supply sample boundary conditions to specify this prob-
lem.
Write an explicit scheme to solve this partial differential
equation.
c) Derive the local truncation error for the explicit method,
for the heat equation.
∂U/∂t = ∂²U/∂x²,
U(x, y) = g(x, y),   (x, y) ∈ ∂Ω (the boundary),
using the five point method. Sketch how the finite difference scheme may be rewritten in the form Ax = b, where A is a sparse N² × N² matrix, b is an N² component vector and x is an N² component vector of unknowns. (Assume your 2D discretised grid contains N components in the x and y directions.)
b) Prove (DISCRETE MAXIMUM PRINCIPLE): if ∇²_h V_ij ≥ 0 for all points (x_i, y_j) ∈ Ω_h, then
c) Hence prove:
Let U be a solution to the Poisson equation and let w be
the grid function that satisfies the discrete analog
where
M = max{ ||∂⁴U/∂x⁴||_∞, ||∂⁴U/∂x³∂y||_∞, ..., ||∂⁴U/∂y⁴||_∞ }
and
||V||_Ω ≤ (1/8) ||∇²_h V||_Ω.
∂U/∂t = −a ∂U/∂x + f(x, t),   x ∈ R, t > 0,
U(x, 0) = U₀(x),   x ∈ R,
define what is meant by:
i. convergence,
ii. consistency,
iii. stability.
b) Describe the forward Euler/centered difference method for
the transport equation and derive the local truncation er-
ror.
c) Define the Courant Friedrichs Lewy condition and state
how it is related to stability.
d) Show that the method is stable under the Courant Friedrichs Lewy condition using von Neumann analysis; you may assume f(x, t) = 0.
d²u/dx² + u = x,
with boundary conditions
u(0) = 0,   u(1) = 0,
v(0) = 0,   v(1) = 0,
x_i = ih,
φ_i(x) =
  0,                   0 ≤ x ≤ x_{i−1},
  (x − x_{i−1})/h,     x_{i−1} ≤ x ≤ x_i,
  (x_{i+1} − x)/h,     x_i ≤ x ≤ x_{i+1},
  0,                   x_{i+1} ≤ x ≤ 1.
A finite element approximation to the differential equation
is obtained by approximating u( x ) and v( x ) with linear
combinations of these finite element shape functions, φi ,
where
uⁿ = ∑_{i=1}^{N−1} α_i φ_i(x),
vⁿ = ∑_{j=1}^{N−1} β_j φ_j(x).