
Numerical Methods

Marisa Villano, Tom Fagan, Dave Fairburn, Chris Savino, David Goldberg, Daniel Rave

An Overview
The Method of Finite Differences
Error Approximations and Dangers
Approximations to Diffusions
Crank-Nicolson Scheme
Stability Criterion

Finite Differences
The best-known numerical method of approximation
Marisa Villano

Finite Differences

Approximating the derivative with a difference quotient from the Taylor series

Function of One Variable


Choose mesh size Δx
Then u_j ≈ u(jΔx)

First Derivative Approximations

Backward difference: (u_j − u_{j−1}) / Δx

Forward difference: (u_{j+1} − u_j) / Δx

Centered difference: (u_{j+1} − u_{j−1}) / (2Δx)
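To make these difference quotients concrete, here is a minimal sketch (Python with NumPy assumed; the test function sin x and the mesh values are illustrative choices, not from the slides) comparing the three quotients with the exact derivative:

```python
import numpy as np

dx = 0.1
x = np.arange(0.0, 2.0 + dx, dx)   # mesh points x_j = j * dx
u = np.sin(x)                      # sample function values u_j = u(j * dx)
j = 10                             # an interior mesh point, x_j = 1.0
exact = np.cos(x[j])               # exact derivative of sin at x_j

backward = (u[j] - u[j - 1]) / dx            # (u_j - u_{j-1}) / dx
forward  = (u[j + 1] - u[j]) / dx            # (u_{j+1} - u_j) / dx
centered = (u[j + 1] - u[j - 1]) / (2 * dx)  # (u_{j+1} - u_{j-1}) / (2 dx)

for name, value in [("backward", backward), ("forward", forward), ("centered", centered)]:
    print(f"{name:8s}  approx = {value:.6f}   error = {abs(value - exact):.2e}")
# The one-sided quotients have O(dx) error; the centered quotient has O(dx^2) error.
```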

Taylor Expansion

u(x + Δx) = u(x) + u'(x)Δx + (1/2) u''(x)(Δx)^2 + (1/6) u'''(x)(Δx)^3 + O((Δx)^4)

u(x − Δx) = u(x) − u'(x)Δx + (1/2) u''(x)(Δx)^2 − (1/6) u'''(x)(Δx)^3 + O((Δx)^4)

Taylor Expansion
u'(x) = [u(x) − u(x − Δx)] / Δx + O(Δx)
u'(x) = [u(x + Δx) − u(x)] / Δx + O(Δx)
u'(x) = [u(x + Δx) − u(x − Δx)] / (2Δx) + O((Δx)^2)

Second Derivative Approximation

Centered difference: (u_{j+1} − 2u_j + u_{j−1}) / (Δx)^2

Taylor Expansion
u''(x) = [u(x + Δx) − 2u(x) + u(x − Δx)] / (Δx)^2 + O((Δx)^2)
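A quick numerical check of the quoted error order, again a sketch assuming NumPy and the illustrative test function sin x: halving Δx should roughly quarter the error of the centered second difference.

```python
import numpy as np

def centered_second_diff(u, x, dx):
    # [u(x + dx) - 2 u(x) + u(x - dx)] / (dx)^2
    return (u(x + dx) - 2 * u(x) + u(x - dx)) / dx**2

x0 = 1.0
exact = -np.sin(x0)   # exact second derivative of sin at x0
for dx in (0.1, 0.05, 0.025):
    err = abs(centered_second_diff(np.sin, x0, dx) - exact)
    print(f"dx = {dx:6.3f}   error = {err:.3e}")
# Each halving of dx cuts the error by roughly a factor of 4, consistent with O((dx)^2).
```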

Function of Two Variables

u(jΔx, nΔt) ≈ u_j^n
Backward difference for t and x:
u_t(jΔx, nΔt) ≈ (u_j^n − u_j^{n−1}) / Δt
u_x(jΔx, nΔt) ≈ (u_j^n − u_{j−1}^n) / Δx

Function of Two Variables

Forward difference for t and x:
u_t(jΔx, nΔt) ≈ (u_j^{n+1} − u_j^n) / Δt
u_x(jΔx, nΔt) ≈ (u_{j+1}^n − u_j^n) / Δx

Function of Two Variables

Centered difference for t and x:
u_t(jΔx, nΔt) ≈ (u_j^{n+1} − u_j^{n−1}) / (2Δt)
u_x(jΔx, nΔt) ≈ (u_{j+1}^n − u_{j−1}^n) / (2Δx)

Error

Truncation Error: introduced in the solution by the approximation of the derivative
Local Error: from each term of the equation
Global Error: from the accumulation of local error
Roundoff Error: introduced in the computation by the finite number of digits used by the computer
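To illustrate the tension between truncation and roundoff error, a sketch (Python/NumPy assumed; the test function and step sizes are illustrative choices): shrinking Δx reduces the truncation error of a forward difference, but once Δx is very small the roundoff in the finite-precision subtraction dominates and the total error grows again.

```python
import numpy as np

x0 = 1.0
exact = np.cos(x0)   # exact derivative of sin at x0
for dx in (1e-1, 1e-4, 1e-8, 1e-12):
    approx = (np.sin(x0 + dx) - np.sin(x0)) / dx   # forward difference
    print(f"dx = {dx:.0e}   total error = {abs(approx - exact):.2e}")
# The error first shrinks with dx (truncation error), then grows again for very
# small dx as roundoff in the subtraction takes over.
```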

The Dangers of the Finite Difference Method
Evidence from an example in 8.1

Dave Fairburn

Example from 8.1


Consider u_t = u_xx with u(x, 0) = h(x)
We will use the finite difference method to approximate the solution
Forward difference for u_t
Centered difference for u_xx
Rewrite the equation in terms of the finite difference approximations

Finite Difference Eqn.

(u_j^{n+1} − u_j^n) / Δt = (u_{j+1}^n − 2u_j^n + u_{j−1}^n) / (Δx)^2

Error: The local truncation error is O(Δt) from the left-hand side and O((Δx)^2) from the right-hand side.

Assumptions
Assume that we choose a small mesh size Δx, and that the denominators on both sides of the equation are equal, i.e. Δt = (Δx)^2.
We are now left with the scheme:
u_j^{n+1} = u_{j+1}^n − u_j^n + u_{j−1}^n
Solving for u with this scheme is now easy to do once we have the initial data.

Initial Data
Let u(x, 0) = h(x) = a step function with the following properties:
h_j = 0 for all j except j = 5, so
h_j = 0 0 0 0 1 0 0 0 0 0 0 ...
Initially, only the point at j = 5 has the value 1.
j serves as the counter for the x values.

How to solve?

We know u_j^0 = 1 at j = 5 and u_j^0 = 0 at all other j initially (given by the superscript 0).
We can plug into our scheme to solve for u_j^1 at all j:
u_j^1 = u_{j−1}^0 − u_j^0 + u_{j+1}^0
u_5^1 = −1;  u_4^1 = 1;  u_6^1 = 1
Now we can continue to increase the number of iterations, n, and create a table (a code sketch of this iteration follows below).
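A minimal sketch of this iteration (Python with NumPy assumed; the array layout and boundary handling are illustrative choices), which reproduces the values in the table on the next slide:

```python
import numpy as np

u = np.zeros(11)   # j = 1, ..., 11 stored in indices 0, ..., 10
u[4] = 1.0         # h_j = 1 at j = 5, 0 elsewhere

for n in range(1, 5):
    new = np.zeros_like(u)
    # u_j^{n+1} = u_{j+1}^n - u_j^n + u_{j-1}^n, with the missing neighbors
    # outside the listed range treated as 0
    new[1:-1] = u[2:] - u[1:-1] + u[:-2]
    new[0] = u[1] - u[0]
    new[-1] = u[-2] - u[-1]
    u = new
    print(f"n = {n}:", u.astype(int))
# By n = 4 the value at j = 5 has grown to 19, far outside the range [0, 1]
# allowed by the maximum principle.
```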

Solution for 4 iterations

         j = 1    2    3    4    5    6    7    8    9   10   11
n = 0:       0    0    0    0    1    0    0    0    0    0    0
n = 1:       0    0    0    1   -1    1    0    0    0    0    0
n = 2:       0    0    1   -2    3   -2    1    0    0    0    0
n = 3:       0    1   -3    6   -7    6   -3    1    0    0    0
n = 4:       1   -4   10  -16   19  -16   10   -4    1    0    0

(rows are n values, columns are j values)

Analysis of Solution

Is this solution viable?
The maximum principle states that the solution must remain between 0 and 1, given our initial data.
At n = 4, our solution has already ballooned to u = 19!
Clearly, there are cases where the finite difference method can pose serious problems.

Charting the Error

Assume the solution is constant and equal to 0.5 (halfway between the possible 0 and 1).

Lessons Learned
While the finite difference method is easy
and convenient to use in many cases,
there are some dangers associated with
the method.
We will investigate why the assumption
that allowed us to simplify the scheme
could have been a major contributor to
the large error.

Approximations of Diffusions
Neumann Boundary Conditions
and the Crank-Nicolson Scheme
Chris Savino

Approximations of Diffusions

Errors have accumulated from the approximations of the derivatives using the previous scheme.
The problem is the choice of the mesh Δt relative to the mesh Δx.
Let s = Δt / (Δx)^2
Then we can solve the scheme for:
u_j^{n+1} = s(u_{j+1}^n + u_{j−1}^n) + (1 − 2s) u_j^n
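A sketch of this scheme as a reusable step function (Python/NumPy assumed; holding the two end values at 0 is an illustrative Dirichlet choice, not fixed by the slides):

```python
import numpy as np

def explicit_step(u, s):
    """One step of u_j^{n+1} = s(u_{j+1}^n + u_{j-1}^n) + (1 - 2s) u_j^n.

    Only interior points are updated; the two end values are held at 0
    (an illustrative Dirichlet choice).
    """
    new = np.zeros_like(u)
    new[1:-1] = s * (u[2:] + u[:-2]) + (1 - 2 * s) * u[1:-1]
    return new

# Example: a few steps starting from a spike, with s = 0.4
u = np.zeros(11)
u[5] = 1.0
for n in range(3):
    u = explicit_step(u, s=0.4)
print(np.round(u, 4))
```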

Neumann Boundary Conditions

u_x(0, t) = g(t)
u_x(l, t) = h(t)

The simplest approximations are
(u_1^n − u_0^n) / Δx = g^n
(u_J^n − u_{J−1}^n) / Δx = h^n     (where J is the last mesh point)

To get the smallest error, we use centered differences for the derivatives on the boundary.
Introduce ghost points u_{−1}^n and u_{J+1}^n.
The boundary conditions become
(u_1^n − u_{−1}^n) / (2Δx) = g^n
(u_{J+1}^n − u_{J−1}^n) / (2Δx) = h^n
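A sketch of how these ghost points might be folded into the explicit scheme (Python/NumPy assumed; the step function and the particular g^n, h^n values below are illustrative, not from the slides):

```python
import numpy as np

def explicit_step_neumann(u, s, dx, g_n, h_n):
    """One explicit step with Neumann data u_x(0,t) = g(t), u_x(l,t) = h(t),
    using centered differences at the boundary via ghost points."""
    # Ghost values from (u_1 - u_{-1})/(2 dx) = g^n and (u_{J+1} - u_{J-1})/(2 dx) = h^n
    ghost_left = u[1] - 2 * dx * g_n
    ghost_right = u[-2] + 2 * dx * h_n
    padded = np.concatenate(([ghost_left], u, [ghost_right]))
    # Apply u_j^{n+1} = s(u_{j+1}^n + u_{j-1}^n) + (1 - 2s) u_j^n at every real point
    return s * (padded[2:] + padded[:-2]) + (1 - 2 * s) * padded[1:-1]

# Example: insulated ends (g = h = 0) gradually smooth out an initial spike
u = np.zeros(11)
u[5] = 1.0
for n in range(5):
    u = explicit_step_neumann(u, s=0.4, dx=0.1, g_n=0.0, h_n=0.0)
print(np.round(u, 4))
```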

Crank-Nicolson Scheme
Can avoid any restrictions on stability
conditions
Unconditionally stable no matter what the
value of s is.

Centered second difference (written here as (δ²u)_j^n):
(δ²u)_j^n = (u_{j+1}^n − 2u_j^n + u_{j−1}^n) / (Δx)^2

Pick a number θ between 0 and 1.
Theta scheme:
(u_j^{n+1} − u_j^n) / Δt = (1 − θ)(δ²u)_j^n + θ(δ²u)_j^{n+1}

We analyze the scheme by plugging in a separated solution
u_j^n = (e^{ikΔx})^j (ξ(k))^n

Therefore
ξ(k) = [1 − 2(1 − θ) s (1 − cos kΔx)] / [1 + 2θ s (1 − cos kΔx)]

We must check the stability condition |ξ(k)| ≤ 1.
Since ξ(k) ≤ 1 always holds, the condition reduces to ξ(k) ≥ −1, which gives
s(1 − 2θ)(1 − cos kΔx) ≤ 1

If 1 − 2θ ≤ 0, that is θ ≥ 1/2, this is always true:
there is no restriction on the size of s for stability to hold, and the scheme is unconditionally stable.
When θ = 1/2 it is called the Crank-Nicolson scheme.
If θ < 1/2, the scheme is stable only if
s = Δt / (Δx)^2 ≤ 1 / (2 − 4θ)
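A small numerical check of this analysis (Python/NumPy assumed; the particular θ and s values are illustrative): evaluate ξ(k) over a range of wave numbers and confirm that max |ξ(k)| ≤ 1 exactly when the condition above holds.

```python
import numpy as np

def max_amplification(theta, s, dx=0.1):
    """Max of |xi(k)| over a range of wave numbers k for the theta scheme."""
    k = np.linspace(0.0, np.pi / dx, 1000)
    a = 1.0 - np.cos(k * dx)
    xi = (1 - 2 * (1 - theta) * s * a) / (1 + 2 * theta * s * a)
    return np.abs(xi).max()

for theta, s in [(1.0, 10.0), (0.5, 10.0), (0.25, 0.9), (0.25, 1.1), (0.0, 0.45), (0.0, 0.55)]:
    print(f"theta = {theta:4.2f}  s = {s:5.2f}  max|xi| = {max_amplification(theta, s):.3f}")
# theta >= 1/2: max|xi| <= 1 no matter how large s is.
# theta <  1/2: max|xi| <= 1 only when s <= 1/(2 - 4*theta)
#               (here 1.0 for theta = 0.25 and 0.5 for theta = 0).
```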

Stability Criterion
Approximations of the diffusion equation, u_t = u_xx

David Goldberg

Stability Criterion
The method of finite differences gives an
answer, but it does not guarantee that this
answer is meaningful.
Values must be chosen appropriately, to
ensure that the results make sense and
are applicable to real world scenarios.
The condition that the values must satisfy for the results to be meaningful is called the stability criterion.

Example

As per the book, take, for instance, the diffusion problem:
u_t = u_xx   for 0 < x < π, t > 0
u = 0   at x = 0 and x = π, that is, u(0, t) = u(π, t) = 0
u(x, 0) = φ(x), where φ(x) = x in (0, π/2) and φ(x) = π − x in (π/2, π)

Example, continued

As can be easily shown, the graph of φ(x) looks like this.
[Figure: plot of φ(x), rising linearly from 0 at x = 0 to π/2 at x = π/2, then falling linearly back to 0 at x = π.]

Example, continued
In attempting to use the method of finite differences, we are using a forward difference for u_t and a centered difference for u_xx.
This means that
(u_j^{n+1} − u_j^n) / Δt = (u_{j+1}^n − 2u_j^n + u_{j−1}^n) / (Δx)^2

It is important to note here that the superscript n denotes a counter on the t variable, and the subscript j denotes a counter on the x variable.

Example, continued

In order to make the calculations a bit cleaner, we are introducing a variable, s, defined by s = Δt / (Δx)^2.

Rearranging, we have
u_j^{n+1} = s(u_{j+1}^n − 2u_j^n + u_{j−1}^n) + u_j^n
u_j^{n+1} = s u_{j+1}^n − 2s u_j^n + u_j^n + s u_{j−1}^n
u_j^{n+1} = s(u_{j+1}^n + u_{j−1}^n) + (1 − 2s) u_j^n

It would be nice if we could just plug in values and get a valid result.

Example, continued

However, putting in different values can lead to results that are close to, or far from, the actual answer.
For instance, letting Δx = π/20 and s = 5/11, we get a relatively nice result. Letting s = 5/9 does not give such a nice result.
[Figures: the computed solutions with Δx = π/20, for s = 5/11 (a reasonably behaved result) and for s = 5/9 (a badly behaved result).]
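A sketch that reproduces this experiment (Python/NumPy assumed; the final time and the use of max |u| as the measure of quality are illustrative choices):

```python
import numpy as np

def run_scheme(s, dx, t_final):
    """Explicit scheme u_j^{n+1} = s(u_{j+1}^n + u_{j-1}^n) + (1 - 2s) u_j^n for
    u_t = u_xx on (0, pi) with u = 0 at both ends and the triangular initial data phi."""
    x = np.arange(0.0, np.pi + dx / 2, dx)
    u = np.where(x < np.pi / 2, x, np.pi - x)   # phi(x)
    dt = s * dx**2
    for _ in range(int(t_final / dt)):
        new = u.copy()
        new[1:-1] = s * (u[2:] + u[:-2]) + (1 - 2 * s) * u[1:-1]
        u = new
    return u

dx = np.pi / 20
for s in (5 / 11, 5 / 9):
    u = run_scheme(s, dx, t_final=1.0)
    print(f"s = {s:.3f}   max |u| at t = 1: {np.max(np.abs(u)):.3e}")
# s = 5/11 stays bounded (the true solution decays toward 0); s = 5/9 blows up.
```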

So what, of significance, changes?

Example, continued

As it turns out, changing the value of s can significantly change the validity of the solution. To see why, we return to our equation.
u_j^{n+1} = s(u_{j+1}^n + u_{j−1}^n) + (1 − 2s) u_j^n
Separate variables: u = XT, so u_j^n = X_j T_n.
Then, by combining like terms,
T_{n+1} / T_n = s (X_{j+1} + X_{j−1}) / X_j + (1 − 2s)

Example, continued

Since the left-hand side is a function of n alone and the right-hand side is a function of j alone, they must both equal a constant, call it ξ.
T_{n+1} / T_n = ξ,  so  T_{n+1} = ξ T_n  and  T_n = ξ^n T_0
and also
s (X_{j+1} + X_{j−1}) / X_j + (1 − 2s) = ξ

Example, continued

This is a discrete version of an ODE, which when solved gives X_j = (e^{ikΔx})^j, so that
ξ = 1 − 2s + s(e^{ikΔx} + e^{−ikΔx}) = 1 − 2s + 2s cos(kΔx)
Since, as discovered before, T_n = ξ^n T_0, if |ξ| > 1 then T will grow without bound.
The worst case is cos(kΔx) = −1, where ξ = 1 − 4s.
So stability requires −1 ≤ 1 − 4s ≤ 1, i.e. s ≤ 1/2.
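A quick check of this bound against the two s values used earlier (Python/NumPy assumed; sampling k over a grid of wave numbers is an illustrative choice):

```python
import numpy as np

def max_abs_xi(s, dx=np.pi / 20):
    # xi(k) = 1 - 2s + 2s cos(k dx); the worst case is cos(k dx) = -1
    k = np.linspace(0.0, np.pi / dx, 500)
    return np.max(np.abs(1 - 2 * s + 2 * s * np.cos(k * dx)))

for s in (5 / 11, 5 / 9):
    print(f"s = {s:.3f}   max|xi| = {max_abs_xi(s):.3f}   s <= 1/2: {s <= 0.5}")
# s = 5/11 gives max|xi| = 1 (attained at k = 0), so no mode grows;
# s = 5/9 gives |1 - 4s| = 1.222 > 1, so the highest mode is amplified each step.
```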

Example, finished

Thus, to achieve stability, s = Δt / (Δx)^2 ≤ 1/2. This is why setting s = 5/9 didn't give a valid result.
It is to be noted that usually the necessary criterion is |ξ| ≤ 1 + O(Δt) instead of |ξ| ≤ 1, but in this case the difference was irrelevant.
So the stability criterion must be worked out before one can effectively use the method of finite differences.

Approximations of Diffusions
Example from 8.2

Daniel Rave

Summary

Brief Review of Methods

Wide Applicability

Importance of Stability
