Senior Paper Rough Draft
5/19/08
Numerical Solution of
Parabolic Partial Differential Equations
with Applications in Finance
Corey Becker
Project Advisor: Dr. Daniel Willis
May 19, 2009
In the world of quantitative finance, there is a wide array of advanced mathematical tools used to help model the various markets. Math and finance collide daily on Wall Street, and one of the countless applications has received Nobel Prize recognition. The work of Fischer Black and Myron Scholes has been at the forefront of mathematical finance since 1973. While some will always debate its merits, the impact it had on the investment world is indisputable. In order to better understand how one mathematical model could change the world, we will look at the mathematics behind it.

The Power of Taylor Series
The majority of functions f(x) found in mathematics cannot be evaluated exactly by simple means. For example, calculating the everyday functions f(x) = cos(x), e^x, or √x can be a challenging task. However, if another function can be used to approximate the original function, then one can develop an efficient estimate much more easily. The most common approximating functions are polynomials, and the most common of these is the Taylor polynomial. The Taylor polynomial is used often due to its easy construction and its role as the foundation of more accurate methods. In order to construct an equation that we can use to accurately approximate the original, less tractable, equations we must generate a polynomial that mimics the behavior of the initial function. One way to ensure this is to set their characteristics equal to each other. By characteristics I mean the position f(a), velocity f'(a), acceleration f''(a), and the value of every other derivative at the point a. Given a position and information as to how that position will change (the derivatives), we can develop a function to fit these conditions. The Taylor polynomial of degree n is
p_n(x) = \sum_{j=0}^{n} \frac{(x-a)^j}{j!}\, f^{(j)}(a)
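As a quick illustration of the formula, here is a small Python sketch (my example, not from the paper) that builds the Taylor polynomial of cos about a = 0, where the derivatives cycle through the values 1, 0, −1, 0:

```python
import math

def taylor_cos(x, n):
    """Degree-n Taylor polynomial of cos about a = 0.

    The derivatives of cos at 0 repeat with period 4: 1, 0, -1, 0, ...
    so f^(j)(0) can be read off a four-entry table.
    """
    coeffs = [1.0, 0.0, -1.0, 0.0]  # f^(j)(0), repeating with period 4
    return sum(coeffs[j % 4] * x ** j / math.factorial(j) for j in range(n + 1))

# the error at x = 1 shrinks rapidly as the degree n grows
err_low = abs(taylor_cos(1.0, 2) - math.cos(1.0))
err_high = abs(taylor_cos(1.0, 8) - math.cos(1.0))
print(err_low, err_high)
```

Raising the degree from 2 to 8 shrinks the error by several orders of magnitude, matching the intuition that more derivative information gives a better representation.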
The basic idea, as stated above, is that if we know all the information about how the value f(a) changes with the independent variable, then we can create a function that corresponds to the original function but is much easier to use in computations. It seems reasonable to assume that the more derivatives, and thus the more information, we have about a function, the more accurate our representation will be. The Taylor series above can be truncated by limiting the order n. For example, for a linear approximation we can set p_1(x) = f(a) + (x - a) f'(a), where we have the initial position and the slope but nothing else (thus linear). This can be accurate for points near a; however, assuming that f(x) is not a line, it will not be accurate as we move further from a. Therefore, we can take the linear approximations at several points to extrapolate more values, but they will never be as accurate as higher orders since we have less knowledge about the function's movement. When calculating these linear approximations we make a few alterations to arrive at a very recognizable formula (Brandimarte 295). For simplicity's sake we can substitute the variable x for a and truncate the series after the first derivative term:
f'(x) = \frac{f(x+h) - f(x)}{h} + O(h)
This is the forward approximation of the derivative (often referred to as Newton's difference quotient), which any calculus student should recognize. If we add a limit as h approaches zero and drop the error term O(h), we recover the formal definition of the derivative. We can also formulate the backward difference approximation by taking the step h in the opposite direction (replace h with −h, then solve for f'(x)):
f'(x) = \frac{f(x) - f(x-h)}{h} + O(h)
The O(h) element in these two equations represents the error assumed when we curtail the
approximation at n=1. It signifies all the information that we are missing; therefore, it denotes the
accuracy of our numerical solutions. From here on, one of our objectives will be to reduce this error as
much as possible.
A third difference method can reduce the error to O(h^2). Taking the difference between the forward and backward Taylor expansions and dividing by the two steps will give us the following equation:
f(x+h) - f(x-h) = 2h\, f'(x) + \frac{2h^3}{3!} f'''(a) + \cdots

Or

f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2)
This is more accurate because it takes the average of the first two approximations (Brandimarte 295). Typically, if the forward approximation results in an overestimate, the backward approximation will output an underestimate and vice versa; averaging the two is therefore logical.
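To see the gain concretely, here is a small Python sketch (my illustration, not one of the paper's MATLAB runs) comparing the O(h) forward formula with the O(h^2) central formula on f(x) = e^x, whose derivative is known exactly:

```python
import math

def forward_diff(f, x, h):
    # O(h) forward approximation: (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # O(h^2) central approximation: (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

# differentiate exp at x = 1; the true derivative there is e
x, h = 1.0, 0.01
err_fwd = abs(forward_diff(math.exp, x, h) - math.e)
err_ctr = abs(central_diff(math.exp, x, h) - math.e)
print(err_fwd, err_ctr)  # the central error is far smaller
```

With h = 0.01 the central error is smaller by roughly a factor of h itself, exactly the difference between O(h) and O(h^2).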
Eventually, we will need to approximate the second derivative for our final equation so we will
derive it here. We would prefer an approximation of the second derivative using only function values,
without having to calculate first derivatives. If we add the Taylor expansions of f(x+h) and f(x−h), the odd-order terms cancel:

f(x+h) + f(x-h) = 2f(x) + h^2 f''(x) + \frac{2h^4}{4!} f^{(4)}(a) + \cdots

Or

f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2)
Here we have a second-derivative approximation of f(x) with second-order accuracy (Brandimarte 295). We will refer back to this equation when we need to estimate the second derivative of a function.
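A quick numerical check of this second-difference formula, sketched in Python (my example) with f = sin, whose exact second derivative is −sin:

```python
import math

def second_diff(f, x, h):
    # central second difference: (f(x+h) - 2 f(x) + f(x-h)) / h^2, error O(h^2)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# second derivative of sin is -sin; halving h should quarter the error
err_big = abs(second_diff(math.sin, 1.0, 0.1) + math.sin(1.0))
err_small = abs(second_diff(math.sin, 1.0, 0.05) + math.sin(1.0))
print(err_big, err_small)  # roughly a factor of four apart
```

The two errors differ by about a factor of four when h is halved, which is the signature of an O(h^2) method.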
Direction Fields
One way we can see a graphical representation of these linear approximations is with a direction field. This graphic plots short line segments with the slope of the function at corresponding values of y. For example, if we wanted to plot a direction field for y' = y (whose solution is e^x), it would look something like this:

As you can see, depending on initial conditions, these segments will match up with the instantaneous slope at the respective y-values. Once we are provided with initial conditions, we can isolate the specific set of points and slopes and draw an integral curve like those seen to the right. These graphical representations are not incredibly helpful in the formulation of a numerical solution, but they provide an excellent view of a differential equation's behavior.
Euler’s Method
One specific formula that arises from the Taylor expansion is known as Euler’s Method.
Essentially this method is a first-order Taylor approximation with some different notation seen below:
For the initial value problem

y'(x) = f(x, y), \qquad y(x_0) = y_0

the derivative is replaced with the forward difference:

y'(x_n) \approx \frac{y(x_n + h) - y(x_n)}{h} \quad \text{or} \quad y'(x_n) \approx \frac{y_{n+1} - y_n}{h}

which gives the update

y_{n+1} = y_n + h\, y'(x_n)

y_{n+1} = y_n + h\, f(x_n, y_n)
To demonstrate this approximation we will take the function y = tan(x), which satisfies y' = y^2 + 1 with y(0) = 0. Using a program written in MATLAB with h = 0.2, we are given the following estimations:
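The original MATLAB program is not reproduced here, but the method is short enough to sketch in Python; with h = 0.2, five steps land at x = 1, where the exact value is tan(1) ≈ 1.557:

```python
import math

def euler(f, x0, y0, h, steps):
    """Forward Euler: y_{n+1} = y_n + h * f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# y' = y^2 + 1 with y(0) = 0 has exact solution y = tan(x)
approx = euler(lambda x, y: y ** 2 + 1, 0.0, 0.0, 0.2, 5)
print(approx, math.tan(1.0))  # the Euler value undershoots tan(1)
```

The result is noticeably below tan(1), consistent with the rough errors discussed next.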
These values are obviously a very rough estimate, but they provide a fine building block. Now, instead of using the forward difference approximation, we can use the backward difference approximation, which will give us the backward Euler method. We will start by performing some algebra as before:
f'(x) = \frac{f(x) - f(x-h)}{h}

which leads to the implicit update

y_{n+1} = y_n + h\, f(x_{n+1}, y_{n+1})
The backward Euler method appears to be a simple change to the Euler equation, but it brings about a much different process and thus a much different result. Difference methods can be classified into two categories, implicit and explicit. Euler's method is explicit because it uses the information at the current point to project forward and estimate the next point n+1. On the other hand, the backward Euler equation is implicit because it uses both the current solution (y_n) and the slope at the next solution, f(x_{n+1}, y_{n+1}) (usually found using iterative methods), to approximate the differential equation. As one may infer, this extra step of approximating an additional solution at x_{n+1} can be costly in terms of computational effort. However, in some cases implicit methods are the only way to avoid unfortunate situations that arise in equations that are known to be "stiff." These equations will be examined in the section on stability below.
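To make the implicit procedure concrete, here is a Python sketch (an illustration of mine, not the paper's program) of backward Euler with the inner equation resolved by simple fixed-point iteration; the decay problem y' = −y, y(0) = 1, with exact solution e^(−x), is chosen so the inner iteration is guaranteed to converge:

```python
import math

def backward_euler(f, x0, y0, h, steps, iters=50):
    """Implicit Euler: y_{n+1} = y_n + h * f(x_{n+1}, y_{n+1}).

    The unknown y_{n+1} appears on both sides, so each step solves that
    equation by fixed-point iteration (a sketch; production codes would
    typically use Newton's method instead).
    """
    x, y = x0, y0
    for _ in range(steps):
        x_next = x + h
        y_next = y  # initial guess: the current value
        for _ in range(iters):
            y_next = y + h * f(x_next, y_next)
        x, y = x_next, y_next
    return y

# y' = -y, y(0) = 1: the exact implicit iterate is y_n = (1/(1+h))^n
approx = backward_euler(lambda x, y: -y, 0.0, 1.0, 0.2, 5)
print(approx, math.exp(-1.0))
```

The inner loop is exactly the "extra step" the text describes: each time step costs many function evaluations rather than one.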
As we were able to see in the previous table, there is much room for improvement. One way to make the approximation more accurate, mentioned earlier, is to add another derivative, thus increasing the order of the Taylor expansion. Continuing the previous example using tan(x), we differentiate y' = y^2 + 1 to obtain the second derivative:

y''_n = \frac{d}{dx}\left[y^2 + 1\right] = \frac{d}{dx}\left[\tan^2(x) + 1\right] = 2\tan(x)\left[\tan^2(x) + 1\right] = 2y(y^2 + 1)

so the second-order step becomes

y_{n+1} = y_n + h(y_n^2 + 1) + h^2\, y_n (y_n^2 + 1)
Plugging this equation into the MATLAB program generates the following results:
As was expected, this method was much more accurate but still has unacceptable errors. Therefore, we
will look at other ways to create the most accurate approximation possible.
An idea that was touched on with the central difference method was taking an average of two slopes near x, so now we will test its efficiency. The Runge-Kutta approach to approximating differential equations uses another method to predict f(x_{n+1}, y_{n+1}) and then averages it with f(x_n, y_n) by adding them and dividing by 2. The specific Runge-Kutta formula we will be using utilizes Euler's method as the predictor for f(x_{n+1}, y_{n+1}). This approach is known as Heun's method and is also a type of trapezoidal method. Using the tan(x) example equation again, Heun's approximation looks like the following:

y_{n+1} = y_n + h\left[\frac{f(x_n, y_n) + f(x_{n+1}, y_{n+1})}{2}\right]

Replacing f(x_{n+1}, y_{n+1}) with the Euler approximation and plugging our example in for f(x_n, y_n), we get

y_{n+1} = y_n + \frac{h}{2}\left[(y_n^2 + 1) + \left(\left(y_n + h(y_n^2 + 1)\right)^2 + 1\right)\right]
Using this formula and the same MATLAB program from before we generate the following values:
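For readers without MATLAB, a minimal Python sketch of Heun's method applied to the same test problem (y' = y^2 + 1, y(0) = 0, h = 0.2):

```python
import math

def heun(f, x0, y0, h, steps):
    """Heun's method: average the slope at x_n with the slope at the
    Euler-predicted point, then take the step with that average."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)               # slope at the current point
        k2 = f(x + h, y + h * k1)  # slope at the Euler predictor
        y += h * (k1 + k2) / 2
        x += h
    return y

# same test problem as before; the exact value at x = 1 is tan(1) ~ 1.557
y_heun = heun(lambda x, y: y ** 2 + 1, 0.0, 0.0, 0.2, 5)
print(y_heun, math.tan(1.0))
```

Even at the coarse step h = 0.2, the Heun value lands much closer to tan(1) than forward Euler does, reflecting its O(h^2) error.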
Below is a graph of the errors to graphically demonstrate the behavior of the numerical method as x
increases.
Here is a comparison of the three methods’ errors as calculated by the MATLAB program:
After a brief analysis of the errors, we can see we are on the right track. Considering the size of h=.2,
these numbers are fairly impressive. Once we decrease the increment size we should be able to have a
very accurate approximation for our purposes. A further analysis of the errors demonstrates the advantage of the O(h^2) approximation. If we compare the errors of Euler's and Heun's methods in a different way, the significance of h becomes clearer. Below, e^8 is calculated starting from x_0 = 0 (actual value ≈ 2980.96):
This output data confirms the premise behind O(h) notation. Assuming h is a very small number, h^2 will be an even smaller number. As in the Euler approximation above, a method with O(h) error will see its error decrease linearly with the size of h: if h is cut in half, the error is likewise halved. In the case of Heun's method, however, since the error is O(h^2), each decrease in h produces a quadratic decrease in the error. As h is halved above, the error of Heun's method is cut to a fourth. One more detail that should be mentioned is the efficiency of these methods. A disadvantage of Heun's method and the backward Euler method pertains to the extra calculations needed to approximate the slopes. Heun's requires an Euler approximation within each step, so naturally it entails more computational effort. The backward Euler equation requires the user to implement another method to approximate y_{n+1}. The "elapsed time" values above give evidence of this fact. Both should be considered in cases that involve complicated and time-consuming functions. Accuracy and efficiency must be weighed together when working with more advanced applications and limited computational power, but another aspect that demands even more attention in solving equations numerically is known as stability.
Stability
The errors from these methods may be acceptable for some applications but as we apply them
to more demanding functions, we need to have confidence that they will maintain their accuracy. As
was mentioned earlier, there are certain functions that do not conform well to explicit methods such as
Heun’s and Euler’s formulas. These models’ errors will begin to grow rapidly or “blow up” as it is often
described. Functions that do not conform to explicit methods are often called “stiff” functions. The
easiest way to understand the behavior of these stiff functions is to plot them. A common equation used to test a method's stability is the model problem

Y' = \lambda Y, \qquad Y(0) = 1

whose exact solution decays for λ < 0, so we require of the numerical solution that

y_h(x_n) \to 0 \text{ as } x_n \to \infty
Plugging the function into the Euler model we get the following:
y_{n+1} = y_n + h\lambda y_n = (1 + h\lambda)\, y_n, \quad n \ge 0, \quad y_0 = 1

y_n = (1 + h\lambda)^n, \quad n \ge 0
We use MATLAB to execute this method with the following values of λ and h :
A. λ = −1 and h = 0.1
B. λ = −100 and h = 0.1

[Tables of the first five iterates y_1, ..., y_5 for cases A and B not reproduced]
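The two cases can be reproduced in a few lines; a Python sketch (the paper's own programs are MATLAB) that computes the iterate y_n = (1 + hλ)^n step by step:

```python
def euler_decay(lam, h, steps):
    # forward Euler on y' = lam * y, y(0) = 1, so y_{n+1} = (1 + h*lam) * y_n
    y = 1.0
    for _ in range(steps):
        y = (1 + h * lam) * y
    return y

# Case A: lam = -1, h = 0.1 -> |1 + h*lam| = 0.9 < 1, iterates decay
case_a = euler_decay(-1.0, 0.1, 5)
# Case B: lam = -100, h = 0.1 -> |1 + h*lam| = 9 > 1, iterates blow up
case_b = euler_decay(-100.0, 0.1, 5)
print(case_a, case_b)  # case_b is (-9)**5 = -59049.0
```

Case A shrinks toward zero as the true solution does; case B oscillates with exponentially growing magnitude.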
If the magnitude of λ is very large, the value of h has to compensate by being very small. Using the equation y_n = (1 + hλ)^n we can see that if |1 + hλ| is greater than one, the iterates will grow exponentially with n. This is the opposite of how the solution should behave. However, if |1 + hλ| is less than one, then raising it to a positive integer n gives us the desired decay effect. So for λ = −100 we need h ≤ 0.02; if h is any bigger than 0.02, example B will not behave properly. There are some cases, however, where such a small step size would be impractical. Implicit methods give us a versatile alternative to the unstable explicit methods (Atkinson & Han). We can see why with some algebra:
y_{n+1} = y_n + h\lambda y_{n+1}, \quad n \ge 0, \quad y_0 = 1

y_{n+1} = \frac{y_n}{1 - h\lambda}

y_n = (1 - h\lambda)^{-n}
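The closed-form iterate above can be checked numerically; a short Python sketch (the paper's programs are MATLAB) using the same λ = −100 and h = 0.1 that broke the explicit method:

```python
def backward_euler_decay(lam, h, steps):
    # implicit update for y' = lam * y:  y_{n+1} = y_n / (1 - h*lam)
    y = 1.0
    for _ in range(steps):
        y = y / (1 - h * lam)
    return y

# lam = -100, h = 0.1: |1 - h*lam| = 11 > 1, so the iterates decay,
# even though this step size made the explicit method blow up
y5 = backward_euler_decay(-100.0, 0.1, 5)
print(y5)  # equals (1/11)**5, a tiny positive number
```

The same step size that produced −59049 with forward Euler now produces a value near zero, as the decaying exact solution demands.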
This relationship makes for a flexible model. For any positive value of h (with λ < 0), |1 − hλ| > 1; therefore, the model will decay as expected. Implicit methods are preferred when it comes to stiff problems for this reason. They may take more computational effort, but in these cases it is definitely worth it. We could take steps with h = 1, use fifty times fewer steps than with Euler, and easily make up for the iterative portion of the backward Euler program (Atkinson & Han). We will keep this in mind as we delve into the world of partial differential equations.

Partial Differential Equations in Finance

Professionals have used partial differential equations (PDEs) extensively in many areas of analysis and
valuation. PDEs have proven over the years to be an effective instrument for the financial engineers
seeking a consistent method to value complex financial derivatives. They can model the fluctuation of many variables and how those fluctuations behave over time. This applies well to the field of finance, with its numerous and non-trivial variables. Some of these models have analytic solutions, such as the Black-Scholes equation, but more often the financial engineers, or "academics" as they are often called by "traditionalists," must resort to numerical methods to solve the PDEs. Since the Black-Scholes equation has a known solution, we will use it to demonstrate the process of numerically solving the model for the option value V:
\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS \frac{\partial V}{\partial S} - rV = 0
Numerically solving an equation like Black-Scholes often begins with classifying it based on
certain characteristics. The choice of numerical method to approximate the PDE generally depends on
these classifications. The Black-Scholes equation is a second-order, linear, parabolic partial differential
equation. The most important aspect to recognize is that through a tedious sequence of algebraic
manipulations and substitutions, this model can be simplified into the familiar heat equation:
\frac{\partial \phi}{\partial t} = \frac{\partial^2 \phi}{\partial x^2}, \qquad x \in (0,1), \; t \in (0,T)
We will attempt to model the equation using explicit and implicit approximations. The explicit scheme looks like the following:

\phi_{i,j+1} = \rho\,\phi_{i-1,j} + (1 - 2\rho)\,\phi_{i,j} + \rho\,\phi_{i+1,j}, \qquad \rho = \frac{\delta t}{(\delta x)^2}
Given the following initial data, we can begin solving this heat equation numerically:
\phi(x, 0) = f(x) = \begin{cases} 2x, & 0 \le x \le 0.5 \\ 2(1-x), & 0.5 \le x \le 1 \end{cases}

\phi(0, t) = \phi(1, t) = 0 \quad \forall t
This means the "temperature" at the points x = 0 and x = 1 will be held fixed at 0 (Brandimarte 305). The MATLAB program implementing this model is attached. The numerical solution of the heat equation with δx = 0.1 and δt = 0.001, by the explicit method, for t = 0, t = 0.01, t = 0.05, t = 0.1:
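Since the MATLAB listing sits in the appendix, a compact Python sketch of the same explicit scheme may be clearer here; it runs both the stable grid δx = 0.1, δt = 0.001 (ρ = 0.1) and the unstable grid δx = 0.01, δt = 0.001 (ρ = 10):

```python
def explicit_heat(dx, dt, t_end):
    """Explicit finite differences for phi_t = phi_xx on (0, 1) with zero
    boundary values and the tent initial profile phi(x, 0) = 2x for
    x <= 0.5, 2(1 - x) otherwise.  Stability hinges on rho = dt/dx^2 <= 0.5."""
    n = round(1 / dx)
    phi = [2 * i * dx if i * dx <= 0.5 else 2 * (1 - i * dx)
           for i in range(n + 1)]
    rho = dt / dx ** 2
    for _ in range(round(t_end / dt)):
        new = phi[:]  # boundary entries stay fixed at 0
        for i in range(1, n):
            new[i] = rho * phi[i - 1] + (1 - 2 * rho) * phi[i] + rho * phi[i + 1]
        phi = new
    return phi

stable = explicit_heat(0.1, 0.001, 0.05)     # rho = 0.1: the peak diffuses smoothly
unstable = explicit_heat(0.01, 0.001, 0.01)  # rho = 10: the iterates blow up
print(max(stable), max(abs(v) for v in unstable))
```

The first run decays the unit peak toward roughly half its height; the second grows without bound even though its spatial grid is finer.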
As was discussed earlier, explicit methods have stability quirks that are sensitive to changes in step sizes. We would assume that decreasing the size of δx would increase the accuracy of the approximation, but as the output below demonstrates, this is not necessarily always the case. The numerical solution of the heat equation with δx = 0.01 and δt = 0.001, by the explicit method, for the same values of t:

Obviously, this is a case where the explicit method "blows up." If we implement the implicit method (code attached) with the same parameters, we should maintain a smooth solution. The numerical solution of the heat equation with δx = 0.01 and δt = 0.001, by the implicit method, for the same values of t:
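The attached implicit code must solve a tridiagonal linear system at every time step. A Python sketch (the original is MATLAB) using the Thomas algorithm, with the same tent initial data and the parameters δx = 0.01, δt = 0.001 that broke the explicit method:

```python
def solve_tridiag(a, b, c, d):
    """Thomas algorithm: a sub-diagonal, b main diagonal, c super-diagonal,
    d right-hand side (a[0] and c[-1] are unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat(dx, dt, t_end):
    # backward-in-time step: (1 + 2*rho)*phi_i - rho*(phi_{i-1} + phi_{i+1}) = old phi_i
    n = round(1 / dx)
    phi = [2 * i * dx if i * dx <= 0.5 else 2 * (1 - i * dx)
           for i in range(n + 1)]
    rho = dt / dx ** 2
    m = n - 1  # interior unknowns; the boundary values stay 0
    sub, main, sup = [-rho] * m, [1 + 2 * rho] * m, [-rho] * m
    for _ in range(round(t_end / dt)):
        phi = [0.0] + solve_tridiag(sub, main, sup, phi[1:n]) + [0.0]
    return phi

u = implicit_heat(0.01, 0.001, 0.05)  # rho = 10, yet the solution stays smooth
print(max(u))
```

With ρ = 10, far beyond the explicit limit of 0.5, the implicit solution still decays smoothly, mirroring the backward Euler behavior from the stability section.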
Now that we have seen the heat equation at work using numerical methods, we can understand
how it may apply to similar models in finance. The Black-Scholes equation plots the behavior of a stock
option’s price over time depending on the underlying stock’s current price. As we explained above, the
variables in the Black-Scholes equation can be simplified into a linear heat equation relationship and
solved much as we solved the problem above. The ability to reduce such a complicated formula to the heat equation is what makes Black and Scholes' work so popular today. It had remarkable simplicity then and even more so today. It will never claim to be the perfect solution to option pricing (history will prevent that), but the math used to develop it and utilize it will continue to make it one of the most influential models in quantitative finance.
Works Cited
Atkinson, Kendall, and Weimin Han. Elementary Numerical Analysis. 2002.
Brandimarte, Paolo. Numerical Methods in Finance and Economics. Hoboken: John Wiley & Sons, 2006.
Cheney, Ward, and David Kincaid. Numerical Mathematics and Computing. Belmont: Thomson, 2008.
Code:
% Fragment of the MATLAB stability script (assumes the vectors x, true,
% y, and error were computed earlier by the solver; 'true' holds the
% exact values exp(x) and 'y' the numerical solution).
diary stability
disp('    x       exp(x)    numerical   err1')
[x', true', y', error']
% plot numerical solution
hold off
plot(x, y)
hold on
% plot actual solution
plot(x, true)
diary off