
Lecture Notes 4

Numerical differentiation and integration

Numerical integration and differentiation is a key step in many economic applications, among which the optimization of utility or profit functions, the computation of expectations, incentive problems, etc.

4.1  Numerical differentiation

In many economic problems, and in most of the numerical problems we will encounter, we will have to compute either Jacobian or Hessian matrices, in particular in optimization problems or when solving systems of nonlinear equations.

4.1.1  Computation of derivatives

A direct approach

Let us recall that the derivative of a function is given by

F'(x) = \lim_{\Delta_x \to 0} \frac{F(x+\Delta_x)-F(x)}{\Delta_x}

which suggests as an approximation of F'(x)

F'(x) \simeq \frac{F(x+\Delta_x)-F(x)}{\Delta_x}                    (4.1)

The problem is then: how big should Δ_x be? It is obvious that Δ_x should be small, in order to be as close as possible to the limit. The problem is that it cannot be too small because of the numerical precision of the computer. Assume for a while that the computer can only deliver a precision of 1e-2 and that we select Δ_x = 0.00001; then F(x+Δ_x) - F(x) would be 0 for the computer, as it would round Δ_x to 0! Theory actually delivers an answer to this problem. Assume that F(x) is computed with accuracy e, that is

|F(x) - \widehat{F}(x)| \leqslant e

where \widehat{F}(x) is the computed value of F. If we compute the derivative using formula (4.1), the computation error is bounded by

\left|\frac{\widehat{F}(x+\Delta_x)-\widehat{F}(x)}{\Delta_x} - \frac{F(x+\Delta_x)-F(x)}{\Delta_x}\right| \leqslant \frac{2e}{\Delta_x}
Further, Taylor's expansion theorem states

F(x+\Delta_x) = F(x) + F'(x)\Delta_x + \frac{F''(\xi)}{2}\Delta_x^2

for \xi \in [x; x+\Delta_x]. Therefore,¹

\frac{F(x+\Delta_x)-F(x)}{\Delta_x} = F'(x) + \frac{F''(\xi)}{2}\Delta_x
such that the approximation error satisfies

\left|\frac{\widehat{F}(x+\Delta_x)-\widehat{F}(x)}{\Delta_x} - F'(x)\right| \leqslant \frac{2e}{\Delta_x} + \frac{|F''(\xi)|}{2}\Delta_x

Suppose now that M > 0 is an upper bound on |F''| in a neighborhood of x; then we have

\left|\frac{\widehat{F}(x+\Delta_x)-\widehat{F}(x)}{\Delta_x} - F'(x)\right| \leqslant \frac{2e}{\Delta_x} + \frac{M}{2}\Delta_x

that is, the approximation error is bounded above by

\frac{2e}{\Delta_x} + \frac{M}{2}\Delta_x
¹ Note that this also indicates that this approximation is O(Δ_x).

If we minimize this quantity with respect to Δ_x, we obtain

\Delta_x^{\star} = 2\sqrt{\frac{e}{M}}

such that the upper bound is 2\sqrt{eM}. One problem here is that we usually do not know M. However, from a practical point of view, most people use the following scheme for Δ_x:

\Delta_x = 1e{-}5 \cdot \max(|x|, 1e{-}8)

which essentially amounts to working at (roughly) machine precision.
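As a minimal illustration (our own sketch, not code from the notes), a forward-difference derivative using this practical step-size rule can be written as:

function d = numderiv(f,x)
% Forward-difference approximation of f'(x) using the practical step-size rule
dx = 1e-5*max(abs(x),1e-8);            % step size
d  = (feval(f,x+dx)-feval(f,x))/dx;

For instance, numderiv(@exp,1) should return a value close to exp(1) = 2.71828.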
Similarly, rather than taking a forward difference, we may also take the backward difference

F'(x) \simeq \frac{F(x)-F(x-\Delta_x)}{\Delta_x}                    (4.2)

Central difference

There are a number of situations where one-sided differences are not accurate enough; one potential solution is then to use the central difference or two-sided difference approach, which essentially amounts to computing the derivative using the backward-forward formula

F'(x) \simeq \frac{F(x+\Delta_x)-F(x-\Delta_x)}{2\Delta_x}                    (4.3)

What do we gain from using this formula? To see this, let us consider the Taylor series expansions of F(x+Δ_x) and F(x-Δ_x):

F(x+\Delta_x) = F(x) + F'(x)\Delta_x + \frac{1}{2}F''(x)\Delta_x^2 + \frac{1}{6}F^{(3)}(\xi_1)\Delta_x^3                    (4.4)

F(x-\Delta_x) = F(x) - F'(x)\Delta_x + \frac{1}{2}F''(x)\Delta_x^2 - \frac{1}{6}F^{(3)}(\xi_2)\Delta_x^3                    (4.5)

where \xi_1 \in [x; x+\Delta_x] and \xi_2 \in [x-\Delta_x; x]. Then, although the error term involves the third derivative at two unknown points on two intervals, assuming that F is at least C^3, the central difference formula rewrites

F'(x) = \frac{F(x+\Delta_x)-F(x-\Delta_x)}{2\Delta_x} - \frac{\Delta_x^2}{6}F^{(3)}(\xi)

with \xi \in [x-\Delta_x; x+\Delta_x]. A nice feature of this formula is therefore that it is now O(\Delta_x^2) rather than O(\Delta_x).

Further improvement: Richardson extrapolation

Basic idea of Richardson extrapolation.  There are many approximation procedures in which one first picks a step size h and then generates an approximation A(h) to some desired quantity A. Often the order of the error generated by the procedure is known. This means that the quantity A writes

A = A(h) + \alpha h^k + \alpha' h^{k+1} + \alpha'' h^{k+2} + \ldots

where k is some known constant, called the order of the error, and \alpha, \alpha', \alpha'', \ldots are some other (usually unknown) constants. For example, A may be the derivative of a function, A(h) will be the approximation of the derivative when we use a step size of h, and k will be set to 2.

The notation O(h^{k+1}) is conventionally used to stand for a sum of terms of order h^{k+1} and higher. So the above equation may be written

A = A(h) + \alpha h^k + O(h^{k+1})                    (4.6)

Dropping the, hopefully tiny, term O(h^{k+1}) from this equation, we obtain a linear equation, A = A(h) + \alpha h^k, in the two unknowns A and \alpha. But this really gives a different equation for each possible value of h. We can therefore get two different equations to identify both A and \alpha by just using two different step sizes. Doing this with step sizes h and h/2, for any h, gives

A = A(h/2) + \alpha (h/2)^k + O(h^{k+1})                    (4.7)

(note that, in equations (4.6) and (4.7), the symbol O(h^{k+1}) stands for two different sums of terms of order h^{k+1} and higher). Multiplying (4.7) by 2^k and subtracting equation (4.6) yields

(2^k - 1)A = 2^k A(h/2) - A(h) + O(h^{k+1})

where O(h^{k+1}) stands for a new sum of terms of order h^{k+1} and higher. We then get

A = \frac{2^k A(h/2) - A(h)}{2^k - 1} + O(h^{k+1})

where, once again, O(h^{k+1}) stands for a new sum of terms of order h^{k+1} and higher. Denoting

B(h) = \frac{2^k A(h/2) - A(h)}{2^k - 1}

we then have

A = B(h) + O(h^{k+1})

What have we done so far? We have defined an approximation B(h) whose error is of order k+1 rather than k, such that it is a better one than A(h). The generation of a new, improved approximation for A from two A(h)'s with different values of h is called Richardson extrapolation. We can then continue the process with B(h) to get a new, better approximation. This method is widely used when computing numerical integrals or numerical derivatives.
Numerical differentiation with Richardson extrapolation.  Assume we want to compute the first-order derivative of a function F \in C^{2n}(\mathbb{R}) at a point x^{\star}. We may first compute the approximate quantity

D_0^0(F) = \frac{F(x^{\star}+h_0) - F(x^{\star}-h_0)}{2h_0}

Let us define h_1 = h_0/2 and compute

D_0^1(F) = \frac{F(x^{\star}+h_1) - F(x^{\star}-h_1)}{2h_1}

Then, according to the previous section, we may compute a better approximation as (since k = 2 in the case of central-difference numerical differentiation)

D_1^0(F) = \frac{4 D_0^1(F) - D_0^0(F)}{3}

which may actually be rewritten as

D_1^0(F) = D_0^1(F) + \frac{D_0^1(F) - D_0^0(F)}{3}

We then see that a recursive algorithm emerges:

D_\ell^j(F) = D_{\ell-1}^{j+1}(F) + \frac{D_{\ell-1}^{j+1}(F) - D_{\ell-1}^{j}(F)}{4^{\ell} - 1}

where the superscript j indexes the step size h_j = h_0/2^j and the subscript \ell the extrapolation level. Note that since F is only assumed to be C^{2n}, the recursion can be carried out only for levels k \leqslant n, such that

F'(x^{\star}) = D_k^j(F) + O\!\left(h_j^{2(k+1)}\right)

Hence, D_k^j(F) yields an approximate value for F'(x^{\star}) with an approximation error proportional to h_j^{2(k+1)}. The recursive scheme is carried out until

|D_m^0(F) - D_{m-1}^1(F)| < \varepsilon

in which case D_m^0(F) is used as an approximate value for F'(x^{\star}).


Matlab Code: Richardson Extrapolation

function D = richardson(f,x,varargin)
%
% f -> function to differentiate
% x -> point at which the function is to be differentiated
% varargin -> parameters of the function
%
delta = 1e-12;          % error goal
toler = 1e-12;          % relative error goal
err   = 1;              % error bound
rerr  = 1;              % relative error
h     = 1;              % initialize step size
j     = 1;              % initialize j
%
% First, compute the first derivative
%
fs    = feval(f,x+h,varargin{:});
fm    = feval(f,x-h,varargin{:});
D(1,1)= (fs-fm)/(2*h);
while (rerr>toler) & (err>delta) & (j<12)
   h  = h/2;                        % update the step size
   fs = feval(f,x+h,varargin{:});
   fm = feval(f,x-h,varargin{:});
   D(j+1,1) = (fs-fm)/(2*h);        % derivative with updated step size
   %
   % recursion
   %
   for k = 1:j,
      D(j+1,k+1) = D(j+1,k) + (D(j+1,k)-D(j,k))/(4^k-1);
   end
   %
   % compute errors
   %
   err  = abs(D(j+1,j+1)-D(j,j));
   rerr = 2*err/(abs(D(j+1,j+1))+abs(D(j,j))+eps);
   j    = j+1;
end
n = size(D,1);
D = D(n,n);

4.1.2  Partial Derivatives

Let us now consider that, rather than having a single-variable function, the problem is multidimensional, such that F : \mathbb{R}^n \rightarrow \mathbb{R}, and that we now want to compute the first-order partial derivative

F_i(x) = \frac{\partial F(x)}{\partial x_i}

This may be achieved extremely easily by computing, for example in the case of the central difference formula,

F_i(x) \simeq \frac{F(x + e_i\Delta_x) - F(x - e_i\Delta_x)}{2\Delta_x}

where e_i is a vector whose i-th component is 1 and all other elements are 0.

Matlab Code: Jacobian Matrix

function J=jacobian(func,x0,method,varargin);
%
% J=jacobian(func,x0,method,P1,...,Pn)
%
% method = 'c' -> centered difference
%        = 'l' -> left difference
%        = 'r' -> right difference
%
x0  = x0(:);
f   = feval(func,x0,varargin{:});
m   = length(x0);
n   = length(f);
J   = zeros(n,m);
dev = diag(.00001*max(abs(x0),1e-8*ones(size(x0))));
if (lower(method)=='l');
   for i=1:m;
      ff     = feval(func,x0+dev(:,i),varargin{:});
      J(:,i) = (ff-f)/dev(i,i);
   end;
elseif (lower(method)=='r')
   for i=1:m;
      fb     = feval(func,x0-dev(:,i),varargin{:});
      J(:,i) = (f-fb)/dev(i,i);
   end;
elseif (lower(method)=='c')
   for i=1:m;
      ff     = feval(func,x0+dev(:,i),varargin{:});
      fb     = feval(func,x0-dev(:,i),varargin{:});
      J(:,i) = (ff-fb)/(2*dev(i,i));
   end;
else
   error('Bad method specified')
end

4.1.3  Hessian

The Hessian matrix can be computed relying on the same approach as for the Jacobian matrix. Let us consider for example that we want to compute the second-order derivative of a function F : \mathbb{R} \rightarrow \mathbb{R} using a central difference approach, since, as we have seen, it delivers higher accuracy. Let us first write the Taylor expansions of F(x+\Delta_x) and F(x-\Delta_x) up to order 3:

F(x+\Delta_x) = F(x) + F'(x)\Delta_x + \frac{\Delta_x^2}{2}F''(x) + \frac{\Delta_x^3}{6}F^{(3)}(x) + \frac{\Delta_x^4}{4!}F^{(4)}(\xi_1)

F(x-\Delta_x) = F(x) - F'(x)\Delta_x + \frac{\Delta_x^2}{2}F''(x) - \frac{\Delta_x^3}{6}F^{(3)}(x) + \frac{\Delta_x^4}{4!}F^{(4)}(\xi_2)

with \xi_1 \in [x; x+\Delta_x] and \xi_2 \in [x-\Delta_x; x]. We then get

F(x+\Delta_x) + F(x-\Delta_x) = 2F(x) + \Delta_x^2 F''(x) + \frac{\Delta_x^4}{4!}\left[F^{(4)}(\xi_1) + F^{(4)}(\xi_2)\right]

such that, as long as F is at least C^4, we have

F''(x) = \frac{F(x+\Delta_x) - 2F(x) + F(x-\Delta_x)}{\Delta_x^2} - \frac{\Delta_x^2}{12}F^{(4)}(\xi)

with \xi \in [x-\Delta_x; x+\Delta_x]. Note then that the approximate second-order derivative is O(\Delta_x^2).
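Following the same logic as the jacobian routine above, a full Hessian of a multivariate scalar function can be approximated by applying central differences in each pair of directions. The routine below is our own minimal sketch (its name and step-size rule are chosen by analogy with the jacobian code), not code from the notes.

function H = hessian(func,x0,varargin)
% Central-difference approximation of the Hessian of a scalar-valued function
x0  = x0(:);
m   = length(x0);
H   = zeros(m,m);
dev = diag(.00001*max(abs(x0),1e-8*ones(size(x0))));
for i=1:m;
   for j=1:m;
      fpp = feval(func,x0+dev(:,i)+dev(:,j),varargin{:});
      fpm = feval(func,x0+dev(:,i)-dev(:,j),varargin{:});
      fmp = feval(func,x0-dev(:,i)+dev(:,j),varargin{:});
      fmm = feval(func,x0-dev(:,i)-dev(:,j),varargin{:});
      H(i,j) = (fpp-fpm-fmp+fmm)/(4*dev(i,i)*dev(j,j));
   end;
end;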

4.2  Numerical Integration

Numerical integration is a widely encountered problem in economics. For example, if we are to compute the welfare function in a continuous-time model, we will face an equation of the form

W = \int_0^{\infty} e^{-\theta t} u(c_t)\,dt

Likewise, in rational expectations models, we will have to compute conditional expectations, such that, assuming that the innovations of the shocks are gaussian, we will quite often encounter an equation of the form

\frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{\infty} f(X,\varepsilon)\,e^{-\frac{\varepsilon^2}{2\sigma^2}}\,d\varepsilon

In general, numerical integration formulas approximate a definite integral by a weighted sum of function values at points within the interval of integration. In other words, a numerical integration rule takes the typical form

\int_a^b F(x)\,dx \simeq \sum_{i=0}^{n} \omega_i F(x_i)

where the coefficients \omega_i depend on the method chosen to compute the integral. This approach to numerical integration is known as the quadrature problem. These methods essentially differ by (i) the weights that are assigned to each function evaluation and (ii) the nodes at which the function is evaluated. In fact, basic quadrature methods may be categorized in two wide classes:

1. Methods that are based on equally spaced data points: these are the Newton–Cotes formulas (the midpoint rule, the trapezoid rule and Simpson's rule).

2. Methods that are based on data points which are not equally spaced: these are the Gaussian quadrature formulas.

4.2.1  Newton–Cotes formulas

Newton–Cotes formulas evaluate the function F at a finite number of points and use these points to build an interpolation between them, typically a linear approximation in most cases. This interpolant is then integrated to get an approximate value of the integral.

Figure 4.1: Newton–Cotes integration

The midpoint rule

The midpoint rule essentially amounts to computing the area of the rectangle formed by the four points P_0 = (a,0), P_1 = (b,0), P_2 = (a,F(\xi)), P_3 = (b,F(\xi)), where \xi = (a+b)/2, as an approximation of the integral, such that

\int_a^b F(x)\,dx = (b-a)F\!\left(\frac{a+b}{2}\right) + \frac{(b-a)^3}{4!}F''(\xi)

where \xi \in [a;b], such that the approximate integral is given by

\widehat{I} = (b-a)F\!\left(\frac{a+b}{2}\right)

Note that this rule does not make any use of the end points. It is noteworthy that this approximation is far too coarse to be accurate, such that what is usually done is to break the interval [a;b] into smaller intervals and compute the approximation on each subinterval. The integral is then obtained by cumulating the subintegrals; we therefore end up with a composite rule. Hence, assume that the interval [a;b] is broken into n \geqslant 1 subintervals of size h = (b-a)/n; we then have the n midpoints x_i = a + (i - \tfrac{1}{2})h with i = 1,\ldots,n. The approximate integral is given by

\widehat{I}_n = h \sum_{i=1}^{n} F(x_i)

Matlab Code: Midpoint Rule Integration

function mpr=midpoint(func,a,b,n,varargin);
%
% function mpr=midpoint(func,a,b,n,P1,...,Pn);
%
% func      : Function to be integrated
% a         : lower bound of the interval
% b         : upper bound of the interval
% n         : number of sub-intervals (=> n midpoints)
% P1,...,Pn : parameters of the function
%
h   = (b-a)/n;
x   = a+([1:n]-0.5)*h;
y   = feval(func,x,varargin{:});
mpr = h*sum(y);

Trapezoid rule

The trapezoid rule essentially amounts to using a linear approximation of the function to be integrated between the two end points of the interval. This then defines the trapezoid \{(a,0), (a,F(a)), (b,F(b)), (b,0)\}, whose area, and consequently the approximate integral, is given by

\widehat{I} = \frac{(b-a)}{2}\left(F(a)+F(b)\right)

This may be derived appealing to the Lagrange approximation of the function F over the interval [a;b], which is given by

\mathcal{L}(x) = \frac{x-b}{a-b}F(a) + \frac{x-a}{b-a}F(b)

then

\int_a^b F(x)\,dx \simeq \int_a^b \frac{x-b}{a-b}F(a) + \frac{x-a}{b-a}F(b)\,dx
                  \simeq \frac{1}{b-a}\int_a^b (b-x)F(a) + (x-a)F(b)\,dx
                  \simeq \frac{1}{b-a}\int_a^b \left(bF(a)-aF(b)\right) + x\left(F(b)-F(a)\right)dx
                  \simeq bF(a)-aF(b) + \frac{1}{b-a}\int_a^b x\left(F(b)-F(a)\right)dx
                  \simeq bF(a)-aF(b) + \frac{b^2-a^2}{2(b-a)}\left(F(b)-F(a)\right)
                  \simeq bF(a)-aF(b) + \frac{b+a}{2}\left(F(b)-F(a)\right)
                  \simeq \frac{(b-a)}{2}\left(F(a)+F(b)\right)

Obviously, this approximation may be poor, as in the example reported in figure 4.1, such that, as in the midpoint rule, we should break the interval [a;b] into n \geqslant 1 subintervals of size h = (b-a)/n; we then have n+1 data points x_i = a + ih and their corresponding function evaluations F(x_i), with i = 0,\ldots,n. The approximate integral is then given by

\widehat{I}_n = \frac{h}{2}\left[F(x_0) + F(x_n) + 2\sum_{i=1}^{n-1}F(x_i)\right]

Matlab Code: Trapezoid Rule Integration

function trap=trapezoid(func,a,b,n,varargin);
%
% function trap=trapezoid(func,a,b,n,P1,...,Pn);
%
% func      : Function to be integrated
% a         : lower bound of the interval
% b         : upper bound of the interval
% n         : number of sub-intervals => n+1 points
% P1,...,Pn : parameters of the function
%
h   = (b-a)/n;
x   = a+[0:n]*h;
y   = feval(func,x,varargin{:});
trap= 0.5*h*(2*sum(y(2:n))+y(1)+y(n+1));

Simpson's rule

Simpson's rule attempts to circumvent an inefficiency of the trapezoid rule: a composite trapezoid rule may be far too coarse if F is smooth. An alternative is then to use a piecewise quadratic approximation of F that uses the values of F at a, b and \xi = (b+a)/2 as interpolating nodes. Figure 4.2 illustrates the rule: the thick line is the function F to be integrated and the thin line is the quadratic interpolant for this function.

Figure 4.2: Simpson's rule

A quadratic interpolation may be obtained by the Lagrange interpolation formula, where \xi = (b+a)/2:

\mathcal{L}(x) = \frac{(x-\xi)(x-b)}{(a-\xi)(a-b)}F(a) + \frac{(x-a)(x-b)}{(\xi-a)(\xi-b)}F(\xi) + \frac{(x-a)(x-\xi)}{(b-a)(b-\xi)}F(b)

Setting h = (b-a)/2, we can approximate the integral by

\int_a^b F(x)\,dx \simeq \int_a^b \frac{(x-\xi)(x-b)}{2h^2}F(a) - \frac{(x-a)(x-b)}{h^2}F(\xi) + \frac{(x-a)(x-\xi)}{2h^2}F(b)\,dx
                  \simeq I_1 - I_2 + I_3

We then compute each subintegral.

I_1 = \int_a^b \frac{(x-\xi)(x-b)}{2h^2}F(a)\,dx
    = \frac{F(a)}{2h^2}\int_a^b x^2 - (b+\xi)x + \xi b\,dx
    = \frac{F(a)}{2h^2}\left[\frac{b^3-a^3}{3} - (b+\xi)\frac{b^2-a^2}{2} + \xi b(b-a)\right]
    = \frac{F(a)}{12h}\left(b^2 - 2ab + a^2\right) = \frac{h}{3}F(a)

I_2 = \int_a^b \frac{(x-a)(x-b)}{h^2}F(\xi)\,dx
    = \frac{F(\xi)}{h^2}\int_a^b x^2 - (b+a)x + ab\,dx
    = \frac{F(\xi)}{h^2}\left[\frac{b^3-a^3}{3} - (b+a)\frac{b^2-a^2}{2} + ab(b-a)\right]
    = -\frac{F(\xi)}{3h}(b-a)^2 = -\frac{4h}{3}F(\xi)

I_3 = \int_a^b \frac{(x-a)(x-\xi)}{2h^2}F(b)\,dx
    = \frac{F(b)}{2h^2}\int_a^b x^2 - (a+\xi)x + a\xi\,dx
    = \frac{F(b)}{2h^2}\left[\frac{b^3-a^3}{3} - (a+\xi)\frac{b^2-a^2}{2} + a\xi(b-a)\right]
    = \frac{F(b)}{12h}\left(b^2 - 2ab + a^2\right) = \frac{h}{3}F(b)

Then, summing the three components, we get an approximation of the integral given by

\widehat{I} = \frac{b-a}{6}\left[F(a) + 4F\!\left(\frac{b+a}{2}\right) + F(b)\right]

If, as in the midpoint and trapezoid rules, we want to compute a better approximation of the integral by breaking [a;b] into an even number n \geqslant 2 of subintervals, we set h = (b-a)/n and x_i = a + ih, i = 0,\ldots,n. The composite Simpson's rule is then given by

\widehat{I}_n = \frac{h}{3}\left[F(x_0) + 4F(x_1) + 2F(x_2) + 4F(x_3) + \ldots + 2F(x_{n-2}) + 4F(x_{n-1}) + F(x_n)\right]
Matlab Code: Simpson's Rule Integration

function simp=simpson(func,a,b,n,varargin);
%
% function simp=simpson(func,a,b,n,P1,...,Pn);
%
% func      : Function to be integrated
% a         : lower bound of the interval
% b         : upper bound of the interval
% n         : even number of sub-intervals => n+1 points
% P1,...,Pn : parameters of the function
%
h   = (b-a)/n;
x   = a+[0:n]*h;
y   = feval(func,x,varargin{:});
y   = y(:);                        % make sure y is a column vector
simp= h*(2*(1+rem(1:n-1,2))*y(2:n)+y(1)+y(n+1))/3;

Infinite domains and improper integrals

The methods we have presented so far were defined over finite domains, but it will often be the case, at least when dealing with economic problems, that the domain of integration is infinite. We will now investigate how we can transform the problem so as to be able to use standard methods to compute the integrals. Nevertheless, we have to be sure that the integral is well defined. For example, the integral

\int_{-\infty}^{\infty} F(x)\,dx

may not exist, either because of divergence (for instance when F(x) does not vanish as x tends to infinity), or because of oscillations, as in \int_{-\infty}^{\infty}\sin(x)\,dx. Let us restrict ourselves to the case where the integral exists. In this case, we can approximate

\int_{-\infty}^{\infty} F(x)\,dx \quad\text{by}\quad \int_a^b F(x)\,dx

setting a and b to large enough negative and positive values. However, this may be a particularly slow way of approximating the integral, and the next theorem provides an indirect way to achieve higher efficiency.
Theorem 1  If \varphi : \mathbb{R} \rightarrow \mathbb{R} is a monotonically increasing, C^1, function on the interval [a;b], then for any integrable function F(x) on [a;b] we have

\int_a^b F(x)\,dx = \int_{\varphi^{-1}(a)}^{\varphi^{-1}(b)} F(\varphi(y))\,\varphi'(y)\,dy

This theorem is just what we usually call a change of variables; it converts a problem where we want to integrate a function of the variable x into a perfectly equivalent problem where we integrate with regard to y, with y and x being related by the nonlinear relation x = \varphi(y).
As an example, let us assume that we want to compute the average of a transformation of a gaussian random variable x \sim \mathcal{N}(0,1). This is given by

\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} G(x)e^{-\frac{x^2}{2}}\,dx

such that F(x) = G(x)e^{-\frac{x^2}{2}}. As a first change of variable, which leaves the interval unchanged, we apply the transformation z = x/\sqrt{2}, such that the integral rewrites

\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} G(\sqrt{2}z)e^{-z^2}\,dz

We would now like to transform this problem, since it would be quite difficult to compute this integral by simply truncating it to an interval [a;b] with a and b set to large negative and positive values. Another possibility is to compute the integral of a transformed problem over a finite interval [a;b]. We therefore look for a C^1, monotonically increasing transformation \varphi that insures that \lim_{y\to a}\varphi(y) = -\infty and \lim_{y\to b}\varphi(y) = \infty. Let us assume that a = 0 and b = 1; a possible candidate for \varphi(y) is

\varphi(y) = \log\!\left(\frac{y}{1-y}\right) \quad\text{such that}\quad \varphi'(y) = \frac{1}{y(1-y)}

In this case, the integral rewrites

\frac{1}{\sqrt{\pi}}\int_0^1 G\!\left(\sqrt{2}\log\frac{y}{1-y}\right) e^{-\left(\log\frac{y}{1-y}\right)^2}\,\frac{1}{y(1-y)}\,dy

or

\frac{1}{\sqrt{\pi}}\int_0^1 \left(\frac{1-y}{y}\right)^{\log\frac{y}{1-y}} G\!\left(\sqrt{2}\log\frac{y}{1-y}\right)\frac{1}{y(1-y)}\,dy

which is now equivalent to computing a simple integral of the form

\int_0^1 h(y)\,dy

with

h(y) \equiv \frac{1}{\sqrt{\pi}}\left(\frac{1-y}{y}\right)^{\log\frac{y}{1-y}} G\!\left(\sqrt{2}\log\frac{y}{1-y}\right)\frac{1}{y(1-y)}
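As an illustration (our own minimal sketch, not code from the notes), and presuming that the experiment behind table 4.1 uses G(x) = e^x, since its true value is exp(0.5) = E[e^x] for x ~ N(0,1), the transformed integrand h(y) can be evaluated with a simple midpoint sum over [0;1]:

z  = @(y) log(y./(1-y));                                    % phi(y)
h  = @(y) exp(-z(y).^2+sqrt(2)*z(y))./(y.*(1-y))/sqrt(pi);  % transformed integrand for G(x)=exp(x)
n  = 20;
yg = ([1:n]-0.5)/n;                                         % midpoint nodes on [0;1]
Ih = sum(h(yg))/n                                           % should be close to exp(0.5) = 1.6487

The midpoint nodes never touch y = 0 or y = 1, so the (removable) boundary behaviour of h is not an issue here.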

Table 4.1 reports the results for the different methods we have seen so far. As can be seen, the midpoint and the trapezoid rules perform pretty well with 20 subintervals, as the error is less than 1e-4, while Simpson's rule is less efficient, as we need 40 subintervals to reach a reasonable accuracy. We will see in the next section that there exist more efficient methods to deal with this type of problem.

Table 4.1: Integration with a change in variables (true value = exp(0.5))

  n    Midpoint                 Trapezoid                Simpson
  2    2.2232 (-0.574451)       1.1284 (0.520344)        1.5045 (0.144219)
  4    1.6399 (0.0087836)       1.6758 (-0.0270535)      1.8582 (-0.209519)
  8    1.6397 (0.00900982)      1.6579 (-0.00913495)     1.6519 (-0.0031621)
 10    1.6453 (0.00342031)      1.6520 (-0.00332232)     1.6427 (0.00604608)
 20    1.6488 (-4.31809e-005)   1.6487 (4.89979e-005)    1.6475 (0.00117277)
 40    1.6487 (-2.92988e-006)   1.6487 (2.90848e-006)    1.6487 (-1.24547e-005)

(Approximation errors are reported in parentheses.)

Note that not all changes of variable are admissible. Indeed, in this case we might have used \varphi(y) = \log(y/(1-y))^{1/4}, which also maps [0;1] into \mathbb{R} in a monotonically increasing way. But this would not have been an admissible

transformation. Why? Remember that any approximate integration rule has an associated error bound that depends on the derivatives of the function to be integrated (here the overall h(.) function). While the derivatives of h(.) are well defined as y tends towards 0 or 1 in the case we considered in our experiments, this is not true for the latter transformation. In particular, its derivatives diverge as y tends to 1, such that the error bound does not converge. In other words, we always have to make sure that the derivatives of F(\varphi(y))\varphi'(y) are well defined over the interval.

4.2.2  Gaussian quadrature

As we have seen from the earlier examples, Newton–Cotes formulas actually derive from piecewise interpolation theory: they just use a collection of low-order polynomials to get an approximation of the function to be integrated, and then integrate this approximation, which is in general far easier. These formulas also write

\int_a^b F(x)\,dx \simeq \sum_{i=1}^{n} \omega_i F(x_i)

for some quadrature nodes x_i \in [a;b] and quadrature weights \omega_i. All the x_i's are arbitrarily set in Newton–Cotes formulas: as we have seen, we just imposed an equally spaced grid over the interval [a;b]. The weights \omega_i then follow from the fact that we want the approximation to be exact for polynomials of order lower than or equal to the degree of the polynomials used to approximate the function. The question raised by Gaussian quadrature is then: isn't there a more efficient way to set the nodes and the weights? The answer is clearly yes. The key point is then to try to get a good approximation to \int F(x)dx. The problem is: what is a good approximation? Gaussian quadrature sets the nodes and the weights in such a way that the approximation is exact when F is a low-order polynomial.

In fact, Gaussian quadrature is much more general than simple integration, as it actually computes an approximation to the weighted integral

\int_a^b F(x)w(x)\,dx \simeq \sum_{i=1}^{n} \omega_i F(x_i)

Gaussian quadrature imposes that this approximation be exact when F is a polynomial of order 2n-1. Further, the nodes and the weights are contingent on the weighting function, so that orthogonal polynomials are expected to come back into the story. This is stated in the following theorem by Davis and Rabinowitz [1984].
Theorem 2  Assume \{\varphi_\ell(x)\}_{\ell=0}^{\infty} is an orthonormal family of polynomials with respect to the weighting function w(x) on the interval [a;b], and define q_\ell so that \varphi_\ell(x) = q_\ell x^\ell + \ldots. Let x_i, i = 1,\ldots,n, be the roots of the polynomial \varphi_n(x). If a < x_1 < \ldots < x_n < b and if F \in C^{2n}[a;b], then

\int_a^b w(x)F(x)\,dx = \sum_{i=1}^{n} \omega_i F(x_i) + \frac{F^{(2n)}(\xi)}{q_n^2 (2n)!}

for some \xi \in [a;b], with

\omega_i = \frac{q_{n+1}/q_n}{\varphi_n'(x_i)\,\varphi_{n+1}(x_i)} > 0

This theorem is of direct applicability, as it gives, for any weighting function, a general formula for both the nodes and the weights. Fortunately, most of the job has already been done: there exist Gaussian quadrature formulas for a wide spectrum of weighting functions, and the values of the nodes and the weights are given in tables. Assume we have a family of orthogonal polynomials, \{\varphi_\ell(x)\}_{\ell=0}^{n}; we know that for any i \neq j

\langle \varphi_i(x), \varphi_j(x) \rangle = 0

In particular, we have

\langle \varphi_i(x), \varphi_0(x) \rangle = \int_a^b \varphi_i(x)\varphi_0(x)w(x)\,dx = 0 \quad\text{for } i > 0

but since the orthogonal polynomial of order 0 is a constant, this reduces to

\int_a^b \varphi_i(x)w(x)\,dx = 0 \quad\text{for } i > 0

We will take advantage of this property. The nodes will be the roots of the orthogonal polynomial of order n, while the weights will be chosen such that the gaussian formula is exact for lower-order polynomials:

\int_a^b \varphi_k(x)w(x)\,dx = \sum_{i=1}^{n} \omega_i \varphi_k(x_i) \quad\text{for } k = 0,\ldots,n-1

This implies that the weights can be recovered by solving a linear system of the form

\omega_1 \varphi_0(x_1) + \ldots + \omega_n \varphi_0(x_n) = \int_a^b w(x)\,dx
\omega_1 \varphi_1(x_1) + \ldots + \omega_n \varphi_1(x_n) = 0
\vdots
\omega_1 \varphi_{n-1}(x_1) + \ldots + \omega_n \varphi_{n-1}(x_n) = 0

which rewrites \Phi\omega = c with

\Phi = \begin{pmatrix} \varphi_0(x_1) & \ldots & \varphi_0(x_n) \\ \vdots & \ddots & \vdots \\ \varphi_{n-1}(x_1) & \ldots & \varphi_{n-1}(x_n) \end{pmatrix}, \quad \omega = \begin{pmatrix} \omega_1 \\ \vdots \\ \omega_n \end{pmatrix} \quad\text{and}\quad c = \begin{pmatrix} \int_a^b w(x)\,dx \\ 0 \\ \vdots \\ 0 \end{pmatrix}

Note that the orthogonality property of the polynomials implies that the matrix \Phi is invertible, such that \omega = \Phi^{-1}c. We now review the most commonly used Gaussian quadrature formulas.
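As a small illustration of this construction (our own sketch, not part of the notes), the Gauss–Chebychev weights \omega_i = \pi/n of the next subsection can be recovered by solving \Phi\omega = c with the Chebychev polynomials T_k(x) = \cos(k \arccos x) evaluated at the roots of T_n:

n   = 5;
xi  = cos((2*(1:n)'-1)*pi/(2*n));    % roots of the Chebychev polynomial T_n
Phi = zeros(n,n);
for k=0:n-1;
   Phi(k+1,:) = cos(k*acos(xi))';    % phi_k(x_i) = T_k(x_i)
end;
c     = [pi;zeros(n-1,1)];           % int_{-1}^{1} (1-x^2)^(-1/2) dx = pi
omega = Phi\c                        % all weights equal pi/n

Since the matrix Phi is invertible, the solution is unique, and it indeed returns the constant weights pi/n.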
Gauss–Chebychev quadrature

This particular quadrature can be applied to problems that take the form

\int_{-1}^{1} F(x)(1-x^2)^{-\frac{1}{2}}\,dx

such that in this case w(x) = (1-x^2)^{-\frac{1}{2}}, a = -1 and b = 1. The very attractive feature of this gaussian quadrature is that the weights are constant and equal to \omega_i = \omega = \pi/n, where n is the number of nodes, such that

\int_{-1}^{1} F(x)(1-x^2)^{-\frac{1}{2}}\,dx = \frac{\pi}{n}\sum_{i=1}^{n} F(x_i) + \frac{\pi}{2^{2n-1}}\frac{F^{(2n)}(\xi)}{(2n)!}

for \xi \in [-1;1], and where the nodes are given by the roots of the Chebychev polynomial of order n:

x_i = \cos\!\left(\frac{2i-1}{2n}\pi\right) \quad i = 1,\ldots,n

It is obviously rarely the case that we have to compute an integral that takes exactly the form this quadrature imposes; we are rather likely to compute

\int_a^b F(x)\,dx

Concerning the bounds of integration, we may use the change of variable

y = 2\frac{x-a}{b-a} - 1 \quad\text{implying}\quad dy = \frac{2\,dx}{b-a}

such that the problem rewrites

\frac{b-a}{2}\int_{-1}^{1} F\!\left(a + \frac{(y+1)(b-a)}{2}\right)dy

The weighting function is still missing; nevertheless, multiplying and dividing the integrand by (1-y^2)^{\frac{1}{2}}, we get

\frac{b-a}{2}\int_{-1}^{1} G(y)\frac{dy}{\sqrt{1-y^2}} \quad\text{with}\quad G(y) \equiv F\!\left(a + \frac{(y+1)(b-a)}{2}\right)\sqrt{1-y^2}

such that

\int_a^b F(x)\,dx \simeq \frac{\pi(b-a)}{2n}\sum_{i=1}^{n} F\!\left(a + \frac{(y_i+1)(b-a)}{2}\right)\sqrt{1-y_i^2}

where y_i, i = 1,\ldots,n, are the n Gauss–Chebychev quadrature nodes.
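Since the nodes have a closed form, this rule is easy to code; the routine below is our own minimal sketch (the name gauss_cheb is ours), written in the same style as the Newton–Cotes routines above:

function gcq=gauss_cheb(func,a,b,n,varargin);
%
% function gcq=gauss_cheb(func,a,b,n,P1,...,Pn);
%
% Approximates int_a^b func(x)dx by Gauss-Chebychev quadrature with n nodes
%
y   = cos((2*[1:n]'-1)*pi/(2*n));    % Gauss-Chebychev nodes on [-1;1]
x   = a+(y+1)*(b-a)/2;               % nodes mapped to [a;b]
f   = feval(func,x,varargin{:});
gcq = pi*(b-a)*sum(f(:).*sqrt(1-y.^2))/(2*n);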


Gauss–Legendre quadrature

This particular quadrature can be applied to problems that take the form

\int_{-1}^{1} F(x)\,dx

such that in this case w(x) = 1, a = -1 and b = 1. We are therefore back to a standard integration problem, as the weighting function is constant and equal to 1. We then have

\int_{-1}^{1} F(x)\,dx = \sum_{i=1}^{n} \omega_i F(x_i) + \frac{2^{2n+1}(n!)^4}{(2n+1)!(2n)!}\frac{F^{(2n)}(\xi)}{(2n)!}

for \xi \in [-1;1]. In this case, both the nodes and the weights are non-trivial to compute. Nevertheless, we can generate the nodes using any root-finding procedure, and the weights can be computed as explained earlier, noting that \int_{-1}^{1} w(x)\,dx = 2.

As in the case of Gauss–Chebychev quadrature, we may use the linear transformation

y = 2\frac{x-a}{b-a} - 1 \quad\text{implying}\quad dy = \frac{2\,dx}{b-a}

to be able to compute integrals of the form

\int_a^b F(x)\,dx

which is then approximated by

\int_a^b F(x)\,dx \simeq \frac{b-a}{2}\sum_{i=1}^{n} \omega_i F\!\left(a + \frac{(y_i+1)(b-a)}{2}\right)

where y_i and \omega_i are the Gauss–Legendre nodes and weights on [-1;1].
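The notes do not report code for the Legendre nodes and weights. One standard way to obtain them (our own sketch, not the procedure described above; it uses the eigenvalues of the symmetric Jacobi matrix, the so-called Golub–Welsch approach) is:

function [y,w]=gauss_leg(n);
%
% [y,w]=gauss_leg(n): Gauss-Legendre nodes and weights on [-1;1]
%
k     = [1:n-1]';
beta  = k./sqrt(4*k.^2-1);        % off-diagonal of the Jacobi matrix
J     = diag(beta,1)+diag(beta,-1);
[V,D] = eig(J);
[y,i] = sort(diag(D));            % nodes = eigenvalues
w     = 2*V(1,i)'.^2;             % weights; they sum to int_{-1}^{1}dx = 2

The integral over [a;b] is then approximated as ((b-a)/2)*sum(w.*F(a+(y+1)*(b-a)/2)), in line with the formula above.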
Such a simple formula has a direct implication when we want to compute the discounted value of an asset, the welfare of an agent, or the discounted sum of profits in a finite-horizon problem, as these can be computed solving the integral

\int_0^T e^{-\theta t} u(c(t))\,dt \quad\text{with } T < \infty

in the case of the welfare of an individual, or

\int_0^T e^{-rt} \pi(x(t))\,dt \quad\text{with } T < \infty

in the case of a profit function. However, it will often be the case that we want to compute such quantities in an infinite-horizon model, something that this quadrature method cannot achieve unless we consider a change of variable of the kind we studied earlier. Nevertheless, there exists a specific Gaussian quadrature that can achieve this task.

As an example of the potential of the Gauss–Legendre quadrature formula, we compute the welfare function of an individual who lives for an infinite number of periods. Time is continuous and the welfare function takes the form

W = \int_0^T e^{-\theta t}\,\frac{c(t)^{1-\sigma}}{1-\sigma}\,dt

where we assume that c(t) = c^{\star}e^{\gamma t}. Results for n = 2, 4, 8 and 12 and T = 10, 50, 100 and 1000 (as an approximation to \infty) are reported in table 4.2, where we set \gamma = 0.01, \theta = 0.05 and c^{\star} = 1. As can be seen from the table, the integral converges pretty fast to the true value, as the absolute error is almost zero for n \geqslant 8, except for T = 1000. Note that even with n = 4 a quite high level of accuracy can be achieved in most cases.
Gauss–Laguerre quadrature

This particular quadrature can be applied to problems that take the form

\int_0^{\infty} F(x)e^{-x}\,dx

such that in this case w(x) = e^{-x}, a = 0 and b = \infty. The quadrature formula is then given by

\int_0^{\infty} F(x)e^{-x}\,dx = \sum_{i=1}^{n} \omega_i F(x_i) + \frac{(n!)^2}{(2n)!}F^{(2n)}(\xi)

for \xi \in [0;\infty). In this case, as in the Gauss–Legendre quadrature, both the nodes and the weights are non-trivial to compute. Nevertheless, we can generate the nodes using any root-finding procedure, and the weights can be computed as explained earlier, noting that \int_0^{\infty} w(x)\,dx = 1.

A direct application of this formula is that it can be used to compute the discounted sum of any quantity in an infinite-horizon problem. Consider for instance the welfare of an individual, which can be computed, once we know the function c(t), by solving the integral

\int_0^{\infty} e^{-\theta t}u(c(t))\,dt

The problem involves a discount rate \theta that should be eliminated to stick to the exact formulation of the Gauss–Laguerre problem. Let us consider the linear map y = \theta t; the problem rewrites

\frac{1}{\theta}\int_0^{\infty} e^{-y}\,u\!\left(c\!\left(\frac{y}{\theta}\right)\right)dy

and can be approximated by

\frac{1}{\theta}\sum_{i=1}^{n} \omega_i\,u\!\left(c\!\left(\frac{y_i}{\theta}\right)\right)
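Assuming a helper routine gauss_lag(n) that returns the Laguerre nodes and weights (a hypothetical routine here, analogous to the gauss_herm routine used later in these notes), a discounted sum can then be evaluated as in the following sketch, where we take linear utility u(c) = c and c(t) = exp(0.01 t), so that the exact value is 1/(0.05-0.01) = 25:

n     = 8;
theta = 0.05;                  % discount rate
g     = 0.01;                  % consumption growth rate
[y,w] = gauss_lag(n);          % hypothetical helper: Laguerre nodes and weights
c     = exp(g*y/theta);        % c(t) evaluated at t = y/theta (c*=1)
W     = sum(w.*c)/theta        % approximates int_0^inf exp(-theta*t)c(t)dt = 25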

Table 4.2: Welfare in finite horizon

  n    σ = 2.5                    σ = 1                      σ = 0.5                   σ = 0.9
                                             T = 10
  2    -3.5392 (-3.19388e-006)   -8.2420 (-4.85944e-005)   15.3833 (0.000322752)     8.3929 (0.000232844)
  4    -3.5392 (-3.10862e-014)   -8.2420 (-3.01981e-012)   15.3836 (7.1676e-011)     8.3931 (6.8459e-011)
  8    -3.5392 (0)               -8.2420 (1.77636e-015)    15.3836 (1.77636e-015)    8.3931 (-1.77636e-015)
 12    -3.5392 (-4.44089e-016)   -8.2420 (0)               15.3836 (3.55271e-015)    8.3931 (1.77636e-015)
                                             T = 50
  2   -11.4098 (-0.00614435)    -21.5457 (-0.0708747)      33.6783 (0.360647)       17.6039 (0.242766)
  4   -11.4159 (-3.62327e-008)  -21.6166 (-2.71432e-006)   34.0389 (4.87265e-005)   17.8467 (4.32532e-005)
  8   -11.4159 (3.55271e-015)   -21.6166 (3.55271e-015)    34.0390 (7.10543e-015)   17.8467 (3.55271e-015)
 12   -11.4159 (-3.55271e-015)  -21.6166 (-7.10543e-015)   34.0390 (1.42109e-014)   17.8467 (7.10543e-015)
                                             T = 100
  2   -14.5764 (-0.110221)      -23.6040 (-0.938113)       32.5837 (3.63138)        16.4972 (2.28361)
  4   -14.6866 (-1.02204e-005)  -24.5416 (-0.000550308)    36.2078 (0.00724483)     18.7749 (0.00594034)
  8   -14.6866 (3.55271e-015)   -24.5421 (-1.03739e-012)   36.2150 (1.68896e-010)   18.7808 (2.39957e-010)
 12   -14.6866 (-5.32907e-015)  -24.5421 (-1.77636e-014)   36.2150 (2.84217e-014)   18.7808 (1.77636e-014)
                                             T = 1000
  2    -1.0153 (-14.9847)        -0.1066 (-24.8934)         0.0090 (36.3547)         0.0021 (18.8303)
  4   -12.2966 (-3.70336)       -10.8203 (-14.1797)         7.6372 (28.7264)         3.2140 (15.6184)
  8   -15.9954 (-0.00459599)    -24.7917 (-0.208262)       34.7956 (1.56803)        17.7361 (1.09634)
 12   -16.0000 (-2.01256e-007)  -24.9998 (-0.000188532)    36.3557 (0.00798507)     18.8245 (0.00784393)

(Approximation errors are reported in parentheses.)

where y_i and \omega_i are the Gauss–Laguerre nodes and weights over the interval [0;\infty).

As an example of the potential of the Gauss–Laguerre quadrature formula, we compute the welfare function of an individual who lives for an infinite number of periods. Time is continuous and the welfare function takes the form

W = \int_0^{\infty} e^{-\theta t}\,\frac{c(t)^{1-\sigma}}{1-\sigma}\,dt

where we assume that c(t) = c^{\star}e^{\gamma t}. Results for n = 2, 4, 8 and 12 are reported in table 4.3, where we set \gamma = 0.01, \theta = 0.05 and c^{\star} = 1. As can be seen from the table, the integral converges pretty fast to the true value, as the absolute error is almost zero for n \geqslant 8. It is worth noting that the method performs far better than the Gauss–Legendre quadrature method with T = 1000. Note that even with n = 4 a quite high level of accuracy can be achieved in some cases.
Table 4.3: Welfare in infinite horizon

  n    σ = 2.5                    σ = 1                      σ = 0.5                   σ = 0.9
  2   -15.6110 (0.388994)        -24.9907 (0.00925028)      36.3631 (0.000517411)    18.8299 (0.00248525)
  4   -15.9938 (0.00622584)      -25.0000 (1.90929e-006)    36.3636 (3.66246e-009)   18.8324 (1.59375e-007)
  8   -16.0000 (1.26797e-006)    -25.0000 (6.03961e-014)    36.3636 (0)              18.8324 (0)
 12   -16.0000 (2.33914e-010)    -25.0000 (0)               36.3636 (0)              18.8324 (3.55271e-015)

(Approximation errors are reported in parentheses.)

Gauss–Hermite quadrature

This type of quadrature will be particularly useful when we consider stochastic processes with gaussian distributions, as it approximates integrals of the type

\int_{-\infty}^{\infty} F(x)e^{-x^2}\,dx

such that in this case w(x) = e^{-x^2}, a = -\infty and b = \infty. The quadrature formula is then given by

\int_{-\infty}^{\infty} F(x)e^{-x^2}\,dx = \sum_{i=1}^{n} \omega_i F(x_i) + \frac{n!\sqrt{\pi}}{2^n}\frac{F^{(2n)}(\xi)}{(2n)!}

for \xi \in (-\infty;\infty). In this case, as in the last two particular quadratures, both the nodes and the weights are non-trivial to compute. The nodes can be computed using any root-finding procedure, and the weights can be computed as explained earlier, noting that \int_{-\infty}^{\infty} w(x)\,dx = \sqrt{\pi}.

As aforementioned, this type of quadrature is particularly useful when we want to compute the moments of a normal distribution. Let us assume that x \sim \mathcal{N}(\mu,\sigma^2) and that we want to compute

\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty} F(x)e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx

In order to stick to the problem this type of approach can explicitly solve, we need to transform the variable using the linear map

y = \frac{x-\mu}{\sigma\sqrt{2}}

such that the problem rewrites

\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} F(\sqrt{2}\sigma y + \mu)e^{-y^2}\,dy

and can therefore be approximated by

\frac{1}{\sqrt{\pi}}\sum_{i=1}^{n} \omega_i F(\sqrt{2}\sigma y_i + \mu)

where y_i and \omega_i are the Gauss–Hermite nodes and weights over the interval (-\infty;\infty).
As a first example, let us compute the average of a lognormal distribution, that is log(X) \sim \mathcal{N}(\mu,\sigma^2). We then know that E(X) = \exp(\mu + \sigma^2/2). This is particularly important, as we will often rely in macroeconomics on shocks that follow a lognormal distribution. Table 4.4 reports the results as well as the approximation errors (in parentheses) for \mu = 0 and different values of \sigma.
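A minimal sketch of the computation behind table 4.4 (our own code; the gauss_herm routine, assumed to return the Hermite nodes and weights, is the one invoked in the discretization code below) is:

n     = 8;
mu    = 0;
sigma = 0.5;
[y,w] = gauss_herm(n);                              % Gauss-Hermite nodes and weights
EX    = sum(w.*exp(sqrt(2)*sigma*y+mu))/sqrt(pi)    % approximates exp(mu+sigma^2/2) = 1.13315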
Table 4.4: Gauss–Hermite quadrature

  n    σ = 0.01                 σ = 0.1                  σ = 0.5                 σ = 1.0                σ = 2.0
  2    1.00005 (8.33353e-10)    1.00500 (8.35280e-06)    1.12763 (0.00552249)    1.54308 (0.105641)     3.76219 (3.62686)
  4    1.00005 (2.22045e-16)    1.00501 (5.96634e-12)    1.13315 (2.46494e-06)   1.64797 (0.000752311)  6.99531 (0.393743)
  8    1.00005 (2.22045e-16)    1.00501 (4.44089e-16)    1.13315 (3.06422e-14)   1.64872 (2.44652e-09)  7.38873 (0.00032857)
 12    1.00005 (3.55271e-15)    1.00501 (3.55271e-15)    1.13315 (4.88498e-15)   1.64872 (1.35447e-14)  7.38906 (3.4044e-08)

(Approximation errors are reported in parentheses.)

Another direct application of this method in economics is related to the

discretization of the shocks that we will face when we deal with methods for solving rational expectations models. In fact, we will often face shocks that follow Gaussian AR(1) processes:

x_{t+1} = \rho x_t + (1-\rho)\bar{x} + \varepsilon_{t+1}

where \varepsilon_{t+1} \sim \mathcal{N}(0,\sigma^2). This implies that

\int_{-\infty}^{\infty} f(x_{t+1}|x_t)\,dx_{t+1} = \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{x_{t+1}-\rho x_t-(1-\rho)\bar{x}}{\sigma}\right)^2\right\}dx_{t+1} = 1

which illustrates the fact that x is a continuous random variable. The question we now ask is: does there exist a discrete representation of x which is equivalent to its continuous representation? The answer to this question is yes, as shown in Tauchen and Hussey [1991].² Tauchen and Hussey propose to replace the integral by

\int_{-\infty}^{\infty} \frac{f(x_{t+1}|x_t)}{f(x_{t+1}|\bar{x})}\,f(x_{t+1}|\bar{x})\,dx_{t+1} \equiv \int_{-\infty}^{\infty} \Phi(x_{t+1};x_t,\bar{x})\,f(x_{t+1}|\bar{x})\,dx_{t+1} = 1

² This is actually a direct application of gaussian quadrature.

where f(x_{t+1}|\bar{x}) denotes the density of x_{t+1} conditional on x_t = \bar{x} (therefore the unconditional density), which in our case implies that

\Phi(x_{t+1};x_t,\bar{x}) = \frac{f(x_{t+1}|x_t)}{f(x_{t+1}|\bar{x})} = \exp\left\{-\frac{1}{2}\left[\left(\frac{x_{t+1}-\rho x_t-(1-\rho)\bar{x}}{\sigma}\right)^2 - \left(\frac{x_{t+1}-\bar{x}}{\sigma}\right)^2\right]\right\}

We can then use the standard linear transformation and impose y_t = (x_t-\bar{x})/(\sigma\sqrt{2}) to get

\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} \exp\left\{y_{t+1}^2 - (y_{t+1}-\rho y_t)^2\right\}\exp\left\{-y_{t+1}^2\right\}dy_{t+1} = 1

for which we can use a Gauss–Hermite quadrature. Assume then that we have the quadrature nodes y_j and weights \omega_j, j = 1,\ldots,n; the quadrature leads to the formula

\frac{1}{\sqrt{\pi}}\sum_{j=1}^{n} \omega_j \Phi(y_j;y_i;\bar{x}) \simeq 1

In other words, we might interpret the quantity \frac{\omega_j}{\sqrt{\pi}}\Phi(y_j;y_i;\bar{x}) as an estimate \widehat{\pi}_{ij} of the transition probability from state i to state j. But remember that the quadrature is just an approximation, such that \sum_{j=1}^{n}\widehat{\pi}_{ij} = 1 will generally not hold exactly. Tauchen and Hussey therefore propose the following modification:

\widehat{\pi}_{ij} = \frac{\omega_j \Phi(y_j;y_i;\bar{x})/\sqrt{\pi}}{s_i} \quad\text{where}\quad s_i = \frac{1}{\sqrt{\pi}}\sum_{j=1}^{n}\omega_j \Phi(y_j;y_i;\bar{x})

We then end up with a Markov chain with nodes x_i = \sqrt{2}\sigma y_i + \bar{x} and transition probabilities \pi_{ij} given by the previous equation. The Matlab code to generate such an approximation is then straightforward. It yields the following 4-state approximation to an AR(1) process with persistence \rho = 0.9 and \sigma = 0.01, with \bar{x} = 0:

x^d = \{-0.0233, -0.0074, 0.0074, 0.0233\}
and

\Pi = \begin{pmatrix}
0.7330 & 0.2557 & 0.0113 & 0.0000 \\
0.1745 & 0.5964 & 0.2214 & 0.0077 \\
0.0077 & 0.2214 & 0.5964 & 0.1745 \\
0.0000 & 0.0113 & 0.2557 & 0.7330
\end{pmatrix}

meaning for instance that we stay in state 1 with probability 0.7330, but transit from state 2 to state 3 with probability 0.2214.
Matlab Code: Discretization of an AR(1)

n     = 2;          % number of nodes
xbar  = 0;          % mean of the x process
rho   = 0.95;       % persistence parameter
sigma = 0.01;       % volatility

[xx,wx] = gauss_herm(n);            % nodes and weights for x
x_d     = sqrt(2)*sigma*xx+xbar;    % discrete states
x       = xx(:,ones(n,1));          % y(t)   varies across rows
y       = x';                       % y(t+1) varies across columns
w       = wx(:,ones(n,1))';         % weights attached to y(t+1)
%
% computation of the transition matrix
%
px = (exp(y.*y-(y-rho*x).*(y-rho*x)).*w)./sqrt(pi);
sx = sum(px,2);                     % row sums s_i
px = px./sx(:,ones(n,1));           % normalize each row

4.2.3  Potential problems

In all the cases we dealt with in the previous sections, the integrals were definite, or at least existed (up to some examples), but there may exist singularities in the function, such that the integral may not be definite. For instance, think of integrating x^{-1/2} over [0;1]: the function diverges at 0. How will the methods we presented in the previous sections perform? The following theorem by Davis and Rabinowitz [1984] states that standard methods can still be used.

Theorem 3  Assume that there exists a continuous, monotonically increasing function G : [0;1] \rightarrow \mathbb{R} such that \int_0^1 G(x)\,dx < \infty and |F(x)| \leqslant |G(x)| on [0;1]; then the Newton–Cotes rules (with F(1) = 0 to avoid the singularity in 1) and the Gauss–Legendre quadrature rule converge to \int_0^1 F(x)\,dx as n increases to \infty.

Therefore, we can still apply standard methods to compute such integrals, but convergence is much slower and the error formulas cannot be used anymore, as \|F^{(k)}(x)\| is infinite for k \geqslant 1. Then, if we still want to use error bounds, we need to accommodate the rules to handle singularities. There are several ways of dealing with singularities:

- develop a specific quadrature method to deal with the singularity;
- use a change of variable.

Another potential problem is: how many intervals or nodes should we use? Usually there is no clear answer to that question, and we therefore have to adapt the method. This is the so-called adaptive quadrature method. The idea is to increase the number of nodes up to the point where increases in the number of nodes do not yield any significant change in the numerical integral. The disadvantage of this approach is the computational cost it involves.

4.2.4  Multivariate integration

There will be situations where we would like to compute multivariate integrals. This will in particular be the case when we deal with models in which the economic environment is hit by stochastic shocks, or in incentive problems where the principal has to reveal multiple characteristics. In such cases, numerical integration is in order. There are several ways of obtaining multivariate integration rules, among which product rules, which I will describe the most, non-product rules, which are extremely specific to the problem at hand, and Monte-Carlo and quasi Monte-Carlo methods.

Product rules

Let us assume that we want to compute the integral

\int_{a_1}^{b_1}\ldots\int_{a_s}^{b_s} F(x_1,\ldots,x_s)\,w_1(x_1)\ldots w_s(x_s)\,dx_1\ldots dx_s

for the function F : \mathbb{R}^s \rightarrow \mathbb{R}, where each w_k is a weighting function. The idea of product rules is just to extend the standard one-dimensional quadrature approach to higher dimensions by multiplying sums. For instance, let x_{i_k}^k and \omega_{i_k}^k, i_k = 1,\ldots,n_k, be the quadrature nodes and weights of the one-dimensional problem along dimension k \in \{1,\ldots,s\}, which can be obtained either from a Newton–Cotes formula or a Gaussian quadrature formula. The product rule will approximate the integral by

\sum_{i_1=1}^{n_1}\ldots\sum_{i_s=1}^{n_s} \omega_{i_1}^1\ldots\omega_{i_s}^s\,F(x_{i_1}^1,\ldots,x_{i_s}^s)

A potential difficulty with this approach is that when the dimension of the space increases, the computational cost increases exponentially; this is the so-called curse of dimensionality. Therefore, this approach should be restricted to low-dimensional problems.

As an example of the use of this type of method, let us assume that we want to compute the first-order moment of the 2-dimensional function F(x_1,x_2), where

\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{12} & \sigma_{22} \end{pmatrix}\right)

We therefore have to compute the integral

\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{2\pi|\Sigma|^{\frac{1}{2}}}\,F(x_1,x_2)\exp\left(-\frac{1}{2}(x-\mu)'\Sigma^{-1}(x-\mu)\right)dx_1\,dx_2

where x = (x_1,x_2)', \mu = (\mu_1,\mu_2)' and \Sigma = \begin{pmatrix}\sigma_{11} & \sigma_{12}\\ \sigma_{12} & \sigma_{22}\end{pmatrix}. Let \Omega be the Cholesky decomposition of \Sigma, such that \Sigma = \Omega\Omega', and let us make the change of variable

y = \Omega^{-1}(x-\mu)/\sqrt{2} \iff x = \sqrt{2}\,\Omega y + \mu

Then the integral rewrites

\frac{1}{\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(\sqrt{2}\,\Omega y + \mu)\exp\left(-\sum_{i=1}^{2} y_i^2\right)dy_1\,dy_2

We then use the product rule relying on one-dimensional Gauss–Hermite quadrature, such that we approximate the integral by

\frac{1}{\pi}\sum_{i_1=1}^{n_1}\sum_{i_2=1}^{n_2} \omega_{i_1}^1\omega_{i_2}^2\,F\!\left(\sqrt{2}\,\omega_{11}y_{i_1}+\mu_1,\ \sqrt{2}(\omega_{21}y_{i_1}+\omega_{22}y_{i_2})+\mu_2\right)

where the \omega_{k\ell} denote the entries of the (lower triangular) matrix \Omega.

As an example (see the Matlab code below), we set

F(x_1,x_2) = (e^{x_1} - e^{\mu_1})(e^{x_2} - e^{\mu_2})

with \mu = (0.1, 0.2)' and

\Sigma = \begin{pmatrix} 0.0100 & 0.0075 \\ 0.0075 & 0.0200 \end{pmatrix}

The results are reported in table 4.5, where we consider different values for n_1 and n_2. It appears that the method performs well pretty fast, as the true value for the integral is 0.01038358129717, which is attained for n_1 \geqslant 8 and n_2 \geqslant 8.

Table 4.5: 2D Gauss–Hermite quadrature

  n1\n2          2                 4                 8                 12
    2     0.01029112845254  0.01029142086814  0.01029142086857  0.01029142086857
    4     0.01038328639869  0.01038358058862  0.01038358058906  0.01038358058906
    8     0.01038328710679  0.01038358129674  0.01038358129717  0.01038358129717
   12     0.01038328710679  0.01038358129674  0.01038358129717  0.01038358129717

Matlab Code: 2D Gauss–Hermite Quadrature (Product Rule)

n       = 2;                       % dimension of the problem
n1      = 8;                       % # of nodes for x1
[x1,w1] = gauss_herm(n1);          % nodes and weights for x1
n2      = 8;                       % # of nodes for x2
[x2,w2] = gauss_herm(n2);          % nodes and weights for x2

Sigma   = 0.01*[1 0.75;0.75 2];
Omega   = chol(Sigma)';            % lower triangular factor: Omega*Omega' = Sigma
mu1     = 0.1;
mu2     = 0.2;

int=0;
for i=1:n1;
   for j=1:n2;
      x12 = sqrt(2)*Omega*[x1(i);x2(j)]+[mu1;mu2];
      f   = (exp(x12(1))-exp(mu1))*(exp(x12(2))-exp(mu2));
      int = int+w1(i)*w2(j)*f;
   end
end
int=int/sqrt(pi^n);

The problem is that whenever the dimension of the problem increases, or as the function becomes more complicated, these procedures will not perform well, and relying on stochastic approximation may be a good idea.

4.2.5  Monte-Carlo integration

Monte-Carlo integration methods are sampling methods that are based on probability theory and rely on several trials to reveal information. From an intuitive point of view, Monte-Carlo methods rest on the central limit theorem and the law of large numbers, and are capable of handling quite complicated and large problems. These two features make Monte-Carlo methods particularly worth learning.

A very important feature of Monte-Carlo methods is that they appeal to probability theory; therefore any result of a Monte-Carlo experiment is a random variable. This is precisely a very nice feature of Monte-Carlo methods, as by their probabilistic nature they put a lot of structure on the approximation error, which has a probabilistic distribution. Finally, by adjusting the size of the sample we can always increase the accuracy of the approximation. This is just a consequence of the central limit theorem.

Figure 4.3: Basic idea of Monte-Carlo integration

The basic intuition that lies behind Monte-Carlo integration may be found in figure 4.3. The dark curve is the univariate function we want to integrate, and the shaded area under this curve is the integral. The evaluation of an integral using Monte-Carlo simulations then amounts to drawing random numbers in the x-y plane (the dots in the graph); the integral of the function f is approximately given by the total area times the fraction of points that fall under the curve f(x). It is then obvious that the greater the number of points, the more information we get, and the more accurate is the evaluation of this area. Further, this method will prove competitive only for complicated

and/or multidimensional functions. Note that the integral evaluation will be better if the points are uniformly scattered over the entire area, that is, if the information is spread all over the area.

Another way to think of it is just to realize that

\int_a^b f(x)\,dx = (b-a)\,E_{U_{[a;b]}}(f(x))

such that if we draw n random numbers x_i, i = 1,\ldots,n, from a U_{[a;b]} distribution, an approximation of the integral of f(x) over the interval [a;b] is given by

\frac{(b-a)}{n}\sum_{i=1}^{n} f(x_i)

The key point here is the way we get random numbers.


Not so random numbers!

Monte-Carlo methods are usually associated with stochastic simulations and therefore rely on random numbers. But such numbers cannot be generated by computers.³ Computers are only capable (and this is already a great thing) of generating pseudo-random numbers, that is, numbers that look like random numbers because they look unpredictable. However, it should be clear to you that all these numbers are just generated with deterministic algorithms (explaining the term pseudo) whose implementation is said to be of the volatile type, in the sense that the seed, the initial value of a sequence, depends on an external unpredictable feeder such as the computer clock. Two important properties are usually required of such generators:

1. zero serial correlation: we want i.i.d. sequences;

2. correct frequency of runs: we do not want to generate predictable sequences.

The most well-known and the simplest random number generator relies on the so-called linear congruential method, which obeys the equation

x_{k+1} = a x_k + c \pmod{m}

One big advantage of this method is that it is pretty fast and cheap. The most popular implementation of this scheme assumes that a \equiv 3 \pmod 8, c = 0 and m = 2^b, where b is the number of significant bits available on the computer (these days 32 or 64). Using this scheme, we then generate sequences that resemble random numbers.⁴ For example, figure 4.4 reports a sequence of 250 random numbers generated by this pseudo-random number generator; as can be seen, it looks like random numbers, it smells like randomness, it tastes like
randomness, but this is not randomness! In fact, linear congruential methods are not immune from serial correlation on successive calls: if k random number generators at a time are used to plot points in k-dimensional space, then the points will not fill up the k-dimensional space but will tend to lie on (k-1)-dimensional planes. This can easily be seen as soon as we plot x_{k+1} against x_k, as done in figure 4.5. This too pronounced non-random pattern for these numbers led linear congruential methods to fall into disfavor; the solution has been to design more complicated generators. An example of such a generator, quoted by Judd [1998], is the multiple prime random number generator, for which we report the Matlab code. This pseudo-random number generator, proposed by Haas [1987], generates integers between 0 and 99999, such that dividing the sequence by 100,000 returns numbers that approximate a uniform random variable over [0;1] with 5-digit precision. If higher precision is needed, the sequence may just be concatenated using the scheme (for 8-digit precision) 100,000 x_{2k} + x_{2k+1}. The advantage of this generator is that its period is over 85 trillion!

Figure 4.4: A pseudo-random numbers draw (linear congruential generator)

³ There have been attempts to build truly random number generators, but these techniques were far too costly and awkward.
⁴ Generating a 2-dimensional sequence may be done by extracting subsequences: y_k = (x_{2k+1}, x_{2k+2}).

Figure 4.5: The linear congruential generator (x_{k+1} plotted against x_k)

Matlab Code: Prime Random Number Generator

long = 10000;       % length of the sample
m    = 971;
ia   = 11113;
ib   = 104322;
x    = zeros(long,1);
x(1) = 481;
for i= 2:long;
   m = m+7;
   ia= ia+1907;
   ib= ib+73939;
   if m>=9973;m=m-9871;end
   if ia>=99991;ia=ia-89989;end
   if ib>=224729;ib=ib-96233;end
   x(i)=mod(x(i-1)*m+ia+ib,100000)/10;
end

Other generators may be designed and can be nonlinear, as in

x_{k+1} = f(x_k) \pmod m

or may take rather strange formulations, such as the one reported by Judd [1998], which begins with a sequence of 55 odd numbers and computes

x_k = (x_{k-24}\,x_{k-55}) \pmod{2^{32}}

which has a period length of about 10^{25}, such that it passes a lot of randomness tests.

A key feature of all these random number generators is that they attempt to draw numbers from a uniform distribution over the interval [0;1]. There may however be cases where we would like to draw numbers from another distribution, mainly the normal distribution. The way to handle this problem is then to invert the cumulative density function of the distribution we want to generate, to get a random draw from this particular distribution. More formally, assume we want numbers generated from the distribution with density f(.), and we have a draw \{x_i\}_{i=1}^{N} from the uniform distribution; then the draw \{y_i\}_{i=1}^{N} from the f distribution may be obtained solving

\int_a^{y_i} f(s)\,ds = x_i \quad\text{for } i = 1,\ldots,N

Inverting this relation may be trivial in some cases (say, the uniform over [a;b]), but it may require approximation, as in the case of a normal distribution.
Monte-Carlo integration

The underlying idea of Monte-Carlo integration may be found in the Law of Large Numbers.

Theorem 4 (Law of Large Numbers)  If X_i is a collection of i.i.d. random variables with density \varphi(x), then

\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N} X_i = \int x\,\varphi(x)\,dx \quad\text{almost surely.}

Further, we know that in this case

\mathrm{var}\!\left(\frac{1}{N}\sum_{i=1}^{N} X_i\right) = \frac{\sigma^2}{N} \quad\text{where } \sigma^2 = \mathrm{var}(X_i)

If \sigma^2 is not known, it can be estimated by

\widehat{\sigma}^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(X_i - \bar{X}\right)^2 \quad\text{with}\quad \bar{X} = \frac{1}{N}\sum_{i=1}^{N} X_i

With this in mind, we understand the potential of Monte-Carlo methods for numerical integration. Integrating a function F(x) over [0;1] is nothing else than computing the mean of F(x) assuming that x \sim U_{[0;1]}; therefore a crude application of the Monte-Carlo method to compute the integral \int_0^1 F(x)\,dx is to draw N numbers x_i from a U_{[0;1]} distribution and take

\widehat{I}_F = \frac{1}{N}\sum_{i=1}^{N} F(x_i)

as an approximation to the integral. Further, as this is just an estimate of the integral, it is a random variable with variance

\sigma^2_{\widehat{I}_F} = \frac{1}{N}\int_0^1 \left(F(x) - I_F\right)^2 dx = \frac{\sigma_F^2}{N}

where \sigma_F^2 may be estimated by

\widehat{\sigma}_F^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(F(x_i) - \widehat{I}_F\right)^2

such that the standard error of the Monte-Carlo estimate is \sigma_{\widehat{I}_f} = \widehat{\sigma}_f/\sqrt{N}.

As an example of a crude application of Monte-Carlo integration, we report in table 4.6 the results obtained integrating the exponential function over [0;1].

Table 4.6: Crude Monte-Carlo example: \int_0^1 e^x dx

        N         Î_f           σ̂(Î_f)
       10      1.54903750    0.13529216
      100      1.69945455    0.05408852
     1000      1.72543465    0.01625793
    10000      1.72454262    0.00494992
   100000      1.72139292    0.00156246
  1000000      1.71853252    0.00049203

       True value: 1.71828182

This table illustrates why Monte-Carlo integration is seldom used (i) for univariate integration and (ii) without modification. Indeed, as can be seen, a huge number of data points is needed to achieve, on average, a good enough approximation (1,000,000 points are needed to get an error lower than 0.5e-4), and the standard deviation associated with each experiment is far too high, as even with only 10 data points a Student test would lead us to accept the approximation despite its evident lack of accuracy! Therefore, several modifications are usually proposed in order to circumvent these drawbacks.
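A minimal sketch of the crude estimator and its standard error for this example (our own code; rand is Matlab's uniform generator) is:

N   = 10000;
x   = rand(N,1);              % N draws from U[0;1]
fx  = exp(x);
If  = mean(fx)                % crude Monte-Carlo estimate of int_0^1 exp(x)dx
sIf = std(fx)/sqrt(N)         % standard error of the estimate

The refinements discussed below all modify this basic estimator.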
Antithetic variates: This acceleration method relies on the idea that if f is monotonically increasing, then f(x) and f(1-x) are negatively correlated. Then estimating the integral as

\widehat{I}_f^A = \frac{1}{2N}\sum_{i=1}^{N}\left(F(x_i) + F(1-x_i)\right)

will still furnish an unbiased estimator of the integral, while delivering a lower variance because of the negative correlation between F(x) and F(1-x):

\mathrm{var}(\widehat{I}_f^A) = \frac{\mathrm{var}(F(x)) + \mathrm{var}(F(1-x)) + 2\,\mathrm{cov}(F(x),F(1-x))}{4N}
                              = \frac{\sigma_F^2 + \mathrm{cov}(F(x),F(1-x))}{2N} \leqslant \frac{\sigma_F^2}{2N}

This method is particularly recommended when F is monotone. Table 4.7 illustrates the potential of the approach for the previous example. As can be seen, the gains in terms of volatility are particularly important, but they are also important in terms of the average, even in small samples.⁵ A sketch of this estimator is given right below.
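A minimal sketch of the antithetic estimator for the same integral (our own code, mirroring the crude Monte-Carlo sketch above):

N    = 10000;
x    = rand(N,1);
fa   = (exp(x)+exp(1-x))/2;        % antithetic pairs
IfA  = mean(fa)                    % antithetic-variates estimate of int_0^1 exp(x)dx
sIfA = std(fa)/sqrt(N)             % its standard error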
Stratified sampling: Stratified sampling rests on the basic and quite appealing idea that the variance of f over a subinterval of [0;1] should be lower than the variance over the whole interval. The underlying idea is to prevent draws from clustering in a particular region of the interval; we therefore force the procedure to visit each subinterval, and by this we enlarge the information set used by the algorithm.

⁵ Note that we used the same seed when generating this integral and the one we generated using crude Monte-Carlo.

Table 4.7: Antithetic variates example: \int_0^1 e^x dx

        N         Î_f           σ̂(Î_f)
       10      1.71170096    0.02061231
      100      1.73211884    0.00908890
     1000      1.72472178    0.00282691
    10000      1.71917393    0.00088709
   100000      1.71874441    0.00027981
  1000000      1.71827383    0.00008845

       True value: 1.71828182

The stratified sampling approach works as follows. We set \alpha \in (0,1) and we draw N_a = \alpha N data points over [0;\alpha] and N_b = N - N_a = (1-\alpha)N over [\alpha;1]. Then the integral can be evaluated by

\widehat{I}_f^s = \frac{\alpha}{N_a}\sum_{i=1}^{N_a} F(x_i^a) + \frac{1-\alpha}{N_b}\sum_{i=1}^{N_b} F(x_i^b)

where x_i^a \in [0;\alpha] and x_i^b \in [\alpha;1]. The variance of this estimator is then given by

\frac{\alpha^2}{N_a}\mathrm{var}_a(F(x)) + \frac{(1-\alpha)^2}{N_b}\mathrm{var}_b(F(x))

which equals

\frac{\alpha}{N}\mathrm{var}_a(F(x)) + \frac{1-\alpha}{N}\mathrm{var}_b(F(x))

Table 4.8 reports results for the exponential function for \alpha = 0.25. As can be seen from the table, up to the 10-points example,⁶ there is hopefully no difference between the crude Monte-Carlo method and the stratified sampling approach in the evaluation of the integral, and we find a potential gain from the use of this approach in the variance of the estimates. The potential problem that remains to be fixed is: how should \alpha be selected? In fact, we would like to select \alpha such that we minimize the volatility,

⁶ This is related to the very small sample in this case.

Table 4.8: Stratified sampling example: \int_0^1 e^x dx

        N         Î_f           σ̂(Î_f)
       10      1.52182534    0.11224567
      100      1.69945455    0.04137204
     1000      1.72543465    0.01187637
    10000      1.72454262    0.00359030
   100000      1.72139292    0.00114040

       True value: 1.71828182

which amounts to setting \alpha such that

\alpha\,\mathrm{var}_a(F(x)) = (1-\alpha)\,\mathrm{var}_b(F(x))

which drives the overall variance to

\frac{2(1-\alpha)}{N}\mathrm{var}_b(F(x))
Control variates: The method of control variates tries to extract information from a function that approximates the function to be integrated arbitrarily well, while being easy to integrate. Hence, assume there exists a function \varphi that is similar to F, but that can be easily integrated; the identity

\int F(x)\,dx = \int \left(F(x) - \varphi(x)\right)dx + \int \varphi(x)\,dx

restates the problem as the Monte-Carlo integration of (F - \varphi) plus the known integral of \varphi. The variance of (F - \varphi) is given by \sigma_F^2 + \sigma_\varphi^2 - 2\,\mathrm{cov}(F,\varphi), which is lower than \sigma_F^2 provided the covariance between F and \varphi is high enough.

In our example, we may use 1 + x as the \varphi function, since \exp(x) \simeq 1 + x in a neighborhood of zero. \int_0^1 (1+x)\,dx is simple to compute and equal to 1.5. Table 4.9 reports the results. As can be seen, the method performs a little worse than the antithetic variates, but far better than the crude Monte-Carlo.

Table 4.9: Control variates example: \int_0^1 e^x dx

        N         Î_f           σ̂(Î_f)
       10      1.64503465    0.05006855
      100      1.71897083    0.02293349
     1000      1.72499149    0.00688639
    10000      1.72132486    0.00210111
   100000      1.71983807    0.00066429
  1000000      1.71838279    0.00020900

       True value: 1.71828182

Importance sampling: Importance sampling attempts to circumvent a shortcoming
of the crude Monte-Carlo method: by drawing numbers from a uniform
distribution, information is spread evenly over the interval we are sampling
over, but there are cases where this is not the most efficient strategy. Further,
there may exist a simple transformation of the problem for which Monte-Carlo
integration can be improved to generate a far better result in terms of variance.
More formally, assume you want to integrate F over a given domain

  ∫_D F(x)dx

Now assume there exists a function G such that H = F/G is almost constant
over the domain of integration D. The problem may then be restated as

  ∫_D (F(x)/G(x)) G(x)dx = ∫_D H(x)G(x)dx

We can then integrate F by sampling H instead, drawing numbers not from a
uniform density but from the non-uniform density G(x)dx. The approximated
integral is then given by

  Î_F^is = (1/N) Σ_{i=1}^{N} F(x_i)/G(x_i)


and it has variance

  σ²_{Î_F^is} = (1/N) [ ∫_D (F(x)/G(x))² G(x)dx − ( ∫_D (F(x)/G(x)) G(x)dx )² ]
             = (1/N) [ ∫_D (F(x)²/G(x)) dx − ( ∫_D F(x)dx )² ]


The problem we still have is: how should G be selected? In fact, we see
from the variance that if G were exactly proportional to F (so that H is
constant), the variance would reduce to zero, but then what would be the
gain? Moreover, it may be the case that G would not be a density, or would
be far too complicated to sample from. In fact, we would like G to display a
shape close to that of F while being simple to sample from.
In the example reported in table 4.10, we used G(x) = (1 + α)x^α, with
α = 1.5. As can be seen, the gains in terms of variance are particularly
important, which renders the method particularly attractive; nevertheless, the
selection of the G function requires a pretty good knowledge of the function
to be integrated, which will not be the case in a number of economic problems.

Table 4.10: Importance sampling example: ∫_0^1 e^x dx

        N        Î_F           σ̂_{Î_F}
       10    1.54903750    0.04278314
      100    1.69945455    0.00540885
     1000    1.72543465    0.00051412
    10000    1.72454262    0.00004950
   100000    1.72139292    0.00000494
  1000000    1.71853252    0.00000049

True value: 1.71828182
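
A minimal Matlab sketch of this importance-sampling estimator, with G(x) = (1+α)x^α and α = 1.5, could read as follows; the draws from G are obtained here by inverting its cdf x^{1+α}, which is one possible way of sampling from G, and the variable names are ours.

Matlab Code: Importance Sampling (sketch)
N     = 100000;
alpha = 1.5;
u     = rand(N,1);
x     = u.^(1/(1+alpha));              % inverse cdf draw: G has cdf x^(1+alpha) on [0,1]
w     = exp(x)./((1+alpha)*x.^alpha);  % F(x)/G(x)
Ihat  = mean(w);                       % importance sampling estimate
sig   = std(w)/sqrt(N);                % standard error of the estimate
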

4.2.6 Quasi-Monte Carlo methods

Quasi-Monte Carlo methods are fundamentally different from Monte-Carlo
methods although they look very similar. Indeed, in contrast to Monte-Carlo
methods, which rely on probability theory, quasi-Monte Carlo methods rely
on number theory (and Fourier analysis, but we will not explore this avenue
here). In fact, as we have seen, Monte-Carlo methods use pseudo-random
number generators, which are actually deterministic schemes. A first question
that may then be raised about such an approach is: if the MC sequences are
deterministic, how can we use probability theory to get theoretical results?
And in particular, what is the applicability of the Law of Large Numbers and
the Central Limit Theorem? This is however a bit unfair, as many new random
number generators pass the randomness tests. Nevertheless, why not acknowledge
the deterministic nature of these sequences and try to exploit it? This
is what quasi-Monte Carlo methods propose.

There is another nice feature of quasi-Monte Carlo methods, which is
related to the rate of convergence of the method. Indeed, we have seen that
choosing N points uniformly in an n-dimensional space leads to a Monte-Carlo
error that diminishes as 1/√N. From an intuitive point of view, this
comes from the fact that each new point adds linearly to an accumulated sum
that will become the function average, and also linearly to an accumulated
sum of squares that will become the variance. Since the estimated error is
the square root of the variance, the power is N^{-1/2}. But we can accelerate
the convergence by relying on purely deterministic schemes, as quasi-Monte
Carlo methods do.

Quasi-Monte Carlo methods rely on equidistributed sequences, that is,
sequences that satisfy the following definition.

Definition 1 A sequence {x_i}_{i=1}^∞ ⊂ D ⊂ R^n is said to be equidistributed
over the domain D iff

  lim_{N→∞} (μ(D)/N) Σ_{i=1}^{N} F(x_i) = ∫_D F(x)dx

for all Riemann-integrable functions F : R^n → R, where μ(D) is the Lebesgue
measure of D.

In order to better understand what this exactly means, let us consider the
unidimensional case: the sequence {x_i}_{i=1}^∞ ⊂ R is equidistributed over
[a; b] if for any Riemann-integrable function we have

  lim_{N→∞} ((b−a)/N) Σ_{i=1}^{N} F(x_i) = ∫_a^b F(x)dx

This is therefore just a formal statement of a uniform distribution, as it
states that if we sample data points correctly over the interval [a; b], then
these points should deliver a valid approximation to the integration problem.
From an intuitive point of view, equidistributed sequences are just deterministic
sequences that mimic the uniform distribution, but since they are, by
essence, deterministic, we can select their exact location and therefore avoid
clustering or sampling the same point twice. This is why quasi-Monte Carlo
methods appear so attractive: they should be more efficient.
There exist different ways of selecting equidistributed sequences. Judd
[1998], chapter 9, reports different sequences that may be used, but they share
the common feature of being generated by the scheme

  x_{k+1} = (x_k + θ) mod 1

which amounts to taking the fractional part of kθ (see footnote 7). θ should
be an irrational number.

7. Remember that the fractional part is the part of a number that lies right after the decimal point. It is denoted by {.}, such that {2.5} = 0.5. It can be computed as {x} = x − max{k ∈ Z | k ≤ x}. The matlab expression that returns this component is x-fix(x).

These sequences are, among others:

- Weyl: ({k √p_1}, . . . , {k √p_n}), where n is the dimension of the space.
- Haber: ({ (k(k+1)/2) √p_1 }, . . . , { (k(k+1)/2) √p_n })
- Niederreiter: ({k 2^{1/(1+n)}}, . . . , {k 2^{n/(1+n)}})
- Baker: ({k e^{r_1}}, . . . , {k e^{r_n}}), where the r's are rational and distinct numbers.
In all these cases, the p's are usually prime numbers. Figure 4.6 reports a
2-dimensional sample of 1000 points for each type of sequence.

Figure 4.6: Quasi-Monte Carlo sequences
(Four panels, each showing 1000 points over [0;1]^2: Weyl sequence, Haber sequence, Niederreiter sequence, Baker sequence.)

There obviously exist other ways of obtaining sequences for quasi-Monte Carlo
methods, relying on low-discrepancy approaches, Fourier methods, or the
so-called good lattice points approach. The interested reader may refer to
chapter 9 in Judd [1998], but we will not investigate this any further as it
would bring us far away from our initial purpose.
Matlab Code: Equidistributed Sequences

n   = 2;                % dimension of the space
nb  = 1000;             % number of data points
K   = [1:nb]';          % k=1,...,nb (column vector)
seq = 'NIEDERREITER';   % type of sequence
switch upper(seq)
case 'WEYL'             % Weyl
   p = sqrt(primes(n+1));        % square roots of the first primes (n=2 here)
   x = K*p;
   x = x-fix(x);
case 'HABER'            % Haber
   p = sqrt(primes(n+1));
   x = (K.*(K+1)./2)*p;
   x = x-fix(x);
case 'NIEDERREITER'     % Niederreiter
   x = K*(2.^((1:n)/(1+n)));
   x = x-fix(x);
case 'BAKER'            % Baker
   x = K*exp(1./primes(n+1));    % rational exponents 1/2, 1/3
   x = x-fix(x);
otherwise
   error('Unknown sequence requested')
end

As an example, we report in table 4.11 the results obtained when integrating
the exponential function over [0;1]. Once again, the potential gain of this type
of method will be found in approximating integrals of multidimensional or
complicated functions. Further, as for Monte-Carlo methods, this type of
integration is not restricted to the [0; 1]^n hypercube: you may transform the
function, or perform a change of variables, to be able to use the method. Finally,
note that we may apply all the acceleration methods used with Monte-Carlo
techniques to the quasi-Monte Carlo approach too.
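
For instance, a minimal Matlab sketch of the quasi-Monte Carlo estimate of ∫_0^1 e^x dx using the one-dimensional Weyl sequence (with p_1 = 2) could read as follows; variable names are ours.

Matlab Code: Quasi-Monte Carlo Integration (sketch)
N    = 100000;
k    = (1:N)';
x    = k*sqrt(2);  x = x-fix(x);   % Weyl sequence {k*sqrt(2)} over [0,1]
Ihat = mean(exp(x));               % quasi-Monte Carlo estimate of int_0^1 exp(x) dx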


Table 4.11: Quasi Monte-Carlo example: ∫_0^1 e^x dx

        N    Weyl             Haber            Niederreiter     Baker
       10    1.67548650       1.72014839       1.67548650       1.82322097
             (0.0427953)      (0.00186656)     (0.0427953)      (0.104939)
      100    1.71386433       1.75678423       1.71386433       1.71871676
             (0.0044175)      (0.0385024)      (0.0044175)      (0.000434929)
     1000    1.71803058       1.71480932       1.71803058       1.71817437
             (0.000251247)    (0.00347251)     (0.000251247)    (0.000107457)
    10000    1.71830854       1.71495774       1.71830854       1.71829897
             (2.67146e-005)   (0.00332409)     (2.67146e-005)   (1.71431e-005)
   100000    1.71829045       1.71890493       1.71829045       1.71827363
             (8.62217e-006)   (0.000623101)    (8.62217e-006)   (8.20223e-006)
  1000000    1.71828227       1.71816697       1.71828227       1.71828124
             (4.36844e-007)   (0.000114855)    (4.36844e-007)   (5.9314e-007)

True value: 1.71828182, absolute error in parentheses.


Bibliography

Davis, P.J. and P. Rabinowitz, Methods of Numerical Integration, New York: Academic Press, 1984.

Judd, K.L., Numerical Methods in Economics, Cambridge, Massachusetts: MIT Press, 1998.

Tauchen, G. and R. Hussey, Quadrature Based Methods for Obtaining Approximate Solutions to Nonlinear Asset Pricing Models, Econometrica, 1991, 59 (2), 371-396.


Index

Antithetic variates, 41
Composite rule, 11
Control variates, 43
Gauss-Chebychev quadrature, 21
Gauss-Laguerre quadrature, 24
Gauss-Legendre quadrature, 22
Hessian, 1
Importance sampling, 44
Jacobian, 1
Law of large numbers, 39
Midpoint rule, 10
Monte-Carlo, 34
Newton-Cotes, 10
Pseudo-random numbers, 36
Quadrature, 9
Quadrature nodes, 18
Quadrature weights, 18
Quasi-Monte Carlo, 46
Random numbers generators, 35
Richardson Extrapolation, 5
Simpson's rule, 13
Stratified sampling, 41
Trapezoid rule, 11


Contents

4 Numerical differentiation and integration
    4.1 Numerical differentiation
        4.1.1 Computation of derivatives
        4.1.2 Partial Derivatives
        4.1.3 Hessian
    4.2 Numerical Integration
        4.2.1 Newton-Cotes formulas . . . . . . . . . 10
        4.2.2 Gaussian quadrature . . . . . . . . . . 18
        4.2.3 Potential problems . . . . . . . . . . 30
        4.2.4 Multivariate integration . . . . . . . 31
        4.2.5 Monte-Carlo integration . . . . . . . . 34
        4.2.6 Quasi-Monte Carlo methods . . . . . . . 46


List of Figures

4.1 Newton-Cotes integration . . . . . . . . . . . . . . . . . . . . . 10
4.2 Simpson's rule . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.3 Basic idea of Monte-Carlo integration . . . . . . . . . . . . . . 35
4.4 A pseudo-random numbers draw (linear congruential generator) . 37
4.5 The linear congruential generator . . . . . . . . . . . . . . . . . 38
4.6 Quasi-Monte Carlo sequences . . . . . . . . . . . . . . . . . . . 48


List of Tables

4.1  Integration with a change in variables: True value=exp(0.5) . . 18
4.2  Welfare in finite horizon . . . . . . . . . . . . . . . . . . . . 25
4.3  Welfare in infinite horizon . . . . . . . . . . . . . . . . . . . 26
4.4  Gauss-Hermite quadrature . . . . . . . . . . . . . . . . . . . . 28
4.5  2D Gauss-Hermite quadrature . . . . . . . . . . . . . . . . . . 33
4.6  Crude Monte-Carlo example: ∫_0^1 e^x dx . . . . . . . . . . . . 40
4.7  Antithetic variates example: ∫_0^1 e^x dx . . . . . . . . . . . 42
4.8  Stratified sampling example: ∫_0^1 e^x dx . . . . . . . . . . . 43
4.9  Control variates example: ∫_0^1 e^x dx . . . . . . . . . . . . . 44
4.10 Importance sampling example: ∫_0^1 e^x dx . . . . . . . . . . . 45
4.11 Quasi Monte-Carlo example: ∫_0^1 e^x dx . . . . . . . . . . . . 50
