Valuation of Mortgage Backed Securities Using Brownian Bridges To Reduce Effective Dimension

Mathematics Department, UCLA. caflisch@math.ucla.edu
Mathematics Department, UCLA and C.ATS. morokoff@math.ucla.edu
Statistics Department, Stanford University. art@playfair.Stanford.edu
Abstract

The quasi-Monte Carlo method for financial valuation and other integration problems has error bounds of size $O((\log N)^k N^{-1})$, or even $O((\log N)^k N^{-3/2})$, which suggests significantly better performance than the error size $O(N^{-1/2})$ for standard Monte Carlo. But in high dimensional problems this benefit might not appear at feasible sample sizes. Substantial improvements from quasi-Monte Carlo integration have, however, been reported for problems such as the valuation of mortgage-backed securities, in dimensions as high as 360. We believe that this is due to a lower effective dimension of the integrand in those cases. This paper defines the effective dimension and shows in examples how the effective dimension may be reduced by using a Brownian bridge representation.
1 Introduction
Simulation is often the only effective numerical method for the accurate valuation of securities whose value depends on the whole trajectory of interest
rates or other variables. Standard Monte Carlo simulation using pseudo-random sequences can be quite slow, however, because its convergence rate is only $O(N^{-1/2})$ for N sample paths. Quasi-Monte Carlo simulation, using deterministic sequences that are more uniform than random ones, holds out the promise of much greater accuracy, close to $O(N^{-1})$ in optimal cases.
This dramatic improvement in convergence rate has the potential for significant gains both in computational time and in range of application of simulation methods for finance problems. An optimistic reading of the results suggests an effective squaring or even cubing of the sample size N. Large improvements have in fact been found in a number of earlier studies [1, 11, 18], which were all motivated by the results of Paskov [17] on mortgage backed securities.
Quasi-Monte Carlo simulation is not a magic bullet, however. The asymptotic error magnitudes are the ones it is "close to" above, multiplied by $(\log N)^k$, where k depends on the dimension s of the simulation. In high dimensions these powers of $\log N$ do not become negligible at any computationally possible sample size. This loss of effectiveness has been documented for a series of test problems in [6, 7, 8]. When simulations are cast as integration problems the resulting integral is often of very high dimension (e.g. dimension 360 for a mortgage of length 30 years), so any loss of effectiveness at high dimensionality can affect them.
Our first goal in this paper is to reconcile two apparently conflicting truths. The first is that quasi-Monte Carlo is not much better than Monte Carlo in high dimensions with practical sample sizes. The second is that quasi-Monte Carlo has been seen to far surpass Monte Carlo in some high dimensional examples. It is our view that success in high dimensional problems tells us more about the integrand than about the method of integration. Some high dimensional integrands are indeed amenable to quasi-Monte Carlo simulation. Integrands of low "effective dimension", which we define in two ways below, are of this type. Our second goal is to give an example of a financial simulation, in which one can reduce the effective dimension of an integrand, thereby making quasi-Monte Carlo much more effective.
The outline of this paper is the following: Section 2 gives a brief introduction to quasi-random sequences and their properties, including the Koksma-Hlawka inequality, which is the basic estimate on integration error
for quasi-Monte Carlo. The dependence on dimension and the character of two-dimensional projections of quasi-random sequences is also discussed. Section 3 introduces some useful decompositions of integrands, and uses them to define two notions of the effective dimension of an integrand. The mortgage-backed security problem is formulated in Section 4. Our main technical tool for formulating the problem with reduced effective dimension is the Brownian bridge representation of a random walk, which is described in Section 5. Computational results for the mortgage-backed security problem are presented in Section 6. Conclusions are discussed in Section 7.
We are grateful to Spassimir Paskov and Joseph Traub for a number of
discussions and for providing us with their quasi-random number generator
FINDER.
2 Quasi-Random Sequences

For points $x_1, \ldots, x_N$ in the s-dimensional unit cube and a subset $J \subseteq [0,1]^s$, define

$$R_N(J) = \frac{1}{N} \sum_{n=1}^{N} \chi_J(x_n) - m(J). \qquad (2.1)$$

Here $\chi_J$ is the characteristic function of the set J, and m(J) is its volume. If E is the set of subrectangles with one corner at $(0, 0, \ldots, 0)$, then the star discrepancy is defined as $D_N^* = \sup_{J \in E} |R_N(J)|$.
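As an illustration, here is a minimal Python sketch (NumPy assumed; the points and the box are arbitrary choices for illustration, not data from this paper) of the local discrepancy $R_N(J)$ in (2.1) for a single anchored box:

```python
# Local discrepancy R_N(J) of (2.1) for an anchored box
# J = [0, a_1) x ... x [0, a_s): the fraction of points falling
# in J minus the volume of J.
import numpy as np

def local_discrepancy(x, a):
    """x has shape (N, s); a is the upper corner of the box J."""
    inside = np.all(x < a, axis=1)       # indicator chi_J(x_n) for each point
    return inside.mean() - np.prod(a)    # (1/N) sum_n chi_J(x_n) - m(J)

rng = np.random.default_rng(0)
x = rng.random((4096, 2))                # 4096 pseudo-random points in [0,1)^2
print(local_discrepancy(x, np.array([0.3, 0.7])))   # near 0 for uniform points
```

The star discrepancy takes the worst case of $|R_N(J)|$ over all anchored boxes, which is far more expensive to compute than any single $R_N(J)$.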
[Figure: Projection of 4096 pseudo-random points, dimensions 1 (horizontal axis) and 16 (vertical axis).]
[Figure: a second scatter-plot panel; horizontal axis Dimension 2.]
See Niederreiter [10] for some of these other definitions and Hickernell [4] for some more recent generalizations.
The importance of discrepancy can be seen from the Koksma-Hlawka inequality for integration error. For the integral of a function f on the s-dimensional unit cube, the simulation based estimate of the integral is

$$I_N(f) = \frac{1}{N} \sum_{n=1}^{N} f(x_n). \qquad (2.2)$$
The error magnitudes (2.4) and (2.5) are similar in that the bound is a product of one term depending on properties of the integrand function and a second term depending on properties of the sequence. The Koksma-Hlawka inequality is an absolute bound, which is more satisfying theoretically than (2.5), an equality in expectation which holds only probabilistically. For practical purposes the preference is reversed. Each factor in (2.4) is incredibly hard to compute, whereas the Monte Carlo variance can be estimated from the same data used to compute $I_N(f)$. Furthermore the Koksma-Hlawka bound is an inequality that is only tight for a worst case function f, whose fluctuations are exquisitely matched to the discrepancies in the sequence $(x_n)$, while the Monte Carlo variance estimates the error for the actual f being sampled.
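The contrast is easy to see numerically. The sketch below (SciPy >= 1.7 assumed for `scipy.stats.qmc`; the smooth test integrand, with known integral 1, is our own and not one of the integrands studied in this paper) compares the estimate (2.2) using pseudo-random points with the same estimate using Sobol' points:

```python
# Monte Carlo versus quasi-Monte Carlo estimates of a known integral.
import numpy as np
from scipy.stats import qmc

s = 5
def f(x):
    # Each factor 1.5*sqrt(x_j) integrates to 1 over [0,1],
    # so the integral of f over [0,1]^s is exactly 1.
    return np.prod(1.5 * np.sqrt(x), axis=1)

m = 12
N = 2 ** m                                  # powers of 2 suit Sobol' points

rng = np.random.default_rng(0)
x_mc = rng.random((N, s))                   # pseudo-random points
x_qmc = qmc.Sobol(d=s, scramble=False).random_base2(m)  # Sobol' points

print("MC  error:", abs(f(x_mc).mean() - 1.0))
print("QMC error:", abs(f(x_qmc).mean() - 1.0))
```

In five dimensions the quasi-random error here is typically smaller by orders of magnitude; the question addressed in this paper is what survives of that advantage when s is 360.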
The infinite sequence $(x_n)_{n=1}^{\infty}$ is said to be quasi-random if

$$D_N^* \le c\, (\log N)^k\, N^{-1} \qquad (2.7)$$

in which the constant c and the logarithmic exponent k may depend on the dimension s. For integration by quasi-random sequences, the Koksma-Hlawka inequality says that the integration error is of size $O((\log N)^k N^{-1})$, which for large s is dominated by the logarithmic factor at all practical sample sizes. In high dimensions the error may at first decrease like that of a random sequence, taking on an apparent $O(N^{-1})$ rate only for very large values of N.
identical to what is shown for 4096. However, the next 8192 points fall only where the gaps appear. Thus by N = 16,384, the projection plot is almost perfectly uniform. The problem is that the cycle for filling in such holes can be too long.
[Figure: 4096 points of the Sobol' sequence, projected onto dimensions 27 (horizontal axis) and 28 (vertical axis).]
The asymptotic rate for the discrepancy of nets should therefore start at around $N = b^{t+s}$.
The most widely used constructions of (t, s)-sequences are due to Sobol', Faure and Niederreiter. The Sobol' sequences are (t, s)-sequences in base b = 2. For s = 360 the value of t is quite large for Sobol' sequences, in the thousands, according to Niederreiter (personal communication, 1996). The Faure sequences are (0, s)-sequences in base b, where $b \ge s$ is a prime power. For s = 360, the smallest value of b is 361.

For Faure sequences the net property starts to be relevant at N = 361, but the asymptotics are not relevant until $N = 361^{360} > 10^{920}$. For Sobol' sequences, with t in the thousands, both thresholds are astronomically larger still.
Recent constructions of Niederreiter and Xing (1996) hold the promise of (t, s)-sequences in base 2 with t = s, a significantly smaller value than previously known possible. Even here though, this suggests that the net property becomes relevant at $N = 2^{361} > 10^{108}$ and the asymptotics become relevant at $N = 2^{720} > 10^{216}$.
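For reference, these thresholds are just base-10 logarithms: $360 \log_{10} 361 \approx 920.7$, $361 \log_{10} 2 \approx 108.7$, and $720 \log_{10} 2 \approx 216.7$.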
unscrambled nets, though with realistic sample sizes, significant benefits can be seen on integrands of lower effective dimension.
$$e_N(f) = \sum_{m=1}^{M} e_N(f_m) \qquad (3.2)$$

because $I_N(f_0) = f_0$. If $\int f_m(x) f_k(x)\,dx = 0$ for $m \neq k$, the decomposition (3.1) is said to be orthogonal.
In simple Monte Carlo simulation, the $x_n$ are taken independently from the uniform distribution on $[0,1]^s$. For an orthogonal decomposition, $\sigma^2(f) = \sum_{m=1}^{M} \sigma^2(f_m)$. Many Monte Carlo methods have the effect of changing the sampling variance of f to

$$\frac{1}{N} \sum_{m=1}^{M} \Gamma_m\, \sigma^2(f_m) \qquad (3.3)$$

for some coefficients $\Gamma_m \ge 0$. When the $\Gamma_m$ are small for the components $f_m$ carrying most of the variance, the method is effective.
A case in point is antithetic sampling. Write $f = f_0 + f_1 + f_2$ where $f_0(x) = \bar{f}$, $f_1(x) = (f(x) + f(1-x))/2 - \bar{f}$ and $f_2(x) = (f(x) - f(1-x))/2$. Here $1 - x$ is interpreted componentwise. This is an orthogonal decomposition into constant, even and odd (symmetric and antisymmetric) parts. Antithetic sampling involves taking the points $x_n$ and $x_n' = 1 - x_n$ in pairs, for $1 \le n \le N/2$. One can show that in antithetic sampling $\Gamma_2 = 0$ and $\Gamma_1 = 2$. Antithetic sampling can be anywhere from half as good as simple Monte Carlo (when $f_2 = 0$) to infinitely better than simple Monte Carlo (when $f_1 = 0$). When it is thought that $\sigma^2(f_2) \gg \sigma^2(f_1)$, as for nearly linear functions, antithetic sampling becomes attractive.
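A small sketch (Python; the nearly linear test function is hypothetical, not one of the mortgage integrands) shows the decomposition at work:

```python
# Antithetic sampling: pairing x with 1 - x cancels the odd part f_2
# exactly, so the pair averages have variance governed by sigma^2(f_1).
import numpy as np

rng = np.random.default_rng(1)
s, N = 10, 100_000

def f(x):
    # Nearly linear, hence nearly odd about the center of the cube,
    # plus a small even perturbation in the first coordinate.
    return x.sum(axis=1) + 0.1 * np.cos(2 * np.pi * x[:, 0])

x = rng.random((N, s))
plain = f(x)                       # simple Monte Carlo evaluations
anti = 0.5 * (f(x) + f(1 - x))     # antithetic pairs (2N evaluations in all)

print("var(f):          ", plain.var())   # approx sigma_1^2 + sigma_2^2
print("var(antithetic): ", anti.var())    # approx sigma_1^2 alone
```

Here the variance of the pair averages is just that of the small even term, roughly two orders of magnitude below the plain variance.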
In an analysis of variance (ANOVA) decomposition, $M = 2^s - 1$, and there is one term in (3.1) for each subset of the s components of x. The empty set corresponds to $f_0$. The term for each subset is a function only of the components within the subset. It is convenient to replace the labels $0, 1, \ldots, M$ by subsets $u \subseteq \{1, 2, \ldots, s\}$. Thus $f(x) = \sum_u f_u(x)$. The ANOVA decomposition is an orthogonal one, so $\sigma^2(f) = \sum_u \sigma_u^2$ where $\sigma_u^2 = \sigma^2(f_u)$. See Owen [12] for definitions, Takemura [22] for some history of this decomposition and Hickernell [4] for some recent generalizations.
Let $g_t(x) = \sum_{|u| = t} f_u(x)$, for $0 \le t \le s$. Then $g_t$ describes the part of f that is exactly t dimensional and $\sum_{d=1}^{t} g_d$ describes the part that is at most t dimensional. The variance of $g_t$ is $\sigma^2(g_t) = \sum_{|u| = t} \sigma_u^2$. If, for example, $\sigma^2(g_1)$ is nearly equal to $\sigma^2(f)$, then almost all of the variance is due to the components of x taken one at a time. In such a case we might say that f is effectively one dimensional. Likewise, if $\sigma^2(g_1) + \sigma^2(g_2) + \sigma^2(g_3)$ is close enough to $\sigma^2$ we might consider f to be effectively 3 dimensional. For such an f, a set of points with good uniformity in every triple of variables will produce an accurate integral estimate, even if the points are not particularly uniform in some quadruples of variables.
Definition 3.1 The effective dimension of f, in the superposition sense, is the smallest integer $d_S$ such that $\sum_{0 < |u| \le d_S} \sigma^2(f_u) \ge 0.99\, \sigma^2(f)$.

This notion of effective dimension differs from the one implicitly used in [17], which we state formally as:

Definition 3.2 The effective dimension of f, in the truncation sense, is the smallest integer $d_T$ such that $\sum_{u \subseteq \{1, 2, \ldots, d_T\}} \sigma^2(f_u) \ge 0.99\, \sigma^2(f)$.
The threshold 0.99 is an arbitrary choice, and one might well prefer other values in some settings. For each $d = 1, \ldots, s$ we can define how d dimensional f is, in the senses above, by the ratios $\sum_{0 < |u| \le d} \sigma_u^2 / \sigma^2$ and $\sum_{u \subseteq \{1, 2, \ldots, d\}} \sigma_u^2 / \sigma^2$. Clearly the fraction of variance that is at most d dimensional, in either sense, is at least as large as the fraction of the variance that comes from one dimensional parts of an integrand.
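In Section 6 the one dimensional fraction is estimated empirically by comparing per-sample variances under simple Monte Carlo and under Latin hypercube sampling [5, 20], which integrates the one dimensional part at a faster rate. A sketch (Python with SciPy's `scipy.stats.qmc`; the mostly additive test function is hypothetical):

```python
# Estimate the fraction of variance due to one dimensional (additive)
# structure by comparing Monte Carlo and Latin hypercube variances.
import numpy as np
from scipy.stats import qmc

s, N, reps = 8, 1024, 200
def f(x):
    # Mostly additive, with a small two dimensional interaction.
    return x.sum(axis=1) + 0.2 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(2)
mc = [f(rng.random((N, s))).mean() for _ in range(reps)]
lhs = [f(qmc.LatinHypercube(d=s, seed=r).random(N)).mean() for r in range(reps)]

var_mc = N * np.var(mc)      # per-sample variance, simple Monte Carlo
var_lhs = N * np.var(lhs)    # per-sample variance, Latin hypercube
print("one dimensional fraction ~", (var_mc - var_lhs) / var_mc)
```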
Some orthogonal array sampling schemes [12, 13, 23] balance all margins up to order t, randomize the higher ones and have $\Gamma_u = 1_{|u| > t}$. These should be effective on integrands that are, or are nearly, of effective dimension t or less, in the superposition sense.
For non-randomized quasi-Monte Carlo methods, the decomposition (3.1) does not lead to a variance interpretation, but we may still write

$$|e_N(f)| \le \sum_{|u| \ge 1} |e_N(f_u)| \le \sum_{|u| \ge 1} D_{N,u}^*\, V_u(f_u), \qquad (3.4)$$

where $D_{N,u}^*$ is the discrepancy of the $|u|$-dimensional points obtained by keeping only those components of $(x_n)_{n=1}^N$ in u, and $V_u(f_u)$ is the variation of $f_u$ taken as a $|u|$-dimensional function. We believe that many successes of quasi-Monte Carlo methods on high dimensional problems can be attributed to a low effective dimension of the integrand, in one or both of the senses above. In such cases, arranging for small values of $D_{N,u}^*$ to coincide with the few large values of $\sigma_u^2$, and for large values of $D_{N,u}^*$, if any, to coincide with small values of $\sigma_u^2$, should make quasi-Monte Carlo effective.
As N increases, more and more of the terms in equation (3.4) should switch from the Monte Carlo rate $N^{-1/2}$ to a quasi-Monte Carlo rate $N^{-1}(\log N)^k$. If the higher dimensional terms carry significant variance, then one can expect that the errors are bounded below by a small multiple of $N^{-1/2}$ until N reaches the sometimes astronomical sample sizes $b^{t+s}$ discussed above. This explains how the scrambled net variance is $o(N^{-1})$. Large gains can be expected at practical sample sizes for integrands of low effective dimension.
4 Mortgage-Backed Securities

Consider a security backed by mortgages of length M months with fixed interest rate $i_0$, which is the current interest rate at the beginning of the mortgage. The present value of the security is then

$$PV = E(v) = E\left( \sum_{k=1}^{M} u_k m_k \right) \qquad (4.1)$$
in which E is the expectation over the random variables involved in the interest rate fluctuations. The variables in the problem are the following:

$u_k$ = discount factor for month k
$m_k$ = cash flow for month k
$i_k$ = interest rate for month k
$w_k$ = fraction of remaining mortgages prepaying in month k
$r_k$ = fraction of remaining mortgages at month k
$c_k$ = (remaining annuity at month k)/c
$c$ = monthly payment
$\xi_k$ = an $N(0, \sigma^2)$ random variable.

This notation follows that of Paskov [17], except that our $c_k$ corresponds to his $a_{M-k+1}$.
Several of these variables are easily defined:

$$u_k = \prod_{j=0}^{k-1} (1 + i_j)^{-1}$$
$$m_k = c\, r_k \left( (1 - w_k) + w_k c_k \right)$$
$$r_k = \prod_{j=1}^{k-1} (1 - w_j)$$
$$c_k = \sum_{j=0}^{M-k} (1 + i_0)^{-j}.$$
Following Paskov, we use models for the interest rate fluctuations and the prepayment rate given by

$$i_k = K_0\, e^{\xi_k}\, i_{k-1} = K_0^k\, e^{\xi_1 + \cdots + \xi_k}\, i_0 \qquad (4.2)$$
$$w_k = K_1 + K_2 \arctan(K_3 i_k + K_4)$$

in which $K_0, \ldots, K_4$ are constants of the model. The initial interest rate $i_0$ is an additional constant that must be specified.
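A minimal sketch of the present value computation (4.1)-(4.2) follows. The numerical values of $i_0$, $\sigma$ and $K_0, \ldots, K_4$ below are placeholders chosen for illustration only (the constants actually used for the two examples are not reproduced here), and the monthly payment is normalized to c = 1:

```python
# Monte Carlo estimate of PV = E(sum_k u_k m_k), equations (4.1)-(4.2).
import numpy as np

M = 360                                    # 30 year mortgage, monthly steps
i0, sigma = 0.007, 0.02                    # hypothetical monthly rate, volatility
K1, K2, K3, K4 = 0.24, -0.1, 100.0, -1.0   # hypothetical prepayment constants
K0 = np.exp(-sigma ** 2 / 2)               # so that E(K0 * e^{xi_k}) = 1

def present_value(xi, c=1.0):
    """Discounted cash flow for one path of increments xi_1, ..., xi_M."""
    i = i0 * K0 ** np.arange(1, M + 1) * np.exp(np.cumsum(xi))  # i_k, eq. (4.2)
    w = K1 + K2 * np.arctan(K3 * i + K4)                        # w_k, eq. (4.2)
    # u_k = prod_{j=0}^{k-1} (1 + i_j)^{-1} uses the rates i_0, ..., i_{k-1}.
    u = np.cumprod(1.0 / (1.0 + np.concatenate(([i0], i[:-1]))))
    # r_k = prod_{j=1}^{k-1} (1 - w_j), with r_1 = 1 (empty product).
    r = np.concatenate(([1.0], np.cumprod(1.0 - w[:-1])))
    k = np.arange(1, M + 1)
    q = 1.0 / (1.0 + i0)
    c_k = (1.0 - q ** (M - k + 1)) / (1.0 - q)   # c_k = sum_{j=0}^{M-k} q^j
    m = c * r * ((1.0 - w) + w * c_k)            # cash flow m_k
    return np.sum(u * m)

rng = np.random.default_rng(3)
paths = rng.normal(0.0, sigma, size=(10_000, M))
print("estimated PV:", np.mean([present_value(p) for p in paths]))
```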
In this study we do not divide the cash flow of the security among a group of tranches, as in [17], but only consider the total cash flow. Nevertheless, the results should be indicative of a more general computation involving a number of tranches.
The expectation PV can be written as an integral over $R^M$ with Gaussian weights

$$g(\xi) = (2\pi\sigma^2)^{-1/2}\, e^{-\xi^2 / 2\sigma^2}. \qquad (4.3)$$

Two sets of model constants are considered below. For the Nearly Linear Example, the prepayment rate is nearly linear in the interest rate, in the range of interest; whereas for the Nonlinear Example, the prepayment rate has a step increase when the interest rate falls much below $i_0$. In both examples the length of the loans is taken to be 30 years (M = 360).
5 Brownian Bridge

Since Brownian motion is a Markov process, it is most natural to generate its value $b(t + \Delta t_1)$ as a random jump from a past value b(t) as

$$b(t + \Delta t_1) = b(t) + \sqrt{\Delta t_1}\, \xi \qquad (5.1)$$

in which $\xi$ is an N(0, 1) random variable. On the other hand, the value $b(t + \Delta t_1)$ can also be generated from knowledge of both a past value b(t) and a future value b(T), $T = t + \Delta t_1 + \Delta t_2$ with $0 \le \Delta t_i$, according to the Brownian bridge formula

$$b(t + \Delta t_1) = a\, b(t) + (1 - a)\, b(T) + c\, \xi \qquad (5.2)$$

in which

$$a = \Delta t_2 / (\Delta t_1 + \Delta t_2), \qquad c = \sqrt{a\, \Delta t_1}. \qquad (5.3)$$

Note that $a\, \Delta t_1 \le \Delta t_1$, so that the variance of the random part of the Brownian bridge formula (5.2) is less than that in (5.1).
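As a check on (5.3): conditionally on $b(t)$ and $b(T)$, the bridge value is Gaussian with variance

$$\mathrm{Var}\left[\, b(t + \Delta t_1) \mid b(t),\, b(T) \,\right] = \frac{\Delta t_1\, \Delta t_2}{\Delta t_1 + \Delta t_2} = a\, \Delta t_1 = c^2,$$

and since $a \le 1$ this never exceeds the variance $\Delta t_1$ of the unconditional jump in (5.1).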
The standard method of generating a random walk $y_k = b(k\,\Delta t)$ is based on the updating formula (5.1). The initial value is $y_0 = 0$. Each subsequent value $y_{k+1}$ is generated from the previous value $y_k$ using formula (5.1), with independent normal variables $\xi_k$.

Another method, which we refer to as the Brownian bridge discretization, can be based on (5.2). Suppose we wish to determine the path $y_0, y_1, \ldots, y_M$, and for convenience assume that M is a power of 2. The initial value is $y_0 = 0$. The next value generated is $y_M = \sqrt{M\,\Delta t}\, \xi$. Then the value at the midpoint $y_{M/2}$ is determined from the Brownian bridge formula (5.2). Subsequent values are found at the successive mid-points, i.e. $y_{M/4}, y_{3M/4}, y_{M/8}, \ldots$. The procedure is easily generalized to general values of M, as sketched below.
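A sketch of this construction for M a power of 2 (Python; the function and variable names are ours, for illustration):

```python
# Brownian bridge discretization: the terminal value is set first from xi[0],
# then midpoints of successively shorter intervals are filled in by (5.2),
# so the leading normal variables carry most of the variance of the path.
import numpy as np

def brownian_bridge_path(xi, dt=1.0):
    """Map M independent N(0,1) variables xi to a path y_0, ..., y_M."""
    M = len(xi)                     # assumed to be a power of 2
    y = np.zeros(M + 1)
    y[M] = np.sqrt(M * dt) * xi[0]  # endpoint: y_M = sqrt(M dt) * xi
    j, idx = M, 1
    while j > 1:
        h = j // 2
        for left in range(0, M, j):    # bridge over [left, left + j]
            a = 0.5                    # symmetric split: dt_1 = dt_2 = h*dt
            c = np.sqrt(a * h * dt)    # standard deviation from (5.3)
            y[left + h] = a * y[left] + (1 - a) * y[left + j] + c * xi[idx]
            idx += 1
        j = h
    return y
```

To use this with a quasi-random sequence, each coordinate of a point of the sequence is mapped through the inverse normal distribution function to produce the $\xi$'s, so that the first few coordinates of the sequence determine the gross shape of the path.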
Although the total variance in this representation is the same as in the standard discretization, much more of the variance is contained in the first few steps of the Brownian bridge discretization, due to the reduction in variance in the Brownian bridge formula. This reduces the effective dimension of the random walk simulation, which increases the accuracy of quasi-Monte Carlo. Moskowitz and Caflisch [9] applied this method to the evaluation of Feynman-Kac integrals and showed the error to be substantially reduced when the number of time steps, which is equal to the dimension of the corresponding integral, is large. Since the mortgage-backed securities problem described above depends on a random walk, and can be written as a discretization of a Feynman-Kac integral, we were naturally led to apply the Brownian bridge discretization to this problem.
6 Numerical Results
6.1 Nearly Linear Example
The value PV for this example was calculated to be 131.78706. The mean length of a mortgage in this case is 100.9 months and the median length is 93 months. The Monte Carlo variance of PV is 41.84 and the variance in antithetic sampling is 0.014. This suggests that the function is very nearly an odd function of the Gaussian increments. In fact, solving $41.84 = \sigma_1^2 + \sigma_2^2$ with $0.014 = 2\sigma_1^2$ provides the rough estimates $\sigma_1^2 = 0.007$ and $\sigma_2^2 = 41.833$, so that the odd or antisymmetric part of this integrand provides about 99.98% of the variation.

Similarly the variance in Latin hypercube sampling is about 0.0155, from which we find that roughly $(41.84 - 0.0155)/41.84 \approx 99.96\%$ of the variation comes from one dimensional structure. This function is effectively one dimensional in the superposition sense, and it is nearly antisymmetric. The percentages quoted above are based on ratios of sampling variances and may not be exact, but both of these findings agree with what we found by numerical inspection: this function is very nearly linear in the Gaussian increment variables. Application of Latin hypercube sampling to the antithetic integrand leads to only a slight decrease in variance compared to the non-antithetic case, which we interpret to mean that the multidimensional part of the integrand is not predominantly odd.
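The variance accounting just described is a two-line computation:

```python
# Variance bookkeeping for the nearly linear example, using the
# sampling variances quoted above.
var_f, var_anti, var_lhs = 41.84, 0.014, 0.0155
sigma1_sq = var_anti / 2                  # antithetic variance is 2 * sigma_1^2
sigma2_sq = var_f - sigma1_sq
print("odd fraction:            ", sigma2_sq / var_f)          # ~0.9998
print("one dimensional fraction:", (var_f - var_lhs) / var_f)  # ~0.9996
```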
We now describe the accuracy of various integration methods for this problem as a function of N, the number of paths. For each of these results, we present the root-mean-square of the error among 25 computations. For Monte Carlo methods the 25 computations are statistically independent, at least to the level possible using pseudo-random numbers. The Sobol' calculations for each of the 25 runs and for each value of N were computed using different, non-overlapping subsequences of the Sobol' sequence. Because they are not a sample from any population, the root mean square error presented is the difference between the values obtained and a "gold standard" obtained by using the quadratic terms of a Taylor series expansion of the integrand about the origin in the Gaussian coordinates as a control variate and $N \approx 3.2$ million. The results are plotted in terms of error versus N, both in log base 10.
In these plots, for the antithetic computations, N refers to the number of times the antithetic integrand $(f(x) + f(1-x))/2$ is evaluated. This corresponds to 2N function evaluations. Plotting the antithetic runs versus 2N would be more appropriate when function evaluation is the dominant cost, and plotting versus N when generating the $x_n$ is the dominant cost. We don't attempt to plot cpu time on the axis as this can depend on how efficiently a method is implemented.
First, we perform straightforward Monte Carlo evaluation, with results plotted in Figure 3. The top curve shows results from Monte Carlo using standard pseudo-random points, with the error decreasing at the expected rate of $N^{-1/2}$. The second curve shows a dramatic improvement using the 360 dimensional Sobol' sequence (generated with part of the code FINDER). In a separate calculation (not plotted), it was found that if only the first 50 dimensions (using the standard discretization) were taken to be quasi-random and the rest pseudo-random, the size of the error decreased slightly compared with the purely random case, and the apparent convergence rate remained $N^{-1/2}$. So, in the truncation sense, the dimensionality is not below 50.
The third curve in Figure 3 shows the results of combining the quasi-random sequence with antithetic sampling. This leads to a sizeable reduction in the error size, as the dominant anti-symmetric part has been removed. However, the improved convergence rate, characteristic of low-dimensional quasi-Monte Carlo methods, also disappears. Finally, reference lines for Latin hypercube sampling and antithetic random Monte Carlo are shown. These both effectively remove the one dimensional linear elements of the integrand, antithetic variates killing them off exactly while in the Latin hypercube case this error decreases like $O(N^{-3/2})$, leaving the remaining errors to converge at the Monte Carlo rate. The quasi-random sequence, though far outperforming simple random Monte Carlo on the one dimensional elements, still can only achieve the $O(N^{-1})$ rate in the optimal case. Both Latin hypercube and antithetic random outperform the antithetic quasi-random sequence slightly. This may be because the quasi-random sequence is performing at worse than the Monte Carlo level on the higher dimensional part of the integrand.
A scrambled (0, 360)-sequence in base 361 with N a small multiple of 361 behaves like Latin hypercube sampling. When N nears $361^2 = 130321$, improvement over Latin hypercube sampling might begin to appear. For this function, the results for the scrambled (0, 360)-sequence were essentially the same as for Latin hypercube sampling, and thus they are not plotted in Figure 3. When applied to the antithetic integrand, the scrambled sequence showed a slight improvement over the antithetic Latin hypercube sampling; however, no significant gains are achieved in either case with antithetic sampling, due to the increased computation time required.
Our results are consistent with the results of Paskov [17, 18] and Ninomiya and Tezuka [11]. They contradict the observation that the effectiveness of quasi-Monte Carlo is lost in high dimensions. However, we argue below that the improvement is almost entirely due to improved integration of the one dimensional parts of the integrand.
Next we consider the Brownian bridge version of the integrand. The reformulated integrand has the same mean and the same variance as the original, but more of the structure is packed into the first few dimensions. This should help the Sobol' sequence, because it would only need to have small discrepancy among the first few dimensions. This encoding decreases the variance in Latin hypercube sampling from 0.0155 to 0.00963, suggesting that the BB encoding has made the integrand even more inherently one dimensional.
Figure 4 shows the results from Sobol' sequence integration in the Brownian bridge representation, with and without antithetic sampling. Also shown are reference lines for simple Monte Carlo, which is not affected by the change of representation, and for Latin hypercube sampling with the Brownian bridge. Again antithetic sampling does not substantially improve Latin hypercube sampling here. For the Brownian bridge Sobol' sequence without antithetic sampling, the results are essentially the same as for the standard representation. This is because for the dominant linear one dimensional elements, the Brownian bridge representation simply rearranges the weights on the elements, but the sum remains constant. Because the errors associated with each one dimensional projection of the Sobol' sequence are nearly identical, no improvement is seen.
In the Brownian Bridge formulation, the Sobol' sequence with antithetic
sampling is much better than either antithetic variables or Latin hypercube
sampling. This suggests that it must be capturing some higher dimensional
antisymmetric structure, most probably among the first few variables.
The theory of scrambled (0, 360)-sequences in base 361 predicts that they will not be much better than Latin hypercube sampling until $N = 361^2 = 130321$, which is beyond the range we explore here. But by packing most of the structure into the first few dimensions of the integrand, we can consider methods in which scrambled nets are used on the most important dimensions and something else is used on the rest. Such methods have been proposed before: Spanier [19] describes a scheme in which quasi-Monte Carlo methods are used on the first few dimensions of an integrand and simple Monte Carlo is used on the rest, and Owen [13] considered augmenting a randomized orthogonal array with further dimensions taken from Latin hypercube samples.
We considered using a scrambled (0, 32)-sequence in base 32 for the first 32 dimensions and Latin hypercube sampling on the last 328 dimensions. For $N = 32^m$ one gets a scrambled (0, m, 32)-net in base 32 for the first 32 dimensions and Latin hypercube sampling for the rest. This will integrate the one dimensional part of the function and much of the m dimensional structure (superposition sense) of the first 32 dimensions, with the rest of the structure being integrated at the Monte Carlo rate. But (0, 32)-sequences can be stopped early or extended as necessary, whereas Latin hypercube sampling requires a prespecified number N of runs. As a compromise we ran a scrambled (0, 32)-sequence for the first 32 dimensions and took repeated independent Latin hypercube samples of size 1024 for the last 328 dimensions. Such a simulation can be conveniently stopped at any multiple of 1024 runs.
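A rough sketch of such a hybrid point set (Python with SciPy's `scipy.stats.qmc`; a scrambled Sobol' sequence stands in for the scrambled (0, 32)-sequence in base 32, which SciPy does not provide):

```python
# Hybrid sampling in the spirit of RQR: a continuing randomized quasi-random
# sequence on the 32 leading dimensions, with a fresh independent Latin
# hypercube sample of size 1024 on the remaining 328 dimensions per block.
import numpy as np
from scipy.stats import qmc

def hybrid_points(n_blocks, block=1024, d_lead=32, d_total=360, seed=0):
    sob = qmc.Sobol(d=d_lead, scramble=True, seed=seed)  # one continuing sequence
    parts = []
    for b in range(n_blocks):
        lead = sob.random(block)                         # next block of the sequence
        rest = qmc.LatinHypercube(d=d_total - d_lead,
                                  seed=seed + 1 + b).random(block)
        parts.append(np.hstack([lead, rest]))
    return np.vstack(parts)

x = hybrid_points(4)   # 4096 runs, stoppable at any multiple of 1024
```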
The results are shown in Figure 4, labeled as RQR-BB for randomized quasi-random with Brownian bridge. All pairs of two variables among the first 32 variables start to become balanced at sample size $N = 32^2 = 1024$ and, similarly, all triples of variables among the first 32 variables start to become balanced at sample size $N = 32^3 = 32768$. This leads to results which are similar to the antithetic Sobol' with Brownian bridge results. However, the convergence rate for the scrambled net in the Brownian bridge representation appears to be larger, leading to greater accuracy at large N.
[Figure 3: Error versus N (log base 10) for the Nearly Linear Problem, in the original representation. Curves: MC, QR-anti, LHS, MC-anti, QR; vertical axis: log relative error.]
We were able to achieve still better results than those shown here by randomizing the quasi-random points in the Brownian bridge ordering. These randomizations are due to Cranley and Patterson [2], and Tuffin [24] appears to be the first to realize their utility on nets.

6.2 Nonlinear Example

The value PV for this example was calculated to be approximately 130.712365. The certainty of this answer can be estimated by considering a six standard deviation range, which was determined to be (130.712348, 130.712382). The error curves in Figures 5 and 6 are based on using this value as the exact solution.
[Figure 4: Error versus N (log base 10) for the Nearly Linear Problem, in the Brownian bridge representation. Curves: MC, LHS-BB, QR-BB, QR-anti,BB, RQR-BB; vertical axis: log relative error.]
The mean length of a mortgage in this case is 76.5 months and the median length is 58 months. The variance of PV in this case is 18.54 and the variance in the antithetic computation of this value is 1.127. Thus the function is about 97.7% antisymmetric. The variance under Latin hypercube sampling is 1.087, so that the function is only about 94.1% one dimensional. This may seem like a lot of one dimensional structure, but compared with the previous example, the proportion of higher dimensional structure is greatly increased.
As in the nearly linear example, antithetic sampling and Latin hypercube
sampling in combination do not work better than separately. For this inte-
grand as for that one, they each appear to remove the same source of error.
In fact, for the nonlinear example, both give almost exactly the same errors.
Figure 5 shows the error reference lines from Monte Carlo and antithetic Monte Carlo sampling. Both Latin hypercube sampling and the randomized (0, 360)-sequence in base 361, as well as their antithetic counterparts, give roughly the same accuracy as the antithetic Monte Carlo sampling. For simplicity, these error curves have therefore not been included in this graph. Superimposed are errors and lines for Sobol', and Sobol' with antithetic sampling. In this case the Sobol' sequence is seen to catch up with the antithetic random sampling, but little is gained by combining antithetic sampling with the quasi-random sequence. The Sobol' sequence outperforms Latin hypercube sampling on this problem, probably because the one dimensional parts of it are no longer so dominant.
Figure 6 shows the errors from the Brownian bridge representation of the
integrand. Reference lines are shown for Monte Carlo, antithetic sampling
and Latin hypercube sampling. In this case also, Latin hypercube sampling
does a bit better after the Brownian bridge transformation, suggesting that
the function has become somewhat more one dimensional. As before, com-
bining Latin hypercube sampling with antithetic sampling does not do much
good.
The Sobol' sequences perform especially well on the nonlinear function, in the BB representation with antithetic sampling. In terms of equation (3.4) this may be due to good equidistribution among the leading Sobol' dimensions matched with their greater importance to the integrand.
[Figure 5: Error versus N (log base 10) for the Nonlinear Problem, in the original representation. Curves: MC, MC-anti, QR, QR-anti; vertical axis: log relative error.]
[Figure 6: Error versus N (log base 10) for the Nonlinear Problem, in the Brownian bridge representation. Curves: MC, MC-anti, LHS-BB, QR-BB, RQR-BB, QR-anti,BB; vertical axis: log relative error.]
7 Conclusions

Our main conclusions are the following:

- Quasi-Monte Carlo methods provide significant improvements in accuracy and computational speed for problems of small to moderate dimension.
- While the effectiveness of quasi-Monte Carlo can be lost on problems of high dimension, this does not happen if the integrand is of low effective dimension in the superposition sense.
- Some problems that have a large nominal dimension can be reformulated to have a moderate-sized effective dimension, so that the effectiveness of quasi-Monte Carlo is increased.
- The Brownian bridge representation reduces the effective dimension for problems like the mortgage-backed security problem described here.

Instead of straightforward use of high-dimensional quasi-random sequences, our recommendations are:

- First analyze the problem, mathematically or numerically, to determine the most important input dimensions.
- Where possible, reformulate the problem to concentrate the variation in fewer dimensions.
- When a small number of dominant dimensions can be identified or induced, apply quasi-random or randomized quasi-random (when sample based error estimates are desired) sequences to those dimensions.
- For the remaining dimensions use pseudo-random or Latin hypercube sampling.
- Consider applying classical variance reduction techniques, such as antithetic sampling, control variates, stratification and importance sampling, in conjunction with the above.
We believe that high dimensional integration problems can range in difficulty from completely intractable to quite simple. In some cases it is possible to turn the former into the latter by carefully engineering the integrand. It is too early to say whether such manageable integrands are rare or dominant in financial applications. However, the results here indicate that, for the valuation of securities which depend on a single stochastic factor modeled as a Gaussian process with nonstochastic drift and volatility, the Brownian bridge representation may be extremely effective in reducing the dimension of the simulation.
References

[1] R.E. Caflisch and W. Morokoff. Quasi-Monte Carlo computation of a finance problem. In K.T. Fang and F.J. Hickernell, editors, Workshop on Quasi-Monte Carlo Methods and Their Applications, pages 15-30, and UCLA CAM Report 96-16, 1996.

[2] R. Cranley and T.N.L. Patterson. Randomization of number theoretic methods for multiple integration. SIAM Journal on Numerical Analysis, 13:904-914, 1976.

[3] F. J. Hickernell. The mean square discrepancy of randomized nets. Technical Report MATH-112, Department of Mathematics, Hong Kong Baptist University, 1996.

[4] F. J. Hickernell. Quadrature error bounds and figures of merit for quasi-random points. Technical Report MATH-111, Department of Mathematics, Hong Kong Baptist University, 1996.

[5] M. D. McKay, R. J. Beckman, and W. J. Conover. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21(2):239-245, 1979.

[6] W. Morokoff and R.E. Caflisch. A quasi-Monte Carlo approach to particle simulation of the heat equation. SIAM Journal on Numerical Analysis, 30:1558-1573, 1993.

[7] W. Morokoff and R.E. Caflisch. Quasi-random sequences and their discrepancies. SIAM J. Sci. Stat. Computing, 15:1251-1279, 1994.

[8] W. Morokoff and R.E. Caflisch. Quasi-Monte Carlo integration. J. Comp. Phys., 122:218-230, 1995.

[9] B. Moskowitz and R.E. Caflisch. Smoothness and dimension reduction in quasi-Monte Carlo methods. J. Math. Comp. Modeling, 23:37-54, 1996.

[10] H. Niederreiter. Random Number Generation and Quasi-Monte Carlo Methods. SIAM, Philadelphia, 1992.

[11] S. Ninomiya and S. Tezuka. Toward real-time pricing of complex financial derivatives. Appl. Math. Finance, 3:1-20, 1996.

[12] A. B. Owen. Orthogonal arrays for computer experiments, integration and visualization. Statistica Sinica, 2:439-452, 1992.

[13] A. B. Owen. Lattice sampling revisited: Monte Carlo variance of means over randomized orthogonal arrays. The Annals of Statistics, 22:930-945, 1994.

[14] A. B. Owen. Randomly permuted (t, m, s)-nets and (t, s)-sequences. In Harald Niederreiter and Peter Jau-Shyong Shiue, editors, Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, pages 299-317, New York, 1995. Springer-Verlag.

[15] A. B. Owen. Monte Carlo variance of scrambled equidistribution quadrature. SIAM Journal on Numerical Analysis, 1997. In press.

[16] A. B. Owen. Scrambled net variance for integrals of smooth functions. Annals of Statistics, 25, 1997. In press.

[17] S.H. Paskov. New methodologies for valuing derivatives. In S. Pliska and M. Dempster, editors, Mathematics of Derivative Securities. Isaac Newton Inst., Cambridge U. Press, 1996.

[18] S.H. Paskov and J.F. Traub. Faster valuation of financial derivatives. J. Portfolio Manag., pages 113-120, 1995.

[19] J. Spanier. Quasi-Monte Carlo methods for particle transport problems. In Harald Niederreiter and Peter Jau-Shyong Shiue, editors, Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, pages 121-148, New York, 1995. Springer-Verlag.

[20] M. Stein. Large sample properties of simulations using Latin hypercube sampling. Technometrics, 29(2):143-151, 1987.

[21] D. F. Swayne, D. Cook, and A. Buja. XGobi: Interactive dynamic graphics in the X window system with a link to S. In ASA Proceedings of Statistical Graphics Section, pages 1-8, 1991.

[22] A. Takemura. Tensor analysis of ANOVA decomposition. Journal of the American Statistical Association, 78:894-900, 1983.

[23] B. Tang. Orthogonal array-based Latin hypercubes. Journal of the American Statistical Association, 88:1392-1397, 1993.

[24] B. Tuffin. On the use of low discrepancy sequences in Monte Carlo methods. Technical Report 1060, I.R.I.S.A., Rennes, France, 1996.