Lecture 12
Integration using Monte Carlo
►
So far we have focused on computing
\[ \theta = \int_0^1 \psi(x)\, dx . \]
►
Monte Carlo integration can be especially useful
for estimating high-dimensional integrals.
►
Suppose we want to estimate
\[ \theta = \int_0^1 \int_0^1 \psi(x, y)\, dx\, dy . \]
►
Monte Carlo is easy because we just need to sample uniformly on [0, 1) × [0, 1),
\[ \hat{\theta} = \frac{1}{N} \sum_{i=1}^{N} \psi(X_i, Y_i) . \]
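For instance, a minimal MATLAB sketch (the integrand below is an illustrative choice, not one from the lecture):

% Plain Monte Carlo estimate of a 2-d integral over [0,1) x [0,1).
psi = @(x,y) exp(-x.*y);       % illustrative integrand
N = 1e5;                       % number of samples
X = rand(N,1);                 % uniform draws on [0,1)
Y = rand(N,1);
theta_hat = mean(psi(X,Y));    % (1/N) * sum_i psi(X_i, Y_i)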
Quasi Monte Carlo
►
We already saw that standard MC methods have a convergence rate of O(1/√N).
►
To accelerate convergence, one can apply so-called quasi Monte Carlo (QMC) methods, also known as low-discrepancy methods.
►
These methods seek to increase accuracy by generating points
that are too evenly distributed to be random.
►
Due to their deterministic nature, standard statistical error analysis does not apply to QMC methods.
►
In this lecture we focus on d-dimensional integrals,
\[ \theta = E\,\psi(U) , \qquad U \sim \text{uniform}[0, 1)^d . \]
►
The congruential point set below is generated by the recurrence
\[ X_{i+1} = \operatorname{mod}(aX_i, m) , \qquad U_{i+1} = X_{i+1}/m . \]
Figure: Point sets in [0, 1)²: congruential, grid, rank-1 lattice, Hammersley, Latin hypercube, and stratified random.
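As a sketch of the congruential recurrence above (the multiplier and modulus below are classic illustrative choices, an assumption rather than the lecture's parameters):

% Linear congruential generator: X_{i+1} = mod(a*X_i, m), U_{i+1} = X_{i+1}/m.
m = 2^31 - 1;                  % modulus (assumed choice)
a = 7^5;                       % multiplier (assumed choice)
n = 1000;
X = zeros(n,1);
X(1) = 12345;                  % arbitrary nonzero seed
for i = 1:n-1
    X(i+1) = mod(a*X(i), m);
end
U = X/m;                       % uniforms in [0,1)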
►
Once we have our QMC points in [0, 1)^d, we only need to plug them into our random-variable generation routines in place of pseudo-random uniforms.
►
For example, we can use an integration lattice in d = 2 to
estimate a function of a jointly-normal pair.
►
For example,
\[ E \cos(\|X\|) \approx \frac{1}{N} \sum_{i=1}^{N} \cos\left( \sqrt{X_i^2 + Y_i^2} \right) , \]
phi = (1+sqrt(5))/2;           % golden ratio for a Fibonacci lattice
n = 1000;                      % number of lattice points
v1 = [1/(n+1); phi];           % generating vector
U_qmc = mod(v1*(1:n), 1);      % 2 x n lattice points in [0,1)^2
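The lattice uniforms can then be pushed through a Box-Muller transform to get the jointly normal pairs and form the estimate above (a sketch; the lecture's exact transform may differ):

% Map the 2 x n lattice points to standard normal pairs via Box-Muller,
% then average cos(||X||) over the lattice.
U1 = U_qmc(1,:);               % first coordinates i/(n+1), strictly inside (0,1)
U2 = U_qmc(2,:);
X = sqrt(-2*log(U1)).*cos(2*pi*U2);
Y = sqrt(-2*log(U1)).*sin(2*pi*U2);
theta_hat = mean(cos(sqrt(X.^2 + Y.^2)));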
►
Let us assume that d is the dimension of the problem, and we want to estimate the integral of ψ over [0, 1)^d.
►
Now we aim to fill the cube uniformly using
some deterministic rule.
►
There are a couple of definitions of discrepancy.
►
We define the (star) discrepancy of a point set {u¹, . . . , uⁿ} ⊂ [0, 1)^d as
\[ D(u^1, u^2, \ldots, u^n) = \sup_{x \in [0,1)^d} \left| \frac{1}{n} \#\{ i : u^i \in [0, x) \} - \prod_{l=1}^{d} x_l \right| ; \]
a low-discrepancy set is one for which D(u^1, u^2, . . . , u^n) is O(log(n)^d / n).
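For intuition, the discrepancy can be approximated numerically in d = 2 by scanning anchored boxes (a brute-force sketch, assuming the point set is stored as an n-by-2 matrix U; the grid search only approximates the supremum):

% Approximate the star discrepancy of an n-by-2 point set U by comparing
% the empirical measure of boxes [0,a) x [0,b) with their volume a*b.
U = rand(500, 2);              % replace with the point set of interest
g = linspace(0, 1, 200);       % grid of box corners
D = 0;
for a = g
    for b = g
        frac = mean(U(:,1) < a & U(:,2) < b);
        D = max(D, abs(frac - a*b));
    end
end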
►
Integration error when using a low-discrepancy set is quantified with the Koksma-Hlawka inequality,
\[ \left| \frac{1}{n} \sum_{i=1}^{n} \psi(u^i) - \int_{[0,1)^d} \psi(x)\, dx \right| \le V(\psi)\, D(u^1, \ldots, u^n) , \]
where V(ψ) is the (Hardy-Krause) variation of ψ.
►
Randomized digital nets offer a way to take advantage of
low-discrepancy sets and avoid some of the pitfalls in
basic QMC.
►
The Matousek-Owen scramble is commonly used with
the Sobol set.
►
Reverse-Radix scrambling is used for the Halton set.
Example: Scrambled Sobol Set to Generate
Brownian Motion
%% QMC Sim Brownian Motion
T = 3/12;                               % horizon: three months
dt = 1/365;                             % daily steps
N = round(T/dt);
Rho = -.7;                              % correlation parameter (Heston model)
dW = zeros(N, 2);
Sbl = sobolset(2, 'Skip', 1e3, 'Leap', 1e2);
Sbl = scramble(Sbl, 'MatousekAffineOwen');
p = net(Sbl, N);                        % N x 2 scrambled Sobol points
U1 = p(:,1);
U2 = p(:,2);
% Box-Muller transform of the QMC uniforms into Gaussian increments
dW(:,1) = sqrt(-2*log(U1)).*cos(2*pi*U2)*sqrt(dt);
dW(:,2) = sqrt(-2*log(U1)).*sin(2*pi*U2)*sqrt(dt);
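Note that Rho is not applied in the snippet above; one natural continuation (an assumption about its intended use) is to correlate the two increment streams before building the Heston paths:

% Correlate the second increment stream with the first so that
% corr(dW1, dW2) = Rho, then accumulate the Brownian paths.
dW(:,2) = Rho*dW(:,1) + sqrt(1 - Rho^2)*dW(:,2);
W = cumsum(dW);                % N x 2 correlated Brownian paths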
Figure: Using QMC and RQMC to estimate Heston call price, and then comparing implied vols.
Copulas
►
Copulas are the joint CDFs of random vectors with uniform[0, 1) marginals.
►
Consider X ∈ R^d, with X^l denoting the lth element.
►
Let F_l be the (marginal) CDF of X^l.
►
We have X^l = F_l^{-1}(U^l), where U^l is a uniform[0, 1) random variable and is the lth element of U ∈ [0, 1)^d.
►
The CDF of U is the copula C.
►
Gaussian copula:
\[ C(u_1, \ldots, u_d) = \frac{1}{\sqrt{(2\pi)^d |\rho|}} \int_{-\infty}^{\Phi^{-1}(u_1)} \cdots \int_{-\infty}^{\Phi^{-1}(u_d)} e^{-\frac{1}{2} x^* \rho^{-1} x}\, dx , \]
where ρ is the correlation matrix.
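To sample from the Gaussian copula, one can transform correlated normals by the standard normal CDF; a minimal sketch in d = 2 (the correlation value is illustrative):

% Draw n samples from a 2-d Gaussian copula with correlation r.
n = 1e4;
r = 0.5;                       % illustrative correlation
rho = [1 r; r 1];
L = chol(rho, 'lower');        % Cholesky factor of the correlation matrix
Z = (L*randn(2,n))';           % correlated standard normals, n x 2
U = normcdf(Z);                % marginals are uniform[0,1); the CDF of U is C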
►
We can sample from an Archimedean copula with generator ϕ using the Marshall-Olkin algorithm:
1. Draw V ∼ LS⁻¹_ϕ.
2. Draw Ũ ∼ uniform[0, 1)^d.
3. Return U, where U = ϕ(−log(Ũ)/V) (applied elementwise).
►
Here LS_ϕ(v) = ∫₀^∞ ϕ(t)e^{−vt} dt is the Laplace-Stieltjes transform of ϕ, so V ∼ LS⁻¹_ϕ means V is drawn from the distribution whose transform is ϕ.
►
For example, LS⁻¹_ϕ(v) ∝ v^{1/ν−1} e^{−v}, i.e., V ∼ Gamma(1/ν, 1).
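A minimal sketch of these steps for the Gamma(1/ν, 1) case, assuming the Clayton generator ϕ(t) = (1 + t)^{−1/ν} (an assumed choice consistent with this V):

% Marshall-Olkin sampler with V ~ Gamma(1/nu,1) (Clayton copula, assumed).
nu = 2;                        % illustrative copula parameter
d = 3;
n = 1e4;
phi = @(t) (1 + t).^(-1/nu);   % Archimedean generator (assumed Clayton)
V = gamrnd(1/nu, 1, n, 1);     % step 1: V ~ LS^{-1}_phi
Utilde = rand(n, d);           % step 2: independent uniforms
U = phi(-log(Utilde)./V);      % step 3: elementwise transform; rows ~ copula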
Figure: Histograms of MC versus QMC estimates.
►
QMC is good for some problems in multiple dimensions.
►
Implementation is more complicated than standard MC.
►
Issues such as selection of a lattice/net, parameters for it,
and the absence of statistical analysis of error (i.e., no CLT)
make it less straightforward.
►
In 1 and 2 dimensions there are quadrature rules that are better (e.g., Gauss-Lobatto quadrature).
►
The Koksma-Hlawka bound and possible O(log(n)^d / n) error are appealing, but these are only theoretical results; in practice it takes some know-how to see reduced error.