IRModellingLecture Part 05
Sebastian Schlenkrich
WS, 2019/20
Part V
Outline
Bermudan Swaptions
Let’s have another look at the cancellation option
What does such a Bermudan call right mean?
[Figure: cash flow timelines illustrating the Bermudan call right. The first timeline shows the original swap, Libor cash flows L_1, ..., L_m on the floating schedule T̃_0, ..., T̃_m versus fixed payments K on the schedule T_0, ..., T_n, with exercise dates T_E^1, T_E^2, T_E^3 marked. The second timeline shows the offsetting swap effectively entered upon exercise, starting at the chosen exercise date and running to T_n and T̃_m, respectively.]
What is a Bermudan swaption?
[Figure: cash flow timeline of the underlying Vanilla swap (fixed leg K versus Libor leg up to L_m) which can be entered at the exercise dates T_E^1, T_E^2, T_E^3 and runs until T_n and T̃_m.]
Bermudan swaption
A Bermudan swaption is an option to enter into a Vanilla swap with fixed rate K and final maturity T_n at one of several exercise dates T_E^1, T_E^2, ..., T_E^k̄. If there is only one exercise date (i.e. k̄ = 1) then the Bermudan swaption equals a European swaption.
A Bermudan swaption can be priced via backward
induction
[Figure: backward induction for a Bermudan swaption. At each exercise date T_E^k the option holder compares the continuation value (keeping the option alive) with the exercise payoff (entering the remaining swap, fixed leg K versus Libor leg up to L_m).]
A Bermudan swaption can be priced via backward
induction - let’s add some notation
[Figure: backward induction with notation. At the exercise dates T_E^1, T_E^2, T_E^3 we have exercise payoffs U_1, U_2, U_3, continuation values H_1, H_2, H_3 = 0 and option values V_k = max{U_k, H_k}. Today's value is H_0 = B(t) · E[ V_1 / B(T_E^1) | F_t ].]
First we specify the future payoff cash flows
◮ Choose a numeraire B(t) and corresponding cond. expectations Et [·] = E[· | Ft ].
◮ Underlying payoff U_k if the option is exercised at T_E^k:

  U_k = B(T_E^k) · E_{T_E^k}[ Σ_{T_i ≥ T_E^k} X_i(T_i) / B(T_i) ]

      = Σ_{T_i ≥ T_E^k} K · τ_i · P(T_E^k, T_i) − Σ_{T̃_j ≥ T_E^k} L^δ(T_E^k, T̃_{j−1}, T̃_{j−1} + δ) · τ̃_j · P(T_E^k, T̃_j)
        (future fixed leg minus future float leg)

      = Σ_{T_i ≥ T_E^k} K · τ_i · P(T_E^k, T_i) − P(T_E^k, T̃_{j_k}) − Σ_{T̃_j ≥ T_E^k} P(T_E^k, T̃_{j−1}) · [ D(T̃_{j−1}, T̃_j) − 1 ] + P(T_E^k, T̃_m).
Then we specify the continuation value and optimal
exercise
◮ The continuation value H_k(t) (T_E^k ≤ t ≤ T_E^{k+1}) represents the time-t value of the remaining option if not exercised.
◮ The option becomes worthless if not exercised at the last exercise date T_E^k̄. Thus the last continuation value is H_k̄(T_E^k̄) = 0.
◮ Recall that the Bermudan option gives the right but not the obligation to enter into the underlying at exercise.
◮ A rational agent will choose the maximum of payoff and continuation value at exercise, i.e.

  V_k = max{ U_k, H_k(T_E^k) }.

◮ V_k represents the Bermudan option value at exercise T_E^k. Thus we must also have for the continuation value

  H_{k−1}(T_E^k) = V_k.
We summarize the Bermudan pricing algorithm
Denote by H_k(t) the option's continuation value for T_E^k ≤ t ≤ T_E^{k+1} and set H_k̄(T_E^k̄) = 0. Also set T_E^0 = t (i.e. the pricing time today). Then for k = k̄ − 1, ..., 0

  H_k(T_E^k) = B(T_E^k) · E_{T_E^k}[ H_k(T_E^{k+1}) / B(T_E^{k+1}) ]
             = B(T_E^k) · E_{T_E^k}[ max{ U_{k+1}, H_{k+1}(T_E^{k+1}) } / B(T_E^{k+1}) ],

and H_0(t) = H_0(T_E^0) is today's Bermudan option price.
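The recursion translates directly into a roll-back loop. Below is a minimal structural sketch in Python (not lecture code); the conditional expectation operator cond_exp is a placeholder that a concrete numerical method (density integration, PDE or Monte Carlo regression, see below) has to provide.

```python
import numpy as np

def bermudan_roll_back(U, B, cond_exp):
    """Backward induction H_k = B_k * E_k[ max{U_{k+1}, H_{k+1}} / B_{k+1} ].

    U[k] : exercise payoff at T_E^{k+1}, k = 0, ..., kbar-1 (arrays over model states)
    B[k] : numeraire at T_E^k,          k = 0, ..., kbar    (B[0] = numeraire today)
    cond_exp(k, Y) : conditional expectation at T_E^k of the T_E^{k+1}-measurable Y

    Returns H_0, the Bermudan price seen from today (T_E^0 = t).
    """
    kbar = len(U)
    H = np.zeros_like(np.asarray(U[-1], dtype=float))  # H_kbar(T_E^kbar) = 0
    for k in range(kbar - 1, -1, -1):                  # k = kbar-1, ..., 0
        V_next = np.maximum(U[k], H)                   # V_{k+1} = max{U_{k+1}, H_{k+1}}
        H = B[k] * cond_exp(k, V_next / B[k + 1])      # H_k(T_E^k)
    return H

# Degenerate one-state example with a deterministic 2% short rate; conditional
# expectations collapse to the identity and the Bermudan price is simply the best
# discounted exercise value, here 1.2 * exp(-0.04).
times = [1.0, 2.0, 3.0]
B = [np.array([1.0])] + [np.array([np.exp(0.02 * T)]) for T in times]
U = [np.array([0.5]), np.array([1.2]), np.array([0.8])]
print(bermudan_roll_back(U, B, cond_exp=lambda k, Y: Y))  # ~1.153
```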
Some more comments regarding Bermudan pricing ...
◮ Recursion for Bermudan pricing can be formally derived via theory of optimal
stopping and Hamilton-Jacobi-Bellman (HJB) equation.
◮ For more details, see Sec. 18.2.2 in Andersen/Piterbarg (2010).
◮ For a single exercise date k̄ = 1 we get

  H_0(t) = B(t) · E_t[ max{U_1, 0} / B(T_E^1) ].

  This is the general pricing formula for a European swaption (if U_1 represents a Vanilla swap).
◮ In principle, the recursion H_k(T_E^k) = B(T_E^k) · E_{T_E^k}[ max{U_{k+1}, H_{k+1}(T_E^{k+1})} / B(T_E^{k+1}) ] holds for any payoffs U_k. However, the computation

  U_k = B(T_E^k) · E_{T_E^k}[ Σ_{T_i ≥ T_E^k} X_i(T_i) / B(T_i) ]

  might pose additional challenges if the cash flows X_i(T_i) are more complex.
How do we price a Bermudan in practice?
We can apply general option pricing methods to roll-back the Bermudan payoff.
Note that Uk , Vk and Hk depend on underlying state
variable x (TEk )
[Figure (as before): exercise payoffs U_1, U_2, U_3, continuation values H_1, H_2, H_3 = 0 and option values V_k = max{U_k, H_k} at the exercise dates T_E^k; all these quantities are functions of the state variable x(T_E^k). Today's value is H_0 = B(t) · E[ V_1 / B(T_E^1) | F_t ].]
Typically we need to discretise variables Uk , Vk and Hk on
a grid of underlying state variables
Key idea using the conditional density function in the
Hull-White model
Recall that

  V(T_0) = B(T_0) · E[ V(T_1) / B(T_1) | F_{T_0} ].

The state variable x = x(T_1) is normally distributed with known mean and variance.
Hull-White model results yield density parameters of the
state variable x (T1 )
  V(T_0) = P(x(T_0); T_0, T_1) · ∫_{−∞}^{+∞} V(x; T_1) · p_{μ,σ²}(x) · dx,

where x(T_1) has mean μ and variance σ² given by the Hull-White model results. Thus

  p_{μ,σ²}(x) = 1/√(2πσ²) · exp( −(x − μ)² / (2σ²) )

and

  V(T_0) = P(x(T_0); T_0, T_1) · ∫_{−∞}^{+∞} V(x; T_1)/√(2πσ²) · exp( −(x − μ)² / (2σ²) ) dx.
Integral against normal density needs to be computed
numerically by quadrature methods
  V(T_0) = P(x(T_0); T_0, T_1) · ∫_{−∞}^{+∞} V(x; T_1)/√(2πσ²) · exp( −(x − μ)² / (2σ²) ) dx.

◮ We can apply general purpose quadrature rules to the function

  f(x) = V(x; T_1)/√(2πσ²) · exp( −(x − μ)² / (2σ²) ).

◮ The integration domain is truncated to a grid [x_0, ..., x_N] with x_0 = −λ · σ̂ and x_N = λ · σ̂.
◮ Note that E^{T_1}[x(T_1)] = 0, thus we don't apply a shift to the x-grid.
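As an illustration (not lecture code), a small sketch of such a general purpose quadrature in Python; the payoff function and the parameters μ, σ are hypothetical placeholders, and the discount factor P(x(T_0); T_0, T_1) would still multiply the result.

```python
import numpy as np
from scipy.integrate import simpson, trapezoid
from scipy.stats import norm

def V_T1(x):                       # hypothetical option value at T1 as function of the state
    return np.maximum(x - 0.01, 0.0)

mu, sigma = 0.0, 0.02              # mean and std. dev. of x(T1) (assumed model output)
lam, N = 5.0, 101                  # half-width in std. devs. and number of grid points

x = np.linspace(mu - lam * sigma, mu + lam * sigma, N)
f = V_T1(x) * norm.pdf(x, loc=mu, scale=sigma)   # integrand f(x)

print(trapezoid(f, x))             # trapezoidal rule on the truncated grid
print(simpson(f, x=x))             # Simpson rule, higher order for smooth integrands
```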
There are some details that need to be considered - Take
care of the break-even point
Can we exploit the structure of the integrand?
  V(T_0) = P(x(T_0); T_0, T_1) · ∫_{−∞}^{+∞} V(x; T_1)/√(2πσ²) · exp( −(x − μ)² / (2σ²) ) dx.
Gauss–Hermite quadrature is an efficient integration
method for smooth integrands
Variable transformation allows application of
Gauss–Hermite quadrature to Hull-White model integration
We get

  ∫_{−∞}^{+∞} V(x; T_1)/√(2πσ²) · exp( −(x − μ)² / (2σ²) ) dx
    = 1/√π · ∫_{−∞}^{+∞} V(√2·σ·x + μ; T_1) · e^{−x²} dx
    ≈ 1/√π · Σ_{k=1}^{d} w_k · V(√2·σ·x_k + μ; T_1).
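A minimal sketch of this rule in Python (not lecture code), using NumPy's Gauss-Hermite nodes and weights; the payoff V and the parameters are placeholders.

```python
import numpy as np

def gauss_hermite_expectation(V, mu, sigma, d=7):
    """Approximate E[V(x)] for x ~ N(mu, sigma^2) with d-point Gauss-Hermite quadrature.

    The transformation x = sqrt(2)*sigma*u + mu maps the Gauss-Hermite weight
    exp(-u^2) onto the normal density, as in the formula above.
    """
    u, w = np.polynomial.hermite.hermgauss(d)   # nodes x_k and weights w_k for weight exp(-u^2)
    return np.sum(w * V(np.sqrt(2.0) * sigma * u + mu)) / np.sqrt(np.pi)

# sanity check: E[x^2] = sigma^2 for mu = 0
print(gauss_hermite_expectation(lambda x: x**2, mu=0.0, sigma=0.02))  # ~4.0e-04
```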
If we apply cubic spline interpolation anyway then we can
also integrate exactly
Approximate V(·, T_1) via a cubic spline on the grid [x_0, ..., x_N] as

  V(x, T_1) ≈ C(x) = Σ_{i=0}^{N−1} 1_{x_i ≤ x < x_{i+1}} · Σ_{k=0}^{d} c_k · (x − x_i)^k.

Then

  ∫_{−∞}^{+∞} V(x; T_1) · p_{μ,σ²}(x) · dx ≈ Σ_{i=0}^{N−1} ∫_{x_i}^{x_{i+1}} Σ_{k=0}^{d} c_k · (x − x_i)^k · p_{μ,σ²}(x) · dx
                                            = Σ_{i=0}^{N−1} Σ_{k=0}^{d} c_k · ∫_{x_i}^{x_{i+1}} (x − x_i)^k · p_{μ,σ²}(x) · dx.
We transform variables to make integration easier
First we apply the variable transformation x̄ = (x − μ)/σ. This yields p_{μ,σ²}(x) = p_{0,1}(x̄)/σ and, for the elementary integrals I_{i,k} = ∫_{x_i}^{x_{i+1}} (x − x_i)^k · p_{μ,σ²}(x) · dx,

  I_{i,k} = ∫_{x̄_i}^{x̄_{i+1}} (σ·x̄ + μ − x_i)^k · p_{0,1}(x̄) · dx̄
          = σ^k · ∫_{x̄_i}^{x̄_{i+1}} (x̄ − x̄_i)^k · 1/√(2π) · exp(−x̄²/2) dx̄,

where 1/√(2π) · exp(−x̄²/2) is the standard normal density.
We use cubic splines (d = 3) to keep formulas reasonably
simple I
It turns out that

  F_0(x̄) = ∫ Φ′(x̄) dx̄ = Φ(x̄),
  F_1(x̄) = ∫ x̄ · Φ′(x̄) dx̄ = −Φ′(x̄),
  F_2(x̄) = ∫ x̄² · Φ′(x̄) dx̄ = Φ(x̄) − x̄ · Φ′(x̄),
  F_3(x̄) = ∫ x̄³ · Φ′(x̄) dx̄ = −(x̄² + 2) · Φ′(x̄).

Thus

  I_{i,0} = ∫_{x̄_i}^{x̄_{i+1}} Φ′(x̄) · dx̄ = F_0(x̄_{i+1}) − F_0(x̄_i)
We use cubic splines (d = 3) to keep formulas reasonably
simple II
and for I_{i,1}

  I_{i,1} = σ · ∫_{x̄_i}^{x̄_{i+1}} (x̄ − x̄_i) · Φ′(x̄) · dx̄
          = σ · ∫_{x̄_i}^{x̄_{i+1}} x̄ · Φ′(x̄) · dx̄ − σ · x̄_i · I_{i,0}
          = σ · [F_1(x̄_{i+1}) − F_1(x̄_i)] − σ · x̄_i · I_{i,0}.

Analogously, for I_{i,3}

  I_{i,3} = σ³ · ∫_{x̄_i}^{x̄_{i+1}} (x̄ − x̄_i)³ · Φ′(x̄) · dx̄
          = σ³ · ∫_{x̄_i}^{x̄_{i+1}} [ x̄³ − 3·x̄_i·x̄² + 3·x̄_i²·x̄ − x̄_i³ ] · Φ′(x̄) · dx̄
Let’s summarise the formulas...
We get

  V(T_0) = P(x(T_0); T_0, T_1) · ∫_{−∞}^{+∞} V(x; T_1) · p_{μ,σ²}(x) · dx
         ≈ P(x(T_0); T_0, T_1) · Σ_{i=0}^{N−1} Σ_{k=0}^{3} c_k · I_{i,k}

with the integrals I_{i,k} computed from the antiderivatives F_0, ..., F_3 above.
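A compact sketch of this spline-times-density integration in Python (not lecture code). It uses SciPy's CubicSpline, whose not-a-knot boundary conditions are an assumption (the lecture's spline setup may differ), and the closed-form integrals I_{i,k} derived above; the I_{i,2} term follows the same binomial pattern as I_{i,1} and I_{i,3}.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import norm

def integrate_spline_vs_normal(x, v, mu, sigma):
    """int C(x) p_{mu,sigma^2}(x) dx for the cubic spline C through (x_i, v_i)."""
    cs = CubicSpline(x, v)                       # spline coefficients per interval
    xb = (x - mu) / sigma                        # transformed grid points x_bar_i
    phi, Phi = norm.pdf(xb), norm.cdf(xb)
    # antiderivatives F_k of x_bar^k * phi(x_bar), cf. the formulas above
    F = [Phi, -phi, Phi - xb * phi, -(xb**2 + 2.0) * phi]
    dF = [Fk[1:] - Fk[:-1] for Fk in F]          # F_k(x_bar_{i+1}) - F_k(x_bar_i)
    xi = xb[:-1]                                 # left interval end points x_bar_i
    I = [dF[0],                                                          # I_{i,0}
         sigma    * (dF[1] - xi * dF[0]),                                # I_{i,1}
         sigma**2 * (dF[2] - 2*xi*dF[1] + xi**2 * dF[0]),                # I_{i,2}
         sigma**3 * (dF[3] - 3*xi*dF[2] + 3*xi**2*dF[1] - xi**3*dF[0])]  # I_{i,3}
    # SciPy stores segment i as sum_m c[m,i] * (x - x_i)^(3-m)
    return sum(np.sum(cs.c[3 - k, :] * I[k]) for k in range(4))

# sanity check: V(x) = x^2 integrates to ~sigma^2 (up to the truncated tails)
mu, sigma = 0.0, 0.02
x = np.linspace(mu - 5*sigma, mu + 5*sigma, 11)
print(integrate_spline_vs_normal(x, x**2, mu, sigma))  # ~4.0e-04
```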
Integrating a cubic spline against a normal density function occurs in various contexts of pricing methods
◮ The method already yields good accuracy for a small number of grid points.
◮ For a larger number of grid points the accuracy benefit compared to e.g. Simpson integration is not that large.
PDE methods for finance and pricing are extensively
studied in the literature
We can adapt the Black-Scholes equation to our
Hull-White model setting
◮ Recall that the state variable x(t) follows the risk-neutral dynamics

  dx(t) = [y(t) − a · x(t)] dt + σ(t) · dW(t)

  with drift μ(t, x(t)) = y(t) − a · x(t).
◮ Consider an option with price V = V(t, x(t)), option expiry time T and payoff V(T, x(T)) = g(x(T)).
Apply Ito’s Lemma to the discounted option price
We get

  d[ V(t, x(t)) / B(t) ] = dV(t, x(t)) / B(t) + V(t) · d[ 1/B(t) ].

With d[ B(t)^{−1} ] = −r(t) · B(t)^{−1} · dt it follows that

  d[ V(t, x(t)) / B(t) ] = 1/B(t) · [ dV(t, x(t)) − r(t) · V(t) · dt ].
Combining results yields dynamics of discounted option
price
  d[ V(t, x(t)) / B(t) ] = 1/B(t) · [ V_t + V_x · μ(t, x(t)) + ½ · V_xx · σ(t)² − r(t) · V ] dt + V_x · σ(t)/B(t) · dW(t)
                         = μ_V(t, x(t)) dt + σ_V(t, x(t)) dW(t).

The martingale property of V(t, x(t))/B(t) requires that the drift vanishes, that is

  μ_V(t, x(t)) = 0,   i.e.   V_t + V_x · μ(t, x(t)) + ½ · V_xx · σ(t)² − r(t) · V = 0.

Substituting μ(t, x(t)) = y(t) − a · x(t) and r(t) = f(0, t) + x(t) yields the pricing PDE.
We get the parabolic pricing PDE with terminal condition
Theorem (Derivative pricing PDE in the Hull-White model)
Consider our Hull-White model setup and a derivative security with price process V(t, x(t)) that pays at time T the payoff V(T, x(T)) = g(x(T)). Further assume V(T, x(T)) has finite variance and is attainable.
Then for t < T the option price

  V(t, x(t)) = B(t) · E^Q[ V(T, x(T)) / B(T) | F_t ]

solves the terminal value problem

  V_t + [y(t) − a · x] · V_x + ½ · σ(t)² · V_xx = [x + f(0, t)] · V,   V(T, x) = g(x).

Proof.
Follows from the derivation above.
How does this help for our Bermudan option pricing
problem?
Pricing PDE is typically solved via finite difference scheme
and time integration
How do we discretise state space?
Choose a grid [x_0, ..., x_N] with boundaries x_0 = −λ · σ̂ and x_N = λ · σ̂.
Differential operators in state-dimension are discretised via
central finite differences
For now we leave time t continuous and use the notation V(·, x). For inner grid points x_i with i = 1, ..., N − 1 we use the central difference approximations

  V_x(·, x_i) ≈ [V(·, x_{i+1}) − V(·, x_{i−1})] / (2·h_x),   V_xx(·, x_i) ≈ [V(·, x_{i+1}) − 2·V(·, x_i) + V(·, x_{i−1})] / h_x²

with grid spacing h_x = x_{i+1} − x_i.
Some initial comments regarding choice of λ0,N
◮ However, for bond options the choice Vxx (·, x0 ) = Vxx (·, xN ) = 0 might be
a poor approximation.
Now consider PDE for each grid point individually
Define the vector-valued function v(t) = [v_0(t), ..., v_N(t)]^⊤ with v_i(t) ≈ V(t, x_i). For each inner grid point the PDE becomes

  v_i′(t) + [y(t) − a·x_i] · (v_{i+1}(t) − v_{i−1}(t)) / (2·h_x) + σ(t)²/2 · (v_{i+1}(t) − 2·v_i(t) + v_{i−1}(t)) / h_x² = [x_i + f(0, t)] · v_i(t),
  v_i(T) = g(x_i).

The parabolic PDE is transformed into a linear system of ODEs with terminal condition.
It is more convenient to write system of ODEs in
matrix-vector notation
We get

  v′(t) = M(t) · v(t)

where M(t) is the tridiagonal matrix with diagonal entries c_0, ..., c_N, lower off-diagonal entries l_1, ..., l_N and upper off-diagonal entries u_0, ..., u_{N−1}. For the boundary rows we get

  c_0 = 2·[y(t) − a·x_0 + λ_0·σ(t)²/2] / [(2 + λ_0·h_x)·h_x] + x_0 + f(0, t),
  c_N = −2·[y(t) − a·x_N + λ_N·σ(t)²/2] / [(2 − λ_N·h_x)·h_x] + x_N + f(0, t),
  u_0 = −2·[y(t) − a·x_0 + λ_0·σ(t)²/2] / [(2 + λ_0·h_x)·h_x],
  l_N = 2·[y(t) − a·x_N + λ_N·σ(t)²/2] / [(2 − λ_N·h_x)·h_x].

The remaining entries l_i, c_i, u_i for i = 1, ..., N − 1 follow directly from the central difference discretisation above.
Linear system of ODEs can be solved by standard methods
We have

  v′(t) = f(t, v(t)) = M(t) · v(t).

We demonstrate the time discretisation based on the θ-method. Consider an equidistant time grid t = t_0, ..., t_M = T with step size h_t and the approximation

  [v(t_{j+1}) − v(t_j)] / h_t ≈ f( t_{j+1} − θ·h_t, (1 − θ)·v(t_{j+1}) + θ·v(t_j) )

for θ ∈ [0, 1].
◮ In general the approximation yields a method of order O(h_t).
◮ For θ = 1/2 the approximation yields a method of order O(h_t²).
For our linear ODE we set v^j = v(t_j), M_θ = M(t_{j+1} − θ·h_t) and get the scheme

  [v^{j+1} − v^j] / h_t = M_θ · [ (1 − θ)·v^{j+1} + θ·v^j ].
We get a recursion for the θ-method
Rearranging terms yields

  [I + h_t·θ·M_θ] · v^j = [I − h_t·(1 − θ)·M_θ] · v^{j+1}.

The terminal condition is

  v^M = [g(x_0), ..., g(x_N)]^⊤.
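A minimal sketch of this roll-back in Python (not lecture code). It treats M(t) as a dense matrix for readability; a production implementation would exploit the tridiagonal structure, e.g. with a banded solver.

```python
import numpy as np

def theta_scheme_roll_back(M, v_T, t_grid, theta=0.5):
    """Solve v'(t) = M(t) v(t) backwards from the terminal condition v_T at t_M = T
    using [I + h*theta*M_theta] v^j = [I - h*(1-theta)*M_theta] v^{j+1}."""
    v = np.asarray(v_T, dtype=float)
    I = np.eye(len(v))
    for j in range(len(t_grid) - 2, -1, -1):           # j = M-1, ..., 0
        h = t_grid[j + 1] - t_grid[j]
        M_theta = M(t_grid[j + 1] - theta * h)
        v = np.linalg.solve(I + h * theta * M_theta,
                            (I - h * (1.0 - theta) * M_theta) @ v)
    return v                                           # approximation of v(t_0)

# sanity check with a trivial M(t) = r*I: roll-back reproduces exp(-r*(T-t_0))
r, T = 0.03, 5.0
t = np.linspace(0.0, T, 101)
print(theta_scheme_roll_back(lambda s: r * np.eye(3), np.ones(3), t))  # ~0.8607
```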
Let’s have another look at the boundary condition ...
We look at the example of a zero coupon bond option with payoff

  V(x, T) = [ P(x, T, T′) − K ]^+.

This yields

  ∂/∂x Ṽ(x, t) = −G(t, T) · [ Ṽ(x, t) + K ]

and

  ∂²/∂x² Ṽ(x, t) = −G(t, T) · ∂/∂x Ṽ(x, t),   i.e. λ = −G(t, T).

If the option is far out of the money then Ṽ(x, t) = 0 and

  ∂²/∂x² Ṽ(x, t) = ∂/∂x Ṽ(x, t) = 0.
We adapt that approximation to our general option pricing
problem
◮ In principle, for a coupon bond underlying we could estimate λ = λ(t) via the option intrinsic value Ṽ(x, t) and

  λ(t) = [ ∂²/∂x² Ṽ(x, t) ] / [ ∂/∂x Ṽ(x, t) ]   for ∂/∂x Ṽ(x, t) ≠ 0,

  otherwise λ(t) = 0.
◮ In the discretised roll-back the boundary parameters λ_{0,N} can be estimated analogously from the previous time step, e.g.

  λ_{0,N} = [ (v^{j+1}_{2,N} − 2·v^{j+1}_{1,N−1} + v^{j+1}_{0,N−2}) / h_x² ] / [ (v^{j+1}_{2,N} − v^{j+1}_{0,N−2}) / (2·h_x) ]

  for (v^{j+1}_{2,N} − v^{j+1}_{0,N−2}) / (2·h_x) ≠ 0, otherwise λ_{0,N} = 0.
It turns out that accuracy of one-sided first order
derivative approximation is of order O(hx2 ) I
Lemma
Assume V = V (x ) is twice continuously differentiable. Moreover, consider grid points
x−1 , x0 , x1 with equal spacing hx = x1 − x0 = x0 − x−1 . If there is a λ0 ∈ R such that
  V″(x_0) = λ_0 · V′(x_0)

then

  V′(x_0) = 2·[V(x_1) − V(x_0)] / [(2 + λ_0·h_x)·h_x] + O(h_x²).
Proof:
Denote vi = V (xi ). We have from standard Taylor approximation
It turns out that accuracy of one-sided first order
derivative approximation is of order O(hx2 ) III
We summarise the PDE pricing method
1. Discretise the state space x on a grid [x_0, ..., x_N] and specify the time step size h_t and θ ∈ [0, 1].
2. Determine the terminal condition v^{j+1} = max{U_{j+1}, H_{j+1}} for the current valuation step.
Monte Carlo methods are widely applied in various finance
applications
Monte Carlo (MC) pricing is based on the Strong Law of
Large Numbers
Keep in mind that the sample mean is still a random variable governed by the central limit theorem
Theorem (Central Limit Theorem)
Let Y_1, Y_2, ... be a sequence of i.i.d. random variables with finite expectation μ < ∞ and standard deviation σ < ∞. Denote the sample mean Ȳ_n = (1/n) · Σ_{i=1}^{n} Y_i. Then

  (Ȳ_n − μ) / (σ/√n) →_d N(0, 1).

Moreover, for the variance estimator s_n² = 1/(n−1) · Σ_{i=1}^{n} (Y_i − Ȳ_n)² we also have

  (Ȳ_n − μ) / (s_n/√n) →_d N(0, 1).
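In practice this justifies the usual Monte Carlo error estimate s_n/√n. A tiny illustration in Python (not lecture code, with an arbitrary lognormal sample):

```python
import numpy as np

rng = np.random.default_rng(42)
Y = np.exp(rng.standard_normal(100_000))        # i.i.d. samples, true mean exp(0.5) ~ 1.6487
Y_bar = Y.mean()                                # sample mean
s_n = Y.std(ddof=1)                             # variance estimator with 1/(n-1)
std_err = s_n / np.sqrt(len(Y))                 # CLT-based standard error
print(Y_bar, std_err)                           # e.g. 1.65 +/- 0.007
print(Y_bar - 1.96 * std_err, Y_bar + 1.96 * std_err)  # approximate 95% confidence interval
```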
We need to simulate our state variables on the relevant
observation dates
Consider the general dynamics of a process given as SDE

  dX(t) = μ(t, X(t)) · dt + σ(t, X(t)) · dW(t).

There are several standard methods to solve the above SDE numerically. We will briefly discuss the Euler method and the Milstein method.
Euler method for SDEs is similar to Explicit Euler method
for ODEs
◮ Specify a grid of simulation times t = t_0, t_1, ..., t_M = T.
◮ Calculate the sequence of state variables

  X_{k+1} = X_k + μ(t_k, X_k) · Δ_k + σ(t_k, X_k) · Z_k · √Δ_k

  with Δ_k = t_{k+1} − t_k and Z_k ∼ N(0, 1).
Milstein method refines the simulation of the diffusion term
◮ Again, specify a grid of simulation times t = t_0, t_1, ..., t_M = T.
◮ Calculate the sequence of state variables

  X_{k+1} = X_k + μ(t_k, X_k) · Δ_k + σ(t_k, X_k) · Z_k · √Δ_k + ½ · σ(t_k, X_k) · ∂σ(t_k, X_k)/∂x · (Z_k² − 1) · Δ_k.

◮ Drift μ(t_k, X_k) and volatility σ(t_k, X_k) are evaluated at the current time t_k and state X_k.
◮ The scheme requires the derivative of the volatility ∂σ(t_k, X_k)/∂x w.r.t. the state variable.
◮ The increment of Brownian motion W(t_{k+1}) − W(t_k) is normally distributed, i.e. W(t_{k+1}) − W(t_k) = Z_k · √(t_{k+1} − t_k) with Z_k ∼ N(0, 1).
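A short sketch of both schemes in Python (not lecture code). For the Hull-White state dynamics the volatility σ(t) does not depend on x, so the Milstein correction vanishes and the two schemes coincide; the parameter values are purely illustrative.

```python
import numpy as np

def euler_step(x, t, dt, mu, sigma, z):
    """One Euler-Maruyama step for dX = mu(t,X) dt + sigma(t,X) dW."""
    return x + mu(t, x) * dt + sigma(t, x) * np.sqrt(dt) * z

def milstein_step(x, t, dt, mu, sigma, dsigma_dx, z):
    """One Milstein step; adds the 0.5*sigma*sigma_x*(Z^2-1)*dt correction term."""
    return (x + mu(t, x) * dt + sigma(t, x) * np.sqrt(dt) * z
            + 0.5 * sigma(t, x) * dsigma_dx(t, x) * (z**2 - 1.0) * dt)

# illustrative Hull-White-type drift and (state-independent) volatility
a, sig = 0.03, 0.01
y = lambda t: sig**2 * (1.0 - np.exp(-2.0 * a * t)) / (2.0 * a)
mu_hw = lambda t, x: y(t) - a * x
sigma_hw = lambda t, x: sig

rng = np.random.default_rng(1)
x, t_grid = 0.0, np.linspace(0.0, 5.0, 21)
for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
    x = euler_step(x, t0, t1 - t0, mu_hw, sigma_hw, rng.standard_normal())
print(x)   # one simulated value x(5.0)
```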
How can we measure convergence of the methods?
◮ We distinguish strong order of convergence and weak order of
convergence.
◮ Consider a discrete SDE solution {X_k^h}_{k=0}^{M} with X_k^h ≈ X(t + k·h) and h = (T − t)/M.
Some comments regarding weak order of convergence
The error estimate

  | E[ f(X_M^h) ] − E[ f(X(T)) ] | ≤ C · h^β

requires considerable assumptions regarding the smoothness of μ(·), σ(·) and the test functions f(·).
The choice of pricing measure is crucial for numeraire
simulation
Consider the risk-neutral measure, then

  N(T) = B(T) = exp( ∫_0^T r(s) ds ) = exp( ∫_0^T [f(0, s) + x(s)] ds ) = P(0, T)^{−1} · exp( ∫_0^T x(s) ds ).

This requires simulation or approximation of ∫_0^T x(s) ds.
Suppose x(t_k) is simulated on a time grid {t_k}_{k=0}^{M}; then we approximate the integral via the trapezoidal rule

  ∫_0^T x(s) ds ≈ Σ_{k=1}^{M} (x(t_{k−1}) + x(t_k))/2 · (t_k − t_{k−1}).
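As a small illustration (not lecture code), the trapezoidal approximation of the numeraire along a simulated path:

```python
import numpy as np

def bank_account(t, x, P_0_T):
    """B(T) = P(0,T)^{-1} * exp(int_0^T x(s) ds) with the integral approximated
    by the trapezoidal rule on the simulation grid t with states x."""
    integral_x = np.sum(0.5 * (x[1:] + x[:-1]) * np.diff(t))
    return np.exp(integral_x) / P_0_T
```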
Alternatively, we can simulate in T -forward measure for a
fixed future time T
Another commonly used numeraire for simulation is the
discretely compounded bank account
◮ Consider a grid of simulation times t = t0 , t1 , . . . , tM = T .
◮ Assume we start with 1 EUR at t = 0, i.e. N(0) = 1.
◮ At each tk we take numeraire N(tk ) and buy zero coupon bond maturing
at tk+1 . That is
  N(t) = P(t, t_{k+1}) · N(t_k) / P(t_k, t_{k+1})   for t ∈ [t_k, t_{k+1}].

We get

  d[ B̄(t) / P(t, t_{k+1}) ] = Π_{t_k < t} [ 1/P(t_k, t_{k+1}) ] · d[ P(t, t_{k+1}) / P(t, t_{k+1}) ] = 0   for t ∈ [t_k, t_{k+1}].
Do we really need to solve the Hull-White SDE
numerically?
Recall the dynamics in the T-forward measure

  dx(t) = [ y(t) − σ(t)²·G(t, T) − a·x(t) ] · dt + σ(t) · dW^T(t).

That gives

  x(T) = e^{−a(T−t)} · [ x(t) + ∫_t^T e^{a(u−t)} · ( [y(u) − σ(u)²·G(u, T)] du + σ(u) dW^T(u) ) ].
Expectation calculation via µ = ET [x (T ) | Ft ] requires
careful choice of numeraire
Consider grid of simulation times t = t0 , t1 , . . . , tM = T .
We simulate

  x(t_{k+1}) = μ_k + σ_k · Z_k

with conditional mean μ_k and standard deviation σ_k given by the Hull-White model results. Note that
◮ the grid point t_{k+1} must coincide with the forward measure horizon, i.e. we use E^{t_{k+1}}[·] for each individual step k → k+1, and
◮ the numeraire must be the discretely compounded bank account B̄(t) with

  B̄(t_{k+1}) = B̄(t_k) / P(x(t_k), t_k, t_{k+1}).

The recursions for x(t_{k+1}) and B̄(t_{k+1}) fully specify the path simulation for pricing.
Some comments regarding Hull-White MC simulation ...
We illustrate MC pricing by means of a coupon bond
option example
Consider a coupon bond option expiring at T_E with coupons C_i paid at T_i (i = 1, ..., u, incl. strike and notional).
◮ Set t_0 = 0, t_1 = T_E/2 and t_2 = T_E (two steps for illustrative purposes).
◮ Compute 2n independent N(0, 1) pseudo random numbers Z^1, ..., Z^{2n}.
◮ For all paths j = 1, ..., n calculate:
  ◮ μ_0^j, σ_0 and B̄^j(t_1); note μ_0^j and B̄^j(t_1) are equal for all paths j since x(t_0) = 0,
  ◮ x_1^j = μ_0^j + σ_0 · Z^j,
  ◮ μ_1^j, σ_1 and B̄^j(t_2); note that now μ_1^j and B̄^j(t_2) depend on x_1^j,
  ◮ x_2^j = μ_1^j + σ_1 · Z^{n+j},
  ◮ the payoff V^j(t_2) = [ Σ_{i=1}^{u} C_i · P(x_2^j, t_2, T_i) ]^+ at t_2 = T_E.
◮ Calculate the option price (note B̄(0) = 1)

  V(0) = B̄(0) · (1/n) · Σ_{j=1}^{n} V^j(t_2) / B̄^j(t_2).
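A simplified Monte Carlo sketch in Python (not lecture code). Instead of the exact two-step simulation in the forward measure described above, it uses a small-step Euler discretisation under the risk-neutral measure and the trapezoidal bank account; constant mean reversion and volatility, a flat initial curve and the usual Hull-White zero bond reconstruction formula P(x, t, T) from earlier parts of the lecture are illustrative assumptions.

```python
import numpy as np

a, sigma, f0 = 0.05, 0.01, 0.02                          # assumed model parameters
P0 = lambda T: np.exp(-f0 * T)                           # flat initial discount curve
G  = lambda t, T: (1.0 - np.exp(-a * (T - t))) / a
y  = lambda t: sigma**2 * (1.0 - np.exp(-2.0 * a * t)) / (2.0 * a)
P  = lambda x, t, T: P0(T) / P0(t) * np.exp(-G(t, T) * x - 0.5 * G(t, T)**2 * y(t))

TE = 2.0                                                  # option expiry
Ti = np.array([2.0, 3.0, 4.0, 5.0])                       # cash flow dates incl. strike date
Ci = np.array([-0.95, 0.03, 0.03, 1.03])                  # coupons incl. strike and notional
n_paths, n_steps = 50_000, 20

rng = np.random.default_rng(7)
t = np.linspace(0.0, TE, n_steps + 1)
x = np.zeros(n_paths)
int_x = np.zeros(n_paths)                                 # trapezoidal int_0^TE x(s) ds
for k in range(n_steps):
    dt = t[k + 1] - t[k]
    x_new = x + (y(t[k]) - a * x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    int_x += 0.5 * (x + x_new) * dt
    x = x_new

B_TE = np.exp(int_x) / P0(TE)                             # numeraire B(TE) per path
payoff = np.maximum(sum(C * P(x, TE, T) for C, T in zip(Ci, Ti)), 0.0)
V0 = np.mean(payoff / B_TE)                               # option price, B(0) = 1
std_err = np.std(payoff / B_TE, ddof=1) / np.sqrt(n_paths)
print(V0, std_err)
```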
Let’s return to our Bermudan option pricing problem
[Figure (as before): exercise payoffs U_1, U_2, U_3, continuation values H_1, H_2, H_3 = 0 and option values V_k = max{U_k, H_k} at the exercise dates T_E^k, with today's value H_0 = B(t) · E[ V_1 / B(T_E^1) | F_t ].]
In this setting we need to calculate future conditional
expectations
◮ Assume we already simulated paths for state variables xk , underlyings Uk
and numeraire Bk for all relevant dates tk .
◮ We need continuation values Hk defined recursively via Hk̄ = 0 and
  H_k = B_k · E_k[ max{U_{k+1}, H_{k+1}} / B_{k+1} ].
A key idea of American Monte Carlo is approximating
conditional expectation via regression
The conditional expectation

  H_k = E_k[ B_k/B_{k+1} · max{U_{k+1}, H_{k+1}} ]

is approximated by a regression operator

  R_k = R_k[Y]

applied to the discounted future value Y = B_k/B_{k+1} · max{U_{k+1}, H_{k+1}}.
What do we mean by regression operator?
Denote by ζ(ω) = [ζ_1(ω), ..., ζ_q(ω)]^⊤ a set of basis functions (a vector of random variables). The regression operator takes the form

  R[Y](ω) = ζ(ω)^⊤ · β,

where the coefficients β are determined from a linear least squares fit of Y against the basis functions. The linear least squares system can be solved e.g. via QR factorisation or SVD.
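A small sketch of such a regression operator in Python (not lecture code), with monomials of a hypothetical explanatory state variable as basis functions and NumPy's least squares solver (SVD-based):

```python
import numpy as np

def regression_operator(x_samples, Y_samples, degree=2):
    """Fit beta such that R[Y] = zeta(x)^T beta with zeta(x) = [1, x, ..., x^degree]."""
    Z = np.vander(x_samples, degree + 1, increasing=True)   # basis function samples
    beta, *_ = np.linalg.lstsq(Z, Y_samples, rcond=None)    # linear least squares
    return lambda x: np.vander(x, degree + 1, increasing=True) @ beta

# usage: approximate the conditional expectation E[Y | x] from noisy samples
rng = np.random.default_rng(3)
x = rng.standard_normal(10_000)
Y = x**2 + rng.standard_normal(10_000)          # E[Y | x] = x^2
R = regression_operator(x, Y)
print(R(np.array([0.0, 1.0, 2.0])))             # approximately [0, 1, 4]
```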
A basic pricing scheme is obtained by replacing conditional
expectation of future payoff by regression operator
Approximate H̃_k ≈ H_k via H̃_k̄ = H_k̄ = 0 and

  H̃_k = R_k[ B_k/B_{k+1} · max{U_{k+1}, H̃_{k+1}} ]   for k = k̄ − 1, ..., 1.
What are typical basis functions?
Regression of the full underlying can be a bit rough - we
may restrict regression to exercise decision only
For a given path consider

  H_k = B_k/B_{k+1} · max{U_{k+1}, H_{k+1}}
      = B_k/B_{k+1} · [ 1_{U_{k+1} > H_{k+1}} · U_{k+1} + (1 − 1_{U_{k+1} > H_{k+1}}) · H_{k+1} ].

Replacing the exercise indicator 1_{U_{k+1} > H_{k+1}} by the regression-based decision 1_{R_k(ω_j) > 0} yields the pathwise values

  H_k^j = B_k^j/B_{k+1}^j · [ 1_{R_k(ω_j) > 0} · U_{k+1}^j + (1 − 1_{R_k(ω_j) > 0}) · H_{k+1}^j ].

4. Calculate discounted payoffs for the paths j = n̂+1, ..., n not used for the regression

  H_0^j = B_0^j/B_1^j · max{ U_1^j, H_1^j }.

5. Derive the average V(0) = 1/(n − n̂) · Σ_{j=n̂+1}^{n} H_0^j.
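A compact sketch of this exercise-decision regression in Python (not lecture code). The basis functions, the split into regression and pricing paths, and the use of the regressed decision also at the first exercise date are illustrative choices; the lecture's variant may differ in detail.

```python
import numpy as np

def amc_lower_bound(U, B, xs, B0=1.0, n_reg=None, degree=2):
    """U[k], B[k], xs[k]: payoff, numeraire and state per path at the k-th exercise
    date (k = 0, ..., kbar-1); B0 is the numeraire today. Regression is used only
    for the exercise decision; pricing averages over the paths not used for it."""
    kbar, n = len(U), len(U[0])
    n_reg = n_reg or n // 2                       # paths reserved for the regression
    basis = lambda x: np.vander(x, degree + 1, increasing=True)
    H = np.zeros(n)                               # H_kbar = 0 along all paths
    for k in range(kbar - 1, -1, -1):
        Y = U[k] - H                              # exercise minus continuation, pathwise
        beta, *_ = np.linalg.lstsq(basis(xs[k][:n_reg]), Y[:n_reg], rcond=None)
        exercise = basis(xs[k]) @ beta > 0.0      # regressed exercise decision
        V = np.where(exercise, U[k], H)           # pathwise value at this exercise date
        B_prev = B0 if k == 0 else B[k - 1]
        H = B_prev / B[k] * V                     # discount to the previous date / today
    return np.mean(H[n_reg:])                     # average over the pricing paths only
```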
Some comments regarding AMC for Bermudans in
Hull-White model
Contact
d-fine GmbH
Mobile: +49-162-263-1525
Mail: sebastian.schlenkrich@d-fine.de