
CHAPTER 4

Applications of Itô’s Formula

In this chapter, we discuss several basic theorems in stochastic analysis.


Their proofs are good examples of applications of Itô’s formula.

1. Lévy’s martingale characterization of Brownian motion


Recall that B is a Brownian motion with respect to a filtration F∗ if for all 0 ≤ s < t the increment B_t − B_s has the normal distribution N(0, t − s) and is independent of F_s. A consequence of this definition is that B is a continuous martingale with quadratic variation process ⟨B, B⟩_t = t. The following theorem shows that this property characterizes Brownian motion completely.
THEOREM 1.1. (Lévy's characterization of Brownian motion) Suppose that B is a continuous local martingale with respect to a filtration F∗ whose quadratic variation process is ⟨B, B⟩_t = t (or equivalently, the process B_t² − t, t ≥ 0, is also a local martingale). Then B is a Brownian motion with respect to F∗.
PROOF. We will establish the following: for any 0 ≤ s < t and any bounded F_s-measurable random variable Z,

(1.1)    E[Z exp{ia(B_t − B_s)}] = exp(−|a|²(t − s)/2) EZ.

Letting Z = 1, we see that B_t − B_s is distributed as N(0, t − s). Letting Z = e^{ibY} for Y ∈ F_s, we infer by the uniqueness of two-dimensional characteristic functions that B_t − B_s is independent of any Y ∈ F_s, which shows that B_t − B_s is independent of F_s. Therefore it is sufficient to show (1.1).
Denote the left side of (1.1) by F(t). We first use Itô's formula to obtain

    exp{ia(B_t − B_s)} = 1 + ia ∫_s^t exp{ia(B_u − B_s)} dB_u − (|a|²/2) ∫_s^t exp{ia(B_u − B_s)} du.
The stochastic integral is uniformly bounded because the other terms in
this equality are uniformly bounded. Therefore the stochastic integral is
not only a local martingale, but a martingale. Multiply both sides by Z and
take the expectation. Noting that Z ∈ Fs , we have
    F(t) = EZ − (|a|²/2) ∫_s^t F(u) du.

Solving this integral equation (equivalently, F′(t) = −(|a|²/2) F(t) with F(s) = EZ), we obtain

    F(t) = exp(−|a|²(t − s)/2) EZ.

This is what we wanted to prove.   □
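As a quick numerical illustration (not part of the proof), identity (1.1) with Z = 1 says that the characteristic function of the increment B_t − B_s is exp(−|a|²(t − s)/2). The sketch below checks this by Monte Carlo; the particular values of s, t, a are arbitrary choices.

```python
import cmath
import math
import random

random.seed(0)

# Monte Carlo check of identity (1.1) with Z = 1: for a Brownian increment
# B_t - B_s ~ N(0, t - s), E exp{ia(B_t - B_s)} = exp(-|a|^2 (t - s) / 2).
s, t, a = 0.5, 2.0, 1.3   # arbitrary sample values
n_paths = 200_000

acc = 0j
for _ in range(n_paths):
    increment = random.gauss(0.0, math.sqrt(t - s))   # B_t - B_s
    acc += cmath.exp(1j * a * increment)
estimate = acc / n_paths
exact = math.exp(-a * a * (t - s) / 2.0)

print(abs(estimate - exact))  # small Monte Carlo error
```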
A very useful corollary of Lévy's criterion is that every continuous local martingale M is a time change of Brownian motion. This can be seen as follows. We know that the quadratic variation process ⟨M, M⟩ is a continuous increasing process. Let us assume that, as t → ∞,

    ⟨M, M⟩_t → ∞

with probability 1. Let τ = {τ_t} be the right inverse of ⟨M, M⟩ defined by

    τ_t = inf{s ≥ 0 : ⟨M, M⟩_s > t}.

It is easy to show that t ↦ τ_t is a right-continuous, increasing process, that ⟨M, M⟩_{τ_t} = t and τ_{⟨M,M⟩_t} = t, and that τ_t is a stopping time for each fixed t. Consider the time-changed process B_t = M_{τ_t}. Let

    σ_n = inf{t : |M_t| ≥ n}.
Since M is a continuous local martingale, we have σ_n ↑ ∞ and each stopped process M^{σ_n} is a square integrable martingale. From ⟨M, M⟩_{τ_t} = t, we have by Fatou's lemma

    E[B_t²] ≤ liminf_{n→∞} E[M²_{σ_n ∧ τ_t}] = liminf_{n→∞} E[⟨M, M⟩_{σ_n ∧ τ_t}] ≤ E[⟨M, M⟩_{τ_t}] = t.

By the optional sampling theorem, B is a martingale.
We now show that B is continuous. There exists a sequence of partitions {∆_n} of the time set R_+ such that |∆_n| → 0 and such that, with probability 1,

    lim_{n→∞} ∑_j [M_{t^n_j ∧ t} − M_{t^n_{j−1} ∧ t}]² = ⟨M, M⟩_t   for all t ≥ 0.

Now for any fixed t, the jump

    (B_t − B_{t−})² = (M_{τ_t} − M_{τ_{t−}})²

is bounded by the quadratic variation of M over the time interval [τ_{t−δ}, τ_t] for any δ > 0. Hence

    |B_t − B_{t−}|² ≤ lim_{n→∞} ∑_{t^n_j ≥ τ_{t−δ}} [M_{t^n_j ∧ τ_t} − M_{t^n_{j−1} ∧ τ_t}]² = ⟨M, M⟩_{τ_t} − ⟨M, M⟩_{τ_{t−δ}} = δ.

Since δ is arbitrary, we see that Bt = Bt− . This shows that B is continuous


with probability 1.
So far we have proved that B is a continuous local martingale. We now compute the quadratic variation of B. The process M² − ⟨M, M⟩ is a continuous local martingale. By the optional sampling theorem, we see that

    M²_{τ_t} − ⟨M, M⟩_{τ_t} = B_t² − t

is a continuous local martingale. It follows that the quadratic variation process of B is just ⟨B, B⟩_t = t. Now we can use Lévy's characterization to
conclude that Bt = Mτt is a Brownian motion. Therefore we have shown
that every continuous local martingale can be transformed into a Brownian
motion by a time change.
From B_t = M_{τ_t} and τ_{⟨M,M⟩_t} = t we have M_t = B_{⟨M,M⟩_t}. In this sense we say that every continuous local martingale is a time change of a Brownian motion.
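The time-change construction can be illustrated numerically. In the sketch below we take the hypothetical integrand σ_s = 1 + s, so that M_t = ∫_0^t (1 + s) dB_s has ⟨M, M⟩_t = ∫_0^t (1 + s)² ds; sampling M when its quadratic-variation clock reaches level 1 should produce an N(0, 1) variable, as the corollary predicts.

```python
import math
import random

random.seed(1)

# Time-change demo for M_t = \int_0^t (1 + s) dB_s, a hypothetical example,
# with <M, M>_t = \int_0^t (1 + s)^2 ds.  Sampling M at tau_1, the first time
# the quadratic-variation clock reaches 1, should give an N(0, 1) variable.
dt = 1e-3
n_paths = 5_000
samples = []
for _ in range(n_paths):
    m, qv, s = 0.0, 0.0, 0.0
    while qv < 1.0:                      # run the clock <M, M> up to level 1
        sigma = 1.0 + s
        m += sigma * random.gauss(0.0, math.sqrt(dt))
        qv += sigma * sigma * dt
        s += dt
    samples.append(m)

mean = sum(samples) / n_paths
var = sum(x * x for x in samples) / n_paths
print(round(mean, 3), round(var, 3))  # close to 0 and 1
```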

2. Exponential martingale
We want to find an analog of the exponential function e^x. The defining property of the exponential function is the differential equation

    df(x)/dx = f(x),   f(0) = 1,

or equivalently

    f(x) = 1 + ∫_0^x f(t) dt.

So we define an exponential martingale E_t by the stochastic integral equation

    E_t = 1 + ∫_0^t E_s dM_s,
where M is a continuous local martingale. This equation can be solved explicitly. Instead of writing down the formula and verifying it, let us discover the formula. Since E_0 = 1, we can take the logarithm of E_t, at least for small time t. Let C_t = log E_t. By Itô's formula we have

    C_t = ∫_0^t E_s^{−1} dE_s − (1/2) ∫_0^t E_s^{−2} d⟨E, E⟩_s.
Since dE_s = E_s dM_s, we have d⟨E, E⟩_s = E_s² d⟨M, M⟩_s. Hence

    C_t = M_t − (1/2)⟨M, M⟩_t.

Therefore the formula for the exponential martingale is

    E_t = exp(M_t − (1/2)⟨M, M⟩_t).

Now it is easy to verify directly that this process satisfies the defining equation for E_t.
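A pathwise sanity check of this formula: solve dE = E dM by an Euler scheme and compare with the closed form exp(M_t − (1/2)⟨M, M⟩_t). The choice M = σB (so ⟨M, M⟩_t = σ²t) and the step size are arbitrary.

```python
import math
import random

random.seed(2)

# Pathwise check of the exponential martingale formula for M = sigma * B
# (so <M, M>_t = sigma^2 t, an arbitrary test case): solve dE = E dM by an
# Euler scheme and compare with exp(M_t - <M, M>_t / 2) at t = 1.
sigma, n_steps = 0.8, 10_000
dt = 1.0 / n_steps

m, e_euler = 0.0, 1.0
for _ in range(n_steps):
    dm = sigma * random.gauss(0.0, math.sqrt(dt))
    e_euler += e_euler * dm            # Euler step for dE = E dM
    m += dm
e_closed = math.exp(m - 0.5 * sigma * sigma * 1.0)

print(round(e_euler, 4), round(e_closed, 4))  # the two values nearly agree
```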
Every strictly positive continuous local martingale E can be written in the form of an exponential martingale:

    E_t = exp(M_t − (1/2)⟨M, M⟩_t),

where the local martingale M can be expressed in terms of E by

    M_t = ∫_0^t E_s^{−1} dE_s.

We can say that an exponential martingale is nothing but a positive continuous local martingale.

The exponential martingale is related to the iterated stochastic integrals defined by I_0(t) = 1 and

    I_n(t) = ∫_0^t I_{n−1}(s) dM_s.

By iterating the defining equation of the exponential martingale we have the expansion

    E_t = exp(M_t − (1/2)⟨M, M⟩_t) = ∑_{n=0}^∞ I_n(t).

This formula can be verified rigorously: it can be shown that the infinite series converges and that the remainder from the iteration tends to zero. To continue our discussion, let us introduce a parameter λ by replacing M with λM and obtain

(2.1)    exp(λM_t − (λ²/2)⟨M, M⟩_t) = ∑_{n=0}^∞ λ^n I_n(t).

If we set

    x = M_t / √(2⟨M, M⟩_t)   and   θ = λ √(⟨M, M⟩_t / 2),

then the left side of (2.1) becomes exp(2xθ − θ²). The coefficients of its Taylor expansion in θ are called Hermite polynomials:

    exp(2xθ − θ²) = ∑_{n=0}^∞ (H_n(x)/n!) θ^n.

It can be shown that

    H_n(x) = (−1)^n e^{x²} (d/dx)^n e^{−x²},

and that {H_n, n ≥ 0} is the orthogonal basis of L²(R, e^{−x²} dx/√π) obtained by the Gram-Schmidt procedure from the complete system {x^n, n ≥ 0}.
Now we can write

    exp(λM_t − (λ²/2)⟨M, M⟩_t) = ∑_{n=0}^∞ (λ^n/n!) (⟨M, M⟩_t/2)^{n/2} H_n(M_t / √(2⟨M, M⟩_t)).
It follows that the iterated stochastic integrals can be expressed in terms of M_t and ⟨M, M⟩_t via Hermite polynomials as follows:

    I_n(t) = (1/n!) (⟨M, M⟩_t/2)^{n/2} H_n(M_t / √(2⟨M, M⟩_t)).

Here are the first few iterated stochastic integrals:

    I_0(t) = 1,
    I_1(t) = M_t,
    I_2(t) = (1/2)(M_t² − ⟨M⟩_t),
    I_3(t) = (1/6)(M_t³ − 3⟨M⟩_t M_t).
Further discussion on this topic can be found in Hida [5].
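These low-order expressions can be checked against the Hermite-polynomial formula with a short stdlib-only sketch. It uses the standard recurrence H_{n+1}(x) = 2x H_n(x) − 2n H_{n−1}(x) for the (physicists') Hermite polynomials fixed by the generating function above; the sample values of M_t and ⟨M, M⟩_t are arbitrary.

```python
import math

# Check the formula I_n(t) = (1/n!) (<M,M>_t/2)^{n/2} H_n(M_t / sqrt(2 <M,M>_t))
# against the explicit low-order iterated integrals, using the standard
# recurrence H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x).
def hermite(n, x):
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def iterated_integral(n, m, q):
    """I_n(t) expressed through M_t = m and <M, M>_t = q."""
    return (q / 2.0) ** (n / 2.0) / math.factorial(n) * hermite(n, m / math.sqrt(2.0 * q))

m, q = 1.7, 0.9   # arbitrary sample values of M_t and <M, M>_t
print(abs(iterated_integral(2, m, q) - 0.5 * (m * m - q)))        # ~ 0
print(abs(iterated_integral(3, m, q) - (m ** 3 - 3.0 * q * m) / 6.0))  # ~ 0
```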

3. Uniformly integrable exponential martingales


In stochastic analysis, exponential martingales (positive martingales) often appear in the following context. Suppose that (Ω, F∗, P) is a filtered probability space. Let Q be another probability measure which is absolutely continuous with respect to P on the σ-algebra F_T. Then Q is also absolutely continuous with respect to P on F_t for all t ≤ T. Let E_t be the Radon-Nikodym derivative of Q with respect to P on F_t:

    E_t = dQ/dP |_{F_t}.
Then it is easy to verify that
Et = E { ET |Ft } .
This shows that { Et , t ≤ T } is a positive martingale. If it is continuous, then
it is an exponential martingale. Not only that, it is a uniformly integrable
martingale. Conversely, if { Et , t ≤ T } is a uniformly integrable exponential
martingale, then it defines a change of measure on the filtered probability
space (Ω, F∗ , P) up to time T.
It often happens that we know the local martingale M and wish that the exponential local martingale

    E_t = exp(M_t − (1/2)⟨M, M⟩_t)
defines a change of probability measures by dQ/dP = ET . For this it is
necessary that EET = 1. However, in general, we only know that { Et } is
a positive local martingale. It is therefore a supermartingale and EET ≤ 1.
Therefore the requirement EET = 1 is not automatic and has to be proved
by imposing further conditions on the local martingale M.
PROPOSITION 3.1. Let M be a continuous local martingale and

    E_t = exp(M_t − (1/2)⟨M, M⟩_t).

Then EE_T ≤ 1, and {E_t, 0 ≤ t ≤ T} is a uniformly integrable martingale if and only if EE_T = 1.

PROOF. We know that E is a local martingale, hence there is a sequence of stopping times τ_n ↑ ∞ such that E_{t ∧ τ_n} is a martingale, hence

    E E_{T ∧ τ_n} = E E_0 = 1.

Letting n → ∞ and using Fatou's lemma, we have EE_T ≤ 1. This inequality also follows from the fact that a nonnegative local martingale is always a supermartingale.

Suppose that EE_T = 1. Since E is a supermartingale, we have E_t ≥ E{E_T | F_t}. Taking expected values gives

    1 ≥ EE_t ≥ EE_T = 1.

This shows that equality must hold throughout, so E_t = E{E_T | F_t}, which means that {E_t, 0 ≤ t ≤ T} is a uniformly integrable martingale.   □
We need to impose conditions on the local martingale M to ensure that the exponential local martingale E is uniformly integrable. We have the following result due to Kazamaki.

THEOREM 3.2. Suppose that M is a martingale and E e^{M_T/2} is finite. Then

    {exp(M_t − (1/2)⟨M⟩_t), 0 ≤ t ≤ T}

is uniformly integrable.
PROOF. This is a very interesting proof. Let

    E(λ)_t = exp(λM_t − (λ²/2)⟨M⟩_t).

We need to show that E(1) is uniformly integrable, but first we take a slight step back and prove that E(λ) is uniformly integrable for all 0 < λ < 1. We achieve this by proving that there are r > 1 and C such that E[E(λ)_σ^r] ≤ C for all stopping times σ ≤ T. We have

    E(λ)_σ^r = exp[(λr − √(λ³r)) M_σ] · exp[√(λ³r) M_σ − (λ²r/2)⟨M⟩_σ].
2
Using Hölder's inequality with the exponents (1 − λ) + λ = 1, we see that E[E(λ)_σ^r] is bounded by

    {E exp[((λr − √(λ³r))/(1 − λ)) M_σ]}^{1−λ} · {E exp[√(λr) M_σ − (λr/2)⟨M⟩_σ]}^λ.

The second expectation on the right side does not exceed 1 (see PROPOSITION 3.1). For the first factor we claim that σ can be replaced by T and the coefficient of M_σ can be replaced by 1/2 if r > 1 is sufficiently close to 1. When r = 1 the coefficient is

    (λ − λ^{3/2})/(1 − λ) = λ/(1 + √λ) < 1/2

because λ < 1. Hence the coefficient is still less than 1/2 if r > 1 is sufficiently close to 1. On the other hand, we have assumed that M is a martingale, hence M_σ = E{M_T | F_σ}. By Jensen's inequality we have

    e^{M_σ/2} ≤ E{e^{M_T/2} | F_σ}.

It follows that

    E exp[((λr − √(λ³r))/(1 − λ)) M_σ] ≤ E e^{M_σ/2} ≤ E e^{M_T/2}.
We have therefore shown that for any λ < 1 there is an r > 1 such that

    E[E(λ)_σ^r] ≤ E e^{M_T/2}

for all stopping times σ ≤ T. This shows that E(λ) is a uniformly integrable martingale, which implies that E[E(λ)_T] = 1 for all λ < 1.
We now use the same trick again to show E[E(1)_T] = 1. We have

    E(λ)_T = exp(λ²[M_T − (1/2)⟨M⟩_T]) · exp((λ − λ²) M_T).

Using Hölder's inequality with the exponents λ² + (1 − λ²) = 1, we have

    1 = E[E(λ)_T] ≤ {E exp(M_T − (1/2)⟨M⟩_T)}^{λ²} · {E exp((λ/(1 + λ)) M_T)}^{1−λ²}.

Because λ/(1 + λ) ≤ 1/2, the second expectation on the right side is bounded in terms of E exp[M_T/2], which is finite. Letting λ ↑ 1, we obtain 1 ≤ E exp(M_T − (1/2)⟨M⟩_T); combined with the inequality EE_T ≤ 1 from Proposition 3.1, this gives

    E exp(M_T − (1/2)⟨M⟩_T) = 1.   □
The condition E exp[M_T/2] < ∞ is not easy to verify because we usually know the quadratic variation ⟨M⟩_T much better than M_T itself. The following weaker criterion can often be used directly.

COROLLARY 3.3. (Novikov's criterion) Suppose that M is a martingale. If E exp[⟨M⟩_T/2] is finite, then

    E exp(M_T − (1/2)⟨M⟩_T) = 1.
PROOF. We have

    exp((1/2)M_T) = exp((1/2)M_T − (1/4)⟨M⟩_T) · exp((1/4)⟨M⟩_T).

By the Cauchy-Schwarz inequality we have

    E exp((1/2)M_T) ≤ {E exp(M_T − (1/2)⟨M⟩_T)}^{1/2} {E exp((1/2)⟨M⟩_T)}^{1/2}.

The first factor on the right side does not exceed 1. Therefore

    E exp((1/2)M_T) ≤ {E exp((1/2)⟨M⟩_T)}^{1/2}.

Therefore Novikov's condition implies Kazamaki's condition.   □
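Novikov's criterion is trivially satisfied when M_T = ∫_0^T θ_s dB_s has a deterministic integrand θ, since ⟨M⟩_T is then a constant. The sketch below (with the arbitrary choice θ_s = cos s) checks the conclusion E exp(M_T − (1/2)⟨M⟩_T) = 1 by Monte Carlo.

```python
import math
import random

random.seed(3)

# Novikov's condition holds trivially for M_T = \int_0^T theta_s dB_s with a
# deterministic integrand (here theta_s = cos s, an arbitrary choice): <M>_T
# is a constant, so E exp(<M>_T / 2) is finite.  Check E exp(M_T - <M>_T/2) = 1.
T, n_steps = 1.0, 1_000
dt = T / n_steps
variance = sum(math.cos(k * dt) ** 2 * dt for k in range(n_steps))   # <M>_T

n_paths = 400_000
acc = 0.0
for _ in range(n_paths):
    m_T = random.gauss(0.0, math.sqrt(variance))   # M_T ~ N(0, <M>_T)
    acc += math.exp(m_T - variance / 2.0)
mean_exp = acc / n_paths
print(round(mean_exp, 3))  # close to 1
```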

4. Girsanov and Cameron-Martin-Maruyama theorems


In stochastic analysis, we often need to change the base probability measure from a given P to another measure Q. Suppose that B is a Brownian motion under P. In general it will no longer be a Brownian motion under Q. The Girsanov theorem describes the decomposition of B as the sum of a martingale (necessarily a Brownian motion) and a process of bounded variation under a class of changes of measure.
We assume that B is an F∗-Brownian motion and V a progressively measurable process such that

    exp(∫_0^t V_s dB_s − (1/2) ∫_0^t |V_s|² ds),   0 ≤ t ≤ T,

is a uniformly integrable martingale. This exponential martingale therefore defines a change of measure on F_T by

    dQ/dP = exp(∫_0^T V_s dB_s − (1/2) ∫_0^T |V_s|² ds).

This is the class of changes of measure we will consider.
THEOREM 4.1. Suppose that Q is the new measure described above. Consider the Brownian motion with a drift

(4.1)    X_t = B_t − ∫_0^t V_τ dτ,   0 ≤ t ≤ T.

Then X is a Brownian motion under the probability measure Q.
PROOF. Let {e_s} be the exponential martingale

    e_s = exp(∫_0^s ⟨V_τ, dB_τ⟩ − (1/2) ∫_0^s |V_τ|² dτ).

Then the density of Q with respect to P on F_s is just e_s. From this fact it is easy to verify that if Y is an adapted process, then

    E_Q{Y_t | F_s} = e_s^{−1} E_P{Y_t e_t | F_s}.

This means that Y is a (local) martingale under Q if and only if eY = {e_s Y_s, s ≥ 0} is a (local) martingale under P.
Now e is a martingale and des = es Vs dBs . On the other hand, using
Itô’s formula we have
d(es Xs ) = es dBs + es Xs Vs dBs .

This shows that eX is a local martingale under P, hence X is a local martingale under Q. Since Q and P are mutually absolutely continuous, it should be clear that the quadratic variation process of X under P and under Q is the same, i.e., ⟨X⟩_t = t. We can also verify this fact directly by applying Itô's formula to Z_s = e_s(X_s² − s). We have

    dZ_s = e_s d(X_s² − s) + (X_s² − s) de_s + 2X_s d⟨e, X⟩_s.

The second term on the right side is a local martingale. We have

    d(X_s² − s) = 2X_s dX_s + d⟨X, X⟩_s − ds = 2X_s dB_s − 2X_s V_s ds,

since ⟨X, X⟩_s = ⟨B, B⟩_s = s. On the other hand, from de_s = e_s V_s dB_s we have

    d⟨e, X⟩_s = e_s V_s ds.
It follows that

    dZ_s = 2e_s X_s dB_s + (X_s² − s) de_s,

which shows that Z is a local martingale. Now we have shown that both X_t and X_t² − t are local martingales under Q. By Lévy's criterion we conclude that X is a Brownian motion under Q.   □
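Girsanov's theorem can be checked numerically in the simplest special case of a constant drift V_s = µ: under Q, the process X_t = B_t − µt should have the moments of a Brownian motion. The sketch estimates E_Q[X_T] and E_Q[X_T²] by reweighting samples drawn under P with the density e_T; the values of µ and T are arbitrary.

```python
import math
import random

random.seed(4)

# Girsanov check for a constant drift V_s = mu: with dQ/dP = e_T =
# exp(mu B_T - mu^2 T / 2), the process X_t = B_t - mu t is a Brownian
# motion under Q.  Estimate E_Q[X_T] and E_Q[X_T^2] by reweighting
# samples drawn under P.
mu, T = 0.7, 1.0
n_paths = 400_000

m1 = m2 = 0.0
for _ in range(n_paths):
    b_T = random.gauss(0.0, math.sqrt(T))
    weight = math.exp(mu * b_T - mu * mu * T / 2.0)   # e_T = dQ/dP
    x_T = b_T - mu * T
    m1 += weight * x_T
    m2 += weight * x_T * x_T
mean_q = m1 / n_paths
second_moment_q = m2 / n_paths
print(round(mean_q, 3), round(second_moment_q, 3))  # close to 0 and T = 1
```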
The classical Cameron-Martin-Maruyama theorem is a special case of the Girsanov theorem, stated in a slightly different form. Let µ be the Wiener measure on the path space W(R). Let h ∈ W(R) and consider the shift ξ_h w = w + h in the path space. The shifted Wiener measure is µ_h = µ ∘ ξ_h^{−1}. If X is the coordinate process on W(R), then µ_h is just the law of X + h. We will prove the following dichotomy: either µ_h and µ are mutually absolutely continuous or they are mutually singular. In fact we have an explicit criterion for this dichotomy.
DEFINITION 4.2. A path (function) h ∈ W(R) is called a Cameron-Martin path (function) if it is absolutely continuous and its derivative is square integrable. The Cameron-Martin norm is defined by

    |h|²_H = ∫_0^1 |ḣ_s|² ds.

The space of Cameron-Martin paths is denoted by H. It is clear that H is a Hilbert space.
THEOREM 4.3. (Cameron-Martin-Maruyama theorem) Suppose that h ∈ H. Then the shifted Wiener measure µ_h and µ are mutually absolutely continuous and

    dµ_h/dµ (w) = exp(∫_0^1 ḣ_s dw_s − (1/2) ∫_0^1 |ḣ_s|² ds).
PROOF. Denote the exponential martingale by e_1(w). We need to show that for every nonnegative measurable function F on W(R),

    E^{µ_h}(F) = E^µ(F e_1).

Let X be the coordinate process on W(R). Then the left side is simply E^µ F(X + h). Introduce the measure ν by

    dν/dµ = exp(−∫_0^1 ḣ_s dw_s − (1/2) ∫_0^1 |ḣ_s|² ds).
We have

    dµ/dν = exp(∫_0^1 ḣ_s dw_s + (1/2) ∫_0^1 |ḣ_s|² ds)
          = exp(∫_0^1 ḣ_s d(w_s + h_s) − (1/2) ∫_0^1 |ḣ_s|² ds).

Hence we can write dµ/dν = e_1(X + h). It follows that

    E^{µ_h} F = E^µ[F(X + h)] = E^ν[F(X + h) (dµ/dν)] = E^ν[F(X + h) e_1(X + h)].
By Girsanov's theorem, X + h is a Brownian motion under ν. On the other hand, X is a Brownian motion under µ. Therefore on the right side of the above equality we can replace ν by µ and at the same time replace X + h by X, hence

    E^{µ_h} F = E^µ[F(X) e_1(X)] = E^µ(F e_1).   □
THEOREM 4.4. Let h ∈ W(R). The shifted Wiener measure µ_h is mutually absolutely continuous or mutually singular with respect to µ according as h ∈ H or h ∉ H.

PROOF. We need to show that if h ∉ H, then µ_h is singular with respect to µ.
First we need to convert the condition h ∉ H into a more convenient one. The derivative ḣ exists and is square integrable if and only if there is a constant C such that

    ∫_0^1 f_s dh_s ≤ C |f|_2

for all step functions f. Therefore it is conceivable that if h ∉ H, then for any C there is a step function f such that

    |f|²_2 = ∑_{i=1}^n |f_i|² (s_i − s_{i−1}) = 1

and

    ∫_0^1 f_s dh_s = ∑_{i=1}^n f_i (h_{s_i} − h_{s_{i−1}}) ≥ C.

This characterization of h ∉ H can indeed be verified rigorously.
Second, the convenient characterization that µh is singular with respect
to µ is the following: for any positive e, there is a set A such that µh ( A) ≥
1 − e and µ( A) ≤ e.

Consider the random variable

    Z(w) = ∫_0^1 f_s dw_s = ∑_{i=1}^n f_i (w_{s_i} − w_{s_{i−1}}).

It is a Gaussian random variable with mean zero and variance |f|²_2 = 1, so it has the standard Gaussian distribution N(0, 1). Let

    A = {Z ≥ C/2}.
We have µ(A) ≤ e for sufficiently large C. On the other hand,

    µ_h(A) = µ{w ∈ W(R) : Z(w + h) ≥ C/2}.
By the definition of Z(w) we have

    Z(w + h) = Z(w) + ∑_{i=1}^n f_i (h_{s_i} − h_{s_{i−1}}) ≥ Z(w) + C.

Therefore

    µ_h(A) ≥ µ{Z + C ≥ C/2} = µ{Z ≥ −C/2},

and µ_h(A) ≥ 1 − e for sufficiently large C. Thus we have shown that µ_h and µ are mutually singular.   □

5. Moment inequalities for martingales


Let M be a continuous martingale and

    M*_t = max_{0≤s≤t} |M_s|.

The moment E[(M*_t)^p] can be bounded both from above and from below by E[⟨M, M⟩_t^{p/2}].

THEOREM 5.1. Let M be a continuous local martingale. For any p > 0, there are positive constants c_p, C_p such that

    c_p E[⟨M, M⟩_t^{p/2}] ≤ E[(M*_t)^p] ≤ C_p E[⟨M, M⟩_t^{p/2}].

PROOF. The case p = 2 is obvious. We only prove the case p > 2; the case 0 < p < 2 is slightly more complicated, see Ikeda and Watanabe [6].

By the usual stopping time argument we may assume without loss of generality that M is uniformly bounded, so there is no problem of integrability. We prove the upper bound first. We start with Doob's submartingale inequality

(5.1)    E[(M*_t)^p] ≤ (p/(p − 1))^p E[|M_t|^p].

We apply Itô's formula to |M_t|^p. Note that x ↦ |x|^p is twice continuously differentiable because p > 2. This gives

    |M_t|^p = p ∫_0^t |M_s|^{p−1} sgn(M_s) dM_s + (p(p − 1)/2) ∫_0^t |M_s|^{p−2} d⟨M⟩_s.
Taking expectations and using the obvious bound |M_s| ≤ M*_t, we have

    E[|M_t|^p] ≤ (p(p − 1)/2) E[(M*_t)^{p−2} ⟨M, M⟩_t].
We use Hölder's inequality on the right side and (5.1) on the left side to obtain

    E[(M*_t)^p] ≤ C {E[(M*_t)^p]}^{(p−2)/p} {E[⟨M, M⟩_t^{p/2}]}^{2/p},

where C is a constant depending on p. The upper bound follows immediately.
The lower bound is slightly trickier. Using Itô's formula we have

    M_t ⟨M, M⟩_t^{(p−2)/4} = ∫_0^t ⟨M, M⟩_s^{(p−2)/4} dM_s + ∫_0^t M_s d⟨M, M⟩_s^{(p−2)/4}.
The first term on the right side is the term we are aiming at, because its second moment is precisely (up to a constant) the left side of the inequality we want to prove. This is the reason why we choose the exponent (p − 2)/4. We have

    |∫_0^t ⟨M, M⟩_s^{(p−2)/4} dM_s| ≤ 2 M*_t ⟨M, M⟩_t^{(p−2)/4}.
Squaring this inequality and taking expectations, we have, after using Hölder's inequality,

    (2/p) E[⟨M, M⟩_t^{p/2}] ≤ 4 {E[(M*_t)^p]}^{2/p} {E[⟨M, M⟩_t^{p/2}]}^{(p−2)/p}.

The lower bound follows immediately.   □
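A numerical illustration of the two-sided bound for M = B on [0, 1] and p = 4: here ⟨B, B⟩_1 = 1, so E[⟨B, B⟩_1²] = 1, and the theorem says E[(B*_1)^4] lies between c_4 and C_4. (A reflection argument gives the cruder explicit bounds 3 ≤ E[(B*_1)^4] ≤ 6.) The grid size below is an arbitrary discretization choice.

```python
import math
import random

random.seed(5)

# Burkholder-Davis-Gundy illustration for M = B on [0, 1] and p = 4:
# <B, B>_1 = 1, so E[<B, B>_1^2] = 1, and E[(B*_1)^4] is a constant lying
# between c_4 and C_4.  Estimate E[(B*_1)^4] on a discrete grid.
n_paths, n_steps = 10_000, 400
dt = 1.0 / n_steps

acc = 0.0
for _ in range(n_paths):
    b, running_max = 0.0, 0.0
    for _ in range(n_steps):
        b += random.gauss(0.0, math.sqrt(dt))
        running_max = max(running_max, abs(b))
    acc += running_max ** 4
estimate = acc / n_paths
print(round(estimate, 2))  # a moderate constant
```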

6. Martingale representation theorem


Let B be a standard Brownian motion and let F_t^B = σ{B_s, s ≤ t} be its associated filtration of σ-fields, properly completed so that it satisfies the usual conditions. Now suppose M is a square-integrable martingale with respect to this filtration. We will show that it can always be represented as a stochastic integral with respect to the Brownian motion; namely, there exists a progressively measurable process H such that

    M_t = ∫_0^t H_s dB_s.
Thus every martingale with respect to the filtration generated by a Brow-
nian motion is a stochastic integral with respect to this Brownian motion.
This is a very important result in stochastic analysis. In this section we will
give a proof of this result by an approach we believe is direct, short, and

well motivated. It uses nothing more than Itô’s formula in a very elemen-
tary way. In the next section we will discuss another approach to this useful
theorem.
We make a few remarks before the proof. First of all, the martingale representation theorem is equivalent to the following representation theorem: if X is a square integrable random variable measurable with respect to F_T^B, then it can be represented in the form

    X = EX + ∫_0^T H_s dB_s,

for if we take X = M_T, then we have

    M_T = EM_T + ∫_0^T H_s dB_s.

Since both sides are martingales, the equality must also hold if T is replaced by any t ≤ T.
Second, the representation is unique, because if

    ∫_0^T H_s dB_s = ∫_0^T G_s dB_s,

then from

    E(∫_0^T H_s dB_s − ∫_0^T G_s dB_s)² = E ∫_0^T |H_s − G_s|² ds

we immediately have H = G on [0, T] × Ω with respect to the measure L × P, where L is the Lebesgue measure.
Third, if X_n → X in L²(Ω, F, P) and

    X_n = EX_n + ∫_0^T H_s^n dB_s,

then EX_n → EX and

    E ∫_0^T |H_s^m − H_s^n|² ds = E|(X_m − EX_m) − (X_n − EX_n)|² → 0.
This shows that {H^n} is a Cauchy sequence in L²([0, T] × Ω, L × P). It therefore converges to a process H, and

    X = EX + ∫_0^T H_s dB_s.

The point here is that it is enough to prove the representation theorem for a dense subset of random variables.

Finally, we observe that a square integrable martingale with respect to the filtration generated by a Brownian motion is necessarily continuous, because every stochastic integral is continuous with respect to its upper time limit.
We may assume without loss of generality that T = 1; from now on we write W for the Brownian motion. Our method starts with the simple case X = f(W_1) for a smooth bounded function f. It is easy to prove the theorem in this case and to find an explicit formula for the process H.
PROPOSITION 6.1. For any bounded smooth function f,

    f(W_1) = E f(W_1) + ∫_0^1 E{f′(W_1) | F_s} dW_s.

PROOF. We have f(W_1) = f(W_t + W_1 − W_t). We know that W_1 − W_t has the normal distribution N(0, 1 − t) and is independent of F_t. Hence,

(6.1)    E{f(W_1) | F_t} = (1/√(2π(1 − t))) ∫_R f(W_t + x) e^{−|x|²/2(1−t)} dx.

Now we regard the right side of (6.1) as a function of W_t and t and apply Itô's formula. Since we know that it is a martingale, we only need to find its martingale part, which is very easy: just differentiate with respect to W_t and integrate the derivative against dW_t. We have

    f(W_1) = E f(W_1) + ∫_0^1 [(1/√(2π(1 − t))) ∫_R f′(W_t + x) e^{−|x|²/2(1−t)} dx] dW_t.
The difference between the integrand here and the right side of (6.1) is simply that f is replaced by its derivative f′. This shows that

    f(W_1) = E f(W_1) + ∫_0^1 E{f′(W_1) | F_t} dW_t.   □
The general case can be handled by an induction argument.
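Before turning to the general case, Proposition 6.1 can be illustrated pathwise with the (unbounded but convenient) test function f(x) = x²: then E{f′(W_1) | F_s} = 2W_s, and the representation reads W_1² = 1 + ∫_0^1 2W_s dW_s, an identity that already holds path by path. The sketch below approximates the stochastic integral by left-endpoint Itô sums.

```python
import math
import random

random.seed(6)

# Pathwise illustration of Proposition 6.1 for f(x) = x^2: the integrand is
# E{f'(W_1) | F_s} = 2 W_s, and the representation reads
# W_1^2 = E[W_1^2] + \int_0^1 2 W_s dW_s = 1 + (W_1^2 - 1).
n_steps = 20_000
dt = 1.0 / n_steps

w, ito_sum = 0.0, 0.0
for _ in range(n_steps):
    dw = random.gauss(0.0, math.sqrt(dt))
    ito_sum += 2.0 * w * dw        # left-endpoint (Ito) approximation
    w += dw
lhs = w * w                        # f(W_1)
rhs = 1.0 + ito_sum                # E f(W_1) + stochastic integral
print(round(lhs - rhs, 4))  # close to 0
```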
THEOREM 6.2. Let X ∈ L²(Ω, F_1, P). Then there is a progressively measurable process H such that

    X = EX + ∫_0^1 H_s dW_s.

PROOF. It is enough to show the representation theorem for random variables of the form

    X = f(W_{s_1}, W_{s_2} − W_{s_1}, ..., W_{s_n} − W_{s_1}),

where f is a bounded smooth function with bounded first derivatives, because the random variables of this type form a dense subset of L²(Ω, F_1, P) (see the remarks at the beginning of this section). By the induction hypothesis applied to the Brownian motion {W_{s_1 + s} − W_{s_1}, 0 ≤ s ≤ 1 − s_1}, we have

    f(x, W_{s_2} − W_{s_1}, ..., W_{s_n} − W_{s_1}) = h(x) + ∫_{s_1}^1 H_s^x dW_s,

where
h( x ) = E f ( x, Ws2 − Ws1 , · · · , Wsn − Ws1 ),

and the dependence of H^x on x is smooth. Hence, replacing x by W_{s_1}, we have

    X = h(W_{s_1}) + ∫_{s_1}^1 H_s dW_s,

where H_s = H_s^{W_{s_1}}. Applying the representation to the random variable h(W_{s_1}) as well, we also have

    h(W_{s_1}) = E h(W_{s_1}) + ∫_0^{s_1} H_s dW_s.

It is clear that E h(W_{s_1}) = EX, hence

    X = EX + ∫_0^1 H_s dW_s.   □
We now give a general formula for the integrand in the martingale representation theorem for a wide class of random variables. Recall the definition of the Cameron-Martin space

    H = {h ∈ C[0, 1] : h(0) = 0, ḣ ∈ L²[0, 1]}

and the Cameron-Martin norm

    |h|_H = (∫_0^1 |ḣ_s|² ds)^{1/2}.

A function F : C[0, 1] → R is called a cylinder function if it has the form

    F(w) = f(w_{s_1}, ..., w_{s_l}),

where 0 < s_1 < ... < s_l ≤ 1 and f : R^l → R is a bounded smooth function. We also assume that f has bounded first derivatives and use the notation

    F_{x_i}(w) = f_{x_i}(w_{s_1}, ..., w_{s_l}).
For a cylinder function F, the directional derivative along h ∈ W(R) is defined by

    D_h F(w) = lim_{t→0} (F(w + th) − F(w))/t.

If h ∈ H, then it is easy to verify that

    D_h F(w) = ⟨DF(w), h⟩_H,

where

    DF(w)_s = ∑_{i=1}^l min{s, s_i} F_{x_i}(w).
Note that the derivative

    D_s F(w) = d(DF(w)_s)/ds

is given by

    D_s F(w) = ∑_{i=1}^l F_{x_i}(w) I_{[0, s_i]}(s).
We have the following integration by parts formula.

THEOREM 6.3. Let H : Ω → H be F∗-adapted with E exp(|H|²_H/2) finite. Then the following integration by parts formula holds for every cylinder function F:

    E D_H F = E⟨DF, H⟩_H = E[F ∫_0^1 Ḣ_s dW_s].

PROOF. By Girsanov's theorem, under the probability Q defined by

    dQ/dP = exp(t ∫_0^1 Ḣ_s dW_s − (t²/2) ∫_0^1 |Ḣ_s|² ds),

the process

    X_s = W_s − tH_s,   0 ≤ s ≤ 1,

is a Brownian motion, hence E^Q F(X) = E F(W). This means that

    E^Q F(X) = E[F(W − tH) exp(t ∫_0^1 Ḣ_s dW_s − (t²/2) ∫_0^1 |Ḣ_s|² ds)]

is independent of t. Differentiating with respect to t and letting t = 0, we obtain the integration by parts formula.   □
The explicit martingale representation theorem is given by the following Clark-Ocone formula.

THEOREM 6.4. Let F be a cylinder function on the path space W(R). Then

    F(W) = E F(W) + ∫_0^1 E{D_s F(W) | F_s} dW_s.

PROOF. By the martingale representation theorem we have

    F(W) = E F(W) + ∫_0^1 Ḣ_s dW_s,

where Ḣ is F∗-adapted. By definition,

    E D_G F = E⟨DF, G⟩_H.
By the integration by parts formula, the left side is

    E D_G F = E[F ∫_0^1 Ġ_s dW_s]
            = E[(∫_0^1 Ḣ_s dW_s)(∫_0^1 Ġ_s dW_s)]
            = E ∫_0^1 Ḣ_s Ġ_s ds.

The right side is

    E⟨DF, G⟩_H = E ∫_0^1 Ġ_s D_s F ds = E ∫_0^1 E{D_s F | F_s} Ġ_s ds,

where the conditional expectation may be inserted because Ġ_s is F_s-measurable. Hence for every F∗-adapted process Ġ we have

    E ∫_0^1 E{D_s F | F_s} Ġ_s ds = E ∫_0^1 Ḣ_s Ġ_s ds.

Since E{D_s F | F_s} − Ḣ_s is also adapted, we must have

    Ḣ_s = E{D_s F | F_s}.   □
7. Reflecting Brownian motion


Let B be a one-dimensional Brownian motion starting from zero. The process X_t = |B_t| is called reflecting Brownian motion. We have X_t = F(B_t), where F(x) = |x|. We want to apply Itô's formula to F(B_t), but unfortunately F is not C². We approximate F by

    F_e(x) = (1/e) ∫_0^x du_1 ∫_0^{u_1} I_{[−e,e]}(u_2) du_2.

The above function is still not C² because F_e″ = I_{[−e,e]}/e, which is not continuous, but it is clear that F_e(x) → |x| and F_e′(x) → sgn(x) as e → 0. Now let φ be a continuous function and define

    F_φ(x) = ∫_0^x du_1 ∫_0^{u_1} φ(u_2) du_2.
Itô's formula can be applied to F_φ(B_t) and we obtain

    F_φ(B_t) = ∫_0^t F_φ′(B_s) dB_s + (1/2) ∫_0^t φ(B_s) ds.
Now, for a fixed e, we let φ in the above formula be the continuous function

    φ_n(x) = 0 if |x| ≥ e + n^{−1};   φ_n(x) = e^{−1} if |x| ≤ e;   φ_n linear on the two remaining intervals.

Then F_{φ_n} → F_e and F_{φ_n}′ → F_e′. It follows that

    F_e(B_t) = ∫_0^t F_e′(B_s) dB_s + (1/2e) ∫_0^t I_{[−e,e]}(B_s) ds.
In the last term,

    ∫_0^t I_{[−e,e]}(B_s) ds

is the amount of time the Brownian path spends in the interval [−e, e] up to time t. Now let e → 0 in the above identity for F_e(B_t). The term on the left

side converges to |B_t| and the first term on the right side converges to the stochastic integral

    ∫_0^t sgn(B_s) dB_s.

Hence the limit

    L_t = lim_{e→0} (1/e) ∫_0^t I_{(−e,e)}(B_s) ds

must exist and we have

    |B_t| = ∫_0^t sgn(B_s) dB_s + (1/2) L_t.
We see that L_t can be interpreted as the amount of time Brownian motion spends in the interval (−e, e), properly normalized. It is called the local time of Brownian motion B at x = 0. Let

    W_t = ∫_0^t sgn(B_s) dB_s.

Then W is a continuous martingale with quadratic variation process

    ⟨W, W⟩_t = ∫_0^t |sgn(B_s)|² ds = t.
Note that Brownian motion spends zero time at x = 0, because E I_{{0}}(B_s) = P{B_s = 0} = 0 and

    E ∫_0^t I_{{0}}(B_s) ds = ∫_0^t E I_{{0}}(B_s) ds = 0.

We thus conclude that reflecting Brownian motion |B_t| is a submartingale with the decomposition

    |B_t| = W_t + (1/2) L_t.
It is interesting to note that W can be expressed in terms of reflecting Brownian motion by

    W_t = X_t − (1/2) L_t,

where

(7.1)    L_t = lim_{e→0} (1/e) ∫_0^t I_{[0,e]}(X_s) ds.
We now pose the question: can X and L be expressed in terms of W? That the answer to this question is affirmative is the content of the so-called Skorokhod problem.

DEFINITION 7.1. Given a continuous path f : R_+ → R with f(0) ≥ 0, a pair of functions (g, h) is the solution of the Skorokhod problem if
(1) g(t) ≥ 0 for all t ≥ 0;
(2) h is increasing from h(0) = 0 and increases only when g = 0;
(3) g = f + h.

The main result is that the Skorokhod problem can be solved uniquely and explicitly.

THEOREM 7.2. There exists a unique solution to the Skorokhod problem.

PROOF. It is interesting that the solution can be written down explicitly:

    h(t) = −min_{0≤s≤t} (f(s) ∧ 0),   g(t) = f(t) − min_{0≤s≤t} (f(s) ∧ 0).

Let us assume that f(0) = 0 for simplicity. (If f(0) > 0, then h(t) = 0 and g(t) = f(t) before the first time f reaches 0, and after this time it is as if the path starts from 0.) The explicit solution in this case is

    h(t) = −min_{0≤s≤t} f(s),   g(t) = f(t) − min_{0≤s≤t} f(s).

It is clear that g(t) ≥ 0 for all t and that h increases starting from h(0) = 0. The equation g = f + h is also obvious. We only need to show that h increases only when g(t) = 0. This means that, as a Borel measure, h only charges the zero set {t : g(t) = 0}. This requirement is often written as

    h(t) = ∫_0^t I_{{0}}(g(s)) dh(s).
Equivalently, it is enough to show that for any t such that g(t) > 0 there is a neighborhood (t − δ, t + δ) of t on which h is constant. This should be clear, for if g(t) > 0, then f(t) > min_{0≤s≤t} f(s), which means that the minimum must be achieved at a point ξ ∈ [0, t) with f(t) > f(ξ). By continuity a small change of t will not alter this situation, which means that h = −f(ξ) in a neighborhood of t. More precisely, from g(t) = f(t) − min_{0≤s≤t} f(s) > 0 and the continuity of f, there is a positive δ such that

    min_{t−δ≤s≤t+δ} f(s) > min_{0≤s≤t−δ} f(s).

Hence

    min_{0≤s≤t+δ} f(s) = min{ min_{0≤s≤t−δ} f(s), min_{t−δ≤s≤t+δ} f(s) } = min_{0≤s≤t−δ} f(s).

This means that h(t + δ) = h(t − δ), and hence h must be constant on (t − δ, t + δ) because h is increasing.
We now show that the solution to the Skorokhod problem is unique. Suppose that (g, h) and (g_1, h_1) are two solutions and let ξ = h − h_1. It is continuous and of bounded variation, hence

    ξ(t)² = 2 ∫_0^t ξ(s) dξ(s).
On the other hand, ξ(s) = g(s) − g1(s), hence

ξ(t)² = 2 ∫_0^t { g(s) − g1(s) } d{ h(s) − h1(s) }.
There are four terms on the right side: g(s) dh(s) = g1(s) dh1(s) = 0 because h increases only when g = 0 and h1 increases only when g1 = 0; the cross terms g(s) dh1(s) ≥ 0 and g1(s) dh(s) ≥ 0 because g(s) ≥ 0, g1(s) ≥ 0 and h, h1 are increasing, and they enter with a minus sign. Putting these observations together we have ξ(t)² ≤ 0, which means that ξ(t) = 0.
This proves the uniqueness. 
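On a discretized path the explicit solution is easy to implement. The following Python sketch (the helper name `skorokhod` and the sample path are illustrative choices, not part of the text) computes g and h for a path with f(0) = 0 and can be used to check the three defining properties.

```python
# Explicit Skorokhod map on a sampled path f with f[0] = 0:
#   h(t) = -min_{0<=s<=t} f(s),   g(t) = f(t) + h(t).
# The function name `skorokhod` and the sample path are illustrative choices.

def skorokhod(f):
    g, h = [], []
    running_min = 0.0
    for value in f:
        running_min = min(running_min, value)
        h.append(-running_min)          # h is nondecreasing with h(0) = 0
        g.append(value - running_min)   # g = f + h stays nonnegative
    return g, h

# A zigzag sample path that dips below zero:
f = [0.0, 0.5, -0.3, -0.1, -0.8, 0.2]
g, h = skorokhod(f)
```

In this discrete picture h increases only at indices where f attains a new minimum, and at those indices g vanishes, matching the support condition established in the proof.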
If we apply the Skorokhod equation to Brownian motion by replacing f with Brownian paths, we obtain some interesting results. We have shown that

|B_t| = W_t + (1/2) L_t,

where W is a Brownian motion. From the solution of the Skorokhod problem we conclude that |B| and L are determined by W:

(7.2)    |B_t| = W_t − min_{0≤s≤t} W_s,    (1/2) L_t = − min_{0≤s≤t} W_s.

T HEOREM 7.3. Let W be a Brownian motion. (1) The processes

{ max_{0≤s≤t} W_s − W_t, t ≥ 0 }    and    { |W_t|, t ≥ 0 }

have the same law, i.e., that of a reflecting Brownian motion. (2) We have

max_{0≤s≤t} W_s = lim_{ε→0} (1/2ε) ∫_0^t I_{[0,ε]}( max_{0≤u≤s} W_u − W_s ) ds.

P ROOF. The assertions follow immediately from (7.2) by replacing W with −W, which is also a Brownian motion; in the second assertion, we use the fact that L is the normalized occupation time of reflecting Brownian motion (7.1). 
R EMARK 7.4. We have calculated the joint distribution of max_{0≤s≤t} W_s and |W_t| for a fixed t, and we know that max_{0≤s≤t} W_s − W_t and |W_t| have the same distribution for each fixed t. The above theorem claims much more: they have the same distribution as two stochastic processes.
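A seeded Monte Carlo sketch can illustrate the first assertion numerically: both sample means below estimate E|W_1| = √(2/π) ≈ 0.798. Step count, sample size, seed and tolerances are illustrative choices, and the discretized running maximum carries a small negative bias of order √dt.

```python
# Monte Carlo sketch of Theorem 7.3(1): the sample means of
# max_{s<=t} W_s - W_t and |W_t| both estimate E|W_1| = sqrt(2/pi).
# All numerical parameters here are illustrative choices.
import math
import random

random.seed(1)
n_steps, n_paths, t = 400, 5000, 1.0
dt = t / n_steps

sum_reflected, sum_abs = 0.0, 0.0
for _ in range(n_paths):
    w, running_max = 0.0, 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        running_max = max(running_max, w)
    sum_reflected += running_max - w            # sample of max W_s - W_t
    sum_abs += abs(w)                           # sample of |W_t|

mean_reflected = sum_reflected / n_paths
mean_abs = sum_abs / n_paths
```

Of course agreement of the means is far weaker than equality of laws of the two processes, which is what the theorem asserts; the simulation is only a sanity check.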

8. Brownian bridge
If we condition a Brownian motion to be at a fixed point x at time t = 1, we obtain a Brownian bridge from o to x with time horizon 1. Let
L x (Rn ) = {w ∈ Wo (Rn ) : w1 = x } .
The law of a Brownian bridge from o to x in time 1 is a probability measure
µ x on L x (Rn ), which we will call the Wiener measure on L x (Rn ). Note that
L x (Rn ) is a subspace of Wo (Rn ), thus µ x is also a measure on Wo (Rn ). By
definition, we can write intuitively
µ x ( C ) = µ { C | w1 = x } .
Here µ is the Wiener measure on W_o(R^n). The meaning of this suggestive formula is as follows. If F is a nice function measurable with respect to B_s with s < 1 and f a measurable function on R^n, then

E^µ { F f(W_1) } = E^µ { E^{µ_{W_1}}(F) f(W_1) },
where W denotes the coordinate process on W_o(R^n). Using the Markov property at time s we have

E^µ { F ∫_{R^n} p(1 − s, W_s, y) f(y) dy } = ∫_{R^n} E^{µ_y}(F) p(1, o, y) f(y) dy,
where

p(t, y, x) = (2πt)^{−n/2} e^{−|y−x|²/2t}

is the transition density function of Brownian motion. This being true for all measurable f, we have for all F ∈ B_s,
(8.1)    E^{µ_x} F = E^µ { F · p(1 − s, W_s, x) / p(1, o, x) }.
Therefore µ_x is absolutely continuous with respect to µ on F_s for any s < 1 and the Radon–Nikodym density is given by

dµ_x/dµ |_{F_s} (w) = p(1 − s, w_s, x) / p(1, o, x) =: e_s.
The process {e_s, 0 ≤ s < 1} is necessarily a positive (local) martingale under the probability µ. It therefore must have the form of an exponential martingale, which can be found explicitly by computing the differential of log e_s. The density function p(t, y, x) satisfies the heat equation

∂p/∂t = (1/2) ∆_y p

in (t, y) for fixed x. This equation gives

∆_y log p = 2 (∂ log p/∂t) − |∇_y log p|².
Using this fact and Itô's formula we find easily that

d log e_s = ⟨∇ log p(1 − s, w_s, x), dw_s⟩ − (1/2) |∇ log p(1 − s, w_s, x)|² ds.
Hence e_s is an exponential martingale of the form

dµ_x/dµ |_{B_s} = exp { ∫_0^s ⟨V_u, dw_u⟩ − (1/2) ∫_0^s |V_u|² du },

where

V_s = ∇_y log p(1 − s, w_s, x).
By Girsanov's theorem, under the probability µ_x, the process

B_s = W_s − ∫_0^s ∇ log p(1 − τ, W_τ, x) dτ,    0 ≤ s < 1,

is a Brownian motion. The explicit formula for p(t, y, x) gives

∇_y log p(1 − τ, y, x) = − (y − x)/(1 − τ).
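As a quick numerical sanity check of this gradient formula in dimension one (the parameter values below are arbitrary), one can compare a central finite difference of log p with the closed form:

```python
# Finite-difference check (n = 1, arbitrary parameter values) of
#   d/dy log p(1 - tau, y, x) = -(y - x)/(1 - tau)
# for the heat kernel p(t, y, x) = (2*pi*t)**(-1/2) * exp(-(y - x)**2 / (2*t)).
import math

def log_p(t, y, x):
    # log of the one-dimensional Gaussian transition density
    return -0.5 * math.log(2.0 * math.pi * t) - (y - x) ** 2 / (2.0 * t)

tau, y, x, eps = 0.3, 0.7, -0.4, 1e-6
t = 1.0 - tau
finite_diff = (log_p(t, y + eps, x) - log_p(t, y - eps, x)) / (2.0 * eps)
closed_form = -(y - x) / (1.0 - tau)
```

Since log p is quadratic in y, the central difference is exact up to rounding.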
Under the probability µ_x the coordinate process W is a Brownian bridge from o to x in time 1. Therefore we have shown that the Brownian bridge is the solution to the following stochastic differential equation:

dW_s = dB_s − (W_s − x)/(1 − s) ds.
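An Euler–Maruyama sketch of this equation (step size, seed and the endpoint x below are illustrative choices) shows the path being pulled toward x as s → 1; since the drift is singular at s = 1, the scheme stops one step early.

```python
# Euler-Maruyama sketch of dW_s = dB_s - (W_s - x)/(1 - s) ds on [0, 1 - dt].
# The drift is singular at s = 1, so the scheme stops one step before the end.
# All numerical parameters (dt, seed, endpoint x) are illustrative choices.
import math
import random

random.seed(7)
x, n_steps = 2.0, 2000
dt = 1.0 / n_steps

w, s = 0.0, 0.0
for _ in range(n_steps - 1):            # stop at s = 1 - dt
    w += random.gauss(0.0, math.sqrt(dt)) - (w - x) / (1.0 - s) * dt
    s += dt
# At s = 1 - dt the exact bridge has variance s*(1 - s) ~ dt, so w ends near x.
```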
This simple equation can be solved explicitly. From the equation we have

d(W_s − x) + (W_s − x)/(1 − s) ds = dB_s.

The left side, after dividing by 1 − s, is the differential of (W_s − x)/(1 − s), hence

W_s = sx + (1 − s) ∫_0^s dB_u/(1 − u).
This formula shows that, like Brownian motion itself, the Brownian bridge is a Gaussian process.
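By the Itô isometry the stochastic integral above has variance (1 − s)² ∫_0^s (1 − u)^{−2} du = s(1 − s), so W_s has mean sx and variance s(1 − s). A small numerical sketch (the function name `bridge_variance` and the quadrature resolution are illustrative choices) confirms the variance formula:

```python
# Variance check for W_s = s*x + (1 - s) * int_0^s dB_u / (1 - u): by the Ito
# isometry, Var W_s = (1 - s)**2 * int_0^s (1 - u)**(-2) du = s*(1 - s).
# The function name and the midpoint-rule resolution are illustrative choices.

def bridge_variance(s, n=20000):
    du = s / n
    # midpoint-rule quadrature of int_0^s (1 - u)**(-2) du
    integral = sum(1.0 / (1.0 - (i + 0.5) * du) ** 2 for i in range(n)) * du
    return (1.0 - s) ** 2 * integral
```

In closed form the integral is 1/(1 − s) − 1, which gives s(1 − s) directly; the quadrature simply makes the computation independent of that simplification.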
The term “Brownian bridge” is often reserved specifically for the Brownian bridge which returns to its starting point at the terminal time. In this case we have

dW_s = dB_s − W_s/(1 − s) ds

and

W_s = (1 − s) ∫_0^s dB_u/(1 − u).
For dimension n = 1 it is easy to verify that the covariance function is given by

E { W_s W_t } = min { s, t } − st.
Using this fact we obtain another representation of Brownian bridge.
T HEOREM 8.1. If { Bs } is a Brownian motion, then the process
Ws = Bs − sB1
is a Brownian bridge.
P ROOF. (n = 1) Verify directly that W defined above has the correct
covariance function. 
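The covariance computation behind this proof is mechanical: by bilinearity and Cov(B_a, B_b) = min(a, b), we get Cov(B_s − sB_1, B_t − tB_1) = min(s, t) − st for s, t ∈ [0, 1]. A small sketch (the function name `bridge_cov` is an illustrative choice):

```python
# Covariance check for W_s = B_s - s*B_1: using only bilinearity of Cov and
# Cov(B_a, B_b) = min(a, b), expand
#   Cov(B_s - s B_1, B_t - t B_1)
#     = min(s, t) - t*min(s, 1) - s*min(t, 1) + s*t*min(1, 1).
# The function name `bridge_cov` is an illustrative choice.

def bridge_cov(s, t):
    return min(s, t) - t * min(s, 1.0) - s * min(t, 1.0) + s * t * min(1.0, 1.0)

# For s, t in [0, 1] this simplifies to min(s, t) - s*t, the Brownian bridge
# covariance computed in the text.
```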
The following heuristic discussion of W_s = B_s − sB_1 is perhaps more instructive. Let F_* be the filtration of the Brownian motion B. We enlarge the filtration to

G_s = σ { F_s, B_1 }.

We compute the Doob–Meyer decomposition of W with respect to G_*. Of course the martingale part is a Brownian motion because its quadratic variation process will be the same as that of W. Denote this Brownian motion by Ω. Doob's explicit decomposition formula for a semimartingale suggests that

W_s = Ω_s + ∫_0^s E { dW_u | G_u }.
We have dW_s = dB_s − B_1 ds. The differential dW_s = W_{s+ds} − W_s is a forward differential. We need to project dW_s onto the L²-space generated by

G_s = σ { B_u, u ≤ s; B_1 } = σ { B_u, u ≤ s; B_1 − B_s }.

Note that σ { B_u, u ≤ s } is orthogonal to B_1 − B_s. The differential dB_s is orthogonal to the first part, and its projection onto the second part is

( E { dB_s (B_1 − B_s) } / (1 − s) ) (B_1 − B_s) = ( (B_1 − B_s)/(1 − s) ) ds,

since E { dB_s (B_1 − B_s) } = ds and E { (B_1 − B_s)² } = 1 − s.
The differential B_1 ds is of course in the target space already, hence

E { dW_s | G_s } = (B_1 − B_s)/(1 − s) ds − B_1 ds = − (B_s − sB_1)/(1 − s) ds = − W_s ds/(1 − s).
It follows that

W_s = Ω_s − ∫_0^s W_τ/(1 − τ) dτ,

which is exactly the stochastic differential equation for a Brownian bridge.

9. Fourth assignment
E XERCISE 4.1. Let φ be a strictly convex function. If both N and φ(N) are continuous local martingales, then N is trivial, i.e., there is a constant C such that N_t = C with probability 1 for all t.
E XERCISE 4.2. Let M be a continuous local martingale. Show that there is a sequence of partitions ∆_1 ⊂ ∆_2 ⊂ ∆_3 ⊂ · · · such that |∆_n| → 0 and with probability 1 the following holds: for all t ≥ 0,

lim_{n→∞} ∑_{i=1}^∞ ( M_{t_i^n ∧ t} − M_{t_{i−1}^n ∧ t} )² = ⟨M, M⟩_t.
E XERCISE 4.3. Let B be the standard Brownian motion. Then the reflecting Brownian motion X_t = |B_t| is a Markov process. This means

P { X_{t+s} ∈ C | F_s^X } = P { X_{t+s} ∈ C | X_s }.

What is its transition density function

q(t, x, y) = P { X_{t+s} ∈ dy | X_s = x } / dy ?
E XERCISE 4.4. Let L_t be the local time of Brownian motion at x = 0. Show that

E L_t = √(8t/π).
E XERCISE 4.5. Show that Brownian bridge from o to x in time 1 is a Markov process whose transition density is

q(s_1, y; s_2, z) = p(s_2 − s_1, y, z) p(1 − s_2, z, x) / p(1 − s_1, y, x).

E XERCISE 4.6. The finite-dimensional marginal density for Brownian bridge from o to x in time 1 is

p(1, o, x)^{−1} ∏_{i=0}^{l} p(s_{i+1} − s_i, x_i, x_{i+1}).

[Convention: x_0 = o, s_0 = 0, x_{l+1} = x, s_{l+1} = 1.]
E XERCISE 4.7. Let {Ws , 0 ≤ s ≤ 1} be a Brownian bridge at o. Then the
reversed process
{W1−s , 0 ≤ s ≤ 1}
is also a Brownian bridge.
E XERCISE 4.8. For a > 0 define the stopping time

σ_a = inf { t : B_t − t = −a }.

Show that for µ ≥ 0,

E e^{−µσ_a} = e^{−(√(1+2µ) − 1) a}.
E XERCISE 4.9. Use the result of the previous exercise to show that E e^{µσ_a} is infinite if µ > 1/2.
E XERCISE 4.10. By the martingale representation theorem there is a process H such that

W_1^3 = ∫_0^1 H_s dW_s.

Find an explicit expression for H.
