Applications of Itô's Formula: 1. Lévy's Martingale Characterization of Brownian Motion
2. Exponential martingale
We want to find an analog of the exponential function $e^x$. The defining
property of the exponential function is the differential equation
\[ \frac{df(x)}{dx} = f(x), \qquad f(0) = 1, \]
or equivalently
\[ f(x) = 1 + \int_0^x f(t)\, dt. \]
So we define an exponential martingale $E_t$ by the stochastic integral equation
\[ E_t = 1 + \int_0^t E_s\, dM_s, \]
where $M$ is a continuous local martingale. This equation can be solved
explicitly. Instead of writing down the formula and verifying it, let us discover
the formula. Since $E_0 = 1$ we can take the logarithm of $E_t$, at least for small
time $t$. Let $C_t = \log E_t$. By Itô's formula we have
\[ C_t = \int_0^t E_s^{-1}\, dE_s - \frac{1}{2} \int_0^t E_s^{-2}\, d\langle E, E\rangle_s. \]
Since $dE_s = E_s\, dM_s$, we have $d\langle E, E\rangle_s = E_s^2\, d\langle M, M\rangle_s$. Hence
\[ C_t = M_t - \frac{1}{2} \langle M, M\rangle_t. \]
Therefore the formula for the exponential martingale is
\[ E_t = \exp\Big( M_t - \frac{1}{2} \langle M, M\rangle_t \Big). \]
Now it is easy to verify directly that this process satisfies the defining
equation for $E_t$.
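As a numerical sanity check (not part of the notes): for the simplest case $M_t = B_t$, a standard Brownian motion, we have $\langle M, M\rangle_t = t$, so $E_t = \exp(B_t - t/2)$, and a martingale started at $1$ keeps expectation $1$. A minimal Monte Carlo sketch, assuming this choice of $M$:

```python
import numpy as np

# Sketch: for M_t = B_t (Brownian motion), <M, M>_t = t, so the
# exponential martingale is E_t = exp(B_t - t/2).  Its expectation
# should stay equal to 1 for every t; we estimate it by sampling
# B_t ~ N(0, t) directly.

rng = np.random.default_rng(0)

def exponential_martingale_mean(t, n_paths=200_000):
    """Monte Carlo estimate of E[exp(B_t - t/2)]."""
    b_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
    return np.exp(b_t - t / 2).mean()

mean_at_1 = exponential_martingale_mean(1.0)   # should be close to 1
```

For a general continuous local martingale $M$ the expectation can fail to be $1$ (the process is only a supermartingale); the criteria below address exactly this point.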
Every strictly positive continuous local martingale can be written in the
form of an exponential martingale:
\[ E_t = \exp\Big( M_t - \frac{1}{2} \langle M, M\rangle_t \Big), \]
where the local martingale $M$ can be expressed in terms of $E$ by
\[ M_t = \int_0^t E_s^{-1}\, dE_s. \]
This formula can be verified rigorously: it can be shown that the
infinite series converges and that the remainder from the iteration tends to zero.
To continue our discussion let us introduce a parameter $\lambda$ by replacing $M$
with $\lambda M$ and obtain
\[ (2.1) \qquad \exp\Big( \lambda M_t - \frac{\lambda^2}{2} \langle M, M\rangle_t \Big) = \sum_{n=0}^{\infty} \lambda^n I_n(t). \]
If we set
\[ x = \frac{M_t}{\sqrt{2 \langle M, M\rangle_t}} \qquad \text{and} \qquad \theta = \lambda \sqrt{\frac{\langle M, M\rangle_t}{2}}, \]
then the left side of (2.1) becomes $\exp(2x\theta - \theta^2)$. The coefficients of its
Taylor expansion in $\theta$ are called the Hermite polynomials:
\[ e^{2x\theta - \theta^2} = \sum_{n=0}^{\infty} \frac{H_n(x)}{n!}\, \theta^n. \]
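The generating function above can be unpacked numerically (a sketch, not part of the notes): multiply the Taylor series of $e^{2x\theta}$ and $e^{-\theta^2}$ in $\theta$, and the $n$-th coefficient of the product, times $n!$, must be $H_n(x)$. For instance $H_2(x) = 4x^2 - 2$ and $H_3(x) = 8x^3 - 12x$.

```python
import math

# Extract Taylor coefficients of exp(2*x*theta - theta**2) in theta by
# taking the Cauchy product of the series for exp(2*x*theta) and
# exp(-theta**2); the generating function says the n-th coefficient
# equals H_n(x) / n!.

def hermite_via_generating_function(n_max, x):
    """Return [H_0(x), ..., H_{n_max}(x)] computed from the series product."""
    # exp(2*x*theta) = sum_k (2x)^k / k! * theta^k
    a = [(2 * x) ** k / math.factorial(k) for k in range(n_max + 1)]
    # exp(-theta^2) = sum_j (-1)^j / j! * theta^(2j)
    b = [0.0] * (n_max + 1)
    for j in range(n_max // 2 + 1):
        b[2 * j] = (-1) ** j / math.factorial(j)
    # Cauchy product, then multiply by n! to recover H_n(x).
    hermites = []
    for n in range(n_max + 1):
        c_n = sum(a[k] * b[n - k] for k in range(n + 1))
        hermites.append(c_n * math.factorial(n))
    return hermites

H = hermite_via_generating_function(3, 1.5)   # e.g. H_2(1.5) = 4*1.5**2 - 2 = 7
```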
because $\lambda < 1$. Hence the coefficient is still less than $1/2$ if $r > 1$ but
sufficiently close to $1$. On the other hand, we have assumed that $M$ is a
martingale, hence $M_\sigma = E\{ M_T \,|\, \mathcal{F}_\sigma \}$. By Jensen's inequality we have
\[ e^{M_\sigma/2} \le E\big\{ e^{M_T/2} \,\big|\, \mathcal{F}_\sigma \big\}. \]
It follows that
\[ E\left[ \exp\left( \frac{\lambda r - \sqrt{\lambda^3 r}}{1 - \lambda}\, M_\sigma \right) \right] \le E\, e^{M_\sigma/2} \le E\, e^{M_T/2}. \]
We therefore have shown that for any $\lambda < 1$, there is an $r > 1$ such that
\[ E\, \mathcal{E}(\lambda)_\sigma^r \le E\, e^{M_T/2} \]
for all stopping times $\sigma \le T$. This shows that $\mathcal{E}(\lambda)$ is a uniformly integrable
martingale, which implies that $E\, \mathcal{E}(\lambda)_T = 1$ for all $\lambda < 1$.
We now use the same trick again to show $E\, \mathcal{E}(1)_T = 1$. We have
\[ \mathcal{E}(\lambda)_T = \exp\Big( \lambda^2 \Big( M_T - \frac{1}{2} \langle M\rangle_T \Big) \Big) \exp\big( (\lambda - \lambda^2) M_T \big). \]
Using Hölder's inequality with the exponents $1/\lambda^2$ and $1/(1-\lambda^2)$ (note that $\lambda^2 + (1 - \lambda^2) = 1$) we have
\[ 1 = E\, \mathcal{E}(\lambda)_T \le \left( E \exp\Big( M_T - \frac{1}{2} \langle M\rangle_T \Big) \right)^{\lambda^2} \left( E \exp\Big( \frac{\lambda}{1+\lambda}\, M_T \Big) \right)^{1-\lambda^2}. \]
Because $\lambda/(1+\lambda) \le 1/2$, the second expectation on the right side can be
bounded in terms of $E \exp[M_T/2]$, hence stays bounded as $\lambda \uparrow 1$. Letting $\lambda \uparrow 1$ we obtain $E \exp( M_T - \frac{1}{2}\langle M\rangle_T ) \ge 1$; since an exponential local martingale is a supermartingale, the reverse inequality also holds, and therefore
\[ E \exp\Big( M_T - \frac{1}{2} \langle M\rangle_T \Big) = 1. \]
The condition $E \exp[M_T/2] < \infty$ is not easy to verify because we usually know the quadratic variation $\langle M\rangle_T$ much better than $M_T$ itself. The
following weaker criterion can often be used directly.

COROLLARY 3.3 (Novikov's criterion). Suppose that $M$ is a martingale. If
$E \exp[\langle M\rangle_T/2]$ is finite, then
\[ E \exp\Big( M_T - \frac{1}{2} \langle M\rangle_T \Big) = 1. \]
PROOF. We have
\[ \exp\Big( \frac{1}{2} M_T \Big) = \exp\Big( \frac{1}{2} M_T - \frac{1}{4} \langle M\rangle_T \Big) \exp\Big( \frac{1}{4} \langle M\rangle_T \Big). \]
By the Cauchy–Schwarz inequality we have
\[ E \exp\Big( \frac{1}{2} M_T \Big) \le \sqrt{ E \exp\Big( M_T - \frac{1}{2} \langle M\rangle_T \Big) }\; \sqrt{ E \exp\Big( \frac{1}{2} \langle M\rangle_T \Big) }. \]
The first factor on the right side does not exceed $1$. Therefore
\[ E \exp\Big( \frac{1}{2} M_T \Big) \le \sqrt{ E \exp\Big( \frac{1}{2} \langle M\rangle_T \Big) }. \]
Therefore Novikov's condition implies Kazamaki's condition.
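Both sides of the inequality $E\exp(M_T/2) \le (E\exp(\langle M\rangle_T/2))^{1/2}$ are explicit in the simplest case (a sketch, assuming $M_t = \sigma B_t$ with $T = 1$, so $\langle M\rangle_1 = \sigma^2$ is deterministic): the left side equals $e^{\sigma^2/8}$ and the right side $e^{\sigma^2/4}$.

```python
import numpy as np

# Sketch check for M_t = sigma * B_t on [0, 1], so <M>_1 = sigma**2:
# Kazamaki's quantity E exp(M_1/2) = exp(sigma**2/8), while Novikov's
# bound sqrt(E exp(<M>_1/2)) = exp(sigma**2/4).  We estimate the left
# side by Monte Carlo and confirm it stays below the right side.

rng = np.random.default_rng(1)

def kazamaki_lhs(sigma, n_paths=100_000):
    """Monte Carlo estimate of E exp(M_1 / 2) for M_1 = sigma * B_1."""
    return np.exp(sigma * rng.standard_normal(n_paths) / 2).mean()

sigma = 1.0
lhs = kazamaki_lhs(sigma)        # ~ exp(1/8) ~ 1.133
rhs = np.exp(sigma**2 / 4)       # = exp(1/4) ~ 1.284
```

Here the gap between the two sides illustrates why Kazamaki's condition is strictly weaker than Novikov's.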
Let $X$ be the coordinate process on $W(\mathbb{R})$. Then the left side is simply
$E^\mu F(X + h)$. Introduce the measure $\nu$ by
\[ \frac{d\nu}{d\mu} = \exp\Big( -\int_0^1 \dot{h}_s\, dw_s - \frac{1}{2} \int_0^1 |\dot{h}_s|^2\, ds \Big). \]
We have
\[ \frac{d\mu}{d\nu} = \exp\Big( \int_0^1 \dot{h}_s\, dw_s + \frac{1}{2} \int_0^1 |\dot{h}_s|^2\, ds \Big) = \exp\Big( \int_0^1 \dot{h}_s\, d(w_s + h_s) - \frac{1}{2} \int_0^1 |\dot{h}_s|^2\, ds \Big). \]
Hence we can write $d\mu/d\nu = e_1(X + h)$. It follows that
\[ E^{\mu_h} F = E^\mu[F(X + h)] = E^\nu\Big[ F(X + h)\, \frac{d\mu}{d\nu} \Big] = E^\nu[F(X + h)\, e_1(X + h)]. \]
By Girsanov's theorem $X + h$ is a Brownian motion under $\nu$. On the other
hand, $X$ is a Brownian motion under $\mu$. Therefore on the right side of the
above equality we can replace $\nu$ by $\mu$ and at the same time replace $X + h$ by
$X$, hence
\[ E^{\mu_h} F = E^\mu[F(X)\, e_1(X)] = E^\mu(F e_1). \]
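The shift formula can be checked by Monte Carlo (a sketch, for the hypothetical choice $h_s = s$, so $\dot{h} = 1$ and $e_1(X) = \exp(W_1 - \frac{1}{2})$, and a functional $F(w) = \max(w_1, 0)$ depending only on the endpoint):

```python
import numpy as np

# Check E[F(X + h)] = E[F(X) e_1(X)] for h_s = s and F(w) = max(w_1, 0).
# Both sides only involve the endpoint W_1 ~ N(0, 1), so plain Gaussian
# sampling suffices; e_1 = exp(W_1 - 1/2) is the Cameron-Martin density.

rng = np.random.default_rng(2)

def shifted_mean(n_paths=400_000):
    """Left side: E[max(W_1 + 1, 0)]."""
    w1 = rng.standard_normal(n_paths)
    return np.maximum(w1 + 1.0, 0.0).mean()

def weighted_mean(n_paths=400_000):
    """Right side: E[max(W_1, 0) * exp(W_1 - 1/2)]."""
    w1 = rng.standard_normal(n_paths)
    return (np.maximum(w1, 0.0) * np.exp(w1 - 0.5)).mean()

lhs_shift = shifted_mean()
rhs_weight = weighted_mean()     # both estimate the same number
</n_paths```

The two estimates agree with the closed-form value $\varphi(1) + \Phi(1) \approx 1.0833$, where $\varphi$ and $\Phi$ are the standard normal density and distribution function.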
THEOREM 4.4. Let $h \in W(\mathbb{R})$. The shifted Wiener measure $\mu_h$ is mutually
absolutely continuous or mutually singular with respect to $\mu$ according as $h \in H$
or $h \notin H$.
PROOF. We need to show that if $h \notin H$, then $\mu_h$ is singular with respect
to $\mu$.
First we need to convert the condition $h \notin H$ into a more convenient
condition. A function $\dot{h}$ is square integrable if and only if
\[ \int_0^1 f_s \dot{h}_s\, ds \le C\, |f|_2 \]
for some constant $C$. Therefore it is conceivable that if $h \notin H$, then for any
$C$, there is a step function $f$ such that
\[ |f|_2^2 = \sum_{i=1}^n |f_i|^2 (s_i - s_{i-1}) = 1 \]
and
\[ \int_0^1 f_s\, dh_s = \sum_{i=1}^n f_i \big( h_{s_i} - h_{s_{i-1}} \big) \ge C. \]
This characterization of $h \notin H$ can indeed be verified rigorously.
Second, the convenient characterization that $\mu_h$ is singular with respect
to $\mu$ is the following: for any positive $\epsilon$, there is a set $A$ such that $\mu_h(A) \ge 1 - \epsilon$ and $\mu(A) \le \epsilon$.
Therefore
\[ \mu_h(A) = \mu\Big\{ Z + C \ge \frac{C}{2} \Big\} = \mu\Big\{ Z \ge -\frac{C}{2} \Big\} \]
and $\mu_h(A) \ge 1 - \epsilon$ for sufficiently large $C$. Thus we have shown that $\mu_h$
and $\mu$ are mutually singular.
5. Moment inequalities for martingales

THEOREM 5.1. Let $M$ be a continuous local martingale. For any $p > 0$, there
are positive constants $c_p$, $C_p$ such that
\[ c_p\, E\big[ \langle M, M\rangle_t^{p/2} \big] \le E\big[ (M_t^*)^p \big] \le C_p\, E\big[ \langle M, M\rangle_t^{p/2} \big]. \]
PROOF. The case $p = 2$ is obvious. We only prove the case $p > 2$; the
case $0 < p < 2$ is slightly more complicated, see Ikeda and Watanabe [6].
By the usual stopping time argument we may assume without loss of
generality that $M$ is uniformly bounded, so there is no problem of integrability. We prove the upper bound first. We start with Doob's submartingale
inequality
\[ (5.1) \qquad E\big[ M_t^{*p} \big] \le \Big( \frac{p}{p-1} \Big)^p E\big[ |M_t|^p \big]. \]
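Doob's inequality (5.1) is easy to test numerically (a sketch, not part of the notes, for $p = 2$ and $M = B$ a standard Brownian motion on $[0,1]$, simulated on a grid; the discretized running maximum slightly underestimates the true supremum, which only helps the inequality):

```python
import numpy as np

# Monte Carlo check of (5.1) with p = 2:
#   E[(B_1^*)^2] <= (2/1)**2 * E[B_1**2] = 4,
# where B_1^* = max_{s <= 1} |B_s|.  Brownian paths are simulated by
# cumulative sums of independent Gaussian increments.

rng = np.random.default_rng(3)

def doob_check(n_paths=20_000, n_steps=500):
    dt = 1.0 / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)
    running_max_sq = np.abs(paths).max(axis=1) ** 2
    terminal_sq = paths[:, -1] ** 2
    return running_max_sq.mean(), 4 * terminal_sq.mean()

doob_lhs, doob_rhs = doob_check()   # lhs should not exceed rhs
```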
well motivated. It uses nothing more than Itô's formula in a very elementary way. In the next section we will discuss another approach to this useful
theorem.
We make a few remarks before the proof. First of all, the martingale
representation theorem is equivalent to the following representation theorem: if $X$ is a square integrable random variable measurable with respect
to $\mathcal{F}_T^B$, then it can be represented in the form
\[ X = EX + \int_0^T H_s\, dB_s; \]
for if we take $X = M_T$, then we have
\[ M_T = \int_0^T H_s\, dB_s. \]
Since both sides are martingales, the equality must also hold if $T$ is replaced by any $t \le T$.
Second, the representation is unique, because if
\[ \int_0^T H_s\, dB_s = \int_0^T G_s\, dB_s, \]
then from
\[ E\left[ \Big( \int_0^T H_s\, dB_s - \int_0^T G_s\, dB_s \Big)^2 \right] = E \int_0^T |H_s - G_s|^2\, ds \]
prove the theorem in this case and find the explicit formula for the process
H.
PROPOSITION 6.1. For any bounded smooth function $f$,
\[ f(W_1) = E f(W_1) + \int_0^1 E\big[ f'(W_1) \,\big|\, \mathcal{F}_s \big]\, dW_s. \]
Now we regard the right side as a function of $B_t$ and $t$ and apply Itô's
formula. Since we know that it is a martingale, we only need to find out
its martingale part, which is very easy: just differentiate with respect to $B_t$
and integrate the derivative with respect to $B_t$. We have
\[ f(W_1) = E f(W_1) + \int_0^1 \left[ \frac{1}{\sqrt{2\pi(1-t)}} \int_{\mathbb{R}^1} f'(W_t + x)\, e^{-|x|^2/2(1-t)}\, dx \right] dW_t. \]
The difference between the integrand and the right side of (6.1) is simply
that $f$ is replaced by its derivative $f'$. This shows that
\[ f(W_1) = E f(W_1) + \int_0^1 E\big[ f'(W_1) \,\big|\, \mathcal{F}_t \big]\, dW_t. \]
The general case can be handled by an induction argument.
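Proposition 6.1 can be checked pathwise for a hypothetical test function outside the bounded class, $f(x) = x^2$: then $E[f'(W_1) \,|\, \mathcal{F}_s] = E[2W_1 \,|\, \mathcal{F}_s] = 2W_s$, and the representation reads $W_1^2 = 1 + \int_0^1 2W_s\, dW_s$ (note $Ef(W_1) = EW_1^2 = 1$). A discretized sketch:

```python
import numpy as np

# Pathwise check of W_1**2 = 1 + int_0^1 2 W_s dW_s using a left-point
# Riemann sum for the stochastic integral on a simulated Brownian path.
# The residual is |sum(dW**2) - 1|, which vanishes as the mesh -> 0.

rng = np.random.default_rng(4)

def representation_error(n_steps=10_000):
    """|f(W_1) - E f(W_1) - int 2 W dW| along one simulated path."""
    dw = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=n_steps)
    w = np.concatenate([[0.0], np.cumsum(dw)])
    stochastic_integral = np.sum(2 * w[:-1] * dw)
    return abs(w[-1] ** 2 - 1.0 - stochastic_integral)

errors = [representation_error() for _ in range(50)]   # all small
```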
THEOREM 6.2. Let $X \in L^2(\Omega, \mathcal{F}_1, P)$. Then there is a progressively measurable process $H$ such that
\[ X = EX + \int_0^1 H_s\, dW_s, \]
where
\[ h(x) = E f(x, W_{s_2} - W_{s_1}, \cdots, W_{s_n} - W_{s_1}), \]
is given by
\[ D_s F(w) = \sum_{i=1}^{l} F_{x_i}(w)\, I_{[0, s_i]}(s). \]
We have the following integration by parts formula.

THEOREM 6.3. Let $H : \Omega \to H$ be $\mathcal{F}_*$-adapted such that $E \exp\big( |H|_H^2 / 2 \big)$ is
finite. Then the following integration by parts formula holds for a cylinder function
$F$:
\[ E\, D_H F = E \langle DF, H \rangle = E\Big[ F \int_0^1 \dot{H}_s\, dW_s \Big]. \]
The above function is still not $C^2$ because $F_\epsilon'' = I_{[-\epsilon,\epsilon]}/\epsilon$, which is not continuous, but it is clear that $F_\epsilon(x) \to |x|$ and $F_\epsilon'(x) \to \operatorname{sgn}(x)$ as $\epsilon \to 0$. Now
let $\phi$ be a continuous function and define
\[ F_\phi(x) = \int_0^x du_1 \int_0^{u_1} \phi(u_2)\, du_2. \]
Itô's formula can be applied to $F_\phi(B_t)$ and we obtain
\[ F_\phi(B_t) = \int_0^t F_\phi'(B_s)\, dB_s + \frac{1}{2} \int_0^t \phi(B_s)\, ds. \]
Now for a fixed $\epsilon$ we let $\phi$ in the above formula be the continuous function
\[ \phi_n(x) = \begin{cases} 0, & \text{if } |x| \ge \epsilon + n^{-1}, \\ \epsilon^{-1}, & \text{if } |x| \le \epsilon, \\ \text{linear}, & \text{in the two remaining intervals.} \end{cases} \]
side converges to $|B_t|$ and the first term on the right side converges to the
stochastic integral
\[ \int_0^t \operatorname{sgn}(B_s)\, dB_s. \]
Hence the limit
\[ L_t = \lim_{\epsilon \to 0} \frac{1}{\epsilon} \int_0^t I_{(-\epsilon,\epsilon)}(B_s)\, ds \]
must exist and we have
\[ |B_t| = \int_0^t \operatorname{sgn}(B_s)\, dB_s + \frac{1}{2} L_t. \]
We see that $L_t$ can be interpreted as the amount of time Brownian motion
spends in the interval $(-\epsilon, \epsilon)$, properly normalized. It is called the local
time of Brownian motion $B$ at $x = 0$. Let
\[ W_t = \int_0^t \operatorname{sgn}(B_s)\, dB_s. \]
Then $W$ is a continuous martingale with quadratic variation process
\[ \langle W, W\rangle_t = \int_0^t |\operatorname{sgn}(B_s)|^2\, ds = t. \]
Note that Brownian motion spends zero amount of time at $x = 0$ because
$E\, I_{\{0\}}(B_s) = P\{B_s = 0\} = 0$ and
\[ E \int_0^t I_{\{0\}}(B_s)\, ds = \int_0^t E\, I_{\{0\}}(B_s)\, ds = 0. \]
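By Lévy's criterion, $W_t = \int_0^t \operatorname{sgn}(B_s)\, dB_s$ is itself a Brownian motion. This can be seen numerically (a sketch, approximating the integral by a left-point sum): the samples of $W_1$ should have mean $0$ and variance $1$.

```python
import numpy as np

# Approximate W_1 = int_0^1 sgn(B_s) dB_s by a left-point Riemann sum
# over many simulated paths, and check the Gaussian N(0, 1) moments
# predicted by Levy's characterization.  (np.sign(0) = 0, matching the
# convention that the time spent at 0 is negligible.)

rng = np.random.default_rng(5)

def sgn_integral_samples(n_paths=50_000, n_steps=400):
    db = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
    b = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(db, axis=1)], axis=1)
    return np.sum(np.sign(b[:, :-1]) * db, axis=1)   # one W_1 per path

w1 = sgn_integral_samples()
```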
We thus conclude that the reflecting Brownian motion $|B_t|$ is a submartingale
with the decomposition
\[ |B_t| = W_t + \frac{1}{2} L_t. \]
It is interesting to note that $W$ can be expressed in terms of reflecting Brownian motion by
\[ W_t = X_t - \frac{1}{2} L_t, \]
where
\[ (7.1) \qquad L_t = \lim_{\epsilon \to 0} \frac{1}{\epsilon} \int_0^t I_{[0,\epsilon]}(X_s)\, ds. \]
We now pose the question: Can X and L be expressed in terms of W? That
the answer to this question is affirmative is the content of the so-called Sko-
rokhod problem.
DEFINITION 7.1. Let $f : \mathbb{R}_+ \to \mathbb{R}$ be a continuous path such that $f(0) \ge 0$.
A pair of functions $(g, h)$ is the solution of the Skorokhod problem if
(1) $g(t) \ge 0$ for all $t \ge 0$;
(2) $h$ is increasing from $h(0) = 0$ and increases only when $g = 0$;
(3) $g = f + h$.
The main result is that the Skorokhod problem can be solved uniquely and
explicitly.
THEOREM 7.2. There exists a unique solution to the Skorokhod problem.
PROOF. It is interesting that the solution can be written down explicitly:
\[ h(t) = -\min_{0 \le s \le t} \big( f(s) \wedge 0 \big), \qquad g(t) = f(t) - \min_{0 \le s \le t} \big( f(s) \wedge 0 \big). \]
Let's assume that $f(0) = 0$ for simplicity. (If $f(0) > 0$, then $h(t) = 0$ and
$g(t) = f(t)$ before the first time $f$ reaches $0$, and after this time it is as if the
path starts from $0$.) The explicit solution in this case is
\[ h(t) = -\min_{0 \le s \le t} f(s), \qquad g(t) = f(t) - \min_{0 \le s \le t} f(s). \]
It is clear that $g(t) \ge 0$ for all $t$ and $h$ increases starting from $h(0) = 0$. The
equation $g = f + h$ is also obvious. We only need to show that $h$ increases
only when $g(t) = 0$. This means that as a Borel measure $h$ only charges the
zero set $\{t : g(t) = 0\}$. This requirement is often written as
\[ h(t) = \int_0^t I_{\{0\}}(g(s))\, dh(s). \]
Equivalently, it is enough to show that for any $t$ such that $g(t) > 0$ there
is a neighborhood $(t - \delta, t + \delta)$ of $t$ such that $h$ is constant there. This
should be clear, for if $g(t) > 0$, then $f(t) > \min_{0 \le s \le t} f(s)$, which means
that the minimum must be achieved at a point $\xi \in [0, t)$ and $f(t) > f(\xi)$.
By continuity a small change of $t$ will not alter this situation, which means
that $h = -f(\xi)$ in a neighborhood of $t$. More precisely, from $g(t) = f(t) - \min_{0 \le s \le t} f(s) > 0$ and the continuity of $f$, there is a positive $\delta$ such that
\[ \min_{t-\delta \le s \le t+\delta} f(s) > \min_{0 \le s \le t-\delta} f(s). \]
Hence
\[ \min_{0 \le s \le t+\delta} f(s) = \min\Big\{ \min_{0 \le s \le t-\delta} f(s),\ \min_{t-\delta \le s \le t+\delta} f(s) \Big\} = \min_{0 \le s \le t-\delta} f(s). \]
This means that $h(t + \delta) = h(t - \delta)$, hence $h$ must be constant on
$(t - \delta, t + \delta)$ because $h$ is increasing.
We now show that the solution to the Skorokhod problem is unique.
Suppose that $(g, h)$ and $(g_1, h_1)$ are two solutions and let $\xi = h - h_1$. It is
continuous and of bounded variation, hence
\[ \xi(t)^2 = 2 \int_0^t \xi(s)\, d\xi(s). \]
On the other hand, $\xi(s) = g(s) - g_1(s)$, hence
\[ \xi(t)^2 = 2 \int_0^t \{ g(s) - g_1(s) \}\, d\{ h(s) - h_1(s) \}. \]
There are four terms on the right side: $g(s)\, dh(s) = g_1(s)\, dh_1(s) = 0$ because $h$ increases only when $g = 0$ and $h_1$ increases only when $g_1 = 0$;
$g(s)\, dh_1(s) \ge 0$ and $g_1(s)\, dh(s) \ge 0$ because $g(s) \ge 0$ and $g_1(s) \ge 0$. Putting
these observations together we have $\xi(t)^2 \le 0$, which means that $\xi(t) = 0$.
This proves the uniqueness.
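The explicit solution translates directly into code (a sketch for a sampled path; the test path is an arbitrary illustrative choice):

```python
import numpy as np

# Skorokhod map for a sampled path f with f(0) >= 0:
#   h(t) = -min_{s<=t} (f(s) ^ 0),   g(t) = f(t) + h(t).
# np.minimum.accumulate computes the running minimum along the path.

def skorokhod_map(f):
    """Return (g, h) solving the Skorokhod problem for the sampled path f."""
    h = -np.minimum(np.minimum.accumulate(f), 0.0)
    g = f + h
    return g, h

# Example path: starts at 0.5, oscillates, and drifts downward so that
# the reflection term h actually kicks in.
t = np.linspace(0.0, 4.0, 2001)
f = 0.5 + np.sin(3 * t) - 0.7 * t
g, h = skorokhod_map(f)
```

The three defining properties hold on the grid: $g \ge 0$, $h$ starts at $0$ and is nondecreasing, $g = f + h$, and $h$ only increases at points where $g = 0$.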
If we apply the Skorokhod equation to Brownian motion by replacing $f$
with Brownian paths, we obtain some interesting results. We have shown
that
\[ |B_t| = W_t + \frac{1}{2} L_t, \]
where $W$ is a Brownian motion. From the solution of the Skorokhod problem we conclude that $|B|$ and $L$ are determined by $W$:
\[ (7.2) \qquad |B_t| = W_t - \min_{0 \le s \le t} W_s, \qquad \frac{1}{2} L_t = -\min_{0 \le s \le t} W_s. \]
have the same law, i.e., that of a reflecting Brownian motion. (2) We have
\[ \max_{0 \le s \le t} W_s = \lim_{\epsilon \to 0} \frac{1}{2\epsilon} \int_0^t I_{[0,\epsilon]}\Big( \max_{0 \le u \le s} W_u - W_s \Big)\, ds. \]
8. Brownian bridge
If we condition a Brownian motion to return to a fixed point $x$ at time
$t = 1$, we obtain a Brownian bridge from $o$ to $x$ with time horizon $1$. Let
\[ L_x(\mathbb{R}^n) = \{ w \in W_o(\mathbb{R}^n) : w_1 = x \}. \]
The law of a Brownian bridge from $o$ to $x$ in time $1$ is a probability measure
$\mu_x$ on $L_x(\mathbb{R}^n)$, which we will call the Wiener measure on $L_x(\mathbb{R}^n)$. Note that
$L_x(\mathbb{R}^n)$ is a subspace of $W_o(\mathbb{R}^n)$, thus $\mu_x$ is also a measure on $W_o(\mathbb{R}^n)$. By
definition, we can write intuitively
\[ \mu_x(C) = \mu\{ C \,|\, w_1 = x \}. \]
Here $\mu$ is the Wiener measure on $W_o(\mathbb{R}^n)$. The meaning of this suggestive
formula is as follows. If $F$ is a nice function measurable with respect to $\mathcal{B}_s$
with $s < 1$ and $f$ a measurable function on $\mathbb{R}^n$, then
\[ E^\mu\{ F f(X_1) \} = E^\mu\big\{ E^{\mu_{X_1}}(F)\, f(X_1) \big\}, \]
where
\[ p(t, y, x) = \Big( \frac{1}{2\pi t} \Big)^{n/2} e^{-|y - x|^2 / 2t} \]
is the transition density function of Brownian motion $X$. This being true for
all measurable $f$, we have, for all $F \in \mathcal{B}_s$,
\[ (8.1) \qquad E^{\mu_x} F = E^\mu\left[ F\, \frac{p(1-s, W_s, x)}{p(1, o, x)} \right]. \]
Therefore $\mu_x$ is absolutely continuous with respect to $\mu$ on $\mathcal{F}_s$ for any $s < 1$
and the Radon–Nikodym density is given by
\[ \frac{d\mu_x}{d\mu}\Big|_{\mathcal{F}_s}(w) = \frac{p(1-s, w_s, x)}{p(1, o, x)} = e_s. \]
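That $e_s$ really is a density, i.e. $E^\mu[e_s] = 1$, follows from the Chapman–Kolmogorov equation $E^\mu[p(1-s, W_s, x)] = p(1, o, x)$, and is easy to confirm numerically (a one-dimensional Monte Carlo sketch):

```python
import numpy as np

# Check E^mu[e_s] = 1 for the bridge density e_s = p(1-s, W_s, x)/p(1, 0, x)
# in dimension n = 1, sampling W_s ~ N(0, s) under the Wiener measure.

rng = np.random.default_rng(6)

def heat_kernel(t, y, x):
    """Gaussian transition density p(t, y, x) in one dimension."""
    return np.exp(-((y - x) ** 2) / (2 * t)) / np.sqrt(2 * np.pi * t)

def density_mean(s, x, n_paths=400_000):
    """Monte Carlo estimate of E^mu[e_s] for the bridge from 0 to x."""
    w_s = rng.normal(0.0, np.sqrt(s), size=n_paths)
    return (heat_kernel(1 - s, w_s, x) / heat_kernel(1.0, 0.0, x)).mean()

m = density_mean(0.5, 1.0)   # should be close to 1
```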
The process $\{e_s, 0 \le s < 1\}$ is necessarily a positive (local) martingale under the probability $\mu$. It therefore must have the form of an exponential
martingale, which can be found explicitly by computing the differential of
$\log e_s$. The density function $p(t, y, x)$ satisfies the heat equation
\[ \frac{\partial p}{\partial t} = \frac{1}{2} \Delta_y p \]
in $(t, y)$ for fixed $x$. This equation gives
\[ \Delta_y \log p = 2\, \frac{\partial \log p}{\partial t} - |\nabla_y \log p|^2. \]
Using this fact and Itô's formula we find easily that
\[ d \log e_s = \langle \nabla_y \log p(1-s, w_s, x), dw_s \rangle - \frac{1}{2} |\nabla_y \log p(1-s, w_s, x)|^2\, ds. \]
Hence $e_s$ is an exponential martingale of the form
\[ \frac{d\mu_x}{d\mu}\Big|_{\mathcal{B}_s} = \exp\Big( \int_0^s \langle V_u, dw_u \rangle - \frac{1}{2} \int_0^s |V_u|^2\, du \Big), \]
where
\[ V_s = \nabla_y \log p(1-s, w_s, x). \]
By Girsanov's theorem, under the probability $\mu_x$, the process
\[ B_s = W_s - \int_0^s \nabla_y \log p(1-\tau, W_\tau, x)\, d\tau, \qquad 0 \le s < 1, \]
is a Brownian motion. The explicit formula for $p(t, y, x)$ gives
\[ \nabla_y \log p(1-\tau, y, x) = -\frac{y - x}{1 - \tau}. \]
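Equivalently, under $\mu_x$ the coordinate process solves the bridge SDE $dW_s = \dfrac{x - W_s}{1 - s}\, ds + dB_s$, whose solutions are pinned to $x$ at $s = 1$. An Euler-scheme sketch (stopping one step short of $s = 1$ to avoid the singular drift; the step count and target $x$ are arbitrary illustrative choices):

```python
import numpy as np

# Euler simulation of the Brownian bridge SDE from 0 to x on [0, 1]:
#   dX_s = (x - X_s)/(1 - s) ds + dB_s,  X_0 = 0.
# The drift pulls the path toward x ever more strongly as s -> 1.

rng = np.random.default_rng(7)

def simulate_bridge(x, n_steps=2_000):
    ds = 1.0 / n_steps
    noise = rng.normal(0.0, np.sqrt(ds), size=n_steps - 1)
    X = 0.0
    for i, dB in enumerate(noise):
        s = i * ds
        X += (x - X) / (1.0 - s) * ds + dB
    return X            # value at s = 1 - ds, close to x

endpoints = np.array([simulate_bridge(2.0) for _ in range(200)])
```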
9. Fourth assignment
EXERCISE 4.1. Let $\phi$ be a strictly convex function. If both $N$ and $\phi(N)$
are continuous local martingales, then $N$ is trivial, i.e., there is a constant $C$
such that $N_t = C$ with probability $1$ for all $t$.
EXERCISE 4.2. Let $M$ be a continuous local martingale. Show that there
is a sequence of partitions $\Delta_1 \subset \Delta_2 \subset \Delta_3 \subset \cdots$ such that $|\Delta_n| \to 0$ and
with probability $1$ the following holds: for all $t \ge 0$,
\[ \lim_{n \to \infty} \sum_{i=1}^{\infty} \big( M_{t_i^n \wedge t} - M_{t_{i-1}^n \wedge t} \big)^2 = \langle M, M\rangle_t. \]
EXERCISE 4.3. Let $B$ be the standard Brownian motion. Then the reflecting Brownian motion $X_t = |B_t|$ is a Markov process. This means
\[ P\big\{ X_{t+s} \in C \,\big|\, \mathcal{F}_s^X \big\} = P\{ X_{t+s} \in C \,|\, X_s \}. \]
What is its transition density function
\[ q(t, x, y) = \frac{P\{ X_{t+s} \in dy \,|\, X_s = x \}}{dy}\,? \]
EXERCISE 4.4. Let $L_t$ be the local time of Brownian motion at $x = 0$.
Show that
\[ E L_t = \sqrt{\frac{8t}{\pi}}. \]
EXERCISE 4.5. Show that the Brownian bridge from $o$ to $x$ in time $1$ is a
Markov process whose transition density is
\[ q(s_1, y; s_2, z) = \frac{p(s_2 - s_1, y, z)\, p(1 - s_2, z, x)}{p(1 - s_1, y, x)}. \]