19 Weak Convergence and Characteristic Functions
Theorem 19.1 (Lévy's continuity theorem) Let $(\mu_n)$ be probability measures on $\mathbb{R}$ with characteristic functions $\hat\mu_n(u)=\int e^{iux}\,\mu_n(dx)$, and suppose that $\hat\mu_n(u)\to f(u)$ for every $u$, where $f$ is continuous at $0$. Then there is a probability measure $\mu$ with $\hat\mu=f$ and $\mu_n\Rightarrow\mu$; in particular,
$$\int e^{iux}\,\mu_n(dx)\to\int e^{iux}\,\mu(dx).$$

Proof: First we establish tightness. For $\alpha>0$, by Fubini's theorem,
$$\frac{1}{\alpha}\int_{-\alpha}^{\alpha}\left(1-\hat\mu_n(u)\right)du
=\frac{1}{\alpha}\int_{-\alpha}^{\alpha}\int\left(1-e^{iux}\right)\mu_n(dx)\,du
=\int\left(2-\frac{2\sin(\alpha x)}{\alpha x}\right)\mu_n(dx).$$
Recall that
$$1-\frac{\sin x}{x}\ge0
\qquad\text{and}\qquad
1-\frac{\sin x}{x}\ge1-\frac{1}{|x|}\ge\frac12\quad\text{if } |x|>2.$$
Hence
$$\frac{1}{\alpha}\int_{-\alpha}^{\alpha}\left(1-\hat\mu_n(u)\right)du
=2\int\left(1-\frac{\sin(\alpha x)}{\alpha x}\right)\mu_n(dx)
\ge2\int_{|x|>2/\alpha}\frac12\,\mu_n(dx)
=\mu_n\left(\left[-\tfrac{2}{\alpha},\tfrac{2}{\alpha}\right]^c\right).$$
Fix $\varepsilon>0$. As $f$ is continuous at $0$ and $f(0)=\lim_n\hat\mu_n(0)=1$, we may choose $\alpha$ such that
$$\frac{1}{\alpha}\int_{-\alpha}^{\alpha}\left(1-f(u)\right)du<\frac{\varepsilon}{2}.$$
Since $\frac{1}{\alpha}\int_{-\alpha}^{\alpha}(1-\hat\mu_n(u))\,du\to\frac{1}{\alpha}\int_{-\alpha}^{\alpha}(1-f(u))\,du$ by dominated convergence, there is $N$ such that $\forall n>N$,
$$\mu_n\left(\left[-\tfrac{2}{\alpha},\tfrac{2}{\alpha}\right]^c\right)
\le\frac{1}{\alpha}\int_{-\alpha}^{\alpha}\left(1-\hat\mu_n(u)\right)du<\varepsilon.$$
There are only finitely many $n$ before $N$. For each such $n$ there is $\alpha_n$ s.t. $\mu_n([-\alpha_n,\alpha_n]^c)<\varepsilon$. Let
$$b=\max\{\alpha_1,\dots,\alpha_N,2/\alpha\}.$$
Then $\mu_n([-b,b]^c)<\varepsilon$ for all $n$. This proves the tightness of $\{\mu_n\}$. Let $\mu_{n_k}\Rightarrow\mu$ weakly. Then $\hat\mu_{n_k}(u)\to\hat\mu(u)$. Thus $f(u)=\hat\mu(u)$.

It remains to show that $\mu_n\Rightarrow\mu$ weakly. For each convergent subsequence $\mu_{n_k'}\Rightarrow\mu'$, we have $\hat\mu'=f=\hat\mu$. Hence $\mu'=\mu$. This proves the convergence of $\mu_n$.
Example Let $X_n\sim\mathrm{Poisson}(n)$. Then $(X_n-n)/\sqrt n\Rightarrow N(0,1)$. Indeed,
$$\varphi_{\frac{X_n-n}{\sqrt n}}(u)
=e^{-iu\sqrt n}\,e^{n\left(e^{iu/\sqrt n}-1\right)}
=\exp\left(n\left(e^{iu/\sqrt n}-1\right)-iu\sqrt n\right)
=\exp\left(n\left(\frac{iu}{\sqrt n}-\frac{u^2}{2n}+o\!\left(\frac1n\right)\right)-iu\sqrt n\right)
=\exp\left(-\frac12u^2+o(1)\right)
\to\exp\left(-\frac12u^2\right).$$

Homework: 2, 3.
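The limit in the example can be checked numerically. Below is a minimal simulation sketch, not from the original notes (NumPy-based; the choices of $n$, sample size and seed are arbitrary), comparing the standardized $\mathrm{Poisson}(n)$ law with $N(0,1)$:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
n = 400
x = rng.poisson(lam=n, size=100_000)
z = (x - n) / np.sqrt(n)                          # standardized Poisson(n) sample

Phi = lambda c: 0.5 * (1 + erf(c / np.sqrt(2)))   # standard normal c.d.f.
for c in (-1.0, 0.0, 1.0, 2.0):
    print(f"P(Z <= {c:+.1f}): empirical {np.mean(z <= c):.4f}, normal {Phi(c):.4f}")
```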
20 The Laws of Large Numbers
Theorem 20.1 (Strong law of large numbers, SLLN) Let $(X_j)_{j\ge1}$ be i.i.d. Let $\mu=E(X_j)$ and $\sigma^2=\operatorname{Var}(X_j)<\infty$. Let $S_n=\sum_{i=1}^nX_i$. Then
$$\frac{S_n}{n}\to\mu\qquad\text{a.s. and in } L^2.$$
Proof: WLOG, assume $\mu=0$. Then
$$E\left(\left(\frac{S_n}{n}\right)^2\right)=\frac{1}{n^2}\operatorname{Var}(S_n)=\frac{1}{n^2}\,n\sigma^2=\frac{\sigma^2}{n}\to0,$$
so
$$\frac{S_n}{n}\ \xrightarrow{L^2}\ 0.$$
Next we prove the a.s. convergence, first along the squares. As
$$\sum_nE\left(\left(\frac{S_{n^2}}{n^2}\right)^2\right)=\sum_n\frac{\sigma^2}{n^2}<\infty,$$
we have
$$E\left(\sum_n\left(\frac{S_{n^2}}{n^2}\right)^2\right)<\infty,$$
and hence,
$$\frac{S_{n^2}}{n^2}\to0,\qquad\text{a.s.}$$
For general $n$, let $p(n)=\lfloor\sqrt n\rfloor$, so that $p(n)^2\le n<(p(n)+1)^2$. Then
$$\frac{S_n}{n}-\frac{p(n)^2}{n}\cdot\frac{S_{p(n)^2}}{p(n)^2}=\frac1n\sum_{j=p(n)^2+1}^{n}X_j.$$
So
$$E\left(\left(\frac{S_n}{n}-\frac{p(n)^2}{n}\cdot\frac{S_{p(n)^2}}{p(n)^2}\right)^2\right)
=\frac{1}{n^2}\left(n-p(n)^2\right)\sigma^2
\le\frac{1}{n^2}\left[(p(n)+1)^2-p(n)^2\right]\sigma^2
\le\frac{3\sqrt n\,\sigma^2}{n^2}=\frac{3\sigma^2}{n^{3/2}},$$
which is summable. Thus, as above,
$$\frac{S_n}{n}-\frac{p(n)^2}{n}\cdot\frac{S_{p(n)^2}}{p(n)^2}\to0,\qquad\text{a.s.}$$
As $p(n)^2/n\to1$ and $S_{p(n)^2}/p(n)^2\to0$ a.s., we have
$$\frac{S_n}{n}\to0\qquad\text{a.s.}$$
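A quick numerical illustration of Theorem 20.1 (a sketch added here, not from the notes; exponential variables with mean $\mu=2$, the sample size and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=10**6)    # i.i.d., mu = 2, sigma^2 = 4 < infinity
s = np.cumsum(x)
for n in (10**2, 10**4, 10**6):
    print(f"n = {n:>7}:  S_n/n = {s[n - 1] / n:.4f}   (mu = 2)")
```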
Note that the condition depends on $\sigma^2$, but the conclusion does not. We hope to relax the condition. The following theorem will be proved in Ch. 27.

Theorem 20.2 Let $(X_j)_{j\ge1}$ be i.i.d. and $\mu\in\mathbb{R}$. Then
$$\frac{S_n}{n}\to\mu\quad\text{a.s.}$$
if and only if $E|X_1|<\infty$ and $E(X_1)=\mu$.

21 The Central Limit Theorem
Theorem 21.1 (Central limit theorem) Let $(X_j)_{j\ge1}$ be i.i.d. with $E(X_j)=\mu$ and $\operatorname{Var}(X_j)=\sigma^2$, $0<\sigma^2<\infty$. Let
$$Y_n=\frac{S_n-n\mu}{\sigma\sqrt n}.$$
Then $Y_n\Rightarrow N(0,1)$.

Proof: WLOG, assume $\mu=0$. Then
$$\varphi_{Y_n}(u)=\left(\varphi_{X_1}\!\left(\frac{u}{\sigma\sqrt n}\right)\right)^{\!n},
\qquad\text{where}\qquad
\varphi_{X_1}(u)=1-\frac{\sigma^2u^2}{2}+o(u^2),\quad\text{as } u\to0.$$
Hence
$$\log\varphi_{Y_n}(u)=n\log\left(1-\frac{\sigma^2u^2}{2\sigma^2n}+o\!\left(\frac1n\right)\right)=-\frac12u^2+o(1).$$
Therefore,
$$\varphi_{Y_n}(u)\to e^{-\frac12u^2}.$$
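The characteristic-function convergence in the proof can itself be observed by Monte Carlo. The following sketch is not part of the notes (the uniform distribution, $n$, number of repetitions and seed are arbitrary choices); it estimates $\varphi_{Y_n}(u)=Ee^{iuY_n}$ empirically:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 50_000
x = rng.uniform(0.0, 1.0, size=(reps, n))     # mu = 1/2, sigma^2 = 1/12
yn = (x.sum(axis=1) - n / 2) / np.sqrt(n / 12)
for u in (0.5, 1.0, 2.0):
    phi = np.mean(np.exp(1j * u * yn))        # Monte Carlo estimate of E e^{iuY_n}
    print(f"u = {u}:  Re phi_Yn(u) = {phi.real:.4f},  e^(-u^2/2) = {np.exp(-u**2 / 2):.4f}")
```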
Next, we consider the case that $(X_j)_{j\ge1}$ are not i.i.d.

Theorem 21.2 Let $(X_j)_{j\ge1}$ be independent with $E(X_j)=0$, $E(X_j^2)=\sigma_j^2$. Assume
$$\sup_jE\{|X_j|^{2+\varepsilon}\}<\infty\ \text{ for some }\varepsilon>0
\qquad\text{and}\qquad
\sum_j\sigma_j^2=\infty.$$
Then
$$\frac{S_n}{\sqrt{\sum_{j=1}^n\sigma_j^2}}\Rightarrow Z,\qquad Z\sim N(0,1).$$

Proof sketch: Let $a_n=\sqrt{\sum_{j=1}^n\sigma_j^2}$. Note that
$$\varphi_{S_n/a_n}(u)=\prod_{j=1}^n\left(1-\frac{\sigma_j^2u^2}{2a_n^2}+o\!\left(\frac{\sigma_j^2}{a_n^2}\right)\right),$$
where the error is uniform in $j$ by the $(2+\varepsilon)$-moment condition. Hence
$$\log\varphi_{S_n/a_n}(u)=\sum_{j=1}^n\log\left(1-\frac{\sigma_j^2u^2}{2a_n^2}+o\!\left(\frac{\sigma_j^2}{a_n^2}\right)\right)=-\frac12u^2+o(1).$$
The result extends to higher dimensions: if $(X_j)_{j\ge1}$ are i.i.d. $\mathbb{R}^d$-valued with mean $\mu$ and covariance matrix $Q$, then
$$\frac{S_n-n\mu}{\sqrt n}\Rightarrow Z\sim N(0,Q).$$
Proof: Similar to the 1-dimensional case.
Example Let $(X_j)_{j\ge1}$ be i.i.d. Bernoulli$(p)$, so that $S_n=\sum_{j=1}^nX_j\sim B(n,p)$, $\mu=p$ and $\sigma^2=p-p^2=p(1-p)$. Then
$$\text{SLLN}:\quad\frac{S_n}{n}\to p\ \text{ a.s.};
\qquad
\text{CLT}:\quad\frac{S_n-np}{\sqrt{np(1-p)}}\ \xrightarrow{D}\ N(0,1).$$
Example 21.5 Let $(X_j)_{j\ge1}$ be i.i.d. r.v.s in $L^2$ with common distribution $F$ (unknown). We would like to estimate $F$ using $X_1,\dots,X_n$. Recall
$$F(x)=P(X_j\le x)=E\,1_{X_j\le x}.$$
Define $Y_j=1_{X_j\le x}$. Then
$$\frac1n\sum_{j=1}^nY_j\to F(x)\quad\text{a.s.}\quad(\text{SLLN}).$$
Let
$$F_n(x)=\frac1n\sum_{j=1}^nY_j.$$
How fast is the convergence? By the CLT,
$$\sqrt n\left(F_n(x)-F(x)\right)=\frac{1}{\sqrt n}\sum_{j=1}^n\left(Y_j-E(Y_j)\right)\ \xrightarrow{D}\ N(0,\sigma^2(x)),$$
where
$$\sigma^2(x)=\operatorname{Var}(Y_j)=F(x)(1-F(x)).$$
A quick simulation check follows.
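This sketch is not from the notes (assumed setup: $\mathrm{Uniform}(0,1)$ samples, so that $F(x_0)=x_0$; the point $x_0$, sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
x0, n, reps = 0.3, 2_000, 2_000
samples = rng.uniform(0.0, 1.0, size=(reps, n))   # so F(x0) = x0
Fn = np.mean(samples <= x0, axis=1)               # F_n(x0), one value per run
err = np.sqrt(n) * (Fn - x0)
print("var of sqrt(n)(F_n - F):", err.var())
print("F(x0)(1 - F(x0))       :", x0 * (1 - x0))
```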
HW: 2, 3, 5, 8, 11
22 $L^2$ and Hilbert Spaces

On $L^2=L^2(\Omega,\mathcal{A},P)$ define the inner product $\langle X,Y\rangle=E(XY)$ and the norm $\|X\|=\langle X,X\rangle^{1/2}$. Note that $\|X-Y\|=0$ if and only if $X=Y$ a.s., so elements of $L^2$ are identified when they are a.s. equal.
Triangle inequality: by the Cauchy-Schwarz inequality,
$$\|X+Y\|^2=E(X^2)+E(Y^2)+2E(XY)
\le E(X^2)+E(Y^2)+2\sqrt{E(X^2)}\sqrt{E(Y^2)}
=\left(\sqrt{E(X^2)}+\sqrt{E(Y^2)}\right)^2.$$
To summarize, we have

Theorem 22.1 $L^2$ is a normed vector space with inner product $\langle\cdot,\cdot\rangle$ and $\|\cdot\|=\langle\cdot,\cdot\rangle^{1/2}$.
Definition 22.2 1) A normed space is complete if every Cauchy sequence $X_n$ (i.e. $\|X_n-X_m\|\to0$ as $n,m\to\infty$) has a limit.
2) A Hilbert space $H$ is a complete normed vector space with an inner product satisfying
$$\|x\|=\langle x,x\rangle^{1/2},\qquad x\in H.$$

Theorem 22.3 $L^2$ is a Hilbert space.
Proof: Let $(X_n)$ be a Cauchy sequence in $L^2$. Choose $n_k\uparrow\infty$ s.t.
$$\|X_{n_{k+1}}-X_{n_k}\|\le\frac{1}{2^k}.$$
Define
$$Y_m(\omega)=\sum_{k=1}^m|X_{n_{k+1}}(\omega)-X_{n_k}(\omega)|.$$
Then
$$\|Y_m\|\le\sum_{k=1}^m\|X_{n_{k+1}}-X_{n_k}\|\le\sum_{k=1}^\infty\frac{1}{2^k}=1.$$
Let $Y=\lim_mY_m$. By MCT,
$$E(Y^2)=\lim_mE(Y_m^2)\le1.$$
So $Y<\infty$ a.s., and hence the series
$$X_{n_1}+\sum_{j=1}^\infty\left(X_{n_{j+1}}-X_{n_j}\right)$$
converges absolutely a.s.; call its limit $X$. Thus, by Fatou's lemma,
$$\|X-X_{n_k}\|\le\lim_{m\to\infty}\sum_{j=k}^m\|X_{n_{j+1}}-X_{n_j}\|\le\frac{1}{2^{k-1}}\to0.$$
Hence $X_{n_k}\to X$ in $L^2$. Note that
$$\|X_n-X\|\le\|X_n-X_{n_k}\|+\|X_{n_k}-X\|.$$
Letting $n,n_k\to\infty$, we see that $\|X_n-X\|\to0$.
Next, we introduce some basic properties of Hilbert spaces. Throughout, $H$ is a Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$, and scalars are real numbers.

Definition 22.4 Two vectors $x,y\in H$ are orthogonal if $\langle x,y\rangle=0$. A vector $x$ is orthogonal to a set of vectors $\Gamma$ if
$$\langle x,y\rangle=0,\qquad\forall y\in\Gamma;$$
in this case we write $x\in\Gamma^\perp$.
Let $L$ be a closed subspace of $H$, $x\in H$, and let $(y_n)\subset L$ satisfy $\|y_n-x\|\to d(x,L):=\inf_{y\in L}\|y-x\|$. By the parallelogram law, and since $\frac{y_n+y_m}{2}\in L$,
$$\|y_n-y_m\|^2=2\|y_n-x\|^2+2\|y_m-x\|^2-4\left\|\frac{y_n+y_m}{2}-x\right\|^2
\le2\|y_n-x\|^2+2\|y_m-x\|^2-4d(x,L)^2\to0.$$
Hence $(y_n)$ is Cauchy and converges to some $\pi x\in L$, the projection of $x$ onto $L$, which attains the distance $d(x,L)$ and satisfies
$$\langle x-\pi x,z\rangle=0,\qquad\forall z\in L.$$
Corollary 22.11
$$x=\pi x+(x-\pi x),\qquad\pi x\in L,\ x-\pi x\in L^\perp,$$
is the unique decomposition of $x$ into a vector in $L$ plus a vector in $L^\perp$.

Corollary 22.12 i)
$$\langle\pi x,y\rangle=\langle x,\pi y\rangle=\langle\pi x,\pi y\rangle.$$
ii)
$$\pi(x+y)=\pi x+\pi y.$$
Proof: i)
$$\langle\pi x,y\rangle=\langle\pi x,\pi y+(y-\pi y)\rangle=\langle\pi x,\pi y\rangle.$$
Similarly,
$$\langle x,\pi y\rangle=\langle\pi x,\pi y\rangle.$$
ii) As
$$x=x_1+x_2,\qquad x_1\in L,\ x_2\in L^\perp$$
and
$$y=y_1+y_2,\qquad y_1\in L,\ y_2\in L^\perp,$$
we have
$$x+y=(x_1+y_1)+(x_2+y_2),\qquad x_1+y_1\in L,\ x_2+y_2\in L^\perp.$$
Hence
$$\pi(x+y)=x_1+y_1=\pi x+\pi y.$$
Finally, we give a characterization of the projection operator.

Theorem 22.13 Suppose $T:H\to L$ satisfies: $\forall x\in H$, $x-Tx\in L^\perp$. Then $T=\pi$.

Proof: For any $x\in H$,
$$x=Tx+(x-Tx),\qquad Tx\in L,\ x-Tx\in L^\perp.$$
By the uniqueness of the decomposition (Corollary 22.11), $Tx=\pi x$.
23 Conditional Expectation

Let $X$ take countably many values $\{x_j\}$. For $P(X=x_j)>0$, define the probability measure
$$Q(\Lambda)=P(\Lambda\,|\,X=x_j),\qquad\Lambda\in\mathcal{A},$$
and set, for $Y$ taking countably many values $y_k$,
$$E(Y|X=x_j)=E_Q(Y)=\sum_ky_k\,Q(Y=y_k)=\sum_ky_k\,P(Y=y_k\,|\,X=x_j).$$
In general we define
$$E(Y|X=x)=\begin{cases}E_Q(Y)&\text{if } P(X=x)>0,\\ \text{any value}&\text{if } P(X=x)=0.\end{cases}$$
Example Let $X\sim\mathrm{Poisson}(\lambda)$ and, given $X=n$, let $S\sim B(n,p)$.

Solution:
$$E(S|X=n)=np,\qquad\text{so}\qquad E(S|X)=pX.$$
Moreover, by Bayes' rule, for $n\ge k$,
$$P(X=n\,|\,S=k)
=\frac{\binom{n}{k}p^k(1-p)^{n-k}\frac{\lambda^n}{n!}e^{-\lambda}}
{\sum_{m=k}^\infty\binom{m}{k}p^k(1-p)^{m-k}\frac{\lambda^m}{m!}e^{-\lambda}}
=\frac{\frac{1}{(n-k)!}(1-p)^n\lambda^n}
{\sum_{m=k}^\infty\frac{1}{(m-k)!}(1-p)^m\lambda^m}
=\frac{\frac{1}{(n-k)!}(1-p)^n\lambda^n}{(1-p)^k\lambda^ke^{(1-p)\lambda}}
=e^{-(1-p)\lambda}\,\frac{((1-p)\lambda)^{n-k}}{(n-k)!}.$$
Thus, given $S=k$, $X-k\sim\mathrm{Poisson}((1-p)\lambda)$.
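A simulation sketch of this conclusion (added here, not from the notes; the parameter values and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, p, k, reps = 8.0, 0.4, 3, 400_000
X = rng.poisson(lam, size=reps)
S = rng.binomial(X, p)                        # S | X = n  ~  B(n, p)
cond = X[S == k] - k                          # X - k on the event {S = k}
print("mean:", cond.mean(), " var:", cond.var(), " (1-p)*lam =", (1 - p) * lam)
```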
Proposition $Y$ is $\sigma(X)$-measurable (where $X:\Omega\to\mathbb{R}^n$) if and only if $Y=f(X)$ for some Borel function $f$.

Proof: $\Leftarrow$: $\forall B\in\mathcal{B}$,
$$Y^{-1}(B)=X^{-1}(f^{-1}(B))\in X^{-1}(\mathcal{B}^n)=\sigma(X).$$
$\Rightarrow$: Suppose $Y=1_A$ is $\sigma(X)$-measurable; then $A\in\sigma(X)$. There is $B\in\mathcal{B}^n$ such that $A=X^{-1}(B)$. Hence $Y=1_B(X)$, i.e. $f=1_B$. If $Y$ is simple, then
$$Y=\sum_{i=1}^ka_i1_{A_i},\qquad\text{take}\qquad f=\sum_{i=1}^ka_i1_{B_i}.$$
If $Y\ge0$, take simple $\sigma(X)$-measurable $Y_n\uparrow Y$ with $Y_n=f_n(X)$. Then
$$Y=\lim_nY_n=\lim_nf_n(X)=f(X),\qquad f:=\limsup_nf_n.$$
For general $Y$, we have
$$Y=Y^+-Y^-=f_1(X)-f_2(X).$$
Definition For $Y\in L^2(\Omega,\mathcal{A},P)$, let $E(Y|X)$ be the projection of $Y$ onto the closed subspace $L^2(\Omega,\sigma(X),P)$:
$$E\big((Y-E(Y|X))Z\big)=0,\qquad\forall Z\in L^2(\Omega,\sigma(X),P).\qquad(23.1)$$
More generally, for a sub-$\sigma$-field $\mathcal{G}\subset\mathcal{A}$, $E(Y|\mathcal{G})$ is the projection of $Y$ onto $L^2(\Omega,\mathcal{G},P)$:
$$E\big((Y-E(Y|\mathcal{G}))Z\big)=0,\qquad\forall Z\in L^2(\Omega,\mathcal{G},P).\qquad(23.2)$$
Theorem (basic properties) For $Y\in L^2$: (a) $Y\ge0$ implies $E(Y|\mathcal{G})\ge0$ a.s.; (b) $E(Y|X)=f(X)$ for some Borel $f$; (c) $E(E(Y|\mathcal{G}))=E(Y)$; (d) linearity: $E(aY+bY'|\mathcal{G})=aE(Y|\mathcal{G})+bE(Y'|\mathcal{G})$.

Proof: (b) is proved in the previous theorem. (c) follows from (23.2) by taking $Z=1$. (d) follows from the properties of the projection operator. Now we prove (a). Note that $\forall Z\ge0$ with $Z\in L^2(\Omega,\mathcal{G},P)$, we have
$$E(E(Y|\mathcal{G})Z)=E(YZ)\ge0.$$
Take $Z=1_{E(Y|\mathcal{G})<0}$. Then
$$E(E(Y|\mathcal{G})Z)\le0.$$
Hence $E(Y|\mathcal{G})\ge0$ a.s.

Now we define $E(Y|\mathcal{G})$ when $Y\in L^1(\Omega,\mathcal{A},P)$. First we consider the case $Y\ge0$. Note that $L^2\ni Y\wedge n\uparrow Y$. Then $E(Y\wedge n|\mathcal{G})\uparrow$ by (a). We define
$$E(Y|\mathcal{G})=\lim_{n\to\infty}E(Y\wedge n|\mathcal{G}).$$
In general, we define
$$E(Y|\mathcal{G})=E(Y^+|\mathcal{G})-E(Y^-|\mathcal{G}).$$
For $Y\in L^1$, the defining property becomes: $E(Y|\mathcal{G})$ is the a.s. unique $\mathcal{G}$-measurable integrable r.v. such that
$$E(E(Y|\mathcal{G})Z)=E(YZ)\qquad(23.3)$$
for all bounded $\mathcal{G}$-measurable $Z$. For the uniqueness, if $Y'$ and $Y''$ are two candidates, taking $Z=1_{Y'>Y''}$ in (23.3) gives
$$E\big((Y'-Y'')1_{Y'>Y''}\big)=0,\qquad\text{hence}\qquad(Y'-Y'')1_{Y'>Y''}=0\quad\text{a.s.};$$
by symmetry, $Y'=Y''$ a.s.
Example 23.13 Let $(X,Z)$ be r.v.s with joint density $f(x,z)$. Let $g$ be a bounded function and $Y=g(Z)$. Calculate $E(Y|X)$.

Solution: Let
$$f_{X=x}(z)=\frac{f(x,z)}{\int f(x,z)\,dz}$$
be the conditional p.d.f. Let $E(Y|X)=h(X)$. Then for every bounded Borel $k(x)$,
$$E(h(X)k(X))=\int h(x)k(x)f_X(x)\,dx=\int\!\!\int h(x)k(x)f(x,z)\,dz\,dx,$$
while
$$E(Yk(X))=\int\!\!\int g(z)k(x)f(x,z)\,dz\,dx.$$
Since $f_X(x)=\int f(x,z)\,dz$ and the two expressions agree for all $k$, we get
$$h(x)=\int g(z)f_{X=x}(z)\,dz.$$
Jensen's inequality (simple case): let $\varphi$ be convex and $X=\sum_{i=1}^na_i1_{A_i}$ with $(A_i)$ a partition of $\Omega$. Then
$$E(X|\mathcal{G})=\sum_{i=1}^na_i\,E(1_{A_i}|\mathcal{G}).$$
Note that
$$\sum_{i=1}^nE(1_{A_i}|\mathcal{G})=1,\qquad E(1_{A_i}|\mathcal{G})\ge0.$$
Thus, by convexity,
$$\varphi(E(X|\mathcal{G}))\le\sum_{i=1}^n\varphi(a_i)E(1_{A_i}|\mathcal{G})=E(\varphi(X)|\mathcal{G}).$$
The general case follows by approximation.
Hölder's inequality: let $p,q>1$ with
$$\frac1p+\frac1q=1.$$
Then $E|XY|\le(E|X|^p)^{1/p}(E|Y|^q)^{1/q}$.

Proof: We may assume $X,Y\ge0$ and $C:=E(X^p)\in(0,\infty)$. Define a probability measure $Q$ by
$$Q(A)=\frac1CE(1_AX^p),\qquad\text{so that}\qquad E_Q(Z)=\frac1CE(ZX^p).$$
Note that
$$XY=YX^{1-p}1_{X>0}\cdot X^p.$$
Let $Z=YX^{1-p}1_{X>0}$. Note that $|x|^q$ is convex. Thus, by Jensen's inequality under $Q$,
$$\frac{1}{C^q}\left(E(XY)\right)^q=\left(\frac1CE(ZX^p)\right)^q=(E_QZ)^q
\le E_Q(Z^q)=\frac1CE\left(Y^qX^{(1-p)q}X^p1_{X>0}\right)\le\frac1CE(Y^q),$$
since $(1-p)q+p=0$. Hence
$$E(XY)\le C^{1/p}(EY^q)^{1/q}=(E|X|^p)^{1/p}(E|Y|^q)^{1/q}.$$
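A quick numerical sanity check of Hölder's inequality (a sketch, not from the notes; the exponents $p=3$, $q=3/2$, the half-normal samples and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 3.0, 1.5                               # 1/p + 1/q = 1
x = np.abs(rng.normal(size=100_000))
y = np.abs(rng.normal(size=100_000))
lhs = np.mean(x * y)
rhs = np.mean(x**p) ** (1 / p) * np.mean(y**q) ** (1 / q)
print(f"E(XY) = {lhs:.4f}  <=  (E X^p)^(1/p) (E Y^q)^(1/q) = {rhs:.4f}")
```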
24 Martingales

Definition A sequence $(X_n)$ of integrable r.v.s adapted to a filtration $(\mathcal{F}_n)$ is a martingale if, for all $m\le n$,
$$E(X_n|\mathcal{F}_m)=X_m,\qquad\text{a.s.}\qquad(24.5)$$
It is a submartingale (resp. supermartingale) if $\ge$ (resp. $\le$) holds in (24.5). If each $f_k$ is $\mathcal{F}_{k-1}$-measurable and bounded, the martingale transform
$$(f\cdot X)_n=\sum_{k=1}^nf_k(X_k-X_{k-1})$$
of a martingale $X$ is again a martingale.

Theorem 24.4 (optional sampling) Let $(X_n)$ be a martingale and let $\sigma\le\tau\le N$ be bounded stopping times. Then
$$E(X_\tau|\mathcal{F}_\sigma)=X_\sigma\qquad\text{a.s.}\qquad(24.6)$$
For a submartingale (resp. supermartingale), $\ge$ (resp. $\le$) holds.

Proof sketch: For stopping times $\rho\le\rho'\le N$ and a submartingale $X$,
$$E(X_{\rho'})-E(X_\rho)=\sum_{k=1}^NE\left(1_{\rho<k\le\rho'}E(X_k-X_{k-1}|\mathcal{F}_{k-1})\right)\ge0,$$
since $\{\rho<k\le\rho'\}\in\mathcal{F}_{k-1}$. For $B\in\mathcal{F}_\sigma$, apply this to $\rho=\sigma1_B+N1_{B^c}$ and $\rho'=\tau1_B+N1_{B^c}$:
$$E(X_\sigma1_B)+E(X_N1_{B^c})\le E(X_\tau1_B)+E(X_N1_{B^c}),$$
i.e. $E(X_\sigma1_B)\le E(X_\tau1_B)$. This proves (24.6). The case for supermartingales can be proved similarly.
Next, we give some estimates on the probabilities related to submartingales. The corollaries of these estimates will be very important.

Theorem 24.5 Let $(X_n)_{n\le N}$ be a submartingale. Then for every $\lambda>0$ and $N\in\mathbb{N}$,
$$\lambda\,P\left(\max_{n\le N}X_n\ge\lambda\right)\le E\left(X_N1_{\max_{n\le N}X_n\ge\lambda}\right)\le E(|X_N|).$$

Proof: Let
$$\tau=\min\{n\le N:X_n\ge\lambda\}$$
with the convention that $\min\emptyset=N$. Then $\tau\in\mathcal{S}_N$, the set of stopping times bounded by $N$. By (24.6), we have
$$E(X_N)\ge E(X_\tau)\ge\lambda\,P\left(\max_{n\le N}X_n\ge\lambda\right)+E\left(X_N1_{\max_{n\le N}X_n<\lambda}\right).$$
Thus
$$\lambda\,P\left(\max_{n\le N}X_n\ge\lambda\right)\le E\left(X_N1_{\max_{n\le N}X_n\ge\lambda}\right)\le E(|X_N|).$$
Corollary 24.6 Let $(X_n)$ be a martingale and $p\ge1$. Then
$$\lambda^p\,P\left(\max_{n\le N}|X_n|\ge\lambda\right)\le E(|X_N|^p)\qquad(24.7)$$
and, for $p>1$,
$$E\left(\max_{n\le N}|X_n|^p\right)\le\left(\frac{p}{p-1}\right)^{\!p}E(|X_N|^p).\qquad(24.8)$$
Proof: (24.7) follows from Theorem 24.5 applied to the submartingale $(|X_n|^p)$. For (24.8), let $Y=\max_{n\le N}|X_n|$ (which we may assume bounded, replacing $Y$ by $Y\wedge c$ and letting $c\uparrow\infty$). Hence, using Theorem 24.5 applied to $(|X_n|)$,
$$E(Y^p)=E\int_0^Yp\lambda^{p-1}\,d\lambda
=p\int_0^\infty\lambda^{p-1}P(Y\ge\lambda)\,d\lambda
\le p\int_0^\infty\lambda^{p-2}E\left(1_{Y\ge\lambda}|X_N|\right)d\lambda
=\frac{p}{p-1}E\left(Y^{p-1}|X_N|\right)
\le\frac{p}{p-1}\left(E(|X_N|^p)\right)^{1/p}\left(E(Y^p)\right)^{(p-1)/p},$$
where the last inequality follows from Hölder's inequality. (24.8) then follows easily by dividing both sides by $(E(Y^p))^{(p-1)/p}$.
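Both bounds of Corollary 24.6 can be observed on a simple random walk, which is a martingale. A simulation sketch (added here; the parameters and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
N, reps, lam = 200, 20_000, 30.0
X = np.cumsum(rng.choice([-1.0, 1.0], size=(reps, N)), axis=1)  # simple random walk
Mmax = np.max(np.abs(X), axis=1)
print("P(max|X_n| >= lam)      :", np.mean(Mmax >= lam))
print("bound  E|X_N| / lam     :", np.mean(np.abs(X[:, -1])) / lam)
print("E(max|X_n|^2)           :", np.mean(Mmax**2))
print("bound  4 E(X_N^2), p = 2:", 4 * np.mean(X[:, -1] ** 2))
```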
Next, we consider the limit for submartingales. Let $(X_n)_{n\in\mathbb{N}}$ be a submartingale and $a<b$. Define $\sigma_0=\tau_0=0$ and, for $n\ge1$,
$$\sigma_n=\min\{m\ge\tau_{n-1}:X_m\le a\},\qquad
\tau_n=\min\{m\ge\sigma_n:X_m\ge b\}.\qquad(24.9)$$
Then $(\sigma_n)$ and $(\tau_n)$ are two increasing sequences of stopping times, and the number of upcrossings of the interval $[a,b]$ by $\{X_n:0\le n\le N\}$ is
$$U_N^X(a,b)=\max\{n:\tau_n\le N\}.$$

Theorem 24.7 Suppose that $(X_n)_{n\in\mathbb{N}}$ is a submartingale. Then, $\forall N\in\mathbb{N}$ and $a<b$, we have
$$E\,U_N^X(a,b)\le\frac{1}{b-a}E\{(X_N-a)^+-(X_0-a)^+\}.$$
Proof: Let $Y_n=(X_n-a)^+$; then $Y$ is again a submartingale. For $n$ large enough,
$$Y_N-Y_0=\sum_{k=1}^n\left(Y_{\tau_k\wedge N}-Y_{\sigma_k\wedge N}\right)
+\sum_{k=1}^n\left(Y_{\sigma_k\wedge N}-Y_{\tau_{k-1}\wedge N}\right)
\ge(b-a)\,U_N^X(a,b)+\sum_{k=1}^n\left(Y_{\sigma_k\wedge N}-Y_{\tau_{k-1}\wedge N}\right).$$
Therefore, since $E(Y_{\sigma_k\wedge N}-Y_{\tau_{k-1}\wedge N})\ge0$ by optional sampling,
$$E(Y_N-Y_0)\ge(b-a)\,E\,U_N^X(a,b),$$
which is the claimed bound.

Theorem 24.8 (submartingale convergence) Let $(X_n)$ be a submartingale with $\sup_NE(X_N^+)<\infty$. Then $X_\infty=\lim_nX_n$ exists a.s. and is integrable.

Proof: For $r<r'$, by Theorem 24.7,
$$E\,U_\infty^X(r,r')\le\frac{1}{r'-r}\lim_{N\to\infty}E\left((X_N-r)^+-(X_0-r)^+\right)<\infty,$$
so $U_\infty^X(r,r')<\infty$ a.s. Hence
$$P\left(\liminf_nX_n<\limsup_nX_n\right)
=P\left(\cup_{r<r',\,r,r'\in\mathbb{Q}}\{U_\infty^X(r,r')=\infty\}\right)=0,$$
so the limit $X_\infty$ exists a.s. By Fatou's lemma, $E|X_\infty|\le\liminf_nE|X_n|<\infty$. Hence $X_\infty$ is integrable.
Finally, we consider continuous time martingales.

Lemma 24.9 Let $(X_t)_{t\ge0}$ be a submartingale. Then $\forall T>0$,
$$P\left(\sup_{t\in\mathbb{Q}\cap[0,T]}|X_t|<\infty\right)=1$$
and
$$P\left(\forall t\ge0,\ \lim_{s\in\mathbb{Q}_+,\,s\downarrow t}X_s\ \text{and}\ \lim_{s\in\mathbb{Q}_+,\,s\uparrow t}X_s\ \text{exist}\right)=1.$$

Proof: Let $t_1,t_2,\dots$ be an enumeration of $\mathbb{Q}\cap[0,T]$, and let $Y^n$ denote $X$ restricted to $\{t_1,\dots,t_n\}$ (relabeled in increasing order), a discrete submartingale. By Theorem 24.5 and Theorem 24.7,
$$\lambda\,P\left(\max_{1\le i\le n}|X_{t_i}|\ge\lambda\right)\le2E|X_T|+E|X_0|
\qquad\text{and}\qquad
E\,U_n^Y(a,b)\le\frac{1}{b-a}E(Y_n-a)^+\le\frac{1}{b-a}E(X_T-a)^+.$$
Taking $n\to\infty$, we have
$$P\left(\sup_{t\in\mathbb{Q}\cap[0,T]}|X_t|\ge\lambda\right)\le\frac{1}{\lambda}\left(2E|X_T|+E|X_0|\right)\qquad(24.10)$$
and
$$E\,U^{X|_{\mathbb{Q}\cap[0,T]}}(a,b)\le\frac{1}{b-a}E(X_T-a)^+.\qquad(24.11)$$
The conclusion of the lemma follows from (24.10) and (24.11) by letting $\lambda$ and $a<b$ run over positive integers and pairs of rationals respectively.
Theorem 24.10 Let $(X_t)_{t\ge0}$ be a submartingale such that $t\mapsto E(X_t)$ is right-continuous. Define, for $t\ge0$,
$$\tilde X_t=\lim_{r\in\mathbb{Q},\,r\downarrow t}X_r$$
(which exists a.s. by Lemma 24.9). Then $\tilde X$ is a submartingale with càdlàg (right-continuous with left limits) paths and $\tilde X_t=X_t$ a.s. for every $t$.

Proof sketch: For $B\in\mathcal{F}_t$ and $s\ge t$, by uniform integrability along decreasing rational sequences,
$$E(\tilde X_s1_B)=\lim_{r'\in\mathbb{Q},\,r'\downarrow s}E(X_{r'}1_B),$$
so the submartingale property of $X$ passes to $\tilde X$:
$$E(\tilde X_s|\mathcal{F}_t)\ge\tilde X_t\quad\text{a.s.}$$
In particular $E(\tilde X_t1_B)\ge E(X_t1_B)$, $\forall B\in\mathcal{F}_t$, while the right-continuity of $t\mapsto E(X_t)$ gives $E(\tilde X_t)=E(X_t)$; hence $\tilde X_t=X_t$ a.s.

The process $\tilde X$ in Theorem 24.10 is called a càdlàg modification of $X$. From now on, we always take càdlàg versions of such submartingales.
The following theorem is a consequence of Corollary 24.6.

Theorem 24.11 Let $(X_t)_{t\ge0}$ be a right-continuous martingale such that $E(|X_t|^p)<\infty$, $\forall t\ge0$, for some $p>1$. Then for every $t\ge0$ and $\lambda>0$,
$$\lambda^p\,P\left(\max_{s\le t}|X_s|\ge\lambda\right)\le E(|X_t|^p)\qquad(24.12)$$
and
$$E\left(\max_{s\le t}|X_s|^p\right)\le\left(\frac{p}{p-1}\right)^{\!p}E(|X_t|^p).\qquad(24.13)$$
We will also use the following uniform integrability remark: if $(Y_n)$ is uniformly integrable and $Y_n\to Y$ a.s., then for $M$ large,
$$\sup_nE|Y_n|\le M+\sup_nE\left(|Y_n|1_{|Y_n|\ge M}\right)<\infty,$$
so $Y\in L^1$ by Fatou's lemma. Note that
$$E|Y_n-Y|=E\left(|Y_n-Y|1_{|Y_n|>M}\right)+E\left(|Y_n-Y|1_{|Y_n|\le M}\right)
\le\sup_nE\left(|Y_n|1_{|Y_n|>M}\right)+E\left(|Y|1_{|Y_n|>M}\right)+E\left(|Y_n-Y|1_{|Y_n|\le M}\right),$$
where each term can be made small (the last by dominated convergence). Hence $Y_n\to Y$ in $L^1$.
Theorem (optional sampling, continuous time) Let $(X_t)$ be a right-continuous martingale and let $\sigma\le\tau$ be bounded stopping times. Then $E(X_\tau|\mathcal{F}_\sigma)=X_\sigma$ a.s.

Proof: Let
$$\tau_n=\frac{k}{2^n}\quad\text{if}\quad\frac{k-1}{2^n}\le\tau<\frac{k}{2^n}.$$
Then $(\tau_n)$ is a sequence of stopping times decreasing to $\tau$. Let $\sigma_n$ be defined similarly. For any $A\in\mathcal{F}_\sigma$, we have $A\in\mathcal{F}_{\sigma_n}$ and hence, by Theorem 24.4,
$$E(X_{\tau_n}1_A)=E(X_{\sigma_n}1_A).$$
The families $(X_{\tau_n})$ and $(X_{\sigma_n})$ are uniformly integrable, so taking $n\to\infty$ and using right-continuity, we have
$$E(X_\tau1_A)=E(X_\sigma1_A).$$
25 Martingale Convergence

Theorem 25.1 (backward martingale convergence) Let $(\mathcal{F}_n)_{n\ge1}$ be a decreasing sequence of $\sigma$-fields, $X_0\in L^1$, and $X_n=E(X_0|\mathcal{F}_n)$. Then $\lim_{n\to\infty}X_n=E(X_0|\cap_n\mathcal{F}_n)$ a.s. and in $L^1$.

Proof: By the upcrossing inequality (Theorem 24.7, applied to finite stretches reversed in time, whose final element is dominated by $X_0$ in the sense that $E(X_n-a)^+\le E(X_0-a)^+$ by Jensen), for every $a<b$,
$$E\,U^X(a,b)\le\frac{1}{b-a}E\left((X_0-a)^+\right)<\infty.$$
Hence $\forall a<b\in\mathbb{Q}$,
$$P\left(U^X(a,b)<\infty\right)=1,$$
and as in Theorem 24.8,
$$X_\infty=\lim_{n\to\infty}X_n$$
exists a.s. and $X_\infty\in L^1$. To prove $X_n\to X_\infty$ in $L^1$, we only need to show that $\{X_n\}_{n\in\mathbb{N}}$ is uniformly integrable. Note that $X_n=E(X_0|\mathcal{F}_n)$, so $|X_n|\le E(|X_0|\,|\,\mathcal{F}_n)$ and, for any $M'$,
$$E\left(|X_n|1_{|X_n|>M}\right)\le E\left(|X_0|1_{|X_n|>M}\right)
\le E\left(|X_0|1_{|X_0|>M'}\right)+\frac{M'}{M}E|X_0|.$$
Then
$$\limsup_{M\to\infty}\sup_nE\left(|X_n|1_{|X_n|>M}\right)\le E\left(|X_0|1_{|X_0|>M'}\right).$$
Taking $M'\to\infty$, we have
$$\lim_{M\to\infty}\sup_nE\left(|X_n|1_{|X_n|>M}\right)=0.$$

Theorem 25.2 (SLLN) Let $(X_j)_{j\ge1}$ be i.i.d. with $E|X_1|<\infty$. Applying Theorem 25.1 to $S_n/n=E(X_1|\sigma(S_n,S_{n+1},\dots))$, whose limit $\sigma$-field is trivial, we obtain
$$\frac{S_n}{n}\to E(X_1)\quad\text{a.s.}$$
Theorem 25.3 Let $(Y_n)_{n\ge1}$ be independent r.v.s, $E(Y_n)=0$ and $E(Y_n^2)<\infty$. Suppose
$$\sum_{n=1}^\infty E(Y_n^2)<\infty.$$
Let $S_n=\sum_{k=1}^nY_k$. Then $S_n$ converges a.s. and in $L^2$.

Proof: $(S_n)$ is a martingale, and since $|x|\le1+x^2$,
$$\sup_nE|S_n|\le\sup_nE(S_n^2)+1=\sum_{n=1}^\infty E(Y_n^2)+1<\infty.$$
Hence $S_n\to S$ a.s. and $S\in L^1$; convergence in $L^2$ follows since $(S_n)$ is Cauchy in $L^2$.
Finally, we consider martingale CLT.
Theorem 25.4 Let $(X_n)_{n\in\mathbb{N}}$ be adapted and such that a)
$$E(X_n|\mathcal{F}_{n-1})=0,$$
b)
$$E(X_n^2|\mathcal{F}_{n-1})=1,$$
c)
$$E(|X_n|^3)\le K<\infty.$$
Let $S_n=X_1+\cdots+X_n$. Then $S_n/\sqrt n\Rightarrow N(0,1)$.

Proof: Let
$$\varphi_{n,j}(u)=E\left(e^{iu\frac{1}{\sqrt n}X_j}\,\Big|\,\mathcal{F}_{j-1}\right).$$
As
$$e^{iu\frac{1}{\sqrt n}X_j}=1+\frac{iu}{\sqrt n}X_j-\frac{u^2}{2n}X_j^2-\frac{iu^3}{6n^{3/2}}\bar X_j^3,$$
where $\bar X_j$ is between $0$ and $X_j$, we get
$$\varphi_{n,j}(u)=1+\frac{iu}{\sqrt n}E(X_j|\mathcal{F}_{j-1})-\frac{u^2}{2n}E(X_j^2|\mathcal{F}_{j-1})-\frac{iu^3}{6n^{3/2}}E(\bar X_j^3|\mathcal{F}_{j-1})
=1-\frac{u^2}{2n}-\frac{iu^3}{6n^{3/2}}E(\bar X_j^3|\mathcal{F}_{j-1}).$$
Then for $j\le n$,
$$E\left(e^{iu\frac{S_j}{\sqrt n}}\right)
=E\left(e^{iu\frac{S_{j-1}}{\sqrt n}}E\left(e^{iu\frac{X_j}{\sqrt n}}\Big|\mathcal{F}_{j-1}\right)\right)
=E\left(e^{iu\frac{S_{j-1}}{\sqrt n}}\varphi_{n,j}(u)\right)
=E\left(e^{iu\frac{S_{j-1}}{\sqrt n}}\left(1-\frac{u^2}{2n}-\frac{iu^3}{6n^{3/2}}E(\bar X_j^3|\mathcal{F}_{j-1})\right)\right).$$
Hence
$$\left|E\left(e^{iu\frac{S_j}{\sqrt n}}\right)-\left(1-\frac{u^2}{2n}\right)E\left(e^{iu\frac{S_{j-1}}{\sqrt n}}\right)\right|
\le\frac{|u|^3}{6n^{3/2}}E\left(\left|e^{iu\frac{S_{j-1}}{\sqrt n}}\right|\,|X_j|^3\right)
\le\frac{K|u|^3}{6n^{3/2}}.$$
Then, for $n$ large enough that $1-\frac{u^2}{2n}>0$,
$$\left|\left(1-\frac{u^2}{2n}\right)^{\!n-j}E\left(e^{iu\frac{S_j}{\sqrt n}}\right)
-\left(1-\frac{u^2}{2n}\right)^{\!n-j+1}E\left(e^{iu\frac{S_{j-1}}{\sqrt n}}\right)\right|
\le\frac{K|u|^3}{6n^{3/2}}.$$
Using a telescoping sum over $j=1,\dots,n$, we have
$$\left|E\left(e^{iu\frac{S_n}{\sqrt n}}\right)-\left(1-\frac{u^2}{2n}\right)^{\!n}\right|
\le n\cdot\frac{K|u|^3}{6n^{3/2}}=\frac{K|u|^3}{6n^{1/2}}\to0.$$
As
$$\left(1-\frac{u^2}{2n}\right)^{\!n}\to e^{-u^2/2},$$
we have
$$E\left(e^{iu\frac{S_n}{\sqrt n}}\right)\to e^{-u^2/2}.$$
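A minimal simulation sketch of Theorem 25.4 (not from the notes), using $\pm1$ martingale differences, for which a)-c) hold with $K=1$; the sample sizes and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 500, 50_000
# +-1 differences: E(X_n|F_{n-1}) = 0, E(X_n^2|F_{n-1}) = 1, E|X_n|^3 = 1 = K
S = rng.choice([-1.0, 1.0], size=(reps, n)).sum(axis=1)
z = S / np.sqrt(n)
print("mean:", z.mean(), " var:", z.var())
print("P(S_n/sqrt(n) <= 1):", np.mean(z <= 1.0), "  Phi(1) = 0.8413...")
```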
26 Doob-Meyer Decomposition

Theorem 26.1 Every submartingale $(X_n)_{n\in\mathbb{N}}$ can be written uniquely as $X_n=M_n+A_n$, where $(M_n)$ is a martingale, $A_0=0$, and $(A_n)$ is increasing with $A_n$ being $\mathcal{F}_{n-1}$-measurable.

Proof: Define $A_0=0$ and
$$A_n=A_{n-1}+E(X_n-X_{n-1}|\mathcal{F}_{n-1}).\qquad(26.16)$$
By induction, we see that $A_n$ is $\mathcal{F}_{n-1}$-measurable. Since $X_n$ is a submartingale, by (26.16), we have $A_n\ge A_{n-1}$ a.s., and $M_n=X_n-A_n$ is a martingale.

Next we prove the uniqueness. Suppose $(M_n,A_n)$ is such a decomposition; then
$$E(X_n|\mathcal{F}_{n-1})=M_{n-1}+A_n.$$
Therefore $A_n$ is uniquely determined. The uniqueness of $M_n$ then follows from $M_n=X_n-A_n$.
Next we consider the decomposition of continuous-time submartingales.

Definition 26.2 $(A_t)_{t\ge0}$ is an integrable increasing process if $A_0=0$, $t\mapsto A_t$ is right-continuous and increasing a.s., and
$$E(A_t)<\infty,\qquad\forall t\ge0.$$
Definition $(A_t)$ is natural if for every bounded, right-continuous martingale $(m_t)$,
$$E\int_0^tm_{s-}\,dA_s=E\int_0^tm_s\,dA_s,\qquad\forall t\ge0.$$
Note that for any bounded martingale $m$,
$$E\int_0^tm_s\,dA_s
=E\lim_{n\to\infty}\sum_{k=0}^{n-1}m_{\frac{(k+1)t}{n}}\left(A_{\frac{(k+1)t}{n}}-A_{\frac{kt}{n}}\right)
=\lim_{n\to\infty}\sum_{k=0}^{n-1}\left[E\left(m_{\frac{(k+1)t}{n}}A_{\frac{(k+1)t}{n}}\right)-E\left(m_{\frac{(k+1)t}{n}}A_{\frac{kt}{n}}\right)\right]$$
$$=\lim_{n\to\infty}\sum_{k=0}^{n-1}\left[E\left(m_{\frac{(k+1)t}{n}}A_{\frac{(k+1)t}{n}}\right)-E\left(E\left(m_{\frac{(k+1)t}{n}}\Big|\mathcal{F}_{\frac{kt}{n}}\right)A_{\frac{kt}{n}}\right)\right]
=\lim_{n\to\infty}\sum_{k=0}^{n-1}E\left(m_{\frac{(k+1)t}{n}}A_{\frac{(k+1)t}{n}}-m_{\frac{kt}{n}}A_{\frac{kt}{n}}\right)
=E(m_tA_t),$$
where the second equality follows from the dominated convergence theorem and the third from the martingale property of $m_t$. Hence $A$ is natural if and only if
$$E\int_0^tm_{s-}\,dA_s=E(m_tA_t)$$
for all such $m$.
Theorem 26.5 (Doob-Meyer decomposition) If $(X_t)_{t\ge0}$ is a submartingale of class (DL), then it is expressible uniquely as
$$X_t=M_t+A_t$$
where $A_t$ is an integrable natural increasing process and $M_t$ is a martingale.

Proof: Uniqueness. Suppose that
$$X_t=M_t+A_t=M_t'+A_t'$$
are two such decompositions. Then
$$A_t-A_t'=M_t'-M_t$$
is a martingale. Therefore, for any bounded, right-continuous martingale $m_t$, by naturality and the identity above,
$$E\left(m_t(A_t-A_t')\right)
=\lim_{n\to\infty}E\sum_{k=0}^{n-1}m_{\frac{kt}{n}}\left[\left(A_{\frac{(k+1)t}{n}}-A'_{\frac{(k+1)t}{n}}\right)-\left(A_{\frac{kt}{n}}-A'_{\frac{kt}{n}}\right)\right]=0,$$
since each summand has zero expectation ($A-A'$ is a martingale and $m_{\frac{kt}{n}}$ is $\mathcal{F}_{\frac{kt}{n}}$-measurable). Taking $m_s=E(\xi|\mathcal{F}_s)$ for bounded $\xi$ shows $E(\xi(A_t-A_t'))=0$ for all bounded $\xi$; hence $A_t=A_t'$ a.s.
Existence. Fix $T>0$ and let $Y_t=X_t-E(X_T|\mathcal{F}_t)$. Then $Y_t$ is a non-positive submartingale with $Y_T=0$. Let $t_j^n=\frac{jT}{2^n}$. As $(Y_{t_j^n},\mathcal{F}_{t_j^n})$ is a submartingale with $Y_T=0$, it follows from Theorem 26.1 that
$$Y_{t_j^n}=-E(A_T^n|\mathcal{F}_{t_j^n})+A_{t_j^n}^n\qquad(26.17)$$
where $A_0^n=0$, $A_{t_j^n}^n\le A_{t_{j+1}^n}^n$, and $A_{t_j^n}^n$ is $\mathcal{F}_{t_{j-1}^n}$-measurable. Assume for the moment that the family $\{A_T^n\}_{n\ge1}$ is uniformly integrable, which will be shown in Lemma 26.6 below. Then there is a subsequence $n_k$ such that $A_T^{n_k}$ converges to a random variable $A_T$ in the weak topology of $L^1(\Omega)$: for any bounded random variable $\xi$, $E(A_T^{n_k}\xi)\to E(A_T\xi)$.

Denote by $M_t$ a right-continuous version of the uniformly integrable martingale $(E(A_T|\mathcal{F}_t))_{0\le t\le T}$ and let
$$A_t=Y_t+M_t.$$
Then $(A_t)$ is right-continuous. Let $i\le j$. For any $n_0>0$,
$$Y_{t_i^{n_0}}+E(A_T^{n_k}|\mathcal{F}_{t_i^{n_0}})\le Y_{t_j^{n_0}}+E(A_T^{n_k}|\mathcal{F}_{t_j^{n_0}}),$$
and hence, by taking $k\to\infty$, $A_{t_i^{n_0}}\le A_{t_j^{n_0}}$. Therefore, $A_t$ is increasing on $\{t_i^{n_0}:n_0\ge1,\ i=0,1,\dots,2^{n_0}\}$, and thus on all of $[0,T]$.
Finally, we prove that $(A_t)$ is natural. Let $m_t$ be a nonnegative, bounded, right-continuous martingale. By the dominated convergence theorem and the weak convergence $A^{n_k}\to A$,
$$E\int_0^Tm_{s-}\,dA_s
=\lim_{n\to\infty}\sum_{i=0}^{2^n-1}E\left(m_{t_i^n}\left(A_{t_{i+1}^n}-A_{t_i^n}\right)\right)
=\lim_{n\to\infty}\sum_{i=0}^{2^n-1}E\left(m_{t_i^n}\left(A_{t_{i+1}^n}^n-A_{t_i^n}^n\right)\right)
=\lim_{n\to\infty}\sum_{i=0}^{2^n-1}E\left(m_{t_{i+1}^n}A_{t_{i+1}^n}^n-m_{t_i^n}A_{t_i^n}^n\right)
=E(m_TA_T),$$
where the next to the last equality follows from the fact that $A_{t_{i+1}^n}^n$ is $\mathcal{F}_{t_i^n}$-measurable (so that $E(m_{t_i^n}A_{t_{i+1}^n}^n)=E(m_{t_{i+1}^n}A_{t_{i+1}^n}^n)$). Hence $A_t$ is natural.
Lemma 26.6 $\{A_T^n\}_{n\ge1}$ is uniformly integrable.

Proof: It is easy to show from (26.16) that
$$A_{t_k^n}^n=\sum_{j=0}^{k-1}\left[E(Y_{t_{j+1}^n}|\mathcal{F}_{t_j^n})-Y_{t_j^n}\right].$$
For $c>0$, let
$$\sigma_c^n=\min\{t_j^n:A_{t_{j+1}^n}^n>c\},$$
with the convention that the minimum over the empty set is $T$. Then $\sigma_c^n\in\mathcal{S}_T$ and $\{A_T^n>c\}=\{\sigma_c^n<T\}$. By the optional sampling theorem and (26.17), we have
$$Y_{\sigma_c^n}=A_{\sigma_c^n}^n-E(A_T^n|\mathcal{F}_{\sigma_c^n}).$$
Hence
$$E\left(A_T^n1_{A_T^n>c}\right)\le E\left(A_{\sigma_c^n}^n1_{\sigma_c^n<T}\right)-E\left(Y_{\sigma_c^n}1_{\sigma_c^n<T}\right).\qquad(26.18)$$
Note that, since $A_T^n-A_{\sigma_{c/2}^n}^n>c/2$ on $\{\sigma_c^n<T\}$,
$$E\left(A_{\sigma_c^n}^n1_{\sigma_c^n<T}\right)\le c\,P(\sigma_c^n<T)
\le2E\left(\left(A_T^n-A_{\sigma_{c/2}^n}^n\right)1_{\sigma_{c/2}^n<T}\right)
=-2E\left(Y_{\sigma_{c/2}^n}1_{\sigma_{c/2}^n<T}\right),$$
so that
$$E\left(A_T^n1_{A_T^n>c}\right)\le-E\left(Y_{\sigma_c^n}1_{\sigma_c^n<T}\right)-2E\left(Y_{\sigma_{c/2}^n}1_{\sigma_{c/2}^n<T}\right).$$
Since $X$ is of class (DL), the family $\{Y_\sigma:\sigma\in\mathcal{S}_T\}$ is uniformly integrable, and $P(\sigma_c^n<T)\le E(A_T^n)/c=E(-Y_0)/c\to0$ as $c\to\infty$ uniformly in $n$; hence
$$\sup_nE\left(-Y_{\sigma_c^n}1_{\sigma_c^n<T}\right)
\le\sup_nE\left(|Y_{\sigma_c^n}|1_{|Y_{\sigma_c^n}|>M}\right)+M\sup_nP(\sigma_c^n<T)\to0$$
as $c\to\infty$, uniformly in $n$. We conclude that
$$\lim_{c\to\infty}\sup_nE\left(A_T^n1_{A_T^n>c}\right)=0.$$
Furthermore, along the subsequence $(n_k)$, $A^{n_k}$ converges to $A$ uniformly in probability. Indeed, extend $A^n$ to $[0,T]$ by setting $A_t^n=E(A_{t_{j+1}^n}^n|\mathcal{F}_t)$ for $t\in(t_j^n,t_{j+1}^n]$. Since $A_t^n$ is a martingale on each interval $(t_j^n,t_{j+1}^n]$ and $A_t$ is a natural increasing process, it is easy to show that
$$E\int_0^tA_s^n\,dA_s=E\int_0^tA_{s-}^n\,dA_s,\qquad t\in[0,T],\qquad(26.19)$$
and from this,
$$\lim_{k\to\infty}P\left(\sup_{t\in[0,T]}|A_t^{n_k}-A_t|>\varepsilon\right)=0;$$
along a further subsequence the convergence is uniform a.s.

Finally, we show that $(A_t)$ is in fact continuous. For $c>0$, approximating as above,
$$E\int_0^TA_{s-}\wedge c\,dA_s=E\int_0^TA_s\wedge c\,dA_s.$$
Hence
$$0=E\int_0^T\left(A_s\wedge c-A_{s-}\wedge c\right)dA_s
\ge E\sum_{s\le T}\left(A_s\wedge c-A_{s-}\wedge c\right)\left(A_s-A_{s-}\right)\ge0,$$
so all jumps of $A$ below level $c$ vanish; as $c$ is arbitrary, $A$ is continuous.
27 Square Integrable Martingales

A martingale $(M_t)$ is square integrable if
$$E(M_t^2)<\infty,\qquad\forall t\ge0.$$
By Doob's inequality (24.13),
$$E\left(\sup_{0\le t\le T}M_t^2\right)\le4E(M_T^2)<\infty.$$
For a square integrable martingale $M$, $(M_t^2)$ is a submartingale, and the Doob-Meyer decomposition yields a unique natural integrable increasing process $\langle M\rangle_t$, the quadratic variation process, such that $M_t^2-\langle M\rangle_t$ is a martingale. For two square integrable martingales $M$ and $N$,
$$\langle M,N\rangle_t=\frac14\left(\langle M+N\rangle_t-\langle M-N\rangle_t\right)$$
is called the quadratic covariation process of $M_t$ and $N_t$.

For a locally square integrable martingale $X$ with localizing stopping times $\tau_n\uparrow\infty$, the construction is consistent on $\{t\le\tau_n\}$, so one may define
$$A_t=A_t^n,\qquad t\le\tau_n,$$
and set $\langle X\rangle_t=A_t$.
28 Brownian Motions
Brownian motion is the simplest and the most useful square integrable martingale. In a sense, stochastic analysis is a branch of mathematics which
studies the functionals of Brownian motions.
Definition 28.1 A $d$-dimensional continuous process $X_t$ is a Brownian motion if $X_0=0$ and, for any $t>s$, $X_t-X_s$ is independent of $\mathcal{F}_s$ and $X_t-X_s$ has a multivariate normal distribution with mean zero and covariance matrix $(t-s)I_d$, where $I_d$ is the $d\times d$ identity matrix.

The next theorem shows that the quadratic variation process of Brownian motion is $t$. The converse of this theorem is also true and will be proved in the next chapter.

Theorem 28.2 Suppose that $X_t=(X_t^1,X_t^2,\dots,X_t^d)$ is a $d$-dimensional Brownian motion. Then $X_t^j$, $j=1,2,\dots,d$, are square integrable martingales and
$$\langle X^j,X^k\rangle_t=\delta_{jk}t.\qquad(28.20)$$

Proof: For $t>s$,
$$E\left(X_t^jX_t^k-\delta_{jk}t\,\big|\,\mathcal{F}_s\right)
=E\left((X_t^j-X_s^j)(X_t^k-X_s^k)\,\big|\,\mathcal{F}_s\right)
+E\left(X_s^k(X_t^j-X_s^j)+X_s^j(X_t^k-X_s^k)\,\big|\,\mathcal{F}_s\right)
+X_s^jX_s^k-\delta_{jk}t$$
$$=\delta_{jk}(t-s)+X_s^jX_s^k-\delta_{jk}t
=X_s^jX_s^k-\delta_{jk}s.$$
Hence $X_t^jX_t^k-\delta_{jk}t$ is a martingale, and since $\delta_{jk}t$ is continuous, deterministic (hence natural) and, for $j=k$, increasing, (28.20) follows from the uniqueness in the Doob-Meyer decomposition.
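Equation (28.20) for a single Brownian motion says that the sum of squared increments over a fine partition of $[0,t]$ approximates $t$. A simulation sketch (not from the notes; step counts and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
t = 1.0
for n in (10**3, 10**5):
    dB = rng.normal(0.0, np.sqrt(t / n), size=n)   # increments of one BM path on [0, t]
    print(f"n = {n:>6}: sum of squared increments = {np.sum(dB**2):.5f}   (t = {t})")
```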
29 Predictable Processes

Let $\mathcal{L}$ denote the set of all adapted, left-continuous processes, regarded as maps on $\mathbb{R}_+\times\Omega$. Define
$$\mathcal{P}=\sigma\left(X^{-1}(\mathcal{B}(\mathbb{R})):X\in\mathcal{L}\right).$$
Namely, $\mathcal{P}$ is the smallest $\sigma$-field on $\mathbb{R}_+\times\Omega$ (contained in $\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{F}$) such that $\forall X\in\mathcal{L}$,
$$X:(\mathbb{R}_+\times\Omega,\mathcal{P})\to(\mathbb{R},\mathcal{B}(\mathbb{R}))$$
is measurable.

Definition 29.1 A stochastic process $X=(X_t(\omega))$ is predictable if $X:(\mathbb{R}_+\times\Omega,\mathcal{P})\to(\mathbb{R},\mathcal{B}(\mathbb{R}))$ is measurable.

Example 29.2 Let $0=t_0<t_1<\cdots<t_n$ and let $\xi_j$ be $\mathcal{F}_{t_j}$-measurable. The simple process
$$X_t(\omega)=X_0(\omega)1_{\{0\}}(t)+\sum_{j=0}^{n-1}\xi_j(\omega)1_{(t_j,t_{j+1}]}(t)$$
is left-continuous and adapted, hence predictable.
30 Stochastic Integral

Let $M$ be a (continuous) square integrable martingale. For a simple predictable process
$$f_s=\sum_{j=1}^{n-1}f_j1_{(t_j,t_{j+1}]}(s),\qquad f_j\ \mathcal{F}_{t_j}\text{-measurable and bounded},$$
define
$$\int f_s\,dM_s=\sum_{j=1}^{n-1}f_j\left(M_{t_{j+1}}-M_{t_j}\right).\qquad(30.21)$$
Then
$$E\int f_s\,dM_s=0
\qquad\text{and}\qquad
E\left(\left(\int f_s\,dM_s\right)^2\right)=E\int f_s^2\,d\langle M\rangle_s.\qquad(30.22)$$
Indeed,
$$E\int f_s\,dM_s=\sum_{j=1}^{n-1}E\left(f_j(M_{t_{j+1}}-M_{t_j})\right)
=\sum_{j=1}^{n-1}E\left(f_jE\left(M_{t_{j+1}}-M_{t_j}\,\big|\,\mathcal{F}_{t_j}\right)\right)=0,$$
and
$$E\left(\left(\int f_s\,dM_s\right)^2\right)
=\sum_{j=1}^{n-1}E\left(f_j^2(M_{t_{j+1}}-M_{t_j})^2\right)
+2\sum_{j<k}E\left(f_jf_k(M_{t_{j+1}}-M_{t_j})(M_{t_{k+1}}-M_{t_k})\right)
\equiv I_1+I_2.$$
Conditioning on $\mathcal{F}_{t_k}$ shows $I_2=0$, while
$$I_1=\sum_{j=1}^{n-1}E\left(f_j^2E\left((M_{t_{j+1}}-M_{t_j})^2\,\big|\,\mathcal{F}_{t_j}\right)\right)
=\sum_{j=1}^{n-1}E\left(f_j^2\left(\langle M\rangle_{t_{j+1}}-\langle M\rangle_{t_j}\right)\right)
=E\int f_s^2\,d\langle M\rangle_s.$$
Defining the measure
$$\mu(A)=E\int_0^\infty1_A(t,\omega)\,d\langle M\rangle_t,\qquad A\in\mathcal{P},$$
(30.22) says that $f\mapsto\int f\,dM$ is an isometry from the simple processes in $L^2(\mu)$ into $L^2(\Omega)$. Finally, define
$$\int_0^tf_s\,dM_s=\int f_s1_{[0,t]}(s)\,dM_s.$$
Lemma 30.3 If $f\in L_0$ (the simple predictable processes), then $I_t(f)=\int_0^tf_s\,dM_s$ is a continuous square integrable martingale with quadratic variation process
$$\langle I(f)\rangle_t=\int_0^tf_s^2\,d\langle M\rangle_s.$$

Proof: As
$$I_t(f)=\sum_{i=1}^{n-1}f_i(\omega)\left(M_{t_{i+1}\wedge t}-M_{t_i\wedge t}\right),$$
for $s\le t$ with $s\in(t_{j-1},t_j]$ we have
$$E\left(I_t(f)\,|\,\mathcal{F}_s\right)
=\sum_{i=1}^{j-1}f_i(\omega)\left(M_{t_{i+1}\wedge s}-M_{t_i\wedge s}\right)
+\sum_{i\ge j}E\left(f_i(\omega)\left(M_{t_{i+1}\wedge t}-M_{t_i\wedge t}\right)\big|\,\mathcal{F}_s\right)
=I_s(f),$$
so $I_t(f)$ is a martingale. A similar conditioning argument shows that
$$I_t(f)^2-\int_0^tf_s^2\,d\langle M\rangle_s$$
is a martingale, which identifies $\langle I(f)\rangle_t=\int_0^tf_s^2\,d\langle M\rangle_s$.
Theorem Let $f$ be predictable with $E\int_0^Tf_s^2\,d\langle M\rangle_s<\infty$ for all $T$. Then $I_t(f)=\int_0^tf_s\,dM_s$ is well defined as an $L^2$-limit of integrals of simple processes, and it is a continuous square integrable martingale with $\langle I(f)\rangle_t=\int_0^tf_s^2\,d\langle M\rangle_s$.

Proof: We only need to prove the theorem for $t\le T$ with $T$ fixed. Let $f^n$ be a sequence of simple predictable processes such that
$$|f_s^n|\le|f_s|\qquad(30.23)$$
and
$$E\int_0^T(f_s^n-f_s)^2\,d\langle M\rangle_s<2^{-n}.\qquad(30.24)$$
By the isometry (30.22), $(I_t(f^n))_n$ is Cauchy in $L^2$ for each $t$; define $I_t(f)$ as the limit, so that
$$E\,|I_t(f^n)-I_t(f)|^2\to0
\qquad\text{and}\qquad
\langle I(f)\rangle_t=\int_0^tf_s^2\,d\langle M\rangle_s.$$
By Doob's inequality (24.13),
$$E\left(\sup_{0\le t\le T}|I_t(f^n)-I_t(f)|^2\right)\le4E\int_0^T(f_s^n-f_s)^2\,d\langle M\rangle_s\le4\cdot2^{-n},$$
so by Chebyshev's inequality ($P(\sup>\frac1n)\le4n^22^{-n}$) and the Borel-Cantelli lemma,
$$P\left(\sup_{0\le t\le T}|I_t(f^n)-I_t(f)|>\frac1n,\ \text{infinitely often}\right)=0.$$
Hence
$$\sup_{0\le t\le T}|I_t(f^n)-I_t(f)|\to0,\quad\text{a.s.},$$
and $I_t(f)$ inherits continuity and the martingale property.

For locally square integrable $f$, choose stopping times $\tau_n\uparrow\infty$ with
$$\int_0^{T\wedge\tau_n}f_t^2\,d\langle M\rangle_t<\infty,\qquad\forall T>0,\ n\in\mathbb{N},\qquad(30.25)$$
and define $\int_0^tf_s\,dM_s$ by localization.
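As a concrete instance of the construction, take $M=B$ a Brownian motion and $f_s=B_s$ evaluated at left endpoints, as in (30.21); the discrete sums approximate $\int_0^tB_s\,dB_s=\frac12(B_t^2-t)$ (an identity established by Itô's formula in the next section), and the isometry (30.22) gives $E(\int_0^tB_s\,dB_s)^2=\int_0^ts\,ds=t^2/2$. A simulation sketch, not from the notes (grid size, number of paths and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(9)
t, n, reps = 1.0, 4_000, 5_000
dB = rng.normal(0.0, np.sqrt(t / n), size=(reps, n))
Bleft = np.cumsum(dB, axis=1) - dB                 # B_{t_j}: left endpoints
I = np.sum(Bleft * dB, axis=1)                     # sum f_j (M_{t_{j+1}} - M_{t_j}), f = B
Bt = dB.sum(axis=1)
print("mean |I - (B_t^2 - t)/2| :", np.mean(np.abs(I - (Bt**2 - t) / 2)))
print("E(I^2) =", np.mean(I**2), "  isometry: t^2/2 =", t**2 / 2)
```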
31 Itô's Formula
Theorem 31.1 (Itô's formula) Let $X_t=X_0+M_t+A_t$ be a continuous semimartingale in $\mathbb{R}^d$ ($M\in\mathcal{M}^{2,c}_{loc}$, $A$ of locally finite variation) and $F\in C^2(\mathbb{R}^d)$. Then
$$F(X_t)=F(X_0)+\sum_{i=1}^d\int_0^t\frac{\partial F(X_s)}{\partial x_i}\,dM_s^i
+\sum_{i=1}^d\int_0^t\frac{\partial F(X_s)}{\partial x_i}\,dA_s^i
+\frac12\sum_{i,j=1}^d\int_0^t\frac{\partial^2F(X_s)}{\partial x_i\partial x_j}\,d\langle M^i,M^j\rangle_s.\qquad(31.26)$$
Proof: Let
$$\tau_n=\begin{cases}0&\text{if } |X_0|>n,\\
\inf\{t:|M_t|>n\ \text{or}\ \operatorname{Var}(A)_t>n\ \text{or}\ \langle M\rangle_t>n\}&\text{if } |X_0|\le n,\end{cases}$$
where $\operatorname{Var}(A)_t$ is the total variation of $A$ on $[0,t]$. It is clear that $\tau_n\uparrow\infty$ a.s. We only need to prove (31.26) with $t$ replaced by $t\wedge\tau_n$. In other words, we may assume that $|X_0|$, $|M_t|$, $\operatorname{Var}(A)_t$ and $\langle M\rangle_t$ are all bounded by a constant $C$ and, for notational simplicity, that $d=1$ and $F\in C_0^2(\mathbb{R})$.

Let $t_i=\frac{it}{n}$, $i=0,1,\dots,n$. Then, by Taylor's formula,
$$F(X_t)-F(X_0)=\sum_{i=1}^n\left(F(X_{t_i})-F(X_{t_{i-1}})\right)
=\sum_{i=1}^nF'(X_{t_{i-1}})(X_{t_i}-X_{t_{i-1}})
+\frac12\sum_{i=1}^nF''(\eta_i)(X_{t_i}-X_{t_{i-1}})^2
\equiv I_1^n+I_2^n,$$
where $\eta_i$ lies between $X_{t_{i-1}}$ and $X_{t_i}$.
For the first term,
$$I_1^n=\sum_{i=1}^nF'(X_{t_{i-1}})(M_{t_i}-M_{t_{i-1}})+\sum_{i=1}^nF'(X_{t_{i-1}})(A_{t_i}-A_{t_{i-1}})
\to\int_0^tF'(X_s)\,dM_s+\int_0^tF'(X_s)\,dA_s.$$
For the second term,
$$2I_2^n=\sum_{i=1}^nF''(\eta_i)(M_{t_i}-M_{t_{i-1}})^2
+2\sum_{i=1}^nF''(\eta_i)(M_{t_i}-M_{t_{i-1}})(A_{t_i}-A_{t_{i-1}})
+\sum_{i=1}^nF''(\eta_i)(A_{t_i}-A_{t_{i-1}})^2
\equiv I_{21}^n+I_{22}^n+I_{23}^n.$$
The terms $I_{22}^n$ and $I_{23}^n$ tend to $0$, since $\max_i|M_{t_i}-M_{t_{i-1}}|+\max_i|A_{t_i}-A_{t_{i-1}}|\to0$ while $\sum_i|A_{t_i}-A_{t_{i-1}}|\le C$.
It remains to treat $I_{21}^n$. Let
$$V_k^n=\sum_{i=1}^k(M_{t_i}-M_{t_{i-1}})^2,\qquad k=1,2,\dots,n.$$
Then
$$E(V_n^n)^2=\sum_{i=1}^nE(M_{t_i}-M_{t_{i-1}})^4
+2\sum_{1\le i<j\le n}E\left((M_{t_i}-M_{t_{i-1}})^2(M_{t_j}-M_{t_{j-1}})^2\right)$$
$$\le4C^2E(V_n^n)+2\sum_{1\le i\le n}E\left((M_{t_i}-M_{t_{i-1}})^2\,E\left((M_t-M_{t_i})^2\,|\,\mathcal{F}_{t_i}\right)\right)
\le(4C^2+2C')E(V_n^n),$$
for a constant $C'$ depending only on $C$; since $E(V_n^n)=E\left((M_t-M_0)^2\right)\le4C^2+2C'$, we get
$$E(V_n^n)^2\le(4C^2+2C')^2$$
uniformly in $n$. Let
$$I_3^n=\sum_{i=1}^nF''(X_{t_{i-1}})(M_{t_i}-M_{t_{i-1}})^2
\qquad\text{and}\qquad
I_4^n=\sum_{i=1}^nF''(X_{t_{i-1}})\left(\langle M\rangle_{t_i}-\langle M\rangle_{t_{i-1}}\right).$$
Then, by the Cauchy-Schwarz inequality,
$$\left\{E(|I_3^n-I_{21}^n|)\right\}^2
\le E\left(\max_{1\le i\le n}|F''(\eta_i)-F''(X_{t_{i-1}})|^2\right)E\left((V_n^n)^2\right)\to0,$$
by the uniform continuity of $F''$ and the continuity of $X$, while
$$I_4^n\to\int_0^tF''(X_s)\,d\langle M\rangle_s.$$
Finally, note that the cross terms vanish by conditioning ($M^2-\langle M\rangle$ is a martingale), and hence,
$$E\left(|I_3^n-I_4^n|^2\right)
=\sum_{i=1}^nE\left(F''(X_{t_{i-1}})^2\left[(M_{t_i}-M_{t_{i-1}})^2-\left(\langle M\rangle_{t_i}-\langle M\rangle_{t_{i-1}}\right)\right]^2\right)$$
$$\le2\|F''\|_\infty^2\,E\left\{\sum_{i=1}^n\left[(M_{t_i}-M_{t_{i-1}})^4+\left(\langle M\rangle_{t_i}-\langle M\rangle_{t_{i-1}}\right)^2\right]\right\}\to0.$$
Combining the estimates, $I_{21}^n\to\int_0^tF''(X_s)\,d\langle M\rangle_s$, and (31.26) follows.
As an application of Itô's formula, we justify the terminology "quadratic variation process" for $\langle M\rangle_t$ by the following theorem.

Theorem 31.3 Suppose that $M\in\mathcal{M}^{2,c}_{loc}$. Let $0=t_0^n<t_1^n<\cdots<t_n^n=t$ be such that
$$\max_{1\le j\le n}(t_j^n-t_{j-1}^n)\to0.$$
Then
$$\lim_{n\to\infty}\left(\sum_{j=1}^n\left(M_{t_j^n}-M_{t_{j-1}^n}\right)^2-\langle M\rangle_t\right)=0\quad\text{in probability}.$$

Proof: Applying Itô's formula to $M^2$ on each subinterval,
$$\left(M_{t_j^n}-M_{t_{j-1}^n}\right)^2
=2\int_{t_{j-1}^n}^{t_j^n}\left(M_s-M_{t_{j-1}^n}\right)dM_s
+\langle M\rangle_{t_j^n}-\langle M\rangle_{t_{j-1}^n}.$$
Summing over $j$,
$$\sum_{j=1}^n\left(M_{t_j^n}-M_{t_{j-1}^n}\right)^2-\langle M\rangle_t
=2\sum_{j=1}^n\int_{t_{j-1}^n}^{t_j^n}\left(M_s-M_{t_{j-1}^n}\right)dM_s,$$
a stochastic integral of the predictable process equal to $M_s-M_{t_{j-1}^n}$ for $s\in(t_{j-1}^n,t_j^n]$, which (after localization) tends to $0$ in $L^2$ since
$$E\int_0^t\left(M_s-M_{t_{j-1}^n}\right)^2d\langle M\rangle_s\to0$$
by the continuity of $M$ and dominated convergence.
32 Representation of Martingales

In this section, we make use of Itô's formula to show that Brownian motion is characterized by its quadratic variation process. Then, as consequences of this result, we present some representation theorems for square integrable martingales in terms of Brownian motions.

Theorem 32.1 (Lévy) Suppose that $X_t=(X_t^1,\dots,X_t^d)$ is such that $X^j\in\mathcal{M}^{2,c}_{loc}$, $X_0=0$ and
$$\langle X^j,X^k\rangle_t=\delta_{jk}t,\qquad j,k=1,2,\dots,d.$$
Then $X$ is a $d$-dimensional Brownian motion.

Proof: By Itô's formula, for $\lambda\in\mathbb{R}^d$ and $t\ge s$,
$$e^{i\langle\lambda,X_t\rangle}=e^{i\langle\lambda,X_s\rangle}
+\int_s^tie^{i\langle\lambda,X_u\rangle}\langle\lambda,dX_u\rangle
-\frac12|\lambda|^2\int_s^te^{i\langle\lambda,X_u\rangle}\,du;$$
here the notation $\langle\lambda,X_s\rangle$ stands for the inner product in $\mathbb{R}^d$ (do not confuse it with the quadratic covariation process). Thus
$$E\left(e^{i\langle\lambda,X_t\rangle}\,\big|\,\mathcal{F}_s\right)
=e^{i\langle\lambda,X_s\rangle}-\frac12|\lambda|^2\int_s^tE\left(e^{i\langle\lambda,X_u\rangle}\,\big|\,\mathcal{F}_s\right)du,$$
and solving this equation gives
$$E\left(e^{i\langle\lambda,X_t-X_s\rangle}\,\big|\,\mathcal{F}_s\right)=e^{-\frac12|\lambda|^2(t-s)},$$
i.e. $X_t-X_s$ is independent of $\mathcal{F}_s$ with distribution $N(0,(t-s)I_d)$.
Theorem 32.2 (time change) Let $M\in\mathcal{M}^{2,c}_{loc}$ with $\langle M\rangle_\infty=\infty$ a.s., and let $\tau_t=\inf\{s:\langle M\rangle_s>t\}$. Then $B_t=M_{\tau_t}$ is a Brownian motion and $M_t=B_{\langle M\rangle_t}$. (When $\langle M\rangle_\infty<\infty$ with positive probability, one works on an extension; see below.)

Proof: We first prove that $B_t$ is continuous. The only possible case for $B_t$ not being continuous is that $\tau$ has a jump and $M$ is not constant over this jump; in this case, $\langle M\rangle$ must be flat over an interval, say $(r,r')$, with $M$ not constant over this interval. Therefore, we only need to show that
$$P\left(\{\langle M\rangle_{r'}=\langle M\rangle_r\}\setminus\{M_u=M_r,\ \forall u\in[r,r']\}\right)=0.\qquad(32.27)$$
Let
$$\sigma=\inf\{s>r:\langle M\rangle_s>\langle M\rangle_r\}.$$
Then $(M_{u\wedge\sigma}-M_r)_{u\ge r}$ is a local martingale with quadratic variation $\langle M\rangle_{u\wedge\sigma}-\langle M\rangle_r=0$, hence constant; this proves (32.27).
For the general case we need to enlarge the probability space. A stochastic basis $(\tilde\Omega,\tilde{\mathcal{F}},\tilde P,\tilde{\mathcal{F}}_t)$ together with a map $\tilde\omega\mapsto\omega$ is an extension of $(\Omega,\mathcal{F},P,\mathcal{F}_t)$ if i) $\mathcal{F}_t\subset\tilde{\mathcal{F}}_t$; ii) the image of $\tilde P$ is $P$; and iii) for every bounded random variable $X$ on $\Omega$,
$$\tilde E(\tilde X|\tilde{\mathcal{F}}_t)(\tilde\omega)=E(X|\mathcal{F}_t)(\omega),\qquad\tilde P\text{-a.s.},$$
where $\tilde X(\tilde\omega)=X(\omega)$, for $\tilde\omega\in\tilde\Omega$. We shall denote $\tilde X$ by $X$ if its meaning is clear from the context.

$(\tilde\Omega,\tilde{\mathcal{F}},\tilde P,\tilde{\mathcal{F}}_t)$ is called a standard extension of a stochastic basis $(\Omega,\mathcal{F},P,\mathcal{F}_t)$ if we have another stochastic basis $(\Omega',\mathcal{F}',P',\mathcal{F}_t')$ such that
$$(\tilde\Omega,\tilde{\mathcal{F}},\tilde P,\tilde{\mathcal{F}}_t)=(\Omega,\mathcal{F},P,\mathcal{F}_t)\otimes(\Omega',\mathcal{F}',P',\mathcal{F}_t'),$$
with $\tilde\omega\mapsto\omega$ for $\tilde\omega=(\omega,\omega')$ and
$$\tilde{\mathcal{F}}_t=\cap_{s>0}\left(\mathcal{F}_{t+s}\otimes\mathcal{F}'_{t+s}\right).$$
Returning to the time change: using the optional sampling theorem for $M_{u\wedge\tau_s}$ and for $M^2-\langle M\rangle$, one checks that for $u>v$,
$$\tilde E\left(\tilde B_u\,\big|\,\tilde{\mathcal{F}}_{\tau_v}\right)=\tilde B_v
\qquad\text{and}\qquad
\tilde E\left((\tilde B_u-\tilde B_v)^2\,\big|\,\tilde{\mathcal{F}}_{\tau_v}\right)=u-v.$$
When $\langle M\rangle_\infty<\infty$ with positive probability, take a Brownian motion $B'$ on an independent factor and set
$$\tilde B_t=B'_t-B'_{t\wedge\langle M\rangle_\infty}+B_{t\wedge\langle M\rangle_\infty};$$
then $\tilde B$ is a Brownian motion (by Theorem 32.1) and $M_t=\tilde B_{\langle M\rangle_t}$.
Theorem 32.3 Let $M^i\in\mathcal{M}^{2,c}_{loc}$, $i=1,\dots,d$, and let $\Phi=(\Phi_{ik}(s))$ be a $d\times d$ matrix of predictable processes with
$$\int_0^t\Phi_{ij}(s)^2\,ds<\infty,\qquad\forall t>0,
\qquad\text{and}\qquad
\langle M^i,M^j\rangle_t=\int_0^t\sum_{k=1}^d\Phi_{ik}(s)\Phi_{jk}(s)\,ds.\qquad(32.28)$$
If $\Phi(s)$ is invertible for almost all $(s,\omega)$, then there exists a $d$-dimensional Brownian motion $B_t$ (on the original stochastic basis) such that
$$M_t^i=\sum_{k=1}^d\int_0^t\Phi_{ik}(s)\,dB_s^k.\qquad(32.29)$$

Proof sketch: Set $B_t^k=\sum_{i=1}^d\int_0^t(\Phi(s)^{-1})_{ki}\,dM_s^i$. By (32.28), $\langle B^i,B^j\rangle_t=\delta_{ij}t$, so $B$ is a Brownian motion by Theorem 32.1, and (32.29) holds by construction. (When $\Phi^{-1}$ is not locally bounded one first works with truncations: with $I_N(s)$ the indicator that $\|\Phi(s)^{-1}\|\le N$, put $B_t^{i,N}=\sum_k\int_0^tI_N(s)(\Phi(s)^{-1})_{ik}\,dM_s^k$, for which
$$\langle B^{i,N},B^{j,N}\rangle_t=\int_0^tI_N(s)\,ds\,\delta_{ij}
\qquad\text{and}\qquad
\sum_{k=1}^d\int_0^t\Phi_{ik}(s)\,dB_s^{k,N}=\int_0^tI_N(s)\,dM_s^i,$$
and then let $N\to\infty$, so that in the limit $\langle B^i,B^j\rangle_t=\delta_{ij}t$.)
Theorem 32.4 Let $M^i\in\mathcal{M}^{2,c}_{loc}$, $i=1,\dots,d$, and let $\Phi=(\Phi_{ik}(s))$ be a $d\times r$ matrix of predictable processes with
$$\int_0^t\Phi_{ij}(s)^2\,ds<\infty,\qquad\forall t>0,
\qquad\text{and}\qquad
\langle M^i,M^j\rangle_t=\int_0^t\sum_{k=1}^r\Phi_{ik}(s)\Phi_{jk}(s)\,ds.$$
Then on a standard extension $(\tilde\Omega,\tilde{\mathcal{F}},\tilde P,\tilde{\mathcal{F}}_t)$ of $(\Omega,\mathcal{F},P,\mathcal{F}_t)$ there exists an $r$-dimensional Brownian motion $\tilde B_t$ such that
$$M_t^i=\sum_{k=1}^r\int_0^t\Phi_{ik}(s)\,d\tilde B_s^k.\qquad(32.30)$$
Proof sketch: Let $\Psi(s)=\Phi(s)\Phi(s)^*$, i.e. $\Psi_{ij}(s)=\sum_{k=1}^r\Phi_{ik}(s)\Phi_{jk}(s)$, and let
$$\hat\Phi(s)=\lim_{\varepsilon\downarrow0}\Phi(s)^*\left(\Psi(s)+\varepsilon I_d\right)^{-1}$$
be the pseudo-inverse of $\Phi(s)$. Then $\Phi(s)\hat\Phi(s)=E_R(s)$, the orthogonal projection onto the range of $\Psi(s)$; write $E_N(s)=I-E_R(s)$ for the complementary projection. Let $B'$ be an $r$-dimensional Brownian motion on an independent factor and define, on the extension,
$$\tilde B_s^k=\sum_{i=1}^d\int_0^s\hat\Phi_{ki}(u)\,dM_u^i+\sum_{j=1}^r\int_0^s\left(I-\hat\Phi(u)\Phi(u)\right)_{kj}\,dB_u'^j.$$
A direct computation of the brackets shows $\langle\tilde B^k,\tilde B^l\rangle_s=\delta_{kl}s$, so $\tilde B$ is an $r$-dimensional Brownian motion by Theorem 32.1. Moreover,
$$\sum_{k=1}^r\int_0^t\Phi_{ik}(s)\,d\tilde B_s^k
=\sum_{j=1}^d\int_0^tE_R(s)_{ij}\,dM_s^j+\text{(terms driven by }B'\text{, which cancel)}
=M_t^i-\sum_{j=1}^d\int_0^tE_N(s)_{ij}\,dM_s^j,$$
and
$$\sum_{j=1}^d\int_0^tE_N(s)_{ij}\,dM_s^j=0,\qquad(32.31)$$
since its quadratic variation $\int_0^t\left(E_N(s)\Psi(s)E_N(s)^*\right)_{ii}\,ds$ vanishes. This proves (32.30).
33 Change of Measures

Let $X\in\mathcal{M}^{2,c}_{loc}$ with $X_0=0$ and let
$$M_t=\exp\left(X_t-\frac12\langle X\rangle_t\right).$$

Theorem 33.1 $M_t$ is a continuous local martingale. Further, $M_t$ is a supermartingale, and it is a martingale if and only if
$$E(M_t)=1,\qquad\forall t\ge0.\qquad(33.32)$$

Proof sketch: By Itô's formula,
$$M_t=1+\int_0^tM_s\,dX_s,\qquad t\ge0,\qquad(33.33)$$
so $M$ is a continuous local martingale; being nonnegative, it is a supermartingale by Fatou's lemma, and it is a true martingale on $[0,t]$ exactly when $E(M_t)=M_0=1$.
Theorem 33.2 (Novikov's condition) If
$$E\exp\left(\frac12\langle X\rangle_t\right)<\infty,\qquad\forall t\ge0,$$
then $E(M_t)=1$, i.e. $M$ is a martingale.

Proof: Let $B$ be a Brownian motion with $X_t=B_{\langle X\rangle_t}$ (Theorem 32.2, on an extension if necessary). For $a>0$, let
$$\tau_a=\inf\{t:B_t-t=-a\}.$$
For $\lambda\ge0$, let
$$u(t,x)=e^{-\lambda t+(1-\sqrt{1+2\lambda})x}.$$
Then
$$\frac{\partial u}{\partial t}+\frac12\frac{\partial^2u}{\partial x^2}-\frac{\partial u}{\partial x}=0.$$
Applying Itô's formula, we have
$$u(t,B_t-t)=1+\int_0^t\frac{\partial u}{\partial x}(s,B_s-s)\,dB_s,$$
and $u(t\wedge\tau_a,B_{t\wedge\tau_a}-(t\wedge\tau_a))$ is a bounded martingale. Taking $t\to\infty$, we get
$$E\,e^{-\lambda\tau_a}=e^{-\left(\sqrt{1+2\lambda}-1\right)a}.\qquad(33.34)$$
By monotone convergence, (33.34) extends to $\lambda=-\frac12$:
$$E\,e^{\frac12\tau_a}=e^{a}<\infty.$$
Therefore, since $B_{\tau_a}=\tau_a-a$,
$$E\exp\left(B_{\tau_a}-\frac12\tau_a\right)=e^{-a}\,E\,e^{\frac12\tau_a}=1.$$
Let
$$Y_t=\exp\left(B_{t\wedge\tau_a}-\frac12(t\wedge\tau_a)\right).$$
By Theorem 33.1 and the above, $(Y_t)_{t\ge0}$ is a uniformly integrable martingale. Hence, for any stopping time $\sigma$,
$$E\exp\left(B_{\sigma\wedge\tau_a}-\frac12(\sigma\wedge\tau_a)\right)=1.$$
Applying this with $\sigma=\langle X\rangle_t$,
$$1=E\left(M_t1_{\tau_a>\langle X\rangle_t}\right)+E\left(1_{\tau_a\le\langle X\rangle_t}\exp\left(-a+\frac12\tau_a\right)\right).$$
Note that, as $a\to\infty$,
$$E\left(1_{\tau_a\le\langle X\rangle_t}\exp\left(-a+\frac12\tau_a\right)\right)
\le e^{-a}\,E\exp\left(\frac12\langle X\rangle_t\right)\to0.$$
Hence $E(M_t)=1$.
Suppose now that $M_t$ is a martingale. We define a probability measure $\tilde P_t$ on $(\Omega,\mathcal{F}_t)$ by
$$\tilde P_t(A)=E(M_t1_A),\qquad A\in\mathcal{F}_t.$$
Then $\forall t>s$, we have $\tilde P_t|_{\mathcal{F}_s}=\tilde P_s$. In fact, $\forall A\in\mathcal{F}_s$,
$$\tilde P_t(A)=E(E(M_t1_A|\mathcal{F}_s))=E(M_s1_A)=\tilde P_s(A).$$
We assume that
$$\mathcal{F}=\sigma\left(\cup_{t\ge0}\mathcal{F}_t\right),$$
so that there is a unique probability measure $\tilde P$ on $\mathcal{F}$ with $\tilde P|_{\mathcal{F}_t}=\tilde P_t$.

Theorem 33.3 (Girsanov) If $Y$ is a continuous local martingale under $P$, then
$$\tilde Y_t=Y_t-\langle Y,X\rangle_t$$
is a continuous local martingale under $\tilde P$, and quadratic covariations are unchanged:
$$\langle\tilde Y^1,\tilde Y^2\rangle=\langle Y^1,Y^2\rangle.$$
Corollary 33.4 Suppose
$$X_t=\int_0^t\langle h_s,dB_s\rangle,$$
where $B$ is a $d$-dimensional Brownian motion and $h$ is predictable with $\int_0^t|h_s|^2\,ds<\infty$, and suppose $M_t=\exp(X_t-\frac12\langle X\rangle_t)$ is a martingale. Then
$$\tilde B_t=B_t-\int_0^th_s\,ds$$
is a $d$-dimensional Brownian motion under $\tilde P$.

Proof: Since $\langle B^i,X\rangle_t=\int_0^th_s^i\,ds$, Theorem 33.3 gives $\tilde B^i\in\mathcal{M}^{2,c}_{loc}$ under $\tilde P$. As
$$\langle\tilde B^i,\tilde B^j\rangle=\langle B^i,B^j\rangle=\delta_{ij}t,$$
the conclusion follows from Theorem 32.1.
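A simulation sketch of the change of measure for constant $h$ (so $X_t=hB_t$ and $\langle X\rangle_t=h^2t$; under $\tilde P$, $B_t\sim N(ht,t)$). This is not from the notes; the parameter values and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(10)
t, h, reps = 1.0, 1.0, 500_000                 # constant integrand h
Bt = rng.normal(0.0, np.sqrt(t), size=reps)
M = np.exp(h * Bt - 0.5 * h**2 * t)            # M_t = exp(X_t - <X>_t / 2)
print("E(M_t) =", M.mean(), "  (= 1: Novikov's condition holds)")
print("E~[B_t - h t]     =", np.mean(M * (Bt - h * t)), "  (~ 0)")
print("E~[(B_t - h t)^2] =", np.mean(M * (Bt - h * t) ** 2), "  (~ t =", t, ")")
```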