Stochastic Calculus For Discontinuous Processes
Richard F. Bass
Department of Mathematics
University of Connecticut
October 2, 1998
These notes are (c) 1998 by Richard Bass. They may be used for personal use or class use, but not for commercial purposes.
1. Preliminaries.
Let {F_t} be a right continuous complete filtration. We define the predictable σ-field P as the σ-field on [0, ∞) × Ω generated by all processes of the form

    H(ω, s) = Σ_{i=1}^n K_i(ω) 1_{(a_i, b_i]}(s),   (1.1)

where each K_i is F_{a_i}-measurable and bounded. A process that is measurable with respect to P is called predictable. One can show that left continuous processes, i.e., processes whose paths are left continuous, are measurable with respect to P. If a process has paths that are right continuous with left limits, we say the process is rcll (or càdlàg in French). If X_t is a process that is right continuous with left limits, we set X_{t−} = lim_{s→t, s<t} X_s and ΔX_t = X_t − X_{t−}. Thus ΔX_t is the size of the jump of X at time t.
We say a stopping time T is predictable (also known as previsible) if there exist stopping times T_n increasing to T with T_n < T on the set (T > 0). An example is T = inf{t : B_t = 1}, where B_t is a Brownian motion; in this case we can take T_n = inf{t : B_t = 1 − 1/n}.
A stopping time T is totally inaccessible if for all predictable stopping times S we have P(T = S) = 0. An example is T = inf{t : P_t = 1}, where P_t is a Poisson process. To see this, suppose S is predictable. Let M be a large integer. It is easy to see that S ∧ M is predictable. If we show P(T = S ∧ M) = 0 for each M, then P(S = T) = 0. So we may suppose S is bounded. Take S_n increasing to S with S_n < S. Note P_t = 0 if t < T and P_T = 1. The process P_t − t is a martingale, so by optional stopping,

    E P_{S_n∧T} = E(S_n ∧ T) → E(S ∧ T) = E P_{S∧T} = P(S ≥ T).

On the other hand, E P_{S_n∧T} = P(S_n ≥ T) → P(S > T). So P(S > T) = P(S ≥ T), and hence P(S = T) = 0.
Note that if T is predictable, then 1_{[0,T(ω))} = lim 1_{[0,T_n(ω)]}. But 1_{[0,T_n(ω)]} is a left continuous process, hence P measurable.
Lemma 1.1. Suppose a(t) is nondecreasing and right continuous with a(0) = 0. Then

    a(t)² = ∫_0^t [(a(t) − a(s)) + (a(t) − a(s−))] da(s).
Proof. Since an increasing function has at most countably many discontinuities, we may write a(t) = a_c(t) + Σ_{i=1}^∞ a_i(t), where a_c is continuous and nondecreasing, and each a_i is constant except for a single jump. Since a is the limit, uniformly on [0, t], of a_c + Σ_{i=1}^n a_i as n → ∞, it suffices to prove our result when there are only finitely many jumps. Since we can approximate a_c uniformly on [0, t] by step functions, it suffices to prove our result when a is of the form Σ_{j=1}^m b_j, where each b_j is constant except for a single jump. But for such a we can prove our result by a direct calculation.
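For a purely discontinuous nondecreasing a, the integral in Lemma 1.1 is just a sum over the jumps, and the identity can be checked numerically. A minimal sketch (the jump times and sizes below are arbitrary illustrative choices, not from the text):

```python
import numpy as np

# Numerical check of Lemma 1.1 for a pure-jump nondecreasing a with a(0) = 0:
#   a(t)^2 = int_0^t [(a(t) - a(s)) + (a(t) - a(s-))] da(s).
# Hypothetical jump times and sizes:
jump_times = np.array([0.3, 0.5, 0.7, 0.9])
jump_sizes = np.array([1.0, 0.5, 2.0, 0.25])
t = 1.0

a_t = jump_sizes[jump_times <= t].sum()          # a(t)
a_right = np.cumsum(jump_sizes)                  # a(s) at each jump time (right limit)
a_left = a_right - jump_sizes                    # a(s-) at each jump time (left limit)

# da(s) puts mass jump_sizes[j] at jump_times[j]; evaluate the integrand there.
rhs = (((a_t - a_right) + (a_t - a_left)) * jump_sizes)[jump_times <= t].sum()
print(a_t**2, rhs)                               # both print 14.0625
```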
We will need the following.
Lemma 1.2. Suppose A_t is a predictable nondecreasing process with A_0 = 0. Suppose X and Y are uniformly integrable rcll processes, not necessarily adapted, such that E X_S ≤ E Y_S for every stopping time S, with X_∞ = Y_∞ = 0 and E ∫_0^∞ Y_{s−} dA_s < ∞. Then

    E ∫_0^∞ X_{s−} dA_s ≤ E ∫_0^∞ Y_{s−} dA_s.
Proof. Suppose first that A_s(ω) is either identically zero as a function of s or else has a single jump of size b. Let S = inf{t : A_t = b}; note S = ∞ on those paths that have no jump. Since A_t is predictable, S is predictable, and there exist stopping times S_n increasing up to S with S_n < S. Then E X_{S_n} ≤ E Y_{S_n}, and taking a limit, E X_{S−} ≤ E Y_{S−}. Multiplying by b, this is the same as E ∫_0^∞ X_{s−} dA_s ≤ E ∫_0^∞ Y_{s−} dA_s.

By linearity, the inequality holds if A_t is the sum of finitely many processes of the above form. So it remains to show that every such predictable A can be written as the limit of sums of finitely many such processes. Since increasing processes have countably many jumps, it suffices to consider continuous A_s. If A_s is continuous, let S_1 = inf{t : A_t ≥ ε} and S_{i+1} = inf{t > S_i : A_t − A_{S_i} ≥ ε} for i = 1, 2, .... If A^i_s = ε 1_{(s≥S_i)}, then taking the limit first as n → ∞ of Σ_{i=1}^n A^i_s, and then the limit as ε → 0, we approximate A_s.
2. Decomposition of martingales.
Suppose A_t is a bounded increasing process. Then trivially A_t is a submartingale, and by the Doob–Meyer decomposition (applied to A_t) there exists a predictable increasing process Ã_t such that A_t − Ã_t is a martingale. We call Ã_t the compensator of A_t.
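The classical example to keep in mind: if P_t is a Poisson process with rate λ, the compensator of the increasing process A_t = P_t is the deterministic process λt, so that P_t − λt is a martingale. A quick Monte Carlo sanity check of E[P_t − λt] = 0, with illustrative parameter values:

```python
import numpy as np

# For a Poisson process P with rate lam, the compensator of A_t = P_t is lam*t,
# so P_t - lam*t is a martingale; in particular its mean is 0 at every t.
rng = np.random.default_rng(0)
lam, t, n_paths = 2.0, 3.0, 100_000
P_t = rng.poisson(lam * t, size=n_paths)        # P_t ~ Poisson(lam*t)
print((P_t - lam * t).mean())                   # close to 0
```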
Lemma 2.1. If A_∞ is bounded by K, then E Ã_∞² ≤ 2K².
Proof. We fix t_0, let X_t = A_∞ − A_t and Y_t = K for t < t_0 + 1, and X_t = Y_t = 0 for t ≥ t_0 + 1. By Lemma 1.1,

    Ã_{t_0}² = ∫_0^{t_0} [(Ã_{t_0} − Ã_s) + (Ã_{t_0} − Ã_{s−})] dÃ_s ≤ 2 ∫_0^{t_0} (Ã_{t_0} − Ã_{s−}) dÃ_s ≤ 2 ∫_0^∞ (Ã_∞ − Ã_{s−}) dÃ_s.

Since A − Ã is a martingale, E[Ã_∞ − Ã_S] = E[A_∞ − A_S] = E X_S ≤ E Y_S for every stopping time S, so by Lemma 1.2,

    E Ã_{t_0}² ≤ 2 E ∫_0^∞ (Ã_∞ − Ã_{s−}) dÃ_s ≤ 2 E ∫_0^∞ Y_{s−} dÃ_s ≤ 2K E Ã_∞ = 2K E A_∞ ≤ 2K².

Now let t_0 → ∞.
Since N_{s−} is predictable, by linearity and taking limits, E ∫_0^∞ N_{s−} dM_s = 0. By hypothesis, E ∫_0^∞ ΔN_s dM_s = E Σ_s ΔN_s ΔM_s = 0, so E ∫_0^∞ N_s dM_s = 0. On the other hand, by conditioning on F_s we see

    E M_∞ N_∞ = E ∫_0^∞ N_s dM_s.

So E M_∞ N_∞ = 0.

If we apply the above to N_{t∧T}, we have E M_∞ N_T = 0. If we then condition on F_T,

    E M_T N_T = E[ N_T E[M_∞ | F_T] ] = E[N_T M_∞] = 0.
Let M_t be a square integrable rcll martingale, so that E M_∞² < ∞. By Doob's inequality we have E[sup_{0≤s<∞} M_s²] < ∞. For each i, let T_{i1} = inf{t : |ΔM_t| ∈ [2^{−i}, 2^{−i+1})}, T_{i2} = inf{t > T_{i1} : |ΔM_t| ∈ [2^{−i}, 2^{−i+1})}, and so on. Since M_t is right continuous with left limits, T_{ij} → ∞ as j → ∞. So M_t has at most countably many jumps. We order them as S_1, S_2, .... Let A_i(t) = ΔM_{S_i} 1_{(t≥S_i)} and M_i(t) = A_i(t) − Ã_i(t).
It is worth noting that from the general theory of processes, if S_i is totally inaccessible, then Ã_i(t) will be continuous, while if S_i is predictable, then by virtue of the fact that M_t is a martingale, we see that Ã_i(t) is identically zero.
Theorem 2.3. Each M_i is square integrable. If M_t^c = M_t − Σ_{i=1}^∞ M_i(t), then M^c is a continuous square integrable martingale, and M^c is orthogonal to each M_i.
Proof. We have already seen that each M_i is square integrable. By the orthogonality lemma, the M_i are mutually orthogonal, and also M_t − Σ_{i=1}^n M_i(t) is orthogonal to M_1, ..., M_n for each n.

We will show M_t − Σ_{i=1}^n M_i(t) converges in L². Then by Doob's inequality, sup_s (M_s − Σ_{i=1}^n M_i(s)) will converge in L². A subsequence will converge a.s., so the limit will have no jumps, and consequently must be a continuous process. Moreover, it will be orthogonal to each M_i. Write S_n(t) = Σ_{i=1}^n M_i(t). By the orthogonality,

    E M_∞² = E[(M_∞ − S_n(∞)) + S_n(∞)]² = E[M_∞ − S_n(∞)]² + E S_n(∞)² ≥ E S_n(∞)².
So, using orthogonality again, the series Σ_{i=1}^∞ E M_i(∞)² converges, and for m > n,

    E[ (M_∞ − S_n(∞)) − (M_∞ − S_m(∞)) ]² = E[ Σ_{i=n+1}^m M_i(∞) ]² = Σ_{i=n+1}^m E M_i(∞)²,

which tends to 0 as n, m → ∞. Hence M_t − S_n(t) converges in L², as required.
3. Stochastic integrals.
If a(t) is a function of bounded variation with a(0) = 0 and 0 = s_0 ≤ s_1 ≤ ⋯ ≤ s_n = t is a partition of [0, t], note

    a(t)² = Σ_{i=1}^n [a(s_i)² − a(s_{i−1})²] = 2 Σ_{i=1}^n a(s_{i−1})[a(s_i) − a(s_{i−1})] + Σ_{i=1}^n [a(s_i) − a(s_{i−1})]².   (3.1)
If M_t is a square integrable martingale, then M_t² is a submartingale by Jensen's inequality. By the Doob–Meyer decomposition, there exists a predictable increasing process, denoted ⟨M⟩_t, such that M_t² − ⟨M⟩_t is a martingale. Let us define

    [M]_t = ⟨M^c⟩_t + Σ_{s≤t} |ΔM_s|².

Applying (3.1) to M_i and passing to the limit along partitions gives

    M_i(t)² = 2 ∫_0^t M_i(s−) dM_i(s) + Σ_{s≤t} |ΔM_i(s)|².   (3.2)

It is easy to check, by approximating by a Riemann sum and using the fact that M_i is a martingale, that the integral on the right in (3.2) is a martingale. So M_i²(t) − Σ_{s≤t} |ΔM_i(s)|² is a martingale. Since M_i²(t) − ⟨M_i⟩_t is a martingale, that completes the proof.
If H(ω, s) is as in (1.1) and M is square integrable, define the stochastic integral by

    N_t = ∫_0^t H_s dM_s = Σ_{i=1}^n K_i [M_{b_i∧t} − M_{a_i∧t}].
Just as in [PTA], pp. 43–44, the left-hand side will be a martingale, and just as in the proof of [PTA], Section I.5, with [·] instead of ⟨·⟩, N_t² − [N]_t is a martingale.

If H is P-measurable and E ∫_0^∞ H_s² d[M]_s < ∞, approximate H by integrands H_s^n of the form (1.1), and define N_t^n as the stochastic integral of H^n with respect to M_t. By the same proof as in [PTA], Section I.5, the martingales N_t^n converge in L². We call the limit N_t = ∫_0^t H_s dM_s. The stochastic integral is a martingale, and [N]_t = ∫_0^t H_s² d[M]_s.
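A hedged numerical illustration of the elementary integral and the identity [N]_t = ∫ H_s² d[M]_s: take M the compensated Poisson martingale M_t = P_t − λt (so [M]_t = P_t) and a single integrand term H = K 1_{(a,b]}. Then E N_∞ = 0 and E N_∞² = K²λ(b − a). The parameters below are arbitrary choices, not from the text:

```python
import numpy as np

# Elementary stochastic integral N = K (M_b - M_a) against M_t = P_t - lam*t,
# with Monte Carlo checks of E N = 0 and E N^2 = K^2 * lam * (b - a).
rng = np.random.default_rng(1)
lam, n_paths = 1.5, 200_000
K, a, b = 2.0, 0.5, 1.25                        # one integrand term: H = K on (a, b]

incr = rng.poisson(lam * (b - a), n_paths)      # P_b - P_a, an independent increment
N = K * (incr - lam * (b - a))                  # N = K (M_b - M_a)
print(N.mean(), (N**2).mean(), K**2 * lam * (b - a))   # ~0, and the last two agree
```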
4. Itô's formula.
Suppose X_t = M_t + A_t, where M_t is a square integrable martingale and A_t is a process of bounded variation with total variation integrable. We will state and prove Itô's formula in this case. The extension to semimartingales without the integrability conditions will be done later and is easy. Define ⟨X^c⟩_t to be ⟨M^c⟩_t.
Theorem 4.1. Suppose X_t is as above and f is C² on R with bounded first and second derivatives. Then

    f(X_t) = f(X_0) + ∫_0^t f′(X_{s−}) dX_s + ½ ∫_0^t f″(X_{s−}) d⟨X^c⟩_s
             + Σ_{s≤t} [f(X_s) − f(X_{s−}) − f′(X_{s−}) ΔX_s].   (4.1)
    ∫_0^t f′(X^n_{s−}) d⟨M^c⟩_s → ∫_0^t f′(X_{s−}) d⟨M^c⟩_s   in probability.
We write

    ∫_0^t f′(X^n_{s−}) dA^n_s − ∫_0^t f′(X_{s−}) dA_s
        = [ ∫_0^t f′(X^n_{s−}) dA^n_s − ∫_0^t f′(X_{s−}) dA^n_s ] + [ ∫_0^t f′(X_{s−}) dA^n_s − ∫_0^t f′(X_{s−}) dA_s ] = I_1 + I_2.

Then

    |I_1| ≤ ∫_0^t |f′(X^n_{s−}) − f′(X_{s−})| |dA^n_s|,   |I_2| ≤ ‖f′‖_∞ ∫_0^t |dA^n_s − dA_s|,

both of which tend to 0 as n → ∞.
We also write

    ∫_0^t f′(X^n_{s−}) dM^n_s − ∫_0^t f′(X_{s−}) dM_s
        = [ ∫_0^t f′(X^n_{s−}) dM^n_s − ∫_0^t f′(X_{s−}) dM^n_s ] + [ ∫_0^t f′(X_{s−}) dM^n_s − ∫_0^t f′(X_{s−}) dM_s ] = I_3 + I_4.

Here

    I_4 = Σ_{i=n+1}^∞ ∫_0^t f′(X_{s−}) dM_i(s),

so

    E I_4² ≤ ‖f′‖_∞² Σ_{i=n+1}^∞ E [M_i]_∞ ≤ ‖f′‖_∞² Σ_{i=n+1}^∞ E M_i(∞)²,

which goes to 0 as n → ∞.
Finally,

    I_5 = Σ_{s≤t} [f(X_s) − f(X_{s−}) − f′(X_{s−}) ΔX_s] − Σ_{s≤t} [f(X^n_s) − f(X^n_{s−}) − f′(X^n_{s−}) ΔX^n_s]

is a sum of terms of the form f(X_{S_i}) − f(X_{S_i−}) − f′(X_{S_i−}) ΔX_{S_i} with i = n+1, n+2, ...; by Taylor's theorem each such term is bounded by a constant (depending on ‖f′‖_∞ and ‖f″‖_∞) times |ΔX_{S_i}|² ∧ |ΔX_{S_i}|. If |ΔX_{S_i}| ≤ 1, then

    |ΔX_{S_i}|² ≤ 2|ΔM_{S_i}|² + 2|ΔA_{S_i}|²,

while if |ΔX_{S_i}| > 1,

    |ΔX_{S_i}| ≤ 2|ΔM_{S_i}|² + 2|ΔA_{S_i}|.

Since Σ_{i=1}^∞ |ΔM_{S_i}|² ≤ [M]_∞ < ∞ and Σ_{i=1}^∞ |ΔA_{S_i}| < ∞, then I_5 tends to 0 as n → ∞.
This completes the proof.
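For a pure-jump path the formula can be verified exactly, term by term, since ⟨X^c⟩ = 0 and both the stochastic integral and the jump sum reduce to finite sums. A minimal sketch with f(x) = x² and hypothetical jump sizes:

```python
import numpy as np

# Pathwise check of Ito's formula for f(x) = x**2 and a pure-jump path X
# (so <X^c> = 0): f(X_t) - f(X_0) should equal
#   int_0^t f'(X_{s-}) dX_s  +  sum_{s<=t} [f(X_s) - f(X_{s-}) - f'(X_{s-}) * jump].
rng = np.random.default_rng(2)
jumps = rng.normal(size=50)                     # hypothetical jump sizes
X_right = np.cumsum(jumps)                      # X at each jump time (X_0 = 0)
X_left = X_right - jumps                        # X_{s-}

lhs = X_right[-1]**2 - 0.0
stoch_int = (2 * X_left * jumps).sum()          # int 2 X_{s-} dX_s
jump_corr = (X_right**2 - X_left**2 - 2 * X_left * jumps).sum()
print(lhs, stoch_int + jump_corr)               # equal up to rounding
```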
The following corollary is very useful.
8
Corollary 4.2. If X and Y are as above, then

    X_t Y_t = X_0 Y_0 + ∫_0^t X_{s−} dY_s + ∫_0^t Y_{s−} dX_s + [X, Y]_t.

In particular,

    X_t² = X_0² + 2 ∫_0^t X_{s−} dX_s + [X]_t.
Lemma 5.3. If M is a local martingale with M_0 = 0, then there exist stopping times T_n ↑ ∞ that strongly reduce M.

Proof. Let R_n be a sequence of stopping times reducing M. Let

    S_{nm} = R_n ∧ inf{t : E[|M_{R_n}| | F_t] ≥ m}.

Arrange the stopping times S_{nm} into a single sequence U_n and let T_n = U_1 ∨ ⋯ ∨ U_n. In view of the preceding lemmas, we need to show that each U_i strongly reduces M, or that S_{nm} does for each n and m.
Let Y_t = E[|M_{R_n}| | F_t]. Y is bounded by m on [0, S_{nm}). We write

    E[|M_{S_{nm}}| | F_t] 1_{(t<S_{nm})} = E[ |E[M_{R_n} | F_{S_{nm}}]| | F_t ] 1_{(t<S_{nm})}
        ≤ E[ E[|M_{R_n}| | F_{S_{nm}}] | F_t ] 1_{(t<S_{nm})}
        = E[|M_{R_n}| | F_{S_{nm}∧t}] 1_{(t<S_{nm})}
        = Y_{S_{nm}∧t} 1_{(t<S_{nm})} = Y_t 1_{(t<S_{nm})} ≤ m.
Theorem 5.4. Suppose M is a local martingale. Then there exist stopping times T_n ↑ ∞ such that M_{t∧T_n} = U_t^n + V_t^n, where U^n is a square integrable martingale and V^n is a process of bounded variation whose total variation is integrable. Moreover, U_t^n = U_{T_n}^n and V_t^n = V_{T_n}^n for t ≥ T_n.

Proof. It suffices to prove that if M is a local martingale with M_0 = 0 and T strongly reduces M, then M_{t∧T} can be written as U + V with U and V of the described form. Thus we may assume M_t = M_T for t > T, |M_T| is integrable, and E[|M_T| | F_t] is bounded, say by K, on [0, T).
Let C_t = M_T 1_{(t≥T)} = M_t 1_{(t≥T)} and X_t = M_t 1_{(t<T)}. Define C̃ as in the proof of Theorem 4.1, let V = C − C̃, and let U = X + C̃. Then V is a martingale of bounded variation, and the expectation of the total variation is at most 2E|M_T|. We need to show U is square integrable. Since X_t is bounded by K, it suffices to show C̃_∞ is square integrable.
Let C_t⁺ = M_T⁺ 1_{(t≥T)} = M_t⁺ 1_{(t≥T)}, let X_t⁺ = M_t⁺ 1_{(t<T)}, and let C̃_t⁺ be the compensator of C_t⁺. We define C̃⁻ and X⁻ similarly. Note E[C̃⁺_∞ − C̃⁺_{S_n}] = E[C⁺_∞ − C⁺_{S_n}] ≤ E Y_{S_n} for every stopping time S_n, where Y_t = E[|M_T| | F_t] 1_{(t<T)} ≤ K. If S is predictable and the S_n increase up to S, we then have E[C̃⁺_∞ − C̃⁺_{S−}] ≤ E Y_{S−}. So by Lemmas 1.1 and 1.2,

    E (C̃⁺_∞)² ≤ 2E ∫_0^∞ [C̃⁺_∞ − C̃⁺_{s−}] dC̃⁺_s ≤ 2E ∫_0^∞ Y_{s−} dC̃⁺_s ≤ 2K E C̃⁺_∞ = 2K E C⁺_∞ ≤ 2K².

The same bound applies to C̃⁻, and hence C̃ = C̃⁺ − C̃⁻ is square integrable.
6. Semimartingales.
We define a semimartingale to be a process of the form Xt = X0 + Mt + At , where
X0 is finite, Mt is a local martingale, and At is a process whose paths have bounded
variation on [0, t] for each t.
If M_t is a local martingale, let T_n be a sequence of stopping times as in Theorem 5.4. We set M^c_{t∧T_n} = (U^n)^c_t for each n and

    [M]_{t∧T_n} = ⟨M^c⟩_{t∧T_n} + Σ_{s≤t∧T_n} ΔM_s².

It is easy to see that these definitions are independent of the decomposition of M into U^n + V^n and of which sequence of stopping times T_n reducing M we choose. We define ⟨X^c⟩_t = ⟨M^c⟩_t and similarly [X]_t.
We say an adapted process H is locally bounded if there exist stopping times S_n ↑ ∞ and constants K_n such that on [0, S_n] the process H is bounded by K_n. If X_t is a semimartingale and H is a locally bounded predictable process, define ∫_0^t H_s dX_s as follows. Let X_t = X_0 + M_t + A_t. If R_n = T_n ∧ S_n, where the T_n are as in Theorem 5.4 and the S_n are as in the definition of locally bounded, set ∫_0^{t∧R_n} H_s dM_s to be the usual stochastic integral. Define ∫_0^{t∧R_n} H_s dA_s to be the usual Lebesgue–Stieltjes integral. Define the stochastic integral with respect to X as the sum of these two. Since R_n → ∞, this defines ∫_0^t H_s dX_s for all t. One needs to check that the definition does not depend on the decomposition of X into M and A nor on the choice of stopping times R_n. See Meyer (1976) for details.
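A sketch of how the definition splits in practice: simulate one path of X = M + A with M a compensated Poisson martingale and A_t = t, and compute ∫ H_{s−} dX_s as a stochastic part plus a Lebesgue–Stieltjes part, both as left-endpoint (predictable) sums. The grid, rate, and integrand below are illustrative choices:

```python
import numpy as np

# One path of X = M + A, with M_t = P_t - lam*t and A_t = t, and the integral
# int_0^t H_{s-} dX_s = int H dM + int H dA computed with left-endpoint sums.
rng = np.random.default_rng(3)
lam, t, n = 2.0, 1.0, 10_000
dt = t / n
dP = rng.poisson(lam * dt, n)                   # Poisson increments
dM = dP - lam * dt                              # martingale increments (compensated)
dA = np.full(n, dt)                             # A_t = t, the bounded variation part
X = np.concatenate(([0.0], np.cumsum(dM + dA)))

H = np.cos(X[:-1])                              # H evaluated at the left endpoint
integral = (H * dM).sum() + (H * dA).sum()      # stochastic part + Stieltjes part
print(integral)
```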
We now state the general Itô formula.
Theorem 6.1. Suppose X is a semimartingale and f is C². Then

    f(X_t) = f(X_0) + ∫_0^t f′(X_{s−}) dX_s + ½ ∫_0^t f″(X_{s−}) d⟨X^c⟩_s
             + Σ_{s≤t} [f(X_s) − f(X_{s−}) − f′(X_{s−}) ΔX_s].
Proof. (See Meyer (1976) for details.) First suppose f has bounded first and second derivatives. Let T_n be stopping times strongly reducing M_t, let S_n = inf{t : ∫_0^t |dA_s| ≥ n}, let R_n = T_n ∧ S_n, and let X^n_t = X_{t∧R_n} − ΔA_{R_n} 1_{(t≥R_n)}. Since the total variation of A_t is bounded on [0, R_n), it follows that X^n is a semimartingale which is the sum of a square integrable martingale and a process whose total variation is integrable. We apply Theorem 4.1 to this process. X^n_t agrees with X_t on [0, R_n). As in the proof of Theorem 4.1, by looking at the jump at time R_n, both sides of Itô's formula jump the same amount at time R_n, and so Itô's formula holds for X_t on [0, R_n]. If we now only assume that f is C² but approximate f by C² functions that have bounded first and second derivatives, it is not hard to see that Itô's formula holds for f without the assumption of bounded derivatives. We finally let n → ∞.
The proof of the following corollary is similar to the proof of Itô's formula.
Corollary 6.2. If X_t = (X_t^1, ..., X_t^d) is a process taking values in R^d such that each component is a semimartingale, and f is a C² function on R^d, then

    f(X_t) = f(X_0) + Σ_{i=1}^d ∫_0^t f_i(X_{s−}) dX_s^i + ½ Σ_{i,j=1}^d ∫_0^t f_{ij}(X_{s−}) d⟨(X^i)^c, (X^j)^c⟩_s
             + Σ_{s≤t} [f(X_s) − f(X_{s−}) − Σ_{i=1}^d f_i(X_{s−}) ΔX_s^i],

where f_i and f_{ij} denote the first and second partial derivatives of f.
    Z_t = Z_0 + ∫_0^t Z_{s−} dX_s.
Proof. Since the product of finitely many functions of bounded variation which are purely discontinuous will give a function of the same type, and since there are only finitely many jumps of X_t of size larger than 1/2 or less than −1/2 in every finite time interval, it suffices to consider

    V_t′ = Π_{s≤t, |ΔX_s|≤1/2} (1 + ΔX_s) e^{−ΔX_s}.

Note

    log V_t′ = Σ_{s≤t, |ΔX_s|≤1/2} [log(1 + ΔX_s) − ΔX_s],

which is bounded in absolute value by a constant times Σ_{s≤t} ΔX_s² < ∞. It follows that V_t′ = exp(log V_t′) is a purely discontinuous process, and consequently V is also.
We apply the multivariate version of Itô's formula (Corollary 6.2). Let f(x, y) = e^x y and let Z_t = f(K_t, V_t), where K_t = X_t − ½⟨X^c⟩_t. We obtain

    Z_t − Z_0 = ∫_0^t Z_{s−} dK_s + ∫_0^t e^{K_{s−}} dV_s + ½ ∫_0^t Z_{s−} d⟨K^c⟩_s
                + Σ_{s≤t} [ΔZ_s − Z_{s−} ΔK_s − e^{K_{s−}} ΔV_s]
              = I_1 + I_2 + I_3 + I_4.

In I_1 replace dK_s by dX_s − ½ d⟨X^c⟩_s, and in I_3 replace K^c by X^c. Since V_t is purely discontinuous, I_2 = Σ_{s≤t} e^{K_{s−}} ΔV_s. To simplify I_4, note that Z_s = Z_{s−}(1 + ΔX_s) and Z_{s−} ΔK_s = Z_{s−} ΔX_s. Then I_2 + I_4 = Σ_{s≤t} [ΔZ_s − Z_{s−} ΔX_s] = 0, and adding up the four terms gives Z_t − Z_0 = ∫_0^t Z_{s−} dX_s.
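When X is a pure-jump semimartingale, ⟨X^c⟩ = 0 and Z_t reduces to Π_{s≤t}(1 + ΔX_s); in that case the equation Z_t = Z_0 + ∫ Z_{s−} dX_s can be checked exactly on a path. A sketch with hypothetical jump sizes:

```python
import numpy as np

# Check of the exponential formula on a pure-jump path (so <X^c> = 0 and
# Z_t = prod_{s<=t} (1 + jump)): Z should solve Z_t = 1 + int_0^t Z_{s-} dX_s.
rng = np.random.default_rng(4)
jumps = rng.uniform(-0.4, 0.4, size=40)         # hypothetical jump sizes of X
Z_right = np.cumprod(1 + jumps)                 # Z at each jump time, Z_0 = 1
Z_left = np.concatenate(([1.0], Z_right[:-1]))  # Z_{s-}
print(Z_right[-1], 1 + (Z_left * jumps).sum())  # equal up to rounding
```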
Since Q and P are equivalent, M_∞ > 0 a.s., and so M_t never equals zero, a.s. It is easy to see that M_T is the density of Q with respect to P on F_T.

Let L_t be the local martingale defined by

    L_t = ∫_0^t (1/M_{s−}) dM_s,

so that M is the exponential of L.
Theorem 8.1. Suppose X is a local martingale with respect to P. Then X_t − D_t is a local martingale with respect to Q, where

    D_t = ∫_0^t (1/M_s) d[X, M]_s = ∫_0^t (M_{s−}/M_s) d[X, L]_s.
Proof. We need to show that M(X − D) is a local martingale with respect to P. We see that

    d(M(X − D))_s = (X − D)_{s−} dM_s + M_{s−} dX_s − M_{s−} dD_s + d[M, X − D]_s.

The first two terms on the right are local martingales with respect to P. Since D is of bounded variation, [M, D]_t = Σ_{s≤t} ΔM_s ΔD_s = ∫_0^t ΔM_s dD_s. So M(X − D) is a local martingale plus

    [M, X]_t − ∫_0^t M_{s−} dD_s − ∫_0^t ΔM_s dD_s = [M, X]_t − ∫_0^t M_s dD_s.

Substituting in the formula for D shows that M(X − D) is a local martingale.
Consider integrands of the form

    H(ω, s, z) = Σ_{i=1}^n K_i(ω) 1_{(a_i, b_i]}(s) 1_{A_i}(z),   (9.1)

where for each i the random variable K_i is bounded and F_{a_i}-measurable and A_i ∈ S with ν((0, t] × A_i) < ∞ for all t, ν being the compensator of the point process μ. We define

    N_t = ∫_0^t ∫ H(s, z) d(μ − ν) = Σ_{i=1}^n K_i (μ − ν)( ((a_i, b_i] ∩ (0, t]) × A_i ).
Let us assume without loss of generality that the A_i are disjoint. By linearity it is easy to see that N_t is a martingale. It is also easy to see that N^c = 0 and

    [N]_t = ∫_0^t ∫ H(s, z)² μ(ds, dz).   (9.2)

Since ⟨N⟩_t must be predictable and all the jumps of N are totally inaccessible, it follows from the general theory of processes that ⟨N⟩_t is continuous. Since [N]_t − ⟨N⟩_t is a martingale, we conclude

    ⟨N⟩_t = ∫_0^t ∫ H(s, z)² ν(ds, dz).   (9.3)

For predictable H with E ∫_0^∞ ∫ H(s, z)² ν(ds, dz) < ∞ we define N_t by approximation, just as in Section 3, and again

    [N]_t = ∫_0^t ∫ H(s, z)² μ(ds, dz),   ⟨N⟩_t = ∫_0^t ∫ H(s, z)² ν(ds, dz).   (9.4)
One may think of the stochastic integral as follows: if μ has a point at time t with value z, then N jumps at this time t and the size of the jump is H(t, z).

Now consider a stochastic differential equation that has a jump component. Look at

    dX_t = σ(X_{t−}) dW_t + b(X_{t−}) dt + ∫ F(X_{t−}, z) d(μ − ν),   X_0 = x_0.   (9.5)

This means

    X_t = x_0 + ∫_0^t σ(X_{s−}) dW_s + ∫_0^t b(X_{s−}) ds + ∫_0^t ∫ F(X_{s−}, z) d(μ − ν).
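A minimal Euler-type simulation sketch for (9.5), under the simplifying assumptions that μ is a Poisson point process with rate λ and marks uniform on (0, 1) (so ν(ds, dz) = λ ds dz), and that F has mean zero in z, which makes the compensator term vanish. All coefficients below are illustrative choices, not from the text:

```python
import numpy as np

# Euler-type scheme for dX = sigma(X_)dW + b(X_)dt + F(X_, z) d(mu - nu),
# with mu a rate-lam Poisson point process whose marks z are U(0,1).
# F is chosen with int_0^1 F(x, z) dz = 0, so the compensator term is 0.
rng = np.random.default_rng(5)
sigma = lambda x: 0.3 * np.cos(x)
b = lambda x: -0.5 * x
F = lambda x, z: 0.2 * (z - 0.5) * np.sqrt(1 + x**2)
lam, x0, t, n = 4.0, 1.0, 1.0, 20_000
dt = t / n

x = x0
for _ in range(n):
    dW = rng.normal(scale=np.sqrt(dt))
    n_jumps = rng.poisson(lam * dt)             # points of mu in this time step
    jump = sum(F(x, rng.uniform()) for _ in range(n_jumps))
    x += sigma(x) * dW + b(x) * dt + jump       # compensator term vanishes here
print(x)                                        # one simulated value of X_t
```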
    b̃ = b + ∫_{(|x|≤1)} (x³/(1 + x²)) m(dx) − ∫_{(|x|>1)} (x/(1 + x²)) m(dx),

and

    E e^{iuX_t} = exp( t ∫ (e^{iux} − 1 − iux) m(dx) ).   (10.3)
Now let X_t^i be Lévy processes of the form a_i P_t^i − a_i λ_i t, where the P_t^i are independent Poisson processes with parameter λ_i. Clearly a finite sum of independent Lévy processes is a Lévy process, and so X_t = Σ_{i=1}^n X_t^i is a Lévy process. Moreover, the characteristic function of a sum of independent random variables is the product of the characteristic functions, so the characteristic function of X_t is given by (10.3), with m(dx) = Σ_{i=1}^n λ_i δ_{a_i}(dx).

Recall that if φ is the characteristic function of a random variable Y, then φ′(0) = iE Y and φ″(0) = −E Y². If X_t is as in the paragraph above, then clearly E X_t = 0, and from what we just said, we see that

    E X_t² = t ∫ x² m(dx).
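A hedged numerical check of E X_t² = t ∫ x² m(dx) for X_t = Σ_i a_i(P_t^i − λ_i t), where m(dx) = Σ_i λ_i δ_{a_i}(dx); the jump sizes and rates below are arbitrary choices:

```python
import numpy as np

# X_t = sum_i a_i (P^i_t - lam_i t) with independent Poisson P^i, so that
# m(dx) = sum_i lam_i delta_{a_i}(dx) and E X_t^2 = t * sum_i lam_i * a_i^2.
rng = np.random.default_rng(6)
a = np.array([0.5, -1.0, 2.0])                  # illustrative jump sizes
lam = np.array([3.0, 1.0, 0.25])                # illustrative rates
t, n_paths = 2.0, 200_000

P = rng.poisson(lam * t, size=(n_paths, 3))     # P^i_t for each path
X_t = ((P - lam * t) * a).sum(axis=1)
print((X_t**2).mean(), t * (lam * a**2).sum())  # close to each other
```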
Now let m(dx) be a finite measure with compact support giving no mass to the point 0, let m_n(dx) be purely atomic measures converging weakly to m(dx) with support contained in the support of m and giving no mass to 0, and let X_t^n be Lévy processes corresponding to the measures m_n. Since E(X_t^n)² = t ∫ x² m_n(dx) is uniformly bounded, we see that X_t^n converges weakly to a random variable whose characteristic function is given by (10.3). Using the bounds on the second moment, it is not hard to see that the limit, call it X_t, is again a Lévy process.
R
Next let m(dx) be a measure supported on (0, 1] with x2 m(dx) < . Let mn (dx)
be the measure m restricted to (2n , 2n+1 ]. Let Xtn be independent Levy processes whose
characteristic functions are given by (10.3) with m replaced by mn . Note E Xtn = 0 for all
n and by the independence
N
X
n=0
Xtn
2
N
X
n=0
E (Xtn )2
N Z
X
n=0
x mn (dx) =
x2 m(dx).
2N
PN
By our assumption on m, this shows that n=0 Xtn converges in L2 for each t. Call the
limit Xt . Xt has independent and stationary increments, and by Doobs inequality and
the L2 convergence, Xt will have rcll paths. If we do a similar procedure for m restricted
to [1, 0) and add the two Levy processes, we end up with a Levy process corresponding
to m.
If m is a measure supported in (1, ∞) with m(R) < ∞, we do a similar procedure starting with Lévy processes whose characteristic functions are of the form (10.2). We let m_n(dx) be the restriction of m to (2^n, 2^{n+1}], let X_t^n be the corresponding Lévy process, and form X_t = Σ_{n=0}^∞ X_t^n. Since m(R) < ∞, for each t_0 the number of times t less than t_0 at which one of the X^n jumps is finite. This shows X_t is rcll, and it is easy to then see that X_t is a Lévy process. We do a similar procedure for m supported on (−∞, −1) and add the two processes together.
Finally, suppose ∫ (x² ∧ 1) m(dx) < ∞. Let X_t^1 be a Lévy process with characteristic function given by (10.2) with m replaced by the restriction of m to [−1, 1]^c, let X_t^2 be a Lévy process with characteristic function given by (10.3) with m replaced by the restriction of m to [−1, 1], let X_t^3 = bt, and let X_t^4 be σ times a Brownian motion. Suppose the X^i are all independent. Then their sum will be a Lévy process whose characteristic function is given by (10.1).
It is clear from the construction that if m(A) < ∞ and N_t(A) = Σ_{s≤t} 1_{(ΔX_s∈A)}, then N_t(A) is a Poisson process with intensity (or parameter) m(A). A key step in the construction was the centering of the Poisson processes to get Lévy processes with characteristic functions given by (10.3). Without the centering one is forced to work only with characteristic functions given by (10.2).

We now work towards showing that every Lévy process is of the form given by (10.1). We start with the following lemma.
Lemma 10.2. If X_t is a Lévy process and T is a bounded stopping time, then X_{T+t} − X_T is a Lévy process with the same law as X_t − X_0 and independent of F_T.

Proof. Suppose first that T takes only finitely many values t_1, ..., t_n. If f is a bounded continuous function,

    E f(X_{T+t} − X_T) = Σ_{i=1}^n E[f(X_{t_i+t} − X_{t_i}); T = t_i]
        = Σ_{i=1}^n E[f(X_{t_i+t} − X_{t_i})] P(T = t_i) = E[f(X_t − X_0)].

We used the independent increments property in the second equality and the stationary increments property in the third equality.

If T does not necessarily have only finitely many values, let T_n = (k + 1)/2^n on the event {k/2^n ≤ T < (k + 1)/2^n}, apply the above to T_n, and let n → ∞.
Since f is an arbitrary bounded continuous function, this shows the law of X_{T+t} − X_T is the same as the law of X_t − X_0. A similar argument with Π_{i=1}^n f_i(X_{T+t_{i+1}} − X_{T+t_i}) shows that as processes the law of X_{T+t} − X_T is the same as the law of X_t − X_0.
≥ (1/2) P(T_i ≤ t). By induction, P(sup_{s≤t} |X_s| > 2iM) ≤ 2^{−i}, and the lemma now follows immediately.
Theorem 10.4. Suppose X_t is a Lévy process. Then there exists a measure m on R − {0} with ∫ (x² ∧ 1) m(dx) < ∞ such that the characteristic function of X_t is given by (10.1).
Proof. Let f be a strictly increasing function on R such that f(x) = x if |x| ≤ 1 and f is bounded by 2. Let

    X_t′ = X_t + Σ_{s≤t} [f(ΔX_s) − ΔX_s] 1_{(|ΔX_s|>1)}.

Since X_t is rcll, there are only finitely many jumps with size larger than 1 in any finite time interval, so the sum is finite. X_t′ is again a Lévy process, this time with bounded jumps; if we show the desired representation for X_t′, it is not hard to see that this gives the desired representation for X_t. Now let X_t″ = X_t′ − E X_t′. Again, it suffices to look at X″. So without loss of generality we may suppose X_t has jumps bounded by 2 and that it has mean 0.
Let {I_i} be an ordering of the intervals (1, 2], [−2, −1), (1/2, 1], [−1, −1/2), .... Let

    X̄_t^i = Σ_{s≤t} ΔX_s 1_{(ΔX_s∈I_i)},   X_t^i = X̄_t^i − E X̄_t^i.
st
We want to show that the X i are all independent and also that X i is independent
Pi1
of Xt j=1 Xtj . We will prove that the random variable Xti is independent of Xtj . To
19
show that they are independent as processes and to do the other cases are similar. Let i
be the characteristic function of X1i . Then Mti = exp(iuXti ti (u)) is a martingale, since
E [Mti | Fs ] = Msi E [exp(iu(Xt Xs )) (t s)i (u)] = Msi ,
using the independence and stationarity of the increments. exp(ti (u) is a process of
bounded variation, so has no martingale part. By Itos formula, we see exp(iuXti ) is
a semimartingale, and the continuous part of the martingale part is 0. So Mti has no
continuous part. We let Mtj = exp(ivXtj tj (v)) and see that Mtj is also a martingale.
By Lemma 2.2, Mti is orthogonal to Mtj , since M j does not have jumps in common with
M i . Since M0i = M0j = 1, then E [Mti Mtj ] = 1, or
E [exp(iuXti ) exp(ivXtj )] = exp(ti (u)) exp(tj (v)) = E [exp(iuXti )]E [exp(ivXtj )].
This proves the independence.
Since X_t has bounded jumps, then by Lemma 10.3, X_t has second moments. By the independence and the fact that all the X^i have mean 0,

    Σ_{i=1}^n E(X_t^i)² = E( Σ_{i=1}^n X_t^i )² ≤ E( X_t − Σ_{i=1}^n X_t^i )² + E( Σ_{i=1}^n X_t^i )² = E X_t² < ∞.
Hence

    E( Σ_{i=m}^n X_t^i )² = Σ_{i=m}^n E(X_t^i)²

tends to 0 as n, m → ∞, and thus X_t − Σ_{i=1}^N X_t^i converges in L² as N → ∞. The limit, X_t^c say, will be a Lévy process independent of all the X_t^i. Moreover, X^c has no jumps, i.e., it is continuous. By the stationary and independent increments properties and the Lindeberg–Feller theorem, we conclude that X_t^c is Gaussian. Therefore X_t^c is a Brownian motion.
To complete the proof, it remains to show that X_t^i has a characteristic function of the form (10.3) for m supported on I_i. The fact that E X_t² < ∞ will imply ∫_{[−2,2]} x² m(dx) < ∞. Fix i. Let D_n = {k/2^n : k/2^n ∈ I_i}. Thus the D_n are finite subsets of I_i increasing to a countable dense subset of I_i. Let N̄_t^{in} be the process that jumps k/2^n whenever X_t^i has a jump in the interval [k/2^n, (k + 1)/2^n). Let N_t^{in} = N̄_t^{in} − E N̄_t^{in}. As n → ∞, N_t^{in} → X_t^i, so it suffices to show that N_t^{in} has a representation of the form (10.3) with m supported in D_n.

By the fact that we are dealing with Lévy processes, the process Σ_{s≤t} 1_{(ΔN̄_s^{in} = k/2^n)} is a Poisson process. Let its intensity be λ_{kn}. If we set m_{in}(dx) = Σ_k λ_{kn} δ_{k/2^n}(dx), then we see that N̄_t^{in} has a representation of the form (10.2). It follows that N_t^{in} has a characteristic function given by (10.3) for this m_{in}. The proof is complete.