
Chapter 6

Analysis on the Poisson Space


In this chapter we give the definition of the Poisson measure on a space of configurations of a metric space $X$, and we construct an isomorphism between the Poisson measure on $X$ and the Poisson process on $\mathbb{R}_+$. From this we obtain the probabilistic interpretation of the gradient $D$ as a finite difference operator and the relation between Poisson multiple stochastic integrals and Charlier polynomials. Using the gradient and divergence operators we also derive an integration by parts characterization of Poisson measures, and other results such as deviation and concentration inequalities on the Poisson space.
6.1 Poisson Random Measures

Let $X$ be a $\sigma$-compact metric space (i.e. $X$ can be partitioned into a countable union of compact metric spaces) with a diffuse Radon measure $\sigma$. The space of configurations of $X$ is the set of Radon measures
$$\Omega^X := \Big\{ \omega = \sum_{k=0}^{n} \epsilon_{x_k} \ : \ (x_k)_{k=0}^{k=n} \subset X, \ n \in \mathbb{N} \cup \{\infty\} \Big\}, \tag{6.1.1}$$
where $\epsilon_x$ denotes the Dirac measure at $x \in X$, i.e.
$$\epsilon_x(A) = 1_A(x), \qquad A \in \mathcal{B}(X),$$
and $\Omega^X$ defined in (6.1.1) is restricted to locally finite configurations.
The configuration space $\Omega^X$ is endowed with the vague topology and its associated $\sigma$-algebra denoted by $\mathcal{F}^X$, cf. [3]. When $X$ is compact we will consider Poisson functionals of the form
$$F(\omega) = f_0 1_{\{\omega(X)=0\}} + \sum_{n=1}^\infty 1_{\{\omega(X)=n\}} f_n(x_1,\ldots,x_n), \tag{6.1.2}$$
where $f_n \in L^1(X^n, \sigma^{\otimes n})$ is symmetric in $n$ variables, $n \ge 1$. As an example,
$$F(\omega) := \omega(A), \qquad \omega \in \Omega^X,$$
N. Privault, Stochastic Analysis in Discrete and Continuous Settings,
Lecture Notes in Mathematics 1982, DOI 10.1007/978-3-642-02380-4 6,
c Springer-Verlag Berlin Heidelberg 2009
is represented using the symmetric functions
$$f_n(x_1,\ldots,x_n) = \sum_{k=1}^n 1_A(x_k), \qquad n \ge 1.$$
Our construction of the Poisson measure is inspired by that of [90].
Definition 6.1.1. In case $X$ is precompact (i.e. $X$ has a compact closure), let $\mathcal{F}^X$ denote the $\sigma$-field generated by all functionals $F$ of the form (6.1.2), and let $\pi_\sigma^X$ denote the probability measure on $(\Omega^X, \mathcal{F}^X)$ defined via
$$\mathrm{IE}_\sigma[F] = e^{-\sigma(X)} f_0 + e^{-\sigma(X)} \sum_{n=1}^\infty \frac{1}{n!} \int_X \cdots \int_X f_n(x_1,\ldots,x_n)\,\sigma(dx_1)\cdots\sigma(dx_n), \tag{6.1.3}$$
for all non-negative $F$ of the form (6.1.2).
For example, for $A$ a compact subset of $X$, the mapping $\omega \mapsto \omega(A)$ has the Poisson distribution with parameter $\sigma(A)$ under $\pi_\sigma^X$. Indeed we have
$$1_{\{\omega(A)=k\}} = \sum_{n=k}^\infty 1_{\{\omega(X)=n\}}\, f_n(x_1,\ldots,x_n),$$
with
$$f_n(x_1,\ldots,x_n) = \frac{1}{k!(n-k)!} \sum_{\pi \in \Sigma_n} 1_{A^k}(x_{\pi(1)},\ldots,x_{\pi(k)})\, 1_{(X\setminus A)^{n-k}}(x_{\pi(k+1)},\ldots,x_{\pi(n)}),$$
hence
$$\pi_\sigma(\omega(A) = k) = \mathrm{IE}_\sigma\big[1_{\{\omega(A)=k\}}\big] = e^{-\sigma(X)} \sum_{n=k}^\infty \frac{1}{k!(n-k)!}\, \sigma(A)^k\, \sigma(X \setminus A)^{n-k} = e^{-\sigma(A)} \frac{\sigma(A)^k}{k!}. \tag{6.1.4}$$
The above construction is then extended to $\sigma$-compact $X$ in the next definition.
Definition 6.1.2. In case $X$ is $\sigma$-compact we consider a countable partition $X = \bigcup_{n\in\mathbb{N}} X_n$ in compact subsets, and let
$$\Omega^X = \prod_{n=0}^\infty \Omega^{X_n}, \qquad \mathcal{F}^X = \bigotimes_{n=0}^\infty \mathcal{F}^{X_n}, \qquad \pi_\sigma^X = \bigotimes_{n=0}^\infty \pi_\sigma^{X_n}. \tag{6.1.5}$$
Note that $\pi_\sigma^X$ in Definition 6.1.2 is independent of the choice of partition made for $X$ in (6.1.5).
The argument leading to Relation (6.1.4) can be extended to $n$ variables.

Proposition 6.1.3. Let $A_1,\ldots,A_n$ be compact disjoint subsets of $X$. Under the measure $\pi_\sigma^X$ on $(\Omega^X, \mathcal{F}^X)$, the $\mathbb{N}^n$-valued vector
$$(\omega(A_1),\ldots,\omega(A_n))$$
has independent components with Poisson distributions of respective parameters
$$\sigma(A_1),\ldots,\sigma(A_n).$$
Proof. Consider a disjoint partition $A_1 \cup \cdots \cup A_n$ of $X$ and
$$F(\omega) = 1_{\{\omega(A_1)=k_1\}} \cdots 1_{\{\omega(A_n)=k_n\}} = 1_{\{\omega(X)=k_1+\cdots+k_n\}}\, f_N(x^1_1,\ldots,x^1_{k_1},\ldots,x^n_1,\ldots,x^n_{k_n}),$$
where
$$f_N(x_1,\ldots,x_N) = \sum_{\pi\in\Sigma_N} \frac{1}{k_1!\cdots k_n!}\, 1_{A_1^{k_1}}(x_{\pi(1)},\ldots,x_{\pi(k_1)}) \cdots 1_{A_n^{k_n}}(x_{\pi(k_1+\cdots+k_{n-1}+1)},\ldots,x_{\pi(N)})$$
is the symmetrization in $N = k_1 + \cdots + k_n$ variables of the function
$$(x^1_1,\ldots,x^1_{k_1},\ldots,x^n_1,\ldots,x^n_{k_n}) \longmapsto \frac{(k_1+\cdots+k_n)!}{k_1!\cdots k_n!}\, 1_{A_1^{k_1}}(x^1_1,\ldots,x^1_{k_1}) \cdots 1_{A_n^{k_n}}(x^n_1,\ldots,x^n_{k_n}),$$
hence
$$\pi_\sigma(\omega(A_1)=k_1,\ldots,\omega(A_n)=k_n) = \mathrm{IE}_\sigma[F] = e^{-(\sigma(A_1)+\cdots+\sigma(A_n))}\, \frac{\sigma(A_1)^{k_1}\cdots\sigma(A_n)^{k_n}}{k_1!\cdots k_n!}. \qquad\square$$
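The independence and Poisson-distribution properties of Proposition 6.1.3 can be checked by simulation. The following sketch (not part of the text; the space $X = [0,1]^2$, the total mass $\sigma(X) = 10$ and the sets $A_1$, $A_2$ are illustrative choices) samples configurations under $\pi_\sigma$ by first drawing $\omega(X) \sim$ Poisson$(\sigma(X))$ as in Definition 6.1.1 and then placing the points i.i.d. according to $\sigma/\sigma(X)$; the counts on two disjoint sets should then have Poisson means $\sigma(A_1) = 5$ and $\sigma(A_2) = 2.5$, equal variances, and vanishing covariance.

```python
import random
import math

def sample_configuration(total_mass, rng):
    """One configuration omega on X = [0,1]^2 under pi_sigma, with sigma
    equal to total_mass times the uniform measure: draw omega(X) by
    inverse-transform Poisson sampling, then place the points i.i.d."""
    n, p, u = 0, math.exp(-total_mass), rng.random()
    cdf = p
    while u > cdf:
        n += 1
        p *= total_mass / n
        cdf += p
    return [(rng.random(), rng.random()) for _ in range(n)]

rng = random.Random(0)
sigma_X = 10.0
in_A1 = lambda x, y: x < 0.5                    # sigma(A1) = 5
in_A2 = lambda x, y: x >= 0.5 and y < 0.5       # sigma(A2) = 2.5
counts1, counts2 = [], []
for _ in range(20000):
    omega = sample_configuration(sigma_X, rng)
    counts1.append(sum(1 for (x, y) in omega if in_A1(x, y)))
    counts2.append(sum(1 for (x, y) in omega if in_A2(x, y)))

m1 = sum(counts1) / len(counts1)                              # ~ 5
v1 = sum((c - m1) ** 2 for c in counts1) / len(counts1)       # ~ 5 (Poisson)
m2 = sum(counts2) / len(counts2)                              # ~ 2.5
cov = sum((a - m1) * (b - m2)
          for a, b in zip(counts1, counts2)) / len(counts1)   # ~ 0
```

The equality of empirical mean and variance on each set, together with the near-zero covariance across disjoint sets, is exactly the content of Proposition 6.1.3.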
When $X$ is compact, the conditional distribution of $\omega = \{x_1,\ldots,x_n\}$ given that $\omega(X) = n$ is given by the formula
$$\pi_\sigma\big(\{x_1,\ldots,x_n\} \subset A \mid \omega(X)=n\big) = \left(\frac{\sigma(A)}{\sigma(X)}\right)^n,$$
which follows from taking $f_n = 1_{A^n}$ in (6.1.2), and extends to symmetric Borel subsets of $X^n$.
In the next proposition we compute the Fourier transform of $\pi_\sigma^X$ via the Poisson stochastic integral
$$\int_X f(x)\,\omega(dx) = \sum_{x \in \omega} f(x), \qquad f \in L^1(X,\sigma).$$
In the sequel we will drop the index $X$ in $\pi_\sigma^X$.

Proposition 6.1.4. Let $f \in L^1(X,\sigma)$. We have
$$\mathrm{IE}_\pi\Big[\exp\Big(i \int_X f(x)\,\omega(dx)\Big)\Big] = \exp\Big(\int_X (e^{if(x)} - 1)\,\sigma(dx)\Big). \tag{6.1.6}$$
Proof. We first assume that $X$ is compact. We have
$$\mathrm{IE}_\pi\Big[\exp\Big(i\int_X f(x)\,\omega(dx)\Big)\Big] = e^{-\sigma(X)} \sum_{n=0}^\infty \frac{1}{n!} \int_X \cdots \int_X e^{i(f(x_1)+\cdots+f(x_n))}\,\sigma(dx_1)\cdots\sigma(dx_n)$$
$$= e^{-\sigma(X)} \sum_{n=0}^\infty \frac{1}{n!} \Big(\int_X e^{if(x)}\,\sigma(dx)\Big)^n = \exp\Big(\int_X (e^{if(x)}-1)\,\sigma(dx)\Big).$$
The extension to the $\sigma$-compact case is done using (6.1.5). $\square$
We have
$$\mathrm{IE}\Big[\int_X f(x)\,\omega(dx)\Big] = -i\,\frac{d}{d\varepsilon}\,\mathrm{IE}_\pi\Big[\exp\Big(i\varepsilon\int_X f(x)\,\omega(dx)\Big)\Big]_{\varepsilon=0} = -i\,\frac{d}{d\varepsilon}\exp\Big(\int_X (e^{i\varepsilon f(x)}-1)\,\sigma(dx)\Big)_{\varepsilon=0} = \int_X f(x)\,\sigma(dx),$$
$f \in L^1(X,\sigma)$, and similarly,
$$\mathrm{IE}\Big[\Big(\int_X f(x)(\omega(dx)-\sigma(dx))\Big)^2\Big] = \int_X |f(x)|^2\,\sigma(dx), \qquad f \in L^2(X,\sigma). \tag{6.1.7}$$
Both formulae can also be proved on simple functions of the form $f = \sum_{i=1}^n \alpha_i 1_{A_i}$, and then extended to measurable functions under appropriate integrability conditions.
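The first-moment identity and the variance formula (6.1.7) can be verified by Monte Carlo on a concrete example. In the sketch below (the choices $X = [0,1]$, $\sigma(dx) = 3\,dx$ and $f(x) = x$ are illustrative, not from the text), one has $\int_X f\,d\sigma = 3/2$ and $\int_X f^2\,d\sigma = 1$, so the compensated integral should be centered with variance $1$.

```python
import random
import math

def sample_omega(mass, rng):
    """omega(X) ~ Poisson(mass), then points i.i.d. uniform on [0,1],
    i.e. a configuration under pi_sigma with sigma(dx) = mass * dx."""
    n, p, cdf, u = 0, math.exp(-mass), math.exp(-mass), rng.random()
    while u > cdf:
        n += 1
        p *= mass / n
        cdf += p
    return [rng.random() for _ in range(n)]

f = lambda x: x
mass = 3.0
mean_f = mass * 0.5        # int_X f dsigma = 3 * int_0^1 x dx = 1.5

rng = random.Random(1)
samples = []
for _ in range(40000):
    omega = sample_omega(mass, rng)
    # compensated Poisson stochastic integral int f d(omega - sigma)
    samples.append(sum(f(x) for x in omega) - mean_f)

m = sum(samples) / len(samples)                    # ~ 0 (centering)
v = sum(s * s for s in samples) / len(samples)     # ~ int f^2 dsigma = 1.0
```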
When $f \in L^2(X,\sigma)$ the formula (6.1.6) can be extended as
$$\mathrm{IE}_\pi\Big[\exp\Big(i\int_X f(x)(\omega(dx)-\sigma(dx))\Big)\Big] = \exp\Big(\int_X (e^{if(x)} - if(x) - 1)\,\sigma(dx)\Big).$$
Taking $X = \mathbb{R}^d$ and $\sigma$ such that
$$\int_{\mathbb{R}^d} (1 \wedge |y|_2^2)\,\sigma(dy) < \infty,$$
where $|\cdot|_2$ is the $\ell^2$ norm on $\mathbb{R}^d$, the vector of single Poisson stochastic integrals
$$F = \Big( \int_{\{|y|_2 \le 1\}} y_k\,(\omega(dy) - \sigma(dy)) + \int_{\{|y|_2 > 1\}} y_k\,\omega(dy) \Big)_{1 \le k \le n} \tag{6.1.8}$$
has the characteristic function
$$\Phi_F(u) = \mathrm{IE}[e^{i\langle F, u\rangle}] = \exp\Big( \int_{\mathbb{R}^d} \big(e^{i\langle y,u\rangle} - 1 - i\langle y,u\rangle 1_{\{|y|_2 \le 1\}}\big)\,\sigma(dy) \Big), \tag{6.1.9}$$
$u \in \mathbb{R}^d$, which is well-defined under the condition $\int_{\mathbb{R}^d}(1 \wedge |y|_2^2)\,\sigma(dy) < \infty$, from the bound
$$\big|e^{it} - it 1_{\{|t| \le 1\}} - 1\big| \le 2(1 \wedge |t|^2), \qquad t \in \mathbb{R}.$$
Relation (6.1.9) is called the Lévy–Khintchine formula, and the vector $F = (F_1,\ldots,F_n)$ is said to have an $n$-dimensional infinitely divisible distribution with Lévy measure $\sigma$.
Denote by $\pi^X_{\sigma,\alpha}$ the thinning of order $\alpha \in (0,1)$ of the Poisson measure $\pi^X_\sigma$, i.e. $\pi^X_{\sigma,\alpha}$ is obtained by independently keeping, resp. removing, each configuration point with probability $\alpha$, resp. $1-\alpha$. The next proposition is a classical result on the thinning of Poisson measures.

Proposition 6.1.5. Let $\alpha \in (0,1)$. We have $\pi^X_{\sigma,\alpha} = \pi^X_{\alpha\sigma}$, i.e. $\pi^X_{\sigma,\alpha}$ is the Poisson measure with intensity $\alpha\sigma(dx)$ on $\Omega^X$.
Proof. It suffices to treat the case where $X$ is compact. We have
$$\mathrm{IE}_{\sigma,\alpha}\Big[\exp\Big(i\int_X f(x)\,\omega(dx)\Big)\Big] = e^{-\sigma(X)} \sum_{n=0}^\infty \sum_{k=0}^n \frac{\alpha^k}{n!} \binom{n}{k} \Big(\int_X e^{if(x)}\,\sigma(dx)\Big)^k (\sigma(X))^{n-k} (1-\alpha)^{n-k}$$
$$= e^{-\sigma(X)} \sum_{n=0}^\infty \frac{1}{n!} \Big( \alpha\int_X e^{if(x)}\,\sigma(dx) + (1-\alpha)\sigma(X) \Big)^n = e^{\alpha\int_X (e^{if(x)}-1)\,\sigma(dx)}.$$
In terms of probabilities, for all compact $A \in \mathcal{B}(X)$ we have
$$\pi^X_{\sigma,\alpha}(\omega(A)=n) = e^{-\sigma(A)} \sum_{k=n}^\infty \frac{\sigma(A)^k}{k!}\, \alpha^n (1-\alpha)^{k-n} \binom{k}{n}$$
$$= e^{-\sigma(A)} \frac{(\alpha\sigma(A))^n}{n!} \sum_{k=n}^\infty \frac{\sigma(A)^{k-n}}{(k-n)!} (1-\alpha)^{k-n} = e^{-\alpha\sigma(A)} \frac{(\alpha\sigma(A))^n}{n!}. \qquad\square$$
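Proposition 6.1.5 is easy to test empirically: thinning a Poisson count and checking that the result is again Poisson with the thinned intensity. The following sketch (illustrative parameters $\sigma(A) = 4$ and $\alpha = 0.3$, not from the text) keeps each of the $\omega(A)$ points independently with probability $\alpha$ and compares the empirical mean, variance and $\mathbb{P}(\cdot = 0)$ with those of a Poisson law of parameter $\alpha\sigma(A) = 1.2$.

```python
import random
import math

def poisson(mu, rng):
    """Inverse-transform sampling of a Poisson(mu) random variable."""
    n, p, cdf, u = 0, math.exp(-mu), math.exp(-mu), rng.random()
    while u > cdf:
        n += 1
        p *= mu / n
        cdf += p
    return n

rng = random.Random(2)
sigma_A, alpha = 4.0, 0.3
thinned = []
for _ in range(30000):
    n = poisson(sigma_A, rng)                                 # omega(A) under pi_sigma
    kept = sum(1 for _ in range(n) if rng.random() < alpha)   # keep each point w.p. alpha
    thinned.append(kept)

m = sum(thinned) / len(thinned)                               # ~ alpha * sigma(A) = 1.2
v = sum((c - m) ** 2 for c in thinned) / len(thinned)         # ~ 1.2 (Poisson: mean = variance)
p0 = thinned.count(0) / len(thinned)                          # ~ exp(-1.2)
```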
Remark 6.1.6. The construction of Poisson measures with a diffuse intensity measure can be extended to not necessarily diffuse intensities.

Proof. In case $\sigma$ is not diffuse, we can identify the atoms $(x_k)_{k\in\mathbb{N}}$ of $\sigma$, which are at most countably infinite, with masses $(\sigma(\{x_k\}))_{k\in\mathbb{N}}$. Next we choose a family $(X_k, \sigma_k)_{k\in\mathbb{N}}$ of measure spaces such that $\sigma_k$ is diffuse and $\sigma_k(X_k) = \sigma(\{x_k\})$, $k \in \mathbb{N}$. Letting
$$\tilde X := (X \setminus \{x_0, x_1, \ldots\}) \cup \bigcup_{k=0}^\infty X_k \quad\text{and}\quad \tilde\sigma := \sigma_{|X\setminus\{x_0,x_1,\ldots\}} + \sum_{k=0}^\infty \sigma_k,$$
then $\tilde\sigma$ is a diffuse measure on $\tilde X$. Letting
$$f(x) = x\,1_{X\setminus\{x_0,x_1,\ldots\}}(x) + \sum_{k=0}^\infty x_k 1_{X_k}(x), \qquad x \in \tilde X,$$
then
$$\int_{\tilde X} f(x)(\omega(dx) - \tilde\sigma(dx))$$
has an infinitely divisible law with Lévy measure $\sigma$ since
$$\int_X (e^{iux} - iux - 1)\,\sigma(dx) = \int_{\tilde X} (e^{iuf(x)} - iuf(x) - 1)\,\tilde\sigma(dx). \qquad\square$$
More generally, functionals on the Poisson space on $(X,\sigma)$ of the form
$$F(\omega) = f(\omega(A_1),\ldots,\omega(A_n))$$
can be constructed on the Poisson space on $(\tilde X, \tilde\sigma)$ as
$$\tilde F(\omega) = f(\omega(B_1),\ldots,\omega(B_n)),$$
with
$$B_i = (A_i \setminus \{x_0, x_1, \ldots\}) \cup \bigcup_{k \,:\, x_k \in A_i} X_k.$$
Poisson random measures on a metric space $X$ can be constructed from the Poisson process on $\mathbb{R}_+$ by identifying $X$ with $\mathbb{R}_+$. More precisely we have the following result, see e.g. [34], p. 192.

Proposition 6.1.7. There exists a measurable map
$$\tau : X \to \mathbb{R}_+,$$
$\sigma$-a.e. bijective, such that $\lambda = \tau_*\sigma$, i.e. the Lebesgue measure $\lambda$ is the image of $\sigma$ by $\tau$.
We denote by $\tau_*\omega$ the image measure of $\omega$ by $\tau$, i.e.
$$\tau_* : \Omega^X \to \Omega \quad\text{maps}\quad \omega = \sum_{i=1}^\infty \epsilon_{x_i} \quad\text{to}\quad \tau_*\omega = \sum_{i=1}^\infty \epsilon_{\tau(x_i)}. \tag{6.1.10}$$
We have, for $A \in \mathcal{B}(\mathbb{R}_+)$:
$$\tau_*\omega(A) = \#\{x \in \omega : \tau(x) \in A\} = \#\{x \in \omega : x \in \tau^{-1}(A)\} = \omega(\tau^{-1}(A)).$$
Proposition 6.1.8. The application $\tau_* : \Omega^X \to \Omega$ maps the Poisson measure $\pi_\sigma$ on $\Omega^X$ to the Poisson measure $\pi_\lambda$ on $\Omega$.
Proof. It suffices to check that for all families $A_1,\ldots,A_n$ of disjoint Borel subsets of $\mathbb{R}_+$ and $k_1,\ldots,k_n \in \mathbb{N}$, we have
$$\pi_\sigma\big(\omega \in \Omega^X : \tau_*\omega(A_1)=k_1, \ldots, \tau_*\omega(A_n)=k_n\big) = \prod_{i=1}^n \pi_\sigma\big(\omega(\tau^{-1}(A_i)) = k_i\big)$$
$$= \exp\Big(-\sum_{i=1}^n \sigma(\tau^{-1}(A_i))\Big) \prod_{i=1}^n \frac{(\sigma(\tau^{-1}(A_i)))^{k_i}}{k_i!} = \exp\Big(-\sum_{i=1}^n \lambda(A_i)\Big) \prod_{i=1}^n \frac{(\lambda(A_i))^{k_i}}{k_i!}$$
$$= \prod_{i=1}^n \pi_\lambda(\omega(A_i) = k_i) = \pi_\lambda(\omega(A_1)=k_1,\ldots,\omega(A_n)=k_n). \qquad\square$$
Clearly, $F \mapsto F \circ \tau_*$ defines an isometry from $L^p(\Omega)$ into $L^p(\Omega^X)$, $p \ge 1$, and similarly we get that $\int_X f(\tau(x))\,\omega(dx)$ has same distribution as $\int_0^\infty f(t)\,\omega(dt)$, since
$$\mathrm{IE}_\pi\Big[\exp\Big(i\int_X f(\tau(x))\,\omega(dx)\Big)\Big] = \exp\Big(\int_X (e^{if(\tau(x))}-1)\,\sigma(dx)\Big)$$
$$= \exp\Big(\int_0^\infty (e^{if(t)}-1)\,\lambda(dt)\Big) = \mathrm{IE}_\pi\Big[\exp\Big(i\int_0^\infty f(t)\,\omega(dt)\Big)\Big].$$
Using the measurable bijection $\tau : X \to \mathbb{R}_+$, we can also restate Proposition 4.7.3 for a Poisson measure on $X$.

Corollary 6.1.9. Let $F \in \mathrm{Dom}(D)$ be such that $DF \le K$, a.s., for some $K \ge 0$, and $\|DF\|_{L^\infty(\Omega, L^2(X))} < \infty$. Then
$$\mathrm{P}(F - \mathrm{IE}[F] \ge x) \le \exp\left( -\frac{\|DF\|^2_{L^\infty(\Omega,L^2(X))}}{K^2}\, g\!\left( \frac{xK}{\|DF\|^2_{L^\infty(\Omega,L^2(X))}} \right) \right)$$
$$\le \exp\left( -\frac{x}{2K} \log\left( 1 + \frac{xK}{\|DF\|^2_{L^\infty(\Omega,L^2(X))}} \right) \right),$$
with $g(u) = (1+u)\log(1+u) - u$, $u \ge 0$. If $K = 0$ (decreasing functionals) we have
$$\mathrm{P}(F - \mathrm{IE}[F] \ge x) \le \exp\left( -\frac{x^2}{2\|DF\|^2_{L^\infty(\Omega,L^2(X))}} \right). \tag{6.1.11}$$
In particular if $F = \int_X f(y)\,\omega(dy)$ we have $\|DF\|_{L^\infty(\Omega,L^2(X))} = \|f\|_{L^2(X)}$ and if $f \le K$, a.s., then
$$\mathrm{P}\left( \int_X f(y)(\omega(dy)-\sigma(dy)) \ge x \right) \le \exp\left( -\frac{\int_X f^2(y)\,\sigma(dy)}{K^2}\, g\!\left( \frac{xK}{\int_X f^2(y)\,\sigma(dy)} \right) \right).$$
If $f \le 0$, a.s., then
$$\mathrm{P}\left( \int_X f(y)(\omega(dy)-\sigma(dy)) \ge x \right) \le \exp\left( -\frac{x^2}{2\int_X f^2(y)\,\sigma(dy)} \right).$$
This result will be recovered in Section 6.9, cf. Proposition 6.9.3 below.
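The first bound of Corollary 6.1.9 can be evaluated exactly in the simplest case $F = \omega(A) = \int_X 1_A\,d\omega$, where $\|DF\|^2 = \sigma(A)$ and the jumps of $F$ are bounded by $K = 1$: the Poisson tail $\mathrm{P}(\omega(A) - \sigma(A) \ge x)$ should then be dominated by $\exp(-\sigma(A)\,g(x/\sigma(A)))$. A small numerical check (the value $\sigma(A) = 5$ and the grid of $x$ values are illustrative):

```python
import math

def poisson_tail(lam, threshold, kmax=500):
    """P(N >= threshold) for N ~ Poisson(lam), summing the density."""
    s, p = 0.0, math.exp(-lam)
    for k in range(kmax + 1):
        if k >= threshold:
            s += p
        p *= lam / (k + 1)
    return s

def bound(lam, x, K=1.0):
    """Right-hand side of Corollary 6.1.9 for F = omega(A):
    ||DF||^2 = sigma(A) = lam, jump size K = 1."""
    g = lambda u: (1.0 + u) * math.log(1.0 + u) - u
    return math.exp(-(lam / K ** 2) * g(x * K / lam))

lam = 5.0
# P(omega(A) - sigma(A) >= x) <= exp(-lam * g(x/lam)) for each deviation x
checks = all(
    poisson_tail(lam, lam + x) <= bound(lam, x)
    for x in [1.0, 2.0, 5.0, 10.0]
)
```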
6.2 Multiple Poisson Stochastic Integrals

We start by considering the particular case of the Poisson space
$$\Omega = \Big\{ \omega = \sum_{k=1}^n \epsilon_{t_k} : 0 \le t_1 < \cdots < t_n, \ n \in \mathbb{N} \cup \{\infty\} \Big\}$$
on $X = \mathbb{R}_+$, where we drop the upper index $\mathbb{R}_+$, with intensity measure $\nu(dx) = \lambda\,dx$, $\lambda > 0$. In this case the configuration points can be arranged in an ordered fashion and the Poisson martingale of Section 2.3 can be constructed as in the next proposition.

Proposition 6.2.1. The Poisson process $(N_t)_{t\in\mathbb{R}_+}$ of Definition 2.3.1 can be constructed as
$$N_t(\omega) = \omega([0,t]), \qquad t \in \mathbb{R}_+.$$
Proof. Clearly, the paths of $(N_t)_{t\in\mathbb{R}_+}$ are piecewise continuous, càdlàg (i.e. continuous on the right with left limits), with jumps of height equal to one. Moreover, by definition of the Poisson measure on $\Omega$, the vector $(N_{t_1} - N_{t_0}, \ldots, N_{t_n} - N_{t_{n-1}})$ of Poisson process increments, $0 \le t_0 < t_1 < \cdots < t_n$, is made of independent, Poisson distributed, random variables with parameters $\lambda(t_1 - t_0), \ldots, \lambda(t_n - t_{n-1})$. Hence the law of $(N_t)_{t\in\mathbb{R}_+}$ coincides with that of the standard Poisson process defined in Corollary 2.3.5 of Section 2.3. $\square$

In other words, every configuration $\omega \in \Omega$ can be viewed as the ordered sequence $\omega = (T_k)_{k\ge 1}$ of jump times of $(N_t)_{t\in\mathbb{R}_+}$ on $\mathbb{R}_+$.
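The identification $N_t(\omega) = \omega([0,t])$ of Proposition 6.2.1 can be illustrated directly: a configuration on $\mathbb{R}_+$ is built as the ordered sequence of jump times $(T_k)_{k\ge 1}$ with i.i.d. exponential interarrivals, and counting points reproduces the Poisson process with its independent increments. The sketch below is illustrative (parameters $\lambda = 2$ and horizon $5$ are arbitrary choices).

```python
import random

def jump_times(horizon, lam, rng):
    """A configuration omega on [0, horizon] as the ordered jump times
    (T_k) of a Poisson process with intensity lam, via i.i.d.
    exponential interarrival times."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            return times
        times.append(t)

def N(t, omega):
    """N_t(omega) = omega([0, t]): number of configuration points up to t."""
    return sum(1 for s in omega if s <= t)

rng = random.Random(3)
lam, horizon = 2.0, 5.0
increments = []
for _ in range(20000):
    omega = jump_times(horizon, lam, rng)
    increments.append((N(1.0, omega), N(3.0, omega) - N(1.0, omega)))

m1 = sum(a for a, _ in increments) / len(increments)   # ~ lam * 1 = 2
m2 = sum(b for _, b in increments) / len(increments)   # ~ lam * 2 = 4
cov = sum((a - m1) * (b - m2)
          for a, b in increments) / len(increments)    # ~ 0 (independence)
```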
Applying Corollary 2.5.11 and using induction on $n \ge 1$ yields the following result.

Proposition 6.2.2. Let $f_n : \mathbb{R}_+^n \to \mathbb{R}$ be continuous with compact support in $\mathbb{R}_+^n$. Then we have the $\mathrm{P}(d\omega)$-almost sure equality
$$I_n(f_n)(\omega) = n! \int_0^\infty \int_0^{t_n^-} \cdots \int_0^{t_2^-} f_n(t_1,\ldots,t_n)\,(\omega(dt_1) - \lambda dt_1) \cdots (\omega(dt_n) - \lambda dt_n).$$
The above formula can also be written as
$$I_n(f_n) = n! \int_0^\infty \int_0^{t_n^-} \cdots \int_0^{t_2^-} f_n(t_1,\ldots,t_n)\,d(N_{t_1} - \lambda t_1) \cdots d(N_{t_n} - \lambda t_n),$$
and by symmetry of $f_n$ in $n$ variables we have
$$I_n(f_n) = \int_{\Delta_n} f_n(t_1,\ldots,t_n)\,(\omega(dt_1) - \lambda dt_1) \cdots (\omega(dt_n) - \lambda dt_n),$$
with
$$\Delta_n = \{(t_1,\ldots,t_n) \in \mathbb{R}_+^n : t_i \ne t_j, \ \forall i \ne j\}.$$
Using the mappings $\tau : X \to \mathbb{R}_+$ of Proposition 6.1.7 and $\tau_* : \Omega^X \to \Omega$ defined in (6.1.10), we can extend the construction of multiple Poisson stochastic integrals to the setting of an abstract set $X$ of indices.

Definition 6.2.3. For all $f_n \in \mathcal{C}_c(\Delta_n)$, let
$$I_n^X(f_n)(\omega) := I_n(f_n \circ \tau^{-1})(\tau_*\omega). \tag{6.2.1}$$
Letting
$$\Delta_n^X = \{(x_1,\ldots,x_n) \in X^n : x_i \ne x_j, \ \forall i \ne j\},$$
we have
$$I_n^X(f_n)(\omega) = I_n(f_n \circ \tau^{-1})(\tau_*\omega) \tag{6.2.2}$$
$$= \int_{\Delta_n} f_n(\tau^{-1}(t_1),\ldots,\tau^{-1}(t_n))\,(\tau_*\omega(dt_1) - dt_1) \cdots (\tau_*\omega(dt_n) - dt_n)$$
$$= \int_{\Delta_n^X} f_n(x_1,\ldots,x_n)\,(\omega(dx_1) - \sigma(dx_1)) \cdots (\omega(dx_n) - \sigma(dx_n)),$$
and for $g_{n+1} \in \mathcal{C}_c(\Delta_{n+1})$,
$$I_{n+1}^X(g_{n+1}) = \int_{\Delta_{n+1}^X} g_{n+1}(x_1,\ldots,x_n,x)\,(\omega(dx) - \sigma(dx))(\omega(dx_1) - \sigma(dx_1)) \cdots (\omega(dx_n) - \sigma(dx_n))$$
$$= \int_X \int_{\Delta_n^X} 1_{\{x \notin \{x_1,\ldots,x_n\}\}}\, g_{n+1}(x_1,\ldots,x_n,x)\,(\omega(dx_1) - \sigma(dx_1)) \cdots (\omega(dx_n) - \sigma(dx_n))(\omega(dx) - \sigma(dx))$$
$$= \int_X I_n^X(g_{n+1}(*, x))(\omega \setminus \{x\})\,(\omega(dx) - \sigma(dx)). \tag{6.2.3}$$
The integral $I_n^X(f_n)$ extends to symmetric functions $f_n \in L^2(X,\sigma)^{\circ n}$ via the following isometry formula.
Proposition 6.2.4. For all symmetric functions $f_n \in L^2(X,\sigma)^{\circ n}$, $g_m \in L^2(X,\sigma)^{\circ m}$, we have
$$\mathrm{IE}_\pi\big[ I_n^X(f_n)\, I_m^X(g_m) \big] = n!\, 1_{\{n=m\}}\, \langle f_n, g_m\rangle_{L^2(X,\sigma)^{\circ n}}. \tag{6.2.4}$$
Proof. Denoting $f_n(\tau^{-1}(x_1),\ldots,\tau^{-1}(x_n))$ by $(f_n \circ \tau^{-1})(x_1,\ldots,x_n)$ we have
$$\mathrm{IE}_\pi[I_n^X(f_n) I_m^X(g_m)] = \mathrm{IE}_\pi[I_n(f_n \circ \tau^{-1})(\tau_*\omega)\, I_m(g_m \circ \tau^{-1})(\tau_*\omega)]$$
$$= \mathrm{IE}_\pi[I_n(f_n \circ \tau^{-1})\, I_m(g_m \circ \tau^{-1})] = n!\, 1_{\{n=m\}}\, \langle f_n \circ \tau^{-1}, g_m \circ \tau^{-1}\rangle_{L^2(\mathbb{R}_+^n, \lambda^{\otimes n})}$$
$$= n!\, 1_{\{n=m\}}\, \langle f_n, g_m\rangle_{L^2(X^n, \sigma^{\otimes n})}. \qquad\square$$
We have the following multiplication formula, in which we again use the convention $I_0^X(f_0) = f_0$ for $f_0 \in \mathbb{R}$.

Proposition 6.2.5. We have for $u, v \in L^2(X,\sigma)$ such that $uv \in L^2(X,\sigma)$:
$$I_1^X(u)\, I_n^X(v^{\otimes n}) = I_{n+1}^X(v^{\otimes n} \circ u) + n I_n^X((uv) \circ v^{\otimes(n-1)}) + n\langle u, v\rangle_{L^2(X,\sigma)}\, I_{n-1}^X(v^{\otimes(n-1)}). \tag{6.2.5}$$

Proof. This result can be proved by direct computation from (6.2.1). Alternatively it can be proved first for $X = \mathbb{R}_+$, using stochastic calculus or directly from Proposition 4.5.1 with $\phi_t = 1$, $t \in \mathbb{R}_+$:
$$I_1(u \circ \tau^{-1})\, I_n((v \circ \tau^{-1})^{\otimes n}) = I_{n+1}\big((v \circ \tau^{-1})^{\otimes n} \circ (u \circ \tau^{-1})\big) + n I_n\big(((u \circ \tau^{-1})(v \circ \tau^{-1})) \circ (v \circ \tau^{-1})^{\otimes(n-1)}\big)$$
$$+\, n\langle u \circ \tau^{-1}, v \circ \tau^{-1}\rangle_{L^2(\mathbb{R}_+,\lambda)}\, I_{n-1}((v \circ \tau^{-1})^{\otimes(n-1)}),$$
and then extended to the general setting of metric spaces using the mapping $\tau_* : \Omega^X \to \Omega$ of Proposition 6.1.8 and Relation (6.2.2). $\square$
Similarly, using the mapping $\tau_* : \Omega^X \to \Omega$ and Proposition 4.5.6 we have
$$I_n(f_n)\, I_m(g_m) = \sum_{s=0}^{2(n\wedge m)} I_{n+m-s}(h_{n,m,s}),$$
$f_n \in L^2(X,\sigma)^{\circ n}$, $g_m \in L^2(X,\sigma)^{\circ m}$, where
$$h_{n,m,s} = \sum_{s \le 2i \le 2(s \wedge n \wedge m)} i! \binom{n}{i} \binom{m}{i} \binom{i}{s-i}\, f_n \circ_i^{s-i} g_m,$$
and $f_n \circ_k^l g_m$, $0 \le l \le k$, is the symmetrization of
$$(x_{l+1},\ldots,x_n, y_{k+1},\ldots,y_m) \longmapsto \int_{X^l} f_n(x_1,\ldots,x_n)\, g_m(x_1,\ldots,x_k, y_{k+1},\ldots,y_m)\,\sigma(dx_1) \cdots \sigma(dx_l)$$
in $n + m - k - l$ variables.

Given $f_{k_1} \in L^2(X,\sigma)^{\circ k_1}, \ldots, f_{k_d} \in L^2(X,\sigma)^{\circ k_d}$ with disjoint supports we have
$$I_n(f_{k_1} \circ \cdots \circ f_{k_d}) = \prod_{i=1}^d I_{k_i}(f_{k_i}), \tag{6.2.6}$$
for $n = k_1 + \cdots + k_d$.
Remark 6.2.6. Relation (6.2.5) implies that the linear space generated by
$$\{ I_n(f_1 \circ \cdots \circ f_n) : f_1, \ldots, f_n \in \mathcal{C}_c^\infty(X), \ n \in \mathbb{N} \}$$
coincides with the space of polynomials in first order integrals of the form $I_1(f)$, $f \in \mathcal{C}_c^\infty(X)$.

Next we turn to the relation between multiple Poisson stochastic integrals and the Charlier polynomials.
Definition 6.2.7. Let the Charlier polynomial of order $n \in \mathbb{N}$ and parameter $t \ge 0$ be defined by
$$C_0(k,t) = 1, \qquad C_1(k,t) = k - t, \qquad k \in \mathbb{R}, \ t \in \mathbb{R}_+,$$
and the recurrence relation
$$C_{n+1}(k,t) = (k - n - t)\, C_n(k,t) - nt\, C_{n-1}(k,t), \qquad n \ge 1. \tag{6.2.7}$$
Let
$$p_k(t) = e^{-t} \frac{t^k}{k!}, \qquad k \in \mathbb{N}, \ t \in \mathbb{R}_+, \tag{6.2.8}$$
denote the Poisson probability density, which satisfies the finite difference differential equation
$$\frac{\partial p_k}{\partial t}(t) = -\Delta p_k(t), \tag{6.2.9}$$
where $\Delta$ is the difference operator
$$\Delta f(k) := f(k) - f(k-1), \qquad k \in \mathbb{N}.$$
Let also
$$\psi_\lambda(k,t) = \sum_{n=0}^\infty \frac{\lambda^n}{n!}\, C_n(k,t), \qquad \lambda \in (-1,1),$$
denote the generating function of the Charlier polynomials.
Proposition 6.2.8. For all $k \in \mathbb{Z}$ and $t \in \mathbb{R}_+$ we have the relations
$$C_n(k,t) = \frac{(-1)^n t^n}{p_k(t)}\, \Delta^n p_k(t), \tag{6.2.10}$$
$$C_n(k,t) = \frac{t^n}{p_k(t)}\, \frac{\partial^n p_k}{\partial t^n}(t), \tag{6.2.11}$$
$$C_n(k+1,t) - C_n(k,t) = -\frac{\partial C_n}{\partial t}(k,t), \tag{6.2.12}$$
$$C_n(k+1,t) - C_n(k,t) = n\, C_{n-1}(k,t), \tag{6.2.13}$$
$$C_{n+1}(k,t) = k\, C_n(k-1,t) - t\, C_n(k,t), \tag{6.2.14}$$
and the generating function $\psi_\lambda(k,t)$ satisfies
$$\psi_\lambda(k,t) = e^{-\lambda t}(1+\lambda)^k, \tag{6.2.15}$$
$\lambda, t > 0$, $k \in \mathbb{N}$.
Proof. By the Definition (6.2.8) of $p_k(t)$ it follows that $\dfrac{(-1)^n t^n}{p_k(t)} \Delta^n p_k(t)$ satisfies the recurrence relation (6.2.7), i.e.
$$\frac{(-1)^{n+1} t^{n+1}}{p_k(t)}\, \Delta^{n+1} p_k(t) = (k - n - t)\, \frac{(-1)^n t^n}{p_k(t)}\, \Delta^n p_k(t) - nt\, \frac{(-1)^{n-1} t^{n-1}}{p_k(t)}\, \Delta^{n-1} p_k(t),$$
as well as its initial conditions, hence (6.2.10) holds. Relation (6.2.11) then follows from Equation (6.2.9). On the other hand, the process
$$(C_n(N_t, t))_{t\in\mathbb{R}_+} = (I_n(1_{[0,t]}^{\otimes n}))_{t\in\mathbb{R}_+}$$
is a martingale from Lemma 2.7.2 and, using Itô's formula Proposition 2.12.1, it can be written as
$$C_n(N_t, t) = I_n(1_{[0,t]}^{\otimes n}) = C_n(0,0) + \int_0^t \big(C_n(N_{s^-}+1, s) - C_n(N_{s^-}, s)\big)\, d(N_s - s)$$
$$+ \int_0^t \Big( \big(C_n(N_s+1, s) - C_n(N_s, s)\big) + \frac{\partial C_n}{\partial s}(N_s, s) \Big)\, ds$$
$$= n \int_0^t I_{n-1}(1_{[0,s]}^{\otimes(n-1)})\, d(N_s - s) = n \int_0^t C_{n-1}(N_{s^-}, s)\, d(N_s - s),$$
where the last integral is in the Stieltjes sense of Proposition 2.5.10, hence Relations (6.2.12) and (6.2.13) hold. Next, Relation (6.2.14) follows from (6.2.11) and (6.2.9) as
$$C_{n+1}(k,t) = \frac{t^{n+1}}{p_k(t)}\, \frac{\partial^{n+1} p_k}{\partial t^{n+1}}(t) = -\frac{t^{n+1}}{p_k(t)}\, \frac{\partial^n p_k}{\partial t^n}(t) + \frac{t^{n+1}}{p_k(t)}\, \frac{\partial^n p_{k-1}}{\partial t^n}(t)$$

Hmm — precisely,
$$C_{n+1}(k,t) = \frac{t^{n+1}}{p_k(t)}\, \frac{\partial^{n+1} p_k}{\partial t^{n+1}}(t) = -t\, \frac{t^n}{p_k(t)}\, \frac{\partial^n p_k}{\partial t^n}(t) + k\, \frac{t^n}{p_{k-1}(t)}\, \frac{\partial^n p_{k-1}}{\partial t^n}(t) = -t\, C_n(k,t) + k\, C_n(k-1,t),$$
using $\partial p_k/\partial t = p_{k-1} - p_k$ and $p_{k-1}(t)/p_k(t) = k/t$. Finally, using Relation (6.2.14) we have
$$\frac{\partial \psi_\lambda}{\partial \lambda}(k,t) = \sum_{n=1}^\infty \frac{\lambda^{n-1}}{(n-1)!}\, C_n(k,t) = \sum_{n=0}^\infty \frac{\lambda^n}{n!}\, C_{n+1}(k,t)$$
$$= -t \sum_{n=0}^\infty \frac{\lambda^n}{n!}\, C_n(k,t) + k \sum_{n=0}^\infty \frac{\lambda^n}{n!}\, C_n(k-1,t) = -t\, \psi_\lambda(k,t) + k\, \psi_\lambda(k-1,t),$$
$\lambda \in (-1,1)$, hence the generating function $\psi_\lambda(k,t)$ satisfies the differential equation
$$\frac{\partial \psi_\lambda}{\partial \lambda}(k,t) = -t\, \psi_\lambda(k,t) + k\, \psi_\lambda(k-1,t), \qquad \psi_0(k,t) = 1, \quad k \ge 1,$$
which yields (6.2.15) by induction on $k$. $\square$

We also have
$$\frac{\partial^k p_k}{\partial t^k}(t) = (-\Delta)^k p_k(t).$$
The next proposition links the Charlier polynomials with multiple Poisson stochastic integrals.

Proposition 6.2.9. The multiple Poisson stochastic integral of the function
$$1_{A_1}^{\otimes k_1} \circ \cdots \circ 1_{A_d}^{\otimes k_d}$$
satisfies
$$I_n\big(1_{A_1}^{\otimes k_1} \circ \cdots \circ 1_{A_d}^{\otimes k_d}\big)(\omega) = \prod_{i=1}^d C_{k_i}(\omega(A_i), \sigma(A_i)), \tag{6.2.16}$$
provided $A_1,\ldots,A_d$ are mutually disjoint compact subsets of $X$ and $n = k_1 + \cdots + k_d$.
Proof. We have
$$I_0(1_A^{\otimes 0}) = 1 = C_0(\omega(A), \sigma(A)),$$
and
$$I_1(1_A)(\omega) = \omega(A) - \sigma(A) = C_1(\omega(A), \sigma(A)).$$
On the other hand, by Proposition 6.2.5 we have the recurrence relation
$$I_1(1_B)\, I_k(1_A^{\otimes k}) = I_{k+1}(1_A^{\otimes k} \circ 1_B) + k\, I_k(1_{A\cap B} \circ 1_A^{\otimes(k-1)}) + k\, \sigma(A \cap B)\, I_{k-1}(1_A^{\otimes(k-1)}),$$
which coincides for $A = B$ with Relation (6.2.7) that defines the Charlier polynomials, hence by induction on $k \in \mathbb{N}$ we obtain
$$I_k(1_A^{\otimes k})(\omega) = C_k(\omega(A), \sigma(A)).$$
Finally from (6.2.6) we have
$$I_n\big(1_{A_1}^{\otimes k_1} \circ \cdots \circ 1_{A_d}^{\otimes k_d}\big) = \prod_{i=1}^d I_{k_i}(1_{A_i}^{\otimes k_i}) = \prod_{i=1}^d C_{k_i}(\omega(A_i), \sigma(A_i)),$$
which shows (6.2.16). $\square$
In this way we recover the orthogonality properties of the Charlier polynomials with respect to the Poisson distribution, with $t = \sigma(A)$:
$$\langle C_n(\cdot,t), C_m(\cdot,t)\rangle_{\ell^2(\mathbb{N}, p_\cdot(t))} = e^{-t} \sum_{k=0}^\infty \frac{t^k}{k!}\, C_n(k,t)\, C_m(k,t)$$
$$= \mathrm{IE}[C_n(\omega(A), t)\, C_m(\omega(A), t)] = \mathrm{IE}[I_n(1_A^{\otimes n})\, I_m(1_A^{\otimes m})] = 1_{\{n=m\}}\, n!\, t^n. \tag{6.2.17}$$
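The recurrence (6.2.7), the finite difference relation (6.2.13) and the orthogonality relation (6.2.17) are all directly checkable by computation. The sketch below (parameter $t = 2.5$ and the truncation of the $\ell^2(\mathbb{N})$ sum are illustrative choices) implements the Charlier polynomials via (6.2.7) only and verifies the other identities numerically.

```python
import math

def charlier(n, k, t):
    """Charlier polynomial C_n(k, t) via the recurrence (6.2.7):
    C_0 = 1, C_1 = k - t, C_{n+1} = (k - n - t) C_n - n t C_{n-1}."""
    c_prev, c = 1.0, k - t
    if n == 0:
        return c_prev
    for m in range(1, n):
        c_prev, c = c, (k - m - t) * c - m * t * c_prev
    return c

t = 2.5

def inner(n, m, t, kmax=200):
    """Truncated inner product <C_n, C_m> in l^2(N, p_.(t)), cf. (6.2.17)."""
    s, p = 0.0, math.exp(-t)        # p = p_0(t)
    for k in range(kmax):
        s += p * charlier(n, k, t) * charlier(m, k, t)
        p *= t / (k + 1)            # p_k(t) -> p_{k+1}(t)
    return s

orth_23 = inner(2, 3, t)            # ~ 0 by orthogonality
norm_3 = inner(3, 3, t)             # ~ 3! t^3 by (6.2.17)
# finite difference relation (6.2.13): C_n(k+1,t) - C_n(k,t) = n C_{n-1}(k,t)
fd = charlier(4, 6, t) - charlier(4, 5, t)
```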
The next lemma is the Poisson space version of Lemma 5.1.6.

Lemma 6.2.10. Let $F$ of the form
$$F(\omega) = g(\omega(A_1),\ldots,\omega(A_k)),$$
where
$$\sum_{l_1,\ldots,l_k=0}^\infty |g(l_1,\ldots,l_k)|^2\, p_{l_1}(\sigma(A_1)) \cdots p_{l_k}(\sigma(A_k)) < \infty, \tag{6.2.18}$$
and $A_1,\ldots,A_k$ are compact disjoint subsets of $X$. Then $F$ admits the chaos expansion
$$F = \sum_{n=0}^\infty I_n(f_n),$$
where for all $n \ge 1$, $I_n(f_n)$ can be written as a linear combination
$$I_n(f_n)(\omega) = P_n(\omega(A_1),\ldots,\omega(A_n), \sigma(A_1),\ldots,\sigma(A_n))$$
of multivariate Charlier polynomials
$$C_{l_1}(\omega(A_1), \sigma(A_1)) \cdots C_{l_k}(\omega(A_k), \sigma(A_k)).$$

Proof. We decompose $g$ satisfying (6.2.18) as an orthogonal series
$$g(i_1,\ldots,i_k) = \sum_{n=0}^\infty P_n(i_1,\ldots,i_n, \sigma(A_1),\ldots,\sigma(A_n)),$$
where
$$P_n(i_1,\ldots,i_k, \sigma(A_1),\ldots,\sigma(A_k)) = \sum_{l_1+\cdots+l_k=n} \alpha_{l_1,\ldots,l_k}\, C_{l_1}(i_1, \sigma(A_1)) \cdots C_{l_k}(i_k, \sigma(A_k))$$
is a linear combination of multivariate Charlier polynomials of degree $n$ which is identified to the multiple stochastic integral
$$P_n(\omega(A_1),\ldots,\omega(A_n), \sigma(A_1),\ldots,\sigma(A_n)) = \sum_{l_1+\cdots+l_k=n} \alpha_{l_1,\ldots,l_k}\, I_n\big(1_{A_1}^{\otimes l_1} \circ \cdots \circ 1_{A_k}^{\otimes l_k}\big)$$
$$= I_n\Big( \sum_{l_1+\cdots+l_k=n} \alpha_{l_1,\ldots,l_k}\, 1_{A_1}^{\otimes l_1} \circ \cdots \circ 1_{A_k}^{\otimes l_k} \Big) = I_n(f_n),$$
by Proposition 6.2.9. $\square$
6.3 Chaos Representation Property

The following expression of the exponential vector
$$\xi(u) = \sum_{n=0}^\infty \frac{1}{n!}\, I_n(u^{\otimes n})$$
is referred to as the Doléans exponential.

Proposition 6.3.1. For all $u \in L^2(X,\sigma)$ we have
$$\xi(u) = \exp\Big( \int_X u(x)(\omega(dx) - \sigma(dx)) \Big) \prod_{x\in\omega} \big( (1+u(x))\, e^{-u(x)} \big).$$

Proof. The case $X = \mathbb{R}_+$ is treated in Proposition 2.13.1, in particular when $\phi_t = 1$, $t \in \mathbb{R}_+$, and the extension to $X$ a metric space is obtained using the isomorphism $\tau : X \to \mathbb{R}_+$ of Proposition 6.1.7. $\square$

In particular, from Proposition 6.2.9 the exponential vector $\xi(\lambda 1_A)$ satisfies
$$\xi(\lambda 1_A) = \sum_{n=0}^\infty \frac{\lambda^n}{n!}\, C_n(\omega(A), \sigma(A)) = e^{-\lambda\sigma(A)}(1+\lambda)^{\omega(A)} = \psi_\lambda(\omega(A), \sigma(A)).$$
Next we show that the Poisson measure has the chaos representation property, i.e. every square-integrable functional on the Poisson space has an orthogonal decomposition in a series of multiple stochastic integrals.

Proposition 6.3.2. Every square-integrable random variable $F \in L^2(\Omega^X, \pi_\sigma)$ admits the Wiener–Poisson decomposition
$$F = \sum_{n=0}^\infty I_n(f_n)$$
in series of multiple stochastic integrals.

Proof. A modification of the proof of Theorem 4.1 in [50], cf. also Theorem 1.3 of [66], shows that the linear space spanned by
$$\Big\{ e^{-\int_X u(x)\,\sigma(dx)} \prod_{x\in\omega} (1+u(x)) \ : \ u \in \mathcal{C}_c(X) \Big\}$$
is dense in $L^2(\Omega^X)$. This concludes the proof since this space is contained in the closure of $\mathcal{S}$ in $L^2(\Omega^X)$. $\square$
As a corollary, the standard Poisson process $(N_t)_{t\in\mathbb{R}_+}$ has the chaos representation property.

As in the Wiener case, cf. Relation (5.1.8), Proposition 6.3.2 implies that any $F \in L^2(\Omega)$ has a chaos decomposition
$$F = \sum_{n=0}^\infty I_n(g_n),$$
where
$$I_n(g_n) = \sum_{d=1}^n \sum_{k_1+\cdots+k_d=n} \frac{1}{k_1!\cdots k_d!}\, I_n\big(u_1^{\otimes k_1} \circ \cdots \circ u_d^{\otimes k_d}\big)\, \mathrm{IE}\big[F\, I_n\big(u_1^{\otimes k_1} \circ \cdots \circ u_d^{\otimes k_d}\big)\big], \tag{6.3.1}$$
for any orthonormal basis $(u_n)_{n\in\mathbb{N}}$ of $L^2(X,\sigma)$, which completes the statement of Lemma 6.2.10.
Consider now the compound Poisson process
$$Y_t = \sum_{k=1}^{N_t} Y_k$$
of Definition 2.4.1, where $(Y_k)_{k\ge 1}$ is an i.i.d. sequence of random variables with distribution $\nu$ on $\mathbb{R}^d$ and $(N_t)_{t\in\mathbb{R}_+}$ is a Poisson process with intensity $\lambda > 0$, which can be constructed as
$$Y_t = \int_0^t \int_{\mathbb{R}^d} x\,\omega(ds, dx), \tag{6.3.2}$$
by taking $X = \mathbb{R}_+ \times \mathbb{R}^d$ and $\sigma(ds, dx) = \lambda\, ds\, \nu(dx)$. The compensated compound Poisson process
$$X_t = \Big( \sum_{k=1}^{N_t} Y_k \Big) - \lambda t\, \mathrm{IE}[Y_1], \qquad t \in \mathbb{R}_+,$$
of Section 2.4 has the chaos representation property if and only if $Y_k$ is a.s. constant, i.e. when $(M_t)_{t\in\mathbb{R}_+}$ is the compensated Poisson martingale, cf. Section 2.10 and Proposition 4.2.4.
Next we turn to some practical computations of chaos expansions in the Poisson case. In particular, from (6.2.17) we deduce the orthogonal expansion
$$1_{\{N_t - N_s = n\}} = \sum_{k=0}^\infty \frac{1}{k!\,(t-s)^k}\, \big\langle 1_{\{n\}}, C_k(\cdot, t-s)\big\rangle_{\ell^2(\mathbb{Z}, p_\cdot(t-s))}\, C_k(N_t - N_s, t-s),$$
$0 \le s \le t$, $n \in \mathbb{N}$, hence from (6.2.11):
$$1_{\{N_t - N_s = n\}} = \sum_{k=0}^\infty \frac{1}{k!}\, p_n^{(k)}(t-s)\, I_k(1_{[s,t]}^{\otimes k}), \tag{6.3.3}$$
$0 \le s \le t$, $n \in \mathbb{N}$.
From (6.3.3) we obtain for $s = 0$ and $n \ge 1$:
$$1_{[T_n,\infty)}(t) = 1_{\{N_t \ge n\}} = \sum_{k=0}^\infty \sum_{l \ge n} \frac{1}{k!}\, p_l^{(k)}(t)\, I_k(1_{[0,t]}^{\otimes k}) = \sum_{k=0}^\infty \frac{1}{k!}\, \frac{\partial^k P_n}{\partial t^k}(t)\, I_k(1_{[0,t]}^{\otimes k}), \tag{6.3.4}$$
where
$$P_n(t) = \int_0^t p_{n-1}(s)\,ds, \qquad t \in \mathbb{R}_+, \tag{6.3.5}$$
is the distribution function of $T_n$ and $p_n(s)$ is defined in (2.3.1).
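The identity $1_{[T_n,\infty)}(t) = 1_{\{N_t \ge n\}}$ behind (6.3.4) says precisely that $P_n(t) = \mathrm{P}(T_n \le t) = \mathrm{P}(N_t \ge n)$, i.e. the integral (6.3.5) of the Poisson density equals a Poisson tail probability. This can be verified numerically (the choices $n = 3$, $t = 4$, and the trapezoidal quadrature are illustrative):

```python
import math

def p(k, t):
    """Poisson density p_k(t) = e^{-t} t^k / k! (unit intensity)."""
    return math.exp(-t) * t ** k / math.factorial(k)

def P(n, t, steps=20000):
    """Distribution function of the n-th jump time T_n via (6.3.5):
    P_n(t) = int_0^t p_{n-1}(s) ds, by the trapezoidal rule."""
    h = t / steps
    s = 0.5 * (p(n - 1, 0.0) + p(n - 1, t))
    for i in range(1, steps):
        s += p(n - 1, i * h)
    return s * h

n, t = 3, 4.0
lhs = P(n, t)
rhs = 1.0 - sum(p(k, t) for k in range(n))   # P(N_t >= n) = P(T_n <= t)
```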
More generally, we have the following result.

Proposition 6.3.3. Let $f \in \mathcal{C}_b^1(\mathbb{R}_+)$ vanishing at infinity. We have
$$f(T_n) = -\sum_{k=0}^\infty \frac{1}{k!}\, I_k\Big( \int_{t_1\vee\cdots\vee t_k}^\infty f'(s)\, P_n^{(k)}(s)\,ds \Big), \tag{6.3.6}$$
where $t_1 \vee \cdots \vee t_n = \max(t_1,\ldots,t_n)$, $t_1,\ldots,t_n \in \mathbb{R}_+$.
Proof. Writing $d\tilde N_t = d(N_t - t)$ for the compensated differential, we have
$$f(T_n) = -\int_0^\infty f'(s)\, 1_{[T_n,\infty)}(s)\,ds = -\int_0^\infty f'(s) \sum_{k=0}^\infty \frac{1}{k!}\, P_n^{(k)}(s)\, I_k(1_{[0,s]}^{\otimes k})\,ds$$
$$= -\sum_{k=0}^\infty \frac{1}{k!} \int_0^\infty f'(s)\, P_n^{(k)}(s)\, I_k(1_{[0,s]}^{\otimes k})\,ds$$
$$= -\sum_{k=0}^\infty \int_0^\infty f'(s)\, P_n^{(k)}(s) \int_0^s \int_0^{t_k} \cdots \int_0^{t_2} d\tilde N_{t_1} \cdots d\tilde N_{t_k}\,ds$$
$$= -\sum_{k=0}^\infty \int_0^\infty f'(s)\, P_n^{(k)}(s) \int_0^\infty \int_0^{t_k} \cdots \int_0^{t_2} 1_{[0,s]}(t_1 \vee \cdots \vee t_k)\, d\tilde N_{t_1} \cdots d\tilde N_{t_k}\,ds$$
$$= -\sum_{k=0}^\infty \int_0^\infty \Big( \int_{t_k}^\infty f'(s)\, P_n^{(k)}(s)\,ds \Big) \int_0^{t_k} \cdots \int_0^{t_2} d\tilde N_{t_1} \cdots d\tilde N_{t_{k-1}}\, d\tilde N_{t_k}. \qquad\square$$
Note that Relation (6.3.6) can be rewritten after integration by parts on $\mathbb{R}_+$ as
$$f(T_n) = \sum_{k=0}^\infty \frac{1}{k!}\, I_k\Big( f(t_1\vee\cdots\vee t_k)\, P_n^{(k)}(t_1\vee\cdots\vee t_k) + \int_{t_1\vee\cdots\vee t_k}^\infty f(s)\, P_n^{(k+1)}(s)\,ds \Big), \tag{6.3.7}$$
and then extends to all $f \in L^2(\mathbb{R}_+, t^{n-1}e^{-t}dt)$.
Next we state a result for smooth functions of a finite number of jump times. As a convention, if $k_1 \ge 0, \ldots, k_d \ge 0$ satisfy $k_1 + \cdots + k_d = n$, we identify the tuple
$$(t^1_1,\ldots,t^1_{k_1}, t^2_1,\ldots,t^2_{k_2}, \ldots, t^d_1,\ldots,t^d_{k_d})$$
with
$$(t_1,\ldots,t_n).$$
The next result extends Proposition 6.3.3 to the multivariate case. Its proof uses only Poisson–Charlier orthogonal expansions instead of using Proposition 4.2.5 and the gradient operator $D$.

Proposition 6.3.4. Let $n_1, \ldots, n_d \in \mathbb{N}$ with $1 \le n_1 < \cdots < n_d$, and let $f \in \mathcal{C}^d_c(\Delta_d)$. The chaos expansion of $f(T_{n_1}, \ldots, T_{n_d})$ is given as
$$f(T_{n_1},\ldots,T_{n_d}) = (-1)^d \sum_{n=0}^\infty I_n(1_{\Delta_n} h_n),$$
where
$$h_n(t_1,\ldots,t_n) = \sum_{\substack{k_1+\cdots+k_d=n\\ k_1\ge 0,\ldots,k_d\ge 0}} \int_{t^d_{k_d}}^\infty \cdots \int_{t^{i+1}_1 \vee t^i_{k_i}}^\infty \cdots \int_{t^2_1 \vee t^1_{k_1}}^\infty \frac{\partial^d f}{\partial s_1 \cdots \partial s_d}(s_1,\ldots,s_d)\, K^{k_1,\ldots,k_d}_{s_1,\ldots,s_d}\, ds_1 \cdots ds_d, \tag{6.3.8}$$
with, for $0 = s_0 \le s_1 \le \cdots \le s_d$ and $k_1,\ldots,k_d \in \mathbb{N}$:
$$K^{k_1,\ldots,k_d}_{s_1,\ldots,s_d} = \sum_{\substack{m_1 \ge n_1, \ldots, m_d \ge n_d\\ 0 = m_0 \le m_1 \le \cdots \le m_d}} p^{(k_1)}_{m_1-m_0}(s_1 - s_0) \cdots p^{(k_d)}_{m_d-m_{d-1}}(s_d - s_{d-1}).$$
Proof. Let $0 = s_0 \le s_1 \le \cdots \le s_d$, and $0 = m_0 \le m_1 \le \cdots \le m_d \in \mathbb{N}$. We have from (6.3.3) and (6.2.16):
$$\prod_{i=1}^d 1_{\{N_{s_i} - N_{s_{i-1}} = m_i - m_{i-1}\}} = \sum_{n=0}^\infty \sum_{\substack{k_1+\cdots+k_d=n\\ k_1\ge 0,\ldots,k_d\ge 0}} \frac{1}{k_1!\cdots k_d!} \prod_{i=1}^d p^{(k_i)}_{m_i-m_{i-1}}(s_i - s_{i-1})\, I_{k_1}(1_{[s_0,s_1]}^{\otimes k_1}) \cdots I_{k_d}(1_{[s_{d-1},s_d]}^{\otimes k_d})$$
$$= \sum_{n=0}^\infty \sum_{\substack{k_1+\cdots+k_d=n\\ k_1\ge 0,\ldots,k_d\ge 0}} \frac{1}{k_1!\cdots k_d!} \prod_{i=1}^d p^{(k_i)}_{m_i-m_{i-1}}(s_i - s_{i-1})\, I_n\big(1_{[s_0,s_1]}^{\otimes k_1} \circ \cdots \circ 1_{[s_{d-1},s_d]}^{\otimes k_d}\big),$$
where the last equality used the assumption $s_1 \le \cdots \le s_d$. Now, with $0 = m_0 \le m_1 \le \cdots \le m_d$,
$$1_{[T_{m_1}, T_{m_1+1})}(s_1) \cdots 1_{[T_{m_d}, T_{m_d+1})}(s_d) = 1_{\{N_{s_1}=m_1\}} \cdots 1_{\{N_{s_d}=m_d\}} = 1_{\{N_{s_1}-N_{s_0}=m_1-m_0\}} \cdots 1_{\{N_{s_d}-N_{s_{d-1}}=m_d-m_{d-1}\}}.$$
Given that $s_1 \le \cdots \le s_d$, for any $i < j$ the conditions $s_i \in [T_{m_i}, T_{m_i+1})$ and $s_j \in [T_{m_j}, T_{m_j+1})$ imply $m_i \le m_j$, hence
$$\prod_{i=1}^d 1_{[T_{n_i},\infty)}(s_i) = \sum_{\substack{m_1\ge n_1,\ldots,m_d\ge n_d\\ 0=m_0\le m_1\le\cdots\le m_d}} 1_{[T_{m_1},T_{m_1+1})}(s_1) \cdots 1_{[T_{m_d},T_{m_d+1})}(s_d)$$
$$= \sum_{\substack{m_1\ge n_1,\ldots,m_d\ge n_d\\ 0=m_0\le m_1\le\cdots\le m_d}} 1_{\{N_{s_1}-N_{s_0}=m_1-m_0\}} \cdots 1_{\{N_{s_d}-N_{s_{d-1}}=m_d-m_{d-1}\}}$$
$$= \sum_{n=0}^\infty \sum_{\substack{k_1+\cdots+k_d=n\\ k_i \ge 0}} \frac{1}{k_1!\cdots k_d!}\, K^{k_1,\ldots,k_d}_{s_1,\ldots,s_d}\, I_n\big(1_{[s_0,s_1]}^{\otimes k_1} \circ \cdots \circ 1_{[s_{d-1},s_d]}^{\otimes k_d}\big).$$
Given $f \in \mathcal{C}^d_c(\Delta_d)$, using the identity
$$f(T_{n_1},\ldots,T_{n_d}) = (-1)^d \int_0^\infty \cdots \int_0^\infty 1_{[T_{n_1},\infty)}(s_1) \cdots 1_{[T_{n_d},\infty)}(s_d)\, \frac{\partial^d f}{\partial s_1 \cdots \partial s_d}(s_1,\ldots,s_d)\, ds_1 \cdots ds_d$$
$$= (-1)^d \int_{\Delta_d} 1_{[T_{n_1},\infty)}(s_1) \cdots 1_{[T_{n_d},\infty)}(s_d)\, \frac{\partial^d f}{\partial s_1 \cdots \partial s_d}(s_1,\ldots,s_d)\, ds_1 \cdots ds_d,$$
we get
$$f(T_{n_1},\ldots,T_{n_d}) = (-1)^d \sum_{n=0}^\infty \sum_{\substack{k_1+\cdots+k_d=n\\ k_i \ge 0}} \frac{1}{k_1!\cdots k_d!} \int_{\Delta_d} \frac{\partial^d f}{\partial s_1\cdots\partial s_d}(s_1,\ldots,s_d)\, K^{k_1,\ldots,k_d}_{s_1,\ldots,s_d}\, I_n\big(1_{[s_0,s_1]}^{\otimes k_1} \circ \cdots \circ 1_{[s_{d-1},s_d]}^{\otimes k_d}\big)\, ds_1 \cdots ds_d.$$
From (6.2.16), we have for $s_1 \le \cdots \le s_d$ and $k_1 \ge 0, \ldots, k_d \ge 0$:
$$I_n\big(1_{[s_0,s_1]}^{\otimes k_1} \circ \cdots \circ 1_{[s_{d-1},s_d]}^{\otimes k_d}\big) = k_1! \cdots k_d! \int_0^\infty \int_0^{t^d_{k_d}} \cdots \int_0^{t^1_2} 1_{[s_0,s_1]^{k_1}}(t^1_1,\ldots,t^1_{k_1}) \cdots 1_{[s_{d-1},s_d]^{k_d}}(t^d_1,\ldots,t^d_{k_d})\, d\tilde N_{t^1_1} \cdots d\tilde N_{t^d_{k_d}},$$
hence by exchange of deterministic and stochastic integrals we obtain
$$f(T_{n_1},\ldots,T_{n_d}) = (-1)^d \sum_{n=0}^\infty \sum_{\substack{k_1+\cdots+k_d=n\\ k_i \ge 0}} I_n\Big( 1_{\Delta_n} \int_{t^d_{k_d}}^\infty \int_{t^d_1 \vee t^{d-1}_{k_{d-1}}}^\infty \cdots \int_{t^2_1 \vee t^1_{k_1}}^\infty \frac{\partial^d f}{\partial s_1\cdots\partial s_d}(s_1,\ldots,s_d)\, K^{k_1,\ldots,k_d}_{s_1,\ldots,s_d}\, ds_1 \cdots ds_d \Big). \qquad\square$$
Remarks

i) All expressions obtained above for $f(T_1,\ldots,T_d)$, $f \in \mathcal{C}^\infty_b(\Delta_d)$, extend to $f \in L^2(\Delta_d, e^{-s_d}\,ds_1\cdots ds_d)$, i.e. to square-integrable $f(T_1,\ldots,T_d)$, by repeated integrations by parts.

ii) Chaotic decompositions on the Poisson space on the compact interval $[0,1]$ as in [79] or [80] can be obtained by considering the functional $f(1-T_1,\ldots,1-T_d)$ instead of $f(T_1,\ldots,T_d)$.
6.4 Finite Difference Gradient

In this section we study the probabilistic interpretation and the extension to the Poisson space on $X$ of the operators $D$ and $\delta$ defined in Definitions 4.1.1 and 4.1.2. Let the spaces $\mathcal{S}$ and $\mathcal{U}$ of Section 3.1 be taken equal to
$$\mathcal{S} = \Big\{ \sum_{k=0}^n I_k(f_k) : f_k \in L^4(X)^{\circ k}, \ k = 0,\ldots,n, \ n \in \mathbb{N} \Big\},$$
and
$$\mathcal{U} = \Big\{ \sum_{k=0}^n I_k(g_k(*,\cdot)) : g_k \in L^2(X)^{\circ k} \otimes L^2(X), \ k = 0,\ldots,n, \ n \in \mathbb{N} \Big\}.$$

Definition 6.4.1. Let the linear, unbounded, closable operators
$$D^X : L^2(\Omega^X, \pi_\sigma) \to L^2(\Omega^X \times X, \mathrm{P} \otimes \sigma)$$
and
$$\delta^X : L^2(\Omega^X \times X, \mathrm{P} \otimes \sigma) \to L^2(\Omega^X, \mathrm{P})$$
be defined on $\mathcal{S}$ and $\mathcal{U}$ respectively by
$$D^X_x I_n(f_n) := n\, I_{n-1}(f_n(*, x)), \tag{6.4.1}$$
$\pi_\sigma(d\omega) \otimes \sigma(dx)$-a.e., $n \in \mathbb{N}$, $f_n \in L^2(X,\sigma)^{\circ n}$, and
$$\delta^X(I_n(f_{n+1}(*, \cdot))) := I_{n+1}(\tilde f_{n+1}), \tag{6.4.2}$$
$\pi_\sigma(d\omega)$-a.s., $n \in \mathbb{N}$, $f_{n+1} \in L^2(X,\sigma)^{\circ n} \otimes L^2(X,\sigma)$, where $\tilde f_{n+1}$ denotes the symmetrization of $f_{n+1}$ in $n+1$ variables.

In particular we have
$$\delta^X(f) = I_1(f) = \int_X f(x)(\omega(dx) - \sigma(dx)), \qquad f \in L^2(X,\sigma), \tag{6.4.3}$$
and
$$\delta^X(1_A) = \omega(A) - \sigma(A), \qquad A \in \mathcal{B}(X), \tag{6.4.4}$$
and the Skorohod integral has zero expectation:
$$\mathrm{IE}[\delta^X(u)] = 0, \qquad u \in \mathrm{Dom}(\delta^X). \tag{6.4.5}$$
In case $X = \mathbb{R}_+$ we simply write $D$ and $\delta$ instead of $D^{\mathbb{R}_+}$ and $\delta^{\mathbb{R}_+}$.
Note that using the mapping $\tau$ of Proposition 6.1.7 we have the relations
$$(D_{\tau(x)} F) \circ \tau_* = D^X_x (F \circ \tau_*), \qquad \pi_\sigma(d\omega) \otimes \sigma(dx)\text{-a.e.},$$
and
$$\delta(u_{\tau^{-1}(\cdot)}) \circ \tau_* = \delta^X(u), \qquad \pi_\sigma(d\omega)\text{-a.e.}$$
From these relations and Proposition 4.1.4 we have the following proposition.

Proposition 6.4.2. For any $u \in \mathcal{U}$ we have
$$D^X_x \delta^X(u) = u(x) + \delta^X(D^X_x u). \tag{6.4.6}$$
Let $\mathrm{Dom}(D^X)$ denote the set of functionals $F : \Omega^X \to \mathbb{R}$ with the expansion
$$F = \sum_{n=0}^\infty I_n(f_n)$$
such that
$$\sum_{n=1}^\infty n!\, n\, \|f_n\|^2_{L^2(X^n, \sigma^{\otimes n})} < \infty,$$
and let $\mathrm{Dom}(\delta^X)$ denote the set of processes $u : \Omega^X \times X \to \mathbb{R}$ with the expansion
$$u(x) = \sum_{n=0}^\infty I_n(f_{n+1}(*, x)), \qquad x \in X,$$
such that
$$\sum_{n=0}^\infty (n+1)!\, \|\tilde f_{n+1}\|^2_{L^2(X^{n+1}, \sigma^{\otimes(n+1)})} < \infty.$$
The following duality relation can be obtained by transfer from Proposition 4.1.3 using Proposition 6.1.8. Here we also provide a direct proof.

Proposition 6.4.3. The operators $D^X$ and $\delta^X$ satisfy the duality relation
$$\mathrm{IE}\big[\langle D^X F, u\rangle_{L^2(X,\sigma)}\big] = \mathrm{IE}\big[F\, \delta^X(u)\big], \tag{6.4.7}$$
$F \in \mathrm{Dom}(D^X)$, $u \in \mathrm{Dom}(\delta^X)$.
Proof. The proof is identical to those of Propositions 1.8.2 and 4.1.3, and follows from the isometry formula (6.2.4). We consider $F = I_n(f_n)$ and $u_x = I_m(g_{m+1}(*, x))$, $x \in X$, $f_n \in L^2(X)^{\circ n}$, $g_{m+1} \in L^2(X)^{\circ m} \otimes L^2(X)$. We have
$$\mathrm{IE}[F\, \delta^X(u)] = \mathrm{IE}[I_{m+1}(\tilde g_{m+1})\, I_n(f_n)] = n!\, 1_{\{n=m+1\}}\, \langle f_n, \tilde g_n\rangle_{L^2(X^n)}$$
$$= n!\, 1_{\{n-1=m\}} \int_{X^n} f_n(x_1,\ldots,x_{n-1},x)\, g_n(x_1,\ldots,x_{n-1},x)\,\sigma(dx_1)\cdots\sigma(dx_{n-1})\,\sigma(dx)$$
$$= n\, 1_{\{n-1=m\}} \int_X \mathrm{IE}\big[I_{n-1}(f_n(*, x))\, I_{n-1}(g_n(*, x))\big]\,\sigma(dx)$$
$$= \mathrm{IE}\big[\langle D^X I_n(f_n), I_m(g_{m+1}(*, \cdot))\rangle_{L^2(X,\sigma)}\big] = \mathrm{IE}\big[\langle D^X F, u\rangle_{L^2(X,\sigma)}\big]. \qquad\square$$
Again, we may alternatively use the mapping $\tau : X \to \mathbb{R}_+$ to prove this proposition from Proposition 4.1.3.

Propositions 3.1.2 and 6.4.3 show in particular that $D^X$ is closable.
Propositions 3.1.2 and 6.4.3 show in particular that D
X
is closable.
The next lemma gives the probabilistic interpretation of the gradient D
X
.
Lemma 6.4.4. For any F of the form
F = f(I
1
(u
1
), . . . , I
1
(u
n
)), (6.4.8)
with u
1
, . . . , u
n
(
c
(X), and f is a bounded and continuous function, or a
polynomial on R
n
, we have F Dom(D
X
) and
D
X
x
F() = F( x) F(), P (d, dx) a.e., (6.4.9)
where as a convention we identify
X
with its support.
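Relation (6.4.9) says that on the Poisson space the gradient is a genuine finite difference: add one point at $x$ to the configuration and subtract the original value. This is easy to demonstrate concretely; the sketch below (the set $A = [0.2, 0.7)$, the function $f(n) = n^2$ and the random configuration are illustrative choices, not from the text) computes $D_x F$ for $F(\omega) = f(\omega(A))$, so that $D_x F = 2\,\omega(A) + 1$ when $x \in A$ and $D_x F = 0$ otherwise.

```python
import random

A = (0.2, 0.7)

def F(omega):
    """A Poisson functional F(omega) = f(omega(A)) with f(n) = n**2,
    omega being a finite set of configuration points in [0,1]."""
    return sum(1 for p in omega if A[0] <= p < A[1]) ** 2

def D(x, omega):
    """Finite difference gradient of Relation (6.4.9):
    D_x F(omega) = F(omega with the point x added) - F(omega)."""
    return F(omega | {x}) - F(omega)

rng = random.Random(4)
omega = {rng.random() for _ in range(6)}
nA = sum(1 for p in omega if A[0] <= p < A[1])
d_in = D(0.5, omega)    # 0.5 lies in A: omega(A) increases by one
d_out = D(0.9, omega)   # 0.9 lies outside A: F is unchanged
# expected: d_in = (nA+1)^2 - nA^2 = 2*nA + 1, and d_out = 0
```

Note that, unlike a derivation, this finite difference operator does not satisfy the Leibniz rule exactly, which is consistent with its non-local definition.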
Proof. We start by assuming that u_1 = 1_{A_1}, …, u_n = 1_{A_n}, where A_1, …, A_n are compact disjoint measurable subsets of X. In this case the proposition clearly holds for f polynomial, from Proposition 6.2.9 and Relation (6.2.16), which implies

    D^X_x I_n(1_{A_1}^{⊗k_1} ∘ ⋯ ∘ 1_{A_d}^{⊗k_d})(ω)
      = Σ_{i=1}^d 1_{A_i}(x) k_i I_{k_i−1}(1_{A_i}^{⊗(k_i−1)})(ω) Π_{j≠i} I_{k_j}(1_{A_j}^{⊗k_j})(ω)
      = Σ_{i=1}^d 1_{A_i}(x) k_i C_{k_i−1}(ω(A_i), σ(A_i)) Π_{j≠i} C_{k_j}(ω(A_j), σ(A_j))
      = Σ_{i=1}^d 1_{A_i}(x) (C_{k_i}(ω(A_i) + 1, σ(A_i)) − C_{k_i}(ω(A_i), σ(A_i))) Π_{j≠i} C_{k_j}(ω(A_j), σ(A_j))
      = Π_{i=1}^d C_{k_i}(ω(A_i) + 1_{A_i}(x), σ(A_i)) − Π_{i=1}^d C_{k_i}(ω(A_i), σ(A_i))
      = I_n(1_{A_1}^{⊗k_1} ∘ ⋯ ∘ 1_{A_d}^{⊗k_d})(ω ∪ {x}) − I_n(1_{A_1}^{⊗k_1} ∘ ⋯ ∘ 1_{A_d}^{⊗k_d})(ω),

by (6.2.13).
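The Charlier identities used in the computation above can be machine-checked; the sketch below (our own helper names, not from the book) verifies the finite difference relation C_n(k+1, t) − C_n(k, t) = n C_{n−1}(k, t) behind (6.2.16), together with the orthogonality IE[C_n(Z,t) C_m(Z,t)] = 1_{{n=m}} n! t^n for Z a Poisson random variable with intensity t.

```python
# Sketch (our own code): Charlier polynomials via the recurrence (6.2.14),
# C_{n+1}(k, t) = k C_n(k-1, t) - t C_n(k, t), with C_0 = 1, C_1(k, t) = k - t.
from fractions import Fraction
from math import exp, factorial

def charlier(n, k, t):
    if n == 0:
        return t ** 0              # keeps the type of t (Fraction or float)
    if n == 1:
        return k - t
    return k * charlier(n - 1, k - 1, t) - t * charlier(n - 1, k, t)

# exact check of C_n(k+1, t) - C_n(k, t) = n C_{n-1}(k, t) in rational arithmetic
t = Fraction(13, 10)
ok = all(charlier(n, k + 1, t) - charlier(n, k, t) == n * charlier(n - 1, k, t)
         for n in range(1, 6) for k in range(8))

def charlier_inner(n, m, lam, kmax=150):
    """Truncated sum IE[C_n(Z, lam) C_m(Z, lam)] for Z ~ Poisson(lam)."""
    s, w = 0.0, exp(-lam)          # w = e^{-lam} lam^k / k!
    for k in range(kmax):
        s += w * charlier(n, k, lam) * charlier(m, k, lam)
        w *= lam / (k + 1)
    return s
```

Orthogonality then reads charlier_inner(n, m, λ) ≈ n! λ^n for n = m and ≈ 0 otherwise, in agreement with IE[I_n(1_A^{⊗n})²] = n! σ(A)^n.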
If f ∈ C_b(R^n), from Lemma 6.2.10 the functional

    F := f(I_1(1_{A_1}), …, I_1(1_{A_n}))

has the chaotic decomposition

    F = IE[F] + Σ_{k=1}^∞ I_k(g_k),

where each I_k(g_k) is a polynomial in ω(A_1), …, ω(A_n). Let now

    Q_k := IE[F] + Σ_{l=1}^k I_l(g_l),    k ≥ 1.

The sequence (Q_k)_{k∈N} consists of polynomial functionals converging to F in L²(Ω^X). By the Abel transformation of sums

    Σ_{k=0}^∞ (f(k+1) − f(k)) C_n(k, λ) λ^k/k!
      = Σ_{k=0}^∞ f(k) (k C_n(k−1, λ) − λ C_n(k, λ)) λ^{k−1}/k!
      = (1/λ) Σ_{k=0}^∞ f(k) C_{n+1}(k, λ) λ^k/k!    (6.4.10)
we get, with λ = σ(A_i) and l = k_1 + ⋯ + k_n,

    IE[ I_l(1_{A_1}^{⊗k_1} ∘ ⋯ ∘ 1_{A_n}^{⊗k_n}) (f(I_1(1_{A_1}), …, I_1(1_{A_i}) + 1, …, I_1(1_{A_n})) − f(I_1(1_{A_1}), …, I_1(1_{A_n}))) ]
      = (1/σ(A_i)) IE[ f(I_1(1_{A_1}), …, I_1(1_{A_n})) I_{l+1}(1_{A_1}^{⊗k_1} ∘ ⋯ ∘ 1_{A_n}^{⊗k_n} ∘ 1_{A_i}) ]
      = (1/σ(A_i)) IE[ I_{l+1}(g_{l+1}) I_{l+1}(1_{A_1}^{⊗k_1} ∘ ⋯ ∘ 1_{A_n}^{⊗k_n} ∘ 1_{A_i}) ]
      = ((l+1)!/σ(A_i)) ⟨g_{l+1}, 1_{A_1}^{⊗k_1} ∘ ⋯ ∘ 1_{A_n}^{⊗k_n} ∘ 1_{A_i}⟩_{L²(X^{l+1}, σ^{⊗(l+1)})}
      = (1/σ(A_i)) IE[ ⟨D^X I_{l+1}(g_{l+1}), 1_{A_i}⟩_{L²(X,σ)} I_l(1_{A_1}^{⊗k_1} ∘ ⋯ ∘ 1_{A_n}^{⊗k_n}) ].

Hence the projection of

    f(I_1(1_{A_1}), …, I_1(1_{A_i}) + 1, …, I_1(1_{A_n})) − f(I_1(1_{A_1}), …, I_1(1_{A_n}))

on the chaos H_l of order l ∈ N is

    (1/σ(A_i)) ⟨D^X I_{l+1}(g_{l+1}), 1_{A_i}⟩_{L²(X,σ)},

and we have the chaotic decomposition

    f(I_1(1_{A_1}), …, I_1(1_{A_i}) + 1, …, I_1(1_{A_n})) − f(I_1(1_{A_1}), …, I_1(1_{A_n}))
      = (1/σ(A_i)) Σ_{k=1}^∞ ⟨D^X I_k(g_k), 1_{A_i}⟩_{L²(X,σ)},

where the series converges in L²(Ω^X).
Hence

    Σ_{i=1}^n 1_{A_i}(x) (f(I_1(1_{A_1}), …, I_1(1_{A_i}) + 1, …, I_1(1_{A_n})) − f(I_1(1_{A_1}), …, I_1(1_{A_n})))
      = Σ_{i=1}^n (1_{A_i}(x)/σ(A_i)) Σ_{k=1}^∞ ⟨D^X I_k(g_k), 1_{A_i}⟩_{L²(X,σ)}
      = Σ_{k=1}^∞ D^X_x I_k(g_k)
      = lim_{k→∞} D^X_x Q_k,

which shows that (D^X Q_k)_{k∈N} converges in L²(Ω^X × X) to

    Σ_{i=1}^n 1_{A_i}(x) (f(I_1(1_{A_1}), …, I_1(1_{A_i}) + 1, …, I_1(1_{A_n})) − f(I_1(1_{A_1}), …, I_1(1_{A_n}))).

The proof is concluded by the closability of D^X and approximation of functions in C_c(X) by linear combinations of indicator functions.    □
Definition 6.4.5. Given a mapping F : Ω^X → R, let

    ε⁺_x F : Ω^X → R  and  ε⁻_x F : Ω^X → R,

x ∈ X, be defined by

    (ε⁻_x F)(ω) = F(ω ∖ {x}),  and  (ε⁺_x F)(ω) = F(ω ∪ {x}),    ω ∈ Ω^X.

Note that Relation (6.4.9) can be written as

    D^X_x F = ε⁺_x F − F,    x ∈ X.    (6.4.11)
On the other hand, the result of Lemma 6.4.4 is clearly verified on simple functionals. For instance when F = I_1(u) is a single Poisson stochastic integral, we have

    D^X_x I_1(u)(ω) = I_1(u)(ω ∪ {x}) − I_1(u)(ω)
      = ∫_X u(y) (ω(dy) + δ_x(dy) − σ(dy)) − ∫_X u(y) (ω(dy) − σ(dy))
      = ∫_X u(y) δ_x(dy)
      = u(x).
Corollary 6.4.6. For all bounded measurable F and all A ∈ B(X) with 0 < σ(A) < ∞, we have

    IE[ ∫_A F(ω ∪ {x}) σ(dx) ] = IE[F ω(A)].    (6.4.12)

Proof. From Proposition 6.4.3, Lemma 6.4.4, and Relation (6.4.4) we have

    IE[ ∫_A F(ω ∪ {x}) σ(dx) ] = IE[ ∫_X 1_A(x) D^X_x F σ(dx) ] + σ(A) IE[F]
      = IE[F δ^X(1_A)] + σ(A) IE[F]
      = IE[F ω(A)].    □
Hence, as in [150], we get that the law of the mapping (x, ω) ↦ ω ∪ {x} under 1_A(x) σ(dx) π_σ(dω) is absolutely continuous with respect to π_σ. In particular, (ω, x) ↦ F(ω ∪ {x}) is well-defined, π_σ(dω) σ(dx)-a.e., and this justifies the extension of Lemma 6.4.4 given in the next proposition.
Proposition 6.4.7. For any F ∈ Dom(D^X) we have

    D^X_x F(ω) = F(ω ∪ {x}) − F(ω),    π_σ(dω) σ(dx)-a.e.

Proof. There exists a sequence (F_n)_{n∈N} of functionals of the form (6.4.8) such that (D^X F_n)_{n∈N} converges everywhere to D^X F on a set A_F such that (π_σ ⊗ σ)(A_F^c) = 0. For each n ∈ N there exists a measurable set B_n ⊂ Ω^X × X such that (π_σ ⊗ σ)(B_n^c) = 0 and

    D^X_x F_n(ω) = F_n(ω ∪ {x}) − F_n(ω),    (ω, x) ∈ B_n.

Taking the limit as n goes to infinity on (ω, x) ∈ A_F ∩ ⋂_{n=0}^∞ B_n, we get

    D^X_x F(ω) = F(ω ∪ {x}) − F(ω),    π_σ(dω) σ(dx)-a.e.    □
Proposition 6.4.7 also allows us to recover the annihilation property (6.4.1) of D^X, i.e.:

    D^X_x I_n(f_n) = n I_{n−1}(f_n(∗, x)),    σ(dx)-a.e.    (6.4.13)

Indeed, using the relation

    1_{Δ_n}(x_1, …, x_n) δ_x(dx_i) δ_x(dx_j) = 0,    1 ≤ i ≠ j ≤ n,

where Δ_n denotes the set of (x_1, …, x_n) ∈ X^n with pairwise distinct components, we have for f_n ∈ L²(X,σ)^{∘n}:

    D^X_x I_n(f_n)
      = D^X_x ∫_{Δ_n} f_n(x_1, …, x_n) (ω(dx_1) − σ(dx_1)) ⋯ (ω(dx_n) − σ(dx_n))
      = ∫_{Δ_n} f_n(x_1, …, x_n) Π_{i=1}^n (ω(dx_i) − σ(dx_i) + (1 − ω({x})) δ_x(dx_i))
        − ∫_{Δ_n} f_n(x_1, …, x_n) (ω(dx_1) − σ(dx_1)) ⋯ (ω(dx_n) − σ(dx_n))
      = (1 − ω({x})) Σ_{i=1}^n ∫_{Δ_{n−1}} f_n(x_1, …, x_{i−1}, x, x_{i+1}, …, x_n) Π_{1≤k≤n, k≠i} (ω(dx_k) − σ(dx_k))
      = (1 − ω({x})) Σ_{i=1}^n I_{n−1}(f_n(∗, …, ∗, x, ∗, …, ∗)),    x ∈ X,

with x in the i-th argument. Hence we have for f_n ∈ C_c(X^n):

    D^X_x I_n(f_n) = 1_{{x ∉ ω}} Σ_{i=1}^n I_{n−1}(f_n(∗, …, ∗, x, ∗, …, ∗)),    x ∈ X,

and since f_n is symmetric,

    D^X_x I_n(f_n) = 1_{{x ∉ ω}} n I_{n−1}(f_n(∗, x)),    x ∈ X,

from which we recover (6.4.13) since σ is diffuse.
Proposition 6.4.7 implies that D^X satisfies the following finite difference product rule.

Proposition 6.4.8. We have for F, G ∈ S:

    D^X_x(FG) = F D^X_x G + G D^X_x F + D^X_x F D^X_x G,    (6.4.14)

P(dω) σ(dx)-a.e.

Proof. This formula can be proved either from Propositions 4.5.2 and 6.1.8 with φ_t = 1, t ∈ R_+, when X = R_+, or directly from (6.4.9):

    D^X_x(FG)(ω) = F(ω ∪ {x}) G(ω ∪ {x}) − F(ω) G(ω)
      = F(ω)(G(ω ∪ {x}) − G(ω)) + G(ω)(F(ω ∪ {x}) − F(ω))
        + (F(ω ∪ {x}) − F(ω))(G(ω ∪ {x}) − G(ω))
      = F(ω) D^X_x G(ω) + G(ω) D^X_x F(ω) + D^X_x F(ω) D^X_x G(ω),

dP × σ(dx)-a.e.    □
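The product rule is a pointwise algebraic identity for the finite difference F(ω ∪ {x}) − F(ω), so it can be checked directly on a finite configuration. The following sketch (our own code, with arbitrary test functionals) does exactly that:

```python
# Sketch: the finite difference gradient on configurations represented as
# frozensets of points, and a pointwise check of the product rule (6.4.14).
from math import exp

def D(F, omega, x):
    """D_x F(omega) = F(omega ∪ {x}) - F(omega)."""
    return F(omega | {x}) - F(omega)

# two arbitrary functionals of a finite configuration omega ⊂ R
F = lambda omega: (sum(1 for p in omega if p < 0.5) - 0.5) ** 2
G = lambda omega: exp(-sum(omega))

omega, x = frozenset([0.1, 0.3, 0.8]), 0.4
lhs = D(lambda w: F(w) * G(w), omega, x)
rhs = F(omega) * D(G, omega, x) + G(omega) * D(F, omega, x) \
      + D(F, omega, x) * D(G, omega, x)
```

The identity holds exactly (up to floating-point rounding), for any choice of F, G and any point x, reflecting the purely combinatorial nature of (6.4.14).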
As a consequence of Proposition 6.4.7 above, when X = R_+ the Clark formula of Proposition 4.2.3 takes the following form when stated on the Poisson space.

Proposition 6.4.9. Assume that X = R_+. For any F ∈ Dom(D) we have

    F = IE[F] + ∫_0^∞ IE[F(ω ∪ {t}) − F(ω) | F_t] d(N_t − t).
In case X = R_+ the finite difference operator D : L²(Ω) → L²(Ω × R_+) can be written as

    D_t F = 1_{{N_t < n}} (f(T_1, …, T_{N_t}, t, T_{N_t+1}, …, T_{n−1}) − f(T_1, …, T_n)),    (6.4.15)

t ∈ R_+, for F = f(T_1, …, T_n), hence IE[D_t F | F_t] can be computed via the following lemma.
Lemma 6.4.10. Let X = R_+ and σ(dx) = dx. For any F of the form F = f(T_1, …, T_n) we have

    IE[D_t F | F_t]
      = 1_{{N_t < n}} ∫_t^∞ e^{−(s_n − t)} ∫_t^{s_n} ⋯ ∫_t^{s_{N_t+3}} ( f(T_1, …, T_{N_t}, t, s_{N_t+2}, …, s_n)
        − ∫_t^{s_{N_t+2}} f(T_1, …, T_{N_t}, s_{N_t+1}, …, s_n) ds_{N_t+1} ) ds_{N_t+2} ⋯ ds_n.
Proof. By application of Proposition 2.3.6 we have

    IE[D_t F | F_t]
      = 1_{{N_t < n}} IE[ f(T_1, …, T_{N_t}, t, T_{N_t+1}, …, T_{n−1}) − f(T_1, …, T_n) | F_t ]
      = 1_{{N_t < n}} ∫_t^∞ e^{−(s_n − t)} ∫_t^{s_n} ⋯ ∫_t^{s_{N_t+3}} f(T_1, …, T_{N_t}, t, s_{N_t+2}, …, s_n) ds_{N_t+2} ⋯ ds_n
        − 1_{{N_t < n}} ∫_t^∞ e^{−(s_n − t)} ∫_t^{s_n} ⋯ ∫_t^{s_{N_t+2}} f(T_1, …, T_{N_t}, s_{N_t+1}, …, s_n) ds_{N_t+1} ⋯ ds_n
      = 1_{{N_t < n}} ∫_t^∞ e^{−(s_n − t)} ∫_t^{s_n} ⋯ ∫_t^{s_{N_t+3}} ( f(T_1, …, T_{N_t}, t, s_{N_t+2}, …, s_n)
        − ∫_t^{s_{N_t+2}} f(T_1, …, T_{N_t}, s_{N_t+1}, …, s_n) ds_{N_t+1} ) ds_{N_t+2} ⋯ ds_n.    □
6.5 Divergence Operator

The adjoint δ^X of D^X satisfies the following divergence formula.

Proposition 6.5.1. Let u : X × Ω^X → R and F : Ω^X → R be such that u(·, ω), D^X_· F(ω), and u(·, ω) D^X_· F(ω) belong to L¹(X, σ), ω ∈ Ω^X. We have

    F δ^X(u) = δ^X(uF) + ⟨u, D^X F⟩_{L²(X,σ)} + δ^X(u D^X F).    (6.5.1)

The relation also holds if the series and integrals converge, or if F ∈ Dom(D^X) and u ∈ Dom(δ^X) is such that u D^X F ∈ Dom(δ^X).

Proof. Relation (6.5.1) follows by duality from Proposition 6.4.8, or from Proposition 4.5.6 and Proposition 6.1.8.    □
In the next proposition, Relation (6.5.2) can be seen as a generalization of (6.2.14) in Proposition 6.2.8:

    C_{n+1}(k, t) = k C_n(k − 1, t) − t C_n(k, t),

which is recovered by taking u = 1_A and t = σ(A). The following statement provides a connection between the Skorohod integral and the Poisson stochastic integral.

Proposition 6.5.2. For all u ∈ Dom(δ^X) we have

    δ^X(u) = ∫_X u_x(ω ∖ {x}) (ω(dx) − σ(dx)).    (6.5.2)
Proof. The statement clearly holds by (6.4.3) when g ∈ L²(X, σ) is deterministic. Next we show, using (6.5.1), that the identity also holds for a process of the form u = g I_n(f_n), g ∈ L²(X, σ), by induction on the order of the multiple stochastic integral F = I_n(f_n). From (6.5.1) we have

    δ^X(gF) = −δ^X(g D^X F) + F δ^X(g) − ⟨D^X F, g⟩_{L²(X,σ)}
      = −∫_X g(x) D^X_x F(ω ∖ {x}) ω(dx) + ∫_X g(x) D^X_x F(ω) σ(dx) + F δ^X(g) − ⟨D^X F, g⟩_{L²(X,σ)}
      = −∫_X g(x) F(ω) ω(dx) + ∫_X g(x) F(ω ∖ {x}) ω(dx) + ⟨D^X F(ω), g⟩_{L²(X,σ)}
        + F δ^X(g) − ⟨D^X F, g⟩_{L²(X,σ)}
      = −F(ω) ∫_X g(x) σ(dx) + ∫_X g(x) F(ω ∖ {x}) ω(dx)
      = ∫_X g(x) F(ω ∖ {x}) ω(dx) − ∫_X g(x) F(ω ∖ {x}) σ(dx).

We used the fact that, since σ is diffuse on X, for u : X × Ω^X → R we have

    u_x(ω ∖ {x}) = u_x(ω),    σ(dx)-a.e., ω ∈ Ω^X,

hence

    ∫_X u_x(ω ∖ {x}) σ(dx) = ∫_X u_x(ω) σ(dx),    ω ∈ Ω^X,    (6.5.3)

and

    δ^X(u) = ∫_X u_x(ω ∖ {x}) ω(dx) − ∫_X u_x(ω) σ(dx),    u ∈ Dom(δ^X).    □
Note that (6.5.1) can also be recovered from (6.5.2) using a simple trajectorial argument. For x ∈ ω we have

    ε⁻_x D^X_x F(ω) = ε⁻_x ε⁺_x F(ω) − ε⁻_x F(ω)
      = F(ω) − ε⁻_x F(ω)
      = F(ω) − F(ω ∖ {x}),

hence

    δ^X(u D^X F)(ω)
      = ∫_X u_x(ω ∖ {x}) D^X_x F(ω ∖ {x}) ω(dx) − ∫_X u_x(ω) D^X_x F(ω) σ(dx)
      = ∫_X u_x(ω ∖ {x}) F(ω) ω(dx) − ∫_X u_x(ω ∖ {x}) F(ω ∖ {x}) ω(dx) − ⟨D^X F(ω), u(ω)⟩_{L²(X,σ)}
      = F(ω) δ^X(u)(ω) − δ^X(uF)(ω) − ⟨D^X F(ω), u(ω)⟩_{L²(X,σ)},

since from Relation (6.5.3),

    F(ω) ∫_X u_x(ω ∖ {x}) σ(dx) = F(ω) ∫_X u_x(ω) σ(dx)
      = ∫_X F(ω ∖ {x}) u_x(ω ∖ {x}) σ(dx).

Relation (6.5.2) can also be proved using Relation (6.2.3).
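The trajectorial argument is elementary enough to be machine-checked. In the sketch below (our own construction), σ is replaced by a quadrature measure on [0,1] whose nodes avoid the points of ω, so that u_x(ω ∖ {x}) = u_x(ω) holds on the grid exactly as in the diffuse case, and (6.5.1) then holds pointwise up to rounding:

```python
# Sketch: pathwise divergence (6.5.2) against a quadrature approximation of
# sigma = Lebesgue measure on [0, 1], and a pointwise check of (6.5.1).
from math import exp

grid = [(i + 0.5) / 200 for i in range(200)]   # quadrature nodes, none in omega
w = 1.0 / 200                                  # quadrature weights

def D(F, omega, x):
    return F(omega | {x}) - F(omega)

def delta(u, omega):
    """delta(u)(omega) = sum_{x in omega} u(x, omega\\{x}) - int u(x, omega) sigma(dx)."""
    return (sum(u(x, omega - {x}) for x in omega)
            - w * sum(u(g, omega) for g in grid))

u = lambda x, om: x * exp(-len(om))            # sample process u_x(omega)
F = lambda om: sum(om) ** 2                    # sample functional

omega = frozenset([0.111, 0.333, 0.777])
lhs = F(omega) * delta(u, omega)
rhs = (delta(lambda x, om: u(x, om) * F(om), omega)
       + w * sum(u(g, omega) * D(F, omega, g) for g in grid)
       + delta(lambda x, om: u(x, om) * D(F, om, x), omega))
```

The agreement is exact (not merely approximate) because, once the quadrature nodes are disjoint from ω, the divergence formula reduces to the same telescoping cancellation as in the proof above.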
In case X = R_+, Proposition 6.5.2 yields Proposition 2.5.10, since u_t(ω ∖ {t}) does not depend on the presence of a jump at time t. On the other hand, Proposition 4.3.4 can be written as follows.

Proposition 6.5.3. When X = R_+, for any square-integrable adapted process (u_t)_{t∈R_+} ∈ L²_ad(Ω × R_+) we have

    δ(u) = ∫_0^∞ u_t d(N_t − t).
The following is the Skorohod isometry on the Poisson space, which follows here from Proposition 6.5.1, or from Propositions 4.3.1 and 6.1.8.

Proposition 6.5.4. For u : Ω^X × X → R measurable and sufficiently integrable we have

    IE[|δ^X(u)|²] = IE[‖u‖²_{L²(X,σ)}] + IE[ ∫_X ∫_X D^X_x u(y) D^X_y u(x) σ(dx) σ(dy) ].    (6.5.4)

Proof. Applying Proposition 6.4.2, Proposition 6.5.1 and Relation (6.4.5) we have

    IE[|δ^X(u)|²]
      = IE[ δ^X(u δ^X(u)) + ⟨u, D^X δ^X(u)⟩_{L²(X,σ)} + δ^X(u D^X δ^X(u)) ]
      = IE[ ⟨u, D^X δ^X(u)⟩_{L²(X,σ)} ]
      = IE[ ‖u‖²_{L²(X,σ)} + ∫_X u(x) δ^X(D^X_x u) σ(dx) ]
      = IE[ ‖u‖²_{L²(X,σ)} + ∫_X ∫_X D^X_y u(x) D^X_x u(y) σ(dx) σ(dy) ].    □
The Skorohod isometry (6.5.4) shows that δ^X is continuous on the subspace IL_{1,2} of L²(Ω^X × X) defined by the norm

    ‖u‖²_{1,2} = ‖u‖²_{L²(Ω^X × X)} + ‖D^X u‖²_{L²(Ω^X × X²)}.
Recall that the moment IE_λ[Z^n] of order n of a Poisson random variable Z with intensity λ can be written as

    IE_λ[Z^n] = T_n(λ),

where T_n(λ) is the Touchard polynomial of order n, defined by T_0(λ) = 1 and the recurrence relation

    T_{n+1}(λ) = λ Σ_{k=0}^n C(n,k) T_k(λ),    n ≥ 0.    (6.5.5)

Replacing the Touchard polynomial T_n(λ) by its centered version T̃_n(λ), defined by T̃_0(λ) = 1 and

    T̃_{n+1}(λ) = λ Σ_{k=0}^{n−1} C(n,k) T̃_k(λ),    n ≥ 0,    (6.5.6)

gives the moments of the centered Poisson random variable with intensity λ > 0 as

    T̃_n(λ) = IE[(Z − λ)^n],    n ≥ 0.
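Both recurrences can be illustrated numerically; the script below (our own function names) compares them with truncated moment sums of the Poisson distribution.

```python
# Sketch: Touchard polynomials T_n and their centered version via (6.5.5)-(6.5.6),
# compared with truncated sums for IE[Z^n] and IE[(Z - lam)^n], Z ~ Poisson(lam).
from math import comb, exp

def touchard(n, lam):
    T = [1.0]
    for m in range(n):                      # T_{m+1} = lam * sum_{k<=m} C(m,k) T_k
        T.append(lam * sum(comb(m, k) * T[k] for k in range(m + 1)))
    return T[n]

def centered_touchard(n, lam):
    T = [1.0]
    for m in range(n):                      # T~_{m+1} = lam * sum_{k<=m-1} C(m,k) T~_k
        T.append(lam * sum(comb(m, k) * T[k] for k in range(m)))
    return T[n]

def poisson_moment(n, lam, centered=False, kmax=80):
    s, p = 0.0, exp(-lam)                   # p = e^{-lam} lam^k / k!
    for k in range(kmax):
        s += p * ((k - lam) ** n if centered else k ** n)
        p *= lam / (k + 1)
    return s
```

Note that the empty sum in the centered recurrence forces T̃_1(λ) = 0, as it should for a centered variable, and T̃_2(λ) = λ recovers the variance.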
The next proposition extends the Skorohod isometry of Proposition 6.5.4 to higher order moments, and recovers Proposition 6.5.4 in case n = 1.

Proposition 6.5.5. We have, for u : Ω^X × X → R a sufficiently integrable process,

    IE[(δ^X(u))^{n+1}]
      = Σ_{k=0}^{n−1} C(n,k) IE[ ∫_X (u_t)^{n−k+1} (δ^X(u))^k σ(dt) ]
        + Σ_{k=1}^n C(n,k) IE[ ∫_X (u_t)^{n−k+1} ((δ^X((I + D^X_t)u))^k − (δ^X(u))^k) σ(dt) ],

for all n ≥ 1.
Proof. Using the relation

    D^X_t (δ^X(u))^n = ε⁺_t (δ^X(u))^n − (δ^X(u))^n
      = (ε⁺_t δ^X(u))^n − (δ^X(u))^n
      = (δ^X(u) + D^X_t δ^X(u))^n − (δ^X(u))^n
      = (δ^X(u) + u_t + δ^X(D^X_t u))^n − (δ^X(u))^n,    t ∈ X,

which follows from Proposition 6.4.2 and Relation (6.4.11), we get, applying the duality relation of Proposition 6.4.3,

    IE[(δ^X(u))^{n+1}]
      = IE[ ∫_X u(t) D^X_t (δ^X(u))^n σ(dt) ]
      = IE[ ∫_X u_t ((δ^X(u) + u_t + δ^X(D^X_t u))^n − (δ^X(u))^n) σ(dt) ]
      = Σ_{k=0}^n C(n,k) IE[ ∫_X (u_t)^{n−k+1} (δ^X((I + D^X_t)u))^k σ(dt) ] − IE[ (δ^X(u))^n ∫_X u_t σ(dt) ]
      = Σ_{k=0}^{n−1} C(n,k) IE[ ∫_X (u_t)^{n−k+1} (δ^X(u))^k σ(dt) ]
        + Σ_{k=1}^n C(n,k) IE[ ∫_X (u_t)^{n−k+1} ((δ^X((I + D^X_t)u))^k − (δ^X(u))^k) σ(dt) ].    □
Clearly, the moments of the compensated Poisson stochastic integral

    ∫_X f(t) (ω(dt) − σ(dt))

of f ∈ L²(X, σ) satisfy the recurrence identity

    IE[ ( ∫_X f(t)(ω(dt) − σ(dt)) )^{n+1} ]
      = Σ_{k=0}^{n−1} C(n,k) ( ∫_X (f(t))^{n−k+1} σ(dt) ) IE[ ( ∫_X f(t)(ω(dt) − σ(dt)) )^k ],

which is the analog of Relation (6.5.6) for the centered Touchard polynomials and recovers in particular the isometry formula (6.1.7) for n = 1. Similarly we can show that

    IE[ ( ∫_X f(s) ω(ds) )^{n+1} ]
      = Σ_{k=0}^n C(n,k) IE[ ( ∫_X f(s) ω(ds) )^k ] ∫_X (f(t))^{n+1−k} σ(dt),    (6.5.7)

which is the analog of (6.5.5).
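For a two-valued step function f = a 1_A + b 1_B with σ(A) = λ₁, σ(B) = λ₂, the compensated integral is a(Z₁ − λ₁) + b(Z₂ − λ₂) with Z₁, Z₂ independent Poisson variables, and the compensated recurrence can be checked in exact rational arithmetic (the construction and names below are ours):

```python
# Exact check of the moment recurrence for the compensated integral of
# f = a 1_A + b 1_B, using int f^p dsigma = a^p l1 + b^p l2.
from fractions import Fraction
from math import comb

def centered_touchard(n, lam):
    T = [Fraction(1)]
    for m in range(n):
        T.append(lam * sum(comb(m, k) * T[k] for k in range(m)))
    return T[n]

a, b = Fraction(2), Fraction(-3)
l1, l2 = Fraction(1, 2), Fraction(3, 4)

def moment_direct(n):
    """IE[(a(Z1-l1) + b(Z2-l2))^n] by independence of Z1, Z2."""
    return sum(comb(n, j) * a ** j * centered_touchard(j, l1)
               * b ** (n - j) * centered_touchard(n - j, l2)
               for j in range(n + 1))

def moments_recurrence(nmax):
    """m_{n+1} = sum_{k=0}^{n-1} C(n,k) (a^{n-k+1} l1 + b^{n-k+1} l2) m_k."""
    m = [Fraction(1)]
    for n in range(nmax):
        m.append(sum(comb(n, k) * (a ** (n - k + 1) * l1 + b ** (n - k + 1) * l2) * m[k]
                     for k in range(n)))
    return m
```

The recurrence is in fact the standard moments-from-cumulants recursion, since the compensated integral has cumulants κ_j = ∫ f^j dσ for j ≥ 2 and κ₁ = 0.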
6.6 Characterization of Poisson Measures

The duality relation (6.4.7) satisfied by the operators D^X and δ^X can also be used to characterize Poisson measures.

Proposition 6.6.1. Let π be a probability measure on Ω^X such that for all h ∈ C_c(X), I_1(h) has finite moments of all orders under π. Assume that

    IE_π[δ^X(u)] = 0,    (6.6.1)

for all u ∈ U, or equivalently that

    IE_π[⟨D^X F, h⟩_{L²(X,σ)}] = IE_π[F δ^X(h)],    (6.6.2)

F ∈ S, h ∈ C_c(X). Then π is the Poisson measure π_σ with intensity σ.
Proof. First, we note that from Remark 6.2.6, if I_1(h) has finite moments of all orders under π for all h ∈ C_c(X), then δ^X(u) is integrable under π for all u ∈ U. Next we show that (6.6.1) and (6.6.2) are equivalent. Relation (6.6.1) implies (6.6.2) by (6.5.1). The proof of the converse statement is done for u of the form u = hF, F = I_n(f^{⊗n}), f, h ∈ C_c(X), by induction on the degree n ∈ N of I_n(f^{⊗n}). The implication clearly holds when n = 0. Next, assuming that

    IE_π[δ^X(h I_n(f^{⊗n}))] = 0,    f, h ∈ C_c(X),

for some n ≥ 0, the Kabanov multiplication formula (6.2.5) shows that

    δ^X(h I_{n+1}(f^{⊗(n+1)})) = δ^X(h) I_{n+1}(f^{⊗(n+1)}) − ⟨h, D^X I_{n+1}(f^{⊗(n+1)})⟩_{L²(X,σ)}
      − (n+1) δ^X((hf) I_n(f^{⊗n})),

hence from Relation (6.6.1) applied at rank n we have

    IE_π[δ^X(h I_{n+1}(f^{⊗(n+1)}))]
      = IE_π[δ^X(h) I_{n+1}(f^{⊗(n+1)})] − IE_π[⟨h, D^X I_{n+1}(f^{⊗(n+1)})⟩_{L²(X,σ)}]
        − (n+1) IE_π[δ^X((hf) I_n(f^{⊗n}))]
      = IE_π[δ^X(h) I_{n+1}(f^{⊗(n+1)})] − IE_π[⟨h, D^X I_{n+1}(f^{⊗(n+1)})⟩_{L²(X,σ)}]
      = 0,

by (6.6.2).
Next we show that π = π_σ. We have for h ∈ C_c(X) and n ≥ 1, using (6.6.2):

    IE_π[ ( ∫_X h(x) ω(dx) )^n ]
      = IE_π[ δ^X(h) ( ∫_X h(x) ω(dx) )^{n−1} ] + ( ∫_X h(x) σ(dx) ) IE_π[ ( ∫_X h(x) ω(dx) )^{n−1} ]
      = IE_π[ ⟨h, D^X ( ∫_X h(x) ω(dx) )^{n−1}⟩_{L²(X,σ)} ] + ( ∫_X h(x) σ(dx) ) IE_π[ ( ∫_X h(x) ω(dx) )^{n−1} ]
      = IE_π[ ∫_X h(x) ( ( h(x) + ∫_X h(y) ω(dy) )^{n−1} − ( ∫_X h(y) ω(dy) )^{n−1} ) σ(dx) ]
        + ( ∫_X h(x) σ(dx) ) IE_π[ ( ∫_X h(x) ω(dx) )^{n−1} ]
      = IE_π[ ∫_X h(x) ( h(x) + ∫_X h(y) ω(dy) )^{n−1} σ(dx) ]
      = Σ_{k=0}^{n−1} C(n−1, k) ( ∫_X h^{n−k}(x) σ(dx) ) IE_π[ ( ∫_X h(x) ω(dx) )^k ].

This induction relation coincides with (6.5.7) and characterizes the moments of ∫_X h(x) ω(dx) under π_σ, hence the moments of ∫_X h(x) ω(dx) under π are those it has under the Poisson measure π_σ with intensity σ. By dominated convergence this implies

    IE_π[ exp( iz ∫_X h(x) ω(dx) ) ] = exp ∫_X (e^{izh} − 1) dσ,

z ∈ R, h ∈ C_c(X), hence π = π_σ.    □
This proposition can be modified as follows.

Proposition 6.6.2. Let π be a probability measure on Ω^X such that δ^X(u) is integrable under π, u ∈ U. Assume that

    IE_π[δ^X(u)] = 0,    u ∈ U,    (6.6.3)

or equivalently

    IE_π[⟨D^X F, u⟩_{L²(X,σ)}] = IE_π[F δ^X(u)],    F ∈ S, u ∈ U.    (6.6.4)

Then π is the Poisson measure π_σ with intensity σ.
Proof. Clearly, (6.6.3) implies (6.6.4) as in the proof of Proposition 6.6.1. The implication (6.6.4) ⇒ (6.6.3) follows in this case by taking F = 1. Denoting the characteristic function of ∫_X h(x) ω(dx) by

    ψ(z) = IE_π[ exp( iz ∫_X h(y) ω(dy) ) ],

z ∈ R, we have:

    (d/dz) ψ(z) = i IE_π[ ∫_X h(y) ω(dy) exp( iz ∫_X h(y) ω(dy) ) ]
      = i IE_π[ δ^X(h) exp( iz ∫_X h(y) ω(dy) ) ] + i IE_π[ ( ∫_X h(y) σ(dy) ) exp( iz ∫_X h(y) ω(dy) ) ]
      = i IE_π[ ⟨h, D^X exp( iz ∫_X h(y) ω(dy) )⟩_{L²(X,σ)} ] + i ψ(z) ∫_X h(y) σ(dy)
      = i ⟨h, e^{izh} − 1⟩_{L²(X,σ)} IE_π[ exp( iz ∫_X h(y) ω(dy) ) ] + i ψ(z) ∫_X h(y) σ(dy)
      = i ψ(z) ⟨h, e^{izh}⟩_{L²(X,σ)},    z ∈ R.

We used the relation

    D^X_x exp( iz ∫_X h(y) ω(dy) ) = (e^{izh(x)} − 1) exp( iz ∫_X h(y) ω(dy) ),    x ∈ X,

which follows from Proposition 6.4.7. With the initial condition ψ(0) = 1 we obtain

    ψ(z) = exp ∫_X (e^{izh(y)} − 1) σ(dy),    z ∈ R.    □
Corollary 6.6.3. Let π be a probability measure on Ω^X such that I_n(f^{⊗n}) is integrable under π, f ∈ C_c(X). The relation

    IE_π[I_n(f^{⊗n})] = 0    (6.6.5)

holds for all f ∈ C_c(X) and n ≥ 1 if and only if π is the Poisson measure π_σ with intensity σ.

Proof. If (6.6.5) holds, then by polarization and Definition 6.4.2 we get

    IE_π[δ^X(g I_n(f_1 ∘ ⋯ ∘ f_n))] = 0,

g, f_1, …, f_n ∈ C_c(X), n ≥ 0, and from Remark 6.2.6 we have

    IE_π[δ^X(u)] = 0,    u ∈ U,

hence π = π_σ from Proposition 6.6.1.    □
6.7 Clark Formula and Lévy Processes

In this section we extend the construction of the previous sections to the case of Lévy processes, and state a Clark formula in this setting. Let X = R_+ × R^d and consider a random measure of the form

    X(dt, dx) = δ_0(dx) dB_t + ω(dt, dx) − σ(dx) dt,

where ω(dt, dx) − σ(dx) dt is a compensated Poisson random measure on (R^d ∖ {0}) × R_+ with intensity σ(dx) dt, and (B_t)_{t∈R_+} is a standard Brownian motion independent of ω(dt, dx). The underlying probability space is denoted by (Ω, F, P), where F is generated by X. We define the filtration (F_t)_{t∈R_+} generated by X as

    F_t = σ( X(ds, dx) : x ∈ R^d, s ≤ t ).

The integral of a square-integrable (F_t)_{t∈R_+}-adapted process u ∈ L²(Ω) ⊗ L²(R^d × R_+) with respect to X(dt, dx) is written as

    ∫_{R^d × R_+} u(t, x) X(dt, dx),

with the isometry

    IE[ ( ∫_{R^d × R_+} u(t, x) X(dt, dx) )² ] = IE[ ∫_{R^d × R_+} |u(t, x)|² ν(dx) dt ],    (6.7.1)

with

    ν(dx) = δ_0(dx) + σ(dx).
The multiple stochastic integral I_n(h_n) of h_n ∈ L²(R^d × R_+)^{∘n} can be defined by induction with

    I_1(h) = ∫_{R^d × R_+} h(t, x) X(dt, dx)
      = ∫_0^∞ h(t, 0) dB_t + ∫_{(R^d ∖ {0}) × R_+} h(t, x) (ω(dt, dx) − σ(dx) dt),

h ∈ L²(R^d × R_+), and

    I_n(h_n) = n ∫_{R^d × R_+} I_{n−1}(π^n_{t,x} h_n) X(dt, dx),

where

    π^n_{t,x} : L²(R^d × R_+)^{∘n} → L²(R^d × R_+)^{∘(n−1)}    (6.7.2)

is defined by

    (π^n_{t,x} h_n)(x_1, t_1, …, x_{n−1}, t_{n−1})
      = h_n(x_1, t_1, …, x_{n−1}, t_{n−1}, x, t) 1_{[0,t]}(t_1) ⋯ 1_{[0,t]}(t_{n−1}),

for x_1, …, x_{n−1}, x ∈ R^d and t_1, …, t_{n−1}, t ∈ R_+.
As in (6.1.9) the characteristic function of I_1(h) is given by

    IE[ e^{iz I_1(h)} ]
      = exp( −(z²/2) ∫_0^∞ h(t, 0)² dt + ∫_{(R^d ∖ {0}) × R_+} (e^{izh(t,x)} − 1 − izh(t, x)) σ(dx) dt ).

The isometry property

    IE[ |I_n(h_n)|² ] = n! ‖h_n‖²_{L²(R^d × R_+)^{⊗n}}

follows from Relation (6.7.1).
From Proposition 5.1.5 and Proposition 6.3.2, every F ∈ L²(Ω) admits a decomposition

    F = IE[F] + Σ_{n≥1} (1/n!) I_n(f_n)    (6.7.3)

into a series of multiple stochastic integrals, with f_n ∈ L²(R^d × R_+)^{∘n}, n ≥ 1. The next proposition is a version of the Clark predictable representation formula for Lévy processes.

Proposition 6.7.1. For F ∈ L²(Ω) we have

    F = IE[F] + ∫_{R^d × R_+} IE[D^X_{t,x} F | F_t] X(dt, dx).    (6.7.4)
Proof. Let

    Δ_n = { ((x_1, t_1), …, (x_n, t_n)) ∈ (R^d × R_+)^n : t_1 < ⋯ < t_n }.

From (6.7.3) we have for F ∈ S:

    F = IE[F] + Σ_{n≥1} I_n(f_n 1_{Δ_n})
      = IE[F] + Σ_{n≥1} ∫_{R^d × R_+} I_{n−1}(f_n(∗, t, x) 1_{Δ_n}(∗, t, x)) X(dt, dx)
      = IE[F] + ∫_{R^d × R_+} Σ_{n=0}^∞ IE[I_n(f_{n+1}(∗, t, x) 1_{Δ_n}) | F_t] X(dt, dx)
      = IE[F] + ∫_{R^d × R_+} IE[D^X_{t,x} F | F_t] X(dt, dx).

The extension of this statement to F ∈ L²(Ω) is a consequence of the fact that the adapted projection of D^X F extends to a continuous operator from L²(Ω) into the space of adapted processes in L²(Ω) ⊗ L²(R^d × R_+). For

    F = Σ_{n=0}^∞ I_n(f_n) ∈ S

and

    u = Σ_{n=0}^∞ I_n(u_{n+1}) ∈ U,    u_{n+1} ∈ L²(R^d × R_+)^{∘n} ⊗ L²(R^d × R_+),  n ∈ N,

we can extend the continuity argument of Proposition 3.2.6 as follows:

    | IE[ ∫_{R^d × R_+} u(t, x) IE[D^X_{t,x} F | F_t] ν(dx) dt ] |
      = | Σ_{n=0}^∞ (n+1)! ∫_{R^d × R_+} ⟨f_{n+1}(∗, t, x) 1_{[0,t]}(∗), u_{n+1}(∗, t, x)⟩_{L²((R^d × R_+)^n)} ν(dx) dt |
      ≤ Σ_{n=0}^∞ (n+1)! ‖f_{n+1}‖_{L²((R^d × R_+)^{⊗(n+1)})} ‖u_{n+1}‖_{L²((R^d × R_+)^{⊗(n+1)})}
      ≤ ( Σ_{n=1}^∞ n! ‖f_n‖²_{L²((R^d × R_+)^{⊗n})} )^{1/2} ( Σ_{n=0}^∞ (n+1)! ‖u_{n+1}‖²_{L²((R^d × R_+)^{⊗(n+1)})} )^{1/2}
      ≤ ‖F‖_{L²(Ω)} ‖u‖_{L²(Ω) ⊗ L²(R^d × R_+)}.    □
Note that Relation (6.7.4) can be written as

    F = IE[F] + ∫_0^∞ IE[D^X_{t,0} F | F_t] dB_t
      + ∫_{(R^d ∖ {0}) × R_+} IE[D^X_{t,x} F | F_t] (ω(dt, dx) − σ(dx) dt).
6.8 Covariance Identities

The Ornstein–Uhlenbeck semi-group satisfies

    P_t I_n(f_n) = e^{−nt} I_n(f_n),    f_n ∈ L²(X,σ)^{∘n}, n ∈ N.

We refer to [140] for the construction of the Ω^X-valued diffusion process associated to (P_t)_{t∈R_+}. Here we shall only need the existence of the probability density kernel associated to (P_t)_{t∈R_+}.

Lemma 6.8.1. In case σ is finite on X we have

    P_t F(ω) = ∫_{Ω^X × Ω^X} F(ω′ ∪ ω″) q_t(ω, dω′, dω″),    ω ∈ Ω^X,    (6.8.1)

where q_t(ω, dω′, dω″) is the probability kernel on Ω^X × Ω^X defined by

    q_t(ω, dω′, dω″) = Σ_{α ⊆ ω} (e^{−t})^{|α|} (1 − e^{−t})^{|ω ∖ α|} δ_α(dω′) π_{(1−e^{−t})σ}(dω″).

Here π_{(1−e^{−t})σ} is the Poisson measure with intensity (1 − e^{−t}) σ(dx), δ_α denotes the Dirac measure at α ∈ Ω^X, and |ω| = ω(X) ∈ N ∪ {+∞} represents the (π_σ-a.s. finite) cardinality of ω ∈ Ω^X. In other words, under q_t(ω, ·, ·) each point of ω is kept independently with probability e^{−t}, and an independent Poisson configuration with intensity (1 − e^{−t})σ is added.
Proof. We consider random functionals of the form

    F = e^{−∫_X u(x) σ(dx)} Π_{x∈ω} (1 + u(x)) = Σ_{n=0}^∞ (1/n!) I_n(u^{⊗n}),

cf. Proposition 6.3.1, for which we have

    P_t F = Σ_{n=0}^∞ (1/n!) e^{−nt} I_n(u^{⊗n}) = exp( −e^{−t} ∫_X u(x) σ(dx) ) Π_{x∈ω} (1 + e^{−t} u(x)),

and

    ∫_{Ω^X × Ω^X} F(ω′ ∪ ω″) q_t(ω, dω′, dω″)
      = exp( −∫_X u(x) σ(dx) ) ∫_{Ω^X} Π_{x∈ω″} (1 + u(x)) π_{(1−e^{−t})σ}(dω″)
        × Σ_{α⊆ω} e^{−t|α|} (1 − e^{−t})^{|ω∖α|} Π_{x∈α} (1 + u(x))
      = exp( −e^{−t} ∫_X u(x) σ(dx) ) Σ_{α⊆ω} Π_{x∈α} e^{−t}(1 + u(x)) Π_{x∈ω∖α} (1 − e^{−t})
      = exp( −e^{−t} ∫_X u(x) σ(dx) ) Π_{x∈ω} ( e^{−t}(1 + u(x)) + 1 − e^{−t} )
      = exp( −e^{−t} ∫_X u(x) σ(dx) ) Π_{x∈ω} (1 + e^{−t} u(x))
      = P_t F(ω),

where we used the identity ∫_{Ω^X} Π_{x∈ω″}(1 + u(x)) π_ρ(dω″) = e^{∫_X u dρ} with ρ = (1 − e^{−t})σ.    □
The semi-group P_t can thus be rewritten as

    P_t F(ω) = Σ_{α⊆ω} e^{−t|α|} (1 − e^{−t})^{|ω∖α|} ∫_{Ω^X} F(α ∪ ω″) π_{(1−e^{−t})σ}(dω″).
Again, Lemma 6.8.1 and Jensen's inequality (9.3.1) imply

    ‖P_t u‖_{L²(R_+)} ≤ P_t ‖u‖_{L²(R_+)},    u ∈ L²(Ω^X × R_+),

a.s., hence

    ‖P_t u‖_{L^∞(Ω^X, L²(R_+))} ≤ ‖u‖_{L^∞(Ω^X, L²(R_+))},

t ∈ R_+, u ∈ L²(Ω^X × R_+). A covariance identity can be written using the Ornstein–Uhlenbeck semi-group, in the same way as in Proposition 4.4.1.
Proposition 6.8.2. We have the covariance identity

    Cov(F, G) = IE[ ∫_0^∞ ∫_X e^{−s} D^X_x F P_s D^X_x G σ(dx) ds ],    (6.8.2)

F, G ∈ Dom(D^X).

Proof. By the chaos representation property of Proposition 6.3.2, orthogonality of multiple integrals of different orders, and continuity of P_s, s ∈ R_+, on L²(Ω^X, P), it suffices to prove the identity for F = I_n(f_n) and G = I_n(g_n). We have

    IE_π[I_n(f_n) I_n(g_n)] = n! ⟨f_n, g_n⟩_{L²(X,σ)^{∘n}}
      = n! ∫_{X^n} f_n(x_1, …, x_n) g_n(x_1, …, x_n) σ(dx_1) ⋯ σ(dx_n)
      = n! ∫_X ∫_{X^{n−1}} f_n(x, y) g_n(x, y) σ^{⊗(n−1)}(dx) σ(dy)
      = n ∫_X IE_π[I_{n−1}(f_n(∗, y)) I_{n−1}(g_n(∗, y))] σ(dy)
      = (1/n) IE_π[ ∫_X D^X_y I_n(f_n) D^X_y I_n(g_n) σ(dy) ]
      = IE_π[ ∫_0^∞ e^{−ns} ∫_X D^X_y I_n(f_n) D^X_y I_n(g_n) σ(dy) ds ]
      = IE_π[ ∫_0^∞ e^{−s} ∫_X D^X_y I_n(f_n) P_s D^X_y I_n(g_n) σ(dy) ds ].    □
The above identity can be rewritten using the integral representation (6.8.1) of P_t, extending Proposition 3 of [57]:

Corollary 6.8.3. We have

    Cov(F, G) = ∫_0^1 ∫_X ∫_{Ω^X} ∫_{Ω^X} Σ_{ω̃⊆ω} D^X_x F(ω) ( G(ω̃ ∪ ω″ ∪ {x}) − G(ω̃ ∪ ω″) )
      α^{|ω̃|} (1 − α)^{|ω∖ω̃|} π_{(1−α)σ}(dω″) π_σ(dω) σ(dx) dα,    (6.8.3)

F, G ∈ Dom(D^X).
Proof. From (6.8.1) and (6.8.2) we have

    Cov(F, G) = IE[ ∫_0^∞ ∫_X e^{−s} (D^X_x F)(P_s D^X_x G) σ(dx) ds ]
      = ∫_0^∞ ∫_X ∫_{Ω^X} ∫_{Ω^X} Σ_{ω̃⊆ω} e^{−s} D^X_x F(ω) ( G(ω̃ ∪ ω″ ∪ {x}) − G(ω̃ ∪ ω″) )
        e^{−s|ω̃|} (1 − e^{−s})^{|ω∖ω̃|} π_{(1−e^{−s})σ}(dω″) π_σ(dω) σ(dx) ds.

We conclude the proof by applying the change of variable α = e^{−s}.    □
In other terms, denoting by ω_α the thinning of ω with parameter α ∈ (0, 1), and by ω̃_{1−α} an independent Poisson random measure with intensity (1 − α) σ(dx), we can rewrite (6.8.3) as

    Cov(F, G) = IE[ ∫_0^1 ∫_X D^X_x F(ω) D^X_x G(ω_α ∪ ω̃_{1−α}) σ(dx) dα ].    (6.8.4)
This statement also admits a direct proof using characteristic functions. Let

    φ(t) = IE[e^{it|ω|}] = e^{σ(X)(e^{it} − 1)},    t ∈ R,

and let φ_α(s, t) denote the characteristic function of the pair

    (|ω|, |ω_α ∪ ω̃_{1−α}|) = (|ω|, |ω_α| + |ω̃_{1−α}|),

i.e.

    φ_α(s, t) = IE[ exp( is|ω| + it|ω_α ∪ ω̃_{1−α}| ) ]
      = ∫_{Ω^X} ∫_{Ω^X × Ω^X} exp( is|ω| + it|ω′ ∪ ω″| ) q_{−log α}(ω, dω′, dω″) π_σ(dω).

Since from Proposition 6.1.5 the thinning of order α of a Poisson random measure with intensity σ(dx) is itself a Poisson random measure with intensity ασ(dx), we have

    φ_α(s, t)
      = ∫_{Ω^X} ∫_{Ω^X × Ω^X} exp( is(|ω′| + |ω∖ω′|) + it(|ω′| + |ω″|) ) q_{−log α}(ω, dω′, dω″) π_σ(dω)
      = ∫_{Ω^X} e^{it|ω″|} π_{(1−α)σ}(dω″) ∫_{Ω^X} Σ_{ω′⊆ω} exp( is(|ω′| + |ω∖ω′|) + it|ω′| ) α^{|ω′|} (1−α)^{|ω∖ω′|} π_σ(dω)
      = (φ(t))^{1−α} e^{−σ(X)} Σ_{n=0}^∞ ((σ(X))^n/n!) Σ_{k=0}^n e^{is(k + (n−k)) + itk} (n!/(k!(n−k)!)) α^k (1−α)^{n−k}
      = (φ(t))^{1−α} e^{−σ(X)} Σ_{k,l=0}^∞ ((ασ(X))^k/k!) (((1−α)σ(X))^l/l!) e^{is(k+l) + itk}
      = (φ(t))^{1−α} (φ(t+s))^α (φ(s))^{1−α}
      = (φ_0(s, t))^{1−α} (φ_1(s, t))^α,

where φ_0(s, t) := φ(s)φ(t) and φ_1(s, t) := φ(s + t) correspond to α = 0 and α = 1 respectively.
Relation (6.8.4) also shows that

    Cov(e^{is|ω|}, e^{it|ω|}) = φ_1(s, t) − φ_0(s, t)
      = ∫_0^1 (d/dα) φ_α(s, t) dα
      = ∫_0^1 (d/dα) ( (φ(t))^{1−α} (φ(t+s))^α (φ(s))^{1−α} ) dα
      = ∫_0^1 log( φ(s+t) / (φ(s)φ(t)) ) φ_α(s, t) dα
      = ∫_0^1 σ(X) (e^{it} − 1)(e^{is} − 1) φ_α(s, t) dα
      = ∫_0^1 ∫_{Ω^X} ∫_{Ω^X} ∫_X D^X_x e^{is|ω|} (D^X_x e^{it|ω|})(ω_α ∪ ω̃_{1−α}) σ(dx) P(dω) P(dω̃) dα.
6.9 Deviation Inequalities

Using the covariance identity of Proposition 6.8.2 and the representation of Lemma 6.8.1 we now present a general deviation result for Poisson functionals. In this proposition and the following ones, the supremum on Ω^X can be taken as an essential supremum with respect to π_σ.

Proposition 6.9.1. Let F ∈ Dom(D^X) be such that e^{sF} ∈ Dom(D^X), 0 ≤ s ≤ t_0, for some t_0 > 0. Then

    π_σ(F − IE[F] ≥ x) ≤ exp( min_{0<t<t_0} ( −tx + ∫_0^t h(s) ds ) ),    x > 0,

where

    h(s) = sup_{(ω, ω̄) ∈ Ω^X × Ω^X} | ∫_X (e^{s D^X_y F(ω)} − 1) D^X_y F(ω̄) σ(dy) |,    s ∈ [0, t_0).    (6.9.1)

If moreover h is nondecreasing and finite on [0, t_0) then

    π_σ(F − IE[F] ≥ x) ≤ exp( −∫_0^x h^{−1}(s) ds ),    0 < x < h(t_0^−),    (6.9.2)

where h^{−1} is the left-continuous inverse of h:

    h^{−1}(x) = inf{ t > 0 : h(t) ≥ x },    0 < x < h(t_0^−).
Proof. We start by deriving the following inequality for F a centered random variable:

    IE[F e^{sF}] ≤ h(s) IE[e^{sF}],    0 ≤ s ≤ t_0.    (6.9.3)

This follows from (6.8.2). Indeed, using the integral representation (6.8.1) of the Ornstein–Uhlenbeck semi-group (P_t)_{t∈R_+} for P_v D^X_y F(ω), we have

    IE[F e^{sF}] = IE[ ∫_0^∞ e^{−v} ∫_X D^X_y e^{sF} P_v D^X_y F σ(dy) dv ]
      = ∫_{Ω^X} ∫_0^∞ e^{−v} ∫_X (e^{s D^X_y F(ω)} − 1) e^{sF(ω)}
        ∫_{Ω^X × Ω^X} D^X_y F(ω′ ∪ ω″) q_v(ω, dω′, dω″) σ(dy) dv π_σ(dω)
      ≤ ∫_{Ω^X} ∫_0^∞ e^{−v} e^{sF(ω)} ∫_{Ω^X × Ω^X} | ∫_X (e^{s D^X_y F(ω)} − 1) D^X_y F(ω′ ∪ ω″) σ(dy) |
        q_v(ω, dω′, dω″) dv π_σ(dω)
      ≤ sup_{(ω, ω̄) ∈ Ω^X × Ω^X} | ∫_X (e^{s D^X_y F(ω)} − 1) D^X_y F(ω̄) σ(dy) | IE[ e^{sF} ∫_0^∞ e^{−v} dv ]
      = h(s) IE[e^{sF}],

which yields (6.9.3). In the general case, we let L(s) = IE[exp(s(F − IE[F]))] and obtain:

    L′(s)/L(s) ≤ h(s),    0 ≤ s ≤ t_0,

which using Chebychev's inequality gives:

    π_σ(F − IE[F] ≥ x) ≤ exp( −tx + ∫_0^t h(s) ds ).    (6.9.4)

Using the relation (d/dt)( ∫_0^t h(s) ds − tx ) = h(t) − x, we can then optimize as follows:

    min_{0<t<t_0} ( −tx + ∫_0^t h(s) ds ) = ∫_0^{h^{−1}(x)} h(s) ds − x h^{−1}(x)
      = ∫_0^x s dh^{−1}(s) − x h^{−1}(x)
      = −∫_0^x h^{−1}(s) ds,    (6.9.5)

hence

    π_σ(F − IE[F] ≥ x) ≤ exp( −∫_0^x h^{−1}(s) ds ),    0 < x < h(t_0^−).    □
In the sequel we derive several corollaries of Proposition 6.9.1 and discuss possible choices for the function h, in particular for vectors of random functionals.

Proposition 6.9.2. Let F : Ω^X → R and let K : X → R_+ be a function such that

    D^X_y F(ω) ≤ K(y),    y ∈ X, ω ∈ Ω^X.    (6.9.6)

Then

    π_σ(F − IE[F] ≥ x) ≤ exp( min_{t>0} ( −tx + ∫_0^t h(s) ds ) ),    x > 0,

where

    h(t) = sup_{ω∈Ω^X} ∫_X ((e^{tK(y)} − 1)/K(y)) |D^X_y F(ω)|² σ(dy),    t > 0,    (6.9.7)

with the convention (e^{tK(y)} − 1)/K(y) = t when K(y) = 0. If moreover h is finite on [0, t_0) then

    π_σ(F − IE[F] ≥ x) ≤ exp( −∫_0^x h^{−1}(s) ds ),    0 < x < h(t_0^−).    (6.9.8)

If K(y) = 0, y ∈ X, we have:

    π_σ(F − IE[F] ≥ x) ≤ exp( −x²/(2σ̃²) ),    x > 0,

with

    σ̃² = sup_{ω∈Ω^X} ∫_X (D^X_y F(ω))² σ(dy).
Proof. Let F_n = max(−n, min(F, n)), n ≥ 1. Since K is R_+-valued, the condition D^X_y F_n(ω) ≤ K(y), ω ∈ Ω^X, y ∈ X, is satisfied, and we may apply Proposition 6.9.1 to F_n to get

    h(t) = sup_{(ω, ω̄) ∈ Ω^X × Ω^X} | ∫_X ((e^{t D^X_y F_n(ω)} − 1)/D^X_y F_n(ω)) D^X_y F_n(ω) D^X_y F_n(ω̄) σ(dy) |
      ≤ sup_{(ω, ω̄) ∈ Ω^X × Ω^X} ∫_X ((e^{tK(y)} − 1)/K(y)) |D^X_y F_n(ω)| |D^X_y F_n(ω̄)| σ(dy)
      ≤ (1/2) sup_{(ω, ω̄) ∈ Ω^X × Ω^X} ∫_X ((e^{tK(y)} − 1)/K(y)) ( |D^X_y F_n(ω)|² + |D^X_y F_n(ω̄)|² ) σ(dy)
      ≤ sup_{ω∈Ω^X} ∫_X ((e^{tK(y)} − 1)/K(y)) |D^X_y F_n(ω)|² σ(dy)
      ≤ sup_{ω∈Ω^X} ∫_X ((e^{tK(y)} − 1)/K(y)) |D^X_y F(ω)|² σ(dy),

from which the conclusion follows after letting n tend to infinity.    □
Part of the next corollary recovers a result of [151]; see also [59].

Corollary 6.9.3. Let F ∈ L²(Ω^X, π_σ) be such that D^X F ≤ K, π_σ ⊗ σ-a.e., for some K ∈ R, with ‖D^X F‖_{L^∞(Ω^X, L²(X,σ))} < ∞, and let σ̃² := ‖D^X F‖²_{L^∞(Ω^X, L²(X,σ))}. We have for x > 0:

    π_σ(F − IE[F] ≥ x) ≤ e^{x/K} ( 1 + xK/σ̃² )^{−x/K − σ̃²/K²},    x > 0,    (6.9.9)

and for K = 0:

    π_σ(F − IE[F] ≥ x) ≤ exp( −x²/(2σ̃²) ),    x > 0.    (6.9.10)
Proof. If K > 0, let us first assume that F is a bounded random variable. The function h in (6.9.7) is such that

    h(t) ≤ ((e^{tK} − 1)/K) ‖D^X F‖²_{L^∞(Ω^X, L²(X,σ))} = σ̃² (e^{tK} − 1)/K,    t > 0.

Applying (6.9.4) with h(t) = σ̃²(e^{tK} − 1)/K gives

    π_σ(F − IE[F] ≥ x) ≤ exp( −tx + (σ̃²/K²)(e^{tK} − tK − 1) ).

Optimizing in t with t = K^{−1} log(1 + Kx/σ̃²) (or using directly (6.9.2) with the inverse K^{−1} log(1 + Kt/σ̃²)), we have

    π_σ(F − IE[F] ≥ x) ≤ exp( x/K − (x/K + σ̃²/K²) log(1 + xK/σ̃²) ),

which yields (6.9.10) and (6.9.9), depending on the value of K. For unbounded F, apply the above to F_n = max(−n, min(F, n)), with |D^X F_n| ≤ |D^X F|, n ≥ 1. Then (6.9.9) follows since, as n goes to infinity, F_n converges to F in L²(Ω^X), D^X F_n converges to D^X F in L²(Ω^X, L²(X,σ)), and D^X F_n ≤ K, n ≥ 1. The same argument applies if K = 0.    □
As an example, if F is the Poisson stochastic integral

    F = ∫_X f(x) (ω(dx) − σ(dx)),

where f ∈ L²(X, σ) is upper bounded by K > 0, then

    π_σ(F − IE[F] ≥ x) ≤ e^{x/K} ( 1 + xK/σ̃² )^{−x/K − σ̃²/K²},    x > 0,

where

    σ̃² = ∫_X |f(x)|² σ(dx).
Corollary 6.9.3 yields the following result, which recovers Corollary 1 of [55].

Corollary 6.9.4. Let

    F = (F_1, …, F_n) = ( ∫_{{‖y‖ ≤ 1}} y_k (ω(dy) − σ(dy)) + ∫_{{‖y‖ > 1}} y_k ω(dy) )_{1≤k≤n}    (6.9.11)

be an infinitely divisible random variable in R^n with Lévy measure σ. Assume that X = R^n and that σ(dx) has bounded support, let

    K = inf{ r > 0 : σ({x ∈ X : ‖x‖ > r}) = 0 },

and σ̃² = ∫_{R^n} ‖y‖² σ(dy). For any c-Lipschitz function f : R^n → R with respect to a given norm ‖·‖ on R^n, we have

    π_σ( f(F) − IE[f(F)] ≥ x ) ≤ e^{x/(cK)} ( 1 + xK/(cσ̃²) )^{−x/(cK) − σ̃²/K²},    x > 0.

Proof. The representation (6.9.11) shows that

    |D^X_x f(F)(ω)| = |f(F(ω ∪ {x})) − f(F(ω))| ≤ c ‖F(ω ∪ {x}) − F(ω)‖ = c ‖x‖.    (6.9.12)

We conclude the proof by an application of Corollary 6.9.3.    □
6.10 Notes and References

Early statements of the Clark formula on the Poisson space can be found in [129], [130] and [131]. See also [1] for a white noise version of this formula on the Poisson space. The Clark formula for Lévy processes has been considered in [1], [108], [98], [137], and applied to quadratic hedging in incomplete markets driven by jump processes in [98]. The construction of stochastic analysis on the Poisson space using difference operators has been developed in [66], [33], [95], [100]; cf. [94] for the Definition 6.2.3 of Poisson multiple stochastic integrals. The Kabanov [68] multiplication formula has been extended to Azéma martingales in [116]. Symmetric difference operators on the Poisson space have also been introduced in [100]. The study of the characterization of Poisson measures by integration by parts has been initiated in [86], see also [123]; Relation (6.4.12) is also known as the Mecke characterization of Poisson measures. Proposition 6.5.5 is useful in the study of the invariance of Poisson measures under random transformations, cf. [114]. The deviation inequalities presented in this chapter are based on [21]. On the Poisson space, explicit computations of chaos expansions can be carried out from Proposition 4.2.5 (cf. [66] and [138]) using the iterated difference operator D^X_{t_1} ⋯ D^X_{t_n} F, but may be complicated by the recursive computation of finite differences, cf. [79]. A direct calculation using only the operator D can also be found in [80] for a Poisson process on a bounded interval; see also [110] for the chaos decomposition of Proposition 6.3.4. See [99] for a characterization of anticipative integrals with respect to the compensated Poisson process.