Itô's and Tanaka's Type Formulae for the Stochastic Heat Equation: The Linear Case
Abstract
In this paper we consider the linear stochastic heat equation with additive
noise in dimension one. Then, using the representation of its solution X as a
stochastic convolution of the cylindrical Brownian motion with respect to an
operator-valued kernel, we derive Itô’s and Tanaka’s type formulae associated
to X.
1 Introduction
The study of stochastic partial differential equations (SPDE in short) has been seen
as a challenging topic during the past thirty years, for two main reasons. On the one
hand, they provide natural models for a large number of physical
phenomena in random media (see for instance [4]). On the other hand, from a
more analytical point of view, they provide rich examples of Markov processes
in infinite dimension, often associated to a nicely behaved semi-group of operators,
for which the study of smoothing and mixing properties gives rise to some elegant,
and sometimes unexpected, results. We refer for instance to [9], [10], [5] for a deep
and detailed account on these topics.
It is then natural to try to construct a stochastic calculus with respect
to the solution of an SPDE. Indeed, it would certainly give some insight into the
properties of such a canonical object and, furthermore, it could give some hints
about the relationships between different classes of remarkable equations (this second
motivation is further detailed by L. Zambotti in [21], based on some previous results
obtained in [20]). However, strangely enough, this aspect of the theory is still poorly
developed, and our paper proposes to make one step in that direction.
Before going into the details of the results we have obtained so far and of the methodology
we have adopted, let us describe briefly the model we will consider, which is
nothing but the stochastic heat equation in dimension one. On a complete probability
space $(\Omega,\mathcal{F},P)$, let $\{W^n;\ n\ge 1\}$ be a sequence of independent standard
Brownian motions. We denote by $(\mathcal{F}_t)$ the filtration generated by $\{W^n;\ n\ge 1\}$. Let
also $H$ be the Hilbert space $L^2([0,1])$ of square integrable functions on $[0,1]$ with
Dirichlet boundary conditions, and $\{e_n;\ n\ge 1\}$ the trigonometric basis of $H$, that is
$$e_n(x) = \sqrt{2}\,\sin(\pi n x),\qquad x\in[0,1],\ n\ge 1.$$
The inner product in $H$ will be denoted by $\langle\cdot,\cdot\rangle_H$.
The stochastic equation will be driven by the cylindrical Brownian motion (see
[9] for further details on this object) defined by the formal series
$$W_t = \sum_{n\ge 1} W^n_t\, e_n,\qquad t\in[0,T],\ T>0.$$
Observe that $W_t\notin H$, but for any $y\in H$, $\sum_{n\ge 1}\langle y, e_n\rangle_H\, W^n_t$ is a well-defined Gaussian
random variable with variance $t\,|y|_H^2$. It is also worth observing that $W$ coincides with
the space-time white noise (see [9] and also (2.1) below).
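For the record, here is the elementary covariance computation behind this identification (a direct consequence of the independence of the $W^n$ and of Parseval's identity; the shorthand $W_t(y)$ for the series above is ours):
$$W_t(y) := \sum_{n\ge 1}\langle y, e_n\rangle_H\, W^n_t,\qquad E\big[W_t(y)\,W_s(z)\big] = \sum_{n\ge 1}\langle y,e_n\rangle_H\langle z,e_n\rangle_H\,(t\wedge s) = (t\wedge s)\,\langle y,z\rangle_H.$$
Taking $y=\mathbf{1}_A$ and $z=\mathbf{1}_B$ for Borel sets $A,B\subset[0,1]$, the covariance becomes $(t\wedge s)\,|A\cap B|$, which is exactly the covariance structure of the space-time white noise on $[0,T]\times[0,1]$.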
Let now $\Delta = \partial^2/\partial x^2$ be the Laplace operator on $[0,1]$ with Dirichlet boundary conditions, whose eigenfunctions are the $e_n$, with $\Delta e_n = -\lambda_n e_n$ and $\lambda_n = \pi^2 n^2$. The equation we will consider is then the linear stochastic heat equation with additive noise,
$$dX_t = \Delta X_t\, dt + dW_t,\qquad t\in[0,T],\quad X_0 = 0,\qquad(1.1)$$
whose mild solution is given by the stochastic convolution
$$X_t = \int_0^t e^{(t-s)\Delta}\, dW_s,\qquad t\in[0,T].\qquad(1.2)$$
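For later reference, here is a sketch of the coordinate picture behind (1.1)-(1.2) (standard computations; the only notation we introduce is $X^n_t := \langle X_t, e_n\rangle_H$):
$$dX^n_t = -\lambda_n X^n_t\, dt + dW^n_t,\qquad X^n_0 = 0,\qquad\text{so that}\qquad X^n_t = \int_0^t e^{-\lambda_n(t-s)}\, dW^n_s,$$
i.e. the coordinates of $X$ are independent one-dimensional Ornstein-Uhlenbeck processes, with
$$E\big[(X^n_t)^2\big] = \frac{1-e^{-2\lambda_n t}}{2\lambda_n}\qquad\text{and}\qquad E|X_t|_H^2 = \sum_{n\ge 1}\frac{1-e^{-2\lambda_n t}}{2\lambda_n} \le \sum_{n\ge 1}\frac{1}{2\pi^2 n^2} < \infty,$$
so that $X_t$ is a well-defined element of $H$, even though $W_t$ is not.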
With all those notations in mind, let us go back to the main motivations of this
paper: if one wishes to get, for instance, an Itô type formula for the process $X$
defined above, a first natural idea would be to start from a finite-dimensional version
(of order $N\ge 1$) of the representation given by formula (1.2), and then to take
limits as $N\to\infty$. Namely, if we set
$$X^{(N)}_t = \sum_{n\le N} X^n_t\, e_n,\qquad t\in[0,T],$$
and if $F_N:\mathbb{R}^N\to\mathbb{R}$ is a $C_b^2$-function, then $X^{(N)}$ is just an $N$-dimensional Ornstein-Uhlenbeck process, and the usual semi-martingale representation of this approximation yields, for all $t\in[0,T]$,
$$F_N\big(X^{(N)}_t\big) = F_N(0) + \sum_{n\le N}\int_0^t \partial_{x_n}F_N\big(X^{(N)}_s\big)\, dX^n_s + \frac12\int_0^t \mathrm{Tr}\, F''_N\big(X^{(N)}_s\big)\, ds,\qquad(1.3)$$
where the stochastic integral has to be interpreted in the Itô sense. However, when
one tries to take limits in (1.3) as $N\to\infty$, it seems that a first requirement on
$F\equiv\lim_{N\to\infty}F_N$ is that $\mathrm{Tr}(F'')$ be a bounded function. This is certainly not the case
in infinite dimension, since the typical functional to which we would like to apply
Itô's formula is of the form $F: H\to\mathbb{R}$ defined by
$$F(\ell) = \int_0^1 \sigma(\ell(x))\,\phi(x)\, dx,\qquad\text{with }\sigma\in C_b^2(\mathbb{R}),\ \phi\in L^\infty([0,1]),$$
and it is easily seen in this case that, for non-degenerate coefficients $\sigma$ and $\phi$, $F$
is a $C_b^2(H)$-functional, but $F''$ is not trace class (see the computation sketched below). One could imagine another way
to make all the terms in (1.3) convergent, but it is also worth mentioning at this
point that, even if our process $X$ is the limit of the semi-martingale sequence $X^{(N)}$,
it is not a semi-martingale itself. Besides, the mapping $t\in[0,T]\mapsto X_t\in H$ is only
Hölder-continuous of order $(1/4)^-$ (see Lemma 2.1 below). This fact also explains
why the classical semi-martingale approach fails in the current situation.
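To make the trace-class obstruction explicit, here is a short computation for the functional $F$ above (a sketch, assuming for simplicity that $\sigma''$ and $\phi$ do not vanish identically). The second derivative of $F$ at $\ell\in H$ acts as a multiplication operator,
$$\langle F''(\ell)h, k\rangle_H = \int_0^1 \sigma''(\ell(x))\,\phi(x)\,h(x)\,k(x)\, dx,\qquad h,k\in H,$$
so that $\|F''(\ell)\|_{op}\le\|\sigma''\|_\infty\|\phi\|_\infty$, while, formally,
$$\mathrm{Tr}\, F''(\ell) = \sum_{n\ge 1}\langle F''(\ell)e_n, e_n\rangle_H = \sum_{n\ge 1}\int_0^1 \sigma''(\ell(x))\,\phi(x)\,2\sin^2(\pi n x)\, dx.$$
Since $2\sin^2(\pi n x) = 1-\cos(2\pi n x)$, each summand converges, as $n\to\infty$, to $\int_0^1\sigma''(\ell(x))\phi(x)\, dx$, which is non-zero for non-degenerate $\sigma$, $\phi$; the series therefore diverges, and $F''(\ell)$ is bounded but not trace class.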
In order to get an Itô formula for the process $X$, we have then decided to use
another natural approach: the representation (1.2) of the solution to (1.1) shows that
$X$ is a centered Gaussian process, given by the convolution of $W$ with the operator-valued kernel $e^{(t-s)\Delta}$. Furthermore, this kernel is divergent on the diagonal: in
order to define the stochastic integral $\int_0^t e^{(t-s)\Delta}\, dW_s$, one has to get some bounds
on $\|e^{t\Delta}\|_{HS}^2$ (see Theorem 5.2 in [9]), which diverges as $t^{-1/2}$. We will see that the
important quantity to control for us is $\|\Delta e^{t\Delta}\|_{op}$, which diverges as $t^{-1}$. In any case,
in one dimension, the stochastic calculus with respect to Gaussian processes defined
by an integral of the form
$$\int_0^t K(t,s)\, dB_s,\qquad t\ge 0,$$
where the term $\int_0^t\langle F'(X_s),\delta X_s\rangle$ is a Skorokhod type integral that will be properly
defined in Section 2. Notice also that the last term in (1.4) is the one one could
expect, since it corresponds to the Kolmogorov equation associated to (1.1) (see, for
instance, [9] p. 257). Let us also mention that we have chosen to explain our approach
on the simple example of the linear stochastic heat equation in dimension 1, but
we believe that our method can be applied to more general situations; here
is a list of possible extensions of our formulae:
3. The case of non-linear equations, which would amount to getting some Itô type representations for processes defined informally by $Y = \int u(s,y)\, X(ds,dy)$, where
$u$ is a process satisfying some regularity conditions, and $X$ is still the solution
to equation (1.1).
Lemma 2.1 We have, for some constants $0<c_1<c_2$, and for all $s,t\in[0,T]$:
$$c_1\,|t-s|^{1/2} \le E\big[|X_t-X_s|_H^2\big] \le c_2\,|t-s|^{1/2}.$$
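For the upper bound, a quick heuristic computation (using the coordinate representation sketched in the introduction, and writing $\theta = t-s$ with $s\le t$) goes as follows:
$$E|X_t-X_s|_H^2 = \sum_{n\ge 1}\Big[\int_s^t e^{-2\lambda_n(t-u)}\,du + \big(1-e^{-\lambda_n\theta}\big)^2\int_0^s e^{-2\lambda_n(s-u)}\,du\Big] \le \sum_{n\ge 1}\frac{1-e^{-2\lambda_n\theta}}{\lambda_n} \le \mathrm{cst}\sum_{n\ge 1}\min\Big(\theta,\frac{1}{\pi^2 n^2}\Big) \le \mathrm{cst}\,\theta^{1/2},$$
the last step following by splitting the sum at $n\approx\theta^{-1/2}$.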
Then, as usual in the Malliavin calculus setting, the smooth functionals of $W$ will
be of the form
$$F = f\big(W(h_1),\dots,W(h_d)\big),\qquad d\ge 1,\ f\in C_b^\infty(\mathbb{R}^d),\ h_1,\dots,h_d\in H_W,$$
and for this kind of functional, the Malliavin derivative is defined as an element of
$H_W$ given by
$$D^W_t F = \sum_{i=1}^d \partial_i f\big(W(h_1),\dots,W(h_d)\big)\, h_i(t).$$
It can be seen that $D^W$ is a closable operator on $L^2(\Omega)$, and for $k\ge 1$, we will call
$\mathbb{D}^{k,2}_W$ the closure of the set $\mathcal{S}$ of smooth functionals with respect to the norm
$$\|F\|_{k,2} = \|F\|_{L^2} + \sum_{j=1}^k E\Big[\big|D^{W,j}F\big|_{H_W^{\otimes j}}\Big].$$
detailed account on this topic). Throughout this paper we will mainly apply these
general considerations to V = HW . A chain rule for the derivative operator is also
available: if $F = \{F^m;\ m\ge 1\}\in\mathbb{D}^{1,2}(H_W)$ and $\varphi\in C_b^1(H_W)$, then $\varphi(F)\in\mathbb{D}^{1,2}$, and
$$D^W_t\big(\varphi(F)\big) = \big\langle\nabla\varphi(F), D^W_t F\big\rangle_{H_W} = \sum_{m\ge 1} D^W_t F^m\,\partial_m\varphi(F).\qquad(2.2)$$
$$\mathcal{M} = \big\{ I_{m,T}\big(\otimes_{j=1}^m h_j\big);\ m\ge 0,\ h_1,\dots,h_m\in H_W \big\}$$
is dense in $L^2(\Omega)$ (see, for instance, Theorem 1.1.2 in [15]). We stress that we use a
different normalization for the multiple integrals of order $m$, which is harmless for
our purposes. Finally, an easy application of the basic rules of Malliavin calculus
yields that, for a given $m\ge 1$:
Notice that, in our case, $C(t,s)$ is a diagonal operator when expressed in the orthonormal basis $\{e_n;\ n\ge 1\}$, whose $n$th diagonal element is given by
$$\int_0^{s\wedge t} e^{-\lambda_n(t-u)}\,e^{-\lambda_n(s-u)}\, du = \frac{e^{-\lambda_n|t-s|}-e^{-\lambda_n(t+s)}}{2\lambda_n}.$$
The Wiener integral of an element $h\in H_X$ is now easily defined: $X(h)$ is a centered
Gaussian random variable, and if $h_1,h_2\in H_X$,
$$E\big[X(h_1)X(h_2)\big] = \langle h_1, h_2\rangle_{H_X}.$$
In particular the previous equality provides a natural isometry between $H_X$ and the
first chaos associated to $X$. Once these Wiener integrals are defined, one can proceed
as in the case of the cylindrical Brownian motion, and construct a derivative
operator $D^X$, Sobolev spaces $\mathbb{D}^{k,2}_X$ and $\mathbb{D}^{k,2}_X(H_W)$, and a divergence operator $\delta^X$.
Following the ideas contained in [2], we will now relate $\delta^X$ to a Skorokhod
integral with respect to the Wiener process $W$. To this purpose, recall that $H_W = L^2([0,T];H)$, and let us introduce the linear operator $G: H_W\to H_W$ defined by
$$Gh(t) = \int_0^t e^{(t-u)\Delta}h(u)\, du,\qquad h\in H_W,\ t\in[0,T],\qquad(2.6)$$
together with the operator
$$G^*h(s) = e^{(T-s)\Delta}h(s) + \int_s^T \Delta e^{(\sigma-s)\Delta}\big[h(\sigma)-h(s)\big]\, d\sigma,\qquad s\in[0,T],\qquad(2.7)$$
defined on the set of functions $h$ for which the right-hand side makes sense. Observe that
$$\|\Delta e^{t\Delta}\|_{op} \le \sup_{\alpha\ge 0}\alpha e^{-\alpha t} = \frac{1}{et},\qquad\text{for all } t\in(0,T],$$
and thus, it is easily seen from (2.7) that, for any $\varepsilon>0$, $\mathcal{C}^\varepsilon([0,T];H)\subset\mathrm{Dom}(G^*)$,
where $\mathcal{C}^\varepsilon([0,T];H)$ stands for the set of $\varepsilon$-Hölder continuous functions from $[0,T]$
to $H$. At a heuristic level, notice also that, formally, we have $X = G\dot W$, and thus,
if $h:[0,T]\to H$ is regular enough,
$$\delta^X(h) = \int_0^T\langle h(t),\delta X_t\rangle = \int_0^T\big\langle h(t), G\dot W(dt)\big\rangle_H.\qquad(2.8)$$
Lemma 2.2 Let $\varepsilon>0$, $h\in\mathcal{C}^\varepsilon([0,T];H)$ and let $k:[0,T]\to H$ be a continuous function. Then, for any $t\in[0,T]$,
$$\int_0^t\big\langle h(s), Gk(ds)\big\rangle_H = \int_0^t\big\langle G^*h(s), k(s)\big\rangle_H\, ds.\qquad(2.9)$$

Proof. Without loss of generality, we can assume that $h$ is given by $h(s) = \mathbf{1}_{[0,\tau]}(s)\,y$
with $\tau\in[0,t]$ and $y\in H$. Indeed, to obtain the general case, it suffices to use the
linearity in (2.9) and the fact that the set of step functions is dense in $\mathcal{C}^\varepsilon([0,T];H)$.
Then we can write, on one hand:
$$\int_0^t\big\langle h(s), Gk(ds)\big\rangle_H = \int_0^t\big\langle\mathbf{1}_{[0,\tau]}(s)\,y, Gk(ds)\big\rangle_H = \int_0^\tau\big\langle y, Gk(ds)\big\rangle_H = \big\langle y, Gk(\tau)\big\rangle_H = \int_0^\tau\big\langle y, e^{(\tau-s)\Delta}k(s)\big\rangle_H\, ds.$$
On the other hand, we have, by (2.7):
$$\int_0^t\big\langle G^*h(s), k(s)\big\rangle_H\, ds = \int_0^t\Big\langle e^{(T-s)\Delta}h(s) + \int_s^T\Delta e^{(\sigma-s)\Delta}\big[h(\sigma)-h(s)\big]\, d\sigma,\; k(s)\Big\rangle_H\, ds$$
$$= \int_0^\tau\Big\langle e^{(T-s)\Delta}y - \int_\tau^T\Delta e^{(\sigma-s)\Delta}y\, d\sigma,\; k(s)\Big\rangle_H\, ds = \int_0^\tau\big\langle e^{(\tau-s)\Delta}y, k(s)\big\rangle_H\, ds = \int_0^\tau\big\langle y, e^{(\tau-s)\Delta}k(s)\big\rangle_H\, ds,$$
where we have used an integration by parts and the fact that, if $h(t) = e^{t\Delta}y$, then
$h'(t) = \Delta e^{t\Delta}y$ for any $t>0$. The claim follows now easily.
Lemma 2.2 suggests, replacing $k$ by $\dot W$ in (2.9), that the natural meaning for the
quantities involved in (2.8) is, for $h\in\mathcal{C}^\varepsilon([0,T];H)$,
$$\delta^X(h) = \int_0^T\big\langle G^*h(t), dW_t\big\rangle_H.$$
This transformation holds true for deterministic integrands like h, and we will now
see how to extend it to a large class of random processes, thanks to Skorokhod
integration.
Notice that $G^*$ is an isometry between $H_X$ and a closed subset of $H_W$ (see also
[2] p. 772). We then denote by $\hat{\mathbb{D}}^{1,2}_X(H_X)$ the set of processes $u$ such that
$$E\int_0^T |G^* u_t|_H^2\, dt < \infty\qquad(2.10)$$
and
$$E\int_0^T d\tau\int_0^T dt\,\|D^W_\tau G^* u_t\|_{op}^2 = E\int_0^T d\tau\int_0^T dt\,\|G^* D^W_\tau u_t\|_{op}^2 < \infty,\qquad(2.11)$$
where $\|A\|_{op} = \sup_{|y|_H=1}|Ay|_H$. Then, for $u\in\hat{\mathbb{D}}^{1,2}_X(H_X)$, we can define the Skorokhod integral of $u$ with respect to $X$ by
$$\int_0^T\langle u_t,\delta X_t\rangle := \delta^X(u) := \int_0^T\big\langle G^* u_t,\delta W_t\big\rangle_H,\qquad(2.12)$$
and it is easily checked that expression (2.12) makes sense. This will be the meaning
we will give to a stochastic integral with respect to $X$. Let us insist again on the
fact that this is a natural definition: if $g(s) = \sum_{j=1}^k \mathbf{1}_{[t_j,t_{j+1})}(s)\, y_j$ is a step function
with values in $H$, we have:
$$\int_0^T\langle g(s),\delta X_s\rangle = \sum_{j=1}^k\big\langle y_j, X_{t_{j+1}} - X_{t_j}\big\rangle_H.$$
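This identity can be checked directly from the definition $\delta^X(h)=\int_0^T\langle G^*h(t),\delta W_t\rangle_H$; here is a minimal verification for a single step $g = \mathbf{1}_{[0,\tau]}\,y$ (notation as in the proof of Lemma 2.2):
$$G^*\big(\mathbf{1}_{[0,\tau]}\,y\big)(s) = e^{(\tau-s)\Delta}y\,\mathbf{1}_{[0,\tau]}(s),\qquad\text{hence}\qquad \delta^X\big(\mathbf{1}_{[0,\tau]}\,y\big) = \int_0^\tau\big\langle e^{(\tau-s)\Delta}y,\delta W_s\big\rangle_H = \Big\langle y,\int_0^\tau e^{(\tau-s)\Delta}\, dW_s\Big\rangle_H = \langle y, X_\tau\rangle_H,$$
the first equality being exactly the computation made in the proof of Lemma 2.2. The formula for a general step function then follows by writing $\mathbf{1}_{[t_j,t_{j+1})} = \mathbf{1}_{[0,t_{j+1})} - \mathbf{1}_{[0,t_j)}$ and using linearity.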
Theorem 2.3 Let $F\in C_b^2(H)$. Then $F'(X)\in\hat{\mathbb{D}}^{1,2}_X(H_X)$ and, for every $t\in[0,T]$,
$$F(X_t) = F(0) + \int_0^t\langle F'(X_s),\delta X_s\rangle + \frac12\int_0^t\mathrm{Tr}\big(e^{2s\Delta}F''(X_s)\big)\, ds.\qquad(2.13)$$

Remark 2.5 As was already said in the introduction, if $\mathrm{Tr}(F''(x))$ is uniformly
bounded in $x\in H$, one can take limits in equation (1.3) as $N\to\infty$ to obtain:
$$F(X_t) = F(0) + \int_0^t\langle F'(X_s), dX_s\rangle_H + \frac12\int_0^t\mathrm{Tr}\big(F''(X_s)\big)\, ds,\qquad t\in[0,T].\qquad(2.14)$$
Here, the stochastic integral is naturally defined by
$$\int_0^t\langle F'(X_s), dX_s\rangle_H := L^2\text{-}\lim_{N\to\infty}\sum_{n=1}^N\int_0^t\partial_n F(X_s)\, dX^n_s.$$
In this case, the stochastic integrals in formulae (2.13) and (2.14) are obviously
related by a simple algebraic equality. However, our formula (2.13) remains valid
for any $C_b^2$ function $F$, without any hypothesis on the trace of $F''$.
Proof of Theorem 2.3. For simplicity, assume that $F(0)=0$. We will split the
proof into several steps.
Step 1: strategy of the proof. Recall (see Section 2.1.1) that the set $\mathcal{M}$ is a total
subset of $L^2(\Omega)$, and $\mathcal{M}$ itself is generated by the random variables of the form
$\delta^W(h^{\otimes m})$, $m\in\mathbb{N}$, with $h\in H_W$. Then, in order to obtain (2.13), it is sufficient to
show:
$$E[Y_m F(X_t)] = E\Big[Y_m\int_0^t\langle F'(X_s),\delta X_s\rangle\Big] + \frac12 E\Big[Y_m\int_0^t\mathrm{Tr}\big(e^{2s\Delta}F''(X_s)\big)\, ds\Big],\qquad(2.15)$$
where $Y_0\equiv 1$ and, for $m\ge 1$, $Y_m = \delta^W(h^{\otimes m})$ with $h\in H_W$. This will be done in
Steps 2 and 3. The proof of the fact that $F'(X)\in\hat{\mathbb{D}}^{1,2}_X(H_X)$ is postponed to Step 4.
Step 2: the case $m=0$. Set $\varphi(t,y) = E[F(e^{t\Delta}y + X_t)]$, with $y\in H$. Then, the
Kolmogorov equation given e.g. in [9] p. 257 states that
$$\partial_t\varphi = \frac12\mathrm{Tr}\big(\partial^2_{yy}\varphi\big) + \langle\Delta y,\partial_y\varphi\rangle_H.\qquad(2.16)$$
Furthermore, in our case, we have:
$$\partial^2_{yy}\varphi(t,y) = e^{2t\Delta}E\big[F''(e^{t\Delta}y + X_t)\big],$$
Step 3: the general case. For the sake of readability, we will prove (2.15) only for
m = 2, the general case m ≥ 1 being similar, except for some cumbersome notations.
Let us recall first that, according to (2.4), we can write, for $t\ge 0$:
$$Y_2 = \delta^W(h^{\otimes 2}) = \int_0^T\langle u_t,\delta W_t\rangle_H = \delta^W(u)\qquad\text{with}\qquad u_t = \Big(\int_0^t\langle h(s),\delta W_s\rangle_H\Big)\, h(t).\qquad(2.18)$$
On the other hand, thanks to (1.2) and (2.2), it is readily seen that:
$$D^W_{s_1}F(X_t) = \sum_{n\ge 1} e^{-\lambda_n(t-s_1)}\,\partial_n F(X_t)\,\mathbf{1}_{[0,t]}(s_1)\, e_n\qquad(2.19)$$
and
$$D^W_{s_2}\big(D^W_{s_1}F(X_t)\big) = \sum_{n,r\ge 1} e^{-\lambda_n(t-s_1)}e^{-\lambda_r(t-s_2)}\,\partial^2_{nr}F(X_t)\,\mathbf{1}_{[0,t]}(s_1)\mathbf{1}_{[0,t]}(s_2)\, e_n\otimes e_r,\qquad(2.20)$$
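For completeness, (2.19) is simply the chain rule (2.2) combined with the coordinate representation $X^n_t = \int_0^t e^{-\lambda_n(t-u)}\, dW^n_u$ (a one-line check, (2.20) following by iterating):
$$D^W_{s_1}F(X_t) = \sum_{n\ge 1}\partial_n F(X_t)\, D^W_{s_1}X^n_t = \sum_{n\ge 1}\partial_n F(X_t)\, e^{-\lambda_n(t-s_1)}\,\mathbf{1}_{[0,t]}(s_1)\, e_n,$$
since the Malliavin derivative of the Wiener integral $X^n_t$ is $D^W_{s_1}X^n_t = e^{-\lambda_n(t-s_1)}\mathbf{1}_{[0,t]}(s_1)\, e_n$.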
Putting together (2.18) and (2.19), we get:
$$E[Y_2 F(X_t)] = E[\delta^W(u)F(X_t)] = \int_0^t ds_1\, E\big\langle u_{s_1}, D^W_{s_1}F(X_t)\big\rangle_H = \int_0^t ds_1\, E\big\langle\delta^W(\mathbf{1}_{[0,s_1]}h)\, h(s_1), D^W_{s_1}F(X_t)\big\rangle_H$$
$$= \sum_{n\ge 1}\int_0^t ds_1\, E\big[\delta^W(\mathbf{1}_{[0,s_1]}h)\, h^n(s_1)\, D^{n,W}_{s_1}F(X_t)\big] = \sum_{n\ge 1}\int_0^t ds_1\, E\Big[\int_0^t ds_2\,\big\langle\mathbf{1}_{[0,s_1]}(s_2)h(s_2),\, h^n(s_1)\, D^W_{s_2}D^{n,W}_{s_1}F(X_t)\big\rangle_H\Big],$$
where we have written $D^{n,W}_{s_1}F(X_t)$ for the $n$th component in $H$ of $D^W_{s_1}F(X_t)$. Thus,
invoking (2.20) and (2.21), we obtain
$$E[Y_2 F(X_t)] = \sum_{n,r\ge 1}\int_0^t ds_1\int_0^{s_1} ds_2\, h^r(s_2)\, h^n(s_1)\, e^{-\lambda_n(t-s_1)}\, e^{-\lambda_r(t-s_2)}\, E\big[\partial^2_{nr}F(X_t)\big] = \sum_{n,r\ge 1}\big(G^{\otimes 2}_{nr}h\big)(t)\, E\big[\partial^2_{nr}F(X_t)\big].\qquad(2.22)$$
Let us differentiate now this expression with respect to $t$: setting $\psi_{nr}(s,y) := E[\partial^2_{nr}F(e^{s\Delta}y + X_s)]$, we have
$$E[Y_2 F(X_t)] = A_1 + A_2,$$
where
$$A_1 := \sum_{n,r\ge 1}\int_0^t E\big[\partial^2_{nr}F(X_s)\big]\,\big(G^{\otimes 2}_{nr}h\big)(ds)\qquad\text{and}\qquad A_2 := \sum_{n,r\ge 1}\int_0^t\big(G^{\otimes 2}_{nr}h\big)(s)\,\partial_s\psi_{nr}(s,0)\, ds.$$
Indeed, assume for the moment that $F'(X)\in\mathrm{Dom}(\delta^X)$. Then, the integration by
parts (2.3) yields, starting from $\hat A_1 := E\big[Y_2\int_0^t\langle F'(X_s),\delta X_s\rangle\big]$:
$$\hat A_1 = E\Big[Y_2\int_0^T\big\langle G^*F'(X_s)\mathbf{1}_{[0,t]}(s),\delta W_s\big\rangle_H\Big] = E\Big[\int_0^T\big\langle D^W_s Y_2,\, G^*F'(X_s)\mathbf{1}_{[0,t]}(s)\big\rangle_H\, ds\Big],$$
and according to (2.5), we get
$$\hat A_1 = E\Big[\delta^W(h)\int_0^T\big\langle h(s), G^*F'(X_s)\mathbf{1}_{[0,t]}(s)\big\rangle_H\, ds\Big] = \int_0^t\big\langle Gh(ds), E[\delta^W(h)F'(X_s)]\big\rangle_H$$
$$= \sum_{n\ge 1}\int_0^t Gh^n(ds_1)\, E\Big[\int_0^T\big\langle h(s_2), D^W_{s_2}\big(\partial_n F(X_{s_1})\big)\big\rangle_H\, ds_2\Big] = \sum_{n,r\ge 1}\int_0^t E\big[\partial^2_{nr}F(X_{s_1})\big]\, Gh^n(ds_1)\int_0^{s_1} h^r(s_2)\, e^{-\lambda_r(s_1-s_2)}\, ds_2.$$
Symmetrizing in $n$ and $r$, this also reads
$$\hat A_1 = \frac12\sum_{n,r\ge 1}\int_0^t E\big[\partial^2_{nr}F(X_{s_1})\big]\Big[Gh^n(ds_1)\int_0^{s_1} h^r(s_2)\, e^{-\lambda_r(s_1-s_2)}\, ds_2 + Gh^r(ds_1)\int_0^{s_1} h^n(s_2)\, e^{-\lambda_n(s_1-s_2)}\, ds_2\Big],$$
Set now
$$\hat A_2 := E\Big[Y_2\int_0^t\mathrm{Tr}\big(e^{2s\Delta}F''(X_s)\big)\, ds\Big],$$
and let us show that $2A_2 = \hat A_2$. Indeed, using the same reasoning which was used
to obtain (2.22), we can write:
$$\hat A_2 = \int_0^t\mathrm{Tr}\Big(e^{2s\Delta}E\big[Y_2 F''(X_s)\big]\Big)\, ds = \int_0^t\mathrm{Tr}\Big(e^{2s\Delta}\sum_{n,r\ge 1}\big(G^{\otimes 2}_{nr}h\big)(s)\, E\big[\partial^2_{nr}F''(X_s)\big]\Big)\, ds = 2A_2,\qquad(2.24)$$
by applying relation (2.17) to $\partial^2_{nr}F$. Thus, putting together (2.24) and (2.23), our
Itô type formula is proved, except for one point whose proof has been omitted up
to now, namely the fact that $F'(X)\in\mathrm{Dom}(\delta^X)$.
Step 4: To end the proof, it suffices to show that $F'(X)\in\hat{\mathbb{D}}^{1,2}_X(H_X)$. To this purpose,
we first verify (2.10), and we start by observing that
$$E\int_0^T|G^*F'(X_s)|_H^2\, ds \le \mathrm{cst}\left(E\int_0^T\big|e^{(T-s)\Delta}F'(X_s)\big|_H^2\, ds + \int_0^T E\Big[\Big(\int_s^T\big|\Delta e^{(t-s)\Delta}\big(F'(X_t)-F'(X_s)\big)\big|_H\, dt\Big)^2\Big]\, ds\right).$$
Clearly, the hypothesis "$F'$ is bounded" means, in our context, that:
$$\sup_{y\in H}|F'(y)|_H^2 = \sup_{y\in H}\sum_{n\ge 1}\big(\partial_n F(y)\big)^2 < \infty.$$
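For instance, for the functional $F(\ell) = \int_0^1\sigma(\ell(x))\phi(x)\, dx$ of the introduction (with $\sigma'$ bounded and $\phi\in L^2([0,1])$), this condition holds by Parseval's identity:
$$\partial_n F(\ell) = \int_0^1\sigma'(\ell(x))\,\phi(x)\, e_n(x)\, dx\qquad\Longrightarrow\qquad \sum_{n\ge 1}\big(\partial_n F(\ell)\big)^2 = \big|\sigma'(\ell(\cdot))\,\phi(\cdot)\big|_H^2 \le \|\sigma'\|_\infty^2\,|\phi|_H^2,$$
uniformly in $\ell\in H$.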
with $f_T$ given by
$$f_T(s) := E\Big[\Big(\int_s^T (t-s)^{-1}\,|X_t-X_s|_H\, dt\Big)^2\Big].\qquad(2.25)$$
Fix now $\varepsilon>0$ and consider the positive measure $\nu_s(dt) = (t-s)^{-1/2-2\varepsilon}\, dt$. Invoking
Lemma 2.1, we get that
$$f_T(s) = E\Big[\Big(\int_s^T (t-s)^{-1/2+2\varepsilon}\,|X_t-X_s|_H\,\nu_s(dt)\Big)^2\Big] \le \mathrm{cst}\,\nu_s([s,T])\int_s^T (t-s)^{-1+4\varepsilon}\, E\big(|X_t-X_s|_H^2\big)\,\nu_s(dt)$$
$$\le \mathrm{cst}\,(T-s)^{1/2-2\varepsilon}\int_s^T (t-s)^{-1+2\varepsilon}\, dt = \mathrm{cst}\,(T-s)^{1/2}.$$
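The first inequality in this chain is simply Cauchy-Schwarz with respect to the finite measure $\nu_s$, spelled out here for completeness together with the two ingredients used in the second step:
$$\Big(\int_s^T (t-s)^{-1/2+2\varepsilon}|X_t-X_s|_H\,\nu_s(dt)\Big)^2 \le \nu_s([s,T])\int_s^T (t-s)^{-1+4\varepsilon}|X_t-X_s|_H^2\,\nu_s(dt),\qquad \nu_s([s,T]) = \frac{(T-s)^{1/2-2\varepsilon}}{\tfrac12-2\varepsilon},$$
while $E|X_t-X_s|_H^2\le c_2(t-s)^{1/2}$ (Lemma 2.1) turns the remaining integral into $\mathrm{cst}\int_s^T(t-s)^{-1+2\varepsilon}\, dt$.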
Hence $\|D^W_\tau F'(X_s)\|_{op}^2 \le \|F''(X_s)\|_{op}^2$ and
$$E\int_0^T d\tau\int_0^T ds\,\big\|e^{(T-s)\Delta}D^W_\tau F'(X_s)\big\|_{op}^2 \le E\int_0^T d\tau\int_0^T ds\,\big\|e^{(T-s)\Delta}\big\|_{op}^2\,\big\|D^W_\tau F'(X_s)\big\|_{op}^2 < \infty,\qquad(2.26)$$
according to the fact that $\|e^{(T-s)\Delta}\|_{op}\le 1$. On the other hand, since $X_t$ is $\mathcal{F}_t$-adapted, we get
$$E\int_0^T d\tau\int_0^T ds\Big(\int_s^T dt\,\big\|\Delta e^{(t-s)\Delta}\big(D^W_\tau F'(X_t)-D^W_\tau F'(X_s)\big)\big\|_{op}\Big)^2 = B_1 + B_2,\qquad(2.27)$$
with
$$B_1 := E\int_0^T d\tau\int_0^\tau ds\Big(\int_\tau^T dt\,\big\|\Delta e^{(t-s)\Delta}D^W_\tau F'(X_t)\big\|_{op}\Big)^2,$$
$$B_2 := E\int_0^T d\tau\int_\tau^T ds\Big(\int_s^T dt\,\big\|\Delta e^{(t-s)\Delta}\big(D^W_\tau F'(X_t)-D^W_\tau F'(X_s)\big)\big\|_{op}\Big)^2.$$
and thus
$$\big\|\Delta e^{(t-s)\Delta}D^W_\tau F'(X_t)\big\|_{op} \le \mathrm{cst}\,(t-s)^{-1},$$
from which we deduce easily
$$B_1 = E\int_0^T d\tau\int_0^\tau ds\Big(\int_\tau^T dt\,\big\|\Delta e^{(t-s)\Delta}D^W_\tau F'(X_t)\big\|_{op}\Big)^2 < \infty.\qquad(2.28)$$
But, $F''$ and $F'''$ being bounded, we can write:
$$\sum_{n,r\ge 1}\Big(e^{-\lambda_r(t-\tau)}\,\partial^2_{nr}F(X_t) - e^{-\lambda_r(s-\tau)}\,\partial^2_{nr}F(X_s)\Big)^2 \le \mathrm{cst}\sum_{n,r\ge 1}\Big(e^{-\lambda_r(t-\tau)} - e^{-\lambda_r(s-\tau)}\Big)^2\big(\partial^2_{nr}F(X_t)\big)^2 + \mathrm{cst}\sum_{n,r\ge 1}\Big(\partial^2_{nr}F(X_t) - \partial^2_{nr}F(X_s)\Big)^2 e^{-2\lambda_r(s-\tau)}$$
$$\le \mathrm{cst}\,\sup_{\alpha\ge 0}\Big(e^{-\alpha(t-\tau)} - e^{-\alpha(s-\tau)}\Big)^2\,\|F''(X_t)\|_{op}^2 + \mathrm{cst}\,\|F''(X_t)-F''(X_s)\|_{op}^2,$$
and consequently,
and
$$B_2 = E\int_0^T d\tau\int_\tau^T ds\Big(\int_s^T dt\,\big\|\Delta e^{(t-s)\Delta}\big(D^W_\tau F'(X_t)-D^W_\tau F'(X_s)\big)\big\|_{op}\Big)^2 \le \mathrm{cst}\int_0^T d\tau\int_\tau^T ds\, f_T(s),\qquad(2.29)$$
Theorem 3.1 Let $\varphi\in\mathcal{C}_c(]0,1[)$ and $F_\varphi: H\to\mathbb{R}$ be given by $F_\varphi(\ell) = \int_0^1|\ell(x)|\,\varphi(x)\, dx$.
Then:
$$F_\varphi(X_t) = \int_0^t\big\langle F'_\varphi(X_s),\delta X_s\big\rangle + L^\varphi_t,\qquad(3.2)$$
where $[F'_\varphi(\ell)](\tilde\ell) = \int_0^1\mathrm{sgn}(\ell(x))\,\varphi(x)\,\tilde\ell(x)\, dx$ and $L^\varphi_t$ is the random variable given by
$$L^\varphi_t = \frac12\int_0^t\int_0^1\delta_0\big(X_s(x)\big)\, G_{2s}(x,x)\,\varphi(x)\, dx\, ds,\qquad(3.3)$$
where $\delta_0$ stands for the Dirac measure at $0$, and $\delta_0(X_s(x))$ has to be understood as
a distribution on the Wiener space associated to $W$.
and
$$E\big[W(h_1)W(h_2)\big] = \int_0^T\int_0^1 h_1(t,x)\, h_2(t,x)\, dt\, dx,\qquad h_1,h_2\in H_W,$$
and where we recall that $H_W = L^2([0,T]\times[0,1])$. Associated to this Gaussian family,
we can construct again a derivative operator, a divergence operator and some Sobolev
spaces, which we will simply denote respectively by $D$, $\delta$, $\mathbb{D}^{k,2}$. These objects coincide
in fact with the ones introduced in Section 2.1.1. Notice for instance that, for a
given $m\ge 1$ and a functional $F\in\mathbb{D}^{m,2}$, $D^m F$ will be considered as a random
function on $([0,T]\times[0,1])^m$, denoted by $D^m_{(s_1,y_1),\dots,(s_m,y_m)}F$. We will also deal with
the multiple integrals with respect to $W$, which can be defined as follows: for $m\ge 1$
and $f_m:([0,T]\times[0,1])^m\to\mathbb{R}$ such that $f_m(t_1,x_1,\dots,t_m,x_m)$ is symmetric with
respect to $(t_1,\dots,t_m)$, we set
$$I_m(f_m) = m!\int_{0<t_1<\dots<t_m<T}\int_{[0,1]^m} f_m(t_1,x_1,\dots,t_m,x_m)\, W(dt_1,dx_1)\dots W(dt_m,dx_m).$$
Finally, we will use the negative Sobolev space $\mathbb{D}^{-1,2}$ in the sense of Watanabe,
which can be defined as the dual space of $\mathbb{D}^{1,2}$ in $L^2(\Omega)$. We refer to [15] or [14] for
a detailed account on the Malliavin calculus with respect to $W$. Notice in particular
that the filtration $(\mathcal{F}_t)_{t\in[0,T]}$ considered here is generated by the random variables
$\{W(\mathbf{1}_{[0,s]}\times\mathbf{1}_A);\ s\le t,\ A\text{ Borel set in }[0,1]\}$, which is useful for a correct definition
of $I_m(f_m)$. Then, the isometry relationship between multiple integrals can be read
as:
$$E\big[I_m(f_m)I_p(g_p)\big] = \begin{cases} 0 & \text{if } m\ne p,\\ m!\,\langle f_m, g_m\rangle_{H_W^{\otimes m}} & \text{if } m=p,\end{cases}\qquad m,p\in\mathbb{N},$$
where $H_W^{\otimes m}$ has to be interpreted as $L^2\big(([0,T]\times[0,1])^m\big)$.
In this context, the stochastic convolution X can also be written according to
Walsh’s point of view (see [19]): set
Fix $\varphi\in\mathcal{C}_c(]0,1[)$ and assume that $\varphi$ has support in $[\eta,1-\eta]$. For $\varepsilon>0$, let
$F_\varepsilon: H\to\mathbb{R}$ be defined by
$$F_\varepsilon(\ell) = \int_0^1\sigma_\varepsilon(\ell(x))\,\varphi(x)\, dx,\qquad\text{with }\sigma_\varepsilon:\mathbb{R}\to\mathbb{R}\text{ given by }\sigma_\varepsilon = |\cdot| * p_\varepsilon,$$
where $p_\varepsilon(x) = (2\pi\varepsilon)^{-1/2}e^{-x^2/(2\varepsilon)}$ is the Gaussian kernel on $\mathbb{R}$ with variance $\varepsilon>0$.
For $t\in[0,T]$, let us also define the random variable
$$Z^\varepsilon_t = \mathrm{Tr}\big(e^{2t\Delta}F''_\varepsilon(X_t)\big) = \int_0^1 G_{2t}(x,x)\,\varphi(x)\,\sigma''_\varepsilon(X_t(x))\, dx.\qquad(3.8)$$
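The second equality in (3.8) can be checked directly; here is the short computation, using that $F''_\varepsilon(\ell)$ acts on $H$ as multiplication by $\sigma''_\varepsilon(\ell(\cdot))\varphi(\cdot)$, together with the decomposition (3.1) of the heat kernel, $G_t(x,y) = \sum_{n\ge 1}e^{-\lambda_n t}e_n(x)e_n(y)$:
$$\mathrm{Tr}\big(e^{2t\Delta}F''_\varepsilon(\ell)\big) = \sum_{n\ge 1}\big\langle e^{2t\Delta}\big(\sigma''_\varepsilon(\ell)\varphi\, e_n\big), e_n\big\rangle_H = \sum_{n\ge 1} e^{-2\lambda_n t}\int_0^1\sigma''_\varepsilon(\ell(x))\,\varphi(x)\, e_n(x)^2\, dx = \int_0^1 G_{2t}(x,x)\,\varphi(x)\,\sigma''_\varepsilon(\ell(x))\, dx.$$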
Proof. Following the idea of [7], we will show this convergence result by means of
the Wiener chaos decomposition of $\int_0^T Z^\varepsilon_t\, dt$, which we compute first.
Stroock's formula ([17]) states that any random variable $F\in\cap_{k\ge 1}\mathbb{D}^{k,2}$ can be
expanded as
$$F = \sum_{m=0}^\infty\frac{1}{m!}\, I_m\big(E[D^m F]\big).$$
In our case, a straightforward computation yields, for any $t\in[0,T]$ and $m\ge 0$,
$$D^m_{(s_1,y_1),\dots,(s_m,y_m)}Z^\varepsilon_t = \int_0^1 G_{2t}(x,x)\,\varphi(x)\, G^{\otimes m}_{t,x}\big((s_1,y_1),\dots,(s_m,y_m)\big)\,\sigma^{(m+2)}_\varepsilon(X_t(x))\, dx.$$
where $v(t,x)$ denotes the variance of the centered Gaussian random variable $X_t(x)$
and $H_m$ is the $m$th Hermite polynomial:
$$H_m(x) = \frac{(-1)^m}{m!}\, e^{\frac{x^2}{2}}\,\frac{d^m}{dx^m}e^{-\frac{x^2}{2}},$$
verifying $H_m(0) = 0$ if $m$ is odd and $H_m(0) = \frac{(-1)^{m/2}}{2^{m/2}(m/2)!}$ if $m$ is even. Thus, the
Wiener chaos decomposition of $\int_0^T Z^\varepsilon_t\, dt$ is given by
$$\int_0^T Z^\varepsilon_t\, dt = \sum_{m\ge 0}\int_0^T dt\int_0^1 dx\, G_{2t}(x,x)\,\varphi(x)\,\big(\varepsilon+v(t,x)\big)^{-m/2}\, p_{\varepsilon+v(t,x)}(0)\, H_m(0)\, I_m\big(G^{\otimes m}_{t,x}\big) = \sum_{m\ge 0}\int_0^T dt\int_0^1 dx\,\beta_{m,\varepsilon}(t,x)\, I_m\big(G^{\otimes m}_{t,x}\big),\qquad(3.9)$$
with
$$\beta_{m,\varepsilon}(t,x) := G_{2t}(x,x)\,\varphi(x)\,\big(\varepsilon+v(t,x)\big)^{-m/2}\, p_{\varepsilon+v(t,x)}(0)\, H_m(0),\qquad m\ge 0.$$
RT
We will now establish the L2 -convergence of 0 Ztε dt, using (3.9). For this purpose
let us notice that each term
Z T Z 1
dt dx βm,ε (t, x)Im (G⊗m
t,x )
0 0
Thus, setting
$$\alpha_{m,\varepsilon} := E\Big[\Big(\int_0^T dt\int_0^1 dx\,\beta_{m,\varepsilon}(t,x)\, I_m\big(G^{\otimes m}_{t,x}\big)\Big)^2\Big],$$
the $L^2$-convergence of $\int_0^T Z^\varepsilon_t\, dt$ will be proven once we show that
$$\lim_{M\to\infty}\sup_{\varepsilon>0}\sum_{m\ge M}\alpha_{m,\varepsilon} = 0,\qquad(3.10)$$
and hence once we control the quantity $\alpha_{m,\varepsilon}$ uniformly in $\varepsilon$. We can write
$$\alpha_{m,\varepsilon} = \int_{[0,T]^2}\int_{[0,1]^2} dt_1\, dt_2\, dx_1\, dx_2\,\beta_{m,\varepsilon}(t_1,x_1)\,\beta_{m,\varepsilon}(t_2,x_2)\, E\big\{I_m\big(G^{\otimes m}_{t_1,x_1}\big)I_m\big(G^{\otimes m}_{t_2,x_2}\big)\big\}.$$
Moreover
$$E\big\{I_m\big(G^{\otimes m}_{t_1,x_1}\big)I_m\big(G^{\otimes m}_{t_2,x_2}\big)\big\} = m!\,\big\langle G^{\otimes m}_{t_1,x_1}, G^{\otimes m}_{t_2,x_2}\big\rangle_{L^2(([0,T]\times[0,1])^m)} = m!\Big(\int_{[0,T]\times[0,1]} G_{t_1-s}(x_1,y)\mathbf{1}_{[0,t_1]}(s)\, G_{t_2-s}(x_2,y)\mathbf{1}_{[0,t_2]}(s)\, ds\, dy\Big)^m =: m!\,\big(R(t_1,x_1,t_2,x_2)\big)^m.$$
Using (3.6), we can give a rough upper bound on $\beta_{m,\varepsilon}(t,x)$:
$$|\beta_{m,\varepsilon}(t,x)| \le |G_{2t}(x,x)|\,|\varphi(x)|\,\frac{\mathrm{cst}}{v(t,x)^{\frac{m+1}{2}}\,2^{\frac m2}\big(\frac m2\big)!} \le \frac{\mathrm{cst}\,|\varphi(x)|}{2^{\frac m2}\big(\frac m2\big)!\;t^{\frac12}\,v(t,x)^{\frac{m+1}{2}}}.$$
Then, thanks to the fact that $\varphi = 0$ outside $[\eta,1-\eta]$, we get
$$\alpha_{m,\varepsilon} \le c_m\int_{([0,T]\times[\eta,1-\eta])^2} dt_1\, dt_2\, dx_1\, dx_2\,\frac{|R(t_1,x_1,t_2,x_2)|^m\,|\varphi(x_1)|\,|\varphi(x_2)|}{t_1^{1/2}\, t_2^{1/2}\, v(t_1,x_1)^{\frac{m+1}{2}}\, v(t_2,x_2)^{\frac{m+1}{2}}},$$
with
$$c_m = \frac{\mathrm{cst}\; m!}{2^m\big[\big(\frac m2\big)!\big]^2} \le \frac{\mathrm{cst}}{\sqrt m},$$
by Stirling's formula. Assume, for instance, $t_1\le t_2$. Invoking the decomposition (3.1)
of $G_t(x,y)$ and the fact that $\{e_n;\ n\ge 1\}$ is an orthogonal family, we obtain
$$R(t_1,x_1,t_2,x_2) = \int_0^{t_1} ds\int_0^1 dy\, G_{t_1-s}(x_1,y)\, G_{t_2-s}(x_2,y) = \int_0^{t_1} ds\int_0^1 dy\Big(\sum_{n\ge 1}e^{-\lambda_n(t_1-s)}e_n(x_1)e_n(y)\Big)\Big(\sum_{r\ge 1}e^{-\lambda_r(t_2-s)}e_r(x_2)e_r(y)\Big)$$
$$= \sum_{n\ge 1}e_n(x_1)e_n(x_2)\int_0^{t_1} ds\, e^{-\lambda_n[(t_1-s)+(t_2-s)]} = \sum_{n\ge 1}e_n(x_1)e_n(x_2)\,\frac{e^{-\lambda_n t_2}\sinh(\lambda_n t_1)}{\lambda_n},$$
Now Cauchy-Schwarz's inequality gives
$$R(t_1,x_1,t_2,x_2) \le \Big\{\sum_{n\ge 1}e_n(x_1)^2\,\frac{e^{-\lambda_n t_2}\sinh(\lambda_n t_1)}{\lambda_n}\Big\}^{1/2}\Big\{\sum_{n\ge 1}e_n(x_2)^2\,\frac{e^{-\lambda_n t_2}\sinh(\lambda_n t_1)}{\lambda_n}\Big\}^{1/2}$$
$$\le \Big\{\sum_{n\ge 1}e_n(x_1)^2\,\frac{e^{-\lambda_n t_2}\sinh(\lambda_n t_1)}{\lambda_n}\Big\}^{1/2}\, v(t_2,x_2)^{1/2} =: A(t_1,t_2,x_1)^{1/2}\, v(t_2,x_2)^{1/2}.$$
We have obtained that $R(t_1,x_1,t_2,x_2)\le A(t_1,t_2,x_1)^{1/2}\, v(t_2,x_2)^{1/2}$. Notice that (3.7)
yields $c_1 t^{1/2}\le v(t,x)\le c_2 t^{1/2}$ uniformly in $x\in[\eta,1-\eta]$. Thus, we obtain
$$|R(t_1,x_1,t_2,x_2)|^m \le \mathrm{cst}\; t_2^{m/4}\, A(t_1,t_2,x_1)^{m/2} = \mathrm{cst}\; t_2^{m/4}\Big(\int_0^{t_1} G_{t_1+t_2-2s}(x_1,x_1)\, ds\Big)^{m/2},$$
and hence
$$\alpha_{m,\varepsilon} \le \frac{\mathrm{cst}}{\sqrt m}\int_{([0,T]\times[\eta,1-\eta])^2} dt_1\, dt_2\, dx_1\, dx_2\,\frac{t_2^{m/4}}{t_1^{1/2}\, t_2^{1/2}\, v(t_1,x_1)^{\frac{m+1}{2}}\, v(t_2,x_2)^{\frac{m+1}{2}}}\Big(\int_0^{t_1} G_{t_1+t_2-2s}(x_1,x_1)\, ds\Big)^{m/2}$$
$$\le \frac{\mathrm{cst}}{\sqrt m}\int_0^T t_1^{-\frac{m+3}{4}}\, dt_1\int_{t_1}^T t_2^{-\frac34}\,\frac{t_1^{m/2}}{t_2^{m/4}}\, dt_2 \le \frac{\mathrm{cst}}{\sqrt m}\int_0^T t_1^{\frac{m-3}{4}}\, dt_1\int_{t_1}^T\frac{dt_2}{t_2^{\frac{m+3}{4}}} \le \frac{\mathrm{cst}}{m^{3/2}}.$$
Consequently, the series $\sum_{m\ge 0}\alpha_{m,\varepsilon}$ converges uniformly in $\varepsilon>0$, which gives immediately (3.10).
Thus, we obtain that $\int_0^T Z^\varepsilon_t\, dt\to Z$ in $L^2(\Omega)$, as $\varepsilon\to 0$, where
$$Z := \sum_{m\ge 0}\int_0^T dt\int_0^1 dx\, G_{2t}(x,x)\,\varphi(x)\, v(t,x)^{-m/2}\, p_{v(t,x)}(0)\, H_m(0)\, I_m\big(G^{\otimes m}_{t,x}\big).$$
To finish the proof we need to identify $Z$ with (3.3). First, let us give the precise
meaning of (3.3). Using (3.5), we can write
$$L^\varphi_T = \frac12\int_0^T\int_0^1\delta_0\big(W(G_{t,x})\big)\, G_{2t}(x,x)\,\varphi(x)\, dx\, dt,$$
where we recall that $\delta_0$ stands for the Dirac measure at $0$, and we will show that
$L^\varphi_T\in\mathbb{D}^{-1,2}$ (this latter space has been defined in Section 3.1.1). Indeed (see also
[16], p. 259), for any random variable $U\in\mathbb{D}^{1,2}$, with obvious notation for the
Sobolev norm of $U$, we have
$$\big|E\big(U\,\delta_0(W(G_{t,x}))\big)\big| \le \frac{\|U\|_{1,2}}{|G_{t,x}|_{H_W}} \le \mathrm{cst}\,\frac{\|U\|_{1,2}}{t^{1/4}},$$
using (3.4) and (3.7). This yields
$$\big|E\big(U L^\varphi_T\big)\big| \le \mathrm{cst}\int_0^T\int_\eta^{1-\eta}\frac{\|U\|_{1,2}}{t^{1/4}}\,|G_{2t}(x,x)|\,|\varphi(x)|\, dx\, dt < \infty,$$
RT
according to (3.6). Similarly, 0 Ztε dt ∈ D−1,2 , since
Z T Z TZ 1
ε
Zt dt = σε00 (W (Gt,x ))G2t (x, x)ϕ(x)dxdt
0 0 0
RT
and the same reasoning applies. Moreover 21 0 Ztε dt → LϕT in D−1,2 as ε → 0.
Indeed, for any random variable $U\in\mathbb{D}^{1,2}$,
$$E\Big[U\Big(\frac12\int_0^T Z^\varepsilon_t\, dt - L^\varphi_T\Big)\Big] = \frac12\int_0^T\int_0^1 dx\, dt\, G_{2t}(x,x)\,\varphi(x)\times E\big\{U\big[\sigma''_\varepsilon(W(G_{t,x})) - \delta_0(W(G_{t,x}))\big]\big\}$$
and, as in [16],
and the conclusion follows using again (3.6) and (3.7), and also the fact that $\sigma'_\varepsilon\to\mathrm{sgn}$, as $\varepsilon\to 0$.
Finally, it is clear that $L^\varphi_T = \frac12 Z$. The proof of Lemma 3.2 is now complete.
We have seen that $\frac12\int_0^T Z^\varepsilon_t\, dt\to L^\varphi_T$ as $\varepsilon\to 0$, in $L^2(\Omega)$. Since it is obvious that
$F_\varepsilon(X_T)$ converges in $L^2(\Omega)$ to $F_\varphi(X_T)$, a simple use of formula (3.11) shows that
$\int_0^T\langle F'_\varepsilon(X_t),\delta X_t\rangle$ converges. In order to obtain (3.2), it remains to prove that
$$\lim_{\varepsilon\to 0}\int_0^T\langle F'_\varepsilon(X_t),\delta X_t\rangle = \int_0^T\langle F'_\varphi(X_t),\delta X_t\rangle.\qquad(3.12)$$
But, from standard Malliavin calculus results (see, for instance, Lemma 1, p. 304
in [7]), in order to prove (3.12), it is sufficient to show that
with
$$V^\varepsilon(t) = F'_\varepsilon(X_t) = \sigma'_\varepsilon(X_t)\,\varphi\in H\qquad\text{and}\qquad V(t) = \mathrm{sgn}(X_t)\,\varphi\in H.$$
We will now prove (3.13) through several steps, adapting to our context the approach
used in [7].
Proof. The proof is similar to the one given for Lemma 4, p. 309 in [7]. Indeed, the
first part of that proof can be invoked in our case since (Xt (x), Xs (x)) is a centered
Gaussian vector (with covariance ω(s, t, x)). Hence we can write
$$P\big(X_t(x) > a,\ X_s(x) < a\big) \le \frac{1+|a|\rho}{2\pi}\,\sqrt{\frac{2\pi\, v(t,x)\, v(s,x)}{\omega(s,t,x)^2} - 1},\qquad(3.15)$$
where
$$\rho^2 = \frac{E\big[(X_t(x)-X_s(x))^2\big]}{v(t,x)\, v(s,x) - \omega(s,t,x)^2}.\qquad(3.16)$$
Furthermore, it is a simple computation to show that
Moreover, one can observe, as in [7], that
Consequently,
$$\sqrt{\frac{v(t,x)\, v(s,x)}{\omega(s,t,x)^2}} - 1 \le \mathrm{cst}\,(t-s)^{1/4}\, s^{-1/4},$$
since it is well-known that
Step 2.
We shall prove that $G^*V\in L^2([0,T]\times\Omega;H)$. First, using the fact that
$\|e^{(T-t)\Delta}\|_{op}\le 1$, we remark that
$$E\int_0^T\big|e^{(T-t)\Delta}\mathrm{sgn}(X_t)\,\varphi\big|_H^2\, dt \le E\int_0^T\big\|e^{(T-t)\Delta}\big\|_{op}^2\,\big|\mathrm{sgn}(X_t)\,\varphi\big|_H^2\, dt < \infty.$$
We have
$$A \le E\Big[\int_0^T\Big(\int_t^T\big\|\Delta e^{(r-t)\Delta}\big\|_{op}\,\big|\mathrm{sgn}(X_r)\,\varphi - \mathrm{sgn}(X_t)\,\varphi\big|_H\, dr\Big)^2 dt\Big],$$
with
$$\mathrm{sgn}(X_r(x)) - \mathrm{sgn}(X_t(x)) = 2\big(U^+_{r,t}(x) - U^-_{r,t}(x)\big),$$
where $U^+_{r,t}(x) = \mathbf{1}_{\{X_r(x)>0,\, X_t(x)<0\}}$ and $U^-_{r,t}(x) = \mathbf{1}_{\{X_r(x)<0,\, X_t(x)>0\}}$. Thus
$$A \le \mathrm{cst}\int_0^T dt\, E\Big[\Big(\int_t^T\frac{dr}{r-t}\Big(\int_0^1 dx\,\big(U^+_{r,t}(x) - U^-_{r,t}(x)\big)^2\,\varphi(x)^2\Big)^{1/2}\Big)^2\Big] \le \mathrm{cst}\int_0^T dt\, E\Big[\Big(\int_t^T\frac{dr}{r-t}\Big(\int_0^1 dx\, U^+_{r,t}(x)\,\varphi(x)^2\Big)^{1/2}\Big)^2\Big].$$
Then $A \le \mathrm{cst}\int_0^T A_t\, dt$ with
$$A_t := E\Big[\int_t^T\frac{dr_2}{r_2-t}\int_t^T\frac{dr_1}{r_1-t}\Big(\int_0^1 U^+_{r_1,t}(x)\,\varphi(x)^2\, dx\Big)^{1/2}\Big(\int_0^1 U^+_{r_2,t}(x)\,\varphi(x)^2\, dx\Big)^{1/2}\Big],$$
which gives
$$A_t \le \int_t^T\frac{dr_2}{r_2-t}\int_t^T\frac{dr_1}{r_1-t}\Big(\int_{[0,1]^2} dx_1\, dx_2\,\varphi(x_1)^2\,\varphi(x_2)^2\, E\big[U^+_{r_1,t}(x_1)\, U^+_{r_2,t}(x_2)\big]\Big)^{1/2}$$
$$\le \int_t^T\frac{dr_2}{r_2-t}\int_t^T\frac{dr_1}{r_1-t}\Big(\int_0^1 dx_1\,\varphi(x_1)^2\, E\big[U^+_{r_1,t}(x_1)\big]^{1/2}\Big)^{1/2}\Big(\int_0^1 dx_2\,\varphi(x_2)^2\, E\big[U^+_{r_2,t}(x_2)\big]^{1/2}\Big)^{1/2}$$
$$= \Big[\int_t^T\frac{dr}{r-t}\Big(\int_0^1 dx\,\varphi(x)^2\, P\big[X_r(x)>0,\, X_t(x)<0\big]^{1/2}\Big)^{1/2}\Big]^2.$$
Plugging (3.14) into this last inequality, we easily get that G∗ V ∈ L2 ([0, T ] ×
Ω, H). The remainder of the proof follows now closely the steps developed in [7] and
the details are left to the reader.
References
[1] E. Alòs, O. Mazet, D. Nualart. Stochastic calculus with respect to fractional
Brownian motion with Hurst parameter lesser than 1/2. Stoch. Proc. Appl. 86,
121-139, 2000.
[3] M. van den Berg. Gaussian bounds for the Dirichlet heat kernel. J. Funct.
Anal. 88, 267-278, 1990.
[5] S. Cerrai. Second order PDE’s in finite and infinite dimension. A probabilistic
approach. Lect. Notes in Math. 1762, 330 pages, 2001.
[7] L. Coutin, D. Nualart, C. Tudor. Tanaka formula for the fractional Brownian
motion. Stoch. Proc. Appl. 94, 301-315, 2001.
[8] R. Dalang. Extending the martingale measure stochastic integral with applica-
tions to spatially homogeneous s.p.d.e.’s. Electron. J. Probab. 4, no. 6, 29 pp,
1999.
[11] J. Guerra, D. Nualart. The 1/H-variation of the divergence integral with respect
to the fractional Brownian motion for H > 1/2 and fractional Bessel processes.
Preprint Barcelona, 2004.
[12] Y. Hu, D. Nualart. Some processes associated with fractional Bessel processes.
Preprint Barcelona, 2004.
[15] D. Nualart. The Malliavin calculus and related topics. Springer-Verlag, 266
pages, 1995.
[16] D. Nualart, J. Vives. Smoothness of Brownian local times and related func-
tionals. Potential Anal. 1, no. 3, 257-263, 1992.
[18] C. Tudor, F. Viens. Itô formula and local time for the fractional Brownian
sheet. Electron. J. Probab. 8, no. 14, 31 pp, 2003.
[21] L. Zambotti. Itô-Tanaka formula for SPDEs driven by additive space-time white
noise. Preprint, 2004.