Brownian Motion – Solution Manual
R.L. Schilling, L. Partzsch: Brownian Motion (2nd edn.)
1 Robert Brown’s new thing
(a) We show the result for Rd -valued random variables. Let ξ, η ∈ Rd . By assumption,
\[
\lim_{n\to\infty} \mathbb{E}\exp\Big[i\Big\langle \begin{pmatrix}\xi\\ \eta\end{pmatrix},\begin{pmatrix}X_n\\ Y_n\end{pmatrix}\Big\rangle\Big]
= \mathbb{E}\exp\Big[i\Big\langle \begin{pmatrix}\xi\\ \eta\end{pmatrix},\begin{pmatrix}X\\ Y\end{pmatrix}\Big\rangle\Big]
\iff
\lim_{n\to\infty}\mathbb{E}\exp\big[i\langle\xi,X_n\rangle + i\langle\eta,Y_n\rangle\big]
= \mathbb{E}\exp\big[i\langle\xi,X\rangle + i\langle\eta,Y\rangle\big].
\]
Since X_n ⫫ Y_n we find
(b) We have
\[
X_n = \tfrac1n + X \xrightarrow[n\to\infty]{\text{almost surely}} X \implies X_n \xrightarrow{d} X,
\]
\[
Y_n = 1 - X_n = 1 - \tfrac1n - X \xrightarrow[n\to\infty]{\text{almost surely}} 1 - X \implies Y_n \xrightarrow{d} 1 - X,
\]
\[
X_n + Y_n = 1 \xrightarrow[n\to\infty]{\text{almost surely}} 1 \implies X_n + Y_n \xrightarrow{d} 1.
\]
Thus, X + Y ≁ δ₁ ∼ 1 = limₙ(X_n + Y_n), and this shows that we cannot have (X_n, Y_n) →d (X, Y).
d
(c) If X_n ⫫ Y_n and X ⫫ Y, then we have X_n + Y_n →d X + Y: this follows since we have for all ξ ∈ R
\[
\mathbb{E}\,e^{i\xi(X_n+Y_n)} = \mathbb{E}\,e^{i\xi X_n}\,\mathbb{E}\,e^{i\xi Y_n}
\xrightarrow[n\to\infty]{} \mathbb{E}\,e^{i\xi X}\,\mathbb{E}\,e^{i\xi Y}
= \mathbb{E}\big[e^{i\xi X}e^{i\xi Y}\big]
\overset{a)}{=} \mathbb{E}\,e^{i\xi(X+Y)}.
\]
A similar (even easier) argument works if (X_n, Y_n) →d (X, Y): then f(x, y) := e^{iξ(x+y)} is a bounded continuous function, so E e^{iξ(X_n+Y_n)} = E f(X_n, Y_n) → E f(X, Y) = E e^{iξ(X+Y)}.
For a counterexample (if Xn and Yn are not independent), see part b).
Lemma. Let (X_n)_{n⩾1} and (Y_n)_{n⩾1} be sequences of random variables (or random vectors) on the same probability space (Ω, A, P). If
\[
X_n \perp\!\!\!\perp Y_n\ \text{for all } n\geqslant1,\qquad
X_n \xrightarrow[n\to\infty]{d} X
\quad\text{and}\quad
Y_n \xrightarrow[n\to\infty]{d} Y,
\]
then (X_n, Y_n) →d (X, Y) and X ⫫ Y.
Proof. Write φ_X, φ_Y, φ_{X,Y} for the characteristic functions of X, Y and the pair (X, Y). By assumption, φ_{X_n}(ξ) = E e^{iξX_n} → E e^{iξX} = φ_X(ξ). A similar statement is true for Y_n and Y. For the pair we get, because of independence,
\[
\phi_{X_n,Y_n}(\xi,\eta) = \mathbb{E}\,e^{i\xi X_n}\,\mathbb{E}\,e^{i\eta Y_n} \xrightarrow[n\to\infty]{} \phi_X(\xi)\,\phi_Y(\eta).
\]
Thus, φ_{X_n,Y_n}(ξ, η) → h(ξ, η) = φ_X(ξ)φ_Y(η). Since h is continuous at the origin (ξ, η) = 0 and h(0, 0) = 1, we conclude from Lévy's continuity theorem that h is a (bivariate) characteristic function and that (X_n, Y_n) →d (X, Y). Moreover, φ_{X,Y}(ξ, η) = h(ξ, η) = φ_X(ξ)φ_Y(η), i.e. X ⫫ Y.
∎∎
Thus,
\[
|\mathbb{E}\,e^{i\langle\xi,Y_n\rangle} - \mathbb{E}\,e^{i\langle\xi,X\rangle}|
\leqslant |\mathbb{E}\,e^{i\langle\xi,Y_n\rangle} - \mathbb{E}\,e^{i\langle\xi,X_n\rangle}| + |\mathbb{E}\,e^{i\langle\xi,X_n\rangle} - \mathbb{E}\,e^{i\langle\xi,X\rangle}|.
\]
Since lim_{n→∞} E e^{i⟨ξ,X_n⟩} = E e^{i⟨ξ,X⟩}, we are done if we can show that the first term on the right-hand side tends to zero. To see this, we use the Lipschitz continuity of the exponential function. Fix ξ ∈ R^d. Then
\[
|\mathbb{E}\,e^{i\langle\xi,Y_n\rangle} - \mathbb{E}\,e^{i\langle\xi,X_n\rangle}|
\leqslant \mathbb{E}\,|e^{i\langle\xi,Y_n-X_n\rangle} - 1| \xrightarrow[n\to\infty]{} 0,
\]
where we used in the last step the fact that X_n − Y_n →P 0.
∎∎
Problem 1.3. Solution: Recall that Y_n →d Y with Y = c a.s., i.e. where Y ∼ δ_c for some constant c ∈ R. Since the d-limit is trivial, this implies Y_n →P Y. This means that both "is this still true"-questions can be answered in the affirmative.
We will show that (X_n, Y_n) →d (X, c) holds – without assuming anything on the joint distribution of the random vector (X_n, Y_n), i.e. we do not make assumptions on the correlation structure of X_n and Y_n. Since the maps (x, y) ↦ x + y and (x, y) ↦ x·y are continuous, we see that
lim E f (Xn , Yn ) = E f (X, c) ∀f ∈ Cb (R × R)
n→∞
implies both
lim E g(Xn Yn ) = E g(Xc) ∀g ∈ Cb (R)
n→∞
and
lim E h(Xn + Yn ) = E h(X + c) ∀h ∈ Cb (R).
n→∞
\[
|\mathbb{E}\,e^{i(\xi X_n+\eta Y_n)} - \mathbb{E}\,e^{i(\xi X+\eta c)}|
\leqslant |\mathbb{E}\,e^{i(\xi X_n+\eta Y_n)} - \mathbb{E}\,e^{i(\xi X_n+\eta c)}| + |\mathbb{E}\,e^{i(\xi X_n+\eta c)} - \mathbb{E}\,e^{i(\xi X+\eta c)}|.
\]
The second expression on the right-hand side converges to zero as X_n →d X. For fixed η we have that y ↦ e^{iηy} is uniformly continuous. Therefore, the first expression on the right-hand side becomes, with any ε > 0 and a suitable choice of δ = δ(ε) > 0,
\[
\mathbb{E}\,|e^{i\eta Y_n} - e^{i\eta c}|
= \mathbb{E}\big[|e^{i\eta Y_n} - e^{i\eta c}|\,\mathbb{1}_{\{|Y_n-c|>\delta\}}\big] + \mathbb{E}\big[|e^{i\eta Y_n} - e^{i\eta c}|\,\mathbb{1}_{\{|Y_n-c|\leqslant\delta\}}\big]
\leqslant 2\,\mathbb{P}(|Y_n-c|>\delta) + \epsilon
\xrightarrow[n\to\infty]{\text{P-convergence, }\delta,\epsilon\text{ fixed}} \epsilon \xrightarrow[\epsilon\downarrow0]{} 0.
\]
Remark. The direct approach to (a) is possible but relatively ugly. Part (b) has a
relatively simple direct proof:
Fix ξ ∈ R.
For the first term on the right we find, with the uniform-continuity argument from Problem 1.2 and any ε > 0 and suitable δ = δ(ε, ξ), that
\[
\dots = \mathbb{E}\,|e^{i\xi Y_n} - 1|
\leqslant \epsilon + 2\,\mathbb{P}(|Y_n| > \delta)
\xrightarrow[n\to\infty]{\ \epsilon\ \text{fixed}\ } \epsilon \xrightarrow[\epsilon\to0]{} 0.
\]
∎∎
Problem 1.4. Solution: Let ξ, η ∈ R and note that f (x) = eiξx and g(y) = eiηy are bounded
and continuous functions. Thus we get
\[
\mathbb{E}\,e^{i\langle(\xi,\eta),(X,Y)\rangle}
= \mathbb{E}\big[e^{i\xi X}e^{i\eta Y}\big]
= \mathbb{E}\,f(X)g(Y)
= \lim_{n\to\infty}\mathbb{E}\,f(X_n)g(Y)
= \lim_{n\to\infty}\mathbb{E}\,e^{i\langle(\xi,\eta),(X_n,Y)\rangle},
\]
and we see that (X_n, Y) →d (X, Y).
Assume now that X = φ(Y ) for some Borel function φ. Let f ∈ Cb and pick g ∶= f ○ φ.
Clearly, f ○ φ ∈ Bb and we get
= E f (X)f (X)
= E f 2 (X).
Thus,
L2
i. e. f (Xn ) Ð→ f (X).
Now fix ε > 0 and R > 0 and set f(x) = (−R) ∨ x ∧ R. Clearly, f ∈ C_b. Then
\[
\mathbb{P}(|X_n - X| > \epsilon)
\leqslant \mathbb{P}(|X_n - X| > \epsilon,\ |X|\leqslant R,\ |X_n|\leqslant R) + \mathbb{P}(|X|\geqslant R) + \mathbb{P}(|X_n|\geqslant R)
\]
\[
= \mathbb{P}(|f(X_n) - f(X)| > \epsilon,\ |X|\leqslant R,\ |X_n|\leqslant R) + \mathbb{P}(|X|\geqslant R) + \mathbb{P}(|f(X_n)|\geqslant R)
\]
\[
\leqslant \mathbb{P}(|f(X_n) - f(X)| > \epsilon) + 2\,\mathbb{P}(|X|\geqslant R/2) + \mathbb{P}(|f(X_n) - f(X)|\geqslant R/2),
\]
where we used that {|f(X_n)| ⩾ R} ⊂ {|f(X)| ⩾ R/2} ∪ {|f(X_n) − f(X)| ⩾ R/2} because of the triangle inequality |f(X_n)| ⩽ |f(X)| + |f(X) − f(X_n)|. By Chebyshev's inequality,
\[
\leqslant \Big(\frac{1}{\epsilon^2} + \frac{4}{R^2}\Big)\,\mathbb{E}\big(|f(X) - f(X_n)|^2\big) + 2\,\mathbb{P}(|X|\geqslant R/2)
\xrightarrow[n\to\infty]{\ \epsilon,R\ \text{fixed},\ f=f_R\in C_b\ } 2\,\mathbb{P}(|X|\geqslant R/2)
\xrightarrow[R\to\infty]{\ X\ \text{a.s. }\mathbb{R}\text{-valued}\ } 0.
\]
∎∎
Problem 1.5. Solution: Note that E δj = 0 and V δj = E δj2 = 1. Thus, E S⌊nt⌋ = 0 and
V S⌊nt⌋ = ⌊nt⌋.
(b) Let s < t. Since the δj are iid, we have, S⌊nt⌋ − S⌊ns⌋ ∼ S⌊nt⌋−⌊ns⌋ , and by the central
limit theorem (CLT)
\[
\frac{S_{\lfloor nt\rfloor-\lfloor ns\rfloor}}{\sqrt n}
= \frac{\sqrt{\lfloor nt\rfloor-\lfloor ns\rfloor}}{\sqrt n}\cdot\frac{S_{\lfloor nt\rfloor-\lfloor ns\rfloor}}{\sqrt{\lfloor nt\rfloor-\lfloor ns\rfloor}}
\xrightarrow[n\to\infty]{\text{CLT}} \sqrt{t-s}\; G_1 \sim G_{t-s}.
\]
If we know that the bivariate random variable (S⌊ns⌋ , S⌊nt⌋ −S⌊ns⌋ ) converges in distri-
bution, we do get Gt ∼ Gs + Gt−s because of Problem 1.1. But this follows again from
the lemma which we prove in part d). This lemma shows that the limit has indepen-
dent coordinates, see also part c). This is as close as we can come to Gt − Gs ∼ Gt−s ,
unless we have a realization of ALL the Gt on a good space. It is Brownian motion
which will achieve just this.
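The scaling in part (b) is easy to check numerically. Below is a minimal simulation sketch (not part of the original solution; the values of n, t and the sample size are arbitrary choices): it verifies that S⌊nt⌋/√n has variance approximately t.

```python
# Illustration only: the scaled random walk S_[nt]/sqrt(n) is approximately N(0, t).
import numpy as np

rng = np.random.default_rng(0)
n, t, samples = 2_000, 0.7, 5_000

# delta_j = +/-1 with probability 1/2 each; S_k = delta_1 + ... + delta_k
deltas = rng.choice([-1.0, 1.0], size=(samples, int(n * t)))
S_nt = deltas.sum(axis=1)

print("empirical variance of S_[nt]/sqrt(n):", np.var(S_nt / np.sqrt(n)))
print("theoretical variance t:             ", t)
```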
(c) We know that the entries of the vector (Xtnm − Xtnm−1 , . . . , Xtn2 − Xtn1 , Xtn1 ) are inde-
pendent (they depend on different blocks of the δj and the δj are iid) and, by the
one-dimensional argument of b) we see that
\[
X^n_{t_k} - X^n_{t_{k-1}} \xrightarrow[n\to\infty]{d} \sqrt{t_k - t_{k-1}}\; G^k_1 \sim G^k_{t_k-t_{k-1}}\qquad\text{for all } k = 1,\dots,m
\]
and the Gk1 , k = 1, . . . , m are independent. Thus, by the second assertion of part b)
\[
\big(\sqrt{t_1}\,G^1_1,\dots,\sqrt{t_m-t_{m-1}}\,G^m_1\big) \sim \big(G^1_{t_1},\dots,G^m_{t_m-t_{m-1}}\big) \sim \big(G_{t_1},\dots,G_{t_m}-G_{t_{m-1}}\big).
\]
Lemma. Let (X_n)_{n⩾1} and (Y_n)_{n⩾1} be sequences of random variables (or random vectors) on the same probability space (Ω, A, P). If
\[
X_n \perp\!\!\!\perp Y_n\ \text{for all } n\geqslant1,\qquad
X_n \xrightarrow[n\to\infty]{d} X
\quad\text{and}\quad
Y_n \xrightarrow[n\to\infty]{d} Y,
\]
then (X_n, Y_n) →d (X, Y) and X ⫫ Y (for suitable versions of the rv's).
Proof. Write φ_X, φ_Y, φ_{X,Y} for the characteristic functions of X, Y and the pair (X, Y). By assumption, φ_{X_n}(ξ) = E e^{iξX_n} → E e^{iξX} = φ_X(ξ). A similar statement is true for Y_n and Y. For the pair we get, because of independence,
\[
\phi_{X_n,Y_n}(\xi,\eta) = \mathbb{E}\,e^{i\xi X_n}\,\mathbb{E}\,e^{i\eta Y_n} \xrightarrow[n\to\infty]{} \phi_X(\xi)\,\phi_Y(\eta).
\]
Thus, φ_{X_n,Y_n}(ξ, η) → h(ξ, η) = φ_X(ξ)φ_Y(η). Since h is continuous at the origin (ξ, η) = 0 and h(0, 0) = 1, we conclude from Lévy's continuity theorem that h is a (bivariate) characteristic function and that (X_n, Y_n) →d (X, Y). Moreover, φ_{X,Y}(ξ, η) = h(ξ, η) = φ_X(ξ)φ_Y(η), i.e. X ⫫ Y.
∎∎
\[
\frac{B(t)-B(s)}{\sqrt{t-s}}
= \frac{1}{\sqrt2}\left(\frac{B(t)-B\big(\frac{s+t}{2}\big)}{\sqrt{\frac{t-s}{2}}} + \frac{B\big(\frac{s+t}{2}\big)-B(s)}{\sqrt{\frac{t-s}{2}}}\right)
=: \frac{1}{\sqrt2}\,(X+Y).
\]
By assumption, the random variables (Gnj )j,n are identically distributed (for all j, n) and
independent (in j). Moreover, E(Gnj ) = 0 and V(Gnj ) = 1. Applying the central limit
theorem (for triangular arrays) we obtain
\[
\frac{1}{\sqrt n}\sum_{j=1}^n G_{nj} \xrightarrow[n\to\infty]{d} G_1
\]
∎∎
\[
\overset{G,G'\sim N(0,1)}{=}\; e^{-\frac12\big[\frac{\xi+\eta}{\sqrt2}\big]^2}\,e^{-\frac12\big[\frac{\eta-\xi}{\sqrt2}\big]^2}
= e^{-\frac12\xi^2}\,e^{-\frac12\eta^2}.
\]
∎∎
2 Brownian motion as a Gaussian process
Problem 2.1. Solution: Let us check first that f(u, v) := g(u)g(v)(1 − sin u sin v) is indeed a probability density. Clearly, f(u, v) ⩾ 0. Since g(u) = (2π)^{−1/2} e^{−u²/2} is even and sin u is odd, we get
Let us show that (U, V) is not a normal random variable. Assume that (U, V) is normal, then U + V ∼ N(0, σ²), i.e.
\[
\mathbb{E}\,e^{i\xi(U+V)} = e^{-\frac12\xi^2\sigma^2}. \tag{*}
\]
On the other hand, a direct calculation yields
\[
\mathbb{E}\,e^{i\xi(U+V)}
= e^{-\xi^2} - \Big(\tfrac{1}{2i}\big(e^{-\frac12(\xi+1)^2} - e^{-\frac12(\xi-1)^2}\big)\Big)^2
= e^{-\xi^2} + \tfrac14\big(e^{-\frac12(\xi+1)^2} - e^{-\frac12(\xi-1)^2}\big)^2
= e^{-\xi^2} + \tfrac14\, e^{-1} e^{-\xi^2}\big(e^{-\xi} - e^{\xi}\big)^2,
\]
and this contradicts (*).
∎∎
Problem 2.2. Show that the covariance matrix C = (tj ∧ tk )j,k=1,...,n appearing in Theorem 2.6
is positive definite. Solution: Let (ξ1 , . . . , ξn ) ≠ (0, . . . , 0) and set t0 = 0. Then we find
from (2.12)
\[
\sum_{j=1}^n\sum_{k=1}^n (t_j\wedge t_k)\,\xi_j\xi_k
= \sum_{j=1}^n \underbrace{(t_j - t_{j-1})}_{>0}\,(\xi_j+\dots+\xi_n)^2 \geqslant 0. \tag{2.1}
\]
Equality (= 0) occurs if, and only if, (ξj + ⋯ + ξn )2 = 0 for all j = 1, . . . , n. This implies
that ξ1 = . . . = ξn = 0.
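As a quick numerical sanity check of strict positive definiteness one can build C = (tⱼ ∧ tₖ) for a few times and attempt a Cholesky factorization; the following sketch is illustrative only (the chosen times are arbitrary, not from the text).

```python
# The matrix C = (t_j /\ t_k) for 0 < t_1 < ... < t_n is strictly positive definite,
# so the Cholesky factorization must succeed and all eigenvalues are > 0.
import numpy as np

t = np.array([0.5, 1.0, 2.5, 3.0, 4.2])      # any strictly increasing positive times
C = np.minimum.outer(t, t)                    # C[j, k] = t_j /\ t_k

L = np.linalg.cholesky(C)                     # raises LinAlgError if not positive definite
print("smallest eigenvalue:", np.linalg.eigvalsh(C).min())
```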
Abstract alternative: Let (Xt )t∈I be a real-valued stochastic process which has a second
moment (such that the covariance is defined!), set µt = E Xt . For any finite set S ⊂ I we
pick λs ∈ C, s ∈ S. Then
\[
\sum_{s,t\in S}\operatorname{Cov}(X_s,X_t)\,\lambda_s\overline{\lambda_t}
= \mathbb{E}\Big(\sum_{s,t\in S}(X_s-\mu_s)\lambda_s\,\overline{(X_t-\mu_t)\lambda_t}\Big)
= \mathbb{E}\Big|\sum_{s\in S}(X_s-\mu_s)\lambda_s\Big|^2 \geqslant 0.
\]
Remark: Note that this alternative does not prove that the covariance is strictly positive
definite. A standard counterexample is to take Xs ≡ X.
∎∎
∎∎
Problem 2.4. Solution: Let ei = (0, . . . , 0, 1, 0 . . .) ∈ Rn be the ith standard unit vector. Then
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
i
and
⟨B(ei + ej ), ei + ej ⟩ = bii + bjj + 2bij
We have
∎∎
= − E(Bt2 ) = −t ≠ 0.
(B1)
(c) Z_t = √t B_1 is not a BM¹; the independent increments property is violated:
\[
\mathbb{E}\big[(Z_t - Z_s)Z_s\big] = (\sqrt t - \sqrt s)\sqrt s\;\mathbb{E}\,B_1^2 = (\sqrt t - \sqrt s)\sqrt s \neq 0.
\]
∎∎
Thus,
\[
f_{B(s),B(t)\mid B(1)}(x, y \mid B(1)=0)
= \frac{1}{2\pi\sqrt{s(t-s)(1-t)}}\,\exp\Big[-\frac12\Big(\frac{x^2}{s} + \frac{(y-x)^2}{t-s} + \frac{y^2}{1-t}\Big)\Big].
\]
Note that
\[
\frac{x^2}{s} + \frac{(y-x)^2}{t-s} + \frac{y^2}{1-t}
= \frac{t}{s(t-s)}\Big(x - \frac{s}{t}\,y\Big)^2 + \frac{y^2}{t} + \frac{y^2}{1-t}
= \frac{t}{s(t-s)}\Big(x - \frac{s}{t}\,y\Big)^2 + \frac{y^2}{t(1-t)}.
\]
Therefore,
\[
\mathbb{E}\big(B(s)B(t)\mid B(1)=0\big)
= \frac{1}{\sqrt{2\pi}\sqrt{t(1-t)}}\int_{-\infty}^{\infty}\frac{s}{t}\,y^2\exp\Big[-\frac12\,\frac{y^2}{t(1-t)}\Big]dy
= \frac{s}{t}\,t(1-t) = s(1-t).
\]
∎∎
C(s, t) = E(Xs Xt )
= E(Bs2 − s)(Bt2 − t)
= E(Bs2 − s)([Bt − Bs + Bs ]2 − t)
= E(Bs2 − s)(Bt − Bs )2 + 2 E(Bs2 − s)Bs (Bt − Bs ) + E(Bs2 − s)Bs2 − E(Bs2 − s)t
= E(Bs2 − s) E(Bt − Bs )2 + 2 E(Bs2 − s)Bs E(Bt − Bs ) + E(Bs2 − s)Bs2 − E(Bs2 − s)t
(B1)
∎∎
(b) We have
Remark: the form of the density shows that the Ornstein–Uhlenbeck process is strictly stationary, i.e.
\[
(X(t_1+h),\dots,X(t_n+h)) \sim (X(t_1),\dots,X(t_n))\qquad\forall h>0.
\]
∎∎
\[
\Sigma := \bigcup_{J\subset[0,\infty),\ J\ \text{countable}} \sigma\big(B(t) : t\in J\big)
\]
Clearly,
\[
\bigcup_{t\geqslant0}\sigma(B_t) \subset \Sigma \subset \sigma(B_t : t\geqslant0) \overset{\text{def}}{=} \mathcal{F}^B_\infty. \tag{*}
\]
The first inclusion follows from the fact that each B_t is measurable with respect to Σ.
∅∈Σ and F ∈ Σ Ô⇒ F c ∈ Σ.
Let (An )n ⊂ Σ. Then, for every n there is a countable set Jn such that An ∈ σ(B(t) ∶ t ∈
Jn ). Since J = ⋃n Jn is still countable we see that An ∈ σ(B(t) ∶ t ∈ J) for all n. Since
the latter family is a σ-algebra, we find
⋃ An ∈ σ(B(t) ∶ t ∈ J) ⊂ Σ.
n
∎∎
Problem 2.10. Solution: Assume that the indices t1 , . . . , tm and s1 , . . . , sn are given. Let
{u1 , . . . , up } ∶= {s1 , . . . , sn } ∪ {t1 , . . . , tm }. By assumption,
Thus, we may thin out the indices on each side without endangering independence:
{s1 , . . . , sn } ⊂ {u1 , . . . , up } and {t1 , . . . , tm } ⊂ {u1 , . . . , up }, and so
∎∎
\[
\mathcal{F}_\infty \perp\!\!\!\perp \mathcal{G}_\infty \implies \mathcal{F}_t \perp\!\!\!\perp \mathcal{G}_t.
\]
Conversely, since (F_t)_{t⩾0} and (G_t)_{t⩾0} are filtrations we find
\[
\bigcup_{t\geqslant0}\mathcal{F}_t \perp\!\!\!\perp \bigcup_{t\geqslant0}\mathcal{G}_t.
\]
Since the families ⋃_{t⩾0} F_t and ⋃_{t⩾0} G_t are ∩-stable (use again the argument that we have filtrations to find for F, F′ ∈ ⋃_{t⩾0} F_t some t₀ with F, F′ ∈ F_{t₀} etc.), the σ-algebras generated by these families are independent:
\[
\mathcal{F}_\infty = \sigma\Big(\bigcup_{t\geqslant0}\mathcal{F}_t\Big) \perp\!\!\!\perp \sigma\Big(\bigcup_{t\geqslant0}\mathcal{G}_t\Big) = \mathcal{G}_\infty.
\]
∎∎
\[
= \exp\Big[-\frac12\sum_{j=1}^n (t_j-t_{j-1})\,\langle U^\top\xi_j, U^\top\xi_j\rangle\Big]
= \exp\Big[-\frac12\sum_{j=1}^n (t_j-t_{j-1})\,|\xi_j|^2\Big].
\]
(Observe ⟨U^⊤ξ_j, U^⊤ξ_j⟩ = ⟨UU^⊤ξ_j, ξ_j⟩ = ⟨ξ_j, ξ_j⟩ = |ξ_j|².) The claim follows.
∎∎
Problem 2.13. Solution: Note that the coordinate processes b and β are independent BM1 .
(a) Since b ⫫ β, the process W_t = (b_t + β_t)/√2 is a Gaussian process with continuous sample paths. We determine its mean and covariance functions:
\[
\mathbb{E}\,W_t = \tfrac{1}{\sqrt2}\big(\mathbb{E}\,b_t + \mathbb{E}\,\beta_t\big) = 0;
\]
\[
\operatorname{Cov}(W_s, W_t) = \mathbb{E}(W_sW_t)
= \tfrac12\,\mathbb{E}(b_s+\beta_s)(b_t+\beta_t)
= \tfrac12\big(\mathbb{E}\,b_sb_t + \mathbb{E}\,\beta_sb_t + \mathbb{E}\,b_s\beta_t + \mathbb{E}\,\beta_s\beta_t\big)
= \tfrac12\big(s\wedge t + 0 + 0 + s\wedge t\big) = s\wedge t,
\]
where we used that, by independence, E b_u β_v = E b_u E β_v = 0. Now the claim follows from Corollary 2.7.
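A small Monte Carlo sketch (not a proof, and only for two fixed times s < t chosen arbitrarily) that illustrates the covariance identity Cov(W_s, W_t) = s ∧ t for W_t = (b_t + β_t)/√2:

```python
# Monte Carlo check of Cov(W_s, W_t) = s /\ t for W = (b + beta)/sqrt(2).
import numpy as np

rng = np.random.default_rng(1)
s, t, N = 0.8, 1.5, 200_000

b_s = np.sqrt(s) * rng.standard_normal(N)
b_t = b_s + np.sqrt(t - s) * rng.standard_normal(N)        # b at times s < t
beta_s = np.sqrt(s) * rng.standard_normal(N)
beta_t = beta_s + np.sqrt(t - s) * rng.standard_normal(N)  # independent BM beta

W_s = (b_s + beta_s) / np.sqrt(2)
W_t = (b_t + beta_t) / np.sqrt(2)
print("empirical Cov(W_s, W_t):", np.mean(W_s * W_t))      # E W_s = E W_t = 0
print("s /\\ t:                 ", min(s, t))
```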
This means that X is not a BM2 , as its coordinates are not independent.
\[
\frac{1}{\sqrt2}\begin{pmatrix} b_t+\beta_t\\ b_t-\beta_t\end{pmatrix}
= U\begin{pmatrix} b_t\\ \beta_t\end{pmatrix}
= \frac{1}{\sqrt2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}\begin{pmatrix} b_t\\ \beta_t\end{pmatrix}.
\]
∎∎
\[
\mathbb{E}\,X_t = 0,
\]
\[
\operatorname{Cov}(X_t, X_s) = \mathbb{E}\,X_tX_s
= \mathbb{E}(\lambda b_s+\mu\beta_s)(\lambda b_t+\mu\beta_t)
= \lambda^2\,\mathbb{E}\,b_sb_t + \lambda\mu\,\mathbb{E}\,b_s\,\mathbb{E}\,\beta_t + \lambda\mu\,\mathbb{E}\,b_t\,\mathbb{E}\,\beta_s + \mu^2\,\mathbb{E}\,\beta_s\beta_t
= \lambda^2(s\wedge t) + 0 + 0 + \mu^2(s\wedge t) = (\lambda^2+\mu^2)(s\wedge t).
\]
The matrix U is a rotation, hence orthogonal and we see from Problem 2.12 that W is a
Brownian motion.
Generalization: take U orthogonal.
∎∎
where k < n is the rank of Q. The k-dimensional vector has a nondegenerate normal
distribution in Rk .
∎∎
“⇒” Assume that we have (B1). Observe that the family of sets
⋃ σ(Bu1 , . . . , Bun )
0⩽u1 ⩽⋯⩽un ⩽s, n⩾1
and so
\[
B_t - B_s \;\perp\!\!\!\perp\;
\begin{pmatrix} 1 & 0 & 0 & \cdots & 0\\ 1 & 1 & 0 & \cdots & 0\\ 1 & 1 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & 0\\ 1 & 1 & 1 & \cdots & 1\end{pmatrix}
\begin{pmatrix} B_{u_1}\\ B_{u_2}-B_{u_1}\\ B_{u_3}-B_{u_2}\\ \vdots\\ B_{u_n}-B_{u_{n-1}}\end{pmatrix}
= \begin{pmatrix} B_{u_1}\\ B_{u_2}\\ B_{u_3}\\ \vdots\\ B_{u_n}\end{pmatrix}
\]
"⇐" Let 0 = t₀ ⩽ t₁ < t₂ < … < tₙ < ∞, n ⩾ 1. Then we find for all ξ₁, …, ξₙ ∈ R^d
\[
\mathbb{E}\Big(e^{i\sum_{k=1}^n\langle\xi_k,\,B(t_k)-B(t_{k-1})\rangle}\Big)
= \mathbb{E}\Big(e^{i\langle\xi_n,\,B(t_n)-B(t_{n-1})\rangle}\cdot
\underbrace{e^{i\sum_{k=1}^{n-1}\langle\xi_k,\,B(t_k)-B(t_{k-1})\rangle}}_{\mathcal{F}_{t_{n-1}}\text{ mble., hence }\perp\!\!\!\perp\,B(t_n)-B(t_{n-1})}\Big)
= \dots
= \prod_{k=1}^n \mathbb{E}\Big(e^{i\langle\xi_k,\,B(t_k)-B(t_{k-1})\rangle}\Big).
\]
∎∎
∎∎
E Wt2 = C(t, t) = E(Bt2 − t)2 = E(Bt4 − 2tBt2 + t2 ) = 3t2 − 2t2 + t2 = 2t2 ≠ const.
\[
\mathbb{E}(Y_sY_t) = \mathbb{E}(B_{s+h}-B_s)(B_{t+h}-B_t)
= \mathbb{E}\,B_{s+h}B_{t+h} - \mathbb{E}\,B_{s+h}B_t - \mathbb{E}\,B_sB_{t+h} + \mathbb{E}\,B_sB_t
\]
\[
= (s+h)\wedge(t+h) - (s+h)\wedge t - s\wedge(t+h) + s\wedge t
= (s+h) - (s+h)\wedge t
= \begin{cases} 0, & \text{if } t > s+h \iff h < t-s,\\ h-(t-s), & \text{if } t\leqslant s+h \iff h\geqslant t-s, \end{cases}
\]
(assuming s ⩽ t). Swapping the roles of s and t finally gives: the process is stationary with g(t) = (h − |t|)⁺ = (h − |t|) ∨ 0.
(d) Not stationary. Note that
∎∎
and
\[
\lim_{t\downarrow1} W_t(\omega) = B_1(\omega) + \lim_{t\downarrow1}\big(t\beta_{1/t}(\omega) - \beta_1(\omega)\big) = B_1(\omega);
\]
moreover, if 1 < t₁ < ⋯ < tₙ,
\[
\begin{pmatrix} W_{t_1}\\ W_{t_2}\\ \vdots\\ W_{t_n}\end{pmatrix}
= \begin{pmatrix}
1 & t_1 & 0 & \cdots & 0 & -1\\
1 & 0 & t_2 & \cdots & 0 & -1\\
\vdots & \vdots & & \ddots & & \vdots\\
1 & 0 & 0 & \cdots & t_n & -1
\end{pmatrix}
\begin{pmatrix} B_1\\ \beta_{1/t_1}\\ \vdots\\ \beta_{1/t_n}\\ \beta_1\end{pmatrix}
\]
and since
\[
B_1 \perp\!\!\!\perp (\beta_{1/t_1},\dots,\beta_{1/t_n},\beta_1)^\top
\]
we get
\[
\mathbb{E}\,W_t = \mathbb{E}\,B_1 + t\,\mathbb{E}\,\beta_{1/t} - \mathbb{E}\,\beta_1 = 0,
\]
\[
\mathbb{E}\,W_{t_i}W_{t_j} = \mathbb{E}(B_1 + t_i\beta_{1/t_i} - \beta_1)(B_1 + t_j\beta_{1/t_j} - \beta_1)
= 1 + t_it_j\,t_j^{-1} - t_i\,t_i^{-1} - t_j\,t_j^{-1} + 1 = t_i = t_i\wedge t_j
\]
(for t_i ⩽ t_j).
Case 3: Assume that 0 < t₁ < … < t_k ⩽ 1 < t_{k+1} < … < tₙ. Then we have W_{t_i} = B_{t_i} for i ⩽ k and W_{t_j} = B_1 + t_jβ_{1/t_j} − β_1 for j > k, i.e.
\[
\begin{pmatrix} W_{t_1}\\ \vdots\\ W_{t_k}\\ W_{t_{k+1}}\\ \vdots\\ W_{t_n}\end{pmatrix}
=
\begin{pmatrix}
I_k & 0 & 0 & 0\\
0 & \mathbf{1} & \operatorname{diag}(t_{k+1},\dots,t_n) & -\mathbf{1}
\end{pmatrix}
\begin{pmatrix} (B_{t_1},\dots,B_{t_k})^\top\\ B_1\\ (\beta_{1/t_{k+1}},\dots,\beta_{1/t_n})^\top\\ \beta_1\end{pmatrix}.
\]
Since
\[
(B_{t_1},\dots,B_{t_k},B_1) \perp\!\!\!\perp (\beta_{1/t_{k+1}},\dots,\beta_{1/t_n},\beta_1),
\]
we get
\[
\mathbb{E}\,W_t = 0,\qquad
\mathbb{E}\,W_{t_i}W_{t_j} = \mathbb{E}\,B_{t_i}\big(B_1 + t_j\beta_{1/t_j} - \beta_1\big) = t_i = t_i\wedge t_j
\]
for i ⩽ k < j.
∎∎
Problem 2.22. Solution: The process X(t) = B(et ) has no memory since (cf. Problem 2.18)
and, therefore,
The process X(t) ∶= e−t/2 B(et ) is not memoryless. For example, X(a + a) − X(a) is not
independent of X(a):
E(X(2a) − X(a))X(a) = E (e−a B(e2a ) − e−a/2 B(ea ))e−a/2 B(ea ) = e−3a/2 ea − e−a ea ≠ 0.
∎∎
Problem 2.23. Solution: The process Wt = Ba−t − Ba , 0 ⩽ t ⩽ a clearly satisfies (B0) and (B4).
For 0 ⩽ s ⩽ t ⩽ a we find
∎∎
\[
\lim_{t\downarrow0} tB(1/t) = 0 \implies \lim_{s\uparrow\infty}\frac{B(s)}{s} = 0\quad\text{a.s.}
\]
Moreover,
\[
\mathbb{E}\Big(\frac{B(s)}{s}\Big)^2 = \frac{s}{s^2} = \frac1s \xrightarrow[s\to\infty]{} 0,
\]
i.e. we get also convergence in mean square.
Remark: a direct proof of the SLLN is a bit more tricky. Of course we have by the classical SLLN that
\[
\frac{B_n}{n} = \frac{\sum_{j=1}^n (B_j - B_{j-1})}{n} \xrightarrow[n\to\infty]{\text{SLLN}} 0\quad\text{a.s.}
\]
But then we have to make sure that Bs /s converges. This can be done in the following
way: fix s > 0. Then there is a unique interval (n, n + 1] such that s ∈ (n, n + 1]. Thus,
∎∎
3 Constructions of Brownian motion
converge as N → ∞ P-a.s. uniformly for t towards B(t, ω), t ∈ [0, 1]—cf. Problem 3.3.
Therefore, the random variables
\[
\int_0^1 W_N(t)\,dt = \sum_{n=0}^{N-1} G_n\int_0^1 S_n(t)\,dt
\xrightarrow[N\to\infty]{\text{P-a.s.}} X = \int_0^1 B(t)\,dt.
\]
1
This shows that ∫0 WN (t) dt is the sum of independent N(0, 1)-random variables, hence
itself normal and so is its limit X.
From the definition of the Schauder functions (cf. Figure 3.2) we find
\[
\int_0^1 S_0(t)\,dt = \frac12,\qquad
\int_0^1 S_{2^j+k}(t)\,dt = \frac14\,2^{-\frac32 j},\quad k=0,1,\dots,2^j-1,\ j\geqslant0,
\]
and this shows
\[
\int_0^1 W_{2^{n+1}}(t)\,dt = \frac12\,G_0 + \frac14\sum_{j=0}^{n}\sum_{l=0}^{2^j-1} 2^{-\frac32 j}\,G_{2^j+l}.
\]
Consequently, since the G_j are iid N(0, 1) random variables,
\[
\mathbb{E}\int_0^1 W_{2^{n+1}}(t)\,dt = 0,
\]
\[
\mathbb{V}\int_0^1 W_{2^{n+1}}(t)\,dt
= \frac14 + \frac{1}{16}\sum_{j=0}^{n}\sum_{l=0}^{2^j-1} 2^{-3j}
= \frac14 + \frac{1}{16}\sum_{j=0}^{n} 2^{-2j}
= \frac14 + \frac{1}{16}\cdot\frac{1-2^{-2(n+1)}}{1-\frac14}
\xrightarrow[n\to\infty]{} \frac14 + \frac{1}{16}\cdot\frac43 = \frac13.
\]
This means that
\[
X = \frac12\,G_0 + \frac14\sum_{j=0}^{\infty} 2^{-\frac32 j}\underbrace{\sum_{l=0}^{2^j-1} G_{2^j+l}}_{\sim N(0,\,2^j)}
\]
where the series converges P-a.s. and in mean square, and X ∼ N(0, 1/3).
∎∎
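A short simulation sketch of this result (illustrative only; it uses a simple Riemann-sum approximation of the integral, so there is a small discretization error):

```python
# X = int_0^1 B(t) dt should satisfy Var X = 1/3.
import numpy as np

rng = np.random.default_rng(2)
steps, paths = 1_000, 10_000
dt = 1.0 / steps

increments = np.sqrt(dt) * rng.standard_normal((paths, steps))
B = np.cumsum(increments, axis=1)          # B(k/steps), k = 1, ..., steps
X = B.sum(axis=1) * dt                     # Riemann sum for int_0^1 B(t) dt

print("empirical Var X:", np.var(X))       # close to 1/3
print("1/3            :", 1 / 3)
```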
= λ(A ∩ B)
(d) Let
l m
f (t) = ∑ fi 1Ai (t) = ∑ gj 1Bj (t)
i=1 j=1
Ai = ⊍ Ck and Bj = ⊍ Ck .
k∶Ck ⊂Ai k∶Ck ⊂Bj
For
hk ∶= ∑ fi = ∑ gj
i∶Ck ⊂Ai j∶Ck ⊂Bj
we have
l m n
f (t) = ∑ fi 1Ai (t) = ∑ gj 1Bj (t) = ∑ hk 1Ck (t).
i=1 j=1 k=1
N −1
W (A ∪ B) = lim ∑ Gn ⟨1A + 1B , φn ⟩L2
N →∞ n=0
N −1 N −1
= lim ∑ Gn ⟨1A , φn ⟩L2 + lim ∑ Gn ⟨1B , φn ⟩L2
N →∞ n=0 N →∞ n=0
= W (A) + W (B)
l l ⎛ ⎞
∑ fi W (Ai ) = ∑ fi W ⊍ Ck
i=1 i=1 ⎝k∶Ck ⊂Ai ⎠
l
=∑ ∑ fi W (Ck )
i=1 k∶Ck ⊂Ai
n
=∑ ∑ fi W (Ck )
k=1 i∶Ck ⊂Ai
n
= ∑ dk W (Ck )
k=1
m
= . . . = ∑ gj W (Bj ).
j=1
\[
\overset{b)}{=} \sum_{j=1}^m\sum_{k=1}^m c_jc_k\,\lambda(B_j\cap B_k)
= \int_0^1\Big(\sum_{j=1}^m c_j\mathbb{1}_{B_j}(t)\Big)^2\lambda(dt)
= \int_0^1 |f|^2\,d\lambda.
\]
Since the family of step functions S is dense in L2 ([0, 1], λ), the isometry allows us
to extend the operator I to L2 ([0, 1], λ): Let f ∈ L2 ([0, 1], λ), then there exists a
sequence (fn )n∈N ⊂ S such that fn → f in L2 ([0, 1], λ). From
1
E(∣I(fn ) − I(fm )∣2 ) = E(∣I(fn − fm )∣2 ) = ∫ ∣fn − fm ∣2 dλ
0
we see that the sequence (I(fn ))n∈N is a Cauchy sequence in L2 (P). Therefore, the
limit
I(f ) ∶= L2 (P)- lim I(fn )
n→∞
exists. Note that the isometry implies that I(f ) does not depend on the approxi-
mating sequence (fn )n∈N . Consequently, I is well-defined.
Remark: Using the results from Section 3.1 it is clear that Wt ∶= W ([0, t]), t ∈ [0, 1], has
all properties of a Brownian motion. As usual, the continuity of the paths t ↦ Wt is not
obvious and needs arguments along the lines of, say, the Lévy–Ciesielski construction in
Section 3.2.
∎∎
\[
0 \leqslant S_n(t)\quad\forall n,t,
\qquad
S_{2^j+k}(t) \leqslant S_{2^j+k}\Big(\frac{2k+1}{2^{j+1}}\Big) = \frac{2^{j/2}}{2^{j+1}} = \frac12\,2^{-j/2}\quad\forall j,k,t,
\]
\[
\sum_{k=0}^{2^j-1} S_{2^j+k}(t) \leqslant \frac12\,2^{-j/2}\quad\text{(disjoint supports!)}
\]
By assumption,
\[
\exists C>0,\ \exists\epsilon\in(0,\tfrac12),\ \forall n:\ |a_n|\leqslant C\cdot n^{\epsilon}.
\]
Thus, we find
\[
\sum_{n=0}^\infty |a_n|S_n(t)
\leqslant |a_0| + \sum_{j=0}^\infty\sum_{k=0}^{2^j-1}|a_{2^j+k}|S_{2^j+k}(t)
\leqslant |a_0| + \sum_{j=0}^\infty\sum_{k=0}^{2^j-1} C\,(2^{j+1})^{\epsilon} S_{2^j+k}(t)
\leqslant |a_0| + \sum_{j=0}^\infty C\,2^{(j+1)\epsilon}\,\frac12\,2^{-j/2} < \infty.
\]
(b) For C > √2 we find from
\[
\mathbb{P}\big(|G_n| > C\sqrt{\log n}\big)
\leqslant \sqrt{\frac{2}{\pi}}\,\frac{1}{C\sqrt{\log n}}\,e^{-\frac12 C^2\log n}
= \sqrt{\frac{2}{\pi}}\,\frac{1}{C\sqrt{\log n}}\,n^{-C^2/2}
\qquad\forall n\geqslant3
\]
that the following series converges:
\[
\sum_{n=1}^\infty \mathbb{P}\big(|G_n| > C\sqrt{\log n}\big) < \infty.
\]
By the Borel–Cantelli lemma we find that G_n(ω) = O(√(log n)) for almost all ω, thus G_n(ω) = O(n^ε) for any ε ∈ (0, 1/2).
∎∎
1/p
Problem 3.4. Solution: Set ∥f ∥p ∶= ( E ∣f ∣p )
Solution 2: By assumption
This means that (d(X_{N_k}, X_m))_{k⩾0} is a Cauchy sequence in Lp(P; R). By the com-
pleteness of the space Lp (P; R) there is some fm ∈ Lp (P; R) such that
in Lp
d(XNk , Xm ) ÐÐÐ→ fm
k→∞
The subsequence nk may also depend on m. Since (nk (m))k is still a subsequence of
(Nk ), we still have d(Xnk (m) , Xm+1 ) → fm+1 in Lp , hence we can find a subsequence
(nk (m + 1))k ⊂ (nk (m))k such that d(Xnk (m+1) , Xm+1 ) → fm+1 a.s. Iterating this we see
that we can assume that (nk )k does not depend on m.
Moreover,
lim ∥fm ∥p = lim ∥ lim d(Xnk , Xm )∥p ⩽ lim lim ∥d(Xnk , Xm )∥p
m→∞ m→∞ k→∞ m→∞ k→∞
Therefore,
d(X_{n_k}, X_{n_l}) ⩽ ∣d(X_{n_k}, X_{m_r}) − f_{m_r}∣ + ∣d(X_{n_l}, X_{m_r}) − f_{m_r}∣ + 2ε ⩽ 4ε  ∀k, l ⩾ L(ε).
Since S is complete, this proves that (Xnk )k⩾0 converges to some X ∈ S almost surely.
This condition says that the sequence dn ∶= supm⩾n dp (Xn , Xm ) converges in Lp (P; R) to
zero. Hence there is a subsequence (nk )k such that
almost surely. This shows that d(Xnk , Xnl ) → 0 as k, l → ∞, i. e. we find by the complete-
ness of the space S that Xnk → X.
∎∎
\[
\mathbb{P}(X_t = Y_t) = 1\ \ \forall t\geqslant0
\implies \mathbb{P}(X_{t_j} = Y_{t_j},\ j=1,\dots,n) = \mathbb{P}\Big(\bigcap_{j=1}^n\{X_{t_j}=Y_{t_j}\}\Big) = 1.
\]
Thus,
\[
\mathbb{P}\Big(\bigcap_{j=1}^n\{X_{t_j}\in A_j\}\Big)
= \mathbb{P}\Big(\bigcap_{j=1}^n\{X_{t_j}\in A_j\}\cap\bigcap_{j=1}^n\{X_{t_j}=Y_{t_j}\}\Big)
= \mathbb{P}\Big(\bigcap_{j=1}^n\{X_{t_j}\in A_j\}\cap\{X_{t_j}=Y_{t_j}\}\Big)
\]
\[
= \mathbb{P}\Big(\bigcap_{j=1}^n\{Y_{t_j}\in A_j\}\cap\{X_{t_j}=Y_{t_j}\}\Big)
= \mathbb{P}\Big(\bigcap_{j=1}^n\{Y_{t_j}\in A_j\}\Big).
\]
∎∎
P(Xt = Yt ∀t ⩾ 0) = 1 Ô⇒ ∀t ⩾ 0 ∶ P(Xt = Yt ) = 1.
\[
\mathbb{P}\Big(\bigcup_{q\in D}\{X_q\neq Y_q\}\Big) \leqslant \sum_{q\in D}\mathbb{P}(X_q\neq Y_q) = 0,
\]
which means that P(Xq = Yq ∀q ∈ D) = 1. If I is countable, we are done. In the other case
we have, by the density of D,
equivalent ⇏ modification: To see this let (B_t)_{t⩾0} and (W_t)_{t⩾0} be two independent
one-dimensional Brownian motions defined on the same probability space. Clearly,
these processes have the same finite-dimensional distributions, i. e. they are equivalent.
On the other hand, for any t > 0
\[
\mathbb{P}(B_t = W_t) = \int_{-\infty}^{\infty}\mathbb{P}(B_t = y)\,\mathbb{P}(W_t\in dy) = \int_{-\infty}^{\infty} 0\,\mathbb{P}(W_t\in dy) = 0.
\]
∎∎
Problem 3.7. Solution: We use the characterization from Lemma 2.8. Its proof shows that
we can derive (2.15)
\[
\mathbb{E}\Big[\exp\Big(i\sum_{j=1}^n\langle\xi_j,\,X_{q_j}-X_{q_{j-1}}\rangle + i\langle\xi_0,\,X_{q_0}\rangle\Big)\Big]
= \exp\Big(-\frac12\sum_{j=1}^n|\xi_j|^2(q_j-q_{j-1})\Big)
\]
\[
f_{t_0,t_1}(x_0,x_1) = \frac{1}{2\pi\sqrt{(t_1-t_0)t_0}}\,\exp\Big(-\frac12\Big[\frac{(x_1-x_0)^2}{t_1-t_0} + \frac{x_0^2}{t_0}\Big]\Big).
\]
Thus,
\[
f_{t\mid t_0,t_1}(x\mid x_0,x_1) = \frac{f_{t_0,t,t_1}(x_0,x,x_1)}{f_{t_0,t_1}(x_0,x_1)}
= \frac{(2\pi)^{-3/2}\big((t_1-t)(t-t_0)t_0\big)^{-1/2}\exp\big(-\frac12\big[\frac{(x_1-x)^2}{t_1-t} + \frac{(x-x_0)^2}{t-t_0} + \frac{x_0^2}{t_0}\big]\big)}{(2\pi)^{-1}\big((t_1-t_0)t_0\big)^{-1/2}\exp\big(-\frac12\big[\frac{(x_1-x_0)^2}{t_1-t_0} + \frac{x_0^2}{t_0}\big]\big)}
\]
\[
= \frac{1}{\sqrt{2\pi}}\sqrt{\frac{t_1-t_0}{(t_1-t)(t-t_0)}}\,
\exp\Big(-\frac12\Big[\frac{(x_1-x)^2}{t_1-t} + \frac{(x-x_0)^2}{t-t_0} - \frac{(x_1-x_0)^2}{t_1-t_0}\Big]\Big).
\]
Now consider the argument in the square brackets [⋯] of the exp-function:
\[
\frac{(x_1-x)^2}{t_1-t} + \frac{(x-x_0)^2}{t-t_0} - \frac{(x_1-x_0)^2}{t_1-t_0}
= \frac{t_1-t_0}{(t_1-t)(t-t_0)}\Big[x - \Big(\frac{t-t_0}{t_1-t_0}\,x_1 + \frac{t_1-t}{t_1-t_0}\,x_0\Big)\Big]^2.
\]
Set
\[
\sigma^2 = \frac{(t_1-t)(t-t_0)}{t_1-t_0}
\qquad\text{and}\qquad
m = \frac{t-t_0}{t_1-t_0}\,x_1 + \frac{t_1-t}{t_1-t_0}\,x_0;
\]
then our calculation shows that
\[
f_{t\mid t_0,t_1}(x\mid x_0,x_1) = \frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\Big(-\frac{(x-m)^2}{2\sigma^2}\Big).
\]
∎∎
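The conditional law N(m, σ²) derived above is exactly what one uses to refine a Brownian path between two already sampled points. A minimal sketch (not from the text; the numerical values are arbitrary):

```python
# Draw B(t) given B(t0) = x0 and B(t1) = x1, for t0 < t < t1, from N(m, sigma^2).
import numpy as np

def brownian_interpolate(x0, x1, t0, t, t1, rng):
    m = ((t - t0) * x1 + (t1 - t) * x0) / (t1 - t0)
    sigma2 = (t1 - t) * (t - t0) / (t1 - t0)
    return m + np.sqrt(sigma2) * rng.standard_normal()

rng = np.random.default_rng(3)
print(brownian_interpolate(x0=0.2, x1=-0.5, t0=1.0, t=1.5, t1=2.0, rng=rng))
```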
4 The canonical model
Problem 4.1. Solution: Let F ∶ R → [0, 1] be a distribution function. We begin with a general
lemma: F has a unique generalized monotone increasing right-continuous inverse:
Indeed: For those t where F is strictly increasing and continuous, there is nothing to show.
Let us look at the two problem cases: F jumps and F is flat.
[Figure: a distribution function F(t) with a jump at w (values w−, w, w+) and its generalized inverse G(u).]
If F(t) jumps, we have G(w) = G(w+) = G(w−), and if F(t) is flat, we take the right endpoint of the 'flatness interval' [G(v−), G(v)] to define G (this leads to right-continuity of G).
(a) Let (Ω, A, P) = ([0, 1], B[0, 1], du) (du stands for Lebesgue measure) and define X =
G (G = F −1 as before). Then
the new random variables ξ(ω, ω ′ ) ∶= X(ω) and η(ω, ω ′ ) ∶= Y (ω ′ ). Then we have
• ξ, η live on the same probability space
• ξ ∼ X, η ∼ Y
The same type of argument works for arbitrary products, since independence is
always defined for any finite-dimensional subfamily. In the infinite case, we have
to invoke the theorem on the existence of infinite product measures (which are
constructed via their finite marginals) and which can be seen as a particular case
of Kolmogorov’s theorem, cf. Theorem 4.8 and Theorem A.2 in the appendix.
(c) The statements are the same if one uses the same construction as above. A difficulty
is to identify a multidimensional distribution function F (x). Roughly speaking, these
are functions of the form
• F ∶ Rn → [0, 1]
and where the outer sum runs over all tuples (1 , . . . , n ) ∈ {0, 1}n
Another way would be to take (Ω, A, P) = (Rn , B(Rn ), µ) where µ is the probability
measure induced by F (x). Then the random variables Xn are just the identity maps!
The independent copies are then obtained by the usual product construction.
∎∎
Problem 4.2. Solution: Step 1: Let us first show that P(lims→t Xs exists) < 1.
Thus,
\[
\mathbb{P}(|X_r - X_s| > \epsilon) = \mathbb{P}\Big(|X_1| > \frac{\epsilon}{\sqrt{s+r}}\Big)
\xrightarrow[r,s\to t]{} \mathbb{P}\Big(|X_1| > \frac{\epsilon}{\sqrt{2t}}\Big) \neq 0.
\]
This proves that Xs is not a Cauchy sequence in probability, i. e. it does not even converge
in probability towards a limit, so a.e. convergence is impossible.
In fact we have
\[
\Big\{\omega : \lim_{s\to t}X_s(\omega)\ \text{does not exist}\Big\}
\supset \bigcap_{k=1}^\infty\Big\{\sup_{s,r\in[t-\frac1k,\,t+\frac1k]}|X_s - X_r| > 0\Big\}
\]
Clearly, A ⊂ A(tn ) for any such sequence. Moreover, take two sequences (sn )n , (tn )n such
that sn → t and tn → t and which have no points in common; then we get by independence
and step 1
(Xs1 , Xs2 , Xs3 . . .) á (Xt1 , Xt2 , Xt3 . . .) Ô⇒ A(tn ) á A(sn )
• Clearly, ∅ ∈ Σ.
• If S ∈ Σ, then S ∈ σ(CS ) for some countable CS ⊂ E. Moreover, S c ∈ σ(CS ), i. e. S c ∈
Σ.
• If (Sn )n⩾0 ⊂ Σ are countably many sets, then Sn ∈ σ(Cn ) for some countable Cn ⊂ E
and each n ⩾ 0. Set C ∶= ⋃n Cn . This is again countable and we get Sn ∈ σ(C) for all
n, hence ⋃n Sn ∈ σ(C) and so ⋃n Sn ∈ Σ.
∎∎
(e) Assume that I is countable. Then B(E I ) is generated by countably many sets,
−1
namely πK (B(q, r)∩E) where K ⊂ I is finite and B(q, r) are open balls with rational
radii r > 0 and rational centres q ∈ E. (Note: There are only countably many
finite sets contained in a countable set I! Here the argument would break down
for uncountable index sets.) These are but the cylinder sets, i. e. they also generate
BI (E), and this proves B(E I ) ⊂ BI (E).
∎∎
Problem 4.5. Solution: Xt (ω) is a ‘random’ path starting at the randomly chosen point ω
and moving uniformly with constant speed ∣v∣ in the direction v/∣v∣. Note that only the
starting point is random and it is ‘drawn’ using the law µ or δx , i. e. in the latter case we
start a.s. at x.
∎∎
5 Brownian motion as a martingale
(a) We have
F^B_t ⊂ σ(σ(X), F^B_t) = σ(X, B_s : s ⩽ t) = F̃_t.
Let s ⩽ t. Then σ(B_t − B_s), F^B_s and σ(X) are independent, thus σ(B_t − B_s) is independent of σ(σ(X), F^B_s) = F̃_s. This shows that (F̃_t)_{t⩾0} is an admissible filtration for (B_t)_{t⩾0}.
From measure theory we know that (Ω, A, P) can be completed to (Ω, A∗ , P∗ ) where
A∗ ∶= {A ∪ N ∶ A ∈ A, N ∈ N},
P∗ (A∗ ) ∶= P(A) for A∗ = A ∪ N ∈ A∗ .
= P({Bt − Bs ∈ A} ∩ F )
= P(Bt − Bs ∈ A) P(F )
= P∗ (Bt − Bs ∈ A) P∗ (F ∪ N ).
B
Therefore Ft is admissible.
∎∎
Problem 5.2. Solution: Let t = t0 < . . . < tn , and consider the random variables
\[
\mathbb{E}\Big(e^{i\sum_{k=1}^n\langle\xi_k,\,B(t_k)-B(t_{k-1})\rangle}\,\mathbb{1}_F\Big)
= \mathbb{E}\Big(e^{i\langle\xi_n,\,B(t_n)-B(t_{n-1})\rangle}\cdot
\underbrace{e^{i\sum_{k=1}^{n-1}\langle\xi_k,\,B(t_k)-B(t_{k-1})\rangle}\,\mathbb{1}_F}_{\mathcal{F}_{t_{n-1}}\text{ mble., hence }\perp\!\!\!\perp\,B(t_n)-B(t_{n-1})}\Big)
= \dots
= \prod_{k=1}^n \mathbb{E}\Big(e^{i\langle\xi_k,\,B(t_k)-B(t_{k-1})\rangle}\Big)\,\mathbb{E}\,\mathbb{1}_F.
\]
This shows that the increments are independent among themselves (use F = Ω) and
that they are all together independent of Ft (use the above calculation and the fact that
the increments are among themselves independent to combine again the ∏n1 under the
expected value)
Thus,
Ft á σ(B(tk ) − B(tk−1 ) ∶ k = 1, . . . , n)
∎∎
(a) i) E ∣Xt ∣ < ∞, since the expectation does not depend on the filtration.
iii) Let N denote the set of all sets which are subsets of P-null sets. Denote by
P∗ the measure of the completion of (Ω, A, P) (compare with the solution to
Exercise 1.b)).
\[
\int_{F^*} X_s\,d\mathbb{P}^* = \int_F X_s\,d\mathbb{P} = \int_F X_t\,d\mathbb{P} = \int_{F^*} X_t\,d\mathbb{P}^*.
\]
ii) Note that {Xt ≠ Yt }, its complement and any of its subsets is in Ft∗ . Let B ∈
B(Rd ). Then we get
\[
\int_{F^*} Y_s\,d\mathbb{P}^* \overset{a)}{=} \int_{F^*} X_s\,d\mathbb{P}^* = \int_{F^*} X_t\,d\mathbb{P}^* = \int_{F^*} Y_t\,d\mathbb{P}^*,
\]
i. e. E(Yt ∣ Fs∗ ) = Ys .
∎∎
Problem 5.4. Solution: Let s < t and pick sn ↓ s such that s < sn < t. Then
\[
\mathbb{E}(X_t\mid\mathcal{F}_{s+})
\xleftarrow[s_n\downarrow s]{\text{a.e.}} \mathbb{E}(X_t\mid\mathcal{F}_{s_n})
\overset{\text{sub-MG}}{\geqslant} X(s_n)
\xrightarrow[n\to\infty]{\text{continuous paths}} X(s+) = X(s).
\]
The convergence on the left side follows from the (sub-)martingale convergence theorem
(Lévy’s downward theorem).
∎∎
Problem 5.5. Solution: Here is a direct proof without using the hint.
E(Bt4 ∣ Fs )
= E ((Bt − Bs + Bs )4 ∣ Fs )
= Bs4 + 4Bs3 E(Bt − Bs ) + 6Bs2 E((Bt − Bs )2 ) + 4Bs E((Bt − Bs )3 ) + E((Bt − Bs )4 )
= Bs4 + 6Bs2 (t − s) + 3(t − s)2
= Bs4 − 6Bs2 s + 6Bs2 t + 3(t − s)2 ,
and
E(Bt2 ∣ Fs ) = E ((Bt − Bs + Bs )2 ∣ Fs )
= t − s + 2Bs E(Bt − Bs ) + Bs2
= Bs2 + t − s.
Combining these calculations, such that the term 6Bs2 t vanishes from the first formula,
we get
∎∎
(a) Since Brownian motion has exponential moments of any order, we can use the dif-
ferentiation lemma for parameter-dependent integrals. Following the instructions we
get
\[
\frac{d}{d\xi}\,e^{\xi B_t - \frac{t}{2}\xi^2} = (B_t - t\xi)\,M_t^\xi,
\qquad
\frac{d^2}{d\xi^2}\,e^{\xi B_t - \frac{t}{2}\xi^2} = \big((B_t - t\xi)^2 - t\big)\,M_t^\xi,
\]
\[
\frac{d^3}{d\xi^3}\,e^{\xi B_t - \frac{t}{2}\xi^2} = \big((B_t - t\xi)^2 - 3t\big)(B_t - t\xi)\,M_t^\xi,
\]
\[
\frac{d^4}{d\xi^4}\,e^{\xi B_t - \frac{t}{2}\xi^2} = \Big\{\big((B_t - t\xi)^2 - 3t\big)\big((B_t - t\xi)^2 - t\big) - 2t(B_t - t\xi)^2\Big\}\,M_t^\xi,
\]
and so on. The recursion n → n + 1 is pretty obvious:
\[
\frac{d}{d\xi}\big[P_n(B,\xi)\,M_t^\xi\big]
= \underbrace{\Big[\frac{d}{d\xi}P_n(B,\xi) + P_n(B,\xi)(B - t\xi)\Big]}_{P_{n+1}(B,\xi)}\,M_t^\xi.
\]
Setting ξ = 0 we see that
\[
B_t,\qquad B_t^2 - t,\qquad B_t^3 - 3tB_t,\qquad B_t^4 - 6tB_t^2 + 3t^2
\]
are martingales.
(b) Part (a) shows the general recursion scheme
\[
P_1(b,\xi) = b - t\xi,\qquad P_{n+1}(b,\xi) = \frac{d}{d\xi}P_n(b,\xi) + (b - t\xi)\,P_n(b,\xi).
\]
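The recursion is easy to iterate symbolically. The following sketch (illustrative only) reproduces the martingale polynomials B_t, B_t² − t, B_t³ − 3tB_t, B_t⁴ − 6tB_t² + 3t² by evaluating Pₙ at ξ = 0:

```python
# P_1 = b - t*xi,  P_{n+1} = dP_n/dxi + (b - t*xi)*P_n;  P_n(b, 0) gives the polynomials.
import sympy as sp

b, t, xi = sp.symbols("b t xi")
P = b - t * xi
for _ in range(3):
    print(sp.expand(P.subs(xi, 0)))
    P = sp.diff(P, xi) + (b - t * xi) * P
print(sp.expand(P.subs(xi, 0)))
```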
(c) Using the fact that M_t := B_t⁴ − 6tB_t² + 3t² is a martingale with M₀ = 0, we get for the bounded stopping times τ ∧ n, by optional stopping, E B⁴_{τ∧n} − 6E[(τ∧n)B²_{τ∧n}] + 3E[(τ∧n)²] = 0. Thus, by the Cauchy–Schwarz inequality,
\[
\sqrt{\mathbb{E}\big[(\tau\wedge n)^2\big]} \leqslant 2\,\sqrt{\mathbb{E}\big[B^4_{\tau\wedge n}\big]}.
\]
Since |B_{τ∧n}| ⩽ max{a, b}, we can use monotone convergence (on the left side) and dominated convergence (on the right), and the first inequality follows. Thus,
\[
\sqrt{\mathbb{E}\big[B^4_{\tau\wedge n}\big]} \leqslant 6\,\sqrt{\mathbb{E}\big[(\tau\wedge n)^2\big]} \leqslant 6\,\sqrt{\mathbb{E}\big[\tau^2\big]},
\]
and the estimate follows from dominated convergence (on the left).
∎∎
For t = 0 we have E e^{c|B₀|} = E e^{c|B₀|²} = 1, and for c ⩽ 0 we have E e^{c|B_t|} ⩽ 1 and E e^{c|B_t|²} ⩽ 1. Now let t > 0 and c > 0. There exists some R > 0 such that c|x| < \frac{1}{4t}|x|² for all |x| > R. Thus
∎∎
Adding these terms and noting that ∣x∣2 = ∑dj=1 x2j we get
\[
\int \frac12\,p(t,x)\,\frac{\partial^2}{\partial x_j^2}f(t,x)\,dx
= \frac12\,p(t,x)\,\frac{\partial}{\partial x_j}f(t,x)\Big|_{-\infty}^{\infty}
- \int \frac{\partial}{\partial x_j}p(t,x)\cdot\frac12\,\frac{\partial}{\partial x_j}f(t,x)\,dx
\]
\[
= 0 - \frac{\partial}{\partial x_j}p(t,x)\cdot\frac12\,f(t,x)\Big|_{-\infty}^{\infty}
+ \int \frac{\partial^2}{\partial x_j^2}p(t,x)\cdot\frac12\,f(t,x)\,dx
= \int \frac{\partial^2}{\partial x_j^2}p(t,x)\cdot\frac12\,f(t,x)\,dx.
\]
By the same arguments as in Exercise 7 we find that all terms are integrable and
vanish as ∣x∣ → ∞. This justifies the above calculation. Furthermore summing over
j = 1, . . . d we obtain the statement.
∎∎
Problem 5.9. Solution: Note that E ∣Xt ∣ < ∞ for all a, b, cf. Problem 5.7. We have
∎∎
Problem 5.10. Solution: Measurability (i. e. adaptedness to the Filtration Ft ) and integra-
bility is no issue, see also Problem 5.7.
= Vs .
and
\[
3\,\mathbb{E}\Big(\int_0^t B_r\,dr\;\Big|\;\mathcal{F}_s\Big) = 3\int_0^s B_r\,dr + 3(t-s)B_s.
\]
Thus, Xt is a martingale.
∎∎
∎∎
Problem 5.12. Solution: For a)–c) we prove only the statements for τ ○ , the statements for τ
are proved analogously.
A ⊂ C Ô⇒ {t ⩾ 0 ∶ Xt ∈ A} ⊂ {t ⩾ 0 ∶ Xt ∈ C} Ô⇒ τA○ ⩾ τC○ .
○
(b) By part a) we have τA∪C ○
⩽ τA○ and τA∪C ⩽ τC○ . Thus,
a)
○
τA∪C ⩽ min{τA○ , τC○ }.
○
To see the converse, min{τA○ , τC○ } ⩽ τA∪C , it is enough to show that
○
since this implication shows that τA∪C (ω) ⩾ min{τA○ (ω), τC○ (ω)} holds.
Remark: we cannot expect "=". To see this consider a BM¹ starting at B0 = 0 and
the set
A = [4, 6] and C = [1, 2] ∪ [5, 7].
In order to show the converse, τA○ ⩾ inf n⩾1 τA○ n , it is enough to check that
since, if this is true, this implies that τA○ (ω) ⩾ inf n⩾0 τA○ n (ω).
inf ( n1 + inf{s ⩾ 1
n ∶ Xs ∈ A}) = 0 + inf inf{s ⩾ 1
n ∶ Xs ∈ A}
n n
= inf{s > 0 ∶ Xs ∈ A}
= τA .
○
(f) Let Xt = x0 + t. Then τ{x 0}
= 0 and τ{x0 } = ∞.
More generally, a similar situation may happen if we consider a process with con-
tinuous paths, a closed set F , and if we let the process start on the boundary ∂F .
Then τF○ = 0 a.s. (since the process is in the set) while τF > 0 is possible with positive
probability.
∎∎
Let x0 ∈ U . Then τU○ = 0 and, since U is open and Xt is continuous, there exists an N > 0
such that
X 1 ∈ U for all n ⩾ N.
n
Thus τU = 0.
∎∎
= ∣x − z∣
∎∎
Problem 5.15. Solution: We treat the two cases simultaneously and check the three properties
of a sigma algebra:
i) We have Ω ∈ F∞ and
Ω ∩ {τ ⩽ t} = {τ ⩽ t} ∈ Ft ⊂ Ft+ .
∎∎
∎∎
(a) e^{iξB_t + \frac12 t|ξ|^2} is a martingale for all ξ ∈ R by Example 5.2 d). By optional stopping,
\[
1 = \mathbb{E}\,e^{\frac12(\tau\wedge t)c^2 + icB_{\tau\wedge t}}.
\]
Letting t → ∞ we get
\[
1 \geqslant \mathbb{E}\big(e^{\frac12\tau c^2}\cos(cB_\tau)\big) \geqslant \cos(mc)\,\mathbb{E}\,e^{\frac12\tau c^2}.
\]
Set m = max{a, b}. By the definition of τ we see that |B_{τ∧t}| ⩽ m; since τ is integrable we get
\[
|B^3_{\tau\wedge t}| \leqslant m^3
\qquad\text{and}\qquad
\Big|\int_0^{\tau\wedge t} B_s\,ds\Big| \leqslant \tau\cdot m.
\]
Therefore, we can use in (*) the dominated convergence theorem and let t → ∞:
\[
\mathbb{E}\Big(\int_0^\tau B_s\,ds\Big) = \frac13\,\mathbb{E}(B_\tau^3)
= \frac13\,(-a)^3\,\mathbb{P}(B_\tau=-a) + \frac13\,b^3\,\mathbb{P}(B_\tau=b)
\overset{(5.12)}{=} \frac13\cdot\frac{-a^3 b + b^3 a}{a+b}
= \frac13\,ab(b-a).
\]
∎∎
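A rough Monte Carlo check of this identity (illustrative only; the Euler discretization of the exit time introduces a small bias, and the parameters are arbitrary):

```python
# E int_0^tau B_s ds = ab(b - a)/3 for tau = first exit time of (-a, b), B_0 = 0.
import numpy as np

rng = np.random.default_rng(5)
a, b, dt, paths = 1.0, 2.0, 1e-3, 1_000
vals = []
for _ in range(paths):
    B, integral = 0.0, 0.0
    while -a < B < b:
        integral += B * dt
        B += np.sqrt(dt) * rng.standard_normal()
    vals.append(integral)
print("Monte Carlo :", np.mean(vals))
print("ab(b - a)/3 :", a * b * (b - a) / 3)
```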
Since ∣Bt∧τR ∣ ⩽ R, we can use monotone convergence on the left and dominated convergence
on the right-hand side to get
\[
\mathbb{E}\,\tau_R = \sup_{t\geqslant0}\mathbb{E}(t\wedge\tau_R)
= \lim_{t\to\infty}\frac1d\,\mathbb{E}\,|B_{t\wedge\tau_R}|^2
= \frac1d\,\mathbb{E}\,|B_{\tau_R}|^2 = \frac1d\,R^2.
\]
∎∎
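Again a rough Monte Carlo sketch (Euler scheme, so there is a small discretization bias; dimension, radius and step size are arbitrary choices):

```python
# For a d-dimensional BM started at 0, the exit time of B(0, R) satisfies E tau_R = R^2/d.
import numpy as np

rng = np.random.default_rng(6)
d, R, dt, paths = 3, 1.0, 1e-3, 2_000
times = []
for _ in range(paths):
    x, time = np.zeros(d), 0.0
    while np.dot(x, x) < R * R:
        x += np.sqrt(dt) * rng.standard_normal(d)
        time += dt
    times.append(time)
print("Monte Carlo E tau_R:", np.mean(times))
print("R^2 / d            :", R * R / d)
```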
This shows that {σ < τ }, {σ ⩾ τ } = {σ < τ }c ∈ Fσ∧τ . Since σ and τ play symmetric
roles, we get with a similar argument that {σ > τ }, {σ ⩽ τ } = {σ > τ }c ∈ Fσ∧τ , and
the claim follows.
(c) Since τ ∧σ is an integrable stopping time, we get from Wald’s identities, Theorem 5.10,
that
E Bτ2∧σ = E(τ ∧ σ) < ∞.
In the step marked with (*) we used that for integrable stopping times σ, τ we have
Since Bτ ∧k ÐÐÐ→ Bτ in L2 (P), see the proof of Theorem 5.10, we get for some fixed
k→∞
i < k because of Fσ∧τ ∧i ⊂ Fσ∧τ ∧k that
\[
\int_F B_\tau\,d\mathbb{P}
= \lim_{k\to\infty}\int_F B_{\tau\wedge k}\,d\mathbb{P}
= \lim_{k\to\infty}\int_F B_{\sigma\wedge\tau\wedge k}\,d\mathbb{P}
= \int_F B_{\sigma\wedge\tau}\,d\mathbb{P}
\qquad\text{for all } F\in\mathcal{F}_{\sigma\wedge\tau\wedge i}.
\]
Let ρ = σ ∧ τ (or any other stopping time). Since Fρ∧k = Fρ ∩ Fk we see that Fρ is
generated by the ∩-stable generator ⋃i Fρ∧i , and (*) follows.
∎∎
6 Brownian motion as a Markov process
(b) Let u ∈ Bb (R) and s, t ⩾ 0. Then, by the independent and stationary increments
property of a Brownian motion
E u(∣Bt+s ∣ ∣ Fs ) = E u(∣(Bt+s − Bs ) + Bs ∣ ∣ Fs )
= E u(∣(Bt+s − Bs ) + y∣)∣
y=Bs
= E u(∣Bt + y∣)∣ .
y=Bs
and, therefore,
1
E u(∣Bt+s ∣ ∣ Fs ) = [E u(∣Bt + y∣) + E u(∣Bt − y∣)]y=B
2 s
1 ∞
= [∫ (u(∣z + y∣) + u(∣z − y∣)) gt (z) dz]
2 −∞ y=Bs
1 ∞
= [∫ u(∣z∣) (gt (z + y) + gt (z − y)) dz]
2 −∞ y=Bs
∞
=∫ u(∣z∣) (gt (z + y) + gt (z − y)) dz∣
0 y=Bs
∞
= ∫ u(∣z∣) (gt (z + ∣y∣) + gt (z − ∣y∣)) dz ∣
0
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶ y=Bs
=∶gu,s,t+s (y)—it is independent of s!
(c) Set Mt ∶= sups⩽t Bs for the running maximum, i. e. Yt = Mt − Bt . From the reflection
principle, Theorem 6.9 we know that Yt ∼ ∣Bt ∣. So the guess is that Y and ∣B∣ are
two Markov processes with the same transition function!
and, as supu⩽s (Br − Bs ) is Fs measurable and (Bs − Bt+s ), sup0⩽u⩽t (Bs+u − Bs+t ) á Fs ,
we get
Using time inversion (cf. 2.15) we see that W = (Wu )u∈[0,t] = (Bt−u − Bt )u∈[0,t]
is again a BM1 , and we get (Bt , sup0⩽u⩽t (Bu − Bt )) ∼ (Wt , sup0⩽u⩽t (Wu − Wt ))) =
(−Bt , sup0⩽u⩽t Bu )) (we understand the vector as a function of the whole process B
resp. W and use B ∼ W )
Using Solution 2 of Problem 6.8 we know the joint distribution of (Bt , supu⩽t Bu ):
and
2 ∞ z 2z − x −(2z−x)2 /2t
II = √ ∫ ∫ u(y + x) e dx dz
2πt z=0 x=−z−y t
2 ∞ x+y 2z − x
e−(2z−x) /2t dz
2
=√ ∫ u(y + x) ∫
2πt x=−y z=x t
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶
x+y
=− 21 e−(2z−x) /2t ∣
2
z=x
1 ∞
u(y + x) [e−x /2t − e−(x+2y) /2t ] dx
2 2
dx = √ ∫
2πt x=−y
1 ∞
u(ξ) [e−(ξ−y) /2t − e−(ξ+y) /2t ] dξ.
2 2
=√ ∫
2πt x=−y
Finally, adding I and II we end up with
∞
E (u( max {y + Bt , sup Bu )})) = ∫ u(z)(gt (z + y) + gt (z − y)) dz, y⩾0
0⩽u⩽t 0
∎∎
E (f (Ms+t , Bs+t ) ∣ Fs )
= E (f ( sup Bu ∨ Ms , (Bs+t − Bs ) + Bs ) ∣ Fs )
s⩽u⩽s+t
E (f (Ms+t , Bs+t ) ∣ Fs )
= φ(Ms , Bs )
where
E (f (Is+t , Bs+t ) ∣ Fs )
s+t
= E (f ( ∫ Bu du + Is , (Bs+t − Bs ) + Bs ) ∣ Fs )
s
s+t
= E (f ( ∫ (Bu − Bs ) du + Is + tBs , (Bs+t − Bs ) + Bs ) ∣ Fs ) .
s
E (f (Is+t , Bs+t ) ∣ Fs )
s+t
= E (f ( ∫ (Bu − Bs ) du + y + tz, (Bs+t − Bs ) + z)) ∣
s y=Is ,z=Bs
= φ(Is , Bs )
(c) No! If we use the calculation of a) and b) for the function f (y, z) = g(y), i. e. only
depending on M or I, respectively, we see that we still get
E (g(It+s ) ∣ Fs ) = ψ(Bs , Is ),
∎∎
and the last expression is clearly measurable. This applies, in particular, to f = ∏nj=1 1Aj
where G ∶= ⋂nj=1 {B(tj ) ∈ Aj }, i. e. Ex 1G is Borel measurable.
Set
⎧
⎪ ⎫
⎪
⎪n ⎪
Γ ∶= ⎨ ⋂ {B(tj ) ∈ Aj } ∶ n ⩾ 0, 0 ⩽ t1 < ⋯tn , A1 , . . . An ∈ Bb (Rd )⎬ .
⎪
⎪ ⎪
⎪
⎩j=1 ⎭
x ↦ Ex 1Ac = Ex (1 − 1A ) = 1 − Ex 1A ∈ Bb (Rd ) Ô⇒ Ac ∈ Σ.
x ↦ Ex 1A = ∑ Ex 1Aj ∈ Bb (Rd ).
j
This shows that Σ is a Dynkin System. Denote by δ(⋅) the Dynkin system generated by
the argument. Then
Γ ⊂ Σ ⊂ F∞
B
Ô⇒ δ(Γ) ⊂ δ(Σ) = Σ ⊂ F∞
B
.
But δ(Γ) = σ(Γ) since Γ is stable under finite intersections and σ(Γ) = F∞
B
. This proves,
in particular, that Σ = F∞
B
.
∎∎
Problem 6.4. Solution: Following the hint we set un (x) ∶= (−n)∨x∧n. Then un (x) → u(x) ∶=
x. Using (6.7) we see
and we get
lim E [un (Bτ ) ∣ Fτ + ](ω) = lim un (Bτ )(ω) = Bτ (ω).
n→∞ n→∞
Since the l.h.S. is Fτ + measurable (as limit of such measurable functions!), the claim
follows.
∎∎
∎∎
(c) We have
∎∎
Problem 6.7. Solution: We begin with a simpler situation. As usual, we write τ_b for the first passage time of the level b: τ_b = inf{t ⩾ 0 : sup_{s⩽t} B_s = b} where b > 0. From Example 5.2 d) we know that (M_t^ξ := exp(ξB_t − ½tξ²))_{t⩾0} is a martingale. By optional stopping we get that (M^ξ_{t∧τ_b})_{t⩾0} is also a martingale and has, therefore, constant expectation. Thus, for ξ > 0 (and with E = E⁰)
\[
1 = \mathbb{E}\,M_0^\xi = \mathbb{E}\big(\exp\big(\xi B_{t\wedge\tau_b} - \tfrac12(t\wedge\tau_b)\xi^2\big)\big).
\]
Since the RV exp(ξB_{t∧τ_b}) is bounded (mind: ξ ⩾ 0 and B_{t∧τ_b} ⩽ b), we can let t → ∞ and get
\[
1 = \mathbb{E}\big(\exp\big(\xi B_{\tau_b} - \tfrac12\tau_b\xi^2\big)\big)
= \mathbb{E}\big(\exp\big(\xi b - \tfrac12\tau_b\xi^2\big)\big),
\]
or, if we take ξ = √(2λ),
\[
\mathbb{E}\,e^{-\lambda\tau_b} = e^{-\sqrt{2\lambda}\,b}.
\]
Now let us turn to the situation of the problem. Set τ = τ°_{(a,b)^c}. Here, B_{t∧τ} is bounded (it is in the interval (a, b)), and this makes things easier when it comes to optional stopping. As before, we get by stopping the martingale (M_t^ξ)_{t⩾0} that
(and not, as before, for positive ξ! Mind also the starting point x ≠ 0, but this does not
change things dramatically.) by, e.g., dominated convergence. The problem is now that
Bτ does not attain a particular value as it may be a or b. We get, therefore, for all ξ ∈ R
Now pick ξ = ±√(2λ). This yields 2 equations in two unknowns:
\[
e^{\sqrt{2\lambda}\,x} = e^{\sqrt{2\lambda}\,a}\,\mathbb{E}^x\big(e^{-\lambda\tau}\mathbb{1}_{\{B_\tau=a\}}\big) + e^{\sqrt{2\lambda}\,b}\,\mathbb{E}^x\big(e^{-\lambda\tau}\mathbb{1}_{\{B_\tau=b\}}\big),
\]
\[
e^{-\sqrt{2\lambda}\,x} = e^{-\sqrt{2\lambda}\,a}\,\mathbb{E}^x\big(e^{-\lambda\tau}\mathbb{1}_{\{B_\tau=a\}}\big) + e^{-\sqrt{2\lambda}\,b}\,\mathbb{E}^x\big(e^{-\lambda\tau}\mathbb{1}_{\{B_\tau=b\}}\big),
\]
and so
\[
\mathbb{E}^x\big(e^{-\lambda\tau}\mathbb{1}_{\{B_\tau=b\}}\big) = \frac{\sinh\big(\sqrt{2\lambda}\,(x-a)\big)}{\sinh\big(\sqrt{2\lambda}\,(b-a)\big)}
\quad\text{and}\quad
\mathbb{E}^x\big(e^{-\lambda\tau}\mathbb{1}_{\{B_\tau=a\}}\big) = \frac{\sinh\big(\sqrt{2\lambda}\,(b-x)\big)}{\sinh\big(\sqrt{2\lambda}\,(b-a)\big)}.
\]
For the solution of Problem a) we only have to add these two expressions:
\[
\mathbb{E}^x\,e^{-\lambda\tau}
= \mathbb{E}^x\big(e^{-\lambda\tau}\mathbb{1}_{\{B_\tau=a\}}\big) + \mathbb{E}^x\big(e^{-\lambda\tau}\mathbb{1}_{\{B_\tau=b\}}\big)
= \frac{\sinh\big(\sqrt{2\lambda}\,(b-x)\big) + \sinh\big(\sqrt{2\lambda}\,(x-a)\big)}{\sinh\big(\sqrt{2\lambda}\,(b-a)\big)}.
\]
∎∎
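A Monte Carlo sketch of the final formula (Euler scheme, illustrative only; a, b, x and λ are arbitrary choices and the time discretization introduces a small bias):

```python
# E_x e^{-lam*tau} = [sinh(sqrt(2*lam)(b-x)) + sinh(sqrt(2*lam)(x-a))] / sinh(sqrt(2*lam)(b-a))
import numpy as np

rng = np.random.default_rng(7)
a, b, x, lam, dt, paths = 0.0, 2.0, 0.5, 1.0, 1e-3, 2_000

acc = []
for _ in range(paths):
    B, time = x, 0.0
    while a < B < b:
        B += np.sqrt(dt) * rng.standard_normal()
        time += dt
    acc.append(np.exp(-lam * time))

s = np.sqrt(2 * lam)
exact = (np.sinh(s * (b - x)) + np.sinh(s * (x - a))) / np.sinh(s * (b - a))
print("Monte Carlo:", np.mean(acc), "  exact:", exact)
```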
P(Bt ⩽ x, Mt ⩾ y) = P(Bt ⩽ x, τ ⩽ t)
= P(Bt∨τ ⩽ x, τ ⩽ t)
by the tower property and pull-out. Now we can use Theorem 6.11
Solution 2 (using Theorem 6.18): We have (with the notation of Theorem 6.18)
\[
\mathbb{P}(M_t < y, B_t\in dx)
= \lim_{a\to-\infty}\mathbb{P}(m_t > a, M_t < y, B_t\in dx)
\overset{(6.19)}{=} \frac{dx}{\sqrt{2\pi t}}\Big[e^{-\frac{x^2}{2t}} - e^{-\frac{(x-2y)^2}{2t}}\Big]
\]
and if we differentiate this expression in y we get
∎∎
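The reflection principle behind this computation, P(M_t ⩾ y, B_t ⩽ x) = P(B_t ⩾ 2y − x) for x ⩽ y, y ⩾ 0, can be checked by simulation. A sketch (illustrative only; the discrete-time maximum slightly underestimates M_t, so the agreement is approximate):

```python
# P(M_t >= y, B_t <= x) = P(B_t >= 2y - x)  for x <= y, y >= 0.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(8)
t, y, x, steps, paths = 1.0, 1.0, 0.5, 500, 20_000
dt = t / steps

incr = sqrt(dt) * rng.standard_normal((paths, steps))
B = np.cumsum(incr, axis=1)
M = B.max(axis=1)                                  # discrete-time running maximum

lhs = np.mean((M >= y) & (B[:, -1] <= x))
rhs = 0.5 * erfc((2 * y - x) / sqrt(2 * t))        # P(B_t >= 2y - x)
print("Monte Carlo:", lhs, "  reflection formula:", rhs)
```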
Problem 6.9. Solution: This is the so-called absorbed or killed Brownian motion. The result is
\[
\mathbb{P}^x(B_t\in dz,\ \tau_0 > t)
= \big(g_t(x-z) - g_t(x+z)\big)\,dz
= \frac{1}{\sqrt{2\pi t}}\Big(e^{-(x-z)^2/(2t)} - e^{-(x+z)^2/(2t)}\Big)\,dz
\]
for x, z > 0 or x, z < 0.
To see this result we assume that x > 0. Write Mt = sups⩽t Bs and mt = inf s⩽t Bs for the
running maximum and minimum, respectively. Then we have for A ⊂ [0, ∞)
= P0 (Bt ∈ x − A, 0 ⩽ Mt < x)
and we get
\[
\mathbb{P}^x(B_t\in A,\ \tau_0>t)
= \int \mathbb{1}_A(x-a)\Big[\int_0^x \frac{2(2b-a)}{\sqrt{2\pi t^3}}\,\exp\Big(-\frac{(2b-a)^2}{2t}\Big)db\Big]da
= \frac{1}{\sqrt{2\pi t}}\int \mathbb{1}_A(x-a)\Big[\int_0^x \frac{2(2b-a)}{t}\,\exp\Big(-\frac{(2b-a)^2}{2t}\Big)db\Big]da
\]
\[
= \frac{1}{\sqrt{2\pi t}}\int \mathbb{1}_A(x-a)\Big[-\exp\Big(-\frac{(2b-a)^2}{2t}\Big)\Big]_{b=0}^{b=x}da
= \frac{1}{\sqrt{2\pi t}}\int \mathbb{1}_A(x-a)\Big\{\exp\Big(-\frac{a^2}{2t}\Big) - \exp\Big(-\frac{(2x-a)^2}{2t}\Big)\Big\}da
\]
\[
= \frac{1}{\sqrt{2\pi t}}\int \mathbb{1}_A(z)\Big\{\exp\Big(-\frac{(x-z)^2}{2t}\Big) - \exp\Big(-\frac{(x+z)^2}{2t}\Big)\Big\}dz.
\]
∎∎
Problem 6.10. Solution: For a compact set K ⊂ Rd the set Un ∶= K + B(0, 1/n) ∶= {x + y ∶ x ∈
K, ∣y∣ < 1/n} is open.
we see that φn (x) is continuous. Obviously, 1Un (x) ⩾ φn (x) ⩾ φn+1 ⩾ 1K , and 1K = inf n φn
follows.
∎∎
= E [P (τ−x > a − t) ∣ ]
x=Bt
−1 ∫
(1 + (2s) ) 0
1 2s −1 ∞
[e−x (1+(2s) ) ]
2
=√
2πs 2s + 1 x=0
1 2s
=√ .
2πs 2s + 1
Let (Bt )t⩾0 be a BM1 . Find the distribution of ξ̃t ∶= inf{s ⩾ t ∶ Bs = 0}. This gives
1 1 2c2
P(ξ̃t ∈ da) = √ √
(a − t) π 2π c 2c2 + 1
√
1 a−t t
= √
(a − t)π t (a − t)a/(a − t)
√
1 t
= .
aπ a − t
∎∎
Problem 6.12. Solution: We have seen in Problem 6.1 that M − B is a Markov process with
the same law as ∣B∣. This entails immediately that ξ ∼ η.
Attention: this problem shows that it is not enough to have only Mt − Bt ∼ ∣Bt ∣ for all
t ⩾ 0, we do need that the finite-dimensional distributions coincide. The Markov property
guarantees just this once the one-dimensional distributions coincide!
∎∎
(a) We have
P (Bt = 0 for some t ∈ (u, v)) = 1 − P (Bt ≠ 0 for all t ∈ (u, v)).
(c) We have
u→0 arcsin
v
√ √
v v−u
= lim √ √
a)
∎∎
7 Brownian motion and transition semigroups
Problem 7.1. Solution: Banach space: It is obvious that C∞ (Rd ) is a linear space. Let us
show that it is closed. By definition, u ∈ C∞ (Rd ) if
Let (un )n ⊂ C∞ (Rd ) be a Cauchy sequence for the uniform convergence. It is clear that
the uniform limit u = limn un is again continuous. Fix and pick R as in (*). Then we get
∣u(x)∣ ⩽ ∣un (x) − u(x)∣ + ∣un (x)∣ ⩽ ∥un − u∥∞ + ∣un (x)∣.
Since un() ∈ C∞ , we find with (*) some R = R(n(), ) = R() such that
Density: Fix an and pick R > 0 as in (*), and pick a cut-off function χ = χR ∈ C(Rd )
such that
1B(0,R) ⩽ χR ⩽ 1B(0,2R) .
∎∎
Problem 7.2. Solution: Fix (t, y, v) ∈ [0, ∞) × Rd × C∞(Rd) and ε > 0, and take any (s, x, u) ∈ [0, ∞) × Rd × C∞(Rd). Then we find using the triangle inequality
∣Ps u(x) − Pt v(y)∣ ⩽ ∣Ps u(x) − Ps v(x)∣ + ∣Ps v(x) − Pt v(x)∣ + ∣Pt v(x) − Pt v(y)∣
⩽ sup ∣Ps u(x) − Ps v(x)∣ + sup ∣Ps v(x) − Ps Pt−s v(x)∣ + ∣Pt v(x) − Pt v(y)∣
x x
• Using the strong continuity of the semigroup (Proposition 7.3 f) there is some δ2 = δ2(t, v, ε) such that ∣t − s∣ < δ2 ⟹ ∥P_{t−s}v − v∥∞ ⩽ ε.
∎∎
property
= Ex (f (Xt ) Ex (g(Xt+s ) ∣ Ft ))
pull
out
property
= Ex (f (Xt )h(Xt ))
Thus, Ex f (Xt )g(Xt+s ) = Ex φ(Xt ) and φ(y) = f (y)h(y) is in C∞ . This shows that
x ↦ Ex (f (Xt )g(Xt+s )) is in C∞ .
Using semigroups we can write the above calculation in the following form:
∎∎
pt (z − y) = (2πt)−d/2 e−∣z−y∣
2 /2t
, t>0
∣∂zk pt (z − y)∣
⩽ sup ∣Qk (z, y, t)∣ 1B(0,2R) (y) + sup ∣Qk (z, y, t)e−∣y∣
2 /(16t)
∣ e−∣y∣
2 /(16t)
1Bc (0,2R) (y).
∣y∣⩽2R ∣y∣⩾2R
This inequality holds uniformly in a small neighbourhood of z, i. e. we can use the differ-
entiation lemma from measure and integration to conclude that ∂ k Pt u ∈ Cb .
x ↦ ∂t u(t, x) is in C∞ for t > 0: This follows from the first part and the fact that
d ∣z − y∣2
∂t pt (z − y) = − (2πt)−d/2−1 e−∣z−y∣ /2t + (2πt)−d/2 e−∣z−y∣ /2t
2 2
2 2t2
1 ∣z − y∣2 d
= ( − ) pt (z − y).
2 t2 t
Again with the domination argument of the first part we see that ∂t ∂xk u(t, x) is continuous
on (0, ∞) × Rd .
∎∎
(a) Note that ∣un ∣ ⩽ ∣u∣ ∈ Lp . Since ∣un − u∣p ⩽ (∣un ∣ + ∣u∣)p ⩽ (∣u∣ + ∣u∣)p = 2p ∣u∣p ∈ L1
and since ∣un (x) − u(x)∣ → 0 for every x as n → ∞, the claim follows by dominated
convergence.
If vn is any other sequence in Lp with limit u, the above argument shows that
limn Pt vn also exists. ‘Mixing’ the sequences (wn ) ∶= (u1 , v1 , u2 , v2 , u3 , v3 , . . .) pro-
duces yet another convergent sequence with limit u, and we conclude that
i. e. P̃t is well-defined.
(c) Any u ∈ Lp with 0 ⩽ u ⩽ 1 has a representative u ∈ Bb . And then the claim follows
since Pt is sub-Markovian.
(d) Recall that y ↦ ∥u(⋅ + y) − u∥Lp is for u ∈ Lp (dx) a continuous function. By Fubini’s
theorem and the Hölder inequality
= E (∥u(⋅ + Bt ) − u∥pLp ) .
∎∎
∎∎
Problem 7.7. Solution: Using Tt 1C (x) = pt (x, C) = ∫ 1C (y) pt (x, dy) we get
= Tt1 (1C1 [Tt2 −t1 1C2 {⋯Ttn−1 −tn−2 ∫ 1Cn (xn ) ptn −tn−1 (⋅, dxn )⋯}])(x)
= Tt1 (1C1 [Tt2 −t1 1C2 {⋯ ∫ 1Cn−1 (xn−1 ) ∫ 1Cn (xn ) ptn −tn−1 (xn−1 , dxn )×
= ∫ . . . ∫ 1C1 (x1 )1C2 (x2 )⋯1Cn (xn )ptn −tn−1 (xn−1 , dxn )ptn−1 −tn−2 (xn−2 , dxn−1 )×
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
n integrals
and the right-hand side clearly defines a probability measure. By the uniqueness theorem
for measures, each measure is uniquely defined by its values on the rectangles, so we are
done.
∎∎
inf ∣x − α∣ ⩽ ∣x − a∣ ⩽ ∣x − y∣ + ∣a − y∣
α∈A
inf ∣x − α∣ ⩽ ∣x − y∣ + inf ∣a − y∣
α∈A a∈A
d(x,Unc )
(b) By definition, Un = K + B(0, 1/n) and un (x) ∶= d(x,K)+d(x,Unc ) . Being a combination
of continuous functions, see Part (a), un is clearly continuous. Moreover,
un ∣K ≡ 1 and un ∣Unc ≡ 0.
n→∞
This shows that 1K ⩽ un ⩽ 1Unc ÐÐÐ→ 1K .
(c) Assume, without loss of generality, that supp χn ⊂ B(0, 1/n2 ). Since 0 ⩽ un ⩽ 1, we
find
χn ⋆ un (x) = ∫ χn (x − y)un (y) dy ⩽ ∫ χn (x − y) dy = 1 ∀x.
(Essentially this means that un is ‘linear’ for x ∈ Un ∖ K!). Thus, if γ > 1/n,
⩾ (1 − γ) ∫ χn (x − y)1K+B(0,γ/n) (y) dy
⩾ (1 − γ) ∫ χn (x − y)1x+B(0,1/n2 ) (y) dy
=1−γ ∀x ∈ K.
hence,
lim χn ⋆ un (x) = x for all x ∈ K.
n→∞
For x ∈ K we find
= ∫ χn (y)1K+supp un (x − y) dy
= ∫ χn (y) dy
=1 ∀x ∈ K.
∣Ts u(x) − Tt w(y)∣ ⩽ ∣Ts u(x) − Ts u(y)∣ + ∣Ts u(y) − Tt u(y)∣ + ∣Tt u(y) − Tt w(y)∣
This proves continuity with δ ∶= min(δ1 , δ2 , δ3 ); note that δ may (and will) depend
on as well as on the fixed point (s, x, u), as we require continuity at this point only.
Mind that there are minor, but obvious, changes necessary if s = 0.
Remark: A full account on Feller semigroups can be found in Böttcher–Schilling–
Wang [1, Chapter 1].
∎∎
= etA esA .
and, as in the first calculation, we see that the series converges absolutely. Letting
t → 0 shows strong continuity, even continuity in the operator norm.
(Strictly speaking, strong continuity means that for each vector v ∈ Rd
lim ∣etA v − v∣ = 0.
t→0
Since
∣etA v − v∣ ⩽ ∥etA − id ∥ ⋅ ∣v∣
strong continuity is implied by uniform continuity. One can show that the generator
of a norm-continuous semigroup is already a bounded operator, see e.g. Pazy.)
(b) Let s, t > 0. Then
\[
e^{tA} - e^{sA} = \sum_{j=0}^\infty\Big(\frac{t^jA^j}{j!} - \frac{s^jA^j}{j!}\Big) = \sum_{j=1}^\infty \frac{(t^j - s^j)A^j}{j!}
\]
A similar calculation, pulling out A to the back, yields that the sum is also etA A.
(c) Assume first that AB = BA. Repeated applications of this rule show Aj B k = B k Aj
for all j, k ⩾ 0. Thus,
\[
e^{tA}e^{tB}
= \sum_{j=0}^\infty\sum_{k=0}^\infty \frac{t^jA^j}{j!}\,\frac{t^kB^k}{k!}
= \sum_{j=0}^\infty\sum_{k=0}^\infty \frac{t^jt^kA^jB^k}{j!\,k!}
= \sum_{k=0}^\infty\sum_{j=0}^\infty \frac{t^kt^jB^kA^j}{k!\,j!}
= e^{tB}e^{tA}.
\]
(d) We have
\[
e^{A/k} = \mathrm{id} + \tfrac1k A + \rho_k,\qquad k^2\rho_k = \sum_{j=2}^\infty \frac{A^j}{j!\,k^{j-2}}.
\]
Note that k²ρ_k is bounded. Do the same for B (with the remainder term ρ′_k) and multiply these expansions to get
\[
e^{A/k}e^{B/k} = \mathrm{id} + \tfrac1k A + \tfrac1k B + \sigma_k,
\]
and, for sufficiently large k,
\[
\big\|\tfrac1k A + \tfrac1k B + \sigma_k\big\| < 1,
\]
so that we may take logarithms:
\[
\log\big(e^{A/k}e^{B/k}\big) = \tfrac1k A + \tfrac1k B + \sigma_k + \sigma_k',\qquad
k\,\log\big(e^{A/k}e^{B/k}\big) = A + B + \tau_k.
\]
Observe that (with S_k := e^{(A+B)/k} and T_k := e^{A/k}e^{B/k})
\[
\|S_k - T_k\| = \Big\|\sum_{j=0}^\infty \frac{(A+B)^j}{k^j\,j!} - \sum_{j=0}^\infty\sum_{l=0}^\infty \frac{A^j}{k^j\,j!}\,\frac{B^l}{k^l\,l!}\Big\| \leqslant \frac{C}{k^2}
\]
with a constant C depending only on ∥A∥ and ∥B∥. This yields S_k^k − T_k^k → 0.
∎∎
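The statements of parts (a)–(d) are easy to test numerically; the following sketch (illustrative only, with randomly generated 3×3 matrices) checks the semigroup law and the Lie–Trotter approximation:

```python
# e^{(s+t)A} = e^{sA} e^{tA}  and  (e^{A/k} e^{B/k})^k -> e^{A+B} as k -> infinity.
import numpy as np
from scipy.linalg import expm
from numpy.linalg import matrix_power

rng = np.random.default_rng(9)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
s, t, k = 0.3, 0.7, 200

print(np.allclose(expm((s + t) * A), expm(s * A) @ expm(t * A)))   # semigroup law
trotter = matrix_power(expm(A / k) @ expm(B / k), k)
print(np.max(np.abs(trotter - expm(A + B))))                        # small, O(1/k)
```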
Problem 7.11. Solution: Add “D(B) ⊂ D(A)” to the statement of the problem.
Let 0 < s < t and assume throughout that h ∈ R is such that t − s − h > 0. We have
\[
\frac1h\big(P_{t-(s+h)}T_{s+h}u - P_{t-s}T_su\big)
= \big(P_{t-(s+h)} - P_{t-s}\big)\frac{T_{s+h}u - T_su}{h}
+ P_{t-s}\,\frac{T_{s+h}u - T_su}{h}
+ \frac{P_{t-(s+h)} - P_{t-s}}{h}\,T_su
= I + II + III.
\]
Lemma
(we use for the last asserting that Ts (D(B)) ⊂ D(B) ⊂ D(A)). Let us show that
7.10.a)
I → 0. We have
Ts+h u − Ts u
I = (Pt−(s+h) − Pt−s ) ( − Ts Bu) + (Pt−(s+h) − Pt−s )Ts Bu = I1 + I2 .
h
By the strong continuity of the semigroup (Pt )t , we see that I2 → 0 as h → 0.
Furthermore, by contractivity,
Ts+h u − Ts u Ts+h u − Ts u
∥I1 ∥ ⩽ (∥Pt−(s+h) ∥ + ∥Pt−s ∥) ⋅ ∥ − Ts Bu∥ ⩽ 2 ∥ − Ts Bu∥ → 0
h h
since u ∈ D(B).
Remark. Usually this identity is used if D(A) = D(B), for example we have used it
in this way in the proof of Corollary 7.11.c) on page 90. Another, typical application
is the situation where B−A is a bounded operator (hence, D(A) = D(B)). Integrating
the identity of part a) yields
t d t
Tt u − Pt u = ∫ (Pt−s Ts )u ds = ∫ Pt−s (B − A)Ts u ds
0 ds 0
which is often referred to as Duhamel’s formula. This formula holds first for u ∈ D(B)
and then, by extension of bounded linear operators defined on a dense set, for all u
in the closure D(B).
(b) In general, no. The problem is the semigroup property (unless Tt and Ps commute
for all s, t ⩾ 0):
Ut Us = Tt Pt Ts Ps ≠ Tt Ts Pt Ps = Tt+s Pt+s = Ut+s .
It is interesting to note (and helpful for the proof of (c)) that Ut is an operator on
C∞ :
Pt Tt
Ut ∶ C∞ ÐÐÐÐ→ C∞ ÐÐÐÐ→ C∞
∥Ut f − Us f ∥ = ∥Tt Pt f − Ts Pt f + Ts Pt f − Ts Ps f ∥
⩽ ∥(Tt − Ts )Pt f ∥ + ∥Ts (Pt − Ps )f ∥
⩽ ∥(Tt − Ts )Pt f ∥ + ∥(Pt − Ps )f ∥
We have ∥Ut,n f ∥ = ∥Tt/n Pt/n ⋯Tt/n Pt/n f ∥ ⩽ ∏nj=1 ∥Tt/n ∥∥Pt/n ∥∥f ∥ ⩽ ∥f ∥. So, by the
continuity of the norm
By iteration, we get
n−1
∥X n f − Y n f ∥ ⩽ ∑ ∥(X − Y )Y k f ∥.
k=0
Take Y = Tt/n Pt/n , X = Ts/n Ps/n where n is fixed. Then letting s → t shows the
strong continuity of each t ↦ Ut,n .
Semigroup property: Let s, t ∈ Q and write s = j/m and t = k/m for the same m.
Then we take n = l(j + k) and get
n l(j+k)
(T s+t P s+t ) = (T 1 P 1 )
n n lm lm
lj lk
= (T 1 P 1 ) (T 1 P 1 )
lm lm lm lm
lj lk
= (T j P j ) (T k P k )
ljm ljm lkm lkm
lj lk
= (T ljs P ljs ) (T t P t )
lk lk
For arbitrary s, t the semigroup property follows by approximation and the strong
continuity of Ut : let Q ∋ sn → s and Q ∋ tn → t. Then, by the contraction property,
and the last expression tends to 0. The limit limn Usn +tn u = Us+t u is obvious.
Generator: Let us begin with a heuristic argument (by ? and ?? indicate the steps
which are questionable!). By the chain rule
d d
∣ Ut g = ∣ lim(Tt/n Pt/n )n g
dt t=0 dt t=0 n
d
= lim ∣ (Tt/n Pt/n )n g
?
n dt t=0
n−1
= lim [n(Tt/n Pt/n ) (Tt/n n1 BPt/n + Tt/n n1 APt/n )g∣ ]
??
n t=0
= Bg + Ag.
So it is sensible to assume that D(A)∩D(B) is not empty. For the rigorous argument
we have to justify the steps marked by question marks.
Divide by h. Then the first term converges to 0 as h → 0, while the other two terms
tend to Ts Af and BPs f , respectively.
This theorem holds for functions with values in any Banach space and, there-
fore, we can apply it to the situation at hand: Fix g ∈ D(A) ∩ D(B); we know
that fn (t) ∶= Ut,n g converges (even locally uniformly) and, because of ?? , that
fn′ (t) = (Tt/n Pt/n )n−1 (Tt/n A + BPt/n )g.
Since limn (Tt/n Pt/n )n u converges locally uniformly, so does limn (Tt/n Pt/n )n−1 u; more-
over, by the strong continuity, Tt/n A + BPt/n → (A + B)g locally uniformly for
g ∈ D(A) ∩ D(B). Therefore, the assumptions of the theorem are satisfied and
we may interchange the limits in the calculation above.
Remark. It is surprisingly difficult to verify that A + B is the generator of the
semigroup Ut – even if one already knows that the Trotter Formula converges. The
obvious failure is that the canonical (pre-)domain of A + B, the set D(A) ∩ D(B) is
too small, i.e. not dense or even empty!. The assumption that D(B) ⊂ D(A) is a
strong, but still reasonable assumption. Alternatively one can require that A and B
commute.
The usual statements of Trotter’s formula, see e.g. the excellent monograph by En-
gel & Nagel [5, Chapter III.5], is such that one has a condition on D(A) ∩ D(B)
which also ensures that the limit defining Ut exists. In an L2 -context one can find a
counterexample on p. 229 of [5].
∎∎
Problem 7.12. Solution: The idea is to show that A = − 12 ∆ is closed when defined on C2∞ (R).
Since C2∞ (R) ⊂ D(A) and since (A, D(A)) is the smallest closed extension, we are done.
So let (un )n ⊂ C2∞ (R) be a sequence such that un → u uniformly and (Aun )n is a C∞
Cauchy sequence. Since C∞ (R) is complete, we can assume that u′′n → 2g uniformly for
some g ∈ C∞ (Rd ). The aim is to show that u ∈ C2∞ .
(a) By the fundamental theorem of differential and integral calculus we get
x x y
un (x) − un (0) − xu′n (0) = ∫ (u′n (y) − u′n (0)) dy = ∫ ∫ u′′n (z) dz.
0 0 0
Since un (x) → u(x) and un (0) → u(0), we conclude that u′n (0) → c converges.
(b) Recall the following theorem from calculus, e.g. Rudin [13, Theorem 7.17].
Let us determine the constant c′ ∶= limn u′n (0). Since u′n converges uniformly, the
limit as n → ∞ is in C∞ , and so we get
x
− lim u′n (0)) = lim lim (u′n (x) − u′n (0)) = lim ∫ 2g(z) dz
n→∞ x→−∞ n→∞ x→−∞ 0
i. e. c′ = ∫−∞ g(z) dz. We conclude that u′n (x) → ∫−∞ g(z) dt uniformly.
0 x
x y
(c) Again by the Theorem quoted in (b) we get un (x) − un (0) → ∫0 ∫−∞ 2g(z) dz uni-
0 y
formly, and with the same argument as in (b) we get un (0) = ∫−∞ ∫−∞ 2g(z) dz.
∎∎
Problem 7.13. Solution: By definition (for all α > 0 and, formally but justifiable via monotone convergence, also for α = 0)
\[
U_\alpha\mathbb{1}_C(x) = \int_0^\infty e^{-\alpha t}\,P_t\mathbb{1}_C(x)\,dt
= \int_0^\infty e^{-\alpha t}\,\mathbb{E}\,\mathbb{1}_C(B_t + x)\,dt
= \mathbb{E}\int_0^\infty e^{-\alpha t}\,\mathbb{1}_{C-x}(B_t)\,dt.
\]
This is the ‘discounted’ (with ‘interest rate’ α) total amount of time a Brownian motion
spends in the set C − x.
∎∎
Problem 7.14. Solution: Let u ∈ B_b(R^d) and α, β > 0. By the definition of the potential operator we get
\[
(\beta-\alpha)\,U_\alpha U_\beta u(x)
= (\beta-\alpha)\int_0^\infty\!\!\int_0^\infty e^{-s\alpha}e^{-t\beta}\,P_{s+t}u(x)\,ds\,dt
\overset{r=s+t}{=} (\beta-\alpha)\int_0^\infty\!\!\int_t^\infty e^{-(r-t)\alpha}e^{-t\beta}\,P_ru(x)\,dr\,dt
\]
\[
\overset{\text{Fubini}}{=} (\beta-\alpha)\int_0^\infty\!\!\int_0^r e^{-t(\beta-\alpha)}e^{-r\alpha}\,P_ru(x)\,dt\,dr
= \int_0^\infty\Big[-e^{-t(\beta-\alpha)}\Big]_{t=0}^{t=r} e^{-r\alpha}\,P_ru(x)\,dr
\]
\[
= \int_0^\infty\big(e^{-r\alpha} - e^{-r\beta}\big)\,P_ru(x)\,dr
= U_\alpha u(x) - U_\beta u(x).
\]
∎∎
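Remark (not part of the original solution): the resolvent identity (β − α) U_α U_β u = U_α u − U_β u can be checked numerically in a finite-state setting, where U_α = (α − Q)^{−1} for a generator matrix Q. The generator Q and the vector u below are arbitrary choices; a minimal Python sketch:

```python
import numpy as np

# Generator matrix of a 3-state Markov chain (off-diagonal >= 0, rows sum to 0);
# an arbitrary example.
Q = np.array([[-1.0, 0.7, 0.3],
              [ 0.2, -0.5, 0.3],
              [ 0.4, 0.6, -1.0]])
I = np.eye(3)

def U(alpha):
    """Resolvent / potential operator U_alpha = (alpha * Id - Q)^{-1}."""
    return np.linalg.inv(alpha * I - Q)

alpha, beta = 0.8, 2.5
u = np.array([1.0, -2.0, 0.5])          # a bounded 'function' on the state space

lhs = (beta - alpha) * U(alpha) @ U(beta) @ u
rhs = U(alpha) @ u - U(beta) @ u
print(np.allclose(lhs, rhs))            # True: the resolvent identity holds
```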
Problem 7.15. Solution: First formula: We use induction. The induction start with n = 0 is
clearly correct. Let us assume that the formula holds for some n and we do the induction
step n ↝ n + 1. We have for β ≠ α
(d^{n+1}/dα^{n+1}) U_α f(x) = lim_{β→α} [ (dⁿ/dαⁿ) U_α f(x) − (dⁿ/dβⁿ) U_β f(x) ] / (β − α)
  = lim_{β→α} [ n!(−1)ⁿ U_α^{n+1} f(x) − n!(−1)ⁿ U_β^{n+1} f(x) ] / (β − α)
Using the identity an+1 − bn+1 = (a − b) ∑nj=0 an−j bj we get, since the resolvents commute,
In the last line we used the resolvent identity. Now we can let β → α to get
ÐÐ(β → α)Ð→ −U_α U_α ∑_{j=0}^n U_α^{n−j} U_α^j f(x) = −(n + 1) U_α^{n+2} f(x).
∎∎
Problem 7.16. Solution: Using Dini’s Theorem (e. g.: Rudin, p. 150) we see that
for any compact set K ⊂ Rd . Fix > 0 and pick a compact set K = K ⊂ Rd such that
0 ⩽ fn (x) ⩽ f (x) ⩽ on Rd ∖ K. Then
lim sup ∣fn (x) − f (x)∣ ⩽ lim sup ∣fn (x) − f (x)∣ + lim sup ∣fn (x)∣ + sup ∣f (x)∣
n→∞ x n→∞ x∈K n→∞ x∉K x∉K
⩽ 3.
Remark: Positivity is, in fact, not needed. Here is the argument: Let f1 ⩽ f2 ⩽ . . . ⩽ fn ⩽ . . .
and fn ∈ C∞ (Rd ) (any sign is now allowed!) and set f = supn fn . Using Dini’s Theorem
(e. g.: Rudin, p. 150) we see that
for any compact set K ⊂ Rd . Fix > 0 and pick a compact set K = K ⊂ Rd such that
− ⩽ f1 (x) ⩽ fn (x) ⩽ f (x) ⩽ on Rd ∖ K. Then
lim sup ∣fn (x) − f (x)∣ ⩽ lim sup ∣fn (x) − f (x)∣ + lim sup(f (x) − fn (x))
n→∞ x n→∞ x∈K n→∞ x∉K
⩽ 3.
Alternative Solution (non-positive case): Apply the solution of the positive case to the
sequence gn ∶= fn − f1 . This is possible since 0 ⩽ gn ⩽ gn+1 and supn gn = f − f1 .
∎∎
Problem 7.17. Solution: Because of Lemma 7.24 c),d) it is enough to show that there are
positive functions u, v ∈ C∞ (Rd ) such that
sup_{α>0} U_α u, sup_{α>0} U_α v ∈ C_∞(R^d)  but  U₀(u − v)(x) = sup_{α>0} U_α u(x) − sup_{α>0} U_α v(x) ∉ C²(R^d).
for any u ∈ C+∞ (Rd ) with αd = π −d/2 Γ( d2 )/(d − 2). Since ∫∣y∣⩽1 ∣y∣2−d dy < ∞, we see with
dominated convergence that the function U0 w is in C∞ (Rd ) for all u ∈ C+∞ (Rd ) ∩ L1 (dy).
Pick any f ∈ Cc ([0, 1)) such that f (0) = 0. We denote by (x1 , . . . , xd ) points in Rd and
set r2 ∶= x21 + . . . + x2d . Then let
x2d
u(x1 , . . . , xd ) ∶= γ f (r), v(x1 , . . . , xd ) ∶= f (r)
r2
and
w(x1 , . . . , xd ) ∶= u(x1 , . . . , xd ) − v(x1 , . . . , xd ).
We will show that there is some f and a constant γ > 0 such that w ∈ D(U0 ) and
U0 w ∉ C2 (Rd ). The first assertion follows directly from Lemma 7.24 d). Introducing polar
coordinates
yd = r cos θd−2 ,
yd−1 = r cos θd−3 ⋅ sin θd−2 ,
yd−2 = r cos θd−4 ⋅ sin θd−3 ⋅ sin θd−2 ,
⋮
y2 = r cos φ sin θ1 ⋅ . . . ⋅ sin θd−2 ,
y1 = r sin φ sin θ1 ⋅ . . . ⋅ sin θd−2 ,
and using the integral formula for U0 u and U0 v, we get for xd ∈ (0, 1/2)
U0 w(0, . . . , 0, xd )
1 π (γ cos2 θd−2 − 1)(sin θd−2 )d−2 d−2 π
= αd [ ∫ rd−1 f (r)( ∫ √ d−2
dθ d−2 ) dr] ∏ (∫ (sin θj ) dθj ) .
j
0 0 0
r2 + x2d − 2xd r cos θd−2 j=1
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
=∶βd
Since γ cos²θ − 1 = −γ sin²θ − (1 − γ)
we conclude
∫₀^π ((γ cos²θ − 1)(sin θ)^{d−2}) / (r² + x² − 2xr cos θ)^{(d−2)/2} dθ
  = −γ ∫₀^π (sin θ)^d / (r² + x² − 2xr cos θ)^{(d−2)/2} dθ − (1 − γ) ∫₀^π (sin θ)^{d−2} / (r² + x² − 2xr cos θ)^{(d−2)/2} dθ
  =: −γ I₁(r, x) − (1 − γ) I₂(r, x).
The remaining part of the proof follows as in Example 7.25. Differentiating in x yields
(d/dx) U₀w(0, …, 0, x) = C_d ( −(d/x^{d+1}) ∫₀^x r^{d+1} f(r) dr + 2x ∫_x^1 (f(r)/r) dr ).
It is not hard to show that lim_{x→0+} (d/dx) U₀w(0, …, 0, x) = 0. Thus,
[ (d/dx) U₀w(0, …, 0, x) − (d/dx) U₀w(0, …, 0) ] / x = C_d ( −(d/x^{d+2}) ∫₀^x r^{d+1} f(r) dr + 2 ∫_x^1 (f(r)/r) dr ).
Applying l'Hôpital's rule we obtain
lim_{x→0+} [ (d/dx) U₀w(0, …, 0, x) − (d/dx) U₀w(0, …, 0) ] / x = C_d ( −(d/(d+2)) f(0) + 2 lim_{x→0+} ∫_x^1 (f(r)/r) dr ).
This means that the second derivative of U₀w at x = 0 in x_d-direction does not exist if
∫_{0+}^1 (f(r)/r) dr diverges. A canonical candidate is f(r) = |log r|^{−1} χ(r) with a suitable cut-off
function χ ∈ C_c^+([0, 1)) and χ|_{[0,1/2]} ≡ 1.
∎∎
(a) The process (t, Bt ) starts at (0, B0 ) = 0, and if we start at (s, x) we consider the
process (s + t, x + Bt ) = (s, x) + (t, Bt ). Let f ∈ Bb ([0, ∞) × R). Since the motion in
t is deterministic, we can use the probability space (Ω, A, P = P) generated by the
Brownian motion (Bt )t⩾0 . Then
=E lim f (σ + t, ξ + Bt )
(σ,ξ)→(s,x)
= E f (s + t, x + Bt )
= Tt f (s, x)
which shows that Tt preserves f ∈ Cb ([0, ∞) × R). In a similar way we see that
⩽ ε + 2∥f∥_∞ (1/δ²) E(B_t²) = ε + 2∥f∥_∞ t/δ².
Since the estimate is uniform in (s, x), this proves strong continuity.
Markov property: this is trivial.
(b) The transition semigroup is
Note that, in view of Theorem 7.22, pointwise convergence is enough (provided the
pointwise limit is a C∞ -function).
(c) We get for u ∈ C^{1,2}_∞ that under P^{(s,x)}
M_t^u := u(s + t, x + B_t) − u(s, x) − ∫₀^t (∂_r + ½∆_x) u(s + r, x + B_r) dr
is an Ft -martingale. This is the same assertion as in Theorem 5.6 (up to the choice
of u which is restricted here as we need it in the domain of the generator...).
∎∎
Problem 7.19. Solution: Let u ∈ D(A) and σ a stopping time with Ex σ < ∞. Use optional
stopping (Theorem A.18 in combination with remark A.21) to see that
M^u_{σ∧t} := u(X_{σ∧t}) − u(x) − ∫₀^{σ∧t} Au(X_r) dr
Since u, Au ∈ C_∞ we see
|E^x ( ∫₀^{σ∧t} Au(X_r) dr )| ⩽ E^x ( ∫₀^{σ∧t} ∥Au∥_∞ dr ) ⩽ ∥Au∥_∞ ⋅ E^x σ < ∞,
P(X_t ∈ F ∀t ∈ R⁺) ⩽ P(X_q ∈ F ∀q ∈ Q⁺).
X_q ∈ F ∀q ∈ Q⁺ ⟹ X_t = lim_{Q⁺∋q→t} X_q ∈ F ∀t ⩾ 0
∎∎
8 The PDE connection
group
Moreover,
(∂/∂t) u_ε(t, ⋅) = ½ ∆_x P_t P_ε f = P_ε ( ½ ∆_x P_t f ) ÐÐ→ ½ ∆_x P_t f uniformly as ε → 0
(note that ½ ∆_x P_t f ∈ C_∞).
Since both the sequence and the differentiated sequence converge uniformly, we can inter-
change differentiation and the limit, cf. [13, Theorem 7.17, p. 152], and we get
(∂/∂t) u(t, x) = lim_{ε→0} (∂/∂t) u_ε(t, x) = ½ ∆_x u(t, x)
and
u_ε(0, ⋅) = P_ε f ÐÐ→ f = u(0, ⋅) as ε → 0
and we get a solution for the initial value f . The proof of the uniqueness part in Lemma 8.1
stays valid.
∎∎
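Remark (not part of the original solution): one can check numerically that u(t, x) = P_t f(x) = E f(x + B_t) satisfies ∂_t u = ½ ∆_x u. The minimal Python sketch below evaluates P_t f by a Gaussian convolution on a grid and compares finite differences; the test function and the step sizes are arbitrary choices.

```python
import numpy as np

def heat_semigroup(f, t, x, n=20001, L=20.0):
    """P_t f(x) = E f(x + B_t), computed as a Gaussian convolution on a grid."""
    y = np.linspace(-L, L, n)
    dy = y[1] - y[0]
    kernel = np.exp(-y**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return np.sum(f(x + y) * kernel) * dy

f = lambda z: np.exp(-z**2) * np.cos(z)    # a smooth, bounded test function

t, x, h = 1.0, 0.3, 1e-3
du_dt  = (heat_semigroup(f, t + h, x) - heat_semigroup(f, t - h, x)) / (2 * h)
d2u_dx = (heat_semigroup(f, t, x + h) - 2 * heat_semigroup(f, t, x)
          + heat_semigroup(f, t, x - h)) / h**2

print(du_dt, 0.5 * d2u_dx)   # the two values agree up to the discretization error
```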
Problem 8.2. Solution: By differentiation we get
(d/dt) ∫₀^t f(B_s) ds = f(B_t), so that f(B_t) = 0.
We can assume that f is positive and bounded, otherwise we could consider f^±(B_t) ∧ c
for some constant c > 0. Now E f(B_t) = 0 and we conclude from this that f = 0.
∎∎
v″_{n,λ}(x) = 2(αχ_n(x) + λ) v_{n,λ}(x) − g_n(x),   (*)
and since the expression on the right has a limit as n → ∞, we get that lim_{n→∞} v″_{n,λ}(x)
exists.
and, again by dominated convergence, we conclude that lim_{n→∞} [v′_{n,λ}(x) − v′_{n,λ}(0)]
exists. In addition, the right-hand side is uniformly bounded (for all ∣x∣ ⩽ R):
| 2 ∫₀^x (αχ_n(y) + λ) v_{n,λ}(y) dy − ∫₀^x g_n(y) dy | ⩽ 2 ∫₀^R (α + λ) dy + ∫₀^R dy ⩽ 2(α + λ + 1) R.
Since the expression under the integral converges boundedly and since lim_{n→∞} v_{n,λ}(x)
exists, we conclude that lim_{n→∞} v′_{n,λ}(0) exists. Consequently, lim_{n→∞} v′_{n,λ}(x) exists.
v′_λ(x) = lim_{n→∞} v′_{n,λ}(x),
v″_λ(x) = lim_{n→∞} v″_{n,λ}(x).
∎∎
t
Problem 8.4. Solution: We have to show that v(t, x) ∶= ∫0 Ps g(x) ds is the unique solution
of the initial value problem (8.7) with g = g(x) satisfying ∣v(t, x)∣ ⩽ C t.
Existence: The linear growth bound is obvious from ∣Ps g(x)∣ ⩽ ∥Ps g∥∞ ⩽ ∥g∥∞ < ∞. The
rest follows from the hint if we take A = ½∆ and Lemma 7.10.
∞
Uniqueness: We proceed as in the proof of Lemma 8.1. Set vλ (x) ∶= ∫0 e−λt v(t, x) dt.
This integral is, for λ > 0, convergent and it is the Laplace transform of v(⋅, x). Under the
Laplace transform the initial value problem (8.7) with g = g(x) becomes
and this problem has a unique solution, cf. Proposition 7.13 f). Since the Laplace trans-
form is invertible, we see that v is unique.
∎∎
with two integration constants c, d ∈ R. The boundary conditions u(0) = a and u(1) = b
show that
d = a and c = b − a
so that
u(x) = (b − a)x + a.
On the other hand, by Corollary 5.11 (Wald’s identities), Brownian motion started in
x ∈ (0, 1) has the probability to exit (at the exit time τ ) the interval (0, 1) in the following
way:
Px (Bτ = 1) = x and Px (Bτ = 0) = 1 − x.
Therefore, if f ∶ {0, 1} → R is a function on the boundary of the interval (0, 1) such that
f(0) = a and f(1) = b, then
This means that u(x) = Ex f (Bτ ), a result which we will see later in Section 8.4 in much
greater generality.
∎∎
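Remark (not part of the original solution): a quick Monte Carlo check of P^x(B_τ = 1) = x. The sketch below uses a crude Euler discretization of Brownian motion; step size and sample count are arbitrary choices, so a small discretization bias is to be expected.

```python
import numpy as np

rng = np.random.default_rng(0)

def exit_through_one(x, n_paths=5000, dt=1e-3):
    """Fraction of discretized Brownian paths started in x that leave (0, 1) at 1."""
    hits = 0
    sqrt_dt = np.sqrt(dt)
    for _ in range(n_paths):
        b = x
        while 0.0 < b < 1.0:
            b += sqrt_dt * rng.standard_normal()
        hits += (b >= 1.0)
    return hits / n_paths

for x in (0.25, 0.5, 0.8):
    print(x, exit_through_one(x))   # estimates should be close to x
```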
Problem 8.6. Solution: The key is to show that all points in the open and bounded, hence
relatively compact, set D are non-absorbing. Thus the closure of D has a neighbourhood,
say V ⊃ D̄ such that E τDc ⩽ E τV c . Let us show that E τV c < ∞.
Since D is bounded, there is some R > 0 such that B(0, R) ⊃ D̄. Pick some test function
χ = χR such that χ∣Bc (0,R) ≡ 0 and χ ∈ Cc∞ (Rd ). Pick further some function u ∈ C2 (Rd )
such that ∆u > 0 in B(0, 2R). Here are two possibilities to get such a function:
d
u(x) = ∣x∣2 = ∑ x2j Ô⇒ 1
2 ∆u(x) = 1
j=1
and
x1 x1 y1
U (x) = U (x1 ) ∶= ∫ F (y1 ) dy1 = ∫ ∫ f (z1 ) dz1 .
0 0 0
Clearly, 1
2 ∆U (x) = 1
2 ∂x21 U (x1 ) = f (x1 ), and we can arrange things by picking the correct
f.
Problem: neither u nor U will be in D(∆) (unless you are so lucky as in the proof of
Lemma 8.8 to pick instantly the right function).
The rest of the proof follows now either as in Lemma 7.33 or Lemma 8.8 (both employ,
anyway, the same argument based on Dynkin’s formula).
∎∎
Problem 8.7. Solution: We are following the hint. Let L = ∑dj,k=1 ajk (x) ∂j ∂k + ∑dj=1 bj (x) ∂j .
Then
If ∣x∣ < R and χ∣B(0,R) = 1, then L(uχ)(x) = Lu(x). Set u(x) = e−x1 /γr . Then only the
2 2
This shows that the drift b1 (x) can make the expression in the bracket negative!
Let us modify the Ansatz. Observe that for f (x) = f (x1 ) we have
This means that ∂12 f /∂1 f > b0 /a0 seems to be natural and a reasonable Ansatz would be
f(x) = ∫₀^{x₁} e^{2b₀y/a₀} dy.
Then
∂₁f(x) = e^{2b₀x₁/a₀}  and  ∂₁²f(x) = (2b₀/a₀) e^{2b₀x₁/a₀}
and we get
Lf(x) = a₁₁(x) (2b₀/a₀) e^{2b₀x₁/a₀} − b₁(x) e^{2b₀x₁/a₀}
  ⩾ a₀ (2b₀/a₀) e^{2b₀x₁/a₀} − b₀ e^{2b₀x₁/a₀}
  ⩾ (2b₀ − b₀) e^{2b₀x₁/a₀} > 0.
∎∎
Problem 8.8. Solution: Assume that B0 = 0. Any other starting point can be reduced to this
situation by shifting Brownian motion to B0 = 0. The LIL shows that a Brownian motion
satisfies
−1 = lim inf_{t→0} B(t)/√(2t log log(1/t)) < lim sup_{t→0} B(t)/√(2t log log(1/t)) = 1
i. e. B(t) oscillates for t → 0 between the curves ±√(2t log log(1/t)). Since a Brownian motion
has continuous sample paths, this means that it has to cross the level 0 infinitely often.
∎∎
Problem 8.9. Solution: The idea is to proceed as in Example 8.12 e) where Zaremba’s needle
plays the role a truncated flat cone in dimension d = 2 (but in dimension d ⩾ 3 it has
too small dimension). The set-up is as follows: without loss of generality we take x0 = 0
(otherwise we shift Brownian motion) and we assume that the cone lies in the hyperplane
{x ∈ Rd ∶ x1 = 0} (otherwise we rotate things).
Let B(t) = (b(t), β(t)), t ⩾ 0, be a BMd where b(t) is a BM1 and β(t) is a (d − 1)-
dimensional Brownian motion. Since B is a BMd , we know that the coordinate processes
b = (b(t))t⩾0 and β = (β(t))t⩾0 are independent processes. Set σn = inf{t > 1/n ∶ b(t) = 0}.
Since 0 ∈ R is regular for {0} ⊂ R, see Example 8.12 e), we get that limn→∞ σn = τ{0} = 0
almost surely with respect to P0 . Since β á b, the random variable β(σn ) is rotationally
symmetric (see, e.g., the solution to Problem 8.10).
Let C be a flat (i. e. in the hyperplane {x ∈ Rd ∶ x1 = 0}) cone such that some truncation
C ′ of it lies in Dc . By rotational symmetry, we get
P⁰(β(σ_n) ∈ C) = γ = (opening angle of C) / (full angle).
P⁰(β(σ_n) ∈ C′) = γ.
∎∎
Problem 8.10. Solution: Proving that the random variable β(σn ) is absolutely continuous
with respect to Lebesgue measure is relatively easy: note that, because of the independence
of b and β, hence σn and β,
−(d/dx) P⁰(β(σ_n) ⩾ x) = −(d/dx) ∫_ℝ P⁰(β_t ⩾ x) P(σ_n ∈ dt)
  = ∫_ℝ ( −(d/dx) P⁰(β_t ⩾ x) ) P(σ_n ∈ dt)
  = ∫_ℝ (1/√(2πt)) e^{−x²/(2t)} P(σ_n ∈ dt)
  = ∫_{1/n}^∞ (1/√(2πt)) e^{−x²/(2t)} P(σ_n ∈ dt).
(observe, for the last equality, that σn takes values in [1/n, ∞).) Since the integrand
is bounded (even as t → 0), the interchange of integration and differentiation is clearly
satisfied.
(here x ∈ Rd−1 ).
It is a bit more difficult to work out the exact shape of the density. Let us first determine
the distribution of σn . Clearly,
y=b(1/n)
∞
=4∫ P0 (b(t − 1/n) < y) P0 (b(1/n) ∈ dy)
0
2 1 ∞ y
√ ∫ ∫ e−z /2(t−1/n) dz e−ny /2 dy
2 2
= √
π t− 1 1 0 0
n n
√
change of variables: ζ = z/ t − 1
n
√ √
2 n ∞ 1
y/ t− n
e−ζ /2 dζ e−ny /2 dy.
2 2
= ∫ ∫
π 0 0
Now we proceed with the d-dimensional case. We have for all x ∈ Rd−1
∞ 1
e−∣x∣ /(2t) P(σn ∈ dt) dx
2
β(σn ) ∼ ∫ (d−1)/2
1/n (2πt)
1 ∞ 1
e−∣x∣ /(2t) dt
2
= (d+1)/2 (d−1)/2 ∫ √
π 2 1/n t(d+1)/2 nt − 1
∎∎
9 The variation of Brownian paths
Problem 9.1. Solution: Let ε > 0 and Π = {t₀ = 0 < t₁ < … < t_m = 1} be any partition of
[0, 1]. As a continuous function on a compact space, f is uniformly continuous, i. e. there
exists δ > 0 such that |f(x) − f(y)| < ε/(2m) for all x, y ∈ [0, 1] with |x − y| < δ. Pick n₀ ∈ N so
that |Π_n| < δ′ := δ ∧ (|Π|/2) for all n ⩾ n₀.
Now, the balls B(t_j, δ′) for 0 ⩽ j ⩽ m are disjoint as δ′ ⩽ |Π|/2.
Πn0 for 0 ⩽ j ⩽ m are also disjoint, and non-empty as ∣Πn0 ∣ < δ ′ . In particular, there exists
a subpartition Π′ = {q0 = 0 < q1 < . . . < qm = 1} of Πn0 such that ∣tj − qj ∣ < δ ′ ⩽ δ for all
0 ⩽ j ⩽ m. This implies
| ∑_{j=1}^m |f(t_j) − f(t_{j−1})| − ∑_{j=1}^m |f(q_j) − f(q_{j−1})| | ⩽ ∑_{j=1}^m | |f(t_j) − f(t_{j−1})| − |f(q_j) − f(q_{j−1})| |
  ⩽ ∑_{j=1}^m ( |f(t_j) − f(q_j)| + |f(t_{j−1}) − f(q_{j−1})| )
  ⩽ 2 ∑_{j=0}^m |f(t_j) − f(q_j)|
  ⩽ ε.
Because adding points to a partition increases the corresponding variation sum, we have
S₁^Π(f, 1) ⩽ S₁^{Π′}(f, 1) + ε ⩽ S₁^{Π_{n₀}}(f, 1) + ε ⩽ lim_{n→∞} S₁^{Π_n}(f, 1) + ε ⩽ VAR₁(f, 1) + ε
Problem 9.2. Solution: Note that the problem is straightforward if ∥x∥ stands for the maxi-
mum norm: ∥x∥ = max1⩽j⩽d ∣xj ∣.
Remember that all norms on Rd are equivalent. One quick way of showing this is the
following: Denote by ej with j ∈ {1, . . . , d} the usual basis of Rd . Then
for every x = ∑dj=1 xj ej in Rd using the triangle inequality and the positive homogeneity
of norms. In particular, x ↦ ∥x∥ is a continuous mapping from Rd equipped with the
supremum-norm ∥ ⋅ ∥∞ to R, since
holds for every x, y in Rd . Hence, the extreme value theorem claims that x ↦ ∥x∥ attains
its minimum on the compact set {x ∈ Rd ∶ ∥x∥∞ = 1}. Finally, this implies A ∶= min{∥x∥ ∶
∥x∥∞ = 1} > 0 and hence
x
∥x∥ = ∥ ∥ ⋅ ∥x∥∞ ⩾ A ⋅ ∥x∥∞
∥x∥∞
for every x ≠ 0 in Rd as required.
∎∎
Problem 9.3. Solution: Let p > 0, ε > 0 and Π = {t₀ = 0 < t₁ < … < t_n = 1} a partition
of [0, 1]. Since f is continuous and the rational numbers are dense in R, there exist
0 < q₁ < … < q_{n−1} < 1 such that q_j is rational and |f(t_j) − f(q_j)| < n^{−1/p} ε^{1/p} for every
1 ⩽ j ⩽ n − 1. In particular, Π′ = {q₀ = 0 < q₁ < … < q_n = 1} is a rational partition of [0, 1]
such that ∑_{j=0}^n |f(t_j) − f(q_j)|^p ⩽ ε.
For p > 1, on the other hand, and x, y ∈ R such that ∣x∣ < ∣y∣ we find
||y|^p − |x|^p| = ∫_{|x|}^{|y|} p t^{p−1} dt ⩽ p · (|x| ∨ |y|)^{p−1} · (|y| − |x|) ⩽ p · (|x| ∨ |y|)^{p−1} · |y − x|
and hence
Let p > 0 and ε > 0. For every partition Π = {t₀ = 0 < t₁ < … < t_n = 1} there exists a
rational partition Π′ = {q₀ = 0 < q₁ < … < q_n = 1} such that ∑_{j=0}^n |f(t_j) − f(q_j)|^{1∧p} ⩽ ε and
hence
| ∑_{j=1}^n |f(t_j) − f(t_{j−1})|^p − ∑_{j=1}^n |f(q_j) − f(q_{j−1})|^p |
  ⩽ ∑_{j=1}^n | |f(t_j) − f(t_{j−1})|^p − |f(q_j) − f(q_{j−1})|^p |
  ⩽(*) max{1, p · 2^{p−1} · ∥f∥_∞^{p−1}} · ∑_{j=1}^n |f(t_j) − f(q_j) + f(t_{j−1}) − f(q_{j−1})|^{1∧p}
  ⩽(**) C · ∑_{j=0}^n |f(t_j) − f(q_j)|^{1∧p}
  ⩽ C · ε
VAR_p^Q(f; 1) := sup{ ∑_{q_{j−1}, q_j ∈ Π′} |f(q_j) − f(q_{j−1})|^p ∶ Π′ finite, rational partition of [0, 1] }
and hence the desired result as ε tends to zero.
Alternative Approach: Note that (ξ1 , . . . , ξn ) ↦ ∑nj=1 ∣f (ξj ) − f (ξj−1 )∣p is a continuous
map since it is the finite sum and composition of continuous maps, and that the rational
numbers are dense in R.
∎∎
Let ε > 0 and Π = {t₀ = 0 < t₁ < … < t_n = t} a partition of [0, t]. Set s_j = t_j for 1 ⩽ j ⩽ n − 1
and note that ξ ↦ |f(ξ₀) − f(ξ)|^p is a continuous map for every ξ₀ ∈ [0, t] since it is the
composition of continuous maps. Hence we can pick s₀ ∈ (t₀, t₁) and s_n ∈ (t_{n−1}, t_n) with
| |f(s₁) − f(t₀)|^p − |f(s₁) − f(s₀)|^p | < ε/2
| |f(t_n) − f(t_{n−1})|^p − |f(s_n) − f(t_{n−1})|^p | < ε/2
and so that 0 < s0 < s1 < . . . < sn < t. This implies
∑_{j=1}^n |f(t_j) − f(t_{j−1})|^p = |f(s₁) − f(t₀)|^p + ∑_{j=2}^{n−1} |f(s_j) − f(s_{j−1})|^p + |f(t_n) − f(s_{n−1})|^p
  ⩽ ε/2 + ∑_{j=1}^n |f(s_j) − f(s_{j−1})|^p + ε/2
  ⩽ ε + VAR°_p(f; t)
and thus VAR_p(f; t) ⩽ ε + VAR°_p(f; t) since the partition Π = {t₀ = 0 < t₁ < … < t_n = t} was
arbitrarily chosen. Consequently, VAR_p(f; t) ⩽ VAR°_p(f; t) as ε tends to zero, as required.
The same argument shows that varp (f ; t) does not change its value (if it exists).
∎∎
=1
=2⋅ 1
n
Γ( 2 )
n
and we get
∑_{k=1}^n (B(k/n) − B((k−1)/n))² ∼ (2^{−n/2} n / Γ(n/2)) · (ns)^{n/2−1} e^{−ns/2} · 1_{[0,∞)}(s).
Here is the calculation: (in case you do not know this standard result...): If X ∼
N(0, 1) and x > 0, we have
P(X² ⩽ x) = P(|X| ⩽ √x) = (1/√(2π)) ∫_{−√x}^{√x} exp(−t²/2) dt
  = (2/√(2π)) ∫₀^{√x} exp(−t²/2) dt
  = (1/√(2π)) ∫₀^x exp(−s/2) · s^{−1/2} ds
using the change of variable s = t². Hence, X² has density
f_{X²}(s) = 1_{(0,∞)}(s) · (1/√(2π)) · exp(−s/2) · s^{−1/2}.
Let X1 , X2 , . . . be independent and identically distributed random variables with
X1 ∼ N(0, 1). We want to prove by induction that for n ⩾ 1
s
fX 2 +...+X 2 (s) = Cn ⋅ 1(0,∞) (s) ⋅ exp (− ) ⋅ sn/2−1
1 n 2
with some normalizing constants Cn > 0. Assume that this is true for 1, . . . , n. Since
2
Xn+1 is independent of X12 + . . . + Xn2 and distributed like X12 , we know that the
density of the sum is a convolution. This leads to
f_{X₁²+⋯+X_{n+1}²}(s) = ∫_{−∞}^∞ f_{X₁²+⋯+X_n²}(t) · f_{X_{n+1}²}(s − t) dt
  = C_n · C₁ · ∫₀^s exp(−t/2) · t^{n/2−1} · exp(−(s−t)/2) · (s−t)^{−1/2} dt
  = C_n · C₁ · exp(−s/2) · ∫₀^s t^{n/2−1} · (s−t)^{−1/2} dt
  = C_n · C₁ · exp(−s/2) · s^{n/2−1} · s^{−1/2} · ∫₀^s (t/s)^{n/2−1} · (1 − t/s)^{−1/2} dt
  = C_n · C₁ · exp(−s/2) · s^{(n+1)/2−1} · ∫₀^1 x^{n/2−1} · (1 − x)^{−1/2} dx
  = C_{n+1} · exp(−s/2) · s^{(n+1)/2−1}
using the change of variable x = t/s. Since probability distribution functions integrate
to one, we find
1 = C_n · ∫₀^∞ exp(−s/2) · s^{n/2−1} ds = C_n · 2^{n/2} ∫₀^∞ exp(−t) · t^{n/2−1} dt = C_n · 2^{n/2} · Γ(n/2)
and thus
f_{X₁²+⋯+X_n²}(s) = (2^{n/2} · Γ(n/2))^{−1} · 1_{(0,∞)}(s) · e^{−s/2} · s^{n/2−1}
using the change of variable x2 = (1 − 2ξ)y 2 . Since the moment generating function
ξ ↦ (1−2ξ)−1/2 has a unique analytic extension to an open strip around the imaginary
axis, the characteristic function is of the form
E(e^{iξX²}) = (1 − 2iξ)^{−1/2}.
and hence
(d) We have shown in a) that E ((Yn − 1)2 ) = V(Yn ) = 2/n which tends to zero as n → ∞.
∎∎
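Remark (not part of the original solution): the L²-convergence of Y_n = ∑_{k=1}^n (B(k/n) − B((k−1)/n))² to 1 is easy to see in a simulation, since E Y_n = 1 and V Y_n = 2/n. A minimal Python sketch (sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def quadratic_variation(n, n_paths=5000):
    """Y_n = sum_k (B(k/n) - B((k-1)/n))^2 for n_paths independent Brownian paths."""
    increments = rng.standard_normal((n_paths, n)) * np.sqrt(1.0 / n)
    return (increments**2).sum(axis=1)

for n in (10, 100, 1000):
    y = quadratic_variation(n)
    print(n, y.mean(), y.var())    # mean close to 1, variance close to 2/n
```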
(a)
√ ∞ y −y2 /2 ∞
1 ∞ 1
e−y
2 /2
dy = ⋅ [ − e−y /2 ] = ⋅ e−x /2
2 2
2π ⋅ P(Z > x) = ∫ dy > ∫
⋅e
x x x x x x
1 e−x /2
2
y x x
1 ∞ √
= x2 ⋅ ([− ⋅ e−y /2 ] − 2π ⋅ P(Z > x))
2
y x
√
Ô⇒ (1 + x ) ⋅ 2π ⋅ P(Z > x) ⩾ x ⋅ e−x /2
2 2
1 xe−x /2
2
P(lim sup_{n→∞} ⋃_{k=1}^{2ⁿ} A_{k,n}) = 1 − P(lim inf_{n→∞} ⋂_{k=1}^{2ⁿ} A_{k,n}^c)
  ⩾ 1 − lim inf_{n→∞} P(⋂_{k=1}^{2ⁿ} A_{k,n}^c)
  = 1 − lim inf_{n→∞} ∏_{k=1}^{2ⁿ} P(A_{k,n}^c)
and hence it suffices to prove lim inf_{n→∞} ∏_{k=1}^{2ⁿ} P(A_{k,n}^c) = 0.
and a) implies
2ⁿ · P(A_{1,n}) = 2ⁿ · P(√(2^{−n}) · |Z| > c√(n 2^{−n}))
  = 2^{n+1} · P(Z > c√n)
  ⩾ (2^{n+1}/√(2π)) · (c√n/(c²n + 1)) · e^{−c²n/2}.
Now, (c²n)/(c²n + 1) → 1 as n → ∞ and thus there exists some n₀ ∈ N such that
c²n/(c²n + 1) ⩾ 1/2  ⟺  (c√n)/(c²n + 1) ⩾ 1/(2c√n)
for n ⩾ n₀. Since ln(2) − c²/2 > 0 if, and only if, c < √(2 log 2), we have 2ⁿ · P(A_{1,n}) → ∞
and thus lim inf_{n→∞} ∏_{k=1}^{2ⁿ} P(A_{k,n}^c) = 0 if c < √(2 log 2).
(c) With c < √(2 log 2) we deduce
1 = P(lim sup_{n→∞} ⋃_{k=1}^{2ⁿ} A_{k,n})
∎∎
Since λ < λ₀ < 1/2 there is some ε > 0 such that λ < λ₀ < λ₀ + ε < 1/2. Thus,
|(X² − 1)² e^{λ(X²−1)}| ⩽ 2(X⁴ + 1) e^{−X²} · e^{(λ₀+ε)X²}.
∎∎
= E (eiξX ) E (eiη1F ).
∎∎
10 Regularity of Brownian paths
P(N_{t+h} − N_t = k) = P(N_h = k) = ((λh)^k / k!) e^{−λh}.
This shows that we have for any α > 0
E(|N_{t+h} − N_t|^α) = ∑_{k=0}^∞ k^α ((λh)^k / k!) e^{−λh}
  = λh e^{−λh} + ∑_{k=2}^∞ k^α ((λh)^k / k!) e^{−λh}
  = λh e^{−λh} + λh ∑_{k=2}^∞ k^α ((λh)^{k−1} / k!) e^{−λh}
  = λh e^{−λh} + o(h)
and, thus,
lim_{h→0} E(|N_{t+h} − N_t|^α) / h = λ
which means that (10.1) cannot hold for any α > 0 and β > 0.
(b) Part a) shows also E (∣Nt+h − Nt ∣α ) ⩽ c h, i. e. condition (10.1) holds for α > 0 and
β = 0.
The fact that β = 0 is needed for the convergence of the dyadic series (with the power
γ < β/α) in the proof of Theorem 10.1.
(c) We have
E(N_t) = ∑_{k=0}^∞ k (t^k/k!) e^{−t} = ∑_{k=1}^∞ k (t^k/k!) e^{−t} = t ∑_{k=1}^∞ (t^{k−1}/(k−1)!) e^{−t} = t ∑_{j=0}^∞ (t^j/j!) e^{−t} = t
E(N_t²) = ∑_{k=0}^∞ k² (t^k/k!) e^{−t} = ∑_{k=1}^∞ k² (t^k/k!) e^{−t} = t ∑_{k=1}^∞ k (t^{k−1}/(k−1)!) e^{−t}
  = t ∑_{k=1}^∞ (k−1) (t^{k−1}/(k−1)!) e^{−t} + t ∑_{k=1}^∞ (t^{k−1}/(k−1)!) e^{−t}
  = t² ∑_{k=2}^∞ (t^{k−2}/(k−2)!) e^{−t} + t ∑_{k=1}^∞ (t^{k−1}/(k−1)!) e^{−t} = t² + t
E(N_t − t) = E N_t − t = 0
and, finally, if s ⩽ t
Alternative Solution: One can show, as for a Brownian motion (Example 5.2 a)), that
Nt is a martingale for the canonical filtration FtN = σ(Ns ∶ s ⩽ t). The proof only
uses stationary and independent increments. Thus, by the tower property, pull out
and the martingale property,
∎∎
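Remark (not part of the original solution): the moment formulas E N_t = t, E N_t² = t² + t and the covariance of the compensated process (here for λ = 1) can be verified by simulation. A minimal Python sketch using independent Poisson increments; the parameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

lam, s, t, n_paths = 1.0, 1.5, 2.0, 200_000

# Build (N_s, N_t) from independent increments: N_t = N_s + (N_t - N_s).
N_s = rng.poisson(lam * s, size=n_paths)
N_t = N_s + rng.poisson(lam * (t - s), size=n_paths)

print(N_t.mean(), lam * t)                          # E N_t = t        (lambda = 1)
print((N_t**2).mean(), (lam * t)**2 + lam * t)      # E N_t^2 = t^2 + t
print(((N_s - s) * (N_t - t)).mean(), min(s, t))    # E[(N_s - s)(N_t - t)] = s ∧ t
```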
p
Since max1⩽j⩽n ∣xj ∣p = ( max1⩽j⩽n ∣xj ∣) the claim follows (actually with n1/p which is
smaller than n....)
∎∎
In the proof of Theorem 10.1 (page 154, line 1 from above and onwards) we get:
The rest of the proof continues literally as on page 154, line 10 onwards.
Alternative Solution: use the subadditivity of Z ↦ E(∣Z∣α ) directly in the second part of
the calculation, replacing ∥Z∥Lα by E(∣Z∣α ).
∎∎
Theorem. Let (Bt )t⩾0 be a BM1 . Then t ↦ Bt (ω) is for almost all ω ∈ Ω nowhere Hölder
continuous of any order α > 1/2.
It is not clear if the set An,α is measurable. We will show that Ω ∖ An,α ⊂ Nn,α for a
measurable null set Nn,α .
Assume that the function f is α-Hölder continuous of order α at the point t0 ∈ [0, n].
Then
∃ δ > 0 ∃ L > 0 ∀ t ∈ B(t0 , δ) ∶ ∣f (t) − f (t0 )∣ ⩽ L ∣t − t0 ∣α .
Since [0, n] is compact, we can use a covering argument to get a uniform Hölder constant.
Consider for sufficiently large values of k ⩾ 1 the grid { kj ∶ j = 1, . . . , nk}. Then there
exists a smallest index j = j(k) such that for ν ⩾ 3 and, actually, 1 − να + ν/2 < 0
j j j+ν
t0 ⩽ and ,..., ∈ B(t0 , δ).
k k k
For i = j + 1, j + 2, . . . , j + ν we get therefore
|f(i/k) − f((i−1)/k)| ⩽ |f(i/k) − f(t₀)| + |f(t₀) − f((i−1)/k)|
  ⩽ L ( |i/k − t₀|^α + |(i−1)/k − t₀|^α )
  ⩽ L ( (ν+1)^α/k^α + ν^α/k^α ) ⩽ 2L(ν+1)^α / k^α.
we have
∞ ∞
Ω ∖ An,α ⊂ ⋃ ⋃ Cm
L,ν,α
.
L=1 m=1
This proves that a Brownian path is almost surely nowhere Hölder continuous of a
fixed order α > 1/2. Call the set where this holds Ωα . Then Ω0 ∶= ⋂Q∋α>1/2 Ωα is a set
with P(Ω0 ) = 1 and for all ω ∈ Ω0 we know that BM is nowhere Hölder continuous of any
order α > 1/2.
The last conclusion uses the following simple remark. Let 0 < α < q < ∞. Then we have
for f ∶ [0, n] → R and x, y ∈ [0, n] with ∣x − y∣ < 1 that
∎∎
Problem 10.5. Solution: Fix ε > 0, fix a set Ω₀ ⊂ Ω with P(Ω₀) = 1 and h₀ = h₀(2, ω) such
that (10.6) holds for all ω ∈ Ω₀, i. e. for all h ⩽ h₀ we have
sup_{0⩽t⩽1−h} |B(t + h, ω) − B(t, ω)| ⩽ 2√(2h log(1/h)).
Pick a partition Π = {t₀ = 0 < t₁ < … < t_n} of [0, 1] with mesh size h = max_j(t_j − t_{j−1}) ⩽ h₀
and assume that h₀/2 ⩽ h ⩽ h₀. Then we get
∑_{j=1}^n |B(t_j, ω) − B(t_{j−1}, ω)|^{2+2ε} ⩽ 2^{2+2ε} · 2^{1+ε} ∑_{j=1}^n ((t_j − t_{j−1}) log(1/(t_j − t_{j−1})))^{1+ε}
  ⩽ c ∑_{j=1}^n (t_j − t_{j−1}) = c.
Since we have |x − y|^p ⩽ 2^{p−1}(|x − z|^p + |z − y|^p) and since we can refine any partition Π of
[0, 1] in finitely many steps to a partition of mesh < h₀, we get
VAR_{2+2ε}(B; 1) = sup_{Π⊂[0,1]} ∑_{j=1}^n |B(t_j, ω) − B(t_{j−1}, ω)|^{2+2ε} < ∞
for all ω ∈ Ω₀.
∎∎
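Remark (not part of the original solution): the behaviour of p-variation sums along refining partitions is easy to see in a simulation: for p = 1 the sums blow up, for p = 2 they stay near T = 1, and for p > 2 they tend to zero, consistent with VAR_{2+2ε}(B; 1) < ∞. A minimal Python sketch (the grid sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def p_variation_sum(p, n):
    """sum_j |B(j/n) - B((j-1)/n)|^p along the uniform partition of [0, 1]."""
    increments = rng.standard_normal(n) * np.sqrt(1.0 / n)
    return np.sum(np.abs(increments)**p)

for n in (10**2, 10**4, 10**6):
    print(n, [round(p_variation_sum(p, n), 3) for p in (1.0, 2.0, 2.5)])
# p = 1: grows like sqrt(n); p = 2: stays near 1; p = 2.5: tends to 0.
```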
11 Brownian motion as a random fractal
Problem 11.1. Solution: The idea is to show that Hδs for every δ > 0 is an outer measure.
This solves the problem, since these properties are retained by taking the supremum over
δ > 0.
Let ε > 0 and suppose that (E^k)_{k∈N} is a sequence of subsets of R^d. Due to the definition
of H_δ^s, there exists a δ-cover (E_j^k)_{j∈N} for every k ∈ N such that
∑_{j∈N} |E_j^k|^s ⩽ H_δ^s(E^k) + ε 2^{−k}
holds. Since the double sequence (Ejk )j,k∈N is obviously a δ-cover of ⋃k∈N E k , we find that
∎∎
Problem 11.2. Solution: Let U be open. Then U contains an open ball B ⊂ U and B contains
a cube Q ⊂ B ⊂ U . On the other hand, since U is bounded, it is contained in a large cube
Q′ ⊃ U . Since Hausdorff measure is monotone, we have Hd (Q) ⩽ Hd (U ) ⩽ Hd (Q′ ) and it
is, thus, enough to show the claim for cubes.
The following argument is easily adapted to a general cube. Assume that Q = [0, 1]d
and cover Q by nd non-overlapping cubes which are shifted copies of [0, 1/n]d . Clearly, if
n > 1/δ,
H_δ^d(Q) ⩽ ∑_{j=1}^{n^d} |[0, 1/n]^d|^d = n^d (√d n^{−1})^d = (√d)^d.
This shows that H^d(Q) ⩽ (√d)^d < ∞.
For the lower bound we take any δ-cover (Ej )j⩾1 of Q. For each j there is a closed cube
Cj such that Ej ⊂ Cj and the lengths of the edges of Cj are less or equal to 2∣Ej ∣ ⩽ 2δ. If
λd is Lebesgue measure, we get
This gives
∑_{j=1}^∞ |E_j|^d ⩾ 2^{−d} > 0  ⟹  H^d(Q) ⩾ 2^{−d}.
∎∎
Problem 11.3. Solution: It is enough to show the following two assertions: Let 0 ⩽ α < β < ∞
and E ⊂ R . Then
d
∎∎
Problem 11.4. Solution: Since Ej ⊂ E, we have dim Ej ⩽ dim E and supj⩾1 dim Ej ⩽ dim E.
Conversely, if α < dim E, then Hα (E) = ∞ and by the σ-subadditivity of the outer measure
Hα , we get Hα (Ej0 ) > 0 for at least one index j0 . Thus, α ⩽ dim Ej0 ⩽ supj⩾1 dim Ej . This
proves dim E ⩽ supj⩾1 dim Ej . (Indeed, if we had dim E > supj⩾1 dim Ej , we could find
some λ such that dim E > λ > supj⩾1 dim Ej contradicting our previous calculation.)
∎∎
Problem 11.5. Solution: It is possible to show that dim(E × F ) ⩾ dim(E) + dim(F ) holds for
arbitrary E ⊂ Rd and F ⊂ Rn , cf. [6, Theorem 5.12]. Unfortunately, the opposite direction
only holds under certain restriction on the sets E and F , cf. for example [7, Corollary 7.4].
In fact, one can show that there exist Borel sets E, F ⊂ R with dim(E) = dim(F ) = 0 and
dim(E × F ) ⩾ 1, cf. [6, Theorem 5.11].
We are going to prove the other direction (that does not hold in general) for this special
case: Let t > dim(E) and δ > 0. According to the Definition 11.5, there exists a δ-cover
√
(Ej )j∈N of E ⊂ Rd with ∑j∈N ∣Ej ∣t ⩽ δ. Let m ∈ N so that n/m ⩽ δ, and (Fk )k be a
disjoint tessellation of [0, 1)n by mn -many cubes with side-length 1/m. Now, (Ej × Fk )j,k
is a δ 2 -cover of E × [0, 1)n and hence
mn mn
2 (E
Hδt+n × [0, 1) ) ⩽ ∑ ∑ ∣Ej × Fk ∣
n t+n
⩽ ∑ ∑ ∣Ej ∣t nn/2 m−n ⩽ nn/2 δ
k=1 j∈N k=1 j∈N
holds. In particular, Ht+n (E × [0, 1)n ) = 0 as δ → 0 and thus dim(E × [0, 1)n ) ⩽ t + n. Since
Rn can be represented as countable union of cubes with unit side-length, Problem 11.4
tells us that we also have dim(E × Rn ) = dim(E × [0, 1)n ) ⩽ t + n. This proves that
dim(E × Rn ) ⩽ dim(E) + n, as required.
∎∎
Problem 11.6. Solution: Remark 11.6.3 says that dim f (E) ⩽ dim E holds for a Lipschitz
map f ∶ Rd → Rn . Therefore, we also have dim E = dim f −1 (f (E)) ⩽ dim f (E) for a
bi-Lipschitz map f and hence the desired result.
Moreover, Remark 11.6.3 tells us that dim f (E) ⩽ γ −1 dim E holds for a Hölder continuous
map f ∶ Rd → Rn with index γ ∈ (0, 1]. Note that this inequality can be strict, e.g. take
f ≡ 0 and any E ⊂ Rd with dim E > 0.
Note that there is no bi-Lipschitz f ∶ Rd → Rn that is also Hölder continuous with index
γ ∈ (0, 1): Suppose f had these properties, then there would exist a constant C > 0 such
that
holds for all x, y ∈ Rd . This leads to a contradiction to the boundedness of C > 0. Hence,
there is no bi-Lipschitz map that is also Hölder continuous with index γ ∈ (0, 1).
∎∎
Problem 11.7. Solution: Let C0 ∶= [0, 1]. It is easy to see that Cn = f1 (Cn−1 ) ∪ f2 (Cn−1 ) for
n ∈ N and C ∶= ⋂n∈N Cn models the recursive definition of Cantor’s discontinuum in the
description of the problem. Now, note that
f₁( ∑_{j=1}^∞ t_j 3^{−j} ) = ∑_{j=1}^∞ t_j 3^{−(j+1)} = 0 · 3^{−1} + ∑_{j=2}^∞ t_{j−1} 3^{−j}
f₂( ∑_{j=1}^∞ t_j 3^{−j} ) = ∑_{j=1}^∞ t_j 3^{−(j+1)} + 2/3 = 2 · 3^{−1} + ∑_{j=2}^∞ t_{j−1} 3^{−j}
∎∎
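Remark (not part of the original solution): the self-similar structure C_n = f₁(C_{n−1}) ∪ f₂(C_{n−1}) also makes the dimension log 2/log 3 of Cantor's discontinuum visible numerically. The following minimal Python sketch counts boxes of side 3^{−k} met by the n-th pre-fractal C_n; the depth n and the scales k are arbitrary choices.

```python
import numpy as np

def cantor_left_endpoints(n):
    """Left endpoints of the 2^n intervals of length 3^{-n} that make up C_n."""
    pts = [0.0]
    for _ in range(n):
        pts = [p / 3 for p in pts] + [p / 3 + 2 / 3 for p in pts]   # images under f1, f2
    return np.array(sorted(pts))

n = 12
left = cantor_left_endpoints(n)
for k in (4, 6, 8, 10):
    eps = 3.0**(-k)
    boxes = len(np.unique(np.floor(left / eps)))   # boxes of side eps meeting C_n
    print(k, np.log(boxes) / np.log(1.0 / eps))    # approx log 2 / log 3 = 0.6309...
```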
Problem 11.8. Solution: Denote by σ_d = 2π^{d/2}/Γ(d/2) the surface volume of the (d − 1)-
dimensional unit sphere in R^d. Using polar coordinates, we find
∫_{R^d} |x|^{−λ} (2π)^{−d/2} e^{−|x|²/2} dx = σ_d (2π)^{−d/2} ∫₀^∞ r^{d−λ−1} e^{−r²/2} dr
  = σ_d (2π)^{−d/2} ∫₀^∞ (2u)^{(d−λ−2)/2} e^{−u} du
  = σ_d (2π)^{−d/2} 2^{(d−λ−2)/2} Γ((d−λ)/2)
  = Γ((d−λ)/2) / (2^{λ/2} Γ(d/2)).
∎∎
Problem 11.10. Solution: (F. Hausdorff) We show that a perfect set contains a Cantor-type
set. Since Cantor sets are uncountable, we are done.
Pick a1 , a2 ∈ F and disjoint closed balls Fj , j = 1, 2 with centre aj . Now take open balls such
that Uj ⊂ Fj . Since Ūj ∩ F , j = 1, 2, are again perfect sets, we can repeat this construction,
i. e. pick aj1 , aj2 ∈ Uj ∩ F and disjoint closed balls Fjk ⊂ Uj with centre ajk and open balls
Ujk ⊂ Fjk , k = 1, 2. Each of the four sets A ∩ Ūjk , j, k = 1, 2, is perfect. Again we find
points ajk1 , ajk2 ∈ Ujk etc. Without loss of generality we can arrange things such that the
diameters of the balls Fj , Fjk , Fjkl , . . . are smaller than 1, 12 , 13 , . . .. This construction yields
a discontinuum set D ⊂ F : Any x ∈ D which is contained in Fj , Fjk , Fjkl , . . . is the limit of
the centres aj , ajk , ajkl , . . . It is now obvious how to make a correspondence between the
points ajkl... ∈ F and the Cantor ternary set.
∎∎
f_{ξ_t}(s) = 1 / (π√(s(t−s))),  0 < s < t.
f_{(|B_t|, ξ_t)}(y, s) = (y / (π√(s(t−s)³))) e^{−y²/(2(t−s))},  0 < s < t, y > 0.
(c) Write
p_t(x, y) = f_{B_t}(x − y) = (1/√(2πt)) e^{−(x−y)²/(2t)}
for the law of B_t and set
As a function of t, this is the density of τ_{x−y}, see (6.13). Then the identity reads
(after cancelling the factor 2)
p_t(0, y) = ∫₀^t p_s(0, 0) g_{t−s}(0, y) ds = ∫₀^t g_{t−s}(0, y) p_s(y, y) ds.
The first identity is a “last exit decomposition” of the density pt (0, y) while the last
identity is a “first entrance decomposition”.
∎∎
and so
(∂/∂u)(∂/∂s) (1 − (2/π) arccos√(s/u)) = (∂/∂u) ( (1/π) (1/(√s √(u−s))) ) = −1 / (2π√(s(u−s)³)).
This is (up to the minus sign) just the density from Corollary 11.26. From Lemma 11.23
we know, however, that for s < t < u
P(ξ_t ⩽ s, η_t ⩾ u) = P(B_• has no zero in (s, u)) = 1 − (2/π) arccos√(s/u).
∎∎
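Remark (not part of the original solution): the arcsine density of ξ_t can be checked by simulating discretized Brownian paths and recording the last sign change before t, since the density f_{ξ_t} gives P(ξ_t ⩽ s) = (2/π) arcsin√(s/t). A minimal Python sketch; grid and sample sizes are arbitrary choices, so a small discretization bias remains.

```python
import numpy as np

rng = np.random.default_rng(4)

t, n_steps, n_paths = 1.0, 1000, 10000
dt = t / n_steps

increments = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
paths = np.cumsum(increments, axis=1)
crossed = paths[:, :-1] * paths[:, 1:] <= 0.0        # sign change in (k*dt, (k+1)*dt]
last_zero = np.array([dt * (np.nonzero(row)[0].max() + 1) if row.any() else 0.0
                      for row in crossed])

for s in (0.1, 0.25, 0.5, 0.75):
    empirical = (last_zero <= s).mean()
    arcsine = 2.0 / np.pi * np.arcsin(np.sqrt(s / t))    # P(xi_t <= s)
    print(s, round(empirical, 3), round(arcsine, 3))
```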
Problem 11.13. Solution: Notation: We write fX for the density of the random variable X.
Using Corollary 11.26 we have, with some obvious changes in the integration variables,
E φ(L−t , Lt ) = E φ(t − ξt , ηt − ξt )
∎∎
12 The growth of Brownian paths
Problem 12.1. Solution: Fix C > 2 and define A_n := {M_n > C√(n log n)}. By the reflection
principle we find
P(A_n) = P(sup_{s⩽n} B_s > C√(n log n))
  = 2 P(B_n > C√(n log n))
  = 2 P(√n B₁ > C√(n log n))      (scaling)
  = 2 P(B₁ > C√(log n))
  ⩽ (2/√(2π)) (1/(C√(log n))) exp(−C² log n / 2)      (12.1)
  = (2/√(2π)) (1/(C√(log n))) · (1/n^{C²/2}).
Since C²/2 > 2, the series ∑_n P(A_n) converges and, by the Borel–Cantelli lemma we see
that
∃ Ω_C ⊂ Ω, P(Ω_C) = 1, ∀ω ∈ Ω_C ∃ n₀(ω) ∀n ⩾ n₀(ω) ∶ M_n(ω) ⩽ C√(n log n).
Remark: We can get the exceptional set in a uniform way: On the set Ω₀ := ⋂_{Q∋C>2} Ω_C
we have P(Ω₀) = 1 and
∀ω ∈ Ω₀ ∶ lim sup_{n→∞} M_n / √(n log n) ⩽ 2.
∎∎
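Remark (not part of the original solution): the reflection-principle step P(sup_{s⩽n} B_s > x) = 2 P(B_n > x) used above can be verified by simulation. A minimal Python sketch; time horizon, level, grid and sample size are arbitrary choices, and the discrete maximum slightly underestimates the true supremum.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

T, x, n_steps, n_paths = 4.0, 3.0, 2000, 20000
dt = T / n_steps

pos = np.zeros(n_paths)
running_max = np.zeros(n_paths)
for _ in range(n_steps):
    pos += sqrt(dt) * rng.standard_normal(n_paths)
    np.maximum(running_max, pos, out=running_max)

empirical = (running_max > x).mean()
exact = 1.0 - erf(x / sqrt(2.0 * T))     # 2 P(B_T > x) = 2 (1 - Phi(x / sqrt(T)))
print(round(empirical, 4), round(exact, 4))
```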
Problem 12.2. Solution: One should assume that ξ > 0. Since y ↦ exp(ξy) is monotone
increasing, we see
P( sup_{s⩽t} (B_s − ½ξs) > x ) = P( e^{sup_{s⩽t}(ξB_s − ½ξ²s)} > e^{ξx} )
  ⩽ e^{−xξ} E e^{ξB_t − ½ξ²t} = e^{−xξ}      (Doob, (A.13))
(Remark: we have shown (A.13) only for supD∋s⩽t Msξ where D is a dense subset of [0, ∞).
Since s ↦ Msξ has continuous paths, it is easy to see that supD∋s⩽t Msξ = sups⩽t Msξ almost
surely.)
Usage in step 1o of the Proof of Theorem 12.1: With the notation of the proof we set
t = qⁿ and ξ = q^{−n}(1 + ε)√(2qⁿ log log qⁿ) and x = ½√(2qⁿ log log qⁿ).
Since sup_{s⩽t}(B_s − ½ξs) ⩾ sup_{s⩽t} B_s − ½ξt, the above inequality becomes
Problem 12.3. Solution: Actually, the hint is not needed, the present proof can be adapted in
an easier way. We perform the following changes at the beginning of page 166: Since every
√
t > 1 is in some interval of the form [q n−1 , q n ] and since the function Λ(t) = 2t log log t
is increasing for t > 3, we find for all t ⩾ q n−1 > 3
√ n
∣B(t)∣ sups⩽qn ∣B(s)∣ 2q log log q n
√ ⩽√ n √ .
2t log log t 2q log log q n 2q n−1 log log q n−1
Therefore
∣B(t)∣ √
lim √ ⩽ (1 + ) q a.s.
2t log log t
t→∞
Remark: The interesting paper by Dupuis [4] shows LILs for processes (Xt )t⩾0 with sta-
tionary and independent increments. It is shown there that the important ingredient are
estimates of the type P(Xt > x). Thus, if we know that P(Xt > x) ≍ P ( sups⩽t Xs > x),
we get a LIL for Xt if, and only if, we have a LIL for sups⩽t Xs .
∎∎
∎∎
Problem 12.5. Solution: Denote by W (t) = tB(1/t) and W (0) = 0 the projective reflection
of (Bt )t⩾0 . This is again a BM1 . Thus
Set K(s) ∶= sκ(1/s). In order to apply Kolmogorov’s test we need (always s → 0, t = 1/s →
∞, ↑=increasing, ↓=decreasing) that
and
K(s)/√s ↓  ⟺  √s · κ(1/s) ↓  ⟺  κ(t)/√t ↑ .
∎∎
(b) Let b ⩾ 1 and assume, to the contrary, that E τ < ∞. Then we can use the second
Wald identity, cf. Theorem 5.10, and get
E (τ ∧ n) = E B 2 (τ ∧ n) ⩽ E(b2 (a + τ ∧ n)).
∎∎
13 Strassen’s functional law of the iterated
logarithm
holds for all sufficiently large n and every t ∈ [0, 1]. This, however, contradicts
(1 − ε)√(2t_k log log(1/t_k)) ⩽ B(t_k) ⩽ (1 + ε)√(2t_k log log(1/t_k)),    (**)
for a sequence t_k = t_k(ω) → 0, k → ∞, cf. Corollary 12.2.
Indeed: fix some n, then the right side of (*) is in contradiction with the left side of (**).
Remark: Note that
∫₀^1 w′(s)² ds = ¼ ∫₀^1 ds/s = +∞.
∎∎
∎∎
Problem 13.3. Solution: Since u is absolutely continuous (w.r.t. Lebesgue measure), for
almost all t ∈ [0, 1], the derivative u′ (t) exists almost everywhere.
Let t be a point where u′ exists and let (Πn )n⩾1 be a sequence of partitions of [0, 1] such
(n)
that ∣Πn ∣ → 0 as n → ∞. We denote the points in Πn by tk . Clearly, there exists a
(n) (n) (n) (n) (n) (n)
sequence (tjn )n⩾1 such that tjn ∈ Πn and tjn −1 ⩽ t ⩽ tjn for all n ∈ N and tjn − tjn −1 → 0
as n → ∞. We obtain
⎡ (n) ⎤2
⎢ 1 tjn ⎥
fn (t) = ⎢ (n) (n) ∫ (n) u (s) ds⎥⎥
⎢ ′
⎢t − t tjn −1 ⎥
⎣ jn jn −1 ⎦
(n) (n)
to simplify notation, we set tj ∶= tjn and tj−1 ∶= tjn −1 , then
2
1
=[ ⋅ (u(tj ) − u(tj−1 ))]
tj − tj−1
2
1
=[ ⋅ (u(tj ) − u(t) + u(t) − u(tj−1 ))]
tj − tj−1
⎡ ⎤2
⎢ tj − t u(tj ) − u(t) t − tj−1 u(t) − u(tj−1 ) ⎥⎥
= ⎢⎢ ⋅ + ⋅ ⎥
⎢ tj − tj−1 tj − t tj − tj−1 t − tj−1 ⎥
⎣ ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶ ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶ ⎦
→ u′ (t) → u′ (t)
ÐÐÐ→ [u′ (t)] .
2
n→∞
∎∎
Problem 13.4. Solution: We use the notation of Chapter 4: Ω = C(o) [0, 1], w = ω, A =
B(C(o) [0, 1]), P = µ, B(t, ω) = Bt (ω) = w(t), t ∈ [0, ∞).
Linearity of Gφ is clear. Let Πn , n ⩾ 1, be a sequence of partitions of [0, 1] such that
limn→∞ ∣Πn ∣ = 0,
(n) (n) (n) (n)
Πn = {sk ∶ 0 = s0 < s1 < . . . < sln = 1} ;
(n) (n) (n) (n)
by s̃k , k = 1, . . . , ln we denote arbitrary intermediate points, i. e. sk−1 ⩽ s̃k ⩽ sk for all
k. Then we have
1
Gφ (ω) = φ(1)B1 (ω) − ∫ Bs (ω) dφ(s)
0
ln
(n) (n)
= φ(1)B1 (ω) − lim ∑ Bs̃(n) (ω)(φ(sk ) − φ(sk−1 )).
∣Πn ∣→0 k=1 k
Write
ln
(n) (n)
Gφn ∶=φ(1)B1 − ∑ Bs̃(n) (φ(sk ) − φ(sk−1 ))
k
k=1
ln
(n) (n)
= ∑ (B1 − Bs̃(n) )(φ(sk ) − φ(sk−1 )) + B1 φ(0).
k
k=1
Then Gφ (ω) = limn→∞ Gφn (ω) for all ω ∈ Ω. Moreover, the elementary identity
l l−1
∑ ak (bk − bk−1 ) = ∑ (ak − ak+1 )bk + al bl − a1 b0
k=1 k=1
implies
ln −1
(n)
Gφn = ∑ (Bs̃(n) − Bs̃(n) )φ(sk ) + (B1 − Bs̃(n) )φ(1) − (B1 − Bs̃(n) )φ(0) + B1 φ(0)
k+1 k ln 1
k=1
ln
(n)
= ∑ (Bs̃(n) − Bs̃(n) )φ(sk ) + Bs̃(n) φ(0),
k+1 k 1
k=0
(n) (n)
where s̃ln +1 ∶= 1, s̃0 ∶= 0.
(a) Gφn is a Gaussian random variable with mean E Gφn = 0 and variance
ln
(n)
V Gφn = ∑ φ2 (sk ) V(Bs̃(n) − Bs̃(n) ) + φ2 (0) V Bs̃(n)
k+1 k 1
k=0
ln
(n) (n) (n) (n)
= ∑ φ2 (sk )(s̃k+1 − s̃k ) + φ2 (0)s̃1
k=0
1
ÐÐÐ→ ∫ φ2 (s) ds.
n→∞ 0
This and limn→∞ Gφn = Gφ (P-a.s.) imply that Gφ is a Gaussian random variable
1
with E Gφ = 0 and V Gφ = ∫0 φ2 (s) ds.
(b) Without loss of generality we use for φ and ψ the same sequence of partitions.
Clearly, Gφn ⋅ Gψ
n → G ⋅ G for n → ∞ (P-a.s.) Using the elementary inequality
φ ψ
2ab ⩽ a2 + b2 and the fact that for a Gaussian random variable E(G4 ) = 3(E(G2 ))2 ,
we get
1
E ((Gφn Gψ
n) ) ⩽
2
n ) )]
[ E ((Gφn )4 ) + E ((Gψ 4
2
3 2 2 2
= [( E(Gφn )2 ) + ( E(Gψ n) ) ]
2
3 1 2 1 2
⩽ [( ∫ φ2 (s) ds) + ( ∫ ψ 2 (s) ds) ] + (n ⩾ n ).
2 0 0
This implies
n) Ð
E(Gφn Gψ ÐÐ→ E(Gφ Gψ ).
n→∞
Moreover,
ln ln
(n) (n)
n ) = E [( ∑ (Bs̃(n) − Bs̃(n) )φ(sk )) ⋅ ( ∑ (Bs̃(n) − Bs̃(n) )ψ(sj ))]
E(Gφn Gψ
k+1 k j=0 j+1 j
k=0
ln
(n)
+ φ(0)ψ(0) E(B 2(n) ) + φ(0) E [Bs̃(n) ∑ (Bs̃(n) − Bs̃(n) )ψ(sj )]
s̃1 1 j+1 j
j=0
ln
(n)
+ ψ(0) E [Bs̃(n) ∑ (Bs̃(n) − Bs̃(n) )ψ(sk )]
1 k+1 k
k=0
ln
(n) (n)
= ∑ E ((Bs̃(n) − Bs̃(n) )2 ) φ(sk )ψ(sk ) + ⋯
k+1 k
k=0
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶
(n) (n)
=s̃k+1 −s̃k
1
ÐÐÐ→ ∫ φ(s)ψ(s) ds.
n→∞ 0
This proves
1
E (Gφ Gψ ) = ∫ φ(s)ψ(s) ds.
0
E [(Gφn − Gψ
n ) ] = E [(Gn ) ] − 2 E [Gn Gn ] + E [(Gn ) ]
2 φ 2 φ ψ ψ 2
1 1 1
=∫ φ2n (s) ds − 2 ∫ φn (s)ψn (s) ds + ∫ ψn2 (s) ds
0 0 0
1
=∫ (φn (s) − ψn (s))2 ds.
0
This and φn → φ in L2 imply that (Gφn )n⩾1 is a Cauchy sequence in L2 (Ω, A, P).
Consequently, the limit X = limn→∞ Gφn exists in L2 . Moreover, as φn → φ in L2 , we
1 1
also obtain that ∫0 φ2n (s) ds → ∫0 φ2 (s) ds.
1
Since Gφn is a Gaussian random variable with mean 0 and variance ∫0 φ2n (s) ds, we
1
see that Gφ is Gaussian with mean 0 and variance ∫0 φ2 (s) ds.
Finally, we have φn → φ and ψn → ψ in L2 ([0, 1]) implying
Thus,
1
E(Gφ Gψ ) = ∫ φ(s)ψ(s) ds.
0
∎∎
H1
Moreover, by construction un ÐÐÐ→ W .
n→∞
h φ
∥h∥H1 = ⟨ , h⟩ ⩽ sup ⟨ , h⟩ .
∥h∥H1 H1 φ∈H1 ∥φ∥H1 H1
h φn φ
∥h∥H1 = ⟨ , h⟩ = lim ⟨ , h⟩ ⩽ sup ⟨ , h⟩ .
∥h∥H1 H1
n→∞ ∥φn ∥ 1
H H1 φ∈H○1 ∥φ∥H1 H1
4. Assume that there is some (φn )n⩾1 ⊂ H○1 with ∥φn ∥H1 = 1 and ⟨φn , h⟩H1 ⩾ 2n. Then
Conversely, assume that for every sequence (φn )n⩾1 ⊂ H○1 with ∥φn ∥H1 = 1 we have
⟨φn , h⟩H1 ⩽ C. (Think! Why this is the proper negation of the condition in the
problem?) Since the supremum can be realized by a sequence, we get for a suitable
sequence of φn ’s
Remark. An alternative, and more elementary argument for part (d) can be based
on step functions and Lemmas 13.2 and 13.3.
∎∎
Problem 13.6. Solution: The vectors (X, Y ) in a) – d) are a.s. limits of two-dimensional
Gaussian distributions. Therefore, they are also Gaussian. Their mean is clearly 0. The
general density of a two-dimensional Gaussian law (with mean zero) is given by
1 1 x2 y 2 2ρ xy
f (x, y) = √ exp {− ( + − )} .
2πσ1 σ2 1 − ρ2 2(1 − ρ2 ) σ12 σ22 σ1 σ2
In order to solve the problems we have to determine the variances σ12 = V X, σ22 = V Y
E XY
and the correlation coefficient ρ = . We will use the results of Problem 13.4.
σ1 σ2
t 1 1 5 1
(a) σ12 = V (∫ s2 dw(s)) = ∫ 1[1/2,t] (s)s4 ds = (t − ),
1/2 0 5 32
σ22 = V w(1/2) = 1/2 (= V B1/2 cf. canonical model),
t 1
E (∫ s2 dw(s) ⋅ w(1/2)) = ∫ 1[1/2,t] (s)s2 ⋅ 1[0,1/2] (s) ds = 0
1/2 0
Ô⇒ ρ = 0.
1 5 1
(b) σ12 = (t − )
5 32
σ22 = V w(u + 1/2) = u + 1/2
t
E (∫ s2 dw(s) ⋅ w(u + 1/2))
1/2
1
=∫ 1[1/2,t] (s)s2 ⋅ 1[0,u+1/2] (s) ds
0
(1/2+u)∧t
=∫ s2 ds
1/2
3
1 1 1
= ((( + u) ∧ t) − ) .
3 2 8
3
1
3 ((( 12 + u) ∧ t) − 81 )
Ô⇒ ρ = 1/2
.
[ 15 (t5 − ) (u + 21 )]
32 ⋅
1
1 5 1
t
(c) σ12 = V (∫ s2 dw(s)) =
(t − ),
1/2 5 32
t 1 1
σ22 = V (∫ s dw(s)) = (t3 − )
1/2 3 8
t t t 1 4 1
E (∫ s2 dw(s) ⋅ ∫ s dw(s)) = ∫ s3 ds = (t − )
1/2 1/2 1/2 4 16
1
(t4 − 1
)
Ô⇒ ρ = 4 16
1/2
.
[ 15 (t5 − ) 1
32 ⋅ 3
1
(t3 − 81 )]
1 1 1
(d) σ12 = V (∫ es dw(s)) = ∫ e2s ds = (e2 − e),
1/2 1/2 2
σ22 = V(w(1) − w(1/2)) = 1/2,
1 1
E (∫ es dw(s) ⋅ (w(1) − w(1/2))) = ∫ es ⋅ 1 ds = e − e1/2 .
1/2 1/2
e − e1/2
Ô⇒ ρ = 1/2
.
( 14 (e2 − e))
∎∎
Problem 13.7. Solution: Let wn ∈ F , n ⩾ 1, and wn → v in C(o) [0, 1]. We have to show that
v ∈ F.
Now:
wn ∈ F Ô⇒ ∃(cn , rn ) ∈ [q −1 , 1] × [0, 1] ∶ ∣wn (cn rn ) − wn (rn )∣ ⩾ 1.
Observe that the function (c, r) ↦ w(cr) − w(r) with (c, r) ∈ [q −1 , 1] × [0, 1] is continuous
for every w ∈ C(o) [0, 1].
Since [q −1 , 1] × [0, 1] is compact, there exists a subsequence (nk )k⩾1 such that cnk → c̃ and
rnk → r̃ as k → ∞ and (c̃, r̃) ∈ [q −1 , 1] × [0, 1].
Finally,
∣v(c̃r̃) − v(r̃)∣ = lim ∣wnk (cnk rnk ) − wnk (rnk )∣ ⩾ 1,
k→∞
and v ∈ F follows.
∎∎
√
Problem 13.8. Solution: Set L(t) = 2t log log t, t ⩾ e and sn = q n , n ∈ N, q > 1. Then:
∣B(sn−1 )∣ ⎛ B(sn−1 ) 1 ⎞
P( > )=P ∣ √ ∣ ⋅√ >
L(sn ) 4 ⎝ sn−1 2q log log sn 4 ⎠
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
∼N (0,1)
√
= P (∣B(1)∣ > 2q log log q n )
4
if q is sufficiently large.
B(⋅ sn )
(c) for the third inequality: Brownian scaling √
sn
∼ B(⋅) yields
⎛ ∣B(tsn )∣ ⎞ ⎛ ∣B(t)∣ ⎞
P sup √ > = P sup √ >
⎝0⩽t⩽q−1 2sn log log sn 4 ⎠ ⎝0⩽t⩽q−1 2 log log sn 4 ⎠
⎛ √ ⎞
=P sup ∣B(t)∣ > 2 log log sn
⎝0⩽t⩽q−1 4 ⎠
(*) √
⩽ 2 P (∣B(1/q)∣ > 2 log log sn )
4
⎛ ∣B(1/q)∣ √ ⎞
= 2P √ > 2q log log q n
⎝ 1/q 4 ⎠
C
⩽
n2
for all q sufficiently large. In the estimate marked with (*) we used
P ( sup ∣B(t)∣ > x) ⩽ 2 P ( sup B(t) > x) = 2 P(M (t0 ) > x) = 2 P (∣B(t0 )∣ > x).
Thm.
⎛ ∣B(sn−1 )∣ ∣B(tsn )∣ 3 ⎞
P + sup ∣w(t)∣ + sup >
⎝ L(sn ) t⩽q −1 0⩽t⩽q −1 L(sn ) 4⎠
⎛ ∣B(sn−1 )∣ ∣B(tsn )∣ ⎞
⩽P > or sup ∣w(t)∣ > or sup >
⎝ L(sn ) 4 t⩽q −1 4 0⩽t⩽q −1 L(sn ) 4⎠
∣B(sn−1 )∣ ⎛ ⎞ ⎛ ∣B(tsn )∣ ⎞
⩽ P( > ) + P sup ∣w(t)∣ > + P sup >
L(sn ) 4 ⎝t⩽q−1 4⎠ ⎝0⩽t⩽q−1 L(sn ) 4⎠
C C
⩽ 2
+0+ 2
n n
for all sufficiently large q. Using the Borel–Cantelli lemma we see that
⎛ ∣B(sn−1 )∣ ∣B(tsn )∣ ⎞ 3
lim + sup ∣w(t)∣ + sup ⩽ .
n→∞ ⎝ L(sn ) t⩽q −1 0⩽t⩽q −1 L(sn ) ⎠ 4
∎∎
14 Skorokhod representation
P ({Bt − Bs ∈ C} ∩ F ∩ {U ∈ A} ∩ {V ∈ A′ })
= P ({Bt − Bs ∈ C} ∩ F ) ⋅ P ({U ∈ A} ∩ {V ∈ A′ }) (since U, V á F∞
B
)
= P ({Bt − Bs ∈ C}) ⋅ P (F ) ⋅ P ({U ∈ A} ∩ {V ∈ A′ }) (since Bt − Bs á F∞
B
)
= P ({Bt − Bs ∈ C}) ⋅ P (F ∩ {U ∈ A} ∩ {V ∈ A′ }) (since U, V á F∞
B
)
and this shows that Bt −Bs is independent of the family Es = {F ∩G ∶ F ∈ FsB , G ∈ σ(U, V )}.
This family is stable under finite intersections, so Bt − Bs á σ(Es ) = Fs .
∎∎
15 Stochastic integrals: L2–theory
M 2 − ⟨M ⟩ and N 2 − ⟨N ⟩
(M + N )2 − ⟨M + N ⟩ and (M − N )2 − ⟨M − N ⟩
(M + N )2 − (M − N )2 = 4M N and ⟨M + N ⟩ − ⟨M − N ⟩ = 4⟨M, N ⟩
def
∎∎
Then assume that we have any two representations for a simple process
Then
f = ∑ φj−1 1[sj−1 ,sj ) 1[0,T ) = ∑ φj−1 1[sj−1 ,sj ) 1[tk−1 ,tk )
j j,k
and, similarly,
f = ∑ ψk−1 1[sj−1 ,sj ) 1[tk−1 ,tk ) .
k,j
∎∎
• Triangle inequality:
1
2
∥M + N ∥M2 = (E [sup ∣Ms + Ns ∣ ]) 2
T
s⩽T
1
2
⩽ (E [(sup ∣Ms ∣ + sup ∣Ns ∣) ]) 2
s⩽T s⩽T
1 1
2 2
⩽ (E [sup ∣Ms ∣ ]) + (E [sup ∣Ns ∣ ])
2 2
s⩽T s⩽T
where we used in the first estimate the subadditivity of the supremum and in the
second inequality the Minkowski inequality (triangle inequality) in L2 .
• Positive homogeneity
1 1
2 2
∥λM ∥M2 = (E [sup ∣λMs ∣ ]) = ∣λ∣ (E [sup ∣Ms ∣ ]) = ∣λ∣ ⋅ ∥M ∥M2 .
2 2
T T
s⩽T s⩽T
• Definiteness
∥M ∥M2 = 0 ⇐⇒ sup ∣Ms ∣2 = 0 (almost surely).
T
s⩽T
∎∎
E(|f_n ● B_T − g_n ● B_T|²) = E(|(f_n − g_n) ● B_T|²)
  = E( ∫₀^T |f_n(s) − g_n(s)|² ds )
∎∎
Problem 15.5. Solution: Solution 1: Let τ be a stopping time and consider the sequence of
discrete stopping times
⌊2m τ ⌋ + 1
τm ∶= ∧ T.
2m
Let t0 = 0 < t1 < t2 < . . . < tn = T and, without loss of generality, τm (Ω) ⊂ {t0 , . . . , tn }.
Then (Bt2j − tj )j is again a discrete martingale and by optional stopping we get that
(Bτ2m ∧tj − τm ∧ tj )j is a discrete martingale. This means that for each m ⩾ 1
and this indicates that we can set ⟨B τ ⟩t = t ∧ τ . This process makes Bt∧τ
2
− t ∧ τ into a
martingale. Indeed: fix 0 ⩽ s ⩽ t ⩽ T and add them to the partition, if necessary. Then
a.e. L1 (P)
Bτ2m ∧t ÐÐÐ→ Bτ2∧t and Bτ2m ∧t ÐÐÐ→ Bτ2∧t
m→∞ m→∞
∫ (Bτ ∧s − τ ∧ s) d P = m→∞
lim ∫ (Bτ2m ∧s − τm ∧ s) d P
2
F F
= lim ∫ (Bτ2m ∧t − τm ∧ t) d P
m→∞ F
(Of course, one should make sure that 1[0,τ ) ∈ L2T , see e.g. Problem 15.16 below or Prob-
lem 16.2 in combination with Theorem 15.20.)
∎∎
Problem 15.6. Solution: We begin with a general remark: if f = 0 on [0, s] × Ω, we can use
Theorem 15.13 f) and deduce f ● Bs = 0.
(a) We have
E[(f ● B_t)² | F_s] = E[(f ● B_t − f ● B_s)² | F_s] = E[ ∫_s^t f²(r) dr | F_s ]      (by 15.13 b) and (15.20))
If both f and g vanish on [0, s], the same is true for f ± g. We get
E[((f ± g) ● B_t)² | F_s] = E[ ∫_s^t (f ± g)²(r) dr | F_s ].
Subtracting the 'minus' version from the 'plus' version gives
E[((f + g) ● B_t)² − ((f − g) ● B_t)² | F_s] = E[ ∫_s^t ((f + g)²(r) − (f − g)²(r)) dr | F_s ],
or
4 E[(f ● B_t) · (g ● B_t) | F_s] = 4 E[ ∫_s^t (f · g)(r) dr | F_s ].
E (f ● Bt ∣ Fs ) = f ● Bs =
martingale see above
0
∎∎
n→∞
Problem 15.7. Solution: Because of Lemma 15.10 it is enough to show that fn ● BT ÐÐÐ→
f ● BT in L2 (P). This follows immediately from Theorem 15.13 c):
2 2
E [∣fn ● BT − f ● BT ∣ ] = E [∣(fn − f ) ● BT ∣ ]
T n→∞
= E [∫ ∣fn (s) − f (s)∣2 ds] ÐÐÐ→ 0.
0
∎∎
Problem 15.8. Solution: Without loss of generality we assume that f (0) = 0. Fix c > 0.
Then we have for all > 0, using the Markov inequality and the Hölder inequality with
p = 4 and q = 4/3
1/2 √
1 1
P (∣ ∫ f (s) dBs ∣ > c) = P (∣ ∫ f (s) dBs ∣ > c)
B 0 B 0
⎡ 1 1/2 ⎤
⎢
⎥
⩽ c−1/2 E ⎢ ∣ (s) ∣ ⎥
⎢ ∣B ∣1/2 ∫0
f dBs
⎥
⎣ ⎦
⎡ 1 ⎤⎞3/4 2 1/4
−1/2 ⎛ ⎢ ⎥
⩽c E⎢ ⎥ (E [( ∫ f (s) dB ) ]) .
⎝ ⎢⎣ ∣B ∣2/3 ⎥⎦⎠
s
0
1 3/4 1/4
P (∣ ∫ f (s) dBs ∣ > c) ⩽ c
−1/2
(E [∣B1 ∣−2/3 ]) −1/4 (∫ E [∣f (s)∣2 ] ds)
B 0 0
1/4
3/4 1
= c−1/2 (E [∣B1 ∣−2/3 ]) ( ∫ E [∣f (s)∣2 ] ds)
0
1/4
3/4
⩽ c−1/2 (E [∣B1 ∣−2/3 ]) (sup E [∣f (s)∣2 ]) .
s⩽
Since E [∣B1 ∣−2/3 ] < ∞, see (the solution of) Problem 11.8, we find
1/4
1
lim P (∣ ∫ f (s) dBs ∣ > c) ⩽ C ( lim sup E [∣f (s)∣2 ])
→0 B 0 →0 s⩽
´¹¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¶
=lim sup→0
1/4
= C (lim E [∣f ()∣ ]) 2
= 0.
→0
The first expression can now be estimated using the Markov inequality and Itô’s isometry,
for the second expression we use Brownian scaling.
∎∎
∎∎
t
which means that ∫0 f (s)g(s) ds is well-defined. Since f ± g ∈ L2t , we get by polarization
∎∎
L2
Problem 15.11. Solution: If Xn Ð→ X then supn E(Xn2 ) < ∞ and the claim follows from the
fact that
E ∣Xn2 − Xm
2
∣ = E [∣Xn − Xm ∣∣Xn + Xm ∣]
√ √
⩽ E ∣Xn + Xm ∣2 E ∣Xn − Xm ∣2
√ √ √
⩽ ( E ∣Xn ∣2 + E ∣Xm ∣2 ) E ∣Xn − Xm ∣2 .
∎∎
Problem 15.12. Solution: Let Π = {t0 = 0 < t1 < . . . < tn = T } be a partition of [0, T ]. Then
we get
n
BT3 = ∑ (Bt3j − Bt3j−1 )
j=1
n
= ∑ (Btj − Btj−1 )[Bt2j + Btj Btj−1 + Bt2j−1 ]
j=1
n
= ∑ (Btj − Btj−1 )[Bt2j − 2Btj Btj−1 + Bt2j−1 + 3Btj Btj−1 ]
j=1
n
= ∑ (Btj − Btj−1 )[(Btj − Btj−1 )2 + 3Btj Btj−1 ]
j=1
n
= ∑ (Btj − Btj−1 )[(Btj − Btj−1 )2 + 3Bt2j−1 + 3Btj−1 (Btj − Btj−1 )]
j=1
n n n
3 2
= ∑ (Btj − Btj−1 ) + 3 ∑ Bt2j−1 (Btj − Btj−1 ) + 3 ∑ Btj−1 (Btj − Btj−1 )
j=1 j=1 j=1
n n n
3
= ∑ (Btj − Btj−1 ) + 3 ∑ Bt2j−1 (Btj − Btj−1 ) + 3 ∑ Btj−1 (tj − tj−1 )
j=1 j=1 j=1
n
2
+ 3 ∑ Btj−1 [(Btj − Btj−1 ) − (tj − tj−1 )]
j=1
= I1 + I2 + I3 + I4 .
Clearly,
T T
I2 ÐÐÐ→ 3 ∫ Bs2 dBs and I3 ÐÐÐ→ 3 ∫ Bs ds
∣Π∣→0 0 ∣Π∣→0 0
⎛n 3 ⎞ (B1)
n
3
V I1 = V ∑ (Btj − Btj−1 ) = ∑ V ((Btj − Btj−1 ) )
⎝j=1 ⎠ j=1
n
= ∑ V (Bt3j −tj−1 )
(B2)
j=1
n
= ∑ (tj − tj−1 ) V (B1 )
scaling 3 3
j=1
n
⩽ ∣Π∣ ∑ (tj − tj−1 ) V (B13 )
2
j=1
Moreover,
⎛⎛ n 2 ⎞ ⎞
2
⎛n n 2 2 ⎞
= 9 E ∑ ∑ Btj−1 [(Btj − Btj−1 ) − (tj − tj−1 )]Btk−1 [(Btk − Btk−1 ) − (tk − tk−1 )]
⎝j=1 k=1 ⎠
⎛n 2 2⎞
= 9 E ∑ Bt2j−1 [(Btj − Btj−1 ) − (tj − tj−1 )]
⎝j=1 ⎠
j=1
n
2
= 9 ∑ E (Bt2j−1 ) E ([Bt2j −tj−1 − (tj − tj−1 )] )
(B2)
j=1
n
2
= 9 ∑ tj−1 E (B12 )(tj − tj−1 )2 E ([B12 − 1] )
scaling
j=1
n
= 9 ∑ tj−1 (tj − tj−1 )2 V(B12 )
j=1
n
⩽ 9T ∣Π∣ ∑ (tj − tj−1 ) V(B12 )
j=1
Now for the argument with the mixed terms. Let j < k; then tj−1 < tj ⩽ tk−1 < tk , and by
the tower property,
2 2
E (Btj−1 [(Btj − Btj−1 ) − (tj − tj−1 )]Btk−1 [(Btk − Btk−1 ) − (tk − tk−1 )])
2 2
= E (E [Btj−1 [(Btj − Btj−1 ) − (tj − tj−1 )]Btk−1 [(Btk − Btk−1 ) − (tk − tk−1 )] ∣ Ftk−1 ])
tower
2 2
= E (Btj−1 [(Btj − Btj−1 ) − (tj − tj−1 )]Btk−1 E [[(Btk − Btk−1 ) − (tk − tk−1 )] ∣ Ftk−1 ])
pull
out
2 2
= E (Btj−1 [(Btj − Btj−1 ) − (tj − tj−1 )]Btk−1 E [(Btk − Btk−1 ) − (tk − tk−1 )] )
(B1)
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
=0
= 0.
∎∎
Problem 15.13. Solution: Let Π = {t0 = 0 < t1 < . . . < tn = T } be a partition of [0, T ]. Then
we get
f (T )BT − f (0)B0
n n n
= ∑ f (tj−1 )(Btj − Btj−1 ) + ∑ Btj−1 (f (tj ) − f (tj−1 )) + ∑ (Btj − Btj−1 )(f (tj ) − f (tj−1 ))
j=1 j=1 j=1
= I1 + I2 + I3 .
Clearly,
L2 T
I1 Ð→ ∫ f (s) dBs (stochastic integral)
0
a.s. T
I2 ÐÐ→ ∫ Bs df (x) (Riemann-Stieltjes integral)
0
and if we can show that I3 → 0 in L2 , then we are done (as this also implies the L2 -
convergence of I2 ). Now we have
⎡ n 2⎤
⎢⎛ ⎞ ⎥⎥
⎢
E ⎢ ∑ (Btj − Btj−1 )(f (tj ) − f (tj−1 )) ⎥
⎢⎝j=1 ⎠ ⎥
⎣ ⎦
⎡n n ⎤
⎢ ⎥
= E ⎢ ∑ ∑ (Btj − Btj−1 )(f (tj ) − f (tj−1 ))(Btk − Btk−1 )(f (tk ) − f (tk−1 ))⎥⎥
⎢
⎢j=1 k=1 ⎥
⎣ ⎦
the mixed terms break away because of the independent increments property of Brownian
motion
n
= ∑ E [(Btj − Btj−1 )2 (f (tj ) − f (tj−1 ))2 ]
j=1
n
= ∑ (f (tj ) − f (tj−1 ))2 E [(Btj − Btj−1 )2 ]
j=1
n
= ∑ (tj − tj−1 )(f (tj ) − f (tj−1 ))2
j=1
n
⩽ 2 ∣Π∣ ⋅ ∥f ∥∞ ∑ ∣f (tj ) − f (tj−1 )∣
j=1
∣f (t)∣ ⩽ ∣f (t) − f (0)∣ + ∣f (0)∣ ⩽ VAR1 (f ; [0, t]) + VAR1 (f ; {0}) ⩽ 2VAR1 (f ; [0, T ])
∎∎
Problem 15.14. Solution: Replace, starting in the fourth line of the proof of Proposi-
tion 15.16, the argument as follows:
n sj
⩽4∑∫ sup E [∣f (u) − f (v)∣2 ] ds ÐÐÐ→ 0.
j=1 sj−1 u,v∈[sj−1 ,sj ] ∣Π∣→0
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
→0, ∣Π∣→0
∎∎
Problem 15.15. Solution: To simplify notation, we drop the n in Πn and write only 0 = t0 <
t1 < . . . < tk = T and
α
θn,j = θj = αtj + (1 − α)tj−1 .
We get
k T
LT (α) ∶= L2 (P)- lim ∑ Bθj (Btj − Btj−1 ) = ∫ Bs dBs + αT.
∣Π∣→0 j=1 0
Indeed, we have
k
∑ Bθj (Btj − Btj−1 )
j=1
k k
= ∑ Btj−1 (Btj − Btj−1 ) + ∑ (Bθj − Btj−1 )(Btj − Btj−1 )
j=1 j=1
k k k
= ∑ Btj−1 (Btj − Btj−1 ) + ∑ (Bθj − Btj−1 )2 + ∑ (Btj − Bθj )(Bθj − Btj−1 )
j=1 j=1 j=1
= X + Y + Z.
L2 T
We know already that X ÐÐÐ→ ∫0 Bs dBs . Moreover,
∣Π∣→0
⎛k ⎞
V Z = V ∑ (Btj − Bθj )(Bθj − Btj−1 )
⎝j=1 ⎠
k
= ∑ V [(Btj − Bθj )(Bθj − Btj−1 )]
j=1
k
= ∑ E [(Btj − Bθj )2 (Bθj − Btj−1 )2 ]
j=1
k
= ∑ E [(Btj − Bθj )2 ] E [(Bθj − Btj−1 )2 ]
j=1
k
= ∑ (tj − θj )(θj − tj−1 )
j=1
k
as in Theorem 9.1
= α(1 − α) ∑ (tj − tj−1 )(tj − tj−1 ) ÐÐÐÐÐÐÐÐÐ→ 0.
j=1
Finally,
⎛k ⎞ k
E Y = E ∑ (Bθj − Btj−1 )2 = ∑ E(Bθj − Btj−1 )2
⎝j=1 ⎠ j=1
k k
= ∑ (θj − tj−1 ) = α ∑ (tj − tj−1 ) = αT.
j=1 j=1
Consequence: L_T(α) = ½(B_T² + (2α − 1)T), and this stochastic integral is a martingale if,
and only if, α = 0, i. e. if θ_j = t_{j−1} is the left endpoint of the interval.
For α = ½ we get the so-called Stratonovich or mid-point stochastic integral. This will
obey the usual calculus rules (instead of Itô's rule). A first sign is the fact that
L_T(½) = ½ B_T².
∎∎
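Remark (not part of the original solution): the α-dependence of the limit can be observed numerically. The minimal Python sketch below samples B at the grid points and — via Brownian bridges, which is an added device not used in the solution above — at the intermediate points θ_j = αt_j + (1 − α)t_{j−1}, and compares the resulting sum with ½(B_T² + (2α − 1)T); grid size and α are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)

T, n, alpha = 1.0, 100_000, 0.5
dt = T / n

# Brownian motion on the grid t_j = j*dt ...
B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))
# ... and, via the Brownian bridge, at theta_j = t_{j-1} + alpha*dt.
bridge_mean = B[:-1] + alpha * (B[1:] - B[:-1])
bridge_std = np.sqrt(alpha * (1.0 - alpha) * dt)
B_theta = bridge_mean + bridge_std * rng.standard_normal(n)

riemann_sum = np.sum(B_theta * (B[1:] - B[:-1]))
limit = 0.5 * (B[-1]**2 + (2.0 * alpha - 1.0) * T)
print(riemann_sum, limit)    # close for large n; equal in the limit |Pi| -> 0
```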
(a) Let τk be a sequence of stopping times with countably many, discrete values such
that τk ↓ τ . For example, τk ∶= (⌊2k τ ⌋ + 1)/2k , see Lemma A.15 in the appendix.
Write s1 < . . . < sK for the values of τk . In particular,
And so
(b) Since T ∧ τk ↓ T ∧ τ and T ∧ τk has only finitely many values, and we find
(c) Fix k and write 0 ⩽ s1 < . . . < sK for the values of T ∧ τk . Following the proof of
Theorem 15.9.c)
∫ 1[0,T ∧τk ) (s) dBs = ∫ ∑ 1[T ∧sj−1 ,T ∧sj ) (s)1[0,T ∧τk ) (s) dBs
j
= ∑ ∫ 1[T ∧sj−1 ,T ∧sj ∧τk ) (s) 1{T ∧τk >T ∧sj−1 } dBs
j ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶ ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
T ∧sj ∧τk =T ∧sj FT ∧sj−1 -mble
= BT ∧τk .
= E(T ∧ τk − T ∧ τ ) ÐÐÐ→ 0
k→∞
by dominated convergence.
∫ 1[0,T ∧τ ) (s) dBs = L - lim ∫ 1[0,T ∧τk ) (s) dBs = L - lim BT ∧τk = BT ∧τ
2 d) 2 c)
k k
(f) The result is, in the light of the localization principle of Theorem 15.13 not unex-
pected.
∎∎
• Clearly, ∅, [0, T ] × Ω ∈ P.
• Let Γ ∈ P. Then
thus Γc ∈ P.
• Let Γn ∈ P. By definition
Γn ∩ ([0, t] × Ω) ∈ B[0, t] ⊗ Ft
i. e. ⋃n Γn ∈ P.
∎∎
Problem 15.18. Solution: Let f (t, ω) be right-continuous on the interval [0, T ]. (We consider
only T < ∞ since the case of the infinite interval [0, ∞) is actually easier.)
Set
⌊2n s⌋+1
fnT (s, ω) ∶= f ( 2n ∧ T, ω)
then
fnT (s, ω) = ∑ f ( k+1
2n ∧ T, ω) 1[k2−n ,(k+1)2−n ) (s) (s ⩽ T )
k
Fix n ⩾ 0, write tj = j2−n . Then t0 = 0 < t1 < . . . tN ⩽ T for some suitable N . Observe that
for any x ∈ R
N
{(s, ω) ∶ f (s, ω) ⩽ x} = {T } × {ω ∶ f (T, ω) ⩽ x} ∪ ⋃ [tj−1 , tj ) × {ω ∶ f (tj , ω) ⩽ x}
j=1
and each set appearing in the union set on the right is in B[0, T ] ⊗ FT .
Now consider fnt and f (t)1[0,t] . We conclude, with the same reasoning, that both are
B[0, t] ⊗ Ft measurable.
∎∎
where φj−1 is Ftj−1 measurable such that fn → f in L2 (µT ⊗ P). In particular, there is a
subsequence such that
t t
lim ∫ ∣fn(k) (s)∣2 dAs = ∫ ∣f (s)∣2 dAs a.s.
k→∞ 0 0
t
so that it is enough to check that the integrals ∫0 ∣fn(j) (s)∣2 dAs are adapted. By defintion
t
∫ ∣fn(j) (s)∣2 dAs = ∑ φ2j−1 (Atj ∧t − Atj−1 ∧t )
0 j
and from this it is clear that the integral is Ft measurable for each t.
∎∎
16 Stochastic integrals: Beyond L2T
Problem 16.1. Solution: Yes. In view of Lemma 16.3 we have to show that L0T ⊃ L2T,loc . Let
f ∈ L2T,loc and take some localizing sequence (σn )n⩾1 such that
a.s. T ∧σn
σn ÐÐÐ→ ∞ and ∫ ∣f (s, ⋅)∣2 ds < ∞
n→∞ 0
(the finiteness of the integral latter follows from the fact that f 1[0,σn ) is in L2T for each
n. Moreover, f 1[0,σn ) is P-measurable, hence f 1[0,T ∧σn ) → f 1[0,T ) is P-measurable. Note
that the completeness of the filtration allows that σn → ∞ holds only a.s.). Now observe
that for every fixed ω there is some n(T, ω) ⩾ 1 such that for all n ⩾ n(T, ω) we have
σn (ω) ⩾ T . Thus,
T T ∧σn (ω)
∫ ∣f (s, ω)∣2 ds = ∫ ∣f (s, ω)∣2 ds < ∞.
0 0
∎∎
Problem 16.2. Solution: Solution 1: We have that the process t ↦ 1[0,τ (ω)) (t) is adapted
    I_n^t(s, ω) := 1_{[0,τ(ω))}(⌊2^n s⌋/2^n ∧ t) = ∑_j 1_{[0,τ(ω))}(t_{j+1} ∧ t) 1_{[t_j, t_{j+1})}(s ∧ t).
∎∎
Problem 16.3. Solution: Assume that σn are stopping times such that (Mtσn 1{σn >0} )t is a
martingale. Clearly,
• τn ∶= σn ∧ n ↑ ∞ almost surely as n → ∞;
• M^{σ_n}_{t∧n} 1_{{σ_n>0}} = M^{σ_n∧n}_t 1_{{σ_n>0}} = M^{σ_n∧n}_t 1_{{σ_n∧n>0}} = M^{τ_n}_t 1_{{τ_n>0}}.
∎∎
(a) The picture below shows that I_{σ_u} = I_{τ_u} = u since t ↦ I_t is continuous.
[Figure: sketch of the continuous, increasing path t ↦ I_t together with the passage times σ_u and τ_u of the level u.]
Thus,
ω ∈ {σu ⩾ t} ⇐⇒ σu (ω) ⩾ t
⇐⇒ ω ∈ {It ⩽ u}
and
(b) We have
    {τ_u ⩽ t} = {τ_u > t}^c = {I_t < u}^c ∈ F_t   (the second equality follows as in (a))
and
    {σ_u ⩽ t} = ⋂_k {σ_u < t + 1/k} = ⋂_k {σ_u ⩾ t + 1/k}^c = ⋂_k {I_{t+1/k} ⩽ u}^c ∈ ⋂_k F_{t+1/k} = F_{t+}.
(c) Proof for σ: Clearly, σ_u ⩽ σ_{u+ε} for all ε ⩾ 0. Thus, σ_u ⩽ lim_{ε↓0} σ_{u+ε}.
In order to show that σ_u ⩾ lim_{ε↓0} σ_{u+ε}, it is enough to check that
    lim_{ε↓0} σ_{u+ε} ⩾ t ⟹ σ_u ⩾ t.   (*)
Indeed: if lim_{ε↓0} σ_{u+ε} > σ_u, then there is some q such that lim_{ε↓0} σ_{u+ε} > q > σ_u, and this contradicts (*).
Let us show (*):
    lim_{ε↓0} σ_{u+ε} ⩾ t ⟹ ∀ε > 0: I_t ⩽ u + ε (by (a)) ⟹ I_t ⩽ u ⟹ σ_u ⩾ t.
Similarly, τ_{u−ε} ⩽ τ_u for all ε ⩾ 0, so lim_{ε↓0} τ_{u−ε} ⩽ τ_u, and it is enough to check that
    lim_{ε↓0} τ_{u−ε} ⩽ t ⟹ τ_u ⩽ t.   (**)
Indeed: if lim_{ε↓0} τ_{u−ε} < τ_u, then there is some q such that lim_{ε↓0} τ_{u−ε} < q < τ_u, and this contradicts (**).
Let us show (**):
    lim_{ε↓0} τ_{u−ε} ⩽ t ⟹ ∀ε > 0: I_t ⩾ u − ε (by (a)) ⟹ I_t ⩾ u ⟹ τ_u ⩽ t.
Indeed: if σu− < τu , then there is some q with σu− < q < τu contradicting (***).
Ô⇒ It ⩾ u
Ô⇒ τu ⩽ t.
(e) Clear, since in this case It is continuously invertible and σ, τ are the left- and right-
continuous inverses.
∎∎
17 Itô’s formula
Problem 17.1. Solution: We try to identify the bits and pieces as parts of Itô’s formula. For
f(x) = eˣ we get f′(x) = f″(x) = eˣ and so
    e^{B_t} − 1 = ∫_0^t e^{B_s} dB_s + (1/2) ∫_0^t e^{B_s} ds.
Thus,
    X_t = e^{B_t} − 1 − (1/2) ∫_0^t e^{B_s} ds.
With the same trick we try to find f(x) such that f′(x) = x e^{x²}. A moment's thought reveals that f(x) = (1/2) e^{x²} will do. Moreover, f″(x) = e^{x²} + 2x² e^{x²}. This then gives
    (1/2) e^{B_t²} − (1/2) = ∫_0^t B_s e^{B_s²} dB_s + (1/2) ∫_0^t (e^{B_s²} + 2B_s² e^{B_s²}) ds
and we see that
    Y_t = (1/2) (e^{B_t²} − 1 − ∫_0^t (e^{B_s²} + 2B_s² e^{B_s²}) ds).
Note: the integrand B_s² e^{B_s²} is not of class L²_T, thus we have to use a stopping technique (as in step 4° of the proof of Itô's formula or as in Chapter 16).
∎∎
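The identity for X_t can also be checked numerically. The sketch below (not part of the original solution; it uses a left-endpoint Riemann-sum approximation of the stochastic integral) simulates Brownian paths and verifies that e^{B_T} − 1 − ∫_0^T e^{B_s} dB_s − (1/2)∫_0^T e^{B_s} ds is close to zero.

```python
import numpy as np

# Minimal sketch: check Ito's formula for f(x) = exp(x) along simulated paths.
rng = np.random.default_rng(0)
T, n, paths = 1.0, 10_000, 200
dt = T / n

err = []
for _ in range(paths):
    dB = rng.normal(0.0, np.sqrt(dt), n)
    B = np.concatenate(([0.0], np.cumsum(dB)))
    ito_int = np.sum(np.exp(B[:-1]) * dB)        # left endpoints (adapted integrand)
    ds_int = 0.5 * np.sum(np.exp(B[:-1])) * dt   # ordinary ds-integral
    err.append(np.exp(B[-1]) - 1.0 - ito_int - ds_int)

print("mean absolute defect:", np.mean(np.abs(err)))  # small, shrinks as n grows
```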
If γ = 1 + ε > 1 we get
    ∑_{j=1}^N (t_j − t_{j−1})^{1+ε} ⩽ |Π|^ε ∑_{j=1}^N (t_j − t_{j−1}) = |Π|^ε T → 0 as |Π| → 0,
∎∎
Problem 17.3. Solution: Let 0 = t0 < t1 < . . . < tN = T be a generic partition of [0, T ] and
write ∆j = Btj − Btj−1 . Then we get
    (∑_j ∆_j²)⁴ = ∑_j ∑_k ∑_l ∑_m ∆_j² ∆_k² ∆_l² ∆_m²
        = c_{1,1,1,1} ∑_{j<k<l<m} ∆_j² ∆_k² ∆_l² ∆_m² + ⋯
        + c_{1,1,2} ∑_{j<k<l} ∆_j² ∆_k² (∆_l²)² + c_{1,2,1} ∑_{j<k<l} ∆_j² (∆_k²)² ∆_l² + c_{2,1,1} ∑_{j<k<l} (∆_j²)² ∆_k² ∆_l²
        + ⋯ + c_4 ∑_j (∆_j²)⁴.
By the scaling property E(∆_j²)ⁿ = (t_j − t_{j−1})ⁿ E B_1^{2n} = δ_jⁿ E B_1^{2n}, where δ_j = t_j − t_{j−1}. Using the independent increments property we get
    E[(∑_j ∆_j²)⁴] = ∑_j ∑_k ∑_l ∑_m E(∆_j² ∆_k² ∆_l² ∆_m²)
                  = c′_{1,1,1,1} ∑_{j<k<l<m} δ_j δ_k δ_l δ_m + ⋯ + c′_4 ∑_j δ_j⁴
                  = c′_{1,1,1,1} (∑_j δ_j)⁴ + ⋯ + c″_4 ∑_j δ_j⁴.
Since ∑j δj = T and since we can estimate the terms containing powers of δj by, for
example,
we get
    E[(∑_j (B_{t_j} − B_{t_{j−1}})²)⁴] → c′_{1,1,1,1} T⁴ as |Π| → 0.
We will use this on page 252 (of Brownian Motion) when we estimate ∣J2 ∣:
    |J_2|² ⩽ max_{1⩽l⩽N} |g(ξ_l) − g(B_{t_{l−1}})|² [∑_{l=1}^N (B_{t_l} − B_{t_{l−1}})²]².
The second factor is, however, bounded by CT 2 , see the considerations from above, and
the L2 -convergence follows.
Alternative Solution: Let 0 = t0 < t1 < . . . < tn = T be a generic partition of [0, T ] and
write ∆j = Btj − Btj−1 . By the independence and stationarity of the increments, we have
for any ξ ∈ R
    E exp(iξ ∑_{j=1}^n ∆_j²) = ∏_{j=1}^n E exp(iξ(t_j − t_{j−1}) B_1²) = ∏_{j=1}^n 1/√(1 − 2iξ(t_j − t_{j−1})) =: ∏_{j=1}^n g_j(ξ)
and
    (d^k/dξ^k) g_j(ξ) = c_k (t_j − t_{j−1})^k / (1 − 2iξ(t_j − t_{j−1}))^{1/2+k}.
From
    E[(∑_{j=1}^n ∆_j²)⁴] = (d⁴/dξ⁴) [E exp(iξ ∑_{j=1}^n ∆_j²)] |_{ξ=0}
we conclude, by applying Leibniz' product rule,
    E[(∑_{j=1}^n ∆_j²)⁴] = ∑_{α∈N₀ⁿ, |α|=4} C_α ∏_{j=1}^n (d^{α_j}/dξ^{α_j}) g_j(ξ) |_{ξ=0}
                        ⩽ C ∑_{α∈N₀ⁿ, |α|=4} C_α ∏_{j=1}^n (t_j − t_{j−1})^{α_j}
                        ⩽ C |Π|⁴ n⁴,
where we use in the last step that ∏_j (t_j − t_{j−1})^{α_j} ⩽ |Π|^{|α|} = |Π|⁴.
∎∎
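Both arguments boil down to the L²-convergence of the quadratic variation sums, which is easy to check numerically. The following sketch (an illustration, not part of the original solution) estimates E[(∑_j ∆_j² − T)²] for finer and finer equidistant partitions.

```python
import numpy as np

# Sketch: S_n = sum_j (B_{t_j} - B_{t_{j-1}})^2 converges to T in L^2;
# for an equidistant partition one has E[(S_n - T)^2] = 2*T^2/n.
rng = np.random.default_rng(1)
T, paths = 1.0, 5_000
for n in (10, 100, 1000):
    dB = rng.normal(0.0, np.sqrt(T / n), size=(paths, n))
    S = (dB**2).sum(axis=1)
    print(n, ((S - T)**2).mean())   # decreases roughly like 2*T**2/n
```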
(a) Assume first that f ∈ C1b . Let Π = {0 = t0 < t1 < . . . < tn = t} be any partition. We
have
n
Bt f (Bt ) = ∑ (Btl f (Btl ) − Btl−1 f (Btl−1 )).
l=1
Using
Bt f (Bt ) − Bs f (Bs )
Letting ∣Π∣ → 0 in this identity, we see that the left-hand side converges (it is con-
stant!) and the second and third term on the right converge, in probability, to
t t
∫ f (Bs ) dBs and ∫ f ′ (Bs ) ds,
0 0
respectively, cf. Lemma 17.4. Therefore, the first term has to converge (in probability) as well, i.e. the limit
    ∫_0^t B_s df(B_s)
exists.
If f ′ is not bounded, we can use a stopping and cutting technique as in the proof of
Theorem 17.1 (step 4o ).
(b) This follows from (a) after having taken the limit.
(e) We can assume that f has compact support, supp f ⊂ [−K, K], say. Otherwise, we
use the stopping and cutting technique from the proof (step 4o ) of Theorem 17.1,
to remove this assumption.
The rest follows from the previous step (d) which allows us to interchange (stochastic)
integration and limits. (The Riemann part in Itô’s formula is clear, since we have
uniform convergence!).
∎∎
Then f (t)g(t) = F ○ G(t). If we differentiate this using the chain rule we get
d
(F ○ G) = ∂x F ○ G(t) ⋅ f ′ (t) + ∂y F ○ G(t) ⋅ g ′ (t) = g(t) ⋅ f ′ (t) + f (t) ⋅ g ′ (t)
dt
(surprised?) and if we integrate this up we see
t t
F ○ G(t) − F ○ G(0) = ∫ f (s)g ′ (s) ds + ∫ g(s)f ′ (s) ds
0 0
t t
=∫ f (s) dg(s) + ∫ g(s) df (s).
0 0
Note: For the first equality we have to assume that f ′ , g ′ exist Lebesgue a.e. and
that their primitives are f and g, respectively. This is tantamount to saying that f, g
are absolutely continuous with respect to Lebesgue measure.
If b á β we have ⟨b, β⟩ ≡ 0 (note our Itô formula has no mixed second derivatives!)
and we get the formula as in the statement. Otherwise we have to take care of
⟨b, β⟩. This is not so easy to calculate since we need more information on the joint
distribution. In general, we have
∎∎
Problem 17.6. Solution: Consider the two-dimensional Itô process Xt = (t, Bt ) with param-
eters
    σ ≡ (0, 1)ᵀ and b ≡ (1, 0)ᵀ.
    = ∫_0^t ∂₂f(X_s) dB_s + ∫_0^t (∂₁f(X_s) b₁ + (1/2) ∂₂∂₂f(X_s)) ds
    = ∫_0^t (∂f/∂x)(s, B_s) dB_s + ∫_0^t ((∂f/∂t)(s, B_s) + (1/2)(∂²f/∂x²)(s, B_s)) ds.
In the d-dimensional case the same argument applies with
    σ ∈ R^{(d+1)×d}, σ_{ik} = 1 if i = k + 1 and σ_{ik} = 0 otherwise, and b = (1, 0, …, 0)ᵀ ∈ R^{d+1}.
∎∎
Problem 17.7. Solution: Let Bt = (Bt1 , . . . , Btd ) be a BMd and f ∈ C1,2 ((0, ∞) × Rd , R) as in
Theorem 5.6. Then the multidimensional time-dependent Itô’s formula shown in Problem
17.6 yields
    M_t^f = f(t, B_t) − f(0, B_0) − ∫_0^t Lf(s, B_s) ds
          = f(t, B_t) − f(0, B_0) − ∫_0^t ((∂/∂t) f(s, B_s) + (1/2) ∆_x f(s, B_s)) ds
          = ∑_{k=1}^d ∫_0^t (∂f/∂x_k)(s, B_s¹, …, B_s^d) dB_s^k.
By Theorem 15.13 it follows that Mtf is a martingale (note that the assumption (5.5)
guarantees that the integrand is of class L2T !)
∎∎
Problem 17.8. Solution: First we show that Xt = et/2 cos Bt is a martingale. We use the
time-dependent Itô’s formula from Problem 17.6. Therefore, we set f (t, x) = et/2 cos x.
Then
    (∂f/∂t)(t, x) = (1/2) e^{t/2} cos x,  (∂f/∂x)(t, x) = −e^{t/2} sin x,  (∂²f/∂x²)(t, x) = −e^{t/2} cos x.
Hence we obtain
∎∎
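A quick sanity check of the martingale property is that E[e^{t/2} cos B_t] = 1 for every t (since E cos B_t = e^{−t/2}). The following Monte Carlo sketch, added here only as an illustration, confirms this.

```python
import numpy as np

# Sketch: X_t = exp(t/2) * cos(B_t) should have constant expectation 1.
rng = np.random.default_rng(2)
paths = 200_000
for t in (0.5, 1.0, 2.0):
    B_t = rng.normal(0.0, np.sqrt(t), paths)
    print(t, np.exp(t / 2) * np.cos(B_t).mean())   # all values close to 1
```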
(a) The stochastic integrals exist if bs /rs and βs /rs are in L2T . As ∣bs /rs ∣ ⩽ 1 we get
T T
∥b/r∥2L2 (λT ⊗P) = ∫ [E (∣bs /rs ∣2 )] ds ⩽ ∫ 1ds = T < ∞.
0 0
Since bs /rs is adapted and has continuous sample paths, it is progressive and so an
element of L2T . Analogously, ∣βs /rs ∣ ⩽ 1 implies βs /rs ∈ L2T .
(b) We use Lévy’s characterization of a BM1 , Theorem 9.12 or 18.5. From Theorem
15.13 it follows that
t t
• t ↦ ∫0 bs /rs dbs , t ↦ ∫0 βs /rs dβs are continuous; thus t ↦ Wt is a continuous
process.
t t
• ∫0 bs /rs dbs , ∫0 βs /rs dβs are square integrable martingales, and so is Wt .
Problem 17.10. Solution: The function f = u + iv is analytic, and as such it satisfies the
Cauchy–Riemann equations, see e.g. Rudin [14, Theorem 11.2],
ux = vy and uy = −vx .
u(bt , βt ) − u(b0 , β0 )
t t 1 t
=∫ ux (bs , βs ) dbs + ∫ uy (bs , βs ) dβs + ∫ (uxx (bs , βs ) + uyy (bs , βs )) ds
0 0 2 0
t t
=∫ ux (bs , βs ) dbs + ∫ uy (bs , βs ) dβs ,
0 0
where the last term cancels as uxx = vyx and uyy = −vxy . Theorem 15.13 implies
t t
• t ↦ u(bt , βt ) = ∫0 ux (bs , βs ) dbs + ∫0 uy (bs , βs ) dβs is a continuous process.
t t
• ∫0 ux (bs , βs ) dbs , ∫0 uy (bs , βs ) dβs are square integrable martingales, and so u(bt , βt )
is a square integrable martingale.
• the quadratic variation is given by
1 t t t t
= (∫ (ux + vx )2 ds + ∫ (uy + vy )2 ds − ∫ (ux − vx )2 ds − ∫ (uy − vy )2 ds)
4 0 0 0 0
t
= ∫ (ux vx + uy vy ) ds
0
t
= ∫ (−vy uy + uy vy ) ds = 0.
0
as the quadratic covariation ⟨u, v⟩t = 0. Since ∣g∣ ⩽ 1 and since g(ut , vt ) is progressive,
the integrand is in L2T and the above stochastic integrals exist. From Theorem 15.13 we
deduce that
t t
E (∫ g(ur , vr ) dur 1F ) = 0 and E (∫ g(ur , vr ) dvr 1F ) = 0.
s s
for all F ∈ σ(ur , vr ∶ r ⩽ s) =∶ Fs . If we multiply the above equality by e−i(ξus +ηvs ) 1F and
take expectations, we get
    E(g(u_t − u_s, v_t − v_s) 1_F) = P(F) − (1/2)(ξ² + η²) ∫_s^t E(g(u_r − u_s, v_r − v_s) 1_F) dr,
i.e. Φ(t) = P(F) − (1/2)(ξ² + η²) ∫_s^t Φ(r) dr with Φ(t) := E(g(u_t − u_s, v_t − v_s) 1_F).
Since this integral equation has a unique solution (use Gronwall’s lemma, Theorem A.47),
we get
From this we deduce with Lemma 5.4 that (u(bt , βt ), v(bt , βt )) is a BM2 .
Note that the above calculation is essentially the proof of Lévy’s characterization theorem.
Only a few modifications are necessary for the proof of the multidimensional version, see
e.g. Karatzas, Shreve [9, Theorem 3.3.16].
∎∎
Problem 17.11. Solution: Let X_t = ∫_0^t σ(s) dB_s + ∫_0^t b(s) ds be a d-dimensional Itô process. Assuming that f = u + iv and thus u = Re f = (1/2)(f + f̄) and v = Im f = (1/(2i))(f − f̄) are C²-functions, we may apply the real d-dimensional Itô formula (17.14) to the functions u, v : R^d → R,
f (Xt ) − f (X0 )
= u(Xt ) − u(X0 ) + i(v(Xt ) − v(X0 ))
    = ∫_0^t ∇u(X_s)ᵀ σ(s) dB_s + ∫_0^t ∇u(X_s)ᵀ b(s) ds + (1/2) ∫_0^t trace(σ(s)ᵀ D²u(X_s) σ(s)) ds
      + i (∫_0^t ∇v(X_s)ᵀ σ(s) dB_s + ∫_0^t ∇v(X_s)ᵀ b(s) ds + (1/2) ∫_0^t trace(σ(s)ᵀ D²v(X_s) σ(s)) ds)
    = ∫_0^t ∇f(X_s)ᵀ σ(s) dB_s + ∫_0^t ∇f(X_s)ᵀ b(s) ds + (1/2) ∫_0^t trace(σ(s)ᵀ D²f(X_s) σ(s)) ds,
by the linearity of the differential operators and the (stochastic) integral.
∎∎
(a) By definition we have supp χ ⊂ [−1, 1] hence it is obvious that for χn (x) ∶= nχ(nx)
we have supp χn ⊂ [−1/n, 1/n]. Substituting y = nx we get
1/n 1/n 1
∫ χn (x) dx = ∫ nχ(nx) dx = ∫ χ(y) dy = 1
−1/n −1/n −1
(b) For derivatives of convolutions we know that ∂(f ⋆ χn ) = f ⋆ (∂χn ). Hence we obtain
∣∂ k fn (x)∣ = ∣f ⋆ (∂ k χn )(x)∣
= ∣∫ f (y)∂ k χn (x − y) dy∣
B(x,1/n)
This shows that limn→∞ ∣f ⋆ χn (x) − f (x)∣ = 0, i. e. limn→∞ f ⋆ χn (x) = f (x), at all x
where f is continuous.
(d) Using the above result and taking the supremum over all x ∈ R we get
∎∎
Problem 17.13. Solution: We follow the hint and use Lévy’s characterization of a BM1 ,
Theorem 9.12 or 18.5.
• t ↦ βt is a continuous process.
Thus, β is a BM1 .
∎∎
where σj ∈ L2T and bj is bounded. Then we get from the two-dimensional Itô’s formula
t t t
X1 (t)X2 (t) = ∫ σ1 (s)σ2 (s) ds + ∫ X1 (s) dX2 (s) + ∫ X2 (s) dX1 (s).
0 0 0
2o — Now let X1 = f ● B and X2 = Φ(g ● B) with Φ ∈ C2b (R). Then, by Itô’s formula
(17.1),
1 t s ′′ s
+ ∫ ∫ f (r) dBr Φ (∫0 g(r) dBr ) g (s) ds)
2
2 0 0
∎∎
Problem 17.15. Solution: Let σ_Π, b_Π ∈ E_T be such that σ_Π → σ and b_Π → b in L²(λ_T ⊗ P) as |Π| → 0. By the
Chebyshev inequality, Doob’s maximal inequality, and Itô’s isometry, we have
    P(sup_{t⩽T} |∫_0^t g(X_s^Π) σ_Π(s) dB_s − ∫_0^t g(X_s) σ(s) dB_s| > ε)
      ⩽ (1/ε²) E(sup_{t⩽T} |∫_0^t (g(X_s^Π) σ_Π(s) − g(X_s) σ(s)) dB_s|²)
      ⩽ (4/ε²) E(|∫_0^T (g(X_s^Π) σ_Π(s) − g(X_s) σ(s)) dB_s|²)
      = (4/ε²) E(∫_0^T |g(X_s^Π) σ_Π(s) − g(X_s) σ(s)|² ds).
From
g(XsΠ )σΠ (s) − g(Xs )σ(s) = g(XsΠ )(σΠ (s) − σ(s)) − σ(s)(g(Xs ) − g(XsΠ ))
=∶ I1 + I2 .
L2 (λT ⊗P)
Since g is bounded and σΠ ÐÐÐÐÐÐ→ σ, it follows that I1 → 0 as ∣Π∣ → 0. For the second
∣Π∣→0
term we note that, by Lemma 17.5,
P
sup ∣g(XsΠ ) − g(Xs )∣ ÐÐÐ→ 0.
s⩽T ∣Π∣→0
Consequently,
    P(sup_{t⩽T} |∫_0^t g(X_s^Π) dX_s^Π − ∫_0^t g(X_s) dX_s| > ε) → 0 as |Π| → 0.
∎∎
18 Applications of Itô’s formula
Problem 18.1. Solution: Lemma. Let (Bt , Ft )t⩾0 be a BMd , f = (f1 , . . . , fd ), fj ∈ L2P (λT ⊗P)
for all T > 0, and assume that ∣fj (s, ω)∣ ⩽ C for some C > 0 and all s ⩾ 0, 1 ⩽ j ⩽ d, and
ω ∈ Ω. Then
    exp(∑_{j=1}^d ∫_0^t f_j(s) dB_s^j − (1/2) ∑_{j=1}^d ∫_0^t f_j²(s) ds),  t ⩾ 0,   (18.1)
is a martingale for the filtration (Ft )t⩾0 .
Proof. Set X_t = ∑_{j=1}^d ∫_0^t f_j(s) dB_s^j − (1/2) ∑_{j=1}^d ∫_0^t f_j²(s) ds. Itô's formula, Theorem 17.7, yields
    e^{X_t} − 1 = ∑_{j=1}^d ∫_0^t e^{X_s} f_j(s) dB_s^j − (1/2) ∑_{j=1}^d ∫_0^t e^{X_s} f_j²(s) ds + (1/2) ∑_{j=1}^d ∫_0^t e^{X_s} f_j²(s) ds
              = ∑_{j=1}^d ∫_0^t exp(∑_{k=1}^d ∫_0^s f_k(r) dB_r^k − (1/2) ∑_{k=1}^d ∫_0^s f_k²(r) dr) f_j(s) dB_s^j
              = ∑_{j=1}^d ∫_0^t ∏_{k=1}^d exp(∫_0^s f_k(r) dB_r^k − (1/2) ∫_0^s f_k²(r) dr) f_j(s) dB_s^j.
If we can show that the integrand is in L2P (λT ⊗ P) for every T > 0, then Theorem 15.13
applies and shows that the stochastic integral, hence eXt , is a martingale.
We will see that we can reduce the d-dimensional setting to a one-dimensional setting.
The essential step in the proof is the analogue of the estimate on page 250, line 6 from
above. In the d-dimensional setting we have for each k = 1, . . . , d
    E[|exp(∑_{j=1}^d ∫_0^T f_j(r) dB_r^j − (1/2) ∑_{j=1}^d ∫_0^T f_j²(r) dr) f_k(T)|²]
      ⩽ C² E[e^{2 ∑_{j=1}^d ∫_0^T f_j(r) dB_r^j}]
      = C² E[∏_{j=1}^d e^{2 ∫_0^T f_j(r) dB_r^j}]
      ⩽ C² ∏_{j=1}^d (E[e^{2d ∫_0^T f_j(r) dB_r^j}])^{1/d}
by the generalized Hölder inequality with n = d and p_1 = … = p_d = d. Now the one-dimensional argument with d·f_j playing the role of f shows (cf. page 250, line 9 from above)
    E[|exp(∑_{j=1}^d ∫_0^T f_j(r) dB_r^j − (1/2) ∑_{j=1}^d ∫_0^T f_j²(r) dr) f_k(T)|²] ⩽ C² ∏_{j=1}^d (E[e^{2d ∫_0^T f_j(r) dB_r^j}])^{1/d} ⩽ C² e^{2dC²T} < ∞.
∎∎
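As a plausibility check, the stochastic exponential above has constant expectation 1. A small Monte Carlo sketch (illustration only; it uses the hypothetical bounded deterministic integrand f(s) = cos s in dimension d = 1):

```python
import numpy as np

# Sketch: M_T = exp(int_0^T f dB - 1/2 int_0^T f^2 ds) should satisfy E[M_T] = 1.
rng = np.random.default_rng(3)
T, n, paths = 1.0, 1_000, 20_000
dt = T / n
f = np.cos(np.arange(n) * dt)                    # bounded integrand, |f| <= 1

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
M_T = np.exp(dB @ f - 0.5 * np.sum(f**2) * dt)   # left-endpoint approximation
print(M_T.mean())                                # approximately 1
```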
Problem 18.2. Solution: As for a Brownian motion one can see that the independent incre-
ments property of a Poisson process is equivalent to saying that Nt − Ns á FsN for all s ⩽ t,
cf. Lemma 2.14 or Section 5.1. Thus, we have for s ⩽ t
    E(N_t − t | F_s^N) = E(N_t − N_s − (t − s) | F_s^N) + (N_s − s) = E(N_{t−s}) − (t − s) + N_s − s = N_s − s.
Observe that
    (N_t − t)² − t = (N_t − N_s − (t − s) + (N_s − s))² − t
                   = (N_t − N_s − (t − s))² + (N_s − s)² + 2(N_s − s)(N_t − N_s − t + s) − t.
Thus,
Now take E(⋯ | F_s^N) in the last equality and observe that N_t − N_s ⊥⊥ F_s^N. Then
    E((N_t − t)² − t | F_s^N) − ((N_s − s)² − s) = V N_{t−s} + 2(N_s − s) E(N_t − N_s − t + s) − (t − s)
                                                = t − s + 2(N_s − s) · 0 − (t − s) = 0.
∎∎
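Both martingale identities can be sanity-checked at a fixed time by simulation; the sketch below (an illustration with rate 1) only verifies the implied moment identities E(N_t − t) = 0 and E((N_t − t)² − t) = 0.

```python
import numpy as np

# Sketch: moment check for the compensated Poisson process N_t - t (rate 1).
rng = np.random.default_rng(4)
t, paths = 3.0, 500_000
N_t = rng.poisson(t, paths)
print((N_t - t).mean(), ((N_t - t)**2 - t).mean())   # both approximately 0
```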
Problem 18.3. Solution: We want to use Lévy’s characterization, Theorem 18.5. Clearly,
t ↦ Wt is continuous and W0 = 0. Set Ftb = σ(br ∶ r ⩽ t), Ftβ = σ(βr ∶ r ⩽ t) and
FtW = σ(br , βr ∶ r ⩽ t) = σ(Ftb , Ftβ ), and
√
λ = σ1 / σ12 + σ22 ,
√
µ = σ2 / σ12 + σ22 .
We have
proving that (Wt , FtW ) is a martingale. Similarly one shows that (Wt2 − t, FtW )t⩾0 is a
martingale. Now Theorem 18.5 applies.
∎∎
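For intuition, W_t = (σ₁ b_t + σ₂ β_t)/√(σ₁² + σ₂²) is a concrete normalized combination of two independent Brownian motions, and its marginals should again be N(0, t). A short simulation sketch (illustration only, with arbitrarily chosen σ₁, σ₂):

```python
import numpy as np

# Sketch: check Var(W_t) = t for W_t = (s1*b_t + s2*beta_t)/sqrt(s1^2 + s2^2).
rng = np.random.default_rng(5)
s1, s2, paths = 2.0, 0.5, 200_000
for t in (0.5, 1.0, 2.0):
    b = rng.normal(0.0, np.sqrt(t), paths)
    beta = rng.normal(0.0, np.sqrt(t), paths)      # independent of b
    W = (s1 * b + s2 * beta) / np.hypot(s1, s2)
    print(t, W.var())                              # approximately t
```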
By the tower property and the fact that e^{ξB(t) − (1/2)ξ²t} is a martingale we get
    ∫ ∏_{j=1}^n 1_{A_j}(B(t_j) − ξt_j) e^{ξB(T) − (1/2)ξ²T} dP
⎡ ⎤
⎢ ⎛n ⎞⎥⎥
⎢
= E ⎢E ∏ 1Aj (B(tj ) − ξtj ) e ξB(T )− 12 ξ 2 T
∣ Ftn ⎥
⎢ ⎝j=1 ⎠⎥
⎣ ⎦
⎡n ⎤
⎢ ⎥
= E ⎢⎢∏ 1Aj (B(tj ) − ξtj ) E (eξB(T )− 2 ξ T ∣ Ftn )⎥⎥
1 2
⎢j=1 ⎥
⎣ ⎦
⎡n ⎤
⎢ ⎥
= E ⎢⎢∏ 1Aj (B(tj ) − ξtj ) eξB(tn )− 2 ξ tn ⎥⎥
1 2
⎢j=1 ⎥
⎣ ⎦
⎡ ⎤
⎢ ⎛n ⎞⎥
= E ⎢⎢E ∏ 1Aj (B(tj ) − ξtj ) eξB(tn )− 2 ξ tn ∣ Ftn−1 ⎥⎥
1 2
⎢ ⎝j=1 ⎠⎥
⎣ ⎦
⎡ n−1
⎢
= E ⎢⎢ ∏ 1Aj (B(tj ) − ξtj ) eξB(tn−1 )− 2 ξ tn−1 ×
1 2
⎢ j=1
⎣
⎤
⎥
× E (1An (B(tn ) − ξtn ) e ξ(B(tn )−B(tn−1 ))− 12 ξ 2 (tn −tn−1 )
∣ Ftn−1 ) ⎥⎥
⎥
⎦
Now, since B(tn ) − B(tn−1 ) á Ftn−1 we get
    E(1_{A_n}((B(t_n) − B(t_{n−1})) − ξ(t_n − t_{n−1}) + y) e^{ξ(B(t_n)−B(t_{n−1})) − (1/2)ξ²(t_n − t_{n−1})})
      = (1/√(2π(t_n − t_{n−1}))) ∫ 1_{A_n}(x − ξ(t_n − t_{n−1}) + y) e^{ξx − (1/2)ξ²(t_n − t_{n−1})} e^{−x²/(2(t_n − t_{n−1}))} dx
      = (1/√(2π(t_n − t_{n−1}))) ∫ 1_{A_n}(x − ξ(t_n − t_{n−1}) + y) e^{−(x − ξ(t_n − t_{n−1}))²/(2(t_n − t_{n−1}))} dx
      = (1/√(2π(t_n − t_{n−1}))) ∫ 1_{A_n}(z + y) e^{−z²/(2(t_n − t_{n−1}))} dz
      = E 1_{A_n}(B(t_n) − B(t_{n−1}) + y)
Solution 2: As in the first part of Solution 1 we see that we can assume that T = tn . Since
we know the joint distribution of (B(t1 ), . . . , B(tn )), cf. (2.10b), we get (using x0 = t0 = 0)
∎∎
    = e^{−(1/2)α²t} ∫ 1_{(−∞,x]}(ξ) e^{αξ} Q(W_t ∈ dξ, sup_{s⩽t} W_s ⩽ y).
Moreover,
    Q(sup_{s⩽t} W_s < y, W_t ∈ dξ) = lim_{a→−∞} Q(inf_{s⩽t} W_s > a, sup_{s⩽t} W_s < y, W_t ∈ dξ)
                                   = (dξ/√(2πt)) [e^{−ξ²/(2t)} − e^{−(ξ−2y)²/(2t)}]   (by (6.19))
and we get the same result for Q( sups⩽t Wt ⩽ y, Wt ∈ dξ). Thus,
∎∎
    τ̂_b = inf{t ⩾ 0 : X_t ⩾ b}.
Moreover, we have
    {τ̂_b ⩽ t} = {sup_{s⩽t} X_s ⩾ b}.
Indeed,
    ω ∈ {sup_{s⩽t} X_s ⩾ b} ⟹ τ̂_b(ω) ⩽ t ⟹ ω ∈ {τ̂_b ⩽ t},
and so {τ̂_b ⩽ t} ⊃ {sup_{s⩽t} X_s ⩾ b}. Conversely,
    ω ∈ {τ̂_b ⩽ t} ⟹ τ̂_b(ω) ⩽ t ⟹ X_{τ̂_b(ω)}(ω) ⩾ b and τ̂_b(ω) ⩽ t ⟹ sup_{s⩽t} X_s(ω) ⩾ b ⟹ ω ∈ {sup_{s⩽t} X_s ⩾ b},
and so {τ̂_b ⩽ t} ⊂ {sup_{s⩽t} X_s ⩾ b}.
    P(τ̂_b > t) = P(sup_{s⩽t} X_s < b) = P(sup_{s⩽t} X_s ⩽ b) = P(X_t ⩽ b, sup_{s⩽t} X_s ⩽ b)
               = Φ((b − αt)/√t) − e^{2αb} Φ((−b − αt)/√t)   (Prob. 5)
               = Φ(b/√t − α√t) − e^{2αb} Φ(−b/√t − α√t).
Differentiating in t yields
    −(d/dt) P(τ̂_b > t) = e^{2αb} (b/(2t√t) − α/(2√t)) Φ′(−b/√t − α√t) + (b/(2t√t) + α/(2√t)) Φ′(b/√t − α√t)
      = (1/√(2π)) (e^{2αb} (b/(2t√t) − α/(2√t)) e^{−(b+αt)²/(2t)} + (b/(2t√t) + α/(2√t)) e^{−(b−αt)²/(2t)})
      = (1/√(2π)) ((b/(2t√t) − α/(2√t)) e^{−(b−αt)²/(2t)} + (b/(2t√t) + α/(2√t)) e^{−(b−αt)²/(2t)})
      = (1/√(2π)) (2b/(2t√t)) e^{−(b−αt)²/(2t)}
      = (b/(t√(2πt))) e^{−(b−αt)²/(2t)}.
Moreover,
    P(τ̂_b > t) = Φ((b − αt)/√t) − e^{2αb} Φ((−b − αt)/√t)
      → Φ(−∞) − e^{2αb} Φ(−∞) = 0           if α > 0,
        Φ(0) − e^{0} Φ(0) = 0               if α = 0,
        Φ(∞) − e^{2αb} Φ(∞) = 1 − e^{2αb}   if α < 0,
as t → ∞.
Therefore, we get
    P(τ̂_b < ∞) = 1 if α ⩾ 0, and P(τ̂_b < ∞) = e^{2αb} if α < 0.
∎∎
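The case α < 0 can be checked by a crude simulation: discretize X_t = B_t + αt on a long time window and estimate the probability that the running maximum ever reaches b. This sketch is only an illustration (time discretization and truncation bias the estimate slightly downwards).

```python
import numpy as np

# Sketch: for alpha < 0, P(hit level b) should be close to exp(2*alpha*b).
rng = np.random.default_rng(6)
alpha, b = -0.5, 1.0
T, n, paths = 50.0, 5_000, 20_000
dt = T / n

X = np.zeros(paths)
hit = np.zeros(paths, dtype=bool)
for _ in range(n):
    X += alpha * dt + rng.normal(0.0, np.sqrt(dt), paths)
    hit |= X >= b
print(hit.mean(), np.exp(2 * alpha * b))   # simulation slightly below exp(-1) = 0.368
```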
Problem 18.7. Solution: Basically, the claim follows from Lemma 18.10. Indeed, if we set
n
g(s) ∶= ∑ (ξj + . . . + ξn )1[tj−1 ,tj ) (s),
j=1
then
T n
∫ g(s) dBs = ∑ (ξj + . . . + ξn )(Btj − Btj−1 )
0 j=1
n
= ∑ ((ξj + . . . + ξn ) − (ξj+1 + . . . + ξn ))Btj
j=1
n
= ∑ ξj Btj .
j=1
If you want to be a bit more careful, you should treat the real and imaginary parts of
T T T
exp (i ∫0 g(s) dBs ) = cos (∫0 g(s) dBs ) + i sin (∫0 g(s) dBs ) separately. Let us do this for
the real part.
We apply the two-dimensional Itô-formula (17.14) to f (x, y) = cos(x)ey/2 and the process
t t
(Xt , Yt ) = (∫0 g(s) dBs , ∫0 g 2 (s) ds): Since
∂x f (x, y) = − sin(x)ey/2
∂x2 f (x, y) = − cos(x)ey/2
1
∂y f (x, y) = cos(x)ey/2
2
we get
cos(XT )eYT /2 − 1
T 1 T 1 T
= −∫ sin(Xs )eYs /2 dXs + ∫ cos(Xs )e
Ys /2
dYs − ∫ cos(Xs )eYs /2 g 2 (s) ds
0 2 0 2 0
T
= −∫ sin(Xs )eYs /2 g(s) dBs .
0
    cos(∑_{j=1}^n ξ_j B_{t_j}) = cos(∫_0^T g(s) dB_s)
                               = e^{−(1/2) ∫_0^T g²(s) ds} − ∫_0^T sin(∫_0^s g(r) dB_r) e^{−(1/2) ∫_s^T g²(r) dr} g(s) dB_s.
Since the integrand of the stochastic integral is continuous and bounded, it is clear that
it is in L2P (λT ⊗ P). Hence cos (∑nj=1 ξj Btj ) ∈ HT2 .
∎∎
Problem 18.8. Solution: Set Σt1 ,...,tn ∶= σ(Bt1 , . . . , Btn ). There are several possibilities to
prove this result.
Possibility 1: Set tn = (t1 , . . . , tn ) and Σ(tn ) = Σt1 ,...,tn . Then the family of σ-algebras
Σ(tn ) is upwards filtering, i. e. whenever we have tn and sm there is some un+m such that
Σ(sm )∪Σ(tn ) ⊂ Σ(un+m ). Therefore we can use Lévy’s (upwards) martingale convergence
theorem and conclude that
Since FTB and F̄TB differ only by trivial sets (with probability zero or one), we get a. s.
Y = E(Y ∣ F̄TB ) = E(Y ∣ FTB ) = 0.
Possibility 2: Set Σ_T := ⋃_{n⩾1} ⋃_{0⩽t_1<⋯<t_n=T} Σ_{t_1,…,t_n}. Then σ(Σ_T) = F_T^B and Σ_T is stable under
intersections. Consider the measures
µ± (F ) ∶= ∫ Y ± d P ∀F ∈ ΣT .
F
If we add all P-null sets to Σ_T, the above considerations remain valid (without changes!) and we get ∫_F Y dP = 0 for all F ∈ F̄_T^B, hence Y = 0 as Y is F̄_T^B measurable.
∎∎
Problem 18.9. Solution: Because of the properties of conditional expectations we have for
s⩽t
M áG∞
E (Mt ∣ Hs ) = E (Mt ∣ σ(Fs , Gs )) = E (Mt ∣ Fs ) = Ms .
Thus, (Mt , Ht )t⩾0 is still a martingale; (Bt , Ht )t⩾0 is treated in a similar way.
∎∎
we get
    inf{t : a(t) ⩾ s} ⩾ inf{t : a(t) > s − ε} ⩾ inf{t : a(t) ⩾ s − ε} for every ε > 0,
and so
    inf{t : a(t) ⩾ s} ⩾ lim_{ε↓0} inf{t : a(t) > s − ε} ⩾ lim_{ε↓0} inf{t : a(t) ⩾ s − ε} = lim_{ε↓0} τ(s − ε) = τ(s−).
Thus, inf{t ∶ a(t) ⩾ s} ⩾ τ (s−). Assume that inf{t ∶ a(t) ⩾ s} > τ (s−). Then
∎∎
c) s+1/n
n⩾1 n⩾1
Problem 18.12. Solution: Solution 1: Assume that f ∈ C2 . Then we can apply Itô’s formula.
Use Itô’s formula for the deterministic process Xt = f (t) and apply it to the function xa
(we assume that f ⩾ 0 to make sure that f a is defined for all a > 0):
t d a t
f a (t) − f a (0) = ∫ [ x ] df (s) = ∫ af a−1 (s) df (s).
0 dx x=f (s) 0
This proves that the primitive ∫ f a−1 df = f a /a. The rest is an approximation argument
(f ∈ C1 is pretty immediate).
Solution 2: Any absolutely continuous function has a Lebesgue a.e. defined derivative f′ and f = ∫ f′ ds. Thus,
    ∫_0^t f^{a−1}(s) df(s) = ∫_0^t f^{a−1}(s) f′(s) ds = ∫_0^t (1/a) (d/ds) f^a(s) ds = [f^a(s)/a]_0^t = (f^a(t) − f^a(0))/a.
∎∎
t
Proof. Let Xt = ∑k ∫0 fk (s) dBsk . Then we have
t t
⟨X⟩t = ⟨∑ ∫ fk (s) dBsk , ∑ ∫ fl (s) dBsl ⟩
k 0 l 0
t t
= ∑ ⟨∫ fk (s) dBsk , ∫ fl (s) dBsl ⟩
k,l 0 0
t
= ∑∫ fk (s)fl (s) d ⟨B k , B l ⟩s
k,l 0
t
= ∑∫ fk2 (s) ds
k 0
With these notations, the proof of Theorem 17.16 goes through almost unchanged and we
get the inequalities for p ⩾ 2.
Remark: Often one needs only one direction (as we do later in the book) and one can use
18.20 directly, without going through the proof again. Note that
    |∑_{k=1}^d ∫_0^t f_k(s) dB_s^k|^p ⩽ (∑_{k=1}^d |∫_0^t f_k(s) dB_s^k|)^p ⩽ c_{d,p} ∑_{k=1}^d |∫_0^t f_k(s) dB_s^k|^p.
Thus, by (18.20),
    E[sup_{t⩽T} |∑_{k=1}^d ∫_0^t f_k(s) dB_s^k|^p] ⩽ c_{d,p} ∑_{k=1}^d E[sup_{t⩽T} |∫_0^t f_k(s) dB_s^k|^p]
                                                 ≍ c_{d,p} ∑_{k=1}^d E[(∫_0^T |f_k(s)|² ds)^{p/2}]
                                                 ≍ c_{d,p} E[(∫_0^T ∑_{k=1}^d |f_k(s)|² ds)^{p/2}].
∎∎
19 Stochastic differential equations
where b, σ are non-random coefficients such that the corresponding (stochastic) integrals
exist. Obviously,
(dXt )2 = σ 2 (t) (dBt )2 = σ 2 (t) dt
and so
    E(e^{iξ(X_t − X_s)} 1_F) = P(F) e^{iξ ∫_s^t b(r) dr − (1/2)ξ² ∫_s^t σ²(r) dr}.   (*)
i.e. (X_{t_1}, X_{t_2} − X_{t_1}, …, X_{t_n} − X_{t_{n−1}}) is a Gaussian random vector with independent components. Since X_{t_k} = ∑_{j=1}^k (X_{t_j} − X_{t_{j−1}}) we see that (X_{t_1}, …, X_{t_n}) is a Gaussian random vector.
In fact, since the mean is not zero, it would have been more elegant to compute the
covariance
s
Cov(Xs , Xt ) = E(Xs − µs )(Xt − µt ) = E(Xs Xt ) − E Xs E Xt = V Xs = ∫ σ 2 (r) dr.
0
∎∎
Observe that B(t_j) − B(t_{j−1}) ∼ N(0, 2^{−n}) for all j and A ∼ N(0, 1). Because of the independence we get
    X_n(1/2) ∼ N(0, (1 − 2^{−n−1})^{2^n} + ∑_{j=1}^{2^n−1} (1 − 2^{−n−1})^{2j} · 2^{−n}).
Using
    lim_{n→∞} (1 − 2^{−n−1})^{2^n} = e^{−1/2}
and
    ∑_{j=1}^{2^n−1} (1 − 2^{−n−1})^{2j} · 2^{−n} = ((1 − (1 − 2^{−n−1})^{2^n}) / (1 − (1 − 2^{−n−1})²)) · 2^{−n} = (1 − (1 − 2^{−n−1})^{2^n}) / (1 − 2^{−n−2}) → 1 − e^{−1/2}
finally shows that X_n(1/2) → X ∼ N(0, 1) in distribution as n → ∞.
n→∞
(b) The solution of this SDE follows along the lines of Example 19.7 where α(t) ≡ 0,
β(t) ≡ − 12 , δ(t) ≡ 0 and γ(t) ≡ 1:
    dX_t° = (1/2) X_t° dt ⟹ X_t° = e^{t/2},
    Z_t = e^{t/2} X_t, Z_0 = X_0,
    dZ_t = e^{t/2} dB_t ⟹ Z_t = Z_0 + ∫_0^t e^{s/2} dB_s,
    X_t = e^{−t/2} A + e^{−t/2} ∫_0^t e^{s/2} dB_s.
For t = 1/2 we get
    X_{1/2} = A e^{−1/4} + e^{−1/4} ∫_0^{1/2} e^{s/2} dB_s
= e−(t−s)/2 .
∎∎
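Since Var(X_t) = e^{−t} + e^{−t}(e^t − 1) = 1 for every t, the claim X_{1/2} ∼ N(0, 1) can also be read off the explicit solution; the sketch below (illustration only) simulates the formula and checks mean and variance.

```python
import numpy as np

# Sketch: X_t = exp(-t/2)*A + exp(-t/2) * int_0^t exp(s/2) dB_s with A ~ N(0,1)
# independent of B should again be N(0,1) for every t; check at t = 1/2.
rng = np.random.default_rng(7)
t, n, paths = 0.5, 500, 20_000
dt = t / n
s = np.arange(n) * dt

A = rng.normal(0.0, 1.0, paths)
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
X_t = np.exp(-t / 2) * A + np.exp(-t / 2) * (dB @ np.exp(s / 2))
print(X_t.mean(), X_t.var())                     # approximately 0 and 1
```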
where
    W_t := (σ₁/√(σ₁² + σ₂²)) b_t + (σ₂/√(σ₁² + σ₂²)) β_t
is, by Problem 18.3, a BM1 . This reduces the problem to a geometric Brownian motion
as in Example a:
    X_t = x exp([b − (1/2)(σ₁² + σ₂²)] t + √(σ₁² + σ₂²) W_t)
        = x exp([b − (1/2)(σ₁² + σ₂²)] t + σ₁ b_t + σ₂ β_t).
2
    Z_t − Z_0 = ∫_0^t (1/X_s) dX_s + (1/2) ∫_0^t (−1/X_s²) (dX_s)²
              = ∫_0^t b ds + ∫_0^t σ₁ db_s + ∫_0^t σ₂ dβ_s − (1/2) ∫_0^t (σ₁² + σ₂²) ds
              = (b − (1/2)(σ₁² + σ₂²)) t + σ₁ b_t + σ₂ β_t.
2
Since, by assumption,
Consequently,
1
Xt = x exp (σ1 bt + σ2 βt + (b − (σ12 + σ22 )) t) .
2
A direct calculation shows that Xt is indeed a solution of the given SDE.
∎∎
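Such a direct calculation can be complemented by a numerical comparison: an Euler–Maruyama discretization of the SDE should track the closed-form solution for small step sizes. A minimal sketch (illustration only, with arbitrarily chosen parameters):

```python
import numpy as np

# Sketch: Euler-Maruyama for dX = b*X dt + s1*X db + s2*X dbeta versus the
# closed form X_T = x*exp((b - (s1^2+s2^2)/2)*T + s1*b_T + s2*beta_T).
rng = np.random.default_rng(8)
x, b, s1, s2 = 1.0, 0.1, 0.3, 0.4
T, n = 1.0, 100_000
dt = T / n

db = rng.normal(0.0, np.sqrt(dt), n)
dbeta = rng.normal(0.0, np.sqrt(dt), n)

X = x
for k in range(n):
    X += b * X * dt + s1 * X * db[k] + s2 * X * dbeta[k]

exact = x * np.exp((b - 0.5 * (s1**2 + s2**2)) * T + s1 * db.sum() + s2 * dbeta.sum())
print(X, exact)                                  # close for small dt
```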
Problem 19.4. Solution: Since Xt○ is such that 1/Xt○ solves the homogeneous SDE from
Example 19.6, we see that
    X_t° = exp(−∫_0^t (β(s) − (1/2)δ²(s)) ds) exp(−∫_0^t δ(s) dB_s)
Remark:
1. We used here the two-dimensional Itô formula (17.14) but we could have equally well
used the one-dimensional version (17.13) with the Itô process It1 + It2 .
2. Observe that Itô’s multiplication table gives us exactly the second-order term in
(17.14).
Since
dZt = (α(t) − γ(t)δ(t))Xt○ dt + γ(t)Xt○ dBt and Xt = Zt /Xt○
we get
    X_t = (1/X_t°) (X_0 + ∫_0^t (α(s) − γ(s)δ(s)) X_s° ds + ∫_0^t γ(s) X_s° dB_s).
∎∎
(a) We have X_t = e^{−βt} X_0 + ∫_0^t σ e^{−β(t−s)} dB_s. This can be shown in four ways:
Solution 1: you guess the right result and use Itô’s formula (17.7) to verify that the
above Xt is indeed a solution to the SDE. For this rewrite the above solution as
t
eβt Xt = X0 + ∫ σeβs dBs Ô⇒ d(eβt Xt ) = σeβt dBt .
0
Now with the two-dimensional Itô formula for f (x, y) = xy and the two-dimensional
Itô-process (eβt , Xt ) we get
so that
βXt eβt dt + eβt dXt = σeβt dBt ⇐⇒ dXt = −βXt dt + σ dBt .
Admittedly, this is unfair as one has to know the solution beforehand. On the other
hand, this is exactly the way one verifies that the solution one has found is the
correct one.
Solution 2: you apply the time-dependent Itô formula from Problem 17.6 or the two-
dimensional Itô formula, Theorem 17.8 to
t
Xt = u(t, It ) and It = ∫ eβs dBs and u(t, x) = eβt X0 + σeβt x
0
to get—as dt dBt = 0—
Again, this is best for the verification of the solution since you need to know its form
beforehand.
Solution 3: you use Example 19.7 with α(t) ≡ 0, β(t) ≡ −β, γ(t) ≡ σ and δ(t) ≡ 0.
But, honestly, you will have to look up the formula in the book. We get
t
Zt = σ ∫ eβs dBs + Z0 ;
0
t
Xt = e−βt ξ + e−βt σ ∫ eβs dBs , t ⩾ 0.
0
Solution 4: by bare hands and with Itô’s formula! Consider first the deterministic
ODE
t
xt = x0 − β ∫ xs ds
0
which has the solution xt = x0 e−βt , i. e. eβt xt = x0 = const. This indicates that the
transformation
Yt ∶= eβt Xt
By assumption,
    Y_t − Y_0 = ∫_0^t ( f_t(s, X_s) − βX_s f_x(s, X_s) + (1/2)σ² f_xx(s, X_s) ) ds + ∫_0^t σ f_x(s, X_s) dB_s
              = ∫_0^t σ f_x(s, X_s) dB_s,
since the integrand of the ds-integral vanishes.
So we have the solution, but we still have to go through the procedure in Solution 1
or 2 in order to verify our result.
(b) Since Xt is the limit of normally distributed random variables, it is itself Gaussian (see
also part d))—if ξ is non-random or itself Gaussian and independent of everything
else. In particular, if X0 = ξ = const.,
Now
σ 2 −β(t+s) 2βs
C(s, t) = E Xs Xt = e−β(t+s) ξ 2 + e (e − 1), t ⩾ s ⩾ 0,
2β
and, therefore
σ 2 −β∣t−s∣
C(s, t) = e−β(t+s) ξ 2 + (e − e−β(t+s) ) for all s, t ⩾ 0.
2β
(d) We have
⎡ n ⎤
⎛ ⎢ ⎥⎞
E exp ⎢i ∑ λj Xtj ⎥⎥
⎢
⎝ ⎢ j=1 ⎥⎠
⎣ ⎦
⎡ n ⎤
⎛ ⎢ n tj ⎥⎞
= E exp ⎢⎢i ∑ λj e−βtj ξ + iσ ∑ λj e−βtj ∫ eβs dBs ⎥⎥
⎝ ⎢ j=1 0 ⎥⎠
⎣ j=1 ⎦
⎛ σ 2 ⎡⎢ n ⎤2 ⎞
⎥ ⎛
⎡ n
⎢
⎤
⎥⎞
= exp ⎜− ⎢⎢ ∑ λj e−βtj ⎥⎥ ⎟ E exp ⎢⎢iσ ∑ ηj Yj ⎥⎥
⎝ 4β ⎢⎣j=1 ⎥ ⎠ ⎝ ⎢ j=1 ⎥⎠
⎦ ⎣ ⎦
where
tj
ηj = λj e−βtj , Yj = ∫ eβs dBs , t0 = 0, Y0 = 0.
0
Moreover,
n n n
∑ ηj Yj = ∑ (Yk − Yk−1 ) ∑ ηj
j=1 k=1 j=k
and
tk
Yk − Yk−1 = ∫ eβs dBs ∼ N(0, (2β)−1 (e2βtk − e2βtk−1 )) are independent.
tk−1
Note: the distribution of (Xt1 , . . . , Xtn ) depends on the difference of the consecutive
epochs t1 < . . . < tn .
and we show that both processes have the same finite-dimensional distributions.
Clearly, both processes are Gaussian and both have independent increments. From
and for s ⩽ t
t
X̃t − X̃s = σ ∫ eβr dBr
s
2
∼ N(0, 2β
σ
(e2βt − e2βs )),
σ
Ũt − Ũs = √ (B(e2βt − 1) − B(e2βs − 1))
2β
σ
∼ B(e2βt − e2βs )
2β
σ 2 2βt
∼ N(0, 2β
(e − e2βs ))
∎∎
Problem 19.6. Solution: We use the time-dependent Itô formula from Problem 17.6 (or the two-dimensional Itô formula for the process (t, X_t)) with f(t, x) = e^{ct} ∫_0^x dy/σ(y). Note that the parameter c is still a free parameter.
Thus,
Let us show that the expression in the brackets [⋯] is constant if we choose c appropriately. For this we differentiate the expression:
    (d/dx) [ c ∫_0^x dy/σ(y) − (1/2)σ′(x) + b(x)/σ(x) ] = c/σ(x) − (d/dx) [ (1/2)σ′(x) − b(x)/σ(x) ]
                                                        = c/σ(x) − [ (1/2)σ″(x) − (d/dx)(b(x)/σ(x)) ]
                                                        = (1/σ(x)) ( c − σ(x) [ (1/2)σ″(x) − (d/dx)(b(x)/σ(x)) ] ),
and σ(x)[(1/2)σ″(x) − (d/dx)(b(x)/σ(x))] is constant by assumption. This shows that we should choose c in such a way that the expression c − σ(x)·[⋯] becomes zero, i.e.
    c = σ(x) [ (1/2)σ″(x) − (d/dx)(b(x)/σ(x)) ].
∎∎
Using the time-dependent Itô formula (cf. Problem 17.6) or the two-dimensional Itô for-
mula (cf. Theorem 17.8) for the process (t, Bt ) we get
Together with the initial condition X0 = 0 this is the SDE which has Xt = tBt as solution.
The trouble is, that the solution is not unique! To see this, assume that Xt and Yt are
any two solutions. Then
Xt Yt Zt
dZt ∶= d(Xt − Yt ) = dXt − dYt = ( − ) dt = dt, Z0 = 0.
t t t
This is an ODE and all (deterministic) processes Zt = ct are solutions with initial condition
Z0 = 0. If we want to enforce uniqueness, we need a condition on Z0′ . So
Xt d
dXt = dt + t dBt and Xt ∣ = x′0
t dt t=0
∎∎
Then we get
    X_t° = X_0° exp((1/2)σ²t − σB_t),
    Z_t = ∫_0^t b X_s° ds.
Thus,
    Z_t = ∫_0^t b e^{(1/2)σ²s − σB_s} ds
and
    X_t = Z_t / X_t° = b e^{−(1/2)σ²t + σB_t} ∫_0^t e^{(1/2)σ²s − σB_s} ds.
Using Zt ∶= log Xt and Itô’s formula (or simply Example we see that this equation
has the unique solution
    X_t = x_0 e^{−(1/2)σ²t + σB_t},
Because of the form of the homogeneous solution, we now use stochastic integration
by parts1
1
Note this formula can be shown by applying Itô’s formula on f (x, x0 ) = xx0 .
So we need to find dXt0 . Using f (y) = ey for the process Yt = 12 σ 2 t − σBt gives
1
dXto = df (Yt ) = f ′ (Yt ) dYt + f ′′ (Yt )(dYt )2
2
o 1 2 1
= Xt ( σ dt − σ dBt + σ 2 dt)
2 2
= σ 2 Xt0 dt − σXto dBt .
Then we get
dXt○ = Xt○ dt
dZt = mXt○ dt + σXt○ dBt
Thus,
Xt○ = X0○ et
t t
Zt = ∫ m es ds + σ ∫ es dBs
0 0
t
= m (et − 1) + σ ∫ es dBs
0
    X_t = Z_t / X_t° = m (1 − e^{−t}) + σ ∫_0^t e^{s−t} dB_s
ẋ(t) = −x(t)
x(t) = x0 e−t .
1
df (t, Xt ) = ∂t f (t, Xt ) dt + ∂x f (t, Xt ) dXt + ∂x2 f (t, Xt )(dXt )2
2
= Xt e dt + e ((m − Xt ) dt + σ dBt ) + 0
t t
= et (m dt + σ dBt ) .
Hence,
t t
Xt et − X0 = m ∫ es ds +σ ∫ es dBs
0 0
´¹¹ ¹ ¹ ¹ ¹ ¹ ¸¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
= (et −1)
or
    X_t = X_0 e^{−t} + m (1 − e^{−t}) + σ ∫_0^t e^{s−t} dB_s.
∎∎
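The closed form can be compared against a simple Euler–Maruyama discretization of dX_t = (m − X_t) dt + σ dB_t driven by the same Brownian increments; the sketch below is only an illustration with arbitrarily chosen parameters.

```python
import numpy as np

# Sketch: Euler-Maruyama for dX = (m - X) dt + sigma dB versus the closed form
# X_T = X_0*exp(-T) + m*(1 - exp(-T)) + sigma * int_0^T exp(s - T) dB_s.
rng = np.random.default_rng(9)
x0, m, sigma = 2.0, 1.0, 0.5
T, n = 1.0, 100_000
dt = T / n
s = np.arange(n) * dt

dB = rng.normal(0.0, np.sqrt(dt), n)

X = x0
for k in range(n):
    X += (m - X) * dt + sigma * dB[k]

exact = x0 * np.exp(-T) + m * (1 - np.exp(-T)) + sigma * np.exp(-T) * (np.exp(s) * dB).sum()
print(X, exact)                                  # close for small dt
```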
    σ′(x) = x/√(1 + x²) and κ(x) = b(x)/σ(x) − (1/2)σ′(x) = 1.
= dt + dBt ,
and so Zt = Z0 + t + Bt . Finally,
arsinh x(t) = t + c,
    = −dt + (1/√(1 + X_t²)) [ (√(1 + X_t²) + (1/2)X_t) dt + √(1 + X_t²) dB_t ] − (X_t/(2√(1 + X_t²))) dt
    = −dt + dt + (X_t/(2√(1 + X_t²))) dt + dB_t − (X_t/(2√(1 + X_t²))) dt
    = dB_t,
∎∎
Problem 19.11. Solution: Set b = b(t, x), b_0 = b(t, 0) etc. Observe that ∥b∥ = (∑_j |b_j(t, x)|²)^{1/2} and ∥σ∥ = (∑_{j,k} |σ_{jk}(t, x)|²)^{1/2} are norms; therefore, we get, using the triangle estimate and the elementary inequality (a + b)² ⩽ 2(a² + b²),
∥b∥2 + ∥σ∥2 = ∥b − b0 + b0 ∥2 + ∥σ − σ0 + σ0 ∥2
⩽ 2∥b − b0 ∥2 + 2∥σ − σ0 ∥2 + 2∥b0 ∥2 + 2∥σ0 ∥2
⩽ 2L2 ∣x∣2 + 2∥b0 ∥2 + 2∥σ0 ∥2
⩽ 2L2 (1 + ∣x∣)2 + 2(∥b0 ∥2 + ∥σ0 ∥2 )(1 + ∣x∣)2
⩽ 2(L2 + ∥b0 ∥2 + ∥σ0 ∥2 )(1 + ∣x∣)2 .
∎∎
(a) If b(x) = −ex and X0x = x we have to solve the following ODE/integral equation
t x
Xtx = x − ∫ eXs ds
0
(b) Now assume that ∣b(x)∣ + ∣σ(x)∣ ⩽ M for all x. Then we have
t
∣∫ b(Xs ) ds∣ ⩽ M t.
0
⩽ 2(M t)2 + 2M 2 t
= 2M 2 t(t + 1).
By Fatou’s lemma
⎛ ⎞
E lim ∣Xtx − x∣2 ⩽ lim E(∣Xtx − x∣2 ) ⩽ 2M 2 t(t + 1)
⎝∣x∣→∞ ⎠ ∣x∣→∞
(c) Assume now that b(x) and σ(x) grow like ∣x∣p/2 for some p ∈ (0, 2). A calculation as
above yields
t 2 Cauchy t t
∣∫ b(Xs ) ds∣ ⩽ t∫ ∣b(Xs )∣2 ds ⩽ cp t ∫ (1 + ∣Xs ∣p ) ds
0 Schwarz 0 0
Again by Fatou’s theorem we see that the left-hand side grows like ∣x∣2 (if Xtx is
unbounded) while the (larger!) right-hand side grows like ∣x∣p , p < 2, and this is
impossible.
Thus, (Xtx )x is unbounded as ∣x∣ → ∞.
∎∎
Consequently,
Tt f (x) = ∫ f (y) p(t, x; dy) = E(f (Xtx )).
By Theorem 19.27 we know that x ↦ Tt f (x) = E(f (Xtx )) is continuous for each
f ∈ Cb (R). Since
∣Tt f (x)∣ ⩽ E ∣f (Xtx )∣ ⩽ ∥f ∥∞
1
+ ∂x2 u(s, Xsx )σ 2 (Xsx )) ds.
2
We have shown in part (c) that (for an extension of A)
1
Ax u(s, Xsx ) = b(Xsx )∂x u(s, Xsx ) + σ 2 (Xsx )∂x2 u(s, Xsx ) for all fixed s ⩾ 0.
2
Consequently,
t t
u(t, Xtx ) − u(0, x) = ∫ ∂x u(s, Xsx )σ(Xsx ) dBs + ∫ (∂t u(s, Xsx ) + Ax u(s, Xsx )) ds.
0 0
and, since everything is bounded and since E τ < ∞, dominated convergence proves
the claim.
Remark: A close inspection of our argument reveals that we do not need boundedness
of b, σ if we replace E τ < ∞ by
    E(∫_0^τ σ²(X_s^x) ds) + E(∫_0^τ |b(X_s^x)| ds) < ∞.
∎∎
20 Stratonovich’s stochastic calculus
Problem 20.1. Solution: We have, using the time-dependent Itô formula (17.18) with d =
m = 1,
df (t, Xt ) = ∂t f (t, Xt ) dt + ∂x f (t, Xt )b(t) dt + ∂x f (t, Xt )σ(t) dBt + 21 ∂x2 f (t, Xt )σ 2 (t) dt
and this is exactly what we would get if we used the ordinary calculus rules, pretending that dB_t = β̇_t dt and applying the usual chain rule.
The solution to the SDE is now given by
τ ∶= inf {t ⩾ 0 ∶ u(Bt ) = 0} ,
Note that the fact that σ(0) = 0 means that in the SDE Xt cannot move once it reaches
0. The pictures above illustrate that
dξ
∫ = ∞ Ô⇒ τ = ∞
0+ σ(ξ)
dξ
∫ < ∞ Ô⇒ τ < ∞.
0+ σ(ξ)
∎∎
Problem 20.3. Solution: Denote by Lf , Lg the global Lipschitz constants and observe that
the global Lipschitz property entails linear growth:
∎∎
21 On diffusions
1 d d
Au = Lu = ∑ aij ∂i ∂j u + ∑ bi ∂i u
2 i,j=1 i=1
and χ ∈ C_c^∞(R^d) such that χ|_{B(0,R)} ≡ 1. For u(x) = x_i x_j we get
    L(uχ)(x) = a_{ij}(x) + x_j b_i(x) + x_i b_j(x) for all |x| < R ⟹ a_{ij}|_{B(0,R)} is continuous.
∎∎
∂ 2 p(t, x, y)
∣ ∣ ⩽ C(t) for all x, y ∈ Rd
∂xj ∂xk
∂ 2 p(t, x, y)
∣ u(y)∣ ⩽ C(t) ∣u(y)∣ ∈ L1 (Rd ) (*)
∂xj ∂xk
∂2 ∂2
∫ p(t, x, y) u(y) dy = ∫ p(t, x, y) u(y) dy.
∂xj ∂xk ∂xj ∂xk
Moreover, (*) and the fact that p(t, ⋅, y) ∈ C∞ (Rd ) allow us to change limits and integrals
to get for x → x0 and ∣x∣ → ∞
∂2 ∂2
lim ∫ p(t, x, y) u(y) dy = ∫ lim p(t, x, y) u(y) dy
x→x0 ∂xj ∂xk x→x0 ∂xj ∂xk
∂2
=∫ p(t, x0 , y) u(y) dy
∂xj ∂xk
Ô⇒ Tt maps C∞
c (R ) into C(R );
d d
∂2 ∂2
lim ∫ p(t, x, y) u(y) dy = ∫ lim p(t, x, y) u(y) dy = 0
∣x∣→∞ ∂xj ∂xk ∣x∣→∞ ∂xj ∂xk
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶
=0
Ô⇒ Tt maps C∞
c (R ) into C∞ (R ).
d d
Addition: With a standard uniform boundedness and density argument we can show that
Tt maps C∞ into C∞ : fix u ∈ C∞ (Rd ) and pick a sequence (un )n ⊂ C∞
c (R ) such that
d
lim ∥u − un ∥∞ = 0.
n→∞
Then we get
∎∎
Problem 21.3. Solution: Let u ∈ C2∞ . Then there is a sequence of test functions (un )n ⊂ C∞
c
such that ∥un − u∥(2) → 0. Thus, un → u uniformly and A(un − um ) → 0 uniformly. The
closedness now gives u ∈ D(A).
∎∎
= ∑∫ u ⋅ ∂i ∂j (aij φ) dx − ∑ ∫ u ⋅ ∂j (bj φ) dx + ∫ u ⋅ cφ dx
int by
parts
i,j Rd j Rd Rd
= ⟨u, L∗ φ⟩L2
where
L∗ (x, Dx )φ(x) = ∑ ∂i ∂j (aij (x)φ(x)) − ∑ ∂j (bj (x)φ(x)) + c(x)φ(x).
ij j
Now assume that we are in (t, x) ∈ [0, ∞) × Rd —the case R × Rd is easier, as we have no
boundary term. Consider L + ∂t = L(x, Dx ) + ∂t for sufficiently smooth u = u(t, x) and
φ = φ(t, x) with compact support in [0, ∞) × Rd . We find
∞
∫ ∫ (L + ∂t )u(t, x) ⋅ φ(t, x) dx dt
0 Rd
∞ ∞
=∫ ∫ Lu(t, x) ⋅ φ(t, x) dx dt + ∫ ∫ ∂t u(t, x) ⋅ φ(t, x) dx dt
0 Rd 0 Rd
∞ ∞
=∫ ∫ Lu(t, x) ⋅ φ(t, x) dx dt + ∫ ∫ ∂t u(t, x) ⋅ φ(t, x) dt dx
0 Rd Rd 0
∞ ∞ ∞
=∫ ∫ u(t, x) ⋅ L∗ φ(t, x) dx dt + ∫ (u(t, x)φ(t, x)∣ −∫ u(t, x) ⋅ ∂t φ(t, x) dt) dx
0 Rd Rd t=0 0
∞ ∞
=∫ ∫ u(t, x) ⋅ L∗ φ(t, x) dx dt − ∫ (u(0, x)φ(0, x) + ∫ u(t, x) ⋅ ∂t φ(t, x) dt) dx.
0 Rd Rd 0
    (d/dt) T_t u(x) = T_t L(⋅, D)u(x)
    ⟹ (d/dt) ∫ p(t, x, y) u(y) dy = ∫ p(t, x, y) L(y, D_y)u(y) dy
    ⟹ ∫ (d/dt) p(t, x, y) u(y) dy = ∫ p(t, x, y) L(y, D_y)u(y) dy.
The change of differentiation and integration can easily be justified by a routine application
of the differentiation lemma (e.g. Schilling [15, Theorem 11.5, pp. 92–93]): under our
assumptions we have for all ε ∈ (0, 1) and R > 0
    sup_{t∈[ε,1/ε]} sup_{|x|⩽R} |(d/dt) p(t, x, y) u(y)| ⩽ C(ε, R) |u(y)| ∈ L¹(R^d).
Inserting the expression for the differential operator L(y, Dy ), we find for the right-hand
side
∎∎
Problem 21.6. Solution: Problem 6.2 shows that Xt is a Markov process. The continuity of
the sample paths is obvious and so is the Feller property (using the form of the transition
function found in the solution of Problem 6.2).
t
Let us calculate the generator. Set It = ∫0 Bs ds. The semigroup is given by
t
Tt u(x, y) = Ex,y u(Bt , It ) = E u (Bt + x, ∫0 (Bs + x) ds + y) = E u(Bt + x, It + tx + y).
If we differentiate the expression under the expectation with respect to t, we get with the
help of Itô’s formula
∎∎
Problem 21.7. Solution: We assume for a) and b) that the operator L is more general than
written in (21.1), namely
1 d ∂ 2 u(x) d ∂u(x)
Lu(x) = ∑ aij (x) + ∑ bj (x) + c(x)u(x)
2 i,j=1 ∂xi ∂xj j=1 ∂xj
(a) If u has compact support, then Lu has compact support. Since, by assumption, the
coefficients of L are continuous, Lu is bounded, hence Mtu is square integrable.
Obviously, Mtu is Ft measurable. Let us establish the martingale property. For this
we fix s ⩽ t. Then
t
Ex (Mtu ∣ Fs ) = Ex (u(Xt ) − u(X0 ) − ∫ Lu(Xr ) dr ∣ Fs )
0
t
= Ex (u(Xt ) − u(Xs ) − ∫ Lu(Xr ) dr ∣ Fs )
s
s
+ u(Xs ) − u(X0 ) − ∫ Lu(Xr ) dr
0
t−s
= Ex (u(Xt ) − u(Xs ) − ∫ Lu(Xr+s ) dr ∣ Fs ) + Msu
0
t−s
= EXs (u(Xt−s ) − u(X0 ) − ∫ Lu(Xr ) dr) + Msu .
Markov
property 0
Observe that Tt u(y) = Ey u(Xt ) is the semigroup associated with the Markov process.
Then
t−s
Ey (u(Xt−s ) − u(X0 ) − ∫ Lu(Xr ) dr)
0
t−s
= Tt−s u(y) − u(y) − ∫ Ey (Lu(Xr )) dr = 0
0
by Lemma 7.10, see also Theorem 7.30. This shows that Ex (Mtu ∣ Fs ) = Msu , and we
are done.
χ∣B(x, R) ≡ 1. Then for all f ∈ C2 (Rd ) we have χf ∈ C2c (Rd ) and it is not hard to see
that the calculation in part a) still holds for such functions.
Set τ = τRx = inf{t > 0 ∶ ∣Xt − x∣ ⩾ R}. This is a stopping time and we have
Moreover,
1
L(χf ) = ∑ aij ∂i ∂j (χf ) + ∑ bi ∂i (χf ) + cχf
2 i,j i
1
= ∑ aij (f ∂i ∂j χ + χ∂i ∂j f + ∂i χ∂j f + ∂i f ∂j χ) + ∑ bi (f ∂i χ + χ∂i f ) + cχf
2 i,j i
    E^x(M^f_{t∧τ_R} | F_s) = E^x(M^{χf}_{t∧τ_R} | F_s) = M^{χf}_{s∧τ_R} = M^f_{s∧τ_R}.
(c) A diffusion operator L satisfies that c = 0. Thus, the calculation for L(χf ) in part
b) shows that
For the first we note that d⟨M u , M φ ⟩t = dMtu dMtφ (by the definition of the bracket
process) and the latter we can calculate with the rules for Itô differentials. We have
By definition,
Thus, using that all terms containing (dt)2 and dBtk dt are zero, we get
dMtu dMtφ = ∑ ∑ ∂j u(Xt )∂l φ(Xt )σjk (Xt )σlm (Xt ) dBtk dBtm
j,k l,m
where ajl = ∑k σjk (Xt )σlk (Xt ) = (σσ ⊺ )jl . (x ⋅ y denotes the Euclidean scalar product
and ∇ = (∂1 , . . . , ∂d )⊺ .)
implies
t t 2
(Mtu )2 = u2 (Xt ) − 2Mtu ∫ Lu(Xr ) dr − (∫ Lu(Xr ) dr) .
0 0
Part a) shows that
t
u2 (Xt ) − ∫ L(u2 )(Xr ) dr
0
is a martingale. Moreover, since (Mtu )t⩾0 is a martingale, we obtain by the tower
property
t
Ex (2Mtu ∫ Lu(Xr ) dr ∣ Fs )
0
s t
= 2 Ex (Mtu ∣ Fs ) ∫ Lu(Xr ) dr + 2 ∫ Ex ( Ex (Mtu Lu(Xr ) ∣ Fr ) ∣ Fs ) dr
0 s
s t
= 2Msu ∫ Lu(Xr ) dr + 2 Ex (∫ Mru Lu(Xr ) dr ∣ Fs ) . (*)
0 s
using that
t r t 2
2∫ ∫ Lu(Xv )Lu(Xr ) dv dr = (∫ Lu(Xr ) dr) .
s s s
is a martingale. Consequently,
t
⟨M u ⟩t = ∫ (L(u2 ) − 2uLu)(Xr ) dr.
0
This proves the first equality for u = φ. The formula for the quadratic covariation
⟨M u , M φ ⟩ follows by using polarization, i. e.
1 1
⟨M u , M φ ⟩t = (⟨M u + M φ ⟩t − ⟨M u − M φ ⟩t ) = (⟨M u+ϕ ⟩t − ⟨M u−ϕ ⟩t ).
4 4
∎∎
Bibliography
[1] Böttcher, B., Schilling, R.L., Wang, J.: Lévy-Type Processes: Construction, Approxi-
mation and Sample Path Properties. Springer Lecture Notes in Mathematics vol. 2099,
(vol. III of the “Lévy Matters” subseries). Springer, Berlin 2014.
[2] Chung, K.L.: Lectures from Markov Processes to Brownian Motion. Springer, New York
1982. Enlarged and reprinted as [3].
[3] Chung, K.L., Walsh, J.B.: Markov Processes, Brownian Motion, and Time Symmetry
(Second Edition). Springer, Berlin 2005. This is the 2nd edn. of [2].
[5] Engel, K.-J., Nagel, R.: One-Parameter Semigroups for Linear Evolution Equations.
Springer, New York 2000.
[6] Falconer, K.: The Geometry of Fractal Sets. Cambridge University Press, Cambridge
1985.
[7] Falconer, K.: Fractal Geometry. Mathematical Foundations and Applications. Second Edi-
tion. Wiley, Chichester 2003.
[8] Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products (4th corrected and
enlarged edn). Academic Press, San Diego 1980.
[9] Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Springer, New York
1988.
[10] Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equa-
tions. Springer, New York 1983.
[11] Protter, P.E.: Stochastic Integration and Differential Equations (Second Edn.). Springer,
Berlin 2004.
[12] A. Rényi: Probability Theory. Dover, Mineola (NY) 2007. (Reprint of the 1970 edition
published by North-Holland and Akadémiai Kiadó.)
[13] Rudin, W.: Principles of Mathematical Analysis. McGraw-Hill, Auckland 1976 (3rd edn).
[14] Rudin, W.: Real and Complex Analysis. McGraw-Hill, New York 1986 (3rd edn).
[15] Schilling, R.L.: Measures, Integrals and Martingales. Cambridge University Press, Cam-
bridge 2011 (3rd printing with corrections).
[16] Schilling, R.L. and Partzsch, L.: Brownian Motion. An Introduction to Stochastic Pro-
cesses. De Gruyter, Berlin 2012.
[17] Trèves, F.: Topological Vector Spaces, Distributions and Kernels. Academic Press, San
Diego (CA) 1990.
[18] Wentzell, A. D.: Theorie zufälliger Prozesse. Birkhäuser, Basel 1979. Russian original.
English translation: Course in the Theory of Stochastic Processes, McGraw-Hill 1981.