Differential Analysis

Lecture notes for 18.155 and 156

Richard B. Melrose
Contents

Introduction

Chapter 1. Measure and Integration
1. Continuous functions
2. Measures and σ-algebras
3. Measurability of functions
4. Integration

Chapter 2. Hilbert spaces and operators
1. Hilbert space
2. Spectral theorem

Chapter 3. Distributions
1. Test functions
2. Tempered distributions
3. Convolution and density
4. Fourier inversion
5. Sobolev embedding
6. Differential operators.
7. Cone support and wavefront set
8. Homogeneous distributions
9. Operators and kernels
10. Fourier transform
11. Schwartz space.
12. Tempered distributions.
13. Fourier transform
14. Sobolev spaces
15. Weighted Sobolev spaces.
16. Multiplicativity
17. Some bounded operators

Chapter 4. Elliptic Regularity
1. Constant coefficient operators
2. Constant coefficient elliptic operators
3. Interior elliptic estimates
Addenda to Chapter 4

Chapter 5. Coordinate invariance and manifolds
1. Local diffeomorphisms
2. Manifolds
3. Vector bundles

Chapter 6. Invertibility of elliptic operators
1. Global elliptic estimates
2. Compact inclusion of Sobolev spaces
3. Elliptic operators are Fredholm
4. Generalized inverses
5. Self-adjoint elliptic operators
6. Index theorem
Addenda to Chapter 6

Chapter 7. Suspended families and the resolvent
1. Product with a line
2. Translation-invariant Operators
3. Invertibility
4. Resolvent operator
Addenda to Chapter 7

Chapter 8. Manifolds with boundary
1. Compactifications of R.
2. Basic properties
3. Boundary Sobolev spaces
4. Dirac operators
5. Homogeneous translation-invariant operators
6. Scattering structure

Chapter 9. Electromagnetism
1. Maxwell’s equations
2. Hodge Theory
3. Coulomb potential
4. Dirac strings
Addenda to Chapter 9

Chapter 10. Monopoles
1. Gauge theory
2. Bogomolny equations
3. Problems
4. Solutions to (some of) the problems

Bibliography

Introduction
These notes are for the graduate analysis courses (18.155 and 18.156)
at MIT. They are based on various earlier similar courses. In giving
the lectures I usually cut many corners!
To thank:- Austin Frakt, Philip Dorrell, Jacob Bernstein....
CHAPTER 1

Measure and Integration

A rather quick review of measure and integration.

1. Continuous functions
At the beginning I want to remind you of things I think you already
know and then go on to show the direction the course will be taking.
Let me first try to set the context.
One basic notion I assume you are reasonably familiar with is that
of a metric space ([6] p.9). This consists of a set, X, and a distance
function
d : X × X = X² −→ [0, ∞) ,
satisfying the following three axioms:
i) d(x, y) = 0 ⇔ x = y, (and d(x, y) ≥ 0)
(1.1) ii) d(x, y) = d(y, x) ∀ x, y ∈ X
iii) d(x, y) ≤ d(x, z) + d(z, y) ∀ x, y, z ∈ X.
The basic theory of metric spaces deals with properties of subsets
(open, closed, compact, connected), sequences (convergent, Cauchy)
and maps (continuous) and the relationship between these notions.
Let me just remind you of one such result.
Proposition 1.1. A map f : X → Y between metric spaces is
continuous if and only if one of the three following equivalent conditions
holds
(1) f −1 (O) ⊂ X is open ∀ O ⊂ Y open.
(2) f −1 (C) ⊂ X is closed ∀ C ⊂ Y closed.
(3) limn→∞ f (xn ) = f (x) in Y if xn → x in X.
The basic example of a metric space is Euclidean space. Real n-
dimensional Euclidean space, Rn , is the set of ordered n-tuples of real
numbers
x = (x1 , . . . , xn ) ∈ Rn , xj ∈ R , j = 1, . . . , n .

It is also the basic example of a vector (or linear) space with the oper-
ations
x + y = (x1 + y1 , x2 + y2 , . . . , xn + yn )
cx = (cx1 , . . . , cxn ) .
The metric is usually taken to be given by the Euclidean metric
|x| = (x₁² + · · · + xₙ²)^{1/2} = ( Σ_{j=1}^n xⱼ² )^{1/2} ,
in the sense that
d(x, y) = |x − y| .
Let us abstract this immediately to the notion of a normed vector
space, or normed space. This is a vector space V (over R or C) equipped
with a norm, which is to say a function
k k : V −→ [0, ∞)
satisfying
i) kvk = 0 ⇐⇒ v = 0,
(1.2) ii) kcvk = |c| kvk ∀ c ∈ K,
iii) kv + wk ≤ kvk + kwk.
This means that (V, d), d(v, w) = kv − wk is a vector space; I am also
using K to denote either R or C as is appropriate.
The case of a finite dimensional normed space is not very interesting
because, apart from the dimension, they are all “the same”. We shall
say (in general) that two norms k • k1 and k • k2 on V are equivalent
if there exists C > 0 such that
(1/C) kvk1 ≤ kvk2 ≤ Ckvk1 ∀ v ∈ V .
Proposition 1.2. Any two norms on a finite dimensional vector
space are equivalent.
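For instance, on Rn the Euclidean norm |x| and the supremum norm kxk∞ = max_j |xj | are equivalent in this sense, since
kxk∞ ≤ |x| ≤ n^{1/2} kxk∞ ∀ x ∈ Rn ,
so one may take C = n^{1/2}. Of course this constant degenerates as n → ∞, which is one hint of what goes wrong in infinite dimensions.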
So, we are mainly interested in the infinite dimensional case. I will
start the course, in a slightly unorthodox manner, by concentrating on
one such normed space (really one class). Let X be a metric space.
Continuity for a function f : X → R (or C) is a special case
of Proposition 1.1 above. We then define
C(X) = {f : X → R, f bounded and continuous} .
In fact the same notation is generally used for the space of complex-
valued functions. If we want to distinguish between these two possi-
bilities we can use the more pedantic notation C(X; R) and C(X; C).

Now, the ‘obvious’ norm on this linear space is the supremum (or ‘uni-
form’) norm
kf k∞ = sup_{x∈X} |f (x)| .
Here X is an arbitrary metric space. For the moment X is sup-
posed to be a “physical” space, something like Rn . Corresponding to
the finite-dimensionality of Rn we often assume (or demand) that X
is locally compact. This just means that every point has a compact
neighborhood, i.e., is in the interior of a compact set. Whether locally
compact or not we can consider
 
(1.3) C0 (X) = { f ∈ C(X); ∀ ε > 0 ∃ K b X s.t. sup_{x∉K} |f (x)| ≤ ε } .

Here the notation K b X means ‘K is a compact subset of X’.


If V is a normed linear space we are particularly interested in the
continuous linear functionals on V . Here ‘functional’ just means func-
tion but V is allowed to be ‘large’ (not like Rn ) so ‘functional’ is used
for historical reasons.
Proposition 1.3. The following are equivalent conditions on a
linear functional u : V −→ R on a normed space V .
(1) u is continuous.
(2) u is continuous at 0.
(3) {u(f ) ∈ R ; f ∈ V , kf k ≤ 1} is bounded.
(4) ∃ C s.t. |u(f )| ≤ Ckf k ∀ f ∈ V .
Proof. (1) =⇒ (2) by definition. Then (2) implies that u−1 (−1, 1)
is a neighborhood of 0 ∈ V , so for some ε > 0, u({f ∈ V ; kf k < ε}) ⊂
(−1, 1). By linearity of u, u({f ∈ V ; kf k < 1}) ⊂ (−1/ε, 1/ε) is bounded,
so (2) =⇒ (3). Then (3) implies that
|u(f )| ≤ C ∀ f ∈ V, kf k ≤ 1
for some C. Again using linearity of u, if f ≠ 0,
|u(f )| = kf k |u( f /kf k )| ≤ Ckf k ,
giving (4). Finally, assuming (4),
|u(f ) − u(g)| = |u(f − g)| ≤ Ckf − gk
shows that u is continuous at any point g ∈ V . 
In view of this identification, continuous linear functionals are often
said to be bounded. One of the important ideas that we shall exploit
later is that of ‘duality’. In particular this suggests that it is a good

idea to examine the totality of bounded linear functionals on V . The


dual space is

V 0 = V ∗ = {u : V −→ K , linear and bounded} .

This is also a normed linear space where the linear operations are
(1.4) (u + v)(f ) = u(f ) + v(f ), (cu)(f ) = c(u(f )) ∀ f ∈ V.
The natural norm on V 0 is
kuk = sup_{kf k≤1} |u(f )|.

This is just the ‘best constant’ in the boundedness estimate,

kuk = inf {C; |u(f )| ≤ Ckf k ∀ f ∈ V } .

One of the basic questions I wish to pursue in the first part of the
course is: What is the dual of C0 (X) for a locally compact metric space
X? The answer is given by Riesz’ representation theorem, in terms of
(Borel) measures.
Let me give you a vague picture of ‘regularity of functions’ which
is what this course is about, even though I have not introduced most
of these spaces yet. Smooth functions (and small spaces) are towards
the top. Duality flips up and down and as we shall see L2 , the space
of Lebesgue square-integrable functions, is generally ‘in the middle’.
What I will discuss first is the right side of the diagramme, where we
have the space of continuous functions on Rn which vanish at infinity
and its dual space, Mfin (Rn ), the space of finite Borel measures. There
are many other spaces that you may encounter, here I only include test
functions, Schwartz functions, Sobolev spaces and their duals; k is a

general positive integer.

(1.5) [Diagram of continuous inclusions: S(Rn ) ⊂ H^k (Rn ) ⊂ L2 (Rn ) ⊂ H^{−k} (Rn ) ⊂ S′(Rn ) down the left-hand column, and Cc (Rn ) ⊂ C0 (Rn ), with their duals Mfin (Rn ) ⊂ M (Rn ), on the right.]

I have set the goal of understanding the dual space Mfin (Rn ) of
C0 (X), where X is a locally compact metric space. This will force me
to go through the elements of measure theory and Lebesgue integration.
It does require a little forcing!
The basic case of interest is Rn . Then an obvious example of a
continuous linear functional on C0 (Rn ) is given by Riemann integration,
for instance over the unit cube [0, 1]n :
u(f ) = ∫_{[0,1]^n} f (x) dx .

In some sense we must show that all continuous linear functionals


on C0 (X) are given by integration. However, we have to interpret
integration somewhat widely since there are also evaluation functionals.
If z ∈ X consider the Dirac delta
δz (f ) = f (z) .
This is also called a point mass at z. So we need a theory of measure
and integration wide enough to include both of these cases.
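As a quick check, δz really is a bounded linear functional: |δz (f )| = |f (z)| ≤ kf k∞ shows kδz k ≤ 1, and choosing f ∈ C0 (X) with 0 ≤ f ≤ 1 and f (z) = 1 (possible since X is locally compact) shows that in fact kδz k = 1.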
One special feature of C0 (X), compared to general normed spaces,
is that there is a notion of positivity for its elements. Thus f ≥ 0 just
means f (x) ≥ 0 ∀ x ∈ X.
Lemma 1.4. Each f ∈ C0 (X) can be decomposed uniquely as the
difference of its positive and negative parts
(1.6) f = f+ − f− , f± ∈ C0 (X) , f± (x) ≤ |f (x)| ∀ x ∈ X .

Proof. Simply define
f± (x) = ±f (x) if ±f (x) ≥ 0 , f± (x) = 0 if ±f (x) < 0 ,
for the same sign throughout. Then (1.6) holds. Observe that f+ is
continuous at each y ∈ X since, with U an appropriate neighborhood
of y, in each case
f (y) > 0 =⇒ f (x) > 0 for x ∈ U =⇒ f+ = f in U
f (y) < 0 =⇒ f (x) < 0 for x ∈ U =⇒ f+ = 0 in U
f (y) = 0 =⇒ given ε > 0 ∃ U s.t. |f (x)| < ε in U
=⇒ |f+ (x)| < ε in U .
Thus f− = f −f+ ∈ C0 (X), since both f+ and f− vanish at infinity. 
We can similarly split elements of the dual space into positive and
negative parts although it is a little bit more delicate. We say that
u ∈ (C0 (X))0 is positive if
(1.7) u(f ) ≥ 0 ∀ 0 ≤ f ∈ C0 (X) .
For a general (real) u ∈ (C0 (X))0 and for each 0 ≤ f ∈ C0 (X) set
(1.8) u+ (f ) = sup {u(g) ; g ∈ C0 (X) , 0 ≤ g(x) ≤ f (x) ∀ x ∈ X} .
This is certainly finite since u(g) ≤ Ckgk∞ ≤ Ckf k∞ . Moreover, if
0 < c ∈ R then u+ (cf ) = cu+ (f ) by inspection. Suppose 0 ≤ fi ∈
C0 (X) for i = 1, 2. Then given ε > 0 there exist gi ∈ C0 (X) with
0 ≤ gi (x) ≤ fi (x) and
u+ (fi ) ≤ u(gi ) + ε .
It follows that 0 ≤ g(x) ≤ f1 (x) + f2 (x) if g = g1 + g2 so
u+ (f1 + f2 ) ≥ u(g) = u(g1 ) + u(g2 ) ≥ u+ (f1 ) + u+ (f2 ) − 2ε .
Thus
u+ (f1 + f2 ) ≥ u+ (f1 ) + u+ (f2 ).
Conversely, if 0 ≤ g(x) ≤ f1 (x) + f2 (x) set g1 (x) = min(g, f1 ) ∈
C0 (X) and g2 = g − g1 . Then 0 ≤ gi ≤ fi and u+ (f1 ) + u+ (f2 ) ≥
u(g1 ) + u(g2 ) = u(g). Taking the supremum over g, u+ (f1 + f2 ) ≤
u+ (f1 ) + u+ (f2 ), so we find
(1.9) u+ (f1 + f2 ) = u+ (f1 ) + u+ (f2 ) .
Having shown this effective linearity on the positive functions we
can obtain a linear functional by setting
(1.10) u+ (f ) = u+ (f+ ) − u+ (f− ) ∀ f ∈ C0 (X) .

Note that (1.9) shows that u+ (f ) = u+ (f1 ) − u+ (f2 ) for any decom-
position of f = f1 − f2 with fi ∈ C0 (X), both positive. [Since f1 + f− =
f2 + f+ so u+ (f1 ) + u+ (f− ) = u+ (f2 ) + u+ (f+ ).] Moreover,
|u+ (f )| ≤ max(u+ (f+ ), u+ (f− )) ≤ kuk kf k∞
=⇒ ku+ k ≤ kuk .
The functional
u− = u+ − u
is also positive, since u+ (f ) ≥ u(f ) for all 0 ≤ f ∈ C0 (X). Thus we
have proved
Lemma 1.5. Any element u ∈ (C0 (X))0 can be decomposed,
u = u+ − u−
into the difference of positive elements with
ku+ k , ku− k ≤ kuk .
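For instance, if u = δz1 − δz2 for two distinct points z1 , z2 ∈ X then u+ = δz1 and u− = δz2 . Indeed, for 0 ≤ f ∈ C0 (X) the supremum in (1.8) is f (z1 ): it is at most f (z1 ), and it is attained by g = φf where φ ∈ C(X), 0 ≤ φ ≤ 1, φ(z1 ) = 1, φ(z2 ) = 0 (for example φ(x) = min(1, d(x, z2 )/d(z1 , z2 ))). So the decomposition of Lemma 1.5 recovers the obvious splitting.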
The idea behind the definition of u+ is that u itself is, more or
less, “integration against a function” (even though we do not know
how to interpret this yet). In defining u+ from u we are effectively
throwing away the negative part of that ‘function.’ The next step is
to show that a positive functional corresponds to a ‘measure’ meaning
a function measuring the size of sets. To define this we really want to
evaluate u on the characteristic function of a set,
χE (x) = 1 if x ∈ E , χE (x) = 0 if x ∉ E.
The problem is that χE is not continuous. Instead we use an idea
similar to (1.8).
If 0 ≤ u ∈ (C0 (X))0 and U ⊂ X is open, set1
(1.11) µ(U ) = sup {u(f ) ; 0 ≤ f (x) ≤ 1, f ∈ C0 (X) , supp(f ) b U } .
Here the support of f , supp(f ), is the closure of the set of points where
f (x) ≠ 0. Thus supp(f ) is always closed; in (1.11) we only admit f if
its support is a compact subset of U. The reason for this is that, only
then do we ‘really know’ that f ∈ C0 (X).
Suppose we try to measure general sets in this way. We can do this
by defining
(1.12) µ∗ (E) = inf {µ(U ) ; U ⊃ E , U open} .
Already with µ it may happen that µ(U ) = ∞, so we think of
(1.13) µ∗ : P(X) → [0, ∞]
1See [6] starting p.42 or [1] starting p.206.

as defined on the power set of X and taking values in the extended


positive real numbers.
Definition 1.6. A positive extended function, µ∗ , defined on the
power set of X is called an outer measure if µ∗ (∅) = 0, µ∗ (A) ≤ µ∗ (B)
whenever A ⊂ B and
(1.14) µ∗ ( ∪_j Aj ) ≤ Σ_j µ∗ (Aj ) ∀ {Aj }_{j=1}^∞ ⊂ P(X) .

Lemma 1.7. If u is a positive continuous linear functional on C0 (X)


then µ∗ , defined by (1.11) and (1.12), is an outer measure.
To prove this we need to find enough continuous functions. I have
relegated the proof of the following result to Problem 2.
Lemma 1.8. Suppose Ui , i = 1, . . . , N, is a finite collection of open
sets in a locally compact metric space and K b ∪_{i=1}^N Ui is a compact
subset, then there exist continuous functions fi ∈ C(X) with 0 ≤ fi ≤
1, supp(fi ) b Ui and
(1.15) Σ_i fi = 1 in a neighborhood of K .

Proof of Lemma 1.7. We have to prove (1.14). Suppose first
that the Ai are open, then so is A = ∪_i Ai . If f ∈ C(X) and
supp(f ) b A then supp(f ) is covered by a finite union of the Ai 's.
Applying Lemma 1.8 we can find fi 's, all but a finite number iden-
tically zero, so supp(fi ) b Ai and Σ_i fi = 1 in a neighborhood of
supp(f ).
Since f = Σ_i fi f we conclude that
u(f ) = Σ_i u(fi f ) =⇒ µ∗ (A) ≤ Σ_i µ∗ (Ai )
since 0 ≤ fi f ≤ 1 and supp(fi f ) b Ai .


Thus (1.14) holds when the Ai are open. In the general case if
Ai ⊂ Bi with the Bi open then, from the definition,
µ∗ ( ∪_i Ai ) ≤ µ∗ ( ∪_i Bi ) ≤ Σ_i µ∗ (Bi ) .
Taking the infimum over the Bi gives (1.14) in general. □
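It is worth tracing the construction through in the simplest case. If u = δz then (1.11) gives µ(U ) = 1 when z ∈ U (take f with f (z) = 1, 0 ≤ f ≤ 1 and supp(f ) b U, using local compactness) and µ(U ) = 0 when z ∉ U, since then every admissible f vanishes at z. Hence by (1.12), µ∗ (E) = 1 if z ∈ E, while if z ∉ E the open set X \ {z} contains E and gives µ∗ (E) = 0. So µ∗ is exactly the point mass at z, as one would hope.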

2. Measures and σ-algebras


An outer measure such as µ∗ is a rather crude object since, even
if the Ai are disjoint, there is generally strict inequality in (1.14). It
turns out to be unreasonable to expect equality in (1.14), for disjoint
unions, for a function defined on all subsets of X. We therefore restrict
attention to smaller collections of subsets.
Definition 2.1. A collection of subsets M of a set X is a σ-algebra
if
(1) ∅, X ∈ M
(2) E ∈ M =⇒ E^C = X \ E ∈ M
(3) {Ei }_{i=1}^∞ ⊂ M =⇒ ∪_{i=1}^∞ Ei ∈ M.
For a general outer measure µ∗ we define the notion of µ∗ -measurability
of a set.
Definition 2.2. A set E ⊂ X is µ∗ -measurable (for an outer mea-
sure µ∗ on X) if
(2.1) µ∗ (A) = µ∗ (A ∩ E) + µ∗ (A ∩ E^C ) ∀ A ⊂ X .
Proposition 2.3. The collection of µ∗ -measurable sets for any
outer measure is a σ-algebra.
Proof. Suppose E is µ∗ -measurable, then E C is µ∗ -measurable by
the symmetry of (2.1).
Suppose A, E and F are any three sets. Then
A ∩ (E ∪ F ) = (A ∩ E ∩ F ) ∪ (A ∩ E ∩ F C ) ∪ (A ∩ E C ∩ F )
A ∩ (E ∪ F )C = A ∩ E C ∩ F C .
From the subadditivity of µ∗
µ∗ (A ∩ (E ∪ F )) + µ∗ (A ∩ (E ∪ F )C )
≤ µ∗ (A ∩ E ∩ F ) + µ∗ (A ∩ E ∩ F^C )
+ µ∗ (A ∩ E C ∩ F ) + µ∗ (A ∩ E C ∩ F C ).
Now, if E and F are µ∗ -measurable then applying the definition twice,
for any A,
µ∗ (A) = µ∗ (A ∩ E ∩ F ) + µ∗ (A ∩ E ∩ F C )
+ µ∗ (A ∩ E C ∩ F ) + µ∗ (A ∩ E C ∩ F C )
≥ µ∗ (A ∩ (E ∪ F )) + µ∗ (A ∩ (E ∪ F )C ) .
The reverse inequality follows from the subadditivity of µ∗ , so E ∪ F
is also µ∗ -measurable.

If {Ei }_{i=1}^∞ is a sequence of disjoint µ∗ -measurable sets, set Fn = ∪_{i=1}^n Ei and F = ∪_{i=1}^∞ Ei . Then for any A,

µ∗ (A ∩ Fn ) = µ∗ (A ∩ Fn ∩ En ) + µ∗ (A ∩ Fn ∩ EnC )
= µ∗ (A ∩ En ) + µ∗ (A ∩ Fn−1 ) .

Iterating this shows that
µ∗ (A ∩ Fn ) = Σ_{j=1}^n µ∗ (A ∩ Ej ) .

From the µ∗ -measurability of Fn and the subadditivity of µ∗ ,
µ∗ (A) = µ∗ (A ∩ Fn ) + µ∗ (A ∩ Fn^C ) ≥ Σ_{j=1}^n µ∗ (A ∩ Ej ) + µ∗ (A ∩ F^C ) .

Taking the limit as n → ∞ and using subadditivity,
(2.2) µ∗ (A) ≥ Σ_{j=1}^∞ µ∗ (A ∩ Ej ) + µ∗ (A ∩ F^C ) ≥ µ∗ (A ∩ F ) + µ∗ (A ∩ F^C ) ≥ µ∗ (A)
proves that the inequalities are equalities, so F is also µ∗ -measurable.
In general, for any countable union of µ∗ -measurable sets,
∪_{j=1}^∞ Aj = ∪_{j=1}^∞ Ãj , Ãj = Aj \ ∪_{i=1}^{j−1} Ai = Aj ∩ ( ∪_{i=1}^{j−1} Ai )^C
is µ∗ -measurable since the Ãj are disjoint. □
A measure (sometimes called a positive measure) is an extended
function defined on the elements of a σ-algebra M:
µ : M → [0, ∞]
such that
(2.3) µ(∅) = 0 and
(2.4) µ( ∪_{i=1}^∞ Ai ) = Σ_{i=1}^∞ µ(Ai ) if {Ai }_{i=1}^∞ ⊂ M and Ai ∩ Aj = ∅ for i ≠ j.

The elements of M with measure zero, i.e., E ∈ M, µ(E) = 0, are


supposed to be ‘ignorable’. The measure µ is said to be complete if
(2.5) E ⊂ X and ∃ F ∈ M , µ(F ) = 0 , E ⊂ F ⇒ E ∈ M .
See Problem 4.

The first part of the following important result due to Caratheodory


was shown above.
Theorem 2.4. If µ∗ is an outer measure on X then the collection
of µ∗ -measurable subsets of X is a σ-algebra and µ∗ restricted to M is
a complete measure.
Proof. We have already shown that the collection of µ∗ -measurable
subsets of X is a σ-algebra. To see the second part, observe that taking
A = F in (2.2) gives
µ∗ (F ) = Σ_j µ∗ (Ej ) if F = ∪_{j=1}^∞ Ej
and the Ej are disjoint elements of M. This is (2.4).


Similarly if µ∗ (E) = 0 and F ⊂ E then µ∗ (F ) = 0. Thus it is
enough to show that for any subset E ⊂ X, µ∗ (E) = 0 implies E ∈ M.
For any A ⊂ X, using the fact that µ∗ (A ∩ E) = 0, and the ‘increasing’
property of µ∗
µ∗ (A) ≤ µ∗ (A ∩ E) + µ∗ (A ∩ E C )
= µ∗ (A ∩ E C ) ≤ µ∗ (A)
shows that these must always be equalities, so E ∈ M (i.e., is µ∗ -
measurable). 
Going back to our primary concern, recall that we constructed the
outer measure µ∗ from 0 ≤ u ∈ (C0 (X))0 using (1.11) and (1.12). For
the measure whose existence follows from Caratheodory’s theorem to
be much use we need
Proposition 2.5. If 0 ≤ u ∈ (C0 (X))0 , for X a locally compact
metric space, then each open subset of X is µ∗ -measurable for the outer
measure defined by (1.11) and (1.12) and µ in (1.11) is its measure.
Proof. Let U ⊂ X be open. We only need to prove (2.1) for all
A ⊂ X with µ∗ (A) < ∞.²
Suppose first that A ⊂ X is open and µ∗ (A) < ∞. Then A ∩ U
is open, so given ε > 0 there exists f ∈ C(X), supp(f ) b A ∩ U, with
0 ≤ f ≤ 1 and
µ∗ (A ∩ U ) = µ(A ∩ U ) ≤ u(f ) + ε .
Now, A \ supp(f ) is also open, so we can find g ∈ C(X) , 0 ≤ g ≤
1 , supp(g) b A \ supp(f ) with
µ∗ (A \ supp(f )) = µ(A \ supp(f )) ≤ u(g) + ε .
2Why?

Since
A\ supp(f ) ⊃ A ∩ U C , 0 ≤ f + g ≤ 1 , supp(f + g) b A ,
µ(A) ≥ u(f + g) = u(f ) + u(g)
> µ∗ (A ∩ U ) + µ∗ (A ∩ U^C ) − 2ε
≥ µ∗ (A) − 2ε
using subadditivity of µ∗ . Letting ε ↓ 0 we conclude that
µ∗ (A) ≤ µ∗ (A ∩ U ) + µ∗ (A ∩ U^C ) ≤ µ∗ (A) = µ(A) .
This gives (2.1) when A is open.
In general, if E ⊂ X and µ∗ (E) < ∞ then given ε > 0 there exists
A ⊂ X open, with A ⊃ E and µ∗ (E) > µ∗ (A) − ε. Thus,
µ∗ (E) ≥ µ∗ (A ∩ U ) + µ∗ (A ∩ U^C ) − ε
≥ µ∗ (E ∩ U ) + µ∗ (E ∩ U^C ) − ε
≥ µ∗ (E) − ε .
This shows that (2.1) always holds, so U is µ∗ -measurable if it is open.
We have already observed that µ(U ) = µ∗ (U ) if U is open. 
Thus we have shown that the σ-algebra given by Caratheodory’s
theorem contains all open sets. You showed in Problem 3 that the
intersection of any collection of σ-algebras on a given set is a σ-algebra.
Since P(X) is always a σ-algebra it follows that for any collection
E ⊂ P(X) there is always a smallest σ-algebra containing E, namely
ME = ∩ {M ⊃ E ; M is a σ-algebra , M ⊂ P(X)} .
The elements of the smallest σ-algebra containing the open sets are
called ‘Borel sets’. A measure defined on the σ-algebra of all Borel sets
is called a Borel measure. This we have shown:
Proposition 2.6. The measure defined by (1.11), (1.12) from
0 ≤ u ∈ (C0 (X))0 by Caratheodory’s theorem is a Borel measure.
Proof. This is what Proposition 2.5 says! See how easy proofs
are. 
We can even continue in the same vein. A Borel measure is said to
be outer regular on E ⊂ X if
(2.6) µ(E) = inf {µ(U ) ; U ⊃ E , U open} .
Thus the measure constructed in Proposition 2.5 is outer regular on
all Borel sets! A Borel measure is inner regular on E if
(2.7) µ(E) = sup {µ(K) ; K ⊂ E , K compact} .

Here we need to know that compact sets are Borel measurable. This
is Problem 5.
Definition 2.7. A Radon measure (on a metric space) is a Borel
measure which is outer regular on all Borel sets, inner regular on open
sets and finite on compact sets.
Proposition 2.8. The measure defined by (1.11), (1.12) from
0 ≤ u ∈ (C0 (X))0 using Caratheodory’s theorem is a Radon measure.
Proof. Suppose K ⊂ X is compact. Let χK be the charac-
teristic function of K , χK = 1 on K , χK = 0 on K C . Suppose
f ∈ C0 (X) , supp(f ) b X and f ≥ χK . Set
Uε = {x ∈ X ; f (x) > 1 − ε}
where ε > 0 is small. Thus Uε is open, by the continuity of f, and
contains K. Moreover, we can choose g ∈ C(X) , supp(g) b Uε , 0 ≤
g ≤ 1 with g = 1 near3 K. Thus, g ≤ (1 − ε)^{−1} f and hence
µ∗ (K) ≤ u(g) ≤ (1 − ε)^{−1} u(f ) .
Letting ε ↓ 0, and using the measurability of K,
µ(K) ≤ u(f )
⇒ µ(K) = inf {u(f ) ; f ∈ C(X) , supp(f ) b X , f ≥ χK } .
In particular this implies that µ(K) < ∞ if K b X, but it also proves
(2.7). □
Let me now review a little of what we have done. We used the
positive functional u to define an outer measure µ∗ , hence a measure
µ and then checked the properties of the latter.
This is a pretty nice scheme; getting ahead of myself a little, let me
suggest that we try it on something else.
Let us say that Q ⊂ Rn is ‘rectangular’ if it is a product of finite
intervals (open, closed or half-open),
(2.8) Q = ∏_{i=1}^n (ai , bi ) (or [ai , bi ], or half-open), ai ≤ bi ;
we all agree on its standard volume:
(2.9) v(Q) = ∏_{i=1}^n (bi − ai ) ∈ [0, ∞) .

3Meaning in a neighborhood of K.

Clearly if we have two such sets, Q1 ⊂ Q2 , then v(Q1 ) ≤ v(Q2 ). Let


us try to define an outer measure on subsets of Rn by
(2.10) v∗ (A) = inf { Σ_{i=1}^∞ v(Qi ) ; A ⊂ ∪_{i=1}^∞ Qi , Qi rectangular } .
We want to show that (2.10) does define an outer measure. This is
pretty easy; certainly v∗ (∅) = 0. Similarly if {Ai }_{i=1}^∞ are (disjoint) sets
and {Qij }_{j=1}^∞ is a covering of Ai by open rectangles then all the Qij
together cover A = ∪_i Ai and
v∗ (A) ≤ Σ_i Σ_j v(Qij )
⇒ v∗ (A) ≤ Σ_i v∗ (Ai ) .

So we have an outer measure. We also want


Lemma 2.9. If Q is rectangular then v ∗ (Q) = v(Q).
Assuming this, the measure defined from v ∗ using Caratheodory’s
theorem is called Lebesgue measure.
Proposition 2.10. Lebesgue measure is a Borel measure.
To prove this we just need to show that (open) rectangular sets are
v ∗ -measurable.
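As a simple example of (2.10), a single point p ∈ Rn has v∗ ({p}) = 0, since it is covered by a cube of side ε for every ε > 0, so v∗ ({p}) ≤ ε^n . By the countable subadditivity just established, any countable set, for instance Qn ⊂ Rn , therefore has Lebesgue outer measure zero.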

3. Measurability of functions
Suppose that M is a σ-algebra on a set X 4 and N is a σ-algebra on
another set Y. A map f : X → Y is said to be measurable with respect
to these given σ-algebras on X and Y if
(3.1) f −1 (E) ∈ M ∀ E ∈ N .
Notice how similar this is to one of the characterizations of continuity
for maps between metric spaces in terms of open sets. Indeed this
analogy yields a useful result.
Lemma 3.1. If G ⊂ N generates N , in the sense that
(3.2) N = ∩ { N′ ; N′ ⊃ G, N′ a σ-algebra }
then f : X −→ Y is measurable iff f −1 (A) ∈ M for all A ∈ G.
4Then X, or if you want to be pedantic (X, M), is often said to be a measure
space or even a measurable space.

Proof. The main point to note here is that f −1 as a map on power


sets, is very well behaved for any map. That is if f : X → Y then
f −1 : P(Y ) → P(X) satisfies:
(3.3) f −1 (E^C ) = (f −1 (E))^C , f −1 ( ∪_{j=1}^∞ Ej ) = ∪_{j=1}^∞ f −1 (Ej ) ,
f −1 ( ∩_{j=1}^∞ Ej ) = ∩_{j=1}^∞ f −1 (Ej ) , f −1 (∅) = ∅ , f −1 (Y ) = X .
Putting these things together one sees that if M is any σ-algebra on
X then
(3.4) f∗ (M) = { E ⊂ Y ; f −1 (E) ∈ M }
is always a σ-algebra on Y.
In particular if f −1 (A) ∈ M for all A ∈ G ⊂ N then f∗ (M) is a σ-
algebra containing G, hence containing N by the generating condition.
Thus f −1 (E) ∈ M for all E ∈ N so f is measurable. 
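For example, the intervals (a, ∞), a ∈ R, generate the Borel σ-algebra on R (every open set is a countable union of open intervals and (a, b) = (a, ∞) ∩ ( ∩_n (b − 1/n, ∞) )^C ), so by Lemma 3.1 a map f : X −→ R is measurable with respect to M if and only if {x ∈ X; f (x) > a} ∈ M for every a ∈ R.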
Proposition 3.2. Any continuous map f : X → Y between metric
spaces is measurable with respect to the Borel σ-algebras on X and Y.
Proof. The continuity of f shows that f −1 (E) ⊂ X is open if E ⊂
Y is open. By definition, the open sets generate the Borel σ-algebra
on Y so the preceding Lemma shows that f is Borel measurable i.e.,
f −1 (B(Y )) ⊂ B(X).

We are mainly interested in functions on X. If M is a σ-algebra
on X then f : X → R is measurable if it is measurable with respect
to the Borel σ-algebra on R and M on X. More generally, for an
extended function f : X → [−∞, ∞] we take as the ‘Borel’ σ-algebra
in [−∞, ∞] the smallest σ-algebra containing all open subsets of R and
all sets (a, ∞] and [−∞, b); in fact it is generated by the sets (a, ∞].
(See Problem 6.)
Our main task is to define the integral of a measurable function: we
start with simple functions. Observe that the characteristic function
of a set,
χE (x) = 1 if x ∈ E , χE (x) = 0 if x ∉ E,
is measurable if and only if E ∈ M. More generally a simple function,
(3.5) f = Σ_{i=1}^N ai χEi , ai ∈ R,

is measurable if the Ei are measurable. The presentation, (3.5), of a


simple function is not unique. We can make it unique, obtaining the minimal
presentation, by insisting that the ai are distinct and non-zero and
Ei = {x ∈ X ; f (x) = ai } ;
then f in (3.5) is measurable iff all the Ei are measurable.
The Lebesgue integral is based on approximation of functions by
simple functions, so it is important to show that this is possible.
Proposition 3.3. For any non-negative µ-measurable extended func-
tion f : X −→ [0, ∞] there is an increasing sequence fn of simple mea-
surable functions such that limn→∞ fn (x) = f (x) for each x ∈ X and
this limit is uniform on any measurable set on which f is bounded.
Proof. Folland [1] page 45 has a nice proof. For each integer n > 0
and 0 ≤ k ≤ 2^{2n} − 1, set
En,k = {x ∈ X; 2^{−n} k ≤ f (x) < 2^{−n} (k + 1)},
E′n = {x ∈ X; f (x) ≥ 2^n }.
These are measurable sets. On increasing n by one, the interval in the
definition of En,k is divided into two. It follows that the sequence of
simple functions
(3.6) fn = Σ_k 2^{−n} k χ_{En,k} + 2^n χ_{E′n}

is increasing and has limit f and that this limit is uniform on any
measurable set where f is bounded. □
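For instance, for f (x) = x on X = [0, ∞) the construction gives fn (x) = 2^{−n} k on [2^{−n} k, 2^{−n} (k + 1)) for 0 ≤ k ≤ 2^{2n} − 1 and fn (x) = 2^n for x ≥ 2^n . Thus f1 takes the values 0, 1/2, 1, 3/2 on [0, 1/2), [1/2, 1), [1, 3/2), [3/2, 2) and the value 2 on [2, ∞), and each fn+1 refines fn by halving the steps and doubling the range.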

4. Integration
The (µ)-integral of a non-negative simple function is by definition
(4.1) ∫_Y f dµ = Σ_i ai µ(Y ∩ Ei ) , Y ∈ M .

Here the convention is that if µ(Y ∩ Ei ) = ∞ but ai = 0 then ai · µ(Y ∩


Ei ) = 0. Clearly this integral takes values in [0, ∞]. More significantly,

if c ≥ 0 is a constant and f and g are two non-negative (µ-measurable)


simple functions then
(4.2) ∫_Y cf dµ = c ∫_Y f dµ , ∫_Y (f + g) dµ = ∫_Y f dµ + ∫_Y g dµ ,
and 0 ≤ f ≤ g ⇒ ∫_Y f dµ ≤ ∫_Y g dµ .

(See [1] Proposition 2.13 on page 48.)


To see this, observe that (4.1) holds for any presentation (3.5) of f
with all ai ≥ 0. Indeed, by restriction to Ei and division by ai (which
can be assumed non-zero) it is enough to consider the special case
χE = Σ_j bj χFj .
The Fj can always be written as unions of a finite number, N′,
of disjoint measurable sets, Fj = ∪_{l∈Sj} Gl where j = 1, . . . , N and
Sj ⊂ {1, . . . , N′}. Thus
Σ_j bj µ(Fj ) = Σ_j Σ_{l∈Sj} bj µ(Gl ) = µ(E)
since Σ_{{j; l∈Sj}} bj = 1 for each l.
From this all the statements follow easily.
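For instance, with Lebesgue measure on R,
χ_{[0,2)} + χ_{[1,3)} = χ_{[0,1)} + 2 χ_{[1,2)} + χ_{[2,3)}
and both presentations give the same value in (4.1): 1 · 2 + 1 · 2 = 4 = 1 · 1 + 2 · 1 + 1 · 1.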
Definition 4.1. For a non-negative µ-measurable extended func-
tion f : X −→ [0, ∞] the integral (with respect to µ) over any measur-
able set E ⊂ X is
(4.3) ∫_E f dµ = sup { ∫_E h dµ ; 0 ≤ h ≤ f, h simple and measurable } .
By taking suprema, ∫_E f dµ has the first and last properties in (4.2).
It also has the middle property, but this is less obvious. To see this, we
shall prove the basic ‘Monotone convergence theorem’ (of Lebesgue).
Before doing so however, note what the vanishing of the integral means.
Lemma 4.2. If f : X −→ [0, ∞] is measurable then ∫_E f dµ = 0 for
a measurable set E if and only if
(4.4) {x ∈ E; f (x) > 0} has measure zero.
Proof. If (4.4) holds, then any positive simple function bounded
above by f must also vanish outside a set of measure zero, so its integral
must be zero and hence ∫_E f dµ = 0. Conversely, observe that the set
in (4.4) can be written as
∪_n En , En = {x ∈ E; f (x) > 1/n}.
Since these sets increase with n, if (4.4) does not hold then one of these
must have positive measure. In that case the simple function n^{−1} χ_{En}
has positive integral so ∫_E f dµ > 0. □
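For example, with Lebesgue measure on R the characteristic function χ_Q of the rationals satisfies ∫_R χ_Q dx = 0, since Q is countable and so has measure zero, even though χ_Q is not Riemann integrable.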
Notice the fundamental difference in approach here between Rie-
mann and Lebesgue integrals. The Lebesgue integral, (4.3), uses ap-
proximation by functions constant on possibly quite nasty measurable
sets, not just intervals as in the Riemann lower and upper integrals.
Theorem 4.3 (Monotone Convergence). Let fn be an increasing
sequence of non-negative measurable (extended) functions, then f (x) =
limn→∞ fn (x) is measurable and
(4.5) ∫_E f dµ = lim_{n→∞} ∫_E fn dµ

for any measurable set E ⊂ X.


Proof. To see that f is measurable, observe that
(4.6) f −1 (a, ∞] = ∪_n fn^{−1} (a, ∞].

Since the sets (a, ∞] generate the Borel σ-algebra this shows that f is
measurable.
So we proceed to prove the main part of the proposition, which
is (4.5). Rudin has quite a nice proof of this, [6] page 21. Here I
paraphrase it. We can easily see from (4.1) that
α = sup_n ∫_E fn dµ = lim_{n→∞} ∫_E fn dµ ≤ ∫_E f dµ.

Given a simple measurable function g with 0 ≤ g ≤ f and 0 < c < 1
consider the sets En = {x ∈ E; fn (x) ≥ cg(x)}. These are measurable
and increase with n. Moreover E = ∪_n En . It follows that
(4.7) ∫_E fn dµ ≥ ∫_{En} fn dµ ≥ c ∫_{En} g dµ = c Σ_i ai µ(En ∩ Fi )
in terms of the natural presentation of g = Σ_i ai χFi . Now, the fact
that the En are measurable and increase to E shows that
µ(En ∩ Fi ) → µ(E ∩ Fi )
as n → ∞. Thus the right side of (4.7) tends to c ∫_E g dµ as n → ∞.
Hence α ≥ c ∫_E g dµ for all 0 < c < 1. Taking the supremum over c and
then over all such g shows that
α = lim_{n→∞} ∫_E fn dµ ≥ sup_g ∫_E g dµ = ∫_E f dµ.

They must therefore be equal. 
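A typical application: if f ≥ 0 is measurable and E1 ⊂ E2 ⊂ · · · are measurable sets with E = ∪_n En , then applying the theorem to fn = f χ_{En} gives
∫_E f dµ = lim_{n→∞} ∫_{En} f dµ .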


Now for instance the additivity in (4.2) for f ≥ 0 and g ≥ 0 any
measurable functions follows from Proposition 3.3. Thus if f ≥ 0 is
measurable and fn is an approximating sequence as in the Proposition
then ∫_E f dµ = lim_{n→∞} ∫_E fn dµ. So if f and g are two non-negative
measurable functions then fn (x) + gn (x) ↑ f (x) + g(x) which shows not
only that f + g is measurable but also that
∫_E (f + g) dµ = ∫_E f dµ + ∫_E g dµ.

As with the definition of u+ long ago, this allows us to extend the


definition of the integral to any integrable function.
Definition 4.4. A measurable extended function f : X −→ [−∞, ∞]
is said to be integrable on E if its positive and negative parts both have
finite integrals over E, and then
∫_E f dµ = ∫_E f+ dµ − ∫_E f− dµ.

Notice if f is µ-integrable then so is |f |. One of the objects we wish


to study is the space of integrable functions. The fact that the integral
of |f | can vanish encourages us to look at what at first seems a much
more complicated object. Namely we consider an equivalence relation
between integrable functions
(4.8) f1 ≡ f2 ⇐⇒ µ({x ∈ X; f1 (x) ≠ f2 (x)}) = 0.
That is we identify two such functions if they are equal ‘off a set of
measure zero.’ Clearly if f1 ≡ f2 in this sense then
∫_X |f1 − f2 | dµ = 0 and ∫_X f1 dµ = ∫_X f2 dµ .

A necessary condition for a measurable function f ≥ 0 to be inte-


grable is
µ{x ∈ X; f (x) = ∞} = 0.
Let E be the (necessarily measurable) set where f = ∞. Indeed, if
this does not have measure zero, then the sequence of simple functions
nχE ≤ f has integral tending to infinity. It follows that each equiva-
lence class under (4.8) has a representative which is an honest function,
i.e. which is finite everywhere. Namely if f is one representative then
f ′ , defined by f ′ (x) = f (x) for x ∉ E and f ′ (x) = 0 for x ∈ E,
is also a representative.
We shall denote by L1 (X, µ) the space consisting of such equivalence
classes of integrable functions. This is a normed linear space as I ask
you to show in Problem 11.
The monotone convergence theorem often occurs in the slightly
disguised form of Fatou’s Lemma.
Lemma 4.5 (Fatou). If fn is a sequence of non-negative integrable
functions then
∫ lim inf_{n→∞} fn dµ ≤ lim inf_{n→∞} ∫ fn dµ .
Proof. Set Fk (x) = inf_{n≥k} fn (x). Thus Fk is an increasing se-
quence of non-negative functions with limiting function lim inf_{n→∞} fn
and Fk (x) ≤ fn (x) ∀ n ≥ k. By the monotone convergence theorem
∫ lim inf_{n→∞} fn dµ = lim_{k→∞} ∫ Fk dµ ≤ lim inf_{n→∞} ∫ fn dµ. □
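The inequality can certainly be strict. With Lebesgue measure on R take fn = n χ_{(0,1/n)} ; then fn (x) → 0 for every x, so the left side is 0, while ∫ fn dx = 1 for every n.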
We further extend the integral to complex-valued functions, just
saying that
f :X→C
is integrable if its real and imaginary parts are both integrable. Then,
by definition,
∫_E f dµ = ∫_E Re f dµ + i ∫_E Im f dµ
for any E ⊂ X measurable. It follows that if f is integrable then so is
|f |. Furthermore
| ∫_E f dµ | ≤ ∫_E |f | dµ .
This is obvious if ∫_E f dµ = 0, and if not then
∫_E f dµ = R e^{iθ} , R > 0 , θ ∈ [0, 2π) .

Then
| ∫_E f dµ | = e^{−iθ} ∫_E f dµ
= ∫_E e^{−iθ} f dµ
= ∫_E Re(e^{−iθ} f ) dµ
≤ ∫_E | Re(e^{−iθ} f )| dµ
≤ ∫_E |e^{−iθ} f | dµ = ∫_E |f | dµ .
The other important convergence result for integrals is Lebesgue’s
Dominated convergence theorem.
Theorem 4.6. If fn is a sequence of integrable functions, fn → f
a.e.5 and |fn | ≤ g for some integrable g then f is integrable and
∫ f dµ = lim_{n→∞} ∫ fn dµ .

Proof. First we can make the sequence fn (x) converge by chang-


ing all the fn (x)’s to zero on a set of measure zero outside which they
converge. This does not change the conclusions. Moreover, it suffices
to suppose that the fn are real-valued. Then consider
hk = g − fk ≥ 0 .
Now, lim inf k→∞ hk = g − f by the convergence of fn ; in particular f
is integrable. By monotone convergence and Fatou’s lemma
∫ (g − f ) dµ = ∫ lim inf_{k→∞} hk dµ ≤ lim inf_{k→∞} ∫ (g − fk ) dµ
= ∫ g dµ − lim sup_{k→∞} ∫ fk dµ .
Similarly, if Hk = g + fk then
∫ (g + f ) dµ = ∫ lim inf_{k→∞} Hk dµ ≤ ∫ g dµ + lim inf_{k→∞} ∫ fk dµ.
It follows that
lim sup_{k→∞} ∫ fk dµ ≤ ∫ f dµ ≤ lim inf_{k→∞} ∫ fk dµ.

5Means on the complement of a set of measure zero.


Thus in fact
∫ fk dµ → ∫ f dµ . □
Having proved Lebesgue’s theorem of dominated convergence, let
me use it to show something important. As before, let µ be a positive
measure on X. We have defined L1 (X, µ); let me consider the more
general space Lp (X, µ). A measurable function
f :X→C
is said to be ‘Lp ’, for 1 ≤ p < ∞, if |f |p is integrable6, i.e.,
∫_X |f |^p dµ < ∞ .
As before we consider equivalence classes of such functions under the
equivalence relation
(4.9) f ∼ g ⇔ µ ({x; (f − g)(x) ≠ 0}) = 0 .
We denote by L^p (X, µ) the space of such equivalence classes. It is a
linear space and the function
(4.10) kf kp = ( ∫_X |f |^p dµ )^{1/p}

is a norm (we always assume 1 ≤ p < ∞, sometimes p = 1 is excluded


but later p = ∞ is allowed). It is straightforward to check everything
except the triangle inequality. For this we start with
Lemma 4.7. If a ≥ 0, b ≥ 0 and 0 < γ < 1 then
(4.11) a^γ b^{1−γ} ≤ γa + (1 − γ)b
with equality only when a = b.
Proof. If b = 0 this is easy. So assume b > 0 and divide by b.
Taking t = a/b we must show
(4.12) t^γ ≤ γt + 1 − γ , 0 ≤ t , 0 < γ < 1 .
The function f (t) = t^γ − γt is differentiable for t > 0 with derivative
γ t^{γ−1} − γ, which is positive for t < 1 and negative for t > 1. Thus
f (t) ≤ f (1) with equality only for t = 1. Since f (1) = 1 − γ, this is
(4.12), proving the lemma. □
We use this to prove Hölder’s inequality
6Check that |f |^p is automatically measurable.

Lemma 4.8. If f and g are measurable then


(4.13) | ∫ f g dµ | ≤ kf kp kgkq
for any 1 < p < ∞, with 1/p + 1/q = 1.
Proof. If kf kp = 0 or kgkq = 0 the result is trivial, as it is if either
is infinite. Thus consider
a = |f (x)|^p / kf kp^p , b = |g(x)|^q / kgkq^q
and apply (4.11) with γ = 1/p. This gives
|f (x)g(x)| / (kf kp kgkq ) ≤ |f (x)|^p / (p kf kp^p) + |g(x)|^q / (q kgkq^q) .
Integrating over X we find
(1 / (kf kp kgkq )) ∫_X |f (x)g(x)| dµ ≤ 1/p + 1/q = 1 .
Since | ∫_X f g dµ | ≤ ∫_X |f g| dµ this implies (4.13). □
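As a typical application, if µ(X) < ∞ and 1 ≤ p < q < ∞ then applying (4.13) to |f |^p and the constant function 1, with the exponents q/p and q/(q − p), gives
∫_X |f |^p dµ ≤ ( ∫_X |f |^q dµ )^{p/q} µ(X)^{1−p/q} , i.e. kf kp ≤ µ(X)^{1/p−1/q} kf kq ,
so on a finite measure space L^q (X, µ) ⊂ L^p (X, µ). Note also that the case p = q = 2 of (4.13) is the Cauchy-Schwarz inequality.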
The final inequality we need is Minkowski’s inequality.
Proposition 4.9. If 1 < p < ∞ and f, g ∈ Lp (X, µ) then
(4.14) kf + gkp ≤ kf kp + kgkp .
Proof. The case p = 1 you have already done. It is also obvious
if f + g = 0 a.e.. If not we can write
|f + g|^p ≤ (|f | + |g|) |f + g|^{p−1}
and apply Hölder’s inequality, to the right side, expanded out,
∫ |f + g|^p dµ ≤ (kf kp + kgkp ) ( ∫ |f + g|^{q(p−1)} dµ )^{1/q} .
Since q(p − 1) = p and 1 − 1/q = 1/p this is just (4.14). □
So, now we know that Lp (X, µ) is a normed space for 1 ≤ p < ∞. In
particular it is a metric space. One important additional property that
a metric space may have is completeness, meaning that every Cauchy
sequence is convergent.

Definition 4.10. A normed space in which the underlying metric


space is complete is called a Banach space.
Theorem 4.11. For any measure space (X, M, µ) the spaces Lp (X, µ),
1 ≤ p < ∞, are Banach spaces.
Proof. We need to show that a given Cauchy sequence {fn } con-
verges in Lp (X, µ). It suffices to show that it has a convergent subse-
quence. By the Cauchy property, for each k ∃ n = n(k) s.t.
(4.15) kfn − fℓ kp ≤ 2^{−k} ∀ ℓ ≥ n .
Consider the sequence
g1 = f1 , gk = fn(k) − fn(k−1) , k > 1 .
By (4.15), kgk kp ≤ 2^{−k+1} for k > 1 (we may take n(k) increasing), so the series Σ_k kgk kp converges, say
to B < ∞. Now set
hn (x) = Σ_{k=1}^n |gk (x)| , n ≥ 1 , h(x) = Σ_{k=1}^∞ |gk (x)| .
Then by the monotone convergence theorem
∫_X h^p dµ = lim_{n→∞} ∫_X |hn |^p dµ ≤ B^p ,
where we have also used Minkowski’s inequality. Thus h ∈ L^p (X, µ),
so the series
f (x) = Σ_{k=1}^∞ gk (x)
converges (absolutely) almost everywhere. Since
|f (x)|^p = lim_{n→∞} | Σ_{k=1}^n gk (x) |^p ≤ h(x)^p
with h ∈ L^p (X, µ), the dominated convergence theorem applies and
shows that f ∈ L^p (X, µ). Furthermore,
Σ_{k=1}^ℓ gk (x) = fn(ℓ) (x) and |f (x) − fn(ℓ) (x)|^p ≤ (2h(x))^p
so again by the dominated convergence theorem,
∫_X |f (x) − fn(ℓ) (x)|^p dµ → 0 .
Thus the subsequence fn(ℓ) → f in L^p (X, µ), proving its completeness.
Thus the subsequence fn(`) → f in Lp (X, µ), proving its completeness.


Next I want to return to our starting point and discuss the Riesz
representation theorem. There are two important results in measure
theory that I have not covered — I will get you to do most of them
in the problems — namely the Hahn decomposition theorem and the
Radon-Nikodym theorem. For the moment we can do without the
latter, but I will use the former.
So, consider a locally compact metric space, X. By a Borel measure
on X, or a signed Borel measure, we shall mean a function on Borel
sets
µ : B(X) → R
which is given as the difference of two finite positive Borel measures
(4.16) µ(E) = µ1 (E) − µ2 (E) .
Similarly we shall say that µ is Radon, or a signed Radon measure, if
it can be written as such a difference, with both µ1 and µ2 finite Radon
measures. See the problems below for a discussion of this point.
Let Mfin (X) denote the set of finite Radon measures on X. This is
a normed space with
(4.17) kµk1 = inf(µ1 (X) + µ2 (X))
with the infimum over all Radon decompositions (4.16). Each signed
Radon measure defines a continuous linear functional on C0 (X):
(4.18) ∫ · dµ : C0 (X) ∋ f 7−→ ∫_X f dµ .

Theorem 4.12 (Riesz representation.). If X is a locally compact


metric space then every continuous linear functional on C0 (X) is given
by a unique finite Radon measure on X through (4.18).
Thus the dual space of C0 (X) is Mfin (X) – at least this is how such
a result is usually interpreted
(4.19) (C0 (X))0 = Mfin (X),
see the remarks following the proof.

Proof. We have done half of this already. Let me remind you of


the steps.
We started with u ∈ (C0 (X))0 and showed that u = u+ − u− where
u± are positive continuous linear functionals; this is Lemma 1.5. Then
we showed that u ≥ 0 defines a finite positive Radon measure µ. Here µ
is defined by (15.4) on open sets and µ(E) = µ∗ (E) is given by (15.12)

on general Borel sets. It is finite because


(4.20) µ(X) = sup {u(f ) ; 0 ≤ f ≤ 1 , supp f b X , f ∈ C(X)}
≤ kuk .
From Proposition 2.8 we conclude that µ is a Radon measure. Since
this argument applies to u± we get two positive finite Radon measures
µ± and hence a signed Radon measure
(4.21) µ = µ+ − µ− ∈ Mfin (X).
In the problems you are supposed to prove the Hahn decomposition
theorem, in particular in Problem 14 I ask you to show that (4.21) is
the Hahn decomposition of µ — this means that there is a Borel set
E ⊂ X such that µ− (E) = 0 , µ+ (X \ E) = 0.
What we have defined is a linear map
(4.22) (C0 (X))0 → M (X), u 7−→ µ .
We want to show that this is an isomorphism, i.e., it is 1 − 1 and onto.
We first show that it is 1 − 1. That is, suppose µ = 0. Given the
uniqueness of the Hahn decomposition this implies that µ+ = µ− = 0.
So we can suppose that u ≥ 0 and µ = µ+ = 0 and we have to show
that u = 0; this is obvious since
µ(X) = sup {u(f ); supp f b X, 0 ≤ f ≤ 1, f ∈ C(X)} = 0
(4.23)
⇒ u(f ) = 0 for all such f .
If 0 ≤ f ∈ C(X) and supp f b X then f 0 = f /kf k∞ is of this type
so u(f ) = 0 for every 0 ≤ f ∈ C(X) of compact support. From
the decomposition of continuous functions into positive and negative
parts it follows that u(f ) = 0 for every f of compact support. Now, if
f ∈ C0 (X), then given n ∈ N there exists K b X such that |f | < 1/n
on X \ K. As you showed in the problems, there exists χ ∈ C(X) with
supp(χ) b X and χ = 1 on K. Thus if fn = χf then supp(fn ) b X and
kf − fn k = sup |f − fn | ≤ 1/n. This shows that C0 (X) is the closure
of the subspace of continuous functions of compact support so by the
assumed continuity of u, u = 0.
So it remains to show that every finite Radon measure on X arises
from (4.22). We do this by starting from µ and constructing u. Again
we use the Hahn decomposition of µ, as in (4.21)7. Thus we assume
µ ≥ 0 and construct u. It is obvious what we want, namely
(4.24) u(f ) = ∫_X f dµ , f ∈ Cc (X) .
7Actually
we can just take any decomposition (4.21) into a difference of positive
Radon measures.

Here we need to recall from Proposition 3.2 that continuous functions


on X, a locally compact metric space, are (Borel) measurable. Further-
more, we know that there is an increasing sequence of simple functions
with limit f , so
(4.25) | ∫_X f dµ | ≤ µ(X) · kf k∞ .
This shows that u in (4.24) is continuous and that its norm kuk ≤
µ(X). In fact
(4.26) kuk = µ(X) .
Indeed, the inner regularity of µ implies that there is a compact set
K b X with µ(K) ≥ µ(X) − 1/n; then there is f ∈ Cc (X) with 0 ≤ f ≤ 1
and f = 1 on K. It follows that u(f ) ≥ µ(K) ≥ µ(X) − 1/n, for any n.
This proves (4.26).
We still have to show that if u is defined by (4.24), with µ a finite
positive Radon measure, then the measure µ̃ defined from u via (4.22)
is precisely µ itself.
This is easy provided we keep things clear. Starting from µ ≥ 0 a
finite Radon measure, define u by (4.24) and, for U ⊂ X open,
(4.27) µ̃(U ) = sup { ∫_X f dµ ; 0 ≤ f ≤ 1, f ∈ C(X), supp(f ) b U } .
By the properties of the integral, µ̃(U ) ≤ µ(U ). Conversely if K b U
there exists an element f ∈ Cc (X), 0 ≤ f ≤ 1, f = 1 on K and
supp(f ) ⊂ U. Then we know that
(4.28) µ̃(U ) ≥ ∫_X f dµ ≥ µ(K).
By the inner regularity of µ, we can choose K b U such that µ(K) ≥
µ(U ) − ε, given ε > 0. Thus µ̃(U ) = µ(U ) for every open U and hence,
by outer regularity, µ̃ = µ on all Borel sets.
This proves the Riesz representation theorem, modulo the decom-
position of the measure - which I will do in class if the demand is there!
In my view this is quite enough measure theory. 
Notice that we have in fact proved something stronger than the
statement of the theorem. Namely we have shown that under the
correspondence u ←→ µ,
(4.29) kuk = |µ| (X) =: kµk1 .
Thus the map is an isometry.
CHAPTER 2

Hilbert spaces and operators

1. Hilbert space
We have shown that Lp (X, µ) is a Banach space – a complete
normed space. I shall next discuss the class of Hilbert spaces, a spe-
cial class of Banach spaces, of which L2 (X, µ) is a standard example,
in which the norm arises from an inner product, just as it does in
Euclidean space.
An inner product on a vector space V over C (one can do the real
case too, not much changes) is a sesquilinear form
V ×V →C
written (u, v), if u, v ∈ V . The ‘sesqui-’ part is just linearity in the first
variable
(1.1) (a1 u1 + a2 u2 , v) = a1 (u1 , v) + a2 (u2 , v),
anti-linearity in the second,
(1.2) (u, a1 v1 + a2 v2 ) = \overline{a_1} (u, v1 ) + \overline{a_2} (u, v2 ),
and the conjugacy condition
(1.3) (u, v) = \overline{(v, u)} .
Notice that (1.2) follows from (1.1) and (1.3). If we assume in addition
the positivity condition1
(1.4) (u, u) ≥ 0 , (u, u) = 0 ⇒ u = 0 ,
then
(1.5) kuk = (u, u)1/2
is a norm on V , as we shall see.
Suppose that u, v ∈ V have kuk = kvk = 1. Then (u, v) =
e^{iθ} |(u, v)| for some θ ∈ R. By choice of θ, e^{−iθ} (u, v) = |(u, v)| is

1Notice that (u, u) is real by (1.3).



real, so expanding out using linearity for s ∈ R,

0 ≤ (e^{−iθ} u − sv , e^{−iθ} u − sv)
= kuk² − 2s Re e^{−iθ} (u, v) + s² kvk² = 1 − 2s|(u, v)| + s² .
The minimum of this occurs when s = |(u, v)| and this is negative
unless |(u, v)| ≤ 1. Using linearity, and checking the trivial cases u = 0
or v = 0, shows that
(1.6) |(u, v)| ≤ kuk kvk, ∀ u, v ∈ V .
This is called Schwarz’2 inequality.
Using Schwarz’ inequality
ku + vk2 = kuk2 + (u, v) + (v, u) + kvk2
≤ (kuk + kvk)2
=⇒ ku + vk ≤ kuk + kvk ∀ u, v ∈ V
which is the triangle inequality.
Definition 1.1. A Hilbert space is a vector space V with an inner
product satisfying (1.1) - (1.4) which is complete as a normed space
(i.e., is a Banach space).
Thus we have already shown L2 (X, µ) to be a Hilbert space for any
positive measure µ. The inner product is
(1.7) (f, g) = ∫_X f \overline{g} dµ ,
since then (1.5) gives kf k2 .


Another important identity valid in any inner product spaces is the
parallelogram law:
(1.8) ku + vk2 + ku − vk2 = 2kuk2 + 2kvk2 .
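In fact (1.8) distinguishes the norms arising from inner products; for instance it fails for the supremum norm. In C([0, 1]) take u = 1 and v(x) = x: then ku + vk∞ = 2 and ku − vk∞ = 1, so the left side of (1.8) is 5 while the right side is 4, and hence the supremum norm is not a Hilbert space norm.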
This can be used to prove the basic ‘existence theorem’ in Hilbert space
theory.
Lemma 1.2. Let C ⊂ H, in a Hilbert space, be closed and convex
(i.e., su + (1 − s)v ∈ C if u, v ∈ C and 0 < s < 1). Then C contains
a unique element of smallest norm.
Proof. We can certainly choose a sequence un ∈ C such that
kun k → δ = inf {kvk ; v ∈ C} .
2No ‘t’ in this Schwarz.

By the parallelogram law,


kun − um k2 = 2kun k2 + 2kum k2 − kun + um k2
≤ 2(kun k2 + kum k2 ) − 4δ 2
where we use the fact that (un + um )/2 ∈ C so must have norm at least
δ. Thus {un } is a Cauchy sequence, hence convergent by the assumed
completeness of H. Thus lim un = u ∈ C (since it is assumed closed)
and by the triangle inequality
|kun k − kuk| ≤ kun − uk → 0
So kuk = δ. Uniqueness of u follows again from the parallelogram law
which shows that if ku′ k = δ then
ku − u′ k² = 2kuk² + 2ku′ k² − 4k(u + u′ )/2k² = 4δ² − 4k(u + u′ )/2k² ≤ 0 .

The fundamental fact about a Hilbert space is that each element
v ∈ H defines a continuous linear functional by
H 3 u 7−→ (u, v) ∈ C
and conversely every continuous linear functional arises this way. This
is also called the Riesz representation theorem.
Proposition 1.3. If L : H → C is a continuous linear functional
on a Hilbert space then there is a unique element v ∈ H such that
(1.9) Lu = (u, v) ∀ u ∈ H ,
Proof. Consider the linear space
M = {u ∈ H ; Lu = 0}
the null space of L, a continuous linear functional on H. By the as-
sumed continuity, M is closed. We can suppose that L is not identically
zero (since then v = 0 in (1.9)). Thus there exists w ∉ M . Consider
w + M = {v ∈ H ; v = w + u , u ∈ M } .
This is a closed convex subset of H. Applying Lemma 1.2 it has a
unique smallest element, v ∈ w + M . Since v minimizes the norm on
w + M,
kv + suk2 = kvk2 + 2 Re(su, v) + ksk2 kuk2
is stationary at s = 0. Thus Re(u, v) = 0 ∀ u ∈ M , and the same
argument with s replaced by is shows that (v, u) = 0 ∀ u ∈ M .
Now v ∈ w + M , so Lv = Lw ≠ 0. Consider the element w′ =
v/Lv ∈ H. Since Lw′ = 1, for any u ∈ H
L(u − (Lu)w′ ) = Lu − Lu = 0 .
It follows that u − (Lu)w′ ∈ M so, since w′ is orthogonal to M , if w′′ = w′ /kw′ k² then
(u, w′′ ) = ((Lu)w′ , w′′ ) = Lu (w′ , w′ )/kw′ k² = Lu .
The uniqueness of v follows from the positivity of the norm. 
Corollary 1.4. For any positive measure µ, any continuous linear
functional
L : L2 (X, µ) → C
is of the form
Lf = ∫_X f g dµ , g ∈ L2 (X, µ) .

Notice the apparent power of ‘abstract reasoning’ here! Although


we seem to have constructed g out of nowhere, its existence follows
from the completeness of L2 (X, µ), but it is very convenient to express
the argument abstractly for a general Hilbert space.

2. Spectral theorem
For a bounded operator T on a Hilbert space we define the spectrum
as the set
(2.1) spec(T ) = {z ∈ C; T − z Id is not invertible}.
Proposition 2.1. For any bounded linear operator on a Hilbert
space spec(T ) ⊂ C is a compact subset of {|z| ≤ kT k}.
Proof. We show that the set C \ spec(T ) (generally called the
resolvent set of T ) is open and contains the complement of a sufficiently
large ball. This is based on the convergence of the Neumann series.
Namely if T is bounded and kT k < 1 then
(2.2) (Id −T )^{−1} = Σ_{j=0}^∞ T^j

converges to a bounded operator which is a two-sided inverse of Id −T.


Indeed, kT j k ≤ kT kj so the series is convergent and composing with
Id −T on either side gives a telescoping series reducing to the identity.
Applying this result, we first see that
(2.3) (T − z) = −z(Id −T /z)
is invertible if |z| > kT k. Similarly, if (T − z0 )−1 exists for some z0 ∈ C
then
(2.4) (T − z) = (T − z0 ) − (z − z0 ) = (T − z0 ) (Id −(z − z0 )(T − z0 )^{−1} )
is invertible for |z − z0 | k(T − z0 )^{−1} k < 1. □
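Note in passing that the Neumann series also gives a quantitative bound: if kT k < 1 then k(Id −T )^{−1} k ≤ Σ_{j=0}^∞ kT k^j = (1 − kT k)^{−1} , so from (2.3), k(T − z)^{−1} k ≤ (|z| − kT k)^{−1} whenever |z| > kT k.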

In general it is rather difficult to precisely locate spec(T ).


However for a bounded self-adjoint operator it is easier. One sign
of this is that the norm of the operator has an alternative, simple, char-
acterization. Namely
(2.5) if A∗ = A then sup_{kφk=1} |hAφ, φi| = kAk.

If a is this supremum, then clearly a ≤ kAk. To see the converse, choose
any φ, ψ ∈ H with norm 1 and then replace ψ by e^{iθ} ψ with θ chosen
so that hAφ, ψi is real. Then use the polarization identity to write
(2.6) 4hAφ, ψi = hA(φ + ψ), (φ + ψ)i − hA(φ − ψ), (φ − ψ)i
+ ihA(φ + iψ), (φ + iψ)i − ihA(φ − iψ), (φ − iψ)i.
Now, by the assumed reality we may drop the last two terms and see
that
(2.7) 4|hAφ, ψi| ≤ a(kφ + ψk2 + kφ − ψk2 ) = 2a(kφk2 + kψk2 ) = 4a.
Thus indeed kAk = supkφk=kψk=1 |hAφ, ψi| = a.
We can always subtract a real constant from A so that A′ = A − t
satisfies
(2.8) − inf_{kφk=1} hA′ φ, φi = sup_{kφk=1} hA′ φ, φi = kA′ k.
Then, it follows that A′ ± kA′ k is not invertible. Indeed, there exists a
sequence φn , with kφn k = 1, such that h(A′ − kA′ k)φn , φn i → 0. Thus
(2.9)
k(A′ − kA′ k)φn k² = kA′ φn k² − 2kA′ khA′ φn , φn i + kA′ k² ≤ 2kA′ k² − 2kA′ khA′ φn , φn i → 0.
This shows that A′ − kA′ k cannot be invertible and the same argument
works for A′ + kA′ k. For the original operator A if we set
(2.10) m = inf_{kφk=1} hAφ, φi , M = sup_{kφk=1} hAφ, φi

then we conclude that neither A − m Id nor A − M Id is invertible and


kAk = max(−m, M ).
Proposition 2.2. If A is a bounded self-adjoint operator then, with
m and M defined by (2.10),
(2.11) {m} ∪ {M } ⊂ spec(A) ⊂ [m, M ].
Proof. We have already shown the first part, that m and M are
in the spectrum so it remains to show that A − z is invertible for all
z ∈ C \ [m, M ].
Using the self-adjointness
(2.12) Imh(A − z)φ, φi = − Im zkφk2 .

This implies that A − z is invertible if z ∈ C \ R. First it shows that


(A − z)φ = 0 implies φ = 0, so A − z is injective. Secondly, the range
is closed. Indeed, if (A − z)φn → ψ then applying (2.12) directly shows
that kφn k is bounded and so can be replaced by a weakly convergent
subsequence. Applying (2.12) again to φn −φm shows that the sequence
is actually Cauchy, hence converges to φ so (A − z)φ = ψ is in the
range. Finally, the orthocomplement to this range is the null space of
A∗ − z̄, which is also trivial, so A − z is an isomorphism and (2.12) also
shows that the inverse is bounded, in fact
(2.13) k(A − z)^{−1} k ≤ 1/| Im z| .
When z ∈ R we can replace A by A′ satisfying (2.8). Then we have
to show that A′ − z is invertible for |z| > kA′ k, but that is shown in the
proof of Proposition 2.1. 
The basic estimate leading to the spectral theorem is:
Proposition 2.3. If A is a bounded self-adjoint operator and p is
a real polynomial in one variable,
(2.14) p(t) = Σ_{i=0}^N ci t^i , cN ≠ 0,
then p(A) = Σ_{i=0}^N ci A^i satisfies
(2.15) kp(A)k ≤ sup_{t∈[m,M ]} |p(t)|.

Proof. Clearly, p(A) is a bounded self-adjoint operator. If s ∉
p([m, M ]) then p(A) − s is invertible. Indeed, the roots of p(t) − s
cannot lie in [m, M ], since otherwise s ∈ p([m, M ]). Thus, factorizing
p(t) − s we have
(2.16) p(t) − s = cN ∏_{i=1}^N (t − ti (s)), ti (s) ∉ [m, M ] =⇒ (p(A) − s)^{−1} exists
since p(A) − s = cN ∏_i (A − ti (s)) and each of the factors is invertible.
Thus spec(p(A)) ⊂ p([m, M ]), which is an interval (or a point), and
from Proposition 2.2 we conclude that kp(A)k ≤ sup_{[m,M ]} |p| which
is (2.15). □
Now, reinterpreting (2.15) we have a linear map
(2.17) P(R) 3 p 7−→ p(A) ∈ B(H)

from the real polynomials to the bounded self-adjoint operators which


is continuous with respect to the supremum norm on [m, M ]. Since
polynomials are dense in continuous functions on finite intervals, we
see that (2.17) extends by continuity to a linear map
(2.18) C([m, M ]) ∋ f 7−→ f (A) ∈ B(H), kf (A)k ≤ kf k_{[m,M ]} , (f g)(A) = f (A)g(A)
where the multiplicativity follows by continuity together with the fact
that it is true for polynomials.
Now, consider any two elements φ, ψ ∈ H. Evaluating f (A) on φ
and pairing with ψ gives a linear map
(2.19) C([m, M ]) 3 f 7−→ hf (A)φ, ψi ∈ C.
This is a linear functional on C([m, M ]) to which we can apply the Riesz
representation theorem and conclude that it is defined by integration
against a unique Radon measure µφ,ψ :
(2.20) hf (A)φ, ψi = ∫_{[m,M ]} f dµφ,ψ .

The total mass |µφ,ψ | of this measure is the norm of the functional.
Since it is a Borel measure, we can take the integral over (−∞, b] for any
b ∈ R and, with the uniqueness, this shows that we have a continuous
sesquilinear map
(2.21) Pb (φ, ψ) : H × H ∋ (φ, ψ) 7−→ ∫_{[m,b]} dµφ,ψ ∈ R, |Pb (φ, ψ)| ≤ kAk kφk kψk.

From the Hilbert space Riesz representation theorem it follows that


this sesquilinear form defines, and is determined by, a bounded linear
operator
(2.22) Pb (φ, ψ) = hPb φ, ψi, kPb k ≤ kAk.
In fact, from the functional calculus (the multiplicativity in (2.18)) we
see that
(2.23) Pb∗ = Pb , Pb2 = Pb , kPb k ≤ 1,
so Pb is a projection.
Thus the spectral theorem gives us an increasing (with b) family of
commuting self-adjoint projections such that µφ,ψ ((−∞, b]) = hPb φ, ψi
determines the Radon measure for which (2.20) holds. One can go
further and think of Pb itself as determining a measure
(2.24) µ((−∞, b]) = Pb

which takes values in the projections on H and which allows the func-
tions of A to be written as integrals in the form
(2.25) f (A) = ∫_{[m,M ]} f dµ

of which (2.20) becomes the ‘weak form’. To do so one needs to develop


the theory of such measures and the corresponding integrals. This is
not so hard but I shall not do it.
CHAPTER 3

Distributions

1. Test functions
So far we have largely been dealing with integration. One thing we
have seen is that, by considering dual spaces, we can think of functions
as functionals. Let me briefly review this idea.
Consider the unit ball in Rn ,
Bn = {x ∈ Rn ; |x| ≤ 1} .
I take the closed unit ball because I want to deal with a compact metric
space. We have dealt with several Banach spaces of functions on Bn ,
for example

C(Bn ) = u : Bn → C ; u continuous
 Z 
2 2
L (Bn ) = u : Bn → C; Borel measurable with |u| dx < ∞ .

Here, as always below, dx is Lebesgue measure and functions are iden-


tified if they are equal almost everywhere.
Since Bn is compact we have a natural inclusion
(1.1) C(Bn ) ,→ L2 (Bn ) .
This is also a topological inclusion, i.e., is a bounded linear map, since
(1.2) kukL2 ≤ Cku||∞
where C 2 is the volume of the unit ball.
In general if we have such a set up then
Lemma 1.1. If V ,→ U is a subspace with a stronger norm,
kϕkU ≤ CkϕkV ∀ ϕ ∈ V
then restriction gives a continuous linear map
(1.3) U 0 → V 0 , U 0 3 L 7−→ L̃ = L|V ∈ V 0 , kL̃kV 0 ≤ CkLkU 0 .
If V is dense in U then the map (6.9) is injective.
43
44 3. DISTRIBUTIONS

Proof. By definition of the dual norm


n o
kL̃kV 0 = sup L̃(v) ; kvkV ≤ 1 , v ∈ V
n o
≤ sup L̃(v) ; kvkU ≤ C , v ∈ V
≤ sup {|L(u)| ; kukU ≤ C , u ∈ U }
= CkLkU 0 .
If V ⊂ U is dense then the vanishing of L : U → C on V implies its
vanishing on U .

Going back to the particular case (6.8) we do indeed get a contin-
uous map between the dual spaces
L2 (Bn ) ∼
= (L2 (Bn ))0 → (C(Bn ))0 = M (Bn ) .
Here we use the Riesz representation theorem and duality for Hilbert
spaces. The map use here is supposed to be linear not antilinear, i.e.,
Z
(1.4) L (B ) 3 g 7−→ ·g dx ∈ (C(Bn ))0 .
2 n

So the idea is to make the space of ‘test functions’ as small as reasonably


possible, while still retaining density in reasonable spaces.
Recall that a function u : Rn → C is differentiable at x ∈ Rn if
there exists a ∈ Cn such that
(1.5) |u(x) − u(x) − a · (x − x)| = o(|x − x|) .
The ‘little oh’ notation here means that given  > 0 there exists δ > 0
s.t.
|x − x| < δ ⇒ |u(x) − u(x) − a(x − x)| <  |x − x| .
The coefficients of a = (a1 , . . . , an ) are the partial derivations of u at
x,
∂u
ai = (x)
∂xj
since
u(x + tei ) − u(x)
(1.6) ai = lim ,
t→0 t
ei = (0, . . . , 1, 0, . . . , 0) being the ith basis vector. The function u is
said to be continuously differentiable on Rn if it is differentiable at each
point x ∈ Rn and each of the n partial derivatives are continuous,
∂u
(1.7) : Rn → C .
∂xj
1. TEST FUNCTIONS 45

Definition 1.2. Let C01 (Rn ) be the subspace of C0 (Rn ) = C00 (Rn )
such that each element u ∈ C01 (Rn ) is continuously differentiable and
∂u
∂xj
∈ C0 (Rn ), j = 1, . . . , n.
Proposition 1.3. The function
n
X ∂u
kukC 1 = kuk∞ + k k∞
i=1
∂x1
is a norm on C01 (Rn ) with respect to which it is a Banach space.
Proof. That k kC 1 is a norm follows from the properties of k k∞ .
Namely kukC 1 = 0 certainly implies u = 0, kaukC 1 = |a| kukC 1 and the
triangle inequality follows from the same inequality for k k∞ .
Similarly, the main part of the completeness of C01 (Rn ) follows from
the completeness of C00 (Rn ). If {un } is a Cauchy sequence in C01 (Rn )
then un and the ∂u n
∂xj
are Cauchy in C00 (Rn ). It follows that there are
limits of these sequences,
∂un
un → v , → vj ∈ C00 (Rn ) .
∂xj
However we do have to check that v is continuously differentiable and
∂v
that ∂x j
= vj .
One way to do this is to use the Fundamental Theorem of Calculus
in each variable. Thus
Z t
∂un
un (x + tei ) = (x + sei ) ds + un (x) .
0 ∂xj
As n → ∞ all terms converge and so, by the continuity of the integral,
Z t
u(x + tei ) = vj (x + sei ) ds + u(x) .
0
This shows that the limit in (6.20) exists, so vi (x) is the partial deriva-
tion of u with respect to xi . It remains only to show that u is indeed
differentiable at each point and I leave this to you in Problem 17.

So, almost by definition, we have an example of Lemma 6.17,
C01 (Rn ) ,→ C00 (Rn ).
It is in fact dense but I will not bother showing this (yet). So we know
that
(C00 (Rn ))0 → (C01 (Rn ))0
and we expect it to be injective. Thus there are more functionals on
C01 (Rn ) including things that are ‘more singular than measures’.
46 3. DISTRIBUTIONS

An example is related to the Dirac delta


δ(x)(u) = u(x) , u ∈ C00 (Rn ) ,
namely
∂u
C01 (Rn ) 3 u 7−→
(x) ∈ C .
∂xj
This is clearly a continuous linear functional which it is only just to
denote ∂x∂ j δ(x).
Of course, why stop at one derivative?
Definition 1.4. The space C0k (Rn ) ⊂ C01 (Rn ) k ≥ 1 is defined in-
ductively by requiring that
∂u
∈ C0k−1 (Rn ) , j = 1, . . . , n .
∂xj
The norm on C0k (Rn ) is taken to be
n
X ∂u
(1.8) kukC k = kukC k−1 + k k k−1 .
j=1
∂xj C

These are all Banach spaces, since if {un } is Cauchy in C0k (Rn ),
it is Cauchy and hence convergent in C0k−1 (Rn ), as is ∂un /∂xj , j =
1, . . . , n − 1. Furthermore the limits of the ∂un /∂xj are the derivatives
of the limits by Proposition 1.3.
This gives us a sequence of spaces getting ‘smoother and smoother’
C00 (Rn ) ⊃ C01 (Rn ) ⊃ · · · ⊃ C0k (Rn ) ⊃ · · · ,
with norms getting larger and larger. The duals can also be expected
to get larger and larger as k increases.
As well as looking at functions getting smoother and smoother, we
need to think about ‘infinity’, since Rn is not compact. Observe that
an element g ∈ L1 (Rn ) (with respect to Lebesgue measure by default)
defines a functional on C00 (Rn ) — and hence all the C0k (Rn )s. However a
function such as the constant function 1 is not integrable on Rn . Since
we certainly want to talk about this, and polynomials, we consider a
second condition of smallness at infinity. Let us set
(1.9) hxi = (1 + |x|2 )1/2
a function which is the size of |x| for |x| large, but has the virtue of
being smooth1
1See Problem 18.
1. TEST FUNCTIONS 47

Definition 1.5. For any k, l ∈ N = {1, 2, · · · } set


hxi−l C0k (Rn ) = u ∈ C0k (Rn ) ; u = hxi−l v , v ∈ C0k (Rn ) ,


with norm, kukk,l = kvkC k , v = hxil u.


Notice that the definition just says that u = hxi−l v, with v ∈
It follows immediately that hxi−l C0k (Rn ) is a Banach space
C0k (Rn ).
with this norm.
Definition 1.6. Schwartz’ space2 of test functions on Rn is
S(Rn ) = u : Rn → C; u ∈ hxi−l C0k (Rn ) for all k and l ∈ N .


It is not immediately apparent that this space is non-empty (well


0 is in there but...); that
(1.10) P (x) exp(− |x|2 ) ∈ S(Rn )
for any polynomial P is Problem 19.
Corollary 1.7. S(Rn ) is infinite-dimensional.
In fact the linear space in (1.10) turns out to be dense in S(Rn )
when we sort out the topology – so it will be separable.
Schwartz’ idea is that the dual of S(Rn ) should contain all the ‘in-
teresting’ objects, at least those of ‘polynomial growth’. The problem
is that we do not have a good norm on S(Rn ). Rather we have a lot of
them. Observe that
0 0
hxi−l C0k (Rn ) ⊂ hxi−l C0k (Rn ) if l ≥ l0 and k ≥ k 0 .
Thus we see that as a linear space
\
(1.11) S(Rn ) = hxi−k C0k (Rn ).
k
Since these spaces are getting smaller, we have a countably infinite
number of norms. For this reason S(Rn ) is called a countably normed
space.
Proposition 1.8. For u ∈ S(Rn ), set
(1.12) kuk(k) = khxik ukC k
and define

X ku − vk(k)
(1.13) d(u, v) = 2−k ,
k=0
1 + ku − vk(k)
then d is a distance function in S(Rn ) with respect to which it is a
complete metric space.
2Laurent Schwartz – this one with a ‘t’.
48 3. DISTRIBUTIONS

Proof. The series in (1.13) certainly converges, since


ku − vk(k)
≤ 1.
1 + ku − vk(k)
The first two conditions on a metric are clear,
d(u, v) = 0 ⇒ ku − vkC0 = 0 ⇒ u = v,
and symmetry is immediate. The triangle inequality is perhaps more
mysterious!
Certainly it is enough to show that

(1.14) ˜ v) = ku − vk
d(u,
1 + ku − vk
is a metric on any normed space, since then we may sum over k. Thus
we consider
ku − vk kv − wk
+
1 + ku − vk 1 + kv − wk
ku − vk(1 + kv − wk) + kv − wk(1 + ku − vk)
= .
(1 + ku − vk)(1 + kv − wk)
˜ w) we must show that
Comparing this to d(v,
(1 + ku − vk)(1 + kv − wk)ku − wk
≤ (ku − vk(1 + kv − wk) + kv − wk(1 + ku − vk))(1 + ku − wk).
Starting from the LHS and using the triangle inequality,
LHS ≤ ku − wk + (ku − vk + kv − wk + ku − vkkv − wk)ku − wk
≤ (ku − vk + kv − wk + ku − vkkv − wk)(1 + ku − wk)
≤ RHS.
Thus, d is a metric.
Suppose un is a Cauchy sequence. Thus, d(un , um ) → 0 as n, m →
∞. In particular, given
 > 0 ∃ N s.t. n, m > N implies
d(un , um ) < 2−k ∀ n, m > N.
The terms in (1.13) are all positive, so this implies
kun − um k(k)
<  ∀ n, m > N.
1 + kun − um k(k)
If  < 1/2 this in turn implies that
kun − um k(k) < 2,
1. TEST FUNCTIONS 49

so the sequence is Cauchy in hxi−k C0k (Rn ) for each k. From the com-
pleteness of these spaces it follows that un → u in hxi−k C0k (Rn )j for
each k. Given  > 0 choose k so large that 2−k < /2. Then ∃ N s.t.
n>N
⇒ ku − un k(j) < /2 n > N, j ≤ k.
Hence
X ku − un k(j)
d(un , u) = 2−j
j≤k
1 + ku − un k(j)

X ku − un k(j)
+ 2−j
j>k
1 + ku − un k(j)

≤ /4 + 2−k < .


This un → u in S(Rn ). 
As well as the Schwartz space, S(Rn ), of functions of rapid decrease
with all derivatives, there is a smaller ‘standard’ space of test functions,
namely
(1.15) Cc∞ (Rn ) = {u ∈ S(Rn ); supp(u) b Rn } ,
the space of smooth functions of compact support. Again, it is not
quite obvious that this has any non-trivial elements, but it does as
we shall see. If we fix a compact subset of Rn and look at functions
with support in that set, for instance the closed ball of radius R > 0,
then we get a closed subspace of S(Rn ), hence a complete metric space.
One ‘problem’ with Cc∞ (Rn ) is that it does not have a complete metric
topology which restricts to this topology on the subsets. Rather we
must use an inductive limit procedure to get a decent topology.
Just to show that this is not really hard, I will discuss it briefly
here, but it is not used in the sequel. In particular I will not do this
in the lectures themselves. By definition our space Cc∞ (Rn ) (denoted
traditionally as D(Rn )) is a countable union of subspaces
(1.16) [
Cc∞ (Rn ) = C˙c∞ (B(n)), C˙c∞ (B(n)) = {u ∈ S(Rn ); u = 0 in |x| > n}.
n∈N

Consider
(1.17)
T = {U ⊂ Cc∞ (Rn ); U ∩ C˙c∞ (B(n)) is open in C˙∞ (B(n)) for each n}.
This is a topology on Cc∞ (Rn ) – contains the empty set and the whole
space and is closed under finite intersections and arbitrary unions –
50 3. DISTRIBUTIONS

simply because the same is true for the open sets in C˙∞ (B(n)) for each
n. This is in fact the inductive limit topology. One obvious question
is:- what does it mean for a linear functional u : Cc∞ (Rn ) −→ C to be
continuous? This just means that u−1 (O) is open for each open set in C.
Directly from the definition this in turn means that u−1 (O)∩ C˙∞ (B(n))
should be open in C˙∞ (B(n)) for each n. This however just means that,
restricted to each of these subspaces u is continuous. If you now go
forwards to Lemma 2.3 you can see what this means; see Problem 74.
Of course there is a lot more to be said about these spaces; you can
find plenty of it in the references.

2. Tempered distributions
A good first reference for distributions is [2], [5] gives a more ex-
haustive treatment.
The complete metric topology on S(Rn ) is described above. Next I
want to try to convice you that elements of its dual space S 0 (Rn ), have
enough of the properties of functions that we can work with them as
‘generalized functions’.
First let me develop some notation. A differentiable function ϕ :
n n
R → C has partial derivatives which we have denoted ∂ϕ/∂x √ j:R →
C. For reasons that will become clear later, we put a −1 into the
definition and write
1 ∂ϕ
(2.1) Dj ϕ = .
i ∂xj
We say ϕ is once continuously differentiable if each of these Dj ϕ is
continuous. Then we defined k times continuous differentiability in-
ductively by saying that ϕ and the Dj ϕ are (k − 1)-times continuously
differentiable. For k = 2 this means that
Dj Dk ϕ are continuous for j, k = 1, · · · , n .
Now, recall that, if continuous, these second derivatives are symmetric:
(2.2) Dj Dk ϕ = Dk Dj ϕ .
This means we can use a compact notation for higher derivatives. Put
N0 = {0, 1, . . .}; we call an element α ∈ Nn0 a ‘multi-index’ and if ϕ is
at least k times continuously differentiable, we set3
1 ∂ α1 ∂ αn
(2.3) Dα ϕ = |α| ··· ϕ whenever |α| = α1 +α2 +· · ·+αn ≤ k.
i ∂x1 ∂xn
3Periodicallythere is the possibility of confusion between the two meanings of
|α| but it seldom arises.
2. TEMPERED DISTRIBUTIONS 51

In fact we will use a closely related notation of powers of a variable.


Namely if α is a multi-index we shall also write
(2.4) xα = xα1 1 xα2 2 . . . xαnn .
Now we have defined the spaces.
C0k (Rn ) = ϕ : Rn → C ; Dα ϕ ∈ C00 (Rn ) ∀ |α| ≤ k .

(2.5)
Notice the convention is that Dα ϕ is asserted to exist if it is required
to be continuous! Using hxi = (1 + |x|2 ) we defined
hxi−k C0k (Rn ) = ϕ : Rn → C ; hxik ϕ ∈ C0k (Rn ) ,

(2.6)
and then our space of test functions is
\
S(Rn ) = hxi−k C0k (Rn ) .
k

Thus,
(2.7) ϕ ∈ S(Rn ) ⇔ Dα (hxik ϕ) ∈ C00 (Rn ) ∀ |α| ≤ k and all k .
Lemma 2.1. The condition ϕ ∈ S(Rn ) can be written
hxik Dα ϕ ∈ C00 (Rn ) ∀ |α| ≤ k , ∀ k .
Proof. We first check that
ϕ ∈ C00 (Rn ) , Dj (hxiϕ) ∈ C00 (Rn ) , j = 1, · · · , n
⇔ ϕ ∈ C00 (Rn ) , hxiDj ϕ ∈ C00 (Rn ) , j = 1, · · · , n .
Since
Dj hxiϕ = hxiDj ϕ + (Dj hxi)ϕ
and Dj hxi = 1i xj hxi−1 is a bounded continuous function, this is clear.
Then consider the same thing for a larger k:
(2.8) Dα hxip ϕ ∈ C00 (Rn ) ∀ |α| = p , 0 ≤ p ≤ k
⇔ hxip Dα ϕ ∈ C00 (Rn ) ∀ |α| = p , 0 ≤ p ≤ k .

I leave you to check this as Problem 2.1.
Corollary 2.2. For any k ∈ N the norms
X
khxik ϕkC k and kxα Dxβ ϕk∞
|α|≤k,
|β|≤k

are equivalent.
52 3. DISTRIBUTIONS

Proof. Any reasonable proof of (2.2) shows that the norms


X
khxik ϕkC k and khxik Dβ ϕk∞
|β|≤k

are equivalent. Since there are positive constants such that


   
X X
C 1 1 + |xα | ≤ hxik ≤ C2 1 + |xα |
|α|≤k |α|≤k

the equivalent of the norms follows.



Proposition 2.3. A linear functional u : S(Rn ) → C is continuous
if and only if there exist C, k such that
X
|u(ϕ)| ≤ C sup xα Dxβ ϕ .
Rn
|α|≤k,
|β|≤k

Proof. This is just the equivalence of the norms, since we showed


that u ∈ S 0 (Rn ) if and only if
|u(ϕ)| ≤ Ckhxik ϕkC k
for some k.

Lemma 2.4. A linear map
T : S(Rn ) → S(Rn )
is continuous if and only if for each k there exist C and j such that if
|α| ≤ k and |β| ≤ k
0 0
X
(2.9) sup xα Dβ T ϕ ≤ C sup xα Dβ ϕ ∀ ϕ ∈ S(Rn ).
Rn
|α0 |≤j, |β 0 |≤j

Proof. This is Problem 2.2. 


All this messing about with norms shows that
xj : S(Rn ) → S(Rn ) and Dj : S(Rn ) → S(Rn )
are continuous.
So now we have some idea of what u ∈ S 0 (Rn ) means. Let’s notice
that u ∈ S 0 (Rn ) implies
(2.10) xj u ∈ S 0 (Rn ) ∀ j = 1, · · · , n
(2.11) Dj u ∈ S 0 (Rn ) ∀ j = 1, · · · , n
(2.12) ϕu ∈ S 0 (Rn ) ∀ ϕ ∈ S(Rn )
2. TEMPERED DISTRIBUTIONS 53

where we have to define these things in a reasonable way. Remem-


ber that u ∈ S 0 (Rn ) is “supposed” to be like an integral against a
“generalized function”
Z
(2.13) u(ψ) = u(x)ψ(x) dx ∀ ψ ∈ S(Rn ).
Rn

Since it would be true if u were a function we define


(2.14) xj u(ψ) = u(xj ψ) ∀ ψ ∈ S(Rn ).
Then we check that xj u ∈ S 0 (Rn ):
|xj u(ψ)| = |u(xj ψ)|
X
≤C sup xα Dβ (xj ψ)
Rn
|α|≤k, |β|≤k

X
≤ C0 sup xα Dβ ψ .
Rn
|α|≤k+1, |β|≤k

Similarly we can define the partial derivatives by using the standard


integration by parts formula
Z Z
(2.15) (Dj u)(x)ϕ(x) dx = − u(x)(Dj ϕ(x)) dx
Rn Rn

if u ∈ C01 (Rn ). Thus if u ∈ S 0 (Rn ) again we define


Dj u(ψ) = −u(Dj ψ) ∀ ψ ∈ S(Rn ).
Then it is clear that Dj u ∈ S 0 (Rn ).
Iterating these definition we find that Dα , for any multi-index α,
defines a linear map
(2.16) Dα : S 0 (Rn ) → S 0 (Rn ) .
In general a linear differential operator with constant coefficients is a
sum of such “monomials”. For example Laplace’s operator is
∂2 ∂2 ∂2
∆ = − 2 − 2 − · · · − 2 = D12 + D22 + · · · + Dn2 .
∂x1 ∂x2 ∂xn
We will be interested in trying to solve differential equations such as
∆u = f ∈ S 0 (Rn ) .
We can also multiply u ∈ S 0 (Rn ) by ϕ ∈ S(Rn ), simply defining
(2.17) ϕu(ψ) = u(ϕψ) ∀ ψ ∈ S(Rn ).
54 3. DISTRIBUTIONS

For this to make sense it suffices to check that


X X
(2.18) sup xα Dβ (ϕψ) ≤ C sup xα Dβ ψ .
Rn Rn
|α|≤k, |α|≤k,
|β|≤k |β|≤k

This follows easily from Leibniz’ formula.


Now, to start thinking of u ∈ S 0 (Rn ) as a generalized function we
first define its support. Recall that
(2.19) supp(ψ) = clos {x ∈ Rn ; ψ(x) 6= 0} .
We can write this in another ‘weak’ way which is easier to generalize.
Namely
(2.20) / supp(u) ⇔ ∃ϕ ∈ S(Rn ) , ϕ(p) 6= 0 , ϕu = 0 .
p∈
In fact this definition makes sense for any u ∈ S 0 (Rn ).
Lemma 2.5. The set supp(u) defined by (2.20) is a closed subset of
Rn and reduces to (2.19) if u ∈ S(Rn ).
Proof. The set defined by (2.20) is closed, since
(2.21) supp(u){ = {p ∈ Rn ; ∃ ϕ ∈ S(Rn ), ϕ(p) 6= 0, ϕu = 0}
is clearly open — the same ϕ works for nearby points. If ψ ∈ S(Rn )
we define uψ ∈ S 0 (Rn ), which we will again identify with ψ, by
Z
(2.22) uψ (ϕ) = ϕ(x)ψ(x) dx .

Obviously uψ = 0 =⇒ ψ = 0, simply set ϕ = ψ in (2.22). Thus the


map
(2.23) S(Rn ) 3 ψ 7−→ uψ ∈ S 0 (Rn )
is injective. We want to show that
(2.24) supp(uψ ) = supp(ψ)
on the left given by (2.20) and on the right by (2.19). We show first
that
supp(uψ ) ⊂ supp(ψ).
Thus, we need to see that p ∈ / supp(ψ) ⇒ p ∈ / supp(uψ ). The first
condition is that ψ(x) = 0 in a neighbourhood, U of p, hence there
is a C ∞ function ϕ with support in U and ϕ(p) 6= 0. Then ϕψ ≡ 0.
Conversely suppose p ∈ / supp(uψ ). Then there exists ϕ ∈ S(Rn ) with
ϕ(p) 6= 0 and ϕuψ = 0, i.e., ϕuψ (η) = 0 ∀ η ∈ S(Rn ). By the injectivity
of S(Rn ) ,→ S 0 (Rn ) this means ϕψ = 0, so ψ ≡ 0 in a neighborhood of
p and p ∈/ supp(ψ). 
3. CONVOLUTION AND DENSITY 55

Consider the simplest examples of distribution which are not func-


tions, namely those with support at a given point p. The obvious one
is the Dirac delta ‘function’
(2.25) δp (ϕ) = ϕ(p) ∀ ϕ ∈ S(Rn ) .
We can make many more, because Dα is local
(2.26) supp(Dα u) ⊂ supp(u) ∀ u ∈ S 0 (Rn ) .
Indeed, p ∈/ supp(u) ⇒ ∃ ϕ ∈ S(Rn ), ϕu ≡ 0, ϕ(p) 6= 0. Thus each of
the distributions Dα δp also has support contained in {p}. In fact none
of them vanish, and they are all linearly independent.

3. Convolution and density


We have defined an inclusion map
(3.1) Z
n 0 n
S(R ) 3 ϕ 7−→ uϕ ∈ S (R ), uϕ (ψ) = ϕ(x)ψ(x) dx ∀ ψ ∈ S(Rn ).
Rn

This allows us to ‘think of’ S(Rn ) as a subspace of S 0 (Rn ); that is we


habitually identify uϕ with ϕ. We can do this because we know (3.1)
to be injective. We can extend the map (3.1) to include bigger spaces
C00 (Rn ) 3 ϕ 7−→ uϕ ∈ S 0 (Rn )
Lp (Rn ) 3 ϕ 7−→ uϕ ∈ S 0 (Rn )
(3.2) M (Rn ) 3 µ 7−→ uµ ∈ S 0 (Rn )
Z
uµ (ψ) = ψ dµ ,
Rn

but we need to know that these maps are injective before we can forget
about them.
We can see this using convolution. This is a sort of ‘product’ of
functions. To begin with, suppose v ∈ C00 (Rn ) and ψ ∈ S(Rn ). We
define a new function by ‘averaging v with respect to ψ:’
Z
(3.3) v ∗ ψ(x) = v(x − y)ψ(y) dy .
Rn

The integral converges by dominated convergence, namely ψ(y) is in-


tegrable and v is bounded,
|v(x − y)ψ(y)| ≤ kvkC00 |ψ(y)| .
56 3. DISTRIBUTIONS

We can use the same sort of estimates to show that v ∗ ψ is continuous.


Fix x ∈ Rn ,

(3.4) v ∗ ψ(x + x0 ) − v ∗ ψ(x)


Z
= (v(x + x0 − y) − v(x − y))ψ(y) dy .

To see that this is small for x0 small, we split the integral into two
pieces. Since ψ is very small near infinity, given  > 0 we can choose
R so large that
Z
(3.5) kvk∞ · |ψ(y)| dy ≤ /4 .
|y]|≥R

The set |y| ≤ R is compact and if |x| ≤ R0 , |x0 | ≤ 1 then |x + x0 − y| ≤


R + R0 + 1. A continuous function is uniformly continuous on any
compact set, so we can chose δ > 0 such that
Z
0
(3.6) sup |v(x + x − y) − v(x − y)| · |ψ(y)| dy < /2 .
|x0 |<δ |y|≤R
|y|≤R

Combining (3.5) and (3.6) we conclude that v∗ψ is continuous. Finally,


we conclude that
(3.7) v ∈ C00 (Rn ) ⇒ v ∗ ψ ∈ C00 (Rn ) .
For this we need to show that v ∗ ψ is small at infinity, which follows
from the fact that v is small at infinity. Namely given  > 0 there exists
R > 0 such that |v(y)| ≤  if |y| ≥ R. Divide the integral defining the
convolution into two
Z Z
|v ∗ ψ(x)| ≤ u(y)ψ(x − y)dy + |u(y)ψ(x − y)|dy
|y|>R y<R
≤ /2kψk∞ + kuk∞ sup |ψ|.
B(x,R)

Since ψ ∈ S(Rn ) the last constant tends to 0 as |x| → ∞.


We can do much better than this! Assuming |x0 | ≤ 1 we can use
Taylor’s formula with remainder to write
Z 0 n
0 d 0
X
(3.8) ψ(z + x ) − ψ(z) = ψ(z + tx ) dt = xj · ψ̃j (z, x0 ) .
0 dt j=1

As Problem 23 I ask you to check carefully that


(3.9) ψj (z; x0 ) ∈ S(Rn ) depends continuously on x0 in |x0 | ≤ 1 .
3. CONVOLUTION AND DENSITY 57

Going back to (3.3))we can use the translation and reflection-invariance


of Lebesgue measure to rewrite the integral (by changing variable) as
Z
(3.10) v ∗ ψ(x) = v(y)ψ(x − y) dy .
Rn
This reverses the role of v and ψ and shows that if both v and ψ are in
S(Rn ) then v ∗ ψ = ψ ∗ v.
Using this formula on (3.4) we find
(3.11) Z
0
v ∗ ψ(x + x ) − v ∗ ψ(x) = v(y)(ψ(x + x0 − y) − ψ(x − y)) dy
n
X Z n
X
= xj v(y)ψ̃j (x − y, x0 ) dy = xj (v ∗ ψj (·; x0 )(x) .
j=1 Rn j=1

From (3.9) and what we have already shown, v ∗ ψ(·; x0 ) is continuous


in both variables, and is in C00 (Rn ) in the first. Thus
(3.12) v ∈ C00 (Rn ) , ψ ∈ S(Rn ) ⇒ v ∗ ψ ∈ C01 (Rn ) .
In fact we also see that
∂ ∂ψ
(3.13) v∗ψ =v∗ .
∂xj ∂xj
Thus v ∗ ψ inherits its regularity from ψ.
Proposition 3.1. If v ∈ C00 (Rn ) and ψ ∈ S(Rn ) then
\
(3.14) v ∗ ψ ∈ C0∞ (Rn ) = C0k (Rn ) .
k≥0

Proof. This follows from (3.12), (3.13) and induction. 


Now, let us make a more special choice of ψ. We have shown the
existence of
(3.15) ϕ ∈ Cc∞ (Rn ) , ϕ ≥ 0 , supp(ϕ) ⊂ {|x| ≤ 1} .
R
We can also assume Rn ϕ dx = 1, by multiplying by a positive constant.
Now consider
x
(3.16) ϕt (x) = t−n ϕ 1 ≥ t > 0.
t
This has all the same properties, except that
Z
(3.17) supp ϕt ⊂ {|x| ≤ t} , ϕt dx = 1 .

Proposition 3.2. If v ∈ C00 (Rn ) then as t → 0, vt = v ∗ ϕt → v in


C00 (Rn ).
58 3. DISTRIBUTIONS

Proof. using (3.17) we can write the difference as


Z
(3.18) |vt (x) − v(x)| = | (v(x − y) − v(x))ϕt (y) dy|
Rn
≤ sup |v(x − y) − v(x)| → 0.
|y|≤t

Here we have used the fact that ϕt ≥ 0 has support in |y| ≤ t and has
integral 1. Thus vt → v uniformly on any set on which v is uniformly
continuous, namel Rn ! 
Corollary 3.3. C0k (Rn ) is dense in C0p (Rn ) for any k ≥ p.
Proposition 3.4. S(Rn ) is dense in C0k (Rn ) for any k ≥ 0.
Proof. Take k = 0 first. The subspace Cc0 (Rn ) is dense in C00 (Rn ),
by cutting off outside a large ball. If v ∈ Cc0 (Rn ) has support in
{|x| ≤ R} then
v ∗ ϕt ∈ Cc∞ (Rn ) ⊂ S(Rn )
has support in {|x| ≤ R + 1}. Since v ∗ ϕt → v the result follows for
k = 0.
For k ≥ 1 the same argument works, since Dα (v ∗ ϕt ) = (Dα V ) ∗
ϕt . 
Corollary 3.5. The map from finite Radon measures
(3.19) Mfin (Rn ) 3 µ 7−→ uµ ∈ S 0 (Rn )
is injective.
Now, we want the same result for L2 (Rn ) (and maybe for Lp (Rn ),
1 ≤ p < ∞). I leave the measure-theoretic part of the argument to
you.
Proposition 3.6. Elements of L2 (Rn ) are “continuous in the mean”
i.e.,
Z
(3.20) lim |u(x + t) − u(x)|2 dx = 0 .
|t|→0 Rn

This is Problem 24.


Using this we conclude that
(3.21) S(Rn ) ,→ L2 (Rn ) is dense
as before. First observe that the space of L2 functions of compact
support is dense in L2 (Rn ), since
Z
lim |u(x)|2 dx = 0 ∀ u ∈ L2 (Rn ) .
R→∞ |x|≥R
3. CONVOLUTION AND DENSITY 59

Then look back at the discussion of v ∗ ϕ, now v is replaced by u ∈


L2c (Rn ). The compactness of the support means that u ∈ L1 (Rn ) so in
Z
(3.22) u ∗ ϕ(x) = u(x − y)ϕ(y)dy
Rn
the integral is absolutely convergent. Moreover
|u ∗ ϕ(x + x0 ) − u ∗ ϕ(x)|
Z
= u(y)(ϕ(x + x0 − y) − ϕ(x − y)) dy

≤ Ckuk sup |ϕ(x + x0 − y) − ϕ(x − y)| → 0


|y|≤R

when {|x| ≤ R} large enough. Thus u ∗ ϕ is continuous and the same


argument as before shows that
u ∗ ϕt ∈ S(Rn ) .
Now to see that u ∗ ϕt → u, assuming u has compact support (or not)
we estimate the integral
Z
|u ∗ ϕt (x) − u(x)| = (u(x − y) − u(x))ϕt (y) dy
Z
≤ |u(x − y) − u(x)| ϕt (y) dy .

Using the same argument twice


Z
|u ∗ ϕt (x) − u(x)|2 dx
ZZZ
≤ |u(x − y) − u(x)| ϕt (y) |u(x − y 0 ) − u(x)| ϕt (y 0 ) dx dy dy 0
Z 
2 0 0
≤ |u(x − y) − u(x)| ϕt (y)ϕt (y )dx dy dy
Z
≤ sup |u(x − y) − u(x)|2 dx .
|y|≤t

Note that at the second step here I have used Schwarz’s inequality with
the integrand written as the product
1/2 1/2 1/2 1/2
|u(x − y) − u(x)| ϕt (y)ϕt (y 0 ) · |u(x − y 0 ) − u(x)| ϕt (y)ϕt (y 0 ) .
Thus we now know that
L2 (Rn ) ,→ S 0 (Rn ) is injective.
This means that all our usual spaces of functions ‘sit inside’ S 0 (Rn ).
60 3. DISTRIBUTIONS

Finally we can use convolution with ϕt to show the existence of


smooth partitions of unity. If K b U ⊂ Rn is a compact set in an
open set then we have shown the existence of ξ ∈ Cc0 (Rn ), with ξ = 1
in some neighborhood of K and ξ = 1 in some neighborhood of K and
supp(ξ) b U .
Then consider ξ ∗ ϕt for t small. In fact
supp(ξ ∗ ϕt ) ⊂ {p ∈ Rn ; dist(p, supp ξ) ≤ 2t}
and similarly, 0 ≤ ξ ∗ ϕt ≤ 1 and
ξ ∗ ϕt = 1 at p if ξ = 1 on B(p, 2t) .
Using this we get:
n
S Proposition 3.7. If Ua ⊂ R are open∞ forn a ∈ A and K b
a∈A Ua then there exist finitely
P many ϕi ∈ Cc (R ), with 0 ≤ ϕi ≤ 1,
supp(ϕi ) ⊂ Uai such that ϕi = 1 in a neighbourhood of K.
i
Proof. By the compactness of K we may choose a finite open
subcover. Using Lemma 15.7 we may choose a continuous partition,
φ0i , of unity subordinate to this cover. Using the convolution argument
above we can replace φ0i by φ0i ∗ ϕt for t > 0. If t is sufficiently small
then this is again a partition of unity subordinate to the cover, but
now smooth. 
Next we can make a simple ‘cut off argument’ to show
Lemma 3.8. The space Cc∞ (Rn ) of C ∞ functions of compact support
is dense in S(Rn ).
Proof. Choose ϕ ∈ Cc∞ (Rn ) with ϕ(x) = 1 in |x| ≤ 1. Then given
ψ ∈ S(Rn ) consider the sequence
ψn (x) = ϕ(x/n)ψ(x) .
Clearly ψn = ψ on |x| ≤ n, so if it converges in S(Rn ) it must converge
to ψ. Suppose m ≥ n then by Leibniz’s formula4
Dxα (ψn (x) − ψm (x))
X α  x x 
= β
Dx ϕ( ) − ϕ( ) · Dxα−β ψ(x) .
β≤α
β n m
All derivatives of ϕ(x/n) are bounded, independent of n and ψn = ψm
in |x| ≤ n so for any p

α 0 |x| ≤ n
|Dx (ψn (x) − ψm (x))| ≤ .
Cα,p hxi−2p |x| ≥ n
4Problem 25.
3. CONVOLUTION AND DENSITY 61

Hence ψn is Cauchy in S(Rn ). 


Thus every element of S 0 (Rn ) is determined by its restriction to
Cc∞ (Rn ). The support of a tempered distribution was defined above to
be
(3.23) supp(u) = {x ∈ Rn ; ∃ ϕ ∈ S(Rn ) , ϕ(x) 6= 0 , ϕu = 0}{ .
Using the preceding lemma and the construction of smooth partitions
of unity we find
Proposition 3.9. f u ∈ S 0 (Rn ) and supp(u) = ∅ then u = 0.
Proof. From (3.23), if ψ ∈ S(Rn ), supp(ψu) ⊂ supp(u). If x 3
supp(u) then, by definition, ϕu = 0 for some ϕ ∈ S(Rn ) with ϕ(x) 6= 0.
Thus ϕ 6= 0 on B(x, ) for  > 0 sufficiently small. If ψ ∈ Cc∞ (Rn ) has
support in B(x, ) then ψu = ψ̃ϕu = 0, where ψ̃ ∈ Cc∞ (Rn ):

ψ/ϕ in B(x, )
ψ̃ =
0 elsewhere .
n ∞ n
P K b R we can find ϕj ∈ Cc (R ), supported in
Thus, given such balls,
so that j ϕj ≡ 1 on K but ϕj u = 0. For given µ ∈ Cc∞ (Rn ) apply
this to supp(µ). Then
X X
µ= ϕj µ ⇒ u(µ) = (φj u)(µ) = 0 .
j j

Thus u = 0 on Cc∞ (Rn ), so u = 0. 


The linear space of distributions of compact support will be denoted
Cc−∞ (Rn ); it is often written E 0 (Rn ).
Now let us give a characterization of the ‘delta function’
δ(ϕ) = ϕ(0) ∀ ϕ ∈ S(Rn ) ,
or at least the one-dimensional subspace of S 0 (Rn ) it spans. This is
based on the simple observation that (xj ϕ)(0) = 0 if ϕ ∈ S(Rn )!
Proposition 3.10. If u ∈ S 0 (Rn ) satisfies xj u = 0, j = 1, · · · , n
then u = cδ.
Proof. The main work is in characterizing the null space of δ as
a linear functional, namely in showing that
(3.24) H = {ϕ ∈ S(Rn ); ϕ(0) = 0}
can also be written as
( n
)
X
(3.25) H= ϕ ∈ S(Rn ); ϕ = xj ψj , ϕj ∈ S(Rn ) .
j=1
62 3. DISTRIBUTIONS

Clearly the right side of (3.25) is contained in the left. To see the
converse, suppose first that
(3.26) ϕ ∈ S(Rn ) , ϕ = 0 in |x| < 1 .
Then define

0 |x| < 1
ψ= 2
ϕ/ |x| |x| ≥ 1 .

All the derivatives of 1/ |x|2 are bounded in |x| ≥ 1, so from Leibniz’s


formula it follows that ψ ∈ S(Rn ). Since
X
ϕ= xj (xj ψ)
j

this shows that ϕ of the form (3.26) is in the right side of (3.25). In
general suppose ϕ ∈ S(Rn ). Then
Z t
d
ϕ(x) − ϕ(0) = ϕ(tx) dt
0 dt
(3.27) n Z t
X ∂ϕ
= xj (tx) dt .
j=1 0 ∂xj

Certainly these integrals are C ∞ , but they may not decay rapidly at
infinity. However, choose µ ∈ Cc∞ (Rn ) with µ = 1 in |x| ≤ 1. Then
(3.27) becomes, if ϕ(0) = 0,
ϕ = µϕ + (1 − µ)ϕ
n Z t
X ∂ϕ
= xj ψj + (1 − µ)ϕ , ψj = µ (tx) dt ∈ S(Rn ) .
j=1 0 ∂xj

Since (1 − µ)ϕ is of the form (3.26), this proves (3.25).


Our assumption on u is that xj u = 0, thus
u(ϕ) = 0 ∀ ϕ ∈ H
by (3.25). Choosing µ as above, a general ϕ ∈ S(Rn ) can be written
ϕ = ϕ(0) · µ + ϕ0 , ϕ0 ∈ H .
Then
u(ϕ) = ϕ(0)u(µ) ⇒ u = cδ , c = u(µ) .

3. CONVOLUTION AND DENSITY 63

This result is quite powerful, as we shall soon see. The Fourier


transform of an element ϕ ∈ S(Rn ) is5
Z
(3.28) ϕ̂(ξ) = e−ix·ξ ϕ(x) dx , ξ ∈ Rn .

The integral certainly converges, since |ϕ| ≤ Chxi−n−1 . In fact it fol-


lows easily that ϕ̂ is continuous, since
Z
0 0
|ϕ̂(ξ) − ϕ̂(ξ )| ∈ eix−ξ − e−x·ξ |ϕ| dx

→ 0 as ξ 0 → ξ .
In fact
Proposition 3.11. Fourier transformation, (3.28), defines a con-
tinuous linear map
(3.29) F : S(Rn ) → S(Rn ) , Fϕ = ϕ̂ .
Proof. Differentiating under the integral6 sign shows that
Z
∂ξj ϕ̂(ξ) = −i e−ix·ξ xj ϕ(x) dx .

Since the integral on the right is absolutely convergent that shows that
(remember the i’s)
(3.30) xj ϕ , ∀ ϕ ∈ S(Rn ) .
Dξj ϕ̂ = −d
Similarly, if we multiply by ξj and observe that ξj e−ix·ξ = i ∂x∂ j e−ix·ξ
then integration by parts shows
Z
∂ −ix·ξ
(3.31) ξj ϕ̂ = i ( e )ϕ(x) dx
∂xj
Z
∂ϕ
= −i e−ix·ξ dx
∂xj
n
D j ϕ = ξj ϕ̂ , ∀ ϕ ∈ S(R ) .
d
Since xj ϕ, Dj ϕ ∈ S(Rn ) these results can be iterated, showing that
ξ α Dξβ ϕ̂ = F (−1)|β| Dα x xβ ϕ .

(3.32)

Thus ξ α Dξβ ϕ̂ ≤ Cαβ sup hxi+n+1 Dα x xβ ϕ ≤ Ckhxin+1+|β| ϕkC |α| , which


shows that F is continuous as a map (3.32).

5Normalizations vary, but it doesn’t matter much.
6See [6]
64 3. DISTRIBUTIONS

Suppose ϕ ∈ S(Rn ). Since ϕ̂ ∈ S(Rn ) we can consider the distri-


bution u ∈ S 0 (Rn )
Z
(3.33) u(ϕ) = ϕ̂(ξ) dξ .
Rn

The continuity of u follows from the fact that integration is continuous


and (3.29). Now observe that
Z
u(xj ϕ) = xd
j ϕ(ξ) dξ
Rn
Z
=− Dξj ϕ̂ dξ = 0
Rn

where we use (3.30). Applying Proposition 3.10 we conclude that u =


cδ for some (universal) constant c. By definition this means
Z
(3.34) ϕ̂(ξ) dξ = cϕ(0) .
Rn

So what is the constant? To find it we need to work out an example.


The simplest one is
ϕ = exp(− |x|2 /2) .
Lemma 3.12. The Fourier transform of the Gaussian exp(− |x|2 /2)
is the Gaussian (2π)n/2 exp(− |ξ|2 /2).
Proof. There are two obvious methods — one uses complex anal-
ysis (Cauchy’s theorem) the other, which I shall follow, uses the unique-
ness of solutions to ordinary differentialQequations.
First observe that exp(− |x|2 /2) = j exp(−x2j /2). Thus7
n
2 /2
Y
ϕ̂(ξ) = ψ̂(ξj ) , ψ(x) = e−x ,
j=1

being a function of one variable. Now ψ satisfies the differential equa-


tion
(∂x + x) ψ = 0 ,
and is the only solution of this equation up to a constant multiple. By
(3.30) and (3.31) its Fourier transform satisfies
d
∂d
x ψ + xψ = iξ ψ̂ + i
c ϕ̂ = 0 .

7Really by Fubini’s theorem, but here one can use Riemann integrals.
4. FOURIER INVERSION 65
2
This is the same equation, but in the ξ variable. Thus ψ̂ = ce−|ξ| /2
.
Again we need to find the constant. However,
Z
2
ψ̂(0) = c = e−x /2 dx = (2π)1/2

by the standard use of polar coordinates:


Z Z ∞ Z 2π
−(x2 +y 2 )/2 2
2
c = e dx dy = e−r /2 r dr dθ = 2π .
Rn 0 0

This proves the lemma.



Thus we have shown that for any ϕ ∈ S(Rn )
Z
(3.35) ϕ̂(ξ) dξ = (2π)n ϕ(0) .
Rn

Since this is true for ϕ = exp(− |x|2 /2). The identity allows us to
invert the Fourier transform.

4. Fourier inversion
It is shown above that the Fourier transform satisfies the identity
Z
−n
(4.1) ϕ(0) = (2π) ϕ̂(ξ) dξ ∀ ϕ ∈ S(Rn ) .
Rn

If y ∈ Rn and ϕ ∈ S(Rn ) set ψ(x) = ϕ(x + y). The translation-


invariance of Lebesgue measure shows that
Z
ψ̂(ξ) = e−ix·ξ ϕ(x + y) dx

= eiy·ξ ϕ̂(ξ) .
Applied to ψ the inversion formula (4.1) becomes
Z
−n
(4.2) ϕ(y) = ψ(0) = (2π) ψ̂(ξ) dξ
Z
−n
= (2π) eiy·ξ ϕ̂(ξ) dξ .
Rn

Theorem 4.1. Fourier transform F : S(Rn ) → S(Rn ) is an iso-


morphism with inverse
Z
n n −n
(4.3) G : S(R ) → S(R ) , Gψ(y) = (2π) eiy·ξ ψ(ξ) dξ .
66 3. DISTRIBUTIONS

Proof. The identity (4.2) shows that F is 1 − 1, i.e., injective,


since we can remove ϕ from ϕ̂. Moreover,
(4.4) Gψ(y) = (2π)−n Fψ(−y)
So G is also a continuous linear map, G : S(Rn ) → S(Rn ). Indeed
the argument above shows that G ◦ F = Id and the same argument,
with some changes of sign, shows that F · G = Id. Thus F and G are
isomorphisms.

Lemma 4.2. For all ϕ, ψ ∈ S(Rn ), Paseval’s identity holds:
Z Z
−n
(4.5) ϕψ dx = (2π) ϕ̂ψ̂ dξ .
Rn Rn

Proof. Using the inversion formula on ϕ,


Z Z
−n
eix·ξ ϕ̂(ξ) dξ ψ(x) dx

ϕψ dx = (2π)
Z Z
−n
= (2π) ϕ̂(ξ) e−ix·ξ ψ(x) dx dξ
Z
−n
= (2π) ϕ̂(ξ)ϕ̂(ξ) dξ .

Here the integrals are absolutely convergent, justifying the exchange of


orders.

Proposition 4.3. Fourier transform extends to an isomorphism
(4.6) F : L2 (Rn ) → L2 (Rn ) .
Proof. Setting ϕ = ψ in (4.5) shows that
(4.7) kFϕkL2 = (2π)n/2 kϕkL2 .
In particular this proves, given the known density of S(Rn ) in L2 (Rn ),
that F is an isomorphism, with inverse G, as in (4.6).

For any m ∈ R
hxim L2 (Rn ) = u ∈ S 0 (Rn ) ; hxi−m û ∈ L2 (Rn )


is a well-defined subspace. We define the Sobolev spaces on Rn by, for


m≥0
H m (Rn ) = u ∈ L2 (Rn ) ; û = Fu ∈ hξi−m L2 (Rn ) .

(4.8)
0
Thus H m (Rn ) ⊂ H m (Rn ) if m ≥ m0 , H 0 (Rn ) = L2 (Rn ) .
4. FOURIER INVERSION 67

Lemma 4.4. If m ∈ N is an integer, then


(4.9) u ∈ H m (Rn ) ⇔ Dα u ∈ L2 (Rn ) ∀ |α| ≤ m .
Proof. By definition, u ∈ H m (Rn ) implies that hξi−m û ∈ L2 (Rn ).
Since D
d α u = ξ α û this certainly implies that D α u ∈ L2 (Rn ) for |α| ≤ m.

Conversely if Dα u ∈ L2 (Rn ) for all |α| ≤ m then ξ α û ∈ L2 (Rn ) for all


|α| ≤ m and since
X
hξim ≤ Cm |ξ α | .
|α|≤m

this in turn implies that hξim û ∈ L2 (Rn ).




Now that we have considered the Fourier transform of Schwartz


test functions we can use the usual method, of duality, to extend it to
tempered distributions. If we set η = ψ̂ then ψ̂ = η and ψ = G ψ̂ = Gη
so
Z
−n
ψ(x) = (2π) e−ix·ξ ψ̂(ξ) dξ
Z
−n
= (2π) e−ix·ξ η(ξ) dξ = (2π)−n η̂(x).

Substituting in (4.5) we find that


Z Z
ϕη̂ dx = ϕ̂η dξ .

Now, recalling how we embed S(Rn ) ,→ S 0 (Rn ) we see that


(4.10) uϕ̂ (η) = uϕ (η̂) ∀ η ∈ S(Rn ) .
Definition 4.5. If u ∈ S 0 (Rn ) we define its Fourier transform by
(4.11) û(ϕ) = u(ϕ̂) ∀ ϕ ∈ S(Rn ) .
As a composite map, û = u · F, with each term continuous, û is
continuous, i.e., û ∈ S 0 (Rn ).
Proposition 4.6. The definition (4.7) gives an isomorphism
F : S 0 (Rn ) → S 0 (Rn ) , Fu = û
satisfying the identities
(4.12) D
d αu = ξαu , x
d α u = (−1)|α| D α û .
68 3. DISTRIBUTIONS

Proof. Since û = u ◦ F and G is the 2-sided inverse of F,


(4.13) u = û ◦ G
gives the inverse to F : S 0 (Rn ) → S 0 (Rn ), showing it to be an isomor-
phism. The identities (4.12) follow from their counterparts on S(Rn ):

D
d α u(ϕ) = D α u(ϕ̂) = u((−1)|α| D α ϕ̂)

α ϕ) = û(ξ α ϕ) = ξ α û(ϕ) ∀ ϕ ∈ S(Rn ) .


= u(ξd


We can also define Sobolev spaces of negative order:


H m (Rn ) = u ∈ S 0 (Rn ) ; û ∈ hξi−m L2 (Rn ) .

(4.14)
Proposition 4.7. If m ≤ 0 is an integer then u ∈ H m (Rn ) if and
only if it can be written in the form
X
(4.15) u= Dα vα , vα ∈ L2 (Rn ) .
|α|≤−m

Proof. If u ∈ S 0 (Rn ) is of the form (4.15) then


X
(4.16) û = ξ α v̂α with v̂α ∈ L2 (Rn ) .
|α|≤−m

Thus hξim û = α m α m
P
|α|≤−m ξ hξi v̂α . Since all the factors ξ hξi are
2 n m 2 n
bounded, each term here is in L (R ), so hξi û ∈ L (R ) which is the
definition, u ∈ hξi−m L2 (Rn ).
Conversely, suppose u ∈ H m (Rn ), i.e., hξim û ∈ L2 (Rn ). The func-
tion
 
X
 |ξ α | · hξim ∈ L2 (Rn ) (m < 0)
|α|≤−m

is bounded below by a positive constant. Thus


 −1
X
v= |ξ α | û ∈ L2 (Rn ) .
|α|≤−m

Each of the functions v̂α = sgn(ξ α )v̂ ∈ L2 (Rn ) so the identity (4.16),
and hence (4.15), follows with these choices.

4. FOURIER INVERSION 69

Proposition 4.8. Each of the Sobolev spaces H m (Rn ) is a Hilbert


space with the norm and inner product
Z 1/2
2 2m
(4.17) kukH m = |û(ξ)| hξi dξ ,
Rn
Z
hu, vi = û(ξ)v̂(ξ)hξi2m dξ .
Rn
The Schwartz space S(R ) ,→ H m (Rn ) is dense for each m and the
n

pairing
(4.18) H m (Rn ) × H −m (Rn ) 3 (u, u0 ) 7−→
Z
0
((u, u )) = û0 (ξ)û0 (·ξ) dξ ∈ C
Rn
n 0
gives an identification (H m (R )) = H −m (Rn ).
Proof. The Hilbert space property follows essentially directly from
the definition (4.14) since hξi−m L2 (Rn ) is a Hilbert space with the norm
(4.17). Similarly the density of S in H m (Rn ) follows, since S(Rn ) dense
in L2 (Rn ) (Problem L11.P3) implies hξi−m S(Rn ) = S(Rn ) is dense in
hξi−m L2 (Rn ) and so, since F is an isomorphism in S(Rn ), S(Rn ) is
dense in H m (Rn ).
Finally observe that the pairing in (4.18) makes sense, since hξi−m û(ξ),
hξi û (ξ) ∈ L2 (Rn ) implies
m 0

û(ξ))û0 (−ξ) ∈ L1 (Rn ) .


Furthermore, by the self-duality of L2 (Rn ) each continuous linear func-
tional
U : H m (Rn ) → C , U (u) ≤ CkukH m
can be written uniquely in the form
U (u) = ((u, u0 )) for some u0 ∈ H −m (Rn ) .

Notice that if u, u0 ∈ S(Rn ) then
Z
0
((u, u )) = u(x)u0 (x) dx .
Rn
This is always how we “pair” functions — it is the natural pairing on
L2 (Rn ). Thus in (4.18) what we have shown is that this pairing on test
function
Z
0 0
n n
S(R ) × S(R ) 3 (u, u ) 7−→ ((u, u )) = u(x)u0 (x) dx
Rn
70 3. DISTRIBUTIONS

extends by continuity to H m (Rn ) × H −m (Rn ) (for each fixed m) when


it identifies H −m (Rn ) as the dual of H m (Rn ). This was our ‘picture’
at the beginning.
For m > 0 the spaces H m (Rn ) represents elements of L2 (Rn ) that
have “m” derivatives in L2 (Rn ). For m < 0 the elements are ?? of “up
to −m” derivatives of L2 functions. For integers this is precisely ??.

5. Sobolev embedding
The properties of Sobolev spaces are briefly discussed above. If
m is a positive integer then u ∈ H m (Rn ) ‘means’ that u has up to
m derivatives in L2 (Rn ). The question naturally arises as to the sense
in which these ‘weak’ derivatives correspond to old-fashioned ‘strong’
derivatives. Of course when m is not an integer it is a little harder
to imagine what these ‘fractional derivatives’ are. However the main
result is:
Theorem 5.1 (Sobolev embedding). If u ∈ H m (Rn ) where m >
n/2 then u ∈ C00 (Rn ), i.e.,
(5.1) H m (Rn ) ⊂ C00 (Rn ) , m > n/2 .
Proof. By definition, u ∈ H m (Rn ) means v ∈ S 0 (Rn ) and hξim û(ξ) ∈
L2 (Rn ). Suppose first that u ∈ S(Rn ). The Fourier inversion formula
shows that
Z
n
(2π) |u(x)| = eix·ξ û(ξ) dξ
Z 1/2 !1/2
X
≤ hξi2m |û(ξ)|2 dξ · hξi−2m dξ .
Rn Rn

Now, if m > n/2 then the second integral is finite. Since the first
integral is the norm on H m (Rn ) we see that
(5.2) sup |u(x)| = kukL∞ ≤ (2π)−n kukH m , m > n/2 .
Rn

This is all for u ∈ S(Rn ), but S(Rn ) ,→ H m (Rn ) is dense. The


estimate (5.2) shows that if uj → u in H m (Rn ), with uj ∈ S(Rn ), then
uj → u0 in C00 (Rn ). In fact u0 = u in S R0 (Rn ) since uj → u in L2 (Rn )
and uj → u0 in C00 (Rn ) both imply that uj ϕ converges, so
Z Z Z
uj ϕ → uϕ = u0 ϕ ∀ ϕ ∈ S(Rn ).
Rn Rn Rn


5. SOBOLEV EMBEDDING 71

Notice here the precise meaning of u = u0 , u ∈ H m (Rn ) ⊂ L2 (Rn ),


0
u ∈ C00 (Rn ). When identifying u ∈ L2 (Rn ) with the corresponding
tempered distribution, the values on any set of measure zero ‘are lost’.
Thus as functions (5.1) means that each u ∈ H m (Rn ) has a represen-
tative u0 ∈ C00 (Rn ).
We can extend this to higher derivatives by noting that
Proposition 5.2. If u ∈ H m (Rn ), m ∈ R, then Dα u ∈ H m−|α| (Rn )
and
(5.3) Dα : H m (Rn ) → H m−|α| (Rn )
is continuous.
Proof. First it is enough to show that each Dj defines a continuous
linear map
(5.4) Dj : H m (Rn ) → H m−1 (Rn ) ∀ j
since then (5.3) follows by composition.
If m ∈ R then u ∈ H m (Rn ) means û ∈ hξi−m L2 (Rn ). Since D
dju =
ξj · û, and
|ξj | hξi−m ≤ Cm hξi−m+1 ∀ m
we conclude that Dj u ∈ H m−1 (Rn ) and
kDj ukH m−1 ≤ Cm kukH m .

Applying this result we see
n
Corollary 5.3. If k ∈ N0 and m > 2
+ k then
(5.5) H m (Rn ) ⊂ C0k (R ) .
n

Proof. If |α| ≤ k, then Dα u ∈ H m−k (Rn ) ⊂ C00 (Rn ). Thus the


‘weak derivatives’ Dα u are continuous. Still we have to check that this
means that u is itself k times continuously differentiable. In fact this
again follows from the density of S(Rn ) in H m (Rn ). The continuity
in (5.3) implies that if uj → u in H m (Rn ), m > n2 + k, then uj → u0
in C0k (Rn ) (using its completeness). However u = u0 as before, so
u ∈ C0k (Rn ).

In particular we see that
\
(5.6) H ∞ (Rn ) = H m (Rn ) ⊂ C ∞ (Rn ) .
m
These functions are not in general Schwartz test functions.
72 3. DISTRIBUTIONS

Proposition 5.4. Schwartz space can be written in terms of weighted


Sobolev spaces
\
(5.7) S(Rn ) = hxi−k H k (Rn ) .
k

Proof. This follows directly from (5.5) since the left side is con-
tained in \
hxi−k C0k−n (Rn ) ⊂ S(Rn ).
k

Theorem 5.5 (Schwartz representation). Any tempered distribu-
tion can be written in the form of a finite sum
X
(5.8) u= xα Dxβ uαβ , uαβ ∈ C00 (Rn ).
|α|≤m
|β|≤m

or in the form
X
(5.9) u= Dxβ (xα vαβ ), vαβ ∈ C00 (Rn ).
|α|≤m
|β|≤m

Thus every tempered distribution is a finite sum of derivatives of


continuous functions of poynomial growth.
Proof. Essentially by definition any u ∈ S 0 (Rn ) is continuous with
respect to one of the norms khxik ϕkC k . From the Sobolev embedding
theorem we deduce that, with m > k + n/2,
|u(ϕ)| ≤ Ckhxik ϕkH m ∀ ϕ ∈ S(Rn ).
This is the same as
hxi−k u(ϕ) ≤ CkϕkH m ∀ ϕ ∈ S(Rn ).
which shows that hxi−k u ∈ H −m (Rn ), i.e., from Proposition 4.8,
X
hxi−k u = Dα uα , uα ∈ L2 (Rn ) .
|α|≤m

In fact, choose j > n/2 and consider vα ∈ H j (Rn ) defined by


v̂α = hξi−j ûα . As in the proof of Proposition 4.14 we conclude that
X
uα = Dβ u0α,β , u0α,β ∈ H j (Rn ) ⊂ C00 (Rn ) .
|β|≤j
5. SOBOLEV EMBEDDING 73

Thus,8
X
(5.10) u = hxik Dαγ vγ , vγ ∈ C00 (Rn ) .
|γ|≤M

To get (5.9) we ‘commute’ the factor hxik to the inside; since I have
not done such an argument carefully so far, let me do it as a lemma.
Lemma 5.6. For any γ ∈ Nn0 there are polynomials pα,γ (x) of de-
grees at most |γ − α| such that
X
hxik Dγ v = Dγ−α pα,γ hxik−2|γ−α| v .

α≤γ

Proof. In fact it is convenient to prove a more general result. Sup-


pose p is a polynomial of a degree at most j then there exist polynomials
of degrees at most j + |γ − α| such that
X
(5.11) phxik Dγ v = Dγ−α (pα,γ hxik−2|γ−α| v) .
α≤γ

The lemma follows from this by taking p = 1.


Furthermore, the identity (5.11) is trivial when γ = 0, and proceed-
ing by induction we can suppose it is known whenever |γ| ≤ L. Taking
|γ| = L + 1,
0
Dγ = Dj Dγ |γ 0 | = L.
Writing the identity for γ 0 as
0 0 0 0 0
X
phxik Dγ = Dγ −α (pα0 ,γ 0 hxik−2|γ −α | v)
α0 ≤γ 0

we may differentiate with respect to xj . This gives


0
phxik Dγ = −Dj (phxik ) · Dγ v
0
X
+ Dγ−α (p0α0 ,γ 0 hxik−2|γ−α|+2 v) .
|α0 |≤γ

The first term on the right expands to


0 1 0
(−(Dj p) · hxik Dγ v − kpxj hxik−2 Dγ v) .
i
We may apply the inductive hypothesis to each of these terms and
rewrite the result in the form (5.11); it is only necessary to check the
order of the polynomials, and recall that hxi2 is a polynomial of degree
2. 
8This is probably the most useful form of the representation theorem!
74 3. DISTRIBUTIONS

Applying Lemma 5.6 to (5.10) gives (5.9), once negative powers of


hxi are absorbed into the continuous functions. Then (5.8) follows from
(5.9) and Leibniz’s formula. 

6. Differential operators.
In the last third of the course we will apply what we have learned
about distributions, and a little more, to understand properties of dif-
ferential operators with constant coefficients. Before I start talking
about these, I want to prove another density result.
So far we have not defined a topology on S 0 (Rn ) – I will leave
this as an optional exercise.9 However we shall consider a notion of
convergence. Suppose uj ∈ S 0 (Rn ) is a sequence in S 0 (Rn ). It is said
to converge weakly to u ∈ S 0 (Rn ) if
(6.1) uj (ϕ) → u(ϕ) ∀ ϕ ∈ S(Rn ) .
There is no ‘uniformity’ assumed here, it is rather like pointwise con-
vergence (except the linearity of the functions makes it seem stronger).
Proposition 6.1. The subspace S(Rn ) ⊂ S 0 (Rn ) is weakly dense,
i.e., each u ∈ S 0 (Rn ) is the weak limit of a subspace uj ∈ S(Rn ).
Proof. We can use Schwartz representation theorem to write, for
some m depending on u,
X
u = hxim Dα uα , uα ∈ L2 (Rn ) .
|α|≤m

We know that S(Rn ) is dense in L2 (Rn ), in the sense of metric spaces


so we can find uα,j ∈ S(Rn ), uα,j → uα in L2 (Rn ). The density result
then follows from the basic properties of weak convergence. 
Proposition 6.2. If uj → u and u0j → u0 weakly in S 0 (Rn ) then
cuj → cu, uj + u0j → u + u0 , Dα uj → Dα u and hxim uj → hxim u weakly
in S 0 (Rn ).
Proof. This follows by writing everyting in terms of pairings, for
example if ϕ ∈ S(Rn )
Dα uj (ϕ) = uj ((−1)(α) Dα ϕ) → u((−1)(α) Dα ϕ) = Dα u(ϕ) .

This weak density shows that our definition of Dj , and xj × are
unique if we require Proposition 6.2 to hold.
9Problem 34.
6. DIFFERENTIAL OPERATORS. 75

We have discussed differentiation as an operator (meaning just a


linear map between spaces of function-like objects)
Dj : S 0 (Rn ) → S 0 (Rn ) .
Any polynomial on Rn
X
p(ξ) = pα ξ α , p α ∈ C
|α|≤m

defines a differential operator10


X
(6.2) p(D)u = pα D α u .
|α|≤m

Before discussing any general theorems let me consider some exam-


ples.
(6.3) On R2 , ∂ = ∂x + i∂y “d-bar operator”
X n
n
(6.4) on R , ∆ = Dj2 “Laplacian”
j=1
n n+1
(6.5) on R × R = R , Dt2 − ∆“Wave operator”
(6.6) onR × Rn = Rn+1 , ∂t + ∆“Heat operator”
(6.7) on R × Rn = Rn+1 , Dt + ∆“Schrödinger operator”

Functions, or distributions, satisfying ∂u = 0 are said to be holo-


morphic, those satisfying ∆u = 0 are said to be harmonic.
Definition 6.3. An element E ∈ S 0 (Rn ) satisfying
(6.8) P (D)E = δ
is said to be a (tempered) fundamental solution of P (D).
Theorem 6.4 (without proof). Every non-zero constant coefficient
differential operator has a tempered fundamental solution.
This is quite hard to prove and not as interetsing as it might seem.
We will however give lots of examples, starting with ∂. Consider the
function
1
(6.9) E(x, y) = (x + iy)−1 , (x, y) 6= 0 .

10More correctly a partial differential operator with constant coefficients.
76 3. DISTRIBUTIONS

Lemma 6.5. E(x, y) is locally integrable and so defines E ∈ S 0 (R2 )


by
Z
1
(6.10) E(ϕ) = (x + iy)−1 ϕ(x, y) dx dy ,
2π R2

and E so defined is a tempered fundamental solution of ∂.


Proof. Since (x + iy)−1 is smooth and bounded away from the
origin the local integrability follows from the estimate, using polar co-
ordinates,
Z Z 2π Z 1
dx dy r dr dθ
(6.11) = = 2π .
|(x,y)|≤1 |x + iy| 0 0 r
Differentiating directly in the region where it is smooth,
∂x (x + iy)−1 = −(x + iy)−2 , ∂y (x + iy)−1 = −i(x ∈ iy)−2
so indeed, ∂E = 0 in (x, y) 6= 0.11
The derivative is really defined by
(6.12) (∂E)(ϕ) = E(−∂ϕ)
Z
1
= lim − (x + iy)−1 ∂ϕ dx dy .
↓0 2π |x|≥
|y|≥

Here I have cut the space {|x| ≤  , |y| ≤ } out of the integral and used
the local integrability in taking the limit as  ↓ 0. Integrating by parts
in x we find
Z Z
−1
− (x + iy) ∂x ϕ dx dy = (∂x (x + iy)−1 )ϕ dx dy
|x|≥ |x|≥
|y|≥ |y|≥
Z Z
−1
+ (x + iy) ϕ(x, y) dy − (x + iy)−1 ϕ(x, y) dy .
|y|≤ |y|≤
x= x=−

There is a corrsponding formula for integration by parts in y so,


recalling that ∂E = 0 away from (0, 0),

(6.13) 2π∂E(ϕ) =
Z
lim [( + iy)−1 ϕ(, y) − (− + iy)−1 ϕ(−, y)] dy
↓0 |y|≤
Z
+ i lim [(x + i)−1 ϕ(x, ) − (x − i)−1 ϕ(x, )] dx ,
↓0 |x|≤

11Thus at this stage we know ∂E must be a sum of derivatives of δ.


6. DIFFERENTIAL OPERATORS. 77

assuming that both limits exist. Now, we can write


ϕ(x, y) = ϕ(0, 0) + xψ1 (x1 y) + yψ2 (x, y) .
Replacing ϕ by either xψ1 or yψ2 in (6.13) both limits are zero. For
example
Z Z
−1
( + iy) ψ1 (, y) dy ≤ |ψ1 | → 0 .
|y|≤ |y|≤

Thus we get the same result in (6.13) by replacing ϕ(x, y) by ϕ(0, 0).
Then 2π∂E(ϕ) = cϕ(0),
Z Z
dy dy
c = lim 2 2 2
= lim < 2
= 2π .
↓0 |y|≤  + y ↓0 |y|≤1 1 + y


Let me remind you that we have already discussed the convolution
of functions
Z
u ∗ v(x) = u(x − y)v(y) dy = v ∗ u(x) .

This makes sense provided u is of slow growth and s ∈ S(Rn ). In fact


we can rewrite the definition in terms of pairing
(6.14) (u ∗ ϕ)(x) = hu, ϕ(x − ·)i
where the · indicates the variable in the pairing.
Theorem 6.6 (Hörmander, Theorem 4.1.1). If u ∈ S 0 (Rn ) and
ϕ ∈ S(Rn ) then u ∗ ϕ ∈ S 0 (Rn ) ∩ C ∞ (Rn ) and if supp(ϕ) b Rn
supp(u ∗ ϕ) ⊂ supp(u) + supp(ϕ) .
For any multi-index α
Dα (u ∗ ϕ) = Dα u ∗ ϕ = u ∗ Dα ϕ .
Proof. If ϕ ∈ S(Rn ) then for any fixed x ∈ Rn ,
ϕ(x − ·) ∈ S(Rn ) .
Indeed the seminorm estimates required are
sup(1 + |y|2 )k/2 |Dα y ϕ(x − y)| < ∞ ∀ α, k > 0 .
y

Since Dα y ϕ(x − y) = (−1)|α| (Dα ϕ)(x − y) and


(1 + |y|2 ) ≤ (1 + |x − y|2 )(1 + |x|2 )
we conclude that
k(1 + |y|2 )k/2 Dα y (x − y)kL∞ ≤ (1 + |x|2 )k/2 khyik Dα y ϕ(y)kL∞ .
78 3. DISTRIBUTIONS

The continuity of u ∈ S 0 (Rn ) means that for some k


|u(ϕ)| ≤ C sup k(y)k Dα ϕkL∞
|α|≤k

so it follows that
(6.15) |u ∗ ϕ(x)| = |hu, ϕ(x − ·)i| ≤ C(1 + |x|2 )k/2 .
The argument above shows that x 7→ ϕ(x − ·) is a continuous func-
tion of x ∈ Rn with values in S(Rn ), so u ∗ ϕ is continuous and satisfies
(6.15). It is therefore an element of S 0 (Rn ).
Differentiability follows in the same way since for each j, with ej
the jth unit vector
ϕ(x + sej − y) − ϕ(x − y)
∈ S(Rn )
s
is continuous in x ∈ Rn , s ∈ R. Thus, u ∗ ϕ has continuous partial
derivatives and
Dj u ∗ ϕ = u ∗ Dj ϕ .
The same argument then shows that u∗ϕ ∈ C ∞ (Rn ). That Dj (u∗ϕ) =
Dj u ∗ ϕ follows from the definition of derivative of distributions
Dj (u ∗ ϕ(x)) = (u ∗ Dj ϕ)(x)
= hu, Dxj ϕ(x − y)i = −hu(y), Dyj ϕ(x − y)iy
= (Dj u) ∗ ϕ .
Finally consider the support property. Here we are assuming that
supp(ϕ) is compact; we also know that supp(u) is a closed set. We
have to show that
(6.16) x∈
/ supp(u) + supp(ϕ)
implies u ∗ ϕ(x ) = 0 for x0 near x. Now (6.16) just means that
0

(6.17) supp ϕ(x − ·) ∩ supp(u) = φ ,


Since supp ϕ(x − ·) = {y ∈ Rn ; x − y ∈ supp(ϕ)}, so both statements
mean that there is no y ∈ supp(ϕ) with x − y ∈ supp(u). This can also
be written
supp(ϕ) ∩ supp u(x − ·) = φ
and as we showed when discussing supports implies
u ∗ ϕ(x0 ) = hu(x0 − ·), ϕi = 0 .
From (6.17) this is an open condition on x0 , so the support property
follows.

6. DIFFERENTIAL OPERATORS. 79

Now suppose ϕ, ψ ∈ S(Rn ) and u ∈ S 0 (Rn ). Then


(6.18) (u ∗ ϕ) ∗ ψ = u ∗ (ϕ ∗ ψ) .
This is really Hörmander’s Lemma 4.1.3 and Theorem 4.1.2; I ask you
to prove it as Problem 35.
We have shown that u ∗ ϕ is C ∞ if u ∈ S 0 (Rn ) and ϕ ∈ S(Rn ),
i.e., the regularity of u ∗ ϕ follows from the regularity of one of the
factors. This makes it reasonable to expect that u ∗ v can be defined
when u ∈ S 0 (Rn ), v ∈ S 0 (Rn ) and one of them has compact support.
If v ∈ Cc∞ (Rn ) and ϕ ∈ S(Rn ) then
Z Z
u ∗ v(ϕ) = hu(·), v(x − ·)iϕ(x) dx = hu(·), v(x − ·)iv̌ϕ(−x) dx

where ϕ̌(z) = ϕ(−z). In fact using Problem 35,


(6.19) u ∗ v(ϕ) = ((u ∗ v) ∗ ϕ̌)(0) = (u ∗ (v ∗ ϕ̌))(0) .
Here, v, ϕ are both smooth, but notice
Lemma 6.7. If v ∈ S 0 (Rn ) has compact support and ϕ ∈ S(Rn )
then v ∗ ϕ ∈ S(Rn ).
Proof. Since v ∈ S 0 (Rn ) has compact support there exists χ ∈
Cc∞ (Rn ) such that χv = v. Then
v ∗ ϕ(x) = (χv) ∗ ϕ(x) = hχv(y), ϕ(x − y)iy
= hu(y), χ(y)ϕ(x − y)iy .
Thus, for some k,
|v ∗ ϕ(x)| ≤ Ckχ(y)ϕ(x − y)k(k)
where k k(k) is one of our norms on S(Rn ). Since χ is supported in
some large ball,
kχ(y)ϕ(x − y)k(k)
≤ sup hyik Dα y (χ(y)ϕ(x − y))
|α|≤k

≤ C sup sup |(Dα ϕ)(x − y)|


|y|≤R |α|≤k

≤ CN sup (1 + |x − y|2 )−N/2


|y|≤R

≤ CN (1 + |x|2 )−N/2 .
Thus (1 + |x|2 )N/2 |v ∗ ϕ| is bounded for each N . The same argument
applies to the derivative using Theorem 6.6, so
v ∗ ϕ ∈ S(Rn ) .
80 3. DISTRIBUTIONS


In fact we get a little more, since we see that for each k there exists
k 0 and C (depending on k and v) such that
kv ∗ ϕk(k) ≤ Ckϕk(k0 ) .
This means that
v∗ : S(Rn ) → S(Rn )
is a continuous linear map.
Now (6.19) allows us to define u∗v when u ∈ S 0 (Rn ) and v ∈ S 0 (Rn )
has compact support by
u ∗ v(ϕ) = u ∗ (v ∗ ϕ̌)(0) .
Using the continuity above, I ask you to check that u ∗ v ∈ S 0 (Rn ) in
Problem 36. For the moment let me assume that this convolution has
the same properties as before – I ask you to check the main parts of
this in Problem 37.
Recall that E ∈ S 0 (Rn ) is a fundamental situation for P (D), a
constant coefficient differential operator, if P (D)E = δ. We also use a
weaker notion.
Definition 6.8. A parametrix for a constant coefficient differential
operator P (D) is a distribution F ∈ S 0 (Rn ) such that
(6.20) P (D)F = δ + ψ , ψ ∈ C ∞ (Rn ) .
An operator P (D) is said to be hypoelliptic if it has a parametrix sat-
isfying
(6.21) sing supp(F ) ⊂ {0} ,
where for any u ∈ S 0 (Rn )

(6.22) (sing supp(u)){ = {x ∈ Rn ; ∃ ϕ ∈ Cc∞ (Rn ) ,


ϕ(x) 6= 0, ϕu ∈ Cc∞ (Rn )} .
Since the same ϕ must work for nearby points in (6.22), the set
sing supp(u) is closed. Furthermore
(6.23) sing supp(u) ⊂ supp(u) .
As Problem 37 I ask you to show that if K b Rn and K ∩sing supp(u) =
φ the ∃ ϕ ∈ Cc∞ (Rn ) with ϕ(x) = 1 in a neighbourhood of K such that
ϕu ∈ Cc∞ (Rn ). In particular
(6.24) sing supp(u) = φ ⇒ u ∈ S 0 (Rn ) ∩ C ∞ (Rn ) .
6. DIFFERENTIAL OPERATORS. 81

Theorem 6.9. If P (D) is hypoelliptic then


(6.25) sing supp(u) = sing supp(P (D)u) ∀ u ∈ S 0 (Rn ) .
Proof. One half of this is true for any differential operator:
Lemma 6.10. If u ∈ S 0 (Rn ) then for any polynomial
(6.26) sing supp(P (D)u) ⊂ sing supp(u) ∀ u ∈ S 0 (Rn ) .

Proof. We must show that x ∈ / sing supp(u) ⇒ x ∈/ sing supp(P (D)u).
/ sing supp(u) we can find ϕ ∈ Cc∞ (Rn ), ϕ ≡ 1 near x, such
Now, if x ∈
that ϕu ∈ Cc∞ (Rn ). Then
P (D)u = P (D)(ϕu + (1 − ϕ)u)
= P (D)(ϕu) + P (D)((1 − ϕ)u) .
The first term is C ∞ and x ∈
/ supp(P (D)((1−ϕ)u)), so x ∈
/ sing supp(P (D)u).

It remains to show the converse of (6.26) where P (D) is assumed to
be hypoelliptic. Take F , a parametrix for P (D) with sing supp u ⊂ {0}
and assume, or rather arrange, that F have compact support. In fact
if x ∈
/ sing supp(P (D)u) we can arrange that
(supp(F ) + x) ∩ sing supp(P (D)u) = φ .
Now P (D)F = δψ with ψ ∈ Cc∞ (Rn ) so
u = δ ∗ u = (P (D)F ) ∗ u − ψ ∗ u.
Since ψ ∗ u ∈ C ∞ it suffices to show that x̄ ∈
/ sing supp ((P (D)u) ∗ f ) .
∞ n ∞
Take ϕ ∈ Cc (R ) with ϕf ∈ C , f = P (D)u but
(supp F + x) ∩ supp(ϕ) = 0 .
Then f = f1 + f2 , f1 = ϕf ∈ Cc∞ (Rn ) so
f ∗ F = f1 ∗ F + f2 ∗ F
where f1 ∗ F ∈ C ∞ (Rn ) and x ∈
/ supp(f2 ∗ F ). It follows that x ∈
/
sing supp(u).
Example 6.1. If u is holomorphic on Rn , ∂u = 0, then u ∈ C ∞ (Rn ).
Recall from last time that a differential operator P (D) is said to be
hypoelliptic if there exists F ∈ S 0 (Rn ) with
(6.27) P (D)F − δ ∈ C ∞ (Rn ) and sing supp(F ) ⊂ {0} .
82 3. DISTRIBUTIONS

The second condition here means that if ϕ ∈ Cc∞ (Rn ) and ϕ(x) = 1
in |x| <  for some  > 0 then (1 − ϕ)F ∈ C ∞ (Rn ). Since P (D)((1 −
ϕ)F ) ∈ C ∞ (Rn ) we conclude that
P (D)(ϕF ) − δ ∈ Cc∞ (Rn )
and we may well suppose that F , replaced now by ϕF , has compact
support. Last time I showed that
If P (D) is hypoelliptic and u ∈ S 0 (Rn ) then
sing supp(u) = sing supp(P (D)u) .
I will remind you of the proof later.
First however I want to discuss the important notion of ellipticity.
Remember that P (D) is ‘really’ just a polynomial, called the charac-
teristic polynomial
X
P (ξ) = Cα ξ α .
|α|≤m

It has the property


(D)u(ξ) = P (ξ)û(ξ) ∀ u ∈ S 0 (Rn ) .
P\
This shows (if it isn’t already obvious) that we can remove P (ξ) from
P (D) thought of as an operator on S 0 (Rn ).
We can think of inverting P (D) by dividing by P (ξ). This works
well provided P (ξ) 6= 0, for all ξ ∈ Rn . An example of this is
n
X
2
P (ξ) = |ξ| + 1 = +1 .
j=1

However even the Laplacian, ∆ = nj=1 Dj2 , does not satisfy this rather
P
stringent condition.
It is reasonable to expect the top order derivatives to be the most
important. We therefore consider
X
Pm (ξ) = Cα ξ α
|α|=m

the leading part, or principal symbol, of P (D).


Definition 6.11. A polynomial P (ξ), or P (D), is said to be elliptic
of order m provided Pm (ξ) 6= 0 for all 0 6= ξ ∈ Rn .
So what I want to show today is
Theorem 6.12. Every elliptic differential operator P (D) is hypoel-
liptic.
6. DIFFERENTIAL OPERATORS. 83

We want to find a parametrix for P (D); we already know that we


might as well suppose that F has compact support. Taking the Fourier
transform of (6.27) we see that Fb should satisfy
(6.28) b ψb ∈ S(Rn ) .
P (ξ)Fb(ξ) = 1 + ψ,
Here we use the fact that ψ ∈ Cc∞ (Rn ) ⊂ S(Rn ), so ψb ∈ S(Rn ) too.
First suppose that P (ξ) = Pm (ξ) is actually homogeneous of degree
m. Thus
Pm (ξ) = |ξ|m Pm (ξ),
b ξb = ξ/ |ξ| , ξ 6= 0 .
The assumption at ellipticity means that
(6.29) b 6= 0 ∀ ξb ∈ S n−1 = {ξ ∈ Rn ; |ξ| = 1} .
Pm (ξ)
Since S n−1 is compact and Pm is continuous
(6.30) b ≥ C > 0 ∀ ξb ∈ S n−1 ,
Pm (ξ)
for some constant C. Using homogeneity
(6.31) b ≥ C |ξ|m , C > 0 ∀ ξ ∈ Rn .
Pm (ξ)

Now, to get Fb from (6.28) we want to divide by Pm (ξ) or multiply


by 1/Pm (ξ). The only problem with defining 1/Pm (ξ) is at ξ = 0. We
shall simply avoid this unfortunate point by choosing P ∈ Cc∞ (Rn ) as
before, with ϕ(ξ) = 1 in |ξ| ≤ 1.
Lemma 6.13. If Pm (ξ) is homogeneous of degree m and elliptic then
(1 − ϕ(ξ))
(6.32) Q(ξ) = ∈ S 0 (Rn )
Pm (ξ)
is the Fourier transform of a parametrix for Pm (D), satisfying (6.27).
Proof. Clearly Q(ξ) is a continuous function and |Q(ξ)| ≤ C(1 +
|ξ|)−m ∀ ξ ∈ Rn , so Q ∈ S 0 (Rn ). It therefore is the Fourier transform
of some F ∈ S 0 (Rn ). Furthermore
\
Pm (D)F (ξ) = Pm (ξ)Fb = Pm (ξ)Q(ξ)
= 1 − ϕ(ξ) ,
⇒ Pm (D)F = δ + ψ , ψ(ξ)
b = −ϕ(ξ) .
Since ϕ ∈ Cc∞ (Rn ) ⊂ S(Rn ), ψ ∈ S(Rn ) ⊂ C ∞ (Rn ). Thus F is a
parametrix for Pm (D). We still need to show the ‘hard part’ that
(6.33) sing supp(F ) ⊂ {0} .

84 3. DISTRIBUTIONS

We can show (6.33) by considering the distributions xα F . The idea


is that for |α| large, xα vanishes rather rapidly at the origin and this
should ‘weaken’ the singularity of F there. In fact we shall show that
(6.34) xα F ∈ H |α|+m−n−1 (Rn ) , |α| > n + 1 − m .
If you recall, these Sobolev spaces are defined in terms of the Fourier
transform, namely we must show that
α F ∈ hξi−|α|−m+n+1 L2 (Rn ) .
xd

Now xd α F = (−1)|α| D α F
ξ , so what we need to cinsider is the behaviour
b
of the derivatives of Fb, which is just Q(ξ) in (6.32).
Lemma 6.14. Let P (ξ) be a polynomial of degree m satisfying
(6.35) |P (ξ)| ≥ C |ξ|m in |ξ| > 1/C for some C > 0 ,
then for some constants Cα
1
(6.36) Dα ≤ Cα |ξ|−m−|α| in |ξ| > 1/C .
P (ξ)
Proof. The estimate in (6.36) for α = 0 is just (6.35). To prove
the higher estimates that for each α there is a polynomial of degree at
most (m − 1) |α| such that
1 Lα (ξ)
(6.37) Dα = .
P (ξ) (P (ξ))1+|α|
Once we know (6.37) we get (6.36) straight away since

1 Cα0 |ξ|(m−1)|α|
D α
≤ m(1+|α|)
≤ Cα |ξ|−m−|α| .
P (ξ) C 1+|α| |ξ|
We can prove (6.37) by induction, since it is certainly true for α = 0.
Suppose it is true for |α| ≤ k. To get the same identity for each β with
|β| = k +1 it is enough to differentiate one of the identities with |α| = k
once. Thus
1 1 Dj Lα (ξ) (1 + |α|)Lα Dj P (ξ)
Dβ = Dj Dα = 1+|α|
− .
P (ξ) P (ξ) P (ξ) (P (ξ))2+|α|
Since Lβ (ξ) = P (ξ)Dj Lα (ξ) − (1 + |α|)Lα (ξ)Dj P (ξ) is a polynomial of
degree at most (m − 1) |α| + m − 1 = (m − 1) |β| this proves the lemma.

6. DIFFERENTIAL OPERATORS. 85

1−ϕ
Going backwards, observe that Q(ξ) = Pm (ξ)
is smooth in |ξ| ≤ 1/C,
so (6.36) implies that
(6.38) |Dα Q(ξ)| ≤ Cα (1 + |ξ|)−m−|α|
n
⇒ hξi` Dα Q ∈ L2 (Rn ) if ` − m − |α| < − ,
2
which certainly holds if ` = |α| + m − n − 1, giving (6.34). Now, by
Sobolev’s embedding theorem
n
xα F ∈ C k if |α| > n + 1 − m + k + .
2
∞ n
In particular this means that if we choose µ ∈ Cc (R ) with 0 ∈
/ supp(µ)
2k
then for every k, µ/ |x| is smooth and
µ
µF = 2k |x|2k F ∈ C 2`−2n , ` > n .
|x|
Thus µF ∈ Cc∞ (Rn ) and this is what we wanted to show, sing supp(F ) ⊂
{0}.
So now we have actually proved that Pm (D) is hypoelliptic if it is
elliptic. Rather than go through the proof again to make sure, let me
go on to the general case and in doing so review it.
Proof. Proof of theorem. We need to show that if P (ξ) is elliptic
then P (D) has a parametrix F as in (6.27). From the discussion above
the ellipticity of P (ξ) implies (and is equivalent to)
|Pm (ξ)| ≥ c |ξ|m , c > 0 .
On the other hand
X
P (ξ) − Pm (ξ) = Cα ξ α
|α|<m

is a polynomial of degree at most m − 1, so


|P (ξ) − Pm (ξ)| 2 ≤ C 0 (1 + |ξ|)m−1 .
This means that id C > 0 is large enough then in |ξ| > C, C 0 (1 +
|ξ|)m−1 < 2c |ξ|m , so
|P (ξ)| ≥ |Pm (ξ)| − |P (ξ) − Pm (ξ)|
c
≥ c |ξ|m − C 0 (1 + |ξ|)m−1 ≥ |ξ|m .
2
This means that P (ξ) itself satisfies the conditions of Lemma 6.14.
Thus if ϕ ∈ Cc∞ (Rn ) is equal to 1 in a large enough ball then Q(xi) =
(1 − ϕ(ξ))/P (ξ) in C ∞ and satisfies (6.36) which can be written
|Dα Q(ξ)| ≤ Cα (1 + |ξ|)m−|α| .
86 3. DISTRIBUTIONS

The discussion above now shows that defining F ∈ S 0 (Rn ) by Fb(ξ) =


Q(ξ) gives a solution to (6.27).

The last step in the proof is to show that if F ∈ S 0 (Rn ) has compact
support, and satisfies (6.27), then
u ∈ S(Rn ) , P (D)u ∈ S 0 (Rn ) ∩ C ∞ (Rn )
⇒ u = F ∗ (P (D)u) − ψ ∗ u ∈ C ∞ (Rn ) .
Let me refine this result a little bit.
Proposition 6.15. If f ∈ S 0 (Rn ) and µ ∈ S 0 (Rn ) has compact
support then
sing supp(u ∗ f ) ⊂ sing supp(u) + sing supp(f ).
Proof. We need to show that p ∈ / sing supp(u) ∈ sing supp(f )
then p ∈/ sing supp(u ∗ f ). Once we can fix p, we might as well suppose
that f has compact support too. Indeed, choose a large ball B(R, 0)
so that
z∈/ B(0, R) ⇒ p ∈ / supp(u) + B(0, R) .
This is possible by the assumed boundedness of supp(u). Then choose
ϕ ∈ Cc∞ (Rn ) with ϕ = 1 on B(0, R); it follows from Theorem L16.2, or
rather its extension to distributions, that φ ∈ / supp(u(1 − ϕ)f ), so we
can replace f by ϕf , noting that sing supp(ϕf ) ⊂ sing supp(f ). Now if
f has compact support we can choose compact neighbourhoods K1 , K2
of sing supp(u) and sing supp(f ) such that p ∈ / K1 + K2 . Furthermore
we an decompose u = u1 + u2 , f = f1 + f2 so that supp(u1 ) ⊂ K1 ,
supp(f2 ) ⊂ K2 and u2 , f2 ∈ C ∞ (Rn ). It follows that
u ∗ f = u1 ∗ f1 + u2 ∗ f2 + u1 ∗ f2 + u2 ∗ f2 .
Now, p ∈/ supp(u1 ∗ f1 ), by the support property of convolution and the
three other terms are C ∞ , since at least one of the factors is C ∞ . Thus
p∈/ sing supp(u ∗ f ). 
The most important example of a differential operator which is
hypoelliptic, but not elliptic, is the heat operator
Xn
(6.39) ∂t + ∆ = ∂t − ∂x2j .
j=1

In fact the distribution


(  2

1
(4πt)n/2
exp − |x|
4t
t≥0
(6.40) E(t, x) =
0 t≤0
6. DIFFERENTIAL OPERATORS. 87

is a fundamental solution. First we need to check that E is a distri-


bution. Certainly E is C ∞ in t > 0. Moreover as t ↓ 0 in x 6= 0 it
vanishes with all derivatives, so it is C ∞ except at t = 0, x = 0. Since
it is clearly measurable we will check that it is locally integrable near
the origin, i.e.,
Z
(6.41) E(t, x) dx dt < ∞ ,
0≤t≤1
|x|≤1

since E ≥ 0. We can change variables, setting X = x/t1/2 , so dx =


tn/2 dX and the integral becomes
|X|2
Z 0Z
1
exp(− ) dx dt < ∞ .
(4π)n/2 0 |X|≤t−1/2 4
Since E is actually bounded near infinity, it follows that E ∈ S 0 Rn ,
Z
E(ϕ) = E(t, x)ϕ(t, x) dx dt ∀ ϕ ∈ S(Rn+1 ) .
t≥0

As before we want to compute


(6.42) (∂t + ∆)E(ϕ) = E(−∂t ϕ + ∆ϕ)
Z ∞ Z
= lim E(t, x)(−∂t ϕ + ∆ϕ) dx dt .
E↓0 E Rn

First we check that (∂t + ∆)E = 0 in t > 0, where it is a C ∞ function.


This is a straightforward computation:
n |x|2
∂t E = − E + 2 E
2t 4t
xj 2 1 x2j
∂xj E = − E , ∂xj E = − E + 2 E
2t 2t 4t
n |x|2
⇒ ∆E = E + 2 E .
2t 4t
Now we can integrate by parts in (6.42) to get
2
e−|x| /4E
Z
(∂t + ∆)E(ϕ) = lim ϕ(E, x) dx .
E↓0 Rn (4πE)n/2
Making the same change of variables as before, X = x/2E 1/2 ,
2
e−|x|
Z
1/2
(∂t + ∆)E(ϕ) = lim ϕ(E, E X) n/2 dX .
E↓0 Rn π
88 3. DISTRIBUTIONS

As E ↓ 0 the integral here is bounded by the integrable function


C exp(− |X|2 ), for some C > 0, so by Lebesgue’s theorem of domi-
nated convergence, conveys to the integral of the limit. This is
Z
2 dx
ϕ(0, 0) · e−|x| n/2 = ϕ(0, 0) .
Rn π
Thus
(∂t + ∆)E(ϕ) = ϕ(0, 0) ⇒ (∂t + ∆)E = δt δx ,
so E is indeed a fundamental solution. Since it vanishes in t < 0 it is
called a forward fundamrntal solution.
Let’s see what we can use it for.
Proposition 6.16. If f ∈ S 0 Rn has compact support ∃ !u ∈ S 0 Rn
with supp(m) ⊂ {t ≥ −T } for some T and
(6.43) (∂t + ∆)u = f in Rn+1 .
Proof. Naturally we try u = E ∗ f . That it satisfies (6.43)follows
from the properties of convolution. Similarly if T is such that supp(f ) ⊂
{t ≥ T } then
supp(u) ⊂ supp(f ) + supp(E) ⊂ {t ≥ T ] .
So we need to show uniqueness. If u1 , u2 ∈ S 0 Rn in two solutions
of (6.43) then their difference v = u1 − u2 satisfies the ‘homogeneous’
equation (∂t + ∆)v = 0. Furthermore, v = 0 in t < T 0 for some T 0 .
Given any E ∈ R choose ϕ(t) ∈ C ∞ (R) with ϕ(t) = 0 in t > t + 1,
ϕ(t) = 1 in t < t and consider
Et = ϕ(t)E = F1 + F2 ,
where F1 = ψEt for some ψ ∈ Cc∞ Rn+1 ), ψ = 1 near 0. Thus F1 has
comapct support and in fact F2 ∈ SRn . I ask you to check this last
statement as Problem L18.P1.
Anyway,
(∂t + ∆)(F1 + F2 ) = δ + ψ ∈ SRn , ψt = 0 t ≤ t .
Now,
(∂t + ∆)(Et ∗ u) = 0 = u + ψt ∗ u .

Since supp(ψt ) ⊂ t ≥ t ], the second tier here is supported in t ≥ t ≥
T 0 . Thus u = 0 in t < t + T 0 , but t is arbitrary, so u = 0. 
Notice that the assumption that u ∈ S 0 Rn is not redundant in the
statement of the Proposition, if we allow “large” solutions they be-
come non-unique. Problem L18.P2 asks you to apply the fundamental
solution to solve the initial value problem for the heat operator.
7. CONE SUPPORT AND WAVEFRONT SET 89

Next we make similar use of the fundamental solution for Laplace’s


operator. If n ≥ 3 the
(6.44) E = Cn |x|−n+2
is a fundamental solution. You should check that ∆En = 0 in x 6= 0
directly, I will show later that ∆En = δ, for the appropriate choice of
Cn , but you can do it directly, as in the case n = 3.
Theorem 6.17. If f ∈ SRn ∃ !u ∈ C0∞ Rn such that ∆u = f.
Proof. Since convolution u = E ∗ f ∈ S 0 Rn ∩ C ∞ Rn is defined we
certainly get a solution to ∆u = f this way. We need to check that
u ∈ C0∞ Rn . First we know that ∆ is hypoelliptic so we can decompose
E = F1 + F2 , F1 ∈ S 0 Rn , supp F, b Rn
and then F2 ∈ C ∞ Rn . In fact we can see from (6.44) that
|Dα F2 (x)| ≤ Cα (1 + |x|)−n+2−|α| .
Now, F1 ∗ f ∈ SRn , as we showed before, and continuing the integral
we see that
|Dα u| ≤ |Dα F2 ∗ f | + CN (1 + |x|)−N ∀ N
≤ Cα0 (1 + |x|)−n+2−|α| .
Since n > 2 it follows that u ∈ C0∞ Rn .
So only the uniqueness remains. If there are two solutions, u1 , u2
for a given f then v = u1 −u2 ∈ C0∞ Rn satisfies ∆v = 0. Since v ∈ S 0 Rn
we can take the Fourier transform and see that
|χ|2 vb(χ) = 0 ⇒ supp(b
v ) ⊂ {0} .
an earlier problem was to conclude from this that vb = |α|≤m Cα Dα δ
P
for some constants Cα . This in turn implies that v is a polynomial.
However the only polynomials in C00 Rn are identically 0. Thus v = 0
and uniqueness follows. 

7. Cone support and wavefront set


In discussing the singular support of a tempered distibution above,
notice that
singsupp(u) = ∅

only implies that u ∈ C (Rn ), not as one might want, that u ∈ S(Rn ).
We can however ‘refine’ the concept of singular support a little to get
this.
Let us think of the sphere Sn−1 as the set of ‘asymptotic directions’
in Rn . That is, we identify a point in Sn−1 with a half-line {ax̄; a ∈
90 3. DISTRIBUTIONS

(0, ∞)} for 0 6= x̄ ∈ Rn . Since two points give the same half-line if and
only if they are positive multiples of each other, this means we think
of the sphere as the quotient
(7.1) Sn−1 = (Rn \ {0})/R+ .
Of course if we have a metric on Rn , for instance the usual Euclidean
metric, then we can identify Sn−1 with the unit sphere. However (7.1)
does not require a choice of metric.
Now, suppose we consider functions on Rn \ {0} which are (posi-
tively) homogeneous of degree 0. That is f (ax̄) = f (x̄), for all a > 0,
and they are just functions on Sn−1 . Smooth functions on Sn−1 cor-
respond (if you like by definition) with smooth functions on Rn \ {0}
which are homogeneous of degree 0. Let us take such a function ψ ∈
C ∞ (Rn \{0}), ψ(ax) = ψ(x) for all a > 0. Now, to make this smooth on
Rn we need to cut it off near 0. So choose a cutoff function χ ∈ Cc∞ (Rn ),
with χ(x) = 1 in |x| < 1. Then
(7.2) ψR (x) = ψ(x)(1 − χ(x/R)) ∈ C ∞ (Rn ),
for any R > 0. This function is supported in |x| ≥ R. Now, if ψ has
support near some point ω ∈ Sn−1 then for R large the corresponding
function ψR will ‘localize near ω as a point at infinity of Rn .’ Rather
than try to understand this directly, let us consider a corresponding
analytic construction.
First of all, a function of the form ψR is a multiplier on S(Rn ). That
is,
(7.3) ψR · : S(Rn ) −→ S(Rn ).
To see this, the main problem is to estimate the derivatives at infinity,
since the product of smooth functions is smooth. This in turn amounts
to estimating the deriviatives of ψ in |x| ≥ 1. This we can do using the
homogeneity.
Lemma 7.1. If ψ ∈ C ∞ (Rn \ {0}) is homogeneous of degree 0 then
(7.4) |Dα ψ| ≤ Cα |x|−|α| .
Proof. I should not have even called this a lemma. By the chain
rule, the derivative of order α is a homogeneous function of degree −|α|
from which (7.4) follows. 

For the smoothed versio, ψR , of ψ this gives the estimates


(7.5) |Dα ψR (x)| ≤ Cα hxi−|α| .
7. CONE SUPPORT AND WAVEFRONT SET 91

This allows us to estimate the derivatives of the product of a Schwartz


function and ψR :
(7.6) xβ Dα (ψR f )
X α 
= Dα−γ ψR xβ Dγ f =⇒ sup |xβ Dα (ψR f )| ≤ C sup kf kk
γ≤α
γ |x|≥1

for some seminorm on S(Rn ). Thus the map (7.3) is actually continu-
ous. This continuity means that ψR is a multiplier on S 0 (Rn ), defined
as usual by duality:
(7.7) ψR u(f ) = u(ψR f ) ∀ f ∈ S(Rn ).
Definition 7.2. The cone-support and cone-singular-support of a
tempered distribution are the subsets Csp(u) ⊂ Rn ∪ Sn−1 and Css(u) ⊂
Rn ∪ Sn−1 defined by the conditions
(7.8)
Csp(u) ∩ Rn = supp(u)
(Csp(u)){ ∩ Sn−1 ={ω ∈ Sn−1 ;
∃ R > 0, ψ ∈ C ∞ (Sn−1 ), ψ(ω) 6= 0, ψR u = 0},
Css(u) ∩ Rn = singsupp(u)
(Css(u)){ ∩ Sn−1 ={ω ∈ Sn−1 ;
∃ R > 0, ψ ∈ C ∞ (Sn−1 ), ψ(ω) 6= 0, ψR u ∈ S(Rn )}.
That is, on the Rn part these are the same sets as before but ‘at
infinity’ they are defined by conic localization on Sn−1 .
In considering Csp(u) and Css(u) it is convenient to combine Rn
and Sn−1 into a compactification of Rn . To do so (topologically) let
us identify Rn with the interior of the unit ball with respect to the
Euclidean metric using the map
x
(7.9) Rn 3 x 7−→ ∈ {y ∈ Rn ; |y| ≤ 1} = Bn .
hxi
Clearly |x| < hxi and for 0 ≤ a < 1, |x| = ahxi has only the solution
1
|x| = a/(1 − a2 ) 2 . Thus if we combine (7.9) with the identification of
Sn with the unit sphere we get an identification
(7.10) Rn ∪ Sn−1 ' Bn .
Using this identification we can, and will, regard Csp(u) and Css(u) as
subsets of Bn .12
12In fact while the topology here is correct the smooth structure on Bn is not
the right one —– see Problem?? For our purposes here this issue is irrelevant.
92 3. DISTRIBUTIONS

Lemma 7.3. For any u ∈ S 0 (Rn ), Csp(u) and Css(u) are closed
subsets of Bn and if ψ̃ ∈ C ∞ (Sn ) has supp(ψ̃) ∩ Css(u) = ∅ then for R
sufficiently large ψ̃R u ∈ S(Rn ).

Proof. Directly from the definition we know that Csp(u) ∩ Rn is


closed, as is Css(u)∩Rn . Thus, in each case, we need to show that if ω ∈
Sn−1 and ω ∈ / Csp(u) then Csp(u) is disjoint from some neighbourhood
n
of ω in B . However, by definition,

U = {x ∈ Rn ; ψR (x) 6= 0} ∪ {ω 0 ∈ Sn−1 ; ψ(ω 0 ) 6= 0}

is such a neighbourhood. Thus the fact that Csp(u) is closed follows


directly from the definition. The argument for Css(u) is essentially the
same.
The second result follows by the use of a partition of unity on Sn−1 .
Thus, for each point in supp(ψ) ⊂ Sn−1 there exists a conic localizer for
which ψR u ∈ S(Rn ). By compactness we may choose a finite number of
these functions ψj such that the open sets {ψj (ω) > 0} cover supp(ψ̃).
By assumption (ψj )Rj u ∈ S(Rn ) for some Rj > 0. However this will
remain true if Rj is increased, so we may suppose that Rj = R is
independent of j. Then for function
X
µ= |ψj |2 ∈ C ∞ (Sn−1 )
j

we have µR u ∈ S(Rn ). Since ψ̃ = ψ 0 µ for some µ ∈ C ∞ (Sn−1 ) it follows


that ψ̃R+1 u ∈ S(Rn ) as claimed. 

Corollary 7.4. If u ∈ S 0 (Rn ) then Css(u) = ∅ if and only if


u ∈ S(Rn ).

Proof. Certainly Css(u) = ∅ if u ∈ S(Rn ). If u ∈ S 0 (Rn ) and


Css(u) = ∅ then from Lemma 7.3, ψR u ∈ S(Rn ) where ψ = 1. Thus
v = (1 − ψR )u ∈ Cc−∞ (Rn ) has singsupp(v) = ∅ so v ∈ Cc∞ (Rn ) and
hence u ∈ S(Rn ). 

Of course the analogous result for Csp(u), that Csp(u) = ∅ if and


only if u = 0 follows from the fact that this is true if supp(u) = ∅. I
will treat a few other properties as self-evident. For instance
(7.11)
Csp(φu) ⊂ Csp(u), Css(φu) ⊂ Css(u) ∀ u ∈ S 0 (Rn ), φ ∈ S(Rn )
7. CONE SUPPORT AND WAVEFRONT SET 93

and
(7.12) Csp(c1 u1 + c2 u2 ) ⊂ Csp(u1 ) ∪ Csp(u2 ),
Css(c1 u1 + c2 u2 ) ⊂ Css(u1 ) ∪ Css(u2 )
∀ u1 , u2 ∈ S 0 (Rn ), c1 , c2 ∈ C.
One useful consequence of having the cone support at our disposal
is that we can discuss sufficient conditions to allow us to multiply dis-
tributions; we will get better conditions below using the same idea but
applied to the wavefront set but this preliminary discussion is used
there. In general the product of two distributions is not defined, and
indeed not definable, as a distribution. However, we can always multi-
ply an element of S 0 (Rn ) and an element of S(Rn ).
To try to understand multiplication look at the question of pairing
between two distributions.
Lemma 7.5. If Ki ⊂ Bn , i = 1, 2, are two disjoint closed (hence
compact) subsets then we can define an unambiguous pairing
(7.13)
{u ∈ S 0 (Rn ); Css(u) ⊂ K1 } × {u ∈ S 0 (Rn ); Css(u) ⊂ K2 } 3 (u1 , u2 )
−→ u1 (u2 ) ∈ C.
Proof. To define the pairing, choose a function ψ ∈ C ∞ (Sn−1 )
which is identically equal to 1 in a neighbourhood of K1 ∩Sn−1 and with
support disjoint from K2 ∩ Sn−1 . Then extend it to be homogeneous, as
above, and cut off to get ψR . If R is large enough Csp(ψR ) is disjoint
from K2 . Then ψR + (1 − ψ)R = 1 + ν where ν ∈ Cc∞ (Rn ). We can
find another function µ ∈ Cc∞ (Rn ) such that ψ1 = ψR + µ = 1 in
a neighbourhood of K1 and with Csp(ψ1 ) disjoint from K2 . Once we
have this, for u1 and u2 as in (7.13),
(7.14) ψ1 u2 ∈ S(Rn ) and (1 − ψ1 )u1 ∈ S(Rn )
since in both cases Css is empty from the definition. Thus we can define
the desired pairing between u1 and u2 by
(7.15) u1 (u2 ) = u1 (ψ1 u2 ) + u2 ((1 − ψ1 )u1 ).
Of course we should check that this definition is independent of the
cut-off function used in it. However, if we go through the definition and
choose a different function ψ 0 to start with, extend it homogeneoulsy
and cut off (probably at a different R) and then find a correction term
µ0 then the 1-parameter linear homotopy between them
(7.16) ψ1 (t) = tψ1 + (1 − t)ψ10 , t ∈ [0, 1]
94 3. DISTRIBUTIONS

satisfies all the conditions required of ψ1 in formula (7.14). Thus in


fact we get a smooth family of pairings, which we can write for the
moment as
(7.17) (u1 , u2 )t = u1 (ψ1 (t)u2 ) + u2 ((1 − ψ1 (t))u1 ).
By inspection, this is an affine-linear function of t with derivative
(7.18) u1 ((ψ1 − ψ10 )u2 ) + u2 ((ψ10 − ψ1 ))u1 ).
Now, we just have to justify moving the smooth function in (7.18) to
see that this gives zero. This should be possible since Csp(ψ10 − ψ1 ) is
disjoint from both K1 and K2 .
In fact, to be very careful for once, we should construct another
function χ in the same way as we constructed ψ1 to be homogenous
near infinity and smooth and such that Csp(χ) is also disjoint from both
K1 and K2 but χ = 1 on Csp(ψ10 − ψ1 ). Then χ(ψ10 − ψ1 ) = ψ10 − ψ1 so
we can insert it in (7.18) and justify

(7.19) u1 ((ψ1 − ψ10 )u2 ) = u1 (χ2 (ψ1 − ψ10 )u2 ) = (χu1 )((ψ1 − ψ10 )χu2 )
= (χu2 )(ψ1 − ψ10 )χu1 ) = u2 (ψ1 − ψ10 )χu1 ).
Here the second equality is just the identity for χ as a (multiplica-
tive) linear map on S(Rn ) and hence S 0 (Rn ) and the operation to give
the crucial, third, equality is permissible because both elements are in
S(Rn ). 

Once we have defined the pairing between tempered distibutions


with disjoint conic singular supports, in the sense of (7.14), (7.15), we
can define the product under the same conditions. Namely to define
the product of say u1 and u2 we simply set

(7.20) u1 u2 (φ) = u1 (φu2 ) = u2 (φu1 ) ∀ φ ∈ S(Rn ),


provided Css(u1 ) ∩ Css(u2 ) = ∅.
Indeed, this would be true if one of u1 or u2 was itself in S(Rn ) and
makes sense in general. I leave it to you to check the continuity state-
ment required to prove that the product is actually a tempered disti-
bution (Problem 78).
One can also give a similar discussion of the convolution of two
tempered distributions. Once again we do not have a definition of u ∗ v
as a tempered distribution for all u, v ∈ S 0 (Rn ). We do know how to
define the convolution if either u or v is compactly supported, or if
either is in S(Rn ). This leads directly to
7. CONE SUPPORT AND WAVEFRONT SET 95

Lemma 7.6. If Css(u)∩Sn−1 = ∅ then u∗v is defined unambiguously


by
x
(7.21) u ∗ v = u1 ∗ v + u2 ∗ v, u1 = (1 − χ( ))u, u2 = u − u1
r
∞ n
where χ ∈ Cc (R ) has χ(x) = 1 in |x| ≤ 1 and R is sufficiently large;
there is a similar definition if Css(v) ∩ Sn−1 = ∅.
Proof. Since Css(u) ∩ Sn−1 = ∅, we know that Css(u1 ) = ∅ if R
is large enough, so then both terms on the right in (7.21) are well-
defined. To see that the result is independent of R just observe that
the difference of the right-hand side for two values of R is of the form
w ∗ v − w ∗ v with w compactly supported. 
Now, we can go even further using a slightly more sophisticated
decomposition based on
Lemma 7.7. If u ∈ S 0 (Rn ) and Css(u) ∩ Γ = ∅ where Γ ⊂ Sn−1 is
a closed set, then u = u1 + u2 where Csp(u1 ) ∩ Γ = ∅ and u2 ∈ S(Rn );
in fact
(7.22) u = u01 + u001 + u2 where u01 ∈ Cc−∞ (Rn ) and
/ supp(u001 ), x ∈ Rn \ {0}, x/|x| ∈ Γ =⇒ x ∈
0∈ / supp(u001 ).
Proof. A covering argument which you should provide. 
Let Γi ⊂ Rn , i = 1, 2, be closed cones. That is they are closed sets
such that if x ∈ Γi and a > 0 then ax ∈ Γi . Suppose in addition that
(7.23) Γ1 ∩ (−Γ2 ) = {0}.
That is, if x ∈ Γ1 and −x ∈ Γ2 then x = 0. Then it follows that for
some c > 0,
(7.24) x ∈ Γ1 , y ∈ Γ2 =⇒ |x + y| ≥ c(|x| + |y|).
To see this consider x + y where x ∈ Γ1 , y ∈ Γ2 and |y| ≤ |x|. We
can assume that x 6= 0, otherwise the estimate is trivially true with
c = 1, and then Y = y/|x| ∈ Γ1 and X = x/|x| ∈ Γ2 have |Y | ≤ 1 and
|X| = 1. However X + Y 6= 0, since |X| = 1, so by the continuity of the
sum, |X + Y | ≥ 2c > 0 for some c > 0. Thus |X + Y | ≥ c(|X| + |Y |)
and the result follows by scaling back. The other case, of |x| ≤ |y|
follows by the same argument with x and y interchanged, so (7.24) is
a consequence of (7.23).
Lemma 7.8. For any u ∈ S 0 (Rn ) and φ ∈ S(Rn ),
(7.25) Css(φ ∗ u) ⊂ Css(u) ∩ Sn−1 .
96 3. DISTRIBUTIONS

Proof. We already know that φ ∗ u is smooth, so Css(φ ∗ u) ⊂


Sn−1 . Thus, we need to show that if ω ∈ Sn−1 and ω ∈ / Css(u) then
ω∈ / Css(φ ∗ u).
Fix such a point ω ∈ Sn−1 \ Css(u) and take a closed set Γ ⊂ Sn−1
which is a neighbourhood of ω but which is still disjoint from Css(u)
and then apply Lemma 7.7. The two terms φ∗u2 , where u2 ∈ S(Rn ) and
φ ∗ u01 where u01 ∈ Cc−∞ (Rn ) are both in S(Rn ) so we can assume that u
has the support properties of u001 . In particular there is a smaller closed
subset Γ1 ⊂ Sn−1 which is still a neighbourhood of ω but which does
not meet Γ2 , which is the closure of the complement of Γ. If we replace
these Γi by the closed cones of which they are the ‘cross-sections’ then
we are in the situation of (7.23) and (7.23), except for the signs. That
is, there is a constant c > 0 such that
(7.26) |x − y| ≥ c(|x| + |y|).
Now, we can assume that there is a cutoff function ψR which has
support in Γ2 and is such that u = ψR u. For any conic cutoff, ψR0 , with
support in Γ1
(7.27) ψR0 (φ ∗ u) = hψR u, φ(x − ·)i = hu(y), ψR (y)ψR0 (x)φ(x − y)i.
The continuity of u means that this is estimated by some Schwartz
seminorm
(7.28) sup |Dyα (ψR (y)ψR0 (x)φ(x − y))|(1 + |y|)k
y,|α|≤k

≤ CN kφk sup(1 + |x| + |y|)−N (1 + |y|)k ≤ CN kφk(1 + |x|)−N +k


y

for some Schwartz seminorm on φ. Here we have used the estimate


(7.24), in the form (7.26), using the properties of the supports of ψR0
and ψR . Since this is true for any N and similar estimates hold for
the derivatives, it follows that ψR0 (u ∗ φ) ∈ S(Rn ) and hence that ω ∈
/
Css(u ∗ φ). 
Corollary 7.9. Under the conditions of Lemma 7.6
(7.29) Css(u ∗ v) ⊂ (singsupp(u) + singsupp(v)) ∪ (Css(v) ∩ Sn−1 ).
Proof. We can apply Lemma 7.8 to the first term in (7.21) to
conclude that it has conic singular support contained in the second
term in (7.29). Thus it is enough to show that (7.29) holds when
u ∈ Cc−∞ (Rn ). In that case we know that the singular support of the
convolution is contained in the first term in (7.29), so it is enough to
consider the conic singular support in the sphere at infinity. Thus, if
ω ∈/ Css(v) we need to show that ω ∈ / Css(u ∗ v). Using Lemma 7.7
7. CONE SUPPORT AND WAVEFRONT SET 97

we can decompose v = v1 + v2 + v3 as a sum of a Schwartz term, a


compact supported term and a term which does not have ω in its conic
support. Then u ∗ v1 is Schwartz, u ∗ v2 has compact support and
satisfies (7.29) and ω is not in the cone support of u ∗ v3 . Thus (7.29)
holds in general. 
Lemma 7.10. If u, v ∈ S 0 (Rn ) and ω ∈ Css(u) ∩ Sn−1 =⇒ −ω ∈ /
Css(v) then their convolution is defined unambiguously, using the pair-
ing in Lemma 7.5, by
(7.30) u ∗ v(φ) = u(v̌ ∗ φ) ∀ φ ∈ S(Rn ).
Proof. Since v̌(x) = v(−x), Css(v̌) = − Css(v) so applying Lemma 7.8
we know that
(7.31) Css(v̌ ∗ φ) ⊂ − Css(v) ∩ Sn−1 .
Thus, Css(v) ∩ Css(v̌ ∗ φ) = ∅ and the pairing on the right in (7.30)
is well-defined by Lemma 7.5. Continuity follows from your work in
Problem 78. 
In Problem 79 I ask you to get a bound on Css(u ∗ v) ∩ Sn−1 under
the conditions in Lemma 7.10.
Let me do what is actually a fundamental computation.
Lemma 7.11. For a conic cutoff, ψR , where ψ ∈ C ∞ (Sn−1 ),
(7.32) cR ) ⊂ {0}.
Css(ψ
Proof. This is actually much easier than it seems. Namely we
already know that Dα (ψR ) is smooth and homogeneous of degree −|α|
near infinity. From the same argument it follows that
(7.33) Dα (xβ ψR ) ∈ L2 (Rn ) if |α| > |β| + n/2
since this is a smooth function homogeneous of degree less than −n/2
near infinity, hence square-integrable. Now, taking the Fourier trans-
form gives
(7.34) ξ α D β (ψ
cR ) ∈ L2 (Rn ) ∀ |α| > |β| + n/2.

If we localize in a cone near infinity, using a (completely unrelated)


cutoff ψR0 0 (ξ) then we must get a Schwartz function since
(7.35)
|ξ||α| ψR0 0 (ξ)Dβ (ψ
cR ) ∈ L2 (Rn ) ∀ |α| > |β| + n/2 =⇒ ψ 0 0 (ξ)ψ
R
cR ∈ S(Rn ).

Indeed this argument applies anywhere that ξ 6= 0 and so shows that


(7.32) holds. 
98 3. DISTRIBUTIONS

Now, we have obtained some reasonable looking conditions under


which the product uv or the convolution u∗v of two elements of S 0 (Rn )
is defined. However, reasonable as they might be there is clearly a flaw,
or at least a deficiency, in the discussion. We know that in the simplest
of cases,
(7.36) ∗v =u
u[ bvb.
Thus, it is very natural to expect a relationship between the conditions
under which the product of the Fourier transforms is defined and the
conditions under which the convolution is defined. Is there? Well, not
much it would seem, since on the one hand we are considering the rela-
tionship between Css(b u) and Css(b v ) and on the other the relationship
between Css(u) ∩ Sn−1 and Css(v) ∩ Sn−1 . If these are to be related,
we would have to find a relationship of some sort between Css(u) and
Css(bu). As we shall see, there is one but it is not very strong as can
be guessed from Lemma 7.11. This is not so much a bad thing as a
sign that we should look for another notion which combines aspects of
both Css(u) and Css(b u). This we will do through the notion of wave-
front set. In fact we define two related objects. The first is the more
conventional, the second is more natural in our present discussion.
Definition 7.12. If u ∈ S 0 (Rn ) we define the wavefront set of u
to be
(7.37) WF(u) = {(x, ω) ∈ Rn × Sn−1 ;
∃ φ ∈ Cc∞ (Rn ), φ(x) 6= 0, ω ∈ c {
/ Css(φu)}
and more generally the scattering wavefront set by
(7.38) WFsc (u) = WF(u) ∪ {(ω, p) ∈ Sn−1 × Bn ;
∃ ψ ∈ C ∞ (Sn ), ψ(ω) 6= 0, R > 0 such that p ∈
/ Css(ψd {
R u)} .

So, the definition is really always the same. To show that (p, q) ∈
/
WFsc (u) we need to find ‘a cutoff Φ near p’ – depending on whether
p ∈ Rn or p ∈ Sn−1 this is either Φ = φ ∈ Cc∞ (Rn ) with F = φ(p) 6= 0
or a ψR where ψ ∈ C ∞ (Sn−1 ) has ψ(p) 6= 0 – such that q ∈/ Css(Φu).
c
One crucial property is
Lemma 7.13. If (p, q) ∈ / WFsc (u) then if p ∈ Rn there exists a
neighbourhood U ⊂ Rn of p and a neighbourhood U ⊂ Bn of q such
that for all φ ∈ Cc∞ (Rn ) with support in U, U 0 ∩ Css(φu)
c = ∅; similarly
n−1
if p ∈ S then there exists a neigbourhood Ũ ⊂ Bn of p such that
U 0 ∩ Css(ψdR u) = ∅ if Csp(ωR ) ⊂ Ũ .
7. CONE SUPPORT AND WAVEFRONT SET 99

Proof. First suppose p ∈ Rn . From the definition of conic singular


support, (7.37) means precisely that there exists ψ ∈ C ∞ (Sn−1 ), ψ(ω) 6=
0 and R such that
(7.39) c ∈ S(Rn ).
ψR (φu)
Since we know that φu c ∈ C ∞ (Rn ), this is actually true for all R > 0
as soon as it is true for one value. Furthermore, if φ0 ∈ Cc∞ (Rn ) has
supp(φ0 ) ⊂ {φ 6= 0} then ω ∈ / Css(φc0 u) follows from ω ∈ / Css(φu).
c
0 ∞ n
Indeed we can then write φ = µφ where µ ∈ Cc (R ) so it suffices
to show that if v ∈ Cc−∞ (Rn ) has ω ∈ / Css(b v ) then ω ∈/ Css(cµv) if
∞ n −n n
µ ∈ Cc (R ). Since µ cv = (2π) υ ∗ u b where υ̌ = µ b ∈ S(R ), applying
Lemma 7.8 we see that Css(υ ∗ vb) ⊂ Css(b v ), so indeed ω ∈ / Css(φc0 u).
n−1
The case that p ∈ S is similar. Namely we have one cut-off ψR
with ψ(p) 6= 0 and q ∈ / Css(ωd R u). We can take U = {ψR+10 6= 0} since if
ψR0 0 has conic support in U then ψR0 0 = ψ 00 R0 ψR for some ψ 00 ∈ C ∞ (Sn−1 ).
Thus
0 00
0 u = v ∗ ψR u, v̌ = ω 00 .
(7.40) [
ψ d d
R R

From Lemma 7.11 and Corollary7.9 we deduce that


0
R0 u) ⊂ Css(ω
(7.41) [
Css(ψ dR u)

and hence the result follows with U 0 a small neighourhood of q. 


Proposition 7.14. For any u ∈ S 0 (Rn ),
(7.42) WFsc (u) ⊂ ∂(Bn × Bn ) = (Bn × Sn−1 ) ∪ (Sn−1 × Bn )
= (Rn × Sn−1 ) ∪ (Sn−1 × Sn−1 ) ∪ (Sn−1 × Rn )
and WF(u) ⊂ Rn are closed sets and under projection onto the first
variable
(7.43) π1 (WF(u)) = singsupp(u) ⊂ Rn , π1 (WFsc (u)) = Css(u) ⊂ Bn .
Proof. To prove the first part of (7.43) we need to show that
/ WF(u) for all ω ∈ Sn−1 with x̄ ∈ Rn fixed, then x̄ ∈
if (x̄, ω) ∈ /
singsupp(u). The definition (7.37) means that for each ω ∈ Sn−1 there
exists φω ∈ Cc∞ (Rn ) with φω (x̄) 6= 0 such that ω ∈ / Css(φd ω u). Since
n−1
Css(φu) is closed and S is compact, a finite number of these cutoffs,

φj ∈ Cc (R ), can be chosen so that φj (x̄) 6= 0 with the Sn−1 \ Css(φ
n dj u)
n−1
covering S . Now applying Lemma 7.13 above, we can find one φ ∈
∞ n
T
Cc (R ), with support in j {φj (x) 6= 0} and φ(x̄) 6= 0, such that
c ⊂ Css(φ n
Css(φu) j u) for each j and hence φu ∈ S(R ) (since it is
d
already smooth). Thus indeed it follows that x̄ ∈ / singsupp(u). The
100 3. DISTRIBUTIONS

converse, that x̄ ∈ / WF(u) for all ω ∈ Sn−1


/ singsupp(u) implies (x̄, ω) ∈
is immediate.
The argument to prove the second part of (7.43) is similar. Since, by
definition, WFsc (u)∩(Rn ×Bn ) = WF(u) and Css(u)∩Rn = singsupp(u)
we only need consider points in Css(u) ∩ Sn−1 . Now, we first check that
if θ ∈/ Css(u) then {θ} × Bn ∩ WFsc (u) = ∅. By definition of Css(u)
there is a cut-off ψR , where ψ ∈ C ∞ (Sn−1 ) and ψ(θ) 6= 0, such that
ψR u ∈ S(Rn ). From (7.38) this implies that (θ, p) ∈ / WFsc (u) for all
n
p∈B .
Now, Lemma 7.13 allows us to apply the same argument as used
above for WF . Namely we are given that (θ, p) ∈ / WFsc (u) for all
p ∈ Bn . Thus, for each p we may find ψR , depending on p, such that
n
ψ(θ) 6= 0 and p ∈ / Css(ψdR u). Since B is compact, we may choose a
(j)
finite subset of these conic localizers, ψRj such that the intersection
[(j)
of the corresponding sets Css(ψRj u), is empty, i.e. their complements
cover Bn . Now, using Lemma 7.13 we may choose one ψ with support
in the intersection of the sets {ψ (j) 6= 0} with ψ(θ) 6= 0 and one R
n
such that Css(ψd R u) = ∅, but this just means that ψR u ∈ S(R ) and so
θ∈ / Css(u) as desired.
The fact that these sets are closed (in the appropriate sets) follows
directly from Lemma7.13. 
Corollary 7.15. For u ∈ S 0 (Rn ),
(7.44) WFsc (u) = ∅ ⇐⇒ u ∈ S(Rn ).
Let me return to the definition of WFsc (u) and rewrite it, using
what we have learned so far, in terms of a decomposition of u.
Proposition 7.16. For any u ∈ S 0 (Rn ) and (p, q) ∈ ∂(Bn × Bn ),

(7.45) (p, q) ∈
/ WFsc (u) ⇐⇒
u = u1 + u2 , u1 , u2 ∈ S 0 (Rn ), p ∈
/ Css(u1 ), q ∈
/ Css(ub2 ).
Proof. For given (p, q) ∈ / WFsc (u), take Φ = φ ∈ Cc∞ (Rn ) with
φ ≡ 1 near p, if p ∈ Rn or Φ = ψR with ψ ∈ C ∞ (Sn−1 ) and ψ ≡ 1
near p, if p ∈ Sn−1 . In either case p ∈
/ Css(u1 ) if u1 = (1 − Φ)u directly
from the definition. So u2 = u − u1 = Φu. If the support of Φ is small
enough it follows as in the discussion in the proof of Proposition 7.14
that
(7.46) q∈
/ Css(ub2 ).
Thus we have (7.45) in the forward direction.
7. CONE SUPPORT AND WAVEFRONT SET 101

For reverse implication it follows directly that (p, q) ∈


/ WFsc (u1 )
and that (p, q) ∈
/ WFsc (u2 ). 
This restatement of the definition makes it clear that there a high
degree of symmetry under the Fourier transform
Corollary 7.17. For any u ∈ S 0 (Rn ),
(7.47) (p, q) ∈ WFsc (u)) ⇐⇒ (q, −p) ∈ WFsc (û).
Proof. I suppose a corollary should not need a proof, but still . . . .
The statement (7.47) is equivalent to
(7.48) (p, q) ∈
/ WFsc (u)) =⇒ (q, −p) ∈
/ WFsc (û)
since the reverse is the same by Fourier inversion. By (7.45) the con-
dition on the left is equivalent to u = u1 + u2 with p ∈ / Css(u1 ),
q∈/ Css(ub2 ). Hence equivalent to
(7.49) b = v1 + v2 , v1 = ub2 , vb2 = (2π)−n uˇ1
u
so q ∈
/ Css(v1 ), −p ∈
/ Css(vb2 ) which proves (7.47). 
Now, we can exploit these notions to refine our conditions under
which pairing, the product and convolution can be defined.
Theorem 7.18. For u, v ∈ S 0 (Rn )
(7.50) uv ∈ S 0 (Rn ) is unambiguously defined provided
(p, ω) ∈ WFsc (u) ∩ (Bn × Sn−1 ) =⇒ (p, −ω) ∈
/ WFsc (v)
and
(7.51) u ∗ v ∈ S 0 (Rn ) is unambiguously defined provided
(θ, q) ∈ WFsc (u) ∩ (Sn−1 × Bn ) =⇒ (−θ, q) ∈
/ WFsc (v).
Proof. Let us consider convolution first. The hypothesis, (7.51)
means that for each θ ∈ Sn−1
(7.52)
{q ∈ Bn−1 ; (θ, q) ∈ WFsc (u)} ∩ {q ∈ Bn−1 ; (−θ, q) ∈ WFsc (v)} = ∅.
Now, the fact that WFsc is always a closed set means that (7.52) re-
mains true near θ in the sense that if U ⊂ Sn−1 is a sufficiently small
neighbourhood of θ then
(7.53) {q ∈ Bn−1 ; ∃ θ0 ∈ U, (θ0 , q) ∈ WFsc (u)}
∩ {q ∈ Bn−1 ; ∃ θ00 ∈ U, (−θ00 , q) ∈ WFsc (v)} = ∅.
The compactness of Sn−1 means that there is a finite cover of Sn−1 by
such sets Uj . Now select a partition of unity ψi of Sn−1 which is not
102 3. DISTRIBUTIONS

only subordinate to this open cover, so each ψi is supported in one of


the Uj but satisfies the additional condition that

(7.54) supp(ψi ) ∩ (− supp(ψi0 )) 6= ∅ =⇒


supp(ψi ) ∪ (− supp(ψi0 )) ⊂ Uj for some j.
P
Now, if we set ui = (ψi )R u, and vi0 = (ψi0 )R v, we know that u − ui
i
has compact support and similarly for v. Since convolution is already
known to be possible if (at least) one factor has compact support, it
suffices to define ui ∗ vi0 for every i, i0 . So, first suppose that supp(ψi ) ∩
(− supp(ψi0 )) 6= ∅. In this case we conclude from (7.54) that
(7.55) Css(ubi ) ∩ Css(vbi0 ) = ∅.
Thus we may define
(7.56) i ∗ v i0 = u
u\ bi vbi0
using (7.20). On the other hand if supp ψi ∩ (− supp(ψi0 )) = ∅ then
(7.57) Css(ui ) ∩ (− Css(vi0 )) ∩ Sn−1 = ∅
and in this case we can define ui ∗ vi0 using Lemma 7.10.
Thus with such a decomposition of u and v all terms in the convo-
lution are well-defined. Of course we should check that this definition
is independent of choices made in the decomposition. I leave this to
you.
That the product is well-defined under condition (7.50) now follows
if we define it using convolution, i.e. as
(7.58) cv = f ∗ g, f = u
u b, ǧ = vb.
Indeed, using (7.47), (7.50) for u and v becomes (7.51) for f and g. 

8. Homogeneous distributions
Next time I will talk about homogeneous distributions. On R the
functions
 s
s x x>0
xt =
0 x<0
where S ∈ R, is locally integrable (and hence a tempered distribution)
precisely when S > −1. As a function it is homogeneous of degree s.
Thus if a > 0 then
(ax)st = as xst .
11. SCHWARTZ SPACE. 103

Thinking of xst = µs as a distribution we can set this as


Z
µs (ax)(ϕ) = µs (ax)ϕ(x) dx
Z
dx
= µs (x)ϕ(x/a)
a
s
= a µs (ϕ) .
Thus if we define ϕa (x) = a1 ϕ( xa ), for any a > 0, ϕ ∈ S(R) we can ask
whether a distribution is homogeneous:
µ(ϕa ) = as µ(ϕ) ∀ ϕ ∈ S(R).

9. Operators and kernels


From here on a summary of parts of 18.155 used in 18.156 – to be
redistributed backwards With some corrections by incorporated.

10. Fourier transform


The basic properties of the Fourier transform, tempered distribu-
tions and Sobolev spaces form the subject of the first half of this course.
I will recall and slightly expand on such a standard treatment.

11. Schwartz space.


The space S(Rn ) of all complex-volumed functions with rapidly
decreasing derivatives of all orders is a complete metric space with
metric

X ku − vk(k)
d(u, v) = 2−k where
k=0
1 + ku − vk(k)
(11.1) X
kuk(k) = sup |z α Dzβ u(z)|.
z∈Rn
|α|+|β|≤k

Here and below I will use the notation for derivatives


1 ∂
Dzα = Dzα11 . . . , Dzαnn , Dzj = 1 .
i ∂zj
These norms can be replaced by other equivalent ones, for instance
by reordering the factors
X
kuk0(k) = sup |Dzβ (z β u)|.
z∈Rn
|α|+|β|≤k
104 3. DISTRIBUTIONS

In fact it is only the cumulative effect of the norms that matters, so


one can use
(11.2) kuk00(k) = sup |hzi2k (∆ + 1)k u|
z∈Rn

in (11.1) and the same topology results. Here


n
X
hzi2 = 1 + |z|2 , ∆ = Dj2
j=1

(so the Laplacian is formally positive, the geometers’ convention). It


is not quite so trivial to see that inserting (11.2) in (11.1) gives an
equivalent metric.

12. Tempered distributions.


The space of (metrically) continuous linear maps
(12.1) f : S(Rn ) −→ C
is the space of tempered distribution, denoted S 0 (Rn ) since it is the
dual of S(Rn ). The continuity in (12.1) is equivalent to the estimates
(12.2) ∃ k, Ck > 0 s.t. |f (ϕ)| ≤ Ck kϕk(k) ∀ ϕ ∈ S(Rn ).
There are several topologies which can be considered on S 0 (Rn ).
Unless otherwise noted we consider the uniform topology on S 0 (Rn );
a subset U ⊂ S 0 (Rn ) is open in the uniform topology if for every
u ∈ U and every k sufficiently large there exists δk > 0 (both k and δk
depending on u) such that

v ∈ S 0 (Rn ), |(u − u)(ϕ) ≤ δk kϕk(k) ⇒ v ∈ U.


For linear maps it is straightforward to work out continuity condi-
tions. Namely
P : S(Rn ) −→ S(Rm )
Q : S(Rn ) −→ S 0 (Rm )
R : S 0 (Rn ) −→ S(Rm )
S : S 0 (Rn ) −→ S 0 (Rm )
are, respectively, continuous for the metric and uniform topologies if
∀ k ∃ k 0 , C s.t. kP ϕk(k) ≤ Ckϕk(k0 ) ∀ ϕ ∈ S(Rn )
∃ k, k 0 , C s.t. |Qϕ(ψ)| ≤ Ckϕk(k) kψk(k0 )
∀ k, k 0 ∃ C s.t. |u(ϕ)| ≤ kϕk(k0 ) ∀ ϕ ∈ S(Rn ) ⇒ kRuk(k) ≤ C
∀ k 0 ∃ k, C, C 0 s.t. ku(ϕ)k(k) ≤ kϕk(k) ∀ ϕ ∈ S(Rn ) ⇒ |Su(ψ)| ≤ C 0 kψk(k0 ) ∀ ψ ∈ S(Rn ).
13. FOURIER TRANSFORM 105

The particular case of R, for m = 0, where at least formally S(R0 ) = C,


corresponds to the reflexivity of S(Rn ), that
R : S 0 (Rn ) −→ C is cts. iff ∃ ϕ ∈ S(Rn ) s.t.
Ru = u(ϕ) i.e. (S 0 (Rn ))0 = S(Rn ).
In fact another extension of the middle two of these results corresponds
to the Schwartz kernel theorem:
Q :S(Rn ) −→ S 0 (Rm ) is linear and continuous
iff ∃ Q ∈ S 0 (Rm × Rn ) s.t. (Q(ϕ))(ψ) = Q(ψ  ϕ) ∀ ϕ ∈ S(Rm ) ψ ∈ S(Rn ).
R :S 0 (Rn ) −→ S(Rn ) is linear and continuous
iff ∃ R ∈ S(Rm × Rn ) s.t. (Ru)(z) = u(R(z, ·)).
Schwartz test functions are dense in tempered distributions
S(Rn ) ,→ S 0 (Rn )
where the standard inclusion is via Lebesgue measure
Z
n 0 n
(12.3) S(R ) 3 ϕ 7→ uϕ ∈ S (R ), uϕ (ψ) = ϕ(z)ψ(z)dz.
Rn
The basic operators of differentiation and multiplication are transferred
to S 0 (Rn ) by duality so that they remain consistent with the (12.3):
Dz u(ϕ) = u(−Dz ϕ)
f u(ϕ) = u(f ϕ) ∀ f ∈ S(Rn )).
In fact multiplication extends to the space of function of polynomial
growth:
∀ α ∈ Nn0 ∃ k s.t. |Dzα f (z)| ≤ Chzik .
Thus such a function is a multiplier on S(Rn ) and hence by duality on
S 0 (Rn ) as well.

13. Fourier transform


Many of the results just listed are best proved using the Fourier
transform
F : S(Rn ) −→ S(Rn )
Z
Fϕ(ζ) = ϕ̂(ζ) = e−izζ ϕ(z)dz.

This map is an isomorphism that extends to an isomorphism of S 0 (Rn )


F : S(Rn ) −→ S(Rn )
Fϕ(Dzj u) = ζj Fu, F(zj u) = −Dζj Fu
106 3. DISTRIBUTIONS

and also extends to an isomorphism of L2 (Rn ) from the dense subset


(13.1) S(Rn ) ,→ L2 (R2 )dense, kFϕk2L2 = (2π)n kϕk2L2 .

14. Sobolev spaces


Plancherel’s theorem, (??), is the basis of the definition of the (stan-
dard, later there will be others) Sobolev spaces.
H s (Rn ) = {u ∈ S 0 (Rn ); (1 + |ζ|2 )s/2 û ∈ L2 (Rn )}
Z
2
kuks = (1 + |ζ|2 )s |û(ζ)|dζ,
Rn

where we use the fact that L2 (Rn ) ,→ S 0 (Rn ) is a well-defined injection


(regarded as an inclusion) by continuous extension from (12.3). Now,
(14.1) Dα : H s (Rn ) −→ H s−|α| (Rn ) ∀ s, α.
As well as this action by constant coefficient differential operators
we note here that multiplication by Schwartz functions also preserves
the Sobolev spaces – this is generalized with a different proof below.
I give this cruder version first partly to show a little how to estimate
convolution integrals.
Proposition 14.1. For any s ∈ R there is a continuous bilinear
map extending multiplication on Schwartz space
(14.2) S(Rn ) × H s (Rn ) −→ H s (Rn )
Proof. The product φu is well-defined for any φ ∈ S(Rn ) and
u ∈ S 0 (Rn ). Since Schwartz functions are dense in the Sobolev spaces
it suffices to assume u ∈ S(Rn ) and then to use continuity. The Fourier
transform of the product is the convolution of the Fourier transforms
Z
−n
c = (2π) φ̂ ∗ û, φ̂ ∗ û(ξ) =
(14.3) φu φ̂(ξ − η)û(η)dη.
Rn
This is proved above, but let’s just note that in this case it is easy
enough since all the integrals are absolutely convergent and we can
compute the inverse Fourier transform of the convolution
Z Z
−n iz·ξ
(2π) dξe φ̂(ξ − η)û(η)dη
Rn
Z Z
−n iz·(ξ−η)
= (2π) dξe φ̂(ξ − η)eiz·η û(η)dη
(14.4) Z Z R
n

−n iz·Ξ
= (2π) dΞe φ̂(Ξ)eiz·η û(η)dη
Rn
n
= (2π) φ(z)u(z).
14. SOBOLEV SPACES 107

First, take s = 0 and prove this way the, rather obvious, fact that
S is a space of multipliers on L2 . Writing out the square of the abso-
lute value of the integral as the product with the complex conjugate,
estimating by the absolute value and then using the Cauchy-Schwarz
inequality gives what we want
Z Z
| | ψ(ξ − η)û(η)dη|2 dξ
Z Z
≤ |ψ(ξ − η1 )||û(η1 )||ψ(ξ − η2 )||û(η2 )|dη1 dη2 dξ
(14.5) Z Z
≤ |ψ(ξ − η1 )||ψ(ξ − η2 )||û(η2 )|2 dη1 dη2
Z
≤ ( |ψ|)2 kuk2L2 .
1
Here, we have decomposed the integral as the product of |ψ(ξ−η1 )| 2 |û(η1 )||ψ(ξ−
1
η2 )| 2 and the same term with the η variables exchanged. The two re-
sulting factors are then the same after changing variable so there is no
square-root in the integral.
Note that what we have really shown here is the well-known result:-
Lemma 14.2. Convolution gives is a continous bilinear map
(14.6)
L1 (Rn ) × L2 (Rn ) 3 (u, v) 7−→ u ∗ v ∈ L2 (Rn ), ku ∗ vkL2 ≤ kukL1 kvkL2 .
Now, to do the general case we need to take care of the weights in
the integral for the Sobolev norm
Z
(14.7) kφukH s = (1 + |ξ|2 )s |φu(ξ)|
2 c 2
dξ.

To do so, we divide the convolution integral into two regions:-


1
I = {η ∈ Rn ; |ξ − η| ≥ (|ξ| + |η|)}
(14.8) 10
1
II = {η ∈ Rn ; |ξ − η| ≤ (|ξ| + |η|)}.
10
In the first region φ(ξ − η) is rapidly decreasing in both variable, so
(14.9) |ψ(ξ − η)| ≤ CN (1 + |ξ|)−N (1 + |η|)−N
for any N and as a result this contribution to the integral is rapidly
decreasing:-
Z
(14.10) | ψ(ξ − η)û(η)dη| ≤ CN (1 + |ξ|)−n kukH s
I
108 3. DISTRIBUTIONS

where the η decay is used to squelch the weight. So this certainly


constributes a term to ψ ∗ û with the bilinear bound.
To estimate the contribution from the second region, proceed as
above but the insert the weight after using the Cauchy-Schwartz inte-
quality
(14.11)
Z Z
(1 + |ξ|2 )s | ψ(ξ − η)û(η)dη|2 dξ
II
Z Z Z
2 s
≤ (1 + |ξ| ) |ψ(ξ − η1 )||û(η1 )||ψ(ξ − η2 )||û(η2 )|dη1 dη2
II II
Z Z Z
≤ (1 + |ξ|2 )s (1 + |η2 |2 )−s |ψ(ξ − η1 )||ψ(ξ − η2 )|(1 + |η2 |2 )s |û(η2 )|2 dη1 dη2
II II

Exchange the order of integration and note that in region II the two
variables η2 and ξ are each bounded relative to the other. Thus the
quotient of the weights is bounded above so the same argument applies
to estimate the integral by
Z 2
(14.12) C dΞ|ψ(Ξ)| kuk2H s

as desired. 

The Sobolev spaces are Hilbert spaces, so their duals are (conjugate)
isomorphic to themselves. However, in view of our inclusion L2 (Rn ) ,→
S 0 (Rn ), we habitually identify

(H s (Rn ))0 = H −s (Rn ),

with the ‘extension of the L2 paring’


Z Z
00 −n
(u, v) = “ u(z)v(z)dz = (2π) hζis û · hζi−s ûdζ.
Rn

Note that then (14) is a linear, not a conjugate-linear, isomorphism


since (14) is a real pairing.
The Sobolev spaces decrease with increasing s,
0
H s (Rn ) ⊂ H s (Rn ) ∀ s ≥ s0 .

One essential property is the relationship between the ‘L2 derivatives’


involved in the definition of Sobolev spaces and standard derivatives.
15. WEIGHTED SOBOLEV SPACES. 109

Namely, the Sobolev embedding theorem:


n
s > =⇒H s (Rn ) ⊂ C∞0
(Rn )
2
= {u; Rn −→ C its continuous and bounded}.
n
s > + k, k ∈ N =⇒H s (Rn ) ⊂ C∞k
(Rn )
2
def
= {u; Rn −→ C s.t. Dα u ∈ C∞
0
(Rn ) ∀ |α| ≤ k}.
For positive integral s the Sobolev norms are easily written in terms of
the functions, without Fourier transform:
u ∈ H k (Rn ) ⇔ Dα u ∈ L2 (Rn ) ∀ |α| ≤ k
XZ
2
kukk = |Dα u|2 dz.
|α|≤k Rn

For negative integral orders there is a similar characterization by du-


ality, namely
H −k (Rn ) = {u ∈ S 0 (Rn ) s.t. , ∃ uα ∈ L2 (Rn ), |α| ≥ k
X
u= Dα uα }.
|α|≤k

In fact there are similar “Hölder” characterizations in general. For


0 < s < 1, u ∈ H s (Rn ) =⇒ u ∈ L2 (Rn ) and
|u(z) − u(z 0 )|2
Z
(14.13) 0 n+2s
dzdz 0 < ∞.
R2n |z − z |
Then for k < s < k + 1, k ∈ N u ∈ H s (R2 ) is equivalent to Dα ∈
H s−k (Rn ) for all |α| ∈ k, with corresponding (Hilbert) norm. Similar
realizations of the norms exist for s < 0.
One simple consequence of this is that
\

C∞ (Rn ) = k
C∞ (Rn ) = {u; Rn −→ C s.t. |Dα u| is bounded ∀ α}
k

is a multiplier on all Sobolev spaces



C∞ (Rn ) · H s (Rn ) = H s (Rn ) ∀ s ∈ R.

15. Weighted Sobolev spaces.


It follows from the Sobolev embedding theorem that
\

(15.1) H s (Rn ) ⊂ C∞ (Rn );
s
110 3. DISTRIBUTIONS

in fact the intersection here is quite a lot smaller, but nowhere near as
small as S(Rn ). To discuss decay at infinity, as will definitely want to
do, we may use weighted Sobolev spaces.
The ordinary Sobolev spaces do not effectively define decay (or
growth) at infinity. We will therefore also set
H m,l (Rn ) = {u ∈ S 0 (Rn ); hzi` u ∈ H m (Rn )}, m, ` ∈ R,
= hzi−` H m (Rn ) ,
where the second notation is supported to indicate that u ∈ H m,l (Rn )
may be written as a product hzi−` v with v ∈ H m (Rn ). Thus
0 0
H m,` (Rn ) ⊂ H m ,` (Rn ) if m ≥ m0 and ` ≥ `0 ,
so the spaces are decreasing in each index. As consequences of the
Schwartz structure theorem
[
S 0 (Rn ) = H m,` (Rn )
m,`
(15.2) \
n
S(R ) = H m,` (Rn ).
m,`

This is also true ‘topologically’ meaning that the first is an ‘inductive


limit’ and the second a ‘projective limit’.
Similarly, using some commutation arguments
Dzj : H m,` (Rn ) −→ H m−1,` (Rn ), ∀ m, elll
×zj : H m,` (Rn ) −→ H m,`−1 (Rn ).
Moreover there is symmetry under the Fourier transform
F : H m,` (Rn ) −→ H `,m (Rn ) is an isomorphism ∀ m, `.
As with the usual Sobolev spaces, S(Rn ) is dense in all the H m,` (Rn )
spaces and the continuous extension of the L2 paring gives an identifi-
cation
H m,` (Rn ) ∼
= (H −m,−` (Rn ))0 fron
H m,` (Rn ) × H −m,−` (Rn ) 3 u, v 7→
Z
(u, v) = “ u(z)v(z)dz 00 .

Let Rs be the operator defined by Fourier multiplication by hζis :



(15.3) Rs : S(Rn ) −→ S(Rn ), R
ds f (ζ) = hζi f (ζ).
15. WEIGHTED SOBOLEV SPACES. 111

Lemma 15.1. If ψ ∈ S(Rn ) then


(15.4) Ms = [ψ, Rs ∗] : H t (Rn ) −→ H t−s+1 (Rn )
is bounded for each t.
Proof. Since the Sobolev spaces are defined in terms of the Fourier
transform, first conjugate and observe that (15.4) is equivalent to the
boundeness of the integral operator with kernel
(15.5)
t−s+1 s s t
Ks,t (ζ, ζ 0 ) = (1+|ζ|2 ) 2 ψ̂(ζ−ζ 0 ) (1 + |ζ 0 |2 ) 2 − (1 + |ζ|2 ) 2 (1+|ζ 0 |2 )− 2
on L2 (Rn ). If we insert the characteristic function for the region near
the diagonal
1
(15.6) |ζ − ζ 0 | ≤ (|ζ| + |ζ 0 |) =⇒ |ζ| ≤ 2|ζ 0 |, |ζ 0 | ≤ 2|ζ|
4
0
then |ζ| and |ζ | are of comparable size. Using Taylor’s formula
(15.7)
Z 1
s s  s −1
(1+|ζ | ) −(1+|ζ| ) = s(ζ−ζ )· (tζ+(1−tζ 0 ) 1 + |tζ + (1 − t)ζ 0 |2 2 dt
0 2 2
2 2
0
0
s s
=⇒ (1 + |ζ | ) − (1 + |ζ|2 ) 2 ≤ Cs |ζ − ζ 0 |(1 + |ζ|)s−1 .
0 2 2

It follows that in the region (15.6) the kernel in (15.5) is bounded by


(15.8) C|ζ − ζ 0 ||ψ̂(ζ − ζ 0 )|.
In the complement to (15.6) the kernel is rapidly decreasing in ζ and ζ 0
in view of the rapid decrease of ψ̂. Both terms give bounded operators
on L2 , in the first case using the same estimates that show convolution
by an element of S to be bounded. 
Lemma 15.2. If u ∈ H s (Rn ) and ψ ∈ Cc∞ (Rn ) then
(15.9) kψuks ≤ kψkL∞ kuks + Ckuks−1
where the constant depends on s and ψ but not u.
Proof. This is really a standard estimate for Sobolev spaces. Re-
call that the Sobolev norm is related to the L2 norm by
(15.10) kuks = khDis ukL2 .
Here hDis is the convolution operator with kernel defined by its Fourier
transform
(15.11) cs (ζ) = (1 + |ζ|2 ) 2s .
hDis u = Rs ∗ u, R
To get (15.9) use Lemma 15.1.
112 3. DISTRIBUTIONS

From (15.4), (writing 0 for the L2 norm)


(15.12) kψuks = kRs ∗ (ψu)k0 ≤ kψ(Rs ∗ u)k0 + kMs uk0
≤ kψkL∞ kRs uk0 + Ckuks−1 ≤ kψkL∞ kuks + Ckuks−1 .
This completes the proof of (15.9) and so of Lemma 15.2. 

16. Multiplicativity
Of primary importance later in our treatment of non-linear prob-
lems is some version of the multliplicative property
(
H s (Rn ) ∩ L∞ (Rn ) s ≤ n2
(16.1) As (Rn ) = is a C ∞ algebra.
H s (Rn ) s > n2
Here, a C ∞ algebra is an algebra with an additional closure property.
Namely if F : RN −→ C is a C ∞ function vanishing at the origin and
u1 , . . . , uN ∈ As are real-valued then
F (u1 , . . . , un ) ∈ As .
I will only consider the case of real interest here, where s is an
integer and s > n2 . The obvious place to start is
n
Lemma 16.1. If s > 2
then
(16.2) u, v ∈ H s (Rn ) =⇒ uv ∈ H s (Rn ).
Proof. We will prove this directly in terms of convolution. Thus,
in terms of weighted Sobolev spaces u ∈ H s (Rn ) = H s,0 (Rn ) is equiva-
lent to û ∈ H 0,s (Rn ). So (16.2) is equivalent to
(16.3) u, v ∈ H 0,s (Rn ) =⇒ u ∗ v ∈ H 0,s (Rn ).
Using the density of S(Rn ) it suffices to prove the estimate
n
(16.4) ku ∗ vkH 0,s ≤ Cs kukH 0,s kvkH 0,s for s > .
2
−s 0
Now, we can write u(ζ) = hζi u etc and convert (16.4) to an estimate
on the L2 norm of
Z
−s
(16.5) hζi hξi−s u0 (ξ)hζ − ξi−s v 0 (ζ − ξ)dξ

in terms of the L2 norms of u0 and v 0 ∈ S(Rn ).


Writing out the L2 norm as in the proof of Lemma 15.1 above, we
need to estimate the absolute value of
(16.6)
Z Z Z
dζdξdηhζi2s hξi−s u1 (ξ)hζ−ξi−s v1 (ζ−ξ)hηi−s u2 (η)hζ−ηi−s v2 (ζ−η)
16. MULTIPLICATIVITY 113

in terms of the L2 norms of the ui and vi . To do so divide the integral


into the four regions,
1 1
|ζ − ξ| ≤ (|ζ| + |ξ|), |ζ − η| ≤ (|ζ| + |η|)
4 4
1 1
|ζ − ξ| ≤ (|ζ| + |ξ|), |ζ − η| ≥ (|ζ| + |η|)
(16.7) 4 4
1 1
|ζ − ξ| ≥ (|ζ| + |ξ|), |ζ − η| ≤ (|ζ| + |η|)
4 4
1 1
|ζ − ξ| ≥ (|ζ| + |ξ|), |ζ − η| ≥ (|ζ| + |η|).
4 4
Using (15.6) the integrand in (16.6) may be correspondingly bounded
by
Chζ − ηi−s |u1 (ξ)||v1 (ζ − ξ)| · hζ − ξi−s |u2 (η)||v2 (ζ − η)|
Chηi−s |u1 (ξ)||v1 (ζ − ξ)| · hζ − ξi−s |u2 (η)||v2 (ζ − η)|
(16.8)
Chζ − ηi−s |u1 (ξ)||v1 (ζ − ξ)| · hξi−s |u2 (η)||v2 (ζ − η)|
Chηi−s |u1 (ξ)|v1 (ζ − ξ)| · hξi−s |u2 (η)||v2 (ζ − η)|.
Now applying Cauchy-Schwarz inequality, with the factors as indicated,
and changing variables appropriately gives the desired estimate. 

Next, we extend this argument to (many) more factors to get the


following result which is close to the Gagliardo-Nirenberg estimates
(since I am concentrating here on L2 methods I will not actually discuss
the latter).
n
Lemma 16.2. If s > 2
, N ≥ 1 and αi ∈ Nk0 for i = 1, . . . , N are
such that
N
X
|αi | = T ≤ s
i=1

then
(16.9)
N
Y N
Y
s n αi s−T n
ui ∈ H (R ) =⇒ U = D ui ∈ H (R ), kU kH s−T ≤ CN kui kH s .
i=1 i=1

Proof. We proceed as in the proof of Lemma 16.1 using the Fourier


transform to replace the product by the convolution. Thus it suffices
to show that
(16.10) u1 ∗ u2 ∗ u3 ∗ · · · ∗ uN ∈ H 0,s−T if ui ∈ H 0,s−αi .
114 3. DISTRIBUTIONS

Writing out the convolution symmetrically in all variables,


Z
(16.11) u1 ∗ u2 ∗ u3 ∗ · · · ∗ uN (ζ) = P
u1 (ξ1 ) · · · uN (ξN )
ζ= ξi
i

it follows that we need to estimate the L2 norm in ζ of


Z
(16.12) hζis−T
P
hξ1 i−s+a1 v1 (ξ1 ) · · · hξN i−s+aN vN (ξN )
ζ= ξi
i

for N factors vi which are in L2 with the ai = |α|i non-negative integers


summing to T ≤ s. Again writing the square as the product with the
complex conjuage it is enough to estimate integrals of the type
Z X
(16.13) h ξi2s−2T hξ1 i−s+a1
{(ξ,η)∈R2N ;
P P
ξi = ηi } i
i i

v1 (ξ1 ) · · · hξN i−s+aN vN (ξN )hη1 i−s+a1 v̄1 (η1 ) · · · hηN i−s+aN v̄N (ηN ).
This is really an integral over R2N −1 with respect to Lebesgue measure.
Applying Cauchy-Schwarz inequality the absolute value is estimated by
Z YN X N
Y
(16.14) 2
|vi (ξi )| h ηl i2s−2T
hηi i−2s+2ai
{(ξ,η)∈R2N ;
P P
ξi = ηi } i=1 l i=1
i i
P P
The domain of integration, given by ηi = ξi , is covered by the
i i
finite number of subsets Γj on which in addition |ηj | ≥ |ηi |, for all i.
On this set we may take P the variables of integration to be ηi for i 6= j
and the ξl . Then |ηi | ≥ | ηl |/N so the second part of the integrand
l
in (16.14) is estimated by
(16.15) X Y Y Y
hηj i−2s+2aj h ηl i2s−2T hηi i−2s+2ai ≤ CN hηj i−2T +2aj hηi i−2s+2ai ≤ CN0 hηi i−2s
l i6=j i6=j i6=j

Thus the integral in (16.14) is finite and the desired estimate follows.

Proposition 16.3. If F ∈ C ∞ (Rn × R) and u ∈ H s (Rn ) for s > n
2
an integer then
s
(16.16) F (z, u(z)) ∈ Hloc (Rn ).
Proof. Since the result is local on Rn we may multiply by a com-
pactly supported function of z. In fact since u ∈ H s (Rn ) is bounded we
17. SOME BOUNDED OPERATORS 115

also multiply by a compactly supported function in R without changing


the result. Thus it suffices to show that
(16.17) F ∈ Cc∞ (Rn × R) =⇒ F (z, u(z)) ∈ H s (Rn ).
Now, Lemma 16.2 can be applied to show that F (z, u(z)) ∈ H s (Rn ).
Certainly F (z, u(z)) ∈ L2 (Rn ) since it is continuous and has compact
support. Moreover, differentiating s times and applying the chain rule
gives
X
(16.18) Dα F (z, u(z)) = Fα1 ,...,αN (z, u(z))Dα1 u · · · DαN u
N
P
where the sum is over all (finitely many) decomposition with αi ≤
i=1
α and the F· (z, u) are smooth with compact support, being various
derivitives of F (z, u). Thus it follows from Lemma 16.2 that all terms
on the right are in L2 (Rn ) for |α| ≤ s. 
Note that slightly more sophisticated versions of these arguments
give the full result (16.1) but Proposition 16.3 suffices for our purposes
below.
17. Some bounded operators
Lemma 17.1. If J ∈ C k (Ω2 ) is properly supported then the operator
with kernel J (also denoted J) is a map
s k
(17.1) J : Hloc (Ω) −→ Hloc (Ω) ∀ s ≥ −k.
CHAPTER 4

Elliptic Regularity

Includes some corrections noted by Tim Nguyen and corrections by,


and some suggestions from, Jacob Bernstein.

1. Constant coefficient operators


A linear, constant coefficient differential operator can be thought
of as a map
(1.1)
X
P (D) : S(Rn ) −→ S(Rn ) of the form P (D)u(z) = cα Dα u(z),
|α|≤m

1 ∂
Dα = D1α1 . . . Dnαn , Dj = ,
i ∂zj
but it also acts on various other spaces. So, really it is just a polynomial
P (ζ) in n variables. This ‘characteristic polynomial’ has the property
that
(1.2) F(P (D)u)(ζ) = P (ζ)Fu(ζ),
which you may think of as a little square
P (D)
(1.3) S(Rn ) / S(Rn )
O O
F F
 
S(Rn ) / S(Rn )

and this is why the Fourier tranform is especially useful. However, this
still does not solve the important questions directly.
Question 1.1. P (D) is always injective as a map (1.1) but is usu-
ally not surjective. When is it surjective? If Ω ⊂ Rn is a non-empty
open set then
(1.4) P (D) : C ∞ (Ω) −→ C ∞ (Ω)
is never injective (unless P (ζ) is constnat), for which polynomials is it
surjective?
117
118 4. ELLIPTIC REGULARITY

The first three points are relatively easy. As a map (1.1) P (D)
is injective since if P (D)u = 0 then by (1.2), P (ζ)Fu(ζ) = 0 on Rn .
However, a zero set, in Rn , of a non-trivial polynomial alwasys has
empty interior, i.e. the set where it is non-zero is dense, so Fu(ζ) = 0
on Rn (by continuity) and hence u = 0 by the invertibility of the
Fourier transform. So (1.1) is injective (of course excepting the case
that P is the zero polynomial). When is it surjective? That is, when
can every f ∈ S(Rn ) be written as P (D)u with u ∈ S(Rn )? Taking
the Fourier transform again, this is the same as asking when every
g ∈ S(Rn ) can be written in the form P (ζ)v(ζ) with v ∈ S(Rn ). If
P (ζ) has a zero in Rn then this is not possible, since P (ζ)v(ζ) always
vanishes at such a point. It is a little trickier to see the converse, that
P (ζ) 6= 0 on Rn implies that P (D) in (1.1) is surjective. Why is this
not obvious? Because we need to show that v(ζ) = g(ζ)/P (ζ) ∈ S(Rn )
whenever g ∈ S(Rn ). Certainly, v ∈ C ∞ (Rn ) but we need to show that
the derivatives decay rapidly at infinity. To do this we need to get an
estimate on the rate of decay of a non-vanishing polynomial
Lemma 1.1. If P is a polynomial such that P (ζ) 6= 0 for all ζ ∈ Rn
then there exists C > 0 and a ∈ R such that
(1.5) |P (ζ)| ≥ C(1 + |ζ|)a .
Proof. This is a form of the Tarski-Seidenberg Lemma. Stated
loosely, a semi-algebraic function has power-law bounds. Thus
(1.6) F (R) = inf{|P (ζ)|; |ζ| ≤ R}
is semi-algebraic and non-vanishing so must satisfy F (R) ≥ c(1 + R)a
for some c > 0 and a (possibly negative). This gives the desired bound.
Is there an elementary proof? 
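To see that the exponent a in (1.5) may indeed have to be negative, consider for instance the polynomial P(ζ) = (ζ_1ζ_2 − 1)^2 + ζ_1^2 on R^2. It has no real zeros, since ζ_1 = 0 and ζ_1ζ_2 = 1 cannot hold simultaneously, yet along the hyperbola ζ_1 = 1/ζ_2 it takes the value ζ_2^{−2}, so any bound of the form (1.5) forces a ≤ −2.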
Thirdly the non-injectivity in (1.4) is obvious for the opposite rea-
son. Namely for any non-constant polynomial there exists ζ ∈ Cn such
that P (ζ) = 0. Since
(1.7) P (D)eiζ·z = P (ζ)eiζ·z
such a zero gives rise to a non-trivial element of the null space of
(1.4). You can find an extensive discussion of the density of these
sort of ‘exponential’ solutions (with polynomial factors) in all solutions
in Hörmander’s book [4].
What about the surjectivity of (1.4)? It is not always surjective
unless Ω is convex but there are decent answers, to find them you
should look under P -convexity in [4]. If P (ζ) is elliptic then (1.4) is
surjective for every open Ω; maybe I will prove this later although it is
not a result of great utility.
2. Constant coefficient elliptic operators
To discuss elliptic regularity, let me recall that any constant coeffi-
cient differential operator of order m defines a continuous linear map
(2.1) P (D) : H s+m (Rn ) 7−→ H s (Rn ).
Provided P is not the zero polynomial this map is always injective. This
follows as in the discussion above for S(Rn ). Namely, if u ∈ H s+m (Rn )
then, by definition, û ∈ L2loc (Rn ) and if P u = 0 then P (ζ)û(ζ) = 0 off
a set of measure zero. Since P (ζ) 6= 0 on an open dense set it follows
that û = 0 off a set of measure zero and so u = 0 as a distribution.
As a map (2.1), P(D) is seldom surjective. It is said to be elliptic
(either as a polynomial or as a differential operator) if it is of order m
and there is a constant c > 0 such that
(2.2) |P (ζ)| ≥ c(1 + |ζ|)m in {ζ ∈ Rn ; |ζ| > 1/c}.
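For example, Σ_j D_j^2, with characteristic polynomial |ζ|^2, is elliptic of order 2, as is 1 + Σ_j D_j^2 with polynomial 1 + |ζ|^2 (which also satisfies the non-vanishing condition appearing below). On the other hand the 'heat polynomial' iζ_1 + |ζ'|^2, ζ = (ζ_1, ζ'), vanishes only at ζ = 0 but is not elliptic, since along ζ' = 0 it grows only like |ζ| rather than |ζ|^2.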
Proposition 2.1. As a map (2.1), for a given s, P (D) is surjec-
tive if and only if P is elliptic and P (ζ) 6= 0 on Rn and then it is a
topological isomorphism for every s.
Proof. Since the Sobolev spaces are defined as the Fourier trans-
forms of the weighted L2 spaces, that is
(2.3) f ∈ H t (Rn ) ⇐⇒ (1 + |ζ|2 )t/2 fˆ ∈ L2 (Rn )
the sufficiency of these conditions is fairly clear. Namely the combi-
nation of ellipticity, as in (2.2), and the condition that P (ζ) 6= 0 for
ζ ∈ Rn means that
(2.4) |P (ζ)| ≥ c(1 + |ζ|2 )m/2 , c > 0, ζ ∈ Rn .
From this it follows that P (ζ) is bounded above and below by multiples
of (1 + |ζ|2 )m/2 and so maps the weighted L2 spaces into each other
(2.5) ×P(ζ) : H^{0,s+m}(R^n) −→ H^{0,s}(R^n), H^{0,s} = {u ∈ L^2_loc(R^n); ⟨ζ⟩^s u(ζ) ∈ L^2(R^n)},
giving an isomorphism (2.1) after Fourier transform.
The necessity follows either by direct construction or else by use
of the closed graph theorem. If P (D) is surjective then multiplication
by P (ζ) must be an isomorphism between the corresponding weighted
space H 0,s (Rn ) and H 0,s+m (Rn ). By the density of functions supported
off the zero set of P the norm of the inverse can be seen to be the
inverse of
(2.6) inf_{ζ∈R^n} |P(ζ)| ⟨ζ⟩^{−m},
which proves ellipticity. □
Ellipticity is reasonably common in applications, but the condition that the characteristic polynomial not vanish at all is frequently not
satisfied. In fact one of the questions I want to get to in this course –
even though we are interested in variable coefficient operators – is im-
proving (2.1) (by changing the Sobolev spaces) to get an isomorphism
at least for homogeneous elliptic operators (which do not satisfy the
second condition in Proposition 2.1 because they vanish at the origin).
One reason for this is that we want it for monopoles.
Note that ellipticity itself is a condition on the principal part of the
polynomial.
Lemma 2.2. A polynomial P(ζ) = Σ_{|α|≤m} c_α ζ^α of degree m is elliptic if and only if its leading part
(2.7) P_m(ζ) = Σ_{|α|=m} c_α ζ^α ≠ 0 on R^n \ {0}.
Proof. Since the principal part is homogeneous of degree m the requirement (2.7) is equivalent to
(2.8) |P_m(ζ)| ≥ c|ζ|^m, c = inf_{|ζ|=1} |P_m(ζ)| > 0.
Thus, (2.2) follows from this, since
(2.9) |P(ζ)| ≥ |P_m(ζ)| − |P'(ζ)| ≥ c|ζ|^m − C|ζ|^{m−1} − C ≥ (c/2)|ζ|^m if |ζ| > C',
P' = P − P_m being of degree at most m − 1. Conversely, ellipticity in the sense of (2.2) implies that
(2.10) |P_m(ζ)| ≥ |P(ζ)| − |P'(ζ)| ≥ c|ζ|^m − C|ζ|^{m−1} − C > 0 in |ζ| > C'
and so P_m(ζ) ≠ 0 for ζ ∈ R^n \ {0} by homogeneity. □
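For example, the operator Σ_j D_j^2 + Σ_j b_j D_j + c is elliptic for any constants b_j, c, since its leading part |ζ|^2 does not vanish on R^n \ {0}; by contrast the wave operator D_1^2 − Σ_{j≥2} D_j^2 is not elliptic, its leading part ζ_1^2 − |ζ'|^2 vanishing on the whole light cone.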
Let me next recall elliptic regularity for constant coefficient opera-
tors. Since this is a local issue, I first want to recall the local versions
of the Sobolev spaces discussed in Chapter 3
Definition 2.3. If Ω ⊂ R^n is an open set then
(2.11) H^s_loc(Ω) = {u ∈ C^{−∞}(Ω); φu ∈ H^s(R^n) ∀ φ ∈ C_c^∞(Ω)}.
Again you need to know what C −∞ (Ω) is (it is the dual of Cc∞ (Ω))
and that multiplication by φ ∈ Cc∞ (Ω) defines a linear continuous map
from C −∞ (Rn ) to Cc−∞ (Rn ) and gives a bounded operator on H m (Rn )
for all m.
Proposition 2.4. If P(D) is elliptic, u ∈ C^{−∞}(Ω) is a distribution on an open set and P(D)u ∈ H^s_loc(Ω) then u ∈ H^{s+m}_loc(Ω). Furthermore, if φ, ψ ∈ C_c^∞(Ω) with φ = 1 in a neighbourhood of supp(ψ) then
(2.12) ‖ψu‖_{s+m} ≤ C‖ψP(D)u‖_s + C'‖φu‖_M
for any M ∈ R, with C' depending only on ψ, φ, M and P(D) and C depending only on P(D) (so neither depends on u).
Although I will not prove it here, and it is not of any use below, it is
worth noting that (2.12) characterizes the ellipticity of a differential
operator with smooth coefficients.
Proof. Let me discuss this in two slightly different ways. The
first, older, approach is via direct regularity estimates. The second is
through the use of a parametrix; they are not really very different!
First the regularity estimates. An easy case of Proposition 2.4 arises
if u ∈ Cc−∞ (Ω) has compact support to start with. Then P (D)u also
has compact support so in this case
(2.13) u ∈ Cc−∞ (Rn ) and P (D)u ∈ H s (Rn ).
Then of course the Fourier transform works like a charm. Namely
P (D)u ∈ H s (Rn ) means that
(2.14) ⟨ζ⟩^s P(ζ)û(ζ) ∈ L^2(R^n) =⇒ ⟨ζ⟩^{s+m} F(ζ)û(ζ) ∈ L^2(R^n), F(ζ) = ⟨ζ⟩^{−m} P(ζ).
Ellipticity of P (ζ) implies that F (ζ) is bounded above and below on
|ζ| > 1/c and hence can be inverted there by a bounded function. It
follows that, given any M ∈ R, the norm of u in H^{s+m}(R^n) is bounded by
(2.15) ‖u‖_{s+m} ≤ C‖P(D)u‖_s + C'_M‖u‖_M, u ∈ C_c^{−∞}(Ω),
where the second term is used to bound the L2 norm of the Fourier
transform in |ζ| ≤ 1/c.
To do the general case of an open set we need to use cutoffs more
seriously. We want to show that ψu ∈ H s+m (Rn ) where ψ ∈ Cc∞ (Ω) is
some fixed but arbitrary element. We can always choose some function
φ ∈ Cc∞ (Ω) which is equal to one in a neighbourhood of the support
of ψ. Then φu ∈ Cc−∞ (Rn ) so, by the Schwartz structure theorem,
φu ∈ H m+t−1 (Rn ) for some (unknown) t ∈ R. We will show that ψu ∈
H m+T (Rn ) where T is the smaller of s and t. To see this, compute
(2.16) P(D)(ψu) = ψP(D)u + Σ_{|β|≤m−1, |γ|≥1} c_{β,γ} D^γψ D^β(φu).

With the final φu replaced by u this is just the effect of expanding out
the derivatives on the product. Namely, the ψP (D)u term is when no
derivative hits ψ and the other terms come from at least one derivative
hitting ψ. Since φ = 1 on the support of ψ we may then insert φ
without changing the result. Thus the first term on the right in (2.16)
is in H s (Rn ) and all terms in the sum are in H t (Rn ) (since at most
m − 1 derivatives are involved and φu ∈ H^{m+t−1}(R^n) by definition
of t). Applying the simple case discussed above it follows that ψu ∈
H m+r (Rn ) with r the minimum of s and t. This would result in the
estimate
(2.17) kψuks+m ≤ CkψP (D)uks + C 0 kφuks+m−1
provided we knew that φu ∈ H s+m−1 (since then t = s). Thus, initially
we only have this estimate with s replaced by T where T = min(s, t).
However, the only obstruction to getting the correct estimate is know-
ing that ψu ∈ H s+m−1 (Rn ).
To see this we can use a bootstrap argument. Observe that ψ can
be taken to be any smooth function with support in the interior of the
set where φ = 1. We can therefore insert a chain of functions, of any
finite (integer) length k ≥ s − t, between them, with each supported in
the region where the previous one is equal to 1 :
(2.18)
supp(ψ) ⊂ {φk = 1}◦ ⊂ supp(φk ) ⊂ · · · ⊂ supp(φ1 ) ⊂ {φ = 1}◦ ⊂ supp(φ)
where ψ and φ were our initial choices above. Then we can apply the
argument above with ψ = φ1 , then ψ = φ2 with φ replaced by φ1 and
so on. The initial regularity of φu ∈ H t+m−1 (Rn ) for some t therefore
allows us to deduce that
(2.19) φj u ∈ H m+Tj (Rn ), Tj = min(s, t + j − 1).
If k is large enough then min(s, t + k) = s so we conclude that ψu ∈
H s+m (Rn ) for any such ψ and that (2.17) holds. 
Although this is a perfectly adequate proof, I will now discuss the
second method to get elliptic regularity; the main difference is that
we think more in terms of operators and avoid the explicit iteration
technique, by doing it all at once – but at the expense of a little more
thought. Namely, going back to the easy case of a tempered distribution on R^n, give the map a name:-
(2.20) Q(D) : S'(R^n) ∋ f ↦ F^{−1}(q̂(ζ)fˆ(ζ)) ∈ S'(R^n), q̂(ζ) = (1 − χ(ζ))/P(ζ).
Here χ ∈ C_c^∞(R^n) is chosen to be equal to one on the set |ζ| ≤ 1/c + 1 corresponding to the ellipticity estimate (2.2). Thus q̂(ζ) ∈ C^∞(R^n) is
bounded and in fact
(2.21) |D_ζ^α q̂(ζ)| ≤ C_α(1 + |ζ|)^{−m−|α|} ∀ α.
This has a straightforward proof by induction. Namely, these estimates
are trivial on any compact set, where the function is smooth, so we need
only consider the region where χ(ζ) = 0. The inductive statement is
that for some polynomials H_α,
(2.22) D_ζ^α (1/P(ζ)) = H_α(ζ)/(P(ζ))^{|α|+1}, deg(H_α) ≤ (m − 1)|α|.
From this (2.21) follows. Prove (2.22) itself by differentiating one more
time and reorganizing the result.
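Indeed, if (2.22) holds for α then, by the quotient rule, for each k
D_{ζ_k} D_ζ^α (1/P(ζ)) = (D_{ζ_k}H_α · P(ζ) − (|α| + 1) H_α D_{ζ_k}P(ζ)) / (P(ζ))^{|α|+2}
and the numerator has degree at most (m − 1)|α| + m − 1 = (m − 1)(|α| + 1), which is just (2.22) for α + e_k.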
So, in view of the estimate with α = 0 in (2.21),
(2.23) Q(D) : H s (Rn ) −→ H s+m (Rn )
is continuous for each s and it is also an essential inverse of P (D) in
the sense that as operators on S 0 (Rn )
(2.24)
Q(D)P (D) = P (D)Q(D) = Id −E, E : H s (Rn ) −→ H ∞ (Rn ) ∀ s ∈ R.
Namely, E is Fourier multiplication by a smooth function of compact support (namely 1 − q̂(ζ)P(ζ)). So, in the global case of R^n, we
get elliptic regularity by applying Q(D) to both sides of the equation
P (D)u = f to find
(2.25) f ∈ H s (Rn ) =⇒ u = Eu + Qf ∈ H s+m (Rn ).
This also gives the estimate (2.15) where the second term comes from
the continuity of E.
The idea then, is to do the same thing for P (D) acting on functions
on the open set Ω; this argument will subsequently be generalized to
variable coefficient operators. The problem is that Q(D) does not act on functions (or distributions) defined just on Ω; they need to be defined on the whole of R^n and to be tempered before the Fourier transform can be applied and then multiplied by q̂(ζ) to define Q(D)f.
Now, Q(D) is a convolution operator. Namely, rewriting (2.20)
(2.26) Q(D)f = Qf = q ∗ f, q ∈ S'(R^n), q̂ = (1 − χ(ζ))/P(ζ).
This in fact is exactly what (2.20) means, since F(q ∗ f ) = q̂ fˆ. We can
write out convolution by a smooth function (which q is not, but let’s
not quibble) as an integral
(2.27) q ∗ f(z) = ∫_{R^n} q(z − z')f(z') dz'.
Restating the problem, (2.27) is an integral (really a distributional


pairing) over the whole of Rn not the subset Ω. In essence the cutoff
argument above inserts a cutoff φ in front of f (really of course in front
of u but not to worry). So, let’s think about inserting a cutoff into
(2.27), replacing it by
(2.28) Q_ψ f(z) = ∫_{R^n} q(z − z')χ(z, z')f(z') dz'.

Here we will take χ ∈ C ∞ (Ω2 ). To get the integrand to have compact


support in Ω for each z ∈ Ω we want to arrange that the projection
onto the second variable, z 0
(2.29) πL : Ω × Ω ⊃ supp(χ) −→ Ω
should be proper, meaning that the inverse image of a compact subset
K ⊂ Ω, namely (Ω × K) ∩ supp(χ), should be compact in Ω.
Let me strengthen the condition on the support of χ by making it
more two-sided and demand that χ ∈ C ∞ (Ω2 ) have proper support in
the following sense:
(2.30) If K b Ω then π_L((Ω × K) ∩ supp(χ)) ∪ π_R((K × Ω) ∩ supp(χ)) b Ω.
Here πL , πR : Ω2 −→ Ω are the two projections, onto left and right
factors. This condition means that if we multiply the integrand in
(2.28) on the left by φ(z), φ ∈ Cc∞ (Ω) then the integrand has compact
support in z 0 as well – and so should exist at least as a distributional
pairing. The second property we want of χ is that it should not change
the properties of q as a convolution operator too much. This reduces
to
(2.31) χ = 1 in a neighbourhood of Diag = {(z, z); z ∈ Ω} ⊂ Ω2
although we could get away with the weaker condition that
(2.32) χ ≡ 1 in Taylor series at Diag .
Before discussing why these conditions help us, let me just check
that it is possible to find such a χ. This follows easily from the existence
of a partition of unity in Ω as follows. It is possible to find functions
φ_i ∈ C_c^∞(Ω), i ∈ N, which have locally finite supports (i.e. any compact subset of Ω only meets the supports of a finite number of the φ_i) such that Σ_i φ_i(z) = 1 in Ω and also so there exist functions φ'_i ∈ C_c^∞(Ω),
also with locally finite supports in the same sense and such that φ0i = 1
on a neighborhood of the support of φi . The existence of such functions
is a standard result, or if you prefer, an exercise.
Accepting that such functions exist, consider
(2.33) χ(z, z') = Σ_i φ_i(z) φ'_i(z').
Any compact subset of Ω^2 is contained in a compact set of the form
K ×K and hence meets the supports of only a finite number of terms in
(2.33). Thus the sum is locally finite and hence χ ∈ C ∞ (Ω2 ). Moreover,
its support has the property (2.30). Clearly, by the assumption that
φ0i = 1 on the support of φi and that the latter form a partition of unity,
χ(z, z) = 1. In fact χ(z, z 0 ) = 1 in a neighborhood of the diagonal since
each z has a neighborhood N such that z 0 ∈ N, φi (z) 6= 0 implies
φ0i (z 0 ) = 1. Thus we have shown that such a cutoff function χ exists.
Now, why do we want (2.31)? This arises because of the following
‘pseudolocal’ property of the kernel q.
Lemma 2.5. Any distribution q defined as the inverse Fourier trans-
form of a function satisfying (2.21) has the property
(2.34) singsupp(q) ⊂ {0}
Proof. This follows directly from (2.21) and the properties of the
Fourier transform. Indeed these estimates show that
(2.35) z α q(z) ∈ C N (Rn ) if |α| > n + N
since this is enough to show that the Fourier transform, (i∂ζ )α q̂, is L1 .
At every point of Rn , other than 0, one of the zj is non-zero and so,
taking z α = zjk , (2.35) shows that q(z) is in C N in Rn \ {0} for all N,
i.e. (2.34) holds. 
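As a concrete illustration, for P(D) = 1 + Σ_j D_j^2 = 1 − Δ on R^3 the symbol 1 + |ζ|^2 is non-vanishing, and q then differs by a smooth function (the inverse Fourier transform of the compactly supported function χ(ζ)/(1 + |ζ|^2)) from the classical kernel e^{−|z|}/(4π|z|), whose only singularity is indeed at z = 0.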
Thus the distribution q(z − z 0 ) is only singular at the diagonal. It
follows that different choices of χ with the properties listed above lead
to kernels in (2.28) which differ by smooth functions in Ω2 with proper
supports.
Lemma 2.6. A properly supported smoothing operator, which is by definition given by an integral operator
(2.36) Ef(z) = ∫ E(z, z')f(z') dz'
where E ∈ C^∞(Ω^2) has proper support (so both maps
(2.37) πL , πR : supp(E) −→ Ω
are proper), defines continuous operators
(2.38) E : C −∞ (Ω) −→ C ∞ (Ω), Cc−∞ (Ω) −→ Cc∞ (Ω)
and has an adjoint of the same type.
See the discussion in Chapter 3.
Proposition 2.7. If P(D) is an elliptic operator with constant coefficients then the kernel in (2.28) defines an operator Q_Ω : C^{−∞}(Ω) −→ C^{−∞}(Ω) which maps H^s_loc(Ω) to H^{s+m}_loc(Ω) for each s ∈ R and gives a 2-sided parametrix for P(D) in Ω:
(2.39) P (D)QΩ = Id −R, QΩ P (D) = Id −R0
where R and R0 are properly supported smoothing operators.
Proof. We have already seen that changing χ in (2.28) changes
QΩ by a smoothing operator; such a change will just change R and R0
in (2.39) to different properly supported smoothing operators. So, we
can use the explicit choice for χ made in (2.33) in terms of a partition
of unity. Thus, multiplying on the left by some µ ∈ Cc∞ (Ω) the sum
becomes finite and
(2.40) µQ_Ω f = Σ_j µφ_j q ∗ (φ'_j f).
It follows that Q_Ω acts on C^{−∞}(Ω) and, from the properties of q, it maps H^s_loc(Ω) to H^{s+m}_loc(Ω) for any s. To check (2.39) we may apply P(D)
to (2.40) and consider a region where µ = 1. Since P (D)q = δ0 − R̃
where R̃ ∈ S(Rn ), P (D)QΩ f = Id −R where additional ‘error terms’
arise from any differentiation of φj . All such terms have smooth kernels
(since φ0j = 1 on the support of φj and q(z − z 0 ) is smooth outside the
diagonal) and are properly supported. The second identity in (2.39)
comes from the same computation for the adjoints of P (D) and QΩ . 

3. Interior elliptic estimates


Next we proceed to prove the same type of regularity and estimates,
(2.17), for elliptic differential operators with variable coefficients. Thus
consider
(3.1) P(z, D) = Σ_{|α|≤m} p_α(z) D^α, p_α ∈ C^∞(Ω).

We now assume ellipticity, of fixed order m, for the polynomial P (z, ζ)


for each z ∈ Ω. This is the same thing as ellipticity for the principal
part, i.e. the condition for each compact subset of Ω
(3.2) |Σ_{|α|=m} p_α(z)ζ^α| ≥ C(K)|ζ|^m, z ∈ K b Ω, C(K) > 0.
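For example, if P(z, D) = Σ_{j,k} a_{jk}(z) D_jD_k + lower order terms with (a_{jk}(z)) a real symmetric matrix which is positive definite at each point, then Σ_{|α|=2} p_α(z)ζ^α = Σ_{j,k} a_{jk}(z)ζ_jζ_k ≥ λ(z)|ζ|^2, λ(z) being the smallest eigenvalue of (a_{jk}(z)); since λ is continuous and positive, (3.2) holds with C(K) = min_K λ.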

Since the coefficients are smooth and C^∞(Ω) is a multiplier on H^s_loc(Ω), such a differential operator (elliptic or not) gives continuous linear maps
(3.3) P(z, D) : H^{s+m}_loc(Ω) −→ H^s_loc(Ω), ∀ s ∈ R, P(z, D) : C^∞(Ω) −→ C^∞(Ω).
Now, we arrived at the estimate (2.12) in the constant coefficient
case by iteration from the case M = s + m − 1 (by nesting cutoff
functions). Pick a point z̄ ∈ Ω. In a small ball around z̄ the coefficients
are ‘almost constant’. In fact, by Taylor’s theorem,
(3.4) P(z, ζ) = P(z̄, ζ) + Q(z, ζ), Q(z, ζ) = Σ_j (z − z̄)_j P_j(z, z̄, ζ)

where the P_j are also polynomials of degree m in ζ and smooth in z in the ball (and in z̄). We can apply the estimate (2.12) for P(z̄, D) and s = 0 to find
(3.5) ‖ψu‖_m ≤ C‖ψ(P(z, D) − Q(z, D))u‖_0 + C'‖φu‖_{m−1}.
Because the coefficients are small
(3.6) ‖ψQ(z, D)u‖_0 ≤ Σ_{j,|α|≤m} ‖(z − z̄)_j r_{j,α} D^α ψu‖_0 + C'‖φu‖_{m−1}
≤ δC‖ψu‖_m + C'‖φu‖_{m−1}.
What we would like to say next is that we can choose δ so small that δC < 1/2 and so, inserting (3.6) into (3.5), we would get
(3.7) ‖ψu‖_m ≤ C‖ψP(z, D)u‖_0 + C‖ψQ(z, D)u‖_0 + C'‖φu‖_{m−1}
≤ C‖ψP(z, D)u‖_0 + (1/2)‖ψu‖_m + C'‖φu‖_{m−1}
=⇒ (1/2)‖ψu‖_m ≤ C‖ψP(z, D)u‖_0 + C'‖φu‖_{m−1}.
However, there is a problem here. Namely this is an a priori estimate
– to move the norm term from right to left we need to know that it
is finite. Really, that is what we are trying to prove! So more work
is required. Nevertheless we will eventually get essentially the same
estimate as in the constant coefficient case.
Theorem 3.1. If P(z, D) is an elliptic differential operator of order m with smooth coefficients in Ω ⊂ R^n and u ∈ C^{−∞}(Ω) is such that P(z, D)u ∈ H^s_loc(Ω) for some s ∈ R then u ∈ H^{s+m}_loc(Ω) and for any φ, ψ ∈ C_c^∞(Ω) with φ = 1 in a neighbourhood of supp(ψ) and M ∈ R, there exist constants C (depending only on P and ψ) and C' (independent of u) such that
(3.8) ‖ψu‖_{m+s} ≤ C‖ψP(z, D)u‖_s + C'‖φu‖_M.
There are three main things to do. First we need to get the a priori
estimate first for general s, rather than s = 0, and then for general ψ
(since up to this point it is only for ψ with sufficiently small support).
One problem here is that in the estimates in (3.6) the L2 norm of a
product is estimated by the L∞ norm of one factor and the L2 norm
of the other. For general Sobolev norms such an estimate does not
hold, but something similar does; see Lemma 15.2. The proof of this
theorem occupies the rest of this Chapter.
Proposition 3.2. Under the hypotheses of Theorem 3.1 if in ad-
dition u ∈ C ∞ (Ω) then (3.8) follows.
Proof of Proposition 3.2. First we can generalize (3.5), now
using Lemma 15.2. Thus, if ψ has support near the point z̄
(3.9) kψuks+m ≤ CkψP (z̄, D)uks + kφQ(z, D)ψuks + C 0 kφuks+m−1
≤ CkψP (z̄, D)uks + δCkψuks+m + C 0 kφuks+m−1 .
This gives the extension of (3.7) to general s (where we are assuming
that u is indeed smooth):
(3.10) kψuks+m ≤ Cs kψP (z, D)uks + C 0 kφuks+m−1 .
Now, given a general element ψ ∈ Cc∞ (Ω) and φ ∈ Cc∞ (Ω) with φ = 1
in a neighbourhood of supp(ψ) we may choose a partition of unity ψj
with respect to supp(ψ) for each element of which (3.10) holds for some
φj ∈ Cc∞ (Ω) where in addition φ = 1 in a neighbourhood of supp(φj ).
Then, with various constants,
(3.11) ‖ψu‖_{s+m} ≤ Σ_j ‖ψ_j u‖_{s+m} ≤ C_s Σ_j ‖ψ_j φP(z, D)u‖_s + C' Σ_j ‖φ_j φu‖_{s+m−1}
≤ C_s(K)‖φP(z, D)u‖_s + C''‖φu‖_{s+m−1},
where K is the support of ψ and Lemma 15.2 has been used again.
This removes the restriction on supports.
Now, to get the full (a priori) estimate (3.8), where the error term
on the right has been replaced by one with arbitrarily negative Sobolev
order, it is only necessary to iterate (3.11) on a nested sequence of cutoff
functions as we did earlier in the constant coefficient case.
This completes the proof of Proposition 3.2. 
So, this proves a priori estimates for solutions of the elliptic op-
erator in terms of Sobolev norms. To use these we need to show the
regularity of solutions and I will do this by constructing parametrices
in a manner very similar to the constant coefficient case.
Theorem 3.3. If P (z, D) is an elliptic differential operator of order
m with smooth coefficients in Ω ⊂ Rn then there is a continuous linear
operator
(3.12) Q : C −∞ (Ω) −→ C −∞ (Ω)
such that
(3.13) P (z, D)Q = Id −RR , QP (z, D) = Id −RL
where RR , RL are properly-supported smoothing operators.
That is, both RR and RL have kernels in C ∞ (Ω2 ) with proper sup-
ports. We will in fact conclude that
(3.14) Q : H^s_loc(Ω) −→ H^{s+m}_loc(Ω), ∀ s ∈ R
using the a priori estimates.
To construct at least a first approximation to Q essentially the same
formula as in the constant coefficient case suffices. Thus consider
(3.15) Q_0 f(z) = ∫ q(z, z − z')χ(z, z')f(z') dz'.

Here q is defined as last time, except it now depends on both vari-
ables, rather than just the difference, and is defined by inverse Fourier
transform
(3.16) q_0(z, Z) = F^{−1}_{ζ→Z} q̂_0(z, ζ), q̂_0 = (1 − χ(z, ζ))/P(z, ζ)
where χ ∈ C^∞(Ω × R^n) is chosen to have compact support in the second
variable, so supp(χ) ∩ (K × Rn ) is compact for each K b Ω, and to
be equal to 1 on such a large set that P (z, ζ) 6= 0 on the support of
1 − χ(z, ζ). Thus the right side makes sense and the inverse Fourier
transform exists.
Next we extend the esimates, (2.21), on the ζ derivatives of such
a quotient, using the ellipticity of P. The same argument works for
derivatives with respect to z, except no decay occurs. That is, for any
compact set K b Ω
(3.17) |Dzβ Dζα q̂0 (z, ζ)| ≤ Cα,β (K)(1 + |ζ|)−m−|α| , z ∈ K.
Now the argument, in Lemma 2.5, concerning the singularities of q0
works with z derivatives as well. It shows that
(3.18) (zj − zj0 )N +k q0 (z, z − z 0 ) ∈ C N (Ω × Rn ) if k + m > n/2.
Thus,
(3.19) singsupp q0 ⊂ Diag = {(z, z) ∈ Ω2 }.
The ‘pseudolocality’ statement (3.19), shows that as in the earlier


case, changing the cutoff function in (3.15) changes Q0 by a properly
supported smoothing operator and this will not affect the validity of
(3.13) one way or the other! For the moment not worrying too much
about how to make sense of (3.15) consider (formally)
(3.20) Z
P (z, D)Q0 f = (P (z, DZ )q0 (z, Z))Z=z−z0 χ(z, z 0 )f (z 0 )dz 0 +E1 f +R1 f.

To apply P (z, D) we just need to apply Dα to Q0 f, multiply the result


by pα (z) and add. Applying Dzα (formally) under the integral sign in
(3.15) each derivative may fall on either the ‘parameter’ z in q0 (z, z −
z 0 ), the variable Z = z − z 0 or else on the cutoff χ(z, z 0 ). Now, if χ
is ever differentiated the result vanishes near the diagonal and as a
consequence of (3.19) this gives a smooth kernel. So any such term is
included in R1 in (3.20) which is a smoothing operator and we only
have to consider derivatives falling on the first or second variables of
q0 . The first term in (3.20) corresponds to all derivatives falling on the
second variable. Thus
(3.21) E_1 f = ∫ e_1(z, z − z')χ(z, z')f(z') dz'
is the sum of the terms which arise from at least one derivative in the
‘parameter variable’ z in q0 (which is to say ultimately the coefficients
of P (z, ζ)). We need to examine this in detail. First however notice
that we may rewrite (3.20) as

(3.22) P(z, D)Q_0 = Id + E_1 + R'_1
where E_1 is unchanged and R'_1 is a new properly supported smoothing operator which comes from the fact that

(3.23) P(z, ζ)q̂_0(z, ζ) = 1 − ρ(z, ζ) =⇒ P(z, D_Z)q_0(z, Z) = δ(Z) + r(z, Z), r ∈ C^∞(Ω × R^n)
from the choice of q_0. This part is just as in the constant coefficient case.
So, it is the new error term, E1 which we must examine more care-
fully. This arises, as already noted, directly from the fact that the
coefficients of P (z, D) are not assumed to be constant, hence q0 (z, Z)
depends parameterically on z and this is differentiated in (3.20). So,
using Leibniz’ formula to get an explicit representation of e1 in (3.21)
we see that
 
(3.24) e_1(z, Z) = Σ_{|α|≤m, |γ|<m} \binom{α}{γ} p_α(z) D_z^{α−γ} D_Z^γ q_0(z, Z).

The precise form of this expansion is not really significant. What is


important is that at most m − 1 derivatives are acting on the second
variable of q0 (z, Z) since all the terms where all m act here have already
been treated. Taking the Fourier transform in the second variable, as
before, we find that
 
(3.25) ê_1(z, ζ) = Σ_{|α|≤m, |γ|<m} \binom{α}{γ} p_α(z) D_z^{α−γ} ζ^γ q̂_0(z, ζ) ∈ C^∞(Ω × R^n).

Thus ê_1 is the sum of products of z derivatives of q̂_0(z, ζ) and polynomials in ζ of degree at most m − 1 with smooth dependence on z. We
may therefore transfer the estimates (3.17) to e1 and conclude that
(3.26) |Dzβ Dζα ê1 (z, ζ)| ≤ Cα,β (K)(1 + |ζ|)−1−|α| .
Let us denote by S m (Ω × Rn ) ⊂ C ∞ (Ω × Rn ) the linear space of
functions satisfying (3.17) when −m is replaced by m, i.e.
(3.27) |Dzβ Dζα a(z, ζ)| ≤ Cα,β (K)(1 + |ζ|)m−|α| ⇐⇒ a ∈ S m (Ω × Rn ).
This allows (3.26) to be written succinctly as ê1 ∈ S −1 (Ω × Rn ).
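For example, ⟨ζ⟩^m = (1 + |ζ|^2)^{m/2} belongs to S^m(Ω × R^n) for any real m, any polynomial Σ_{|α|≤m} p_α(z)ζ^α with coefficients in C^∞(Ω) belongs to S^m(Ω × R^n), and (3.17) says precisely that q̂_0 ∈ S^{−m}(Ω × R^n).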
To summarize so far, we have chosen q̂0 ∈ S −m (Ω × Rn ) such that
with Q0 given by (3.15),
(3.28) P (z, D)Q0 = Id +E1 + R10
where E1 is given by the same formula (3.15), as (3.21), where now
ê1 ∈ S −1 (Ω × Rn ). In fact we can easily generalize this discussion, to
do so let me use the notation
(3.29) Op(a)f(z) = ∫ A(z, z − z')χ(z, z')f(z') dz', if Â(z, ζ) = a(z, ζ) ∈ S^m(Ω × R^n).
Proposition 3.4. If a ∈ S^{m'}(Ω × R^n) then
(3.30) P(z, D) Op(a) = Op(pa) + Op(b) + R
where R is a (properly supported) smoothing operator and b ∈ S^{m'+m−1}(Ω × R^n).
Proof. Follow through the discussion above with q̂0 replaced by
a. 
So, we wish to get rid of the error term E1 in (3.21) to as great an
extent as possible. To do so we add to Q0 a second term Q1 = Op(a1 )
where
(3.31) a_1 = −((1 − χ)/P(z, ζ)) ê_1(z, ζ) ∈ S^{−m−1}(Ω × R^n).
Indeed
(3.32) S^{m'}(Ω × R^n) · S^{m''}(Ω × R^n) ⊂ S^{m'+m''}(Ω × R^n)
(pretty much as though we are multiplying polynomials) as follows from
Leibniz’ formula and the defining estimates (3.27). With this choice of
Q1 the identity (3.30) becomes
(3.33) P (z, D)Q1 = −E1 + Op(b2 ) + R2 , b2 ∈ S −2 (Ω × Rn )
since p(z, ζ)a1 = −ê1 + r0 (z, ζ) where supp(r0 ) is compact in the second
variable and so contributes a smoothing operator and by definition
E1 = Op(ê1 ).
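The estimate behind (3.32) is just Leibniz' formula: if a ∈ S^{m'} and b ∈ S^{m''} then
D_z^β D_ζ^α(ab) = Σ_{β'≤β, α'≤α} \binom{β}{β'} \binom{α}{α'} (D_z^{β'}D_ζ^{α'}a)(D_z^{β−β'}D_ζ^{α−α'}b)
and on K × R^n each term is bounded by C(K)(1 + |ζ|)^{m'−|α'|}(1 + |ζ|)^{m''−|α|+|α'|} = C(K)(1 + |ζ|)^{m'+m''−|α|}.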
Now we can proceed by induction, let me formalize it a little.
Lemma 3.5. If P (z, D) is elliptic with smooth coefficients on Ω then
we may choose a sequence of elements ai ∈ S −m−i (Ω×Rn ) i = 0, 1, . . . ,
such that if Qi = Op(ai ) then
(3.34) P (z, D)(Q0 +Q1 +· · ·+Qj ) = Id +Ej+1 +Rj0 , Ej+1 = Op(bj+1 )
with Rj a smoothing operator and bj ∈ S −j (Ω × Rn ), j = 1, 2, . . . .
Proof. We have already taken the first two steps! Namely with
a0 = q̂0 , given by (3.16), (3.28) is just (3.34) for j = 0. Then, with
a1 given by (3.31), adding (3.33) to (3.31) gives (3.34) for j = 1.
Proceeding by induction we may assume that we have obtained (3.34)
for some j. Then we simply set
a_{j+1} = −((1 − χ(z, ζ))/P(z, ζ)) b_{j+1}(z, ζ) ∈ S^{−j−1−m}(Ω × R^n)
where we have used (3.32). Setting Qj+1 = Op(aj+1 ) the identity (3.30)
becomes
(3.35) P(z, D)Q_{j+1} = −E_{j+1} + E_{j+2} + R''_{j+1}, E_{j+2} = Op(b_{j+2})
for some bj+2 ∈ S −j−2 (Ω × Rn ). Adding (3.35) to (3.34) gives the next
step in the inductive argument. 
Consider the error term in (3.34) for large j. From the estimates on
an element a ∈ S −j (Ω × Rn )
(3.36) |Dzβ Dζα a(z, ζ)| ≤ Cα,β (K)(1 + |ζ|)−j−|α|
it follows that if j > n + k then ζ γ a is integrable in ζ with all its z


derivatives for |ζ| ≤ k. Thus the inverse Fourier transform has continu-
ous derivatives in all variables up to order k. Applied to the error term
in (3.34) we conclude that
(3.37) Ej = Op(bj ) has kernel in C j−n−1 (Ω2 ) for large j.
Thus as j increases the error terms in (3.34) have increasingly smooth
kernels.
Now, standard properties of operators and kernels, see Lemma 17.1, show that the operator
(3.38) Q_{(k)} = Σ_{j=0}^k Q_j

comes increasingly close to satisfying the first identity in (3.13), except


that the error term is only finitely (but arbitrarily) smoothing. Since
this is enough for what we want here I will banish the actual solution
of (3.13) to the addenda to this Chapter.
Lemma 3.6. For k sufficiently large, the left parametrix Q(k) is a
continuous operator on C ∞ (Ω) and
(3.39) Q_{(k)} : H^s_loc(Ω) −→ H^{s+m}_loc(Ω) ∀ s ∈ R.
Proof. So far I have been rather cavalier in treating Op(a) for
a ∈ S m (Ω × Rn ) as an operator without showing that this is really
the case, however this is a rather easy exercise in distribution theory.
Namely, from the basic properties of the Fourier transform and Sobolev
spaces
(3.40) A(z, z − z') ∈ C^k(Ω; H^{−n−1+m−k}_loc(Ω)) ∀ k ∈ N.
It follows that Op(a) maps H^{n+1−m+k}_c(Ω) into C^k(Ω) and in fact into C_c^k(Ω)
by the properness of the support. In particular it does define an op-
erator on C ∞ (Ω) as we have been pretending and the steps above are
easily justified.
A similar argument, which I will not give here since it is better to
do it by duality (see the addenda), shows that for any fixed s
(3.41) A : H^s_loc(Ω) −→ H^S_loc(Ω)
for some S. Of course we want something a bit more precise than this.
If f ∈ H^s_loc(Ω) then it may be approximated by a sequence f_j ∈ C^∞(Ω) in the topology of H^s_loc(Ω), so µf_j → µf in H^s(R^n) for each µ ∈ C_c^∞(Ω). Set u_j = Q_{(k)} f_j ∈ C^∞(Ω) as we have just seen, where k is fixed
but will be chosen to be large. Then from our identity P (z, D)Q(k) =
Id +R(k) it follows that
(3.42) P(z, D)u_j = f_j + g_j, g_j = R_{(k)} f_j → R_{(k)} f in H^N_loc(Ω)
for k large enough depending on s and N. Thus, for k large, the right side converges in H^s_loc(Ω) and by (3.41), u_j → u in some H^S_loc(Ω). But

now we can use the a priori estimates (3.8) on u_j ∈ C^∞(Ω) to conclude that
(3.43) ‖ψu_j‖_{s+m} ≤ C‖ψ(f_j + g_j)‖_s + C''‖φu_j‖_S
to see that ψu_j is bounded in H^{s+m}(R^n) for any ψ ∈ C_c^∞(Ω). In fact, applied to the difference u_j − u_l it shows the sequence to be Cauchy. Hence in fact u ∈ H^{s+m}_loc(Ω) and the estimates (3.8) hold for this u. That is, Q_{(k)} has the mapping property (3.39) for large k. □
In fact the continuity property (3.39) holds for all Op(a) where a ∈
S m (Ω × Rn ), not just those which are parametrices for elliptic differ-
ential operators. I will comment on this below – it is one of the basic
results on pseudodifferential operators.
There is also the question of the second identity in (3.13), at least
in the same finite-order-error sense. To solve this we may use the
transpose identity. Thus taking formal transposes this second identity
should be equivalent to
(3.44) P t Qt = Id −RLt .
The transpose of P(z, D) is the differential operator
(3.45) P^t(z, D) = Σ_{|α|≤m} (−D_z)^α p_α(z).
This is again of order m and after a lot of differentiation to move


the coefficients back to the left we see that its leading part is just
Pm (z, −D) where Pm (z, D) is the leading part of P (z, D), so it is elliptic
in Ω exactly when P is elliptic. To construct a solution to (3.44), up
to finite order errors, we need just apply Lemma 3.5 to the transpose
differential operator. This gives Q'_{(N)} = Op(a'_{(N)}) with the property
(3.46) P^t(z, D)Q'_{(N)} = Id − R'_{(N)}
where the kernel of R'_{(N)} is in C^N(Ω^2). Since this property is preserved
under transpose we have indeed solved the second identity in (3.13) up
to an arbitrarily smooth error.
Of course the claim in Theorem 3.3 is that the one operator satisfies
both identities, whereas we have constructed two operators which each
satisfy one of them, up to finite smoothing error terms
(3.47) P (z, D)QR = Id −RR , QL P (z, D) = Id −RL .
However these operators must themselves be equal up to finite smooth-
ing error terms since composing the first identity on the left with QL
and the second on the right with QR shows that
(3.48) QL − QL RR = QL P (z, D)QR = QR − RL QR
where the associativity of operator composition has been used. We
have already checked the mapping property(3.39) for both QL and QR ,
assuming the error terms are sufficiently smoothing. It follows that the
−p p
composite error terms here map Hloc (Ω) into Hloc (Ω) where p → ∞
with k with the same also true of the transposes of these operators.
0
Such an operator has kernel in C p (Ω2 ) where again p0 → ∞ with k.
Thus the difference of QL and QR itself becomes arbitrarily smoothing
as k → ∞.
Finally then we have proved most of Theorem 3.3 except with arbi-
trarily finitely smoothing errors. In fact we have not quite proved the regularity statement that P(z, D)u ∈ H^s_loc(Ω) implies u ∈ H^{s+m}_loc(Ω) although we came very close in the proof of Lemma 3.6. Now that we know that Q_{(k)} is also a right parametrix, i.e. satisfies the second identity in (3.13) up to arbitrarily smoothing errors, this too follows. Namely from the discussion above Q_{(k)} is an operator on C^{−∞}(Ω) and
Q_{(k)} P(z, D)u = u + v_k, ψv_k ∈ H^{s+m}(Ω)
for large enough k so (3.39) implies u ∈ H^{s+m}_loc(Ω) and the a priori estimates magically become real estimates on all solutions.
Addenda to Chapter 4
Asymptotic completeness to show that we really can get smoothing
errors.
Some discussion of pseudodifferential operators – adjoints, composition
and boundedness, but only to make clear what is going on.
Some more reassurance as regards operators, kernels and mapping
properties – since I have treated these fairly shabbily!
CHAPTER 5

Coordinate invariance and manifolds

For the geometric applications we wish to make later (and of course
many others) it is important to understand how the objects discussed
above behave under coordinate transformations, so that they can be
transferred to manifolds (and vector bundles). The basic principle is
that the results above are independent of the choice of coordinates,
which is to say diffeomorphisms of open sets.

1. Local diffeomorphisms
Let Ωi ⊂ Rn be open and f : Ω1 −→ Ω2 be a diffeomorphism, so it
is a C ∞ map, which is equivalent to the condition
(1.1) f ∗ u ∈ C ∞ (Ω1 ) ∀ u ∈ C ∞ (Ω2 ), f ∗ u = u ◦ f, f ∗ u(z) = u(f (z)),
and has a C^∞ inverse f^{−1} : Ω_2 −→ Ω_1. Such a map induces isomorphisms f^* : C_c^∞(Ω_2) −→ C_c^∞(Ω_1) and f^* : C^∞(Ω_2) −→ C^∞(Ω_1) with
inverse (f −1 )∗ = (f ∗ )−1 .
Recall also that, as a homeomorphism, f ∗ identifies the (Borel)
measurable functions on Ω2 with those on Ω1 . Since it is continuously
differentiable it also identifies L1loc (Ω2 ) with L1loc (Ω1 ) and
(1.2) u ∈ L^1_c(Ω_2) =⇒ ∫_{Ω_1} f^*u(z)|J_f(z)| dz = ∫_{Ω_2} u(z') dz', J_f(z) = det (∂f_i(z)/∂z_j).
The absolute value appears because the definition of the Lebesgue in-
tegral is through the Lebesgue measure.
It follows that f ∗ : L2loc (Ω2 ) −→ L2loc (Ω1 ) is also an isomorphism. If
u ∈ L2 (Ω2 ) has support in some compact subset K b Ω2 then f ∗ u has
support in the compact subset f −1 (K) b Ω1 and
(1.3) ‖f^*u‖_{L^2}^2 = ∫_{Ω_1} |f^*u|^2 dz ≤ C(K) ∫_{Ω_1} |f^*u|^2 |J_f(z)| dz = C(K)‖u‖_{L^2}^2.

Distributions are defined by duality, as the continuous linear functionals:-

(1.4) u ∈ C −∞ (Ω) =⇒ u : Cc∞ (Ω) −→ C.


137
We always embed the smooth functions in the distributions using inte-


gration. This presents a small problem here, namely it is not consistent
under pull-back. Indeed if u ∈ C ∞ (Ω2 ) and µ ∈ Cc∞ (Ω1 ) then
(1.5) ∫_{Ω_1} f^*u(z)µ(z)|J_f(z)| dz = ∫_{Ω_2} u(z')(f^{−1})^*µ(z') dz' or
∫_{Ω_1} f^*u(z)µ(z) dz = ∫_{Ω_2} u(z')(f^{−1})^*µ(z')|J_{f^{−1}}(z')| dz',
since f ∗ Jf −1 = (Jf )−1 .
So, if we want distributions to be ‘generalized functions’, so that the
identification of u ∈ C ∞ (Ω2 ) as an element of C −∞ (Ω2 ) is consistent
with the identification of f ∗ u ∈ C ∞ (Ω1 ) as an element of C −∞ (Ω1 ) we
need to use (1.5). Thus we define
(1.6) f ∗ : C −∞ (Ω2 ) −→ C −∞ (Ω1 ) by f ∗ u(µ) = u((f −1 )∗ µ|Jf −1 |).
There are better ways to think about this, namely in terms of densities,
but let me not stop to do this at the moment. Of course one should
check that f ∗ is a map as indicated and that it behaves correctly under
composition, so (f ◦ g)∗ = g ∗ ◦ f ∗ .
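For instance, for the Dirac delta at a point f(z̄) ∈ Ω_2, (1.6) gives f^*δ_{f(z̄)}(µ) = ((f^{−1})^*µ |J_{f^{−1}}|)(f(z̄)) = |J_f(z̄)|^{−1}µ(z̄), that is f^*δ_{f(z̄)} = |J_f(z̄)|^{−1} δ_{z̄}, consistent with the behaviour of integrals under the change of variables (1.2).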
As already remarked, smooth functions pull back under a diffeo-
morphism (or any smooth map) to be smooth. Dually, vector fields
push-forward. A vector field, in local coordinates, is just a first order
differential operator without constant term
(1.7) V = Σ_{j=1}^n v_j(z) D_{z_j}, D_{z_j} = D_j = (1/i) ∂/∂z_j.

For a diffeomorphism, the push-forward may be defined by


(1.8) f ∗ (f∗ (V )u) = V f ∗ u ∀ u ∈ C ∞ (Ω2 )
where we use the fact that f ∗ in (1.1) is an isomorphism of C ∞ (Ω2 )
onto C ∞ (Ω1 ). The chain rule is the computation of f∗ V, namely
(1.9) f_*V(f(z)) = Σ_{j,k=1}^n v_j(z) (∂f_k(z)/∂z_j) D_k.

As always this operation is natural under composition of diffeomor-


phism, and in particular (f −1 )∗ (f∗ )V = V. Thus, under a diffeomor-
phism, vector fields push forward to vector fields and so, more generally,
differential operators push-forward to differential operators.
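As a simple example, take n = 1, Ω_1 = R, Ω_2 = (0, ∞) and f(z) = e^z. For V = D_z, (1.8) requires (f_*V)u(e^z) = D_z(u(e^z)) = e^z(D_{z'}u)(e^z), so f_*D_z = z'D_{z'}; a constant coefficient vector field need not push forward to one with constant coefficients, but it does push forward to a smooth vector field.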
Now, with these definitions we have
Theorem 1.1. For every s ∈ R, any diffeomorphism f : Ω1 −→ Ω2


induces an isomorphism
(1.10) f^* : H^s_loc(Ω_2) −→ H^s_loc(Ω_1).
Proof. We know this already for s = 0. To prove it for 0 < s < 1
we use the norm on H s (Rn ) equivalent to the standard Fourier trans-
form norm:-
(1.11) ‖u‖_s^2 = ‖u‖_{L^2}^2 + ∫_{R^{2n}} |u(z) − u(ζ)|^2 / |z − ζ|^{2s+n} dz dζ.
See Sect 7.9 of [4]. Then if u ∈ Hcs (Ω2 ) has support in K b Ω2 with
0 < s < 1, certainly u ∈ L2 so f ∗ u ∈ L2 and we can bound the second
part of the norm in (1.11) on f ∗ u :

(1.12) ∫_{R^{2n}} |u(f(z)) − u(f(ζ))|^2 / |z − ζ|^{2s+n} dz dζ
= ∫_{R^{2n}} (|u(z') − u(ζ')|^2 / |g(z') − g(ζ')|^{2s+n}) |J_g(z')||J_g(ζ')| dz' dζ'
≤ C ∫_{R^{2n}} |u(z) − u(ζ)|^2 / |z − ζ|^{2s+n} dz dζ
since C|g(z') − g(ζ')| ≥ |z' − ζ'| where g = f^{−1}.
For the spaces of order m + s, 0 ≤ s < 1 and m ∈ N we know that
(1.13) u ∈ H^{m+s}_loc(Ω_2) ⇐⇒ Pu ∈ H^s_loc(Ω_2) ∀ P ∈ Diff^m(Ω_2)
where Diff m (Ω) is the space of differential operators of order at most
m with smooth coefficients in Ω. As noted above, differential operators
map to differential operators under a diffeomorphism, so from (1.13) it
m+s m+s
follows that Hloc (Ω2 ) is mapped into Hloc (Ω1 ) by f ∗ .
For negative orders we may proceed in the same way. That is if
m ∈ N and 0 ≤ s < 1 then
(1.14) u ∈ H^{s−m}_loc(Ω_2) ⇐⇒ u = Σ_J P_J u_J, P_J ∈ Diff^m(Ω_2), u_J ∈ H^s(Ω_2)

where the sum over J is finite. A similar argument then applies to


prove (1.10) for all real orders. 
Consider the issue of differential operators more carefully. If P :
C ∞ (Ω1 ) −→ C ∞ (Ω1 ) is a differential operator of order m with smooth
coefficients then, as already noted, so is
(1.15) Pf : C ∞ (Ω2 ) −→ C ∞ (Ω2 ), Pf v = (f −1 )∗ (P f ∗ v).
However, the formula for the coefficients, i.e. the explicit formula for
Pf , is rather complicated:-
(1.16) P = Σ_{|α|≤m} p_α(z) D_z^α =⇒ P_f = Σ_{|α|≤m} p_α(g(z'))(J_f(z') D_{z'})^α

since we have to do some serious differentiation to move all the Jacobian


terms to the left.
Even though the formula (1.16) is complicated, the leading part of
it is rather simple. Observe that we can compute the leading part of
a differential operator by ‘oscillatory testing’. Thus, on an open set Ω
consider
(1.17)
m
X
itψ
P (z, D)(e u) = e itψ
tk Pk (z, D)u, u ∈ C ∞ (Ω), ψ ∈ C ∞ (Ω), t ∈ R.
k=0

Here the Pk (z, D) are differential operators of order m − k acting on


u (they involve derivatives of ψ of course). Note that the only way a
factor of t can occur is from a derivative acting on eitψ through
(1.18) D_{z_j} e^{itψ} = e^{itψ} t ∂ψ/∂z_j.
Thus, the coefficient of tm involves no differentiation of u at all and is
therefore multiplication by a smooth function which takes the simple
form
(1.19) σ_m(P)(ψ, z) = Σ_{|α|=m} p_α(z)(Dψ)^α ∈ C^∞(Ω).

In particular, the value of this function at any point z ∈ Ω is deter-


mined once we know dψ, the differential of ψ at that point. Using this
observation, we can easily compute the leading part of Pf given that
of P in (1.15). Namely if ψ ∈ C ∞ (Ω2 ) and (Pf )(z 0 ) is the leading part
of Pf for
0
(1.20) σm (Pf )(ψ 0 , z 0 )v = lim t−m e−itψ Pf (z 0 , Dz0 )(eitψ v)
t→∞
−m −itψ ∗ ∗ ψ0
= lim t e g (P (z, Dz )(eitf f ∗ v)
t→∞
−m −itf ∗ ψ 0 ∗ ∗ ψ0
= g ∗ ( lim t e g (P (z, Dz )(eitf f ∗ v) = g ∗ Pm (f ∗ ψ, z)f ∗ v.
t→∞

Thus
(1.21) σm (Pf )(ψ 0 , ζ 0 )) = g ∗ σm (P )(f ∗ ψ 0 , z).
This allows us to ‘geometrize’ the transformation law for the leading
part (called the principal symbol) of the differential operator P. To do
so we think of T^*Ω, for Ω an open subset of R^n, as the union of the T_z^*Ω, z ∈ Ω, where T_z^*Ω is the linear space

(1.22) Tz∗ Ω = C ∞ (Ω)/ ∼z , ψ ∼z ψ 0 ⇐⇒


ψ(Z) − ψ 0 (Z) − ψ(z) + ψ 0 (z) vanishes to second order at Z = z.

Essentially by definition of the derivative, for any ψ ∈ C ∞ (Ω),


(1.23) ψ ∼_z Σ_{j=1}^n (∂ψ/∂z_j)(z)(Z_j − z_j).

This shows that there is an isomorphism, given by the use of coordi-


nates

(1.24) T ∗ Ω ≡ Ω × Rn , [z, ψ] 7−→ (z, dψ(z)).

The point of the complicated-looking definition (1.22) is that it shows


easily (and I recommend you do it explicitly) that any smooth map
h : Ω1 −→ Ω2 induces a smooth map

(1.25) h∗ T ∗ Ω2 −→ T ∗ Ω1 , h([h(z), ψ]) = [z, h∗ ψ]

which for a diffeomorphism is an isomorphism.

Lemma 1.2. The transformation law (1.21) shows that for any el-
ement P ∈ Diff m (Ω) the principal symbol is well-defined as an element

(1.26) σ(P ) ∈ C ∞ (T ∗ Ω)

which furthermore transforms as a function under the pull-back map


(1.25) induced by any diffeomorphism of open sets.

Proof. The formula (1.19) is consistent with (1.23) and hence with
(1.21) in showing that σm (P ) is a well-defined function on T ∗ Ω. 
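For example, for a vector field V = Σ_j v_j(z)D_{z_j}, (1.18) gives V(e^{itψ}u) = e^{itψ}(t Σ_j v_j(z)(∂ψ/∂z_j)u + Vu); the coefficient of t depends only on dψ(z) and, as a function on T^*Ω under (1.24), is linear on each fibre, (z, ξ) ↦ Σ_j v_j(z)ξ_j.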

2. Manifolds
I will only give a rather cursory discussion of manifolds here. The
main cases we are interested in are practical ones, the spheres Sn and
the balls Bn . Still, it is obviously worth thinking about the general case,
since it is the standard setting for much of modern mathematics. There
are in fact several different, but equivalent, definitions of a manifold.
142 5. COORDINATE INVARIANCE AND MANIFOLDS

2.1. Coordinate covers. Take a Hausdorff topological (in fact


metrizable) space M. A coordinate patch on M is an open set and a
homeomorphism
F : M ⊃ Ω −→ Ω' ⊂ R^n
onto an open subset of R^n. An atlas on M is a covering by such coordinate patches (Ω_a, F_a),
M = ∪_{a∈A} Ω_a.

Since each F_a : Ω_a → Ω'_a is, by assumption, a homeomorphism, the


transition maps
F_{ab} : Ω'_{ab} → Ω'_{ba}, Ω'_{ab} = F_b(Ω_a ∩ Ω_b), (so Ω'_{ba} = F_a(Ω_a ∩ Ω_b)), F_{ab} = F_a ◦ F_b^{−1}
are also homeomorphisms of open subsets of R^n (in particular n is constant on components of M). The atlas is C^k, C^∞, real analytic,
etc.) if each Fab is C k , C ∞ or real analytic. A C ∞ (C k or whatever)
structure on M is usually taken to be a maximal C ∞ atlas (meaning any
coordinate patch compatible with all elements of the atlas is already
in the atlas).
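For example, S^n ⊂ R^{n+1} carries the atlas with the two patches Ω_± = S^n \ {(0, …, 0, ∓1)} and the stereographic homeomorphisms F_± : Ω_± −→ R^n; the transition map is y ↦ y/|y|^2 on R^n \ {0}, which is C^∞ (indeed real analytic), so this is a C^∞, even real analytic, atlas.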

2.2. Smooth functions. A second possible definition is to take


again a Hausdorff topological space and a subspace F ⊂ C(M ) of the
continuous real-valued function on M with the following two properties.
1) For each p ∈ M ∃f1 , . . . , fn ∈ F and an open set Ω 3 p such
that F = (f1 , . . . , fn ) : Ω → Rn is a homeomorphism onto an
open set, Ω0 ⊂ Rn and (F −1 )∗ g ∈ C ∞ (Ω0 ) ∀g ∈ F.
2) F is maximal with this property.

2.3. Embedding. Alternatively one can simply say that a (C ∞ )


manifold is a subset M ⊂ RN such that ∀p ∈ M ∃ an open set U 3 p,
U ⊂ RN , and h1 , . . . , hN −n ∈ C ∞ (U ) s.t.
M ∩ U = {q ∈ U ; hi (q) = 0, i = 1, . . . , N − n}
dhi (p) are linearly independent.
I leave it to you to show that these definitions are equivalent in
an appropriate sense. If we weaken the various notions of coordinates
in each case, for instance in the first case, by requiring that Ω' ⊂ R^{n−k} × [0, ∞)^k for some k, with a corresponding version of smoothness, we arrive at the notion of a manifold with corners.1
So I will assume that you are reasonably familiar with the notion of
a smooth (C ∞ ) manifold M , equipped with the space C ∞ (M ) — this
is just F in the second definition and in the first
C ∞ (M ) = {u : M → R; u ◦ F −1 ∈ C ∞ (Ω0 ) ∀ coordinate patches}.
Typically I will not distinguish between complex and real-valued func-
tions unless it seems necessary in this context.
Manifolds are always paracompact — so have countable covers by
compact sets — and admit partitions of unity.
Proposition 2.1. If M = ∪_{a∈A} U_a is a cover of a manifold by open
sets then there exist ρa ∈ C ∞ (M ) s.t. supp(ρa ) b Ua (i.e., ∃Ka b Ua
s.t. ρa = 0 on M \Ka ), these supports are locally finite, so if K b M
then
{a ∈ A; ρa (m) 6= 0 for some m ∈ K}
is finite, and finally
Σ_{a∈A} ρ_a(m) = 1, ∀ m ∈ M.

It can also be arranged that


(1) 0 ≤ ρa (m) ≤ 1 ∀ a, ∀ m ∈ M.
(2) ρa = µ2a , µa ∈ C ∞ (M ).
(3) ∃ ϕa ∈ C ∞ (M ), 0 ≤ ϕa ≤ 1, ϕ = 1 in a neighborhood of
supp(ρa ) and the sets supp(ϕa ) are locally finite.
Proof. Up to you. 
Using a partition of unity subordinate to a covering by coordinate
patches we may transfer definitions from Rn to M, provided they are
coordinate-invariant in the first place and preserved by multiplication
by smooth functions of compact support. For instance:
Definition 2.2. If u : M −→ C and s ≥ 0 then u ∈ H^s_loc(M) if for some partition of unity subordinate to a cover of M by coordinate patches
(2.1) (F_a^{−1})^*(ρ_a u) ∈ H^s(R^n) or (F_a^{−1})^*(ρ_a u) ∈ H^s_loc(Ω'_a).
1 I always demand in addition that the boundary faces of a manifold with corners be embedded but others differ on this. I call the more general object a tied manifold.
Note that there are some abuses of notation here. In the first part
of (2.1) we use the fact that (Fa−1 )∗ (ρa u), defined really on Ω0a (the
image of the coordinate patch Fa : Ωa → Ω0a ∈ Rn ), vanishes outside a
compact subset and so can be unambiguously extended as zero outside
Ω0a to give a function on Rn . The second form of (2.1) is better, but
there is an equivalence relation, of equality off sets of measure zero,
which is being ignored. The definition doesn’t work well for s < 0
because u might then not be representable by a function so we don’t
know what u0 is to start with.
The most systematic approach is to define distributions on M first,
so we know what we are dealing with. However, there is a problem
here too, because of the transformation law (1.5) that was forced on
us by the local identification C ∞ (Ω) ⊂ C −∞ (Ω). Namely, we really
need densities on M before we can define distributions. I will discuss
densities properly later; for the moment let me use a little ruse, sticking
for simplicity to the compact case.
Definition 2.3. If M is a compact C ∞ manifold then C 0 (M ) is a
Banach space with the supremum norm and a continuous linear func-
tional
(2.2) µ : C 0 (M ) −→ R
is said to be a positive smooth measure if for every coordinate patch on
M, F : Ω −→ Ω0 there exists µF ∈ C ∞ (Ω0 ), µF > 0, such that
(2.3) µ(f) = ∫_{Ω'} (F^{−1})^*f · µ_F dz ∀ f ∈ C^0(M) with supp(f) ⊂ Ω.

Now if µ, µ' : C^0(M) −→ R are two such smooth measures then µ'_F = v_F µ_F with v_F ∈ C^∞(Ω'). In fact ∃ v ∈ C^∞(M), v > 0, such that F^*v_F = v on Ω. That is, the v's patch to a well-defined function
globally on M. To see this, notice that every g ∈ Cc0 (Ω0 ) is of the form
(F −1 )∗ g for some g ∈ C 0 (M ) (with support in Ω) so (2.3) certainly
determines µF on Ω0 . Thus, assuming we have two smooth measures,
vF is determined on Ω0 for every coordinate patch. Choose a partition
of unity ρa and define
v = Σ_a ρ_a F_a^* v_{F_a} ∈ C^∞(M).

Exercise. Show (using the transformation of integrals under diffeo-


morphisms) that
(2.4) µ0 (f ) = µ(vf ) ∀ f ∈ C ∞ (M ).
Thus we have ‘proved’ half of
Proposition 2.4. Any (compact) manifold admits a positive smooth
density and any two positive smooth densities are related by (2.4) for
some (uniquely determined) v ∈ C ∞ (M ), v > 0.
Proof. I have already unloaded the hard part on you. The exten-
sion is similar. Namely, chose a covering of M by coordinate patches
and a corresponding partition of unity as above. Then simply define
µ(f) = Σ_a ∫_{Ω'_a} (F_a^{−1})^*(ρ_a f) dz

using Lebesgue measure in each Ω0a . The fact that this satisfies (2.3) is
similar to the exercise above. 
Now, for a compact manifold, we can define a smooth positive den-
sity µ0 ∈ C ∞ (M ; Ω) as a continuous linear functional of the form
(2.5) µ0 : C 0 (M ) −→ C, µ0 (f ) = µ(ϕf ) for some ϕ ∈ C ∞ (M )
where ϕ is allowed to be complex-valued. For the moment the notation,
C ∞ (M ; Ω), is not explained. However, the choice of a fixed positive C ∞
measure allows us to identify
C ∞ (M ; Ω) 3 µ0 −→ ϕ ∈ C ∞ (M ),
meaning that this map is an isomorphism.
Lemma 2.5. For a compact manifold, M, C ∞ (M ; Ω) is a complete
metric space with the norms and distance function
‖µ'‖_(k) = sup_{|α|≤k} |V_1^{α_1} · · · V_p^{α_p} ϕ|
d(µ'_1, µ'_2) = Σ_{k=0}^∞ 2^{−k} ‖µ'_1 − µ'_2‖_(k) / (1 + ‖µ'_1 − µ'_2‖_(k))

where {V1 , . . . , Vp } is a collection of vector fields spanning the tangent


space at each point of M.
This is really a result of about C ∞ (M ) itself. I have put it this way
because of the current relevance of C ∞ (M ; Ω).
Proof. First notice that there are indeed such vector fields on a
compact manifold. Simply take a covering by coordinate patches and
associated partitions of unity, ϕa , supported in the coordinate patch Ωa .
Then if Ψa ∈ C ∞ (M ) has support in Ωa and Ψa ≡ 1 in a neighborhood
of supp(ϕa ) consider
Va` = Ψa (Fa−1 )∗ (∂z` ), ` = 1, . . . , n,
just the coordinate vector fields cut off in Ωa . Clearly, taken together,
these span the tangent space at each point of M, i.e., the local coor-
dinate vector fields are really linear combinations of the Vi given by
renumbering the Va` . It follows that

‖µ'‖_(k) = sup_{|α|≤k} sup_M |V_1^{α_1} · · · V_p^{α_p} ϕ|

is a norm on C^∞(M; Ω) locally equivalent to the C^k norm on ϕ on compact subsets of coordinate patches. It follows that (2.6) gives a distance function on C^∞(M; Ω) with respect to which it is complete —
just as for S(Rn ). 

Thus we can define the space of distributions on M as the space of


continuous linear functionals u ∈ C −∞ (M )

(2.6) u : C ∞ (M ; Ω) −→ C, |u(µ)| ≤ Ck kµk(k) .

As in the Euclidean case smooth, and even locally integrable, functions


embed in C −∞ (M ) by integration
(2.7) L^1(M) ↪ C^{−∞}(M), f ↦ f(µ) = ∫_M fµ

where the integral is defined unambiguously using a partition of unity


subordinate to a coordinate cover:
∫_M fµ = Σ_a ∫_{Ω'_a} (F_a^{−1})^*(ϕ_a f µ_a) dz

since µ = µa dz in local coordinates.

Definition 2.6. The Sobolev spaces on a compact manifold are


defined by reference to a coordinate case, namely if u ∈ C −∞ (M ) then
(2.8) u ∈ H^s(M) ⇔ u(ψµ) = u_a((F_a^{−1})^*ψ µ_a), ∀ ψ ∈ C_c^∞(Ω_a) with u_a ∈ H^s_loc(Ω'_a).

Here the condition can be the requirement for all coordinate sys-
tems or for a covering by coordinate systems in view of the coordinate
independence of the local Sobolev spaces on Rn , that is the weaker
condition implies the stronger.
Now we can transfer the properties of the Sobolev spaces on R^n to a compact manifold; in fact the compactness simplifies the properties
(2.9) H^m(M) ⊂ H^{m'}(M), ∀ m ≥ m'
(2.10) H^m(M) ↪ C^k(M), ∀ m > k + (1/2) dim M
(2.11) ∩_m H^m(M) = C^∞(M)
(2.12) ∪_m H^m(M) = C^{−∞}(M).

These are indeed Hilbert(able) spaces — meaning they do not have


a natural choice of Hilbert space structure, but they do have one. For
instance
⟨u, v⟩_s = Σ_a ⟨(F_a^{−1})^*ϕ_a u, (F_a^{−1})^*ϕ_a v⟩_{H^s(R^n)}
where ϕ_a is a square partition of unity subordinate to coordinate covers.
3. Vector bundles
Although it is not really the subject of this course, it is important
to get used to the coordinate-free language of vector bundles, etc. So I
will insert here at least a minimum treatment of bundles, connections
and differential operators on manifolds.
CHAPTER 6

Invertibility of elliptic operators
Next we will use the local elliptic estimates obtained earlier on open
sets in Rn to analyse the global invertibility properties of elliptic oper-
ators on compact manifolds. This includes at least a brief discussion
of spectral theory in the self-adjoint case.

1. Global elliptic estimates


For a single differential operator acting on functions on a compact
manifold we now have a relatively simple argument to prove global
elliptic estimates.
Proposition 1.1. If M is a compact manifold and P : C ∞ (M ) −→
C ∞ (M ) is a differential operator with C ∞ coefficients which is elliptic
(in the sense that σ_m(P) ≠ 0 on T^*M \ 0) then for any s, M ∈ R there exist constants C_s, C'_M such that
(1.1) u ∈ H^M(M), Pu ∈ H^s(M) =⇒ u ∈ H^{s+m}(M),
‖u‖_{s+m} ≤ C_s‖Pu‖_s + C'_M‖u‖_M,
where m is the order of P.
Proof. The regularity result in (1.1) follows directly from our earlier local regularity results. Namely, if M = ∪_a Ω_a is a (finite) covering of M by coordinate patches,
Fa : Ωa −→ Ω0a ⊂ Rn
then
(1.2) Pa v = (Fa−1 )∗ P Fa∗ v, v ∈ Cc∞ (Ω0a )
defines Pa ∈ Diff m (Ω0a ) which is a differential operator in local coor-
dinates with smooth coefficients; the invariant definition of ellipticity
above shows that it is elliptic for each a. Thus if ϕa is a partition of
unity subordinate to the open cover and ψa ∈ Cc∞ (Ωa ) are chosen with
ψa = 1 in a neighbourhood of supp(ϕa ) then
(1.3) ‖ϕ'_a v‖_{s+m} ≤ C_{a,s}‖ψ'_a P_a v‖_s + C'_{a,M}‖ψ'_a v‖_M
where ϕ'_a = (F_a^{−1})^*ϕ_a and similarly ψ'_a = (F_a^{−1})^*ψ_a ∈ C_c^∞(Ω'_a) are the local coordinate representations. We know that (1.3) holds for every v ∈ C^{−∞}(Ω'_a) such that P_a v ∈ H^s_loc(Ω'_a). Applying (1.3) to (F_a^{−1})^*u = v_a, for u ∈ H^M(M), it follows that Pu ∈ H^s(M) implies P_a v_a ∈ H^s_loc(Ω'_a), by coordinate-invariance of the Sobolev spaces, and then conversely
v_a ∈ H^{s+m}_loc(Ω'_a) ∀ a =⇒ u ∈ H^{s+m}(M).
The norm on H^s(M) can be taken to be
‖u‖_s = (Σ_a ‖(F_a^{−1})^*(ϕ_a u)‖_s^2)^{1/2}
so the estimates in (1.1) also follow from the local estimates:
‖u‖_{s+m}^2 = Σ_a ‖(F_a^{−1})^*(ϕ_a u)‖_{s+m}^2
≤ Σ_a C_{a,s}‖ψ'_a P_a(F_a^{−1})^*u‖_s^2
≤ C_s‖Pu‖_s^2 + C'_M‖u‖_M^2.

Thus the elliptic regularity, and estimates, in (1.1) just follow by
patching from the local estimates. The same argument applies to ellip-
tic operators on vector bundles, once we prove the corresponding local
results. This means going back to the beginning!
As discussed in Section 3, a differential operator between sections of
the bundles E1 and E2 is represented in terms of local coordinates and
local trivializations of the bundles, by a matrix of differential operators
P = (P_{ij}(z, D_z)), i = 1, …, n, j = 1, …, ℓ.
The (usual) order of P is the maximum of the orders of the P_{ij}(z, D_z)
and the symbol is just the corresponding matrix of symbols
σm (P11 )(z, ζ) · · · σm (P1` )(z, ζ)
 

(1.4) σm (P )(z, ζ) =  .. .. .
. .
σm (Pn1 )(z, ζ) · · · σm (Pn` )(z, ζ)
Such a P is said to be elliptic at z if this matrix is invariable for all
ζ 6= 0, ζ ∈ Rn . Of course this implies that the matrix is square, so the
two vector bundles have the same rank, `. As a differential operator,
P ∈ Diff m (M, E), E = E1 , E2 , is elliptic if it is elliptic at each point.

Proposition 1.2. If P ∈ Diff^m(M, E) is a differential operator between sections of vector bundles (E1, E2) = E which is elliptic of order m at every point of M then
(1.5) u ∈ C^{−∞}(M; E1), P u ∈ H^s(M; E2) =⇒ u ∈ H^{s+m}(M; E1)
and for all s, t ∈ R there exist constants C = C_s, C' = C'_{s,t} such that
(1.6) ‖u‖_{s+m} ≤ C ‖P u‖_s + C' ‖u‖_t.
Furthermore, there is an operator
(1.7) Q : C^∞(M; E2) −→ C^∞(M; E1)
such that
(1.8) P Q − Id2 = R2 , QP − Id1 = R1
are smoothing operators.
Proof. As already remarked, we need to go back and carry the
discussion through from the beginning for systems. Fortunately this
requires little more than notational change.
Starting in the constant coefficient case, we first need to observe that ellipticity of a (square) matrix system is equivalent to the ellipticity of the determinant polynomial

                   [ P_11(ζ)  ···  P_1k(ζ) ]
(1.9) D_P(ζ) = det [    ⋮      ⋱      ⋮    ]
                   [ P_k1(ζ)  ···  P_kk(ζ) ]

which is a polynomial of degree km. If the P_ij's are replaced by their leading parts, of homogeneity m, then D_P is replaced by its leading part of degree km. From this it is clear that the ellipticity of P is equivalent to the ellipticity of D_P. Furthermore the invertibility of the matrix in (1.9), under the assumption of ellipticity, follows for |ζ| > C. The inverse can be written
P(ζ)^{-1} = cof(P(ζ))/D_P(ζ).
Since the cofactor matrix represents the Fourier transform of a differential operator, applying the earlier discussion to D_P and then composing with this differential operator gives a generalized inverse, etc.
For example, if Ω ⊂ R^n is an open set and D_Ω is the parametrix constructed above for D_P on Ω then
Q_Ω = cof(P(D)) ◦ D_Ω
is a 2-sided parametrix for the matrix of operators P:
(1.10) P Q_Ω − Id_{k×k} = R_R,
       Q_Ω P − Id_{k×k} = R_L
where R_L, R_R are k × k matrices of smoothing operators. Similar considerations apply to the variable coefficient case. To construct the global parametrix for an elliptic operator P we proceed as before to piece together the local parametrices Q_a for P with respect to a coordinate patch over which the bundles E1, E2 are trivial. Then
Qf = Σ_a F_a^* ψ'_a Q_a φ'_a (F_a^{-1})^* f
is a global 1-sided parametrix for P; here φ_a is a partition of unity and ψ_a ∈ C_c^∞(Ω_a) is equal to 1 in a neighbourhood of supp(φ_a). 
(Probably should be a little more detail.)
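As a simple illustration of the determinant condition (added here only as an example, it is not used in the argument): on R² the 2 × 2 first-order system of Cauchy–Riemann type
P(D) = [ D_1  −D_2 ; D_2  D_1 ]
has D_P(ζ) = det P(ζ) = ζ_1² + ζ_2², which is elliptic of degree 2, while cof(P(ζ)) = [ ζ_1  ζ_2 ; −ζ_2  ζ_1 ], so P(ζ)^{-1} = cof(P(ζ))/(ζ_1² + ζ_2²) for ζ ≠ 0, exactly as in the discussion above.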

2. Compact inclusion of Sobolev spaces


Consider the Sobolev spaces of elements with support in the (closed) unit ball:
(2.1) Ḣ^s(B) = {u ∈ H^s(R^n); u = 0 in |x| > 1}.
Lemma 2.1. The inclusion map
(2.2) Ḣ s (B) ,→ Ḣ t (B) is compact if s > t.
Proof. Recall that compactness of a linear map between (sepa-
rable) Hilbert (or Banach) spaces is the condition that the image of
any bounded sequence has a convergent subsequence (since we are in
separable spaces this is the same as the condition that the image of
the unit ball have compact closure). So, consider a bounded sequence
un ∈ Ḣ s (B). Now u ∈ Ḣ s (B) implies that u ∈ H s (Rn ) and that φu = u
where φ ∈ Cc∞ (Rn ) is equal to 1 in a neighbourhood of the unit ball.
Thus the Fourier transform satisfies
(2.3) û = φ̂ ∗ û =⇒ û ∈ C ∞ (Rn ).
In fact this is true with uniformity. That is, one can bound any deriv-
ative of û on a compact set by the norm
(2.4) sup_{|ζ|≤R} |û| + max_j sup_{|ζ|≤R} |D_j û| ≤ C(R) ‖u‖_{H^s}

where the constant does not depend on u. By the Ascoli-Arzela the-


orem, this implies that for each R the sequence ûn has a convergent
subsequence in C({|ζ| ≤ R}). Now, by diagonalization we can extract a
subsequence which converges in C({|ζ| ≤ R}) for every R. This implies
that the restriction to {|ζ| ≤ R} converges in the weighted L2 norm
corresponding to H t , i.e. that (1 + |ζ|2 )t/2 χR ûnj → (1 + |ζ|2 )t/2 χR v̂

in L2 where χR is the characteristic function of the ball of radius R.


However the boundedness of un in H s strengthens this to
(1 + |ζ|2 )t/2 ûnj → (1 + |ζ|2 )t/2 v̂ in L2 (Rn ).

Namely, the sequence is Cauchy in L²(R^n) and hence convergent. To see this, just note that for ε > 0 one can first choose R so large that the norm outside the ball is
(2.5) ∫_{|ζ|≥R} (1+|ζ|²)^t |û_n|² dζ ≤ (1+R²)^{−(s−t)/2} ∫_{|ζ|≥R} (1+|ζ|²)^s |û_n|² dζ ≤ C (1+R²)^{−(s−t)/2} < ε/2

where C is the bound on the norm in H s . Now, having chosen R, the


subsequence converges in |ζ| ≤ R. This proves the compactness. 

Once we have this local result we easily deduce the global result.
Proposition 2.2. On a compact manifold the inclusion H s (M ) ,→
H t (M ), for any s > t, is compact.
Proof. If φi ∈ Cc∞ (Ui ) is a partition of unity subordinate to an
open cover of M by coordinate patches gi : Ui −→ Ui0 ⊂ Rn , then
(2.6) u ∈ H s (M ) =⇒ (gi−1 )∗ φi u ∈ H s (Rn ), supp((gi−1 )∗ φi u) b Ui0 .
Thus if un is a bounded sequence in H s (M ) then the (gi−1 )∗ φi un form
a bounded sequence in H s (Rn ) with fixed compact supports. It follows
from Lemma 2.1 that we may choose a subsequence so that each φi unj
converges in H t (Rn ). Hence the subsequence unj converges in H t (M ).


3. Elliptic operators are Fredholm


If V1 , V2 are two vector spaces then a linear operator P : V1 → V2 is
said to be Fredholm if there are finite-dimensional subspaces N1 ⊂ V1 ,
N2 ⊂ V2 such that
{v ∈ V1 ; P v = 0} ⊂ N1
(3.1)
{w ∈ V2 ; ∃ v ∈ V1 , P v = w} + N2 = V2 .
The first condition just says that the null space is finite-dimensional
and the second that the range has a finite-dimensional complement –
by shrinking N1 and N2 if necessary we may arrange that the inclusion
in (3.1) is an equality and that the sum is direct.

Theorem 3.1. For any elliptic operator, P ∈ Diff m (M ; E), acting


between sections of vector bundles over a compact manifold,
P : H s+m (M ; E1 ) −→ H s (M ; E2 )
and P : C ∞ (M ; E1 ) −→ C ∞ (M ; E2 )
are Fredholm for all s ∈ R.
The result for the C ∞ spaces follows from the result for Sobolev
spaces. To prove this, consider the notion of a Fredholm operator
between Hilbert spaces,
(3.2) P : H1 −→ H2 .
In this case we can unwind the conditions (3.1) which are then equiv-
alent to the three conditions
Nul(P ) ⊂ H1 is finite-dimensional.
(3.3) Ran(P ) ⊂ H2 is closed.
(Ran(P ))^⊥ ⊂ H2 is finite-dimensional.
Note that any subspace of a Hilbert space with a finite-dimensional
complement is closed so (3.3) does follow from (3.1). On the other
hand the ortho-complement of a subspace is the same as the ortho-
complement of its closure so the first and the third conditions in (3.3)
do not suffice to prove (3.1), in general. For instance the range of an
operator can be dense but not closed.
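A simple example (added for illustration): on ℓ², the bounded operator T e_j = j^{-1} e_j (for an orthonormal basis {e_j}) is injective with dense range — the range contains all finite linear combinations of the e_j — but the range is not closed, since (j^{-1})_{j≥1} ∈ ℓ² lies in the closure of the range while the only candidate preimage, (1, 1, 1, . . .), is not in ℓ².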
The main lemma we need, given the global elliptic estimates, is a
standard one:-
Lemma 3.2. If R : H −→ H is a compact operator on a Hilbert
space then Id −R is Fredholm.
Proof. A compact operator is one which maps the unit ball (and
hence any bounded subset) of H into a precompact set, a set with
compact closure. The unit ball in the null space of Id −R is
{u ∈ H; kuk = 1, u = Ru} ⊂ R{u ∈ H; kuk = 1}
and is therefore precompact. Since it is closed, it is compact and any Hilbert space with a compact unit ball is finite-dimensional. Thus the null space of Id −R is finite-dimensional.
Consider a sequence u_n = v_n − Rv_n in the range of Id −R and suppose u_n → u in H; we need to show that u is in the range of Id −R. We may assume u ≠ 0, since 0 is in the range, and by passing to a subsequence suppose that ‖u_n‖ ≠ 0; ‖u_n‖ → ‖u‖ ≠ 0 by assumption. Now consider w_n = v_n/‖v_n‖. Since ‖u_n‖ ≠ 0, inf_n ‖v_n‖ ≠ 0, since otherwise there is a subsequence converging to 0, and so w_n is well-defined

and of norm 1. Since wn = Rwn + un /kvn k and kvn k is bounded below,


w_n must have a convergent subsequence, by the compactness of R. Passing to such a subsequence, and relabelling, w_n → w, u_n → u, u_n/‖v_n‖ → cu, c ∈ C. If c = 0 then (Id −R)w = 0. However, we can assume in the first place that v_n ⊥ Nul(Id −R), so the same is true of w_n and hence of the limit w. As ‖w‖ = 1 this is a contradiction, so ‖v_n‖ is bounded above, c ≠ 0, and hence u = (Id −R)(w/c) is in the range. Thus the range
of Id −R is closed.
The ortho-complement of the range, Ran(Id −R)^⊥, is the null space of Id −R* which is also finite-dimensional since R* is compact. Thus
Id −R is Fredholm. 
Proposition 3.3. Any smoothing operator on a compact manifold
is compact as an operator between (any) Sobolev spaces.
Proof. By definition a smoothing operator is one with a smooth
kernel. For vector bundles this can be expressed in terms of local
coordinates and a partition of unity with trivialization of the bundles
over the supports as follows.
(3.4) Ru = Σ_{a,b} φ_b R φ_a u,
      φ_b R φ_a u = F_b^* φ'_b R_ab φ'_a (F_a^{-1})^* u,
      R_ab v(z) = ∫_{Ω'_a} R_ab(z, z') v(z'), z ∈ Ω'_b, v ∈ C_c^∞(Ω'_a; E1),

where R_ab is a matrix of smooth sections of the localized (hence trivial by refinement) bundle on Ω'_b × Ω'_a. In fact, by inserting extra cutoffs in (3.4), we may assume that R_ab has compact support in Ω'_b × Ω'_a. Thus, by the compactness of sums of compact operators, it suffices to show that a single smoothing operator with compactly supported kernel is compact on the standard Sobolev spaces. Thus we need to show that if R ∈ C_c^∞(R^{2n}) then
(3.5) H^{L'}(R^n) ∋ u ↦ ∫_{R^n} R(z, z') u(z') dz' ∈ H^L(R^n)
is compact for any L, L'. By the continuous inclusion of Sobolev spaces it suffices to take L' = −L with L a large even integer. Then (Δ+1)^{L/2} is an isomorphism from L²(R^n) to H^{−L}(R^n) and from H^L(R^n) to L²(R^n). Thus the compactness of (3.5) is equivalent to the compactness of
(3.6) (Δ + 1)^{L/2} R (Δ + 1)^{L/2} on L²(R^n).
This is still a smoothing operator with compactly supported kernel, so we are reduced to the special case of (3.5) for L = L' = 0. Finally

then it suffices to use the Stone–Weierstrass theorem, which shows that R is uniformly approximated by polynomials on a large ball. Cutting off on left and right then shows that
ρ(z) R_i(z, z') ρ(z') → R(z, z') uniformly on R^{2n}
where each R_i is a polynomial (and ρ(z)ρ(z') = 1 on supp(R)) with ρ ∈
Cc∞ (Rn ). The uniform convergence of the kernels implies the conver-
gence of the operators on L2 (Rn ) in the norm topology, so R is in
the norm closure of the finite rank operators on L2 (Rn ), hence is com-
pact. 
Proof of Theorem 3.1. We know that P has a 2-sided parametrix Q : H^s(M; E2) −→ H^{s+m}(M; E1) (for any s) such that
P Q − Id_2 = R_2, QP − Id_1 = R_1,
are both smoothing (or at least C^N for arbitrarily large N) operators. Then we can apply Proposition 3.3 and Lemma 3.2. First
QP = Id −R_1 : H^{s+m}(M; E1) −→ H^{s+m}(M; E1)
has finite-dimensional null space. However, the null space of P is certainly contained in the null space of Id −R_1, so it too is finite-dimensional. Similarly,
P Q = Id −R_2 : H^s(M; E2) −→ H^s(M; E2)
has closed range of finite codimension. But the range of P certainly contains the range of Id −R_2 so it too must be closed and of finite codimension. Thus P is Fredholm as an operator from H^{s+m}(M; E1) to H^s(M; E2) for any s ∈ R.
So consider P as an operator on the C ∞ spaces. The null space
of P : H m (M ; E1 ) −→ H 0 (M ; E2 ) consists of C ∞ sections, by elliptic
regularity, so must be equal to the null space on C ∞ (M ; E1 ) — which
is therefore finite-dimensional. Similarly consider the range of P :
H m (M ; E1 ) −→ H 0 (M ; E2 ). We know this to have a finite-dimensional
complement, with basis v1 , . . . , vn ∈ H 0 (M ; E2 ). By the density of
C ∞ (M ; E2 ) in L2 (M ; E2 ) we can approximate the vi ’s closely by wi ∈
C ∞ (M ; E2 ). On close enough approximation, the wi must span the
complement. Thus P H m (M ; E1 ) has a complement in L2 (M ; E2 ) which
is a finite-dimensional subspace of C ∞ (M ; E2 ); call this N2 . If f ∈
C ∞ (M ; E2 ) ⊂ L2 (M ; E2 ) then there are constants ci such that
N
X
f− ci wi = P u, u ∈ H m (M ; E1 ).
i=1

Again by elliptic regularity, u ∈ C ∞ (M ; E1 ) thus N2 is a complement


to P C ∞ (M ; E1 ) in C ∞ (M ; E2 ) and P is Fredholm. 
The point of Fredholm operators is that they are ‘almost invert-
ible’ — in the sense that they are invertible up to finite-dimensional
obstructions. However, a Fredholm operator may not itself be close to
an invertible operator. This defect is measured by the index
ind(P ) = dim Nul(P ) − dim(Ran(P )⊥ )
P : H m (M ; E1 ) −→ L2 (M ; E2 ).

4. Generalized inverses
Written, at least in part, by Chris Kottke.
As discussed above, a bounded operator between Hilbert spaces,
T : H1 −→ H2
is Fredholm if and only if it has a parametrix up to compact errors,
that is, there exists an operator
S : H2 −→ H1
such that
T S − Id2 = R2 , ST − Id1 = R1
are both compact on the respective Hilbert spaces H1 and H2 . In this
case of Hilbert spaces there is a “preferred” parametrix or generalized
inverse.
Recall that the adjoint
T ∗ : H2 −→ H1
of any bounded operator is defined using the Riesz Representation The-
orem. Thus, by the continuity of T , for any u ∈ H2 ,
H1 3 φ −→ hT φ, ui ∈ C
is continuous and so there exists a unique v ∈ H1 such that
hT φ, ui2 = hφ, vi1 , ∀ φ ∈ H1 .
Thus v is determined by u and the resulting map
H2 3 u 7→ v = T ∗ u ∈ H1
is easily seen to be continuous giving the adjoint identity
(4.1) hT φ, ui = hφ, T ∗ ui, ∀ φ ∈ H1 , u ∈ H2
In particular it is always the case that
(4.2) Nul(T ∗ ) = (Ran(T ))⊥

as follows directly from (4.1). As a useful consequence, if Ran(T ) is


closed, then H2 = Ran(T ) ⊕ Nul(T ∗ ) is an orthogonal direct sum.
Proposition 4.1. If T : H1 −→ H2 is a Fredholm operator between
Hilbert spaces then T ∗ is also Fredholm, ind(T ∗ ) = − ind(T ), and T has
a unique generalized inverse S : H2 −→ H1 satisfying
(4.3) T S = Id_2 −Π_{Nul(T*)} , ST = Id_1 −Π_{Nul(T)}
Proof. A straightforward exercise, but it should probably be writ-
ten out! 
Notice that ind(T ) is the difference of the two non-negative integers
dim Nul(T ) and dim Nul(T ∗ ). Thus
(4.4) dim Nul(T ) ≥ ind(T )
(4.5) dim Nul(T ∗ ) ≥ − ind(T )
so if ind(T ) 6= 0 then T is definitely not invertible. In fact it cannot
then be made invertible by small bounded perturbations.
Proposition 4.2. If H1 and H2 are two separable, infinite-dimensional
Hilbert spaces then for all k ∈ Z,
Frk = {T : H1 −→ H2 ; T is Fredholm and ind(T ) = k}
is a non-empty subset of B(H1 , H2 ), the Banach space of bounded op-
erators from H1 to H2 .
Proof. All separable Hilbert spaces of infinite dimension are iso-
morphic, so Fr0 is non-empty. More generally if {ei }∞
i=1 is an orthonor-
mal basis of H1 , then the shift operator, determined by

 ei+k , i ≥ 1, k ≥ 0
Sk ei = ei+k , i ≥ −k, k ≤ 0
0, i < −k

is easily seen to be Fredholm of index k in H1 . Composing with an


isomorphism to H2 shows that Frk 6= ∅ for all k ∈ Z. 
One important property of the spaces Frk (H1 , H2 ) is that they are
stable under compact perturbations; that is, if K : H1 −→ H2 is a
compact operator and T ∈ Frk then (T + K) ∈ Frk . That (T + K) is
Fredholm is clear, sinces a parametrix for T is a parametrix for T + K,
but it remains to show that the index itself is stable and we do this
in steps. In what follows, take T ∈ Frk (H1 , H2 ) with kernel N1 ⊂ H1 .
Define T̃ by the factorization

(4.6) T : H1 −→ H̃1 = H1 /N1 −→ Ran T ,→ H2 ,

so that T̃ is invertible.
Lemma 4.3. Suppose T ∈ Frk (H1 , H2 ) has kernel N1 ⊂ H1 and
M1 ⊃ N1 is a finite dimensional subspace of H1 then defining T 0 = T
on M1⊥ and T 0 = 0 on M1 gives an element T 0 ∈ Frk .
Proof. Since N1 ⊂ M1 , T 0 is obtained from (4.6) by replacing
T̃ by T̃ 0 which is defined in essentially the same way as T 0 , that is
T̃ 0 = 0 on M1 /N1 , and T̃ 0 = T̃ on the orthocomplement. Thus the
range of T̃ 0 in Ran(T ) has complement T̃ (M1 /N1 ) which has the same
dimension as M1/N1. Thus T' has null space M1 and has range in H2 with complement of dimension dim(M1/N1) + dim N2, where N2 is a complement to Ran(T), and hence has index k. 
Lemma 4.4. If A is a finite rank operator A : H1 −→ H2 such that
Ran A ∩ Ran T = {0}, then T + A ∈ Frk .
Proof. First note that Nul(T + A) = Nul T ∩ Nul A since
x ∈ Nul(T +A) ⇔ T x = −Ax ∈ Ran T ∩Ran A = {0} ⇔ x ∈ Nul T ∩Nul A.
Similarly the range of T +A restricted to Nul T meets the range of T +A
restricted to (null T )⊥ only in 0 so the codimension of the Ran(T + A)
is the codimension of Ran AN where AN is A as a map from Nul T to
H2 / Ran T. So, the equality of row and column rank for matrices,
codim Ran(T + A) = codim Ran T − dim Ran(A_N) = dim Nul(T) − k − dim Ran(A_N) = dim Nul(T + A) − k.
Thus T + A ∈ Frk . 
Proposition 4.5. If A : H1 −→ H2 is any finite rank operator,
then T + A ∈ Frk .
Proof. Let E2 = Ran A ∩ Ran T , which is finite dimensional, then
E1 = T̃ −1 (E2 ) has the same dimension. Put M1 = E1 ⊕ N1 and apply
Lemma 4.3 to get T 0 ∈ Frk with kernel M1 . Then
T + A = T 0 + A0 + A
where A0 = T on E1 and A0 = 0 on E1⊥ . Then A0 + A is a finite rank
operator and Ran(A0 + A) ∩ Ran T 0 = {0} and Lemma 4.4 applies.
Thus
T + A = T 0 + (A0 + A) ∈ Frk (H1 , H2 ).

Proposition 4.6. If B : H1 −→ H2 is compact then T + B ∈ Frk .

Proof. A compact operator is the sum of a finite rank operator


and an operator of arbitrarily small norm so it suffices to show that
T + C ∈ Fr_k where ‖C‖ < ε for ε small enough and then apply Propo-
sition 4.5. Let P : H1 −→ H̃1 = H1 /N1 and Q : H2 −→ Ran T be
projection operators. Then
C = QCP + QC(Id −P ) + (Id −Q)CP + (Id −Q)C(Id −P )
the last three of which are finite rank operators. Thus it suffices to
show that
T̃ + QC : H̃1 −→ Ran T
is invertible. The set of invertible operators is open, by the convergence
of the Neumann series so the result follows. 
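Quantitatively (just the standard Neumann series bound, spelled out): T̃ is invertible, so if ‖QC‖ < ‖T̃^{-1}‖^{-1} then
(T̃ + QC)^{-1} = T̃^{-1} Σ_{j≥0} (−QC T̃^{-1})^j
converges in operator norm; hence T̃ + QC is indeed invertible once ε, and so ‖C‖, is small enough.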
Remark 1. In fact the Frk are all connected although I will not use
this below. In fact this follows from the multiplicativity of the index:-
(4.7) Frk ◦ Frl = Frk+l
and the connectedness of the group of invertible operators on a Hilbert
space. The topological type of the Frk is actually a point of some
importance. A fact, which you should know but I am not going to
prove here is:-
S
Theorem 4.7. The open set Fr = k Frk in the Banach space of
bounded operators on a separable Hilbert space is a classifying space
for even K-theory.
That is, if X is a reasonable space – for instance a compact manifold
– then the space of homotopy classes of continuous maps into Fr may be
canonically identified as an Abelian group with the (complex) K-theory
of X :
(4.8) K0 (X) = [X; Fr].

5. Self-adjoint elliptic operators


Last time I showed that elliptic differential operators, acting on
functions on a compact manifold, are Fredholm on Sobolev spaces.
Today I will first quickly discuss the rudiments of spectral theory for
self-adjoint elliptic operators and then pass over to the general case
of operators between sections of vector bundles (which is really only
notationally different from the case of operators on functions).
To define self-adjointness of an operator we need to define the ad-
joint! To do so requires invariant integration. I have already talked
about this a little, but recall from 18.155 (I hope) Riesz’ theorem iden-
tifying (appropriately behaved, i.e. Borel outer continuous and inner

regular) measures on a locally compact space with continuous linear


functionals on C00 (M ) (the space of continuous functions ‘vanishing at
infinity’). In the case of a manifold we define a smooth positive mea-
sure, also called a positive density, as one given in local coordinates by
a smooth positive multiple of the Lebesgue measure. The existence of
such a density is guaranteed by the existence of a partition of unity
subordinate to a coordinate cover, since then we can take
(5.1) ν = Σ_j φ_j f_j^* |dz|

where |dz| is Lebesgue measure in the local coordinate patch corre-


sponding to f_j : U_j −→ U'_j. A smooth coordinate change transforms |dz| into a positive smooth multiple (the absolute value of the Jacobian determinant) of the new Lebesgue measure, and any two such positive smooth measures are related by
(5.2) ν' = µν, 0 < µ ∈ C^∞(M).
In the case of a compact manifold this allows one to define integra-
tion of functions and hence an inner product on L2 (M ),
(5.3) ⟨u, v⟩_ν = ∫_M u(z) v̄(z) ν.

It is with respect to such a choice of smooth density that adjoints are


defined.
Lemma 5.1. If P : C ∞ (M ) −→ C ∞ (M ) is a differential opera-
tor with smooth coefficients and ν is a smooth positive measure then
there exists a unique differential operator with smooth coefficients P^* :
C ∞ (M ) −→ C ∞ (M ) such that
(5.4) hP u, viν = hu, P ∗ viν ∀ u, v ∈ C ∞ (M ).
Proof. First existence. If φi is a partition of unity subordinate to
an open cover of M by coordinate patches and φ0i ∈ C ∞ (M ) have sup-
ports in the same coordinate patches, with φ0i = 1 in a neighbourhood
of supp(φi ) then we know that
(5.5) P u = Σ_i φ'_i P φ_i u = Σ_i f_i^* P_i (f_i^{-1})^* u
where f_i : U_i −→ U'_i are the coordinate charts and P_i is a differential


operator on Ui0 with smooth coefficients, all compactly supported in Ui0 .
The existence of P ∗ follows from the existence of (φ0i P φi )∗ and hence

P_i^* in each coordinate patch, where the P_i^* should satisfy
(5.6) ∫_{U'_i} (P_i u') v̄' µ' dz = ∫_{U'_i} u' \overline{P_i^* v'} µ' dz, ∀ u', v' ∈ C^∞(U'_i).
Here ν = µ'|dz| with 0 < µ' ∈ C^∞(U'_i) in the local coordinates. So in fact P_i^* is unique and given by
(5.7) P_i^*(z, D)v' = Σ_{|α|≤m} (µ')^{-1} D^α( p̄_α(z) µ' v' )  if  P_i = Σ_{|α|≤m} p_α(z) D^α.

The uniqueness of P^* follows from (5.4) since the difference of two
would be an operator Q : C ∞ (M ) −→ C ∞ (M ) satisfying
(5.8) hu, Qviν = 0 ∀ u, v ∈ C ∞ (M )
and this implies that Q = 0 as an operator. 
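As a simple illustration of (5.7) (an example only, not needed later): on M = S¹ with density ν = µ(t) dt, 0 < µ ∈ C^∞(S¹), the operator P = D_t has
P^* v = µ^{-1} D_t(µ v) = D_t v + µ^{-1}(D_t µ) v,
so D_t is formally self-adjoint with respect to ν precisely when µ is constant.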
Proposition 5.2. If P : C ∞ (M ) −→ C ∞ (M ) is an elliptic differ-
ential operator of order m > 0 which is (formally) self-adjoint with
respect to some smooth positive density then
(5.9)
spec(P ) = {λ ∈ C; (P −λ) : C ∞ (M ) −→ C ∞ (M ) is not an isomorphism}
is a discrete subset of R, for each λ ∈ spec(P )
(5.10) E(λ) = {u ∈ C ∞ (M ); P u = λu}
is finite dimensional and
(5.11) L²(M) = Σ_{λ∈spec(P)} E(λ) is orthogonal.

Formal self-adjointness just means that P ∗ = P as differential operators


acting on C ∞ (M ). Actual self-adjointness means a little more but this
follows easily from formal self-adjointness and ellipticity.
Proof. First notice that spec(P ) ⊂ R since if P u = λu with
u ∈ C ∞ (M ) then
(5.12) λ‖u‖²_ν = ⟨P u, u⟩ = ⟨u, P u⟩ = λ̄‖u‖²_ν
so λ ∉ R implies that the null space of P − λ is trivial. Since we know that the range is closed and has complement the null space of (P − λ)* = P − λ̄ it follows that P − λ is an isomorphism on C^∞(M) if λ ∉ R.
If λ ∈ R then we also know that E(λ) is finite dimensional. For any λ ∈ R we know that P − λ is an isomorphism from E(λ)^⊥ to itself which extends by continuity to an isomorphism from the closure of E(λ)^⊥ in H^m(M) to E(λ)^⊥ ⊂ L²(M). It follows that P − λ' defines such an isomorphism for |λ − λ'| < ε for some ε > 0. However, acting on E(λ), P − λ' = (λ − λ') is also an isomorphism for λ' ≠ λ, so P − λ' is an isomorphism. This shows that E(λ') = {0} for 0 < |λ' − λ| < ε.
This leaves the completeness statement, (5.11). In fact this re-
ally amounts to the existence of a non-zero eigenvalue as we shall see.
Consider the generalized inverse of P acting on L2 (M ). It maps the or-
thocomplement of the null space to itself and is a compact operator, as
follows from the a priori estimates for P and the compactness of the embedding of H^m(M) in L²(M) for m > 0. Furthermore it is self-adjoint.
A standard result shows that a compact self-adjoint operator either has
a non-zero eigenvalue or is itself zero. For the completeness it is enough
to show that the generalized inverse maps the orthocomplement of the
span of the E(λ) in L2 (M ) into itself and is compact. It is therefore
either zero or has a non-zero eigenvalue. Any corresponding eigenfunc-
tion would be an eigenfunction of P and hence in one of the E(λ) so
this operator must be zero, meaning that (5.11) holds. 
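The most classical example (included only as an illustration): M = S¹ = R/2πZ, ν = dt and P = D_t² = −d²/dt². Then P is elliptic and formally self-adjoint, spec(P) = {k²; k ∈ Z}, E(0) consists of the constants, E(k²) = span{e^{ikt}, e^{−ikt}} for k ≥ 1, and (5.11) is just the Fourier series decomposition of L²(S¹).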

For single differential operators we first considered constant coef-


ficient operators, then extended this to variable coefficient operators
by a combination of perturbation (to get the a priori estimates) and
construction of parametrices (to get approximation) and finally used
coordinate invariance to transfer the discussion to a (compact) mani-
fold. If we consider matrices of operators we can follow the same path,
so I shall only comment on the changes needed.
A k × l matrix of differential operators (so with k rows and l
columns) maps l-vectors of smooth functions to k vectors:
(5.13) P_ij(D) = Σ_{|α|≤m} c_{α,i,j} D^α, (P(D)u)_i(z) = Σ_j P_ij(D) u_j(z).

The matrix Pij (ζ) is invertible if and only if k = l and the polyno-
mial of order mk, det P (ζ) 6= 0. Such a matrix is said to be elliptic if
det P (ζ) is elliptic. The cofactor matrix defines a matrix P 0 of differ-
ential operators of order (k − 1)m and we may construct a parametrix
for P (assuming it to be elliptic) from a parametrix for det P :
(5.14) QP = Qdet P P 0 (D).
It is then easy to see that it has the same mapping properties as in the
case of a single operator (although notice that the product is no longer
commutative because of the non-commutativity of matrix multiplica-
tion)
(5.15) QP P = Id −RL , P QP = Id −RR

where RL and RR are given by matrices of convolution operators with


all elements being Schwartz functions. For the action on vector-valued
functions on an open subset of Rn we may proceed exactly as before,
cutting off the kernel of QP with a properly supported function which
is 1 near the diagonal
Z
(5.16) QΩ f (z) = q(z − z 0 )χ(z, z 0 )f (z 0 )dz 0 .

The regularity estimates look exactly the same as before if we define
the local Sobolev spaces to be simply the direct sum of k copies of the
usual local Sobolev spaces
(5.17) P u = f ∈ H^s_loc(Ω) =⇒ ‖ψu‖_{s+m} ≤ C‖ψP(D)u‖_s + C'‖φu‖_{m−1}  or  ‖ψu‖_{s+m} ≤ C‖φP(D)u‖_s + C''‖φu‖_M,
where ψ, φ ∈ C_c^∞(Ω) and φ = 1 in a neighbourhood of supp(ψ) (and in the second case C'' depends on M).
Now, the variable coefficient case proceeds again as before, where now we are considering a k × k matrix of differential operators of order m. I will not go into the details. A priori estimates in the first form in (5.17), for functions ψ with small support near a point, follow by perturbation from the constant coefficient case and then in the second form by use of a partition of unity. The existence of a parametrix for the variable coefficient matrix of operators also goes through without problems – the commutativity which disappears in the matrix case was not used anyway.
As regards coordinate transformations, we get the same results as before. It is also natural to allow transformations by variable coefficient matrices. Thus if G_i(z) ∈ C^∞(Ω; GL(k, C)), i = 1, 2, are smooth families of invertible matrices we may consider the composites P G_2 or G_1^{-1} P, or more usually the ‘conjugate’ operator
(5.18) G_1^{-1} P(z, D) G_2 = P'(z, D).
This is also a variable coefficient differential operator, elliptic if and only if P(z, D) is elliptic. The Sobolev spaces H^s_loc(Ω; R^k) are invariant under composition with such matrices, since they are the same in each variable.
Combining coordinate transformations and such matrix conjugation
allows us to consider not only manifolds but also vector bundles over
manifolds. Let me briefly remind you of what this is about. Over
an open subset Ω ⊂ Rn one can introduce a vector bundle as just a
subbundle of some trivial N -dimensional bundle. That is, consider a
smooth N × N matrix Π ∈ C ∞ (Ω; M (N, C)) on Ω which is valued in
the projections (i.e. idempotents) meaning that Π(z)Π(z) = Π(z) for

all z ∈ Ω. Then the range of Π(z) defines a linear subspace of CN for


each z ∈ Ω and together these form a vector bundle over Ω. Namely
these spaces fit together to define a manifold of dimension n + k where
k is the rank of Π(z) (constant if Ω is connected, otherwise require it
be the same on all components)
(5.19) E_Ω = ⋃_{z∈Ω} E_z, E_z = Π(z)C^N.
If z̄ ∈ Ω then we may choose a basis of E_z̄ and so identify it with C^k. By the smoothness of Π(z) in z it follows that in some small ball B(z̄, r) (chosen so that ‖Π(z)(Π(z) − Π(z̄))Π(z)‖ < 1/2 there) the map
(5.20) E_{B(z̄,r)} = ⋃_{z∈B(z̄,r)} E_z, E_z = Π(z)C^N ∋ (z, u) ↦ (z, Π(z̄)u) ∈ B(z̄, r) × E_z̄ ≃ B(z̄, r) × C^k

is an isomorphism. Injectivity is just injectivity of each of the maps


Ez −→ Ez̄ and this follows from the fact that Π(z)Π(z̄)Π(z) is invert-
ible on Ez ; this also implies surjectivity.
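A standard example of this construction (added for concreteness): on Ω = R³ \ {0} the matrix Π(z) = Id − z zᵀ/|z|² is a smooth family of projections of rank 2, E_z = Π(z)C³ is the space of vectors orthogonal to z, and restricted to the unit sphere E is (the complexification of) the tangent bundle of S².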
6. Index theorem
Addenda to Chapter 6
CHAPTER 7

Suspended families and the resolvent

For a compact manifold, M, the Sobolev spaces H s (M ; E) (of sec-


tions of a vector bundle E) are defined above by reference to local
coordinates and local trivializations of E. If M is not compact (but is
paracompact, as is demanded by the definition of a manifold) the same
sort of definition leads either to the spaces of sections with compact
support, or the “local” spaces:
(0.1) H^s_c(M; E) ⊂ H^s_loc(M; E), s ∈ R.
Thus, if F_a : Ω_a → Ω'_a is a covering of M, for a ∈ A, by coordinate patches over which E is trivial, T_a : (F_a^{-1})^* E ≅ C^N, and {ρ_a} is a partition of unity subordinate to this cover then
(0.2) µ ∈ H^s_loc(M; E) ⇔ T_a (F_a^{-1})^*(ρ_a µ) ∈ H^s(Ω'_a; C^N) ∀ a.
Practically, these spaces have serious limitations; for instance they
are not Hilbert or even Banach spaces. On the other hand they cer-
tainly have their uses and differential operators act on them in the
usual way,
P ∈ Diff^m(M; E) ⇒
(0.3) P : H^{s+m}_loc(M; E_+) → H^s_loc(M; E_−),
      P : H^{s+m}_c(M; E_+) → H^s_c(M; E_−).
However, without some limitations on the growth of elements, as is the case in H^s_loc(M; E), it is not reasonable to expect the null space of the
first realization of P above to be finite dimensional. Similarly in the
second case it is not reasonable to expect the operator to be even close
to surjective.

1. Product with a line


Some corrections from Fang Wang added, 25 July, 2007.
Thus, for non-compact manifolds, we need to find intermediate
spaces which represent some growth constraints on functions or dis-
tributions. Of course this is precisely what we have done for Rn in

defining the weighted Sobolev spaces,
(1.1) H^{s,t}(R^n) = { u ∈ S'(R^n); ⟨z⟩^{−t} u ∈ H^s(R^n) }.
However, it turns out that even these spaces are not always what we
want.
To lead up to the discussion of other spaces I will start with the
simplest sort of non-compact space, the real line. To make things more
interesting (and useful) I will consider
(1.2) X =R×M
where M is a compact manifold. The new Sobolev spaces defined
for this product will combine the features of H s (R) and H s (M ). The
Sobolev spaces on Rn are associated with the translation action of Rn
on itself, in the sense that this fixes the “uniformity” at infinity through
the Fourier transform. What happens on X is quite similar.
First we can define “tempered distributions” on X. The space of
Schwartz functions of rapid decay on X can be fixed in terms of differ-
ential operators on M and differentiation on R.
(1.3) S(R×M) = { u : R × M → C; sup_{R×M} |t^l D_t^k P u| < ∞ ∀ l, k, P ∈ Diff^*(M) }.

Exercise 1. Define the corresponding space for sections of a vector


bundle E over M lifted to X and then put a topology on S(R × M ; E)
corresponding to these estimates and check that it is a complete metric
space, just like S(R) in Chapter 3.
There are several different ways to look at
S(R × M ) ⊂ C ∞ (R × M ).
Namely we can think of either R or M as “coming first” and see that
(1.4) S(R × M ) = C ∞ (M ; S(R)) = S(R; C ∞ (M )).
The notion of a C ∞ function on M with values in a topological vector
space is easy to define, since C 0 (M ; S(R)) is defined using the metric
space topology on S(R). In a coordinate patch on M higher deriva-
tives are defined in the usual way, using difference quotients and these
definitions are coordinate-invariant. Similarly, continuity and differen-
tiability for a map R → C ∞ (M ) are easy to define and then
(1.5) S(R; C^∞(M)) = { u : R → C^∞(M); sup_t ‖t^k D_t^p u‖_{C^l(M)} < ∞, ∀ k, p, l }.

Using such an interpretation of S(R × M ), or directly, it follows


easily that the 1-dimensional Fourier transform gives an isomorphism
F : S(R × M ) → S(R × M ) by
(1.6) F : u(t, ·) ↦ û(τ, ·) = ∫_R e^{−itτ} u(t, ·) dt.

So, one might hope to use F to define Sobolev spaces on R ×


M with uniform behavior as t → ∞ in R. However this is not so
straightforward, although I will come back to it, since the 1-dimensional
Fourier transform in (1.6) does nothing in the variables in M. Instead
let us think about L2 (R × M ), the definition of which requires a choice
of measure.
Of course there is an obvious class of product measures on R × M,
namely dt · νM , where νM is a positive smooth density on M and dt is
Lebesgue measure on R. This corresponds to the functional
(1.7) ∫ : C_c^0(R × M) ∋ u ↦ ∫ u(t, ·) dt · ν ∈ C.

The analogues of (1.4) correspond to Fubini’s Theorem:
(1.8) L²_ti(R × M) = { u : R × M → C measurable; ∫ |u(t, z)|² dt ν_z < ∞ } / ∼ (equality a.e.),
      L²_ti(R × M) = L²(R; L²(M)) = L²(M; L²(R)).


Here the subscript “ti” is supposed to denote translation-invariance (of
the measure and hence the space).
We can now easily define the Sobolev spaces of positive integer
order:
(1.9) H^m_ti(R × M) = { u ∈ L²_ti(R × M); D_t^j P_k u ∈ L²_ti(R × M) ∀ j ≤ m − k, 0 ≤ k ≤ m, P_k ∈ Diff^k(M) }.

In fact we can write them more succinctly by defining
(1.10) Diff^m_ti(R×M) = { Q ∈ Diff^m(R × M); Q = Σ_{0≤j≤m} D_t^j P_j, P_j ∈ Diff^{m−j}(M) }.

This is the space of “t-translation-invariant” differential operators on R × M and (1.9) reduces to
(1.11) H^m_ti(R×M) = { u ∈ L²_ti(R × M); P u ∈ L²_ti(R × M) ∀ P ∈ Diff^m_ti(R × M) }.

I will discuss such operators in some detail below, especially the


elliptic case. First, we need to consider the Sobolev spaces of non-
integral order, for completeness sake if nothing else. To do this, observe
that on R itself (so for M = {pt}), L2ti (R × {pt}) = L2 (R) in the usual
sense. Let us consider a special partition of unity on R consisting of
integral translates of one function.
Definition 1.1. An element µ ∈ C_c^∞(R) generates a “ti-partition of unity” (a non-standard description) on R if 0 ≤ µ ≤ 1 and Σ_{k∈Z} µ(t − k) = 1.

It is easy to construct such a µ. Just take µ1 ∈ Cc∞ (R), µ1 ≥ 0 with


µ1 (t) = 1 in |t| ≤ 1/2. Then let
F(t) = Σ_{k∈Z} µ_1(t − k) ∈ C^∞(R)

since the sum is finite on each bounded set. Moreover F (t) ≥ 1 and is
itself invariant under translation by any integer; set µ(t) = µ1 (t)/F (t).
Then µ generates a ti-partition of unity.
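Concretely (one possible construction among many): the piecewise-linear ‘tent’ µ_0(t) = max(0, 1 − |t|) already satisfies Σ_{k∈Z} µ_0(t − k) = 1 and 0 ≤ µ_0 ≤ 1, but is only Lipschitz; convolving it with any φ ∈ C_c^∞(R), φ ≥ 0, ∫ φ = 1, gives µ = µ_0 ∗ φ ∈ C_c^∞(R) with
Σ_{k∈Z} µ(t − k) = ((Σ_{k∈Z} µ_0(· − k)) ∗ φ)(t) = 1,
so this µ also generates a ti-partition of unity.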
Using such a function we can easily decompose L2 (R). Thus, setting
τk (t) = t − k,
(1.12) f ∈ L²(R) ⇐⇒ (τ_k^* f)µ ∈ L²_loc(R) ∀ k ∈ Z and Σ_{k∈Z} ∫ |(τ_k^* f)µ|² dt < ∞.

Of course, saying (τ_k^* f)µ ∈ L²_loc(R) is the same as (τ_k^* f)µ ∈ L²_c(R). Certainly, if f ∈ L²(R) then (τ_k^* f)µ ∈ L²(R) and, since 0 ≤ µ ≤ 1 and supp(µ) ⊂ [−R, R] for some R,
Σ_k ∫ |(τ_k^* f)µ|² ≤ C ∫ |f|² dt.
Conversely, since Σ_{|k|≤T} µ(t − k) = 1 on [−1, 1] for some T, it follows that
∫ |f|² dt ≤ C' Σ_k ∫ |(τ_k^* f)µ|² dt.

Now, D_t τ_k^* f = τ_k^*(D_t f), so we can use (1.12) to rewrite the definition of the spaces H^k_ti(R × M) in a form that extends to all orders. Namely
(1.13) u ∈ H^s_ti(R × M) ⇐⇒ (τ_k^* u)µ ∈ H^s_c(R × M) ∀ k and Σ_k ‖(τ_k^* u)µ‖²_{H^s} < ∞,
provided we choose a fixed norm on H^s_c(R × M) giving the usual topology for functions supported in a fixed compact set, for example by embedding [−T, T] in a torus T and then taking the norm on H^s(T × M).

Lemma 1.2. With Diff^m_ti(R×M) defined by (1.10) and the translation-invariant Sobolev spaces by (1.13),
(1.14) P ∈ Diff^m_ti(R × M) =⇒ P : H^{s+m}_ti(R × M) −→ H^s_ti(R × M) ∀ s ∈ R.

Proof. This is basically an exercise. Really we also need to check a little more carefully that the two definitions of H^k_ti(R × M), for k a positive integer, are the same. In fact this is similar to the proof of (1.14) so is omitted. So, to prove (1.14) we will proceed by induction over m. For m = 0 there is nothing to prove. Now observe that the translation-invariance of P means that P τ_k^* u = τ_k^*(P u) so
(1.15) u ∈ H^{s+m}_ti(R × M) =⇒ P(τ_k^* u · µ) = τ_k^*(P u)µ + Σ_{m'<m} τ_k^*(P_{m'} u) D_t^{m−m'} µ,  P_{m'} ∈ Diff^{m'}_ti(R × M).

The left side is in Htis (R × M ), with the sum over k of the squares
of the norms bounded, by the regularity of u. The same is easily seen
to be true for the sum on the right by the inductive hypothesis, and
hence for the first term on the right. This proves the mapping property
(1.14) and continuity follows by the same argument or the closed graph
theorem. 

We can, and shall, extend this in various ways. If E = (E1 , E2 ) is a


pair of vector bundles over M then it lifts to a pair of vector bundles
over R×M , which we can again denote by E. It is then straightforward
to define Diff^m_ti(R × M; E) and the Sobolev spaces H^s_ti(R × M; E_i) and
to check that (1.14) extends in the obvious way.
The main question we want to understand is the invertibility of
an operator such as P in (1.14). However, let me look first at these
Sobolev spaces a little more carefully. As already noted we really have
two definitions in the case of positive integral order. Thinking about
these we can also make the following provisional definitions in terms of
the 1-dimensional Fourier transform discussed above – where the ‘H̃’
notation is only temporary since these will turn out to be the same as
the spaces just considered.

For any compact manifold define
(1.16) H̃^s_ti(R × M) = { u ∈ L²(R × M); ‖u‖²_s = ∫_R ⟨τ⟩^{2s} ‖û(τ, ·)‖²_{L²(M)} dτ + ∫_R ‖û(τ, ·)‖²_{H^s(M)} dτ < ∞ }, s ≥ 0,
(1.17) H̃^s_ti(R × M) = { u ∈ S'(R × M); u = u_1 + u_2, u_1 ∈ L²(R; H^s(M)), u_2 ∈ L²(M; H^s(R)) }, ‖u‖²_s = inf (‖u_1‖² + ‖u_2‖²), s < 0.


The following interpolation result for Sobolev norms on M should
be back in Chapter 5.
Lemma 1.3. If M is a compact manifold or R^n then for any m_1 ≥ m_2 ≥ m_3 and any R, the Sobolev norms are related by
(1.18) ‖u‖_{m_2} ≤ C ( (1 + R)^{m_2−m_1} ‖u‖_{m_1} + (1 + R)^{m_2−m_3} ‖u‖_{m_3} ).
Proof. On Rn this follows directly by dividing Fourier space in
two pieces
(1.19) ‖u‖²_{m_2} = ∫_{|ζ|>R} ⟨ζ⟩^{2m_2} |û|² dζ + ∫_{|ζ|≤R} ⟨ζ⟩^{2m_2} |û|² dζ
       ≤ ⟨R⟩^{2(m_2−m_1)} ∫_{|ζ|>R} ⟨ζ⟩^{2m_1} |û|² dζ + ⟨R⟩^{2(m_2−m_3)} ∫_{|ζ|≤R} ⟨ζ⟩^{2m_3} |û|² dζ
       ≤ ⟨R⟩^{2(m_2−m_1)} ‖u‖²_{m_1} + ⟨R⟩^{2(m_2−m_3)} ‖u‖²_{m_3}.
On a compact manifold we have defined the norms by using a partition
φ_i of unity subordinate to a covering by coordinate patches F_i : U_i −→ U'_i:
(1.20) ‖u‖²_m = Σ_i ‖(F_i^{-1})^*(φ_i u)‖²_m

where on the right we are using the Sobolev norms on Rn . Thus, ap-
plying the estimates for Euclidean space to each term on the right we
get the same estimate on any compact manifold. 
Corollary 1.4. If u ∈ H̃tis (R × M ), for s > 0, then for any
0<t<s
(1.21) ∫_R ⟨τ⟩^{2t} ‖û(τ, ·)‖²_{H^{s−t}(M)} dτ < ∞

which we can interpret as meaning ‘u ∈ H t (R; H s−t (M )) or u ∈ H s−t (M ; H s (R)).’



Proof. Apply the estimate to û(τ, ·) ∈ H s (M ), with R = |τ |,


m1 = s and m3 = 0 and integrate over τ. 
Lemma 1.5. The Sobolev spaces H̃tis (R × M ) and Htis (R × M ) are
the same.
Proof. 
Lemma 1.6. For 0 < s < 1, u ∈ H^s_ti(R × M) if and only if u ∈ L²(R × M) and
(1.22) ∫_{R²×M} |u(t, z) − u(t', z)|²/|t − t'|^{2s+1} dt dt' ν + ∫_{R×M²} |u(t, z') − u(t, z)|²/ρ(z, z')^{s+n/2} dt ν(z) ν(z') < ∞, n = dim M,
where 0 ≤ ρ ∈ C ∞ (M 2 ) vanishes exactly quadratically at Diag ⊂ M 2 .
Proof. This follows as in the cases of Rn and a compact manifold
discussed earlier since the second term in (1.22) gives (with the L2
norm) a norm on L2 (R; H s (M )) and the first term gives a norm on
L2 (M ; H s (R)). 
Using these results we can see directly that the Sobolev spaces in
(1.16) have the following ‘obvious’ property as in the cases of Rn and
M.
Lemma 1.7. Schwartz space S(R × M ) = C ∞ (M ; S(R)) is dense in
each Htis (R × M ) and the L2 pairing extends by continuity to a jointly
continuous non-degenerate pairing
(1.23) Htis (R × M ) × Hti−s (R × M ) −→ C
which identifies Hti−s (R×M ) with the dual of Htis (R×M ) for any s ∈ R.
Proof. I leave the density as an exercise – use convolution in R
and the density of C^∞(M) in H^s(M) (explicitly, using a partition of
unity on M and convolution on Rn to get density in each coordinate
patch).
Then the existence and continuity of the pairing follows from the
definitions and the corresponding pairings on R and M. We can assume
that s > 0 in (1.23) (otherwise reverse the factors). Then if u ∈
Htis (R × M ) and v = v1 + v2 ∈ Hti−s (R × M ) as in (1.17),
(1.24) (u, v) = ∫_R (u(t, ·), v_1(t, ·)) dt + ∫_M (u(·, z), v_2(·, z)) ν_z

where the first pairing is the extension of the L2 pairing to H s (M ) ×


H −s (M ) and in the second case to H s (R) × H −s (R). The continuity of
the pairing follows directly from (1.24).
So, it remains only to show that the pairing is non-degenerate – so
that
(1.25) H^{−s}_ti(R × M) ∋ v ↦ sup_{‖u‖_{H^s_ti(R×M)} = 1} |(u, v)|

is equivalent to the norm on Hti−s (R


× M ). We already know that this
is bounded above by a multiple of the norm on Hti−s so we need the
estimate the other way. To see this we just need to go back to Euclidean
space. Take a partition of unity ψ_i on M subordinate to a coordinate cover and, as usual, choose φ_i with φ_i = 1 in a neighbourhood of the support of ψ_i. Then
(1.26) (u, ψi v) = (φi u, ψi v)
allows us to extend ψi v to a continuous linear functional on H s (Rn )
by reference to the local coordinates and using the fact that for s > 0
(Fi−1 )∗ (φi u) ∈ H s (Rn+1 ). This shows that the coordinate representative
of ψi v is a sum as desired and summing over i gives the desired bound.


2. Translation-invariant Operators
Some corrections from Fang Wang added, 25 July, 2007.
Next I will characterize those operators P ∈ Diff^m_ti(R×M; E) which give invertible maps (1.14), or rather, in the case of a pair of vector bundles E = (E1, E2) over M:
(2.1) P : H^{s+m}_ti(R×M; E1) −→ H^s_ti(R×M; E2), P ∈ Diff^m_ti(R×M; E).

This is a generalization of the 1-dimensional case, M = {pt} which we


have already discussed. In fact it will become clear how to generalize
some parts of the discussion below to products Rn × M as well, but
the case of a 1-dimensional Euclidean factor is both easier and more
fundamental.
As with the constant coefficient case, there is a basic dichotomy
here. A t-translation-invariant differential operator as in (2.1) is Fred-
holm if and only if it is invertible. To find necessary and sufficient
conditons for invertibility we will we use the 1-dimensional Fourier
transform as in (1.6).
If
(2.2) P ∈ Diff^m_ti(R × M; E) ⇐⇒ P = Σ_{i=0}^m D_t^i P_i, P_i ∈ Diff^{m−i}(M; E)

then
P : S(R × M ; E1 ) −→ S(R × M ; E2 )
and
(2.3) \widehat{Pu}(τ, ·) = Σ_{i=0}^m τ^i P_i û(τ, ·)

where û(τ, ·) is the 1-dimensional Fourier transform from (1.6). So we


clearly need to examine the “suspended” family of operators
(2.4) P(τ) = Σ_{i=0}^m τ^i P_i ∈ C^∞(C; Diff^m(M; E)).

I use the term “suspended” to denote the addition of a parameter


to Diff m (M ; E) to get such a family—in this case polynomial. They
are sometimes called “operator pencils” for reasons that escape me.
Anyway, the main result we want is
Theorem 2.1. If P ∈ Diff^m_ti(R × M; E) is elliptic then the suspended
family P (τ ) is invertible for all τ ∈ C \ D with inverse
(2.5) P (τ )−1 : H s (M ; E2 ) −→ H s+m (M ; E1 )
where
(2.6) D ⊂ C is discrete and D ⊂ {τ ∈ C; | Re τ | ≤ c| Im τ | + 1/c}
for some c > 0 (see Fig. ?? – still not quite right).
In fact we need some more information on P (τ )−1 which we will
pick up during the proof of this result. The translation-invariance of
P can be written in operator form as
(2.7) P u(t + s, ·) = (P u)(t + s, ·) ∀ s ∈ R
Lemma 2.2. If P ∈ Diff^m_ti(R × M; E) is elliptic then it has a para-
metrix
(2.8) Q : S(R × M ; E2 ) −→ S(R × M ; E1 )
which is translation-invariant in the sense of (2.7) and preserves the
compactness of supports in R,
(2.9) Q : Cc∞ (R × M ; E2 ) −→ Cc∞ (R × M ; E1 )
Proof. In the case of a compact manifold we constructed a global parametrix by patching local parametrices with a partition of unity.
Here we do the same thing, treating the variable t ∈ R globally through-
out. Thus if Fa : Ωa → Ω0a is a coordinate patch in M over which E1

and (hence) E2 are trivial, P becomes a square matrix of differential


operators
             [ P_11(z, D_t, D_z)  ···  P_l1(z, D_t, D_z) ]
(2.10) P_a = [        ⋮            ⋱          ⋮          ]
             [ P_1l(z, D_t, D_z)  ···  P_ll(z, D_t, D_z) ]
in which the coefficients do not depend on t. As discussed in Sections 2
and 3 above, we can construct a local parametrix in Ω0a using a properly
supported cutoff χ. In the t variable the parametrix is global anyway,
so we use a fixed cutoff χ̃ ∈ Cc∞ (R), χ̃ = 1 in |t| < 1, and so construct
a parametrix
(2.11) Q_a f(t, z) = ∫_{Ω'_a} q(t − t', z, z') χ̃(t − t') χ(z, z') f(t', z') dt' dz'.

This satisfies
(2.12) Pa Qa = Id −Ra , Qa Pa = Id −Ra0
where Ra and Ra0 are smoothing operators on Ω0a with kernels of the
form
(2.13) R_a f(t, z) = ∫_{Ω'_a} R_a(t − t', z, z') f(t', z') dt' dz',
       R_a ∈ C^∞(R × (Ω'_a)²), R_a(t, z, z') = 0 if |t| ≥ 2,
with the support proper in Ω'_a.


Now, we can sum these local parametrices, which are all t-translation-invariant, to get a global parametrix with the same properties
(2.14) Qf = Σ_a χ_a (F_a^{-1})^* (T_a^{-1})^* Q_a T_a^* F_a^* f

where Ta denotes the trivialization of bundles E1 and E2 . It follows


that Q satisfies (2.9) and since it is translation-invariant, also (2.8).
The global version of (2.12) becomes
(2.15) P Q = Id −R_2, QP = Id −R_1,
       R_i : C_c^∞(R × M; E_i) −→ C_c^∞(R × M; E_i),
       R_i f = ∫_{R×M} R_i(t − t', z, z') f(t', z') dt' ν_{z'}
where the kernels
(2.16) R_i ∈ C_c^∞(R × M²; Hom(E_i)), i = 1, 2.

In fact we can deduce directly from (2.11) the boundedness of Q.

Lemma 2.3. The properly-supported parametrix Q constructed above


extends by continuity to a bounded operator
(2.17) Q : H^s_ti(R × M; E2) −→ H^{s+m}_ti(R × M; E1) ∀ s ∈ R,
       Q : S(R × M; E2) −→ S(R × M; E1).
Proof. This follows directly from the earlier discussion of elliptic
regularity for each term in (2.14) to show that
(2.18) Q : {f ∈ H^s_ti(R × M; E2); supp(f) ⊂ [−2, 2] × M} −→ {u ∈ H^{s+m}_ti(R × M; E1); supp(u) ⊂ [−2 − R, 2 + R] × M}


for some R (which can in fact be taken to be small and positive).


Indeed on compact sets the translation-invariant Sobolev spaces reduce
to the usual ones. Then (2.17) follows from (2.18) and the translation-
invariance of Q. Using a µ ∈ Cc∞ (R) generating a ti-paritition of unity
on R we can decompose
X
(2.19) Htis (R × M ; E2 ) 3 f = τk∗ (µτ−k

f ).
k∈Z

Then
X
τk∗ Q(µτ−k


(2.20) Qf = f) .
k∈Z

The estimates corresponding to (2.18) give


kQf kH s+m ≤ Ckf kHtis
ti

if f has support in [−2, 2] × M. The decomposition (2.19) then gives


X

kµτ−k f k2H s = kf k2Hs < ∞ =⇒ kQf k2 ≤ C 0 kf k2H s .
This proves Lemma 2.3. 
Going back to the remainder term in (2.15), we can apply the 1-
dimensional Fourier transform and find the following uniform results.
Lemma 2.4. If R is a compactly supported, t-translation-invariant
smoothing operator as in (2.15) then
(2.21) \widehat{Rf}(τ, ·) = R̂(τ) f̂(τ, ·)
where R̂(τ) ∈ C^∞(C × M²; Hom(E)) is entire in τ ∈ C and satisfies the estimates
(2.22) ∀ k, p ∃ C_{p,k} such that ‖τ^k R̂(τ)‖_{C^p} ≤ C_{p,k} exp(A|Im τ|).
Here A is a constant such that
(2.23) supp R(t, ·) ⊂ [−A, A] × M 2 .

Proof. This is a parameter-dependent version of the usual esti-


mates for the Fourier-Laplace transform. That is,
(2.24) R̂(τ, ·) = ∫ e^{−iτt} R(t, ·) dt

from which all the statements follow just as in the standard case when
R ∈ Cc∞ (R) has support in [−A, A]. 
Proposition 2.5. If R is as in Lemma 2.4 then there exists a discrete subset D ⊂ C such that (Id −R̂(τ))^{-1} exists for all τ ∈ C \ D and
(2.25) (Id −R̂(τ))^{-1} = Id −Ŝ(τ)
where Ŝ : C −→ C^∞(M²; Hom(E)) is a family of smoothing operators which is meromorphic in the complex plane with poles of finite order and residues of finite rank at D. Furthermore,
(2.26) D ⊂ {τ ∈ C; log(|Re τ|) < c|Im τ| + 1/c}
for some c > 0 and, for any C > 0, there exists C' such that
(2.27) |Im τ| < C, |Re τ| > C' =⇒ ‖τ^k Ŝ(τ)‖_{C^p} ≤ C_{p,k}.
b )kC p ≤ Cp,k .

Proof. This is part of “Analytic Fredholm Theory” (although usu-


ally done with compact operators on a Hilbert space). The estimates
(2.22) on R̂(τ) show that, in some region as on the right in (2.26),

(2.28) ‖R̂(τ)‖_{L²} ≤ 1/2.
Thus, by Neumann series,
(2.29) Ŝ(τ) = Σ_{k=1}^∞ R̂(τ)^k

exists as a bounded operator on L²(M; E). In fact it follows that Ŝ(τ) is itself a family of smoothing operators in the region in which the Neumann series converges. Indeed, the series can be rewritten
(2.30) Ŝ(τ) = R̂(τ) + R̂(τ)² + R̂(τ) Ŝ(τ) R̂(τ).
The smoothing operators form a “corner” in the bounded operators in
the sense that products like the third here are smoothing if the outer
two factors are. This follows from the formula for the kernel of the
product
∫_{M×M} R̂_1(τ; z, z') Ŝ(τ; z', z'') R̂_2(τ; z'', z̃) ν_{z'} ν_{z''}.
Thus Ŝ(τ) ∈ C^∞(M²; Hom(E)) exists in a region as on the right in (2.26). To see that it extends to be meromorphic in C \ D for a discrete divisor D we can use a finite-dimensional approximation to R̂(τ). Recall — if necessary from local coordinates — that given any p ∈ N, R > 0, ε > 0 there are finitely many sections f_i^{(τ)} ∈ C^∞(M; E'), g_i^{(τ)} ∈ C^∞(M; E) such that
(2.31) ‖R̂(τ) − Σ_i g_i(τ, z) · f_i(τ, z')‖_{C^p} < ε, |τ| < R.
Writing this difference as M(τ),
Id −R̂(τ) = Id −M(τ) + F(τ)
where F(τ) is a finite rank operator. In view of (2.31), Id −M(τ) is invertible and its inverse, as seen above, is of the form Id −M̂(τ) where M̂(τ) is holomorphic in |τ| < R as a smoothing operator.
Thus
Id −R̂(τ) = (Id −M(τ))(Id +F(τ) − M̂(τ)F(τ))
is invertible if and only if the finite rank perturbation of the identity by (Id −M̂(τ))F(τ) is invertible. For R large, by the previous result, this finite rank perturbation must be invertible in an open set in {|τ| < R}. Then, by standard results for finite dimensional matrices, it has a meromorphic inverse with finite rank (generalized) residues. The same is therefore true of Id −R̂(τ) itself.
Since R > 0 is arbitrary this proves the result. 
Proof of Theorem 2.1. We have proved (2.15) and the corresponding form for the Fourier transformed kernels follows:
(2.32) P̂(τ) Q̂_0(τ) = Id −R̂_2(τ), Q̂_0(τ) P̂(τ) = Id −R̂_1(τ)
where R̂_1(τ), R̂_2(τ) are families of smoothing operators as in Proposition 2.5. Applying that result to the first equation gives a new meromorphic right inverse
Q̂(τ) = Q̂_0(τ)(Id −R̂_2(τ))^{-1} = Q̂_0(τ) − Q̂_0(τ) M(τ)
where the first term is entire and the second is a meromorphic family of smoothing operators with finite rank residues. The same argument applied to the second equation gives a left inverse, but this shows that Q̂(τ) must be a two-sided inverse.
Thus we have proved everything except the locations of the poles of Q̂(τ) — which are only constrained by (2.26) instead of (2.6). However,
we can apply the same argument to Pθ (z, Dt , Dz ) = P (z, eiθ Dt , Dz ) for

|θ| < δ, δ > 0 small, since P_θ stays elliptic. This shows that the poles of Q̂(τ) lie in a set of the form (2.6). 

3. Invertibility
We are now in a position to characterize those t-translation-invariant
differential operators which give isomorphisms on the translation-invariant
Sobolev spaces.
Theorem 3.1. An element P ∈ Diff m ti (R × M ; E) gives an isomor-
phism (2.1) (or equivalently is Fredholm) if and only if it is elliptic and
D ∩ R = ∅, i.e. P̂ (τ ) is invertible for all τ ∈ R.
Proof. We have already done most of the work for the important
direction for applications, which is that the ellipticity of P and the
invertibility at P̂ (τ ) for all τ ∈ R together imply that (2.1) is an
isomorphism for any s ∈ R.
Recall that the ellipticity of P leads to a parametrix Q which is
translation-invariant and has the mapping property we want, namely
(2.17).
To prove the same estimate for the true inverse (and its existence)
consider the difference
(3.1) P̂ (τ )−1 − Q̂(τ ) = R̂(τ ), τ ∈ R.
Since P̂ (τ ) ∈ Diff m (M ; E) depends smoothly on τ ∈ R and Q̂(τ ) is a
parametrix for it, we know that
(3.2) R̂(τ ) ∈ C ∞ (R; Ψ−∞ (M ; E))
is a smoothing operator on M which depends smoothly on τ ∈ R as a
parameter. On the other hand, from (2.32) we also know that for large
real τ,
P̂ (τ )−1 − Q̂(τ ) = Q̂(τ )M (τ )
where M (τ ) satisfies the estimates (2.27). It follows that Q̂(τ )M (τ )
also satisfies these estimates and (3.2) can be strengthened to
(3.3) sup kτ k R̂(τ, ·, ·)kC p < ∞ ∀ p, k.
τ ∈R

That is, the kernel R̂(τ ) ∈ S(R; C ∞ (M 2 ; Hom(E))). So if we define the


t-translation-invariant operator
(3.4) Rf(t, z) = (2π)^{−1} ∫ e^{itτ} R̂(τ) f̂(τ, ·) dτ

by inverse Fourier transform then


(3.5) R : Htis (R × M ; E2 ) −→ Hti∞ (R × M ; E1 ) ∀ s ∈ R.

It certainly suffices to show this for s < 0 and then we know that the
Fourier transform gives a map
(3.6) F : Htis (R × M ; E2 ) −→ hτ i|s| L2 (R; H −|s| (M ; E2 )).
Since the kernel R̂(τ ) is rapidly decreasing in τ, as well as being smooth,
for every N > 0,
(3.7) R̂(τ) : ⟨τ⟩^{|s|} L²(R; H^{−|s|}(M; E2)) −→ ⟨τ⟩^{−N} L²(R; H^N(M; E2))
and inverse Fourier transform maps
F^{−1} : ⟨τ⟩^{−N} L²(R; H^N(M; E2)) −→ H^N_ti(R × M; E2)
which gives (3.5).
Thus Q + R has the same property as Q in (2.17). So it only
remains to check that Q + R is a two-sided inverse of P and it is
enough to do this on S(R × M ; Ei ) since these subspaces are dense
in the Sobolev spaces. This in turn follows from (3.1) by taking the
Fourier transform. Thus we have shown that the invertibility of P
follows from its ellipticity and the invertibility of P̂ (τ ) for τ ∈ R.
The converse statement is less important but certainly worth know-
ing! If P is an isomorphism as in (2.1), even for one value of s, then
it must be elliptic — this follows as in the compact case since it is
everywhere a local statement. Then if P̂ (τ ) is not invertible for some
τ ∈ R we know, by ellipticity, that it is Fredholm and, by the stability
of the index, of index zero (since P̂ (τ ) is invertible for a dense set of
τ ∈ C). There is therefore some τ0 ∈ R and f0 ∈ C ∞ (M ; E2 ), f0 6= 0,
such that
(3.8) P̂ (τ0 )∗ f0 = 0.
It follows that f0 is not in the range of P̂ (τ0 ). Then, choose a cut off
function, ρ ∈ Cc∞ (R) with ρ(τ0 ) = 1 (and supported sufficiently close
to τ0 ) and define f ∈ S(R × M ; E2 ) by
(3.9) fˆ(τ, ·) = ρ(τ )f0 (·).
Then f ∉ P · H^s_ti(R × M; E1) for any s ∈ R. To see this, suppose u ∈ H^s_ti(R × M; E1) has
(3.10) P u = f ⇒ P̂ (τ )û(τ ) = fˆ(τ )
where û(τ ) ∈ hτ i|s| L2 (R; H −|s| (M ; E1 )). The invertibility of P (τ ) for
τ 6= τ0 on supp(ρ) (chosen with support close enough to τ0 ) shows that
û(τ ) = P̂ (τ )−1 fˆ(τ ) ∈ C ∞ ((R\{τ0 }) × M ; E1 ).

Since we know that P̂ (τ )−1 − Q̂(τ ) = R̂(τ ) is a meromorphic family


of smoothing operators it actually follows that û(τ) is meromorphic in
τ near τ0 in the sense that
(3.11) û(τ) = Σ_{j=1}^k (τ − τ_0)^{−j} u_j + v(τ)

where the u_j ∈ C^∞(M; E1) and v ∈ C^∞((τ_0 − ε, τ_0 + ε) × M; E1). Now,


one of the uj is not identically zero, since otherwise P̂ (τ0 )v(τ0 ) = f0 ,
contradicting the choice of f0 . However, a function such as (3.11) is not
locally in L2 with values in any Sobolev space on M, which contradicts
the existence of u ∈ Htis (R × M ; E1 ).
This completes the proof for invertibility of P. To get the Fredholm
version it suffices to prove that if P is Fredholm then it is invertible.
Since the arguments above easily show that the null space of P is empty
on any of the Htis (R×M ; E1 ) spaces and the same applies to the adjoint,
we easily conclude that P is an isomorphism if it is Fredholm. 
This result allows us to deduce similar invertibility conditions on
exponentially-weighted Sobolev spaces. Set
(3.12) e^{at} H^s_{ti}(R × M; E) = {u ∈ H^s_{loc}(R × M; E); e^{−at}u ∈ H^s_{ti}(R × M; E)}
for any C ∞ vector bundle E over M. The translation-invariant differ-
ential operators also act on these spaces.
Lemma 3.2. For any a ∈ R, P ∈ Diff^m_{ti}(R × M; E) defines a continuous linear operator
(3.13) P : e^{at} H^{s+m}_{ti}(R × M; E₁) −→ e^{at} H^{s}_{ti}(R × M; E₂).
Proof. We already know this for a = 0. To reduce the general
case to this one, observe that (3.13) just means that
(3.14) P · eat u ∈ eat Htis (R × M ; E2 ) ∀ u ∈ Htis (R × M ; E1 )
with continuity meaning just continuous dependence on u. However,
(3.14) in turn means that the conjugate operator
(3.15) Pa = e−at · P · eat : Htis+m (R × M ; E1 ) −→ Htis (R × M ; E2 ).
Conjugation by an exponential is actually an isomorphism
(3.16) Diff^m_{ti}(R × M; E) ∋ P ↦ e^{−at} P e^{at} ∈ Diff^m_{ti}(R × M; E).
To see this, note that elements of Diff^j(M; E) commute with multiplication by e^{at} and
(3.17) e−at Dt eat = Dt − ia

which gives (3.16).
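As a quick check of (3.17), using the convention D_t = −i∂/∂t:
e^{−at} D_t(e^{at}u) = e^{−at}(−i)(a e^{at}u + e^{at}∂_t u) = −iau + D_t u = (D_t − ia)u,
and the same computation applied repeatedly handles the higher powers of D_t, which is why the conjugated operator stays in Diff^m_{ti}(R × M; E).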


The result now follows. 
Proposition 3.3. If P ∈ Diff m ti (R×M ; E) is elliptic then as a map
(3.13) it is invertible precisely for
(3.18) a ∉ −Im(D), D = D(P) ⊂ C,
that is, a is not the negative of the imaginary part of an element of D.
Note that the set − Im(D) ⊂ R, for which invertibility fails, is
discrete. This follows from the discreteness of D and the estimate (2.6).
Thus in Fig ?? invertibility on the space with weight e^{at} corresponds
exactly to the horizontal line with Im τ = −a missing D.
Proof. This is direct consequence of (??) and the discussion around
(3.15). Namely, P is invertible as a map (3.13) if and only if Pa is in-
vertible as a map (2.1) so, by Theorem 3.1, if
and only if
D(Pa ) ∩ R = ∅.
From (3.17), D(Pa ) = D(P ) + ia so this condition is just D(P ) ∩ (R −
ia) = ∅ as claimed. 
Although this is a characterization of the Fredholm properties on
the standard Sobolev spaces, it is not the end of the story, as we shall
see below.
One important thing to note is that R has two ends. The exponen-
tial weight eat treats these differently – since if it is big at one end it
is small at the other – and in fact we (or rather you) can easily define
doubly-exponentially weighted spaces and get similar results for those.
Since this is rather an informative extended exercise, I will offer some
guidance.
Definition 3.4. Set
(3.19) H^{s,a,b}_{ti,exp}(R × M; E) = {u ∈ H^s_{loc}(R × M; E); χ(t)e^{−at}u ∈ H^s_{ti}(R × M; E), (1 − χ(t))e^{bt}u ∈ H^s_{ti}(R × M; E)}
where χ ∈ C ∞ (R), χ = 1 in t > 1, χ = 0 in t < −1.
Exercises.
(1) Show that the spaces in (3.19) are independent of the choice of
χ, are all Hilbertable (are complete with respect to a Hilbert
norm) and show that if a + b ≥ 0
s,a,b
(3.20) Hti,exp (R × M ; E) = eat Htis (R × M ; E) + e−bt Htis (R × M ; E)

whereas if a + b ≤ 0 then
s,a,b
(3.21) Hti,exp (R × M ; E) = eat Htis (R × M ; E) ∩ e−bt Htis (R × M ; E).
(2) Show that any P ∈ Diff^m_{ti}(R × M; E) defines a continuous linear map for any s, a, b ∈ R
(3.22) P : H^{s+m,a,b}_{ti,exp}(R × M; E₁) −→ H^{s,a,b}_{ti,exp}(R × M; E₂).

(3) Show that the standard L2 pairing, with respect to dt, a smooth
positive density on M and an inner product on E extends to
a non-degenerate bilinear pairing
s,a,b −s,−a,−b
(3.23) Hti,exp (R × M ; E) × Hti,exp (R × M ; E) −→ C
for any s, a and b. Show that the adjoint of P with respect to
this pairing is P ∗ on the ‘negative’ spaces – you can use this
to halve the work below.
(4) Show that if P is elliptic then (3.22) is Fredholm precisely
when
(3.24) a ∉ −Im(D) and b ∉ Im(D).
Hint:- Assume for instance that a+b ≥ 0 and use (3.20). Given
(3.24) a parametrix for P can be constructed by combining the
inverses on the single exponential spaces
−1
(3.25) Qa,b = χ0 Pa−1 χ + (1 − χ00 )P−b (1 − χ)
where χ is as in (3.19) and χ0 and χ00 are similar but such that
χ0 χ = 1, (1 − χ00 )(1 − χ) = 1 − χ.
(5) Show that P is an isomorphism if and only if
a+b ≤ 0 and [a, −b]∩− Im(D) = ∅ or a+b ≥ 0 and [−b, a]∩− Im(D) = ∅.
(6) Show that if a + b ≤ 0 and (3.24) holds then
ind(P) = dim null(P) = Σ_{τ_i ∈ D∩(R×[b,−a])} Mult(P, τ_i)
where Mult(P, τ_i) is the algebraic multiplicity of τ_i as a ‘zero’ of P̂(τ), namely the dimension of the generalized null space
Mult(P, τ_i) = dim { u = Σ_{p=0}^{N} u_p(z) D_τ^p δ(τ − τ_i); P̂(τ)u(τ) ≡ 0 }.
(A minimal example of this multiplicity is sketched just after this list of exercises.)

(7) Characterize these multiplicities in a more algebraic way. Namely,


if τ 0 is a zero of P (τ ) set E0 = null P (τ 0 ) and F0 = C ∞ (M ; E2 )/P (τ 0 )C ∞ (M ; E1 ).
Since P (τ ) is Fredholm of index zero, these are finite dimen-
sional vector spaces of the same dimension. Let the derivatives
of P be Ti = ∂ i P/∂τ i at τ = τ 0 Then define R1 : E0 −→ F0

as T1 restricted to E0 and projected to F0 . Let E1 be the


null space of R1 and F1 = F0 /R1 E0 . Now proceed inductively
and define for each i the space Ei as the null space of Ri ,
Fi = Fi−1 /Ri Ei−1 and Ri+1 : Ei −→ Fi as Ti restricted to Ei
and projected to Fi . Clearly Ei and Fi have the same, finite,
dimension which is non-increasing as i increases. The prop-
erties of P (τ ) can be used to show that for large enough i,
Ei = Fi = {0} and

(3.26) Mult(P, τ′) = Σ_{i≥0} dim(E_i)

where the sum is in fact finite.


(8) Derive, by duality, a similar formula for the index of P when
a + b ≥ 0 and (3.24) holds, showing in particular that it is
injective.
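For orientation, here is a minimal example of the multiplicity in (6), with M a single point so that P̂(τ) is just a scalar function (this example is not worked out in the text): take P = D_t, so P̂(τ) = τ with its only zero at τ′ = 0. Then
E₀ = null P̂(0) = C, F₀ = C, T₁ = ∂P̂/∂τ = 1, R₁ = Id,
so E₁ = {0} and (3.26) gives Mult(D_t, 0) = dim E₀ = 1; equivalently the generalized null space is spanned by δ(τ), since τδ(τ) ≡ 0.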

4. Resolvent operator
Addenda to Chapter 7
More?
• Why – manifold with boundary later for Euclidean space, but
also resolvent (Photo-C5-01)
• Hölder type estimates – Photo-C5-03. Gives interpolation.
As already noted even a result such as Proposition 3.3 and the
results in the exercises above by no means exhausts the possibile real-
izations of an element P ∈ Diff mti (R × M ; E) as a Fredholm operator.
Necessarily these other realization cannot simply be between spaces
like those in (3.19). To see what else one can do, suppose that the
condition in Theorem 3.1 is violated, so
(4.1) D(P) ∩ R = {τ₁, . . . , τ_N} ≠ ∅.
To get a Fredholm operator we need to change either the domain or the
range space. Suppose we want the range to be L2 (R×M ; E2 ). Now, the
condition (3.24) guarantees that P is Fredholm as an operator (3.22).
So in particular
m,, 0,,
(4.2) P : Hti−exp (R × M ; E1 ) −→ Hti- exp (R × M ; E2 )

is Fredholm for all  > 0 sufficiently small (becuase D is discrete). The


image space (which is necessarily the range in this case) just consists
of the sections of the form exp(a|t|)f with f in L2 . So, in this case the

range certainly contains L2 so we can define


(4.3) Dom_AS(P) = {u ∈ H^{m,ε,ε}_{ti-exp}(R × M; E₁); Pu ∈ L²(R × M; E₂)}, ε > 0 sufficiently small.
This space is independent of ε > 0 if it is taken small enough, so the
same space arises by taking the intersection over ε > 0.
Proposition 4.1. For any elliptic element P ∈ Diff^m_{ti}(R × M; E) the space in (4.3) is a Hilbertable space and
(4.4) P : DomAS (P ) −→ L2 (R × M ; E2 ) is Fredholm.
I have not made the assumption (4.1) since it is relatively easy to see
that if D ∩ R = ∅ then the domain in (4.3) reduces again to Htim (R ×
M ; E1 ) and (4.4) is just the standard realization. Conversely of course
under the assumption (4.1) the domain in (4.4) is strictly larger than
the standard Sobolev space. To see what it actually is requires a bit of
work but if you did the exercises above you are in a position to work
this out! Here is the result when there is only one pole of P̂ (τ ) on the
real line and it has order one.
Proposition 4.2. Suppose P ∈ Diff m ti (R × M ; E) is elliptic, P̂ (τ )
is invertible for τ ∈ R \ {0} and in addition τ P̂ (τ )−1 is holomorphic
near 0. Then the Atiyah-Singer domain in (4.4) is
(4.5) Dom_AS(P) = { u = u₁ + u₂; u₁ ∈ H^m_{ti}(R × M; E₁),
u₂ = f(t)v, v ∈ C^∞(M; E₁), P̂(0)v = 0, f(t) = ∫₀ᵗ g(t)dt, g ∈ H^{m−1}(R) }.

Notice that the ‘anomalous’ term here, u₂, need not be square-integrable. In fact for any δ > 0 the power ⟨t⟩^{1/2−δ} v ∈ ⟨t⟩^{1−δ} L²(R × M; E₁) is included and conversely
(4.6) f ∈ ∩_{δ>0} ⟨t⟩^{1+δ} H^{m−1}(R).
One can say a lot more about the growth of f if desired but it is
generally quite close to htiL2 (R).
Domains of this sort are sometimes called ‘extended L2 domains’ –
see if you can work out what happens more generally.
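For orientation, here is what Proposition 4.2 gives in the simplest case (not in the text), with M a single point and P = D_t, so P̂(τ) = τ and τ P̂(τ)^{−1} = 1 is certainly holomorphic near 0. Then m = 1 and (4.5) reads
Dom_AS(D_t) = { u = u₁ + f; u₁ ∈ H¹(R), f(t) = ∫₀ᵗ g(s) ds, g ∈ L²(R) },
so D_t u ∈ L²(R) although u itself may grow like |t|^{1/2} (by Cauchy–Schwarz applied to f) and so need not be square-integrable.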
CHAPTER 8

Manifolds with boundary

• Dirac operators – Photos-C5-16, C5-17.


• Homogeneity etc Photos-C5-18, C5-19, C5-20, C5-21, C5-23,
C5-24.

1. Compactifications of R.
As I will try to show by example later in the course, there are
I believe considerable advantages to looking at compactifications of
non-compact spaces. These advantages show up last in geometric and
analytic considerations. Let me start with the simplest possible case,
namely the real line. There are two standard compactifications which
one can think of as ‘exponential’ and ‘projective’. Since there is only
one connected compact manifold with boundary compactification cor-
responds to the choice of a diffeomorphism onto the interior of [0, 1]:

(1.1) γ : R −→ [0, 1], γ(R) = (0, 1), γ^{−1} : (0, 1) −→ R, γ, γ^{−1} C^∞.
In fact it is not particularly pleasant to have to think of the global
maps γ, although we can. Rather we can think of separate maps
γ+ : (T+ , ∞) −→ [0, 1]
(1.2)
γ− : (T− , −∞) −→ [0, 1]

which both have images (0, x± ) and as diffeomorphism other than signs.
In fact if we want the two ends to be the ‘same’ then we can take
γ− (t) = γ+ (−t). I leave it as an exercise to show that γ then exists
with
(1.3) γ(t) = γ₊(t) for t ≫ 0, γ(t) = 1 − γ₋(t) for t ≪ 0.

So, all we are really doing here is identifying a ‘global coordinate’


γ₊^∗x near ∞ and another near −∞. The two choices I refer to above

are
x = e−t exponential compactification
(CR.4)
x = 1/t projective compactification .
Note that these are alternatives!
Rather than just consider R, I want to consider R × M, with M
compact, as discussed above.
Lemma 1.1. If R : H −→ H is a compact operator on a Hilbert
space then Id −R is Fredholm.
Proof. A compact operator is one which maps the unit ball (and
hence any bounded subset) of H onto a precompact set, a set with
compact closure. The unit ball in the null space of Id −R is
{u ∈ H; kuk = 1 , u = Ru} ⊂ R{u ∈ H; kuk = 1}
and is therefore precompact. Since it is closed, it is compact and any
Hilbert space with a compact unit ball is finite dimensional. Thus the
null space of (Id −R) is finite dimensional.
Consider a sequence un = vn − Rvn in the range of Id −R and
suppose un → u in H. We may assume u 6= 0, since 0 is in the range,
and by passing to a subsequence suppose that of γ on ?? fields. Clearly
γ(t) = e−t ⇒ γ∗ (∂t ) = −x(∂x )
(CR.5)
γ̃(t) = 1/t ⇒ γ̃∗ (∂t ) = −s2 ∂s
where I use ‘s’ for the variable in the second case to try to reduce
confusion, it is just a variable in [0, 1]. Dually
 
(CR.6) γ^∗(dx/x) = −dt, γ̃^∗(ds/s²) = −dt
in the two cases. The minus signs just come from the fact that both
γ’s reverse orientation.
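As a quick check of (CR.5) by the chain rule: x = e^{−t} gives ∂_t = (dx/dt)∂_x = −e^{−t}∂_x = −x∂_x, while s = 1/t gives ∂_t = (ds/dt)∂_s = −t^{−2}∂_s = −s²∂_s; (CR.6) is then just the dual statement, since (dx/x)(−x∂_x) = −1 = (−dt)(∂_t).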
Proposition 1.2. Under exponential compactification the translation-
invariant Sobolev spaces on R × M are identified with
(1.4) H^k_b([0, 1] × M) = { u ∈ L²([0, 1] × M; (dx/x) ν_M); ∀ ℓ, p ≤ k,
P_p ∈ Diff^p(M), (xD_x)^ℓ P_p u ∈ L²([0, 1] × M; (dx/x) ν_M) }

for k a positive integer, dim M = n,
(1.5) H^s_b([0, 1] × M) = { u ∈ L²([0, 1] × M; (dx/x) ν_M);
∬ |u(x, z) − u(x′, z′)|² / ( |log(x/x′)|² + ρ(z, z′)² )^{(n+1)/2+s} (dx/x)(dx′/x′) ν ν′ < ∞ }, 0 < s < 1

and for s < 0, k ∈ N s.t. 0 ≤ s + k < 1,
(1.6) H^s_b([0, 1] × M) = { u = Σ_{0≤j+p≤k} (xD_x)^j P_p u_{j,p},
P_p ∈ Diff^p(M), u_{j,p} ∈ H^{s+k}_b([0, 1] × M) }.

Moreover the L² pairing with respect to the measure (dx/x) ν extends by continuity from the dense subspace C_c((0, 1) × M) to a non-degenerate pairing
(1.7) H^s_b([0, 1] × M) × H^{−s}_b([0, 1] × M) ∋ (u, v) ↦ ∫ u · v (dx/x) ν ∈ C.


Proof. This is all just translation of the properties of the space


Htis (R
× M ) to the new coordinates. 

Note that there are other properties I have not translated into this
new setting. There is one additional fact which it is easy to check.
Namely C ∞ ([0, 1] × M ) acts as multipliers on all the spaces Hbs ([0, 1] ×
M ). This follows directly from Proposition 1.2;
(CR.12)
C ∞ ([0, 1] × M ) × Hbs ([0, 1] × M ) 3 (ϕ, u) 7→ ϕu ∈ Hbs ([0, 1] × M ) .
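A simple family of examples, as a check on the definitions (not in the text): for Re a > 0 the function x^a lies in H^k_b([0, 1] × M) for every k, since (xD_x)^ℓ x^a = (−ia)^ℓ x^a and ∫₀¹ x^{2 Re a} dx/x < ∞; by contrast log x is not even in H^0_b([0, 1] × M), because ∫₀¹ (log x)² dx/x diverges.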
What about the ‘b’ notation? Notice that (1−x)x∂x and the smooth
vector fields on M span, over C ∞ (X), for X = [0, 1] × M , all the vector
fields tangent to {x = 0} ∪ {x = 1}. Thus we can define the ‘boundary differential operators’ as
(CR.13) Diff^m_b([0, 1] × M; E) = { P = Σ_{0≤j+p≤m} a_{j,p}(x)((1 − x)xD_x)^j P_p, P_p ∈ Diff^p(M; E) }

and conclude from (CR.12) and the earlier properties that


(CR.14) P ∈ Diff^m_b(X; E) ⇒ P : H^{s+m}_b(X; E) → H^s_b(X; E) ∀ s ∈ R.


Theorem 1.3. A differential operator as in (1.3) is Fredholm if
and only if it is elliptic in the interior and the two “normal operators’
X
(CR.16) I± (P ) = aj,p (x±1 )(±Dk )i Pp x+ = 0 , x− = 1
0≤j+p≤m

derived from (CR.13), are elliptic and invertible on the translation-


invariant Sobolev spaces.
Proof. As usual we are more interested in the sufficiency of these
conditions than the necessity. To prove this result by using the present
(slightly low-tech) methods requires going back to the beginning and
redoing most of the proof of the Fredholm property for elliptic operators
on a compact manifold.
The first step then is a priori bounds. What we want to show is
that if the conditions of the theorem hold then for u ∈ Hbs+m (X; E),
x = R × M , ∃C > 0 s.t.
(CR.17) kukm+s ≤ Cs kP uks + Cs kx(1 − x)uks−1+m .
Notice that the norm on the right has a factor, x(1 − x), which van-
ishes at the boundary. Of course this is supposed to come from the
invertibility of I± (P ) in R(0) and the ellipticity of P .
By comparison I± (P ) : H~s+m (R × M ) → H~s (R × M ) are isomor-
phisms — necessary and sufficient conditions for this are given in The-
orem ???. We can use the compactifying map γ to convert this to a
statement as in (CR.17) for the operators
(CR.18) P± ∈ Diff m
b (X) , P± = I± (P )(γ∗ Dt , ·) .

Namely
(CR.19) kukm+s ≤ Cs kP± uks
where these norms, as in (CR.17) are in the Hbs spaces. Note that
near x = 0 or x = 1, P± are obtained by substituting Dt 7→ xDx or
(1 − x)Dx in (CR.17). Thus
(CR.20) P − P± ∈ (x − x± ) Diff m
b (X) , x± = 0, 1
have coefficients which vanish at the appropriate boundary. This is
precisely how (CR.16) is derived from (CR.13). Now choose ϕ ∈

C ∞ , (0, 1) × M which is equal to 1 on a sufficiently large set (and has


0 ≤ ϕ ≤ 1) so that
(CR.21) 1 − ϕ = ϕ+ + ϕ− , ϕ± ∈ C ∞ ([0, 1] × M )
have supp(ϕ± ) ⊂ {|x − x± | ≤ ), 0 ≤ ϕ+ 1.
By the interior elliptic estimate,
(CR.22) kϕuks+m ≤ Cs kϕP uks + Cs0 kψuks−1+m
where ψ ∈ Cc∞ ((0, 1) × M ). On the other hand, because of (CR.20)
(CR.23)
kϕ± ukm+s ≤ Cs kϕ± P± uks + Cs k[ϕ± , P± u]ks
≤ Cs kϕ± P uks + Cs ϕ± (P − P± )uks + Cs k[ϕ± , P± ]uks .
Now, if we can choose the support at ϕ± small enough — recalling that
Cs truly depends on I± (Pt ) and s — then the second term on the right
in (CR.23) is bounded by 14 kukm+s , since all the coefficients of P − P±
are small on the support off ϕ± . Then (CR.24) ensures that the final
term in (CR.17), since the coefficients vanish at x = x± .
The last term in (CR.22) has a similar bound since ψ has compact support in the interior. Thus, combining (CR.22) and (CR.23) gives the desired bound (CR.17).
To complete the proof that P is Fredholm, we need another property
of these Sobolev spaces.
Lemma 1.4. The map
(1.8) Xx(1 − x) : Hbs (X) −→ Hbs−1 (X)
is compact.
Proof. Follow it back to R × M !

Now, it follows from the a priori estimate (CR.17) that, as a map
(CR.14), P has finite dimensional null space and closed range. This
is really the proof of Proposition ?? again. Moreover the adjoint of
P with respect to dxx
V, P ∗ , is again elliptic and satisfies the condition
of the theorem, so it too has finite-dimensional null space. Thus the
range of P has finite codimension so it is Fredholm.

A corresponding theorem, with similar proof follows for the cusp
compactification. I will formulate it later.
2. Basic properties
A discussion of manifolds with boundary goes here.

3. Boundary Sobolev spaces


Generalize results of Section 1 to arbitrary compact manifolds with
boundary.
4. Dirac operators
Euclidean and then general Dirac operators
5. Homogeneous translation-invariant operators
One application of the results of Section 3 is to homogeneous constant-
coefficient operators on Rn , including the Euclidean Dirac operators in-
troduced in Section 4. Recall from Chapter 4 that an elliptic constant-
coefficient operator is Fredholm, on the standard Sobolev spaces, if and
only if its characteristic polynomial has no real zeros. If P is homoge-
neous
(5.1) Pij (tζ) = tm Pij (ζ) ∀ ζ ∈ Cn , t ∈ R ,
and elliptic, then the only real zero (of the determinant) is at ζ = 0. We
will proceed to discuss the radial compactification of Euclidean space
to a ball, or more conveniently a half-sphere
(5.2) γR : Rn ,→ Sn,1 = {Z ∈ Rn+1 ; |Z| = 1 , Z0 ≥ 0} .
Transferring P to Sn,1 gives
(5.3) P_R ∈ Z_0^m Diff^m_b(S^{n,1}; C^N)
which is elliptic and to which the discussion in Section 3 applies.
In the 1-dimensional case, the map (5.2) reduces to the second
‘projective’ compactification of R discussed above. It can be realized
globally by
(5.4) γ_R(z) = ( 1/(1 + |z|²)^{1/2}, z/(1 + |z|²)^{1/2} ) ∈ S^{n,1}.

Geometrically this corresponds to a form of stereographic projection.


Namely, if Rn 3 z 7→ (1, z) ∈ Rn+1 is embedded as a ‘horizontal
plane’ which is then projected radially onto the sphere (of radius one
around the origin) one arrives at (5.4). It follows easily that γR is a
diffeomorphism onto the open half-sphere with inverse
(5.5) z = Z′/Z₀, Z′ = (Z₁, . . . , Z_n).
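As a quick consistency check on (5.4) and (5.5): if Z = γ_R(z) then Z₀ = (1 + |z|²)^{−1/2} and Z′ = z(1 + |z|²)^{−1/2}, so Z′/Z₀ = z; conversely |Z|² = Z₀² + |Z′|² = 1, so |z|² = Z₀^{−2} − 1 and applying (5.4) to Z′/Z₀ returns (Z₀, Z′).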
Whilst (5.4) is concise it is not a convenient form of the compacti-
fication as far as computation is concerned. Observe that
x
x 7→ √
1 + x2

is a diffeomorphism of neighborhoods of 0 ∈ R. It follows that Z0 , the


first variable in (5.4) can be replaced, near Z0 = 0, by 1/|z| = x. That
is, there is a diffeomorphism
(5.6) {0 ≤ Z0 ≤ } ∩ Sn,1 ↔ [0, δ]x × Sθn−1
which composed with (5.4) gives x = 1/|z| and θ = z/|z|. In other
words the compactification (5.4) is equivalent to the introduction of
polar coordinates near infinity on Rn followed by inversion of the radial
variable.
Lemma 5.1. If P = (Pij (Dz )) is an N × N matrix of constant
coefficient operators in Rn which is homogeneous of degree −m then
(5.3) holds after radial compactification. If P is elliptic then PR is
elliptic.
Proof. This is a bit tedious if one tries to do it by direct com-
putation. However, it is really only the homogeneity that is involved.
Thus if we use the coordinates x = 1/|z| and θ = z/|z| valid near the
boundary of the compactification (i.e., near ∞ on Rn ) then
(5.7) P_{ij} = Σ_{0≤ℓ≤m} D_x^ℓ P_{ℓ,i,j}(x, θ, D_θ), P_{ℓ,i,j} ∈ C^∞((0, δ)_x; Diff^{m−ℓ}(S^{n−1})).

Notice that we do know that the coefficients are smooth in 0 < x < δ,
since we are applying a diffeomorphism there. Moreover, the operators
P`,i,j are uniquely determined by (5.7).
So we can exploit the assumed homogeneity of Pij . This means that
for any t > 0, the transformation z 7→ tz gives
(5.8) Pij f (tz) = tm (Pij f )(tz) .
Since |tz| = t|z|, this means that the transformed operator must satisfy
(5.9)
X X
Dx` P`,i,j (x, θ, Dθ )f (x/t, θ) = tm ( D` P`,i,j (·, θ, Dθ )f (·, θ))(x/t) .
` `

Expanding this out we conclude that


(5.10) x−m−` P`,i,j (x, θ, Dθ ) = P`,i,j (θ, Dθ )
is independent of x. Thus in fact (5.7) becomes
X
(5.11) Pij = xm x` Dx` P`,j,i (θ, Dθ ) .
0≤j≤`

Since we can rewrite


(5.12) x^ℓ D_x^ℓ = Σ_{0≤j≤ℓ} C_{ℓ,j} (xD_x)^j

(with explicit coefficients if you want) this gives (5.3). Ellipticity in this sense, meaning that
(5.13) x^{−m} P_R ∈ Diff^m_b(S^{n,1}; C^N)
is elliptic, follows from (5.11) and the original ellipticity of P. Namely, when expressed in terms of xD_x the coefficients of (5.13) are independent of x (this of course just reflects the homogeneity), ellipticity in x > 0 follows by the coordinate independence of ellipticity, and hence extends down to x = 0. 
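As a quick illustration of (5.12) in the first nontrivial case (with the convention D_x = −i∂_x): for ℓ = 2,
x²D_x² = (xD_x)² + i(xD_x),
since (xD_x)²u = xD_x(xD_x u) = x²D_x²u − i xD_x u; so C_{2,2} = 1 and C_{2,1} = i.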
Now the coefficient function Z0w+m in (5.3) always gives an isomor-
phism
(5.14) ×Z0m : Z0w Hbs (Sn,1 ) −→ Z0w+m Hbs (Sn,1 ) .
Combining this with the results of Section 3 we find most of
Theorem 5.2. If P is an N × N matrix of constant coefficient
differential operators on Rn which is elliptic and homogeneous of degree
−m then there is a discrete set − Im(D(P )) ⊂ R such that
(5.15)
P : Z0w Hbm+s (Sn,1 ) −→ Z0w+m Hbs (Sn,1 ) is Fredholm ∀ w ∈
/ − Im(D(P ))
where (5.4) is used to pull these spaces back to Rn . Moreover,
P is injective for w ∈ [0, ∞) and
(5.16)
P is surjective for w ∈ (−∞, n − m] ∩ (− Im(D)(P )) .
Proof. The conclusion (5.15) is exactly what we get by applying
Theorem X knowing (5.3).
To see the specific restriction (5.16) on the null space and range,
observe that the domain spaces in (5.15) are tempered. Thus the null
space is contained in the null space on S 0 (Rn ). Fourier transform shows
that P (ζ)û(ζ) = 0. From the assumed ellipticity of P and homogeneity
it follows that supp(û(ζ)) ⊂ {0} and hence û is a sum of derivatives of
delta functions and finally that u itself is a polynomial. If w ≥ 0 the
domain in (5.15) contains no polynomials and the first part of (5.16)
follows.
The second part of (5.16) follows by a duality argument. Namely,
the adjoint of P with respect to L2 (Rn ), the usual Lebesgue space,
is P ∗ which is another elliptic homogeneous differential operator with
constant coefficients. Thus the first part of (5.16) applies to P ∗ . Using
the homogeneity of Lebesgue measure,
dx
(5.17) |dz| = n+1 · νθ near ∞
x
and the shift in weight in (5.15), the second part of (5.16) follows. 

One important consequence of this is a result going back to Niren-


berg and Walker (although expressed in different language).
Corollary 5.3. If P is an elliptic N × N matrix constant co-
efficient differential operator which is homogeneous of degree m, with
n > m, then the map (5.15) is an isomorphism for w ∈ (0, n − m).
In particular this applies to the Laplacian in dimensions n > 2
and to the constant coefficient Dirac operators discussed above in di-
mensions n > 1. In these cases it is also straightforward to compute
the index and to identify the surjective set. Namely, for a constant
coefficient Dirac operator
(5.18) D(P ) = iN0 ∪ i(n − m + N0 ) .
Figure goes here.

6. Scattering structure
Let me briefly review how the main result of Section 5 was arrived
at. To deal with a constant coefficient Dirac operator we first radially
compactified Rn to a ball, then peeled off a multiplicative factor Z0
from the operator showed that the remaining operator was Fredholm by
identifing a neighbourhood of the boundary with part of R×Sn−1 using
the exponential map to exploit the results of Section 1 near infinity.
Here we will use a similar, but different, procedure to treat a different
class of operators which are Fredholm on the standard Sobolev spaces.
Although we will only apply this in the case of a ball, coming from
Rn , I cannot resist carrying out the discussion for a general compact
manifold — since I think the generality clarifies what is going on.
Starting from a compact manifold with boundary, M, the first step is
essentially the reverse of the radial compactification of Rn .
Near any point on the boundary, p ∈ ∂M, we can introduce ‘ad-
missible’ coordinates, x, y₁, . . . , y_{n−1} where {x = 0} is the local form of
the boundary and y1 , . . . , yn−1 are tangential coordinates; we normalize
y1 = · · · = yn−1 = 0 at p. By reversing the radial compactification of
Rn I mean we can introduce a diffeomorphism of a neighbourhood of p
to a conic set in Rn :
(6.1) zn = 1/x , zj = yj /x , j = 1, . . . , n − 1 .
Clearly the ‘square’ |y| < , 0 < x <  is mapped onto the truncated
conic set
(6.2) zn ≥ 1/ , |z 0 | < |zn | , z 0 = (z1 , . . . , zn−1 ) .

Definition 6.1. We define spaces H^s_{sc}(M) for any compact manifold with boundary M by the requirements
(6.3) u ∈ H^s_{sc}(M) ⇐⇒ u ∈ H^s_{loc}(M \ ∂M) and R_j^∗(ϕ_j u) ∈ H^s(R^n)
for ϕj ∈ C ∞ (M ), 0 ≤ ϕi ≤ 1,
P
ϕi = 1 in a neighbourhood of the
boundary and where each ϕj is supported in a coordinate patch (??),
(6.2) with R given by (6.1).
Of course such a definition would not make much sense if it de-
pended on the choice of the partition of unity near the boundary {ϕi k
or the choice of coordinate. So really (6.1) should be preceded by such
an invariance statement. The key to this is the following observation.
Proposition 6.2. If we set Vsc (M ) = xVb (M ) for any compact
manifold with boundary then for any ψ ∈ C ∞ (M ) supported in a coor-
dinate patch (??), and any C ∞ vector field V on M
(6.4) ψV ∈ V_{sc}(M) ⇐⇒ ψV = Σ_{j=1}^{n} µ_j (R^{−1})^∗(D_{z_j}), µ_j ∈ C^∞(M).

Proof. The main step is to compute the form of Dzj in terms of


the coordinate obtained by inverting (6.1). Clearly
(6.5) Dzn = x2 Dx , Dzj = xDyj − yi x2 Dx , j < n .
Now, as discussed in Section 3, xDx and Dyj locally span Vb (M ), so
x2 Dx , xDyj locally span Vsc (M ). Thus (6.5) shows that in the singular
coordinates (6.1), Vsc (M ) is spanned by the Dz` , which is exactly what
(6.4) claims. 
Next let’s check what happens to Euclidean measure under R, ac-
tually we did this before:
|dx|
(SS.9) |dz| = n+1 νy .
x
Thus we can first identify what (6.3) means in the case of s = 0.
Lemma 6.3. For s = 0, Definition (6.1) unambiguously defines
 Z 
0 2 2 νM
(6.6) Hsc (M ) = u ∈ Lloc (M ) ; |u| n+1 < ∞
x
where νM is a positive smooth density on M (smooth up to the boundary
of course) and x ∈ C ∞ (M ) is a boundary defining function.
Proof. This is just what (6.3) and (SS.9) mean. 
Combining this with Proposition 6.2 we can see directly what (6.3)
means for kinN.

Lemma 6.4. If (6.3) holds for s = k ∈ N for any one such partition
0
of unity then u ∈ Hsc (M ) in the sense of (6.6) and
0
(6.7) V1 . . . Vj u ∈ Hsc (M ) ∀ Vi ∈ Vsc (M ) if j ≤ k ,
and conversely.
Proof. For clarity we can proceed by induction on k and re-
k−1 k−1
place (6.7) by the statements that u ∈ Hsc (M ) and V u ∈ Hsc (M )
∀V ∈ Vsc (M ). In the interior this is clear and follows immediately from
Proposition 6.2 provided we carry along the inductive statement that
(6.8) C ∞ (M ) acts by multiplication on Hsc
k
(M ) .

As usual we can pass to general s ∈ R by treating the cases 0 <
s < 1 first and then using the action of the vector fields.
Proposition 6.5. For 0 < s < 1 the condition (6.3) (for any one
0
partition of unity) is equivalent to requiring u ∈ Hsc (M ) and
|u(p) − u(p0 )|2 νM 0
ZZ
νM
(6.9) <∞
M ×M ρn+2s
sc xn+1 (x0 )n+1
where ρsc (p, p0 ) = χχ0 p(p, p0 ) + j ϕj ϕ0j hz − z 0 i.
P

Proof. Use local coordinates. 


Then for s ≥ 1 if k is the integral part of s, so 0 ≤ s − k < 1, k ∈ N,
s s−k
(6.10) u ∈ Hsc (M ) ⇐⇒ V1 , . . . , Vj u ∈ Hsc (M ) , Vi ∈ Vsc (M ) , j ≤ k
and for s < 0 if k ∈ N is chosen so that 0 ≤ k + s < 1, then
s s+k
u ∈ Hsc (M ) ⇔ ∃ Vj ∈ Hsc (M ) , j = 1, . . . , N ,
s−k
uj ∈ Hsc (M ) , Vj,i (M ) , 1 ≤ i ≤ `j ≤ k s.t.
(6.11) N
X
u = u0 + Vj,i · · · Vj,`j uj .
j=1

All this complexity is just because we are preceding in such a ‘low-


tech’ fashion. The important point is that these Sobolev spaces are
determined by the choice of ‘structure vector fields’, V ∈ Vsc (M ). I
leave it as an important exercise to check that
Lemma 6.6. For the ball, or half-sphere,
γR∗ Hsc
s
(Sn,1 ) = H s (Rn ) .

Thus on Euclidean space we have done nothing. However, my claim


is that we understand things better by doing this! The idea is that we
should Fourier analysis on Rn to analyse differential operators which
are made up out of Vsc (M ) on any compact manifold with boundary
M, and this includes Sn,1 as the radial compactification of Rn . Thus set
(6.12) Diff^m_{sc}(M) = { P : C^∞(M) −→ C^∞(M); ∃ f ∈ C^∞(M) and V_{i,j} ∈ V_{sc}(M) s.t. P = f + Σ_{i, 1≤j≤m} V_{i,1} · · · V_{i,j} }.

In local coordinates this is just a differential operator and it is smooth


up to the boundary. Since only scattering vector fields are allowed in
the definition such an operator is quite degenerate at the boundary. It
always looks like
X
(6.13) P = ak,α (x, y)(x2 Dx )k (xDy )α ,
k+|α|≤m

with smooth coefficients in terms of local coordinates (??).


Now, if we freeze the coefficients at a point, p, on the boundary of
M we get a polynomial
X
(6.14) σsc (P )(p) = ak,α (p)τ k η α .
k+|α|≤m

Note that this is not in general homogeneous since the lower order terms
are retained. Despite this one gets essentially the same polynomial at
each point, independent of the admissible coordinates chosen, as will
be shown below. Let’s just assume this for the moment so that the
condition in the following result makes sense.
Theorem 6.7. If P ∈ Diff m sc (M ; E) acts between vector bundles
over M, is elliptic in the interior and each of the polynomials (matrices)
(6.14) is elliptic and has no real zeros then
s+m s
(6.15) P : Hsc (M, E1 ) −→ Hsc (M ; E2 ) is Fredholm
for each s ∈ R and conversely.
Last time at the end I gave the following definition and theorem.
Definition 6.8. We define weighted (non-standard) Sobolev spaces
for (m, w) ∈ R2 on Rn by
(6.16)
H̃ m,w (Rn ) = {u ∈ Mloc
m
(Rn ); F ∗ (1 − χ)r−w u ∈ Htim (R × Sn−1 )}


where χ ∈ Cc∞ (Rn ), χ(y) = 1 in |y| < 1 and


(6.17) F : R × Sn−1 3 (t, θ) −→ (et , et θ) ∈ Rn \ {0}.

Theorem 6.9. If P = Σ_{i=1}^{n} Γ_i D_i, Γ_i ∈ M(N, C), is an elliptic, con-
stant coefficient, homogeneous differential operator of first order then
(6.18) P : H̃ m,w (Rn ) −→ H̃ m−1,w+1 (Rn ) ∀ (m, w) ∈ R2
is continuous and is Fredholm for w ∈ R \ D̃ where D̃ is discrete.
If P is a Dirac operator, which is to say explicitly here that the
coefficients are ‘Pauli matrices’ in the sense that
(6.19) Γ∗i = Γi , Γ2i = IdN ×N , ∀ i, Γi Γj + Γj Γi = 0, i 6= j,
then
(6.20) D̃ = −N0 ∪ (n − 2 + N0 )
and if n > 2 then for w ∈ (0, n − 2) the operator P in (6.18) is an
isomorphism.
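For example (a standard choice, not written out in the text), for n = 2 and N = 2 the Pauli matrices
Γ₁ = ( 0 1 ; 1 0 ), Γ₂ = ( 0 −i ; i 0 )
satisfy (6.19): each is self-adjoint, Γ₁² = Γ₂² = Id_{2×2}, and Γ₁Γ₂ + Γ₂Γ₁ = 0.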
I also proved the following result from which this is derived
Lemma 6.10. In polar coordinates on Rn in which Rn \ {0} '
(0, ∞) × Sn−1 , y = rθ,
(6.21) Dyj =
CHAPTER 9

Electromagnetism

1. Maxwell’s equations
Maxwell’s equations in a vacuum take the standard form
(1.1) div E = ρ, div B = 0, curl E = −∂B/∂t, curl B = ∂E/∂t + J
where E is the electric and B the magnetic field strength, both are
3-vectors depending on position z ∈ R3 and time t ∈ R. The external
quantities are ρ, the charge density which is a scalar, and J, the current
density which is a vector.
We will be interested here in stationary solutions for which E and
B are independent of time and with J = 0, since this also represents
motion in the standard description. Thus we arrive at
(1.2) div E = ρ, div B = 0, curl E = 0, curl B = 0.
The simplest interesting solutions represent charged particles, say
with the charge at the origin, ρ = cδ0 (z), and with no magnetic field,
B = 0. By identifying E with a 1-form, instead of a vector field on R3 ,
(1.3) E = (E1 , E2 , E3 ) =⇒ e = E1 dz1 + E2 dz2 + E3 dz3
we may identify curl E with the 2-form de,

(1.4) de = (∂E₂/∂z₁ − ∂E₁/∂z₂) dz₁∧dz₂ + (∂E₃/∂z₂ − ∂E₂/∂z₃) dz₂∧dz₃ + (∂E₁/∂z₃ − ∂E₃/∂z₁) dz₃∧dz₁.
Thus (1.2) implies that e is a closed 1-form, satisfying
(1.5) ∂E₁/∂z₁ + ∂E₂/∂z₂ + ∂E₃/∂z₃ = cδ₀(z).
By the Poincaré Lemma, a closed 1-form on R3 is exact, e = dp,
with p determined up to an additive constant. If e is smooth (which it

cannot be, because of (1.5)), then


(1.6) Z 1
0
p(z) − p(z ) = γ ∗e along γ : [0, 1] −→ R3 , γ(0) = z 0 , γ(1) = z.
0

It is reasonable to look for a particular p and 1-form e which satisfy


(1.5) and are smooth outside the origin. Then (1.6) gives a potential
which is well defined, up to an additive constant, outside 0, once z 0 is
fixed, since de = 0 implies that the integral of γ ∗ e along a closed curve
vanishes. This depends on the fact that R3 \{0} is simply connected.
So, modulo confirmation of these simple statements, it suffices to
look for p ∈ C^∞(R³\{0}) satisfying e = dp and (1.5), so
(1.7) Δp = −(∂²p/∂z₁² + ∂²p/∂z₂² + ∂²p/∂z₃²) = −cδ₀(z).
Then E is recovered from e = dp.
The operator ‘div’ can also be understood in terms of de Rham d
together with the Hodge star ∗. If we take R3 to have the standard ori-
entation and Euclidean metric dz12 + dz22 + dz32 , the Hodge star operator
is given on 1-forms by
(1.8) ∗dz1 = dz2 ∧ dz3 , ∗dz2 = dz3 ∧ dz1 , ∗dz3 = dz1 ∧ dz2 .
Thus ∗e is a 2-form,

(1.9) ∗e = E₁ dz₂∧dz₃ + E₂ dz₃∧dz₁ + E₃ dz₁∧dz₂
=⇒ d∗e = (∂E₁/∂z₁ + ∂E₂/∂z₂ + ∂E₃/∂z₃) dz₁∧dz₂∧dz₃ = (div E) dz₁∧dz₂∧dz₃.
The stationary Maxwell’s equations on e become
(1.10) d ∗ e = ρ dz1 ∧ dz2 ∧ dz3 , de = 0.
There is essential symmetry in (1.1) except for the appearance of the
“source” terms, ρ and J. To reduce (1.1) to two equations, analogous
to (1.10) but in 4-dimensional (Minkowski) space requires B to be
identified with a 2-form on R3 , rather than a 1-form. Thus, set

(1.11) β = B1 dz2 ∧ dz3 + B2 dz3 ∧ dz1 + B3 dz1 ∧ dz2 .


Then
(1.12) dβ = div B dz1 ∧ dz2 ∧ dz3
as follows from (1.9) and the second equation in (1.1) implies β is
closed.

Thus e and β are respectively a closed 1-form and a closed 2-form


on R3 . If we return to the general time-dependent setting then we may
define a 2-form on R4 by
(1.13) λ = e ∧ dt + β
where e and β are pulled back by the projection π : R4 → R3 . Com-
puting directly,
∂β
(1.14) dλ = d0 e ∧ dt + d0 β + ∧ dt
∂t
where d0 is now the differential on R3 . Thus
∂β
(1.15) dλ = 0 ⇔ d0 e + = 0, d0 β = 0
∂t
recovers two of Maxwell’s equations. On the other hand we can define
a 4-dimensional analogue of the Hodge star but corresponding to the
Minkowski metric, not the Euclidean one. Using the natural analogue
of the 3-dimensional Euclidean Hodge by formally inserting an i into
the t-component, gives
(1.16) ∗₄ dz₁∧dz₂ = i dz₃∧dt, ∗₄ dz₁∧dz₃ = i dt∧dz₂, ∗₄ dz₁∧dt = −i dz₂∧dz₃,
∗₄ dz₂∧dz₃ = i dz₁∧dt, ∗₄ dz₂∧dt = −i dz₃∧dz₁, ∗₄ dz₃∧dt = −i dz₁∧dz₂.

The other two of Maxwell’s equations then become


(1.17) d ∗4 λ = d(−i ∗ e + i(∗β) ∧ dt) = −i(ρ dz1 ∧ dz2 ∧ dz3 + j ∧ dt)
where j is the 1-form associated to J as in (1.3). For our purposes this
is really just to confirm that it is best to think of B as the 2-form β
rather than try to make it into a 1-form. There are other good reasons
for this, related to behaviour under linear coodinate changes.
Returning to the stationary setting, note that (1.7) has a ‘preferred’
solution
(1.18) p = 1/(4π|z|).
This is in fact the only solution which vanishes at infinity.
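As a quick check that this is a solution away from the origin (the full distributional statement at 0 is the usual divergence-theorem computation): with r = |z| ≠ 0,
∂_{z_i} r^{−1} = −z_i r^{−3}, so Σ_i ∂²_{z_i} r^{−1} = −3r^{−3} + 3|z|² r^{−5} = 0.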
Proposition 1.1. The only tempered solutions of (1.7) are of the
form
1
(1.19) p= + q, ∆q = 0, q a polynomial.
4π|z|

Proof. The only solutions are of the form (1.19) where q ∈ S 0 (R3 )
is harmonic. Thus qb ∈ S 0 (R3 ) satisfies |ξ|2 qb = 0, which implies that q
is a polynomial. 

2. Hodge Theory
The Hodge ∗ operator discussed briefly above in the case of R3 (and
Minkowski 4-space) makes sense in any oriented real vector space, V,
with a Euclidean inner product—that is, on a finite dimensional real
Hilbert space. Namely, if e1 , . . . , en is an oriented orthonormal basis
then
(2.1) ∗(ei1 ∧ · · · ∧ eik ) = sgn(i∗ )eik+1 ∧ · · · ein
extends by linearity to
(2.2) ∗ : Λ^k V −→ Λ^{n−k} V.
Proposition 2.1. The linear map (2.2) is independent of the ori-
ented orthonormal basis used to define it and so depends only on the
choice of inner product and orientation of V. Moreover,
(2.3) ∗² = (−1)^{k(n−k)} on Λ^k V.
Proof. Note that sgn(i∗ ), the sign of the permutation defined by
{i1 , . . . , in } is fixed by
(2.4) ei1 ∧ · · · ∧ ein = sgn(i∗ )e1 ∧ · · · ∧ en .
Thus, on the basis ei1 ∧ . . . ∧ ein of k V given by strictly increasing
V
sequences i1 < i2 < · · · < ik in {1, . . . , n},
(2.5) e∗ ∧ ∗e∗ = sgn(i∗ )2 e1 ∧ · · · ∧ en = e1 ∧ · · · ∧ en .
The standard inner product on k V is chosen so that this basis is
V
orthonormal. Then (2.5) can be rewritten
(2.6) eI ∧ ∗eJ = heI , eJ ie1 ∧ · · · ∧ en .
This in turn fixes ∗ uniquely since the pairing given by
Vk
V × k−1 V 3 (u, v) 7→ (u ∧ v)/e1 ∧···∧en
V
(2.7)
is non-degenerate, as can be checked on these bases.
Thus it follows from (2.6) that ∗ depends only on the choice of inner
product andV orientation as claimed, provided it is shown that the inner
product on k V only depends on that of V. This is a standard fact
following from the embedding
Vk
(2.8) V ,→ V ⊗k

as the totally antisymmetric part, the fact thatVV ⊗k has a natural inner
product and the fact that this induces one on k V after normalization
(depending on the convention used in (2.8). These details are omitted.

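For example, on R³ with the standard orientation (2.1) gives ∗e₁ = e₂ ∧ e₃, ∗(e₁ ∧ e₂) = e₃, ∗1 = e₁ ∧ e₂ ∧ e₃ and ∗(e₁ ∧ e₂ ∧ e₃) = 1, so ∗² = +1 in every degree, consistent with (2.3) since (−1)^{k(3−k)} = 1 for every k.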
Since ∗ is uniquely determined in this way, it necessarily depends
smoothly on the data, in particular the inner product. On an ori-
ented Riemannian manifold the induced inner product on Tp∗ M varies
smoothly with p (by assumption) so
∗ : kp M −→ n−k M , kp M = kp (Tp∗ M )
V V V V
(2.9) p

varies smoothly and so defines a smooth bundle map


∗ ∈ C ∞ (M ; k M , n−k M ).
V V
(2.10)
An oriented
V Riemannian manifold carries a natural volume form
ν ∈ C ∞ (M, n M ), and this allows (2.6) to be written in integral form:
Z Z
α ∧ ∗β ∀ α, β ∈ C ∞ (M, k M ).
V
(2.11) hα, βi ν =
M M

Lemma 2.2. On an oriented, (compact) Riemannian manifold the


adjoint of d with respect to the Riemannian inner product and volume
form is
(2.12) d^∗ = δ = (−1)^{k+n(n−k+1)} ∗ d ∗ on Λ^k M.
Proof. By definition,

(2.13) d : C ∞ (M, k M ) −→ C ∞ (M, k+1 M )


V V

=⇒ δ : C ∞ (M, k+1 M ) −→ C ∞ (M, k M ),


V V
Z Z
0
hα, δα0 i ν ∀ α ∈ C ∞ (M, k M ), α0 ∈ C ∞ (M, k+1 M ).
V V
hdα, α i ν =
M M

Applying (2.11) and using Stokes’ theorem, (and compactness of either


M or the support of at least one of α, α0 ),
Z Z
0
hδα, α i ν = dα ∧ ∗α0
ZM M
Z Z
0 0
= d(α∧∗α )+(−1) k+1
α∧d∗α = 0+(−1) k+1
hα, ∗−1 d∗α0 i ν.
M M M

Taking into account (2.3) to compute ∗−1 on n − k forms shows that


(2.14) δα0 = (−1)k+1+n(n−k) ∗ d ∗ on (k + 1)-forms
which is just (2.12) on k-forms. 

Notice that changing the orientation simply changes the sign of ∗


on all forms. Thus (2.12) does not depend on the orientation and as a
local formula is valid even if M is not orientable — since the existence
of δ = d∗ does not require M to be orientable.
Theorem 2.3 (Hodge/Weyl). On any compact Riemannian man-
ifold there is a canonical isomorphism
(2.15) H^k_{dR}(M) ≅ H^k_{Ho}(M) = { u ∈ L²(M; Λ^k M); (d + δ)u = 0 }
where the left-hand side is either the C^∞ or the distributional de Rham cohomology
(2.16) { u ∈ C^∞(M; Λ^k M); du = 0 } / d C^∞(M; Λ^{k−1} M)
≅ { u ∈ C^{−∞}(M; Λ^k M); du = 0 } / d C^{−∞}(M; Λ^{k−1} M).

Proof. The critical point of course is that


d + δ ∈ Diff 1 (M ; ∗ M ) is elliptic.
V
(2.17)
We know that the symbol of d at a point ζ ∈ Tp∗ M is the map
Vk
(2.18) M 3 α 7→ iζ ∧ α.
We are only interested in ζ 6= 0 and by homogeneity it is enough to
consider |ζ| = 1. Let e1 = ζ, e2 , . . . , en be an orthonormal basis of
Tp∗ M , then from (2.12) with a fixed sign throughout:
(2.19) σ(δ, ζ)α = ± ∗ (iζ ∧ ·) ∗ α.
Take α = eI , ∗α = ±eI 0 where I ∪ I 0 = {1, . . . , n}. Thus
 0 1 6∈ I
(2.20) σ(δ, ζ)α = .
±iαI\{1} 1 ∈ I
In particular, σ(d + δ) is an isomorphism since it satisfies
(2.21) σ(d + δ)2 = |ζ|2
as follows from (2.18) and (2.20) or directly from the fact that
(2.22) (d + δ)2 = d2 + dδ + δd + δ 2 = dδ + δd
again using (2.18) and (2.20).
Once we know that d + δ is elliptic we conclude from the discussion
of Fredholm properties above that the distributional null space
u ∈ C −∞ (M, ∗ M ); (d + δ)u = 0 ⊂ C ∞ (M, ∗ M )
 V V
(2.23)

is finite dimensional. From this it follows that


k
={u ∈ C −∞ (M, k M ); (d + δ)u = 0}
V
HHo
(2.24)
={u ∈ C ∞ (M, k M ); du = δu = 0}
V

and that the null space in (2.23) is simply the direct sum of these spaces
over k. Indeed, from (2.23) the integration by parts in
Z Z
0 = hdu, (d + δ)ui ν = kdukL2 + hu, δ 2 ui ν = kduk2L2
2

is justified.
Thus we can consider d + δ as a Fredholm operator in three forms
d + δ :C −∞ (M, ∗ M ) −→ C −∞ (M, ∗ M ),
V V

d + δ :H 1 (M, ∗ M ) −→ H 1 (M, ∗ M ),
V V
(2.25)
d + δ :C ∞ (M, ∗ M ) −→ C ∞ (M, ∗ M )
V V

and obtain the three direct sum decompositions


C −∞ (M, ∗ M ) = HHo ∗
⊕ (d + δ)C −∞ (M, ∗ M ),
V V

L2 (M, ∗ M ) = HHo∗
⊕ (d + δ)L2 (M, ∗ M ),
V V
(2.26)
C ∞ (M, ∗ M ) = HHo∗
⊕ (d + δ)C ∞ (M, ∗ M ).
V V

The same complement occurs in all three cases in view of (2.24).


k
From (2.24) directly, all the “harmonic” forms in HHo (M ) are closed
and so there is a natural map
k k k
(2.27) HHo (M ) −→ HdR (M ) −→ HdR,C −∞ (M )

where the two de Rham spaces are those in (2.16), not yet shown to be
equal.
We proceed to show that the maps in (2.27) are isomorphisms. First
k
to show injectivity, suppose u ∈ HHo (M ) is mapped to zero in either
space. This means u = dv where v Vis either C ∞ or distributional, so
it suffices to suppose v ∈ C −∞ (M, k−1 M ). Since u is smooth the
integration by parts in the distributional pairing
Z Z
2
kukL2 = hu, dvi ν = hδu, vi ν = 0
M M
is justified, so u = 0 and the maps are injective.
To see surjectivity, use the Hodge decomposition (2.26). If u′ ∈ C^{−∞}(M, Λ^k M) or C^∞(M, Λ^k M), we find
(2.28) u′ = u₀ + (d + δ)v
where correspondingly, v ∈ C^{−∞}(M, Λ^∗M) or C^∞(M, Λ^∗M) and u₀ ∈ H^k_{Ho}(M). If u′ is closed, du′ = 0, then dδv = 0 follows from applying d to (2.28) and hence (d + δ)δv = 0, since δ² = 0. Thus δv ∈ H^∗_{Ho}(M) and in particular, δv ∈ C^∞(M, Λ^∗M). Then the integration by parts
in Z Z
2
kδvkL2 = hδv, δvi ν = hv, (d + δ)δvi ν = 0
is justified, so δv = 0. Then (2.28) shows that any closed form, smooth
k
or distributional, is cohomologous in the same sense to u0 ∈ HHo (M ).
Thus the natural maps (2.27) are isomorphisms and the Theorem is
proved. 
Thus, on a compact Riemannian manifold (whether orientable or
not), each de Rham class has a unique harmonic representative.
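For example (a standard illustration of the theorem, not proved here): on the flat torus Tⁿ = Rⁿ/Zⁿ the harmonic forms are exactly the constant-coefficient forms Σ_I c_I dx_I, so (2.15) recovers dim H^k_{dR}(Tⁿ) = n!/(k!(n−k)!).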
3. Coulomb potential
4. Dirac strings
Addenda to Chapter 9
CHAPTER 10

Monopoles

1. Gauge theory
2. Bogomolny equations
(1) Compact operators, spectral theorem
(2) Families of Fredholm operators(*)
(3) Non-compact self-adjoint operators, spectral theorem
(4) Spectral theory of the Laplacian on a compact manifold
(5) Pseudodifferential operators(*)
(6) Invertibility of the Laplacian on Euclidean space
(7) Lie groups( ), bundles and gauge invariance
(8) Bogomolny equations on R3
(9) Gauge fixing
(10) Charge and monopoles
(11) Monopole moduli spaces
* I will drop these if it looks as though time will become an issue.
„, I will provide a brief and elementary discussion of manifolds and Lie
groups if that is found to be necessary.

3. Problems
Problem 1. Prove that u+ , defined by (15.10) is linear.

Problem 2. Prove Lemma 15.7.


Hint(s). All functions here are supposed to be continuous, I just
don’t bother to keep on saying it.
(1) Recall, or check, that the local compactness of a metric space
X means that for each point x ∈ X there is an  > 0 such that
the ball {y ∈ X; d(x, y) ≤ δ} is compact for δ ≤ .
(2) First do the case n = 1, so K b U is a compact set in an open
subset.
(a) Given δ > 0, use the local compactness of X, to cover K
with a finite number of compact closed balls of radius at
most δ.

(b) Deduce that if ε > 0 is small enough then the set {x ∈ X; d(x, K) ≤ ε}, where
d(x, K) = inf d(x, y),
y∈K

is compact.
(c) Show that d(x, K), for K compact, is continuous.
(d) Given  > 0 show that there is a continuous function
g : R −→ [0, 1] such that g (t) = 1 for t ≤ /2 and
g (t) = 0 for t > 3/4.
(e) Show that f = g ◦d(·, K) satisfies the conditions for n = 1
if  > 0 is small enough.
(3) Prove the general case by induction over n.
(a) In the general case, set K 0 = K ∩ U1{ and show that the
inductive hypothesis applies to K 0 and the Uj for j > 1; let
fj0 , j = 2, . . . , n be the functions supplied by the inductive
assumption and put f 0 = j≥2 fj0 .
P

(b) Show that K1 = K ∩ {f 0 ≤ 12 } is a compact subset of U1 .


(c) Using the case n = 1 construct a function F for K1 and
U1 .
(d) Use the case n = 1 again to find G such that G = 1 on K
and supp(G) b {f 0 + F > 21 }.
(e) Make sense of the functions
f₁ = F G/(f′ + F), f_j = f_j′ G/(f′ + F), j ≥ 2,
and show that they satisfy the inductive assumptions.
Problem 3. Show that σ-algebras are closed under countable in-
tersections.
Problem 4. (Easy) Show that if µ is a complete measure and
E ⊂ F where F is measurable and has measure 0 then µ(E) = 0.
Problem 5. Show that compact subsets are measurable for any
Borel measure. (This just means that compact sets are Borel sets if
you follow through the tortuous terminology.)
Problem 6. Show that the smallest σ-algebra containing the sets
(a, ∞] ⊂ [−∞, ∞]
for all a ∈ R, generates what is called above the ‘Borel’ σ-algebra on
[−∞, ∞].
Problem 7. Write down a careful proof of Proposition 1.1.

Problem 8. Write down a careful proof of Proposition 1.2.


Problem 9. Let X be the metric space
X = {0} ∪ {1/n; n ∈ N = {1, 2, . . .}} ⊂ R
with the induced metric (i.e. the same distance as on R). Recall why
X is compact. Show that the space C0 (X) and its dual are infinite
dimensional. Try to describe the dual space in terms of sequences; at
least guess the answer.
Problem 10. For the space Y = N = {1, 2, . . .} ⊂ R, describe
C0 (Y ) and guess a description of its dual in terms of sequences.
Problem 11. Let (X, M, µ) be any measure space (so µ is a mea-
sure on the σ-algebra M of subsets of X). Show that the set of equiv-
alence classes of µ-integrable functions on X, with the equivalence re-
lation given by (4.8), is a normed linear space with the usual linear
structure and the norm given by
Z
kf k = |f |dµ.
X

Problem 12. Let (X, M) be a set with a σ-algebra. Let µ : M →


R be a finite measure in the sense that µ(∅) = 0 and for any {E_i}_{i=1}^∞ ⊂ M with E_i ∩ E_j = ∅ for i ≠ j,
(3.1) µ( ∪_{i=1}^∞ E_i ) = Σ_{i=1}^∞ µ(E_i)

with the series on the right always absolutely convergenct (i.e., this is
part of the requirement on µ). Define

X
(3.2) |µ| (E) = sup |µ(Ei )|
i=1
for E S∈ M, with the supremum over all measurable decompositions
E = ∞ i=1 Ei with the Ei disjoint. Show that |µ| is a finite, positive
measure.
Hint 1. You must show that |µ| (E) = ∞
P S
i=1
S |µ| (A i ) if i Ai = E,
Ai ∈ M being disjoint. Observe that if Aj = l Ajl is a measurable
decomposition of ASj then together the Ajl give a decomposition of E.
Similarly, if E = j Ej is any such decomposition of E then Ajl =
Aj ∩ El gives such a decomposition of Aj .
Hint 2. See [6] p. 117!
Problem 13. (Hahn Decomposition)
With assumptions as in Problem 12:

(1) Show that µ+ = 12 (|µ| + µ) and µ− = 12 (|µ| − µ) are positive


measures, µ = µ+ − µ− . Conclude that the definition of a
measure based on (4.16) is the same as that in Problem 12.
(2) Show that µ± so constructed are orthogonal in the sense that
there is a set E ∈ M such that µ− (E) = 0, µ+ (X \ E) = 0.
Hint. Use the definition of |µ| to show that for any F ∈ M
and any  > 0 there is a subset F 0 ∈ M, F 0 ⊂ F such that
µ+ (F 0 ) ≥ µ+ (F ) −  and µ− (F 0 ) ≤ . Given δ > 0 apply
this result repeatedly (say with  = 2−n δ) to find a decreasing
sequence of sets F1 = X, Fn ∈ M, Fn+1 ⊂ Fn such that
−n
µ+ (FTn ) ≥ µ+ (Fn−1 ) − 2 δ and µ− (Fn ) ≤ 2−n δ. Conclude that
G = n Fn has µ+ (G) ≥ µ+ (X) − δ and µ− (G) = 0. Now S let
Gm be chosen this way with δ = 1/m. Show that E = m Gm
is as required.
Problem 14. Now suppose that µ is a finite, positive Radon mea-
sure on a locally compact metric space X (meaning a finite positive
Borel measure outer regular on Borel sets and inner regular on open
sets). Show that µ is inner regular on all Borel sets and hence, given
 > 0 and E ∈ B(X) there exist sets K ⊂ E ⊂ U with K compact and
U open such that µ(K) ≥ µ(E) − , µ(E) ≥ µ(U ) − .
Hint. First take U open, then use its inner regularity to find K
with K 0 b U and µ(K 0 ) ≥ µ(U ) − /2. How big is µ(E\K 0 )? Find
V ⊃ K 0 \E with V open and look at K = K 0 \V .
Problem 15. Using Problem 14 show that if µ is a finite Borel
measure on a locally compact metric space X then the following three
conditions are equivalent
(1) µ = µ1 − µ2 with µ1 and µ2 both positive finite Radon mea-
sures.
(2) |µ| is a finite positive Radon measure.
(3) µ+ and µ− are finite positive Radon measures.
Problem 16. Let k k be a norm on a vector space V . Show that
kuk = (u, u)1/2 for an inner product satisfying (1.1) - (1.4) if and only
if the parallelogram law holds for every pair u, v ∈ V .
Hint (From Dimitri Kountourogiannis)
If k · k comes from an inner product, then it must satisfy the polar-
isation identity:
(x, y) = 1/4(‖x + y‖² − ‖x − y‖² + i‖x + iy‖² − i‖x − iy‖²)
i.e, the inner product is recoverable from the norm, so use the RHS
(right hand side) to define an inner product on the vector space. You

will need the parallelogram law to verify the additivity of the RHS.
Note the polarization identity is a bit more transparent for real vector
spaces. There we have
(x, y) = 1/4(‖x + y‖² − ‖x − y‖²);
both are easy to prove using kak2 = (a, a).
Problem 17. Show (Rudin does it) that if u : Rn → C has con-
tinuous partial derivatives then it is differentiable at each point in the
sense of (6.19).
Problem 18. Consider the function f (x) = hxi−1 = (1 + |x|2 )−1/2 .
Show that
∂f
= lj (x) · hxi−3
∂xj
with lj (x) a linear function. Conclude by induction that hxi−1 ∈
C0k (Rn ) for all k.
Problem 19. Show that exp(− |x|2 ) ∈ S(Rn ).
Problem 20. Prove (2.8), probably by induction over k.
Problem 21. Prove Lemma 2.4.
Hint. Show that a set U ∋ 0 in S(R^n) is a neighbourhood of 0 if and only if for some k and ε > 0 it contains a set of the form
{ ϕ ∈ S(R^n); Σ_{|α|≤k, |β|≤k} sup |x^α D^β ϕ| < ε }.

Problem 22. Prove (3.7), by estimating the integrals.


Problem 23. Prove (3.9) where
Z 0
0 ∂ψ
ψj (z; x ) = (z + tx0 ) dt .
0 ∂z j

Problem 24. Prove (3.20). You will probably have to go back to


first principles to do this. Show that it is enough to assume u ≥ 0 has
compact support. Then show it is enough to assume that u is a simple,
and integrable, function. Finally look at the definition of Lebesgue
measure and show that if E ⊂ Rn is Borel and has finite Lebesgue
measure then
lim µ(E\(E + t)) = 0
|t|→∞
where µ = Lebesgue measure and
E + t = {p ∈ Rn ; p0 + t , p0 ∈ E} .

Problem 25. Prove Leibniz’ formula


X α
α
D x (ϕψ) = Dα x ϕ · dα−β
x ψ
β≤α
β
for any C ∞ functions and ϕ and ψ. Here α and β are multiindices,
β ≤ α means βj ≤ αj for each j? and
  Y 
α αj
= .
β j
β j

I suggest induction!
Problem 26. Prove the generalization of Proposition 3.10 that
u ∈ S′(R^n), supp(u) ⊂ {0} implies there are constants c_α, |α| ≤ m, for some m, such that
u = Σ_{|α|≤m} c_α D^α δ.
Hint This is not so easy! I would be happy if you can show that
u ∈ M (Rn ), supp u ⊂ {0} implies u = cδ. To see this, you can show
that
ϕ ∈ S(Rn ), ϕ(0) = 0
⇒ ∃ϕj ∈ S(Rn ) , ϕj (x) = 0 in |x| ≤ j > 0(↓ 0) ,
sup |ϕj − ϕ| → 0 as j → ∞ .
To prove the general case you need something similar — that given m,
if ϕ ∈ S(Rn ) and Dα x ϕ(0) = 0 for |α| ≤ m then ∃ ϕj ∈ S(Rn ), ϕj = 0
in |x| ≤ j , j ↓ 0 such that ϕj → ϕ in the C m norm.
Problem 27. If m ∈ N, m0 > 0 show that u ∈ H m (Rn ) and
0 0
D u ∈ H m (Rn ) for all |α| ≤ m implies u ∈ H m+m (Rn ). Is the
α

converse true?
Problem 28. Show that every element u ∈ L2 (Rn ) can be written
as a sum
n
X
u = u0 + Dj uj , uj ∈ H 1 (Rn ) , j = 0, . . . , n .
j=1

Problem 29. Consider for n = 1, the locally integrable function


(the Heaviside function),

H(x) = 0 for x ≤ 0, H(x) = 1 for x > 0.
Show that Dx H(x) = cδ; what is the constant c?

Problem 30. For what range of orders m is it true that δ ∈


H m (Rn ) , δ(ϕ) = ϕ(0)?
Problem 31. Try to write the Dirac measure explicitly (as possi-
ble) in the form (5.8). How many derivatives do you think are neces-
sary?
Problem 32. Go through the computation of ∂E again, but cut-
ting out a disk {x2 + y 2 ≤ 2 } instead.
Problem 33. Consider the Laplacian, (6.4), for n = 3. Show that
E = c(x² + y² + z²)^{−1/2} is a fundamental solution for some value of c.
Problem 34. Recall that a topology on a set X is a collection F of
subsets (called the open sets) with the properties, φ ∈ F, X ∈ F and
F is closed under finite intersections and arbitrary unions. Show that
the following definition of an open set U ⊂ S 0 (Rn ) defines a topology:
∀ u ∈ U and all ϕ ∈ S(R^n) ∃ ε > 0 s.t.
|(u′ − u)(ϕ)| < ε ⇒ u′ ∈ U.
This is called the weak topology (because there are very few open
sets). Show that uj → u weakly in S 0 (Rn ) means that for every open
set U 3 u ∃N st. uj ∈ U ∀ j ≥ N .
Problem 35. Prove (6.18) where u ∈ S 0 (Rn ) and ϕ, ψ ∈ S(Rn ).
Problem 36. Show that for fixed v ∈ S 0 (Rn ) with compact support
S(Rn ) 3 ϕ 7→ v ∗ ϕ ∈ S(Rn )
is a continuous linear map.
Problem 37. Prove the ?? to properties in Theorem 6.6 for u ∗ v
where u ∈ S 0 (Rn ) and v ∈ S 0 (Rn ) with at least one of them having
compact support.
Problem 38. Use Theorem 6.9 to show that if P (D) is hypoelliptic
then every parametrix F ∈ S(Rn ) has sing supp(F ) = {0}.
Problem 39. Show that if P (D) is an ellipitic differential operator
of order m, u ∈ L2 (Rn ) and P (D)u ∈ L2 (Rn ) then u ∈ H m (Rn ).
Problem 40 (Taylor’s theorem). . Let u : Rn −→ R be a real-
valued function which is k times continuously differentiable. Prove that
there is a polynomial p and a continuous function v such that
|v(x)|
u(x) = p(x) + v(x) where lim = 0.
|x|↓0 |x|k

Problem 41. Let C(Bn ) be the space of continuous functions on


the (closed) unit ball, Bn = {x ∈ Rn ; |x| ≤ 1}. Let C0 (Bn ) ⊂ C(Bn ) be
the subspace of functions which vanish at each point of the boundary
and let C(Sn−1 ) be the space of continuous functions on the unit sphere.
Show that inclusion and restriction to the boundary gives a short exact
sequence
C0 (Bn ) ,→ C(Bn ) −→ C(Sn−1 )
(meaning the first map is injective, the second is surjective and the
image of the first is the null space of the second.)
Problem 42 (Measures). A measure on the ball is a continuous
linear functional µ : C(Bn ) −→ R where continuity is with respect to
the supremum norm, i.e. there must be a constant C such that
|µ(f )| ≤ C sup |f (x)| ∀ f ∈ C(Bn ).
x∈Rn

Let M (Bn ) be the linear space of such measures. The space M (Sn−1 )
of measures on the sphere is defined similarly. Describe an injective
map
M (Sn−1 ) −→ M (Bn ).
Can you define another space so that this can be extended to a short
exact sequence?
Problem 43. Show that the Riemann integral defines a measure
Z
n
(3.3) C(B ) 3 f 7−→ f (x)dx.
Bn

Problem 44. If g ∈ C(Bn ) and µ ∈ M (Bn ) show that gµ ∈ M (Bn )


where (gµ)(f ) = µ(f g) for all f ∈ C(Bn ). Describe all the measures
with the property that
xj µ = 0 in M (Bn ) for j = 1, . . . , n.
Problem 45 (Hörmander, Theorem 3.1.4). Let I ⊂ R be an open,
non-empty interval.
i) Show (you mayR use results from class) that there exists ψ ∈
Cc∞ (I) with R ψ(x)ds = 1.
ii) Show that any φ ∈ Cc∞ (I) may be written in the form
Z

φ = φ̃ + cψ, c ∈ C, φ̃ ∈ Cc (I) with φ̃ = 0.
R

iii) Show that if φ̃ ∈ C_c^∞(I) and ∫_R φ̃ = 0 then there exists µ ∈ C_c^∞(I) such that dµ/dx = φ̃ in I.

iv) Suppose u ∈ C^{−∞}(I) satisfies du/dx = 0, i.e.
u(−dφ/dx) = 0 ∀ φ ∈ C_c^∞(I);
show that u = c for some constant c.
v) Suppose that u ∈ C −∞ (I) satisfies du
dx
= c, for some constant
c, show that u = cx + d for some d ∈ C.
Problem 46. [Hörmander Theorem 3.1.16]
i) Use Taylor’s formula to show that there is a fixed ψ ∈ Cc∞ (Rn )
such that any φ ∈ Cc∞ (Rn ) can be written in the form
n
X
φ = cψ + xj ψ j
j=1

where c ∈ C and the ψj ∈ Cc∞ (Rn ) depend on φ.


ii) Recall that δ0 is the distribution defined by
δ0 (φ) = φ(0) ∀ φ ∈ Cc∞ (Rn );
explain why δ0 ∈ C −∞ (Rn ).
iii) Show that if u ∈ C −∞ (Rn ) and u(xj φ) = 0 for all φ ∈ Cc∞ (Rn )
and j = 1, . . . , n then u = cδ0 for some c ∈ C.
iv) Define the ‘Heaviside function’
Z ∞
H(φ) = φ(x)dx ∀ φ ∈ Cc∞ (R);
0
−∞
show that H ∈ C (R).
v) Compute dxd
H ∈ C −∞ (R).
Problem 47. Using Problems 45 and 46, find all u ∈ C −∞ (R)
satisfying the differential equation
du
x = 0 in R.
dx
These three problems are all about homogeneous distributions on
the line, extending various things using the fact that
x+^z = exp(z log x) for x > 0 and x+^z = 0 for x ≤ 0
is a continuous function on R if Re z > 0 and is differentiable if Re z > 1
and then satisfies
(d/dx) x+^z = z x+^{z−1} .
We used this to define
(3.4) x+^z = (1/(z + k)) (1/(z + k − 1)) · · · (1/(z + 1)) (d^k/dx^k) x+^{z+k} if z ∈ C \ −N.
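Before working on the problems below, the identity (d/dx) x+^z = z x+^{z−1} can be checked numerically against a fixed test function. The following is only an illustrative sketch (Python with numpy/scipy); the test function φ(x) = exp(−x²) and the value z = 1.5 are arbitrary choices, not part of the notes. It verifies the pairing form −∫ φ′(x) x^z dx = z ∫ φ(x) x^{z−1} dx, valid for Re z > 0.

# Numerical sanity check of (d/dx) x_+^z = z x_+^{z-1} paired with a test function.
# Illustrative sketch only; phi and z are arbitrary choices.
import numpy as np
from scipy.integrate import quad

z = 1.5                                    # any z with Re z > 0 works here
phi = lambda x: np.exp(-x**2)              # test function
dphi = lambda x: -2.0 * x * np.exp(-x**2)  # its derivative

# pairing of (d/dx) x_+^z with phi:  -int_0^oo phi'(x) x^z dx
lhs, _ = quad(lambda x: -dphi(x) * x**z, 0.0, np.inf)
# pairing of z x_+^{z-1} with phi:  z int_0^oo phi(x) x^{z-1} dx
rhs, _ = quad(lambda x: phi(x) * x**(z - 1.0), 0.0, np.inf)
rhs *= z

print(lhs, rhs)   # the two numbers agree to quadrature accuracy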
Problem 48. [Hadamard regularization]
i) Show that (3.4) just means that for each φ ∈ Cc∞ (R)
x+^z (φ) = ((−1)^k /((z + k) · · · (z + 1))) ∫_0^∞ (d^k φ/dx^k )(x) x^{z+k} dx, Re z > −k, z ∉ −N.
ii) Use integration by parts to show that
(3.5) x+^z (φ) = lim_{ε↓0} [ ∫_ε^∞ φ(x) x^z dx − ∑_{j=1}^{k} Cj (φ) ε^{z+j} ], Re z > −k, z ∉ −N
for certain constants Cj (φ) which you should give explicitly.
[This is called Hadamard regularization after Jacques Hadamard,
feel free to look at his classic book [3].]
iii) Assuming that −k + 1 ≥ Re z > −k, z 6= −k + 1, show that
there can only be one set of the constants with j < k (for each
choice of φ ∈ Cc∞ (R)) such that the limit in (3.5) exists.
iv) Use ii), and maybe iii), to show that
(d/dx) x+^z = z x+^{z−1} in C −∞ (R) ∀ z ∉ −N0 , N0 = {0, 1, 2, . . . }.
v) Similarly show that x · x+^z = x+^{z+1} for all z ∉ −N.
vi) Show that xz+ = 0 in x < 0 for all z ∈
/ −N. (Duh.)
Problem 49. [Null space of x d/dx − z]
i) Show that if u ∈ C −∞ (R) then ũ(φ) = u(φ̃), where φ̃(x) =
φ(−x) ∀ φ ∈ Cc∞ (R), defines an element of C −∞ (R). What is
ũ if u ∈ C 0 (R)? Compute δ̃0 .
ii) Show that dũ/dx = −(du/dx)~ .
iii) Define x−^z = (x+^z )~ for z ∉ −N and show that (d/dx) x−^z = −z x−^{z−1} and x · x−^z = −x−^{z+1} .
iv) Suppose that u ∈ C −∞ (R) satisfies the distributional equation (x d/dx − z)u = 0 (meaning of course, x du/dx = zu where z is a constant). Show that
u|x>0 = c+ x+^z |x>0 and u|x<0 = c− x−^z |x<0
for some constants c± . Deduce that v = u − c+ x+^z − c− x−^z satisfies
(3.6) (x d/dx − z)v = 0 and supp(v) ⊂ {0}.
v) Show that for each k ∈ N, (x d/dx + k + 1)(d^k /dx^k )δ0 = 0.
vi) Using the fact that any v ∈ C −∞ (R) with supp(v) ⊂ {0} is a finite sum of constant multiples of the (d^k /dx^k )δ0 , show that, for z ∉ −N, the only solution of (3.6) is v = 0.
vii) Conclude that for z ∉ −N
(3.7) {u ∈ C −∞ (R); (x d/dx − z)u = 0}
is a two-dimensional vector space.
Problem 50. [Negative integral order] To do the same thing for
negative integral order we need to work a little differently. Fix k ∈ N.
i) We define weak convergence of distributions by saying un → u in C −∞ (X), where un , u ∈ C −∞ (X), X ⊂ Rn being open, if un (φ) → u(φ) for each φ ∈ Cc∞ (X). Show that un → u implies that ∂un /∂xj → ∂u/∂xj for each j = 1, . . . , n and f un → f u if f ∈ C ∞ (X).
ii) Show that (z + k)xz+ is weakly continuous as z → −k in the
sense that for any sequence zn → −k, zn ∈ / −N, (zn + k)xz+n →
vk where
vk = (1/(−1)) · · · (1/(−k + 1)) (d^{k+1}/dx^{k+1}) x+ , x+ = x+^1 .
iii) Compute vk , including the constant factor.
iv) Do the same thing for (z + k)xz− as z → −k.
v) Show that there is a linear combination (k + z)(xz+ + c(k)xz− )
such that as z → −k the limit is zero.
vi) If you get this far, show that in fact xz+ + c(k)xz− also has a
weak limit, uk , as z → −k. [This may be the hardest part.]
vii) Show that this limit distribution satisfies (x d/dx + k)uk = 0.
viii) Conclude that (3.7) does in fact hold for z ∈ −N as well.
[There are still some things to prove to get this.]
Problem 51. Show that for any set G ⊂ Rn
v∗ (G) = inf ∑_{i=1}^{∞} v(Ai )
where the infimum is taken over coverings of G by rectangular sets
(products of intervals).
Problem 52. Show that a σ-algebra is closed under countable in-
tersections.
Problem 53. Show that compact sets are Lebesgue measurable
and have finite volume and also show the inner regularity of the Lebesgue
measure on open sets, that is if E is open then
(3.8) v(E) = sup{v(K); K ⊂ E, K compact}.
Problem 54. Show that a set B ⊂ Rn is Lebesgue measurable if
and only if
v ∗ (E) = v ∗ (E ∩ B) + v ∗ (E ∩ B { ) ∀ open E ⊂ Rn .
[The definition is this for all E ⊂ Rn .]
Problem 55. Show that a real-valued continuous function f :
U −→ R on an open set, is Lebesgue measurable, in the sense that
f −1 (I) ⊂ U ⊂ Rn is measurable for each interval I.
Problem 56. Hilbert space and the Riesz representation theorem.
If you need help with this, it can be found in lots of places – for instance
[7] has a nice treatment.
i) A pre-Hilbert space is a vector space V (over C) with a ‘posi-
tive definite sesquilinear inner product’ i.e. a function
V × V 3 (v, w) 7→ hv, wi ∈ C
satisfying
• hw, vi = hv, wi (the complex conjugate)
• ha1 v1 + a2 v2 , wi = a1 hv1 , wi + a2 hv2 , wi
• hv, vi ≥ 0
• hv, vi = 0 ⇒ v = 0.
Prove Schwarz’ inequality, that
|hu, vi| ≤ hu, ui^{1/2} hv, vi^{1/2} ∀ u, v ∈ V.
Hint: Reduce to the case hv, vi = 1 and then expand
hu − hu, viv, u − hu, vivi ≥ 0.
ii) Show that kvk = hv, vi1/2 is a norm and that it satisfies the
parallelogram law:
(3.9) kv1 + v2 k2 + kv1 − v2 k2 = 2kv1 k2 + 2kv2 k2 ∀ v1 , v2 ∈ V.
iii) Conversely, suppose that V is a linear space over C with a
norm which satisfies (3.9). Show that
4hv, wi = kv + wk2 − kv − wk2 + ikv + iwk2 − ikv − iwk2
defines a pre-Hilbert inner product which gives the original
norm.
iv) Let V be a Hilbert space, so as in (i) but complete as well.
Let C ⊂ V be a closed non-empty convex subset, meaning
v, w ∈ C ⇒ (v + w)/2 ∈ C. Show that there exists a unique
v ∈ C minimizing the norm, i.e. such that
kvk = inf_{w∈C} kwk.
Hint: Use the parallelogram law to show that a norm min-
imizing sequence is Cauchy.
v) Let u : H → C be a continuous linear functional on a Hilbert
space, so |u(ϕ)| ≤ Ckϕk ∀ ϕ ∈ H. Show that N = {ϕ ∈
H; u(ϕ) = 0} is closed and that if v0 ∈ H has u(v0 ) 6= 0 then
each v ∈ H can be written uniquely in the form
v = cv0 + w, c ∈ C, w ∈ N.
vi) With u as in v), not the zero functional, show that there exists
a unique f ∈ H with u(f ) = 1 and hw, f i = 0 for all w ∈ N .
Hint: Apply iv) to C = {g ∈ V ; u(g) = 1}.
vii) Prove the Riesz Representation theorem, that every continu-
ous linear functional on a Hilbert space is of the form
uf : H 3 ϕ 7→ hϕ, f i for a unique f ∈ H.
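For part iii) of Problem 56 it can be reassuring to test the polarization formula numerically in a finite-dimensional model before proving it. The sketch below (Python with numpy; the dimension and the random vectors are arbitrary choices) uses the inner product hv, wi = ∑ vj w̄j , which is linear in the first slot as in i), and checks that 4hv, wi agrees with the stated combination of norms.

# Check 4<v,w> = ||v+w||^2 - ||v-w||^2 + i||v+iw||^2 - i||v-iw||^2 on C^n.
# Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
n = 5
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

inner = lambda a, b: np.sum(a * np.conj(b))   # <a,b>, linear in a, conjugate-linear in b
norm2 = lambda a: inner(a, a).real            # ||a||^2

lhs = 4.0 * inner(v, w)
rhs = norm2(v + w) - norm2(v - w) + 1j * (norm2(v + 1j * w) - norm2(v - 1j * w))
print(abs(lhs - rhs))   # ~ 1e-15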
Problem 57. Density of Cc∞ (Rn ) in Lp (Rn ).
i) Recall in a few words why simple integrable functions are dense in L1 (Rn ) with respect to the norm kf kL1 = ∫_{Rn} |f (x)| dx.
ii) Show that simple functions ∑_{j=1}^{N} cj χ(Uj ) where the Uj are open and bounded are also dense in L1 (Rn ).
iii) Show that if U is open and bounded then F (y) = v(U ∩ Uy ),
where Uy = {z ∈ Rn ; z = y + y 0 , y 0 ∈ U } is continuous in
y ∈ Rn and that
v(U ∩ Uy{ ) + v(U { ∩ Uy ) → 0 as y → 0.
iv) If U is open and bounded and ϕ ∈ Cc∞ (Rn ) show that
f (x) = ∫_U ϕ(x − y) dy ∈ Cc∞ (Rn ).
v) Show that if U is open and bounded then
sup_{|y|≤δ} ∫ |χU (x) − χU (x − y)| dx → 0 as δ ↓ 0.
vi) If U is open and bounded and ϕ ∈ Cc∞ (Rn ), ϕ ≥ 0, ∫ ϕ = 1 then
fδ → χU in L1 (Rn ) as δ ↓ 0
where
fδ (x) = ∫ δ^{−n} ϕ(y/δ) χU (x − y) dy.
Hint: Write χU (x) = ∫ δ^{−n} ϕ(y/δ) χU (x) dy and use v).
vii) Conclude that Cc∞ (Rn ) is dense in L1 (Rn ).
viii) Show that Cc∞ (Rn ) is dense in Lp (Rn ) for any 1 ≤ p < ∞.
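The convergence in part vi) of Problem 57 is easy to see numerically in one dimension. The following sketch (Python/numpy; the set U = (0, 1), the grids and the bump profile are all arbitrary choices) forms fδ = φδ ∗ χU with a normalized smooth bump φδ and prints the L1 error for decreasing δ.

# Illustrate f_delta = phi_delta * chi_U -> chi_U in L^1(R) as delta -> 0 (1-d sketch).
import numpy as np

def bump(t):
    # smooth, compactly supported profile on (-1, 1)
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

s = np.linspace(-1.5, 1.5, 6001)
ds = s[1] - s[0]
Z = np.sum(bump(s)) * ds                    # normalizing constant so that int phi = 1

x = np.linspace(-1.0, 2.0, 1501)            # where f_delta and chi_U are sampled
dx = x[1] - x[0]
chiU = ((x > 0.0) & (x < 1.0)).astype(float)

for delta in (0.5, 0.1, 0.02):
    t = np.linspace(0.0, 1.0, 2001)         # points of U = (0,1)
    dt = t[1] - t[0]
    # f_delta(x_k) = int_U delta^{-1} phi((x_k - s)/delta) ds, evaluated by a Riemann sum
    K = bump((x[:, None] - t[None, :]) / delta) / (delta * Z)
    f_delta = K.sum(axis=1) * dt
    err = np.sum(np.abs(f_delta - chiU)) * dx
    print(delta, round(err, 4))             # the L^1 error shrinks roughly like delta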
Problem 58. Schwartz representation theorem. Here we (well you)
come to grips with the general structure of a tempered distribution.
i) Recall briefly the proof of the Sobolev embedding theorem and
the corresponding estimate
sup_{x∈Rn} |φ(x)| ≤ CkφkH m , n/2 < m ∈ R.
ii) For m = n + 1 write down a(n equivalent) norm on the right
in a form that does not involve the Fourier transform.
iii) Show that for any α ∈ Nn0
|Dα ((1 + |x|2 )N φ)| ≤ Cα,N (1 + |x|2 )N ∑_{β≤α} |Dβ φ|.
iv) Deduce the general estimates
sup_{|α|≤N, x∈Rn} (1 + |x|2 )N |Dα φ(x)| ≤ CN k(1 + |x|2 )N φkH N +n+1 .
v) Conclude that for each tempered distribution u ∈ S 0 (Rn ) there
is an integer N and a constant C such that
|u(φ)| ≤ Ck(1 + |x|2 )N φkH 2N ∀ φ ∈ S(Rn ).
vi) Show that v = (1 + |x|2 )−N u ∈ S 0 (Rn ) satisfies
|v(φ)| ≤ Ck(1 + |D|2 )N φkL2 ∀ φ ∈ S(Rn ).
vii) Recall (from class or just show it) that if v is a tempered
distribution then there is a unique w ∈ S 0 (Rn ) such that (1 +
|D|2 )N w = v.
viii) Use the Riesz Representation Theorem to conclude that for
each tempered distribution u there exists N and w ∈ L2 (Rn )
such that
(3.10) u = (1 + |D|2 )N (1 + |x|2 )N w.
ix) Use the Fourier transform on S 0 (Rn ) (and the fact that it is
an isomorphism on L2 (Rn )) to show that any tempered distri-
bution can be written in the form
u = (1 + |x|2 )N (1 + |D|2 )N w for some N and some w ∈ L2 (Rn ).
x) Show that any tempered distribution can be written in the
form
u = (1+|x|2 )N (1+|D|2 )N +n+1 w̃ for some N and some w̃ ∈ H 2(n+1) (Rn ).
xi) Conclude that any tempered distribution can be written in the
form
u = (1 + |x|2 )N (1 + |D|2 )M U for some N, M
and a bounded continuous function U.
Problem 59. Distributions of compact support.
i) Recall the definition of the support of a distribution, defined
in terms of its complement
Rn \ supp(u) = {p ∈ Rn ; ∃ U ⊂ Rn , open, with p ∈ U such that u|U = 0}.
ii) Show that if u ∈ C −∞ (Rn ) and φ ∈ Cc∞ (Rn ) satisfy
supp(u) ∩ supp(φ) = ∅
then u(φ) = 0.
iii) Consider the space C ∞ (Rn ) of all smooth functions on Rn ,
without restriction on supports. Show that for each N
kf k(N ) = sup_{|α|≤N, |x|≤N} |Dα f (x)|
is a seminorm on C ∞ (Rn ) (meaning it satisfies kf k ≥ 0, kcf k = |c|kf k for c ∈ C and the triangle inequality but that kf k = 0
does not necessarily imply that f = 0.)
iv) Show that Cc∞ (Rn ) ⊂ C ∞ (Rn ) is dense in the sense that for
each f ∈ C ∞ (Rn ) there is a sequence fn in Cc∞ (Rn ) such that
kf − fn k(N ) → 0 for each N.
v) Let E 0 (Rn ) temporarily (or permanently if you prefer) denote
the dual space of C ∞ (Rn ) (which is also written E(Rn )), that
is, v ∈ E 0 (Rn ) is a linear map v : C ∞ (Rn ) −→ C which is
continuous in the sense that for some N
(3.11) |v(f )| ≤ Ckf k(N ) ∀ f ∈ C ∞ (Rn ).
Show that such a v ‘is’ a distribution and that the map E 0 (Rn ) −→
C −∞ (Rn ) is injective.
vi) Show that if v ∈ E 0 (Rn ) satisfies (3.11) and f ∈ C ∞ (Rn ) has
f = 0 in |x| < N + ε for some ε > 0 then v(f ) = 0.
vii) Conclude that each element of E 0 (Rn ) has compact support
when considered as an element of C −∞ (Rn ).
viii) Show the converse, that each element of C −∞ (Rn ) with com-
pact support is an element of E 0 (Rn ) ⊂ C −∞ (Rn ) and hence
conclude that E 0 (Rn ) ‘is’ the space of distributions of compact
support.
I will denote the space of distributions of compact support by Cc−∞ (Rn ).
Problem 60. Hypoellipticity of the heat operator H = iDt + ∆ = iDt + ∑_{j=1}^{n} Dx_j^2 on Rn+1 .
(1) Using τ to denote the ‘dual variable’ to t and ξ ∈ Rn to denote
the dual variables to x ∈ Rn observe that H = p(Dt , Dx ) where
p = iτ + |ξ|2 .
(2) Show that |p(τ, ξ)| > (1/2)(|τ | + |ξ|2 ).
(3) Use an inductive argument to show that, in (τ, ξ) 6= 0 where
it makes sense,
(3.12) Dτ^k Dξ^α (1/p(τ, ξ)) = ∑_{j=1}^{|α|} qk,α,j (ξ)/p(τ, ξ)^{k+j+1}
where qk,α,j (ξ) is a polynomial of degree (at most) 2j − |α|.
(4) Conclude that if φ ∈ Cc∞ (Rn+1 ) is identically equal to 1 in a
neighbourhood of 0 then the function
g(τ, ξ) = (1 − φ(τ, ξ))/(iτ + |ξ|2 )
is the Fourier transform of a distribution F ∈ S 0 (Rn+1 ) with
sing supp(F ) ⊂ {0}. [Remember that sing supp(F ) is the com-
plement of the largest open subset of Rn+1 the restriction of F
to which is smooth].
(5) Show that F is a parametrix for the heat operator.
(6) Deduce that iDt + ∆ is hypoelliptic – that is, if U ⊂ Rn+1 is an
open set and u ∈ C −∞ (U ) satisfies (iDt + ∆)u ∈ C ∞ (U ) then
u ∈ C ∞ (U ).
(7) Show that iDt − ∆ is also hypoelliptic.
Problem 61. Wavefront set computations and more – all pretty
easy, especially if you use results from class.
i) Compute WF(δ) where δ ∈ S 0 (Rn ) is the Dirac delta function
at the origin.
ii) Compute WF(H(x)) where H(x) ∈ S 0 (R) is the Heaviside function
H(x) = 1 for x > 0, H(x) = 0 for x ≤ 0.
Hint: Dx is elliptic in one dimension, hit H with it.
iii) Compute WF(E), E = iH(x1 )δ(x0 ) which is the Heaviside in
the first variable on Rn , n > 1, and delta in the others.
iv) Show that Dx1 E = δ, so E is a fundamental solution of Dx1 .
v) If f ∈ Cc−∞ (Rn ) show that u = E ? f solves Dx1 u = f.
vi) What does our estimate on WF(E ? f ) tell us about WF(u) in
terms of WF(f )?
Problem 62. The wave equation in two variables (or one spatial
variable).
i) Recall that the Riemann function
E(t, x) = −1/4 if t > x and t > −x, and E(t, x) = 0 otherwise
is a fundamental solution of Dt2 − Dx2 (check my constant).
ii) Find the singular support of E.
iii) Write the Fourier transform (dual) variables as τ, ξ and show
that
WF(E) ⊂ {0} × S1 ∪ {(t, x, τ, ξ); x = t > 0 and ξ + τ = 0}
∪ {(t, x, τ, ξ); −x = t > 0 and ξ = τ } .
iv) Show that if f ∈ Cc−∞ (R2 ) then u = E?f satisfies (Dt2 −Dx2 )u =
f.
v) With u defined as in iv) show that
supp(u) ⊂ {(t, x); ∃
(t0 , x0 ) ∈ supp(f ) with t0 + x0 ≤ t + x and t0 − x0 ≤ t − x}.
vi) Sketch an illustrative example of v).
vii) Show that, still with u given by iv),
sing supp(u) ⊂ {(t, x); ∃ (t0 , x0 ) ∈ sing supp(f ) with
t ≥ t0 and t + x = t0 + x0 or t − x = t0 − x0 }.
viii) Bound WF(u) in terms of WF(f ).
Problem 63. A little uniqueness theorem. Suppose u ∈ Cc−∞ (Rn );
recall that the Fourier transform û ∈ C ∞ (Rn ). Now, suppose u ∈
Cc−∞ (Rn ) satisfies P (D)u = 0 for some non-trivial polynomial P ; show
that u = 0.
Problem 64. Work out the elementary behavior of the heat equa-
tion.
i) Show that the function on R × Rn , for n ≥ 1,
F (t, x) = t^{−n/2} exp(−|x|2 /4t) for t > 0, and F (t, x) = 0 for t ≤ 0,
is measurable, bounded on any set {|(t, x)| ≥ R} and is
integrable on {|(t, x)| ≤ R} for any R > 0.
ii) Conclude that F defines a tempered distribution on Rn+1 .
iii) Show that F is C ∞ outside the origin.
iv) Show that F satisfies the heat equation
(∂t − ∑_{j=1}^{n} ∂x_j^2 )F (t, x) = 0 in (t, x) 6= 0.
v) Show that F satisfies
(3.13) F (s2 t, sx) = s−n F (t, x) in S 0 (Rn+1 )
where the left hand side is defined by duality “F (s2 t, sx) = Fs ”
where
Fs (φ) = s−n−2 F (φ1/s ), φ1/s (t, x) = φ(t/s2 , x/s).
vi) Conclude that
(∂t − ∑_{j=1}^{n} ∂x_j^2 )F (t, x) = G(t, x)
where G(t, x) satisfies
(3.14) G(s2 t, sx) = s−n−2 G(t, x) in S 0 (Rn+1 )
in the same sense as above and has support at most {0}.
vii) Hence deduce that
(3.15) (∂t − ∑_{j=1}^{n} ∂x_j^2 )F (t, x) = cδ(t)δ(x)
for some real constant c.
Hint: Check which distributions with support at (0, 0) sat-
isfy (3.14).
viii) If ψ ∈ Cc∞ (Rn+1 ) show that u = F ? ψ satisfies
(3.16) u ∈ C ∞ (Rn+1 ) and sup_{x∈Rn , t∈[−S,S]} (1 + |x|)N |Dα u(t, x)| < ∞ ∀ S > 0, α ∈ Nn+1 , N.
ix) Supposing that u satisfies (3.16) and is a real-valued solution of
(∂t − ∑_{j=1}^{n} ∂x_j^2 )u(t, x) = 0
in Rn+1 , show that
v(t) = ∫_{Rn} u2 (t, x) dx
is a non-increasing function of t.
Hint: Multiply the equation by u and integrate over a slab
[t1 , t2 ] × Rn .
x) Show that c in (3.15) is non-zero by arriving at a contradiction
from the assumption that it is zero. Namely, show that if c = 0
then u in viii) satisfies the conditions of ix) and also vanishes
in t < T for some T (depending on ψ). Conclude that u = 0 for
all ψ. Using properties of convolution show that this in turn
implies that F = 0 which is a contradiction.
xi) So, finally, we know that E = (1/c)F is a fundamental solution of
the heat operator which vanishes in t < 0. Explain why this
allows us to show that for any ψ ∈ Cc∞ (R × Rn ) there is a
solution of
(3.17) (∂t − ∑_{j=1}^{n} ∂x_j^2 )u = ψ, u = 0 in t < T for some T.
What is the largest value of T for which this holds?
xii) Can you give a heuristic, or indeed a rigorous, explanation of why
c = ∫_{Rn} exp(−|x|2 /4) dx?
xiii) Explain why the argument we used for the wave equation to
show that there is only one solution, u ∈ C ∞ (Rn+1 ), of (3.17)
does not apply here. (Indeed such uniqueness does not hold
without some growth assumption on u.)
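For orientation on x) and xii) of Problem 64, one can check numerically that the total heat ∫ F (t, x) dx is independent of t > 0 and equals ∫ exp(−|x|2 /4) dx = (4π)^{n/2}, which is where the constant in xii) comes from. The sketch below (Python with scipy; n = 1 and the sample times are arbitrary choices) is only an illustration, not part of the problem.

# Check that int_R t^{-1/2} exp(-x^2/(4t)) dx is independent of t and equals (4*pi)^{1/2}.
import numpy as np
from scipy.integrate import quad

def F(t, x):
    return t ** (-0.5) * np.exp(-x ** 2 / (4.0 * t))

for t in (0.1, 1.0, 7.3):
    mass, _ = quad(lambda x: F(t, x), -np.inf, np.inf)
    print(t, mass, np.sqrt(4.0 * np.pi))   # all masses agree with (4*pi)^{1/2}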
Problem 65. (Poisson summation formula) As in class, let L ⊂ Rn be an integral lattice of the form
L = { v = ∑_{j=1}^{n} kj vj , kj ∈ Z }
where the vj form a basis of Rn and using the dual basis wj (so wj · vi = δij is 0 or 1 as i 6= j or i = j) set
L◦ = { w = 2π ∑_{j=1}^{n} kj wj , kj ∈ Z }.
Recall that we defined
(3.18) C ∞ (TL ) = {u ∈ C ∞ (Rn ); u(z + v) = u(z) ∀ z ∈ Rn , v ∈ L}.
i) Show that summation over shifts by lattice points:
(3.19) AL : S(Rn ) ∋ f 7−→ AL f (z) = ∑_{v∈L} f (z − v) ∈ C ∞ (TL ).
defines a map into smooth periodic functions.
ii) Show that there exists f ∈ Cc∞ (Rn ) such that AL f ≡ 1 is the constant function on Rn .
iii) Show that the map (3.19) is surjective. Hint: Well obviously
enough use the f in part ii) and show that if u is periodic then
AL (uf ) = u.
iv) Show that the infinite sum
(3.20) F = ∑_{v∈L} δ(· − v) ∈ S 0 (Rn )
does indeed define a tempered distribution and that F is L-
periodic and satisfies exp(iw · z)F (z) = F (z) for each w ∈ L◦
with equality in S 0 (Rn ).
v) Deduce that F̂ , the Fourier transform of F, is L◦ periodic,
conclude that it is of the form
(3.21) F̂ (ξ) = c ∑_{w∈L◦} δ(ξ − w).
vi) Compute the constant c.
vii) Show that AL (f ) = F ? f.
viii) Using this, or otherwise, show that AL (f ) = 0 in C ∞ (TL ) if
and only if fˆ = 0 on L◦ .
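For orientation, the identity behind Problem 65 reduces, for L = Z and the transform convention f̂ (ξ) = ∫ e^{−ixξ} f (x) dx, to the classical statement ∑_k f (k) = ∑_k f̂ (2πk). The sketch below (Python/numpy; the Gaussian, its width, and the use of its exact transform are arbitrary choices made for the illustration) checks this numerically; the value of the constant c in (3.21) is left to part vi).

# Check the 1-d Poisson summation formula sum_k f(k) = sum_k fhat(2*pi*k)
# for f(x) = exp(-a x^2), with fhat(xi) = sqrt(pi/a) * exp(-xi^2/(4a))
# under the convention fhat(xi) = int exp(-i x xi) f(x) dx.  Sketch only.
import numpy as np

a = 0.7                                    # arbitrary width parameter
f = lambda x: np.exp(-a * x ** 2)
fhat = lambda xi: np.sqrt(np.pi / a) * np.exp(-xi ** 2 / (4.0 * a))

k = np.arange(-50, 51)
lhs = np.sum(f(k))                         # sum over the lattice Z
rhs = np.sum(fhat(2.0 * np.pi * k))        # sum over the dual lattice 2*pi*Z
print(lhs, rhs)                            # agree to machine precision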
Problem 66. For a measurable set Ω ⊂ Rn , with non-zero measure,
set H = L2 (Ω) and let B = B(H) be the algebra of bounded linear
operators on the Hilbert space H with the norm on B being
(3.22) kBkB = sup{kBf kH ; f ∈ H, kf kH = 1}.
i) Show that B is complete with respect to this norm. Hint (prob-
ably not necessary!) For a Cauchy sequence {Bn } observe that
Bn f is Cauchy for each f ∈ H.
ii) If V ⊂ H is a finite-dimensional subspace and W ⊂ H is a
closed subspace with a finite-dimensional complement (that is
W + U = H for some finite-dimensional subspace U ) show
that there is a closed subspace Y ⊂ W with finite-dimensional
complement (in H) such that V ⊥ Y, that is hv, yi = 0 for all
v ∈ V and y ∈ Y.
iii) If A ∈ B has finite rank (meaning AH is a finite-dimensional
vector space) show that there is a finite-dimensional space V ⊂
H such that AV ⊂ V and AV ⊥ = {0} where
V ⊥ = {f ∈ H; hf, vi = 0 ∀ v ∈ V }.
Hint: Set R = AH, a finite dimensional subspace by hypoth-
esis. Let N be the null space of A, show that N ⊥ is finite
dimensional. Try V = R + N ⊥ .
iv) If A ∈ B has finite rank, show that (Id −zA)−1 exists for all
but a finite set of z ∈ C (just quote some matrix theory).
What might it mean to say in this case that (Id −zA)−1 is
meromorphic in z? (No marks for this second part).
v) Recall that K ⊂ B is the algebra of compact operators, defined
as the closure of the space of finite rank operators. Show that
K is an ideal in B.
vi) If A ∈ K show that
Id +A = (Id +B)(Id +A0 )
where B ∈ K, (Id +B)−1 exists and A0 has finite rank. Hint:
Use the invertibility of Id +B when kBkB < 1 proved in class.
vii) Conclude that if A ∈ K then
{f ∈ H; (Id +A)f = 0} and ((Id +A)H)⊥ are finite dimensional.
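Part iv) of Problem 66 is already visible in finite dimensions: for a finite rank A the operator Id −zA fails to be invertible exactly when zλ = 1 for one of the finitely many nonzero eigenvalues λ of A. The sketch below (Python/numpy; the random rank-2 matrix and the generic value of z are arbitrary choices) checks this by evaluating det(Id −zA).

# Finite-dimensional illustration: (Id - zA)^{-1} exists except at z = 1/lambda
# for the finitely many nonzero eigenvalues lambda of a finite rank A.  Sketch only.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 2))
C = rng.standard_normal((2, 6))
A = B @ C                                   # a rank-2 operator on R^6

eigvals = np.linalg.eigvals(A)
nonzero = eigvals[np.abs(eigvals) > 1e-10]  # at most 2 of them

for lam in nonzero:
    z = 1.0 / lam
    print(np.linalg.det(np.eye(6) - z * A))   # ~ 0: Id - zA is singular

z = 0.123                                    # a generic value of z
print(np.linalg.det(np.eye(6) - z * A))      # nonzero: Id - zA is invertible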
Problem 67. [Separable Hilbert spaces]
i) (Gram-Schmidt Lemma). Let {vi }i∈N be a sequence in a
Hilbert space H. Let Vj ⊂ H be the span of the first j elements
and set Nj = dim Vj . Show that there is an orthonormal se-
quence e1 , . . . , ej (finite if Nj is bounded above) such that Vj is
the span of the first Nj elements. Hint: Proceed by induction


over N such that the result is true for all j with Nj < N. So,
consider what happens for a value of j with Nj = Nj−1 +1 and
add element eNj ∈ Vj which is orthogonal to all the previous
ek ’s.
ii) A Hilbert space is separable if it has a countable dense subset
(sometimes people say Hilbert space when they mean separa-
ble Hilbert space). Show that every separable Hilbert space
has a complete orthonormal sequence, that is a sequence {ej }
such that hu, ej i = 0 for all j implies u = 0.
iii) Let {ej } be an orthonormal sequence in a Hilbert space; show that for any aj ∈ C,
k ∑_{j=1}^{N} aj ej k2 = ∑_{j=1}^{N} |aj |2 .
iv) (Bessel’s inequality) Show that if ej is an orthonormal sequence in a Hilbert space and u ∈ H then
k ∑_{j=1}^{N} hu, ej iej k2 ≤ kuk2
and conclude (assuming the sequence of ej ’s to be infinite) that the series
∑_{j=1}^{∞} hu, ej iej
converges in H.
v) Show that if ej is a complete orthonormal basis in a separable
Hilbert space then, for each u ∈ H,
u = ∑_{j=1}^{∞} hu, ej iej .
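The inductive construction in part i) of Problem 67 is just the Gram-Schmidt process. Below is a minimal sketch in Python/numpy for vectors in C^N (a finite-dimensional stand-in for H); it drops a vector when it is already in the span of the previous ones, so the orthonormal sequence can be shorter than the input sequence, exactly as in the statement. The tolerance and the example vectors are arbitrary choices.

# Gram-Schmidt on a list of vectors in C^N, dropping (numerically) dependent vectors.
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    basis = []
    for v in vectors:
        w = np.array(v, dtype=complex)
        for e in basis:
            w = w - np.vdot(e, w) * e       # subtract the component along e
        nrm = np.linalg.norm(w)
        if nrm > tol:                        # otherwise v was already in the span
            basis.append(w / nrm)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([2.0, 2.0, 0.0]),             # dependent: gets dropped
      np.array([1.0, 0.0, 1.0])]
E = gram_schmidt(vs)
print(len(E))                                # 2 orthonormal vectors
print(np.round([[np.vdot(a, b) for b in E] for a in E], 12))  # identity matrix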
Problem 68. [Compactness] Let’s agree that a compact set in a
metric space is one for which every open cover has a finite subcover.
You may use the compactness of closed bounded sets in a finite dimen-
sional vector space.
i) Show that a compact subset of a Hilbert space is closed and
bounded.
ii) If {ej } is a complete orthonormal sequence in a separable Hilbert
space and K is compact show that given ε > 0 there exists N
such that
(3.23) ∑_{j≥N} |hu, ej i|2 ≤ ε ∀ u ∈ K.
iii) Conversely show that any closed bounded set in a separable
Hilbert space for which (3.23) holds for some orthonormal basis
is indeed compact.
iv) Show directly that any sequence in a compact set in a Hilbert
space has a convergent subsequence.
v) Show that a subspace of H which has a precompact unit ball
must be finite dimensional.
vi) Use the existence of a complete orthonormal basis to show that
any bounded sequence {uj }, kuj k ≤ C, has a weakly conver-
gent subsequence, meaning that hv, uj i converges in C along
the subsequence for each v ∈ H. Show that the subsequence
can be chosen so that hek , uj i converges for each k, where ek
is the complete orthonormal sequence.
Problem 69. [Spectral theorem, compact case] Recall that a bounded
operator A on a Hilbert space H is compact if A{kuk ≤ 1} is precom-
pact (has compact closure). Throughout this problem A will be a
compact operator on a separable Hilbert space, H.
i) Show that if 0 6= λ ∈ C then
Eλ = {u ∈ H; Au = λu}
is finite dimensional.
ii) If A is self-adjoint show that all eigenvalues (meaning Eλ 6=
{0}) are real and that different eigenspaces are orthogonal.
iii) Show that αA = sup{|hAu, ui|2 ; kuk = 1} is attained. Hint:
Choose a sequence such that |hAuj , uj i|2 tends to the supremum,
pass to a weakly convergent subsequence as discussed above
and then use the compactness to pass to a further subsequence such
that Auj converges.
iv) If v is such a maximum point and f ⊥ v show that hAv, f i +
hAf, vi = 0.
v) If A is also self-adjoint and u is a maximum point as in iii)
deduce that Au = λu for some λ ∈ R and that λ = ±α.
vi) Still assuming A to be self-adjoint, deduce that there is a finite-
dimensional subspace M ⊂ H, the sum of eigenspaces with
eigenvalues ±α, containing all the maximum points.
vii) Continuing vi) show that A restricts to a self-adjoint bounded
operator on the Hilbert space M ⊥ and that the supremum in
iii) for this new operator is smaller.
viii) Deduce that for any compact self-adjoint operator on a sep-


arable Hilbert space there is a complete orthonormal basis of
eigenvectors. Hint: Be careful about the null space – it could
be big.
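The strategy of iii)-viii) in Problem 69 can be tried out on a matrix: maximize |hAu, ui| over the unit sphere to get an eigenvalue of largest modulus, restrict to the orthocomplement of its eigenvector, and repeat. The Python/numpy sketch below is illustrative only; the 4 × 4 random symmetric matrix is arbitrary, and power iteration is used as a cheap way to locate the maximizer (assuming the eigenvalue of largest modulus is simple in modulus). The result matches numpy’s eigenvalue routine.

# Extract the eigenvalues of a real symmetric A by repeatedly maximizing |<Au,u>|
# on the unit sphere and deflating, mirroring the argument of Problem 69.  Sketch only.
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2.0                        # self-adjoint

def top_eigenpair(S, iters=2000):
    # power iteration: assumes the eigenvalue of largest modulus is simple in modulus
    u = rng.standard_normal(S.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        w = S @ u
        u = w / np.linalg.norm(w)
    lam = u @ S @ u                        # Rayleigh quotient, = +/- max |eigenvalue|
    return lam, u

B = A.copy()
found = []
for _ in range(A.shape[0]):
    lam, u = top_eigenpair(B)
    found.append(lam)
    B = B - lam * np.outer(u, u)           # deflate: restrict to the orthocomplement of u
print(np.sort(found))
print(np.sort(np.linalg.eigvalsh(A)))      # agrees to high accuracy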
Problem 70. Show that a (complex-valued) square-integrable func-
tion u ∈ L2 (Rn ) is continuous in the mean, in the sense that
(3.24) lim_{ε↓0} sup_{|y|<ε} ∫ |u(x + y) − u(x)|2 dx = 0.
Hint: Show that it is enough to prove this for non-negative functions
and then that it suffices to prove it for non-negative simple functions
and finally that it is enough to check it for the characteristic function
of an open set of finite measure. Then use Problem 57 to show that it
is true in this case.
Problem 71. [Ascoli-Arzela] Recall the proof of the theorem of
Ascoli and Arzela, that a subset B ⊂ C00 (Rn ) is precompact (with respect
to the supremum norm) if and only if it is equicontinuous and equi-
small at infinity, i.e. given ε > 0 there exists δ > 0 such that for all
elements u ∈ B
(3.25) |y| < δ =⇒ sup_{x∈Rn} |u(x + y) − u(x)| < ε and |x| > 1/δ =⇒ |u(x)| < ε.
Problem 72. [Compactness of sets in L2 (Rn ).] Show that a subset
B ⊂ L2 (Rn ) is precompact in L2 (Rn ) if and only if it satisfies the
following two conditions:
i) (Equi-continuity in the mean) For each ε > 0 there exists δ > 0 such that
(3.26) ∫_{Rn} |u(x + y) − u(x)|2 dx < ε ∀ |y| < δ, u ∈ B.
ii) (Equi-smallness at infinity) For each ε > 0 there exists R such that
(3.27) ∫_{|x|>R} |u|2 dx < ε ∀ u ∈ B.
Hint: Problem 70 shows that (3.26) holds for each u ∈ L2 (Rn ); check
that (3.27) also holds for each function. Then use a covering argument
to prove that both these conditions must hold for a compact subset
of L2 (R) and hence for a precompact set. One method to prove the
converse is to show that if (3.26) and (3.27) hold then B is bounded
and to use this to extract a weakly convergent sequence from any given
sequence in B. Next show that (3.26) is equivalent to (3.27) for the
set F(B), the image of B under the Fourier transform. Show, possi-
bly using Problem 71, that if χR is cut-off to a ball of radius R then
χR G(χR ûn ) converges strongly if un converges weakly. Deduce from
this that the weakly convergent subsequence in fact converges strongly
so B̄ is sequentially compact, and hence is compact.
Problem 73. Consider the space Cc (Rn ) of all continuous functions
on Rn with compact support. Thus each element vanishes in |x| > R
for some R, depending on the function. We want to give this a topology
in terms of which it is complete. We will use the inductive limit topology.
Thus the whole space can be written as a countable union
(3.28) Cc (Rn ) = ⋃_{n∈N} {u : Rn −→ C; u is continuous and u(x) = 0 for |x| > n}.
Each of the spaces on the right is a Banach space for the supremum
norm.
(1) Show that the supremum norm is not complete on the whole
of this space.
(2) Define a subset U ⊂ Cc (Rn ) to be open if its intersection with
each of the subspaces on the right in (3.28) is open w.r.t. the
supremum norm.
(3) Show that this definition does yield a topology.
(4) Show that any sequence {fn } which is ‘Cauchy’ in the sense
that for any open neighbourhood U of 0 there exists N such
that fn − fm ∈ U for all n, m ≥ N, is convergent (in the
corresponding sense that there exists f in the space such that
f − fn ∈ U eventually).
(5) If you are determined, discuss the corresponding issue for nets.
Problem 74. Show that the continuity of a linear functional u :
Cc∞ (Rn ) −→ C with respect to the inductive limit topology defined in
(1.17) means precisely that for each n ∈ N there exists k = k(n) and
C = Cn such that
(3.29) |u(ϕ)| ≤ CkϕkC k , ∀ ϕ ∈ C˙∞ (B(n)).
The point of course is that the ‘order’ k and the constant C can both
increase as n, measuring the size of the support, increases.
Problem 75. [Restriction from Sobolev spaces] The Sobolev em-
bedding theorem shows that a function in H m (Rn ), for m > n/2 is
continuous – and hence can be restricted to a subspace of Rn . In fact
this works more generally. Show that there is a well defined restriction
map
(3.30) H m (Rn ) −→ H^{m−1/2} (Rn−1 ) if m > 1/2
with the following properties:
(1) On S(Rn ) it is given by u 7−→ u(0, x0 ), x0 ∈ Rn−1 .
(2) It is continuous and linear.
Hint: Use the usual method of finding a weak version of the map on
smooth Schwartz functions; namely show that in terms of the Fourier
transforms on Rn and Rn−1
(3.31) (u(0, ·))^(ξ′ ) = (2π)^{−1} ∫_R û(ξ1 , ξ′ ) dξ1 , ∀ ξ′ ∈ Rn−1 .
Use Cauchy’s inequality to show that this is continuous as a map on
Sobolev spaces as indicated and then the density of S(Rn ) in H m (Rn )
to conclude that the map is well-defined and unique.
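The Cauchy-Schwarz step suggested in the hint of Problem 75 can be sketched as follows; this is only an outline, under the assumption m > 1/2, with c_m := ∫_R (1 + s2 )^{−m} ds < ∞. From (3.31),
|(u(0, ·))^(ξ′ )|2 ≤ (2π)^{−2} ( ∫_R hξi^{−2m} dξ1 ) ( ∫_R hξi^{2m} |û(ξ1 , ξ′ )|2 dξ1 ),
and the substitution ξ1 = (1 + |ξ′ |2 )^{1/2} s gives
∫_R (1 + |ξ′ |2 + ξ12 )^{−m} dξ1 = c_m (1 + |ξ′ |2 )^{1/2−m} .
Multiplying by hξ′ i^{2m−1} and integrating in ξ′ then bounds ku(0, ·)k2 in H^{m−1/2} (Rn−1 ) by a constant times kuk2 in H m (Rn ), which is the continuity asserted in (3.30).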
Problem 76. [Restriction by WF] From class we know that the
product of two distributions, one with compact support, is defined
provided they have no ‘opposite’ directions in their wavefront set:
(3.32) if (x, ω) ∈ WF(u) =⇒ (x, −ω) ∉ WF(v) then uv ∈ Cc−∞ (Rn ).
Show that this product has the property that f (uv) = (f u)v = u(f v)
if f ∈ C ∞ (Rn ). Use this to define a restriction map to x1 = 0 for
distributions of compact support satisfying ((0, x0 ), (ω1 , 0)) ∈
/ WF(u)
as the product
(3.33) u0 = uδ(x1 ).
[Show that u0 (f ), f ∈ C ∞ (Rn ) only depends on f (0, ·) ∈ C ∞ (Rn−1 ).]
Problem 77. [Stone’s theorem] For a bounded self-adjoint opera-
tor A show that the spectral measure can be obtained from the resolvent
in the sense that for φ, ψ ∈ H
(3.34) lim_{ε↓0} (1/2πi) h[(A − t − iε)^{−1} − (A − t + iε)^{−1} ]φ, ψi −→ µφ,ψ

in the sense of distributions – or measures if you are prepared to work
harder!
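Stone’s formula (3.34) can be tested on a matrix, where the spectral measure is a finite sum of point masses: integrating the left-hand side over an interval containing exactly one eigenvalue should recover the corresponding value hPλ φ, ψi up to an error of order ε. The sketch below (Python/numpy; the diagonal matrix, vectors, interval and ε are all arbitrary choices) does this for A = diag(1, 2, 4) and the eigenvalue 2.

# Numerical check of Stone's formula for A = diag(1, 2, 4): integrating
# (2*pi*i)^{-1} <[(A - t - i*eps)^{-1} - (A - t + i*eps)^{-1}] phi, psi> dt
# over (1.5, 2.5) approximates phi_2 * psi_2, the spectral measure of {2}.
import numpy as np

A = np.diag([1.0, 2.0, 4.0])
phi = np.array([0.3, -1.1, 0.7])
psi = np.array([0.5, 0.4, -0.2])
eps = 1e-2
I = np.eye(3)

ts = np.linspace(1.5, 2.5, 4001)
dt = ts[1] - ts[0]
total = 0.0
for t in ts:
    Rp = np.linalg.inv(A - (t + 1j * eps) * I)   # resolvent at t + i*eps
    Rm = np.linalg.inv(A - (t - 1j * eps) * I)   # resolvent at t - i*eps
    total += ((Rp - Rm) @ phi) @ np.conj(psi)
integral = total * dt / (2.0j * np.pi)

print(integral.real, phi[1] * psi[1])            # close, up to an error of order eps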
Problem 78. If u ∈ S 0 (Rn ) and ψ 0 = ψR + µ is, as in the proof of
Lemma 7.5, such that
supp(ψ 0 ) ∩ Css(u) = ∅
show that
S(Rn ) 3 φ 7−→ φψ 0 u ∈ S(Rn )
is continuous and hence (or otherwise) show that the functional u1 u2


defined by (7.20) is an element of S 0 (Rn ).
Problem 79. Under the conditions of Lemma 7.10 show that
(3.35) Css(u ∗ v) ∩ Sn−1 ⊂ {(sx + ty)/|sx + ty| ; |x| = |y| = 1, x ∈ Css(u), y ∈ Css(v), 0 ≤ s, t ≤ 1}.
Notice that this makes sense exactly because sx + ty = 0 implies that
t/s = 1 but x + y 6= 0 under these conditions by the assumption of
Lemma 7.10.
Problem 80. Show that the pairing u(v) of two distributions u, v ∈
S 0 (Rn ) may be defined under the hypothesis (7.50).
Problem 81. Show that under the hypothesis (7.51)
(3.36)
WFsc (u∗v) ⊂ {(x+y, p); (x, p) ∈ WFsc (u)∩(Rn ×Sn−1 ), (y, p) ∈ WFsc (v)∩(Rn ×Sn−1 )}
∪ {(θ, q) ∈ Sn−1 × Bn ; θ = (s0 θ0 + s00 θ00 )/|s0 θ0 + s00 θ00 | , 0 ≤ s0 , s00 ≤ 1,
(θ0 , q) ∈ WFsc (u) ∩ (Sn−1 × Bn ), (θ00 , q) ∈ WFsc (v) ∩ (Sn−1 × Bn )}.
Problem 82. Formulate and prove a bound similar to (3.36) for
WFsc (uv) when u, v ∈ S 0 (Rn ) satisfy (7.50).
Problem 83. Show that for convolution u ∗ v defined under con-
dition (7.51) it is still true that
(3.37) P (D)(u ∗ v) = (P (D)u) ∗ v = u ∗ (P (D)v).
Problem 84. Using Problem 80 (or otherwise) show that integra-
tion is defined as a functional
(3.38) {u ∈ S 0 (Rn ); (Sn−1 × {0}) ∩ WFsc (u) = ∅} −→ C.
If u satisfies this condition, show that ∫ P (D)u = c ∫ u where c is the
constant term in P (D), i.e. P (D)1 = c.
Problem 85. Compute WFsc (E) where E = C/|x − y| is the stan-
dard fundamental solution for the Laplacian on R3 . Using Problem 83
give a condition on WFsc (f ) under which u = E ∗ f is defined and
satisfies ∆u = f. Show that under this condition ∫ f is defined using
Problem 84. What can you say about WFsc (u)? Why is it not the case
that ∫ ∆u = 0, even though this is true if u has compact support?
4. Solutions to (some of ) the problems
Solution 4.1 (To Problem 10). (by Matjaž Konvalinka).
Since the topology on N, inherited from R, is discrete, a set is
compact if and only if it is finite. A sequence {xn } (i.e. a function
N → C) is in C0 (N) if and only if for any ε > 0 there exists a compact
(hence finite) set F so that |xn | < ε for any n not in F . We can
assume that F = {1, . . . , nε }, which gives us the condition that {xn }
is in C0 (N) if and only if it converges to 0. We denote this space by c0 ,
and the supremum norm by k · k0 . A sequence {xn } will be abbreviated
to x.
Let l1 denote the space of (real or complex) sequences x with a
finite 1-norm
kxk1 = ∑_{n=1}^{∞} |xn |.
We can define pointwise summation and multiplication with scalars,
and (l1 , k · k1 ) is a normed (in fact Banach) space. Because the func-
tional
y 7→ ∑_{n=1}^{∞} xn yn
is linear and bounded (|∑_{n=1}^{∞} xn yn | ≤ ∑_{n=1}^{∞} |xn ||yn | ≤ kyk0 kxk1 ) by
kxk1 , the mapping
Φ : l1 7−→ c∗0
defined by
x 7→ ( y 7→ ∑_{n=1}^{∞} xn yn )
is a (linear) well-defined mapping with norm at most 1. In fact, Φ is
an isometry because if |xj | = kxk0 then |Φ(x)(ej )| = 1 where ej is
the j-th unit vector. We claim that Φ is also surjective (and hence an
isometric isomorphism). If ϕ is a functional on c0 let us denote ϕ(ej )
by xj . Then Φ(x)(y) = ∑_{n=1}^{∞} ϕ(en )yn = ∑_{n=1}^{∞} ϕ(yn en ) = ϕ(y) (the
last equality holds because ∑_{n=1}^{∞} yn en converges to y in c0 and ϕ is
continuous with respect to the topology in c0 ), so Φ(x) = ϕ.
continuous with respect to the topology in c0 ), so Φ(x) = ϕ.
Solution 4.2 (To Problem 29). (Matjaž Konvalinka) Since
Dx H(ϕ) = H(−Dx ϕ) = i ∫_{−∞}^{∞} H(x)ϕ′ (x) dx = i ∫_0^{∞} ϕ′ (x) dx = i(0 − ϕ(0)) = −iδ(ϕ),
we get Dx H = Cδ for C = −i.
Solution 4.3 (To Problem 40). (Matjaž Konvalinka) Let us prove
this in the case where n = 1. Define (for b 6= 0)
U (x) = u(b) − u(x) − (b − x)u′ (x) − . . . − ((b − x)^{k−1}/(k − 1)!) u^{(k−1)} (x);
then
U ′ (x) = − ((b − x)^{k−1}/(k − 1)!) u^{(k)} (x).
For the continuously differentiable function V (x) = U (x)−(1−x/b)k U (0)
we have V (0) = V (b) = 0, so by Rolle’s theorem there exists ζ between
0 and b with
V ′ (ζ) = U ′ (ζ) + (k(b − ζ)^{k−1}/b^k ) U (0) = 0.
Then
U (0) = − (b^k /(k(b − ζ)^{k−1})) U ′ (ζ),
u(b) = u(0) + u′ (0)b + . . . + (u^{(k−1)} (0)/(k − 1)!) b^{k−1} + (u^{(k)} (ζ)/k!) b^k .
The required decomposition is u(x) = p(x) + v(x) for
p(x) = u(0) + u′ (0)x + (u′′ (0)/2) x2 + ... + (u^{(k−1)} (0)/(k − 1)!) x^{k−1} + (u^{(k)} (0)/k!) x^k ,
v(x) = u(x) − p(x) = ((u^{(k)} (ζ) − u^{(k)} (0))/k!) x^k
for ζ between 0 and x, and since u(k) is continuous, (u(x) − p(x))/xk
tends to 0 as x tends to 0.
The proof for general n is not much more difficult. Define the
function wx : I → R by wx (t) = u(tx). Then wx is k-times continuously
differentiable,
wx′ (t) = ∑_{i=1}^{n} (∂u/∂xi )(tx) xi ,
wx′′ (t) = ∑_{i,j=1}^{n} (∂2 u/∂xi ∂xj )(tx) xi xj ,
wx^{(l)} (t) = ∑_{l1 +l2 +...+ln =l} (l!/(l1 !l2 ! · · · ln !)) (∂l u/∂x1^{l1} ∂x2^{l2} · · · ∂xn^{ln} )(tx) x1^{l1} x2^{l2} · · · xn^{ln}
so by the above u(x) = wx (1) is the sum of some polynomial p (of degree
k) and v(x), and we have
(u(x) − p(x))/|x|^k = vx (1)/|x|^k = (wx^{(k)} (ζx ) − wx^{(k)} (0))/(k!|x|^k ) ,
so it is bounded by a positive combination of terms of the form
|(∂l u/∂x1^{l1} ∂x2^{l2} · · · ∂xn^{ln} )(ζx x) − (∂l u/∂x1^{l1} ∂x2^{l2} · · · ∂xn^{ln} )(0)|
with l1 + . . . + ln = k and 0 < ζx < 1. This tends to zero as x → 0
because the derivative is continuous.
Solution 4.4 (Solution to Problem 41). (Matjaž Konvalinka) Obvi-
ously the map C0 (Bn ) → C(Bn ) is injective (since it is just the inclusion
map), and f ∈ C(Bn ) is in C0 (Bn ) if and only if it is zero on ∂Bn , i.e. if
and only if f |Sn−1 = 0. It remains to prove that any map g on Sn−1 is
the restriction of a continuous function on Bn . This is clear since
f (x) = |x| g(x/|x|) for x 6= 0, f (0) = 0
is well-defined, coincides with g on Sn−1 , and is continuous: if M is
the maximum of |g| on Sn−1 , and ε > 0 is given, then |f (x)| < ε for
|x| < ε/M.
Solution 4.5. (partly Matjaž Konvalinka)
For any ϕ ∈ S(R) we have
|∫_{−∞}^{∞} ϕ(x) dx| ≤ ∫_{−∞}^{∞} |ϕ(x)| dx ≤ sup((1 + |x|2 )|ϕ(x)|) ∫_{−∞}^{∞} (1 + |x|2 )^{−1} dx ≤ C sup((1 + |x|2 )|ϕ(x)|).
Thus S(R) ∋ ϕ 7−→ ∫_R ϕ dx is continuous.
Now, choose φ ∈ Cc∞ (R) with ∫_R φ(x) dx = 1. Then, for ψ ∈ S(R), set
(4.1) Aψ(x) = ∫_{−∞}^{x} (ψ(t) − c(ψ)φ(t)) dt, c(ψ) = ∫_{−∞}^{∞} ψ(s) ds.
Note that the assumption on φ means that
(4.2) Aψ(x) = − ∫_{x}^{∞} (ψ(t) − c(ψ)φ(t)) dt.
Clearly Aψ is smooth, and in fact it is a Schwartz function since
(4.3) (d/dx)(Aψ(x)) = ψ(x) − c(ψ)φ(x) ∈ S(R)
so it suffices to show that x^k Aψ is bounded for any k as |x| → ±∞.
Since |ψ(t) − c(ψ)φ(t)| ≤ Ck t^{−k−1} in t ≥ 1 it follows from (4.2) that
|x^k Aψ(x)| ≤ C x^k ∫_{x}^{∞} t^{−k−1} dt ≤ C ′ , k > 1, in x > 1.
A similar estimate as x → −∞ follows from (4.1). Now, A is clearly
linear, and it follows from the estimates above, including that on the
integral, that for any k there exists C and j such that
sup_{α,β≤k, x∈R} |x^α D^β Aψ| ≤ C ∑_{α′ ,β ′ ≤j} sup |x^{α′} D^{β ′} ψ|.
Finally then, given u ∈ S 0 (R) define v(ψ) = −u(Aψ). From the
continuity of A, v ∈ S 0 (R) and from the definition of A, A(ψ ′ ) = ψ.
Thus
dv/dx(ψ) = v(−ψ ′ ) = u(Aψ ′ ) = u(ψ) =⇒ dv/dx = u.
Solution 4.6. We have to prove that hξi^{m+m′} û ∈ L2 (Rn ), in other
words, that
∫_{Rn} hξi^{2(m+m′)} |û|2 dξ < ∞.
But that is true since
∫_{Rn} hξi^{2(m+m′)} |û|2 dξ = ∫_{Rn} hξi^{2m′} (1 + ξ12 + . . . + ξn2 )^m |û|2 dξ
= ∫_{Rn} hξi^{2m′} ( ∑_{|α|≤m} Cα ξ^{2α} ) |û|2 dξ = ∑_{|α|≤m} Cα ∫_{Rn} hξi^{2m′} ξ^{2α} |û|2 dξ
and since hξi^{m′} ξ^α û = hξi^{m′} (D^α u)^ is in L2 (Rn ) (note that u ∈ H m (Rn )
follows from Dα u ∈ H^{m′} (Rn ), |α| ≤ m). The converse is also true since
Cα in the formula above are strictly positive.
Solution 4.7. Take v ∈ L2 (Rn ), and define subsets of Rn by
E0 = {x : |x| ≤ 1},
Ei = {x : |x| ≥ 1, |xi | = max_j |xj |}.
Then obviously we have 1 = ∑_{j=0}^{n} χEj a.e., and v = ∑_{j=0}^{n} vj for vj =
χEj v. Then hxi is bounded by 2 on E0 , and hxiv0 ∈ L2 (Rn ); and on
Ej , 1 ≤ j ≤ n, we have
hxi/|xj | ≤ (1 + n|xj |2 )^{1/2}/|xj | = (n + 1/|xj |2 )^{1/2} ≤ (2n)^{1/2} ,
so hxivj = xj wj for wj ∈ L2 (Rn ). But that means that hxiv = w0 + ∑_{j=1}^{n} xj wj for wj ∈ L2 (Rn ).
If u is in L2 (Rn ) then û ∈ L2 (Rn ), and so there exist w0 , . . . , wn ∈ L2 (Rn ) so that
hξi û = w0 + ∑_{j=1}^{n} ξj wj ,
in other words
û = û0 + ∑_{j=1}^{n} ξj ûj
where hξi ûj ∈ L2 (Rn ). Hence
u = u0 + ∑_{j=1}^{n} Dj uj
where uj ∈ H 1 (Rn ).
Solution 4.8. Since
Dx H(ϕ) = H(−Dx ϕ) = i ∫_{−∞}^{∞} H(x)ϕ′ (x) dx = i ∫_0^{∞} ϕ′ (x) dx = i(0 − ϕ(0)) = −iδ(ϕ),
we get Dx H = Cδ for C = −i.
Solution 4.9. It is equivalent to ask when hξi^m δ̂0 is in L2 (Rn ).
Since
δ̂0 (ψ) = δ0 (ψ̂) = ψ̂(0) = ∫_{Rn} ψ(x) dx = 1(ψ),
this is equivalent to finding m such that hξi2m has a finite integral over
Rn . One option is to write hξi = (1 + r2 )1/2 in spherical coordinates,
and to recall that the Jacobian of spherical coordinates in n dimensions
has the form rn−1 Ψ(ϕ1 , . . . , ϕn−1 ), and so hξi2m is integrable if and only
if
∫_0^∞ r^{n−1}/(1 + r2 )^m dr
converges. It is obvious that this is true if and only if n − 1 − 2m < −1,
i.e. if and only if m > n/2.
Solution 4.10 (Solution to Problem 31). We know that δ ∈ H m (Rn )
for any m < −n/2. This is just because hξi^p ∈ L2 (Rn ) when p < −n/2.
Now, divide Rn into n + 1 regions, as above, being A0 = {ξ; |ξ| ≤ 1} and
Ai = {ξ; |ξi | = supj |ξj |, |ξ| ≥ 1}. Let v0 have Fourier transform χA0
and for i = 1, . . . , n, vi ∈ S 0 (Rn ) have Fourier transforms ξi^{−n−1} χAi .
Since |ξi | > chξi on the support of vbi for each i = 1, . . . , n, each term
is in H m for any m < 1 + n/2 so, by the Sobolev embedding theorem,
each vi ∈ C00 (Rn ) and
(4.4) 1 = v̂0 + ∑_{i=1}^{n} ξi^{n+1} v̂i =⇒ δ = v0 + ∑_{i} Di^{n+1} vi .
How to see that this cannot be done with n or less derivatives? For
the moment I do not have a proof of this, although I believe it is true.
Notice that we are actually proving that δ can be written
(4.5) δ = ∑_{|α|≤n+1} Dα uα , uα ∈ H^{n/2} (Rn ).
This cannot be improved to n from n + 1 since this would mean that
δ ∈ H −n/2 (Rn ), which it isn’t. However, what I am asking is a little
more subtle than this.
Bibliography
[1] G.B. Folland, Real analysis, Wiley, 1984.
[2] F. G. Friedlander, Introduction to the theory of distributions, second ed., Cambridge University Press, Cambridge, 1998, With additional material by M. Joshi. MR 2000g:46002
[3] J. Hadamard, Le problème de Cauchy et les équations aux dérivées partielles linéaires hyperboliques, Hermann, Paris, 1932.
[4] L. Hörmander, The analysis of linear partial differential operators, vol. 2, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1983.
[5] L. Hörmander, The analysis of linear partial differential operators, vol. 3, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1985.
[6] W. Rudin, Real and complex analysis, third ed., McGraw-Hill, 1987.
[7] George F. Simmons, Introduction to topology and modern analysis, Robert E. Krieger Publishing Co. Inc., Melbourne, Fla., 1983, Reprint of the 1963 original. MR 84b:54002