
Distributions – generalized functions

András Vasy

March 25, 2004


The problem
One of the main achievements of 19th century mathematics was to carefully analyze concepts such as the continuity and differentiability of functions. Recall that f is differentiable at x, with derivative f'(x) = L, if the limit

\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}

exists and is equal to L.

While it was always clear that not every continuous function is differentiable, e.g. the function f : R → R given by f(x) = |x| is not differentiable at 0, it was not until the work of Bolzano and Weierstrass that the full extent of the problem became clear: there are nowhere differentiable continuous functions.
Let u be the saw-tooth function: u(0) = 0, u(1/2) = 1/2, u is periodic with period 1, and linear on [0, 1/2] as well as on [1/2, 1]. Then let

f(x) = \sum_{j=0}^{\infty} c_j\, u(q^j x),

for suitable c_j and q – e.g. q = 16, c_j = 2^{-j} work. Then the sum converges to a continuous function f, but the difference quotients do not have limits. In fact, u could even be replaced by u(x) = sin(2πx).
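One way to see the claimed convergence (with q = 16, c_j = 2^{-j} as above) is the Weierstrass M-test: since 0 ≤ u ≤ 1/2,

|c_j\, u(q^j x)| \le 2^{-j-1} \ \text{for all } x, \qquad \sum_{j=0}^{\infty} 2^{-j-1} = 1 < \infty,

so the series converges uniformly on R, and a uniform limit of continuous functions is continuous.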

However, one can make sense of f' and even the 27th derivative of f for any continuous f if one relaxes the requirement that f' be a function. So, for instance, we cannot expect f' to have values at any point – it will be a distribution, i.e. a ‘generalized function’, introduced by Schwartz and Sobolev.
Why care?

• PDE’s: most PDE’s are not explicitly solvable. Related techniques play a crucial role in analyzing PDE’s.

• Another PDE example: take the wave equation on the line:

u_{tt} = c^2 u_{xx},

u a function on R_x × R_t, u_tt = ∂²u/∂t², etc. The general solution of this PDE, obtained by d’Alembert in the 18th century, is

u(x, t) = f(x + ct) + g(x - ct),

where f and g are ‘arbitrary’ functions on R. Indeed, it is easy to check by the chain rule that u solves the PDE – as long as we can make sense of the differentiation (the check is sketched below). So, in the ‘classical sense’, f, g twice continuously differentiable, written as f, g ∈ C^2(R), suffice.
But shouldn’t this also work for rougher f, g? For instance, what about the step function f: f(x) = 1 if x ≥ 0, f(x) = 0 for x < 0?
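For classical f, g ∈ C^2(R), the chain rule check referred to above reads

u_t = c f'(x + ct) - c g'(x - ct), \qquad u_{tt} = c^2 f''(x + ct) + c^2 g''(x - ct),

u_x = f'(x + ct) + g'(x - ct), \qquad u_{xx} = f''(x + ct) + g''(x - ct),

so u_{tt} = c^2 u_{xx} indeed.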

• Limits of familiar objects are often distributions. For example, for ε > 0, define f_ε : R → C by

f_\epsilon(x) = \frac{1}{x + i\epsilon}.

What is lim_{ε→0} f_ε? For x ≠ 0, of course the limit makes sense directly – it is f(x) = 1/x. But what about x = 0? For instance, does ∫_{−1}^{1} f(x) dx make sense, and what is it? Note that this integral does not converge due to the behavior of the integrand at 0!

However, we can take

\lim_{\epsilon \to 0} \int_{-1}^{1} f_\epsilon(x)\, dx = \lim_{\epsilon \to 0} \log(x + i\epsilon)\Big|_{-1}^{1} = \log(1) - \log(-1 + i0) = 0 - i\pi = -i\pi,

using the branch of the logarithm continuous in the upper half plane, for which log(−1 + i0) = iπ. So, the integral of the limit f on [−1, 1] should be −iπ. Can we make sense of this directly?

• Idealization of physical problems often results in distributions. For instance, the sharp front for the wave equation discussed above, or point charges (the electron is supposed to be such!), are good examples.

I will usually talk about functions on R, but almost everything makes sense on R^n, n ≥ 1 arbitrary.

Notation:

• We say that f is C^0 if f is continuous.

• We say that f is C^k, k ≥ 1 an integer, if f is k times continuously differentiable, i.e. if f is C^{k-1} and its (k−1)st derivative, f^{(k-1)}, is differentiable, and its derivative, f^{(k)}, is continuous.

• We say that f is C^∞, i.e. f is infinitely differentiable, if f is C^k for every k.

Motivation: to deal with very ‘bad’ objects, first we need very ‘good’ ones.

Example of an interesting C^∞ function on R: f(x) = 0 for x ≤ 0, f(x) = e^{-1/x} for x > 0.

An even more interesting example: g(x) = f(1 − x^2). Note that g is 0 for |x| ≥ 1.
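A sketch of why f is C^∞ (the only issue is at x = 0): by induction, on x > 0 every derivative f^{(k)} has the form P_k(1/x) e^{-1/x} for some polynomial P_k, and since e^t grows faster than any polynomial,

\lim_{x \to 0^+} P_k(1/x)\, e^{-1/x} = 0,

so every derivative of f extends continuously by 0 across x = 0. Then g is C^∞ as a composition of C^∞ functions.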

Our very good functions then will be the (complex-valued) functions φ which are C^∞ and which are 0 outside a bounded set, i.e. there is R > 0 such that φ(x) = 0 for |x| ≥ R. The set of such functions is denoted by C_c^∞(R), and its elements are called ‘compactly supported smooth functions’ or simply ‘test functions’.

There are other sets of very good functions with which analogous conclusions are possible: e.g. C^∞ functions which decrease faster than C_k |x|^{-k} at infinity for all k, with analogous estimates holding for their derivatives. Such functions are called Schwartz functions.

The set C_c^∞(R) is a vector space with the usual pointwise addition of functions and pointwise multiplication by scalars c ∈ C. Since this is an infinite dimensional vector space, we need one more notion:

Suppose that φ_n, n ∈ N, is a sequence in C_c^∞(R), and φ ∈ C_c^∞(R). We say that φ_n → φ in C_c^∞(R) if

1. there is an R > 0 such that φ_n(x) = 0 for all n and for all |x| ≥ R,

2. and for all k, max_{x∈R} |d^k/dx^k (φ_n − φ)| → 0 as n → ∞, i.e. for all k and for all ε > 0, there is N such that

n \ge N,\ x \in \mathbb{R} \ \Rightarrow\ \left| \frac{d^k}{dx^k}(\varphi_n - \varphi)(x) \right| < \epsilon.
Now we ‘dualize’ C_c^∞(R) to define distributions:

A distribution u ∈ D'(R) is a continuous linear functional u : C_c^∞(R) → C. That is:

1. u is linear:

u(c_1 \varphi_1 + c_2 \varphi_2) = c_1 u(\varphi_1) + c_2 u(\varphi_2)

for all c_j ∈ C, φ_j ∈ C_c^∞(R), j = 1, 2.

2. u is continuous: if φ_n → φ in C_c^∞(R) then u(φ_n) → u(φ), i.e. lim_{n→∞} u(φ_n) = u(φ), in C.

The simplest example is the delta distribution: for a ∈ R, δ_a is the distribution given by δ_a(φ) = φ(a) for φ ∈ C_c^∞(R).
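The continuity required by the definition is immediate here: if φ_n → φ in C_c^∞(R), then taking k = 0 in the definition of convergence, max_{x∈R} |φ_n(x) − φ(x)| → 0, so

\delta_a(\varphi_n) = \varphi_n(a) \to \varphi(a) = \delta_a(\varphi).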

Another example: for φ ∈ C_c^∞(R), let u(φ) = φ'(1) − φ''(−2).
Why is this a generalization of functions?

If f is continuous (or indeed just locally integrable), we can associate a distribution ι(f) = ι_f to it:

\iota_f(\varphi) = \int_{\mathbb{R}} f(x)\varphi(x)\, dx.

Note that ι : C^0(R) → D'(R) is injective, i.e. ι_{f_1} = ι_{f_2} implies f_1 = f_2, or equivalently ι_f = 0 implies f = 0, so we can think of C^0(R) as a subset of D'(R), identifying f with ι_f.
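A sketch of the injectivity: if f(x_0) ≠ 0, then, replacing f by cf for a suitable constant c ∈ C, we may assume Re f(x_0) > 0, so Re f > 0 on some interval I around x_0. Taking a test function φ ≥ 0, not identically 0, supported in I gives

\operatorname{Re} \iota_f(\varphi) = \int_I (\operatorname{Re} f)(x)\,\varphi(x)\, dx > 0,

so ι_f ≠ 0.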

Here we already used that D'(R) is a vector space: u_1 + u_2 is the distribution given by (u_1 + u_2)(φ) = u_1(φ) + u_2(φ), while cu is the distribution given by (cu)(φ) = c u(φ) (c ∈ C).

Convergence: suppose that u_n is a sequence of distributions and u ∈ D'(R). We say that u_n → u in D'(R) if for all φ ∈ C_c^∞(R), lim_{n→∞} u_n(φ) = u(φ).

Example: Suppose that u_n ≥ 0 are continuous functions (i.e. u_n = ι_{f_n}, f_n continuous), u_n(x) = 0 for |x| ≥ 1/n, and ∫_R u_n(x) dx = 1. Then lim_{n→∞} u_n = δ_0.
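A sketch of why: since ∫ u_n = 1,

u_n(\varphi) - \varphi(0) = \int_{\mathbb{R}} u_n(x)\,(\varphi(x) - \varphi(0))\, dx,

so |u_n(φ) − φ(0)| ≤ max_{|x| ≤ 1/n} |φ(x) − φ(0)|, which tends to 0 by the continuity of φ.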
Example: Suppose u_ε(x) = 1/(x + iε). Then for ε > 0, φ ∈ C_c^∞(R), integrating by parts,

\int u_\epsilon(x)\varphi(x)\, dx = \int \frac{1}{x + i\epsilon}\,\varphi(x)\, dx = -\int \log(x + i\epsilon)\,\varphi'(x)\, dx.

But the last expression has a limit as ε → 0, for log |x| is locally integrable; the limit is

u(\varphi) = -\int \log(x + i0)\,\varphi'(x)\, dx,

where log(x + i0) = log |x| + iπH(−x), with H the step function H(x) = 1 if x > 0, H(x) = 0 if x < 0.
If one wants to, one can integrate by parts once more to get

u(\varphi) = \lim_{\epsilon \to 0} \int u_\epsilon(x)\varphi(x)\, dx = \lim_{\epsilon \to 0} \int (x + i\epsilon)(\log(x + i\epsilon) - 1)\,\varphi''(x)\, dx = \int x\,(\log(x + i0) - 1)\,\varphi''(x)\, dx,

with the integrand continuous now even at ε = 0.

The distribution u is called (x + i0)^{-1}.

A simple and interesting calculation gives

(x + i0)^{-1} - (x - i0)^{-1} = -2\pi i\, \delta_0.
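Here is a sketch of that calculation: for ε > 0,

\frac{1}{x + i\epsilon} - \frac{1}{x - i\epsilon} = \frac{-2i\epsilon}{x^2 + \epsilon^2},

and, substituting x = εt,

\int \frac{-2i\epsilon}{x^2 + \epsilon^2}\,\varphi(x)\, dx = -2i \int \frac{\varphi(\epsilon t)}{1 + t^2}\, dt \to -2i\,\varphi(0) \int \frac{dt}{1 + t^2} = -2\pi i\,\varphi(0)

as ε → 0.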


This is all well, but has the goal been achieved, namely can we differentiate any distribution? Yes! We could see this by approximating distributions by differentiable functions, whose derivatives we thus already know, and showing that the limit exists. But this requires first proving that every distribution can be approximated by such functions. So we proceed more directly.

If u = ι_f, and f is C^1, we want u' = ι_{f'}. That is, we want

u'(\varphi) = \iota_{f'}(\varphi) = \int f'(x)\varphi(x)\, dx = -\int f(x)\varphi'(x)\, dx = -\iota_f(\varphi') = -u(\varphi'),

integrating by parts, with no boundary terms since φ vanishes outside a bounded set. So for any u ∈ D'(R), we define u' ∈ D'(R) by

u'(\varphi) = -u(\varphi').

It is easy to see that u' is indeed a distribution. In particular, it can be differentiated again, etc. It is also easy to check that if u_n → u in D'(R) then u_n' → u' in D'(R).

Example: u = δ_a. Then u'(φ) = −u(φ') = −φ'(a), i.e. δ_a' is the distribution φ ↦ −φ'(a).

Example: u = ι_H, H the step function. Then

u'(\varphi) = -u(\varphi') = -\int_{-\infty}^{\infty} H(x)\varphi'(x)\, dx = -\int_0^{\infty} \varphi'(x)\, dx = \varphi(0) = \delta_0(\varphi)

by the fundamental theorem of calculus, so H' = δ_0.

Now it is easy to check that u(x, t) = H(x − ct) solves the wave equation!
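A sketch, using that distributions can be pulled back by invertible affine maps such as (x, t) ↦ x − ct, with the usual chain rule: by the preceding example H' = δ_0, so

u_t = -c\, H'(x - ct), \qquad u_{tt} = c^2\, H''(x - ct), \qquad u_{xx} = H''(x - ct),

hence u_{tt} = c^2 u_{xx}; this is d’Alembert’s solution with f = 0 and g = H.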

Another good feature is that all standard identities hold for distributional derivatives, e.g.

\frac{\partial^2 u}{\partial x\, \partial y} = \frac{\partial^2 u}{\partial y\, \partial x},

since they hold for test functions φ.
The downside: multiplication does not extend to D'(R), e.g. δ_0 · δ_0 makes no sense. To see this, consider a sequence u_n of continuous functions converging to δ_0, and check that u_n^2 does not converge to any distribution (an example is sketched below). Actually, there are algebraic problems as well: the product rule gives an incompatibility for differentiation and multiplication when applied to ‘bad’ functions.
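For instance, take the triangular bumps u_n(x) = n(1 − n|x|) for |x| ≤ 1/n, u_n(x) = 0 otherwise. These are continuous, with u_n ≥ 0, ∫ u_n = 1, and u_n = 0 for |x| ≥ 1/n, so u_n → δ_0 by the earlier example. But

\int_{\mathbb{R}} u_n(x)^2\, dx = 2 \int_0^{1/n} n^2 (1 - nx)^2\, dx = \frac{2n}{3} \to \infty,

so for a test function φ with φ = 1 near 0, u_n^2(φ) → ∞, and u_n^2 has no limit in D'(R).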

This is why solving non-linear PDE’s can be hard: differentiation and multiplication fight against each other: e.g. u_{tt} = u_{xx}^2.

However, one can still multiply distributions by C^∞ functions f: (fu)(φ) = u(fφ), motivated as for differentiation. Thus, distribution theory is ideal for solving variable coefficient linear PDE’s: e.g. u_{tt} = c(x)^2 u_{xx}.
Also note that

(x + i0)^{-1} \cdot (x + i0)^{-1} = (x + i0)^{-2}

makes perfectly good sense, as does (x − i0)^{-2}. The problem is with the product (x + i0)^{-1} · (x − i0)^{-1}. A more general perspective that distinguishes (x + i0)^{-1} and (x − i0)^{-1}, by saying that they are both singular at 0 but in different ‘directions’, is microlocal analysis.
As an application, consider the fundamental theorem of calculus. Suppose that u' = f, and f is a given distribution. What is u?

Since f(ψ) = u'(ψ) = −u(ψ'), we already know what u is applied to the derivative of a test function. But we need to know what u(φ) is for any test function φ.

So let φ_0 be a fixed test function with ∫_R φ_0(x) dx = 1. If φ ∈ C_c^∞(R), define φ̃ ∈ C_c^∞(R) by

\tilde\varphi(x) = \varphi(x) - \Big( \int_{\mathbb{R}} \varphi(x')\, dx' \Big)\varphi_0(x).

Then ∫_R φ̃(x) dx = 0, hence φ̃ is the derivative of a test function ψ, namely we can let

\psi(x) = \int_{-\infty}^{x} \tilde\varphi(x')\, dx';

note that ψ is compactly supported precisely because φ̃ has total integral 0. Thus, φ(x) = ψ'(x) + (∫_R φ(x') dx') φ_0(x), so

u(\varphi) = u(\psi') + \Big( \int_{\mathbb{R}} \varphi(x')\, dx' \Big) u(\varphi_0) = -f(\psi) + c \int_{\mathbb{R}} \varphi(x')\, dx'

with c = u(φ_0) a constant independent of φ. Thus, u is determined by u' = f, plus the knowledge of u(φ_0).

In particular, if f = 0, we deduce that u = ι_c, i.e. u is a constant function!

This is a form of the fundamental theorem of calculus: if u is C^1, a, b ∈ R, a < b, we can take φ_0 to approach δ_a and φ to approach δ_b, in which case ψ approaches the function that is −1 between a and b and 0 elsewhere, so we recover

u(b) = u(a) + \int_a^b f(x)\, dx.
More examples: electrostatics. The electrostatic potential u generated by a charge density ρ satisfies

-\Delta u = \rho, \qquad \Delta u = u_{xx} + u_{yy} + u_{zz}.

If ρ = δ_0, i.e. we have a point charge, what is u? We need conditions at infinity, such as u → 0 at infinity, to find u. In fact, u = 1/(4πr), r(X) = |X|, X = (x, y, z), as a direct calculation shows: to evaluate −Δu, consider

-\Delta u(\varphi) = u(-\Delta\varphi) = -\int_{\mathbb{R}^3} \frac{1}{4\pi|X|}\,\Delta\varphi(X)\, dX = -\lim_{\epsilon \to 0} \int_{|X| > \epsilon} \frac{1}{4\pi|X|}\,\Delta\varphi(X)\, dX,

and use the divergence theorem to show that the right hand side converges to φ(0) = δ_0(φ)!
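A sketch of that computation, with E(X) = 1/(4π|X|): Green’s identity on the region |X| > ε, together with ΔE = 0 for X ≠ 0, gives (the outward normal n on the sphere |X| = ε pointing towards the origin)

\int_{|X| > \epsilon} E\,\Delta\varphi\, dX = \int_{|X| = \epsilon} \big( E\,\partial_n\varphi - \varphi\,\partial_n E \big)\, dS.

The first boundary term is O(ε), since E = 1/(4πε) on a sphere of area 4πε², while ∂_n E = 1/(4πε²) there, so

-\int_{|X| = \epsilon} \varphi\,\partial_n E\, dS = -\frac{1}{4\pi\epsilon^2} \int_{|X| = \epsilon} \varphi\, dS \to -\varphi(0),

the averages of φ over small spheres tending to φ(0). Hence −Δu(φ) = −lim_{ε→0} ∫_{|X|>ε} E Δφ dX = φ(0) = δ_0(φ).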
This also solves the PDE −Δu = f for any f (with some decay at infinity), by

u(X) = \int E(X - Y)\, f(Y)\, dY, \qquad E(X) = \frac{1}{4\pi|X|};

this integral actually makes sense even if f is a distribution (with some decay at infinity).
