
Math 222a (Partial Differential Equations 1) Lecture

Notes
Izak Oltman
last updated: December 11, 2019

These are the lecture notes for a first semester graduate course in partial differential
equations taught at UC Berkeley by professor Sung-Jin Oh during the fall of 2019.

Contents
1 Introduction to PDE’s 3
1.1 Overview of PDE’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Basic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 First Order (scalar) PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Method of Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6 Existence of Characteristic Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7 Noncharacteristic Condition for kth Order Systems . . . . . . . . . . . . 20

2 Distributions 25
2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.4 Convergence of Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4.1 Approximation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.5 Fundamental Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5.1 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

3 Laplace Equation 40
3.1 Cauchy-Riemann Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3 Green’s Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

4 Wave Equation 55
4.1 1 dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55


5 Fourier Transform 65
5.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.1.1 Duhamel’s Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

6 Energy Method and Sobolev Spaces 75


6.1 Energy Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.2 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.3 Traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.4 Sobolev Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.5 Compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.6 Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

Appendices 92

A Midterm Review 92
A.1 First order scalar characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
A.1.1 Existence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
A.1.2 General Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.1.3 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.2 Noncharacteristic Condition for Kth order systems . . . . . . . . . . . . . . . . . 93
A.2.1 General Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.2.2 Power Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.3 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.3.1 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
A.3.2 Convergence of Distributions . . . . . . . . . . . . . . . . . . . . . . . . . 95
A.3.3 Operations between distributions . . . . . . . . . . . . . . . . . . . . . . . 96
A.4 Fundamental Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
A.5 Laplace equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

B Final Review 97
B.1 Green’s Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
B.2 Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
B.3 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

Index 101


Lecture 1 (8/29)

1 Introduction to PDE’s
The course outline will be:
1. basic issues (existence, uniqueness, regularity, ... ) will be demonstrated by means of
examples (Evans chapters 2,3,4)

2. theory of distributions and the Fourier transform (Evans chapter 4 and notes)

3. Sobolev spaces (Evans Chapter 6)

1.1 Overview of PDE’s


At a rudimentary level, a partial differential equation (PDE) is a functional equation that involves partial derivatives $\partial^\alpha = \frac{\partial^{|\alpha|}}{\partial x^\alpha}$, whose unknown depends on several variables. PDEs are ubiquitous: they arise from many fields in science and mathematics.

1.2 Examples
Here are some fundamental examples of PDEs:
Example 1.2.1 (Laplace Equation). $u : \mathbb{R}^d \to \mathbb{R}$ or $\mathbb{C}$, with $\Delta u = 0$, where $\Delta$ is the Laplace operator defined as $\Delta = \sum_{j=1}^d \partial_j^2$.

Example 1.2.2 (Wave Equation). $\Box u = 0$, where $\Box = -\partial_0^2 + \Delta$ (the d'Alembertian) and $u : \mathbb{R}^{1+d} \to \mathbb{R}$ or $\mathbb{C}$. Here we have defined $x^0 = t$ as the time coordinate.
Both of these are scalar equations. The Laplace equation could model a temperature distribution at equilibrium, or electrostatics, where it describes the electric potential in a region with no charge. The wave equation models wave propagation (at finite speed); this could be used to model elastic, acoustic, or electromagnetic waves.

The Laplace equation is the prototypical example of an elliptic PDE, while the wave equation is the prototypical example of a hyperbolic PDE.

Example 1.2.3 (Heat Equation). $u : \mathbb{R}^{1+d} \to \mathbb{R}$ or $\mathbb{C}$, satisfying $(\partial_t - \Delta)u = 0$.

Example 1.2.4 (Schrödinger Equation). $u : \mathbb{R}^{1+d} \to \mathbb{C}$, with $(i\partial_t + \Delta)u = 0$.

The heat equation is a parabolic PDE, while the Schrödinger equation is a dispersive PDE.


Example 1.2.5 (Cauchy-Riemann). $u, v : \mathbb{R}^2 \to \mathbb{R}$, with:
$$\partial_x u - \partial_y v = 0, \qquad \partial_y u + \partial_x v = 0.$$

The Cauchy-Riemann equations give necessary and sufficient conditions for a pair of functions $(u, v)$ to make $u(z) + iv(z)$ holomorphic in $z = x + iy$. Also note that $u$ and $v$ each also satisfy the Laplace equation.

Example 1.2.6 (Maxwell Equations (vacuum)). Given $E : \mathbb{R}^{1+3} \to \mathbb{R}^3$ and $B : \mathbb{R}^{1+3} \to \mathbb{R}^3$:
$$\begin{aligned} \partial_t E - \nabla \times B &= 0 \\ \partial_t B + \nabla \times E &= 0 \\ \nabla \cdot E &= 0 \\ \nabla \cdot B &= 0 \end{aligned}$$

It turns out that $\Box E = \Box B = 0$.
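The last claim is a standard computation (included here as a supplement): differentiate the first Maxwell equation in $t$, then use the second equation and the vector identity $\nabla \times (\nabla \times E) = \nabla(\nabla \cdot E) - \Delta E$:

```latex
\partial_t^2 E = \partial_t (\nabla \times B) = \nabla \times \partial_t B
             = -\nabla \times (\nabla \times E)
             = -\nabla(\nabla \cdot E) + \Delta E
             = \Delta E,
```

so $\Box E = (-\partial_t^2 + \Delta)E = 0$, and symmetrically for $B$.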

Note that all these PDEs can be written as:
$$F[u] = 0.$$

Note that $F$ is linear for all these examples, so all equations stated so far are linear PDEs. When we can write the equation as a linear operator set equal to zero, we have a linear homogeneous PDE. If instead $F[u] = f$ with $f \neq 0$, then we have a linear inhomogeneous PDE. In the case $\Delta u = f$, we have what is called the Poisson equation.

There are many nonlinear PDE examples:

Example 1.2.7 (Minimal Surface Equation). $u : \mathbb{R}^d \to \mathbb{R}$, with:
$$\sum_{j=1}^{d} \partial_j \left( \frac{\partial_j u}{\sqrt{1 + |Du|^2}} \right) = 0,$$
with the notation $D = \nabla = (\partial_1, \ldots, \partial_d)$.

This equation asks for a surface (the graph of $u$) with minimal area subject to a boundary condition. This is an example of an elliptic PDE.

Example 1.2.8 (Korteweg-de Vries (KdV) Equation). $u : \mathbb{R}^{1+1} \to \mathbb{R}$, with:
$$\partial_t u + \partial_x^3 u + 6u\,\partial_x u = 0.$$

This is an example of a dispersive nonlinear PDE.

Here are some nonlinear systems of PDEs:


Example 1.2.9 (Ricci Flow). $g$ a Riemannian metric (positive definite symmetric $n \times n$ matrix), with:
$$\partial_t g = -2\,\mathrm{Ric}[g],$$
where, schematically, $\mathrm{Ric}[g] \sim g^{-1}\partial^2 g + (g^{-1}\partial g)(g^{-1}\partial g)$.

Example 1.2.10 (Einstein Equation). $g$ a Lorentzian metric, which is a $(1+3) \times (1+3)$ symmetric matrix with signature $(-,+,+,+)$ (similar to $\mathrm{diag}(-1,1,1,1)$):
$$\mathrm{Ric}[g] = 0.$$

1.3 Basic Problems


Sometimes there is no solution to a PDE, and sometimes there is no unique solution, so we may have to impose more information. One way to do this is to impose boundary values, which gives the Boundary Value Problem (BVP).
Example 1.3.1 (Dirichlet Problem). Given $U \subset \mathbb{R}^d$ open and bounded, find $u : \bar{U} \to \mathbb{R}$ with
$$\begin{cases} \Delta u = f & \text{in } U \\ u = g & \text{on } \partial U \end{cases}$$

Example 1.3.2 (Neumann Problem). Given $U$ and $u$ as above, but:
$$\begin{cases} \Delta u = f & \text{in } U \\ \nu \cdot Du = g & \text{on } \partial U \end{cases}$$
where $\nu$ is the outer normal to $\partial U$, and we require $\int_U f\,dx = \int_{\partial U} g\,dA$.


Note that for the Neumann problem, solutions shifted by a constant are still solutions; therefore we say that solutions are equivalent if $u - v = c$ for a constant $c$.
Example 1.3.3 (Initial Value Problem (IVP)). For the heat equation:
$$\begin{cases} \partial_t u - \Delta u = f & \text{in } (0, \infty) \times \mathbb{R}^d \\ u = g & \text{on } \{t = 0\} \times \mathbb{R}^d \end{cases}$$
For the Schrödinger equation:
$$\begin{cases} i\partial_t u + \Delta u = f & \text{in } \mathbb{R} \times \mathbb{R}^d \\ u = g & \text{on } \{t = 0\} \times \mathbb{R}^d \end{cases}$$
For the wave equation:
$$\begin{cases} \Box u = f & \text{in } \mathbb{R} \times \mathbb{R}^d \\ u = g & \text{on } \{t = 0\} \times \mathbb{R}^d \\ \partial_t u = h & \text{on } \{t = 0\} \times \mathbb{R}^d \end{cases}$$


Definition 1.3.1. We say a boundary value problem or initial value problem is wellposed (in the sense of Hadamard) if it satisfies:

1. existence

2. uniqueness

3. continuous dependence on data (that is, the map from the data to the solution is continuous)

A PDE that fails one of these is called illposed.

Other basic questions concern the regularity of a solution (when regularity fails, we have a singularity), as well as asymptotics and dynamics.

Let's introduce notation. First the multi-index notation: given $\alpha = (\alpha_1, \ldots, \alpha_d)$, we define:
$$\partial^\alpha = D^\alpha = \partial_1^{\alpha_1} \partial_2^{\alpha_2} \cdots \partial_d^{\alpha_d}.$$
We call $|\alpha| = \sum_i \alpha_i$ the order of $\alpha$. The order of a PDE is the order of the highest order $\partial^\alpha$ that appears in the equation.

Nonlinear PDEs are classified into three categories:

1. semilinear: $0 = F[u] = \sum_{|\alpha| = k} a_\alpha\,\partial^\alpha u + (\text{nonlinear terms in lower order derivatives})$, with $a_\alpha = a_\alpha(x)$;

2. quasilinear: $0 = F[u] = \sum_{|\alpha| = k} a_\alpha(u, Du, \ldots, D^{k-1}u, x)\,\partial^\alpha u + (\text{rest})$;

3. fully nonlinear: the dependence on the top order derivatives is nonlinear.

Lecture 2 (9/3)

1.4 First Order (scalar) PDEs


Reference: Evans section 3.2

Here are three examples of first order PDEs (some of which are not in the text):

1. Positive example: $(\partial_t + a(t,x)\partial_x)u = f$, where $u : \mathbb{R}^{1+1} \to \mathbb{R}$, $f : \mathbb{R}^{1+1} \to \mathbb{R}$, and $a : \mathbb{R}^{1+1} \to \mathbb{R}$ is such that $\sup_{x \in \mathbb{R}} |a(t,x)| + \sup_x |\partial_x a| \leq M(t) < \infty$, with $f, a \in C^\infty$.

2. Inviscid Burgers equation: $\partial_t u + u\,\partial_x u = 0$, with $u : \mathbb{R}^{1+1} \to \mathbb{R}$. This is an example of a nonlinear first order scalar PDE. Something goes wrong with this equation.


3. Lewy-Nirenberg example: $(\partial_t + it\,\partial_x)u = f$, with $u, f : \mathbb{R}^{1+1} \to \mathbb{C}$.

It turns out that there exists a function $f \in C^\infty$ such that no $C^1$ solution $u$ exists in any neighborhood of $(0,0)$.

If we identify $\mathbb{C}$ with $\mathbb{R}^2$, this is also an example of a linear first-order system of PDEs.

Let's begin with a linear first order scalar PDE.

First suppose $a = 0$ (this is known as a transport equation). Then we have $\partial_t u = f$, so we can integrate both sides to get:
$$u(t, x) = \int f\,dt + C(x).$$
We could also pose the initial value problem:
$$\begin{cases} \partial_t u = f \\ u(0, x) = u_0(x) \end{cases}$$
Then we have $u(t, x) = u_0(x) + \int_0^t f(t', x)\,dt'$.
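As a quick symbolic sanity check (a sympy sketch, not part of the original notes; $u_0$ and $f$ are kept as abstract functions), this formula indeed satisfies $\partial_t u = f$:

```python
# Sketch: verify u(t, x) = u0(x) + ∫_0^t f(tau, x) dtau solves ∂_t u = f.
import sympy as sp

t, x, tau = sp.symbols("t x tau")
u0 = sp.Function("u0")
f = sp.Function("f")

u = u0(x) + sp.Integral(f(tau, x), (tau, 0, t))

# Differentiation in the upper limit recovers the integrand (Leibniz rule);
# at t = 0 the integral term has equal limits and vanishes.
print(sp.diff(u, t))  # f(t, x)
```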

Now let's suppose $a$ is a general function which obeys the boundedness condition.

Theorem 1.4.1. Given $u_0 : \mathbb{R} \to \mathbb{R}$ smooth and $f : \mathbb{R}^{1+1} \to \mathbb{R}$ smooth, there exists a unique solution $u : \mathbb{R}^{1+1} \to \mathbb{R}$ to the IVP (note that this $u$ is a global solution):
$$\partial_t u + a\,\partial_x u = f, \qquad u(0, x) = u_0(x).$$

Proof. (method of characteristics)

The idea is to try to make a change of variables to reduce this to the simpler case. In other words, we want a change of variables $(t, x) \mapsto (s, y)$ such that:
$$\partial_s = \partial_t + a\,\partial_x \tag{1}$$
We have:
$$\partial_s = \frac{\partial t}{\partial s}\,\partial_t + \frac{\partial x}{\partial s}\,\partial_x, \qquad \partial_y = \frac{\partial t}{\partial y}\,\partial_t + \frac{\partial x}{\partial y}\,\partial_x.$$


To get (1), we want $\frac{\partial t}{\partial s} = 1$ and $\frac{\partial x}{\partial s}(s, y) = a(t(s,y), x(s,y))$. The first equality gives us that $s = t$. So we would like to solve:
$$\frac{\partial x}{\partial s}(s, y) = a(s, x(s, y)).$$
This is a first order ODE. Since $|a|$ is bounded for each $t$, the solution $x(s, y)$ is always global in $s$ (an ODE result, which also requires that $M$ be locally bounded: apply Picard's theorem and argue by contradiction on a saturated solution). Let's impose $x(0, y) = y$.

In the new variables our equation reads $\partial_s u(s, y) = f(s, y)$, therefore $u(s, y) = u_0(y) + \int_0^s f(s', y)\,ds'$.

The last step is to show that $y \mapsto x(s, y)$ is invertible. We have:
$$|\partial_s \partial_y x| = |(\partial_x a)(t, x(s,y))|\,|\partial_y x| \leq M(t)\,|\partial_y x|,$$
so by comparison of ODEs:
$$|\partial_y x| \geq e^{-\int_0^t M(t')\,dt'}\,\partial_y x(0, y) > 0.$$
Therefore $x$ is strictly increasing or strictly decreasing as we increase $y$, and hence $y \mapsto x$ is invertible. ∎

Remark 1.4.1. This proof is an instance of the method of characteristics: the curves $x(s, y)$ are called the characteristics, and the ODE involving $x$ is called the characteristic ODE.

Now let's turn to the inviscid Burgers equation:
$$\begin{cases} \partial_t u + u\,\partial_x u = 0 & (t, x) \in \mathbb{R}^2 \\ u(0, x) = u_0(x) & (t, x) \in \{0\} \times \mathbb{R} \end{cases}$$
Theorem 1.4.2. For any smooth, compactly supported, and not identically zero $u_0 : \mathbb{R} \to \mathbb{R}$, there does not exist a smooth global solution $u : \mathbb{R}^{1+1} \to \mathbb{R}$ with this data.

It turns out that there always exists a local solution; that is, there exists $\delta > 0$ such that on $[0, \delta] \times \mathbb{R}$ there exists a unique smooth solution $u$.

Proof. Suppose that a smooth global $u : \mathbb{R}^{1+1} \to \mathbb{R}$ exists. Now let's apply the method of characteristics to the operator $\partial_t + u\,\partial_x$.

So we have $\partial_s x = u(s, x(s, y))$, and along such a curve $u(s, x(s,y)) = u_0(y)$, which is just constant; so the characteristics are straight lines, and visually things are already pretty clear: characteristics with different slopes must cross.

Alternatively, we can differentiate the equation in $x$:
$$0 = \partial_x (\partial_t u + u\,\partial_x u) = \partial_t \partial_x u + u\,\partial_x \partial_x u + (\partial_x u)^2.$$
After a little work, setting $w(s) = (\partial_x u)(s, x(s,y))$, we get $\dot{w} + w^2 = 0$, which is known as the Riccati equation; it blows up in finite time. This can be solved via separation of variables. ∎
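Filling in the separation-of-variables step for the Riccati equation $\dot{w} = -w^2$ (a standard calculation, with $w = \partial_x u$ along a characteristic):

```latex
-\frac{\dot{w}}{w^2} = 1
\;\Longrightarrow\; \frac{d}{ds}\,\frac{1}{w(s)} = 1
\;\Longrightarrow\; \frac{1}{w(s)} = \frac{1}{w(0)} + s
\;\Longrightarrow\; w(s) = \frac{w(0)}{1 + s\,w(0)}.
```

If $w(0) = u_0'(y) < 0$ for some $y$ (which holds for any nonzero compactly supported $u_0$), then $w$ blows up as $s \to -1/w(0) < \infty$, contradicting global smoothness.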


Now let's turn to the Lewy-Nirenberg example:
$$\partial_t u + it\,\partial_x u = f.$$

Theorem 1.4.3. There exists a smooth function $f : \mathbb{R}^{1+1} \to \mathbb{C}$ with the property that there does not exist any $C^1$ solution $u$ to this PDE on any neighborhood of $(0,0)$.

Proof. (requires complex analysis). Notation: let $B_r = \{(t,x) \in \mathbb{R}^{1+1} : t^2 + x^2 < r^2\}$, with $\partial B_r$ its boundary. Take a smooth function $f$ with the properties:

1. $f(-t, x) = -f(t, x)$ (i.e. $f$ is odd in $t$);

2. for some sequence $r_n \to 0$ (e.g. $r_n = 2^{-n}$) we want $f|_{B_{r_n + \delta_n} \setminus B_{r_n - \delta_n}} = 0$, while $\int_{B_{r_n}} f\,dt\,dx \neq 0$.

Assume that such a $C^1$ solution $u$ to the PDE exists on some neighborhood $U$ of $(0,0)$. Pick $n$ large enough so that $B_{r_n} \subset U$. Then let $\tilde{u}(t,x) = \frac{1}{2}(u(t,x) - u(-t,x))$, and observe that $\tilde{u}$ is again a $C^1$ solution on $B_{r_n + \delta_n}$. But observe that $\tilde{u}$ is odd in $t$.

From now on write $u = \tilde{u}$. On the one hand, we have:
$$\int_{B_{r_n}} (\partial_t u + it\,\partial_x u)\,dt\,dx = \int_{B_{r_n}} f\,dt\,dx \neq 0.$$
By the divergence theorem, the left side is just a boundary term:
$$\int_{B_{r_n}} (\partial_t u + it\,\partial_x u)\,dt\,dx = \int_{\partial B_{r_n}} \left( \frac{t}{\sqrt{t^2 + x^2}}\,u + \frac{x}{\sqrt{t^2 + x^2}}\,itu \right) d\sigma.$$
Therefore $u|_{\partial B_{r_n}} \not\equiv 0$.

On the other hand, it turns out that $u|_{\partial B_{r_n}} = 0$, by continuity and the fact that $f$ is zero on a bunch of thin annuli. On each annulus we have $0 = f = \partial_t u + it\,\partial_x u$. Let's call the right half of the annulus $U'$, that is:
$$U' = \{(r_n - \delta)^2 < t^2 + x^2 < (r_n + \delta)^2,\ t > 0\}.$$
On $U'$, change the variables $(t, x) \mapsto (s, y) = (\tfrac{1}{2}t^2, x)$, so that
$$0 = \frac{1}{t}(\partial_t u + it\,\partial_x u) = (\partial_s + i\partial_y)u,$$
which is the Cauchy-Riemann operator. So now:
$$U' = \{(s, y) \in \mathbb{R}^{1+1} : (r_n - \delta_n)^2 < 2s + y^2 < (r_n + \delta_n)^2\}.$$
We can then apply Schwarz reflection to get a contradiction. ∎

Lecture 3 (9/5)

Today we will cover section 3.2 from Evans's book. Some notation: we will use $d$ instead of $n$ for the spatial dimension. We will also use $x^j$ as the $j$th coordinate (instead of $x_j$).

1.5 Method of Characteristics


We will be interested in the general, fully nonlinear, first order, scalar PDE $F(x, u, Du) = 0$. So $F : U \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ and $u : U \to \mathbb{R}$, and we will subject this to a boundary condition $u = g$ on $\Gamma \subset \partial U$. We will assume that $F$ and $g$ are smooth. The coordinates in the domain of $F$ will be written as $(x, z, p_1, \ldots, p_d) = (x, z, p)$.

Our first goal is to derive the characteristic ODEs. To do this, we want to find the solution value $u(x)$ by finding a suitable curve $\gamma$ that connects $x$ to a point $x_0 \in \Gamma$, along which the evolution of $u$ can be computed.

Recall that the simplest example is $\partial_t u = 0$ with $\Gamma = \{0\} \times \mathbb{R}$; then the curves are just the vertical straight lines.

Let's parameterize our curve $\gamma \subset U$ with $x(s)$, $s \in I$. Then let the function value along this curve be $z(s) = u(x(s))$. We would also like to keep track of the partials, via the symbols $p_i(s) = \partial_i u(x(s))$.

Assume that a smooth solution $u$ exists. Then we have:
$$\dot{p}_i(s) = \partial_s p_i(s) = \sum_j \partial_j \partial_i u(x(s))\,\dot{x}^j(s).$$
Furthermore, differentiating the equation in $x^i$, we have:
$$0 = \partial_{x^i} \big( F(x, u(x), Du(x)) \big) = (\partial_{x^i} F)(x, u(x), Du(x)) + (\partial_z F)(x, u(x), Du(x))\,\partial_i u + \sum_j (\partial_{p_j} F)(x, u(x), Du(x))\,\partial_i \partial_j u. \tag{2}$$
So now we have second order terms in terms of first order terms. Now we will set:
$$\dot{x}^j(s) = (\partial_{p_j} F)(x(s), u(x(s)), Du(x(s))) = (\partial_{p_j} F)(x(s), z(s), p(s)). \tag{3}$$
Therefore (2) becomes:
$$0 = (\partial_{x^i} F)(x, u(x), Du(x)) + (\partial_z F)(x, u(x), Du(x))\,\partial_i u + \underbrace{\sum_j \dot{x}^j(s)\,\partial_i \partial_j u}_{\dot{p}_i(s)},$$
so we get:
$$\dot{p}_i(s) = -(\partial_z F)(x(s), z(s), p(s))\,p_i(s) - (\partial_{x^i} F)(x(s), z(s), p(s)). \tag{4}$$
Lastly, we have:
$$\dot{z}(s) = \sum_i \partial_i u(x(s))\,\dot{x}^i(s) = \sum_i p_i(s)\,(\partial_{p_i} F)(x(s), z(s), p(s)). \tag{5}$$
Then the collection of the three ODEs involving $x$, $z$, $p$ is called the characteristic ODEs.


Theorem 1.5.1. If $u$ is a smooth solution to $F(x, u, Du) = 0$, then on a curve $x(s)$ satisfying (3), the quantities
$$z(s) = u(x(s)), \qquad p_i(s) = \partial_i u(x(s))$$
obey (4) and (5).

Example 1.5.1. Consider when $F$ is linear. Suppose
$$0 = F(x, u, Du) = \sum_j b^j\,\partial_j u + c\,u,$$
where $b^j = b^j(x)$ and $c = c(x)$.

In this case $F(x, z, p) = \sum_j b^j p_j + c\,z$. Then if we plug into our characteristic equations, we get:
$$\dot{x}^j(s) = b^j(x(s)), \qquad \dot{z}(s) = \sum_j b^j p_j = -c(x(s))\,z(s).$$

Remark 1.5.1.

1. The $p$ equation is not needed.

2. There is a hierarchy among the ODEs: the $x$ equation closes by itself, and then the $z$ equation can be solved.
Example 1.5.2 (Linear $F$).
$$\begin{cases} x^1 u_{x^2} - x^2 u_{x^1} = u & \text{in } U \\ u = g & \text{on } \Gamma \end{cases}$$
with $U = \{(x^1, x^2) \in \mathbb{R}^2 : x^1 > 0, x^2 > 0\}$ and $\Gamma = \{(x^1, 0) \in \mathbb{R}^2 : x^1 > 0\} \subset \partial U$.

Therefore $F = x^1 p_2 - x^2 p_1 - z$, so $b = (-x^2, x^1)$ and $c = -1$. If we use the characteristic equations, we get $\dot{x}^1 = -x^2$, $\dot{x}^2 = x^1$, and $\dot{z} = z$.

The solution is $x^1(s) = x^0 \cos(s)$ and $x^2(s) = x^0 \sin(s)$, with $x^0$ the initial position on $\Gamma$. We also have $z(s) = e^s z_0 = g(x^0)\,e^s$.

Based on our domain, $0 < s < \pi/2$, so each path along which we know the solution is a quarter circle, and on this quarter circle the solution is $g(x^0)\,e^s$.

Now given $(x^1, x^2) \in U$, we want to find the quarter circle that hits this point. That is, we want to find $(x^0, s)$ such that $(x^1, x^2) = (x^0 \cos s, x^0 \sin s)$. Then we get $x^0 = \sqrt{(x^1)^2 + (x^2)^2}$ and $s = \arctan\frac{x^2}{x^1}$. So the result is:
$$u(x^1, x^2) = g\left( \sqrt{(x^1)^2 + (x^2)^2} \right) \exp\left( \arctan\frac{x^2}{x^1} \right).$$

Remark 1.5.2. We recover the method of characteristics for $(\partial_t + a(t,x)\partial_x)u = 0$ as a special case of the linear case.
Example 1.5.3. Consider when $F$ is quasilinear, that is:
$$F(x, u, Du) = \sum_j b^j(x, u)\,\partial_j u + c(x, u).$$
In this case we have:
$$F = \sum_j b^j(x, z)\,p_j + c(x, z).$$
Plug in our expressions to get:
$$\begin{cases} \dot{x}^j = b^j(x(s), z(s)) \\ \dot{z} = \sum_j p_j\,\partial_{p_j} F = -c(x(s), z(s)) \end{cases}$$

Remark 1.5.3.

1. There is no need to introduce $p$.

2. But there is no longer a hierarchy of ODEs: the equations for $x$ and $z$ are coupled.


Example 1.5.4 (Burgers' Equation). Recall this equation was $\partial_{x^1} u + u\,\partial_{x^2} u = 0$. Then we have $F = p_1 + z\,p_2$, so $\dot{x}^1 = 1$, $\dot{x}^2 = z$, and $\dot{z} = 0$.
Example 1.5.5. Consider
$$\begin{cases} \partial_1 u + \partial_2 u = u^2 & \text{in } U \\ u = g & \text{on } \Gamma \end{cases}$$
with $U = \{(x^1, x^2) \in \mathbb{R}^2 : x^2 > 0\}$ and $\Gamma = \{(x^1, 0) \in \mathbb{R}^2\}$. So we have $b^1 = 1$, $b^2 = 1$, and $c = -z^2$. Therefore $\dot{x}^1 = \dot{x}^2 = 1$ and $\dot{z} = z^2$.

So we have $(x^1, x^2) = (x^0 + s, s)$.

For the other one, we have $\frac{dz}{z^2} = ds$, therefore $\frac{1}{z(0)} - \frac{1}{z(s)} = s$, therefore $z(s) = \frac{z(0)}{1 - s\,z(0)}$. Then since $z(0) = g(x^0)$, we get $z(s) = \frac{g(x^0)}{1 - g(x^0)\,s}$.

Finally, given $(x^1, x^2) \in U$, we need to find $x^0$ and $s$ so that $(x^1, x^2) = (x^0 + s, s)$. But then $x^0 = x^1 - x^2$, $s = x^2$. Therefore
$$u(x) = \frac{g(x^1 - x^2)}{1 - g(x^1 - x^2)\,x^2}.$$
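As before, a quick sympy sketch (my addition, with $g$ abstract) confirms the formula:

```python
# Verify that u = g(x1 - x2) / (1 - g(x1 - x2) * x2) solves
# u_{x1} + u_{x2} = u^2, with u = g on {x2 = 0}.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
g = sp.Function("g")

u = g(x1 - x2) / (1 - g(x1 - x2) * x2)

residual = sp.diff(u, x1) + sp.diff(u, x2) - u**2
print(sp.simplify(residual))  # 0
print(u.subs(x2, 0))          # g(x1)
```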
Example 1.5.6. Here we have a fully nonlinear $F$, given as:
$$\begin{cases} \partial_{x^1} u\,\partial_{x^2} u = u & \text{in } U \\ u = g & \text{on } \Gamma \end{cases}$$
Now let $U = \{x^1 > 0\}$ and $\Gamma = \{(0, x^2) \in \mathbb{R}^2\}$. Now we have:
$$\dot{x}^j = \partial_{p_j} F(x, z, p), \qquad \dot{p}_j = -\partial_z F(x, z, p)\,p_j - \partial_{x^j} F(x, z, p), \qquad \dot{z} = \sum_j p_j\,\partial_{p_j} F(x, z, p).$$
Then in this case we have $F = p_1 p_2 - z$, so $\dot{x}^1 = p_2$, $\dot{x}^2 = p_1$, $\dot{p}_1 = p_1$, $\dot{p}_2 = p_2$, and $\dot{z} = 2 p_1 p_2$. Then we have:
$$p_1(s) = (p_0)_1\,e^s, \qquad p_2(s) = (p_0)_2\,e^s.$$
Therefore $\dot{z} = 2 (p_0)_1 (p_0)_2\,e^{2s}$, so that $z = z_0 + (p_0)_1 (p_0)_2 (e^{2s} - 1)$. Lastly, $\dot{x}^1 = (p_0)_2\,e^s$, so that $x^1 = (p_0)_2 (e^s - 1)$ and $x^2 = x^0 + (p_0)_1 (e^s - 1)$.

From the data: $z_0 = g(x^0)$, $(p_0)_2 = g'(x^0)$, and $(p_0)_1 = \frac{g(x^0)}{g'(x^0)}$.

In the case where $g(x^0) = (x^0)^2$, we have $z_0 = (x^0)^2$, $(p_0)_1 = \frac{1}{2} x^0$, $(p_0)_2 = 2 x^0$. Now given $(x^1, x^2) \in U$, we want to find $(x^0, s)$ such that $(x^1, x^2) = (x^1_{x^0}(s), x^2_{x^0}(s))$. Plugging in what we know about these already, we get:
$$x^0 = x^2 - \frac{1}{4} x^1, \qquad e^s = \frac{x^2 - \frac{1}{4} x^1 + \frac{1}{2} x^1}{x^2 - \frac{1}{4} x^1} = \frac{4 x^2 + x^1}{4 x^2 - x^1}.$$
Then we compute $u = z$ and simplify, to get:
$$u(x) = \frac{(x^1 + 4 x^2)^2}{16}.$$
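Since this answer came out of a longer computation, it is worth verifying directly; the following sympy sketch (my addition) checks the PDE and the boundary condition $g(x^2) = (x^2)^2$ on $\{x^1 = 0\}$:

```python
# Verify the fully nonlinear example: u = (x1 + 4*x2)^2 / 16 solves
# u_{x1} * u_{x2} = u, with u = x2^2 on {x1 = 0}.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

u = (x1 + 4 * x2)**2 / 16

residual = sp.diff(u, x1) * sp.diff(u, x2) - u
print(sp.simplify(residual))     # 0
print(sp.expand(u.subs(x1, 0)))  # x2**2, matching g(x0) = x0^2 on Gamma
```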
Lecture 4 (9/10)

Today we will continue the material in Evans section 3.2.

Recall what we did last time, with precise logical assumptions. We were looking at the scalar first order PDE $F(x, u, Du) = 0$ with $F : U \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$, with the boundary condition $u = g$ on $\Gamma \subset \partial U$, where $U$ is a domain (an open and connected set in $\mathbb{R}^d$).

Then we derived the characteristic equations. For this we supposed that there exists a $C^2$ solution $u$ to our PDE. From this we looked for a trajectory $x(s) \in U$, $z(s) \in \mathbb{R}$, $p(s) \in \mathbb{R}^d$ such that $u(x(s)) = z(s)$ and $p_i(s) = \partial_i u(x(s))$, where $x(s)$ is chosen so that we have a closed system of ODEs for $(x, z, p)(s)$.


The worst term is $\dot{p}_i(s) = \sum_j \partial_j \partial_i u(x(s))\,\dot{x}^j(s)$. To simplify this, we differentiate $F$ in $x^i$ to get:
$$0 = (\partial_{x^i} F)(x, u(x), Du(x)) + (\partial_z F)(x, u(x), Du(x))\,\partial_i u + \sum_j (\partial_{p_j} F)(x, u(x), Du(x))\,\partial_i \partial_j u(x).$$
To cancel the second order terms, we choose $\dot{x}^j = (\partial_{p_j} F)(x, z, p)$. Then we get $\dot{p}_i(s) = -(\partial_z F)(x, z, p)\,p_i - (\partial_{x^i} F)(x, z, p)$. And lastly, we get that $\dot{z} = \sum_i p_i\,(\partial_{p_i} F)(x, z, p)$. (This scheme can fail in many ways; one failure mode to keep in mind is when different trajectories collide.)

1.6 Existence of Characteristic Solutions


Goal: We would like to develop a local existence theory for smooth solutions to our PDE. That is, given data near a point $x_0 \in \Gamma$, show, under certain conditions, that there exists a smooth solution to our PDE in a small neighborhood $V \ni x_0$ (local in space and time; from now on we refer to this as local existence).

Let's start by considering a simpler case. Let $U = \{x \in \mathbb{R}^d : x^d > 0\}$ and $\Gamma \subset \{x^d = 0\}$.

Let's say we have the data $(x, z, p)(0)$ at $x_0$:


¢̈xˆ0 x > Γ
¨
¨ 0
¨
¨
¨
¨z ˆ0 g ˆx0 
¦ (6)
¨
¨ pi ˆ0 ∂i uˆxˆ0 ∂i g ˆx0  f or i 1, . . . , d  1
¨
¨
¨
¨F ˆxˆ0, z ˆ0, pˆ0 0
¤̈

So we have 2d  1 equations for 2d  1 for ˆx, z, p (2d  1 variables). These are known as the
compatibility conditions.

Assume that we start with $(x(0), z(0), p(0))$ satisfying (6) (such a triple is called admissible), with $x_0 \in W \subset \Gamma$. Parametrize the points in $W$ by $y = (y^1, y^2, \ldots, y^{d-1}, 0)$. Now the goal is to find the characteristics
$$\begin{pmatrix} x(y, s) \\ z(y, s) \\ p(y, s) \end{pmatrix}$$
with
$$\begin{cases} x(y, 0) = y \\ z(y, 0) = g(y) \\ p_i(y, 0) = \partial_i g(y) & \text{for } i = 1, \ldots, d-1 \\ p_d(y, 0) \text{ chosen so that } F(x(y,0), z(y,0), p(y,0)) = 0 \end{cases} \tag{7}$$
To do this (i.e. to solve the last condition for $p_d(y, 0)$), we will use the implicit function theorem.




Theorem 1.6.1 (Implicit Function Theorem). Let $F : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^n$ be $C^1$, and suppose that $F(y_0, p_0) = 0$ for some $(y_0, p_0)$. If $\partial_p F(y_0, p_0)$ is invertible (as an $n \times n$ matrix), then there exist neighborhoods $U \ni y_0$ and $V \ni p_0$ and a $C^1$ function $p : U \to V$ such that $p(y_0) = p_0$ and $F(y, p(y)) = 0$ for all $y \in U$.

Moreover, if $F(y, p) = 0$ for some $y \in U$ and $p \in V$, then $p = p(y)$.

Therefore (7) is solvable for $p_d$ if $\partial_{p_d} F(x_0, z_0, p_0) \neq 0$. In this case, the data is known as noncharacteristic.

In summary: we began with $(x_0, z_0, p_0)$, where $z_0 = g(x_0)$ and $p_0 = ((\partial_1 g, \ldots, \partial_{d-1} g)(x_0), (p_0)_d)$ is admissible, and $\partial_{p_d} F(x_0, z_0, p_0) \neq 0$ (noncharacteristic). Then we have:
$$\begin{cases} x(y, 0) = y \\ z(y, 0) = g(y) \\ p(y, 0) \text{ as above} \end{cases}$$
with $(x(y, s), z(y, s), p(y, s))$ given by the characteristic equations.

Now, given $x \in U$ near $x_0$, we want to find $(y, s)$ such that $x = x(y, s)$.

Lemma 1.6.1. Under the noncharacteristic assumption, the map $(y, s) \mapsto x(y, s)$ is invertible near $x_0$. Denote the inverse by $x \mapsto (y(x), s(x))$.

Proof. Inverse function theorem. Compute, using the characteristic equations, that at $s = 0$, $y = x_0$:
$$\left( \partial_{y^j} x^k \ \middle|\ \partial_s x^k \right)\Big|_{s = 0} = \begin{pmatrix} \mathrm{Id}_{d-1} & * \\ 0 & F_{p_d}(x_0, z_0, p_0) \end{pmatrix}, \qquad * = \begin{pmatrix} F_{p_1}(x_0, z_0, p_0) \\ \vdots \\ F_{p_{d-1}}(x_0, z_0, p_0) \end{pmatrix}.$$
The determinant is just $\dot{x}^d(x_0, 0) = \partial_{p_d} F(x_0, z_0, p_0) \neq 0$. ∎


Theorem 1.6.2. Under the above assumptions, $u(x) = z(y(x), s(x))$, where $(y, s)(x)$ is defined for $x \in V$, gives a solution to the PDE
$$\begin{cases} F(x, u, Du) = 0 & \text{in } V \\ u = g & \text{on } \Gamma \cap V \subset W \end{cases}$$

Consider the quasilinear 1st order PDE:
$$\sum_j b^j(x, u)\,\partial_j u + c(x, u) = 0 \quad \leadsto \quad F = \sum_j b^j(x, z)\,p_j + c(x, z).$$
Then the noncharacteristic condition requires $\partial_{p_d} F(x_0, z_0, p_0) = b^d(x_0, z_0) \neq 0$. So in the quasilinear case, the noncharacteristic assumption is equivalent to the existence of a unique choice of $(p_0)_d$ from everything else.

In the fully nonlinear case this may fail: take $F = (\partial_x u)^2 + (\partial_y u)^2 - 1$, so $F = p_1^2 + p_2^2 - 1$. Then given $(p_0)_1 = \frac{1}{2}$, we have two choices of $(p_0)_2$.
Now let’s prove Theorem 1.6.2.
Proof. Step 1: verify that $F(x(y,s), z(y,s), p(y,s)) = 0$. Compute:
$$\partial_s \big( F(x, z, p) \big) = \sum_i \partial_{x^i} F\,\dot{x}^i + \partial_z F\,\dot{z} + \sum_j \partial_{p_j} F\,\dot{p}_j.$$
Then apply the characteristic equations to get that the whole thing is zero. Then since $F(x(y,0), z(y,0), p(y,0)) = 0$ by preparation, we are good.

Step 2: We know that $x(y(x), s(x)) = x$ and $z(y(x), s(x)) = u(x)$. So it remains to show that $\partial_i u(x) = p_i(y(x), s(x))$. We have:
$$\partial_i \big( z(y(x), s(x)) \big) = \sum_j \partial_{y^j} z\,\partial_i y^j + \partial_s z\,\partial_i s,$$
and recall that $\partial_s z = \sum_i p_i\,\partial_{p_i} F$. So we need to compute $\partial_{y^j} z$. From $z(y, s) = u(x(y, s))$, we need to prove:
$$\partial_{y^j} z = \sum_i p_i(y, s)\,\partial_{y^j} x^i.$$
Define:
$$r^j(s) = \partial_{y^j} z - \sum_i p_i(y, s)\,\partial_{y^j} x^i(y, s).$$
Then $r^j(0) = 0$ by preparation, and by a long computation we get that:
$$\partial_s r^j = -\partial_z F\,r^j,$$
so $r^j \equiv 0$. From this, the chain rule gives:
$$\partial_i \big( z(y(x), s(x)) \big) = \sum_k p_k \left( \sum_j \partial_{y^j} x^k\,\partial_{x^i} y^j + \partial_s x^k\,\partial_{x^i} s \right) = \sum_k p_k\,\delta^k_i = p_i. \qquad \blacksquare$$

Lecture 5 (9/12)

For problem 4 of the homework (Evans section 3.3.5, problem 6(b)): take $u(x, t) = g(x(0, x, t))\,J(0, x, t)$ (take this as the definition of $u$ and verify that it is a solution).

Review: in the case $\Gamma \subset \{x^d = 0\}$, $U = \{x^d > 0\}$, we wanted to solve:
$$\begin{cases} F(x, u, Du) = 0 & \text{in } U \\ u = g & \text{on } \Gamma \end{cases}$$
Then we used the method of characteristics: picking $x_0 \in \Gamma$,
$$\begin{cases} \partial_s x^i = \partial_{p_i} F(x, z, p) \\ \partial_s z = \sum_i p_i\,\partial_{p_i} F(x, z, p) \\ \partial_s p_i = -\partial_z F(x, z, p)\,p_i - \partial_{x^i} F(x, z, p) \end{cases}$$
We are interested in solving this on an open set $U$. To solve this we needed initial values:
$$\begin{cases} x(0) = x_0 \\ z(0) = g(x_0) \\ p_i(0) = (p_0)_i \end{cases}$$
For $p_0$ we need to make a choice: $(p_0)_i = \partial_i g(x_0)$ for $i = 1, \ldots, d-1$, but for the normal direction we need to choose $(p_0)_d$ so that $F(x_0, z_0, p_0) = 0$. Such a triple is called admissible.

Noncharacteristic $(x_0, z_0, p_0)$ means that $\partial_{p_d} F(x_0, z_0, p_0) \neq 0$. This allows us to solve for $p_d(y, 0)$ for $y \in W$, where $W \subset \Gamma$ is a neighborhood of $x_0$ (a unique continuous choice with $p_d(x_0, 0) = (p_0)_d$). We could also invert $(y, s) \mapsto x(y, s)$ for $s$ very small and $y$ close to $x_0$.

Now define $u(x) = z(y(x), s(x))$. Then the claim is that this $u$ is a solution to the boundary value problem in $V$. And it is smooth.

Now let's extend this to general $U$ and $\Gamma$ (Ref: Evans Appendix C).

Definition 1.6.1 ($C^k$ Boundary). Given $U$ open and bounded in $\mathbb{R}^d$, we say that $\partial U$ is $C^k$ at some $x_0 \in \partial U$ if there exist some $r > 0$ and $\gamma : \mathbb{R}^{d-1} \to \mathbb{R}$ in $C^k$ so that:
$$U \cap B(x_0, r) = \{x^d > \gamma(x^1, \ldots, x^{d-1})\} \cap B(x_0, r),$$
where $B(x_0, r)$ is the ball of radius $r$ centered at $x_0$, and
$$\partial U \cap B(x_0, r) = \{x^d = \gamma(x^1, \ldots, x^{d-1})\} \cap B(x_0, r).$$

So now we would like to flatten the boundary. Define the coordinates $y^1, \ldots, y^d$ so that:
$$\begin{cases} y^i = x^i & i = 1, \ldots, d-1 \\ y^d = x^d - \gamma(x^1, \ldots, x^{d-1}) \end{cases}$$
Then $U \cap B(x_0, r) \subset \{y^d > 0\}$ and $\partial U \cap B(x_0, r) \subset \{y^d = 0\}$.

Note this change of variables is invertible, with inverse:
$$\begin{cases} x^i = y^i & i = 1, \ldots, d-1 \\ x^d = y^d + \gamma(y^1, \ldots, y^{d-1}) \end{cases}$$
If $\partial U$ is $C^k$ for all $k$, we say it is $C^\infty$.

Suppose we have the general problem:
$$\begin{cases} F(x, u, Du) = 0 & \text{in } U \\ u = g & \text{on } \Gamma \end{cases}$$
with $U$ a $C^\infty$ domain. Then there exists a map $x \mapsto y(x)$, smooth and invertible in a neighborhood $V \ni x_0$. So if $v(y) = u(x(y))$, then $v$ satisfies a PDE of the same form:
$$F\left( x(y),\ v(y),\ Dv(y)\,\frac{\partial y}{\partial x}(x(y)) \right) = 0.$$
We can write the above operator as $G(y, v(y), Dv(y)) = 0$. Then, applying our theory of what we have done so far to the equation $G(y, v, Dv) = 0$, we solve the original problem.

In this case, the noncharacteristic condition now becomes:
$$\sum_j \nu_{\partial U}^j\,\partial_{p_j} F(x_0, z_0, p_0) \neq 0,$$
where $\nu_{\partial U}$ is the unit outer normal vector to $\partial U$.

Theorem 1.6.3. Local existence near noncharacteristic boundary data holds for general $C^\infty$ domains.

Let's now talk about uniqueness. Suppose that we already have two $C^2$ solutions $u^{(1)}, u^{(2)}$ solving:
$$\begin{cases} F(x, u, Du) = 0 & \text{in } U \\ u = g & \text{on } \Gamma \end{cases}$$
Does $u^{(1)} = u^{(2)}$?

The idea is to run the method of characteristics from $\Gamma$ to $x$ in $U$. But we should be careful about the normal derivative $\sum_j \nu_{\partial U}^j\,\partial_j u$.

Example 1.6.1. Consider $(\partial_1 u)^2 = 1$ on $\mathbb{R}_{\geq 0}$ with $u = 1$ on $\{0\}$. We have two solutions, $u = 1 + x$ and $u = 1 - x$. This has to do with the choice of $(p_0)_1 = \sum_j \nu_{\partial U}^j\,(p_0)_j$.

So for uniqueness, we require that the normal derivatives also agree: $\sum_j \nu_{\partial U}^j\,\partial_j u^{(1)} = \sum_j \nu_{\partial U}^j\,\partial_j u^{(2)}$ on $\Gamma$.


Then for every $x \in U$ such that there exists a characteristic $(x(s), z(s), p(s))$ with $x(0) = x_0 \in \Gamma$, $z(0) = u(x_0)$, $p(0) = Du(x_0)$, such that $x(\bar{s}) = x$ and $x(s') \in U$ for all $s' \in (0, \bar{s})$, we have $u^{(1)}(x) = u^{(2)}(x)$ by uniqueness of ODEs.

What about continuous dependence? Continuous dependence does hold, but it is pretty subtle.
Example 1.6.2. Consider Burgers' equation:
$$\begin{cases} \partial_t u + u\,\partial_x u = 0 & \text{in } \{t > 0\} \\ u = g & \text{on } \{t = 0\} \end{cases}$$
Suppose that $g \in C^2$ and $\|g\|_{C^2} \leq A$, where $\|f\|_{C^k(U)} = \sup_{|\alpha| \leq k} \sup_{x \in U} |\partial^\alpha f|$. Then there exists a neighborhood $V = [0, T] \times \mathbb{R}$, with $T$ only dependent on $A$ (in particular independent of $g$), such that there exists a unique $C^2$ solution $u$ on $V$.

Now let's write $g \mapsto u[g]$. This map is continuous in $C^2$, in the sense that if $g_n \to g$ in $C^2(\Gamma)$ then $u[g_n] \to u[g]$ in $C^2(V)$. However, there exist counterexamples $(h_n^{(1)}, h_n^{(2)})$ such that:
$$\|u[h_n^{(1)}] - u[h_n^{(2)}]\|_{C^2(V)} \geq C\,n\,\|h_n^{(1)} - h_n^{(2)}\|_{C^2(\Gamma)},$$
i.e. $g \mapsto u[g]$ cannot be Lipschitz from $C^2$ to $C^2$.

Terry Tao wrote a good blog post on quasilinear well-posedness.

Here we will be looking at Evans section 4.6. We would like to study the extension of the notion of noncharacteristic boundary data to higher order systems of PDEs.

Let's begin with the noncharacteristic condition for a quasilinear, 1st order, scalar PDE. Recall, we were looking at (with $\Gamma \subset \{x^d = 0\}$):
$$F(x, u, Du) = \sum_j a^j(x, u)\,\partial_j u + a^0(x, u) = 0.$$
In this case $(x_0, z_0, p_0)$ is noncharacteristic if and only if $a^d(x_0, z_0) \neq 0$.

Note that, given this, we can uniquely solve for $(p_0)_d$. Writing
$$F(x, z, p) = \sum_j a^j(x, z)\,p_j + a^0(x, z),$$
if $(x_0, z_0)$ is noncharacteristic, then the admissibility condition $F(x_0, z_0, p_0) = 0$ is equivalent to:
$$(p_0)_d = -\frac{1}{a^d(x_0, z_0)} \left( \sum_{j=1}^{d-1} a^j(x_0, z_0)\,(p_0)_j + a^0(x_0, z_0) \right).$$


If we are interested, we can determine every $\partial^\alpha u(x_0)$ under the noncharacteristic assumption. From the boundary data $g$ we got $z_0, (p_0)_1, \ldots, (p_0)_{d-1}$, and from the noncharacteristic condition we got $(p_0)_d = \partial_d u(x_0)$. Now suppose we want $\partial_i \partial_j u(x_0)$. This is easy if $i, j \in \{1, 2, \ldots, d-1\}$. Moreover, since the same argument applies at every $y \in \Gamma$ close to $x_0$, we also get $\partial_i \partial_d u(x_0)$ for $i = 1, \ldots, d-1$. To compute the second normal derivative, we differentiate the equation in $\partial_d$ to get:
$$0 = \sum_j a^j(x, u)\,\partial_j \partial_d u + b(x, u, \partial u),$$
but if we evaluate at $x_0$, then we know everything except the term $a^d(x_0, z_0)\,\partial_d^2 u(x_0)$. In other words we can write $a^d(x_0, z_0)\,\partial_d^2 u(x_0) = (\text{things that only depend on } x_0,\ u(x_0),\ \partial_j u(x_0) \text{ for } j = 1, \ldots, d, \text{ and their tangential derivatives})$. But the coefficient on the left is nonzero, so we can solve for $\partial_d^2 u(x_0)$.

Iterating this, the noncharacteristic assumption at $(x_0, z_0)$ is equivalent to being able to determine all derivatives of $u$ at $x_0$.
Lecture 6 (9/17)

1.7 Noncharacteristic Condition for kth Order Systems

Reference: Evans section 4.6. The important parts are:

• 4.6.1: noncharacteristic Cauchy data for quasilinear kth order systems

• 4.6.3: Cauchy-Kovalevskaya theorem

Consider the quasilinear kth order system of PDEs. We have u : U (⊂ R^d) → R^N, with
u(x) = (u^1(x), …, u^N(x)), and:

0 = ∑_{|α|=k} ∑_{B=1}^N (b_α)_{AB}(x, u, …, D^{k−1}u) D^α u^B + c_A(x, u, …, D^{k−1}u),  A = 1, …, N

Here each b_α is an N × N-matrix-valued function of (x, u, …, D^{k−1}u), with (b_α)_{AB}
having A as the row index and B as the column index; c is defined similarly, R^N-valued.
(I don't really understand the indices.)

The Cauchy data is data for u, ∂_ν u, …, ∂_ν^{k−1} u on Γ ⊂ ∂U.

More simply, we could have Γ ⊂ {x^d = 0}, with Cauchy data for u:

(u, ∂_d u, ∂_d² u, …, ∂_d^{k−1} u) = (g₀, g₁, …, g_{k−1})

on Γ.


If we assume that we have a smooth solution u, one question we will ask is whether we
can determine all derivatives D^n u(x₀), x₀ ∈ Γ, from the Cauchy data and the equation.
The answer is to come up with a noncharacteristic condition for the Cauchy data near x₀.

Given the Cauchy data g₀, …, g_{k−1} near x₀ ∈ Γ, we can compute any D^{k′} u(x₀) for
k′ ≤ k except ∂_d^k u(x₀). We can get this from the PDE. Moving things around we get:

b_{(0,…,0,k)} ∂_d^k u(x₀) = −( ∑_{|α|=k, α≠(0,…,0,k)} b_α D^α u + c )|_{x₀}

We need that b_{(0,…,0,k)} is invertible, in which case we have that:

∂_d^k u(x₀) = −(b_{(0,…,0,k)})^{−1} ( ∑_{|α|=k, α≠(0,…,0,k)} b_α D^α u + c )|_{x₀}

We want to do this again and again. Note that b_{(0,…,0,k)} remains invertible near x₀,
therefore we can compute ∂_d^k u(x) in a neighborhood of x₀ (in Γ). Therefore we can
compute all of D^{k+1} u(x₀) except ∂_d^{k+1} u(x₀).

Now apply ∂_d to the equation, keep ∂_d^{k+1} u on the left hand side and the rest on the
right hand side. Then we get:

b_{(0,…,0,k)} ∂_d^{k+1} u(x₀) = (known)

and we can invert to get ∂_d^{k+1} u(x₀). The same condition (that b_{(0,…,0,k)} is
invertible) allows us to determine ∂_d^{k+1} u(x₀), then ∂_d^{k+1} u(x) in a neighborhood
of x₀, and so on.

Definition 1.7.1 (noncharacteristic condition). (g₀, …, g_{k−1}) is noncharacteristic at
x₀ if b_{(0,…,0,k)}, evaluated at (x₀, u(x₀), …, D^{k−1}u(x₀)), is invertible. (The input
to b can be computed from the Cauchy data.)

Theorem 1.7.1. If (g₀, …, g_{k−1}) is noncharacteristic at x₀, then all derivatives of u at x₀
can be determined.

We can generalize this to general domains. Given U a smooth domain, which means that
for all x₀ ∈ ∂U, after relabeling and reorienting the coordinate axes, there exists r > 0 such
that:

∂U ∩ B(x₀, r) = {x^d = γ(x^1, …, x^{d−1})}

with γ a smooth function. Then we can take y^i = x^i for i = 1, …, d−1 and
y^d = x^d − γ(x^1, …, x^{d−1}).


Recall we can also define the unit normal ν = ν_{∂U} to the boundary ∂U.

Let's define the kth normal derivative (∂^k/∂ν^k) u(x₀) for x₀ ∈ ∂U as:

(∂^k/∂ν^k) u(x₀) = ∑_{i₁,…,i_k=1}^d ν^{i₁} ⋯ ν^{i_k} ∂_{i₁} ⋯ ∂_{i_k} u(x₀)

Note that (∂^k/∂ν^k) u(x₀) agrees with (ν · ∇)^k u(x₀) at the top order.

After playing with indices, we get that:

(∂^k/∂ν^k) u(x₀) = ∑_{|α|=k} (k choose α) ν^α D^α u(x₀)

with the multinomial coefficient:

(k choose α) = k!/(α₁! ⋯ α_d!)
Now the Cauchy data is:

(u, ∂_ν u, …, ∂_ν^{k−1} u) = (g₀, g₁, …, g_{k−1}) on Γ

The noncharacteristic condition comes out of the following idea: work with v(y) = u(x(y)).
The equation for u will translate to some equation for v in the variable y. After a long
computation, it will turn out that we get:

∑_{|α|=k} b̃_α(y, v, …, D^{k−1}v) D^α v + c̃(y, v, …, D^{k−1}v) = 0

Then we require that b̃_{(0,…,0,k)} is invertible.

“oh I see, I had to be much more clever than this”


– Oh (2019)

To do this, we need to compute (for |α| = k):

Du = D(v(y(x))) = (D y^i)(∂_{y^i} v)(y(x))  (summed over i)

D² u = D((D y^i)(∂_{y^i} v)), …

So we have:

D^α u = (∂_{y^d}^k v)(D y^d)^α + (terms not involving ∂_{y^d}^k v)

Then we see that y^d is a boundary defining function. By this we mean that ∂U ⊂ {y^d = 0}.
And we have from multivariable calculus that D y^d is parallel to the normal ν.


This means that the original equation becomes

∑_α b_α D^α u = ∑_α b_α (D y^d)^α ∂_{y^d}^k v + (terms not involving ∂_{y^d}^k v) = ∑_α b̃_α D^α v + c̃

Therefore:

b̃_{(0,…,0,k)} = ∑_{|α|=k} b_α (D y^d)^α = c^k ∑_{|α|=k} b_α ν^α

for some scalar c ≠ 0, since D y^d is parallel to ν. Therefore b̃_{(0,…,0,k)} is invertible if
and only if ∑_{|α|=k} b_α ν^α is invertible.

Example 1.7.1. Consider constant coefficient, 2nd order, scalar PDEs:

∑_{i,j=1}^d a^{ij} ∂_i ∂_j u = 0

Here the noncharacteristic condition only depends on ν (or ∂U). It requires that:

∑_{i,j=1}^d a^{ij} ν^i ν^j ≠ 0

If ∂U is given (locally) by {w = 0}, then ∇w is parallel to ν and the noncharacteristic
condition becomes:

∑_{i,j=1}^d a^{ij} ∂_i w ∂_j w ≠ 0

One case of this is if a^{ij} is a positive definite matrix (i.e. ∑ a^{ij} ξ_i ξ_j > 0 for ξ ≠ 0).
For example if a^{ij} is the identity matrix, then our PDE becomes ∆u = 0 and all boundaries
are noncharacteristic (this is the property of ellipticity).

Another example would be:

a^{ij} = ( −1  0 ;  0  I )

Then we get the wave equation □u = 0. A hypersurface {w = 0} is characteristic if and only if

(∂₀ w)² − ∑_{i=1}^d (∂_i w)² = 0

(these are the characteristic hypersurfaces).

Lecture 7 (9/19)

Recall our discussion on the noncharacteristic condition for

∑_{|α|=k} b_α(x, u, …, D^{k−1}u) D^α u + c(x, u, …, D^{k−1}u) = 0  in U
(u, ∂_ν u, …, ∂_ν^{k−1} u) = (g₀, …, g_{k−1})  on Γ ⊂ ∂U        (8)

We say that the boundary data is noncharacteristic at a point x₀ ∈ Γ if the matrix
∑_{|α|=k} b_α(x₀, u(x₀), …, D^{k−1}u(x₀)) ν^α is invertible; this holds if and only if we can
compute (or determine) all of the derivatives D^β u(x₀) of u at x₀.

This motivates the following question: can we construct a solution u near x₀ from the
knowledge of all D^α u(x₀)? The answer is yes, provided that everything can be written as
a power series; equivalently, if everything is real analytic.

Definition 1.7.2 (real-analytic). We say that f : U → R^N is real analytic if for all x₀ ∈ U
there exist r > 0 and coefficients a_α ∈ R^N (where the α are multi-indices) such that:

f(x) = ∑_α a_α (x − x₀)^α

for |x − x₀| < r.

Remark 1.7.1. If f is real analytic, then we have that:

D^α f(x₀) = α! a_α,  where α! = α₁!α₂!⋯α_d!

Definition 1.7.3 (real-analytic boundary). We say ∂U (or U) is real analytic if for all
x₀ ∈ ∂U there exists r > 0 such that ∂U ∩ B(x₀, r) = {x^d = γ(x^1, …, x^{d−1})} (after a
suitable relabeling and reorienting of the coordinates), with γ real analytic.

Theorem 1.7.2 (Cauchy–Kovalevskaya theorem). Consider the boundary value problem
(8) such that b_α, c, g₀, …, g_{k−1}, ∂U are real-analytic, and suppose the boundary data is
noncharacteristic at x₀ ∈ Γ. Then there exists a neighborhood V ∋ x₀ such that there
exists a unique, real-analytic solution u to the boundary value problem on V ∩ U.

Moreover, this solution is:

u(x) = ∑_α (1/α!) D^α u(x₀) (x − x₀)^α

near x₀.

Proof. Really clever proof; see the text.

Remark 1.7.2 (about the Cauchy–Kovalevskaya theorem). At first, you might think that this
theorem is a very general existence and uniqueness statement for PDEs. However, real-analytic
functions are too restrictive to be broadly useful: if f is real-analytic on R^d and f = 0 on
B(0, 1), then f = 0 on all of R^d.

Moreover, the dependence of u on the boundary data in the Cauchy–Kovalevskaya theorem is
very weak: small disturbances of the g's can lead to large disturbances of u.

This leads to a classical example by J. Hadamard.


Example 1.7.2 (J. Hadamard). Consider

(∂₁² + ∂₂²) u = 0  in R² ∩ {x² > 0}
(u, ∂₂u)(x¹, 0) = (0, k sin(kx¹))

We look for a solution of the form:

u(x¹, x²) = sin(kx¹) f(x²)

Then we have:

0 = (∂₁² + ∂₂²)u = −k² sin(kx¹) f(x²) + sin(kx¹) f″(x²) = sin(kx¹)(f″(x²) − k² f(x²))

Solving for f (with f(0) = 0, f′(0) = k), we get that:

u(x¹, x²) = sin(kx¹) sinh(kx²)

is a solution to the boundary value problem. Now everything is analytic, so by the theorem,
this is the unique solution.

Now let's perturb the initial data to (0, e^{−k^{1/2}} k sin(kx¹)). Then we get:

u = e^{−k^{1/2}} sin(kx¹) sinh(kx²)

Now as we send k → ∞, our data goes to (0, 0) in any C^n-norm.

However, let's look at the solution at a fixed x² > 0; then we have:

e^{−k^{1/2}} sin(kx¹) sinh(kx²) ≈ (1/2) e^{kx² − k^{1/2}} sin(kx¹) → ∞ as k → ∞

but that thing blows up in the uniform norm.
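This instability is easy to see numerically. Below is a small sketch (my own; the function names and the specific values of k are not from the notes) comparing the size of the perturbed data e^{−√k} k sin(kx¹) with the size of the corresponding solution at a fixed x² > 0:

```python
import math

def data_size(k, n):
    # sup norm of the n-th derivative of the data e^{-sqrt(k)} * k * sin(k x1):
    # each derivative brings down a factor of k, so this is e^{-sqrt(k)} * k^(n+1)
    return math.exp(-math.sqrt(k)) * k ** (n + 1)

def solution_size(k, x2):
    # sup over x1 of |u(x1, x2)| for u = e^{-sqrt(k)} sin(k x1) sinh(k x2)
    return math.exp(-math.sqrt(k)) * math.sinh(k * x2)
```

The data goes to zero in every C^n norm while the solution blows up at any fixed height x² > 0.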

2 Distributions
We now turn to the second part of the course. We have two goals:

1. We would like to study several fundamental linear 2nd order scalar PDEs (Laplace,
wave, heat, Schrödinger). (Evans 2.2, 2.4, 2.3)

2. We would like to understand the theory of distributions. This is the "completion of
differential calculus". We would also like to study Fourier analysis (chapter 4-ish). Check
the bCourses site for lecture notes on these topics.


2.1 Motivation
Schwartz was the founder of the theory of distributions, though related constructions were
developed independently earlier. Let's consider a motivating example from electrostatics.

Example 2.1.1. Given E : R³ → R³ the electric field and ρ : R³ → R the charge density:

∇ · E = ρ
∇ × E = 0

Recall from vector calculus that ∇ × (∇ϕ) = 0. The converse is also true, therefore there
exists some ϕ such that E = −∇ϕ.

A basic problem in electrostatics is to determine E (or ϕ) from ρ. Let's solve:

−∆ϕ = ρ

First consider the case when ρ is the sum of point charges at the points x_k with charges q_k.
By linearity, it is enough to find:

ϕ = ∑_k q_k ϕ₀(x − x_k)

where ϕ₀ is the potential of the point charge with charge q = 1 at the point 0. Then we can
say that a general ρ can be written as a continuous sum of point charges:

ρ(x) = ∫ ρ(y) δ(x − y) dy

Then we can say that ϕ is the corresponding continuous sum of point-charge potentials. So:

ϕ(x) = ∫ ρ(y) ϕ₀(x − y) dy

So all that remains is to find the electric potential of a unit point charge, ϕ₀. Note by
rotational symmetry of the problem, ϕ₀ must be radial: ϕ₀ = ϕ₀(r). By the Gauss (divergence)
theorem, we have:

1 = ∫_{B(0,r)} ∇ · E = ∫_{∂B(0,r)} ν · (−∇ϕ₀) = −∫_{∂B(0,r)} ∂_r ϕ₀ = −∂_r ϕ₀(r) 4πr²

Therefore ∂_r ϕ₀(r) = −(1/4π)(1/r²). We now normalize so that ϕ₀ → 0 as r → ∞, and then
we see that:

ϕ₀ = (1/4π)(1/r) = (1/4π)(1/|x|)

An alternative derivation of this comes from Evans chapter 2.2.
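As a numerical sanity check (my own sketch, not from the notes), one can verify with finite differences that ϕ₀ = 1/(4π|x|) is harmonic away from the origin, i.e. ∆ϕ₀ = 0 for x ≠ 0:

```python
import math

def phi0(x, y, z):
    # potential of a unit point charge at the origin
    return 1.0 / (4.0 * math.pi * math.sqrt(x * x + y * y + z * z))

def laplacian(f, p, h=1e-3):
    # centered second-difference approximation of the Laplacian at the point p
    x, y, z = p
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6.0 * f(x, y, z)) / h ** 2
```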

Lecture 8 (9/24)


2.2 Basic Definitions


I would like to point out that the word "distribution" for these objects doesn't really
make a lot of sense. I prefer the term "generalized functions"; I think that makes more sense.

Today will be the real beginning of our discussion of distribution theory. Let's motivate the
definition of a distribution.

Let's try to make sense of the notion of point charges q_i at points x_i ∈ R³. Let u be the
charge distribution of these charges. We would like:

1. u = 0 outside {x_i}

2. ∫_U u = ∑_{x_i ∈ U} q_i

However, no function u can satisfy this. Rather than viewing u as a function on
R³, it is more natural to interpret u as a functional on the set of sets. By this we mean:

u(U) = ∑_{x_i ∈ U} q_i

Given two disjoint open subsets U, V, we then have u(U ∪ V) = u(U) + u(V). If we interpret
these as integrals, we have (not rigorously):

∫ u(χ_U + χ_V) = ∫ u χ_U + ∫ u χ_V

where χ is the characteristic function. It is therefore natural to interpret u as a linear
functional on the space of characteristic functions.

So we want to view u as a linear functional on a space of nice functions. The nicest we
can think of is the space of smooth, compactly supported functions: C₀^∞(R³).

Here is a rough definition of a distribution:

Definition 2.2.1. A distribution u on a set U ⊂ R^d is a linear functional
u : C₀^∞(U) → R which is also "continuous".

Notation: Given ϕ ∈ C₀^∞(U), we will write:

u(ϕ) = ⟨u, ϕ⟩

which mimics the L² inner product.

We call the space C₀^∞(U) the space of test functions.

How "flexible" is this space? There exists a nontrivial element ψ : R → R with the
property ψ ∈ C^∞ and supp(ψ) = closure of {x : ψ(x) ≠ 0} ⊂ [0, ∞). We can set:

ψ(x) = e^{−1/x} for x > 0,  ψ(x) = 0 for x ≤ 0

Then ψ works. Now define ϕ : R^d → R by ϕ(x) = ψ(1 − ((x¹)² + ⋯ + (x^d)²)); then
supp ϕ ⊂ {|x| ≤ 1}.

Definition 2.2.2 (Convolution). Given f a locally integrable function and ϕ ∈ C₀^∞(R^d),
we write:

f ∗ ϕ(x) = ∫ f(y) ϕ(x − y) dy

Remark 2.2.1. Note that even if f is nondifferentiable, f ∗ ϕ is C^∞, because:

∂_{x^j}(f ∗ ϕ) = ∂_{x^j} ∫ f(y) ϕ(x − y) dy = ∫ f(y) (∂_{x^j}ϕ)(x − y) dy

where the second equality relies on f being locally integrable and ϕ smooth and compactly
supported.

Remark 2.2.2. If f ∈ C(R^d) and ϕ ∈ C₀^∞(R^d), then supp(f ∗ ϕ) ⊂ supp f + supp ϕ,
where

supp f + supp ϕ = {x + y ∈ R^d : x ∈ supp f, y ∈ supp ϕ}

In particular, if f ∈ C₀(R^d) and ϕ ∈ C₀^∞(R^d), then f ∗ ϕ ∈ C₀^∞(R^d).
ª

Lemma 2.2.1 (Approximation by mollification). Let f ∈ C^k(R^d) and ϕ ∈ C₀^∞(R^d) with
∫ϕ = 1. Given δ > 0, define:

ϕ_δ(x) = δ^{−d} ϕ(x/δ)

so that (if supp ϕ ⊂ B(0, 1)) supp ϕ_δ ⊂ B(0, δ) and ∫ϕ_δ = 1. Then f ∗ ϕ_δ → f
uniformly on compact sets as δ → 0. Moreover, up to k derivatives converge uniformly:

sup_K |D^α(f ∗ ϕ_δ − f)| → 0

for all α with |α| ≤ k and compact K.

Proof. First take |α| = 0 and k = 0. Then we have:

f ∗ ϕ_δ(x) − f(x) = ∫ f(y) ϕ_δ(x − y) dy − f(x) = ∫ f(x − y) ϕ_δ(y) dy − ∫ f(x) ϕ_δ(y) dy
= ∫ ϕ_δ(y)(f(x − y) − f(x)) dy = ∫ (f(x − δz) − f(x)) ϕ(z) dz

Then as δ → 0, we use (uniform) continuity of f on compact sets to get that this integral
goes to zero, uniformly.

For higher k with |α| > 0, let the derivatives fall on f, then proceed with the same proof.
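Here is a small numerical sketch of the lemma (my own construction, not from the notes): mollifying the Lipschitz function f(x) = |x| with the standard bump, the uniform error is at most δ and shrinks as δ → 0.

```python
import math

def bump(x):
    # the standard (unnormalized) mollifier, supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def mollify(f, x, delta, n=2000):
    # Riemann-sum approximation of (f * phi_delta)(x), where
    # phi_delta(y) = bump(y/delta) / (delta * Z) and Z normalizes the integral to 1
    zs = [-1.0 + 2.0 * (i + 0.5) / n for i in range(n)]
    Z = sum(bump(z) for z in zs) * (2.0 / n)
    vals = [f(x - delta * z) * bump(z) for z in zs]
    return sum(vals) * (2.0 / n) / Z
```

Since |f(x − δz) − f(x)| ≤ δ|z| ≤ δ here, the sup error is bounded by δ, matching the proof above.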


Now, what topology should we take on the space of test functions?

Definition 2.2.3 (Convergence of test functions). We say that a sequence ϕ_j ∈ C₀^∞(U)
converges to ϕ ∈ C₀^∞(U) (written ϕ_j → ϕ) if there exists a compact set K ⊂ U such that
supp ϕ_j ⊂ K (for all but finitely many j), supp ϕ ⊂ K, and sup_{x∈K} |D^α ϕ_j(x) − D^α ϕ(x)| → 0
as j → ∞ for every multi-index α.

Remark 2.2.3 (optional). This is not a Fréchet space or a Banach space.

Remark 2.2.4 (optional). This is the strongest topology such that every C₀^∞(K) embeds
into this space for every compact subset K, where C₀^∞(K) is the space of smooth functions
with supp ϕ ⊂ K, endowed with the convergence ϕ_j → ϕ if and only if
sup_{x∈K} |D^α ϕ_j(x) − D^α ϕ(x)| → 0 for all α. The space C₀^∞(K) is complete and
metrizable (Fréchet).
Definition 2.2.4 (Distribution). A linear functional u : C₀^∞(U) → R is a distribution if it is
continuous in the sense that if ϕ_j is a sequence of test functions converging to ϕ in the
above sense, then u(ϕ_j) converges to u(ϕ).

Lemma 2.2.2. Let u : C₀^∞(U) → R be a linear functional. Then u is a distribution (i.e.
continuous) if and only if it is "bounded": for every compact K ⊂ U there exist N > 0 and
C = C_{K,N} > 0 such that for all ϕ ∈ C₀^∞(U) with supp ϕ ⊂ K:

|⟨u, ϕ⟩| ≤ C_{K,N} sup_{|α|≤N} sup_{x∈K} |D^α ϕ(x)|

Proof. Boundedness implies continuity: by linearity it suffices to show ⟨u, ϕ_j − ϕ⟩ → 0.
Since ϕ_j − ϕ → 0 in C₀^∞(U), there is a compact K such that sup_{x∈K} |D^α(ϕ_j − ϕ)(x)| → 0
for every α. By boundedness, there exist N and C such that:

|⟨u, ϕ_j − ϕ⟩| ≤ C sup_{|α|≤N} sup_{x∈K} |D^α(ϕ_j − ϕ)(x)| → 0

For the other direction, suppose that u is not bounded. Then there exists a compact K ⊂ U
such that for every N there exists ϕ_N with supp ϕ_N ⊂ K and
|⟨u, ϕ_N⟩| ≥ N sup_{|α|≤N} sup_x |D^α ϕ_N(x)|. Set
ψ_N(x) = ϕ_N(x)(N sup_{|α|≤N} sup_x |D^α ϕ_N(x)|)^{−1}. Then |D^α ψ_N| ≤ 1/N for all
|α| ≤ N, so ψ_N → 0 in C₀^∞(U), but |⟨u, ψ_N⟩| ≥ 1, contradicting continuity.

Lecture 9 (9/26)

Recall that we defined the notion of a distribution.

Definition 2.2.5 (distribution). A distribution u is a linear functional ϕ ↦ ⟨u, ϕ⟩ on
C₀^∞(U) that is continuous; i.e. if ϕ_j → ϕ in the sense that there exists a compact set
K ⊂ U such that supp ϕ_j, supp ϕ ⊂ K and

sup_{x∈K} |D^α ϕ_j(x) − D^α ϕ(x)| → 0

as j → ∞ for all α, then ⟨u, ϕ_j⟩ → ⟨u, ϕ⟩.


Lemma 2.2.3. A linear functional u : C₀^∞(U) → R is a distribution if and only if it is
"bounded" in the sense that for all compact subsets K ⊂ U there exist N and C_{K,N} > 0
such that for all ϕ with supp ϕ ⊂ K we have
|⟨u, ϕ⟩| ≤ C_{K,N} sup_{|α|≤N} sup_{x∈K} |D^α ϕ(x)|.

We will call the space of all distributions D′(U).

Definition 2.2.6 (Order of a distribution). A distribution u ∈ D′(U) is said to be of order
≤ N if for every compact set K there exists C_{K,N} > 0 such that
|⟨u, ϕ⟩| ≤ C_{K,N} sup_{|α|≤N} sup_{x∈K} |D^α ϕ(x)|.

The order of a distribution is the minimum among such N.

Definition 2.2.7 (Support of a distribution). Given an open subset V ⊂ U, we say that
u vanishes on V if for all ϕ ∈ C₀^∞(U), supp ϕ ⊂ V implies ⟨u, ϕ⟩ = 0.

Let V_max be the union of all V on which u vanishes. Then set supp u = U ∖ V_max.

Example 2.2.1. Any continuous function u defines a distribution by ⟨u, ϕ⟩ = ∫ uϕ for all
ϕ ∈ C₀^∞(U). The order of such a distribution is zero, and the two definitions of support agree.

Example 2.2.2. Locally integrable functions are distributions. Recall that a function u on U
is locally integrable if ∫_K |u| < ∞ for every compact K ⊂ U. This gives a distribution of
order 0, and supp u = supp(u dx).

Example 2.2.3. Any signed Borel measure on U defines a distribution (just integrate against
it). The order is 0.

Example 2.2.4. The simplest example of a distribution of order N is ⟨u, ϕ⟩ = D^α ϕ(x₀)
with |α| = N.

2.3 Operations
Now let's discuss the basic operations on distributions.

The basic principle (the adjoint method): given an operation P on functions, we try to
compute P′ so that ∫ (Pu)ϕ = ∫ u(P′ϕ) for all test functions ϕ. Then, given a distribution
u ∈ D′(U), we define Pu by ⟨Pu, ϕ⟩ = ⟨u, P′ϕ⟩.

Example 2.3.1. Consider multiplication of u ∈ D′(U) by f ∈ C^∞(U). When u ∈ C₀^∞(U):

∫ (fu)ϕ = ∫ u(fϕ)

Following this, we define ⟨fu, ϕ⟩ = ⟨u, fϕ⟩.

Example 2.3.2 (Differentiation). Start with u ∈ C₀^∞(U) and ϕ ∈ C₀^∞(U); then:

∫ (D^α u)ϕ = (−1)^{|α|} ∫ u D^α ϕ

So define: ⟨D^α u, ϕ⟩ = (−1)^{|α|} ⟨u, D^α ϕ⟩.

Note that all distributions are differentiable, so distribution theory extends differential
calculus. Moreover, every distribution is locally of the form D^α f for some continuous
function f.
Example 2.3.3 (Convolution). Let's first consider u ∈ D′(U) and v ∈ C₀^∞(U). What is
u ∗ v? If v, ϕ ∈ C₀^∞(U), then recall:

u ∗ v(x) = ∫ u(y) v(x − y) dy

So naturally:

⟨u ∗ v, ϕ⟩ = ∫ (u ∗ v)(x)ϕ(x) dx = ∬ u(y) v(x − y) ϕ(x) dy dx = ∫ u(y) ( ∫ v(x − y)ϕ(x) dx ) dy

The inner integral is v′ ∗ ϕ(y), where v′(x) := v(−x). So ⟨u ∗ v, ϕ⟩ = ⟨u, v′ ∗ ϕ⟩.

Because v ∈ C₀^∞(U), u ∗ v is not just a distribution; it is a smooth function.

Proof. Consider u ∗ v(x) = ⟨u, v(x − ·)⟩. This is continuous in x by continuity of u together
with the fact that the map x ↦ v(x − ·) is continuous into the space of test functions: if
x_n → x, then v(x_n − ·) → v(x − ·) in C₀^∞, so u ∗ v(x_n) → u ∗ v(x), i.e. the function
u ∗ v is continuous. One then checks that ∫ (u ∗ v)(x) ϕ(x) dx = ⟨u ∗ v, ϕ⟩, and smoothness
is shown similarly (difference quotients of v(x − ·) converge in the space of test functions).

Let's discuss more examples of distributions.

Example 2.3.4 (Delta function). The delta distribution δ_{x₀} ∈ D′(U) is defined by
⟨δ_{x₀}, ϕ⟩ = ϕ(x₀). This is equivalently given by an atomic measure of mass 1 supported
at x₀.

Example 2.3.5 (Heaviside function). The Heaviside function H : R → R with:

H(x) = 1 for x > 0,  H(x) = 0 for x ≤ 0

Then H′ = δ₀, because:

⟨H′, ϕ⟩ = −⟨H, ϕ′⟩ = −∫₀^∞ ϕ′(x) dx = ϕ(0)
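The pairing ⟨H′, ϕ⟩ = −∫ H ϕ′ = ϕ(0) can be checked numerically; here is a sketch (my own; the Gaussian "test function" is not compactly supported, but it decays fast enough for the quadrature):

```python
import math

def phi(x):
    return math.exp(-x * x)            # smooth, rapidly decaying test function

def dphi(x):
    return -2.0 * x * math.exp(-x * x)  # its derivative

def pairing_H_prime(n=200000, L=10.0):
    # <H', phi> = -<H, phi'> = -∫_0^L phi'(x) dx, approximated by the midpoint rule
    h = L / n
    return -sum(dphi((i + 0.5) * h) for i in range(n)) * h
```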


Remark 2.3.1. We can generalize this computation to give a distribution-theoretic proof of
the divergence theorem.

Remark 2.3.2. Let's try to make sense of 1/x as a distribution. This is not an L¹_loc
function. However, does there exist a distribution u such that u = 1/x on R ∖ {0}? Take
log|x| and differentiate it as a distribution (this is known as taking the principal value of
1/x). For ϕ supported in [−L, L]:

⟨(log|x|)′, ϕ⟩ = −∫ log|x| ϕ′(x) dx = −lim_{ε→0} ∫_{ε≤|x|≤L} log|x| ϕ′(x) dx

Integrating by parts on each piece and noting that the boundary terms at ±ε cancel in the
limit, we are led to define:

⟨PV(1/x), ϕ⟩ = lim_{ε→0} ( ∫_{−L}^{−ε} + ∫_ε^L ) ϕ(x)/x dx
            = lim_{ε→0} ( ∫_{−L}^{−ε} + ∫_ε^L ) (ϕ(x) − ϕ(0))/x dx

where the two expressions agree because ∫_{ε≤|x|≤L} dx/x = 0. The limit exists because the
integrand in the second expression is bounded: by the fundamental theorem of calculus,

(ϕ(x) − ϕ(0))/x = ∫₀¹ ϕ′(xσ) dσ

so we can bound it by sup|ϕ′|, and the contribution of the region {|x| ≤ ε} goes to zero
with ε.

Lecture 10 (10/1)

Today we will continue our discussion of distribution theory. Recall that distributions
(D′(U)) are defined as linear functionals on C₀^∞(U) that are continuous with respect to the
notion of convergence on C₀^∞(U).

We discussed basic operations on distributions: multiplication by a smooth function,
differentiation, and convolution with a test function.

2.4 convergence of distributions


Let's now discuss sequences of distributions.

Definition 2.4.1 (Convergence in the sense of distributions). Given a sequence u_n ∈ D′(U),
we say that u_n → u ∈ D′(U) in the sense of distributions if for all ϕ ∈ C₀^∞(U),
⟨u_n, ϕ⟩ → ⟨u, ϕ⟩.

Remark 2.4.1. This is the same as weak-∗ convergence of linear functionals on our space.

Theorem 2.4.1 (Sequential completeness). Let u_n ∈ D′(U) and suppose that for all
ϕ ∈ C₀^∞(U), ⟨u_n, ϕ⟩ is convergent. Then the linear functional u defined by

⟨u, ϕ⟩ = lim ⟨u_n, ϕ⟩                                 (9)

is a distribution (i.e. it is continuous on C₀^∞(U)).

Proof (optional). It is clear that u as defined above is linear, so it suffices to check
continuity.

Let K be any compact subset of U. Then u obeys (9) for ϕ ∈ C₀^∞(K). This space is nice
(for a fixed K): convergence ϕ_n → ϕ means that for all N,
sup_{|α|≤N} sup_{x∈K} |D^α(ϕ_n − ϕ)(x)| → 0. So define the seminorms
ρ_N(ϕ) = sup_{|α|≤N} sup_{x∈K} |D^α ϕ(x)|, and let:

d(ϕ, ψ) = ∑_N 2^{−N} ρ_N(ϕ − ψ)/(1 + ρ_N(ϕ − ψ))

Then ϕ_n → ϕ if and only if d(ϕ_n, ϕ) → 0. So (C₀^∞(K), d) defines the same topology as
above; d is translation invariant in the sense that d(ϕ + v, ψ + v) = d(ϕ, ψ) for all
ϕ, ψ, v ∈ C₀^∞(K); and d is complete.

These facts must be proven. But once they are shown, {u_n} is a family of linear
functionals on C₀^∞(K), which is a complete metric space (compatible with the vector space
structure). Then we can apply the uniform boundedness principle (the Banach–Steinhaus
theorem): since ⟨u_n, ϕ⟩ is bounded for each ϕ, the family {u_n} is equicontinuous.

By equicontinuity, u|_{C₀^∞(K)} is continuous, and K was arbitrary. Since the topology of
C₀^∞(U) is the strongest one such that each C₀^∞(K) ↪ C₀^∞(U) embeds continuously, it
follows that u is continuous on C₀^∞(U).

Example 2.4.1. Consider a sequence of functions u_n ∈ L¹_loc(U). In this case you can
connect pointwise convergence and convergence in the sense of distributions via the
dominated convergence theorem.

That is, if u_n → u pointwise (almost everywhere) and there exists some function g ∈ L¹_loc
such that |u_n| ≤ g almost everywhere on U, then u_n → u in the sense of distributions.

Proof. We just need to show that ∫ u_n ϕ → ∫ uϕ for all ϕ ∈ C₀^∞(U). But this follows
immediately from the dominated convergence theorem.

Example 2.4.2. Let h ∈ C^∞ with h = 1 on [1, ∞) and supp h ⊂ (0, ∞). Then h(x/δ)
converges as δ → 0 to the Heaviside function, pointwise (away from 0) and in the sense of
distributions.

Example 2.4.3. Consider the sequence:

u_n(x) = e^{inx} for x > 0,  u_n(x) = 0 for x ≤ 0

Then u_n(x) does not have a pointwise limit for any x > 0 with x ∉ 2πZ. But this sequence
does have a limit in the sense of distributions, and it is zero.


To see this, pick ϕ ∈ C₀^∞(U); then we have:

⟨u_n, ϕ⟩ = ∫₀^∞ e^{inx} ϕ(x) dx = ∫₀^∞ (1/(in)) ∂_x(e^{inx}) ϕ(x) dx
        = (1/(in)) e^{inx} ϕ(x) |₀^∞ − (1/(in)) ∫₀^∞ e^{inx} ∂_x ϕ(x) dx
        = −(1/(in)) ϕ(0) − (1/(in)) ∫₀^∞ e^{inx} ∂_x ϕ(x) dx

Both terms go to zero: the integral term has absolute value bounded by
(1/n) ∫₀^∞ |∂_x ϕ(x)| dx, which goes to zero.
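We can see this decay numerically. The following sketch (my own bump-like choice of ϕ, supported on [0, 5]) approximates ⟨u_n, ϕ⟩ by quadrature and checks that it shrinks as n grows:

```python
import cmath

def phi(x):
    # a smooth-ish bump vanishing to high order at x = 0 and x = 5
    return (x * (5.0 - x)) ** 4 / 5.0 ** 8 if 0.0 < x < 5.0 else 0.0

def pairing(n, m=200000):
    # midpoint-rule approximation of ∫_0^5 e^{inx} phi(x) dx
    h = 5.0 / m
    return sum(cmath.exp(1j * n * (i + 0.5) * h) * phi((i + 0.5) * h)
               for i in range(m)) * h
```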
Example 2.4.4. Take the same u_n but normalized: lim_n (n/i) u_n = δ₀ in the sense of
distributions (check, using the computation above).
Example 2.4.5 (approximation of the identity). Let ϕ_δ be the standard mollifier with
support contained in a ball of radius δ. We saw that ϕ_δ ∗ u → u as δ → 0.

Moreover, if we take the same ϕ_δ, then ϕ_δ → δ₀ as δ → 0 in the sense of distributions.


Proof. Fix ψ ∈ C₀^∞(R^d); then:

⟨ϕ_δ, ψ⟩ = ∫ δ^{−d} ϕ(δ^{−1}x) ψ(x) dx = ∫ ϕ(z) ψ(δz) dz

then apply the dominated convergence theorem.

Remark 2.4.2. δ₀ ∗ u = u.
Proposition 2.4.1. Let u ∈ D′(R^d) and take ϕ ∈ C₀^∞(R^d) with ∫ϕ = 1,
ϕ_δ(x) = δ^{−d} ϕ(x/δ). Then ϕ_δ ∗ u → u in the sense of distributions.

Proof.

⟨ϕ_δ ∗ u, ψ⟩ = ⟨u, ϕ′_δ ∗ ψ⟩

with ϕ′_δ(x) := ϕ_δ(−x) and:

ϕ′_δ ∗ ψ(y) = ∫ ϕ_δ(x − y) ψ(x) dx = ∫ ϕ_δ(z) ψ(y + z) dz → ψ(y) as δ → 0

Repeating this for D^α(ϕ′_δ ∗ ψ), we get ϕ′_δ ∗ ψ → ψ in C₀^∞ (the supports stay in a fixed
compact set). Then by continuity of u, we get:

⟨u, ϕ′_δ ∗ ψ⟩ → ⟨u, ψ⟩
Remark 2.4.3. ϕ_δ ∗ u is a smooth function. Therefore every distribution can be approximated
by smooth functions: given u ∈ D′(U), there exists a sequence u_j ∈ C₀^∞(U) such that
u_j → u in the sense of distributions.

Here is a construction. We would like to construct a sequence K_j of compact subsets of U
which is nicely nested, in the sense that:

1. K_j ⊂ K°_{j+1} (the interior of K_{j+1})

2. ⋃_j K_j = U

Now construct χ_j ∈ C₀^∞(U) such that χ_j = 1 on K_j and supp χ_j ⊂ K_{j+1}. Then
mollify: take u_j = ϕ_{δ_j} ∗ (χ_j u) with δ_j → 0 chosen suitably, and we are done.

2.4.1 Approximation Method


An operation P on smooth (or smooth and compactly supported) functions is generalized to
u ∈ D′(U) by taking a sequence of smooth functions u_j ∈ C^∞(U) such that u_j → u in the
sense of distributions and defining Pu = lim Pu_j.

Example 2.4.6. Prove that if u ∈ D′(U) is such that ∂_j u = 0 for all j, then u is constant.
This is easy with the approximation method:

Let u_δ = ϕ_δ ∗ u; this converges to u distributionally. But we also know that
∂_j u_δ = ϕ_δ ∗ ∂_j u = 0, and u_δ is smooth, therefore u_δ is constant (on connected
components); passing to the limit, u must be constant.

Example 2.4.7 (Differentiation of the characteristic function). Recall last time we
computed that H′ = δ₀.

Now let U ⊂ R^d be an open domain. Define 1_U = χ_U(x) as the characteristic function.
Then the claim is that if U has a C¹ boundary, we have:

∂_j 1_U = −(ν_{∂U})_j dS

with dS the Euclidean surface measure, i.e. ⟨dS, ϕ⟩ = ∫_{∂U} ϕ|_{∂U} dS.

This actually implies the divergence theorem. Let b^j ∈ C^∞(R^d); then we have:

∫_U ∇ · b = ⟨1_U, ∑_j ∂_j b^j⟩ = −∑_j ⟨∂_j 1_U, b^j⟩ = ∫_{∂U} ν · b dS

Proof. Note that ∂_j 1_U is supported in ∂U (where 1_U is locally constant, the derivative
vanishes). It suffices to compute this on balls B_α that cover ∂U. In such a ball,
B ∩ ∂U = {x^d = γ(x^1, …, x^{d−1})} and B ∩ U = {x^d < γ(x^1, …, x^{d−1})}. Then we have:

1_U(x) = lim_{δ→0} h(δ^{−1}(γ(x^1, …, x^{d−1}) − x^d))

with h as in Example 2.4.2. Set ṽ = (∂_1 γ, …, ∂_{d−1} γ, −1). Then we can differentiate
1_U by differentiating all the h_δ's. By computation we have:

∂_j h(δ^{−1}(γ − x^d)) = h′(δ^{−1}(γ(x^1, …, x^{d−1}) − x^d)) δ^{−1} ṽ_j

Now for each (x^1, …, x^{d−1}), as δ goes to zero the factor h′(δ^{−1}(γ − x^d)) δ^{−1}
converges to the distribution δ_{γ(x^1,…,x^{d−1})}(x^d); therefore for each ϕ ∈ C₀^∞(U)
with supp ϕ ⊂ B_α we have:

⟨∂_j 1_U, ϕ⟩ = ∫_{B_α} ṽ_j ϕ(x^1, …, x^{d−1}, γ(x^1, …, x^{d−1})) dx^1 ⋯ dx^{d−1}
            = −∫ ν_j √(1 + |∇γ|²) ϕ(x^1, …, x^{d−1}, γ(x^1, …, x^{d−1})) dx^1 ⋯ dx^{d−1}
            = −∫_{∂U} ν_j ϕ(x) dS

using that the outward unit normal is ν = −ṽ/√(1 + |∇γ|²) and that
dS = √(1 + |∇γ|²) dx^1 ⋯ dx^{d−1}.


Lecture 11 (10/3)

Today will be the last class where we discuss distribution theory. We will:

• introduce the notion of fundamental solutions

• discuss operations between distributions

Let's consider multiplication of distributions. Take u, v ∈ D′(U). What does u·v look like?
This doesn't always work: for example, 1/√|x| is locally integrable on R, therefore it defines
a distribution; however, the product of 1/√|x| with itself would be 1/|x|, which is not
locally integrable.

So in general this is impossible, but if for every x ∈ U at least one of u or v is smooth
near x, then u·v is well defined.
Definition 2.4.2 (Singular support). Let u ∈ D′(U). We say that u is smooth on an
open subset V ⊂ U if ⟨u, ϕ⟩ = ∫ ũϕ for some ũ ∈ C^∞ and all ϕ ∈ C₀^∞(U) with
supp ϕ ⊂ V.

We define the singular support as:

singsupp u = U ∖ ⋃{V ⊂ U : V open, u smooth on V}

Intuitively, this is the collection of points where the distribution fails to be a smooth
function.
Proposition 2.4.2. If the singular supports of u and v are disjoint, then their product is
well-defined.

Proof. Construct a smooth function χ such that χ = 1 on a neighborhood of singsupp v and
supp χ ⊂ U ∖ singsupp u. Then for all ϕ ∈ C₀^∞(U) we have ϕ = χϕ + (1 − χ)ϕ, with
supp(χϕ) ⊂ U ∖ singsupp u and supp((1 − χ)ϕ) ⊂ U ∖ singsupp v.

Then ⟨uv, ϕ⟩ = ⟨uv, χϕ⟩ + ⟨uv, (1 − χ)ϕ⟩ = ⟨v, uχϕ⟩ + ⟨u, v(1 − χ)ϕ⟩, where each pairing
makes sense since u is smooth on supp(χϕ) and v is smooth on supp((1 − χ)ϕ).
The converse of Proposition 2.4.2 is false as the following example shows:
Example 2.4.8. In R2 , let u dS˜x 0 and v dS˜y 0 . So now supp u singsupp u ˜x 0
and supp v singsupp v ˜y 0. Now singsupp u 9 singsupp v ˜0 but uv is well-defined.

We have u limδ 0 δ 1 ψ ˆδ 1 x, ψ > C0 ˆR with


  ª
Rψ 1. and v defined similarly. Then
we have uv“ ” limδ δ 2 ψ ˆx~δ ψ ˆy ~δ  δ0


Now let's discuss the convolution of two distributions. In general you cannot define this.

Proposition 2.4.3. If u and/or v (distributions on R^d) has compact support, then u ∗ v is
well-defined; moreover u ∗ v = v ∗ u and supp(u ∗ v) ⊂ supp u + supp v.

Proof (well-definedness). Recall that when v is a function, ⟨u ∗ v, ϕ⟩ = ⟨u, v′ ∗ ϕ⟩, where
v′(x) := v(−x) and:

v′ ∗ ϕ(y) = ∫ v(x − y)ϕ(x) dx = ⟨v, ϕ(· + y)⟩

Without loss of generality, suppose that supp v is compact. Then since ϕ ∈ C₀^∞(R^d), the
function v′ ∗ ϕ lies in C^∞(R^d) even when v ∈ D′, and supp(v′ ∗ ϕ) ⊂ −supp v + supp ϕ.
Therefore v′ ∗ ϕ ∈ C₀^∞(R^d), and ⟨u ∗ v, ϕ⟩ = ⟨u, v′ ∗ ϕ⟩ is well-defined.

2.5 Fundamental Solutions


By the previous proposition, for any u ∈ D′(R^d) we can always make sense of
u ∗ δ₀ = δ₀ ∗ u, which is by definition just u. If u ∈ C₀^∞(R^d), then
u(x) = ⟨δ_x, u⟩ = ⟨δ₀(x − ·), u⟩ = δ₀ ∗ u(x). We can write, informally:

⟨δ₀(x − ·), u⟩ "=" ∫ δ₀(x − y) u(y) dy = u(x)

Example 2.5.1. If V is a finite dimensional vector space, then we can write v = ∑_i v^i e_i
if {e_i} spans V.

Now suppose that Pu = v where P is a linear operator from U to V. Existence of solutions
to this equation follows if we know solutions u_i of Pu_i = e_i: if v ∈ V with v = ∑ v^i e_i,
then u = ∑ v^i u_i solves Pu = v.

So the above is like saying that the family {δ₀(x − ·)}_x "spans" the whole space C₀^∞(R^d).
Let's consider P a linear scalar partial differential operator of the form:

Pu = ∑_{|α|≤k} a_α(x) D^α u

where the a_α are real-valued smooth functions. Then Pu makes sense for u ∈ D′(R^d).

Definition 2.5.1 (Fundamental solution). Given y ∈ R^d, we say E_y is a fundamental
solution for P at y if it solves the equation P E_y = δ_y = δ(· − y).

The application is that if we would like to solve Pu = v, we expect u "=" ∫ v(y) E_y(x) dy
to be a solution (in analogy with the finite dimensional vector space example), since:

v(x) = ∫ v(y) δ₀(x − y) dy

So we have (hopefully):

P ∫ v(y) E_y(x) dy = ∫ v(y) δ₀(x − y) dy = v(x)

but these equalities need to be checked on a case-by-case basis.

2.5.1 Uniqueness
Example 2.5.2. Let U and V be finite dimensional vector spaces and P : U → V a linear
operator, with Pu = v. Let ϕ ∈ V*; then:

⟨u, P′ϕ⟩ = ⟨Pu, ϕ⟩ = ⟨v, ϕ⟩                               (10)

with P′ : V* → U* the adjoint. Suppose there exists a set of vectors {θ_i} that spans U*.
If for each θ_i we can find E*_i ∈ V* such that P′E*_i = θ_i, then by (10) (replacing ϕ with
E*_i):

⟨u, θ_i⟩ = ⟨v, E*_i⟩

for all i. Since the dual space separates points, this gives uniqueness: if u₁, u₂ both solve
Pu = v, then ⟨u₁, θ_i⟩ = ⟨u₂, θ_i⟩ for all θ_i, hence u₁ = u₂.


Now, given P, define the formal adjoint as:

P′v = ∑_{|α|≤k} (−1)^{|α|} D^α(a_α v)

Note that if one of u or v is smooth and compactly supported, then ⟨Pu, v⟩ = ⟨u, P′v⟩
(this follows by integrating by parts).

Consider fundamental solutions E′_y(x) of the adjoint: P′E′_y = δ(· − y). Then you expect
a characterization of u solving Pu = v:

∫ v(x) E′_y(x) dx = ∫ Pu(x) E′_y(x) dx = ∫ u(x) P′E′_y(x) dx = ∫ u(x) δ₀(x − y) dx = u(y)

Example 2.5.3. Let $P = \partial_x$ on $\mathbb{R}$. We know that
\[ \frac{d}{dx} H = \delta_0, \quad \text{so} \quad \frac{d}{dx} H(x-y) = \delta_0(x-y) = \delta_y, \]
therefore $H(\cdot - y)$ is a fundamental solution for $\partial_x$.

So suppose we would like to solve $\partial_x u = v$. Then by our formula we have
\[ u(x) = \int v(y)\,H(x-y)\,dy = \int_{-\infty}^{x} v(y)\,dy. \]
The adjoint operator is just $P' = -\partial_x$, so the solutions we are looking for satisfy $P'E'_y(x) = \delta_0(x-y)$. Just let $E'_y(x) = H(y-x)$. Then compute
\[ u(y) \stackrel{?}{=} \int v(x)\,H(y-x)\,dx = \int_{-\infty}^{y} v(x)\,dx. \]
When $v \in C_0^\infty(\mathbb{R})$, we have
\[ \int \partial_x u\; H(y-x)\,dx = -\int u(x)\,\partial_x\big(H(y-x)\big)\,dx = u(y). \]

Suppose instead we are interested in solving $\partial_x u = v$ on $(a,b)$:
\[ \int 1_{(a,b)}(x)\,\partial_x u(x)\,H(y-x)\,dx = -\int u(x)\,\partial_x\big(1_{(a,b)}(x)\,H(y-x)\big)\,dx. \]
Then since $\partial_x 1_{(a,b)} = \delta_a - \delta_b$, the above becomes
\[ u(y) - u(a)H(y-a) + u(b)H(y-b), \]
because $y < b$. And so we get $u(y) = u(a) + \int_a^y v\,dx$, and we have the fundamental theorem of calculus.
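As a quick numerical sanity check (not from the notes), one can verify that convolution with the Heaviside fundamental solution really inverts $\partial_x$: with $u(x) = \int_{-\infty}^x v(y)\,dy$, differentiating recovers $v$.

```python
import numpy as np

# Numerical sketch (not from the notes): convolving a forcing v with the
# Heaviside fundamental solution gives u(x) = int_{-inf}^x v(y) dy, and
# differentiating u recovers v, i.e. H is a fundamental solution of d/dx.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
v = np.exp(-x**2)                 # smooth, effectively compactly supported
u = np.cumsum(v) * dx             # u = v * H  (a left Riemann sum)
du = np.gradient(u, dx)           # should recover v away from the endpoints
err = np.max(np.abs(du[100:-100] - v[100:-100]))
```

Here the exact value $u(+\infty) = \int_{\mathbb{R}} e^{-y^2}\,dy = \sqrt{\pi}$ gives an extra consistency check.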
Lecture 12 (10/8)

Today we will cover the Laplace equation (Evans 2.2).

Recall we were discussing the fundamental solution. If $P$ is a linear operator of the form
\[ P = \sum_{|\alpha| \le k} a_\alpha D^\alpha, \]
we say that $E_y$ is a fundamental solution for $P$ at $y \in \mathbb{R}^d$ if $P E_y = \delta_y = \delta_0(\cdot - y)$.

Once we have this, we can prove existence. Given a nice function $f$, if we formally write
\[ u[f] = \int f(y)\,E_y(x)\,dy, \]
then we expect that $P u[f] = f$.

We also get uniqueness: we look at the adjoint of $P$, which is characterized by
\[ \langle Pu, v \rangle = \langle u, P'v \rangle. \]
After integrating by parts, we get
\[ P'v = \sum_{|\alpha| \le k} (-1)^{|\alpha|}\, D^\alpha(a_\alpha v). \]
Then the fundamental solution for the adjoint can be defined by $P'(E'_x) = \delta_x$. So now, given a nice $u$, we have
\[ u(x) = \langle u, \delta_x \rangle = \langle u, P'(E'_x) \rangle = \langle Pu, E'_x \rangle. \]
Then we can recover $u$ from $Pu$ (which gives us uniqueness). We will call the formula above the representation formula for $u$.

We also have uniqueness for a boundary value problem. This is the same idea, where we write, for $x \in U$,
\[ u(x) = \langle u, \delta_x \rangle = \langle 1_U u, \delta_x \rangle = \langle 1_U u, P'(E'_x) \rangle = \langle 1_U\, Pu, E'_x \rangle + \text{terms involving } \nabla 1_U. \]
But these extra terms give us the contributions of the boundary values of $u$.

Remark 2.5.1. Consider constant-coefficient $P$, i.e. the case where the $a_\alpha$ are constants. Two simplifications occur:
1. If $P E_0 = \delta_0$, then $E_y(x) = E_0(x-y)$ is a fundamental solution for $P$ at $y$, since $P E_0(\cdot - y) = \delta_0(\cdot - y) = \delta_y$. So we get translation invariance.
2. $P'v = \sum_{|\alpha| \le k} (-1)^{|\alpha|} a_\alpha D^\alpha v$, i.e. $P'v(x) = (P\tilde v)(-x)$ where $\tilde v(y) = v(-y)$. Therefore if we set $(E'_x)(y) = E_0(x-y)$, then
\[ P'_y (E'_x) = (P E_0)(x-y) = \delta_0(x-y) = \delta_x. \]

So we will now only consider constant-coefficient $P$. Then $E_y(x) = E_0(x-y)$ and $(E'_x)(y) = E_0(x-y)$, and we have
\[ u[f] = \int f(y)\,E_0(x-y)\,dy = f * E_0(x). \]
For uniqueness, with $(E'_x)(y) = E_0(x-y)$:
\[ \langle Pu, E'_x \rangle = \int Pu(y)\,E_0(x-y)\,dy = Pu * E_0(x). \]


3 Laplace Equation
Now we will consider $P = -\Delta$. Let's begin by deriving fundamental solutions to the Laplacian, that is, solve $-\Delta E_0 = \delta_0$ in $\mathbb{R}^d$ for all $d \ge 2$. We already did this. We will use rotational invariance of $\Delta$: given a rotation matrix $R$ on $\mathbb{R}^d$,
\[ \Delta\big(u(Rx)\big) = (\Delta u)(Rx). \]
Then we can look for radial $E_0 = E_0(r)$. So we would like $\langle -\Delta E_0, 1_{B(0,r)} \rangle = \langle \delta_0, 1_{B(0,r)} \rangle = 1$. Integrating by parts on the left, we get
\[ \int \nabla E_0 \cdot \nabla 1_{B(0,r)} = -\int_{\partial B(0,r)} \nabla E_0 \cdot \nu\,dS = -\int_{\partial B(0,r)} \frac{x}{|x|}\cdot\nabla E_0\,dS = -|\partial B(0,r)|\,\partial_r E_0 = -d\alpha(d)\,r^{d-1}\,\partial_r E_0(r), \]
where $\alpha(d)$ is the volume of the unit ball in $\mathbb{R}^d$. Therefore
\[ -\partial_r E_0(r) = \frac{1}{d\alpha(d)\,r^{d-1}}. \]
So we have the fundamental solution
\[ E_0(r) = \begin{cases} -\dfrac{1}{2\pi}\log r & d = 2 \\[1ex] \dfrac{1}{d(d-2)\alpha(d)}\,\dfrac{1}{r^{d-2}} & d \ge 3 \end{cases} \]

Remark 3.0.1. There is no universal way to determine fundamental solutions.
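Although there is no universal recipe, a candidate fundamental solution can at least be sanity-checked numerically. The following sketch (not from the notes) tests the defining identity $\langle E_0, -\Delta\varphi \rangle = \varphi(0)$ in $d = 3$, where $E_0(r) = \frac{1}{4\pi r}$, using a radial test function so the integral reduces to one dimension.

```python
import numpy as np

# Check <E0, -Lap(phi)> = phi(0) in d = 3 for E0(r) = 1/(4*pi*r) and the
# radial test function phi(x) = exp(-|x|^2)  (so phi(0) = 1).
r = np.linspace(1e-6, 10.0, 200001)
dr = r[1] - r[0]
# For radial phi: Lap(phi) = phi'' + (2/r) phi' = (4 r^2 - 6) exp(-r^2).
lap_phi = (4 * r**2 - 6) * np.exp(-r**2)
E0 = 1.0 / (4 * np.pi * r)
integrand = E0 * (-lap_phi) * 4 * np.pi * r**2   # volume element 4*pi*r^2 dr
val = np.sum(integrand) * dr                     # expect phi(0) = 1
```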


Existence of a solution to $-\Delta u = f$ in $\mathbb{R}^d$: we have $u[f] = f * E_0(x)$, which for $d \ge 3$ is
\[ c_d \int \frac{f(y)}{|x-y|^{d-2}}\,dy. \]
As long as $f \in \mathcal{D}'(\mathbb{R}^d)$ has compact support, $-\Delta u[f] = f$.

Uniqueness: take $u \in \mathcal{D}'(\mathbb{R}^d)$ with compact support. Then $u = (-\Delta u) * E_0$.

Theorem 3.0.1 (Analytic regularity of solutions to $\Delta u = 0$). If $u \in \mathcal{D}'(\mathbb{R}^d)$ is a solution to $\Delta u = 0$ (i.e. $u$ is harmonic), then $u$ is smooth and analytic.
Proof. We have (since $\Delta u = 0$)
\[ -\Delta(\chi u) = -(\Delta\chi)u - 2\nabla\chi\cdot\nabla u. \]
Now given $x \in \mathbb{R}^d$, choose $\chi \in C_0^\infty(\mathbb{R}^d)$ such that $\chi = 1$ in a neighborhood of $x$; then $\nabla\chi = 0$ near $x$. Now
\[ \chi u = \big(-\Delta(\chi u)\big) * E_0 = -\int \big( (\Delta\chi)u + 2\nabla\chi\cdot\nabla u \big)(y)\, E_0(x-y)\,dy. \]
The integrand is supported away from $x$ (where $\nabla\chi$ and $\Delta\chi$ vanish), and $E_0(x-y)$ fails to be smooth only at $x = y$; by smoothness of $E_0(x-y)$ away from $x = y$, $\chi u$ is smooth near $x$.

Theorem 3.0.2 (Derivative estimate for a harmonic function). If $u$ is harmonic in a neighborhood of $\overline{B(x,r)}$, then there exists a constant $C = C(|\alpha|)$ such that
\[ |D^\alpha u(x)| \le \frac{C}{r^{d+|\alpha|}} \int_{B(x,r)} |u|\,dy. \]

Proof. Take $\chi \in C^\infty$ with $\chi = 1$ on $B(x, r/2)$, $\operatorname{supp}\chi \subset B(x,r)$, and $|\nabla\chi| \le c/r$ (with corresponding bounds on higher derivatives). Then
\[ D^\alpha u(x) = D^\alpha(\chi u)(x) = -\int \big( (\Delta\chi)u + 2\nabla\chi\cdot\nabla u \big)(y)\, D^\alpha E_0(x-y)\,dy. \]
The integrand is supported in the annulus between radii $r/2$ and $r$, where $|D^\alpha E_0(x-\cdot)| \lesssim r^{-(d-2)-|\alpha|}$; after an integration by parts moves the derivative off $u$ (at the cost of another factor $1/r$), we control the whole expression by
\[ \frac{C}{r^{d+|\alpha|}} \int_{B(x,r)} |u|\,dy. \]

Theorem 3.0.3 (Liouville). If $u$ is a bounded harmonic function defined on $\mathbb{R}^d$, then $u$ is constant.
Proof. Use the above theorem: $|Du(x)| \le C r^{-(d+1)} \int_{B(x,r)}|u|\,dy \le C' \|u\|_{L^\infty}/r \to 0$ as $r \to \infty$, so $Du \equiv 0$.
Theorem 3.0.4. Given $f \in \mathcal{D}'(\mathbb{R}^d)$ with compact support ($d \ge 3$), any bounded solution to $-\Delta u = f$ is of the form
\[ u = u[f] + c \]
for some constant $c$.

Proof. If $u, u'$ are solutions, then $u - u'$ is bounded and harmonic, and therefore equal to a constant.
Theorem 3.0.5 (Mean Value Property). If $u$ is harmonic, then
\[ u(x) = \frac{1}{|\partial B(x,r)|}\int_{\partial B(x,r)} u\,dS(y) = \frac{1}{|B(x,r)|}\int_{B(x,r)} u(y)\,dy. \]

Proof. Note that the two equalities are equivalent (average the first one in $r$; details omitted). We will just prove the first equality. Integrating by parts twice,
\[ u(x) = \int 1_{B(x,r)}(y)\,u(y)\,\big({-}\Delta E_0\big)(x-y)\,dy = \int_{\partial B(x,r)} \nu\cdot\nabla u\; E_0(x-y)\,dS - \int_{\partial B(x,r)} u(y)\,\nu\cdot D_y E_0(x-y)\,dS, \]
where the interior terms cancel because $\Delta u = 0$ in $B(x,r)$. Since $E_0$ is radial, on $\partial B(x,r)$ we have
\[ -\nu\cdot D_y E_0(x-y) = -\partial_r E_0(r) = \frac{1}{d\alpha(d)\,r^{d-1}} = \frac{1}{|\partial B(x,r)|}, \]
so the second term is just
\[ \frac{1}{|\partial B(x,r)|}\int_{\partial B(x,r)} u\,dS. \]
The first term is zero by the divergence theorem: $E_0(x-y)$ is constant ($= E_0(r)$) on $\partial B(x,r)$, and $\int_{\partial B(x,r)} \nu\cdot\nabla u\,dS = \int_{B(x,r)} \Delta u\,dy = 0$.
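The mean value property is easy to test numerically. A sketch (not from the notes), using the harmonic function $u(x,y) = x^2 - y^2$ in the plane:

```python
import numpy as np

# Average of the harmonic function u(x,y) = x^2 - y^2 over a circle should
# equal its value at the center (mean value property).
u = lambda x, y: x**2 - y**2
x0, y0, r = 1.3, -0.7, 2.0
theta = np.linspace(0.0, 2 * np.pi, 100000, endpoint=False)
sphere_avg = np.mean(u(x0 + r * np.cos(theta), y0 + r * np.sin(theta)))
center_val = u(x0, y0)
```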
Lecture 13 (10/15)
Day of the midterm.
Lecture 14 (10/17)
Today we will spend 20-30 minutes discussing the midterm and review what we have done so far in the course.

General Theory
1. For first-order scalar PDEs we studied the method of characteristics. The idea is that we can reduce such PDEs to systems of first-order ODEs, to which we can apply ODE results to get existence and uniqueness. This required noncharacteristic boundary data. In the quasilinear case, the noncharacteristic condition is equivalent to being able to compute all derivatives of the solution at a boundary point. We extended this to general quasilinear $k$-th order systems (see Holmgren's theorem for an application). We then discussed Cauchy-Kovalevskaya, but that isn't the end of the story, due to Hadamard's example; Hadamard emphasized well-posedness (which asks not only for existence and uniqueness, but also continuous dependence on the data).
2. Then we turned to constant-coefficient 2nd-order scalar PDEs. We are studying the four equations:
\[ \begin{cases} \Delta u = 0 \\ \Box u = 0 \\ (\partial_t - \Delta)u = 0 \\ (i\partial_t + \Delta)u = 0 \end{cases} \]
These can be studied on a case-by-case basis (as in Evans), but we can study them all at once using distribution theory. This allows us to make use of the fundamental solutions $P E_y = \delta_y$ and $P'(E'_x) = \delta_x$. We get existence for $Pu = f$, and we get uniqueness: if $u$ is nice (e.g. compactly supported) then $u(x) = E_0 * Pu$; if $u$ is only defined on $\Omega$ then
\[ u(x) = E_0 * Pu + \int_{\partial\Omega}(\cdots). \]
What's to come: for the next 2-3 weeks we will focus on these equations, then we will discuss Sobolev spaces. We will be studying a priori estimates for
\[ \begin{cases} Pu = f & \text{in } \Omega \\ u = g & \text{on } \Gamma \end{cases} \]


We first assume a solution exists, then prove estimates on it. For existence we would like $\langle u, P'\varphi \rangle = \langle f, \varphi \rangle$. This defines $u$ if $P'$ is injective: the linear functional $P'\varphi \mapsto \langle u, P'\varphi \rangle$ is then well-defined on $\operatorname{span}\{P'\varphi : \varphi \in C_0^\infty(U)\}$. This strategy favors $L^2$-type spaces, in which we can find solutions $u \in L^2$ whose derivatives also stay in $L^2$. A Sobolev space is $H^k = \{u : D^\alpha u \in L^2,\ |\alpha| \le k\}$.

3.1 Cauchy-Riemann Equation

Let's apply what we have learned to the Cauchy-Riemann equations for $f = u + iv$:
\[ \partial_x u - \partial_y v = 0, \qquad \partial_y u + \partial_x v = 0. \]
This is equivalent to $(\partial_x + i\partial_y)f = 0$. Note that if this is true, then each component of $f$ is harmonic, as $(\partial_x - i\partial_y)(\partial_x + i\partial_y) = \partial_x^2 + \partial_y^2 = \Delta$. Now in $d = 2$ the fundamental solution satisfies
\[ \Delta\Big( \frac{1}{2\pi}\log r \Big) = \delta_0. \]
Decomposing $\Delta$, we get the fundamental solution
\[ (\partial_x - i\partial_y)\,\frac{1}{2\pi}\log r = \frac{1}{2\pi}\Big( \frac{x}{r^2} - i\,\frac{y}{r^2} \Big) = \frac{1}{2\pi z}. \]

Theorem 3.1.1. $E_0 = \dfrac{1}{2\pi}\dfrac{1}{z}$ is a fundamental solution for $(\partial_x + i\partial_y)$.

Theorem 3.1.2. If $(\partial_x + i\partial_y)f = 0$ for $f \in \mathcal{D}'(U)$, then $f \in C^\infty(U)$.

Proof. Let $z_0 \in U$, and let $\chi$ be $1$ near $z_0$ and zero outside a small ball centered at $z_0$. Then
\[ (\partial_x + i\partial_y)(\chi f) = f\,(\partial_x + i\partial_y)\chi. \]
Then since $\chi f$ is compactly supported, we can use the representation formula to get
\[ \chi f = E_0 * \big( (\partial_x + i\partial_y)(\chi f) \big) = \underbrace{E_0 * \big( (\partial_x\chi + i\partial_y\chi)\,f \big)}_{F}. \]
But $(\partial_x\chi + i\partial_y\chi)f$ vanishes near $z_0$, and $E_0$ fails to be smooth only at $z = w$; so if that term is a function, then near $z_0$
\[ \chi f = \int E_0(z-w)\,F(w)\,dw_x\,dw_y \]
is smooth. If $F$ isn't a function, we use approximation.

Corollary 3.1.1 (Morera's Theorem). If $f$ is $C^1$ on $U$ and $\int_\Gamma f(z)\,dz = 0$ for all closed curves $\Gamma$, then $f$ is holomorphic (i.e. $f$ is a $C^\infty$ solution to the CR equation).


Proof. We say that $f$ is a distributional solution if and only if $\langle f, (\partial_x + i\partial_y)\varphi \rangle = 0$ for all complex-valued $\varphi \in C_0^\infty(U)$, where the pairing is now defined as $\langle u, v \rangle = \int u(z)\,v(z)\,dx\,dy$ (no complex conjugate, so it isn't an inner product anymore). By the previous theorem, if
\[ 0 = \int f(z)\,(\partial_x + i\partial_y)\varphi(z)\,dx\,dy \]
for all $\varphi$, then $f$ is smooth.

Lemma 3.1.1.
\[ \int h(z)\,(\partial_x + i\partial_y)(\chi_\Omega)\,dx\,dy = -\frac{1}{i}\int_{\partial\Omega} h(z)\,(dx + i\,dy). \]

By hypothesis, we have $\langle f, (\partial_x + i\partial_y)\chi_\Omega \rangle = 0$ for every such $\Omega$. Approximating a test function $\varphi$ by step functions (combinations of such $\chi_\Omega$'s) in $L^1$, we see that $\langle (\partial_x + i\partial_y)f, \chi_\Omega \rangle = 0$ implies $\langle (\partial_x + i\partial_y)f, \varphi \rangle = 0$ for all $\varphi \in C_0^\infty(U)$.
Theorem 3.1.3 (Cauchy Integral Formula).
\[ f(z_0) = \frac{1}{2\pi i}\int_{\partial\Omega} \frac{f(z)}{z - z_0}\,dz. \]

Proof. (This is uniqueness, i.e. a representation formula, for $f$ on $\Omega$.) We have
\[ f(z_0) = f(z_0)\chi_\Omega(z_0) = \int \delta(z - z_0)\,\chi_\Omega(z)\,f(z)\,dz = \int (\partial_x + i\partial_y)\big( E_0(z_0 - z) \big)\,\chi_\Omega(z)\,f(z)\,dz. \]
Then we integrate by parts: when the differential operator falls on $f$ we get zero (as $f$ is holomorphic), and when it falls on $\chi_\Omega$ we use the lemma to get
\[ \frac{1}{2\pi}\int \frac{1}{z_0 - z}\,(\partial_x + i\partial_y)\chi_\Omega(z)\,f(z)\,dx\,dy = \frac{1}{2\pi i}\int_{\partial\Omega} \frac{f(z)}{z - z_0}\,dz. \]
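The Cauchy integral formula can be checked numerically; the trapezoidal rule is spectrally accurate for periodic integrands, so even a modest grid gives essentially machine precision. A sketch (not from the notes), with $f(z) = z^2$ on the unit circle:

```python
import numpy as np

# Cauchy integral formula f(z0) = (1/(2*pi*i)) * contour integral of
# f(z)/(z - z0) dz over the unit circle, for the holomorphic f(z) = z^2.
f = lambda z: z**2
z0 = 0.3 + 0.2j
n = 2000
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
z = np.exp(1j * theta)
dz = 1j * z * (2 * np.pi / n)          # dz = i e^{i theta} d theta
val = np.sum(f(z) / (z - z0) * dz) / (2j * np.pi)
ci_err = abs(val - f(z0))
```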


Lecture 15 (10/22)
Today we will continue the discussion of the Laplace equation. We were looking at
\[ -\Delta u = f. \]
What we did last time was compute the (radial) fundamental solution for $-\Delta$, that is, $-\Delta E_0 = \delta_0$, which had the form
\[ E_0(x) = \begin{cases} -\dfrac{1}{2\pi}\log|x| & d = 2 \\[1ex] \dfrac{1}{d(d-2)\alpha(d)}\,|x|^{-(d-2)} & d \ge 3 \end{cases} \]
with $\alpha(d)$ the volume of the $d$-dimensional unit ball.

Recall that we proved:


- existence of a solution $u[f]$ to $-\Delta u = f$ for $f$ a compactly supported continuous function on $\mathbb{R}^d$, where $u[f](x) = E_0 * f(x)$;

- uniqueness, due to the representation formula: when $u$ is compactly supported, $E_0 * (-\Delta u) = u$;

- combined with the regularity (smoothness, analyticity) of $E_0$ outside $\{x = 0\}$, this leads to smoothness (and analyticity) of $u$ near $x \in U$ when $\Delta u = 0$ in $U$;

- the derivative estimate: given $\Delta u = 0$ in $U$ and $B(x,r) \subset U$,
\[ |D^\alpha u(x)| \lesssim \frac{1}{r^{d+|\alpha|}} \int_{B(x,r)} |u|\,dy; \]

- the Liouville theorem: if $u$ is a bounded harmonic function on $\mathbb{R}^d$, then $u$ is constant;

- in $d \ge 3$, any bounded solution to $-\Delta u = f$ in $\mathbb{R}^d$ is of the form $u = u[f] + c$ where $c$ is some constant;

- see the notes for $d = 2$.

We said that $u$ has to be a compactly supported distribution for $u = E_0 * (-\Delta u)$ to hold. If $u$ is a distributional solution to $\Delta u = 0$ in $U$, we truncate and approximate $u$ by distributions with compact support.

3.2 Boundary Value Problem

Now we will try to apply the representation formula to a boundary value problem. Suppose we are solving for $u$ in $U$. For $x \in U$:
\[ u(x) = 1_U u(x) = \big( \delta_0 * 1_U u \big)(x) = \big( (-\Delta E_0) * 1_U u \big)(x) = \sum_j \big( (-\partial_j)(\partial_j E_0) * 1_U u \big)(x) \]
\[ = \sum_j \big( (-\partial_j E_0) * \partial_j(1_U u) \big)(x) = \sum_j (-\partial_j E_0) * \big( (\partial_j u)\,1_U \big)(x) + \sum_j (-\partial_j E_0) * \big( u\,\partial_j 1_U \big)(x) \]
\[ = \underbrace{\big( E_0 * (-\Delta u)\,1_U \big)(x) + \sum_j (-E_0) * \big( (\partial_j u)\,\partial_j 1_U \big)(x)}_{B} + \underbrace{\sum_j (-\partial_j E_0) * \big( u\,\partial_j 1_U \big)(x)}_{C}, \]
where we used the fact that $1_U u$ has compact support. Recall that $\partial_j 1_U = -\nu_j\,dS_{\partial U}$. So $C$ becomes
\[ \sum_j (-\partial_j E_0) * \big( u\,\partial_j 1_U \big)(x) = -\int_{\partial U} \nu\cdot D_y E_0(x-y)\,u(y)\,dS(y), \]
and the boundary part of $B$ becomes
\[ \sum_j (-E_0) * \big( (\partial_j u)\,\partial_j 1_U \big)(x) = \int_{\partial U} \nu\cdot Du(y)\,E_0(x-y)\,dS(y). \]

Theorem 3.2.1. If $u$ is a smooth function on $\bar U$, then
\[ u(x) = \int_U E_0(x-y)\,(-\Delta u)(y)\,dy + \int_{\partial U} E_0(x-y)\,\nu\cdot Du(y)\,dS(y) - \int_{\partial U} \nu\cdot D_y E_0(x-y)\,u(y)\,dS(y). \]

Corollary 3.2.1 (Mean Value Property). If $u$ is a harmonic function in $U$ such that $\overline{B(x,r)} \subset U$, then
\[ u(x) = \frac{1}{d\alpha(d)\,r^{d-1}} \int_{\partial B(x,r)} u(y)\,dS(y). \]

Proof. Apply the above theorem with
\[ \tilde E_0(z) = E_0(z) - E_0(r), \]
where $E_0(r)$ is the value of $E_0(y)$ for $|y| = r$; then $\tilde E_0 = 0$ on $\partial B(x,r)$, so the boundary term with $\nu\cdot Du$ drops and we get
\[ u(x) = -\int_{\partial B(x,r)} \underbrace{\nu\cdot D_y \tilde E_0(x-y)}_{\partial_r \tilde E_0(r)}\,u(y)\,dS(y), \]
but we know that $-\partial_r \tilde E_0(r) = \dfrac{1}{d\alpha(d)\,r^{d-1}}$.

Remark 3.2.1. The same proof applies when $\Delta u \ne 0$. This leads to a nice proof of Jensen's formula in complex analysis.

Theorem 3.2.2 (Maximum Principle). Let $U$ be a bounded $C^1$ domain (open and connected). Let $u \in C^2(U) \cap C^0(\bar U)$ with $\Delta u = 0$ in $U$. Then:

1. (weak maximum principle)
\[ \max_{\bar U} u = \max_{\partial U} u \]

2. (strong maximum principle) If $x_0 \in U$ and $u(x_0) = \max_{\bar U} u$, then $u$ is constant.

Proof. It suffices to prove the strong maximum principle. Let $M = \max_{\bar U} u$ and let $B(x_0, r) \subset U$. Then
\[ M = u(x_0) = \frac{1}{|\partial B(x_0,r)|}\int_{\partial B(x_0,r)} u\,dS(y) \le M. \]
This means that $u = M$ on $\partial B(x_0,r)$, because
\[ \fint_{\partial B(x_0,r)} \underbrace{(M - u)}_{\ge 0}\,dS(y) = 0. \]
We can do this for all such balls, which implies $u = M$ in $B(x_0,r)$. Then by connectedness we are done. (Let $Z = \{x \in U : u(x) = M\}$; $Z$ is open by the above computation and closed in $U$ as it is the preimage of a closed set, therefore $Z = U$.)


Corollary 3.2.2 (Uniqueness for the Dirichlet Problem). Let $U$ be a bounded $C^1$ domain. The Dirichlet problem is
\[ \begin{cases} -\Delta u = f & \text{in } U \\ u = g & \text{on } \partial U \end{cases} \]
(note that $\nu\cdot Du$ is not prescribed). If $f$ and $g$ are continuous and $u \in C^2(U)\cap C(\bar U)$, then $u$ is unique.
Proof. If $u$ and $u'$ both solve this, then $u - u'$ is harmonic with boundary value $0$. Then by the maximum principle (applied to both $u - u'$ and $u' - u$) we get $u - u' = 0$.

Remark 3.2.2. This uniqueness theorem is a global phenomenon!

Remark 3.2.3. The uniqueness that comes from the maximum principle implies the existence of a representation formula.
Proof. (sketch) For $x \in U$, consider the map sending $u|_{\partial U} \in C(\partial U)$ to $u(x) \in \mathbb{R}$ for harmonic $u$. This map is bounded by the maximum principle on the linear subspace $V = \{u|_{\partial U} : -\Delta u = 0 \text{ in } U,\ u \in C^2(U)\cap C(\bar U)\}$. Then we use Hahn-Banach.
Note that this is almost useless, as we do not learn what the formula is, just that it exists.
Theorem 3.2.3 (Harnack's inequality). Given $u \in C^2(U)\cap C(\bar U)$ with $\Delta u = 0$ and $u \ge 0$: on any bounded domain $V$ with $\bar V \subset U$,
\[ \sup_{\bar V} u \le C \inf_{\bar V} u, \]
where $C$ depends only on $U$ and $V$.

Proof. We will apply the mean value property for balls:
\[ u(x) = \frac{1}{|B(x,r)|}\int_{B(x,r)} u(y)\,dy; \]
this follows by averaging $\frac{1}{|\partial B(x,r')|}\int_{\partial B(x,r')} u\,dS$ over $0 < r' < r$.

Now take $r = \frac{1}{4}\operatorname{dist}(\bar V, \partial U)$, and take $x, y \in V$ such that $|x - y| < r$. Then we can write
\[ u(x) = \frac{1}{|B(x,r)|}\int_{B(x,r)} u\,dz \le \frac{1}{|B(x,r)|}\int_{B(y,2r)} u\,dz = \frac{2^d}{|B(y,2r)|}\int_{B(y,2r)} u\,dz = 2^d\,u(y), \]
using that $B(x,r) \subset B(y,2r)$ and $u \ge 0$. This tells us
\[ 2^{-d}\,u(y) \le u(x) \le 2^d\,u(y). \]
Then we cover $\bar V$ with balls of radius $r$; if we connect two points through $N$ balls, we get
\[ 2^{-dN}\,u(y) \le u(x) \le 2^{dN}\,u(y). \]
Since $\bar V$ is compact, $N$ is bounded and we are done.


Lecture 16 (10/24)

Recall our discussion of the Laplace equation. We got uniqueness of a solution to the Dirichlet problem for a bounded domain $U$:
\[ \begin{cases} -\Delta u = f & \text{in } U \\ u = g & \text{on } \partial U \end{cases} \]
If $f, g$ are continuous and $u \in C^2(U)\cap C(\bar U)$, then we have uniqueness.

3.3 Green's Function

Definition 3.3.1 (Green's function). Given a domain $U$ and some $y \in U$, we say that $G(\cdot, y) \in \mathcal{D}'(U)$ is a Green's function for $U$ at $y$ if
\[ \begin{cases} -\Delta_x G(\cdot, y) = \delta_0(x-y) & \text{in } U \\ G(x,y) = 0 & x \in \partial U \end{cases} \]

Remark 3.3.1. This definition has the roles of the arguments swapped relative to Evans' definition.

Note that $G(x,y) - E_0(x-y)$ is a harmonic function. So one way to find $G(x,y)$ is to take $G(x,y) = E_0(x-y) + h_y(x)$ such that $h_y$ solves
\[ \begin{cases} \Delta h_y(x) = 0 & \text{in } U \\ h_y(x) = -E_0(x-y) & \text{for } x \in \partial U \end{cases} \]
So existence theory for the Dirichlet problem implies the existence of a Green's function.

Now, if you have a Green's function, you have existence of a solution $u[f]$ to
\[ \begin{cases} -\Delta u[f] = f & \text{in } U \\ u[f] = 0 & \text{on } \partial U \end{cases} \]
where we set
\[ u[f](x) = \int f(y)\,G(x,y)\,dy. \]
This has to be justified, but morally speaking, one can then also solve
\[ \begin{cases} \Delta u = 0 & \text{in } U \\ u = g & \text{on } \partial U \end{cases} \]
by working with $v = u - \tilde g$. We find a $C^\infty$ extension $\tilde g \in C^\infty(\bar U)$ such that $\tilde g|_{\partial U} = g$. Then
\[ \begin{cases} \Delta v = \Delta u - \Delta\tilde g = -\Delta\tilde g & \text{in } U \\ v = 0 & \text{on } \partial U \end{cases} \]
So if we solve for $v$ and put $u = v + \tilde g$, then $u$ solves the original problem.

So "homogeneous equation with nontrivial boundary values" can be solved by solving "inhomogeneous equation with trivial boundary values".

It turns out that existence for the Dirichlet problem is equivalent to the existence of a Green's function.

Remark 3.3.2. It turns out that $G$ is a $C^\infty$ function on $U \times U \setminus \{y = x\}$.

Theorem 3.3.1 (Uniqueness and symmetry of Green's functions). Let $U$ be a bounded $C^1$ domain and suppose there exists a Green's function $G(x,y)$. Then (for $x \ne y$, $x, y \in U$):

1. If $G'(x,y)$ is also a Green's function, then $G(x,y) = G'(x,y)$.

2. $G(x,y) = G(y,x)$.

Proof.
\begin{align*}
G'(x,y) &= \int \delta_0(x-z)\,G'(z,y)\,dz = \int 1_U(z)\,\delta_0(x-z)\,G'(z,y)\,dz = \int \big({-}\Delta_z G(z,x)\big)\,G'(z,y)\,1_U(z)\,dz \\
&= \sum_{j=1}^d \int \partial_j G(z,x)\,\partial_j G'(z,y)\,1_U(z)\,dz + \sum_{j=1}^d \int \partial_j G(z,x)\,G'(z,y)\,\partial_j 1_U(z)\,dz \\
&= \sum_{j=1}^d \int G(z,x)\,\big({-}\partial_j^2 G'(z,y)\big)\,1_U(z)\,dz - \sum_{j=1}^d \int G(z,x)\,\partial_j G'(z,y)\,\partial_j 1_U(z)\,dz \\
&\qquad + \sum_{j=1}^d \int \partial_j G(z,x)\,G'(z,y)\,\partial_j 1_U(z)\,dz \\
&= \underbrace{\int G(z,x)\,\delta_0(z-y)\,dz}_{G(y,x)} + 0 + 0,
\end{align*}
where the two boundary terms vanish because $G(\cdot, x)$ and $G'(\cdot, y)$ vanish on $\partial U$.

Remark 3.3.3. We required $DG$ to be continuous up to $\partial U$.

So now if we take $G' = G$, we see the symmetry property $G(x,y) = G(y,x)$ for any Green's function. Then we use this to see that $G'(x,y) = G(y,x) = G(x,y)$, giving uniqueness.

Remark 3.3.4. $G(x, \cdot)$ is a solution to the adjoint problem
\[ \begin{cases} -\Delta G(x,\cdot) = \delta_x \\ G(x,\cdot) = 0 \end{cases} \]

Revised definition of Green's function:

Definition 3.3.2. A Green's function is a solution to
\[ \begin{cases} -\Delta G(\cdot, y) = \delta_y & \text{in } U \\ G(\cdot, y) = 0 & \text{on } \partial U \end{cases} \]
with $G(\cdot, y) \in C^1(\bar U \setminus \{y\})$.

Now let's turn to the construction of Green's functions on some domains. Here we will use a technique called the method of image charges.

The idea is that the fundamental solution $E_0(x-y)$ models the electric potential of a unit point charge situated at $y$. To ensure that the potential is zero on the boundary, we simply pretend there is an opposite charge on the other side of the boundary. For a half-space we would get $G(x,y) = E_0(x-y) - E_0(x-\bar y)$, where $\bar y$ is $y$ reflected across the boundary.

Example 3.3.1. Let $U = B(0,1)$.

If we look for the locus $|x - y| = |x - \bar y|$, we only get hyperplanes, so a plain reflection cannot work for the ball. Recall that for $d \ge 3$ we have
\[ E_0(x) = \frac{1}{d(d-2)\alpha(d)}\,\frac{1}{|x|^{d-2}}, \]
so we get that
\[ E_0(\lambda x) = \frac{1}{\lambda^{d-2}}\,E_0(x) \quad \text{for } \lambda > 0. \]
So we can instead consider, for $R > 1$, the hypersurface $\{x \in \mathbb{R}^d : |x - y| = R\,|x - \bar y|\}$; in the plane, this is a circle.

In this case, for $y \in U$:
\[ G(x,y) = E_0(x-y) - E_0\big(|y|(x - \bar y)\big), \quad \text{where } \bar y = \frac{y}{|y|^2}. \]

Proof (that this is the Green's function). Let $y \in B(0,1)$. It suffices to check that $|x - y| = |y|\,|x - \bar y|$ for $x \in \partial B(0,1)$, i.e. that the identity holds if and only if $|x|^2 = 1$. But this is just
\[ |x|^2 - 2x\cdot y + |y|^2 \stackrel{?}{=} |y|^2\big( |x|^2 - 2x\cdot\bar y + |\bar y|^2 \big), \]
and the right side is
\[ |y|^2|x|^2 - 2|y|^2\,x\cdot\frac{y}{|y|^2} + |y|^2\,\frac{|y|^2}{|y|^4} = |y|^2|x|^2 - 2x\cdot y + 1, \]
which agrees with the left side exactly when $|x|^2 = 1$ (given $|y| < 1$).

In summary, we have the following Green's functions:

1. For $\mathbb{R}^d_+ = \{x \in \mathbb{R}^d : x^d > 0\}$: if $y \in \mathbb{R}^d_+$, let $\bar y = (y^1, \ldots, y^{d-1}, -y^d)$. Then we set
\[ G(x,y) = E_0(x-y) - E_0(x - \bar y). \]

2. For $B(0,1)$: for $y \in B(0,1)$, let $\bar y = \dfrac{y}{|y|^2}$; then
\[ G(x,y) = E_0(x-y) - E_0\big(|y|(x - \bar y)\big). \]
Theorem 3.3.2 (Representation formula for the Dirichlet problem). Let $u \in C^2(U)\cap C(\bar U)$. Then
\[ u(x) = \int_U G(x,y)\,(-\Delta u)(y)\,dy - \int_{\partial U} \nu(y)\cdot D_y G(x,y)\,u(y)\,dS(y). \]

Proof.
\[ u(x) = \int \delta_0(x-y)\,u(y)\,1_U(y)\,dy = \int \big({-}\Delta_y G(x,y)\big)\,u(y)\,1_U(y)\,dy. \]
First assume $u \in C^\infty(\bar U)$. Integrating by parts twice, we get $\int G(x,y)(-\Delta u)(y)\,1_U(y)\,dy$ plus two boundary terms involving $\partial_j 1_U$; the one containing $G(x,y)\,\nu\cdot Du(y)$ vanishes since $G(x,y) = 0$ for $y \in \partial U$, and the remaining one is $-\int_{\partial U} \nu\cdot D_y G(x,y)\,u(y)\,dS(y)$. For general $u$, approximate $u$ in $C^2(U)\cap C(\bar U)$.
Theorem 3.3.3 (Poisson integral formula). Given $g \in C(\partial U)$ (for today, $U$ is the half-space or the unit ball), if we define
\[ u(x) = -\int_{\partial U} \nu(y)\cdot D_y G(x,y)\,g(y)\,dS(y), \]
then $u \in C^2(U)\cap C(\bar U)$, $u = g$ on $\partial U$, and $\Delta u = 0$.


Lecture 17 (10/29)

Recall that we were discussing the Green’s function for the Laplace equation. It solved:
¢̈
¨∆Gˆ , y  δˆ  y  in U
¦
¨Gˆ , y  0 on ∂U
¤̈
For a domain U ` Rd with y > U . We found that G is unique and that Gˆx, y  Gˆy, x. So
we also have:
¢̈
¨∆Gˆx,  δ ˆx   in U
¦
¨Gˆx,  0 in ∂U
¤̈
which is the definition in Evan’s.


Theorem 3.3.4 (Poisson Integral Formula). Given $U$ a $C^1$ domain and $u \in C^\infty(\bar U)$,
\[ u(x) = \int_U G(x,y)\,(-\Delta u)(y)\,dy + \int_{\partial U} \underbrace{\big({-}\nu(y)\cdot D_y G(x,y)\big)}_{\text{Poisson integral kernel}}\,u(y)\,dS(y). \]

Remark 3.3.5. Note that this representation formula does not have a term with the normal derivative of $u$.

Remark 3.3.6. The case $-\Delta u = 0$ is what is usually called the Poisson integral formula.

Remark 3.3.7. Suppose we want to solve
\[ \begin{cases} -\Delta u = g & \text{in } U \\ u = h & \text{on } \partial U \end{cases} \tag{11} \]
If a solution exists, then
\[ u(x) = \int_U G(x,y)\,(-\Delta u)(y)\,dy - \int_{\partial U} \nu\cdot D_y G(x,y)\,u\,dS. \]
In fact, we can define
\[ \tilde u(x) = \int_U G(x,y)\,g(y)\,dy - \int_{\partial U} \nu(y)\cdot D_y G(x,y)\,h(y)\,dS(y); \]
it turns out that $\tilde u(x)$ is indeed the solution to the Dirichlet problem (11), but we will not justify this here.

It isn't hard to show that $-\Delta\tilde u = g$; it is difficult to show that $\tilde u(x) \to h(x_0)$ as $x \to x_0 \in \partial U$.

We had examples of Green's functions obtained by the method of image charges.

Example 3.3.2. When $U = \mathbb{R}^d_+ = \{x^d > 0\}$, we had
\[ G(x,y) = E_0(x-y) - E_0(x - \bar y), \quad \bar y = (y^1, \ldots, y^{d-1}, -y^d). \]

Example 3.3.3. When $U = B(0,1)$, we used the fact that if $y \in B(0,1)$ and $\bar y = y/|y|^2$, then
\[ \partial B(0,1) = \big\{ x \in \mathbb{R}^d : |x - y| = |y|\,|x - \bar y| \big\} \]
for any $y \in B(0,1)$. This suggested that we take
\[ G(x,y) = E_0(x-y) - E_0\big(|y|(x - \bar y)\big); \]
if $y = 0$, the image term degenerates to the constant $E_0(1)$, and we set $G(x,0) = E_0(x) - E_0(1)$ (which is just $E_0(x)$ in $d = 2$, where $E_0(1) = 0$).


Given an integral operator
\[ Tf(x) = \int K(x,y)\,f(y)\,dy, \]
$K$ is called the kernel of the integral operator. By the Schwartz kernel theorem, any continuous linear $T : C_0^\infty(X) \to \mathcal{D}'(Y)$ is of the form $Tf = \langle K(x, \cdot), f \rangle$ for a distributional kernel $K$.

Now we would like to compute the Poisson kernel $-\nu(y)\cdot D_y G(x,y)$ for the two examples above.

Example 3.3.4. Swap $x$ and $y$, and set
\[ G(x,y) = E_0(y-x) - E_0(y - \bar x). \]
On $\partial U = \{y^d = 0\}$, the outward normal derivative $\nu(y)\cdot D_y$ is just $-\partial_{y^d}$. We have
\[ \partial_{y^d} E_0(y-x) = \big(\partial_{y^d}|y-x|\big)\,E_0'(|y-x|) = -\frac{1}{d\alpha(d)}\,\frac{y^d - x^d}{|y-x|^{d}}, \]
where we used
\[ E_0'(r) = -\frac{1}{d\alpha(d)}\,\frac{1}{r^{d-1}}. \]
If we restrict to $y = (y', 0)$, we get
\[ \frac{1}{d\alpha(d)}\,\frac{x^d}{|y-x|^d}. \]
Then we compute
\[ \partial_{y^d} E_0(y - \bar x) = -\frac{1}{d\alpha(d)}\,\frac{y^d - \bar x^d}{|y - \bar x|^d}, \]
and restrict to $y = (y', 0)$ (where $|y - \bar x| = |y - x|$ and $\bar x^d = -x^d$) to get $-\dfrac{1}{d\alpha(d)}\,\dfrac{x^d}{|y-x|^d}$. So our Poisson kernel becomes
\[ -\nu(y)\cdot D_y G(x,y) = \frac{2x^d}{d\alpha(d)}\,\frac{1}{|y-x|^d}, \]
and the Poisson integral formula for harmonic $u$ in $\mathbb{R}^d_+$ with boundary data $h$ is
\[ u(x) = \int_{\partial\mathbb{R}^d_+} \frac{2x^d}{d\alpha(d)\,|y-x|^d}\,h(y)\,dy. \]

Remark 3.3.8. $u(x) \to h(x_0)$ as $x \to x_0$ if $h \in C^\infty(\partial U)$.
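In $d = 2$ the kernel above reads $\frac{1}{\pi}\frac{x^2}{(x^1-y^1)^2 + (x^2)^2}$, and a numerical sketch (not from the notes) confirms two expected features: the kernel integrates to $1$ over the boundary line, and it reproduces affine boundary data exactly.

```python
import numpy as np

# Half-plane Poisson kernel in d = 2: K(x, y1) = x2 / (pi ((x1 - y1)^2 + x2^2)).
# It should integrate to 1 in y1, and the harmonic extension of the boundary
# data h(y1) = y1, evaluated at x = (x1, x2), should be x1.
x1, x2 = 0.4, 0.7
y1 = np.linspace(-1000.0, 1000.0, 2000001)
dy = y1[1] - y1[0]
K = x2 / (np.pi * ((x1 - y1)**2 + x2**2))
total = np.sum(K) * dy
u_of_linear = np.sum(K * y1) * dy   # extension of h(y1) = y1, expect ~ x1
```

The small residual errors come from truncating the line to $[-1000, 1000]$.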


Example 3.3.5. For the ball, we have
\[ G(x,y) = E_0(y-x) - E_0\big(|x|(y - \bar x)\big), \]
and again we will use that
\[ E_0'(r) = -\frac{1}{d\alpha(d)}\,\frac{1}{r^{d-1}}. \]
On $\partial B(0,1)$ we have
\[ \nu(y)\cdot D_y = \sum_\alpha y^\alpha \partial_{y^\alpha}. \]
Then
\[ \sum_\alpha y^\alpha \partial_{y^\alpha} E_0(y-x) = \Big( \sum_\alpha y^\alpha \partial_{y^\alpha}|y-x| \Big)\,E_0'(|y-x|) = -\frac{1}{d\alpha(d)}\,\frac{|y|^2 - y\cdot x}{|y-x|^d}, \]
and
\[ -\sum_\alpha y^\alpha \partial_{y^\alpha} E_0\big(|x|(y-\bar x)\big) = \frac{1}{d\alpha(d)}\,\frac{1}{|x|^{d-2}}\,\frac{|y|^2 - y\cdot\bar x}{|y - \bar x|^d}. \]
If we restrict both to $\partial B(0,1)$ (where $|y| = 1$, $|y - \bar x| = |y-x|/|x|$, and $y\cdot\bar x = y\cdot x/|x|^2$), the first term becomes
\[ -\frac{1}{d\alpha(d)}\,\frac{1 - y\cdot x}{|y-x|^d} \]
and the second becomes $\dfrac{1}{d\alpha(d)}\,\dfrac{|x|^2 - y\cdot x}{|y-x|^d}$; after we add them up, we get
\[ \nu(y)\cdot D_y G(x,y) = \frac{|x|^2 - 1}{d\alpha(d)\,|y-x|^d}. \]
Note that if $u$ is harmonic in $B(0,1)$ with boundary data $h$, we get
\[ u(x) = -\int_{\partial B(0,1)} \nu(y)\cdot D_y G(x,y)\,h(y)\,dS(y) = \int_{\partial B(0,1)} \frac{1 - |x|^2}{d\alpha(d)\,|y-x|^d}\,h(y)\,dS(y). \]
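A numerical sketch (not from the notes) checking that this kernel integrates to $1$ over the circle in $d = 2$, consistent with $u \equiv 1$ being the harmonic extension of boundary data $1$:

```python
import numpy as np

# Poisson kernel of the unit disk: K(x, y) = (1 - |x|^2) / (2*pi*|y - x|^2)
# for |y| = 1; its integral over the unit circle should be exactly 1.
x = np.array([0.5, -0.3])
theta = np.linspace(0.0, 2 * np.pi, 100000, endpoint=False)
y = np.stack([np.cos(theta), np.sin(theta)], axis=1)
K = (1 - x @ x) / (2 * np.pi * np.sum((y - x)**2, axis=1))
total = np.mean(K) * 2 * np.pi    # dS = d(theta) on the unit circle
```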


4 Wave Equation
The wave equation is an equation for $\varphi : \mathbb{R}^{1+d} \to \mathbb{R}$, where the coordinates on the domain are $(x^0, x^1, \ldots, x^d)$ with $x^0 = t$, and our operator is
\[ \Box\varphi = \Big( -\frac{1}{c^2}\partial_t^2 + \Delta \Big)\varphi = 0; \]
$\Box$ is called the d'Alembertian operator. This is the prototypical hyperbolic equation. We always take $c = 1$ in math courses. We have three goals:

1. Find an explicit fundamental solution for $\Box$. We will see that $\Box$ has a lot of symmetries.

2. Derive an (explicit) representation formula for
\[ \begin{cases} \Box\varphi = f & \text{in } \mathbb{R}^{1+d}\cap\{t > 0\} \\ \varphi = g & \text{on } \partial\mathbb{R}^{1+d}_+ \\ \partial_t\varphi = h & \text{on } \partial\mathbb{R}^{1+d}_+ \end{cases} \tag{12} \]

3. Prove existence of a solution to (12).

4.1 1 dimension
Let's consider the domain $\mathbb{R}^{1+1}$. Then we have
\[ \Box = -\partial_t^2 + \partial_x^2 = -(\partial_t - \partial_x)(\partial_t + \partial_x). \]
Let's take advantage of this factoring to find the fundamental solution. Make the change of coordinates $u = t - x$, $v = t + x$. Then $du = dt - dx$, $dv = dt + dx$, therefore $dt = \frac{1}{2}(du + dv)$ and $dx = \frac{1}{2}(dv - du)$; this means that
\[ \partial_u = \frac{\partial t}{\partial u}\partial_t + \frac{\partial x}{\partial u}\partial_x = \frac{1}{2}\partial_t - \frac{1}{2}\partial_x, \qquad \partial_v = \frac{1}{2}\partial_t + \frac{1}{2}\partial_x. \]
Then if $E_0$ is our fundamental solution, we have
\[ \Box E_0 = -4\,\partial_u\partial_v E_0(u,v) = 2\,\delta_0(u)\,\delta_0(v), \]
where we had to use the approximation method to compute $\delta$ under a change of variables. So we just need to solve
\[ \partial_u\partial_v E_0 = -\frac{1}{2}\,\delta_0(u)\,\delta_0(v), \]
so that
\[ E_0 = -\frac{1}{2}\big( H(u) + c_1 \big)\big( H(v) + c_2 \big). \]
The forward fundamental solution $E_+$ is defined so that $E_+$ is a fundamental solution and $\operatorname{supp} E_+ \subset \{t \ge 0\}$. This forces $c_1 = c_2 = 0$. Therefore the forward fundamental solution has the form
\[ E_+ = -\frac{1}{2}\,H(u)H(v) = -\frac{1}{2}\,H(t-x)\,H(t+x). \]

Lecture 18 (10/31)

Let's rectify what was messed up last time.

Theorem 4.1.1 (Change of Variables for Distributions). Suppose that $\Phi$ is a diffeomorphism (smooth change of coordinates) from $X_1$ to $X_2$ (open subsets of $\mathbb{R}^d$), and suppose that $u \in \mathcal{D}'(X_2)$. Then we can define $u \circ \Phi$ by setting, for all $\varphi \in C_0^\infty(X_1)$:
\[ \langle u\circ\Phi, \varphi \rangle = \Big\langle u,\ \frac{1}{|\det D\Phi|}\,\varphi\circ\Phi^{-1} \Big\rangle. \]

Remark 4.1.1. $u \mapsto u\circ\Phi$ is continuous and linear in $u$.

Proof. First assume that $u \in C_0^\infty(X_2)$; then with $(y^1, \ldots, y^d) = \Phi(x^1, \ldots, x^d)$ we have
\[ \int u(\Phi(x))\,\varphi(x)\,dx = \int u(y)\,\varphi(\Phi^{-1}(y))\,\Big| \frac{\partial(x^1,\ldots,x^d)}{\partial(y^1,\ldots,y^d)} \Big|\,dy = \Big\langle u,\ \frac{1}{|\det D\Phi(\Phi^{-1}(y))|}\,\varphi(\Phi^{-1}(y)) \Big\rangle. \]
Now since $\Phi$ is a diffeomorphism, the input of $u$ is still a test function.

Corollary 4.1.1. $\displaystyle \delta_0 \circ \Phi = \frac{1}{|\det D\Phi(\Phi^{-1}(0))|}\,\delta_{\Phi^{-1}(0)}$.

Let's now return to the wave equation:
\[ \Box\varphi = (-\partial_t^2 + \Delta)\varphi = 0; \]
this is the sign choice that people usually use in general relativity. For $\mathbb{R}^{1+1}$, we got
\[ E_0 = -\frac{1}{2}\big( H(t-x) + c_1 \big)\big( H(t+x) + c_2 \big). \]
Then we introduced the forward fundamental solution, which requires $\operatorname{supp}E_+ \subset \{t \ge 0\}$. This is the relevant choice for solving the wave equation because we would like to know what happens in the future: we require $\Box E_+ = \delta_0$ and $E_+ = 0$ before seeing $\delta_0$. Then we get $c_1 = c_2 = 0$, and the unique forward fundamental solution
\[ E_+ = -\frac{1}{2}\,H(t-x)\,H(t+x). \]

Our goal today is to derive a representation formula using $E_+$. In $1+1$ dimensions we get, for all $\varphi \in C^\infty(\mathbb{R}^{1+1})$ and all $(t,x) \in \mathbb{R}^{1+1}$ with $t \ge 0$, d'Alembert's formula:
\[ \varphi(t,x) = -\frac{1}{2}\int_0^t\!\int_{x-(t-s)}^{x+(t-s)} \Box\varphi(s,y)\,dy\,ds + \frac{1}{2}\big( \varphi(0,x-t) + \varphi(0,x+t) \big) + \frac{1}{2}\int_{x-t}^{x+t} \partial_t\varphi(0,y)\,dy. \]
Note that there is a sign change relative to Evans due to the choice of sign of $\Box$.
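A numerical sketch (not from the notes): with $f = 0$, $h = 0$, $g(x) = \sin x$, d'Alembert's formula gives $\varphi(t,x) = \frac{1}{2}\big(\sin(x+t) + \sin(x-t)\big) = \sin x\,\cos t$, and finite differences confirm that this solves the wave equation with the right data.

```python
import numpy as np

# d'Alembert solution for data g(x) = sin(x), h = 0, f = 0:
# phi(t, x) = (g(x + t) + g(x - t)) / 2.  Check the PDE by finite
# differences and compare with the closed form sin(x) cos(t).
g = np.sin
phi = lambda t, x: 0.5 * (g(x + t) + g(x - t))
t, x, eps = 0.8, 1.1, 1e-4
phi_tt = (phi(t + eps, x) - 2 * phi(t, x) + phi(t - eps, x)) / eps**2
phi_xx = (phi(t, x + eps) - 2 * phi(t, x) + phi(t, x - eps)) / eps**2
wave_residual = abs(phi_tt - phi_xx)          # should (nearly) vanish
exact_err = abs(phi(t, x) - np.sin(x) * np.cos(t))
```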

Let's turn to the derivation of the representation formula in any dimension (given that we know the forward fundamental solution). Then we will go towards a way of deriving fundamental solutions for all dimensions.

Let $E_+$ be a forward fundamental solution to $\Box$ in $\mathbb{R}^{1+d}$. In other words,
\[ \begin{cases} \Box E_+ = \delta_0 \\ \operatorname{supp}E_+ \subset \{t \ge 0\} \end{cases} \]
Assume in addition that for every bounded interval $I$,
\[ \operatorname{supp}E_+ \cap \{t \in I\} \quad \text{is compact.} \tag{13} \]
It turns out (13) is required for the following.

Lemma 4.1.1. Let $u \in \mathcal{D}'(\mathbb{R}^{1+d})$ with $\operatorname{supp}u \subset \{t \ge -L\}$. Then $u * E_+$ is well-defined.

Proof (sketch). Suppose first that $u$ is a $C^\infty$ function and $E_+$ is a function. Let's make sense of $u * E_+(t,x)$; this is just
\[ u * E_+(t,x) = \iint \underbrace{u(s,y)\,E_+(t-s, x-y)}_{\text{compactly supported in } (s,y)}\,ds\,dy, \]
since the first factor is supported in $\{s \ge -L\}$ and the second, by (13), in $\{s \le t\}$ with $x - y$ in a compact set.

Proof (real). Given an interval $I = (a,b)$ with $a, b \in \mathbb{R}\cup\{\pm\infty\}$, let $\chi_I$ be a smooth function that is $1$ on $I$ and $0$ on $\{s \le a-1\}$ and $\{s \ge b+1\}$. We define $u * E_+$ through $\chi_I(u * E_+)$ for all bounded $I$: for $\varphi \in C_0^\infty(\mathbb{R}^{1+d})$,
\[ \langle \chi_I(u * E_+), \varphi \rangle = \langle u * E_+, \chi_I\varphi \rangle \stackrel{?}{=} \big\langle u,\ \underbrace{\check E_+ * (\chi_I\varphi)}_{\text{need: pairs with } u} \big\rangle. \]
Recall that for functions
\[ \check f * g(y) = \int f(x-y)\,g(x)\,dx, \qquad \operatorname{supp}(\check f * g) \subset -\operatorname{supp}f + \operatorname{supp}g. \]
Now if $\operatorname{supp}(\chi_I\varphi) \subset \{t \in \tilde I\}$ for a bounded interval $\tilde I$, then $\operatorname{supp}\big(\check E_+ * (\chi_I\varphi)\big) \subset \{t \le \max\tilde I\}$, which meets $\operatorname{supp}u \subset \{t \ge -L\}$ only in a bounded time slab; splitting $E_+ = \chi_{[-M,\infty)}E_+ + (1 - \chi_{[-M,\infty)})E_+$ and using (13) on each piece gives the needed compactness. (The note-taker got a little lost here; this is one of those things that is easier to just work out by yourself.)

Theorem 4.1.2 (Uniqueness of forward fundamental solutions). Let $E_+$ be a forward fundamental solution with the property (13), and let $E'$ be any forward fundamental solution:
\[ \begin{cases} \Box E' = \delta_0 \\ \operatorname{supp}E' \subset \{t \ge 0\} \end{cases} \]
Then $E' = E_+$.

Proof. We have
\[ E' = \delta_0 * E' = (\Box E_+) * E' = \Box(E_+ * E') = E_+ * (\Box E') = E_+ * \delta_0 = E_+. \]

Theorem 4.1.3. Let $\varphi \in C^\infty(\mathbb{R}^{1+d})$ and $(t,x) \in \mathbb{R}^{1+d}$ with $t \ge 0$. Then
\[ \varphi(t,x) = \Big( -E_+ * (\partial_t\varphi\,\delta_{t=0}) - \partial_t\big( E_+ * (\varphi\,\delta_{t=0}) \big) + E_+ * (\Box\varphi\,1_{\{t\ge0\}}) \Big)(t,x). \]

Proof. We have
\begin{align*}
\varphi\,1_{\{t\ge0\}} &= \delta_0 * \big( \varphi\,1_{\{t\ge0\}} \big) = \Box E_+ * \big( \varphi\,1_{\{t\ge0\}} \big) = -\partial_t^2 E_+ * \big( \varphi\,1_{\{t\ge0\}} \big) + \Delta E_+ * \big( \varphi\,1_{\{t\ge0\}} \big) \\
&= -\partial_t\Big( E_+ * \big( \partial_t\varphi\,1_{\{t\ge0\}} + \varphi\,\delta_{t=0} \big) \Big) + E_+ * \big( \Delta\varphi\,1_{\{t\ge0\}} \big) \\
&= -E_+ * \big( \partial_t^2\varphi\,1_{\{t\ge0\}} \big) - E_+ * \big( \partial_t\varphi\,\delta_{t=0} \big) - \partial_t\Big( E_+ * \big( \varphi\,\delta_{t=0} \big) \Big) + E_+ * \big( \Delta\varphi\,1_{\{t\ge0\}} \big),
\end{align*}
and the first and last terms combine into $E_+ * \big( \Box\varphi\,1_{\{t\ge0\}} \big)$.

Looking at the representation formula and pretending everything is a function, we have
\[ E_+ * \big( \Box\varphi\,1_{\{t\ge0\}} \big)(t,x) = \int_0^\infty\!\int \Box\varphi(s,y)\,E_+(t-s, x-y)\,dy\,ds. \]
Then if you believe that $\operatorname{supp}E_+ \subset \{|x| \le t\}$, you are integrating over the backwards light cone with vertex $(t,x)$.

Let's specialize to $d = 1$, where $E_+ = -\frac{1}{2}H(t-x)H(t+x)$, and look at the representation formula. We have
\[ E_+ * \psi(t,x) = -\frac{1}{2}\iint \psi(s,y)\,H\big(t-s-(x-y)\big)\,H\big(t-s+(x-y)\big)\,ds\,dy = -\frac{1}{2}\iint_R \psi(s,y)\,ds\,dy, \]
where we are integrating over $R = \{(s,y) : t-s \ge x-y,\ t-s \ge -(x-y),\ s \ge 0\}$ (for $\psi$ supported in $\{s \ge 0\}$). This is telling us that $x - (t-s) \le y \le x + (t-s)$. So we get
\[ E_+ * \psi(t,x) = -\frac{1}{2}\int_0^t\!\int_{x-(t-s)}^{x+(t-s)} \psi(s,y)\,dy\,ds. \]
Then if we apply this expression to the last term, we get
\[ E_+ * \big( \Box\varphi\,1_{\{t\ge0\}} \big) = -\frac{1}{2}\int_0^t\!\int_{x-(t-s)}^{x+(t-s)} \Box\varphi(s,y)\,dy\,ds, \]
and we get that
\[ -E_+ * \big( \partial_t\varphi\,\delta_{t=0} \big) = \frac{1}{2}\int_{x-t}^{x+t} \partial_t\varphi(0,y)\,dy, \]
and
\[ -\partial_t\Big( E_+ * \big( \varphi\,\delta_{t=0} \big) \Big) = \partial_t\Big( \frac{1}{2}\int_{x-t}^{x+t} \varphi(0,y)\,dy \Big) = \frac{1}{2}\big( \varphi(0,x+t) + \varphi(0,x-t) \big), \]
recovering d'Alembert's formula.
Lecture 19 (11/5)

We were discussing the wave equation $\Box\varphi = 0$. In $\mathbb{R}^{1+1}$ we were able to compute the forward fundamental solution $E_+ = -\frac{1}{2}H(t-x)H(t+x)$. In $\mathbb{R}^{1+d}$, given a forward fundamental solution $E_+$ with the condition that for all bounded intervals $I$, $\operatorname{supp}E_+ \cap \{(t,x) : t \in I\}$ is compact, we were able to construct the representation formula for the Cauchy problem
\[ \begin{cases} \Box\varphi = f & \text{in } \mathbb{R}^{1+d}_+ \\ (\varphi, \partial_t\varphi) = (g, h) & \text{on } \{0\}\times\mathbb{R}^d \end{cases} \tag{14} \]
namely
\[ \varphi(t,x) = -E_+ * \big( \partial_t\varphi\,\delta_{t=0} \big) - \partial_t\Big( E_+ * \big( \varphi\,\delta_{t=0} \big) \Big) + E_+ * \big( \Box\varphi\,1_{\mathbb{R}^{1+d}_+} \big). \]
If everything is a function, then we have
\[ E_+ * f = \int E_+(t-s, x-y)\,f(s,y)\,ds\,dy, \]
so we are just integrating over an inverted cone in $\mathbb{R}^{1+d}$. This is known as "finite speed of propagation".

Remark 4.1.2. In the case of the wave equation, we can use the representation formula to write down the solution to the initial value problem. For $(t,x) \in \mathbb{R}^{1+d}$, define
\[ \varphi(t,x) = -E_+ * \big( h\,\delta_{t=0} \big) - \partial_t\Big( E_+ * \big( g\,\delta_{t=0} \big) \Big) + E_+ * \big( f\,1_{\mathbb{R}^{1+d}_+} \big). \]
Then the claim is that for nice enough $f, g, h$ (for instance, all smooth) this $\varphi(t,x)$ does solve the initial value problem (14).

The goal today is to find the forward fundamental solution in $\mathbb{R}^{1+d}$ for $d \ge 1$. To do this, we will exploit the symmetries of $\Box$ to narrow down the possible candidates for $E_+$.

Symmetries of $\Box$: we would like to find maps $L : \mathbb{R}^{1+d} \to \mathbb{R}^{1+d}$ such that
\[ \Box(\varphi\circ L) = (\Box\varphi)\circ L. \]
One example would be translations: if $x_0 \in \mathbb{R}^{1+d}$, then
\[ \Box\big( \varphi((t,x) + x_0) \big) = (\Box\varphi)\big( (t,x) + x_0 \big). \]
This is not helpful for $E_+$, because $\Box E_+ = \delta_0$ is not invariant under translations.

We are instead interested in symmetries that fix $(0,0)$. These are the Lorentz transformations:

1. (rotations) $R \in SO(d)$ (i.e. $RR^t = R^tR = I$ and $\det R = 1$): $\Box\big(\varphi(t, Rx)\big) = (\Box\varphi)(t, Rx)$.

2. (reflections) $L : (t, x^1, \ldots, x^d) \mapsto (-t, x^1, \ldots, x^d)$, or the reflection of a spatial coordinate. Since all derivatives in $\Box$ are second order, $\Box$ is invariant.

3. (Lorentz boosts) The Lorentz boost with speed $v$ in the direction $x^1$ is
\[ (t, x^1) \mapsto \Big( \frac{t - vx^1}{\sqrt{1-v^2}},\ \frac{x^1 - vt}{\sqrt{1-v^2}} \Big). \]

Any other Lorentz transformation is given by a composition of the above. Another way to characterize these: $L$ is a Lorentz transformation if and only if it preserves
\[ -t^2 + (x^1)^2 + \cdots + (x^d)^2. \]

Another useful property of $\Box$ is its behavior under scaling. For all $\lambda > 0$, given the scaling transformation $(t,x) \mapsto (t/\lambda, x/\lambda)$, we have
\[ \Box\big( \varphi(t/\lambda, x/\lambda) \big) = \frac{1}{\lambda^2}\,(\Box\varphi)(t/\lambda, x/\lambda). \]

Definition 4.1.1 (Homogeneity). We say a function $\varphi : \mathbb{R}^n\setminus\{0\} \to \mathbb{R}$ is homogeneous of degree $a$ if $\varphi(x/\lambda) = \dfrac{1}{\lambda^a}\varphi(x)$.
– 60 –
4. Wave Equation
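The invariance of $-t^2 + |x|^2$ under a boost is a direct algebraic computation; a quick numerical sketch (not in the notes, with arbitrary sample values):

```python
import numpy as np

def boost(v, t, x):
    """Lorentz boost with speed |v| < 1 in the x^1 direction (units with c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    tp = gamma * (t - v * x[0])
    xp = x.copy()
    xp[0] = gamma * (x[0] - v * t)
    return tp, xp

def s2(t, x):
    """The Lorentz-invariant quadratic form s^2(t, x) = t^2 - |x|^2."""
    return t**2 - np.dot(x, x)

t, x = 1.7, np.array([0.3, -1.2, 0.5])
tp, xp = boost(0.8, t, x)
assert abs(s2(t, x) - s2(tp, xp)) < 1e-9
```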

Example 4.1.1. If $n = 1$ and $\phi = x^a$ for $x > 0$, then this is homogeneous of degree $a$.

Remark 4.1.3. If $\phi$ is homogeneous of degree $a$, then $\Box\phi$ is homogeneous of degree $a - 2$.

For distributions, consider:

\[ \langle \phi(\cdot/\lambda), \psi \rangle = \int \phi(x/\lambda)\, \psi(x)\, dx = \lambda^d \int \phi(y)\, \psi(\lambda y)\, dy = \big\langle \phi, \lambda^d\, \psi(\lambda\,\cdot) \big\rangle \]

Definition 4.1.2. $\phi \in \mathcal{D}'(\mathbb{R}^n)$ or $\mathcal{D}'(\mathbb{R}^n \setminus \{0\})$ is homogeneous of degree $a$ if for all $\psi \in C_0^\infty(\mathbb{R}^n)$ or $C_0^\infty(\mathbb{R}^n \setminus \{0\})$ we have:

\[ \big\langle \phi, \lambda^n\, \psi(\lambda\,\cdot) \big\rangle = \lambda^{-a}\, \langle \phi, \psi \rangle \]

Example 4.1.2. $\delta_0$ on $\mathbb{R}^n$ is homogeneous of degree $-n$.

So if we are looking for $\Box E_+ = \delta_0$, and $\Box$ lowers the degree by $2$, and $\delta_0$ has degree $-1-d$ (on $\mathbb{R}^{1+d}$), it is reasonable to look for a homogeneous $E_+$ of degree $-d+1$.

Example 4.1.3. Consider $\phi(x) = x^a$ for $x > 0$ and $0$ for $x < 0$. Then $\phi$ is a homogeneous function of degree $a$ on $\mathbb{R} \setminus \{0\}$. When $a > -1$, $\phi$ can be extended as a homogeneous distribution on the whole of $\mathbb{R}$:

\[ \langle \phi, \psi \rangle = \int_0^\infty x^a\, \psi(x)\, dx \]

When $a = -1$, no homogeneous distribution on $\mathbb{R}$ agrees with $\phi$ on $\mathbb{R} \setminus \{0\}$.

Lemma 4.1.2. If $\phi$ is a homogeneous distribution of degree $a > -n$ on $\mathbb{R}^n \setminus \{0\}$, then there exists a unique distribution $\tilde{\phi} \in \mathcal{D}'(\mathbb{R}^n)$ that is homogeneous of degree $a$ and agrees with $\phi$ in $\mathbb{R}^n \setminus \{0\}$.

Now we are looking for $E_+$.

- Motivated by the homogeneity of $\Box$ and $\delta_0$, we may assume that $E_+$ is a homogeneous distribution on $\mathbb{R}^{1+d}$ of degree $-d+1$. By the previous lemma, it is enough to construct $E_+$ in $\mathbb{R}^{1+d} \setminus \{(0,0)\}$.

- Since $s^2(t,x) = t^2 - |x|^2$ is invariant under all Lorentz transformations, we will assume that

\[ E_+ = \chi^{-\frac{d-1}{2}}(t^2 - |x|^2) \]

for a one-variable distribution $\chi^{-\frac{d-1}{2}}$ homogeneous of degree $-\frac{d-1}{2}$ (so that the composition with the degree-$2$ quantity $t^2 - |x|^2$ has degree $-d+1$).

- We wanted the forward fundamental solution, so we rather define $E_+$ as:

\[ E_+ = 1_{t \in [0,\infty)}\, \chi^{-\frac{d-1}{2}}(t^2 - |x|^2) \]

To make sure that this is well-defined, it is good to require that $\chi^{-\frac{d-1}{2}}$ vanishes for $s < 0$.

Claim 4.1.1.

\[ \Box E_+ = 0 \quad \text{in } \mathbb{R}^{1+d} \setminus \{(0,0)\} \]

Theorem 4.1.4. If $\phi \in \mathcal{D}'(\mathbb{R}^n)$ and $\operatorname{supp} \phi \subset \{0\}$, then $\phi = \sum_\alpha c_\alpha D^\alpha \delta_0$ with at most finitely many $c_\alpha \ne 0$.

Using this theorem, we have that $\Box E_+ = \sum_\alpha c_\alpha D^\alpha \delta_0$, and $\Box E_+$ is homogeneous of degree $-d-1$. This implies that $\Box E_+ = c_0\, \delta_0$. So the last thing to check is that $c_0 \ne 0$. One way to do this is to compute (which is possible, but difficult). But this fact is not difficult: you just need the uniqueness statement that if $\Box \phi = 0$ in $\mathbb{R}^{1+d}$ and $\operatorname{supp} \phi \subset \mathbb{R}^{1+d}_+$, then $\phi = 0$ (this can be proved by the Fourier transform or integration by parts).

Remark 4.1.4. $\chi^{-\frac{d-1}{2}}$ is a distribution, so its composition with $t^2 - |x|^2$ should be understood by approximations.

Let's now prove the claim.

Proof. Letting $\chi = \chi^{-\frac{d-1}{2}}$:

\[ \Box\big(1_{t \ge 0}\, \chi(t^2 - |x|^2)\big) = 1_{t \ge 0}\, \Box\big(\chi(t^2 - |x|^2)\big) = 1_{t > 0} \Big( -\partial_t\big(2t\, \chi'(t^2 - |x|^2)\big) - \sum_j \partial_j\big(2 x^j\, \chi'(t^2 - |x|^2)\big) \Big) \]

and then lots of computation. We have to use Euler's identity:

\[ s\, \chi''(s) = \Big( -\frac{d-1}{2} - 1 \Big)\, \chi'(s) \]

This is because $\chi'$ is a homogeneous distribution of degree $-\frac{d-1}{2} - 1$. $\square$
Lecture 20 (11/7)

Recall that we were aiming to derive the fundamental solution for $\Box$ in $\mathbb{R}^{1+d}$. What we were doing was looking at symmetries of $\Box$, that is, finding $L : \mathbb{R}^{1+d} \to \mathbb{R}^{1+d}$ linear such that $s^2(t,x) = t^2 - |x|^2 = s^2(L(t,x))$; at the homogeneity of $\Box$ and $\delta_0$; and at the forward condition $\operatorname{supp} E_+ \subset \mathbb{R}^{1+d}_+$. Then our guess was that $E_+$ is of the form:

\[ \tilde{E}_+ = 1_{\mathbb{R}^{1+d}_+}\, \chi^{-\frac{d-1}{2}}(t^2 - |x|^2) \]

with $\chi^{-\frac{d-1}{2}}$ homogeneous of degree $-\frac{d-1}{2}$ in $\mathcal{D}'(\mathbb{R})$ with $\operatorname{supp} \chi^{-\frac{d-1}{2}} \subset [0, \infty)$.

We computed that $\Box \tilde{E}_+ = 0$ outside the origin, i.e. $\operatorname{supp} \Box \tilde{E}_+ \subset \{(0,0)\}$, and $\Box \tilde{E}_+$ is homogeneous of degree $-d-1$, which tells us that $\Box \tilde{E}_+ = c\, \delta_0$. Then it is not too difficult to show that $c \ne 0$, but refer to the class notes for the computation.

How does $\chi^{-\frac{d-1}{2}}$ look? Let's construct a family of distributions $\chi^a$ for $a \in \mathbb{R}$ such that each $\chi^a$ is a distribution on $\mathbb{R}$, is homogeneous of degree $a$, and has $\operatorname{supp} \chi^a \subset [0, \infty)$.

For $a > -1$, there is an obvious candidate:

\[ \tilde{\chi}^a = H(x)\, x^a \]

To extend this to $a \le -1$, we will differentiate our distribution. Observe that by computation (for $a > -1$) we have (you have to check this):

\[ \frac{d}{dx} \tilde{\chi}^a = a\, H(x)\, x^{a-1} = a\, \tilde{\chi}^{a-1} \]

Then let's just define $\tilde{\chi}^a$ by using:

\[ \tilde{\chi}^a = \frac{1}{a+1}\, \frac{d}{dx} \tilde{\chi}^{a+1} \]

This is good and will give nice distributions $\tilde{\chi}^a$ — except when $a = -1, -2, -3, \ldots$, where the prefactor degenerates. That's why we need to be a little smarter: let

\[ \chi^a = c_a\, H(x)\, x^a \]

with $c_a$ canceling the possible zero. If we set:

\[ \chi^a = \frac{1}{\Gamma(a+1)}\, H(x)\, x^a \]

for $a > -1$, where:

\[ \Gamma(a+1) = \int_0^\infty x^a\, e^{-x}\, dx \]

then we wouldn't have this problem. To see this:

\[ \frac{d}{dx} \chi^a = \frac{a}{\Gamma(a+1)}\, H(x)\, x^{a-1} = \frac{a\, \Gamma(a)}{\Gamma(a+1)}\, \chi^{a-1} \]

Then since $\Gamma(a+1) = a\, \Gamma(a)$, the above is just $\chi^{a-1}$. So for $a \le -1$, we define:

\[ \chi^a = \frac{d}{dx} \chi^{a+1} = \frac{d^k}{dx^k} \chi^{a+k} \]

for $k \in \mathbb{N}$ large enough so that $a + k > -1$.

Example 4.1.4. $\chi^0 = H(x)$. And:

\[ \chi^{-k} = \delta_0^{(k-1)}(x) \]

Also we have:

\[ \chi^{-1/2} = \frac{1}{\sqrt{\pi}}\, H(x)\, x^{-1/2} \]

This uses the computation $\Gamma(1/2) = \sqrt{\pi}$. Therefore we have $\chi^{-1/2-k} = \frac{1}{\sqrt{\pi}}\, \big(H(x)\, x^{-1/2}\big)^{(k)}$.
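The $\Gamma$-normalization can be sanity-checked numerically: the distributional identity $\frac{d}{dx}\chi^{a+1} = \chi^a$ tested against a test function $\psi$ means comparing $-\langle \chi^{a+1}, \psi' \rangle$ with $\langle \chi^a, \psi \rangle$. A quick sketch (not in the notes; it assumes $a > 0$ so both pairings are ordinary integrals, and uses $\psi = \sin^4(\pi x)$ as a stand-in for a test function vanishing to high order at the endpoints):

```python
import numpy as np
from math import gamma

def chi(a, x):
    """chi^a(x) = H(x) x^a / Gamma(a+1), as a pointwise formula (valid for a > -1)."""
    return np.where(x > 0, x**a, 0.0) / gamma(a + 1)

x = np.linspace(0.0, 1.0, 20001)
psi = np.sin(np.pi * x)**4                                  # vanishes to 4th order at 0, 1
dpsi = 4 * np.pi * np.sin(np.pi * x)**3 * np.cos(np.pi * x)  # exact derivative

a = 0.5
# Distributional derivative: -<chi^{a+1}, psi'> should equal <chi^a, psi>
lhs = -np.trapz(chi(a + 1, x) * dpsi, x)
rhs = np.trapz(chi(a, x) * psi, x)
assert abs(lhs - rhs) < 1e-5
```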

Remark 4.1.5. Up to a constant, this is the unique distribution with the three desired properties.

Remark 4.1.6. $\Box \tilde{E}_+ = 2 \pi^{\frac{d-1}{2}}\, \delta_0$. This means that:

\[ E_+ = \frac{1}{2 \pi^{\frac{d-1}{2}}}\, 1_{\mathbb{R}^{1+d}_+}\, \chi^{-\frac{d-1}{2}}(t^2 - |x|^2) \]

Proposition 4.1.1 (Properties of $E_+$).

1. Finite speed of propagation (weak Huygens principle). This follows from the fact that:

\[ \operatorname{supp} E_+ \subset \big\{(t,x) \in \mathbb{R}^{1+d} : 0 \le |x| \le t\big\} \]

With the representation formula that we just derived:

\[ \phi(t,x) = E_+ * (\partial_t \phi\, \delta_{t=0}) + \partial_t\big(E_+ * (\phi\, \delta_{t=0})\big) + E_+ * \Box \phi \]

with $\phi \in C_0^\infty(\mathbb{R}^{1+d})$, all these convolutions involve integration on inverted cones, whose intersection with the slice $\{t = 0\}$ has finite area.

2. Strong Huygens principle when $d$ is odd and $\ge 3$. Note that:

\[ E_+ = c\, 1_{\mathbb{R}^{1+d}_+}\, \chi^{-\frac{d-1}{2}}(t^2 - |x|^2) \]

which, for odd $d \ge 3$, is supported only in $\{(t,x) : t = |x|\}$, the boundary of the light cone (there $\chi^{-\frac{d-1}{2}}$ is a derivative of $\delta_0$).

Remark 4.1.7. When $d = 1$, we get $E_+ = \frac{1}{2}\, 1_{\mathbb{R}^{1+1}_+}\, H(t^2 - |x|^2) = \frac{1}{2} H(t-x)\, H(t+x)$.

Remark 4.1.8. If $d = 2$ then we have:

\[ E_+ = \frac{1}{2\pi}\, 1_{\mathbb{R}^{1+2}_+}\, \frac{1}{(t^2 - |x|^2)^{1/2}} \]

If you plug this into the representation formula, you will get the Poisson formula for the 2-dimensional wave equation.

Remark 4.1.9. For $d = 3$, we get:

\[ E_+ = \frac{1}{2\pi}\, 1_{\mathbb{R}^{1+3}_+}\, \delta_0(t^2 - |x|^2) \]

The easiest way to compute this is by the approximation method, with $\frac{1}{\lambda} \phi(s/\lambda) \to \delta_0$ as $\lambda \to 0$, and we get:

\[ E_+ = \frac{1}{4\pi t}\, dS\big|_{C_0} \]

with $C_0 = \{(t,x) : t = |x|\}$. Plugging this into the representation formula gives you the Kirchhoff formula in $\mathbb{R}^{1+3}$.

Remark 4.1.10. In Evans's book, chapter 2.4, there is an alternative derivation of the representation formula for $\Box$ using a very different method (the method of spherical means).

5 Fourier Transform
Let's motivate why the Fourier transform takes the form that it does. This will be a brief two-lecture crash course on the Fourier transform.

The Fourier transform can be thought of as a change of basis in the space of functions. One way to understand $f(x)$ is as a description of $f$ in the basis that consists of $\delta_0(x - y)$ for all $y \in \mathbb{R}^d$. We can do this because we can write:

\[ f(x) = \int f(y)\, \delta_0(x - y)\, dy \]

This is not good for understanding $\partial_x$ or any $D^\alpha$. It is reasonable to ask for a basis that diagonalizes the linear transformation of differentiation. The Fourier transform selects the basis $\{e^{i \xi \cdot x}\}_{\xi \in \mathbb{R}^d}$.

An important property of $e^{i \xi \cdot x}$ is that:

\[ \partial_j e^{i \xi \cdot x} = i \xi_j\, e^{i \xi \cdot x} \]

The Fourier transform $\mathcal{F}f(\xi)$ is the change of basis formula such that:

\[ f = \int a(\xi)\, e^{i \xi \cdot x}\, d\xi \]

with:

\[ \mathcal{F}f(\xi) = a(\xi) \]

We need to compute the transition matrix $m$ so that:

\[ \delta_0(x - y) = \int m(y, \xi)\, e^{i x \cdot \xi}\, d\xi \]

Note first that:

\[ \int m(y, \xi)\, e^{i x \cdot \xi}\, d\xi = \delta_0(x - y) = \int m(0, \xi)\, e^{i \xi \cdot (x - y)}\, d\xi = \int m(0, \xi)\, e^{-i \xi \cdot y}\, e^{i \xi \cdot x}\, d\xi \]

therefore $m(y, \xi) = m(0, \xi)\, e^{-i \xi \cdot y}$.

Note that:

\[ 0 = x_j\, \delta_0(x) = \int x_j\, m(0, \xi)\, e^{i x \cdot \xi}\, d\xi = -i \int m(0, \xi)\, \partial_{\xi_j} e^{i x \cdot \xi}\, d\xi = i \int \partial_{\xi_j} m(0, \xi)\, e^{i x \cdot \xi}\, d\xi \]

We then conclude that $m(0, \xi) = c$, a constant, and we have already deduced $\delta_0(x - y) = \int c\, e^{-i \xi \cdot y}\, e^{i \xi \cdot x}\, d\xi$. Now we have:

\[ f(x) = \int f(y)\, \delta_0(x - y)\, dy = c \int \Big( \int f(y)\, e^{-i \xi \cdot y}\, dy \Big)\, e^{i \xi \cdot x}\, d\xi \]

Therefore we have:

\[ \mathcal{F}f(\xi) = \int f(y)\, e^{-i \xi \cdot y}\, dy \]

so all we need to do is to determine the constant $c$. In other words, we want to determine $c$ in:

\[ \delta_0(x) = c \int e^{i x \cdot \xi}\, d\xi \]

Note that:

\[ \delta_0(x) = \delta_0(x^1)\, \delta_0(x^2) \cdots \delta_0(x^d) \]

and note that the above integral can be written as:

\[ c \int e^{i x^1 \xi_1}\, d\xi_1 \cdots \int e^{i x^d \xi_d}\, d\xi_d \]

So if we just compute $c_1$ for:

\[ \delta_0(x) = c_1 \int e^{i x \xi}\, d\xi \]

in $\mathbb{R}$, then $c_d = c_1^d$.

Claim 5.0.1. $c_1 = (2\pi)^{-1}$

Proof.
Method 1:

\[ \lim_{\varepsilon \to 0} \int e^{-\varepsilon |\xi|}\, e^{i x \xi}\, d\xi = \lim_{\varepsilon \to 0} \Big( \int_0^\infty e^{-\xi(\varepsilon - i x)}\, d\xi + \int_{-\infty}^0 e^{\xi(\varepsilon + i x)}\, d\xi \Big) = \lim_{\varepsilon \to 0} \Big( \frac{1}{\varepsilon - i x} + \frac{1}{\varepsilon + i x} \Big) \]
\[ = \frac{1}{i} \lim_{\varepsilon \to 0} \Big( \frac{1}{x - i \varepsilon} - \frac{1}{x + i \varepsilon} \Big) = \lim_{\varepsilon \to 0} \frac{2 \varepsilon}{x^2 + \varepsilon^2} = 2\pi\, \delta_0 \qquad \square \]
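The last limit can be sanity-checked numerically: the family $\frac{2\varepsilon}{x^2+\varepsilon^2}$ has total mass $2\pi$ and concentrates at $0$, so pairing it against a test function should approach $2\pi$ times the value at $0$. A rough quadrature sketch (grid parameters chosen ad hoc):

```python
import numpy as np

def pair(eps, psi, L=50.0, n=2_000_001):
    """Pair the kernel 2*eps/(x^2 + eps^2) against a test function psi, by quadrature."""
    x = np.linspace(-L, L, n)
    return np.trapz(2 * eps / (x**2 + eps**2) * psi(x), x)

psi = lambda x: np.exp(-x**2)                 # psi(0) = 1
vals = [pair(eps, psi) for eps in (1e-1, 1e-2, 1e-3)]
# The pairings approach 2*pi*psi(0) = 2*pi as eps -> 0
assert abs(vals[-1] - 2 * np.pi) < 2e-2
assert abs(vals[-1] - 2 * np.pi) < abs(vals[0] - 2 * np.pi)
```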

Lecture 21 (11/12)

Recall our discussion of the Fourier transform. We motivated the Fourier transform as a change of basis from $\{\delta_0(x - y)\}_{y \in \mathbb{R}^d}$ to a new basis $\{e^{i \xi \cdot x}\}_{\xi \in \mathbb{R}^d}$. And we have:

\[ \int f(y)\, \delta_0(x - y)\, dy = f(x) = \int a(\xi)\, e^{i \xi \cdot x}\, d\xi \]

so we would like to relate $f(y)$ with $a(\xi)$. The heart of the matter is to write:

\[ \delta_0(x) = c \int e^{i x \cdot \xi}\, d\xi \]

We had one way to compute $c = (2\pi)^{-d}$. Let's compute this a different way, using an algebraic method.

Let's start with the knowledge:

\[ \delta_0(x) = c \int e^{i \xi x}\, d\xi \]

for $d = 1$. Given any function $f$, we have:

\[ f(0) = \int f(y)\, \delta_0(y)\, dy = c \iint f(y)\, e^{-i \xi y}\, dy\, d\xi \]

Let $\hat{f}(\xi) = \int e^{-i \xi y} f(y)\, dy$. So we have:

\[ f(0) = c \int \hat{f}(\xi)\, d\xi \]

Some properties:

1. $x\, e^{-i \xi x} = i\, \partial_\xi e^{-i x \xi}$

2. $\partial_x e^{-i \xi x} = -i \xi\, e^{-i x \xi}$

So if $(x + \partial_x) f = 0$, then we get that $i(\partial_\xi + \xi) \hat{f} = 0$. The first equation is easy to solve, and we get:

\[ f(x) = f(0)\, e^{-\frac{1}{2} x^2} \]

Let's normalize $f(0) = 1$ to get $f(x) = e^{-\frac{1}{2} x^2}$. Likewise:

\[ \hat{f}(\xi) = \hat{f}(0)\, e^{-\frac{1}{2} \xi^2} \]

but we have:

\[ \hat{f}(0) = \int f(y)\, dy = \int e^{-\frac{1}{2} |y|^2}\, dy = \sqrt{2\pi} \]

Therefore $\hat{f}(\xi) = \sqrt{2\pi}\, e^{-\frac{1}{2} |\xi|^2}$. Therefore:

\[ 1 = f(0) = c \int \hat{f}(\xi)\, d\xi = \sqrt{2\pi}\, c \int e^{-\frac{1}{2} |\xi|^2}\, d\xi = 2\pi\, c \]

therefore $c = (2\pi)^{-1}$.
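The Gaussian computation above is easy to confirm numerically: the transform of $e^{-x^2/2}$ should be $\sqrt{2\pi}\, e^{-\xi^2/2}$. A quick quadrature sketch (not in the notes):

```python
import numpy as np

def fourier(f, xi, L=30.0, n=600_001):
    """F f(xi) = integral of f(y) e^{-i xi y} dy, by trapezoidal quadrature."""
    y = np.linspace(-L, L, n)
    return np.trapz(f(y) * np.exp(-1j * xi * y), y)

f = lambda y: np.exp(-0.5 * y**2)
for xi in (0.0, 0.5, 2.0):
    exact = np.sqrt(2 * np.pi) * np.exp(-0.5 * xi**2)
    assert abs(fourier(f, xi) - exact) < 1e-8
```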

Let's move to a rigorous derivation, beginning with some comments about complex-valued distributions. Let $C_0^\infty(U; \mathbb{C})$ be the set of smooth, compactly supported, complex-valued functions in $U$. We say $u_j \to u$ if and only if $\operatorname{Re} u_j \to \operatorname{Re} u$ and $\operatorname{Im} u_j \to \operatorname{Im} u$.

Now given $u, v \in C_0^\infty(U; \mathbb{C})$, we let $\langle u, v \rangle = \int u \bar{v}\, dx$ be the Hermitian product.

Definition 5.0.1 (Complex-Valued Distribution). A complex-valued distribution $u$ is a continuous conjugate-linear functional on $C_0^\infty(U; \mathbb{C})$, i.e.

\[ u(\phi + \psi) = u(\phi) + u(\psi), \qquad u(c \phi) = \bar{c}\, u(\phi) \]

This generalizes $u(\phi) = \langle u, \phi \rangle$ when $u \in C_0^\infty(U; \mathbb{C})$.

Remark 5.0.1. Equivalently, $u \in \mathcal{D}'(U; \mathbb{C})$ if and only if $u = v + i w$ with $v, w$ (real) distributions.

Definition 5.0.2. Given $f \in C_0^\infty(\mathbb{R}^d; \mathbb{C})$, let:

\[ \mathcal{F}f(\xi) = \hat{f}(\xi) = \int f(y)\, e^{-i \xi \cdot y}\, dy \]

Let us introduce $\mathcal{F}^*$ in the following way: with

\[ \langle a, b \rangle_{(2\pi)^{-d} d\xi} = \int a\, \bar{b}\, (2\pi)^{-d}\, d\xi \]

define:

\[ \langle \mathcal{F}f, a \rangle_{(2\pi)^{-d} d\xi} = \langle f, \mathcal{F}^* a \rangle \]

We can compute this by:

\[ \langle \mathcal{F}f, a \rangle_{(2\pi)^{-d} d\xi} = \iint f(y)\, e^{-i \xi \cdot y}\, \overline{a(\xi)}\, (2\pi)^{-d}\, dy\, d\xi = \int f(y)\, \overline{ \Big( \int a(\xi)\, e^{i \xi \cdot y}\, (2\pi)^{-d}\, d\xi \Big) }\, dy \]

Then we will define:

\[ \mathcal{F}^* a(x) = \check{a}(x) = \int a(\xi)\, e^{i \xi \cdot x}\, (2\pi)^{-d}\, d\xi \]

We will soon prove that $\mathcal{F}^* = \mathcal{F}^{-1}$.

Lemma 5.0.1.

1. If $f \in L^1$, then $\mathcal{F}f$ is well-defined and $\| \mathcal{F}f \|_{L^\infty} \le \| f \|_{L^1}$.

2. For all $x_0, \eta \in \mathbb{R}^d$:

\[ \mathcal{F}\big(f(\cdot - x_0)\big)(\xi) = e^{-i x_0 \cdot \xi}\, \mathcal{F}f(\xi), \qquad \mathcal{F}f(\xi - \eta) = \mathcal{F}\big(e^{i \eta \cdot x} f\big)(\xi) \]

3. If $f, \partial_j f \in L^1$, then $\mathcal{F}(\partial_j f) = i \xi_j\, \mathcal{F}f(\xi)$.

4. If $f$ and $x_j f \in L^1$, then $\mathcal{F}(x_j f) = i\, \partial_{\xi_j} \mathcal{F}f(\xi)$.

Remark 5.0.2. By the Fourier transform, regularity of $f$ is equivalent to decay of $\mathcal{F}f$.

Remark 5.0.3. $\mathcal{F}^* a(x) = (2\pi)^{-d}\, \mathcal{F}a(-x)$

We would now like to build a space that is closed under the Fourier transform (note that test functions and $L^1$ functions both won't work).

Definition 5.0.3 (Schwartz class on $\mathbb{R}^d$).

\[ \mathcal{S}(\mathbb{R}^d; \mathbb{C}) = \big\{ \phi \in C^\infty(\mathbb{R}^d; \mathbb{C}) : \forall\, \alpha, \beta \in \mathbb{N}^n,\ \| x^\beta \partial^\alpha \phi \|_{L^\infty} < \infty \big\} \]

We say that $\phi_j \to \phi$ in $\mathcal{S}$ if and only if:

\[ \sup_{x \in \mathbb{R}^d} \big| x^\beta D^\alpha (\phi_j - \phi) \big| \to 0 \]

for all $\alpha, \beta$.

By our lemma, $\mathcal{S}$ is closed under the Fourier transform and under the adjoint of the Fourier transform. Therefore $\mathcal{F} : \mathcal{S} \to \mathcal{S}$, and it can act on the dual space of the Schwartz class.

Definition 5.0.4 (Tempered distributions). $\mathcal{S}'(\mathbb{R}^d; \mathbb{C}) = \{ u : \text{continuous, conjugate-linear functionals on } \mathcal{S}(\mathbb{R}^d; \mathbb{C}) \}$, where we say $u$ is continuous if $\phi_j \to \phi$ in $\mathcal{S}$ implies $u(\phi_j) \to u(\phi)$.

So we get $\mathcal{F} : \mathcal{S}' \to \mathcal{S}'$ by $\langle \mathcal{F}u, \phi \rangle_{(2\pi)^{-d} d\xi} = \langle u, \mathcal{F}^* \phi \rangle$, and $\mathcal{F}^* : \mathcal{S}' \to \mathcal{S}'$ (defined similarly, no time).

Remark 5.0.4. All test functions are Schwartz functions, and the Gaussian is a Schwartz function but not a test function. Also $\mathcal{S}' \subset \mathcal{D}'(\mathbb{R}^d; \mathbb{C})$, but this is a strict subset.

Remark 5.0.5. Usually, the way to compute $\mathcal{F}u$ when $u \in \mathcal{S}'$ is to approximate $u$ by $u_j \in L^1$ (in the topology of $\mathcal{S}'$), and:

\[ \mathcal{F}u_j = \int u_j(y)\, e^{-i \xi \cdot y}\, dy \]

Theorem 5.0.1 (Fourier Inversion, Plancherel).

1. (Fourier inversion on $\mathcal{S}$) For $f \in \mathcal{S}(\mathbb{R}^d; \mathbb{C})$, $\mathcal{F}^* \mathcal{F} f = \mathcal{F} \mathcal{F}^* f = f$.

2. (Plancherel's theorem) For $f, g \in \mathcal{S}(\mathbb{R}^d; \mathbb{C})$, we have:

\[ \int f\, \bar{g}\, dx = \int \mathcal{F}f\, \overline{\mathcal{F}g}\, (2\pi)^{-d}\, d\xi \]

In particular:

\[ \| f \|_{L^2}^2 = \| \mathcal{F}f \|_{L^2\left((2\pi)^{-d} d\xi\right)}^2 \]

and $\mathcal{F}$ extends uniquely to $\mathcal{F} : L^2 \to L^2$ by density.

3. (Fourier inversion in $\mathcal{S}'$) For $u \in \mathcal{S}'(\mathbb{R}^d; \mathbb{C})$, $\mathcal{F}^* \mathcal{F} u = \mathcal{F} \mathcal{F}^* u = u$.
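Plancherel's theorem has an exact discrete analogue for the DFT, $\sum |f|^2 = N^{-1} \sum |\hat f|^2$, which makes for a quick sanity check (a sketch using NumPy's FFT conventions, which match the continuum normalization above up to grid factors):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
fhat = np.fft.fft(f)                    # discrete analogue of F f

# Discrete Plancherel: sum |f|^2 = (1/N) sum |F f|^2, exactly (up to rounding)
assert np.allclose(np.sum(np.abs(f)**2), np.sum(np.abs(fhat)**2) / N)
```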

3. (FI in S ) For u > S ˆRd


œ œ
 C, FF f 
FF u u ‡

Proof. It suffices to show that FF id in S or S . Because the other FF œ ‡


id follows from
F ˆ2π dF ˆ ,
‡ 


(1) in S , f > S , F f  > F . So we have:

FF
‡
f S F f ˆξ eiξ x

ˆ2π d

lim S
ε 0
F f ˆξ e  εˆSξ1 SSξd S iξ x
e

ˆ2π d

lim U f ˆy e  iξ y
e εˆSξ1 S Sξd S iξ x
e dy
ε 0 ˆ2π d
d
lim S
ε 0
Œ MS e  εSξj Siξj ˆxy 
ˆ2π 
 d
dξ ‘ f ˆy dy
j 1

S δ0 ˆx1  y 1 δ0 ˆxd  y d f ˆy dy f ˆ x

Where we used the following fact that we proved last time

lim S e  εSξ S iξ x
e dξ δ0 ˆx2π
ε 0

(2) (sketch) FF‡


id on S 1, This key point is to justify that:
δ0 F 1 ‡

There are two steps to show this. First ∂ξj 1 0 implies that xj F 1 0 for all j, With a ‡

little more argument, this reduces to computing the constant cδ0 F 1. This follows by ‡

computing:

,F
 1 S xS 2 1 1 SxS2 ‡
1 be 2 , δ0 g be 2 g  1~c
c

Remark 5.0.6. The Fourier inversion formula and the Plancherel theorem are all about finding a way to justify:

\[ \delta_0 = (2\pi)^{-d} \int e^{i \xi \cdot x}\, d\xi \]

Remark 5.0.7. The Fourier transform maps convolutions to products (and vice-versa):

\[ \mathcal{F}(f * g) = \mathcal{F}f\ \mathcal{F}g \]

And with:

\[ \big(a *_{(2\pi)^{-d} d\xi} b\big)(\eta) = \int a(\xi)\, b(\eta - \xi)\, (2\pi)^{-d}\, d\xi \]

then:

\[ \mathcal{F}^{-1}\big(a *_{(2\pi)^{-d} d\xi} b\big) = \mathcal{F}^{-1}a\ \mathcal{F}^{-1}b \]

Remark 5.0.8. Let $L : \mathbb{R}^d \to \mathbb{R}^d$ be an invertible linear map. Then:

\[ \mathcal{F}(f \circ L)(\xi) = |\det L|^{-1}\, \mathcal{F}f\big((L^{-1})^t\, \xi\big) \]

Lemma 5.0.2. You can compute the Fourier transform of any Gaussian. If $A$ is a symmetric positive definite matrix, then:

\[ \mathcal{F}\big(e^{-\frac{1}{2} x^t A x}\big) = (2\pi)^{d/2}\, |\det A|^{-1/2}\, e^{-\frac{1}{2} \xi^t A^{-1} \xi} \]

This is the end of our discussion of the Fourier transform.
This is the end of our discussion of the Fourier transform.
Lecture 22 (11/14)

5.1 Applications

The goal today is to give some applications of the Fourier transform to the study of evolutionary PDEs. The main ideas to get across are:

1. using the Fourier transform to solve homogeneous evolutionary PDEs, i.e. PDEs of the type $(\partial_t + A) u = 0$;

2. Duhamel's principle: using the homogeneous solution to solve the inhomogeneous PDE;

3. the Fourier transform of the forward fundamental solution.

The main example to study is the heat equation:

\[ \begin{cases} (\partial_t - \Delta) u = f & \text{in } \mathbb{R}^{1+d} \cap \{t > 0\} \\ u = g & \text{in } \{t = 0\} \end{cases} \]

and the Schrödinger equation:

\[ \begin{cases} (i \partial_t - \Delta) u = f & \text{in } \mathbb{R}^{1+d} \cap \{t > 0\} \\ u = g & \text{in } \{t = 0\} \end{cases} \]

These and the wave equation are worked out in the notes.

Let's begin by studying the homogeneous equation:

\[ \begin{cases} (\partial_t - \Delta) u = 0 & \text{in } \mathbb{R}^{1+d} \\ u = g & \text{on } \{t = 0\} \end{cases} \]

From now on, $\mathcal{F}$ denotes only the spatial Fourier transform, and $\mathcal{F}_{t,x}$ will mean the space-time Fourier transform.

Letting $\hat{u}(t, \xi) = \mathcal{F}u(t, \xi)$, we get the new problem:

\[ \begin{cases} (\partial_t + |\xi|^2)\, \hat{u}(t, \xi) = 0 \\ \hat{u}(0, \xi) = \hat{g}(\xi) \end{cases} \tag{15} \]

so solving this first-order ODE, we get:

\[ \hat{u}(t, \xi) = \hat{g}(\xi)\, e^{-t |\xi|^2} \]
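The formula $\hat u(t,\xi) = \hat g(\xi)\, e^{-t|\xi|^2}$ translates directly into a spectral solver on a periodic grid, where the $L^2$ bound of the next proposition is visible numerically. A sketch (a periodic domain standing in for $\mathbb{R}^d$, not from the notes):

```python
import numpy as np

def heat_evolve(g, t, L=2 * np.pi):
    """Solve u_t = u_xx on a periodic grid via u_hat(t) = g_hat * exp(-t xi^2)."""
    N = g.size
    xi = np.fft.fftfreq(N, d=L / N) * 2 * np.pi    # integer wavenumbers when L = 2 pi
    return np.fft.ifft(np.fft.fft(g) * np.exp(-t * xi**2)).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
g = np.sign(np.sin(x))                              # rough initial data
u = heat_evolve(g, t=0.1)
# L^2 norm is non-increasing (Plancherel + |e^{-t xi^2}| <= 1)
assert np.sum(u**2) <= np.sum(g**2) + 1e-12
# single-mode check: g = sin(x) decays exactly like e^{-t}
v = heat_evolve(np.sin(x), t=0.5)
assert np.allclose(v, np.exp(-0.5) * np.sin(x), atol=1e-10)
```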

Proposition 5.1.1 (Solvability for the heat equation).

1. (existence) For $g \in L^2$, there exists a solution $u \in C_t([0,\infty); L^2)$ to (15) such that $\| u(t, \cdot) \|_{L^2} \le \| g \|_{L^2}$.

2. (uniqueness) If $u, u'$ are solutions to (15) with $u, u' \in C_t([0,\infty); L^2)$, then $u = u'$. (Use the formula and Plancherel, then apply ODE uniqueness.)
5.1.1 Duhamel's Principle

Consider:

\[ (\partial_t + A(t))\, u = f \]

where $A(t) = \sum_{|\alpha| \le N} a_\alpha(t,x)\, D^\alpha$ with $D$ not containing $\partial_t$. This is a first-order (in time) evolutionary equation. Suppose that we know how to solve the homogeneous problem. In other words, given a space $X$ of distributions on $\mathbb{R}^d$, there exists the solution operator $S(t,s) g$, where $g \in X$ and $t \ge s$, such that:

\[ (\partial_t + A(t))\, S(t,s) g = 0, \qquad S(s,s) g = g \]

and for fixed $s$, $S(t,s) g \in C_t([s,\infty); X)$.

Then we can compute $(\partial_t + A(t))\big(1_{[s,\infty)}(t)\, S(t,s) g\big)$. We know that $A(t)\big(1_{(s,\infty)}(t)\, u\big) = 1_{(s,\infty)}(t)\, A(t) u$, and we have:

\[ \partial_t\big(1_{(s,\infty)}(t)\, u\big) = \delta_0(t-s)\, u + 1_{(s,\infty)}(t)\, \partial_t u \]

So we get:

\[ (\partial_t + A(t))\big(1_{[s,\infty)}(t)\, S(t,s) g\big) = \delta_0(t-s)\, S(t,s) g = \delta_0(t-s)\, g \]

So the effect of initial data is an instantaneous forcing term.

Now given any $F(t,x) = \int F(s,x)\, \delta_0(t-s)\, ds$, this suggests that we want to define:

\[ u_F(t,x) = \int S(t,s) F(s, \cdot)(x)\, 1_{(s,\infty)}(t)\, ds = \int_{-\infty}^t S(t,s) F(s, \cdot)(x)\, ds \]

This is called Duhamel's formula. The previous computation tells us that:

\[ (\partial_t + A(t))\, u_F = \int (\partial_t + A(t))\big(1_{(s,\infty)}(t)\, S(t,s) F(s, \cdot)\big)\, ds = \int F(s,x)\, \delta_0(t-s)\, ds = F(t,x) \]
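For a single Fourier mode of the heat equation, $S(t,s)$ is just multiplication by $e^{-(t-s)|\xi|^2}$, so Duhamel's formula can be checked against a direct ODE solve of $\hat u' = -|\xi|^2 \hat u + \hat f$. A sketch (all parameters hypothetical):

```python
import numpy as np

lam = 2.0                                  # |xi|^2 for one fixed frequency
fhat = lambda s: np.sin(3 * s)             # forcing, restricted to that mode

def duhamel(t, n=20001):
    """Duhamel's formula u(t) = integral_0^t e^{-(t-s) lam} fhat(s) ds (zero data)."""
    s = np.linspace(0.0, t, n)
    return np.trapz(np.exp(-(t - s) * lam) * fhat(s), s)

# Direct check: u' + lam*u = fhat, u(0) = 0, via fine forward Euler
t_end, n = 1.5, 200_000
dt = t_end / n
u = 0.0
for k in range(n):
    u += dt * (-lam * u + fhat(k * dt))
assert abs(duhamel(t_end) - u) < 1e-3
```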

Usually, $X$ will have a norm defined on it, and we will have $\| S(t,s) g \|_X \le C \| g \|_X$. In this case, if $F \in L^1_t([0,T]; X)$ (that is, $\| F(t, \cdot) \|_X$ is absolutely integrable) then $u_F \in C_t([0,T]; X)$. And moreover, if $\operatorname{supp} F \subset \{0 \le t \le T\}$, then:

\[ u_F = \int_{-\infty}^t S(t,s) F(s)\, ds = \int_0^t S(t,s) F(s)\, ds \]

and $u_F(0,x) = 0$.

Let's now apply Duhamel's principle to:

\[ \begin{cases} (\partial_t + |\xi|^2)\, \hat{u}(t,\xi) = 1_{(0,\infty)}(t)\, \hat{f}(t,\xi) \\ \hat{u}(0, \xi) = \hat{g}(\xi) \end{cases} \]

This tells you that:

\[ \hat{u}_F(t,\xi) = \int_{-\infty}^t S(t,s)\big(1_{(0,\infty)}\, \hat{f}(s)\big)\, ds = \int_0^t e^{-(t-s)|\xi|^2}\, \hat{f}(s)\, ds \]

This is supposed to give us the solution to the inhomogeneous equation:

\[ \begin{cases} (\partial_t + |\xi|^2)\, \hat{u}_F = 1_{(0,\infty)}(t)\, \hat{f} \\ \hat{u}_F(0, \xi) = 0 \end{cases} \]

Then we get that:

\[ \hat{u}(t,\xi) = e^{-t|\xi|^2}\, \hat{g}(\xi) + \int_0^t e^{-(t-s)|\xi|^2}\, \hat{f}(s,\xi)\, ds \]

Alternatively, we could have done the following: we could compute:

\[ \mathcal{F}^{-1}\big(e^{-t|\xi|^2}\, \hat{g}\big) = \mathcal{F}^{-1}\big(e^{-t|\xi|^2}\big) * g = \frac{1}{\sqrt{4\pi t}^{\,d}} \int e^{-\frac{|x-y|^2}{4t}}\, g(y)\, dy \]

Then we can apply Duhamel's principle directly to $\partial_t - \Delta$ to get:

\[ u(t,x) = S(t,0)\, g + \int_0^t S(t,s)\, f(s)\, ds \]

which solves the inhomogeneous problem:

\[ \begin{cases} (\partial_t - \Delta) u = f \\ u = g \end{cases} \]

Theorem 5.1.1.

1. (existence) For $g \in L^2$ and $f \in L^1((0,T); L^2)$, there exists a solution to the inhomogeneous equation such that $\| u(t, \cdot) \|_{L^2} \le \| g \|_{L^2} + \int_0^t \| f(s, \cdot) \|_{L^2}\, ds$. (Notation: $u \in L^p_t(I; X)$ iff $t \mapsto \| u(t, \cdot) \|_X \in L^p_t(I)$.)
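The heat kernel formula can be checked numerically in $d = 1$: $(4\pi t)^{-1/2} e^{-x^2/4t}$ should have unit mass and satisfy $\partial_t K = \partial_x^2 K$. A finite-difference sketch (not in the notes):

```python
import numpy as np

def K(t, x):
    """1d heat kernel K(t, x) = (4 pi t)^{-1/2} exp(-x^2 / (4t))."""
    return np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

x = np.linspace(-30, 30, 60001)
t, h = 0.7, 1e-4

# unit mass for every t > 0
assert abs(np.trapz(K(t, x), x) - 1.0) < 1e-12

# K_t = K_xx, checked with centered differences at a sample point
x0 = 0.9
Kt = (K(t + h, x0) - K(t - h, x0)) / (2 * h)
Kxx = (K(t, x0 + h) - 2 * K(t, x0) + K(t, x0 - h)) / h**2
assert abs(Kt - Kxx) < 1e-5
```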

2. (uniqueness) If $u$ and $u'$ are in $C_t([0,T]; L^2)$ and solve the inhomogeneous equation, then $u = u'$.

The crux of Duhamel's formula is:

\[ (\partial_t + A(t))\big(1_{[s,\infty)}(t)\, S(t,s) g\big) = \delta_0(t-s)\, g \]

We can get the forward fundamental solution via the Fourier transform. We have:

\[ (\partial_t - \Delta) E_+ = \delta_0(t)\, \delta_0(x), \qquad \operatorname{supp} E_+ \subset \{t \ge 0\} \]

The formula above tells us that we must have:

\[ E_+ = 1_{(0,\infty)}(t)\, S(t,0)\, \delta_0(x) \]

Recall that $\mathcal{F}\delta_0 = 1$. This means that:

\[ \hat{u} = \mathcal{F}\big(S(t,0)\, \delta_0\big) \]

should solve:

\[ \begin{cases} (\partial_t + |\xi|^2)\, \hat{u}(t,\xi) = 0 \\ \hat{u}(0,\xi) = 1 \end{cases} \]

but we know that $e^{-t|\xi|^2}$ solves this; therefore we get that:

\[ E_+ = 1_{(0,\infty)}(t)\, \mathcal{F}^{-1}\big(e^{-t|\xi|^2}\big) = 1_{(0,\infty)}(t)\, \frac{1}{\sqrt{4\pi t}^{\,d}}\, e^{-\frac{|x|^2}{4t}} \]

Remark 5.1.1. $E_+$ is smooth outside of the space-time origin.

Remark 5.1.2. $E_+$ is analytic outside $\{t = 0\}$.

Remark 5.1.3. For any $u \in \mathcal{S}'(\mathbb{R}^{1+d})$ with $\operatorname{supp} u \subset \{t \ge -L\}$, $u * E_+$ is well-defined.

Now let's discuss the space-time Fourier transform of the forward fundamental solution. Let's start with:

\[ (\partial_t - \Delta) E_+ = \delta_0 \]

Let's take the space-time Fourier transform to get:

\[ (i\tau + |\xi|^2)\, \mathcal{F}_{t,x} E_+ = 1 \]

Provided that $i\tau + |\xi|^2 \ne 0$, we have:

\[ \mathcal{F}_{t,x} E_+ = \frac{1}{i\tau + |\xi|^2} \]

This isn't quite enough; we also need to use the support property $\operatorname{supp} E_+ \subset \{t \ge 0\}$.

We can approximate $E_+$ by $e^{-\varepsilon t} E_+$ as $\varepsilon \to 0^+$. Now we can try to compute:

\[ (\partial_t - \Delta)\big(e^{-\varepsilon t} E_+\big) = e^{-\varepsilon t}\, (\partial_t - \Delta) E_+ - \varepsilon\, e^{-\varepsilon t} E_+ = \delta_0(t,x) - \varepsilon\, e^{-\varepsilon t} E_+ \]

Putting these together, we get:

\[ (\partial_t - \Delta + \varepsilon)\big(e^{-\varepsilon t} E_+\big) = \delta_0(t,x) \]

Then take the Fourier transform of this to get:

\[ (i\tau + |\xi|^2 + \varepsilon)\, \mathcal{F}_{t,x}\big(e^{-\varepsilon t} E_+\big) = 1 \]

for all $\varepsilon > 0$, so that:

\[ \mathcal{F}_{t,x}\big(e^{-\varepsilon t} E_+\big) = \frac{1}{i\tau + |\xi|^2 + \varepsilon} \]

So now:

\[ \mathcal{F}_{t,x} E_+ = \lim_{\varepsilon \to 0} \mathcal{F}_{t,x}\big(e^{-\varepsilon t} E_+\big) = \lim_{\varepsilon \to 0} \frac{1}{i\tau + |\xi|^2 + \varepsilon} \]

Remark 5.1.4. Analyticity of the Fourier transform in the lower (or upper) half-plane in $\tau$ corresponds to the original distribution being supported in $\{t \ge 0\}$ (or $\{t \le 0\}$).

Lemma 5.1.1. Let $a \in \{z \in \mathbb{C} : \operatorname{Im} z \ge 0\}$. Then:

\[ \lim_{\varepsilon \to 0^+} \mathcal{F}_t^{-1} \Big( \frac{1}{\tau - i\varepsilon - a} \Big) = i\, 1_{(0,\infty)}(t)\, e^{i a t} \]

Assuming this lemma (applied with $a = i|\xi|^2$), we see that:

\[ E_+ = \mathcal{F}_{t,x}^{-1}\, \mathcal{F}_{t,x} E_+ = \lim_{\varepsilon \to 0} \mathcal{F}_{t,x}^{-1} \Big( \frac{1}{i(\tau - i\varepsilon) + |\xi|^2} \Big) = \mathcal{F}_x^{-1}\Big( 1_{(0,\infty)}(t)\, e^{-|\xi|^2 t} \Big) \]
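The one-dimensional content of the computation above is the identity $\mathcal{F}_t^{-1}\big[(i\tau + \lambda)^{-1}\big](t) = 1_{(0,\infty)}(t)\, e^{-\lambda t}$ for $\lambda > 0$ (no $\varepsilon$-regularization needed since the pole is off the real axis). Assuming the normalization $\mathcal{F}^{-1}g(t) = (2\pi)^{-1}\int g(\tau) e^{i\tau t}\, d\tau$ used in these notes, a rough quadrature sketch:

```python
import numpy as np

def inv_ft(t, lam, T=2000.0, n=2_000_001):
    """(2 pi)^{-1} * integral_{-T}^{T} e^{i tau t} / (i tau + lam) d tau."""
    tau = np.linspace(-T, T, n)
    return np.trapz(np.exp(1j * tau * t) / (1j * tau + lam), tau) / (2 * np.pi)

lam = 1.0
for t in (0.5, 2.0):
    # should approximate 1_{(0,infty)}(t) e^{-lam t}
    assert abs(inv_ft(t, lam) - np.exp(-lam * t)) < 1e-2
```

The truncation error decays like $1/(tT)$, so the tolerance here is deliberately loose.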

Lecture 23 (11/19)

6 Energy Method and Sobolev Spaces


Now we will move on to a discussion of the energy method and Sobolev spaces. This will follow Evans, chapter 5.

6.1 Energy Method

This is the third general way of trying to understand 2nd order PDEs. It is glorified integration by parts.

Example 6.1.1 (Laplace equation). We will focus on proving a priori estimates, by which we mean: we assume there is a regular solution to the PDE, then we prove inequalities about norms of the solution with respect to the data. Let's look at the Dirichlet problem:

\[ \begin{cases} -\Delta u = f & \text{in } U \\ u = g & \text{on } \partial U \end{cases} \tag{16} \]

Proposition 6.1.1. Suppose that $U$ is a $C^1$ bounded domain and $u \in C^2(\bar{U})$, $f \in C^0(\bar{U})$, $g \in C^0(\partial U)$. Then the solution $u$ to (16) is unique.

Proof. We have already seen a proof, but here we present a different one.

Let $u_1, u_2$ be two solutions to (16). Then if we let $v = u_2 - u_1 \in C^2(\bar{U})$, this solves the homogeneous equation:

\[ \begin{cases} -\Delta v = 0 & \text{in } U \\ v = 0 & \text{on } \partial U \end{cases} \]

The key idea is: multiply the equation by $v$, integrate over $U$, then integrate by parts:

\[ 0 = \int_U (-\Delta v)\, v\, dx = \int_U \nabla v \cdot \nabla v\, dx - \int_{\partial U} (\nu \cdot \nabla v)\, v\, dS \]

Therefore $\int |\nabla v|^2\, dx = 0$ (the boundary integral vanishes since $v = 0$ on $\partial U$). This tells us that $\nabla v = 0$, therefore $v = 0$ in $U$ by the boundary condition. $\square$

So the energy method is a very loose term, but it is: multiply the equation by a suitable $Xu$ ($X$ a suitable operator), then integrate by parts.
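The identity $\int (-\Delta v)\, v = \int |\nabla v|^2$ has an exact discrete analogue (summation by parts) on a periodic grid, which makes the mechanism easy to check; a sketch not in the notes:

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(100)

Dv = np.roll(v, -1) - v                          # forward difference (periodic)
lap = np.roll(v, -1) - 2 * v + np.roll(v, 1)     # discrete Laplacian

# Summation by parts: sum (-lap v) v = sum |Dv|^2, exactly
assert np.isclose(np.sum(-lap * v), np.sum(Dv**2))
```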
Example 6.1.2 (Heat Equation). Consider the heat equation:

\[ \begin{cases} \partial_t u - \Delta u = f & \text{in } \mathbb{R}^{1+d} \\ u = g & \text{on } \{t = 0\} \end{cases} \]

Proposition 6.1.2. If $u \in C_t([0,T]; L^2)$, $\nabla u \in L^2_{t,x}((0,T) \times \mathbb{R}^d)$, $f \in L^1_t([0,T]; L^2)$ and $g \in L^2$, then $u$ is unique and:

\[ \| u(t) \|_{L^2} + \| \nabla u \|_{L^2([0,T]; L^2)} \lesssim \| u(0) \|_{L^2} + \| f \|_{L^1([0,T]; L^2)} \]

Proof. In this case we have:

\[ \int_0^t \int f u\, dx\, dt' = \int_0^t \int (\partial_t - \Delta) u \cdot u(t', x)\, dx\, dt' \]
\[ = \int_0^t \int \frac{1}{2} \partial_t (u^2)\, dx\, dt' - \int_0^t \int \Delta u \cdot u\, dx\, dt' \]
\[ = \int_0^t \int \frac{1}{2} \partial_t (u^2)\, dx\, dt' + \int_0^t \int \nabla u \cdot \nabla u\, dx\, dt' + \text{boundary term} \]

The boundary term is zero by taking limits and using the assumptions on $u$ and $\nabla u$. But then we get:

\[ \frac{1}{2} \int u^2(t,x)\, dx - \frac{1}{2} \int u^2(0,x)\, dx + \int_0^t \int |\nabla u|^2\, dx\, dt' = \int_0^t \int f u\, dx\, dt' \]

Then we get the inequality via rearrangement, Cauchy–Schwarz, and Minkowski's inequality. $\square$

Another main thing we do in the energy method is: apply operators $Y$ to $u$, then look at $P(Yu)$, then apply the same process as above.

Notice that for any $\partial_j$, if $(\partial_t - \Delta) u = f$, then $(\partial_t - \Delta) \partial_j u = \partial_j f$, because $\partial_j$ commutes with the heat operator. So if we apply the same method to this resulting equation, we can get estimates:

\[ \sup_{|\alpha| \le k} \| D^\alpha u(t) \|_{L^2} + \| \nabla D^\alpha u \|_{L^2_{t,x}((0,T) \times \mathbb{R}^d)} \lesssim \sup_{|\alpha| \le k} \big( \| D^\alpha g \|_{L^2} + \| D^\alpha f \|_{L^1([0,T]; L^2)} \big) \]

Remark 6.1.1. For the wave equation $\Box u = f$, a “good” multiplier is $\partial_t u$. This gives us the physical conservation of energy. (This multiplier makes sense due to Lagrangian mechanics.)

6.2 Sobolev Spaces

From the energy method, you expect to be able to control something like:

\[ \| D^\alpha u \|_{L^2} \]

One natural question is: can we say other things about the solution from the control of these kinds of norms?

We know that if $f \in C_c^1(\mathbb{R})$, then by the fundamental theorem of calculus:

\[ f(x) = \int_{-\infty}^x f'(x')\, dx' \le \| f' \|_{L^1} \]

Now if $x \in I$ (a compact interval) and $\operatorname{supp} f \subset I$, then we have:

\[ \sup_I |f| \le \| f' \|_{L^1(I)} \le \| f' \|_{L^2(I)}\, |I|^{1/2} \]

by Hölder's inequality. It turns out we can generalize this to multiple dimensions; the results are known as Sobolev inequalities.

Definition 6.2.1 (Sobolev Space). Given $U$ a domain in $\mathbb{R}^d$, $k \in \mathbb{Z}_{\ge 0}$, $1 \le p \le \infty$, we let:

\[ W^{k,p}(U) = \big\{ u \in L^p(U) : D^\alpha u \in L^p(U)\ \forall\, |\alpha| \le k \big\} \]

where the $D^\alpha$ are taken in the sense of distributions.

We associate a norm:

\[ \| u \|_{W^{k,p}(U)} = \sum_{|\alpha| \le k} \| D^\alpha u \|_{L^p(U)} \]

So Sobolev inequalities will look like:

\[ \| D^\alpha u \|_{L^q} \lesssim \| u \|_{W^{k,p}} \]

Remark 6.2.1. We want a “function space” framework for converting a priori estimates to existence results.

In linear algebra, given an $n \times m$ matrix $L$, if we are interested in solving $Lu = f$, then the existence of $u$ such that $Lu = f$ is equivalent to $\langle f, \phi \rangle = 0$ for all $\phi$ such that $L^* \phi = 0$ (which is equivalent to $\operatorname{Im}(L) = (\ker L^*)^\perp$). It turns out the Sobolev spaces are the ideal spaces in which to do the same thing.

Let's introduce a little more notation. If $\phi \in C_0^\infty(U)$ then $\phi \in W^{k,p}(U)$ for all $k, p$. In other words $C_0^\infty(U) \subset W^{k,p}(U)$, and we can consider the completion of $C_0^\infty(U)$ in $W^{k,p}(U)$ with respect to $\| \cdot \|_{W^{k,p}(U)}$. The resulting space will be denoted $W_0^{k,p}(U)$ (the completion of the test functions in the Sobolev space).

In general, this is a strictly smaller subset, and you should think of these functions as the elements of the Sobolev space whose boundary value on $\partial U$ is zero.

The case $p = 2$ is of special importance (these are the bounds we get from the energy method). We let $H^k(U) = W^{k,2}(U)$, and we often write $H_0^k(U) = W_0^{k,2}(U)$.

Proposition 6.2.1 (Completeness of Sobolev Spaces; Norm Equivalence via the Fourier Transform).

1. For all $k, p$, $W^{k,p}(U)$ is complete under $\| \cdot \|_{W^{k,p}(U)}$. In other words, $W^{k,p}(U)$ is a Banach space.

2. There is a characterization of $H^k(\mathbb{R}^d)$ in terms of the Fourier transform, namely:

\[ \| u \|_{H^k(\mathbb{R}^d)} \approx \big\| (1 + |\xi|)^k\, \hat{u}(\xi) \big\|_{L^2_\xi} \]

where $A \approx B$ if and only if there exists $C > 0$ such that $A \le C B$ and $B \le C A$.

Now let's discuss some basic properties of $W^{k,p}(U)$:

1. approximation theorems

2. extension theorems

3. trace theorems

Lemma 6.2.1. Given $y \in \mathbb{R}^d$, let $\tau_y : u(x) \mapsto u(x - y)$, viewed as a linear operator. Then $\tau_y u$ is continuous in $L^p$ as a function of $y$ (implicitly, this requires $u \in L^p$). In other words:

\[ \lim_{y \to 0} \| \tau_y u - u \|_{L^p} = 0 \]

Proof. See the appendix of Evans. $\square$

Corollary 6.2.1. For all $u \in L^p$, given a mollifier $\phi$:

\[ \lim_{\varepsilon \to 0} \| \phi_\varepsilon * u - u \|_{L^p} = 0 \]

Proof. We have:

\[ \Big\| \int \frac{1}{\varepsilon^d}\, \phi(y/\varepsilon)\, u(x - y)\, dy - u(x) \Big\|_{L^p} = \Big\| \int \phi(z)\, u(x - \varepsilon z)\, dz - \int \phi(z)\, u(x)\, dz \Big\|_{L^p} \]
\[ = \Big\| \int \phi(z)\, \big( u(x - \varepsilon z) - u(x) \big)\, dz \Big\|_{L^p} \le \int \phi(z)\, \| \tau_{\varepsilon z} u - u \|_{L^p}\, dz \]

Take $\varepsilon \to 0$ and apply the dominated convergence theorem. $\square$

Theorem 6.2.1 (Approximation). By the same token, if $u \in W^{k,p}(\mathbb{R}^d)$, then $\phi_\varepsilon * u \to u$ in $W^{k,p}(\mathbb{R}^d)$. And we know that $\phi_\varepsilon * u \in C^\infty(\mathbb{R}^d)$.

Proposition 6.2.2. If $u \in W^{k,p}(U)$, then there exist $u_\varepsilon \in C^\infty(U)$ such that $u_\varepsilon \to u$ in $W^{k,p}(U)$.

Proof. Consider a sequence of open sets $V_n$ such that $V_n$ is bounded, $\bar{V}_n$ is compact, $V_n \subset V_{n+1}$, and $\bigcup_{n=1}^\infty V_n = U$. Every open covering has a smooth partition of unity $\{\chi_n\}$ subordinate to $\{V_n\}$. By this we mean that $\chi_n \in C^\infty(\mathbb{R}^d)$ with $\operatorname{supp} \chi_n \subset V_n$, $\sum_{n=1}^\infty \chi_n(x) = 1$ where for each $x \in U$ only finitely many terms in the sum are nonzero, and $0 \le \chi_n \le 1$.

Then we can split $u = \sum_{n=1}^\infty \chi_n u$, where each $\chi_n u$ can be defined on $\mathbb{R}^d$ by extending it by $0$ outside $V_n$. Then the idea is to approximate each $\chi_n u \in W^{k,p}(\mathbb{R}^d)$ by a mollification:

\[ v_n = \phi_{\varepsilon_n} * (\chi_n u) \]

such that $\| v_n - \chi_n u \|_{W^{k,p}} \le \delta_n\, \varepsilon$, where the $\delta_n$ are chosen so that $\sum \delta_n = 1$. Then we let $v_\varepsilon = \sum_n v_n$, and:

\[ \| v_\varepsilon - u \|_{W^{k,p}} \le \sum_{n=1}^\infty \| v_n - \chi_n u \|_{W^{k,p}(\mathbb{R}^d)} \le \sum_{n=1}^\infty \delta_n\, \varepsilon = \varepsilon \qquad \square \]

Proposition 6.2.3 (Approximation by elements of $C^\infty(\bar{U})$). If $U$ is a $C^1$ domain and $u \in W^{k,p}(U)$, then there exists a sequence $u_\varepsilon \in C^\infty(\bar{U})$ such that $u_\varepsilon \to u$ in $W^{k,p}(U)$.

Proof. (sketch) It was a picture. $\square$

Lecture 24 (11/21)

We will continue the discussion of Sobolev spaces today. Recall we say that $u \in W^{k,p}(U)$ if:

\[ \| u \|_{W^{k,p}} = \sum_{|\alpha| \le k} \| D^\alpha u \|_{L^p} < \infty \]

We then approximated elements of the Sobolev space by better-behaved ones. If $u \in W^{k,p}(\mathbb{R}^d)$, with $\phi \in C_0^\infty(U)$, $\int \phi = 1$, $\phi_\varepsilon = \varepsilon^{-d} \phi(\cdot/\varepsilon)$, then:

\[ \phi_\varepsilon * u \to u \]

in $W^{k,p}(\mathbb{R}^d)$. Also, if $u \in W^{k,p}(U)$ then there exists a sequence $u_j \in C^\infty(U)$ such that $u_j \to u$ in $W^{k,p}(U)$.

We also have that if $u \in W^{k,p}(U)$ with $U$ a $C^1$ domain, then there is a sequence $u_j \in C^\infty(\bar{U})$ such that $u_j \to u$ in $W^{k,p}(U)$.

Proposition 6.2.4. If $u \in W^{k,p}(\mathbb{R}^d)$, $k$ a non-negative integer, $1 \le p < \infty$, then $\chi(x/R)\, u \to u$ in $W^{k,p}(\mathbb{R}^d)$ as $R \to \infty$ (for $\chi$ a smooth function that is $1$ on $B(0,1)$ and $0$ outside $B(0,2)$).

Proof. For $k = 1$ (the other cases are similar). We need to show that:

\[ \| D(\chi(\cdot/R)\, u) - D u \|_{L^p} + \| \chi(\cdot/R)\, u - u \|_{L^p} \to 0 \]

as $R \to \infty$. To show this we write the first term as:

\[ \chi(\cdot/R)\, D u - D u + R^{-1}\, (D\chi)(\cdot/R)\, u \]

both parts of which can be shown to go to zero by simple estimates. $\square$

Corollary 6.2.2. $C_0^\infty(\mathbb{R}^d)$ is a dense subset of $W^{k,p}(\mathbb{R}^d)$.

(Note that this shows $W_0^{k,p}(\mathbb{R}^d) = W^{k,p}(\mathbb{R}^d)$: on the whole space there is no boundary, so the completion of the test functions is everything. For a general domain $U$, $W_0^{k,p}(U)$ is a strictly smaller subspace.)

Theorem 6.2.2 (Extension). Suppose $U$ is a bounded domain with $\partial U$ a $C^k$ boundary. Let $V$ be an open set such that $V \supset \bar{U}$. Then there exists a linear operator $E : W^{k,p}(U) \to W^{k,p}(\mathbb{R}^d)$ such that:

1. $(E u)\big|_U = u$

2. $\operatorname{supp} E u \subset V$ for all $u \in W^{k,p}(U)$

3. $\| E u \|_{W^{k,p}(\mathbb{R}^d)} \le C\, \| u \|_{W^{k,p}(U)}$, where $C$ depends on $k, p, V, U$.

Proof. (sketch)

Step 0: it suffices to consider $u \in C^\infty(\bar{U})$ (by approximation methods).

Step 1: reduce the result to the following claim: there exists an extension operator $E'$ for $U = B(0,1) \cap \mathbb{R}^d_+$ and $V = B(0,2)$, defined for $u \in C^\infty(\bar{U})$ with $\operatorname{supp} u \subset B(0,1) \cap \{x^d \ge 0\}$, satisfying $\| E' u \|_{W^{k,p}(\mathbb{R}^d)} \le C\, \| u \|_{W^{k,p}(U)}$.

Here we just argue by picture (flattening the boundary with charts and patching with a partition of unity); I did not draw the picture.

Step 2: proof of the claim in Step 1. The key is to use reflection. Let $x' = (x^1, \ldots, x^{d-1})$, then define:

\[ \tilde{u}(x', x^d) = \begin{cases} u(x', x^d), & x^d \ge 0 \\ u(x', -x^d), & x^d < 0 \end{cases} \]

This is a continuous extension. However, can we extend so that $D\tilde{u}(x', 0^-) = D u(x', 0^+)$? Can we do this for all derivatives? The key point is to make the normal derivatives agree:

\[ \partial_{x^d}^l \tilde{u}(x', 0^-) = \partial_{x^d}^l u(x', 0^+) \]

for $l = 0, \ldots, k$. So let's introduce $\alpha_0, \ldots, \alpha_k$ and $\beta_0, \ldots, \beta_k$, and define:

\[ \tilde{u}(x', x^d) = \begin{cases} u(x', x^d), & x^d \ge 0 \\ \sum_{l=0}^k \alpha_l\, u(x', -\beta_l x^d), & x^d < 0 \end{cases} \]

with $\alpha_i \in \mathbb{R}$ and $\beta_i > 0$. So now if we compute, for $x^d < 0$:

\[ \partial_{x^d}^l \tilde{u}(x', x^d) = \partial_{x^d}^l \sum_{m=0}^k \alpha_m\, u(x', -\beta_m x^d) = \sum_{m=0}^k \alpha_m\, (-\beta_m)^l\, \partial_{x^d}^l u(x', -\beta_m x^d) \]

So if we want the two limits as $x^d \to 0$ (from both sides) to agree, we get the following linear system of equations for the $\alpha$'s (given the $\beta$'s):

\[ \begin{aligned} 1 &= \alpha_0 + \cdots + \alpha_k \\ 1 &= (-\beta_0)\, \alpha_0 + \cdots + (-\beta_k)\, \alpha_k \\ &\ \ \vdots \\ 1 &= (-\beta_0)^k\, \alpha_0 + \cdots + (-\beta_k)^k\, \alpha_k \end{aligned} \]

which is just the matrix equation:

\[ \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & \cdots & 1 \\ -\beta_0 & \cdots & -\beta_k \\ \vdots & & \vdots \\ (-\beta_0)^k & \cdots & (-\beta_k)^k \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_k \end{pmatrix} \]

The matrix is a Vandermonde matrix, which (up to sign) has determinant:

\[ \prod_{0 \le i < j \le k} (\beta_i - \beta_j) \]

So if we choose the $\beta_i$'s to be pairwise distinct, then there exists $(\alpha_0, \ldots, \alpha_k)$ solving the system.

So now define:

\[ E' u = \chi\, \tilde{u} \]

with $\chi$ equal to $1$ on the support of $u$ and $0$ outside $B(0,1) \cap \{x^d \ge -1/2\}$. Then you check the estimate. $\square$

Remark 6.2.2 (Stein). There exists a universal extension operator $E_{\mathrm{univ}}$ for all $W^{k,p}$ ($k \ge 0$, $1 \le p < \infty$) which only requires $U$ to be $C^1$.
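The reflection coefficients can be computed concretely. The sketch below solves the Vandermonde system for $k = 2$ with $\beta = (1, 2, 3)$ (a hypothetical choice; any pairwise distinct positive values work) and checks the derivative-matching conditions:

```python
import numpy as np

k = 2
beta = np.array([1.0, 2.0, 3.0])            # pairwise distinct, beta_i > 0
# Vandermonde system: sum_m alpha_m (-beta_m)^l = 1 for l = 0, ..., k
V = np.vander(-beta, k + 1, increasing=True).T   # V[l, m] = (-beta_m)^l
alpha = np.linalg.solve(V, np.ones(k + 1))

# The matching conditions for the normal derivatives at x^d = 0:
for l in range(k + 1):
    assert np.isclose(np.sum(alpha * (-beta)**l), 1.0)
```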

6.3 Traces

Here we will restrict $u$ to $\partial U$.

Theorem 6.3.1. Let $k \ge 1$ and $1 \le p < \infty$, $U$ a $C^1$ domain.

1. For $u \in C^\infty(\bar{U})$, define:

\[ \operatorname{tr}(u) = u\big|_{\partial U} \]

Then this extends uniquely as a bounded map from $W^{k,p}(U)$ to $L^p(\partial U)$ such that:

\[ \| \operatorname{tr}(u) \|_{L^p(\partial U)} \le C\, \| u \|_{W^{1,p}(U)} \]

2. If $u \in W^{k,p}(U)$ and $\operatorname{tr}(u) = 0$, then $u \in W_0^{k,p}(U)$, and vice-versa.

Proof. See Evans, section 5.5. $\square$

Remark 6.3.1. In fact, the image of $\operatorname{tr}$ leads to the notion of fractional-regularity (Besov) spaces. It turns out $\operatorname{tr}(H^k(U)) = H^{k-1/2}(\partial U)$, with $U$ a $C^k$ domain; these spaces can be defined with the Fourier transform:

\[ \| u \|_{H^s(\mathbb{R}^d)} = \big\| (1 + |\xi|)^s\, \hat{u}(\xi) \big\|_{L^2_\xi} \]

6.4 Sobolev Inequalities

These are all estimates of the form:
$$\|u\|_{W^{l,q}} \leq C \|u\|_{W^{k,p}(U)}$$
(the norm on the left could also be one of many other types of norms).

A key element in this theory is the following inequality.

Theorem 6.4.1 (Gagliardo–Nirenberg–Sobolev inequality). Let $u \in C_0^\infty(\mathbb{R}^d)$. Then:
$$\|u\|_{L^{d/(d-1)}(\mathbb{R}^d)} \leq c \, \|Du\|_{L^1(\mathbb{R}^d)}.$$

We can remember the exponent $d/(d-1)$ through dimensional analysis (or scaling analysis). If we have $u(x)$, define $u_\lambda(x) = u(\lambda^{-1} x)$. Then we have:
$$\|Du_\lambda\|_{L^1} = \int |Du_\lambda| \, dx = \lambda^{d-1} \int |Du(y)| \, dy.$$
We also have that $\|u_\lambda\|_{L^p} = \lambda^{d/p} \|u\|_{L^p}$. In order for the inequality to hold for all $u$ and all $\lambda$, we need $\frac{d}{p} = d - 1$, i.e. $p = \frac{d}{d-1}$.
Proof. Given $u \in C_0^\infty(\mathbb{R}^d)$, we know that we can write, for each $i$:
$$u(x) = \int_{-\infty}^{x^i} \partial_i u(x^1, \dots, \underbrace{y^i}_{i\text{th slot}}, \dots, x^d) \, dy^i.$$
This tells us that:
$$\sup_{x^i \in \mathbb{R}} |u(x)| \leq \int_{\mathbb{R}} |\partial_i u(x^1, \dots, y^i, \dots, x^d)| \, dy^i.$$

Lemma 6.4.1 (Loomis–Whitney inequality). Let $f_1, \dots, f_d$ be nonnegative functions on $\mathbb{R}^{d-1}$, where $f_i = f_i(x^1, \dots, \widehat{x^i}, \dots, x^d)$ and $\widehat{x^i}$ indicates that we are omitting that coordinate (so $f_i$ does not depend on $x^i$). Then:
$$\int f_1 \cdots f_d \, dx \leq \prod_{i=1}^d \|f_i\|_{L^{d-1}(\mathbb{R}^{d-1})}.$$

Proof. Since $f_1$ does not depend on $x^1$, Hölder's inequality gives:
$$\int f_1 \cdots f_d \, dx^1 = f_1 \int f_2 \cdots f_d \, dx^1 \leq f_1 \, \|f_2\|_{L^{d-1}_{x^1}} \cdots \|f_d\|_{L^{d-1}_{x^1}}.$$
Then we integrate this in $x^2$ and apply Hölder's inequality again and again, and everything will end up working out.

With this we are almost done. Define:
$$f_i(x^1, \dots, \widehat{x^i}, \dots, x^d) = \int_{\mathbb{R}} |Du(x^1, \dots, y^i, \dots, x^d)| \, dy^i,$$
so that $\sup_{x^i} |u| \leq f_i$ and hence $|u|^{d/(d-1)} \leq \prod_i f_i^{1/(d-1)}$. Let us apply the Loomis–Whitney inequality:
$$\int |u|^{d/(d-1)} \, dx \leq \int f_1^{1/(d-1)} \cdots f_d^{1/(d-1)} \, dx \underset{\text{LW}}{\leq} \prod_i \big\| f_i^{1/(d-1)} \big\|_{L^{d-1}} = \prod_i \Big( \int |Du| \, dx \Big)^{1/(d-1)} = \Big( \int |Du| \, dx \Big)^{d/(d-1)}.$$

Remark 6.4.1 (Isoperimetric inequality). Given a $C^1$ bounded domain $U$:
$$|U|^{(d-1)/d} \leq C \, |\partial U|,$$
where $|U|$ denotes the volume of $U$ and $|\partial U|$ the surface area of its boundary.
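As a sanity check of the scaling argument (not from the notes), we can verify numerically, for a Gaussian in $d = 2$, that both sides of the GNS inequality scale the same way under $u \mapsto u(\lambda^{-1} \cdot)$, and that the inequality itself holds. The constant $1/(2\sqrt{\pi})$ used below is the sharp $d = 2$ constant coming from the isoperimetric inequality; that specific value is assumed, not proved in the notes.

```python
import numpy as np

# Scaling check for GNS in d = 2 (exponent d/(d-1) = 2): both sides scale
# like lambda^{d-1} under u -> u(x/lambda), so their ratio is lambda-free.
n, L = 321, 8.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

def norms(lam):
    """(||u_lam||_{L^2}, ||Du_lam||_{L^1}) for u(x) = exp(-|x|^2 / 2)."""
    u = np.exp(-((X / lam) ** 2 + (Y / lam) ** 2) / 2)
    ux, uy = np.gradient(u, h, h)
    return np.sqrt(np.sum(u**2) * h**2), np.sum(np.hypot(ux, uy)) * h**2

l2a, g1a = norms(1.0)
l2b, g1b = norms(2.0)
assert abs(l2a / g1a - l2b / g1b) < 1e-3   # scale invariance of the ratio

# The inequality itself, with the sharp d = 2 constant 1/(2 sqrt(pi))
# (assumed here; extremizers are characteristic functions of balls):
assert l2a <= g1a / (2 * np.sqrt(np.pi))
```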

Lecture 25 (12/3)

Recall our discussion of Sobolev inequalities. Last time we proved the Gagliardo–Nirenberg–Sobolev inequality for $u \in C_0^\infty(\mathbb{R}^d) \cap W^{1,1}(\mathbb{R}^d)$; we got that:
$$\|u\|_{L^{d/(d-1)}(\mathbb{R}^d)} \leq C \|Du\|_{L^1(\mathbb{R}^d)}.$$
We will now upgrade this to the more general case $u \in C_0^\infty(\mathbb{R}^d) \cap W^{1,p}(\mathbb{R}^d)$.

Theorem 6.4.2. For $u \in C_0^\infty(\mathbb{R}^d) \cap W^{1,p}(\mathbb{R}^d)$ with $1 \leq p < d$, we have that:
$$\|u\|_{L^{p^*}} \leq C \|Du\|_{L^p(\mathbb{R}^d)} \tag{17}$$
where, after a scaling analysis, it is easy to see that $p^* = \frac{dp}{d-p}$.

Proof. Let $v = |u|^\gamma$ with $\gamma = \frac{d-1}{d} p^*$, then apply the $W^{1,1}$ version of GNS to $v$:
$$\Big( \int |v|^{\frac{d}{d-1}} \, dx \Big)^{\frac{d-1}{d}} \leq c \int \big| D|u|^\gamma \big| \, dx \leq c\gamma \int |u|^{\gamma-1} |Du| \, dx \leq c\gamma \, \big\| |u|^{\gamma-1} \big\|_{L^{p'}} \|Du\|_{L^p}.$$
Claim: $p'(\gamma - 1) = p^*$; this has to be true because we respected scaling (and one can check it directly). Rearranging gives (17).

We have only discussed nice functions, but we can extend this to general Sobolev functions:

Theorem 6.4.3 (Sobolev inequalities for $W^{1,p}(U)$). Let $1 \leq p < d$ and $p^* = \frac{dp}{d-p}$.
1. If $u \in W_0^{1,p}(U)$, then (17) holds.
2. If $U$ is a bounded $C^1$ domain, then for any $u \in W^{1,p}(U)$, $\|u\|_{L^{p^*}(U)} \leq C \|u\|_{W^{1,p}(U)}$.

When $p = d$, $p^* = \infty$, but the estimate
$$\|u\|_{L^\infty(\mathbb{R}^d)} \leq C \|Du\|_{L^d(\mathbb{R}^d)}$$
is only true when $d = 1$. There is a nice counterexample in Evans: let $U = B(0,1)$ and $u = \log \log (1 + 1/|x|)$.
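A quick numerical illustration of this counterexample (not in the notes): in $d = 2$ the function $u = \log\log(1 + 1/|x|)$ blows up at the origin, yet its Dirichlet energy over $B(0,1)$ converges — the piece of the energy below radius $\varepsilon$ decays like $2\pi / \log(1/\varepsilon)$.

```python
import numpy as np

# Evans' counterexample for p = d: in d = 2, u = log log(1 + 1/|x|) is
# unbounded near 0 yet has finite Dirichlet energy on B(0,1), since
# |Du|(r) = 1 / ((r^2 + r) log(1 + 1/r)).
def dirichlet_energy(r_min, n=200001):
    """Integral of |Du|^2 over {r_min < |x| < 1}, in polar coordinates."""
    r = np.exp(np.linspace(np.log(r_min), 0.0, n))     # log-spaced radii
    f = 2 * np.pi * r / ((r**2 + r) * np.log1p(1.0 / r)) ** 2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))  # trapezoid

e6, e12, e18 = (dirichlet_energy(t) for t in (1e-6, 1e-12, 1e-18))

# The energy below radius eps decays like 2*pi/log(1/eps), so the
# increments shrink and the full integral converges...
assert e12 - e6 < 0.25
assert e18 - e12 < e12 - e6

# ...while u itself blows up at the origin:
u = lambda r: np.log(np.log1p(1.0 / r))
assert u(1e-18) > u(1e-6) + 1
```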

It turns out that in analysis the following substitute for $L^\infty$ is often useful. The space we have in mind is called $BMO$. We say $u \in BMO$ if
$$\sup_{B(x,r)} \frac{1}{|B(x,r)|} \int_{B(x,r)} \big| u - (u)_{B(x,r)} \big| \, dy < \infty,$$
where $(u)_{B(x,r)} = \frac{1}{|B(x,r)|} \int_{B(x,r)} u \, dy$ denotes the average over the ball. This is known as bounded mean oscillation.

If $p > d$, we instead have an estimate of the form:
$$\|u\|_{L^\infty} + [u]_{C^{0,\alpha}} \leq c \, \|u\|_{W^{1,p}},$$
where the seminorm $[\,\cdot\,]_{C^{0,\alpha}}$ below quantifies the modulus of continuity.

Definition 6.4.1 (Hölder space). For $u \in C^0(K)$ with $K$ a closed subset of $\mathbb{R}^d$, define the seminorm
$$[u]_{C^{0,\alpha}} = \sup_{\substack{x, y \in K \\ x \neq y}} \frac{|u(x) - u(y)|}{|x - y|^\alpha},$$
where $0 < \alpha \leq 1$. Then define:
$$C^{0,\alpha}(K) = \big\{ u \in C^0(K) : [u]_{C^{0,\alpha}} < \infty \big\},$$
which we equip with the norm:
$$\|u\|_{C^{0,\alpha}(K)} = \underbrace{\|u\|_{C^0(K)}}_{\text{sup norm}} + [u]_{C^{0,\alpha}(K)}.$$

Remark 6.4.2. $C^{0,\alpha}(K)$ is a complete space.

Remark 6.4.3. If $u \in C^{0,\alpha}(K)$, then $u \in C^{0,\alpha'}(K)$ for $0 < \alpha' < \alpha \leq 1$.

Remark 6.4.4. If $[u]_{C^{0,\alpha}} < \infty$ for some $\alpha > 1$, then $u$ is constant.

Theorem 6.4.4 (Morrey's inequality). For $u \in C^\infty(\mathbb{R}^d) \cap W^{1,p}(\mathbb{R}^d)$ (for $p > d$):
$$\|u\|_{C^{0,\alpha}(\mathbb{R}^d)} \leq C \|u\|_{W^{1,p}(\mathbb{R}^d)}, \qquad \text{with } \alpha = 1 - \frac{d}{p}.$$
Proof. The key is the following proposition:

Proposition 6.4.1. For $u \in C^\infty(\mathbb{R}^d)$, with $x \in \mathbb{R}^d$ and $r > 0$, we have:
$$\frac{1}{|B(x,r)|} \int_{B(x,r)} |u(y) - u(x)| \, dy \leq C \int_{B(x,r)} \frac{|Du(y)|}{|x - y|^{d-1}} \, dy,$$
with $C$ only depending on $d$.

Proof. The idea is to compute:
$$\int_{\partial B(x,\rho)} |u(y) - u(x)| \, dS(y) = \rho^{d-1} \int_{\partial B(0,1)} |u(x + \rho\omega) - u(x)| \, d\omega,$$
then note that:
$$u(x + \rho\omega) - u(x) = \int_0^\rho \partial_s u(x + s\omega) \, ds = \int_0^\rho \omega \cdot \nabla u(x + s\omega) \, ds.$$
Therefore:
$$\rho^{d-1} \int_{\partial B(0,1)} |u(x + \rho\omega) - u(x)| \, d\omega \leq \rho^{d-1} \int_{\partial B(0,1)} \int_0^\rho \frac{|Du(x + s\omega)|}{s^{d-1}} \, s^{d-1} \, ds \, d\omega \leq r^{d-1} \int_{B(x,r)} \frac{|Du(y)|}{|x - y|^{d-1}} \, dy,$$
using polar coordinates $y = x + s\omega$ (so $dy = s^{d-1} \, ds \, d\omega$) and $\rho \leq r$. Integrating in $\rho$ from $0$ to $r$ and dividing by $|B(x,r)| = \alpha(d) r^d$ gives the claim.

Now let's prove the main estimate. First, for all $x \in \mathbb{R}^d$, we have:
$$|u(x)| \leq \Big| u(x) - \frac{1}{|B(x,1)|} \int_{B(x,1)} u(y) \, dy \Big| + \frac{1}{|B(x,1)|} \int_{B(x,1)} |u| \, dy \leq \frac{1}{|B(x,1)|} \int_{B(x,1)} |u(x) - u(y)| \, dy + \frac{1}{|B(x,1)|} \int_{B(x,1)} |u(y)| \, dy,$$
and both of these terms can be bounded using the proposition and Hölder's inequality. The hard part is bounding the seminorm. For $x \neq y$, set $r = |x - y|$ and $W = B(x,r) \cap B(y,r)$. Note that we have:
$$|u(x) - u(y)| \leq \Big| u(x) - \frac{1}{|W|} \int_W u(z) \, dz \Big| + \Big| \frac{1}{|W|} \int_W u(z) \, dz - u(y) \Big| \leq \frac{1}{|W|} \int_W |u(x) - u(z)| \, dz + \frac{1}{|W|} \int_W |u(z) - u(y)| \, dz.$$
This can be controlled by our previous results, scaling estimates on the regions, and Hölder's inequality.

Definition 6.4.2 (Version). We say $u^*$ is a version of $u$ if $u^* = u$ almost everywhere.

Theorem 6.4.5 (Sobolev inequality for $u \in W^{1,p}(U)$, $p > d$). Given $p > d$ and $\alpha = 1 - d/p$:
1. If $u \in W_0^{1,p}(U)$, then there exists a version $u^* \in C^{0,\alpha}(\bar U)$ and $\|u^*\|_{C^{0,\alpha}(\bar U)} \leq C \|u\|_{W^{1,p}(U)}$.
2. Assume also that $U$ is a bounded $C^1$ domain. For any $u \in W^{1,p}(U)$ there exists a version $u^* \in C^{0,\alpha}(\bar U)$ and the above inequality holds.

Theorem 6.4.6 (General Sobolev inequality for $W^{k,p}(U)$). Let $k$ be any non-negative integer and $1 \leq p \leq \infty$, and assume either $U$ is a domain and $u \in W_0^{k,p}(U)$, or $U$ is a bounded $C^k$ domain and $u \in W^{k,p}(U)$.
1. Let $l$ be a nonnegative integer, $0 \leq l \leq k$, and $1 \leq q < \infty$. If $\frac{d}{q} - l \geq \frac{d}{p} - k$, then $u \in W^{l,q}(U)$ and $\|u\|_{W^{l,q}(U)} \leq C \|u\|_{W^{k,p}(U)}$.
2. Let $l$ be a nonnegative integer, $0 \leq l \leq k$, and $0 < \alpha < 1$. If $-l - \alpha \geq \frac{d}{p} - k$ (i.e. $l + \alpha \leq k - \frac{d}{p}$), then there exists a version $u^* \in C^{l,\alpha}(\bar U)$ and $\|u^*\|_{C^{l,\alpha}(\bar U)} \leq C \|u\|_{W^{k,p}(U)}$.

Lecture 26 (12/5)

Today we will wrap up our discussion of Sobolev spaces. We will study compactness properties, with the main theorem being the Rellich–Kondrachov theorem. We will also look at Poincaré's inequality. And lastly we will look at the dual spaces of Sobolev spaces, which will naturally lead to the notion of negative-regularity Sobolev spaces.

6.5 Compactness

We want to look at sequences $\{u_i\} \subset W^{1,p}(U)$ with $\|u_i\|_{W^{1,p}(U)} \leq C$; such a sequence turns out to be precompact in an appropriate larger space.

Definition 6.5.1 (Compact embedding of Banach spaces). Let's say we have two normed spaces $(X, \|\cdot\|_X)$ and $(Y, \|\cdot\|_Y)$ with $Y \subset X$. We say that $(Y, \|\cdot\|_Y)$ embeds compactly into $(X, \|\cdot\|_X)$, and write $Y \Subset X$, if every bounded sequence in $Y$ has a subsequence convergent in $X$.

Theorem 6.5.1 (Rellich–Kondrachov). Let $U$ be a $C^1$ bounded domain, $1 \leq p < d$, and $1 \leq q < p^*$. Then $W^{1,p}(U) \Subset L^q(U)$. Recall we have:
$$\|u\|_{L^{p^*}(U)} \leq C \|u\|_{W^{1,p}(U)},$$
so $W^{1,p}(U) \subset L^{p^*}(U)$ continuously; the point is that the embedding becomes compact once $q < p^*$.

Proof. The key ingredients to this are:

Theorem 6.5.2 (Arzelà–Ascoli). Let $K$ be a compact metric space and $\{u_n\} \subset C(K)$. Given:
1. (pointwise boundedness) for all $x \in K$, $\{u_n(x)\}$ is bounded;
2. (equicontinuity) for all $\varepsilon > 0$ there exists $\delta > 0$ such that $|u_n(x) - u_n(y)| < \varepsilon$ for all $n$ whenever $|x - y| < \delta$;
then there exists a subsequence $u_{n_i}$ that is convergent in the uniform topology.

Another key ingredient is a property of mollification called accelerated convergence.

Proposition 6.5.1. Let $\varphi \in C_0^\infty(\mathbb{R}^d)$ with $\int \varphi = 1$ and define $\varphi_\varepsilon(x) = \varepsilon^{-d} \varphi(\varepsilon^{-1} x)$. Then $\varphi_\varepsilon * u \to u$ as $\varepsilon \to 0$ in $L^p$. It turns out that if you assume that $u \in W^{1,p}(U)$, then the rate of convergence of $\varphi_\varepsilon * u \to u$ can be quantified.

Here is a more general proposition:

Proposition 6.5.2. Let $k$ be a positive integer and $1 \leq p < \infty$. Let $\varphi \in C_0^\infty(\mathbb{R}^d)$ be such that $\int \varphi = 1$ and $\int x^\alpha \varphi \, dx = 0$ for all $1 \leq |\alpha| \leq k - 1$. Then if $u \in W^{k,p}(\mathbb{R}^d)$:
$$\|\varphi_\varepsilon * u - u\|_{L^p} \leq C \varepsilon^k \sum_{|\alpha| = k} \|D^\alpha u\|_{L^p}.$$

Proof. When $k = 1$:
$$u(x) - \varphi_\varepsilon * u(x) = \int \varphi(z) \big( u(x) - u(x - \varepsilon z) \big) \, dz = \int \varphi(z) \int_0^\varepsilon z \cdot Du(x - tz) \, dt \, dz.$$
Then by the Minkowski inequality, we have:
$$\|u - \varphi_\varepsilon * u\|_{L^p} \leq \int \int_0^\varepsilon \big\| \varphi(z) \, z \cdot Du(\cdot - tz) \big\|_{L^p_x} \, dt \, dz \leq \int \int_0^\varepsilon |\varphi(z)| \, |z| \, \|Du\|_{L^p} \, dt \, dz \leq C \varepsilon \|Du\|_{L^p}.$$
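A numerical illustration of accelerated convergence (a sketch, not from the notes): a symmetric bump mollifier has vanishing first moment, so for a smooth $u$ the mollification error should scale like $\varepsilon^2$ (the $k = 2$ case of the proposition); halving $\varepsilon$ should divide the error by about $4$.

```python
import numpy as np

def bump(z):
    """Standard bump function, supported in |z| < 1."""
    out = np.zeros_like(z)
    m = np.abs(z) < 1
    out[m] = np.exp(-1.0 / (1.0 - z[m] ** 2))
    return out

def mollification_error(eps, n=20001):
    """L^inf error of phi_eps * u - u for u(x) = sin(x).

    Since the bump is even, phi_eps * sin = A(eps) * sin with
    A(eps) = int phi_eps(z) cos(z) dz, so the error is |1 - A(eps)|.
    """
    z = np.linspace(-eps, eps, n)
    dz = z[1] - z[0]
    phi = bump(z / eps)
    phi /= phi.sum() * dz                 # normalize: int phi_eps = 1
    return abs(1.0 - (phi * np.cos(z)).sum() * dz)

ratio = mollification_error(0.2) / mollification_error(0.1)
# vanishing first moment => rate eps^k with k = 2, so the ratio is ~ 2^2:
assert 3.7 < ratio < 4.3
```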

Now let's prove the RK theorem.

Step 1: It suffices to prove the case $q = p$. We want to show that if $\|u_n\|_{W^{1,p}(U)} \leq C$, then there exists a subsequence $u_{n_i}$ so that $\|u_{n_i} - u_{n_j}\|_{L^p} \to 0$ as $i, j \to \infty$.

Indeed, if $q < p$, then the $L^q$ norm is controlled by the $L^p$ norm since $U$ is bounded. If $p < q < p^*$, use the interpolation inequality (a consequence of Hölder's inequality):
$$\|v\|_{L^q} \leq \|v\|_{L^p}^\theta \, \|v\|_{L^r}^{1-\theta}, \qquad \frac{1}{q} = \frac{\theta}{p} + \frac{1-\theta}{r},$$
with $r = p^*$, together with the boundedness of $\|u_n\|_{L^{p^*}}$.

Step 2: We would like to produce a subsequence $u_{n_i}$ such that for all $\varepsilon > 0$ there exists $I$ with $\|u_{n_i} - u_{n_{i'}}\|_{L^p(U)} < \varepsilon$ if $i, i' \geq I$.

Fix a bounded open subset of $\mathbb{R}^d$ (call it $V$) with $\bar U \subset V$. Let's extend each $u_n$ to an element $E u_n \in W^{1,p}(\mathbb{R}^d)$ with $\operatorname{supp} E u_n \subset V$ and
$$\|E u_n\|_{W^{1,p}(\mathbb{R}^d)} \leq C \|u_n\|_{W^{1,p}(U)} \leq C.$$
From now on write $u_n$ for $E u_n$. Choose $\delta > 0$ and let $v_n = u_n * \varphi_\delta$, so $\operatorname{supp} v_n \subset \tilde V$ for another bounded set $\tilde V$ with $\bar V \subset \tilde V$. Let $\delta$ be small enough (using the proposition, uniformly in $n$) that:
$$\|u_n - v_n\|_{L^p} \leq \varepsilon / 3.$$
Then note that each $v_n$ is smooth, and we have:
$$v_n(x) = \int \varphi_\delta(x - y) \, u_n(y) \, dy,$$
so by Hölder's inequality we have that:
$$\|v_n\|_{L^\infty} \leq C_\delta \qquad \text{and} \qquad \|Dv_n\|_{L^\infty} \leq C_\delta.$$
This means that $\{v_n\}$ has a (uniformly) convergent subsequence by Arzelà–Ascoli. So there exists a subsequence with $\|v_{n_i} - v_{n_{i'}}\|_{L^p} < \varepsilon / 3$ for $i, i' \geq I$.

So now we have:
$$\|u_{n_i} - u_{n_{i'}}\|_{L^p} \leq \|u_{n_i} - v_{n_i}\|_{L^p} + \|v_{n_i} - v_{n_{i'}}\|_{L^p} + \|u_{n_{i'}} - v_{n_{i'}}\|_{L^p} \leq \varepsilon.$$
Finally, we perform a diagonalization argument over a sequence $\varepsilon = 1/m$.

Let's now consider one nice application of this theorem. It is a key way to estimate $u$ by $Du$.

Proposition 6.5.3 (Poincaré's inequality). Let $U$ be a connected $C^1$ bounded domain and $1 \leq p < \infty$. Then:
$$\Big\| u - (u)_U \Big\|_{L^p(U)} \leq C \|Du\|_{L^p(U)}, \qquad (u)_U = \frac{1}{|U|} \int_U u,$$
with $C$ only depending on $U$ and $p$.

Proof. Assume that this fails. Then for all $n \geq 1$ there exists $u_n$ with $u_n - (u_n)_U \neq 0$ so that:
$$\| u_n - (u_n)_U \|_{L^p(U)} \geq n \, \|Du_n\|_{L^p(U)}.$$
Now let:
$$v_n = \frac{u_n - (u_n)_U}{\| u_n - (u_n)_U \|_{L^p}}.$$
Then $\|v_n\|_{L^p} = 1$ and $(v_n)_U = 0$, and $\|Dv_n\|_{L^p(U)} \leq \frac{1}{n} \|v_n\|_{L^p(U)} = \frac{1}{n}$, so that $\|Dv_n\|_{L^p(U)} \to 0$ as $n \to \infty$.

We know that $v_n$ is bounded in $W^{1,p}$. Therefore, by Rellich–Kondrachov, there exists a subsequence with $v_{n_i} \to v$ in $L^p(U)$. In particular:
$$\|v\|_{L^p} = \lim_{i \to \infty} \|v_{n_i}\|_{L^p} = 1.$$
On the other hand:
$$(v)_U = \lim_{i \to \infty} (v_{n_i})_U = 0,$$
and finally, $\|Dv_n\|_{L^p} \to 0$ implies $Dv = 0$ in the sense of distributions. Therefore $v$ is a constant (here the connectedness of $U$ is used), so $(v)_U = 0$ forces $v = 0$, and we get a contradiction with $\|v\|_{L^p} = 1$.
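A numerical check of Poincaré's inequality (not from the notes): on the interval $U = (0, \pi)$ with $p = 2$ the inequality holds with $C = 1$. That the sharp constant for an interval of length $L$ is $L/\pi$ is a classical one-dimensional fact assumed here, not something proved in the notes.

```python
import numpy as np

# Poincare's inequality on U = (0, pi), p = 2: ||u - mean(u)||_2 <= ||u'||_2.
n = 100001
x = np.linspace(0.0, np.pi, n)
dx = x[1] - x[0]
w = np.full(n, dx); w[0] = w[-1] = dx / 2          # trapezoid weights

def l2(f):
    return np.sqrt(np.sum(w * f**2))

u = np.cos(x) + 0.3 * np.cos(3 * x)                # smooth test function
du = -np.sin(x) - 0.9 * np.sin(3 * x)              # its exact derivative
mean = np.sum(w * u) / np.pi

lhs = l2(u - mean)
rhs = l2(du)
assert lhs <= rhs + 1e-8

# Equality is approached by u = cos(x), the lowest Neumann eigenfunction:
assert abs(l2(np.cos(x)) - l2(np.sin(x))) < 1e-6
```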

6.6 Duality

Definition 6.6.1 (Negative-regularity Sobolev spaces). Let $k$ be a non-negative integer and $1 \leq p < \infty$, and let:
$$W^{-k,p}(U) = \Big\{ u \in \mathcal{D}'(U) : \exists \, g_\alpha \in L^p(U) \text{ for } |\alpha| \leq k \text{ such that } u = \sum_{|\alpha| \leq k} D^\alpha g_\alpha \Big\}$$
with norm:
$$\|u\|_{W^{-k,p}(U)} = \inf_{\substack{g : \; u = \sum_{|\alpha| \leq k} D^\alpha g_\alpha}} \Big( \sum_{|\alpha| \leq k} \|g_\alpha\|_{L^p}^p \Big)^{1/p}.$$

Theorem 6.6.1. For all domains $U$, $k$ a positive integer, and $1 < p < \infty$:
$$\big( W_0^{k,p}(U) \big)' = W^{-k,p'}(U).$$
Appendices
A Midterm Review
A.1 First order scalar characteristics
We are trying to solve the first order scalar PDE:
$$\begin{cases} F(x, u, Du) = 0 & \text{in } U \\ u = g & \text{on } \Gamma \subset \partial U \end{cases} \tag{18}$$
with $x \in U \subset \mathbb{R}^d$, $u : U \to \mathbb{R}$, and $F : U \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$. If we assume there exists a solution, then along a suitable path $x : I \to \mathbb{R}^d$, the quantities $z(s) = u(x(s))$ and $p(s) = Du(x(s))$ satisfy:
$$\begin{cases} \dot x^j = \partial_{p_j} F \\ \dot p_i = -\partial_z F \, p_i - \partial_{x_i} F \\ \dot z = \sum_i p_i \, \partial_{p_i} F \end{cases}$$
where in $F$ we replaced $u$ with $z$ and $\partial_i u$ with $p_i$.

Corollary A.1.1. If $F = \sum_j b_j \partial_j u - c$ is linear or quasilinear, then:
$$\begin{cases} \dot x^j = b_j \\ \dot z = c \end{cases}$$
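The characteristic ODEs are easy to use numerically. Here is a sketch (not from the notes) for Burgers' equation $u_t + u u_x = 0$, where $F = p_t + z p_x$ gives $\dot x = (1, z)$, $\dot z = 0$, so characteristics are straight lines $x = y + t\,g(y)$; we recover $u(t,x)$ by inverting this map by bisection (valid before characteristics cross) and then verify the PDE by finite differences.

```python
import numpy as np

# Characteristics for Burgers' equation u_t + u u_x = 0, u(0, x) = g(x):
# straight lines x = y + t g(y), along which u is constant and equals g(y).
g = lambda y: np.exp(-(y**2))

def u(t, x, lo=-10.0, hi=10.0):
    """Evaluate u(t, x) by inverting y -> y + t g(y) with bisection
    (valid before characteristics cross, i.e. for t < 1/max|g'|)."""
    f = lambda y: y + t * g(y) - x
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return g(0.5 * (lo + hi))

# Check the PDE u_t + u u_x = 0 at a point before the shock time:
t0, x0, h = 0.3, 0.5, 1e-5
ut = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)
ux = (u(t0, x0 + h) - u(t0, x0 - h)) / (2 * h)
assert abs(ut + u(t0, x0) * ux) < 1e-6
```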

A.1.1 Existence

Given our PDE (18), we would like to know when this method works, i.e. whether a solution exists. If our boundary is flat ($\Gamma \subset \{x_d = 0\}$), then any characteristic path starting on $\Gamma$ must satisfy the compatibility conditions:
$$\begin{cases} x(0) = x_0 \in \Gamma \\ z(0) = g(x_0) \\ p_i(0) = \partial_i g(x_0), \quad i = 1, \dots, d-1 \\ F(x(0), z(0), p(0)) = 0 \end{cases}$$
To apply our method of characteristics, we also require the value of $p_d(0)$, which we can solve for if $\partial_{p_d} F(x_0, z_0, p_0) \neq 0$ (this is known as the noncharacteristic condition).

Now we would like to show that the PDE has a solution near $x_0$. To do this, let's find a whole family of characteristics, where the first input indicates where the path starts and the second indicates the time along the path:
$$\begin{pmatrix} x(y, s) \\ z(y, s) \\ p(y, s) \end{pmatrix}$$
with $y = (y^1, y^2, \dots, y^{d-1}, 0)$. Then in order to find the solution $u(x)$ we must figure out the characteristic that hits $x$: that is, we must determine $y$ and $s$ such that $x(y, s) = x$.$^a$ It turns out that under the noncharacteristic assumption, this map is locally invertible, and we denote the inverse $x \mapsto (y(x), s(x))$, which gives rise to the main theorem:

Theorem A.1.1. Given our PDE (18) and a point $x_0 \in \Gamma$ such that $\partial_{p_d} F \neq 0$ at $(x_0, z_0, p_0)$, there exists a neighborhood of $x_0$ such that $u(x) = z(y(x), s(x))$ solves the PDE for all $x$ in this neighborhood.

Remark A.1.1. The noncharacteristic assumption at $(x_0, z_0)$ is equivalent to being able to determine all derivatives of $u$ (in every direction) at $x_0$.

A.1.2 General Boundary

For a smooth boundary $\Gamma$ we can do the same thing, but the noncharacteristic condition becomes:
$$\sum_j \nu^j_{\partial U} \, \partial_{p_j} F(x_0, z_0, p_0) \neq 0.$$
A.1.3 Uniqueness

Given two solutions of such a PDE that agree on $\Gamma$ and whose normal derivatives on $\Gamma$ also agree, the two solutions are the same (near $\Gamma$).
A.2 Noncharacteristic Condition for kth order systems

Our new quasilinear $k$th order PDE (a system of $N$ equations) is of the form:
$$F^A = \sum_{|\alpha| = k} \sum_{B=1}^N (b_\alpha)^A_{\ B}(x, u, \dots, D^{k-1} u) \, D^\alpha u^B + c^A(x, u, \dots, D^{k-1} u) = 0 \tag{19}$$
with $u : U \to \mathbb{R}^N$, $U \subset \mathbb{R}^d$, and $b_\alpha : U \times \mathbb{R}^N \times \cdots \times \mathbb{R}^{d^{k-1} N} \to \mathbb{R}^{N^2}$. So $b_\alpha$ evaluated at each point is an $N \times N$ matrix, whose $(A, B)$th component is $(b_\alpha)^A_{\ B}$. Therefore a very compact (but uninformative) way of writing $F$ would be:
$$F = \sum_{|\alpha| = k} b_\alpha D^\alpha u + c.$$
But just note that $F = 0$ is a set of $N$ equations. Now the Cauchy data (initial conditions) for this PDE would be to prescribe the normal derivatives up to order $k - 1$. If our boundary is flat, $\Gamma \subset \{x_d = 0\}$, then our Cauchy data is:
$$(u, \partial_d u, \dots, \partial_d^{k-1} u) = (g_0, g_1, \dots, g_{k-1}).$$
Then the Cauchy data is noncharacteristic at a point $x_0 \in \Gamma$ when the matrix
$$b_{(0,0,\dots,0,k)}(x_0, u(x_0), \dots, D^{k-1} u(x_0))$$
is invertible. If this is the case then we can determine all derivatives of $u$ at $x_0$.


a
lol, this notation could use work, I literally just figured this out while writing it


A.2.1 General Domains

For a general domain, the noncharacteristic condition requires that:
$$\sum_{|\alpha| = k} b_\alpha \nu^\alpha$$
is invertible, where $\nu$ is the outer unit normal to the boundary.

A.2.2 Power Series

Theorem A.2.1. Given the PDE (19) such that all coefficient functions, data, and the boundary are real analytic, and $x_0 \in \Gamma$ is noncharacteristic, there exists a unique, local, real-analytic solution $u$ with:
$$u(x) = \sum_\alpha \frac{1}{\alpha!} D^\alpha u(x_0) (x - x_0)^\alpha \qquad \text{near } x_0.$$
A.3 Distributions

Definition A.3.1 (Test function). A test function on an open set $U \subset \mathbb{R}^d$ is a smooth compactly supported function on $U$. The space of test functions on $U$ is denoted $C_0^\infty(U)$.

Definition A.3.2 (Convergence of test functions). We say $f_n \to f$ in the space of test functions (with $f_n, f \in C_0^\infty(U)$) if there exists a compact set containing the support of all$^a$ the functions, on which all derivatives converge uniformly.

Definition A.3.3 (Distribution). A distribution on a set $U \subset \mathbb{R}^d$ is a linear functional$^b$ from $C_0^\infty(U)$ to $\mathbb{R}$ which is continuous with respect to convergence of test functions.

Theorem A.3.1 (Boundedness). A linear functional $u : C_0^\infty(U) \to \mathbb{R}$ is a distribution if and only if for every compact set $K \subset U$ there exist $N$ and $C = C(K, N)$ such that for all $\varphi \in C_0^\infty(U)$ with $\operatorname{supp} \varphi \subset K$:
$$|\langle u, \varphi \rangle| \leq C \sum_{|\alpha| \leq N} \sup_{x \in K} |D^\alpha \varphi(x)|.$$

Definition A.3.4 (Order of a distribution). The smallest $N$ that works (for every compact $K$) in the above theorem is called the order of the distribution.

Theorem A.3.2. If $u$ is a distribution that agrees with $1/x$ on $(0, \infty)$, then the order of $u$ is at least $1$.

Definition A.3.5 (Convolution). If $f \in L^1_{\mathrm{loc}}$ and $\varphi \in C_0^\infty(\mathbb{R}^d)$, then:
$$f * \varphi(x) = \int_{\mathbb{R}^d} f(y) \varphi(x - y) \, dy.$$
We can also define the starred convolution as:
$$f *' \varphi(x) = \int_{\mathbb{R}^d} \varphi(y) f(y - x) \, dy.$$

$^a$ all but finitely many, of course
$^b$ a linear map from a vector space to its base field

Theorem A.3.3. If $f \in L^1_{\mathrm{loc}}$ and $\varphi \in C_0^\infty(\mathbb{R}^d)$, then $f * \varphi \in C^\infty$.

Theorem A.3.4. If $f \in C(\mathbb{R}^d)$ and $\varphi \in C_0^\infty(\mathbb{R}^d)$, then $\operatorname{supp}(f * \varphi) \subset \operatorname{supp} f + \operatorname{supp} \varphi$.

Definition A.3.6 (Mollifier). A mollifier is a function $\varphi \in C_0^\infty(\mathbb{R}^d)$ with $\int \varphi = 1$.

Theorem A.3.5 (Approximation). If $\varphi$ is a mollifier, let $\varphi_\delta = \delta^{-d} \varphi(x \delta^{-1})$. Then if $f \in C^k(\mathbb{R}^d)$, $f * \varphi_\delta$ converges uniformly to $f$ on compact sets (and up to and including $k$ derivatives converge uniformly).

Theorem A.3.6. If $f \in L^1_{\mathrm{loc}}$, then $f$ defines a distribution.

A.3.1 Operations

We may do the following operations, with $u$ a distribution, $f \in C^\infty(U)$, and $\varphi, v \in C_0^\infty(\mathbb{R}^d)$:
1. $\langle f u, \varphi \rangle = \langle u, f \varphi \rangle$
2. $\langle D^\alpha u, \varphi \rangle = (-1)^{|\alpha|} \langle u, D^\alpha \varphi \rangle$
3. $\langle u * v, \varphi \rangle = \langle u, v *' \varphi \rangle$

Theorem A.3.7. $u * v \in C^\infty$.

Theorem A.3.8. $\delta_0 * u = u$.

Theorem A.3.9. If $U$ is an open, connected, bounded subset of $\mathbb{R}^d$ with a $C^1$ boundary and $1_U$ is its characteristic function (a distribution), then:
$$\partial_j 1_U = -\nu^j \, dS_{\partial U},$$
with $\nu$ the unit outer normal of $\partial U$ and $dS_{\partial U}$ the surface measure.

A.3.2 Convergence of Distributions

Definition A.3.7. Distributions $u_n$ converge in distribution to a distribution $u$ if for all $\varphi \in C_0^\infty(\mathbb{R}^d)$, $\langle u_n, \varphi \rangle \to \langle u, \varphi \rangle$.

Theorem A.3.10. If $u_n \to u$ as distributions, then $D^\alpha u_n \to D^\alpha u$ as distributions, for all $\alpha$.

Theorem A.3.11 (Sequential compactness). If $u_n$ are distributions and, for all $\varphi \in C_0^\infty(U)$, $\langle u_n, \varphi \rangle$ converges, then $u$ defined by $\langle u, \varphi \rangle = \lim_n \langle u_n, \varphi \rangle$ is a distribution.

Theorem A.3.12. If $\varphi$ is a mollifier, then $\varphi_\delta$ converges in distribution to $\delta_0$.

Theorem A.3.13. If $\varphi$ is a mollifier and $u$ a distribution, then the $\varphi_\delta * u$ are smooth functions that converge in distribution to $u$.

A.3.3 Operations between distributions

Definition A.3.8 (Singular support). The singular support of a distribution is the set of points at which it fails to be a smooth function.

Theorem A.3.14. If two distributions have disjoint singular supports, then their product is defined.

Theorem A.3.15. If $u$ or $v$ has compact support, then $u * v = v * u$ is well-defined (via $\langle u * v, \varphi \rangle = \langle u, v *' \varphi \rangle$) and $\operatorname{supp}(u * v) \subset \operatorname{supp} u + \operatorname{supp} v$.

A.4 Fundamental Solutions

For this section consider the linear scalar partial differential operator:
$$P u = \sum_{|\alpha| \leq k} a_\alpha(x) D^\alpha u,$$
with the $a_\alpha$ real-valued and smooth.

Definition A.4.1 (Fundamental solution). Given $y \in \mathbb{R}^d$, a fundamental solution $E_y$ is a distribution such that $P E_y(x) = \delta_y(x) = \delta_0(x - y)$.

Then, formally, a solution to $P u = v$ is $u(x) = \int v(y) E_y(x) \, dy$.

Definition A.4.2. The formal adjoint of the above linear operator is:
$$P' v = \sum_{|\alpha| \leq k} (-1)^{|\alpha|} D^\alpha (a_\alpha v),$$
which satisfies $\langle P u, v \rangle = \langle u, P' v \rangle$.

Remark A.4.1. Finding fundamental solutions $(E')_y$ to the adjoint $P'$ gives us uniqueness; this is because if $u$ solves $P u = v$, then:
$$u(x) = \langle u, \delta_x \rangle = \langle u, P'((E')_x) \rangle = \langle P u, (E')_x \rangle = \langle v, (E')_x \rangle,$$
which expresses $u$ in terms of $v$.

Theorem A.4.1. If $P$ has constant coefficients (i.e. all $a_\alpha$ are constant) and $P E_0 = \delta_0$, then:
1. $E_y(x) = E_0(x - y)$
2. $P' v(x) = \big( P \, v(-\cdot) \big)(-x)$
3. $(E')_x(y) = E_0(x - y)$

A.5 Laplace equation

The fundamental solution to the Laplace equation in $\mathbb{R}^d$ is:
$$E_0(r) = \begin{cases} -\dfrac{\log r}{2\pi}, & d = 2 \\[1ex] \dfrac{1}{d(d-2)\alpha(d) \, r^{d-2}}, & d \geq 3 \end{cases}$$
where $\alpha(d)$ is the volume of the unit ball in $\mathbb{R}^d$.

Theorem A.5.1. If $u \in \mathcal{D}'(\mathbb{R}^d)$ and $\Delta u = 0$, then $u$ is smooth and harmonic.

Theorem A.5.2. If $u$ is harmonic, then there exists a constant $C = C(|\alpha|)$ such that:
$$|D^\alpha u(x)| \leq \frac{C}{r^{d + |\alpha|}} \int_{B(x,r)} |u| \, dy.$$

Theorem A.5.3. Bounded harmonic functions are constant.

Theorem A.5.4. For $f \in \mathcal{D}'(\mathbb{R}^d)$ with compact support and $d \geq 3$, solutions to $\Delta u = f$ are unique up to addition of a constant.

Theorem A.5.5 (Mean value property). If $u$ is harmonic, then:
$$u(x) = \frac{1}{|\partial B(x,r)|} \int_{\partial B(x,r)} u \, dS(y) = \frac{1}{|B(x,r)|} \int_{B(x,r)} u(y) \, dy.$$
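A quick numerical check of the mean value property (not in the notes), using the harmonic polynomial $u(x,y) = x^2 - y^2$ in $d = 2$ and, for contrast, the non-harmonic $x^2 + y^2$, whose circle average is off by exactly $r^2$.

```python
import numpy as np

# Mean value property: the average of a harmonic function over a circle
# equals its value at the center.
def circle_average(f, x0, y0, r, n=10000):
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return np.mean(f(x0 + r * np.cos(theta), y0 + r * np.sin(theta)))

harmonic = lambda x, y: x**2 - y**2          # Laplacian = 2 - 2 = 0
not_harmonic = lambda x, y: x**2 + y**2      # Laplacian = 4

x0, y0, r = 0.7, -0.3, 1.5
avg = circle_average(harmonic, x0, y0, r)
assert abs(avg - harmonic(x0, y0)) < 1e-10   # mean value property holds

avg2 = circle_average(not_harmonic, x0, y0, r)
assert abs(avg2 - not_harmonic(x0, y0) - r**2) < 1e-10   # off by exactly r^2
```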

B Final Review

B.1 Green's Functions

Given $E_0$ the fundamental solution to the Laplace equation, $U$ an open set, and $u$ a smooth function on $\bar U$, we have:
$$u(x) = \int_U E_0(x - y) (-\Delta u(y)) \, dy + \int_{\partial U} E_0(x - y) \, \nu \cdot Du(y) \, dS(y) - \int_{\partial U} \nu \cdot D_y E_0(x - y) \, u(y) \, dS(y). \tag{20}$$

Now if $u$ is harmonic and we correct $E_0$ so that it vanishes on the boundary (a Green's function), then we can get results like:

Theorem B.1.1 (Mean value theorem). If $u$ is harmonic, then:
$$u(x) = \frac{1}{d \alpha(d) r^{d-1}} \int_{\partial B(x,r)} u(y) \, dS(y).$$

Concretely, we have:

Definition B.1.1 (Green's function). Given an open set $U$ with $y \in U$, then $G(\cdot, y) \in C^1(\bar U \setminus \{y\})$ is the Green's function at $y$ if:
$$\begin{cases} -\Delta_x G(x, y) = \delta_0(x - y), & x \in U \\ G(x, y) = 0, & x \in \partial U \end{cases}$$

Theorem B.1.2. On a bounded open set with $C^1$ boundary, Green's functions are unique and symmetric ($G(x, y) = G(y, x)$).

We can construct Green's functions for certain domains by the method of image charges. This takes advantage of the symmetries of our domain, which can only be done in specific circumstances.

By looking at (20) we have the following representation formula for $u \in C^2(U) \cap C(\bar U)$:
$$u(x) = \int_U G(x, y) (-\Delta u(y)) \, dy - \int_{\partial U} \nu(y) \cdot D_y G(x, y) \, u(y) \, dS(y).$$

This leads to:

Theorem B.1.3 (Poisson integral formula).
$$u(x) = -\int_{\partial U} \nu(y) \cdot D_y G(x, y) \, g(y) \, dS(y)$$
solves the boundary value problem:
$$\begin{cases} \Delta u = 0, & x \in U \\ u = g, & x \in \partial U \end{cases}$$

We even have the stronger formula (under certain conditions):
$$u(x) = -\int_U G(x, y) g(y) \, dy - \int_{\partial U} \nu(y) \cdot D_y G(x, y) \, h(y) \, dS(y)$$
solves:
$$\begin{cases} \Delta u = g, & x \in U \\ u = h, & x \in \partial U \end{cases}$$
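Here is a sketch (not from the notes) of the method of image charges for the upper half-space in $\mathbb{R}^3$, where $E_0(r) = 1/(4\pi r)$: subtracting the fundamental solution centered at the reflected point $\bar y$ produces a function that vanishes on $\{x_3 = 0\}$, is harmonic away from $y$, and is symmetric.

```python
import numpy as np

# Green's function for the upper half-space {x_3 > 0} in R^3 by the method
# of image charges: G(x, y) = E0(|x - y|) - E0(|x - ybar|), where ybar is
# the reflection of y across {x_3 = 0} and E0(r) = 1 / (4 pi r).
E0 = lambda r: 1.0 / (4.0 * np.pi * r)

def G(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    ybar = y * np.array([1.0, 1.0, -1.0])
    return E0(np.linalg.norm(x - y)) - E0(np.linalg.norm(x - ybar))

y = np.array([0.3, -0.2, 1.0])

# G vanishes on the boundary {x_3 = 0}:
for x in ([0.0, 0.0, 0.0], [2.0, 1.0, 0.0], [-3.0, 0.5, 0.0]):
    assert abs(G(x, y)) < 1e-14

# G(., y) is harmonic away from y: check with a finite-difference Laplacian
# at a point x0 != y in the half-space.
x0 = np.array([1.0, 1.0, 2.0])
h = 1e-3
lap = sum((G(x0 + h * e, y) + G(x0 - h * e, y) - 2 * G(x0, y)) / h**2
          for e in np.eye(3))
assert abs(lap) < 1e-4

# Symmetry G(x, y) = G(y, x):
assert abs(G(x0, y) - G(y, x0)) < 1e-14
```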

B.2 Wave Equation

Here we define the d'Alembertian operator $\Box = \partial_t^2 - \Delta$ and ask to solve:
$$\begin{cases} \Box \varphi = 0, & (t, x) \in \mathbb{R}^{1+d}_+ \\ \varphi = g, & \text{on } \{t = 0\} \\ \partial_t \varphi = h, & \text{on } \{t = 0\} \end{cases}$$

In one dimension, we use a change of variables to determine that the forward fundamental solution (the fundamental solution supported in nonnegative time) is:
$$E_+(t, x) = \frac{1}{2} H(t - x) H(t + x),$$
with $H$ the Heaviside function.

Definition B.2.1 (Forward fundamental solution). To solve the wave equation we look for forward fundamental solutions $E_+$ that satisfy:
1. $\Box E_+ = \delta_0$
2. $\operatorname{supp} E_+ \subset \{t \geq 0\}$
3. $\operatorname{supp} E_+ \cap \{t \in I\}$ is compact for all compact intervals $I$.

Theorem B.2.1. Forward fundamental solutions are unique.

Theorem B.2.2. For $\varphi \in C^\infty(\mathbb{R}^{1+d})$ and $(t, x) \in \mathbb{R}^{1+d}_+$:
$$\varphi(t, x) = E_+ * (\partial_t \varphi \, \delta_{t=0}) + \partial_t E_+ * (\varphi \, \delta_{t=0}) + E_+ * (\Box \varphi \, 1_{t \geq 0}).$$

For the one-dimensional case, this leads to:

Theorem B.2.3 (d'Alembert's formula).
$$\varphi(t, x) = \frac{1}{2} \int_0^t \int_{x - (t-s)}^{x + (t-s)} \Box \varphi(s, y) \, dy \, ds + \frac{1}{2} \big( \varphi(0, x - t) + \varphi(0, x + t) \big) + \frac{1}{2} \int_{x-t}^{x+t} \partial_t \varphi(0, y) \, dy.$$

To find the forward fundamental solution in arbitrary dimensions, we take advantage of the fact that $\Box$ is invariant under reflections, rotations, and Lorentz boosts. Also, by looking at scaling properties, we know that $E_+$ should be homogeneous of degree $-(d-1)$, i.e.:
$$\langle E_+, \lambda^{d+1} \varphi(\lambda \cdot) \rangle = \lambda^{d-1} \langle E_+, \varphi \rangle.$$
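A numerical check of d'Alembert's formula (not from the notes), with data $g(x) = e^{-x^2}$ and $h(x) = \cos x$ chosen so that the $h$-integral has a closed form; the resulting $\varphi$ should satisfy the initial conditions and $\Box \varphi = 0$.

```python
import numpy as np

# d'Alembert's formula for Box(phi) = 0 with phi(0,x) = g(x) = exp(-x^2)
# and dt phi(0,x) = h(x) = cos(x): the integral of h over [x-t, x+t]
# is sin(x+t) - sin(x-t).
g = lambda x: np.exp(-x**2)

def phi(t, x):
    return 0.5 * (g(x - t) + g(x + t)) + 0.5 * (np.sin(x + t) - np.sin(x - t))

# initial data:
assert abs(phi(0.0, 0.4) - g(0.4)) < 1e-14
h = 1e-4
assert abs((phi(h, 0.4) - phi(-h, 0.4)) / (2 * h) - np.cos(0.4)) < 1e-6

# Box(phi) = phi_tt - phi_xx = 0, checked by central differences:
t0, x0 = 0.7, -0.3
dtt = (phi(t0 + h, x0) - 2 * phi(t0, x0) + phi(t0 - h, x0)) / h**2
dxx = (phi(t0, x0 + h) - 2 * phi(t0, x0) + phi(t0, x0 - h)) / h**2
assert abs(dtt - dxx) < 1e-5
```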

B.3 Fourier Transform

The motivation is to perform a change of coordinates in the space of functions to one that behaves well with differentiation. We simply define:

Definition B.3.1 (Fourier transform).
$$\mathcal{F} f(\xi) = \hat f(\xi) = \int_{\mathbb{R}^d} f(x) \, e^{-i \xi \cdot x} \, dx.$$

And we have the inverse:

Definition B.3.2 (Inverse Fourier transform).
$$\mathcal{F}^{-1} \hat f(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \hat f(\xi) \, e^{i \xi \cdot x} \, d\xi.$$

Proposition B.3.1. If $f \in L^1$, then $\hat f$ is well defined and $\|\hat f\|_{L^\infty} \leq \|f\|_{L^1}$.

Proposition B.3.2. For $f \in L^1$:
$$\mathcal{F}[f(\cdot - y)](\xi) = e^{-i y \cdot \xi} \, \mathcal{F} f(\xi), \qquad \mathcal{F} f(\xi - \eta) = \mathcal{F}[e^{i \eta \cdot x} f(x)](\xi).$$

Proposition B.3.3. $\widehat{\partial_j f} = i \xi_j \hat f$.

Proposition B.3.4. $\widehat{x_j f} = i \partial_{\xi_j} \hat f$.

Proposition B.3.5. $\widehat{f * g} = \hat f \, \hat g$.

Proposition B.3.6. $\mathcal{F}^{-1} f \cdot \mathcal{F}^{-1} g = \mathcal{F}^{-1} \big[ f *_{(2\pi)^{-d} d\xi} g \big]$, where the convolution on the right is taken with respect to the measure $(2\pi)^{-d} \, d\xi$.

The best space on which to take the Fourier transform is the space of Schwartz functions, which are like test functions but are allowed to have non-compact support as long as they decay quickly:

Definition B.3.3 (Schwartz class on $\mathbb{R}^d$).
$$\mathcal{S}(\mathbb{R}^d; \mathbb{C}) = \big\{ \varphi \in C^\infty(\mathbb{R}^d; \mathbb{C}) : \|x^\beta D^\alpha \varphi\|_{L^\infty} < \infty \ \ \forall \alpha, \beta \in \mathbb{N}^d \big\}.$$

Then $\mathcal{F}$ and $\mathcal{F}^{-1}$ map $\mathcal{S}$ to $\mathcal{S}$. The space of conjugate linear functionals on $\mathcal{S}$ is called the space of tempered distributions, and we are allowed to take Fourier transforms (and inverse Fourier transforms) of its elements (which will again be tempered distributions).

Theorem B.3.1 (Fourier inversion on $\mathcal{S}$). For $f \in \mathcal{S}$, we have that $\mathcal{F}^{-1} \mathcal{F} f = \mathcal{F} \mathcal{F}^{-1} f = f$. The same is true for tempered distributions.

Theorem B.3.2 (Plancherel's theorem). For $f, g \in \mathcal{S}$, we have the conservation of the two inner products:
$$\int f \bar g \, dx = \frac{1}{(2\pi)^d} \int \hat f \, \bar{\hat g} \, d\xi.$$

There are important integrals to keep in mind to help compute Fourier transforms:
1. $\int_{\mathbb{R}} e^{i \xi x} \, d\xi = \lim_{\varepsilon \to 0} \int_{\mathbb{R}} e^{-\varepsilon |\xi|} e^{i \xi x} \, d\xi = 2\pi \, \delta_0(x)$
2. $\int_{\mathbb{R}^d} e^{-|\xi|^2 / 2} \, d\xi = (2\pi)^{d/2}$
3. $\mathcal{F}\big[ e^{-x^2/2} \big] = \sqrt{2\pi} \, e^{-\xi^2/2}$ (in $d = 1$)
Index

$C^k$ boundary, 17
admissible, 14, 15
approximation, 28
Arzelà–Ascoli theorem, 87
Burgers' equation, 12, 19
Cauchy–Riemann equations, 4
characteristic ODEs, 10
compact embedding, 87
compatibility conditions, 14
conjugate linear, 68
convolution, 28
d'Alembert's formula, 57
d'Alembertian operator, 55
Dirac delta function, 31
Dirichlet problem, 5
  representation, 51
  uniqueness, 47
dispersive, 3
distribution, 29
  complex valued, 67
  convergence, 32
  convolution, 31
  differentiation, 31
  order, 30
  singular support, 36
  support, 30
distributions
  change of variables, 56
Duhamel's formula, 72
Einstein equation, 5
elliptic, 3
finite speed of propagation, 64
formal adjoint, 38
forward fundamental solution, 56
Fourier inversion theorem, 69
fundamental solution, 37
Gagliardo inequality, 83
Green's function, 48
Harnack's inequality, 47
heat equation, 3
  energy, 76
Heaviside function, 31
Hölder space, 85
Huygens' principle, 64
hyperbolic, 3
implicit function theorem, 15
initial value problem, 5
inviscid Burgers equation, 8
KdV equation, 4
Laplace equation, 3
  energy, 76
  fundamental solution, 40
  maximum principle, 46
  mean value property, 46
lecture
  01 (8/29), 3
  02 (9/3), 6
  03 (9/5), 9
  04 (9/10), 13
  05 (9/12), 16
  06 (9/17), 20
  07 (9/19), 23
  08 (9/24), 26
  09 (9/26), 29
  10 (10/1), 32
  11 (10/3), 36
  12 (10/8), 38
  13 (10/15), 42
  14 (10/17), 42
  15 (10/22), 44
  16 (10/24), 48
  17 (10/29), 51
  18 (10/31), 56
  19 (11/5), 59
  20 (11/7), 62
  21 (11/12), 66
  22 (11/14), 71
  23 (11/19), 75
  24 (11/21), 80
  25 (12/3), 84
  26 (12/5), 86
linear homogeneous, 4
Loomis–Whitney inequality, 83
Lorentz transformation, 60
Maxwell equations, 4
method of image charges, 50
minimal surface equation, 4
mollifier, 28
Morrey's inequality, 85
Neumann problem, 5
noncharacteristic, 15, 24
parabolic, 3
Plancherel theorem, 69
Poincaré inequality, 89
Poisson equation, 4
Poisson integral formula, 51
principal value, 32
question, 20, 40, 42, 80
quote, 22
real analytic, 24
Rellich–Kondrachov theorem, 87
representation formula, 39
Ricci flow, 5
Schrödinger equation, 3
Schwartz class, 69
Sobolev space, 77
  $H^k$, 78
  $W_0^{k,p}$, 78
  approximation, 79
  extension, 80
tempered distributions, 69
test functions, 27
wave equation, 3