Math222anotes PDF
Math222anotes PDF
Math222anotes PDF
Notes
Izak Oltman
last updated: December 11, 2019
These are the lecture notes for a first semester graduate course in partial differential
equations taught at UC Berkeley by professor Sung-Jin Oh during the fall of 2019.
Contents
1 Introduction to PDE’s 3
1.1 Overview of PDE’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Basic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 First Order (scalar) PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Method of Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6 Existence of Characteristic Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7 noncharactersitic condition for kth order system . . . . . . . . . . . . . . . . . . 20
2 Distributions 25
2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.4 convergence of distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4.1 Approximation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.5 Fundamental Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5.1 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3 Laplace Equation 40
3.1 Cauchy Riemman Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3 Green’s Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4 Wave Equation 55
4.1 1 dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
1
Contents
5 Fourier Transform 65
5.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.1.1 Duhamel’s Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Appendices 92
A Midterm Review 92
A.1 First order scalar characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
A.1.1 Existence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
A.1.2 General Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.1.3 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.2 Noncharacteristic Condition for Kth order systems . . . . . . . . . . . . . . . . . 93
A.2.1 General Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.2.2 Power Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.3 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.3.1 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
A.3.2 Convergence of Distributions . . . . . . . . . . . . . . . . . . . . . . . . . 95
A.3.3 Operations between distributions . . . . . . . . . . . . . . . . . . . . . . . 96
A.4 Fundamental Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
A.5 Laplace equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
B Final Review 97
B.1 Green’s Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
B.2 Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
B.3 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Index 101
– 2–
1. Introduction to PDE’s
Lecture 1 (8/29)
1 Introduction to PDE’s
The course outline will be:
1. basic issues (existence, uniqueness, regularity, ... ) will be demonstrated by means of
examples (Evans chapters 2,3,4)
2. theory of distributions and the Fourier transform (Evans chapter 4 and notes)
1.2 Examples
Here are some fundamental examples of PDEs:
Example 1.2.1 (Laplace Equation). u Rd R or C, with ∆u 0, where ∆ is the
Laplace operator defined as ∆ Pdj 1 ∂j2
Example 1.2.2 (Wave Equation). ju 0 where j ∂02 ∆ (the d’Alembertian) and
u R1 d R or C. Here we have define x0
t as the time coordinate.
Both of these are scalar equations. The Laplace equation could model temperature distri-
bution at equilibrium or electrostatics which can measure the electric potential where there
is no charge. The wave equation models wave propagation (at finite speed). This could be
used to model elastic waves or acoustic waves or electromagnetic waves.
The Laplace equation is the prototypical example of elliptic PDEs while the wave is the
prototypical example of hyperbolic PDEs
– 3–
1. Introduction to PDE’s
∂x u ∂y v 0
∂y u ∂x v 0
The Cauchy-Riemman equations give necessary and sufficient conditions for a pair of
functions u, v to give us that uz iv z is holomorphic with z x iy. Also note that u
and v also satisfy the Laplace equation.
R3 :
∂t E © B 0
∂t B © E 0
© E 0
© B 0
F u 0
Note that F is linear for all these examples. So all equations stated so far are linear PDEs.
As we can write the operator of the thing equal to zero, we have a linear homogeneous
PDE. If instead we had F u f then we have a linear inhomogeneous. In the case we
have ∆u f we then have what is called the Poisson equation.
d
∂j u
Q ∂j » 0
j 1 1 SDuS2
This is to find a surface with minimal area that satisfies something. This is an example
of an elliptic PDE.
pt u ∂x3 u 6u∂x 0
– 4–
1. Introduction to PDE’s
∂t g 2Ric g
Ric g 0
For Schrodinger
¢̈
¨i∂t u ∆u f x > R Rd
¦
¨u g
¤̈
x > t 0 Rd
– 5–
1. Introduction to PDE’s
Definition 1.3.1. We say a boundary value problem or initial value problem is wellposed
(Hadaward) if it satisfies
1. existence
2. uniqueness
The regularity of a solution (if not we have a singularity). We also have asymptotics and
dynamics
We call SαS P αi the order of α. The order of a PDE is the order of the highest order ∂ α
that appears in the equation.
3. fully nonlinear.
Lecture 2 (9/3)
Here are three examples of first order PDEs (some of which are not in the text)
This is an example of a nonlinear first order scalar PDE. Something goes wrong with
this equation.
a
that is the map from the data to the solution is continuous
– 6–
1. Introduction to PDE’s
ut, x S f dt C x
u0 x R0 f t , xdt
t
Then we have ut, x
Now let’s supose a is a general function which obeys the boundedness condition.
∂t u a∂x u f
u0, x u0 x
The idea is to try to make a change of variables to reduce this to the simpler case. In
other words, we want a change of variables t, x s, y such that: (
∂s ∂t a∂x (1)
We have:
∂t ∂x
∂s ∂t ∂x
∂s ∂s
∂t ∂x
∂y ∂t ∂x
∂y ∂y
a
note that this u is a global solution
– 7–
1. Introduction to PDE’s
∂t ∂x
To get (1), we want: 1 and s, y ats, y , xs, y . The first equality gives us
∂s ∂s
that s t. So we would like to find:
∂x
s, y as, xs, y
∂s
This is a first order ODE. Now since SaS is bounded for each t, this allows us to conclude
that xs, y is always global (in s)a . Let’s allow x0, y y.
u0 y R0 f s , y ds
s
In our equation we had ∂s us, y f s, y therefore us, y
Remark 1.4.1. This proof is an instance of the methods of characteristics. Because the
xs, y are called the characteristics and the ode involving x is called the characteristic ode.
Now let’s turn the the Inviscid Burger equation:
¢̈
¨∂t u u∂x u 0 t, x > R2
¦
¨u0, x u0 x t, x > 0 R
¤̈
Theorem 1.4.2. For any smooth and compactly supported u0 R R, then there does not
exist a smooth global solution u R1 1 R with this data.
It turns out that there always exists a local solution, that is there exists δ A 0 such that
on 0, δ R, there exists a unique smooth solution u.
Proof. Suppose that u R1 1 R exists. Now let’s apply the method of characteristics to
the operator ∂t u∂x .
So we have ∂s x us, xs, y and us, y u0 y which is just constant. Visualy things
are pretty clear.
Or we could use:
0 ∂x ∂t u u∂x u ∂t ∂x u u∂x ∂x u ∂x u2
After a little, we get ∂s x x2 0, which is known as the Ricatti equation, which blows up in
finite time. This can be solved via separation of variables.
a
this is an ODE result, and requires the additional assumption that M is locally bounded. Then we apply
Picard’s theorem and prove by contradiction on a saturated solution
– 8–
1. Introduction to PDE’s
there does not exist any C 1 solution u to this PDE on any neighborhood of 0, 0
Proof. (requires complex analysis). Notation: let Br t, x > R1 1 t2 r2 @ r2 with ∂Br
the boundary. Take a smooth function f with the properties:
1. f t, x f t, x
2. for some sequence rn 0 (ex. 2 n) we want f SB
rn δn Brn δn
0 with RB rn
f dtdx x 0
Assume that such a C 1 solution to the PDE, u, exists on some neighborhood of 0, 0. Pick
1
n large enough so that Brn ` U . Then let ũt, x ut, x ut, x, then observe that ũ
2
is again a C 1 solution on Brn δ . But observe ũ is odd.
SBrn
∂t u it∂x u SB rn
f x0
t 2 x
Therefore uS∂Brn x 0. On the other hand, it turns out uS∂Brn 0.
By continuity, and the fact that f is zero on a bunch of thin annuli. On each annulus,
we have that f 0 ∂t u it∂x u. Let’s call the right half of the annulus U , that is:
U
rn δ 2 @ t2 x2 @ rn δ 2 , t A 0
Lecture 3 (9/5)
Today we will cover section 3.2 from Evan’s book. Some notation things: we will use d
instead of n for the spacial dimension. We will also us xj as the jth coordinate (instead of
xj ).
– 9–
1. Introduction to PDE’s
Our first goal is to derive the characteristics of ODEs. To do this, we want to find the
solution ux by finding a suitable curve γ that connects x to x0 > Γ on which the evolution
of u can be computed.
Recall that the simplest example is ∂t u 0 with Γ 0 R, then the curves are just the
vertical straight lines.
Let’s parameterize our curve γ ` U with xs such that s > I. Then let the function value
along this curve be z s uxs. We would also like to keep track of the partials, by the
symbols pi s ∂i uxs
Furthermore, we have:
0 ∂xi F x, ux, Dux
∂xi F x, ux, Dux ∂z F x, ux, Dux∂i u
Q∂p F x, ux, Dux∂i ∂j u (2)
j
j
So now we have second order terms in terms of first order terms. Now we will set:
ẋj s ∂pj F xs, uxs, Duxs ∂pj F xs, z s, ps (3)
So we get:
ṗi s ∂z F xs, z s, pspi s ∂xi F xs, z s, ps (4)
Lastly, we have:
Then the collection of the three ODEs involving z, x, p are called the characteristic ODEs.
– 10 –
1. Introduction to PDE’s
z s uxs p i s ∂i uxs
0 F x, u, Du Q bj ∂ j u cu
j
Remark 1.5.1.
1. The p equation is not needed
2. there is a hierarchy of the ODEs
Example 1.5.2 (Linear F).
¢̈ 1
¨x ux2 x2 ux1 u in U
¦
¨u g on Γ
¤̈
The solution is x1 s x0 coss and x2 s x0 sins with x0 the initial condition. We
also have z s es z0 g x0 es .
Based on our domain, 0 @ s @ π ~2. So our path that we know the solution for is a quarter
circle. And we know that solution on this quarter circle is g x0 es .
Now given x1 , x2 > U , we want to find the quarter circle that hits this point.
»
That is we
1 2
want to find x0 , s such that x , x x0 cos s, x0 sin s. Then we get x0 x 2 x 2 2 ,
1
x2
and s arctan 1 . So the result is:
x
» x2
ux1 , x2 g x1 2 x2 2 exp arctan
x1
– 11 –
1. Introduction to PDE’s
Remark 1.5.2. We recover the method of characteristics for ∂t at, x∂x u 0 by the
linear case.
Example 1.5.3. Consider when F is quasilinear, that is:
dz 1 1 z 0
For the other one, we have: ds therefore s therefore z s .
z2 z s z 0 1 sz 0
g x 0
Then since z 0 g x0 , then z s .
1 g x0 s
Finally, given x1 , x2 > U , we need to find x0 and s so that x1 , x2 x 0 s, s. But
g x 1 x 2
then x0 x1 x2 , s x2 . Therefore ux
1 g x1 x2 x2
Example 1.5.6. Here will have a fully nonlinear F given as:
¢̈
¨∂x1 u∂x2 u u in U
¦
¨u g on Γ
¤̈
– 12 –
1. Introduction to PDE’s
ż Q pj ∂pj F x, z, p
j
Then in this case we have: F p1 p2 z, ẋ1 p2 , ẋ2 p1 , ṗ1 p1 , ṗ2 p2 and ż 2p1 p2 .
Then we have:
¢̈
¨ p 1 s p0 1 es
¦
¨ p 2 s p0 2 es
¤̈
Therefore: ż 2p0 1 p0 2 e2s , so that z z0 p0 1 p0 2 e2s 1. Lastly, ẋ1 p0 2 es 1
so that x1 p0 2 es 1 and x2 x0 p0 1 es 1.
g x 0
And z0 g x 0 , p 0 2 g x 0 , p 1
.
g x0
1
In the case where g x0 x20 , then we have that z0 x20 , p0 1 x0 , p0 2 2x0 . Now
2
given x1 , x2 > U we want to find x0 , s such that x1 , x2 x1x0 s, x2x0 s. Plugging in
what we know about these already, we get:
1
x0 x2 x1
4
s 1 2 1 1 x2 14 x1 4x2 x1
e x x
x 4 x2 14 x1 4x2 x1
x1 4x2 2
Then we get u and we simplify, to get: ux
16
Lecture 4 (9/10)
Recall what we did last time with precise logical assumptions. We were looking at the
scalar first order PDE: F x, u, Dx 0 with F U R Rd R, with the boundary condition
u g on Γ ` ∂U with U a domaina
Then we derived the characteristic equations. For this we supposed that there exist a C 2
solution u to our PDE. From this we looked for a trajectory xs > U, z s > R, ps > Rd such
that uxs z s, pi s ∂i uxs and xs is chosen so that we have a closed system of
ODEs for x, z, ps.
a
open and connected set in Rd
– 13 –
1. Introduction to PDE’s
The worst term is ṗi s ∂j ∂i uxsẋj s. To simplify this, we differentiate F in xi to
get:
∂xi F ∂xi F x, ux, Dux ∂z F x, ux, Dux∂i u ∂pj F x, ux, Dux∂i ∂j ux
To simplify everything, we choose ẋj ∂j ∂i uxsẋj s. Then we get: ṗi s ∂z F x, z, ppi
∂xi F x, z, p. And lastly, we get that ż Pi pi ∂pi F x, z, p.a .
So we have 2d 1 equations for 2d 1 for x, z, p (2d 1 variables). These are known as the
compatibility conditions.
Assume that we start with x0, z 0, p0 that satisfy (6) (such a triple is called admis-
sible). Let x0 > W ` Γ. Parametrize the points in W by y y 1 , y 2 , . . . , y d 1 , 0. Now the
xy, s
z y, s
py, s
With
¢̈xy, 0 y
¨
¨
¨
¨
¨
¨z y, 0 g y
¦ (7)
¨
¨ pi y, 0 ∂i g y f or i 1, . . . , d 1
¨
¨
¨
¨p y, 0
¤̈ d F xy, 0, z y, 0, py, 0 0
– 14 –
1. Introduction to PDE’s
In summary: we began with x0 , z0 , p0 with z0 g x0 and p0 ∂1 g, . . . , ∂d 1 g x0 , p0 d
With xy s, z y, s, py, s are given by the charactersitic equations.
Now, given x > U near x0 , we want to find y, s such that x xy s.
Lemma 1.6.1. Under the noncharactersitic assumption, the map y, s ( xy, s is invert-
(
ible near x0 . Denote the inverse by x y x, sx
Proof. Inverse function theorem. Compute using the characteristic equation that x satisfies:
Fp1 x0 , y0 , p0
k Id1
∂y jx ∂s xk Uy xn
Fpn1 x0 , z0 , p0
s 0
0 F x , z , p
pn 0 0 0
– 15 –
1. Introduction to PDE’s
In the fully nonlinear case, take F ∂x u2 ∂y u2 1. Then F p21 p12 1. Then given
1
p0 1 , then we have two choices of p0 2
2
Now let’s prove Theorem 1.6.2.
Proof. step 1 veryify that F xy, s, z y, s, py, s 0. Compute:
Then apply the characterisitc equations to get that the whole thing is zero. Then since
F xy, 0, z y, 0, py, 0 0 by preparation, we are good.
step 2 We know that xy x, sx x and z y x, sx ux. So it remains to show
that ∂i ux pi y x, sx.
And recall that ∂s z Pi pi ∂pi F . So we need to compute ∂yj z. From z y, s uxy, s ,
then we need to prove:
∂y j z Q piy, s∂y xi j
i
Define:
∂s r j ∂z F r j
Q Q pk ∂y xk ∂x yj Q pk ∂p F ∂x s Q pk Q ∂y xk ∂x yj
j k
k
i j i ∂s xk ∂xi s
j k k ´¹¹ ¹ ¹ ¸¹¹ ¹ ¹ ¶ k j
∂s ẋk
Q pk δik pi
k
Lecture 5 (9/12)
For problem number 4 of the homework (Evans section 3.3.5 problem 6(b)). Take
ux, t g x0, x, tJ 0, x, t (take this as the definition of u and verify that it is a solution).
– 16 –
1. Introduction to PDE’s
We are interested in solving for this on an open set U . To solve this we needed initial values:
¢̈x0 x0
¨
¨
¨
¦z 0 g x0
¨
¨
¨p 0 p 0 i
¤̈ i
For p0 we need to make a choice. For p0 i ∂i g x0 with i 1, . . . , d 1. But for the normal
direction, we need to figure this out such that it solves F x0 , z0 , p0 0. Such a triple is
called admissible.
We also could invert y, s ( xy, s for s very small and y close to x0.
Now define ux z y x, sx. Then the claim is that this u is a solution to the bound-
ary valued problem in V . And this is smooth.
U9 B x 0 , r xd A γ x 1 , . . . , x d 1
´¹¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶
ball of radius r centered at x0
And ∂U 9 B x0 , r xd γ x 1 , . . . , x d 1
So now we would like to flatten the boundary. Define the coordinates y 1 , . . . , y d so that:
¢̈ i
¨y xi i 1, . . . , d 1
¦
¨y d xd γ x1 , . . . , x d 1
¤̈
– 17 –
1. Introduction to PDE’s
(
With U a C domain. Then there exists x y x, smooth, invertible in a neighborhood
ª
∂y
F xy , v y , Dv y xy 0
∂x
Then we can write the above operator as Gy, v y , Dv y 0. Then applying our theory of
what we have done so far to our equation Gy, v, Dv 0 then we solve the original problem.
Q ν∂U
j
∂p F x0 , z0 , p0 x 0
j
j
Theorem 1.6.3. Local existence near a noncharacteristic boundary value holds for general
C domains.
ª
Let’s now talk about uniqueness. Suppose that we already have two solutions u1 , u2
that are C 2 solving:
¢̈
¨F x, u, Du 0 in U
¦
¨u g on Γ
¤̈
The idea is the method of characterics from Γ to x in U . But we should be careful about
j
the normal derivative ν∂U ∂j u 0
Example 1.6.1. Consider ∂1 u2 1 on RC0 and u 1 on u. We have two solutions
u 1 x and u 1 x. This has to do with the choice of p0 1 Pj uj∂U p0 j .
So for uniqueness, we require that the normal derivatives also agree: P ∂∂U
j
∂j u 1
P ∂U
∂ j
∂j u2 .
– 18 –
1. Introduction to PDE’s
Then for all x > U such that there exists a characteristic xs, z s, ps with x0 > Γ, z 0
ux0 , p0 Dux0 such that xs x and xs > U for all s > 0, s. Then we have
What about continuous dependence? Continuous dependence does hold, but it is pretty
subtle.
Example 1.6.2. Consider Burger’s equation
¢̈
¨∂t u u∂x u 0 in t A 0
¦
¨u g on t 0
¤̈
(
Now let’s write g u g is continuous in C 2 , in the sense that if gn g in C 2 Γ then
u gn u g in C 2 V .
1 2
However, there exist counterexamples hn , hn such that:
1 2
Yu hn u hn YC 2 V C nYhn1 hn2 YC 2 Γ
i.e. g ( u g cannot be Lipschitz C 2 C 2.
Let’s begin with the noncharaceristic condition for a quasilinear, 1st order, scalar PDE.
Recall, we were looking at (with Γ ` xd 0):
– 19 –
1. Introduction to PDE’s
but if we evaluate at x0 , then we know everything but aj xp , z0 ∂d2 u. In other words we
can write ad x0 , z0 ∂02 x0 things that only depend on x0 , ux0 , ∂j ux0 j 1, . . . , d, other
things. But the thing on the left is nonzero, so we can solve for the thing on the left.
Consider the quasilinear K-th order system of PDEs. We have u U RN , with ux
®
`Rd
1
u
. And:
uN
N
0 Q Q bαAB x, u, . . . , Dk 1uDαuB
cA x, u, . . . , Dk 1 u
Sα S kB 1
k1 2
With: bα ; U RN Rd N RN , with bα A B with A the row index and B the column index.
And C defined similarly. (I don’t really understand the indicies).
More simply, we could have Γ ` xd 0, with Cauchy data for u:
2 k 1
u, ∂d u, ∂d u, . . . , ∂d u g0 , g1 , . . . , gk1
on Γ.
– 20 –
1. Introduction to PDE’s
If we assume that we have a smooth solution u, one question we will ask is whether we
can determine all derivatives Dn ux0 with x0 > Γ from the Cauchy data and the equation?
The answer is to come up with a noncharacteristic condition for the Cauchy data near x0 .
Given the Cauchy data: g0 , . . . , gk 1 near x0 > Γ. Then we can compute any Dk ux0 for
k B k except ∂dk ux0 . We can get this from the PDE. Moving things around we get:
∂dk ux0 1
b0,...,0,d
Q α
b α D u c x 0
Sα S k
αx0,...,0,d
We want to this again and again. Note that b0,...,0,d is invertible around x0 , therefore we can
compute ∂dk ux in a neighborhood of x0 (in Γ). Therefore we can compute all of Dk 1 ux
Now take ∂d of the equation, keep ∂dk 1 u on the left hand side and the rest on the right
but we can invert and get the something known. The same condition (that b0,...,0,d is in-
vertible) allows us determine ∂dk 1 ux0 and ∂dk 1 x in a neighborhood of x0 .
∂U 9 B x, r xd γ x 1 , . . . , x d 1
γ x1 , . . . , x d 1 .
a
the imput to b can be computed from the Cauchy data
– 21 –
1. Introduction to PDE’s
Recall we can also define the unit normal to the boundary ∂U as ν∂U .
∂k
Let’s define the k-th normal derivative ux0 for x0 > ∂U which we define as:
∂ν k
d
∂k
∂ν k
ux0 Q ν i1 ν ik ∂i1 ∂ik uc0
i1 ,...,ik 1
∂k ∂
Note that k
ux0 agrees with k ux0 ν © K ux0 at the top order.
∂ν ∂ν
After playing with indicies, we get that:
∂k k
∂ν k
ux0 Q αν αDαux0
SαS k
With:
k k!
α α1 !αd !
Now the Cauchy data is:
k 1
u, ∂ν u, . . . , ∂ν k1 u g0 , g1 , . . . , gk1 in Γ
The noncharacterisitc condition comes out from the following. Idea: work iwth v y
uxy . The equation for u will translate to some equation for v in the variable y. After a
long computation, it will turn out that we get:
Q b̃αy, v, . . . , Dk 1vDαv
c̃y, . . . , Dk 1 v
So we have:
Dα u Dα v y x k
∂y d v Dy
d α
terms not involving ∂ykd v
Then we see that y d is a boundary defining function. By this we mean that ∂U ` y d 0.
And we have that from multivariable calculus that Dy d is parallel to the nornal ν.
– 22 –
1. Introduction to PDE’s
Therefore:
b̃0,...,0,d Q bαDydα ck Q bα ν α
Sα S k S αS k
Here the noncharacterisitic condition only depends on ν (or ∂U ). This requires that:
d
Q aij ν iν j x 0
i,j 1
dition becomes:
d
Q aij ∂iw∂j w x 0
i,j 1
One case of this is if aij is a positive definite matrixa . For example if aij is the identity
matrix, then our PDE becomes ∆u 0 and all boundaries are noncharacteristic (this is the
property of ellipticity).
1 0
aij
0 I
Then we get the wave equation ju 0. This is characteristic if and only if ∂0 w 2
P d
i 1 ∂i w
20 (these are characteristic hypersurfaces).
Lecture 7 (9/19)
We say that the boundary value is noncharacteristic at some point x0 > Γ implies that
we can compute (or determine) all of the derivatives of u at x0 (Dβ ux0 ) if and only if the
matrix PSαS k bα x0 , ux0 , . . . , Dk 1 ux0 ν α is invertible.
This motivates the following theorem. Can we construct a solution of u near x0 by the
knowledge of all Dα ux0 ? The answer is yes, provided that everything can be written as
power series. Equivalently, if everything is real analytic.
Definition 1.7.2 (real-analytic). We say that f U RN is real analytic if for all x0 > U ,
there exists r A 0 and coefficients aα > RN where α are multi-indicies such that:
f x Q aα x x0 α
α
for Sx x0 S @ r
Definition 1.7.3 (real-analytic boundary). We say ∂U (or U ) is real analytic if for all
x0 > ∂U there exists r A 0 such that ∂U 9 B x0 , r xd γ x1 , . . . , xd 1 (after a suitable
are noncharacterisitc at x0 > Γ. Then there exists a neighborhood V ? x0 such that there
exists a unique, real-analytic solution u to the boundary valued problem on V 9 U .
near x0
Remark 1.7.2 (about Cauchy–Kovaleskaya theorem). At first, you might think that this
theorem looks like a very general existence and uniqueness statement for PDEs. However,
real-analytic functions are too restrictive to be useful.
Suppose that f was real-analytic on Rd , and suppose that f 0 on B 0, 1, then f 0 on
Rd .
– 24 –
2. Distributions
Then we have:
2
0 ∂ 1 ∂22 u k 2 sinkx1 f x2 sinkx1 f 2
x sin kx1 f 2
x k f x
2 2
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶
0
is a solution to the boundary value problem. Now everything is analytic, so by the theorem,
this is the unique solution.
Now let’s perturb the initial data. Let’s have the initial data now 0, e k 1 ~ 2
k sin kx0 , then
we get:
k1 ~ 2
u e
sinkx1 sinhkx2
e k1
~ 2
sin kx1 sinhkx2 ÐÐ
k ª 1 kx2
2
e e k1 ~ 2
sinkx1
2 Distributions
We now are turning to the second part of the course. We have two goals
1. We would like to study several fundamental linear 2nd order scalar PDEs (Laplace,
Wave, Heat, Schrodinger). (chpt 2.2, 2.4, 2.3)
– 25 –
2. Distributions
2.1 Motivation
Schwartz was the founder of the theory of distributions, but it was independently constructed
earlier. Let’s consider a motivating example from electrostatics.
Example 2.1.1. Given E R3 R3 the electric field and ρ R3 R the charge density
¢̈
¨© E ρ
¦
¨© E 0
¤̈
Recall from vector calculus, that © ©ϕ 0. The converse is also true, therefore there
exists some ϕ such that E ©ϕ.
∆ϕ ρ
First consider the case when ρ is the sum of point charges at xk with charges qk . By linearity,
it is enough to find:
ϕ Q q1 ϕ 0 x xk
where ϕ0 is the potential of the point charge with charge q 1 at the point 0. Then we can
say that ρ could be written as a continuous sum of point charges:
ρ S ρy dy
Then we can say that ϕ is the continuous sum of point-charge electric potentials. So:
So all that remains to find is the electric potential of a point charge, ϕ0 . Note by rotational
symmetry of the problem, ϕ0 must be radial: ϕ0 ϕ0 r. By the Gauss equation, we have:
1 SB 0,r
© E S∂B 0,r
ν © ϕ0 S∂B 0,r
∂r ϕ0 ∂r ϕ0 r4πr2
11
Therefore: ∂r ϕ0 r . So we know normalize so that ϕ0 0 as r ª and then we see
4π r2
that:
1 1 1
ϕ0
4π r 4π SxS
Lecture 8 (9/24)
– 26 –
2. Distributions
Today will the real beginning of our discussion on distribution theory. Let’s motivate the
definition of a distribution.
Let’s try to make sense of the notation of point charges qi at points xi > R3 . Let u be the
charge distribution function of these charges. We would like:
1. u 0 outside xi
2. RU u Px >U qi
i
However, u cannot be a function that satisfies this. Rather than viewing u as a function on
R3 , it is more natural to interpret u as a functional on the set of sets. By this we mean:
u U Q qi
xi > U
Given two open disjoint subsets U, V , then uU 8 V uU uV . If we interpret these
has integrals, we have (not rigoruously):
So we want to view u as a linear functional on the space of nice functions. The nicest we
can think of is the space of smooth, compactly supported functions: C0 R3 . ª
uϕ `u, ϕe
ψR R
– 27 –
2. Distributions
Then ψ works. Now consider ϕ Rd R by writing ϕx ψ 1 x1 2 xd 2 , then
supp ϕ ` B 0, 1.
Definition 2.2.2 (Convolution). Given f a locally integrable function, ϕ > C0 Rd , then ª
we write:
f ϕ x S f y ϕx y dy
Where the second equality relies on f being locally integrable and ϕ smooth.
Remark 2.2.2. If f > C Rd and ϕ > C0 Rd . Then suppf ϕ ` supp f supp ϕ. Where
ª
Lemma 2.2.1 (Approximation by mollification). Let f > C k Rd and ϕ > C0 Rd with ª
sup SDα f ϕδ f S 0
Rd
For higher k with SαS A 0, you let the derivatives fall on f , then proceed with the same
proof.
– 28 –
2. Distributions
converges to ϕ > C0 U (uj u) if there exists a compact set K ` U such that supp ϕj ` K
ª
(for all but finitely many) and supp ϕ ` K and supx>K SDα ϕj x Dα ϕxS 0 as j ª for
all multi-index α
Remark 2.2.3 (Optional). This isn’t a Frechet space or a Banach space
Remark 2.2.4 (Optional). This is the strongest topology such that every C0 K embeds ª
into this space for every compact subset K. Where C0 K is the space of functions in
ª
supp ϕ ` K endowed with ϕj ϕ if and only if supx>K SDα ϕj x Dα ϕxS 0 for all α.
This is a complete and metrizable set (Frechet).
Definition 2.2.4 (Distribution). u C0 U U is a distribution if it is linear and
ª
continuous in the sense that if ϕj is a sequence of test functions which converge to ϕ in the
above sense, then uϕj converges to uϕ.
Lemma 2.2.2. Let u C0 U ª
R be a linear functional. Then u is a distribution (i.e.
continuous) if and only if (“boundedness”) for all compact K ` U there exists N A 0 and
C CK,N A 0 such that for all ϕ > C0 U , supp ϕ ` K:
ª
Proof. If you have boundedness, then you have continuity. We can ask (by linearity) if
`u, ϕj ϕe 0. But ϕj ϕ 0 in C0 U there exist K compact such that supx>K SDα ϕj x
ª
For the other direction, suppose that u doesn’t have boundedness. Then we get a compact
K ` U such that for all N , there exists ϕN with S `u, ϕN e S C N PSαSBN supx SDα ϕxS. Then
set ψN x ϕN xN PSαSBN supx SDα ϕxS 1 . Then SDα ψN S B 1~N forall SαS B N . Then
Lecture 9 (9/26)
i.e. if ϕj ϕ in the sense that there exists a compact set K ` U such that supp ϕj , supp ϕ `
K, we have:
– 29 –
2. Distributions
to be of order B N if there exists N such that for all compact sets K there exists CK,N A 0
such that S `u, ϕe S B CK,N supx>K supSαSBN SDα ϕxS
Example 2.2.1. Any continuous function u defines a distribution by `u, ϕe R uϕ for all
ϕ > C 0 U .
ª
The order of such a distribution is zero and the support definitions agree
Example 2.2.3. The signed Borel measure on U , B U , defines a distribution (just integrate
against it). The order is 0.
2.3 Operations
Now let’s discuss the basic operations for distributions.
The basic principal (the adjoint method): take an operation P on a function u, we will
try to compute P so that R P uϕ R uP ϕ for all test functions ϕ. Then given a distribution
S f uϕ S uf ϕ
– 30 –
2. Distributions
S Dα uϕ 1
S αS
S uDα ϕ
Note that all distributions are differentiable. So distribution theory extends differential
calclus. Moreover, all distributions are locally Dα f for f continuous functions.
Example 2.3.3 (convolution). Let’s first consider u > D U and v > C0 U . What is u v? ª
u v x S uy v x y dy
So naturally:
So `u v, ϕe `u, v
ϕe
Because v > C0 U , then u v is not just a distribution, it is a smooth function.
ª
Proof. Consider u v x `u, v x e. By continuity of u and the fact that the map
(
x v x is continuous in the space of test functions.
S u v xϕx `u v, ϕe
Then H
δ0 . Because
ª
`H
, ϕe `H, ϕ e
S0 ϕ xdx
ϕ0
– 31 –
2. Distributions
Remark 2.3.1. We can generalize this computation to give a distribution theoretic proof of
the divergence theorem.
Remark 2.3.2. Trying to make sense of 1~x as a distribution. This isn’t a L1loc function.
However, does there exists a distribution u such that u 1~x in R 0. Take log SxS and
differentiate it as a distribution (this is known as taking the principal value of 1~x.
ª
ϕ ª L 0
`1~x, ϕe S ª x
dx S ª
log SxSϕ xdx
S0 log SxSϕ ϕ0
S L log SxSϕ
ϕ0
(Yeah I know, a bunch of those bounds should be negative) So we then send this to infinity.
The middle term ends up going to zero so we define:
ε ϕx ϕ0 L ϕx ϕ0
`P V 1~x, e ϕ lim S S
ε 0 L x ε x
To show the middle term actually goes to zero we note that:
ϕx ϕ0 ε ε 1
x Sε
dx S S ϕ xσ dσdx
ε 0
Then we can bound ϕ and send ε to zero to get that this integral goes to zero.
Lecture 10 (10/1)
Today we will continue on our discussion of distribution theory. Recall that distributions
(D U ) are defined as linear functionals on C0 U that are continuous in the sense of con-
ª
vergence on C0 U .ª
`un , ϕe `u, ϕe
Remark 2.4.1. This is the same of weak convergence of linear functionals our space.
Theorem 2.4.1 (Sequential Compactness). Let un > D U and suppose that for all
ϕ > C0 U , `un , ϕe is convergent. Then the claim is that the linear functional u defined by
ª
– 32 –
2. Distributions
Proof. (optional) It’s clear that u as defined above is linear. So it suffices to check continuity.
Let K be any compact subset of U . Then u obeys (9) for ϕ > C0 K . Note that this
ª
space is nice (for a fixed K). Convergence is just ϕn ϕ when for all N , SαS B N , we have
supx>K SDα un Dα uxS 0. So define the seminorm ρN ϕ supSαSBN supx>K SDα ϕxS.
Then let:
PN ϕ ψ
dϕ, ψ Q2 N 1
PN ϕ ψ
N
same topology as above and d is invariant in the sense that dϕ v, ψ v (.ϕ, ψ for all
ϕ, ψ, v > C0 K , and d is complete.
ª
These must be proven. But once these are shown, then un is the family of linear
functionals on C0 K which is a complete metric space (compatible with the vector space
ª
structure). Then we can apply the boundedness principal (or the Banach-Steinhaus theo-
rem). If `un , ϕe is bounded for each ϕ, then un is equicontinuous.
By equicontinuity, we have that uSC0ª K is continuous and K was arbitrary. Then since
C0 U is the strongest topology such that C0 K C0 U imbedds continuously. It follows
ª ª ª
that u is continuous on C0 U
ª
Example 2.4.1. Consider a sequence of functions un > L1loc U . In this case you can connect
the notion of pointwise convergence and the convergence in the sense of distributions via the
dominated convergence theorem.
That is, if un u in the pointwise sense, and there exists some function g > L1loc such
that Sun S B g almost everywhere on U , then un u in the sense of distributions
Proof. We just need to show that R un ϕ R uϕ for all ϕ > C0 U . But this follows imme-
ª
Example 2.4.2. Let h > C with h 1 on 1, ª and supp h ` 0, ª. Then if we take
ª
limδ 0 hx~δ this converges to the heaviside function pointwise and in the sense of distri-
butions.
Then un x does not have a pointwise limit for 0 @ x x 2πZ. But this sequence does have a
limit in the sense of distributions, and it is zero.
– 33 –
2. Distributions
ª
1 ª
1 inx ª
1 ª
1
But we can control (bang) as its absolute value is bounded by R0 S∂x ϕxSdx which goes to
ª
n
zero.
1
Example 2.4.4. Take the same un but normalize it by n. Then the limit of limn nun δ0 ª
i
in the sense of distributions (check).
Example 2.4.5 (approximation of the identity). Let ϕδ be the standard mollifer with
support contained in a ball of radius δ. Then we saw that ϕδ u u as δ 0.
with:
S S ϕδ z ϕz ydz ÐÐ ψy
δ 0
ϕδ ψ
ϕδ x y ψ xdx
2. j Kj U
now construct χj > C0 U such that χj 1 on Kj but supp χj ` Kj 1 . Then we can compute
ª
– 34 –
2. Distributions
∂j χU ν∂U j dS
Proof. Note that ∂j 1U is supported in ∂U . It suffices to compute this on balls that cover
∂U . Then B 9 ∂U xd γ x1 , . . . , xd 1 and B 9 U xd @ γ 1 , . . . , xd 1 . Then we have:
1 1
1U x lim hδ
γ x , . . . , xd 1
x
d
δ 0
set ṽ ∂1 γ, . . . , ∂d 1 γ, 1. Then we can differentiate 1U by differentiating all the hδ ’s. By
computaion we have:
1 1
∂j h h δ
γ x , . . . , xd 1
x δ
d 1
ṽj
now for each x1 , . . . , xd 1 as δ goes to zero, this converges to the distribution δγ x1 ,...,xd1 xd ,
`∂j 1U , ϕe SB α
ṽj ϕx1 , . . . , xd 1 , γ x1 , . . . , xd
1
dx
1
dxd 1
»
S∂B α
ν 1 S©γ S2 ϕx1 , . . . , xd 1 , γ x1 , . . . , xd 1
dx
1
dxd
1
S∂U νj ϕxdS
Where I dropped a negative sign somewhere.
– 35 –
2. Distributions
Lecture 11 (10/3)
Today will be the last class where we will discuss distribution theory. We will:
introduce the notion of fundamental solutions
discuss operations between distributions
Let’s consider multiplication of distributions.
º
Take
º
u, v > D U . What does u v look like?
This doesn’t always work, for example 1~ x and 1~ x are locally integrable, therefore they
define distributions, however, their multiplication gives 1~x which isn’t locally integrable.
So in general, this is impossible, but if for all x > U , at least one of u or v is smooth, then
u v is well defined.
Definition 2.4.2 (Singular Support). Let u > D U . We say that u is smooth on an
open subset V ` U if `u, ϕe R ũϕ for some ũ > C and for all ϕ > C0 U with supp ϕ ` V .
ª ª
Then `uv, ϕe `uv, χϕe `uv, 1 χϕe `v, uχϕe `u, v 1 χϕe
Remark 2.4.4. The approximation method is more flexible than the adjoint method.
The converse of Proposition 2.4.2 is false as the following example shows:
Example 2.4.8. In R2 , let u dSx 0 and v dSy 0 . So now supp u singsupp u x 0
and supp v singsupp v y 0. Now singsupp u 9 singsupp v 0 but uv is well-defined.
Now let’s discuss the convolution of two distributions. In general you cannot define this.
Proposition 2.4.3. If u and/or v (distributions) have compact support, then u v is well
defined and moreover u v v u and supp u v ` supp u supp v
Proof. welldefined: recall when v is a function `u v, ϕe `u, v
ϕe, where:
v ϕy
S v x y ϕxdx `v, ϕx e
without loss of generality, suppose that supp v is compact. Then since ϕ > C0 Rd then ª
v ϕ > C Rd even when v > D U and suppv ϕ ` supp v supp ϕ. Therefore
ª
– 36 –
2. Distributions
which is by definition just u. If u > C0 Rd , then ux `δx , ue `δ0 x , ue δ0 ux.
ª
We can write:
`δ0 x , ue S δ0 x y uy dy“ ” ux
Example 2.5.1. If V is a finite dimensional vector space, then we can write v Pi viei if
ei span V .
And aα is a real-valued smooth function. Then P u makes sense for u > D Rd
v x S v y δ0 x y dy
So we have (hopefully)
2.5.1 Uniqueness
Example 2.5.2. Let U and V be finite dimensional vector spaces and P be a liner operator.
Let P u v. Let ϕ > V then:
`u, P ϕe `P u, ϕe
`v, ϕe (10)
With P V
U the adjoint. Suppose there exist a set of vectors θi that span U . If for
all θi we can find E i > V such that P E i θi , then by (10) (replacing ϕ with E i )
au, θ i f av, E i f
for all i. If our dual space could seperate points, then this would give uniqueness. Because if
u1 , u2 both solve P u v, then we must have that `u1 , θi e `u2 , θi e for all θi > U .
– 37 –
2. Distributions
S α S BK
terization of u in P u v:
because y @ b. And so we get that uy ua Ra vdx and we have the fundamental theorem
y
of calculus.
Lecture 12 (10/8)
Recall we were discussing the fundamental solution. If P is a linear operator of the form:
P Q aαDα
SαSBk
– 38 –
2. Distributions
Once we have this, we can prove existence. Given a nice function f , then if we formally
write:
u f S f y Ey xdy
Then we can recover P u (which gives us uniqueness). This formula above we will call the
representation formula for u.
We also have uniqueness for a boundary valued problem. This is the same idea where we
write for x > U :
ux `u, δx e `1U u, δx e `1U u, P E x e
`1U P u, E xe
terms involving © 1U
But these extra terms give us the contributions of the boundary values of u.
Remark 2.5.1. If you consider constant coefficient P. This is the case where aα are con-
stants. There are two simplifications that occur.
1. if P E0 δ0 then Ey x E0 x y is a fundamental solution for P at P E0 y
δ0 y δy . So we get translational invariance.
2. P v P α Bk 1 α aαDαv P v x. Therefore if we set E xy
S S
S S
E0 x y then
Py E x P E0x y δ0x y δx
So we will now only consider the constant coefficient P . Then Ey x E0 x y and
E
x y E0 x y . Then we have that:
u f S f y E0 x y dy f E 0 x
So for uniqueness, E
x y E 0 x y :
` P u, E xe S P uyE0x
y dy Pu E0 x
– 39 –
3. Laplace Equation
3 Laplace Equation
Now we will consider P ∆. Let’s begin by deriving fundamental solutions to the Laplacian.
That is solve ∆E0 δ0 in Rd for all d C 2. We already did this. We will use rotational
invariance of ∆. That is given a rotation matrix R on Rd , then we have:
∆U Rx ∆uRx
Then we can look for E0 E0 r. So we would like a∆E0 , 1B 0,r f aδ0 , 1B 0,r f 1.
Integrating by parts the thing on the left, we get:
x
S ©E0 © 1B 0,r S © E0 νdS∂B 0,r S SxS
©E0 dS∂B 0,r S∂B 0, r S∂r E0 dαdrd 1 ∂r E0 r
Now:
χu ∆χu E0 S ∆χu 2©χ © uy E0 x y dy
– 40 –
3. Laplace Equation
Then ∆χ is supported in the annulus of radii r~2 and r. Then we get control of the thing
by:
C
rdSαS SB x,r
SuSdy something
u u f c
constant.
Theorem 3.0.5 (Mean Value Property). If u is harmonic then:
1 1
ux S
S∂B x, r S ∂B x,r
udS y S
SB x, r B x,r
u y
– 41 –
3. Laplace Equation
So we ultimately get:
And the first term can be shown to be zero by the divergence theorem(possibly?).
Lecture 13 (10/15)
Day of the midterm Lecture 14 (10/17)
Today we will spend 20-30 minutes discussing the midterm and review what we have done
so far in the course.
General Theory
1. For first-order scalar PDE’s we studied the method of characteristics. The idea is that
we can always reduce PDE’s to first order ODE’s which then we can use ODE results
to get existence and uniqueness. This required noncharacteristic boundary data. In
the quasilinear case, the noncharacteristic boundary data is equivalent to the condition
that we are able to compute all derivatives of the solution at the boundary point. We
extended this to general quasilinear k-th order systems (see Holmgien’s theorem about
an application). We then discussed Cauchy Koweleskaya, but it isn’t the end due to
Hadamard’s example. He emphasized well-posedness (which asks not only existence
and uniqueness, but also continuous dependence).
2. Then we turned to constant coefficient 2nd order scalar PDE’s. We are studying the
four equations:
¢̈∆u 0
¨
¨
¨
¨
¨
¨ju 0
¦
¨
¨ ∂t ∆u 0
¨
¨
¨
¨i∂ ∆u
¤̈ t 0
this can be studied on a case by case basis (like in Evans), but we can study them at
the same time using distribution theory. This allows us to make use of the fundamental
solution: P u δy and P u δx . We gen existence for P u f . And we get uniqueness:
– 42 –
3. Laplace Equation
we first assume a solution exists, then we prove estimates on them. For existence we would
like `u, P ϕe `f, ϕe. This is something we can define if P is injective. Then the linear
functional `u, P ϕe is well-defined in span P ϕ ϕ > C0 U . This strategy favors L2 type
ª
spaces, in which we can find solutions u > L2 , and all derivatives will stay in L2 . A Sobolev
spcae is H k u Dα u > L2 , SαS B k .
∂x u ∂y v 0
∂y u ∂x v 0
This is equivalent to ∂x i∂y f 0. Note that if this is true, then each component of f is
harmonic as ∂x i∂y ∂x i∂y ∂x2 ∂y2 ∆. Now in 2 d the fundamental solution is:
1
∆ log r δ0
2π
1 1
Decomposing ∆, we get the fundamental solution ∂x i∂y log r x~r 2 iy ~r 2
2π 2π
1
2πz
1 1
Theorem 3.1.1. E0 is a fundamental solution for ∂x i∂y
2π z
Theorem 3.1.2. If ∂x i∂y f 0 for f > D U then f > C
ª
U
Proof. Let z0 > U , let χ be 1 near z0 and zero outside a small ball centered at z0 . Then:
Then since χf is compactly supported, we can use the representation formula to get that:
But this last term vanishes near z0 , if we let that term be F , then we have (if F is a function):
Corollary 3.1.1 (Morera’s Theorem). If f is C 1 on U and RΓ f z dz for all closed curves
Γ, then f is holomorphic (i.e. f is a C solution to CR equation)
ª
– 43 –
3. Laplace Equation
Proof. We say that f is a distribution solution if and only if `f, ∂x i∂y ϕe 0 for all ϕ >
C0 U complex valued. Where the inner product is now defined as `u, ve R uzvzdxdy
ª
1 f z
f z0 S
2π ∂Ω z z0
dz
f z0 f z0 χΩ z0 S δ z z0 χΩ z0 f z dz S ∂x i∂y E0 z0 z χz f z dz
then we integrate by parts, when the differential operator falls on f we get zero, when it
falls on χ we use the lemma to get:
1 1 1 dz
2π S z0 z
∂X i∂y χΩ z f z dxdy S
2πi ∂Ω z z0
f z
∆u f
what we did last time was we computed the (radial) fundamental solutions for ∆. That is
∆E0 δ0 which had the form:
¢̈ 1
¨
¨ log SxS d 2
¨ 2π
E0 x ¦ 1
¨
¨ SxSd2 dC3
¨
¤̈ dα d d 2
– 44 –
3. Laplace Equation
E0 ∆u u
where we used the fact that 1U u has compact support. Recall that ∂j 1U νj dS∂U . So C
has a term that becomes:
Q ∂j E0 u∂j 1U x Q S∂U ∂y E0x y uy νj dS y S∂U ν DE0 x y uy dS y
j j
And B becomes:
– 45 –
3. Laplace Equation
u x S∂B x,r
ν Dy Ẽ x y uy dS y
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
∂r Ẽ0 r
1
but we know that ∂r E
dαdrd 1
Remark 3.2.1. The same proof applies to ∆u x 0. This leaves to a nice proof of Jensen’s
formula in complex analysis.
Theorem 3.2.2 (Maximal Principal). Let U be a bounded C 1 -domain (open and con-
nected). Let u > C 2 U 9 C 0 Ū with ∆u 0 in U then:
max u max u
Ū ∂U
Proof. It suffices to prove the strong maximal principal. Let M maxŪ u. Let B x, r ` U .
Then:
1
M ux0 S
S∂B x, r S ∂B x,r
udS y B M
– 46 –
3. Laplace Equation
note that ν Du is not prescribed. If f and g are continuous and u > C 2 U 9 C Ū then u
is unique.
Proof. If u and u solve this, then u u is harmonic with boundary value 0. Then by the
sup u B C inf u
V̄ V̄
1
this follows by averaging R udS for 0 @ r @ r.
S∂B x, r S ∂B x,r
Now take r P distV̄ , ∂U . Then take x, y > V such that Sx y S @ r. Then we can write:
1 1 2d
ux S
SB x, r S B x,r
udz B S
SB x, r S B y,2r
udy S
SB y, 2r S B y,2r
udy B 2d uy
2 dN
uy B ux B 2dN uy
– 47 –
3. Laplace Equation
Lecture 16 (10/24)
Recall our discussion of the Laplace equation. We got uniqueness of a solution to the Dirchlet
problem for a bounded domain U
¢̈
¨∆u f in U
¦
¨u g on ∂U
¤̈
¢̈
¨∆x G , y δ0 x y in U
¦
¨Gx, y 0
¤̈
x > ∂U
So, existence theory for the Dirichlet problem implies the existence of a Green’s function.
Now, if you have a Green’s function, you have existence of a solution u f to:
¢̈
¨∆u f f in U
¦
¨u f 0 on ∂U
¤̈
where we set:
– 48 –
3. Laplace Equation
So we have that “homogeneous and nontrivial boundary value” can be solved by solving
“inhomogeneous and trivial boundary value”
It turns out that existence to Dirichlet problem is equivalent to the existence of a Green’s
function.
2. Gx, y Gy, x
Proof.
G x, y
S δ0 x z G z, y dz
S 1U z δ0 x z G z, y dz
S ∆z Gz, xG z, y 1U z dz
d d
QS ∂j Gz, x∂j G z, y 1U z dz Q S ∂j Gz, xG z, y ∂y 1U z dz
j 1 j 1
d d
QS Gz, x∂j2 G z, y 1U z dz Q S Gz, x∂j G z, y ∂j 1U z dz
j 1 j 1
d
QS ∂j Gz, xG z, y ∂j 1U z dz
j 1
– 49 –
3. Laplace Equation
The idea is that the fundamental solution E0 x, y models the electric potential of a unit
point charge which is situated at y. To ensure that on the boundary that the potential is
zero, we simply pretend there is an opposite charge on the other side of the boundary. So
we would get Gx, y E0 x y E0 x ȳ , where ȳ is y reflected across the boundary
Example 3.3.1. Let U B 0, 1.
so we get that
1
E 0 x E0 λx
λd2
for λ A 0
Proof. (that this is the Green’s function) Let y > B 0, 1. It suffices to check that Sx y S
Sy SSx ȳ S for x > ∂B 0, 1 if and only if SxS2 1.
– 50 –
3. Laplace Equation
Theorem 3.3.2 (Representation formula for the Dirichlet problem). Let u > C 2 U 9
C Ū , then we have that:
Proof.
d d
QS ∂yj Gx, y ∂yj uy 1U y dy Q S ∂yj Gx, y uy ∂yj 1U y dy
j 1 j 1
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
A
d
A S Gx, y ∆uu1U y dy Q S Gy, x∂j u∂j 1U dy
j 1
First assume that u > C Ū . Then the first and third term is zero by Gx, y
ª
0 for y > ∂U .
For general u, you can approximate u in C 2 U 9 C Ū .
Theorem 3.3.3 (Poisson integral formula). Given g > C ∂U (for today U is the half
plane or the unit ball). If we define:
Recall that we were discussing the Green’s function for the Laplace equation. It solved:
¢̈
¨∆G , y δ y in U
¦
¨G , y 0 on ∂U
¤̈
For a domain U ` Rd with y > U . We found that G is unique and that Gx, y Gy, x. So
we also have:
¢̈
¨∆Gx, δ x in U
¦
¨Gx, 0 in ∂U
¤̈
which is the definition in Evan’s.
– 51 –
3. Laplace Equation
Remark 3.3.5. Note that this representation formula does not have a term with the normal
of u.
it turns out that ũx is indeed the solution to the Dirichlet problem (11), but we will not
justify this here.
It isn’t hard to show that ∆ũ g, it is difficult to show that ũx ÐxÐÐÐÐ
x >∂U 0
hx0
Gx, y E0 x y E0 x ȳ
with y y 1 , . . . , y d1 , y d
Example 3.3.3. When U B 0, 1, we used the fact that if y > B 0, 1 and let ȳ Sy S2 y,
then we have that:
– 52 –
3. Laplace Equation
T f x S K x, y f y dy
then K is the kernel of the integral operator. By the Schwartz kernel theorem, any linear
T C0 X D Y is of the form T f `K x, , f e
ª
Now we would like to compute the Poisson kernel: ν y Dy Gx, y for the two above
examples.
Gx, y E0 y x E0 y x̄
1 y d xd 1
∂yd E0 y x ∂yd Sy xSE0 Sx y S
dαd Sy xS Sy xSd1
dαd r 1
d
1 xd
αdd Sy xSd
Then we compute:
1 x̄d
∂yd E0 y x̄
dαd Sy x̄Sd
1 xd
then restrict to y y 1 , 0, to get . So our Poisson kernel becomes:
dαd Sy xSd
2xd 1
ν y Dy Gx, y
dαd Sy xSd
there was a sign error, the final answer is right. So the Poisson integral formula for harmonic
u in Rd is:
2xd 1
ux S dαd Sy xSd
hy dy
– 53 –
3. Laplace Equation
ν y Dy Q y α ∂α
α
Then we have:
1 1
Q y α ∂y α E0 y x Q yα∂y αS y xS
dαd Sy xSd 1
α α
Sy S2y x 1
dαd Sy xSd
And we have:
1 1
Q yα∂y α E0 SxSSy x̄S Q y α ∂y α S xSSy x̄S
dαd SxS Sy x̄Sd
d 1 1
α α
1 1 Pα y α y α x̄α 1
dαd SxSd 2 Sy x̄S
Sy x̄Sd 1
1 1 Sy S2 yx̄
dαd SxSd 2 Sy x̄Sd
for the first term, and the second term becomes something else, and after we add them up,
we get:
SxS2 1 1
dαd Sy xSd
1 SxS2 hy
u x S∂B 0,1
ν y Dy Gx, y hy dS y S∂B 0,1
dαd Sy xS
d
dS y
– 54 –
4. Wave Equation
4 Wave Equation
The wave equation is an equation on ϕ R1 d R where the coordinates of the domain are
1 2
j ϕ ∂ ∆ϕ 0
c2 t
this is called the d’Alembertian operator. This is the prototypical hyperbolic equation.
We always take c 1 in math courses. We have three goals:
1. find an explicit fundamental solution for j. We will figure out that there are a lot of
symmetries of j.
¨∂ ϕ h on ∂R1 d
¤̈ t
4.1 1 dimension
Let’s consider the domain R1 1 . Then we have:
So let’s take advantage of this factoring by finding the fundamental solution. Let’s make a
change of coordinates u t x, v t x. Then we get that du dt dx, dv dt dx threfore
1 1 ∂t ∂x 1 1
dt du dv and dx du dv this means that ∂u ∂t ∂x ∂t ∂x and
2 2 ∂u ∂u 2 2
1 1
∂v ∂t ∂x .
2 2
Then if E is our fundamental solution, we have:
where we had to use the approximation method to compute δ with a change of variables. So
we just need to solve:
1
∂u ∂v E0 δ0 uδ0 v
2
So that:
1
E0 H u c1 H v v2
2
– 55 –
4. Wave Equation
supp E ` t C 0. This forces c1 c2 0. Therefore the fowards fundamental solution has
the form:
1 1
E H u H v H t x H t x
2 2
Lecture 18 (10/31)
1 1
`u X Φ, ϕe du, ϕXΦ i
S det DΦS
∂ x 1 , . . . , x d
S uΦxϕxdx S u y ϕ Φ 1
y W
∂ y 1 , . . . , y d
W dy
1 1
du, ϕΦ y i
S det DΦy S
Now since Φ is a diffeomorphism, we have that we still get a test function for the input of
u
1
Corollary 4.1.1. δ0 X Φ δΦ1 0
S det DΦS
this is the sign choice that people usually use in general relativity. For R1 1 , we got:
1
E0 H t x c1 H t x c2
2
Then we introduced something called the forward fundamental solution, which required
that supp E ` t C 0. This is the relevant choice for solving the wave equation because we
would like to know what happens in the future. This requires jE δ0 and E 0 before
seeing δ0 . Then we get c1 c2 0, and get the unique forward fundamental solution:
1
E H t x H t x
2
– 56 –
4. Wave Equation
that for all ϕ > C Rd , for all t, x > Rd then, we get d’Alembert’s formula
ª
1 t x t s
1 1 x 6
2 S0 Sx t s 2 Sx t
ϕt, x jϕ s, y dyds ϕ 0, x t ϕ 0, x t ∂t ϕ0, y dy
2
Note that there is a sign change in Evans due to the choice of sign of j.
Let’s turn to the derivation of the reresentation formula for any dimension (given that
we know the forward fundamental solution). Then we will go towards a way of deriving
fundamental solutions for all dimensions.
¢̈
¨jE δ0
¦
¨supp E ` t C 0
¤̈
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
compactly supported
Proof. (real) Given an interval I a, b with a, b > R 8 ª, let χI be a smooth function
that is 1 on I and 0 in s B a1 and s C b1. Then we will consider χI uE for all bounded I.
?
`χI u E , ϕe `u E , χI ϕe du, E χI ϕ i
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶
need to show is a test function
f
g y S f x y g xdx
– 57 –
4. Wave Equation
suppχ M, E ª
χI ϕ ` t C M t > I
Yeah...I got a little lost, this is one of those things that is easier to just work out by yourself.
fundamental solution with the property (13) and let E be any forward fundamental solution
to:
¢̈
¨jE δ0
¦
¨supp E ` t C 0
¤̈
Then E
E
Proof. We have:
E δ0 E jE E0 E E E E E δ0 E
j j
Proof. We have:
2
∂ E ϕ1tC0 ∆E ϕ1tC0
t
2
E ∂t ϕ1tC0 E ∂t ϕδt 0 E ∆ϕ1tC0 ∂t E
ϕδt 0
Then if you believe that supp E ` SxS B t, then you are integrating over the triangle below
the thing.
1
Let’s specialize to d 1, then E H txH tx, then let’s look at the representation
2
formula. Then we have:
1 1
2 S S 2 U
E ψ t, x ψ s, y H t s x y H t s x y dsdy ψ s, y dsdy
– 58 –
4. Wave Equation
1 t x t s
2 S0 Sx t s
ψ s, y dyds
1 t x t s
2 S0 Sx t s
E j ϕ jϕdyds
2 Sx t
E
∂t ϕδt 0 ∂t ϕ0, y dy
and:
1 x t
1
2 Sx t
∂t E
ϕδt 0 ∂t ϕ0, y dy ϕ0, x t ϕ0, x t
2
Lecture 19 (11/5)
j ϕ 0
1
In R1 1 we were able to compute the fundamental solution E t, xH t xH t x. In
2
R1 d , given a forward fundamental solution E with the condition that for all bounded inter-
vals supp E 9 t, x t > I is compact. Then we were able to construct the representation
then:
E f S E t s, x y f s, y dsdy
then we are just integrating over the inverted cone in R1 d . This is known as “finite speed
of propagation”.
– 59 –
4. Wave Equation
Remark 4.1.2. In the case of wave equations, we can take the representation formula to
write down the solution to the initial value problem. For t, x > R1 d , define
Then the claim is that for nice enough f, g, h (for instance all smooth) then ϕt, x does solve
the initial value problem (14).
The goal today is to find the foward fundamental solution in R1 d for d C 1. To do this,
we will exploit the symmetries of j to narrow down the possible candidates for E .
j ϕ X L jϕ X L
ϕt, x x0
j j ϕt, x x0
We are instead interested in symmetries that fix 0, 0. These are called the Lorentz
transformations. These are:
3. (Lorentz boosts) Lorentz boost with speed v and direction x1 is the formula:
1
t, x ( ºt vx1 x1 vt
,º
1 v2 1 v2
Any other Lorentz transform is given by a composition of the above. Another way to
characterize these is L is a Lorentz transformation if and only if it preserves:
t2 x1 2 xd 2
Another property of the Lorentz transformation is the behavior of j under scaling. For all
(
λ A 0, given the scaling transformation t, x t~λ, x~λ, then we have:
1
j ϕt~λ, x~λ jϕt~λ, x~λ
λ2
Definition 4.1.1 (Homogeneity). We say a function ϕ Rn 0 R is homogeneous of
1
degree a if ϕx~λ ϕx
λa
a
RRt Rt R I and det R 1
– 60 –
4. Wave Equation
n a
`ϕ, λ ψ λ e λ `ϕ, ψ e
So if we are looking for jE δ0 and j lowers the degree by 2 and δ0 has degree 1 d,
`ϕ, ψ e S0 x a ψ xdx
Rn 0.
E in R1 d 0, 0
s2 t, x t2 SxS2 is invariant under all Lorentz transformations, we will assume that
kinda d1 2
E χ 2 t SxS2
d1 2 2
E 1t> 0, ª χ 2 t SxS
d1
to make sure that this is well-defined, it is good to require that χ 2 vanishes for s @ 0.
Claim 4.1.1.
j E 0 in R1 d
0, 0
– 61 –
4. Wave Equation
Theorem 4.1.4. If ϕ > D Rn and supp ϕ ` 0, then ϕ Pα cαDαδ0 for at most finitely
many cα x 0
Using this theorem, we have that jE Pα cα Dα δ0 and jE is homogeneous of degree d 1.
This implies that jE c0 δ0 . So the last thing you have to check is that c0 x 0 One way to
do this is to compute, which is possible, but difficult). But this fact is not difficult, you just
need the uniqueness of jϕ 0 in R1 d and supp ϕ ` R1 d then ϕ 0 (this can be proved by
d1
Remark 4.1.4. χ 2 is a distribution so the composition of it with t2 SxS2 should be
understood by approximations.
Let’s now prove the claim.
d1
Proof. letting χ χ 2
d1 d1
j1tC0 χ 2
2
t SxS
2
1tC0 j χ 2 t
2
2
SxS 1tA0 ∂t 2tx1 t2 SxS2 Q ∂j 2xj x1 t2 SxS2
j
2
d1
d1
this is because χ is a homogeneosus distribution of degree
1
2 2
Lecture 20 (11/7)
Recall that we were aiming to derive the fundamental solution for j in R1 d . what we
were doing was looking at symmetries of j, that is finding L R1 d R1 d linear such that
s2 t, x t2 SxS2 s2 Lt, Lx. And we were looking at homogenity of j and δ0 . And
looking at the forward condition supp E ` R1 d . Then our guess was the E is of the form:
d1 2 2
Ẽ 1R1d χ 2 t SxS
d1 d1
with χ
2
homogeneous of degree
d1
2
in D R with supp χ
2
` 0, ª.
We computed that jẼ 0 outside the origin, i.e. jE ` 0, 0 and is homogeneous of
degree d 1 which tells us that jẼ cδ0 . Then it is not too difficult to show that c x 0,
– 62 –
4. Wave Equation
χ̃a
H x x a
To extend this to a B 1, we will differentiate our distribution. Observe that by computation
(for a A 1) we have (you have to check this):
d a
χ̃
aH xxa 1
aχ̃a
1
dx
Then let’s just define: χ̃ a by using:
1 d a
χ̃a
χ̃
1
a 1 dx
this is good, and will give nice distributions χ̃a which will happen if a
1, 2, 3, . . . . That’s
why we need to be a little smarter, let:
χa
ca H x x a
Γa S0 xa e
x
then we wouldn’t have this problem. To see this:
d a 1 Γaa a
χ
aH xxa 1
χ
1
dx Γa 1 Γa 1
Then since Γa 1 aΓa, so the above is just χa 1 . So for a B 1, we define:
d a dk a
χa
χ
1
χ
k
dx dxk
for k > N large enough so that a k A 1.
Example 4.1.4. χ0
H x. And:
k k 1
χ δ0 x
Also we have:
1~2 1 1~2
χ º H x x
π
º 1~2k 1
º H xx1~2 k
this uses the computation that Γ1~2 π. Therefore we have χ
π
– 63 –
4. Wave Equation
Remark 4.1.5. Up to a constant, this is the unique distribution with the three desired
properties.
d1
δ
Remark 4.1.6. j Ẽ
2π 2 0 . This means that:
1 d1
2 2
E 1 d χ
d1 R1
2
t SxS
2π 2
1. Finite speed of propagation (weak Huyghen’s principal) This follows from the
fact that:
supp E ` t, x > R
0 B SxS B t
With the representation formula that we just derived:
ϕt, x E ∂t ϕδt 0 ∂ t E ϕδt 0 E j ϕ
with ϕ > C0 R1 d and all these convolutions involve integration on inverted cones
ª
which is only supported it t, x t SxS which is the boundary of the light cone.
1 1
Remark 4.1.7. When d 1, we get E 1 11 H t2 SxS2 H t x H t x
2 R
2
Remark 4.1.8. IF d 2 then we have:
1 1
E 1R12 1~2
2π t SxS2
2
If you plug this into the representation formula, you will get Poisson formula for the 2
dimensional wave equation.
Remark 4.1.9. For d 3, then we get:
1
E 1 13 δ0 t2 SxS2
2π R
1
the easiest way to compute this is by the approximation method, with ϕs~λ δ0 as λ 0,
λ
so we get:
1
º
4
dSC0
2πt
with C0 t, x t SxS. Plugging this into the representation formula gives you the Kir-
choff formula in R1 3 .
Remark 4.1.10. In Evan’s book, for chapter 2.4, there is an alternative derivation of the
representation formula for j using a very different method (the method of spherical means).
– 64 –
5. Fourier Transform
5 Fourier Transform
Let’s motivate why the Fourier transform takes the form that it does. This will be a brief
two lecture crash course on the Fourier transform.
The Fourier transform can be thought of as a change of basis in the space of functions.
One way to understand $f(x)$ is as a description of $f$ in the basis that consists of $\delta_0(x-y)$ for all $y \in \mathbb{R}^d$. We can do this because we can write
\[ f(x) = \int f(y)\,\delta_0(x-y)\,dy. \]
This is not good for understanding $\partial_x$ or any $D^\alpha$. It is reasonable to ask for a basis that diagonalizes our linear transformation of differentiation. The Fourier transform selects the basis $\{e^{i\xi\cdot x}\}_{\xi\in\mathbb{R}^d}$.
The Fourier transform $\mathcal{F}f(\xi)$ is the change of basis formula such that
\[ f = \int a(\xi)\,e^{i\xi\cdot x}\,d\xi, \qquad \mathcal{F}f(\xi) = a(\xi). \]
To find it, write
\[ \delta_0(x-y) = \int m(y,\xi)\,e^{ix\cdot\xi}\,d\xi. \]
Note that $\delta_0(x-y)$ is the $y$-translate of $\delta_0(x)$, so translating only multiplies each coefficient by a phase: $m(y,\xi) = m(0,\xi)\,e^{-i\xi\cdot y}$. We then conclude that $m(0,\xi) = c$, and we have already deduced $\delta_0(x-y) = \int c\,e^{-i\xi\cdot y}\,e^{i\xi\cdot x}\,d\xi$. Now we have
\[ f(x) = \int f(y)\,\delta_0(x-y)\,dy = c\int\Big(\int f(y)\,e^{-i\xi\cdot y}\,dy\Big)\,e^{i\xi\cdot x}\,d\xi. \]
Therefore we have
\[ \mathcal{F}f(\xi) = \int f(y)\,e^{-i\xi\cdot y}\,dy. \]
Note that in
\[ \delta_0(x) = c\int e^{ix\cdot\xi}\,d\xi \]
the integral factors as a product of one-dimensional integrals,
\[ c\int e^{ix\cdot\xi}\,d\xi = c\prod_{j=1}^d\int e^{ix_j\xi_j}\,d\xi_j, \]
so if $\delta_0(x) = c_1\int e^{ix\xi}\,d\xi$ in $\mathbb{R}$, then $c_d = c_1^d$.
Proof.
Method 1:
\[ \lim_{\varepsilon\to 0}\int e^{-\varepsilon|\xi|}\,e^{ix\xi}\,d\xi = \lim_{\varepsilon\to 0}\Big(\int_0^{\infty} e^{-\xi(\varepsilon - ix)}\,d\xi + \int_{-\infty}^{0} e^{\xi(\varepsilon + ix)}\,d\xi\Big) = \lim_{\varepsilon\to 0}\Big(\frac{1}{\varepsilon - ix} + \frac{1}{\varepsilon + ix}\Big) \]
\[ = \frac{1}{i}\,\lim_{\varepsilon\to 0}\Big(\frac{1}{x - i\varepsilon} - \frac{1}{x + i\varepsilon}\Big) = 2\lim_{\varepsilon\to 0}\frac{\varepsilon}{x^2+\varepsilon^2} = 2\pi\,\delta_0. \]
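To justify the last step (a routine check, added for completeness): for any test function $\psi$,
\[ \int \frac{2\varepsilon}{x^2+\varepsilon^2}\,\psi(x)\,dx = \int \frac{2}{1+y^2}\,\psi(\varepsilon y)\,dy \;\longrightarrow\; \psi(0)\int\frac{2\,dy}{1+y^2} = 2\pi\,\psi(0) \qquad (\varepsilon\to 0), \]
so $\frac{2\varepsilon}{x^2+\varepsilon^2} \to 2\pi\,\delta_0$ in $\mathcal{D}'(\mathbb{R})$.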
Lecture 21 (11/12)
Recall our discussion of the Fourier transform. We motivated the Fourier transform as a change of basis from $\{\delta_0(x-y)\}_{y\in\mathbb{R}^d}$ to the new basis $\{e^{i\xi\cdot x}\}_{\xi\in\mathbb{R}^d}$, writing
\[ f(x) = \int a(\xi)\,e^{i\xi\cdot x}\,d\xi. \]
So we would like to relate $f(y)$ with $a(\xi)$. The heart of the matter is to write
\[ \delta_0(x) = c\int e^{ix\cdot\xi}\,d\xi. \]
We had one way to compute $c = (2\pi)^{-d}$; let's compute it a different way, using an algebraic method.
\[ \delta_0(x) = c\int e^{i\xi\cdot x}\,d\xi. \]
Some properties:
1. $x\,e^{-i\xi x} = i\partial_\xi\,e^{-ix\xi}$.
So if $(\partial_x + x)f = 0$, then $(i\partial_\xi + i\xi)\hat f = 0$, i.e. $(\partial_\xi + \xi)\hat f = 0$. The first equation is easy to solve, and we get
\[ f(x) = f(0)\,e^{-\frac{1}{2}x^2}. \]
Let's normalize $f(0) = 1$ to get $f(x) = e^{-\frac{1}{2}x^2}$; this gives us
\[ \hat f(\xi) = \hat f(0)\,e^{-\frac{1}{2}\xi^2}, \]
but we have
\[ \hat f(0) = \int f(y)\,dy = \int e^{-\frac{1}{2}|y|^2}\,dy = \sqrt{2\pi}. \]
Therefore $\hat f(\xi) = \sqrt{2\pi}\,e^{-\frac{1}{2}|\xi|^2}$. Therefore
\[ 1 = f(0) = \bar c\int \hat f(\xi)\,d\xi = \sqrt{2\pi}\,\bar c\int e^{-\frac{1}{2}|\xi|^2}\,d\xi = 2\pi\,\bar c, \]
so $\bar c = \frac{1}{2\pi}$ in one dimension.
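The same computation, run in each variable separately, gives the $d$-dimensional statement (recorded here for later use; the constant is standard):
\[ \mathcal{F}\big(e^{-\frac{1}{2}|x|^2}\big)(\xi) = \prod_{j=1}^d\int e^{-\frac{1}{2}x_j^2}\,e^{-i\xi_j x_j}\,dx_j = (2\pi)^{d/2}\,e^{-\frac{1}{2}|\xi|^2}, \]
consistent with $c = (2\pi)^{-d}$ in $\mathbb{R}^d$.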
Let's move to a rigorous derivation. Let's begin with some comments about complex-valued distributions. Let $C_0^\infty(U;\mathbb{C})$ be the set of smooth, compactly supported, complex-valued test functions; distributions are then taken to be conjugate-linear functionals on this space. The Fourier transform is
\[ \mathcal{F}f(\xi) = \hat f(\xi) = \int f(y)\,e^{-i\xi\cdot y}\,dy. \]
Let us introduce $\mathcal{F}^*$ in the following way. With the pairing
\[ \langle a, b\rangle_{(2\pi)^{-d}d\xi} = \int a\,\bar b\,\frac{d\xi}{(2\pi)^d}, \]
define $\mathcal{F}^*$ by
\[ \langle \mathcal{F}f, a\rangle_{(2\pi)^{-d}d\xi} = \langle f, \mathcal{F}^* a\rangle. \]
Since
\[ \langle \mathcal{F}f, a\rangle_{(2\pi)^{-d}d\xi} = \iint f(y)\,e^{-i\xi\cdot y}\,\overline{a(\xi)}\,dy\,\frac{d\xi}{(2\pi)^d}, \]
we get
\[ \mathcal{F}^* a(x) = \check a(x) = \int a(\xi)\,e^{i\xi\cdot x}\,\frac{d\xi}{(2\pi)^d}. \]
Lemma 5.0.1.
\[ \mathcal{F}\big(f(\cdot - x_0)\big)(\xi) = e^{-ix_0\cdot\xi}\,\mathcal{F}f(\xi). \]
A function $f \in C^\infty(\mathbb{R}^d)$ belongs to the Schwartz class $\mathcal{S}$ if $\sup_x|x^\alpha D^\beta f(x)| < \infty$ for all $\alpha,\beta$.
By our lemma, $\mathcal{S}$ is closed under the Fourier transform and under the adjoint of the Fourier transform. Therefore $\mathcal{F}:\mathcal{S}\to\mathcal{S}$, and $\mathcal{F}$ can also act on the dual space of the Schwartz class. Also $\mathcal{S}' \subset \mathcal{D}'(\mathbb{R}^d;\mathbb{C})$, but this is a strict subset.
Remark 5.0.5. Usually, the way to compute $\mathcal{F}u$ when $u \in \mathcal{S}'$ is to approximate $u$ by $u_j \in L^1$, for which
\[ \mathcal{F}u_j = \int u_j(y)\,e^{-i\xi\cdot y}\,dy. \]
We also have the Plancherel identity
\[ \int f\,\bar g\,dx = \int \mathcal{F}f\,\overline{\mathcal{F}g}\,\frac{d\xi}{(2\pi)^d}. \]
In particular,
\[ \|f\|_{L^2}^2 = \|\mathcal{F}f\|^2_{L^2\left(\frac{d\xi}{(2\pi)^d}\right)}. \]
\[ \mathcal{F}^*\mathcal{F}f = \int \mathcal{F}f(\xi)\,e^{i\xi\cdot x}\,\frac{d\xi}{(2\pi)^d} = \lim_{\varepsilon\to 0}\int \mathcal{F}f(\xi)\,e^{-\varepsilon(|\xi_1|+\dots+|\xi_d|)}\,e^{i\xi\cdot x}\,\frac{d\xi}{(2\pi)^d} \]
\[ = \lim_{\varepsilon\to 0}\iint f(y)\,e^{-i\xi\cdot y}\,e^{-\varepsilon(|\xi_1|+\dots+|\xi_d|)}\,e^{i\xi\cdot x}\,dy\,\frac{d\xi}{(2\pi)^d} = \lim_{\varepsilon\to 0}\int\Big(\prod_{j=1}^d \int e^{-\varepsilon|\xi_j| + i\xi_j(x_j-y_j)}\,\frac{d\xi_j}{2\pi}\Big)\,f(y)\,dy = f(x), \]
using
\[ \lim_{\varepsilon\to 0}\int e^{-\varepsilon|\xi|}\,e^{i\xi x}\,d\xi = 2\pi\,\delta_0(x). \]
There are two steps to show this. First, $\partial_{\xi_j}1 = 0$ implies that $x_j\,\mathcal{F}^*1 = 0$ for all $j$. With a little more argument, this reduces to computing the constant in $\mathcal{F}^*1 = c\,\delta_0$. This follows by testing against a Gaussian:
\[ \big\langle \mathcal{F}^*1,\, e^{-\frac{1}{2}|x|^2}\big\rangle = \big\langle 1,\, \mathcal{F}e^{-\frac{1}{2}|x|^2}\big\rangle_{(2\pi)^{-d}d\xi}, \qquad \big\langle c\,\delta_0,\, e^{-\frac{1}{2}|x|^2}\big\rangle = c, \]
and both sides can be computed explicitly.
Remark 5.0.6. The Fourier inversion formula and the Plancherel theorem are both about finding a way to justify
\[ \delta_0 = \int e^{i\xi\cdot x}\,\frac{d\xi}{(2\pi)^d}. \]
Remark 5.0.7. The Fourier transform maps convolutions to products (and vice versa):
\[ \mathcal{F}(f*g) = \mathcal{F}f\,\mathcal{F}g, \]
and, with the convolution taken with respect to the measure $\frac{d\xi}{(2\pi)^d}$,
\[ \big(a *_{(2\pi)^{-d}d\xi} b\big)(\eta) = \int a(\xi)\,b(\eta-\xi)\,\frac{d\xi}{(2\pi)^d}, \]
we have
\[ \mathcal{F}^{-1}\big(a *_{(2\pi)^{-d}d\xi} b\big) = \mathcal{F}^{-1}a\;\mathcal{F}^{-1}b. \]
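The first identity is just Fubini (a short check, included for completeness):
\[ \mathcal{F}(f*g)(\xi) = \iint f(y)\,g(z-y)\,e^{-i\xi\cdot z}\,dy\,dz = \int f(y)\,e^{-i\xi\cdot y}\,dy\int g(w)\,e^{-i\xi\cdot w}\,dw = \hat f(\xi)\,\hat g(\xi), \]
after substituting $w = z - y$.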
5.1 Applications
The goal today is to give some applications of the Fourier transform to the study of evolu-
tionary PDE’s. The main ideas to get across are:
1. using the Fourier transform to solve homogeneous evolutionary PDE's, i.e. PDEs of the type $(\partial_t + A)u = 0$
2. Duhamel's principle: using the homogeneous solution to solve the inhomogeneous PDE.
3. discuss the Fourier transform of the forward fundamental solution. The main example to study is the heat equation
\[ \begin{cases} (\partial_t - \Delta)u = f & \text{in } \mathbb{R}^{1+d}\cap\{t>0\} \\ u = g & \text{on } \{t=0\} \end{cases} \]
and the Schrödinger equation
\[ \begin{cases} (i\partial_t + \Delta)u = f & \text{in } \mathbb{R}^{1+d}\cap\{t>0\} \\ u = g & \text{on } \{t=0\} \end{cases} \]
These and the wave equation are worked out in the notes.
Let's begin by studying the homogeneous equation
\[ \begin{cases} (\partial_t - \Delta)u = 0 & \text{in } \mathbb{R}^{1+d}_+ \\ u = g & \text{on } \{t=0\} \end{cases} \]
From now on, $\mathcal{F}$ will denote the Fourier transform in the space variables only, and $\mathcal{F}_{t,x}$ will mean the space-time Fourier transform. Letting $\hat u(t,\xi) = \mathcal{F}u(t,\cdot)(\xi)$, we get the new problem
\[ \begin{cases} (\partial_t + |\xi|^2)\,\hat u(t,\xi) = 0 \\ \hat u(0,\xi) = \hat g(\xi) \end{cases} \tag{15} \]
Solving this first order ODE (in $t$, for each fixed $\xi$), we get
\[ \hat u(t,\xi) = \hat g(\xi)\,e^{-t|\xi|^2}. \]
1. (existence) For $g \in L^2$, there exists a solution $u \in C_t([0,\infty); L^2)$ to (15) such that $\|u(t,\cdot)\|_{L^2} \leq \|g\|_{L^2}$.
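The $L^2$ bound is immediate from Plancherel (a one-line check, added here):
\[ \|u(t,\cdot)\|_{L^2}^2 = \int |\hat g(\xi)|^2\,e^{-2t|\xi|^2}\,\frac{d\xi}{(2\pi)^d} \leq \int |\hat g(\xi)|^2\,\frac{d\xi}{(2\pi)^d} = \|g\|_{L^2}^2 \qquad (t \geq 0). \]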
\[ (\partial_t + A(t))u = f, \]
where $A(t) = \sum_{|\alpha|\leq N} a_\alpha(t,x)\,D^\alpha$ with $D^\alpha$ not containing $\partial_t$. This is a first-order (in time) evolutionary equation. Suppose that we know how to solve the homogeneous problem; in other words, given a space $X$ of distributions on $\mathbb{R}^d$, there exists a solution operator $S(t,s)g$, defined for $g \in X$ and $t \geq s$, such that
\[ (\partial_t + A(t))\,S(t,s)g = 0 \ \text{ for } t > s, \qquad S(s,s)g = g. \]
Now, given any $F$, writing $F(t,x) = \int F(s,x)\,\delta_0(t-s)\,ds$ suggests that we want to define
\[ u_F(t,x) = \int S(t,s)F(s,x)\,1_{(s,\infty)}(t)\,ds = \int_{-\infty}^{t} S(t,s)F(s,x)\,ds. \]
Then
\[ (\partial_t + A(t))\,u_F = \int (\partial_t + A(t))\big[1_{(s,\infty)}(t)\,S(t,s)F(s,\cdot)\big]\,ds = \int F(s,x)\,\delta_0(t-s)\,ds = F(t,x). \]
(Footnote: use the formula and Plancherel, then apply ODE uniqueness.)
Usually, $X$ will have a norm defined on it, and we will have $\|S(t,s)g\|_X \leq C\|g\|_X$. In this case, if $F \in L^1_t([0,T], X)$ (that is, $\|F(t,\cdot)\|_X$ is absolutely integrable), then $u_F \in C_t([0,T], X)$. Moreover, if $\operatorname{supp}F \subset \{0 \leq t \leq T\}$, then
\[ u_F = \int_{-\infty}^{t} S(t,s)F(s)\,ds = \int_0^t S(t,s)F(s)\,ds, \]
and $u_F(0,x) = 0$.
\[ \mathcal{F}^{-1}\big(e^{-t|\xi|^2}\,\hat g\big) = \mathcal{F}^{-1}\big(e^{-t|\xi|^2}\big) * g = \frac{1}{\sqrt{4\pi t}^{\,d}}\int e^{-\frac{|x-y|^2}{4t}}\,g(y)\,dy; \]
then we can apply Duhamel's principle directly to $\partial_t - \Delta$ to get
\[ u(t,x) = S(t,0)g + \int_0^t S(t,s)f(s)\,ds. \]
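The inverse transform of the Gaussian multiplier can be computed by completing the square (a standard calculation, written out here as a sketch):
\[ \mathcal{F}^{-1}\big(e^{-t|\xi|^2}\big)(x) = \prod_{j=1}^d \int e^{-t\xi_j^2 + i\xi_j x_j}\,\frac{d\xi_j}{2\pi} = \prod_{j=1}^d \frac{1}{2\pi}\sqrt{\frac{\pi}{t}}\;e^{-\frac{x_j^2}{4t}} = \frac{1}{(4\pi t)^{d/2}}\,e^{-\frac{|x|^2}{4t}} \qquad (t>0), \]
since $-t\xi^2 + i\xi x = -t\big(\xi - \frac{ix}{2t}\big)^2 - \frac{x^2}{4t}$.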
Theorem 5.1.1.
1. (existence) For $g \in L^2$ and $f \in L^1([0,T]; L^2)$, there exists a solution to the inhomogeneous equation such that
\[ \|u(t,\cdot)\|_{L^2} \leq \|g\|_{L^2} + \int_0^t \|f(s,\cdot)\|_{L^2}\,ds. \]
(Notation: $u \in L^p_t(I;X)$ iff $t \mapsto \|u(t,\cdot)\|_X \in L^p_t(I)$.)
We can get the forward fundamental solution via the Fourier transform. We should have $\operatorname{supp}E \subset \{t \geq 0\}$ and $E\big|_{t=0} = \delta_0$ (so $\hat E(0,\xi) = 1$), and $\hat E(t,\xi)$ should solve
\[ \begin{cases} (\partial_t + |\xi|^2)\,\hat E(t,\xi) = 0 & t>0 \\ \hat E(0,\xi) = 1 \end{cases} \]
Remark 5.1.3. For any $u \in \mathcal{S}'(\mathbb{R}^{1+d})$ with $\operatorname{supp}u \subset \{t \geq -L\}$, the convolution $u*E$ is well-defined.
Now let's discuss the space-time Fourier transform of the forward fundamental solution. Let's start with
\[ (\partial_t - \Delta)E = \delta_0. \]
Formally taking the space-time Fourier transform,
\[ \mathcal{F}_{t,x}E = \frac{1}{i\tau + |\xi|^2}. \]
This isn't quite enough; we also need to use the support property $\operatorname{supp}E \subset \{t \geq 0\}$. Compute
\[ (\partial_t - \Delta)\big(e^{-\varepsilon t}E\big) = e^{-\varepsilon t}(\partial_t - \Delta)E - \varepsilon\,e^{-\varepsilon t}E = \delta_0(t,x) - \varepsilon\,e^{-\varepsilon t}E, \]
so
\[ (i\tau + |\xi|^2 + \varepsilon)\,\mathcal{F}_{t,x}\big(e^{-\varepsilon t}E\big) = 1, \qquad \mathcal{F}_{t,x}\big(e^{-\varepsilon t}E\big) = \frac{1}{i\tau + |\xi|^2 + \varepsilon}. \]
Remark 5.1.4. Analyticity of the Fourier transform in the lower (or upper) half-plane in $\tau$ corresponds to the original distribution being supported in $\{t \geq 0\}$ (or $\{t \leq 0\}$).
Remark 5.1.5.
\[ \lim_{\varepsilon\to 0^+}\mathcal{F}_t^{-1}\Big(\frac{1}{\tau - i\varepsilon - a}\Big) = i\,1_{(0,\infty)}(t)\,e^{iat}. \]
Assuming this lemma, we see that
\[ E = \mathcal{F}_{t,x}^{-1}\,\mathcal{F}_{t,x}E = \lim_{\varepsilon\to 0}\mathcal{F}_{t,x}^{-1}\Big(\frac{1}{i\tau + \varepsilon + |\xi|^2}\Big) = \mathcal{F}_x^{-1}\Big(1_{(0,\infty)}(t)\,e^{-t|\xi|^2}\Big). \]
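The identity used here comes from a direct one-dimensional computation (a sketch, added for completeness): for $b > 0$,
\[ \mathcal{F}_t\big(1_{(0,\infty)}(t)\,e^{-bt}\big)(\tau) = \int_0^\infty e^{-bt}\,e^{-i\tau t}\,dt = \frac{1}{b + i\tau}, \]
so with $b = |\xi|^2 + \varepsilon$ we indeed get $\mathcal{F}_{t,x}\big(e^{-\varepsilon t}E\big) = \frac{1}{i\tau + |\xi|^2 + \varepsilon}$, and letting $\varepsilon \to 0$ recovers $E = \mathcal{F}_x^{-1}\big(1_{(0,\infty)}(t)\,e^{-t|\xi|^2}\big)$, the heat kernel.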
Lecture 23 (11/19)
6 Energy Method and Sobolev Spaces
Proposition 6.1.1. Suppose that $U$ is a $C^1$ bounded domain and $u \in C^2(\bar U)$, $f \in C^0(\bar U)$, $g \in C^0(\partial U)$. Then the solution $u$ to (16) is unique.
Proof. We have already seen a proof, but here we present a different one.
Let $u_1, u_2$ be two solutions to (16). If we let $v = u_2 - u_1 \in C^2(\bar U)$, then $v$ solves the homogeneous equation
\[ \begin{cases} \Delta v = 0 & \text{in } U \\ v = 0 & \text{on } \partial U \end{cases} \]
The key idea is: multiply the equation by $v$, integrate over $U$, then integrate by parts:
\[ 0 = \int_U \Delta v\; v\,dx = -\int_U \nabla v\cdot\nabla v\,dx + \int_{\partial U} (\nu\cdot\nabla v)\,v\,dS_{\partial U}. \]
The boundary term vanishes since $v = 0$ on $\partial U$, so $\int_U|\nabla v|^2\,dx = 0$, hence $\nabla v = 0$; thus $v$ is constant, and since $v = 0$ on $\partial U$, $v \equiv 0$.
For the heat equation, multiplying $\partial_t u = \Delta u$ by $u$ and integrating over $[0,t]\times\mathbb{R}^d$ gives
\[ \int_0^t\!\!\int \tfrac{1}{2}\,\partial_t(u^2)\,dx\,ds = \int_0^t\!\!\int u\,\Delta u\,dx\,ds = -\int_0^t\!\!\int \nabla u\cdot\nabla u\,dx\,ds + \text{boundary term}. \]
The boundary term is zero, by taking limits and using the assumptions on $u$ and $\nabla u$. But then we get
\[ \frac{1}{2}\int u^2(t,x)\,dx = \frac{1}{2}\int u^2(0,x)\,dx - \int_0^t\!\!\int |\nabla u|^2\,dx\,ds. \]
Then we get the inequality via rearrangement, Cauchy-Schwarz, and Minkowski's inequality.
Another main thing we do in the energy method is to apply operators $Y$ to $u$, look at $P(Yu)$, and then apply the same process as above.
Notice that for any $\partial_j$, if $(\partial_t - \Delta)u = f$, then $(\partial_t - \Delta)\partial_j u = \partial_j f$; this is because $\partial_j$ commutes with the heat operator. So if we apply the same method to this resulting equation, we can get estimates of the form
\[ \sup_{|\alpha|\leq k}\Big(\sup_{t}\|D^\alpha u(t)\|_{L^2} + \|\nabla D^\alpha u\|_{L^2_{t,x}([0,T]\times\mathbb{R}^d)}\Big) \;\lesssim\; \sup_{|\alpha|\leq k}\Big(\|D^\alpha g\|_{L^2} + \|D^\alpha f\|_{L^1([0,T];L^2)}\Big). \]
Remark 6.1.1. For the wave equation $\Box u = f$, a "good" multiplier is $\partial_t u$. This gives us the physical conservation of energy.
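To spell out what this multiplier gives (a standard computation, not taken verbatim from the lecture): multiplying $\Box u = (\partial_t^2 - \Delta)u = f$ by $\partial_t u$ and integrating over $\mathbb{R}^d$ (for $u$ decaying at spatial infinity),
\[ \frac{d}{dt}\int \tfrac{1}{2}\big((\partial_t u)^2 + |\nabla u|^2\big)\,dx = \int f\,\partial_t u\,dx, \]
so when $f = 0$ the energy $E(t) = \frac{1}{2}\int\big((\partial_t u)^2 + |\nabla u|^2\big)\,dx$ is conserved.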
One natural question is: can we say other things about the solution from the control of these
kinds of norms?
We know that if $f \in C_c^1(\mathbb{R})$, then by the fundamental theorem of calculus
\[ f(x) = \int_{-\infty}^{x} f'(y)\,dy, \qquad |f(x)| \leq \|f'\|_{L^1}, \]
and on an interval $I$,
\[ \sup_I |f| \leq \|f'\|_{L^1(I)} \leq \|f'\|_{L^2(I)}\,|I|^{1/2} \]
by Hölder's inequality. Now it turns out we can generalize this to multiple dimensions; the resulting estimates are known as Sobolev inequalities.
We associate a norm
\[ \|u\|_{W^{k,p}(U)} = \sum_{|\alpha|\leq k}\|D^\alpha u\|_{L^p(U)}, \]
and we will prove estimates of the form
\[ \|D^\alpha u\|_{L^q} \lesssim \|u\|_{W^{k,p}}. \]
Remark 6.2.1. We want a "function space" framework for converting a priori estimates to existence results (this is related to the fact that $\overline{\operatorname{Im}L} = (\ker L^*)^{\perp}$). It turns out that Sobolev spaces are the ideal setting for this.
Let's introduce a little more notation. If $\varphi \in C_0^\infty(U)$, then $\varphi \in W^{k,p}(U)$ for all $k, p$. We may complete $C_0^\infty(U)$ with respect to $\|\cdot\|_{W^{k,p}(U)}$; the resulting space will be denoted $W_0^{k,p}(U)$ (the completion of the test functions in the Sobolev norm).
In general, this is a strictly smaller subset, and you should think of these functions as elements of the Sobolev space whose boundary value on $\partial U$ is zero.
The case $p = 2$ is of special importance (these are the bounds we get from the energy method). We let $H^k(U) = W^{k,2}(U)$, and we often write $H_0^k(U) = W_0^{k,2}(U)$.
1. For all $k, p$, the space $W^{k,p}(U)$ is complete under $\|\cdot\|_{W^{k,p}(U)}$; in other words, $W^{k,p}(U)$ is a Banach space.
1. approximation theorem
2. extension theorems
3. trace theorem
Lemma 6.2.1. Given $y \in \mathbb{R}^d$, let $\tau_y u(x) = u(x-y)$, viewed as a linear operator. Then $\tau_y u$ is continuous on $L^p$ as a function of $y$ (implicitly, this means $u \in L^p$); in other words, $\|\tau_y u - u\|_{L^p} \to 0$ as $y \to 0$.
Proof. We have
\[ \Big\|\int \frac{1}{\varepsilon^d}\,\varphi(y/\varepsilon)\,u(x-y)\,dy - u(x)\Big\|_{L^p} = \Big\|\int \varphi(z)\,u(x-\varepsilon z)\,dz - \int\varphi(z)\,u(x)\,dz\Big\|_{L^p} \leq \int\varphi(z)\,\|\tau_{\varepsilon z}u - u\|_{L^p}\,dz, \]
which tends to $0$ by the lemma and dominated convergence.
By this we mean that $\chi_n \in C^\infty(\mathbb{R}^d)$ with $\operatorname{supp}\chi_n \subset V_n$, $\sum_{n=1}^\infty \chi_n(x) = 1$, $0 \leq \chi_n \leq 1$, and for all $x \in U$ only finitely many terms in the sum are nonzero.
Then we can split $u = \sum_{n=1}^\infty \chi_n u$, where each $\chi_n u$ can be regarded as defined on $\mathbb{R}^d$ (extend by $0$ outside $V_n$). The idea is then to approximate each $\chi_n u \in W^{k,p}(\mathbb{R}^d)$ by a mollification
\[ v_n = \varphi_{\varepsilon_n} * \chi_n u, \]
with $\varepsilon_n$ chosen so that the error is bounded by $\varepsilon$.
Proposition 6.2.3 (Approximation by elements of $C^\infty(\bar U)$). If $U$ is a $C^1$-domain and $u \in W^{k,p}(U)$, then there exists a sequence $u_\varepsilon \in C^\infty(\bar U)$ such that $u_\varepsilon \to u$ in $W^{k,p}(U)$.
Lecture 24 (11/21)
We will continue the discussion of Sobolev spaces today. Recall we say that $u \in W^{k,p}(U)$ if
\[ \|u\|_{W^{k,p}} = \sum_{|\alpha|\leq k}\|D^\alpha u\|_{L^p} < \infty. \]
We then approximated elements of the Sobolev space by better-behaved functions. If $u \in W^{k,p}(\mathbb{R}^d)$, and $\varphi \in C_0^\infty(\mathbb{R}^d)$ with $\int\varphi = 1$ and $\varphi_\varepsilon = \varepsilon^{-d}\varphi(\cdot/\varepsilon)$, then
\[ \varphi_\varepsilon * u \to u \]
in $W^{k,p}(\mathbb{R}^d)$. Also, if $u \in W^{k,p}(U)$, then there exists a sequence $u_j \in C^\infty(U)$ such that $u_j \to u$ in $W^{k,p}(U)$.
We also have that if $u \in W^{k,p}(U)$ with $U$ a $C^1$ domain, then there is a sequence $u_j \in C^\infty(\bar U)$ such that $u_j \to u$ in $W^{k,p}(U)$.
(This can't be right... we defined $W_0^{k,p}$ to be the completion of the test functions, but by this corollary that would mean $W_0^{k,p} = W^{k,p}$.)
1. $Eu\big|_U = u$
Proof (sketch).
Step 0: It suffices to consider $u \in C^\infty(\bar U)$ (by the approximation results).
Step 1: Reduce the result to the following claim: there exists an extension operator $E$ for $U = B(0,1)\cap\mathbb{R}^d_+$ and $V = B(0,2)$, defined for $u \in C^\infty(\bar U)$ with $\operatorname{supp}u \subset B(0,1)\cap\{x^d \geq 0\}$.
Step 2: Proof of the claim in Step 1. The key is to use reflection. Let $x' = (x^1,\dots,x^{d-1})$ and define
\[ \tilde u(x', x^d) = \begin{cases} u(x', x^d), & x^d \geq 0 \\ u(x', -x^d), & x^d < 0 \end{cases} \]
This is a continuous extension. However, can we extend so that $D\tilde u(x',0)$ agrees with $Du(x',0)$? Can we do this for all derivatives? The key part is to make the normal derivatives $\partial_{x^d}^m$, $m = 0,\dots,k$, agree from both sides. One looks for an extension of the form
\[ \tilde u(x', x^d) = \sum_{i=0}^{k} \alpha_i\,u(x', -\beta_i x^d), \qquad x^d < 0. \]
Matching the two limits as $x^d \to 0$ (from both sides) for each $m$, we get the following linear system for the $\alpha$'s in terms of the $\beta$'s:
\[ \sum_{i=0}^{k} (-\beta_i)^m\,\alpha_i = 1, \qquad m = 0, 1, \dots, k, \]
whose coefficient matrix is a Vandermonde matrix with determinant proportional to $\prod_{0\leq i<j\leq k}(\beta_i - \beta_j)$.
So if we choose the $\beta_i$'s to be pairwise distinct, then there exist $\alpha_0,\dots,\alpha_k$ solving this system.
So now define
\[ Eu = \chi\,\tilde u, \]
with $\chi$ equal to $1$ on the support of $u$ and $0$ outside $B(0,1)\cap\{x^d \geq -1/2\}$; then you check the estimate.
Remark 6.2.2 (Stein). There exists a universal extension operator $E_{\mathrm{univ}}$ which works for all $W^{k,p}$ ($k \geq 0$, $1 \leq p < \infty$) and only requires $U$ to be $C^1$.
6.3 Traces
Here we will restrict u to ∂U
Theorem 6.3.1. Let $k \geq 1$ and $1 \leq p < \infty$, and let $U$ be a $C^1$ domain.
1. For $u \in C^\infty(\bar U)$, define
\[ \operatorname{tr}u = u\big|_{\partial U}. \]
Then this extends uniquely to a bounded map from $W^{k,p}(U)$ to $L^p(\partial U)$, such that
\[ \|\operatorname{tr}u\|_{L^p(\partial U)} \leq C\,\|u\|_{W^{k,p}(U)}. \]
Theorem 6.4.1 (Gagliardo-Nirenberg-Sobolev inequality). Let $u \in C_0^\infty(\mathbb{R}^d)$. Then
\[ \|u\|_{L^{\frac{d}{d-1}}(\mathbb{R}^d)} \leq C\,\|Du\|_{L^1(\mathbb{R}^d)}. \]
To see why the exponent must be $\frac{d}{d-1}$: for $u_\lambda(x) = u(x/\lambda)$ we have $\|u_\lambda\|_{L^p} = \lambda^{d/p}\|u\|_{L^p}$, and in order for an inequality of this type to hold for all $u$ and all $\lambda$, we would like $p = \frac{d}{d-1}$.
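Here is the scaling computation written out (added for clarity; the normalization $u_\lambda(x) = u(x/\lambda)$ is the one used above):
\[ \|u_\lambda\|_{L^p}^p = \int |u(x/\lambda)|^p\,dx = \lambda^d\,\|u\|_{L^p}^p, \qquad \|Du_\lambda\|_{L^1} = \int \lambda^{-1}\,|Du(x/\lambda)|\,dx = \lambda^{d-1}\,\|Du\|_{L^1}, \]
so an inequality $\|u_\lambda\|_{L^p} \leq C\|Du_\lambda\|_{L^1}$ valid for all $\lambda > 0$ forces $\lambda^{d/p} \sim \lambda^{d-1}$, i.e. $p = \frac{d}{d-1}$.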
Proof. Given $u \in C_0^\infty(\mathbb{R}^d)$, we know that we can write
\[ u(x) = \int_{-\infty}^{x^i} \partial_i u(x^1,\dots,y^i,\dots,x^d)\,dy^i \qquad (y^i\ \text{in the $i$-th slot}). \]
The key lemma is the following: if each $f_i = f_i(\hat x^i)$, where $\hat x^i$ indicates that we are omitting that coordinate (so the function doesn't depend on the $i$-th coordinate), then
\[ \int f_1\cdots f_d\,dx \leq \prod_{i=1}^d \|f_i\|_{L^{d-1}(\mathbb{R}^{d-1})}. \]
Proof. Integrate out one variable at a time and apply Hölder's inequality again and again; everything will end up working out.
With this we are almost done. Define
\[ f_i(\hat x^i) = \sup_{x^i\in\mathbb{R}}|u(x^1,\dots,x^d)| \leq \int_{\mathbb{R}} |Du(x^1,\dots,y^i,\dots,x^d)|\,dy^i. \]
Then
\[ \int |u|^{\frac{d}{d-1}}\,dx^1\cdots dx^d \leq \int f_1^{\frac{1}{d-1}}\cdots f_d^{\frac{1}{d-1}}\,dx \leq \prod_{i=1}^d \big\|f_i^{\frac{1}{d-1}}\big\|_{L^{d-1}} \leq \Big(\int |Du|\,dx^1\cdots dx^d\Big)^{\frac{d}{d-1}}. \]
In particular, one obtains the isoperimetric-type inequality
\[ |\mathrm{Vol}(U)|^{\frac{d-1}{d}} \leq C\,|\mathrm{Area}(\partial U)|. \]
Lecture 25 (12/3)
Recall our discussion of Sobolev inequalities. Last time we proved the Gagliardo-Nirenberg-Sobolev inequality: for $u \in C_0^\infty(\mathbb{R}^d)\cap W^{1,1}(\mathbb{R}^d)$,
\[ \|u\|_{L^{\frac{d}{d-1}}} \leq C\,\|Du\|_{L^1}. \]
For $1 \leq p < d$ and $p^* = \frac{dp}{d-p}$, the corresponding statement is
\[ \|u\|_{L^{p^*}} \leq C\,\|Du\|_{L^p}. \tag{17} \]
Proof. Let $v = |u|^\gamma$ with $\gamma$ chosen so that $\gamma\,\frac{d}{d-1} = p^*$, then apply the $W^{1,1}$ version of GNS to $v$:
\[ \Big(\int |u|^{\gamma\frac{d}{d-1}}\Big)^{\frac{d-1}{d}} \leq C\int |u|^{\gamma-1}\,|Du| \leq C\,\big\||u|^{\gamma-1}\big\|_{L^{p'}}\,\|Du\|_{L^p}. \]
Claim: $p'(\gamma-1) = p^*$; this has to be true because we respected scaling.
We have only discussed nice (compactly supported) functions, but we can extend this to non-compactly-supported ones:
Theorem 6.4.3 (Sobolev inequalities for $W^{1,p}(U)$). Let $1 \leq p < d$ and $p^* = \frac{dp}{d-p}$.
1. If $u \in W_0^{1,p}(U)$, then (17) holds.
2. If $U$ is a bounded $C^1$ domain, then for any $u \in W^{1,p}(U)$, (17) holds on the domain $U$.
When $p = d$, we have $p^* = \infty$, but instead of an $L^\infty$ bound one gets
\[ \sup_{B(x,r)}\ \fint_{B(x,r)} \Big|u - \fint_{B(x,r)} u\Big|\,dx < \infty. \]
\[ \|u\|_{L^\infty} \leq c\,\|u\|_{W^{1,p}}, \qquad [u]_{C^{0,\alpha}(K)} = \sup_{x,y\in K}\frac{|u(x)-u(y)|}{|x-y|^{\alpha}}, \]
with $\alpha = 1 - \frac{d}{p}$.
Proof. The key is the following proposition:
Proposition 6.4.1. For $u \in C^\infty(\mathbb{R}^d)$, with $x \in \mathbb{R}^d$ and $r > 0$, we have
\[ \frac{1}{|B(x,r)|}\int_{B(x,r)} |u(y) - u(x)|\,dy \leq C\int_{B(x,r)} \frac{|Du(y)|}{|x-y|^{d-1}}\,dy. \]
To see this, write
\[ \int_{\partial B(x,r)} |u(y) - u(x)|\,dS(y) = r^{d-1}\int_{\partial B(0,1)} |u(x + r\omega) - u(x)|\,d\omega; \]
therefore
\[ r^{d-1}\int_{\partial B(0,1)}|u(x+r\omega)-u(x)|\,d\omega \leq r^{d-1}\int_{\partial B(0,1)}\Big|\int_0^r \omega\cdot Du(x+s\omega)\,ds\Big|\,d\omega, \]
and converting the $s$-integral into an integral in $y = x + s\omega$ (inserting $s^{d-1}/s^{d-1}$) gives the claimed bound. (The computation on the board continues from here; I couldn't really read it, so I'll try to fill this in later.)
Now let's prove the main theorem. First, for all $x \in \mathbb{R}^d$, we split the $C^{0,\alpha}$ norm into the sup-norm part and the seminorm part; the sup-norm pieces can be trivially bounded (see notes). The hard part is bounding the seminorm, which can be controlled by our previous results, scaling estimates on regions, and Hölder's inequality.
Definition 6.4.2 (Version). We say $u^*$ is a version of $u$ if $u^* = u$ almost everywhere.
Theorem 6.4.5 (Sobolev inequality for $W^{1,p}(U)$, $p > d$). Given $p > d$ and $\alpha = 1 - d/p$:
1. If $u \in W_0^{1,p}(U)$, then there exists a version $u^* \in C^{0,\alpha}(\bar U)$, and $\|u^*\|_{C^{0,\alpha}(\bar U)} \leq C\,\|u\|_{W^{1,p}(U)}$.
2. Assume also that $U$ is a bounded $C^1$ domain. For any $u \in W^{1,p}(U)$ there exists a version $u^* \in C^{0,\alpha}(\bar U)$, and the above inequality holds.
Theorem 6.4.6 (General Sobolev inequality for $W^{k,p}(U)$). Let $k$ be any non-negative integer and $1 \leq p \leq \infty$, and assume either that $U$ is a domain and $u \in W_0^{k,p}(U)$, or that $U$ is a bounded $C^k$ domain and $u \in W^{k,p}(U)$.
1. Let $l$ be a nonnegative integer, $0 \leq l \leq k$, and $1 \leq q < \infty$ with $\frac{d}{q} - l \geq \frac{d}{p} - k$. Then $u \in W^{l,q}(U)$ and $\|u\|_{W^{l,q}(U)} \leq C\,\|u\|_{W^{k,p}(U)}$.
Lecture 26 (12/5)
Today we will wrap up our discussion of Sobolev spaces. We will study compactness properties, with the main theorem being the Rellich-Kondrachov theorem. We will also look at Poincaré's inequality. Lastly, we will look at the dual spaces of Sobolev spaces, which will naturally lead to the notion of negative-regularity Sobolev spaces.
6.5 Compactness
We want to look at sequences $\{u_i\} \subset W^{1,p}(U)$ with $\|u_i\|_{W^{1,p}(U)} \leq C$; this turns out to be a compact family in an appropriate larger space.
Definition 6.5.1 (Compact embedding of Banach spaces). Let $(X,\|\cdot\|_X)$ and $(Y,\|\cdot\|_Y)$ be two normed spaces with $Y \subset X$. We say that $(Y,\|\cdot\|_Y)$ embeds compactly into $(X,\|\cdot\|_X)$, and write $Y \Subset X$, if every bounded sequence in $Y$ has a subsequence convergent in $X$.
2. (equicontinuity) For all $\varepsilon > 0$ there exists $\delta > 0$ such that $|u_n(x) - u_n(y)| < \varepsilon$ for all $n$ whenever $|x - y| < \delta$.
Then there exists a subsequence $u_{n_i}$ that converges in the uniform topology.
Another key ingredient is a property of mollification called accelerated convergence.
Proposition 6.5.1. Let $\varphi \in C_0^\infty(\mathbb{R}^d)$ with $\int\varphi = 1$ and define $\varphi_\varepsilon(x) = \varepsilon^{-d}\varphi(x\varepsilon^{-1})$. Then $\varphi_\varepsilon * u \to u$ as $\varepsilon \to 0$ in $L^p$. It turns out that if you assume $u \in W^{1,p}(U)$, then the rate of convergence in $\varphi_\varepsilon * u \to u$ can be quantified.
Here is a more general proposition.
Proposition 6.5.2. Let $k$ be a positive integer and $1 \leq p < \infty$, and let $\varphi \in C_0^\infty(\mathbb{R}^d)$ be such that $\int\varphi = 1$ and $\int x^\alpha\varphi\,dx = 0$ for all $1 \leq |\alpha| \leq k-1$. Then if $u \in W^{k,p}(\mathbb{R}^d)$,
\[ \|\varphi_\varepsilon * u - u\|_{L^p} \leq C\,\varepsilon^{k}\sum_{|\alpha| = k}\|D^\alpha u\|_{L^p}. \]
Proof. When $k = 1$:
\[ u(x) - \varphi_\varepsilon * u(x) = \int \varphi(z)\,\big(u(x) - u(x-\varepsilon z)\big)\,dz = \int \varphi(z)\int_0^{\varepsilon} z\cdot Du(x - tz)\,dt\,dz, \]
so by Minkowski's integral inequality, $\|u - \varphi_\varepsilon * u\|_{L^p} \leq C\,\varepsilon\,\|Du\|_{L^p}$.
Step 1: It suffices to prove the case $q = p$. We want to show that if $\|u_n\|_{W^{1,p}(U)} \leq C$, then there exists a subsequence $u_{n_i}$ so that $\|u_{n_i} - u_{n_j}\|_{L^p} \to 0$ as $i, j \to \infty$. Indeed, all the other norms are bounded, and if $p < q < p^*$, then writing $\frac{1}{q} = \frac{\theta}{p} + \frac{1-\theta}{r}$ we can interpolate.
Step 2: We would like to prove that there exists a subsequence $u_{n_i}$ such that for all $\varepsilon > 0$ there exists $I$ with $\|u_{n_i} - u_{n_{i'}}\|_{L^p(U)} < \varepsilon$ for $i, i' \geq I$.
Now replace $u_n$ by its extension $Eu_n$. Choose $\delta$ and let $v_n = u_n * \varphi_\delta$, so that $\operatorname{supp}v_n \subset \tilde V$, another bounded set with $\bar V \subset \tilde V$; let $\delta$ be such that
\[ \|v_n - u_n\|_{L^p} \leq \varepsilon/3 \quad\text{for all } n \]
(where we use the proposition). Then note that each $v_n$ is smooth, and we have
\[ \|v_n\|_{L^\infty} \leq C_\delta \qquad\text{and}\qquad \|Dv_n\|_{L^\infty} \leq C_\delta. \]
This means that $\{v_n\}$ has a uniformly convergent subsequence by Arzelà-Ascoli. So there exists a subsequence $v_{n_i}$ so that $\|v_{n_i} - v_{n_{i'}}\|_{L^p} < \varepsilon/3$ for $i, i' \geq I$. So now we have
\[ \|u_{n_i} - u_{n_{i'}}\|_{L^p} \leq \|u_{n_i} - v_{n_i}\|_{L^p} + \|v_{n_i} - v_{n_{i'}}\|_{L^p} + \|v_{n_{i'}} - u_{n_{i'}}\|_{L^p} \leq \varepsilon. \]
Let's now consider one nice application of this theorem; it is a key way to estimate $u$ by $Du$ (the Poincaré inequality): for $u \in W^{1,p}(U)$ on a bounded $C^1$ domain $U$,
\[ \Big\|u - \fint_U u\Big\|_{L^p(U)} \leq C\,\|Du\|_{L^p(U)}. \]
Proof. Assume that this fails. Then for all $n \geq 1$ there exists $u_n \neq 0$ so that
\[ \Big\|u_n - \fint_U u_n\Big\|_{L^p(U)} \geq n\,\|Du_n\|_{L^p(U)}. \]
Now let
\[ v_n = \frac{u_n - \fint_U u_n}{\big\|u_n - \fint_U u_n\big\|_{L^p}}. \]
Then $\|v_n\|_{L^p} = 1$ and $\fint_U v_n = 0$, and
\[ \frac{1}{n}\,\|v_n\|_{L^p(U)} \geq \|Dv_n\|_{L^p}, \qquad\text{so } \|Dv_n\|_{L^p(U)} \to 0 \text{ as } n \to \infty. \]
We know that $v_n$ is bounded in $W^{1,p}$. Therefore there exists a subsequence $v_{n_i}$ with $v_{n_i} \to v$ in $L^p(U)$. In particular,
\[ \fint_U v = \lim_{i\to\infty}\fint_U v_{n_i} = 0, \qquad \|v\|_{L^p} = 1. \]
Finally, $\|Dv_n\|_{L^p} \to 0$ implies that $Dv = 0$ in the sense of distributions; therefore $v$ is a constant, and we get a contradiction (a constant with mean zero is $0$, contradicting $\|v\|_{L^p} = 1$).
6.6 Duality
Definition 6.6.1 (Negative regularity Sobolev spaces). Let $k$ be a non-negative integer and $1 \leq p < \infty$. Let
\[ W^{-k,p}(U) = \Big\{ u \in \mathcal{D}'(U) : u = \sum_{|\alpha|\leq k} D^\alpha g_\alpha \ \text{ for some } g_\alpha \in L^p(U),\ |\alpha|\leq k \Big\}, \]
with norm
\[ \|u\|_{W^{-k,p}(U)} = \inf\Big\{ \sum_{|\alpha|\leq k}\|g_\alpha\|_{L^p(U)} : u = \sum_{|\alpha|\leq k} D^\alpha g_\alpha \Big\}. \]
Appendices
A Midterm Review
A.1 First order scalar characteristics
We are trying to solve the first order scalar PDE
\[ \begin{cases} F(x, u, Du) = 0 & \text{in } U \subset \mathbb{R}^d \\ u = g & \text{on } \Gamma \subset \partial U \end{cases} \tag{18} \]
with $x \in U \subset \mathbb{R}^d$ and $u : \mathbb{R}^d \to \mathbb{R}$. If we assume there exists a solution, then along suitable paths $x : I \to \mathbb{R}^d$ we have $z(s) = u(x(s))$ and $p(s) = Du(x(s))$ with
\[ \begin{cases} \dot x^j = \partial_{p_j}F \\ \dot p_i = -\partial_z F\,p_i - \partial_{x^i}F \\ \dot z = \sum_i p_i\,\partial_{p_i}F \end{cases} \]
where in $F$ we replaced $u$ with $z$ and $\partial_i u$ with $p_i$.
Corollary A.1.1. If $F = \sum_j b_j\,\partial_j u + c$ is linear or quasilinear, then
\[ \begin{cases} \dot x^j = b_j \\ \dot z = -c \end{cases} \]
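As a quick illustration (an added example, not from the review sheet): for the constant-coefficient transport equation $\partial_t u + b\cdot\nabla_x u = 0$ with $u(0,x) = g(x)$, the corollary gives the characteristic ODEs
\[ \dot t = 1, \qquad \dot x = b, \qquad \dot z = 0, \]
so $u$ is constant along the lines $s \mapsto (s,\,x_0 + sb)$ and $u(t,x) = g(x - tb)$.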
A.1.1 Existence
Given our PDE (18), we would like to know when this method works, i.e. whether a solution exists. If our boundary is flat ($\Gamma \subset \{x^d = 0\}$), then any characteristic path must satisfy the compatibility conditions
\[ \begin{cases} x(0) = x_0 \in \Gamma \\ z(0) = g(x_0) \\ p_i(0) = \partial_i g(x_0), \quad i = 1,\dots,d-1 \\ F(x_0, z(0), p(0)) = 0 \end{cases} \]
To apply our method of characteristics, we also require the information of $p_d(0)$, which we can get if $\partial_{p_d}F(x_0, z_0, p_0) \neq 0$ (this is known as the noncharacteristic condition).
Now we would like to show that the PDE has a solution near $x_0$. To do this, let's find a whole family of characteristics, where the first input indicates where the path starts and the second indicates the time on the path:
\[ x(y,s), \qquad z(y,s), \qquad p(y,s), \]
with $y = (y^1, y^2, \dots, y^{d-1}, 0)$. Then in order to find the solution $u(x)$ we must figure out the characteristic that hits $x$; that is, we must determine $y$ and $s$ such that $x(y,s) = x$. It turns out that, under the noncharacteristic assumption, this map is locally invertible, and we denote the inverse $x \mapsto (y(x), s(x))$. This gives rise to the main theorem:
Theorem A.1.1. Given our PDE (18) and a point $x_0 \in \Gamma$ such that $\partial_{p_d}F \neq 0$ at $x_0$, there exists a neighborhood of $x_0$ such that $u(x) = z(y(x), s(x))$ solves the PDE for all $x$ in this neighborhood.
Remark A.1.1. The noncharacteristic assumption at $(x_0, z_0)$ is equivalent to being able to determine all derivatives of $u$ (in every direction) at $x_0$.
A.1.3 Uniqueness
Given two solutions to such a PDE which, together with their normal derivatives, agree on the boundary, they are the same.
A.2 Noncharacteristic Condition for Kth order systems
Just note that $F$ is now a set of $N$ equations. The Cauchy data (initial conditions) for this PDE is a prescription of normal derivatives: if our boundary is flat, $\Gamma \subset \{x^d = 0\}$, then our Cauchy data is
\[ (u, \partial_d u, \dots, \partial_d^{k-1}u) = (g_0, g_1, \dots, g_{k-1}). \]
We then define the noncharacteristic condition for the Cauchy data at a point $x_0 \in \Gamma$ as the requirement that the matrix
\[ b_{(0,0,\dots,0,k)}\big(x_0, u(x_0), \dots, D^{k-1}u(x_0)\big) \]
be invertible.
A.3 Distributions
Definition A.3.1 (Test function). A test function on a set $U \subset \mathbb{R}^d$ is a smooth, compactly supported function on $U$. The space of test functions on $U$ is denoted $C_0^\infty(U)$.
A linear functional$^{b}$ $u$ on $C_0^\infty(U)$ is a distribution if and only if for all compact sets $K \subset U$ there exist $N$ and $C = C(K,N)$ such that for all $\varphi \in C_0^\infty(U)$ with $\operatorname{supp}\varphi \subset K$,
\[ |\langle u, \varphi\rangle| \leq C\sum_{|\alpha|\leq N}\sup|D^\alpha\varphi|. \]
Definition A.3.4 (Order of a distribution). The smallest such $N$ in the above characterization is called the order of the distribution.
Theorem A.3.2. If $u$ is a distribution that agrees with $1/x$ on $(0,\infty)$, then the order of $u$ is at least $1$.
Definition A.3.5 (Convolution). If $f \in L^1_{\mathrm{loc}}$ and $\varphi \in C_0^\infty(\mathbb{R}^d)$, then
\[ f*\varphi(x) = \int_{\mathbb{R}^d} f(y)\,\varphi(x-y)\,dy = \int_{\mathbb{R}^d} \varphi(y)\,f(x-y)\,dy. \]
(a) but finitely many, of course
(b) a linear map from a vector space to its base field
(Recall: $\varphi_n \to \varphi$ in $C_0^\infty(U)$ iff the supports lie in a common compact set and all derivatives converge uniformly.)
A.3.1 Operations
We may do the following operations, with $u$ a distribution, $f \in C^\infty(U)$, $\varphi, v \in C_0^\infty(\mathbb{R}^d)$:
1. $\langle f u, \varphi\rangle = \langle u, f\varphi\rangle$
3. $\langle u * v, \varphi\rangle = \langle u, \check v * \varphi\rangle$, where $\check v(x) = v(-x)$
Theorem A.3.8. $\delta_0 * u = u$.
\[ \partial_j 1_U = -\nu_j\,dS_{\partial U}, \]
with $\nu$ the unit outer normal of $\partial U$ and $dS_{\partial U}$ the surface measure.
Theorem A.3.11 (Sequential compactness). If $u_n$ are distributions and $\langle u_n, \varphi\rangle$ converges for every $\varphi \in C_0^\infty(U)$, then $u$ defined by $\langle u, \varphi\rangle = \lim_n \langle u_n, \varphi\rangle$ is a distribution.
Theorem A.3.14. If two distributions have disjoint singular support, then their product is
defined.
\[ P u = \sum_{|\alpha|\leq k} a_\alpha(x)\,D^\alpha u, \qquad P^* v = \sum_{|\alpha|\leq k} (-1)^{|\alpha|}\,D^\alpha\big(a_\alpha\,v\big). \]
Theorem A.4.1. If $P$ is constant coefficient (i.e. all $a_\alpha$ are constant) and $P E_0 = \delta_0$, then
1. $E_y(x) := E_0(x-y)$ satisfies $P E_y = \delta_y$;
2. $P\big(v(\cdot - y)\big) = (Pv)(\cdot - y)$;
3. the fundamental solution with pole at $y$ depends only on $x - y$, i.e. $E_y(x) = E_0(x-y)$.
B Final Review
B.1 Green’s Functions
Given $E_0$, the fundamental solution of the Laplace equation, $U$ an open set, and $u$ a smooth function on $\bar U$, we have:
Theorem B.1.2. On a bounded open set with $C^1$ boundary, Green's functions are unique and symmetric ($G(x,y) = G(y,x)$).
We can construct Green’s functions for certain domains by the method of image charges.
This takes advantage of the symmetries of our domain, which can only be done in specific
circumstances.
By looking at (20) we have the representation formula for $u \in C^2(U)\cap C(\bar U)$ in terms of the Green's function; this leads to a formula for the solution of
\[ \begin{cases} \Delta u = g & x \in U \\ u = f & x \in \partial U \end{cases} \]
B.2 Wave Equation
In one dimension, we use a change of variables to determine that the forward fundamental solution (the fundamental solution supported in nonnegative time) is
\[ E(t,x) = \frac{1}{2}\,H(t-x)\,H(t+x), \]
with $H$ the Heaviside function.
Definition B.2.1 (Forward fundamental solution). To solve the wave equation we look for forward fundamental solutions $E$ that satisfy
1. $\Box E = \delta_0$
2. $\operatorname{supp}E \subset \{t \geq 0\}$
Convolving against $E$ gives, for $\varphi \in C_0^\infty$, d'Alembert's formula
\[ \varphi(t,x) = \frac{1}{2}\int_0^t\!\!\int_{x-(t-s)}^{x+(t-s)} \Box\varphi(s,y)\,dy\,ds + \frac{1}{2}\big(\varphi(0,x+t) + \varphi(0,x-t)\big) + \frac{1}{2}\int_{x-t}^{x+t}\partial_t\varphi(0,y)\,dy. \]
To find the forward fundamental solution in arbitrary dimensions, we take advantage of the fact that reflections, rotations, and Lorentz boosts commute with $\Box$. Also, by looking at scaling properties, we know that $E$ should be homogeneous of degree $-(d-1)$, i.e. $E(\lambda t, \lambda x) = \lambda^{-(d-1)}E(t,x)$ for $\lambda > 0$.
B.3 Fourier Transform
\[ \mathcal{F}f(\xi) = \int_{\mathbb{R}^d} f(x)\,e^{-i\xi\cdot x}\,dx = \hat f(\xi), \qquad \mathcal{F}^{-1}\hat f(x) = \frac{1}{(2\pi)^d}\int \hat f(\xi)\,e^{i\xi\cdot x}\,d\xi. \]
Proposition B.3.1. If $f \in L^1$, then $\hat f$ is well defined and $\|\hat f\|_{L^\infty} \leq \|f\|_{L^1}$.
Proposition B.3.2. For $f \in L^1$:
\[ \mathcal{F}\big(f(\cdot - x_0)\big)(\xi) = e^{-ix_0\cdot\xi}\,\mathcal{F}f(\xi), \qquad \mathcal{F}f(\xi - \eta) = \mathcal{F}\big(e^{i\eta\cdot x}f(x)\big)(\xi). \]
Proposition B.3.3. $\widehat{\partial_j f} = i\xi_j\,\hat f$.
Proposition B.3.4. $\widehat{x_j f} = i\partial_{\xi_j}\hat f$.
Proposition B.3.5. $\widehat{f * g} = \hat f\,\hat g$.
Proposition B.3.6. $\mathcal{F}^{-1}\big(f *_{(2\pi)^{-d}d\xi} g\big) = \mathcal{F}^{-1}f\;\mathcal{F}^{-1}g$.
The best space on which to take the Fourier transform is the space of Schwartz functions, which are like test functions but are allowed to have non-compact support as long as they decay quickly.
Then $\mathcal{F}$ and $\mathcal{F}^{-1}$ map $\mathcal{S}$ to $\mathcal{S}$. The space of conjugate-linear functionals on $\mathcal{S}$ is called the space of tempered distributions, and we are allowed to take Fourier transforms (and inverse Fourier transforms) of these elements (which are again tempered distributions).
Theorem B.3.2 (Plancherel's theorem). For $f, g \in \mathcal{S}$, we have the conservation of the two inner products:
\[ \int f\,\bar g\,dx = \int \hat f\,\overline{\hat g}\,\frac{d\xi}{(2\pi)^d}. \]
There are important integrals to keep in mind to help compute Fourier transforms:
2. $\int e^{-\frac{1}{2}|\xi|^2}\,d\xi = (2\pi)^{d/2}$
3. $\mathcal{F}\big(e^{-\frac{1}{2}x^2}\big) = \sqrt{2\pi}\,e^{-\frac{1}{2}\xi^2}$ (in one dimension)
Index
C^k boundary, 17
admissible, 14, 15
approximation, 28
Arzelà-Ascoli theorem, 87
Burgers' equation, 12, 19
Cauchy-Riemann equations, 4
characteristic ODEs, 10
compact embedding, 87
compatibility conditions, 14
conjugate linear, 68
convolution, 28
d'Alembert's formula, 57
d'Alembertian operator, 55
Dirac delta function, 31
Dirichlet problem, 5
  representation, 51
  uniqueness, 47
dispersive, 3
distribution, 29
  complex valued, 67
  convergence, 32
  convolution, 31
  differentiation, 31
  order, 30
  singular support, 36
  support, 30
distributions
  change of variables, 56
Duhamel's Formula, 72
Einstein equation, 5
elliptic, 3
finite speed of propagation, 64
formal adjoint, 38
forward fundamental solution, 56
Fourier inversion theorem, 69
fundamental solution, 37
Gagliardo inequality, 83
Green's function, 48
Harnack's inequality, 47
heat equation, 3
  energy, 76
Heaviside function, 31
Hölder space, 85
Huygens' principle, 64
hyperbolic, 3
implicit function theorem, 15
initial value problem, 5
inviscid Burgers equation, 8
KdV equation, 4
Laplace equation, 3
  energy, 76
  fundamental solution, 40
  maximum principle, 46
  mean value property, 46
lecture
  01 (8/29), 3
  02 (9/3), 6
  03 (9/5), 9
  04 (9/10), 13
  05 (9/12), 16
  06 (9/17), 20
  07 (9/19), 23
  08 (9/24), 26
  09 (9/26), 29
  10 (10/1), 32
  11 (10/3), 36
  12 (10/8), 38
  13 (10/15), 42
  14 (10/17), 42
  15 (10/22), 44
  16 (10/24), 48
  17 (10/29), 51
  18 (10/31), 56
  19 (11/5), 59
  20 (11/7), 62
  21 (11/12), 66
  22 (11/14), 71
  23 (11/19), 75