Functional Analysis: March 2010
Palle Jorgensen, University of Iowa
All content following this page was uploaded by Palle Jorgensen on 30 May 2014.
Department of Mathematics
14 MLH
The University of Iowa
Iowa City, IA 52242-1419
USA.
Notes from a course taught by Palle Jorgensen in the fall semester of 2009.
The course covered central themes in functional analysis and operator theory, with
an emphasis on topics of special relevance to such applications as representation
theory, harmonic analysis, mathematical physics, and stochastic integration.
These are the lecture notes I took from a topics course taught by Professor
Jorgensen during the fall semester of 2009. The course started with elementary
Hilbert space theory and moved very fast through spectral theory, completely positive
maps, the Kadison-Singer conjecture, induced representations, self-adjoint extensions
of operators, etc. It contains a lot of motivation and illuminating examples.
I would like to thank Professor Jorgensen for teaching such a wonderful course.
I hope students in other areas of mathematics will benefit from these lecture
notes as well.
Unfortunately, I have not been able to fill in all the details. The notes are
still undergoing editing. I take full responsibility for any errors and missing parts.
Feng Tian
03. 2010
Contents
Appendix
  semi-direct product
Bibliography
CHAPTER 1
Elementary Facts
We will apply transfinite induction to show that every infinite-dimensional
Hilbert space has an orthonormal basis (ONB).
Classical functional analysis roughly divides into two branches:
• the study of function spaces (Banach spaces, Hilbert spaces)
• applications in physics and engineering
Within pure mathematics, it is manifested in
• representation theory of groups and algebras
• C*-algebras, von Neumann algebras
• wavelet theory
• harmonic analysis
• analytic number theory
Definition 1.5. Let X be a vector space over C. ‖·‖ is a norm on X if for all
x, y in X and c in C
• ‖cx‖ = |c| ‖x‖
• ‖x‖ ≥ 0; ‖x‖ = 0 implies x = 0
• ‖x + y‖ ≤ ‖x‖ + ‖y‖
X is a Banach space if it is complete with respect to the metric induced by ‖·‖.
Definition 1.6. Let X be a vector space over C. An inner product is a function
⟨·,·⟩ : X × X → C so that for all x, y in X and c in C,
• ⟨x, ·⟩ is linear (linearity)
• ⟨x, y⟩ is the complex conjugate of ⟨y, x⟩ (conjugate symmetry)
• ⟨x, x⟩ ≥ 0; and ⟨x, x⟩ = 0 implies x = 0 (positivity)
In that case ‖x‖ := ⟨x, x⟩^(1/2) defines a norm on X. X is said to be
an inner product space if an inner product is defined on it. A Hilbert space is a
complete inner product space.
Remark 1.7. The abstract formulation of Hilbert space was invented by von
Neumann in 1925. It fits precisely with the axioms of quantum mechanics (spectral
lines, etc.). A few years before von Neumann's formulation, Heisenberg had translated
Max Born's quantum mechanics into mathematics.
For any inner product space H, observe that the 2×2 matrix

    [ ⟨x, x⟩  ⟨x, y⟩ ]
    [ ⟨y, x⟩  ⟨y, y⟩ ]

is positive semidefinite by the positivity axiom in the definition of an inner product.
Hence the matrix has nonnegative determinant, which gives rise to the famous Cauchy-
Schwarz inequality

    |⟨x, y⟩|² ≤ ⟨x, x⟩ ⟨y, y⟩.
An extremely useful way to construct a Hilbert space is the GNS construction,
which starts with a positive semidefinite function defined on a set X. A function
φ : X × X → C is said to be positive semidefinite if, for every finite collection of
complex numbers {c_x},

    Σ_{x,y} c̄_x c_y φ(x, y) ≥ 0.

Let H₀ be the span of {δ_x : x ∈ X}, and define a sesquilinear form ⟨·,·⟩ on H₀
by

    ⟨Σ c_x δ_x, Σ c_y δ_y⟩ := Σ_{x,y} c̄_x c_y φ(x, y).

However, the positivity condition (⟨f, f⟩ = 0 implies f = 0) may not be satisfied.
Hence one has to pass to a quotient space, letting N = {f ∈ H₀ : ⟨f, f⟩ = 0} and
H̃₀ be the quotient space H₀/N. The fact that N really is a subspace follows from
the Cauchy-Schwarz inequality above. Therefore ⟨·,·⟩ is an inner product on H̃₀.
Finally, let H be the completion of H̃₀ under ⟨·,·⟩; H is a Hilbert space.
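For a finite set X the GNS construction can be carried out concretely. The following sketch builds the form ⟨·,·⟩ on H₀ = span{δ_x} from a positive semidefinite kernel (the kernel below is an arbitrary illustrative choice, not one from the notes) and computes the dimension of the quotient H₀/N:

```python
import numpy as np

# GNS sketch for a finite set X = {0, ..., n-1}: a positive semidefinite
# kernel phi gives a (possibly degenerate) sesquilinear form on
# H0 = span{delta_x}; quotienting by its null space N yields a genuine
# inner product space.  The kernel phi(x,y) = G[x,y] below is psd by
# construction (a Gram matrix), an arbitrary example.
n = 4
B = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0]], dtype=float)
G = B.T @ B                       # G[x,y] = phi(x,y), positive semidefinite

def form(c, d):
    """<sum c_x delta_x, sum d_y delta_y> = sum conj(c_x) d_y phi(x,y)."""
    return np.conj(c) @ G @ d

# Positive semidefiniteness: <f, f> >= 0 for random coefficient vectors.
rng = np.random.default_rng(0)
assert all(form(c, c) >= -1e-12 for c in rng.standard_normal((100, n)))

# The null space N = {f : <f,f> = 0} is ker(G); H is (here) the quotient.
null_dim = n - np.linalg.matrix_rank(G)
print("dim H0 =", n, " dim N =", null_dim, " dim H =", n - null_dim)
```

In infinite dimensions one completes the quotient as in the text; here the quotient is already complete.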
Definition 1.8. Let H be a Hilbert space. A family of vectors {uα } in H is
said to be an orthonormal basis of H if
(1) ⟨u_α, u_β⟩ = δ_αβ, and
(2) the closed span of {u_α} is H.
We are ready to prove the existence of an orthonormal basis of a Hilbert space,
using transfinite induction. Again, the key idea is to cook up a partially ordered
set satisfying all the requirements of transfinite induction, so that a maximal
element turns out to be an orthonormal basis. Notice that all we have at hand
are the abstract axioms of a Hilbert space, and nothing else. Everything will be
developed out of these axioms.
Theorem 1.9. Every Hilbert space H has an orthonormal basis.
To start out, we need the following lemmas.
Lemma 1.10. Let H be a Hilbert space and S ⊂ H. Then the following are
equivalent:
(1) x ⊥ S implies x = 0;
(2) the closed span of S is H.
Lemma 1.11. (Gram-Schmidt) Let {u_n} be a sequence of linearly independent
vectors in H. Then there exists a sequence {v_n} of unit vectors with the same span
so that ⟨v_i, v_j⟩ = δ_ij.
Remark. The Gram-Schmidt orthogonalization process was developed a little
earlier than von Neumann's formulation of abstract Hilbert space.
Proof. We now prove Theorem 1.9. If H is the zero space we are finished.
Otherwise, let u₁ ∈ H, u₁ ≠ 0. If ‖u₁‖ ≠ 1, we may consider u₁/‖u₁‖, which is a
normalized vector; hence we may assume ‖u₁‖ = 1. If the closed span of {u₁} is H
we are finished again; otherwise there exists u₂ ∉ span{u₁}. By Lemma 1.11, we
may assume ‖u₂‖ = 1 and u₁ ⊥ u₂. By induction, we get collections of orthonormal
vectors in H.

Consider the family P of all orthonormal subsets of H, partially ordered by set
inclusion. Let C ⊂ P be a chain and let M = ∪_{E∈C} E. M is clearly a majorant of
C. We claim that M is in the partially ordered system. In fact, for all x, y ∈ M
there exist E_x and E_y in C so that x ∈ E_x and y ∈ E_y. Since C is a chain, we
may assume E_x ≤ E_y. Hence x, y ∈ E_y and x ⊥ y, which shows that M is an
orthonormal set, i.e. M ∈ P.

By Zorn's lemma, there exists a maximal element m ∈ P. It suffices to show
that the closed span of m is H, and by Lemma 1.10 it is enough to show that x ⊥ m
implies x = 0. So suppose x ⊥ m. Since m ∪ {x} ⊇ m and m is maximal, it follows
that x ∈ m, which implies x ⊥ x. By the positivity axiom of the definition of
Hilbert space, x = 0.
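The Gram-Schmidt process of Lemma 1.11, which the proof leans on, can be sketched numerically (the three vectors below are an arbitrary test choice):

```python
import numpy as np

# A minimal Gram-Schmidt sketch (Lemma 1.11): turn linearly independent
# vectors u_n into orthonormal v_n with the same span.
def gram_schmidt(U):
    vs = []
    for u in U:
        # subtract the projections onto the earlier v's, then normalize
        w = u - sum(np.vdot(v, u) * v for v in vs)
        vs.append(w / np.linalg.norm(w))
    return np.array(vs)

U = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
V = gram_schmidt(U)
# <v_i, v_j> = delta_ij: the rows of V are orthonormal
assert np.allclose(V @ V.T.conj(), np.eye(3))
```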
Corollary 1.12. Let H be a Hilbert space; then H is isomorphic to the l²
space of the index set of an ONB of H.
Remark 1.13. There seems to be just one Hilbert space, which is true in
terms of the Hilbert space structure. But this is misleading, because numerous
different concrete realizations arise in applications.
Every Hilbert space has an ONB, but that does not mean in practice it is easy to
select one that works well for a particular problem.
It is also easy to see that the operator

    P_F = Σ_{v_i ∈ F} |v_i⟩⟨v_i|

is the orthogonal projection onto the span of F: P_F² = P_F and P_F* = P_F.
The Gram-Schmidt orthogonalization process may now be written in Dirac's
notation, so that the induction step is really just

    (x − P_F x) / ‖x − P_F x‖

which is a unit vector orthogonal to P_F H. Notice that if H is non-separable,
the standard induction does not work, and transfinite induction is needed.
and

    Q = i [  0    1                 ]
          [ −1    0    √2           ]
          [      −√2    0    √3     ]
          [            −√3    0   ⋱ ]
          [                  ⋱    ⋱ ]

The complex i in front of Q is to make it self-adjoint.
A selection of an ONB makes a connection between the algebra of operators
acting on H and infinite matrices. We check that, using Dirac's notation, the
algebra of operators really becomes the algebra of infinite matrices.

Pick an ONB {u_i} in H and A, B ∈ B(H). We denote by M_A = (A_ij),
A_ij := ⟨u_i, A u_j⟩, the matrix of A in the ONB. We compute ⟨u_i, AB u_j⟩:

    (M_A M_B)_ij = Σ_k A_ik B_kj
                 = Σ_k ⟨u_i, A u_k⟩ ⟨u_k, B u_j⟩
                 = Σ_k ⟨A* u_i, u_k⟩ ⟨u_k, B u_j⟩
                 = ⟨A* u_i, B u_j⟩
                 = ⟨u_i, AB u_j⟩

where we used I = Σ_i |u_i⟩⟨u_i|.
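The computation above can be checked in finite dimensions; here is a minimal numpy sketch with random operators and a random ONB (all choices arbitrary):

```python
import numpy as np

# Finite-dimensional check of (M_A M_B)_{ij} = <u_i, AB u_j>: in an ONB
# the algebra of operators becomes the algebra of matrices.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Use a random ONB {u_i} (columns of a unitary Q), not just the standard one.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
MA = Q.conj().T @ A @ Q            # (M_A)_{ij} = <u_i, A u_j>
MB = Q.conj().T @ B @ Q
MAB = Q.conj().T @ (A @ B) @ Q
assert np.allclose(MA @ MB, MAB)   # matrix of AB = product of the matrices
```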
Let w be a unit vector in H; w represents a quantum state. Since ‖w‖² =
Σ_i |⟨u_i, w⟩|² = 1, the numbers |⟨u_i, w⟩|² represent a probability distribution over
the index set. If w and w′ are two states, then

    ⟨w, w′⟩ = Σ_i ⟨w, u_i⟩ ⟨u_i, w′⟩

which has the interpretation that the transition from w′ to w may go through
all possible intermediate states u_i. Two states are uncorrelated if and only if they
are orthogonal.
It follows that the expectation value of any observable remains unchanged under a
unitary transformation.
Definition 1.19. Let A : H₁ → H₁ and B : H₂ → H₂ be operators. A is
unitarily equivalent to B if there exists a unitary operator U : H₁ → H₂ so that
B = U A U*.
Example 1.20. The Fourier transform U : L²(R) → L²(R),

    (U f)(t) = f̂(t) = (1/√(2π)) ∫ e^{−itx} f(x) dx.

The operators Q = M_x and P = −i d/dx are both densely defined on the Schwartz
space S ⊂ L²(R). P and Q are unitarily equivalent via the Fourier transform:
P = F* Q F.
Apply Gram-Schmidt orthogonalization to the polynomials against the measure
e^{−x²/2} dx to get orthogonal polynomials. These are the Hermite polynomials
(Hermite functions):

    h_n = e^{−x²/2} P_n = e^{−x²} (d/dx)^n e^{x²/2}.

The Hermite functions, once normalized, form an orthonormal basis and transform P
and Q into Heisenberg's infinite matrices. A related operator: H := (Q² + P² − 1)/2.
It can be shown that

    H h_n = n h_n

or equivalently

    (P² + Q²) h_n = (2n + 1) h_n,   n = 0, 1, 2, . . .

H is called the energy operator in quantum mechanics. This explains mathematically
why the energy levels are discrete, being multiples of ℏ.
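The orthogonality of the Hermite polynomials against e^{−x²/2} dx can be verified with Gauss-HermiteE quadrature; the normalization √(2π)·n! used below is the standard one for the probabilists' polynomials He_n (an assumption about which normalization the notes intend):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Orthogonality of the probabilists' Hermite polynomials He_n with respect
# to exp(-x^2/2) dx, checked by Gauss-HermiteE quadrature:
#   integral He_m He_n exp(-x^2/2) dx = sqrt(2*pi) * n! * delta_{mn}.
x, w = hermegauss(30)       # nodes/weights for the weight exp(-x^2/2)

def He(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)   # evaluate He_n at the quadrature nodes

for m in range(5):
    for n in range(5):
        val = np.sum(w * He(m, x) * He(n, x))
        expected = math.sqrt(2 * math.pi) * math.factorial(n) if m == n else 0.0
        assert abs(val - expected) < 1e-8, (m, n, val)
```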
A multiplication-operator version is also available, which works especially well
in physics. It says that A is a normal operator in H if and only if A is unitarily
equivalent to the operator of multiplication by a measurable function f on L²(M, µ),
where M is compact and Hausdorff. We will see how the two versions of the spectral
theorem are related after first introducing the concept of transformation of measure.
1.3.1. Transformation of measure. Let (X, S) and (Y, T) be two measurable
spaces with σ-algebras S and T respectively. Let φ : X → Y be a measurable
function. Suppose there is a measure µ on (X, S). Then µ_φ(·) := µ ∘ φ⁻¹(·) defines
a measure on Y; µ_φ is the transformation (pushforward) of µ under φ.
Notice that if E ∈ T, then φ(x) ∈ E if and only if x ∈ φ⁻¹(E) ∈ S. Hence

    χ_E ∘ φ = χ_{φ⁻¹(E)}.
It follows that for a simple function s(·) = Σ c_i χ_{E_i}(·) we have
s ∘ φ = Σ c_i χ_{φ⁻¹(E_i)}, and

    ∫ s ∘ φ dµ = Σ c_i ∫ χ_{E_i} ∘ φ dµ = Σ c_i ∫ χ_{φ⁻¹(E_i)} dµ = ∫ s d(µ ∘ φ⁻¹).
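The identity for simple functions can be sketched on a finite measure space (the measure, map, and simple function below are arbitrary test data):

```python
import numpy as np

# Transformation of measure on finite spaces: X = {0,...,3} with weights mu,
# phi : X -> Y = {0,1,2}, and mu_phi = mu o phi^{-1}.  For any function s
# on Y:  integral (s o phi) dmu = integral s d(mu o phi^{-1}).
mu = np.array([0.1, 0.2, 0.3, 0.4])      # a measure on X = {0,1,2,3}
phi = np.array([0, 1, 1, 2])             # phi : X -> Y

mu_phi = np.zeros(3)
for x, y in enumerate(phi):
    mu_phi[y] += mu[x]                   # mu_phi(E) = mu(phi^{-1}(E))

s = np.array([2.0, -1.0, 5.0])           # a simple function on Y
lhs = np.sum(s[phi] * mu)                # integral of s o phi against mu
rhs = np.sum(s * mu_phi)                 # integral of s against mu o phi^{-1}
assert np.isclose(lhs, rhs)
```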
The multiplication version of the spectral theorem states that every normal
operator A is unitarily equivalent to the operator M_f of multiplication by a
measurable function on L²(M, µ), where M is compact and Hausdorff. With
transformation of measure, we can go one step further and get that A is unitarily
equivalent to the operator of multiplication by the independent variable on some L²
space. Notice that if f is nasty, then even if µ is a nice measure (say Lebesgue
measure), the transformed measure µ ∘ f⁻¹ can still be nasty; it could even be
singular.
Let us assume we have a normal operator M_φ : L²(M, µ) → L²(M, µ) given by
multiplication by a measurable function φ. Define an operator U : L²(φ(M), µ ∘ φ⁻¹)
→ L²(M, µ) by

    (U f)(·) = f(φ(·)).

U is unitary, since

    ∫ |f|² d(µ ∘ φ⁻¹) = ∫ |f ∘ φ|² dµ = ∫ |U f|² dµ.
    set operation      function                          projection           subspace
    A ∩ B              χ_A χ_B                           P ∧ Q                P H ∩ Q H
    A ∪ B              χ_{A∪B}                           P ∨ Q                closed span of P H ∪ Q H
    A ⊂ B              χ_A χ_B = χ_A                     P ≤ Q                P H ⊂ Q H
    A₁ ⊂ A₂ ⊂ ···      χ_{A_i} χ_{A_{i+1}} = χ_{A_i}     P₁ ≤ P₂ ≤ ···        P_i H ⊂ P_{i+1} H
    ∪_{k=1}^∞ A_k      χ_{∪_k A_k}                       ∨_{k=1}^∞ P_k        closed span of ∪_k P_k H
    ∩_{k=1}^∞ A_k      χ_{∩_k A_k}                       ∧_{k=1}^∞ P_k        ∩_{k=1}^∞ P_k H
    A × B              (χ_{A×X})(χ_{X×B})                P ⊗ Q                P ⊗ Q ∈ proj(H ⊗ K)
in the sense that (strong convergence) for all x ∈ H there exists a vector, which
we denote by P x, so that

    lim_k ‖P_k x − P x‖ = 0,

and P really defines a self-adjoint projection.
The examples using Gram-Schmidt can now be formulated in the lattice of
projections. We have V_n, the n-dimensional subspaces, and P_n, the orthogonal
projection onto V_n, where

    V_n ⊂ V_{n+1}, ∪V_n dense  ~  P_n ≤ P_{n+1} → P,   P_n^⊥ ≥ P_{n+1}^⊥ → P^⊥.

Since ∪V_n is dense in H, it follows that P = I and P^⊥ = 0. We may express this
in lattice notation by

    ∨P_n = sup P_n = I,   ∧P_n^⊥ = inf P_n^⊥ = 0.
The tensor product construction fits with composite systems in quantum
mechanics.
1.5. Ideas in the Spectral Theorem
1.5.1. Multiplication by Mf .
(1) In this version of the spectral theorem, A = A∗ implies that A is unitarily
equivalent to the operator Mf of multiplication by a measurable function
f on the Hilbert space L2 (X, µ), where X is a compact Hausdorff space,
and µ is a regular Borel measure.
    H ────A────> H
    ↑U           ↑U
    L²(µ) ──M_f──> L²(µ)
(2) What is involved are two algebras: the algebra of measurable functions
on X, treated as multiplication operators, and the algebra of operators
generated by A (with identity). The two algebras are *-isomorphic. The
spectral theorem allows us to represent the algebra generated by A by the
algebra of functions (in this direction, it helps us understand A), and also
to represent the algebra of functions by the algebra of operators generated
by A (in this direction, it reveals properties of the function algebra and
the underlying space X; we will see this in a minute).
(3) Let A be the algebra of functions. π ∈ Rep(A, H) is a representation,
where
π(ψ) = F Mψ F −1
1.5. IDEAS IN THE SPECTRAL THEOREM 19
Remark 1.30. P(E) = F χ_E F⁻¹ defines a PVM. In fact all PVMs arise
this way. In this sense, the M_t version of the spectral theorem is better, since it
implies the PVM version. However, the PVM version facilitates some formulations
in quantum mechanics, so physicists usually prefer it.
Remark 1.31. Suppose we start with the PVM version of the spectral theorem.
How do we prove (ψ₁ψ₂)(A) = ψ₁(A)ψ₂(A), i.e. how do we check we really have an
algebra isomorphism? Recall that in the PVM version, ψ(A) is defined as the
operator so that for all φ ∈ H,

    ∫ ψ dµ_φ = ⟨φ, ψ(A)φ⟩.

As is standard in approximation arguments, one starts with simple or even step
functions. Once the claim is worked out for simple functions, the extension to
arbitrary measurable functions is straightforward. Hence let us suppose (WLOG)
that

    ψ₁ = Σ_i ψ₁(t_i) χ_{E_i},   ψ₂ = Σ_j ψ₂(t_j) χ_{E_j}

with {E_i} mutually disjoint; then

    (∫ ψ₁ P(dx)) (∫ ψ₂ P(dx)) = Σ_{i,j} ψ₁(t_i) ψ₂(t_j) P(E_i) P(E_j)
                              = Σ_i ψ₁(t_i) ψ₂(t_i) P(E_i)
                              = ∫ ψ₁ψ₂ P(dx)

since P(E_i)P(E_j) = 0 for i ≠ j.
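A finite-dimensional sketch of this multiplicativity, with a diagonal self-adjoint A and its spectral projections (the functions ψ₁, ψ₂ are arbitrary test choices):

```python
import numpy as np

# Finite-dimensional PVM sketch: for a self-adjoint A with spectral
# projections P({t}), psi(A) = sum_t psi(t) P({t}), and the functional
# calculus is multiplicative: (psi1 psi2)(A) = psi1(A) psi2(A).
A = np.diag([1.0, 1.0, 2.0, 3.0])         # eigenvalues with multiplicity
eigvals = np.unique(np.diag(A))

def P(t):                                  # spectral projection onto ker(A - t)
    return np.diag((np.diag(A) == t).astype(float))

def func_calc(psi):                        # psi(A) = sum_t psi(t) P({t})
    return sum(psi(t) * P(t) for t in eigvals)

psi1 = lambda t: t ** 2
psi2 = lambda t: 1.0 / t
lhs = func_calc(lambda t: psi1(t) * psi2(t))
rhs = func_calc(psi1) @ func_calc(psi2)
assert np.allclose(lhs, rhs)               # uses P(E_i) P(E_j) = 0 for i != j
```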
Proof. The uniqueness part follows from a standard argument. We will only
prove the existence of P. Let F : L²(µ) → H be the unitary operator so that
A = F M_t F⁻¹. Define

    P(E) = F χ_E F⁻¹

for all E in the Borel σ-algebra B of R. Then P(∅) = 0 and P(R) = I; for all
E₁, E₂ ∈ B,

    P(E₁ ∩ E₂) = F χ_{E₁∩E₂} F⁻¹
               = F χ_{E₁} χ_{E₂} F⁻¹
               = F χ_{E₁} F⁻¹ F χ_{E₂} F⁻¹
               = P(E₁) P(E₂).

Suppose {E_k} is a sequence of mutually disjoint elements in B. Let h ∈ H and
write h = F ĥ for some ĥ ∈ L²(µ). Then

    ⟨h, P(∪E_k) h⟩_H = ⟨F ĥ, P(∪E_k) F ĥ⟩_H
                     = ⟨ĥ, F⁻¹ P(∪E_k) F ĥ⟩_{L²}
                     = ⟨ĥ, χ_{∪E_k} ĥ⟩_{L²}
                     = ∫_{∪E_k} |ĥ|² dµ
                     = Σ_k ∫_{E_k} |ĥ|² dµ
                     = Σ_k ⟨h, P(E_k) h⟩_H.
Remark 1.37. In fact, A is in the closed span (under the norm or strong
topology) of {P(E) : E ∈ B}. This is equivalent to saying that t = F⁻¹AF is in
the closed span of the set of characteristic functions; the latter is again a standard
approximation in measure theory. It suffices to approximate t χ_{[0,∞)}.

The wonderful idea of Lebesgue is to partition not the domain, as was the case
in the Riemann integral over Rⁿ, but instead the range. Therefore integration over
an arbitrary set is made possible. Important examples include analysis on groups.
Proposition 1.38. Let f : [0, ∞) → R, f(x) = x, i.e. f = x χ_{[0,∞)}. Then there
exists a sequence of step functions s₁ ≤ s₂ ≤ ··· ≤ f such that lim_{n→∞} s_n(x) =
f(x).
Corollary 1.39. Let f(x) = x χ_{[0,M]}(x). Then there exists a sequence of step
functions s_n such that 0 ≤ s₁ ≤ s₂ ≤ ··· ≤ f and s_n → f uniformly as n → ∞.

Proof. Define s_n as in Proposition 1.38. Let n > M; then by construction

    f(x) − 2⁻ⁿ < s_n(x) ≤ f(x)

for all x ∈ [0, M]. Hence s_n → f uniformly as n → ∞.
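The standard dyadic choice s_n(x) = min(n, ⌊2ⁿx⌋/2ⁿ) realizes Proposition 1.38 and Corollary 1.39; a quick numerical check (this particular construction is the usual one from measure theory, which the notes do not spell out):

```python
import numpy as np

# Lebesgue-style approximation of f(x) = x on [0, infinity):
# s_n(x) = min(n, floor(2^n x) / 2^n) is a step function,
# s_n <= s_{n+1} <= f, and f - s_n < 2^{-n} wherever f <= n
# (hence uniform convergence on [0, M] once n > M).
def s(n, x):
    return np.minimum(n, np.floor((2.0 ** n) * x) / (2.0 ** n))

x = np.linspace(0.0, 5.0, 1001)
for n in range(1, 8):
    assert np.all(s(n, x) <= s(n + 1, x) + 1e-15)    # monotone in n
    assert np.all(s(n, x) <= x)                      # stays below f
M, n = 5.0, 10                                       # n > M
assert np.max(x - s(n, x)) < 2.0 ** (-n) + 1e-12     # uniform error < 2^{-n}
```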
Then

    lim_{n→∞} |(f(x) − s_n(x)) h(x)|² = 0

and

    |(f(x) − s_n(x)) h(x)|² ≤ const · |h(x)|².

Hence by the dominated convergence theorem,

    lim_{n→∞} ∫ |(f(x) − s_n(x)) h(x)|² dµ = 0,

or equivalently ‖(f − s_n)h‖² → 0 as n → ∞, i.e. M_{s_n} converges to M_f in the
strong operator topology.
GNS, Representations
    L²(X, µ) ──M_f──> L²(X, µ)
       │U                │U
       v                 v
    L²(R, µ_f) ──M_t──> L²(R, µ_f)
    ‖U g‖² = ‖g on f⁻¹({λ})‖² + ‖g on f⁻¹({0})‖²
           = |g(x₁)|² µ({x₁}) + |g(x₂)|² µ({x₂}) + ∫_{X\{x₁,x₂}} |g(x)|² dµ
           = ∫_X |g(x)|² dµ.

To see U diagonalizes M_f:

    M_t U g = λ g(x₁) ⊕ λ g(x₂) ⊕ 0 · g(t) χ_{X\{x₁,x₂}}
            = λ g(x₁) ⊕ λ g(x₂) ⊕ 0
    U M_f g = U(λ g(x) χ_{x₁,x₂})
            = λ g(x₁) ⊕ λ g(x₂) ⊕ 0.

Thus

    M_t U = U M_f.
Remark 2.3. Notice that f should really be written as

    f = λ χ_{x₁,x₂} = λ χ_{x₁} + λ χ_{x₂} + 0 · χ_{X\{x₁,x₂}}

since 0 is also an eigenvalue of M_f, and the corresponding eigenspace is the kernel
of M_f.
Example 2.4. Diagonalize M_f on L²(µ), where f = χ_{[0,1]} and µ is Lebesgue
measure on R.

Example 2.5. Diagonalize M_f on L²(µ), where

    f(x) = { 2x,       x ∈ [0, 1/2]
           { 2 − 2x,   x ∈ [1/2, 1]

and µ is Lebesgue measure on [0, 1].
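For Example 2.5 one can check numerically what the transformed measure µ ∘ f⁻¹ is: the tent map pushes Lebesgue measure on [0, 1] forward to Lebesgue measure on [0, 1] (each value in (0, 1) has two preimage branches of slope ±2, each contributing density 1/2):

```python
import numpy as np

# The tent map f(x) = 2x on [0,1/2], 2-2x on [1/2,1] pushes Lebesgue
# measure on [0,1] forward to Lebesgue measure on [0,1]; this mu o f^{-1}
# is the measure appearing in the diagonalization of M_f.
x = (np.arange(200000) + 0.5) / 200000          # fine grid on [0,1]
y = np.where(x <= 0.5, 2 * x, 2 - 2 * x)        # tent map values

hist, _ = np.histogram(y, bins=20, range=(0.0, 1.0), density=True)
assert np.allclose(hist, 1.0, atol=1e-2)        # pushforward density is ~1
```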
Remark 2.6. See direct integrals and disintegration of measures.
2.2. States, Dual and Pre-dual
The Hahn-Banach theorem implies that for all v ∈ V with ‖v‖ ≠ 0, there exists
l_v ∈ V* such that l_v(v) = ‖v‖. The construction is to define l_v on one vector, then
use transfinite induction to extend it to all vectors in V. Notice that V* is always
complete, even if V is an incomplete normed space; i.e. V* is always a Banach space.

V is embedded into V** (we always do this). The embedding is given by
v ↦ ψ(v) ∈ V**, where

    ψ(v)(l) := l(v).
Example 2.8. Let X be a compact Hausdorff space. C(X) with the sup norm
is a Banach space. (l^p)* = l^q and (L^p)* = L^q for 1/p + 1/q = 1 and 1 ≤ p < ∞.
If 1 < p < ∞, then (l^p)** = l^p, i.e. these spaces are reflexive. (l¹)* = l^∞, but
(l^∞)* is much bigger than l¹. Also note that (l^p)* ≠ l^p except for p = q = 2,
where l² is a Hilbert space.
Hilbert space
• a vector space over C
• the norm comes from an inner product ⟨·,·⟩
• complete with respect to ‖·‖ = ⟨·,·⟩^(1/2)
• H* = H
• every Hilbert space has a basis (proved by Zorn's lemma)
The identification H = H* is due to Riesz, and the corresponding map is given by

    h ↦ ⟨h, ·⟩ ∈ H*.

This can also be seen by noting that H is unitarily equivalent to l²(A) and the
latter is reflexive.
Let H be a Hilbert space. The set of all bounded operators B(H) on H is a
Banach space. We ask two questions: What is B(H)∗ ? Is B(H) the dual space of
some Banach space?
The first question is extremely difficult and we will discuss it later. We now
show that

    B(H) = T₁(H)*

where we denote by T₁(H) the trace class operators.
Let ρ : H → H be a compact self-adjoint operator. Assume ρ is positive, i.e.
⟨x, ρx⟩ ≥ 0 for all x ∈ H. By the spectral theorem for compact operators,

    ρ = Σ_k λ_k P_k.
The W* refers to the fact that its topology comes from the weak-* topology. B(H),
for any Hilbert space H, is a von Neumann algebra.
Example 2.19. (uf)(θ) = e^{iθ} f(θ), (vf)(θ) = f(θ − φ), restricted to [0, 2π],
i.e. to 2π-periodic functions. Then

    v u v⁻¹ = e^{iφ} u,   i.e.   v u = e^{iφ} u v.

u and v generate a noncommutative C*-algebra.
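A finite-dimensional model of this relation is given by the n × n clock and shift matrices (a standard construction, not taken from the notes), which satisfy vu = e^{iφ}uv with φ = 2π/n:

```python
import numpy as np

# Clock and shift matrices: a finite model of the relation vu = e^{i phi} uv.
n = 5
omega = np.exp(2j * np.pi / n)
U = np.diag(omega ** np.arange(n))            # clock: multiplication by phases
V = np.roll(np.eye(n), -1, axis=0)            # shift: e_k -> e_{k-1} (cyclically)

assert np.allclose(V @ U, omega * (U @ V))    # vu = e^{2 pi i / n} uv
assert not np.allclose(U @ V, V @ U)          # the algebra is noncommutative
```

Together U and V generate all of M_n(C), a finite-dimensional stand-in for the C*-algebra generated by u and v.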
Example 2.20. (from quantum mechanics)

    [p, q] = −iI.

p and q generate an algebra, but they cannot be represented by bounded operators.
One may apply bounded functions to them and get a C*-algebra.
Example 2.21. Let H be an infinite-dimensional Hilbert space. H is
isometrically isomorphic to a proper subspace of itself. For example, let {e_n} be an
ONB, H₁ = span{e_{2n}}, H₂ = span{e_{2n+1}}, and let

    V₁(e_n) = e_{2n},   V₂(e_n) = e_{2n+1};

then we get two isometries with

    V₁V₁* + V₂V₂* = I,   V_i* V_i = I,   V_i V_i* = P_i

where P_i is a self-adjoint projection, i = 1, 2. This is the Cuntz algebra O₂; more
generally one has O_n. Cuntz showed in 1977 that this is a simple algebra.
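A finite slice of these relations can be checked directly; below V₁, V₂ are viewed as maps Cᴺ → C²ᴺ (a truncation, since the genuine Cuntz relations require an infinite-dimensional space):

```python
import numpy as np

# Finite slice of the Cuntz relations: with V1 e_n = e_{2n} and
# V2 e_n = e_{2n+1} (0-indexed), as maps C^N -> C^{2N} the relations
# V_i* V_i = I and V1 V1* + V2 V2* = I hold exactly on this slice.
N = 8
V1 = np.zeros((2 * N, N))
V2 = np.zeros((2 * N, N))
for n in range(N):
    V1[2 * n, n] = 1.0            # e_n -> e_{2n}
    V2[2 * n + 1, n] = 1.0        # e_n -> e_{2n+1}

assert np.allclose(V1.T @ V1, np.eye(N))                   # isometry
assert np.allclose(V2.T @ V2, np.eye(N))                   # isometry
assert np.allclose(V1 @ V1.T + V2 @ V2.T, np.eye(2 * N))   # ranges fill up
```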
From algebras we get representations. For abelian algebras we get measures; for
non-abelian algebras we get representations, and the measures come out as a
corollary of the representations.
Proof. (Sketch of the proof of GNS) Let w be a state on A. We need to construct
(π, H, Ω). A is an algebra, and it is also a complex vector space. We pretend that A
is a Hilbert space and see what is needed for this. We do get a homomorphism of A
into operators on H, which follows from the associative law in A, i.e.
(AB)C = A(BC).
2.4. GNS, Spectral Theory
Example 2.23. A = C[0, 1], w : a ↦ a(0). Then a*a = |a|², hence w(a*a) =
|a|²(0) ≥ 0. Here

    ker w = {a : a(0) = 0}

and C[0, 1]/ker w is one-dimensional. The reason is that for all f ∈ C[0, 1] with
f(0) ≠ 0 we have

    f(x) ~ f(0)

because f(x) − f(0) ∈ ker w, where f(0) represents the constant function with value
f(0) on [0, 1]. This shows that w is a pure state, since the representation has to be
irreducible.
The multiplicity theory starts with breaking up each Hj into irreducible com-
ponents.
2.4.3. Examples on disintegration.
Example 2.29. Consider L²(I) with Lebesgue measure. Let

    F_x(t) = { 1,  t ≥ x
             { 0,  t < x.

F_x is a monotone increasing function on R; hence by Riesz we get the corresponding
Riemann-Stieltjes measure dF_x, and

    dµ = ∫^⊕ dF_x(t) dx,   i.e.   ∫ f dµ = ∫ dF_x(f) dx = ∫ f(x) dx.

Equivalently,

    dµ = ∫ δ_x dx,   i.e.   ∫ f dµ = ∫ δ_x(f) dx = ∫ f(x) dx.

µ is a state and δ_x = dF_x(t) is a pure state for all x ∈ I. This is a decomposition
of a state into a direct integral of pure states.
Example 2.30. Let Ω = ∏_{t≥0} R̄ and Ω_x = {w ∈ Ω : w(0) = x}. Kolmogorov's
construction gives rise to P_x by conditioning P with respect to "starting at x":

    P = ∫^⊕ P_x dx,   i.e.   P(·) = ∫ P(· | start at x) dx.

Compare Poisson integration.
2.4.4. Noncommutative Radon-Nikodym derivative. Let w be a state and K a
positive operator. Then

    w_K(A) = w(√K A √K) / w(K)

is a state, and w_K ≪ w, i.e. w(A) = 0 ⇒ w_K(A) = 0 (for positive A). Write
K = dw_K / dw. Check:

    w_K(1) = 1,

    w_K(A*A) = w(√K A*A √K) / w(K)
             = w((A√K)*(A√K)) / w(K) ≥ 0.
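A matrix sketch of w_K: take w(A) = Tr(ρA) for a density matrix ρ (an arbitrary illustrative choice of state, not the general case), pick a positive K, and check that w_K is again a state:

```python
import numpy as np

# Noncommutative Radon-Nikodym sketch: w(A) = Tr(rho A) for a density
# matrix rho, K a positive operator, and
#   w_K(A) = w(sqrt(K) A sqrt(K)) / w(K).
rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = X @ X.conj().T
rho /= np.trace(rho).real                  # density matrix: rho >= 0, Tr = 1
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
K = Y @ Y.conj().T                         # positive operator

evals, evecs = np.linalg.eigh(K)           # sqrt(K) via diagonalization
sqrtK = evecs @ np.diag(np.sqrt(evals.clip(min=0))) @ evecs.conj().T

def w(A):
    return np.trace(rho @ A)

def wK(A):
    return w(sqrtK @ A @ sqrtK) / w(K)

assert np.isclose(wK(np.eye(n)), 1.0)      # w_K(1) = 1
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert wK(A.conj().T @ A).real >= -1e-12   # w_K(A*A) >= 0
```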
2.5. Choquet, Krein-Milman, Decomposition of States

Note. The dual of a normed vector space is always a Banach space, so the
theorem applies. The convex hull of a set in an infinite-dimensional space is not
always closed, so we take its closure. A good reference for locally convex topological
vector spaces is the book by F. Treves.

The decomposition of states into pure states was developed by R. Phelps for
representation theory. The idea goes back to Choquet.
Theorem 2.34. (Choquet) Let w ∈ K = S(A). Then there exists a measure µ_w
"concentrated" on E(K), such that for every affine function f,

    f(w) = ∫_{"E(K)"} f dµ_w.

Note. E(K) may not be Borel. In this case replace E(K) by a Borel set V ⊃
E(K) such that

    µ_w(V \ E(K)) = 0.

This is a theorem of Glimm. Examples of this case include the Cuntz algebra, the
free group on 2 generators, and the relation u v u⁻¹ = v² for wavelets.
Note. µ_w may not be unique. If it is unique, K is called a simplex. The
unit disk has its boundary as extreme points, but the representation of interior
points by points on the boundary is not unique; therefore the unit disk is not
a simplex. A triangle is.

Proof. (Krein-Milman) If K ⫌ closure(conv(E(K))), we get a linear functional w
which is zero on conv(E(K)) and nonzero at some point of K \ conv(E(K)). Extend
it by the Hahn-Banach theorem to a linear functional on the whole space, and get a
contradiction.
2.5.2. The Gelfand space X. Back to the question of what the Gelfand
space X is.

Let A be a commutative Banach *-algebra. Consider the closed ideals in A
(since A is normed, we consider closed ideals), ordered by inclusion. By Zorn's
lemma, there exist maximal ideals M. A/M is 1-dimensional, hence A/M = {tv : t ∈ C}
for some v ∈ A/M. Therefore the composition

    A → A/M → C

gives a homomorphism φ; with v := the class of 1 in A/M, we get φ(1) = 1.
Conversely, the kernel of a homomorphism φ : A → C is a maximal ideal in A.
Therefore there is a bijection between maximal ideals and homomorphisms. Note
that if φ : A → C is a homomorphism, then it has to be a contraction.

Let X be the set of all maximal ideals ≃ all homomorphisms, viewed inside A*₁,
the unit ball in A*. Since A*₁ is compact (weak-*) and X is closed in it, X is also
compact.
The Gelfand transform F : A → C(X) is defined by

    F(a)(φ) = φ(a);

then

    A / ker F ≃ C(X)

(mod the kernel, for general Banach algebras).
To identify X in practice, one always starts with a guess, and usually it turns out
to be correct. Since the Fourier transform converts convolution to multiplication,

    φ_z : a ↦ Σ_n a_n z^n

is a complex homomorphism. To see that φ_z is multiplicative:

    φ_z(ab) = Σ_n (ab)_n z^n
            = Σ_{n,k} a_k b_{n−k} z^n
            = Σ_k a_k z^k Σ_n b_{n−k} z^{n−k}
            = (Σ_k a_k z^k) (Σ_k b_k z^k).
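The homomorphism property φ_z(ab) = φ_z(a)φ_z(b) can be checked numerically for finitely supported sequences, with convolution as the product (the sequences and the point z below are arbitrary):

```python
import numpy as np

# For the convolution algebra of finitely supported sequences, each z gives
# a homomorphism phi_z(a) = sum_n a_n z^n; the Gelfand transform turns
# convolution into multiplication: phi_z(a * b) = phi_z(a) phi_z(b).
a = np.array([1.0, 2.0, 0.0, -1.0])
b = np.array([0.5, 0.0, 3.0])
ab = np.convolve(a, b)                    # (a*b)_n = sum_k a_k b_{n-k}

z = np.exp(1j * 0.7)                      # a point on the unit circle
phi = lambda c: np.polynomial.polynomial.polyval(z, c)   # sum_n c_n z^n
assert np.isclose(phi(ab), phi(a) * phi(b))
```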
Example 2.40. In 2 dimensions, A = λI: {λI}′ = M₂(C), which is not abelian.
Hence mult(λ) = 2.

Example 2.41. In 3 dimensions,

    A = [ λ₁          ]
        [     λ₁      ]
        [         λ₂  ]

where λ₁ ≠ λ₂. The commutant is

    [ B      ]
    [      b ]

where B ∈ M₂(C) and b ∈ C, so the commutant is isomorphic to M₂(C) ⊕ C, and
the multiplicity of λ₁ is equal to 2.
Example 2.42. The example of M_φ with repetition:

    M_φ ⊕ M_φ : L²(µ) ⊕ L²(µ) → L²(µ) ⊕ L²(µ),
    (M_φ ⊕ M_φ)(f₁, f₂) = (φf₁, φf₂).

The commutant in this case is isomorphic to M₂(C). If we introduce tensor
products, the representation space is also written as L²(µ) ⊗ V₂, the multiplication
operator is amplified to M_φ ⊗ I, and the commutant is represented as I ⊗ M₂(C);
hence it is clear that the commutant is isomorphic to M₂(C). To check:

    (M_φ ⊗ I)(I ⊗ B) = M_φ ⊗ B = (I ⊗ B)(M_φ ⊗ I).
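This last identity is just the mixed-product property of tensor products, easy to check with Kronecker products in finite dimensions (the matrices below are arbitrary stand-ins for M_φ and B):

```python
import numpy as np

# Example 2.42 in finite dimensions: the amplification M_phi (x) I
# commutes with everything of the form I (x) B, B in M_2(C).
phi = np.diag([1.0, 2.0, 3.0])             # a discretized multiplication operator
rng = np.random.default_rng(3)
B = rng.standard_normal((2, 2))

lhs = np.kron(phi, np.eye(2)) @ np.kron(np.eye(3), B)
rhs = np.kron(np.eye(3), B) @ np.kron(phi, np.eye(2))
assert np.allclose(lhs, rhs)               # the two operators commute
assert np.allclose(lhs, np.kron(phi, B))   # both equal phi (x) B
```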
produce a Hilbert space and a representation. It turns out that the condition to
put on φ is complete positivity, in the sense that φ ⊗ I_{M_n} maps positive elements
to positive elements for all n ∈ N. Here e_{ij} = |e_i⟩⟨e_j| are the matrix units,
which multiply as

    e_{ij} e_{kl} = |e_i⟩⟨e_j| |e_k⟩⟨e_l| = { |e_i⟩⟨e_l|,  j = k
                                            { 0,           j ≠ k.
P
take any v = k vk ⊗ ek in H ⊗ Cn ,
X X X
h vl ⊗ el , ( ϕ(Aij ) ⊗ eij )( vk ⊗ ek )i
l i,j k
X X
= h vl ⊗ el , ϕ(Aij )vk ⊗ eij (ek )i
l i,j,k
X X
= h vl ⊗ el , ϕ(Aij )vj ⊗ ei i
l i,j
X
= hvl , ϕ(Aij )vj ihel , ei i
i,j,l
X
= hvi , ϕ(Aij )vj i ≥ 0.
i,j
The Stinespring picture, φ(A) = V* π(A) V with V : H → K:

    H ──φ(A)──> H
    V│          ↑V*
     v          │
    K ──π(A)──> K
Suppose φ(A) = V* π(A) V. Since positive elements in A ⊗ M_n are sums of
operator matrices of the form

    Σ_{i,j} A_i* A_j ⊗ e_{ij} = [ A₁* ]
                                [ A₂* ]  [ A₁ A₂ ··· A_n ]
                                [  ⋮  ]
                                [ A_n* ]

it suffices to show that

    (φ ⊗ I_{M_n})(Σ_{i,j} A_i* A_j ⊗ e_{ij}) = Σ_{i,j} φ(A_i* A_j) ⊗ I_{M_n}(e_{ij})
                                             = Σ_{i,j} φ(A_i* A_j) ⊗ e_{ij}
hence ⟨·,·⟩_φ is positive semidefinite. Let ker := {v ∈ K₀ : ⟨v, v⟩_φ = 0}. Since the
Schwarz inequality holds for any positive sesquilinear form, it follows that
ker = {v ∈ K₀ : ⟨s, v⟩_φ = 0 for all s ∈ K₀}; thus ker is a subspace of K₀. Let K_φ
be the Hilbert space obtained by completing K₀/ker under the norm ‖·‖_φ := ⟨·,·⟩_φ^(1/2).
Define V : H → K₀ by V ξ := 1_A ⊗ ξ. Then

    ‖V ξ‖²_φ = ⟨1_A ⊗ ξ, 1_A ⊗ ξ⟩_φ = ⟨ξ, φ(1_A* 1_A) ξ⟩_H = ⟨ξ, ξ⟩_H = ‖ξ‖²_H

which implies that V is an isometry, and H is isometrically embedded into K₀.

Claim: V*V = I_H, and VV* is the self-adjoint projection from K₀ onto the
subspace 1_A ⊗ H. In fact, for any A ⊗ η ∈ K₀,

    ⟨A ⊗ η, V ξ⟩_φ = ⟨A ⊗ η, 1_A ⊗ ξ⟩_φ
                   = ⟨η, φ(A*) ξ⟩_H
                   = ⟨φ(A*)* η, ξ⟩_H
                   = ⟨V*(A ⊗ η), ξ⟩_H;

therefore

    V*(A ⊗ η) = φ(A*)* η.

It follows that

    V*V ξ = V*(1_A ⊗ ξ) = φ(1_A*)* ξ = ξ for all ξ ∈ H, i.e. V*V = I_H.

Moreover, for any A ⊗ η ∈ K₀,

    VV*(A ⊗ η) = V(φ(A*)* η) = 1_A ⊗ φ(A*)* η.
For any A ∈ A, let π_φ(A)(Σ_j B_j ⊗ η_j) := Σ_j A B_j ⊗ η_j and extend to K_φ.
For all ξ, η ∈ H,

    ⟨ξ, V* π(A) V η⟩_H = ⟨V ξ, π(A) V η⟩_φ
                       = ⟨1_A ⊗ ξ, π(A) 1_A ⊗ η⟩_φ
                       = ⟨1_A ⊗ ξ, A ⊗ η⟩_φ
                       = ⟨ξ, φ(1_A* A) η⟩_H
                       = ⟨ξ, φ(A) η⟩_H;

hence φ(A) = V* π(A) V for all A ∈ A.
2.8. Comments on Stinespring's Theorem

Let ker = {v ∈ K₀ : ⟨v, v⟩_µ = 0}; completing K₀/ker with the corresponding
norm, we actually get the Hilbert space L²(µ).
Let A be a *-algebra. The set of (finitely supported) C-valued functions on A is
precisely A ⊗ C. We may think of putting A on the horizontal axis, and at each
point A ∈ A attaching a complex number, i.e. building functions indexed by A.
Members of A ⊗ C are of the form

    Σ_i A_i ⊗ c_i = Σ_i c_i δ_{A_i}

with finite summation over i. Note that C is naturally embedded into A ⊗ C as
1_A ⊗ C, i.e. c ↦ c δ_{1_A}, and the latter is a 1-dimensional subspace. In order to
build a Hilbert space out of this function space, a quadratic form is required; a
state on A does exactly this job. Let w be a state on A. Then we get the sesquilinear
form

    ⟨Σ_i c_i δ_{A_i}, Σ_j d_j δ_{B_j}⟩_w := Σ_{i,j} c̄_i d_j w(A_i* B_j)
which is equal to

    [ ξ̄₁ ξ̄₂ ··· ξ̄_n ] [ φ(A₁*B₁)  φ(A₁*B₂)  ···  φ(A₁*B_n) ] [ ξ₁ ]
                       [ φ(A₂*B₁)  φ(A₂*B₂)  ···  φ(A₂*B_n) ] [ ξ₂ ]
                       [    ⋮         ⋮       ⋱       ⋮     ] [ ⋮  ]
                       [ φ(A_n*B₁) φ(A_n*B₂) ···  φ(A_n*B_n)] [ ξ_n ]

It is not clear why the matrix (φ(A_i* B_j)) should be a positive operator acting on
H ⊗ Cⁿ. But we could very well put this extra requirement in as an axiom, and
consider φ being a CP map!
2.8. COMMENTS ON STINESPRING’S THEOREM 49
    φ(A) = V* π(A) V,
    P² = VV*VV* = V(V*V)V* = VV* = P   (P := VV* is a projection).
    id_H ⊕ ··· ⊕ id_H (n times) ∈ Rep(A, H ⊕ ··· ⊕ H (n times))
and

    ⟨f, g⟩ = ∫_X ⟨f(x), g(x)⟩_H dµ(x).
• Show that all the spaces above are Hilbert spaces.
Corollary 2.55. (Kraus, physicist) Let dim H = n. Then all CP maps are of
the form

    φ(A) = Σ_i V_i* A V_i.

Remark. This was discovered in the physics literature by Kraus. The original
proof was very intricate, but it is a corollary of Stinespring's theorem. When
dim H = n, let e₁, . . ., e_n be an ONB; V_i : e_i ↦ V e_i ∈ K is an isometry,
i = 1, 2, . . ., n. So we get a system of isometries, and

    φ(A) = [ V₁* V₂* ··· V_n* ] [ A           ] [ V₁ ]
                                [    A        ] [ V₂ ]
                                [       ⋱     ] [ ⋮  ]
                                [          A  ] [ V_n ]

Notice that φ(1) = 1 if and only if Σ_i V_i* V_i = 1.
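A numerical sketch of a unital Kraus map: the V_i below are random and then normalized so that Σ V_i* V_i = I (an illustration of the formula, not the notes' construction):

```python
import numpy as np

# A Kraus map phi(A) = sum_i V_i* A V_i; after normalizing so that
# sum_i V_i* V_i = I, phi is unital and positivity preserving.
rng = np.random.default_rng(4)
n, k = 3, 4
Vs = rng.standard_normal((k, n, n)) + 1j * rng.standard_normal((k, n, n))

S = sum(V.conj().T @ V for V in Vs)           # S = sum V_i* V_i > 0
evals, evecs = np.linalg.eigh(S)
Sinvhalf = evecs @ np.diag(evals ** -0.5) @ evecs.conj().T
Vs = [V @ Sinvhalf for V in Vs]               # now sum V_i* V_i = I

def phi(A):
    return sum(V.conj().T @ A @ V for V in Vs)

assert np.allclose(phi(np.eye(n)), np.eye(n))  # unital
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X @ X.conj().T                             # a positive operator
w = np.linalg.eigvalsh(phi(A))
assert np.all(w >= -1e-10)                     # phi(A) is again positive
```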
Using tensor products in representations.

Exercise 2.56. Let (X_i, M_i, µ_i), i = 1, 2, be measure spaces. Let
π_i : L^∞(µ_i) → B(L²(µ_i)) be the representation such that π_i(f) is the operator of
multiplication by f on L²(µ_i). Hence π_i ∈ Rep(L^∞(X_i), L²(µ_i)), and

    π₁ ⊗ π₂ ∈ Rep(L^∞(X₁ × X₂), L²(µ₁ × µ₂))

with

    (π₁ ⊗ π₂)(φ̃) f̃ = φ̃ f̃

where φ̃ ∈ L^∞(X₁ × X₂) and f̃ ∈ L²(µ₁ × µ₂).
More about multiplicity
which intuitively says that the whole space X can’t be divided into parts where µ
is invariant. It has to be mixed up by the transformation σ.
and

    s_ξ(A) = ⟨ξ, Aξ⟩ = ∫ λ ‖E(dλ)ξ‖²,

so that

    ‖ξ‖² = ∫ ‖E(dλ)ξ‖² = 1.
2.13. Kadison-Singer Conjecture
B(H).

Lemma 2.80. Pure states on B(H) come from unit vectors: let u ∈ H with
‖u‖ = 1. Then

    w_u(A) = ⟨u, Au⟩

is a pure state, and all pure states on B(H) are of this form.

Remark. It is in fact the equivalence class of a unit vector (up to phase) that
gives the pure state, since

    ⟨e^{iθ}u, A e^{iθ}u⟩ = ⟨u, Au⟩.

Equivalently, pure states sit inside the projective vector space; for C^{n+1} this is
CPⁿ.
Since l^∞ is an abelian algebra, by Gelfand's theorem l^∞ ≃ C(X) for some
compact Hausdorff space X; here X = βN, the Stone-Cech compactification of N.
Points in βN are ultrafilters. Pure states on l^∞ correspond to pure states on
C(βN), i.e. Dirac measures on βN.

Let s be a pure state on l^∞. Use the Hahn-Banach theorem to extend s, as a
linear functional, from l^∞ to s̃ on the Banach space B(H). However, the Hahn-
Banach theorem does not guarantee that the extension is a pure state. Let E(s) be
the set of all states on B(H) which extend s. Since s̃ ∈ E(s), E(s) is nonempty;
E(s) is compact and convex in the weak-* topology. By the Krein-Milman theorem,
E(s) is the closed convex hull of its extreme points. Any extreme point will then be
a pure state extension of s. But which one to choose? It is the uniqueness part that
is the famous conjecture.
CHAPTER 3
Applications to Groups
• π_G ∈ Rep(G, H):

    π(g₁g₂) = π(g₁) π(g₂),   π(e_G) = I_H,   π(g)* = π(g⁻¹)

• π_A ∈ Rep(A, H):

    π(A₁A₂) = π(A₁) π(A₂),   π(1_A) = I_H,   π(A)* = π(A*)
Case 1. G is discrete → A = l¹(G):

    (Σ_g a(g) g)(Σ_h b(h) h) = Σ_{g,h} a(g) b(h) gh
                             = Σ_{g′} (Σ_h a(g′h⁻¹) b(h)) g′,

    (Σ_g c(g) g)* = Σ_g c̄(g⁻¹) g.
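These formulas can be tested on the cyclic group Z/nZ (an arbitrary finite example), where the product is convolution and the involution is a*(g) = conj a(g⁻¹):

```python
import numpy as np

# l^1(G) for G = Z/nZ: the product is convolution
#   (ab)(g') = sum_h a(g' h^{-1}) b(h),
# and the involution is a*(g) = conj(a(g^{-1})).
n = 6
rng = np.random.default_rng(5)
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def conv(a, b):
    return np.array([sum(a[(g - h) % n] * b[h] for h in range(n))
                     for g in range(n)])

def star(a):
    return np.array([np.conj(a[(-g) % n]) for g in range(n)])

# (ab)* = b* a*, as in any *-algebra.
assert np.allclose(star(conv(a, b)), conv(star(b), star(a)))
```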
Existence of Haar measure: there is an easy proof for compact groups, which
extends to locally compact groups. For noncompact groups, the left and right Haar
measures could be different; if they are always equal, the group is called
unimodular. (Groups that are not locally compact may fail to have a Haar measure.)
On L¹(G),

    (φ₁ ⋆ φ₂)(g) = ∫ φ₁(gh⁻¹) φ₂(h) dλ_R(h).

Note. (1) In the case l¹(G), the counting measure is unimodular, hence Δ(g)
does not appear. (2) In φ₁(gh⁻¹), the h⁻¹ appears since the action on points is
dual to the action on functions. A change of variable shows

    ∫ φ(gh⁻¹) dλ_R(g) = ∫ φ(g′) dλ_R(g′h) = ∫ φ(g′) dλ_R(g′).
For the ax + b group, g = [ a b ; 0 1 ], the left Haar measure is

    dλ_L = g⁻¹ dg = [ 1/a  −b/a ; 0  1 ] [ da  db ; 0  0 ] = da db / a².

Check:

    g = [ a b ; 0 1 ],   h = [ a′ b′ ; 0 1 ],
    h⁻¹g = [ 1/a′  −b′/a′ ; 0  1 ] [ a  b ; 0  1 ] = [ a/a′  (b−b′)/a′ ; 0  1 ]

so

    ∫ f(h⁻¹g) dλ_L(g) = ∫ f(a/a′, (b−b′)/a′) da db / a²
                      = ∫ f(s, t) d(a′s) d(a′t + b′) / (a′s)²
                      = ∫ f(s, t) ds dt / s²

where, with the change of variables,

    s = a/a′,          da = a′ ds,
    t = (b − b′)/a′,   db = a′ dt,
    da db / a² = a′² ds dt / (s a′)² = ds dt / s².
Right Haar:
∫ f(gh⁻¹)(dg)g⁻¹ = ∫ f(g′) d(g′h)(g′h)⁻¹
= ∫ f(g′)(dg′)(hh⁻¹)g′⁻¹
= ∫ f(g′)(dg′)g′⁻¹
dλ_R = (dg)g⁻¹ = ( da db ; 0 0 )( 1/a  −b/a ; 0 1 ) = ( da/a  db − (b/a)da ; 0 0 ),
giving dλ_R = da db / a.
check:
g = ( a b ; 0 1 ), h = ( a′ b′ ; 0 1 ), gh⁻¹ = ( a b ; 0 1 )( 1/a′  −b′/a′ ; 0 1 ) = ( a/a′  −ab′/a′ + b ; 0 1 )
∫ f(gh⁻¹) dλ_R(g) = ∫ f( a/a′, −ab′/a′ + b ) da db / a
= ∫ f(s, t) a′ ds dt / (a′s)
= ∫ f(s, t) ds dt / s
where, with the change of variables
s = a/a′, da = a′ ds
t = −ab′/a′ + b, db = dt
we get
da db / a = a′ ds dt / (a′s) = ds dt / s.
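The left-invariance of dλ_L = da db / a² can also be checked numerically. Below is a minimal sketch (the test function, grid, and group element h = (2, 1) are illustrative choices, not from the notes): a Riemann sum of f(hg) against dλ_L should agree with that of f(g), up to quadrature error.

```python
import numpy as np

# Numeric sanity check of left-invariance of dλ_L = da db / a² on the
# ax+b group {(a, b) : a > 0}, where h = (a0, b0) acts by
# h·(a, b) = (a0*a, a0*b + b0).
a = np.linspace(0.01, 10.0, 2000)[:, None]
b = np.linspace(-8.0, 8.0, 800)[None, :]
da, db = a[1, 0] - a[0, 0], b[0, 1] - b[0, 0]

def f(a, b):                        # smooth, rapidly decaying test function
    return np.exp(-(np.log(a) ** 2 + b ** 2))

def integrate(values):              # Riemann sum of ∫∫ (·) da db / a²
    return np.sum(values / a ** 2) * da * db

a0, b0 = 2.0, 1.0                   # an arbitrary group element h
lhs = integrate(f(a0 * a, a0 * b + b0))   # ∫ f(hg) dλ_L(g)
rhs = integrate(f(a, b))                  # ∫ f(g)  dλ_L(g)
print(abs(lhs - rhs) / rhs)               # small: quadrature error only
```

The analogous check with dλ_R = da db / a and right translations works the same way.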
3.3. INDUCED REPRESENTATION 65
• ax + b
• Heisenberg
• SL₂(R)
• Lorentz
• Poincaré
Among these, the ax + b, Heisenberg and Poincaré groups are semi-direct product
groups. Their representations are induced from a smaller normal subgroup. It is
extremely easy to find representations of abelian subgroups: unitary representations
of abelian subgroups are one-dimensional, but the induced representation on an
enlarged Hilbert space is infinite-dimensional. See the appendix for a quick review
of semi-direct products.
Example 3.2. The ax + b group (a > 0). G = {(a, b)} where (a, b) = ( a b ; 0 1 ).
The multiplication rule is given by (a, b)(a′, b′) = (aa′, ab′ + b).
check:
f(g⁻¹x) = f( x/a, (y − b)/a ).
X̃f = d/da |_{a=1,b=0} f( x/a, (y − b)/a ) = ( −x ∂/∂x − y ∂/∂y ) f(x, y)
Ỹf = d/db |_{a=1,b=0} f( x/a, (y − b)/a ) = −(∂/∂y) f(x, y)
so
X̃ = −x ∂/∂x − y ∂/∂y
Ỹ = −∂/∂y
[X̃, Ỹ] = X̃Ỹ − ỸX̃
= ( −x ∂/∂x − y ∂/∂y )( −∂/∂y ) − ( −∂/∂y )( −x ∂/∂x − y ∂/∂y )
= x ∂²/∂x∂y + y ∂²/∂y² − ( x ∂²/∂x∂y + ∂/∂y + y ∂²/∂y² )
= −∂/∂y
= Ỹ.
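The bracket relation [X̃, Ỹ] = Ỹ can be double-checked numerically with finite differences; a small sketch (the test function and base point are arbitrary choices):

```python
import math

# Finite-difference check of [X̃, Ỹ] = Ỹ for the vector fields
# X̃ = -x ∂/∂x - y ∂/∂y and Ỹ = -∂/∂y.
h = 1e-3

def d_dx(F):
    return lambda x, y: (F(x + h, y) - F(x - h, y)) / (2 * h)

def d_dy(F):
    return lambda x, y: (F(x, y + h) - F(x, y - h)) / (2 * h)

def X(F):  # X̃F = -x F_x - y F_y
    return lambda x, y: -x * d_dx(F)(x, y) - y * d_dy(F)(x, y)

def Y(F):  # ỸF = -F_y
    return lambda x, y: -d_dy(F)(x, y)

f = lambda x, y: math.sin(x) * math.exp(y)   # arbitrary smooth test function
x0, y0 = 0.7, 0.3
bracket = X(Y(f))(x0, y0) - Y(X(f))(x0, y0)  # [X̃, Ỹ]f at (x0, y0)
print(abs(bracket - Y(f)(x0, y0)))           # ≈ 0 up to discretization error
```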
Equivalently, via the one-parameter subgroups e^{−tX} = (e^{−t}, 0) and e^{−tY} = (1, −t):
X̃f = d/dt |_{t=0} f(e^{−tX} x)
= d/dt |_{t=0} f( (e^{−t}, 0)(x, y) )
= d/dt |_{t=0} f( e^{−t}x, e^{−t}y )
= ( −x ∂/∂x − y ∂/∂y ) f(x, y)
Ỹf = d/dt |_{t=0} f(e^{−tY} x)
= d/dt |_{t=0} f( (1, −t)(x, y) )
= d/dt |_{t=0} f( x, y − t )
= −(∂/∂y) f(x, y)
Example 3.4. We may parametrize the Lie algebra of the ax + b group using
the (x, y) variables. Build the Hilbert space L²(µ_L). The unitary representation
π(g)f(σ) = f(g⁻¹σ) induces the following representations of the Lie algebra:
dπ(X)f(σ) = d/ds |_{s=0} f(e^{−sX}σ) = X̃f(σ)
dπ(Y)f(σ) = d/dt |_{t=0} f(e^{−tY}σ) = Ỹf(σ).
Hence in the parameter space (s, t) ∈ R² we have the two usual derivative operators
∂/∂s and ∂/∂t, while on the manifold
∂/∂s = −x ∂/∂x − y ∂/∂y
∂/∂t = −∂/∂y
Recall that the modular function comes in when the translation is put on the
wrong side, i.e.
∫_G f(gx) dx = △(g⁻¹) ∫_G f(x) dx
or equivalently
△(g) ∫_G f(gx) dx = ∫_G f(x) dx;
similarly for dξ on the subgroup Γ.
Form the quotient M = Γ\G, and let π : G → Γ\G be the quotient map, or the
covering map. M carries a transitive G-action.
Note 3.7. M is called a fundamental domain or homogeneous space. M is a
group if and only if Γ is a normal subgroup of G. In general, M may not be a
group, but it is still a very important manifold.
Note 3.8. µ is called an invariant measure on M if µ(Eg) = µ(E), ∀g ∈ G;
µ is said to be quasi-invariant if µ(E) = 0 ⇔ µ(Eg) = 0, ∀g. In general there is
no invariant measure on M, but only quasi-invariant measures. M has an invariant
measure when G is unimodular (e.g. the Heisenberg group). Not all groups are
unimodular; a typical example is the ax + b group.
Define τ : C_c(G) → C_c(M) by
(τϕ)(π(x)) = ∫_Γ ϕ(ξx) dξ.
Lemma 3.9. τ is surjective.
Note 3.10. Since ϕ has compact support, the integral is well-defined. τ is
called a conditional expectation. It is simply the summation of ϕ over the orbit Γx:
as ξ runs over Γ, ξx runs over Γx. τϕ may also be interpreted as an average,
except that it does not divide out the total mass; that only differs by a constant.
Note 3.11. We may also say that τϕ is a Γ-periodic extension, viewing it as
a function defined on G. Then we check that
τϕ(ξ₁x) = ∫_Γ ϕ(ξξ₁x) dξ = τϕ(x)
because dξ is a right Haar measure. Thus τϕ is Γ-periodic in the sense that
τϕ(ξx) = τϕ(x), ∀ξ ∈ Γ.
Example 3.12. G = R, Γ = Z, with dξ being the counting measure on Z. Then
(τϕ)(π(x)) = ∫_Γ ϕ(ξx) dξ = Σ_{z∈Z} ϕ(z + x).
As a consequence, τϕ is left translation invariant by integers, i.e. τϕ is Z-periodic:
(τϕ)(π(z₀ + x)) = Σ_{z∈Z} ϕ(z₀ + z + x) = Σ_{z∈Z} ϕ(z + x).
Since ϕ has compact support, ϕ(z + x) vanishes for all but a finite number of z.
Hence it is a finite summation, and it is well-defined.
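For G = R, Γ = Z, the conditional expectation τ is just periodization; a quick numeric sketch (the bump function and truncation range are illustrative choices):

```python
import math

# (τφ)(x) = Σ_{z∈Z} φ(z + x) for a compactly supported φ on R.
def phi(x):                       # smooth bump supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def tau_phi(x, N=50):             # truncated sum; φ vanishes for |z + x| ≥ 1
    return sum(phi(z + x) for z in range(-N, N + 1))

x = 0.37
print(abs(tau_phi(x) - tau_phi(x + 3)))   # Z-periodicity: ≈ 0
```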
Let L be a unitary representation of Γ on a Hilbert space V. The task now is
to get a unitary representation U^{ind} of G on an enlarged Hilbert space H, i.e. given
L ∈ Rep(Γ, V), get U^{ind} ∈ Rep(G, H) with H ⊃ V.
Let F∗ be the set of functions f : G → V so that
f(ξg) = ρ(ξ)^{1/2} L_ξ(f(g))
where ρ = δ/△.
Note 3.13. If f ∈ F∗ , then f (·g ′ ) ∈ F∗ , as
f (ξgg ′ ) = ρ(ξ)1/2 Lξ (f (gg ′ )).
It follows that F∗ is invariant under right translation by g ∈ G, i.e. (Rg f )(·) =
f (·g) ∈ F∗ , ∀f ∈ F∗ . Eventually, we want to define (Ugind f )(·) := f (·g), not on F∗
but pass to a subspace.
Note 3.14. The factor ρ(ξ)^{1/2} comes in since later we will define an inner
product ⟨·, ·⟩_new on F∗ so that ‖f(ξg)‖_new = ‖f(g)‖_new. Let's ignore ρ(ξ)^{1/2} for a
moment.
Since L_ξ is unitary, ‖f(ξg)‖_V = ‖L_ξ f(g)‖_V = ‖f(g)‖_V. Since Hilbert
spaces are determined only up to unitary equivalence, L_ξ f(g) and f(g) really are the same.
Note 3.17. Recall that given a measure space (X, M, µ) and a map f : X → Y,
define a linear functional Λ : C_c(Y) → C by
Λϕ := ∫_X ϕ(f(x)) dµ(x).
Λ is positive, hence by Riesz's theorem there exists a unique regular Borel measure
µ_f on Y so that
Λϕ = ∫_Y ϕ dµ_f = ∫_X ϕ(f(x)) dµ(x).
It follows that µ_f = µ ∘ f⁻¹.
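For a discrete measure, the identity ∫_Y ϕ dµ_f = ∫_X ϕ(f(x)) dµ(x) can be checked directly; a toy sketch (the measure, map, and test function are made up for illustration):

```python
# Pushforward measure µ_f = µ ∘ f⁻¹ for a discrete µ on X = {0, 1, 2}.
mu = {0: 0.25, 1: 0.25, 2: 0.5}       # a probability measure on X
f = lambda x: x % 2                    # f : X → Y = {0, 1}

mu_f = {}                              # µ_f({y}) = µ(f⁻¹({y}))
for x, m in mu.items():
    mu_f[f(x)] = mu_f.get(f(x), 0.0) + m

phi = lambda y: 3 * y + 1              # a test function on Y
lhs = sum(phi(y) * m for y, m in mu_f.items())   # ∫_Y φ dµ_f
rhs = sum(phi(f(x)) * m for x, m in mu.items())  # ∫_X φ(f(x)) dµ(x)
print(lhs == rhs)   # True
```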
Note 3.18. In the current setting, we have the covering map π : G → Γ\G =: M
and the right Haar measure µ on G, so we may define a measure µ ∘ π⁻¹.
However, given ϕ ∈ C_c(M), ϕ(π(x)) need not have compact support; equivalently,
π⁻¹(E) is Γ-periodic. For example, take G = R, Γ = Z, M = Z\R. Then
π⁻¹([0, 1/2)) is Z-periodic, which has infinite Lebesgue measure. What we really
need is a map so that the inverse image of a subset of M is restricted to a single Γ-period.
This is essentially what τ does: given τϕ ∈ C_c(M), take a preimage
ϕ ∈ C_c(G). Even if ϕ is not restricted to a single Γ-period, ϕ always has compact
support.
Hence we get a family of measures indexed by elements in F∗. Choosing
f, g ∈ F∗, we also get complex measures µ_{f,g} (via the polarization identity).
• Define ‖f‖² := µ_{f,f}(M), ⟨f, g⟩ := µ_{f,g}(M).
• Complete F∗ with respect to this norm to get the enlarged Hilbert space
H.
3.4. EXAMPLE - HEISENBERG GROUP 72
check:
U_g^{ind} P(ψ) f(x) = U_g^{ind} ψ(π(x)) f(x) = ψ(π(xg)) f(xg)
P(ψ(·g)) U_g^{ind} f(x) = P(ψ(·g)) f(xg) = ψ(π(xg)) f(xg)
Conversely, how does one recognize an induced representation?
Theorem 3.20. (Imprimitivity) Let G be a locally compact group with a closed
subgroup Γ, and let M = Γ\G. Suppose the system (U, P) satisfies the covariance
relation
U_g P(ψ) U_g⁻¹ = P(ψ(·g)).
Then there exists a unitary representation L ∈ Rep(Γ, V) such that U ≃ ind_Γ^G(L).
In the Heisenberg example, the computation goes as follows.
∫_G ‖f(g)‖²_V ϕ(g) dg = ∫_{G≃R³} |f(x,y,z)|² ϕ(x,y,z) dx dy dz
= ∫_{M≃R} ( ∫_{Γ≃R²} |f(x,y,z)|² ϕ(x,y,z) dy dz ) dx
= ∫_R |f(x,y,z)|² (τϕ)(π(g)) dx
= ∫_R |f(x,y,z)|² (τϕ)(x) dx
= ∫_R |f(x,0,0)|² (τϕ)(x) dx
where
(τϕ)(π(g)) = ∫_Γ ϕ(ξg) dξ
= ∫_{R²} ϕ( (0,b,c)(x,y,z) ) db dc
= ∫_{R²} ϕ( x, b+y, c+z ) db dc
= ∫_{R²} ϕ( x, b, c ) db dc
= ∫_{R²} ϕ( x, y, z ) dy dz
= (τϕ)(x).
Hence Λ : C_c(M) → C given by
Λ : τϕ ↦ ∫_G ‖f(g)‖²_V ϕ(g) dg
is a positive linear functional, therefore
Λ = µ_{f,f}
i.e.
∫_{R³} |f(x,y,z)|² ϕ(x,y,z) dx dy dz = ∫_R (τϕ)(x) dµ_{f,f}(x).
(4) Define
‖f‖²_ind := µ_{f,f}(M) = ∫_M |f|² dξ = ∫_R |f(x,0,0)|² dx
U_g^{ind} f(g′) := f(g′g).
By definition, if g = g(a,b,c) and g′ = g′(x,y,z), then
U_g^{ind} f(g′) = f(g′g)
= f( (x,y,z)(a,b,c) )
= f( x+a, y+b, z+c+xb )
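The Heisenberg group law used here, (x, y, z)(a, b, c) = (x+a, y+b, z+c+xb), is easy to sanity-check in code (the element values below are arbitrary):

```python
# Heisenberg group law: (x, y, z)(a, b, c) = (x + a, y + b, z + c + x*b).
def mul(g, h):
    x, y, z = g
    a, b, c = h
    return (x + a, y + b, z + c + x * b)

g, h, k = (1.0, 2.0, 3.0), (0.5, -1.0, 0.25), (2.0, 0.0, -1.0)
print(mul(mul(g, h), k) == mul(g, mul(h, k)))   # True: associativity
print(mul((1, 0, 0), (0, 1, 0)))                # (1, 1, 1)
print(mul((0, 1, 0), (1, 0, 0)))                # (1, 1, 0): noncommutative
```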
W is unitary:
‖Wf‖²_{L²} = ∫_R |Wf|² dx = ∫_R |f(x,0,0)|² dx = ∫_{Γ\G} |f|² dξ = ‖f‖²_ind
[ d/dx, ie^x ] = ie^x, i.e. [A, B] = B;
or
U_{(e^t, b)} f(x) = e^{ite^x} f(x + b)
3.5. COADJOINT ORBITS 76
The subgroup {(1, b)} = { ( 1 b ; 0 1 ) } carries the 1-d representation L = e^{ib}.
Inducing, ind^G L ≃ the Schrödinger representation.
3.4.2. ax + b group.
There is a famous cute little trick to make O_{n−1} into a subgroup of O_n (O_{n−1}
is not normal in O_n). We may split the quadratic form into
Σ_{i=1}^{n−1} x_i² + x_n²
where the last coordinate is treated separately. Then we may identify O_{n−1} with
a subgroup of O_n via
g ↦ ( g 0 ; 0 I )
where I is the identity operator (here 1 × 1).
Claim: O_n/O_{n−1} ≃ S^{n−1}. How to see this? Let u be the unit vector corre-
sponding to the last dimension, and look for g that fixes u, i.e. gu = u. Such g form a
subgroup of O_n, called the isotropy group:
I_n = {g : gu = u} ≃ O_{n−1}.
For any g ∈ O_n, what is gu? Notice that for all v ∈ S^{n−1}, there exists g ∈ O_n such that
gu = v. Hence
g ↦ gu
is onto S^{n−1}. The fibers of this map are the cosets of I_n ≃ O_{n−1}, thus
O_n/O_{n−1} ≃ S^{n−1}.
Such spaces are called homogeneous spaces.
Example 3.24. Visualize this with O₃ and O₂.
Other examples of homogeneous spaces show up in number theory all the time.
For example, the Poincaré group modulo a discrete subgroup.
Let G be a Lie group and N ⊂ G a normal subgroup. The map x ↦ gxg⁻¹ : G → G is an
automorphism sending identity to identity, hence if we differentiate it, we get a
transformation in GL(g), i.e. a family of maps Ad_g ∈ GL(g) indexed by elements of G.
g ↦ Ad_g ∈ GL(g) is a representation of G; differentiating it in turn, we get a
representation ad : g → End(g) of g acting on the vector space g.
Since gng⁻¹ ∈ N for all g, conjugation x ↦ gxg⁻¹ is a transformation from N to N;
define Ad_g(n) = gng⁻¹ and differentiate to get ad : n → n. n is a vector space, so it
has a dual, and a linear transformation on a vector space passes to the dual space.
ϕ*(v*)(u) = v*(ϕ(u))
⟺
⟨Λ*v*, u⟩ = ⟨v*, Λu⟩.
In order to make the transformation rules work out, we have to pass to the adjoint, i.e.
the dual space:
Ad*_g : n* → n*,
the coadjoint representation.
Orbits of the coadjoint representation correspond precisely to equivalence classes of
irreducible representations.
Example 3.25. Heisenberg group G = {(a, b, c)} with
(a, b, c) = ( 1 a c ; 0 1 b ; 0 0 1 )
i.e. get vertical lines indexed by the x-coordinate ξ. In this example, a cross section
is a subset of R2 that intersects each orbit at precisely one point. Every cross section
in this example is a Borel set in R2 .
We don’t always get measurable cross sections. An example is the construction
of non-measurable set as was given in Rudin’s book. Cross section is a Borel set
that intersects each coset at precisely one point.
Why does it give all the equivalence classes of irreducible representations? We
have a unitary representation L_n ∈ Rep(N, V), L_n : V → V, and by construction
of the induced representation U_g ∈ Rep(G, H), with N ⊂ G normal,
U_g L_n U_g⁻¹ = L_{gng⁻¹}
i.e.
L ≃ L(g · g⁻¹);
now pass to the Lie algebra and its dual:
L_n → L_A → L_{A*}.
to indicate that dU(X) is the directional derivative along the direction X. Notice
that H_X* = H_X but
(iH_X)* = −iH_X,
i.e. dU(X) is skew-adjoint.
Example 3.26. G = {(a, b, c)}, the Heisenberg group; g = {X₁ ∼ a, X₂ ∼ b, X₃ ∼
c}. Take the Schrödinger representation U_g f(x) = e^{ih(c+bx)} f(x + a), f ∈ L²(R).
• U(e^{tX₁})f(x) = f(x + t), so
d/dt |_{t=0} U(e^{tX₁})f(x) = (d/dx) f(x), dU(X₁) = d/dx
• U(e^{tX₂})f(x) = e^{ihtx} f(x), so
d/dt |_{t=0} U(e^{tX₂})f(x) = ihx f(x), dU(X₂) = ihx
• U(e^{tX₃})f(x) = e^{iht} f(x), so
d/dt |_{t=0} U(e^{tX₃})f(x) = ih f(x), dU(X₃) = ihI
Notice that the dU(X_i) are all skew-adjoint.
[dU(X₁), dU(X₂)] = [ d/dx, ihx ] = ih [ d/dx, x ] = ih
In case we want self-adjoint operators, replace dU(X_i) by −i dU(X_i) and
get
−i dU(X₁) = (1/i) d/dx
−i dU(X₂) = hx
−i dU(X₃) = hI
[ (1/i) d/dx, hx ] = h/i.
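The canonical commutation relation [d/dx, x] = I behind these formulas can be verified exactly on polynomials; a small sketch (polynomials as coefficient lists, lowest degree first):

```python
# Check [d/dx, x] = I on polynomials, represented as coefficient lists
# (index k holds the coefficient of x^k).
def D(p):                      # d/dx
    return [k * c for k, c in enumerate(p)][1:] or [0]

def X(p):                      # multiplication by x
    return [0] + p

def commutator(p):             # (D X - X D) p
    dx_p, xd_p = D(X(p)), X(D(p))
    n = max(len(dx_p), len(xd_p))
    dx_p += [0] * (n - len(dx_p))
    xd_p += [0] * (n - len(xd_p))
    return [a - b for a, b in zip(dx_p, xd_p)]

p = [5, 0, -3, 2]              # 5 - 3x² + 2x³
print(commutator(p))           # → [5, 0, -3, 2], i.e. p itself
```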
What is the space of functions that U_g acts on? L. Gaarding /gor-ding/ (Swedish
mathematician) looked for one space that always works. It's now called the
Gaarding space.
Start with C_c(G); every ϕ ∈ C_c(G) can be approximated by the so-called
Gaarding functions, using a convolution argument. Define convolution as
(ϕ ⋆ ψ)(g) = ∫ ϕ(gh)ψ(h) d_R h
(ϕ ⋆ ψ)(g) = ∫ ϕ(h)ψ(g⁻¹h) d_L h
3.6. GAARDING SPACE 81
Since ϕ vanishes outside a compact set, and since U (h)v is continuous and bounded
in k·k, it follows that U (ϕ) is well-defined.
Lemma 3.27. U (ϕ1 ⋆ ϕ2 ) = U (ϕ1 )U (ϕ2 ) (U is a representation of the group
algebra)
Proof. Use Fubini:
∫ (ϕ₁ ⋆ ϕ₂)(g) U(g) dg = ∬ ϕ₁(h)ϕ₂(h⁻¹g) U(g) dh dg
= ∬ ϕ₁(h)ϕ₂(g) U(hg) dh dg    (substituting g ↦ hg)
= ∬ ϕ₁(h)ϕ₂(g) U(h)U(g) dh dg
= ( ∫ ϕ₁(h)U(h) dh )( ∫ ϕ₂(g)U(g) dg ).
Set g = e^{tX}.
Note 3.29. If we assume unimodularity, △ does not show up. Otherwise, △ is
a correction term which is also differentiable. X̃ acts on ϕ as X̃ϕ; X̃ is called
the derivative of the translation operator e^{tX}.
Note 3.30. The Schwartz space is the Gaarding space for the Schrodinger repre-
sentation.
In this case, Segal's theorem gives the finite Fourier transform U : l²(Z_N) → l²(Ẑ_N), where
U f(l) = (1/√N) Σ_k ζ^{kl} f(k), ζ = e^{i2π/N}.
A theorem by Iwasawa states that a simple matrix group (Lie group) can be
decomposed into
G = KAN
where K is compact, A is abelian and N is nilpotent. For example, in the SL₂
case,
SL₂(R) = { ( cos t  −sin t ; sin t  cos t ) } { ( e^s 0 ; 0 e^{−s} ) } { ( 1 u ; 0 1 ) }.
The simple groups do not have normal subgroups. The representations are much
more difficult.
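For SL₂(R) the Iwasawa factors can be computed via a QR decomposition; a numeric sketch (the matrix g with det g = 1 is an arbitrary example, and numpy is used for the linear algebra):

```python
import numpy as np

# Iwasawa G = KAN for SL2(R) via QR: g = Q R with Q a rotation and R upper
# triangular with positive diagonal; then R = A N with A = diag(e^s, e^-s)
# and N = [[1, u], [0, 1]].
g = np.array([[2.0, 1.0], [3.0, 2.0]])         # det g = 1
Q, R = np.linalg.qr(g)
D = np.diag(np.sign(np.diag(R)))               # fix signs so diag(R) > 0
Q, R = Q @ D, D @ R

A = np.diag(np.diag(R))                        # abelian factor
N = np.linalg.inv(A) @ R                       # unipotent factor
print(np.allclose(Q @ A @ N, g))               # True: g = KAN
print(A[0, 0] * A[1, 1])                       # ≈ 1, so A = diag(e^s, e^-s)
```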
3.8.1. Induced representation. Suppose from now on that G has a normal
abelian subgroup N ⊳ G, with G = H ⋉ N. Then N ≃ R^d and N* ≃ (R^d)* = R^d. In
this case
χ_t(ν) = e^{itν}
for ν ∈ N and t ∈ N̂ = N*. Notice that χ_t is a 1-d irreducible representation on C.
Let H_t be the space of functions f : G → C so that
f(νg) = χ_t(ν)f(g).
On H_t, define the inner product so that
‖f‖²_{H_t} := ∫_{G/N} |f(g)|² dm.
We get the intertwining diagram: ind_{χ_t}^G acts on H_t, U_t acts on L²(H), and
W_t : H_t → L²(H) satisfies W_t ∘ ind_{χ_t}^G(g) = U_t(g) ∘ W_t.
3.8. SUMMARY OF INDUCED REPREP, d/dx EXAMPLE 86
Let f ∈ L²(H). Then
U_t(g)f(h) = W_t ind_{χ_t}^G(g) W_t* f(h)
= ( ind_{χ_t}^G(g) W_t* f )(h)
= (W_t* f)(hg).
Since G = H ⋉ N, g is uniquely decomposed into g = g_N g_H. Hence hg = h g_N g_H =
g_N (g_N⁻¹ h g_N) g_H = g_N h̃ g_H, and
U_t(g)f(h) = (W_t* f)(hg)
= (W_t* f)(g_N h̃ g_H)
= χ_t(g_N)(W_t* f)(h̃ g_H)
= χ_t(g_N)(W_t* f)(g_N⁻¹ h g_N g_H).
This last formula is called the Mackey machine.
The Mackey machine does not cover many important symmetry groups in
physics; actually most of these are simple groups. However, it can still be ap-
plied. For example, in special relativity theory we have the Poincaré group L ⋉ R⁴,
where R⁴ is the normal subgroup. The baby version of this is when L = SL₂(R).
V. Bargmann formulated this baby version; Wigner pioneered the Mackey machine,
long before Mackey was around.
Once we get unitary representations, we differentiate them and get a self-adjoint
algebra of operators (possibly unbounded). These are the observables in quantum
mechanics.
Example 3.41. Z ⊂ R, Ẑ = T, χ_t ∈ T, χ_t(n) = e^{itn}. Let H_t be the space of
functions f : R → C so that
f(n + x) = χ_t(n)f(x) = e^{int} f(x).
Define the inner product on H_t so that
‖f‖²_{H_t} := ∫₀¹ |f(x)|² dx.
Define ind_{χ_t}^R(y)f(x) = f(x + y). Claim: H_t ≃ L²[0,1]. The unitary transfor-
mation is given by W_t : H_t → L²[0,1],
(W_t f)(x) = f(x), x ∈ [0,1].
Let's see what ind_{χ_t}^R(y) looks like on L²[0,1]. For any f ∈ L²[0,1],
(W_t ind_{χ_t}^R(y) W_t* f)(x) = ( ind_{χ_t}^R(y) W_t* f )(x) = (W_t* f)(x + y).
Since y ∈ R is uniquely decomposed as y = n + x′ with n ∈ Z and x′ ∈ [0,1), therefore
(W_t ind_{χ_t}^R(y) W_t* f)(x) = (W_t* f)(x + y)
= (W_t* f)(x + n + x′)
= (W_t* f)(n + (−n + x + n) + x′)
= χ_t(n)(W_t* f)((−n + x + n) + x′)
= χ_t(n)(W_t* f)(x + x′)
= e^{itn}(W_t* f)(x + x′)
3.9. CONNECTION TO NELSON’S SPECTRAL THEORY 87
Note 3.42. Are there any functions in H_t? Yes, for example f(x) = e^{itx}. If
f ∈ H_t, then |f| is 1-periodic, so f is really a function defined on Z\R ≃ [0,1].
Such a function has the form
f(x) = ( Σ_n c_n e^{i2πnx} ) e^{itx} = Σ_n c_n e^{i(2πn+t)x}.
Any 1-periodic function g satisfies the boundary condition g(0) = g(1); f ∈ H_t satisfies
a modified boundary condition where f(1) = e^{it} f(0).
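The transformation law f(m + x) = e^{itm} f(x) for the basis functions f(x) = e^{i(2πn+t)x} can be checked directly; a tiny sketch (the values of t, n, m, x are arbitrary):

```python
import cmath, math

# f(x) = e^{i(2πn+t)x} lies in H_t: f(m + x) = e^{itm} f(x) for m ∈ Z.
t, n = 0.9, 2
f = lambda x: cmath.exp(1j * (2 * math.pi * n + t) * x)

x, m = 0.31, 5
print(abs(f(m + x) - cmath.exp(1j * t * m) * f(x)))   # ≈ 0
```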
Example 3.43. G = (R, +), group algebra L¹(R). Define the Fourier transform
f̂(t) = ∫ f(x)e^{−itx} dx.
It follows that
f̂(ρ) := ρ̃(f), (f ⋆ g)^ = f̂ ĝ,
i.e. the Fourier transform of f ∈ L¹(R) is a representation of the group algebra L¹(R)
onto the 1-dimensional Hilbert space C. The range of the Fourier transform in this
case is a 1-d abelian algebra of multiplication operators: multiplication by complex
numbers.
Example 3.45. H = L²(R), ρ ∈ Rep(G, H) so that
ρ(y)f(x) := f(x + y),
i.e. ρ is the right regular representation. The representation space H in this
case is infinite-dimensional. From ρ, we get a group algebra representation ρ̃ ∈
Rep(L¹(R), H), where
ρ̃(f) = ∫ f(y)ρ(y) dy.
Define
f̂(ρ) := ρ̃(f);
then f̂(ρ) is an operator acting on H:
f̂(ρ)g = ρ̃(f)g = ∫ f(y)ρ(y)g(·) dy
= ∫ f(y)(R_y g)(·) dy
= ∫ f(y)g(· + y) dy.
If we had used the left regular representation instead of the right, then
f̂(ρ)g = ρ̃(f)g = ∫ f(y)ρ(y)g(·) dy
= ∫ f(y)(L_y g)(·) dy
= ∫ f(y)g(· − y) dy.
Back to the general case. Given a locally compact group G, form the group
algebra L¹(G), and define the left and right convolutions as
(ϕ ⋆ ψ)(x) = ∫ ϕ(g)ψ(g⁻¹x) d_L g = ∫ ϕ(g)(L_g ψ) d_L g
(ϕ ⋆ ψ)(x) = ∫ ϕ(xg)ψ(g) d_R g = ∫ (R_g ϕ)ψ(g) d_R g
and write
ψ̂(ρ) := ρ̃(ψ).
Then
ρ̃(ψ)ϕ = ∫_G ψ(g)ρ(g)ϕ dg = ∫_G ψ(g)(R_g ϕ) dg
= ∫_G ψ(g)ϕ(xg) dg
= (ϕ ⋆ ψ)(x)
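For the discrete case G = Z with counting measure, the identity ρ̃(ψ)ϕ = ϕ ⋆ ψ can be checked with finitely supported functions; a sketch (the functions ϕ, ψ below are arbitrary):

```python
# G = Z: right regular representation (R_g φ)(x) = φ(x + g) on finitely
# supported functions (dicts g ↦ value), and ρ̃(ψ) = Σ_g ψ(g) R_g.
def R(g, phi):                        # (R_g φ)(x) = φ(x + g)
    return {k - g: v for k, v in phi.items()}

def rho_tilde(psi, phi):              # ρ̃(ψ)φ = Σ_g ψ(g) R_g φ
    out = {}
    for g, c in psi.items():
        for k, v in R(g, phi).items():
            out[k] = out.get(k, 0.0) + c * v
    return {k: v for k, v in out.items() if v}

def conv(phi, psi):                   # (φ ⋆ ψ)(x) = Σ_g φ(x + g) ψ(g)
    out = {}
    for g, c in psi.items():
        for k, v in phi.items():      # φ(x + g) = v when x = k - g
            out[k - g] = out.get(k - g, 0.0) + v * c
    return {k: v for k, v in out.items() if v}

phi = {0: 1.0, 1: 2.0, 3: -1.0}
psi = {-1: 0.5, 2: 1.5}
print(rho_tilde(psi, phi) == conv(phi, psi))   # True
```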
ρ_h ∈ Rep(G, L²(R)),
where H is the normal subgroup {(b, c)}. It is not so nice to work with ind_H^G(χ_h)
directly, so instead we work with an equivalent representation, the Schrodinger
representation. See Folland's book on abstract harmonic analysis.
ψ̂(h) = ∫_G ψ(g)ρ_h(g) dg
Here the ψ̂ on the right-hand side is the Fourier transform of ψ in the usual sense.
Therefore the operator ψ̂(h) is the one so that
L²(R) ∋ f ↦ ψ̂(·, h·, h) ⋆ f(x).
where
v = ∫ ϕ(g)ρ(g)w dg = ρ(ϕ)w   (generalized convolution).
If ρ = R, then
v = ∫ ϕ(g)R(g)w dg,
X̃(ϕ ⋆ w) = (Xϕ) ⋆ w.
Example 3.47. H₃:
a ↦ ∂/∂a
b ↦ ∂/∂b
c ↦ ∂/∂c
gives the standard Laplace operator. {ρ_h(ϕ)w} ⊂ L²(R); the equality "=" is due to
Dixmier. {ρ_h(ϕ)w} is the Schwartz space.
(d/dx)² + (ihx)² + (ih)² = (d/dx)² − (hx)² − h²
Notice that
−(d/dx)² + (hx)² + h²
is the harmonic oscillator. Spectrum = hZ₊.
CHAPTER 4
Unbounded Operators
4.1.1. Domain.
Example 4.1. The operators d/dx and M_x in quantum mechanics, acting on L²
with dense domain the Schwartz space.
An alternative way to get a dense domain, a way that works for all representa-
tions, is to use the Gaarding space, or C^∞ vectors. Let u ∈ H and define
u_ϕ := ∫ ϕ(g)U_g u dg.
Note 4.4. Notice that not only are the vectors u_ϕ dense in H; their derivatives are also
dense in H.
4.2. SELF-ADJOINT EXTENSIONS 93
d₊ := dim(D₊)
d₋ := dim(D₋)
Von Neumann's notion of deficiency spaces expresses the extent to which A*
is bigger than A. One wouldn't expect to get a complex eigenvalue for a self-
adjoint operator. We understand A not being self-adjoint by looking at its "wrong"
eigenvalues. This reveals that A* is defined on a bigger domain; the extent to which
A* is defined on a bigger domain is reflected in the "wrong" eigenvalues.
D₊ = R(A + i)^⊥, D₋ = R(A − i)^⊥
Definition 4.8. Define the Cayley transform
C_A : R(A + i) → R(A − i)
(A + i)x ↦ (A − i)x
i.e.
C_A = (A − i)(A + i)⁻¹ = "(A − i)/(A + i)".
The inverse map is given by
A = i(1 + C_A)(1 − C_A)⁻¹ = "i(1 + C_A)/(1 − C_A)".
Lemma 4.9. CA is a partial isometry.
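A finite-dimensional sketch of the Cayley transform (the symmetric matrix A is an arbitrary example; in finite dimensions the deficiency indices vanish and C_A is fully unitary):

```python
import numpy as np

# Cayley transform of a self-adjoint (here: real symmetric) matrix A:
# C_A = (A - iI)(A + iI)^{-1} is unitary, and A is recovered by the
# inverse transform A = i(I + C_A)(I - C_A)^{-1}.
A = np.array([[2.0, 1.0], [1.0, -1.0]])
I = np.eye(2)
C = (A - 1j * I) @ np.linalg.inv(A + 1j * I)

print(np.allclose(C.conj().T @ C, I))          # True: C_A is unitary
A_back = 1j * (I + C) @ np.linalg.inv(I - C)   # inverse Cayley transform
print(np.allclose(A_back, A))                  # True: recovers A
```

Note that I − C is invertible here because the eigenvalues of C are (λ − i)/(λ + i) with λ real, which are never 1.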
semi-direct product
Let G = HK with H ∩ K = 1.
Theorem 4.18. H, K commute ⟺ G ≃ H × K.