Solutions
using that the inner product is anti-linear in the first variable. To prove (b) we write
(x, (AB)(y)) = (x, A(B(y))) = (A†(x), B(y)) = (B†(A†(x)), y) = ((B†A†)(x), y) .
This proves both parts.
Exercise 1.4.1 (a) For example, you may deduce from the fact that ∫ dx δ(x)ξ(x) = ξ(0) that the δ function has integral 1. (Of course this makes no mathematical sense, but neither does the statement you try to “prove”.)
(b) One should have δ(0) = ∞, but, as my kindergarten teacher said, “infinity is
not a number.”
(c) You may argue that δ′(x) = 0 if x ≠ 0, but what is the value of δ′(0)?
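For reference, the distributional identities underlying (a)–(c) can be written formally as follows; this is a standard sketch of the definitions, not part of the printed solution:

```latex
% Defining property of \delta against a test function \xi:
\int dx\,\delta(x)\,\xi(x) = \xi(0),
\qquad\text{so formally } \int dx\,\delta(x) = 1 \text{ (take } \xi \equiv 1\text{)}.
% The derivative \delta' is defined through integration by parts,
\int dx\,\delta'(x)\,\xi(x) = -\xi'(0),
% which assigns no pointwise value to \delta'(0).
```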
Exercise 1.4.2 Considering for each x the continuous function η_x(y) = ξ(x, y), one has
∫∫ dx dy ξ(x, y)δ(x − y) = ∫ dx ∫ dy η_x(y)δ(x − y) = ∫ dx η_x(x) = ∫ dx ξ(x, x) .
i.e. (g, A(f )) = (A(g), f ) + ig(1)∗ f (1) − ig(0)∗ f (0). Thus A is not symmetric on the
domain D but it is symmetric on the domain Dα .
Exercise 2.5.7 Since A is symmetric, D(A) ⊂ D(A†). This implies that D((A†)†) ⊂ D(A†). But since A† is symmetric we have D(A†) ⊂ D((A†)†), so that D(A†) = D((A†)†) and A† is self-adjoint.
Let us recall the following simple fact of functional analysis, which is useful to
solve the next exercises: in a Hilbert space,
sup_{‖y‖≤1} |(x, y)| = ‖x‖ ,
as follows from the choice (when x ≠ 0) of y = x/‖x‖. In particular if |(x, y)| ≤ C‖y‖ then ‖x‖ ≤ C.
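The stated fact is a direct consequence of the Cauchy–Schwarz inequality; a one-line sketch (for x ≠ 0):

```latex
|(x,y)| \le \|x\|\,\|y\| \le \|x\| \quad\text{when } \|y\| \le 1,
\qquad\text{and}\qquad
\Big(x, \frac{x}{\|x\|}\Big) = \|x\|,
```

so the supremum is attained.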
Exercise 2.5.8 (a) It is straightforward that D(A) is a dense subspace on which
A makes sense.
(b) For x, y ∈ D(A) we have (x, A(y)) = Σ_{n≥0} x_n* λ_n y_n whereas (A(x), y) = Σ_{n≥0} λ_n* x_n* y_n, from which the result follows.
(c) If x ∈ D(A†) then for all y in D(A),
|(x, A(y))| = |Σ_{n≥0} x_n* λ_n y_n| = |(z, y)| ≤ C‖y‖
where z = (λ_n* x_n)_{n≥0}. Thus ‖z‖² = Σ_{n≥0} |λ_n x_n|² ≤ C² and x ∈ D(A). Further-
This makes it obvious that D(B) ⊂ D(A†). When y ∈ D(A†) we have |(y, A(x))| ≤ C‖x‖, and thus Σ_{n≥0} ‖B_n(y_{n+1})‖² ≤ C², so that y ∈ D(B) and B = A†. The proof
the corresponding Fourier series converges absolutely and its sum is continuous.
Exercise 2.5.16 The operator T is bounded so that its graph is closed. The graph
of its restriction to L₀ is closed so that the graph of A is closed. For f ∈ L² and g ∈ L₀, integration by parts and approximation by smooth functions show that (T(f), g) = −(f, T(g)). This proves that A is symmetric. Moreover A is not self-adjoint because the previous formula implies that T(L²) is contained in the domain of A† whereas T(L²) is larger than T(L₀).
Exercise 2.5.17 Denoting by A the “multiplication by x operator”, the domain of
A† consists of the functions ψ such that |(ψ, A(ϕ))| ≤ C‖ϕ‖₂, i.e.
|∫ dx (xψ(x)*)ϕ(x)| ≤ C‖ϕ‖₂ ,
so that xψ ∈ L².
Exercise 2.5.18 A function ϕ is an eigenvector of eigenvalue a for the multiplication by x operator if aϕ = xϕ in L²(R, dµ, C), i.e. (x − a)ϕ = 0 µ-a.e. Thus ϕ = 0 µ-a.e. on R \ {a} and since ϕ is not zero we must have µ({a}) ≠ 0.
Exercise 2.5.19 As in the previous exercise, the space of eigenvectors with eigen-
value a identifies to the space of functions ϕ(x, y) which are zero for x 6= a, i.e.
with L2 (dµ0 ) where dµ0 is the restriction of dµ to {a} × R. It then suffices to prove
that for a measure dν on R which gives finite measure to bounded sets the space
L2 (dν) has dimension n if and only if dν is carried by n points but not by n − 1
points. If L2 (dν) is finite-dimensional, dν cannot have a continuous part, and can
charge only finitely many points. It is straightforward to check that its dimension
is exactly the number n of points it charges.
Exercise 2.5.20 We write, using mathematical notation for clarity,
⟨γ|α⟩ = (|γ⟩, |α⟩) = (|α⟩, |γ⟩)* = (|α⟩, A|β⟩)* = (A†(|α⟩), |β⟩)* = (|β⟩, A†(|α⟩)) ,
multiple of δ_p. It has to correspond precisely to 2πħδ_p since for the natural measure on momentum space this function is of integral 1, which is required to make formulas such as the equivalent of (2.26) work. Another approach is that by analogy with (2.26), if the test function ξ is seen as an element |ξ⟩ of position state space, one should have |ξ⟩ = ∫ (dp/(2πħ)) ξ(p)|p⟩. Integration in p of the relation ⟨p|p′⟩ξ(p) = 2πħ δ(p − p′)ξ(p) then yields as required ⟨ξ|p′⟩ = ξ(p′).
Exercise 2.9.1 This might be obvious at the formal level, but if you try to think
about it, you will need to write at least a line.
Exercise 2.10.2 On the one hand
U (abc) = U ((ab)c) = r(ab, c)U (ab)U (c) = r(ab, c)r(a, b)U (a)U (b)U (c)
and on the other hand
U (abc) = U (a(bc)) = r(a, bc)U (a)U (bc) = r(a, bc)r(b, c)U (a)U (b)U (c) .
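Comparing the two expressions for U(abc), and cancelling the common factor U(a)U(b)U(c), one is led to the cocycle identity; a sketch of the conclusion:

```latex
r(ab,c)\,r(a,b) = r(a,bc)\,r(b,c).
```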
Exercise 2.14.1 Write V̂(t)(f)(p) = ∫ dx exp(−ixp/ħ)f(x + t) and make the change of variables y = x + t.
Exercise 2.14.3 (a) The only point which is not obvious is strong continuity. If h is uniformly bounded by B on the support of f, one simply uses that ‖(exp(ith/ħ) − 1)f‖₂ ≤ B|t|‖f‖₂/ħ, so that lim_{t→0} ‖(exp(ith/ħ) − 1)f‖₂ = 0, and the result follows since the set of f where this limit is 0 is closed in norm. (b) If f belongs to the
domain of A, the function θ(f, t) := (U(t)(f) − U(0)(f))/(tħ) remains bounded in L² as t → 0. Since this function converges pointwise to hf we have hf ∈ L². Conversely if hf ∈ L², then ‖θ(f, t)‖₂ ≤ ‖hf‖₂/ħ, so that by approximation one reduces to the case where h is bounded on the support of f to prove that f ∈ D
and A(f ) = hf . (c) is a consequence of (a) and (b).
Exercise 2.17.1 One has to show first that |(y, a(x))| ≤ C‖x‖ for x ∈ D if and only if y ∈ D. The “if” part is easy and the “only if” part is done by taking x_{n+1} = √n y_n for n ≤ k and x_{n+1} = 0 otherwise, so that
(y, a(x)) = Σ_{n≤k} √(n(n+1)) |y_n|² ≤ C‖x‖ = C √( Σ_{n≤k} n|y_n|² )
|P_n(x) − exp(itx)| = |Σ_{k≥n+1} (itx)^k/k!| ≤ exp |tx|. Since |P_n(x) − exp(itx)| goes
Exercise 3.1.1 dim H1 ⊗H2 = dim H1 ×dim H2 whereas dim(H1 ×H2 ) = dim H1 +
dim H2 .
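A concrete low-dimensional check of this count (an illustration, not in the printed text):

```latex
\dim(\mathbb{C}^2 \otimes \mathbb{C}^3) = 2 \cdot 3 = 6,
\qquad
\dim(\mathbb{C}^2 \times \mathbb{C}^3) = 2 + 3 = 5.
```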
Exercise 3.1.2 Show first that to prove that a unitary group U(t) is strongly continuous it suffices to prove that t ↦ U(t)(y) is continuous at t = 0 for each y in a set large enough that its closed linear span is H. Taking n = 2, it suffices then to consider the case where y = x₁ ⊗ x₂. We write
Exercise 3.2.2 The one thing which is not completely obvious is that these ele-
ments span H2,s . To see this, we recall that the elements of the type x ⊗ y span
H ⊗ H. Thus every element z of H ⊗ H may be approximated by a linear combina-
tion of the x ⊗ y. If z is moreover symmetric, it is also approximated by the same
linear combination of the y ⊗ x, and hence by a linear combination of tensors of the
form x ⊗ y + y ⊗ x, and the conclusion should then be obvious.
Exercise 3.2.3 Orthonormality of this sequence follows easily from (3.8) and the
definition of inner product on Hn,s , see (3.2). To show that it indeed spans the whole
space one can simply use the same arguments as in the solution to Exercise 3.2.2.
Exercise 3.3.1 We have to prove that for α ∈ H_{n,s} and β ∈ H_{n+1,s} it holds that (A†(γ)(α), β) = (α, A(γ)(β)). It suffices to prove these formulas when γ, α, β are basis elements, γ = e_k, α = |n₁, n₂, …, n_k, …⟩, β = |n₁′, n₂′, …, n_k′, …⟩. In that case (A†(γ)(α), β) = (α, A(γ)(β)) = 0 unless n_j′ = n_j for j ≠ k and n_k′ = n_k + 1, in which case (A†(γ)(α), β) = (α, A(γ)(β)) = √(n_k + 1).
induction over n, this reduces to showing that a function of L2 (Rn ) which is or-
thogonal to every function of the type u(xn )v(x1 , . . . , xn−1 ) is zero, an easy exercise
of measure theory. Recall that elements of the type (3.21) span Hn,s (this can be
proved as Exercise 3.2.3). Thus it suffices to prove these formulas when f is of this
type, and then they quickly reduce to (3.22) and (3.23). The last statement follows
by combining (3.24) and (3.25).
Exercise 3.4.3 The domains of A(ξ) and A†(η) are respectively given by
D(A(ξ)) = { (α(n))_{n≥0} ∈ B ; Σ_{n≥0} ‖A_n(ξ)(α(n))‖² < ∞ }
D(A†(η)) = { (α(n))_{n≥0} ∈ B ; Σ_{n≥0} ‖A_n†(η)(α(n))‖² < ∞ } ,
and the operators A(γ) and A† (γ) are adjoint to each other.
Exercise 3.4.4 This requires simply care and patience. For example, for α ∈ H_{n,s},
A(ξ)A†(η)(α)_{i₁,…,i_n} = Σ_{i≥1, ℓ≤n} ξ_i* η_{i_ℓ} α_{i₁,…,î_ℓ,…,i_n,i} + Σ_{i≥1} α_{i₁,…,i_n} ξ_i* η_i ,
whereas the computation of A†(η)A(ξ)(α)_{i₁,…,i_n} yields only the first summation on the right, and this proves (3.33).
Exercise 3.6.1 One just performs the same computation inside every eigenspace.
Exercise 3.7.1 Consider the case where ξ(x, y) = h(x)g(y) where h, g ∈ S. Then
S = A† (h)A(g ∗ ) and the desired formula is the last assertion of Exercise 3.3.3.
Approximating ξ ∈ S2 by a sum of functions of the preceding type concludes the
argument.
so that
a†(y)a†(x)a(y)a(x)(f)(x₁, …, x_n) = Σ_{ℓ≤n} δ_y(x_ℓ) g(x₁, …, x̂_ℓ, …, x_n)
= Σ_{ℓ≤n} δ_y(x_ℓ) ( δ_x(x₁) + … + δ̂_x(x_ℓ) + … + δ_x(x_n) ) f(x₁, …, x̂_ℓ, …, x_n, y)
= Σ_{ℓ≠k} δ_y(x_ℓ)δ_x(x_k) f(x₁, …, x̂_ℓ, …, x_n, y) = Σ_{k≠ℓ} δ_y(x_ℓ)δ_x(x_k) f(x₁, …, x_n) .
One then finishes with the relation ∫ dx dy V(x, y) δ_y(x_ℓ)δ_x(x_k) = V(x_k, x_ℓ).
Second solution. This solution is more formal. Using the relation a†(y)a(x) = a(x)a†(y) − δ(y − x)1 and assuming that V(x, y) = V₁(x)V₂(y),
H_V = ∫ dx V₁(x)a†(x)a(x) ∫ dy V₂(y)a†(y)a(y) − ∫ dx V(x, x)a†(x)a(x) .
One then uses the formula (3.53). The case of a general function V (x, y) is recovered
by approximation.
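The reordering behind the second solution can be sketched as a formal operator identity, using the canonical commutation relation [a(x), a†(y)] = δ(x − y)1 (a heuristic computation, in the same spirit as the text):

```latex
a^\dagger(x)\,a^\dagger(y)\,a(y)\,a(x)
= a^\dagger(x)a(x)\;a^\dagger(y)a(y) \;-\; \delta(x-y)\,a^\dagger(x)a(x).
```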
Exercise 3.7.3 The sensible way is to require that when V(x, y) = ξ(x)η(y) this is A†(ξ)A(η*). For f ∈ H_{n,s}, ∫∫ dx dy V(x, y)a†(x)a(y)f is then the symmetrization of the function ∫ dy V(x₁, y)f(y, x₂, …, x_n).
Exercise 3.7.4 As explained just before, the first term is the sum of the kinetic
energy of the individual particles, and the second term means that any two different
particles located at x and y interact with a potential V (x, y).
Exercise 3.8.1 Recalling that (ξ, η) = [A₀(ξ), A₀†(η)], one compares the formulas
(ξ, η) = ∫∫ (d³p/(2πħ)³) (d³p′/(2πħ)³) ξ(p)* η(p′) (2πħ)³ δ⁽³⁾(p − p′)
and
[A₀(ξ), A₀†(η)] = [ ∫ (d³p/(2πħ)³) ξ(p)* a(p), ∫ (d³p′/(2πħ)³) η(p′)a†(p′) ]
= ∫∫ (d³p/(2πħ)³) (d³p′/(2πħ)³) ξ(p)* η(p′) [a(p), a†(p′)] . (P.8)
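Since the two displayed integrals must agree for all test functions ξ and η, one formally reads off the commutation relation (the standard conclusion of this comparison):

```latex
[a(p), a^\dagger(p')] = (2\pi\hbar)^3\,\delta^{(3)}(p - p')\,1.
```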
Exercise 3.8.2 The quantity b† (p)|0i represents the ideal case of a single particle
state having exactly momentum p, whereas b† (p)b† (p0 )|0i represents (the ideal case
of) a two-particle state, the particles having momenta p and p0 .
Exercise 3.8.3 The meaning of (3.60) is that we have
[ ∫ (d³p/(2πħ)³) ξ(p)b(p), ∫ (d³p/(2πħ)³) ξ′(p)b†(p) ] = ∫ (d³p/(2πħ)³) ξ(p)* ξ′(p) .
The point is that the factor d³p/(2πħ)³ has dimension [l⁻³] because ħ has the dimension of an action, the product of a momentum by a length, and the natural way to make the dimensions equal on both sides of the previous equality is to think of b(p) and b†(p) as having dimension [l^{3/2}].
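The dimensional bookkeeping can be sketched as follows, with [p] the dimension of a momentum and [l] that of a length:

```latex
\Big[\frac{d^3p}{(2\pi\hbar)^3}\Big] = \frac{[p]^3}{([p][l])^3} = [l^{-3}],
\qquad
[l^{-3}]^2\,[b(p)]\,[b^\dagger(p')] = [l^{-3}]
\;\Longrightarrow\;
[b(p)] = [l^{3/2}].
```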
Exercise 3.9.1 Since (f_n) is an orthonormal basis, any square-integrable function g has an expansion g = Σ_{n≥1} a_n f_n with a_n = ∫ f_n(x)* g(x) dx. Thus, formally
∫ dx ( Σ_{n≥1} f_n(x)* f_n(y) ) g(x) = Σ_{n≥1} f_n(y) ∫ dx f_n(x)* g(x) = Σ_{n≥1} a_n f_n(y) = g(y) ,
and using (P.7) this is what one wanted to prove. Considering h = Σ_{n≥1} b_n f_n you may also write
lim_{k→∞} ∫∫ dx dy ( Σ_{n≤k} f_n(x)* f_n(y) ) g(x) h(y)* = Σ_{n≥1} b_n* a_n = ∫ dx h(x)* g(x) ,
and this means that, as a distribution of two variables, Σ_{n≥1} f_n(x)* f_n(y) is well-defined.
that
Σ_k α_k (c(k) + c†(k))(e_∅) = Σ_k α_k e_k ,
but the right-hand side is not defined when Σ_{k≥1} |α_k|² = ∞. On the other hand,
Since
(∂²/∂t²)(c(k, t) + c†(k, t)) = −ω_k² (c(k, t) + c†(k, t)) ,
the required identity
∫ dx ( (∂²/∂t²)ϕ(x, t) − α² (∂²/∂x²)ϕ(x, t) + βϕ(x, t) ) ξ(x) = 0
then simply results from (3.78), (P.9), (P.10) and the relation ω_k² = α²k²/ħ² + β.
Exercise 3.10.4 We write
2a(k)a† (−k) = (c(k) + ic(−k))(c† (k) − ic† (−k))
so that
∫ dλ_m(p′) η(p′)ξ(p′) = ∫ (d³p′/((2πħ)³ 2ω_{p′})) √(2ω_p 2ω_{p′}) δ_p⁽³⁾(p′) ξ(p′) = ξ(p) ,
Exercise 4.10.3 For example the heuristic formula ⟨p′|ξ⟩ = ξ(p′) is equivalent to
∫ dλ_m(p) ξ(p) ⟨p′|p⟩ = ∫ dλ_m(p) ξ(p) δ_{m,p′}(p) .
Exercise 5.1.2 This is quite obvious. Formally, taking the adjoint of (5.1) gives
√c ϕ(f)† = A†(f̂) + A(\widehat{f^*}) = √c ϕ(f*).
Exercise 5.1.8 We prove by induction over n that Tⁿ(f)(u) = Σ_{0≤k≤n} g_{n,k}(u) f⁽ᵏ⁾(u) where g_{n,k}(u) = P_{n,k}(sinh u, cosh u)/coshⁿ⁺¹u, where P_{n,k} is a polynomial in two variables of degree n. In particular the functions |g_{n,k}| are integrable. This proves (a). For (b) it suffices (appealing to dominated convergence) to show that the functions (1 − f_{ε,r}) and f_{ε,r}⁽ᵏ⁾ for k = 1, 2, 3 are bounded uniformly over r and converge pointwise to zero as ε → 0. This is done by elementary bounds. For example, |f_{ε,r}′(u)| ≤ a exp(−a) where a = ε√(r² + m²c²) cosh(u + τ).
Exercise 5.1.9 We denote by d, d′, … numerical constants (which need not be the same at each occurrence). We set ω_r = √(r² + m²c²). Assuming again x to be in the z direction we now integrate in spherical coordinates to obtain
I_ε(x) = d ∫₀^∞ dr (r/(|x|ω_r)) exp(−iω_r x⁰/ħ − εω_r) sin(|x|r/ħ)
= (d/2) ∫ dr (r/(|x|ω_r)) exp(−iω_r x⁰/ħ − εω_r) sin(|x|r/ħ)
= d′ (1/(x⁰ + d″ε)) ∫ dr exp(−iω_r x⁰/ħ − εω_r) cos(|x|r/ħ) , (P.12)
where in the third equality we have integrated by parts. Further integration by parts shows that the limit
I(x) = d ∫ dr exp(−iω_r x⁰/ħ) cos(|x|r/ħ)
Exercise 5.4.1 Let us define the function η on R³ by η(p) = ξ(ω_p, p)/√(2ω_p), so
(c) To understand this difference meditate on the term a(C(p)) (as opposed to a(C⁻¹(p))). Here you get a rule to transform the operator a(p) whereas (4.61) is a rule to transform the function ϕ of p. (d) To deduce Lorentz invariance from (5.42), we write, using (5.41) and the corresponding formula for a†,
√c U_B(b, B) ∘ ϕ(x) ∘ U_B(b, B)⁻¹ = ∫ dλ_m(p) exp(−i(x, p)/ħ) exp(−i(b, B(p))/ħ) a(B(p))
Making the change of variable p → B⁻¹(p) and using that (x, B⁻¹(p)) = (B(x), p), the right-hand side is √c ϕ(b + B(x)).
Exercise 5.4.3 Using (5.39) the formula (5.43) is just another way to write (5.41).
Exercise 5.4.4 Because δ_{m,p} has the property that ∫ dλ_m(p′) δ_{m,p}(p′) a(p′) = a(p).
limit of) the quotient of an increment of u by a length and is of dimension [l⁻¹]. Similarly ∂^ν∂_ν u is of dimension [l⁻²]. Thus the term ħ²∂^ν∂_ν u has dimension [m²l⁴t⁻²][l⁻²], which is the same as the dimension of the term m²c²u.
Exercise 6.4.1 For the motion x(t) = cos ωt it is straightforward that the action
is zero. But for x(t) ≡ 1 the action is < 0. Thus x(t) = cos ωt does not minimize
the action.
Exercise 6.4.3 We don’t know the dimension of u but is irrelevant as the equation
is homogeneous in u. The dimension of m2 c4 is [m2 l4 t−4 ]. The dimension of ~2 c2 is
[m2 l6 t−4 ], but each operator ∂ν creates a dimension [l−1 ].
Exercise 6.5.3 Let us write R_{ℓ,k} for the matrix of R, so that v_ℓ = Σ_{k≤n} R_{ℓ,k} v̄_k. Then, by the chain rule,
p̄_k := ∂L̄/∂v̄_k = Σ_{ℓ≤n} (∂L/∂v_ℓ)(∂v_ℓ/∂v̄_k) = Σ_{ℓ≤n} p_ℓ R_{ℓ,k} .
Since Σ_{k≤n} v̄_k p̄_k = Σ_{k≤n} v_k p_k this shows that indeed H̄(x̄, p̄) = H(R(x̄), R(p̄)).
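The invariance of Σ v̄_k p̄_k used in the last step follows simply by exchanging the order of summation (no orthogonality of R is needed here):

```latex
\sum_{k\le n} \bar v_k\,\bar p_k
= \sum_{k\le n} \bar v_k \sum_{\ell\le n} p_\ell R_{\ell,k}
= \sum_{\ell\le n} p_\ell \sum_{k\le n} R_{\ell,k}\,\bar v_k
= \sum_{\ell\le n} p_\ell\, v_\ell.
```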
Exercise 6.5.4 The first statement is just another way to express (6.44). Moreover
since the construction of the Hamiltonian H̄ is, in the new basis, the same as the
construction of H in the old basis, it satisfies Hamilton's equations of motion, which is what is meant by the second statement.
= − Σ_{k,ℓ∈K₀} b_k b_ℓ ∫_B d³x g_k(x) Σ_{1≤ν≤3} (∂²/(∂x^ν)²) g_ℓ(x)
= (1/ħ²) Σ_{k,ℓ∈K₀} b_k b_ℓ ℓ² ∫_B d³x g_k(x)g_ℓ(x) = Σ_{k∈K₀} k²b_k²/ħ² ,
using (6.57) in the second line, that g_ℓ is an eigenvector of the Laplacian, of eigenvalue −ℓ²/ħ², in the third line, and finally that (g_k) is an orthogonal basis.
Exercise 6.9.2 In precise terms one is looking for distributions Φ_t such that ∫ dt Φ_t(f_t) = f(0), where f_t(x) = f(t, x). Let us then take the function f of the type f(t, x) = ϕ(t)ψ(x). Then ∫ dt ϕ(t)Φ_t(ψ) = ϕ(0)ψ(0). In particular ∫ dt ϕ(t)Φ_t(ψ) = 0 whenever ϕ(0) = 0, so that Φ_t(ψ) = 0 whenever t ≠ 0. Since ψ is arbitrary, Φ_t = 0 when t ≠ 0. Thus ∫ dt Φ_t(f_t) = 0, a contradiction.
Exercise 6.9.3 (a) We compute
∂_t ∫ d³x A ↔∂_t B = ∫ d³x ∂_t(A ↔∂_t B) = ∫ d³x (A ∂_t²B − B ∂_t²A) = 0 , (P.13)
using in the last equality that for C = A, B we have c⁻²∂_t²C = −m²c²C/ħ² + Σ_{k≤3} ∂_k²C, and integrating by parts in the space variables. (b) Denoting by B_t the distribution obtained in fixing x⁰ = ct, and using that A and B satisfy the Klein-Gordon equation, the right-hand side of (P.13) is c⁻² Σ_{k≤3} ∫ dx (A ∂_k²B_t − (∂_k²A)B_t). Now, by the very definition of the derivative of a distribution, ∫ dx A ∂_k²B_t = ∫ dx (∂_k²A)B_t.
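The last equality can be sketched from the definition of the distributional derivative, applied twice so that the two minus signs cancel (here ψ is a test function):

```latex
(\partial_k B_t)(\psi) = -B_t(\partial_k \psi)
\;\Longrightarrow\;
(\partial_k^2 B_t)(\psi) = B_t(\partial_k^2 \psi),
\qquad\text{hence}\qquad
\int dx\, A\,\partial_k^2 B_t = B_t(\partial_k^2 A) = \int dx\,(\partial_k^2 A)\,B_t.
```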
R
so that using (6.74) and (6.76), and since π(0, x) = ~2 ∂t ϕ(0, x), we obtain:
Z
ip i
2cωp a(p) = d3 x exp(−ix · p/~) cωp ϕ(0, x) − ∂t ϕ(0, x) . (P.14)
~ ~
This is the quantity ∫ d³x ϕ(x) ↔∂_t exp(i(x, p)/ħ) for x⁰ = 0. To argue that this
∫ dλ_m(p) exp(i(x, p)/ħ) f⁺(p) and B = ∫ dλ_m(p) exp(i(x, p)/ħ) g⁺(p). Then
A ∂B/∂t = (ci/ħ) ∫∫ dλ_m(p) dλ_m(p′) p′⁰ exp(i(x, p + p′)/ħ) f⁺(p) g⁺(p′) .
A ∂B/∂t = (ci/ħ) ∫∫ (d³p/((2πħ)³ 2ω_p)) (d³p′/((2πħ)³ 2ω_{p′})) p′⁰ exp(i(x, p + p′)/ħ) f⁺(p) g⁺(p′) ,
where p = (ω_p, p) (see Section 4.4) so that p⁰ = ω_p, and similarly for p′. Thus
A ∂B/∂t = (ci/(2ħ)) ∫∫ (d³p/((2πħ)³ 2ω_p)) (d³p′/(2πħ)³) exp(i(x, p + p′)/ħ) f⁺(p) g⁺(p′) .
Integrating in d³x and using the formula ∫ d³x exp(i(x, p + p′)/ħ) = (2πħ)³ δ⁽³⁾(p + p′) we obtain
∫ d³x A ∂B/∂t = (ci/(2ħ)) ∫ (d³p/((2πħ)³ 2ω_p)) ∫ d³p′ δ⁽³⁾(p + p′) exp(ix⁰(p⁰ + p′⁰)/ħ) f⁺(p) g⁺(p′) .
∫ d³x A ∂B/∂t = (ci/(2ħ)) ∫ (d³p/((2πħ)³ 2ω_p)) exp(2ix⁰p⁰/ħ) f⁺(p) g⁺(p̄)
= (ci/(2ħ)) ∫ dλ_m(p) exp(2ix⁰p⁰/ħ) f⁺(p) g⁺(p̄) .
Exchanging A and B and using the transformation p → p̄, which preserves λ_m, we find that this equals ∫ d³x (∂A/∂t)B. Proceeding in a similar fashion for the other terms one gets
∫ d³x A ↔∂_t B = (ci/ħ) ∫ dλ_m(p) ( f⁻(p)g⁺(p) − f⁺(p)g⁻(p) ) .
We compute the integral of each of the first four terms as in (6.82). The tricky part is that implementing the condition δ(r) = 0 for r = ±p ± p′ does not affect the term p_ν p′_ν the same way when ν = 0 and when ν ≥ 1, because ω_{−p} = ω_p. When ν = 0 one gets
ħ² ∫ d³x (∂₀ϕ(x))² = ∫ (d³p/((2πħ)³ 2cω_p)) (p⁰)² ( − exp(−2iω_p ct/ħ) a(p)a(−p) + a(p)a†(p) + a†(p)a(p) − exp(2iω_p ct/ħ) a†(p)a†(−p) ) ,
whereas for ν ≥ 1
ħ² ∫ d³x (∂_νϕ(x))² = ∫ (d³p/((2πħ)³ 2cω_p)) (p^ν)² ( exp(−2iω_p ct/ħ) a(p)a(−p) + a(p)a†(p) + a†(p)a(p) + exp(2iω_p ct/ħ) a†(p)a†(−p) ) .
Exercise 8.2.4 Let us identify the space H₁ of homogeneous first-degree polynomials with the set C² seen as a set of column matrices. That is, to the matrix a = (a₁, a₂)ᵀ corresponds the polynomial f_a(z₁, z₂) = a₁z₁ + a₂z₂ = aᵀz where z = (z₁, z₂)ᵀ. Thus π₁(A)(f_a)(z₁, z₂) = f_a(z₁′, z₂′) = aᵀA†z = (A*a)ᵀz. This means that π₁(A)(f_a) = f_{A*a}. In other words π₁ is equivalent to the representation π of SU(2) on C² where π(A) is the operator with matrix A*. According to Lemma 8.5.4 this representation is equivalent to the representation π′ of SU(2) where π′(A) is the operator with matrix A†⁻¹ = A.
Exercise 8.2.8 It is obvious that α_i = α_{j−i}, and the proof of (8.12) is then straightforward from the definitions. For example, in the case of J we have z₁′ = −z₂ and z₂′ = z₁. Within a multiplicative factor, which is a positive number due to the normalization constants α_i, π_j(C)_{k,ℓ} is the coefficient of z₁ᵏ z₂^{j−k} in the expansion of z₁′^ℓ z₂′^{j−ℓ} where z₁′ and z₂′ are given by (8.7), and this should make the first equality in (8.13) obvious. Next, we use that C* = J⁻¹CJ so that π_j(C*) = π_j(J⁻¹)π_j(C)π_j(J). We then compute π_j(C*)(f_i) using (8.12) and the formula π_j(C)(f_k) = Σ_ℓ π_j(C)_{ℓ,k} f_ℓ to obtain the result.
Exercise 8.4.1 We use the definition of κ. The point y = κ(exp(aσ₃))(x) is such that M(y) = exp(aσ₃)M(x)exp(a*σ₃), using also that exp(aσ₃)† = exp(a*σ₃). This gives the relations
When a = s/2 ∈ R this coincides with the formula (4.21) for B_s. When a = −iθ/2
for θ ∈ R this gives the relations y⁰ = x⁰, y³ = x³, y¹ + iy² = exp(iθ)(x¹ + ix²) and y¹ − iy² = exp(−iθ)(x¹ − ix²), i.e. y¹ = x¹ cos θ − x² sin θ and y² = x¹ sin θ + x² cos θ, which indeed corresponds to a rotation of angle θ in the plane spanned by e₁ and e₂.
Exercise 8.4.2 Since κ(SL(2, C)) contains all pure boosts, it suffices from (8.22) to prove that it contains all rotations. From (8.23), κ(exp(−iθσ₃/2)) is a rotation of angle θ around the third axis. Since κ is 2-to-1, if an element A ∈ SU(2) is not of the type exp(−iθσ₃/2), κ(A) is a rotation around another axis than the third axis. Thus G := κ(SU(2)) is a group of rotations which contains all rotations around the third axis, and at least one other rotation. We will show that such a group must be the entire group of rotations. The set D of directions with the property that any rotation around this direction belongs to G is invariant under the action of any rotation of G. This is because if R and S are rotations, RSR⁻¹ is a rotation of the same angle as S but around the image by R of the axis of rotation of S. Thus D contains the direction of the third axis, and another direction D′, which after rotation around the third axis we may assume to be in the plane generated by e₁ and e₃. Let us call θ the angle between D′ and e₃. By rotating D′ around the third axis to a direction D″ with an angle ξ with D′, 0 ≤ ξ ≤ 2θ, and then bringing D″ back to the e₁, e₃ plane by a rotation around D′, we can obtain every direction in the e₁, e₃ plane with an angle less than 3θ with e₃, and the conclusion should then be obvious.
Exercise 8.4.5 Unfortunately a direct proof of that does not seem much easier
than the solution of Exercise 8.4.4. It is much easier to appeal to Theorem D.6.4.
The only thing we have to show is that the representation is irreducible. When there
is an invariant subspace in a unitary representation, the orthogonal complement
of the subspace is also invariant. Since here the representation lives in a space of
dimension 3, if it was not irreducible there would be a one-dimensional invariant
subspace, but it is quite obvious that such a subspace does not exist.
Exercise 8.4.7 Let us first observe that a pure boost L satisfies L = Lᵀ. This is obvious if L = B_s, and in general L = RB_sR⁻¹ for a rotation R, so that Lᵀ = (R⁻¹)ᵀB_sRᵀ = L. Thus if A is positive Hermitian, κ(A†) = κ(A) = κ(A)ᵀ because
θ of SU(2) such that θ(A) is the operator M ↦ AMA†. Let us show that this representation is equivalent to π₂. Recalling the matrix J of Lemma 8.1.4, let us denote by M₀ the space of matrices of the type MJ for M ∈ M. Using the map M ↦ MJ⁻¹ from M₀ to M shows that θ is equivalent to the representation θ′ of SU(2) on M₀ such that θ′(A) is the map M ↦ AMJ⁻¹A⁻¹J from M₀ to itself. In Lemma 8.1.4 we proved that for C ∈ SU(2) we have J⁻¹C*J = C†⁻¹ = C, so that J⁻¹A†J = Aᵀ, and θ′(A) is the map M ↦ AMAᵀ. Now it is straightforward to see that M₀ consists of symmetric matrices, so it identifies with the space of symmetric order-two tensors, and under this identification writing the formula for θ′(A) shows that θ′ is just π₂. Thus θ is equivalent to π₂. We now have to prove that the representation B ↦ θ(ρ(B)) is equivalent to matrix multiplication by B. Consider the map U : C³ → M given by
U(x¹, x², x³) = [[x³, x¹ − ix²], [x¹ + ix², −x³]] .
and according to (8.35)³ this yields the formula J₃(ϕ)(p) = −iħ(p₂∂₁ϕ(p) − p₁∂₂ϕ(p)).
³ As in the present version of this text, that is with an opposite sign compared to the first printed version, see equation (*) in the erratum.
Exercise 8.8.2 The wave function is an element ϕ ∈ L² ⊗ H_j, i.e. a function ϕ : R³ → H_j. The hypothesis that measurement of the spin always yields the value ħ(j/2 − k) means that this wave function is an eigenvector of the spin operator 1 ⊗ S₃
with eigenvalue ħ(j/2 − k), so that for each p ∈ R³, ϕ(p) ∈ H_j is an eigenvector of the operator S₃ with eigenvalue ħ(j/2 − k). According to the definition (8.35) of S₃, this implies that π_j(exp(−iθσ₃/2))(ϕ(p)) = exp(−i(j/2 − k)θ)ϕ(p). Next, the action of an element A of SU(2) on the wave function is given by (8.34), and the element A corresponding to a rotation of angle θ around the z axis is A = exp(−iθσ₃/2). In the ideal case where ϕ(p) is not zero only when p is in the direction of the z axis, ϕ(κ(A⁻¹)(p)) = ϕ(p) and then π_j(A)(ϕ(κ(A⁻¹)(p))) = π_j(A)(ϕ(p)) = exp(−i(j/2 − k)θ)ϕ(p), which is the desired result.
Exercise 8.9.4 Denoting by S(A) the first matrix and by S′ the second one, this should be obvious from the relations S(A)S(B) = S(AB) and S′S(A)S′ = S(A†⁻¹).
Exercise 8.10.1 (a) Just reverse the manipulations leading from (8.47) to (8.51). (b) Taking determinants in the relation γ(θ(L)(x)) = Lγ(x)L⁻¹ proves that θ(L) ∈ O(1, 3). To prove that Pin(1, 3) is a group, consider two elements L, L′ of this set. Then L′Lγ(x)L⁻¹L′⁻¹ = L′γ(θ(L)(x))L′⁻¹ = γ(θ(L′)θ(L)(x)), so that L′L ∈ Pin(1, 3) and θ(L′L) = θ(L′)θ(L) (etc.). (c) It is better to take for granted that the linear span of the products of γ matrices is the set of all matrices, as there is nothing to learn from checking that. Prove then that a matrix which commutes with every matrix is a multiple of the identity, which is straightforward. (d) It follows from (8.47) that S(A) ∈ Pin(1, 3) and that θ(S(A)) = κ(A). Thus θ(S(SL⁺(2, C))) = κ(SL⁺(2, C)). And we proved in Section 8.9 that κ(SL⁺(2, C)) = O⁺(1, 3). (e) It is obvious that S is one-to-one. We have already proved that it is a homomorphism from SL⁺(2, C) into θ⁻¹(O⁺(1, 3)), and we have to prove that it is onto. According to (c) given any C ∈ O⁺(1, 3) we can find D ∈ SL⁺(2, C) with θ(S(D)) = C. Let us define −D in the obvious manner if D ∈ SL(2, C) and −D = P′(−A) if D = P′A for A ∈ SL(2, C). Then κ(−D) = κ(D), so that θ(S(D)) = θ(S(−D)). This means that we have found two different elements of S(SL⁺(2, C)) whose image by θ is C. Then (d) implies that S(SL⁺(2, C)) contains θ⁻¹(C). (f) It is straightforward to compute that θ(T)⁰₀ = −1 and θ(T)ⁱᵢ = 1 for 1 ≤ i ≤ 3, the others being zero. This “reverses the flow of time”. (g) We may guess the rules from the previous matrix expressions: T′² = −1, P′T′ = T′P′, T′A = −A†⁻¹T′. Let us denote by SL(2, C)* the group generated by SL⁺(2, C) and T′, and extend S to SL(2, C)* by defining S(T′) = T. We then prove as in (e) that S is an isomorphism from SL(2, C)* to Pin(1, 3).
Exercise 8.10.2 (a) The equivalence is given by the map S(P′) since S(A†⁻¹) = S(P′)S(A)S(P′)⁻¹. (b) It should be transparent that if S is the representation (j, ℓ) of SL(2, C) then A ↦ S(A†⁻¹) is the representation (ℓ, j), and these can be equivalent only if j = ℓ. S(P′) then has to exchange the actions of A and A†. I see two maps which achieve this, namely the map (x_{i₁,…,i_ℓ,j₁,…,j_ℓ}) ↦ (x_{j₁,…,j_ℓ,i₁,…,i_ℓ}) and the map (x_{i₁,…,i_ℓ,j₁,…,j_ℓ}) ↦ (−x_{j₁,…,j_ℓ,i₁,…,i_ℓ}), but I could not prove that there are no others.
Exercise 8.10.4 (a) This is because S(A)S(P′) = S(P′)S(A†⁻¹). (b) Indeed the space G ∩ G′ is invariant under S, so that it reduces to {0}. (c) Since the space G ⊕ G′ is invariant under S and since S is irreducible. (d) Let us denote by θ the restriction of S to SL(2, C) and G. It is irreducible because if K is an invariant subspace of G for θ then K ⊕ S(P′)(K) is an invariant subspace of S. Let us then consider the map T : G ⊕ G → H = G ⊕ G′ given by T(x, y) = x + S(P′)(y). Using again S(A)S(P′) = S(P′)S(A†⁻¹) one obtains the relations T⁻¹S(A)T(x, y) = (θ(A)(x), θ(A†⁻¹)(y)) and T⁻¹S(P′)T(x, y) = (y, x).
Exercise 9.1.1 The point is that then ϕ(A⁻¹(p)) ≠ 0 only for p = A(p₀).
Exercise 9.2.3 The point is that the element (0, −I) commutes with every element (a, A) because −Ia = κ(−I)a = a. Thus by Schur's lemma π(0, −I) is a multiple of the identity. In fact, it is ± the identity since its square is the identity. And when ρ(A)ρ(B) = −ρ(AB) we have (a, ρ(A))(b, ρ(B)) = (a + ρ(A)(b), ρ(A)ρ(B)) = (a + ρ(A)(b), ρ(AB))(0, −I), since ρ(A)b = κ(ρ(A))(b).
Exercise 9.4.3 Since D_p⁻¹D′_p belongs to the little group, on which V is unitary, the operator V(D_p⁻¹D′_p) is unitary. Now, using (9.7) we obtain
U(a, A)(W(ϕ))(p) = exp(i(a, p)/ħ) V(D_p⁻¹ A D_{A⁻¹(p)}) [W(ϕ)(A⁻¹(p))] ,
and since W(ϕ)(A⁻¹(p)) = V(D_{A⁻¹(p)}⁻¹ D′_{A⁻¹(p)}) ϕ(A⁻¹(p)) the right-hand side is, with obvious notation,
V(D_p⁻¹D′_p) exp(i(a, p)/ħ) V(D′_p⁻¹ A D′_{A⁻¹(p)}) [ϕ(A⁻¹(p))] = W(U′(a, A)(ϕ))(p) .
Exercise 9.5.5 (a) We have ∫ dµ(A) f(CA) = ∫∫ dλ_m(p) dν(B) f(CD_pB). Now, CD_p = D_{C(p)}D where D ∈ SU(2), so that by left-invariance of dν we have ∫ dν(B) f(CD_pB) = ∫ dν(B) f(D_{C(p)}B), and thus ∫∫ dλ_m(p) dν(B) f(CD_pB) = ∫∫ dλ_m(p) dν(B) f(D_{C(p)}B) = ∫ dµ(A) f(A), as is shown by the change of variables p → C⁻¹(p) and the invariance of dλ_m. (b) Since D_pB(p*) = D_p p* = p we have
∫ dµ(A) ‖V(A)⁻¹ϕ(A(p*))‖² = ∫∫ dλ_m(p) dν(B) ‖V(B)⁻¹V(D_p)⁻¹(ϕ(p))‖² .
a probability because the little group is not compact, and the previous argument
does not work.
Exercise 9.5.8 By definition ‖u‖_{R,p} = ‖D_p⁻¹u‖ and ‖u‖_{L,p} = ‖D_p†u‖. We assume that D_p = D_p†, so that D_p†D_p⁻² = D_p⁻¹. Thus ‖W(ϕ)(p)‖_{L,p} = ‖ϕ(p)‖_{R,p}. Now,
U_L(a, A)W(ϕ)(p) = exp(i(a, p)/ħ) A†⁻¹ D_{A⁻¹(p)}⁻² ϕ(A⁻¹(p))
For (d) observe that unitarity is obvious as |exp(i Im(αb*w))| = 1, and simply write
U(c′, a′)U(c, a)(f)(w) = (aa′)ʲ exp(i Im(αc′*w)) exp(i Im(αc*a′⁻²w)) f((aa′)⁻²w) ,
and this is U((c′, a′)(c, a))(f)(w) since c′* + c*a′⁻² = (c′ + a′²c)*.
Exercise 9.6.8 (a) and (b) are straightforward. So is (c). Indeed, for B = [[a′, b′], [c′, d′]] ∈ SL(2, C) we have B⁻¹ = [[d′, −b′], [−c′, a′]] ∈ SL(2, C), B⁻¹·z = (−c′ + a′z)/(d′ − b′z), f(B, z) = d′ − b′z, f(A, B⁻¹·z) = d − b(−c′ + a′z)/(d′ − b′z) and f(BA, z) = dd′ + bc′ − (db′ + ba′)z = f(B, z)f(A, B⁻¹·z). (d) For p ∈ X₀ \ {0}, we have M(p) = (p⁰ + p³)ZZ† where Z is the column matrix (1, z)ᵀ for z = (p¹ + ip²)/(p⁰ + p³). The map p ↦ z = (p¹ + ip²)/(p⁰ + p³) identifies the quotient of X₀ \ {0} by the equivalence relation pRp′ if p′ = λp for some λ > 0 with C, and the action of SL(2, C) on X₀ \ {0} respects this equivalence relation. The quotient action of SL(2, C) on C is the one we study here, and (9.46) implies (9.45) for the function occurring in (9.42).
Exercise 9.6.9 (a) is obvious. It is obvious that Hj is invariant under the trans-
formations V (a, A). To prove (b), writing what this means boils down to proving
that p(Av) = A(p(v)). This is because M (p(Av)) = Avv † A† = AM (p(v))A† =
M (A(p(v))). (c) It is obvious that w(p)w(p)† = M (p) so that p(w(p)) = p. When
vv † = M (p) then v = θw(p) where |θ| = 1. Then θ = v1 /|v1 | since w(p)1 ≥ 0.
since V(B⁻¹) is unitary we have ‖f(D_pB)‖ = ‖f(D_p)‖ and the result follows.
Exercise 9.8.4 If instead of p* we use the specific point C(p*) and a representation V′ of the little group G′ of C(p*), the state space is the space F′ of functions f : SL(2, C) → V for which f(AB) = V′(B)⁻¹f(A) for A ∈ SL(2, C) and B ∈ G′, and the representation is given by π(a, A)(f)(B) = exp(i(a, B(C(p*)))/ħ) f(A⁻¹B). When V′(B) = V(C⁻¹BC) for B ∈ G′, an intertwining map T from the space F of functions which satisfy the condition (9.66) to F′ is given by T(f)(A) = f(AC). The details are straightforward.
Exercise 9.8.5 Please read Section A.4. One can take for λ the counting measure,
λ(A) = card A, and this exercise is a small variation on the theme of Theorem 9.8.1.
Exercise 9.8.6 Please read Section A.5 after which everything should look very
simple.
Exercise 9.10.2 From (8.47) we have S(A)γ(A⁻¹(p)) = γ(p)S(A) and thus
U(a, A)D̂(ξ)(p) = exp(i(a, p)/ħ) S(A)γ(A⁻¹(p))ξ(A⁻¹(p))
= γ(p)U(a, A)(ξ)(p) = D̂U(a, A)(ξ)(p) .
Exercise 9.10.5 The only difference is that for $u \in G'$ one has $u^\dagger\gamma_0 u = -\|u\|^2$, so that there has to be a minus sign in the definition of the inner product.
Exercise 9.11.2 Starting with the statement $\gamma_\mu S(A) = S(A)\gamma_\nu\kappa(A)^\nu_{\ \mu}$ we may first raise $\mu$ on both sides to obtain $\gamma^\mu S(A) = S(A)\gamma_\nu\kappa(A)^{\nu\mu}$. We may then raise the index $\nu$ in $\gamma_\nu$ while lowering the index $\nu$ in $\kappa(A)^{\nu\mu}$, as is done in Exercise 4.1.5.
Exercise 9.11.3 (a) is integration by parts, using that $(x,p) = x^\mu p_\mu$. (b) The Dirac operator $D := \gamma^\mu\partial_\mu$ satisfies
\[
D^2 = \gamma^\mu\gamma^\nu\partial_\mu\partial_\nu = \frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu)\partial_\mu\partial_\nu = \eta^{\mu\nu}\partial_\mu\partial_\nu = \partial^\mu\partial_\mu\,. \tag{P.20}
\]
If a function f satisfies the Dirac equation, that is i~Df = mcf , then (i~)2 D2 f =
m2 c2 f and hence ~2 ∂ µ ∂µ f + m2 c2 f = 0. Every component of f satisfies the Klein-
Gordon equation. (c) Follows from the relation $\widehat{i\hbar\partial_\mu f}(p) = p_\mu\hat f(p)$. (d) It follows from (c) that the Fourier transform $\varphi = \hat f$ satisfies the equation $\hat D(\varphi) = mc\,\varphi$, so that by (9.71) we have $(p^2 - m^2c^2)\varphi(p) = 0$: this Fourier transform is zero outside $X_m \cup (-X_m)$ (which we already knew since each component is the Fourier transform of a function satisfying the Klein-Gordon equation). It is then a function from $X_m \cup (-X_m)$ to $\mathbb{C}^4$. When $f$ is real-valued then $\hat f(-p) = \hat f(p)^*$, that is $\varphi(-p) = \varphi(p)^*$. (e)
We use
\[
\hat f(\kappa(A^{-1})(p)) = \int d^4x\,\exp(i(\kappa(A^{-1})(p), x)/\hbar)\,f(x) = \int d^4x\,\exp(i(p,x)/\hbar)\,f(\kappa(A^{-1})(x))\,,
\]
by change of variable x → κ(A−1 )(x) and Lorentz invariance. (f) These two relations
are Fourier transforms of each other.
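The step $D^2 = \eta^{\mu\nu}\partial_\mu\partial_\nu$ in (P.20) rests only on the anticommutation relations of the $\gamma$ matrices. These can be checked numerically, here in the Weyl representation (an illustration; the book's conventions may differ):

```python
# Check that the Weyl-representation Dirac matrices satisfy
# γ^µ γ^ν + γ^ν γ^µ = 2 η^{µν} I, the relation behind D² = ∂^µ∂_µ.
import numpy as np

I2 = np.eye(2)
Z = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[Z, I2], [I2, Z]])]                 # γ^0
gamma += [np.block([[Z, s], [-s, Z]]) for s in sigma]  # γ^1, γ^2, γ^3
eta = np.diag([1.0, -1.0, -1.0, -1.0])

ok = all(np.allclose(gamma[m] @ gamma[n] + gamma[n] @ gamma[m],
                     2 * eta[m, n] * np.eye(4))
         for m in range(4) for n in range(4))
```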
Exercise 9.12.3 Recalling the formula (9.52), define the representation Ũ (a, A) :=
U (P a, A†−1 ). Then the map T given by T (ϕ)(p) = ϕ(P p) shows that Ũ (a, A) is
unitarily equivalent to the representation $U'(a,A)$ of (P.17).
Exercise 9.12.4 We check that JM (p)∗ J † = M (P p). Consequently (Jv ∗ )(Jv ∗ )† =
JM (p(v))∗ J † = M (P p(v)) and thus p(Jv ∗ ) = P p(v). We then note from
Lemma 8.1.4 that $A^{\dagger-1}J = JA^*$ to obtain the formula $TV(Pa, A^{\dagger-1})(f)(v) = V(a,A)T(f)(v)$.
Exercise 9.12.11 Generally speaking, given Q ∈ SL+ (2, C) and a representation
π of P ∗ one may define as in (9.12.1) a representation πQ of P ∗ by the formula
πQ (a, A) = π(Qa, QAQ−1 ). Intuitively, if π is associated to a given particle, the
representation πQ is associated to “the image of the particle through Q”. The point
is that when $Q^{-1}Q' \in SL(2,\mathbb{C})$ then $\pi_Q$ and $\pi_{Q'}$ are equivalent. (This applies in particular to the case where $Q$ is parity and $Q'$ mirror symmetry.) This equivalence
is a consequence of the following immediate formula: if W = π(0, A) for A ∈
SL(2, C) then W πQ W −1 = πQA .
Exercise 9.13.5 The trick is as always to transport ϕ(p) to Vp∗ using V (Dp )−1 .
We obtain
\[
U(c,C)(\varphi_k)(p) = \sum_{\ell\le N} S(C^{-1})^*_{k,\ell}\,\Pi_p\big(\exp(i(c,p)/\hbar)\hat f(C^{-1}(p))\,g_\ell\big)\,.
\]
Since exp(i(c, p)/~)fˆ(C −1 (p)) is the Fourier transform of V (c, C)f this proves the
formula (10.22). Applying A to (10.22) implies (10.8).
Exercise 10.5.3 This treatment can be found in [24], Sections 7.3 to 7.5, but it
might require dedication to plunge there.
Exercise 10.6.3 (a) We assume that S is irreducible. In Appendix D we prove
that S is equivalent to a representation of the type (n1 , n2 ) as in Definition 8.3.2.
The restriction of S to G = SU (2) is (equivalent to) the representation πn1 ⊗ πn2
of Proposition D.7.6. We also prove in Appendix D that for j = n1 + n2 , n1 + n2 −
2, . . . , |n1 − n2 | there is exactly one subspace G for which the restriction of S to
$SU(2)$ and $G$ is equivalent to $\pi_j$. Then $V = V' = \pi_j$.
(b) Taking the conjugate of the matrix relation (10.32) yields V ∗ (C) = Z −1 S(C)Z.
Since V ∗ (C) = V (C ∗ ) = V (J −1 )V (C)V (J) this implies (10.38).
(c) Proposition 10.4.2 implies that $ZV(J^{-1}) = \lambda W$ for some $\lambda \in \mathbb{C}$. Thus $Z = W'^* = \lambda WV(J)$, and (10.36) yields $v(p,q) = \lambda S(D_p)WV(J)(f_q)$. Using (8.12) yields (for a different $\lambda$) that for $0 \le q \le j$ we have $v(p,q) = \lambda(-1)^q u(p, j-q)$.
Exercise 10.7.2 We consider only the first of these quantities. For 1 ≤ k ≤ N
the quantities (u(p, q)k ) form a column vector u(p, q) = S(Dp )W (fq ), as we saw
in (10.27). In a similar manner, the quantities $v(p,q)_k$ form a column vector $v(p,q) = S(D_p)W'(f_q)^*$. Denote by $v^T$ the row vector which is the transpose of a column vector $v$, and by $v^\dagger$ the row vector which is the conjugate-transpose of $v$. Thus $u(p,q)v(p,q)^T$ is an $N \times N$ matrix, and the element of this matrix located on row $k$ and column $k'$ is $u(p,q)_k v(p,q)_{k'}$. Therefore it suffices to show that the matrix
\[
\sum_{q\le n} u(p,q)v(p,q)^T = S(D_p)\Big(\sum_{q\le n} W(f_q)W'(f_q)^\dagger\Big)S(D_p)^T \tag{P.21}
\]
rather obvious so we do not detail it. Choose another element $D'_p$ with $D'_p(p^*) = p$, so that $D'_p = D_pC$ where $C$ leaves $p^*$ invariant. It suffices to show that for such $C$ we have $S(C)MS(C)^T = M$. Now
\[
V(C) = W^{-1}S(C)W = W'^{-1}S^*(C)W' \tag{P.22}
\]
is a unitary transformation of $\mathcal{H}_0$, so that
\[
W^{-1}S(C)W(f_q) = W'^{-1}S(C)^*W'(f_q) = \sum_{j\le n}\alpha_q^j f_j\,,
\]
where the $\alpha_q^j$ are the coefficients of a unitary matrix. Thus $S(C)W(f_q) = \sum_{j\le n}\alpha_q^j W(f_j)$, whereas in a similar manner $S(C)^*W'(f_q) = \sum_{i\le n}\alpha_q^i W'(f_i)$ and thus, taking adjoints, $W'(f_q)^\dagger S(C)^T = \sum_{i\le n}\alpha_q^{i*}W'(f_i)^\dagger$. The result follows since $\sum_{q\le n}\alpha_q^j\alpha_q^{i*} = \delta_i^j$.
Exercise 10.12.1 We treat only the case of (10.63) since we have already treated the case of (10.64) in a related situation. What this means is that for a test function $f$ we have $\psi^\mu(\partial_\mu f) = 0$. This equation is satisfied separately for $\psi^+$ and $\psi^-$. We treat the case of $\psi^-$, for which (10.52) reads
\[
\psi^{-\mu}(f) = \sum_{q\le 3}\int\frac{d^3p}{\sqrt{2c\omega_p(2\pi\hbar)^3}}\,\hat f(p)\,(D_p)^{\mu}_{\ q}\,a^\dagger(p,q)\,.
\]
Since $\widehat{i\hbar\partial_\mu f}(p) = p_\mu\hat f(p)$, to prove that $\psi^{-\mu}(\partial_\mu f) = 0$ it suffices to prove that $p_\mu(D_p)^{\mu}_{\ q} = 0$. Now we have $(p, D_pe_q) = (D_pp^*, D_pe_q) = (p^*, e_q) = 0$, which means
\[
Cu(p,1) = \begin{pmatrix} mc+p^0-p^3 \\ -p^1-ip^2 \\ mc+p^0+p^3 \\ p^1+ip^2 \end{pmatrix}\,;\qquad
Cu(p,2) = \begin{pmatrix} -p^1+ip^2 \\ mc+p^0+p^3 \\ p^1-ip^2 \\ mc+p^0-p^3 \end{pmatrix}\,.
\]
Exercise 11.2.3 The hint is immediate. One then applies (11.37) to $t - s$ rather than $t$, uses that since $H_I(\theta) = U_0(-\theta)H_IU_0(\theta)$ we have $U_0(-s)H_I(\theta)U_0(s) = H_I(\theta + s)$, and one changes $\theta_j$ into $\theta_j + s$.
Exercise 11.4.1 Straightforward.
Exercise 11.5.2 There are three momentum operators, one for each coordinate. Using vector notation to describe the three components simultaneously, the operator $\mathbf{P}$ is given by $\mathbf{P}|m,(m_\ell)\rangle = (m + \sum_\ell m_\ell\ell)|m,(m_\ell)\rangle$. It is obvious that this operator commutes with $H_0$ since they have a common basis of eigenvectors. We prove that each operator $H_{I,k}$ commutes with $\mathbf{P}$. This is because
\[
\langle m,(m_\ell)|H_{I,k}\mathbf{P}|n,(n_\ell)\rangle = \Big(n + \sum_\ell n_\ell\ell\Big)\langle m,(m_\ell)|H_{I,k}|n,(n_\ell)\rangle\,,
\]
\[
\langle m,(m_\ell)|\mathbf{P}H_{I,k}|n,(n_\ell)\rangle = \Big(m + \sum_\ell m_\ell\ell\Big)\langle m,(m_\ell)|H_{I,k}|n,(n_\ell)\rangle\,,
\]
whereas by (11.64)
\[
\langle m,(m_\ell)|H_{I,k}|n,(n_\ell)\rangle = \langle(m_\ell)|a_k|(n_\ell)\rangle\int_B d^3x\,f_m(x)^*f_n(x)f_k(x)
\]
and one may recursively choose the points tn so that for n 6= k one has |U0 (tn −
tk )ϕk (0)| ≤ 2− max(k,n) . For this we use that when ψ is continuous with compact
support U (t)ψ converges uniformly to zero, so we simply choose tn large enough
that for k < n we have both |U (tn − tk )ϕk (0)| ≤ 2−n and |U (tk − tn )ϕn (0)| ≤ 2−n .
Exercise 12.2.7 According to (11.23), in the interaction picture the state ψ evolves
at time t into U0 (t)−1 U (t)ψ, so that the state ψ = U (t)−1 U0 (t)ϕ evolves at time t
into ϕ. Taking t = −∞ and ϕ = |ξi, the state |ξiin = U (−∞)−1 U0 (−∞)|ξi evolves
at time t = −∞ into |ξi.
Exercise 12.4.2 In the case where there is no scattering, $\Xi \equiv 0$, it follows from (12.32) that $\Phi(|\varphi\rangle_u) = \int d^3p\,\theta(p)|\langle p|\varphi\rangle|^2/(2\pi\hbar)^3$, so that this quantity must be zero in order for the quantity $\int_{\|u\|\le R}d^2u\,\Phi(|\varphi\rangle_u)$ to stay bounded as $R \to \infty$.
Exercise 13.12.1 Indeed we then have $p_1 = -p_2$ so that $p_1^0 = p_2^0$. Since $0 = p_1 + p_2 = p_3 + p_4$ we also have $p_3 = -p_4$ so that $p_3^0 = p_4^0$. Since $p_1 + p_2 = p_3 + p_4$ we then have $p_1^0 = p_2^0 = p_3^0 = p_4^0$, from which the claim follows.
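This uses only that the on-shell energy $p^0 = \omega_p$ depends on the momentum through its norm; a trivial numeric illustration (the values of $m$, $c$ and $p_1$ are arbitrary):

```python
# With p2 = -p1 the energies agree, since ω depends on p only through |p|.
import numpy as np

m, c = 1.3, 1.0
omega = lambda p: np.sqrt(c**2 * np.dot(p, p) + m**2 * c**4)
p1 = np.array([0.4, -0.2, 0.7])
p2 = -p1
equal_energies = np.isclose(omega(p1), omega(p2))
```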
Exercise 13.3.1 We have $S^\dagger = \sum_{n\ge0}S_n^\dagger$ where
\[
S_n^\dagger = \frac{(ig)^n}{n!}\int d^4x_1\cdots d^4x_n\,\bar T\,H(x_1)\cdots H(x_n)\,, \tag{P.25}
\]
and where $\bar T$ denotes "reverse time ordering". Computing $S^\dagger S$ we obtain $1 + \sum_{n\ge1}g^nA_n$ where obviously $A_1 = 0$ and
\[
2A_2 = \int d^4x_1\,d^4x_2\,\big(H(x_1)H(x_2) + H(x_2)H(x_1) - T\,H(x_1)H(x_2) - \bar T\,H(x_1)H(x_2)\big)
\]
is zero because the integrand is zero. To prove that $A_3$ is zero, denoting $H(x_i)$ by $h_i$, one reduces to the identity $T(h_1h_2h_3) - h_1T(h_2h_3) - h_2T(h_1h_3) - h_3T(h_1h_2) + \bar T(h_2h_3)h_1 + \bar T(h_1h_3)h_2 + \bar T(h_1h_2)h_3 - \bar T(h_1h_2h_3) = 0$, which is proved by writing what it means when $x_1 \ge x_2 \ge x_3$.
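The identity used for $A_3$ can be checked symbolically in the region $x_1 \ge x_2 \ge x_3$, where $T$ keeps the product order $h_1h_2h_3$ and $\bar T$ reverses it (a sketch with noncommuting symbols):

```python
# In the region x1 >= x2 >= x3: T orders products as h1 h2 h3 and
# T-bar as h3 h2 h1; the eight terms then cancel pairwise.
import sympy as sp

h1, h2, h3 = sp.symbols("h1 h2 h3", commutative=False)
expr = (h1*h2*h3          # T(h1 h2 h3)
        - h1*h2*h3        # h1 T(h2 h3)
        - h2*h1*h3        # h2 T(h1 h3)
        - h3*h1*h2        # h3 T(h1 h2)
        + h3*h2*h1        # T-bar(h2 h3) h1
        + h3*h1*h2        # T-bar(h1 h3) h2
        + h2*h1*h3        # T-bar(h1 h2) h3
        - h3*h2*h1)       # T-bar(h1 h2 h3)
```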
Exercise 13.5.1 Since both sides are anti-symmetric tensors, it suffices to consider the case $k = 1$, $k' = 2$, and the right-hand side is the determinant of $C^{-1}$, which is 1.
Exercise 13.9.1 (a) We have
\[
\mathrm{Re}\,B = 2\int_{\{0\le\theta_1\le\theta_2\le t\}}d\theta_1\,d\theta_2\,\mathrm{Re}\,(\alpha(\theta_2)^*\alpha(\theta_1))\,.
\]
For (b), $U(t)|0\rangle$ is of the type $U(t)|0\rangle = \sum_{k\ge0}\lambda_ke_k$. Then $\langle0|a^kU(t)|0\rangle = \sqrt{k!}\,\lambda_k$, and since $|\langle0|a^kU(t)|0\rangle|^2 = |\langle0|a^kV(t)|0\rangle|^2$ by (13.53) we get $|\lambda_k|^2 = |A|^{2k}\exp(-\mathrm{Re}\,B)/k!$, which sums to 1 by (a), and $|\lambda_k|^2$ is the probability that $U(t)$ has $k$ quanta of oscillation.
Exercise 13.10.5 Just compute the integral:
\[
\int_{-\infty}^{\infty}dt\,\exp(-|t|a + it\omega) = \int_{-\infty}^{0}dt\,\exp(ta + it\omega) + \int_{0}^{\infty}dt\,\exp(-ta + it\omega) = \frac{1}{a+i\omega} - \frac{1}{-a+i\omega} = \frac{2a}{\omega^2+a^2}\,, \tag{P.26}
\]
and (13.64) is just application of the inverse Fourier transform to this relation.
Denoting by ωp,ε the root of the equation z 2 = p2 +m2 −iε with negative imaginary
part, one uses (13.64) with a = iωp,ε to obtain (13.62).
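The value in (P.26) is easy to confirm numerically (an illustrative sketch; the values of $a$ and $\omega$ are arbitrary):

```python
# Numeric check of (P.26): ∫ dt exp(-|t|a + iωt) = 2a/(ω² + a²).
import numpy as np
from scipy.integrate import quad

a, w = 0.7, 1.9
re, _ = quad(lambda t: np.exp(-abs(t) * a) * np.cos(w * t), -np.inf, np.inf)
im, _ = quad(lambda t: np.exp(-abs(t) * a) * np.sin(w * t), -np.inf, np.inf)
```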
\[
\Delta_F(x)^2 = \lim_{\varepsilon\to0^+}\int\frac{d^4p\,d^4p'}{(2\pi)^4(2\pi)^4}\,\frac{\exp(-i(x,p+p'))}{(-p^2+m^2-i\varepsilon)(-p'^2+m^2-i\varepsilon)}\,,
\]
\[
\int d^4x\,f(x)\Delta_F(x)^2 = \lim_{\varepsilon\to0}\int\frac{d^4p\,d^4p'}{(2\pi)^4(2\pi)^4}\,\frac{\hat f(-p-p')}{(-p^2+m^2-i\varepsilon)(-p'^2+m^2-i\varepsilon)}\,.
\]
The integral is however not convergent in general. To convince yourself of this, you
may integrate in p0 at given p. Assuming that fˆ > 0 in a neighborhood of 0 you get
an integral of order at least 1/kpk4 (where kpk denotes the Euclidean norm) and
this is not an integrable function of p.
(b) Formal manipulations yield the definition $\Delta_0^2(f) = \int d\lambda_m(p)\,d\lambda_m(p')\,\hat f(p+p')$. This integral is well defined because $\lambda_m^2(\{(p,p')\,;\ p^0 + p'^0 \le a\})$ grows polynomially in $a$ and $\hat f$ decreases fast at infinity.
Exercise 13.13.3 (a) is a linguistic issue. Renumbering the set of internal vertices
defines a permutation of this set (sending a point to the point which has the same
order in the new ordering) and conversely. The rules we state about the permutation
exactly amount to saying the corresponding renumbering of the internal vertices
does not change the diagram. (b) If an internal vertex i is connected to an external
vertex $v$ then $\sigma(i)$ has to be connected to $v$. But there is a unique internal vertex connected to a given external vertex, so that $\sigma(i) = i$.
Exercise 13.14.1 (a) We write $\varphi(x)$ as the sum $\varphi(x) = \varphi^+(x) + \varphi^-(x)$ of an annihilation and a creation part, so that say $\langle0|a(p)\varphi(x)|0\rangle = \langle0|a(p)\varphi^-(x)|0\rangle$, whereas $\langle0|\varphi^-(x)a^\dagger(p)|0\rangle = 0$. Then $:\varphi(x)^4: = \varphi^-(x)^4 + 4\varphi^-(x)^3\varphi^+(x) + 6\varphi^-(x)^2\varphi^+(x)^2 + 4\varphi^-(x)\varphi^+(x)^3 + \varphi^+(x)^4$. We simply expand before we can apply Lemma 13.8.1. To understand why there are fewer terms and (b) is true, let us number $\varphi_j(x)$, $1 \le j \le 4$, the four copies of $\varphi(x)$ in the product $\varphi(x)^4$. The lines between the internal vertex corresponding to $x$ and itself occur because of contractions such as $\langle0|\varphi_1(x)\varphi_2(x)|0\rangle$, or, after expansion in creation and annihilation parts, because of contractions of the type $\langle0|\varphi_1^+(x)\varphi_2^-(x)|0\rangle$. However, these terms do not occur when we replace $\varphi(x)^4$ by $:\varphi(x)^4:$ because the normal ordering replaces $\varphi_1^+(x)\varphi_2^-(x)$ by $\varphi_2^-(x)\varphi_1^+(x)$.
Exercise 13.14.2 (a) is a special case of (b). Let us try more generally, for a test function $f$, to make sense of the operator $W := \int d^4x\,f(x):\varphi(x)^4:$, as an operator on a certain subspace of the boson Fock space, which we define now. Let us say that a function $\xi$ on $X_m^n$ is of fast decrease if for each $k$ the function $(\sum_{i\le n}p_i^0)^k\xi$ is bounded. Let $\mathcal{H}_n^{\mathrm{fast}}$ be the set of symmetric functions of fast decrease on $X_m^n$. We will try to define our operators on the algebraic sum of the spaces $\mathcal{H}_n^{\mathrm{fast}}$ (a nice subspace of the boson Fock space). Computations are easier using the formula (5.42). A typical term in $:\varphi(x)^4:$ is
\[
\iiiint d\lambda_m(p_1)\,d\lambda_m(p_2)\,d\lambda_m(p_3)\,d\lambda_m(p_4)
\]
Arguing as in Exercise 3.7.1 (and crossing our fingers because the function $\hat f$ is not a test function!), and not being concerned with the numerical factor, for $\xi \in \mathcal{H}_n^{\mathrm{fast}}$ we should define $W_0\xi$ as being proportional to the symmetrization of the following function
\[
\eta(p_1,\dots,p_n) = \iint d\lambda_m(p_1')\,d\lambda_m(p_2')\,\hat f(p_1+p_2-p_1'-p_2')\,\xi(p_1',p_2',p_3,\dots,p_n)
\]
with respect to the variables $p_1,\dots,p_n$. To show that this function decreases fast, one simply splits the integral into the region where $p_1'^0 + p_2'^0 \le (p_1^0 + p_2^0)/2$ and its complement, and one uses easy bounds. The same computation even makes sense when $f \equiv 1$. (c) If $f$ is a test function on $\mathbb{R}^3$, proceeding as above, $\int dx\,f(x):\varphi_0(x)^4:|0\rangle$ should be the function $\hat f(p_1+p_2+p_3+p_4)$ on $X^4$. But such a function is not square-integrable, and it seems very difficult to make sense of $H_I(0)$. However, despite the fact that we are not in the situation of Section 13.3, we are able to compute the $S$-matrix in $\varphi^4$ theory, so that this theory makes some sense after all.
Exercise 13.14.3 Let us write ϕ(x) = ϕ+ (x)+ϕ− (x), the annihilation and creation
parts of ϕ. Pretending that [ϕ− (x), ϕ+ (x)] = ∆1 for a certain number ∆, the trick
is to write :ϕ(x)4 : as a linear combination of ϕ(x)4 , ϕ(x)2 ∆ and ∆2 1, which makes
the desired result formally obvious. You may first try to write $:\varphi(x)^2:$ and
:ϕ(x)3 : in this manner.
Exercise 13.15.1 Writing $C := \langle0|T\varphi(x_1)\varphi(x_2)|0\rangle$, these are
\[
\langle0|\varphi(x_1)a^\dagger(p_1)|0\rangle\langle0|\varphi(x_1)a^\dagger(p_2)|0\rangle\,C^2\,\langle0|a(p_3)\varphi(x_2)|0\rangle\langle0|a(p_4)\varphi(x_2)|0\rangle\,,
\]
\[
\langle0|\varphi(x_1)a^\dagger(p_1)|0\rangle\langle0|\varphi(x_2)a^\dagger(p_2)|0\rangle\,C^2\,\langle0|a(p_3)\varphi(x_1)|0\rangle\langle0|a(p_4)\varphi(x_2)|0\rangle\,,
\]
\[
\langle0|\varphi(x_1)a^\dagger(p_1)|0\rangle\langle0|\varphi(x_2)a^\dagger(p_2)|0\rangle\,C^2\,\langle0|a(p_4)\varphi(x_1)|0\rangle\langle0|a(p_3)\varphi(x_2)|0\rangle\,.
\]
Exercise 13.15.2 For Exercise 13.13.1: The number of different contraction diagrams one obtains when labeling the internal vertices of a Feynman diagram in all possible ways and the lines out of the internal vertices in all possible ways is of the form $n!(4!)^n/S$, where $S$ is an integer called the symmetry factor of the diagram. For Lemma 13.13.2: The symmetry factor $S$ of a contraction diagram is the number of ways one may relabel the internal vertices and relabel the lines out of the internal vertices without changing the diagram.
Exercise 13.18.1 Let us do this the easy way. We compute the integral using the
residue formula and the contour of Figure 13.2. The existence of the limit is obvious,
as each term has a limit and denominators do not approach zero. The quantity A(p)
is the sum of two terms of the type $1/g(p)$. In one of them the function $g(p)$ is a constant times the quantity
\[
\sqrt{p^2+m^2}\,\Big(-\big(\sqrt{p^2+m^2}-w^0\big)^2 + (p-w)^2 + m^2\Big)\,.
\]
where the supremum is over all values of θ for which θ(p) = 1 when p2 ≤ R2 .
Making the change of variable $p \to p - uw$ we obtain
\[
U_2(w,R,\theta) = \int_0^1 du\int_{\|p-uw\|\ge R}\frac{\theta(p)\,d^4p}{(2\pi)^4}\,\frac{1}{(\|p-uw\|^2 - u(1-u)w^2 + m^2)^2}\,, \tag{P.29}
\]
so that
U2 (w, R, θ) = U3 (w, R, θ) + U4 (w, R, θ) , (P.30)
where
\[
U_3(w,R,\theta) = \int_0^1 du\int_{\|p\|\ge R}\frac{\theta(p)\,d^4p}{(2\pi)^4}\,\frac{1}{(\|p-uw\|^2 - u(1-u)w^2 + m^2)^2}\,,
\]
and where (P.30) defines U4 (w, R, θ). Since 0 ≤ θ(p) ≤ 1, considering the symmetric
difference ∆(w, u, R) between the sets {kpk ≥ R} and {kp − uwk ≥ R}, we have
\[
|U_4(w,R,\theta)| \le U_5(w,R) := \int_0^1 du\int_{\Delta(w,u,R)}\frac{d^4p}{(\|p-uw\|^2 - u(1-u)w^2 + m^2)^2}\,.
\]
Then U5 (w, R) → 0 as R → ∞ since for large R we integrate a quantity of order
R−4 on a domain of volume of order R3 . Therefore since 0 ≤ θ(p) ≤ 1 we have
\[
|U_2(w,R,\theta) - U_2(w',R,\theta)| \le \int_0^1 du\int_{\|p\|\ge R}\frac{d^4p}{(2\pi)^4}\,H(w,w',p,u) + \mathcal{R}(R)\,, \tag{P.31}
\]
where $\mathcal{R}(R) = U_5(w,R) + U_5(w',R) \to 0$ and the function $H(w,w',p,u)$ is given by
\[
\Big|\frac{1}{(\|p-uw\|^2 - u(1-u)w^2 + m^2)^2} - \frac{1}{(\|p-uw'\|^2 - u(1-u)w'^2 + m^2)^2}\Big|\,.
\]
We substitute $g = g' + \sum_{n\ge2}a_n(g')^n$ in each term and we expand. This makes sense because there are only finitely many terms which contain a given power of $g$. To express $g'$ as a power series in $g$ we recursively compute the coefficients $c_n$, for example $c_2 = -a_2$ and $c_3 = -a_3 + 2(a_2)^2$.
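The recursion can be checked mechanically; a sympy sketch (under the conventions $g = g' + a_2g'^2 + a_3g'^3 + \dots$ and $g' = g + c_2g^2 + c_3g^3 + \dots$):

```python
# Solve for the reversion coefficients c2, c3 by matching powers of g.
import sympy as sp

g, a2, a3, c2, c3 = sp.symbols("g a2 a3 c2 c3")
gp = g + c2 * g**2 + c3 * g**3                 # ansatz for g' up to order 3
expr = sp.expand(gp + a2 * gp**2 + a3 * gp**3 - g)
poly = sp.Poly(expr, g)
sol = sp.solve([poly.coeff_monomial(g**2), poly.coeff_monomial(g**3)],
               [c2, c3])
```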
Exercise 14.10.1 We look for $\theta$ of the type $\theta = m^2 + g^2\Gamma_2(m^2) + g^4B + O(g^6)$. Thus $\Gamma(\theta) = g^2\Gamma_2(\theta) + g^4\Gamma_4(\theta) + O(g^6)$. Since $\Gamma_4(\theta) = \Gamma_4(m^2) + O(g^2)$ and $\Gamma_2(\theta) = \Gamma_2(m^2) + g^2\Gamma_2(m^2)\Gamma_2'(m^2) + O(g^4)$ we obtain $\Gamma(\theta) = g^2\Gamma_2(m^2) + g^4(\Gamma_2(m^2)\Gamma_2'(m^2) + \Gamma_4(m^2)) + O(g^6)$, and $B = \Gamma_2(m^2)\Gamma_2'(m^2) + \Gamma_4(m^2)$ from (14.97).
Exercise 14.10.2 If a diagram has n internal vertices, i internal lines and e external
lines then 3n = 2i + e. To see that, think of each internal vertex as providing three
slots, each of which has to be filled by the end of a line. The ends of the $e$ external lines fill $e$ such slots, and the internal lines each fill two of the slots, one with each of their ends. Thus when $e$ is even, so is $n$.
Exercise 14.12.3 That $\mu = m$ follows from (14.101) (hoping of course that nothing goes wrong). The second statement follows from (14.124) using l'Hôpital's rule.
Exercise 15.4.12 According to (15.41) and (15.38) we have card E1 + 2 card (E2 \
E 0 ) = b ≤ 4. Since the whole diagram is connected we must have card E1 > 0.
Since b is even card E1 is even, so that card E1 ≥ 2. Thus card (E2 \ E 0 ) ≤ 1: either
the sub-diagram is a subgraph or it is obtained from a subgraph by removing a
single edge. The only possible cases are (a) card E1 = 2 and card (E2 \ E 0 ) = 0: the
sub-diagram α is a biped and d(α) = 2. (b) card E1 = 2 and card (E2 \ E 0 ) = 1:
the sub-diagram α has been obtained from a biped by removing a single edge, and
d(α) = 0. (c) card E1 = 4 and card (E2 \ E 0 ) = 0: the sub-diagram α is a quadruped
and d(α) = 0.
Exercise 15.5.2 This space has a basis formed by the elements $(e_1 - e_k)$ for $2 \le k \le n$.
Exercise 15.7.3 Fix an arbitrary point $v_0 \in \mathcal{V}$ and for $\bar w \in (\mathbb{R}^{1,3})^{\mathcal{V}\setminus\{v_0\}}$ define $S(\bar w) \in (\mathbb{R}^{1,3})^{\mathcal{V}}$ by $S(\bar w)_v = \bar w_v$ for $v \ne v_0$ and $S(\bar w)_{v_0} = -\sum_{v\ne v_0}\bar w_v$, so that $\sum_{v\in\mathcal{V}}S(\bar w)_v = 0$. The integral on the right is with respect to a translation invariant measure, the image of the translation invariant measure on $(\mathbb{R}^{1,3})^{\mathcal{V}\setminus\{v_0\}}$ under the linear map $S$.
Exercise 16.3.2 It should be obvious that ker L consists of the vectors of the type
(`, `, `) and that the projection of x on ker L is obtained for the value of ` given.
Exercise 16.3.3 With obvious notation we have L(Ax) = AL(x) so that ker L is
invariant by A. Obviously A preserves the dot product on (R1,3 )E so that AQ = Q.
The equality x = I(x)+T (x) implies Ax = AI(x)+AT (x), and since AI(x) ∈ ker L
and AT (x) ∈ Q we have AI(x) = I(Ax) and T (Ax) = AT (x). Next you have to
convince yourself that for a function H on Q the Taylor polynomial of order d at
q = 0 of the function q 7→ H(Aq) is H d (Aq) where H d is the Taylor polynomial of
order d at q = 0 of H. Consequently (using the same notation as below (16.11)) the
Taylor polynomial of order d at q = 0 of the function q 7→ FA (z +q) = F (Az +Aq) is
Gd (Az, Aq). Thus T d FA (x) = Gd (AI(x), AT (x)) = Gd (I(Ax), T (Ax)) = T d F (Ax)
which is the required equality.
Exercise 16.5.6 The forests are {γ} ; {γ1 , γ} ; {γ2 , γ} ; {γ3 , γ} ; {γ4 , γ} ;
{γ1 , γ2 , γ} ; {γ1 , γ3 , γ} ; {γ2 , γ4 , γ}.
Exercise 16.6.2 (a) Use (16.12) and Exercise 16.3.2. (b) Proceeding in a similar
manner for the diagram β we obtain F2 F = −f (p/2 − q)f (p/2 + q)f (q/3 − `)3 .
For example if ` is fixed, the quantity F2 F is integrable in q by itself whereas
the quantity F + F1 F is integrable in q. (In fact, F1 F has been designed for this
purpose.)
Exercise 16.6.3 Given p, r, q, ` ∈ R1,5 consider x(p, r, `, q) ∈ (R1,5 )Eα given by
the parameterization of Figure 16.6. Then obviously x(0, r, 0, q) ∈ ker Lα . Since
by (15.34) we have dim ker Lα = 6(6 − 5 + 1) = 12, all the elements of ker Lα
are of the type x(0, r, 0, q). On the other hand, it is straightforward to check that
$x(p,0,\ell,0)$ and $x(0,r,0,q)$ are orthogonal. This means that $x(p,0,\ell,0) \in \ker L_\alpha^\perp$ so that $x(0,r,0,q) = I_\alpha(x(p,r,\ell,q))$. This should make the formula for $F_1F$ obvious: the flows on the edges of $\alpha$ are replaced by the internal flows. What happens here is that even though the sum $F + F_2F$ does not have a divergence in the sub-diagram $\beta$ (as the term $F_2F$ is really designed to remove this divergence), the term $F_1F$ brings in a new divergence in $\beta$.
Exercise 16.6.4 To improve convergence we replace $F$ by $(1 - T^{d(\gamma)})F$, so we should also do this for subdivergences. If you have disjoint subdiagrams $\gamma_1,\dots,\gamma_n$, you should use that process "on each $\gamma_i$". Using the identity
\[
\prod_{i\le n}(1 - x_i) = \sum_{I\subset\{1,\dots,n\}}\ \prod_{i\in I}(-x_i)
\]
you see that a good idea is to add a compensating term $\sum_{I\subset\{1,\dots,n\}}F_IF$, where $F_I$ is the forest containing the diagrams $\gamma$ and $\gamma_i$ for $i \in I$. The reason you assume that $\gamma_1,\dots,\gamma_n$ are disjoint is simply that it is far from obvious to see what to do for subdiagrams which are not disjoint. Once you see that you must add such a compensating term to $F$, arguing that you must apply the same procedure "on each $\gamma_i$", it does not seem to require much imagination to invent the forest formula. The previous argument shows that it seems sort of necessary, to have a chance of success, to add all the terms of the forest formula. Why it is sufficient to add these terms is a different matter.
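The expansion identity invoked above can be verified directly for small $n$ (a sympy sketch):

```python
# Check ∏_{i≤n}(1 - x_i) = Σ_{I ⊂ {1,…,n}} ∏_{i∈I}(-x_i) for n = 4.
import sympy as sp
from itertools import combinations

n = 4
x = sp.symbols(f"x1:{n + 1}")
lhs = sp.expand(sp.Mul(*[1 - xi for xi in x]))
rhs = sum(sp.Mul(*[-x[i] for i in I])
          for k in range(n + 1) for I in combinations(range(n), k))
```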
Exercise 16.7.1 The diagram obtained by contracting each of the connected com-
ponents cannot contain a loop because then the Feynman diagram would contain a
loop such that removing any edge of this loop would disconnect the diagram, which
is absurd. Next assume if possible that one of the connected components A of the
remaining diagram is not 1-PI. Then it contains an edge e such that removing e
disconnects this component into pieces B and C. We prove that removing e discon-
nects the original diagram which contradicts the fact that e is an edge of A. We
proceed by contradiction. If removing e does not disconnect the original diagram,
this diagram contains a path linking a point of B to a point of C and this path must
contain edges which are not edges of A. These edges form a loop in the diagram
obtained by contracting the connected components, which we showed is impossible.
Exercise 17.6.2 Let us stress that Lorentz invariance is really built in at ev-
ery step of the theory, which is why we did not insist more on it. Denoting by
I(p1 ) the quantity (17.14) we have to show that for any Lorentz transformation
$A$ we have $I(p_1) = I(Ap_1)$. For a vector $x = (x_e)_{e\in E} \in (\mathbb{R}^{1,5})^E$ we denote by $Ax$ the element $(Ax_e)_{e\in E}$. Since the components $\chi(p_1)_e$ are all multiples of $p_1$ (because
of the general fact that they are linear combinations of the external momenta)
we have χ(Ap1 ) = Aχ(p1 ). Thus T 2 F(Ax + χ(Ap1 )) = T 2 F(A(x + χ(p1 ))). The
first thing we have to check is that T 2 F (Ay) = T 2 F (y). It is certainly true that
F (Ay) = F (y) because the propagator f is Lorentz invariant, and Exercise 16.3.3
shows that the operation $T^2$ preserves Lorentz invariance. Next you have to convince yourself that the measure $d\mu_L$ is invariant under the map $x \mapsto Ax$ (so that the equality $I(p_1) = I(Ap_1)$ then follows by this change of variable), which requires re-examining the definition of this measure, and ultimately relies on the fact that the volume measure on $\mathbb{R}^{1,5}$ is invariant under Lorentz transformations.
Exercise 17.6.3 The value of this diagram is
\[
(2\pi)^6\delta^{(6)}(p_1+p_2+p_3)\,(-ig)^3\int\frac{d^6k}{(2\pi)^6}\,f(k+q_1)f(k+q_2)f(k+q_3) \tag{P.33}
\]
where $q_1, q_2, q_3$ are certain linear combinations of the external momenta $p_1, p_2, p_3$. The way to enforce the first condition (17.7) at order $g^3$ is to define at this stage the counter-term $D$ by $D = -(-ig)^3\int\frac{d^6k}{(2\pi)^6}f(k)^3$, which at this order is exactly the value following from (17.15). (Please note that in (17.15) there is exactly one term of order $g^3$, corresponding to the tripod $\alpha_0$ with three internal vertices and the unique possible forest on $\alpha_0$, consisting of $\alpha_0$ itself.)
Exercise 17.6.4 Keeping again the cutoff implicit, the relevant integral is
\[
U(p) := (-ig)^2\int\frac{d^6k}{(2\pi)^6}\,f(k+p/2)f(k-p/2)\,. \tag{P.34}
\]
Since we pretend that our cutoff is Lorentz invariant, the quantity U (p) depends
only on p2 : it is of the type Y (p2 ). The way to enforce the second part of (17.7) at
order g 2 is to set B = −Y 0 (0) and C = −Y (0). Rather remarkably, the contribution
of the single vertex ⊗ then cancels the divergence of β0 at order 2, because the
quantity H(p2 ) := Y (p2 ) − p2 Y 0 (0) − Y (0) is given by a convergent integral. A
simple way to see it is to proceed as in the BPHZ method, to replace the integrand
Exercise 18.1.3 Indeed $\int_{-1}^{1}dx/(x^2 - i\varepsilon) = \int_{-1/\sqrt\varepsilon}^{1/\sqrt\varepsilon}dt/(\sqrt\varepsilon\,(t^2 - i))$.
Exercise 18.1.4 It is the convergence near zero which is a problem, that is, the existence of
\[
\lim_{\varepsilon\to0^+}\int_0^1 dr\,r^3\frac{1}{(r^2-i\varepsilon)^s} = \lim_{\varepsilon\to0^+}\varepsilon^{2-s}\int_0^{1/\sqrt\varepsilon}dt\,t^3\frac{1}{(t^2-i)^s}\,,
\]
where we have set $r = \sqrt\varepsilon\,t$. This quantity happens to have a limit exactly when $s < 2$. It is quite obvious that the limit does not exist for $s > 2$, since then the integral converges and the exponent of $\varepsilon$ is negative. For $s = 2$ the integral diverges. For $s < 2$ the integrand grows as $t^{3-2s}$, so that the integral grows as $(1/\sqrt\varepsilon)^{4-2s} = \varepsilon^{s-2}$ and the limit is finite (as one can verify by actually proving that the limit does not change if one replaces the integral by $\int_1^{1/\sqrt\varepsilon}dt\,t^{3-2s}$).
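The $\varepsilon$-scaling can also be probed numerically (a hypothetical sketch; the sample values of $s$ and the tolerances are arbitrary):

```python
# I(eps, s) = ∫_0^1 r^3/(r^2 - i eps)^s dr: the limit eps → 0+ exists for
# s < 2 (here s = 1.5), while for s > 2 (here s = 2.5) the integral blows
# up like eps^{2-s}.
import numpy as np
from scipy.integrate import quad

def I(eps, s):
    f = lambda r: r**3 / (r**2 - 1j * eps) ** s
    pts = [np.sqrt(eps), 10 * np.sqrt(eps)]  # help quad resolve small r
    re, _ = quad(lambda r: f(r).real, 0, 1, points=pts, limit=200)
    im, _ = quad(lambda r: f(r).imag, 0, 1, points=pts, limit=200)
    return complex(re, im)

ratio_stable = abs(I(1e-6, 1.5)) / abs(I(1e-3, 1.5))   # stays near 1
ratio_growing = abs(I(1e-6, 2.5)) / abs(I(1e-3, 2.5))  # grows like eps^{2-s}
```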
Exercise 18.1.5 Just integrate by parts s0 times in y and s times in x.
Exercise 18.1.6 After change of variable this looks like
\[
\int_{\|y\|\le A}\frac{d^5y}{(\|y\|^2 - i\varepsilon)^2} = \sqrt\varepsilon\int_{\|y\|\le A\varepsilon^{-1/2}}\frac{d^5y}{(\|y\|^2 - i)^2}\,.
\]
Looking at Exercise 18.1.4 this could have a limit, but the result might depend on
what one assumes about θ.
Exercise 18.2.4 Thinking of p and ε as parameters, it follows from Lemma 15.1.15
that for any space E one has degE PM (k) ≤ degE P (k, p, ε). Denoting by Q the
denominator in (18.14), the necessary condition degE P (k, p, ε) − degE Q < 0 for
convergence implies degE PM (k) − degE Q < 0.
Exercise 18.2.9 In fact no computation is needed. As a function of $x$ the quantity
\[
G(x) := F(x,y,u) - \sum_{i\le s}u_i\Big(\sum_{j\le n}a_{i,j}\big(x_j - B_j(y,u)\big)\Big)^2
\]
at x = B(y, u). The desired result follows from computing the value at the point
x = B(y, u).
Exercise 18.2.10 Consider the linear map $T$ from $\mathbb{R}^n$ to $\mathbb{R}^s$ given by $T(x) = (\sum_{j\le n}a_{i,j}x_j)_{i\le s}$ and $E$ the image of $T$. The point $T(B(y,u))$ is the point $z(y,u) = (z(y,u)_i)_{i\le s}$ of $E$ for which the function $\sum_{i\le s}u_i(z_i - y_i)^2$ of $(z_i)_{i\le s} \in E$ is minimum.
We will prove that for any matrix (ai,j )i≤s,j≤n the point z(y, u) stays bounded
over all values of u and of yi with |yi | ≤ 1. When the matrix (ai,j )i≤s,j≤n is of rank
n, the map T is one-to-one, so that the point z(y, u) determines the point B(y, u),
and these points also stay bounded over all values of u and of |yi | ≤ 1.
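The boundedness claim lends itself to a numeric illustration (a sketch with a random matrix; the weighted projection is computed by weighted least squares, and the specific sizes are arbitrary):

```python
# z(y, u): the weighted projection of y on E = im(T); it stays bounded even
# when some of the weights u_i become tiny.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))            # T(x) = A x, so E is the column space of A
y = rng.uniform(-1, 1, size=5)

def z(u):
    # minimize sum_i u_i (z_i - y_i)^2 over z = A x
    w = np.sqrt(u)
    x, *_ = np.linalg.lstsq(w[:, None] * A, w * y, rcond=None)
    return A @ x

norms = [np.linalg.norm(z(np.array([eps, eps, 1.0, 1.0, 1.0])))
         for eps in (1e-1, 1e-4, 1e-8)]
```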
The idea of the proof is simple. The point $z(y,u)$ is the point where a certain ellipsoid $\sum_{i\le s}u_i(z_i - y_i)^2 \le \alpha$ of $\mathbb{R}^s$ centered at $y$ touches the linear space $E$. For $z(y,u)$ to be far away the ellipsoid would have to be stretched diagonally, which is not the case for our ellipsoids, which are stretched only along the axes.
The formal proof goes by induction over $s$. The result is obvious for $s = 1$. Let us argue by contradiction, and consider a sequence $u_\ell \in D'$, points $y_\ell = (y_{i,\ell})_{i\le s}$ with $|y_{i,\ell}| \le 1$, and points $z_\ell \in E$ which minimize the quantity $\sum_{i\le s}u_{i,\ell}(z_{i,\ell} - y_{i,\ell})^2$, because the choice of $z_\ell$ minimizes the left-hand side, and the right-hand side is simply the value of the left-hand side for $z_\ell = 0$. Taking the limit we have $\sum_{i\le s}u_i(z_i - y_i)^2 \le 1$. Thus for $i \notin I$ we have $z_i$ finite, and there is nothing to prove if $I$ is empty. Observe also that since $\sum_{i\le s}u_{i,\ell} = 1$ it holds that $\sum_{i\le s}u_i = 1$, so
that $\mathrm{card}\,I < s$. Thus the complement $I^c$ of $I$ is not empty. Assuming that $I$ is not empty we consider the projection $Q$ from $\mathbb{R}^s$ to $\mathbb{R}^{I^c}$ which forgets the coordinates $x_i$ for $i \in I$. We can decompose $E = E_1 \oplus E_2$ where $E_1 \subset \ker Q$ whereas $Q$ is one-to-one on $E_2$. We can therefore decompose $z_\ell = z_\ell^1 + z_\ell^2$ where $z_\ell^1 \in E_1$ and $z_\ell^2 \in E_2$, so that $Q(z_\ell) = Q(z_\ell^2) = (z_{i,\ell})_{i\in I^c}$. Since $z_i = \lim_{\ell\to\infty}z_{i,\ell}$ is finite for $i \notin I$, as $\ell \to \infty$ the limit of $Q(z_\ell^2) = Q(z_\ell)$ exists. Thus this is also the case of the limit of $z_\ell^2 = Q^{-1}(Q(z_\ell^2))$, and the numbers $z_{i,\ell}^2$ stay bounded as $\ell \to \infty$.
Now, since $z_\ell$ minimizes the quantity $\sum_{i\le s}u_{i,\ell}(z_i - y_{i,\ell})^2$ over all choices of $z \in E$, among all the possible points $z^1 \in E_1$. That is, $z_\ell^1$ is obtained by the same procedure as $z_\ell$ but in fewer dimensions (since $\mathrm{card}\,I < s$). The induction hypothesis shows that since the numbers $-z_{i,\ell}^2 + y_{i,\ell}$ stay bounded, this is also the case for the numbers $z_{i,\ell}^1$. Hence the numbers $z_{i,\ell}$ stay bounded as desired.
Exercise 18.2.11 A function of x given by a formula of the type (18.29) does
attain its minimum on Rn , although this minimum may not be reached at a unique
point.
Let us fix $y$. For each $x$ we have $G(y, u_n) \le F(x, y, u_n)$, so that as $u_n \to u$ we get
\[
\limsup_{n\to\infty}G(y, u_n) \le F(x, y, u)
\]
interpolation principle of Lemma 18.2.13 that this is also the case of the functions $F_{j,j'}(u)$.
Exercise 18.2.17 In this situation an elementary computation is the most effective. Let us assume that $Q(y) = \sum_{i,j\le r}c_{i,j}y_iy_j$, so that
\[
-Q(p^0) + \sum_{1\le\nu\le3}Q(p^\nu) = \eta_{\mu,\nu}\sum_{i,j\le r}c_{i,j}\,p_i^\mu p_j^\nu\,,
\]
where as usual repeated Lorentz indices are summed. Replacing each $p_i$ by $L(p_i)$ for a Lorentz transformation $L$ replaces the right-hand side by
\[
\eta_{\mu,\nu}\sum_{i,j\le r}c_{i,j}\,L^\mu_{\ \lambda}p_i^\lambda\,L^\nu_{\ \lambda'}p_j^{\lambda'} = \eta_{\lambda,\lambda'}\sum_{i,j\le r}c_{i,j}\,p_i^\lambda p_j^{\lambda'}\,,
\]
either $\beta$ contains $\alpha$ (in which case we are done) or $\alpha$ strictly contains $\beta$, so that $\alpha$ contains $\beta^+$. If $\alpha$ strictly contains $\beta^+$ we are done by Lemma 19.3.5 (a) because $\beta^+$ is contained in one of the $\alpha_i$. If $\beta^+ = \alpha$ then, by construction, $\beta$ is one of the components $\tau_j$ of the diagram $\tau$ of the basic construction. But since all the edges of $E(F,\alpha)$ are $\alpha$-active, the diagram $\tau$ is just the union of the $\alpha_i$, so that its connected components are the maximal sub-diagrams $\alpha_i$ themselves and $\beta$ is one of them.
Exercise A.2.2 First by Lemma A.2.1 this projective representation arises from
a projective representation which is strongly continuous in a neighborhood of zero.
Then by Lemma A.1.1 it arises from a representation which is a true representation
in a neighborhood of zero, and the result follows easily.
which takes a value different from 1, so that $\sum_{a\in N}w'(a)^*w(a) = 0$. So if we have a linear combination $\sum_w\alpha_ww \equiv 0$, i.e. $\sum_w\alpha_ww(a) = 0$ for each $a$, then for another character $w'$ we have $0 = \sum_a\sum_w\alpha_ww'(a)^*w(a) = (\mathrm{card}\,N)\,\alpha_{w'}$, so that $\alpha_{w'} = 0$. (b) Each character is a function $N \to \mathbb{C}$ and the dimension of this space of functions is $\mathrm{card}\,N$. Since the characters are linearly independent we have $\mathrm{card}\,\hat N \le \mathrm{card}\,N$. But each element $a$ of $N$ defines a character on $\hat N$ by the map $w \mapsto w(a)$, and this defines an injection from $N$ to $\hat M$ where $M = \hat N$. Thus $\mathrm{card}\,N \le \mathrm{card}\,\hat M \le \mathrm{card}\,M = \mathrm{card}\,\hat N$. (c)
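For a concrete abelian group the counting in (a) and (b) can be checked directly (an illustration with $N = \mathbb{Z}/n\mathbb{Z}$, not part of the exercise):

```python
# For N = Z/nZ the characters are w_k(a) = exp(2πi k a/n); they are linearly
# independent and satisfy the orthogonality sum_a w'(a)^* w(a) = n δ_{w,w'}.
import numpy as np

n = 7
a = np.arange(n)
W = np.exp(2j * np.pi * np.outer(a, a) / n)   # row k is the character w_k
G = W.conj() @ W.T                            # Gram matrix of the characters
independent = np.linalg.matrix_rank(W) == n
```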
Just repeat the arguments of Section 9.8. (d) We could repeat the arguments of
the proof of Proposition 9.4.6, but the proof is even simpler. Consider f ∈ F with
f 6= 0. Assuming that (λ(a, A)f, g) = 0 for each a, A we have to show that g = 0.
b(B)(w] )(a)(f (AB), g(B)) = 0. For
P
Thus for each (a, A) we assume that B∈H κ
]
b(B)(w] )(a)(f (AB), g(B)) =
P
w ∈ O let Sw = {B ∈ H; κ b(B)(w ) = w}, so that B κ
P P
w∈O w(a)
P B∈Sw
(f (AB), g(B)). As this holds for each a we then conclude from
(a) that B∈Sw (f (AB), g(B)) = 0 for each w and each A. Now if B, B 0 ∈ Sw we
have C := B −1 B 0 ∈ Hw] , so that B 0 = BC and thus f (AB 0 ) = U (C −1 )f (AB) and
similarly g(BB 0 ) = U (C −1 )f (B). Since U is unitary this shows that (f (AB), g(B))
is independent of B ∈ Sw . So (f (AB), g(B)) = 0 for each A ∈ H and each B ∈ Sw .
Since w is arbitray, (f (A), g(B)) = 0 for each A, B in H. Now since f 6= 0 there ex-
ists D with f (D) 6= 0, and since U is irreducible the set of f (DC) = U (C −1 )(f (D))
for C ∈ Hw] spans V. In particular the set of f (A) spans V so that g(B) = 0 for
each B.
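As a quick illustration of the orthogonality relation used in (a) (not part of the book's argument), one can check it numerically for the cyclic group Z/NZ, whose characters are w_k(a) = exp(2πika/N); the choice of group and of N here is ours:

```python
# Characters of Z/NZ: w_k(a) = exp(2*pi*i*k*a/N).
# Check that sum_a w_l(a)^* w_k(a) = N if k == l and 0 otherwise.
import cmath

N = 7

def w(k, a):
    # character w_k of Z/NZ evaluated at a
    return cmath.exp(2j * cmath.pi * k * a / N)

for k in range(N):
    for l in range(N):
        s = sum(w(l, a).conjugate() * w(k, a) for a in range(N))
        expected = N if k == l else 0
        assert abs(s - expected) < 1e-9
```

In particular the factor card N appearing above is visible here as the diagonal value N.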
Exercise A.5.12 (a) This is simply because the restriction of Ξ to V and H_{w♯} is
Exercise A.6.3 The relation (A.47) should be pretty obvious in a basis where
b ∈ SU (2) is diagonal.
Exercise C.1.5 All statements are straightforward to check, including (c) if one reads (C.10) as T(s, t, r)T(s′, t′, r′) = T((s, t, r) ∗ (s′, t′, r′)).
and it is a special case of Wick's theorem (Lemma 13.8.1) that ⟨0|a^k (a†)^n|0⟩ = δ_{kn} k!. (In one word, a term on the right-hand side of (13.41) can be non-zero only if it pairs each a† with an a, so one must have k = n, and there are k! ways to make such pairs.)
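As a sanity check (ours, not the book's), the relation ⟨0|a^k(a†)^n|0⟩ = δ_{kn} k! can be verified with truncated harmonic-oscillator matrices; the truncation size M below is an arbitrary choice, which only needs to exceed the powers used so that no state escapes the cutoff:

```python
# Verify <0| a^k (a^dagger)^n |0> = delta_{kn} * k! on truncated matrices.
import numpy as np
from math import factorial

M = 12                                          # truncation dimension
a = np.diag(np.sqrt(np.arange(1.0, M)), k=1)    # a e_n = sqrt(n) e_{n-1}
adag = a.T                                      # creation operator a^dagger

for k in range(7):
    for n in range(7):
        mat = np.linalg.matrix_power(a, k) @ np.linalg.matrix_power(adag, n)
        val = mat[0, 0]                         # the matrix element <0|...|0>
        expected = factorial(k) if k == n else 0.0
        assert abs(val - expected) < 1e-9
```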
Exercise C.2.4 To look for the elements x such that ax = γx/ℏ, try to find x of the type ∑_{n≥0} c_n (a†)^n |0⟩, using that a(a†)^n = (a†)^n a + n(a†)^{n−1} to find that c_n = γ c_{n−1}/(nℏ).
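With c₀ = 1 the recursion gives c_n = (γ/ℏ)^n/n!, and one can check numerically on truncated matrices that the resulting vector is indeed an eigenvector of a; the values of M, γ and ℏ below are arbitrary choices for this sketch:

```python
# Build x = sum_n c_n (adag)^n |0> with c_n = gamma*c_{n-1}/(n*hbar)
# and check that a x = (gamma/hbar) x, up to the matrix truncation.
import numpy as np

M, gamma, hbar = 40, 0.4, 1.0
a = np.diag(np.sqrt(np.arange(1.0, M)), k=1)    # lowering operator
adag = a.T
vac = np.zeros(M); vac[0] = 1.0                 # the vacuum |0>

x = np.zeros(M)
c, power = 1.0, np.eye(M)                       # c_n and (adag)^n
for n in range(M - 1):
    x += c * (power @ vac)
    power = adag @ power
    c *= gamma / ((n + 1) * hbar)               # c_{n+1} = gamma*c_n/((n+1)*hbar)

# compare a x with (gamma/hbar) x away from the cutoff
err = np.max(np.abs((a @ x - (gamma / hbar) * x)[:M - 5]))
assert err < 1e-10
```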
exp(2uy − u²) = ∑_{n≥0} (uⁿ/n!) (−1)ⁿ exp(y²) (dⁿ/dyⁿ) exp(−y²)

or, equivalently,

exp(−(y − u)²) = ∑_{n≥0} ((−u)ⁿ/n!) (dⁿ/dyⁿ) exp(−y²),

which is simply Taylor's formula used for the function v ↦ exp(−(y + v)²) at v = 0.
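The identity can also be confirmed symbolically; the sketch below (using sympy, with the number of checked orders chosen arbitrarily) compares the Rodrigues-type expression (−1)ⁿ exp(y²)(dⁿ/dyⁿ)exp(−y²) with n! times the coefficient of uⁿ in exp(2uy − u²):

```python
# Symbolic check: n! * [u^n] exp(2*u*y - u**2) equals
# (-1)^n * exp(y**2) * d^n/dy^n exp(-y**2) (the Hermite polynomial H_n).
import sympy as sp

u, y = sp.symbols('u y')
N = 6
gen = sp.exp(2*u*y - u**2).series(u, 0, N).removeO()

for n in range(N):
    rodrigues = (-1)**n * sp.exp(y**2) * sp.diff(sp.exp(-y**2), y, n)
    coeff = gen.coeff(u, n) * sp.factorial(n)   # n! times the u^n coefficient
    assert sp.simplify(coeff - rodrigues) == 0
```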
Exercise C.2.7 Use the relations √(n!) e_n = (a†)ⁿ|0⟩, so that by definition of P_n, e_n(x) = P_n(αx)ϕ₀(x)/√(n!), and P_n(y) = H_n(√2 y)/2^{n/2} by definition of H_n.
and thus

(g, P(f)) = ∫ dx (−ig′(x) + ig(x))* f(x) exp(−x²) = (P(g), f).
Exercise C.5.1 Changing X_k into exp(s)X_k and P_k into exp(−s)P_k changes the operators a_k and a†_k of (C.72) into the operators a′_k and a′†_k of (C.76).
Exercise C.5.2 The relation (C.82) is a special case of (C.23). Since A_k(γ) is a unitary operator, taking the adjoint of (C.82) we obtain A_k(γ)a†_k A_k(γ)⁻¹ = a†_k − γ*1. That is, the unitary operator A_k(γ_k) witnesses that the pairs (a_k, a†_k) and (a_k − γ_k 1, a†_k − γ*_k 1) are unitarily equivalent. Thus we have (letting you guess how we define S_k(s, t)!) A_k(γ_k)S₀(s, t)A_k(γ_k)⁻¹ = S_k(s, t), i.e. S₀(s, t) = A_k(γ_k)†S_k(s, t)A_k(γ_k), and (C.63) is satisfied by the function τ_k = A_k(γ_k)|0⟩. Since |0⟩ is the constant function equal to 1, we have ∫ τ_k dµ₁ = ⟨0|A_k(γ_k)|0⟩, and that this equals exp(−|γ_k|²/2) is a consequence of (C.22), in the form ⟨0|γ⟩ = exp(−|γ|²/2). The rest is obvious.
Exercise C.5.3 (a) Consider a sequence (ϕ_k) of test functions on R³, with ∥ϕ_k∥ = 1, and assume that the support of ϕ_k is contained in C_k = [−k−1, k+1]³ \ [−k, k]³. Then the sequence a_k = A(ϕ_k) satisfies the canonical commutation relations. As a consequence of (C.84) the sequence (a_k) is unitarily equivalent to the sequence (a_k + γ_k 1) where γ_k = ∫ γ(p)ϕ_k(p) d³p/(2π)³. According to the previous exercise, the quantity ∑_k |γ_k|² is finite. The function ψ_k such that ψ_k(p) := γ(p)* 1_{C_k} satisfies
support in C_k, with ∥ϕ_k∥ = 1, and approximating ψ_k/∥ψ_k∥ well will satisfy 2|γ_k|² ≥ ∫_{C_k} |γ(p)|² d³p/(2π)³. Since ∑_k |γ_k|² < ∞ this shows that γ ∈ H.
For (b), let us choose a basis of H such that the linear form ψ is (a multiple of) the coordinate on the first basis vector. Then we are simply in the situation of (C.77) for γ_k = 0 if k ≥ 2.
Exercise D.1.2 Indeed, for any integer n we then have exp(nε_k X) ∈ G. If ε_k → 0 we can find a sequence (n_k) with n_k ε_k → t, so that exp(tX) = lim_{k→∞} exp(n_k ε_k X) ∈ G since G is closed. Thus X ∈ g.
Exercise D.1.10 Just write down the components of [u · J, v · J].
Exercise D.1.11 If the unit vector u is fixed by X, then X induces an orthogonal transformation in the plane perpendicular to u, and since det X = 1 this transformation must be a rotation. The elements of the type exp(θ u·J) provide all the rotations of axis u.
Exercise D.1.12 Very similar to the proof of Lemma D.9.1. Namely, we have dA(t)/dt = u·J A(t) and dA⁻¹(t)/dt = −A⁻¹(t) u·J, so that this derivative is A⁻¹(t)BA(t) where B = −[u·J, A(t)(v)·J] + (u·J A(t)(v))·J. Now, by (D.13) we have [u·J, A(t)(v)·J] = (u ∧ A(t)(v))·J, and u ∧ A(t)(v) = u·J A(t)(v), so that B = 0 and the derivative vanishes.
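The bracket identity (D.13) invoked here (and in Exercise D.1.10) can be illustrated numerically by realizing u·J as the cross-product matrix, (u·J)w = u ∧ w; this basis choice is ours and may differ from the book's conventions by a sign:

```python
# Check [u.J, v.J] = (u ^ v).J with u.J the cross-product matrix.
import numpy as np

def cross_matrix(u):
    # the matrix of w -> u x w
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)

lhs = cross_matrix(u) @ cross_matrix(v) - cross_matrix(v) @ cross_matrix(u)
rhs = cross_matrix(np.cross(u, v))
assert np.allclose(lhs, rhs)
```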
Exercise D.6.7 We write

N(V(h)(x))² = ∫ ∥V(g)V(h)(x)∥² dµ(g) = ∫ ∥V(gh)(x)∥² dµ(g) = ∫ ∥V(g)(x)∥² dµ(g) = N(x)².

Denote by (·, ·) the inner product in H. Then ⟨x, y⟩ := ∫ (V(g)(x), V(g)(y)) dµ(g) is an inner product, and N(x)² = ⟨x, x⟩. When H is finite-dimensional, N and ∥·∥ are equivalent because any two norms are equivalent.
Exercise D.6.11 If σ is any of the Pauli matrices,

π_j(exp(−itσ/2))(f)(z₁, z₂) = f(z₁(t), z₂(t))   (P.35)

where (z₁(t), z₂(t))ᵀ = exp(itσ/2)(z₁, z₂)ᵀ, so that taking the derivative of (P.35) at t = 0 we get π′_j(−iσ/2)(f)(z₁, z₂) = ż₁ ∂f/∂z₁ + ż₂ ∂f/∂z₂, where (ż₁, ż₂)ᵀ = (iσ/2)(z₁, z₂)ᵀ. Then Z = 2iπ′_j(−iσ₃/2) is given by Z(f)(z₁, z₂) = −z₁ ∂f/∂z₁ + z₂ ∂f/∂z₂, and z₂^{2j} is an eigenvector of eigenvalue 2j. Similarly, A₊(f)(z₁, z₂) = −z₂ ∂f/∂z₁ and A₋(f)(z₁, z₂) = −z₁ ∂f/∂z₂.
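One can check with sympy that the operators just displayed close into the expected sl(2)-type relations; the exact normalizations [Z, A±] = ±2A± and [A₊, A₋] = Z asserted below are computed from the displayed formulas, not quoted from the book:

```python
# Commutation relations of Z = -z1*d/dz1 + z2*d/dz2, A+ = -z2*d/dz1,
# A- = -z1*d/dz2, checked on a sample polynomial.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
Z  = lambda f: -z1*sp.diff(f, z1) + z2*sp.diff(f, z2)
Ap = lambda f: -z2*sp.diff(f, z1)
Am = lambda f: -z1*sp.diff(f, z2)

f = z1**3 * z2**5                      # a sample monomial
assert sp.expand(Z(Ap(f)) - Ap(Z(f)) - 2*Ap(f)) == 0   # [Z, A+] = 2 A+
assert sp.expand(Z(Am(f)) - Am(Z(f)) + 2*Am(f)) == 0   # [Z, A-] = -2 A-
assert sp.expand(Ap(Am(f)) - Am(Ap(f)) - Z(f)) == 0    # [A+, A-] = Z
# a monomial z1**a * z2**b is a Z-eigenvector of eigenvalue b - a
assert sp.expand(Z(f) - (5 - 3)*f) == 0
```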
Exercise D.6.12 A non-zero invariant subspace has to be invariant under the operators A₊ and A₋, and as in Proposition D.6.1 we show that it has to be the whole space.
Exercise D.7.5 Probably it is best to figure this out yourself with little pictures. Recall (D.55) and the definition of A_j. Obviously m_j = 0 for j > n + n′. For j = n + n′, j belongs to A_{n+n′} but to no other set A_ℓ. Next, n + n′ − 1 does not belong to any set A_ℓ, n + n′ − 2 belongs to A_{n+n′} and A_{n+n′−2}, etc., and in this way one gets the first equality in (D.56). To prove the second equality, we compute the last term of (D.56) and show that it equals 1 + min(n, n′, r). Since k′ = r − k we have to count the number of integers k such that 0 ≤ k ≤ min(n, r) and r − k ≤ n′, i.e. r − n′ ≤ k. If n′ ≥ r this is obviously 1 + min(n, r) = 1 + min(n, n′, r). Assume then n′ < r. Since ℓ = n + n′ − 2r ≥ 0 we have r ≤ (n + n′)/2 and thus n′ < (n + n′)/2, so that n′ < r < n. Then min(n, n′, r) = n′ and min(n, r) = r. The number of values of k with r − n′ ≤ k ≤ r = min(n, r) is then n′ + 1.
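The counting step can also be brute-forced: for every n, n′ and every admissible r, the number of integers k with 0 ≤ k ≤ min(n, r) and r − k ≤ n′ equals 1 + min(n, n′, r). A throwaway check (writing n2 for n′):

```python
# Brute-force the combinatorial identity used in the solution:
# #{k : 0 <= k <= min(n, r), r - k <= n2} == 1 + min(n, n2, r)
# for all n, n2 and all r with n + n2 - 2r >= 0.
for n in range(10):
    for n2 in range(10):
        for r in range((n + n2) // 2 + 1):      # ensures n + n2 - 2r >= 0
            count = sum(1 for k in range(min(n, r) + 1) if r - k <= n2)
            assert count == 1 + min(n, n2, r)
```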
Exercise D.7.8 Follow the hint.
Exercise D.7.9 It follows from Proposition D.7.7 (b) that the spaces G_n are orthogonal, so they form an orthogonal decomposition of H. Furthermore, again for the same reason, an irreducible subspace G of dimension n is orthogonal to each of the G_{n′} for n′ ≠ n, so it is a subspace of G_n. Thus G_n is just the span of the irreducible subspaces of dimension n.
Exercise D.9.3 By definition of κ, for X ∈ sl_ℂ(2) one has M(κ(exp tX)(x)) = exp(tX) M(x) exp(tX†), and taking the derivative at t = 0 yields M(κ′(X)(x)) = XM(x) + M(x)X†, from which the desired relations are checked by explicit computation.
Exercise D.9.4 This is obvious if v1 = v2 = 0 because then exp v · Y is diagonal
with positive coefficients. Since A(exp v · Y)A−1 = exp(v · AYA−1 ) one may reduce
to the previous case by Lemma D.9.1.
Exercise D.10.2 Consider such an invariant subspace G and assume that it contains a non-zero vector ∑_{k,ℓ} α_{k,ℓ} e_{k,ℓ}. Consider the largest integer ℓ₀ such that not all the α_{k,ℓ₀} are zero. By successive applications of (D.89) we may assume that ℓ₀ = 0. Consider then the smallest value k₀ of k for which α_{k₀,0} ≠ 0. Successive applications of (D.86) reduce to the case where k₀ = m. Thus e_{m,0} ∈ G, and then each e_{k,ℓ} ∈ G by successive applications of (D.87) and (D.88).
Exercise D.10.3 This is far simpler than it sounds. Starting with the representation π = π_{n,m}, we know how to define the required operators, starting with L_j = π′(X_j), etc., and they automatically satisfy the required commutation relations. If we can find a vector e such that Z(e) = me, W(e) = ne, A₋(e) = C₋(e) = 0, then the construction of Proposition D.10.1 carries through to construct the whole required structure. It is quite straightforward to check that the tensor e = (x_{i₁,...,i_m,j₁,...,j_n}) such that x_{i₁,...,i_m,j₁,...,j_n} = 0 unless all the indices are equal to 1, in which case x_{i₁,...,i_m,j₁,...,j_n} = 1, has the previous properties. All the rest is obvious.
Exercise D.10.4 It should be clear at this stage that the representation is of type
(m, 0) if and only if B = 0 and of type (0, n) if and only if A = 0.
Exercise D.12.3 Recall the matrix J of Lemma 8.1.4. Then J⁻¹C*J = C†⁻¹, so that A† = J⁻¹A*⁻¹J. Thus

θ(A)(MJ) = A(MJ)A† = AMA*⁻¹J.

If W : M → M is given by W(M) = MJ, this means that

θ(A)W(M) = W(AMA*⁻¹).

Consequently θ is equivalent to the representation r given by r(A) : M ↦ AMA*⁻¹.
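The identity J⁻¹A*J = A†⁻¹ for A ∈ SL(2, ℂ) (with A* the entrywise conjugate) can be spot-checked numerically; the sign convention J = ((0, 1), (−1, 0)) below is our assumption, and the identity is insensitive to replacing J by −J:

```python
# Check J^{-1} A^* J = (A^dagger)^{-1} for a random A with det A = 1.
import numpy as np

rng = np.random.default_rng(1)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A /= np.sqrt(np.linalg.det(A))          # normalize so that det A = 1

lhs = np.linalg.inv(J) @ A.conj() @ J   # A.conj() is the entrywise conjugate
rhs = np.linalg.inv(A.conj().T)         # (A^dagger)^{-1}
assert np.allclose(lhs, rhs)
```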
Exercise D.12.4 Let us consider the map x ↦ M(x) from ℂ⁴ to M defined by formula (8.19). Then by definition of κ we have M(κ(A)(x)) = θ(A)(M(x)), i.e. M ∘ κ(A) = θ(A) ∘ M, so that indeed κ is equivalent to θ. The rest is obvious.
Exercise D.12.9 This inverse image is a space of dimension nine which is invariant under the representation U, so indeed what else could it be? More precisely, the following holds. Assume that a finite-dimensional space decomposes as a direct sum of invariant subspaces F₁, ..., F_k, such that the restrictions of the representation to these subspaces are not equivalent. Then if a subspace F is invariant, and such that the restriction of the representation to this subspace is irreducible, it is one of the spaces of the decomposition. To see this consider the projection P_k of F onto F_k. It commutes with the representation, and its kernel is an invariant subspace of F, which therefore must be zero or the whole of F. When it is zero, the restrictions of the representation to F and F_k are equivalent, so this happens for exactly one k, and then F = F_k.
Exercise E.2.1 Let us shorten notation by writing (κµν) = ∂_κ F_{µν}. Since ∂_κ = ∂/∂x^κ and since (L⁻¹(x))^γ = (L⁻¹)^γ_κ x^κ = L_κ^γ x^κ, by the chain rule the tensor field (κµν)(x) is transformed into the tensor field L_κ^γ L_µ^λ L_ν^α (γλα)(L⁻¹(x)). Consequently the field (κµν) + (µνκ) + (νκµ) is transformed into

L_κ^γ L_µ^λ L_ν^α ((γλα) + (λαγ) + (αγλ)).
Exercise E.2.3 cos θ e₁ + sin θ e₂ ± i(−sin θ e₁ + cos θ e₂) = exp(∓iθ)(e₁ ± ie₂).
Exercise G.1.2 Just use that cosh s + sinh s + 1 = 1 + exp s = 2 exp(s/2) cosh(s/2)
and 1 + cosh s = 2 cosh(s/2)2 .
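Both half-angle identities can be confirmed with sympy by rewriting everything in terms of exponentials (a verification of ours, not of the book):

```python
# Check cosh(s) + sinh(s) + 1 = 2*exp(s/2)*cosh(s/2)
# and   1 + cosh(s) = 2*cosh(s/2)**2.
import sympy as sp

s = sp.symbols('s', real=True)
expr1 = sp.cosh(s) + sp.sinh(s) + 1 - 2*sp.exp(s/2)*sp.cosh(s/2)
expr2 = 1 + sp.cosh(s) - 2*sp.cosh(s/2)**2

assert sp.simplify(expr1.rewrite(sp.exp)) == 0
assert sp.simplify(expr2.rewrite(sp.exp)) == 0
```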
Exercise G.1.4 Just compute D_p D_{Pp} using (G.4) and the formulas M(p) + M(Pp) = 2p⁰I, M(p)M(Pp) = m²c²I.
Exercise L.2.2 Use that

|f(x)h(x)| ≤ (1/(1 + |x|)²)(1 + |x|)^{k+2}|f(x)| ≤ ∥f∥_{k+2}/(1 + |x|)²

and integrate in x.
Exercise L.2.5 Let 𝓕 be the class of closed sets which support a given tempered distribution Φ, and F the intersection of this family. The class 𝓕 is closed under finite intersections by Lemma L.2.4. If a function ξ ∈ 𝒮ⁿ has a compact support which does not intersect F, then its support does not intersect one of the elements of 𝓕, so Φ(ξ) = 0. Thus F supports Φ.
Exercise L.2.7 Since δ′(ξ) = −δ(ξ′) = −ξ′(0), this quantity need not be zero even if ξ is 0 on the support {0} of δ′, i.e. if ξ(0) = 0.
Exercise L.2.9 One simply shows that a non-zero distribution cannot be zero on
each test function with compact support, an obvious consequence of Lemma L.2.8.
Exercise M.4.7 Since ξ̌♯(x) = ∫ exp(i(p, x)) ξ(p) d⁴p, we have formally

∫ ξ(p)Ŵ(p) d⁴p = Ŵ(ξ) = W(ξ̌♯) = ∫ ξ(p) (∫ exp(i(p, x)) W(x) d⁴x) d⁴p.
R∞
Exercise M.4.14 We write 1 = R 0 exp((1 − m)x)dρ(m). Given m0 < 1 the right-
m
handR side is ≥ exp((1 − m0 )x) 0 0 dρ(m) and letting x → ∞ this shows that
m0
0 = 0 dρ(m) so that ρ isR constant on the interval [0, m0 ] and then on the interval
∞
[0, 1[. Thus we have 1 = 1 exp((1 − m)x)dρ(m). Letting x → ∞ shows that ρ
has a jump of 1 atR m = 1 (i.e. that dρ gives mass 1 to the point 1). We then get
the identity 0 = ]1,∞[ exp((1 − m)x)dρ(m), and this obviously implies that ρ is
constant on the interval ]1, ∞[.
Exercise N.2.2 (a) Using the fact that the Fourier transform turns ∂/∂x₁ into multiplication by ip₁, one obtains that the Fourier transform of ∆f is −p²f̂(p), so that if f satisfies the Laplace equation it holds that p²f̂(p) = −1. The rest is straightforward.
Exercise O.1 The first part of (O.3) shows that to get a non-zero contribution
each term a(p) has to be paired with a term ϕ†a (x), but in S1 there is only one such
term which cannot be paired to both a(p1 ) and a(p2 ).
Exercise O.2 We now have two copies of ϕa , and to get a non-zero result, each of
them has to be paired with either an incoming a-particle or an outgoing ā particle.
Exercise O.3 The value of the first diagram is

(−i)³g²(2π)⁴ δ⁽⁴⁾(p₄ + p₃ − p₁ − p₂) / (m_c² − (p₁ − p₃)²),

and for the second diagram one has to replace p₁ − p₃ by p₁ − p₄.
Exercise O.4 Bis repetita placent (repetition pleases): just repeat the previous argument.