
QUANTUM FIELD THEORY

for mathematicians

E. Horozov

Faculty of Mathematics and Informatics,


Sofia University ”St. Kliment Okhridski”,

and

Institute of Mathematics and Informatics,


Bulg. Acad. of Sci., Acad. G. Bonchev Str., Block 8,
1113 Sofia, Bulgaria


E-mail: horozov@fmi.uni-sofia.bg and horozov@math.bas.bg

Contents
0 Introduction
  0.1 Phenomenology
  0.2 Mathematical view on QFT

1 QFT in dimension 0 – first Feynman graphs
  1.1 Free theory
  1.2 Steepest descent and the stationary phase methods
  1.3 Definitions of Feynman graphs
  1.4 Feynman's theorem
    1.4.1 Sums over connected graphs
  1.5 Computations with Feynman's graphs
    1.5.1 Loop expansions
    1.5.2 1-particle irreducible diagrams
    1.5.3 Legendre Transform

2 Quantum mechanics
  2.1 Preliminaries
  2.2 Heisenberg picture
  2.3 The Harmonic oscillator

3 Path integral formulation of quantum mechanics
  3.1 Definitions
  3.2 Computations
    3.2.1 The partition function
  3.3 Motivation
    3.3.1 "Derivation" of Feynman's formula
    3.3.2 Feynman-Kac formula
  3.4 Example - the Harmonic Oscillator
  3.5 Example - φ3 Theory
  3.6 Momentum space formulation
  3.7 Wick's Rotation in Momentum space

4 Symmetries
  4.1 The Group SO(2)
  4.2 The Groups SO(3) and SU(2)
  4.3 The Lorentz and Poincaré groups
  4.4 Clifford algebras

5 Classical fields
  5.1 Multidimensional Variational Problems
  5.2 Klein-Gordon Field
  5.3 Electromagnetic field
  5.4 Dirac Field
  5.5 Weyl, Majorana, Yukawa fields

6 Quantum fields
  6.1 Scalar Fields
  6.2 Cross Sections and S-matrix

7 Fermions
  7.1 Linear Superspaces
  7.2 Supermanifolds
  7.3 Calculus on Supermanifolds
  7.4 Fermionic Gaussian Integrals and Correlators
  7.5 Fermionic Quantum Mechanics
  7.6 Path Integrals for Free Fermionic Fields

8 Renormalization
  8.1 Renormalizability of Field Theories
  8.2 Dimensional Regularization

9 Quantum electrodynamics

10 Gauge theories
  10.1 Chern-Simons theory
  10.2 Yang-Mills Theories

11 Appendix
  11.1 Linear and Multilinear Algebra
  11.2 Differential Geometry
  11.3 Classical Mechanics
  11.4 Functional Analysis and Differential Equations
  11.5 Relativistic Notations
  11.6 Miscellaneous Notations
0 Introduction
Since the time of Newton, Physics and Mathematics have mostly lived in symbiosis. Physics has supplied Mathematics with deep problems. For its part, Mathematics has developed a language for formulating physical problems and laws, and tools for solving the problems posed by physicists. Only for a part of the 20th century was there a fashion of "pure mathematics", meaning the neglect of any motivation for mathematical research other than intrinsic mathematical problems. For some time this may have been useful for the advance of abstract algebraic geometry, number theory, etc. But after some genuinely fruitful years for Mathematics there came a new marriage – Mathematics and Physics once again became very close. Now there is a new ingredient in their relationship – Physics supplies not only ideas but also intuition and tools for posing and solving mathematical problems. Let us mention the problem of computing the intersection numbers of Chern classes on moduli spaces of Riemann surfaces [17, 28], knot invariants, mirror symmetry, etc.

0.1 Phenomenology

Thirty-one years ago Dick Feynman told me about his "sum over histories" version of quantum mechanics. "The electron does anything it likes," he said. "It goes in any direction with any speed, forward or backward in time, however it likes, and then you add up the amplitudes and it gives you the wave function." I said to him, "You are crazy." But he wasn't.

– F. J. Dyson

The title of this subsection is a little bit misleading. Here we present only one simple (but very important) experiment whose goal is to justify the introduction of path integrals in physics. It is taken from R. Feynman's numerous popular lectures (see e.g. [11]).

0.2 Mathematical view on QFT


Before presenting a more complete account of the path integral method we would like to explain in a few words some of the ideas on which it rests. The Feynman path (or, more generally, functional) integrals are integrals depending on parameters, where the "integration" is carried out over infinite-dimensional spaces. First we are going to study integrals of Feynman type on finite-dimensional spaces. They should not be considered only as toy models for the real QFTs; they are a main ingredient in their study. We are going to introduce the famous Feynman graphs, which help to express their asymptotic expansions in a simple way. With their help we are going to define the Feynman integrals in the cases relevant for QFT. The 0-dimensional QFT, being a powerful mathematical tool, has a lot of beautiful applications to areas far from QFT, e.g. the topology of moduli spaces of Riemann surfaces.
From a mathematical point of view QFT studies "integrals" that are defined as follows. Let Σ and N be manifolds with Riemannian or pseudo-Riemannian metric. We shall denote by Map(Σ, N) the set of all smooth (= infinitely differentiable) maps from Σ to N. Let us also have an action function (or rather functional) S(φ) on φ ∈ Map(Σ, N). Let ℏ be a small constant (Planck's constant). We will be interested in the following object (including giving sense to it):

\int_{\mathrm{Map}(\Sigma,N)} V(\phi)\, \exp\Big( \frac{-S(\phi)}{\hbar} \Big)\, D[\phi]  (0.1)

Here V(φ) is "an insertion function" in the physicist's language. This is a smooth function on Map(Σ, N), whose meaning will be explained later. The function exp(−S(φ)/ℏ) has the meaning of the probability amplitude of the contribution of the map φ ∈ Map(Σ, N) to the integral.
The set of objects Σ, N, S(φ), φ ∈ Map(Σ, N) is called by physicists a "theory". In the case when V ≡ 1 the above integral (0.1) is called the partition function of the theory and is denoted by

Z^E = \int_{\mathrm{Map}(\Sigma,N)} \exp\Big( \frac{-S(\phi)}{\hbar} \Big)\, D[\phi]  (0.2)

The superscript "E" means that the theory is "Euclidean", i.e. the manifold Σ is Riemannian. When Σ is a pseudo-Riemannian manifold with Lorentzian metric (of signature (−,+,+,+)) we call the theory a relativistic QFT. The first coordinate is reserved for the time. In that case we replace the sign (−) with the imaginary unit i:

Z^M = \int_{\mathrm{Map}(\Sigma,N)} \exp\Big( \frac{iS(\phi)}{\hbar} \Big)\, D[\phi]  (0.3)
In this case the theory is a Minkowskian QFT, the letter M designating this fact. We are going to start with the 0-dimensional theory. Of course here there are no time or spatial coordinates. Let us start with the case when Σ is one point and N is the real line. Now the set Map(Σ, N) consists of all real constants, i.e. it is the real line. The partition function becomes

\int_{-\infty}^{\infty} e^{-S(x)/\hbar}\, dx  (0.4)

This integral is studied by the method of steepest descent, which will be explained in one of the next sections.
The physicists are particularly interested in the integral (0.3) when the manifold Σ has dimension ≥ 3. As a rule its value cannot be computed explicitly. So they (Feynman) have invented an algorithm to "find" its asymptotic expansion in ℏ. From a mathematical point of view even the definition needs clarification. One reasonable definition is to define the integral (0.3) just by a series (asymptotic, not convergent!), whose members are indexed by different types of graphs. (This is the reason why we put "find" in quotation marks.)
But this is not the end. The problem is that the members of the series are defined via integrals that are themselves not well defined. Then there comes a painful procedure to give meaning to each of these integrals (renormalization). The methods of renormalization are found by physical intuition, and the resulting predictions are confirmed by experiments.

1 QFT in dimension 0 – first Feynman graphs


In this chapter we are going to consider Feynman integrals for 0-dimensional manifolds Σ, i.e. when Σ consists of a finite number (say d) of points. This is the reason for the title of the chapter. In this case the set of all mappings from Σ to R is just the vector space V = R^d and the integrals become

\int_V e^{-S(x)/\hbar}\, dx.  (1.1)

1.1 Free theory


In quantum field theory the case when S contains only quadratic terms is called a free theory. We are going to consider the theory as a perturbation of the free theory. Then the function S is given in the form

S = B(x, x) + \sum_{m \ge 3} g_m B_m(x, x, \dots, x),

where the B_m(x, x, …, x) are the interactions and the g_m are formal parameters.
The simplest example of an integral of the form (0.4) is the Gaussian integral:

\int_{-\infty}^{\infty} e^{-\frac{1}{2} a x^2}\, dx = \sqrt{\frac{2\pi}{a}}.
Its multidimensional generalization is defined via a symmetric positive-definite quadratic form B(x, x) on a d-dimensional space V, given in terms of a positive-definite symmetric matrix B by B(x, y) = (Bx, y). Then the integral we want to study is

\int_V e^{-\frac{1}{2}(Bx,x)}\, dx.  (1.2)

By a change of variables x → Sx, where S ∈ SO(d), we can diagonalize the matrix B. Then the integral (1.2) is easily computed:

\int_V e^{-\frac{1}{2}(Bx,x)}\, dx = \sqrt{\frac{(2\pi)^d}{\det B}}.  (1.3)
Sometimes it is useful to consider slightly more general integrals, namely with a linear term in the exponent:

Z_b = \int_V e^{-\frac{1}{2}(Bx,x) + (b,x)}\, dx.  (1.4)

It is easy to check that

Z_b = (2\pi)^{d/2} (\det B)^{-1/2}\, e^{\frac{1}{2}(b, B^{-1}b)} = Z_0\, e^{\frac{1}{2}(b, B^{-1}b)}  (1.5)
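Formula (1.5) ("completing the square") is easy to check numerically. A minimal sketch in d = 2, with an arbitrarily chosen positive-definite matrix B and vector b (illustrative values, not from the text):

```python
import numpy as np

# Arbitrary positive-definite symmetric B and a vector b (illustrative choices).
B = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([0.3, -0.7])

# Brute-force Z_b = ∫ exp(-(Bx,x)/2 + (b,x)) dx on a grid in d = 2.
t = np.linspace(-10, 10, 801)
dx = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
quad = B[0, 0] * X**2 + 2 * B[0, 1] * X * Y + B[1, 1] * Y**2   # (Bx, x)
lin = b[0] * X + b[1] * Y                                      # (b, x)
Z_b = np.exp(-0.5 * quad + lin).sum() * dx * dx

# Closed form (1.5): Z_b = (2π)^{d/2} (det B)^{-1/2} · exp((b, B⁻¹b)/2).
Z_0 = (2 * np.pi) / np.sqrt(np.linalg.det(B))   # d = 2
closed = Z_0 * np.exp(0.5 * b @ np.linalg.solve(B, b))
print(Z_b, closed)  # the two values agree to high accuracy
```

Setting b = 0 checks (1.3) in the same way.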
We will also be interested in integrals with insertions:

\int_V P(x)\, e^{-\frac{1}{2}(Bx,x)}\, dx.  (1.6)

To compute the above integral it is enough to consider the case when the polynomial P(x) is homogeneous, and moreover a product of linear forms l_1(x) l_2(x) \dots l_m(x). Such an integral is a major object in quantum physics (and not only there!) and is called a correlation function or correlator. It is denoted as follows:

\langle l_1, l_2, \dots, l_m \rangle = \frac{1}{Z_0} \int_V l_1(x) l_2(x) \dots l_m(x)\, e^{-\frac{1}{2}(Bx,x)}\, dx.  (1.7)
The correlators can be computed using the integral Z_b as follows. First notice that

\frac{\partial Z_b}{\partial b_j} = \int_V e^{-\frac{1}{2}(Bx,x) + (b,x)}\, x_j\, dx.

Then for any product of coordinate functions x_{i_1} x_{i_2} \dots x_{i_k}, not necessarily different, we obtain:

\frac{\partial}{\partial b_{i_1}} \dots \frac{\partial}{\partial b_{i_k}} Z_b = \int_V e^{-\frac{1}{2}(Bx,x) + (b,x)}\, x_{i_1} x_{i_2} \dots x_{i_k}\, dx.

From this formula we obtain the correlator

\langle x_{i_1}, x_{i_2}, \dots, x_{i_k} \rangle = \frac{1}{Z_0}\, \frac{\partial}{\partial b_{i_1}} \dots \frac{\partial}{\partial b_{i_k}} Z_b \Big|_{b=0} = \frac{\partial}{\partial b_{i_1}} \dots \frac{\partial}{\partial b_{i_k}} e^{\frac{1}{2}(b, B^{-1}b)} \Big|_{b=0}  (1.8)
In particular the two-point correlation functions are given by the matrix elements of B^{-1}:

\langle x_i, x_j \rangle = (B^{-1})_{ij}.
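The identity ⟨x_i, x_j⟩ = (B⁻¹)_{ij} can be verified by brute-force integration; a sketch in d = 2, where the matrix B is an arbitrary positive-definite choice (not from the text):

```python
import numpy as np

# Check ⟨x_i, x_j⟩ = (B⁻¹)_{ij} in d = 2 by direct quadrature.
B = np.array([[3.0, 1.0], [1.0, 2.0]])

t = np.linspace(-8, 8, 601)
dx = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
w = np.exp(-0.5 * (B[0, 0] * X**2 + 2 * B[0, 1] * X * Y + B[1, 1] * Y**2))

Z0 = w.sum() * dx * dx
corr = np.array([
    [(X * X * w).sum(), (X * Y * w).sum()],
    [(Y * X * w).sum(), (Y * Y * w).sum()],
]) * dx * dx / Z0

print(corr)              # ≈ [[0.4, -0.2], [-0.2, 0.6]]
print(np.linalg.inv(B))  # the matrix B⁻¹ – the two agree
```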


We can use these results for more general functions and even for formal power series
f1 , . . . , fm to obtain

Proposition 1.1 The correlator < f1 , . . . , fm > is given by the formula


∂ ∂ 1 −1 
< f1 , f2 , . . . , fm >= f1 ( ) . . . fm ( ) e 2 (b,B b) |b=0 .
∂b ∂b
Proof. The formula for the monomials is (1.8). The general formula is obtained as linear
combination of the monomial formulas. 2
From this we obtain a simple but very important combinatorial theorem, known to physicists as Wick's lemma. Before formulating it we introduce some combinatorics. Consider the set {1, 2, …, 2m}. A pairing of this set is a partition σ of the set into m disjoint pairs. Let us denote the set of all pairings of the above set by Π_m. It is known that |Π_m| = (2m)!/(2^m m!). Any σ ∈ Π_m can be considered as a permutation of {1, 2, …, 2m} without fixed points and such that σ² = 1. Any pair consists of an element i and its image σ(i). Now we are ready to formulate Wick's lemma.
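Before that, a quick sanity check: the count |Π_m| = (2m)!/(2^m m!) = (2m − 1)!! is easy to confirm by enumerating pairings recursively, as in this small sketch:

```python
import math

def pairings(elems):
    """Enumerate all pairings (perfect matchings) of a list of even length."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for p in pairings(remaining):
            yield [(first, partner)] + p

for m in range(1, 6):
    count = sum(1 for _ in pairings(list(range(2 * m))))
    formula = math.factorial(2 * m) // (2**m * math.factorial(m))
    print(m, count, formula)  # counts agree: 1, 3, 15, 105, 945
```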
Theorem 1.2 (Wick's lemma)

\langle l_1 \dots l_m \rangle =
\begin{cases}
\sum_{\sigma \in \Pi_m} \prod_{i \in \{1,\dots,m\}/\sigma} \langle l_i, l_{\sigma(i)} \rangle & \text{if } m \text{ is even,} \\
0 & \text{if } m \text{ is odd.}
\end{cases}  (1.9)

Proof. As before we shall first prove the theorem for coordinate functions. In this case our formula takes the form

\langle x^{i_1}, \dots, x^{i_m} \rangle =
\begin{cases}
\sum_{\sigma \in \Pi_m} \prod_{i \in \{1,\dots,m\}/\sigma} \langle x_i, x_{\sigma(i)} \rangle & \text{if } m \text{ is even,} \\
0 & \text{if } m \text{ is odd.}
\end{cases}

By (1.8) we need to compute the derivatives

\frac{\partial}{\partial b_{i_1}} \dots \frac{\partial}{\partial b_{i_k}} e^{\frac{1}{2}(b, B^{-1}b)}.

Let us do this computation by induction. We have

\partial_i\, e^{\frac{1}{2}(b, B^{-1}b)} = \sum_j (B^{-1})_{ij} b_j\, e^{\frac{1}{2}(b, B^{-1}b)}.

It is clear that applying the next derivative ∂_k produces, by the Leibniz rule,

(B^{-1})_{ik}\, e^{\frac{1}{2}(b, B^{-1}b)} + Q_{ik}(b)\, e^{\frac{1}{2}(b, B^{-1}b)} = P_{ik}(b)\, e^{\frac{1}{2}(b, B^{-1}b)},

where Q_{ik} is a homogeneous polynomial of degree two and P_{ik} has only even-degree terms, the free term (B^{-1})_{ik} giving the result in this case. In general we proceed in the same way. Denote by P_{i_1 \dots i_s} the corresponding polynomial in b (when it is clear we will drop the indices):

\frac{\partial}{\partial b_{i_1}} \dots \frac{\partial}{\partial b_{i_s}} e^{\frac{1}{2}(b, B^{-1}b)} = P(b)\, e^{\frac{1}{2}(b, B^{-1}b)}.

Each new application of a derivative ∂_j has the following effect on P:

\partial_j \big( P(b)\, e^{\frac{1}{2}(b, B^{-1}b)} \big) = \big( \partial_j P(b) \big)\, e^{\frac{1}{2}(b, B^{-1}b)} + P(b)\, \partial_j\, e^{\frac{1}{2}(b, B^{-1}b)},

i.e. P \mapsto \big( \partial_j + \sum_m (B^{-1})_{jm} b_m \big) P(b). Notice that the polynomial P_{i_1 \dots i_s} is either even or odd depending on the number s. This proves the formula for odd m. At the same time we obtained a formula for P_{i_1 \dots i_s}:

P_{i_1 \dots i_s} = \Big( \partial_{i_1} + \sum_m (B^{-1})_{i_1 m} b_m \Big) \dots \Big( \partial_{i_s} + \sum_m (B^{-1})_{i_s m} b_m \Big) \cdot 1.

From this formula we see that the free term consists of a sum of products of the type (B^{-1})_{l_1 m_1} \dots (B^{-1})_{l_p m_p}, where 2p = s and each l_j m_j is a pair from the set of indices i_1, \dots, i_s. Moreover, each pairing is present exactly once.
The general case can be obtained using linearity as above. 2
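For four linear forms the lemma gives ⟨x_i x_j x_k x_l⟩ = C_{ij} C_{kl} + C_{ik} C_{jl} + C_{il} C_{jk} with C = B⁻¹, one term per pairing of four points. A brute-force numeric check in d = 2 (the matrix B is an arbitrary positive-definite choice, not from the text):

```python
import numpy as np

B = np.array([[3.0, 1.0], [1.0, 2.0]])  # arbitrary positive-definite choice
C = np.linalg.inv(B)

t = np.linspace(-8, 8, 601)
dx = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
w = np.exp(-0.5 * (B[0, 0] * X**2 + 2 * B[0, 1] * X * Y + B[1, 1] * Y**2))
Z0 = w.sum() * dx * dx
coords = [X, Y]

def corr4(i, j, k, l):
    """Four-point correlator ⟨x_i x_j x_k x_l⟩ by direct integration."""
    return (coords[i] * coords[j] * coords[k] * coords[l] * w).sum() * dx * dx / Z0

def wick4(i, j, k, l):
    """The same correlator from Wick's lemma: a sum over the three pairings."""
    return C[i, j] * C[k, l] + C[i, k] * C[j, l] + C[i, l] * C[j, k]

for idx in [(0, 0, 0, 0), (0, 0, 1, 1), (0, 1, 1, 1)]:
    print(corr4(*idx), wick4(*idx))  # each pair of values agrees
```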
Notice that each summand in the formula can be represented by a simple graph. For each σ and each pair (ij) ∈ σ draw an unoriented subgraph with two vertices – i and j – and an edge connecting them. The disjoint union of these subgraphs is the desired graph Γ_σ, corresponding to the pairing σ. Then our sum (1.9) becomes a sum over the graphs Γ_σ.

Figure 1. (The three pairings of the set {1, 2, 3, 4}, drawn as graphs.)

Although this is just a change of notation, we are going to use it widely in computations involving a general action, i.e. when S is the perturbed function

S = \frac{B(x,x)}{2} + \sum_{r \ge 3} \frac{U_r(x, \dots, x)}{r!}

1.2 Steepest descent and the stationary phase methods


The method of steepest descent gives the asymptotics of integrals of the type (0.4).

Theorem 1.3 Let f(x) and g(x) be smooth functions defined on an interval [a, b] ⊂ R. Assume that the function f(x) has a unique global minimum at a point c ∈ [a, b] and f''(c) > 0. Then the integral

\int_a^b g(x)\, e^{-f(x)/\hbar}\, dx

has the following asymptotic expansion:

\int_a^b g(x)\, e^{-f(x)/\hbar}\, dx = \hbar^{1/2}\, e^{-f(c)/\hbar}\, I(\hbar),  (1.10)

where I(ℏ) is a continuous function on (0, ∞), which extends to 0 as

\lim_{\hbar \to 0} I(\hbar) = \sqrt{2\pi}\, \frac{g(c)}{\sqrt{f''(c)}}.  (1.11)

Proof. To simplify notation slightly we may assume that c = 0. We are going to cut the singular point out of the integration region, i.e. we define the integral over a small neighborhood of 0. This we do as follows. Take a small real number ε satisfying 1/2 > ε > 0 and define I_1(ℏ) by the equation

\hbar^{1/2}\, e^{-f(0)/\hbar}\, I_1 = \int_{-\hbar^{1/2-\varepsilon}}^{\hbar^{1/2-\varepsilon}} g(x)\, e^{-f(x)/\hbar}\, dx.

Then it is clear that the difference |I(ℏ) − I_1(ℏ)| decays faster than ℏ^N for any N. So it suffices to show that I_1(ℏ) has the asymptotics (1.10). Let us introduce a new variable y by x = y√ℏ. Then the function I_1 can be written as

I_1 = \int_{-\hbar^{-\varepsilon}}^{\hbar^{-\varepsilon}} g(y\sqrt{\hbar})\, e^{(f(0) - f(y\sqrt{\hbar}))/\hbar}\, dy.

Now it is clear that the integrand is a smooth function in √ℏ. Then we can replace I_1(ℏ) by I_2(ℏ), its Taylor expansion modulo N; then |I_1(ℏ) − I_2(ℏ)| ≤ Cℏ^N. At the end we replace I_2(ℏ) by I_3(ℏ), which is the same integral but with limits from −∞ to ∞. Then the difference I_2(ℏ) − I_3(ℏ) is rapidly decaying.
Hence it is enough to show that I_3(ℏ) has a Taylor expansion in ℏ^{1/2}. In fact I_3(ℏ) is a polynomial in ℏ^{1/2}. Also, the odd powers of ℏ^{1/2} vanish, as the corresponding coefficients are integrals of odd functions. So the Taylor expansion exists. Let us compute the value of I_3(0). We have

I_3(0) = g(0) \int_{-\infty}^{\infty} e^{-\frac{f''(0)\, y^2}{2}}\, dy.

Using the value of the Gaussian integral we get the desired result. 2

Example 1.4 Consider the integral

\int_{-\infty}^{\infty} e^{-\frac{x^2 + x^4}{2\hbar}}\, dx = \sqrt{2\pi}\, \hbar^{1/2}\, I(\hbar).

Then the function I(ℏ) is given by

I(\hbar) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{y^2 + \hbar y^4}{2}}\, dy.

The integral has the asymptotic expansion

I(\hbar) = \sum_{n=0}^{\infty} a_n \hbar^n,

where

a_n = \frac{(-1)^n}{\sqrt{2\pi}\, 2^n\, n!} \int_{-\infty}^{\infty} y^{4n}\, e^{-\frac{y^2}{2}}\, dy.
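With the normalization reconstructed above, a_0 = 1, a_1 = −3/2 and a_2 = 105/8 (using ∫ y^{4n} e^{−y²/2} dy = (4n − 1)!! √(2π)). A short numeric sketch comparing truncations of the series with a direct evaluation of I(ℏ):

```python
import numpy as np

def I(h):
    """I(h) = (2π)^{-1/2} ∫ exp(-(y² + h·y⁴)/2) dy, by direct quadrature."""
    y = np.linspace(-12, 12, 200_001)
    f = np.exp(-0.5 * (y**2 + h * y**4))
    return f.sum() * (y[1] - y[0]) / np.sqrt(2 * np.pi)

h = 0.01
a1 = -3.0 / 2    # from ∫ y⁴ e^{-y²/2} dy = 3·√(2π)
a2 = 105.0 / 8   # from ∫ y⁸ e^{-y²/2} dy = 105·√(2π)
print(I(h), 1 + a1 * h, 1 + a1 * h + a2 * h**2)
# successive truncations of the asymptotic series approach the computed value
```

Being asymptotic, the series eventually diverges for any fixed ℏ; the truncations are good only while ℏ is small compared to the growth of the coefficients.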

The method of stationary phase is slightly more complicated and uses the Fresnel integral

\int_{-\infty}^{\infty} e^{ix^2/2}\, dx = \sqrt{2\pi}\, e^{\pi i/4}

instead of the Gaussian integral.
We are only going to formulate the result.
Theorem 1.5 Assume that f has a unique critical point c ∈ (a, b) with f''(c) ≠ 0, and that g vanishes with all its derivatives at the endpoints a and b. Then

\int_a^b g(x)\, e^{if(x)/\hbar}\, dx = \hbar^{1/2}\, e^{if(c)/\hbar}\, I(\hbar),  (1.12)

where I(ℏ) extends to a smooth function on [0, ∞) such that

I(0) = \sqrt{2\pi}\, e^{\operatorname{sign}(f''(c))\, i\pi/4}\, \frac{g(c)}{\sqrt{|f''(c)|}}.
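Theorem 1.5 can be tested numerically; the sketch below uses f(x) = x²/2 and the amplitude g(x) = e^{−x²} (illustrative choices, not from the text, for which the integral is also known in closed form):

```python
import numpy as np

h = 0.05
x = np.linspace(-10, 10, 400_001)
dx = x[1] - x[0]
g = np.exp(-x**2)   # smooth amplitude, rapidly vanishing at the endpoints
integral = (g * np.exp(1j * x**2 / (2 * h))).sum() * dx

# Stationary phase prediction at c = 0 (f''(0) = 1, sign +, g(0) = 1):
prediction = np.sqrt(2 * np.pi * h) * np.exp(1j * np.pi / 4)

# Exact value for this particular g: ∫ e^{-(1 - i/(2h)) x²} dx = √(π/(1 - i/(2h))).
exact = np.sqrt(np.pi / (1 - 1j / (2 * h)))
print(integral, prediction, exact)
# the prediction matches the integral up to O(h) corrections (a few percent here)
```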

The methods of steepest descent and stationary phase easily extend to the multidimensional case. We introduce the following notation. By V we denote a real vector space of dimension d and by B – a closed d-dimensional box in it. We assume that the functions f(x) and g(x) are defined on B and smooth.

Theorem 1.6 Let the function f have a unique global minimum at a point c ∈ B and let the form D²f(c) be positive-definite. Then

\int_B g(x)\, e^{-f(x)/\hbar}\, dx = \hbar^{d/2}\, e^{-f(c)/\hbar}\, I(\hbar),  (1.13)

where I(ℏ) extends to a smooth function on [0, ∞) such that

I(0) = (2\pi)^{d/2}\, \frac{g(c)}{\sqrt{\det D^2 f(c)}}.

In a similar manner we formulate the stationary phase method.

Theorem 1.7 Let the function f have a unique critical point c ∈ B and let the form D²f(c) be non-degenerate. Then

\int_B g(x)\, e^{if(x)/\hbar}\, dx = \hbar^{d/2}\, e^{if(c)/\hbar}\, I(\hbar),  (1.14)

where I(ℏ) extends to a smooth function on [0, ∞) such that

I(0) = (2\pi)^{d/2}\, e^{\pi i \sigma/4}\, \frac{g(c)}{\sqrt{|\det D^2 f(c)|}}.

Here σ is the signature of the symmetric bilinear form D²f(c).


Notice that the multidimensional Gaussian and Fresnel integrals become, respectively,

\int_V e^{-\frac{1}{2}B(x,x)}\, dx = (2\pi)^{d/2} (\det B)^{-1/2}  (1.15)

for a positive-definite form B, and

\int_V e^{\frac{i}{2}B(x,x)}\, dx = (2\pi)^{d/2}\, e^{\pi i \sigma/4}\, |\det B|^{-1/2}  (1.16)

for a non-degenerate form B, where σ is its signature. We leave the details of the proofs to the reader.

1.3 Definitions of Feynman graphs


We aim to compute the entire asymptotic expansion of integrals of the form

\int_V l_1 \dots l_N\, e^{-S(x)/\hbar}\, dx

in terms of some combinatorics. The result will be useful, as it gives the model used to define Feynman integrals in physically meaningful theories. Here

S = \frac{1}{2}(Bx, x) + \sum_{j \ge 3} \frac{g_j U_j(x)}{j!}.

The functions U_j(x) are homogeneous polynomials of degree j, i.e. symmetric j-tensors.
The integral is a formal power series in ℏ and the g_m, in a form that will be explained below. For simplicity assume that c = 0 and S(0) = 0. The expansion will be done in terms of Feynman diagrams, which are a major object in quantum field theory. We also make the change of variables x/√ℏ → x (and keep the same letter for the variable). The correlator above becomes

\hbar^{(N+d)/2} \int_V l_1 \dots l_N\, e^{-\frac{1}{2}(Bx,x) - \sum_j \hbar^{j/2-1} \frac{g_j U_j(x)}{j!}}\, dx.

In what follows we are going to drop the factor ℏ^{(N+d)/2}. We expand the exponential function above as follows:

e^{-\frac{1}{2}(Bx,x) - \sum_j \hbar^{j/2-1} \frac{g_j U_j(x)}{j!}} = e^{-\frac{1}{2}(Bx,x)} \Big( 1 + \frac{1}{1!} \Big( -\sum_j \hbar^{j/2-1} \frac{g_j U_j(x)}{j!} \Big) + \frac{1}{2!} \Big( -\sum_j \hbar^{j/2-1} \frac{g_j U_j(x)}{j!} \Big)^2 + \dots \Big).

The correlator becomes

\int_V l_1 \dots l_N\, e^{-\frac{1}{2}(Bx,x)} \Big( 1 + \dots \Big)\, dx = \sum_{n=0}^{\infty} \frac{1}{n!} \int_V l_1 \dots l_N\, e^{-\frac{1}{2}(Bx,x)} \Big( -\sum_{j=3}^{\infty} \hbar^{j/2-1} \frac{g_j U_j(x)}{j!} \Big)^n dx.
We also expand the n-th power of the infinite sum \sum_{j \ge 3} \hbar^{j/2-1}\, g_j U_j(x)/j!. This is the formal series we are interested in.
In the case when there are no functionals l_j the corresponding function is called the partition function. Explicitly it is

Z_U = \int e^{-\frac{1}{2}(Bx,x) + \hbar^{1/2} U(x)}\, dx.

We have the obvious

Proposition 1.8

Z_U = Z_0\, e^{\hbar^{1/2} U(\frac{\partial}{\partial b})}\, e^{\frac{1}{2}(b, B^{-1}b)} \Big|_{b=0}.

Next we define the correlation function of f_1, …, f_m with respect to the above perturbed action:

\langle f_1, \dots, f_m \rangle_U = \frac{1}{Z_U} \int f_1 \dots f_m\, e^{-\frac{1}{2}(Bx,x) + \hbar^{1/2} U(x)}\, dx.

And again we have

Proposition 1.9

\langle f_1, \dots, f_m \rangle_U = \frac{Z_0}{Z_U}\, e^{\hbar^{1/2} U(\frac{\partial}{\partial b})}\, f_1\Big(\frac{\partial}{\partial b}\Big) \dots f_m\Big(\frac{\partial}{\partial b}\Big)\, e^{\frac{1}{2}(b, B^{-1}b)} \Big|_{b=0}.

We want to express the correlator in terms of Feynman graphs, which we define below.
We denote by G_{≥3}(N) the set of isomorphism classes of graphs with N 1-valent external vertices, labeled by 1, …, N, and a finite number of unlabeled internal vertices of valency ≥ 3.
For each graph Γ we define the Feynman amplitude of Γ by the following rules:

1. Put the covector l_j at the j-th external vertex.

2. Put the tensor −g_m U_m at each m-valent internal vertex.

3. Take the contraction of the tensors along the edges of Γ, using the bilinear form B^{−1}. The result is a number denoted by F_Γ. This is the Feynman amplitude.

1.4 Feynman’s theorem


Theorem 1.10 (Feynman) The correlation function ⟨l_1 … l_N⟩ is given by the asymptotic series:

\langle l_1 \dots l_N \rangle = Z_0 \sum_{\Gamma \in G_{\ge 3}(N)} \frac{\hbar^{b(\Gamma)}}{|\mathrm{Aut}(\Gamma)|}\, F_\Gamma(l_1, \dots, l_N)  (1.17)

We will give another version of this theorem, which is easier to prove. Before that let us introduce some notation.
Let n = (n_0, n_1, …) be a sequence of nonnegative integers, only a finite number of which are nonzero. Let G(n) be the set of isomorphism classes of graphs with n_0 0-valent vertices, n_1 1-valent vertices, etc.
The version of Feynman's theorem that we have in mind goes as follows.

Theorem 1.11 The partition function has the following asymptotic expansion:

Z = Z_0 \sqrt{\det B}\, \sum_n \prod_i g_i^{n_i} \sum_{\Gamma \in G(n)} \frac{\hbar^{b(\Gamma)}}{|\mathrm{Aut}(\Gamma)|}\, F_\Gamma

Proof. First expand the exponential function in a Taylor series. The partition function becomes

Z = \sum_n Z_n,

where

Z_n = \int_V e^{-\frac{1}{2}B(x,x)} \prod_i \frac{g_i^{n_i}}{(i!)^{n_i}\, n_i!} \big( -\hbar^{i/2-1}\, U_i(x, \dots, x) \big)^{n_i}\, dx  (1.18)
i

We can write the terms Ui as sums of products of linear functions. Then we can apply
Wick’s lemma. It gives that each Zn can be computed as follows.

1. Define a flower – a graph with one vertex and i outgoing edges (see fig. 1). Attach
it to the tensorUi .

Figure 2.

2. Consider the set T of these outgoing edges (see fig.) and for any pairing of this
set, consider the corresponding contraction of the tensor −Ui using the form B −1 .
This will give the a number Fσ corresponding to this pairing.

We can visualize a pairing σ by drawing its elements as points and connecting the
points in each pair them by an edge (see fig .). In this way we obtain an unoriented
graph Γ = Γσ . The number Fσ is called an amplitude of the graph Γ.

Figure 3.

Figure 4.

Figure 5.

It is easy to see that each graph with n_i i-valent vertices can be obtained in this way. But it can be obtained many times, and we need to count this number. This means that we need to count how many σ's produce a fixed graph Γ. For this we need to find the group G of permutations which preserve the "flowers". It consists of the following elements:
(1) permutations which permute flowers of a fixed valency;
(2) permutations which permute the edges of a fixed flower.

We see that the group G is a semi-direct product (\prod_i S_{n_i}) \ltimes (\prod_i S_i^{n_i}), where S_j is the permutation group of j elements. Its cardinality is |G| = \prod_i (i!)^{n_i}\, n_i!. This is exactly the product of the integers in the denominators in (1.18). The group G acts on the set of all pairings of T. The action is transitive on the set P_Γ of the pairings which produce a fixed graph Γ. On the other hand, the stabilizer of a fixed pairing is Aut(Γ). Thus the number of pairings producing Γ is

\frac{\prod_i (i!)^{n_i}\, n_i!}{|\mathrm{Aut}(\Gamma)|}.

In this way we obtain a formula connecting the sum of the numbers F_σ and the sum of the amplitudes with weights:

\sum_\sigma F_\sigma = \sum_\Gamma \frac{\prod_i (i!)^{n_i}\, n_i!}{|\mathrm{Aut}(\Gamma)|}\, F_\Gamma.

At the end we compute the powers of ℏ in front of the amplitudes. Each i-valent vertex contributes ℏ^{i/2−1}, so the total power of ℏ is \sum_i n_i (i/2 - 1) = E - V, the number of edges of Γ minus the number of vertices (since each edge is counted twice in \sum_i i\, n_i), i.e. b(Γ). This proves the theorem. 2
Now we are going to extract Feynman's theorem.
Proof of Theorem 1.10. As in Wick's lemma we can use the symmetry of the correlation function with respect to the l_j. So it is enough to consider the case l_1 = l_2 = … = l_N = l. The corresponding correlation function is denoted by ⟨l^N⟩ and is also called the expectation value of l^N. Let us compute the expectation value ⟨e^{tl}⟩. Obviously this is the generating function of the expectation values: \langle e^{tl} \rangle = \sum_N \frac{t^N}{N!} \langle l^N \rangle. If we put in Theorem 1.11 g_i = 1 for i ≥ 3, g_0 = g_2 = 0, g_1 = −ℏt and B_1 = l, B_0 = B_2 = 0, we get the result. 2

1.4.1 Sums over connected graphs


Here we are going to show that the computation of the correlator reduces to a sum over connected graphs only. This is very useful in the study of Feynman integrals in real physics. We denote the set of connected graphs in G(n) by G_c(n).

Theorem 1.12 The logarithm of the partition function ln(Z_U) has the following asymptotic expansion:

\ln(Z_U) = \sum_n \prod_i g_i^{n_i} \sum_{\Gamma \in G_c(n)} \frac{\hbar^{b(\Gamma)}}{|\mathrm{Aut}(\Gamma)|}\, F_\Gamma  (1.19)

Proof. Denote by Γ_1 Γ_2 the disjoint union of two graphs Γ_1 and Γ_2. Following this notation we use Γ^n for the disjoint union of n copies of Γ. Thus any graph can be written as Γ_1^{k_1} … Γ_l^{k_l} with some connected graphs Γ_j. Then we have F_{Γ_1 Γ_2} = F_{Γ_1} F_{Γ_2}, b(Γ_1 Γ_2) = b(Γ_1) + b(Γ_2) and

|\mathrm{Aut}(\Gamma_1^{k_1} \Gamma_2^{k_2})| = |\mathrm{Aut}(\Gamma_1)|^{k_1}\, k_1!\; |\mathrm{Aut}(\Gamma_2)|^{k_2}\, k_2!.

After exponentiating (1.19) and expanding the r.h.s. in a Taylor series we find the expression for the partition function given by Theorem 1.11. 2

1.5 Computations with Feynman’s graphs
1.5.1 Loop expansions
Note that the number b(Γ) in Theorem 1.12 is the number of loops of Γ minus 1 (for a connected graph, E − V equals the number of independent loops minus 1). For this reason this expansion is referred to as the "loop expansion".
Denote by G^{(j)}(n) the set of graphs from G_c(n) with j loops. Also denote the j-loop term of ln(Z) by

\big( \ln(Z) \big)_j = \sum_n \prod_i g_i^{n_i} \sum_{\Gamma \in G^{(j)}(n)} \frac{\hbar^{b(\Gamma)}}{|\mathrm{Aut}(\Gamma)|}\, F_\Gamma.

We are especially interested in the 0-th and first terms, i.e. in the tree expansion and the one-loop expansion.
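The relation b(Γ) = E − V = ℓ(Γ) − 1 for a connected graph Γ with ℓ(Γ) independent loops can be checked mechanically: ℓ(Γ) is the dimension of the cycle space, i.e. E minus the rank of the incidence matrix over GF(2). A small sketch (the example graphs are illustrative choices):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def loop_number(n_vertices, edges):
    """Number of independent loops = E - rank of the incidence matrix over GF(2)."""
    inc = np.zeros((len(edges), n_vertices), dtype=int)
    for k, (u, v) in enumerate(edges):
        inc[k, u] ^= 1
        inc[k, v] ^= 1   # a self-loop gives a zero row, i.e. a loop by itself
    return len(edges) - gf2_rank(inc)

theta = [(0, 1), (0, 1), (0, 1)]      # 2 vertices joined by 3 parallel edges
triangle = [(0, 1), (1, 2), (2, 0)]
print(loop_number(2, theta), len(theta) - 2 + 1)        # 2 2
print(loop_number(3, triangle), len(triangle) - 3 + 1)  # 1 1
```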

Theorem 1.13 (i) The tree expansion of ln(Z) is given by minus the value of the action S at the critical point:

\big( \ln(Z) \big)_0 = -S(x_0).  (1.20)

(ii) The value of (ln(Z))_1 is:

\big( \ln(Z) \big)_1 = \frac{1}{2} \ln \frac{\det B}{\det D^2 S(x_0)}.  (1.21)

Proof. It is enough to study the case when the perturbation is a polynomial, U = \sum_{j=3}^{m} g_j U_j / j!. Also assume that the numbers g_j are small enough and that the integration takes place over a small box B around x_0. Then the function S has a global minimum at x_0 and we can apply the method of steepest descent. It gives

Z = \hbar^{d/2}\, e^{-S(x_0)/\hbar}\, I(\hbar),

where

I(\hbar) = (2\pi)^{d/2} \sqrt{\frac{1}{\det D^2 S(x_0)}}\, (1 + a_1 \hbar + \dots) \quad \text{(asymptotically)}.

Using the value Z_0 = (2\pi)^{d/2}\, \hbar^{d/2}\, (\det B)^{-1/2} we find:

\frac{Z}{Z_0} = e^{-S(x_0)/\hbar} \sqrt{\frac{\det B}{\det D^2 S(x_0)}}\, (1 + O(\hbar)).

Taking the logarithm yields

\ln(Z/Z_0) = -S(x_0)/\hbar + \frac{1}{2} \ln \Big( \frac{\det B}{\det D^2 S(x_0)} \Big) + O(\hbar),

which are exactly the desired equalities.
1.5.2 1-particle irreducible diagrams
A powerful method widely used by physicists to compute the partition function is to find a new action S_eff, called the effective action, such that

\big( \ln(Z_{S_{\mathrm{eff}}}) \big)_0 = \ln(Z_S).

Then, using the simple formula (1.20) for (\ln(Z_{S_{\mathrm{eff}}}))_0, we can find the partition function for the initial action. Before that we need some definitions.

Definition 1.14 An edge of a connected graph is called a bridge if removing it makes the graph disconnected. A connected graph without bridges is called 1-particle irreducible (1PI).

Figure 6.

The graph on Fig. 4 obviously isn't 1-particle irreducible. The graph on Fig. 6 is an example of a 1PI graph. Note that 1PI graphs are what are known in mathematics as "2-connected" graphs.
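Bridges (and hence the 1PI property) are easy to test by brute force: an edge is a bridge iff deleting it disconnects the graph. A sketch, with illustrative example graphs:

```python
def connected(n, edges):
    """Check connectivity of an undirected graph on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def is_1pi(n, edges):
    """A connected graph is 1PI iff it has no bridge."""
    return connected(n, edges) and all(
        connected(n, edges[:k] + edges[k + 1:]) for k in range(len(edges))
    )

triangle = [(0, 1), (1, 2), (2, 0)]   # no bridges -> 1PI
dumbbell = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
# the edge (2, 3) is a bridge between two triangles -> not 1PI
print(is_1pi(3, triangle), is_1pi(6, dumbbell))  # True False
```

This brute-force test is quadratic in the number of edges; for large graphs one would use a linear-time bridge-finding algorithm instead.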
We are now ready to describe the rules for computing the effective action.
We will consider graphs with at least one internal and one external vertex. Such a graph is called 1PI if the graph obtained by removing the external vertices is 1PI. Denote by G_{1PI}(n, N) the set of isomorphism classes of 1PI graphs with N external vertices and n_i i-valent internal vertices. Here the isomorphisms are taken to keep the external vertices fixed.

Theorem 1.15 The effective action is given by the formula

S_{\mathrm{eff}} = \frac{(Bx, x)}{2} - \sum_{i \ge 0} \frac{U_i}{i!},

where

U_i(x, x, \dots, x) = \sum_n \Big( \prod_i g_i^{n_i} \Big) \sum_{\Gamma \in G_{1PI}(n, N)} \frac{\hbar^{b(\Gamma)+1}}{|\mathrm{Aut}\,\Gamma|}\, F_\Gamma(x^*, \dots, x^*),

and the functional x^* ∈ V^* is defined by x^*(y) := B(x, y).

Before giving the proof let us make a few comments. Write S_eff as a power series:

S_{\mathrm{eff}} = S + \hbar S_1 + \hbar^2 S_2 + \dots

The expression ℏ^j S_j is called the j-loop correction to the effective action. The theorem formulated above shows that we can work only with 1PI diagrams. Physicists rarely use other diagrams; see e.g. the cited textbooks. Notice that the 1PI diagrams are considerably fewer than all diagrams.
Proof of Theorem 1.15. We will make use of the following theorem from graph theory (see e.g. [3]).

Proposition 1.16 Any connected graph Γ can be uniquely represented as a tree (called its skeleton), whose vertices are 1PI subgraphs (with their external edges) and whose edges are the bridges of Γ.

Proof of the proposition. 2

Figure 7. The skeleton of a graph.

1.5.3 Legendre Transform


In this section we are going to express the effective action in terms of the Legendre transform of the logarithm of the partition function.
Consider an action S and perturb it with a linear term:

S(b, x) = S(x) - (b, x).

Consider the corresponding partition function

Z_U(b) = \frac{\int_V e^{(-S(x) + (b,x))/\hbar}\, dx}{\int_V e^{-(Bx,x)/2}\, dx}.

Using Theorem 1.13 we have

\ln(Z_U(b)) = -S_{\mathrm{eff}}(0, b).

Let us find the perturbed effective action S_eff(x, b). Theorem 1.15 tells us that S_eff(x, b) is given by the expansion in 1PI graphs. One of these graphs has a single edge connecting two external vertices, labeled by the tensor (b, x).

2 Quantum mechanics
2.1 Preliminaries
There is a dictionary that translates the objects from classical mechanics into the cor-
responding objects from quantum mechanics. Naturally we start with the phase space
M. Its analog in quantum mechanics is a Hilbert space H. This Hilbert space here will
be the space L²(M) of square-integrable functions on the configuration space. The
observables, i.e. functions of positions and momenta, become self-adjoint operators in
this Hilbert space. The eigenvalues and the eigenvectors are interpreted as follows. An
eigenvalue a of a self-adjoint operator A is a possible outcome of a measurement of the
observable A; it is obtained with certainty at the eigenstate |a> (= normalized eigenvector).
In particular, the position q_j translates into the operator q̂_j of multiplication by q_j
and the momentum p_j translates into the differentiation operator −iħ∂_j. Then we see
that the Hamiltonian translates into the Schrödinger operator:
$$ \hat H = \frac{-\hbar^2}{2m}\sum_j \partial_{q_j}^2 + V(q) \qquad (2.1) $$
The function V(q) is again called the potential and obviously $\hat H = -\frac{\hbar^2}{2m}\Delta + V(q)$. The
constant ħ is called Planck's constant. The (naïve) rule to write the Schrödinger operator
is obvious: we put −iħ∂_{x_j} instead of p_j. Much more important is the analog and the
interpretation of the Hamiltonian equations. They read

$$ i\hbar\frac{\partial\psi}{\partial t} = \hat H\psi \qquad (2.2) $$
This is the famous Schrödinger equation. It describes a particle (or more particles)
under the action of a potential V . The unknown function ψ(x, t) is called wave function.
Its physical interpretation is that |ψ(x, t)|2 is a probability density, i.e. the probability
to find a particle, described by the equation (2.2) in an infinitesimally small volume
d³x at the point x and the time t, is |ψ(x, t)|²d³x. The standard method for solving the
Schrödinger equation is separation of variables. We seek a solution of
the form ψ(x, t) = ψ(x)e^{−iEt/ħ}, where the constant E means energy. Then the spatial
part ψ(x) of the wave function satisfies the time-independent Schrödinger equation

Ĥψ = Eψ (2.3)
The main problem of quantum mechanics is to solve the eigenvalue problem (2.3).

Unfortunately most of the operators needed in quantum mechanics have no eigenfunc-
tions in H. E.g., the operator −i∂/∂q, acting in L²(R), which is basic for quantum physics,
has no eigenfunction in that space. On the other hand naïvely one can say that any
function of the form e^{ipq} is an eigenfunction with an eigenvalue p in some bigger space.
The operator q is even worse; it has no eigenfunction in the class of functions but only
in the class of distributions. One standard way out of this situation (but not the
only one) is to consider the chain S ⊂ H ⊂ S*, where S is the Schwartz space of C^∞-functions
that, together with all their derivatives, decay faster than any polynomial, and S* is the
space of its continuous linear functionals (tempered distributions).
Example 2.1 (Fourier transform.) Consider the operator −i d/dq. Its eigenfunctions are
e^{ipq} with any fixed p. The linear functional
$$ f(q) \to \hat f(p) = \int f(q)e^{-ipq}\,dq, \qquad (2.4) $$
where f is a test function, belongs to S*. The inverse Fourier transform
$$ f(q) = \frac{1}{2\pi}\int \hat f(p)e^{ipq}\,dp $$
gives the expansion of f in the eigenfunctions of the operator −i d/dq. Of course, they do
not belong to the Hilbert space.

Each physical state is represented by a vector, i.e. an L²-function. We are going to use
Dirac's "ket" and "bra" notation. By the "ket" |ψ> we (following Dirac) are going to
denote the states (vectors in H). Here ψ could be, e.g., an eigenvalue, a vacuum state
or any letter denoting some physical object. In a similar way we denote by the "bra" < φ|
the dual vector. The scalar product (φ, Aψ) will be denoted by < φ|A|ψ > and called
a matrix element of A. The name comes from the situation when |φ > and |ψ > are both
members of an orthogonal basis of H. In that case < φ|A|ψ > is really an element of the
matrix of A in that basis.
Let {|a>} be a complete orthonormal set of eigenvectors of a self-adjoint operator A in
H. One can expand any vector ψ as
$$ |\psi\rangle = \sum_a |a\rangle\langle a|\psi\rangle, $$
i.e. in a Fourier series. This equality will be used quite frequently and referred to as
insertion of a complete set of states. In a general form it reads:
$$ \sum_a |a\rangle\langle a| = 1, $$
where by 1 we denote the identity operator in H. Here is an example.

Example 2.2 Let |ψ > be a state. We want to find the average value of the measure-
ments of the observable A at the state |ψ >. We have
$$ \sum_a a|\langle a|\psi\rangle|^2 = \sum_a a\langle\psi|a\rangle\langle a|\psi\rangle = \sum_a \langle\psi|A|a\rangle\langle a|\psi\rangle = \langle\psi|A|\psi\rangle. $$
The most important observables are the coordinates q_j and the momenta p_j. Using
their definition
$$ \hat q_j f(q) := q_j f(q), \qquad \hat p_j f(q) := -i\hbar\frac{df}{dq_j}, $$
we find that they satisfy the following identities
$$ [q_i, q_j] = 0, \quad [p_i, p_j] = 0, \quad [q_i, p_j] = i\hbar\delta_{ij}. $$
Another important observable is the energy given by the Hamiltonian Ĥ. From now on we
are going to skip the hat denoting quantization.
We can consider the Schrödinger equation (2.2) as a dynamical system in the Hilbert
space H. Then (in units where ħ = 1) we can solve it by the formula:
$$ \psi(t) = e^{-itH}|\psi(0)\rangle. \qquad (2.5) $$
The evolution is a one-parameter family of unitary operators e^{−itH}.

2.2 Heisenberg picture
Up to now the main role in our discussion was played by the Schrödinger equation
(2.2). This setting is referred to as the Schrödinger picture. There is an equivalent quantum
mechanical picture, called the Heisenberg picture. The state |ψ > at time t is mapped to
e^{iHt}|ψ >, and the operators A are mapped to e^{iHt}Ae^{−iHt}. The operator e^{iHt} is unitary
and hence it preserves scalar products. Notice that all measurable quantities are
given by matrix elements, i.e. by scalar products. This shows that we do not change the
physical picture.
In the Schrödinger picture the observables do not change and the states change with
time. In the Heisenberg picture the situation is the opposite –
the observables change by the law

$$ \frac{dA}{dt} = -i[A,H], \qquad (2.6) $$
(this is obtained by differentiation) but the states stay constant.
2.3 The Harmonic oscillator
In classical mechanics the simplest but very important system is the harmonic oscillator.
The importance lies in the fact that roughly speaking all other systems can be considered
as sets of connected oscillators. The situation in quantum mechanics and quantum field
theory is the same.
The classical harmonic oscillator is governed by the Hamiltonian
$$ H = \frac{p^2}{2m} + \frac{kx^2}{2} = \frac{p^2}{2m} + \frac{m\omega^2x^2}{2} \qquad (2.7) $$
"Quantizing" it gives for the Schrödinger operator
$$ H = \frac{-\partial_x^2}{2m} + \frac{m\omega^2x^2}{2} \qquad (2.8) $$
Here we assume that the Planck constant ħ = 1. Our Hilbert space will be L²(R). The
above operator is essentially the Hermite operator, whose eigenfunctions are expressed
in terms of the Hermite polynomials. This is a well-known fact but we will derive it
below.
In what follows we are going to use simple arguments from representation theory.
Instead of using the operators x and p we are going to present H in terms of the following
two operators:

$$ a = \sqrt{\frac{m\omega}{2}}\,x + i\sqrt{\frac{1}{2m\omega}}\,p \qquad (2.9) $$
$$ a^\dagger = \sqrt{\frac{m\omega}{2}}\,x - i\sqrt{\frac{1}{2m\omega}}\,p \qquad (2.10) $$

Notice that the operators a and a† satisfy the canonical commutation relation [a, a† ] = 1,
which plays a crucial role below. Obviously the Hamiltonian can be written in the form
$$ H = \frac{\omega}{2}(a^\dagger a + aa^\dagger) = \omega\Big(N + \frac12\Big), \qquad (2.11) $$
where N = a†a. The Hermitian operator N satisfies the relations

[N, a† ] = a† and [N, a] = −a. (2.12)
The above operators define an algebra, called the Heisenberg algebra. We are going to study
the representations of this algebra in order to obtain the spectrum of N.
Let |n > be a normalized eigenvector of N, i.e. N|n > = n|n > and < n|n > = 1.
Consider the vectors a†|n > and a|n >. If we apply N to them and use the
commutation relations (2.12) we obtain
$$ Na^\dagger|n\rangle = (a^\dagger N + a^\dagger)|n\rangle = a^\dagger(N+1)|n\rangle = (n+1)a^\dagger|n\rangle \qquad (2.13) $$
$$ Na|n\rangle = (aN - a)|n\rangle = a(N-1)|n\rangle = (n-1)a|n\rangle \qquad (2.14) $$

The equations (2.13) and (2.14) explain the names of the operators a† and a – operators of
creation and annihilation.
The last equations show that we can build new eigenstates from old ones. In partic-
ular it seems we can obtain eigenstates with arbitrarily negative eigenvalues. Below we
show that this cannot happen.
The operator H in (2.8) is a sum of squares of Hermitian operators. This shows that
it cannot have negative eigenvalues. Hence from some positive k on, the
vectors a^k|n > are zero and produce no new eigenvectors. Let us denote
by |0 > the last non-zero vector of the sequence |n >, a|n >, . . . , a^k|n >, . . .. The
vector |0 > is called the vacuum. (Notice that here we have denoted a non-zero state by
|0 >! This is the vacuum and not the zero vector.) The uniqueness of the vacuum is
also easy to prove, see below. We have a|0 > = 0. On the other hand all the vectors
|0 >, a†|0 >, . . . , (a†)^k|0 >, . . . are non-zero eigenvectors. Let us show this.
Take a normalized eigenvector |n > as above. The squared norm of a†|n > can be
computed as follows:
$$ \langle a^\dagger n|a^\dagger n\rangle = \langle n|aa^\dagger|n\rangle = \langle n|(a^\dagger a+1)|n\rangle = n+1. $$
(Why is the first equality true?)
One can easily show that the eigenspaces of N corresponding to the eigenvalues n
are one-dimensional. Let us start with the vacuum |0 >. It satisfies the first-order ordinary
differential equation a|0 > = 0. Hence the statement is true for n = 0. Assume that we have
proved the statement for the eigenvalue n − 1. If for the eigenvalue n we had two
independent eigenvectors |n > and |n′ >, we could act upon them by a; then we
obtain
$$ Na|n\rangle = (n-1)a|n\rangle, \qquad Na|n'\rangle = (n-1)a|n'\rangle. $$
By the induction hypothesis a|n > and a|n′ > are proportional, so for a suitable constant c
we get a(|n > − c|n′ >) = 0, and hence |n > − c|n′ > is the vacuum. On the other hand
N(|n > − c|n′ >) = n(|n > − c|n′ >), contradicting the fact that the vacuum has zero eigenvalue.
Finally, the fact that the eigenvectors of N form a complete orthogonal system in
L²(R) is well known, e.g. from the theory of Hermite polynomials.
In this way we have obtained an orthogonal basis of L²(R) formed by the eigenvectors of
H with eigenvalues ω(n + 1/2).
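The algebraic derivation above can be illustrated with truncated matrices for a and a† in the basis |0>, |1>, . . . (the truncation size and frequency below are arbitrary illustrative choices). Away from the truncation corner one recovers [a, a†] = 1 and the spectrum ω(n + 1/2):

```python
import numpy as np

n_max, omega = 40, 1.0   # truncation size and frequency (illustrative values)

# Annihilation operator in the eigenbasis: a|n> = sqrt(n)|n-1>.
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
adag = a.T                      # creation operator a^dagger
N = adag @ a                    # number operator N = a^dagger a

# [a, a^dagger] = 1 holds exactly away from the truncation corner.
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(n_max - 1))

# H = omega (N + 1/2) is diagonal with spectrum omega (n + 1/2).
H = omega * (N + 0.5 * np.eye(n_max))
assert np.allclose(np.diag(H), omega * (np.arange(n_max) + 0.5))
```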

3 Path integral formulation of quantum mechanics


3.1 Definitions
We are going to define the path integrals for quantum mechanics by the same expansion
(1.17) we used in 0-dimensional QFT. For this we need to define the Feynman amplitudes,
which means we have to define the function S, the quadratic form B, to find its inverse
B^{−1}, and finally to define the covectors l_j.
Let us consider a classical particle with action functional
$$ S(q) = \int L(q_j, \dot q_j)\,dt. $$
Then we need to define the Feynman integral, having the meaning of a correlation function:
$$ G(t_1,\ldots,t_N) = \langle q(t_1),\ldots,q(t_N)\rangle := \frac{\int q(t_1)\cdots q(t_N)\,e^{iS(q)/\hbar}\,Dq}{\int e^{iS(q)/\hbar}\,Dq} \qquad (3.1) $$

Remark 3.1 An obvious but important remark is that q(tj ) has the meaning of a func-
tional. Here tj is fixed and q is the variable.

The notation G_n(t_1, . . . , t_n) refers to another name of the correlator – Green's func-
tion. Of course we consider first the Euclidean picture. For this we need to make the Wick
rotation, i.e. to rotate the time into the complex domain. We are going to consider only
Lagrangians of the form L(q, q̇) = q̇²/2 − U(q). Then our action becomes:
$$ S = -i\int \big(\dot q^2/2 - U(q)\big)\,dt $$

and the Green's function will be given by the formula
$$ G^E(t_1,\ldots,t_N) = \langle q(t_1),\ldots,q(t_N)\rangle := \frac{\int q(t_1)\cdots q(t_N)\,e^{-S_E(q)/\hbar}\,Dq}{\int e^{-S_E(q)/\hbar}\,Dq} \qquad (3.2) $$
with $S_E = \int\big(\dot q^2/2 + U(q)\big)\,dt$.
We may assume for simplicity that the particle moves in one-dimensional space. The
general case is not much harder. The potential U will be taken to be a power series of the
form $U = \sum_{j\ge 2} U_j$, i.e. without constant and linear terms. For further use introduce
the notation $U_j = u_j q^j/j!$.
Then in analogy with the 0-dimensional case we take the quadratic form B to be
$$ B = \int(\dot q^2 + m^2q^2)\,dt. $$

Here m2 q 2 = 2U2 . The coefficient m has the meaning of mass. Integrating by parts we
obtain

B =< Aq|q >,

where A = −d2 /dt2 + m2 . This will help us define the inverse B −1 ; namely we put
B −1 (f, f ) =< A−1 f |f >. The operator A−1 is defined as in differential equations: if
Aq = f , then the solution of this equation is given by q = A−1 f . In differential equations
this is the integral operator with kernel the Green function G(x, y):
$$ q(x) = \int G(x,y)f(y)\,dy. $$

It is well known that in our case the Green's function is given explicitly by the formula:
$$ G(x,y) = \frac{e^{-m|x-y|}}{2m}. \qquad (3.3) $$

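Formula (3.3) can be verified numerically: the inverse Fourier transform of 1/(p² + m²) indeed reproduces e^{−m|t|}/2m. A minimal sketch (the cutoff, grid and sample values are illustrative choices):

```python
import numpy as np

m, t = 1.0, 0.7                        # illustrative mass and time separation
p = np.linspace(-200.0, 200.0, 400001)
dp = p[1] - p[0]

# G(t) = int e^{ipt} dp / (2 pi (p^2 + m^2)); the sine part cancels by symmetry.
integrand = np.cos(p * t) / (2 * np.pi * (p ** 2 + m ** 2))
G_numeric = np.sum(integrand) * dp     # simple Riemann sum

G_exact = np.exp(-m * abs(t)) / (2 * m)
assert abs(G_numeric - G_exact) < 1e-3
```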
We see that our Hilbert space H has to be the space of square-integrable functions
L². But we are going to work with the Schwartz spaces S(R^n) and S*(R^n) as explained in
Section 2.
Now we are ready to give the definition of the Feynman integral (3.1) (Euclidean version).
Introduce some numbering of the internal vertices. The formula below does not depend
on the choice.

Definition 3.2 The correlation (Green's) function (3.1) is given by the asymptotic series
$$ G(t_1,\ldots,t_N) = \sum_{\Gamma\in G^*_{\ge 3}(N)} \frac{\hbar^{b(\Gamma)}}{|\mathrm{Aut}(\Gamma)|}\,F_\Gamma(t_1,\ldots,t_N) \qquad (3.4) $$

To define the numbers F_Γ we fix the graph Γ. Then the following rules hold:

1. Put the variable t_j (the functional q(t_j)) at the j-th external vertex of Γ.

2. Put the variable s_k at the internal vertex k.

3. For each edge whose endpoints carry the variables α and β write the Green's function G(α, β).

4. The number F_Γ is defined by the formula
$$ F_\Gamma = \prod_j(-u_{v(j)})\int\prod_{\text{edges}}G(\alpha,\beta)\,ds, \qquad (3.5) $$
where v(j) is the valency of the j-th internal vertex of Γ and the integration is over the
variables s at the internal vertices.

Example 3.3 (Wick's Lemma.) Let us examine in detail the free theory:
$$ S = \int\Big(\frac{\dot q^2}{2} + \frac{m^2q^2}{2}\Big)\,dt. $$
In this case each graph is a disjoint union of subgraphs with two vertices and an edge
connecting them. The above formula gives us that
$$ G(t_1,\ldots,t_{2k}) = \hbar^k\sum_{\sigma\in\Pi_k}\ \prod_{i\in\{1,\ldots,2k\}/\sigma} G(t_i - t_{\sigma(i)}), $$
where Π_k denotes the set of pairings σ of {1, . . . , 2k}.
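The sum over pairings in Wick's lemma is easy to implement. The sketch below (the function names `pairings` and `G_free` are ours, not from the notes) enumerates the pairings of {1, . . . , 2k} and assembles the free 2k-point function from the propagator (3.3); for 2k = 4 there are (2k − 1)!! = 3 pairings.

```python
import math

def pairings(points):
    """Enumerate all perfect matchings (Wick pairings) of an even-size list."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def G_free(ts, m=1.0, hbar=1.0):
    """Free 2k-point function: hbar^k * sum over pairings of propagator products."""
    k = len(ts) // 2
    prop = lambda u, v: math.exp(-m * abs(u - v)) / (2 * m)
    return hbar ** k * sum(
        math.prod(prop(ts[i], ts[j]) for i, j in pairing)
        for pairing in pairings(list(range(len(ts)))))

assert sum(1 for _ in pairings(list(range(4)))) == 3   # (2k-1)!! for k = 2
```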

Example 3.4 φ³-theory. Consider the action with Lagrangian L = φ̇²/2 − m²φ²/2 + φ³.

Let us compute the two-point correlation function up to some order.

Figure 8.

3.2 Computations
3.2.1 The partition function
Let us consider the partition function with a slight modification – the partition function with
external current J:
$$ Z(J) := \int e^{-S_E(q)/\hbar + \langle J|q\rangle}\,Dq. \qquad (3.6) $$
Here J is an arbitrary function from S (the space of fast decaying functions). Then we have
the equality (only formally!):

$$ \frac{Z(J)}{Z(0)} = \sum_n \frac{\hbar^{-n}}{n!}\int_{\mathbb R^n} G_n(t_1,\ldots,t_n)J(t_1)\cdots J(t_n)\,dt_1\cdots dt_n. $$

This will be our definition of Z(J)/Z(0). We see that this is the generating function of all
the Green's functions G_n(t_1, . . . , t_n).
As in the 0-dimensional QFT here we have
Proposition 3.5 The following formula holds:
$$ W(J) := \ln\frac{Z(J)}{Z(0)} = \sum_n \frac{\hbar^{-n}}{n!}\int_{\mathbb R^n} G_n^c(t_1,\ldots,t_n)J(t_1)\cdots J(t_n)\,dt_1\cdots dt_n. $$
The proof is the same as in the 0-dimensional QFT. In this way we have a generating
function of all the connected Green's functions G_n^c(t_1, . . . , t_n).
Also as in 0-dimensional QFT we have j-loops expansion:

W (J) = ~−1 W0 (J) + W1 (J) + . . . + ~j−1 Wj (J) + . . . ,


where W0 is the sum over trees, W1 is the 1-loop contribution, etc. Furthermore,
Proposition 3.6 (0) The tree approximation is given by
$$ W_0(J) = -S_E(q_J) + \langle q_J, J\rangle, $$
where q_J is the extremal of the functional $S_E^J(q) := S_E(q) - \langle q, J\rangle$;
(1) The one-loop contribution is given by
$$ W_1(J) = -\frac12\ln\det L_J, $$
where L_J is the linear operator on H such that $d^2S_E^J(q_J)(f_1,f_2) = d^2S_E^0(0)(L_Jf_1,f_2)$.

In a similar vein we can write explicitly a generating function for the one-particle irre-
ducible Green's functions $G_n^{1PI}(t_1,\ldots,t_n)$, i.e. the Green's functions that are defined only
over the 1PI graphs.

3.3 Motivation
3.3.1 ”Derivation” of Feynman’s formula
3.3.2 Feynman-Kac formula
3.4 Example - the Harmonic Oscillator
3.5 Example - φ3 Theory
3.6 Momentum space formulation
The computations in the position variables are quite heavy. In particular the Feynman
amplitude is given by an integral over a space of dimension equal to the number of internal
vertices, which can be enormous even for trees. Instead one can pass to the momentum
representation by applying the Fourier transform. Let us start with the classical equation:
$$ \Big(-\frac{d^2}{dt^2} + m^2\Big)G(t) = \delta. $$
Applying the Fourier transform to it (with the variable p instead of ξ) we obtain:
$$ (p^2 + m^2)\hat G = 1. $$

This gives
$$ \hat G(E) = \frac{1}{E^2 + m^2}. $$
Of course
$$ G(t-s) = \int \frac{e^{ip(t-s)}\,dp}{2\pi(p^2+m^2)}. $$

Below we introduce the following notation. We attach a momentum variable p_j to each
edge of a fixed graph Γ and denote by α(p_j), β(p_j) the time variables at the two vertices
adjacent to that edge. Both α(p_j) and β(p_j) denote either a t or an s. We plug the above
expression for G with the corresponding variables into the formula for the amplitude
F_Γ(t_1, . . . , t_N). We have
$$ F(t) = \int_s \prod_j\int_{p_j\in\mathbb R}\frac{e^{ip_j(\alpha(p_j)-\beta(p_j))}\,dp_j}{2\pi(p_j^2+m^2)}\,ds $$

We can change the order of integration by first integrating with respect to s and after
that with respect to p. We obtain the following integral:
$$ F(t) = \prod_j\int_{p_j\in\mathbb R}\frac{1}{2\pi(p_j^2+m^2)}\int_s e^{ip_j(\alpha(p_j)-\beta(p_j))}\,ds\,dp_j
= \int\prod_l e^{it_lE_l}\prod_j\frac{1}{2\pi(p_j^2+m^2)}\prod_{\mathrm{ver}}\delta\Big(\sum_{k\in\mathrm{ver}}p_k\Big)\,dp_j. $$

Here we have denoted by E_l the dual variables on the external edges and by ver
the vertices of the graph. Finally we perform a Fourier transform on F_Γ(t_1, . . . , t_N) (with
respect to (t_1, . . . , t_N)).
We obtain
$$ \hat F(E) = \int_{Y(E)}\prod_j\frac{1}{2\pi(p_j^2+m^2)}\,dp_j. $$
From the δ-functions we obtain some relations between the p's and the E's. One of them
is E_1 + . . . + E_N = 0. The rest define an affine subspace Y(E) where the variables p take
values.

Example 3.7 Consider as an example the diagram on Fig. 8. We can first write the
propagator as an inverse Fourier transform:
$$ G(t-s) = \int\frac{e^{ip(t-s)}\,dp}{2\pi(p^2+m^2)}. $$
Then we plug it into the formula for the amplitude:
$$ F(t) = \int_{\mathbf s}\int_{\mathbf p}\frac{e^{ip_4(t_1-s_1)}\,e^{ip_5(t_2-s_2)}\,e^{i\sum_{j=1}^3 p_j(s_1-s_2)}\,d\mathbf p}{\prod_j 2\pi(p_j^2+m^2)}\,d\mathbf s. $$

Let us integrate first with respect to s_1, s_2. We obtain
$$ F(t) = \int\frac{\delta\big(p_4 - \sum_{j=1}^3 p_j\big)\,\delta\big(-p_5 - \sum_{j=1}^3 p_j\big)\,e^{ip_4t_1+ip_5t_2}}{\prod_{j=1}^5 2\pi(p_j^2+m^2)}\,d\mathbf p. $$

This gives the equations $p_4 + p_5 = 0$ and $\sum_{j=1}^3 p_j = p_4$. Next perform the Fourier transform
with respect to t. The dual variables are denoted by E. This gives
$$ \hat F(E) = \int\frac{d\mathbf p}{\prod_{j=1}^5 2\pi(p_j^2+m^2)}, $$
where the variables satisfy the relations $E_1 = -E_2$, $\sum_{j=1}^3 p_j = E_1$, $p_4 = E_1$,
$p_5 = E_2$. The final answer is:
$$ \hat F(E) = \frac{1}{(E_1^2+m^2)^2}\iint\frac{dp_1\,dp_2}{(2\pi)^5(p_1^2+m^2)(p_2^2+m^2)\big((E_1-p_1-p_2)^2+m^2\big)}, \qquad (3.7) $$
Below we give the rules defining Feynman’s amplitude in momentum variables. Recall
that the dual variables to t are denoted by E. The dual variables to s will be denoted
by Q.

Definition 3.8 (Feynman's rules for the amplitudes in momentum variables.) The
Fourier transform of an amplitude F_Γ is computed as follows:

1. Put a variable E_j at each external edge and a variable Q_j at each internal one;

2. Assign a propagator $\frac{1}{p^2+m^2}$ to each edge and substitute for p the variable E_j at the external
edges and Q_j at the internal ones. Multiply all the propagators and denote the
result by Φ_Γ(E, Q);

3. Orient the external edges inward;

4. Orient the internal edges arbitrarily;

5. For each internal vertex write "the Kirchhoff law": the sum of the incoming vari-
ables is equal to the sum of the outgoing ones. This will produce relations among
the variables Q and E. One of them is $\sum_{j=1}^N E_j = 0$. The rest define a linear
subspace Y(E) of the space of the Q's;

6. Define the momentum-space amplitude of Γ by
$$ \hat F_\Gamma(E) = \prod_l(-u_{v(l)})\int_{Y(E)}\Phi(E,Q)\,dQ. $$

7. The measure dQ on Y(E) is defined in such a way that the volume of
$Y(E)/Y_{\mathbb Z}(0)$ is 1, where $Y_{\mathbb Z}(0)$ is the set of integer points in Y(0).

Exercise. Write the amplitude of the Feynman graph given on Fig. 5.

3.7 Wick’s Rotation in Momentum space
Now that we have learned to compute the correlation functions in the Euclidean space
we need to return to the Minkowski one. This means that we have to undo the
Wick rotation. We will do it as follows. Take the propagator Ĝ in momentum
coordinates:
$$ \frac{1}{E^2+m^2} = \int G(t)e^{iEt}\,dt. $$

Let us make the Wick rotation τ = te^{iθ} in the Green's function G(t). We obtain
G(te^{−iθ}), where θ changes from 0 to −π/2. In momentum coordinates the propagator G
undergoes the deformation:
$$ \int G(te^{-i\theta})e^{iEt}\,dt. $$

30
Let us put E = ξe^{−iθ}, t = τe^{iθ}. The above formula becomes:
$$ \int G(\tau)e^{i\xi\tau}e^{i\theta}\,d\tau = \frac{e^{i\theta}}{\xi^2+m^2} = \frac{e^{i\theta}}{E^2e^{2i\theta}+m^2} = \frac{e^{-i\theta}}{E^2+m^2e^{-2i\theta}}. $$
For θ → −π/2 the number e^{−2iθ} ∼ −1 + iε with ε > 0. A standard notation in
physics is
$$ \hat G(E) = \frac{i}{E^2 - m^2 + i\varepsilon}, $$
understood as a distribution. It should be interpreted as the distribution acting on a
test function φ as

$$ \int_C \hat G(E)\varphi(E)\,dE, $$
where the contour C goes along the real line from −∞ to −ε, around the half-circle
|E| = ε, Re E > 0, and then continues along the real line to +∞.
The same result can be obtained working directly with Minkowski’s propagator.

4 Symmetries
Symmetries are everywhere around us. Quite often we attribute beauty to some visible
symmetry. In science they are less visible but no less important. In classical mechanics
the symmetries are responsible for the integrability of mechanical equations. Some of
the corresponding symmetries can be seen easily, e.g. the rotational symmetry yields
the conservation of angular momentum. But others, such as some of the symmetries in the rigid body
equations (e.g. in the Kovalevskaya case), are not at all obvious.
The adequate mathematical tool describing symmetries is group theory. In this
section we assume some knowledge of groups and present some of the theory needed
in the course. On the other hand we are going to consider simple enough examples
that presumably would help the reader to get more insight even without preliminary
acquaintance with groups.
As the definition of a group is simple let’s recall it.

Definition 4.1 A group is a set G with the following properties:

1. Multiplication. For any ordered pair of elements g_1, g_2 ∈ G there exists an element
g_1 · g_2 ∈ G;
2. Inversion. For any element g ∈ G there exists an element g^{−1} ∈ G such that
g · g^{−1} = g^{−1} · g = 1;
3. Unit. There exists an element 1 ∈ G such that 1 · g = g for any g ∈ G;
4. Associativity. For any three elements g_1, g_2, g_3 ∈ G the associativity equation
holds:
$$ g_1\cdot(g_2\cdot g_3) = (g_1\cdot g_2)\cdot g_3. $$
If the order of multiplication is irrelevant, i.e. g1 · g2 = g2 · g1 we say that the group is
commutative or Abelian.

In the subsections that follow we study a few examples, all important for QFT.

4.1 The Group SO(2)
This is the group of rotations of the circle. (Show that it is a group.) We can identify
its elements with the 2 × 2 matrices of the form:
$$ A = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} $$
Obviously such a matrix describes a rotation through the angle θ.
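The group axioms for these matrices can be checked directly. The sketch below (the angles are arbitrary illustrative choices) verifies closure together with the group law A(θ₁)A(θ₂) = A(θ₁ + θ₂), inverses, and the unit:

```python
import numpy as np

def rotation(theta):
    """An SO(2) element in the matrix form used above."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

a, b = 0.7, 1.9   # two arbitrary angles

# Closure and the group law: A(a) A(b) = A(a + b).
assert np.allclose(rotation(a) @ rotation(b), rotation(a + b))
# Inverse: A(-a) = A(a)^{-1};  unit: A(0) = I.
assert np.allclose(rotation(-a) @ rotation(a), np.eye(2))
assert np.allclose(rotation(0.0), np.eye(2))
```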
In this example we meet one of the most important notions in mathematics and
physics – the notion of a representation. In simple terms, we described above our group as
a subgroup of a matrix group. The description of the representations of groups is a major
goal of mathematics. Soon we will see its importance for quantum theory.
First we will give a precise definition.

Definition 4.2 Let V be a vector space (it could be infinite-dimensional). Denote by
Inv(V) the group of invertible linear operators on V. A homomorphism of a group G
into a subgroup of Inv(V) is called a representation.

4.2 The Groups SO(3) and SU(2)


4.3 The Lorentz and Poincaré groups
4.4 Clifford algebras
Let V be a complex vector space with a scalar product.

Definition 4.3 The Clifford algebra is the algebra spanned by the elements of V and the
complex numbers C, subject to the relation
$$ \xi\eta + \eta\xi = 2(\xi,\eta), \qquad \xi,\eta\in V. $$
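For V = C³ with the standard scalar product, the Pauli matrices realize this relation: σ_iσ_j + σ_jσ_i = 2δ_ij · 1. A quick numerical check:

```python
import numpy as np

# The three Pauli matrices.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Clifford relation: sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij I.
for i in range(3):
    for j in range(3):
        anticomm = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        expected = 2 * (1 if i == j else 0) * np.eye(2)
        assert np.allclose(anticomm, expected)
```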

5 Classical fields
5.1 Multidimensional Variational Problems
Here we are going to generalize the variational approach in mechanics to some other phys-
ical problems, where the configuration space is infinite-dimensional. In more detail, we
consider a vector space of dimension l ≥ 2 with coordinates (x_1, . . . , x_l). Let us first
consider the simplest case of one scalar field ψ. We define the action
$$ S(\psi) = \int_{\mathbb R^l} L(\nabla\psi(x), \psi(x))\,d^lx $$
Exactly as in the 1-dimensional case one derives the Euler-Lagrange equations
$$ \frac{\partial L}{\partial\psi} - \sum_{j=1}^{l}\partial_{x_j}\frac{\partial L}{\partial(\partial_{x_j}\psi)} = 0. $$
In an obvious manner we can study several scalar fields with a Lagrangian
L(∇ψ_1(x), . . . , ∇ψ_r(x), ψ_1(x), . . . , ψ_r(x)).
But it is much more tricky to understand fields with values in some vector bundles, spinor
fields, etc. In this section we do not aim at studying the general picture but rather con-
sider some cases of physical importance. All of these need to have some properties, e.g.,
to be Poincaré-invariant.

5.2 Klein-Gordon Field
The simplest non-trivial Poincaré-invariant Lagrangian is (in suitable physical units):
$$ L = \frac12\partial_\mu\varphi\,\partial^\mu\varphi - \frac12 m^2\varphi^2 $$
The Klein-Gordon equation is the corresponding Euler-Lagrange equation for the action
$S = \int L(\nabla\varphi,\varphi)\,d^4x$:
$$ (\Box + m^2)\varphi = 0. \qquad (5.1) $$
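A plane wave φ = cos(kx − ωt) solves (5.1) exactly when the dispersion relation ω² = k² + m² holds. In 1+1 dimensions this can be checked with finite differences (the sample point, step and parameters below are illustrative choices):

```python
import numpy as np

m, k = 1.0, 2.0
omega = np.sqrt(k ** 2 + m ** 2)          # dispersion relation
phi = lambda t, x: np.cos(k * x - omega * t)

t0, x0, h = 0.3, 0.5, 1e-4
d2t = (phi(t0 + h, x0) - 2 * phi(t0, x0) + phi(t0 - h, x0)) / h ** 2
d2x = (phi(t0, x0 + h) - 2 * phi(t0, x0) + phi(t0, x0 - h)) / h ** 2

box_phi = d2t - d2x                       # Box = d^2/dt^2 - d^2/dx^2 in 1+1 dim
assert abs(box_phi + m ** 2 * phi(t0, x0)) < 1e-5   # (Box + m^2) phi = 0
```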

5.3 Electromagnetic field
The electromagnetic field is governed by the Maxwell equations. We recall them. Let
E(x, t) = (E¹, E², E³) and B(x, t) = (B¹, B², B³) be the electric and the magnetic field
in three-dimensional space, correspondingly. Denote also by j the current
density and by ρ the charge density. Then the Maxwell equations are:
$$ a)\ \nabla\cdot B = 0, \qquad b)\ \nabla\times E + \frac{\partial B}{\partial t} = 0 \qquad (5.2) $$
$$ c)\ \nabla\cdot E = \rho, \qquad d)\ \nabla\times B - \frac{\partial E}{\partial t} = j \qquad (5.3) $$
The meaning of the equations is the following. Equation a) means that there are no
magnetic charges. Next comes Faraday’s law b) of induction: if the magnetic field is
changing, then an electric field appears. Equation c) is nothing but Gauss’s (Stokes,
Green, Ostrogradsky, etc.) theorem in differential form. Finally equation d) is the
Ampère’s circuital law, with the Maxwell correction.
By Helmholtz’s theorem, B can be written in terms of a vector field A, called the
magnetic potential:

B = ∇ × A.

Differentiating and using Faraday's law we find
$$ \nabla\times\Big(E + \frac{\partial A}{\partial t}\Big) = 0. $$
This shows, again by Helmholtz's theorem, that there exists a function ϕ such that
$E + \frac{\partial A}{\partial t} = -\nabla\varphi$. Denote $A^\mu = (\varphi, A)$. It is called the 4-potential.
We are going to write the Maxwell equations in terms of the 4-potential. Introduce
the electromagnetic tensor F µν by the equalities:

F µν = ∂ µ Aν − ∂ ν Aµ = −F νµ . (5.4)
Component-wise it reads:
$$ F = \begin{pmatrix} 0 & -E^1 & -E^2 & -E^3 \\ E^1 & 0 & -B^3 & B^2 \\ E^2 & B^3 & 0 & -B^1 \\ E^3 & -B^2 & B^1 & 0 \end{pmatrix}. \qquad (5.5) $$
It is quite obvious that the electromagnetic tensor is invariant under the transformation
$$ A^\mu \to A^\mu + \partial^\mu\chi. $$

5.4 Dirac Field
Introduce the Dirac matrices for four-dimensional Minkowski space. They are:
$$ \gamma^0 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \gamma^i = \begin{pmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{pmatrix}, \qquad (5.6) $$
where σ^i are the Pauli matrices¹. This representation is called the Weyl or chiral representation.
In terms of the Dirac matrices we can write the Dirac equation as:
$$ (i\gamma^\mu\partial_\mu - m)\psi(x) = 0, $$
or in Dirac's notation $(i\partial\!\!\!/ - m)\psi(x) = 0$.
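The defining property of the matrices (5.6) is the Clifford relation γ^μγ^ν + γ^νγ^μ = 2η^{μν} · 1 with the Minkowski metric η = diag(1, −1, −1, −1). The following sketch verifies it for the Weyl representation:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Weyl (chiral) gamma matrices as in (5.6).
gamma = [block(Z2, I2, I2, Z2)] + [block(Z2, s, -s, Z2) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

# Clifford relation: gamma^mu gamma^nu + gamma^nu gamma^mu = 2 eta^{mu nu} I.
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))
```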

Proposition 5.1 (i) The Dirac equation is Lorentz invariant.

(ii) The Klein-Gordon operator factors as
$$ \partial^2 + m^2 = (-i\gamma^\mu\partial_\mu - m)(i\gamma^\nu\partial_\nu - m), $$
i.e. the Klein-Gordon equation follows from the Dirac equation.

The proof is an elementary computation and is left to the reader.
The Lagrangian of the Dirac theory is:
$$ L_{\mathrm{Dirac}} = \bar\Psi(i\partial\!\!\!/ - m)\Psi, \qquad (5.7) $$
where $\bar\Psi = \Psi^\dagger\gamma^0$.
The Dirac propagator, i.e. the fundamental solution of the Dirac equation, is:

¹ Warning! This notation is not the only one used in the literature.

5.5 Weyl, Majorana, Yukawa fields

6 Quantum fields
6.1 Scalar Fields
We start with one scalar field φ on Minkowski space. This means that we have a vector
space V with a metric of signature (−1, 1, . . . , 1). We also have an action $S = \int L\,dx$ with Lagrangian
L(ψ, ∇ψ). In QFT there is an operator of time ordering T acting on fields as follows. If
$(x-y)^2_M \ge 0$ then T(φ(x)ψ(y)) = φ(x)ψ(y). Otherwise T(φ(x)ψ(y)) = ψ(y)φ(x).
We want to make sense of expressions of the form:
$$ G(x^1,\ldots,x^N) = \langle\varphi(x^1),\ldots,\varphi(x^N)\rangle := \frac{\int T(\varphi(x^1)\cdots\varphi(x^N))\,e^{iS(\varphi)/\hbar}\,D\varphi}{\int e^{iS(\varphi)/\hbar}\,D\varphi}. \qquad (6.1) $$

Of course after that we need to learn how to compute them.

As explained in the section about quantum mechanics, via the Wick rotation we can
reduce the problem to the Euclidean theory. Let us start with the latter. In the Euclidean
theory we have to find the correlator
$$ G(x^1,\ldots,x^N) = \langle\varphi(x^1),\ldots,\varphi(x^N)\rangle := \frac{\int T(\varphi(x^1)\cdots\varphi(x^N))\,e^{-S(\varphi)/\hbar}\,D\varphi}{\int e^{-S(\varphi)/\hbar}\,D\varphi}. $$

We will proceed as in quantum mechanics. The rules are the same but a new problem
arises. Let us consider the simplest example of the Klein-Gordon action.
The Lagrangian of the Klein-Gordon theory is:
$$ L_{KG} = \frac12(\partial_0\varphi)^2 - \frac12\sum_{j=1}^{m-1}(\partial_j\varphi)^2 - \frac12 m^2\varphi^2, $$
perturbed by $\sum_j U_j\varphi^j$, i.e.
$$ L = L_{KG} + \sum_j U_j\varphi^j. \qquad (6.2) $$

After the Wick rotation it becomes
$$ L^E_{KG} = -\frac12\big((\nabla\varphi)^2 + m^2\varphi^2\big). $$
Denote the Green's function of the Euclidean Klein-Gordon equation by $G^E_{KG}(x-y)$.
In other words we look for a solution of the equation
$$ (-\Delta + m^2)G^E_{KG}(x-y) = \delta(x-y) $$
35
Performing the Fourier transform on both sides yields:
$$ (k^2 + m^2)\hat G_{KG}(k) = 1 $$
Then the Green's function is given by the inverse Fourier transform:
$$ G^E_{KG}(x-y) = \frac{1}{(2\pi)^l}\int e^{-i(x-y)k}\frac{d^lk}{k^2+m^2}. $$

As in quantum mechanics we see the advantages of the Wick rotation – the integrand has
no poles in the real domain. We can define the Euclidean correlation functions exactly
in the same way as in quantum mechanics. The formulation in momentum variables is
the same, too.
But here some care is needed.

Example 6.1 Consider again the diagram on Fig. 8. By the rules the amplitude is
$$ \hat F(E) = \frac{1}{(E_1^2+m^2)^2}\iint\frac{d^lp_1\,d^lp_2}{(2\pi)^{5l}(p_1^2+m^2)(p_2^2+m^2)\big((E_1-p_1-p_2)^2+m^2\big)}. $$
The power of the numerator is 2l, while the power of the denominator is 6. We see
that the integral is divergent for l ≥ 3.

In this example we meet one of the greatest problems of QFT – the amplitudes are
quite often divergent. We postpone the discussion of the ways out of this situation to the
section on Renormalization. Until then we will consider our integrals only formally.

6.2 Cross Sections and S-matrix

7 Fermions
Elementary particles are divided into two types: bosons and fermions. Examples of the
former are the photon and the W and Z particles. Electrons, protons and neutrons are examples
of fermions. The bosons are characterized by the fact that several bosons can occupy the
same quantum state, while fermions cannot. Mathematically this difference is expressed
by the corresponding Hilbert spaces. If the Hilbert space for a single particle is H, then
for k bosons it is S^kH (the k-th symmetric power of H), while for k fermions it is Λ^kH
(the k-th exterior power of H). The quantum theory we have developed up to now
describes mostly bosons – the fields commute. Now we need to develop a field theory of
anti-commuting fields. The relevant mathematical tool is the notion of supermanifolds.

7.1 Linear Superspaces


Of course we start with the relevant linear algebra.

Definition 7.1 A supervector space (or superspace) V is a Z₂-graded vector space V =
V₀ ⊕ V₁ with the following additional structure. We define the tensor product v ⊗ u of two
vectors, where v ∈ V_i, u ∈ V_j, i, j ∈ {0, 1}, satisfying the rule v ⊗ u = (−1)^{ij} u ⊗ v. Let
us define the operation of changing parity Π by ΠV_i = V_{1−i}, i ∈ {0, 1}. With this
notation we can define the following extension of the notions of symmetric and exterior
powers:
$$ S^mV = \Pi(\Lambda^m(\Pi V)), \qquad \Lambda^mV = \Pi(S^m(\Pi V)). $$

When V₀ = R^n, V₁ = R^m we denote V by R^{n|m}. In general we say that V has dimension
n|m, where n, m are the dimensions of V₀, V₁, correspondingly.

The elements of V₀ are called even and the elements of V₁ are called odd.
We define the algebra of polynomial functions O(V) on a superspace V as SV*, where
S acts as defined above on the superspace V*. In more detail, if x₁, . . . , x_n are linear
coordinates on V₀, called even variables, and ξ₁, . . . , ξ_m, called odd variables, are linear
coordinates on V₁, then O(V) is R[x₁, . . . , x_n, ξ₁, . . . , ξ_m] with the relations
$$ x_ix_j = x_jx_i, \qquad \xi_i\xi_j = -\xi_j\xi_i, \qquad x_i\xi_j = \xi_jx_i. $$
The algebra spanned only by the odd variables ξ₁, . . . , ξ_m is called the Grassmann or exterior
algebra. Using the standard notation for the anticommutator, {a, b} = ab + ba, we can write
its defining relations as {ξ_i, ξ_j} = 0. The Grassmann algebra is a finite-
dimensional space, while O(V) is (in general) an infinite-dimensional supervector space.

7.2 Supermanifolds
More generally we can define the algebra of smooth functions C^∞(V) on V as C^∞(V₀) ⊗
ΛV₁*. We can look at the smooth functions on a superspace V as functions of the
form
$$ F(x,\xi) = \sum_\alpha f_\alpha(x)\,\xi_1^{\alpha_1}\cdots\xi_m^{\alpha_m}, $$
where α_i = 0 or 1.

Definition 7.2 A supermanifold M is an ordinary manifold M_0 on which, instead of the standard sheaf of smooth functions, we consider a sheaf of smooth functions of the above form. This means that the structure sheaf is locally isomorphic to C^∞_{M_0} ⊗ Λ(ξ_1, . . . , ξ_m).

7.3 Calculus on Supermanifolds


Let us define the notion of the integral of Grassmann functions. It will have the properties:

∫ 1 dξ = 0,   ∫ ξ dξ = 1,

∫ (∫ ξ_2ξ_1 dξ_1) dξ_2 = ∫∫ ξ_2ξ_1 dξ_1 dξ_2 = 1.

Next we define an integral for functions in both even and odd variables. Consider functions f(x, ξ) whose coefficients in the even variables are compactly supported, i.e.

f(x, ξ) = Σ_α f_α(x) ξ^α,

where the functions f_α have compact support. It is enough to define the integral on the summands f_α(x)ξ^α. The integral is

∫_V f_α(x) ξ^α dx dξ = (∫_{V_0} f_α(x) dx)(∫_{V_1} ξ^α dξ).

The general case is defined by linearity.
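These rules can be turned into a small computation: represent a Grassmann polynomial as a dictionary mapping monomials (increasing index tuples) to coefficients, and let integration over one odd variable strip that variable off with the sign from anticommuting it to the last position. A sketch, reproducing ∫∫ ξ_2ξ_1 dξ_1 dξ_2 = 1 with the innermost differential integrated first (the data layout is an illustrative choice):

```python
def berezin_integral(poly, var):
    """Berezin integral of a Grassmann polynomial over one odd variable.

    `poly` maps monomials (increasing tuples of generator labels) to
    coefficients.  Rules: int 1 dxi = 0 and int xi dxi = 1; terms not
    containing `var` drop out, and `var` is anticommuted to the last
    position (picking up a sign) before being removed.
    """
    out = {}
    for mono, coeff in poly.items():
        if var not in mono:
            continue                             # int 1 dxi = 0
        pos = mono.index(var)
        sign = (-1) ** (len(mono) - 1 - pos)     # move var to the end
        rest = tuple(i for i in mono if i != var)
        out[rest] = out.get(rest, 0) + sign * coeff
    return out

# f = xi2*xi1 = -xi1*xi2, stored with indices in increasing order
f = {(1, 2): -1}
step = berezin_integral(f, 1)    # integrate over xi1 first
print(berezin_integral(step, 2)) # {(): 1}, i.e. the number 1
```

The empty dictionary represents the zero function, so `berezin_integral({(): 5}, 1)` correctly returns `{}`, in line with ∫ 1 dξ = 0.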


We need to learn how to make changes of variables. Consider first the case when there are only odd variables. To get an idea about the natural formulas we start with a 2-dimensional V = V_1. A linear change of variables F has the form:

ξ_1 = f_{11}η_1 + f_{12}η_2,   ξ_2 = f_{21}η_1 + f_{22}η_2.

Then the function ξ_1ξ_2 transforms into

(f_{11}η_1 + f_{12}η_2)(f_{21}η_1 + f_{22}η_2) = (f_{11}f_{22} − f_{12}f_{21})η_1η_2 = det(F)η_1η_2.

We want to keep the value of the integral ∫∫ ξ_1ξ_2 dξ_2 dξ_1 = 1. This yields that the measure must transform as

dξ_2 dξ_1 = det(F)^{−1} dη_2 dη_1,

so that ξ_1ξ_2 dξ_2 dξ_1 = η_1η_2 dη_2 dη_1. Obviously the same formula applies to odd spaces in any dimension.
To guess the formula for the change of variables in the general case, i.e. for integrands of the form f(x)ξ_1 . . . ξ_m, we can apply the same arguments. Again take two odd variables. The linear map will be

x = Ay + b_{11}η_1 + b_{12}η_2,
ξ_1 = c_1 y + d_{11}η_1 + d_{12}η_2,
ξ_2 = c_2 y + d_{21}η_1 + d_{22}η_2.
(7.1)

Here the matrices A and D = (d_{ij}) have even entries, while B = (b_{ij}) and C = (c_i) have odd entries. The matrix B is n × 2 and the matrix C is 2 × n.
The change (7.1) gives

f(x)ξ_1ξ_2 dx dξ_2 dξ_1 = f(x)η_1η_2 (A dy + b_1 dη_1 + b_2 dη_2)(c_1 dy + d_{11} dη_1 + d_{12} dη_2)(c_2 dy + d_{21} dη_1 + d_{22} dη_2).

Assume that det D ≠ 0. After some manipulations we obtain

ξ_1ξ_2 dx dξ_1 dξ_2 = η_1η_2 (det A det D) dy dη_1 dη_2 − η_1η_2 det(BD^{−1}C) det D dy dη_1 dη_2 = η_1η_2 det(A − BD^{−1}C) det D dy dη_1 dη_2.


Having in mind that the integral of ξ_1ξ_2 dξ_1 dξ_2 must be 1, we finally find that the formula for the change of variables is given by

∫ f(x) dx dξ = ∫ f Ber(F) dy dη,

where the Berezinian of F is

Ber(F) = det(A − BD^{−1}C)/det D.
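As a plain linear-algebra sketch, the formula for Ber(F) can be evaluated on numeric blocks. Note this only exercises the even formula det(A − BD^{−1}C)/det D; genuinely odd entries of B and C would require Grassmann arithmetic, so the matrices below are illustrative numbers only:

```python
from fractions import Fraction as Fr

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det2(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def berezinian(A, B, C, D):
    # Ber(F) = det(A - B D^{-1} C) / det D, for 2x2 blocks
    return det2(matsub(A, matmul(B, matmul(inv2(D), C)))) / det2(D)

A  = [[Fr(2), Fr(1)], [Fr(0), Fr(3)]]
D  = [[Fr(1), Fr(1)], [Fr(0), Fr(2)]]
Z  = [[Fr(0), Fr(0)], [Fr(0), Fr(0)]]
I  = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]
D2 = [[Fr(2), Fr(0)], [Fr(0), Fr(2)]]

print(berezinian(A, Z, Z, D))    # with B = C = 0: det A / det D = 6/2 = 3
print(berezinian(I, I, I, D2))   # det(I - I/2) / det(2I) = (1/4)/4 = 1/16
```

Exact rationals (`fractions.Fraction`) are used so that the block formula is checked without floating-point noise.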
We also need to learn how to differentiate functions of anticommuting variables. Here we distinguish between the left derivative ∂^L/∂ξ and the right derivative ∂^R/∂ξ. It is enough to define them on the function ξ_1ξ_2. We have

∂^L/∂ξ_i (ξ_1ξ_2) = δ_{i1}ξ_2 − δ_{i2}ξ_1,   ∂^R/∂ξ_i (ξ_1ξ_2) = δ_{i2}ξ_1 − δ_{i1}ξ_2.

7.4 Fermionic Gaussian Integrals and Correlators


Similarly to the bosonic case we would like to study integrals of the form

∫ e^{(Bξ,ξ)} dξ

in Grassmann variables. We start with the notion of the Pfaffian.


Consider a skew-symmetric matrix A of even size 2n. One can prove that the determinant of A is the square of a polynomial in its entries [19]. The square root of det A is then called the Pfaffian of A, denoted Pf(A). There is an ambiguity in the sign of Pf(A). It is fixed by the condition that for the block matrix

A = ( 0  −1 )
    ( 1   0 )

we have Pf(A) = 1. One can also use the following equivalent definition. Let ω = Σ_{i<j} a_{ij} e_i ∧ e_j. Then

(1/n!) ω^{∧n} = Pf(A) e_1 ∧ · · · ∧ e_{2n}.
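The Pfaffian can also be computed by expansion along the first row, Pf(A) = Σ_{j>0} (−1)^{j−1} a_{0j} Pf(A with row/column 0 and j removed), and the defining property Pf(A)² = det A checked directly. A small sketch with integer matrices:

```python
def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix, by expansion along row 0."""
    n = len(A)
    if n == 0:
        return 1
    if n % 2:
        return 0                       # odd size: Pfaffian vanishes
    total = 0
    for j in range(1, n):
        minor = [[A[r][c] for c in range(n) if c not in (0, j)]
                 for r in range(n) if r not in (0, j)]
        total += (-1) ** (j - 1) * A[0][j] * pfaffian(minor)
    return total

def det(A):
    """Determinant by Laplace expansion (fine for small matrices)."""
    n = len(A)
    if n == 0:
        return 1
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

A4 = [[ 0,  1,  2, 3],
      [-1,  0,  4, 5],
      [-2, -4,  0, 6],
      [-3, -5, -6, 0]]

print(pfaffian(A4))                  # a01*a23 - a02*a13 + a03*a12 = 6 - 10 + 12 = 8
print(pfaffian(A4) ** 2 == det(A4))  # True: det A = Pf(A)^2
```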
Let V be an odd space with coordinates ξ_1, . . . , ξ_n and let B(ξ, ξ) be a symmetric bilinear form on it (symmetric in the super sense, so that its matrix is skew-symmetric).

Lemma 7.3

∫ e^{½B(ξ,ξ)} dξ = Pf(B).

Proof. The second definition of the Pfaffian yields the statement immediately. □


Our next goal is Wick's lemma. Let λ_1, . . . , λ_m be linear functionals on V.

Theorem 7.4

∫ λ_1(ξ) · · · λ_m(ξ) e^{−½B(ξ,ξ)} dξ = Pf(−B) Pf(B^{−1}(λ_i, λ_j)).

Consider a superspace V = V_0 ⊕ V_1 and a nondegenerate quadratic form B = B_0 ⊕ B_1. We also assume that B_0 is positive definite. Let the action

S(v) = ½B(v, v) + Σ_{r≥3} B_r/r!

be an even function on V. We would like to compute the integral


I(ℏ) = ∫ l_1(v_0) · · · l_k(v_0) λ_1(v_1) · · · λ_m(v_1) e^{−S(v)/ℏ} dv.

The computation in many respects repeats the one for the bosons. Let us explain the differences. The diagrams now contain odd and even edges, depicted by straight and wiggly lines respectively. They are obtained in the following manner. Expand the r-th term B_r of the action:

B_r = Σ_s \binom{r}{s} B_{s,r−s}(v_0, . . . , v_0; v_1, . . . , v_1),

where B_{s,r−s} is homogeneous of degree s with respect to v_0 and homogeneous of degree r − s with respect to v_1. We associate with B_{s,r−s} a flower with s even outgoing edges and r − s odd outgoing edges. For the set of odd edges we specify which orderings are even. Then, given a set of flowers, we pair the odd edges among themselves and likewise the even ones. For each pairing σ we define an amplitude F_σ by contracting the tensors B_{s,r−s} using the form B^{−1}. Of course we have to use Theorem 7.4. The answer is the following:

I(ℏ) = (2π)^{dim V_0/2} ℏ^{(dim V_0 − dim V_1)/2} (Pf(−B_1)/√(det B_0)) Σ_Γ ℏ^{b(Γ)} F_Γ/|Aut Γ|.

7.5 Fermionic Quantum Mechanics
The simplest fermionic Lagrangian is

L(t) = ψ(t)ψ̇(t).

This is the quantum-mechanical Lagrangian of a single massless fermion. Here ψ(t) is an odd function of the even variable t ∈ R (time). Notice that, unlike the even case, ψ(t)ψ̇(t) is not a total derivative, hence this Lagrangian defines a non-trivial theory. On the other hand, the standard bosonic Lagrangian ψ̇² − m²ψ² is trivial for fermions.
Let us compute the corresponding quadratic form B. First, notice that the Euler-Lagrange equations are

d/dt (∂L/∂ψ̇) = ∂L/∂ψ,
which gives −ψ̇ = ψ̇, i.e. ψ̇ = 0. As in the bosonic case we want to define integrals of the form

∫ ψ(t_1) · · · ψ(t_N) e^{iS(ψ)/ℏ} Dψ.

Let us consider the massive case. If we have only one field, the only quadratic term is ψ², which is zero. Hence we need at least two fields. Let us consider the Lagrangian

L(t) = ½(ψ_1(t)ψ̇_1(t) + ψ_2(t)ψ̇_2(t)) − mψ_1ψ_2,

where m > 0 denotes the mass. The Euler-Lagrange equations in this case read:

−ψ̇_1 = −mψ_2   (7.2)

−ψ̇_2 = +mψ_1   (7.3)
The Green function is a matrix solution of the equation

Ġ − AG = iδ,

where

A = ( 0   m )
    ( −m  0 ).   (7.4)

We are interested in solutions that are antisymmetric in t, i.e. that satisfy G^T(−t) = −G(t). They are given by the formula

G(t) = −(1/(2i) sign(t) I + aA) e^{At}.

Here I is the identity matrix and a is a number.
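As a sanity check, the claimed antisymmetry G^T(−t) = −G(t) and the homogeneous equation Ġ = AG away from t = 0 can be verified numerically for the 2×2 matrix (7.4). A sketch with illustrative values m = 1.5, a = 0.3 (both arbitrary):

```python
import math

m, a = 1.5, 0.3                      # illustrative mass and constant

A = [[0.0, m], [-m, 0.0]]            # the matrix (7.4)

def expAt(t):
    # A generates a rotation: e^{At} = [[cos(mt), sin(mt)], [-sin(mt), cos(mt)]]
    c, s = math.cos(m * t), math.sin(m * t)
    return [[c, s], [-s, c]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def G(t):
    # G(t) = -( (1/(2i)) sign(t) I + a A ) e^{At}
    s = 1.0 if t > 0 else -1.0
    M = [[-s / 2j, -a * m], [a * m, -s / 2j]]
    return matmul(M, expAt(t))

t, h = 0.7, 1e-6
Gt, Gm = G(t), G(-t)
# antisymmetry: G^T(-t) = -G(t)
print(all(abs(Gm[j][i] + Gt[i][j]) < 1e-9 for i in range(2) for j in range(2)))
# homogeneous equation away from t = 0: dG/dt = A G (central difference)
dG = [[(G(t + h)[i][j] - G(t - h)[i][j]) / (2 * h) for j in range(2)]
      for i in range(2)]
AG = matmul(A, Gt)
print(all(abs(dG[i][j] - AG[i][j]) < 1e-5 for i in range(2) for j in range(2)))
```

Both checks pass for any value of a, in agreement with a being a free parameter in the formula.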
7.6 Path Integrals for Free Fermionic Fields
We already know some fermionic Lagrangians: Weyl, Majorana, Yukawa.
The Dirac Lagrangian is

L_D = ψ_L†σ∂ψ_L + ψ_R†σ̄∂ψ_R + im(ψ_R†ψ_L + ψ_L†ψ_R).

It describes a particle-antiparticle pair, for example the electron and the positron. Unlike Majorana's Lagrangian, here the antiparticle is different from the particle.
Using the four-component Dirac spinor

Ψ_D = ( ψ_L )
      ( ψ_R )

we can express the Dirac Lagrangian in the compact form

L_D = Ψ̄_D(i∂̸ − mI)Ψ_D.


Let us write the Feynman rules for free theories. We simply add a source coupling to get the generating functional:

Z = (1/N) ∫ e^{i∫[L_D + iΨ̄_Dζ + iζ̄Ψ_D] dx} DΨ_D DΨ̄_D.

Denote by Ψ̂_D(p) the Fourier transform of Ψ_D. Then the equation for the propagator in momentum variables reads

(−p̸ − mI)Ψ̂_D(p) = I.

Let us hit this equation with the operator p̸ − mI. We obtain

(−p² + m²)Ψ̂_D(p) = p̸ − mI,

which yields

Ψ̂_D(p) = (−p² + m²)^{−1}(p̸ − mI).
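The step from (−p̸ − mI) to (−p² + m²) uses the Clifford identity p̸p̸ = p²I. A quick numerical check in an explicit chiral (Weyl) representation of the γ-matrices, which is a natural but assumed choice here since the text works with ψ_L, ψ_R:

```python
I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
Z2 = [[0, 0], [0, 0]]

def block(a, b, c, d):
    """Assemble a 4x4 matrix from the 2x2 blocks [[a, b], [c, d]]."""
    return [a[i] + b[i] for i in range(2)] + [c[i] + d[i] for i in range(2)]

def scal(s, m):
    return [[s * x for x in row] for row in m]

# Weyl (chiral) representation: g0 off-diagonal identity, gi built from Pauli matrices
g0 = block(Z2, I2, I2, Z2)
gammas = [g0] + [block(Z2, s, scal(-1, s), Z2) for s in (sx, sy, sz)]

def matmul4(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def slash(p):
    # pslash = p0*g0 - p1*g1 - p2*g2 - p3*g3 (lowering the spatial indices)
    out = [[0] * 4 for _ in range(4)]
    for c, g in zip((p[0], -p[1], -p[2], -p[3]), gammas):
        for i in range(4):
            for j in range(4):
                out[i][j] += c * g[i][j]
    return out

p = (2.0, 0.5, -1.0, 0.25)
p2 = p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2     # Minkowski square
sq = matmul4(slash(p), slash(p))
print(all(abs(sq[i][j] - (p2 if i == j else 0)) < 1e-9
          for i in range(4) for j in range(4)))        # True: pslash^2 = p^2 I
```

With p̸² = p²I one has (p̸ − mI)(−p̸ − mI) = −p² + m², which is exactly the manipulation used above.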

8 Renormalization
Unfortunately many of the diagrams have divergent amplitudes. We have already considered the four-point function for the diagram on Fig. 8. In the case d ≥ 4, which we need in QFT, the integral is divergent at ∞. This is the so-called ultra-violet (UV) divergence.
Physicists have invented a number of ways to overcome this difficulty. The objective of this section is to give an idea of some of these methods.
8.1 Renormalizability of Field Theories
8.2 Dimensional Regularization

9 Quantum electrodynamics
10 Gauge theories
10.1 Chern-Simons theory
10.2 Yang-Mills Theories

11 Appendix
11.1 Linear and Multilinear Algebra
Here we give some definitions. For a more detailed treatment of the topics see e.g.
[13, 14].
Let U, V and E be vector spaces.

Definition 11.1 We say that a mapping φ from U ⊕ V to E is bilinear if it is linear in each argument when the other is fixed.

Exactly in the same manner we define a multilinear mapping from ⊕_j V_j to E.

Definition 11.2 Let the mapping φ from V ⊕ U to E be bilinear. We say that E is a tensor product of U and V if the image of φ spans E and if ψ is a bilinear map from U ⊕ V to some vector space F, then there exists a linear mapping χ from E to F such that ψ = χ ◦ φ. In other words the following diagram is commutative:

          φ
  V ⊕ U -----> E
       \       |
        \ ψ    | χ
         \     v
          ---> F
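In coordinates the universal property can be made concrete: realizing u ⊗ v as the Kronecker product of coordinate vectors, any bilinear map ψ(u, v) = Σ M_{ij} u_i v_j factors through a linear map χ on the tensor product. A small sketch (the sample matrix and vectors are illustrative):

```python
def kron(u, v):
    """Coordinate realization of the tensor product u (x) v."""
    return [ui * vj for ui in u for vj in v]

def bilinear(M, u, v):
    """The bilinear map psi(u, v) = sum_ij M[i][j] u_i v_j."""
    return sum(M[i][j] * u[i] * v[j]
               for i in range(len(u)) for j in range(len(v)))

def chi(M, w, n, m):
    """The induced linear map chi on R^{n*m}, with chi(u (x) v) = psi(u, v)."""
    return sum(M[i][j] * w[i * m + j] for i in range(n) for j in range(m))

u, v = [1, 2], [3, 4, 5]
M = [[1, 0, 2], [0, -1, 3]]
print(kron(u, v))                                      # [3, 4, 5, 6, 8, 10]
print(bilinear(M, u, v) == chi(M, kron(u, v), 2, 3))   # True
```

The equality ψ = χ ◦ φ holds for every u and v, which is exactly the commutativity of the diagram above in this coordinate model.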

11.2 Differential Geometry


The main object of differential geometry is the notion of connection. This notion makes
precise the idea of transporting data along a curve or family of curves in a parallel and
consistent manner.

11.3 Classical Mechanics


Here we give a brief account on some notions of classical mechanics. Our exposition
follows [1] and for more thorough course the same book is excellent. Of course there are
many other books that could serve the purpose.
We start with the notion of a functional. Roughly speaking this is a function whose arguments are themselves functions. We will be interested in functionals defined as follows, in a particular but very important case describing classical mechanics. Let L(q̇, q) be a function defined on an open subset R^d × U of R^{2d}. Let q(t) be a smooth path taking values in U.
The action is the functional (i.e. a function whose variable is the path q(t)):

S(q) = ∫_{t_0}^{t_1} L(q̇(t), q(t)) dt.

The function L is called the Lagrangian. Most of the time we will consider Lagrangians of the form

L = T − U(q) = ||q̇||²/2 − U(q).

The quadratic form T is called the kinetic energy and the function U is called the potential (energy).
One can define the variational derivative of S with respect to the path q as usual. Let δq(t) be a small change of the path q(t). The difference:

δS(q) = S(q + δq(t)) − S(q)

is small and can be written as

δS(q) = F (q)δq(t) + O(|δq|2 ).

The function F(q) is called the variational derivative of S and is denoted by δS/δq.
The paths for which the variational derivative vanishes satisfy the Euler-Lagrange equations:

d/dt (∂L/∂q̇_j) − ∂L/∂q_j = 0,   j = 1, . . . , d.   (11.1)
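For a Lagrangian L = q̇²/2 − U(q), equation (11.1) reads q̈ = −U′(q), and one can check it numerically with finite differences. A sketch for the illustrative harmonic potential U(q) = q²/2, whose exact solution is q(t) = sin t:

```python
import math

def dU(q):
    """U'(q) for the harmonic potential U(q) = q^2 / 2."""
    return q

def el_residual(q, t, h=1e-4):
    """Residual of the Euler-Lagrange equation qddot + U'(q) = 0
    for L = qdot^2/2 - U(q), using a central second difference."""
    qddot = (q(t + h) - 2 * q(t) + q(t - h)) / h ** 2
    return qddot + dU(q(t))

# q(t) = sin t solves qddot = -q, so the residual should vanish
print(max(abs(el_residual(math.sin, t)) for t in (0.3, 1.0, 2.5)) < 1e-5)  # True
```

The residual is at the level of the finite-difference error, confirming that the path makes the variational derivative vanish.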
We will also need the equivalent formulation of classical mechanics (and field theory) in Hamiltonian form. Let us first recall the notion of the Legendre transform. Consider a convex (or concave) function f(x) on a vector space V and define the function of x ∈ V and p ∈ V^*

F (x, p) = (p, x) − f (x).

For a fixed p ∈ V ∗ find the unique extremum of F (x, p) as a function in x. This yields
the equation:

p − Df (x) = 0.

Due to the convexity of f this equation has a unique solution x = x_0(p) ∈ V. The Legendre transform of f is the function L(f) defined by

L(f)(p) = (p, x_0(p)) − f(x_0(p)).
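The construction can be checked numerically: maximizing px − f(x) over a grid recovers L(f). For f(x) = x²/2 the Legendre transform is again p²/2 (grid and sample points below are illustrative):

```python
def legendre(f, p, xs):
    """Numerical Legendre transform L(f)(p) = max_x (p*x - f(x)),
    taking the extremum over the grid xs (f assumed convex)."""
    return max(p * x - f(x) for x in xs)

xs = [i * 0.001 for i in range(-5000, 5001)]   # grid on [-5, 5]
f = lambda x: 0.5 * x * x                       # f(x) = x^2/2

# extremum at x0 = p, so L(f)(p) = p^2/2
for p in (0.5, 1.0, 2.0):
    print(round(legendre(f, p, xs), 4))   # 0.125, 0.5, 2.0
```

Applied to a Lagrangian in the variable q̇, this maximization is exactly the passage to the Hamiltonian described next.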

Using the Legendre transform we can arrive at the Hamiltonian formulation of classical mechanics in the following manner. Let L(q̇, q) be a Lagrangian. Fix the variable q and perform the Legendre transform with respect to the variable q̇. We obtain a new function H(p, q), which is called the Hamiltonian. Then the Euler-Lagrange equations (11.1) are equivalent to the Hamiltonian system of equations

q̇_j = ∂H/∂p_j,   ṗ_j = −∂H/∂q_j,   j = 1, . . . , d.   (11.2)
We will often use the following terminology. The variables q will be called position variables, which implies that the q̇ are velocities. The variables p are momenta. The set where the position variables are defined is called the configuration space. The entire space where the Hamiltonian is defined is called the phase space.

11.4 Functional Analysis and Differential Equations


We will quite often need Hilbert spaces. Here is the definition.

Definition 11.3 A Hilbert space H is a linear space over R (or C) with an inner product (x, y) (a Hermitian inner product in the complex case) which is complete with respect to the norm ||x|| = √(x, x).

Example 11.4 Let X be a set with a Lebesgue measure dµ. Denote by L²(X) the space of complex functions with integrable square, i.e. with ∫_X |f|² dµ < ∞. Define a scalar product by ∫_X f ḡ dµ, where f, g ∈ L²(X). Then by the theory of the Lebesgue integral, L²(X) is a Hilbert space. This is the most important example.

The continuous operators are exactly those which satisfy ||Ax|| ≤ c||x|| with some constant c. They are also called bounded. However, we will need unbounded operators as well as operators that are defined only on a subspace of H. For example the operators x̂_j in L²(R^n) (multiplication by x_j) are neither defined everywhere nor bounded. The same is true for the Schrödinger operator −Δ + U(x). We will need to find the spectrum of such operators; in fact this problem is at the center of quantum mechanics. More generally we will need to find solutions of partial differential equations. Even when they have "good" solutions (which is not so often) it is very convenient to have broader spaces of "functions" to operate with. The corresponding spaces are various spaces of distributions (= generalized functions). We are going to work with the space of tempered distributions, which we define below. First define the Schwartz space S of all infinitely differentiable functions on R^n which decay at infinity faster than any power of x. We define the topology on this space by the semi-norms

p_{α,β}(φ) = sup_{x∈R^n} |x^α D^β φ(x)|.


The space of continuous functionals on this space is called the space of tempered distri-
butions. It is denoted by S ∗ . A very important example of a tempered distribution is
Dirac’s delta-function. It is defined as

δ(f ) = f (0).

The Fourier transform of a function f ∈ S(R^n) is defined by the formula

f̂(ξ) = ∫_{R^n} f(x) e^{−i(x,ξ)} dx.

The inverse transform is given by

f(x) = (1/(2π)^n) ∫_{R^n} f̂(ξ) e^{i(x,ξ)} dξ.

We can define the Fourier transform of a tempered distribution by the formula

F̂(φ) := F((1/(2π)^n) φ̂),

where F ∈ S^* and φ ∈ S is any test function. Let us compute the Fourier transform of δ. We have

δ̂(φ) = δ((1/(2π)^n) φ̂) = (1/(2π)^n) φ̂(0) = (1/(2π)^n) ∫_{R^n} φ(x) dx.

This yields that δ̂ = 1/(2π)^n. In physics and mathematics we often need δ-functions supported at more complicated sets than one point.
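With the conventions above the transform can be checked numerically on a rapidly decaying function. For the Gaussian f(x) = e^{−x²/2} in one dimension, f̂(ξ) = √(2π) e^{−ξ²/2}; a Riemann-sum sketch (the truncation a and step count n are ad hoc choices):

```python
import math

def fhat(f, xi, a=20.0, n=4000):
    """Riemann-sum approximation of fhat(xi) = int f(x) e^{-i x xi} dx
    over the truncated interval [-a, a]."""
    h = 2 * a / n
    total = 0j
    for k in range(n):
        x = -a + k * h
        total += f(x) * complex(math.cos(x * xi), -math.sin(x * xi))
    return total * h

f = lambda x: math.exp(-x * x / 2)
for xi in (0.0, 1.0, 2.0):
    exact = math.sqrt(2 * math.pi) * math.exp(-xi * xi / 2)
    print(abs(fhat(f, xi) - exact) < 1e-6)   # True
```

Since the Gaussian decays fast, the truncation error is negligible and the sum reproduces the closed form to high accuracy.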
A Hermitian operator A is an operator satisfying the equality (Ax, y) = (x, Ay) for all vectors from the domain of definition of A.
Differential equations
Spectral theorem
Representation theory

11.5 Relativistic Notations


Minkowski space is the space R^n with the Minkowski metric, i.e. a metric of signature (1, −1, . . . , −1). The Minkowski inner product is defined by (x, y)_M := x_0y_0 − x_1y_1 − . . . − x_{n−1}y_{n−1}.
In Minkowski space we define the light cone by the equation x_0² − x_1² − . . . − x_{n−1}² = 0. A point x = (x_0, x_1, . . . , x_{n−1}) is said to be space-like if (x, x)_M < 0 and time-like if (x, x)_M > 0.
We would like to introduce a time ordering. If x, y are points, we say that x chronologically precedes y if y_0 > x_0 and (x − y, x − y)_M > 0.
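The classification of points by the sign of the Minkowski square is easy to put into code; a small sketch (the four-dimensional sample points are chosen arbitrarily):

```python
def minkowski(x, y):
    """(x, y)_M = x0*y0 - x1*y1 - ... - x_{n-1}*y_{n-1}."""
    return x[0] * y[0] - sum(a * b for a, b in zip(x[1:], y[1:]))

def classify(x):
    q = minkowski(x, x)
    return "time-like" if q > 0 else "space-like" if q < 0 else "light-like"

print(classify((2, 1, 0, 0)))    # time-like:  4 - 1 > 0
print(classify((1, 2, 0, 0)))    # space-like: 1 - 4 < 0
print(classify((1, 1, 0, 0)))    # light-like: on the light cone
```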

11.6 Miscellaneous Notations

∇ · A = ∂_1A_1 + ∂_2A_2 + ∂_3A_3, called the divergence of A;
∇ × A = (∂_2A_3 − ∂_3A_2, ∂_3A_1 − ∂_1A_3, ∂_1A_2 − ∂_2A_1), called the rotor, or curl, of A.
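These two operators satisfy the identity ∇ · (∇ × A) = 0, which can be checked with central finite differences on a sample polynomial field (the field and evaluation points below are illustrative):

```python
def curl(F, p, h=1e-5):
    """Numerical curl of a vector field F: R^3 -> R^3 at the point p."""
    def d(i, j):   # dF_i / dx_j by a central difference
        q1, q2 = list(p), list(p)
        q1[j] += h; q2[j] -= h
        return (F(q1)[i] - F(q2)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

def div(F, p, h=1e-5):
    """Numerical divergence of F at the point p."""
    def d(i):      # dF_i / dx_i by a central difference
        q1, q2 = list(p), list(p)
        q1[i] += h; q2[i] -= h
        return (F(q1)[i] - F(q2)[i]) / (2 * h)
    return sum(d(i) for i in range(3))

A = lambda p: (p[1] * p[2], p[0] ** 2, p[0] * p[1] * p[2])
curlA = lambda p: curl(A, p)
print(abs(div(curlA, (1.0, 2.0, 3.0))) < 1e-4)   # True: div(curl A) = 0
```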

References
[1] V. I. Arnol'd, Mathematical Methods of Classical Mechanics, Moscow.
[2] F. A. Berezin, The Method of Second Quantization, Academic Press, 1966.
[3] B. Bollobás.
[4] R. Casalbuoni, Advanced Quantum Field Theory, lectures given at Florence University during the academic year 1998/99.
[5] P. Deligne, P. Etingof, D. S. Freed, L. C. Jeffrey, D. Kazhdan, J. W. Morgan, D. R. Morrison, and E. Witten, editors, Quantum Fields and Strings: A Course for Mathematicians, Vol. 1, 2, American Mathematical Society, Providence, RI, 1999. Material from the Special Year on Quantum Field Theory held at the Institute for Advanced Study, Princeton, NJ, 1996-1997 (lecture notes also available online).
[6] Ph. Di Francesco, P. Mathieu, D. Sénéchal, Conformal Field Theory, Springer, New York, 1997.
[7] I. Dolgachev, Introduction to String Theory, lecture notes, Ann Arbor; available online: http://www.math.lsa.umich.edu/ idolga/lecturenotes.html.
[8] B. Dubrovin, S. P. Novikov, A. Fomenko, Modern Geometry, Part 1 and Part 2, Springer, 1992.
[9] P. Etingof, Mathematical Ideas and Notions of Quantum Field Theory, MIT lecture notes, available online: http://math.mit.edu/ etingof/.
[10] L. D. Faddeev, O. A. Yakubovsky, Lectures in Quantum Mechanics for Students in Mathematics, Leningradskii Universitet, 1980 (in Russian). English translation: Lectures on Quantum Mechanics for Mathematics Students, L. D. Faddeev (Steklov Mathematical Institute) and O. A. Yakubovskii (St. Petersburg University), with an appendix by Leon Takhtajan, AMS, 2009.
[11] R. P. Feynman, The Character of Physical Law, Cox and Wyman Ltd., London, 1965.
[12] R. P. Feynman, R. B. Leighton and M. Sands, The Feynman Lectures on Physics, Addison-Wesley, 1963, Vol. III, Chapter 1.
[13] I. M. Gel'fand, Lectures on Linear Algebra.
[14] W. H. Greub, Multilinear Algebra, Springer, 1967.
[15] C. Itzykson and J. B. Zuber, Quantum Field Theory, McGraw-Hill, 1980.
[16] M. Kaku, Quantum Field Theory: A Modern Introduction, Oxford University Press, 1993.
[17] M. Kontsevich, Intersection theory on the moduli space of curves and the matrix Airy function, Comm. Math. Phys., vol. 147 (1992), 1-23.
[18] M. Kontsevich, Vassiliev's knot invariants, Adv. Soviet Math., vol. 16, Part 2 (1993), 137-150.
[19] Mehta.
[20] M. E. Peskin, D. V. Schroeder, An Introduction to Quantum Field Theory, Perseus Books, Reading, Massachusetts, 1995.
[21] M. Polyak, Feynman diagrams for pedestrians and mathematicians, in: Graphs and Patterns in Mathematics and Theoretical Physics, edited by M. Lyubich and L. Takhtajan, Proceedings of Symposia in Pure Mathematics, AMS, 2005.
[22] J. Rabin, Introduction to QFT for mathematicians, in: D. Freed and K. Uhlenbeck, eds., Geometry and Quantum Field Theory, American Mathematical Society, 1995.
[23] P. Ramond, Field Theory: A Modern Primer, 2nd ed., Westview, 2001.
[24] L. Ryder, Quantum Field Theory, Cambridge University Press.
[25] L. Schwartz, Cours d'analyse, 1981 (in French).
[26] M. E. Taylor, Partial Differential Equations I. Basic Theory, Springer, 1996.
[27] R. Ticciati, Quantum Field Theory for Mathematicians, Cambridge University Press, 1999.
[28] E. Witten, Quantum field theory and the Jones polynomial, Comm. Math. Phys. 121 (1989), 351-399.
[29] E. Witten.
[30] H. Woolf, ed., Some Strangeness in the Proportion, Addison-Wesley, 1980.
