Alexandre Zagoskin
Quantum Theory
of Many-Body
Systems
Techniques and Applications
Second Edition
Graduate Texts in Physics
Alexandre Zagoskin
Department of Physics
Loughborough University
Leicestershire
UK
Over the last 15 years, there have been considerable advances in
condensed matter physics: graphene, pnictide superconductors, and topological
insulators, to name just a few. The understanding, and to a large degree the very
discovery, of these new phenomena required the use of advanced theoretical tools.
The knowledge of the basic methods of quantum many-body theory thus becomes
more important than ever for each student in the field.
Some of the most challenging current problems stem from the spectacular
progress in quantum engineering and quantum computing, more specifically, in
developing solid-state based—mostly superconducting—quantum bits and qubit
arrays. During this short period, we have gone from the first experimental demonstration of coherent quantum tunnelling in single qubits (which are, after all, quite macroscopic objects) to precise manipulation of the quantum states of several qubits,
their quantum entanglement over macroscopic distances and, recently, signatures
of quantum coherent behaviour in devices comprising hundreds of qubits. The
difficulty is that it is impossible to directly simulate such large, partially coherent,
essentially nonequilibrium quantum systems, due to the sheer volume of compu-
tation—which was the motivation behind quantum computing in the first place. It
would seem that one needs a quantum computer in order to make a quantum
computer! The hope is that appropriate generalizations of the methods of non-
equilibrium many-body theory would provide good enough approximations and
keep the research going until the time when (and if) the task can be handed to
quantum computers themselves.
Given the above considerations, I did not feel the need to change the scope or
the approach of the book. I have, though, added a new chapter, in order to
introduce bosonization and elements of conformal field theory. These are beautiful
and powerful ideas, especially useful when dealing with low-dimensional systems
with interactions, and belong to the essential condensed matter theory toolkit. I
have also corrected some typos—hopefully introducing fewer new ones in the
process.
In addition to those of my teachers and colleagues, whom I had the opportunity
to thank in the preface to the first edition, I would like to express my gratitude to
Profs. A. N. Omelyanchouk, F. V. Kusmartsev, Jeff Young, and Franco Nori, and
to all my colleagues at the University of British Columbia, D-Wave Systems Inc.,
RIKEN, and Loughborough University, with whom I had the pleasure and honour
to collaborate during this time. My special thanks to Dr. Uki Kabasawa, who
translated the first edition of this book into Japanese, and whose questions and
helpful remarks contributed to improving the book you hold.
This book grew out of lectures that I gave in the framework of a graduate course in
quantum theory of many-body systems at the Applied Physics Department of
Chalmers University of Technology and Göteborg University (Göteborg, Sweden)
in 1992–1995. Its purpose is to give a compact and self-contained account of basic
ideas and techniques of the theory from the ‘‘condensed matter’’ point of view. The
book is addressed to graduate students with knowledge of standard quantum
mechanics and statistical physics. (Hopefully, physicists working in other fields
may also find it useful.)
The approach is—quite traditionally—based on a quasiparticle description of
many-body systems and its mathematical apparatus—the method of Green’s
functions. In particular, I tried to bring together all the main versions of diagram
techniques for normal and superconducting systems, in and out of equilibrium (i.e.,
zero-temperature, Matsubara, Keldysh, and Nambu–Gor’kov formalisms) and
present them in just enough detail to enable the reader to follow the original papers
or more comprehensive monographs, or to apply the techniques to his own
problems. Many examples are drawn from mesoscopic physics—a rapidly
developing chapter of condensed matter theory and experiment, which deals with
macroscopic systems small enough to preserve quantum coherence throughout
their volume; this seems to me a natural ground to discuss quantum theory of
many-body systems.
The plan of the book is as follows.
In Chapter 1, after a semi-qualitative discussion of the quasiparticle concept,
Green’s function is introduced in the case of one-body quantum theory, using
Feynman path integrals. Then its relation to the S-operator is established, and the
general perturbation theory is developed based on operator formalism. Finally, the
second quantization method is introduced.
Chapter 2 contains the usual zero-temperature formalism, beginning with the
definition, properties, and physical meaning of Green’s function in the many-body
system, and then building up the diagram technique of the perturbation theory.
In Chapter 3, I present equilibrium Green’s functions at finite temperature, and
then the Matsubara formalism. Their applications are discussed in relation to linear
response theory. Then the Keldysh technique is introduced as a means to handle
essentially nonequilibrium situations, illustrated by an example of quantum
Chapter 1
Basic Concepts

A popular wisdom.
From the book "Physicists Keep Joking"

1.1 Introduction: Whys and Hows of Quantum Many-Body Theory
Technically speaking, physics deals only with one-body and many-body problems
(because the two-body problem reduces to the one-body case, while the three-body
problem does not, and is already insoluble (Fig. 1.1)). Still, what an average physicist thinks of as "many" in this context is probably something of the order of
10^19–10^23, the number of particles in a cubic centimeter of a gas or a solid, respectively. When you have this many particles on your hands, you need a many-body
theory. At these densities, the particles will spend enough time within several de Broglie
wavelengths of each other, and therefore we need a quantum many-body theory.
(A good thing too: what we really should not mess with is classical chaos!)
The real reason why you want to deal with such a large collection of particles in
the first place, instead of quietly discussing a helium atom, is of course that 10^23 is
much closer to infinity. The epigraph, or intuition, or both, tell us that the infinite
number of particles (or legs) is almost as easy to handle as one, and much, much
easier than, say, 3, 4, or 7.
The basic idea of the approach is that instead of following a large number of
strongly interacting real particles, we should try to get away with considering a rela-
tively small number of weakly interacting quasiparticles, or elementary excitations.
An elementary excitation is what its name implies: something that appears in the
system after it has suffered an external perturbation, and to which the reaction of the
system to this perturbation can be almost completely ascribed—like a ripple on the
surface of a pond, only in quantum theory those ripples will be quantized. In a crystal
lattice, such quantized ripples are phonons, sound quanta, which carry both energy
and quasimomentum, and only weakly interact with each other and, e.g., electrons.
Strike a solid, or heat it, and you excite (that is, generate) a whole bunch of phonons,
which will carry away the energy and momentum of your influence (Fig. 1.1).
Phonons form a rather dilute Bose gas, and therefore are much easier to deal with,
than the actual particles—atoms or ions—that constitute the lattice. The phonons are
called quasiparticles not only because they don’t exist outside the lattice; they also
have a finite lifetime, unlike the stable "proper" particles. A key point here is that the
quasiparticles must be stable enough: if they decay faster than they can be created,
the whole description loses its meaning.
Let us now consider a system of interacting electrons in a metal lattice (which
we will describe by the standard “jellium” model of uniformly distributed positive
charge, neutralizing the total charge of free electrons). Here we have real particles,
which interact through strong Coulomb forces that, moreover, have an infinite
range (the force decays only as 1/r²). For a given electron, we must thus take
into account the influence of all the other electrons. But with so many independent contributions, nothing actually depends on the details of the behavior of any one of them! We can safely replace their action
by some average field, depending on averaged electronic density n(r), thus arriving
at the mean field approximation (MFA). We immediately use it to calculate the
screening of the Coulomb interaction, to see that not only particles but interactions as
well are changed in many-body systems.
Suppose we place an external charge Q in the system. It will create a potential φ(r), which will change the initial uniform distribution of electronic density,

n = p_F³ / (3π²ℏ³). (1.1)

Here p_F = √(2mμ) is the Fermi momentum, and we have used the well-known relation between p_F and the density of the electron gas. Of course, if the electronic density becomes coordinate dependent, so does the Fermi momentum: p_F → p_F(n(r)). In equilibrium, the electrochemical potential of the electrons must be constant, that is,

μ = p_F²(n(r)) / (2m) + eφ(r) = const, (1.2)
and we easily find that
n(r) = (2m(μ − eφ(r)))^{3/2} / (3π²ℏ³). (1.3)
If there is no external potential, we return to the unperturbed case (1.1).
Now let us employ electrostatics. The potential must satisfy Poisson's equation,

∇²φ(r) = −4πρ(r),

where ρ is the charge density induced on the neutral background by the probe charge, and δn(r) = n(r) − n is the change in electronic density. (The positive "jellium" neutralized the negative charge of the electrons, remember? Besides, we assume that it has unit dielectric permittivity, ε = 1.) Therefore, we can write

∇²φ(r) = 4πe [ (2m(μ − eφ(r)))^{3/2} − (2mμ)^{3/2} ] / (3π²ℏ³). (1.4)
This is the Thomas–Fermi equation, first obtained in the theory of electron density
distribution in atoms.
Generally, this nonlinear equation can be solved only numerically. If, though, we
assume that eφ is much smaller than the Fermi energy, μ, and expand the right-hand
side to first order in eφ/μ, we find

δn(r) = −(3/2) (eφ(r) n / μ), (1.5)

so that the linearized equation becomes

∇²φ(r) = (1/λ_TF²) φ(r), (1.6)

where

λ_TF = μ^{1/2} / (√(6π) e n^{1/2}) = (π^{1/6} ℏ / (2 · 3^{1/6} e m^{1/2})) n^{−1/6}. (1.7)
To find the physical meaning of λ_TF, let us solve (1.6) for φ(r), imposing the condition that at small distances φ(r) ≈ Q/r. This is reasonable, because close enough
to the probe charge—at r ≪ n^{−1/3}—there will be, on average, no electrons around,
and we should observe the same potential as in vacuum. (Of course, the constant
potential of the "jellium" does not matter.) Therefore, we obtain the following result:
φ(r) = (Q/r) exp[−r/λ_TF]. (1.8)
As you see, the Coulomb potential is modified, and it now exponentially decays
at a distance of order λTF from the source—it was screened.1
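The claim that (1.8) solves (1.6) is easy to check numerically. The sketch below uses the radial Laplacian ∇²φ = (1/r) d²(rφ)/dr² for a spherically symmetric potential; the values Q = λ_TF = 1 are illustrative, not from the text:

```python
import numpy as np

# Check that the screened potential (1.8), φ(r) = (Q/r)·exp(−r/λ),
# satisfies the linearized equation (1.6), ∇²φ = φ/λ², away from the source.
Q, lam = 1.0, 1.0
r = np.linspace(0.5, 5.0, 20001)
dr = r[1] - r[0]

u = Q * np.exp(-r / lam)                       # u = r·φ(r)
u_dd = (u[2:] - 2*u[1:-1] + u[:-2]) / dr**2    # finite-difference u''
lhs = u_dd / r[1:-1]                           # ∇²φ = u''/r
rhs = (u / r)[1:-1] / lam**2                   # φ/λ²

print(np.max(np.abs(lhs - rhs)))               # ~0 (discretization error only)
```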
Physically, when a positive charge is brought into our system, the electrons will
be attracted to it, forming a negatively charged cloud. Formula (1.8) tells us that
the total charge of this cloud is equal and opposite to the external charge, so that
for the electrons at r ≫ λ_TF it is completely compensated. The size of the cloud is
determined by the interplay between the Coulomb attraction of electrons to the charge
and their Coulomb repulsion from each other. The latter we took into account in our
approximate formula (1.5). In the case of a negative external charge, the electrons will
be repelled, laying bare the positive charge of the "jellium" background, which again
will compensate for it. In either case, the charge is surrounded by a cloud of equal
and opposite charge. This holds, of course, for every single electron in
the system (Figs. 1.2, 1.3).
Now we can refine the criterion for the applicability of our “averaging” approach.
Previously, we thought that since the Coulomb interaction reaches to infinity, the
number of electrons acting on any single electron is always very large—the same as
the number of electrons in the whole system, and their action could be replaced by
¹ The dependence (1.8) is often called the Yukawa potential, though in the context of screening of
the Coulomb potential in (classical) plasma it was derived by Debye and Hückel, with a different
screening length, λ_D ∼ √(k_B T/(ne²)). (The difference is due to the use of the Boltzmann instead of
the Fermi distribution in a nondegenerate gas.)
the action of some average charge density, ne. Now we see that the actual number of
electrons influencing any single electron is only of order nλ_TF³. Then our considerations are exact as long as

nλ_TF³ ≫ 1. (1.9)
Since

(n^{1/3} λ_TF)² ∝ μ / (n^{1/3} e²) ∝ (p_F² · ℏ) / (m · p_F · e²) ∝ ℏv_F / e²,

and recalling that e²/(ℏc) ≈ 1/137, we find on the left-hand side 137 v_F/c, a ratio of
order one.
This is a nasty surprise, of the sort that abounds in many-body theory. It
indicates that even if our approximation gives a qualitatively correct answer (which
it does), we must be ready to take into account the effects of deviations from the
“mean field” picture, and preferably in a systematic way. (A pleasant surprise, which
also occasionally can be encountered here, is that mean field approximation often—
though not always—gives excellent results even outside its domain of applicability.)
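To put numbers to the estimate above, here is a quick evaluation of λ_TF from (1.7) in Gaussian units. The copper-like density and Fermi energy are illustrative values, not taken from the text:

```python
import math

# Thomas–Fermi screening length, λ_TF = sqrt(μ / (6π e² n)), cf. (1.7),
# for copper-like parameters (illustrative numbers).
e = 4.803e-10            # electron charge, esu
erg_per_eV = 1.602e-12
mu = 7.0 * erg_per_eV    # Fermi energy ≈ 7 eV
n = 8.5e22               # electron density, cm^-3

lam_TF = math.sqrt(mu / (6 * math.pi * e**2 * n))
print(f"lambda_TF ≈ {lam_TF*1e8:.2f} Å")          # sub-ångström
print(f"n * lambda_TF^3 ≈ {n * lam_TF**3:.3f}")   # not large — the "nasty surprise"
```

The screening length comes out comparable to the interparticle spacing, so the criterion (1.9) is not comfortably satisfied, in line with the discussion above.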
One all-important qualitative conclusion that we can make, based on our results, is
that in metal, an electron is surrounded by the screening cloud of other electrons. Any
force applied to it will have to accelerate the whole cloud. Therefore, the electron
will behave as if it had a larger effective mass, m ∗ , than in vacuum! (We have
not yet considered its interactions with the crystal lattice itself.) Instead of being a
point particle, it acquires a finite size, the size of the cloud. We can thus call this
complex entity "electron + cloud" a quasiparticle. As they say, an electron is dressed.
(Logically, the “lone” electron is called a bare particle.)
The quasielectrons are what we will see when probing the system. Pleasantly,
the problem of considering these quasielectrons, interacting through some sort of
short-range potential (even if it is not exactly the Yukawa potential we derived), is
much simpler than the initial problem dealing with electrons strongly interacting
with infinite-range Coulomb forces. We need only accurately find effective masses
and potentials. This “only” is actually the very subject of the many-body theory!
m r̈(t) = eE(r(t)),
where the electric field E arises due to local deviation of the electronic density from
its equilibrium value and satisfies the Maxwell equation
∇_r · E(r) = 4πe [ ∑_{i≠0} δ(r − r_i(t)) − n ].
Here we have used, for the moment, the exact electronic density at the point
r, ∑_{i≠0} δ(r_i − r) (summation is taken over all other electrons). If we now replace
the exact density of the displaced electrons by its average and linearize, we find for the field acting on a given electron

E(r_0) = −4πe r(t) n.
Substituting this into the equation of motion, performing Fourier transformation over
frequencies, and inserting the expression obtained for r back into the Maxwell
equation, we find
r(ω) = − eE(ω) / (mω²); (1.12)

E(ω) = (4πe²n / (mω²)) E(ω). (1.13)

The result is consistent if the frequency equals

ω = ω_p ≡ √(4πe²n / m), (1.14)
the plasma frequency. This is the frequency of small oscillations of a uniform elec-
tron gas. The period of plasma oscillations gives the characteristic time of any charge
redistribution in the metal.
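For orientation, (1.14) is easy to evaluate. The sketch below uses Gaussian units and a copper-like density (illustrative numbers, not from the text), giving ℏω_p of order 10 eV:

```python
import math

# Plasma frequency ω_p = sqrt(4π e² n / m), cf. (1.14), Gaussian units,
# for a copper-like electron density (illustrative parameters).
e = 4.803e-10     # electron charge, esu
m = 9.109e-28     # electron mass, g
n = 8.5e22        # electron density, cm^-3
hbar = 1.055e-27  # erg·s

omega_p = math.sqrt(4 * math.pi * e**2 * n / m)
print(f"omega_p ≈ {omega_p:.2e} rad/s")                    # ~10^16 rad/s
print(f"hbar*omega_p ≈ {hbar*omega_p/1.602e-12:.1f} eV")   # ~10 eV
```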
Now we see that the screening cloud will be able to follow the electron only as
long as its velocity

v ≪ λ_TF ω_p. (1.15)
Otherwise, the surrounding electrons simply will not have time to react! (Figs. 1.4, 1.5).
The quanta of plasma oscillations are called plasmons. They can propagate across
the system and are created whenever the charge neutrality of the metal is disturbed.
This is yet another example of quasiparticles. The screening of Coulomb potential,
e.g., and dressing of bare electrons in the metal can be directly described in terms of
plasmons.
We have already mentioned phonons. Interactions of the electrons with the crystal
lattice can lead to what can be described as a phonon cloud around an electron,
forming a polaron. Since the characteristic phonon frequency (Debye frequency,
ω_D) is much lower than ω_p in metals, electrons there always leave the phonon cloud
behind. Nevertheless, such a cloud can be run into by another electron; as a result, an
effective electron–electron interaction arises, which can lead to such a spectacular
phenomenon as superconductivity.
1.2 Propagation Function in a One-Body Quantum Theory
so that the future does not affect the past. (It would not be this easy if we had to deal
with relativistic covariance!)
Let us now suppose that the particle at the initial moment is strictly localized:
Ψ(x′, t′) = δ(x′ − x₀). Then from (1.16)

Ψ(x, t) = K(x, t; x₀, t′).

That is, more specifically, the propagator is the transition amplitude of the particle
between the points (x′, t′) and (x, t), and its squared modulus gives the transition
probability.
In (1.16) we did not specify the moment t′, except that it must precede the
observation moment t. Then, for some t″ > t′ we obtain

Ψ(x, t) = ∫dx″ K(x, t; x″, t″) Ψ(x″, t″)
       = ∫dx″ ∫dx′ K(x, t; x″, t″) K(x″, t″; x′, t′) Ψ(x′, t′). (1.19)
This is the composition property of the propagator, and we will heavily use it later.
Of course, this is a reformulation of the Huygens principle, from the wave point of
view. But what does it mean from the particle point of view? If we want to know
the probability amplitude for a particle starting at (x_i, t_i) to reach the point x_f at t_f, then
at any intermediate moment t′ we must take into account all conceivable positions the
particle can occupy in order to obtain a proper result. This situation is often illustrated
by the famous double-slit experiment (we do not know for certain through which
slit the particle passed). It is a close, though slightly different, situation. There
(in the double-slit experiment) we know the relevant region in space over which we
should integrate (the slits), but we do not know when the particle passes it. Here we
know the relevant time, but we have to integrate over all the available space. You
can ponder how these two situations complement each other (just think about the
stationary wave propagation).
In principle, the above picture is almost identical to that of the Brownian motion of
a classical particle. The only difference is that there the reasoning applies to the
probabilities themselves rather than to their complex amplitudes, and this, as you know,
changes a lot.
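The composition property (1.19) is easy to verify numerically for the classical cousin of the propagator mentioned here, the Brownian transition probability, whose explicit form appears below as (1.28). A minimal sketch with illustrative parameters:

```python
import numpy as np

# Chapman–Kolmogorov check of the composition property, cf. (1.19),
# for the 1d diffusion kernel P(x,t|x',t') of (1.28):
# propagating 0 → 2 directly must equal composing 0 → 1 → 2.
D = 0.5  # diffusion coefficient (illustrative)

def P(x, t, x0, t0):
    return np.exp(-(x - x0)**2 / (4*D*(t - t0))) / np.sqrt(4*np.pi*D*(t - t0))

x_mid = np.linspace(-20, 20, 4001)   # intermediate positions x''
dx = x_mid[1] - x_mid[0]

x, x0 = 1.3, -0.7
direct = P(x, 2.0, x0, 0.0)
composed = np.sum(P(x, 2.0, x_mid, 1.0) * P(x_mid, 1.0, x0, 0.0)) * dx

print(direct, composed)              # the two agree
```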
Returning to the properties of the propagator: we have decided that for negative
times it is strictly zero, while for positive times it certainly is not. This might imply
singular behavior at t − t′ = 0. Indeed, for t = t′ we must get an identity:

Ψ(x, t) ≡ ∫dx′ K(x, t; x′, t) Ψ(x′, t),

so that

K(x, t; x′, t) = δ(x − x′). (1.21)

We have now extracted all the information about the properties of the propagator that can be obtained from the most general principles of quantum mechanics.
To proceed, we need more specific data. One way is to use the Schrödinger equation.
The other is to formulate instead some statement from which the Schrödinger equation itself could be derived. We will take both ways: since we are here most
interested in the inner workings of the formalism, it is wise to run it in both
directions.
If the wave function obeys the Schrödinger equation,

( iℏ ∂/∂t − H(x, ∂_x, t) ) Ψ(x, t) = 0, (1.22)

then it follows from (1.16) that for t > t′ the propagator satisfies the same equation:

( iℏ ∂/∂t − H(x, ∂_x, t) ) K(x, t; x′, t′) = 0. (1.23)
where θ(t − t′) is the Heaviside step function. All these properties can be taken care
of by the following equation:

( iℏ ∂/∂t − H(x, ∂_x, t) ) K(x, t; x′, t′) = iℏ δ(x − x′) δ(t − t′). (1.25)

Indeed, for t > t′ this reduces to (1.23), while for t → t′ + 0 we can keep on the left-hand side of (1.25) only the term with ∂θ(t − t′)/∂t, which matches the right-hand
side.
From (1.25) we see that, up to a factor of i/ℏ, the propagator is Green's function
of the Schrödinger equation in the mathematical sense. (If L̂ is a linear differential
operator, then Green's function of the equation L̂ψ = 0 is the solution of
L̂G = −δ(x − x′).) Therefore, quantum-mechanical propagators are more often
called Green's functions, especially in the many-particle case; since our solution
vanishes for t < t′, it is called the retarded Green's function. We, though, will keep
calling the function K(x, t; x′, t′) a propagator, to stress that we are still working
on the one-particle problem.
It is easy to see that for a free particle of mass m (described by the Hamiltonian
H = −(ℏ²/2m)(∂_x)²) the propagator depends only on the differences of its arguments,
and the solution to (1.25) is given by

K₀(x − x′, t − t′) = (m / (2πiℏ(t − t′)))^{d/2} exp[ im(x − x′)² / (2ℏ(t − t′)) ] θ(t − t′). (1.26)
where the sum is taken over the residues at all the poles of K 0 (k, ω) as a function
of the complex variable ω, and the closed contour C consists of the real axis and
an infinitely remote half-circle (we assume that the integral converges). The sign
depends on whether the contour is traversed in the positive or negative direction.
Since the integrand contains the factor e^{−iωt} = e^{−it·Re ω + t·Im ω}, we must close the
contour in the upper half-plane of ω if t < 0 and in the lower half-plane if t > 0.
Then the factor e^{t·Im ω} ensures exponential decay of the integrand on the half-circle
and convergence of the integral (Jordan's lemma).
As a matter of fact, the only pole of K₀(k, ω), at ω = ℏk²/2m, lies on the very
integration contour, and adding an infinitesimal imaginary part to ω displaces the
pole into either the upper or the lower half-plane, which dramatically
changes the answer (Fig. 1.6).
If, for example, we write ω → ω + i0, the pole shifts below the real axis.
Then for t < 0 the contour does not contain any singularity, and K₀(x, t) will
be identically zero. This is exactly what we need: a retarded propagator! On the
other hand, for t > 0 the contour encloses the pole, yielding exp(−iℏk²t/2m). The
momentum integral is now straightforward, since it has a Gaussian form:

K₀(x, t) = θ(t) ∫_{−∞}^{∞} (dk/2π) e^{ikx − iℏk²t/2m},
We could leave the pole on the contour, with still another answer. This game of
infinitesimals reflects the fact that besides the differential Eq. (1.25), we need initial
conditions—e.g., that the solution is a retarded Green’s function.
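The ω → ω + i0 prescription can be checked directly: evaluate the momentum integral for K₀ numerically, with a small Gaussian damping e^{−εk²} added to make the oscillatory integral converge, and compare with the closed form (1.26). A sketch in units ℏ = m = 1, with illustrative x and t:

```python
import numpy as np

# Momentum integral for the retarded free propagator,
#   K0(x,t) = ∫ dk/2π · exp(ikx − iħk²t/2m),
# regularized by exp(−εk²), vs the closed form (1.26) in d = 1.
hbar = m = 1.0
x, t = 1.5, 0.8
eps = 1e-4

k = np.linspace(-200, 200, 400001)
dk = k[1] - k[0]
numeric = np.sum(np.exp(1j*k*x - 1j*hbar*k**2*t/(2*m) - eps*k**2)) * dk / (2*np.pi)

closed = np.sqrt(m / (2j*np.pi*hbar*t)) * np.exp(1j*m*x**2 / (2*hbar*t))  # (1.26)
print(numeric, closed)   # the two agree closely
```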
The above result was almost too easy to obtain, and it leaves an unpleasant after-
taste of having cheated. Indeed, it was not worth the trouble to introduce the prop-
agator “from the most general principles,” only to resort finally to the Schrödinger
equation. Of course, in the one-particle case the propagator may seem to be simply a
mathematical tool for solving the wave equation, without the important physics it carries
in the many-body case. But, as often happens in physics, a mathematical reformulation also provides a tool for deeper understanding of the fundamentals, as we
will see in the next section.
To start with, note the striking similarity between the formula (1.26) for the prop-
agator and the well-known formula for the probability distribution of the classical
Brownian particle. The latter quantity, P(x, t|x′, t′), gives the conventional probability of finding the particle at x at some time t, if at some earlier time t′ it was at x′
(see, e.g., [3]):

P(x, t|x′, t′) = (4πD(t − t′))^{−d/2} exp[ −(x − x′)² / (4D(t − t′)) ] θ(t − t′). (1.28)
The diffusion coefficient D in the quantum case is replaced by iℏ/2m. From the
mathematical point of view, the similarity between (1.26) and (1.28) is due to the
fact that both K 0 and P are Green’s functions of similar linear differential equations:
a free Schrödinger equation and a diffusion equation, ∂_t f(x, t) = D (∂_x)² f(x, t).
Differences in behavior of quantum and classical Brownian particles are due to the
presence of an imaginary unit in one of these equations. From the physical point
of view, though, this might indicate some deeper link between how we describe
classical and quantum motion. But a direct analogy with Brownian motion would
not work, since for a free classical particle, we should obtain deterministic, rather
than probabilistic, equations of motion.
To achieve this goal, we first recall the extremal action principle of classical
mechanics. It states that the particle’s trajectory, or path, xcl (t), between the initial
and final points xi , ti and x f , t f should minimize the action
S[x_f t_f, x_i t_i] = ∫_{t_i}^{t_f} dt L(x, ẋ, t), (1.29)
and this is the only admissible, real path for a classical particle. The action is a
so-called functional of the trajectory, not a function, since it depends on the behavior
of x(t) on the whole interval [t_i, t_f].
Here L(x, ẋ, t) is the Lagrange function of the system; in the simplest case,

L(x, ẋ, t) = T(ẋ(t)) − V(x(t)), (1.30)

with T(ẋ(t)) and V(x(t)) being the kinetic and potential energy, respectively.
As you know, the condition of extremum means that the action is not sensitive
to small deviations from the classical (extremal) path. More specifically, if we take,
instead of the real path x_cl(t), a trial one, x_tr(t) = x_cl(t) + δx(t), where δx(t) is
small, then the change in the action integral (1.29) will be only of second order
in δx(t):

δS = O(δx(t)²). (1.31)
For a free particle moving along the straight line from x_i to x_f with constant velocity, this extremal action is

S₀[x_f t_f, x_i t_i] = ∫_{t_i}^{t_f} dt (m ẋ² / 2) = (m/2) ((x_f − x_i)² / (t_f − t_i)²) (t_f − t_i) = m(x_f − x_i)² / (2(t_f − t_i)). (1.32)
This is—up to a factor of i/ℏ—the very expression we have seen in the exponent of
the free quantum-mechanical propagator!
The numerical factor here is very important. The role of ℏ is more or less clear:
since only dimensionless quantities are allowed in the exponent, and ℏ is the quantum
of action, the ratio S/ℏ must appear in the quantum case. The imaginary unit plays a
somewhat subtler role: it brings out the interference, which distinguishes the propagation
of a quantum particle from that of its classical counterpart. But anyway, we see that
quantum-mechanical propagation is somehow related to the classical action S, or
more specifically to exp[(i/ℏ)S].
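The statements (1.31)–(1.32) are easy to check with a discretized action: for a free particle the straight-line path gives exactly m(x_f − x_i)²/2(t_f − t_i), and a small deviation vanishing at the endpoints raises the action only at second order. A sketch with m = 1 and illustrative endpoints:

```python
import numpy as np

# Discretized action check of (1.31)–(1.32) for a free particle (m = 1):
# the straight line minimizes the Riemann-sum action; δS = O(eps²).
ti, tf, xi, xf, n = 0.0, 2.0, 0.0, 3.0, 1000
t = np.linspace(ti, tf, n + 1)
dt = t[1] - t[0]

def action(x):
    v = np.diff(x) / dt
    return np.sum(0.5 * v**2) * dt           # S = ∫ dt m v²/2

x_cl = xi + (xf - xi) * (t - ti) / (tf - ti)  # classical straight line
S_cl = action(x_cl)                           # = (xf−xi)²/2(tf−ti) = 2.25

eps = 1e-3
bump = np.sin(np.pi * (t - ti) / (tf - ti))   # deviation vanishing at endpoints
S_tr = action(x_cl + eps * bump)
print(S_cl, S_tr - S_cl)                      # δS is O(eps²), cf. (1.31)
```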
The very idea, first suggested by Dirac and then implemented by Feynman, is
that the propagation amplitude of a quantum particle between two points is given by
a coherent sum of terms exp[(i/ℏ)S[q, q̇]] corresponding to all possible classical
trajectories q(t) between these points. Instead of propagation amplitude here you can
read propagator, K(x_f, t_f; x_i, t_i), since we have established that they are the same.
What does this give us in the classical limit, when by definition the action S ≫ ℏ?
Then exp[(i/ℏ)S] oscillates very quickly in response to any minute change in
q(t). This means that the contributions to the transition amplitude from virtually all
trajectories cancel! The only exception is the classical trajectory: by definition,
small deviations from it do not change the action, so the contribution of this
trajectory survives. Now you see why classical particles choose classical paths!
(Similar reasoning, long ago, helped to reconcile the wave theory of light with the
fact that light usually propagates along straight lines.)
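The cancellation argument is just the stationary-phase principle, which a few lines of numerics make vivid: in ∫dx e^{iλf(x)}, with λ playing the role of S/ℏ, a window containing the stationary point of f dominates a window of equal width without one. The function f below is illustrative:

```python
import numpy as np

# Stationary-phase illustration: for large λ, contributions from regions
# where f'(x) ≠ 0 cancel; the integral is dominated by the stationary point.
def osc_integral(lam, a, b, n=200001):
    x = np.linspace(a, b, n)
    dx = x[1] - x[0]
    return np.sum(np.exp(1j * lam * (x - 1.0)**2)) * dx  # f(x) = (x−1)², stationary at x = 1

near = abs(osc_integral(lam=400.0, a=0.5, b=1.5))  # window around x = 1
far  = abs(osc_integral(lam=400.0, a=2.0, b=3.0))  # no stationary point
print(near, far)                                   # the stationary window dominates
```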
In order to develop the fundamental idea that we have just described, we need
some way of counting the trajectories and summing up their contributions. Let us
divide the time interval [t_i, t_f] into a large number (N − 1) of "slices," each of
length Δt = (t_f − t_i)/(N − 1). The N partition moments are thus t_1 ≡ t_i, t_2, . . .,
t_N ≡ t_f. Each classical trajectory is thus sliced into (N − 1) pieces (see Fig. 1.7):
[x_1 ≡ x_i, x_2], [x_2, x_3], . . ., [x_{N−1}, x_N ≡ x_f]. Now we can use the composition
property of the propagator, Eq. (1.20), and obtain the expression
K(x_N t_N; x_1 t_1) = ∫_{−∞}^{∞} dx_{N−1} ∫_{−∞}^{∞} dx_{N−2} · · · ∫_{−∞}^{∞} dx_2
   × K(x_N t_N; x_{N−1} t_{N−1}) K(x_{N−1} t_{N−1}; x_{N−2} t_{N−2}) · · · K(x_2 t_2; x_1 t_1). (1.33)
What have we done here? We used for simplicity the Lagrangian of Eq. (1.30) (which
is, though, general enough). We chose Δt so tiny that the kinetic-energy term in the
action on this interval is much larger than ℏ, so that we can disregard all trajectories except the
classical one between the points x_n, x_{n+1}. And finally, we approximated the
classical action on this trajectory by using in (1.34) the average value of the Lagrange
function.
The expression (1.34) lacks the normalization factor, since the dimensionality of
the propagator is inverse volume, L^{−d}. It can be restored from condition (1.21). If
we recall one of the limit representations of the delta function,

δ(x) = lim_{α→0} (1/√(απi)) e^{ix²/α}, (1.35)
we see that the factor in question will be (m/(2πiℏΔt))^{d/2}. (This is the very factor
that we obtained for the free propagator from the Schrödinger equation, but here we
did not exploit that equation at all.)
Now substitute this form of the propagator (for infinitesimal Δt),

K(x Δt; x′ 0) = (m / (2πiℏΔt))^{d/2} exp{ (i/ℏ) [ m(x − x′)² / (2Δt) − Δt (V(x) + V(x′)) / 2 ] }, (1.36)
into the composition equation, to obtain

K(x_N t_N; x_1 t_1) = lim_{N→∞} ∫_{−∞}^{∞} dx_{N−1} ∫_{−∞}^{∞} dx_{N−2} · · · ∫_{−∞}^{∞} dx_2
   × ∏_{n=2}^{N} (m / (2πiℏΔt))^{d/2} exp{ (i/ℏ) [ m(x_n − x_{n−1})² / (2Δt) − Δt (V(x_n) + V(x_{n−1})) / 2 ] }. (1.37)
As you see, the exponent of this nontrivial construction contains i/ℏ times a Riemann sum for the integral giving the classical action along some path, x(t). The
limit of the infinite number of consecutive integrations over the intermediate coordinates,
x_j, with the proper normalization factors, is called a continual, functional, or simply
path integral, and is denoted by ∫Dx. Thus,

K(x, t; x′, t′) = ∫_{x′(t′)}^{x(t)} Dx e^{(i/ℏ)S[x(t), ẋ(t)]}. (1.38)
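The sliced construction (1.36)–(1.37) can be tested numerically in its Euclidean (imaginary-time, t → −iτ) version, where the oscillatory integrals become well-behaved Gaussian ones — a standard trick, not used in the text itself. For a harmonic oscillator (ℏ = m = ω = 1), composing N slices and taking the trace gives Z(β) = Tr e^{−βH}, and −ln Z/β at large β should approach the ground-state energy 1/2. Grid and parameters are illustrative:

```python
import numpy as np

# Euclidean version of the sliced propagator (1.36)–(1.37) for the
# harmonic oscillator, ħ = m = ω = 1; extract E0 from Z(β) = Tr e^{−βH}.
x = np.linspace(-6, 6, 401)
dx = x[1] - x[0]
beta, N = 10.0, 400
dtau = beta / N
V = 0.5 * x**2

# One-slice Euclidean kernel, cf. (1.36) with t → −iτ and symmetric V split:
X, Xp = x[:, None], x[None, :]
K = np.sqrt(1/(2*np.pi*dtau)) * np.exp(-((X - Xp)**2/(2*dtau)
                                         + dtau*(V[:, None] + V[None, :])/2))

M = np.linalg.matrix_power(K * dx, N)   # compose N slices, cf. (1.37)
Z = np.trace(M)
E0 = -np.log(Z) / beta
print(E0)                               # close to 1/2
```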
This can also be written in a more symmetric (and more general) form:

K(x, t; x′, t′) = ∫_{x′(t′)}^{x(t)} Dx ∫ D(p/(2πℏ)) e^{(i/ℏ)S[p(t), x(t), t]}, (1.39)

where

S[p(t), x(t), t] = ∫_{t′}^{t} dt [ p ẋ − H(p(t), x(t), t) ],

and H(p, x, t) is the Hamiltonian function of the particle. Since the above expression explicitly contains H, it proves more useful in applications of path-integral methods to systems
with many degrees of freedom. But for our limited goals, it will be enough to demonstrate the equivalence of (1.38) and (1.39), simultaneously explaining the meaning
of the symbol Dp (before that, the expression (1.39) is, of course, null and void).
The simplest way to do that is to employ the Schrödinger equation for the propagator. Now we do not postulate it, but derive it from (1.38), thus preserving the consistency
of our reasoning. It is clear that, given the form of the propagator for infinitesimal times,
(1.36), the integral composition equation can be reduced to a differential one.
Using expression (1.36), we can write (for the one-dimensional case; generalizations are trivial)

K(x_N t_{N−1} + Δt; x_1 t_1) ≈ ∫_{−∞}^{∞} dx_{N−1} (m / (2πiℏΔt))^{1/2}
   × exp{ (i/ℏ) [ m(x_N − x_{N−1})² / (2Δt) − Δt (V(x_N) + V(x_{N−1})) / 2 ] } K(x_{N−1} t_{N−1}; x_1 t_1).
Integrating over $x_{N-1}$ (which is easy, since the integrals are of Gaussian type) and keeping the leading terms in $\Delta t$, we obtain:

$$\Delta t\,\frac{\partial}{\partial t_{N-1}}K(x_Nt_{N-1};x_1t_1)=-\frac{i}{\hbar}V(x_N)\,\Delta t\,K(x_Nt_{N-1};x_1t_1)+\frac{i\hbar}{2m}\,\Delta t\,\frac{\partial^{2}}{\partial x_N^{2}}K(x_Nt_{N-1};x_1t_1)+o(\Delta t).$$
Dividing by $\Delta t$ and taking the limit $\Delta t\to0$, we finally obtain Eq. (1.25) for the propagator for $t>t'$, thus demonstrating that the Schrödinger equation follows from the Dirac-Feynman conjecture about the structure of the transition amplitude. (Of course, the converse is true as well.)
Now let us return to the basic Eq. (1.16), which determines the action of the propagator on the wave function. We use Dirac's "bra" and "ket" notation, in which the wave function $\Psi(x)$ is presented as a scalar product of two abstract vectors in Hilbert space,

$$\Psi(x)\equiv\langle x|\Psi\rangle.\qquad(1.40)$$

We have used the closure relation (completeness) of the quantum states of the particle with definite coordinate (coordinate eigenstates), $|x\rangle$, that is,

$$\int dx'\,|x'\rangle\langle x'|=I.\qquad(1.42)$$
$$i\hbar\frac{\partial}{\partial t}S(t,t')=HS(t,t'),\qquad(1.43)$$

and its formal solution is found immediately:

$$S(t,t')=e^{-\frac{i}{\hbar}H(t-t')}.\qquad(1.44)$$
What is the benefit? It is that now we are not limited to the coordinate representation, and can easily work in, say, momentum space. This is what we actually need in order to prove (1.39). Besides, we will need the closure relation for the momentum eigenstates $|p\rangle$:

$$\int\frac{dp'}{2\pi\hbar}\,|p'\rangle\langle p'|=I.\qquad(1.45)$$

Recall that in the coordinate (momentum) representation the coordinate and momentum eigenstates look as follows:

$$\Psi_x(x')\equiv\langle x'|x\rangle=\delta(x'-x);\qquad\Psi_p(x')\equiv\langle x'|p\rangle=e^{\frac{i}{\hbar}px'},\qquad(1.46)$$

and respectively

$$\tilde{\Psi}_x(p')\equiv\langle p'|x\rangle=e^{-\frac{i}{\hbar}p'x};\qquad\tilde{\Psi}_p(p')\equiv\langle p'|p\rangle=\delta(p'-p).\qquad(1.47)$$
$$\langle x_m|e^{-\frac{i}{\hbar}H(\hat{p},\hat{x})\Delta t}|p_m\rangle\langle p_m|x_{m-1}\rangle\approx\langle x_m|p_m\rangle\,e^{-\frac{i}{\hbar}H(p_m,x_m)\Delta t}\,\langle p_m|x_{m-1}\rangle=e^{\frac{i}{\hbar}p_mx_m}\,e^{-\frac{i}{\hbar}H(p_m,x_m)\Delta t}\,e^{-\frac{i}{\hbar}p_mx_{m-1}}\qquad(1.49)$$
(note that now instead of the (operator) Hamiltonian, we have obtained the classical
Hamiltonian function, depending on usual coordinates and momenta). Therefore,
Eq. (1.48) is reduced to

$$\langle x_N|S(t_N,t_1)|x_1\rangle=\lim_{N\to\infty}\int\prod_{n=2}^{N-1}dx_n\int\prod_{n=2}^{N}\frac{dp_n}{2\pi\hbar}\,\exp\left\{\frac{i}{\hbar}\sum_{n=2}^{N}\Delta t\left[p_n\,\frac{x_n-x_{n-1}}{\Delta t}-H(p_n,x_n)\right]\right\}.\qquad(1.50)$$
20 1 Basic Concepts
We have restored the continuous-case notation (i.e., $\sum_x\to\int dx$, $\sum_p\to\int dp/(2\pi\hbar)$). This is the very path integral in phase space (that is, over coordinates and momenta) whose shorthand notation was given above by (1.39). Keep in mind that here we did not include the normalization factors $\left(m/2\pi i\hbar\Delta t\right)^{1/2}$ in the definition of $Dx$. Actually, they will be provided by the integrations over momenta, and there is no general convention on whether such factors should be written explicitly or not.
As you see, the expression (1.50) contains $N-1$ momenta and $N$ coordinates, but there are $N-2$ integrations over coordinates and $N-1$ over momenta. As a result, we have two "loose" coordinates, the initial and final ones, as it should be for the propagator in the coordinate representation. But nothing prevents us from calculating a different matrix element of $S$, say $\langle p_f|S(t_f,t_i)|p_i\rangle$. Evidently, this should be the propagator in the momentum representation, giving the probability amplitude for the particle to change its momentum from $p_i$ to $p_f$. You can easily demonstrate that the corresponding path integral can be written as (see Problem 1.1)

$$K(p,t;p',t')=\int_{p'(t')}^{p(t)}D\frac{p}{2\pi\hbar}\,Dx\;e^{\frac{i}{\hbar}S[p(t),x(t),t]}.\qquad(1.51)$$
The path-integral description as we have introduced it is nice and clear when we deal with a single particle. But it seems inapplicable to the problems of condensed matter, with their giant numbers of particles involved: we have to develop a more subtle approach, equivalent to the technique of Green's functions, etc.
Nevertheless, there exists a class of solid-state systems where the single-particle approach holds and gives sensible results, namely, the mesoscopic systems (see, e.g., [5]). These are systems of intermediate size, i.e., macroscopic but small enough ($\lesssim10^{-4}$ cm). In these systems quantum interference is very important, since at low enough temperatures ($<1$ K) the phase coherence length of quasiparticles ("electrons") exceeds the size of the system. This means that the electrons preserve their "individuality" when passing through the system.
Since the wave function of the quantum particle depends on its energy as $e^{-iEt/\hbar}$, any inelastic interaction spoils the phase coherence. Then the condition
$$l_\varphi\approx l_i>L\qquad(1.53)$$

must hold. Here $l_\varphi$ is the phase coherence length, $l_i$ is the inelastic scattering length, and $L$ is the size of the system. The above condition can be satisfied in experiment, due to the fact we have discussed above: that in condensed matter we can deal with weakly interacting quasiparticles instead of strongly interacting real particles.
Because the inelastic scattering length of the quasielectron exceeds the size of the
mesoscopic system, we can regard it as a single particle in the external potential field
and apply to it the path integral formalism in the simplest possible version.
(We have, for the sake of brevity, denoted the transition amplitude (the propagator) between the points $x_A,t_A$ and $x_B,t_B$ simply by $\langle Bt_B|At_A\rangle$; in the next section we will see that this is not only a shorthand.)
The Lagrange function of the electron in the magnetic field is given by a Legendre
transformation:
There exists a special class of trajectories that loop around the hole and have a self-intersection (see Fig. 1.8). Each of them has a counterpart with the opposite direction of motion around the hole. Each pair of thus-conjugated trajectories has the same value of the $\exp(\frac{i}{\hbar}S_0[x,\dot{x}])$ factor (since without the magnetic field the motion is reversible), while the rest of the expression gives

$$\pm\frac{ie}{\hbar c}\int_{t_A}^{t_B}dt\,\mathbf{A}\cdot\dot{\mathbf{x}}=\pm\frac{ie}{\hbar c}\oint\mathbf{A}\cdot d\mathbf{x}=\pm\frac{ie}{\hbar c}\iint\mathrm{rot}\,\mathbf{A}\cdot d\mathbf{S}=\pm\frac{ie}{\hbar c}\Phi.\qquad(1.59)$$
The third term vanishes due to phase randomness; the second term does not contain any pronounced $\Phi$-dependence. But the first one² is periodic in $\Phi$, with a period equal to the superconducting flux quantum, $\Phi_0=hc/2e$:

$$|\langle B|A\rangle|^{2}=2|F_{\circlearrowleft}|^{2}\left(1+\cos2\pi\frac{\Phi}{\Phi_0}\right).\qquad(1.62)$$
The appearance of the "superconducting" period $hc/2e$ is, of course, not due to Cooper pairing and a doubled electric charge, but due to the simple fact that the transition amplitude contains the difference between the contributions of particles that encircle the hole clockwise and counterclockwise, thus doubling the path.
Another type of oscillation originates from a different class of trajectories (see Fig. 1.9), which run from A to B on different sides of the hole. Each pair of trajectories from this class produces in the transition probability the term

$$2\,\mathrm{Re}\left\{e^{\frac{ie}{\hbar c}\left(\int_1\mathbf{A}\cdot d\mathbf{x}-\int_2\mathbf{A}\cdot d\mathbf{x}\right)}\,e^{\frac{i}{\hbar}\left(\int_1L_0(x,\dot{x})dt-\int_2L_0(x,\dot{x})dt\right)}\right\}=2\,\mathrm{Re}\left\{e^{\frac{ie}{\hbar c}\oint\mathbf{A}\cdot d\mathbf{x}}\,e^{i\chi_{12}}\right\}=2\cos\left(2\pi\frac{\Phi}{2\Phi_0}+\chi_{12}\right).\qquad(1.63)$$
These oscillations have a doubled period, $2\Phi_0=hc/e$, but they include a random phase, $\chi_{12}$. Therefore, they are sensitive to the number of possible trajectories of this class, and quickly vanish when it grows. For example, in metal rings both $hc/e$ and $hc/2e$ oscillations were observed, while in metal cylinders only the latter exist, the former being averaged to zero. (A cylindrical conductor can be regarded as a huge number of rings stacked together.)
² It is not so easy to calculate the prefactor $F_{\circlearrowleft}$; but it is not difficult to show that it is small only as a power of the parameter $\lambda_F/L$.
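The fate of the two oscillation types can be illustrated with a toy model (all parameters invented for illustration): each "ring" contributes a phase-locked $hc/2e$ term and an $hc/e$ term carrying a random phase $\chi_{12}$; averaging over many rings, as in a cylinder, suppresses only the latter.

```python
import numpy as np

rng = np.random.default_rng(0)
phi0 = 1.0                             # flux quantum hc/2e, arbitrary units
flux = np.linspace(0, 4 * phi0, 400)   # applied flux sweep

n_rings = 2000
signal = np.zeros_like(flux)
for _ in range(n_rings):
    chi = rng.uniform(0, 2 * np.pi)                        # random phase chi_12 of this ring
    signal += np.cos(2 * np.pi * flux / phi0)              # hc/2e term, phase-locked
    signal += np.cos(2 * np.pi * flux / (2 * phi0) + chi)  # hc/e term, random phase
signal /= n_rings

def amp(period):
    """Crude Fourier amplitude of the averaged signal at a given flux period."""
    return abs(np.mean(signal * np.exp(2j * np.pi * flux / period)))

amp_hc2e = amp(phi0)       # survives the averaging over rings
amp_hce = amp(2 * phi0)    # washed out by the random phases
print(amp_hc2e, amp_hce)
```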
Though we can always write the path-integral formula (1.38), an explicit expression for the propagator in the general case cannot be found, either directly or by solving the Schrödinger equation (1.25). Apart from exactly solvable cases (which are as beautiful as they are rare, and, worse still, usually known for years), the only regular way to deal with a problem is to use some sort of perturbation theory.
Fortunately, the propagator formalism is uniquely suited for the task.
We will again work in Dirac's notation. Let us start from the Schrödinger representation, where, as you know from quantum mechanics, the operators of observables are time independent (except for possible explicit time dependence), while the state vectors (wave functions) evolve according to the Schrödinger equation:

$$i\hbar\frac{\partial}{\partial t}|\Psi(t)\rangle_S=H|\Psi(t)\rangle_S.\qquad(1.64)$$

If the Hamiltonian is time independent, the formal solution to this is given by

$$|\Psi(t)\rangle_S=e^{-\frac{i}{\hbar}Ht}|\Psi(0)\rangle_S,\qquad(1.65)$$
where $I$ is the unit operator, and the justification of (1.65) is the fact that we can rewrite the Schrödinger equation as

$$|\Psi(t)\rangle_S=|\Psi(0)\rangle_S-\frac{i}{\hbar}\int_0^tH|\Psi(t')\rangle_S\,dt'\qquad(1.67)$$

and then iterate it, which will give us the series for $\exp(-\frac{i}{\hbar}Ht)$.
The operator

$$U(t)=e^{-\frac{i}{\hbar}Ht}\qquad(1.68)$$

1.3 Perturbation Theory for the Propagator 25
is, for an obvious reason, called the evolution operator. Written in this form, it satisfies the Schrödinger equation with a time-independent Hamiltonian. What if $H$ is time dependent? For a usual number, the solution would be

$$e^{-\frac{i}{\hbar}\int_0^tdt'\,H(t')},$$
but here we are dealing with operators. There is no reason to believe that $H(t_1)$ and $H(t_2)$ commute at different moments of time $t_1$ and $t_2$, and the above expression will be invalid. But we can still iterate the Schrödinger equation,

$$U(t)=I-\frac{i}{\hbar}\int_0^tdt'\,H(t')\,U(t'),\qquad(1.69)$$
to yield

$$U(t)=I+\left(-\frac{i}{\hbar}\right)\int_0^tdt_1'\,H(t_1')+\left(-\frac{i}{\hbar}\right)^{2}\int_0^tdt_1'\int_0^{t_1'}dt_2'\,H(t_1')H(t_2')+\left(-\frac{i}{\hbar}\right)^{3}\int_0^tdt_1'\int_0^{t_1'}dt_2'\int_0^{t_2'}dt_3'\,H(t_1')H(t_2')H(t_3')+\cdots\qquad(1.70)$$
$$U(t)=T\,e^{-\frac{i}{\hbar}\int_0^td\tau\,H(\tau)}.\qquad(1.72)$$

Indeed, let us expand the exponent and take the $n$th term,

$$\frac{1}{n!}\,T\!\left[\int_0^td\tau_1\int_0^td\tau_2\cdots\int_0^td\tau_n\,H(\tau_1)H(\tau_2)\cdots H(\tau_n)\right].$$
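The meaning of the time-ordered exponential (1.72) can be checked numerically for a two-level system whose Hamiltonian at different times does not commute with itself (the Hamiltonian below is an arbitrary illustrative choice, $\hbar=1$): the ordered product of short-time factors is unitary and differs from the naive exponential of $\int H\,d\tau$.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

def H(t):
    return sz + np.cos(t) * sx      # [H(t1), H(t2)] != 0 in general

def exp_minus_iH(Hm, dt):
    """exp(-i * Hm * dt) for a Hermitian matrix Hm, via eigendecomposition."""
    w, v = np.linalg.eigh(Hm)
    return (v * np.exp(-1j * w * dt)) @ v.conj().T

t, n = 2.0, 4000
dt = t / n
times = (np.arange(n) + 0.5) * dt   # midpoints of the time slices

# Time-ordered product: later times act on the LEFT (the ordering matters)
U = np.eye(2, dtype=complex)
for tn in times:
    U = exp_minus_iH(H(tn), dt) @ U

# Naive (wrong) guess: exponential of the integral of H(t)
H_int = sz * t + np.sin(t) * sx     # = integral of H(tau) from 0 to t, done analytically
U_naive = exp_minus_iH(H_int, 1.0)

print(np.linalg.norm(U.conj().T @ U - np.eye(2)))   # unitarity of the T-exponential
print(np.linalg.norm(U - U_naive))                  # the two constructions disagree
```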
The second line contains the all-important unitarity condition, which physically means that probability is not lost as the quantum state evolves: if we start with one particle, we will not end up with 1/4 (or 22/7). Indeed, the norm of the state vector, related to the probability,

$$\|\Psi(t)\|^{2}\equiv\langle\Psi(t)|\Psi(t)\rangle=\langle\Psi(0)|U^{\dagger}(t)U(t)|\Psi(0)\rangle=\langle\Psi(0)|\Psi(0)\rangle=\mathrm{const},$$

is conserved.
Of course, there is nothing special about the moment $t=0$, and we can follow the evolution of the quantum state from any point: evidently, for any $t,t'$,
Now we can, for example, express the wave function of the particle in coordinate space at time $t$ via its value at some previous time $t'$. We see that this is the very operator $S$, related to the propagator, that we previously introduced (see Eq. 1.41): for $t>t'$,
This operator (sometimes called the S-matrix) has the following properties:
$$i\hbar\frac{\partial}{\partial t}S(t,t')=HS(t,t');\qquad(1.79)$$

$$S(t,t)=I;\qquad(1.80)$$

$$S^{\dagger}(t,t')=S^{-1}(t,t')=S(t',t);\qquad(1.81)$$

$$S(t,t'')S(t'',t')=S(t,t');\qquad(1.82)$$

$$\text{for }t>t':\quad S(t,t')=T\,e^{-\frac{i}{\hbar}\int_{t'}^{t}d\tau\,H(\tau)}\;\left(=e^{-\frac{i}{\hbar}H(t-t')}\text{ if }H\neq H(t)\right).\qquad(1.83)$$
Equation (1.81) is the unitarity condition. Equation (1.82) follows directly from the definition of $S$ and the unitarity of the evolution operator; it is the very composition property that we introduced for the propagator in the beginning (see Eq. 1.20).
The last line follows from (1.72). This is an elegant formula, but not very practical:
the Hamiltonian of the system is “of order one,” and the expansion would converge
very slowly, or simply diverge! Fortunately, in most cases the Hamiltonian can be
split into two parts: the unperturbed, time-independent Hamiltonian (for which we
presumably know the solution) and a small, possibly time-dependent perturbation:
H(t) = H0 + W(t).
The goal is to present the solution for H as one for H0 plus corrections in powers of
the small perturbation. The latter series will hopefully be rapidly convergent.
Until now we have worked in the Schrödinger representation, i.e., the state vectors were time dependent (governed by $U(t)$), while the operators were constant (if not explicitly time dependent). The opposite picture is provided by the Heisenberg representation. It can be arrived at by a canonical transformation using the evolution operators. Now the operators evolve in time, while the state vectors do not. The operators satisfy the Heisenberg equations of motion:

$$i\hbar\frac{d}{dt}A_H(t)=[A_H(t),H_H(t)]+i\hbar\frac{\partial}{\partial t}A_H(t).\qquad(1.84)$$
The above equation follows immediately from the definition of $A_H$ and the properties of the evolution operator. Here the partial derivative deals with the explicit time dependence of the operator (say, due to changing external conditions); the Hamiltonian, if time dependent, should be taken in the Heisenberg representation as well, $H_H(t)=U^{\dagger}(t)H(t)U(t)$.
For our goals it is more convenient to employ an intermediate, interaction representation, first suggested by Dirac. In this representation both operators and state vectors are time dependent, but the evolution of the operators is governed by the unperturbed Hamiltonian (we will not use the index I to label operators and state vectors in the interaction representation, since this will be our working representation). Consider the operator $O(t,t')\equiv e^{(i/\hbar)H_0t}S_S(t,t')$, which, as is easy to see, satisfies the same equation as the state vector:

$$i\hbar\frac{\partial}{\partial t}O(t,t')=W(t)\,O(t,t').$$

Then, of course,

$$O(t,t')=T\,e^{-\frac{i}{\hbar}\int_{t'}^{t}d\tau\,W(\tau)}\,O(t',t'),$$

$$S_S(t,t')=e^{-(i/\hbar)H_0t}\,T\,e^{-\frac{i}{\hbar}\int_{t'}^{t}d\tau\,W(\tau)}\,e^{(i/\hbar)H_0t'}.\qquad(1.90)$$
This is the so-called Dyson expansion for the $S$-operator in the Schrödinger representation. Transforming the $U$-operators according to (1.85), we find that in the interaction representation the $S$-operator takes the simple form

$$S(t,t')=e^{(i/\hbar)H_0t}\,S_S(t,t')\,e^{-(i/\hbar)H_0t'}=T\,e^{-\frac{i}{\hbar}\int_{t'}^{t}d\tau\,W(\tau)}\qquad(t>t').\qquad(1.91)$$
It looks as if it depends only on the perturbation! (Of course, the unperturbed Hamil-
tonian is hidden in W(τ ); but we presumably know how everything behaves without
perturbation.)
We have expressed the propagator as a matrix element of the S-operator. There is
another expression, sometimes useful in the many-body case. In order to arrive at it, let
us return to Heisenberg representation with its time-dependent operators. The coor-
dinate operator X H (t) will be time dependent as well. In Schrödinger representation
this operator had time-independent eigenstates, which constituted the complete basis
of the Hilbert space:
(In coordinate representation they are simply delta functions, in momentum repre-
sentation, plane waves.) Let us now introduce the set of instantaneous eigenstates of
the coordinate operator in Heisenberg representation, {|xt}:
Since $X_H(t)|xt\rangle=U^{\dagger}(t)X_SU(t)|xt\rangle$, we have $U(t)|xt\rangle=|x\rangle$, and we see that the time evolution of these states is governed by $U^{\dagger}(t)$ instead of $U(t)$; they still constitute a complete basis at any moment $t$:

$$\int dx\,|xt\rangle\langle xt|=I.\qquad(1.96)$$

Now we can rewrite the propagator in coordinate space simply as an overlap of two states from this basis (cf. Eq. (1.54) of the previous section!):

$$K(x,t;x',t')=\langle xt|x't'\rangle;\qquad K(p,t;p',t')=\langle pt|p't'\rangle.$$
The states $\{|pt\rangle\}$ are, of course, instantaneous eigenstates of the momentum operator in the Heisenberg representation, $P_H(t)$.
The above formulae are very general: actually they are applicable to any quantum
system, notwithstanding the number of particles and type of interaction. This was
one reason why we went to such lengths to derive them: we will use them later
throughout this book.
Now let us apply them to our initial case of one, structureless, quantum particle.
Now we have a single option for the perturbation operator, a scalar external potential,
so that its coordinate matrix element is
Table 1.1 Feynman rules for a particle in the external potential field (coordinate representation):
Propagator: $K(\mathbf{x},t;\mathbf{x}',t')$.
Free (unperturbed) propagator: $K_0(\mathbf{x},t;\mathbf{x}',t')=\left(\frac{m}{2\pi i\hbar(t-t')}\right)^{3/2}\exp\left[\frac{im(\mathbf{x}-\mathbf{x}')^{2}}{2\hbar(t-t')}\right]\theta(t-t')$.
Table 1.2 Feynman rules for a particle in the external potential field (momentum representation); its first entry is the propagator $K(\mathbf{p},E;\mathbf{p}',E')$.
$$K(x,t;x',t')=K_0(x,t;x',t')+\int dx''\,dt''\,K_0(x,t;x'',t'')\left(-\frac{i}{\hbar}\right)V(x'',t'')\,K_0(x'',t'';x',t')+\cdots.\qquad(1.100)$$
This expression is presented graphically in Fig. 1.10, the elements of which are
explained in Table 1.1.
Of course, our discourse is not limited to the coordinate representation; as a matter
of fact, it is more often than not easier to use momentum representation. The Feynman
rules for the momentum representation can be found in Table 1.2 (see Problem 1.2).
The graph in Fig. 1.10 is the simplest example of a Feynman diagram. In this
case its use seems superfluous, because of the simple structure of the perturbation
involved. In the many-body case, though, the structure of the terms entering the
perturbation series is much more complicated, and the graphs provide great help
in comprehending their structure and making physically consistent approximations.
The graph under consideration, for example, suggests a clear picture of a quantum particle repeatedly scattered by the external potential, while propagating freely between the scattering events. It will be useful to look into how (and whether) this intuitive picture fits the path-integral description of the behavior of the quantum particle.
We shall see that this result can indeed be derived directly from formula (1.37) for the propagator, in a slightly changed form:

$$K(x_Nt_N;x_1t_1)=\lim_{N\to\infty}\int_{-\infty}^{\infty}\!dx_{N-1}\int_{-\infty}^{\infty}\!dx_{N-2}\cdots\int_{-\infty}^{\infty}\!dx_{2}\,\prod_{n=2}^{N}\left(\frac{m}{2\pi i\hbar\Delta t}\right)^{d/2}e^{\frac{i}{\hbar}\sum_{n=2}^{N}\Delta t\,\frac{m(x_n-x_{n-1})^{2}}{2\Delta t^{2}}}\,e^{-\frac{i}{\hbar}\sum_{k=2}^{N}\Delta t\,V(x_k,t_k)}.$$
All we need to do is expand the exponent containing the potential and rearrange this expression as a power series in the external potential, $V$. The zeroth-order term is, evidently, the unperturbed propagator, $K_0(x_Nt_N;x_1t_1)$. The first-order term is

$$K_1(x_Nt_N;x_1t_1)=\lim_{N\to\infty}\int_{-\infty}^{\infty}\!dx_{N-1}\int_{-\infty}^{\infty}\!dx_{N-2}\cdots\int_{-\infty}^{\infty}\!dx_{2}\,\prod_{n=2}^{N}\left(\frac{m}{2\pi i\hbar\Delta t}\right)^{d/2}e^{\frac{i}{\hbar}\sum_{n=2}^{N}\Delta t\,\frac{m(x_n-x_{n-1})^{2}}{2\Delta t^{2}}}\left[-\frac{i}{\hbar}\sum_{k=2}^{N}\Delta t\,V(x_k,t_k)\right].$$
$$K_1(x_Nt_N;x_1t_1)=\int_{t_1}^{t_N}\!dt\int_{-\infty}^{\infty}\!dx\,K_0(x_Nt_N;xt)\,V(x,t)\,K_0(xt;x_1t_1)\equiv\int_{-\infty}^{\infty}\!dt\int_{-\infty}^{\infty}\!dx\,K_0(x_Nt_N;xt)\,V(x,t)\,K_0(xt;x_1t_1)$$
(the last transformation has taken into account that for t < t1 or t > t N the integrand
is identically zero). It is clear from our derivation that indeed, in the path integral
picture we can regard the effects of external potential as a result of multiple scatterings
of an otherwise free particle.
The next terms of the expansion can be derived in the same way. Factors 1/n! in
the expansion of the exponent will be canceled because we will have exactly n! ways
to relabel the points xk , tk where the scattering occurs.
$$\Psi(\xi_1,\xi_2,\ldots,\xi_j,\ldots,\xi_i,\ldots)=\begin{cases}+\Psi(\xi_1,\xi_2,\ldots,\xi_i,\ldots,\xi_j,\ldots)&\text{(Bose--Einstein statistics)}\\-\Psi(\xi_1,\xi_2,\ldots,\xi_i,\ldots,\xi_j,\ldots)&\text{(Fermi--Dirac statistics)}\end{cases}\qquad(1.103)$$
Here $\xi_j$ denotes the coordinates and spin of the $j$th particle, and $p_j$ labels the one-particle state. Very often (but not always) one chooses for the $\phi_j(\xi)$ plane waves, $\phi_j(\xi)\propto\exp(i(\mathbf{p}_j\mathbf{x}_j-\varepsilon_jt)/\hbar)$. This is an excellent choice when dealing with a uniform infinite system.
system. Otherwise, it is more convenient to use a different complete set of one-particle
functions that would explicitly express the symmetry of the problem or nontrivial
boundary conditions.
1.4 Second Quantization 35
The condition of (anti)symmetry (1.103) means that we can use only properly
symmetrized products of one-particle functions. For bosons we have thus
Here the nonnegative number Ni shows how many times the ith one-particle
eigenfunction φi enters the product (N1 +N2 +· · · = N , the number of particles in the
system). It is called the occupation number of the ith one-particle state. Summation is
extended over all distinguishable permutations of indices { p1, p2, . . . , pN }. Notice
that since all the N j are nonnegative and add up to N , the sequence N0 , N1 , . . .
always contains a rightmost nonzero term, N jmax , followed by zeros to infinity.
For fermions we use Slater’s determinants
For each state in this set the occupation numbers $N_j$ are nonzero only up to some finite $j_{\max}$, so that to every state we can assign a finite nonnegative integer, say $M$ (for instance, $M=j_{\max}+\sum_jN_j$). Moreover, there is only a finite number of states $|\{N_j\}\rangle$ with the same value of $M$. Therefore we can count all the states in the [0]-set (first the $n_0$ states with $M=0$, then the $n_1$ states with $M=1$, and so on ad infinitum). This means exactly that there are "as many" states in the [0]-set as integer numbers; i.e., the set is denumerably infinite.
The Hilbert space spanned by a [0]-set is called Fock’s space, and it is in Fock’s
space that second-quantized operators act. The state vectors here, as we have said,
are defined by the corresponding set of the occupation numbers, and the second
quantized operators change these numbers. Thus, any operator can be represented
by some combination of basic creation/annihilation operators, which act as follows
(we will establish the proper factors a little later).
Annihilation operator:

$$c_j|\ldots,N_j,\ldots\rangle\propto|\ldots,N_j-1,\ldots\rangle.\qquad(1.106)$$

Creation operator:

$$c_j^{\dagger}|\ldots,N_j,\ldots\rangle\propto|\ldots,N_j+1,\ldots\rangle.\qquad(1.107)$$

Evidently, any element of the [0]-set can be obtained by the repeated action of creation operators on the vacuum state (the state with no particles), $|0\rangle=|0,0,0,0,\ldots\rangle$:

$$|N_1,N_2,\ldots,N_j,\ldots\rangle\propto\left(c_1^{\dagger}\right)^{N_1}\left(c_2^{\dagger}\right)^{N_2}\cdots\left(c_j^{\dagger}\right)^{N_j}\cdots|0\rangle.\qquad(1.108)$$
$$c_j|0\rangle=0.\qquad(1.109)$$
1.4.2 Bosons
Let us take a matrix element of $F_1$ between two $N$-particle Bose states, $\langle\Psi_B'|F_1|\Psi_B\rangle$. Since in order to make a number (a matrix element) of an operator $f_1(\xi_j)$ we need two one-particle wave functions, and since any two such functions $\phi_i(\xi),\phi_j(\xi)$ are orthogonal for $i\neq j$, there will be only two sorts of nonzero matrix elements of the operator $F_1$: (a) diagonal, and (b) between states $|\Psi_B'\rangle,|\Psi_B\rangle$ that differ by one particle, transferred from some (initial) state $\phi_i(\xi)$ to another (final) state $\phi_f(\xi)$: $|\Psi_B\rangle=|\ldots,N_i,\ldots,N_f-1,\ldots\rangle$ and $|\Psi_B'\rangle=|\ldots,N_i-1,\ldots,N_f,\ldots\rangle$.
The diagonal matrix element is

$$\langle\Psi_B|F_1|\Psi_B\rangle=\frac{N_1!\cdots N_a!\cdots}{N!}\sum_a\sum_{p,p'}\int\cdots\int d\xi_1\cdots d\xi_N\,P_s[\phi_{p_1'}^*(\xi_1)\phi_{p_2'}^*(\xi_2)\cdots\phi_{p_N'}^*(\xi_N)]\,f_1(\xi_a)\,P_s[\phi_{p_1}(\xi_1)\phi_{p_2}(\xi_2)\cdots\phi_{p_N}(\xi_N)].$$
Here $P_s[\ldots]$ denotes symmetrization over the indices. Due to the orthonormality of the one-particle states, the sets of indices $p_i$ and $p_i'$ must coincide, $p_i=p_i'$. Let us say that the particle affected by the operator is in the state $p_a$. After we take the (diagonal) matrix element of $f_1(\xi_a)$, $\langle p_a|f_1(\xi_a)|p_a\rangle=\int d\xi_a\,\phi_i^*(\xi_a)f_1(\xi_a)\phi_i(\xi_a)$, and calculate the other integrals (which all equal one due to orthonormality), we can symmetrically rearrange the rest of the occupied states in $(N-1)!$ ways. This must be divided by $N_1!N_2!\cdots(N_a-1)!\cdots$, because we cannot distinguish between the $N_j!$ possible rearrangements of identical particles occupying the same, $j$th, state. (An equivalent way to state this is that there are $(N-1)!/(N_1!N_2!\cdots(N_a-1)!\cdots)$ ways to choose the one-particle wave functions to be acted upon by $f_1(\xi_a)$.) Therefore, we obtain the expression

$$\langle\Psi_B|F_1|\Psi_B\rangle=\sum_iN_i\,\langle i|f_1|i\rangle,$$

and we no longer need to show explicitly on the coordinates of which particle the operator $f_1$ acts.
Now let us calculate the off-diagonal matrix elements of the operator. This time on the left side there will be one extra function $\phi_f^*(\xi)$, and on the right one extra $\phi_i(\xi)$. These unmatched functions must then be integrated with the operator to yield $\langle f|f_1(\xi_a)|i\rangle=\int d\xi_a\,\phi_f^*(\xi_a)f_1(\xi_a)\phi_i(\xi_a)$, while the rest can be rearranged in $(N-1)!/(N_1!N_2!\cdots(N_i-1)!\cdots(N_f-1)!\cdots)$ ways. The result is

$$\langle\Psi_B'|F_1|\Psi_B\rangle=\sqrt{N_iN_f}\,\langle f|f_1|i\rangle.$$
$$N_j\equiv b_j^{\dagger}b_j;\qquad(1.116)$$

$$N_j|N_j\rangle=N_j|N_j\rangle;\qquad[b_i,b_j^{\dagger}]=\delta_{ij},\quad[b_i,b_j]=[b_i^{\dagger},b_j^{\dagger}]=0.\qquad(1.117)$$

These are the Bose commutation relations; we could start from them, and the demand that they be satisfied would completely determine the structure of the many-particle Bose wave function.
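The occupation-number algebra just described is easy to verify with truncated matrices (a sketch, not from the text; the truncation at $n_{\max}$ is an artifact of the finite representation):

```python
import numpy as np

nmax = 10
n = np.arange(nmax + 1)
b = np.diag(np.sqrt(n[1:]), k=1)   # b|N> = sqrt(N) |N-1>
bdag = b.conj().T                  # b†|N> = sqrt(N+1) |N+1>

vac = np.zeros(nmax + 1)
vac[0] = 1.0                       # vacuum state |0>

# Build |3> (up to normalization) by repeated creation: (b†)^3 |0> = sqrt(3!) |3>
state = vac.copy()
for _ in range(3):
    state = bdag @ state

three = np.zeros(nmax + 1)
three[3] = 1.0

comm = b @ bdag - bdag @ b         # should be the identity away from the truncation edge
print(np.diag(comm))
```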
Returning to our one-particle operator, we see that it can be expressed via creation/annihilation operators as follows:

$$F_1=\sum_{i,f}\langle f|f_1|i\rangle\,b_f^{\dagger}b_i.\qquad(1.118)$$
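A small numerical check of (1.118) for two bosonic modes (the $2\times2$ matrix $f_1$ below is an arbitrary Hermitian choice): the matrix element between states differing by one transferred particle comes out as $\sqrt{N_iN_f}\,\langle f|f_1|i\rangle$.

```python
import numpy as np

nmax = 4
dim = nmax + 1
b = np.diag(np.sqrt(np.arange(1, dim)), k=1)
I = np.eye(dim)

b1 = np.kron(b, I)     # annihilation in mode 1 (the "initial" state i)
b2 = np.kron(I, b)     # annihilation in mode 2 (the "final" state f)

# Arbitrary Hermitian one-particle operator; f1[f, i] = <f|f1|i>
f1 = np.array([[0.3, 0.7 - 0.2j],
               [0.7 + 0.2j, -0.1]])
ops = [b1, b2]
F1 = sum(f1[f, i] * ops[f].conj().T @ ops[i] for i in range(2) for f in range(2))

def fock(n1, n2):
    """Product Fock state |n1, n2> as a vector."""
    v = np.zeros(dim * dim)
    v[n1 * dim + n2] = 1.0
    return v

Ni, Nf = 3, 2
bra = fock(Ni - 1, Nf)       # <N_i - 1, N_f|
ket = fock(Ni, Nf - 1)       # |N_i, N_f - 1>
elem = bra @ F1 @ ket
print(elem, np.sqrt(Ni * Nf) * f1[1, 0])   # the two should coincide
```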
Indeed, the matrix elements of this operator between any two states from Fock's space are the same as those we calculated above. Intuitively, the expression looks evident: a particle is "teleported" (annihilated in the state $|i\rangle$ and then created in the state $|f\rangle$), that is, scattered. A less evident and very useful property of the above expression is that the coefficients $\langle f|f_1|i\rangle$ are just the matrix elements of the one-particle operator $f_1$ between the corresponding one-particle states. Not only have we expressed the operator via $b^{\dagger},b$, but we have also reduced $F_1=\sum_af_1(\xi_a)$ to a constituent operator $f_1$ in the process. This suggests that we rewrite Eq. (1.118) as
$$F_1=\int d\xi\,\hat{\psi}^{\dagger}(\xi)\,f_1\,\hat{\psi}(\xi).\qquad(1.119)$$

In the above equation we have introduced the so-called field operators, $\hat{\psi}^{\dagger}(\xi),\hat{\psi}(\xi)$, which are by definition

$$\hat{\psi}(\xi)=\sum_p\phi_p(\xi)\,b_p;\qquad\hat{\psi}^{\dagger}(\xi)=\sum_p\phi_p^*(\xi)\,b_p^{\dagger}.\qquad(1.120)$$
Evidently, these operators also act in Fock space.³ What do they create or annihilate? The operator $b_p^{\dagger}$, e.g., creates a particle with the wave function $\phi_p(\xi')$. The operator $\hat{\psi}^{\dagger}(\xi)$ thus creates a particle with the wave function

$$\sum_p\phi_p^*(\xi)\phi_p(\xi')=\delta(\xi-\xi')$$

(we have used the completeness of the basis of one-particle states). That is, the field operator creates (or annihilates) a particle at a given point. An important example is the particle density operator,

$$\hat{n}(\xi)=\hat{\psi}^{\dagger}(\xi)\hat{\psi}(\xi),\qquad(1.121)$$

whose diagonal matrix elements in the occupation-number basis are $\sum_p|\phi_p(\xi)|^{2}N_p$.
Time dependence, of course, can enter the above equations either through the opera-
tors b† , b (Heisenberg representation) or through the basic functions φ p (Schrödinger
representation), or both (interaction representation). What is important is that definite
commutation relations exist only between field operators taken at the same moment
of time.
Looking at the definition, Eq. (1.120), one sees that the field operators are built from the annihilation/creation operators in the same way as the expansion of a single-particle wave function in a generalized Fourier series over some complete set of functions $\{\phi_i\}_{i=0}^{\infty}$:

$$\Psi(\xi)=\sum_i\phi_i(\xi)\,C_i;\qquad\hat{\psi}(\xi)=\sum_i\phi_i(\xi)\,b_i.$$
3 We will omit the hats over the field operators when it does not create confusion.
That is why the method is called second quantization: it looks as if we quantize the
quantum wave function one more time, transforming it into an operator! (See also
Problem 3 to this chapter.)
Using field operators, we can write any operator in second-quantized form almost without thinking. The recipe is that any $n$-particle operator

$$F_n=\frac{1}{n!}\sum_{j_1\neq j_2\neq\cdots\neq j_n}f_n(\xi_{j_1},\xi_{j_2},\ldots,\xi_{j_n})$$

becomes

$$F_n=\frac{1}{n!}\int d\xi_1\cdots d\xi_n\,\hat{\psi}^{\dagger}(\xi_1)\cdots\hat{\psi}^{\dagger}(\xi_n)\,f_n(\xi_1,\ldots,\xi_n)\,\hat{\psi}(\xi_n)\cdots\hat{\psi}(\xi_1).$$
This is exactly the sort of expression that we would obtain if we calculated the average
value of an operator f n using one-particle wave functions; here they are substituted
by field operators. The factorial simply takes into account that there are n! versions
of the above expression differing only by the arrangement of indices 1, 2, . . . , n.
(Being operators, the $\hat{\psi}$'s are sensitive to their order; for this particular recipe to be correct, the ordering of the field operators should be as shown: the outermost pair of operators has the same argument, and so on.)
This rule can be derived for arbitrary $n$, though three- and more-particle interactions (collisions) are rarely encountered. For one-particle operators it is correct, as is clear from our previous results. We therefore sketch here a proof for $n=2$.
In this case

$$F_2=\frac{1}{2}\sum_{a\neq b}f_2(\xi_a,\xi_b).$$

(An example: a scalar pair interaction, where $f_2(\xi_a,\xi_b)$ is simply a scalar potential energy $U(|\xi_a-\xi_b|)$.) We will calculate its matrix elements in the same way as we did for a one-particle operator. Besides the diagonal ones, there are only the following nonzero matrix elements:
$$\langle N_f,N_i-1,N_j-1|F_2|N_f-2,N_i,N_j\rangle;\quad\langle N_f,N_g,N_i-1,N_j-1|F_2|N_f-1,N_g-1,N_i,N_j\rangle;$$

$$\langle N_f,N_g,N_i-2|F_2|N_f-1,N_g-1,N_i\rangle;\quad\langle N_f,N_i-2|F_2|N_f-2,N_i\rangle;\quad\langle N_f,N_i-1|F_2|N_f-1,N_i\rangle$$

(the last one affecting only one particle). Evidently, the operator $F_2$ must be of the form

$$\sum_{m,n,p,q}C_{m,n,p,q}\,b_m^{\dagger}b_n^{\dagger}b_pb_q,$$
$$\langle N_f,N_g,N_i-1,N_j-1|F_2|N_f-1,N_g-1,N_i,N_j\rangle=\frac{1}{2}\sum_{a\neq b}\sqrt{\frac{\cdots N_f!\cdots N_g!\cdots(N_i-1)!\cdots(N_j-1)!\cdots}{N!}}\sqrt{\frac{\cdots(N_f-1)!\cdots(N_g-1)!\cdots N_i!\cdots N_j!\cdots}{N!}}\times\sum_{p,p'}\int d\xi_1\cdots d\xi_N\,P[\phi_{p_1'}^*(\xi_1)\cdots\phi_{p_N'}^*(\xi_N)]\,f_2(\xi_a,\xi_b)\,P[\phi_{p_1}(\xi_1)\cdots\phi_{p_N}(\xi_N)].$$
In this expression there are unmatched states, $\phi_f^*,\phi_g^*$ on the left and $\phi_i,\phi_j$ on the right, which will be integrated with the operator $f_2$ to yield

$$\int d\xi_a\,d\xi_b\left[\phi_f^*(\xi_a)\phi_g^*(\xi_b)+\phi_g^*(\xi_a)\phi_f^*(\xi_b)\right]f_2(\xi_a,\xi_b)\left[\phi_i(\xi_a)\phi_j(\xi_b)+\phi_j(\xi_a)\phi_i(\xi_b)\right]$$

(we have explicitly written all the terms following from the symmetrization, $P$). The symmetrization of the rest of the $(N-2)$ one-particle states produces the combinatorial factor $(N-2)!/(\cdots(N_f-1)!\cdots(N_g-1)!\cdots(N_i-1)!\cdots(N_j-1)!\cdots)$, so that gathering these results we obtain
$$\begin{aligned}\langle N_f,N_g,N_i-1,&N_j-1|F_2|N_f-1,N_g-1,N_i,N_j\rangle\\&=\frac{1}{2}\sum_{a\neq b}\sqrt{\frac{\cdots N_f!\cdots N_g!\cdots(N_i-1)!\cdots(N_j-1)!\cdots}{N!}}\sqrt{\frac{\cdots(N_f-1)!\cdots(N_g-1)!\cdots N_i!\cdots N_j!\cdots}{N!}}\\&\qquad\times\frac{(N-2)!}{\cdots(N_f-1)!\cdots(N_g-1)!\cdots(N_i-1)!\cdots(N_j-1)!\cdots}\left(\langle fg|f_2|ij\rangle+\langle gf|f_2|ij\rangle+\langle fg|f_2|ji\rangle+\langle gf|f_2|ji\rangle\right)\\&=\frac{1}{2}\sqrt{N_fN_gN_iN_j}\sum_{a\neq b}\frac{1}{N(N-1)}\left(\langle fg|f_2|ij\rangle+\langle gf|f_2|ij\rangle+\langle fg|f_2|ji\rangle+\langle gf|f_2|ji\rangle\right)\\&=\frac{1}{2}\sqrt{N_fN_gN_iN_j}\left(\langle fg|f_2|ij\rangle+\langle gf|f_2|ij\rangle+\langle fg|f_2|ji\rangle+\langle gf|f_2|ji\rangle\right).\end{aligned}$$
If we take the same matrix element of $\sum_{m,n,p,q}C_{m,n,p,q}b_m^{\dagger}b_n^{\dagger}b_pb_q$, we obtain for each $b^{\dagger}b^{\dagger}bb$ the combination

$$\langle N_f,N_g,N_i-1,N_j-1|b_m^{\dagger}b_n^{\dagger}b_pb_q|N_f-1,N_g-1,N_i,N_j\rangle=\sqrt{N_fN_gN_iN_j}\left(\delta_{mf}\delta_{ng}\delta_{pi}\delta_{qj}+\delta_{mg}\delta_{nf}\delta_{pi}\delta_{qj}+\delta_{mf}\delta_{ng}\delta_{pj}\delta_{qi}+\delta_{mg}\delta_{nf}\delta_{pj}\delta_{qi}\right),$$
which means that $C_{m,n,p,q}=(1/2)\langle mn|f_2|pq\rangle$. Now, this is the very coefficient we obtain if we write the expression for $F_2$ following the general recipe and then express the field operators through $b^{\dagger},b$. You can check that this is the case for the rest of the matrix elements as well.
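The key matrix element of $b_m^{\dagger}b_n^{\dagger}b_pb_q$ used in this identification can be verified directly in a truncated Fock space of four modes (the occupation numbers below are illustrative):

```python
import numpy as np

nmax = 2
dim = nmax + 1
b = np.diag(np.sqrt(np.arange(1, dim)), k=1)
I = np.eye(dim)

def mode_op(op, k, nmodes=4):
    """Embed a single-mode operator op as acting on mode k out of nmodes."""
    mats = [op if m == k else I for m in range(nmodes)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

bf, bg, bi, bj = (mode_op(b, k) for k in range(4))

def fock(*ns):
    """Product Fock state |n0, n1, n2, n3> as a vector."""
    v = np.zeros(dim ** len(ns))
    idx = 0
    for n in ns:
        idx = idx * dim + n
    v[idx] = 1.0
    return v

Nf, Ng, Ni, Nj = 2, 1, 2, 1
bra = fock(Nf, Ng, Ni - 1, Nj - 1)
ket = fock(Nf - 1, Ng - 1, Ni, Nj)
op = bf.conj().T @ bg.conj().T @ bi @ bj
elem = bra @ op @ ket
print(elem, np.sqrt(Nf * Ng * Ni * Nj))   # should agree: sqrt(N_f N_g N_i N_j)
```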
We have introduced the occupation number operator $N$. In the simplest case of a single harmonic oscillator it would describe the number of quanta, that is, the amplitude of the oscillations. This is a well-defined observable in the classical limit. Therefore, it is reasonable to ask what its conjugate operator is, if such an operator can be constructed, and what the corresponding classical variable would be. The canonical commutation relation between this hypothetical Hermitian operator, $\hat{\varphi}$, and $N$ should be

$$[\hat{\varphi},N]=i.\qquad(1.124)$$

(We do not write $\hbar$ here for reasons that will become clear shortly.)
$$i\,\dot{\hat{\varphi}}(t)=\frac{1}{\hbar}[\hat{\varphi},H]=i\omega_0.\qquad(1.126)$$

$$\Delta N\cdot\Delta\varphi\geq\frac{1}{2}.\qquad(1.127)$$
It means that you cannot measure the phase and the amplitude of an oscillator simultaneously with an arbitrary degree of accuracy. (You may ask, where is the Planck constant here? Is this a classical uncertainty relation? Of course not, as you can check by direct observation of a pendulum. The $\hbar$ is hidden in $N$. Indeed, the number of quanta of an oscillator is its classical energy $E$ divided by $\hbar\omega_0$, so that $\hbar$ is in the denominator of the left-hand side of (1.127).)
More generally, we can attempt to introduce a Hermitian operator $\hat{\varphi}$ by demanding that the creation/annihilation Bose operators can be presented as

$$b=e^{-i\hat{\varphi}}\sqrt{N};\qquad b^{\dagger}=\sqrt{N}\,e^{i\hat{\varphi}}.\qquad(1.128)$$

We see that $b^{\dagger}b=\sqrt{N}e^{i\hat{\varphi}}\cdot e^{-i\hat{\varphi}}\sqrt{N}=N$. On the other hand, $bb^{\dagger}=e^{-i\hat{\varphi}}Ne^{i\hat{\varphi}}$, and this must equal $b^{\dagger}b+1$. If $\hat{\varphi}$ were indeed a well-defined Hermitian operator, then $\hat{U}\equiv e^{-i\hat{\varphi}}$ would be unitary:

$$\hat{U}\hat{U}^{\dagger}=\hat{U}^{\dagger}\hat{U}=I.$$
This means that $\hat{U}|n\rangle=|n-1\rangle$. Note that $\hat{U}$ must annihilate the vacuum state, $\hat{U}|0\rangle=0$. (Otherwise we could act on $|0\rangle$ by $\hat{U}$ repeatedly, producing states with negative occupation numbers!)
In a similar way we find that $\hat{U}^{\dagger}|n\rangle=|n+1\rangle$.
Now, for any $n=0,1,2,\ldots$ we see that $\langle m|\hat{U}\hat{U}^{\dagger}|n\rangle=\langle m|\hat{U}|n+1\rangle=\langle m|n\rangle=\delta_{mn}$, and therefore $\hat{U}\hat{U}^{\dagger}=I$. This would hold for $\hat{U}^{\dagger}\hat{U}$ as well, but only for positive $n$, because $\langle0|\hat{U}^{\dagger}\hat{U}|0\rangle=0$. Therefore $\hat{U}^{\dagger}\hat{U}\neq I$, and $\hat{\varphi}$ cannot be a well-defined Hermitian operator. This is a consequence of the fact that states with negative occupation numbers do not exist.
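The obstruction is easy to exhibit with matrices: take the shift operator $\hat{U}|n\rangle=|n-1\rangle$ in a truncated number basis (the truncation itself is an artifact of the finite matrix):

```python
import numpy as np

nmax = 6
dim = nmax + 1
U = np.diag(np.ones(dim - 1), k=1)   # <n-1|U|n> = 1, so U|n> = |n-1>, U|0> = 0

UUdag = U @ U.conj().T
UdagU = U.conj().T @ U
print(np.diag(UUdag))   # ones (the final 0 is the truncation artifact)
print(np.diag(UdagU))   # 0 in the vacuum slot: U†U = I - |0><0|, not I
```

The missing unit in the $|0\rangle$ slot of $\hat{U}^{\dagger}\hat{U}$ is exactly the non-unitarity argued for above.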
On the bright side, it can be shown that the above approach provides a good approximation if $\langle N\rangle\gg1$ (which is, fortunately, what we are usually dealing with). (The deviation of the occupation number from its average value $\langle N\rangle$ is confined to the interval $[-\langle N\rangle,\infty)$; at large $\langle N\rangle$ it is "almost" $(-\infty,\infty)$.) The relation (1.127) holds for states with small $\Delta\varphi$.⁴
Keeping in mind these caveats, we can infer from (1.124) that in the basis of
eigenstates of ϕ̂ the action of the operators ϕ̂ and N on a wave function Γ(ϕ) ≡ ⟨ϕ|Γ⟩
is given by

ϕ̂Γ(ϕ) = ϕΓ(ϕ);    N Γ(ϕ) = (1/i) ∂Γ(ϕ)/∂ϕ.

Correspondingly, in the basis of eigenstates of N

4For the eigenstates of N, as one should expect, the results are close to the uniform distribution of
ϕ in the interval [0, 2π).
we formally have

ϕ̂Γ(n) = −(1/i) ∂Γ(n)/∂n.    (1.136)
These representations are related by a Fourier transform:

Γ(n) = ∫₀^{2π} (dϕ/2π) e^{inϕ} Γ(ϕ);    (1.137)

Γ(ϕ) = Σ_n e^{−inϕ} Γ(n).    (1.138)
We will see how useful this proves when dealing with transport in small super-
conductors. The phase there is the superconducting phase; the bosons are Cooper
pairs of electrons – fermions – which is certainly nontrivial.
1.4.4 Fermions
We state from the outset that the formal results of the above section hold for Fermi
statistics as well. We can introduce fermionic field operators in the same way as
bosonic ones, and use the same recipe to write down a second-quantized expression
for any n-particle operator F_n. The only difference (and a world of difference!) is that
now, instead of the Bose creation/annihilation operators, we use the Fermi ones, a†, a.
Unlike the previous case, due to the Pauli exclusion principle, for any state |F⟩ of the
Fermi system and for any one-particle state p, a†_p a†_p |F⟩ = 0 and a_p a_p |F⟩ = 0;
i.e., (a†_p)² = (a_p)² = 0. (In formal mathematical language, such operators are called nilpotent.)
In order to find their matrix elements, we again return to the case of a
one-particle operator F₁ and calculate its matrix elements between fermionic states
|F⟩, which are now given by Slater determinants (1.105).
To fix the sign of the wave function, set p1 < p2 < · · · < p N .
For a one-particle operator, the only nonzero off-diagonal matrix elements are of
the form ⟨1_f, 0_i|F₁|0_f, 1_i⟩ (here we explicitly use the fact that in the Fermi system
any occupation number N_p is either 0 or 1; the indices i, f show which occupied or
empty state is affected by the operator). Using the definition of a determinant, we
can write

⟨1_f, 0_i|F₁|0_f, 1_i⟩ = (1/N!) ∫ dξ₁ ⋯ dξ_N Σ_a Σ_{P,P′} (−1)^P (−1)^{P′}
× φ*_{p′₁}(ξ₁) ⋯ φ*_{p′_a}(ξ_a) ⋯ f₁(ξ_a) φ_{p₁}(ξ₁) ⋯ φ_{p_a}(ξ_a) ⋯,

where the sums are taken over all particles and all permutations of indices P[p₁, p₂,
⋯, p_N], P′[p′₁, p′₂, ⋯, p′_N], with (−1)^P, (−1)^{P′} being the parities of the corresponding
permutations.
There is an extra φ_i(ξ) on the right-hand side and an extra φ*_f(ξ) on the left-hand
side, and they must be connected by the operator f₁(ξ_a). Therefore, in all terms
contributing to the matrix element, the permutations P and P′ differ only by a single
index, f instead of i:

P : p₁ p₂ … i … p_N;
P′ : p₁ p₂ … f … p_N.
These permutations have relative parity (−1)^Q, where Q is the number of occupied
states between i and f (that is, with i < p < f, if i < f). Indeed, the parity of
the permutation P is (−1)^P, where P is the number of steps needed to reach it
starting from the ordered set of indices p_a < p_b < ⋯ < i < ⋯ (we can transpose
two adjacent indices at each step). To obtain the permutation P′, we replace i in the
initial set with the index of the final state, f. Now we first have to put the index f in
place; if there is no occupied state between i and f, it is already in place; if
not, we have to make exactly Q steps to order the set of indices, after which they
can be rearranged in the same P steps. Therefore, the relative parity of P and P′
is (−1)^Q.
This allows us to write for the matrix element

⟨1_f, 0_i|F₁|0_f, 1_i⟩ = ⟨f|f₁|i⟩ (−1)^Q.
Here we keep the same notation for the matrix elements of the operator f₁ as in the Bose
case.
Evidently, the creation/annihilation operators in the Fermi case, a†, a, will have
only one nonzero matrix element each. We define them to be

⟨0_j|a_j|1_j⟩ = ⟨1_j|a†_j|0_j⟩ = (−1)^{Σ_{s=1}^{j−1} N_s}.    (1.139)
First let us check whether the particle number operator N_j in the Fermi case is still
a†_j a_j. The answer is positive, since

⟨0_j|a†_j a_j|0_j⟩ = 0,    ⟨1_j|a†_j a_j|1_j⟩ = 1,

⟨1_f, 0_i|a†_f a_i|0_f, 1_i⟩ = ⟨1_f, 0_i|a†_f|0_f, 0_i⟩ ⟨0_f, 0_i|a_i|0_f, 1_i⟩
= (−1)^{Σ′_{s=1}^{f−1} N_s} (−1)^{Σ_{z=1}^{i−1} N_z}
= (−1)^Q

(the prime means that in the left sum the occupation numbers are calculated after
the particle in the initial state was annihilated; e.g., if i < f, then the resulting
expression is (−1)^{Σ′_{s=i}^{f−1} N_s} = (−1)^{Σ_{s=i+1}^{f−1} N_s} ≡ (−1)^Q). This confirms our guess
that a one-particle operator can indeed be written via the a†, a operators in the same
way as in the Bose case,

F₁ = Σ_{f,i=1}^{N} ⟨f|f₁|i⟩ a†_f a_i.
On the other hand,

⟨1_f, 0_i|a_i a†_f|0_f, 1_i⟩ = ⟨1_f, 0_i|a_i|1_f, 1_i⟩ ⟨1_f, 1_i|a†_f|0_f, 1_i⟩
= (−1)^{Σ′_{s=1}^{f−1} N_s} (−1)^{Σ_{z=1}^{i−1} N_z}
= (−1)^{Q+1};

i.e., a_i a†_f = −a†_f a_i for i ≠ f: together with the diagonal matrix elements, this yields the canonical anticommutation relations, [a_i, a†_f]₊ = δ_{if}, [a_i, a_f]₊ = 0.
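The sign bookkeeping of (1.139) and the resulting anticommutation can be verified with explicit matrices via the Jordan–Wigner construction, which realizes exactly the (−1)^{Σ_{s<j} N_s} strings above. The sketch below is our illustration, not part of the text; it assumes NumPy, and the chain length L and the chosen states are arbitrary.

```python
import numpy as np
from functools import reduce

def kron(*ops):
    return reduce(np.kron, ops)

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                  # (-1)^{N_s} on one site
s = np.array([[0.0, 1.0], [0.0, 0.0]])    # s|1> = |0>, basis ordered (|0>, |1>)

def a(j, L):
    """Annihilation operator on site j of L, carrying the sign string of (1.139):
    a_j = (-1)^{N_1 + ... + N_{j-1}} x (local annihilation)."""
    return kron(*([Z] * j + [s] + [I2] * (L - j - 1)))

L = 4
ops = [a(j, L) for j in range(L)]

# Canonical anticommutation relations: {a_i, a_j^dag} = delta_ij, {a_i, a_j} = 0
for i in range(L):
    for j in range(L):
        assert np.allclose(ops[i] @ ops[j].T + ops[j].T @ ops[i],
                           np.eye(2 ** L) * (i == j))
        assert np.allclose(ops[i] @ ops[j] + ops[j] @ ops[i], 0.0)

def fock(occ):
    """Basis vector |n_0 n_1 ...> as a Kronecker product."""
    e = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
    return kron(*[e[n] for n in occ])

# <1_f, 0_i| a_f^dag a_i |0_f, 1_i> = (-1)^Q, Q = occupied states between i and f
i, f = 0, 3
results = []
for middle in [(0, 0), (1, 0), (1, 1)]:   # occupations of sites 1, 2 -> Q = 0, 1, 2
    bra = fock((0, *middle, 1))           # state i empty, state f occupied
    ket = fock((1, *middle, 0))           # state i occupied, state f empty
    Q = sum(middle)
    elem = bra @ (ops[f].T @ ops[i]) @ ket
    results.append((Q, float(elem)))
    print(Q, elem)                        # 0 1.0, then 1 -1.0, then 2 1.0
```

Replacing a†_f a_i with a_i a†_f in the last matrix element flips every sign, reproducing the (−1)^{Q+1} found above.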
(Fermi field operators are built from a†, a and one-particle wave functions in the
same way as in the Bose case, and (1.141) follows from the completeness of the set of those
functions.)
As an exercise, you can check that the same recipe as in the Bose case holds, e.g.,
for two-particle operators. We can now finally conclude our considerations on
how to express the operators of observables in the second quantization representation
(Table 1.3).
1.5 Problems
• Problem 1
Derive the path integral expression for the single-particle propagator in momentum
space, starting from the definition

K(p, t; p′, t′) = ⟨p, t|p′, t′⟩ θ(t − t′) = ∫ Dx ∫ (Dp/(2π)³) e^{iS(p,x)}.

Show that the result coincides with the Fourier transform of the coordinate-space
propagator:

K(p, t; p′, t′) = ∫ d³x ∫ d³x′ e^{−ipx} K(x, t; x′, t′) e^{ip′x′}.
• Problem 2
Derive the Feynman rules in momentum space for the scattering by the potential

V(x, t) = ∫ (d³p dE/(2π)⁴) e^{i(px−Et)} v(p, E).
• Problem 3
Suppose the interactions of particles in the system are described by a scalar pair
potential u,

W = (1/2) Σ_{a≠b} u(|x_a − x_b|),    H = K (kinetic energy) + W.

Write down the expressions for K and W in second quantized form. Write the
equations of motion for the field operators (Bose and Fermi case) in the Heisenberg and
interaction representations. Compare them to the one-particle Schrödinger equation. What is
the difference?
References
1. Carruthers, P., Nieto, M.M.: Phase and angle variables in quantum mechanics. Rev. Mod. Phys. 40, 411 (1968)
2. Feynman, R.P., Hibbs, A.R.: Quantum Mechanics and Path Integrals. McGraw-Hill, New York (1965)
3. Gardiner, C.W.: Handbook of Stochastic Methods for Physics, Chemistry, and the Natural Sciences. Springer, Berlin (1985)
4. Goldstein, H.: Classical Mechanics. Addison-Wesley, Reading (1980)
5. Imry, Y.: Physics of mesoscopic systems. In: Grinstein, G., Mazenko, G. (eds.) Directions in Condensed Matter Physics: Memorial Volume in Honor of Shang-Keng Ma. World Scientific, Singapore (1986)
6. Landau, L.D., Lifshitz, E.M.: Quantum Mechanics: Non-Relativistic Theory. A Course of Theoretical Physics, vol. III, pp. 64–65. Pergamon Press, New York (1989) (A concise and clear explanation of the second quantization formalism)
7. Ryder, L.: Quantum Field Theory, Chap. 5. Cambridge University Press, New York (1996) (A discussion of path integrals in quantum mechanics and field theory)
8. Washburn, S., Webb, R.A.: Quantum transport in small disordered samples from the diffusive to the ballistic regime. Rep. Prog. Phys. 55, 1311 (1992) (A review of theoretical and experimental results on mesoscopic transport)
9. Ziman, J.M.: Elements of Advanced Quantum Theory. Cambridge University Press, Cambridge (1969) (An excellent introduction into the very heart of the method of Green's functions)
Chapter 2
Green’s Functions at Zero Temperature
Men say that the Bodhisat Himself drew it with grains of rice
upon dust, to teach His disciples the cause of things. Many ages
have crystallised it into a most wonderful convention crowded
with hundreds of little figures whose every line carries a
meaning.
Abstract Green’s functions as a tool for probing the response of a many-body system
to an external perturbation. Similarity and difference from a one-particle propagator.
Statistical ensembles. Definition of Green’s functions at zero temperature. Analytical
properties of Green’s functions and their relation to quasiparticles. Perturbation the-
ory and diagram techniques for Green’s functions at zero temperature. "Dressing"
of particles and interactions: Polarization operator and self energy. Many-particle
Green’s functions.
For the one-particle state this means that it should be relatively stable, thus possessing
some measurable characteristics and giving us a palatable zero-order approximation:
that of independent quasiparticles. (We have discussed this at some length
in Chap. 1. Here we are going to elaborate on that qualitative discussion and see how it
fits the general formalism of perturbation theory developed so far.)
We have seen that the field operator satisfies the “Schrödinger equation”

i ∂ψ(x, t)/∂t = (E(x, t) + V(x, t)) ψ(x, t)
+ ∫ dx′ ψ†(x′, t) U(x′, x) ψ(x′, t) ψ(x, t) + ⋯    (2.1)

Here E and V are the operators of kinetic energy and external potential; U describes
instantaneous particle–particle interactions, and so on. Evidently, if the basic set of
one-particle functions is chosen correctly, the leading term in this equation will be
the one-particle one; i.e.,

i ∂ψ(x, t)/∂t ≈ (E(x, t) + V(x, t)) ψ(x, t).    (2.2)
This is a mathematical demonstration of the fact that we have a system of weakly
interacting objects—“quasiparticles”—which can be approximately described by
(2.2). Since the deviations from this description are small, the lifetimes of these
objects are large enough to measure their characteristics in some way, and so they
are reasonably well defined. And as often as not these characteristics are drastically different from
the properties of free particles, already due to the presence of the V-term in the above
equation. Let us consider this point in more detail.
For example, when we investigate the properties of electrons in a metal, a reasonable
first approximation is to take into account the periodic potential of the crystal
lattice, neglecting for a while electron–electron interactions and “freezing” the
ions at their equilibrium positions. Even in this crude approximation the properties
of these “quasielectrons” are very different from those of a QED electron, with its
mass of 9.109 × 10⁻²⁸ g and electric charge of −4.803 × 10⁻¹⁰ esu. Its mass is
now, generally, anisotropic and may be significantly smaller or larger; its charge may
become positive; its momentum is no longer conserved, due to the celebrated Umklapp
processes; and there can exist several different species of electrons in our system!
(See Fig. 2.1.)
On the other hand, when we are interested in the properties of the crystal lattice
of the very metal, we quantize the motion of the ions, and come to the concept of
phonons. These are quasiparticles, if there are any, because outside the lattice phonons
simply don’t exist, while inside it they thrive. They even interact with electrons and
each other—through terms like the third one in (2.1).
Of course, our ultimate goal is to take into account these other terms as well. Then
we will have some other objects, governed by an equation like (2.2) without any
extra interaction—and they will be the actual quasiparticles in our system.
2.1 Green’s Function of The Many-Body System: Definition and Properties 55
Now we can turn to the building of such a theory in the many-body case. We will
begin with the case of a single-component normal, uniform, and homogeneous Fermi
or Bose system (the above-mentioned textbook case) at zero temperature.
This is the simplest possible and practically important case, since it does not
involve a superfluid (superconducting) condensate. (The discussion of the latter we
postpone until Chap. 4.) In the Fermi case it applies to nonsuperconducting metals
and semiconductors—if we forget for a while about the subtleties of band structure.
Alkali metals are especially good examples (Fig. 2.2).
In the Bose case the example seems purely academic (since bosons must undergo
Bose condensation at zero temperature) until we recall that there is at least one practically
important system of bosons that do not condense: phonons! (This is because the number of
phonons is not conserved, but this is not important when we use the grand potential
formalism.)
In Chap. 1 we introduced the one-particle propagator

K(x, t; x′, t′) = ⟨x|S(t, t′)|x′⟩ = ⟨x t|x′ t′⟩

(here the Heisenberg field operator ψ†(x, t) creates a particle at a given point), and
Green's function is introduced as the overlap of such states.
Of course, we must average over the states of the many-body system on which our
field operators act, in order to get rid of all nonmacroscopic variables except
the two coordinates and moments of time between which the quasiparticle travels.
Such an averaging of an operator A (both quantum and statistical) is achieved by
taking its trace with the statistical operator (density matrix) of the system, ρ̂:

⟨A⟩ = tr(ρ̂ A).    (2.3)
Now, the above formula indeed looks like a propagator, describing a process in which
we add to our system of N identical fermions one extra particle, let it propagate
from (x′, t′) to (x, t), and then take it away. It is a good probe of particle–particle
interactions in the system. The other option would be first to take away a particle,
look at how the resulting hole propagates, and then fill it, restoring the particle
(Fig. 2.3).
Then the one-particle causal Green's function, describing both processes, can be
defined by the expression

G_{αβ}(x, t; x′, t′)
= −i ⟨ψ_α(x, t) ψ†_β(x′, t′)⟩ θ(t − t′) ∓ i ⟨ψ†_β(x′, t′) ψ_α(x, t)⟩ θ(t′ − t)
≡ −i ⟨T ψ_α(x, t) ψ†_β(x′, t′)⟩.    (2.4)
Here we write the spin indices explicitly. They can take two values (e.g., up and down)
for fermions (and only one for phonons).¹
Averaging as defined by (2.3) is a linear operation. Using this property, we can apply
the time differentiation operator ∂/∂t to T ψ_α(x, t) ψ†_β(x′, t′), and average it to
obtain

1 It can be shown that no matter what the spin of the real fermions, the (basic) quasiparticles will have
spin 1/2 (though there will be several types of quasiparticles); see [4], §1.
58 2 Green’s Functions at Zero Temperature
i ∂/∂t₁ G_{αβ}(x₁, t₁; x₂, t₂)
= E(x₁) G_{αβ}(x₁, t₁; x₂, t₂) + V(x₁, t₁) G_{αβ}(x₁, t₁; x₂, t₂)
− i ∫ dx₃ U(x₃, x₁) ⟨T ψ†_γ(x₃, t₁) ψ_γ(x₃, t₁) ψ_α(x₁, t₁) ψ†_β(x₂, t₂)⟩ + ⋯    (2.6)
We see that the many-body Green's function defined above is not a Green's function
in the mathematical sense: it is a solution to the differential equation (2.6). This is not
a closed equation for Green's function, since it contains averages of four and more
field operators (two-particle, three-particle, etc., Green's functions). Thus (2.6) is only
the first equation in the quantum analogue of the well-known infinite BBGKY chain
(Bogoliubov–Born–Green–Kirkwood–Yvon) of classical statistical mechanics. The
latter consists of interlinked equations of motion for the classical n-particle distribution
functions (see, e.g., [2], Chap. 3).
As in the classical case, breaking this chain leads to a nonlinear differential
equation for Green's function, as distinct from the linear Eq. (1.25), which governs
the one-particle propagator K(x, t; x′, t′).
The averaging procedure in equilibrium can be performed most conveniently
using either the canonical or the grand canonical ensemble. Mathematically, the two
reflect different choices of independent variables: (T, V, N: temperature, volume,
number of particles) or (T, V, μ: temperature, volume, chemical potential). The corresponding statistical operators are

ρ̂_CE = e^{β(F−H)}    (2.7)

or

ρ̂_GCE = e^{β(Ω−H′)},    (2.8)

where β = 1/T, with F and Ω the free energy and the grand potential, and H′ = H − μN.
The Hamiltonian now has the complete set of eigenstates |n⟩ with eigenvalues
E′_n = E_n − μN_n, where N_n is the number of particles in the state |n⟩. The average of
a time-ordered product of two field operators in the Heisenberg representation can now
be written as

⟨T ψ(t₁) ψ†(t₂)⟩ = Σ_n e^{β(Ω−E′_n)} ⟨n|T ψ(t₁) ψ†(t₂)|n⟩ / ⟨n|n⟩.

(We have used the standard trick of inserting the complete set of states, and allowed
for the possibility that they are not normalized to unity.)
At zero temperature (β → ∞) we are left with the ground-state average

⟨T ψ(t₁) ψ†(t₂)⟩ = ⟨0|T ψ(t₁) ψ†(t₂)|0⟩ / ⟨0|0⟩.

Here |0⟩ is the exact ground state of the system in the Heisenberg representation: it
is time independent and includes all interaction effects.
In a homogeneous and isotropic system in a stationary state, Green's function can
depend only on the differences of coordinates and times:

G_{αβ}(x, t; x′, t′) = G_{αβ}(x − x′, t − t′).    (2.11)

If, moreover, the system is not magnetically ordered and is not placed in an
external magnetic field, then the spin dependence in (2.11) reduces to a unit matrix:

G_{αβ} = δ_{αβ} G.    (2.12)

For the unperturbed system we then have

G₀(x, t) = (1/i) ⟨T ψ(x, t) ψ†(0, 0)⟩₀.    (2.13)
2.1 Green’s Function of The Many-Body System: Definition and Properties 61
A direct calculation yields

G₀(x, t) = (1/iV) Σ_k [θ(t)(1 − θ(μ − ε_k)) − θ(−t) θ(μ − ε_k)] e^{ikx − i(ε_k − μ)t}.    (2.14)
In momentum space,

G₀(p, ω) = 1 / (ω − (ε_p − μ) + i0·sgn(ε_p − μ))
= 1 / (ω − (ε_p − μ) + i0·sgn ω).    (2.15)
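The role of the infinitesimal i0 in (2.15) can be made concrete numerically: model it by a small finite η > 0 and compare the time integral with the analytic result. The sketch below is our illustration (NumPy assumed; the values ξ = ε_p − μ, η, and the grids are arbitrary), taken for a single unoccupied level, where (2.14) reduces to G₀(t) = −iθ(t)e^{−iξt}:

```python
import numpy as np

# Fourier transform of G0(t) = -i theta(t) e^{-i xi t} (one level with eps_p > mu),
# with the infinitesimal i0 modelled by a damping factor e^{-eta t}.
xi, eta = 1.3, 0.05
t = np.linspace(0.0, 400.0, 200_001)
dt = t[1] - t[0]
wts = np.full_like(t, dt)
wts[0] = wts[-1] = dt / 2                           # trapezoid weights
G0_t = -1j * np.exp(-1j * xi * t) * np.exp(-eta * t)

errs = []
for w in (0.5, 1.3, 2.0):
    G0_w = np.sum(np.exp(1j * w * t) * G0_t * wts)  # int dt e^{i w t} G0(t)
    exact = 1.0 / (w - xi + 1j * eta)               # Eq. (2.15) with i0 -> i eta
    errs.append(abs(G0_w - exact))
print(max(errs) < 1e-3)                             # True
```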
For phonons, the field operator is

φ(x, t) = (1/√V) Σ_k (ω_k/2)^{1/2} [b_k e^{i(kx − ω_k t)} + b†_k e^{−i(kx − ω_k t)}],    (2.17)

and b, b† are the usual Bose operators. (Since the phonon field is ultimately a
quantized sound wave, it should be Hermitian, to yield in the classical limit a classical
observable, the medium displacement.) You will see that

D⁰(k, ω) = ω_k² / (ω² − ω_k² + i0).    (2.18)
where the operator (G⁰)⁻¹(1) in coordinate space is (i∂/∂t₁ − E(∇_{x₁})), and in momentum
space (ω − E(k)).
The derivation of the corresponding equation for D⁰(1, 2) is suggested as one of
the problems to this chapter.
for both Fermi and Bose field operators. (It is enough to substitute the definition of
P and use the canonical (anti)commutation relations.) Equation (2.21) is reminiscent
of the Heisenberg equations of motion for a field operator (1.84), and together they
imply

ψ(x, t) = e^{i(Ht − Px)} ψ(0, 0) e^{−i(Ht − Px)}.

In a more formal language, the Hamiltonian and the operator of total momentum are
the generators of temporal and spatial shifts, respectively.
We can substitute the above expression into Green’s function, (2.4), and then
insert the unity operator constructed of the full set of common eigenstates of the two
commuting operators H, P,
2.1 Green’s Function of The Many-Body System: Definition and Properties 63
I = Σ_s |s⟩⟨s|,

wherever it seems reasonable. The following calculations are tedious but straightforward.
⎣
≈0|T ρ(x, t)ρ † (x √ , t √ )|0∇ = φ(t − t √ ) ≈0|ρ(x, t)|s∇≈s|ρ † (x √ , t √ )|0∇
s
⎣
√
∓ φ(t − t) ≈0|ρ † (x √ , t √ )|s∇≈s|ρ(x, t)|0∇
s
⎣ √ √ √ √
= φ(t − t ) √
≈0|ei(Ht−P x) ρe−i(Ht−P x) |s∇≈s|ei(Ht −P x ) ρ † e−i(Ht −P x ) |0∇
s
⎣ √ √ √ √
√
∓ φ(t − t) ≈0|ei(Ht −P x ) ρ † e−i(Ht −P x )|s∇≈s| ei(Ht−P x) ρe−i(Ht−P x) |0∇
s
⎣
= φ(t − t √ ) ei(E 0 −μN0 )t ≈0|ρ|s∇e−i((E s −μNs )t−Ps x)
s
i((E s −μNs )t √ −P x √ ) √
× e ≈s|ρ † |0∇e−i(E 0 −μN0 )t
⎣ √ √ √
∓ φ(t √ − t) ei(E 0 −μN0 )t ≈0|ρ † |s∇e−i((E s −μNs )t −Ps x )
s
× ei((E s −μNs )t−Ps x) ≈s|ρ|0∇e−i(E 0 −μN0 )t .
The momentum of the state |0⟩ is zero. The energy exponents here contain some
subtlety. Since the field operators create or annihilate particles one at a time, in the
first part of the expression, which contains ⟨0|ψ|s⟩⟨s|ψ†|0⟩ ≡ |⟨0|ψ|s⟩|², the states |s⟩
must contain one particle more than the state |0⟩, say N_s = N₀ + 1 ≡ N + 1
(otherwise the annihilation operator has nothing to annihilate). On the other hand,
in the second half of the expression, with ⟨0|ψ†|s⟩⟨s|ψ|0⟩ ≡ |⟨s|ψ|0⟩|², the states
|s⟩ contain N − 1 particles. Since, generally, the eigenvalues of the Hamiltonian
depend on the number of particles both via the −μN term and directly, we have in
the exponents (showing explicitly the dependence of the eigenenergies on the particle
number)

(E_s(N+1) − μ(N+1)) − (E₀(N) − μN) = ε_s^{(+)} − μ,  ε_s^{(+)} ≡ E_s(N+1) − E₀(N) > μ;
(E₀(N) − μN) − (E_s(N−1) − μ(N−1)) = ε_s^{(−)} − μ,  ε_s^{(−)} ≡ E₀(N) − E_s(N−1) < μ.

Evidently, the former gives the energy change when a particle is added to the state
|s⟩; the latter, when the particle is removed from the state |s⟩. (You can check the
above inequalities if you recall that at zero temperature, for the ground-state energy
(a thermodynamic observable!), (∂E₀/∂N) = (∂F/∂N) (F being the thermodynamic
potential), and that by definition (∂F/∂N) = μ.) For the phonons, μ = 0.
Now we can sweep all the details of the system under the carpet by introducing

A_s = ⟨0|0⟩⁻¹ (1/2) Σ_α |⟨0|ψ_α|s⟩|²;    (2.25)

B_s = ⟨0|0⟩⁻¹ (1/2) Σ_α |⟨s|ψ_α|0⟩|².    (2.26)
(The sum and the factor 1/2 take care of the spin degrees of freedom, α.)
These are functions of the index s only. Therefore, we can easily take the Fourier
transform of Green's function, using the above results, and finally get the Källén–Lehmann
representation:

G(p, ω) = (2π)³ Σ_s [ A_s δ(p − P_s) / (ω − ε_s^{(+)} + μ + i0)
± B_s δ(p + P_s) / (ω − ε_s^{(−)} + μ − i0) ].    (2.27)
In this expression, the delta functions of momenta arise from the exponential factors
exp[iP_s x]; they indicate the values of momenta corresponding to single-particle
excitations (note that the second term in (2.27) clearly indicates the holes, with
momenta −P_s and energies ε_s^{(−)}). The frequency denominators contain the
infinitesimal ±i0, due to the presence of the theta functions of time in the initial expression,
exactly as when we calculated the unperturbed Green's functions. Of course,
our present result is consistent with the expressions (2.15), (2.18). Mathematically, the
Källén–Lehmann representation tells us that Green's function of a finite system is a
meromorphic function of the complex variable ω; all its singularities are simple poles.
Each pole corresponds to a definite excitation energy, ε_s^{(±)}, and a definite momentum
of the system, ±P_s. The poles are infinitesimally shifted into the lower half-plane of
ω when ω > 0, and into the upper one when ω < 0. Thus the causal Green's function
is not analytic in either half-plane.
In the thermodynamic limit (N, V → ∞, N/V = const) it is more convenient
to use a different form of (2.27):

G(p, ω) = ∫_{−∞}^{∞} (dω′/π) [ ρ_A(p, ω′)/(ω′ − ω − i0) + ρ_B(p, ω′)/(ω′ − ω + i0) ],    (2.28)

where

ρ_A(p, ω′) = −π (2π)³ Σ_s A_s δ(p − P_s) δ(ω′ − ε_s^{(+)} + μ);    (2.29)

ρ_B(p, ω′) = ∓π (2π)³ Σ_s B_s δ(p + P_s) δ(ω′ − ε_s^{(−)} + μ).    (2.30)
Indeed, in this limit we can no longer resolve the individual levels ε_s^{(±)}. The
densities ρ_{A,B}(p, ω′) become continuous functions, zero at negative (resp. positive)
frequencies on the real axis; the latter becomes a branch cut in the complex ω plane.
The real and imaginary parts of Green's function (for real frequencies) can be
easily obtained from (2.27), using the Weierstrass (or Sokhotsky–Weierstrass) formula

1/(x ± i0) = P(1/x) ∓ iπδ(x).    (2.31)
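Formula (2.31) is easily checked numerically by replacing i0 with a small finite η. The sketch below is our illustration (NumPy assumed); for the even test function f(x) = e^{−x²} the principal-value part vanishes by symmetry, so the integral must tend to ∓iπ f(0) = ∓iπ:

```python
import numpy as np

# int f(x)/(x + i eta) dx  ->  P int f(x)/x dx - i pi f(0)  as eta -> 0.
# For even f the principal value is zero, so the limit is -i pi.
x = np.linspace(-20.0, 20.0, 400_001)
dx = x[1] - x[0]
f = np.exp(-x ** 2)

for eta in (1e-1, 1e-2, 1e-3):
    I_plus = np.sum(f / (x + 1j * eta)) * dx
    print(eta, I_plus)        # real part ~ 0, imaginary part -> -pi
```

Using (x − iη) instead flips the sign of the imaginary part, as (2.31) requires.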
The difference reflects the fact that there is no Fermi surface in Bose systems (and
thus “particles” and “holes” are the same).
66 2 Green’s Functions at Zero Temperature
To prove it, note that in this limit we can neglect all other terms in the denominators
of (2.27), so that

G(p, ω) ≈ (1/ω) (2π)³ Σ_s (A_s δ(p − P_s) ± B_s δ(p + P_s)).

The sum can be evaluated by performing an inverse Fourier transform and making
use of the canonical (anti)commutation relation between field operators in coordinate
space:

(2π)³ Σ_s (A_s δ(p − P_s) ± B_s δ(p + P_s))
= ∫ d³(x − x′) Σ_s [ A_s e^{i(p−P_s)(x−x′)} ± B_s e^{i(p+P_s)(x−x′)} ]
= ∫ d³(x − x′) (1/2) Σ_α Σ_s [ |⟨0|ψ_α(0)|s⟩|² e^{i(p−P_s)(x−x′)}
± |⟨s|ψ_α(0)|0⟩|² e^{i(p+P_s)(x−x′)} ]
= ∫ d³(x − x′) e^{ip(x−x′)} (1/2) Σ_α ⟨0| ψ_α(x, t) ψ†_α(x′, t) ± ψ†_α(x′, t) ψ_α(x, t) |0⟩
= 1.
We will define two more Green's functions: the retarded and advanced ones, G^R and
G^A:

G^R_{αβ}(x₁, t₁; x₂, t₂)
= −i ⟨ψ_α(x₁, t₁) ψ†_β(x₂, t₂) ± ψ†_β(x₂, t₂) ψ_α(x₁, t₁)⟩ θ(t₁ − t₂);    (2.37)

G^A_{αβ}(x₁, t₁; x₂, t₂)
= +i ⟨ψ_α(x₁, t₁) ψ†_β(x₂, t₂) ± ψ†_β(x₂, t₂) ψ_α(x₁, t₁)⟩ θ(t₂ − t₁).    (2.38)

The definition is chosen in such a way as to guarantee that (1) the retarded (advanced)
Green's function is zero for all negative (positive) time differences t − t′, and (2) at
t = t′ both have a (−iδ(x − x′)) discontinuity, exactly as does the causal Green's
function. The latter statement is easy to check by taking the limit t → t′ and using
the canonical (anti)commutation relation:

lim_{t→t′} G^{R(A)}(t − t′) = ∓i ⟨0| [ψ_α(x, t), ψ†_β(x′, t′)]_± |0⟩ · (±θ(t − t′))
= −i δ(x − x′).
Again, in the uniform and stationary case, G^{R(A)}_{αβ}(x₁, t₁; x₂, t₂) = G^{R(A)}(x₁ −
x₂, t₁ − t₂) δ_{αβ}. The unperturbed retarded and advanced Green's functions, for example,
can be easily found by direct calculation:

G₀^{R,A}(p, ω) = 1 / (ω − (ε_p − μ) ± i0);

D₀^{R,A}(k, ω) = (ω_k/2) [ 1/(ω − ω_k ± i0) − 1/(ω + ω_k ± i0) ]
= ω_k² / (ω² − ω_k² ± i0·sgn ω).    (2.39)
The Källén–Lehmann representation for G^{R,A} can be obtained in the same way
as for the causal Green's function. The result is as follows:

G^R(p, ω) = (2π)³ Σ_s [ A_s δ(p − P_s) / (ω − ε_s^{(+)} + μ + i0)
± B_s δ(p + P_s) / (ω − ε_s^{(−)} + μ + i0) ];    (2.40)

G^A(p, ω) = (2π)³ Σ_s [ A_s δ(p − P_s) / (ω − ε_s^{(+)} + μ − i0)
± B_s δ(p + P_s) / (ω − ε_s^{(−)} + μ − i0) ].    (2.41)
Taking the real and imaginary parts of these expressions, we see that on the real axis

Re G^R(p, ω) = Re G^A(p, ω) = Re G(p, ω);
Im G^R(p, ω) = Im G(p, ω), ω > 0;
Im G^A(p, ω) = Im G(p, ω), ω < 0;
G^R_{αβ}(p, ω) = [G^A_{βα}(p, ω)]*.    (2.42)
On the other hand, the retarded (advanced) Green's function is clearly analytic in the
upper (lower) ω-half-plane. This means that they are the analytic continuations of the
causal Green's function from the rays ω > 0 (ω < 0), respectively. Their asymptotic
behavior is, of course, the same as that of the causal Green's function:

G^{R,A}(p, ω) = ∫_{−∞}^{∞} (dω′/π) ρ^{R,A}(p, ω′) / (ω − ω′ ± i0),    (2.43)
Evidently,

ρ^R(p, ω′) = −Im G^R(p, ω′).    (2.45)

From formula (2.44) it is clear that ρ^R(p, ω′) is proportional to the probability density
of an elementary excitation with momentum p having energy ω′. In the noninteracting
case, e.g., −Im G^{0,R} = π δ(ω − (ε_p − μ)), because in the absence of interactions,
quasiparticles would indeed coincide with the “basic” particles, with dispersion law ε_p.
We can now visualize the concept of a quasiparticle excitation and its relation to
the existence of isolated poles of G(p, ω). Suppose that there is such a pole at
ω = Ω − iΓ, Γ > 0. (This corresponds to a particle added in the state p.) To see the
evolution of the excitation created by the operator a†_p, calculate Green's function in
the (p, t)-representation:

G(p, t) = ∫_{−∞}^{∞} (dω/2π) e^{−iωt} G(p, ω).
For negative t the integration contour can be closed in the upper half-plane, and
since there are no singularities there, the integral is zero. For positive t the contour
closes in the lower half-plane and will contain the pole. We cannot, though, simply
calculate the residue, since the causal Green's function is not analytic in the lower
half-plane. We must therefore replace it by its analytic continuations, G^{R,A}. In order to
do this, we close the integration contour as shown in Fig. 2.5. For Re ω < 0 we can
replace G(p, ω) by G^A(p, ω), and for Re ω > 0 by G^R(p, ω). Now, the Cauchy
theorem of complex analysis tells us that the integral we are interested in can be
written as follows:

G(p, t) = − ∫_{C′₁+C″₁} (dω/2π) e^{−iωt} G^A(p, ω) − ∫_{C′₂+C″₂+C} (dω/2π) e^{−iωt} G^R(p, ω).
Watson’s lemma, together with the 1/χ asymptotics of Green’s functions, ensures
that the integrals over infinitely remote quarter circles C1√ and C2√ are zero. Therefore,
we are left with two terms: the contribution from the pole, and the integral along the
negative imaginary axis:
0
−iΓt −t dχ −iχt A ⎫
G(p, t) = −i Z e e + e G (p, χ) − G R (p, χ) , (2.46)
2ξ
−i∼
where Z is the residue of G^R(ω) at the pole. The first term describes a free quasiparticle
with a finite lifetime ≈ 1/Γ. The contribution from the integral is small, if
only Ωt ≫ 1, Γt ≪ 1. (This means that the decay rate must be small enough,
Γ ≪ Ω.) Indeed, invoking the Källén–Lehmann representation, we see that
70 2 Green’s Functions at Zero Temperature
∫_{−i∞}^{0} (dω/2π) e^{−iωt} [G^A(p, ω) − G^R(p, ω)]
≈ ∫_{−i∞}^{0} (dω/2π) e^{−iωt} [ A/(ω − ε_p + μ − iΓ) − A/(ω − ε_p + μ + iΓ) ]
= 2iAΓ ∫_{−i∞}^{0} (dω/2π) e^{−iωt} / ((ω − ε_p + μ)² + Γ²)
≈ − (AΓ/(πt)) e^{−iμt} / (μ − ε_p)² ≪ Z e^{−iΩt} e^{−Γt}

for (ε_p − μ) ≈ Ω, Ωt ≫ 1, Γt ≪ 1.
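The content of (2.46) is easy to see numerically: a Green's function with a single pole at ω = Ω − iΓ Fourier-transforms into the damped oscillation −iZ e^{−iΩt}e^{−Γt}. The sketch below is our illustration (NumPy assumed; Z, Ω, Γ, and the frequency grid are arbitrary choices):

```python
import numpy as np

# Inverse Fourier transform of a single-pole Green's function,
# G(w) = Z/(w - Omega + i Gamma), compared with the pole term of Eq. (2.46).
Z, Omega, Gamma = 1.0, 2.0, 0.1
w = np.linspace(-200.0, 200.0, 1_000_001)
dw = w[1] - w[0]
G_w = Z / (w - Omega + 1j * Gamma)

errs = []
for t in (1.0, 5.0, 10.0):
    G_t = np.sum(np.exp(-1j * w * t) * G_w) * dw / (2 * np.pi)
    expected = -1j * Z * np.exp(-1j * Omega * t) * np.exp(-Gamma * t)
    errs.append(abs(G_t - expected))
print(max(errs) < 1e-2)       # True: free decay with lifetime 1/Gamma
```

The small residual comes from truncating the frequency integral at ±200; it plays the role of the branch-cut correction discussed above.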
From the Källén–Lehmann representations for the retarded and advanced Green's
functions (2.40), (2.41) follows a beautiful (and important) relation between the real
and imaginary parts of those functions at real frequencies: the Kramers–Kronig
relation

Re G^{R,A}(p, ω) = ±P ∫_{−∞}^{∞} (dω′/π) Im G^{R,A}(p, ω′) / (ω′ − ω).    (2.47)

(This can be established directly by taking the imaginary and real parts of (2.40), (2.41).)
It can be shown that the reason why this relation holds is causality, that is, the
property of the advanced and retarded Green's functions of time to be zero for t > (<) t′.
The proof (which is quite straightforward) is like our calculation of the Fourier transform
of the propagator in Chap. 1, where we established for the first time that the poles
of K(ω) must be infinitesimally displaced from the real axis in order to provide for
the θ(t)-like behavior of K(t). Then, knowing that G^{R(A)}(ω) is an analytic function
in the corresponding half-plane, we can calculate the integral ∫_{−∞}^{∞} (dω′/π) Im G^{R,A}(p, ω′)/(ω′ − ω)
along the real axis, using the Cauchy theorem, and come to the above relations. In
mathematics this relation is known as the Plemelj theorem [6].
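The Kramers–Kronig relation (2.47) can likewise be verified numerically for a simple model retarded function with a single pole in the lower half-plane. This is our illustrative sketch (NumPy assumed; Ω, Γ, the test frequency, and the grid are arbitrary):

```python
import numpy as np

# Check Re G^R(w) = P int (dw'/pi) Im G^R(w')/(w' - w) for G^R = 1/(w - Omega + i Gamma).
Omega, Gamma = 1.0, 0.3

def G_R(w):
    return 1.0 / (w - Omega + 1j * Gamma)

w0 = 1.7                                  # frequency at which the relation is tested
h = 4e-4
k = np.arange(-1_000_000, 1_000_000)
wp = w0 + (k + 0.5) * h                   # grid symmetric about w0, so the
pv = np.sum(G_R(wp).imag / (wp - w0)) * h / np.pi   # 1/(w'-w0) poles cancel pairwise

print(abs(pv - G_R(w0).real) < 1e-3)      # True
```

The symmetric midpoint grid implements the principal value automatically; this is the standard numerical trick for Hilbert-transform checks.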
n = N/V = −2i G(r = 0, t = −0) = 2 Im G(r = 0, t = −0).    (2.49)

The relation between the density and Green's function allows us, in turn, to express
the thermodynamic properties of the system at T = 0 through its Green's function.
Indeed, the grand potential of the system satisfies (see, e.g., [4])

dΩ = −S dT − N dμ = −N dμ

at T = 0 (since the entropy S(0) = 0). This equation can thus be integrated (remembering
that Ω(μ = 0) = 0):

Ω = − ∫₀^{μ} dμ′ N(μ′),

where we can substitute the expression for N(μ) from (2.48) or (2.49) (where Green's
function is explicitly μ-dependent; see Problem 2.1).
A slightly more sophisticated example is presented by the current, which in agreement
with Sect. 1.4 is expressed through the field operators as

j(r) = (ie/2m) Σ_σ ⟨(∇ψ†_σ(r)) ψ_σ(r) − ψ†_σ(r) (∇ψ_σ(r))⟩.

We can express the current through Green's function, using a “hair-splitting” trick
that allows us to separate the differentiation over two coinciding coordinates:

j(r) = (ie/2m) Σ_σ lim_{r′→r} (∇_{r′} − ∇_r) ⟨ψ†_σ(r′) ψ_σ(r)⟩
= (ie/2m) Σ_σ lim_{t→−0} lim_{r′→r} (∇_{r′} − ∇_r) (∓i G_{σσ}(r′, t; r, 0)).    (2.50)
true, but not sufficient). On the other hand, we cannot look into the differences due
to interactions, because, e.g., we have no way of determining the matrix elements of
field operators in the Källén-Lehmann representation.
As always, we have to apply perturbation theory. The great achievement of Feynman
was to build the perturbation theory formalism in which the whole perturbation
expansion, including its most cumbersome expressions, is reduced to a set
of sometimes spectacular and always physically understandable graphs—Feynman
diagrams.
As in medieval paintings, there is a strict set of rules for both drawing and reading
those images (determined by the Hamiltonian of the system). This makes diagram
techniques a highly symbolic form of art. (Of course, there are differences between
various schools and books, as in Fig. 2.6.)
When calculating the Green’s function of a system with interactions, we meet the
usual obstacle of not knowing the wave function (state) of the system over which the
average is to be taken. We don’t know the ground state, |0∇. We don’t know excited
states. Moreover, any approximation we are going to make will be virtually orthogonal to the proper many-particle state. Luckily, the approximate matrix elements (like Green's function) can be quite accurate. The seeming paradox is just a reflection of the fact that while the wave function involves all $N$ particle states, Green's function deals with only two one-particle states (initial and final). As Thouless has noted [7], if the one-particle state is approximated with a small mistake $\delta$, the projection of the corresponding many-particle state on the exact state will be of the order of $(1-\delta)^N \approx e^{-N\delta} \to 0$ as $N \to \infty$ for any finite $\delta$. On the other hand, the average of a one-particle operator (like Green's function) will contain only a small mistake $\delta$!
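Thouless's estimate is easy to see numerically; a toy calculation (the value δ = 0.01 is an arbitrary illustrative error):

```python
import math

delta = 0.01                      # one-particle error, illustrative
for N in (10, 1_000, 100_000):
    overlap = (1 - delta) ** N    # many-body overlap with the exact state
    print(N, overlap, math.exp(-N * delta))
```

The overlap is already of order 4×10⁻⁵ at N = 1000, while any one-particle average is still off by only about one percent.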
2.2 Perturbation Theory: Feynman Diagrams 73
Two conclusions can be drawn. (1) Green’s functions provide a physically sensible
method of approaching the many-body problem. (Something we could guess after a
short glance cast at library shelves filled with numerous folios on the subject, to which
we dare add a book of our own.) (2) The results are to be expressed in terms of averages
over the unperturbed state of the system, rather than corrections to many-body wave
functions. (In usual quantum mechanics both would be equivalent—because it is a
one-body theory.)
In order to fulfill our program, let us turn to the interaction representation. We have
seen before that this is a natural way to deal with perturbation theory. To avoid unnec-
essary subscripts, here we denote the field operators in the interaction representation
by uppercase Greek letters. The connection between them and Heisenberg operators
is given by
Later on we suppress the (I) subscript in the S-matrix as well; we use it only in the interaction representation anyway; that is,
$$S(t,t') = T\exp\left(-i\int_{t'}^{t}dt_1\,W(t_1)\right), \quad t > t'.$$
Here $|0\rangle$ is the Heisenberg ground-state vector. Then $e^{iH_0t'}U(t')|0\rangle = e^{iH_0t'}|0(t')\rangle_S = |0(t')\rangle_I$, i.e., the ground-state vector in the interaction representation, and
$$e^{iH_0t'}U(t')|0\rangle = S(t',-\infty)|0(-\infty)\rangle_I; \qquad (2.53)$$
$$\langle0|U^\dagger(t)\,e^{-iH_0t} = \langle0(\infty)|_I\,S(\infty,t). \qquad (2.54)$$
Now we introduce a very important adiabatic hypothesis. First, let us assume that the perturbation was absent very long ago and was turned on infinitely slowly, say $W(t \le t_1) = W\exp(\delta(t-t_1))$, $\delta \to 0^+$. It will be turned off in some very distant future, say $W(t \ge t_2) = W\exp(-\delta(t-t_2))$, $\delta \to 0^+$. Here $[t_1,t_2]$ is the time interval within which we investigate our system (and we don't care what happens earlier or later: après moi, le déluge).
Of course, physically it is rather easy to turn on and off the external potential, while
we don’t have such a free hand when the perturbation is due to particle—particle
interaction. But nothing a priori prohibits such a property of the perturbation term
in the Hamiltonian, and finally we take δ = 0 anyway. (A mathematically inclined
reader may recognize that we are actually going to use so-called Abel regularization
of conditionally convergent integrals, which appear a little later.)
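The Abel regularization mentioned above can be illustrated numerically: the oscillatory integral $\int_0^\infty e^{i\omega t}dt$, which does not converge on its own, acquires the definite value $i/(\omega+i\delta)$ once the adiabatic factor $e^{-\delta t}$ is included. A minimal sketch (the grid parameters are arbitrary):

```python
import numpy as np

def abel(omega, delta, T=400.0, n=200_001):
    # Abel-regularized ∫_0^∞ e^{iωt} dt: the factor e^{-δt} makes the
    # conditionally convergent integral absolutely convergent
    t = np.linspace(0.0, T, n)
    f = np.exp((1j * omega - delta) * t)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule

omega = 1.0
for delta in (0.2, 0.05):
    num = abel(omega, delta)
    exact = 1j / (omega + 1j * delta)   # = i/(ω + iδ)
    print(delta, num, exact)            # agree; as δ → 0+ both tend to i/ω
```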
Now, since at minus infinity there is no perturbation, we can write instead of $|0(-\infty)\rangle_I$ the unperturbed ground-state vector $|0_0\rangle$ (which is time independent, because $i\,\partial|0_0(t\to-\infty)\rangle/\partial t = W\exp(\delta(t-t_1))\,|0_0(t\to-\infty)\rangle \to 0$). It is convenient to choose a normalized state, $\langle0_0|0_0\rangle = 1$.
Now, it seems natural to think that since we had an unperturbed ground state at
minus infinity, when there was no interaction, we should have the same state at plus
infinity, when there will be no interaction. This is not true, though: it is known from
usual quantum mechanics that the adiabatically slow perturbation can actually switch
the system to a different state with the same energy. Our good luck is that the ground
state of a quantum-mechanical system is always non-degenerate, and it is the ground
state averages that we deal with! Therefore, the only difference between the states at minus and plus infinity may be a phase factor: $|0(+\infty)\rangle_I = e^{iL}|0_0\rangle$, and this factor anyway cancels between the numerator and denominator of (2.51).
Thus we have derived the key formula
The difference is that (1) we need the matrix elements not between the coordinate
(momentum) eigenstates, but between unperturbed ground state vectors of a many-
body system, and (2) now we have the denominator $\langle 0_0|S(\infty,-\infty)|0_0\rangle$.
This is the key theorem of quantum field theory, because it makes the whole formalism tick. After the exponent in the S-operator is expanded, we must calculate matrix elements of the sort $\langle0_0|T\,\phi_1\phi_2\cdots\phi_m|0_0\rangle$. Here $\phi_1,\phi_2,\dots,\phi_m$ are (Fermi or Bose) field operators in the interaction representation, and it is Wick's theorem that allows us to do this.
For the sake of clarity, from now on we denote the set of variables $(x,t,\sigma)$ by a single number or capital letter. For example:
$$\Psi_\sigma(x,t) \equiv \Psi_X; \qquad \Psi_{\sigma_1}(x_1,t_1) \equiv \Psi_1;$$
$$\sum_\sigma\int d^3x\,dt \equiv \int dX; \qquad \sum_{\sigma_1}\int d^3x_1\,dt_1 \equiv \int d1.$$
$$\psi_X = \psi_X^{(-)} + \psi_X^{(+)} = \frac{1}{\sqrt V}\sum_{p>p_F} e^{i(px-\epsilon_p t)}\,a_p + \frac{1}{\sqrt V}\sum_{p<p_F} e^{i(px-\epsilon_p t)}\,a_p. \qquad (2.57)$$
76 2 Green’s Functions at Zero Temperature
$$\varphi_X = \varphi_X^{(-)} + \varphi_X^{(+)} = \frac{1}{\sqrt V}\sum_k\left(\frac{\omega_k}{2}\right)^{1/2} b_k\,e^{i(kx-\omega_k t)} + \frac{1}{\sqrt V}\sum_k\left(\frac{\omega_k}{2}\right)^{1/2} b_k^\dagger\,e^{-i(kx-\omega_k t)}. \qquad (2.58)$$
Since both time ordering and normal ordering are distributive, we can deal with the (+) and (−) parts separately, and therefore all the intermediate manipulations can be performed on the field operators $\psi$, $\varphi$ themselves.
By definition, the normal product of any set of the field operators A, B, C, . . .
has zero ground-state average,
If both operators here are of the same sort (both creation or both annihilation), the contraction is identically zero. Indeed, then the normal ordering does not affect their product, and
$$\overline{\phi_1\phi_2} = \theta(t_1-t_2)\,\phi_1\phi_2 \mp \theta(t_2-t_1)\,\phi_2\phi_1 - \phi_1\phi_2
= \bigl(\theta(t_1-t_2)+\theta(t_2-t_1)\bigr)\phi_1\phi_2 - \phi_1\phi_2 = 0.$$
On the other hand, a contraction of conjugate field operators is a number: taking into account that the operators are in the interaction representation and their time dependence is trivial, we see that, for example,
$$\overline{\phi_1^\dagger\phi_2} = \sum_k\sum_q e^{iE_kt_1}\,e^{-iE_qt_2}\left(\theta(t_1-t_2)\,\phi_k^\dagger\phi_q \mp \theta(t_2-t_1)\,\phi_q\phi_k^\dagger - \phi_k^\dagger\phi_q\right)$$
$$= \sum_k\sum_q e^{iE_kt_1}\,e^{-iE_qt_2}\left(\bigl(\theta(t_1-t_2)+\theta(t_2-t_1)\bigr)\phi_k^\dagger\phi_q - \phi_k^\dagger\phi_q \mp \theta(t_2-t_1)\,\delta_{kq}\right),$$
and all the operator terms cancel. The fact that the contraction of Fermi/Bose field operators is an ordinary number is important, because then we can write
$$\overline{\phi_1\phi_2} = \langle0_0|\,\overline{\phi_1\phi_2}\,|0_0\rangle = \langle0_0|T\,\phi_1\phi_2|0_0\rangle - \langle0_0|\!:\!\phi_1\phi_2\!:\!|0_0\rangle = \langle0_0|T\,\phi_1\phi_2|0_0\rangle = iG^0(12). \qquad (2.61)$$
(We could insert here $|X\rangle\langle X|$ instead of the complete expression for the unit operator, $\sum_s |s\rangle\langle s|$, because the rest of it does not contribute anything.) The above expression
is actually the fully contracted term of Wick's theorem. In the thermodynamic limit, $V \to \infty$, it stays finite, because every power of $V$ in the normalization term is compensated by the summation over $k$ (both are proportional to the number of particles in the system).
On the other hand, there are other terms in the expression for $\langle X|\phi_1\phi_2\cdots|X\rangle$, but they contain $N/2-1$, $N/2-2$, etc. different values of $k$, and thus fewer independent summations. Because of that, some powers of volume in the denominator will not be canceled by summations, and all these terms vanish, which concludes our reasoning (Fig. 2.7).
Now the path is straightforward. (1) Expand the time-ordered exponent in the
expression for Green’s function; (2) Take all averages over the ground state, using
Wick’s theorem, thus factoring all terms in products of unperturbed Green’s func-
tions (with appropriate integrations); (3) Represent these terms by graphs—Feynman
diagrams. After the correspondence between those graphs and the analytic terms in
the expansion series is established, it is much simpler to work with the diagrams,
which give much clearer understanding of the structure of the expressions involved.
The rules of drawing and reading Feynman diagrams in some detail, of course,
depend on the interaction. What is even worse, they depend on tastes and preferences
of the author whose book or chapter you are reading (Fig. 2.6); there are at least three
popular schools. Here we will take one of those approaches, where time flows from
right to left and so it is also from right to left that the lines symbolizing Green’s
functions are drawn. This has at least the advantage that the order of letters labeling
the diagram is the same as that in the analytic formulas, where later moments stand
to the left.
For illustration, we derive the rules for the simple case of scalar electron–electron interaction, which by definition involves only one sort of particles (electrons) interacting via an instantaneous spin-independent potential:
$$W(t) = \frac12\sum_{\sigma_1\sigma_2}\int d^3x_1\,d^3x_2\,\Psi_{\sigma_1}^\dagger(x_1,t)\,\Psi_{\sigma_2}^\dagger(x_2,t)\,U(x_1-x_2)\,\Psi_{\sigma_2}(x_2,t)\,\Psi_{\sigma_1}(x_1,t);$$
$$U(1-2) \equiv U(x_1-x_2)\,\delta(t_1-t_2);$$
and now we will integrate over space and time coordinates indiscriminately, with the whole set $(x,t,\sigma)$ ($\sigma$ is the spin index) written as $X$.
Expanding the exponent in the general expression for $iG(X,X')$ up to first order in the interaction and substituting our scalar electron–electron interaction, we find
$$iG(X,X') \approx \frac{\langle0_0|T\,\Psi_X\Psi_{X'}^\dagger|0_0\rangle + \left(-\frac{i}{2}\right)\int d1\,d2\,U(1-2)\,\langle0_0|T\,\Psi_1^\dagger\Psi_2^\dagger\Psi_2\Psi_1\Psi_X\Psi_{X'}^\dagger|0_0\rangle}{1 + \left(-\frac{i}{2}\right)\int d1\,d2\,U(1-2)\,\langle0_0|T\,\Psi_1^\dagger\Psi_2^\dagger\Psi_2\Psi_1|0_0\rangle}. \qquad (2.62)$$
This was step one. In step two, the average of the six operators in the numerator
and that of the four operators in the denominator must be evaluated using Wick’s
theorem.
The expression $\langle0_0|T\,\Psi_1^\dagger\Psi_2^\dagger\Psi_2\Psi_1\Psi_X\Psi_{X'}^\dagger|0_0\rangle$ can be fully contracted in six different ways. For example, pairing $\Psi_1^\dagger$ with $\Psi_1$, $\Psi_2^\dagger$ with $\Psi_2$, and $\Psi_X$ with $\Psi_{X'}^\dagger$ gives $n_0(1)\,n_0(2)\,iG^0(X,X')$. Here $n_0(1) = \langle\Psi_1^\dagger\Psi_1\rangle_0$ is simply the unperturbed electronic density in the system. As you see, in this term the "probe" particle (traveling from $X'$ to $X$) is decoupled—disconnected—from the rest of the system, that is, does not interact with it.
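The count of six full contractions is just the number of ways to pair the three annihilation operators with the three creation operators; a two-line enumeration (the operator names below are mere labels):

```python
from itertools import permutations

# the three annihilation and three creation operators in
# <0|T Psi1+ Psi2+ Psi2 Psi1 Psi_X Psi_X'+ |0>
annihilation = ["Psi_2", "Psi_1", "Psi_X"]
creation = ["Psi_1^+", "Psi_2^+", "Psi_X'^+"]

# a full contraction pairs every annihilation operator with a creation one
contractions = [list(zip(annihilation, perm)) for perm in permutations(creation)]
for c in contractions:
    print(c)
print(len(contractions))   # 6
```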
Another example: the pairing that gives
$$-\,iG^0(1,2)\,iG^0(2,1)\,iG^0(X,X')$$
also produces a disconnected term, with a minus sign, because in order to put together the field operators to form the corresponding pairings we had to interchange an odd number of Fermi operators.
Still another choice gives us at last a connected term, where the probe particle interacts with the system:
$$iG^0(X,2)\,iG^0(2,1)\,iG^0(1,X').$$
All this is rather boring, but if we mark all the elements of the above expression—points where interaction occurs, unperturbed Green's functions, etc.—as suggested in Table 2.1, we can draw Green's function to first order in the interaction as shown
in Fig. 2.8. The resulting diagrams are like the ones we obtained earlier for the
one-particle propagator. Once again, the probe particle interacts with the particles
in the system, and propagates freely between those acts of scattering. But now the
“background” particles interact with each other, and this is expressed in the structure
of the graphs, which is now far richer.
You see that the first two terms in the numerator are indeed disconnected; they
literally fall apart. On the other hand, the remaining four terms are connected, and
they show that the probe particle is scattered by (that is, interacts with) the other particles
in the system.
By definition, the connected diagrams are the ones that do not contain parts dis-
connected from the external ends, that is, the coordinates of the “external” particle
(in our case, external ends are X, X √ ). Only the external ends of the diagram carry
significant coordinates (spins, etc.), the ones that actually appear as the arguments of
the exact Green’s function that we wish to calculate. All the rest are dummy labels,
because there will be integrations (summations) over them. Of course, it does not
matter how we denote a dummy variable, and all the diagrams that differ only by the dummy labels are the same.
Here we see that connected terms contain integrations over the dummy variables
1 and 2. Therefore, of four connected terms there are only two that are different, and
we can get rid of the factor one half before them.
Within the same accuracy, we can factor the numerator (neglecting the higher-
order terms in the interaction), and see that the denominator actually cancels the
disconnected terms from the numerator!
This observation is actually a strict mathematical statement, and since the proof is very simple and general, let us prove it.
All disconnected diagrams appearing in the perturbation series for the Green’s func-
tion exactly cancel from its numerator and denominator. Therefore Green’s function
is expressed as a sum over all connected diagrams.
No need to specify the interaction term, W. Let us consider the νth-order term in the numerator of Green's function:
$$\frac{1}{\nu!}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\delta_{m+n,\nu}\,\frac{\nu!}{m!\,n!}\,(-i)^{m+n}\int_{-\infty}^{\infty}dt_1\cdots dt_m\,\langle0_0|T\,W(t_1)\cdots W(t_m)\,\Psi(X)\Psi^\dagger(X')|0_0\rangle_{\mathrm{connected}}$$
$$\times\int_{-\infty}^{\infty}dt_{m+1}\cdots dt_{m+n}\,\langle0_0|T\,W(t_{m+1})\cdots W(t_{m+n})|0_0\rangle.$$
$$\sum_{n=0}^{\infty}\frac{1}{n!}\,(-i)^n\int_{-\infty}^{\infty}dt_1\ldots dt_n\,\langle0_0|T\,W(t_1)\ldots W(t_n)|0_0\rangle = \langle0_0|\,T\exp\left(-i\int_{-\infty}^{\infty}dt\,W(t)\right)|0_0\rangle,$$
that is, the denominator of the expression for Green’s function! This contribution
cancels, which proves the theorem.
We see as well that in any connected term of mth order there will be exactly m! identical contributions due to rearrangements of $t_1,\dots,t_m$ in $\langle0_0|T\,W(t_1)\cdots W(t_m)\,\Psi(X)\Psi^\dagger(X')|0_0\rangle_{\mathrm{connected}}$. This cancels the $1/m!$ factors and allows us to deal only with topologically different graphs. An example (Fig. 2.9): these two second-order diagrams are the same diagram, because they differ only by the labels of the interaction lines, $12 \leftrightarrow 34$. Returning to our specific form of the interaction, we will see that in our case there is also a $2^{-n}$ factor associated with an nth-order diagram, due to the one-half in the two-particle interaction term. This factor also cancels, this time because we don't distinguish between the two ends of the interaction line. (As we said, only the labels of external ends matter. The rest are just dummy integration variables!) Then we finally come to
Thus the energy (frequency) and momentum (wave vector) are conserved in each
vertex. The physical reason for this is clear: each vertex of the diagram describes a
scattering process. The Hamiltonian of our problem (which describes such scattering)
is spatially uniform and time independent, which in agreement with general principles
yields momentum and energy conservation.
Besides scalar electron–electron interaction, another important interaction in
solid-state systems is electron–phonon interaction. We will not derive here the corre-
sponding Hamiltonian in terms of electron and phonon field operators: this is rather
Table 2.2 Feynman rules for scalar electron–electron interaction (momentum representation)
a subject for a course in solid-state physics. It is enough for us to note that the electron–phonon interaction is described by terms in the Hamiltonian proportional to $\psi^\dagger(X)\psi(X)\varphi(X)$ (this expression is Hermitian, since the phonon operator $\varphi$ (as we defined it) is real). It is clear then that only even-order terms in the electron–phonon interaction enter the perturbation expansion, because otherwise there would be unpaired phonon operators, giving zero vacuum average. In the even-order terms, phonon operators pair to form unperturbed phonon Green's functions (propagators) $D^0(k,\omega)$.
The definition of the vertex and of the phonon propagator depends on convention;
we give here for your convenience the rules used in two basic monographs on the
subject. The following discussion will not actually depend on such details, but each
time you perform or follow specific calculations, it pays to check all the conventions
beforehand.
This is very different from the diagrammatic series for the grand potential, $\Omega$, where a factor $1/n$ in each nth-order diagram prohibits such a partial summation of the diagram series (Table 2.3).
The idea of this summation is simple and mathematically shaky. Suppose we have a diagram containing an unperturbed Green's function as an inner line. In the infinite series for Green's function there is an infinite subset of diagrams which include all possible corrections to this inner line. Due to
Fig. 2.11 Self energy diagrams: a self energy parts, b irreducible self energy parts, c proper self
energy
the fact that there is no explicit dependence of the expression on the order of the
diagram, we can forget about everything that lies beyond these interaction points and
concentrate on the inside of the graph. The corrections here should transform the thin line (unperturbed Green's function, $G^0$) into a solid line (exact Green's function, $G$) in the same way as the whole series gives the exact Green's function.
We have partially summed the diagram series for Green’s function!
This is not yet a victory, though. First, the summation of this sort still gives
us an equation: a self-consistent equation for the exact Green’s function, usually
a nonlinear integral or integro-differential one. To solve it would be really tough!
Second, there is absolutely no guarantee that this equation is correct. Indeed, we
know from mathematics that only for a very restricted class of convergent series
(absolutely convergent) the sum is independent of the order of the terms. What we
have done here is to redistribute the terms of the perturbation series, about which
we even do not know (and usually cannot know) whether it converges at all! The
justification here comes from the results: if they are wrong, then something is wrong
in our way of partial summation (evidently, there are many, and each is approximate,
since some classes of diagrams are neglected). Or maybe something funny occurs
to the system, and this is already useful information. We will meet such a case later,
when discussing application of the theory to superconductivity. In most cases the
results are right if the partial summation is made taking into account the physics of
the problem. Usually we can show, with physical if not mathematical rigor, that a
certain class of diagrams is more important than the others, and therefore the result
of its summation reflects essential properties of the system.
To approach such partial summations systematically, let us make some definitions.
A self energy part is any part of the diagram connected to the rest of it only by two particle lines (Fig. 2.11).
The irreducible, or proper, self energy part is one that cannot be separated by breaking one particle line, like the one in Fig. 2.11b.
Finally, the proper self energy, or self energy par excellence, or mass operator, is the sum of all possible irreducible self energy parts and is denoted by $\Sigma_{\sigma\sigma'}(X,X')$. The name is given for historical field-theoretical reasons, and its meaning will become clear a little later.
It is convenient to include a $(-i)$ factor into the definition (Fig. 2.11). Then the series for Green's function can be read and drawn as follows (Fig. 2.12):
$$iG = iG^0 + iG^0\,\Sigma\,G^0 + iG^0\,\Sigma\,G^0\,\Sigma\,G^0 + \cdots \qquad (2.63)$$
Here the terms in the infinite series are redistributed in such a way as to make it a simple series (a geometric progression!) over the powers of the self energy and the unperturbed Green's function only. (Of course, all necessary integrations and matrix multiplications with respect to spin indices are implied, so that this is an operator series.)
Separating the $iG^0$ factor, we obtain the celebrated Dyson's equation (see Fig. 2.12), which is exactly of the self-consistent form we anticipated. (Of course, we could take $iG^0$ from the other side, and get $G(P) = G^0(P) + G(P)\,\Sigma(P)\,G^0(P)$.) In a homogeneous, stationary, and nonmagnetic system (the last condition means that $G$ and $\Sigma$ are diagonal with respect to spin indices) we can make a Fourier transformation, reducing the above equation to $G(P) = G^0(P) + G^0(P)\,\Sigma(P)\,G(P)$. Then we see that
$$G(\mathbf{p},\omega) = \left[(G^0(\mathbf{p},\omega))^{-1} - \Sigma(\mathbf{p},\omega)\right]^{-1} = \frac{1}{\omega - \epsilon(\mathbf{p}) + \mu - \Sigma(\mathbf{p},\omega)}. \qquad (2.65)$$
$$G = \left[\,i\frac{\partial}{\partial t} - \hat{E} - \Sigma\,\right]^{-1}. \qquad (2.66)$$
The latter equation holds even if $G$ and $\Sigma$ are nondiagonal (e.g., in the nonhomogeneous case), understanding $[\dots]^{-1}$ as an inverse operator.
An important feature of (2.65) is that if we substitute there some finite-order
approximation for the self energy, the resulting approximation for G will be equiv-
alent to calculating an infinite subseries of the perturbation series, and this gives a
much better result than the simple-minded calculation of the initial series term by
term.
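The "infinite subseries" point can be illustrated with a toy single-point Dyson equation: with scalar stand-ins for $1/G^0$ and $\Sigma$ (the numbers below are arbitrary), the closed form (2.65) is the exact sum of the geometric series (2.63), which a term-by-term truncation only approaches:

```python
# single-(p, ω)-point toy model: 1/G0 = ω - ε(p) + μ and Σ are just numbers
inv_g0 = 2.0        # arbitrary
sigma = 0.8         # arbitrary constant self energy

g_exact = 1.0 / (inv_g0 - sigma)   # closed form, Eq. (2.65)

g0 = 1.0 / inv_g0
partial, term = 0.0, g0
for n in range(6):                 # G0 + G0 Σ G0 + G0 Σ G0 Σ G0 + ...
    partial += term
    term *= sigma * g0
    print(n, partial)

print(g_exact)   # 0.8333...; the truncated series only creeps toward it
```

Substituting even a crude approximation for Σ into (2.65) thus resums the whole geometric progression at once.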
This is a natural consequence of a self-consistent approach. Another, less pleasant, one is that any approximate self energy must be checked, lest it violate the general analytic properties of Green's function (which follow from the general causality principle and should not be toyed with). Returning to the simple case (2.65) and recalling the Källén–Lehmann representation, we see that necessarily
$$\Im\Sigma(\mathbf{p},\omega) \ge 0,\ \omega < 0; \qquad \Im\Sigma(\mathbf{p},\omega) \le 0,\ \omega > 0. \qquad (2.67)$$
We see as well that $\Im\Sigma$ determines the inverse lifetime of the elementary excitation, while $\Re\Sigma$ defines the change of the dispersion law due to the interaction. (In quantum field theory this leads to the change of the particle mass, which is why $\Sigma$ is also called the mass operator.)
Following the same approach, we can consider insertions into the interaction line as well, like those shown in Fig. 2.13.
A polarization insertion is a part of the diagram that is connected to the rest of it only by two interaction lines. An irreducible polarization insertion is one that cannot be separated by breaking a single interaction line. Finally, the polarization operator, $\Pi$, is the sum of all irreducible polarization insertions, and is a direct analogue of the self energy.
Since there is a $(-i)$ factor in the definition of the interaction line, it is convenient to introduce an $(i)$ factor into the polarization operator. The analogue of Dyson's equation is readily obtained and reads (see Fig. 2.14)
$$U_{\mathrm{eff}}(\mathbf{p},\omega) \equiv \frac{U(\mathbf{p},\omega)}{\varepsilon(\mathbf{p},\omega)} = \frac{U(\mathbf{p},\omega)}{1 - U(\mathbf{p},\omega)\,\Pi(\mathbf{p},\omega)}, \qquad (2.69)$$
The Thomas–Fermi result concerning the screening of the Coulomb potential by the charged Fermi gas can be reproduced if we use the random phase approximation (RPA), which here means taking the lowest-order term in the polarization operator:
$$i\Pi^0(\mathbf{p},\omega) = 2\int\frac{d^3q\,d\zeta}{(2\pi)^4}\,G^0(\mathbf{p}+\mathbf{q},\omega+\zeta)\,G^0(\mathbf{q},\zeta). \qquad (2.70)$$
The calculations give the following result for the static screening:
$$\Re\Pi^0(\mathbf{p},0) = -\frac{mp_F}{2\pi^2}\left(1 + \frac{p_F^2 - p^2/4}{p_F\,p}\,\ln\left|\frac{p_F + p/2}{p_F - p/2}\right|\right); \qquad (2.71)$$
$$\Im\Pi^0(\mathbf{p},0) = 0. \qquad (2.72)$$
In the long-wavelength limit, $p \to 0$,
$$\Pi^0 \to -2N(\mu),$$
where $N(\mu) \equiv \frac{mp_F}{2\pi^2}$ is the density of states on the Fermi surface. Thus the Fourier
transform of the interaction is
$$U_{\mathrm{eff}}(\mathbf{q}) \to \frac{4\pi e^2/q^2}{1 + 2N(\mu)\,4\pi e^2/q^2} = \frac{4\pi e^2}{q^2 + 8\pi e^2 N(\mu)}. \qquad (2.73)$$
The quantity
$$q_{TF}^2 = 8\pi e^2 N(\mu)$$
is the squared Thomas–Fermi wave vector, and the potential indeed takes the Yukawa form:
$$U_{\mathrm{eff}}(r) = \frac{e^2}{r}\,\exp(-q_{TF}\,r). \qquad (2.74)$$
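A minimal numerical sketch of (2.73)–(2.74), in units ħ = e = m = 1 with an illustrative $p_F = 1$ (not any particular material):

```python
import math

# units ħ = e = m = 1, with an illustrative Fermi momentum (no real material)
p_F = 1.0
N_mu = p_F / (2 * math.pi ** 2)        # N(μ) = m p_F / 2π²
q_TF2 = 8 * math.pi * N_mu             # q_TF² = 8π e² N(μ)
q_TF = math.sqrt(q_TF2)

def U_bare(q):  return 4 * math.pi / q ** 2
def U_eff(q):   return 4 * math.pi / (q ** 2 + q_TF2)   # Eq. (2.73)
def U_eff_r(r): return math.exp(-q_TF * r) / r          # Eq. (2.74)

print(q_TF)                                 # screening wave vector ≈ 1.13
print(U_eff(10.0) / U_bare(10.0))           # close to 1: large q unscreened
print(U_eff(0.01) * q_TF2 / (4 * math.pi))  # close to 1: q→0 limit 4πe²/q_TF²
```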
Thus, the presence of other charged particles leads to screening of the initially long-range Coulomb interaction, limiting it to a finite Thomas–Fermi radius. How this happens is graphically clear from the simplest polarization diagram. The interaction creates a virtual electron–hole pair. (Virtual, of course, because the energy–momentum relation for every internal line of a diagram is violated: we integrate over all energies and all momenta independently! For a real particle, $E = \frac{p^2}{2m}$ or something like this.) The approximation we used included only independent events of such virtual electron–hole creation: because the energy and momentum along the interaction line are conserved, the quantum-mechanical phase of the electron–hole pair is immediately lost and does not affect the next virtual pair. This is the reason it is called RPA, the random phase approximation (Fig. 2.15). As we discussed in the very beginning of the
book, this kind of approach works well if there is a large number of particles within
the interaction radius: then indeed it is much more probable to interact with two
different particles consecutively than with the same one twice. In the opposite case,
when the density of particles is low, RPA naturally fails, while the ladder approxi-
mation is relevant: Here a virtual pair (quasiparticle–quasihole) interacts repeatedly
before disappearing (Fig. 2.16). This is again reasonable, because when density is
low, it is improbable to find some other quasiparticle close at hand to interact with.
Unfortunately, on our path to the Thomas-Fermi screening, Eq. (2.74), from the
random phase approximation, Eqs. (2.71), (2.72), we made one simplification too
many when we replaced the static polarization $\Pi^0(\mathbf{p},0)$ with its value at $p = 0$. The logarithmic term in (2.71) is non-analytic at $p = 2p_F$, and—as it turns out—it produces instead of the exponential screening (2.74) a qualitatively different potential,
which far away from the charge behaves as
$$U_{\mathrm{eff}}(r) \propto \frac{e^2}{r^3}\cos(2p_F r).$$
Not only does it not fall off exponentially, but it also demonstrates Friedel oscillations. Both effects are due to the sharp step of the Fermi distribution function at $T = 0$, which produced the non-analytic term in (2.71) in the first place (see Appendix A). At finite temperature the step is smeared, and the above expression is multiplied by an exponentially decaying factor, thus reverting to Yukawa-type screening with Friedel oscillations superimposed.
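The source of the trouble is visible directly in (2.71): at $p = 2p_F$ the polarization itself is finite, but its slope diverges logarithmically. A small numerical probe (same illustrative units as above, $p_F = 1$):

```python
import math

m = p_F = 1.0                        # illustrative units, ħ = 1
N_mu = m * p_F / (2 * math.pi ** 2)

def re_pi0(p):
    # static polarization, Eq. (2.71)
    x = (p_F ** 2 - p ** 2 / 4) / (p_F * p)
    return -N_mu * (1 + x * math.log(abs((p_F + p / 2) / (p_F - p / 2))))

# the value at p = 2 p_F is finite ...
print(re_pi0(2 * p_F - 1e-9))        # ≈ -N_mu ≈ -0.0507
# ... but the slope grows without bound as we close in on p = 2 p_F
for eps in (1e-2, 1e-4, 1e-6):
    slope = (re_pi0(2 * p_F + eps) - re_pi0(2 * p_F - eps)) / (2 * eps)
    print(eps, slope)
```

It is this logarithmic kink in momentum space that Fourier-transforms into the $\cos(2p_F r)/r^3$ tail.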
We have seen that Green’s functions give a convenient apparatus for a description of
many-body systems. So far we have used a one-particle Green’s function, dealing
with a single quasiparticle excitation, though against the many-body background.
They don’t apply, e.g., to the case of the bound state of two such excitations. Indeed,
in a Fermi system such a state would be a boson, while the one-particle Green’s
function describes a fermion.
This problem can be easily solved. Nobody limits us to consideration of averages $\langle\psi\psi^\dagger\rangle$ only. The "Schrödinger equation" for $G(X,X')$ included terms $\langle\psi\psi\psi^\dagger\psi^\dagger\rangle$.
Therefore it is natural to introduce n-particle Green’s functions. (As usual, there is
no common convention here, so when reading a chapter be careful what definition
is actually used.)
The n-particle (or 2n-point) Green's function (Fig. 2.17) is defined as follows:
$$G^{(n)}_{\sigma_1\sigma_2\ldots\sigma_n,\,\sigma'_1\sigma'_2\ldots\sigma'_n}(x_1t_1,x_2t_2,\ldots,x_nt_n;\,x'_1t'_1,x'_2t'_2,\ldots,x'_nt'_n) \equiv G^{(n)}(12\cdots n;1'2'\cdots n')$$
$$= \frac{1}{i^n}\,\langle T\,\psi(1)\psi(2)\cdots\psi(n)\,\psi^\dagger(n')\cdots\psi^\dagger(2')\psi^\dagger(1')\rangle. \qquad (2.75)$$
The rules of drawing and decoding Feynman diagrams stay intact and can be easily derived from the expansion of the S-operator in the average $\langle T\cdots\psi^\dagger\psi^\dagger\rangle$. There is only one additional rule: the diagram is multiplied by $(-1)^S$, where $S$ is the parity of the permutation of the fermion lines' ends $(1'2'\cdots n') \leftrightarrow (12\cdots n)$ (see Fig. 2.18).
The origin of this rule is easy to see applying Wick’s theorem to the lowest order
expression for the two-particle Green’s function:
The cancellation theorem removes only the diagrams with loose parts discon-
nected from the external ends. This means that not every diagram looking discon-
nected is actually disconnected! For example, the diagrams corresponding to (2.76)
(see the two diagrams in Fig. 2.19) are not disconnected and are not canceled. As
a matter of fact they provide a Hartree–Fock approximation for the two-particle
Green’s function (direct and exchange terms, as is evident from their structure).
The two-particle Green’s function is most widely used and therefore has its own
letter, K :
Fig. 2.20 Generalized Hartree–Fock approximation for the two-particle Green’s function
Fig. 2.21 Irreducible part of the two-particle Green’s function and the vertex function
All nontrivial effects of the interaction appear only in the vertex function: the "tails" are one-particle Green's functions and as such don't bring anything new. Therefore, we concentrate on the vertex function.
The discourse is simpler in the momentum representation, if we are (as usual) dealing with a stationary, spatially uniform system. Evidently, only three of the four sets of variables are independent (because a uniform shift of coordinates or times should not change anything). We choose the following set of independent combinations:
$$X_1 - X'_1,\quad X_2 - X'_2,\quad X'_1 - X_2. \qquad (2.80)$$
The Fourier transformation for any function of these four sets of variables is defined
by
Now we are ready to derive (for scalar electron–electron interaction, our standard guinea pig) an important general relation between the vertex function and the self energy. That such a relation should exist is reasonable, since both $\Gamma$ and $\Sigma$ have in common, besides being uppercase Greek letters, that they result from a summation of all somehow irreducible diagrams. First, we present a very graphic proof, which will then be supported by a more rigorous calculation (which, on the other hand, is only a translation of graphs into equations).
We start from writing down the equation of motion for the one-particle Green's function, in position space. As we observed much earlier, such an equation will contain the two-particle Green's function:
$$\left(i\frac{\partial}{\partial t} + \frac{\nabla_x^2}{2m} + \mu\right)G_{\sigma\sigma'}(X,X') = \delta_{\sigma\sigma'}\,\delta(X-X') - i\int d^4Y\,U(X-Y)\,K_{\sigma\beta,\sigma'\beta}(X,Y;X',Y) \qquad (2.83)$$
(we have made use of the definition $\langle T\,\Psi_\beta^\dagger(Y)\Psi_\beta(Y)\Psi_\sigma(X)\Psi_{\sigma'}^\dagger(X')\rangle \equiv K_{\sigma\beta,\sigma'\beta}(X,Y;X',Y)$). Since $G = G^0 + G^0\Sigma G$, the relation in question is indeed here, and we have only to extract it.
Graphically, it is simple: the equation can be symbolically written as $[iG^0]^{-1}\,iG = I - (-iU)(i^2K)$, that is,
The result is shown in Fig. 2.22. Notice that we used the sign convention specific to the $(n \ge 2)$-particle Green's functions in order to determine the signs of the first two terms on the right-hand side of Fig. 2.22: if "decoded" following the one-particle rules, they would lack a $(-1)$ factor due to the exchange of tails of the two-particle diagram.
In analytical form, this equation (sometimes called Dyson's equation, but less often than the Dyson equation we encountered earlier) reads
$$\Sigma(P)\,\delta_{\sigma\sigma'} = U(0)\,n(\mu)\,\delta_{\sigma\sigma'} + i\,\delta_{\sigma\sigma'}\int\frac{dP_1}{(2\pi)^4}\,U(P-P_1)\,G(P_1)$$
$$+ \int\frac{dP_1}{(2\pi)^4}\frac{dP_2}{(2\pi)^4}\,G(P_1)\,G(P_2)\,\Gamma_{\sigma\beta,\sigma'\beta}(P_1,P_2;P,P_1+P_2-P)\,G(P_1+P_2-P)\,U(P-P_1). \qquad (2.84)$$
Now let us derive it without graphs, or rather write down each step instead of drawing it. Again, assume a uniform, stationary, and isotropic system. Then, in momentum space, (2.83) looks like
$$\left[(G^0(P))^{-1}G(P) - 1\right]\delta_{\sigma\sigma'} = -i\int\frac{dP_1\,dP_2}{(2\pi)^8}\,K_{\sigma\beta,\sigma'\beta}(P_1,P_2;P,P_1+P_2-P)\,U(P-P_1).$$
(Here $(G^0(P))^{-1} \equiv \omega - \frac{p^2}{2m} + \mu$ is a function, not an operator, and simply equals $1/G^0(P)$.) Now substitute in this equation the definition (2.79) and divide by $G(P)$. After this messy operation we obtain
$$\left[1/G^0(P) - 1/G(P)\right]\delta_{\sigma\sigma'} = -\,i\,\delta_{\sigma\sigma'}\,U(0)\int\frac{dP_2}{(2\pi)^4}\,G(P_2)$$
$$\pm\, i\,\delta_{\sigma\sigma'}\int\frac{dP_1}{(2\pi)^4}\,U(P-P_1)\,G(P_1)$$
$$+ \int\frac{dP_1\,dP_2}{(2\pi)^8}\,\Gamma_{\beta\sigma,\beta\sigma'}(P_1,P_2;P,P_1+P_2-P)\,G(P_1)\,G(P_2)\,G(P_1+P_2-P)\,U(P-P_1).$$
Since by virtue of the Dyson equation $1/G^0(P) - 1/G(P) = \Sigma(P)$, we eventually recover Eq. (2.84). See how much easier it was with the diagrams? By the way,
the graphs immediately show the physical sense of this relation. The first two terms in
(2.84) give the self-consistent Hartree–Fock approximation with initial (bare) poten-
tial: they take into account the interaction of the test particle with the medium, and
with itself (exchange term). The rest must contain the effects of renormalization of
the interaction, and indeed, the third graph can be understood as containing the renor-
malized interaction vertex (Fig. 2.23). As you see, it contains, in particular, all the
polarization insertions in the interaction line. This is the reason we had a bare potential line in Fig. 2.22 and Eq. (2.84): otherwise certain diagrams would be included twice. In all operations with diagrams we must pay special attention to avoid double counting.
Fig. 2.25 Particle–particle irreducible vertex function and two-particle Green's function
Earlier we introduced the irreducible self energy as a sum of all diagrams that cannot be separated by severing one fermion line. Let us generalize this and introduce the particle–particle irreducible vertex function, $\tilde\Gamma^{(PP)}$, which includes all diagrams that cannot be separated by severing two fermion lines between the incoming and outgoing ends. (In Fig. 2.24 diagram (a) is particle–particle irreducible, but diagram (b) is not.)
Then the diagram series for the particle–particle irreducible vertex part (or the particle–particle irreducible two-particle Green's function, if we drop the external tails) can be drawn as in Fig. 2.25.
For the vertex function we thus obtain the Bethe–Salpeter equation, which is
a direct analogue of the Dyson equation for the one-particle Green’s function2
(Fig. 2.26):
² Of course, this equation can just as well be written for the two-particle Green's function itself, instead of the vertex function.
98 2 Green’s Functions at Zero Temperature
\[
\Gamma(12; 1'2') = \tilde{\Gamma}^{(PP)}(12; 1'2') + i \int d3\, d3'\, d4\, d4'\; \tilde{\Gamma}^{(PP)}(12; 34)\, G(33')\, G(44')\, \Gamma(3'4'; 1'2'). \tag{2.85}
\]
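To see the structure of the Bethe–Salpeter equation at a glance, it helps to strip away all momentum and spin structure. The following sketch is our own toy model, not an equation from the book: it treats the irreducible kernel and the two-particle "bubble" as plain numbers, so that the equation becomes algebraic and its solution visibly resums the ladder series.

```python
# Toy model (ours, not the book's): with a constant particle-particle irreducible
# kernel K and a fixed two-particle bubble Pi = i*int(G*G), the Bethe-Salpeter
# equation Gamma = K + K*Pi*Gamma becomes algebraic, and its solution is exactly
# the resummed ladder series K + K*Pi*K + K*Pi*K*Pi*K + ...
K = 0.3                      # hypothetical kernel strength
Pi = -1.5                    # hypothetical bubble value; |K*Pi| < 1 for convergence

gamma_exact = K / (1.0 - K * Pi)                      # closed-form solution
gamma_ladder = sum(K * (Pi * K) ** m for m in range(200))   # geometric (ladder) sum
print(gamma_exact, gamma_ladder)
```

The same geometric-series mechanism is what the diagrammatic iteration of Fig. 2.26 performs, except that there the "multiplication" is an integral operator.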
Two-particle functions allow for more possibilities: there are more loose ends
in a diagram! Thus, we also have a distinct particle–hole irreducible vertex, $\tilde{\Gamma}^{(PH)}$ (Fig. 2.27, where diagram (b) is now (particle–hole) irreducible, while diagram (a) is not). This yields another version of the Bethe–Salpeter equation (Fig. 2.28):
\[
\Gamma(12; 1'2') = \tilde{\Gamma}^{(PH)}(12; 1'2') + i \int d3\, d3'\, d4\, d4'\; \tilde{\Gamma}^{(PH)}(42; 4'2')\, G(43)\, G(3'4')\, \Gamma(13; 1'3'). \tag{2.86}
\]
Of course, the two versions are mathematically equivalent, but not physically: since there is little hope that either can be solved exactly, some approximations are in order, and we should choose, as usual, the version that makes the better starting point. The latter one, for example, proves useful for investigating processes with small momentum transfer between quasiparticles, but this is beyond the scope of this book.
2.3 Problems
• Problem 1
\[
\Omega = \int_0^{\mu} d\mu'\, (2iV) \int \frac{d\mathbf{p}\, d\omega}{(2\pi)^4}\, \lim_{t \to -0} e^{-i\omega t}\, G(\mathbf{p}, \omega),
\]
• Problem 3
Calculate the lowest order diagram for the polarization operator:
References
1. Abrikosov, A.A., Gorkov, L.P., Dzyaloshinski, I.E.: Methods of Quantum Field Theory in Statistical Physics, Ch. 2. Dover Publications, New York (1975). (An evergreen classic on the subject.)
2. Fetter, A.L., Walecka, J.D.: Quantum Theory of Many-Particle Systems. McGraw-Hill, San Francisco (1971)
3. Mahan, G.D.: Many-Particle Physics. Plenum Press, New York (1990). ([2] and [3] are high-level, very detailed monographs: the standard references on the subject.)
4. Lifshitz, E.M., Pitaevskii, L.P.: Statistical Physics, Pt. II (Landau and Lifshitz, Course of Theoretical Physics, Vol. IX), Ch. 2. Pergamon Press, New York (1980). (A comprehensive but very compressed account of the zero-temperature Green's function techniques.)
5. Mattuck, R.: A Guide to Feynman Diagrams in the Many-Body Problem. McGraw-Hill, New York (1976). (Green's function techniques are presented in a very instructive and intuitive way.)
6. Nussenzveig, H.M.: Causality and Dispersion Relations. Academic Press, New York (1972). (A very good book for the mathematically inclined reader.)
7. Thouless, D.J.: Quantum Mechanics of Many-Body Systems. Academic Press, New York (1972)
8. Ziman, J.M.: Elements of Advanced Quantum Theory, Ch. 3–4. Cambridge University Press, Cambridge (1969)
Chapter 3
More Green’s Functions, Equilibrium
and Otherwise, and Their Applications
The formalism we have developed so far is limited to zero temperature (i.e., to the
ground state) properties of many-body systems. As you remember, this is because
the ground state is always nondegenerate, so that we could pull off the trick with the adiabatic hypothesis: if you slowly turn the interactions on and then off, the worst that can happen is some phase factor, which cancels anyway. This, in turn, allowed us to
build up the diagrammatic technique.
Physically, it is rather awkward to be confined to the case of T = 0. In principle,
the average in the definition of Green’s function could be taken over any quantum
state, or set of states, and we would be able at least to determine its analytic properties,
following the same steps as at T = 0. For example, we can define equilibrium Green’s
functions at finite temperature. Moreover, it turns out that diagram techniques exist
that can be used to actually calculate such Green’s functions. In this chapter, we will
discuss how and why this can be done.
Here $W_m$ is the probability of finding the system in the quantum state $|\Psi_m\rangle$; evidently,
\[
\sum_m W_m = 1. \tag{3.2}
\]
(In a mixed state we have to do the averaging twice: first over each constituent quantum state, and then over the set of these states with weights $W_m$; the trace with the statistical operator in the above formula takes care of both.) Equation (3.2) ensures that the probabilities add up to one, that is, unitarity.
If we choose some orthonormal basis, $\{|n\rangle\}$, then the statistical operator can be rewritten as follows:
\[
\hat{\rho} = \sum_{n, n'} |n\rangle\, \rho_{nn'}\, \langle n'|; \tag{3.4}
\]
\[
\rho_{nn'} = \sum_m W_m\, \langle n|\Psi_m\rangle \langle \Psi_m|n'\rangle. \tag{3.5}
\]
In this form the statistical operator (as a set of matrix elements $\{\rho_{nn'}\}$) is often called the density matrix.¹
3.1 Analytic Properties of Equilibrium Green's Functions
The diagonal elements of the density matrix, $\rho_{nn} \geq 0$, give the probabilities of finding the system in the state $|n\rangle$, while the off-diagonal terms describe the quantum correlations between different states.
Useful properties of the trace of the statistical operator follow directly from its definition:
\[
\operatorname{tr}\hat{\rho} = 1; \tag{3.6}
\]
\[
\operatorname{tr}\hat{\rho}^2 \leq \left(\operatorname{tr}\hat{\rho}\right)^2 \tag{3.7}
\]
(the equality is achieved if and only if the system is in a pure state). The former equality ensures probability conservation and follows directly from (3.2): the trace of a matrix (or an operator) is invariant under unitary transformations of coordinates, and since in one special basis it equals one (3.2), so it will for any choice of basis set of states.
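These trace properties are easy to verify numerically. A minimal sketch follows (our own example; the two-level states, weights, and rotation angle are invented for illustration):

```python
import numpy as np

# A minimal numerical sketch (invented states and weights): the trace
# properties (3.6) and (3.7), and the unitary invariance of the trace.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rho_pure = np.outer(up, up)                                    # pure state |up><up|
rho_mixed = 0.75 * np.outer(up, up) + 0.25 * np.outer(down, down)

print(np.trace(rho_pure), np.trace(rho_pure @ rho_pure))       # 1.0 and 1.0
print(np.trace(rho_mixed), np.trace(rho_mixed @ rho_mixed))    # 1.0 and 0.625

theta = 0.7                                                    # arbitrary rotation
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.trace(U @ rho_mixed @ U.T))                           # still approximately 1.0
```

Note that the purity $\operatorname{tr}\hat{\rho}^2$ drops below one exactly when the weights are genuinely mixed, while the trace itself survives any change of basis.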
The time evolution of the statistical operator can be determined if we write it in the form Eq. (3.1) and recall that $|\Psi(t)\rangle = \mathcal{U}(t)|\Psi(0)\rangle$:
\[
\hat{\rho}(t) = \sum_m |\Psi_m(t)\rangle\, W_m\, \langle \Psi_m(t)| = \mathcal{U}(t)\, \hat{\rho}(0)\, \mathcal{U}^{\dagger}(t). \tag{3.8}
\]
Therefore, the statistical operator satisfies the Liouville equation (called so because it is a direct analogue of the classical Liouville equation for the distribution function):
\[
i\,\dot{\hat{\rho}}(t) = \left[\mathcal{H}(t), \hat{\rho}(t)\right]. \tag{3.9}
\]
The general definitions of the causal, retarded, and advanced one-particle Green’s
functions,
¹ In the case of a single particle we can take the basis set of coordinate eigenfunctions $\{|x\rangle\}$, so that $\langle n|\Psi_m\rangle \to \langle x|\Psi_m\rangle = \Psi_m(x)$, and the result takes the familiar form
\[
\rho(x, x') = \sum_m W_m\, \Psi_m(x)\, \Psi^{*}_m(x').
\]
\[
G_{\alpha\beta}(x_1, t_1; x_2, t_2) = -i\,\operatorname{tr}\left[\hat{\rho}\; T\, \psi_{\alpha}(x_1, t_1)\, \psi^{\dagger}_{\beta}(x_2, t_2)\right], \tag{3.10}
\]
\[
G^{R}_{\alpha\beta}(x_1, t_1; x_2, t_2) = -i\,\operatorname{tr}\left[\hat{\rho}\left(\psi_{\alpha}(x_1, t_1)\, \psi^{\dagger}_{\beta}(x_2, t_2) \pm \psi^{\dagger}_{\beta}(x_2, t_2)\, \psi_{\alpha}(x_1, t_1)\right)\right]\theta(t_1 - t_2), \tag{3.11}
\]
\[
G^{A}_{\alpha\beta}(x_1, t_1; x_2, t_2) = +i\,\operatorname{tr}\left[\hat{\rho}\left(\psi_{\alpha}(x_1, t_1)\, \psi^{\dagger}_{\beta}(x_2, t_2) \pm \psi^{\dagger}_{\beta}(x_2, t_2)\, \psi_{\alpha}(x_1, t_1)\right)\right]\theta(t_2 - t_1), \tag{3.12}
\]
are, of course, valid for the equilibrium state at finite temperature, when the statistical operator has the standard Gibbs form,
\[
\hat{\rho} = e^{-\beta(\mathcal{H} - \mu\hat{N} - \Omega)} = \sum_s e^{-\beta(E_s - \mu N_s - \Omega)}\, |s\rangle\langle s| = \sum_s \rho_s\, |s\rangle\langle s|, \qquad \beta \equiv \frac{1}{k_B T}. \tag{3.13}
\]
We choose the basis set of common energy and particle-number eigenstates, $|s\rangle$, and work, as usual, with the grand canonical ensemble. Therefore, the statistical operator is diagonal, $\rho_{ss'} \equiv \delta_{ss'}\, \rho_s = \delta_{ss'}\, e^{\beta(\Omega - E_s + \mu N_s)}$. The normalization factor, $e^{-\beta\Omega} = \operatorname{tr}\, e^{-\beta(\mathcal{H} - \mu\hat{N})}$, contains the grand potential, $\Omega$.
In the isotropic uniform case, of course,
Now we will quickly repeat, mutatis mutandis, the calculations we made when discussing the analytic properties of zero-temperature Green's functions. The difference is that we now need the matrix elements of field operators between all states of the system, $\langle s|\psi(X)|s'\rangle$, because now all excited states enter with nonzero weight. As a result, we obtain for the causal Green's function
\[
G(\mathbf{p}, \omega) = \frac{1}{2}(2\pi)^3 \sum_{m,n} \rho_n\, A_{mn}\, \delta(\mathbf{p} - \mathbf{P}_{mn}) \left[\frac{1}{\omega - \omega_{mn} + i0} \pm \frac{e^{-\beta\omega_{mn}}}{\omega - \omega_{mn} - i0}\right]; \tag{3.14}
\]
\[
A_{mn} = \sum_{\alpha} \left|\langle n|\psi_{\alpha}|m\rangle\right|^2; \tag{3.15}
\]
\[
\omega_{mn} = E_m - \mu N_m - (E_n - \mu N_n). \tag{3.16}
\]
Separating the real and imaginary parts of (3.14) at real frequencies (using the ubiquitous Weierstrass formula (2.31)), we find
\[
\operatorname{Re} G(\mathbf{p}, \omega) = \frac{1}{2}(2\pi)^3\, \mathcal{P} \sum_{m,n} \rho_n\, A_{mn}\, \delta(\mathbf{p} - \mathbf{P}_{mn})\, \left(1 \pm e^{-\beta\omega_{mn}}\right) \frac{1}{\omega - \omega_{mn}}, \tag{3.17}
\]
\[
\operatorname{Im} G(\mathbf{p}, \omega) = -\frac{\pi}{2}(2\pi)^3 \sum_{m,n} \rho_n\, A_{mn}\, \delta(\mathbf{p} - \mathbf{P}_{mn})\, \left(1 \mp e^{-\beta\omega_{mn}}\right) \delta(\omega - \omega_{mn}). \tag{3.18}
\]
On the other hand, for the retarded and advanced Green's functions we obtain by the same method
\[
G^{R}(\mathbf{p}, \omega) = \frac{1}{2}(2\pi)^3 \sum_{m,n} \rho_n\, A_{mn}\, \delta(\mathbf{p} - \mathbf{P}_{mn})\, \frac{1 \pm e^{-\beta\omega_{mn}}}{\omega - \omega_{mn} + i0}; \tag{3.19}
\]
\[
G^{A}(\mathbf{p}, \omega) = \frac{1}{2}(2\pi)^3 \sum_{m,n} \rho_n\, A_{mn}\, \delta(\mathbf{p} - \mathbf{P}_{mn})\, \frac{1 \pm e^{-\beta\omega_{mn}}}{\omega - \omega_{mn} - i0}. \tag{3.20}
\]
\[
G^{R,A}(\mathbf{p}, \omega) = \int_{-\infty}^{\infty} \frac{d\omega'}{\pi}\, \frac{\rho^{R,A}(\mathbf{p}, \omega')}{\omega' - \omega \mp i0}; \tag{3.21}
\]
Relation (3.25) allows us to find $G^{R,A}(\omega)$ if we know $G(\omega)$. Note that the latter is not an analytic function, so that now the quasiparticle excitations are instead defined by the poles of $G^{R,A}(\omega)$ in the lower (upper) half-plane of complex frequency, respectively.
This comes as no surprise, since we already know that these two Green’s functions
have direct physical meaning. They are involved, e.g., in calculations of the kinetic
properties of the system in linear response theory, which we will consider later. But
since there is no regular perturbation theory to calculate G R,A directly, we will use
an easy detour. There is a regular way to find the causal Green’s function (the so
called Matsubara formalism), after which retarded and advanced Green’s functions
can be directly obtained with the help of (3.25).
We still have a safeguard against mistakes that can be caused by inadequate approximations: the Kramers–Kronig relations, which, of course, hold at any temperature (as does causality itself):
\[
\operatorname{Re} G^{R,A}(\mathbf{p}, \omega) = \pm\, \mathcal{P} \int_{-\infty}^{\infty} \frac{d\omega'}{\pi}\, \frac{\operatorname{Im} G^{R,A}(\mathbf{p}, \omega')}{\omega' - \omega},
\]
provided that
\[
G(\omega),\; G^{R,A}(\omega)\Big|_{|\omega| \to \infty} \sim \frac{1}{\omega}.
\]
\[
\rho^{R}(\mathbf{p}, \omega) = \operatorname{Im} G^{R}(\mathbf{p}, \omega) \equiv -\frac{1}{2}\,\Gamma(\mathbf{p}, \omega). \tag{3.26}
\]
The latter function, $\Gamma(\mathbf{p}, \omega)$, is also frequently called the spectral density, which (hopefully) will not lead to any confusion.
Here is another very useful safeguard: the sum rule for the spectral density $\Gamma$ (here is the opportunity not to get confused!),
\[
\int \frac{d\omega}{2\pi}\, \Gamma(\mathbf{p}, \omega) = 1. \tag{3.27}
\]
Indeed,
\[
\Gamma(\mathbf{p}, \omega) = \frac{1}{2}(2\pi)^4 \sum_{m,n} \rho_n\, A_{mn}\, \left(1 \pm e^{-\beta\omega_{mn}}\right) \delta(\mathbf{p} - \mathbf{P}_{mn})\, \delta(\omega - \omega_{mn}), \tag{3.28}
\]
and we can integrate over frequency and then roll the calculations back to the canonical commutation relations between field operators, in the same manner as we did when we calculated the $1/\omega$ asymptotics of the Green's functions.
What is the physical meaning of this formula? $\Gamma(\mathbf{p}, \omega)$ gives the probability that a quasiparticle with energy $\omega$ has momentum $\mathbf{p}$ (or vice versa). We have already discussed that due to interactions there is always some momentum and energy exchange between particles, broadening the $\delta(\varepsilon_{\mathbf{p}} - \mu - \omega)$ peak of the noninteracting system (Fig. 3.1). Since a quasiparticle must have some energy at a given momentum, the integral (with appropriate normalization) must yield unity. Which it does, as we have seen.
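The sum rule is easy to check on a model spectral density. In the sketch below (our own illustration; the Lorentzian shape and all numbers are invented) the sharp quasiparticle peak is broadened into a Lorentzian of width gamma, mimicking the interaction-induced broadening of Fig. 3.1, and the normalization survives:

```python
import numpy as np

# Model input (ours): broaden the quasiparticle peak into a Lorentzian,
# Gamma(p, w) = 2*gamma/((w - xi)**2 + gamma**2),
# and check the sum rule  integral dw/(2*pi) Gamma(p, w) = 1  numerically.
xi, gamma = 0.7, 0.05                      # peak position and width, arbitrary units
w = np.linspace(-400.0, 400.0, 2_000_001)
dw = w[1] - w[0]
spectral = 2.0 * gamma / ((w - xi) ** 2 + gamma ** 2)
total = spectral.sum() * dw / (2.0 * np.pi)
print(total)    # close to 1; the tiny residual comes from the cut-off tails
```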
Unperturbed Green’s functions can be easily calculated directly from the definition.
Here it is easier, though, to calculate retarded and advanced Green’s functions first,
and then obtain the causal Green’s function from (3.25). If you perform this useful
exercise, you will find
1
G R,A(0) (p, ω) = ; (3.29)
ω − θp + μ ± i0
1
G (0) (p, ω) = P
ω − θp + μ
⎧
⎨tanh εω Fermi statistics,
− i∂δ(ω − θp + μ) 2 (3.30)
⎩coth εω Bose statistics,
2
Here $c^{\dagger}_{\mathbf{p}[\alpha]}, c_{\mathbf{p}[\alpha]}$ are the Fermi (Bose) creation/annihilation operators. Note also a useful relation,
\[
\left\langle c^{\dagger}_{\mathbf{p}[\alpha]}\, c_{\mathbf{p}'[\alpha']} \right\rangle = (2\pi)^3\, \delta(\mathbf{p} - \mathbf{p}')\, [\delta_{\alpha\alpha'}]\, n_{\mathbf{p}}, \tag{3.32}
\]
where
\[
n_{\mathbf{p}} = \frac{1}{2} \sum_{m,n} \rho_m\, \delta(\mathbf{p} - \mathbf{P}_{mn})\, A_{mn}. \tag{3.33}
\]
It has an evident physical meaning: the statistical Fermi (Bose) distribution determines the probability for the particle to have energy $\omega$ at a given temperature, while the spectral density $\Gamma(\mathbf{p}, \omega)$ gives the probability that the particle with this energy has the momentum $\mathbf{p}$.
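A sketch of this statement in numbers (our own illustration; the relation used below, weighting the spectral density with the Fermi function to get the occupation, is the one implied by the text, and all parameter values are invented):

```python
import numpy as np

# Assumed relation (in the spirit of the text):
#   n_p = integral dw/(2*pi) * Gamma(p, w) * n_F(w).
# For a Lorentzian quasiparticle peak of width gamma centred at xi = eps_p - mu,
# the occupation tends to n_F(xi) as the peak sharpens (gamma -> 0).
beta, xi = 2.0, 0.3
n_F = lambda w: 0.5 * (1.0 - np.tanh(0.5 * beta * w))   # Fermi function, overflow-safe form

w = np.linspace(-200.0, 200.0, 2_000_001)
dw = w[1] - w[0]
for gamma in (0.1, 0.01, 0.001):
    spectral = 2.0 * gamma / ((w - xi) ** 2 + gamma ** 2)
    n_p = np.sum(spectral * n_F(w)) * dw / (2.0 * np.pi)
    print(gamma, n_p, n_F(xi))
```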
After learning a lot about the analytic properties of equilibrium Green's functions at finite temperatures, we once again meet the nasty question of how to calculate the actual Green's function.
We cannot use directly the results of zero-temperature diagram technique. The
reason is that now we have to average over all excited states of the system, not
only its ground state. And while the latter is unique, the former are highly degenerate
(infinitely degenerate in the thermodynamic limit). Therefore our previous reasoning
employing the adiabatic hypothesis no longer works: the adiabatic turning on and
off of the interaction can leave the system at $t = +\infty$ in any linear combination of excited states, very different from the one present at $t = -\infty$, depending on the
interaction, initial state, and the exact way of turning this interaction on and off. This, in turn, means that we cannot separate out the $1/\langle S \rangle$ factor, and the entire scheme fails. A clear indication of this fact is that the causal Green's function is essentially nonanalytic, and thus cannot be obtained by summation of a series.
There are different ways of dealing with this trouble. First, we could write down
an equation of motion for the Green’s function, like (2.83), then decouple the higher-
order Green’s function and find an approximation (checking that Kramers–Kronig
relations are satisfied, etc.). The drawback here is that you don't have a regular procedure and must rely on a happy guess.
Second, we could calculate the Green's function directly from the general formula (3.10) for the average of Heisenberg operators, $\langle n|\psi\psi^{\dagger}|n\rangle = \langle n|S^{-1}\, T(\psi\psi^{\dagger} S)|n\rangle$. There is an ingenious way to actually succeed (the Keldysh formalism), and it has the bonus of being naturally applicable to any nonequilibrium state of the system as well. We will discuss it later. The drawback of this method is that all the Green's functions, self-energies, etc., become $2 \times 2$ matrices, which does not make calculations easier. If we do not actually need to deal with an essentially nonequilibrium situation, we had better opt for something handier.
Third, we can use the remarkable analogy between the evolution operator in conventional time, $\mathcal{U} = e^{-i\mathcal{H}t}$, and the (non-normalized) equilibrium statistical operator $\hat{\rho} = e^{-\beta\mathcal{H}}$, $\beta = 1/T$. Matsubara's idea was to use this analogy to define new, Matsubara (or temperature) Green's functions, closely related to the conventional causal Green's functions in real time. It turns out that for temperature Green's functions a simple and useful diagrammatic technique can be developed [1, 6].
If we introduce the variable $\tau$, $0 < \tau < \beta$, we see that $\hat{\rho}(\tau) \equiv e^{-\tau\mathcal{H}}$ satisfies the Bloch equation,
\[
\frac{\partial}{\partial\tau}\,\hat{\rho}(\tau) = -\mathcal{H}\,\hat{\rho}(\tau). \tag{3.37}
\]
Under the substitution
\[
t \leftrightarrow -i\tau, \tag{3.38}
\]
this equation transforms into the Schrödinger equation for $\hat{\rho}(it)$ on the imaginary interval $0 > t > -i\beta$:
\[
i\,\frac{\partial}{\partial t}\,\hat{\rho}(it) = \mathcal{H}\,\hat{\rho}(it). \tag{3.39}
\]
The statistical operator is a generalization of the wave function, and it is not sur-
prising that it satisfies some sort of Schrödinger equation. What is mildly surprising
is that imaginary time enters the picture; but this is not totally exotic, because we meet a vaguely similar situation, with imaginary frequencies, when the evolution towards equilibrium is discussed (e.g., in the classical theory of a damped oscillator). Here it is more convenient to rotate the time axis by $\pi/2$ in the complex time plane (see Fig. 3.2).
This so-called Wick’s rotation transforms the Heisenberg operators into Matsub-
ara operators:
Let us stress that the conjugated Matsubara field operator is not the Hermitian con-
jugate of the Matsubara field operator:
3.2 Matsubara Formalism
These operators satisfy equations of motion that are an "analytic continuation" of the Heisenberg equations (1.84) to imaginary times:
\[
\frac{\partial}{\partial\tau}\,\psi^{M}(x, \tau) = \left[\mathcal{H}, \psi^{M}(x, \tau)\right]; \tag{3.42}
\]
\[
\frac{\partial}{\partial\tau}\,\bar{\psi}^{M}(x, \tau) = \left[\mathcal{H}, \bar{\psi}^{M}(x, \tau)\right]. \tag{3.43}
\]
Now we can define the temperature Green's functions. First, we introduce the temperature ordering operator, $T_\tau$, which, as usual, orders the operators so that those with larger $\tau$ stand further to the left. The temperature Green's function is then, in direct analogy to (3.10),
\[
\mathcal{G}_{\alpha\alpha'}(\mathbf{x}, \tau; \mathbf{x}', \tau') = -\left\langle T_\tau\, \psi^{M}_{\alpha}(\mathbf{x}, \tau)\, \bar{\psi}^{M}_{\alpha'}(\mathbf{x}', \tau') \right\rangle = -\operatorname{tr}\left[ e^{-\beta(\mathcal{H} - \Omega)}\, T_\tau\, \psi^{M}_{\alpha}(\mathbf{x}, \tau)\, \bar{\psi}^{M}_{\alpha'}(\mathbf{x}', \tau') \right]. \tag{3.44}
\]
As usual, we consider a uniform state without magnetic ordering, so that the Green's function depends on $\mathbf{x} - \mathbf{x}'$, and the spin dependence (if any) reduces to $\delta_{\alpha\alpha'}$.
Following the usual drill, we now explore the analytic properties of this function. First we show that it depends only on the difference of its $\tau$-arguments:
Now we will show that, depending on the statistics of the particles involved, the series contains either odd or even Matsubara frequencies,
\[
\omega^{F}_{\nu} = \frac{(2\nu + 1)\pi}{\beta}; \tag{3.48}
\]
\[
\omega^{B}_{\nu} = \frac{2\nu\pi}{\beta}. \tag{3.49}
\]
To see this, let us take some $\tau < 0$ and calculate $\mathcal{G}(\tau)$ and $\mathcal{G}(\tau + \beta)$:
\[
\mathcal{G}(\tau) = \pm\operatorname{tr}\left[ e^{\beta(\Omega - \mathcal{H})}\, \bar{\psi}\, e^{\mathcal{H}\tau}\, \psi\, e^{-\mathcal{H}\tau} \right] = \pm e^{\beta\Omega}\operatorname{tr}\left[ e^{-\mathcal{H}(\tau + \beta)}\, \bar{\psi}\, e^{\mathcal{H}\tau}\, \psi \right];
\]
\[
\mathcal{G}(\tau + \beta) = -\operatorname{tr}\left[ e^{\beta(\Omega - \mathcal{H})}\, e^{\mathcal{H}(\tau + \beta)}\, \psi\, e^{-\mathcal{H}(\tau + \beta)}\, \bar{\psi} \right] = -e^{\beta\Omega}\operatorname{tr}\left[ e^{\mathcal{H}\tau}\, \psi\, e^{-\mathcal{H}(\tau + \beta)}\, \bar{\psi} \right] = \mp\mathcal{G}(\tau).
\]
We have used here the cyclic invariance of the trace. The upper sign, as usual, corresponds to Fermi statistics (Fig. 3.3). Thus temperature Green's functions are periodic (for bosons) or antiperiodic (for fermions) with period $\beta$. This is exactly what we obtain by keeping only even or odd Matsubara frequencies in the series (3.46), because
\[
e^{-i\omega^{F}_{\nu}(\tau + \beta)} = e^{-i\omega^{F}_{\nu}\tau}\, e^{-i(2\nu + 1)\pi} = -e^{-i\omega^{F}_{\nu}\tau};
\qquad
e^{-i\omega^{B}_{\nu}(\tau + \beta)} = e^{-i\omega^{B}_{\nu}\tau}\, e^{-i2\nu\pi} = e^{-i\omega^{B}_{\nu}\tau}.
\]
The inverse transformation is then
\[
\mathcal{G}(\mathbf{p}, \omega_\nu) = \int_0^{\beta} d\tau \int d^3x\; e^{-i(\mathbf{p}\cdot\mathbf{x} - \omega_\nu \tau)}\, \mathcal{G}(\mathbf{x}, \tau). \tag{3.51}
\]
Without going through the details of these by now routine calculations (which you can do as an exercise), we have
\[
\mathcal{G}(\mathbf{p}, \omega_\nu) = \frac{1}{2}(2\pi)^3 \sum_{m,n} \rho_n\, A_{mn}\, \delta(\mathbf{p} - \mathbf{P}_{mn})\, \frac{1 \pm e^{-\beta\omega_{mn}}}{i\omega_\nu - \omega_{mn}}. \tag{3.52}
\]
The coefficients $A_{mn}$ here are the same ones that enter the formulas for the real-time Green's functions in equilibrium, Eqs. (3.14), (3.19), (3.20). Comparing (3.52) to these equations immediately leads to the relation between temperature and real-time Green's functions:
Now, at last, we can return to drawing pictures. To begin with, we present the system’s
Hamiltonian in the standard form
H = H0 + H1
Repeating essentially the same steps as before, (Sects. 1.3, 2.2.1), let us introduce
the imaginary-time S-matrix in “interaction representation”:
Its definition,
\[
\sigma(\tau_1, \tau_2) = e^{\tau_1 \mathcal{H}_0}\, e^{-(\tau_1 - \tau_2)\mathcal{H}}\, e^{-\tau_2 \mathcal{H}_0}, \tag{3.57}
\]
implies the equation of motion
\[
\frac{\partial}{\partial\tau}\,\sigma(\tau, \tau_2) = -\mathcal{H}_1(\tau)\,\sigma(\tau, \tau_2),
\]
where
\[
\mathcal{H}_1(\tau) = e^{\mathcal{H}_0 \tau}\, \mathcal{H}_1\, e^{-\mathcal{H}_0 \tau}. \tag{3.60}
\]
Iterating it, we find the analogue of Dyson's expansion for $\sigma$, which is (for $\tau_1 > \tau_2$)
\[
\sigma(\tau_1, \tau_2) = T_\tau \exp\left\{ -\int_{\tau_2}^{\tau_1} d\tau\; \mathcal{H}_1(\tau) \right\}. \tag{3.61}
\]
You already know what to do next, but we will nevertheless explicitly derive the basic expression for $\mathcal{G}$. Using the "Matsubara interaction" representation for the Matsubara field operators, we find that the temperature Green's function can be expressed through the "interaction" field operators as follows (omitting the "M" superscripts for brevity):
\[
\mathcal{G}(x_1, x_2; \tau_1 - \tau_2) = -e^{\beta\Omega}\operatorname{tr}\left[ e^{-\beta\mathcal{H}}\, e^{\mathcal{H}\tau_1} e^{-\mathcal{H}_0\tau_1}\, \psi(\tau_1)\, e^{\mathcal{H}_0\tau_1} e^{-\mathcal{H}\tau_1}\; e^{\mathcal{H}\tau_2} e^{-\mathcal{H}_0\tau_2}\, \bar{\psi}(\tau_2)\, e^{\mathcal{H}_0\tau_2} e^{-\mathcal{H}\tau_2} \right] \theta(\tau_1 - \tau_2)
\]
\[
\mp\, e^{\beta\Omega}\operatorname{tr}\left[ e^{-\beta\mathcal{H}}\, e^{\mathcal{H}\tau_2} e^{-\mathcal{H}_0\tau_2}\, \bar{\psi}(\tau_2)\, e^{\mathcal{H}_0\tau_2} e^{-\mathcal{H}\tau_2}\; e^{\mathcal{H}\tau_1} e^{-\mathcal{H}_0\tau_1}\, \psi(\tau_1)\, e^{\mathcal{H}_0\tau_1} e^{-\mathcal{H}\tau_1} \right] \theta(\tau_2 - \tau_1).
\]
Take, for example, the first term in this equation. Using the definition of $\sigma(\tau_1, \tau_2)$ (3.57), we can rewrite it as
\[
-e^{\beta\Omega}\operatorname{tr}\left[ e^{-\beta\mathcal{H}_0}\, \sigma(\beta, \tau_1)\, \psi(\tau_1)\, \sigma(\tau_1, \tau_2)\, \bar{\psi}(\tau_2)\, \sigma(\tau_2, 0) \right] \theta(\tau_1 - \tau_2).
\]
we see that actually the formula is a direct analogue to the zero-temperature case
(where we have explicitly written spin indices):
This formula provides the basis for the Matsubara diagram technique. We again expand the S-matrix $\sigma(\beta, 0)$ in a series over the interaction $\mathcal{H}_1$ and express the terms as averages over the unperturbed equilibrium ensemble. Wick's and cancellation theorems
are still valid in this case, but we will not bother to rewrite the proof for imaginary
times. You can easily check that, e.g., the "thermodynamic" proof of Wick's theorem holds after the substitution $it \to \tau$. Therefore we can present the terms as Feynman
diagrams; all disconnected diagrams cancel, and we are left with the usual connected
lot. The rules are given in Table 3.1.
The only difference is that in Fourier representation, instead of integrating over
dummy frequencies in the vertices from minus to plus infinity we will sum over
the discrete set of Matsubara frequencies. This is generally more troublesome than
integration (as all discrete mathematics goes), but there are many useful tricks. I will
give here the most basic one, which in many cases does the job.
Table 3.1 Feynman rules for the temperature Green's function (scalar electron–electron interaction)

Coordinate space:
- line from $X'$ to $X$: $-\mathcal{G}(X, X') \equiv -\mathcal{G}_{\alpha\alpha'}(\mathbf{x}, \mathbf{x}'; \tau - \tau')$, the temperature Green's function;
- thin line from $X'$ to $X$: $-\mathcal{G}^{0}(X, X') \equiv -\mathcal{G}^{0}(\mathbf{x}, \mathbf{x}'; \tau - \tau')\,\delta_{\alpha\alpha'}$, the unperturbed temperature Green's function.
Fermi frequencies:
\[
\frac{1}{\beta}\sum_{\nu=-\infty}^{\infty} f\!\left(i\omega^{F}_{\nu}\right) = -\frac{1}{2}\sum_{s} \tanh\frac{\beta z_s}{2}\;\operatorname*{Res}_{z = z_s} f(z). \tag{3.63}
\]
Bose frequencies:
\[
\frac{1}{\beta}\sum_{\nu=-\infty}^{\infty} f\!\left(i\omega^{B}_{\nu}\right) = -\frac{1}{2}\sum_{s} \coth\frac{\beta z_s}{2}\;\operatorname*{Res}_{z = z_s} f(z). \tag{3.64}
\]
The origin of these formulae is clear if we recall that the function of the complex variable $z$, $\tanh\frac{\beta z}{2} = \frac{e^{\beta z} - 1}{e^{\beta z} + 1}$, has poles exactly at the points $z_\nu = i\pi(2\nu + 1)/\beta = i\omega^{F}_{\nu}$, and its residue at each of these points equals $2/\beta$.
The contour integral $\oint dz\, f(z)\tanh\frac{\beta z}{2}$ along the infinitely large circle of Fig. 3.4 vanishes (this is ensured by the condition that $f(z)$ vanish faster than $1/|z|$ at infinity). On the other hand, by Cauchy's theorem, the integral is proportional to the sum of the residues of the integrand; the residues of the tanh give the left-hand side of (3.63), while the rest give its right-hand side. The same considerations lead to (3.64) if we use $\coth(\beta z/2)$ instead of $\tanh(\beta z/2)$. Since $\tanh(\beta z/2) = 1 - 2n_F(z)$ and $\coth(\beta z/2) = 1 + 2n_B(z)$, we see that the equilibrium distribution functions naturally enter the calculations from this (seemingly) formal side.
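Formula (3.63) is easy to test numerically with a simple trial function (our own choice of $f$ and of all parameter values):

```python
import numpy as np

# Check of the summation formula (3.63) with the trial function
# f(z) = 1/(z**2 - xi**2) (poles at z = +-xi, residues +-1/(2*xi)):
#   (1/beta) * sum_n f(i*w_n) = -(1/2) * sum_s tanh(beta*z_s/2) * Res_{z_s} f
#                             = -tanh(beta*xi/2) / (2*xi).
beta, xi = 1.3, 0.8
n = np.arange(-1_000_000, 1_000_000)
wn = (2 * n + 1) * np.pi / beta                     # fermionic Matsubara frequencies

lhs = np.sum(1.0 / (-(wn ** 2) - xi ** 2)) / beta   # f(i*w_n) = 1/((i*w_n)**2 - xi**2)
rhs = -np.tanh(beta * xi / 2.0) / (2.0 * xi)
print(lhs, rhs)
```

The truncated frequency sum converges like $1/N$ here, which is why a large cutoff is used; in analytic work the residue formula does the resummation exactly.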
We have developed some approaches that allow us, we hope, to calculate equilib-
rium real-time Green’s functions, both at zero and at finite temperatures. We have
repeatedly referred to $G^R(t)$ as the function that naturally describes the reaction of the system to an external perturbation. This could seem a contradiction, since in equilibrium, the only situation in which we can calculate Green's functions using the methods developed so far, there can be no external perturbation. Nevertheless, we can still use equilibrium Green's functions to find the linear reaction of the system to a weak perturbation. This constitutes so-called linear response theory. The main idea behind it is, well, that of linear response: we can neglect the higher powers of the perturbation, as long as it is small enough. Two famous examples of this approach are Hooke's law ($F = kx$) and Ohm's law ($V = IR$): the constants $k$, $R$ are taken in equilibrium, at zero strain or current, and we neglect the higher-order corrections in $x$ or $I$.
Suppose that the system is affected by a weak external perturbation (which is
generally time dependent, say an external electromagnetic field). Its Hamiltonian is
thus
H(t) = H0 + H1 (t)
(here H0 includes all interactions in the system except the external perturbation under
consideration). We are interested in some observable, represented by an operator A
3.3 Linear Response Theory
(say, the electric current). We measure its average value, →A√t , as a function of the
perturbation strength.
In accordance with our usual approach, we introduce the statistical operator in the interaction representation, which satisfies
\[
\tilde{\rho}(t) = -i \int_{-\infty}^{t} dt'\, \left[\tilde{\mathcal{H}}_1(t'), \tilde{\rho}(t')\right] + \tilde{\rho}(-\infty).
\]
The idea of Kubo's approach to linear response theory is as follows. Assume that at $t = -\infty$ the perturbation was absent (and was later adiabatically switched on), so that the system was in equilibrium:
\[
\tilde{\rho}(-\infty) = \hat{\rho}_0.
\]
Then the linear response of the system to the perturbation is given by the following expression:
\[
\delta\tilde{\rho}(t) \equiv \tilde{\rho}(t) - \hat{\rho}_0 = -i \int_{-\infty}^{t} dt'\, \left[\tilde{\mathcal{H}}_1(t'), \hat{\rho}_0\right] + O\!\left((\mathcal{H}_1)^2\right). \tag{3.68}
\]
−∇
In other words, we use first-order perturbation theory for the nonequilibrium statistical operator. Of course, we could go further and find the second, third, etc., orders in the perturbation. The unpleasant feature of such a series is that it is a series in nth-order commutators, $\left[\tilde{\mathcal{H}}_1(t'), \left[\tilde{\mathcal{H}}_1(t''), \left[\tilde{\mathcal{H}}_1(t'''), \ldots \left[\tilde{\mathcal{H}}_1(t^{(n)}), \hat{\rho}_0\right] \ldots\right]\right]\right]$, which cannot be expressed in such a convenient way as the higher-order time-ordered products of field operators. But as the basis for a linear theory it is all right.
The shift of the average value of an operator $A$ to first order in the perturbation (the linear response) is given by
\[
\delta\langle A(t)\rangle = \operatorname{tr}\left[\delta\tilde{\rho}(t)\, \tilde{A}(t)\right] = -i \int_{-\infty}^{t} dt'\, \operatorname{tr}\left(\left[\tilde{\mathcal{H}}_1(t'), \hat{\rho}_0\right] \tilde{A}(t)\right) = -i \int_{-\infty}^{t} dt'\, \left\langle \left[\tilde{A}(t), \tilde{\mathcal{H}}_1(t')\right] \right\rangle, \tag{3.69}
\]
Let the perturbation have the form
\[
\mathcal{H}_1(t) = -f(t)\, B, \tag{3.70}
\]
where the c-number function $f(t)$ is the so-called generalized force, and $B$ is some operator defined for the system under consideration. This is usually described as the perturbation being coupled to $B$ through $f(t)$. Examples are $-\hat{\mathbf{S}}\cdot\mathbf{H}(t)$ (the spin coupled to the external magnetic field), or $-\frac{1}{c}\,\hat{\mathbf{j}}\cdot\mathbf{A}(t)$ (the current coupled to the vector potential). It is not always this easy to tell what we should consider as the generalized force. A useful recipe is as follows. Since the only time-dependent factor in (3.70) (as well as in the whole Hamiltonian $\mathcal{H}$) is $f(t)$, we can figure out the proper expression for the generalized force if we write down the energy dissipated by the external field in the system per unit time:
\[
Q = \dot{E} = \langle \dot{\mathcal{H}} \rangle = q(t)\,\langle B \rangle; \tag{3.71}
\]
\[
q(t) = -\dot{f}(t). \tag{3.72}
\]
Now define the retarded Green's function of the operators $A$ and $B$:
\[
\left\langle\!\left\langle A(t)\, B(t') \right\rangle\!\right\rangle^{R} = \frac{1}{i}\left\langle \left[A(t), B(t')\right] \right\rangle \theta(t - t'). \tag{3.73}
\]
This construction may seem a little strange, but it agrees with our earlier retarded Green's function: evidently, $G^{R}(t, t') = \langle\langle \psi(t)\, \psi^{\dagger}(t') \rangle\rangle^{R}$. After all, we can always rewrite the operators $A(t)$, $B(t')$ in terms of the field operators, thus reducing this retarded Green's function to the ones we are accustomed to. This expression, though, has a more direct physical meaning: it describes the system's response (already in terms of the observable $A(t)$ we are interested in) at time $t$ to an external perturbation at earlier moments of time, $t' < t$ (coupled to the operator $B(t')$). We can just as well introduce
the advanced Green's function, $\left\langle\!\left\langle A(t)\, B(t') \right\rangle\!\right\rangle^{A} = -\frac{1}{i}\left\langle \left[A(t), B(t')\right] \right\rangle \theta(t' - t)$, though it does not have a straightforward physical sense.
Please notice that we no longer write tildes over the operators: they are all assumed to be in the interaction representation with respect to the external perturbation, $A(t) = \exp(i\mathcal{H}_0 t)\, A(0)\, \exp(-i\mathcal{H}_0 t)$. But the only thing not included in $\mathcal{H}_0$ is the external perturbation term (3.70). Therefore, the averages are to be calculated using the perturbation series over the interaction terms, and then the operators can be regarded as taken in the Heisenberg representation.
Now we can rewrite Eq. (3.69) in the form of the Kubo formula:
\[
\delta\langle A(t)\rangle = -\int_{-\infty}^{\infty} dt'\, f(t')\, \left\langle\!\left\langle A(t)\, B(t') \right\rangle\!\right\rangle^{R}. \tag{3.74}
\]
This is a very transparent formula: the change in the value of the observable is determined (to first order) by the external force $f(t')$ applied at all earlier moments of time, and the kernel of this integral operator is exactly the "AB" Green's function. Generally it is a tensor, since the operators $A$, $B$ don't have to be scalars.
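The content of the Kubo formula can be verified directly on a toy model. The sketch below is our own two-level example (all parameter values invented): it integrates the full Schrödinger equation for $\mathcal{H} = \frac{\Delta}{2}\sigma_z - f(t)\sigma_x$ and compares the measured shift of $\langle\sigma_x\rangle$ with the prediction of (3.74), using the exactly known retarded correlator of this model.

```python
import numpy as np

# Toy verification of the Kubo formula. For H0 = (Delta/2)*sigma_z, ground state
# |down>, and A = B = sigma_x, the retarded correlator is
#   <<sx(t) sx(t')>>_R = -2*sin(Delta*(t - t'))*theta(t - t'),
# so (3.74) with H1 = -f(t)*sigma_x predicts
#   d<sx>(t) = 2 * int_0^t dt' f(t') * sin(Delta*(t - t')).
Delta, f0, W = 1.0, 1e-3, 0.3
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
f = lambda t: f0 * np.sin(W * t)                 # weak force switched on at t = 0
H = lambda t: 0.5 * Delta * sz - f(t) * sx

dt, T = 2e-3, 20.0
steps = int(round(T / dt))
psi = np.array([0.0, 1.0], dtype=complex)        # ground state of (Delta/2)*sigma_z

resp = [0.0]                                     # d<sx>(t) from full RK4 evolution
for k in range(steps):
    t = k * dt
    k1 = -1j * (H(t) @ psi)
    k2 = -1j * (H(t + dt / 2) @ (psi + 0.5 * dt * k1))
    k3 = -1j * (H(t + dt / 2) @ (psi + 0.5 * dt * k2))
    k4 = -1j * (H(t + dt) @ (psi + dt * k3))
    psi = psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    resp.append((psi.conj() @ sx @ psi).real)
resp = np.array(resp)

# linear-response prediction on the same grid (trapezoidal rule)
tgrid = np.linspace(0.0, T, steps + 1)
kubo = np.zeros(steps + 1)
for k in range(1, steps + 1):
    g = f(tgrid[:k + 1]) * np.sin(Delta * (tgrid[k] - tgrid[:k + 1]))
    kubo[k] = 2.0 * dt * (g.sum() - 0.5 * (g[0] + g[-1]))

err = np.max(np.abs(resp - kubo))
print(err)    # small compared with the response scale ~ 2*f0
```

The agreement holds to the extent that the force is weak; increasing `f0` exposes the neglected higher-order (here cubic) corrections.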
The equilibrium state of the system is, of course, time independent. Therefore, the retarded Green's function in (3.74) depends only on the time difference $t - t'$, and the response is conveniently characterized by the generalized susceptibility
\[
\chi(\omega) = \frac{\delta\langle A(\omega)\rangle}{f(\omega)}. \tag{3.77}
\]
Then
\[
\chi(\omega) = -\left\langle\!\left\langle A B \right\rangle\!\right\rangle^{R}_{\omega}.
\]
Examples are many: the electric conductivity, $j_a(\omega) = \sigma_{ab}(\omega) E_b(\omega)$; the magnetic susceptibility, $m_a(\omega) = \chi_{ab}(\omega) H_b(\omega)$; and so on. Here the tensor indices $a, b = x, y, z$, and Einstein's summation rule is implied.
Einstein’s summation rule is implied.
There are various ways of writing Kubo formulas. The one above seems quite general. If you can write the perturbation as coupled to the same operator whose average value you investigate, you arrive at a more frequently met form of the Kubo formula. For example, if we are calculating the electrical conductivity, the operator $A$ is the current operator, $\hat{j}$. (I don't want to bother with tensor indices here; suppose we have a thin wire, with only one current component allowed.) On the other hand, the external field is coupled to the system through $-\frac{1}{c}\,\hat{j}A$. Since $E(t) = -\frac{1}{c}\dot{A}(t)$, in Fourier components we can rewrite the perturbation as $-\frac{1}{c}\,\hat{j}\left(-\frac{ic}{\omega}\right)E(\omega) = \frac{i}{\omega}\,\hat{j}\,E(\omega)$. This is the correct coupling, since we want to find the system's response to the external electric field, not to the vector potential. The Green's function will thus include equilibrium averages like $\left\langle \hat{j}(t)\,\hat{j}(t')\right\rangle_0$. I will not dwell on the detailed form of this specific Green's function. A general statement, though, is to be made here. The linear response of the system to the external electric field, the nonequilibrium current, turns out to be determined by the equilibrium correlators of the current itself. The external field, in a sense, simply reveals these equilibrium fluctuations. In the next subsection we will see that this is indeed the case, and we will give this vague statement an exact form in the fluctuation-dissipation theorem.
The symmetrized correlator of the observable $A$ is
\[
K_A(t) = \frac{1}{2}\left\langle \{A(t), A(0)\} \right\rangle. \tag{3.79}
\]
Often it is simpler to deal with the autocovariation function,
\[
K_{\delta A}(t) = \frac{1}{2}\left\langle \{\delta A(t), \delta A(0)\} \right\rangle \equiv \frac{1}{2}\left\langle \{A(t) - \langle A\rangle,\; A(0) - \langle A\rangle\} \right\rangle = K_A(t) - \langle A\rangle^2. \tag{3.80}
\]
In this way we explicitly consider the fluctuations around the average value.
The Fourier transform of the correlator is called the spectral density of fluctuations:
\[
\left(A^2\right)_\omega = \int_{-\infty}^{\infty} dt\, e^{i\omega t}\, K_A(t). \tag{3.81}
\]
(It is often denoted by S(ω), but we will not use this notation here.)
The Wiener–Khintchin theorem of the theory of random processes states that the spectral density (3.81) gives the average power of the fluctuations of $A$ in the frequency interval $[\omega, \omega + \Delta\omega)$ through $\left(A^2\right)_\omega \Delta\omega$, and thus can be directly measured. For example, if we are measuring the voltage fluctuations on a resistor (Fig. 3.5), this is what the wattmeter shows.
For the equilibrium-state average of the product of two operators $A$, $B$ we have the Kubo–Martin–Schwinger identity:
\[
\langle A(t)\, B(0)\rangle = \operatorname{tr}\left[ e^{\beta(\Omega - \mathcal{H})}\, e^{i\mathcal{H}t}\, A(0)\, e^{-i\mathcal{H}t}\, B(0) \right] = \langle B(0)\, A(t + i\beta) \rangle. \tag{3.82}
\]
It is an evident result of the cyclic invariance of the trace and the special form of the equilibrium statistical operator. This allows us to rewrite the spectral density as
\[
\left(A^2\right)_\omega = \frac{1}{2}\left(1 + e^{\beta\omega}\right) \int_{-\infty}^{\infty} dt\, e^{i\omega t}\, \langle A(0)\, A(t)\rangle. \tag{3.83}
\]
In this expression enters the Fourier transform of the anticommutator, $\{A(t), A(0)\}$. It is easy to find a similar formula for the commutator:
\[
\left([A, A]\right)_\omega \equiv \int_{-\infty}^{\infty} dt\, e^{i\omega t}\, \langle [A(t), A(0)]\rangle = \left(e^{\beta\omega} - 1\right) \int_{-\infty}^{\infty} dt\, e^{i\omega t}\, \langle A(0)\, A(t)\rangle. \tag{3.84}
\]
Then
\[
\left(A^2\right)_\omega = \frac{1}{2}\coth\frac{\beta\omega}{2}\; \left([A, A]\right)_\omega. \tag{3.85}
\]
On the other hand, we can rewrite Eq. (3.84) in the form
\[
\left([A, A]\right)_\omega = -\int_0^{\infty} dt\, e^{-i\omega t}\, \langle [A(t), A(0)]\rangle + \int_0^{\infty} dt\, e^{i\omega t}\, \langle [A(t), A(0)]\rangle,
\]
to find that
\[
\left([A, A]\right)_\omega = -2\, \operatorname{Im} \left\langle\!\left\langle A A \right\rangle\!\right\rangle^{R}_{\omega}. \tag{3.86}
\]
Let us consider one of the most important applications of the theorem. Take the current operator, $J$, in a simple electric circuit (Fig. 3.6). At low enough frequencies ($\omega \ll c/L$, where $L$ is the size of the circuit) the current is the same throughout the circuit and depends only on time:
\[
\langle J(t)\rangle = J(t).
\]
If there were an external emf in the circuit, $W(t)$, the energy dissipation per unit time would be
\[
Q = JW = \langle J\rangle\, W.
\]
Comparing this with (3.71) and (3.72), we identify the generalized force:
\[
\dot{f} = -W; \qquad i\omega f(\omega) = W(\omega).
\]
Ohm's law then gives
\[
J(\omega) = W(\omega)/Z(\omega) = i\omega f(\omega)/Z(\omega),
\]
so that the generalized susceptibility is
\[
\chi(\omega) = \frac{i\omega}{Z(\omega)} = \frac{i\omega}{R(\omega) + iY(\omega)}; \tag{3.88}
\]
\[
\operatorname{Im}\chi(\omega) = \frac{\omega}{|Z(\omega)|^2}\, R(\omega). \tag{3.89}
\]
(As it should be, the imaginary, dissipative part of the susceptibility corresponds to the real part of the impedance, the resistance, rather than to the reactance.)
The corresponding voltage fluctuations, $W(\omega) = Z(\omega) J(\omega)$, are given by
\[
\left(W^2\right)_\omega = \operatorname{Re} Z(\omega)\; \omega \coth\frac{\beta\omega}{2}. \tag{3.91}
\]
In the classical limit ($T \gg \omega$) we obtain the famous Nyquist theorem:
\[
\left(J^2\right)_\omega = 2T\, G(\omega); \tag{3.92}
\]
\[
\left(W^2\right)_\omega = 2T\, R(\omega). \tag{3.93}
\]
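A quick numerical illustration of the classical limit follows (our own resistor example; all numbers are invented, and units with $k_B = \hbar = 1$ are assumed):

```python
import numpy as np

# The quantum voltage noise (3.91),
#   (W^2)_w = Re Z(w) * w * coth(beta*w/2),
# reduces to the Nyquist result 2*T*R in the classical limit T >> w.
R, T = 50.0, 300.0            # hypothetical resistance and temperature
beta = 1.0 / T

for w in (1e-3, 1e-2, 1e-1):                  # frequencies far below T
    noise = R * w / np.tanh(0.5 * beta * w)   # Re Z = R for a plain resistor
    print(w, noise, 2 * T * R)                # noise -> 2*T*R = 30000.0
```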
we imposed no specific limitations upon the quantum state of the system, i.e., on its statistical operator $\hat{\rho}$. If later we had to deal with the system in its ground state (at $T = 0$) or in equilibrium (at $T \neq 0$), it was because our elegant diagram technique based on perturbation theory essentially used specific properties: the uniqueness of the ground state (up to a phase factor) in the first case, and the formal equivalence of the equilibrium statistical operator and the (analytically continued) evolution operator in the second.
Extremely powerful with regard to the thermodynamic properties, these
approaches are evidently unable to cope with the kinetic problems, which are very
important in condensed matter theory. For example, they cannot describe the response
of the system to time-dependent external perturbation, even at zero temperature
(since such a perturbation will lead to energy pumping into the system, which can
be transferred to some excited state). Linear response theory can answer this sort of
question, but only in the first order, while there is no convenient way to write the
higher-order terms in this method.
Nevertheless there exists a possibility to develop a diagram technique for the
nonequilibrium Green’s function, if we take into account several types of Green’s
functions simultaneously [7]. Then the Green's function becomes a matrix. This is the price we pay for universality, and the reason why this technique, introduced by Keldysh, did not replace the other two in their respective fields.
We define the causal nonequilibrium Green’s function as follows:
G^{--}_{\alpha\beta}(x_1, t_1; x_2, t_2) = -i\,\langle\,|\, T\, \psi_\alpha(x_1, t_1)\, \psi^{\dagger}_{\beta}(x_2, t_2)\,|\,\rangle. \qquad (3.95)
Here | \rangle denotes an arbitrary quantum state (in the Heisenberg representation) of the system under consideration.²
² Statistical averaging can be included at any stage of the calculations without any problem, since

\mathrm{tr}(\hat\rho A) \equiv \sum_{W} w_W\,\langle W|A|W\rangle

is a linear operation.
3.4 Nonequilibrium Green's Functions
This is essentially the same expression as the one we introduced in Eq. (2.10),
except that now | \rangle generally is not the ground state. What are the consequences of
this difference?
Let us try to repeat the steps that led us to the generating formula (2.55) of the perturbation theory for Green's functions. We express the Heisenberg field operators, \psi, through the interaction representation operators, \tilde\psi:

\psi(x, t) = U^{\dagger}(t)\, e^{iH_0 t}\, \psi(x)\, e^{-iH_0 t}\, U(t) \equiv U^{\dagger}(t)\, \tilde\psi(x, t)\, U(t), \qquad U(t) = e^{iH_0 t} e^{-iHt}.
Then note that the Heisenberg state | \rangle is related to the corresponding state vector in the interaction representation, |\Psi(t)\rangle_I, by

|\Psi(t)\rangle_I = S(t, -\infty)\,|\Psi(-\infty)\rangle_I.

Here the S-matrix (in the interaction representation, given by the Dyson expansion, see Chap. 2) relates the actual state |\Psi(t)\rangle_I to the presumably unperturbed one, |\Psi(-\infty)\rangle_I \equiv |\Phi_0\rangle, i.e., to the state of the system of free particles.
This allows us to rewrite Green's function as³

G^{--}(x_1, t_1; x_2, t_2) = -i\,\langle \Psi(\infty)|\, T\, S(\infty, -\infty)\, \tilde\psi(x_1, t_1)\, \tilde\psi^{\dagger}(x_2, t_2)\,|\Phi_0\rangle. \qquad (3.97)
The fundamental difference between this expression and the corresponding result for the ground state average (2.55) is that the state at t = \infty, |\Psi(\infty)\rangle, is by no means simply related to the unperturbed state at t = -\infty, |\Phi_0\rangle. The only way to bring it back is to use a straightforward formula,

\langle\Psi(\infty)| = \langle\Phi_0|\, S^{\dagger}(\infty, -\infty).
Then we obtain the basic formula for the nonequilibrium Green’s function,
G^{--}(x_1, t_1; x_2, t_2) = -i\,\langle\Phi_0|\, S^{\dagger}(\infty, -\infty)\, T\, S(\infty, -\infty)\, \tilde\psi(x_1, t_1)\, \tilde\psi^{\dagger}(x_2, t_2)\,|\Phi_0\rangle. \qquad (3.99)
Now we substitute in this expression the Dyson expansion for the S-matrix (1.90), (1.91). The anti-time-ordering operator \tilde T arranges the operators in the opposite order to that of the T-operator.
It can be shown that the main features of theory are kept intact; namely, (1)
Wick’s theorem is still valid, so that we can express anything in terms of pairings in
the unperturbed state, and (2) the vacuum (disconnected) diagrams are canceled.
We see from (3.99) that the main difference between the present case and the zero
temperature technique is the presence of two time orderings in the same formula:
G^{--}(x_1, t_1; x_2, t_2) = -i\,\left\langle\Phi_0\right|\, \tilde T\, e^{\,i\int_{-\infty}^{\infty} d\tau\, W(\tau)}\; T\, e^{-i\int_{-\infty}^{\infty} d\tau\, W(\tau)}\, \tilde\psi(x_1, t_1)\, \tilde\psi^{\dagger}(x_2, t_2)\,\left|\Phi_0\right\rangle. \qquad (3.102)
It appears due to the necessity of returning back in time, to the initial unperturbed state before the interaction was turned on, since we don't know what the state will be like after it is finally turned off. Formally this can be presented as a single time ordering along the contour running from -\infty to \infty and back again (Fig. 3.7). (The time ordering along a contour that returns to -\infty instead of running to \infty was first suggested by Schwinger.) The operators standing to the right of the T-operator in (3.102) belong to the right-going (-) branch, the others to the left-going (+) branch of the contour. The + operators always stand to the left of the - ones.
Evidently, if we use Wick’s theorem, we obtain four types of pairings, namely
(± denotes the branch)
\langle T\, \tilde\psi_{-}\tilde\psi^{\dagger}_{-}\rangle, \quad \langle \tilde T\, \tilde\psi_{+}\tilde\psi^{\dagger}_{+}\rangle, \quad \langle \tilde\psi_{+}\tilde\psi^{\dagger}_{-}\rangle, \quad \langle \tilde\psi^{\dagger}_{+}\tilde\psi_{-}\rangle. \qquad (3.103)
The first of these gives the causal Green’s function; the rest we have not met before.
The diagram technique in nonequilibrium thus includes four Green’s functions, which
we will define as follows4 :
G^{--}(1, 2) = -i\,\langle T\, \psi(1)\psi^{\dagger}(2)\rangle; \qquad (3.104)

G^{+-}(1, 2) = -i\,\langle \psi(1)\psi^{\dagger}(2)\rangle; \qquad (3.105)

G^{-+}(1, 2) = \pm i\,\langle \psi^{\dagger}(2)\psi(1)\rangle; \qquad (3.106)

G^{++}(1, 2) = -i\,\langle \tilde T\, \psi(1)\psi^{\dagger}(2)\rangle. \qquad (3.107)
The (−+) function is directly proportional to the density of real particles in the
system.
Then, we have
G^{--}(1, 2) = -\left[G^{++}(2, 1)\right]^{*};

G^{-+}(1, 2) = -\left[G^{-+}(2, 1)\right]^{*}; \qquad (3.109)

G^{+-}(1, 2) = -\left[G^{+-}(2, 1)\right]^{*}. \qquad (3.110)
If we define the retarded and advanced Green’s functions as before, they can be
expressed as follows:
The unperturbed Green’s functions satisfy the following linear differential equations:
Here we have introduced one more linear combination of G ±± , the so-called Keldysh
Green’s function:
Note that the retarded and advanced Green’s functions don’t contain any infor-
mation on the state of the system, which is given solely by the Keldysh Green’s
function. Since they are given by linear combinations of G ±± , we can use them as
three independent functions, instead of four dependent G ±± . As we will see later,
in many cases the set (G R , G A , G K ) indeed is simpler to use than the initial set
(G −− , G −+ , G +− , G ++ ).
The rules of the Keldysh diagram technique directly follow from the expansion of
S-matrices in Eq. (3.99).5 First consider the rules for the scalar interaction with an
external field W (x, t).
The only difference from the zero-temperature case will be that each electron
or interaction line bears at its ends ±-indices, which show to which branch of the
Keldysh contour the corresponding operator belongs; besides, the “+”-vertices are
multiplied by +i instead of −i, since they originate from the S † -operator. This can
be taken into account in an elegant way, if we introduce the matrix Green’s function
in the Keldysh space:
\hat G(1, 2) = \begin{pmatrix} G^{--}(1, 2) & G^{-+}(1, 2) \\ G^{+-}(1, 2) & G^{++}(1, 2) \end{pmatrix}. \qquad (3.125)
The Feynman rules for some cases of interest are given in Table 3.2. Note that four
differential equations for the unperturbed Green’s functions (3.113)–(3.116) are now
gathered in one elegant expression:
We can make use of the relations between the G ±± functions to obtain another
representation of the same formalism. If we perform the transformation
\hat G \to \bar G = \frac{1}{2}\,(\hat\tau_0 - i\hat\tau_2)\,\hat\tau_3\,\hat G\,(\hat\tau_0 - i\hat\tau_2)^{\dagger}, \qquad (3.130)
we come to Green’s function in the following form (check this!):
\bar G = \begin{pmatrix} G^{R} & G^{K} \\ 0 & G^{A} \end{pmatrix}. \qquad (3.131)
The Feynman rules for this representation can be obtained from the initial ones if we
use the transformation inverse to (3.130). They are given in Table 3.3.
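The rotation (3.130) can be verified directly. The sketch below is a pure illustration: the four components are random numbers, constrained only by G^{--} + G^{++} = G^{-+} + G^{+-}; using the standard combinations G^R = G^{--} - G^{-+}, G^A = G^{--} - G^{+-}, G^K = G^{--} + G^{++}, the rotated matrix comes out triangular as in (3.131):

```python
import numpy as np

rng = np.random.default_rng(0)
gmm, gmp, gpm = rng.standard_normal(3) + 1j * rng.standard_normal(3)
gpp = gmp + gpm - gmm            # constraint G-- + G++ = G-+ + G+-

Ghat = np.array([[gmm, gmp],
                 [gpm, gpp]])

tau0 = np.eye(2)
tau2 = np.array([[0, -1j], [1j, 0]])
tau3 = np.diag([1.0, -1.0])

A = tau0 - 1j * tau2             # [[1, -1], [1, 1]]
Gbar = 0.5 * A @ tau3 @ Ghat @ A.conj().T   # Eq. (3.130)

# triangular form (3.131): [[G^R, G^K], [0, G^A]]
print(np.allclose(Gbar[1, 0], 0))
print(np.allclose(Gbar[0, 0], gmm - gmp))   # G^R
print(np.allclose(Gbar[1, 1], gmm - gpm))   # G^A
print(np.allclose(Gbar[0, 1], gmm + gpp))   # G^K
```

Note that the vanishing of the (2,1) element is a direct consequence of the single linear relation among the four components, which is exactly why only three functions are independent.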
The Keldysh Green’s functions often contain more information than we need. Indeed,
as we have seen, of the three independent components of the matrix Ḡ, G R and
G A contain only the information about the dispersion relation in the system; all
information about the occupation of the states is contained in the component G K
(plus again the information about the dispersion relation). In many cases we are more
interested in the kinetics, i.e., in how the states are occupied, than in what exactly
these states are (after all, we can use some approximate dispersion relation \varepsilon(\mathbf p) and forget about them). Since the matrix theory is rather awkward, this puts a premium on some sort
of reduced description, which would let us get rid of nonessential information. In
this way we will derive a quantum kinetic equation for the quantum analogue of the
distribution function of classical statistical mechanics.
Defining the statistical distribution function in a quantum limit is a nontrivial prob-
lem. The uncertainty relations prohibit the use of the classical distribution function
itself, f (r, p, t). Instead, we can introduce the Wigner function,
f^{W}(\mathbf r, \mathbf p, t) \equiv \int d^{3}\xi\, e^{-i\mathbf p\cdot\boldsymbol\xi/\hbar}\,\left\langle \psi^{\dagger}\!\left(\mathbf r - \frac{\boldsymbol\xi}{2},\, t\right)\psi\!\left(\mathbf r + \frac{\boldsymbol\xi}{2},\, t\right)\right\rangle. \qquad (3.133)
The Wigner function has many useful properties, but being positive definite is not one of them: f^{W}(\mathbf r, \mathbf p) can take negative values in some regions of the phase space. This is the price we have to pay for our wish to have some definite relation between momentum and coordinate in quantum mechanics! Nevertheless, if we
average f W (r, p) over the scale of h d (d is the dimensionality of the system), this
“roughened” distribution function coincides with the classical distribution function
f (r, p) up to the terms of higher order in h:
\int \frac{d^{d}p\, d^{d}r}{h^{d}}\, f^{W}(\mathbf r, \mathbf p, t) = f(\mathbf r, \mathbf p, t) + o(h^{d}), \qquad (3.134)

the integration running over a phase-space cell of volume h^{d} around the point (\mathbf r, \mathbf p).
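The negativity is easy to exhibit numerically. A sketch (for an illustrative one-dimensional pure state, a superposition of two Gaussian packets, with \hbar = 1, an equivalent form of (3.133)): the Wigner function is positive on either packet but develops negative interference fringes between them:

```python
import numpy as np

# superposition of two Gaussian packets at x = +/- a (a "cat" state)
a, s = 3.0, 0.7
y = np.linspace(-15, 15, 6001)            # integration grid
psi_tab = (np.exp(-(y - a)**2 / (2 * s**2))
           + np.exp(-(y + a)**2 / (2 * s**2)))
psi_tab /= np.sqrt(np.trapz(np.abs(psi_tab)**2, y))

def psi(x):
    return np.interp(x, y, psi_tab, left=0.0, right=0.0)

def wigner(x, p):
    """f^W(x, p) ~ Int dy psi*(x + y) psi(x - y) exp(2 i p y) / pi, hbar = 1."""
    integrand = np.conj(psi(x + y)) * psi(x - y) * np.exp(2j * p * y)
    return np.trapz(integrand, y).real / np.pi

print(wigner(a, 0.0) > 0)                  # positive on a classical packet
print(wigner(0.0, np.pi / (2 * a)) < 0)    # negative interference fringe
```

The fringe at x = 0 oscillates roughly as \cos(2pa), so the Wigner function is guaranteed to dip below zero between the packets, while a coarse-grained average over a cell of area \sim h restores positivity, in line with (3.134).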
\hat G = \hat G_0 + \hat G_0\,\hat\Sigma\,\hat G; \qquad (3.135)

\left(\bar G = \bar G_0 + \bar G_0\,\bar\Sigma\,\bar G\right); \qquad (3.136)
3.5 Quantum Kinetic Equation
\hat G = \hat G_0 + \hat G\,\hat\Sigma\,\hat G_0; \qquad (3.137)

\left(\bar G = \bar G_0 + \bar G\,\bar\Sigma\,\bar G_0\right). \qquad (3.138)
Only three components of the former matrix are independent, since, as is easy to see,

\Sigma^{--} + \Sigma^{++} = -\left(\Sigma^{-+} + \Sigma^{+-}\right). \qquad (3.141)

\Sigma^{K} = \Sigma^{--} + \Sigma^{++};

\Sigma^{R} = \Sigma^{--} + \Sigma^{-+}; \qquad (3.142)

\Sigma^{A} = \Sigma^{--} + \Sigma^{+-}.
Acting on Dyson’s equation from the left (or from the right on its conjugate) by
the operator G0−1 , we find the Keldysh equations, equivalent to the set of integro-
differential equations for the component Green’s functions:
G0−1 − ψ̂3
ˆ · Ĝ(1, 2) = ψ̂3 δ(1 − 2); (3.143)
Ĝ(1, 2) · G0−1 − ˆ ψ̂3 = ψ̂3 δ(1 − 2), (3.144)
Now we can obtain the quantum kinetic equation. We will use the “hat” representation
as more straightforward.
Set T = \frac{t_1 + t_2}{2}, \tau = t_1 - t_2; \mathbf R = \frac{\mathbf x_1 + \mathbf x_2}{2}, \boldsymbol\xi = \mathbf x_1 - \mathbf x_2. Then the Wigner function can be written as follows:
f^{W}(\mathbf R, \mathbf p, T) = -i\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\, G^{-+}(\mathbf R, T; \mathbf p, \omega); \qquad (3.147)

G^{-+}(\mathbf R, T; \mathbf p, \omega) = \int d^{3}\xi\, d\tau\, e^{i\omega\tau - i\mathbf p\cdot\boldsymbol\xi}\, G^{-+}\!\left(\mathbf R + \frac{\boldsymbol\xi}{2},\, T + \frac{\tau}{2};\, \mathbf R - \frac{\boldsymbol\xi}{2},\, T - \frac{\tau}{2}\right). \qquad (3.148)
After subtracting the first line from the second and noticing that

\left[G_0^{-1}(2)\right]^{*} - G_0^{-1}(1) = -i\left(\frac{\partial}{\partial T} - \frac{i}{m}\nabla_{\mathbf R}\cdot\nabla_{\boldsymbol\xi}\right), \qquad (3.150)
This is the quantum kinetic equation. Its right-hand side in the quasiclassical limit must yield the (quasi)classical collision integral, \mathrm{St}\left[f^{W}(\mathbf R, \mathbf p, T)\right]. Generally, there
appears one more term, which is responsible for the renormalization of the energy
spectrum of the quasiparticles and can be merged with the dynamic left-hand side
of (3.151). This is consistent with the fact that the imaginary part of the self energy
(entering the right-hand side of (3.151)) determines the lifetimes of the quasiparticles
⁶ In the most general case, we could introduce the distribution function of all four conjugated variables, f(\mathbf R, \mathbf p, T, \varepsilon), which in the presence of the external potential obeys the equation

\left[\frac{\partial}{\partial T} + \frac{\mathbf p}{m}\cdot\nabla_{\mathbf R} - \nabla_{\mathbf R} U(\mathbf R, T)\cdot\nabla_{\mathbf p} + \frac{\partial U(\mathbf R, T)}{\partial T}\,\frac{\partial}{\partial\varepsilon}\right] f(\mathbf R, \mathbf p, T, \varepsilon) = I[f(\mathbf R, \mathbf p, T, \varepsilon)].
(in our case, through the collision integral governing the in- and outscattering rates),
while its real part changes their dispersion law (thus modifying the dynamical part
of the kinetic equation).
But in the quasiclassical limit the corrections to the dispersion law are negligible,
and only the collision integral survives. To show this, we take into account that then
we can write
\int d^{4}X_{3}\,\Sigma(1, 3)\,G(3, 2) = \int d^{4}X_{3}\,\Sigma\!\left(\frac{X_1 + X_3}{2} + \frac{X_1 - X_3}{2},\ \frac{X_1 + X_3}{2} - \frac{X_1 - X_3}{2}\right) G\!\left(\frac{X_3 + X_2}{2} + \frac{X_3 - X_2}{2},\ \frac{X_3 + X_2}{2} - \frac{X_3 - X_2}{2}\right)

\approx \int d^{4}X_{3}\,\Sigma\left(X + (X_1 - X_3)/2,\ X - (X_1 - X_3)/2\right)\, G\left(X + (X_3 - X_2)/2,\ X - (X_3 - X_2)/2\right).
\mathrm{St}\, f^{W}(\mathbf R, \mathbf p, T) = -\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\left[\Sigma^{-+}(\mathbf R, T; \mathbf p, \omega)\, G^{+-}(\mathbf R, T; \mathbf p, \omega) + \Sigma^{+-}(\mathbf R, T; \mathbf p, \omega)\, G^{-+}(\mathbf R, T; \mathbf p, \omega)\right]. \qquad (3.152)
In the quasiclassical limit, due to the slow variation of the distribution function, we can substitute into the previous expression the values of the Green's functions for the uniform stationary case [Eqs. (3.117)–(3.123)], changing there n_{\mathbf p} to f^{W}(\mathbf R, \mathbf p, T):

\mathrm{St}\, f^{W}(\mathbf R, \mathbf p, T) = i\,\Sigma^{-+}(\mathbf R, T; \mathbf p, \varepsilon_{\mathbf p} - \mu)\left[1 - f^{W}(\mathbf R, \mathbf p, T)\right] + i\,\Sigma^{+-}(\mathbf R, T; \mathbf p, \varepsilon_{\mathbf p} - \mu)\, f^{W}(\mathbf R, \mathbf p, T). \qquad (3.153)
The specific form of the collision integral is determined by the interaction, which
enters the self energy functions.
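The in/out structure of (3.153) already fixes the stationary occupation. In the toy sketch below the rates are assumed constant (they are not derived from any specific phonon self energy): writing St f = \Gamma_{in}(1 - f) - \Gamma_{out} f, the stationary solution is f^* = \Gamma_{in}/(\Gamma_{in} + \Gamma_{out}), and if the rates obey the detailed-balance ratio \Gamma_{in}/\Gamma_{out} = e^{-\varepsilon/T}, then f^* is exactly the Fermi function, so the collision integral vanishes in equilibrium:

```python
import numpy as np

def collision_integral(f, gamma_in, gamma_out):
    # St f = (in-scattering) * (1 - f) - (out-scattering) * f:
    # the structure of Eq. (3.153), with toy constant rates standing in
    # for the self energy combinations
    return gamma_in * (1.0 - f) - gamma_out * f

eps, T = 0.7, 1.0
gamma_out = 1.0
gamma_in = gamma_out * np.exp(-eps / T)    # detailed balance (assumed)

f_star = gamma_in / (gamma_in + gamma_out)
f_fermi = 1.0 / (np.exp(eps / T) + 1.0)

print(abs(f_star - f_fermi) < 1e-12)                                   # True
print(abs(collision_integral(f_fermi, gamma_in, gamma_out)) < 1e-12)   # True
```

In the full theory the detailed-balance relation between \Sigma^{-+} and \Sigma^{+-} is supplied by the equilibrium phonon occupation numbers entering (3.186).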
As an example of how the above formalism can be applied, we discuss here the quantum conductivity of quantum point contacts (QPC). Point contacts are contacts between two conductors whose dimension is much less than the inelastic scattering length of the carriers, l_i (Fig. 3.9). The size of a quantum point contact, moreover, is of the order of the Fermi wavelength or smaller.
In the last 10 years, quantum point contacts were realized in the high-mobility two-dimensional electron gas (2DEG) of semiconductor heterostructures by "squeezing" it from under the gate electrodes by an applied negative voltage. In this case, there are two benefits: the size of the contact can be changed at will, and (because \lambda_F in a 2DEG is large, about 400 Å) they really are quantum point contacts. On the other hand, three-dimensional metallic QPCs of atomic size are now being created using a variety of experimental techniques.
We begin with the three-dimensional case, following the analysis of [15]. The
model of such a contact is presented in Fig. 3.10: it is the round opening with radius
a in a thin planar dielectric barrier separating two conducting half-spaces.
The matrix Green’s function in this system satisfies the Keldysh equation (3.143),
where the electron–phonon interaction enters the self energy operator. (We consider
electron–phonon interactions as the only interactions present.)
The geometry of the system imposes specific boundary conditions. Far from the
contact, the electron gas of either bank does not feel the presence of the contact at
all and is in equilibrium, so that the Wigner distribution function of the electrons (related to the (-+)-component of the Keldysh matrix Green's function by (3.147)) must satisfy the boundary conditions
\lim_{r\to\infty} f^{W}(\mathbf r, \mathbf p) = n_{\mathbf p, \tau}, \qquad \tau = \begin{cases} 1, & z > 0, \\ 2, & z < 0, \end{cases} \qquad (3.154)

where

n_{\mathbf p, \tau} = \frac{1}{\exp\frac{\varepsilon_{\mathbf p} - \mu_\tau}{T} + 1} \qquad (3.155)
is the equilibrium distribution. The chemical potentials in the banks of the contact
are biased by the driving voltage
3.6 Application: Electrical Conductivity of Quantum Point Contacts
\mu_1 - \mu_2 = eV. \qquad (3.156)
The impenetrability of the dielectric barrier for the electrons is taken into
account by setting
Here v = p/m.
with momentum \mathbf k, impinging on the point contact from the right (\tau = 1) and from the left (\tau = 2). They are the solutions of the Schrödinger equation, where we take k_z > 0, and \mathbf k_R is the wave vector with the z-component opposite to that of \mathbf k.⁷
In the (\mathbf r, \omega)-representation the (unperturbed) retarded and advanced Green's functions can be expressed as

\hat G_0(1, 2) = i\sum_{\tau=1}^{2}\int\frac{d^{3}k}{(2\pi)^{3}}\,\alpha(k_z)\,\hat n_{\mathbf k\tau}(t_1 - t_2)\, e^{-i\varepsilon_{\mathbf k}(t_1 - t_2)}\,\chi_{\mathbf k\tau}(\mathbf r_1)\,\chi^{*}_{\mathbf k\tau}(\mathbf r_2). \qquad (3.164)
Substituting (3.164) into the relation (3.158) and carrying out the integration over
any surface enveloping the point contact from the right (z > 0) or the left (z < 0),
we obtain the following expression for the total current in the elastic limit:
⁷ We can neglect the effect of the electric field in our considerations due to the fact that the potential drop in the vicinity of the contact is (a/r_D) times smaller than the total potential drop, V. Here r_D is the screening length, which is large enough in semiconductors to provide the condition r_D \gg a.
I(V) = \frac{2e}{m}\int\frac{d^{3}k}{(2\pi)^{3}}\,\alpha(k_z)\,(n_{\mathbf k 2} - n_{\mathbf k 1})\,\mathrm{Im}\int_{S_1} d\mathbf S\cdot \chi^{*}_{\mathbf k 2}\nabla\chi_{\mathbf k 1}. \qquad (3.166)
In the limit k_F a \gg 1, far from the contact the wave function has the asymptotic form (for example, for \tau = 2)

\chi_{\mathbf k 2}(\mathbf r) = -\frac{i a\, e^{ikr}}{2qr}\left(k_z + \frac{k z}{r}\right) J_1(qa), \qquad z > 0, \qquad (3.168)

where q = \left|\mathbf k_{\parallel} - k\,\mathbf r_{\parallel}/r\right|; here \mathbf k_{\parallel} is the component of the vector \mathbf k parallel to the dielectric barrier, and J_1(x) is a Bessel function.
The contact current is thus given by
I(V) = I_1(V) - I_2(V), \qquad (3.169)

I_\tau(V) = I_\tau^{(0)}\left[1 - \frac{(k_{F\tau}\, a)^{-9/2}}{64\sqrt{2\pi}}\cos\left(4 k_{F\tau}\, a - \frac{\pi}{4}\right)\right]. \qquad (3.170)

Here

I_\tau^{(0)} = \frac{|e|\, m\, a^{2}\,\mu_\tau^{2}}{4\pi^{3}} \qquad (3.171)
is the point contact current in the classical limit. In the linear response regime the latter
formula yields the expression for the classical point contact resistance (Sharvin’s
resistance) in the form
R_0^{-1} = \frac{e^{2}\, S\, S_F}{h^{3}}, \qquad (3.172)

where S = \pi a^{2} and S_F = 4\pi p_F^{2} are the areas of the contact and of the Fermi surface, respectively. Equations (3.169), (3.170) show that in the limit k_F a \gg 1, only small corrections to this resistance appear, oscillating with k_F (that is, with the driving voltage).
By standard methods of Green’s functions for the classical wave equation, described,
e.g., in Chap. 7 of [8], it can be expressed through the values of the wave function in
the opening, χk (x, 0):
J_k = \frac{e k^{2}}{m^{*}}\int_{-d/2}^{d/2} dx \int_{-d/2}^{d/2} dx'\,\frac{\chi_k(x, 0)\,\chi_k^{*}(x', 0)}{k(x' - x)}\, J_1\!\left(k(x - x')\right). \qquad (3.174)
Here J_1(z) is the Bessel function. In the quasiclassical limit we can take \chi_k(x, 0) \approx e^{i k_x x}, and find

I(V) = G(d)\, V = \frac{e^{2} k_F d}{\pi^{2}}\, V\left[1 + \frac{\sin(2 k_F d - \pi/4)}{2\pi k_F d}\right]. \qquad (3.175)
We again found an analogue of the Sharvin resistance, R_{0,2D}^{-1} = \frac{e^{2} k_F d}{\pi^{2}}, plus small oscillating corrections. As we will see shortly, this is not a purely academic exercise.
Now a natural question arises: What is Sharvin resistance? The boundary conditions (3.154) may seem self-evident, but they imply that all electrons impinging on the contact have the equilibrium distribution, that is, are completely thermalized (and thus have no "memory" of their previous history). This thermalization is vital for the theory and can be achieved only if there is sufficiently strong inelastic scattering in the system. Its details are, though, irrelevant: as long as l_i exceeds the contact size, the dissipation rate is determined by the Sharvin resistance.
The point contact is thus a very fine example of the Landauer formalism, a powerful tool in the transport theory of small systems. Landauer considered a one-dimensional (1D) wire containing a scatterer, with quantum-mechanical transmission and reflection amplitudes t and r respectively, |t|^{2} + |r|^{2} = 1, and asked: What is its electrical resistance? To answer this question, he connected to this wire two equilibrium electron reservoirs (that is, systems containing vast numbers of electrons at equilibrium, with effective energy and momentum relaxation mechanisms) at differing chemical potentials, \mu_1 - \mu_2 = eV (Fig. 3.9). Assuming that once it leaves the wire for a reservoir, an electron never returns (or, more exactly, immediately thermalizes), which is an exact analogue of our boundary condition (3.154), it is easy to write down the current (since it is the same everywhere in the wire, we can calculate it to the right of the scatterer):
I(V) = 2e\int\frac{dk}{2\pi}\, v(k)\left[n_1(k)\,|t|^{2} - n_2(k)\left(1 - |r|^{2}\right)\right]

= \frac{2e}{2\pi}\,|t|^{2}\int d\varepsilon\,\frac{1}{v(\varepsilon)}\, v(\varepsilon)\left[n_F(\varepsilon - \mu - eV) - n_F(\varepsilon - \mu)\right]

\approx \frac{2e^{2}}{h}\,|t|^{2}\, V. \qquad (3.176)
G \equiv \frac{1}{R} = \frac{2e^{2}}{h}\sum_{a=1}^{N_\perp} |t_a|^{2}. \qquad (3.177)
The quantum resistance unit h/(2e^{2}) (which is, evidently, 137\pi/3\times 10^{-10} s/cm, or approximately 13 k\Omega in more convenient units) is the same as appears, e.g., in the quantum Hall effect. The sum is taken over the so-called quantum channels. It is proper to say that 2e^{2}/h is the conductance of an ideal quantum channel.
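The numbers quoted here are easy to reproduce from (3.177). A sketch in SI units (the channel transparencies are made-up values):

```python
# Landauer conductance G = (2 e^2 / h) * sum_a |t_a|^2, Eq. (3.177)
h = 6.62607015e-34   # Planck constant, J s (exact SI value)
e = 1.602176634e-19  # elementary charge, C (exact SI value)

R_quantum = h / (2 * e**2)
print(round(R_quantum))            # 12906 ohm: "approximately 13 kOhm"

transparencies = [1.0, 1.0, 0.6]   # hypothetical channel set
G = (2 * e**2 / h) * sum(transparencies)
print(1 / G / R_quantum)           # total resistance in units of h/(2e^2)
```

With all transparencies equal to 1 the conductance is an integer multiple of 2e^2/h, which is the quantization discussed below.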
How is this related to Sharvin resistance? We had previously

R_{0,3D}^{-1} = \frac{e^{2}\,\pi a^{2}\cdot 4\pi p_F^{2}}{h^{3}} = \frac{2e^{2}}{h}\cdot\frac{1}{2}\left(\frac{2\pi a}{\lambda_F}\right)^{2};

R_{0,2D}^{-1} = \frac{e^{2}}{\pi}\,\frac{k_F d}{\pi} = \frac{2e^{2}}{h}\,\frac{2d}{\lambda_F}.
We see that in both cases the conductance is really given by 2e^{2}/h times approximately N_\perp \approx (a/\lambda_F)^{\mathrm{dim}}, where a is the size of the opening and dim its dimensionality. (There is no additional scattering, so |t|^{2} = 1.) In this case the number of channels tells us how many electrons at the Fermi surface (because they are the ones carrying the current) can squeeze through the opening simultaneously, given the uncertainty relation, which keeps them \approx\lambda_F apart. But thus defined, the number of channels is fuzzy: the conductance, as we see, changes continuously with the size of the opening.
The situation changes dramatically if instead of an opening in an infinitely thin wall, we consider a long channel that smoothly enters the reservoirs. This system is more like the quantum wire of the Landauer formula, and we should expect that as we change the width of the wire, the number of modes changes by 1, so that the conductance is quantized in units of 2e^{2}/h. This quantization was indeed observed in 2D quantum point contacts [19, 20], which we have mentioned above. The beauty of the thing is that in this system the size of the contact can be continuously changed by changing the gate voltage, and it turns out that the conductance changes in 2e^{2}/h steps. (Unfortunately, the accuracy of these steps is far inferior to those of the quantum Hall effect.)
To understand this, let us return to (3.167). The partial current J_k is the current carried through the opening by a particle incident from infinity. In a smooth (adiabatic) channel (with diameter d(z) slowly varying with the longitudinal coordinate z) we can approximately present such a wave function in factorized form, \chi_{k,a}(\boldsymbol\rho, z) = \kappa_a(\boldsymbol\rho; z)\, e^{ikz}, where now k is the longitudinal momentum, \boldsymbol\rho is the transverse coordinate, and a labels the transverse eigenfunctions (which in the 2D case can be, e.g., \sim \sin(\pi a\rho/d(z))). The channel is effectively presented as a set of 1D "subbands", playing the role of Landauer quantum wires. It is almost self-evident that only if the energy of transverse motion in the narrowest part of the channel, \pi^{2}a^{2}/(2m^{*}d_{\min}^{2}), is less than the Fermi energy can the corresponding a-th mode participate in conductivity. Otherwise the particle will be reflected back to the initial reservoir. Here indeed, the number of conducting modes is determined by how many of them can squeeze through the narrowest part of the channel [14].
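The mode counting can be sketched in a few lines: in a 2D adiabatic channel the a-th subband conducts when \pi^{2}a^{2}/(2m^{*}d_{\min}^{2}) < E_F = k_F^{2}/(2m^{*}), i.e., when a < k_F d_{\min}/\pi, and the conductance is 2e^{2}/h times this integer count (perfect transmission of the open modes is assumed in this sketch):

```python
import numpy as np

def open_modes_2d(d_min, k_F):
    """Number of conducting transverse modes of an adiabatic 2D channel:
    modes with pi^2 a^2 / (2 m* d_min^2) < E_F = k_F^2 / (2 m*),
    i.e. a < k_F d_min / pi  (hbar = 1)."""
    return int(np.floor(k_F * d_min / np.pi))

k_F = 1.0
for d in np.linspace(0.5, 20.0, 8):   # sweep the constriction width
    n = open_modes_2d(d, k_F)
    print(f"d = {d:5.2f}  ->  G = {n} x 2e^2/h")
```

As d_min grows, the printed conductance increases in unit steps: the staircase observed in the 2DEG experiments, with step positions set by 2d_{\min}/\lambda_F.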
This picture should naturally hold in the three-dimensional case as well, with the only complication that now multiple steps, n \times 2e^{2}/h, may occur due to accidental degeneracy of different transverse modes [11]. Such conductance quantization was observed in 3D metal point contacts: mechanically controllable break junctions [16] and scanning tunneling microscopy (STM) devices [18].
For further reading on the Landauer formalism and transport in point contacts and
other mesoscopic devices I refer you to [3, 4, 9], and references therein.
Now let us return to the interactions. As we have said above, generally the right-hand side of the quantum kinetic equation (3.151) yields not only the collision integral, but also the renormalization effects. Therefore, we will call its right-hand side the "collision integral":

I_{ph}(\mathbf r, \mathbf p) = \int d^{3}\xi\, e^{-i\mathbf p\cdot\boldsymbol\xi}\, I_{ph}(\mathbf r + \boldsymbol\xi/2,\, t;\ \mathbf r - \boldsymbol\xi/2,\, t), \qquad (3.178)

where

I_{ph}(1, 2) = -\int d^{4}X_{3}\left[\Sigma^{-+}(1, 3)\, G^{++}(3, 2) + \Sigma^{--}(1, 3)\, G^{-+}(3, 2) + \ldots\right]
In the lowest order in the electron–phonon interaction the self energy components are given by the expressions (see Table 3.2)

\hat\Sigma = \text{(the first-order phonon-exchange diagram)} \qquad (3.180)

\Sigma^{-+}(1, 3) = -i\, G_0^{-+}(1, 3)\, D_0^{+-}(3, 1);

\Sigma^{+-}(1, 3) = -i\, G_0^{+-}(1, 3)\, D_0^{-+}(3, 1);

\Sigma^{--}(1, 3) = i\, G_0^{--}(1, 3)\, D_0^{--}(3, 1);

\Sigma^{++}(1, 3) = i\, G_0^{++}(1, 3)\, D_0^{++}(3, 1). \qquad (3.181)
With the use of Eqs. (3.164), (3.183) we find the following expressions for the self energy components (to save space, we will from now on denote the combination (\mathbf k, \tau), k_z > 0, by K):
\Sigma^{--}(\mathbf r_1, \mathbf r_3; \omega) = \sum_{\mathbf q K} \chi_K(\mathbf r_1)\,\chi^{*}_K(\mathbf r_3) \left\{ \phi_{\mathbf q}(\mathbf r_3)\,\phi^{*}_{\mathbf q}(\mathbf r_1)\left[\frac{n_K (N_{\mathbf q} + 1)}{\omega + \omega_{\mathbf q} - \varepsilon_K - i0} + \frac{(1 - n_K) N_{\mathbf q}}{\omega + \omega_{\mathbf q} - \varepsilon_K + i0}\right] + \phi^{*}_{\mathbf q}(\mathbf r_3)\,\phi_{\mathbf q}(\mathbf r_1)\left[\frac{n_K N_{\mathbf q}}{\omega - \omega_{\mathbf q} - \varepsilon_K - i0} + \frac{(1 - n_K)(N_{\mathbf q} + 1)}{\omega - \omega_{\mathbf q} - \varepsilon_K + i0}\right]\right\}; \qquad (3.185)

\Sigma^{++}(\mathbf r_1, \mathbf r_3; \omega) = -\sum_{\mathbf q K} \chi_K(\mathbf r_1)\,\chi^{*}_K(\mathbf r_3) \left\{ \phi_{\mathbf q}(\mathbf r_3)\,\phi^{*}_{\mathbf q}(\mathbf r_1)\left[\frac{n_K (N_{\mathbf q} + 1)}{\omega + \omega_{\mathbf q} - \varepsilon_K + i0} + \frac{(1 - n_K) N_{\mathbf q}}{\omega + \omega_{\mathbf q} - \varepsilon_K - i0}\right] + \phi^{*}_{\mathbf q}(\mathbf r_3)\,\phi_{\mathbf q}(\mathbf r_1)\left[\frac{n_K N_{\mathbf q}}{\omega - \omega_{\mathbf q} - \varepsilon_K + i0} + \frac{(1 - n_K)(N_{\mathbf q} + 1)}{\omega - \omega_{\mathbf q} - \varepsilon_K - i0}\right]\right\};

\Sigma^{-+}(\mathbf r_1, \mathbf r_3; \omega) = -2\pi i \sum_{\mathbf q K} \chi_K(\mathbf r_1)\,\chi^{*}_K(\mathbf r_3)\left[\phi_{\mathbf q}(\mathbf r_3)\,\phi^{*}_{\mathbf q}(\mathbf r_1)\, n_K (N_{\mathbf q} + 1)\,\delta(\omega + \omega_{\mathbf q} - \varepsilon_K) + \phi^{*}_{\mathbf q}(\mathbf r_3)\,\phi_{\mathbf q}(\mathbf r_1)\, n_K N_{\mathbf q}\,\delta(\omega - \omega_{\mathbf q} - \varepsilon_K)\right]; \qquad (3.186)

\Sigma^{+-}(\mathbf r_1, \mathbf r_3; \omega) = -2\pi i \sum_{\mathbf q K} \chi_K(\mathbf r_1)\,\chi^{*}_K(\mathbf r_3)\left[\phi_{\mathbf q}(\mathbf r_3)\,\phi^{*}_{\mathbf q}(\mathbf r_1)\,(n_K - 1) N_{\mathbf q}\,\delta(\omega + \omega_{\mathbf q} - \varepsilon_K) + \phi^{*}_{\mathbf q}(\mathbf r_3)\,\phi_{\mathbf q}(\mathbf r_1)\,(n_K - 1)(N_{\mathbf q} + 1)\,\delta(\omega - \omega_{\mathbf q} - \varepsilon_K)\right].

(Here \phi_{\mathbf q}(\mathbf r) denotes the coordinate part of the electron–phonon vertex for the phonon mode \mathbf q.)
Substituting these expressions into (3.182) and (3.178), and gathering like terms, we finally obtain the following expression for the electron–phonon collision integral:

I_{ph}(\mathbf r, \mathbf p) = -2\sum_{K, K'}\sum_{\mathbf q} C^{\mathbf q}_{K'K}\, S^{\mathbf q}_{K'K}(\mathbf r, \mathbf p)\,\frac{n_K (1 - n_{K'})(N_{\mathbf q} + 1) - n_{K'}(1 - n_K) N_{\mathbf q}}{\varepsilon_K - \varepsilon_{K'} - \omega_{\mathbf q} - i0}. \qquad (3.187)
The coefficient C^{\mathbf q}_{K'K} in the uniform case would yield the momentum conservation law, C^{\mathbf q}_{K'K} \propto \delta(\mathbf k - \mathbf k' - \mathbf q). The function S^{\mathbf q}_{K'K}(\mathbf r, \mathbf p) is defined as

S^{\mathbf q}_{K'K}(\mathbf r, \mathbf p) = \int d^{3}\xi\, e^{-i\mathbf p\cdot\boldsymbol\xi}\, \chi_{K'}(\mathbf r + \boldsymbol\xi/2)\,\chi^{*}_{K}(\mathbf r - \boldsymbol\xi/2)\left[\phi_{\mathbf q}(\mathbf r + \boldsymbol\xi/2) - \phi_{\mathbf q}(\mathbf r - \boldsymbol\xi/2)\right]. \qquad (3.189)
It is clearly seen that Eq. (3.187) yields two different terms. One contains the energy-conserving delta functions \delta(\varepsilon_K - \varepsilon_{K'} - \omega_{\mathbf q}) multiplied by the usual statistical in–out factors for electron scattering with emission or absorption of phonons. The other (expressed through principal value integrals) is responsible for the spectrum renormalization.
as follows:
Then the solution to the boundary value problem (3.190), (3.192) has the form
f^{W}(\mathbf r, \mathbf p) = \int d^{3}p'\, d^{3}r'\, g_{\mathbf p\mathbf p'}(\mathbf r, \mathbf r')\, I_{ph}(\mathbf r', \mathbf p'), \qquad (3.193)

where the function g_{\mathbf p\mathbf p'}(\mathbf r, \mathbf r') = -g_{\mathbf p'\mathbf p}(\mathbf r', \mathbf r) satisfies the linear equation
This allows us to express the inelastic correction to the point contact current as
follows (we used Eq. (3.159)):
\delta I_{ph} = \frac{2e}{(2\pi)^{3}}\int d^{3}p \int d^{3}r\, F_{\mathbf p}(\mathbf r)\, I_{ph}(\mathbf r, \mathbf p). \qquad (3.195)
(the integration is taken over the opening O), and F_{\mathbf p}(\mathbf r) is the solution of the following boundary value problem:
F_{\mathbf p}(\mathbf r) is the quantum analogue of the classical probability for an electron moving from infinity with momentum \mathbf p to be found at the point \mathbf r to the right of the contact.
The above results allow us to explain the experimentally observed nonlinear current–voltage dependence, I(V), in point contacts, the origins of the nonlinearity being (1) the electron–phonon interaction and (2) the renormalization of the electron mass. It turns out that the peaks of the function d^{2}V/dI^{2}(eV) are situated at the maxima of the phonon density of states. Qualitatively this is understandable: on the one hand, the distribution function of the electrons injected through the point contact is shifted by eV with respect to the surrounding electrons; on the other hand, its relaxation to the distribution function of the surroundings will be accompanied by the emission of phonons with energy \omega = eV.
This effect [17, 21] is the basis of point contact spectroscopy, the method of restoring the phonon density of states from the nonlinear current–voltage characteristics of a point contact (for a review see [5]). It was first developed in metals, where, both the screening length and the Fermi wavelength being very small, the quasiclassical theory is already sufficient.
H_R = \sum_{\mathbf q, \tau} \varepsilon_{\mathbf q, \tau}\, d^{\dagger}_{\mathbf q, \tau}\, d_{\mathbf q, \tau}; \qquad (3.202)

H_T = \sum_{\mathbf k, \mathbf q, \tau}\left[T_{\mathbf k\mathbf q}\, c^{\dagger}_{\mathbf k, \tau}\, d_{\mathbf q, \tau} + T^{*}_{\mathbf k\mathbf q}\, d^{\dagger}_{\mathbf q, \tau}\, c_{\mathbf k, \tau}\right]. \qquad (3.203)
As you see, H L ,R are written as though the banks of the contact were infinite,
and as though quasiparticle states could be characterized by their momenta (which
in reality are not good quantum numbers).
The tunneling matrix element in the case of time reversal symmetry satisfies the
relation
T^{*}_{\mathbf k\mathbf q} = T_{-\mathbf k, -\mathbf q}, \qquad (3.204)
but the detailed behavior of T_{\mathbf k\mathbf q} is not relevant. It will suffice to note that the component of the momentum parallel to the interface is conserved, and that in many cases
the energy dependence of the tunneling matrix element can be neglected in a rather
wide interval around the Fermi energy.
If we apply the voltage V to the left bank, the Hamiltonian will acquire the form
and it commutes with the corresponding unperturbed Hamiltonian. The only term
changing particle numbers in each separate bank is, of course, HT .
The Heisenberg equation of motion for the electron annihilation operator in the
left bank for nonzero bias, c̃(t), is then
I(V, t) = 2|e|\,\mathrm{Im}\sum_{\mathbf k\mathbf q\tau} T_{\mathbf k\mathbf q}\,\left\langle \tilde c^{\dagger}_{\mathbf k\tau}(t)\, d_{\mathbf q\tau}(t)\right\rangle. \qquad (3.209)
We have thus reduced the problem of calculating the current to one of calculating an
average of four field operators over a nonequilibrium quantum state—which, as we
know well, can be expressed as an infinite series in the perturbation, HT . Here we
will use the Keldysh formalism and consider the simplest case of a constant tunneling
3.7 Method of Tunneling Hamiltonian
Fig. 3.11 “Left–right” Green’s function in the Keldysh formalism for the tunneling contact
matrix element, T_{\mathbf k\mathbf q} \equiv T.
We have introduced Keldysh Green’s functions, mixing the states in different banks
of the contact:
\hat F_{\mathbf k\mathbf q}(t) = \begin{pmatrix} F^{--}_{\mathbf k\mathbf q}(t) & F^{-+}_{\mathbf k\mathbf q}(t) \\ F^{+-}_{\mathbf k\mathbf q}(t) & F^{++}_{\mathbf k\mathbf q}(t) \end{pmatrix};

F^{--}_{\mathbf k\mathbf q}(t) = \frac{1}{i}\left\langle T\, \tilde c_{\mathbf k}(t)\, d^{\dagger}_{\mathbf q}(0)\right\rangle;

F^{++}_{\mathbf k\mathbf q}(t) = \frac{1}{i}\left\langle \tilde T\, \tilde c_{\mathbf k}(t)\, d^{\dagger}_{\mathbf q}(0)\right\rangle; \qquad (3.211)

F^{-+}_{\mathbf k\mathbf q}(t) = -\frac{1}{i}\left\langle d^{\dagger}_{\mathbf q}(0)\, \tilde c_{\mathbf k}(t)\right\rangle;

F^{+-}_{\mathbf k\mathbf q}(t) = \frac{1}{i}\left\langle \tilde c_{\mathbf k}(t)\, d^{\dagger}_{\mathbf q}(0)\right\rangle.
The diagram series for \hat F_{\mathbf k\mathbf q}(\omega) is shown in Fig. 3.11 (here solid and broken lines correspond to the unperturbed Keldysh matrices in the left and right banks, \hat G_{l,r}).⁸ Due to T being momentum independent, we can easily integrate over internal momenta. For example,
where
(3.214)
and
\hat\zeta(\omega, \omega_0) = \hat g_r(\omega)\cdot\hat\tau_3\cdot\hat g_l(\omega - \omega_0)\cdot\hat\tau_3.
Now we are left only with independent components of the Keldysh matrix:
\hat g = \begin{pmatrix} g^{--} & g^{-+} \\ g^{+-} & g^{++} \end{pmatrix} \to \check g = \begin{pmatrix} 0 & g^{A} \\ g^{R} & g^{K} \end{pmatrix}. \qquad (3.215)
\mathcal{P}\sum_{\mathbf k}\frac{1}{\omega - \xi_{\mathbf k}} = \mathcal{P}\int_{-\infty}^{\infty} d\xi\,\frac{N(\xi)}{\omega - \xi} \approx N(\omega)\,\mathcal{P}\int\frac{d\xi}{\omega - \xi} = 0.

Therefore,

g^{R,A}(\omega) \approx \mp i\pi\sum_{\mathbf k}\delta(\omega - \xi_{\mathbf k}) \equiv \mp i\pi N(\omega), \qquad (3.216)
After performing the inverse rotation, we find the expression for the normal tunneling current in the first nonvanishing order:
I^{(1)}(V) = 4|e|\, T^{2}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\pi N_l(\omega - \omega_0)\cdot \pi N_r(\omega)\,\left[n_F(\omega) - n_F(\omega - \omega_0)\right]

= V\cdot\frac{e^{2}}{\pi}\cdot\left(2\pi T N_l(0)\right)\left(2\pi T N_r(0)\right), \qquad (3.218)

and the effective barrier transparency in the sense of the Landauer formula is

T_{\mathrm{eff}} = \left(2\pi T N_l(0)\right)\left(2\pi T N_r(0)\right) \approx \left(2\pi N(0)\, T\right)^{2}. \qquad (3.220)
The evaluation of \hat\zeta leads to the following correction: the integrand of the expression for I(V) (3.218) acquires the factor

\left|1 - T^{2}\, g^{A}_{r}(\omega - \omega_0)\, g^{A}_{l}(\omega)\right|^{-2} \approx \left[1 + \pi^{2} T^{2} N_l(0) N_r(0)\right]^{-2},

so that

I(V) = V\cdot\frac{e^{2}}{\pi}\cdot\frac{T_{\mathrm{eff}}}{\left(1 + T_{\mathrm{eff}}/4\right)^{2}}. \qquad (3.221)
This result was first obtained in [13] using the Matsubara formalism. Note that the actual small parameter of the problem is not T, but rather T_{\mathrm{eff}}/4.
A rather disturbing property of the above result is that it agrees with the Landauer formula only in the lowest order; moreover, for large T the conductance tends to zero as the transparency grows! Nevertheless, neither of the approaches is at fault. Indeed, the
tenet of the Landauer formalism is that once it leaves the contact, the particle never comes back (or immediately loses its phase memory, which is the same). On the other hand, from the first look at the diagram series that we drew, it becomes clear that the higher-order terms represent exactly these processes of multiple coherent returns. Then the Landauer transparency should be defined as

T_{\mathrm{Landauer}} = \frac{T_{\mathrm{eff}}}{\left(1 + T_{\mathrm{eff}}/4\right)^{2}}; \qquad (3.222)

that is all: after many happy returns to the contact, an electron finally goes to infinity, and the Landauer approach becomes legitimate.
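The reinterpretation (3.222) is easy to explore numerically: T_Landauer reaches its maximum value of 1 at T_eff = 4 (where the perturbative parameter T_eff/4 equals unity) and decreases again for larger T_eff. A quick check:

```python
def t_landauer(t_eff):
    # Eq. (3.222): transparency in the Landauer sense
    return t_eff / (1.0 + t_eff / 4.0)**2

print(t_landauer(4.0))         # 1.0: a fully open channel
print(t_landauer(0.04))        # ~ 0.039: reduces to T_eff for small T_eff
print(t_landauer(1000.0))      # small again: the large-T regime
```

The nonmonotonic shape is exactly why the perturbative result (3.221) cannot be read as a Landauer transparency beyond lowest order.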
The question of why the conductance goes to zero as T grows may seem irrelevant, because the THA itself cannot possibly apply in this limit. We could, though, consider a formally equivalent case of a one-dimensional tight-binding Hamiltonian:
H_L = -t_0\sum_{i=-\infty}^{-1}\left(c^{\dagger}_{i-1} c_i + c^{\dagger}_{i} c_{i-1}\right); \qquad (3.223)

H_R = -t_0\sum_{i=1}^{\infty}\left(d^{\dagger}_{i} d_{i+1} + d^{\dagger}_{i+1} d_{i}\right); \qquad (3.224)

H_T = -T\left(c^{\dagger}_{-1} d_1 + d^{\dagger}_{1} c_{-1}\right). \qquad (3.225)
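For this tight-binding model the scattering problem can be solved exactly by matching plane waves across the weak bond. At the band center the end-site density of states of a semi-infinite chain gives \pi N(0) = 1/t_0, so T_eff = 4T^2/t_0^2, and the exact transmission coincides with T_eff/(1 + T_eff/4)^2, reaching perfect transparency at T = t_0. A numerical sketch (the matching equations are derived for this model, not quoted from the text):

```python
import numpy as np

def transmission(T, t0=1.0, k=np.pi / 2):
    """Exact |t|^2 for a 1D tight-binding chain with one weak bond T:
    match psi_n = e^{ikn} + r e^{-ikn} (n <= -1) to psi_n = t e^{ikn} (n >= 1)."""
    # The Schroedinger equation at sites -1 and +1 gives a 2x2 system for (r, t):
    #   t0 * r - T e^{ik} * t = -t0
    #   T e^{ik} * r - t0 * t = -T e^{-ik}
    A = np.array([[t0, -T * np.exp(1j * k)],
                  [T * np.exp(1j * k), -t0]])
    b = np.array([-t0, -T * np.exp(-1j * k)])
    r, t = np.linalg.solve(A, b)
    assert abs(abs(r)**2 + abs(t)**2 - 1.0) < 1e-12   # unitarity check
    return abs(t)**2

t0, T = 1.0, 0.5
T_eff = 4 * T**2 / t0**2        # using pi * N(0) = 1/t0 at the band center
print(transmission(T, t0))                  # 0.64
print(T_eff / (1 + T_eff / 4)**2)           # 0.64: agrees with Eq. (3.222)
print(transmission(1.0, 1.0))               # 1.0: maximum at T = t0
```

So the "disturbing" decrease of the conductance at large T is physical here: for T > t_0 the strong bond acts as a resonant barrier, and the electron is again mostly reflected.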
3.8 Problems
• Problem 1
Check whether the approximation for the retarded Green's function

G^R(p, ω) = Z / (ω − (ε_p − μ) − Σ(p, ω))

satisfies
(1) the Kramers–Kronig relations;
(2) the sum rule;
(3) the asymptotics at |ω| → ∞,

if the approximation for the self energy is given by:
(A) Σ(p, ω) = Σ′ − iΣ′′;
(B) Σ(p, ω) = Aω − iΣ′′;
(C) Σ(p, ω) = Aω − iBω²
(where Z, Σ′, Σ′′, A, B are positive constants).
• Problem 2
Write the analytical expression for the polarization operator at finite temperature
in the lowest order and evaluate it.
• Problem 3
Using the above result, find the long-wavelength (q → 0) screening of the Coulomb potential in the nondegenerate limit (e^{μ/T} ≪ 1). Calculate the (Debye–Hückel) screening length and compare it to the Thomas–Fermi screening length.
References
1. Abrikosov, A.A., Gorkov, L.P., Dzyaloshinski, I.E.: Methods of Quantum Field Theory in
Statistical Physics. Dover Publications, New York (1975) (Matsubara formalism)
2. Balescu, R.: Equilibrium and Nonequilibrium Statistical Mechanics. Wiley, New York (1975)
(Definition and properties of Wigner’s functions quantum distribution functions)
3. Datta, S.: Electronic Transport in Mesoscopic Systems. Cambridge University Press, Cam-
bridge (1995) (Very detailed and pedagogical presentation of transport theory in normal meso-
scopic systems)
4. Imry, Y.: Physics of mesoscopic systems. In: Grinstein, G., Mazenko, G. (eds.) Directions in
Condensed Matter Physics: Memorial Volume in Honor of Shang-Keng Ma. World Scientific,
Singapore (1986)
5. Jansen, A.G.M., van Gelder, A.P., Wyder, P.: Point-contact spectroscopy in metals. J. Phys. C
13, 6073 (1980)
6. Lifshitz, E.M., Pitaevskii, L.P.: Statistical Physics pt. II. (Landau and Lifshitz Course of The-
oretical Physics, v. IX.). Pergamon Press, Oxford (1980) (Matsubara formalism)
7. Lifshitz, E.M., Pitaevskii, L.P.: Physical Kinetics. (Landau and Lifshitz Course of Theoretical
Physics, v.X.). Pergamon Press, Oxford (1981) (Keldysh formalism)
8. Morse, P.M., Feshbach, H.: Methods of Theoretical Physics. McGraw-Hill, New York (1953)
9. Washburn, S., Webb, R.A.: Quantum transport in small disordered samples from the diffusive to
the ballistic regime. Rep. Progr. Phys. 55, 1311 (1992) (A review of theoretical and experimental
results on mesoscopic transport)
10. Rammer, J., Smith, H.: Quantum field-theoretical methods in transport theory of metals. Rev.
Mod. Phys. 58, 323 (1986)
156 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications
Articles
11. Bogachek, E.N., Zagoskin, A.M., Kulik, I.O.: Sov. J. Low Temp. Phys. 16, 796 (1990) (An
emphasis is made on the Keldysh formalism and the method of quantum kinetic equations)
12. Cuevas, J.C., Martín-Rodero, A.: Phys. Rev. B 54, 7366 (1996)
13. Genenko, Yu.A., Ivanchenko, Yu.M.: Theor. Math. Physics 69, 1056 (1986)
14. Glazman, L.I., Lesovik, G.B., Khmelnitskii, D.E., Shekhter, R.I.: JETP Lett. 48, 239 (1988)
15. Itskovich, I.F., Shekhter, R.I.: Sov. J. Low Temp. Phys. 11, 202 (1985)
16. Krans, J.M., van Ruitenbeek, J.M., Fisun, V.V., Yanson, I.K., de Jongh, L.J.: Nature 375, 767
(1995)
17. Kulik, I.O., Omelyanchuk, A.N., Shekhter, R.I.: Sov. J. Low Temp. Phys. 3, 1543 (1977)
18. Pascual, J.I., et al.: Science 267, 1793 (1995)
19. van Wees, B.J., et al.: Phys. Rev. Lett. 60, 848 (1988)
20. Wharam, D.A., et al.: J. Phys. C 21, L209 (1988)
21. Yanson, I.K.: Sov. Phys. JETP 39, 506 (1974)
22. Zagoskin, A.M., Kulik, I.O.: Sov. J. Low Temp. Phys. 16, 533 (1990)
Chapter 4
Methods of the Many-Body Theory
in Superconductivity
state. For our purposes, then, it is enough to recollect two decisive consequences of the experimental data.
The ground state of the superconductor is unusual
At temperatures below the superconducting transition temperature Tc, there exists macroscopic phase coherence of the electrons throughout the sample. You can imagine the superconductor as a single giant molecule, in the sense that its electrons are described by a wave function rather than by a density matrix (as they would be in any decent macroscopic system).
The elementary excitations in the superconductor
The elementary excitations are separated from the ground state by a finite energy gap. This means that the system opposes attempts to excite it, staying in its ground state if the perturbation is not strong enough.
These key properties can be reproduced by the theory, provided it accounts for
the following things:
(1) Degeneracy of the electron gas (exclusion principle): The existence of the Fermi
surface is essential.
(2) Attraction between electrons. This seems incredible, given the inevitable Coulomb repulsion between electrons. Nevertheless, in the previous chapter you were shown how such an attraction can arise from the electron–phonon interaction (EPI). The role of the EPI was understood by Fröhlich even before the appearance of BCS theory, and it is directly confirmed by the isotope effect.
(3) The characteristic interaction energy, i.e., the energy range of the electrons involved in the interaction (of the order of the Debye energy, ω_D), must be much less than the Fermi energy:

ω_D ≪ E_F.
If these demands are met, the normal zero-temperature ground state of the metal (the filled Fermi sphere) becomes unstable. A qualitatively different ground state appears instead, with all the strange features that we observe.
This instability of the ground state, or of the vacuum (you remember this termi-
nology), means that we can no longer use the perturbation techniques starting from
the normal ground state. As in the search for the root of a function by iteration, we
must start not too far from the root we want to find, lest we get a wrong answer, or
no answer at all. Then we have no regular way to build the new vacuum; we need an
insight.
But analogies can help to bring the insight closer. The normal ground state has no gaps; the superconducting one has. In ordinary quantum mechanics we also meet a situation where a gap appears: when a bound state exists. For an electron in an isolated atom, e.g., the gap between the level and the continuous spectrum is the ionization potential.
Note that the bound state cannot be obtained by the perturbation theory from the
propagating one. Indeed, the bound state does not carry a current. So, no matter how
weak the attractive potential is, there is a finite, qualitative difference between these
states.
“No matter how weak” is a slightly inaccurate statement, for in the three-
dimensional (3D) case too weak an attractive potential cannot create the bound
state. But in the two- and one-dimensional cases the bound state can be created by
an arbitrarily weak attractive potential.
Here the dimensionality is physically significant. We guess that the attraction
between the electrons is fairly weak. (Experimental measurements of the gap confirm
this supposition.) Thus we must have an effectively low-dimensional situation to get
a bound state.
What takes place in the one-body case? Assuming that the attractive potential is of the simplest, δ-functional, form, we write the Schrödinger equation (in units where ħ²/2m = 1) as

−∇²ψ(r) − u δ(r) ψ(r) = E ψ(r).

Expanding ψ(r) = Σ_k ψ_k e^{ikr}, we find

Σ_k (k² − E) ψ_k e^{ikr} = u ψ(0) Σ_k e^{ikr},

or

ψ_k = u ψ(0)/(k² − E) = (u/(k² − E)) Σ_{k′} ψ_{k′},

whence

1/u = Σ_k 1/(k² − E).   (4.1)
In the propagating states we have E, k² > 0; in the bound ones, E < 0. First of all, we see that for a repulsive potential (u < 0) only the propagating states are possible (the r.h.s. of (4.1) is strictly positive for negative E).
Then, for weak attraction (the case we are interested in) the l.h.s. of this equation
is a large positive number. For positive E it can be easily matched by one of the terms
in the r.h.s. sum. This demands only a slight shift of the energy levels relative to their
positions without a potential (see Fig. 4.1). But for negative E it can be matched
only by the whole sum, since none of its terms is any longer arbitrarily large. If the
attraction is infinitely weak, the excitation energy, |E|, of the possible bound state
tends to zero, and can be neglected in (4.1).
Thus, for the bound state to be created by an infinitesimal attractive potential, the sum Σ_{k²>0} k^{−2} must diverge. Here the role of dimensionality becomes absolutely clear, for

Σ_k k^{−2} ∝ ∫₀^∞ 4πk² dk · (1/k²) ∝ k |₀^∞   (3D);

Σ_k k^{−2} ∝ ∫₀^∞ 2πk dk · (1/k²) ∝ ln k |₀^∞   (2D);   (4.2)

Σ_k k^{−2} ∝ ∫₀^∞ dk · (1/k²) ∝ −(1/k) |₀^∞   (1D).
The divergence at the upper limit is an artifact of our choice of a δ-like potential; we should renormalize it, which means introducing a cutoff at some k_max ≈ 1/a, where a is the scattering length, related to the scattering cross-section σ = 4πa². The physically significant divergence, at small momenta (i.e., large distances), occurs only in the 1D and 2D cases.
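The three cases of (4.2) can be illustrated numerically. In the sketch below (ours; the midpoint-rule integrator and the cutoff values are arbitrary choices) the lower cutoff k_min plays the role of the infrared limit:

```python
# Infrared behaviour of the sum in Eq. (4.2): the phase-space integral
# of k^{d-1}/k^2 (angular constants dropped) with a lower cutoff k_min.
# The midpoint-rule integrator and the cutoffs are arbitrary choices.

def infrared_integral(d, k_min, k_max=1.0, n=100000):
    h = (k_max - k_min) / n
    return sum((k_min + (i + 0.5) * h) ** (d - 3) * h for i in range(n))

# Shrink the infrared cutoff by two orders of magnitude:
i3 = [infrared_integral(3, km) for km in (1e-2, 1e-4)]  # stays finite
i2 = [infrared_integral(2, km) for km in (1e-2, 1e-4)]  # grows like ln(1/k_min)
i1 = [infrared_integral(1, km) for km in (1e-2, 1e-4)]  # grows like 1/k_min
```

Only in 3D does the result stay essentially unchanged as the infrared cutoff is removed.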
These academic speculations become practically important as soon as we recollect
that in the metal at low temperature we have effectively a 2D situation.
Indeed, a filled Fermi sphere does not allow the electrons to be scattered inside
it. On the other hand, the energy transfer in an electron–electron collision due to our attractive potential is of the order of its energy scale, ω_D ≪ E_F. Thus the electrons under consideration are confined to an effectively 2D layer around the
Fermi surface. It was Cooper who first understood this fact, and now we shall sketch
his considerations of the famous Cooper pairing.
Consider two electrons (the "pair") above the filled Fermi sphere (at T = 0) (Fig. 4.2a). The electrons inside the sphere are taken into account only through the exclusion principle and are neglected in any other respect. We assume translation invariance of the whole system, and neglect all spin-dependent forces.
Then the momentum of the pair's center of mass, q, and its total spin, S, are constants of the motion. The pair orbital wave function is then

Ψ(r₁, r₂) = e^{iqR} α(δ),

where δ = r₁ − r₂, R = (r₁ + r₂)/2.
The problem is greatly simplified if the pair is at rest (q = 0). (It is easy to see
(Fig. 4.2b) that indeed in this case the scattering phase volume is largest.) In this case
the problem is spherically symmetric; α(δ) is an eigenfunction of the angular momentum L.
For the singlet spin state of the pair (S = 0) the orbital wave function is symmetric
(Fig. 4.3).
We can write

Ψ(r₁, r₂) = α(δ) = Σ_{k>k_F} a_k e^{ikδ} = Σ_{k>k_F} a_k e^{ikr₁} e^{−ikr₂},   (4.4)
162 4 Methods of the Many-Body Theory in Superconductivity
and see that the pair wave function is a superposition of the configurations with
occupied states (k, −k). The eigenstate of the pair with spin S = 0 can be found
from the Schrödinger equation:
(E − H₀)Ψ = V Ψ,   (4.5)

where H₀ = −(ħ²/2m)(∇²_{r₁} + ∇²_{r₂}), and V is the interaction potential. The energy is measured, as usual, from the Fermi level.
Substituting here (4.4) and taking matrix elements of both sides of the equation,
we obtain
(E − 2E_k) a_k = Σ_{k′>k_F} ⟨k, −k|V|k′, −k′⟩ a_{k′}.   (4.6)
For a separable attraction, ⟨k, −k|V|k′, −k′⟩ = λ w_k w*_{k′}, this yields

−1/λ = Σ_{k>k_F} |w_k|² / (2E_k − E).   (4.8)

Now we see that in the case of attraction (λ < 0) the bound state appears if the interaction is confined to the vicinity of the Fermi surface, i.e., if
and
|E| = 2ω_c · exp(−2/|λ|N(0)) / (1 − exp(−2/|λ|N(0))).   (4.11)
Of course, if the attraction is strong, there is a bound state with |E| ≈ ω_c|λ|N(0). But in the weak-coupling limit the bound state still exists, and by (4.11) its energy is exponentially small, |E| ≈ 2ω_c exp(−2/|λ|N(0)).
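A quick numerical illustration of Eq. (4.11) (ours; lam_N0 stands for the dimensionless coupling |λ|N(0), and the cutoff ω_c is set to unity):

```python
import math

# Eq. (4.11) with omega_c = 1: |E| = 2*x / (1 - x), x = exp(-2 / lam_N0),
# where lam_N0 stands for |lambda| N(0). Values are illustrative.

def bound_state_energy(lam_N0, omega_c=1.0):
    x = math.exp(-2.0 / lam_N0)
    return 2.0 * omega_c * x / (1.0 - x)

weak = bound_state_energy(0.2)             # weak coupling
asymptotic = 2.0 * math.exp(-2.0 / 0.2)    # weak-coupling limit of (4.11)
```

At weak coupling the full expression and its exponential asymptotics are already indistinguishable for all practical purposes.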
|v_k|² + |u_k|² = 1.   (4.13)
The fact that we use the amplitudes, not the probabilities, shows that this uncertainty
is not due to the usual scattering: we have a coherent state, preserving the quantum-
mechanical phase effects.
As usual, the price to be paid for the advantages of the many-body approach is
high. The calculations, though, could be greatly simplified if some sort of a mean
field approximation (MFA) can be applied. For this, the pair concentration must be
large enough. Let us make an estimate. The size of a Cooper pair may be defined as
⟨δ²⟩ = ∫|α(δ)|² δ² d³δ / ∫|α(δ)|² d³δ = Σ_k |∇_k a_k|² / Σ_k |a_k|² ≈ [N(0) (∂ξ/∂k)²_{ξ=0} ∫₀^∞ (∂a/∂ξ)² dξ] / [N(0) ∫₀^∞ a² dξ],   (4.14)

where ξ is the energy measured from the Fermi level, (∂ξ/∂k)_{ξ=0} = v_F, and, by (4.6),

a(ξ) = const/(E − 2ξ);   ∂a/∂ξ = const/(E − 2ξ)².

Therefore,

δ₀ ≡ ⟨δ²⟩^{1/2} ≈ √(4/3) · v_F/|E|.   (4.15)
This term contains four operators and will lead to the four-legged vertex in the dia-
gram techniques. Along with the sharp momentum dependence of the matrix element,
this is a rather unpleasant thing to deal with. The MFA immediately allows us to get
rid of two of these operators, writing instead of them the average value (and then
obtaining for it a self-consistent equation). But the usual choice in accordance with Wick's theorem, say, a†a†aa → ⟨a†a⟩a†a, does not give rise to any superconductivity: such a term describes ordinary electron scattering and has nothing to do with the pairing instability. In any case, it could have been included in other terms of the Hamiltonian and/or in the renormalization of the quasiparticle characteristics.
To keep the pairs in our approximation, we must make a crazy step and write
H_i → H_{i,MFA} = Σ_{kk′} V_{kk′} (a†_{k↑} a†_{−k↓} ⟨a_{k′↓} a_{−k′↑}⟩ + Hermitian conjugate)

= Σ_k (Δ_k a†_{k↑} a†_{−k↓} + Δ*_k a_{−k↓} a_{k↑}).   (4.16)
The craziness of this step consists in the fact that we have introduced anomalous
averages of two creation or annihilation operators, which must be zero in any state
with a fixed number of particles! But there is a method (or at least self-consistency)
in this madness. The Hamiltonian Hi,MFA also violates the law of conservation of
the particles’ number, creating and annihilating them in pairs. Then an average like
⟨aa⟩ or ⟨a†a†⟩ calculated using this Hamiltonian will not be zero, and the ends
are met. Moreover, as you certainly know, the nonzero anomalous averages are the
fundamental feature of the superconducting state. They describe the off-diagonal
long-range order (ODLRO) of this state (the concept introduced by Yang [27])—the
sort of symmetry that makes it qualitatively different from the normal state.
ODLRO is so called because the anomalous averages are related to off-diagonal
terms of the density matrix of the system (which, as you know, are responsible for the
quantum coherence phenomena). The long-range order can be understood as follows.
The pair–pair correlation function,

S_{↑↓}(r₁, r₂) = ⟨ψ†_↑(r₁) ψ†_↓(r₁) ψ_↓(r₂) ψ_↑(r₂)⟩,   (4.18)

describes the correlation between the pairs at r₁ and at r₂. When the distance between them grows, this function must factorize (because of the general principle of the extinction of correlations):

S_{↑↓}(r₁, r₂) → ⟨ψ†_↑(r₁) ψ†_↓(r₁)⟩ × ⟨ψ_↓(r₂) ψ_↑(r₂)⟩,   |r₁ − r₂| → ∞.   (4.19)
In the normal state it factorizes trivially, for the anomalous averages are zero. But in the superconducting state we get a nontrivial factorization, which means that there exists a long-range correlation between the electronic states
(macroscopic quantum coherence). The superconductor is in an ordered state. There-
fore, the pairing potential is also called the order parameter.
But if these anomalous averages are so important, then the superconductivity
should not appear in a system with a fixed number of particles. That is, the super-
conducting ground state seems to be forbidden in a closed system.
The most trivial answer (and the correct one) is that there are no isolated objects
in this Universe; the electrons are, in principle, delocalized, etc., so that two electrons
more or less don’t play any role in practice.
Another answer, also true, is that, O.K., anomalous averages are important. But let us think: what are the observable consequences of nonzero Δ, Δ*? We shall see that the superconductor is described in terms of |Δ| and arg Δ.
1 Of course, if you deal with a very small system, where one or two extra electrons significantly change the total energy, certain precautions must be taken; an example (the parity effect) will be discussed later.
This is an example of a very general and very important situation. The symmetry of the Hamiltonian is higher than the symmetry of the ground state: that is, the symmetry is spontaneously broken. The role played by the field h is just to reduce the symmetry of the Hamiltonian to that of the ground state, through the term −Mh.
The fact that below the transition temperature an infinitesimal external field leads
to finite magnetization in our calculations means that the previous ground state, with
zero M, has lost stability and spontaneously acquired a finite magnetic moment.
(Therefore we speak of a spontaneous symmetry breaking; see Fig. 4.5.)
We encounter a similar situation in a superconductor. If we introduce pair sources into the Hamiltonian, say f a†a† and f*aa, calculate ⟨a†a†⟩ and ⟨aa⟩, and then put f = f* = 0, above Tc we get zero. But below Tc we find a finite result even for zero pair sources, and it is possible to show that all the averages of observables calculated with the help of our "pair" Hamiltonian H_MFA are the same as the quasiaverages obtained by the more complicated procedure explained above. The normal state is then unstable, but real particle non-conservation is not necessary.
The principle “push the one who is falling” is never violated. Thus, any instability
will fully develop, and below Tc we never find the normal state. We find instead the
qualitatively different, superconducting one.
Conclusions
The normal ground state of the superconductor is unstable below the transition
temperature with regard to the creation of Cooper pairs of electrons (in the case
of arbitrarily weak electron–electron attraction near the Fermi surface).
The superconducting ground state is qualitatively different from the normal one
and possesses the special kind of symmetry that is revealed in the existence of nonzero
The poles of the two-particle Green's function correspond to the bound states of two quasiparticles (such as plasmons). This is true for the temperature Green's functions as well. We will show that if there exists an arbitrarily weak attraction between the quasiparticles on the Fermi surface, a pole appears in the vertex function (see Lifshitz and Pitaevskii [5]). As we know, this is equivalent to a pole in the two-particle Green's function itself.
Let us consider the temperature vertex function,
This means that the pairing occurs on the Fermi surface, with zero binding energy
(this must be so at the very transition point), and zero total momentum (this was
discussed earlier).
The pole arises due to the ladder diagrams of Fig. 4.6. (We do not have to consider
the diagrams with interchanged ends, for the pole arises in both series simultaneously.) The Bethe–Salpeter equation can thus be written as follows (Fig. 4.7):
4.2 Instability of the Normal State
In the sum and integral, only p₃ close to the Fermi surface and small Matsubara frequencies ω_s are important. Therefore, we can set |p₃| = p_F, ω_s = 0 in the arguments of both Γ and U under the summation and integration.
Now all the vector arguments lie on the Fermi sphere, and Γ, U each depend on a single variable (the angle between the corresponding momenta). Then they can be expanded in Legendre polynomials:
U(θ) = Σ_{l=0}^∞ (2l + 1) u_l P_l(cos θ);   (4.24)

Γ(θ) = Σ_{l=0}^∞ (2l + 1) γ_l P_l(cos θ).   (4.25)

Then for the coefficients one finds

γ_l = −u_l / (1 + u_l Π).   (4.27)
In this expression

Π = (1/β) Σ_s ∫ d³p₃/(2π)³ G⁰(p₃, ω_s) G⁰(−p₃, −ω_s) = (1/β) Σ_s ∫ d³p₃/(2π)³ · 1/(ω_s² + ξ_{p₃}²).   (4.28)

The sum quickly converges. Using the formula for summation over fermionic Matsubara frequencies, we find that

Π = ∫ d³p/(2π)³ · tanh(βξ_p/2) / (2ξ_p).   (4.29)
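The Matsubara summation formula used in passing from (4.28) to (4.29) is easy to verify numerically; in this sketch (ours) the frequency sum is simply truncated at a large, arbitrarily chosen cutoff:

```python
import math

# Check of the fermionic Matsubara sum behind Eq. (4.29):
#   (1/beta) * sum_s 1/(omega_s^2 + xi^2) = tanh(beta*xi/2) / (2*xi),
# with omega_s = (2s+1)*pi/beta. The truncation cutoff is an arbitrary choice.

def matsubara_sum(xi, beta, cutoff=20000):
    total = 0.0
    for s in range(-cutoff, cutoff):
        w = (2 * s + 1) * math.pi / beta
        total += 1.0 / (w * w + xi * xi)
    return total / beta

def closed_form(xi, beta):
    return math.tanh(beta * xi / 2.0) / (2.0 * xi)
```

The truncated sum converges to the closed form as 1/cutoff, confirming the quick convergence noted above.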
The pole of γ_l (4.27) appears when 1 + u_l Π = 0, i.e., when

−u_l ∫ d³p/(2π)³ · tanh(βξ_p/2) / (2ξ_p) = 1.   (4.30)
This very equation (with l = 0) is obtained from BCS theory if we put the order parameter Δ = 0 (i.e., at the transition point). The critical temperature in the lth channel is thus, up to a factor of order unity,

T_c^{(l)} ≈ ω_max exp{−1/(N(0)|u_l|)}.   (4.31)
Here ω_max is the energy cutoff parameter. We see that the bound state appears if at least one of the coefficients u_l in the angular expansion of the potential is negative (attraction). The transition takes place at the largest of the T_c^{(l)}, and the nonanalytic dependence of T_c^{(l)} on the interaction parameter is properly restored.
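The nonanalytic dependence of (4.31) on the coupling can be made tangible with a few lines of Python (ours; parameter values are arbitrary):

```python
import math

# Eq. (4.31): T_c^(l) ~ omega_max * exp(-1/lam), lam = N(0)|u_l|.
# Parameter values are arbitrary illustrations.

def tc(lam, omega_max=1.0):
    return omega_max * math.exp(-1.0 / lam)

ratio = tc(0.25) / tc(0.5)   # halving a weak coupling costs a factor exp(-2)
```

No Taylor expansion of T_c around zero coupling exists: the function vanishes faster than any power of lam.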
In conventional superconductors the electron–electron attraction appears already
in the s-wave channel (l = 0) due to the phonon-mediated electron–electron coupling
(see Sect. 4.4.2). But for l > 0 negative coefficients may arise from the bare repulsive
electron–electron interaction without such an intermediary. This so-called Kohn–
Luttinger pairing mechanism [21] has the same origin as Friedel oscillations of
screened electrostatic potential at large distances (see Appendix A). Namely, due
to the sharp edge of Fermi distribution the polarization operator, as a function of
momentum, is non-analytic when the momentum transfer is close to 2 p F . As a
result, even starting from a purely repulsive bare potential, all angular components u_l of which are positive, one obtains a screened potential in which at least some components u_l^{eff} are negative. Substituting such a u_l^{eff} into Eq. (4.27) instead of u_l, we obtain the instability of the normal state and the superconducting transition. Note that the existence of a sharp Fermi surface and the resulting effective reduction of the dimensionality of momentum space, which were critically important for the formation of Cooper pairs, are also crucial here.
The Kohn–Luttinger mechanism offers a possibility to explain superconducting pairing with l ≠ 0 in systems where the phonon-mediated coupling is too weak (as in the high-Tc cuprates). While the original version of the Kohn–Luttinger argument does not directly apply to systems without spherical symmetry, its recent extensions to lattice models provide interesting insight into the possible nature of the superconducting state in the cuprates, the Fe-based superconductors, and graphene (see Maiti and Chubukov [6]). In the following, though, we will mainly concern ourselves with conventional, s-wave, phonon-mediated superconductivity.
Finally, note also an important difference of Eq. (4.31) from the result we obtained
earlier: −1/(N (0)|u l |) in the exponent, instead of twice this value, as followed
from Cooper’s initial argument. This is because now we have properly taken into
account the many-body character of the problem. Unfortunately, we cannot proceed
any further: the fact of some transition due to electron–electron coupling, and the
temperature of this transition are the only things that can be obtained by the usual technique, which starts from the normal ground state. In fact, no real bound state (in terms of quasiparticles above the normal vacuum) appears; rather, an instability arises, which this technique cannot handle.
Therefore, we need the modified formalism, which will be built with the help of
the so-called pairing Hamiltonian.
(an appropriate momentum cutoff confining the interaction to a narrow layer near the Fermi surface is implied). From the naïve point of view we have followed earlier, the MFA pairing Hamiltonian is obtained by a selective averaging of operators in the previous expression:
H_MFA = H₀ + g ∫d³r [ψ†_↑(r) ψ†_↓(r) ⟨ψ_↓(r) ψ_↑(r)⟩ + ⟨ψ†_↑(r) ψ†_↓(r)⟩ ψ_↓(r) ψ_↑(r)]

≡ H₀ − ∫d³r [ψ†_↑(r) ψ†_↓(r) Δ(r) + Δ*(r) ψ_↓(r) ψ_↑(r)];   (4.33)

Δ(r) = |g| ⟨ψ_↓(r) ψ_↑(r)⟩.   (4.34)
The result is correct, but its foundation seems to be a bit shaky. In the following we
pursue two objectives: to show that the pairing Hamiltonian is more reliable than one
may guess, and how the corrections can be introduced in a regular way (following
Svidzinskii [8]). The equilibrium properties of the system can be derived from its
grand partition function,
where
Ā(r, τ) = √|g| ψ̄_↑(r, τ) ψ̄_↓(r, τ);   (4.38)

A(r, τ) = √|g| ψ_↓(r, τ) ψ_↑(r, τ).   (4.39)
4.3 Pairing (BCS) Hamiltonian
Now we use one of the functional integration formulae, which is a direct generaliza-
tion of the standard formula for Gaussian integrals
e^{A²} = (1/√π) ∫_{−∞}^{∞} dx e^{−x² + 2Ax} = [∫_{−∞}^{∞} dx e^{−x² + 2Ax}] / [∫_{−∞}^{∞} dx e^{−x²}].   (4.40)
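Formula (4.40) itself can be checked by direct numerical quadrature (the grid and the integration range in this sketch of ours are arbitrary choices):

```python
import math

# Midpoint-rule check of the Gaussian identity (4.40):
#   exp(A^2) = (1/sqrt(pi)) * integral exp(-x^2 + 2*A*x) dx.
# The grid and the integration range are arbitrary choices.

def gaussian_average(a, x_max=10.0, n=100000):
    h = 2.0 * x_max / n
    total = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * h
        total += math.exp(-x * x + 2.0 * a * x) * h
    return total / math.sqrt(math.pi)
```

The integrand is a shifted Gaussian centered at x = A, so a modest range already captures it to machine accuracy.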
Let us write A = P + iQ, Ā = P − iQ, where P, Q are Hermitian operators. Then, reordering the operators under the sign of the T_τ-operator, we can obtain the expression

T_τ e^{∫₀^β dτ ∫d³r Ā(r,τ) A(r,τ)} = T_τ e^{∫₀^β dτ ∫d³r P²(r,τ)} e^{∫₀^β dτ ∫d³r Q²(r,τ)},   (4.42)

to which the functional generalization of (4.40) applies, with the auxiliary fields θ(r, τ), ξ(r, τ) and the normalizing factor

[∫ Dθ(r, τ) Dξ(r, τ) e^{−∫₀^β dτ ∫d³r [θ²(r,τ) + ξ²(r,τ)]}]^{−1}.   (4.43)
noticing that
… × T_τ e^{+∫₀^β dτ ∫d³r [Δ(r,τ) ψ̄_↑(r,τ) ψ̄_↓(r,τ) + Δ*(r,τ) ψ_↓(r,τ) ψ_↑(r,τ)]}   (4.47)

× [∫ DΔ(r, τ) DΔ*(r, τ) e^{−(1/|g|) ∫₀^β dτ ∫d³r |Δ(r,τ)|²}]^{−1}.
174 4 Methods of the Many-Body Theory in Superconductivity
The partition function contains a Gaussian average over the pair sources' fields, Δ, Δ*, of the Bogoliubov functional
B[Δ, Δ*] ≡ e^{−β Ω_B[Δ, Δ*]} = ⟨T_τ e^{−∫₀^β dτ H_B(τ)}⟩₀,   (4.48)
with the pairing Hamiltonian H B (in Matsubara representation). Then the approx-
imation of the pairing Hamiltonian corresponds to the main order in the expansion
of the functional integral over the pair sources’ fields in (4.48). The corresponding
values of these fields are given by the extremum condition:
e^{−(1/|g|) ∫₀^β dτ ∫d³r |Δ(r,τ)|² − β Ω_B[Δ, Δ*]} = max,   (4.49)

that is,²

Δ*(r, τ) = −β|g| · δΩ_B[Δ, Δ*]/δΔ(r, τ).   (4.50)

Here

δΩ_B/δΔ = −β^{−1} δ ln(B/B₀)/δΔ = −β^{−1} δ ln B/δΔ.   (4.51)
Substituting here the partition function (4.48), we obtain
δB/δΔ(r, τ) = (tr e^{−βH₀})^{−1} tr { e^{−βH₀} T_τ ψ̄_↑(r, τ) ψ̄_↓(r, τ) e^{+∫₀^β dτ′ ∫d³r′ [Δ(r′,τ′) ψ̄_↑(r′,τ′) ψ̄_↓(r′,τ′) + Δ*(r′,τ′) ψ_↓(r′,τ′) ψ_↑(r′,τ′)]} }.
Its conjugate coincides with the relation (4.34) obtained from the “naïve” point of
view.
The advantage of our approach is that we have not only established the validity of the "MFA" Hamiltonian in the superconducting case, but also found its limits and a way of introducing the necessary corrections: there may occur situations in which not only the extremal value (4.53), but its vicinity as well, must be taken into account.³
As you see, the Bogoliubov transformation mixes electron and hole operators with
opposite spins—this is the only way to get rid of nondiagonal pairing terms! The phys-
ical significance of this is that the quasiparticles in the superconductor are rather like
centaurs: “part electrons, part holes.” For obvious reasons they are called bogolons.
The coefficients of the Bogoliubov transformation must satisfy the following
canonical relations,
3 You can see from the structure of the functional integral that the overall sign (or, more generally, the initial phase) of the complex field Δ is of no importance. In different books you can thus find the same equation with opposite signs of Δ. This is a matter of convention.
Σ_q [u_q(r) u*_q(r′) + v_q(r′) v*_q(r)] = δ(r − r′);   (4.58)

Σ_q [u_q(r) v*_q(r′) − u_q(r′) v*_q(r)] = 0;   (4.59)

∫d³r [u_q(r) u*_{q′}(r) + v_q(r) v*_{q′}(r)] = δ_{qq′};   (4.60)

∫d³r [u_q(r) v_{q′}(r) − u_{q′}(r) v_q(r)] = 0,   (4.61)
in order to comply with Fermi statistics of both old and new creation/annihilation
operators. (To check this would be a really useful exercise, even in the simplest case
of the plane wave basis.)
In the new operators the Hamiltonian takes the simple form

H_B = U₀ + Σ_q E_q (φ†_{q,↑} φ_{q,↑} + φ†_{q,↓} φ_{q,↓}).   (4.62)
The first term here is the ground state energy of the superconductor. The second
one is the quasiparticle term, which describes the elementary excitations above the
ground state.
The excitation energies along with the transformation coefficients are given by
the solution of the following system of Bogoliubov-de Gennes equations:
[(1/2m) ((ħ/i)∇ − eA/c)² − μ + V(r)] u_q(r) + Δ(r) v_q(r) = E_q u_q(r);

[(1/2m) ((ħ/i)∇ + eA/c)² − μ + V(r)] v_q(r) − Δ*(r) u_q(r) = −E_q v_q(r).   (4.63)
These equations can be easily derived if we write down the Heisenberg equations of
motion for the field operators, i ψ̇ = [ψ, H B ]; that is,
iψ̇_↑(r, t) = (ξ̂ + V(r)) ψ_↑(r, t) − Δ(r) ψ†_↓(r, t);

iψ̇†_↓(r, t) = −(ξ̂_c + V(r)) ψ†_↓(r, t) − Δ*(r) ψ_↑(r, t),   (4.64)

or in matrix form,

i ∂/∂t (ψ_↑(r, t), ψ†_↓(r, t))ᵀ = ( ξ̂ + V(r)   −Δ(r) ; −Δ*(r)   −ξ̂_c − V(r) ) (ψ_↑(r, t), ψ†_↓(r, t))ᵀ.   (4.65)
The change of sign before eA/c in the conjugate operator appears in the process
of integration by parts, which we must perform while calculating the commutator
[ψ † , H B ]. This change of sign of the electrical charge should be expected, since ψ †
is the hole annihilation operator.
The ψ-operators are not eigenvectors of the Hamiltonian and have no definite frequency. We can, though, express them through the eigenoperators of the Hamiltonian, φ, φ†. Gathering the terms, say, with φ_{q,↑}, we get the Bogoliubov-de Gennes equations.
Here we will derive a useful general relation between the excitation energies, order
parameter, and coherence factors, which follows directly from the Bogoliubov-de
Gennes equations (4.63). Multiplying the first line of (4.63) from the left by u q∗ (r)
and the second by vq∗ (r), adding them, and integrating over the whole space, we find
(with the help of (4.60) and after an integration by parts)
E_q = ∫d³r [u*_q(r)(ξ̂ + V(r)) u_q(r) − v_q(r)(ξ̂ + V(r)) v*_q(r) + 2ℜ(Δ(r) u*_q(r) v_q(r))].   (4.66)
4.3.3 Bogolons
The elementary excitations above the ground state of the superconductor, bogolons,
are created and annihilated by the φ, φ† -operators.
It follows from the relation (4.55) that this quasiparticle is a coherent combination of electron-like and hole-like excitations with opposite spins. The coefficients u_q, v_q (they, or some of their bilinear combinations, depending on the book you read, are called coherence factors) give the probability amplitudes of these states in the actual mixture and are defined by the Bogoliubov-de Gennes equations. Bogolons
should not be mixed up with Cooper pairs, which are not excitations, but form the
ground state of the superconductor. Two bogolons appear when a Cooper pair is torn
apart (by thermal fluctuations, for example); of course, they keep some information
about the superconducting phase coherence, which is contained in the coefficients
u, v, as we will see later.
The electric charge of such a quasiparticle is no longer an integer multiple of e, but rather equals

e* = e(|u_q|² − |v_q|²).   (4.67)
Of course, the charge conservation law is here violated no more than the mass con-
servation law was by the fact that quasielectrons can have a mass different from m 0 .
The extra charge is taken or supplied by the condensate.
In the spatially uniform case and in the absence of external fields we can use
the momentum representation to simplify equations (4.63). In the absence of the
supercurrent, we can choose the order parameter to be real:

(ξ_p − E_p) u_p + Δ v_p = 0;
Δ u_p − (ξ_p + E_p) v_p = 0.   (4.68)

The solvability condition gives the dispersion law of bogolons (see Fig. 4.9):

E_p = √(ξ_p² + Δ²),   (4.69)

and the coherence factors

|u_p| = √[(1/2)(1 + ξ_p/E_p)];   |v_p| = √[(1/2)(1 − ξ_p/E_p)].   (4.70)
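Equations (4.69) and (4.70) are straightforward to implement; the following sketch (ours; sample values arbitrary) also checks the normalization |u_p|² + |v_p|² = 1:

```python
import math

# Eqs. (4.69)-(4.70): bogolon dispersion and coherence factors.
# The sample (xi, Delta) values are arbitrary.

def bogolon(xi, delta):
    e = math.hypot(xi, delta)      # E_p = sqrt(xi_p^2 + Delta^2)
    u2 = 0.5 * (1.0 + xi / e)      # |u_p|^2
    v2 = 0.5 * (1.0 - xi / e)      # |v_p|^2
    return e, u2, v2

e0, u2_0, v2_0 = bogolon(0.0, 0.5)     # on the Fermi surface
e1, u2_1, v2_1 = bogolon(-10.0, 0.5)   # deep below the Fermi level
```

On the Fermi surface the quasiparticle is an equal mixture of electron and hole; far below it, the hole-like amplitude dominates.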
The self-consistency relation for the gap follows from the extremum condition Δ* = |g|⟨ψ†ψ†⟩ and reads in the general case as follows:

Δ*(r, T) = |g| Σ_q u*_q(r) v_q(r) tanh(E_q(Δ*)/2T).   (4.71)

In the uniform case this is reduced to the famous BCS equation for the energy gap:

1 = |g| ∫ d³p/(2π)³ · tanh(E_p(Δ*)/2T) / (2E_p(Δ*)).   (4.72)
Fig. 4.9 Quasiparticle dispersion law: a normal metal (electrons and holes), b superconductor
(electron-like and hole-like bogolons)
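In the uniform weak-coupling case with a constant density of states and an energy cutoff ω_D (our simplifying assumptions, not spelled out in the text), the T = 0 version of Eq. (4.72) reduces to 1 = λ ∫₀^{ω_D} dξ/√(ξ² + Δ²), whose solution is Δ = ω_D/sinh(1/λ). A bisection sketch:

```python
import math

# T = 0 BCS gap equation in the reduced form (our simplifying assumptions:
# constant density of states, energy cutoff omega_d, dimensionless coupling lam):
#   1 = lam * integral_0^{omega_d} d(xi) / sqrt(xi^2 + Delta^2),
# solved for Delta by bisection; cf. Eq. (4.72).

def gap_integral(delta, lam, omega_d=1.0, n=20000):
    h = omega_d / n
    return lam * sum(h / math.sqrt(((i + 0.5) * h) ** 2 + delta * delta)
                     for i in range(n))

def solve_gap(lam, omega_d=1.0):
    lo, hi = 1e-9, omega_d
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gap_integral(mid, lam, omega_d) > 1.0:
            lo = mid   # integral too large: the gap must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

delta = solve_gap(0.3)
exact = 1.0 / math.sinh(1.0 / 0.3)   # Delta = omega_d / sinh(1/lam)
```

The numerical root reproduces the sinh formula, and, as for the Cooper problem, the gap is exponentially small in the inverse coupling.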
The thermal average of a one-particle operator O is then

⟨O⟩ = Σ_{σ=↑,↓} ⟨ψ†_σ O ψ_σ⟩

= Σ_q ⟨(u*_q φ†_{q↑} + v_q φ_{q↓}) O (u_q φ_{q↑} + v*_q φ†_{q↓})⟩ + Σ_q ⟨(u*_q φ†_{q↓} − v_q φ_{q↑}) O (u_q φ_{q↓} − v*_q φ†_{q↑})⟩

= 2 Σ_{E_q>0} ∫d³r [u*_q O u_q n_F(E_q) + v_q O v*_q (1 − n_F(E_q))].   (4.73)
Here we have assumed for simplicity that O is spin independent, but the
generalization is trivial.
For example, the equilibrium current density is given by
j = (2e/m) Σ_{E_ν>0} ℜ{ u*_ν (p + (e/c)A) u_ν n_F(E_ν) + v_ν (p + (e/c)A) v*_ν (1 − n_F(E_ν)) }.   (4.74)
Returning to the extremum condition (4.49) that we used earlier to derive the self-
consistency relation for the order parameter, we see that the role of the thermody-
namic potential of the superconductor is played by
Ω[Δ, Δ*] = (1/|g|) ∫d³r |Δ(r)|² + Ω_B[Δ, Δ*],   (4.75)
where the fluctuation corrections are thus neglected. The Bogoliubov functional B
is easily calculated from the diagonalized form of the BCS Hamiltonian, (4.62):
\Omega_B[\Delta, \Delta^*] = -\frac{1}{\beta} \ln \mathrm{tr}\, e^{-\beta H_B}
= U_0 - \frac{2}{\beta}\sum_q \ln\left(1 + e^{-\beta E_q}\right)
= U_0 + \sum_q E_q - \frac{2}{\beta}\sum_q \ln\left(2\cosh\frac{\beta E_q}{2}\right),

and therefore

U_0 = -\sum_q E_q + \sum_q \int d^3r \left[ v_q(\mathbf{r})(\hat{\xi} + V(\mathbf{r}))v_q^*(\mathbf{r}) + u_q^*(\mathbf{r})(\hat{\xi} + V(\mathbf{r}))u_q(\mathbf{r}) \right]. \quad (4.76)
Finally, the (infinite) sums of excitation energies in \Omega_B exactly cancel, and we obtain an important formula

\Omega[\Delta, \Delta^*] = \frac{1}{|g|}\int d^3r\, |\Delta(\mathbf{r})|^2 + \sum_q \int d^3r \left[ v_q(\mathbf{r})(\hat{\xi} + V(\mathbf{r}))v_q^*(\mathbf{r}) + u_q^*(\mathbf{r})(\hat{\xi} + V(\mathbf{r}))u_q(\mathbf{r}) \right] - \frac{2}{\beta}\sum_q \ln\left(2\cosh\frac{\beta E_q}{2}\right). \quad (4.77)
4.4 Green's Functions of a Superconductor: The Nambu-Gor'kov Formalism
We have already seen how the pairing Hamiltonian can be derived from the one with two-particle point interactions, and how anomalous averages appear if we accept such a Hamiltonian. Here we will arrive at anomalous averages from a different direction.
Namely, we will develop a special Green’s function technique, where both normal
and anomalous averages appear in a natural way.
We have seen from the Bogoliubov transformation that the superconducting state
somehow mixes electrons and holes. It is then natural to introduce two-component
field operators (Nambu operators)

\Psi(\mathbf{r}) = \begin{pmatrix} \psi_\uparrow(\mathbf{r}) \\ \psi_\downarrow^\dagger(\mathbf{r}) \end{pmatrix}, \quad \Psi^\dagger(\mathbf{r}) = \left( \psi_\uparrow^\dagger(\mathbf{r}),\ \psi_\downarrow(\mathbf{r}) \right). \quad (4.78)
\hat{G}_{jl} = \frac{1}{i}\left\langle T\, \Psi_j(X)\Psi_l^\dagger(X')\right\rangle; \quad (4.82)

\hat{G} = \begin{pmatrix} \frac{1}{i}\langle T\psi_\uparrow(X)\psi_\uparrow^\dagger(X')\rangle & \frac{1}{i}\langle T\psi_\uparrow(X)\psi_\downarrow(X')\rangle \\ \frac{1}{i}\langle T\psi_\downarrow^\dagger(X)\psi_\uparrow^\dagger(X')\rangle & \frac{1}{i}\langle T\psi_\downarrow^\dagger(X)\psi_\downarrow(X')\rangle \end{pmatrix}
\equiv \begin{pmatrix} G(X, X') & F(X, X') \\ F^+(X, X') & -G(X', X) \end{pmatrix}. \quad (4.83)
We see that the relevant terms arise from the pairings of Nambu operators of the type \langle\Psi\Psi^\dagger\rangle, while the pairings \langle\Psi^\dagger\Psi^\dagger\rangle, \langle\Psi\Psi\rangle contain terms like \langle\psi_\uparrow\psi_\uparrow\rangle or \langle\psi_\uparrow^\dagger\psi_\downarrow\rangle (which would correspond to triplet pairing or magnetic ordering respectively) and are
equal to zero in our case. (Of course, we could not rule them out a priori: here we use
our knowledge of the properties of the superconducting state, based on experimental
data, to narrow the field of search.) Then Wick’s theorem for the Nambu operators
looks the same as for the usual ones, and we can at once build the diagram technique.
It is done, of course, along the same lines as before, and we need not repeat all the
calculations.
This is an important point: neither the unperturbed Hamiltonian nor the unperturbed
Green's function contains anomalous (i.e., off-diagonal) terms (see Table 4.1). Nevertheless, we will see that they naturally appear in the Nambu-Gor'kov picture after interactions are taken into account [12].
As before, we use the Pauli matrices as a basis in the space of 2 × 2 matrices. Denoting them by

\hat{\tau}_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}; \quad \hat{\tau}_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}; \quad \hat{\tau}_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}; \quad \hat{\tau}_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},

we can write the Dyson equation for the matrix Green's function as

\hat{G}^{-1} = \hat{G}_0^{-1} - \hat{\Sigma} \equiv E\hat{\tau}_0 - \xi_p\hat{\tau}_3 - \hat{\Sigma}. \quad (4.87)
Table 4.1 Feynman rules for the Nambu-Gor'kov Green's function (momentum space)

\hat{G}^0(\mathbf{p}, E) = (E\hat{\tau}_0 - \xi_p\hat{\tau}_3)^{-1}: unperturbed matrix Green's function
The function \Delta(\mathbf{p}, E) here is not the "initial" pairing potential, which we set to zero. In the absence of the magnetic field, in a stationary state, we can choose the phase in such a way as to eliminate the \hat{\tau}_2-component. Then
Fig. 4.11 a Lowest-order exchange terms in the self energy, b self-consistent approximation for
the exchange self energy
\hat{\Sigma}_{ex}^{(1)}(\mathbf{p}, E) = i\int \frac{dE'\,d^3p'}{(2\pi)^4}\, \hat{\tau}_3\, \hat{G}^0(\mathbf{p}', E')\, \hat{\tau}_3 \left\{ \sum_j |g_j(\mathbf{p}, \mathbf{p}')|^2 D_j^0(\mathbf{p} - \mathbf{p}', E - E') + V_C(\mathbf{p} - \mathbf{p}') \right\}.
In this way the self energy and Green's function acquire an off-diagonal \hat{\tau}_1-term. From this matrix relation follows the set of two nonlinear integral equations for Z and \Delta, the so-called Eliashberg equations. They are central to the theory of superconductors with strong coupling and contain the expression for the intensity of electron–phonon interaction, traditionally written as
\alpha^2(\omega)F(\omega) \equiv \frac{ \int_{S_F} \frac{d^2\hat{p}}{v_p} \int_{S_F} \frac{d^2\hat{p}'}{v_{p'}} \sum_j |g_j(\mathbf{p}, \mathbf{p}')|^2\, \delta(\omega - \omega_j(\mathbf{p} - \mathbf{p}')) }{ \int_{S_F} \frac{d^2\hat{p}}{v_p} }.
In the weak interaction limit these equations reduce to the BCS theory. For example, the transition temperature is given by the same formula:

T_c = 1.14\,\omega_D \exp\left( -\frac{1}{\lambda - \mu^*} \right). \quad (4.93)

Here \mu^* is the Coulomb pseudopotential, which would appear in the BCS theory if Coulomb repulsion were explicitly taken into account.⁴ Usually \mu^* \ll \lambda. The electron–phonon coupling constant, \lambda, is here defined as

\lambda = 2\int_0^\infty d\omega\, \frac{\alpha^2(\omega)F(\omega)}{\omega}. \quad (4.94)
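The definitions (4.93), (4.94) can be sketched numerically. The Lorentzian-peak model for \alpha^2(\omega)F(\omega) below is purely an illustrative assumption (not a material calculation from the text), used only to show how \lambda and the T_c estimate follow from it:

```python
import numpy as np

def trap(y, x):
    # explicit trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

w = np.linspace(1e-4, 0.1, 20000)          # phonon frequency grid (arbitrary units)
w0, width, amp = 0.03, 0.005, 0.5          # assumed model-peak parameters
a2F = amp * width**2 / ((w - w0)**2 + width**2)

lam = 2.0 * trap(a2F / w, w)               # Eq. (4.94): lambda = 2 * int dw a2F(w)/w
mu_star, w_D = 0.1, 0.1                    # typical Coulomb pseudopotential, cutoff
# Eq. (4.93); only meaningful for lambda > mu*
Tc = 1.14 * w_D * np.exp(-1.0 / (lam - mu_star)) if lam > mu_star else 0.0

assert 0.3 < lam < 1.5 and 0.0 < Tc < w_D
```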
The matrix Green’s function contains only two independent components: normal
and anomalous Green’s functions. The set of equations for these functions (Gor’kov
equations) immediately follows from their definitions and the equation of motion for
the field operators (4.64):
\left( i\frac{\partial}{\partial t} - \hat{\xi} + V \right)_X G(X, X') + \Delta(\mathbf{r})\, F^+(X, X') = \delta(X - X');
\left( i\frac{\partial}{\partial t} + \hat{\xi}^* + V \right)_X F^+(X, X') + \Delta^*(\mathbf{r})\, G(X, X') = 0. \quad (4.95)

In the stationary homogeneous case these equations take the following form (in the momentum representation):

(\omega - \xi_p)\, G(\mathbf{p}, \omega) + \Delta\, F^+(\mathbf{p}, \omega) = 1;
(\omega + \xi_p)\, F^+(\mathbf{p}, \omega) + \Delta^*\, G(\mathbf{p}, \omega) = 0. \quad (4.96)
At zero temperature the solution to this set is given by

F^+(\mathbf{p}, \omega) = -\frac{\Delta^*}{\omega + \xi_p}\, G(\mathbf{p}, \omega);

G(\mathbf{p}, \omega) = \frac{\omega + \xi_p}{\omega^2 - (\xi_p^2 + |\Delta|^2)}.
It is easy to show that in the uniform case this equation coincides with the BCS
equation for the gap at T = 0.
At finite temperatures we use the same methods as in Chap. 3. Due to the analyticity of the retarded Green's function in the upper half-plane, we just put \omega \to \omega + i0 and obtain
G^R(\mathbf{p}, \omega) = \frac{u_p^2}{\omega - E_p + i0} + \frac{v_p^2}{\omega + E_p + i0}. \quad (4.99)
The causal Green’s function is obtained from the retarded one with the use of the
relation between their real and imaginary parts in equilibrium:
Fig. 4.12 Spectral density of the retarded Green’s function in the superconductor. Note that the
quasiparticles (bogolons) have infinite lifetime in spite of interactions
G(\mathbf{p}, \omega) = P\left[ \frac{u_p^2}{\omega - E_p} + \frac{v_p^2}{\omega + E_p} \right] - i\pi \tanh\frac{E_p}{2T}\left[ u_p^2\,\delta(\omega - E_p) - v_p^2\,\delta(\omega + E_p) \right]. \quad (4.101)

Since \tanh\frac{E_p}{2T} = 1 - 2n_F(E_p), this can be represented as

G(\mathbf{p}, \omega)\big|_T = G(\mathbf{p}, \omega)\big|_{T=0} + 2\pi i\, n_F(E_p)\left[ u_p^2\,\delta(\omega - E_p) - v_p^2\,\delta(\omega + E_p) \right], \quad (4.102)
while the anomalous Green’s function is now
Again, this equation leads to the BCS equation for \Delta(T) at a finite temperature.
All the modifications of Green’s functions techniques discussed earlier can be gener-
alized to Nambu-Gor’kov Green’s functions. For example, a Keldysh Green’s func-
tion becomes a 4 × 4 matrix (because G A,R,K are 2 × 2 Nambu matrices) (see
Rammer and Smith [7]). In equilibrium it is, though, more convenient to use a less
cumbersome Matsubara formalism.
The temperature anomalous Green’s functions are defined by
The Gor'kov equations for the temperature Green's functions are thus

\left( -\frac{\partial}{\partial\tau} - \hat{\xi} + V \right)_\mathbf{r} G(\mathbf{r}\tau, \mathbf{r}'\tau') + \Delta(\mathbf{r})\, F(\mathbf{r}\tau, \mathbf{r}'\tau') = \delta(\mathbf{r} - \mathbf{r}')\,\delta(\tau - \tau');
\left( -\frac{\partial}{\partial\tau} + \hat{\xi}^* + V \right)_\mathbf{r} F(\mathbf{r}\tau, \mathbf{r}'\tau') + \Delta^*(\mathbf{r})\, G(\mathbf{r}\tau, \mathbf{r}'\tau') = 0. \quad (4.107)
In the uniform case we Fourier transform this to get the solution

G(\mathbf{p}, \omega_s) = -\frac{i\omega_s + \xi_p}{\omega_s^2 + E_p^2}; \quad (4.109)

F(\mathbf{p}, \omega_s) = \frac{\Delta^*}{\omega_s^2 + E_p^2} = F^+(\mathbf{p}, \omega_s). \quad (4.110)

Then

1 = |g|\, T \sum_s \int \frac{d^3p}{(2\pi)^3}\, \frac{1}{\omega_s^2 + E_p^2}. \quad (4.111)
This equation immediately yields the BCS equation for \Delta(T). The sum is evaluated with the help of the formula

\sum_s \left[ (2s+1)^2\pi^2 T^2 + a^2 \right]^{-1} = \frac{1}{2aT}\tanh\frac{a}{2T}.
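This Matsubara-sum identity is easy to confirm numerically by brute-force summation over a large but finite set of odd frequencies:

```python
import numpy as np

# Check: T * sum_s [omega_s^2 + a^2]^(-1) = tanh(a/2T)/(2a), omega_s = (2s+1)*pi*T
T, a = 0.7, 1.3                       # arbitrary test values
s = np.arange(-200000, 200000)
omega = (2*s + 1) * np.pi * T
lhs = T * np.sum(1.0 / (omega**2 + a**2))
rhs = np.tanh(a / (2*T)) / (2*a)
assert abs(lhs - rhs) < 1e-5          # the truncated tail is O(1/N)
```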
a_\mathbf{p} \to a_{\mathbf{p}-m\mathbf{v}_s}; \quad a_\mathbf{p}^\dagger \to a_{\mathbf{p}-m\mathbf{v}_s}^\dagger;

\psi_\lambda(\mathbf{r}) \to e^{im\mathbf{v}_s\cdot\mathbf{r}}\,\psi_\lambda(\mathbf{r}); \quad (4.112)
\psi_\lambda^\dagger(\mathbf{r}) \to e^{-im\mathbf{v}_s\cdot\mathbf{r}}\,\psi_\lambda^\dagger(\mathbf{r}). \quad (4.113)

More generally, the field operators acquire a coordinate-dependent phase factor

\exp(im\zeta(\mathbf{r})), \quad \mathbf{v}_s = \nabla\zeta(\mathbf{r}).
In the case of spatially uniform superflow, ζ(r) = vs · r(+const), but once we have
related the superfluid velocity to the phase of the order parameter, we no longer
depend on this uniformity assumption and can consider arbitrary ζ(r) and vs (r).
Let us choose the transverse gauge \nabla\cdot\mathbf{A} = 0. It simplifies the calculations, since then the vector potential and the momentum operator commute (see, e.g., Landau and Lifshitz [4]). Indeed, for arbitrary coordinate functions f(\mathbf{r}), g(\mathbf{r}) we find [\hat{\mathbf{p}}, f]g = -i(\nabla f)g. Therefore, [\hat{\mathbf{p}}\cdot, \mathbf{A}] = -i\nabla\cdot\mathbf{A} = 0 in this gauge.
Substituting (4.115) into Gor'kov's equations, we then find that they obey the equations

\left[ i\frac{\partial}{\partial t} - \frac{1}{2m}(\hat{\mathbf{p}} + m\mathbf{v}_s)^2 + \mu \right] \tilde{G}(X, X') + \Delta(\mathbf{r})\,\tilde{F}^+(X, X') = \delta(X - X');
\left[ i\frac{\partial}{\partial t} + \frac{1}{2m}(\hat{\mathbf{p}} - m\mathbf{v}_s)^2 - \mu \right] \tilde{F}^+(X, X') + \Delta(\mathbf{r})\,\tilde{G}(X, X') = 0, \quad (4.117)

where the order parameter phase and the vector potential of the electromagnetic field enter through the gauge-invariant combination, the superfluid velocity

\mathbf{v}_s(\mathbf{r}) = \nabla\zeta(\mathbf{r}) - \frac{e}{mc}\mathbf{A}(\mathbf{r}); \quad (4.118)

\nabla\times\mathbf{v}_s(\mathbf{r}) = -\frac{e}{mc}\mathbf{B}(\mathbf{r}). \quad (4.119)
The supercurrent is a “thermodynamic current”: it flows in equilibrium, it is a
property of the ground state of the system. Therefore, it can be calculated from ther-
modynamics only, where the current density is expressed as a variational derivative
of any of the thermodynamic potentials with respect to the vector potential:

-\frac{1}{c}\mathbf{j}(\mathbf{r}) = \left.\frac{\delta E}{\delta\mathbf{A}(\mathbf{r})}\right|_{S,V,N} = \left.\frac{\delta F}{\delta\mathbf{A}(\mathbf{r})}\right|_{T,V,N} = \left.\frac{\delta W}{\delta\mathbf{A}(\mathbf{r})}\right|_{S,P,N} = \left.\frac{\delta\Phi}{\delta\mathbf{A}(\mathbf{r})}\right|_{T,P,N} = \left.\frac{\delta\Omega}{\delta\mathbf{A}(\mathbf{r})}\right|_{T,V,\mu}. \quad (4.120)

Therefore, we find

\mathbf{j}(\mathbf{r}) = -c\,\frac{\delta\Omega}{\delta\mathbf{A}(\mathbf{r})}
= \frac{ie}{2m}\sum_\lambda \left\langle (\nabla\psi_\lambda^\dagger(\mathbf{r}))\psi_\lambda(\mathbf{r}) - \psi_\lambda^\dagger(\mathbf{r})(\nabla\psi_\lambda(\mathbf{r})) \right\rangle - \frac{e^2}{mc}\mathbf{A}(\mathbf{r})\sum_\lambda \left\langle \psi_\lambda^\dagger(\mathbf{r})\psi_\lambda(\mathbf{r}) \right\rangle, \quad (4.121)

which automatically satisfies the continuity equation,

\nabla\cdot\mathbf{j}(\mathbf{r}) = 0. \quad (4.122)
The current can be expressed through the normal Green’s function (compare to
Sect. 2.1.4):
\mathbf{j}(\mathbf{r}) = \frac{ie}{m}\lim_{\mathbf{r}'\to\mathbf{r}} (\nabla_{\mathbf{r}'} - \nabla_\mathbf{r})\,\tilde{G}(\mathbf{r}0, \mathbf{r}'0^+) + 2e\,\mathbf{v}_s(\mathbf{r})\,\tilde{G}(\mathbf{r}0, \mathbf{r}'0^+)
= \frac{ie}{m}\lim_{\mathbf{r}'\to\mathbf{r}} (\nabla_{\mathbf{r}'} - \nabla_\mathbf{r})\, T\sum_{s=-\infty}^{\infty} \tilde{G}(\mathbf{r}, \mathbf{r}', \omega_s) + 2e\,\mathbf{v}_s(\mathbf{r})\, T\sum_{s=-\infty}^{\infty} \tilde{G}(\mathbf{r}, \mathbf{r}, \omega_s). \quad (4.123)
Here we will do basically the same thing as in Chap. 3, when we derived the quantum
kinetic equation: the gradient expansion. Again we assume that the field A(r), the
superfluid velocity, and the order parameter are slow functions of the coordinates,
and introduce the Wigner representation:
\mathbf{R} = (\mathbf{r} + \mathbf{r}')/2; \quad \boldsymbol{\rho} = \mathbf{r} - \mathbf{r}'; \quad (4.124)

f(\mathbf{r}, \mathbf{r}') \to f(\mathbf{R}, \mathbf{q}) \equiv \int d^3\rho\, e^{-i\mathbf{q}\boldsymbol{\rho}}\, f(\mathbf{R} + \boldsymbol{\rho}/2, \mathbf{R} - \boldsymbol{\rho}/2); \quad (4.125)

\hat{\mathbf{p}} = -i\nabla \to \mathbf{q} - \frac{i}{2}\nabla_\mathbf{R}; \quad \mathbf{r} \to \mathbf{R} + \frac{i}{2}\nabla_\mathbf{q}. \quad (4.126)
Then Gor'kov's equations read

\left[ i\omega_s - \frac{1}{2m}\left( \mathbf{q} - \frac{i}{2}\nabla_\mathbf{R} + m\mathbf{v}_s\left(\mathbf{R} - \frac{i}{2}\nabla_\mathbf{q}\right) \right)^2 + \mu \right] G(\mathbf{R}, \mathbf{q}, \omega_s) + \Delta\left(\mathbf{R} - \frac{i}{2}\nabla_\mathbf{q}\right) F(\mathbf{R}, \mathbf{q}, \omega_s) = 1;

\left[ i\omega_s + \frac{1}{2m}\left( \mathbf{q} - \frac{i}{2}\nabla_\mathbf{R} - m\mathbf{v}_s\left(\mathbf{R} - \frac{i}{2}\nabla_\mathbf{q}\right) \right)^2 - \mu \right] F(\mathbf{R}, \mathbf{q}, \omega_s) + \Delta\left(\mathbf{R} - \frac{i}{2}\nabla_\mathbf{q}\right) G(\mathbf{R}, \mathbf{q}, \omega_s) = 0.
In zeroth order in gradients we have thus the set of algebraic equations

\left[ i\omega_s - \frac{1}{2m}(\mathbf{q} + m\mathbf{v}_s(\mathbf{R}))^2 + \mu \right] G(\mathbf{R}, \mathbf{q}, \omega_s) + \Delta(\mathbf{R})\, F(\mathbf{R}, \mathbf{q}, \omega_s) = 1;
\left[ i\omega_s + \frac{1}{2m}(\mathbf{q} - m\mathbf{v}_s(\mathbf{R}))^2 - \mu \right] F(\mathbf{R}, \mathbf{q}, \omega_s) + \Delta(\mathbf{R})\, G(\mathbf{R}, \mathbf{q}, \omega_s) = 0,

with the solution

F(\mathbf{R}, \mathbf{q}, \omega_s) = \frac{\Delta(\mathbf{R})}{2E_q(\mathbf{R})}\left[ \frac{1}{i\omega_s - \mathbf{q}\cdot\mathbf{v}_s(\mathbf{R}) + E_q(\mathbf{R})} - \frac{1}{i\omega_s - \mathbf{q}\cdot\mathbf{v}_s(\mathbf{R}) - E_q(\mathbf{R})} \right]. \quad (4.129)
The only change brought to these formulae by the supercurrent is that instead of i\omega_s we have

i\omega_s - \mathbf{q}\cdot\mathbf{v}_s,

and the chemical potential is renormalized,

\mu \to \mu(\mathbf{R}) = \mu - \frac{mv_s^2}{2}, \quad (4.130)

so that the kinetic energy term becomes

\tilde{\xi}_q(\mathbf{R}) = \frac{q^2}{2m} - \mu(\mathbf{R}) = \xi_q(\mathbf{R}) + \frac{mv_s^2}{2}.

Taking this into account, the relation between the kinetic energy and excitation energy is the same as before, locally:
Using the formula for summation over odd (fermionic) Matsubara frequencies,

T\sum_{s=-\infty}^{\infty} \frac{1}{i\omega_s - \epsilon} = n_F(\epsilon),

we find that

T\sum_{s=-\infty}^{\infty} \tilde{G}(\mathbf{q}, \omega_s) = \frac{1}{2}\left[ \left(1 + \frac{\tilde{\xi}_q}{E_q}\right) n_F(\mathbf{q}\mathbf{v}_s + E_q) + \left(1 - \frac{\tilde{\xi}_q}{E_q}\right) n_F(\mathbf{q}\mathbf{v}_s - E_q) \right], \quad (4.132)
\mathbf{j}_2 = ne\mathbf{v}_s.

\mathbf{j}_1 = -n_n e\mathbf{v}_s \equiv -\mathbf{j}_n,

in order to eliminate the contribution of the "normal" electrons (n_n is then the density of the normal component, n_n + n_s = n).
Noting that

n_F(-x) = 1 - n_F(x),

and

\int d^3q\, \mathbf{q}\left(1 \pm \frac{\tilde{\xi}_q}{E_q}\right) = 0

(the latter because the energies are direction independent), we can write this term as
follows:
\mathbf{j}_1(\mathbf{R}) = \frac{e}{m}\int \frac{d^3q}{(2\pi)^3}\, \mathbf{q}\left[ \left(1 + \frac{\tilde{\xi}_q}{E_q}\right) n_F(E_q + \mathbf{q}\mathbf{v}_s) - \left(1 - \frac{\tilde{\xi}_q}{E_q}\right) n_F(E_q - \mathbf{q}\mathbf{v}_s) \right]
\approx -\frac{2e}{m}\int \frac{d^3q}{(2\pi)^3}\, \mathbf{q}\, n_F(E_q - \mathbf{q}\mathbf{v}_s). \quad (4.135)
But n F (E q − qvs ) is simply the distribution function of Fermi particles (in the event,
bogolons) moving with velocity vs . Thus the term j1 indeed is minus the current of
the elementary excitations moving with the velocity of the condensate:
\mathbf{j}_1 = -\mathbf{j}_n \equiv -n_n e\mathbf{v}_s,

where

n_n + n_s = n. \quad (4.137)
We noted that while the energy gap in the current-carrying state linearly drops with
the supercurrent, the behavior of the order parameter should be determined from the
self-consistency relation

\Delta(\mathbf{R}) = |g|\, T\sum_s \int \frac{d^3q}{(2\pi)^3}\, F(\mathbf{R}, \mathbf{q}, \omega_s). \quad (4.138)

Substituting there the expression for the anomalous Green's function (4.129), we obtain the integral equation

1 = |g| \int \frac{d^3q}{(2\pi)^3}\, \frac{1}{2E_q}\left( 1 - n_F(E_q + \mathbf{q}\mathbf{v}_s) - n_F(E_q - \mathbf{q}\mathbf{v}_s) \right). \quad (4.139)
p_F v_s < \Delta_0, \quad (4.141)

where \Delta_0 is the order parameter in the state with \mathbf{j} = 0. Indeed, then both Fermi functions in (4.139) vanish. Therefore, the equation for the order parameter stays the same as in the absence of the supercurrent. Equation (4.141) is the celebrated Landau criterion, telling at what v_s the energy gap (not the order parameter!) first goes to zero. But, unlike the case of a Bose superfluid, the superconducting state in three dimensions is not immediately destroyed when v_s reaches v_{s,Landau} = \Delta_0/p_F: there still exists gapless superconductivity up to a somewhat higher v_{s,c}, which can be seen from (4.140) (see [8]).
Let the order parameter be \Delta < \Delta_0 < p_F v_s, and introduce \Omega = \sqrt{(p_F v_s)^2 - \Delta^2}. The r.h.s. of (4.140) is then

|g|N(0)\left[ \int_0^\Omega \frac{d\xi}{p_F v_s} + \int_\Omega^{\omega_D} \frac{d\xi}{\sqrt{\xi^2 + \Delta^2}} \right]
= |g|N(0)\left[ \sqrt{1 - \frac{\Delta^2}{(p_F v_s)^2}} + \int_\Omega^{\omega_D} \frac{d\xi}{\sqrt{\xi^2 + \Delta^2}} \right],

while the l.h.s. is given by the same expression at v_s = 0,

|g|N(0)\int_0^{\omega_D} \frac{d\xi}{\sqrt{\xi^2 + \Delta_0^2}}.

Evaluating the integrals, we obtain

\sqrt{1 - \frac{\Delta^2}{(p_F v_s)^2}} + \ln\frac{2\omega_D}{p_F v_s \left(1 + \sqrt{1 - \frac{\Delta^2}{(p_F v_s)^2}}\right)} = \ln\frac{2\omega_D}{\Delta_0},

that is,

\sqrt{1 - \frac{\Delta^2}{(p_F v_s)^2}} = \ln\frac{p_F v_s}{\Delta_0} + \ln\left( 1 + \sqrt{1 - \frac{\Delta^2}{(p_F v_s)^2}} \right),

since (\Delta_0^2/\omega_D^2) \ll 1.

Fig. 4.13 Order parameter (a) and energy gap (b) dependence on the superfluid velocity
We have established the existence of the region of vs between vs,Landau and
vs,c ≈ 1.359vs,Landau , where the superconductivity exists in spite of the possibility of
the creation of elementary excitations with arbitrarily small energies (see Fig. 4.13).
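The implicit relation derived above is easy to solve numerically. The sketch below works in dimensionless units, v = v_s/v_{s,Landau} and d = \Delta/\Delta_0, and reproduces the gapless window that closes at v_{s,c} \approx 1.359\, v_{s,Landau} (that is, at v_c = e/2):

```python
import numpy as np

# T = 0 relation: sqrt(1 - d^2/v^2) = ln(v) + ln(1 + sqrt(1 - d^2/v^2)),
# with v = v_s/v_{s,Landau}, d = Delta/Delta_0.
def f(d, v):
    x = np.sqrt(max(1.0 - (d / v)**2, 0.0))
    return x - np.log(v) - np.log1p(x)

def delta(v):
    if v <= 1.0:
        return 1.0          # below the Landau criterion the gap equation is unchanged
    if f(0.0, v) <= 0.0:
        return 0.0          # v >= v_c = e/2: only Delta = 0 solves the equation
    lo, hi = 0.0, v         # f(lo) > 0 > f(hi): bisect for the root
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid, v) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Gapless superconductivity survives for v_s,Landau < v_s < v_c ~ 1.359 v_s,Landau
assert delta(1.0) == 1.0 and delta(1.2) > 0.0 and delta(1.36) == 0.0
```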
Fig. 4.14 Andreev reflection of the quasiparticle from the N-S interface. At the point where E = \Delta(x) the quasiparticle changes the branch (particle to hole, e.g.), thus changing the velocity to its opposite with the minimal possible momentum change
(It is straightforward to check that the order parameter near the end point behaves as \Delta(v_{s,c} - v_s) \propto \sqrt{v_{s,c} - v_s}.) The reason why such a regime of gapless superconductivity can exist is that unlike the Bose superfluid, the elementary excitations in
the superconductor are fermions. Since the destruction of the supercurrent can occur
only after all of its momentum has been transferred to the quasiparticles, there must
be a certain number of them generated. For fermions, this demands a finite phase
space, with zero energy gap. But in three dimensions, after the Landau criterion is
met, the gap is zero only on an infinitesimally small portion of the Fermi sphere.
This portion, and the quasiparticle generation rate, grows as the superfluid velocity
grows, while the order parameter continuously drops until it becomes zero at vs,c .
The dimensionality is here crucial: if, e.g., we formally consider the one-dimensional case, it is evident from (4.139) that the order parameter will drop to zero immediately at v_{s,Landau} (Bagwell [14]). Small wonder: in this case there is no room for growth
of the phase space volume available to quasiparticle generation.
where ξ(r) and ζ(r) vary slowly in comparison to the exponential factors. Neglecting
second derivatives, we thus obtain
This much we had already guessed: a quasiparticle with subgap energy cannot enter the superconductor, where there is no available place for it. Nor can it change its momentum significantly, since the effective potential varies too slowly.
The only possible reflection process is thus to change the direction of its group
velocity, changing the branch of the excitation spectrum (see Fig. 4.14).
When x \to -\infty, we find that
Note that unlike the usual reflection, Andreev reflection changes all the veloc-
ity components of the incident quasiparticle (simultaneously transforming it into a
quasihole), and vice versa.
Due to this feature, Andreev reflection provides a mechanism for current flow
through the NS interface. A quasielectron with velocity v is transformed into a qua-
sihole with velocity −v which carries the same current in the same direction. In
terms of real electrons this means that an electron above the Fermi surface in the
normal conductor forms a Cooper pair with another electron (below the Fermi sur-
face) and leaves for the superconducting condensate (Fig. 4.15), thus transferring the
dissipative quasiparticle current on the N -side into the condensate supercurrent on
the S-side. The Andreev-reflected hole is the hole left by the second electron in the
Fermi sea. In the conjugated process, a Cooper pair pounces at an unsuspecting hole
wandering too close to the interface; one electron fills the hole, and the other moves
away into the normal region.
To investigate this picture in some detail, let us now consider the case of steplike
pairing potential (Fig. 4.16):
This is, of course, an approximation (though a very often used one). Since the pairing potential must be determined self-consistently from (4.71), in reality it sags a little near the SN interface (at distances about \xi_0). This is sometimes called the proximity effect (later we will discuss another meaning of this term); anyway, it is irrelevant for our problem, since it can bring only small corrections.
Since the system is homogeneous in the y, z-directions, we can separate a single mode with dependence on y, z given by e^{ik_y y + ik_z z}. The Bogoliubov-de Gennes equations for such a mode will have the same form as (4.143), with
Fig. 4.16 Steplike pairing potential. Both Andreev and normal reflection are possible. (We will see
that in the gap, normal reflection is still suppressed)
\hat{\xi} \to -\frac{1}{2m}\frac{d^2}{dx^2},

\mu \to \mu_{k_y,k_z} = \mu - \frac{k_y^2 + k_z^2}{2m},

and

k_F \to k_{F,k_y,k_z} = \sqrt{k_F^2 - (k_y^2 + k_z^2)}.
(We will keep this in mind and omit trivial y, z-dependence and k y,z subscripts to
simplify notation.)
In the normal part of the system there exist electrons and holes, described by the (non-normalized) vectors

\psi_e^\pm(x) = \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{\pm ik^+x}; \quad (4.150)

\psi_h^\pm(x) = \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{\pm ik^-x}, \quad (4.151)

where, as is easy to see from the Bogoliubov-de Gennes equations, the wave vectors satisfy the dispersion law

k^\pm(E) = k_F\sqrt{1 \pm \frac{E}{\mu}}. \quad (4.152)
4.5 Andreev Reflection

Here the dispersion law and expressions for u, v also follow from the Bogoliubov-de Gennes equations (plus the condition u^2 + v^2 = 1) after some exercise in elementary algebra:

u(E) = \sqrt{\frac{1 + \sqrt{1 - \Delta^2/E^2}}{2}}; \quad (4.155)

v(E) = \sqrt{\frac{1 - \sqrt{1 - \Delta^2/E^2}}{2}}; \quad (4.156)

q^\pm(E) = k_F\sqrt{1 \pm \frac{\sqrt{E^2 - \Delta^2}}{\mu}}. \quad (4.157)
For subgap excitations (E < \Delta), u, v, and q acquire imaginary parts (we will take \sqrt{-1} = +i). Physically possible subgap solutions must exponentially decay into the bulk of the superconductor (in our case, at x \to \infty, which allows only \psi_e^+ and \psi_h^-).
Now we can solve the stationary scattering problem for the Bogoliubov-de Gennes
equations. Let an electron impinge on the boundary from the left. Then the wave
function at x < 0 will be
while at x > 0 it is
t_{eh} = \frac{1}{u}\,\frac{q^+ - k^-}{q^+ + k^-}\, e^{i\alpha/2}\, r_{eh}; \quad (4.158)

t_{ee} = \frac{1}{v}\,\frac{q^- + k^-}{q^+ + k^-}\, e^{i\alpha/2}\, r_{eh}; \quad (4.159)

r_{ee} = \left[ \frac{u}{v}\,\frac{q^+ + k^-}{q^+ + q^-} + \frac{v}{u}\,\frac{q^+ - k^-}{q^+ + q^-} \right] e^{i\alpha}\, r_{eh} - 1, \quad (4.160)

while

r_{eh} = \frac{2e^{-i\alpha}}{ \frac{u}{v}\,\frac{q^- + k^-}{q^+ + q^-}\left(1 + \frac{q^+}{k^+}\right) + \frac{v}{u}\,\frac{q^+ - k^-}{q^+ + q^-}\left(1 - \frac{q^-}{k^+}\right) }. \quad (4.161)
Note that the Andreev electron-hole amplitude acquires the phase −α. We will
not repeat the calculations, but the hole-electron amplitude acquires phase +α:
r_{he} = \frac{2e^{i\alpha}}{ \frac{u}{v}\,\frac{q^- + k^-}{q^+ + q^-}\left(1 + \frac{q^+}{k^+}\right) + \frac{v}{u}\,\frac{q^+ - k^-}{q^+ + q^-}\left(1 - \frac{q^-}{k^+}\right) }
\approx e^{i\alpha}\,\frac{E - \sqrt{E^2 - \Delta^2}}{\Delta}. \quad (4.164)
We will see the importance of this in the next section.
In the Andreev approximation the Andreev reflection amplitudes (4.162), (4.164) can be written as

r_{eh(he)} = e^{\mp i\alpha} \times \begin{cases} e^{-i\arccos(E/\Delta)}, & E \le \Delta; \\ e^{-\mathrm{arccosh}(E/\Delta)}, & E > \Delta. \end{cases} \quad (4.165)
For subgap particles, we thus have total Andreev reflection, |r_{eh(he)}(E)|^2 = 1, in complete agreement with our qualitative reasoning, the sharp change in pairing potential notwithstanding. The latter leads to small corrections o(\Delta/\mu), leaving room for finite, but small, normal reflection (evidently, for E < \Delta, |r_{eh(he)}(E)|^2 + |r_{ee(hh)}(E)|^2 = 1), which can usually be neglected. It becomes significant if there
is a (normal) potential barrier at the NS interface, or if the Fermi vectors in normal
and superconducting regions differ (see Blonder et al. [16] for a detailed discussion).
Instead, we will consider the Andreev levels and Josephson effect in SNS junctions,
which is far more exciting.
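The two regimes of (4.165) can be sketched directly (here for the electron-to-hole process with \alpha = 0, in units \Delta = 1):

```python
import numpy as np

Delta = 1.0

def r_eh(E):
    # Andreev reflection amplitude, Eq. (4.165), with alpha = 0
    if E <= Delta:
        return np.exp(-1j * np.arccos(E / Delta))   # unimodular: total reflection
    return np.exp(-np.arccosh(E / Delta))           # real, suppressed above the gap

for E in [0.1, 0.5, 0.99]:
    assert abs(abs(r_eh(E)) - 1.0) < 1e-12          # |r| = 1 below the gap
assert abs(r_eh(0.0) - (-1j)) < 1e-12               # at E = 0 the phase is -pi/2
assert abs(r_eh(3.0)) < 1.0                         # above the gap |r| < 1
```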
[\Psi_{eh}(x; E)]_2\,([\Psi_{eh}(x; E)]_1)^* \propto e^{i(\alpha + \arccos(E/\Delta))}\, e^{2iEx/v_F}

[cf. Eq. (3.163)]. The only coordinate dependence enters this expression via the phase factor, 2Ex/v_F, which has the sense of the relative phase shift of the electron and hole components of the wave function \Psi_{eh}(x; E). If E = 0, then these components keep a constant relative phase \alpha + \arccos(E/\Delta) (and thus superconducting coherence) all the way to x = -\infty, where no pairing interactions exist (g = 0)! At finite energy, this coherence decays when the relative phase becomes of order unity, that is, at a distance from the boundary \approx l_E = \frac{v_F}{2E}. At temperature T, correlations of a thermal electron-hole pair decay at a distance \approx l_T = \frac{v_F}{k_B T}. The latter length is usually called the normal metal coherence length (in the clean case: we completely neglected impurity scattering).
The strange coherence of electrons and holes in the absence of any pairing interaction can be understood if we return to the picture of Andreev reflection. A hole into which the incident electron has been transformed has exactly the opposite velocity (if E = 0), and will thus retrace exactly the same path all the way to minus infinity! Little wonder that the correlations are conserved. (They will be conserved even in the presence of nonmagnetic scatterers, but this is not a book on superconductivity theory.) At E \ne 0 the paths of electron and hole will diverge, and l_E is exactly the measure of when they diverge irreparably: the proximity effect is essentially a kinematic phenomenon.
Since the subgap particle in the normal region is reflected by the pairing potential as a hole, and vice versa, in an SNS junction, where a normal region of width L is sandwiched between two superconductors, there should appear quantized Andreev levels [22]. This is illustrated in Fig. 4.17. In the normal part of the system the solution of the Bogoliubov-de Gennes equations will be

\psi(x; E) = a\psi_e^+(x, E) + b\psi_h^+(x, E) + c\psi_e^-(x, E) + d\psi_h^-(x, E).

In the gap, the above system yields the discrete set of allowed energy levels of the bound states, the Andreev levels, which must satisfy

-2\arccos\frac{E_n^\pm}{\Delta} \pm (\alpha_1 - \alpha_2) + \left( k^+(E_n^\pm) - k^-(E_n^\pm) \right)L = 2\pi n; \quad n = 0, \pm 1, \ldots. \quad (4.168)
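The transcendental condition (4.168) is easy to solve by bisection. The sketch below uses the linearized dispersion k^+ - k^- \approx 2E/v_F and dimensionless units \Delta = 1; the phase difference \alpha = 1.5 and the length parameter 2L\Delta/v_F = 5 are arbitrary illustrative choices:

```python
import numpy as np

Delta, alpha, LD = 1.0, 1.5, 5.0      # LD = 2*L*Delta/v_F (assumed "long" junction)

def levels(sign):
    # Solve -2*arccos(E/Delta) + sign*alpha + LD*E/Delta = 2*pi*n for 0 < E < Delta.
    out = []
    for n in range(-10, 11):
        g = lambda E: -2*np.arccos(E/Delta) + sign*alpha + LD*E/Delta - 2*np.pi*n
        lo, hi = 0.0, Delta
        if g(lo) * g(hi) > 0:
            continue                  # no level in the gap for this n
        for _ in range(60):           # g is strictly increasing in E: bisection works
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
        out.append(0.5 * (lo + hi))
    return sorted(out)

Ep, Em = levels(+1), levels(-1)
assert all(0.0 <= E <= Delta for E in Ep + Em)
assert Ep != Em                       # the two sets depend on the sign of alpha
```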
Here there are two sets of levels, with ±(α1 − α2 ), depending on the direction
of motion of the electron. Both depend explicitly on the superconducting phase
difference between the superconductors, due to the phase acquired by the wave
function during Andreev reflection. Note that the phase itself is irrelevant, as it
should be: it is the difference that counts. You may have noticed that (4.168) could
be written immediately from the quasiclassical quantization condition in the pairing
potential well,
\oint p(E)\, dq = 2\pi n.

In the limit L = 0, (4.168) reduces to

\arccos\frac{E_n^\pm}{\Delta} = \frac{1}{2}(\pm\alpha - 2\pi n), \quad (4.169)

where \alpha \equiv \alpha_1 - \alpha_2; therefore,

E(\alpha) = \Delta\cos\frac{\alpha_1 - \alpha_2}{2} \equiv \Delta\cos\frac{\alpha}{2}. \quad (4.170)
The contact contains a single, twice-degenerate level. (If we now recall that we considered a single k_y, k_z mode, the degeneracy is 2N_\perp-fold, where N_\perp \approx A/\lambda_F^2 is the number of transverse modes in the contact of area A, in the spirit of the Landauer formula.)
In the second case, for E \ll \Delta, we can expand k^+(E) - k^-(E) \approx k_F E/\mu and set \arccos(E/\Delta) \approx \pi/2, to yield

E_n^\pm = \frac{v_F}{2L}\left[ \pi(2n + 1) \pm \alpha \right]. \quad (4.171)
In this case there are many Andreev levels in each mode. (Exactly how many we
cannot tell, because the above formula doesn’t work when E ± is close to the top of
the well.)
The knowledge of Andreev levels will allow us to calculate the Josephson current
in the SNS junction. We will do this, using different approaches in the cases L = 0
and L → ≈.
The most popular version of the Josephson effect is the one in SIS (superconductor-
insulator-superconductor) tunneling junctions. Josephson’s great achievement was
(We will not discuss it here: the topic was covered in great detail, e.g., in Barone and Paterno [2].) But the Josephson effect is not limited to SIS junctions and the sin \alpha dependence. It appears wherever supercurrent can flow due to coherent transport of Cooper pairs through a weak link, a layer of normal conductor, for example, where superconducting correlations are not supported dynamically. The specific mechanism of the effect is, though, different, which leads, as we shall see shortly, to drastic deviations from (4.172).
First we must decide whether the limit L = 0 for an SNS contact corresponds to a physically sensible situation. It would seem that this limit corresponds simply to a bulk superconductor, and a stationary jump of the superconducting phase is as impossible to realize as a finite voltage drop between two banks of such a contact in the normal state. But we have already encountered the latter situation in the point contact, which is a weak link of exactly the sort we need for the Josephson effect (it is often called an ScS junction, c for constriction). Therefore, the limit L = 0 can be considered as an approximation to the case of a superconducting point contact.
The Josephson current can be calculated if we know a thermodynamic potential of the system, \Psi (for example, G, F, \Omega, \ldots), as a function of the phase difference across the junction, \alpha (this is a general formula, valid for any sort of Josephson junction):

I = 2e\,\frac{d\Psi}{d\alpha}. \quad (4.173)
We have already noted that the supercurrent is a “thermodynamic current,” and its
density can be obtained by taking a variational derivative of any of the thermodynamic
potentials over the vector potential, A (4.120). Since the latter always enters through
the gauge invariant supercurrent velocity,
e
mvs (r) = m→ζ(r) − A(r),
c
ρ 2e ρ
=− .
ρA(r) c ρ→α(r)
ρΨ
j(r) = 2e .
ρ→α(r)
The variational derivative in this expression is the coefficient that appears when we
write Ψ as
ρΨ
Ψ = d 3r · →α(r) + · · · .
ρ→α(r)
∂Ψ
Ψ= α + · · · .
∂α
→α(r) = αρ(x)ex .
we notice that
ρΨ 1 ∂Ψ
= ex . (4.174)
ρ→α(r) A ∂α
\cdots + u_q^*(\mathbf{r})(\hat{\xi} + V(\mathbf{r}))u_q(\mathbf{r})\Big] - \frac{2}{\beta}\sum_q \ln\left(2\cosh\frac{\beta E_q}{2}\right).
In this formula, the first two lines are independent of \alpha. Therefore, the Josephson current can be written as follows:

I = -2e\sum_p \tanh\frac{E_p}{2k_BT}\,\frac{dE_p}{d\alpha} - 2e\cdot 2k_BT \int_0^\infty dE\, \ln\left(2\cosh\frac{E}{2k_BT}\right)\frac{dN_c(E)}{d\alpha}.
The first term contains the contributions from the discrete Andreev levels in the gap, while the second term accounts for the excited states in the continuum, with energies exceeding |\Delta|, N_c(E) being the density of states in the continuum. The latter, since we consider the case L = 0, is evidently the same as in a bulk superconductor, and thus is \alpha independent. Therefore only the discrete Andreev levels contribute to the
Josephson current:

I = -2e\sum_p \tanh\frac{E_p}{2k_BT}\,\frac{dE_p}{d\alpha}. \quad (4.175)
I = \frac{\pi\Delta_0 G_N}{e}\sin\frac{\alpha}{2}, \quad (4.177)

tells us that in the superconducting quantum point contact the critical current is also quantized, in the (nonuniversal) units of e\Delta_0/\hbar.⁵
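Per transverse mode, the current-phase relation following from (4.175) with the single Andreev level E(\alpha) = \Delta\cos(\alpha/2) can be sketched as follows (units e = \hbar = \Delta = 1):

```python
import numpy as np

def current(alpha, T):
    # I = -2e * tanh(E/2T) * dE/d(alpha), with E(alpha) = Delta*cos(alpha/2)
    E = np.cos(alpha / 2.0)
    dE = -0.5 * np.sin(alpha / 2.0)
    return -2.0 * np.tanh(E / (2.0 * T)) * dE   # = sin(alpha/2)*tanh(E/2T)

alpha = np.linspace(0.0, 3.0, 500)
I_cold = current(alpha, 1e-3)                   # T -> 0: I(alpha) = sin(alpha/2)
assert np.allclose(I_cold, np.sin(alpha / 2.0), atol=1e-3)
assert I_cold.max() > 0.99                      # critical current ~ e*Delta/hbar per mode
```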
An important property of the short ScS contact, which made quantization possible, was the N_\perp-fold degeneracy of the Andreev levels (4.170), which ensured that each transverse mode in the contact gives the same contribution to the current, as it did in the normal case. Unfortunately, for an ScS junction this holds only in the zero length limit, as we will see in the next subsection.
Let us now consider a long ballistic SNS junction. Here the normal layer width is still much smaller than both the normal metal coherence length and the elastic scattering length, but exceeds the superconducting coherence length: l_T, l_e \gg L \gg \xi_0.

We begin with an insightful picture due to Bardeen and Johnson [15]. Suppose the temperature is zero and there is a supercurrent through the junction, with superfluid velocity v_s in the x-direction. In a long channel, we can neglect the boundary effects and simply relate v_s to the phase difference:

mv_s = \frac{\alpha}{L}. \quad (4.178)
We have seen earlier, Eq. (4.136), that the supercurrent can be presented as the current
due to macroscopic flow of the condensate, minus the current of the elementary
excitations, carried with velocity vs :
I = nev_s - I_{n,x} = nev_s - \sum_{\mathbf{k}_q} \frac{ek_{q,x}}{mL}\, n_F(E_q^\pm - k_{q,x}v_s). \quad (4.179)
The factor of 1/L appears because the u, v-components of the wave function must
be normalized to the width of the normal layer. The quasiparticle energies in this
expression are measured in the reference frame of the moving condensate. Therefore,
they correspond to vs = 0 and are simply the energies of the Andreev levels at
α = 0.
At zero temperature, the Fermi distribution function is identically zero for all “–”
levels (with kq,x < 0, when the electron moves to the left). Let us now increase α
(and vs ). At first, the contribution from the “+”-levels will be zero as well, and the
current grows linearly with the phase difference. When for the lowest Andreev level
E_q^+ - k_{q,x}v_s = 0,
though, use the insight that the low-lying Andreev states dominate the supercurrent
in a long SNS junction.
We use the basic formula (4.73) to calculate the current in the normal part of the
junction, and for the time being consider a single transverse mode, with fixed k y , k z .
Taking normalization into account, we see that the current is given by
I = \frac{e}{Lm}\int_0^\infty dE\, (\nu_+(E) - \nu_-(E))\left[ n_F(E)\,k_+(E) - (1 - n_F(E))\,k_-(E) \right]. \quad (4.180)
Here in the brackets k_\pm(E) is the momentum of the electron (hole) excitation. The parentheses contain the densities of "\pm"-states, which take into account both the bound Andreev levels and the continuum. At E < |\Delta| they are simply sets of \delta-functions at the energies E_q^\pm.
Since we expect that the main contribution to the current is from the lowest Andreev levels, we substitute k_F (that is, k_{F,x}) for k_\pm(E). Then a simple manipulation with the Fermi distribution functions allows us to extend the integration to (-\infty, \infty):

I = \frac{ev_{F,x}}{L}\int_{-\infty}^{\infty} dE\, (\nu_+(E) - \nu_-(E))\left[ 2n_F(E) - 1 \right]
= -\frac{ev_{F,x}}{L}\int_{-\infty}^{\infty} dE\, (\nu_+(E) - \nu_-(E))\tanh\frac{\beta E}{2}, \quad (4.181)

\beta = 1/k_BT.
Now we find the density of the excited states. Using the Weierstrass formula, we write

\nu_\pm(E) = \sum_{k=-\infty}^{\infty} \delta(E - E_k^\pm)
\equiv -\frac{1}{\pi}\,\Im \sum_{k=-\infty}^{\infty} \frac{1}{E - E_k^\pm + i0} \to -\frac{1}{\pi}\,\Im \sum_{k=-\infty}^{\infty} \frac{1}{E - E_k^\pm + i/\tau}, \quad (4.182)

where on the right-hand side we immediately recognize the retarded Green's function in the Källén-Lehmann representation. Here we introduced a finite lifetime, \tau, due, e.g., to weak nonmagnetic impurity scattering. In the limit \tau \to \infty it reduces to the infinitesimal i0 term. It is convenient to parameterize it by

\tau = \frac{L}{\gamma v_{F,x}}; \quad \gamma = \frac{L}{v_{F,x}\tau} \equiv \frac{L}{l_e} \ll 1,
\nu_\pm(E) = \frac{\gamma L}{\pi v_{F,x}} \sum_{q=-\infty}^{\infty} \left| \frac{LE}{v_{F,x}} - \left(q + \frac{1}{2}\right)\pi \pm \frac{\alpha}{2} + i\gamma \right|^{-2}.
The sum is easily taken using the Poisson summation formula, which transforms the
functional series to the series of Fourier transforms:
\sum_{n=-\infty}^{\infty} f(n) = \int_{-\infty}^{\infty} dx\, f(x) \sum_{n=-\infty}^{\infty} \delta(x - n)
= \sum_{p=-\infty}^{\infty} \int_{-\infty}^{\infty} dx\, f(x)\, e^{2\pi i p x} \equiv \sum_{p=-\infty}^{\infty} \tilde{f}(p). \quad (4.183)
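The Poisson summation formula (4.183) is easily verified numerically, e.g., for a Gaussian f(x) = e^{-ax^2}, whose Fourier transform at 2\pi p is \sqrt{\pi/a}\,e^{-\pi^2 p^2/a}:

```python
import numpy as np

a = 0.3
n = np.arange(-200, 201)
lhs = np.sum(np.exp(-a * n**2))                       # sum_n f(n)
p = np.arange(-200, 201)
rhs = np.sum(np.sqrt(np.pi / a) * np.exp(-(np.pi * p)**2 / a))   # sum_p f~(p)
assert abs(lhs - rhs) < 1e-9                          # the two series agree
```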
\nu_\pm(E) = \frac{L}{\pi v_{F,x}} \sum_{p=-\infty}^{\infty} e^{-2|p|\gamma}\, e^{2ip\left( \frac{LE}{v_{F,x}} - \frac{\pi \pm \alpha}{2} \right)};

\nu_+(E) - \nu_-(E) = \frac{-2iL}{\pi v_{F,x}} \sum_{p=-\infty,\, p\ne 0}^{\infty} (-1)^p\, e^{-2|p|\gamma + 2ip\frac{LE}{v_{F,x}}} \sin p\alpha. \quad (4.184)
Substituting the last expression in (4.181) and taking the tabular integral

\int_{-\infty}^{\infty} dE\, e^{\frac{2ipLE}{v_{F,x}}} \tanh\frac{\beta E}{2} = \frac{2\pi i}{\beta \sinh\frac{2\pi Lp}{\beta v_{F,x}}},
and summing up the contributions from all transverse modes, we finally obtain the
expression for the Josephson current in a long clean SNS junction:
I(\alpha) = \sum_{\mathbf{k}_F} \frac{ev_{F,x}}{L}\,\frac{2}{\pi} \sum_{p=1}^{\infty} (-1)^{p+1}\, e^{-2p\frac{L}{l_e(\mathbf{k}_F)}}\, \frac{L/l_T(\mathbf{k}_F)}{\sinh\frac{pL}{l_T(\mathbf{k}_F)}}\, \sin p\alpha. \quad (4.185)

Here l_T(\mathbf{k}_F) = \frac{v_{F,x}}{2\pi k_B T}, and l_e(\mathbf{k}_F) = v_{F,x}\tau.
This is a remarkable expression. At zero temperature and in the ballistic limit, all L/l_e, L/l_T are zero. You can check that the series

(2/π) Σ_{p=1}^{∞} (−1)^{p+1} sin pα / p

then sums to a sawtooth, α/π for |α| < π. At finite temperature the higher harmonics are suppressed, and eventually, as is clear from (4.185), only the lowest harmonic survives, leaving us with the "standard" I ∝ sin α dependence.
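Both limits are easy to verify numerically. The sketch below (plain Python; all prefactors except 2/π dropped, parameter values illustrative) sums the partial series and evaluates the thermal suppression factor L/[l_T sinh(pL/l_T)] of (4.185):

```python
import math

def sawtooth_series(alpha, pmax):
    """Partial sum of (2/pi) * sum_{p>=1} (-1)^{p+1} sin(p*alpha)/p."""
    return (2 / math.pi) * sum((-1) ** (p + 1) * math.sin(p * alpha) / p
                               for p in range(1, pmax + 1))

# At T = 0 in the ballistic limit the series sums to the sawtooth alpha/pi
# for |alpha| < pi.
alpha = math.pi / 2
s = sawtooth_series(alpha, 200000)
print(s, alpha / math.pi)   # both close to 0.5

# At finite temperature each harmonic of (4.185) carries the factor
# (L/l_T)/sinh(p L/l_T), which suppresses p >= 2 exponentially.
def harmonic_weight(p, L_over_lT):
    return L_over_lT / math.sinh(p * L_over_lT)

w1 = harmonic_weight(1, 3.0)
w2 = harmonic_weight(2, 3.0)
print(w2 / w1)  # already at L = 3 l_T the second harmonic is ~5% of the first
```
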
The total Josephson current is given by the sum of contributions of different
modes, k_F. To calculate it in a classical SNS junction we should integrate over all k_F (with positive projection on the x-axis), which gives a critical current inversely proportional to the normal resistance R_N. In the case of a quantum junction, when a few allowed modes are
present, we see that the critical current is not quantized even at zero temperature: the
amplitudes ev F,x /L depend on the direction of k F ; that is, opening an extra mode
will bring a mode-dependent increase to the critical Josephson current.
A curious situation arises if the pairing symmetry in one bank of a long SNS junction is different from that in the other. We have already discussed the superconducting transition in the case of arbitrary orbital symmetry of pairing. The only complication was that the order parameter Δ(k̂) ∝ ⟨a_k a_{−k}⟩ becomes dependent on the relative momentum direction on the Fermi surface, k̂.
It is now firmly established that in superconducting cuprates the order parameter has d-wave symmetry [3, 10, 11, 26]. This means that under an appropriate choice of coordinate axes, Δ(k̂) ∝ k_x² − k_y². Therefore, it is negative for some directions. If we insist on writing the order parameter as |Δ|e^{iα}, we are now compelled to add π to the superconducting phase. As a result, the Andreev reflected electron may now (depending on its direction of propagation) acquire an extra phase of π.
What happens in a long SNS junction between conventional (s-wave) and d-wave superconductors (a so-called SND junction)? There will now be two kinds of Andreev levels (Fig. 4.20): zero- and π-levels, so called because of the additional phase acquired when reflected from the d-wave bank. The Josephson current is thus a sum of two contributions, each given by (4.185), with the momentum summation limited to the appropriate states [28]. The magnitudes of these terms of course depend on the orientation of the d-wave crystal. In the most symmetric case, when each zero-level has its π-counterpart, we would find that

I(α) = Σ_{k_F}^{zero levels} (e v_{F,x}/L) (2/π) Σ_{p=1}^{∞} (−1)^{p+1} e^{−2pL/l_e(k_F)} [L/l_T(k_F)] / sinh[pL/l_T(k_F)] × [sin pα + sin p(α + π)].    (4.186)

This is a π-periodic sawtooth of α (Fig. 4.21): the Josephson effect period is thus halved.6
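The period halving is easy to make explicit numerically: since sin p(α + π) = (−1)^p sin pα, the odd harmonics cancel and only even ones survive. The sketch below (prefactors dropped, parameter values illustrative) evaluates a truncated version of (4.186):

```python
import math

def I_SND(alpha, L_over_le=0.1, L_over_lT=0.5, pmax=400):
    """Truncated harmonic series of Eq. (4.186); overall prefactor dropped."""
    total = 0.0
    for p in range(1, pmax + 1):
        damping = math.exp(-2 * p * L_over_le) * L_over_lT / math.sinh(p * L_over_lT)
        total += (-1) ** (p + 1) * damping * (math.sin(p * alpha)
                                              + math.sin(p * (alpha + math.pi)))
    return total

a = 0.7
print(I_SND(a), I_SND(a + math.pi))  # equal: the current is pi-periodic
# Only even p contribute: sin(p*a) + sin(p*(a+pi)) = 2 sin(p*a) for even p, 0 for odd p.
```
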
6This phenomenon can occur in different types of Josephson junctions between superconductors
with different pairing symmetry; see Zagoskin [28] and references therein.
4 Methods of the Many-Body Theory in Superconductivity
7 Generally, depending on the relative contributions of zero- and π-levels to the current, the equilibrium phase can take any value from −π to π.
4.5 Andreev Reflection
H = H_L + H_R + H_T,    (4.187)

The phase difference

α(t) = α_0 + (2eV/ℏ) t    (4.189)

describes the time dependence of the field operators due to the voltage applied to one of the banks [as in (3.207)]. We write it explicitly because in the superconducting junction, (4.189) gives the observable nonstationary (ac) Josephson effect: oscillations of the superconducting phase difference across the junction (and therefore of the Josephson current) with the Josephson frequency

ω_J = 2eV/ℏ,    (4.190)

which is determined by the applied voltage. (Physically, this is simply the quantum frequency associated with a Cooper pair gaining or losing the energy 2eV in a transfer across a finite voltage drop V.8)
The current through the contact is derived in the same way as (3.208), and the analogue of (3.210) is given by the (11)-component of a Nambu matrix:

I(t) = (2e/ℏ) [T F_1^{+−}(t, t) − T* F_2^{+−}(t, t)]_{11}.    (4.191)

8 In an SND junction of the previous section (4.186) the frequency of the nonstationary Josephson effect doubles to 2ω_J = 4eV/ℏ, because the period of I(α) was halved.
and in F_2^{+−}(t′, t) the c's and d's are transposed. Expression (4.191) contains both the superconducting (Josephson) current and the normal (quasiparticle) current; the latter can flow if there is a finite voltage drop across the contact.
The "hybrid" Green's functions F^{+−} are then calculated as the (+−)-component of a matrix series over the Keldysh-Nambu matrix T̂ and the unperturbed Green's functions in the banks (assuming them identical):

g^{R,A}(ω) = [πN(0)/√(Δ² − (ω ± iζ)²)] ( −ω ± iζ   Δ ; Δ   −ω ± iζ );    (4.194)

g^{+−}(ω) = 2πi n_F(ω) [−(1/π) Im g^R(ω)].    (4.195)
Here ζ is the small energy dissipation rate in the banks due to inelastic scattering.
The calculations are of necessity much more involved than those of Sect. 3.7.
Therefore, we present here only the results.
In the linear response regime, V → 0, the quasiparticle current through the quantum contact is characterized by the phase-dependent conductance (β = 1/k_BT)

G(α) = (2e²/h) (πβΔ/16ζ) [ϕ sin α / √(1 − ϕ sin²(α/2))]² sech²(βE_a(α)/2).    (4.196)

In the above formula, ϕ = (2πN(0)T)²/[1 + (πN(0)T)²]² is nothing but the effective Landauer transparency of the barrier, T_Landauer, that we calculated in (3.222). The energies

E_a(α) = ±Δ √(1 − ϕ sin²(α/2))    (4.198)

are the energies of the Andreev levels in the contact. If ϕ = 1, we are back to our previous results (4.170) and (4.176) for an ideal short Josephson junction in the one-mode limit.
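The structure of (4.196) and (4.198) can be explored numerically. In the sketch below (all dimensional prefactors absorbed into an overall constant; the values of ϕ and βΔ are illustrative) the conductance is largest near α = π, where the Andreev level E_a(α) dips closest to the Fermi level and thermal activation is easiest:

```python
import math

def E_a(alpha, phi, Delta=1.0):
    """Andreev level energy of Eq. (4.198), in units of Delta."""
    return Delta * math.sqrt(1 - phi * math.sin(alpha / 2) ** 2)

def G(alpha, phi, beta_Delta=20.0):
    """alpha-dependent factors of Eq. (4.196): thermally activated
    quasiparticle conductance; constant prefactors dropped."""
    num = (phi * math.sin(alpha)) ** 2 / (1 - phi * math.sin(alpha / 2) ** 2)
    return num / math.cosh(beta_Delta * E_a(alpha, phi) / 2) ** 2

phi = 0.9
alphas = [i * math.pi / 200 for i in range(1, 200)]
peak = max(alphas, key=lambda a: G(a, phi))
print(peak / math.pi)  # the conductance peak lies close to alpha = pi
```
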
4.6 Tunneling of Single Electrons and Cooper Pairs
In this, the final section of the book, we are going to make good on our promise
to discuss the case when a single electron can make a difference in a many-body
system.9
This can occur if the picture of independent quasiparticles is no longer valid,
and we need to take into account their correlations. Roughly, this happens when
the criterion of the mean-field approximation applicability, Eq. (1.10), fails. This
criterion,
ℏv_F/e² ≫ 1,    (4.199)

shows explicitly that the faster the particles move, the better is the MFA description, something we have discussed already. On the other hand, for correlations to become
important, the particles must be slowed down. One way to do this is to create “bumps”
in their way: tunneling barriers, for example, or point contacts, or whatever weak
links we can invent. Consider, e.g., the system shown in Fig. 4.22, a single-grain
tunnel junction. Here electrons can travel between two massive banks through a
small grain, separated from it by tunneling barriers. (The role of gate electrode will
become clear in a moment.) Though v F is not affected by the presence of the barriers,
the characteristic time electrons spend on the grain (and effective travel velocity) is,
which allows correlations to develop. Alternatively, we could rewrite the left-hand side of (4.199) as (ℏv_F/l)/(e²/l), where l is some characteristic length. This is the ratio of the interlevel spacing in the grain due to spatial quantization (remember the similar result for Andreev levels?) to a characteristic Coulomb energy. If the ratio is large, then correlations that are due to Coulomb interactions are irrelevant, and MFA will work.
In the opposite limit we need a more refined approach.
First let us consider a normal system. Denote the capacitance of our grain by C (for an isolated spherical grain of radius ρ in a medium with dielectric constant κ, C = κρ). Then the electrostatic energy due to the extra electron on the grain is

E = e²/2C.    (4.200)

With the voltages V_1, V_2, V_g applied to the electrodes through the capacitances C_1, C_2, C_g (Fig. 4.22), the energy of n extra electrons becomes

E(n) = n²e²/2C + en[(C_1/C)V_1 + (C_2/C)V_2 + (C_g/C)V_g] ≈ const + (e²/2C)[n − n*(V_1, V_2, V_g)]²,    (4.201)

where

n* = −(1/e)[C_1V_1 + C_2V_2 + C_gV_g].    (4.202)
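The blockade and its lifting at the degeneracy points follow from (4.201) directly. A minimal sketch, in units e = 1, C = 1 (so U_c = 1/2):

```python
# Electrostatic energy of Eq. (4.201): E(n) = (e^2/2C)(n - n*)^2 + const.
# Units: e = 1, C = 1, so the charging energy U_c = 1/2.
def E(n, n_star):
    return 0.5 * (n - n_star) ** 2

def ground_n(n_star):
    """The grain holds the integer number of extra electrons closest to n*."""
    return min(range(-5, 6), key=lambda n: E(n, n_star))

print(ground_n(0.2))                      # 0: Coulomb blockade, no extra electron
print(E(0, 0.5) == E(1, 0.5))             # True: degeneracy point, current deblocked
# Since n* shifts linearly with V_g, Eq. (4.202), the degeneracies recur with
# period Delta(n*) = 1, i.e. the gate-voltage period of Eq. (4.203).
```
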
10 See, e.g., Tinkham [9], Chap. 7, Zagoskin [13], 2.4 and references therein.
Note that though there is no charge transfer between the gate electrode and the rest
of the system, the gate voltage Vg critically affects the electrostatic energy and the
transport through the grain.
Indeed, at a certain combination of parameters the two lowest-lying states with
n and n ± 1 electrons on the dot become degenerate (Fig. 4.24). This degeneracy
is equivalent to lifting the Coulomb blockade, deblocking the current. It appears as
periodic conductance dependence on Vg (single-electron oscillations) with period
ΔV_g ≈ e/C.    (4.203)
Note that in a normal system, degeneracy between states with n and n ± 2 extra
electrons on the grain is impossible: the system will always drop to a lower-lying
state with n ± 1 electrons (Fig. 4.24c).
Observation of the Coulomb blockade and single-electron oscillations is possible if we can resolve the levels differing by the charging energy E(n+1) − E(n) ≈ e²/2C = U_c. On the other hand, the level width is

ΔE ≈ ℏ/τ,    (4.204)

where τ is the characteristic lifetime of the extra electron on the grain. If the conductance of the system is G, this time will be of the order of the discharge time of an RC-contour,

τ = C/G.    (4.205)

The resolution condition is thus

e²/2C > ℏG/C,    (4.206)

leading to the condition on the conductance of our system

G < e²/2ℏ ∼ 2e²/h.    (4.207)

The latter is the same quantum conductance unit (≈ (13 kΩ)⁻¹ in more conventional units) that we have met before. As we remarked, the transport through the grain must be hindered enough by the barriers in order to make correlation effects important. The quantum resistance unit provides a quantitative measure of this hindrance.
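As a quick sanity check of the quoted number (using the CODATA values of h and e):

```python
# The quantum resistance unit h/(2e^2) ~ 13 kOhm quoted in the text.
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C
R_Q = h / (2 * e**2)
print(R_Q)  # ~ 1.29e4 Ohm, i.e. the conductance unit is ~ (13 kOhm)^{-1}
```
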
Fig. 4.24 a Electrostatic energy quantization. b Single-electron degeneracy of the ground state: deblocking of the single-electron tunneling. c No double-electron degeneracy of the ground state
Fig. 4.25 Splitting of the Coulomb parabola due to the parity effect: a 2e-degeneracy is possible if Δ > U_c = e²/2C. b No 2e-degeneracy otherwise

n = 2n_C + n_q,    (4.208)

E = U_c(2n_C + n_q − n*)² + n_qΔ.    (4.209)

ΔV_g^{(SC)} ≈ 2e/C.    (4.210)
Let us now make the entire system superconducting. Then the grain will play the
role of a weak link between the banks 1 and 2, allowing for a superconducting phase
difference and Josephson current to appear. The latter is carried by condensate, i.e.,
Cooper pairs. Therefore, one should expect that 2e-degeneracy of the ground state
will lead to enhancement of the critical current, which thus oscillates as a function of
V_g. Moreover, since at Δ < U_c 2e-degeneracy becomes impossible, the effect must disappear, e.g., in a magnetic field strong enough to suppress Δ below the charging energy.
A quantitative consideration must take into account that we are dealing with a
system of bosons—Cooper pairs—which can be described by a couple of complementary operators, number and phase (as in Sect. 1.4.3). It is convenient to work
in the basis of phase eigenstates, where [see (1.132)]
n̂_C = (1/i) ∂/∂α.

Here α is the superconducting phase of the grain. The eigenstates of the operator n̂_C are evidently

⟨α|n⟩ ∝ e^{inα},    (4.211)

since (1/i) ∂e^{inα}/∂α = n e^{inα}. The phases of the massive superconducting banks, α_{1,2}, are external parameters, and we can choose them to be

α_{1,2} = ± α_0/2.
The Josephson current through the grain is thus

I = (2e/ℏ) ∂E_0(α_0)/∂α_0,    (4.212)
H = 4U_c(−i∂/∂α + n_q/2 − n*/2)² + n_qΔ − E_J[cos(α_1 − α) + cos(α_2 − α)].    (4.213)

In the above formula, the first term is the charging energy expressed through the Cooper pair number operator; the second term is the odd-electron contribution. The third term is the so-called Josephson coupling energy between the banks and the grain (we assume that the system is symmetric). By itself, a term like −E_J cos α would yield the usual Josephson current (2eE_J/ℏ) sin α between the bank and the grain [see Eq. (4.173)].
The Hamiltonian (4.213) is conveniently rewritten as

H = U_c(−2i∂/∂α + n_q − n*)² + n_qΔ − 2Ẽ_J(α_0) cos α,    (4.214)

where Ẽ_J(α_0) = E_J cos(α_0/2) is the only parameter dependent on the superconducting phase difference α_0 between the banks.
The coupling term mixes states with n and n ± 1 Cooper pairs on the grain:

⟨n| cos α |n′⟩ ∝ ∫ dα e^{−inα} cos α e^{in′α} ∝ δ_{n,n′±1}.    (4.215)

We can thus write the Hamiltonian in the basis of two states, e.g., |n⟩ and |n + 1⟩:

H = ( E(2n)   −Ẽ_J(α_0) ; −Ẽ_J(α_0)   E(2n+2) ).    (4.216)
Here E(2n), E(2n+2) are the corresponding eigenvalues of the charging part of the Hamiltonian. The eigenvalues of the matrix (4.216) are easily found, with the ground state energy

E_0(α_0) = ½[E(2n) + E(2n+2)] − ½√{[E(2n) − E(2n+2)]² + 4Ẽ_J²(α_0)}
         = ½[E(2n) + E(2n+2)] − ½√{[E(2n) − E(2n+2)]² + 4E_J² cos²(α_0/2)}.    (4.217)

The Josephson current (4.212) is then

I(α_0) = (2e/ℏ) (1/4)E_J sin α_0 / √{[(E(2n) − E(2n+2))/2E_J]² + cos²(α_0/2)}.    (4.218)
It depends on the gate voltage V_g through E(2n) − E(2n+2), and it is clear that near the values V_g = V_{g,n}, where these two energies are degenerate, it peaks. Near these values of V_g,

I(V_g, α_0) ≈ (2e/ℏ) (1/4)E_J sin α_0 / √{[e(V_g − V_{g,n})/E_J]² + cos²(α_0/2)}.    (4.219)
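Equation (4.218) can be verified against a direct numerical differentiation of the ground state energy of the matrix (4.216). A sketch in units E_J = 1 and 2e/ℏ = 1 (the α_0-independent mean energy in (4.217) is dropped):

```python
import math

def E0(alpha0, dE):
    """Ground state of the 2x2 matrix (4.216); dE = E(2n) - E(2n+2).
    The alpha0-independent mean energy is dropped."""
    EJ_t = math.cos(alpha0 / 2)          # tilde E_J(alpha0) for E_J = 1
    return -math.sqrt((dE / 2) ** 2 + EJ_t ** 2)

def I_closed(alpha0, dE):
    """Equation (4.218) with 2e/hbar = E_J = 1."""
    return 0.25 * math.sin(alpha0) / math.sqrt((dE / 2) ** 2
                                               + math.cos(alpha0 / 2) ** 2)

alpha0, dE, h = 1.1, 0.6, 1e-6
I_num = (E0(alpha0 + h, dE) - E0(alpha0 - h, dE)) / (2 * h)
print(I_num, I_closed(alpha0, dE))  # the two agree
```
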
4.7 Problems
• Problem 1
Show that the following equation for the anomalous Green's function at finite temperature leads to the BCS equation for Δ(T):
Use the self-consistency relation between Δ and F⁺ and the expressions for F⁺ at zero from Sect. 4.4.3.
• Problem 2
Write the analytical expressions for the following Nambu diagram both in the
matrix form and in components.
• Problem 3
Prove that the expression for the superconducting current

j(r) = (ieℏ/2m) Σ_λ ⟨(∇ψ_λ†(r))ψ_λ(r) − ψ_λ†(r)(∇ψ_λ(r))⟩ − (e²/mc) A(r) Σ_λ ⟨ψ_λ†(r)ψ_λ(r)⟩

satisfies

∇ · j(r) = 0.
References
1. Beenakker, C.W.J., van Houten, H.: Phys. Rev. Lett. 66, 3056 (1991)
2. Barone, A., Paternò, G.: Physics and Applications of the Josephson Effect. Wiley, New York (1982)
3. Basov, D.N., Timusk, T.: Electrodynamics of high-Tc superconductors. Rev. Mod. Phys. 77, 721 (2005) (Section VIII)
4. Landau, L.D., Lifshitz, E.M.: Quantum Mechanics, Non-relativistic Theory (Landau and Lif-
shitz Course of Theoretical Physics, v. III.). Pergamon Press, Oxford (1989)
5. Lifshitz, E.M., Pitaevskii, L.P.: Statistical Physics pt.II (Landau and Lifshitz Course of the-
oretical physics, v. IX.). Pergamon Press, Oxford (1980) (Ch. 5. Theory of Superconducting
Fermi Gas)
6. Maiti, S., Chubukov, A.V.: Superconductivity from repulsive interaction. In: Proceedings of the XVII Training Course in the Physics of Strongly Correlated Systems, Vietri sul Mare (Salerno), Italy (2013)
7. Rammer, J., Smith, H.: Quantum field-theoretical methods in transport theory of metals. Rev.
Mod. Phys. 58, 323 (1986)
8. Svidzinskii, A.V.: Spatially Inhomogeneous Problems in the Theory of Superconductivity. Nauka, Moscow (1982) (in Russian) (An excellent book, using both the functional integration and Green's function formalisms)
9. Tinkham, M.: Introduction to Superconductivity, 2nd edn. McGraw-Hill, New York (1996) (A classical treatise on superconductivity. This second edition contains a special chapter (Ch. 7) devoted to effects in small (mesoscopic) Josephson junctions)
10. Tsuei, C.C., Kirtley, J.R.: Pairing symmetry in cuprate superconductors. Rev. Mod. Phys. 72,
969 (2000)
11. Van Harlingen, D.J.: Phase-sensitive tests of the symmetry of the pairing state in the high-temperature superconductors—evidence for d_{x²−y²} symmetry. Rev. Mod. Phys. 67, 515 (1995)
12. Vonsovsky, S.V., Izyumov, Yu.A., Kurmaev, E.Z.: Superconductivity of Transition Metals, Their Alloys and Compounds. Springer, Berlin (1982) (Chapter 2 of this comprehensive book provides both a formulation of the Nambu-Gor'kov formalism and a detailed derivation and analysis of the Eliashberg equations. In Chapter 3 the formalism is generalized to the case of magnetic metals)
13. Zagoskin, A.M.: Quantum Engineering: Theory and Design of Quantum Coherent Structures.
Cambridge University Press, Cambridge (2011). (Chapters 2 and 4 contain a review of theory
and experiment on superconducting quantum bits (qubits) and qubit arrays)
We have seen that the existence of the Fermi surface gives rise to subtle and important
effects. One example is the Cooper pairing, where the instability of the normal ground
state in the presence of an arbitrarily small electron–electron attraction near the Fermi
surface was due to the effectively two-dimensional character of the problem. Another
is provided by the charge screening, where instead of an exponential Thomas–Fermi
screened potential one obtains Friedel oscillations with a much slower, power-law
potential drop. Mathematically, the latter followed from branch cuts—as opposed
to simple poles—in the complex frequency plane of the response function of the
electron systems (in this case, the polarization operator). It is natural to expect that
such non-analyticity may also produce a non-exponential time dependence in the
response of a system of fermions to a perturbation. This is actually the case, e.g., if
a point charge is suddenly introduced into a sea of electrons.
Fig. 5.1 Schematics of X-ray spectroscopy of metals. A core hole potential is instantaneously created or eliminated with, respectively, the absorption or emission of an X-ray photon
Such a situation arises in the X-ray spectroscopy of metals (see Fig. 5.1). An electron in a core level of an ion in the metal absorbs an X-ray photon and is (practically instantaneously) ejected to the top of the conduction band, leaving behind a positively charged core hole in the middle of a sea of conduction electrons, which rush to screen it. The filling of the hole by a conduction electron, with the emission of a photon, will later eliminate this charge. This process can be described (see, e.g., [3], Sect. 8.3.B) by the Hamiltonian

H = E_h b†b + Σ_k ε_k c_k†c_k + b†b Σ_{kq} V_{kq} c_k†c_q.    (5.1)

Here b†, b are the core hole creation/annihilation operators, c_k†, c_k ditto for the band electrons, E_h is the core hole energy, ε_k the band dispersion law, and V_{kq} the matrix elements of the core hole scattering potential. We consider here spinless fermions, for the sake of simplicity.
At zero temperature the X-ray emission and absorption spectra obviously have a coinciding threshold, ω_0: ω_ab ≥ ω_0 ≥ ω_em, where ω_0 = ε_F − E_h (with the energies measured from the bottom of the conduction band). The absorption and emission spectra near the threshold have a power-law shape. For example, the absorption intensity

A(ω) ∝ (ω − ω_0)^α,    (5.2)

where δ(ε_F) is the phase shift of an electron at the Fermi surface, scattered by the core-hole potential.
In order to understand this, we start from the result of [11], who related the X-ray
absorption and emission rates to the Green’s function of the core hole,
5.1 Orthogonality Catastrophe and Related Effects
Here |0⟩ is the ground state of the system with no core hole; obviously, ⟨0|bb†|0⟩ = 1; and we made use of the hole being created and annihilated instantaneously, so that the Hamiltonian (5.1) is either

H_0 = Σ_k ε_k c_k†c_k,    (5.5)

or

ω_0 + H_1 = ω_0 + Σ_k ε_k c_k†c_k + Σ_{kq} V_{kq} c_k†c_q.    (5.6)
If the core hole potential produces a bound state, there are two possibilities: it will be filled by one of the conduction electrons, or it will remain empty (Fig. 5.2). The above expression is easily modified to take both cases into account:

G_h(t) = −iθ(t) e^{−iω_0t} Σ_m Σ_{s=e,f} e^{−(i/ℏ)(Ẽ_m^s − E_0)t} |⟨m̃^s|0⟩|².    (5.9)

Here the index s labels the empty or filled bound state of the core hole potential. We will see what difference it makes.
The overlap ⟨0̃^s|0⟩ between the ground state wave functions of the N-electron systems with and without the core hole potential tends to zero as the number of electrons grows. We have quoted [14] on a similar situation in Sect. 2.2 when discussing an approximate calculation of an N-body wave function in the limit N → ∞: even a small error in a one-particle wave function leads to the approximate N-particle function being orthogonal to the exact one. This situation is called the orthogonality catastrophe.
5 Many-Body Theory in One Dimension
that is, a determinant composed of the single-particle overlaps. Anderson [13] demonstrated that actually

⟨0̃^s|0⟩ ∝ N^{−x},    (5.11)
Calculating the overlaps of N-particle wave functions in (5.9) and (5.11) requires certain approximations. For example, one can assume the band and the core-hole potential to be spherically symmetric and consider only s-wave scattering. This effectively reduces the problem to one dimension, with spinless (for simplicity) fermions confined to a ray, r > 0, and a scattering potential placed at the origin. Let us go one step further and consider a 1D chain of finite length (l − 1), with nearest-neighbour hopping, free boundary conditions, and a potential at the first site, which represents the core hole charge and can be switched on and off (Fig. 5.3) [12]:
H_0 = −T Σ_{i=1}^{l−2} (ψ_i†ψ_{i+1} + ψ_{i+1}†ψ_i);    (5.13)

H_1 = H_0 − V ψ_1†ψ_1.    (5.14)
Here ψ_i†, ψ_i creates/annihilates an electron on the ith site, and the negative signs of the tunneling amplitude T and the "hole potential" V are chosen for convenience. The one-particle eigenstates of H_0 and H_1 are produced by acting with the creation operators on the vacuum state, |0⟩, and can be written as

|k⟩ = Σ_{j=1}^{l−1} k_j ψ_j†|0⟩ ≡ C Σ_{j=1}^{l−1} sin[k(j − l)] ψ_j†|0⟩.    (5.15)

Here C is the normalization constant. The free boundary condition at the last, (l−1)th, site is satisfied: the wave function is zero at j = l. Since sin[k(j+1−l)] + sin[k(j−1−l)] = 2 sin[k(j−l)] cos k, the Schrödinger equation H|k⟩ = ε(k)|k⟩ is also automatically satisfied everywhere, except the first site, for any k, if only

ε(k) = −2T cos k    (5.16)

(the standard tight-binding dispersion law). The allowed values of k are determined from the contribution to the Schrödinger equation from the site j = 1:

sin[k(l−1)] / sin[kl] = T/V.    (5.17)
Without the core hole, V = 0, this yields sin kl = 0 and the obvious set of (l − 1) band states with k_n = πn/l, n = 1, 2, …, (l − 1). A small enough attractive potential (V > 0) does not change this situation qualitatively. Nevertheless, at some V a bound state can be formed (Fig. 5.4). This happens when Eq. (5.17) acquires an imaginary root, k = iθ, i.e.,
Fig. 5.4 Band spectrum and the bound state in the tight-binding model
sinh[θ(l−1)] / sinh[θl] = T/V.    (5.18)

Then the one-particle wave function amplitude is ∝ sinh[θ(l − j)]. The wave function should decay away from the origin. Therefore θ > 0, and there is only one bound state in our model. Its energy is

ε_B = −2T cosh θ.    (5.19)
In the limit of an infinitely long chain we can introduce the scattering phase shift δ(k) for the band states via k_n l + δ(k_n) = πn. Therefore

sin[δ(k) + k] / sin δ(k) = T/V,    (5.22)

and

δ(k) = arctan[ sin k / ((T/V) − cos k) ].    (5.23)

In the same limit, Eq. (5.18) yields

e^{−θ} = T/V;   ε_B = −(T² + V²)/V.    (5.24)
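Both the band spectrum (5.16)-(5.17) and the bound state energy (5.24) are easy to confirm by diagonalizing the chain numerically. A sketch (NumPy; the parameter values T = 1, V = 2 are illustrative):

```python
import numpy as np

# Open chain of l-1 sites, Eqs. (5.13)-(5.14), with T = 1.  Check that
# (a) without the potential the spectrum is -2T cos(pi n / l), Eq. (5.16);
# (b) with V = 2 > T a single bound state appears at eps_B = -(T^2+V^2)/V,
#     Eq. (5.24).
T, V, l = 1.0, 2.0, 400
H0 = -T * (np.eye(l - 1, k=1) + np.eye(l - 1, k=-1))
H1 = H0.copy()
H1[0, 0] = -V

eps0 = np.sort(np.linalg.eigvalsh(H0))
band = np.sort(-2 * T * np.cos(np.pi * np.arange(1, l) / l))
print(np.max(np.abs(eps0 - band)))      # ~ 1e-13: free spectrum reproduced

eps1 = np.sort(np.linalg.eigvalsh(H1))
eps_B = -(T**2 + V**2) / V
print(eps1[0], eps_B)                   # one bound state below the band edge -2T
```
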
The advantage of our tight-binding model is that it allows a simple expression for the overlap (5.10). We will use the trigonometric identities

Σ_{j=1}^{l−1} sin²[k(j−l)] = ½ [(l−1) − sin[k(l−1)] cos[kl] / sin k],

Σ_{j=1}^{l−1} sinh²[θ(j−l)] = ½ [sinh[θ(l−1)] cosh[θl] / sinh θ − (l−1)],

Σ_{j=1}^{l−1} sin[k(j−l)] sin[q(j−l)] = ½ [sin[k(l−1)] sin[ql] − sin[q(l−1)] sin[kl]] / (cos q − cos k).
From the first two equations the normalizations of the one-particle states are easily found. If we substitute in the last one the wave vectors k and k̃, which satisfy Eq. (5.17) with the potentials U and Ũ, respectively, we obtain

Σ_{j=1}^{l−1} sin[k(j−l)] sin[k̃(j−l)] = (Ũ − U) sin[k(l−1)] sin[k̃(l−1)] / [2T(cos k̃ − cos k)]
 ≡ (Ũ − U) sin[k(l−1)] sin[k̃(l−1)] / [ε(k) − ε(k̃)].    (5.25)

Therefore

⟨k̃|k⟩ = (Ũ − U) C(k̃)C(k) / [ε(k) − ε(k̃)],    (5.26)

where C(k) = √2 sin[k(l−1)] [(l−1) − (sin[k(l−1)] cos[kl])/sin k]^{−1/2}. For the overlap with a bound state (if it exists) we just need to replace ε(k̃) with ε_B, Eq. (5.19), and C(k̃) with

C_B = √2 sinh[θ(l−1)] [(sinh[θ(l−1)] cosh[θl])/sinh θ − (l−1)]^{−1/2}.

In our problem, U = 0 (no core hole), Ũ = V, sin[k(l−1)] = ± sin k and, in the limit l → ∞, sin[k̃(l−1)] = (T/V) sin[k̃l] = ±(T/V) sin δ(k̃).
Equation (5.26) is exact. We now linearize the dispersion relation near the Fermi surface:

ε(k) ≈ ε_F + v_F(k − k_F),    (5.27)

and find
This is the same formula that was obtained by Anderson [13] in first-order perturbation theory for s-wave scattering. Following his approach to deriving the orthogonality exponent (5.12), we transform it to a more convenient form using the Cauchy formula,

det[1/(a_m − b_n)] = Π_{m>n}(a_m − a_n) Π_{m>n}(b_m − b_n) / Π_{m,n}(a_m − b_n).    (5.30)

This allows us to calculate the logarithm of |⟨0̃|0⟩|:

ln |⟨0̃|0⟩| ≈ Σ_n ln[sin(πλ_n)/(πλ_n)] + Σ_n Σ_{m<n} { ln[1 + (λ_n − λ_m)/(n−m)] − ln[1 + λ_n/(n−m)] − ln[1 − λ_m/(n−m)] },    (5.31)

where λ_n ≡ δ(k_n)/π.
Assuming λ_n ≪ 1, we can expand (5.31) in powers of λ_n and check that the linear terms cancel, while the quadratic terms yield Anderson's result (5.11), (5.12): up to a factor,

|⟨0̃_0|0⟩| ≈ N^{−(1/2)(δ(k_F)/π)²}.

This formula holds also if the core potential has a bound state, provided this bound state is filled. Otherwise the Anderson exponent is given by

x_e = (1/2)(1 − δ(k_F)/π)².    (5.33)
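The orthogonality catastrophe itself can be observed numerically: build the single-particle orbitals of H_0 and H_1, and take the determinant of the occupied-orbital overlap matrix, as in (5.10)-(5.11). A sketch (NumPy; V = 0.9 < T, so no bound state; half filling; all parameter values illustrative):

```python
import numpy as np

# Overlap of the many-body ground states with and without the core-hole
# potential: the determinant of the matrix of occupied single-particle
# overlaps.  It should decay with system size as a power law, Eq. (5.11).
def overlap(nsites, V, T=1.0, filling=0.5):
    H0 = -T * (np.eye(nsites, k=1) + np.eye(nsites, k=-1))
    H1 = H0.copy()
    H1[0, 0] = -V
    _, U0 = np.linalg.eigh(H0)   # eigenvalues ascending: lowest N occupied
    _, U1 = np.linalg.eigh(H1)
    N = int(filling * nsites)
    return abs(np.linalg.det(U0[:, :N].T @ U1[:, :N]))

s = [overlap(n, V=0.9) for n in (40, 80, 160, 320)]
print(s)   # steadily decreasing overlaps
# fitted exponent from the doubling sequence, to compare with
# (1/2)(delta(k_F)/pi)^2 = 0.027 for these parameters
x_est = float(np.log(s[0] / s[-1]) / np.log(320 / 40))
print(x_est)
```
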
Instead of following the original derivation of the latter [9, 10] we will use some
generalizations of our one-dimensional approach, which do not rely on such assump-
tions as smallness of the phase shift, provide us with some powerful theoretical tools
and, in the end, better illuminate the physical meaning of these results.
Let us return to the tight-binding Hamiltonian of Eq. (5.13), adding to the model the particle–particle interaction on the neighbouring sites:

H = −T Σ_{i=1}^{l−2} (ψ_i†ψ_{i+1} + ψ_{i+1}†ψ_i) + g Σ_{i=1}^{l−2} (ψ_i†ψ_i − n_i)(ψ_{i+1}†ψ_{i+1} − n_{i+1}).    (5.34)
Here n_j ≡ ⟨0|ψ_j†ψ_j|0⟩ is the ground state expectation value of the site occupation number. Besides modeling various one-dimensional conductors with interactions (Eggert 2009) and impurity scattering (with a given orbital momentum) in a 3D electron gas, this Hamiltonian also describes the XXZ spin-1/2 chain. It turns out that the latter's Hamiltonian,

H_S = Σ_i [ (J_⊥/4)(σ_i^x σ_{i+1}^x + σ_i^y σ_{i+1}^y) + (J_z/4) σ_i^z σ_{i+1}^z ],    (5.35)

where σ^{x,y,z} are the Pauli matrices, can be transformed to the form (5.34) with T = J_⊥/2, g = J_z and n = 1/2, i.e., the Fermi level in the middle of the band (see, e.g., [4], 1.2). The fermionic operators create and remove kinks in the spin configuration. Thus the model we are going to investigate has quite versatile applications.
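The spin-fermion mapping can be checked at the free-fermion point J_z = 0, where the exact ground state energy of the spin chain must equal the filled Fermi sea of the hopping model. A sketch (NumPy, small chain; the identification T = J_⊥/2 is the convention assumed here):

```python
import numpy as np

# Jordan-Wigner check at J_z = 0: ground-state energy of the open XX chain
# (5.35) versus the filled Fermi sea of the hopping Hamiltonian (5.13)
# with T = J_perp/2.
nsites, J_perp = 8, 1.0

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def two_site(op, i):
    """op acting on sites i and i+1 of the chain, identity elsewhere."""
    mats = [np.eye(2, dtype=complex)] * nsites
    mats[i], mats[i + 1] = op, op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

HS = sum(J_perp / 4 * (two_site(sx, i) + two_site(sy, i))
         for i in range(nsites - 1))
E_spin = np.linalg.eigvalsh(HS)[0]

# free fermions: single-particle band of the open chain, negative states filled
T = J_perp / 2
h1 = -T * (np.eye(nsites, k=1) + np.eye(nsites, k=-1))
eps = np.linalg.eigvalsh(h1)
E_fermi = eps[eps < 0].sum()
print(E_spin, E_fermi)  # equal
```
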
Substituting the Fourier transforms of the site operators, ψ_j = (1/√L) Σ_k e^{ikx_j} c_k, we find for the non-interacting part of the Hamiltonian:

H_0 = Σ_{k=−k_max}^{k_max} ε(k) c_k†c_k = Σ_q ε(−k_F + q) c†_{−k_F+q} c_{−k_F+q} + Σ_{q′} ε(k_F + q′) c†_{k_F+q′} c_{k_F+q′}.    (5.36)

Since linearizing the dispersion law near the Fermi surface (5.27) brought useful simplifications earlier, we will do the same here:

H_0 ≈ Σ_q (−v_Fq) c†_{−k_F+q} c_{−k_F+q} + Σ_{q′} v_Fq′ c†_{k_F+q′} c_{k_F+q′}.    (5.37)
In terms of the new operators d_q ≡ c_{k_F+q} (right-movers) and s_q ≡ c_{−k_F+q} (left-movers), the anticommutation relations are, obviously, as follows:

{d_k, d_q†} = δ_{kq};  {s_k, s_q†} = δ_{kq};  {d_k, s_q†} = δ_{k,q−2k_F}.    (5.40)
This is just a different notation, as long as we keep k_F + q > 0 for the "right-movers" and −k_F + q < 0 for the "left-movers". The crucial step is now to consider them as two different kinds of fermions, and let q run from −∞ to ∞ both for s_q and d_q (Fig. 5.5). This is, of course, an approximation. As long as we are interested in low-energy excitations, it is reasonable enough: situations when the anticommutators of right- and left-movers would be nonzero require a momentum transfer of 2k_F. The advantage of this approximation is that the Hamiltonian (5.34) becomes exactly solvable [3].
The electron annihilation operator at the position x_j is now written as

ψ_j = (1/√L) Σ_q e^{ik_Fx_j} e^{iqx_j} d_q + (1/√L) Σ_q e^{−ik_Fx_j} e^{iqx_j} s_q
    = e^{ik_Fx_j} (1/√L) Σ_q e^{iqx_j} d_q + e^{−ik_Fx_j} (1/√L) Σ_q e^{iqx_j} s_q
    ≡ e^{ik_Fx_j} d(x_j) + e^{−ik_Fx_j} s(x_j),    (5.41)

where L is the length of our chain. The anticommutation relations between the right- or left-movers are

{d(x_j), d†(x_m)} = L^{−1} Σ_{q,q′} e^{iqx_j − iq′x_m} {d_q, d_{q′}†} = L^{−1} Σ_q e^{iq(x_j − x_m)}
 = L^{−1} Σ_{n=−∞}^{∞} e^{2πin(x_j − x_m)/L} = Σ_{n′=−∞}^{∞} δ(x_j − x_m − n′L),    (5.42)

since the allowed wave numbers are

k_n = 2πn/L,  n = 0, ±1, ±2, …
5.2 Tomonaga-Luttinger Model
In the limit L → ∞ the summation over the wave numbers is replaced by integration,

Σ_q → (L/2π) ∫ dq.

On the right-hand side of Eq. (5.42) only the term with n = 0 will remain:

{d(x), d†(y)} = {s(x), s†(y)} = δ(x − y);  {d(x), s†(y)} = {d(x), d(y)} = ⋯ = 0.    (5.44)

Then the fermion field operators ψ(x) will satisfy the standard anticommutation relations, if

ψ(x) = (1/√2) [e^{ik_Fx} d(x) + e^{−ik_Fx} s(x)].    (5.45)

Note that the interaction term of the Hamiltonian (5.34) enters through the expression

ψ_i†ψ_i − n_i ≡ :ψ_i†ψ_i:,    (5.46)

the normally ordered product of Fermi operators, whose ground state expectation value is zero. In the continuous limit then

:ψ†(x)ψ(x): = ½ :[d†(x)d(x) + s†(x)s(x)]: + ½ :[d†(x)s(x)e^{−2ik_Fx} + s†(x)d(x)e^{2ik_Fx}]:    (5.47)
We would like to get rid of the electron–electron interactions (the quartic terms in (5.34)) and deal with free particles. In the BCS theory this was made possible by the presence of the superconducting condensate, but at the price of having to solve a self-consistent equation for the order parameter. Here a different approach is possible, due to the strictly one-dimensional character of the problem (and the approximations we have made up to this point). The goal of the following exercise is to express the Hamiltonian of 1D fermions in terms of bosonic fermion density operators.

Fig. 5.5 Tomonaga (a) and Luttinger (b) models of a one-dimensional system of fermions. Note that in the Tomonaga model the spectrum is bounded from below, and the total number of fermions is finite. In the Luttinger model the spectrum is not bounded, the number of particles is infinite, and we have two distinct "species" of fermions (right- and left-movers)

The interaction terms, which are quartic in terms of fermions, will become quadratic in terms of these new operators, i.e., yielding the Hamiltonian for non-interacting bosons. This is the idea of bosonization.
5.2.2 Bosonization

where

φ^d_Q = Σ_k d_k†d_{k+Q} ≡ Σ_k d†_{k−Q}d_k;  (φ^d_Q)† = φ^d_{−Q},    (5.50)
1 See [3], Sect. 4.4; [4], Sect. 2.1; [2, 6] for more details and original references. Be aware of
significant differences in notation and conventions.
Fig. 5.6 Action of the fermion density operators on the ground state of the Luttinger model

Substituting here

In the same way, we obtain the relations for the left-movers. In particular, for Q, Q′ ≠ 0,

[φ^s_Q, φ^s_{Q′}] = −(QL/2π) δ_{Q,−Q′}.    (5.52)
For the position-dependent densities this yields

[φ_d(x), φ_d(x′)] = L^{−2} Σ_{Q,Q′} (QL/2π) δ_{Q,−Q′} e^{iQx + iQ′x′} → −(i/2π) ∂δ(x − x′)/∂x  (L → ∞);    (5.53)

[φ_s(x), φ_s(x′)] → (i/2π) ∂δ(x − x′)/∂x  (L → ∞).    (5.54)
We should also include in the expansions (5.49) for φ_{d,s}(x) the contributions from φ^{d,s}_{Q=0}. The case Q = 0 (the so-called zero mode) requires special consideration. Indeed, the expectation value of φ^d_0 is infinite. But the normally ordered operator :φ^d_0:,

:φ^d_0: = Σ_k :d_k†d_k: = Σ_k (c†_{k_F+k}c_{k_F+k} − n_{k_F+k}) = N_d,    (5.55)

is the number operator for the right-moving particles in the given state relative to the ground state. The same reasoning applies to the left-moving zero mode. Note that :φ^{d,s}_0: commute with each other and with all φ^{d,s}_k with k ≠ 0.
It is clear from (5.51), (5.52) that the Fourier components φ^{d,s}_k can be expressed through standard Bose operators b^{d,s}_k with [b_k, b_{k′}†] = δ_{kk′}: for k > 0

φ^d_k = −i√(kL/2π) b^d_k;   φ^d_{−k} = i√(kL/2π) (b^d_k)†;    (5.56)

φ^s_k = i√(kL/2π) (b^s_k)†;   φ^s_{−k} = −i√(kL/2π) b^s_k.    (5.57)

Again, it is straightforward to check that left and right bosons commute with each other and with both right- and left-moving zero modes.
  [ (b^d_{|Q|})†, H_0 ] = −( v_F |Q| ) (b^d_{|Q|})† .   (5.59)

(You see that the linear dispersion law was important here.) In the same way we can show that (Q > 0)

  [ b^s_Q, H_0 ] = ( v_F Q ) b^s_Q   (5.60)

and
Recalling that for Bose operators [b, b†b] = b and [b†, b†b] = −b†, we see that the Hamiltonian H_0 can be written as

  H_0 = Σ_{Q>0} v_F Q [ (b^d_Q)† b^d_Q + (b^s_Q)† b^s_Q ] + H'_0 .   (5.62)

The term H'_0 is the part of the expression (5.39) which commutes with all Bose operators. Actually, it equals

  H'_0 = (α v_F / L) ( N_d(N_d + 1) + N_s(N_s + 1) )   (5.63)
and is simply the energy of the zero modes. Indeed, in the right-moving zero mode all the states up to the state with momentum k_F + k_max are occupied. The additional number of right-moving particles is N_d = ⟨N_d⟩ (which can have either sign, see Eq. (5.55)), and the additional energy relative to the (infinite) energy of the states filled up to k_F is

  E_{0,d} = Σ_Q v_F Q = v_F (2α/L) Σ_{n=1}^{N_d} n = v_F (2α/L) N_d(N_d + 1)/2

for positive N_d, and

  E_{0,d} = −v_F (2α/L) Σ_{n=N_d+1}^{0} n = v_F (2α/L) N_d(N_d + 1)/2

for negative N_d's (see Fig. 5.6). In the same way, for the left-moving zero mode

  E_{0,s} = v_F (2α/L) N_s(N_s + 1)/2 ,
242 5 Many-Body Theory in One Dimension
which is in full agreement with Eq. (5.63). The rest of the Hamiltonian (5.62)
describes the electron-hole excitations against the background of zero modes.
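The two level sums above can be checked directly; a minimal Python sketch of the combinatorial identity (the common prefactor v_F·2α/L is omitted):

```python
# Check the zero-mode level sums behind Eq. (5.63): for either sign of Nd,
# the sum over the added (or removed) levels equals Nd*(Nd + 1)/2.
def level_sum(Nd):
    if Nd >= 0:
        return sum(range(1, Nd + 1))    # sum_{n=1}^{Nd} n
    return -sum(range(Nd + 1, 1))       # -sum_{n=Nd+1}^{0} n

for Nd in range(-6, 7):
    assert level_sum(Nd) == Nd * (Nd + 1) // 2
```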
Not only the Hamiltonian of a system of free fermions in one dimension can be expressed in terms of Bose operators. Single Fermi operators can be expressed in terms of Bose operators as well. In order to do so, let us first introduce the "phase" operators ω^{d,s}(x), such that

  φ^{d,s}(x) = N_{d,s}/L + (1/2α)(χ/χx) ω^{d,s}(x) .   (5.64)
Then we can write
This expression resembles formula (2.17) for the phonon field, especially if we substitute here the Bose operators from (5.56, 5.57):

  ω^d(x) = −√(2α/L) Σ_{k>0} (1/√k) [ e^{ikx} b^d_k + e^{−ikx} (b^d_k)† ] e^{−kε/2} ≡ τ_d(x) + τ†_d(x) ;   (5.66)

  ω^s(x) = √(2α/L) Σ_{k>0} (1/√k) [ e^{ikx} (b^s_k)† + e^{−ikx} b^s_k ] e^{−kε/2} ≡ τ†_s(x) + τ_s(x) .   (5.67)
The factor exp[−kε/2], where we will eventually put ε → 0, enables the Abel regularization (Sect. 2.2.1) in case the resulting expressions diverge. It is straightforward to check that the only nonzero commutators of the τ-operators are

  [τ_d(x), τ†_d(y)] = (2α/L) Σ_{k>0} (1/k) e^{ik(x−y)−kε} = Σ_{n=1}^{∞} (1/n) [ e^{(2α/L)(i(x−y)−ε)} ]^n
    = −ln( 1 − e^{(2α/L)(i(x−y)−ε)} ) −→ (L → ∞) −ln( (2αi/L)(y − x − iε) ) ,   (5.68)
and

  [τ_s(x), τ†_s(y)] = (2α/L) Σ_{k>0} (1/k) e^{−ik(x−y)−kε}
    = −ln( 1 − e^{(2α/L)(−i(x−y)−ε)} ) −→ (L → ∞) −ln( (2αi/L)(x − y − iε) ) .   (5.69)
For operators A and B such that [A, B] commutes with each of them, the Baker-Hausdorff formula holds:

  e^A e^B = e^{A+B} e^{[A,B]/2} .   (5.71)
We can also calculate the commutator of the fields ω(x). For example,

  [ω^d(x), ω^d(y)] = [τ_d(x), τ†_d(y)] − [τ_d(y), τ†_d(x)]
    = −ln( 1 − e^{(2αi/L)(x−y+iε)} ) + ln( 1 − e^{(2αi/L)(y−x+iε)} )
    −→ (ε → 0) ln[ ( 1 − e^{−(2αi/L)(x−y)} ) / ( 1 − e^{(2αi/L)(x−y)} ) ] −→ (L → ∞) iα sgn(x − y) .
Using the Baker-Hausdorff theorem (of which the formula (5.71) is a corollary),

  e^{−B} A e^{B} = Σ_{n=0}^{∞} (1/n!) [A, B]_n
    ≡ A + [A, B] + (1/2!)[[A, B], B] + ⋯ + (1/n!)[⋯[[A, B], B] ⋯ B] (n times) + ⋯ ,   (5.77)
we see that, since [ω, ω†] is not an operator but a c-number, the exponent of the "phase operator" satisfies the conditions

  [ e^{iω^d(x)}, φ_d(y) ] = ( e^{iω^d(x)} φ_d(y) e^{−iω^d(x)} − φ_d(y) ) e^{iω^d(x)}
    = −i [φ_d(y), ω^d(x)] e^{iω^d(x)} = −(i/2α)(χ/χy) [ω^d(y), ω^d(x)] e^{iω^d(x)}
    = ( e^{iω^d(x)}/2 )(χ/χy) sgn(y − x) = e^{iω^d(x)} λ(x − y) ,   (5.78)
(see Eqs. (5.64, 5.74)), and the same holds for ωs (x), φs (y). Therefore eiω(x) could
be a candidate for representing the right- or left-moving field operator.
Now we arrive at the central point of the bosonization technique. With the help
of Baker-Hausdorff formula (5.71) written as
eA eB = eB eA e[A,B] , (5.79)
and the commutation relation (5.74), we see that the exponentials of the bosonic "phase" operators, e^{±iω(x)}, e^{±iω(y)}, anticommute:

  e^{iω^d(x)} e^{iω^d(y)} = e^{iω^d(y)} e^{iω^d(x)} e^{[iω^d(x), iω^d(y)]} −→ (L → ∞) e^{iω^d(y)} e^{iω^d(x)} e^{±iα} = −e^{iω^d(y)} e^{iω^d(x)} ;   (5.80)

  e^{−iω^d(x)} e^{−iω^d(y)} −→ (L → ∞) e^{−iω^d(y)} e^{−iω^d(x)} e^{±iα} = −e^{−iω^d(y)} e^{−iω^d(x)} ;
  { e^{iω^d(x)}, e^{−iω^d(y)} } = e^{i(ω^d(x) − ω^d(y))} ( e^{−[ω^d(x), ω^d(y)]/2} + e^{[ω^d(x), ω^d(y)]/2} )
    −→ (L → ∞) e^{i(ω^d(x) − ω^d(y))} ( e^{−(iα/2) sgn(x−y)} + e^{(iα/2) sgn(x−y)} ) = { 0 (x ≠ y); 2 (x = y) } ,   (5.81)
and the same for the left movers. Therefore it is indeed possible to represent fermions in one dimension in terms of bosonic operators: d(x) ∝ e^{iω^d(x)}, s(x) ∝ e^{iω^s(x)}, with

  d(x) = (1/√(2αε)) F_d e^{i(2α/L)N_d x} e^{iω^d(x)} = (1/√L) F_d e^{i(2α/L)N_d x} e^{iτ†_d(x)} e^{iτ_d(x)} ;   (5.82)

  s(x) = (1/√(2αε)) F_s e^{i(2α/L)N_s x} e^{iω^s(x)} = (1/√L) F_s e^{i(2α/L)N_s x} e^{iτ†_s(x)} e^{iτ_s(x)} .   (5.83)
The appearance of the zero mode number operators N_{d,s} in the exponents does not contradict (5.78), since they commute with the "phase" operators. The operators F_ξ (the so-called Klein factors) satisfy the following (anti)commutation relations:
They play a double role. First, they ensure that right- and left-moving fermions of
(5.82, 5.83) anticommute (the relations (5.80, 5.81) only provide for the anticom-
mutation inside each of the right- or left-moving groups). They also enforce the
anticommutation relations between the fermions belonging to some other distinct
species (e.g., with opposite spins). Second, the Klein factors change the number of
fermions in the system by one. This is important: as one can show [6], the reason
why bosonization works is that the Fock space of a one-dimensional Fermi system can be split into subspaces, each with a fixed particle number; the excitations within each of these subspaces create particle-hole pairs, and are thus bosonic. The Klein factors serve as ladder operators, i.e., they allow transitions between such subspaces. (Note that the commutation relations (5.85) between F, F† and the number operators are the same as those between the creation/annihilation operators a, a† and the number operator a†a.) The bosonic and fermionic descriptions of 1D systems are thus exactly equivalent2 if the energy spectrum is not bounded from below (as in
Luttinger, but not Tomonaga, model—see Fig. 5.5). This is not going to create any
2 One way of wrapping one’s head around this counterintuitive fact is to recall that the difference
between bosons and fermions comes from the wave function of the latter changing sign, when two
fermions change places. But in one dimension there is no way to make two particles change places
(We have used the bosonization formulas (5.64, 5.66, 5.67). The difference between N²/L and N(N + 1)/L is negligible in the limit L → ∞.) The fermion-fermion interaction term, following from the tight-binding Hamiltonian (5.34), with the help of (5.47) and (5.48) becomes

  H_1 = (g/4) ∫ dx : [ φ_d(x)φ_d(x + a) + φ_s(x)φ_s(x + a) + 2φ_d(x)φ_s(x)(1 − cos 2k_F a)
        + (⋯) e^{−2ik_F x} + (⋯) e^{2ik_F x} + (⋯) e^{−4ik_F x} + (⋯) e^{4ik_F x} ] : ,   (5.88)
where a → 0 is the lattice constant. The second line describes the backward scattering processes with momentum transfer 2k_F, which contain products like d†dd†s, and the Umklapp processes (with momentum transfer 4k_F and terms like d†d†ss). These processes convert left-moving fermions into right-moving ones, and vice versa. They can be produced by sharp enough interaction potentials. But such potentials could also couple, within the right-moving (or left-moving) sector, the states just above the Fermi surface with the unphysical states below the bottom of the band (Fig. 5.5), which we added in order to make bosonization possible, on the understanding that they would not be excited at the low energies we are interested in. It is possible to deal with such processes, but they require special care. We will therefore drop the second line in (5.88) and understand the "pointlike" interactions in the first line as short-range, but smooth enough to justify such treatment (which is usually the case). Since the factor (1 − cos 2k_F a) is not quite controllable, and the next-neighbour coupling in (5.34) is just an approximation anyway, we will simply
without passing through each other—i.e., occupying the same state simultaneously, which fermions
simply cannot do.
introduce two different coupling constants g_2, g_4, and finally write the Hamiltonian of the Tomonaga-Luttinger liquid as

  H_TLL = α v_F ∫₀^L dx [ φ_d(x)² + φ_s(x)² + g_4 ( φ_d(x)² + φ_s(x)² ) + 2 g_2 φ_d(x) φ_s(x) ]
        = (α v_F / 2) √( (1 + g_4)² − g_2² ) ∫₀^L dx [ (1/g̃) φ_+(x)² + g̃ φ_−(x)² ] .   (5.89)
In the second line of (5.91) we can get rid of the off-diagonal terms by introducing new Bose operators, related to the old ones via a Bogoliubov transformation (cf. (4.54-4.57)):

  B^+_k = b^d_k cosh δ − (b^s_k)† sinh δ ;   B^−_k = b^s_k cosh δ − (b^d_k)† sinh δ ;
  b^d_k = B^+_k cosh δ + (B^−_k)† sinh δ ;   b^s_k = B^−_k cosh δ + (B^+_k)† sinh δ .   (5.92)

The off-diagonal terms disappear if

  tanh 2δ = (1/g̃ − g̃) / (1/g̃ + g̃) ,   (5.93)

i.e., exp[−2δ] = g̃. The remaining term is simply 2αA Σ_{k>0} k [ (B^+_k)† B^+_k + (B^−_k)† B^−_k ].
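That exp[−2δ] = g̃ indeed solves (5.93), and that the hyperbolic coefficients of (5.92) preserve the Bose commutation relations (cosh²δ − sinh²δ = 1), is a one-line numeric check:

```python
# Check that exp(-2*delta) = g solves tanh(2*delta) = (1/g - g)/(1/g + g)
# (Eq. 5.93), and that cosh^2 - sinh^2 = 1, so the Bogoliubov transformation
# (5.92) preserves the Bose commutation relations. A scalar sketch.
import math

for g in (0.25, 0.5, 1.0, 2.0, 5.0):
    delta = -0.5 * math.log(g)            # exp(-2*delta) = g
    lhs = math.tanh(2 * delta)
    rhs = (1 / g - g) / (1 / g + g)
    assert abs(lhs - rhs) < 1e-12
    c, s = math.cosh(delta), math.sinh(delta)
    assert abs(c * c - s * s - 1) < 1e-12
```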
The first line in (5.91) is diagonalized by introducing N_± = (N_d ± N_s)/2. Finally we find that

  H_TLL = Σ_{μ=±} H_μ ,   (5.94)

where

  H_μ = (2α ṽ_F / L) g̃^{∓1} (N_μ)² + ṽ_F Σ_{k>0} k (B^μ_k)† B^μ_k .   (5.95)
The renormalized velocity ṽ_F is the "speed of sound" of the "acoustic phonons" (the second term in (5.95)), which appear on top of the zero modes.
As the final touch, we can introduce the new "phase" operators via (cf. (5.66, 5.67))

  Φ^+(x) = −√(2α/L) Σ_{k>0} (1/√k) [ e^{ikx} B^+_k + e^{−ikx} (B^+_k)† ] e^{−kε/2} ;   (5.97)

  Φ^−(x) = √(2α/L) Σ_{k>0} (1/√k) [ e^{ikx} (B^−_k)† + e^{−ikx} B^−_k ] e^{−kε/2} .   (5.98)
3 Starting from Φ^± and N_±, one can go backwards and introduce new Fermi operators. Such refermionization is sometimes useful ([6], §10.C).
  H_μ = α ṽ_F ∫₀^L dx [ 2 g̃^{∓1} (N_μ/L)² + ( (1/2α)(χΦ^μ(x)/χx) )² ] .   (5.99)
  φ^{d(s)}_C(x) = ( φ^{d(s)↑}(x) + φ^{d(s)↓}(x) ) / √2 ,   (5.100)

  φ^{d(s)}_S(x) = ( φ^{d(s)↑}(x) − φ^{d(s)↓}(x) ) / √2 .   (5.101)

We can now introduce new Bose operators, related to the ones of Eqs. (5.56, 5.57) for each spin species, via

  ω^{d(s)}_C(x) = ( ω^{d(s)↑}(x) + ω^{d(s)↓}(x) ) / √2 ;   ω^{d(s)}_S(x) = ( ω^{d(s)↑}(x) − ω^{d(s)↓}(x) ) / √2 ;   (5.103)

  N^{d(s)}_C = ( N^{d(s)↑} + N^{d(s)↓} ) / √2 ;   N^{d(s)}_S = ( N^{d(s)↑} − N^{d(s)↓} ) / √2 .   (5.104)
Now the Hamiltonian (5.62, 5.63), trivially generalized to include the contributions from both spin projections, can be written similarly to (5.87), and it splits into charge and spin parts, which commute with each other:

  H_0 = α v_F ∫₀^L dx [ φ^{d↑}(x)² + φ^{d↓}(x)² + φ^{s↑}(x)² + φ^{s↓}(x)² ]
      = α v_F ∫₀^L dx [ φ^d_C(x)² + φ^s_C(x)² + φ^d_S(x)² + φ^s_S(x)² ]   (5.105)
      = α v_F Σ_{μ=C,S} { ∫₀^L dx [ ( (1/2α)(χω^{dμ}(x)/χx) )² + ( (1/2α)(χω^{sμ}(x)/χx) )² ] + ( (N^d_μ)² + (N^s_μ)² ) / L } .
The physical right- or left-moving spinful fermion fields and densities are expressed in terms of the new charge/spin operators through the generalizations of the bosonization formulas (5.64, 5.82, 5.83):

  d(s)_{↑/↓}(x) ∝ e^{i( ω^{d(s)}_C(x) ± ω^{d(s)}_S(x) )/√2} ;   φ^{d(s)}_{↑/↓}(x) = (1/2α)(χ/χx) ( ω^{d(s)}_C(x) ± ω^{d(s)}_S(x) ) / √2 .   (5.106)
Therefore the spin and charge degrees of freedom can be treated separately. This
becomes interesting in the presence of interactions, which would couple to either
spin or charge density and make their dynamics different (see, e.g., [5], Chap. 28;
Nagaosa 1998, Sect. 3.2).
Let us return to free 1D fermions with a linear dispersion law. Their creation/annihilation operators depend on position and time through exp[±ik(x − v_F t)] (right movers) or exp[±ik(x + v_F t)] (left movers). We will consider the Green's functions in imaginary time, as in Sect. 3.2, only here we denote

  ϕ = i v_F t .   (5.107)

  s†_k(t) = e^{−i v_F k t} s†_k(0) → s̄†_k(ϕ) = e^{−kϕ} s̄†_k(0) .
The same relations hold for the right- and left-moving Matsubara Bose operators:

  G_d(ϕ − ix) = −δ(ϕ) (1/L) Σ_k (1 − n_k) e^{−kz'} + δ(−ϕ) (1/L) Σ_k n_k e^{−kz'}   (5.111)

  = −δ(ϕ) (1/L) Σ_{n=1}^{∞} e^{−(2α/L)(z' + ε) n} + δ(−ϕ) (1/L) Σ_{n=1}^{∞} e^{(2α/L)(z' − ε) n}   (5.112)

  = −(1/L) e^{(α/L) sgn(ϕ)(ϕ − ix + ε sgn(ϕ))} / ( 2 sinh[ (α/L)(ϕ − ix + ε sgn(ϕ)) ] ) −→ (L → ∞) −(1/2α) · 1/( z' + ε sgn(ϕ) ) .

Here ε → 0 is the regularization parameter. In the same way, for the left movers

  G⁰_s(ϕ + ix) ≡ G⁰_s(z) −→ (L → ∞) −(1/2α) · 1/( z + ε sgn(ϕ) ) .   (5.113)
We can use the identity

  1 − n_F(E) = 1 − 1/( e^{ξE} + 1 ) = n_F(−E)

and write
which in the limit ξ → ∞ coincides with (5.112) (see Problem 2). For the left movers, of course,

  G_s(ϕ + ix) −→ (L → ∞) −(1/2α) · ( α/(ξ v_F) ) / sin[ (α/(ξ v_F)) (ϕ + ix + ε sgn(ϕ)) ] .   (5.115)
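The reduction of (5.115) to the zero-temperature form (5.113) as ξ → ∞ can be seen numerically: the thermal kernel a/sin(a z) with a = α/(ξ v_F) tends to 1/z (a sketch, writing the constant α as π and setting v_F = 1, z real):

```python
# As the inverse temperature xi -> infinity, a/sin(a*z) with a = pi/(xi*vF)
# approaches 1/z, so the thermal Green's function (5.115) goes over into the
# zero-temperature result (5.113). Sketch with vF = 1 and z real.
import math

z = 0.37
for xi in (10.0, 100.0, 1000.0):
    a = math.pi / xi
    thermal = a / math.sin(a * z)
    assert abs(thermal - 1 / z) < a * a     # deviation is O(a^2)
```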
Now let us compute the equilibrium boson Green's functions. For the right movers

  G_d(z') = −(1/L) sgn(ϕ) e^{−D_d(z')} .   (5.119)
This intuitively agrees with the bosonization formulas (5.82, 5.83). The intuition is fully justified. In order to demonstrate this we will need the following relation for the averages of exponents of a Bose operator B:

  ⟨e^{λB}⟩ = e^{(λ²/2)⟨B²⟩} .   (5.120)

It holds identically for any linear combination of bosons, B = Σ_{q>0} (μ_q b†_q + μ̃_q b_q), if the average is taken over the thermal state of the free boson Hamiltonian (5.62) (see [6], C.10, Theorem 4). In the thermodynamic limit it holds for an arbitrary state and Hamiltonian, as long as ⟨B^{2n+1}⟩ = 0. Indeed, then

  ⟨e^{λB}⟩ = Σ_{n=0}^{∞} ( λ^{2n}/(2n)! ) ⟨B^{2n}⟩ = Σ_{n=0}^{∞} ( λ^{2n}/(2n)! ) ( (2n)!/(2^n n!) ) ⟨B²⟩^n = e^{(λ²/2)⟨B²⟩} .
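Relation (5.120) is the standard Gaussian-average identity; a quick Monte Carlo sanity check, with a classical Gaussian random variable standing in for B:

```python
# Monte Carlo check of <exp(lam*B)> = exp(lam^2 <B^2> / 2)  (Eq. 5.120)
# for a zero-mean Gaussian variable B (classical stand-in for the thermal
# average over a free-boson state).
import numpy as np

rng = np.random.default_rng(1)
sigma, lam = 0.8, 0.5
B = rng.normal(0.0, sigma, size=2_000_000)
lhs = np.exp(lam * B).mean()
rhs = np.exp(0.5 * lam**2 * sigma**2)
assert abs(lhs - rhs) / rhs < 5e-3
```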
Here we have used the weak version of Wick's theorem (Sect. 2.2.1) to reduce each macroscopic average ⟨B^{2n}⟩ to the sum of (2n)!/(2^n n!) identical fully contracted terms ⟨B²⟩^n. Next, from (5.120) and the Baker-Hausdorff formula we find

  G_d(ϕ − ix) = −(1/2αε) δ(ϕ) ⟨ F_d e^{−(2α/L) N_d (ϕ − ix)} e^{iω^d(x,ϕ)} e^{−iω^d(0,0)} F†_d ⟩
              + (1/2αε) δ(−ϕ) ⟨ e^{−iω^d(0,0)} F†_d F_d e^{−(2α/L) N_d (ϕ − ix)} e^{iω^d(x,ϕ)} ⟩

(the factor exp[−(2α/L) N_d (ϕ − ix)] reflects the dependence of the Klein factor on (imaginary) time, which follows from the commutation relations (5.85) and the Hamiltonian (5.62), (5.63)). Finally, using (5.84), dropping the N_d/L → 0 terms in the exponents, applying Eq. (5.121), and using D(0) = ln(2αε/L), we see that indeed

  G_d(ϕ − ix) = −(1/2αε) sgn(ϕ) e^{ ⟨T_ϕ ω(x,ϕ) ω(0,0)⟩ − (1/2)( ⟨ω(0,0)ω(0,0)⟩ + ⟨ω(x,ϕ)ω(x,ϕ)⟩ ) }
              = −(1/2αε) sgn(ϕ) e^{−D_d(ϕ − ix) + D_d(0)} = −(1/L) sgn(ϕ) e^{−D_d(ϕ − ix)} .
For the left movers, of course,

  G_s(z) = −(1/L) sgn(ϕ) e^{−D_s(z)} .   (5.122)
where G d,s are real time causal Green’s functions (cf. Eq. (2.48)). Using the analytic
continuations of zero-temperature thermal Green’s functions (5.112, 5.113) (which in
this case amounts to the replacement of ϕ with iv F t) and taking the contour integrals,
we find that the contributions of right- and left-movers are n d (k) = (1/2)δ(k F − k)
and n s (k) = (1/2)δ(k F + k), as one would expect. In the presence of interactions
the situation drastically changes. Expressing the physical Fermi operators in terms
of the Bogoliubov-transformed Bose operators (5.92), and n(k) in terms of their
Green’s functions, one discovers that in the presence of an infinitesimally small
interaction n(k) = 1/2: not only the step at |k| = k F disappears, but the distribution
becomes altogether momentum-independent ([15]; [3], Sect. 4.4.E). This is in sharp contrast to the behaviour of the Fermi liquid, where the step in n(k) at |k| = k_F becomes less than unity, but still survives in the presence of interactions (Lifshits and Pitaevskii 1980, Sect. 10). As in the case of the instability of the normal state of a superconductor considered in Chap. 4, the dependence of the Green's functions on the interaction strength is non-analytic and cannot be reproduced by perturbation theory.
In particular, their correlation functions in the infinite complex plane will have the form

  ⟨ O_{h,h'}(z_1, z'_1) O†_{h,h'}(z_2, z'_2) ⟩ = ( 1/(z_1 − z_2) )^{2h} ( 1/(z'_1 − z'_2) )^{2h'} .   (5.125)

The real numbers h and h' are called the conformal dimensions of the field. As is clear from Eqs. (5.112), (5.113) and the bosonization formulas, for the system of free bosons (or fermions) in 1+1 dimensions these operators are left- and right-movers, s(z) ∝ e^{iω^s(z)} and d(z') ∝ e^{iω^d(z')}, with the conformal dimensions (1/2, 0) and (0, 1/2), respectively.
6 An introduction to it can be found in [5], Chaps. 24, 25; [4], Sect. 2.2, and a massive exposition in [1].
7 The conformal field theory with boundaries was developed by Cardy [8].
Fig. 5.7 Conformal field theory for a 1D system of infinite (left) and finite (right) length. The time axis is chosen to coincide with the real axis. Boundary condition changing operators act at ϕ_1 and ϕ_2
The change of the scattering phase, which reflects the creation and filling in of the core hole, is produced by the operator O, which changes the boundary condition from "A" (no core hole, zero phase shift) to "B" (core hole, phase shift λ). Assuming that O is a primary field with the conformal dimension x, its zero-temperature Green's function will be

  ⟨A| O(ϕ_1) O†(ϕ_2) |A⟩ = 1/(ϕ_1 − ϕ_2)^{2x} .   (5.127)
Here |A⟩ is the ground state of the system with the boundary condition "A" at the origin (ℑz ≡ r = 0). The analytic function
maps the upper half-plane into the strip 0 ≤ v ≤ L, which corresponds to a system of finite length L, with the positive real ray mapped onto the lower, and the negative real ray onto the upper, boundary of the strip. Let us take ϕ_1, ϕ_2 > 0. Then the boundary condition at r = L will always be "A". The Green's function (5.127) is transformed according to (5.124):
5.3 Conformal Field Theory and the Orthogonality Catastrophe 257
  ⟨AA| O(u_1) O†(u_2) |AA⟩ = ( dz/dw(u_1) )^x ( dz/dw(u_2) )^x ( 1/( z(u_1) − z(u_2) ) )^{2x}
    = ( (α/2L) / sinh[ (α/2L)(u_1 − u_2) ] )^{2x} −→ ((u_2 − u_1) ≫ L) (α/L)^{2x} e^{−(αx/L)(u_2 − u_1)} ,   (5.129)
where |AA⟩ is the ground state of the system of length L with the boundary conditions "A" (i.e., with zero phase shifts) at both ends. On the other hand, by directly inserting the closure relation I = Σ_n |n⟩⟨n| in the expression ⟨AA|O(u_1)O†(u_2)|AA⟩, we find

  ⟨AA| O(u_1) O†(u_2) |AA⟩ = Σ_n |⟨AA|O|AB, n⟩|² e^{−(E^n_{AB} − E^0_{AA})(u_2 − u_1)} .   (5.130)
Here n labels all energy eigenstates in the system of length L with the boundary
conditions “B” at r = 0 and “A” at r = L.
In the limit (u_2 − u_1) ≫ L the leading exponent in (5.130) should coincide with (5.129). This exponent corresponds to the lowest-energy state of the system with the phase shift, which is usually the ground state of the system with the boundary conditions "A" and "B", i.e.

  (α/L)^{2x} e^{−(αx/L)(u_2 − u_1)} ∼ |⟨AA|O|AB, 0⟩|² e^{−(E^0_{AB} − E^0_{AA})(u_2 − u_1)} .   (5.131)
Therefore the conformal dimension of the boundary condition changing operator can be obtained from the shift of the ground state energy due to the change in the boundary conditions (Affleck and Ludwig [7]):

  x = (L/α) ( E^0_{AB} − E^0_{AA} ) .   (5.132)
The matrix element ⟨AA|O|AB, 0⟩ is the overlap between the ground states of the system with and without the scattering potential at the origin. Thus, the power x in (5.131) is the Anderson orthogonality exponent of Eq. (5.11), and the relation (5.132) provides a convenient way of computing it directly from the energy spectrum of the system.
In order to find the ground state energy shift due to the scattering potential, let us recall Eq. (5.20) for the asymptotic form of the wave function:

  ψ_{k̃_n}(x_j) ∝ sin( k̃_n x_j + λ(k̃_n) ) ,   j ≫ 1 .   (5.133)
Here k̃_n is the wave vector, shifted from its value k_n in the absence of the scattering potential, x_j = ja, and a is the lattice constant. The ground state energy is then

  E_0 = Σ_{n=1}^{N} π(k̃_n)   (5.134)

(recall that in Sect. 5.1.2 we imposed free boundary conditions on the 1D chain; therefore k_n = αn/L, n > 0). Taking x_j = L in (5.133) and demanding, in order to satisfy the boundary condition, that

  k̃_n L + λ(k̃_n) = k_n L ,

we obtain
  Σ_{n=1}^{N} F(n − 1/2) = ∫₀^N dx F(x) − (1/24) ( F'(N) − F'(0) ) + O(F''') .   (5.136)
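The midpoint Euler-Maclaurin formula (5.136) is easy to test on a smooth function:

```python
# Check Eq. (5.136): sum_{n=1}^{N} F(n - 1/2)
#   = int_0^N F(x) dx - (1/24)(F'(N) - F'(0)) + O(F''')
# for the smooth test function F = sin.
import math

N = 50
lhs = sum(math.sin(n - 0.5) for n in range(1, N + 1))
integral = 1.0 - math.cos(N)              # int_0^N sin(x) dx
rhs = integral - (math.cos(N) - math.cos(0.0)) / 24
assert abs(lhs - rhs) < 1e-2              # residual is of higher order
```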
Setting

  F(n − 1/2) = π( K(k_n) ) ,   (5.137)

where now v_F = π'(k_F) (while still ℏ = 1), we have kept only the terms O(1/L). Changing the integration variable to ν = α(n + 1/2)/L, using (5.135) and again neglecting corrections of order 1/L², we find
  E_0 = L ∫₀^{k_F} (dν/α) π[K(ν)] − α v_F/(24L)
      = L ∫₀^{k_F} (dν/α) [ π(ν) − π'(ν)λ(ν)/L + π''(ν)λ²(ν)/(2L²) + π'(ν)λ'(ν)λ(ν)/L² ] − α v_F/(24L) .   (5.139)
Integrating by parts and using π'(0) = 0, we eventually obtain the desired result:

  E_0 = L ∫₀^{k_F} (dν/α) π(ν) − (1/α) ∫_{π(0)}^{π(k_F)} dπ λ(π) + (α v_F/L) [ (1/2)( λ(k_F)/α )² − 1/24 ] + O(1/L²) .   (5.140)
This result can be directly checked using the tight-binding model of Sect. 5.1.2 with free boundary conditions. There π(k) = −2T cos k, v_F = 2T sin k_F, and the ground state energy is given exactly by a geometric series,

  E_0 = T − T sin(k_F l)/sin(α/2l) = −(l/α) v_F + T − α v_F/(24 l) + O(1/l²) .   (5.141)
If we add n extra electrons to the ground state directly above the Fermi level, the energy of the system becomes

  E_n = E_0 + Σ_{m=1}^{n} π( k_F − λ(k_F)/L + α(m − 1/2)/L )
      = E_0 + n π(k_F) + (α v_F/L) Σ_{m=1}^{n} ( m − 1/2 − λ(k_F)/α ) + O(1/L²) .   (5.142)
Returning to Eq. (5.132) for the Anderson exponent, we substitute there the O(1/L) correction to the ground state energy from (5.140) and immediately find that we have successfully rederived Eq. (5.12):

  x = (1/2) ( λ(k_F)/α )² .
Moreover, we are now equipped to find out what happens if the core hole potential creates a bound state. Then the Green's function (5.130) becomes the sum of two terms, corresponding to the bound state being empty or filled [12]:
  ⟨AA| O(u_1) O†(u_2) |AA⟩ = Σ_n |⟨AA|O|AB, n, e⟩|² e^{−(E^n_{AB,e} − E^0_{AA})(u_2 − u_1)}
                           + Σ_n |⟨AA|O|AB, n, f⟩|² e^{−(E^n_{AB,f} − E^0_{AA})(u_2 − u_1)} .   (5.144)
They give rise to two peaks in the absorption rate, separated by the bound state energy |π_B|. In the long time limit the processes of core hole creation with or without filling of the bound state can be considered independently, as due to separate operators. In the case of a filled bound state the result will be the same as Eq. (5.12): the wave function of the bound state decays exponentially and cannot influence the O(1/L) terms in the ground state energy. Therefore

  x_f = (1/2) ( λ(k_F)/α )² .   (5.145)
If the bound state is empty, then the corresponding operator not only changes the boundary condition, but also creates an extra electron above the Fermi level. Using (5.143) with n = 1, we finally reproduce, by quite a different approach, the result of [9, 10]:

  x_e = (1/2) ( 1 − λ(k_F)/α )² .   (5.146)
We have reached the goal in a circuitous way, but we have learned some useful techniques and did not need the assumption λ ≪ α!
5.4 Problems
• Problem 1
Verify that Eq. (5.87) is equivalent to Eqs. (5.62, 5.63). When integrating, make use
of φ(x + L) = φ(x).
• Problem 2
Obtain the explicit expression (5.114) for the thermal Green's function of free fermions in the Tomonaga-Luttinger model. Taking the integral over k, consider it as a
complex variable and use the method of contour integration. Take into account that
the integrand has infinitely many simple poles on the imaginary axis, and close the
integration contour in either upper or lower half-plane of complex wave vector k,
depending on the sign of x.
• Problem 3
References
1. Di Francesco, P., Mathieu, P., Sénéchal, D.: Conformal Field Theory, Springer, GTCP, New
York (1997) (A fundamental textbook on conformal field theory in 1+1 and higher dimensions.)
2. Eggert, S.: One-dimensional quantum wires: A pedestrian approach to bosonization. In: Kuk
Y. et al. (eds.) Theoretical Survey of One Dimensional Wire Systems, Sowha Publishing, Seoul
(2007); arXiv:0708.0003 (chapter 2)
3. Mahan, G.D.: Many-Particle Physics, 2nd edn. Plenum Press, New York (1990)
4. Nagaosa, N.: Quantum Field Theory in Strongly Correlated Electronic Systems. Springer, TMP,
Berlin-Heidelberg (2010)
5. Tsvelik, A.M.: Quantum Field Theory in Condensed Matter Physics. Cambridge University
Press (1995)
6. von Delft, J., Schoeller, H.: Bosonization for Beginners - Refermionization for Experts, Ann.
Phys. 4, 225 (1998) (A very detailed tutorial, where bosonization is introduced constructively
and different approaches are compared.)
Articles
7. Affleck, I., Ludwig, A.W.W.: J. Phys. A: Math. Gen. 27, 5375 (1993)
8. Cardy, J.L.: Nucl. Phys. B 324, 581 (1989)
9. Combescot, M., Nozières, P.: J. Physique 32, 913 (1971)
10. Hopfield, J.J.: Comment. Solid State Phys. 11, 40 (1969)
11. Nozières, P., De Dominicis, C.T.: Phys. Rev. 178, 178 (1969)
12. Zagoskin, A.M., Affleck, I.: J. Phys. A: Math. Gen. 30, 5743 (1997)
13. Anderson, P.W.: Phys. Rev. Lett. 18, 1049 (1967)
14. Thouless, D.J.: Quantum Mechanics of Many-Body Systems. Academic Press, New York (1972)
15. Mattis, D.C., Lieb, E.H.: J. Math. Phys. (N.Y.) 6, 304 (1965)
Appendix A
Friedel Oscillations
In the static limit the polarization operator, which describes the screening of the Coulomb potential by the Fermi gas, is given by Eqs. (2.71, 2.72):

  Π_0(p) = −( m p_F / 2π² ) [ 1 + ( (p_F² − p²/4)/(p_F p) ) ln( (p_F + p/2)/(p_F − p/2) ) ] .
This formula was obtained in the random phase approximation at zero temperature.
As mentioned before, the second term in the parentheses is non-analytic at p = 2 p F .
Indeed, though

  lim_{p→2p_F} ( (p_F² − p²/4)/(p_F p) ) ln( (p_F + p/2)/(p_F − p/2) ) ≡ lim_{p→2p_F} g(p) = 0 ,   (A.1)
all the derivatives of g( p) diverge at this point. The screened Coulomb potential in
the coordinate representation is then
  U_eff(r) = ∫ ( d³p/(2π)³ ) e^{ip·r} ( 4πe²/p² ) / ( 1 − (4πe²/p²) Π_0(p) )
    = (e²/π) ∫₀^∞ dp p² ∫_{−1}^{1} d(cos ρ) e^{ipr cos ρ} / ( p² + (1/2) q²_{TF} (1 + g(p)) )
    = (2e²/πr) ∫₀^∞ dp p sin(pr) / ( p² + (1/2) q²_{TF} (1 + g(p)) )
    = (e²/πr) ℑ ∫_{−∞}^{∞} dp p e^{ipr} / ( p² + (1/2) q²_{TF} (1 + g(p)) ) .   (A.2)
Here we took into account that g( p) and p sin( pr ) are even functions of p and
extended the integration to all of the real axis. Now we can, as usual, close the
integration contour in the upper halfplane of the complex variable p and reduce
Fig. A.1 Analytic structure of the integrand in (A.2) after regularization, and the initial (a) and deformed (b) integration contours
the integral to the contributions from the singularities of the integrand. Replacing g(p) in this expression by its limiting value g(0) = 1, we would indeed obtain the exponential screening ∝ exp(−q_{TF} r) due to the simple poles at p = ±i q_{TF}.
Taking into account the actual expression for g(p) drastically changes the outcome. It is straightforward to see that the poles will be slightly shifted along the imaginary axis,

  p = ±i p_0 ≡ ±i q_{TF} / √( 1 − (1/12)(q_{TF}/p_F)² ) .

But the main difference comes from the logarithm having singularities: the branch points p = ±2p_F.
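The statement (A.1), and the growth of the derivatives near the branch points, are easy to check numerically for g(p) as defined above (a sketch with p_F = 1):

```python
# g(p) = ((pF^2 - p^2/4)/(pF*p)) * ln|(pF + p/2)/(pF - p/2)| vanishes at
# p = 2*pF (Eq. A.1), but its slope grows without bound as p -> 2*pF:
# the non-analyticity behind the Friedel oscillations.
import math

pF = 1.0

def g(p):
    return (pF**2 - p**2 / 4) / (pF * p) * math.log(abs((pF + p / 2) / (pF - p / 2)))

assert abs(g(2 * pF - 1e-9)) < 1e-6                   # g -> 0 at p = 2 pF
d1 = (g(2 * pF - 1e-3) - g(2 * pF - 2e-3)) / 1e-3     # finite-difference slopes
d2 = (g(2 * pF - 1e-5) - g(2 * pF - 2e-5)) / 1e-5
assert abs(d2) > abs(d1)                              # slope keeps growing
```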
Following [3], let us regularize the logarithm:

  ln( (p_F + p/2)/(p_F − p/2) ) = (1/2) ln( |p_F + p/2|² / |p_F − p/2|² ) = lim_{ε→0} (1/2) ln( ((p + 2p_F)² + ε²) / ((p − 2p_F)² + ε²) ) .

This way we shift the singularities to the points p = ±2p_F ± iε, away from the real axis (see Fig. A.1). The branch cuts of the logarithm in the upper half-plane are chosen along the rays ±2p_F + iε + is, where 0 ≤ s < ∞. Since

  (1/2) ln(z² + ε²) = (1/2) [ ln(z − iε) + ln(z + iε) ] ,   z = p ± 2p_F ,

it is clear that going full circle around a branch point one adds to the logarithmic term an extra ±πi.
Now we can deform the integration contour to run mostly along the infinitely large semicircle in the upper complex half-plane. Its contribution to the integral is suppressed by the exponential factor exp(ipr) → 0. The surviving terms come
from the integrations along the branch cuts and around the pole at i p_0 (Fig. A.1). The latter, together with the prefactor in (A.2), will yield the exponentially screened potential,

  ∝ (e²/r) e^{−p_0 r} ,

but, as we shall see, at r → ∞ it becomes irrelevant compared to the slower-decaying contributions from the former.
Consider the integral around the left branch cut. Taking ε → 0, we find (the prime indicates that we first integrate along the left, and then along the right bank of the cut):

  I_{−2p_F} = (e²/πr) ℑ [ ∫_{−2p_F+i∞}^{−2p_F} + ∫_{−2p_F}^{−2p_F+i∞} ]' dp p e^{ipr} / ( p² + (q²_{TF}/2)(1 + g(p)) )
            = (e²/πr) ℑ ∫_{−2p_F}^{−2p_F+i∞} dp p e^{ipr} [ 1/( p² + (q²_{TF}/2)(1 + g(p)) ) ]' .

In the last expression [⋯]' is the difference between the values on the right and left banks of the branch cut, which is due solely to the multivaluedness of the logarithm. Writing now p = −2p_F + is and replacing the slowly varying terms under the integral by their values on the real axis (which is justified in the limit r → ∞ by the exponent, exp(ipr) = exp(−2i p_F r) exp(−sr), rapidly decaying away from the real axis), we can write

  I_{−2p_F} ≅ (e²/r) ( (1/2) q²_{TF} / (4p_F² + (1/2) q²_{TF})² ) ℑ [ e^{−2i p_F r} ( (i/r²) ∫₀^∞ du u e^{−u} + (1/r³) ∫₀^∞ du u² e^{−u} ) ]
            ≅ (e²/r³) ( (1/2) q²_{TF} / (4p_F² + (1/2) q²_{TF})² ) cos(2p_F r) .   (A.3)
We have kept the slowest-decaying term. The integral over the branch cut at p = 2 p F
yields the same expression, and therefore, as promised, at large distances the screened
potential behaves as
  U_eff(r) ∝ (e²/r³) cos(2p_F r) .   (A.4)
The non-analytic behaviour of the polarization operator at p = ±2p_F is due to the sharp edges of the Fermi distribution; the momentum transfer ±2p_F clearly corresponds to transitions between the opposite points of the Fermi surface. Therefore one expects that when the Fermi step is smeared by finite temperature or by interactions (as in a superconductor), exponential screening should be restored. This is indeed the case (see [3], p. 180). At a finite temperature T in a normal system,
You should not be too surprised to realize that the exponents, up to a factor of order unity, are simply the ratios of the distance r to the normal-metal (resp. superconducting) coherence length,
v F 2v F
and ,
kB T π
The matrix P̂ has a rich structure (a purely normal system would be described simply by the transmission, T, and reflection, R, coefficients, related by the unitarity condition T + R = 1), because in the presence of a superconductor quasiparticles can switch between the particle and hole branches of the spectrum due to Andreev reflections (as in Fig. 4.16). For example, R_ee (R_he) is the probability for an electron in the left lead to be reflected as an electron (hole) into the same lead, while T_{e'e} (T_{h'e}) gives the probability of its normal (Andreev) transmission to the other lead, etc. (see Fig. B.3). In other words, we have added to the system an off-diagonal scattering potential. Probability flux conservation (unitarity) requires that the elements in any row or column of P̂ add up to unity:
  Σ_j P_{ij} = 1 ;   Σ_i P_{ij} = 1 .   (B.2)

Fig. B.2 Measuring quasiparticle energies from zero or from μ_0: two equivalent pictures. Occupied states are shown by solid lines for quasielectrons, dotted lines for quasiholes
  G = I/V .
Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures 269
The current is

  I = e v_F · (2/h v_F) [ (μ − μ_0)(1 − R_ee + R_he) + (μ_0 − μ')(T_{hh'} − T_{eh'}) ] .   (B.3)

Here 2/(h v_F) is the one-dimensional density of states per velocity direction, and the meaning of the terms in brackets is self-evident.
Now we must somehow get rid of μ0 . This can be done if we impose the condition
that there be no net electric current in or out of the superconductor. This will be so, e.g.,
if the superconductor is finite: otherwise it would accumulate electric charge until
its field stops the further charge transfer. This gives us an extra equation necessary
to exclude μ0 from the answer.
The total current from the left reservoir flowing into the system is carried by quasielectrons and equals

  I = (2e/h)(μ − μ_0)( 1 − R_ee + R_he − T_{e'e} + T_{h'e} ) = (4e/h)(μ − μ_0)( R_he + T_{h'e} )   (B.4)

(we have used 1 = R_ee + R_he + T_{e'e} + T_{h'e}). The current from the right reservoir is carried by quasiholes (hence the minus sign):

  I' = −(4e/h)(μ_0 − μ')( R_{h'e'} + T_{eh'} ) .   (B.5)
Demanding that I + I' = 0, we find

  μ − μ_0 = (μ − μ') ( R_{e'h'} + T_{eh'} ) / ( R_{e'h'} + T_{eh'} + R_he + T_{h'e} ) ,   (B.6)

  μ_0 − μ' = (μ − μ') ( R_he + T_{h'e} ) / ( R_{e'h'} + T_{eh'} + R_he + T_{h'e} ) ,

and
  G = I_e/(μ − μ')
    = (2e²/h) [ (R_{e'h'} + T_{eh'})(1 − R_ee + R_he) + (R_he + T_{h'e})(T_{hh'} − T_{eh'}) ] / ( R_{e'h'} + T_{eh'} + R_he + T_{h'e} )
    = (2e²/h) [ (R_{e'h'} + T_{hh'})(R_he + T_{h'e}) + (R_{e'h'} + T_{eh'})(T_{e'e} + R_he) ] / ( R_{e'h'} + T_{eh'} + R_he + T_{h'e} ) .   (B.7)
Finally, if the system is spatially symmetric (that is, the difference between primed and nonprimed coefficients disappears), we see that simply

$$G = \frac{2e^2}{h}\left(T_N + R_A\right). \qquad (B.9)$$
This is an intuitively clear result: in addition to the normal transmission channel $\frac{2e^2}{h}T_N$, which we had in the normal case, another conductance channel opens due to Andreev reflections.
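The algebra leading from (B.3)-(B.6) to (B.7) is easy to mistype, so a minimal numerical sanity check may be useful (an illustrative addition, not part of the original derivation; the scattering probabilities below are made-up values constrained only by the unitarity condition (B.2), and $G_0 = 2e^2/h$ is set to unity):

```python
# Sanity check of the two-terminal conductance (B.7) and of its symmetric
# limit (B.9). All scattering probabilities are made-up illustrative values;
# only the unitarity constraint (B.2) is imposed. G0 stands for 2e^2/h.
import random

G0 = 1.0
random.seed(1)

def random_column():
    """Four nonnegative probabilities adding up to one (a column of P)."""
    x = [random.random() for _ in range(4)]
    s = sum(x)
    return [v / s for v in x]

def conductance(R_ee, R_he, T_epe, T_hpe, R_ephp, T_ehp, T_hhp):
    """Second (manifestly positive) form of (B.7); primes written as 'p'."""
    D = R_ephp + T_ehp + R_he + T_hpe
    return G0 * ((R_ephp + T_hhp) * (R_he + T_hpe)
                 + (R_ephp + T_ehp) * (T_epe + R_he)) / D

# Electron incident from the left: R_ee + R_he + T_e'e + T_h'e = 1.
R_ee, R_he, T_epe, T_hpe = random_column()
# Hole incident from the right: R_e'h' + R_h'h' + T_eh' + T_hh' = 1.
R_ephp, R_hphp, T_ehp, T_hhp = random_column()

# First form of (B.7), obtained directly from the current (B.3).
D = R_ephp + T_ehp + R_he + T_hpe
G1 = G0 * ((R_ephp + T_ehp) * (1 - R_ee + R_he)
           + (R_he + T_hpe) * (T_hhp - T_ehp)) / D
G2 = conductance(R_ee, R_he, T_epe, T_hpe, R_ephp, T_ehp, T_hhp)
assert abs(G1 - G2) < 1e-12      # the two forms agree, given unitarity

# Spatially symmetric system: primed coefficients equal the unprimed ones
# (T_N = T_e'e = T_hh', R_N = R_ee, R_A = R_he = R_e'h', T_A = T_h'e = T_eh').
R_N, R_A, T_N, T_A = random_column()
G_sym = conductance(R_N, R_A, T_N, T_A, R_A, T_A, T_N)
assert abs(G_sym - G0 * (T_N + R_A)) < 1e-12   # (B.7) collapses to (B.9)
```

Note that the equality of the two forms of (B.7) uses only the unitarity of the left column of $\hat{P}$, while the symmetric limit reproduces (B.9) identically.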
This is only one of the Landauer-type formulas that describe the conductance of normal-superconducting mesoscopic systems. We could, e.g., calculate the four-probe conductance,
$$\tilde{G} = \frac{eI}{\mu_A - \mu_{A'}}\,,$$

$$2\cdot\frac{2}{hv_F}(\mu_A - \mu_0) = \frac{2}{hv_F}\Big[(\mu - \mu_0)(1 - R_{he} + R_{ee}) + (\mu_0 - \mu')(T_{eh'} - T_{hh'})\Big],$$

$$2\cdot\frac{2}{hv_F}(\mu_{A'} - \mu_0) = \frac{2}{hv_F}\Big[(\mu_0 - \mu')(-1 - R_{h'h'} + R_{e'h'}) + (\mu - \mu_0)(T_{e'e} - T_{h'e})\Big]. \qquad (B.10)$$
$$\tilde{G} = \frac{2e^2}{h}\,\frac{R_A + T_N}{R_N + T_A}\,. \qquad (B.12)$$
In the normal limit ($R_A = T_A = 0$, $R_N = 1 - T_N$) this is the original Landauer formula for the four-point conductance; due to the $(1 - T_N)$ in the denominator, it indeed diverges in the limit of ideal transparency of the barrier, $T_N \to 1$.
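This limiting behaviour is easy to see numerically (an illustrative sketch, not from the text; all probabilities are made-up and $G_0 = 2e^2/h$ is set to unity):

```python
# Four-point conductance (B.12) with made-up probabilities and G0 = 2e^2/h
# set to unity. In the normal limit (no Andreev processes: R_A = T_A = 0,
# R_N = 1 - T_N) it turns into the original Landauer formula T_N/(1 - T_N).
G0 = 1.0

def G_four_point(T_N, R_N, R_A, T_A):
    return G0 * (R_A + T_N) / (R_N + T_A)

for T_N in (0.5, 0.9, 0.99):
    g = G_four_point(T_N, 1.0 - T_N, 0.0, 0.0)
    assert abs(g - T_N / (1.0 - T_N)) < 1e-12   # original Landauer formula

# the four-point conductance grows without bound as T_N -> 1
assert G_four_point(0.999, 0.001, 0.0, 0.0) > 100
```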
Many more theoretical and experimental results in this very dynamic field are
discussed in [1, 2, 4].
Fig. B.4 Landauer conductance of a ballistic Andreev interferometer. Below: Andreev levels in the νth transverse mode (see text)
The condition that an Andreev level

$$E^{\pm}_{\nu,n} = \frac{v^{\parallel}_{F,\nu}}{2L}\bigl((2n+1)\pi \pm \delta\bigr) \qquad (B.14)$$

is aligned with the Fermi energy is independent of $v^{\parallel}_{F,\nu}$! When it is satisfied, not one, but $N_\perp$ Andreev levels (one in each of the $N_\perp$ transverse modes) are simultaneously aligned with the Fermi energy, thus producing a giant conductance peak with amplitude $N_\perp\,2e^2/h$. The width of the peak is of the order of the single-electron transparency of the barriers at B, C.
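The simultaneous alignment can be checked directly from (B.14); in this sketch (an illustrative addition, with $\hbar = 1$ and made-up longitudinal Fermi velocities) every mode has a level exactly at the Fermi energy once $\delta = (2n+1)\pi$:

```python
# Andreev levels (B.14), hbar = 1, with made-up longitudinal Fermi velocities.
# At delta = (2n+1)*pi a level sits exactly at the Fermi energy (E = 0) in
# every transverse mode at once, whatever the value of v_F -- the origin of
# the giant conductance peak.
import math

L = 1.0  # length of the normal region (illustrative)

def andreev_level(vF, n, sign, delta):
    """E^{+/-}_{nu,n} from (B.14)."""
    return vF / (2 * L) * ((2 * n + 1) * math.pi + sign * delta)

velocities = [0.3, 0.7, 1.0, 1.9]          # four transverse modes
levels = [andreev_level(v, 0, -1, math.pi) for v in velocities]
assert all(abs(E) < 1e-12 for E in levels)  # all modes aligned with E_F
```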
There is one more interesting detail. The very Andreev levels that are responsible for the normal conductance are also responsible for the Josephson current that flows between the superconductors when $\delta \neq 2\pi n$. When an Andreev level coincides with the Fermi level, the Josephson current abruptly changes sign, while the normal conductance peaks. One could therefore infer that there is a relation between $I_J$ and $G$ in this system, such as $G(\delta) \propto -\partial I_J(\delta)/\partial\delta$.
The quantitative consideration of the problem [6] is based on the Landauer-Lambert formalism of the previous paragraph. (The presence of the Josephson current is consistent with our assumption that there is no net current flow into or out of the superconducting part of the system.) We assume that electron-hole symmetry holds and that the normal part of the system is spatially symmetric. Then the conductance can be expressed as
$$G = \frac{2e^2}{h}\,2\int_0^{\infty} dE\,\bigl(T_N(E) + R_A(E)\bigr)\left(-\frac{\partial n_F(E)}{\partial E}\right) + \theta, \qquad (B.16)$$
where $T_N(E)$ ($R_A(E)$) is the probability of normal transmission (Andreev reflection) for an electron incident from the left normal reservoir with energy $E$, $n_F(E)$ is the Fermi distribution function, and the energy $E$ is measured from the Fermi level. (This is an evident generalization of our formula (B.9) to finite temperatures.) The additional term $\theta$ in (B.16) reflects the fact that (B.9) requires spatial symmetry of the system, which would also include $\delta = \delta'$. In the presence of finite $\delta$ we should use formula (B.8) instead, but since this correction term is a rapidly oscillating function of the electron momentum ($\theta \sim \exp(2ik_F L)$), we can neglect it if we are not interested in investigating the fine structure of the conductance peaks.
The scattering coefficients in (B.16) can be found by solving the Bogoliubov-de Gennes equations in the normal part of the system. To do this, we must somehow describe the scattering of electrons and holes at the junctions B and C. This is conveniently done by introducing (after [5]) identical, real scattering matrices

$$S = \begin{pmatrix} -\varepsilon/2 & 1-\varepsilon/2 & \sqrt{\varepsilon} \\ 1-\varepsilon/2 & -\varepsilon/2 & \sqrt{\varepsilon} \\ \sqrt{\varepsilon} & \sqrt{\varepsilon} & -1+\varepsilon \end{pmatrix}. \qquad (B.17)$$
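It is easy to check (an illustrative sketch, not from the text) that this matrix is orthogonal only up to terms of order $\varepsilon^2$, which is the stated accuracy of the whole calculation:

```python
# The beam-splitter matrix (B.17) is real and symmetric, but it is orthogonal
# (unitary) only up to terms of order eps^2 -- the stated accuracy of the
# whole calculation. A quick check for a small illustrative eps:
import math

eps = 0.01
r = math.sqrt(eps)
S = [[-eps / 2, 1 - eps / 2, r],
     [1 - eps / 2, -eps / 2, r],
     [r, r, -1 + eps]]

# largest deviation of S S^T from the identity matrix
dev = max(abs(sum(S[i][k] * S[j][k] for k in range(3))
              - (1.0 if i == j else 0.0))
          for i in range(3) for j in range(3))
assert dev < 2 * eps ** 2        # unitarity holds to O(eps^2)
```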
$$T_N(E) \approx R_A(E) \approx \frac{1}{2}\sum_{\psi=\pm 1}\frac{\varepsilon^2}{1 + 2\varepsilon^2 + \cos\!\left(\delta + \psi\,2LE/v^{\parallel}_F\right)}\,. \qquad (B.18)$$
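The resonant structure of this lineshape can be made concrete with a short numerical sketch (an illustrative addition with made-up parameters, $\hbar = 1$): on resonance ($\delta = \pi$, $E = 0$) the sum $T_N + R_A$ reaches unity, i.e., a conductance of $2e^2/h$ per transverse mode, while far off resonance it is only of order $\varepsilon^2$:

```python
# The lineshape (B.18) with illustrative parameters (hbar = 1): on resonance
# (delta = pi, E = 0) the sum T_N + R_A reaches unity, while far off
# resonance it is only O(eps^2).
import math

eps, L, vF = 0.1, 1.0, 1.0

def TN(E, delta):
    """T_N(E), approximately equal to R_A(E), from (B.18)."""
    return 0.5 * sum(eps ** 2 /
                     (1 + 2 * eps ** 2 + math.cos(delta + psi * 2 * L * E / vF))
                     for psi in (+1, -1))

peak = 2 * TN(0.0, math.pi)   # T_N + R_A on resonance
off = 2 * TN(0.0, 0.0)        # T_N + R_A far off resonance
assert abs(peak - 1.0) < 1e-12
assert off < 2 * eps ** 2
```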
Fig. B.5 Phase dependence of the normal conductance (a) and Josephson current (b) in a ballistic
Andreev interferometer at zero temperature and ε = 0.1
We have described the method of calculation of the Josephson current in this system
in Sect. 4.5.4. The only difference is that Andreev levels are broadened not due to
impurity scattering, but due to ε-proportional “leakage” into the normal reservoirs,
and we take T = 0. As a result,
$$I_J^{(\varepsilon)}(\delta) = N_\perp\,\frac{2e\bar{v}^{\parallel}_F}{\pi L}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}e^{-2n\varepsilon}\sin n\delta}{n}\,, \qquad (B.20)$$

where $\bar{v}^{\parallel}_F = N_\perp^{-1}\sum_{\nu=1}^{N_\perp} v^{\parallel}_{F,\nu}$ (Fig. B.5b).
Comparing (B.20) and (B.19), we see that the normal conductance and Josephson
current in this system are indeed related by
$$G(\delta) = \varepsilon\left(-\frac{eL}{\bar{v}^{\parallel}_F}\,\frac{dI_J^{(\varepsilon)}}{d\delta} + N_\perp\,\frac{2e^2}{h}\right) \qquad (B.21)$$
(within the accuracy of ε2 , that is, neglecting the details of the conductance peak
structure on a finer scale).
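The relation (B.21) can also be verified numerically (an illustrative check, not from the book; units $e = \hbar = 1$, a single transverse mode, and made-up $L$, $v_F$): the conductance computed directly from (B.18) at $T = 0$ agrees, to relative order $\varepsilon^2$, with the right-hand side of (B.21) built from the series (B.20), and $I_J$ indeed changes sign at $\delta = \pi$, where $G$ peaks:

```python
# Numerical check of (B.21): the zero-temperature conductance obtained
# directly from (B.18) agrees, to relative order eps^2, with the combination
# eps * (-(eL/v_F) dI_J/d delta + N 2e^2/h) built from the series (B.20).
# Units e = hbar = 1 (so 2e^2/h = 1/pi); one transverse mode; made-up L, v_F.
import math

eps, L, vF, e, N = 0.1, 1.0, 1.0, 1.0, 1
G0 = 2 * e ** 2 / (2 * math.pi)          # 2e^2/h with h = 2*pi*hbar

def G_direct(delta):
    """N * G0 * (T_N + R_A) at the Fermi level, from (B.18)."""
    return N * G0 * 2 * eps ** 2 / (1 + 2 * eps ** 2 + math.cos(delta))

def I_J(delta, nmax=4000):
    """Josephson current summed from the series (B.20)."""
    s = sum((-1) ** (n + 1) * math.exp(-2 * n * eps) * math.sin(n * delta) / n
            for n in range(1, nmax + 1))
    return N * 2 * e * vF / (math.pi * L) * s

def G_from_IJ(delta, dd=1e-5):
    """Right-hand side of (B.21), with a numerical derivative of I_J."""
    dIdd = (I_J(delta + dd) - I_J(delta - dd)) / (2 * dd)
    return eps * (-(e * L / vF) * dIdd + N * G0)

# the Josephson current changes sign at delta = pi, exactly where G peaks
assert I_J(math.pi - 0.3) > 0 > I_J(math.pi + 0.3)

# (B.21) holds up to corrections of relative order eps^2 (here within 5%)
for d in (0.5, 1.5, 2.5, math.pi):
    assert abs(G_from_IJ(d) - G_direct(d)) < 0.05 * G_direct(d)
```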
References
Articles
Index

A
Abel regularization, 74, 242
Action, 14ff.
Adiabatic hypothesis, 74, 101, 109
Aharonov-Bohm effect, 21
Anderson orthogonality exponent, 230, 255
Andreev
  levels in an SNS junction, 204ff.
    and Josephson current, 205, 208
  reflection, 197ff.
Annihilation operator, 36, 38, 48
Anomalous average, 165
Autocorrelation function, 122
Autocovariation function, 122

B
Baker-Hausdorff
  formula, 243
  theorem, 244
Bardeen–Cooper–Schrieffer (BCS) Hamiltonian, 172ff.
Bare particle, 5, 7
BBGKY chain, 58
Bethe–Salpeter equation, 91ff., 168
Bloch equation, 109, 110
Bogoliubov functional, 174
Bogoliubov transformation, 175ff.
Bogoliubov-de Gennes equations, 176ff.
Bogolon, 177ff.
Bose-Einstein statistics, 34
Bosonization, 238ff.
  bosonization formulas, 242, 245
Bound state formation, 160, 229
Boundary condition changing operators, 256

C
Callen–Welton formula, 124
Cancellation theorem, 81
Canonical ensemble, 58, 59
Cauchy
  formula, 234
  theorem, 69
Charging energy, 219, 222
Chronological ordering operator, see Time ordering operator
Closure relation, 19, 229, 257
Coherence factors, 177
Coherence length in normal metal, 203, 266
Collision integral, 136, 145
Commutation (anticommutation) relations
  between phase and number operators, 45
  Bose, 39
  Fermi, 49
Completeness, see Closure relation
Conductance, 125, 143, 153
  quantization, 142ff.
Conformal
  dimensions, 255
  field theory, 255
  symmetry, 254
Continual integral, see Path integral
Contraction, 76
Cooper
  pair, 163ff.
  pairing, 160
Coulomb
  blockade, 218ff.
  potential screening, 3, 89, 263
Creation operator, 36, 48
D
Debye
  frequency, 7, 158
  Hückel screening length, 4, 155
Density matrix, see Statistical operator
Density operators, 238, 239
Dirac
  "bra" and "ket" notation, 18
  interaction representation, 29
Dispersion law, 66, 68, 92, 137, 178, 200, 228, 231
Dressed particle, 5, 55
Dyson equation
  for Green's function, 134, 182
  for the vertex part, 95
Dyson expansion, 29, 115, 119

E
Electron–phonon
  collision integral, 145ff.
  coupling constant, 185
Elementary excitation, see Quasiparticle
Eliashberg equations, 185
Euler-MacLaurin formula, 258
Evolution operator, 25ff., 110

F
Fermi
  anticommutation relations, 49
  Dirac statistics, 34
  distribution function, 91, 210, 211
  edge singularity (FES), 228
  exponent, 228
  field operators, 49
  momentum, 3
  surface, 56, 141, 144, 160ff., 236
Feynman
  diagrams, 31, 32, 71ff., 117, 132, 133, 182, 183
  path integrals, 14ff.
  rules, 31, 71ff., 117, 132, 133, 182, 183ff.
Field operators, 39, 46, 49
  equations of motion, 27, 50, 66, 177, 188, 237, 244
Fluctuation-dissipation theorem, 122ff.
Flux quantum, 23
Fock space, 36, 40, 245
Friedel oscillations, 91, 171, 263
Functional, 14, 174
Functional integrals, see Path integrals

G
Gaussian integral, 12, 18, 20
Gaussian integrals, 173
Generalized force, 120
Generalized susceptibility, 121
Gibbs, 59, 104
Gor'kov
  equations, 185ff.
  Green's function, 181
Gradient expansion, 134, 191
Grand canonical ensemble, 58, 59
Grand potential, 59, 71, 85, 99, 103
  of a superconductor, 179
  and Josephson current, 207
Green's function, 11, 53ff.
  n-particle, 91
  advanced, 12, 67ff., 103ff., 121, 129
  analytic properties, 62ff., 103ff., 185
  and observables, 70
  causal, 57, 103ff., 186
  nonequilibrium, 126ff.
  of two operators, 120
  retarded, 11, 67ff., 103ff., 186
  temperature (Matsubara), 111ff., 187

H
Hamiltonian function, 17
Hartree–Fock approximation, 92
Heaviside step function, 11
Heisenberg
  equations of motion, 27, 44, 62, 111, 150, 177
  representation, 28, 29, 59, 60
Hilbert space, 18, 29, 35
Holomorphic (antiholomorphic), 254
Hooke's law, 118
Huygens principle, 10

I
Impedance, 125
Indistinguishability principle, 34
Instantaneous eigenstates, 30
Interaction representation, 40, 59, 73, 114, 121

J
Jellium model, 2
Johnson–Nyquist noise, 122, 125
Josephson
  coupling energy, 222
  current, 205ff.
Index 279
L
Ladder approximation
  for the polarization operator, 89
  for the two-particle Green's function, 99, 168
Ladder operators, 245
Lagrange function, 14
Landau, 55, 170
  criterion, 195
Landauer formula, 142ff., 153, 271
Left-movers, 236, 238
Linear response theory, 118ff.
Liouville equation, 103
Luttinger liquid, see Tomonaga-Luttinger liquid
  parameter, 247, 248
Luttinger model, see Tomonaga-Luttinger model

M
Mass operator, 87
Matsubara
  conjugate, 110
  field operator, 110
  formalism, 106, 109ff., 187
  frequencies, 112, 170, 192
  Green's function, see Green's function
  summation frequencies, 116
Mean field approximation (MFA), 3, 5, 163, 217
Meromorphic function, 64
Mesoscopic system, 20, 141, 270
Mixed state, 102

P
Pairing
  in superconducting state, 160, 164
  Hamiltonian, see BCS Hamiltonian
  potential, 164, 165, 182, 183
  of field operators, see Contraction
Parity
  effect, 166, 220
  of a permutation, 47
Partial current, 141, 144
Partial summation of Feynman diagrams, 85, 86
Path (functional) integrals, 16, 20, 32, 174, 175
Pauli principle, 35, 46
Perturbation expansion
  in many-body theory, 71, 84, 114
  in one-body theory, 24, 31
Phase
  coherence, 21, 158, 178
  coherence length, 20
  number uncertainty relation, 44ff.
Phonon, 2, 54, 56, 76, 242, 248
  Green's function (propagator), 61, 85, 132, 134, 183
Plasma frequency, 7
Plasmon, 7
Plemelj theorem, 70
Poisson summation formula, 212
Polarization
  insertion, 88
  operator, 88–90, 171, 227, 263, 265
Primary field, 255
S
S-operator (S-matrix), 26, 73, 114, 127
Schrödinger
  equation, 11, 18, 24, 54, 110
  representation, 24, 40
Screening, 3, 4, 6, 89, 90, 227, 263
Second quantization, 33ff.
  representation of operators, 41, 49
Self energy, 84
  matrix, 134
  part, 86
  proper, 86, 143, 183
Sharvin resistance, 141ff., 208, 270
Slater determinant, 35, 46, 230
SND junction, 213, 214
SNS junction, 204ff., 271
Spectral density
  of fluctuations, 122
  of Green's function, 68, 105–107, 186
Spontaneous symmetry breaking, 167
Statistical operator, 57, 58, 102ff.
Sum rule, 107

V
Vacuum, 36, 45, 158
Vector potential, 22, 189ff.
Vertex function, 91–94

W
Watson's lemma, 12, 69
Weierstrass formula, 65
Wick's
  rotation, 110
  theorem, 75ff.
Wiener–Khintchin theorem, 123
Wigner function, 134

Y
Yukawa potential, 4, 90

Z
Zero mode, 240